The Adaptive Communication Environment (ACE): A Tutorial
Hughes Network Systems
TABLE OF CONTENTS

THE ADAPTIVE COMMUNICATION ENVIRONMENT
  The ACE Architecture
    The OS Adaptation Layer
    The C++ Wrappers Layer
    The ACE Framework Components
IPC SAP
  Categories of Classes in IPC SAP
  The Sockets Class Category (ACE_SOCK)
    Using Streams in ACE
    Using Datagrams in ACE
    Using Multicast with ACE
MEMORY MANAGEMENT
  Allocators
    Using the Cached Allocator
  ACE_Malloc
    How ACE_Malloc Works
    Using ACE_Malloc
  Using the Malloc Classes with the Allocator Interface
THREAD MANAGEMENT
  Creating and Canceling Threads
  Synchronization Primitives in ACE
    The ACE Locks Category
      Using the Mutex Classes
      Using the Lock and Lock Adapter for Dynamic Binding
      Using Tokens
    The ACE Guards Category
    The ACE Conditions Category
    Other Miscellaneous Synchronization Classes
      Barriers in ACE
      Atomic Op
  Thread Management with the ACE_Thread_Manager
  Thread-Specific Storage
THE REACTOR
  Reactor Components
  Event Handlers
    Registration of Event Handlers
    Removal and Lifetime Management of Event Handlers
      Implicit Removal of Event Handlers from the Reactor's Internal Dispatch Tables
      Explicit Removal of Event Handlers from the Reactor's Internal Dispatch Tables
  Event Handling with the Reactor
    I/O Event De-multiplexing
  Timers
    ACE_Time_Value
    Setting and Removing Timers
    Using Different Timer Queues
  Handling Signals
  Using Notifications
THE ACCEPTOR AND CONNECTOR
  The Acceptor and Connector Patterns
  The Acceptor Pattern
    Components
    Usage
  The Connector
  Using the Acceptor and Connector Together
  Advanced Sections
    The ACE_Svc_Handler Class
      ACE_Task
      An Architecture: Communicating Tasks
      Creating an ACE_Svc_Handler
      Creating Multiple Threads in the Service Handler
      Using the Message Queue Facilities in the Service Handler
    How the Acceptor and Connector Patterns Work
      End Point or Connection Initialization Phase
      Service Initialization Phase for the Acceptor
      Service Initialization Phase for the Connector
      Service Processing
    Tuning the Acceptor and Connector Policies
      The ACE_Strategy_Connector and ACE_Strategy_Acceptor Classes
        Using the Strategy Acceptor and Connector
        Using the ACE_Cached_Connect_Strategy for Connection Caching
    Using Simple Event Handlers with the Acceptor and Connector Patterns
MESSAGE QUEUES
  Message Blocks
    Constructing Message Blocks
    Inserting and Manipulating Data in a Message Block
  Message Queues in ACE
  Water Marks
  Using Message Queue Iterators
  Dynamic or Real-Time Message Queues
APPENDIX: UTILITY CLASSES
  Address Wrapper Classes
    ACE_INET_Addr
    ACE_UNIX_Addr
  Time Wrapper Classes
    ACE_Time_Value
  Logging with ACE_DEBUG and ACE_ERROR
  Obtaining Command Line Arguments
    ACE_Get_Opt
    ACE_Arg_Shifter
REFERENCE
The Adaptive Communication Environment
An introduction
The Adaptive Communication Environment (ACE) is an object-oriented framework and
toolkit that implements core concurrency and distribution patterns for communication
software. ACE includes several components that help in the development of
communication software and provide greater flexibility, efficiency, reliability, and
portability. Among the things the components in ACE can handle are:
• Memory management.
• Connection establishment and service initialization.
• Concurrency and synchronization.
• Event demultiplexing and handler dispatching.
• Static and dynamic configuration and re-configuration of software.
• Layered protocol construction through a streams-based framework.
• Distributed communication services, such as naming, logging, time
synchronization, event routing, and network locking.
• Thread management.
• Timers.
ACE has a layered architecture, as shown in the figure below. There are three basic layers
in the ACE framework:
• The Operating System (OS) Adaptation Layer
• The C++ Wrappers Layer
• The Frameworks and Patterns Layer
The OS Adaptation Layer
The OS Adaptation Layer is a thin layer of code which sits between the native OS APIs and
ACE. The OS Adaptation Layer therefore shields the higher layers of ACE from the
differences between the native APIs of the underlying platforms.
The OS adaptation layer is also the reason why the ACE framework is available on so
many platforms. A few of the platforms on which ACE is currently available include
real-time operating systems (VxWorks, Chorus, LynxOS, and pSOS), most versions of
UNIX (SunOS 4.x and 5.x; SGI IRIX 5.x and 6.x; HP-UX 9.x, 10.x, and 11.x; DEC
UNIX 3.x and 4.x; AIX 3.x and 4.x; DG/UX; Linux; SCO; UnixWare; NetBSD and
FreeBSD), Win32 (WinNT 3.5.x and 4.x, Win95, and WinCE, using MSVC++ and Borland
C++), and MVS OpenEdition.
IPC SAP
Interprocess Communication Service Access Point Wrappers
Sockets, TLI, STREAM pipes, and FIFOs provide a wide range of interfaces for
accessing both local and remote IPC mechanisms. However, there are many problems
associated with these non-uniform interfaces. Problems such as a lack of type safety and
multiple dimensions of complexity lead to problematic and error-prone programming.
The IPC SAP class category in ACE provides a uniform hierarchic category of classes
which encapsulate these tedious and error-prone interfaces. IPC SAP is designed to improve
the correctness, ease of learning, portability, and reusability of communication software
while maintaining high performance.
[Class diagram: the IPC SAP class hierarchy rooted at ACE_IPC_SAP]
The IPC SAP classes are divided into four major categories based on the different
underlying IPC interfaces they use. This division is illustrated by the
class diagram above. The ACE_IPC_SAP class provides a few functions which are
common to all IPC interfaces. From this class, a different class is derived for each
category of wrapper classes that ACE contains. These classes encapsulate functionality
which is common to that particular interface. Thus, for example, the ACE_SOCK class
contains functions which are common to the entire BSD sockets programming interface.
Underneath each of these four classes lies a whole hierarchy of wrapper classes which
completely wrap the underlying interface and provide highly reusable, modular, safe, and
easy-to-use wrapper classes.
The Sockets Class Category (ACE_SOCK)
The classes in this category all lie under the ACE_SOCK class. This category provides an
interface to the Internet domain and UNIX domain protocol families using the BSD
sockets programming interface. The family of classes in this category can be further
subdivided as follows:
• Dgram classes and Stream classes: The Dgram classes are based on the UDP
datagram protocol and thus provide unreliable, connectionless messaging
functionality. The Stream classes, on the other hand, are based on the TCP protocol
and thus provide connection-oriented messaging.
• Acceptor, Connector, and Stream classes: The Acceptor and Connector
classes are used to passively and actively establish connections, respectively. The
Acceptor class encapsulates the BSD accept() call, and the Connector encapsulates
the BSD connect() call. The Stream classes are used AFTER a connection has been
established to provide bi-directional data flow, and they encapsulate send- and
receive-type functions.
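The raw BSD calls that these classes wrap can be sketched directly. The following is a minimal sketch in plain C++ (ordinary sockets, not ACE) showing the three roles just described: a passive acceptor (bind()/listen()/accept()), an active connector (connect()), and the stream used afterwards for data transfer. The function name and the use of an ephemeral loopback port are choices made for this illustration.

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>
#include <thread>

// Sketch of the roles that ACE_SOCK_Acceptor, ACE_SOCK_Connector, and
// ACE_SOCK_Stream encapsulate. Returns the text that travelled over
// the connection.
std::string demo_accept_connect() {
  // Acceptor role: passive establishment via bind()/listen()/accept().
  int listener = socket(AF_INET, SOCK_STREAM, 0);
  sockaddr_in addr {};
  addr.sin_family = AF_INET;
  addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
  addr.sin_port = 0;                       // let the OS pick a free port
  bind(listener, (sockaddr *)&addr, sizeof addr);
  listen(listener, 1);
  socklen_t len = sizeof addr;
  getsockname(listener, (sockaddr *)&addr, &len); // recover chosen port

  // Connector role: active establishment via connect(), here from a
  // second thread standing in for the remote peer.
  std::thread peer([&addr] {
    int s = socket(AF_INET, SOCK_STREAM, 0);
    connect(s, (sockaddr *)&addr, sizeof addr);
    send(s, "hello", 5, 0);                // Stream role: send data
    close(s);
  });

  int stream = accept(listener, nullptr, nullptr);
  char buf[16] = {};
  ssize_t n = recv(stream, buf, sizeof buf, 0); // Stream role: receive
  peer.join();
  close(stream);
  close(listener);
  return std::string(buf, n > 0 ? (size_t)n : 0);
}
```

The ACE wrapper classes hide exactly this boilerplate (address setup, casts, handle management) behind type-safe interfaces.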
The table below details the classes in this category and their responsibilities:

Class Name              Responsibility
ACE_SOCK_Acceptor       Used for passive connection establishment, based on the
                        BSD listen() and accept() calls.
ACE_SOCK_Connector      Used for active connection establishment, based on the
                        BSD connect() call.
ACE_SOCK_Dgram          Used to provide UDP (User Datagram Protocol) based
                        connectionless messaging services. Encapsulates calls
                        such as sendto() and recvfrom() and provides a simple
                        send() and recv() interface.
ACE_SOCK_IO             Used to provide a connection-oriented messaging service.
                        Encapsulates calls such as send(), recv(), and write().
                        This class is the base class for the ACE_SOCK_Stream and
                        ACE_SOCK_CODgram classes.
ACE_SOCK_Stream         Used to provide a TCP (Transmission Control Protocol)
                        based connection-oriented messaging service. Derives from
                        ACE_SOCK_IO and provides further wrapper methods.
ACE_SOCK_CODgram        Used to provide a connected-datagram abstraction. Derives
                        from ACE_SOCK_IO and includes an open() method which
                        causes a bind() to the specified local address and a
                        connect() to the remote address using UDP.
ACE_SOCK_Dgram_Mcast    Used to provide a datagram-based multicast abstraction.
                        Includes methods for subscribing to a multicast group and
                        then sending and receiving messages.
ACE_SOCK_Dgram_Bcast    Used to provide a datagram-based broadcast abstraction.
In the following sections we will illustrate how the IPC_SAP wrappers can be used
directly to handle interprocess communication. Remember that this is just the tip of the
iceberg in ACE. All the good pattern-oriented tools come in later chapters of this tutorial.
Using Streams in ACE
Example 1
//Server
#include "ace/INET_Addr.h"
#include "ace/SOCK_Acceptor.h"
#include "ace/SOCK_Stream.h"
//Buffer size assumed for this excerpt.
#define SIZE_BUF 128
class Server{
public:
Server (int port):
server_addr_(port),peer_acceptor_(server_addr_){
data_buf_= new char[SIZE_BUF];
}
//The methods that accept a connection on peer_acceptor_ and exchange
//data over new_stream_ are not shown in this excerpt.
private:
char *data_buf_;
ACE_INET_Addr server_addr_;
ACE_INET_Addr client_addr_;
ACE_SOCK_Acceptor peer_acceptor_;
ACE_SOCK_Stream new_stream_;
ACE_HANDLE newhandle;
};
Example 2
//Client
#include "ace/INET_Addr.h"
#include "ace/SOCK_Connector.h"
#include "ace/SOCK_Stream.h"
//Buffer size assumed for this excerpt.
#define SIZE_BUF 128
class Client{
public:
Client(char *hostname, int port):remote_addr_(hostname){
remote_addr_.set_port_number(port);
data_buf_=new char[SIZE_BUF];
}
//The methods that connect to the server through connector_ and
//exchange data over client_stream_ are not shown in this excerpt.
private:
ACE_SOCK_Stream client_stream_;
ACE_INET_Addr remote_addr_;
ACE_SOCK_Connector connector_;
char *data_buf_;
};
Using Datagrams in ACE
Example 3
//Server
#include "ace/OS.h"
#include "ace/SOCK_Dgram.h"
#include "ace/INET_Addr.h"
//Sizes assumed for this example; any consistent values will do.
#define DATA_BUFFER_SIZE 1024
#define SIZE_DATA 28
class Server{
public:
Server(int local_port)
:local_addr_(local_port),local_(local_addr_){
data_buf = new char[DATA_BUFFER_SIZE];
}
//Expect data to arrive from the remote machine. Accept it and display
//it. After receiving data, immediately send some data back to the
//remote.
int accept_data(){
while(local_.recv(data_buf,SIZE_DATA,remote_addr_)!=-1){
ACE_DEBUG((LM_DEBUG, "Data received from remote %s was %s \n"
,remote_addr_.get_host_name(), data_buf));
ACE_OS::sleep(1);
if(send_data()==-1) break;
}
return -1;
}
//Send a reply back to the remote client. (This method body is a
//reconstruction; the text below describes what it does.)
int send_data(){
ACE_OS::sprintf(data_buf,"Server says hello");
return local_.send(data_buf,ACE_OS::strlen(data_buf),remote_addr_);
}
private:
char *data_buf;
ACE_INET_Addr remote_addr_;
ACE_INET_Addr local_addr_;
ACE_SOCK_Dgram local_;
};
The above code is for a simple server which expects a client application to send it a
datagram on a known port, with a fixed amount of data in it. On reception of
this data, the server proceeds to send a reply back to the client that originally sent it.
The single class Server contains an ACE_SOCK_Dgram named local_ as a private
member, which it uses both to receive and to send data. The Server instantiates local_ in
its constructor with a known ACE_INET_Addr (i.e., the local host with a known port) so
that the client can locate it and send messages to it.
The class contains two methods: accept_data(), which is used to receive data from the
client (using the local_ wrapper's recv() call), and send_data() (using the local_
wrapper's send() call), which is used to send data to the remote client. Notice that the
underlying calls for both the send() and recv() of the local_ wrapper class wrap the BSD
sendto() and recvfrom() calls and have a similar interface.
The main function just instantiates an object of type Server and calls the accept_data()
method on it, which waits for data from the client; when it gets the data it is expecting,
it calls send_data() to send a reply message back to the client. This goes on until
the client dies.
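The sendto()/recvfrom() pair that the local_ wrapper's send() and recv() sit on can be sketched without ACE. The following is a minimal sketch in plain C++ over loopback; the function name and buffer sizes are choices made for this illustration:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <string>

// Sketch of the underlying BSD datagram calls: one socket plays the
// server bound to a known port, the other plays the client. Returns
// the text the server received.
std::string demo_datagram_roundtrip() {
  int server = socket(AF_INET, SOCK_DGRAM, 0);
  sockaddr_in srv {};
  srv.sin_family = AF_INET;
  srv.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
  srv.sin_port = 0;                        // OS picks the "known" port
  bind(server, (sockaddr *)&srv, sizeof srv);
  socklen_t len = sizeof srv;
  getsockname(server, (sockaddr *)&srv, &len);

  int client = socket(AF_INET, SOCK_DGRAM, 0);
  // Client side: ACE_SOCK_Dgram::send() wraps this sendto() call.
  sendto(client, "ping", 4, 0, (sockaddr *)&srv, sizeof srv);

  // Server side: ACE_SOCK_Dgram::recv() wraps this recvfrom() call and
  // also hands back the sender's address, as remote_addr_ does above.
  char buf[16] = {};
  sockaddr_in from {};
  socklen_t fromlen = sizeof from;
  ssize_t n = recvfrom(server, buf, sizeof buf, 0,
                       (sockaddr *)&from, &fromlen);
  close(client);
  close(server);
  return std::string(buf, n > 0 ? (size_t)n : 0);
}
```

Note how the wrapper's recv() returning the peer's address into an ACE_INET_Addr corresponds directly to recvfrom() filling in the from address.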
The corresponding client code is very similar and follows here.
Example 4
//Client
#include "ace/OS.h"
#include "ace/SOCK_Dgram.h"
#include "ace/INET_Addr.h"
//Sizes assumed for this example; any consistent values will do.
#define DATA_BUFFER_SIZE 1024
#define SIZE_DATA 28
class Client{
public:
//This constructor is a reconstruction: it records the server's address
//and opens the datagram socket on any local port.
Client(char *remote_host, int server_port):
remote_addr_(server_port,remote_host),local_addr_((u_short)0),
local_(local_addr_){
data_buf = new char[DATA_BUFFER_SIZE];
}
//Accept data from the remote host using the datagram component local_
int accept_data(){
if(local_.recv(data_buf,SIZE_DATA,remote_addr_)!=-1){
ACE_DEBUG((LM_DEBUG, "Data received from remote server %s was: %s\n",
remote_addr_.get_host_name(), data_buf));
return 0;
}
else
return -1;
}
//Send data to the remote. Once data has been sent wait for a reply from
//the server.
int send_data(){
ACE_DEBUG((LM_DEBUG,"Preparing to send data to server %s:%d\n",
remote_addr_.get_host_name(),remote_addr_.get_port_number()));
ACE_OS::sprintf(data_buf,"Client says hello");
while(local_.send
(data_buf,ACE_OS::strlen(data_buf),remote_addr_)!=-1){
ACE_OS::sleep(1);
if(accept_data()==-1)
break;
}
return -1;
}
private:
char *data_buf;
ACE_INET_Addr remote_addr_;
ACE_INET_Addr local_addr_;
ACE_SOCK_Dgram local_;
};
Using Multicast with ACE
Example 5
#include "ace/SOCK_Dgram_Mcast.h"
#include "ace/OS.h"
#define DEFAULT_MULTICAST_ADDR "224.9.9.2"
#define TIMEOUT 5
class Receiver_Multicast{
public:
Receiver_Multicast(int port):
mcast_addr_(port,DEFAULT_MULTICAST_ADDR),remote_addr_((u_short)0){
// Subscribe to the multicast address.
if (mcast_dgram_.subscribe (mcast_addr_) == -1){
ACE_DEBUG((LM_DEBUG,"Error in subscribing to multicast address\n"));
exit(-1);
}
}
~Receiver_Multicast(){
if(mcast_dgram_.unsubscribe()==-1)
ACE_DEBUG((LM_ERROR,"Error in unsubscribing from Mcast group\n"));
}
private:
ACE_INET_Addr mcast_addr_;
ACE_INET_Addr remote_addr_;
ACE_SOCK_Dgram_Mcast mcast_dgram_;
int mcast_info;
};
The next example shows how an application can send datagram messages to the multicast
address or group using the simple ACE_SOCK_Dgram wrapper class.
Example 6
#define DEFAULT_MULTICAST_ADDR "224.9.9.2"
#define TIMEOUT 5
#include "ace/SOCK_Dgram_Mcast.h"
#include "ace/OS.h"
class Sender_Multicast{
public:
//The constructor body and the method signature below are
//reconstructions; the original example elides them.
Sender_Multicast(int port):
multicast_addr_(port,DEFAULT_MULTICAST_ADDR),
local_addr_((u_short)0),dgram_(local_addr_),mcast_info(0){
}
//Send a message to the multicast group.
int send_to_multicast_group(){
// Send multicast
if(dgram_.send
(&mcast_info, sizeof (mcast_info), multicast_addr_)==-1)
return -1;
ACE_DEBUG ((LM_DEBUG,
"%s; Sent multicast to group. Number sent is %d.\n",
__FILE__,
mcast_info));
return 0;
}
private:
ACE_INET_Addr multicast_addr_;
ACE_INET_Addr local_addr_;
ACE_SOCK_Dgram dgram_;
int mcast_info;
};
In this example the client application creates a datagram component which is used to send
data to the multicast group.
Memory Management
An introduction to Memory Management in ACE
ACE contains two sets of classes which are used to manage local and shared memory.
The first set consists of classes which are based on ACE_Allocator and use dynamic
binding to provide flexibility and extensibility. These can only be used for
local dynamic memory allocation. The second set of classes is based on the
ACE_Malloc template class. This set uses templates to provide flexibility in the memory
allocation mechanism. The classes in this set not only include classes for local dynamic
memory management but also classes to manage shared memory between
processes using the operating system's (OS) shared memory interface.
The tradeoff between these two sets is between efficiency and flexibility. The
ACE_Allocator classes are more flexible, as the actual allocation object, and thus the
mechanism that is used, can be changed at run time using dynamic binding. This
flexibility, however, does not come without a cost: the indirection of virtual function
calls makes this alternative the more expensive option. The ACE_Malloc classes are
configured at compile time with the memory allocation mechanism that they use. Thus,
although ACE_Malloc is more efficient, it is not as flexible as ACE_Allocator.
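The two designs can be sketched in a few lines of standard C++. All the names below (AllocatorIface, NewAllocator, MallocT, HeapPool) are illustrative, not ACE's:

```cpp
#include <cstddef>

// ACE_Allocator-style design: a virtual interface, so the allocation
// mechanism can be swapped at run time at the cost of an indirect call.
struct AllocatorIface {
  virtual void *malloc(std::size_t n) = 0;
  virtual void free(void *p) = 0;
  virtual ~AllocatorIface() {}
};
struct NewAllocator : AllocatorIface {
  void *malloc(std::size_t n) override { return ::operator new(n); }
  void free(void *p) override { ::operator delete(p); }
};

// ACE_Malloc-style design: the mechanism is a template parameter fixed
// at compile time, so calls resolve statically and can be inlined.
template <class Pool> struct MallocT {
  Pool pool_;
  void *malloc(std::size_t n) { return pool_.acquire(n); }
  void free(void *p) { pool_.release(p); }
};
struct HeapPool {
  void *acquire(std::size_t n) { return ::operator new(n); }
  void release(void *p) { ::operator delete(p); }
};
```

Code holding an AllocatorIface* can be handed a different allocator while running; a MallocT<HeapPool> would have to be recompiled to use a different pool, which is exactly the efficiency/flexibility tradeoff described above.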
Allocators
Allocator               Description
ACE_Allocator           Interface class for the set of allocator classes in ACE.
                        These classes use inheritance and dynamic binding to
                        provide flexibility.
ACE_Static_Allocator    This allocator manages a fixed-size block of memory. Every
                        time a request is received for more memory, it moves an
                        internal pointer and returns the chunk. This allocator
                        assumes that memory, once allocated, will never be freed.
ACE_Cached_Allocator    This allocator preallocates a pool of memory which contains
                        a specified number of chunks of a specified size. These
                        chunks are maintained on an internal free list and returned
                        when a memory request (malloc()) is received. When
                        applications call free(), the chunk is returned back to the
                        internal free list, not to the OS.
ACE_New_Allocator       An allocator which provides a wrapper over the C++ new
                        and delete operators, i.e., it internally uses the new and
                        delete operators to satisfy dynamic memory requests.
[Figure: the ACE Cached Allocator maintains its fixed-size chunks on an internal free list]
The Cached Allocator is illustrated in the diagram above. The allocator maintains several
"chunks" of memory on a free list. These chunks may be of any complex data type. When
the memory is used for general-purpose allocation in a real-time system, internal
fragmentation of the fixed-size chunks is a concern.
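The free-list mechanism itself can be sketched in standard C++. This is a conceptual model of the behavior just described, not ACE's implementation; the class name CachedPool is invented for the sketch:

```cpp
#include <cstddef>
#include <vector>

// Minimal free-list cache: n_chunks fixed-size chunks are preallocated
// once in the constructor; malloc() pops a chunk from the free list and
// free() pushes it back. The OS is only touched at construction and
// destruction, which is why the process's memory footprint stays flat.
class CachedPool {
  std::vector<char*> free_list_;   // chunks currently available
  std::vector<char*> storage_;     // all chunks, for cleanup
  std::size_t chunk_size_;
public:
  CachedPool(std::size_t n_chunks, std::size_t chunk_size)
    : chunk_size_(chunk_size) {
    for (std::size_t i = 0; i < n_chunks; ++i) {
      storage_.push_back(new char[chunk_size]);
      free_list_.push_back(storage_.back());
    }
  }
  void *malloc() {                 // returns nullptr when exhausted
    if (free_list_.empty()) return nullptr;
    char *chunk = free_list_.back();
    free_list_.pop_back();
    return chunk;
  }
  void free(void *p) { free_list_.push_back(static_cast<char*>(p)); }
  ~CachedPool() { for (char *c : storage_) delete[] c; }
};
```

Once the cache is exhausted this sketch simply fails the request rather than growing, which matches the fixed-size behavior of ACE_Cached_Allocator.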
The following example illustrates how the ACE_Cached_Allocator can be used to pre-
allocate memory and then handle requests for memory.
Example 1
#include "ace/Malloc.h"
#include "ace/Malloc_T.h"
#include "ace/Synch.h"
//A chunk of size 1K is created
typedef char MEMORY_BLOCK[1024];
//The allocator type used below. (The ACE_Null_Mutex lock is an
//assumption; any ACE lock type could be used here.)
typedef ACE_Cached_Allocator<MEMORY_BLOCK,ACE_Null_Mutex> Allocator;
class MessageManager{
public:
//The constructor is passed the number of chunks that the allocator
//should pre-allocate and maintain on its free list.
MessageManager(int n_blocks):
allocator_(n_blocks),message_count_(0){}
//Allocate a chunk from the allocator's free list and copy the message
//into it. (This method is a reconstruction of the elided original.)
void allocate_msg(const char *msg){
void *ptr = allocator_.malloc(sizeof(MEMORY_BLOCK));
if(ptr!=0){
ACE_OS::strcpy((char*)ptr,msg);
mesg_array_[message_count_++]=(char*)ptr;
}
}
//Free all memory allocated. This will cause the chunks to be returned
//to the allocator's internal free list and NOT to the OS.
void free_all_msg(){
for(int i=0;i<message_count_;i++)
allocator_.free(mesg_array_[i]);
message_count_=0;
}
void display_all_msg(){
for(int i=0;i<message_count_;i++)
ACE_OS::printf("%s\n",mesg_array_[i]);
}
private:
char *mesg_array_[20];
Allocator allocator_;
int message_count_;
};
int main(int argc, char *argv[]){
if(argc<2){
ACE_OS::printf("Usage: egXX <Number of blocks>\n");
exit(1);
}
int n_blocks = ACE_OS::atoi(argv[1]);
//Use the MessageManager class to assign messages and free them.
//Run this in your debug environment and you will notice that the
//amount of memory your program uses after MessageManager has been
//instantiated remains the same. That means the Cached Allocator
//is managing all the memory after its one initial preallocation.
MessageManager mm(n_blocks);
//Do forever.
while(1){
//allocate the messages somewhere
for(int i=0; i<n_blocks;i++)
mm.allocate_msg("Hi there");
//show the messages
mm.display_all_msg();
//return all the messages to the allocator's free list
mm.free_all_msg();
}
}
This simple example contains a message manager class which instantiates a cached
allocator. This allocator is then used to allocate, display, and free messages forever. The
memory usage of the application, however, does not change. (You can verify this with
the debugging tool of your choice.)
ACE_Malloc
As mentioned earlier, the Malloc set of classes uses the template class ACE_Malloc to
provide memory management. The ACE_Malloc template takes two arguments, a
memory pool class and a lock class for the pool, which together give us our allocator
class, as shown in the figure below.
[Figure: ACE_Malloc is parameterized by a memory pool class and a lock class. The
client obtains blocks from ACE_Malloc, which carves them out of larger chunks
acquired from the underlying memory pool, which in turn obtains memory from the OS.]
When an application requests a block of memory, the ACE_Malloc class will check
whether there is enough space to allocate the block from one of the chunks it has already
acquired from the memory pool. If it cannot find a chunk with enough space on it, it asks
the underlying memory pool to return a larger chunk so that it can satisfy the
application's request for a block of memory. When an application issues a free() call,
ACE_Malloc will not return the memory that was freed back to the memory pool, but
will maintain it on its free list. When subsequent requests for memory are received,
ACE_Malloc will use this free list to search for empty blocks that can be returned. Thus,
when the ACE_Malloc class is used, the amount of memory allocated from the OS will
only go up, and not down, if simple malloc() and free() calls are the only calls issued.
The ACE_Malloc class also includes a remove() method, which issues a request to the
memory pool to return the memory to the OS. This method also returns the lock to the OS.
Using ACE_Malloc
Using the ACE_Malloc class is simple. First, instantiate ACE_Malloc with the memory
pool and locking mechanism of your choice to create an allocator class. This allocator
class is subsequently used to instantiate an object, which is the allocator your application
will use. When you instantiate an allocator object, the first parameter to the constructor
is a string which is the "name" of the underlying memory pool you want the allocator
object to use. It is VERY important that the correct name is passed in to the constructor,
especially if you are using shared memory; otherwise the allocator will create a new
memory pool for you. This, of course, is not what you want if you are using a shared
memory pool, as then you get no sharing.
To facilitate the sharing of the underlying memory pool (again, if you are using shared
memory this is important), the ACE_Malloc class also includes a map-type interface.
Each block of memory that is allocated can be given a name, and can thus easily be
found by another process looking through the memory pool. This interface includes
bind() and find() calls. The bind() call is used to give names to the blocks that are
returned by ACE_Malloc's malloc() call. The find() call, as you probably expect, is then
used to find the memory previously associated with a name.
The different memory pools available are listed in the table below:

Name of Pool                Macro                        Description
ACE_MMAP_Memory_Pool        ACE_MMAP_MEMORY_POOL         Uses <mmap(2)> to create the pool,
                                                         so memory can be shared between
                                                         processes. Memory is written through
                                                         to the backing store on every update.
ACE_Lite_MMAP_Memory_Pool   ACE_LITE_MMAP_MEMORY_POOL    Uses <mmap(2)> to create the pool.
                                                         Unlike the previous pool, this one
                                                         does not update the backing store.
                                                         The tradeoff is lowered reliability.
ACE_Sbrk_Memory_Pool        ACE_SBRK_MEMORY_POOL         Uses the <sbrk(2)> call to create
                                                         the pool.
ACE_Shared_Memory_Pool      ACE_SHARED_MEMORY_POOL       Uses the System V <shmget(2)> call
                                                         to create the memory pool, so memory
                                                         can be shared between processes.
ACE_Local_Memory_Pool       ACE_LOCAL_MEMORY_POOL        Creates a local memory pool based on
                                                         the C++ new and delete operators.
                                                         This pool cannot be shared between
                                                         processes.
The following example uses the ACE_Malloc class with a shared memory pool (the
example shows it using ACE_SHARED_MEMORY_POOL, but any memory pool from
the table above that supports shared memory may be used).
It creates a server process which creates a memory pool and then allocates memory from
the pool. The server then creates messages it wants the client process to “pick up”, using
the memory it allocated from the pool. Next, it binds names to these messages so that the
client can use the corresponding find operation to locate the messages the server inserted
into the pool.
The client starts up and creates its own allocator, but uses the SAME memory pool. This
is done by passing the same name to the constructor for the allocator. It then uses the
find() call to retrieve the messages the server left for it.
Example 2
#include "ace/Shared_Memory_MM.h"
#include "ace/Malloc.h"
#include "ace/Malloc_T.h"
#define DATA_SIZE 100
#define MESSAGE1 "Hiya over there client process"
#define MESSAGE2 "Did you hear me the first time?"
LPCTSTR poolname="My_Pool";
typedef ACE_Malloc<ACE_SHARED_MEMORY_POOL,ACE_Null_Mutex>
Malloc_Allocator;
static void
server (void){
//Create the memory allocator passing it the shared memory
//pool that you want to use
Malloc_Allocator shm_allocator(poolname);
//Set the Server to go to sleep for a while so that the client has
//a chance to do its stuff
ACE_DEBUG((LM_DEBUG,
"Server done writing.. going to sleep zzz..\n\n\n"));
ACE_OS::sleep(2);
static void
client(void){
//Create a memory allocator. Be sure that the client passes
// in the "right" name here so that both the client and the
//Lets get that first message. Notice that the find is looking up the
//memory based on the "name" that was bound to it by the server.
void *Message1;
if(shm_allocator.find("FirstMessage",Message1)==-1){
ACE_ERROR((LM_ERROR,
"Client: Problem cant find data that server has sent\n"));
ACE_OS::exit(1);
}
ACE_OS::printf(">>%s\n",(char*) Message1);
ACE_OS::fflush(stdout);
Most container classes in ACE allow an Allocator object to be passed in to manage the
memory used in the container. Since certain memory allocation schemes are only
Thread Management
Synchronization and thread management mechanisms in ACE
ACE contains many different classes for creating and managing multi-threaded programs.
In this chapter we will cover a few of the mechanisms that ACE provides for thread
management. We begin with the simple thread wrapper classes, which contain minimal
management functionality. As the chapter progresses, however, we move on to the more
powerful management mechanisms available in ACE_Thread_Manager. ACE also
contains a very comprehensive set of classes which deal with synchronization of threads.
These classes will also be covered here.
There are several different interfaces available for thread management on different
platforms. These include the POSIX pthreads interface, Solaris threads, Win32 threads,
VxWorks threads, pSOS threads, etc. Each of these interfaces provides the same or
similar functionality, but with APIs that are vastly different. This leads to difficult,
tedious and error-prone programming, as the application programmer must become
familiar with several interfaces to write on different platforms. Furthermore, such
programs, once written, are inflexible.
ACE_Thread provides a simple wrapper around the OS thread calls which deal with
issues such as creation, suspension, cancellation and deletion of threads. This gives the
application programmer a simple and easy-to-use interface which is portable across
different threading APIs. ACE_Thread is a very thin wrapper with minimal overhead.
Most methods are inlined and thus are equivalent to a direct call to the underlying OS-
specific threads interface. All methods in ACE_Thread are static and the class is not
meant to be instantiated.
The following example illustrates how the ACE_Thread wrapper class can be used to
create, yield, and join with threads.
static void*
worker(void *arg){
ACE_UNUSED_ARG(arg);
ACE_DEBUG((LM_DEBUG,"Thread (%t) Created to do some work"));
::number++;
ACE_DEBUG((LM_DEBUG," and number is %d\n",::number));
//Let the other guy go while I fall asleep for a random period
//of time
ACE_Thread::yield();
ACE_OS::sleep(ACE_OS::rand()%2);
//Exiting now
ACE_DEBUG((LM_DEBUG,
"\t\t Thread (%t) Done! \t The number is now: %d\n",number));
ACE_OS::fflush(stdout);
return 0;
}
int n_threads=ACE_OS::atoi(argv[1]);
//Setup the random number generator
ACE_OS::srand(::seed);
//Wait for all the threads to exit before you let the main fall through
//and have the process exit. This way of using join is non-portable
//and may not work on a system using pthreads.
int check_count=0;
while(ACE_Thread::join(NULL,NULL,NULL)==0) check_count++;
ACE_ASSERT(check_count==n_threads);
}
This program is a simple example in which n_threads worker threads are first created to
perform the worker() function defined in the program. The threads are created by using
the ACE_Thread::spawn() call, passing in the name of the function that each new thread
is to execute.
There are several facts worth noting in this example. First and foremost, there is no
management functionality available that we can simply ask to take care of issuing the
join or remembering the IDs of the threads that the application has spawned. Second, no
synchronization primitives were used in the program; in this case they were not
necessary, as all the threads were just doing simple addition, but in real-life code locks
would be required in the worker thread.
ACE has several classes which can be used for synchronization purposes. These classes
can be divided into the following categories
• The ACE Locks Class Category
• The ACE Guards Class Category
• The ACE Conditions Class Category
• Miscellaneous ACE Synchronization classes
The classes described in the table above all support the same interface. However, these
classes are NOT related to each other in any inheritance hierarchy. In ACE, locks are
usually parameterized using templates, as the overhead of virtual function calls is in most
cases unacceptable. This usage of templates allows the programmer a certain degree of
flexibility in choosing the right type of lock. Nevertheless, in some places the
programmer may need to use dynamic binding and substitution, and for these cases ACE
includes the ACE_Lock and ACE_Lock_Adapter classes.
Using the Mutex classes
The following example illustrates usage of the ACE_Thread_Mutex class. Notice that
ACE_Mutex could easily replace the use of ACE_Thread_Mutex class here as they both
have the same interface.
Example 2
#include "ace/OS.h"
#include "ace/Synch.h"
//Spawn the worker threads
ACE_Thread::spawn_n
 (ACE_OS::atoi(argv[1]),(ACE_THR_FUNC)worker,(void*)&arg);
//Wait for the worker threads to exit
while(ACE_Thread::join(NULL,NULL,NULL)==0);
}
In the above example the ACE_Thread wrapper class is used to spawn off multiple
threads to perform the worker() function, as in the previous example. Each thread is
passed an Arg object which contains the number of iterations it is supposed to perform
and the mutex that it is going to use.
In this example each thread, immediately on startup, enters a for loop. Once inside the
loop, the thread enters a critical section, which it protects using an ACE_Thread_Mutex
lock object. This object was passed in as an argument to the worker thread from the main
thread. Control of the critical section is obtained by issuing an acquire() call on the
ACE_Thread_Mutex object, and is given back by issuing the release() call.
Using the Lock and Lock Adapter for dynamic binding
The following example illustrates how the ACE_Lock class and ACE_Lock_Adapter
provide the application programmer with the facility of using dynamic binding and
substitution with the locking mechanisms.
Example 3
#include "ace/OS.h"
#include "ace/Synch.h"
#include "ace/Synch_T.h"
//Decide which lock you want to use at run time. Possible due to
//ACE_Lock.
ACE_Lock *lock;
if(ACE_OS::strcmp(argv[3],"Thread")==0)
 lock=new ACE_Lock_Adapter<ACE_Thread_Mutex>;
else
 lock=new ACE_Lock_Adapter<ACE_Mutex>;
In this example the only thing that has changed is that the ACE_Lock class is used with
ACE_Lock_Adapter to provide dynamic binding. The decision as to whether the
underlying locking mechanism will be ACE_Thread_Mutex or ACE_Mutex is made from
command line arguments while the program is running. The advantage of using dynamic
binding, again, is that the actual locking mechanism can be substituted at run time. The
disadvantage is that each call to the lock now entails an extra level of indirection through
the virtual function table.
As mentioned in the table, the ACE_Token class provides a “recursive mutex” which
can be reacquired multiple times by the same thread that initially acquired it. This
means that if a thread calls a recursive routine which acquires the lock again and again,
the thread won't block on itself. However, if any other thread tries to acquire the lock, it
will be blocked. Another important fact about the ACE_Token type of lock is that once it
releases the lock, it gives it to the next thread that asked for it and is currently blocked.
This is not how locks are usually released. In the usual case, once the lock is released,
any of the threads that were previously blocked on it, or any thread that is currently
trying to acquire it, may take the lock.
When I ran the previous example (example 3) on SunOS 5.x, I found that the thread that
released the lock was the one that managed to reacquire it too (in around 90% of the
cases). However, if you run the example with the ACE_Token class as the locking
mechanism, each thread has its turn and then gives the lock up to the next thread in line.
Example 4
#include "ace/OS.h"
#include "ace/Token.h"
while(ACE_Thread::join(NULL,NULL,NULL)==0);
Name              Description
ACE_Guard         Automatically calls acquire() and release() on the underlying
                  lock. Can be passed any of the locks in the ACE locks class
                  category as its template parameter.
ACE_Read_Guard    Automatically calls acquire_read() and release() on the
                  underlying lock.
ACE_Write_Guard   Automatically calls acquire_write() and release() on the
                  underlying lock.
In the above example a guard is used to manage the critical section in the worker thread.
The guard object is created from the ACE_Guard template class by passing it the type of
lock it is using as its template parameter, and the actual lock object it is to use through its
constructor. The lock is then automatically acquired internally by ACE_Guard, and the
section in curly braces is thus a protected critical section. Once it goes out of scope, the
guard object is automatically destroyed, which causes the lock to be released.
Example 6
#include "ace/Thread.h"
#include "ace/OS.h"
#include "ace/Synch_T.h"
#include "ace/Synch.h"
class Args{
public:
Args(ACE_Condition<ACE_Thread_Mutex> *cond, int threads):
cond_(cond), threads_(threads){}
ACE_Condition<ACE_Thread_Mutex> *cond_;
int threads_;
};
static void*
worker(void *arguments){
Args *arg= (Args*)arguments;
ACE_DEBUG((LM_DEBUG,"Thread (%t) Created to do some work\n"));
::number++;
//Work
ACE_OS::sleep(ACE_OS::rand()%2);
int n_threads=ACE_OS::atoi(argv[1]);
//Wait for signal indicating that all threads are done and program
//can exit
mutex.acquire();
if(number!=n_threads)
cond.wait();
ACE_DEBUG((LM_DEBUG,"(%t) Main Thread got signal. Program
exiting..\n"));
mutex.release();
ACE_OS::exit(0);
}
Notice that before evaluating the condition, a mutex is acquired by the main thread. The
condition is then evaluated. If the condition (number != n_threads) still holds, the main
thread calls wait() on the condition variable. The condition variable in turn releases the
mutex automatically and causes the main thread to fall asleep. Condition variables are
always used in conjunction with a mutex in this fashion.
Barriers are well named: the name pretty much describes what they do. A thread which
uses a barrier waits, at a certain point in its execution path, for a defined number of other
threads to reach that same point before it and all those other threads continue with their
execution. That is, the threads block one by one, each waiting for the others to reach the
barrier. Once all threads reach the barrier point in their execution paths, they all restart
together.
In ACE, the barrier is implemented in the ACE_Barrier class. The barrier object is
instantiated and passed, through its constructor, the number of threads it is going to be
waiting on. Each thread issues a wait() call on the barrier object once it reaches the point
in its execution after which it wishes to wait for the other threads before they all continue
together. When the barrier has received the appropriate number of wait() calls from the
appropriate number of threads, it wakes up all the blocked threads together and they all
continue at the same time.
The following example illustrates how barriers can be used with ACE.
Example 7
#include "ace/Thread.h"
#include "ace/OS.h"
#include "ace/Synch_T.h"
#include "ace/Synch.h"
class Args{
public:
Args(ACE_Barrier *barrier):
barrier_(barrier){}
ACE_Barrier *barrier_;
};
static void*
worker(void *arguments){
Args *arg= (Args*)arguments;
//Work
ACE_OS::sleep(ACE_OS::rand()%2);
//Exiting now
ACE_DEBUG((LM_DEBUG,
"\tThread (%t) Done! \n\tThe number is now: %d\n",number));
//Let the barrier know we are done.
arg->barrier_->wait();
ACE_DEBUG((LM_DEBUG,"Thread (%t) is exiting \n"));
return 0;
}
int n_threads=ACE_OS::atoi(argv[1]);
ACE_DEBUG((LM_DEBUG,"Preparing to spawn %d threads",n_threads));
//Setup the random number generator
ACE_OS::srand(::seed);
//Wait for all the other threads to let the main thread
//know that they are done, using the barrier
barrier.wait();
ACE_DEBUG((LM_DEBUG,"(%t)Other threads are finished. Program
exiting..\n"));
ACE_OS::sleep(2);
}
In this example a barrier is created and then passed to the worker threads. Each worker
thread calls wait() on the barrier just before exiting causing all threads to be blocked after
they have completed their work and right before they exit. The main thread also blocks
just before exiting. Once all threads (including main) have reached the end of their
execution they all continue and then exit together.
ACE_Atomic_Op<ACE_Thread_Mutex,int> foo;
static void*
worker(void *arg){
ACE_UNUSED_ARG(arg);
foo=5;
ACE_ASSERT (foo == 5);
++foo;
ACE_ASSERT (foo == 6);
--foo;
ACE_ASSERT (foo == 5);
foo += 10;
ACE_ASSERT (foo == 15);
foo -= 10;
ACE_ASSERT (foo == 5);
foo = 5L;
ACE_ASSERT (foo == 5);
return 0;
}
int n_threads=ACE_OS::atoi(argv[1]);
ACE_DEBUG((LM_DEBUG,"Preparing to spawn %d threads\n",n_threads));
//Wait for all the other threads to let the main thread know
//when it is time to exit
while(ACE_Thread::join(NULL,NULL,NULL)==0);
ACE_DEBUG((LM_DEBUG,"(%t)Other threads are finished. Program
exiting..\n"));
}
In all the previous examples we have been using the ACE_Thread wrapper class to create
and destroy threads. However, the functionality of this wrapper class is somewhat
limited. The ACE_Thread_Manager provides a superset of the facilities that are available
in ACE_Thread. In particular, it adds management functionality to make it easier to start,
cancel, suspend and resume groups of related threads. It provides for creating and
destroying groups of threads and tasks (ACE_Task is a higher-level construct than
threads and can be used in ACE for doing multi-threaded programming; we will talk
more about tasks later). It also provides functionality such as sending signals to a group
of threads, or waiting on a group of threads instead of calling join in the non-portable
fashion that we used in the previous examples.
The following example illustrates how ACE_Thread_Manager can be used to wait for
the completion of a group of threads.
Example 9
#include "ace/Thread_Manager.h"
#include "ace/OS.h"
#include "ace/Get_Opt.h"
ACE_Get_Opt get_opt(argc,argv,"a:b:");
int c;
while( (c=get_opt())!=EOF){
 switch(c){
  case 'a':
   num_task_1=ACE_OS::atoi(get_opt.optarg);
   break;
  case 'b':
   num_task_2=ACE_OS::atoi(get_opt.optarg);
   break;
  default:
   ACE_ERROR((LM_ERROR,"Unknown option\n"));
   ACE_OS::exit(1);
 }
}
static void *
worker (int iterations)
{
for (int i = 0; i < iterations; i++){
if ((i % 1000) == 0){
ACE_DEBUG ((LM_DEBUG,
"(%t) checking cancellation before iteration %d!\n",
i));
if (ACE_Thread_Manager::instance ()->testcancel
(ACE_Thread::self ()) != 0){
ACE_DEBUG ((LM_DEBUG,
"(%t) has been cancelled before iteration %d!\n",i));
break;
}
}
}
return 0;
}
// Wait for 1 second and then suspend every thread in the group.
ACE_OS::sleep (1);
ACE_DEBUG ((LM_DEBUG, "(%t) suspending group\n"));
if (thr_mgr->suspend_grp (grp_id) == -1)
ACE_ERROR ((LM_DEBUG, "(%t) %p\n", "Could not suspend_grp"));
// Wait for 1 more second and then cancel all the threads.
ACE_OS::sleep (ACE_Time_Value (1));
ACE_DEBUG ((LM_DEBUG, "(%t) cancelling group\n"));
if (thr_mgr->cancel_grp (grp_id) == -1)
ACE_ERROR ((LM_DEBUG, "(%t) %p\n", "cancel_grp"));
// Perform a barrier wait until all the threads have shut down.
thr_mgr->wait ();
return 0;
}
In this example, n_threads threads are created to execute the worker function. Each
thread loops in the worker function for n_iterations. While these threads loop in the
worker function, the main thread will suspend() them, then resume() them, and lastly
cancel them. Each thread in worker will check for cancellation using the testcancel()
method of ACE_Thread_Manager.
When a single-threaded program wishes to create a variable whose value persists across
multiple function calls, it allocates that data statically or globally. When such a program
is made multi-threaded, this global or static data is shared by all the threads. This may or
may not be desirable. For example, a pseudo-random number generator may need a
single static or global integer seed variable that is shared and updated by all the threads.
In other cases, however, the global or static data element may need to be different for
each thread that executes. For example, consider a multi-threaded GUI application in
which each window runs in a separate thread and has an input port from which it
receives event input. Such an input port must persist across function calls in the window,
but must also be window-specific, i.e., private. To achieve this, Thread Specific Storage
(TSS) is used. A structure such as the input port can be put into thread-specific storage:
it is logically accessed as if it were static or global, but is actually private to the thread.
Traditionally, thread-specific storage was achieved using a confusing, low-level
operating system API. In ACE, TSS is achieved using the ACE_TSS template class. The
class which is to be thread-specific is passed into the ACE_TSS template, and all of its
public methods may then be invoked using the C++ -> operator.
The following example illustrates how simple it is to use thread specific storage in ACE.
#include "ace/OS.h"
#include "ace/Synch_T.h"
#include "ace/Thread_Manager.h"
class DataType{
public:
DataType():data(0){}
void increment(){ data++;}
void set(int new_data){ data=new_data;}
void decrement(){ data--;}
int get(){return data;}
private:
int data;
};
ACE_TSS<DataType> data;
In the above example the class DataType was created in thread specific storage.
The Reactor
An Architectural Pattern for Event De-multiplexing and Dispatching
The Reactor pattern has been developed to provide an extensible OO framework for
efficient event de-multiplexing and dispatching. Current OS abstractions used for event
de-multiplexing are difficult and complicated to use, and are therefore error-prone. The
Reactor pattern essentially provides a set of higher-level programming abstractions that
simplify the design and implementation of event-driven distributed applications. Besides
this, the Reactor integrates the de-multiplexing of several different kinds of events into
one easy-to-use API. In particular, the Reactor uniformly handles timer-based events,
signal events, I/O-based port monitoring events and user-defined notifications.
In this chapter, we describe how the Reactor is used to de-multiplex all of these different
event types.
Reactor Components
[Component diagram: several application-specific event handlers, each consisting of an
application-specific part layered on top of the framework-supplied Event_Handler
interface, sit above the reactor framework.]
Event Handlers
The Reactor pattern is implemented in ACE as the ACE_Reactor class, which provides
an interface to the reactor framework's functionality.
As mentioned above, the reactor uses event handler objects as the service providers
which handle an event once the reactor has successfully de-multiplexed and dispatched
it. The reactor therefore internally remembers which event handler object is to be called
back when a certain type of event occurs. This association between events and their
event handlers is created when an application registers its handler object with the reactor
to handle a certain type of event.
Since the reactor needs to record which Event Handler is to be called back, it needs to
know the type of all Event Handler objects. This is achieved with the help of the
substitution pattern (in other words, through inheritance of the “is a type of” variety).
The framework provides an abstract interface class named ACE_Event_Handler from
which all application-specific event handlers MUST derive. (This gives each of the
application-specific handlers the same type, namely ACE_Event_Handler, and thus they
can be substituted for one another.) For more detail on this concept, please see the
reference on the Substitution Pattern [I].
In the component diagram above, the event handler ovals consist of a blue
Event_Handler portion, which corresponds to ACE_Event_Handler, and a white portion,
which corresponds to the application-specific portion.
This is illustrated in the class diagram below:
[Class diagram: the ACE_Event_Handler base class declares int handle_input(),
int handle_output(), etc. Application_Handler1, overriding handle_input() and
get_handle(), and Application_Handler2, overriding handle_output() and get_handle(),
both derive from ACE_Event_Handler.]
The ACE_Event_Handler class has several different “handle” methods, each of which is
used to handle a different kind of event. When an application programmer is interested in
a certain event, he subclasses the ACE_Event_Handler class and implements the handle
methods that he is interested in. As mentioned above, he then proceeds to “register” his
event handler class for that particular event with the reactor. The reactor will then make
sure that when the event occurs, the appropriate “handle” method in the appropriate
event handler object is called back automatically.
Thus once again, there are basically three steps to using the ACE_Reactor.
• Create a subclass of ACE_Event_Handler and implement the correct “handle_”
method in your subclass to handle the type of event you wish to service with this
event handler. (See table below to determine which “handle_” method you need
to implement. Note that you may use the same event handler object to handle
multiple types of events and thus may overload more than one of the “handle_”
methods.)
• Register your Event handler with the reactor by calling register_handler() on the
reactor object.
• As events occur, the reactor will automatically call back the correct “handle_”
method of the event handler object that was previously registered with the Reactor
to process that event.
A simple example should help make this a little clearer.
Example 1
#include <signal.h>
#include "ace/Reactor.h"
#include "ace/Event_Handler.h"
class
MyEventHandler: public ACE_Event_Handler{
 //Handle the signal events that are registered below
 int handle_signal(int signum, siginfo_t* =0, ucontext_t* =0){
  switch(signum){
   case SIGWINCH:
    ACE_DEBUG((LM_DEBUG, "You pressed SIGWINCH \n"));
    break;
   case SIGINT:
    ACE_DEBUG((LM_DEBUG, "You pressed SIGINT \n"));
    break;
  }
  return 0;
 }
};
MyEventHandler *eh=new MyEventHandler;
ACE_Reactor::instance()->register_handler(SIGWINCH,eh);
ACE_Reactor::instance()->register_handler(SIGINT,eh);
while(1)
 //Start the reactor's event loop
 ACE_Reactor::instance()->handle_events();
}
As we saw in the example above, an event handler is registered to handle a certain event
by calling the register_handler() method on the reactor. The register_handler() method
is overloaded, i.e., there are actually several methods for registering different event
types, each called register_handler() but having a different signature (the methods differ
in their arguments). The register_handler() methods basically take a
handle/event_handler tuple or a signal/event_handler tuple as arguments and add it to
the reactor's internal dispatch tables. When an event occurs on a handle, the reactor
finds the corresponding event_handler in its internal dispatch table and automatically
calls back the appropriate method on the event_handler it finds. More details of specific
calls to register handlers will be illustrated in later sections.
Removal and Lifetime Management of Event Handlers
Once the required event has been processed, it may not be necessary to keep the event
handler registered with the reactor. The reactor therefore offers techniques to remove an
event handler from its internal dispatch tables. Once the event handler is removed, it will
no longer be called back by the reactor.
An example of such a situation could be a server which serves multiple clients. The
clients connect to the server, have it perform some work, and then disconnect later.
When a new client connects to the server, an event handler object is instantiated and
registered in the server's reactor to handle all I/O from this client. When the client
disconnects, the server must remove the event handler from the reactor's dispatch tables,
as it no longer expects any further I/O from the client. In this example, the client-server
connection may be closed down, which leaves the I/O handle (a file descriptor in UNIX)
invalid. It is important that such a defunct handle be removed from the Reactor: if this is
not done, the Reactor will mark the handle as “ready for reading” and continually call
back the handle_input() method of the event handler forever.
There are several techniques to remove an event handler from the reactor.
The more common technique for removing a handler from the reactor is implicit
removal. Each of the “handle_” methods of the event handler returns an integer to the
reactor. If this integer is 0, the event handler remains registered with the reactor after the
handle method completes. However, if the “handle_” method returns a value less than 0,
the reactor will automatically call back the handle_close() method of the Event Handler
and remove it from its internal dispatch tables. The handle_close() method is used to
perform any handler-specific cleanup that needs to be done before the event handler is
removed, which may include things like deleting dynamic memory that had been
allocated by the handler or closing log files.
In the example described above it is necessary to actually remove the event handler from
memory. Such removal can also occur in the handle_close() method of the concrete event
handler class. Consider the following concrete event handler:
class MyEventHandler: public ACE_Event_Handler{
public:
 MyEventHandler(){ /* construct internal data members */ }
 virtual int
 handle_close(ACE_HANDLE handle, ACE_Reactor_Mask mask){
  delete this; //commit suicide
  return 0;
 }
 ~MyEventHandler(){ /* destroy internal data members */ }
private:
 //internal data members
};
This class deletes itself when it is de-registered from the reactor and the handle_close()
hook method is called. It is VERY important, however, that MyEventHandler always be
allocated dynamically; otherwise the global memory heap may be corrupted. One way to
ensure that the class is always created dynamically is to move the destructor into the
private section of the class. For example:
class MyEventHandler: public ACE_Event_Handler{
public:
 MyEventHandler(){ /* construct internal data members */ }
 virtual int handle_close(ACE_HANDLE handle, ACE_Reactor_Mask mask){
  delete this; //commit suicide
  return 0;
 }
private:
 //Class must be allocated dynamically
 ~MyEventHandler(){ /* destroy internal data members */ }
};
Explicit removal of Event Handlers from the Reactors Internal Dispatch Tables
Another way to remove an Event Handler from the reactor's internal tables is to
explicitly call the remove_handler() set of methods on the reactor. This method is
overloaded, just as register_handler() is. It takes the handle or the signal number whose
handler is to be removed, and removes it from the reactor's internal dispatch tables.
When remove_handler() is called, it also automatically calls the handle_close() method
of the Event Handler being removed. This can be controlled by passing in the mask
ACE_Event_Handler::DONT_CALL, which suppresses the call to handle_close().
In the next few sections we will illustrate how the Reactor is used to handle various types
of events.
I/O Event De-multiplexing
The Reactor can be used to handle I/O device based input events by overloading the
handle_input() method in the concrete event handler class. Such I/O may be on disk
files, pipes, FIFOs or network sockets. For I/O device based event handling, the Reactor
internally uses the handle to the device, which is obtained from the operating system.
(The handle on UNIX-based systems is the file descriptor returned by the OS when a file
or socket is opened; in Windows, it is a handle to the device returned by Windows.) One
of the most useful applications of such de-multiplexing is obviously for network
applications. The following example will help illustrate how the reactor may be used in
conjunction with the concrete acceptor to build a server.
Example 2
#include "ace/Reactor.h"
#include "ace/SOCK_Acceptor.h"
#define PORT_NO 19998
typedef ACE_SOCK_Acceptor Acceptor;
//forward declaration
class My_Accept_Handler;
class
My_Input_Handler: public ACE_Event_Handler{
public:
 //Constructor
 My_Input_Handler(){
  ACE_DEBUG((LM_DEBUG,"Constructor\n"));
 }
 //Called back by the reactor when input arrives on the connection
 int handle_input(ACE_HANDLE){
  peer_.recv_n(data,12);
  ACE_DEBUG((LM_DEBUG,"%s\n",data));
  return 0; //stay registered with the reactor
 }
 //Used by the reactor to obtain the underlying handle
 ACE_HANDLE get_handle() const{ return peer_.get_handle(); }
 //Access the underlying stream
 ACE_SOCK_Stream & peer(){ return peer_; }
private:
 ACE_SOCK_Stream peer_;
 char data [12];
};
class
My_Accept_Handler: public ACE_Event_Handler{
public:
 //Constructor
 My_Accept_Handler(ACE_Addr &addr){
  this->open(addr);
 }
 //Open the acceptor so it starts to "listen" for clients
 int open(ACE_Addr &addr){ return peer_acceptor_.open(addr); }
 //Called back by the reactor when a client requests a connection
 int handle_input(ACE_HANDLE){
  //Accept the connection "into" a new input handler
  My_Input_Handler *eh=new My_Input_Handler();
  if(peer_acceptor_.accept(eh->peer())==-1)
   ACE_ERROR((LM_ERROR,"Error in connection\n"));
  ACE_DEBUG((LM_DEBUG,"Connection established\n"));
  //Hand further I/O on this connection off to the new handler
  ACE_Reactor::instance()->register_handler(eh,ACE_Event_Handler::READ_MASK);
  return 0;
 }
 ACE_HANDLE get_handle() const{ return peer_acceptor_.get_handle(); }
private:
 Acceptor peer_acceptor_;
};
In the above example two concrete event handlers are created. The first concrete event
handler, My_Accept_Handler is used to accept and establish incoming connections from
clients. The other event handler is My_Input_Handler which is used to handle the
connection after it has been established. Thus My_Accept_Handler accepts the
connection and delegates the actual handling off to My_Input_Handler.
[Sequence diagram: the main program creates and registers My_Accept_Handler; the
reactor calls its get_handle() and, on a new connection, its handle_input(), which creates
a My_Input_Handler; the reactor then calls get_handle() and handle_input() on that
handler as data arrives.]
Timers
The Reactor also includes methods to schedule timers which, on expiry, call back the
handle_timeout() method of the appropriate event handler. To schedule such timers, the
reactor has a schedule_timer() method. This method is passed the event handler whose
handle_timeout() method is to be called back, and the delay in the form of an
ACE_Time_Value object. In addition, an interval may also be specified, which causes
the timer to be reset automatically after it expires.
Internally the Reactor maintains an ACE_Timer_Queue which maintains all of the timers
in the order in which they are to be scheduled. The actual data structure used to hold the
timers can be varied by using the set_timer_queue() method of the reactor. Several
different timer structures are available to use with the reactor including timer wheels,
heaps and hashed timing wheels. These are discussed in a later section in detail.
ACE_Time_Value
The ACE_Time_Value class is a wrapper which encapsulates the date and time
structure of the underlying OS platform. It is based on the timeval structure available on
most UNIX operating systems, which stores absolute time in seconds and micro-seconds.
Other OS platforms, such as POSIX and Win32, use slightly different representations.
This class encapsulates these differences and provides a portable C++ interface.
The ACE_Time_Value class uses operator overloading which provides for simple
arithmetic additions, subtractions and comparisons. Methods in this class are
implemented to “normalize” time quantities. Normalization adjusts the two fields in a
timeval struct to use a canonical encoding scheme that ensures accurate comparisons.
(For more see Appendix and Reference Guide).
The following example illustrates how timers can be used with the reactor.
Example 3
#include "test_config.h"
#include "ace/Timer_Queue.h"
#include "ace/Reactor.h"
#define NUMBER_TIMERS 10
static int done = 0;
static int count = 0;
class Time_Handler: public ACE_Event_Handler{
public:
//Called back by the reactor when a timeout occurs
int handle_timeout (const ACE_Time_Value &tv, const void *arg){
ACE_DEBUG((LM_DEBUG,"Timer #%d timed out!\n",(int)(long)arg));
//Increment count
count ++;
if (count == NUMBER_TIMERS - 1) //one timer will be canceled
done = 1;
return 0;
}
};
int
main (int, char *[])
{
ACE_Reactor reactor;
Time_Handler *th=new Time_Handler;
int timer_id[NUMBER_TIMERS];
int i;
for (i=0; i<NUMBER_TIMERS; i++)
//schedule timer i with a delay of 2*i+1 seconds, passing i as the
//argument that handle_timeout() will receive
timer_id[i]= reactor.schedule_timer (th, (const void *)i,
ACE_Time_Value (2 * i + 1));
//Cancel the fifth timer before it goes off, using its timer id
reactor.cancel_timer (timer_id[5]);
while (!done)
reactor.handle_events ();
return 0;
}
In the above example an event handler, Time_Handler, is first set up to handle the
timeouts by implementing the handle_timeout() method. The main routine instantiates an
object of type Time_Handler and schedules multiple timers (10 timers) using the
schedule_timer() method of the reactor. This method takes, as arguments, a pointer to
the handler which will be called back, the time after which the timer will go off and an
argument that will be sent to the handle_timeout() method when it is called back. Each
time schedule_timer() is called it returns a unique timer identifier which is stored in the
array timer_id[]. This identifier may be used to cancel that timer at any time. An example
of canceling a timer is also shown above, where the fifth timer is canceled
by calling the reactor's cancel_timer() method after all the timers have been initially
scheduled. We cancel this timer by passing its timer_id as an argument to the
cancel_timer() method of the reactor.
Using different Timer Queues
Different environments may require different ways of scheduling and canceling timers.
The performance of the algorithms used to implement timers becomes an issue when any
of the following are true:
• Fine-granularity timers are required.
• The number of outstanding timers at any one time can potentially be very large.
• The algorithm is implemented on hardware where interrupts are expensive.
ACE allows the user to choose from among several timer queues which come with ACE,
or to develop their own conforming to the interface defined for timers. The different
timers available in ACE are detailed in the following table:
Timer            Description of data structure              Performance
ACE_Timer_Heap   The timers are stored in a heap            schedule_timer(): O(lg n)
                 implementation of a priority queue.        cancel_timer(): O(lg n)
                                                            finding current timer: O(1)
ACE_Timer_List   The timers are stored in a doubly          schedule_timer(): O(n)
                 linked list, kept in order of expiry.      cancel_timer(): O(1)
                                                            finding current timer: O(1)
ACE_Timer_Hash   The structure used in this case is a       schedule_timer(): worst O(n),
                 variation on the timer wheel algorithm.    best O(1)
                 The performance is highly dependent        cancel_timer(): O(1)
                 on the hashing function used.              finding current timer: O(1)
Handling Signals
As we saw in Example 1, the Reactor includes methods to allow the handling of signals.
The event handler which handles a signal should overload the handle_signal()
method, as this will be called back by the reactor when the signal occurs. To register for a
signal we use one of the register_handler() methods, as was illustrated in Example 1.
When interest in a certain signal ends, the handler can be removed, and the previously
installed signal handler restored, by calling remove_handler(). The Reactor internally
uses the sigaction() system call to set and reset signal handlers. Signal handling can also
be done without the reactor by using the ACE_Sig_Handlers class and its associated
methods.
One important difference between using the reactor for handling signals and using the
ACE_Sig_Handlers class is that the reactor-based mechanism only allows the application
to associate one event handler with each signal. The ACE_Sig_Handlers class, however,
allows multiple event handlers to be called back when a signal occurs.
Using Notifications
The reactor not only issues callbacks when system events occur, but can also be used to
call back handlers when user-defined events occur. This is done through the reactor's
"Notification" interface, which consists of two methods: notify() and
max_notify_iterations().
The reactor can be explicitly instructed to issue a callback on a certain event handler
object by using the notify() method. This comes in very useful when the reactor is used in
conjunction with message queues or with co-operating tasks. Good examples of this kind
of usage can be found when the ASX framework components are used with the reactor.
The max_notify_iterations() method informs the reactor to perform only the specified
number of "iterations" at a time. Here iterations refers to the number of "notifications"
that can be serviced in a single handle_events() call. Thus if max_notify_iterations() is
used to set the maximum number of iterations to 20 and 25 notifications arrive
simultaneously, then the handle_events() method will only service 20 of the notifications
at a time. The remaining five notifications will be handled when handle_events() is
called the next time in the event loop.
An example will help illustrate these ideas further.
Example 4
#include "ace/Reactor.h"
#include "ace/Event_Handler.h"
#include "ace/Synch_T.h"
#include "ace/Thread_Manager.h"
#define WAIT_TIME 1
#define SLEEP_TIME 2
class My_Handler: public ACE_Event_Handler{
public:
//Register the handler with the reactor
My_Handler(){
ACE_Reactor::instance()
->register_handler(this,ACE_Event_Handler::READ_MASK);
}
//The actual handler which in this case will handle the notifications
int handle_input(int){
ACE_DEBUG((LM_DEBUG,"Got notification # %d\n",no));
no++;
return 0;
}
//The method which sends the notifications to the reactor
void perform_notifications(){
ACE_Reactor::instance()
->notify(this,ACE_Event_Handler::READ_MASK);
}
private:
static int no;
};
//Static members
int My_Handler::no=1;
int main(int, char *[]){
My_Handler handler;
int done=0;
while(1){
//After WAIT_TIME the handle_events will fall through if no events
//arrive.
ACE_Reactor::instance()->handle_events(ACE_Time_Value(WAIT_TIME));
if(!done){
handler.perform_notifications();
done=1;
}
sleep(SLEEP_TIME);
}
}
Example 5
#include "ace/Reactor.h"
#include "ace/Event_Handler.h"
#include "ace/Synch_T.h"
#include "ace/Thread_Manager.h"
class My_Handler: public ACE_Event_Handler{
public:
//Register with the reactor, then create the notification thread
My_Handler(){
ACE_Reactor::instance()
->register_handler(this,ACE_Event_Handler::READ_MASK);
activate();
}
//The actual handler which in this case will handle the notifications
int handle_input(int){
ACE_DEBUG((LM_DEBUG, "Got notification # %d\n", no));
no++;
return 0;
}
//Spawn a separate thread with the static svc_start() as entry point
void activate(){
ACE_Thread_Manager::instance()->spawn(
(ACE_THR_FUNC)My_Handler::svc_start,(void*)this);
}
static int svc_start(void* arg);
//Send the notifications to the reactor from the new thread
int svc(){
for(int i=0;i<10;i++)
ACE_Reactor::instance()
->notify(this,ACE_Event_Handler::READ_MASK);
return 0;
}
private:
static int no;
};
//Static members
int My_Handler::no=1;
int My_Handler::svc_start(void* arg){
My_Handler *eh= (My_Handler*)arg;
eh->svc();
return -1; //de-register from the reactor
}
int main(int, char *[]){
My_Handler handler;
while(1){
ACE_Reactor::instance()->handle_events();
sleep(3);
}
}
This example is very similar to the previous one, except for a few additional methods
used to spawn a thread and then activate it in the event handler. In particular, the
constructor of the concrete handler My_Handler calls the activate() method. This method
uses the ACE_Thread_Manager::spawn() method to spawn a separate thread with its
entry point as svc_start().
The svc_start() method forwards to svc(), and the notifications are sent to the
reactor, but this time they are sent from this new thread instead of from the same thread
that the reactor resides in. Note that the entry point, svc_start(), of the thread had to be
defined as a static method, since a thread's entry point cannot be an ordinary member
function.
The Acceptor and Connector
Patterns for connection establishment
An acceptor is usually used where you would imagine you would use the BSD accept()
system call, as was discussed in the chapter on stand-alone acceptors. The Acceptor
pattern is applicable in the same situation but, as we will see, provides a lot more
functionality. In ACE the acceptor pattern is implemented with the help of a "factory"
named ACE_Acceptor. A factory is a class which abstracts the instantiation process of
(usually) helper objects. It is common in OO designs for a complex class to delegate
certain functions off to a helper class. The choice of which class the complex class
creates as a helper, and then delegates to, may have to be flexible. This flexibility is
afforded with the help of a factory. Thus a factory allows an object to change its
underlying strategies by changing the object that it delegates the work to. The factory,
however, provides the same interface to the applications which are using it, and thus the
client code may not need to be changed at all. (Read more about factories in the
reference on "Design Patterns".)
[Figure: a client uses the factory's fixed interface; the factory instantiates one of
several interchangeable helper objects and delegates the work to it.]
COMPONENTS
As is clear from the above discussion there are three major participant classes in the
Acceptor pattern:
• The concrete acceptor, which contains a specific strategy for establishing a
connection, tied to an underlying transport protocol mechanism. Examples of
different concrete acceptors that can be used in ACE are ACE_SOCK_ACCEPTOR
(uses TCP to establish the connection), ACE_LSOCK_ACCEPTOR (uses UNIX
domain sockets to establish the connection), etc.
• The concrete service handler, which is written by the application developer and
whose open() method is called back automatically when the connection has been
established.
• The reactor, which is used in conjunction with the acceptor and the service
handler to demultiplex connection events to them.
Example 1
#include "ace/Reactor.h"
#include "ace/Svc_Handler.h"
#include "ace/Acceptor.h"
#include "ace/Synch.h"
#include "ace/SOCK_Acceptor.h"
#define PORT_NUM 10101
class My_Svc_Handler:
public ACE_Svc_Handler <ACE_SOCK_STREAM,ACE_NULL_SYNCH>{
//the open method which will be called back automatically after the
//connection has been established.
public:
int open(void*){
cout<<"Connection established"<<endl;
return 0;
}
};
// Create the acceptor as described above.
typedef ACE_Acceptor<My_Svc_Handler,ACE_SOCK_ACCEPTOR> MyAcceptor;
int main(int argc, char* argv[]){
//create the address on which we will accept incoming connections
ACE_INET_Addr addr(PORT_NUM);
//instantiate the acceptor, passing it the address and an instance
//of the reactor singleton
MyAcceptor acceptor(addr, ACE_Reactor::instance());
while(1)
// Start the reactor's event loop
ACE_Reactor::instance()->handle_events();
}
In the above example we first create an endpoint address on which we wish to accept.
Since we have decided to use TCP/IP as the underlying connection protocol we create an
ACE_INET_Addr as our endpoint and pass it the port number we want it to listen on. We
pass this address and an instance of the reactor singleton to the acceptor that we
instantiate after this. This acceptor, after being instantiated, will automatically accept any
connection requests on PORT_NUM and call back My_Svc_Handler's open() method after
establishing connections for such requests. Notice that when we instantiated the
ACE_Acceptor factory we passed it the concrete acceptor we wanted to use, i.e.,
ACE_SOCK_ACCEPTOR.
Example 2
#include "ace/Reactor.h"
#include "ace/Svc_Handler.h"
#include "ace/Acceptor.h"
#include "ace/Synch.h"
#include "ace/SOCK_Acceptor.h"
#define PORT_NUM 10101
#define DATA_SIZE 12
class My_Svc_Handler:
public ACE_Svc_Handler <ACE_SOCK_STREAM,ACE_NULL_SYNCH>{
public:
//Register the service handler with the reactor once the connection
//has been established
int open(void*){
ACE_Reactor::instance()
->register_handler(this,ACE_Event_Handler::READ_MASK);
return 0;
}
int handle_input(ACE_HANDLE){
//After using the peer() method of ACE_Svc_Handler to obtain a
//reference to the underlying stream of the service handler class
//we call recv_n() on it to read the data which has been received.
//This data is stored in the data array and then printed out
peer().recv_n(data,DATA_SIZE);
ACE_OS::printf("<< %s\n",data);
return 0;
}
private:
char data[DATA_SIZE];
};
typedef ACE_Acceptor<My_Svc_Handler,ACE_SOCK_ACCEPTOR> MyAcceptor;
int main(int argc, char* argv[]){
ACE_INET_Addr addr(PORT_NUM);
MyAcceptor acceptor(addr, ACE_Reactor::instance());
while(1)
//Start the reactor's event loop
ACE_Reactor::instance()->handle_events();
}
The only difference between this example and the previous one is that we register the
service handler with the reactor in the open() method of our service handler. We
consequently have to write a handle_input() method which will be called back by the
reactor when data comes in on the connection. In this case we just print out the data we
receive on the screen. The peer() method of the ACE_Svc_Handler class is a useful
method which returns a reference to the underlying peer stream. We use the recv_n()
method of the underlying stream wrapper class to obtain the data received on the
connection.
The real power of this pattern lies in the fact that the underlying connection establishment
mechanism is fully encapsulated in the concrete acceptor, and can therefore be changed
very easily. In the next example we change the underlying connection establishment
mechanism so that it uses UNIX domain sockets instead of the TCP sockets we were
using before. The example, again with minimal change, is as follows:
Example 3
class My_Svc_Handler:
public ACE_Svc_Handler <ACE_LSOCK_STREAM,ACE_NULL_SYNCH>{
public:
int open(void*){
cout<<"Connection established"<<endl;
ACE_Reactor::instance()
->register_handler(this,ACE_Event_Handler::READ_MASK);
return 0;
}
int handle_input(ACE_HANDLE){
char* data= new char[DATA_SIZE];
peer().recv_n(data,DATA_SIZE);
ACE_OS::printf("<< %s\n",data);
delete[] data;
return 0;
}
};
The differences between Example 2 and Example 3 are minimal, yet the two programs
use very different connection establishment paradigms. Several other connection
establishment mechanisms are also available in ACE, such as ACE_TLI_Acceptor
(TLI) and ACE_SPIPE_Acceptor (STREAM pipes).
THE CONNECTOR
The Connector is very similar to the Acceptor. It is also a factory, but in this case it
is used to actively connect to a remote host. After the connection has been established it
will automatically call back the open() method of the appropriate service handling object.
The connector is usually used where you would use the BSD connect() call. In ACE the
connector, just like the acceptor, is implemented as a template container class called
ACE_Connector. As mentioned earlier, it takes two parameters, the first being a
"concrete" connector class and the second being the event handler class which is to be
called when the connection is established.
You MUST note that the underlying concrete connector and the factory ACE_Connector
are both very different things. The ACE_Connector factory USES the underlying
concrete connector to establish the connection. The ACE_Connector factory then USES
the appropriate event or service handling routine (the one passed in through its template
argument) to handle the new connection after the connection has been established by the
concrete connector. The concrete connectors can be used without the ACE_Connector
factory as we saw in the IPC chapter. The ACE_Connector factory however cannot be
used without a concrete connector class (as it is this class which actually handles
connection establishment).
Example 4
typedef ACE_Connector<My_Svc_Handler,ACE_SOCK_CONNECTOR> MyConnector;
In the above example, PORT_NO and HOSTNAME specify the machine and port we wish to
actively connect to. After instantiating the connector we call its connect() method, passing
it the service routine that is to be called back when the connection is fully established and
the address that we wish to connect to.
The Acceptor and Connector patterns will in general be used together. In a client-server
application the server will typically contain the acceptor whereas a client will contain the
connector. However in certain applications both the acceptor and connector may be used
together. An example of such an application is given below. In this example a single
message is repeatedly sent to the peer machine and at the same time another message is
received from the remote. Since two functions must be performed at the same time an
easy solution is to send and receive messages in separate threads.
This example contains both an acceptor and a connector. The user can take arguments at
the command prompt and tell the application whether it is going to play a server or client
role. The application will then call main_accept() or main_connect() as appropriate.
class MyServiceHandler:
public ACE_Svc_Handler<ACE_SOCK_STREAM,ACE_NULL_SYNCH>{
public:
//Used by the two threads "globally" to determine their peer stream
static ACE_SOCK_Stream* Peer;
int open(void*){
cout<<"Acceptor: received new connection"<<endl;
//remember the peer stream so that the send and receive threads
//can use it (the thread-spawning code is elided in the original)
Peer= &peer();
return 0;
}
};
ACE_SOCK_Stream* MyServiceHandler::Peer= 0;
typedef ACE_Acceptor<MyServiceHandler,ACE_SOCK_ACCEPTOR> Acceptor;
typedef ACE_Connector<MyServiceHandler,ACE_SOCK_CONNECTOR> Connector;
int main_accept(){
ACE_INET_Addr addr(PORT_NO);
Acceptor myacceptor(addr,ACE_Reactor::instance());
while(1)
ACE_Reactor::instance()->handle_events();
return 0;
}
int main_connect(){
ACE_INET_Addr addr(PORT_NO,HOSTNAME);
MyServiceHandler *my_svc_handler= new MyServiceHandler;
Connector myconnector;
myconnector.connect(my_svc_handler,addr);
while(1)
ACE_Reactor::instance()->handle_events();
return 0;
}
This is a simple example which illustrates how the acceptor and connector patterns can be
used in combination to produce a service handling routine which is completely decoupled
from the underlying network establishment method. The above example can be easily
adapted to a different transport mechanism by substituting different concrete acceptors
and connectors.
Advanced Sections
The following sections give a more detailed explanation of how the Acceptor and
Connector patterns actually work. This is required reading if you wish to tune the service
handling and connection establishment policies. This includes tuning the creation and
concurrency strategy of your service handling routine and the connection establishment
strategy that the underlying concrete connector will use. In addition, there is a section
which explains how you can use the advanced features which you automatically get by
using the ACE_Svc_Handler classes. Lastly, we show how you can use a simple
lightweight ACE_Event_Handler with the acceptor and connector patterns.
THE ACE_SVC_HANDLER CLASS
ACE_Task
ACE_Task has been designed to be used with the ASX Streams framework which is
based on the streams facility in UNIX System V. ASX is also very similar in design to
the X-kernel protocol tools built by Larry Peterson.
The basic idea in ASX is that an incoming message is assigned to a stream. This stream is
constructed out of several modules. Each module performs some fixed function on the
incoming message which is then passed on to the next module for further processing until
it reaches the end of the stream. The actual processing in the module is done by tasks.
There are usually two tasks to each module, one for processing incoming messages and
one for processing outgoing messages. This kind of a structure is very useful when
constructing protocol stacks. Since each module has a fixed, simple interface, modules
can be created and easily re-used across different applications. For example,
consider an application which is to process incoming messages from the data link layer.
The programmer would construct several modules each dealing with a different level of
the protocol processing. Thus he would construct a separate module to do network layer
processing, another for transport layer processing and still another for presentation layer
processing. After constructing these modules they can be chained together into a stream
(with the help of ASX) and used. At a later time if a new (and perhaps better) transport
module is created then the earlier transport module can be replaced in the stream without
affecting anything else in the program. Note that the module is like a container which
holds tasks; it is the tasks that do the actual processing.
Each ACE_Task has an internal message queue which is its means of communicating
with other tasks, modules or the outside world. If one ACE_Task wishes to send a
message to another task, it will enqueue the message on the destination task's message
queue. Once the task receives the message it will immediately begin processing it.
Every ACE_Task can run as zero or more threads. Messages can be enqueued and
dequeued by multiple threads on an ACE_Task’s message queue without the programmer
worrying about corrupting any of the data structures. Thus tasks may be used as the
fundamental architectural component of a system of co-operating threads. Each thread of
control can be encapsulated in an ACE_Task which interacts with other tasks by sending
messages to their message queues which they process and then respond to.
The only problem with this kind of architecture is that tasks can only communicate with
each other through their message queues within the same process. The ACE_Svc_Handler
class solves this problem. ACE_Svc_Handler inherits from both ACE_Task and
ACE_Event_Handler, and adds a private data stream. This combination makes it possible
for an ACE_Svc_Handler object to act as a task that has the ability to react to events and
to send and receive data to and from tasks on remote hosts.
ACE_Task has been implemented as a template container which is instantiated with a
locking mechanism, the lock being used to insure the integrity of the internal message
queue in a multi-threaded environment. As mentioned earlier, ACE_Svc_Handler is also
a template container which is passed not only the locking mechanism but also the
underlying data stream that it will use for communication to remote tasks.
In Example 5, above, we created a separate thread to send data to the remote peer using
the ACE_Thread wrapper class and its static spawn() method. When we did this, however,
we had to define the send_data() method at file scope using the C++ static specifier.
The consequence of this, of course, was that we couldn't access any data members of the
actual object we had instantiated. In other words, we were forced to make the
send_data() member function class-wide when this was NOT what we wanted to do.
The only reason this was done was because ACE_Thread::spawn() can only use a
static member function as the entry point for the thread it creates. Another adverse side
effect was that a reference to the peer stream had to be made static also. In short, this
wasn't the best way this code could have been written.
ACE_Task provides a nice mechanism to avoid this problem. Each ACE_Task has an
activate() method which can be called to create threads for the ACE_Task. The entry
point of the created thread will be in the non-static member function svc(). Since svc() is
a non-static member function it can call any object instance specific data or member
functions. ACE hides all the nitty gritty of how this is done from the programmer. The
activate() method is very versatile. It allows the programmer to create multiple threads all
of which use the svc() method as their entry point. Thread priorities, handles, names etc.
can also be set. The method prototype for activate is
// = Active object activation method.
virtual int activate (long flags = THR_NEW_LWP,
int n_threads = 1,
int force_active = 0,
long priority = ACE_DEFAULT_THREAD_PRIORITY,
int grp_id = -1,
ACE_Task_Base *task = 0,
ACE_hthread_t thread_handles[] = 0,
void *stack[] = 0,
size_t stack_size[] = 0,
ACE_thread_t thread_names[] = 0);
The first parameter, flags, describes the desired properties of the threads which are to be
created. These are described in detail in the chapter on threads. The possible flags here
are:
THR_CANCEL_DISABLE, THR_CANCEL_ENABLE, THR_CANCEL_DEFERRED,
THR_CANCEL_ASYNCHRONOUS, THR_BOUND, THR_NEW_LWP, THR_DETACHED,
THR_SUSPENDED, THR_DAEMON, THR_JOINABLE, THR_SCHED_FIFO,
THR_SCHED_RR, THR_SCHED_DEFAULT
The second parameter, n_threads, specifies the number of threads to be created. The third
parameter, force_active, is used to specify whether new threads should be created even if
the activate() method has already been called previously and the task or service
handler is thus already running multiple threads. If this is set to false (0), then calling
activate() again will result in the failure code being set and no further threads will be
spawned.
Example 6
#include "ace/Reactor.h"
#include "ace/Svc_Handler.h"
#include "ace/Acceptor.h"
#include "ace/Synch.h"
#include "ace/SOCK_Acceptor.h"
#define PORT_NUM 10101
class MyServiceHandler:
public ACE_Svc_Handler<ACE_SOCK_STREAM,ACE_MT_SYNCH>{
// The two thread names are kept here
ACE_thread_t thread_names[2];
public:
int open(void*){
ACE_DEBUG((LM_DEBUG, "Acceptor: received new connection \n"));
//Register with the reactor to handle incoming data
ACE_Reactor::instance()
->register_handler(this,ACE_Event_Handler::READ_MASK);
//Create two new threads whose entry point is the svc() method;
//their names are recorded in thread_names
activate(THR_NEW_LWP, 2, 0, ACE_DEFAULT_THREAD_PRIORITY,
-1, this, 0, 0, 0, thread_names);
return 0;
}
void send_message1(void){
//Send message type 1
ACE_DEBUG((LM_DEBUG,"(%t)Sending message::>>"));
peer().send_n("Message type 1\n", 16);
}
void send_message2(void){
//Send message type 2
ACE_DEBUG((LM_DEBUG,"(%t)Sending message::>>"));
peer().send_n("Message type 2\n", 16);
}
int svc(void){
ACE_DEBUG( (LM_DEBUG,"(%t) Svc thread \n"));
if(ACE_Thread::self()== thread_names[0])
while(1) send_message1(); //send message 1's forever
else
while(1) send_message2(); //send message 2's forever
return 0; // keep the compiler happy.
}
int handle_input(ACE_HANDLE){
ACE_DEBUG((LM_DEBUG,"(%t) handle_input ::"));
char* data= new char[13];
peer().recv_n(data,12);
ACE_OS::printf("<< %s\n",data);
delete[] data;
return 0;
}
};
typedef ACE_Acceptor<MyServiceHandler,ACE_SOCK_ACCEPTOR> MyAcceptor;
int main(int argc, char* argv[]){
ACE_INET_Addr addr(PORT_NUM);
MyAcceptor acceptor(addr,ACE_Reactor::instance());
while(1)
ACE_Reactor::instance()->handle_events();
return 0;
}
In this example activate() is called after the service handler is registered with the reactor
in its open() method. It is used to create two threads. The names of the threads are
remembered so that when they call the svc() routine we can distinguish between them.
Each thread sends a different type of message to the remote peer. Notice that in this case
the thread creation is totally transparent. In addition, since the entry point is a normal
non-static member function, it is used without any ugly changes to remember data
members such as the peer stream. We can simply call the member function peer() to
obtain the underlying stream whenever we need it.
As mentioned before, the ACE_Svc_Handler class has a built-in message queue. This
message queue is used as the primary communication interface between an
ACE_Svc_Handler and the outside world. Messages that other tasks wish to send to the
service handler are enqueued into its message queue. These messages may then be
processed in a separate thread (created by calling the activate() method). Yet another
thread may then take the processed message and send it across the network to a different
remote destination (quite possibly to another ACE_Svc_Handler).
As mentioned earlier, in this multi-threaded scenario the ACE_Svc_Handler will
automatically ensure that the integrity of the message queue is maintained with the use of
locks. The lock used will be the same lock which was passed when the concrete service
handler was created by instantiating the ACE_Svc_Handler template class. The reason
that the locks are passed in this way is so that the programmer may "tune" his
application. Different locking mechanisms on different platforms have different amounts
of overhead. If required, the programmer may create his own optimized lock which obeys
the ACE interface for a lock and use it with the service handler. This is just another
example of the kind of flexibility that the programmer can achieve by using ACE. The
important thing that the programmer MUST be aware of is that additional threads in the
service handling routines WILL cause significant locking overhead. To keep this
overhead to a minimum the programmer must design his program carefully, ensuring
that such overhead is minimized. In particular, the example described above will
probably have excessive overhead and may be infeasible in most situations.
class MyServiceHandler:
public ACE_Svc_Handler<ACE_SOCK_STREAM,ACE_MT_SYNCH>{
// The message sender and creator threads are handled here.
ACE_thread_t thread_names[2];
public:
int open(void*){
ACE_DEBUG((LM_DEBUG, "Acceptor: received new connection \n"));
ACE_Reactor::instance()
->register_handler(this,ACE_Event_Handler::READ_MASK);
//Create the message construction and message sending threads
activate(THR_NEW_LWP, 2, 0, ACE_DEFAULT_THREAD_PRIORITY,
-1, this, 0, 0, 0, thread_names);
return 0;
}
void send_message(void){
//Dequeue the message and send it off
ACE_DEBUG((LM_DEBUG,"(%t)Sending message::>>"));
ACE_Message_Block *mb;
this->getq(mb); //dequeue a block from the message queue
//the data of a block starts at its read pointer; length()
//returns the length of that data only
peer().send_n(mb->rd_ptr(), mb->length());
mb->release(); //done with the block, release it
}
int construct_message(void){
// A very fast message creation algorithm
// would lead to the need for queuing messages
// here. These messages are created and then sent
// using the SLOW send_message() routine which is
// running in a different thread so that the message
// construction thread isn't blocked.
ACE_DEBUG((LM_DEBUG,"(%t)Constructing message::>> "));
ACE_Message_Block *mb= new ACE_Message_Block(128);
ACE_OS::sprintf(mb->wr_ptr(),"Hello");
mb->wr_ptr(ACE_OS::strlen("Hello")+1); //advance the write pointer
this->putq(mb); //enqueue the block on the message queue
return 0;
}
int svc(void){
ACE_DEBUG( (LM_DEBUG,"(%t) Svc thread \n"));
//one thread constructs the messages, the other sends them
if(ACE_Thread::self()== thread_names[0])
while(1) construct_message();
else
while(1) send_message();
return 0;
}
int handle_input(ACE_HANDLE){
ACE_DEBUG((LM_DEBUG,"(%t) handle_input ::"));
char* data= new char[13];
peer().recv_n(data,12);
ACE_OS::printf("<< %s\n",data);
return 0;
}
};
This example illustrates the use of the putq() and getq() methods to enqueue and dequeue
message blocks onto the queue. It also illustrates how to create a message block and then
how to set its write pointer and read from its read pointer. Note that the actual data inside
the message block starts at the read pointer of the message block. The length() member
function of the message block returns the length of the underlying data stored in the
message block and does not include the parts of ACE_Message_Block which are used for
book-keeping purposes. In addition, we also show how to release the message block (mb)
using the release() method.
To read more about how to use message blocks, data blocks or the message queue please
read the sections in this manual on message queues and the ASX framework. Also see the
relevant sections in the reference manual.
Both the acceptor and connector factories, i.e., ACE_Acceptor and ACE_Connector, have
a very similar operational structure. Their operation can be roughly divided into three
phases:
• End point or connection initialization phase
• Service initialization phase
• Service processing phase
End point or connection initialization phase
In the case of the acceptor, the application-level programmer may either call the open()
method of the factory, ACE_Acceptor, or its default constructor (which in fact WILL
call the open() method) to begin passively listening for connections. When the open()
method is called on the acceptor factory it first instantiates the Reactor singleton if it has
not already been instantiated. It then proceeds to call the underlying concrete acceptor's
open() method. The concrete acceptor will then go through the necessary initialization it
needs to perform to listen for incoming connections. For example, in the case of
ACE_SOCK_Acceptor it will open a socket and bind the socket to the port and address on
which the user wishes to listen for new connections. After binding the port it will proceed
to issue the listen call. The open() method then registers the acceptor factory with the
Reactor. Thus when any incoming connection requests are received, the reactor will
automatically call back the acceptor factory's handle_input() method. Notice that the
acceptor factory itself derives from the ACE_Event_Handler hierarchy for this very
reason, so that it can be registered with the reactor.
//Asynchronous: issue the connects without blocking the caller
OurConnector.connect_n(NUMBER_CONN, ArrayofMySvcHandlers, Remote_Addr, 0,
ACE_Synch_Options::asynch);
If the connect call is issued asynchronously then the ACE_Connector will register itself
with the reactor, awaiting the connection to be established (the ACE_Connector is also
from the ACE_Event_Handler hierarchy). Once the connection is established the
reactor will then automatically call back the connector. In the synchronous case,
however, the connect() call will block until either the connection is established or a
timeout value expires. The timeout value can be specified by changing certain
ACE_Synch_Options. For details please see the reference manual.
Service Initialization Phase for the Acceptor
When an incoming request comes in on the specified address and port, the reactor
automatically calls back the ACE_Acceptor factory's handle_input() method.
This method is a "Template Method". A template method is used to define the order of
the steps of an algorithm while allowing variation in how certain steps are performed.
This variation is achieved by allowing subclasses to define the implementation of these
methods.
(For more on the Template Method see the reference on Design Patterns).
In this case the Template method defines the algorithm as
• make_svc_handler(): Creates the Service Handler.
• accept_svc_handler(): Accept the connection into the created Service Handler
from the previous step.
• activate_svc_handler(): Start the new service handler up.
Each of these methods can be re-written to provide flexibility in how these operations are
actually performed.
Thus handle_input() will first call the make_svc_handler() method, which creates the
service handler of the appropriate type (the type of the service handler is passed in by the
application programmer when the ACE_Acceptor template is instantiated, as we saw in
the examples above). In the default case the make_svc_handler() method just instantiates
the correct service handler. However, make_svc_handler() is a "bridge" method that
can be overloaded to perform more complex functionality. (A bridge is a design pattern
that decouples an interface from its implementation; see the reference on Design
Patterns.)
Service Initialization Phase for the Connector
The connect() method, which is issued by the application, is similar to the handle_input()
method in the Acceptor factory, i.e., it too is a “Template Method”.
In this case the template method connect() defines the following steps, which can be
redefined:
• make_svc_handler(): Creates the Service Handler.
• connect_svc_handler(): Establishes the connection using the Service
Handler created in the previous step.
• activate_svc_handler(): Starts the new service handler up.
Each of these methods can be re-written to provide flexibility in how these operations are
actually performed.
Thus, after the connect() call is issued by the application, the connector factory
first instantiates the correct service handler by calling make_svc_handler(), exactly as
in the case of the acceptor. The default behavior is to just instantiate the appropriate
class. This can be overloaded in exactly the same manner as was discussed for the
Acceptor, and the reasons for doing so would probably be very similar to the ones
mentioned above.
After the service handler has been created, the connect() call determines whether the
connect is to be asynchronous or synchronous. If it is asynchronous, it registers itself with
the reactor before continuing on to the next step. It then proceeds to call the
connect_svc_handler() method. By default this method calls the underlying concrete
connector’s connect() method; in the case of ACE_SOCK_Connector this would mean
issuing the BSD connect() call with the correct options for blocking or non-blocking I/O.
Once the service handler has been created, the connection has been established and the
handle has been set in the service handler, the handle_input() method of ACE_Acceptor
(or handle_output() or connect_svc_handler() in the case of ACE_Connector) will call
the activate_svc_handler() method. This method then activates the service handler, i.e.,
it starts it running. The default implementation just calls the open() method as the entry
point into the service handler. As we saw in the examples above, the open() method was
indeed the first method called when the service handler started running. It was there
that we called the activate() method to create multiple threads of control, and also where
we registered the service handler with the reactor so that it was automatically called back
when new data arrived on the connection. This method is also a “bridge” method and can
be overloaded to provide more complicated functionality; in particular, such an overload
may provide a more complicated concurrency strategy, such as running the service
handler in a different process.
As mentioned above the acceptor and connector can be easily tuned because of the bridge
methods which can be overloaded. The bridge methods allow tuning of:
• Creation Strategy for the Service Handler: By overloading the
make_svc_handler() method in either the acceptor or connector. For example, this
could mean re-using an existing service handler or using some complicated
method to obtain the service handler as was discussed above.
• Connection Strategy: The connection creation strategy can be changed by
overloading the connect_svc_handler() or accept_svc_handler() methods.
To facilitate the tuning of the acceptor and connector patterns along the lines mentioned
above ACE provides two special “tunable” acceptor and connector factories that are very
similar to ACE_Acceptor and ACE_Connector. These are ACE_Strategy_Acceptor and
ACE_Strategy_Connector. These classes make use of the “Strategy” Pattern.
The strategy pattern is used to decouple algorithmic behavior from the interface of a
class. The basic idea is to allow the underlying algorithms of a class (call it the Context
Class) to vary independently from the clients that use the class. This is done with the help
of concrete strategy classes. Concrete strategy classes encapsulate an algorithm or
method to perform an operation. These concrete strategy classes are then used by the
context class to perform operations (The context class delegates the “work” to the
concrete strategy class). Since the context class doesn’t perform any of the operations
directly it does not have to be modified when functionality is to be changed. The only
modification in the context class is a different concrete strategy class will be used to
perform the now changed operation. (To read more about the Strategy Pattern read the
appendix on Design Patterns).
In the case of ACE, the ACE_Strategy_Connector and the ACE_Strategy_Acceptor are
Strategy Pattern classes which use several concrete strategy classes to vary the algorithms
for creating service handlers, for establishing connections and for setting the concurrency
method of service handlers. As you may have guessed, the ACE_Strategy_Connector and
ACE_Strategy_Acceptor exploit the tunability provided by the bridge methods
mentioned above.
Several concrete strategy classes are already available in ACE that can be used to “tune”
the Strategy Acceptor and Connector. They are passed in as parameters to either Strategy
Acceptor or Connector when the class is instantiated. The following table shows some of
the classes that can be used to tune the Strategy Acceptor and Connector classes.
Some examples will help illustrate the use of the strategy acceptor and connector classes.
Example 8
#include "ace/Reactor.h"
#include "ace/Svc_Handler.h"
#include "ace/Acceptor.h"
#include "ace/Synch.h"
#include "ace/SOCK_Acceptor.h"
#define PORT_NUM 10101
#define DATA_SIZE 12
//forward declaration
class My_Svc_Handler;
//instantiate a strategy acceptor
typedef ACE_Strategy_Acceptor<My_Svc_Handler,ACE_SOCK_ACCEPTOR> MyAcceptor;
//instantiate a process-per-connection concurrency strategy
typedef ACE_Process_Strategy<My_Svc_Handler> Concurrency_Strategy;
class My_Svc_Handler:
public ACE_Svc_Handler<ACE_SOCK_STREAM,ACE_NULL_SYNCH>{
char data[DATA_SIZE];
public:
int open(void*){
//register with the reactor to be called back when data arrives
ACE_Reactor::instance()->register_handler(this,
ACE_Event_Handler::READ_MASK);
return 0;
}
int handle_input(ACE_HANDLE){
peer().recv_n(data,DATA_SIZE);
ACE_OS::printf("<< %s\n",data);
return 0;
}
};
int main(int argc, char* argv[]){
ACE_INET_Addr addr(PORT_NUM);
//Concurrency Strategy
Concurrency_Strategy my_con_strat;
//pass the concurrency strategy to the acceptor; the creation and
//accept strategies are left as the defaults
MyAcceptor acceptor(addr,ACE_Reactor::instance(),0,0,&my_con_strat);
while(1)
ACE_Reactor::instance()->handle_events();
}
In many applications clients connect and then reconnect to the same server several times,
each time establishing the connection, performing some work and then tearing down the
connection (such as is done in Web clients). Needless to say this is very inefficient and
expensive as connection establishment and teardown are expensive operations. A better
strategy in such a case would be for the connector to “remember” old connections and not
tear them down till it is sufficiently sure that the client will not try to re-establish a
connection again. The ACE_Cached_Connect_Strategy provides just such a caching
strategy and can be easily used with the ACE_Strategy_Connector as its connection
establishment strategy.
Example 9
#include "ace/Reactor.h"
#include "ace/Svc_Handler.h"
#include "ace/Connector.h"
#include "ace/Synch.h"
#include "ace/SOCK_Connector.h"
#include "ace/Thread_Manager.h"
#define PORT_NUM 10101
#define DATA_SIZE 12
//forward declaration
class My_Svc_Handler;
//Function prototype
static void make_connections(void *arg);
//Instantiate the strategy connector and the strategies it will use.
//(The cache map used internally by ACE_Cached_Connect_Strategy needs a
//hash function for the address type; see the discussion below.)
typedef ACE_Strategy_Connector<My_Svc_Handler,ACE_SOCK_CONNECTOR> STRATEGY_CONNECTOR;
typedef ACE_NOOP_Creation_Strategy<My_Svc_Handler> NULL_CREATION_STRATEGY;
typedef ACE_NOOP_Concurrency_Strategy<My_Svc_Handler> NULL_CONCURRENCY_STRATEGY;
typedef ACE_Cached_Connect_Strategy<My_Svc_Handler,ACE_SOCK_CONNECTOR,ACE_SYNCH_RW_MUTEX> CACHED_CONNECT_STRATEGY;
class My_Svc_Handler:
public ACE_Svc_Handler <ACE_SOCK_STREAM,ACE_MT_SYNCH>{
private:
char* data;
public:
My_Svc_Handler(){
data= new char[DATA_SIZE];
}
My_Svc_Handler(ACE_Thread_Manager* tm):
ACE_Svc_Handler<ACE_SOCK_STREAM,ACE_MT_SYNCH>(tm){
data= new char[DATA_SIZE];
}
//Called before the service handler is recycled..
int recycle (void *a=0){
ACE_DEBUG ((LM_DEBUG,
"(%P|%t) recycling Svc_Handler %d with handle %d\n",
this, this->peer ().get_handle ()));
return 0;
}
int open(void*){
ACE_DEBUG((LM_DEBUG,"(%t)Connection established \n"));
//Make this handler an active object so that svc() runs in its own thread
activate(THR_NEW_LWP);
return 0;
}
int handle_input(ACE_HANDLE){
ACE_DEBUG((LM_DEBUG,"Got input in thread: (%t) \n"));
peer().recv_n(data,DATA_SIZE);
ACE_DEBUG((LM_DEBUG,"<< %s\n",data));
return 0;
}
int svc(void){
//send a few messages and then mark connection as idle so that it can
//be recycled later
for(int i=0;i<3;i++){
ACE_DEBUG((LM_DEBUG,"(%t)>>Hello World\n"));
ACE_OS::fflush(stdout);
peer().send_n("Hello World",sizeof("Hello World"));
}
//Mark the service handler as being idle now and let the
//other threads reuse this connection
this->idle(1);
return 0;
}
};
int main(int argc, char* argv[]){
//Creation Strategy
NULL_CREATION_STRATEGY creation_strategy;
//Concurrency Strategy
NULL_CONCURRENCY_STRATEGY concurrency_strategy;
//Connection Strategy
CACHED_CONNECT_STRATEGY caching_connect_strategy;
//Configure the strategy connector with the caching connect strategy
STRATEGY_CONNECTOR connector(ACE_Reactor::instance(),
&creation_strategy,&caching_connect_strategy,&concurrency_strategy);
//Spawn a thread which repeatedly connects to the server
ACE_Thread_Manager::instance()->spawn((ACE_THR_FUNC)make_connections,
(void*)&connector,THR_NEW_LWP);
while(1)
ACE_Reactor::instance()->handle_events();
}
static void make_connections(void *arg){
STRATEGY_CONNECTOR *connector=(STRATEGY_CONNECTOR *)arg;
ACE_INET_Addr addr(PORT_NUM,"localhost");
for(int i=0;i<2;i++){
My_Svc_Handler *svc_handler = 0;
connector->connect(svc_handler,addr);
// Rest for a few seconds so that the connection has been freed up
ACE_OS::sleep (5);
}
}
In the above example the cached connect strategy is used to cache connections. Using
this strategy requires a little extra effort: a hash() method must be defined for the hash
map manager that ACE_Cached_Connect_Strategy uses internally. This hashing function
is used to hash into the internal cache map of service handlers and connections. Here it
simply uses the sum of the IP address and port number, which is probably not a very
good hash function.
The example is also a little more complicated than the ones that have been shown so far
and thus warrants a little extra discussion.
We use a no-op concurrency and creation strategy with the ACE_Strategy_Connector.
Using a no-op creation strategy is necessary: as was explained above, if this is not set to
an ACE_NOOP_Creation_Strategy, the ACE_Cached_Connect_Strategy will cause an
assertion failure. Any concurrency strategy, however, can be used together with the
ACE_Cached_Connect_Strategy. As was mentioned above, the underlying creation
strategy used by the ACE_Cached_Connect_Strategy can be set by the user, as can the
recycling strategy. This is done when instantiating caching_connect_strategy, by passing
its constructor the strategy objects for the required creation and recycling strategies. Here
we have not done so and are using both the default creation and recycling strategies.
After the connector has been set up appropriately, we use the Thread_Manager to spawn
a new thread with the make_connections() method as its entry point. This method uses
our new strategy connector to connect to a remote site. After the connection is established
this thread sleeps for five seconds and then tries to re-create the same connection using
our cached connector. On this next attempt the thread should find the connection in the
connector’s cache and reuse it.
Our service handler (My_Svc_Handler) is called back by the reactor, as usual, once the
connection has been established. The open() method of My_Svc_Handler then makes the
handler an active object by calling activate(), after which its svc() method runs.
At times, using the heavyweight ACE_Svc_Handler as the handler for acceptors and
connectors may be unwarranted and cause code bloat. In these cases the user may
substitute the lighter-weight ACE_Event_Handler as the class which is called back by the
reactor once the connection has been established. To do so, one needs to overload the
get_handle() method and include a concrete underlying stream in the event handler. In
the example below we have also written a new peer() method, which returns a reference
to the underlying stream just as the corresponding method in the ACE_Svc_Handler
class does.
Example 10
#include "ace/Reactor.h"
#include "ace/Event_Handler.h"
#include "ace/Acceptor.h"
#include "ace/Synch.h"
#include "ace/SOCK_Acceptor.h"
#define PORT_NUM 10101
#define DATA_SIZE 12
//forward declaration
class My_Event_Handler;
//instantiate the acceptor with the light-weight event handler
typedef ACE_Acceptor<My_Event_Handler,ACE_SOCK_ACCEPTOR> MyAcceptor;
class My_Event_Handler: public ACE_Event_Handler{
private:
char data[DATA_SIZE];
//the event handler needs its own concrete underlying stream
ACE_SOCK_Stream peer_;
public:
//return the handle of the underlying stream
ACE_HANDLE get_handle() const{
return this->peer_.get_handle();
}
//return a reference to the underlying stream, as
//ACE_Svc_Handler::peer() does
ACE_SOCK_Stream& peer(){
return this->peer_;
}
int
open(void*){
ACE_DEBUG((LM_DEBUG,"Connection established\n"));
//Register the event handler with the reactor
ACE_Reactor::instance()->register_handler(this,
ACE_Event_Handler::READ_MASK);
return 0;
}
int
handle_input(ACE_HANDLE){
// After using the peer() method of our ACE_Event_Handler to obtain a
//reference to the underlying stream of the service handler class we
//call recv_n() on it to read the data which has been received. This
//data is stored in the data array and then printed out
peer().recv_n(data,DATA_SIZE);
ACE_OS::printf("<< %s\n",data);
return 0;
}
//needed by the acceptor so that the handler can be closed on failure
int close(u_long=0){
return 0;
}
};
int main(int argc, char* argv[]){
ACE_INET_Addr addr(PORT_NUM);
//create the acceptor and begin accepting connections on addr
MyAcceptor acceptor(addr,ACE_Reactor::instance());
while(1)
ACE_Reactor::instance()->handle_events();
}
Message Queues
The use of Message Queues in ACE
Modern real-time applications are usually constructed as a set of communicating but
independent tasks. These tasks can communicate with each other through several
mechanisms; one that is commonly used is the message queue. The basic mode of
communication in this case is for a sender (or producer) task to enqueue a message onto a
message queue and for the receiver (or consumer) task to dequeue the message from that
queue. This, of course, is just one of the ways message queues can be used, and we will
see several different examples of message queue usage in the ensuing discussion.
The message queue in ACE has been modelled after the UNIX System V message queue
mechanisms, so for those familiar with System V it should be easy to understand their
usage and structure. ACE provides several different types of message queues, each
offering different features and performance characteristics, many of which are
specifically required in real-time systems.
Message Blocks
Messages that are enqueued onto message queues are termed message blocks in ACE.
Each message block “contains” a header and a data block. Note that the “contains” does
NOT mean that the data block is inside the message block but rather that the message
block holds a pointer to a data block which it only logically contains. The data block in
turn holds a pointer to an actual data buffer. This allows flexible sharing of data between
multiple message blocks. This is illustrated in the figure below. Note that now the two
message blocks may be enqueued onto different queues but both use the same data block
and thus data. No overhead due to data copying will thus be incurred.
In ACE the message block is implemented as ACE_Message_Block and the data block as
ACE_Data_Block. ACE_Message_Block contains several different constructors which
furnish different means to create message blocks efficiently.
The easiest way to create a message block is to pass it the data buffer which it is to store
(remember that the ACE_Message_Block does this indirectly through an
ACE_Data_Block) and let the constructor of the message block be responsible for
creating the underlying data block and setting up the corresponding pointers. This can be
done as:
char data[]="This is my data";
ACE_Message_Block *mb = new ACE_Message_Block(data,sizeof(data));
Note that when the message block mb is destroyed the associated data buffer data will
NOT be destroyed (i.e., this memory will not be deallocated). This only makes sense
since the memory was not allocated by the message block in the first place so it should
not be responsible for its deallocation.
A more powerful, and thus more useful, constructor is also available. Among other
things, it allows any of the ACE memory allocators to be used to efficiently manage the
memory associated with the data buffer. The constructor is:
ACE_Message_Block (size_t size,
ACE_Message_Type type = MB_DATA,
ACE_Message_Block *cont = 0,
const char *data = 0,
ACE_Allocator *allocator_strategy = 0,
ACE_Lock *locking_strategy = 0,
u_long priority = 0,
const ACE_Time_Value & execution_time = ACE_Time_Value::zero,
const ACE_Time_Value & deadline_time = ACE_Time_Value::max_time);
The above constructor is called with the parameters:
1. The size of the data buffer that is to be associated with the message block. Note
that the size of the message block will be size, but the length will be 0 until the
wr_ptr is set. This will be explained further later.
2. The type of the message. (There are several types available in the
ACE_Message_Type enumeration including data messages (which is the default)).
As mentioned earlier ACE has several different types of message queues, which in
general can be divided into two categories, static and dynamic. The static queue is a
general purpose message queue named ACE_Message_Queue (as if you couldn’t guess)
whereas the dynamic message queues (ACE_Dynamic_Message_Queue) are real-time
message queues. The major difference between these two types of queues is that
messages on static queues have static priority i.e., once the priority is set it does not
change. On the other hand in the dynamic message queues the priority of messages may
change dynamically based on parameters such as execution time and deadline.
The following example illustrates how to create a simple static message queue and then
how to enqueue and dequeue message blocks onto it.
Example 1
#include "ace/Message_Queue.h"
#include "ace/Get_Opt.h"
#define SIZE_BLOCK 1
#define NO_MSGS 10
class QTest{
public:
QTest():no_msgs_(NO_MSGS){
//First create a message queue of default size.
if(!(this->mq_=new ACE_Message_Queue<ACE_NULL_SYNCH> ()))
ACE_DEBUG((LM_ERROR,"Error in message queue initialization \n"));
}
int start_test(){
for(int i=0; i<no_msgs_;i++){
//create a new message block of size 1
ACE_Message_Block *mb= new ACE_Message_Block(SIZE_BLOCK);
//insert data into the message block using the wr_ptr and be
//careful to advance the write pointer
*mb->wr_ptr()=i;
mb->wr_ptr(1);
//enqueue the message block onto the message queue
if(this->mq_->enqueue_prio(mb)==-1){
ACE_DEBUG((LM_ERROR,"\nCould not enqueue on to mq!!\n"));
return -1;
}
ACE_DEBUG((LM_INFO,"EQ'd data: %d\n",*mb->rd_ptr()));
}
//now dequeue all the messages
this->dequeue_all();
return 0;
}
void dequeue_all(){
ACE_DEBUG((LM_INFO,"\n\nBeginning DQ \n"));
ACE_DEBUG((LM_INFO,"No. of Messages on Q:%d Bytes on Q:%d \n"
,mq_->message_count(),mq_->message_bytes()));
ACE_Message_Block *mb;
//dequeue the head of the message queue until no more messages are
//left
for(int i=0;i<no_msgs_;i++){
mq_->dequeue_head(mb);
ACE_DEBUG((LM_INFO,"DQ'd data %d\n",*mb->rd_ptr()));
}
}
private:
ACE_Message_Queue<ACE_NULL_SYNCH> *mq_;
int no_msgs_;
};
int main(int argc, char* argv[]){
QTest test;
if(test.start_test()<0)
ACE_DEBUG((LM_ERROR,"Program failure \n"));
}
Water Marks
Water marks are used in message queues to indicate when the message queue has too
much data on it (the queue has reached its high water mark) or when it has an insufficient
amount of data on it (the queue has reached its low water mark). Both marks are used for
flow control: for example, the low water mark may be used to avoid situations like the
“silly window syndrome” in TCP, while the high water mark may be used to “stem”, or
slow down, a sender or producer of data.
The message queues in ACE achieve this functionality by maintaining a count of the total
amount of data, in bytes, that has been enqueued. Whenever a new message block is to be
enqueued, the queue first determines the block’s length and then checks whether it can
enqueue it (i.e., it makes sure that the queue would not exceed its high water mark if the
new message block were enqueued). If the message queue cannot enqueue the data and it
possesses a lock (i.e., ACE_SYNCH and not ACE_NULL_SYNCH was used as the
template parameter to the message queue), it will block the caller until sufficient room is
available or until the timeout passed to the enqueue method expires. If the timeout
expires, or if the queue possesses a null lock, the enqueue method returns -1, indicating
that it was unable to enqueue the message.
Similarly, when the dequeue_head() method of ACE_Message_Queue is called, it checks
that the amount of data left after dequeuing would be more than the low water mark. If
this is not the case, it blocks if the queue has a lock; otherwise it returns -1, indicating
failure (the same way the enqueue methods work).
There are two methods which can be used to set and get the water marks: the overloaded
high_water_mark() and low_water_mark() accessors.
As is common with other container classes, forward and reverse iterators are available for
message queues in ACE. These iterators are named ACE_Message_Queue_Iterator and
ACE_Message_Queue_Reverse_Iterator. Each of these require a template parameter
which is used for synchronization while traversing the message queue. If multiple threads
are using the message queue then this should be set to ACE_SYNCH otherwise it may be
set to ACE_NULL_SYNCH. When an iterator object is created its constructor must be
passed a reference to the message queue we wish it to iterate over.
The following example illustrates the water marks and the iterators
Example 2
#include "ace/Message_Queue.h"
#include "ace/Get_Opt.h"
#include "ace/Malloc_T.h"
#define SIZE_BLOCK 1
class Args{
public:
Args(int argc, char*argv[],int& no_msgs,
ACE_Message_Queue<ACE_NULL_SYNCH>* &mq){
ACE_Get_Opt get_opts(argc,argv,"h:l:t:n:xsd");
while((opt=get_opts())!=-1)
switch(opt){
case 'n':
//set the number of messages we wish to enqueue and dequeue
no_msgs=ACE_OS::atoi(get_opts.optarg);
ACE_DEBUG((LM_INFO,"Number of Messages %d \n",no_msgs));
break;
case 'h':
//set the high water mark
hwm=ACE_OS::atoi(get_opts.optarg);
mq->high_water_mark(hwm);
ACE_DEBUG((LM_INFO,"High Water Mark %d msgs \n",hwm));
break;
case 'l':
//set the low water mark
lwm=ACE_OS::atoi(get_opts.optarg);
mq->low_water_mark(lwm);
ACE_DEBUG((LM_INFO,"Low Water Mark %d msgs \n",lwm));
break;
}
}
private:
int opt;
int hwm;
int lwm;
};
class QTest{
public:
QTest(int argc, char*argv[]){
//First create a message queue of default size.
if(!(this->mq_=new ACE_Message_Queue<ACE_NULL_SYNCH> ()))
ACE_DEBUG((LM_ERROR,"Error in message queue initialization \n"));
//Use the arguments to set the water marks and the no of messages
args_ = new Args(argc,argv,no_msgs_,mq_);
}
int start_test(){
for(int i=0; i<no_msgs_;i++){
//Create a new message block of data buffer size 1
ACE_Message_Block *mb= new ACE_Message_Block(SIZE_BLOCK);
//insert data, advance the write pointer and enqueue
*mb->wr_ptr()=i;
mb->wr_ptr(1);
if(this->mq_->enqueue_prio(mb)==-1){
ACE_DEBUG((LM_ERROR,"\nCould not enqueue on to mq!!\n"));
return -1;
}
}
//read the messages with an iterator, then dequeue them
this->read_all();
this->dequeue_all();
return 0;
}
void read_all(){
ACE_DEBUG((LM_INFO,"No. of Messages on Q:%d Bytes on Q:%d \n"
,mq_->message_count(),mq_->message_bytes()));
ACE_Message_Block *mb;
//use a forward iterator to traverse the queue without removing
//the messages from it
ACE_Message_Queue_Iterator<ACE_NULL_SYNCH> mq_iter(*mq_);
while(mq_iter.next(mb)){
ACE_DEBUG((LM_INFO,"Read data %d\n",*mb->rd_ptr()));
mq_iter.advance();
}
}
void dequeue_all(){
ACE_DEBUG((LM_INFO,"\n\nBeginning DQ \n"));
ACE_DEBUG((LM_INFO,"No. of Messages on Q:%d Bytes on Q:%d \n",
mq_->message_count(),mq_->message_bytes()));
ACE_Message_Block *mb;
for(int i=0;i<no_msgs_;i++){
mq_->dequeue_head(mb);
ACE_DEBUG((LM_INFO,"DQ'd data %d\n",*mb->rd_ptr()));
}
}
private:
Args *args_;
ACE_Message_Queue<ACE_NULL_SYNCH> *mq_;
int no_msgs_;
};
int main(int argc, char* argv[]){
QTest test(argc,argv);
if(test.start_test()<0)
ACE_DEBUG((LM_ERROR,"Program failure \n"));
}
This example uses the ACE_Get_Opt class (see the Appendix for more on this utility
class) to obtain the low and high water marks (in the Args class). The low and high water
marks are set using the low_water_mark() and high_water_mark() accessor functions.
Besides setting the water marks, the example uses a forward iterator to read all the
messages on the queue before they are dequeued.
Dynamic Message Queues
As was mentioned above, dynamic message queues are queues in which the priority of
the enqueued messages changes with time. Such message queues are thus inherently more
useful in real-time applications, where this kind of behavior is desirable.
ACE currently provides two types of dynamic message queues, deadline based and
laxity based (see [IV]). The deadline based message queues use the deadline of each
message to set its priority: when dequeue_head() is called, the message block with the
earliest deadline is dequeued first, following the earliest-deadline-first algorithm. The
laxity based message queues instead use the execution time and the deadline together to
calculate a laxity value, which is then used to prioritize each message block. Laxity is
useful because, when scheduling by deadline alone, a task may be scheduled that has the
earliest deadline but such a long execution time that it cannot complete even if scheduled
immediately; this negatively affects other tasks, as it may block out tasks that are
schedulable. The laxity takes this long execution time into account and ensures that a
task which cannot complete is not scheduled. Scheduling in laxity queues is based on the
minimum-laxity-first algorithm.
Both laxity based and deadline based message queues are implemented as
ACE_Dynamic_Message_Queues. ACE uses the Strategy pattern to provide dynamic
queues with different scheduling characteristics: each message queue uses a different
“strategy” object to dynamically set the priorities of the messages on the queue. These
strategy objects each encapsulate a different algorithm for calculating priorities based on
execution time, deadlines, etc., and are called to do so whenever messages are enqueued
on or removed from the message queue. (For more on the Strategy pattern please see the
reference “Design Patterns”.) The message strategy classes derive from
ACE_Dynamic_Message_Strategy, and currently two strategies are available:
ACE_Laxity_Message_Strategy and ACE_Deadline_Message_Strategy. Therefore, to
create a “laxity based” dynamic message queue, an ACE_Laxity_Message_Strategy
object must be created first. Subsequently, an ACE_Dynamic_Message_Queue object
should be instantiated, and passed the new strategy object as one of the parameters to its
constructor.
To make creating the different queues easier, ACE also provides a message queue
factory, ACE_Message_Queue_Factory, with the following static factory methods:
static ACE_Message_Queue<ACE_SYNCH_USE> *
create_static_message_queue ();
static ACE_Dynamic_Message_Queue<ACE_SYNCH_USE> *
create_deadline_message_queue ();
static ACE_Dynamic_Message_Queue<ACE_SYNCH_USE> *
create_laxity_message_queue ();
Each of these methods returns a pointer to the message queue it has just created. Notice
that all methods are static and that the create_static_message_queue() method returns an
ACE_Message_Queue whereas the other two methods return an
ACE_Dynamic_Message_Queue.
This simple example illustrates the creation and use of dynamic and static message
queues.
Example 3
#include "ace/Message_Queue.h"
#include "ace/Get_Opt.h"
#include "ace/OS.h"
class Args{
public:
Args(int argc, char*argv[],int& no_msgs, int&
time,ACE_Message_Queue<ACE_NULL_SYNCH>* &mq){
ACE_Get_Opt get_opts(argc,argv,"h:l:t:n:xsd");
while((opt=get_opts())!=-1)
switch(opt){
case 'n': no_msgs=ACE_OS::atoi(get_opts.optarg); break;
case 't': time=ACE_OS::atoi(get_opts.optarg); break;
//use the message queue factory to create the requested queue
case 's':
mq=ACE_Message_Queue_Factory<ACE_NULL_SYNCH>::create_static_message_queue();
break;
case 'd':
mq=ACE_Message_Queue_Factory<ACE_NULL_SYNCH>::create_deadline_message_queue();
break;
case 'x':
mq=ACE_Message_Queue_Factory<ACE_NULL_SYNCH>::create_laxity_message_queue();
break;
}
}
private:
int opt;
int hwm;
int lwm;
};
class QTest{
public:
QTest(int argc, char*argv[]){
args_ = new Args(argc,argv,no_msgs_,time_,mq_);
array_ =new ACE_Message_Block*[no_msgs_];
}
int start_test(){
for(int i=0; i<no_msgs_;i++){
//create a message block, stamp it and enqueue it
array_[i]=new ACE_Message_Block(1);
set_deadline(i);
set_execution_time(i);
if(mq_->enqueue_prio(array_[i])==-1){
ACE_DEBUG((LM_ERROR,"\nCould not enqueue on to mq!!\n"));
return -1;
}
}
this->dequeue_all();
return 0;
}
//set an absolute deadline so that the first message created has the
//latest deadline
void set_deadline(int msg_no){
ACE_Time_Value offset(time_*(no_msgs_-msg_no));
array_[msg_no]->msg_deadline_time(ACE_OS::gettimeofday()+offset);
}
//set an absolute execution time
void set_execution_time(int msg_no){
ACE_Time_Value offset(time_*msg_no);
array_[msg_no]->msg_execution_time(ACE_OS::gettimeofday()+offset);
}
void dequeue_all(){
ACE_DEBUG((LM_INFO,"Beginning DQ \n"));
ACE_DEBUG((LM_INFO,"No. of Messages on Q:%d Bytes on Q:%d \n",
mq_->message_count(),mq_->message_bytes()));
for(int i=0;i<no_msgs_ ;i++){
ACE_Message_Block *mb;
if(mq_->dequeue_head(mb)==-1){
ACE_DEBUG((LM_ERROR,"\nCould not dequeue from mq!!\n"));
return;
}
}
}
private:
Args *args_;
ACE_Message_Queue<ACE_NULL_SYNCH> *mq_;
ACE_Message_Block **array_;
int no_msgs_;
int time_;
};
The above example is very similar to the previous examples but adds the dynamic
message queues into the picture. In the Args class we have added options to create all
the different types of message queues using the ACE_Message_Queue_Factory.
Furthermore, two new methods have been added to the QTest class, set_deadline() and
set_execution_time(), to set the deadlines and execution times of each of the message
blocks as they are created. These methods use the ACE_Message_Block methods
msg_deadline_time() and msg_execution_time(). Note that these methods take the
absolute and NOT the relative time, which is why they are used in conjunction with the
ACE_OS::gettimeofday() method.
The deadlines and execution times are set with the help of a time parameter. The
deadline is set such that the first message has the latest deadline and so, with a deadline
message queue, should be scheduled last. With the laxity queues, however, both the
execution time and the deadline are taken into account.
Appendix: Utility Classes
Utility classes used in ACE
ACE_INET_Addr
The ACE_INET_Addr class is a wrapper around the Internet domain address family
(AF_INET) and derives from ACE_Addr in ACE. The various constructors of this class
can be used to initialize the object to a certain IP address and port. Besides this, the class
has several set and get methods, and the comparison operators == and != have been
overloaded. For further details on how to use this class see the reference manual.
ACE_UNIX_Addr
The ACE_UNIX_Addr class is a wrapper class around the Unix domain address family
(AF_UNIX) and also derives from ACE_Addr. This class has functionality similar to the
ACE_INET_Addr class. For further details see the reference manual.
ACE_Time_Value
The ACE_Time_Value class is a wrapper around the underlying OS time structures
(such as timeval) and provides portable methods and overloaded arithmetic and
comparison operators for manipulating time values. For further details see the reference
manual.
ACE_DEBUG and ACE_ERROR
The ACE_DEBUG and ACE_ERROR macros are useful macros for printing and logging
debug and error information. Their usage has been illustrated throughout this tutorial.
ACE_Get_Opt
This utility class is used to obtain arguments from the user and is based on the getopt()
function in stdlib. The constructor of this class is passed a string, called the optstring,
which specifies the switches the application wishes to respond to. If a switch letter is
followed by a colon, the switch also expects an argument. For example, if the optstring is
“ab:c”, then the application expects “-a” and “-c” without an argument and “-b” with an
argument; such an application could be run as:
MyApplication -a -b 10 -c
The () operator has been overloaded and is used to scan the elements of argv for the
options specified in the option string.
The following example will help make it clear how to use this class to obtain arguments
from the user.
Example
#include "ace/Get_Opt.h"
int main (int argc, char *argv[])
{
//Specify option string so that switches b, d, f and h all expect
//arguments. Switches a, c, e and g expect no arguments.
ACE_Get_Opt get_opt (argc, argv, "ab:cd:ef:gh:");
int c;
//Process the scanned options with the help of the overloaded ()
//operator
while ((c = get_opt ()) != EOF)
  switch (c)
  {
  case 'a': case 'c': case 'e': case 'g':
    ACE_DEBUG ((LM_DEBUG, "Switch %c\n", c));
    break;
  case 'b': case 'd': case 'f': case 'h':
    ACE_DEBUG ((LM_DEBUG, "Switch %c with argument %s\n",
                c, get_opt.optarg));
    break;
  }
//optind indicates how much of argv has been scanned so far, while
//get_opt hasn't returned EOF. In this case it indicates the index in
//argv from where the option switches have been fully recognized and the
//remaining elements must be scanned by the caller himself.
for (int i = get_opt.optind; i < argc; i++)
  ACE_DEBUG ((LM_DEBUG, "optind = %d, argv[optind] = %s\n",
              i, argv[i]));
return 0;
}
For further details on using this utility wrapper class please see the reference manual.
ACE_Arg_Shifter
This ADT shifts known arguments, or options, to the back of the argv vector, so that
deeper levels of argument parsing can locate the yet-unprocessed arguments at the
beginning of the vector.
The ACE_Arg_Shifter copies the pointers of the argv vector into a temporary array. As
the ACE_Arg_Shifter iterates over this temporary array, it places known arguments at the
rear of argv and unknown ones at the beginning. So, after having visited all the
arguments in the temporary vector, ACE_Arg_Shifter has placed all the unknown
arguments, in their original order, at the front of argv.
This class is also very useful in parsing options from the command line. The following
example will help illustrate this:
Example
#include "ace/Arg_Shifter.h"
int main(int argc, char *argv[]){
ACE_Arg_Shifter arg(argc,argv);
while(arg.is_anything_left ()){
char *current_arg=arg.get_current();
if(ACE_OS::strcmp(current_arg,"-region")==0){
arg.consume_arg();
ACE_OS::printf("<region>= %s \n",arg.get_current());
}
else if(ACE_OS::strcmp(current_arg,"-tag")==0){
arg.consume_arg();
ACE_OS::printf("<tag>= %s \n",arg.get_current());
}
else if(ACE_OS::strcmp(current_arg,"-view_uuid")==0){
arg.consume_arg();
ACE_OS::printf("<view_uuid>= %s \n",arg.get_current());
}
arg.consume_arg();
}
//print the resulting, re-ordered argument vector
for(int i=0;i<argc;i++)
ACE_OS::printf("Resultant Vector: %s\n",argv[i]);
}
Running the example produces output such as:
<region>= missouri
<tag>= 10
<view_uuid>= syyid
Resultant Vector: ./arg
Resultant Vector: -region
Resultant Vector: missouri
Resultant Vector: -tag
Resultant Vector: 10
Resultant Vector: -view_uuid
Resultant Vector: syyid
Resultant Vector: -teacher
Resultant Vector: schmidt
Resultant Vector: -student
Resultant Vector: tim
Reference
References and Bibliography
This is a list of the references that have been mentioned in the text.