Object Interconnections
Comparing Alternative Programming Techniques for Multi-threaded CORBA Servers: Thread Pool
(Column 6)
incoming requests. This strategy amortizes the cost of thread creation and bounds the use of OS resources. Client requests can execute concurrently until the number of simultaneous requests exceeds the number of threads in the pool. At this point, additional requests must be queued (or rejected) until a thread becomes available.

Figure 1 illustrates the main components in this concurrency model. These components include a main thread, a request queue, and a set of pool threads. The main thread receives new requests and inserts them into the tail of the request queue, while the pool threads remove requests from the head of the queue and service them. We'll explore the implementation and use of these components in this column using C, C++ wrappers, and CORBA, respectively.

3 The Multi-threaded C Thread Pool Solution

The server's main function (Section 3.1.1) creates the passive-mode listener endpoint, initializes the thread-safe queue and the thread pool, and then enters the main thread's event loop:

  /* ... */
  HANDLE listener = create_server_endpoint (port);
  Handle_Queue handle_queue;

  /* Initialize the thread-safe message queue. */
  handle_queue_init (&handle_queue);

  /* Initialize the thread pool. */
  thread_pool_init (&handle_queue, pool_size);

  /* The event loop for the main thread. */
  svc_run (&handle_queue, listener);
  /* NOTREACHED */
}

The svc_run function runs the main thread's event loop, as follows:

void svc_run (Handle_Queue *handle_queue,
              HANDLE listener)
{
  /* Main event loop. */
  for (;;) {
    /* Wait to accept a new connection. */
    HANDLE handle = accept (listener, 0, 0);

    /* Insert the new connection into the queue;
       this call blocks if the queue is full. */
    handle_queue_insert (handle_queue, handle);
  }
  /* NOTREACHED */
}
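The thread_pool_init function simply pre-spawns the pool threads; its body isn't listed above. A minimal sketch, assuming the same Solaris thr_create interface used elsewhere in this column, might look like this:

void thread_pool_init (Handle_Queue *handle_queue,
                       int pool_size)
{
  int i;

  /* Spawn pool_size detached threads, each running the
     pool_thread event loop (Section 3.1.2) with the
     shared queue as its argument. */
  for (i = 0; i < pool_size; i++)
    thr_create (0,                     /* Default stack. */
                0,                     /* Default stack size. */
                &pool_thread,          /* Entry point. */
                (void *) handle_queue, /* Entry point arg. */
                THR_DETACHED | THR_NEW_LWP, /* Flags. */
                0);                    /* No thread id needed. */
}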
3.1.2 The pool_thread() Function

Each newly created thread executes the following event loop in the pool_thread function:

void *pool_thread (void *arg)
{
  Handle_Queue *handle_queue =
    (Handle_Queue *) arg;

  /* The event loop for each
     thread in the thread pool. */
  for (;;) {
    HANDLE handle;

    /* Get next available HANDLE. */
    handle_queue_remove (handle_queue, &handle);

    /* Return stock quote to client. */
    handle_quote (handle);

    /* Close handle to prevent leaks. */
    close (handle);
  }
  /* NOTREACHED */
  return 0;
}

When a pool thread becomes available, it will dequeue the next handle (corresponding to a client request), use it to look up the value of the stock quote, and return the quote to the client.

The Handle_Queue data structure is managed by the following C functions. The handle_queue_init function initializes the internal queue state:

void handle_queue_init (Handle_Queue *handle_queue,
                        u_int max)
{
  handle_queue->max_count_ = max;
  handle_queue->count_ = 0;
  handle_queue->head_ = handle_queue->tail_ = 0;

  /* Initialize synchronization variables that
     are local to a single process. */
  mutex_init (&handle_queue->lock_, USYNC_THREAD, 0);
  cond_init (&handle_queue->notempty_, USYNC_THREAD, 0);
  cond_init (&handle_queue->notfull_, USYNC_THREAD, 0);
}

Three synchronization variables are used to implement the thread-safe Handle_Queue: two condition variables (cond_t notempty_ and notfull_) and one mutex (mutex_t lock_). The condition variables enable threads to insert and remove HANDLEs to and from the queue concurrently. The mutex lock_ is used by the condition variables to serialize access to the internal state of the queue, as shown in the handle_queue_insert function below:

void
handle_queue_insert (Handle_Queue *handle_queue,
                     HANDLE handle)
{
  /* Ensure mutual exclusion for queue state. */
  mutex_lock (&handle_queue->lock_);

  /* Wait while the queue is full. */
  while (handle_queue->count_ == handle_queue->max_count_)
    cond_wait (&handle_queue->notfull_,
               &handle_queue->lock_);

  /* Insert the HANDLE at the tail of the queue. */
  /* ... */
  handle_queue->count_++;

  /* Inform waiting pool threads that the
     queue is no longer empty. */
  cond_signal (&handle_queue->notempty_);
  mutex_unlock (&handle_queue->lock_);
}
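The complementary handle_queue_remove function isn't listed in this column. A minimal sketch, assuming the same bounded-buffer protocol and matching the description in the next paragraph, would be:

void
handle_queue_remove (Handle_Queue *handle_queue,
                     HANDLE *handle)
{
  mutex_lock (&handle_queue->lock_);

  /* Wait until the queue is no longer empty. */
  while (handle_queue->count_ == 0)
    cond_wait (&handle_queue->notempty_,
               &handle_queue->lock_);

  /* Remove the HANDLE at the head of the queue. */
  /* ... */
  handle_queue->count_--;

  /* Inform the main thread that there's
     more room in the queue. */
  cond_signal (&handle_queue->notfull_);
  mutex_unlock (&handle_queue->lock_);
}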
The handle_queue_remove function is called by all the pool threads. This function removes the next available HANDLE from the queue, blocking if necessary until the queue is no longer empty. After it removes the next HANDLE it signals the notfull condition to inform the main event loop thread that there's more room in the queue.

Note that another way to implement the thread pool concurrency model is to have every thread in the pool block directly in accept on the listener endpoint, so that each pool thread accepts a connection, services the request, and then closes the connection itself, along the following lines:

void *pool_thread (void *listener)
{
  for (;;) {
    /* Wait to accept a new connection. */
    HANDLE h = accept ((HANDLE) listener, 0, 0);

    /* Return stock quote to client. */
    handle_quote (h);

    /* Close handle to prevent leaks. */
    close (h);
  }
  /* NOTREACHED */
}

The main program is similar to the one shown in Section 3.1.1, as shown below:

int main (int argc, char *argv[])
{
  /* ... */

  /* Create a passive-mode listener endpoint. */
  listener = create_server_endpoint (port);

  /* Initialize the thread pool. */
  for (i = 0; i < pool_size; i++) {
    /* Spawn off the thread pool. */
    thr_create
      (0,                 /* Use default thread stack. */
       0,                 /* Use default thread stack size. */
       &pool_thread,      /* Entry point. */
       (void *) listener, /* Entry point arg. */
       THR_DETACHED | THR_NEW_LWP, /* Flags. */
       0);                /* Don't bother returning thread id. */
  }

  /* Block waiting for a notification to
     close down the server. */
  /* ... */

  /* Unblock the threads by closing
     down the listener. */
  close (listener);
}

One problem with this approach is that some platforms implement accept as a library routine wrapped around a lower-level system call, so that calls to accept are not atomic. If accept is not atomic then it's possible for threads to receive EPROTO errors from accept, which means "protocol error" [4]. One solution to this problem is to explicitly add mutexes around the accept call, but this locking can itself become a bottleneck.

Caching open connections: Our alternative thread pool solution forces each thread to allocate a new connection since threads are always blocked in accept. As shown below, this may be inefficient in some situations.

Therefore, we'll continue to use the thread-safe message queue example throughout the remainder of this paper. Be aware, however, that there are other ways to implement the thread pool concurrency model. Some of these approaches may be better suited for your requirements in certain circumstances.

3.2 Evaluating the C Thread Pool Solution

Depending on the degree of host parallelism and client application behavior, the new thread pool solution can improve the performance of the original thread-per-request approach. In particular, it will bound the amount of thread resources used by the server. There are still a number of drawbacks, however:

Too much infrastructure upheaval: The implementation of the thread pool concurrency model shown above is an extension of the thread-per-request server from our previous column. We were able to reuse the core stock quote routines (such as recv_request, send_response, and handle_quote). However, the surrounding software architecture required many changes. Some changes were relatively minor (such as pre-spawning a thread pool rather than a thread-per-request). Other changes required significant work (such as implementing the thread-safe Handle_Queue).

Lack of flexibility and reuse: Despite all the effort spent building the thread pool server, components like the Handle_Queue remain hard-coded for this particular application, which makes them difficult to reuse in other servers.

High connection management overhead: All the thread pool and thread-per-request server implementations we've examined thus far have set up and torn down a connection for each client request. This approach works fine if clients only request a single stock quote at a time from any given server. When clients make a series of requests to the same server, however, the connection management overhead can become a bottleneck.

One way to fix this problem is to keep each connection open until the client explicitly closes it down. However, extending the C solution to implement this connection caching strategy is subtle and error-prone. Several obvious solutions will cause race conditions between the main thread and the pool threads. For example, the select event demultiplexing call can be added to the original svc_run event loop, as follows:

// Global variable shared by the svc_run()
// and pool_thread() functions.
static fd_set read_hs;

void svc_run (Handle_Queue *handle_queue,
              HANDLE listener)
{
  HANDLE maxhp1 = listener + 1;
  fd_set temp_hs;

  /* fd_sets maintain a set of HANDLEs that
     select() uses to wait for events. */
  FD_ZERO (&read_hs);
  FD_ZERO (&temp_hs);
  FD_SET (listener, &read_hs);

  /* Main event loop. */
  for (;;) {
    HANDLE handle;

    /* Demultiplex connection and data events. */
    select (maxhp1, &temp_hs, 0, 0, 0);

    /* Check for stock quote requests and
       insert the handle in the queue. */
    for (handle = listener + 1;
         handle < maxhp1;
         handle++)
      if (FD_ISSET (handle, &temp_hs))
        handle_queue_insert (handle_queue, handle);

    /* Check for a new connection. */
    if (FD_ISSET (listener, &temp_hs)) {
      handle = accept (listener, 0, 0);
      FD_SET (handle, &read_hs);
      if (maxhp1 <= handle)
        maxhp1 = handle + 1;
    }

    temp_hs = read_hs;
  }
  /* NOTREACHED */
}

In addition, the pool_thread function would have to change (to emphasize the differences we've prefixed the changes with /* !!!):

void *pool_thread (void *arg)
{
  Handle_Queue *handle_queue =
    (Handle_Queue *) arg;

  /* The event loop for each
     thread in the thread pool. */
  for (;;) {
    HANDLE handle;

    /* Get next available HANDLE. */
    handle_queue_remove (handle_queue, &handle);

    /* !!! Return stock quote to client. A
       return of 0 means the client shut down. */
    if (handle_quote (handle) == 0) {
      /* !!! Clear the bit in read_hs (i.e., the
         fd_set) so the main event loop will ignore
         this handle until it's reconnected. */
      FD_CLR (handle, &read_hs);

      /* Close handle to prevent leaks. */
      close (handle);
    }
  }
  /* NOTREACHED */
  return 0;
}

Unfortunately, this code contains several subtle race conditions. For instance, more than one thread can access the fd_set global variable read_hs concurrently, which can confuse the svc_run function's demultiplexing strategy. Likewise, the main thread can insert the same HANDLE into the Handle_Queue multiple times. Therefore, multiple pool threads can read from the same HANDLE simultaneously, potentially causing inconsistent results.

Alleviating these problems will force us to rewrite portions of the server by adding new locks and modifying the existing handle_quote code. Rather than spending any more effort revising the C version, we'll incorporate these changes into the C++ solution in the next section.

4 The Multi-threaded C++ Wrappers Thread Pool Solution

4.1 C++ Wrapper Code

This section illustrates a C++ thread pool implementation based on ACE [5]. The C++ solution is structured using the following four classes (shown in Figure 2):
[Figure 2: The C++ thread pool quote server. A Quote Acceptor creates a Quote Handler for each new client; Quote Handlers HANDLE INPUT and ENQUEUE REQUESTs on the Request Queue; pool threads DEQUEUE & PROCESS REQUESTs; a Reactor drives the event dispatching.]

The request queue is shared by every thread in the server; since there is only one of these, we'll define it using the Singleton pattern [7]. Doing this is easy using the following components provided by STL and ACE:

// Forward declaration.
template <class PEER_STREAM>
class Quote_Handler;

// Use the STL pair component to create a
// tuple of objects to represent a client request.
typedef pair<Quote_Handler<SOCK_Stream> *,
             Quote_Request *>
        Quote_Tuple;

// An ACE thread-safe queue of Quote_Tuples.
typedef Message_Queue<Quote_Tuple> Quote_Queue;
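To give every thread access to one shared queue instance, the queue is wrapped in an ACE Singleton. The exact typedef isn't shown above, so the following definition is an assumption, chosen to match the Request_Queue::instance() calls used below:

// Process-wide access point for the one thread-safe
// queue of Quote_Tuples (assumed definition).
typedef Singleton<Quote_Queue> Request_Queue;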
Using the ACE Singleton wrapper in conjunction with the ACE Message_Queue and STL pair, the thread pool server can insert and remove Quote_Tuple objects as follows:

Quote_Tuple qt (quote_handler, quote_request);
// ...
Request_Queue::instance ()->insert (qt);
// ...
Request_Queue::instance ()->remove (qt);

The first time that insert or remove is called, the Singleton::instance method dynamically allocates and initializes the thread-safe Request_Queue. The Singleton pattern also minimizes the need for global objects, which is important in C++ since the order of initialization of global objects in C++ programs is not well-defined. Therefore, we'll use the same approach for the Quote_Database and the Reactor:

// Singleton for looking up quote values.
typedef Singleton<Quote_Database> QUOTE_DB;

// Singleton event demultiplexing and dispatching.
typedef Singleton<Reactor> REACTOR;

Each thread in the pool executes the static Quote_Handler::pool_thread method (spawned from main in Section 4.1.4), whose event loop dequeues and processes request tuples:

static void *pool_thread (void *)
{
  for (;;) {
    Quote_Tuple qt;

    // Get next request from queue. This
    // call blocks if queue is empty.
    Request_Queue::instance ()->remove (qt);

    // typeid (qt.first) == Quote_Handler *
    // typeid (qt.second) == Quote_Request *
    if (qt.first->handle_quote (qt.second) == 0)
      // Client shut down, so close down too.
      qt.first->close ();

    delete qt.second;
  }
  /* NOTREACHED */
  return 0;
}

The Quote_Handler's handle_quote and close methods are defined as follows:

// !!! Complete the processing of a request.
int handle_quote (Quote_Request *req) {
  int value;
  {
    // Constructor of m acquires lock.
    Read_Guard<RW_Mutex> m (lock_);

    // Lookup stock price via Singleton.
    value = QUOTE_DB::instance ()->
      lookup_stock_price (*req);

    // Destructor of m releases lock.
  }
  return send_response (value);
}

// Close down the handler and release resources.
void close (void) {
  // Close down the connection.
  this->peer_.close ();

  // Reference counting omitted...
}

4.1.3 The Quote_Acceptor Class
The Quote_Acceptor class is an implementation of the Acceptor pattern [9] that creates Quote_Handlers to process quote requests from clients. Its implementation is similar to the one shown in our previous column:

typedef Acceptor <Quote_Handler <SOCK_Stream>, // Quote service.
                  SOCK_Acceptor>               // Passive conn. mech.
        Quote_Acceptor;

The Quote_Acceptor's strategy for initializing a Quote_Handler is driven by upcalls from the Reactor. Whenever a new client connects with the server, the Quote_Acceptor's handle_input method dynamically creates a Quote_Handler, accepts the connection into the handler, and automatically calls the Quote_Handler::open method. In the thread pool implementation, this open method registers itself with the Reactor, as we showed in Section 4.1.2 above.

4.1.4 The main() Server Function

The server main is responsible for creating a thread pool and the Quote_Acceptor, as follows:

// !!! Default constants.
const int DEFAULT_PORT = 12345;
const int DEFAULT_POOL_SIZE = 4;

int main (int argc, char *argv[])
{
  u_short port =
    argc > 1 ? atoi (argv[1]) : DEFAULT_PORT;
  int pool_size = // !!! Size of the thread pool.
    argc > 2 ? atoi (argv[2]) : DEFAULT_POOL_SIZE;

  // !!! Create a pool of threads to
  // handle quote requests from clients.
  Thread::spawn_n
    (pool_size,
     Quote_Handler<SOCK_Stream>::pool_thread,
     (void *) 0,
     THR_DETACHED | THR_NEW_LWP);

  // ...
}

The main thread's event loop runs continuously, handling events like client connections and quote requests. The server's event handling is driven by callbacks from the REACTOR Singleton to the Quote_Acceptor and Quote_Handler objects. Since this server uses the thread pool model, requests can be handled concurrently by any available thread.

4.2 Evaluating the C++ Thread Pool Solution

The C++ implementation solves the drawbacks with the C version shown in Section 3.2 as follows.

Less infrastructure upheaval: Compared to the changes between our C program in our last column and the C program shown in this column, the changes between the respective C++ programs are much fewer and more localized. In addition to creating a thread-safe Request_Queue Singleton, the primary changes to our C++ thread pool implementation are in the Quote_Handler class and in our server main routine.

In our last column, our Quote_Handler::open function spawned a thread to handle each incoming request. Here, open has been changed to register the new Quote_Handler with the Reactor. Then, when client requests arrive, the Quote_Handler's handle_input method will queue both the request and the handler until a thread from the pool becomes available to service it. The only other change required was to make main create the thread-safe queue, the thread pool, and the Reactor before entering into its event loop.
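For concreteness, the portion of main elided in Section 4.1.4 would create the Quote_Acceptor and then enter that event loop, roughly as follows (a sketch: the Quote_Acceptor constructor argument and the handle_events call are assumptions rather than the original listing):

  // Create the acceptor, which registers itself with
  // the REACTOR Singleton and creates a Quote_Handler
  // for each new client connection (assumed signature).
  Quote_Acceptor acceptor (port);

  // The main thread drives all connection and request
  // events from the Reactor's event loop.
  for (;;)
    REACTOR::instance ()->handle_events ();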
5 The Multi-threaded CORBA Thread Pool Solution

This section illustrates how to implement the thread pool concurrency model with MT-Orbix. The solution we describe below uses the same general design as our C++ implementation above. It also uses many of the same components (such as the ACE Singleton and Message_Queue classes).

5.1 Implementing Thread Pools in MT-Orbix

The My_Quoter implementation class shown below is almost identical to the one we used in our previous column to implement the thread-per-request model. The main difference is the use of object composition to associate the My_Quoter implementation class with the Quoter IDL interface. We'll discuss this below, but first, here's the complete implementation:

class My_Quoter // Note the absence of inheritance!
{
public:
  // Constructor.
  My_Quoter (const char *name);

  // Returns the current stock value.
  virtual CORBA::Long get_quote
    (const char *stock_name,
     CORBA::Environment &env)
  {
    CORBA::Long value;
    {
      // Constructor of m acquires lock.
      Read_Guard<RW_Mutex> m (lock_);

      value = QUOTE_DB::instance ()->
        lookup_stock_price (stock_name);

      // Destructor of m releases lock.
    }
    if (value == -1)
      // Raise exception.
      env.exception (new Stock::Invalid_Stock);
    return value;
  }

protected:
  // Serialize access to database.
  RW_Mutex lock_;
};

As before, it's necessary to protect access to the quote database with a readers/writer lock since multiple requests can be processed simultaneously by threads in the pool.

5.1.1 Associating the IDL Interface with an Implementation

If you've been following our columns carefully, you'll notice that the Orbix implementation of the My_Quoter class in the May 1995 C++ Report inherited from a skeleton called QuoterBOAImpl. This class was automatically generated by the Orbix IDL compiler, i.e.:

class My_Quoter
  // Inherits from an automatically-generated
  // CORBA skeleton class.
  : virtual public Stock::QuoterBOAImpl
{
  // ...
};

In contrast, our current implementation of My_Quoter does not inherit from any generated skeleton. Instead, it uses an alternative provided by Orbix called the TIE approach, which is based on object composition rather than inheritance:

class My_Quoter // Note lack of inheritance!
{
  // ...
};

We use the Orbix TIE approach to associate the CORBA interfaces with our implementation as follows:

DEF_TIE_Quoter (My_Quoter)

The TIE approach is an example of the object form of the Adapter pattern [7], whereas the inheritance approach we used last column uses the class form of the pattern. The object form of the Adapter uses delegation to tie the interface of the My_Quoter object implementation class to the interface expected by the Quoter skeleton generated by MT-Orbix. When a request is received, the Orbix Object Adapter upcalls the TIE object. In turn, this object dispatches the call to the My_Quoter object that is associated with the TIE object.

The TIE approach is mentioned in the C++ Language Mapping chapters of the CORBA 2.0 specification [10]. Not surprisingly, the idea for putting it there originally came from IONA Technologies, the makers of Orbix. Conforming ORB implementations are not required to support either the TIE approach or the inheritance approach, however.³

³ The lack of a clear specification of whether CORBA C++ server skeletons use inheritance or delegation is another indication of the CORBA server-side portability problems we have described in previous columns.

5.1.2 The C++ Thread-Safe Request Queue

The Request_Queue used by the CORBA implementation is reused almost wholesale from the C++ implementation shown in Section 4.1.1:

// An ACE Singleton that accesses an ACE
// thread-safe queue of CORBA Request pointers.
typedef Singleton<Message_Queue<CORBA::Request *>,
                  Mutex>
        Request_Queue;

The primary difference is that we parameterize it with a CORBA::Request pointer, rather than a Quote_Tuple. The reason for this is that MT-Orbix performs the low-level demultiplexing, so we don't have to do it ourselves.

5.1.3 Thread Filters

Orbix implements a non-standard CORBA extension called "thread filters." Each incoming CORBA request is passed through a chain of filters before being dispatched to its target object implementation. To dispatch an incoming CORBA request to a waiting thread, a subclass of ThreadFilter must be defined to override the inRequestPreMarshal method. By using a ThreadFilter, the MT-Orbix ORB and Object Adapter are unaffected by the choice of concurrency model selected by a CORBA server.
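A minimal sketch of such a subclass is shown below; the method signature and the return-value convention (we assume that returning -1 tells the ORB that the filter has taken responsibility for dispatching the request) are assumptions rather than the exact MT-Orbix code:

// Sketch: a thread filter that hands each incoming
// request to the thread pool instead of letting the
// ORB dispatch it immediately.
class TP_Thread_Filter : public ThreadFilter
{
public:
  // Entry point executed by every thread in the pool.
  static void *pool_thread (void *);

  // Called by Orbix for each incoming request before
  // it is dispatched to the target object.
  virtual int inRequestPreMarshal (CORBA::Request &req,
                                   CORBA::Environment &)
  {
    // Enqueue the request for the thread pool.
    Request_Queue::instance ()->insert (&req);

    // Assumed convention: -1 tells the ORB that this
    // filter will arrange for the dispatch itself.
    return -1;
  }
};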
The value returned from inRequestPreMarshal informs Orbix that it should not perform the operation dispatch itself, nor should it return the result to the client. These operations will be performed by one of the threads in the thread pool, as shown in Figure 3.

[Figure 3: The MT-Orbix thread pool quote server. 2: RECEIVE request; 3: INVOKE FILTER(S); 4: the TP Thread Filter ENQUEUEs the REQUEST on the Request Queue; 5: a pool thread DEQUEUEs the REQUEST; 6: UPCALLS via the Object Adapter to the My_Quoter objects (created by the MY_QUOTER FACTORY); 7: RETURN QUOTE VALUE.]

Figure 3 illustrates the role of the TP_Thread_Filter in the MT-Orbix architecture for the Thread Pool stock quote server. Our quote server must explicitly create an instance of TP_Thread_Filter to get it installed into the Orbix filter chain:

TP_Thread_Filter tp_filter;

The constructor of this object automatically inserts the thread pool thread filter at the end of the filter chain.

The pool_thread static method serves as the entry point for each thread in the thread pool, as shown below:

void *TP_Thread_Filter::pool_thread (void *)
{
  // Loop forever, dequeueing new Requests,
  // and dispatching them....
  /* ... */
}
The relevant portion of the server's main function is shown below:

  Thread::spawn_n (pool_size,
                   TP_Thread_Filter::pool_thread,
                   (void *) 0,
                   THR_DETACHED | THR_NEW_LWP);

  // Wait for work to do in the main thread
  // (which is also the thread that shepherds
  // CORBA requests through TP_Thread_Filter).
  TRY {
    CORBA::Orbix.impl_is_ready ("Quoter", IT_X);
  } CATCHANY {
    cerr << IT_X << endl;
  } ENDTRY

  return 0;
}

When the Quote server first starts up, it creates a My_Quoter object to service client quote requests. It then creates a pool of threads to service incoming requests using the ACE spawn_n method. Finally, the main server thread calls Orbix.impl_is_ready to notify Orbix that the Quoter implementation is ready to service requests. The main thread is responsible for shepherding CORBA requests through the filter chain to the TP_Thread_Filter.

Finally, the object we initially created is implicitly destroyed by the destructor of the My_Quoter_var. The OMG C++ Mapping provides for each IDL interface a _var class that can manage object references (_ptr types) of that interface type. If we didn't use a My_Quoter_var type here, our code would have to manually duplicate and release the object as required. By using a My_Quoter_var, we let the smart pointer perform the resource management.

5.2 Evaluating the MT-Orbix Thread Pool Solution

The following benefits arise from using MT-Orbix to implement the thread pool concurrency model:

Almost no infrastructure upheaval: The implementation of the MT-Orbix thread pool concurrency model shown above is almost identical to the thread-per-request server from our previous column. The primary changes we added were cosmetic (such as using Singletons rather than global variables, and using object composition to tie the Quoter skeleton to the My_Quoter implementation rather than using inheritance). The ability to quickly and easily modify applications in this manner allows them to be rapidly tuned and redeployed when necessary.

Increased flexibility and reuse: The flexibility and reuse of the MT-Orbix solution is similar to the ACE C++ solution. The main difference is that MT-Orbix is responsible for most of the low-level demultiplexing and concurrency control that we had to implement by hand in our C++ solution. In particular, MT-Orbix hides all its internal synchronization mechanisms from the server programmer. Thus, we are only responsible for locking server-level objects (such as the Request_Queue).

Optimized connection management: MT-Orbix can perform certain optimizations (such as caching connections in a thread-safe manner) without requiring any programmer intervention. It also separates the concerns of application development from those involving the choice of suitable transports and protocols for the application. In other words, using an ORB allows an application to be developed independently of the underlying communication transports and protocols.

The primary drawback, of course, is that the mechanisms used by MT-Orbix are not standardized across the industry. In general, all the multi-threading techniques we discuss in this column aren't standardized yet, and in particular the TP_Thread_Filter approach shown above is proprietary to Orbix. The fact that the CORBA solution shown here is not portable is yet another indication of the server-side portability problems with CORBA that we've discussed in previous columns.

Despite these issues, it is important to note that the concurrency models, patterns, and techniques we discussed in this article are reusable. Our goal is to help you navigate through the space of design alternatives. We hope that you'll be able to apply them to your projects, regardless of whether you program in CORBA, DCE, Network OLE, ACE, or any other distributed computing toolkit.

6 Concluding Remarks

In this column, we examined the thread pool concurrency model and illustrated how to use it to develop multi-threaded servers for a distributed stock quote application. This example illustrated how object-oriented techniques, C++, CORBA, and higher-level abstractions like the Singleton pattern help to simplify programming and improve extensibility.

Our next column will explore yet another concurrency model: thread-per-session. This model is supported by a number of CORBA implementations including MT-Orbix and ORBeline. Having a choice of concurrency models can help developers meet the performance, functionality, and maintenance requirements of their applications. The key to success, of course, lies in thoroughly understanding the tradeoffs between different models. As always, if there are any topics that you'd like us to cover, please send us email at [email protected].

Thanks to Prashant Jain, Tim Harrison, Ron Resnick, and Esmond Pitt for comments on this column.

References

[1] G. Booch, Object Oriented Analysis and Design with Applications (2nd Edition). Redwood City, California: Benjamin/Cummings, 1993.

[2] J. Eykholt, S. Kleiman, S. Barton, R. Faulkner, A. Shivalingiah, M. Smith, D. Stein, J. Voll, M. Weeks, and D. Williams, "Beyond Multiprocessing... Multithreading the SunOS Kernel," in Proceedings of the Summer USENIX Conference, (San Antonio, Texas), June 1992.

[3] D. C. Schmidt, "An OO Encapsulation of Lightweight OS Concurrency Mechanisms in the ACE Toolkit," Tech. Rep. WUCS-95-31, Washington University, St. Louis, September 1995.

[4] W. R. Stevens, UNIX Network Programming, Second Edition. Englewood Cliffs, NJ: Prentice Hall, 1997.

[5] D. C. Schmidt, "ACE: an Object-Oriented Framework for Developing Distributed Applications," in Proceedings of the 6th USENIX C++ Technical Conference, (Cambridge, Massachusetts), USENIX Association, April 1994.

[6] A. Stepanov and M. Lee, "The Standard Template Library," Tech. Rep. HPL-94-34, Hewlett-Packard Laboratories, April 1994.

[7] E. Gamma, R. Helm, R. Johnson, and J. Vlissides, Design Patterns: Elements of Reusable Object-Oriented Software. Reading, MA: Addison-Wesley, 1995.

[8] D. C. Schmidt and T. Harrison, "Double-Checked Locking: An Object Behavioral Pattern for Initializing and Accessing Thread-safe Objects Efficiently," in Pattern Languages of Program Design (R. Martin, F. Buschmann, and D. Riehle, eds.), Reading, MA: Addison-Wesley, 1997.

[9] D. C. Schmidt, "Design Patterns for Initializing Network Services: Introducing the Acceptor and Connector Patterns," C++ Report, vol. 7, November/December 1995.

[10] Object Management Group, The Common Object Request Broker: Architecture and Specification, 2.0 ed., July 1995.