Coherence Developer Guide
June 2013
Documentation for Developers and Architects that describes how to develop applications that use Coherence for distributed caching and data grid computing. Includes information for installing Coherence, setting up Coherence clusters, configuring Coherence caches, and performing data grid operations.
Oracle Fusion Middleware Developing Applications with Oracle Coherence, 12c (12.1.2)
E26039-01
Copyright © 2008, 2013, Oracle and/or its affiliates. All rights reserved.
Primary Author: Joseph Ruzzi
This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited. The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing. If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable: U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle USA, Inc., 500 Oracle Parkway, Redwood City, CA 94065. This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications. Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners. This software and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.
Contents
Preface
    Audience
    Documentation Accessibility
    Related Documents
    Conventions
Part I
Getting Started
1 Introduction to Coherence
    Basic Concepts
        Clustered Data Management
        A single API for the logical layer, XML configuration for the physical layer
        Caching Strategies
        Data Storage Options
        Serialization Options
        Configurability and Extensibility
        Namespace Hierarchy
    Read/Write Caching
        NamedCache
        Requirements for Cached Objects
        NamedCache Usage Patterns
    Querying the Cache
    Invocation Service
    Event Programming
    Transactions
    HTTP Session Management
    Object-Relational Mapping Integration
    C++/.NET Integration
    Management and Monitoring
3 Understanding Configuration
    Overview of the Default Configuration Files
    Specifying an Operational Configuration File
        Using the Default Operational Override File
        Specifying an Operational Override File
        Defining Override Files for Specific Operational Elements
        Viewing Which Operational Override Files are Loaded
    Specifying a Cache Configuration File
        Using a Default Cache Configuration File
        Overriding the Default Cache Configuration File
        Using the Cache Configuration File System Property
        Viewing Which Cache Configuration File is Loaded
    Specifying a POF Configuration File
        Overriding the Default POF Configuration File
        Using the POF Configuration File System Property
        Combining Multiple POF Configuration Files
        Viewing Which POF Configuration Files are Loaded
    Specifying Management Configuration Files
        Specifying a Custom Report Group Configuration File
            Overriding the Default Report Group Configuration File
            Using the Report Group Configuration File System Property
        Specifying an MBean Configuration File
            Using the Default MBean Configuration Override File
            Using the MBean Configuration File System Property
        Viewing Which Management Configuration Files are Loaded
    Disabling Schema Validation
    Understanding the XML Override Feature
        Using the Predefined Override Files
        Defining Custom Override Files
        Defining Multiple Override Files for the Same Element
    Changing Configuration Using System Properties
        Using Preconfigured System Properties
        Creating Custom System Properties
5 Debugging in Coherence
    Overview of Debugging in Coherence
    Configuring Logging
        Changing the Log Level
        Changing the Log Destination
            Sending Log Messages to a File
        Changing the Log Message Format
        Setting the Logging Character Limit
        Using JDK Logging for Coherence Logs
        Using Log4J Logging for Coherence Logs
        Using SLF4J for Coherence Logs
    Performing Remote Debugging
    Troubleshooting Coherence-Based Applications
        Using Coherence Logs
        Using JMX Management and Coherence Reports
        Using JVM Options to Help Debug
        Capturing Thread Dumps
        Capturing Heap Dumps
        Monitoring the Operating System
7 Setting Up a Cluster
    Overview of Setting Up Clusters
    Specifying a Cluster's Name
    Specifying a Cluster Member's Identity
    Configuring Multicast Communication
        Specifying a Cluster's Multicast Address
            Changing the Multicast Socket Interface
        Disabling Multicast Communication
        Specifying the Multicast Time-to-Live
        Specifying the Multicast Join Timeout
        Changing the Multicast Threshold
    Specifying a Cluster Member's Unicast Address
    Using Well Known Addresses
        Specifying WKA Member Addresses
        Specifying a WKA Address Provider
    Enabling Single-Server Mode
    Configuring Death Detection
        Changing TCP-Ring Settings
        Changing the Heartbeat Interval
        Disabling Death Detection
    Specifying Cluster Priorities
        Specifying a Cluster Member's Priority
        Specifying Communication Thread Priorities
        Specifying Thread Priorities for Services
        Enabling a Custom Guardian Failure Policy
    Issuing Manual Guardian Heartbeats
Part III
Using Caches
13 Configuring Caches
    Overview
    Defining Cache Mappings
        Using One-to-One Cache Mappings
        Using Cache Name Pattern Mappings
        Specifying Initialization Parameters in a Mapping
    Defining Cache Schemes
        Defining Distributed Cache Schemes
        Defining Replicated Cache Schemes
        Defining Optimistic Cache Schemes
        Defining Local Cache Schemes
            Controlling the Growth of a Local Cache
            Specifying a Custom Eviction Policy
        Defining Near Cache Schemes
            Near Cache Invalidation Strategies
    Using Scheme Inheritance
    Using Cache Scheme Properties
    Using Parameter Macros
        Enabling a Custom Map Implementation for a Journal Scheme
    Changing Journaling Behavior
        Configuring the RAM Journal Resource Manager
        Configuring the Flash Journal Resource Manager
    Using Asynchronous Backup
    Using Delta Backup
        Enabling Delta Backup
        Enabling a Custom Delta Backup Compressor
17 Using Quorum
    Overview of Using Quorum
    Using the Cluster Quorum
        Configuring the Cluster Quorum Policy
    Using the Partitioned Cache Quorums
        Configuring the Partitioned Cache Quorum Policy
    Using the Proxy Quorum
        Configuring the Proxy Quorum Policy
    Using Custom Action Policies
        Enabling Custom Action Policies
        Enabling the Custom Failover Access Policy
    Overview of POF Serialization
    Using the POF API to Serialize Objects
        Implementing the PortableObject Interface
        Implementing the PofSerializer Interface
        Guidelines for Assigning POF Indexes
        Using POF Object References
            Enabling POF Object References
            Registering POF Object Identities for Circular and Nested Objects
    Registering POF Objects
        Configuring Coherence to Use the ConfigurablePofContext Class
            Configure the ConfigurablePofContext Class Per Service
            Configure the ConfigurablePofContext Class for All Services
            Configure the ConfigurablePofContext Class For the JVM
    Using POF Annotations to Serialize Objects
        Annotating Objects for POF Serialization
        Registering POF Annotated Objects
        Generating a POF Configuration File
        Enabling Automatic Indexing
        Providing a Custom Codec
    Using POF Extractors and POF Updaters
        Navigating a POF object
        Using POF Extractors
        Using POF Updaters
    Serializing Keys Using POF
21 Pre-Loading a Cache
    Bulk Loading Data Into a Cache
    Performing Distributed Bulk Loading
        A Distributed Bulk Loading Example
    Interpreting Query Records
        Query Explain Plan Record
        Query Trace Record
        Running The Query Record Example
    Producing Readable Backing MapListener Events from Distributed Caches
    Using Synchronous Event Listeners
        Changing Existing Values
        Inserting Entries in the Cache
        Deleting Entries in the Cache
        Working with Indexes
            Creating an Index on the Cache
            Removing an Index from the Cache
        Issuing Multiple Query Statements
            Processing Query Statements in Batch Mode
        Viewing Query Cost and Effectiveness
    Using the CohQL Command-Line Tool
        Starting the Command-line Tool
        Using Command-Line Tool Arguments
        A Command-Line Example
    Building Filters in Java Programs
    Additional Coherence Query Language Examples
        Simple SELECT * FROM Statements that Highlight Filters
        Complex Queries that Feature Projection, Aggregation, and Grouping
        UPDATE Examples
        Key and Value Pseudo-Function Examples
29 Performing Transactions
    Overview of Transactions
    Using Explicit Locking for Data Concurrency
    Using Entry Processors for Data Concurrency
    Using the Transaction Framework API
        Defining Transactional Caches
        Performing Cache Operations within a Transaction
            Using the NamedCache API
            Using the Connection API
        Creating Transactional Connections
        Using Transactional Connections
            Using Auto-Commit Mode
            Setting Isolation Levels
            Using Eager Mode
            Setting Transaction Timeout
        Using the OptimisticNamedCache Interface
        Configuring POF When Performing Transactions
        Configuring Transactional Storage Capacity
        Performing Transactions from Java Extend Clients
            Create an Entry Processor for Transactions
            Configure the Cluster-Side Transaction Caches
            Configure the Client-Side Remote Cache
            Use the Transactional Entry Processor from a Java Client
        Viewing Transaction Management Information
            CacheMBeans for Transactional Caches
            TransactionManagerBean
    Using the Coherence Resource Adapter
        Performing Cache Operations within a Transaction
            Creating a Coherence Connection
            Getting a Named Cache
            Demarcating Transaction Boundaries
        Packaging the Application
            Configure the Connection Factory Resource Reference
            Configure the Resource Adapter Module Reference
            Include the Required Libraries
        Using the Coherence Cache Adapter for Transactions
    authorized-hosts
    cache-factory-builder-config
    callback-handler
    cluster-config
    cluster-quorum-policy
    coherence
    configurable-cache-factory-config
    filter
    filters
    flashjournal-manager
    flow-control
    host-range
    identity-asserter
    identity-manager
    identity-transformer
    incoming-message-handler
    init-param
    init-params
    instance
    journaling-config
    key-store
    license-config
    logging-config
    management-config
    mbean
    mbeans
    mbean-filter
    member-identity
    multicast-listener
    notification-queueing
    outgoing-message-handler
    outstanding-packets
    packet-buffer
    packet-bundling
    packet-delivery
    packet-publisher
    packet-size
    packet-speaker
    pause-detection
    provider
    ramjournal-manager
    reporter
    security-config
    serializer
    serializers
    service
        Initialization Parameter Settings
            DistributedCache Service Parameters
            ReplicatedCache Service Parameters
            OptimisticCache Service Parameters
            Invocation Service Parameters
            Proxy Service Parameters
    service-guardian
    services
    shutdown-listener
    socket-address
    socket-provider
    socket-providers
    ssl
    tcp-ring-listener
    traffic-jam
    trust-manager
    unicast-listener
    volume-threshold
    well-known-addresses
    Attribute Reference
    initiator-config
    init-param
    init-params
    instance
    interceptor
    interceptors
    invocation-scheme
    invocation-service-proxy
    key-associator
    key-partitioning
    key-store
    lh-file-manager
    listener
    local-address
    local-scheme
    name-service-addresses
    near-scheme
    nio-file-manager
    nio-memory-manager
    operation-bundling
    optimistic-scheme
    outgoing-message-handler
    overflow-scheme
    paged-external-scheme
    partition-listener
    partitioned-quorum-policy-scheme
    provider
    proxy-config
    proxy-scheme
    proxy-quorum-policy-scheme
    ramjournal-scheme
    read-write-backing-map-scheme
    remote-addresses
    remote-cache-scheme
    remote-invocation-scheme
    replicated-scheme
    resource-config
    serializer
    socket-address
    socket-provider
    ssl
    tcp-acceptor
    tcp-initiator
    transactional-scheme
    trust-manager
    Attribute Reference
Preface
Welcome to Developing Applications with Oracle Coherence. This document provides contextual information, instructions, and examples that are designed to teach developers and architects how to use Coherence and develop Coherence-based applications.
Audience
Developing Applications with Oracle Coherence is intended for the following audiences:
Primary Audience: Application developers who want to understand core Oracle Coherence concepts and want to build applications that leverage an Oracle Coherence data grid.

Secondary Audience: System architects who want to understand core Oracle Coherence concepts and want to build data grid-based solutions.
The audience must be familiar with Java to use this guide. In addition, the examples in this guide require the installation and use of the Oracle Coherence product. The use of an IDE is not required to use this guide, but is recommended to facilitate working through the examples. A database and basic database knowledge are required when using cache store features.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support

Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.
Related Documents
For more information, see the following documents that are included in the Oracle Coherence documentation set:
- Administering Oracle Coherence
- Developing Remote Clients for Oracle Coherence
- Integrating Oracle Coherence
- Managing Oracle Coherence
- Securing Oracle Coherence
- Tutorial for Oracle Coherence
- Java API Reference for Oracle Coherence
- C++ API Reference for Oracle Coherence
- .NET API Reference for Oracle Coherence
- Release Notes for Oracle Coherence
Conventions
The following text conventions are used in this document:
boldface: Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.

italic: Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.

monospace: Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
What's New in This Guide

- Oracle universal installer and patching:
    - Complete installer, which provides all installation options and can operate in graphical or silent mode. See "Running the Coherence Installer" on page 2-2.
    - Quick installer, which provides a silent installer with no API documentation or examples. See "Running the Coherence Quick Installer" on page 2-4.
    - Supplemental installer, which provides only API documentation and examples. See "Running the Coherence Supplemental Installer" on page 2-4.
    - Deinstaller, which provides the ability to deinstall a Coherence installation. See "Deinstalling Coherence" on page 2-8.
- Maven integration, which simplifies and standardizes including Coherence in a build process. See "Integration with Maven" on page 2-7.
- SLF4J integration, which allows SLF4J logging. See "Using SLF4J for Coherence Logs" on page 5-7.
- Cluster priorities, which are used to assign importance for cluster members, cluster communication components, and threads. See "Specifying Cluster Priorities" on page 7-16.
- Logical near cache invalidation strategy, which instructs a near cache to listen to all backing map events that are not synthetic deletes. See "Near Cache Invalidation Strategies" on page 13-12.
- Flash journal high size, which determines when to start removing stale values from the journal and is the basis for automatic tuning. See "Configuring the Flash Journal Resource Manager" on page 14-10.
- Asynchronous backup, which allows clients to continue to respond to requests during backup operations. See "Using Asynchronous Backup" on page 14-11.
- Quorum failover access policy, which moderates client request load during a failover event in order to allow cache servers adequate opportunity to re-establish partition backups. See "Enabling the Custom Failover Access Policy" on page 17-6.
- Custom XML namespaces and namespace handlers, which allow user-defined XML elements and attributes to be added to a cache configuration file. See Chapter 19, "Extending Cache Configuration Files".
- POF configuration generator, which automatically creates a POF configuration file that includes user type entries for the classes that contain the @Portable annotation. See "Generating a POF Configuration File" on page 20-11.
- Implicit locks, which allow cache entries in multiple caches to be updated from a single entry processor in a transaction-like manner. See "Processing Entries in Multiple Caches" on page 24-3.
- Live events, which is an event programming model that allows extensibility within a cluster when performing operations against a data grid. The model uses events to represent observable occurrences of cluster operations. See Chapter 27, "Using Live Events".
- Partition assignment strategies, which define how partitions are assigned to storage-enabled cluster members. See "Changing the Partition Distribution Strategy" on page 30-5.
- InfiniBand Message Bus (IMB) provider, which is a native InfiniBand protocol that supports Remote Direct Memory Access (RDMA), zero message copy, kernel bypass, predictive notifications, and custom off-heap buffers. IMB is only available on Oracle Exalogic systems. See "unicast-listener Subelements" in Appendix A and "distributed-scheme Subelements" in Appendix B for details about setting the <reliable-transport> element.
- Revised the system requirements. See "System Requirements" on page 2-1.
- Revised the installation directory structure instructions. See "Browsing the Installation Directory" on page 2-5.
- Revised the packet buffer configuration instructions. See "Configuring the Size of the Packet Buffers" on page 10-7 and "Adjusting the Maximum Size of a Packet" on page 10-9.
- Revised the near cache invalidation strategy section to include new default behavior. See "Near Cache Invalidation Strategies" on page 13-12.
- Added a section about key serialization that lists considerations when serializing keys. See "Serializing Keys Using POF" on page 20-16.
- Added a section about deferring key association checks. See "Deferring the Key Association Check" on page 30-3.
- Added a section about changing the default number of partitions for a distributed cache service. See "Changing the Number of Partitions" on page 30-4.
- Revised the operational configuration reference for all new and changed elements. See Appendix A, "Operational Configuration Elements."
- Revised the cache configuration reference for all new and changed elements. See Appendix B, "Cache Configuration Elements."
Part I
Getting Started
- Chapter 1, "Introduction to Coherence"
- Chapter 2, "Installing Oracle Coherence for Java"
- Chapter 3, "Understanding Configuration"
- Chapter 4, "Building Your First Coherence Application"
- Chapter 5, "Debugging in Coherence"
1 Introduction to Coherence
This chapter provides an introduction to Coherence concepts and features. It outlines product capabilities, usage possibilities, and provides a brief overview of how particular features are implemented. The items discussed in this chapter are detailed throughout this guide. This chapter includes the following sections:
Basic Concepts
Read/Write Caching
Querying the Cache
Invocation Service
Event Programming
Transactions
HTTP Session Management
Object-Relational Mapping Integration
C++/.NET Integration
Management and Monitoring
Basic Concepts
The topics in this section describe fundamental concepts that are associated with Coherence and discuss several important features that are associated with using Coherence to cluster data.
A fully coherent, single system image (SSI)
Scalability for both read and write access
Fast, transparent failover and failback
Linear scalability for storage and processing
No Single-Points-of-Failure (SPOFs)
Cluster-wide locking and transactions
Built on top of this foundation are the various services that Coherence provides, including database caching, HTTP session management, grid agent invocation and distributed queries. Before going into detail about these features, some basic aspects of Coherence should be discussed.
A single API for the logical layer, XML configuration for the physical layer
Coherence supports many topologies for clustered data management. Each of these topologies has a trade-off in terms of performance and fault-tolerance. By using a single API, the choice of topology can be deferred until deployment if desired. This allows developers to work with a consistent logical view of Coherence, while providing flexibility during tuning or as application needs change.
Caching Strategies
Coherence provides several cache implementations:
Local Cache: Local on-heap caching for non-clustered caching.
Replicated Cache: Perfect for small, read-heavy caches.
Distributed Cache: True linear scalability for both read and write access. Data is automatically, dynamically and transparently partitioned across nodes. The distribution algorithm minimizes network traffic and avoids service pauses by incrementally shifting data.
Near Cache: Provides the performance of local caching with the scalability of distributed caching. Several different near-cache strategies are available and offer a trade-off between performance and synchronization guarantees.
In-process caching provides the highest level of raw performance, since objects are managed within the local JVM. This benefit is most directly realized by the Local, Replicated, Optimistic and Near Cache implementations.
Out-of-process (client/server) caching provides the option of using dedicated cache servers. This can be helpful when you want to partition workloads (to avoid stressing the application servers). This is accomplished by using the Partitioned cache implementation and simply disabling local storage on client nodes through a single command-line option or a one-line entry in the XML configuration, as shown in the example below.
Tiered caching (using the Near Cache functionality) enables you to couple local caches on the application server with larger, partitioned caches on the cache servers, combining the raw performance of local caching with the scalability of partitioned caching. This is useful for both dedicated cache servers and co-located caching (cache partitions stored within the application server JVMs).
See Part III, "Using Caches" for detailed information on configuring and using caches.
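For example, the following is a minimal sketch of starting a storage-disabled client from the command line; the application class com.example.MyApp and app.jar are hypothetical placeholders for an application main class and its JAR:

java -Dtangosol.coherence.distributed.localstorage=false -cp COHERENCE_HOME\lib\coherence.jar;app.jar com.example.MyApp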
On-heap: The fastest option, though it can affect JVM garbage collection times.
NIO RAM: No impact on garbage collection, though it does require serialization/deserialization.
NIO Disk: Similar to NIO RAM, but using memory-mapped files.
Journal: A combination of RAM storage and disk storage, optimized for solid state disks, that uses a journaling technique. Journal-based storage requires serialization/deserialization.
File-based: Uses a Berkeley Database JE storage system.
Coherence storage is transient: the disk-based storage options are for managing cached data only. For persistent storage, Coherence offers backing maps coupled with a CacheLoader/CacheStore. See Chapter 14, "Implementing Storage and Backing Maps," for detailed information.
Serialization Options
Because serialization is often the most expensive part of clustered data management, Coherence provides the following options for serializing/deserializing data:
com.tangosol.io.pof.PofSerializer: The Portable Object Format (also referred to as POF) is a language agnostic binary format. POF was designed to be incredibly efficient in both space and time and is the recommended serialization option in Coherence. See Chapter 20, "Using Portable Object Format."
java.io.Serializable: The simplest, but slowest option.
java.io.Externalizable: This requires developers to implement serialization manually, but can provide significant performance benefits. Compared to java.io.Serializable, this can cut serialized data size by a factor of two or more (especially helpful with Distributed caches, as they generally cache data in serialized form). Most importantly, CPU usage is dramatically reduced.
com.tangosol.io.ExternalizableLite: This is very similar to java.io.Externalizable, but offers better performance and less memory usage by using a more efficient IO stream implementation.
com.tangosol.run.xml.XmlBean: A default implementation of ExternalizableLite.
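For illustration, the following is a minimal sketch of a value class that participates in POF serialization by implementing com.tangosol.io.pof.PortableObject; the Person class and its POF property indexes are hypothetical, and the class must also be registered as a user type in the POF configuration file (see Chapter 20, "Using Portable Object Format").

import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
import com.tangosol.io.pof.PortableObject;
import java.io.IOException;

public class Person implements PortableObject {
    private String name;
    private int age;

    // A public no-argument constructor is required so that the object can be deserialized
    public Person() {
    }

    public Person(String name, int age) {
        this.name = name;
        this.age = age;
    }

    // Read each property by its POF index
    public void readExternal(PofReader in) throws IOException {
        name = in.readString(0);
        age = in.readInt(1);
    }

    // Write each property using the same POF indexes
    public void writeExternal(PofWriter out) throws IOException {
        out.writeString(0, name);
        out.writeInt(1, age);
    }
}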
Namespace Hierarchy
Coherence is organized as a set of services. At the root is the Cluster service. A cluster is defined as a set of Coherence instances (one instance per JVM, with one or more JVMs on each computer). A cluster is defined by the combination of multicast address and port. A TTL (network packet time-to-live; that is, the number of network hops) setting can restrict the cluster to a single computer, or the computers attached to a single switch.
Under the cluster service are the various services that comprise the Coherence API. These include the various caching services (Replicated, Distributed, and so on) and the Invocation Service (for deploying agents to various nodes of the cluster). Each instance of a service is named, and there is typically a default service instance for each type. The cache services contain named caches (com.tangosol.net.NamedCache), which are analogous to database tables; that is, they typically contain a set of related objects. See Chapter 6, "Introduction to Coherence Clusters," for more information on the cluster service as well as the other cluster-based services provided by Coherence.
Read/Write Caching
This section provides an overview of the NamedCache API, which is the primary interface used by applications to get and interact with cache instances. This section also includes some insight into the use of the NamedCache API.
NamedCache
The following source code returns a reference to a NamedCache instance. The underlying cache service is started if necessary. See the Java API Reference for Oracle Coherence for details on the NamedCache interface.
import com.tangosol.net.*;
...
NamedCache cache = CacheFactory.getCache("MyCache");
Coherence scans the cache configuration XML file for a name mapping for MyCache. This is similar to Servlet name mapping in a web container's web.xml file. Coherence's cache configuration file contains (in the simplest case) a set of mappings (from cache name to cache scheme) and a set of cache schemes. By default, Coherence uses the coherence-cache-config.xml file found at the root of coherence.jar. This can be overridden on the JVM command-line with -Dtangosol.coherence.cacheconfig=file.xml. This argument can reference either a file system path, or a Java resource path. The com.tangosol.net.NamedCache interface extends several other interfaces:
java.util.Map: basic Map methods such as get(), put(), remove().
com.tangosol.util.ObservableMap: methods for listening to cache events. (See Chapter 25, "Using Map Events.")
com.tangosol.net.cache.CacheMap: methods for getting a collection of keys (as a Map) that are in the cache and for putting objects in the cache. Also supports adding an expiry value when putting an entry in a cache.
com.tangosol.util.QueryMap: methods for querying the cache. (See "Query the Cache" in Developing Applications with Oracle Coherence.)
com.tangosol.util.ConcurrentMap: methods for concurrent access such as lock() and unlock().
com.tangosol.util.InvocableMap: methods for server-side processing of cache data.
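The following sketch exercises a few of these interfaces against a single cache; the cache name, keys, and values are arbitrary examples.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class NamedCacheExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("MyCache");

        // java.util.Map: basic put and get
        cache.put("key1", "Hello");
        Object value = cache.get("key1");
        System.out.println(value);

        // com.tangosol.net.cache.CacheMap: put with an expiry of 10 seconds
        cache.put("key2", "Expires soon", 10000L);

        // com.tangosol.util.ConcurrentMap: explicit locking around an update
        if (cache.lock("key1", -1)) {
            try {
                cache.put("key1", "Hello again");
            } finally {
                cache.unlock("key1");
            }
        }

        CacheFactory.shutdown();
    }
}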
Cache keys must provide an implementation of the hashCode() and equals() methods, and those methods must return consistent results across cluster nodes. This implies that the implementation of hashCode() and equals() must be based solely on the object's serializable state (that is, the object's non-transient fields); most built-in Java types, such as String, Integer and Date, meet this requirement. Some cache implementations (specifically the partitioned cache) use the serialized form of the key objects for equality testing, which means that keys for which equals() returns true must serialize identically; most built-in Java types meet this requirement as well.
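As an illustration, the following sketch shows a hypothetical key class whose equals() and hashCode() methods are based only on its serializable (non-transient) fields.

import java.io.Serializable;

public class OrderKey implements Serializable {
    private final String customerId;
    private final long orderNumber;

    public OrderKey(String customerId, long orderNumber) {
        this.customerId = customerId;
        this.orderNumber = orderNumber;
    }

    // Equality is based solely on the serializable fields
    public boolean equals(Object o) {
        if (this == o) {
            return true;
        }
        if (!(o instanceof OrderKey)) {
            return false;
        }
        OrderKey that = (OrderKey) o;
        return orderNumber == that.orderNumber && customerId.equals(that.customerId);
    }

    // The hash code is derived from the same serializable fields
    public int hashCode() {
        return 31 * customerId.hashCode() + (int) (orderNumber ^ (orderNumber >>> 32));
    }
}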
As a clustered implementation of java.util.Map with several added features (queries, concurrency), but with no persistent backing (a "side" cache).
As a means of decoupling access to external data sources (an "inline" cache). In this case, the application uses the NamedCache interface, and the NamedCache takes care of managing the underlying database (or other resource).
a database: The most intuitive use of a cache is simply caching database tables (in the form of Java objects).
a service: Mainframe, web service, service bureau; any service that represents an expensive resource to access (either due to computational cost or actual access fees).
calculations: Financial calculations, aggregations, data transformations. Using an inline cache makes it very easy to avoid duplicating calculations. If the calculation is complete, the result is simply pulled from the cache. Since any serializable object can be used as a cache key, it is a simple matter to use an object containing calculation parameters as the cache key.
See Chapter 15, "Caching Data Sources" for more information on inline caching. Write-back options:
write-through: Ensures that the external data source always contains up-to-date information. Used when data must be persisted immediately, or when sharing a data source with other applications.
write-behind: Provides better performance by caching writes to the external data source. Not only can writes be buffered to even out the load on the data source, but multiple writes can be combined, further reducing I/O. The trade-off is that data is not immediately persisted to disk; however, it is immediately distributed across the cluster, so the data survives the loss of a server. Furthermore, if the entire data set is cached, this option means that the application can survive a complete failure of the data source temporarily as both cache reads and writes do not require synchronous access to the data source.
To implement a read-only inline cache, you simply implement two methods on the com.tangosol.net.cache.CacheLoader interface, one for singleton reads, the other for bulk reads. Coherence provides an abstract class com.tangosol.net.cache.AbstractCacheLoader which provides a default implementation of the bulk method, which means that you need only implement a single method: public Object load(Object oKey). This method accepts an arbitrary cache key and returns the appropriate value object.
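A minimal sketch of such a read-only loader follows; the data source lookup is a hypothetical placeholder for real data access code.

import com.tangosol.net.cache.AbstractCacheLoader;

public class SimpleCacheLoader extends AbstractCacheLoader {
    // Return the value for the given key from the external data source
    public Object load(Object oKey) {
        return lookupInDataSource(oKey);
    }

    // Hypothetical placeholder for real data source access (for example, a JDBC query)
    private Object lookupInDataSource(Object oKey) {
        return "value-for-" + oKey;
    }
}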
If you want to implement read/write caching, you must extend com.tangosol.net.cache.AbstractCacheStore (or implement the interface com.tangosol.net.cache.CacheStore), which adds the following methods:
public void erase(Object oKey);
public void eraseAll(Collection colKeys);
public void store(Object oKey, Object oValue);
public void storeAll(Map mapEntries);
The method erase() should remove the specified key from the external data source. The method store() should update the specified item in the data source if it exists, or insert it if it does not presently exist. After the CacheLoader/CacheStore is implemented, it can be connected through the coherence-cache-config.xml file.
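A read/write variant might look like the following sketch; the data source calls are hypothetical placeholders for real persistence logic.

import com.tangosol.net.cache.AbstractCacheStore;

public class SimpleCacheStore extends AbstractCacheStore {
    // Read the value for the key from the external data source
    public Object load(Object oKey) {
        return readFromDataSource(oKey);
    }

    // Insert the entry into the data source, or update it if it already exists
    public void store(Object oKey, Object oValue) {
        writeToDataSource(oKey, oValue);
    }

    // Remove the entry from the external data source
    public void erase(Object oKey) {
        deleteFromDataSource(oKey);
    }

    // Hypothetical placeholders for real persistence logic
    private Object readFromDataSource(Object oKey) { return null; }
    private void writeToDataSource(Object oKey, Object oValue) { }
    private void deleteFromDataSource(Object oKey) { }
}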
Invocation Service
The Coherence invocation service can deploy computational agents to various nodes within the cluster. These agents can be either execute-style (deploy and asynchronously listen) or query-style (deploy and synchronously listen). See Chapter 24, "Processing Data In a Cache," for more information on using the invocation service. The invocation service is accessed through the com.tangosol.net.InvocationService interface and includes the following two methods:
Example 1-1 Methods in the InvocationService API
public void execute(Invocable task, Set setMembers, InvocationObserver observer);
public Map query(Invocable task, Set setMembers);
An instance of the service can be retrieved from the com.tangosol.net.CacheFactory class. Coherence implements the WorkManager API for task-centric processing.
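For illustration, the following sketch defines a trivial agent and runs it synchronously on all members of an invocation service; the service name "InvocationService" is an assumption and must match an invocation service defined in the cache configuration file.

import com.tangosol.net.AbstractInvocable;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.InvocationService;
import java.util.Map;

public class InvocationExample {
    // A trivial agent that runs on each targeted member and returns a result
    public static class PingAgent extends AbstractInvocable {
        public void run() {
            setResult("alive");
        }
    }

    public static void main(String[] args) {
        // The service name is assumed to match an invocation scheme in the cache configuration file
        InvocationService service =
                (InvocationService) CacheFactory.getService("InvocationService");

        // Query-style invocation: run the agent synchronously on all members of the service
        Map results = service.query(new PingAgent(), null);
        System.out.println(results);
        CacheFactory.shutdown();
    }
}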
Event Programming
Coherence supports two event programming models that allow applications to receive and react to notifications of cluster operations. Applications observe events as logical concepts regardless of which computer caused the event. Events provide a common way of extending Coherence with application-specific logic. The event programming models are:
Live Events: The live event programming model uses user-defined event interceptors that are registered to receive different types of events. Applications decide what action to take based on the event type. Many events that are available through the use of map events are also supported using live events. For details, see Chapter 27, "Using Live Events."
Map Events: The map event programming model uses user-defined map listeners that are attached to the underlying map implementation. Map events offer customizable server-based filters and lightweight events that can minimize network traffic and processing. Map listeners follow the JavaBean paradigm and can distinguish between system cache events (for example, eviction) and application cache events (for example, get/put operations). For details, see Chapter 25, "Using Map Events."
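For illustration, the following sketch attaches a map listener to a cache; the cache name and printed messages are arbitrary examples.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.MapEvent;
import com.tangosol.util.MapListener;

public class MapEventExample {
    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("MyCache");

        // Attach a listener that reports inserts, updates, and deletes for this cache
        cache.addMapListener(new MapListener() {
            public void entryInserted(MapEvent event) {
                System.out.println("inserted: " + event.getKey());
            }
            public void entryUpdated(MapEvent event) {
                System.out.println("updated: " + event.getKey());
            }
            public void entryDeleted(MapEvent event) {
                System.out.println("deleted: " + event.getKey());
            }
        });

        cache.put("key1", "value1");   // fires entryInserted
        cache.put("key1", "value2");   // fires entryUpdated
        cache.remove("key1");          // fires entryDeleted
        CacheFactory.shutdown();
    }
}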
Transactions
Coherence provides various transaction options. The options include: basic data concurrency using the ConcurrentMap interface and EntryProcessor API, partition-level transactions using implicit locking and the EntryProcessor API, atomic transactions using the Transaction Framework API, and atomic transactions with full XA support using the Coherence resource adapter. See Chapter 29, "Performing Transactions" for detailed instructions.
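As a sketch of the entry processor approach to data concurrency, the following example atomically increments a counter value; the cache name, key, and value type are hypothetical.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

public class IncrementProcessor extends AbstractProcessor {
    // Runs on the member that owns the entry; the entry is implicitly locked during processing
    public Object process(InvocableMap.Entry entry) {
        Integer current = (Integer) entry.getValue();
        Integer next = (current == null) ? Integer.valueOf(1) : Integer.valueOf(current.intValue() + 1);
        entry.setValue(next);
        return next;
    }

    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("counters");
        // Invoke the processor against a single key; the update is performed atomically
        Object result = cache.invoke("page-hits", new IncrementProcessor());
        System.out.println("Counter is now " + result);
        CacheFactory.shutdown();
    }
}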
C++/.NET Integration
Coherence provides support for cross-platform clients (over TCP/IP). All clients use the same wire protocol (the servers do not differentiate between client platforms). Also, note that there are no third-party components in any of these clients (such as embedded JVMs or language bridges). The wire protocol supports event feeds and coherent in-process caching for all client platforms. See Developing Remote Clients for Oracle Coherence for complete instructions on using Coherence*Extend to support remote C++ and .NET clients.
Installing Oracle Coherence for Java
This chapter provides instructions for installing and running Oracle Coherence for Java (simply referred to as Coherence). The chapter does not include instructions for installing Coherence*Extend client distributions (C++ and .NET) or Coherence*Web. Refer to the Developing Remote Clients for Oracle Coherence and the Administering HTTP Session Management with Oracle Coherence*Web, respectively, for instructions on installing these components. This chapter includes the following sections:
System Requirements
Performing a Coherence Installation
Browsing the Installation Directory
Setting Environment Variables
Running Coherence for the First Time
Integration with Maven
Deinstalling Coherence
System Requirements
The following are the minimum requirements for running the Coherence installer:
300 MHz CPU
512 MB swap space
256 color monitor (required for GUI-based installation only)
Java Development Kit (JDK) 1.6.0_4 or later
The following are the suggested minimum system requirements for running Coherence in a development environment:
100 MB disk space for complete installation (includes API documentation and examples)
1 GB of RAM (assuming a maximum Java heap size of 512MB). This amount of RAM can ideally support a maximum cache size of 150MB on a single node that is configured to store a backup of all data (150MB x 2) and leaves more than 1/3 of the heap available for scratch and JVM tasks. See Administering Oracle Coherence for recommendations on calculating cache size.
Cache servers and clients: JVM 1.7 or later
Coherence*Extend clients: JVM 1.6 or later
Windows or UNIX-based system that supports the required Java version
Network adapter
coherence_version.jar: A full Coherence installation that can be run in either graphical mode or silent mode. See Running the Coherence Installer.
coherence_quick_version.jar: A minimum Coherence installation that is always run in silent mode. The quick installer provides a smaller footprint and does not include API documentation or examples. See Running the Coherence Quick Installer.
coherence_quick_supp_version.jar: A supplemental installation that is always run in silent mode. The supplemental installer contains only API documentation and examples. See Running the Coherence Supplemental Installer.
wls_version.jar: A full WebLogic Server installation that includes Coherence. See Installing Coherence with WebLogic Server.
Coherence is always installed to an ORACLE_HOME/coherence directory. The complete path to the coherence directory is referred to as COHERENCE_HOME throughout the Coherence documentation.
Performing a Coherence Installation In Graphical Mode
Performing a Coherence Installation In Silent Mode
1. Copy the coherence_version.jar file to the target computer.
2. From a command prompt, change directories to the location of the coherence_version.jar file and execute the following command (assuming that JAVA_HOME/bin is located on the computer's PATH):
java -jar coherence_version.jar
The Oracle Coherence installer starts and the Welcome screen displays. On UNIX-based platforms, you are first prompted (prior to the Welcome screen) to select an inventory directory where Oracle product inventory data is written. Follow the on-screen instructions to create an inventory directory.
3. Click Next. The Installation Location screen displays. Use the drop-down list to select an existing ORACLE_HOME directory to which Coherence will be installed, or enter an absolute path to create a new Coherence ORACLE_HOME directory. Click Browse to search for a directory if required.
4. Click Next. The Installation Type screen displays. Select which options to install.
5. Click Next. The Installation Summary screen displays. Verify the installation. Click Save Response File if you intend to duplicate this installation on additional computers. A response file is created that can be used to perform a silent install with the exact same installation settings. For details on performing a silent install, see "Performing a Coherence Installation In Silent Mode" on page 2-3.
6. Click Install. The Installation Progress screen displays and shows all tasks that have succeeded and failed.
7. Click Next. The Installation Complete screen displays and shows a summary of the installation.
8. Click Finish to close the Oracle Coherence installer.
1. Copy the coherence_version.jar file and a response file to the target computer.
2. From a command prompt, change directories to the location of the coherence_version.jar file and execute the following command (assuming that JAVA_HOME/bin is located on the computer's PATH):
java -jar coherence_version.jar -silent -responseFile full_path_to_response_file -waitForCompletion
On UNIX-based platforms, the installer requires the location of the oraInst.loc inventory directory pointer file if it is not found in the default location (/etc). If this is the first time that an Oracle product has been installed on this computer, you can use the createCentralInventory.sh script to set up an inventory directory pointer file in the /etc directory. The script requires root permissions. If you want to use a custom location for the oraInst.loc file, use the -invPtrLoc installer option to specify the location. For example:
java -jar coherence_version.jar -silent -responseFile full_path_to_response_file -waitForCompletion -invPtrLoc /MyDirectory/oraInst.loc
The oraInst.loc file contains the location and the ownership group for the inventory directory. For example:
inventory_loc=/MyDirectory/oraInventory inst_group=group
1. Copy the coherence_quick_version.jar file to a directory on the target computer.
2. From a command prompt, change directories to the location of the coherence_quick_version.jar file and execute the following command (assuming that JAVA_HOME/bin is located on the computer's PATH):
java -jar coherence_quick_version.jar ORACLE_HOME=/oracle
The value of the ORACLE_HOME variable specifies the ORACLE_HOME directory to which Coherence will be installed. The value must be an absolute path. If the directory does not exist, the installer creates the directory. If the directory already exists, it must be empty or it must be an existing valid ORACLE_HOME. You can also start the installation from an empty current working directory and omit the ORACLE_HOME variable; the current working directory becomes the ORACLE_HOME directory. For example:
cd /oracle
java -jar /tmp/coherence_quick_version.jar
On UNIX-based platforms, the quick installer attempts to find the oraInst.loc inventory directory pointer file in the /etc directory. If the file is not found, the /tmp directory is used as the inventory directory. If this is the first time that an Oracle product has been installed on this computer, you can use the createCentralInventory.sh script to set up an inventory directory pointer file in the /etc directory. The script requires root permissions. If you want to use a custom location for the oraInst.loc file, use the -invPtrLoc installer option to specify the location. For example:
java -jar coherence_quick_version.jar -invPtrLoc /MyDirectory/oraInst.loc
The oraInst.loc file contains the location and the ownership group for the inventory directory. For example:
inventory_loc=/MyDirectory/oraInventory inst_group=group
If you do not require the API documentation or code examples, then you can skip the supplemental installation.
1. Copy the coherence_quick_supp_version.jar file to the ORACLE_HOME directory where Coherence is installed.
2. From a command prompt, change directories to the location of the coherence_quick_supp_version.jar file and execute the following command (assuming that JAVA_HOME/bin is located on the computer's PATH):
java -jar coherence_quick_supp_version.jar
bin: This directory includes a set of common scripts for performing different tasks, such as: starting a cache server, starting development tools, and performing network tests. The scripts are provided in both Windows (.cmd) and UNIX-based (.sh) formats.
doc: This directory contains the Coherence Java API Reference and a link to the Coherence documentation on the Oracle Technology Network (OTN). The Coherence Java API Reference is distributed as a JAR file and must be extracted. The JAR can also be imported into an IDE for easy access during development. To extract the Coherence Java API Reference, execute the following command from the /api directory (assuming that JAVA_HOME/bin is located on the computer's PATH):
jar -xvf CoherenceJavaDoc-version.jar
example: This directory contains a set of examples that demonstrate many Coherence features and how to use the Coherence API. For detailed instructions on building and running the examples, see: http://wiki.tangosol.com/display/EXAMPLES/Home
lib: This directory includes all delivered libraries. The coherence.jar library is the main development and run-time library and is discussed in detail throughout this documentation.
plugins: This directory contains Maven plugins that are used to integrate Coherence as part of a Maven build process. For details on Coherence Maven plugins, see "Integration with Maven" on page 2-7.
JAVA_HOME: This variable is used when running the scripts that are included in the COHERENCE_HOME/bin directory. The value of this variable is the full path to the Java installation directory. If JAVA_HOME is not set, the scripts use the computer's default Java installation. Set this variable to ensure that the scripts use a specific Java version.
COHERENCE_HOME: This variable is typically set as a convenience. The value of this variable is the full path to the ORACLE_HOME/coherence directory.
1. Using a text editor, open the COHERENCE_HOME/bin/cache-server script.
2. Modify the java_opts variable to include the tangosol.coherence.cluster and the tangosol.coherence.clusterport system properties as follows:
set java_opts="-Xms%memory% -Xmx%memory% -Dtangosol.coherence.cluster=cluster_name -Dtangosol.coherence.clusterport=port"
Replace cluster_name and port with values that are unique for this cluster. For example, use your name for the cluster name and the last four digits of your phone number for the port.
3. Save and close the cache-server script.
4. Repeat steps 1 to 3 for the COHERENCE_HOME/bin/coherence script.
5. Run the cache-server script. The cache server starts and output is emitted that provides information about this cluster member.
6. Run 2 instances of the coherence script. As each instance is started, output is emitted that provides information about the respective cluster members. Each instance returns a command prompt for the command-line tool.
Create a Cache
In this step, a cache is created and hosted on the basic cluster. A simple string is entered into the cache using the command-line tool of the first cache factory instance. The string is then retrieved from the cache using the command-line tool of the second cache factory instance. The example is simplistic and not very practical, but it does quickly demonstrate the distributed nature of Coherence caches. Moreover, these steps are typically performed directly using the Coherence API. To create a cache:
1. At the command prompt for either cache factory instance, create a cache named Test using the cache command:
cache Test
2. At the command prompt, use the put command to place a simple string in the new cache by entering a key/value pair (separated by a space):
put key1 Hello
The command returns and displays null. The put command always returns the previous value for a given key. The null value is returned because this is the first value entered for this key.
3. Switch to the other cache factory instance and from the command prompt create the Test cache using the cache command:
cache Test
4. From this command prompt, retrieve the string in the cache using the get command and entering the key name:
get key1
The command returns and displays Hello. Either cache factory process can add or remove cache entries because the processes are part of the same cluster and because the Test cache is known to all cluster members. In addition, since the cache server is storing a backup of the cache data, either cache factory process (or both) can be shut down and the cache data persists.
Integration with Maven
The Maven integration includes a synchronization plug-in. The synchronization plug-in allows Coherence artifacts to be uploaded to a Maven repository, which simplifies how the artifacts are consumed in development projects. For details about setting up Maven and using the synchronization plug-in, see Developing Applications Using Continuous Integration. In addition, the Maven integration includes an archetype and packaging plug-in for a Coherence Grid Archive (GAR). A Coherence GAR is a module type that is typically used to deploy Coherence applications within a WLS domain. The Maven archetype plug-in generates a GAR structure and provides example configuration files. The packaging plug-in generates a GAR based on a project's contents and dependencies and ensures that the dependencies, source, and configuration files are copied into the GAR. The Maven plug-in and configuration files for Coherence are located in the COHERENCE_HOME/plugins directory. The Maven GAR plug-in and archetype are installed in the enterprise repository as part of the synchronization plug-in. For instructions on using the plug-in to incorporate Coherence into a build process, see Developing Applications Using Continuous Integration.
Deinstalling Coherence
Coherence is deinstalled using the Oracle Fusion Middleware deinstaller. The deinstaller allows you to select which components in a Coherence ORACLE_HOME directory to deinstall and can also be used to completely remove a Coherence ORACLE_ HOME directory. To deinstall Coherence using the deinstallation wizard:
1. Start the deinstaller using the Coherence ORACLE_HOME/oui/bin/deinstall.sh script on UNIX-based platforms or the Coherence ORACLE_HOME\oui\bin\deinstall.cmd script on Windows. A shortcut to the script is available on Windows and is located in the Oracle program group on the start menu. The Oracle Fusion Middleware Deinstaller starts and the Welcome screen displays.
2. Click Next. The Deinstallation Summary screen displays and lists the features that will be deinstalled.
3. Click Deinstall. The Deinstallation Progress screen displays and shows all tasks that have succeeded and failed.
4. Click Next. The Deinstallation Complete screen displays and shows a summary of the deinstallation.
5. Click Finish to close the Oracle Fusion Middleware Deinstaller.
Note: Additional files in the ORACLE_HOME directory must be manually deleted. On Windows, you must also manually delete the Oracle program group on the Start menu.
Understanding Configuration
This chapter describes each of the default configuration files that are distributed with Coherence and details how applications and solutions override these files when creating their own Coherence configurations. This chapter includes the following sections:
Overview of the Default Configuration Files
Specifying an Operational Configuration File
Specifying a Cache Configuration File
Specifying a POF Configuration File
Specifying Management Configuration Files
Disabling Schema Validation
Understanding the XML Override Feature
Changing Configuration Using System Properties
tangosol-coherence.xml: This file provides operational and run-time settings and is used to create and configure cluster, communication, and data management services. This file is typically referred to as the operational deployment descriptor. The schema for this file is the coherence-operational-config.xsd file. See Appendix A, "Operational Configuration Elements," for a complete reference of the elements in the operational deployment descriptor.
tangosol-coherence-override-dev.xml: This file overrides operational settings in the tangosol-coherence.xml file when Coherence is started in developer mode. By default, Coherence is started in developer mode and the settings in this file are used. The settings in this file are suitable for development environments. The
schema file for this override file and the schema for the operational deployment descriptor are the same.
tangosol-coherence-override-eval.xml: This file overrides operational settings in the tangosol-coherence.xml file when Coherence is started in evaluation mode. The settings in this file are suitable for evaluating Coherence. The schema file for this override file and the schema for the operational deployment descriptor are the same.
tangosol-coherence-override-prod.xml: This file overrides operational settings in the tangosol-coherence.xml file when Coherence is started in production mode. The settings in this file are suitable for production environments. The schema file for this override file and the schema for the operational deployment descriptor are the same.
coherence-cache-config.xml: This file is used to specify the various types of caches which can be used within a cluster. This file is typically referred to as the cache configuration deployment descriptor. The schema for this file is the coherence-cache-config.xsd file. See Appendix B, "Cache Configuration Elements," for a complete reference of the elements in this file.
coherence-pof-config.xml: This file is used to specify custom data types when using Portable Object Format (POF) to serialize objects. This file is typically referred to as the POF configuration deployment descriptor. The schema for this file is the coherence-pof-config.xsd file. See Appendix D, "POF User Type Configuration Elements," for a complete reference of the elements in this file.
Management configuration files: A set of files that are used to configure Coherence management reports. The files are located in the /reports directory within coherence.jar. The files include a report group configuration file (report-group.xml, the default), which refers to any number of report definition files. Each report definition file results in the creation of a report file that displays management information based on a particular set of metrics. The schemas for these files are the coherence-report-config.xsd file and the coherence-report-group-config.xsd file, respectively. See Managing Oracle Coherence for detailed information on using reports and a reference for the elements in these configuration files.
Using the Default Operational Override File
Specifying an Operational Override File
Defining Override Files for Specific Operational Elements
Viewing Which Operational Override Files are Loaded
Refer to Part II, "Using Coherence Clusters" for detailed instructions on configuring the operational run time.
1. Create a file named tangosol-coherence-override.xml.
2. Edit the file and add any operational elements that are to be overridden. The following example configures a cluster name and overrides the default cluster name:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <member-identity> <cluster-name system-property="tangosol.coherence.cluster"> MyCluster</cluster-name> </member-identity> </cluster-config> </coherence>
3. Save and close the file.
4. Make sure the operational override file is located in the classpath at run time. The following example demonstrates starting a cache server that uses an override file that is located in COHERENCE_HOME.
java -cp COHERENCE_HOME;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
Tip: When using the cache-server and coherence scripts during development, add the location of the tangosol-coherence-override.xml file to the classpath using the Java -cp argument in each of the scripts.
Specifying an Operational Override File
An operational override file has the same structure as the operational deployment descriptor, except that all elements are optional. Any missing elements are loaded from the tangosol-coherence.xml file. The tangosol.coherence.override system property provides an easy way to switch between different operational configurations and is convenient during development and testing. To specify an operational override file:
1. Create a text file.
2. Edit the file and add any operational elements that are to be overridden. The following example configures the multicast port number:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <multicast-listener> <port system-property="tangosol.coherence.clusterport">3059</port> </multicast-listener> </cluster-config> </coherence>
3. Save the file as an XML file and close the file.
4. Specify the name of the operational override file as a value of the tangosol.coherence.override system property. If the file is not located in the classpath, enter the full (or relative) path to the file and the name. The system property also supports the use of a URL when specifying the location of an operational override file. The following example demonstrates starting a cache server and using an operational override file that is named cluster.xml, which is located in COHERENCE_HOME.
java -Dtangosol.coherence.override=cluster.xml -cp COHERENCE_HOME;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
1. Create a tangosol-coherence-override.xml file as described in "Using the Default Operational Override File" on page 3-3.
2. Add an xml-override attribute to an element that is to be overridden. The value of the xml-override attribute is the name of an override file. The following example defines an override file named cluster-config.xml that is used to override the <cluster-config> element.
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config xml-override="/cluster-config.xml"> ... </cluster-config> </coherence>
3. Save and close the file.
4. Create a text file.
5. Edit the file and add an XML node that corresponds to the element that is to be overridden. The XML root element must match the element that is to be overridden. Using the example from step 2, the following node is created to override the <cluster-config> element and specifies a multicast join timeout.
<?xml version='1.0'?> <cluster-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <multicast-listener> <join-timeout-milliseconds>4000</join-timeout-milliseconds> </multicast-listener> </cluster-config>
6. Save the file as an XML file with the same name used in the xml-override attribute.
7. Make sure both override files are located in the classpath at run time. The following example demonstrates starting a cache server that uses override files that are located in COHERENCE_HOME.
java -cp COHERENCE_HOME;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
coherence.jar!/tangosol-coherence.xml" Loaded operational overrides from resource "jar:file:/D:/coherence/lib/ coherence.jar!/tangosol-coherence-override-dev.xml" Loaded operational overrides from resource "file:/D:/coherence/ tangosol-coherence-override.xml" Optional configuration override "/cluster-config.xml" is not specified Optional configuration override "/custom-mbeans.xml" is not specified
The above output indicates that the operational deployment descriptor included in coherence.jar was loaded and that settings in this file are overridden by two loaded override files: tangosol-coherence-override-dev.xml and tangosol-coherence-override.xml. In addition, two override files were defined for specific operational elements but were not found or loaded at run time.
It is recommended (although not required) that all cache server nodes within a cluster use identical cache configuration descriptors. Coherence requires a cache configuration deployment descriptor to start. If the cache configuration deployment descriptor is not found at run time, an error message indicates that there was a failure loading the configuration resource and also provides the name and location for the file that was not found.
Using a Default Cache Configuration File
Overriding the Default Cache Configuration File
Using the Cache Configuration File System Property
Viewing Which Cache Configuration File is Loaded
Refer to Part III, "Using Caches" for detailed instructions on configuring caches.
1. Make a copy of the sample coherence-cache-config.xml file that is located in the coherence.jar and save it to a different location. The cache definitions that are included in the sample file are for demonstration purposes and are used as a starting point for creating solution-specific cache configurations.
2. Ensure that the location where the coherence-cache-config.xml file is saved is in the classpath at run time and that the location precedes the coherence.jar file in the classpath. The following example demonstrates starting a cache server that uses a coherence-cache-config.xml cache configuration file that is located in COHERENCE_HOME.
java -cp COHERENCE_HOME;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
1. Make a copy of the default coherence-cache-config.xml cache configuration file that is located in the coherence.jar and save it to a location with a different name.
2. Create a tangosol-coherence-override.xml file as described in "Using the Default Operational Override File" on page 3-3.
3. Edit the operational override file and enter a <configurable-cache-factory-config> node that specifies the name of the cache configuration file created in step 1. If the cache configuration file is not located in the classpath, enter the full (or relative) path to the file as well. The element also supports the use of a URL when specifying the location of a cache configuration file. The following example specifies a cache configuration deployment descriptor called MyConfig.xml.
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <configurable-cache-factory-config> <init-params> <init-param> <param-type>java.lang.String</param-type> <param-value system-property="tangosol.coherence.cacheconfig"> MyConfig.xml</param-value> </init-param> </init-params> </configurable-cache-factory-config> </coherence>
4. Save and close the file.
5. Ensure that the location of the operational override file is located in the classpath at run time. The following example demonstrates starting a cache server using an operational override file and a custom cache configuration file that are located in COHERENCE_HOME.
java -cp COHERENCE_HOME;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
Note:
It is recommended that all nodes within a cluster use identical POF configuration deployment descriptors. A POF configuration deployment descriptor is only loaded if the POF serializer is either configured as part of a cache scheme or configured globally for all cache schemes. The default coherence-cache-config.xml provides an example cache scheme that defines the POF serializer, but it is commented out by default.
Overriding the Default POF Configuration File
Using the POF Configuration File System Property
Combining Multiple POF Configuration Files
Viewing Which POF Configuration Files are Loaded
Refer to Chapter 20, "Using Portable Object Format" for detailed instructions on configuring POF user types.
1. Create an XML file.
2. Edit the file and create a <pof-config> node that includes the default Coherence POF user types:
<?xml version="1.0"?> <pof-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-pof-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-pof-config coherence-pof-config.xsd"> <user-type-list> <include>coherence-pof-config.xml</include> </user-type-list> </pof-config>
3. Save the file as pof-config.xml and close the file.
4. Ensure that the location of the POF configuration file is located in the classpath at run time.
The following example demonstrates starting a cache server and using a POF configuration file that is located in COHERENCE_HOME.
java -cp COHERENCE_HOME;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
1. Create an XML file.
2. Edit the file and create a <pof-config> node that includes the default Coherence POF user types:
<?xml version="1.0"?> <pof-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-pof-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-pof-config coherence-pof-config.xsd"> <user-type-list> <include>coherence-pof-config.xml</include> </user-type-list> </pof-config>
3. Save and close the file.
4. Enter the name of the file as a value of the tangosol.pof.config system property. This is typically done as a -D Java option when starting a Coherence node. If the file is not located in the classpath, enter the full (or relative) path to the file and the name. The system property also supports the use of a URL when specifying the location of a POF configuration file. The following example starts a cache server and specifies a POF configuration deployment descriptor called MyPOF.xml that is located in COHERENCE_HOME.
java -Dtangosol.pof.config=MyPOF.xml -cp COHERENCE_HOME;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
1. Open an existing POF configuration file that is being loaded at startup.
2. Add an <include> element whose value is the name of a POF configuration file. If the file is not located in the classpath, enter the full (or relative) path to the file and the name. A URL can also be used to locate the file. The following example combines two POF configuration files in addition to the default Coherence POF configuration file:
<?xml version='1.0'?> <pof-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-pof-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-pof-config coherence-pof-config.xsd"> <user-type-list> <include>coherence-pof-config.xml</include> <include>hr-pof-config.xml</include> <include>crm-pof-config.xml</include> </user-type-list> </pof-config>
3. Save and close the file.
4. If required, ensure that the POF configuration files are located in the classpath at run time. The following example demonstrates starting a cache server that uses POF configuration files that are located in COHERENCE_HOME.
java -cp COHERENCE_HOME;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
report group configuration file: A report group configuration file is used to list the name and location of report definition files and the output directory where
reports are written. The name and location of this file is defined in the operational deployment descriptor. By default, the report-group.xml file is used and is located in the /reports directory of the coherence.jar. Additional report group configuration files are provided and custom report group files can be created as required.
report configuration files: A report configuration file defines a report and results in the creation of a report file that displays management information for a particular set of metrics. Report configuration files must be referenced in a report group configuration file to be used at run time. The default report configuration files are located in the /reports directory of the coherence.jar and are referenced by the default report group configuration file. Custom report configuration files can be created as required.
custom-mbeans.xml: This file is the default MBean configuration override file and is used to define custom MBeans (that is, application-level MBeans) within the Coherence JMX management and monitoring framework. This allows any application-level MBean to be managed and monitored from any node within the cluster. Custom MBeans can be defined within the operational override file. However, the MBean configuration override file is typically used instead.
Specifying a Custom Report Group Configuration File
Specifying an MBean Configuration File
Viewing Which Management Configuration Files are Loaded
The report group configuration file is only loaded if JMX management is enabled. The examples in this section demonstrate enabling JMX management on nodes that host an MBean server.
1. Create an XML file.
2. Edit the file and create a <report-group> node as follows. This example configures a single report.
<?xml version='1.0'?> <report-group xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-report-group-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-report-group-config coherence-report-group-config.xsd">
3. Save and close the file.
4. Create a tangosol-coherence-override.xml file as described in "Using the Default Operational Override File" on page 3-3.
5. Edit the file and enter a <management-config> node that specifies the name of the report group configuration file. If the report group configuration file is not located in the classpath, enter the full (or relative) path to the file as well. The element also supports the use of a URL when specifying the location of a report group configuration file. The following example enables JMX management and specifies a report group configuration deployment descriptor called my-group.xml.
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <management-config> <managed-nodes system-property="tangosol.coherence.management">all </managed-nodes> <reporter> <configuration system-property="tangosol.coherence.management.report. configuration">my-group.xml</configuration> </reporter> </management-config> </coherence>
6. Save and close the file.
7. Ensure that the location of the operational override file is located in the classpath at run time. The following example demonstrates starting a cache server using an operational override file and a report group configuration file that are located in COHERENCE_HOME.
java -cp COHERENCE_HOME;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
If the file is not located in the classpath, enter the full (or relative) path to the file and the name. The system property also supports the use of a URL when specifying the location of a report group configuration file. The following example starts a cache server, enables JMX management, and specifies a report group configuration file that is named my-group.xml and is located in COHERENCE_HOME.
java -Dtangosol.coherence.management=all -Dtangosol.coherence.management.report.configuration=my-group.xml -cp COHERENCE_HOME;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
1. Create a file named custom-mbeans.xml.
2. Edit the file and create an empty <mbeans> node as follows:
<?xml version='1.0'?> <mbeans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> </mbeans>
3. Save and close the file.
4. Make sure the custom MBean configuration override file is located in the classpath at run time. The following example demonstrates starting a cache server that uses a default MBean configuration override file that is located in COHERENCE_HOME.
java -cp COHERENCE_HOME;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
If the file is not located in the classpath, enter the full (or relative) path to the file and the name. The system property also supports the use of a URL when specifying the location of an MBean configuration override file. The following example starts a cache server and specifies an MBean configuration override file that is named my-mbeans.xml and is located in COHERENCE_HOME.
java -Dtangosol.coherence.mbeans=my-mbeans.xml -cp COHERENCE_HOME;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
MBean Configuration Override File
The MBean configuration message is emitted with the other operational override messages and is among the first messages to be emitted when starting a process. The output is especially helpful when using override files and is often useful when developing and testing Coherence applications and solutions. The following example output demonstrates an operational override message that indicates the default MBean configuration override file is loaded:
Loaded operational overrides from resource "file:/D:/coherence/custom-mbeans.xml"
Note:
When schema validation is disabled, Coherence only fails if the XML is malformed. Syntactical errors are ignored and may not be immediately apparent.
Using the Predefined Override Files
Defining Custom Override Files
Defining Multiple Override Files for the Same Element
tangosol-coherence-override.xml: This override file is defined for the <coherence> root element and is used to override any element in the operational deployment descriptor. The root element in this file must be the <coherence> element.
custom-mbeans.xml: This override file is defined for the <mbeans> element and is used to add custom MBeans to the operational deployment descriptor. The root element in this file must be the <mbeans> element.
cache-factory-config.xml: This override file is defined for the <configurable-cache-factory-config> element and is used to customize a configurable cache factory. This override file is typically only used to support container integrations. The <configurable-cache-factory-config> element is not commonly overridden.
cache-factory-builder-config.xml: This override file is defined for the <cache-factory-builder-config> element and is used to customize a cache factory builder. This override file is typically only used to support container integrations. The <cache-factory-builder-config> element is not commonly overridden.
The following example demonstrates a tangosol-coherence-override.xml file that is used to override the default cluster name. All other operational settings are loaded from the tangosol-coherence.xml file.
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <member-identity> <cluster-name system-property="tangosol.coherence.cluster">MyCluster </cluster-name> </member-identity> </cluster-config> </coherence>
The following example demonstrates a tangosol-coherence-override.xml file that is used to disable local storage for the distributed cache service on this node. Notice the use of an id attribute to differentiate an element that can have multiple occurrences. The id attribute must match the id attribute of the element being overridden.
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <services> <service id="3"> <init-params> <init-param id="4"> <param-name>local-storage</param-name> <param-value system-property="tangosol.coherence.distributed. localstorage">false</param-value> </init-param> </init-params> </service> </services> </cluster-config> </coherence>
The following example demonstrates a custom-mbean.xml file that adds a standard MBean definition to the list of MBeans.
<mbeans xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <mbean id="100"> <mbean-class>com.oracle.customMBeans.Query</mbean-class> <mbean-name>type=Query</mbean-name> <enabled>true</enabled> </mbean> </mbeans>
To use this override file, create a document named cluster-config.xml and ensure that it and the base document (tangosol-coherence-override.xml in this case) are located in a directory that is in the classpath at run time. For this example, the override file's root element must be <cluster-config> as shown below.
<?xml version='1.0'?> <cluster-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <multicast-listener> <join-timeout-milliseconds>4000</join-timeout-milliseconds> </multicast-listener> </cluster-config>
An id attribute is used to distinguish elements that can occur multiple times. For example, to define a custom override file named dist-service-config.xml that is used to override the <service> element for the distributed cache service, add an xml-override attribute to the <service> element whose id is 3, as shown below:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <services> <service id="3" xml-override="/dist-service-config.xml"> </service> </services> </cluster-config> </coherence>
To use this override file, create a document named dist-service-config.xml and ensure that it is located in a directory that is in the classpath at run time. For this example, the override file's root element must be <service> as shown below.
<?xml version='1.0'?> <service id="3" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <init-params> <init-param id="1"> <param-name>standard-lease-milliseconds</param-name> <param-value>2</param-value> </init-param> </init-params> </service>
Note:
If the element's id in the override document does not have a match in the base document, the elements are just appended to the base document.
The mode-specific override files are also located in coherence.jar and are used depending on the Coherence start mode (the value of the <license-mode> element).
Each of the mode-specific operational override files, in turn, defines the default operational override file as follows:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd"
   xml-override="/tangosol-coherence-override.xml">
   ...
</coherence>
A fourth override file can be defined for the <coherence> element in the tangosol-coherence-override.xml file. For example:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd"
   xml-override="/tangosol-coherence-override-staging.xml">
   ...
</coherence>
The chain can continue as required. The files are all loaded at run time if they are placed in a location in the classpath. Files higher up in the chain always override files below in the chain.
... </logging-config>
To use a preconfigured system property, add the system property as a Java -D option at startup. For the above example, the log level system property is specified as follows when starting a cache server:
java -Dtangosol.coherence.log.level=3 -cp COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
Note:
When using an operational override file or creating a custom cache configuration file, the preconfigured system properties must always be included along with the element that is being overridden; otherwise, the property is no longer available.
Defining a System Property for a Cache Configuration element The following example defines a system property called cache.name for a <cache-name> element and is added to a custom cache configuration file:
<?xml version='1.0'?> <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">
Changing a Preconfigured System Property The following example changes the preconfigured system property name for the <cluster-name> operational element and is added to an operational override file:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <member-identity> <cluster-name system-property="myapp.cluster.name"></cluster-name> </member-identity> </cluster-config> </coherence>
Note:
To remove a system property, delete the system property attribute from the element. If a system property is used at run time and it does not exist, it is disregarded.
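With an override such as the one above in place, the renamed property can then be supplied at startup in the same way as the preconfigured properties shown earlier; for example (the value and classpath are illustrative):

java -Dmyapp.cluster.name=MyCluster -cp COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer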
4
Building Your First Coherence Application
The example in this chapter is basic and is only intended to teach general concepts. For more advanced examples, refer to the examples included with the installation. The examples are also distributed as part of the Coherence supplemental installation. For detailed instructions on building and running the examples, see: http://wiki.tangosol.com/display/EXAMPLES/Home
Task 1: Define the Example Cache
Task 2: Configure and Start the Example Cluster
Task 3: Create and Run a Basic Coherence Standalone Application
Task 4: Create and Run a Basic Coherence JavaEE Web Application
Using JDeveloper for Coherence Development
Create an XML file named example-config.xml. Copy the following distributed cache definition to the file:
<?xml version="1.0"?> <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
4-1
xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd"> <caching-scheme-mapping> <cache-mapping> <cache-name>hello-example</cache-name> <scheme-name>distributed</scheme-name> </cache-mapping> </caching-scheme-mapping> <caching-schemes> <distributed-scheme> <scheme-name>distributed</scheme-name> <service-name>DistributedCache</service-name> <backing-map-scheme> <local-scheme/> </backing-map-scheme> <autostart>true</autostart> </distributed-scheme> </caching-schemes> </cache-config> 3.
Create a file named tangosol-coherence-override.xml. Add the following override configuration and replace cluster_name and port with values that are unique for this cluster. For example, use your name for the cluster name and the last four digits of your phone number for the port.
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <member-identity>
         <cluster-name>cluster_name</cluster-name>
      </member-identity>
      <multicast-listener>
         <address>224.3.6.0</address>
         <port>port</port>
         <time-to-live>0</time-to-live>
      </multicast-listener>
   </cluster-config>
   <configurable-cache-factory-config>
      <init-params>
         <init-param>
            <param-type>java.lang.String</param-type>
            <param-value system-property="tangosol.coherence.cacheconfig">
               example-config.xml</param-value>
         </init-param>
      </init-params>
   </configurable-cache-factory-config>
</coherence>
3. Save the file to the same directory where the example-config.xml file was saved.
4. From a command prompt, start a cache server instance using the DefaultCacheServer class and include the location of the coherence.jar library and the configuration files as a Java -cp option. For example:
java -cp COHERENCE_HOME\config;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer
From a command prompt, compile the Hello World application. For example:
javac -cp COHERENCE_HOME\lib\coherence.jar com\examples\HelloWorld.java
2.
Run the Hello World application and include the location of the coherence.jar library and the configuration files as a Java -cp option. In addition, restrict the client from locally storing partitioned data. For example:
java -cp COHERENCE_HOME\config;COHERENCE_HOME\lib\coherence.jar -Dtangosol.coherence.distributed.localstorage=false com.examples.HelloWorld
The Hello World application starts. The cache factory instance is created and becomes a member of the cluster. The k1 key with the Hello World! value is loaded into the hello-example cache. The key is then retrieved from the cache and the value is emitted as part of the output. Lastly, the cache factory is shut down and leaves the cluster before the Hello World application exits.
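The HelloWorld class can be quite small. The following is a minimal sketch of a class that performs the steps just described, assuming the com.examples package used in the compile command and the hello-example cache defined in example-config.xml:

package com.examples;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class HelloWorld {
    public static void main(String[] args) {
        // Join (or create) the cluster and obtain the configured cache
        CacheFactory.ensureCluster();
        NamedCache cache = CacheFactory.getCache("hello-example");

        // Load a key into the distributed cache and read it back
        cache.put("k1", "Hello World!");
        System.out.println((String) cache.get("k1"));

        // Leave the cluster before the application exits
        CacheFactory.shutdown();
    }
}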
From a command prompt, start a standalone cache factory instance using the CacheFactory class and include the location of the coherence.jar library and the configuration files as a Java -cp option. For example:
java -cp COHERENCE_HOME\config;COHERENCE_HOME\lib\coherence.jar -Dtangosol.coherence.distributed.localstorage=false com.tangosol.net.CacheFactory
The cache factory instance starts and becomes a member of the cluster and returns a command prompt for the command-line tool.
2.
At the command-line tool command prompt, get the hello-example cache using the cache command:
cache hello-example
3.
At the command-line tool command prompt, retrieve the contents of the cache using the list command.
list
WebLogic Server includes a Coherence integration that standardizes the packaging and deployment of Coherence applications. For WebLogic Server instructions about packaging and deploying Coherence applications, see Administering Oracle Coherence. The instructions in this section are not specific to, or recommended for, WebLogic Server.
2.
Copy the below JSP to a text file and save the file as hello.jsp in the root of the Web application directory.
Example 4-2 The Sample Hello World JSP

<html>
   <head>
      <title>My First Coherence Cache</title>
   </head>
   <body>
      <h1>
      <%@ page language="java"
         import="com.tangosol.net.CacheFactory,
                 com.tangosol.net.NamedCache" %>
      <%
         String key = "k2";
         String value = "Hello World!";
         CacheFactory.ensureCluster();
         NamedCache cache = CacheFactory.getCache("hello-example");
Copy the following empty Web application deployment descriptor to a text file and save the file as web.xml in the /WEB-INF directory.
<?xml version = '1.0' ?> <web-app/>
4. Copy the coherence.jar file to the WEB-INF/lib directory.
5. Copy the example-config.xml file and the tangosol-coherence-override.xml file to the WEB-INF/classes directory.
6. Create a Web ARchive file (WAR) using the jar utility and save the file as hello.war. For example, issue the following command from a command prompt at the root of the Web application directory:
jar -cvf hello.war *
7. Deploy the hello.war file to a JavaEE server.
8. From a browser, run the Hello World application by accessing the hello.jsp file using the following URL. Substitute host and port with values specific to the deployment.
http://host:port/hello/hello.jsp
The Hello World application starts. The cache factory instance is created and becomes a member of the cluster. The k2 key with the Hello World! value is loaded into the hello-example cache. The key is then retrieved from the cache and the value is displayed in the browser. Lastly, the cache factory shuts down and leaves the cluster.
This step uses the cache factory command-line tool to connect to the hello-example cache and list all items in the cache. It demonstrates both the persistent and distributed nature of Coherence caches. To verify the cache:
1.
From a command prompt, start a standalone cache factory instance using the CacheFactory class and include the location of the coherence.jar library and the configuration files as a Java -cp option. For example:
java -cp COHERENCE_HOME\config;COHERENCE_HOME\lib\coherence.jar -Dtangosol.coherence.distributed.localstorage=false com.tangosol.net.CacheFactory
The cache factory instance starts and becomes a member of the cluster and returns a command prompt for the command-line tool.
2.
At the command-line tool command prompt, get the hello-example cache using the cache command:
cache hello-example
3.
At the command-line tool command prompt, retrieve the contents of the cache using the list command.
list
Running Coherence in JDeveloper
Viewing Thread Dumps in JDeveloper
Creating Configuration Files in JDeveloper
1. In JDeveloper, create a new Generic Application, which includes a single project. If you are new to JDeveloper, consult the Online Help for detailed instructions.
2. In the Application Navigator, double-click the new project. The Project Properties dialog box displays.
3. Select the Libraries and Classpath node. The Libraries and Classpath page displays.
4. On the Libraries and Classpath page, click Add JAR/Directory. The Add Archive or Directory dialog box displays.
5. From the directory tree, select COHERENCE_HOME\lib\coherence.jar and click Select. The coherence.jar library displays in the Classpath Entries list as shown below:
6. From the Project Properties dialog box, select the Run/Debug/Profile node. The Run/Debug/Profile page displays.
7. From the Run/Debug/Profile page, click New. The Create Run Configuration dialog box displays.
8. In the Name text box, enter a name for the new run configuration. In the Copy Settings From drop-down box, choose default. Click OK. The new run configuration displays in the Run Configuration list.
9. From the Run Configuration list, select the new Run Configuration and click Edit. The Edit Run Configuration dialog box displays and the Launch Settings node is selected.
10. From the Launch Settings page, click Browse to select a Default Run Target. The Choose Default Run Target dialog box displays.
11. From the directory tree, select COHERENCE_HOME\lib\coherence.jar\com\tangosol\net\DefaultCacheServer.class and click Open. The DefaultCacheServer class is entered as the default run target as shown below:
Tip: Use the Java Options text box to set Coherence system properties.
12. Select the Tool Settings node. The Tool Settings page displays.
13. From the Additional Runner Options section, click the Allow Program Input check box. A check mark in the box indicates that the option is selected.
14. Click OK.
15. Repeat Steps 6 through 14 and select COHERENCE_HOME\lib\coherence.jar\com\tangosol\net\CacheFactory.class as the default run target.
16. Click OK to close the Project Properties dialog box.
17. Use the Run button drop-down list to select and start the run configuration for the cache server. A cache server instance is started and output is shown in the process's log tab as shown below:
18. Use the Run button drop-down list to select and start the run configuration for the cache. A cache instance is started and output is shown in the process's log tab as shown below.
19. From the Cache Factory's Running Log tab, use the Input text box located at the bottom of the tab to interact with the cache instance. For example, type help and press Enter to see a list of valid commands.
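Beyond help, the same console commands shown earlier in this chapter can be entered in the Input text box as well; for example (the cache name is illustrative):

cache mycache
list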
From a shell or command prompt, use JDK_HOME/bin/jps to get the Process ID (PID) of the Java process for which you want to view a thread dump. On Linux, use kill -3 PID to send a QUIT signal to the process. On Windows, you must use a third-party tool (such as SendSignal) to send a ctrl+break signal to a remote Java process.
1. Extract coherence-cache-config.xsd, coherence-cache-config-base.xsd, coherence-operational-config.xsd, coherence-operational-config-base.xsd, and coherence-config-base.xsd from the COHERENCE_HOME\lib\coherence.jar library to a directory on your computer.
2. In the JDeveloper Application Navigator, double-click your coherence project. The Project Properties dialog box displays.
3. Expand the Project Source Paths node and click Resources. The Resources page displays.
4. In the Resources section, click Add to find and select the directory where you extracted the XSD files.
5. In the Included tab, click Add and select the XSD files. Alternatively, you can allow JDeveloper to include all files in this directory and not explicitly add each file.
6. Click OK. The XSDs are listed in the Included tab as shown below.
7. Click OK to close the Project Properties dialog box. The XSDs are listed in the Application Navigator under the Resources folder for your project.
8. From the File menu, click New. The New Gallery dialog box displays.
9. From the Categories section, expand the General node and click XML.
10. Select XML Document and click OK. The Create XML File dialog box displays.
11. Enter coherence-cache-config.xml as the file name and save it to the same directory where the XSD is located. At run time, this file must be found on the classpath and must be loaded before the coherence.jar file.
12. Click OK. The cache configuration file is created, opened for editing, and listed in the Application Navigator under the resources folder for your project.
13. Add the following schema reference at the beginning of the file:

<?xml version="1.0" ?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
The Component Palette refreshes and lists all the elements available from the coherence-cache-config.xsd file.
14. Save the coherence-cache-config.xml file.
15. Repeat steps 8 through 12 to create an operational override file called tangosol-coherence-override.xml.
The Component Palette refreshes and lists all the elements available from the coherence-operational-config.xsd file.
17. Save the tangosol-coherence-override.xml file.
5
Debugging in Coherence
This chapter provides instructions for configuring logging and includes general advice for debugging and troubleshooting Coherence applications. This chapter includes the following sections:
Overview of Debugging in Coherence
Configuring Logging
Performing Remote Debugging
Troubleshooting Coherence-Based Applications
application code
configuration files
log files for all cluster members
Thread and heap dumps are required under certain circumstances. Thread dumps should be sent if the application is running slow and/or appears to be hung. Heap dumps should be sent if the application runs out of memory or is consuming more memory than expected.
Configuring Logging
Coherence has its own logging framework and also supports the use of log4j, slf4j, and Java logging to provide a common logging environment for an application. Logging in Coherence occurs on a dedicated and low-priority thread to reduce the impact of logging on the critical portions of the system. Logging is pre-configured and the default settings should be changed as required. This section includes the following topics:
Changing the Log Level
Changing the Log Destination
Changing the Log Message Format
Setting the Logging Character Limit
Using JDK Logging for Coherence Logs
Using Log4J Logging for Coherence Logs
Using SLF4J for Coherence Logs
0: This level includes messages that are not associated with a logging level.
1: This level includes the previous level's messages plus error messages.
2: This level includes the previous levels' messages plus warning messages.
3: This level includes the previous levels' messages plus informational messages.
4-9: These levels include the previous levels' messages plus internal debugging messages. More log messages are emitted as the log level is increased.
-1: No log messages are emitted.
The default log level is 5.
To change the log level, edit the operational override file and add a <severity-level> element, within the <logging-config> element, that includes the level number. For example:
...
<logging-config>
   ...
   <severity-level system-property="tangosol.coherence.log.level">9
   </severity-level>
   ...
</logging-config>
...
The tangosol.coherence.log.level system property is used to specify the log level instead of using the operational override file. For example:
-Dtangosol.coherence.log.level=9
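Application code that joins the cluster can also write messages to the Coherence log at a chosen severity by using the CacheFactory.log helper. The following short sketch (the message text is illustrative) shows the idea:

import com.tangosol.net.CacheFactory;

public class LogLevelExample {
    public static void main(String[] args) {
        // Severity values follow the log levels described above:
        // 1 = error, 2 = warning, 3 = informational, 4-9 = debug
        CacheFactory.log("Cache warm-up starting", 3);
        CacheFactory.log("Backing store unavailable; using defaults", 2);
    }
}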
The tangosol.coherence.log system property is used to specify the log destination instead of using the operational override file. For example:
-Dtangosol.coherence.log=stdout
Note:
Changing the log message format must be done with caution as critical information (such as member or thread) can be lost which makes issues harder to debug.
{date}: This parameter shows the date/time (to a millisecond) when the message was logged.
{uptime}: This parameter shows the amount of time that the cluster member has been operational.
{product}: This parameter shows the product name and license type.
{version}: This parameter shows the Coherence version and build details.
{level}: This parameter shows the logging severity level of the message.
{thread}: This parameter shows the thread name that logged the message.
{member}: This parameter shows the cluster member id (if the cluster is currently running).
{location}: This parameter shows the full cluster member identification: cluster-name, site-name, rack-name, machine-name, process-name and member-name (if the cluster is currently running).
{role}: This parameter shows the specified role of the cluster member.
{text}: This parameter shows the text of the message.
{ecid}: This parameter shows the Execution Context ID (ECID). The ECID is a globally unique ID that is attached to requests between Oracle components. The ECID is an Oracle-specific diagnostic feature and is used to correlate log messages across log files from Oracle components and products and is also used to track log messages pertaining to the same request within a single component when multiple requests are processed in parallel. Coherence clients that want to include the ECID in their logs must have an activated Dynamic Monitoring Service (DMS) execution context when invoking Coherence. Note: If JDK logging is used with an Oracle Diagnostic Logging (ODL) handler, then the {ecid} parameter does not apply because the ECID automatically becomes part of the ODL record form.
To change the log message format, edit the operational override file and add a <message-format> element, within the <logging-config> element, that includes the format. For example:
...
<logging-config>
   ...
   <message-format>[{date}] <{level}> (thread={thread}) -->{text}</message-format>
   ...
</logging-config>
...
messages in the queue. The messages that are discarded are summarized by the logging system with a single log entry that details the number of messages that were discarded and their total size. For example:
Asynchronous logging character limit exceeded; discarding 5 log messages (lines=14, chars=968)
The truncation is only temporary; when the queue is processed (emptied), the logger is reset so that subsequent messages are logged.
Note:
The message that caused the total number of characters to exceed the maximum is never truncated.
The character limit is used to avoid situations where logging prevents recovery from a failing condition. For example, logging can increase already tight timings, which causes additional failures, which produces more logging. This cycle may continue until recovery is not possible. A limit on logging prevents the cycle from occurring. To set the log character limit, edit the operational override file and add a <character-limit> element, within the <logging-config> element. The character limit is entered as 0 (Integer.MAX_VALUE) or a positive integer. For example:
...
<logging-config>
   ...
   <character-limit system-property="tangosol.coherence.log.limit">12288
   </character-limit>
</logging-config>
...
The tangosol.coherence.log.limit system property is used to specify the log character limit instead of using the operational override file. For example:
-Dtangosol.coherence.log.limit=12288
Create a logging.properties file. The following example configures the JDK logger to emit messages to both the console and to a file. Output to the console and file is configured to use the FINEST log level. For the file handler pattern, the specified path must already exist. Also, ensure that the specified directory can be accessed and has write permissions.
handlers=java.util.logging.FileHandler, java.util.logging.ConsoleHandler
.level=INFO
Coherence.level=FINEST
java.util.logging.FileHandler.pattern=/tmp/coherence%u.log
java.util.logging.FileHandler.limit=50000
Note:
In the above example, Coherence is used as the logger object name and is the default name that is used by the Coherence logging framework. A different name can be used by specifying the name within the <logger-name> element in the operational override file or by specifying the name as the value of the tangosol.coherence.log.logger system property. Set the JDK logging level to FINEST and allow the Coherence logging settings to determine which log messages to construct for JDK logging.
2.
Configure Coherence to use JDK logging by specifying jdk as the value of the <destination> element in an operational override file. For example:
...
<logging-config>
   <destination system-property="tangosol.coherence.log">jdk</destination>
   ...
</logging-config>
...
3.
Make sure the logging.properties file is specified using the java.util.logging.config.file system property. For example:
-Djava.util.logging.config.file=myfile
Create a log4j.properties file. The following example configures the log4j logger to emit messages to both the console and to a file. Output to the console and file are configured to use the FATAL log level. For the file appender, make sure the specified directory can be accessed and has write permissions.
log4j.logger.Coherence=FATAL, stdout, file
log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=%m%n
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=/tmp/coherence.log
log4j.appender.file.MaxFileSize=10MB
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%m%n
Note:
In the above example, Coherence is used as the logger object name and is the default name that is used by the Coherence logging framework. A different name can be used by specifying the name within the <logger-name> element in the operational override file or by specifying the name as the value of the tangosol.coherence.log.logger system property. Set the log4j logging level to FATAL and allow the Coherence logging settings to determine which log messages to construct for log4j logging.
2.
Configure Coherence to use log4j logging by specifying log4j as the value of the <destination> element in an operational override file. For example:
...
<logging-config>
   <destination system-property="tangosol.coherence.log">log4j</destination>
   ...
</logging-config>
...
3.
Make sure both the log4j.jar file and the log4j.properties file are found on the classpath at run time.
Specify slf4j as the value of the <destination> element in an operational override file. For example:
...
<logging-config>
   <destination system-property="tangosol.coherence.log">slf4j</destination>
   ...
</logging-config>
...
2.
Make sure both the slf4j-api.jar file and the appropriate logging framework SLF4J binding JAR is located on the classpath.
To enable remote debugging on a cache server, start the cache server with the following JVM options. Once the cache server has been started, use the IDE's debugger to connect to the JVM using the port specified (5005 in the example).
-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=5005
Remote debugging a Coherence application can be difficult when the application is no longer on a single node cluster because data is distributed across the members of the cluster. For example, when performing parallel grid operations, the operations are performed on the cluster members where the data is located. Since there are no guarantees on which members the data is located, it is best to constrain a test to use a single cache server. In addition, the guardian and packet timeout can make cluster debugging difficult. If the debugger pauses the packet publishing, cluster, and service threads, it will cause disruptions across the cluster. In such scenarios, disable the guardian and increase the packet timeout during the debugging session. See "service-guardian" on page A-79 for details on configuring these settings.
Using Coherence Logs
Using JMX Management and Coherence Reports
Using JVM Options to Help Debug
Capturing Thread Dumps
Capturing Heap Dumps
Monitoring the Operating System
support the use of rolling files and console output simultaneously. Lastly, consider placing all log files in a common directory. A common directory makes it easier to review the log files and package them for the Coherence support team. See "Configuring Logging" on page 5-2 for detailed information on configuring all aspects of logging.
-verbose:gc or -Xloggc:file
These options are used to enable additional logs for each garbage collection event. In a distributed system, a GC pause on a single JVM can affect the performance of many JVMs, so it is essential to monitor garbage collection very closely. The -Xloggc option is similar to verbose GC but includes timestamps.
-Xprof and -Xrunhprof
These options are used to view JVM profile data and are not intended for production systems.
-XX:+PrintGC, -XX:+PrintGCDetails, and -XX:+PrintGCTimeStamps
These options are also used to print messages at garbage collection.
-XX:+HeapDumpOnOutOfMemoryError and -XX:HeapDumpPath=./java_pid<pid>.hprof
These options are used to initiate a heap dump when a java.lang.OutOfMemoryError is thrown.
-XX:ErrorFile=./hs_err_pid<pid>.log
This option saves error data to a file.
several dumps over a period of time since a thread dump is only a snapshot in time. Always include a set of thread dumps when submitting a support issue. Coherence provides a native logClusterState JMX operation that is located on the ClusterMBean MBean and a native logNodeState JMX operation that is located on the ClusterNodeMBean MBean. These operations initiate a thread dump (including outstanding polls) on multiple cluster members or on a single cluster member, respectively. For a detailed reference of Coherence MBeans, see Managing Oracle Coherence. To perform a thread dump locally on Unix or Linux operating systems, press Ctrl+\ at the application console. To perform a thread dump on Windows, press Ctrl+Break (or Pause). Both methods include a heap summary with the thread dump. Most IDEs provide a thread dump feature that can be used to capture thread dumps while working in the IDE. In addition, Unix and Linux operating systems can use kill -3 pid to cause a remote thread dump in the IDE. On Windows, use a third-party tool (such as SendSignal) to send a ctrl+break signal to a remote Java process and cause a dump in the IDE. Profiling tools, such as Oracle's VisualVM (jvisualvm) and JConsole (jconsole), are able to perform thread dumps. These tools are very useful because they provide a single tool for troubleshooting and debugging and display many different types of information in addition to just thread details. Lastly, the jstack tool can be used to capture a thread dump for any process. For example, use jps to find a Java process ID and then execute the following from the command line:
jstack <pid>
The jstack tool is unsupported and may or may not be available in future versions of the JDK.
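The heap dump file referenced in the next example must first be captured. One common way to do this is with the JDK jmap tool (the process ID and output path below are illustrative, and jmap, like jhat, is an unsupported JDK utility):

jmap -dump:format=b,file=/coherence.bin <pid>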
To view the heap dump in a browser, execute the following from the command line and then browse to the returned address. The file can also be loaded into VisualVM for viewing.
jhat /coherence.bin
The jmap and jhat tools are unsupported and may or may not be available in future versions of the JDK.
CPU: Is the processor running at 100% for extended periods of time?
Memory/Swapping: Is the available RAM memory being exhausted and causing swap space to be used?
Network: Are the buffer size, the datagram size, and the Maximum Transmission Unit (MTU) size affecting performance and success rates?
To monitor the overall health of the operating system, use tools such as vmstat and top for Unix/Linux; for Windows, use perfmon and tools available from Windows Sysinternals (for example procexp and procmon). See Administering Oracle Coherence for detailed instructions on how to test network performance.
Part II
Chapter 6, "Introduction to Coherence Clusters" Chapter 7, "Setting Up a Cluster" Chapter 8, "Starting and Stopping Cluster Members" Chapter 9, "Dynamically Managing Cluster Membership" Chapter 10, "Tuning TCMP Behavior" Chapter 11, "Using the Service Guardian"
6
Introduction to Coherence Clusters
This chapter describes Coherence clusters and how cluster members use the Tangosol Cluster Management Protocol (TCMP) to communicate with each other. Cluster services are also described and each cluster service is defined. This chapter includes the following sections:
Cluster Overview
A Coherence cluster is a network of JVM processes that run Coherence. JVMs automatically join together to form a cluster and are called cluster members or cluster nodes. Cluster members communicate using the Tangosol Cluster Management Protocol (TCMP). Cluster members use TCMP for both multicast communication (broadcast) and unicast communication (point-to-point communication). A cluster contains services that are shared by all cluster members. The services include connectivity services (such as the Cluster service), cache services (such as the Distributed Cache service), and processing services (such as the invocation service). Each cluster member can provide and consume such services. The first cluster member is referred to as the senior member and typically starts the core services that are required to create the cluster. If the senior member of the cluster is shut down, another cluster member assumes the senior member role.
Understanding TCMP
TCMP is an IP-based protocol that is used to discover cluster members, manage the cluster, provision services, and transmit data. TCMP can be configured to use:
A combination of UDP/IP multicast and UDP/IP unicast. This is the default configuration.
UDP/IP unicast only (that is, no multicast). See "Disabling Multicast Communication" on page 7-5. This configuration is used for network environments that do not support multicast or where multicast is not optimally configured.
TCP/IP only (no UDP/IP multicast or UDP/IP unicast). See "Using the TCP Socket Provider" on page 10-16. This configuration is used for network environments that favor TCP.
SDP/IP only (no UDP/IP multicast or UDP/IP unicast). See "Using the SDP Socket Provider" on page 10-17. This configuration is used for network environments that favor SDP.
SSL over TCP/IP or SDP/IP. See "Using the SSL Socket Provider" on page 10-17. This configuration is used for network environments that require highly secure communication between cluster members.
Cluster discovery: Multicast is used to discover if there is a cluster running that a new member can join.
Cluster heartbeat: The most senior member in the cluster issues a periodic heartbeat through multicast; the rate can be configured and defaults to one per second.
Message delivery: Messages that must be delivered to multiple cluster members are often sent through multicast, instead of unicasting the message one time to each member.
Direct member-to-member (point-to-point) communication, including messages, asynchronous acknowledgments (ACKs), asynchronous negative acknowledgments (NACKs) and peer-to-peer heartbeats. A majority of the communication on the cluster is point-to-point.
Under some circumstances, a message may be sent through unicast even if the message is directed to multiple members. This is done to shape traffic flow and to reduce CPU load in very large clusters.
All communication is sent using unicast if multicast communication is disabled.
A TCP/IP ring is used as an additional death detection mechanism to differentiate between actual node failure and an unresponsive node (for example, when a JVM conducts a full GC). TCMP can be configured to exclusively use TCP for data transfers. Like UDP, the transfers can be configured to use only unicast or both unicast and multicast.
Protocol Reliability
The TCMP protocol provides fully reliable, in-order delivery of all messages. Since the underlying UDP/IP protocol does not provide for either reliable or in-order delivery, TCMP uses a queued, fully asynchronous ACK- and NACK-based mechanism for reliable delivery of messages, with unique integral identity for guaranteed ordering of messages.
Protocol Resource Utilization
The TCMP protocol (as configured by default) requires only three UDP/IP sockets (one multicast, two unicast) and six threads per JVM, regardless of the cluster size. This is a key element in the scalability of Coherence; regardless of the number of servers, each node in the cluster still communicates either point-to-point or with collections of cluster members without requiring additional network connections. The optional TCP/IP ring uses a few additional TCP/IP sockets, and an additional thread.
Protocol Tunability
The TCMP protocol is very tunable to take advantage of specific network topologies, or to add tolerance for low-bandwidth and high-latency segments in a geographically distributed cluster. Coherence comes with a pre-set configuration. Some TCMP attributes are dynamically self-configuring at run time, but can also be overridden and locked down for deployment purposes.
Proxy Service: This service allows connections (using TCP) from clients that run outside the cluster. While many applications are configured so that all clients are also cluster members, there are many use cases where it is desirable to have clients running outside the cluster. Remote clients are especially useful in cases where there are hundreds or thousands of client processes, where the clients are not running on the Java platform, or where a greater degree of coupling is desired.
Processing Services
Invocation Service: This service provides clustered invocation and supports grid computing architectures. This service allows applications to invoke agents on any node in the cluster, or any group of nodes, or across the entire cluster. The agent invocations can be request/response, fire and forget, or an asynchronous user-definable model.
Data Services
Distributed Cache Service: This service allows cluster nodes to distribute (partition) data across the cluster so that each piece of data in the cache is managed (held) by only one cluster node. The Distributed Cache Service supports pessimistic locking. Additionally, to support failover without any data loss, the service can be configured so that each piece of data is backed up by one or more other cluster nodes. Lastly, some cluster nodes can be configured to hold no data at all; this is useful, for example, to limit the Java heap size of an application server process, by setting the application server processes to not hold any distributed data, and by running additional cache server JVMs to provide the distributed cache storage. For more information on distributed caches, see "Distributed Cache" on page 12-1.
Replicated Cache Service: This is a synchronized replicated cache service that fully replicates all of its data to all cluster nodes that run the service. Replicated caches support pessimistic locking to ensure that all cluster members receive the update when data is modified. Replicated caches are often used to manage internal application metadata. For more information on replicated caches, see "Replicated Cache" on page 12-5.
Optimistic Cache Service: This is an optimistic-concurrency version of the Replicated Cache Service that fully replicates all of its data to all cluster nodes and employs an optimization similar to optimistic database locking to maintain coherency. All servers end up with the same current value even if multiple updates occur at the same exact time from different servers. The Optimistic Cache Service does not support pessimistic locking; so, in general, it should only be used for caching most recently known values for read-only uses. This service is rarely used. For more information on optimistic caches, see "Optimistic Cache" on page 12-7.
A clustered service typically uses one daemon thread and optionally has a thread pool that can be configured to provide the service with additional processing bandwidth. For example, the invocation service and the distributed cache service both fully support thread pooling to accelerate database load operations, parallel distributed queries, and agent invocations. The above services are only the basic cluster services and not the full set of types of caches provided by Coherence. By combining clustered services with cache features, such as backing maps and overflow maps, Coherence provides an extremely flexible and configurable set of options for clustered applications. Within a cache service, there exists any number of named caches. A named cache provides the standard JCache API, which is based on the Java collections API for key-value pairs, known as java.util.Map.
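Because a named cache exposes the java.util.Map interface, application code interacts with a clustered cache through the familiar Map operations. The following is a small illustrative sketch (the cache name and keys are placeholders):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class NamedCacheMapExample {
    public static void main(String[] args) {
        // Obtain a named cache; the backing cache service is defined
        // by the cache configuration in use
        NamedCache cache = CacheFactory.getCache("example-cache");

        // Standard java.util.Map operations act on the clustered data
        cache.put("key-1", "value-1");
        System.out.println("size  = " + cache.size());
        System.out.println("key-1 = " + cache.get("key-1"));
        cache.remove("key-1");

        CacheFactory.shutdown();
    }
}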
7
Setting Up a Cluster
This chapter provides instructions for setting up and configuring Coherence clusters and is used to change the default out-of-box cluster settings. This chapter includes the following sections:
Overview of Setting Up Clusters
Specifying a Cluster's Name
Specifying a Cluster Member's Identity
Configuring Multicast Communication
Specifying a Cluster Member's Unicast Address
Using Well Known Addresses
Enabling Single-Server Mode
Configuring Death Detection
Specifying Cluster Priorities
To specify a cluster name, edit the operational override file and add a <cluster-name> element, within the <member-identity> element, that includes the cluster name. For example:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <member-identity> <cluster-name system-property="tangosol.coherence.cluster">MyCluster </cluster-name> </member-identity> </cluster-config> </coherence>
The tangosol.coherence.cluster system property is used to specify the cluster name instead of using the operational override file. For example:
-Dtangosol.coherence.cluster=name
Site Name: the name of the geographic site that hosts the cluster member. The server's domain name is used if no name is specified. For WAN clustering, this value identifies the datacenter where the member is located. The site name can be used as the basis for intelligent routing, load balancing, and disaster recovery planning (that is, the explicit backing up of data on separate geographic sites). The site name also helps determine where to back up data when using distributed caching and the default partition assignment strategy. Lastly, the name is useful for displaying management information (for example, JMX) and interpreting log entries.
Rack Name: the name of the location within a geographic site that the member is hosted at and is often a cage, rack, or bladeframe identifier. The rack name can be used as the basis for intelligent routing, load balancing, and disaster recovery planning (that is, the explicit backing up of data on separate bladeframes). The rack name also helps determine where to back up data when using distributed caching and the default partition assignment strategy. Lastly, the name is useful for displaying management information (for example, JMX) and interpreting log entries.
Machine Name: the name of the server that hosts the cluster member. The server's host name is used if no name is specified. The name is used as the basis for creating an ID. The cluster service uses the ID to ensure that data are backed up on different computers to prevent single points of failure.
Process Name: the name of the JVM process that hosts the cluster member. The JVM process number is used if no name is specified. The process name makes it possible to easily differentiate among multiple JVMs running on the same computer.
Member Name: the cluster member's unique name. The name makes it easy to differentiate cluster members especially when multiple members run on the same computer or within the same JVM. Always specify a member name (as a best practice) even though it is not required to do so.
Role Name: the cluster member's role in the cluster. The role name allows an application to organize cluster members into specialized roles, such as cache servers and cache clients. Default role names (CoherenceServer for cache servers and application_class_name for cache clients) are used if no role name is specified.
To specify member identity information, edit the operational override file and add the member identity elements within the <member-identity> element as demonstrated below:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <member-identity> <site-name system-property="tangosol.coherence.site">pa-1</site-name> <rack-name system-property="tangosol.coherence.rack">100A</rack-name> <machine-name system-property="tangosol.coherence.machine">prod001 </machine-name> <process-name system-property="tangosol.coherence.process">JVM1 </process-name> <member-name system-property="tangosol.coherence.member">C1</member-name> <role-name system-property="tangosol.coherence.role">Server</role-name> </member-identity> </cluster-config> </coherence>
The following system properties are used to specify a cluster member's identity information instead of using the operational override file.
-Dtangosol.coherence.site=pa-1 -Dtangosol.coherence.rack=100A -Dtangosol.coherence.machine=prod001 -Dtangosol.coherence.process=JVM1 -Dtangosol.coherence.member=C1 -Dtangosol.coherence.role=Server
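The identity values configured above are visible at run time through the com.tangosol.net.Member interface. A minimal sketch (the output formatting is illustrative):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.Cluster;
import com.tangosol.net.Member;

public class MemberIdentityExample {
    public static void main(String[] args) {
        // Join the cluster and inspect this member's configured identity
        Cluster cluster = CacheFactory.ensureCluster();
        Member local = cluster.getLocalMember();

        System.out.println("cluster: " + cluster.getClusterName());
        System.out.println("site:    " + local.getSiteName());
        System.out.println("rack:    " + local.getRackName());
        System.out.println("machine: " + local.getMachineName());
        System.out.println("process: " + local.getProcessName());
        System.out.println("member:  " + local.getMemberName());
        System.out.println("role:    " + local.getRoleName());

        CacheFactory.shutdown();
    }
}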
Specifying a Cluster's Multicast Address
Disabling Multicast Communication
Specifying the Multicast Time-to-Live
Specifying the Multicast Join Timeout
Changing the Multicast Threshold
Using the default multicast address and port (and the system generated cluster name) increases the chance of having overlapping cluster configurations on the network. This can lead to cluster members accidentally joining an unexpected cluster. Always use a unique port value to create a distinct cluster.
To specify a cluster multicast address, edit the operational override file and add both an <address> and <port> element and specify the address and port to be used by the cluster member. For example:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <multicast-listener>
         <address system-property="tangosol.coherence.clusteraddress">224.3.6.0
         </address>
         <port system-property="tangosol.coherence.clusterport">3059</port>
      </multicast-listener>
   </cluster-config>
</coherence>
The tangosol.coherence.clusteraddress and tangosol.coherence.clusterport system properties are used to specify the cluster multicast address instead of using the operational override file. For example:
-Dtangosol.coherence.clusteraddress=224.3.6.0 -Dtangosol.coherence.clusterport=3059
Disabling multicast does put a higher strain on the network. However, this only becomes an issue for large clusters with greater than 100 members.
The tangosol.coherence.ttl system property is used to specify the TTL value instead of using the operational override file. For example:
-Dtangosol.coherence.ttl=3
The first member of the cluster waits the full duration of the join timeout before it assumes the role of the senior member. If the cluster startup timeout is less than the join timeout, then the first member of the cluster fails during cluster startup. The cluster member timeout is specified using the packet publisher timeout (<timeout-milliseconds>). See "packet-delivery" on page A-50.
To specify the join timeout, edit the operational override file and add a <join-timeout-milliseconds> element that includes the timeout value. For example:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
The packet is being sent over the network to multiple nodes (d > 1).
The number of nodes is greater than the specified threshold (d > (n-1) * (threshold/100)).
For example, in a 25 member cluster with a multicast threshold of 25%, a cluster member only uses multicast if the packet is destined for 6 or more members (24 * .25 = 6).
Setting this value to 1 allows the cluster to use multicast for basically all multi-point traffic. Setting this value to 100 forces the cluster to use unicast for all multi-point traffic except for explicit broadcast traffic (for example, cluster heartbeat and discovery) because the 100% threshold is never exceeded. With the setting of 25 (the default) a cluster member sends the packet using unicast if it is destined for less than one-fourth of all nodes, and sends the packet using multicast if it is destined for one-fourth or more of all nodes. To specify the multicast threshold, edit the operational override file and add a <multicast-threshold-percent> element that includes the threshold value. For example:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <multicast-listener> <multicast-threshold-percent>40</multicast-threshold-percent> </multicast-listener> </cluster-config> </coherence>
<address> A cluster member attempts to obtain the IP to bind to using the java.net.InetAddress.getLocalHost() call. The use of localhost may not work on systems that define localhost as the loopback address; in that case, the computer name or a specific IP address must be specified. An address must also be explicitly specified when a computer has multiple IPs or NICs. The <address> element supports Classless Inter-Domain Routing (CIDR) notation, which uses a subnet and mask pattern for a local IP address to bind to instead of specifying an exact IP address. CIDR simplifies configuration by allowing a single address configuration to be shared across computers on the same subnet. Each cluster member specifies the same CIDR address block and a local NIC on each computer is automatically found that matches the address pattern. For example, to specify a unicast address for multiple multi-NIC computers that are located on the same network and that will run a cluster on their 192.168.1.* address, specify an address such as 192.168.1.0/24 and each node finds a local NIC that matches the pattern. The /24 prefix size matches up to 256 available addresses: from 192.168.1.0 to 192.168.1.255.
<port> A cluster member uses two unicast UDP ports. The default behavior is to attempt to use port 8088 for the first port (port1). If port 8088 is not available, automatic port adjustment is used to select the next available port. The second port (port2) is automatically opened and defaults to the next available port after port1 (port1 + 1 if available). Automatic port adjustment can be disabled. In this case, port1 must be available and the second port is always port1 + 1. Two UDP ports are used because:
It reduces contention between inbound and outbound traffic and avoids doing both heavy transmits and receives on the same port.
It allows Coherence members to communicate at the optimal packet size based on the Maximum Transmission Unit (MTU) of the operating system. One UDP port is used for large packets, and the other port is for packets with sizes at or under the network MTU. The separation allows for separate packet buffers based on size.
It allows for large clusters (> 500 members) to be run without artificially increasing the size of the socket buffers.
To specify a cluster member's unicast address, edit the operational override file and add both an <address> and <port> element (and optionally a <port-auto-adjust> element) and specify the address and port to be used by the cluster member. For example:
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <unicast-listener>
         <address system-property="tangosol.coherence.localhost">192.168.0.1
         </address>
         <port system-property="tangosol.coherence.localport">8090</port>
         <port-auto-adjust system-property="tangosol.coherence.localport.adjust">
            true
         </port-auto-adjust>
      </unicast-listener>
   </cluster-config>
</coherence>
The tangosol.coherence.localhost, tangosol.coherence.localport, and tangosol.coherence.localport.adjust system properties are used to specify the unicast address instead of using the operational override file. For example:
-Dtangosol.coherence.localhost=192.168.0.1 -Dtangosol.coherence.localport=8090 -Dtangosol.coherence.localport.adjust=true
If a cluster member specifies its own address, then the cluster member is a WKA member when it is started. The list of WKA members must be the same on every cluster member to ensure that different cluster members do not operate independently from the rest of the cluster. The following example specifies two WKA members:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <unicast-listener> <well-known-addresses> <socket-address id="1"> <address>192.168.0.100</address> <port>8088</port> </socket-address> <socket-address id="2"> <address>192.168.0.101</address> <port>8088</port> </socket-address> </well-known-addresses> </unicast-listener> </cluster-config> </coherence>
Note:
When setting up a WKA member, the port value must match the port value that is specified for the member's unicast listener port. See "Specifying a Cluster Member's Unicast Address" on page 7-8 for more information on setting the unicast port.
Using WKA System Properties A single WKA member can be specified using the tangosol.coherence.wka and tangosol.coherence.wka.port system properties instead of specifying the address in an operational override file. The system properties are intended for demonstration and testing scenarios to quickly specify a single WKA member. For example:
-Dtangosol.coherence.wka=192.168.0.100 -Dtangosol.coherence.wka.port=8088
To create additional system properties to specify multiple WKA member addresses, an operational override file must be used to define multiple WKA member addresses and a system-property attribute must be defined for each WKA member address element. The attributes must include the system property names to be used to override the elements. The following example defines two addresses, including system properties:
Note:
Defining additional system properties to specify a list of WKA members can be used during testing or in controlled production environments. However, the best practice is to exclusively use an operational override file to specify WKA members in production environments. This ensures that the same list of WKA members exists on each cluster member.
<?xml version='1.0'?>
<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <unicast-listener> <well-known-addresses> <socket-address id="1"> <address system-property="tangosol.coherence.wka"></address> <port system-property="tangosol.coherence.wka.port"></port> </socket-address> <socket-address id="2"> <address system-property="tangosol.coherence.wka2"></address> <port system-property="tangosol.coherence.wka2.port"></port> </socket-address> </well-known-addresses> </unicast-listener> </cluster-config> </coherence>
For the above example, the WKA member addresses are specified using the system properties as follows:
-Dtangosol.coherence.wka=192.168.0.102 -Dtangosol.coherence.wka.port=8090 -Dtangosol.coherence.wka2=192.168.0.103 -Dtangosol.coherence.wka2.port=8094
See "Creating Custom System Properties" on page 3-21 for more information on defining system properties.
To use a WKA address provider implementation, add an <address-provider> element and specify the fully qualified name of the implementation class within the <class-name> element. For example:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <unicast-listener> <well-known-addresses> <address-provider> <class-name>package.MyAddressProvider</class-name> </address-provider> </well-known-addresses> </unicast-listener> </cluster-config> </coherence>
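The address provider class itself is application code. The following minimal sketch, which assumes the com.tangosol.net.AddressProvider interface and reuses the illustrative class name and addresses from the examples in this section, shows what such an implementation might look like; a real provider would typically obtain its addresses from an external source:

import com.tangosol.net.AddressProvider;

import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class MyAddressProvider implements AddressProvider
    {
    // illustrative WKA addresses; a real provider would typically obtain
    // these from an external source such as a naming or discovery service
    private final List<InetSocketAddress> m_listAddresses = Arrays.asList(
        new InetSocketAddress("192.168.0.100", 8088),
        new InetSocketAddress("192.168.0.101", 8088));

    private Iterator<InetSocketAddress> m_iter = m_listAddresses.iterator();

    public InetSocketAddress getNextAddress()
        {
        // return null to indicate that the list is exhausted; reset the
        // iterator so that a later round of discovery starts over
        if (m_iter.hasNext())
            {
            return m_iter.next();
            }
        m_iter = m_listAddresses.iterator();
        return null;
        }

    public void accept()
        {
        // the last address returned was usable; nothing to do here
        }

    public void reject(Throwable eCause)
        {
        // the last address returned could not be used; nothing to do here
        }
    }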
As an alternative, the <address-provider> element supports the use of a <class-factory-name> element that is used to specify a factory class for creating AddressProvider instances, and a <method-name> element to specify the static factory method on the factory class that performs object instantiation. The following example gets an address provider instance using the getAddressProvider method on the MyAddressProviderFactory class.
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <unicast-listener> <well-known-addresses> <address-provider> <class-factory-name>package.MyAddressProviderFactory </class-factory-name> <method-name>getAddressProvider</method-name> </address-provider> </well-known-addresses> </unicast-listener> </cluster-config> </coherence>
Any initialization parameters that are required for a class or class factory implementation can be specified using the <init-params> element. Initialization parameters are accessible by implementations that include a public constructor with a matching signature. The following example sets the iMaxTime parameter to 2000.
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <unicast-listener> <well-known-addresses> <address-provider> <class-name>package.MyAddressProvider</class-name> <init-params> <init-param> <param-name>iMaxTime</param-name> <param-value>2000</param-value> </init-param> </init-params> </address-provider> </well-known-addresses> </unicast-listener>
</cluster-config> </coherence>
The tangosol.coherence.ttl and tangosol.coherence.localhost system properties are used to enable single-server mode instead of using the operational override file. For example:
-Dtangosol.coherence.ttl=0 -Dtangosol.coherence.localhost=127.0.0.1
On some UNIX operating systems, including some versions of Linux and Mac OS X, setting the TTL to zero may not be enough to isolate a cluster to a single computer. In such cases, a unique cluster name must also be configured. A cluster member cannot join an existing cluster if it uses a different cluster name. See "Specifying a Cluster's Name" on page 7-2 for details on configuring a cluster name. Lastly, on some versions of Mac OS X and Linux, a loopback route for localhost is not configured and results in the cluster failing to start. For such scenarios, use the WKA feature to ensure the discovery of cluster members. For example:
-Dtangosol.coherence.localhost=127.0.0.1 -Dtangosol.coherence.ttl=0 -Dtangosol.coherence.wka=127.0.0.1
See "Using Well Known Addresses" on page 7-9 for details on configuring WKA.
to differentiate between actual member failure and an unresponsive member, such as the case when a JVM conducts a full garbage collection. Death detection identifies both process failures (TcpRing component) and hardware failure (IpMonitor component). Process failure is detected using a ring of TCP connections opened on the same port that is used for cluster UDP communication. Each cluster member issues a unicast heartbeat, and the most senior cluster member issues the cluster heartbeat, which is a broadcast message. Hardware failure is detected using the Java InetAddress.isReachable method which either issues a trace ICMP ping, or a pseudo ping and uses TCP port 7. Death detection is enabled by default and is configured within the <tcp-ring-listener> element. This section includes the following topics:
Changing TCP-Ring Settings Changing the Heartbeat Interval Disabling Death Detection
The values of the <ip-timeout> and <ip-attempts> elements should be high enough to insulate against allowable temporary network outages.
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <tcp-ring-listener> <ip-timeout system-property="tangosol.coherence.ipmonitor.pingtimeout"> 25s </ip-timeout> <ip-attempts>5</ip-attempts> <listen-backlog>10</listen-backlog> </tcp-ring-listener> </cluster-config> </coherence>
The tangosol.coherence.ipmonitor.pingtimeout system property is used to specify a timeout instead of using the operational override file. For example:
-Dtangosol.coherence.ipmonitor.pingtimeout=20s
The heartbeat setting technically controls how often to evaluate whether or not a heartbeat needs to be emitted. The actual heartbeat may or may not be emitted within the specified interval, depending on the outcome of that evaluation.
To change the death detection heartbeat interval, edit the operational override file and add a <heartbeat-milliseconds> element that includes the heartbeat value. For example:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <packet-publisher> <packet-delivery> <heartbeat-milliseconds>5000</heartbeat-milliseconds> </packet-delivery> </packet-publisher> </cluster-config> </coherence>
To disable death detection, edit the operational override file and add an <enabled> element that is set to false. For example:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <tcp-ring-listener> <enabled>false</enabled> </tcp-ring-listener> </cluster-config> </coherence>
Specifying a Cluster Member's Priority Specifying Communication Thread Priorities Specifying Thread Priorities for Services
The tangosol.coherence.priority system property can also be used to specify a cluster member's priority instead of using the operational override file. For example:
-Dtangosol.coherence.priority=1
elements, respectively). For example, to specify a thread priority for the unicast listener, edit the operational override file and add a <priority> element, within the <unicast-listener> node, that includes a priority value between 1 and 10 where 10 is the highest priority:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <unicast-listener> <priority>5</priority> </unicast-listener> </cluster-config> </coherence>
8 Starting and Stopping Cluster Members
Starting Cache Servers
Starting Cache Clients
Stopping Cluster Members
Performing a Rolling Restart
Overview of the DefaultCacheServer Class Starting Cache Servers From the Command Line Starting Cache Servers Programmatically
the name of a cache configuration file that is found on the classpath or the path to a Grid ARchive (GAR). If both are provided, the GAR takes precedence. A GAR includes the artifacts that comprise a Coherence application and adheres to a specific directory structure. A GAR can be left as a directory or can be archived with a .gar extension. For details about creating a GAR, see Administering Oracle Coherence. An optional application name for the GAR. If no name is provided, the archive name is used (the directory name or the file name without the .gar extension). The name provides an application scope that is used to separate applications on a cluster.
the number of seconds between checks for stopped services. Stopped services are only automatically started if they are set to be automatically started (as configured by an <autostart> element in the cache configuration file). The default value if no argument is provided is 5 seconds.
The following example starts a cache server member and uses the Coherence application artifacts that are packaged in the MyGAR.gar file. The default name (MyGAR) is used as the application name.
java -server -Xms512m -Xmx512m -cp COHERENCE_HOME\config;COHERENCE_HOME\lib\coherence.jar com.tangosol.net.DefaultCacheServer D:\example\MyGAR.gar
Note:
The cache configuration file that is packaged in a GAR file takes precedence over a cache configuration file that is located on the classpath.
The COHERENCE_HOME\bin\cache-server script is provided as a convenience and can start a cache server instance. The script sets up a basic environment and then runs the DefaultCacheServer class. There is a script for both the Windows and UNIX-based platforms. The scripts are typically modified as required for a particular cluster.
Tip: During testing, it is sometimes useful to create multiple scripts with different names that uniquely identify each cache server. For example: cache-server-a, cache-server-b, and so on.
The DefaultCacheServer(ConfigurableCacheFactory) constructor uses a factory class to create a cache server instance that uses a specified cache configuration file. The following example uses the ExtensibleConfigurableCacheFactory implementation
and creates a DefaultCacheServer instance and also uses the startAndMonitor(long) method to start a cache server as in the previous example:
ExtensibleConfigurableCacheFactory factory; factory = new ExtensibleConfigurableCacheFactory("my-cache-config.xml"); DefaultCacheServer dcs = new DefaultCacheServer(factory); dcs.startAndMonitor(5000);
The static method startDaemon() method starts a cache server on a dedicated daemon thread and is intended for use within managed containers. Two additional static start methods (start() and start(ConfigurableCacheFactory)) are also available to start a cache server and return control. However, the cache factory class is typically used instead of these methods, which remain for backward compatibility. Applications that require even more fine-grained control can subclass the DefaultCacheServer class and override its methods to perform any custom processing as required. See Java API Reference for Oracle Coherence for detailed information on the DefaultCacheServer class.
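For illustration only, the following sketch shows a cache server being started on a daemon thread from within a managed container and later shut down; the onStart and onStop callback names are hypothetical placeholders for whatever lifecycle hooks the container provides:

import com.tangosol.net.DefaultCacheServer;

public class CacheServerLifecycle
    {
    // hypothetical callback invoked when the container starts
    public void onStart()
        {
        // starts a cache server on a dedicated daemon thread
        DefaultCacheServer.startDaemon();
        }

    // hypothetical callback invoked when the container stops
    public void onStop()
        {
        // shuts down the statically maintained cache server instance
        DefaultCacheServer.shutdown();
        }
    }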
Disabling Local Storage Using the CacheFactory Class to Start a Cache Client
When starting an application that is a cache client, use the Java -cp option to indicate the location of the coherence.jar file and the location where the tangosol-coherence-override.xml and coherence-cache-config.xml files are located. The location of the configuration files must precede the coherence.jar file on the classpath; otherwise, the default configuration files that are located in the coherence.jar file are used to start the cache server instance. See Chapter 3, "Understanding Configuration," for detailed information on configuration files. The following example starts an application that is a cache client, uses any configuration files that are placed in the COHERENCE_HOME\config directory, and disables storage on the member.
java -cp COHERENCE_HOME\config;COHERENCE_HOME\lib\coherence.jar -Dtangosol.coherence.distributed.localstorage=false com.MyApp
The COHERENCE_HOME\bin\coherence script is provided for testing purposes and can start a cache client instance. The script sets up a basic environment, sets storage to be disabled, and then runs the CacheFactory class, which returns a prompt. The prompt is used to enter commands for interacting with a cache and a cluster. There is a script for both the Windows and UNIX-based platforms. The scripts are typically modified as required for a particular cluster. The class can also be started directly from the command line instead of using the script. For example:
java -cp COHERENCE_HOME\config;COHERENCE_HOME\lib\coherence.jar -Dtangosol.coherence.distributed.localstorage=false com.tangosol.net.CacheFactory
If a Coherence application is packaged as a GAR, the GAR can be loaded by the CacheFactory instance using the server command at the prompt after the client member starts.
server [<path-to-gar>] [<app-name>]
The following example loads the Coherence application artifacts that are packaged in the MyGar.gar file. The default name (MyGAR) is used as the application name.
Map (?) server D:\example\MyGAR.gar
Stopping Cluster Members From the Command Line Stopping Cache Servers Programmatically
Issuing the kill -9 command triggers an abnormal JVM termination and the shutdown hook does not run. However, a graceful shutdown is generally not required if a service is known to be node-safe (as seen using JMX management) before termination.
The action a cluster member takes when receiving a shutdown command is configured in the operational override file within the <shutdown-listener> element. The following options are available:
none - Perform no explicit shutdown actions. This is the suggested value for production unless testing has verified that the behavior on external shutdown is exactly what is desired.
force (default) - Perform a hard-stop on the node by calling Cluster.stop(). This is the default out-of-box action.
graceful - Perform a normal shutdown by calling Cluster.shutdown().
true - Same as force.
false - Same as none.
The tangosol.coherence.shutdownhook system property is used to specify the shutdown hook behavior instead of using the operational override file. For example:
-Dtangosol.coherence.shutdownhook=none
The shutdown method is intended to be called in a standalone application, where it shuts down the cache server instance that the DefaultCacheServer class itself maintains as a static member.
shutdown() - This is a static method that is used to shut down a cache server that was started on a different thread using the DefaultCacheServer.main() or DefaultCacheServer.start() methods.
shutdownServer() - This method is called on a DefaultCacheServer instance to which the application holds a reference, as shown in the sketch below.
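For example, the following sketch (the start and stop methods are hypothetical lifecycle hooks) holds a reference to a programmatically started DefaultCacheServer instance and calls shutdownServer() on that same instance during application shutdown:

import com.tangosol.net.DefaultCacheServer;
import com.tangosol.net.ExtensibleConfigurableCacheFactory;

public class ManagedCacheServer
    {
    private DefaultCacheServer m_server;

    // hypothetical method called from a dedicated startup thread
    public void start()
        {
        ExtensibleConfigurableCacheFactory factory =
            new ExtensibleConfigurableCacheFactory("my-cache-config.xml");
        m_server = new DefaultCacheServer(factory);
        m_server.startAndMonitor(5000); // blocks until the server is stopped
        }

    // hypothetical method called during application shutdown
    public void stop()
        {
        m_server.shutdownServer(); // stops the instance held by this class
        }
    }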
Each cache server in the cluster can be restarted in turn to effectively restart the whole cluster. Rolling restarts are commonly performed when a cache server or its host computer must be updated or when upgrading a cache server to a new point release. However, the technique can be used whenever you want to restart a cache server that is currently managing a portion of cached data.
Note:
When upgrading a cluster, a rolling restart can only be used to upgrade between point releases and not major releases.
The cache servers in a cluster must provide enough capacity to handle the shutdown of a single cache server (n minus 1 where n is the number of cache servers in the cluster). An out-of-memory exception can occur during a redistribution of data if cache servers are running at capacity. For details on capacity planning, see Administering Oracle Coherence. Remote JMX management must be enabled on all cache servers and at least two cache servers must contain an operational MBean server. Ensure that you can connect to the MBean servers using an MBean browser such as JConsole. For details on configuring JMX management and connecting to an MBean server instance, see Managing Oracle Coherence.
1. Connect to a Coherence MBean server using an MBean browser. Ensure that the MBean server is not hosted on the cache server that is being restarted.
2. From the Coherence Service MBean, select a cluster service that corresponds to a cache that is configured in the cache configuration file.
3. Check the StatusHA attribute for any cluster member to ensure that the attribute's value is MACHINE-SAFE. The MACHINE-SAFE state indicates that all the cache servers running on any given computer could be stopped without data loss. If the attribute is not MACHINE-SAFE, then additional cache servers, possibly on different computers, must be started before performing a restart. For details on the StatusHA attribute, see Managing Oracle Coherence. A sketch for checking this attribute over JMX follows this procedure.
4. Shut down the cache server.
5. From the MBean browser, recheck the StatusHA attribute and wait for the state to return to MACHINE-SAFE.
6. Restart the cache server.
7. From the MBean browser, recheck the StatusHA attribute and wait for the state to return to MACHINE-SAFE.
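The StatusHA check can also be scripted. The following is an illustrative sketch only; it reads the StatusHA attribute of a partitioned cache service over remote JMX, and the JMX URL, service name, and node ID are placeholder values that must be adapted to the actual deployment:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class StatusHACheck
    {
    public static void main(String[] args) throws Exception
        {
        // placeholder JMX URL for a cluster member that hosts an MBean server
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://mbeanhost:9000/jmxrmi");

        JMXConnector connector = JMXConnectorFactory.connect(url);
        try
            {
            MBeanServerConnection conn = connector.getMBeanServerConnection();

            // placeholder service name and node ID
            ObjectName service = new ObjectName(
                "Coherence:type=Service,name=DistributedCache,nodeId=1");

            Object statusHA = conn.getAttribute(service, "StatusHA");
            System.out.println("StatusHA = " + statusHA);
            }
        finally
            {
            connector.close();
            }
        }
    }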
9 Dynamically Managing Cluster Membership
Overview of Managing Cluster Membership
Using the Cluster and Service Objects
Using the Member Object
Listening to Member Events
From the Cluster object, the application can determine the set of services that run in the cluster. This is illustrated in Example 9-1.

Example 9-1 Determining Services Running in the Cluster

...
for (Enumeration e = cluster.getServiceNames(); e.hasMoreElements(); )
    {
    String sName = (String) e.nextElement();
    ServiceInfo info = cluster.getServiceInfo(sName);
    // ...
    }
...
The ServiceInfo object provides information about the service, including its name, type, version and membership. For more information on this feature, see the API documentation for NamedCache, CacheService, Service, ServiceInfo and Cluster.
The Member's IP address What date/time the Member joined the cluster
As an example, if there are four servers in the cluster with each server running one copy ("instance") of the application and all four instances of the application are clustered, then the cluster is composed of four Members. From the Cluster object, the application can determine what the local Member is:
Member memberThis = cluster.getLocalMember();
From the Cluster object, the application can also determine the entire set of cluster members:
Set setMembers = cluster.getMemberSet();
From the ServiceInfo object, the application can determine the set of cluster members that are participating in that service:
ServiceInfo info = cluster.getServiceInfo(sName); Set setMembers = info.getMemberSet();
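As an illustrative sketch (the class name and output formatting are arbitrary), the following program combines these calls to print each member's IP address and the time at which it joined the cluster:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.Cluster;
import com.tangosol.net.Member;

import java.util.Date;
import java.util.Iterator;
import java.util.Set;

public class MemberInfoPrinter
    {
    public static void main(String[] args)
        {
        Cluster cluster = CacheFactory.ensureCluster();
        Set setMembers = cluster.getMemberSet();
        for (Iterator iter = setMembers.iterator(); iter.hasNext(); )
            {
            Member member = (Member) iter.next();
            System.out.println(member.getAddress() + " joined at "
                + new Date(member.getTimestamp()));
            }
        CacheFactory.shutdown();
        }
    }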
For more information on this feature, see the API documentation for Member.
The following example adds a listener implementation named MyMemberListener to a service named DistributedCache by adding the <member-listener> element to a distributed cache scheme definition:
<distributed-scheme> <scheme-name>example-distributed</scheme-name> <service-name>DistributedCache</service-name> <member-listener> <class-name>package.MyMemberListener</class-name> </member-listener> <backing-map-scheme> <local-scheme> <scheme-ref>example-binary-backing-map</scheme-ref> </local-scheme> </backing-map-scheme> <autostart>true</autostart> </distributed-scheme>
The <member-listener> element can be used within the <distributed-scheme>, <replicated-scheme>, <optimistic-scheme>, <invocation-scheme>, and <proxy-scheme> elements. See Appendix B, "Cache Configuration Elements" for a reference of valid cache configuration elements.
Note:
A MemberListener implementation must have a public default constructor when using the <member-listener> element to add a listener to a service.
Example 9-2 demonstrates a MemberListener implementation that prints out all the membership events that it receives:

Example 9-2 A Sample MemberListener Implementation

public class MemberEventPrinter
        extends Base
        implements MemberListener
    {
    public void memberJoined(MemberEvent evt)
        {
        out(evt);
        }

    public void memberLeaving(MemberEvent evt)
        {
        out(evt);
        }

    public void memberLeft(MemberEvent evt)
        {
        out(evt);
        }
    }
The MemberEvent object carries information about the event type (either MEMBER_JOINED, MEMBER_LEAVING, or MEMBER_LEFT), the member that generated the event, and the service that acts as the source of the event. Additionally, the event provides a method, isLocal(), that indicates to the application that it is this member that is joining or leaving the cluster. This is useful for recognizing soft restarts in which an application automatically rejoins a cluster after a failure occurs.
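For example, a listener might use isLocal() to react only when the local member itself joins or rejoins a service. The following is an illustrative sketch; the class name and log message are arbitrary:

import com.tangosol.net.MemberEvent;
import com.tangosol.net.MemberListener;

public class LocalJoinListener implements MemberListener
    {
    public void memberJoined(MemberEvent evt)
        {
        if (evt.isLocal())
            {
            // the local member has just joined (or rejoined after a soft
            // restart); re-register any transient state as required
            System.out.println("Local member joined service "
                + evt.getService().getInfo().getServiceName());
            }
        }

    public void memberLeaving(MemberEvent evt)
        {
        }

    public void memberLeft(MemberEvent evt)
        {
        }
    }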
Note: Calling the CacheFactory.shutdown() method unregisters all listeners. In this case, both the MEMBER_LEAVING and MEMBER_LEFT events are sent. If a member terminates for any other reason, only the MEMBER_LEFT event is sent.
For more information on these features, see the API documentation for Service, MemberListener, and MemberEvent.
10 Tuning TCMP Behavior
This chapter provides instructions for changing default TCMP settings. A brief overview of TCMP is also provided. See "Understanding TCMP" on page 6-1 for additional details on TCMP. Also, see Administering Oracle Coherence which includes many tuning recommendations and instructions. This chapter includes the following sections:
Overview of TCMP Data Transmission
Throttling Data Transmission
Bundling Packets to Reduce Load
Changing Packet Retransmission Behavior
Configuring the Size of the Packet Buffers
Adjusting the Maximum Size of a Packet
Changing the Packet Speaker Volume Threshold
Configuring the Incoming Message Handler
Using Network Filters
Changing the TCMP Socket Provider Implementation
TCMP data transmission behavior is configured within the tangosol-coherence-override.xml file using the <packet-publisher>, <packet-speaker>, <incoming-message-handler>, and <outgoing-message-handler> elements. See Appendix A, "Operational Configuration Elements," for a reference of all TCMP-related elements that are discussed in this chapter.
Adjusting Packet Flow Control Behavior Disabling Packet Flow Control Adjusting Packet Traffic Jam Behavior
<pause-detection> - This setting controls the maximum number of packets that are resent to an unresponsive cluster member before determining that the member is paused. When a cluster member is marked as paused, packets addressed to it are sent at a lower rate until the member resumes responding. Pauses are typically due to long garbage collection intervals. The value is specified using the <maximum-packets> element and defaults to 16 packets. A value of 0 disables pause detection.
<outstanding-packets> - This setting is used to define the number of unconfirmed packets that are sent to a cluster member before packets addressed to that member are deferred. The value may be specified as either an explicit number by using the <maximum-packets> element, or as a range by using both the <maximum-packets> and <minimum-packets> elements. When a range is specified, this setting is dynamically adjusted based on network statistics. The maximum value should always be greater than 256 packets and defaults to 4096 packets. The minimum value should always be greater than 16 packets and defaults to 64 packets.
To adjust flow control behavior, edit the operational override file and add the <pause-detection> and <outstanding-packets> elements as follows:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <packet-publisher>
<packet-delivery> <flow-control> <pause-detection> <maximum-packets>32</maximum-packets> </pause-detection> <outstanding-packets> <maximum-packets>2048</maximum-packets> <minimum-packets>128</minimum-packets> </outstanding-packets> </flow-control> </packet-delivery> </packet-publisher> </cluster-config> </coherence>
<maximum-packets> - This setting controls the maximum number of pending packets that the packet publisher tolerates before determining that it is clogged and must slow down client requests (requests from local non-system threads). When the configured maximum packets limit is exceeded, client threads are forced to pause until the number of outstanding packets drops below the specified limit. This setting prevents most unexpected out-of-memory conditions by limiting the size of the resend queue. A value of 0 means no limit. The default value is 8192.
<pause-milliseconds> - This setting controls the number of milliseconds that the publisher pauses a client thread that is trying to send a message when the publisher is clogged. The publisher does not allow the message to go through until
the clog is gone, and repeatedly sleeps the thread for the duration specified by this property. The default value is 10. Specifying a packet limit which is too low, or a pause which is too long, may result in the publisher transmitting all pending packets and being left without packets to send. A warning is periodically logged if this condition is detected. Ideal values ensure that the publisher is never left without work to do, but at the same time prevent the queue from growing uncontrollably. The pause should be set short (10ms or under) and the limit on the number of packets should be set high (that is, greater than 5000). When the <traffic-jam> element is used with the <flow-control> element, the setting operates in a point-to-point mode, only blocking a send if the recipient has too many packets outstanding. It is recommended that the <traffic-jam> element's <maximum-packets> subelement value be greater than the <maximum-packets> value for the <outstanding-packets> element. When <flow-control> is disabled, the <traffic-jam> setting takes all outstanding packets into account. To adjust the enqueue rate behavior, edit the operational override file and add the <maximum-packets> and <pause-milliseconds> elements as follows:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <packet-publisher> <traffic-jam> <maximum-packets>8192</maximum-packets> <pause-milliseconds>10</pause-milliseconds> </traffic-jam> </packet-publisher> </cluster-config> </coherence>
<maximum-deferral-time> - This setting specifies the maximum amount of time to defer a packet while waiting for additional packets to bundle. A value of zero results in the algorithm not waiting, and only bundling the readily accessible packets. A value greater than zero causes some transmission deferral while waiting for additional packets to become available. This value is typically set below 250 microseconds to avoid a detrimental throughput impact. If the units are not specified, nanoseconds are assumed. The default value is 1us (microsecond).
<aggression-factor> - This setting specifies the aggressiveness of the packet deferral algorithm. Whereas the <maximum-deferral-time> element defines the upper limit on the deferral time, the <aggression-factor> influences the average deferral time. The higher the aggression value, the longer the publisher may wait for additional packets. The factor may be expressed as a real number, and values between 0.0 and 1.0 often allow for high packet utilization while keeping latency to a minimum. The default value is 0.
The default packet-bundling settings are minimally aggressive, allowing bundling to occur without adding a measurable delay. The benefit of more aggressive bundling depends on the network infrastructure and the application object's typical data sizes and access patterns. To adjust packet bundling behavior, edit the operational override file and add the <maximum-deferral-time> and <aggression-factor> elements as follows:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <packet-publisher> <packet-delivery> <packet-bundling> <maximum-deferral-time>1us</maximum-deferral-time> <aggression-factor>0</aggression-factor> </packet-bundling> </packet-delivery> </packet-publisher> </cluster-config> </coherence>
Changing the Packet Resend Interval Changing the Packet Resend Timeout Configuring Packet Acknowledgment Delays
To change the packet resend interval, edit the operational override file and add a <resend-milliseconds> element as follows:

<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <packet-publisher> <packet-delivery> <resend-milliseconds>400</resend-milliseconds> </packet-delivery> </packet-publisher> </cluster-config> </coherence>
The default death detection mechanism is the TCP-ring listener, which detects failed cluster members before the resend timeout interval is ever reached. See "Configuring Death Detection" on page 7-13 for more information on death detection.
To change the packet resend timeout interval, edit the operational override file and add a <timeout-milliseconds> element as follows:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <packet-publisher> <packet-delivery> <timeout-milliseconds>420000</timeout-milliseconds> </packet-delivery> </packet-publisher> </cluster-config> </coherence>
<ack-delay-milliseconds> - This element specifies the maximum number of milliseconds that the packet publisher delays before sending an ACK packet. The ACK packet may be transmitted earlier if multiple batched acknowledgments fill the ACK packet. This value should be set substantially lower than the remote member's packet delivery resend timeout to allow ample time for the ACK to be received and processed before the resend timeout expires. The default value is 16.
<nack-delay-milliseconds> - This element specifies the number of milliseconds that the packet publisher delays before sending a NACK packet. The default value is 1.
To change the ACK and NACK delay intervals, edit the operational override file and add the <ack-delay-milliseconds> and <nack-delay-milliseconds> elements as follows:
To change the ACK and NACK delay intervals, edit the operational override file and add the <ack-delay-milliseconds> and <nack-delay-milliseconds> elements as follows:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <packet-publisher> <notification-queueing> <ack-delay-milliseconds>32</ack-delay-milliseconds> <nack-delay-milliseconds>1</nack-delay-milliseconds> </notification-queueing> </packet-publisher> </cluster-config> </coherence>
Understanding Packet Buffer Sizing Configuring the Outbound Packet Buffer Size Configuring the Inbound Packet Buffer Size
<maximum-packets> - This setting specifies the number of packets (based on the configured packet size) that the datagram socket is asked to size itself to buffer. See the java.net.SocketOptions#SO_SNDBUF and java.net.SocketOptions#SO_RCVBUF properties for additional details. Actual buffer sizes may be smaller if the underlying socket implementation cannot support more than a certain size. For details on configuring the packet size, see "Adjusting the Maximum Size of a Packet" on page 10-9.
<size> - Specifies the requested size of the underlying socket buffer in bytes.
The operating system only treats the specified packet buffer size as a hint and is not required to allocate the specified amount. In the event that less space is allocated than requested, Coherence issues a warning and continues to operate with the constrained buffer, which may degrade performance. See Administering Oracle Coherence for details on configuring your operating system to allow larger buffers. Large inbound buffers can help insulate the Coherence network layer from JVM pauses that are caused by the Java Garbage Collector. While the JVM is paused, Coherence cannot dequeue packets from any inbound socket. If the pause is long enough to cause the packet buffer to overflow, the packet reception is delayed as the originating node must detect the packet loss and retransmit the packet(s).
The following example specifies the packet buffer size for the unicast listener and is entered in bytes:

<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <unicast-listener> <packet-buffer> <size>1500000</size> </packet-buffer> </unicast-listener> </cluster-config> </coherence>
The following example specifies the packet buffer size for the multicast listener and is entered using packets:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <packet-publisher> <packet-buffer> <maximum-packets>128</maximum-packets> </packet-buffer> </packet-publisher> </cluster-config> </coherence>
Packet size is configured within the <packet-size> element and includes the following settings:
<maximum-length> - Specifies the packet size, in bytes, which all cluster members can safely support. This value must be the same for all members in the cluster. A low value can artificially limit the maximum size of the cluster. This value should be at least 512. The default value is 64KB.
<preferred-length> - Specifies the preferred size, in bytes, of the DatagramPacket objects that are sent and received on the unicast and multicast sockets. This value can be larger or smaller than the <maximum-length> value, and need not be the same for all cluster members. The ideal value is one which fits within the network MTU, leaving enough space for either the UDP or TCP packet headers, which are 32 and 52 bytes respectively.
This value should be at least 512. A default value is automatically calculated based on the local node's MTU. An MTU of 1500 is used if the MTU cannot be obtained and is adjusted for the packet headers (1468 for UDP and 1448 for TCP). To adjust the packet size, edit the operational override file and add the <maximum-length> and <preferred-length> elements as follows:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <packet-publisher> <packet-size> <maximum-length>49152</maximum-length> <preferred-length>1500</preferred-length> </packet-size> </packet-publisher> </cluster-config> </coherence>
Filters are defined in the operational deployment descriptor and must be explicitly enabled within a tangosol-coherence-override.xml file.
Note:
Use filters in an all-or-nothing manner: if one cluster member is using a filter and another is not, the messaging protocol fails. Stop the entire cluster before configuring filters.
Enabling the Compression Filter for Specific Services Enabling the Compression Filter for All Services Configuring the Compression Filter
Table 10-1 Compression Filter Parameters

buffer-length - Specifies compression buffer length in bytes. Legal values are positive integers or zero. The default value is 0.
level - Specifies the compression level.
strategy - Specifies the compression strategy.
Declaring a Custom Filter Enabling a Custom Filter for Specific Services Enabling a Custom Filter for All Services
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <filters> <filter id="4"> <filter-name>MyFilter</filter-name> <filter-class>package.MyFilter</filter-class> <init-params> <init-param id="1"> <param-name>foo</param-name> <param-value>bar</param-value> </init-param> </init-params> </filter> </filters> </cluster-config> </coherence>
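The filter class named in the declaration is application code. The following minimal sketch assumes that the filter implements the com.tangosol.io.WrapperStreamFactory interface and simply wraps the streams with GZIP compression; handling of the foo initialization parameter is omitted for brevity:

import com.tangosol.io.WrapperStreamFactory;

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class MyFilter implements WrapperStreamFactory
    {
    public InputStream getInputStream(InputStream stream)
        {
        try
            {
            // wrap the incoming stream so that data is decompressed on read
            return new GZIPInputStream(stream);
            }
        catch (IOException e)
            {
            throw new RuntimeException(e);
            }
        }

    public OutputStream getOutputStream(OutputStream stream)
        {
        try
            {
            // wrap the outgoing stream so that data is compressed on write
            return new GZIPOutputStream(stream);
            }
        catch (IOException e)
            {
            throw new RuntimeException(e);
            }
        }
    }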
Using the TCP Socket Provider Using the SDP Socket Provider Using the SSL Socket Provider
If this socket provider is used without the WKA feature (multicast enabled), TCP is used for all unicast communications, while multicast is used for group-based communications.
The TCP socket provider uses up to two TCP connections between each pair of cluster members. No additional threads are added to manage the TCP traffic as it is all done using nonblocking NIO based sockets. Therefore, the existing TCMP threads handle all the connections. The connections are brought up on demand and are automatically reopened as needed if they get disconnected for any reason. Two connections are utilized because it reduces send/receive contention and noticeably improves performance. TCMP is largely unaware that it is using a reliable protocol and as such still manages guaranteed delivery and flow control. To specify the TCP socket provider, edit the operational override file and add a <socket-provider> element that includes the tcp value. For example:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <unicast-listener> <socket-provider system-property="tangosol.coherence.socketprovider">tcp </socket-provider> </unicast-listener> </cluster-config> </coherence>
The tangosol.coherence.socketprovider system property is used to specify the socket provider instead of using the operational override file. For example:
-Dtangosol.coherence.socketprovider=tcp
If this socket provider is used without the WKA feature (multicast enabled), SDP is used for all unicast communications, while multicast is used for group-based communications.
To specify the SDP socket provider, edit the operational override file and add a <socket-provider> element that includes the sdp value. For example:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <unicast-listener> <socket-provider system-property="tangosol.coherence.socketprovider">sdp </socket-provider> </unicast-listener> </cluster-config> </coherence>
The tangosol.coherence.socketprovider system property is used to specify the socket provider instead of using the operational override file. For example:
-Dtangosol.coherence.socketprovider=sdp
11 Using the Service Guardian
This chapter provides instructions for using and configuring the service guardian to detect and resolve deadlocked service threads and includes instructions for implementing custom failure policies. This chapter includes the following sections:
Overview
The service guardian is a mechanism that detects and attempts to resolve deadlocks in Coherence threads. Deadlocked threads on a member may result in many undesirable behaviors that are visible to the rest of the cluster, such as the inability to add new nodes to the cluster and the inability to service requests by nodes currently in the cluster. The service guardian receives periodic heartbeats that are issued by Coherence-owned and created threads. Should a thread fail to issue a heartbeat before the configured timeout, the service guardian takes corrective action. Both the timeout and corrective action (recovery) can be configured as required.
Note:
The term deadlock does not necessarily indicate a true deadlock; a thread that does not issue a timely heartbeat may be executing a long running process or waiting on a slow resource. The service guardian does not have the ability to distinguish a deadlocked thread from a slow one.
Interfaces That Are Executed By Coherence

Implementations of the following interfaces are executed by Coherence-owned threads. Any processing in an implementation that exceeds the configured guardian timeout results in the service guardian attempting to recover the thread. The list is not exhaustive and only provides the most common interfaces that are implemented by end users.
com.tangosol.net.Invocable
com.tangosol.net.cache.CacheStore
com.tangosol.util.Filter
com.tangosol.util.InvocableMap.EntryAggregator
com.tangosol.util.InvocableMap.EntryProcessor
com.tangosol.util.MapListener
com.tangosol.util.MapTrigger

Understanding Recovery

The service guardian's recovery mechanism uses a series of steps to determine if a thread is deadlocked. Corrective action is taken if the service guardian concludes that the thread is deadlocked. The action to take can be configured and custom actions can be created if required. The recovery mechanism is outlined below:
Soft Timeout

The recovery mechanism first attempts to interrupt the thread just before the configured timeout is reached. The following example log message demonstrates a soft timeout message:
<Error> (thread=DistributedCache, member=1): Attempting recovery (due to soft timeout) of Daemon{Thread="Thread[WriteBehindThread:CacheStoreWrapper(com. tangosol.examples.rwbm.TimeoutTest),5,WriteBehindThread:CacheStoreWrapper(com. tangosol.examples.rwbm.TimeoutTest)]", State=Running}
If the thread can be interrupted and it results in a heartbeat, normal processing resumes.
Hard Timeout

The recovery mechanism attempts to stop a thread after the configured timeout is reached. The following example log message demonstrates a hard timeout message:
<Error> (thread=DistributedCache, member=1): Terminating guarded execution (due to hard timeout) of Daemon{Thread="Thread[WriteBehindThread:CacheStoreWrapper (com.tangosol.examples.rwbm.TimeoutTest),5,WriteBehindThread:CacheStoreWrapper (com.tangosol.examples.rwbm.TimeoutTest)]", State=Running}
Lastly, if the thread cannot be stopped, the recovery mechanism performs an action based on the configured failure policy. Actions that can be performed include: shutting down the cluster service, shutting down the JVM, and performing a custom action. The following example log message demonstrates an action taken by the recovery mechanism:
<Error> (thread=Termination Thread, member=1): Write-behind thread timed out; stopping the cache service
The service guardian timeout can be set three different ways based on the level of granularity that is required:
All threads - This option allows a single timeout value to be applied to all Coherence-owned threads on a cluster node. This is the out-of-box configuration and is set at 305000 milliseconds by default.
Threads per service type - This option allows different timeout values to be set for specific service types. The timeout value is applied to the threads of all service instances. If a timeout is not specified for a particular service type, then the timeout defaults to the timeout that is set for all threads.
Threads per service instance - This option allows different timeout values to be set for specific service instances. If a timeout is not set for a specific service instance, then the service's timeout value, if specified, is used; otherwise, the timeout that is set for all threads is used.
Setting the timeout value to 0 stops threads from being guarded. In general, the service guardian timeout value should be set equal to or greater than the timeout value for packet delivery.
Note:
The guardian timeout can also be used for cache store implementations that are configured with a read-write-backing-map scheme. In this case, the <cachestore-timeout> element is set to 0, which defaults the timeout to the guardian timeout. See "read-write-backing-map-scheme" on page B-94.
The <timeout-milliseconds> value can also be set using the tangosol.coherence.guard.timeout system property.
11-3
The guardian-timeout initialization parameter can be set for the DistributedCache, ReplicatedCache, OptimisticCache, Invocation, and Proxy services. Refer to the tangosol-coherence.xml file that is located in the coherence.jar file for the correct service ID and initialization parameter ID to use when overriding the guardian-timeout parameter for a service. Each service also has a system property that sets the guardian timeout, respectively:
tangosol.coherence.distributed.guard.timeout
tangosol.coherence.replicated.guard.timeout
tangosol.coherence.optimistic.guard.timeout
tangosol.coherence.invocation.guard.timeout
tangosol.coherence.proxy.guard.timeout
The <guardian-timeout> element can be used in the following schemes: <distributed-scheme>, <replicated-scheme>, <optimistic-scheme>, <transaction-scheme>, <invocation-scheme>, and <proxy-scheme>.
If the <task-timeout> element is set to 0, the default guardian timeout is used. See Appendix B, "Cache Configuration Elements" for more information on the different cache schemes that support the use of the <task-timeout> element.
exit-cluster - This policy attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to stop the cluster services. This is the default policy if no policy is specified.
exit-process - This policy attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to exit the JVM and terminate abruptly.
logging - This policy logs any detected problems but takes no corrective action.
custom - The name of a Java class that provides an implementation for the com.tangosol.net.ServiceFailurePolicy interface. See "Enabling a Custom Guardian Failure Policy" on page 11-6.
The service guardian failure policy can be set three different ways based on the level of granularity that is required:
All threads - This option allows a single failure policy to be applied to all Coherence-owned threads on a cluster node. This is the out-of-box configuration.
Threads per service type - This option allows different failure policies to be set for specific service types. The policy is applied to the threads of all service instances. If a policy is not specified for a particular service type, then the policy defaults to the policy that is set for all threads.
Threads per service instance - This option allows different failure policies to be set for specific service instances. If a policy is not set for a specific service instance, then the service's policy, if specified, is used; otherwise, the policy that is set for all threads is used.
The service-failure-policy initialization parameter can be set for the DistributedCache, ReplicatedCache, OptimisticCache, Invocation, and Proxy services. Refer to the tangosol-coherence.xml file that is located in the coherence.jar file for the correct service ID and initialization parameter ID to use when overriding the service-failure-policy parameter for a service.
The <service-failure-policy> element can be used in the following schemes: <distributed-scheme>, <replicated-scheme>, <optimistic-scheme>, <transaction-scheme>, <invocation-scheme>, and <proxy-scheme>.
The following example enables a custom failure policy that is implemented in the MyFailurePolicy class. Custom failure policies can be enabled for all threads (as shown below) or can be enabled per service instance within a cache scheme definition.
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <service-guardian> <service-failure-policy> <instance> <class-name>package.MyFailurePolicy</class-name> </instance> </service-failure-policy> </service-guardian> </cluster-config> </coherence>
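The failure policy implementation itself might look like the following minimal sketch, which assumes the standard com.tangosol.net.ServiceFailurePolicy callback methods and simply delegates to the default recovery and termination behavior; a real policy would typically add logging, notifications, or alternative corrective actions:

import com.tangosol.net.Cluster;
import com.tangosol.net.Guardable;
import com.tangosol.net.Service;
import com.tangosol.net.ServiceFailurePolicy;

public class MyFailurePolicy implements ServiceFailurePolicy
    {
    public void onGuardableRecovery(Guardable guardable, Service service)
        {
        // first corrective action: ask the guardable to attempt recovery
        guardable.recover();
        }

    public void onGuardableTerminate(Guardable guardable, Service service)
        {
        // recovery failed: terminate the unresponsive guardable
        guardable.terminate();
        }

    public void onServiceFailed(Cluster cluster)
        {
        // last resort: stop the cluster services on this node
        cluster.stop();
        }
    }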
As an alternative, the <instance> element supports the use of a <class-factory-name> element to use a factory class that is responsible for creating ServiceFailurePolicy instances, and a <method-name> element to specify the static factory method on the factory class that performs object instantiation. The following example gets a custom failure policy instance using the getPolicy method on the MyPolicyFactory class.
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <service-guardian> <service-failure-policy> <instance> <class-factory-name>package.MyPolicyFactory</class-factory-name> <method-name>getPolicy</method-name> </instance> </service-failure-policy> </service-guardian> </cluster-config> </coherence>
Any initialization parameters that are required for an implementation can be specified using the <init-params> element. The following example sets the iMaxTime parameter to 2000.
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <service-guardian> <service-failure-policy> <instance> <class-name>package.MyFailurePolicy</class-name>
<init-params> <init-param> <param-name>iMaxTime</param-name> <param-value>2000</param-value> </init-param> </init-params> </instance> </service-failure-policy> </service-guardian> </cluster-config> </coherence>
For known long running operations, the heartbeat can be issued with the number of milliseconds that should pass before the operation is considered "stuck:"
GuardSupport.heartbeat(long cMillis);
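As an illustrative sketch (the processing loop and helper method are hypothetical), a long-running entry processor might issue guardian heartbeats between units of work so that the service guardian does not treat its worker thread as stuck:

import com.tangosol.net.GuardSupport;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.AbstractProcessor;

public class LongRunningProcessor extends AbstractProcessor
    {
    public Object process(InvocableMap.Entry entry)
        {
        for (int i = 0; i < 10; i++)
            {
            // declare that up to 30 seconds may pass before the next heartbeat
            GuardSupport.heartbeat(30000L);

            doExpensiveStep(entry, i); // hypothetical application-specific work
            }
        return null;
        }

    private void doExpensiveStep(InvocableMap.Entry entry, int nStep)
        {
        // placeholder for the actual long-running work
        }
    }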
Part III
Using Caches
Chapter 12, "Introduction to Coherence Caches"
Chapter 13, "Configuring Caches"
Chapter 14, "Implementing Storage and Backing Maps"
Chapter 15, "Caching Data Sources"
Chapter 16, "Serialization Paged Cache"
Chapter 17, "Using Quorum"
Chapter 18, "Cache Configurations by Example"
12 Introduction to Coherence Caches
This chapter provides an overview and comparison of basic cache types offered by Coherence. This chapter includes the following sections:
Distributed Cache
Replicated Cache
Optimistic Cache
Near Cache
Local Cache
Remote Cache
Summary of Cache Types
Distributed Cache
A distributed, or partitioned, cache is a clustered, fault-tolerant cache that has linear scalability. Data is partitioned among all the computers of the cluster. For fault-tolerance, partitioned caches can be configured to keep each piece of data on one or more unique computers within a cluster. Distributed caches are the most commonly used caches in Coherence. Coherence defines a distributed cache as a collection of data that is distributed (or, partitioned) across any number of cluster nodes such that exactly one node in the cluster is responsible for each piece of data in the cache, and the responsibility is distributed (or, load-balanced) among the cluster nodes. There are several key points to consider about a distributed cache:
Partitioned: The data in a distributed cache is spread out over all the servers in such a way that no two servers are responsible for the same piece of cached data. The size of the cache and the processing power associated with the management of the cache can grow linearly with the size of the cluster. Also, it means that operations against data in the cache can be accomplished with a "single hop," in other words, involving at most one other server.
Load-Balanced: Since the data is spread out evenly over the servers, the responsibility for managing the data is automatically load-balanced across the cluster.
Location Transparency: Although the data is spread out across cluster nodes, the exact same API is used to access the data, and the same behavior is provided by each of the API methods. This is called location transparency, which means that
the developer does not have to code based on the topology of the cache, since the API and its behavior is the same with a local JCache, a replicated cache, or a distributed cache.
Failover: All Coherence services provide failover and failback without any data loss, and that includes the distributed cache service. The distributed cache service allows the number of backups to be configured; if the number of backups is one or higher, any cluster node can fail without the loss of data.
Access to the distributed cache often must go over the network to another cluster node. All other things being equal, if there are n cluster nodes, (n - 1) / n operations go over the network:
Figure 12-1 Get Operations in a Partitioned Cache Environment
Since each piece of data is managed by only one cluster node, an access over the network is only a "single hop" operation. This type of access is extremely scalable, since it can use point-to-point communication and thus take optimal advantage of a switched network. Similarly, a cache update operation can use the same single-hop point-to-point approach, which addresses a known limitation of a replicated cache, the requirement to push cache updates to all cluster nodes.
In the figure above, the data is being sent to a primary cluster node and a backup cluster node. This is for failover purposes, and corresponds to a backup count of one. (The default backup count setting is one.) If the cache data were not critical, which is to say that it could be re-loaded from disk, the backup count could be set to zero, which would allow some portion of the distributed cache data to be lost if a cluster node fails. If the cache were extremely critical, a higher backup count, such as two, could be used. The backup count only affects the performance of cache modifications, such as those made by adding, changing or removing cache entries. Modifications to the cache are not considered complete until all backups have acknowledged receipt of the modification. There is a slight performance penalty for cache modifications when using the distributed cache backups; however, it guarantees that if a cluster node were to fail unexpectedly, data consistency is maintained and no data is lost. Failover of a distributed cache involves promoting backup data to be primary storage. When a cluster node fails, all remaining cluster nodes determine what data each holds in backup that the failed cluster node had primary responsibility for when it died. That data becomes the responsibility of whatever cluster node was the backup for the data:
If there are multiple levels of backup, the first backup becomes responsible for the data; the second backup becomes the new first backup, and so on. Just as with the replicated cache service, lock information is also retained in the event of server failure; the sole exception is that locks held by the failed cluster node are automatically released. The distributed cache service also allows certain cluster nodes to be configured to store data, and others to be configured to not store data. The name of this setting is local storage enabled. Cluster nodes that are configured with the local storage enabled option provide the cache storage and the backup storage for the distributed cache. Regardless of this setting, all cluster nodes have the same exact view of the data, due to location transparency.
The Java heap size of cluster nodes that have local storage enabled turned off is not affected at all by the amount of data in the cache, because that data is cached on other cluster nodes. This is particularly useful for application server processes running on older JVM versions with large Java heaps, because those processes often suffer from garbage collection pauses that grow exponentially with the size of the heap. Coherence allows each cluster node to run any supported version of the JVM. That means that cluster nodes with local storage enabled turned on could be running a newer JVM version that supports larger heap sizes, or Coherence's off-heap storage using the Java NIO features. The local storage enabled option allows some cluster nodes to be used just for storing the cache data; such cluster nodes are called Coherence cache servers. Cache servers are commonly used to scale up Coherence's distributed query functionality.
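In practice, local storage is typically controlled per JVM from the command line. For example, an application JVM that should join the cluster without storing distributed cache data is usually started with the tangosol.coherence.distributed.localstorage system property (shown here only as an illustration):

-Dtangosol.coherence.distributed.localstorage=false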
Replicated Cache
A replicated cache is a clustered, fault-tolerant cache where data is fully replicated to every member in the cluster. This cache offers the fastest read performance with linear performance scalability for reads but poor scalability for writes (as writes must be processed by every member in the cluster). Because data is replicated to all servers, adding servers does not increase aggregate cache capacity. There are several challenges to building a reliable replicated cache. The first is how to get it to scale and perform well. Updates to the cache have to be sent to all cluster nodes, and all cluster nodes have to end up with the same data, even if multiple
updates to the same piece of data occur at the same time. Also, if a cluster node requests a lock, it should not have to get all cluster nodes to agree on the lock, otherwise it scales extremely poorly; yet with cluster node failure, all of the data and lock information must be kept safely. Coherence handles all of these scenarios transparently, and provides the most scalable and highly available replicated cache implementation available for Java applications. Replicated caches have very high access speed. Since the data is replicated to each cluster node, it is available for use without any waiting. This is referred to as "zero latency access," and is perfect for situations in which an application requires the highest possible speed in its data access. Each cluster node (JVM) accesses the data from its own memory:
Figure 12-5 Get Operation in a Replicated Cache Environment
In contrast, updating a replicated cache requires pushing the new version of the data to all other cluster nodes:
Coherence implements its replicated cache service in such a way that all read-only operations occur locally, all concurrency control operations involve at most one other cluster node, and only update operations require communicating with all other cluster nodes. The result is excellent scalable performance, and as with all of the Coherence services, the replicated cache service provides transparent and complete failover and failback. The limitations of the replicated cache service should also be carefully considered:
Data is managed by the replicated cache service on every cluster node that has joined the service. Therefore, memory utilization (the Java heap size) is increased for each cluster node and can impact performance.
Replicated caches that have a high incidence of updates do not scale linearly as the cluster grows. The cluster suffers diminishing returns as cluster nodes are added.
Replicated caches do not support per-entry expiration. Expiry needs to be set at the cache level.
Optimistic Cache
An optimistic cache is a clustered cache implementation similar to the replicated cache implementation but without any concurrency control. This implementation offers higher write throughput than a replicated cache. It also allows an alternative underlying store for the cached data (for example, an MRU/MFU-based cache). However, if two cluster members are independently pruning or purging the underlying local stores, the store content held by each member may be different.
Near Cache
A near cache is a hybrid cache; it typically fronts a distributed cache or a remote cache with a local cache. A near cache invalidates front cache entries using a configured invalidation strategy, and provides excellent performance and synchronization. A near cache backed by a partitioned cache offers zero millisecond local access for repeat data
access, while enabling concurrency and ensuring coherency and failover, effectively combining the best attributes of replicated and partitioned caches.
The objective of a near cache is to provide the best of both worlds between the extreme performance of the replicated cache and the extreme scalability of the distributed cache by providing fast read access to Most Recently Used (MRU) and Most Frequently Used (MFU) data. Therefore, the near cache is an implementation that wraps two caches: a "front cache" and a "back cache" that automatically and transparently communicate with each other by using a read-through/write-through approach.
The "front cache" provides local cache access. It is assumed to be inexpensive, in that it is fast, and is limited in terms of size. The "back cache" can be a centralized or multitiered cache that can load-on-demand in case of local cache misses. The "back cache" is assumed to be complete and correct in that it has much higher capacity, but is more expensive in terms of access speed. The use of a near cache is not confined to Coherence*Extend; it also works with TCMP.
This design allows near caches to configure cache coherency, from the most basic expiry-based caches and invalidation-based caches, up to advanced caches that version data and provide guaranteed coherency. The result is a tunable balance between the preservation of local memory resources and the performance benefits of truly local caches.
The typical deployment uses a local cache for the "front cache". A local cache is a reasonable choice because it is thread safe, highly concurrent, size-limited, auto-expiring, and stores the data in object form. For the "back cache", a remote, partitioned cache is used.
The following figure illustrates the data flow in a near cache. If the client writes an object D into the grid, the object is placed in the local cache inside the local JVM and in the partitioned cache which is backing it (including a backup copy). If the client requests the object, it can be obtained from the local, or "front cache", in object form with no latency.
If the client requests an object that has been expired or invalidated from the "front cache", then Coherence automatically retrieves the object from the partitioned cache. The "front cache" stores the object before the object is delivered to the client.
Figure 12-8 Get Operations in a Near Cache Environment
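The read behavior described above can be sketched in application code as follows. The cache name "example-near" is hypothetical and is assumed to be mapped to a near scheme (see Chapter 13, "Configuring Caches" for near cache configuration):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class NearCacheExample
    {
    public static void main(String[] args)
        {
        NamedCache cache = CacheFactory.getCache("example-near");

        // The object is placed in the front (local) cache and in the
        // partitioned back cache (including a backup copy).
        cache.put("policy-1", "data");

        // Repeat reads are served from the front cache in object form,
        // with no network access, until the entry expires or is invalidated.
        cache.get("policy-1");
        cache.get("policy-1");

        CacheFactory.shutdown();
        }
    }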
Local Cache
While it is not a clustered service, the local cache implementation is often used in combination with various clustered cache services as part of a near cache.
Note:
Applications that use a local cache as part of a near cache must make sure that keys are immutable. Keys that are mutable can cause threads to hang and deadlocks to occur.
In particular, the near cache implementation uses key objects to synchronize the local cache (front map), and the key may also be cached in an internal map. Therefore, a key object passed to the get() method is used as a lock. If the key object is modified while a thread holds the lock, then the thread may fail to unlock the key. In addition, if there are other threads waiting for the locked key object to be freed, or if there are threads that attempt to acquire a lock on the modified key, then threads may hang and deadlocks can occur. For details on the use of keys, see the java.util.Map API documentation. A local cache is completely contained within a particular cluster node. There are several attributes of the local cache that are particularly interesting:
The local cache implements the same standard collections interface that the clustered caches implement, meaning that there is no programming difference between using a local or a clustered cache. Just like the clustered caches, the local cache tracks the JCache API, which itself is based on the same standard collections API that the local cache is based on.
The local cache can be size-limited. The local cache can restrict the number of entries that it caches, and automatically evict entries when the cache becomes full. Furthermore, both the sizing of entries and the eviction policies can be customized. For example, the cache can be size-limited based on the memory used by the cached entries. The default eviction policy uses a combination of Most Frequently Used (MFU) and Most Recently Used (MRU) information, scaled on a logarithmic curve, to determine what cache items to evict. This algorithm is the best general-purpose eviction algorithm because it works well for short duration and long duration caches, and it balances frequency versus recentness to avoid cache thrashing. The pure LRU and pure LFU algorithms are also supported, as is the ability to plug in custom eviction policies.
The local cache supports automatic expiration of cached entries, meaning that each cache entry can be assigned a time to live in the cache.
The local cache is thread safe and highly concurrent, allowing many threads to simultaneously access and update entries in the local cache.
The local cache supports cache notifications. These notifications are provided for additions (entries that are put by the client, or automatically loaded into the cache), modifications (entries that are put by the client, or automatically reloaded), and deletions (entries that are removed by the client, or automatically expired, flushed, or evicted). These are the same cache events supported by the clustered caches.
The local cache maintains hit and miss statistics. These run-time statistics can be used to accurately project the effectiveness of the cache, and to adjust its size-limiting and auto-expiring settings accordingly while the cache is running.
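As an illustration of the cache notifications mentioned above, the following sketch registers a listener for additions, modifications, and deletions. The cache name is hypothetical; the same MapListener interface applies to both local and clustered caches:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.MapEvent;
import com.tangosol.util.MapListener;

public class NotificationExample
    {
    public static void main(String[] args)
        {
        NamedCache cache = CacheFactory.getCache("example");

        cache.addMapListener(new MapListener()
            {
            public void entryInserted(MapEvent evt)
                {
                System.out.println("inserted: " + evt.getKey());
                }

            public void entryUpdated(MapEvent evt)
                {
                System.out.println("updated: " + evt.getKey() + " -> " + evt.getNewValue());
                }

            public void entryDeleted(MapEvent evt)
                {
                System.out.println("deleted: " + evt.getKey());
                }
            });

        cache.put("k1", "v1");   // generates an insert event
        cache.put("k1", "v2");   // generates an update event
        cache.remove("k1");      // generates a delete event
        }
    }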
The local cache is important to the clustered cache services for several reasons, including as part of Coherence's near cache technology, and with the modular backing map architecture.
Remote Cache
A remote cache describes any out-of-process cache accessed by a Coherence*Extend client. All cache requests are sent to a Coherence proxy where they are delegated to a cache (Replicated, Optimistic, Partitioned). See Developing Remote Clients for Oracle Coherence for more information on using remote caches.
Table 12-1 Summary of Cache Types and Characteristics

The following variables are used in the table: JVMs = number of JVMs; DataSize = total size of cached data (measured without redundancy); Redundancy = number of copies of data maintained; LocalCache = size of local cache (for near caches).

The table compares the replicated, optimistic, and partitioned cache types, a near cache backed by a partitioned cache (local caches plus a partitioned cache), and a LocalCache (not clustered), across topology, read performance, write performance, fault tolerance, memory usage, coherency, and locking. Read performance is instant (5) for replicated, optimistic, and local caches; for partitioned and near caches, locally cached entries are instant (5) and remote entries are read at network speed (1). Fault tolerance is extremely high for replicated and optimistic caches, configurable (4) from zero to extremely high for partitioned and near caches, and zero for a local cache. Write performance is fast (2) for replicated and optimistic caches, extremely fast (3) for partitioned and near caches, and instant (5) for a local cache. Memory usage per JVM is DataSize for the replicated, optimistic, and local cache types.
Notes:
1. As a rough estimate, with 100Mb Ethernet, network reads typically require ~20ms for a 100KB object. With gigabit Ethernet, network reads for 1KB objects are typically sub-millisecond.
2. Requires UDP multicast or a few UDP unicast operations, depending on JVM count.
3. Requires a few UDP unicast operations, depending on level of redundancy.
4. Partitioned caches can be configured with as many levels of backup as desired, or zero if desired. Most installations use one backup copy (two copies total).
5. Limited by local CPU/memory performance, with negligible processing required (typically sub-millisecond performance).
6. Listener-based near caches are coherent; expiry-based near caches are partially coherent for non-transactional reads and coherent for transactional access.
13
Configuring Caches
This chapter provides detailed instructions on how to configure caches within a cache configuration deployment descriptor. Refer to Appendix B, "Cache Configuration Elements," for a complete reference of all the elements available in the descriptor. In addition, see Chapter 18, "Cache Configurations by Example," for various sample cache configurations. This chapter includes the following sections:
Overview
Defining Cache Mappings
Defining Cache Schemes
Using Scheme Inheritance
Using Cache Scheme Properties
Using Parameter Macros
Overview
Caches are configured in a cache configuration deployment descriptor. By default, Coherence attempts to load the first coherence-cache-config.xml deployment descriptor that is found in the classpath. Coherence includes a sample coherence-cache-config.xml file in the coherence.jar. To use a different coherence-cache-config.xml file, the file must be located on the classpath and must be loaded before the coherence.jar library; otherwise, the sample cache configuration deployment descriptor is used. See "Specifying a Cache Configuration File" on page 3-6 for alternate methods that are available for specifying a cache configuration deployment descriptor. The cache configuration descriptor allows caches to be defined independently from the application code. At run time, applications get an instance of a cache by referring to a cache using the name that is defined in the descriptor. This allows application code to be written independent of the cache definition. Based on this approach, cache definitions can be modified without making any changes to the application code. This approach also maximizes cache definition reuse. The schema definition of the cache configuration descriptor is the coherence-cache-config.xsd file, which imports the coherence-cache-config-base.xsd file, which, in turn, implicitly imports the coherence-config-base.xsd file. This file is located in the root of the coherence.jar file. A cache configuration deployment descriptor consists of two primary elements that are detailed in this chapter: the <caching-scheme-mapping> element and the
<caching-schemes> element. These elements are used to define cache schemes and to define cache names that map to the cache schemes.
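For example, one common way to use a different descriptor (covered among the alternate methods referenced above) is to name it explicitly with the tangosol.coherence.cacheconfig system property when starting a JVM; the path and file name below are hypothetical:

java -cp /opt/coherence/lib/coherence.jar \
     -Dtangosol.coherence.cacheconfig=/opt/app/config/example-cache-config.xml \
     com.tangosol.net.DefaultCacheServer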
<?xml version="1.0"?> <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd"> <caching-scheme-mapping> <cache-mapping> <cache-name>example</cache-name> <scheme-name>distributed</scheme-name> </cache-mapping> </caching-scheme-mapping> <caching-schemes> <distributed-scheme> <scheme-name>distributed</scheme-name> </distributed-scheme> </caching-schemes> </cache-config>
Example 13-2
<?xml version="1.0"?> <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd"> <caching-scheme-mapping> <cache-mapping> <cache-name>*</cache-name> <scheme-name>distributed</scheme-name> </cache-mapping> <cache-mapping> <cache-name>account-*</cache-name> <scheme-name>account-distributed</scheme-name> </cache-mapping> </caching-scheme-mapping> <caching-schemes> <distributed-scheme> <scheme-name>distributed</scheme-name> </distributed-scheme> <distributed-scheme> <scheme-name>account-distributed</scheme-name> </distributed-scheme> </caching-schemes> </cache-config>
For the first mapping, an application can use any name when creating a cache and the name is mapped to the cache scheme definition with the scheme name distributed. The second mapping requires an application to use a pattern when specifying a cache name. In this case, an application must use the prefix account- before the name. For example, an application that specifies account-overdue as the cache name uses the cache scheme definition with the scheme name account-distributed.
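A minimal sketch of how these two mappings resolve at run time follows; the cache name orders is hypothetical and is shown only to demonstrate the wildcard mapping:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class MappingExample
    {
    public static void main(String[] args)
        {
        // Matches the account-* mapping, so the account-distributed scheme is used
        NamedCache overdue = CacheFactory.getCache("account-overdue");

        // Matches only the * mapping, so the distributed scheme is used
        NamedCache orders = CacheFactory.getCache("orders");

        overdue.put("c42", "past due");
        orders.put("o1", "new order");
        }
    }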
xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd"> <caching-scheme-mapping> <cache-mapping> <cache-name>*</cache-name> <scheme-name>distributed</scheme-name> <init-params> <init-param> <param-name>back-size-limit</param-name> <param-value>8MB</param-value> </init-param> </init-params> </cache-mapping> <cache-mapping> <cache-name>account-*</cache-name> <scheme-name>distributed</scheme-name> </cache-mapping> </caching-scheme-mapping> ... </cache-config>
See "Using Cache Scheme Properties" on page 13-14 for more information on how cache scheme properties are configured for a cache scheme definition.
Defining Distributed Cache Schemes
Defining Replicated Cache Schemes
Defining Optimistic Cache Schemes
Defining Local Cache Schemes
Defining Near Cache Schemes
Distributed caches can be defined in a cache configuration file. See "distributed-scheme" on page B-32 for a detailed reference of the <distributed-scheme> element. Example 13-4 defines a basic distributed cache that uses distributed as the scheme name and is mapped to the cache name example. The <autostart> element is set to true to start the service on a cache server node.
Example 13-4 Sample Distributed Cache Definition
<?xml version="1.0"?> <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd"> <caching-scheme-mapping> <cache-mapping> <cache-name>example</cache-name> <scheme-name>distributed</scheme-name> </cache-mapping> </caching-scheme-mapping> <caching-schemes> <distributed-scheme> <scheme-name>distributed</scheme-name> <backing-map-scheme> <local-scheme/> </backing-map-scheme> <autostart>true</autostart> </distributed-scheme> </caching-schemes> </cache-config>
In the example, the distributed cache defines a local cache to be used as the backing map. See Chapter 14, "Implementing Storage and Backing Maps" for more information on configuring backing maps.
<?xml version="1.0"?> <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd"> <caching-scheme-mapping> <cache-mapping> <cache-name>example</cache-name>
<scheme-name>replicated</scheme-name> </cache-mapping> </caching-scheme-mapping> <caching-schemes> <replicated-scheme> <scheme-name>replicated</scheme-name> <backing-map-scheme> <local-scheme/> </backing-map-scheme> <autostart>true</autostart> </replicated-scheme> </caching-schemes> </cache-config>
In the example, the replicated cache defines a local cache to be used as the backing map. See Chapter 14, "Implementing Storage and Backing Maps" for more information on configuring backing maps.
<?xml version="1.0"?> <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd"> <caching-scheme-mapping> <cache-mapping> <cache-name>example</cache-name> <scheme-name>optimistic</scheme-name> </cache-mapping> </caching-scheme-mapping> <caching-schemes> <optimistic-scheme> <scheme-name>optimistic</scheme-name> <backing-map-scheme> <local-scheme/> </backing-map-scheme> <autostart>true</autostart> </optimistic-scheme> </caching-schemes> </cache-config>
In the example, the optimistic cache defines a local cache to be used as the backing map. See Chapter 14, "Implementing Storage and Backing Maps" for more information on configuring backing maps.
A local cache is not typically used as a standalone cache on a cache server; moreover, a cache server does not start if the only cache definition in the cache configuration file is a local cache.
Example 13-7 Sample Local Cache Definition
<?xml version="1.0"?> <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd"> <caching-scheme-mapping> <cache-mapping> <cache-name>example</cache-name> <scheme-name>local</scheme-name> </cache-mapping> </caching-scheme-mapping> <caching-schemes> <local-scheme> <scheme-name>local</scheme-name> <eviction-policy>LRU</eviction-policy> <high-units>32000</high-units> <low-units>10</low-units> <unit-calculator>FIXED</unit-calculator> <expiry-delay>10ms</expiry-delay> </local-scheme> </caching-schemes> </cache-config>
See "Defining a Local Cache for C++ Clients" and "Configuring a Local Cache for .NET Clients" in the Developing Remote Clients for Oracle Coherence when using Coherence*Extend.
Local caches use the <expiry-delay> cache configuration element to configure the amount of time that items may remain in the cache before they expire. Client threads initiate the eviction of expired entries while accessing the cache. Therefore, the <expiry-delay> may be reached, but the eviction is not performed until a client thread accesses the cache. For example, if the <expiry-delay> value is set to 10 seconds (10s) and a client accesses the cache after 15 seconds, then expiry occurs after 15 seconds.
Note:
The client thread performs the evictions, not a background thread. In addition, the expiry delay parameter (cExpiryMillis) is defined as an integer and is expressed in milliseconds. Therefore, the maximum amount of time can never exceed Integer.MAX_VALUE (2147483647) milliseconds or approximately 24 days.
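The following minimal sketch shows this behavior, assuming a cache named example that is mapped to a local scheme with <expiry-delay>10s</expiry-delay> (the sample in Example 13-7 uses 10ms; 10 seconds is assumed here purely for illustration):

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class ExpiryExample
    {
    public static void main(String[] args)
            throws InterruptedException
        {
        NamedCache cache = CacheFactory.getCache("example");

        cache.put("k1", "v1");
        Thread.sleep(15000);   // wait longer than the 10 second expiry delay

        // The entry has logically expired; this access is what actually
        // performs the eviction, on the calling client thread.
        System.out.println(cache.get("k1"));   // prints null
        }
    }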
package com.tangosol.examples.eviction;

import com.tangosol.net.BackingMapManagerContext;
import com.tangosol.net.cache.AbstractEvictionPolicy;
import com.tangosol.net.cache.ConfigurableCacheMap;
import com.tangosol.net.cache.LocalCache;
import com.tangosol.util.ConverterCollections;

import java.util.Iterator;
import java.util.Map;

/**
 * Custom eviction policy that evicts items randomly (or more specifically,
 * based on the natural order provided by the map's iterator.)
 * This example may be used in cases where fast eviction is required
 * with as little processing as possible.
 */
public class SimpleEvictionPolicy extends AbstractEvictionPolicy
    {
    /**
     * Default constructor; typically used with local caches or the front
     * parts of near caches.
     */
    public SimpleEvictionPolicy()
        {
        }

    /**
     * Constructor that accepts {@link BackingMapManagerContext}; should
     * be used with partitioned cache backing maps.
     *
     * @param ctx  backing map context
     */
    public SimpleEvictionPolicy(BackingMapManagerContext ctx)
        {
        m_ctx = ctx;
        }

    /**
     * {@inheritDoc}
     */
    public void entryUpdated(ConfigurableCacheMap.Entry entry)
        {
        }

    /**
     * {@inheritDoc}
     */
    public void entryTouched(ConfigurableCacheMap.Entry entry)
        {
        }

    /**
     * {@inheritDoc}
     */
    public void requestEviction(int cMaximum)
        {
        ConfigurableCacheMap cache = getCache();
        Iterator iter = cache.entrySet().iterator();

        for (int i = 0, c = cache.getUnits() - cMaximum; i < c && iter.hasNext(); i++)
            {
            ConfigurableCacheMap.Entry entry = (ConfigurableCacheMap.Entry) iter.next();
            StringBuffer buffer = new StringBuffer();

            // If the contents of the entry (for example the key/value) need
            // to be examined, invoke convertEntry(entry) in case
            // the entry must be deserialized
            Map.Entry convertedEntry = convertEntry(entry);
            buffer.append("Entry: ").append(convertedEntry);

            // Here's how to get metadata about creation/last touched
            // timestamps for entries. This information might be used
            // in determining what gets evicted.
            if (entry instanceof LocalCache.Entry)
                {
                buffer.append(", create millis=");
                buffer.append(((LocalCache.Entry) entry).getCreatedMillis());
                }
            buffer.append(", last touch millis=");
            buffer.append(entry.getLastTouchMillis());

            // This output is for illustrative purposes; this may generate
            // excessive output in a production system
            System.out.println(buffer);

            // iterate and remove items from the cache until below the maximum.
            // Note that the non-converted entry key is passed to the evict method
            cache.evict(entry.getKey());
            }
        }

    /**
     * If a {@link BackingMapManagerContext} is configured, wrap the
     * Entry with {@link ConverterCollections.ConverterEntry} in order
     * to deserialize the entry.
     *
     * @see ConverterCollections.ConverterEntry
     * @see BackingMapManagerContext
     *
     * @param entry  entry to convert if necessary
     *
     * @return an entry that deserializes its key and value if necessary
     */
    protected Map.Entry convertEntry(Map.Entry entry)
        {
        BackingMapManagerContext ctx = m_ctx;
        return ctx == null ? entry
                : new ConverterCollections.ConverterEntry(entry,
                    ctx.getKeyFromInternalConverter(),
                    ctx.getValueFromInternalConverter(),
                    ctx.getValueToInternalConverter());
        }

    private BackingMapManagerContext m_ctx;
    }
Example 13-9 illustrates a Coherence cache configuration file (coherence-cache-config.xml) with an eviction policy:
Example 13-9 Custom Eviction Policy in a coherence-cache-config.xml File
<?xml version="1.0"?> <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd"> <caching-scheme-mapping> <cache-mapping> <cache-name>*</cache-name> <scheme-name>example-near</scheme-name> </cache-mapping> </caching-scheme-mapping> <caching-schemes> <near-scheme> <scheme-name>example-near</scheme-name> <front-scheme> <local-scheme> <eviction-policy> <class-scheme> <class-name> com.tangosol.examples.eviction.SimpleEvictionPolicy
</class-name> </class-scheme> </eviction-policy> <high-units>1000</high-units> </local-scheme> </front-scheme> <back-scheme> <distributed-scheme> <scheme-ref>example-distributed</scheme-ref> </distributed-scheme> </back-scheme> <invalidation-strategy>all</invalidation-strategy> <autostart>true</autostart> </near-scheme> <distributed-scheme> <scheme-name>example-distributed</scheme-name> <service-name>DistributedCache</service-name> <backing-map-scheme> <local-scheme> <eviction-policy> <class-scheme> <class-name> com.tangosol.examples.eviction.SimpleEvictionPolicy </class-name> <init-params> <!-Passing the BackingMapManagerContext to the eviction policy; this is required for deserializing entries --> <init-param> <param-type> com.tangosol.net.BackingMapManagerContext</param-type> <param-value>{manager-context}</param-value> </init-param> </init-params> </class-scheme> </eviction-policy> <high-units>20m</high-units> <unit-calculator>binary</unit-calculator> </local-scheme> </backing-map-scheme> <autostart>true</autostart> </distributed-scheme> </caching-schemes> </cache-config>
Example 13-10 defines a near cache that uses near as the scheme name and is mapped to the cache name example. The front tier is a local cache and the back tier is a distributed cache.
Note:
Near caches are used for cache clients and are not typically used on a cache server; moreover, a cache server does not start if the only cache definition in the cache configuration file is a near cache.
Example 13-10 Sample Near Cache Definition

<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>example</cache-name>
         <scheme-name>near</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>

   <caching-schemes>
      <near-scheme>
         <scheme-name>near</scheme-name>
         <front-scheme>
            <local-scheme/>
         </front-scheme>
         <back-scheme>
            <distributed-scheme>
               <scheme-name>near-distributed</scheme-name>
               <backing-map-scheme>
                  <local-scheme/>
               </backing-map-scheme>
               <autostart>true</autostart>
            </distributed-scheme>
         </back-scheme>
      </near-scheme>
   </caching-schemes>
</cache-config>
See "Defining a Near Cache for C++ Clients" and "Defining a Near Cache for .NET Clients" in the Developing Remote Clients for Oracle Coherence when using Coherence*Extend.
Table 13-1 describes the invalidation strategies. You can find more information on the invalidation strategies and the read-through/write-through approach in Chapter 15, "Caching Data Sources."
Table 13-1 Near Cache Invalidation Strategies

auto - The default strategy if no strategy is specified. This strategy is identical to the present strategy.

present - This strategy instructs a near cache to listen to the back cache events related only to the items currently present in the front cache. This strategy works best when each instance of a front cache contains a distinct subset of data relative to the other front cache instances (for example, sticky data access patterns).

all - This strategy instructs a near cache to listen to all back cache events. This strategy is optimal for read-heavy tiered access patterns where there is significant overlap between the different instances of front caches.

logical - This strategy instructs a near cache to listen to all backing map events that are not synthetic deletes. A synthetic delete event could be emitted as a result of eviction or expiration. With this invalidation strategy, it is possible for the front map to contain cache entries that have been synthetically removed from the backing map. Any subsequent re-insertion of the entries to the backing map causes the corresponding entries in the front map to be invalidated.

none - This strategy instructs the cache not to listen for invalidation events at all. This is the best choice for raw performance and scalability when business requirements permit the use of data which might not be absolutely current. Freshness of data can be guaranteed by use of a sufficiently brief eviction policy for the front cache.
<distributed-scheme> <scheme-name>DistributedInMemoryCache</scheme-name> <service-name>DistributedCache</service-name> <backing-map-scheme> <local-scheme> <scheme-ref>LocalSizeLimited</scheme-ref> </local-scheme> </backing-map-scheme> </distributed-scheme> <local-scheme> <scheme-name>LocalSizeLimited</scheme-name> <eviction-policy>LRU</eviction-policy> <high-units>1000</high-units> <expiry-delay>1h</expiry-delay> </local-scheme>
In Example 13-11, the first distributed scheme definition is more compact; however, the second definition offers the ability to easily reuse the LocalSizeLimited scheme within multiple schemes. Example 13-12 demonstrates multiple schemes reusing the same LocalSizeLimited base definition and overriding the expiry-delay property.
Example 13-12 Multiple Cache Schemes Using Scheme Inheritance

<distributed-scheme>
   <scheme-name>DistributedInMemoryCache</scheme-name>
   <service-name>DistributedCache</service-name>
   <backing-map-scheme>
      <local-scheme>
         <scheme-ref>LocalSizeLimited</scheme-ref>
      </local-scheme>
   </backing-map-scheme>
</distributed-scheme>

<replicated-scheme>
   <scheme-name>ReplicatedInMemoryCache</scheme-name>
   <service-name>ReplicatedCache</service-name>
   <backing-map-scheme>
      <local-scheme>
         <scheme-ref>LocalSizeLimited</scheme-ref>
         <expiry-delay>10m</expiry-delay>
      </local-scheme>
   </backing-map-scheme>
</replicated-scheme>

<local-scheme>
   <scheme-name>LocalSizeLimited</scheme-name>
   <eviction-policy>LRU</eviction-policy>
   <high-units>1000</high-units>
   <expiry-delay>1h</expiry-delay>
</local-scheme>
respective elements. See Appendix B, "Cache Configuration Elements," for a reference of all the properties that are supported for each cache scheme type. Many cache properties use default values unless a different value is explicitly given within the cache scheme definition. The clustered caches (distributed, replicated and optimistic) use the default values as specified by their respective cache service definition. Cache services are defined in the operational deployment descriptor. While it is possible to change property values using an operational override file, cache properties are most often set within the cache scheme definition. Example 13-13 creates a basic distributed cache scheme that sets the service thread count property and the request timeout property. In addition, the local scheme that is used for the backing map sets properties to limit the size of the local cache. Instructions for using cache scheme properties are found throughout this guide and are discussed as part of the features that they support.
Example 13-13 Setting Cache Properties

<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>example</cache-name>
         <scheme-name>DistributedInMemoryCache</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>

   <caching-schemes>
      <distributed-scheme>
         <scheme-name>DistributedInMemoryCache</scheme-name>
         <service-name>DistributedCache</service-name>
         <thread-count>4</thread-count>
         <request-timeout>60s</request-timeout>
         <backing-map-scheme>
            <local-scheme>
               <scheme-ref>LocalSizeLimited</scheme-ref>
            </local-scheme>
         </backing-map-scheme>
      </distributed-scheme>

      <local-scheme>
         <scheme-name>LocalSizeLimited</scheme-name>
         <eviction-policy>LRU</eviction-policy>
         <high-units>1000</high-units>
         <expiry-delay>1h</expiry-delay>
      </local-scheme>
   </caching-schemes>
</cache-config>
configuring class constructor parameters. The macros should be entered enclosed in curly braces as shown below, without any quotes or spaces. Table 13-2 describes the parameter macros that may be specified:
Table 13-2 Parameter Macros for Cache Configuration

{cache-name} (<param-type> java.lang.String) - Used to pass the current cache name as a constructor parameter. For example:

<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
   <init-param>
      <param-type>java.lang.String</param-type>
      <param-value>{cache-name}</param-value>
   </init-param>
</init-params>

{class-loader} (<param-type> java.lang.ClassLoader) - Used to pass the current classloader as a constructor parameter. For example:

<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
   <init-param>
      <param-type>java.lang.ClassLoader</param-type>
      <param-value>{class-loader}</param-value>
   </init-param>
</init-params>

{manager-context} (<param-type> com.tangosol.net.BackingMapManagerContext) - Used to pass the current BackingMapManagerContext object as a constructor parameter. For example:

<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
   <init-param>
      <param-type>
         com.tangosol.net.BackingMapManagerContext
      </param-type>
      <param-value>{manager-context}</param-value>
   </init-param>
</init-params>

{scheme-ref} (<param-value> is a scheme name; for example, the name of a local-scheme) - Instantiates an object defined by the <class-scheme>, <local-scheme> or <file-scheme> with the specified <scheme-name> value and uses it as a constructor parameter. For example:

<class-scheme>
   <scheme-name>dbconnection</scheme-name>
   <class-name>com.mycompany.dbConnection</class-name>
   <init-params>
      <init-param>
         <param-name>driver</param-name>
         <param-type>String</param-type>
         <param-value>org.gjt.mm.mysql.Driver</param-value>
      </init-param>
      <init-param>
         <param-name>url</param-name>
         <param-type>String</param-type>
         <param-value>jdbc:mysql://dbserver:3306/companydb</param-value>
      </init-param>
      <init-param>
         <param-name>user</param-name>
         <param-type>String</param-type>
         <param-value>default</param-value>
      </init-param>
      <init-param>
         <param-name>password</param-name>
         <param-type>String</param-type>
         <param-value>default</param-value>
      </init-param>
   </init-params>
</class-scheme>
...
<class-name>com.mycompany.cache.CustomCacheLoader</class-name>
<init-params>
   <init-param>
      <param-type>{scheme-ref}</param-type>
      <param-value>dbconnection</param-value>
   </init-param>
</init-params>
{cache-ref} (<param-value> is a cache name) - Used to obtain a NamedCache reference for the specified cache name. Consider the following configuration example:

<cache-config>
   <caching-scheme-mapping>
      <cache-mapping>
         <cache-name>boston-*</cache-name>
         <scheme-name>wrapper</scheme-name>
         <init-params>
            <init-param>
               <param-name>delegate-cache-name</param-name>
               <param-value>london-*</param-value>
            </init-param>
         </init-params>
      </cache-mapping>
      <cache-mapping>
         <cache-name>london-*</cache-name>
         <scheme-name>partitioned</scheme-name>
      </cache-mapping>
   </caching-scheme-mapping>

   <caching-schemes>
      <class-scheme>
         <scheme-name>wrapper</scheme-name>
         <class-name>
            com.tangosol.net.cache.WrapperNamedCache
         </class-name>
         <init-params>
            <init-param>
               <param-type>{cache-ref}</param-type>
               <param-value>{delegate-cache-name}</param-value>
            </init-param>
            <init-param>
               <param-type>string</param-type>
               <param-value>{cache-name}</param-value>
            </init-param>
         </init-params>
      </class-scheme>

      <distributed-scheme>
         <scheme-name>partitioned</scheme-name>
         <service-name>partitioned</service-name>
         <backing-map-scheme>
            <local-scheme>
               <unit-calculator>BINARY</unit-calculator>
            </local-scheme>
         </backing-map-scheme>
         <autostart>true</autostart>
      </distributed-scheme>
   </caching-schemes>
</cache-config>

The CacheFactory.getCache("london-test") call would result in a standard partitioned cache reference. Conversely, the CacheFactory.getCache("boston-test") call would resolve the value of the delegate-cache-name parameter to london-test and would construct an instance of the WrapperNamedCache delegating to the NamedCache returned by the CacheFactory.getCache("london-test") call.
14
Implementing Storage and Backing Maps
Cache Layers
Local Storage
Operations
Capacity Planning
Using Partitioned Backing Maps
Using the Elastic Data Feature to Store Data
Using Asynchronous Backup
Using Delta Backup
Cache Layers
The partitioned (distributed) cache service in Coherence has three distinct layers:
Client View: The client view represents a virtual layer that provides access to the underlying partitioned data. Access to this tier is provided using the NamedCache interface. In this layer you can also create synthetic data structures such as NearCache or ContinuousQueryCache.
Storage Manager: The storage manager is the server-side tier that is responsible for processing cache-related requests from the client tier. It manages the data structures that hold the actual cache data (primary and backup copies) and information about locks, event listeners, map triggers, and so on.
Backing Map: The backing map is the server-side data structure that holds actual data.
Coherence allows users to configure out-of-the-box and custom backing map implementations. The only constraint for a Map implementation is the understanding that the Storage Manager provides all keys and values in internal (Binary) format. To deal with conversions of that internal data to and from an Object format, the Storage Manager can supply Backing Map implementations with a BackingMapManagerContext reference. Figure 14-1 shows a conceptual view of backing maps.
Local Storage
Local storage refers to the data structures that actually store or cache the data that is managed by Coherence. For an object to provide local storage, it must support the same standard collections interface, java.util.Map. When a local storage implementation is used by Coherence to store replicated or distributed data, it is called a backing map because Coherence is actually backed by that local storage implementation. The other common uses of local storage are in front of a distributed cache and as a backup behind the distributed cache. Coherence supports the following local storage implementations:
Safe HashMap: This is the default lossless implementation. A lossless implementation is one, like the Java Hashtable class, that is neither size-limited nor auto-expiring. In other words, it is an implementation that never evicts ("loses") cache items on its own. This particular HashMap implementation is optimized for extremely high thread-level concurrency. For the default implementation, use class com.tangosol.util.SafeHashMap; when an implementation is required that provides cache events, use com.tangosol.util.ObservableHashMap. These implementations are thread-safe.
Local Cache: This is the default size-limiting and auto-expiring implementation. See "Capacity Planning" on page 14-4, for details on configuration. A local cache limits the size of the cache and automatically expires cache items after a certain period. For the default implementation, use com.tangosol.net.cache.LocalCache; this implementation is thread safe and supports cache events, com.tangosol.net.CacheLoader, CacheStore and configurable/pluggable eviction policies.
Read/Write Backing Map: This is the default backing map implementation for caches that load from a database on a cache miss. It can be configured as a read-only cache (consumer model) or as either a write-through or a write-behind cache (for the consumer/producer model). The write-through and write-behind modes are intended only for use with the distributed cache service. If used with a near cache and the near cache must be kept synchronous with the distributed cache, it is possible to combine the use of this backing map with a Seppuku-based near cache (for near cache invalidation purposes). For the default implementation, use class com.tangosol.net.cache.ReadWriteBackingMap.
Binary Map (Java NIO): This is a backing map implementation that can store its information in memory but outside of the Java heap, or even in memory-mapped files, which means that it does not affect the Java heap size and the related JVM garbage-collection performance that can be responsible for application pauses. This implementation is also available for distributed cache backups, which is particularly useful for read-mostly and read-only caches that require backup for high availability purposes, because it means that the backup does not affect the Java heap size yet it is immediately available in case of failover.
Serialization Map: This is a backing map implementation that translates its data to a form that can be stored on disk, referred to as a serialized form. It requires a separate com.tangosol.io.BinaryStore object into which it stores the serialized form of the data; usually, this is the built-in LH disk store implementation, but the Serialization Map supports any custom implementation of BinaryStore. For the default implementation of Serialization Map, use com.tangosol.net.cache.SerializationMap.
Serialization Cache: This is an extension of the SerializationMap that supports an LRU eviction policy. For example, a serialization cache can limit the size of disk files. For the default implementation of Serialization Cache, use com.tangosol.net.cache.SerializationCache.
Overflow Map: An overflow map does not actually provide storage, but it deserves mention in this section because it can combine two local storage implementations so that when the first one fills up, it overflows into the second. For the default implementation of OverflowMap, use com.tangosol.net.cache.OverflowMap.
Operations
There are a number of operation types performed against the backing map:
Natural access and update operations caused by the application usage. For example, a NamedCache.get() call naturally causes a Map.get() call on a corresponding backing map; the NamedCache.invoke() call may cause a sequence of Map.get() followed by the Map.put(); the NamedCache.keySet(filter) call may cause a Map.entrySet().iterator() loop, and so on.
Remove operations caused by the time-based expiry or the size-based eviction. For example, a NamedCache.get() or NamedCache.size() call from the client tier could cause a Map.remove() call due to an entry expiry timeout; or a NamedCache.put() call could cause some Map.remove() calls (for different keys) when the total amount of data in a backing map reaches the configured high water-mark value.
Insert operations caused by a CacheStore.load() operation (for backing maps configured with read-through or read-ahead features).
Synthetic access and updates caused by the partition distribution (which in turn could be caused by cluster node failover or failback). In this case, without any
application tier call, some entries could be inserted or removed from the backing map.
Capacity Planning
Depending on the actual implementation, the Backing Map stores the cache data in the following ways:
on-heap memory
off-heap memory
disk (memory-mapped files or in-process DB)
solid state device (journal files)
combination of any of the above
Keeping data in memory naturally provides dramatically smaller access and update latencies and is most commonly used. More often than not, applications must ensure that the total amount of data placed into the data grid does not exceed some predetermined amount of memory. It could be done either directly by the application tier logic or automatically using size- or expiry-based eviction. The total amount of data held in a Coherence cache equals the sum of the data volume in all corresponding backing maps (one for each cluster node that runs the corresponding partitioned cache service in storage-enabled mode). Consider the following cache configuration excerpts:
<backing-map-scheme>
   <local-scheme/>
</backing-map-scheme>
The backing map above is an instance of com.tangosol.net.cache.LocalCache that does not have any pre-determined size constraints and has to be controlled explicitly. Failure to do so could cause the JVM to run out of memory.
<backing-map-scheme>
   <local-scheme>
      <eviction-policy>LRU</eviction-policy>
      <high-units>100m</high-units>
      <unit-calculator>BINARY</unit-calculator>
   </local-scheme>
</backing-map-scheme>
The backing map above is also an instance of com.tangosol.net.cache.LocalCache and has a capacity limit of 100MB. As the total amount of data held by this backing map exceeds that high watermark, some entries are removed from the backing map, bringing the volume down to the low watermark value (the <low-units> configuration element, which defaults to 80% of the <high-units>). The choice of the removed entries is based on the LRU (Least Recently Used) eviction policy. Other options are LFU (Least Frequently Used) and Hybrid (a combination of the LRU and LFU). The value of <high-units> is limited to 2GB. To overcome that limitation (but maintain backward compatibility) Coherence uses the <unit-factor> element. For example, a <high-units> value of 8192 with a <unit-factor> of 1048576 results in a high watermark value of 8GB.
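As a hypothetical worked example of this arithmetic, ten storage-enabled cluster nodes that each use the 100MB <high-units> limit shown above can together hold roughly 10 x 100MB = 1GB of primary cache data; backup copies, which are configured separately on the distributed scheme, consume additional memory.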
<backing-map-scheme>
   <local-scheme>
      <expiry-delay>1h</expiry-delay>
   </local-scheme>
</backing-map-scheme>
The backing map above automatically evicts any entries that have not been updated for more than an hour. Note that such an eviction is "lazy" and can happen any time after an hour since the last update; the only guarantee Coherence provides is that entries older than one hour are not returned to a caller. The following backing map is an instance of com.tangosol.net.cache.SerializationCache, which stores values in the extended (nio) memory and has a capacity limit of 100MB (100*1048576).
<backing-map-scheme>
   <external-scheme>
      <nio-memory-manager>
         <initial-size>1MB</initial-size>
         <maximum-size>100MB</maximum-size>
      </nio-memory-manager>
      <high-units>100</high-units>
      <unit-calculator>BINARY</unit-calculator>
      <unit-factor>1048576</unit-factor>
   </external-scheme>
</backing-map-scheme>
Configure the backup storage for this cache to be off-heap (or file-mapped):
<backup-storage>
   <type>off-heap</type>
   <initial-size>1MB</initial-size>
   <maximum-size>100MB</maximum-size>
</backup-storage>
A partitioned backing map is basically a multiplexer of actual Map implementations, each of which would contain only entries that belong to the same partition. Figure 14-3 shows a conceptual view of the partitioned backing map implementation.
Figure 14-3 Partitioned Backing Map Implementation
To configure a partitioned backing map, add a <partitioned> element with a value of true. For example:
<backing-map-scheme>
   <partitioned>true</partitioned>
   <external-scheme>
      <nio-memory-manager>
         <initial-size>1MB</initial-size>
         <maximum-size>50MB</maximum-size>
      </nio-memory-manager>
      <high-units>8192</high-units>
      <unit-calculator>BINARY</unit-calculator>
      <unit-factor>1048576</unit-factor>
   </external-scheme>
</backing-map-scheme>
This backing map is an instance of com.tangosol.net.partition.PartitionSplittingBackingMap, with the individual partition-holding maps being instances of com.tangosol.net.cache.SerializationCache that each store values in the extended (nio) memory. The individual nio buffers have a limit of 50MB, while the backing map as a whole has a capacity limit of 8GB (8192*1048576).
Elastic data contains two distinct components: the RAM journal for storing data in-memory and the flash journal for storing data to disk-based devices. These can be used in different combinations and are typically used for backing maps and backup storage but can also be used with composite caches (for example, a near cache). The RAM journal always works with the flash journal to enable seamless overflow to disk. Caches that use RAM and flash journals are configured as part of a cache scheme definition within a cache configuration file. Journaling behavior is configured, as required, by using an operational override file to override the out-of-box configuration. This section includes the following topics:
Journaling Overview
Journaling refers to the technique of recording state changes in a sequence of modifications called a journal. As changes occur, the journal records each value for a specific key, and a tree structure that is stored in memory keeps track of which journal entry contains the current value for a particular key. To find the value for an entry, you find the key in the tree, which includes a pointer to the journal entry that contains the latest value. As changes in the journal become obsolete due to new values being written for a key, stale values accumulate in the journal. At regular intervals, the stale values are evacuated, making room for new values to be written in the journal. The Elastic Data feature includes a RAM journal implementation and a flash journal implementation that work seamlessly with each other. If, for example, the RAM journal runs out of memory, the flash journal automatically accepts the overflow from the RAM journal, allowing caches to expand far beyond the size of RAM.
Note:
When journaling is enabled, additional capacity planning is required if you are performing data grid operations (such as queries and aggregations) on large result sets. See Administering Oracle Coherence for details.
A resource manager controls journaling. The resource manager creates and utilizes a binary store to perform operations on the journal. The binary store is implemented by the JournalBinaryStore class. All reads and writes through the binary store are handled by the resource manager. There is a resource manager for RAM journals (RamJournalRM) and one for flash journals (FlashJournalRM). Lastly, journaling uses the SimpleSerializationMap class as the backing map implementation. Custom implementations of SimpleSerializationMap can be created as required. See Java API Reference for Oracle Coherence for specific details on these APIs.
Configuring a RAM Journal Backing Map
Configuring a Flash Journal Backing Map
Referencing a Journal Scheme
Using a Journal Scheme for Backup Storage
Enabling a Custom Map Implementation for a Journal Scheme
Configuring the RAM Journal Resource Manager
Configuring the Flash Journal Resource Manager
Binary values are limited by default to 64KB (and a maximum of 4MB). A flash journal is automatically used if a binary value exceeds the configured limit.
An individual buffer (a journal file) is limited by default to 2MB (and a maximum of 2GB). The maximum file size should not be changed.
A journal is composed of up to 512 files.
The total memory used by the journal is limited to 1GB by default (and a maximum of 64GB). A flash journal is automatically used if the total memory of the journal exceeds the configured limit.
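As a cross-check of these defaults, 512 files x 2MB per file = 1GB, which matches the default total memory limit for the RAM journal.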
To configure a RAM journal resource manager, add a <ramjournal-manager> element within a <journaling-config> element and define any subelements that are to be overridden. The following example demonstrates overriding RAM journal subelements:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <journaling-config> <ramjournal-manager> <maximum-value-size>64K</maximum-value-size> <maximum-size>2G</maximum-size> </ramjournal-manager> </journaling-config> </cluster-config> </coherence>
Binary values are limited by default to 64MB.
An individual buffer (a journal file) is limited by default to 2GB (and a maximum of 4GB).
A journal is composed of up to 512 journal files and is thus limited by default to 1TB, with a theoretical maximum of 2TB.
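As a cross-check of these defaults, 512 files x 2GB per file = 1TB by default, and 512 files x 4GB per file = 2TB at the theoretical maximum.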
A journal has a high journal size of 11GB by default. The high size determines when to start removing stale values from the journal. This is not a hard limit on the journal size, which can still grow to the maximum file count (512). A best practice is to only set the high journal size and allow journaling to automatically tune itself.
Keys remain in memory in a compressed format. For values, only the unwritten data (being queued or asynchronously written) remains in memory. When sizing the heap, a reasonable estimate is to allow 50 bytes for each entry to hold key data (this is true for both RAM and Flash journals) and include additional space for the buffers (16MB).
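As a hypothetical sizing example based on the estimate above, a flash journal holding 10 million entries would require roughly 10,000,000 x 50 bytes = 500MB of heap for key data, plus approximately 16MB for the buffers.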
To configure a flash journal resource manager, add a <flashjournal-manager> element within a <journaling-config> element and define any subelements that are to be overridden. The following example demonstrates overriding flash journal subelements:
<?xml version='1.0'?> <coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-operational-config coherence-operational-config.xsd"> <cluster-config> <journaling-config> <flashjournal-manager> <maximum-value-size>64K</maximum-value-size> <maximum-file-size>8M</maximum-file-size> <block-size>512K</block-size> <maximum-pool-size>32M</maximum-pool-size> <directory>/coherence_storage</directory> <async-limit>32M</async-limit> <high-journal-size>11GB</high-journal-size> </flashjournal-manager> </journaling-config> </cluster-config> </coherence>
Note:
The directory specified for storing journal files must exist. If the directory does not exist, a warning is logged and the default temporary file directory, as designated by the JVM, is used.
To enable asynchronous backup for a distributed cache, add an <async-backup> element, within a <distributed-scheme> element, that is set to true. For example:
<distributed-scheme>
   ...
   <async-backup>true</async-backup>
   ...
</distributed-scheme>
To enable asynchronous backup for all instances of the distributed cache service type, override the partitioned cache service's async-backup initialization parameter in an operational override file. For example:
<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <services>
         <service id="3">
            <init-params>
               <init-param id="27">
                  <param-name>async-backup</param-name>
                  <param-value system-property="tangosol.coherence.distributed.asyncbackup">
                     true
                  </param-value>
               </init-param>
            </init-params>
         </service>
      </services>
   </cluster-config>
</coherence>
The tangosol.coherence.distributed.asyncbackup system property is used to enable asynchronous backup for all instances of the distributed cache service type instead of using the operational override file. For example:
-Dtangosol.coherence.distributed.asyncbackup=true
To enable delta backup for all instances of the distributed cache service type, override the partitioned cache service's compressor initialization parameter in an operational override file. For example:
<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/
   coherence-operational-config coherence-operational-config.xsd">
   <cluster-config>
      <services>
         <service id="3">
            <init-params>
               <init-param id="22">
                  <param-name>compressor</param-name>
                  <param-value system-property="tangosol.coherence.distributed.compressor">
                     standard
                  </param-value>
               </init-param>
            </init-params>
         </service>
      </services>
   </cluster-config>
</coherence>
The tangosol.coherence.distributed.compressor system property is used to enable delta backup for all instances of the distributed cache service type instead of using the operational override file. For example:
-Dtangosol.coherence.distributed.compressor=standard
As an alternative, the <instance> element supports the use of a <class-factory-name> element to use a factory class that is responsible for creating DeltaCompressor instances, and a <method-name> element to specify the static factory method on the factory class that performs object instantiation. The following example gets a custom compressor instance using the getCompressor method on the MyCompressorFactory class.
<distributed-scheme>
   ...
   <compressor>
      <instance>
         <class-factory-name>package.MyCompressorFactory</class-factory-name>
         <method-name>getCompressor</method-name>
      </instance>
   </compressor>
   ...
</distributed-scheme>
Any initialization parameters that are required for an implementation can be specified using the <init-params> element. The following example sets the iMaxTime parameter to 2000.
<distributed-scheme>
   ...
   <compressor>
      <instance>
         <class-name>package.MyDeltaCompressor</class-name>
         <init-params>
            <init-param>
               <param-name>iMaxTime</param-name>
               <param-value>2000</param-value>
            </init-param>
         </init-params>
      </instance>
   </compressor>
   ...
</distributed-scheme>
15

Caching Data Sources
This chapter provides instructions for caching data sources and using Coherence as a temporary system-of-record. The chapter includes samples and implementation considerations. This chapter includes the following sections:
Overview of Caching Data Sources
Selecting a Cache Strategy
Creating a Cache Store Implementation
Plugging in a Cache Store Implementation
Sample Cache Store Implementation
Sample Controllable Cache Store Implementation
Implementation Considerations
Read-through/write-through caching (and variants) is intended for use only with the Partitioned (Distributed) cache topology (and by extension, Near cache). Local caches support a subset of this functionality. Replicated and Optimistic caches should not be used.
Pluggable Cache Store
Read-Through Caching
Write-Through Caching
Write-Behind Caching
Refresh-Ahead Caching
Read-Through Caching
When an application asks the cache for an entry, for example the key X, and X is not in the cache, Coherence automatically delegates to the CacheStore and asks it to load X from the underlying data source. If X exists in the data source, the CacheStore loads it and returns it to Coherence; Coherence then places it in the cache for future use and finally returns X to the application code that requested it. This is called Read-Through caching. Refresh-Ahead Cache functionality may further improve read performance (by reducing perceived latency). See "Refresh-Ahead Caching" on page 15-5 for more information.
Figure 15-1 Read-Through Caching
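From the application's point of view, read-through is transparent: an ordinary get() call is all that is required. The following minimal sketch assumes a cache named example-cache whose scheme is configured with a cache store; the cache and key names are illustrative only.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class ReadThroughExample
    {
    public static void main(String[] args)
        {
        NamedCache cache = CacheFactory.getCache("example-cache");

        // If "X" is not in the cache, the configured CacheStore (or CacheLoader)
        // is asked to load it from the data source before the value is returned.
        Object value = cache.get("X");
        System.out.println("Loaded value: " + value);

        CacheFactory.shutdown();
        }
    }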
Write-Through Caching
Coherence can handle updates to the data source in two distinct ways, the first being Write-Through. In this case, when the application updates a piece of data in the cache (that is, calls put(...) to change a cache entry), the operation does not complete (that is, the put does not return) until Coherence has gone through the cache store and successfully stored the data to the underlying data source. This does not improve write performance at all, since you are still dealing with the latency of the write to the data source. Improving the write performance is the purpose for the Write-Behind Cache functionality. See "Write-Behind Caching" on page 15-3 for more information.
Figure 15-2 Write-Through Caching
Write-Behind Caching
In the Write-Behind scenario, modified cache entries are asynchronously written to the data source after a configured delay, whether after 10 seconds, 20 minutes, a day, a week or even longer. Note that this only applies to cache inserts and updates - cache entries are removed synchronously from the data source. For Write-Behind caching, Coherence maintains a write-behind queue of the data that must be updated in the data source. When the application updates X in the cache, X is added to the write-behind queue (if it is not there; otherwise, it is replaced), and after the specified write-behind delay Coherence calls the CacheStore to update the underlying data source with the latest state of X. Note that the write-behind delay is relative to the first of a series of modifications; in other words, the data in the data source never lags behind the cache by more than the write-behind delay. The result is a "read-once and write at a configured interval" (that is, much less often) scenario. There are four main benefits to this type of architecture:
The application improves in performance, because the user does not have to wait for data to be written to the underlying data source. (The data is written later, and by a different execution thread.)

The application experiences drastically reduced database load: Since the amount of both read and write operations is reduced, so is the database load. The reads are reduced by caching, as with any other caching approach. The writes, which are typically much more expensive operations, are often reduced because multiple changes to the same object within the write-behind interval are "coalesced" and only written once to the underlying data source ("write-coalescing"). Additionally, writes to multiple cache entries may be combined into a single database transaction ("write-combining") by using the CacheStore.storeAll() method, as shown in the sketch that follows this list.

The application is somewhat insulated from database failures: the Write-Behind feature can be configured in such a way that a write failure results in the object being re-queued for write. If the data that the application is using is in the Coherence cache, the application can continue operation without the database being up. This is easily attainable when using the Coherence Partitioned Cache, which partitions the entire cache across all participating cluster nodes (with local-storage enabled), thus allowing for enormous caches.

Linear Scalability: For an application to handle more concurrent users you need only increase the number of nodes in the cluster; the effect on the database in terms of load can be tuned by increasing the write-behind interval.
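The following sketch illustrates the write-combining point from the list above: a storeAll() implementation that persists an entire write-behind batch in a single JDBC transaction. The table name, column names, and connection handling are assumptions used only for illustration, and a complete CacheStore would of course also implement the remaining interface methods.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.Iterator;
import java.util.Map;

public abstract class BatchingStoreExample
    {
    // How the connection is obtained is left open here; see the connection
    // pool discussion later in this chapter.
    protected abstract Connection getConnection();

    // Coherence passes all ripe write-behind entries for this member in one
    // call; combining them into a single transaction reduces database round
    // trips. This simplified sketch assumes every key already exists.
    public void storeAll(Map mapEntries)
        {
        Connection con  = getConnection();
        String     sSQL = "UPDATE example_table SET value = ? WHERE id = ?";
        try
            {
            con.setAutoCommit(false);
            PreparedStatement stmt = con.prepareStatement(sSQL);
            for (Iterator iter = mapEntries.entrySet().iterator(); iter.hasNext(); )
                {
                Map.Entry entry = (Map.Entry) iter.next();
                stmt.setString(1, String.valueOf(entry.getValue()));
                stmt.setString(2, String.valueOf(entry.getKey()));
                stmt.addBatch();
                }
            stmt.executeBatch();
            con.commit();
            stmt.close();
            }
        catch (SQLException e)
            {
            throw new RuntimeException("storeAll failed", e);
            }
        }
    }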
Write-Behind Requirements
While enabling write-behind caching is simply a matter of adjusting one configuration setting, ensuring that write-behind works as expected is more involved. Specifically, application design must address several design issues up-front. The most direct implication of write-behind caching is that database updates occur outside of the cache transaction; that is, the cache transaction usually completes before the database transaction(s) begin. This implies that the database transactions must never fail; if this cannot be guaranteed, then rollbacks must be accommodated. As write-behind may re-order database updates, referential integrity constraints must allow out-of-order updates. Conceptually, this is similar to using the database as ISAM-style storage (primary-key based access with a guarantee of no conflicting updates). If other applications share the database, this introduces a new challenge: there is no way to guarantee that a write-behind transaction does not conflict with an external update. This implies that write-behind conflicts must be handled heuristically or escalated for manual adjustment by a human operator. As a rule of thumb, mapping each cache entry update to a logical database transaction is ideal, as this guarantees the simplest database transactions. Because write-behind effectively makes the cache the system-of-record (until the write-behind queue has been written to disk), business regulations must allow cluster-durable (rather than disk-durable) storage of data and transactions.
Refresh-Ahead Caching
In the Refresh-Ahead scenario, Coherence allows a developer to configure a cache to automatically and asynchronously reload (refresh) any recently accessed cache entry from the cache loader before its expiration. The result is that after a frequently accessed entry has entered the cache, the application does not feel the impact of a read against a potentially slow cache store when the entry is reloaded due to expiration. The asynchronous refresh is only triggered when an object that is sufficiently close to its expiration time is accessed; if the object is accessed after its expiration time, Coherence performs a synchronous read from the cache store to refresh its value. The refresh-ahead time is expressed as a percentage of the entry's expiration time. For example, assume that the expiration time for entries in the cache is set to 60 seconds and the refresh-ahead factor is set to 0.5. If the cached object is accessed after 60 seconds, Coherence performs a synchronous read from the cache store to refresh its value. However, if a request is performed for an entry that is more than 30 but less than 60 seconds old, the current value in the cache is returned and Coherence schedules an asynchronous reload from the cache store. Refresh-ahead is especially useful if objects are being accessed by a large number of users. Values remain fresh in the cache and the latency that could result from excessive reloads from the cache store is avoided. The value of the refresh-ahead factor is specified by the <refresh-ahead-factor> subelement of the <read-write-backing-map-scheme> element in the coherence-cache-config.xml file. Refresh-ahead assumes that you have also set an expiration time (<expiry-delay>) for entries in the cache. Example 15-1 configures a refresh-ahead factor of 0.5 and an expiration time of 20 seconds for entries in the local cache. If an entry is accessed within 10 seconds of its expiration time, it is scheduled for an asynchronous reload from the cache store.
Example 15-1 Specifying a Refresh-Ahead Factor
<distributed-scheme>
   <scheme-name>categories-cache-all-scheme</scheme-name>
   <service-name>DistributedCache</service-name>
   <backing-map-scheme>
      <read-write-backing-map-scheme>
         <scheme-name>categoriesLoaderScheme</scheme-name>
         <internal-cache-scheme>
            <local-scheme>
               <scheme-ref>categories-eviction</scheme-ref>
            </local-scheme>
         </internal-cache-scheme>
         <cachestore-scheme>
            <class-scheme>
               <class-name>
                  com.demo.cache.coherence.categories.CategoryCacheLoader
               </class-name>
            </class-scheme>
         </cachestore-scheme>
         <refresh-ahead-factor>0.5</refresh-ahead-factor>
      </read-write-backing-map-scheme>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>

<local-scheme>
   <scheme-name>categories-eviction</scheme-name>
   <expiry-delay>20s</expiry-delay>
</local-scheme>
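The refresh window implied by the configuration above can be computed directly. The following small sketch simply restates the arithmetic: a 20 second expiry with a 0.5 factor gives an asynchronous reload window that opens 10 seconds before expiration.

public class RefreshAheadWindow
    {
    public static void main(String[] args)
        {
        double cExpirySeconds      = 20.0;  // <expiry-delay> on the internal cache
        double dRefreshAheadFactor = 0.5;   // <refresh-ahead-factor>

        // Accesses that occur after this age (but before expiry) schedule an
        // asynchronous reload; accesses after expiry trigger a synchronous load.
        double cWindowStartSeconds = cExpirySeconds * (1.0 - dRefreshAheadFactor);
        System.out.println("Async refresh window: " + cWindowStartSeconds
                + "s to " + cExpirySeconds + "s after the entry was loaded");
        }
    }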
CacheLoader - read-only caches

CacheStore - read/write caches

BinaryEntryStore - read/write for binary entry objects
These interfaces are located in the com.tangosol.net.cache package. The CacheLoader interface has two main methods: load(Object key) and loadAll(Collection keys), and the CacheStore interface adds the methods store(Object key, Object value), storeAll(Map mapEntries), erase(Object key), and eraseAll(Collection colKeys). The BinaryEntryStore interface provides the same methods as the other interfaces, but it works directly on binary objects. See "Sample Cache Store Implementation" on page 15-8 and "Sample Controllable Cache Store Implementation" on page 15-13 for example CacheStore implementations.
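Before looking at the full database-backed sample later in this chapter, the following minimal read-only CacheLoader sketch shows the shape of the interface. The values returned here are fabricated purely for illustration; a real implementation would read from an actual data source.

import com.tangosol.net.cache.CacheLoader;

import java.util.Collection;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

public class SimpleCacheLoader implements CacheLoader
    {
    // Return the value for a single key, or null if the data source has none.
    public Object load(Object oKey)
        {
        return "value-for-" + oKey;
        }

    // Return a map of values for the requested keys; keys with no value are
    // simply omitted from the result.
    public Map loadAll(Collection colKeys)
        {
        Map mapResult = new HashMap();
        for (Iterator iter = colKeys.iterator(); iter.hasNext(); )
            {
            Object oKey = iter.next();
            mapResult.put(oKey, load(oKey));
            }
        return mapResult;
        }
    }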
<?xml version="1.0"?> <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd"> <caching-scheme-mapping> <cache-mapping> <cache-name>com.company.dto.*</cache-name> <scheme-name>distributed-rwbm</scheme-name> </cache-mapping> </caching-scheme-mapping> <caching-schemes> <distributed-scheme> <scheme-name>distributed-rwbm</scheme-name> <backing-map-scheme> <read-write-backing-map-scheme>
<internal-cache-scheme> <local-scheme/> </internal-cache-scheme> <cachestore-scheme> <class-scheme> <class-name>com.example.MyCacheStore</class-name> <init-params> <init-param> <param-type>java.lang.String</param-type> <param-value>{cache-name}</param-value> </init-param> </init-params> </class-scheme> </cachestore-scheme> </read-write-backing-map-scheme> </backing-map-scheme> </distributed-scheme> </caching-schemes> </cache-config>
Note: The use of a cache store implementation substantially increases the consumption of cache service threads (even the fastest database select is orders of magnitude slower than updating an in-memory structure). Consequently, the cache service thread count must be increased (typically in the range 10-100). The most noticeable symptom of an insufficient thread pool is increased latency for cache requests (without corresponding behavior in the backing database).
package com.tangosol.examples.coherence;
import com.tangosol.net.cache.CacheStore;
import com.tangosol.util.Base;

import java.sql.DriverManager;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

import java.util.Collection;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;
/** * An example implementation of CacheStore * interface. * * @author erm 2003.05.01 */ public class DBCacheStore extends Base implements CacheStore { // ----- constructors --------------------------------------------------/** * Constructs DBCacheStore for a given database table. * * @param sTableName the db table name */ public DBCacheStore(String sTableName) { m_sTableName = sTableName; configureConnection(); } /** * Set up the DB connection. */ protected void configureConnection() { try { Class.forName("org.gjt.mm.mysql.Driver"); m_con = DriverManager.getConnection(DB_URL, DB_USERNAME, DB_PASSWORD); m_con.setAutoCommit(true); } catch (Exception e) { throw ensureRuntimeException(e, "Connection failed"); } }
// ---- accessors ------------------------------------------------------/** * Obtain the name of the table this CacheStore is persisting to. * * @return the name of the table this CacheStore is persisting to */ public String getTableName() { return m_sTableName;
} /** * Obtain the connection being used to connect to the database. * * @return the connection used to connect to the database */ public Connection getConnection() { return m_con; }
// ----- CacheStore Interface -------------------------------------------/** * Return the value associated with the specified key, or null if the * key does not have an associated value in the underlying store. * * @param oKey key whose associated value is to be returned * * @return the value associated with the specified key, or * <tt>null</tt> if no value is available for that key */ public Object load(Object oKey) { Object oValue = null; Connection con = getConnection(); String sSQL = "SELECT id, value FROM " + getTableName() + " WHERE id = ?"; try { PreparedStatement stmt = con.prepareStatement(sSQL); stmt.setString(1, String.valueOf(oKey)); ResultSet rslt = stmt.executeQuery(); if (rslt.next()) { oValue = rslt.getString(2); if (rslt.next()) { throw new SQLException("Not a unique key: " + oKey); } } stmt.close(); } catch (SQLException e) { throw ensureRuntimeException(e, "Load failed: key=" + oKey); } return oValue; } /** * Store the specified value under the specific key in the underlying * store. This method is intended to support both key/value creation * and value update for a specific key. * * @param oKey key to store the value under
* @param oValue value to be stored * * @throws UnsupportedOperationException if this implementation or the * underlying store is read-only */ public void store(Object oKey, Object oValue) { Connection con = getConnection(); String sTable = getTableName(); String sSQL; // the following is very inefficient; it is recommended to use DB // specific functionality that is, REPLACE for MySQL or MERGE for Oracle if (load(oKey) != null) { // key exists - update sSQL = "UPDATE " + sTable + " SET value = ? where id = ?"; } else { // new key - insert sSQL = "INSERT INTO " + sTable + " (value, id) VALUES (?,?)"; } try { PreparedStatement stmt = con.prepareStatement(sSQL); int i = 0; stmt.setString(++i, String.valueOf(oValue)); stmt.setString(++i, String.valueOf(oKey)); stmt.executeUpdate(); stmt.close(); } catch (SQLException e) { throw ensureRuntimeException(e, "Store failed: key=" + oKey); } } /** * Remove the specified key from the underlying store if present. * * @param oKey key whose mapping is to be removed from the map * * @throws UnsupportedOperationException if this implementation or the * underlying store is read-only */ public void erase(Object oKey) { Connection con = getConnection(); String sSQL = "DELETE FROM " + getTableName() + " WHERE id=?"; try { PreparedStatement stmt = con.prepareStatement(sSQL); stmt.setString(1, String.valueOf(oKey)); stmt.executeUpdate(); stmt.close(); } catch (SQLException e) {
throw ensureRuntimeException(e, "Erase failed: key=" + oKey); } } /** * Remove the specified keys from the underlying store if present. * * @param colKeys keys whose mappings are being removed from the cache * * @throws UnsupportedOperationException if this implementation or the * underlying store is read-only */ public void eraseAll(Collection colKeys) { throw new UnsupportedOperationException(); } /** * Return the values associated with each the specified keys in the * passed collection. If a key does not have an associated value in * the underlying store, then the return map does not have an entry * for that key. * * @param colKeys a collection of keys to load * * @return a Map of keys to associated values for the specified keys */ public Map loadAll(Collection colKeys) { throw new UnsupportedOperationException(); } /** * Store the specified values under the specified keys in the underlying * store. This method is intended to support both key/value creation * and value update for the specified keys. * * @param mapEntries a Map of any number of keys and values to store * * @throws UnsupportedOperationException if this implementation or the * underlying store is read-only */ public void storeAll(Map mapEntries) { throw new UnsupportedOperationException(); } /** * Iterate all keys in the underlying store. * * @return a read-only iterator of the keys in the underlying store */ public Iterator keys() { Connection con = getConnection(); String sSQL = "SELECT id FROM " + getTableName(); List list = new LinkedList(); try {
PreparedStatement stmt = con.prepareStatement(sSQL); ResultSet rslt = stmt.executeQuery(); while (rslt.next()) { Object oKey = rslt.getString(1); list.add(oKey); } stmt.close(); } catch (SQLException e) { throw ensureRuntimeException(e, "Iterator failed"); } return list.iterator(); }
// ----- data members --------------------------------------------------/** * The connection. */ protected Connection m_con; /** * The db table name. */ protected String m_sTableName; /** * Driver class name. */ private static final String DB_DRIVER = "org.gjt.mm.mysql.Driver"; /** * Connection URL. */ private static final String DB_URL = "jdbc:mysql://localhost:3306/CacheStore"; /** * User name. */ private static final String DB_USERNAME = "root"; /** * Password. */ private static final String DB_PASSWORD = null; }
The Main.java file in Example 15-4 illustrates two different approaches to interacting with a controllable cache store:
Use a controllable cache (note that it must be on a different service) to enable or disable the cache store. This is illustrated by the ControllableCacheStore1 class.

Use the CacheStoreAware interface to indicate that objects added to the cache do not require storage. This is illustrated by the ControllableCacheStore2 class.
Both ControllableCacheStore1 and ControllableCacheStore2 extend the com.tangosol.net.cache.AbstractCacheStore class. This helper class provides unoptimized implementations of the storeAll and eraseAll operations. The CacheStoreAware interface can be used to indicate that an object added to the cache should not be stored in the database. See "Cache of a Database" on page 18-4 for a sample cache configuration. Example 15-4 provides a listing of the Main.java file.
Example 15-4 Main.java - Interacting with a Controllable CacheStore

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.cache.AbstractCacheStore;
import com.tangosol.net.cache.CacheStoreAware;
import com.tangosol.util.Base;
import java.io.Serializable; import java.util.Date; public class Main extends Base { /** * A cache controlled CacheStore implementation */ public static class ControllableCacheStore1 extends AbstractCacheStore { public static final String CONTROL_CACHE = "cachestorecontrol"; String m_sName; public static void enable(String sName) { CacheFactory.getCache(CONTROL_CACHE).put(sName, Boolean.TRUE); } public static void disable(String sName) { CacheFactory.getCache(CONTROL_CACHE).put(sName, Boolean.FALSE); } public void store(Object oKey, Object oValue) { Boolean isEnabled = (Boolean) CacheFactory.getCache(CONTROL_ CACHE).get(m_sName); if (isEnabled != null && isEnabled.booleanValue()) { log("controllablecachestore1: enabled " + oKey + " = " + oValue); } else {
log("controllablecachestore1: disabled " + oKey + " = " + oValue); } } public Object load(Object oKey) { log("controllablecachestore1: load:" + oKey); return new MyValue1(oKey); } public ControllableCacheStore1(String sName) { m_sName = sName; } } /** * a valued controlled CacheStore implementation that * implements the CacheStoreAware interface */ public static class ControllableCacheStore2 extends AbstractCacheStore { public void store(Object oKey, Object oValue) { boolean isEnabled = oValue instanceof CacheStoreAware ? !((CacheStoreAware) oValue).isSkipStore() : true; if (isEnabled) { log("controllablecachestore2: enabled " + oKey + " = " + oValue); } else { log("controllablecachestore2: disabled " + oKey + " = " + oValue); } } public Object load(Object oKey) { log("controllablecachestore2: load:" + oKey); return new MyValue2(oKey); } } public static class MyValue1 implements Serializable { String m_sValue; public String getValue() { return m_sValue; } public String toString() { return "MyValue1[" + getValue() + "]"; }
public MyValue1(Object obj) { m_sValue = "value:" + obj; } } public static class MyValue2 extends MyValue1 implements CacheStoreAware { boolean m_isSkipStore = false; public boolean isSkipStore() { return m_isSkipStore; } public void skipStore() { m_isSkipStore = true; } public String toString() { return "MyValue2[" + getValue() + "]"; } public MyValue2(Object obj) { super(obj); } } public static void main(String[] args) { try { // example 1 NamedCache cache1 = CacheFactory.getCache("cache1"); // disable cachestore ControllableCacheStore1.disable("cache1"); for(int i = 0; i < 5; i++) { cache1.put(new Integer(i), new MyValue1(new Date())); } // enable cachestore ControllableCacheStore1.enable("cache1"); for(int i = 0; i < 5; i++) { cache1.put(new Integer(i), new MyValue1(new Date())); } // example 2 NamedCache cache2 = CacheFactory.getCache("cache2"); // add some values with cachestore disabled
for(int i = 0; i < 5; i++) { MyValue2 value = new MyValue2(new Date()); value.skipStore(); cache2.put(new Integer(i), value); } // add some values with cachestore enabled for(int i = 0; i < 5; i++) { cache2.put(new Integer(i), new MyValue2(new Date())); }
Implementation Considerations
Please keep the following in mind when creating a cache store implementation.
Idempotency
All operations should be designed to be idempotent (that is, repeatable without unwanted side-effects). For write-through and write-behind caches, this allows Coherence to provide low-cost fault-tolerance for partial updates by re-trying the database portion of a cache update during failover processing. For write-behind caching, idempotency also allows Coherence to combine multiple cache updates into a single invocation without affecting data integrity. Applications that have a requirement for write-behind caching but which must avoid write-combining (for example, for auditing reasons), should create a "versioned" cache key (for example, by combining the natural primary key with a sequence id).
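For the auditing scenario mentioned above, a "versioned" key can be as simple as the natural key plus a sequence number. The following sketch is one possible shape (class and field names are assumptions, not part of the Coherence API); because every version has distinct equals/hashCode semantics, successive updates land in distinct entries and are not coalesced by the write-behind queue.

import java.io.Serializable;

public class VersionedKey implements Serializable
    {
    private final Object m_oNaturalKey;
    private final long   m_lSequence;

    public VersionedKey(Object oNaturalKey, long lSequence)
        {
        m_oNaturalKey = oNaturalKey;
        m_lSequence   = lSequence;
        }

    public Object getNaturalKey()
        {
        return m_oNaturalKey;
        }

    public long getSequence()
        {
        return m_lSequence;
        }

    // Distinct sequence numbers produce distinct cache entries, so each update
    // is written to the data source individually rather than being coalesced.
    public boolean equals(Object o)
        {
        if (!(o instanceof VersionedKey))
            {
            return false;
            }
        VersionedKey that = (VersionedKey) o;
        return m_lSequence == that.m_lSequence
                && m_oNaturalKey.equals(that.m_oNaturalKey);
        }

    public int hashCode()
        {
        return m_oNaturalKey.hashCode() ^ (int) m_lSequence;
        }
    }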
Write-Through Limitations
Coherence does not support two-phase operations across multiple cache store instances. In other words, if two cache entries are updated, triggering calls to cache store implementations that are on separate cache servers, it is possible for one database update to succeed and for the other to fail. In this case, it may be preferable to use a cache-aside architecture (updating the cache and database as two separate components of a single transaction) with the application server transaction manager. In many cases it is possible to design the database schema to prevent logical commit failures (but obviously not server failures). Write-behind caching avoids this issue as "puts" are not affected by database behavior (as the underlying issues have been addressed earlier in the design process).
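The cache-aside alternative mentioned above can be sketched as follows: the application performs the database update itself and only refreshes the cache after the commit succeeds. The Order type, table, and column names are assumptions for illustration, and the sketch assumes auto-commit is disabled on the connection.

import com.tangosol.net.NamedCache;

import java.io.Serializable;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

public class CacheAsideExample
    {
    public static class Order implements Serializable
        {
        public String id;
        public String status;
        }

    // Update the database first; only after a successful commit is the cache
    // refreshed, so a failed commit never leaves stale data visible to other
    // cluster members.
    public void updateOrder(Connection con, NamedCache orders, Order order)
            throws SQLException
        {
        PreparedStatement stmt =
                con.prepareStatement("UPDATE orders SET status = ? WHERE id = ?");
        try
            {
            stmt.setString(1, order.status);
            stmt.setString(2, order.id);
            stmt.executeUpdate();
            con.commit();
            orders.put(order.id, order);
            }
        catch (SQLException e)
            {
            con.rollback();
            throw e;
            }
        finally
            {
            stmt.close();
            }
        }
    }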
Cache Queries
Cache queries only operate on data stored in the cache and do not trigger a cache store implementation to load any missing (or potentially missing) data. Therefore, applications that query cache store-backed caches should ensure that all necessary data required for the queries has been pre-loaded. For efficiency, most bulk load operations should be done at application startup by streaming the data set directly from the database into the cache (batching blocks of data into the cache by using NamedCache.putAll()). The loader process must use a "Controllable Cachestore" pattern to disable circular updates back to the database. The cache store may be controlled by using an Invocation service (sending agents across the cluster to modify a local flag in each JVM) or by setting the value in a Replicated cache (a different cache service) and reading it in every cache store implementation method invocation (minimal overhead compared to the typical database operation). A custom MBean can also be used, a simple task with Coherence's clustered JMX facilities.
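A common shape for the pre-load described above is to stream rows from the database and push them into the cache in fixed-size putAll() batches. The table, columns, and batch size below are assumptions used only for illustration; remember that the cache store must be disabled (using one of the techniques just described) while the load runs.

import com.tangosol.net.NamedCache;

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.Map;

public class CachePreloader
    {
    private static final int BATCH_SIZE = 1000;   // assumed batch size

    public static void preload(Connection con, NamedCache cache)
            throws SQLException
        {
        PreparedStatement stmt =
                con.prepareStatement("SELECT id, value FROM example_table");
        ResultSet rslt     = stmt.executeQuery();
        Map       mapBatch = new HashMap();
        while (rslt.next())
            {
            mapBatch.put(rslt.getString(1), rslt.getString(2));
            if (mapBatch.size() >= BATCH_SIZE)
                {
                cache.putAll(mapBatch);   // one network operation per batch
                mapBatch.clear();
                }
            }
        if (!mapBatch.isEmpty())
            {
            cache.putAll(mapBatch);
            }
        stmt.close();
        }
    }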
Re-entrant Calls
The cache store implementation must not call back into the hosting cache service. This includes ORM solutions that may internally reference Coherence cache services. Note that calling into another cache service instance is allowed, though care should be taken to avoid deeply nested calls (as each call consumes a cache service thread and could result in deadlock if a cache service thread pool is exhausted).
Connection Pools
Database connections should be retrieved from the container connection pool (or a third party connection pool) or by using a thread-local lazy-initialization pattern. As dedicated cache servers are often deployed without a managing container, the latter may be the most attractive option (though the cache service thread-pool size should be constrained to avoid excessive simultaneous database connections).
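The thread-local lazy-initialization pattern mentioned above can be sketched as follows. The JDBC URL and credentials are placeholders, and a production implementation would also need connection validation and cleanup when worker threads are retired.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ConnectionHolder
    {
    // Placeholder connection details; supply real values for your environment.
    private static final String DB_URL = "jdbc:mysql://localhost:3306/CacheStore";

    // One connection per cache service worker thread, created on first use.
    private static final ThreadLocal s_connection = new ThreadLocal();

    public static Connection getConnection() throws SQLException
        {
        Connection con = (Connection) s_connection.get();
        if (con == null || con.isClosed())
            {
            con = DriverManager.getConnection(DB_URL, "user", "password");
            s_connection.set(con);
            }
        return con;
        }
    }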
16

Serialization Paged Cache
This chapter provides general information about caching large amounts of binary data off-heap. This chapter includes the following sections:
Understanding Serialization Paged Cache
Configuring Serialization Paged Cache
Optimizing a Partitioned Cache Service
Configuring for High Availability
Configuring Load Balancing and Failover
Supporting Huge Caches
Serialization implies that objects stored in the cache are serialized and stored in a Binary Store; refer to the existing features Serialization Map and Serialization Cache. Paged implies that the objects stored in the cache are segmented for efficiency of management. Cache implies that there can be limits specified to the size of the cache; in this case, the limit is the maximum number of concurrent pages that the cache manages before expiring pages, starting with the oldest page.
The result is a feature that organizes data in the cache based on the time that the data was placed in the cache, and then can efficiently expire that data from the cache, an entire page at a time, and typically without having to reload any data from disk.
17

Using Quorum
This chapter provides instructions for using and configuring quorum policies to control when specific service actions are allowed in a cluster in order to ensure that a cluster is adequately provisioned. This chapter includes the following sections:
Overview of Using Quorum
Using the Cluster Quorum
Using the Partitioned Cache Quorums
Using the Proxy Quorum
Using Custom Action Policies
control service behavior at different service member levels
mandate the minimum service member levels that are required for service operations
ensure an optimal cluster and cache environment for a particular application or solution
terminating suspect members. A member is considered suspect if it has not responded to network communications and is in imminent danger of being disconnected from the cluster. The quorum can be specified generically across all members or constrained to members that have a specific role in the cluster, such as client or server members. See the <role-name> element in "member-identity" on page A-41 for more information on defining role names for cluster members. This quorum is typically used in environments where network performance varies. For example, intermittent network outages may cause a high number of cluster members to be removed from the cluster. Using this quorum, a certain number of members are maintained during the outage and are available when the network recovers. This behavior also minimizes the manual intervention required to restart members. Naturally, requests that require cooperation by the nodes that are not responding are not able to complete and are either blocked for the duration of the outage or are timed out.
Distribution Quorum - This quorum mandates the minimum number of storage-enabled members of a partitioned cache service that must be present before the partitioned cache service is allowed to perform partition distribution.

Restore Quorum - This quorum mandates the minimum number of storage-enabled members of a partitioned cache service that must be present before the partitioned cache service is allowed to restore lost primary partitions from backup.

Read Quorum - This quorum mandates the minimum number of storage-enabled members of a partitioned cache service that must be present to process read requests. A read request is any request that does not mutate the state or contents of a cache.

Write Quorum - This quorum mandates the minimum number of storage-enabled members of a partitioned cache service that must be present to process write requests. A write request is any request that may mutate the state or contents of a cache.
These quorums are typically used to indicate at what service member levels different service operations are best performed given the intended use and requirements of a distributed cache. For example, a small distributed cache may only require three storage-enabled members to adequately store data and handle projected request volumes, whereas a large distributed cache may require 10 or more storage-enabled members to adequately store data and handle projected request volumes. Optimal member levels are tested during development and then set accordingly to ensure that the minimum service member levels are provisioned in a production environment. If the number of storage-enabled nodes running the service drops below the configured level of read or write quorum, the corresponding client operations are rejected by throwing the com.tangosol.net.RequestPolicyException. If the number of storage-enabled nodes drops below the configured level of distribution quorum, some data may become "endangered" (no backup) until the quorum is reached. Dropping below the restore quorum may cause some operations to be blocked until the quorum is reached or to be timed out.
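From the client's perspective, an operation rejected by a read or write quorum surfaces as the exception named above. The following minimal sketch (the cache name is an assumption) shows one way an application might react when the write quorum is not met.

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.net.RequestPolicyException;

public class QuorumAwareClient
    {
    public static void main(String[] args)
        {
        NamedCache cache = CacheFactory.getCache("example-cache");
        try
            {
            cache.put("key", "value");
            }
        catch (RequestPolicyException e)
            {
            // The write quorum is not currently met; the request was rejected
            // rather than applied to an under-provisioned service.
            System.err.println("Write rejected by quorum policy: " + e.getMessage());
            }
        }
    }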
The <partitioned-quorum-policy-scheme> element also supports the use of scheme references. In the following example, a <partitioned-quorum-policy-scheme> with the name partitioned-cache-quorum is referenced from within the <distributed-scheme> element:
<distributed-scheme>
   <scheme-name>partitioned-cache-with-quorum</scheme-name>
   <service-name>PartitionedCacheWithQuorum</service-name>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <partitioned-quorum-policy-scheme>
      <scheme-ref>partitioned-cache-quorum</scheme-ref>
   </partitioned-quorum-policy-scheme>
   <autostart>true</autostart>
</distributed-scheme>

<distributed-scheme>
   <scheme-name>dist-example</scheme-name>
   <service-name>DistributedCache</service-name>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <partitioned-quorum-policy-scheme>
      <scheme-name>partitioned-cache-quorum</scheme-name>
      <distribution-quorum>4</distribution-quorum>
      <restore-quorum>3</restore-quorum>
      <read-quorum>3</read-quorum>
      <write-quorum>5</write-quorum>
   </partitioned-quorum-policy-scheme>
   <autostart>true</autostart>
</distributed-scheme>
The <proxy-quorum-policy-scheme> element also supports the use of scheme references. In the following example, a <proxy-quorum-policy-scheme> with the name proxy-quorum is referenced from within the <proxy-scheme> element:
<proxy-scheme>
   <scheme-name>proxy-with-quorum</scheme-name>
   <service-name>TcpProxyService</service-name>
   ...
   <proxy-quorum-policy-scheme>
      <scheme-ref>proxy-quorum</scheme-ref>
   </proxy-quorum-policy-scheme>
   <autostart>true</autostart>
</proxy-scheme>

<proxy-scheme>
   <scheme-name>proxy-example</scheme-name>
   <service-name>TcpProxyService</service-name>
   ...
   <proxy-quorum-policy-scheme>
      <scheme-name>proxy-quorum</scheme-name>
      <connect-quorum>3</connect-quorum>
   </proxy-quorum-policy-scheme>
   <autostart>true</autostart>
</proxy-scheme>
Enabling Custom Action Policies
Enabling the Custom Failover Access Policy
As an alternative, a factory class can create custom action policy instances. To define a factory class, use the <class-factory-name> element to enter the fully qualified class name and the <method-name> element to specify the name of a static factory method on the factory class which performs object instantiation. For example:
<distributed-scheme>
   <scheme-name>partitioned-cache-with-quorum</scheme-name>
   <service-name>PartitionedCacheWithQuorum</service-name>
cThresholdMillis - Specifies the delay before the policy should start holding requests (after becoming endangered). The default value is 5000 milliseconds.

cLimitMillis - Specifies the delay before the policy makes a maximal effort to hold requests (after becoming endangered). The default value is 60000 milliseconds.

cMaxDelayMillis - Specifies the maximum amount of time to hold a request. The default value is 5000 milliseconds.
The following example enables the custom failover access policy and sets each of the parameters:
<distributed-scheme>
   <scheme-name>partitioned-cache-with-quorum</scheme-name>
   <service-name>PartitionedCacheWithQuorum</service-name>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <partitioned-quorum-policy-scheme>
      <class-name>com.tangosol.net.partition.FailoverAccessPolicy</class-name>
      <init-params>
         <init-param>
            <param-name>cThresholdMillis</param-name>
            <param-value>7000</param-value>
         </init-param>
         <init-param>
            <param-name>cLimitMillis</param-name>
            <param-value>30000</param-value>
         </init-param>
         <init-param>
            <param-name>cMaxDelayMillis</param-name>
            <param-value>2000</param-value>
         </init-param>
      </init-params>
   </partitioned-quorum-policy-scheme>
   <autostart>true</autostart>
</distributed-scheme>
18

Cache Configurations by Example
This chapter provides a series of basic cache scheme definitions that can be used or modified as required. See Chapter 13, "Configuring Caches," for detailed instructions on how to configure caches. In addition, the samples in this chapter build upon one another and often use a <scheme-ref> element to reuse other samples as nested schemes. See "Using Scheme Inheritance" on page 13-13 for details on using the <scheme-ref> element. Lastly, these samples only specify a minimum number of settings; follow the embedded links to a scheme's documentation to see the full set of options. This chapter contains the following sections:
Local Caches (accessible from a single JVM)
Clustered Caches (accessible from multiple JVMs)
In-memory Cache
NIO In-memory Cache
In-memory Cache with Expiring Entries
Cache on Disk
Size Limited Cache on Disk
Persistent Cache on Disk
In-memory Cache with Disk Based Overflow
Cache of a Database
In-memory Cache
Example 18-1 uses a local-scheme to define an in-memory cache. The cache stores as much as the JVM heap allows.
Example 18-1 Configuration for a Local, In-memory Cache

<local-scheme>
   <scheme-name>SampleMemoryScheme</scheme-name>
</local-scheme>
Cache on Disk
Example 18-5 uses an external-scheme to define an on-disk cache.
Note: This example uses the bdb-store-manager (Berkeley Database) for its on disk storage implementation. See external-scheme for additional external storage options.
Example 18-5
Example 18-8
<overflow-scheme>
   <scheme-name>SampleOverflowScheme</scheme-name>
   <front-scheme>
      <local-scheme>
         <scheme-ref>SampleMemoryLimitedScheme</scheme-ref>
      </local-scheme>
   </front-scheme>
   <back-scheme>
      <external-scheme>
         <scheme-ref>SampleDiskScheme</scheme-ref>
      </external-scheme>
   </back-scheme>
</overflow-scheme>
Cache of a Database
Example 18-9 uses a read-write-backing-map-scheme to define a cache of a database. This scheme maintains a local cache of a portion of the database contents. Cache misses are read-through to the database, and cache writes are written back to the database. The cachestore-scheme element is configured with a custom class implementing either the com.tangosol.net.cache.CacheLoader or com.tangosol.net.cache.CacheStore interface. This class is responsible for all operations against the database, such as reading and writing cache entries. See "Sample Cache Store Implementation" on page 15-8 for examples of writing a cache store. The {cache-name} macro is used to inform the cache store implementation of the name of the cache it backs. See "Using Parameter Macros" on page 13-15 for more information on this macro.
Example 18-9 Configuration for the Cache of a Database
<read-write-backing-map-scheme>
   <scheme-name>SampleDatabaseScheme</scheme-name>
   <internal-cache-scheme>
      <local-scheme>
         <scheme-ref>SampleMemoryScheme</scheme-ref>
      </local-scheme>
   </internal-cache-scheme>
   <cachestore-scheme>
      <class-scheme>
         <class-name>com.tangosol.examples.coherence.DBCacheStore</class-name>
         <init-params>
            <init-param>
               <param-type>java.lang.String</param-type>
               <param-value>{cache-name}</param-value>
            </init-param>
         </init-params>
      </class-scheme>
   </cachestore-scheme>
</read-write-backing-map-scheme>
Replicated Cache
Replicated Cache with Overflow
Partitioned Cache
Partitioned Cache with Overflow
Partitioned Cache with Journal Storage
Partitioned Cache of a Database
Partitioned Cache with a Serializer
Near Cache
Replicated Cache
Example 18-10 uses the replicated-scheme element to define a clustered cache in which a copy of each cache entry is stored on all cluster nodes. The sample in "In-memory Cache" on page 18-1 is used to define the cache storage on each cluster node. The size of the cache is only limited by the cluster node with the smallest JVM heap.
Example 18-10 Configuration for a Replicated Cache

<replicated-scheme>
   <scheme-name>SampleReplicatedScheme</scheme-name>
   <backing-map-scheme>
      <local-scheme>
         <scheme-ref>SampleMemoryScheme</scheme-ref>
      </local-scheme>
   </backing-map-scheme>
</replicated-scheme>
Partitioned Cache
Example 18-12 uses the distributed-scheme to define a clustered cache in which cache storage is partitioned across all cluster nodes. The "In-memory Cache" on page 18-1 is used to define the cache storage on each cluster node. The total storage capacity of the cache is the sum of all storage-enabled cluster nodes running the partitioned cache service. See the <local-storage> subelement of "distributed-scheme" on page B-32.
Example 18-12 Configuration for a Partitioned Cache

<distributed-scheme>
   <scheme-name>SamplePartitionedScheme</scheme-name>
   <backing-map-scheme>
      <local-scheme>
         <scheme-ref>SampleMemoryScheme</scheme-ref>
      </local-scheme>
   </backing-map-scheme>
</distributed-scheme>
Example 18-14 Configuration for a Partitioned Cache with RAM Journaling

<distributed-scheme>
   <scheme-name>SamplePartitionedJournalScheme</scheme-name>
   <backing-map-scheme>
      <ramjournal-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>
Example 18-15 defines a partitioned cache that directly uses a flash journal for the backing map.
Example 18-15 Configuration for a Partitioned Cache with Flash Journaling

<distributed-scheme>
   <scheme-name>SamplePartitionedJournalScheme</scheme-name>
   <backing-map-scheme>
      <flashjournal-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>
Serializers that are defined in the tangosol-coherence.xml deployment descriptor can also be referenced.
Example 18-18 Partitioned Cache that References a Serializer

<distributed-scheme>
   <scheme-name>SamplePartitionedPofScheme</scheme-name>
   <service-name>PartitionedPofCache</service-name>
   <serializer>pof</serializer>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>
Lastly, a default serializer can be defined for all cache schemes, which alleviates having to explicitly include a <serializer> element in each cache scheme definition. The global serializer definitions can also reference serializers that are defined in the tangosol-coherence.xml deployment descriptor.
Example 18-19 Defining a Default Serializer

<?xml version='1.0'?>

<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <defaults>
      <serializer>pof</serializer>
   </defaults>
   ...
Near Cache
Example 18-20 uses the near-scheme to define a local in-memory cache of a subset of a partitioned cache. The result is that any cluster node accessing the partitioned cache maintains a local copy of the elements it frequently accesses. This offers read performance close to the replicated-scheme-based caches, while offering the high scalability of a distributed-scheme-based cache. The "Size Limited In-memory Cache" on page 18-2 sample is reused to define the "near" (<front-scheme>) cache, while the "Partitioned Cache" on page 18-6 sample is reused to define the back (<back-scheme>) cache. Note that the size limited configuration of the front-scheme specifies the limit on how much of the back-scheme cache is locally cached.
Example 18-20 Configuration for a Local Cache of a Partitioned Cache

<near-scheme>
   <scheme-name>SampleNearScheme</scheme-name>
   <front-scheme>
      <local-scheme>
         <scheme-ref>SampleLimitedMemoryScheme</scheme-ref>
      </local-scheme>
   </front-scheme>
   <back-scheme>
      <distributed-scheme>
         <scheme-ref>SamplePartitionedScheme</scheme-ref>
      </distributed-scheme>
   </back-scheme>
</near-scheme>
19

Extending Cache Configuration Files
This chapter provides detailed instructions for extending Coherence cache configuration files using XML namespaces and namespace handler classes. The instructions in this chapter assume a general understanding of XML namespaces and XML processing. This chapter includes the following sections:
Introduction to Extending Cache Configuration Files
Declaring XML Namespaces
Creating Namespace Handlers
Example: the JNDI Resource Namespace Handler
Cache configuration files are typically extended to allow applications to perform custom initialization, background tasks, or perform monitoring and maintenance of caches in a cluster. For example, an application can:
establish domain-specific cache entry indexes
preload cached information
load configuration into a cluster
run or schedule background tasks against the cluster
Extending cache configuration files offers applications a common and consolidated place for configuration. In addition, application logic is embedded and managed in a cluster, which is a high-availability and scalable environment that can provide automated recovery of failed application logic if required.
The URI value for a namespace must be in the format class://FullyQualifiedClassName. If an incorrect format is provided, then a failure occurs and the Coherence member fails to start. The handler class provided in the declaration must be a class that implements the NamespaceHandler interface. For details about creating handler classes, see "Creating Namespace Handlers" on page 19-3. The handler class must be able to process the associated XML elements and attributes as they occur within the cache configuration file. More specifically, during the processing of the XML DOM for the cache configuration, all XML elements and attributes that occur within a namespace are passed to the associated handler instance for processing. The following example uses the MyNamespaceHandler to process all elements that use the ex namespace prefix.
<?xml version="1.0"?> <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd" xmlns:ex="class://MyNamespaceHandler"> <ex:myelement/> ...
Guidelines for Declaring an XML Namespace

Use the following guidelines when declaring an XML namespace:

A URI class must implement the NamespaceHandler interface; otherwise, the processing of the cache configuration file fails.

XML elements and attributes must not use an undeclared namespace prefix; otherwise, the processing of the cache configuration file fails.

The default handler that is used for Coherence elements and attributes cannot be overridden; otherwise, the processing of the cache configuration file fails. The default namespace is reserved for Coherence.
Implementing the Namespace Handler Interface
Extending the Namespace Handler Abstract Class
public class GreetingNamespaceHandler implements NamespaceHandler { public AttributeProcessor getAttributeProcessor(XmlAttribute xmlAttribute) { return null; } public DocumentPreprocessor getDocumentPreprocessor() { return null; } public ElementProcessor getElementProcessor(XmlElement xmlElement) { if (xmlElement.getName().equals("ex:message")) { MessageProcessor mp = new MessageProcessor(); return mp; } else { throw new RuntimeException("Unknown element type " + xmlElement.getQualifiedName()); } } public void onEndNamespace(ProcessingContext processingContext, XmlElement xmlElement, String string, URI uRI) { } public void onStartNamespace(ProcessingContext processingContext, XmlElement xmlElement, String string, URI uRI) { } public class MessageProcessor implements ElementProcessor { public Object process(ProcessingContext processingContext, XmlElement xmlElement) throws ConfigurationException { System.out.println("Greeting is: " + xmlElement.getString()); return null; } } }
The above class handles the <ex:message> element content. However, it does not distinguish between the different types of elements that may occur. All elements that occur within the namespace are provided to the same process method. In order to process each type of XML element, conditional statements are required within the process method. This technique may be sufficient for trivial XML content; however, more complex XML content may require many conditional statements and can become overly complicated. A more declarative approach is provided with the AbstractNamespaceHandler class.
Namespace Handler Callback Methods The NamespaceHandler interface provides the onStartNamespace and the onEndNamespace callback methods. These methods allow additional processing to be performed on the first and last encounter of a namespace in a cache configuration file.
Registering Processors
The AbstractNamespaceHandler class provides methods for declaratively registering element and attribute processors. The methods alleviate the need to check element names and types. There are two registration mechanisms: explicit registration and implicit registration.

Explicit Processor Registration

To use explicit processor registration, call the registerProcessor method within a sub-class constructor and manually register both element and attribute processors. Example 19-2 re-implements Example 19-1 and uses the registerProcessor method to register the MessageProcessor element processor.
Example 19-2 AbstractNamespaceHandler Implementation with Explicit Registration

import com.tangosol.config.ConfigurationException;
import com.tangosol.config.xml.AbstractNamespaceHandler;
import com.tangosol.config.xml.ElementProcessor;
import com.tangosol.config.xml.ProcessingContext;
import com.tangosol.run.xml.XmlElement;
public class GreetingNamespaceHandler extends AbstractNamespaceHandler { public GreetingNamespaceHandler() { registerProcessor("message", new MessageProcessor()); } public class MessageProcessor implements ElementProcessor { public Object process(ProcessingContext processingContext, XmlElement xmlElement) throws ConfigurationException { System.out.println("Greeting is: " + xmlElement.getString()); return null; } } }
Implicit Processor Registration

To use implicit processor registration, annotate processor classes with the @XmlSimpleName annotation. The processors are automatically registered for use within the associated namespace. The @XmlSimpleName annotation is used to determine which of the processor implementations are appropriate to handle XML content encountered during XML DOM processing. If an XML element or attribute is encountered for which there is no defined processor, then the onUnknownElement or onUnknownAttribute methods are called, respectively. The methods allow corrective action to be taken if possible. By default, a ConfigurationException exception is raised if unknown XML content is discovered during processing. Example 19-3 re-implements Example 19-1 and uses the @XmlSimpleName annotation to register the MessageProcessor element processor.
Example 19-3 AbstractNamespaceHandler Implementation with Implicit Registration

import com.tangosol.config.ConfigurationException;
import com.tangosol.config.xml.AbstractNamespaceHandler;
import com.tangosol.config.xml.ElementProcessor;
import com.tangosol.config.xml.ProcessingContext;
import com.tangosol.config.xml.XmlSimpleName;
import com.tangosol.run.xml.XmlElement;
public class GreetingNamespaceHandler extends AbstractNamespaceHandler { public GreetingNamespaceHandler() { } @XmlSimpleName("message") public class MessageProcessor implements ElementProcessor { public Object process(ProcessingContext processingContext, XmlElement xmlElement) throws ConfigurationException { System.out.println("Greeting is: " + xmlElement.getString()); return null; } } }
An element processor can be used to hand-code the processing of the <english> element:
import com.tangosol.config.ConfigurationException;
public class GreetingNamespaceHandler extends AbstractNamespaceHandler { public GreetingNamespaceHandler() { } @XmlSimpleName("message") public class MessageProcessor implements ElementProcessor { public Object process(ProcessingContext processingContext, XmlElement xmlElement) throws ConfigurationException { String engMsg = processingContext.getMandatoryProperty("english", String.class, xmlElement); Message message = new Message(); message.setEnglish(engMsg); System.out.println("Greeting is: "+ message.getEnglish()); return message; } } }
The inject method uses Java reflection, under the assumption that the object to be configured follows the Java bean naming conventions. First the inject method identifies the appropriate setter methods that may be called on the Java bean. Typically, this is achieved by locating setter methods that are annotated with the @Injectable annotation. Next, it determines the appropriate types of values required by the setter methods. Lastly, it uses the provided XmlElement to locate, parse, convert, coerce, and then set appropriately typed values into the object using the available setter methods. The inject method supports primitives, enumerations, formatted values, complex user-defined types, and collection types (sets, lists, and maps). A ConfigurationException exception is thrown if the inject method fails. For example, it fails to format a value into the expected type. The following example demonstrates a Message class for the above example that supports injection with the inject method:
import com.tangosol.config.annotation.Injectable;

public class Message
    {
    private String m_sEnglish = "a greeting";

    public Message()
        {
        }

    @Injectable("english")
    public void setEnglish(String sEnglish)
        {
        m_sEnglish = sEnglish;
        }

    public String getEnglish()
        {
        return m_sEnglish;
        }
    }
If the @Injectable annotation property is omitted, then the inject method tries to use the setter method Java bean name. For the above example, @Injectable("") results in the use of english.
Note:
Typically, element and attribute processors follow the same pattern when using the inject method. For example:
@XmlSimpleName("element") public class XProcessor implements ElementProcessor { public Object process(ProcessingContext processingContext, XmlElement xmlElement) throws ConfigurationException { return processingContext.inject(new X(), xmlElement); } }
Declarations and registrations for such implementations can be automated using the registerElementType and registerAttributeType methods for elements and attributes, respectively. These methods are available in the AbstractNamespaceHandler class and are often used in constructors of AbstractNamespaceHandler sub-classes. The following example demonstrates using the registerElementType method and does not require a processor implementation.
Note:
To support type-based registration of classes, the specified class must provide a no-argument constructor.
import com.tangosol.config.xml.AbstractNamespaceHandler;

public class GreetingNamespaceHandler extends AbstractNamespaceHandler
    {
    public GreetingNamespaceHandler()
        {
        registerElementType("message", Message.class);
        }
    }
Create the JNDI Resource Namespace Handler
Declare the JNDI Namespace Handler
Use the JNDI Resource Namespace Handler
The JndiBasedParameterizedBuilder class performs a JNDI context lookup to locate and create an object using the name and initialization parameters that are provided in the <resource-name> and <init-params> elements, respectively. The setter methods for these elements (setResourceNameExpression and setParameterList) use the @Injectable annotation to pass the values configured in the cache configuration files.
import com.tangosol.coherence.config.ParameterList;
import com.tangosol.coherence.config.SimpleParameterList;
import com.tangosol.coherence.config.builder.ParameterizedBuilder;
import com.tangosol.coherence.config.builder.ParameterizedBuilder.ReflectionSupport;
import com.tangosol.config.annotation.Injectable;
import com.tangosol.config.expression.Expression;
import com.tangosol.config.expression.LiteralExpression;
import com.tangosol.config.expression.Parameter;
import com.tangosol.config.expression.ParameterResolver;
import com.tangosol.util.Base;

import java.util.Hashtable;
import java.util.logging.Level;
import java.util.logging.Logger;

import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JndiBasedParameterizedBuilder
        implements ParameterizedBuilder<Object>, ReflectionSupport {
    private static final Logger logger =
        Logger.getLogger(JndiBasedParameterizedBuilder.class.getName());

    private Expression<String> m_exprResourceName;
    private ParameterList m_parameterList;

    public JndiBasedParameterizedBuilder() {
        m_exprResourceName = new LiteralExpression<String>("");
        m_parameterList = new SimpleParameterList();
    }

    public Expression<String> getResourceNameExpression() {
        return m_exprResourceName;
    }

    @Injectable("resource-name")
    public void setResourceNameExpression(Expression<String> exprResourceName) {
        m_exprResourceName = exprResourceName;
    }

    public ParameterList getParameterList() {
        return m_parameterList;
    }

    @Injectable("init-params")
    public void setParameterList(ParameterList parameterList) {
        m_parameterList = parameterList;
    }

    public boolean realizes(Class<?> clazz, ParameterResolver parameterResolver,
            ClassLoader classLoader) {
        return clazz.isAssignableFrom(
            realize(parameterResolver, classLoader, null).getClass());
    }

    public Object realize(ParameterResolver parameterResolver, ClassLoader classLoader,
            ParameterList parameterList) {
        InitialContext initialContext;
        try {
            String sResourceName = m_exprResourceName.evaluate(parameterResolver);

            Hashtable<String, Object> env = new Hashtable<String, Object>();
            for (Parameter parameter : m_parameterList) {
                env.put(parameter.getName(), parameter.evaluate(parameterResolver));
            }

            initialContext = new InitialContext(env);

            if (logger.isLoggable(Level.FINE)) {
                logger.log(Level.FINE,
                    "Looking up {0} using JNDI with the environment {1}",
                    new Object[] {sResourceName, env});
            }

            Object resource = initialContext.lookup(sResourceName);

            if (logger.isLoggable(Level.FINE)) {
                logger.log(Level.FINE, "Found {0} using JNDI", resource);
            }

            return resource;
        } catch (NamingException e) {
            throw Base.ensureRuntimeException(e,
                "Unable to resolve the JNDI resource: " + m_exprResourceName.toString());
        }
    }

    public String toString() {
        return String.format("%s{resourceName=%s, parameters=%s}",
            this.getClass().getName(), m_exprResourceName, m_parameterList);
    }
}
Based on the handler implementation, the <resource> and <resource-name> elements are required and the <init-params> element is optional. The following example uses a JNDI resource for a cache store when defining a distributed scheme:
<?xml version="1.0"?> <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-cache-config coherence-cache-config.xsd" xmlns:jndi="class://com.examples.JndiNamespaceHandler"> ... <caching-schemes> <distributed-scheme> <scheme-name>distributed-rwbm</scheme-name> <backing-map-scheme> <read-write-backing-map-scheme> <internal-cache-scheme> <local-scheme/> </internal-cache-scheme> <cachestore-scheme> <class-scheme> <jndi:resource> <jndi:resource-name>MyCacheStore</jndi:resource-name> <init-params> <init-param> <param-type>java.lang.String</param-type> <param-value>{cache-name}</param-value> </init-param> </init-params> </jndi:resource> </class-scheme> </cachestore-scheme> </read-write-backing-map-scheme> </backing-map-scheme> </distributed-scheme> <caching-schemes> </cache-config>
The following example uses a JNDI resource to resolve a JMS connection factory:
<?xml version="1.0"?> <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/ coherence-cache-config coherence-cache-config.xsd" xmlns:jndi="class://com.examples.JndiNamespaceHandler"> 19-12 Oracle Fusion Middleware Developing Applications with Oracle Coherence
... <jndi:resource> <jndi:resource-name>ConnectionFactory</jndi:resource-name> <init-params> <init-param> <param-name>java.naming.factory.initial</param-name> <param-value>org.apache.activemq.jndi.ActiveMQInitialContextFactory </param-value> </init-param> <init-param> <param-name>java.naming.provider.url</param-name> <param-value system-property="java.naming.provider.url"></param-value> </init-param> </init-params> </jndi:resource>
Part IV
Chapter 20, "Using Portable Object Format"
Chapter 21, "Pre-Loading a Cache"
Chapter 22, "Querying Data In a Cache"
Chapter 23, "Using Continuous Query Caching"
Chapter 24, "Processing Data In a Cache"
Chapter 25, "Using Map Events"
Chapter 26, "Controlling Map Operations with Triggers"
Chapter 27, "Using Live Events"
Chapter 28, "Using Coherence Query Language"
Chapter 29, "Performing Transactions"
Chapter 30, "Working with Partitions"
Chapter 31, "Managing Thread Execution"
Chapter 32, "Constraints on Re-entrant Calls"
20 Using Portable Object Format
This chapter provides instructions for using the Portable Object Format (POF) to serialize objects in Coherence. The instructions are specific to using POF for Java clients. For information on how to work with POF when building .NET extend clients, see "Building Integration Objects for .NET Clients" in Developing Remote Clients for Oracle Coherence. For information on how to work with POF when building C++ extend clients, see "Building Integration Objects for C++ Clients" in Developing Remote Clients for Oracle Coherence. This chapter includes the following sections:
Understanding Serialization in Coherence
Overview of POF Serialization
Using the POF API to Serialize Objects
Using POF Annotations to Serialize Objects
Using POF Extractors and POF Updaters
Serializing Keys Using POF
com.tangosol.io.pof.PofSerializer
The Portable Object Format (also referred to as POF) is a language agnostic binary format. POF was designed to be incredibly efficient in both space and time and is the recommended serialization option in Coherence. See "Using the POF API to Serialize Objects" on page 20-2.

java.io.Serializable
The simplest, but slowest option.

java.io.Externalizable
This requires developers to implement serialization manually, but can provide significant performance benefits. Compared to java.io.Serializable, this can cut serialized data size by a factor of two or more (especially helpful with Distributed caches, as they generally cache data in serialized form). Most importantly, CPU usage is dramatically reduced.

com.tangosol.io.ExternalizableLite
This is very similar to java.io.Externalizable, but offers better performance and less memory usage by using a more efficient I/O stream implementation.

com.tangosol.run.xml.XmlBean
A default implementation of ExternalizableLite (for more details, see the API Javadoc for XmlBean).
Note:
When serializing an object, Java serialization automatically crawls every visible object (by using object references, including collections like Map and List). As a result, cached objects should not refer to their parent objects directly (holding onto an identifying value like an integer is allowed). Objects that implement their own serialization routines are not affected.
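For example, a cached value can hold an identifier rather than a reference to its parent; the Order and Customer classes below are hypothetical and only illustrate the pattern:

// Hypothetical cached value class; holding only the parent's identifier keeps
// Java serialization from crawling into the parent object graph.
public class Order implements java.io.Serializable {
    // private Customer m_parent;  // avoid: serializing an Order would also serialize the Customer
    private long m_customerId;     // preferred: an identifying value is enough to look the parent up

    public long getCustomerId() {
        return m_customerId;
    }

    public void setCustomerId(long customerId) {
        m_customerId = customerId;
    }
}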
It is language independent, with current support for Java, .NET, and C++.
It is very efficient: in a simple test class with a String, a long, and three ints, (de)serialization was seven times faster and the binary produced was one sixth the size compared with standard Java serialization.
It is versionable: objects can evolve and have forward and backward compatibility.
It supports the ability to externalize your serialization logic.
It is indexed, which allows values to be extracted without deserializing the whole object. See "Using POF Extractors and POF Updaters" on page 20-13.
Guidelines for Assigning POF Indexes
Using POF Object References
Registering POF Objects
Configuring Coherence to Use the ConfigurablePofContext Class
POF elements are indexed by providing a numeric value for each element that you write to or read from the POF stream. Keep in mind that the indexes must be unique for each element written and read from the POF stream; this is especially important when derived types are involved, because the indexes must be unique between the super class and the derived class. The following example demonstrates implementing the PortableObject interface:
public void readExternal(PofReader in) throws IOException {
    m_symbol = (Symbol) in.readObject(0);
    m_ldtPlaced = in.readLong(1);
    m_fClosed = in.readBoolean(2);
}

public void writeExternal(PofWriter out) throws IOException {
    out.writeObject(0, m_symbol);
    out.writeLong(1, m_ldtPlaced);
    out.writeBoolean(2, m_fClosed);
}
As with the PortableObject interface, all elements written to or read from the POF stream must be uniquely indexed. Below is an example implementation of the PofSerializer interface:
Example 20-1 Implementation of the PofSerializer Interface
public Object deserialize(PofReader in) throws IOException {
    Symbol symbol = (Symbol) in.readObject(0);
    long ldtPlaced = in.readLong(1);
    boolean fClosed = in.readBoolean(2);

    // mark that reading the object is done
    in.readRemainder(null);

    return new Trade(symbol, ldtPlaced, fClosed);
}

public void serialize(PofWriter out, Object o) throws IOException {
    Trade trade = (Trade) o;
    out.writeObject(0, trade.getSymbol());
    out.writeLong(1, trade.getTimePlaced());
    out.writeBoolean(2, trade.isClosed());

    // mark that writing the object is done
    out.writeRemainder(null);
}
Order your reads and writes: start with the lowest index value in the serialization routine and finish with the highest. When deserializing a value, perform reads in the same order as writes. Non-contiguous indexes are acceptable but must be read and written sequentially.
When subclassing, reserve index ranges: indexes are cumulative across derived types. As such, each derived type must be aware of the POF index range reserved by its super class.
Do not re-purpose indexes: to support Evolvable, it is imperative that indexes of attributes are not re-purposed across class revisions.
Label indexes: indexes that are labeled with a public static final int are much easier to work with, especially when using POF extractors and POF updaters (see the sketch following this list). See "Using POF Extractors and POF Updaters" on page 20-13. Indexes that are labeled must still be read and written in the same order as described above.
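For example, the PortableObject implementation shown earlier could label its indexes with constants; the class name, constant names, and the Symbol type here are assumptions carried over from that example:

public class Trade implements PortableObject {
    // labeled POF indexes (values assumed for illustration)
    public static final int SYMBOL = 0;
    public static final int TIME_PLACED = 1;
    public static final int CLOSED = 2;

    private Symbol m_symbol;
    private long m_ldtPlaced;
    private boolean m_fClosed;

    public void readExternal(PofReader in) throws IOException {
        m_symbol = (Symbol) in.readObject(SYMBOL);
        m_ldtPlaced = in.readLong(TIME_PLACED);
        m_fClosed = in.readBoolean(CLOSED);
    }

    public void writeExternal(PofWriter out) throws IOException {
        out.writeObject(SYMBOL, m_symbol);
        out.writeLong(TIME_PLACED, m_ldtPlaced);
        out.writeBoolean(CLOSED, m_fClosed);
    }
}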
Object references are only supported for user defined object types. Object references are not supported for Evolvable objects. Object references are not supported for keys.
Objects that have been written out with a POF context that does not support references cannot be read by a POF context that supports references. The opposite is also true. POF objects that use object identity and references cannot be queried using POF extractors. Instead, use the ValueExtractor API to query object values or disable object references.
Enabling POF Object References
Registering POF Object Identities for Circular and Nested Objects
To enable object references when using the SimplePofContext class, call the setReferenceEnabled method with the parameter set to true. For example:
SimplePofContext ctx = new SimplePofContext(); ctx.setReferenceEnabled(true);
}

public Customer(String sName, Product product) {
    m_sName = sName;
    m_product = product;
}

public String getName() {
    return m_sName;
}

public Product getProduct() {
    return m_product;
}

public void setProduct(Product product) {
    m_product = product;
}
}
The serializer implementation registers an identity during deserialization and is defined as follows:
public class CustomerSerializer implements PofSerializer {
    @Override
    public void serialize(PofWriter pofWriter, Object o) throws IOException {
        Customer customer = (Customer) o;
        pofWriter.writeString(0, customer.getName());
        pofWriter.writeObject(1, customer.getProduct());
        pofWriter.writeRemainder(null);
    }

    @Override
    public Object deserialize(PofReader pofReader) throws IOException {
        String sName = pofReader.readString(0);
        Customer customer = new Customer(sName);
Note: Coherence reserves the first 1000 type IDs for internal use. As shown in the above example, the <user-type-list> includes the coherence-pof-config.xml file that is located in the root of the coherence.jar file. This is where Coherence-specific user types are defined; it should be included in all of your POF configuration files.
Per Service
Each service provides a full ConfigurablePofContext serializer class configuration or references a predefined configuration that is included in the operational configuration file.

All Services
All services use a global ConfigurablePofContext serializer class configuration. Services that provide their own configuration override the global configuration. The global configuration can also be a full configuration or reference a predefined configuration that is included in the operational configuration file.

JVM
The ConfigurablePofContext serializer class is enabled for the whole JVM.
The following example references the default definition in the operational configuration file. Refer to "serializer" on page A-59 to see the default ConfigurablePofContext serializer definition.
<distributed-scheme> <scheme-name>example-distributed</scheme-name> <service-name>DistributedCache</service-name> <serializer>pof</serializer> </distributed-scheme>
The following example demonstrates a global configuration for the ConfigurablePofContext class and defines a custom POF configuration file:
<?xml version='1.0'?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
   coherence-cache-config.xsd">
   <defaults>
      <serializer>
         <instance>
            <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
            <init-params>
               <init-param>
                  <param-type>String</param-type>
                  <param-value>my-pof-config.xml</param-value>
               </init-param>
            </init-params>
         </instance>
      </serializer>
   </defaults>
...
The following example references the default definition in the operational configuration file. Refer to "serializer" on page A-59 to see the default ConfigurablePofContext serializer definition.
<?xml version='1.0'?> <cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config" xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd"> <defaults> <serializer>pof</serializer> </defaults> ...
tangosol.pof.enabled=true - Enables POF for the entire JVM instance.
tangosol.pof.config=CONFIG_FILE_PATH - The path to the POF configuration file you want to use. If the file is not in the classpath, then it must be presented as a file resource (for example, file:///opt/home/coherence/mycustom-pof-config.xml).
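For example, both properties can be supplied on the java command line when starting an application or cache server (the application class and file name below are placeholders):

java -Dtangosol.pof.enabled=true -Dtangosol.pof.config=my-pof-config.xml com.examples.MyApplication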
Annotating Objects for POF Serialization
Registering POF Annotated Objects
Generating a POF Configuration File
Enabling Automatic Indexing
Providing a Custom Codec
@Portable
Marks the class as POF serializable. The annotation is only permitted at the class level and has no members.

@PortableProperty
Marks a member variable or method accessor as a POF serialized attribute. Annotated methods must conform to accessor notation (get, set, is). Members can be used to specify POF indexes as well as custom codecs that are executed before or after serialization or deserialization. Index values may be omitted and automatically assigned. If a custom codec is not entered, the default codec is used.
The following example demonstrates annotating a class, method, and properties and assigning explicit property index values. See "Guidelines for Assigning POF Indexes" on page 20-4 for additional details on POF indexing.
@Portable
public class Person {
    @PortableProperty(0)
    public String getFirstName() {
        return m_firstName;
    }

    private String m_firstName;

    @PortableProperty(1)
    private String m_lastName;

    @PortableProperty(2)
    private int m_age;
}
The PofAnnotationSerializer serializer class is automatically assumed if an object is annotated and does not need to be included in the user type definition. The following example registers a user type for an annotated Person object:
<?xml version='1.0'?>
<pof-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-pof-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-pof-config
   coherence-pof-config.xsd">
   <user-type-list>
      <include>coherence-pof-config.xml</include>

      <!-- User types must be above 1000 -->
      <user-type>
         <type-id>1001</type-id>
         <class-name>com.examples.Person</class-name>
      </user-type>
   </user-type-list>
</pof-config>
The -root argument is required and lists the locations (directories, JAR files, or a GAR file) to scan for POF annotated classes. A user type entry and ID is generated for each annotated class that is found, and the resulting pof-config.xml file is written to the current working directory. For example:
pof-config-gen.cmd -root c:\src\myPofClasses.jar
Note:
If you specify a GAR file as the root, the output is a new GAR file with a count suffix (filename-n.gar) that includes the generated POF configuration file.
-out: Use the -out argument to specify either the path to an output directory or a path and filename. The default output directory is the working directory. The default filename if only a directory is specified is pof-config.xml. If a directory is specified and a pof-config.xml file already exists, then a new file is created with a count suffix (pof-config-n.xml). If a path and filename are specified and the file currently exists, then the file is overwritten.
-config: Use the -config argument to specify the path and filename of an existing POF configuration file from which existing user types must be added to the generated POF configuration file. Existing user type IDs are retained in the
generated file. This argument can be used to support backwards compatibility when generating a POF configuration file multiple times.
-include: Use the -include argument to specify whether an existing POF configuration file (as specified by the -config argument) should only be referenced in the generated POF configuration file. The argument results in an <include> element that references the existing file. Existing user types and IDs are not recreated in the generated file. At runtime, the referenced file must be located in the classpath. A reference to the Coherence-specific POF configuration file (coherence-pof-config.xml) is automatically added to the generated POF configuration file if it is not found in an existing POF configuration file and does not need to be added using the -include argument.
Note:
-packages: Use the -packages argument to constrain the class scan to specific packages. The packages are entered as a comma separated list.
-startTypeId: Use the -startTypeId argument to specify the user type ID number from which to start allocating IDs. IDs up to 1000 are reserved for Coherence-specific types and cannot be used.
The following example scans the c:\classes\pof directory and creates a new POF configuration file called my-pof-config.xml in the c:\tmp directory that includes (by reference) the existing c:\tmp\pof-config.xml file.
pof-config-gen.cmd -out c:\tmp\my-pof-config.xml -config c:\tmp\pof-config.xml -include -root c:\classes\pof
Note:
classes. To enable automatic indexing, the PofAnnotationSerializer serializer class must be explicitly defined when registering the object as a user type in the POF configuration file. The fAutoIndex boolean parameter in the constructor enables automatic indexing and must be set to true. For example:
<user-type>
   <type-id>1001</type-id>
   <class-name>com.examples.Person</class-name>
   <serializer>
      <class-name>com.tangosol.io.pof.PofAnnotationSerializer</class-name>
      <init-params>
         <init-param>
            <param-type>int</param-type>
            <param-value>{type-id}</param-value>
         </init-param>
         <init-param>
            <param-type>class</param-type>
            <param-value>{class}</param-value>
         </init-param>
         <init-param>
            <param-type>boolean</param-type>
            <param-value>true</param-value>
         </init-param>
      </init-params>
   </serializer>
</user-type>
To assign a codec to a property, enter the codec as a member of the @PortableProperty annotation. If a codec is not specified, a default codec (DefaultCodec) is used. The following example demonstrates assigning the above LinkedListCodec codec:
@PortableProperty(codec = LinkedListCodec.class) private List<String> m_aliases;
object without the requirement to go through the full serialization/deserialization routines. PofExtractor and PofUpdater add flexibility in working with non-primitive types in Coherence. For many extend client cases, a corresponding Java class in the grid is no longer required. Because POF extractors and POF updaters can navigate the binary, the entire key/value does not have to be deserialized into object form. This implies that indexing can be achieved by simply using POF extractors to pull a value to index on. However, a corresponding Java class is still required when using a cache store. In this case, the deserialized version of the key and value is passed to the cache store to write to the back end.
Notice that there is a constant for each data member that is written to and read from the POF stream. This is an excellent practice to follow: it simplifies writing your serialization routines and makes it easier to work with POF extractors and POF updaters. By labeling each index, it becomes much easier to reason about the index. As mentioned above, in the simplest case, the work address can be pulled out of the contact by using the WORK_ADDRESS index. The SimplePofPath also allows using an array of ints to traverse the PofValues. For example, to get the zip code of the work address, use [WORK_ADDRESS, ZIP]. The examples are discussed in more detail below.
In the above case, PofExtractor has a convenience constructor that uses a SimplePofPath to retrieve a singular index, in our case the Contact.LASTNAME index. To find all contacts with a work address in the 01803 zip code, the query is as follows:
ValueExtractor veZip = new PofExtractor(
    String.class, new SimplePofPath(new int[] {Contact.WORK_ADDRESS, Address.ZIP}));
Filter filter = new EqualsFilter(veZip, "01803");

// find all entries that have a work address in the 01803 zip code
Set setEntries = cache.entrySet(filter);
Notice that in the previous examples, the PofExtractor constructor has a first argument with the class of the extracted value or null. The reason for passing type information is that POF uses a compact form in the serialized value when possible. For example, some numeric values are represented as special POF intrinsic types in which the type implies the value. As a result, POF requires the receiver of a value to have implicit knowledge of the type. PofExtractor uses the class supplied in the constructor as the source of the type information. If the class is null, PofExtractor infers the type from the serialized state, but the extracted type may differ from the expected type. String types, in fact, can be correctly inferred from the POF stream, so null is sufficient in the previous examples. In general, however, null should not be used.
Note:
While these examples operate on String-based values, this functionality works on any POF-encoded value.
Because POF defines a cross-platform object format, it cannot always provide a symmetrical conversion. That is, when a serialized object is deserialized, the resulting object type can differ from the original type. This occurs because some data types in Java do not have equivalents on the .NET and C++ platforms. As a result, avoid using classes that potentially have an asymmetrical POF conversion as keys, or parts of keys, for caches and other Java collections. Avoid using the java.util.Date type. POF is designed to serialize to java.sql.Timestamp (which extends java.util.Date). The wire formats for those two classes are identical, and a deserialization of that wire representation always results in a java.sql.Timestamp instance. Unfortunately, the equals method of those two classes breaks the symmetry requirement for keys in Java collections. That is, if you have two objects, D (of java.util.Date) and T (of java.sql.Timestamp), that are equivalent from the POF wire format perspective, then D.equals(T) yields true, while T.equals(D) yields false. Therefore, the use of java.util.Date must be avoided. Use a Long representation of the date or the java.sql.Timestamp type to avoid breaking the key symmetry requirement. Keys that use POF object references cannot be serialized. In addition, POF object references support circular references; therefore, you must ensure that your key class does not have circular references.
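The asymmetry is easy to demonstrate outside of Coherence; a minimal sketch:

import java.sql.Timestamp;
import java.util.Date;

public class DateKeySymmetry {
    public static void main(String[] args) {
        Date d = new Date(0L);
        Timestamp t = new Timestamp(0L);

        // Date.equals compares only the millisecond value, so d considers t equal
        System.out.println(d.equals(t));   // true

        // Timestamp.equals(Object) requires the argument to be a Timestamp
        System.out.println(t.equals(d));   // false
    }
}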
21 Pre-Loading a Cache
This chapter describes how to pre-load data into a cache using the bulk loading and distributed loading patterns. This chapter includes the following sections:
public static void bulkLoad(NamedCache cache, Connection conn) {
    Statement s;
    ResultSet rs;
    try {
        s = conn.createStatement();
        rs = s.executeQuery("select key, value from table");
        while (rs.next()) {
            Integer key = new Integer(rs.getInt(1));
            String value = rs.getString(2);
            cache.put(key, value);
        }
        ...
    } catch (SQLException e) {...}
}
Loading the cache can be made much more efficient by using the ConcurrentMap.putAll method instead. This is illustrated in Example 21-2:
Example 21-2 Pre-Loading a Cache Using ConcurrentMap.putAll

public static void bulkLoad(NamedCache cache, Connection conn) {
    Statement s;
    ResultSet rs;
    Map buffer = new HashMap();
    try {
        int count = 0;
        s = conn.createStatement();
        rs = s.executeQuery("select key, value from table");
        while (rs.next()) {
            Integer key = new Integer(rs.getInt(1));
            String value = rs.getString(2);
            buffer.put(key, value);

            // this loads 1000 items at a time into the cache
            if ((count++ % 1000) == 0) {
                cache.putAll(buffer);
                buffer.clear();
            }
        }
        if (!buffer.isEmpty()) {
            cache.putAll(buffer);
        }
        ...
    } catch (SQLException e) {...}
}
which cluster members perform the load
how to divide the data set among the members
The application should consider the load that is placed on the underlying data source (such as a database or file system) when selecting members and dividing work. For example, a single database can easily be overwhelmed if too many members execute queries concurrently.
1. Retrieve the set of storage-enabled members. For example, the following method uses the getStorageEnabledMembers method to retrieve the storage-enabled members of a distributed cache.
Example 21-3 Retrieving Storage-Enabled Members of the Cache
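A minimal sketch of such a method, assuming a distributed (partitioned) cache service, might look like this; the method name matches the one called in Example 21-5:

// Requires com.tangosol.net.DistributedCacheService and com.tangosol.net.NamedCache.
// Returns the storage-enabled members of the service that backs the given cache.
protected Set getStorageMembers(NamedCache cache) {
    DistributedCacheService service =
        (DistributedCacheService) cache.getCacheService();
    return service.getStorageEnabledMembers();
}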
2. Divide the work among the storage-enabled cluster members. For example, the following routine returns a map, keyed by member, containing a list of files assigned to that member.
Example 21-4 Routine to Get a List of Files Assigned to a Cache Member
protected Map<Member, List<String>> divideWork(Set members, List<String> fileNames) {
    Iterator i = members.iterator();
    Map<Member, List<String>> mapWork = new HashMap(members.size());
    for (String sFileName : fileNames) {
        Member member = (Member) i.next();
        List<String> memberFileNames = mapWork.get(member);
        if (memberFileNames == null) {
            memberFileNames = new ArrayList();
            mapWork.put(member, memberFileNames);
        }
        memberFileNames.add(sFileName);

        // recycle through the members
        if (!i.hasNext()) {
            i = members.iterator();
        }
    }
    return mapWork;
}
3. Launch a task that performs the load on each member. For example, use Coherence's InvocationService to launch the task. In this case, the implementation of LoaderInvocable must iterate through memberFileNames and process each file, loading its contents into the cache (a sketch of such a task follows Example 21-5 below). The cache operations normally performed on the client must execute through the LoaderInvocable.
Example 21-5 Class to Load Each Member of the Cache
public void load() {
    NamedCache cache = getCache();

    Set members = getStorageMembers(cache);
    List<String> fileNames = getFileNames();
    Map<Member, List<String>> mapWork = divideWork(members, fileNames);

    InvocationService service =
        (InvocationService) CacheFactory.getService("InvocationService");

    for (Map.Entry<Member, List<String>> entry : mapWork.entrySet()) {
        Member member = entry.getKey();
        List<String> memberFileNames = entry.getValue();

        LoaderInvocable task = new LoaderInvocable(memberFileNames, cache.getCacheName());
        service.execute(task, Collections.singleton(member), this);
    }
}
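The LoaderInvocable class itself is application specific. A possible sketch is shown below; the key,value file format and the parsing are assumptions used only for illustration, and the task must be serializable so that it can be sent to the target member:

import com.tangosol.net.AbstractInvocable;
import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class LoaderInvocable extends AbstractInvocable implements java.io.Serializable {
    private List<String> m_fileNames;
    private String m_cacheName;

    public LoaderInvocable(List<String> fileNames, String cacheName) {
        m_fileNames = fileNames;
        m_cacheName = cacheName;
    }

    public void run() {
        NamedCache cache = CacheFactory.getCache(m_cacheName);
        Map buffer = new HashMap();
        try {
            for (String fileName : m_fileNames) {
                BufferedReader reader = new BufferedReader(new FileReader(fileName));
                String line;
                while ((line = reader.readLine()) != null) {
                    // assumed format: one "key,value" pair per line
                    String[] parts = line.split(",", 2);
                    buffer.put(parts[0], parts[1]);
                }
                reader.close();

                // the put occurs on the storage-enabled member that runs this task
                cache.putAll(buffer);
                buffer.clear();
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}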
22 Querying Data In a Cache
This chapter provides instructions for performing queries and using indexes to retrieve data in a cache that matches a certain criteria. Queries and indexes can be simple, employing filters packaged with Coherence, or they can be run against multi-value attributes such as collections and arrays. This chapter includes the following sections:
Query Overview
Performing Queries
Using Query Indexes
Performing Batch Queries
Performing Queries on Multi-Value Attributes
Using Chained Extractors
Evaluating Query Cost and Effectiveness
Query Overview
Coherence provides the ability to search for cache entries that meet a given set of criteria. The result set may be sorted if desired. Queries are evaluated with Read Committed isolation. Queries apply only to currently cached data (and do not use the CacheLoader interface to retrieve additional data that may satisfy the query). Thus, the data set should be loaded entirely into cache before queries are performed. In cases where the data set is too large to fit into available memory, it may be possible to restrict the cache contents along a specific dimension (for example, "date") and manually switch between cache queries and database queries based on the structure of the query. For maintainability, this is usually best implemented inside a cache-aware data access object (DAO).

Indexing requires the ability to extract attributes on each Partitioned cache node; for dedicated cache server instances, this implies (usually) that application classes must be installed in the cache server's classpath. For Local and Replicated caches, queries can be evaluated locally against unindexed or indexed data. For Partitioned caches, queries are typically performed in parallel across the cluster and use indexes. Access to unindexed attributes requires object deserialization (though indexing on other attributes can reduce the number of objects that must be evaluated). Lastly, Coherence includes a Cost-Based Optimizer (CBO) and also provides support for trace and explain reports that help ensure the efficiency of a query.
Performing Queries
Query Concepts
The concept of querying is based on the ValueExtractor interface. A value extractor is used to extract an attribute from a given object for querying (and similarly, indexing). Most developers need only the ReflectionExtractor implementation of this interface. The implementation uses reflection to extract an attribute from a value object by referring to a method name which is typically a getter method. For example:
ValueExtractor extractor = new ReflectionExtractor("getName");
Any no-argument method can be used, including Object methods like toString() (useful for prototyping and debugging). Indexes may be either traditional field indexes (indexing fields of objects) or function-based indexes (indexing virtual object attributes). For example, if a class has field accessors getFirstName and getLastName, the class may define a function getFullName which concatenates those names, and this function may be indexed. See "Using Query Indexes" on page 22-4 for more information on indexes. To query a cache that contains objects with getName attributes, a Filter must be used. A filter has a single method which determines whether a given object meets a criterion.
Filter filter = new EqualsFilter(extractor, "Bob Smith");
Note that the filters also have convenience constructors that accept a method name and internally construct a ReflectionExtractor:
Filter filter = new EqualsFilter("getName", "Bob Smith");
The following example shows a routine to select the entries of a cache that satisfy a particular filter:
for (Iterator iter = cache.entrySet(filter).iterator(); iter.hasNext(); ) { Map.Entry entry = (Map.Entry)iter.next(); Integer key = (Integer)entry.getKey(); Person person = (Person)entry.getValue(); System.out.println("key=" + key + " person=" + person); }
The following example uses a filter to select and sort cache entries:
// entrySet(Filter filter, Comparator comparator) Iterator iter = cache.entrySet(filter, null).iterator();
The additional null argument specifies that the result set should be sorted using the "natural ordering" of Comparable objects within the cache. The client may explicitly specify the ordering of the result set by providing an implementation of Comparator. Note that sorting places significant restrictions on the optimizations that Coherence can apply, as sorting requires that the entire result set be available before sorting.
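For example, a custom Comparator can be supplied instead of null. The Person class and getAge accessor below are the sample types used later in this chapter, and the sketch assumes the comparator is applied to the cached values; the comparator is serializable so that it can be used with a partitioned cache:

import java.io.Serializable;
import java.util.Comparator;

// Orders the result set by age (a sketch; not part of the Coherence API).
public class AgeComparator implements Comparator, Serializable {
    public int compare(Object o1, Object o2) {
        return ((Person) o1).getAge() - ((Person) o2).getAge();
    }
}

The comparator is then passed as the second argument: Iterator iter = cache.entrySet(filter, new AgeComparator()).iterator();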
Performing Queries
Example 22-1 demonstrates how to create a query and uses the GreaterEqualsFilter filter. Coherence includes many pre-built filters located in the com.tangosol.util.filter package. See the Java API Reference for Oracle Coherence for a complete list of all the pre-built filters.

Example 22-1 Performing Queries
Filter filter = new GreaterEqualsFilter("getAge", 18); for (Iterator iter = cache.entrySet(filter).iterator(); iter.hasNext(); ) { Map.Entry entry = (Map.Entry) iter.next(); Integer key = (Integer) entry.getKey(); Person person = (Person) entry.getValue(); System.out.println("key=" + key + " person=" + person); }
Although queries can be executed through a near cache, the query does not use the front portion of a near cache. If using a near cache with queries, the best approach is to use the following sequence:
public static void performQuery() {
    NamedCache c = CacheFactory.getCache("test");

    // Search for entries that start with 'c'
    Filter query = new LikeFilter(IdentityExtractor.INSTANCE, "c%", '\\', true);

    // Perform query, return keys of entries that match
    Set keys = c.keySet(query);

    // The amount of objects to process at a time
    final int BUFFER_SIZE = 100;

    // Object buffer
    Set buffer = new HashSet(BUFFER_SIZE);

    for (Iterator i = keys.iterator(); i.hasNext();) {
        buffer.add(i.next());

        if (buffer.size() >= BUFFER_SIZE) {
            // Bulk load BUFFER_SIZE number of objects from cache
            Map entries = c.getAll(buffer);

            // Process each entry
            process(entries);

            // Done processing these keys, clear buffer
            buffer.clear();
        }
    }

    // Handle the last partial chunk (if any)
    if (!buffer.isEmpty()) {
        process(c.getAll(buffer));
    }
}

public static void process(Map map) {
    for (Iterator ie = map.entrySet().iterator(); ie.hasNext();) {
        Map.Entry e = (Map.Entry) ie.next();
        out("key: " + e.getKey() + ", value: " + e.getValue());
    }
}
Creating an Index
The addIndex method of the QueryMap class is used to create indexes. Any attribute able to be queried may be indexed using this method. The method includes three parameters:
addIndex(ValueExtractor extractor, boolean fOrdered, Comparator comparator)
NamedCache cache = CacheFactory.getCache("MyCache"); ValueExtractor extractor = new ReflectionExtractor("getAttribute"); cache.addIndex(extractor, true, null);
The fOrdered argument specifies whether the index structure is sorted. Sorted indexes are useful for range queries, such as "select all entries that fall between two dates" or "select all employees whose family name begins with 'S'". For "equality" queries, an
unordered index may be used, which may have better efficiency in terms of space and time. The comparator argument can provide a custom java.util.Comparator for ordering the index.

The addIndex method is only intended as a hint to the cache implementation and, as such, it may be ignored by the cache if indexes are not supported or if the desired index (or a similar index) exists. It is expected that an application calls this method to suggest an index even if the index may already exist, just so that the application is certain that the index has been suggested. For example, in a distributed environment, each server likely suggests the same set of indexes when it starts, and there is no downside to the application blindly requesting those indexes regardless of whether another server has requested the same indexes.

Note that queries can be combined by Coherence if necessary, and also that Coherence includes a cost-based optimizer (CBO) to prioritize the usage of indexes. To take advantage of an index, queries must use extractors that are equal (as determined by Object.equals()) to the one used to create the index. A list of applied indexes can be retrieved from the StorageManagerMBean by using JMX. See Managing Oracle Coherence for more information on using JMX with Coherence.
In the above example, the value of the entry is checked for null before extraction, but it could be done after. If the value of the entry is null then nothing is inserted into the index. A similar check for null would also be required for the MapIndex update
method. The rest of the MapIndex methods must be implemented appropriately as well.
In the above example, an underlying extractor is actually used to create the index and ultimately extracts the values from the cache entries. The IndexAwareExtractor implementation is used to manage the creation and destruction of a custom MapIndex implementation while preserving the existing QueryMap interfaces. The IndexAwareExtractor is passed into the QueryMap.addIndex and QueryMap.removeIndex calls. Coherence, in turn, calls createIndex and destroyIndex on the IndexAwareExtractor. Also note that it is the responsibility of the IndexAwareExtractor to maintain the Map of extractor-to-index associations that is passed into createIndex and destroyIndex.
to the index if the filter evaluates to true. The implemented classes are ConditionalIndex and ConditionalExtractor, respectively. The ConditionalIndex is created by a ConditionalExtractor. The filter and extractor used by the ConditionalIndex are set on the ConditionalExtractor and passed to the ConditionalIndex constructor during the QueryMap.addIndex call. The ConditionalExtractor is an IndexAwareExtractor implementation that is only used to create a ConditionalIndex. The underlying ValueExtractor is used for value extraction during index creation and is the extractor that is associated with the created ConditionalIndex in the given index map. Using the ConditionalExtractor to extract values is not supported. For example:
ValueExtractor extractor = new ReflectionExtractor("getLastName"); Filter filter = new NotEqualsFilter("getId", null); ValueExtractor condExtractor = new ConditionalExtractor(filter, extractor, true); // add the conditional index which should only contain the last name values for the // entries with non-null Ids cache.addIndex(condExtractor, true, null);
// keySet(Filter filter)
Set setKeys = cache.keySet(filter);
Set setPageKeys = new HashSet();
int PAGE_SIZE = 100;
for (Iterator iter = setKeys.iterator(); iter.hasNext();) {
    setPageKeys.add(iter.next());
    if (setPageKeys.size() == PAGE_SIZE || !iter.hasNext()) {
        // get a block of values
        Map mapResult = cache.getAll(setPageKeys);

        // process the block
        // ...

        setPageKeys.clear();
    }
}
A LimitFilter may be used to limit the amount of data sent to the client, and also to provide paging. Example 22-5 demonstrates using a LimitFilter:

Example 22-5 Using a Limit Filter
int pageSize = 25;
Filter filter = new GreaterEqualsFilter("getAge", 18);

// get entries 1-25
LimitFilter limitFilter = new LimitFilter(filter, pageSize);
Set entries = cache.entrySet(limitFilter);

// get entries 26-50
limitFilter.nextPage();
entries = cache.entrySet(limitFilter);
When using a distributed/partitioned cache, queries can be targeted to partitions and cache servers using a PartitionedFilter. This is the most efficient way of batching query results as each query request is targeted to a single cache server, thus reducing the number of servers that must respond to a request and making the most efficient use of the network.
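A sketch of this approach, assuming a partitioned cache and an existing filter, iterates the partitions one at a time so that each request is served by a single storage-enabled member:

// assumes a NamedCache named cache and a Filter named filter are already defined
DistributedCacheService service =
    (DistributedCacheService) cache.getCacheService();
int cPartitions = service.getPartitionCount();

PartitionSet parts = new PartitionSet(cPartitions);
for (int iPartition = 0; iPartition < cPartitions; iPartition++) {
    parts.add(iPartition);
    Filter filterPart = new PartitionedFilter(filter, parts);

    // each request is answered by the single member that owns the partition
    Set setEntriesPart = cache.entrySet(filterPart);

    // process the entries for this partition
    // ...

    parts.remove(iPartition);
}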
Note: Use of PartitionedFilter is limited to cluster members; it cannot be used by Coherence*Extend clients. Coherence*Extend clients may use the two techniques described above, or these queries can be implemented as an Invocable and executed remotely by a Coherence*Extend client.
if (!partsProcessed.isFull()) {
    partsProcessed.invert();
    Filter filterRemaining = new PartitionedFilter(filter, partsProcessed);

    // process the remaining entries
    ...
}
Set searchTerms = new HashSet();
searchTerms.add("java");
searchTerms.add("clustering");
searchTerms.add("books");

// The cache contains instances of a class "Document" which has a method
// "getWords" which returns a Collection<String> containing the set of
// words that appear in the document.
Filter filter = new ContainsAllFilter("getWords", searchTerms);

Set entrySet = cache.entrySet(filter);

// iterate through the search results
// ...
This extractor could be passed into a query, allowing queries (for example) to select all people with names not exceeding 10 letters. Method invocations may be chained indefinitely, for example getName.trim.length.
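A minimal sketch of such a chained query, using the ChainedExtractor convenience constructor that accepts a dot-delimited chain of method names:

// getName, trim, and length are invoked in sequence on each cached value
ValueExtractor extractor = new ChainedExtractor("getName.trim.length");

// select all people whose (trimmed) name does not exceed 10 letters
Filter filter = new LessEqualsFilter(extractor, Integer.valueOf(10));
Set setKeys = cache.keySet(filter);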
in order to perform better. See also the StorageManagerMBean reference in Managing Oracle Coherence for details on viewing query-based statistics. This section includes the following topics:
Creating Query Records
Interpreting Query Records
Running The Query Record Example
AllFilter filter = new AllFilter(new Filter[] { new OrFilter( new EqualsFilter(new ReflectionExtractor("getAge"), 16), new EqualsFilter(new ReflectionExtractor("getAge"), 19)), new EqualsFilter(new ReflectionExtractor("getLastName"), "Smith"), new EqualsFilter(new ReflectionExtractor("getFirstName"), "Bob"), });
Name
This column shows the name of each filter in the query. Composite filters show information for each of the filters within the composite filter.

Index
This column shows whether or not an index can be used with the given filter. If an index is found, the number shown corresponds to the index number on the Index Lookups table. In the example, an ordered simple map index (0) was found for getAge().

Cost
This column shows an estimated cost of applying the filter. If an index can be used, the cost is given as 1. The value of 1 is used since the operation of applying the index requires just a single access to the index content. In the example, there are 4 storage-enabled cluster members and thus the cost reflects accessing the index on all four members. If no index exists, the cost is calculated as EVAL_COST * number of keys. The EVAL_COST value is a constant value and is 1000. This is intended to show the relative cost of doing a full scan to reduce the key set using the filter. In the example, there are 1500 cache entries which need to be evaluated. Querying indexed entries is always relatively inexpensive as compared to non-indexed entries but does not necessarily guarantee effectiveness.
The record in Example 22-8 shows that the equals filter for getAge() has a low cost because it has an associated index and would be applied before getLastName() and getFirstName(). However, the getAge() filter, while inexpensive, may not be very effective if all entries were either 16 or 19 and only a few entries matched Bob and Smith. In this case, it is more effective to add an index for getLastName() and getFirstName(). Moreover, the cost (mainly memory consumption) associated with creating an index is wasted if the index does a poor job of reducing the key set.
Example 22-8 Sample Query Explain Plan Record
Explain Plan
Name                                     Index   Cost
==================================================================================
com.tangosol.util.filter.AllFilter       | ----  | 0
  com.tangosol.util.filter.OrFilter      | ----  | 0
    EqualsFilter(.getAge(), 16)          | 0     | 4
    EqualsFilter(.getAge(), 19)          | 0     | 4
  EqualsFilter(.getLastName(), Smit      | 1     | 1500000
  EqualsFilter(.getFirstName(), Bob      | 2     | 1500000

Index Lookups
Index   Description                                Extractor         Ordered
==================================================================================
0       SimpleMapIndex: Extractor=.getAge(), Ord   .getAge()         true
1       No index found                             .getLastName()    false
2       No index found                             .getFirstName()   false
Name
This column shows the name of each filter in the query. Composite filters show information for each of the filters within the composite filter.

Index
This column shows whether or not an index can be used with the given filter. If an index is found, the number shown corresponds to the index number on the Index Lookups table. In the example, an ordered simple map index (0) was found for getAge().

Effectiveness
This column shows the amount a key set was actually reduced as a result of each filter. The value is given as prefilter_key_set_size|postfilter_key_set_size and is also presented as a percentage. The prefilter_key_set_size value represents the key set size prior to evaluating the filter or applying an index. The postfilter_key_set_size value represents the size of the key set remaining after evaluating the filter or applying an index. For a composite filter entry, the value is the overall result for its contained filters. Once a key set size can no longer be reduced based on an index, the resulting key set is deserialized and any non-index filters are applied.

Duration
This column shows the number of milliseconds spent evaluating the filter or applying an index. A value of 0 indicates that the time registered was below the reporting threshold. In the example, the 63 milliseconds is the result of having to deserialize the key set, which is incurred on the first filter getLastName() only.
The record in Example 22-9 shows that it took approximately 63 milliseconds to reduce 1500 entries to find 100 entries with the first name Bob, last name Smith, and an age of 16 or 19. The key set of 1500 entries was initially reduced to 300 using the index for getAge(). The resulting 300 entries (because they could not be further reduced using an index) were then deserialized and reduced to 150 entries based on getLastName() and then reduced to 100 using getFirstName(). The example shows that an index on getAge() is well worth the resources because it was able to effectively reduce the key set by 1200 entries. An index on getLastName and getFirstName would increase the performance of the overall query but may not be worth the additional resources required to create the index.
Example 22-9 Sample Query Trace Record

Trace
Name                                     Index   Effectiveness    Duration
==================================================================================
com.tangosol.util.filter.AllFilter       | ----  | 1500|300(80%)  | 0
  com.tangosol.util.filter.OrFilter      | ----  | 1500|300(80%)  | 0
    EqualsFilter(.getAge(), 16)          | 0     | 1500|150(90%)  | 0
    EqualsFilter(.getAge(), 19)          | 0     | 1350|150(88%)  | 0
  EqualsFilter(.getLastName(), Smit      | 1     | 300|300(0%)    | 0
  EqualsFilter(.getFirstName(), Bob      | 2     | 300|300(0%)    | 0
com.tangosol.util.filter.AllFilter       | ----  | 300|100(66%)   | 63
  EqualsFilter(.getLastName(), Smit      | ----  | 300|150(50%)   | 63
  EqualsFilter(.getFirstName(), Bob      | ----  | 150|100(33%)   | 0

Index Lookups
Index   Description                                Extractor         Ordered
==================================================================================
0       SimpleMapIndex: Extractor=.getAge(), Ord   .getAge()         true
1       No index found                             .getLastName()    false
2       No index found                             .getFirstName()   false
public class QueryRecordExample {
    public static void main(String[] args) {
        testExplain();
        testTrace();
    }

    public static void testExplain() {
        NamedCache cache = CacheFactory.getCache("mycache");
        cache.addIndex(new ReflectionExtractor("getAge"), true, null);
        PopulateCache(cache);

        AllFilter filter = new AllFilter(new Filter[] {
            new OrFilter(
                new EqualsFilter(new ReflectionExtractor("getAge"), 16),
                new EqualsFilter(new ReflectionExtractor("getAge"), 19)),
            new EqualsFilter(new ReflectionExtractor("getLastName"), "Smith"),
            new EqualsFilter(new ReflectionExtractor("getFirstName"), "Bob"),
            });

        QueryRecorder agent = new QueryRecorder(RecordType.EXPLAIN);
        Object resultsExplain = cache.aggregate(filter, agent);

        System.out.println("\nExplain Plan=\n" + resultsExplain + "\n");
    }

    public static void testTrace() {
        NamedCache cache = CacheFactory.getCache("hello-example");
        cache.addIndex(new ReflectionExtractor("getAge"), true, null);
        PopulateCache(cache);

        AllFilter filter = new AllFilter(new Filter[] {
            new OrFilter(
                new EqualsFilter(new ReflectionExtractor("getAge"), 16),
                new EqualsFilter(new ReflectionExtractor("getAge"), 19)),
            new EqualsFilter(new ReflectionExtractor("getLastName"), "Smith"),
            new EqualsFilter(new ReflectionExtractor("getFirstName"), "Bob"),
            });

        QueryRecorder agent = new QueryRecorder(RecordType.TRACE);
        Object resultsExplain = cache.aggregate(filter, agent);

        System.out.println("\nTrace =\n" + resultsExplain + "\n");
    }

    private static void PopulateCache(NamedCache cache) {
        for (int i = 0; i < 1500; ++i) {
            Person person = new Person(i % 3 == 0 ? "Joe" : "Bob",
                i % 2 == 0 ? "Smith" : "Jones", 15 + i % 10);
            cache.put("key" + i, person);
        }
    }

    public static class Person implements Serializable {
        public Person(String sFirstName, String sLastName, int nAge) {
            m_sFirstName = sFirstName;
            m_sLastName = sLastName;
            m_nAge = nAge;
        }

        public String getFirstName() {
            return m_sFirstName;
        }

        public String getLastName() {
            return m_sLastName;
        }

        public int getAge() {
            return m_nAge;
        }

        public String toString() {
            return "Person( " + m_sFirstName + " " + m_sLastName + " : " + m_nAge + ")";
        }

        private String m_sFirstName;
        private String m_sLastName;
        private int m_nAge;
    }
}
23 Using Continuous Query Caching
This chapter provides instructions for using continuous query caching to ensure that a query always retrieves the latest results from a cache in real-time. This chapter includes the following sections:
Overview of Using Continuous Query Caching
Understanding the Continuous Query Cache Implementation
Constructing a Continuous Query Cache
Caching only keys, or caching both keys and values
Listening to the ContinuousQueryCache
Making the ContinuousQueryCache Read-Only
It is an ideal building block for Complex Event Processing (CEP) systems and event correlation engines.
It is ideal for situations in which an application repeats a particular query, and would benefit from always having instant access to the up-to-date result of that query.
A continuous query cache is analogous to a materialized view, and is useful for accessing and manipulating the results of a query using the standard NamedCache API, and receiving an ongoing stream of events related to that query.
A continuous query cache can be used in a manner similar to a Near Cache, because it maintains an up-to-date set of data locally where it is being used, for example on a particular server node or on a client desktop; note that a Near Cache is invalidation-based, but the continuous query cache actually maintains its data in an up-to-date manner.
An example use case is a trading system desktop, in which a trader's open orders and all related information must always be maintained in an up-to-date manner. By combining the Coherence*Extend functionality with Continuous Query Caching, an application can support literally tens of thousands of concurrent users.
Note:
continuous query caches are useful in almost every type of application, including both client-based and server-based applications, because they provide the ability to very easily and efficiently maintain an up-to-date local copy of a specified sub-set of a much larger and potentially distributed cached data set.
Cache access and manipulation using the Map interface: NamedCache extends the standard Map interface from the Java Collections Framework, which is the same interface implemented by the JDK's HashMap and Hashtable classes.
Events for all object modifications that occur within the cache: NamedCache extends the ObservableMap interface.
Identity-based clusterwide locking of objects in the cache: NamedCache extends the ConcurrentMap interface.
Querying the objects in the cache: NamedCache extends the QueryMap interface.
Distributed parallel processing and aggregation of objects in the cache: NamedCache extends the InvocableMap interface.
Since the ContinuousQueryCache implements the NamedCache interface, which is the same API provided by all Coherence caches, it is extremely simple to use, and it can be easily substituted for another cache when its functionality is called for.
The underlying cache that it is based on;
A query of that underlying cache that produces the sub-set that the continuous query cache caches.
The underlying cache is any Coherence cache, including another continuous query cache. A cache is usually obtained from a CacheFactory, which allows the developer to simply specify the name of the cache and have it automatically configured based on the application's cache configuration information; for example:
NamedCache cache = CacheFactory.getCache("orders");
See Appendix B, "Cache Configuration Elements" for more information on specifying cache configuration information. The query is the same type of query that would be used to filter data. For example:
Example 23-1 A Query for a Continuous Query Cache
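For example, assuming trade objects that expose getTrader and getStatus accessors and an OPEN status constant (the same filter is used again in Chapter 24), the query might be:

Filter filter = new AndFilter(new EqualsFilter("getTrader", traderid),
                              new EqualsFilter("getStatus", Status.OPEN));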
See Chapter 22, "Querying Data In a Cache" for more information on queries. Normally, to query a cache, a method from QueryMap is used. For example, to obtain a snap-shot of all open trades for a trader object:
Example 23-2 Getting Data for the Continuous Query Cache
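Using the "orders" cache obtained above and the filter from Example 23-1, a snapshot might be obtained as follows:

NamedCache cache = CacheFactory.getCache("orders");
Set setOpenTrades = cache.entrySet(filter);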
Similarly, the continuous query cache is constructed from those same two pieces:
Example 23-3 Constructing the Continuous Query Cache
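Using the same cache and filter, the construction is a single line (the same constructor call appears in Example 23-7 and Example 23-8 below):

ContinuousQueryCache cacheOpenTrades = new ContinuousQueryCache(cache, filter);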
If necessary, the CacheValues property can also be modified after the cache has been instantiated; for example:
Example 23-5 Setting the CacheValues Property
cacheOpenTrades.setCacheValues(true);
Assuming some processing has to occur against every item that is in the cache and every item added to the cache, there are two approaches. First, the processing could occur and then a listener could be added to handle any later additions:
Example 23-7 Processing Continuous Query Cache Entries and Adding a Listener
ContinuousQueryCache cacheOpenTrades = new ContinuousQueryCache(cache, filter); for (Iterator iter = cacheOpenTrades.entrySet().iterator(); iter.hasNext(); ) { Map.Entry entry = (Map.Entry) iter.next(); // .. process the cache entry } cacheOpenTrades.addMapListener(listener);
However, that code is incorrect because it allows events that occur in the split second after the iteration and before the listener is added to be missed! The alternative is to add a listener first, so no events are missed, and then do the processing:
Example 23-8 Adding a Listener Before Processing Continuous Query Cache Entries
ContinuousQueryCache cacheOpenTrades = new ContinuousQueryCache(cache, filter); cacheOpenTrades.addMapListener(listener); for (Iterator iter = cacheOpenTrades.entrySet().iterator(); iter.hasNext(); ) { Map.Entry entry = (Map.Entry) iter.next(); // .. process the cache entry }
However, the same entry can appear in both an event and in the Iterator, and because the events can be asynchronous, the sequence of operations cannot be guaranteed. The solution is to provide the listener during construction; it then receives one event for each item that is in the continuous query cache, whether the item was there to begin with (because it was in the query) or was added during or after the construction of the cache:
Example 23-9 Providing a Listener When Constructing the Continuous Query Cache
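Assuming a MapListener named listener, the listener is passed to the three-argument constructor:

ContinuousQueryCache cacheOpenTrades = new ContinuousQueryCache(cache, filter, listener);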
A read-only ContinuousQueryCache does not allow objects to be added to, changed in, removed from or locked in the cache. When a ContinuousQueryCache has been set to read-only, it cannot be changed back to read/write.
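For example, assuming the cacheOpenTrades instance constructed earlier, the cache can be switched to read-only mode with the setReadOnly method:

cacheOpenTrades.setReadOnly(true);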
24 Processing Data In a Cache
This chapter provides instructions for using entry processors and aggregators to perform data grid processing across a cluster. These data grid features perform in a similar manner to other map-reduce patterns and allow the processing of large amounts of data at very low latencies. This chapter includes the following sections:
Overview of Processing Data In a Cache
Using Agents for Targeted, Parallel and Query-Based Processing
Performing Data Grid Aggregation
Performing Node-Based Processing
Using a Work Manager
In the case of partitioned data, the agent executes on the grid node that owns the data to execute against. The queuing, concurrency management, agent execution, data access by the agent, and data modification by the agent all occur on that grid node. (Only the synchronous backup of the resultant data modification, if any, requires additional network traffic.) For many processing purposes, it is much more efficient to move the serialized form of the agent (at most a few hundred bytes) than to handle distributed concurrency control, coherency and data updates. For request and response processing, the agent returns a result:
Object oResult = map.invoke(key, agent);
Coherence, as a data grid, determines the location to execute the agent based on the configuration for the data topology. It moves the agent to the determined location, executes the agent (automatically handling concurrency control for the item while executing the agent), backs up the modifications (if any), and returns a result.
For request and response processing, the agent returns one result for each key processed:
Map mapResults = map.invokeAll(collectionKeys, agent);
Coherence determines the optimal location(s) to execute the agent based on the configuration for the data topology. It then moves the agent to the determined locations, executes the agent (automatically handling concurrency control for the item(s) while executing the agent), backs up the modifications (if any), and returns the coalesced results. See "Performing Data Grid Aggregation" on page 24-6 for instructions on performing aggregation against a result set.
NamedCache mapTrades = CacheFactory.getCache("trades");
Filter filter = new AndFilter(new EqualsFilter("getTrader", traderid),
                              new EqualsFilter("getStatus", Status.OPEN));
Set setOpenTradeIds = mapTrades.keySet(filter);
By combining this feature with the use of parallel executions in the data grid, Coherence provides the ability to execute an agent against a query. As in the previous section, the execution occurs in parallel, and instead of returning the identities or entries that match the query, Coherence executes the agents against the entries:
map.invokeAll(filter, agent);
For request and response processing, the agent returns one result for each key processed:
Map mapResults = map.invokeAll(filter, agent);
Coherence combines parallel query and parallel execution to achieve query-based agent invocation against a data grid.
As with the other types of agent invocation, request and response processing is supported:
Map mapResults = map.invokeAll((Filter) null, agent);
An application can process all the data spread across a particular map in the data grid with a single line of code.
Coherence provides the following EntryProcessor implementations out of the box:

AbstractProcessor - an abstract base class for building an EntryProcessor
ExtractorProcessor - extracts and returns a value (such as a property value) from an object stored in an InvocableMap
CompositeProcessor - bundles a collection of EntryProcessor objects that are invoked sequentially against the same entry
ConditionalProcessor - conditionally invokes an EntryProcessor if a Filter against the entry-to-process evaluates to true
PropertyProcessor - an abstract base class for EntryProcessor implementations that depend on a PropertyManipulator
NumberIncrementor - pre- or post-increments any property of a primitive integral type, and Byte, Short, Integer, Long, Float, Double, BigInteger, BigDecimal
NumberMultiplier - multiplies any property of a primitive integral type, and Byte, Short, Integer, Long, Float, Double, BigInteger, BigDecimal, and returns either the previous or new value
The EntryProcessor interface (contained within the InvocableMap interface) contains only two methods: process and processAll. The AbstractProcessor provides the default implementation of the processAll method. When processing multiple keys, a single EntryProcessor object is re-used for all the keys and its state should not be mutated. If the processAll call throws an exception, changes are committed to the underlying Map only for the entries that were removed from the setEntries; changes made to the remaining entries are not committed.
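For illustration, a minimal custom processor might extend AbstractProcessor and implement only the process method. The Trade class and its getPrice/setPrice accessors are assumptions borrowed from the backing-map example later in this chapter:

public class PriceAdjustProcessor extends AbstractProcessor {
    private double factor;

    public PriceAdjustProcessor(double factor) {
        this.factor = factor;
    }

    public Object process(InvocableMap.Entry entry) {
        // read the current value, mutate it, and write it back to the entry
        Trade trade = (Trade) entry.getValue();
        trade.setPrice(trade.getPrice() + factor);
        entry.setValue(trade);
        return null;
    }
}

Such a processor could then be invoked against a single key with map.invoke(key, new PriceAdjustProcessor(0.5)).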
Note:
The InvocableMap.Entry that is passed to an EntryProcessor is an extension of the Map.Entry interface that allows an EntryProcessor implementation to obtain the necessary information about the entry and to make the necessary modifications in the most efficient manner possible.
entries must be located in the same partition. For details about ensuring that entries are in the same partition, see "Specifying Data Affinity" on page 30-1. The process and processAll operations are performed in a transaction-like manner that uses implicit locks when accessing, inserting, updating, modifying, or removing cache entries. If an exception is thrown during the processing of the entries, the entries are rolled back leaving all underlying values unchanged. The processAll operation is atomic with respect to all entries within a single partition (or member if no service threads are configured) rather than the individual entry or entire request.
Note:
The implicit lock may create a deadlock if entries are locked in conflicting orders on different threads. The application is responsible for ensuring that cache entries are accessed (locked) in a deadlock-free manner. In the case where a deadlock is detected, an exception is thrown but the underlying service is not stopped.
The com.tangosol.net.BackingMapContext API is used to process entries in multiple caches and provides a way to directly access entries in a cache's backing map. The backing map is the actual Map implementation where entries are stored (as opposed to the logical representation of a cache typically used by an application). Entries in a backing map are stored in binary format and therefore require an application to handle the serialized form of an entry. For details on backing maps, see Chapter 14, "Implementing Storage and Backing Maps." The com.tangosol.util.BinaryEntry API provides easy access to a backing map context and is typically used by an application. The following sample code demonstrates how to update entries in two different caches within the process method of an entry processor using the BinaryEntry API.
public Object process(Entry entry) {
    BinaryEntry binEntry = (BinaryEntry) entry;
    Binary binKey = binEntry.getBinaryKey();
    Trade trade = (Trade) binEntry.getValue();

    // Update a Trade object in cache1
    trade.setPrice(trade.getPrice() + factor);
    binEntry.setValue(trade);

    // Update a Trade object in cache2
    BackingMapManagerContext ctx = binEntry.getContext();
    BinaryEntry binEntry2 = (BinaryEntry)
        ctx.getBackingMapContext("cache2").getBackingMapEntry(binKey);
    Trade trade2 = (Trade) binEntry2.getValue();
    trade2.setPrice(trade2.getPrice() + factor);
    binEntry2.setValue(trade2);

    return null;
}
Note:
The getBackingMapEntry method may only be called within the context of an entry processor invocation. Any changes made to the entry are persisted with the same lifecycle as those made by the enclosing invocation. The returned entry is only valid for the duration of the enclosing invocation, and multiple calls to this method within the same invocation context return the same entry object.
Ignoring the results that an entry processor returns provides the following benefits:

Makes the client's memory footprint independent of the size of the cache.
Avoids transferring all affected keys to the client, which could result in an OutOfMemoryError exception on the client if the cache is too large.
Avoids deserialization costs in the client for all keys.
Avoids transferring the map and all keys through a proxy node (for Extend clients).
To ignore entry processor results, override the processor's processAll method to return an empty Map or null. The following example demonstrates a simple entry processor agent that always returns null after processing an entry. The example is not very realistic but does show how to override the processAll method.
public static class Agent implements InvocableMap.EntryProcessor {
    private static final long serialVersionUID = 1L;

    @Override
    public Object process(Entry entry) {
        return null;
    }

    @Override
    public Map processAll(Set set) {
        for (Entry entry : (Set<Entry>) set) {
            process(entry);
        }
        return null;
    }
}
/**
 * Perform an aggregating operation against the entries specified by the
 * passed keys.
 *
 * @param collKeys the Collection of keys that specify the entries within
 *                 this Map to aggregate across
 * @param agent    the EntryAggregator that is used to aggregate across
 *                 the specified entries of this Map
 *
 * @return the result of the aggregation
 */
public Object aggregate(Collection collKeys, EntryAggregator agent);

/**
 * Perform an aggregating operation against the set of entries that are
 * selected by the given Filter.
 * <p/>
 * <b>Note:</b> calling this method on partitioned caches requires a
 * Coherence Enterprise Edition (or higher) license.
 *
 * @param filter the Filter that is used to select entries within this
 *               Map to aggregate across
 * @param agent  the EntryAggregator that is used to aggregate across
 *               the selected entries of this Map
 *
 * @return the result of the aggregation
 */
public Object aggregate(Filter filter, EntryAggregator agent);
/**
 * An EntryAggregator represents processing that can be directed to occur
 * against some subset of the entries in an InvocableMap, resulting in an
 * aggregated result. Common examples of aggregation include functions
 * such as min(), max() and avg(). However, the concept of aggregation
 * applies to any process that must evaluate a group of entries to
 * come up with a single answer.
 */
public interface EntryAggregator extends Serializable {
    /**
     * Process a set of InvocableMap Entry objects to produce an
     * aggregated result.
     *
     * @param setEntries a Set of read-only InvocableMap Entry objects to
     *                   aggregate
     *
     * @return the aggregated result from processing the entries
     */
    public Object aggregate(Set setEntries);
}
For efficient execution in a Data Grid, an aggregation process must be designed to operate in a parallel manner.
Example 24-4 ParallelAwareAggregator API for running Aggregation in Parallel
/**
 * A ParallelAwareAggregator is an advanced extension to EntryAggregator
 * that is explicitly capable of being run in parallel, for example in a
 * distributed environment.
 */
public interface ParallelAwareAggregator extends EntryAggregator {
    /**
     * Get an aggregator that can take the place of this aggregator in
     * situations in which the InvocableMap can aggregate in parallel.
     *
     * @return the aggregator that is run in parallel
     */
    public EntryAggregator getParallelAggregator();

    /**
     * Aggregate the results of the parallel aggregations.
     *
     * @return the aggregation of the parallel aggregation results
     */
    public Object aggregateResults(Collection collResults);
}
See the com.tangosol.util.aggregator package for a list of Coherence aggregators. To implement your own aggregator, see the AbstractAggregator abstract base class.
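As a usage sketch, a built-in aggregator such as com.tangosol.util.aggregator.DoubleSum could total a numeric property across the entries that match a filter; the getAmount accessor and the trades cache are assumptions:

NamedCache cache = CacheFactory.getCache("trades");
Double dTotal = (Double) cache.aggregate(
        new EqualsFilter("getStatus", Status.OPEN), new DoubleSum("getAmount"));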
Agents are simply runnable classes that are part of the application. An example of a simple agent is one designed to request a GC from the JVM:
Example 24-5 Simple Agent to Request Garbage Collection
/**
 * Agent that issues a garbage collection.
 */
public class GCAgent extends AbstractInvocable {
    public void run() {
        System.gc();
    }
}
To execute that agent across the entire cluster, it takes one line of code:
service.execute(new GCAgent(), null, null);
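The service variable above refers to an InvocationService. One way it might be obtained is shown in the following sketch, assuming an invocation service named "InvocationService" is defined in the cluster configuration:

InvocationService service =
        (InvocationService) CacheFactory.getService("InvocationService");
service.execute(new GCAgent(), null, null);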
/**
 * Agent that determines how much free memory a grid node has.
 */
public class FreeMemAgent extends AbstractInvocable {
    public void run() {
        Runtime runtime = Runtime.getRuntime();
        int cbFree  = (int) runtime.freeMemory();
        int cbTotal = (int) runtime.totalMemory();
        setResult(new int[] {cbFree, cbTotal});
    }
}
To execute that agent across the entire grid and retrieve all the results from it, it still takes only one line of code:
Map map = service.query(new FreeMemAgent(), null);
While it is easy to do a grid-wide request/response, it takes a bit more code to print the results:
Example 24-7
Iterator iter = map.entrySet().iterator();
while (iter.hasNext()) {
    Map.Entry entry = (Map.Entry) iter.next();
    Member member = (Member) entry.getKey();
    int[] anInfo = (int[]) entry.getValue();
    if (anInfo != null) { // null if the member died
        System.out.println("Member " + member + " has " + anInfo[0]
            + " bytes free out of " + anInfo[1] + " bytes total");
    }
}
The agent operations can be stateful, which means that their invocation state is serialized and transmitted to the grid nodes on which the agent is to be run.
Example 24-8 Stateful Agent Operations
/**
 * Agent that carries some state with it.
 */
public class StatefulAgent extends AbstractInvocable {
    public StatefulAgent(String sKey) {
        m_sKey = sKey;
    }

    public void run() {
        // the agent has the key that it was constructed with
        String sKey = m_sKey;
        // ...
    }

    private String m_sKey;
}
Example 24-9
Work[] aWork = ...
Collection collBigItems = new ArrayList();
Collection collAllItems = new ArrayList();
for (int i = 0, c = aWork.length; i < c; ++i) {
    WorkItem item = manager.schedule(aWork[i]);
    if (i < 3) {
        // the first three work items are the important ones
        collBigItems.add(item);
    }
    collAllItems.add(item);
}

Collection collDone = manager.waitForAll(collAllItems, 5000L);
if (!collDone.containsAll(collBigItems)) {
    // wait the remainder of 30 seconds for the important work to finish
    manager.waitForAll(collBigItems, 25000L);
}
25 Using Map Events
This chapter provides instructions for using map event listeners to receive cache events and events from any class in Coherence that implements the ObservableMap interface. This chapter includes the following sections:
Overview of Map Events
Signing Up for All Events
Using an Inner Class as a MapListener
Configuring a MapListener For a Cache
Signing Up For Events On Specific Identities
Filtering Events
Using Lite Events
Listening to Queries
Using Synthetic Events
Using Backing Map Events
Using Synchronous Event Listeners
Coherence also includes the live event programming model. Live events provide support for common event types and can be used instead of map events. For details about using live events, see "Using Live Events" on page 27-1.
An application object that implements the MapListener interface can sign up for events from any Coherence cache or class that implements the ObservableMap interface, simply by passing an instance of the application's MapListener implementation to a addMapListener() method. The MapEvent object that is passed to the MapListener carries all of the necessary information about the event that has occurred, including the source (ObservableMap) that raised the event, the identity (key) that the event is related to, what the action was against that identity (insert, update or delete), what the old value was and what the new value is. For details about the MapListener interface and the MapEvent class, see the Java API Reference for Oracle Coherence.
A catastrophic cluster failure that caused data loss (for example, the simultaneous crash of two machines holding data). In this case, the PARTITION_LOST event is emitted to all registered PartitionListener instances on the server side.
Client disconnect. In this case, the MEMBER_LEFT event is emitted to all registered MemberListener instances on the client side.
Regardless of the cache topology and the number of servers, and even if the modifications are being made by other servers, the events are delivered to the application's listeners.
In addition to the Coherence caches (those objects obtained through a Coherence cache factory), several other supporting classes in Coherence also implement the ObservableMap interface:
ObservableHashMap
LocalCache
OverflowMap
NearCache
ReadWriteBackingMap
AbstractSerializationCache, SerializationCache, and SerializationPagedCache
WrapperObservableMap, WrapperConcurrentMap, and WrapperNamedCache
public void addMapListener(MapListener listener);
public void addMapListener(MapListener listener, Object oKey, boolean fLite);
public void addMapListener(MapListener listener, Filter filter, boolean fLite);
Example 25-2 illustrates a sample MapListener implementation that prints each event as it is received.
Example 25-2 Sample MapListener Implementation
/**
 * A MapListener implementation that prints each event as it receives
 * them.
 */
public static class EventPrinter extends Base implements MapListener {
    public void entryInserted(MapEvent evt) {
        out(evt);
    }

    public void entryUpdated(MapEvent evt) {
        out(evt);
    }

    public void entryDeleted(MapEvent evt) {
        out(evt);
    }
}
Using this implementation, it is extremely simple to print all events from any given cache (since all caches implement the ObservableMap interface):
cache.addMapListener(new EventPrinter());
Of course, to be able to later remove the listener, it is necessary to hold on to a reference to the listener:
Example 25-3 Holding a Reference to a Listener
MapListener listener = new EventPrinter();
cache.addMapListener(listener);
m_listener = listener; // store the listener in a field
Each addMapListener method on the ObservableMap interface has a corresponding removeMapListener method. To remove a listener, use the removeMapListener method that corresponds to the addMapListener method that was used to add the listener.
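For example, the listener stored earlier might be removed with the following sketch (the key- and filter-based variants take the same key or filter that was used during registration):

cache.removeMapListener(m_listener);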
Another helpful base class for creating a MapListener is the MultiplexingMapListener, which routes all events to a single method for handling. This class would allow you to simplify the EventPrinter example to the code illustrated in Example 25-6. Since only one method must be implemented to capture all events, the MultiplexingMapListener can also be very useful when creating an inner class to use as a MapListener.
Example 25-6 Routing All Events to a Single Method for Handling
public static class EventPrinter extends MultiplexingMapListener {
    public void onMapEvent(MapEvent evt) {
        out(evt);
    }
}
Thus, the code in Example 25-7 would only trigger an event when the Integer key 5 is inserted or updated:
Example 25-7 Triggering an Event when a Specific Integer Key is Inserted or Updated
for (int i = 0; i < 10; ++i) {
    Integer key = new Integer(i);
    String value = "test value for key " + i;
    cache.put(key, value);
}
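The registration that restricts the listener to the single identity is not shown above; it might look like the following sketch, using the key-based addMapListener signature listed earlier and the EventPrinter listener:

cache.addMapListener(new EventPrinter(), new Integer(5), false);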
Filtering Events
Similar to listening to a particular key, it is possible to listen for particular events. In Example 25-8, a listener is added to the cache with a filter that allows the listener to receive only delete events.
Example 25-8 Adding a Listener with a Filter for Deleted Events
// Filters used with partitioned caches must be
// Serializable, Externalizable or ExternalizableLite
public class DeletedFilter implements Filter, Serializable {
    public boolean evaluate(Object o) {
        MapEvent evt = (MapEvent) o;
        return evt.getId() == MapEvent.ENTRY_DELETED;
    }
}

cache.addMapListener(new EventPrinter(), new DeletedFilter(), false);
Note:
When building a filter for querying, the object that is passed to the evaluate method of the Filter is a value from the cache, or - if the filter implements the EntryFilter interface - the entire Map.Entry from the cache. When building a filter for filtering events for a MapListener, the object that is passed to the evaluate method of the filter is of type MapEvent. See "Listening to Queries" on page 25-7 for more information on how to use a query filter to listen to cache events. If you then make the following sequence of calls:
cache.put("hello", "world"); cache.put("hello", "again"); cache.remove("hello");
MapListener listener = new MultiplexingMapListener() {
    public void onMapEvent(MapEvent evt) {
        out("event has occurred: " + evt);
        out("(the wire-size of the event would have been "
            + ExternalizableHelper.toBinary(evt).length()
            + " bytes.)");
    }
};
cache.addMapListener(listener);

// insert a 1KB value
cache.put("test", new byte[1024]);

// update with a 2KB value
cache.put("test", new byte[2048]);

// remove the 2KB value
cache.remove("test");
The output from running the test, illustrated in Example 25-10, shows that the first event carries the 1KB inserted value, the second event carries both the replaced 1KB value and the new 2KB value, and the third event carries the removed 2KB value.
Example 25-10 Sample Output

event has occurred: CacheEvent{LocalCache added: key=test, value=[B@a470b8}
(the wire-size of the event would have been 1283 bytes.)
event has occurred: CacheEvent{LocalCache updated: key=test, old value=[B@a470b8, new value=[B@1c6f579}
(the wire-size of the event would have been 3340 bytes.)
event has occurred: CacheEvent{LocalCache deleted: key=test, value=[B@1c6f579}
(the wire-size of the event would have been 2307 bytes.)
When an application does not require the old and the new value to be included in the event, it can indicate that by requesting only "lite" events. When adding a listener, you can request lite events by using an addMapListener method that takes an additional boolean fLite parameter. In Example 25-9, the only change would be:
cache.addMapListener(listener, (Filter) null, true);
Note:
Obviously, a lite event's old value and new value may be null. However, even if you request lite events, the old and the new value may be included if there is no additional cost to generate and deliver the event. In other words, requesting that a MapListener receive lite events is simply a hint to the system that the MapListener does not have to know the old and new values for the event.
Listening to Queries
All Coherence caches support querying by any criteria. When an application queries for data from a cache, the result is a point-in-time snapshot, either as a set of identities (keySet) or a set of identity/value pairs (entrySet). The mechanism for determining the contents of the resulting set is referred to as filtering, and it allows an application developer to construct queries of arbitrary complexity using a rich set of out-of-the-box filters (for example, equals, less-than, like, between, and so on), or to provide their own custom filters (for example, XPath). The same filters that are used to query a cache can listen to events from a cache. For example, in a trading system it is possible to query for all open Order objects for a particular trader:
Example 25-11 Listening for Events from a Cache

NamedCache mapTrades = ...
Filter filter = new AndFilter(new EqualsFilter("getTrader", traderid),
                              new EqualsFilter("getStatus", Status.OPEN));
Set setOpenTrades = mapTrades.entrySet(filter);
To receive notifications of new trades being opened for that trader, closed by that trader or reassigned to or from another trader, the application can use the same filter:
Example 25-12 Listening for Events on an Object

// receive events for all trade IDs that this trader is interested in
mapTrades.addMapListener(listener, new MapEventFilter(filter), true);
The MapEventFilter converts a query filter into an event filter. The MapEventFilter has several very powerful options, allowing an application listener to receive only the events that it is specifically interested in. More importantly for scalability and performance, only the desired events have to be communicated over the network, and they are communicated only to the servers and clients that have expressed interest in those specific events. Example 25-13 illustrates these scenarios.
Example 25-13 Using MapEventFilter to Filter on Various Events

// receive all events for all trades that this trader is interested in
nMask = MapEventFilter.E_ALL;
mapTrades.addMapListener(listener, new MapEventFilter(nMask, filter), true);

// receive events for all this trader's trades that are closed or
// re-assigned to a different trader
nMask = MapEventFilter.E_UPDATED_LEFT | MapEventFilter.E_DELETED;
mapTrades.addMapListener(listener, new MapEventFilter(nMask, filter), true);

// receive events for all trades as they are assigned to this trader
nMask = MapEventFilter.E_INSERTED | MapEventFilter.E_UPDATED_ENTERED;
mapTrades.addMapListener(listener, new MapEventFilter(nMask, filter), true);

// receive events only for new trades assigned to this trader
nMask = MapEventFilter.E_INSERTED;
mapTrades.addMapListener(listener, new MapEventFilter(nMask, filter), true);
For more information on the various options supported, see the API documentation for MapEventFilter.
When entries automatically expire from a cache;
When entries are evicted from a cache because the maximum size of the cache has been reached;
When entries are transparently added to a cache as the result of a Read-Through operation;
When entries in a cache are transparently updated as the result of a Read-Ahead or Refresh-Ahead operation.
Each of these represents a modification, but the modifications represent natural (and typically automatic) operations from within a cache. These events are referred to as synthetic events. When necessary, an application can differentiate between client-induced and synthetic events simply by asking the event if it is synthetic. This information is carried on a sub-class of the MapEvent, called CacheEvent. Using the previous EventPrinter example, it is possible to print only the synthetic events:
Example 25-14 Determining Synthetic Events

public static class EventPrinter extends MultiplexingMapListener {
    public void onMapEvent(MapEvent evt) {
        // print only events that the cache raised on its own behalf
        if (evt instanceof CacheEvent && ((CacheEvent) evt).isSynthetic()) {
            out(evt);
        }
    }
}
For more information on this feature, see the API documentation for CacheEvent.
For more information on this feature, see the API documentation for BackingMapManager.
1. Implement the AbstractMultiplexingBackingMapListener class.
2. Register the implementation in the <listener> section of the backing-map-scheme in the cache configuration file.
3. Start the cache server application and the client application with the cacheconfig Java property:
-Dtangosol.coherence.cacheconfig="cache-config.xml"
The AbstractMultiplexingBackingMapListener class provides an onBackingMapEvent method which you can override to specify how you would like the event returned. The following listing of the VerboseBackingMapListener class is a sample implementation of AbstractMultiplexingBackingMapListener. The onBackingMapEvent method has been overridden to send the results to standard output.
Example 25-15 An AbstractMultiplexingBackingMapListener Implementation

import com.tangosol.net.BackingMapManagerContext;
import com.tangosol.util.MapEvent;
public class VerboseBackingMapListener extends AbstractMultiplexingBackingMapListener {
    public VerboseBackingMapListener(BackingMapManagerContext context) {
        super(context);
    }

    @Override
    protected void onBackingMapEvent(MapEvent mapEvent, Cause cause) {
        System.out.printf("Thread: %s Cause: %s Event: %s\n",
            Thread.currentThread().getName(), cause, mapEvent);
    }
}
Example 25-16 is an example distributed scheme definition. In the <listener> section of the file, the VerboseBackingMapListener is registered and is passed the BackingMapManagerContext ({manager-context}) as an initialization parameter of type com.tangosol.net.BackingMapManagerContext.
Example 25-16 Distributed Scheme Specifying a Verbose Backing Map Listener

<distributed-scheme>
   <scheme-name>my-dist-scheme</scheme-name>
   <service-name>DistributedCache</service-name>
   <backing-map-scheme>
      <read-write-backing-map-scheme>
         <internal-cache-scheme>
            <local-scheme>
               <high-units>0</high-units>
               <expiry-delay>0</expiry-delay>
            </local-scheme>
         </internal-cache-scheme>
         <cachestore-scheme>
            <class-scheme>
               <class-name>CustomCacheStore</class-name>
               <init-params>
                  <init-param>
                     <param-type>java.lang.String</param-type>
                     <param-value>{cache-name}</param-value>
                  </init-param>
               </init-params>
            </class-scheme>
         </cachestore-scheme>
         <listener>
            <class-scheme>
               <class-name>VerboseBackingMapListener</class-name>
               <init-params>
                  <init-param>
                     <param-type>com.tangosol.net.BackingMapManagerContext</param-type>
                     <param-value>{manager-context}</param-value>
                  </init-param>
               </init-params>
            </class-scheme>
         </listener>
      </read-write-backing-map-scheme>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>
26 Using Map Triggers
override the requested change with a different value
undo the pending change by resetting the original value
remove the entry from the underlying map
reject the pending change by throwing a runtime exception
MapTrigger functionality is typically added as part of an application start-up process. It can be added programmatically as described in the MapTrigger API, or it can be configured using the class-factory mechanism in the coherence-cache-config.xml configuration file. In this case, a MapTrigger is registered during the very first CacheFactory.getCache(...) call for the corresponding cache. Example 26-1 assumes that the createTriggerListener method would return a new MapTriggerListener(new MyCustomTrigger()):
Example 26-1 Example MapTriggerListener Configuration
<distributed-scheme>
   ...
   <listener>
      <class-scheme>
         <class-factory-name>package.MyFactory</class-factory-name>
         <method-name>createTriggerListener</method-name>
         <init-params>
            <init-param>
               <param-type>string</param-type>
               <param-value>{cache-name}</param-value>
            </init-param>
         </init-params>
      </class-scheme>
   </listener>
</distributed-scheme>
In addition to the MapTrigger.Entry and MapTrigger interfaces, Coherence provides the FilterTrigger and MapTriggerListener classes. The FilterTrigger is a generic MapTrigger implementation that performs a predefined action if a pending change is rejected by the associated Filter. The FilterTrigger can either reject the pending operation, ignore the change and restore the entry's original value, or remove the entry itself from the underlying map. The MapTriggerListener is a special purpose MapListener implementation that is used to register a MapTrigger with a corresponding NamedCache. In Example 26-2, MapTriggerListener is used to register the PersonMapTrigger with the People named cache.
Example 26-2 A MapTriggerListener Registering a MapTrigger with a Named Cache
These APIs reside in the com.tangosol.util package. For more information on these APIs, see the Java API Reference for Oracle Coherence.
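A minimal sketch of the registration that Example 26-2 describes, assuming the PersonMapTrigger class shown next:

NamedCache cache = CacheFactory.getCache("People");
cache.addMapListener(new MapTriggerListener(new PersonMapTrigger()));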
public class PersonMapTrigger implements MapTrigger {
    public PersonMapTrigger() {
    }

    public void process(MapTrigger.Entry entry) {
        Person person = (Person) entry.getValue();
        String sName = person.getLastName();
        String sNameUC = sName.toUpperCase();

        if (!sNameUC.equals(sName)) {
            person.setLastName(sNameUC);
            System.out.println("Changed last name of [" + sName + "] to ["
                + person.getLastName() + "]");
            entry.setValue(person);
        }
    }

    // ---- hashCode() and equals() must be implemented

    public boolean equals(Object o) {
        return o != null && o.getClass() == this.getClass();
    }

    public int hashCode() {
        return getClass().getName().hashCode();
    }
}
The factory class in Example 26-4 creates the PersonMapTrigger, and the new MapTriggerListener passes the PersonMapTrigger to the People NamedCache.
Example 26-4 Calling a MapTrigger and Passing it to a Named Cache

...
public class MyFactory {
    /**
     * Instantiate a MapTriggerListener for a given NamedCache
     */
    public static MapTriggerListener createTriggerListener(String sCacheName) {
        MapTrigger trigger;
        if ("People".equals(sCacheName)) {
            trigger = new PersonMapTrigger();
        } else {
            throw new IllegalArgumentException("Unknown cache name " + sCacheName);
        }

        System.out.println("Creating MapTrigger for cache " + sCacheName);
        return new MapTriggerListener(trigger);
    }

    public static void main(String[] args) {
        NamedCache cache = CacheFactory.getCache("People");
        cache.addMapListener(createTriggerListener("People"));
        System.out.println("Installed MapTrigger into cache People");
    }
}
27 Using Live Events
This chapter describes live events and provides instructions for being notified of events using event interceptors. Applications use live events to react to cluster operations with application logic. This chapter includes the following sections:
Overview of Live Events
Understanding Live Event Types
Handling Live Events
Partitioned Cache Events
A set of events that represent the operations being performed against a set of entries in a cache. Partitioned cache events include both entry events and entry processor events. Entry events are related to inserting, removing, and updating entries in a cache. Entry processor events are related to the execution of entry processors.

Partitioned Service Events
A set of events that represent the operations being performed by a partitioned service. Partitioned service events include both partition transfer events and partition transaction events. Partition transfer events are related to the movement of partitions among cluster members. Partition transaction events are related to changes that may span multiple caches and are performed within the context of a single request.

Lifecycle Events
A set of events that represent the activation and disposal of a ConfigurableCacheFactory instance.
Applications create and register event interceptors to consume events. Event interceptors handle the events and implement any custom logic as required. Events have different rules which govern whether or not mutable actions may be performed upon receiving the event.
Each live event is associated with an event type. This section describes each of the supported event types and is organized according to the functional areas in which the events are raised. The following topics are included in this section:
Understanding Partitioned Cache Events
Understanding Partitioned Service Events
Understanding Lifecycle Events
Entry Events
Entry events represent operations for inserting, removing, and updating entries in a cache. Precommit entry events (INSERTING, REMOVING, and UPDATING) are raised before the operation is performed to allow modification to an entry. The following holds true when modifying entries:
Event interceptors that are registered for precommit entry events are synchronously called.
A lock is held for each entry during the processing of the event to prevent concurrent updates.
Throwing an exception prevents the operation from being committed.
Postcommit entry events (INSERTED, REMOVED, and UPDATED) are raised after an operation is performed and in the same order as the events occurred. Postcommit events indicate that an entry is no longer able to be modified. Event interceptors for postcommit entry events are asynchronously processed. Table 27-1 lists the entry event types.
Table 27-1 Entry Events

Event Type    Description
INSERTING     Indicates that an entry (or multiple entries) is going to be inserted in the cache.
INSERTED      Indicates that an entry (or multiple entries) has been inserted in the cache.
REMOVING      Indicates that an entry (or multiple entries) is going to be removed from the cache.
REMOVED       Indicates that an entry (or multiple entries) has been removed from the cache.
UPDATING      Indicates that an entry (or multiple entries) is going to be updated in the cache.
UPDATED       Indicates that an entry (or multiple entries) has been updated in the cache.
Entry processor events represent the execution of entry processors on a set of entries. Precommit entry processor events (EXECUTING) are raised before an entry processor is executed to allow modification to the entry processor instance. The following holds true when modifying an entry processor:
Event interceptors that are registered for precommit entry processor events are synchronously called.
Entry processors can be shared across threads; therefore, ensure thread safety when modifying an entry processor.
A lock is held for each entry during the processing of the event to prevent concurrent updates.
Throwing an exception prevents the entry processor from being executed.
Postcommit entry processor events (EXECUTED) are raised after an entry processor is executed and in the same order that the events occurred. Postcommit events indicate that an entry is no longer able to be modified. Event interceptors for postcommit entry processor events are asynchronously processed. Table 27-2 lists the entry processor event types.
Table 27-2 Entry Processor Events

Event Type    Description
EXECUTING     Indicates that an entry processor is going to be executed on a set of entries.
EXECUTED      Indicates that an entry processor has been executed on a set of entries.
Transfer Events
Partitioned service transfer events represent partition transfers between storage enabled members. The event includes the service name for which the transfer is taking place, the partition ID, the cluster members involved in the transfer and a map of cache names to entries. The entries cannot be modified.
Note:
Transfer events are raised while holding a lock on the partition being transferred that blocks any operations for the partition.
Transaction Events
Partitioned service transaction events represent changes to binary entries (possibly from multiple caches) that are made in the context of a single service request.
Precommit transaction events (COMMITTING) are raised before any operations are performed to allow modification to the entries. The following holds true when modifying entries:
A lock is held for each entry during the processing of the event to prevent concurrent updates.
Throwing an exception prevents the operation from being committed.
Postcommit transaction events (COMMITTED) are raised after an operation is performed. Postcommit events indicate that the entries are no longer able to be modified. Table 27-4 lists the transaction event types.
Table 27-4 Transaction Events

Event Type    Description
COMMITTING    Indicates that entries are going to be inserted in their respective cache.
COMMITTED     Indicates that entries have been inserted in their respective cache.
Event interceptors that are registered for the same event type are automatically chained together and executed in the context of a single event. This section includes the following topics:
Creating Event Interceptors
Understanding Event Threading
Registering Event Interceptors
Chaining Event Interceptors
The @Interceptor annotation can be used to further restrict the events to specific event types and also provides further configuration of the interceptor. The following example defines an interceptor identifier and restricts the events to only transfer DEPARTING events:
@Interceptor(identifier = "redist", transferEvents = TransferEvent.Type.DEPARTING) public class RedistributionInterceptor implements EventInterceptor<TransferEvent> public void onEvent(TransferEvent event) { ...
identifier - Specifies a unique identifier for the interceptor. The identifier can be overridden when registering an interceptor class in the cache configuration file. This attribute is optional. A unique name is automatically generated by the event infrastructure if the attribute is omitted.
entryEvents - Specifies an array of entry event types to which the interceptor wants to subscribe.
entryProcessorEvents - Specifies an array of entry processor event types to which the interceptor wants to subscribe.
transferEvents - Specifies an array of transfer event types to which the interceptor wants to subscribe.
transactionEvents - Specifies an array of transaction event types to which the interceptor wants to subscribe.
order - Specifies whether the interceptor is placed at the front of a chain of interceptors. See "Chaining Event Interceptors" on page 27-9. The legal values are LOW and HIGH. A value of HIGH indicates that the interceptor is placed at the front of the chain of interceptors. A value of LOW indicates no order preference. The default value is LOW. The order can be overridden when registering an interceptor class in the cache configuration file.
The following example demonstrates a basic event interceptor implementation that subscribes to all transfer event types (both DEPARTING and ARRIVED). The onEvent method simply logs the events to show partition activity. The example is part of the Coherence examples.
Note: Event instances are immutable and their lifecycle is controlled by the underlying system. References to event classes must not be held across multiple invocations of the onEvent() method.
Example 27-1 Example Event Interceptor Implementation
package com.tangosol.examples.events;

import com.tangosol.net.CacheFactory;
import com.tangosol.net.events.EventInterceptor;
import com.tangosol.net.events.annotation.Interceptor;
import com.tangosol.net.events.partition.TransferEvent;

@Interceptor(identifier = "redist")
public class RedistributionInterceptor implements EventInterceptor<TransferEvent> {
    public void onEvent(TransferEvent event) {
        CacheFactory.log(String.format(
            "Discovered event %s for partition-id %d from remote member %s\n",
            event.getType(), event.getPartitionId(), event.getRemoteMember()),
            CacheFactory.LOG_INFO);
    }
}
Precommit Events
Precommit event types allow event interceptors to modify entries before the entries are committed to a cache. The interceptors are processed synchronously and must not perform long-running operations (such as database access) that could potentially block or slow cache operations. Calls to external resources must always return as quickly as possible to avoid blocking cache operations.
Configure a thread pool for a partitioned service if event interceptors are to handle precommit events. The thread pool is disabled by default and requires setting the <thread-count> element within a distributed cache definition. For details on the <thread-count> element, see "distributed-scheme Subelements" on page B-33. The thread pool creates additional threads to process cache operations and helps alleviate the overall impact of event interceptors that handle precommit events, but the potential for blocking still exists.

Postcommit Events
Postcommit events do not allow an event interceptor to modify entries. The events are raised in the same order as the events occurred and the interceptors are processed asynchronously. Event interceptors that perform long-running operations can cause a backlog of requests that could ultimately affect performance. It is a best practice to use the Java Executor service to perform such operations on a separate thread. The Java Executor service also provides the ability to create a pool of interceptors to be executed in parallel. For example, an interceptor could dispatch additional interceptors using threads within an Executor service.
Event interceptors for service-level events (such as transfer events and transaction events) must be registered for a partitioned service and cannot be restricted to a specific cache.
Registering Event Interceptors For a Specific Cache
Registering Event Interceptors For a Partitioned Service
Using Custom Registration
Guidelines for Registering Event Interceptors
Note: If an interceptor is configured without using the annotations, then the configuration cannot be overridden using the cache configuration file.
Interceptors can also be programmatically registered using the InterceptorRegistry API. Registering an interceptor causes it to be introduced to all currently registered and future event dispatchers. An interceptor can determine whether or not to bind to a dispatcher by using the introduceEventDispatcher method as shown in the previous example. The InterceptorRegistry API is available from the ConfigurableCacheFactory interface and is called using the getInterceptorRegistry method. The API can be used together with the cache configuration file when declaratively registering interceptors. The API is often used with custom DefaultCacheServer implementations to add interceptors programmatically, or the API is used to selectively register interceptors when using the InvocationService interface. The following example demonstrates registering an interceptor.
CacheFactory.getConfigurableCacheFactory().getInterceptorRegistry()
    .registerEventInterceptor(new MyEventInterceptor());
An interceptor class can be duplicated multiple times in a distributed scheme or in multiple schemes of the same service as long as the identifier names are unique or no identifier name is defined. In the latter case, a unique name is automatically generated by the event infrastructure.
An interceptor class (duplicated or not) with the same identifier name cannot be registered multiple times in a distributed scheme and results in a registration error.
An interceptor class that is inherited by scheme reference is registered once for each distinct service name in the reference chain.
Modify data associated with the event if allowed. For example, precommit operations such as INSERTING, UPDATING, and REMOVING entry events allow data modifications.
Deny a precommit operation by throwing an exception.
Enlist a new entry into a partition-level transaction.
Observe the results of any side effects caused by event interceptors further down the chain. If the interceptor chain is associated with a precommit storage event, the ability to observe the results provides a second opportunity to deny the processing.

Observing the side effects of downstream event interceptors is accomplished using the Event.nextInterceptor method. When this method returns, it signifies that all downstream event interceptors executed, and any changes to the state associated with the event can be inspected. The Event object holds state on the processing of the event. The event calls each event interceptor's onEvent method as long as there are still event interceptors in the chain. If the event interceptor itself calls the Event.nextInterceptor method, it triggers the next event interceptor in the chain to execute. If the event interceptor returns, the event itself triggers the next event interceptor in the chain's onEvent method. Either way, all event interceptors are executed, but those that choose to call the nextInterceptor method have the option of taking action based on the side effects that are observed.
The order can also be specified declaratively when registering an event interceptor in the cache configuration file and overrides the order attribute that is specified in an event interceptor class. To specify the ordering of interceptors in the cache configuration file, include an <order> element, within an <interceptor> element, that is set to HIGH or LOW. For example:
<interceptors>
   <interceptor>
      <name>MyInterceptor</name>
      <order>HIGH</order>
      <instance>
         <class-name>package.MyInterceptor</class-name>
      </instance>
   </interceptor>
   <interceptor>
      <name>MySecondInterceptor</name>
      <instance>
         <class-name>package.MySecondInterceptor</class-name>
      </instance>
   </interceptor>
</interceptors>
28 Using Coherence Query Language
This chapter describes how to use Coherence Query Language (CohQL) to interact with Coherence caches. CohQL is a light-weight syntax (in the tradition of SQL) that is used to perform cache operations on a Coherence cluster. The language can be used either programmatically or from a command-line tool. This chapter includes the following sections:
Understanding Coherence Query Language Syntax
Using the CohQL Command-Line Tool
Building Filters in Java Programs
Additional Coherence Query Language Examples
Note:
Although the CohQL syntax may appear similar to SQL, it is important to remember that the syntax is not SQL and is actually more contextually related to the Java Persistence Query Language (JPQL) standard. CQL (Continuous Query Language) is a query language related to Complex Event Processing (CEP) and should not be confused with CohQL.
Query Syntax Basics
Retrieving Data
Managing the Cache Lifecycle
Working with Cache Data
Working with Indexes
For reference, Table 28-1 lists the Coherence query statements, clauses, and expressions in alphabetical order.
Table 28-1 Coherence Query Language Statements

For this statement, clause, or expression...    See this section
BACKUP CACHE                  Writing a Serialized Representation of a Cache to a File
bind variables                Using Bind Variables
CREATE CACHE                  Creating a Cache
CREATE INDEX                  Creating an Index on the Cache
DELETE                        Deleting Entries in the Cache
DROP CACHE                    Removing a Cache from the Cluster
DROP INDEX                    Removing an Index from the Cache
ENSURE CACHE                  Creating a Cache
ENSURE INDEX                  Creating an Index on the Cache
GROUP BY                      Aggregating Query Results
INSERT                        Inserting Entries in the Cache
key() pseudo-function         Using Key and Value Pseudo-Functions
path-expressions              Using Path-Expressions
RESTORE CACHE                 Loading Cache Contents from a File
SELECT                        Retrieving Data from the Cache
SOURCE                        Processing Query Statements in Batch Mode
UPDATE                        Changing Existing Values
value() pseudo-function       Using Key and Value Pseudo-Functions
WHERE                         Filtering Entries in a Result Set
Using Path-Expressions
One of the main building blocks of CohQL is the path-expression. Path expressions are used to navigate through a graph of object instances. An identifier in a path expression represents a property in the JavaBean sense. It is backed by a ReflectionExtractor that is created by prepending get and capitalizing the first letter. Elements are separated by the dot (.) character, which represents object traversal. For example, the following path expression is used to navigate an object structure:
a.b.c
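As an illustration of the traversal, the dotted path above corresponds to chained getter calls (getA().getB().getC()). A roughly equivalent extractor could be built by hand; this sketch assumes the com.tangosol.util.extractor.ChainedExtractor constructor that accepts a dot-delimited accessor string:

// builds an extractor that evaluates getA().getB().getC() against a cached value
ValueExtractor extractor = new ChainedExtractor("getA.getB.getC");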
the question mark (?) - Enter a question mark, immediately followed by a number, to signify a positional place holder that indexes a collection of objects that are "supplied" before the query is run. The syntax for this form is ?n, where n can be any number. Positional bind variables can be used by the QueryHelper class in the construction of filters. For example:
QueryHelper.createFilter("number = ?1" , new Object[]{new Integer(42)};
the colon (:) - Enter a colon, immediately followed by the identifier, to be used as a named place holder for the object to be supplied as a key-value pair. The syntax for this is :identifier, where identifier is an alphanumeric combination starting with an alphabetic character. Named bind variables can be used by the QueryHelper class in the construction of filters. For example:
HashMap env = new HashMap();
env.put("iNum", new Integer(42));
QueryHelper.createFilter("number = :iNum", env);
See "Building Filters in Java Programs" on page 28-14 for more information on the QueryHelper class and constructing filters programmatically.
Using Aliases
Although not needed semantically, CohQL supports aliases to make code artifacts as portable as possible to JPQL. CohQL supports aliases attached to the cache name and at the head of dotted path expressions in the SELECT, UPDATE, and DELETE commands. CohQL also allows the cache alias as a substitute for the value() pseudo function and as an argument to the key() pseudo function.
Retrieving Data
The following sections describe the SELECT statement and the WHERE clause. These entities are the basic building blocks of most cache queries.
The asterisk (*) character represents the full object instead of subparts. It is not required to prefix a path with the cache-name. The FROM part of the SELECT statement targets the cache that forms the domain over which the query should draw its results. The cache-name is the name of an existing cache. See "Simple SELECT * FROM Statements that Highlight Filters" on page 28-15 for additional examples. Example: Select all of the items from the cache dept.
select * from "dept"
Operator precedence within the WHERE clause, from highest to lowest, is as follows:

1. Path operator (.)
2. Unary + and -
3. Multiplication (*) and division (/)
4. Addition (+) and subtraction (-)
5. Comparison operators: =, >, >=, <, <=, <>, [NOT] BETWEEN, [NOT] LIKE, [NOT] IN, IS [NOT] NULL, CONTAINS [ALL|ANY]
6. Logical operators (AND, OR, NOT)
The WHERE clause supports only arithmetic at the language level. The BETWEEN operator can be used in conditional expressions to determine whether the result of an expression falls within an inclusive range of values. Numeric or string expressions can be evaluated in this way. The form is: BETWEEN lower AND upper. The LIKE operator can use the _ and % wildcards. The _ wildcard is used to match exactly one character, while the % wildcard is used to match zero or more occurrences of any characters. To escape the wildcards, precede them with an escape character that is defined using the escape keyword. The following example escapes the % wildcard using the \ escape character in order to select a key literally named k%1.
SELECT key(),value() FROM mycache WHERE key() LIKE "k\%1" escape "\"
In addition, any character may be defined as the escape character. For example:
SELECT key(),value() FROM mycache WHERE key() LIKE "k#%1" escape "#"
The IN operator can check whether a single-valued path-expression is a member of a collection. The collection is defined as an inline-list or expressed as a bind variable. The syntax of an inline-list is:
"(" literal* ")"
CONTAINS [ALL|ANY] are very useful operators because Coherence data models typically use de-normalized data. The CONTAINS operator can determine if a many-valued path-expression contains a given value. For example: e.citys CONTAINS "Boston" The ALL and ANY forms of CONTAINS take an inline-list or bind-variable with the same syntax as the IN operator.
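For illustration, hypothetical statements using these operators might look like the following (the cache and property names are assumptions, and the inline lists are written as comma-separated literals):

select * from "employees" where department IN ("Sales", "Marketing")
select * from "employees" where skills CONTAINS ANY ("Java", "SQL")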
Note:
Coherence provides a programmatic API that enables you to create standalone Coherence filters based on the WHERE clause conditional-expression syntax. See "Building Filters in Java Programs" on page 28-14.
See "Simple SELECT * FROM Statements that Highlight Filters" on page 28-15 for additional examples. Example: Select all of the items in the cache dept where the value of the deptno key equals 10.
select * from "dept" where deptno = 10
Creating a Cache
Before sending queries, connect to an existing cache or create a new cache using the CREATE CACHE or ENSURE CACHE statements, respectively. This statement first attempts to connect to a cache with the specified cache-name. If the cache is not found in the cluster, Coherence attempts to create a cache with the specified name based on the current cache configuration file. This statement is especially useful on the command line. If you are using this statement in a program, you have the option of specifying service and classloader information instead of a name (classloaders cannot be accessed from the command line).
Note:
Cache names and service names must be enclosed in quotes (either double-quotes (" ") or single-quotes (' ')) in a statement.
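For example, a statement of the following form connects to (or creates) a cache named dept:

create cache "dept"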
Note:
The backup (and subsequent restore) functionality is designed for use in a development and testing environment and should not be used on a production data set as it makes no provisions to ensure data consistency. It is not supported as a production backup, snapshot, or checkpointing utility. In particular:
The backup is slow since it only operates on a single node in the cluster.
The backup is not atomic. That is, it misses changes to elements which occur during the backup and results in a dirty read of the data.
The backup stops if an error occurs and results in an incomplete backup. In such scenarios, an IOException is thrown that describes the error.
The backup is not forward or backward compatible between 3.x and 12.1.x.
Example: Write a serialized representation of the cache dept to the file textfile.
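A sketch of such a statement (the keyword form is assumed to follow the BACKUP CACHE syntax listed in Table 28-1):

backup cache "dept" to "textfile"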
Note:
The Double type is assumed if a type is not explicitly specified. See "Complex Queries that Feature Projection, Aggregation, and Grouping" on page 28-16 for additional examples.
Example: Select the total amount and average price for items from the orders cache, grouped by supplier.
select supplier,sum(amount),avg(price) from "orders" group by supplier
Select the total amount and average price (using a BigDecimal type) for items from the orders cache, grouped by supplier.
select supplier,bd_sum(amount),bd_avg(price) from "orders" group by supplier
Each update-statement consists of a path expression, assignment operator (=), and an expression. The expression choices for the assignment statement are restricted. The right side of the assignment must resolve to a literal, a bind-variable, a static method, or a new Java-constructor with only literals or bind-variables. The UPDATE statement also supports the use of aliases. See "UPDATE Examples" on page 28-17 for additional examples. Example: For employees in the employees cache whose ranking is above grade 7, update their salaries to 1000 and vacation hours to 200.
update "employees" set salary = 1000, vacation = 200 where grade > 7
Example: Insert the key writer with the value David into the employee cache.
insert into "employee" key "writer" value "David"
The WHERE clause for the DELETE statement functions the same as it would for a SELECT statement. All conditional-expressions are available to filter the set of entities to be removed. The DELETE statement also supports the use of aliases.
Be Careful:
If the WHERE clause is not present, then all entities in the given cache are removed.
Example: Delete the entry from the cache employee where bar.writer key is not David.
delete from "employee" where bar.writer IS NOT "David"
The value-extractor-list is a comma-delimited list that uses path expressions to create ValueExtractors. If multiple elements exist, then a MultiExtractor is used. To create a KeyExtractor, start the path expression with the key() pseudo-function. Natural ordering for the index is assumed. Example: Create an index on the attribute lastname in the orders cache.
create index "orders" lastname
Example: Remove the index on the lastname attribute in the orders cache.
drop index "orders" lastname
SOURCE can be abbreviated with an "at" symbol (@), as in @"filename". On the command line only, a period (.) can be used as an abbreviation for @, but the filename must not be enclosed in quotes. Example: Process the statements in the file command_file.
source from file "command_file"
or,
@ "command_file"
or,
. command_file
Trace:
TRACE select statement | update statement | delete statement
Example:
EXPLAIN PLAN FOR select * from "mycache" where age=19 and firstName=Bob
or,
TRACE SELECT * from "MyCache" WHERE age=19
A storage-enabled node can be created for testing by changing the storage_enabled setting in the script to true. As configured, the startup script uses the default operational configuration file (tangosol-coherence.xml) and the default cache configuration file (coherence-cache-config.xml) that are located in coherence.jar when creating or joining a cluster and configuring caches. For more information on configuring Coherence, see Chapter 3, "Understanding Configuration."
Note:
The script provides the option for setting the COHERENCE_HOME environment variable. If COHERENCE_HOME is not set on the computer, set it in the script to the location where Coherence was installed. CohQL uses JLine for enhanced command-line editing capabilities, such as having the up and down arrows move through the command history. However, JLine is not required to use CohQL. The script automatically uses the jline.jar library that is located in the COHERENCE_HOME/lib/ directory. A different location can be specified by modifying the JLINE_HOME variable and classpath in the script. If the JLine library is not found, a message displays and CohQL starts without JLine capabilities.
Start a cache server cluster node or ensure that an existing cache server cluster node is started. To start a cache server cluster node, open a command prompt or shell and execute the cache server startup script that is located in the /bin directory: cache-server.cmd on the Windows platform or cache-server.sh for UNIX platforms. The cache server starts and output is emitted that provides information about this cluster member.
2.
Open a command prompt or shell and execute the CohQL command-line startup script that is located in the /bin directory: query.cmd on the Windows platform or query.sh for UNIX platforms. Information about the Java environment displays. The command-line tool prompt (CohQL>) is returned.
Note:
When joining an existing cache server node, modify the startup script to use the same cluster settings as the existing cache server node, including the same cache configuration.
3.
Enter help at the prompt to view the complete command-line help. Enter commands to list the help without detailed descriptions. See "A Command-Line Example" on page 28-12 for a series of query statements that exercise the command-line tool.
larger script, for example as part of a build process, or to pipe I/O. Enter help at the CohQL> prompt to view help for the arguments within the command-line tool.
Table 28-2 Coherence Query Language Command-Line Tool Arguments

-t  Enable trace mode to print debug information.

-c  Exit the command-line tool after processing the command-line arguments. This argument should not be used when redirecting from standard input; in which case, the tool exits as soon as the command-line arguments are finished being processed and the redirected input is never read.

-s  Run the command-line tool in silent mode to remove extraneous verbiage. This allows the command-line tool to be used in pipes or filters by redirecting standard input (<myInput) and standard output (>myOutput).

-e  Run the command-line tool in extended language mode. This mode allows object literals in update and insert commands. See the command-line help for complete usage information.

-l statement  Execute the given statement. Statements must be enclosed in single or double quotes. Any number of -l arguments can be used.

-f filename  Process the statements in the given file. The statements in the file must be separated by a semicolon (;). The file is an operating system-dependent path and must be enclosed in single or double quotes. Any number of -f arguments can be used.
Examples
Return all entries in the contact cache and print the entries to standard output, then exit the command-line tool.
query.sh -c -l "select * from contact"
Return all entries in the dist-example cache and print the entries (suppressing extra verbiage) to the file named myOutput then exit the command-line tool.
query.cmd -s -c -l "select * from 'dist-example'" >myOutput
Process all the statements in the file named myStatements, then exit the command-line tool.
query.sh -c -f myStatements
Read the commands from the myInput file and print the output (suppressing extra verbiage) to the file named myOutput.
query.sh -s <myInput >myOutput
A Command-Line Example
The following examples illustrate using the command-line tool on Windows. This example is intended for a single cluster member, so the storage_enabled setting in the startup script is set to true. The example illustrates creating and dropping a cache, storing and retrieving entries, and restoring the cache from a backup file. It also highlights the use of the key() and value() pseudo-functions. When starting the query.cmd script at the command prompt, information about the Java environment, the Coherence version and edition, and Coherence cache server is displayed. Enter query statements at the prompt (CohQL>).
Retrieve the value of a key that does not exist. An empty result set is returned:
CohQL> select key(), value() from "employees" where key() is "Richard"
Delete the contents of the employees cache. An empty result set is returned:
CohQL> delete from "employees"
Retrieve the cache contents again. Notice that the deleted key and value are not present.
CohQL> select key(), value() from "employees"
Retrieve the cache contents. Notice that all of the entries have been restored and returned:
CohQL> select key(), value() from "employees"
public static Filter createFilter(String s) - where s is a String in the Coherence query representing a Filter.
public static Filter createFilter(String s, Object[] aBindings) - where s is a String in the Coherence query representing a Filter and aBindings is an array of Objects to use for bind variables.
public static Filter createFilter(String s, Map bindings) - where s is a String in the Coherence query representing a Filter and bindings is a Map of Objects to use for bind variables.
public static Filter createFilter(String s, Object[] aBindings, Map bindings) - where s is a String in the Coherence query representing a Filter, aBindings is an array of Objects to use for bind variables, and bindings is a Map of Objects to use for bind variables.
These factory methods throw a FilterBuildingException if the expression is malformed, syntactically incorrect, or contains semantic errors. Since this exception is a subclass of RuntimeException, catching it is not required, but the process could terminate if you do not.
Example
The following statement uses the createFilter(String s) form of the method. It constructs a filter for employees who live in Massachusetts but work in another state.
... QueryHelper.createFilter("homeAddress.state = 'MA' and workAddress.state != 'MA'") ...
This statement is equivalent to the following filter/extractor using the Coherence API:
AndFilter(EqualsFilter(ChainedExtractor(#getHomeAddress[], #getState[]), MA), NotEqualsFilter(ChainedExtractor(#getWorkAddress[], #getState[]), MA))
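The bind-variable forms listed above avoid building query strings by concatenation when values come from program variables. The following is a minimal sketch (the cache name and attribute names are illustrative, not taken from this example) of the Object[] form, which uses positional ?n variables:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;
import com.tangosol.util.Filter;
import com.tangosol.util.QueryHelper;
import java.util.Set;

...
NamedCache cache = CacheFactory.getCache("ContactInfoCache");

// ?1 and ?2 are replaced by the corresponding entries of the bindings array
Filter filter = QueryHelper.createFilter(
    "age > ?1 and homeAddress.state = ?2",
    new Object[] {Integer.valueOf(42), "MA"});

Set setEntries = cache.entrySet(filter);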
The QueryHelper class also provides a createExtractor method that enables you to create value extractors when building filters. The extractor is used to both extract values (for example, for sorting or filtering) from an object, and to provide an identity for that extraction. The following example demonstrates using createExtractor when creating an index:
cache.addIndex(QueryHelper.createExtractor("key().lastName"),/*fOrdered*/ true, /*comparator*/ null);
Select the items from the cache orders where 40 is greater than the value of the price key.
select * from "orders" where 40 > price
Select the items from the cache orders where the value of the price key exactly equals 100 and the value of the insurance key is less than 10, or the value of the shipping key is greater than or equal to 20.
select * from "orders" where price is 100 and insurance < 10 or shipping >= 20
Select the items from the cache orders where the value of the price key exactly equals 100, and either the value of insurance key is less than 10 or the value of the shipping key is greater than or equal to 20.
select * from "orders" where price is 100 and (insurance < 10 or shipping >= 20)
Select the items from the cache orders where either the value of the price key equals 100, or the value of the shipping key equals 20.
select * from "orders" where price = 100 or shipping = 20
Select the items from the cache orders where the value of the insurance key is not null.
select * from "orders" where insurance is not null
Select the items from the cache employees where the emp_id key has a value between 1 and 1000 or the bar.emp key is not "Smith".
select * from "employees" where emp_id between 1 and 1000 or bar.emp is not "Smith"
Select items from the cache orders where the value of the item key begins with "coat".
select * from "orders" where item like "coat%"
Select items from the cache employees where the value of emp_id is in the set 5, 10, 15, or 20.
select * from "employees" where emp_id in (5,10,15,20)
Select items from the cache employees where emp_id contains the list 5, 10, 15, and 20.
select * from "employees" where emp_id contains (5,10,15,20)
Select items from the cache employees where emp_id contains all of the items 5, 10, 15, and 20.
select * from "employees" where emp_id contains all (5,10,15,20)
Select items from the cache employees where emp_id contains any of the items 5, 10, 15, or 20.
select * from "employees" where emp_id contains any (5,10,15,20)
Select items from the cache employees where the value of the emp_id key is less than 10 and occurs in the set 10, 20.
select * from "employees" where emp_id < 10 in (10,20)
Select the home state and age of employees in the cache ContactInfoCache, and group by state and age.
select homeAddress.state, age, count() from "ContactInfoCache" group by homeAddress.state, age
Select the spurious frobit key from the orders cache. Note, an empty result set is returned.
select frobit,supplier,sum(amount),avg(price) from "orders" group by supplier
For the items in the orders cache that are greater than $1000, select the items, their prices and colors.
select item_name,price,color from "orders" where price > 1000
Select the total amount for items from the orders cache. The Double type is assumed.
select sum(amount) from "orders"
Select the total amount for items from the orders cache as a Long type.
select long_sum(amount) from "orders"
Select the total amount for items from the orders cache as a BigDecimal type.
select bd_sum(amount) from "orders"
Select the total amount for items from the orders cache where the color attribute is red or green.
select sum(amount) from "orders" where color is "red" or color is "green"
Select the total amount and average price for items from the orders cache.
select sum(amount),avg(price) from "orders"
Select one copy of the lastname and city from possible duplicate rows from the employees cache, where the state is California.
select distinct lastName,city from "employees" where state = "CA"
UPDATE Examples
For employees in the employees cache whose ranking is above grade 7, increase their salaries by 10% and add 50 hours of vacation time.
update "employees" set salary = salary*1.10, vacation = vacation + 50 where grade > 7
Select the employees from the ContactInfoCache whose home address is in Massachusetts, but work out of state.
select key().firstName, key().lastName from "ContactInfoCache" where homeAddress.state is 'MA' and workAddress.state != "MA"
Select the employees from the ContactInfoCache cache whose age is greater than 42.
select key().firstName, key().lastName, age from "ContactInfoCache" where age > 42
29
Performing Transactions
This chapter provides instructions for using Coherence's transaction and data concurrency features. Users should be familiar with transaction principles before reading this chapter. In addition, the Coherence Resource Adapter requires knowledge of J2EE Connector Architecture (J2CA), Java Transaction API (JTA) and Java EE deployment. This chapter includes the following sections:
Overview of Transactions
Using Explicit Locking for Data Concurrency
Using Entry Processors for Data Concurrency
Using the Transaction Framework API
Using the Coherence Resource Adapter
Overview of Transactions
Transactions ensure correct outcomes in systems that undergo state changes by allowing a programmer to scope multiple state changes into a unit of work. The state changes are committed only if each change can complete without failure; otherwise, all changes must be rolled back to their previous state. Transactions attempt to maintain a set of criteria that are commonly referred to as ACID properties (Atomicity, Consistency, Isolation, Durability):
Atomic - The changes that are performed within the transaction are either all committed or all rolled back to their previous state.
Consistent - The results of a transaction must leave any shared resources in a valid state.
Isolated - The results of a transaction are not visible outside of the transaction until the transaction has been committed.
Durable - The changes that are performed within the transaction are made permanent.
Sometimes ACID properties cannot be maintained solely by the transaction infrastructure and may require customized business logic. For instance, the consistency property requires program logic to check whether changes to a system are valid. In addition, strict adherence to the ACID properties can directly affect infrastructure and application performance and must be carefully considered.
Coherence offers various transaction options that provide different transaction guarantees. The options should be selected based on an application's or solution's transaction requirements. Table 29-1 summarizes the various transaction options that Coherence offers.
Table 29-1 Coherence Transaction Options

Explicit locking - The ConcurrentMap interface (which is extended by the NamedCache interface) supports explicit locking operations. The locking API guarantees data concurrency but does not offer atomic guarantees. For detailed information on this option, see "Using Explicit Locking for Data Concurrency" on page 29-2.

Entry Processors - Coherence also supports a lock-free programming model through the EntryProcessor API. For many transaction types, this minimizes contention and latency and improves system throughput, without compromising the fault-tolerance of data operations. This option offers high-level concurrency control but does not offer atomic guarantees. For detailed information on this option, see "Using Entry Processors for Data Concurrency" on page 29-3.

Implicit locking - Coherence supports partition-level transactions using implicit locking through the EntryProcessor API. Partition-level transactions provide atomic guarantees when updating multiple caches in a single operation. For details, see "Processing Entries in Multiple Caches" on page 24-3.

Transaction Framework API - The Coherence Transaction Framework API is a connection-based API that provides atomic transaction guarantees across partitions and caches even with a client failure. The framework supports the use of NamedCache operations, queries, aggregation, and entry processors within the context of a transaction. For detailed information on this option, see "Using the Transaction Framework API" on page 29-5.

Coherence Resource Adapter - The Coherence resource adapter leverages the Coherence Transaction Framework API and allows Coherence to participate as a resource in XA transactions that are managed by a Java EE container's transaction manager. This transaction option offers atomic guarantees. For detailed information on this option, see "Using the Coherence Resource Adapter" on page 29-21.
...
NamedCache cache = CacheFactory.getCache("dist-cache");
Object key = "example_key";
cache.lock(key, -1);
try
    {
    Object value = cache.get(key);
    // application logic
    cache.put(key, value);
    }
finally
    {
    // Always unlock in a "finally" block
    // to ensure that uncaught exceptions
    // do not leave data locked
    cache.unlock(key);
    }
...
Coherence lock functionality is similar to the Java synchronized keyword and the C# lock keyword: locks only block locks. Threads must cooperatively coordinate access to data through appropriate use of locking. If a thread has locked the key to an item, another thread can read the item without locking.
Locks are unaffected by server failure and failover to a backup server. Locks are immediately released when the lock owner (client) fails.
Locking behavior varies depending on the timeout requested and the type of cache. A timeout of -1 blocks indefinitely until a lock can be obtained, 0 returns immediately, and a value greater than 0 waits the specified number of milliseconds before timing out. The boolean return value should be examined to ensure the caller has actually obtained the lock. See ConcurrentMap.lock() for more details. Note that if a timeout value is not passed to lock(), the default value is zero.
With replicated caches, the entire cache can be locked by using ConcurrentMap.LOCK_ALL as the key, although this is usually not recommended. This operation is not supported with partitioned caches.
In both replicated and partitioned caches, gets are permitted on keys that are locked. In a replicated cache, puts are blocked, but they are not blocked in a partitioned cache.
When a lock is in place, it is the responsibility of the caller (either in the same thread or the same cluster node, depending on the lease-granularity configuration) to release the lock. This is why locks should always be released with a finally clause (or equivalent). If this is not done, unhandled exceptions may leave locks in place indefinitely. For more information on lease-granularity configuration, see "DistributedCache Service Parameters".
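For example, the following sketch (the cache name, key, and one-second timeout are illustrative) waits a bounded time for the lock and only operates on the entry if the lock was actually obtained:

NamedCache cache = CacheFactory.getCache("dist-cache");
Object key = "example_key";

// wait up to 1000 milliseconds for the lock and check the boolean result
if (cache.lock(key, 1000))
    {
    try
        {
        Object value = cache.get(key);
        // application logic
        cache.put(key, value);
        }
    finally
        {
        cache.unlock(key);
        }
    }
else
    {
    // the lock could not be obtained within the timeout; retry or report an error
    }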
Invoke an entry processor on a specific key. Note that the key need not exist in the cache to invoke an entry processor on it.
Invoke an entry processor on a collection of keys.
Invoke an entry processor on a Filter. In this case, the Filter is executed against the cache entries. Each entry that matches the Filter criteria has the entry processor executed against it. For more information on Filters, see Chapter 22, "Querying Data In a Cache".
Entry processors are executed in parallel across the cluster (on the nodes that own the individual entries.) This provides a significant advantage over having a client lock all affected keys, pull all required data from the cache, process the data, place the data back in the cache, and unlock the keys. The processing occurs in parallel across multiple computers (as opposed to serially on one computer) and the network overhead of obtaining and releasing locks is eliminated.
Note:
EntryProcessor implementations must be available in the classpath for each cluster node.
Here is a sample of high-level concurrency control. Code that requires network access is commented:
Example 29-2 Concurrency Control without Using EntryProcessors
final NamedCache cache = CacheFactory.getCache("dist-test");
final String key = "key";
cache.put(key, new Integer(1));
// begin processing

// *requires network access*
if (cache.lock(key, 0))
    {
    try
        {
        // *requires network access*
        Integer i = (Integer) cache.get(key);

        // *requires network access*
        cache.put(key, new Integer(i.intValue() + 1));
        }
    finally
        {
        // *requires network access*
        cache.unlock(key);
        }
    }

// end processing
The following is an equivalent technique using an entry processor. Again, network access is commented:
Example 29-3 Concurrency Control Using EntryProcessors
final NamedCache cache = CacheFactory.getCache("dist-test");
final String key = "key";
cache.put(key, new Integer(1));
// begin processing

// *requires network access*
cache.invoke(key, new MyCounterProcessor());

// end processing

...

public static class MyCounterProcessor extends AbstractProcessor
    {
    // this is executed on the node that owns the data,
    // no network access required
    public Object process(InvocableMap.Entry entry)
        {
        Integer i = (Integer) entry.getValue();
        entry.setValue(new Integer(i.intValue() + 1));
        return null;
        }
    }
Entry processors are individually executed atomically; however, multiple entry processor invocations, through the use of InvocableMap.invokeAll(), do not execute as one atomic unit. As soon as an individual entry processor has completed, any updates made to the cache are immediately visible while the other entry processors are executing. Furthermore, an uncaught exception in an entry processor does not prevent the others from executing. Should the primary node for an entry fail while executing an entry processor, the backup node performs the execution instead. However, if the node fails after the completion of an entry processor, the entry processor is not invoked on the backup.
Generally, entry processors should be short lived. Applications with longer-running entry processors should increase the size of the distributed service thread pool so that other operations performed by the distributed service are not blocked by a long-running entry processor. For more information on the distributed service thread pool, see "DistributedCache Service Parameters".
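For example, the thread pool for a partitioned cache service is sized in the cache configuration. The following is only a sketch; it assumes that the <thread-count> element (used by the transactional scheme example later in this chapter) also applies to <distributed-scheme>, and the scheme names and value are illustrative:

<distributed-scheme>
   <scheme-name>example-distributed</scheme-name>
   <service-name>DistributedCache</service-name>
   <!-- assumed: enlarge the service thread pool for long-running entry processors -->
   <thread-count>10</thread-count>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>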
Known Limitations
The Transaction Framework API has the following limitations:
Database Integration - For existing Coherence users, the most noticeable limitation is the lack of support for database integration as compared to the existing Partitioned NamedCache implementation.
Server-Side Functionality - Transactional caches do not support eviction or expiry, though they support garbage collection of older object versions. Backing map listeners, triggers, and CacheStore modules are not supported.
Explicit Locking and Pessimistic Transactions - Pessimistic/explicit locking (the ConcurrentMap interface) is not supported.
Filters - Filters, such as PartitionedFilter, LimitFilter and KeyAssociationFilter, are not supported.
Synchronous Listener - The SynchronousListener interface is not supported.
Near Cache - Wrapping a near cache around a transactional cache is not supported.
Key Partitioning Strategy - You cannot specify a custom KeyPartitioningStrategy for a transactional cache; however, KeyAssociation or a custom KeyAssociator works.
Defining Transactional Caches
Performing Cache Operations within a Transaction
Creating Transactional Connections
Using Transactional Connections
Using the OptimisticNamedCache Interface
Configuring POF When Performing Transactions
Configuring Transactional Storage Capacity
Performing Transactions from Java Extend Clients
The Transaction Framework API is also the underlying transaction framework for the Coherence JCA resource adapter. For details on using the resource adapter, see "Using the Coherence Resource Adapter" on page 29-21.
The use of transaction schemes within near cache schemes is currently not supported.
The following example demonstrates defining a transactional cache scheme in a cache configuration file. The cache is named MyTxCache and maps to a
<transactional-scheme> that is named example-transactional. The cache name can also use the tx-* convention which allows multiple cache instances to use a single mapping to a transactional cache scheme.
Note: The <service-name> element, as shown in the example below, is optional. If no <service-name> element is included in the transactional cache scheme, TransactionalCache is used as the default service name. In this case, applications must connect to a transactional service using the default service name. See "Creating Transactional Connections" on page 29-10.
Example 29-4 Example Transactional Cache Definition
<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>MyTxCache</cache-name>
      <scheme-name>example-transactional</scheme-name>
   </cache-mapping>
</caching-scheme-mapping>

<caching-schemes>
   <!-- Transactional caching scheme. -->
   <transactional-scheme>
      <scheme-name>example-transactional</scheme-name>
      <service-name>TransactionalCache</service-name>
      <thread-count>10</thread-count>
      <request-timeout>30000</request-timeout>
      <autostart>true</autostart>
   </transactional-scheme>
</caching-schemes>
The <transactional-scheme> element also supports the use of scheme references. In the below example, a <transactional-scheme> with the name example-transactional references a <transactional-scheme> with the name base-transactional:
<caching-scheme-mapping>
   <cache-mapping>
      <cache-name>tx-*</cache-name>
      <scheme-name>example-transactional</scheme-name>
   </cache-mapping>
</caching-scheme-mapping>

<caching-schemes>
   <transactional-scheme>
      <scheme-name>example-transactional</scheme-name>
      <scheme-ref>base-transactional</scheme-ref>
      <thread-count>10</thread-count>
   </transactional-scheme>

   <transactional-scheme>
      <scheme-name>base-transactional</scheme-name>
      <service-name>TransactionalCache</service-name>
      <request-timeout>30000</request-timeout>
      <autostart>true</autostart>
   </transactional-scheme>
</caching-schemes>
Using the NamedCache API - Applications use the NamedCache API to implicitly perform cache operations within a transaction.
Using the Connection API - Applications use the Connection API to explicitly perform cache operations within a transaction.
Using the Coherence Resource Adapter - Java EE applications use the Coherence Resource Adapter to connect to a Coherence data cluster and perform cache operations as part of a distributed (global) transaction.
CacheFactory.ensureCluster();
NamedCache cache = CacheFactory.getCache("MyTxCache");

Map map = new HashMap();
map.put(key, "value");
map.put(key2, "value2");
map.put(key3, "value3");
map.put(key4, "value4");

//operations performed on the cache are atomic
cache.putAll(map);

CacheFactory.shutdown();
...
The examples in this section demonstrate how to use the Connection interface, DefaultConnectionFactory class, and the OptimisticNamedCache interface, which are located in the com.tangosol.coherence.transaction package. The examples use the transactional cache that was defined in Example 29-4. The Connection API is discussed in detail following the examples. Example 29-5 demonstrates an auto-commit transaction, in which two insert operations are each executed as separate transactions.
Example 29-5 Performing an Auto-Commit Transaction
... Connection con = new DefaultConnectionFactory(). createConnection("TransactionalCache"); OptimisticNamedCache cache = con.getNamedCache("MytxCache"); cache.insert(key, value); cache.insert(key2, value2); con.close(); ...
Example 29-6 demonstrates a non auto-commit transaction, in which two insert operations are performed within a single transaction. Applications that use non auto-commit transactions must manually demarcate transaction boundaries.
Example 29-6 Performing a Non Auto-Commit Transaction
... Connection con = new DefaultConnectionFactory(). createConnection("TransactionalCache"); con.setAutoCommit(false); try { OptimisticNamedCache cache = con.getNamedCache("MyTxCache"); cache.insert(key, value); cache.insert(key2, value2); con.commit(); catch (Exception e) { con.rollback(); throw e; } finally { con.close(); } ...
Example 29-7 demonstrates performing a transaction that spans multiple caches. Each transactional cache must be defined in a cache configuration file.
Example 29-7
... Connection con = new DefaultConnectionFactory(). createConnection("TransactionalCache"); con.setAutoCommit(false); OptimisticNamedCache cache = con.getNamedCache("MyTxCache"); OptimisticNamedCache cache1 = con.getNamedCache("MyTxCache1"); cache.insert(key, value); cache1.insert(key2, value2); con.commit(); con.close(); ...
Note:
Transactions can span multiple partitions and caches within the same service but cannot span multiple services.
In this example, the first cache configuration file found on the classpath (or specified using the -Dtangosol.coherence.cacheconfig system property) is used by this Connection instance. Optionally, a URI can be passed as an argument to the factory class that specifies the location and name of a cache configuration file. For example, the following code demonstrates constructing a connection factory that uses a cache configuration file named cache-config.xml that is located in a config directory found on the classpath.
Connection con = new DefaultConnectionFactory("config/cache-config.xml"). createConnection("TransactionalCache");
createConnection() - The no-argument method creates a connection that is a member of the default transactional service, which is named TransactionalCache. Use the no-argument method when the <transactional-scheme> element being used does not include a specific <service-name> element. For details on defining transactional cache schemes and specifying the service name, see "Defining Transactional Caches" on page 29-6.
createConnection(ServiceName) - This method creates a connection that is a member of a transactional service. The service name is a String that indicates the transactional service to which this connection belongs. The ServiceName maps to a <service-name> element that is defined within a <transactional-scheme> element in the cache configuration file. If no service name is used, the default name (TransactionalCache) is used as the service name. For details on defining transactional cache schemes and specifying the service name, see "Defining Transactional Caches" on page 29-6.
createConnection(ServiceName, loader) - This method also creates a connection that is a member of a transactional service. In addition, it specifies the class loader to use. In the above example, the connection is created by only specifying a service name; in which case, the default class loader is used.
A transaction is automatically committed or rolled back for each cache operation. See "Using Auto-Commit Mode" on page 29-11.
A transaction uses the read committed isolation level. See "Setting Isolation Levels" on page 29-12.
A transaction immediately performs operations on the cache. See "Using Eager Mode" on page 29-13.
A transaction has a default timeout of 300 seconds. See "Setting Transaction Timeout" on page 29-13.
A connection's default behaviors can be changed using the Connection instance's methods as required.
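For example, the following sketch assumes that the Connection interface exposes setter methods matching the behaviors listed above (setAutoCommit, setEager, and setTransactionTimeout, the latter taken here to be specified in seconds); it is illustrative only and should be checked against the API reference:

Connection con = new DefaultConnectionFactory().
   createConnection("TransactionalCache");

con.setAutoCommit(false);       // group several operations into one transaction
con.setEager(false);            // queue operations instead of flushing each one immediately
con.setTransactionTimeout(60);  // assumed to be specified in seconds

// ... perform cache operations, then commit or roll back explicitly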
Multiple operations are performed as part of a single transaction by disabling auto-commit mode. If auto-commit mode is disabled, an application must manually demarcate transaction boundaries. The following code from Example 29-6 demonstrates insert operations that are performed within a single transaction:
con.setAutoCommit(false);

OptimisticNamedCache cache = con.getNamedCache("MyTxCache");

cache.insert(key, value);
cache.insert(key2, value2);
con.commit();
An application cannot use the commit() or rollback() method when auto-commit mode is enabled. Moreover, if auto-commit mode is enabled while in an active transaction, any work is automatically rolled back.
READ_COMMITTED - This is the default isolation level if no level is specified. This isolation level guarantees that only committed data is visible and does not provide any consistency guarantees. This is the weakest of the isolation levels and generally provides the best performance at the cost of read consistency.
STMT_CONSISTENT_READ - This isolation level provides statement-scoped read consistency, which guarantees that a single operation only reads data for the consistent read version that was available at the time the statement began. The version may or may not be the most current data in the cache. See the note below for additional details.
STMT_MONOTONIC_CONSISTENT_READ - This isolation level provides the same guarantees as STMT_CONSISTENT_READ, but reads are also guaranteed to be monotonic. A read is guaranteed to return a version equal to or greater than any version that was previously encountered while using the connection. Due to the monotonic read guarantee, reads with this isolation may block until the necessary versions are available.
TX_CONSISTENT_READ - This isolation level provides transaction-scoped read consistency, which guarantees that all operations performed in a given transaction read data for the same consistent read version that was available at the time the transaction began. The version may or may not be the most current data in the cache. See the note below for additional details.
TX_MONOTONIC_CONSISTENT_READ - This isolation level provides the same guarantees as TX_CONSISTENT_READ, but reads are also guaranteed to be monotonic. A read is guaranteed to return a version equal to or greater than any version that was previously encountered while using the connection. Due to the monotonic read guarantee, the initial read in a transaction with this isolation may block until the necessary versions are available.
Note: Consistent read isolation levels (statement or transaction) may lag slightly behind the most current data in the cache. If a transaction writes and commits a value, then immediately reads the same value in the next transaction with a consistent read isolation level, the updated value may not be immediately visible. If reading the most recent value is critical, then the READ_COMMITTED isolation level is required.
Isolation levels are set on a Connection instance and must be set before starting an active transaction. For example:
... Connection con = new DefaultConnectionFactory(). createConnection("TransactionalCache"); con.setIsolationLevel(STMT_CONSISTENT_READ); ...
The following code sample from Example 29-5 demonstrates getting a transactional cache called MyTxCache and performing operations on the cache. For this example, a transactional cache that is named MyTxCache must be located in the cache configuration file at run time. For details on defining a transactional cache, see "Defining Transactional Caches" on page 29-6.
OptimisticNamedCache cache = con.getNamedCache("MyTxCache"); cache.insert(key, value); cache.insert(key2, value2);
The Transaction Framework API also includes its own POF types which are defined in the txn-pof-config.xml POF configuration file which is included in coherence.jar. The POF types are required and must be found at run-time. To load the transaction POF types at run time, modify an application's POF configuration file and include the txn-pof-config.xml POF configuration file using the <include> element. For example:
<pof-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-pof-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-pof-config
   coherence-pof-config.xsd">
   <user-type-list>
      <include>coherence-pof-config.xml</include>
      <include>txn-pof-config.xml</include>
   </user-type-list>
   ...
</pof-config>
See "Combining Multiple POF Configuration Files" on page 3-10 for more information on using the <include> element to combine POF configuration files.
Due to the internal storage requirements, transactional caches have a constant overhead associated with every entry written to the cache. Moreover, transactional caches use multi-version concurrency control, which means that every write operation produces a new row into the cache even if it is an update. Therefore, the Transaction Framework API uses a custom eviction policy to help manage the growth of its internal storage caches. The eviction policy works by determining which versions of an entry can be kept and which versions are eligible for eviction. The latest version for a given key (the most recent) is never evicted. The eviction policy is enforced whenever a configured high-water mark is reached. After the threshold is reached, 25% of the eligible versions are removed.
Note:
The eviction policy does not take the entire transactional storage into account when comparing the high-water mark. Therefore, transactional storage slightly exceeds the high-water mark before the storage eviction policy is notified. It is possible that storage for a transactional cache exceeds the maximum heap size if the cache is sufficiently broad (large number of distinct keys) since the current entry for a key is never evicted.
Because the storage eviction policy is notified on every write where the measured storage size exceeds the high-water mark, the default high-water mark may have to be increased so that it is larger than the size of the current data set. Otherwise, the eviction policy is notified on every write after the size of the current data set exceeds the high-water mark, resulting in decreased performance. If consistent reads are not used, the value can be set so that it slightly exceeds the projected size of the current data set, since no historical versions are ever read. When using consistent reads, the high-water mark should be high enough to provide for enough historical versions. Use the formulas below to approximate the transactional storage size.
The high-water mark is configured using the <high-units> element within a transactional scheme definition. The following example demonstrates configuring a high-water mark of 20 MB.
<transactional-scheme>
   ...
   <high-units>20M</high-units>
   ...
</transactional-scheme>
The following formulas provide a rough estimate of the memory usage for a row in a transactional cache. For insert operations:
Primary - key(serialized) + key (on-heap size) + value(serialized) + 1095 bytes constant overhead
Backup - key(serialized) + value(serialized) + 530 bytes constant overhead

For update operations:

Primary - value(serialized) + 685 bytes constant overhead
Backup - value(serialized) + 420 bytes constant overhead
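As a rough, illustrative calculation (the entry sizes are assumed, not taken from this guide): for an entry with a 30-byte serialized key, a 24-byte on-heap key, and a 200-byte serialized value, an insert consumes approximately 30 + 24 + 200 + 1095 = 1349 bytes on the primary and 30 + 200 + 530 = 760 bytes on the backup, or roughly 2 KB of transactional storage per entry before any updates are applied.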
Create an Entry Processor for Transactions
Configure the Cluster-Side Transaction Caches
Configure the Client-Side Remote Cache
Use the Transactional Entry Processor from a Java Client
public class MyTxProcessor extends AbstractProcessor
   {
   public Object process(InvocableMap.Entry entry)
      {
      // obtain a connection and transaction cache
      ConnectionFactory connFactory = new DefaultConnectionFactory();
      Connection conn = connFactory.createConnection("TransactionalCache");
      OptimisticNamedCache cache = conn.getNamedCache("MyTxCache");

      conn.setAutoCommit(false);

      // get a value for an existing entry
      String sValue = (String) cache.get("existingEntry");

      // create predicate filter
      Filter predicate = new EqualsFilter(IdentityExtractor.INSTANCE, sValue);

      try
         {
         // update the previously obtained value
         cache.update("existingEntry", "newValue", predicate);
         }
      catch (PredicateFailedException e)
         {
         // value was updated after it was read
         conn.rollback();
         return false;
         }
      catch (UnableToAcquireLockException e)
         {
         // row is being updated by another transaction
         conn.rollback();
         return false;
         }
      try
         {
         conn.commit();
         }
      catch (RollbackException e)
         {
         // transaction was rolled back
         return false;
         }
      return true;
      }
   }
      <service-name>DistributedCache</service-name>
      <backing-map-scheme>
         <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
   </distributed-scheme>

   <proxy-scheme>
      <service-name>ExtendTcpProxyService</service-name>
      <acceptor-config>
         <tcp-acceptor>
            <local-address>
               <address>localhost</address>
               <port>9099</port>
            </local-address>
         </tcp-acceptor>
      </acceptor-config>
      <autostart>true</autostart>
   </proxy-scheme>
</caching-schemes>
</cache-config>
Table 29-2 Transactional Cache Supported Attributes

Size (Integer)
TotalGets (Long)
TotalGetsMillis (Long) - The total number of milliseconds spent on get() operations since the cache statistics were last reset.
TotalPuts (Long) - The total number of put() operations since the cache statistics were last reset.
TotalPutsMillis (Long) - The total number of milliseconds spent on put() operations since the cache statistics were last reset.
For transactional caches, the resetStatistics operation is supported and resets all transaction manager statistics.
TransactionManagerBean
The TransactionManagerMBean managed resource is specific to the transactional framework. It provides global transaction manager statistics by aggregating service-level statistics from all transaction service instances. Each cluster node has an instance of the transaction manager managed bean per service. The object name uses the form:
type=TransactionManager, service=service name, nodeId=cluster node's id
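For example, the statistics can be read over JMX. The following sketch assumes that management is enabled on the node, that the Coherence MBeans are registered with the platform MBean server under the default "Coherence" domain, and that the service name (TransactionalCache) and node id (1) match your deployment; adjust the object name for your environment:

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class TxManagerStats
    {
    public static void main(String[] asArg) throws Exception
        {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();

        // assumed object name; follows the form given above
        ObjectName name = new ObjectName(
            "Coherence:type=TransactionManager,service=TransactionalCache,nodeId=1");

        Long cCommitted = (Long) server.getAttribute(name, "TotalCommitted");
        System.out.println("TotalCommitted: " + cCommitted);
        }
    }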
Note:
For certain transaction manager attributes, the count is maintained at the coordinator node for the transaction, even though multiple nodes may have participated in the transaction. For example, a transaction may include modifications to entries stored on multiple nodes but the TotalCommitted attribute is only incremented on the MBean on the node that coordinated the commit of that transaction.
Table 29-3 TransactionManagerMBean Attributes

TotalCommitted (Long)
TotalRecovered (Long)
TotalRolledback (Long) - The total number of transactions that have been rolled back by the Transaction Manager since the last time the statistics were reset. Note that the count is maintained at the coordinator node for the transaction being rolled back, even though multiple nodes may have participated in the transaction.
TotalTransactionMillis (Long) - The cumulative time (in milliseconds) spent on active transactions.
TimeoutMillis (Long) - The transaction timeout value in milliseconds. Note that this value only applies to transactional connections obtained after the value is set. This attribute is currently not supported.
The TransactionManagerMBean includes a single operation called resetStatistics, which resets all transaction manager statistics.
Performing Cache Operations within a Transaction
Packaging the Application
Using the Coherence Cache Adapter for Transactions
Context initCtx = new InitialContext();
ConnectionFactory cf = (ConnectionFactory)
   initCtx.lookup("java:comp/env/eis/CoherenceTxCF");

Connection con = cf.createConnection("TransactionalCache");
try
   {
   OptimisticNamedCache cache = con.getNamedCache("MyTxCache");
   cache.delete("key1", null);
   cache.insert("key1", "value1");
   }
finally
   {
   con.close();
   }
Example 29-10 Performing a User-Controlled Transaction

Context initCtx = new InitialContext();
ConnectionFactory cf = (ConnectionFactory)
   initCtx.lookup("java:comp/env/eis/CoherenceTxCF");
UserTransaction ut = (UserTransaction)
   new InitialContext().lookup("java:comp/UserTransaction");

ut.begin();
Connection con = cf.createConnection("TransactionalCache");
try
   {
   OptimisticNamedCache cache = con.getNamedCache("MyTxCache");
   cache.delete("key1", null);
   cache.insert("key1", "value1");
   ut.commit();
   }
catch (Exception e)
   {
   ut.rollback();
   throw e;
   }
finally
   {
   con.close();
   }
The ConnectionFactory is then used to create a com.tangosol.coherence.transaction.Connection instance. The Connection instance represents a logical connection to a Coherence service:
Connection con = cf.createConnection("TransactionalCache");
The createConnection(ServiceName) method creates a connection that is a member of a transactional service. The service name is a String that indicates which transactional service this connection belongs to and must map to a service name that is defined in a <transactional-scheme> within a cache configuration file. For details on defining transactional cache schemes and specifying the service name, see "Defining Transactional Caches" on page 29-6. A Connection instance always has an associated transaction which is scoped within the connection. A new transaction is started when a transaction is completed. The following default behaviors are associated with a connection. For more information on the Connection interface and changing the default settings, see "Using Transactional Connections" on page 29-11.
Connections are in auto-commit mode by default which means that each statement is executed in a distinct transaction and when the statement completes the transaction is committed and the connection is associated with a new transaction.
Note:
When the connection is used for a global transaction, auto-commit mode is disabled and cannot be enabled. Cache operations are performed in a single transaction and either commit or roll back as a unit. In addition, the Connection interface's commit and rollback methods cannot be used if the connection is enlisted in a global transaction.
The connection's isolation level is set to READ_COMMITTED. The transaction can only view committed data from other transactions.
Eager mode is enabled by default, which means every operation is immediately flushed to the cluster and is not queued to be flushed in batches.
The default transaction timeout is 300 seconds.
Note:
When the connection is used for a global transaction, the transaction timeout that is associated with a connection is overridden by the transaction timeout value that is set by a container's JTA configuration. If an application attempts to set the transaction timeout value directly on the connection while it is enlisted in a global transaction, the attempt is ignored and a warning message is emitted indicating that the transaction timeout cannot be set. The original timeout value that is set on the connection is restored after the global transaction completes.
Note:
OptimisticNamedCache does not extend any operations from the ConcurrentMap interface since it uses its own locking strategy.
   InitialContext().lookup("java:comp/UserTransaction");
ut.begin();
Connection con = cf.createConnection("TransactionalCache");
try
   {
   OptimisticNamedCache cache = con.getNamedCache("MyTxCache");
   cache.delete("key1", null);
   cache.insert("key1", "value1");
   ut.commit();
The above code demonstrates a typical scenario where the connection and the named cache exist within the transaction boundaries. However, the resource adapter also supports scenarios where connections are used across transaction boundaries and are obtained before the start of a global transaction. For example:
Connection con = cf.createConnection("TransactionalCache");
try
   {
   OptimisticNamedCache cache = con.getNamedCache("MyTxCache");
   cache.delete("key1", null);

   UserTransaction ut = (UserTransaction)
      new InitialContext().lookup("java:comp/UserTransaction");
   ut.begin();
   cache.insert("key1", "value1");
   ut.commit();
Configure the Connection Factory Resource Reference
Configure the Resource Adapter Module Reference
Include the Required Libraries
</resource-ref>
In addition to the standard Java EE application component deployment descriptors, many application servers require a proprietary deployment descriptor as well. For example, WebLogic server resource references are defined in the weblogic.xml or weblogic-ejb-jar.xml files respectively:
<reference-descriptor>
   <resource-description>
      <res-ref-name>eis/CoherenceTxCF</res-ref-name>
      <jndi-name>tangosol.coherenceTx</jndi-name>
   </resource-description>
</reference-descriptor>
Consult your application server vendor's documentation for detailed information on using their proprietary application component deployment descriptors and information on alternate methods for defining resource references using dependency injection or annotations.
In addition to the standard Java EE application deployment descriptors, many application servers require a proprietary application deployment descriptor as well. For example, the Coherence resource adapter is defined in the WebLogic server weblogic-application.xml file as follows:
<weblogic-application>
   <classloader-structure>
      ...
      <module-ref>
         <module-uri>coherence-transaction.rar</module-uri>
      </module-ref>
      ...
   </classloader-structure>
</weblogic-application>
Consult your application server vendor's documentation for detailed information on using their proprietary application deployment descriptors.
/
/coherence-transaction.rar
/coherence.jar
When deploying to WebLogic server, the coherence.jar file must be placed in the /APP-INF/lib directory of the EAR file. For example:
/
/coherence-transaction.rar
/APP-INF/lib/coherence.jar
This deployment scenario results in a single Coherence cluster node that is shared by all application components in the EAR. See Administering Oracle Coherence for different Coherence deployment options.
Example 29-12 demonstrates performing cache operations within a transaction when using the CacheAdapter class with the coherence-tx.rar resource adapter.
Example 29-12 Using the CacheAdapter Class When Using coherence-tx.rar

String key = "key";
Context ctx = new InitialContext();
UserTransaction tx = null;
try
   {
   // the transaction manager from container
   tx = (UserTransaction) ctx.lookup("java:comp/UserTransaction");
   tx.begin();

   // the try-catch-finally block below is the block of code
   // that could be on an EJB and therefore automatically within
   // a transactional context
   CacheAdapter adapter = null;
   try
      {
      adapter = new CacheAdapter(ctx, "tangosol.coherenceTx",
         CacheAdapter.CONCUR_OPTIMISTIC,
         CacheAdapter.TRANSACTION_GET_COMMITTED, 0);

      NamedCache cache = adapter.getNamedCache("dist-test",
         getClass().getClassLoader());

      int n = ((Integer)cache.get(key)).intValue();
      cache.put(key, new Integer(++n));
      }
   catch (Throwable t)
      {
      String sMsg = "Failed to connect: " + t;
      System.err.println(sMsg);
      t.printStackTrace(System.err);
      }
   finally
      {
      try
         {
         adapter.close();
         }
      catch (Throwable ex)
         {
         System.err.println("SHOULD NOT HAPPEN: " + ex);
         }
      }
   }
finally
   {
   try
      {
      tx.commit();
      }
   catch (Throwable t)
      {
      String sMsg = "Failed to commit: " + t;
      System.err.println(sMsg);
      }
30
This chapter provides instructions for using data affinity and includes instructions for changing the default partition setup. The instructions are specific to distributed caches. This chapter includes the following sections:
Specifying Data Affinity
Changing the Number of Partitions
Changing the Partition Distribution Strategy
Overview of Data Affinity
Specifying Data Affinity with a KeyAssociation
Specifying Data Affinity with a KeyAssociator
Deferring the Key Association Check
Example of Using Affinity
Note:
Data affinity is specified in terms of entry keys (not values). As a result, the association information must be present in the key class. Similarly, the association logic applies to the key class, not the value class.
Affinity is specified in terms of a relationship to a partitioned key. In the Order-LineItem example above, the Order objects would be partitioned normally, and the LineItem objects would be associated with the appropriate Order object.
The association does not have to be directly tied to the actual parent key - it only must be a functional mapping of the parent key. It could be a single field of the parent key (even if it is non-unique), or an integer hash of the parent key. All that matters is that all child keys return the same associated key; it does not matter whether the associated key is an actual key (it is simply a "group id"). This fact may help minimize the size impact on the child key classes that do not contain the parent key information (as it is derived data, the size of the data may be decided explicitly, and it also does not affect the behavior of the key). Note that making the association too general (having too many keys associated with the same "group id") can cause a "lumpy" distribution (if all child keys return the same association key regardless of what the parent key is, the child keys are all assigned to a single partition, and are not spread across the cluster).
There are two ways to ensure that a set of cache entries are co-located. Note that association is based on the cache key, not the value (otherwise updating a cache entry could cause it to change partitions). Also, note that while the Order is co-located with the child LineItems, Coherence does not currently support composite operations that span multiple caches (for example, updating the Order and the collection of LineItems within a single invocation request to com.tangosol.util.InvocableMap.EntryProcessor).
import com.tangosol.net.cache.KeyAssociation;

public class LineItemId implements KeyAssociation
   {
   // {...}

   public Object getAssociatedKey()
      {
      return getOrderId();
      }

   // {...}
   }
Example 30-2
A Custom KeyAssociator
import com.tangosol.net.partition.KeyAssociator;

public class LineItemAssociator implements KeyAssociator
   {
   public Object getAssociatedKey(Object oKey)
      {
      if (oKey instanceof LineItemId)
         {
         return ((LineItemId) oKey).getOrderId();
         }
      else if (oKey instanceof OrderId)
         {
         return oKey;
         }
      else
         {
         return null;
         }
      }

   public void init(PartitionedService service)
      {
      }
   }
The key associator is configured for a NamedCache in the <distributed-scheme> element that defines the cache:
Example 30-3 Configuring a Key Associator
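The listing for this example is not reproduced here. A minimal sketch, assuming the standard <key-associator> element of <distributed-scheme> and the LineItemAssociator class from Example 30-2 (the scheme names are illustrative), might look like the following:

<distributed-scheme>
   <scheme-name>example-distributed</scheme-name>
   <service-name>DistributedCache</service-name>
   <!-- assumed element: plug in the custom associator for this service -->
   <key-associator>
      <class-name>LineItemAssociator</class-name>
   </key-associator>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>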
Note:
If the parameter is set to true, a key class implementation must be found on the cluster even if key association is not being used.
See Developing Remote Clients for Oracle Coherence (.NET and C++) for more information on deferring key association with .NET and C++ clients, respectively.
OrderId orderId = new OrderId(1234);

// this Filter is applied to all LineItem objects to fetch those
// for which getOrderId() returns the specified order identifier
// "select * from LineItem where OrderId = :orderId"
Filter filterEq = new EqualsFilter("getOrderId", orderId);

// this Filter directs the query to the cluster node that currently owns
// the Order object with the given identifier
Filter filterAsc = new KeyAssociatedFilter(filterEq, orderId);

// run the optimized query to get the ChildKey objects
Set setLineItemKeys = cacheLineItems.keySet(filterAsc);

// get all the Child objects immediately
Set setLineItems = cacheLineItems.getAll(setLineItemKeys);

// Or remove all immediately
cacheLineItems.keySet().removeAll(setLineItemKeys);
Deciding the number of Partitions
There is no exact formula for selecting a partition count. An ideal partition count balances the number of partitions on each cluster member with the amount of data each partition holds. Use the following guidelines when selecting a partition count and always perform tests to verify that the partition count is not adversely affecting performance.
The partition count should always be a prime number. A list of primes can be found at http://primes.utm.edu/lists/.
The number of partitions must be large enough to support a balanced distribution without each member managing too few partitions. For example, a partition count that results in only two partitions on each member is too constraining.
The number of partitions must not be so large that network bandwidth is wasted with transfer overhead and bookkeeping for many partition transfers (a unit of transfer is a partition). For example, transferring thousands of partitions to a new cache server member requires a greater amount of network resources and can degrade cluster performance, especially during startup.
The amount of data a partition manages must not be too large (the more data a partition manages, the higher the partition promotion and transfer costs). The amount of data a partition manages is only limited by the amount of available memory on the cache server. A partition limit of 50MB typically ensures good performance. A partition limit between 50MB-100MB (even higher with 10GbE or faster) can be used for larger clusters. Larger limits can be used with the understanding that there will be a slight increase in transfer latency and that larger heaps with more overhead space are required.
As an example, consider a cache server that is configured with a 4G heap and stores approximately 1.3G of primary data not including indexes (leaving 2/3 of the heap for backup and scratch space). If the decided partition limit is a conservative 25MB, then a single cache server can safely use 53 partitions (1365M/25M rounded down to the previous prime). Therefore, a cluster that contains 20 cache servers can safely use 1051 partitions (53*20 rounded down to the previous prime) and stores approximately 25G of primary data. A cluster of 100 cache servers can safely use 5297 partitions and can store approximately 129G of primary data.
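The partition count itself is changed in the cache configuration for the partitioned service. The following is only a sketch, assuming the <partition-count> element of <distributed-scheme> and using the 1051-partition figure from the calculation above; the scheme names are illustrative:

<distributed-scheme>
   <scheme-name>example-distributed</scheme-name>
   <service-name>DistributedCache</service-name>
   <!-- assumed element: override the default partition count for this service -->
   <partition-count>1051</partition-count>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>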
Autonomous distribution - This method of distribution requires each storage-enabled service member to calculate its own balanced share of the available partitions and to request distribution in order to balance the service.
Centralized distribution - This method of distribution uses a centralized partition assignment strategy to make a global distribution decision to be carried out by each storage-enabled member. The centralized distribution allows for more expressive distribution algorithms to be utilized and uses a more complete and global view of the service.
A custom centralized distribution strategy can be created by implementing the com.tangosol.net.partition.PartitionAssignmentStrategy interface.
legacy (deprecated) - The legacy assignment strategy indicates that partition distribution is managed individually on each cluster member. The legacy partition assignment strategy is an autonomous distribution strategy.
simple (default) - The simple assignment strategy is a centralized distribution strategy that attempts to balance partition distribution while ensuring machine-safety and is more deterministic and efficient than the legacy strategy.
mirror:<service-name> - The mirror assignment strategy is a centralized distribution strategy that attempts to co-locate the service's partitions with the partitions of the specified service. This strategy is used to increase the likelihood that key-associated, cross-service cache access remains local to a member.
custom - a class that implements the com.tangosol.net.partition.PartitionAssignmentStrategy interface.
To configure a partition assignment strategy for a specific partitioned cache service, add a <partition-assignment-strategy> element within a distributed cache definition:
<distributed-scheme>
   ...
   <partition-assignment-strategy>mirror:<MyService></partition-assignment-strategy>
   ...
</distributed-scheme>
To configure the partition assignment strategy for all instances of the distributed cache service type, override the partitioned cache service's partition-assignment-strategy initialization parameter in an operational override file. For example:
<?xml version='1.0'?>

<coherence xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-operational-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-operational-config
   coherence-operational-config.xsd">
   <cluster-config>
      <services>
         <service id="3">
            <init-params>
               <init-param id="21">
                  <param-name>partition-assignment-strategy</param-name>
                  <param-value>mirror:<MyService></param-value>
               </init-param>
            </init-params>
         </service>
      </services>
   </cluster-config>
</coherence>
A custom partition assignment strategy is configured by adding an <instance> subelement and providing the fully qualified name of a class that implements the com.tangosol.net.partition.PartitionAssignmentStrategy interface. The following example enables a partition assignment strategy that is implemented in the MyPAStrategy class.
<distributed-scheme>
   ...
   <partition-assignment-strategy>
      <instance>
         <class-name>package.MyPAStrategy</class-name>
      </instance>
   </partition-assignment-strategy>
   ...
</distributed-scheme>
As an alternative, the <instance> element supports the use of a <class-factory-name> element to use a factory class that is responsible for creating PartitionAssignmentStrategy instances, and a <method-name> element to specify the static factory method on the factory class that performs object instantiation. The following example gets a strategy instance using the getStrategy method on the MyPAStrategyFactory class.
<distributed-scheme>
   ...
   <partition-assignment-strategy>
      <instance>
         <class-factory-name>package.MyPAStrategyFactory</class-factory-name>
         <method-name>getStrategy</method-name>
      </instance>
   </partition-assignment-strategy>
   ...
</distributed-scheme>
Any initialization parameters that are required for an implementation can be specified using the <init-params> element. The following example sets the iMaxTime parameter to 2000.
<distributed-scheme>
   ...
   <partition-assignment-strategy>
      <instance>
         <class-name>package.MyPAStrategy</class-name>
         <init-params>
            <init-param>
               <param-name>iMaxTime</param-name>
               <param-value>2000</param-value>
            </init-param>
         </init-params>
      </instance>
   </partition-assignment-strategy>
   ...
</distributed-scheme>
31 Priority Tasks
This chapter provides instructions for controlling the execution behavior of Coherence service threads using task timeouts and the PriorityTask API for custom execution processing. This chapter includes the following sections:
Overview of Priority Tasks
Setting Priority Task Timeouts
Creating Priority Task Execution Objects
Table 31-1 Execution Timeout Elements

<request-timeout>: Specifies the default timeout value for requests that can time out (for example, implement the PriorityTask interface), but do not explicitly specify the request timeout value. The request time is measured on the client side as the time elapsed from the moment a request is sent for execution to the corresponding server node(s) and includes the following:
1. The time it takes to deliver the request to an executing node (server).
2. The interval between the time the task is received and placed into a service queue until the execution starts.
3. The task execution time.
4. The time it takes to deliver a result back to the client.
If the value does not contain a unit, a unit of milliseconds is assumed. Legal values are positive integers or zero (indicating no default timeout). The default value is an infinite timeout (0s) for clustered client requests and 30 seconds (30s) for non-clustered client requests.

<task-timeout>: Specifies the default timeout value for tasks that can be timed out (for example, entry processors that implement the PriorityTask interface), but do not explicitly specify the task execution timeout value. The task execution time is measured on the server side and does not include the time spent waiting in a service backlog queue before being started. This attribute is applied only if the thread pool is used (the thread-count value is positive). If zero is specified, the default service-guardian <timeout-milliseconds> value is used.

<task-hung-threshold>: Specifies the amount of time in milliseconds that a task can execute before it is considered "hung". Note: a posted task that has not yet started is never considered hung. This attribute is applied only if the thread pool is used (the thread-count value is positive).
The following example sets a distributed cache's thread count to 7 with a task timeout of 5000 milliseconds and a task hung threshold of 10000 milliseconds:
Example 31-1 Sample Task Time and Task Hung Configuration

<distributed-scheme>
   ...
   <thread-count>7</thread-count>
   <task-timeout>5000ms</task-timeout>
   <task-hung-threshold>10000ms</task-hung-threshold>
   ...
</distributed-scheme>
Note: The request-timeout should always be longer than the task-hung-threshold or the task-timeout.
tangosol.coherence.replicated.request.timeout: The default client request timeout for the Replicated cache service
tangosol.coherence.optimistic.request.timeout: The default client request timeout for the Optimistic cache service
tangosol.coherence.distributed.request.timeout: The default client request timeout for distributed cache services
tangosol.coherence.distributed.task.timeout: The default server execution timeout for distributed cache services
tangosol.coherence.distributed.task.hung: The default time before a thread is reported as hung by distributed cache services
tangosol.coherence.invocation.request.timeout: The default client request timeout for invocation services
tangosol.coherence.invocation.task.hung: The default time before a thread is reported as hung by invocation services
tangosol.coherence.invocation.task.timeout: The default server execution timeout for invocation services
tangosol.coherence.proxy.request.timeout: The default client request timeout for proxy services
tangosol.coherence.proxy.task.timeout: The default server execution timeout for proxy services
tangosol.coherence.proxy.task.hung: The default time before a thread is reported as hung by proxy services
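For example, the distributed cache service defaults might be tightened for an entire cluster by supplying these properties on the command line when starting a cache server; the millisecond values shown are illustrative, not recommendations.

java -Dtangosol.coherence.distributed.request.timeout=60000 \
     -Dtangosol.coherence.distributed.task.timeout=30000 \
     -Dtangosol.coherence.distributed.task.hung=45000 \
     com.tangosol.net.DefaultCacheServer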
- SCHEDULE_STANDARD: a task is scheduled for execution in a natural order (based on the request arrival time)
- SCHEDULE_FIRST: a task is scheduled in front of any equal or lower scheduling priority tasks and executed as soon as any of the worker threads become available
- SCHEDULE_IMMEDIATE: a task is immediately executed by any idle worker thread; if all of them are active, a new thread is created to execute this task
- PriorityProcessor can be extended to create a custom entry processor.
- PriorityFilter can be extended to create a custom priority filter.
- PriorityAggregator can be extended to create a custom aggregation.
- PriorityTask can be extended to create a priority invocation class.
After extending each of these classes, the developer must implement several methods. The return values for getRequestTimeoutMillis, getExecutionTimeoutMillis, and getSchedulingPriority should be stored on a class-by-class basis in your application configuration parameters. These methods are described in Table 31-3.
Table 31-3 Methods to Support Task Timeout

public long getRequestTimeoutMillis(): Obtains the maximum amount of time a calling thread is willing to wait for a result of the request execution. The request time is measured on the client side as the time elapsed from the moment a request is sent for execution to the corresponding server node(s) and includes: the time it takes to deliver the request to the executing node(s); the interval between the time the task is received and placed into a service queue until the execution starts; the task execution time; the time it takes to deliver a result back to the client. The value of TIMEOUT_DEFAULT indicates a default timeout value configured for the corresponding service; the value of TIMEOUT_NONE indicates that the client thread can wait indefinitely until the task execution completes or is canceled by the service due to a task execution timeout specified by the getExecutionTimeoutMillis() value.

public long getExecutionTimeoutMillis(): Obtains the maximum amount of time this task is allowed to run before the corresponding service attempts to stop it. The value of TIMEOUT_DEFAULT indicates a default timeout value configured for the corresponding service; the value of TIMEOUT_NONE indicates that this task can execute indefinitely. If, by the time the specified amount of time has passed, the task has not finished, the service attempts to stop the execution by using the Thread.interrupt() method. In the case that interrupting the thread does not result in the task's termination, the runCanceled method is called.

public int getSchedulingPriority(): Obtains this task's scheduling priority. Valid values are SCHEDULE_STANDARD, SCHEDULE_FIRST, and SCHEDULE_IMMEDIATE.

public void runCanceled(boolean fAbandoned): This method is called if and only if all attempts to interrupt this task were unsuccessful in stopping the execution or if the execution was canceled before it had a chance to run at all. Because this method is usually called on a service thread, implementors must exercise extreme caution since any delay introduced by the implementation causes a delay of the corresponding service.
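As a sketch of how these methods fit together, the following hypothetical class extends PriorityProcessor and hard-codes its timeouts and scheduling priority; in practice the return values would come from application configuration as noted above, and the class name and timeout values are purely illustrative.

import com.tangosol.net.PriorityTask;
import com.tangosol.util.InvocableMap;
import com.tangosol.util.processor.PriorityProcessor;

public class TimedPriorityProcessor extends PriorityProcessor
    {
    public TimedPriorityProcessor(InvocableMap.EntryProcessor processor)
        {
        super(processor);
        }

    @Override
    public long getRequestTimeoutMillis()
        {
        return 5000L;                        // client waits at most 5 seconds for a result
        }

    @Override
    public long getExecutionTimeoutMillis()
        {
        return 2000L;                        // server attempts to stop the task after 2 seconds
        }

    @Override
    public int getSchedulingPriority()
        {
        return PriorityTask.SCHEDULE_FIRST;  // run ahead of standard-priority tasks
        }

    @Override
    public void runCanceled(boolean fAbandoned)
        {
        // called on a service thread if interruption fails or the task never ran; keep this short
        }
    }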
com.tangosol.net.RequestTimeoutException: Request timed out after 4015 millis
at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.checkRequestTimeout(Service.CDB:8)
at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.poll(Service.CDB:52)
at com.tangosol.coherence.component.util.daemon.queueProcessor.Service.poll(Service.CDB:18)
at com.tangosol.coherence.component.util.daemon.queueProcessor.service.InvocationService.query(InvocationService.CDB:17)
at com.tangosol.coherence.component.util.safeService.SafeInvocationService.query(SafeInvocationService.CDB:1)
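On the client, the timeout surfaces as a com.tangosol.net.RequestTimeoutException, which is a runtime exception and can be caught as such. A minimal sketch (the cache name and the wrapped MyEntryProcessor are hypothetical, reusing the TimedPriorityProcessor sketch above):

NamedCache cache = CacheFactory.getCache("dist-example");
try
    {
    cache.invoke("key-1", new TimedPriorityProcessor(new MyEntryProcessor()));
    }
catch (RequestTimeoutException e)
    {
    // the client-side request timeout elapsed before a result was returned;
    // the task may still be running or may have been canceled on the server
    }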
32 Constraints on Re-entrant Calls
This chapter describes the constraints on service re-entrant calls and provides general guidelines for making calls between service threads. This chapter includes the following sections:
Overview of Constraints on Re-Entrant Calls
Re-entrancy, Services, and Service Threads
Re-entrancy and Listeners
- Embed the child objects within the parent object (using an "aggregate" pattern), or
- Use direct access to the server-side backing map (which requires advanced knowledge to do safely), or
- Run the logic on another service (for example, Invocation targeted by using PartitionedService.getKeyOwner), and have that service access the data by using NamedCache interfaces, or
- Place the child objects on another service which would allow re-entrant calls (but incur network access since there is no affinity between partitions in different cache services).
Using the aggregate pattern is probably the best solution for most use cases. However, if this is impractical (due to size restrictions, for example), and there is a requirement to access both the parent and child objects without using a client/server model, the Invocation service approach is probably the best compromise for most use cases.
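A minimal sketch of the Invocation service approach, given a parent key, assuming a cache named parents, an invocation service named MyInvocationService, and a hypothetical Invocable called UpdateParentAndChildren (none of these names come from the Coherence API):

NamedCache parents = CacheFactory.getCache("parents");
PartitionedService service = (PartitionedService) parents.getCacheService();

// Find the member that currently owns the parent key, then run the logic there
Member owner = service.getKeyOwner(parentKey);
InvocationService invocation =
    (InvocationService) CacheFactory.getService("MyInvocationService");
invocation.query(new UpdateParentAndChildren(parentKey), Collections.singleton(owner));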
Avoiding Deadlock
Even when re-entrancy is allowed, one should be very careful to avoid saturating the thread pool and causing catastrophic deadlock. For example, if service A calls service B, and service B calls service A, there is a possibility that enough concurrent calls could fill a thread pool, which would cause a form of deadlock. As with traditional locking, using ordered access (for example, service A can call service B, but not vice versa) can help. So:
- Service A calling into service A is never allowed.
- Service A calling into service B, and service B calling back into service A, is technically allowed but is deadlock-prone and should be avoided if at all possible.
- Service A calling into service B, and service B calling into service C, and service C calling back into service A, is similarly restricted.
- Service A calling into service B, and service B calling into service C, and service A calling into service C, is similarly allowed.
A service thread is defined as any thread involved in fulfilling a Coherence API request. Service threads may invoke any of the following entities:
- Map Listeners
- Membership Listeners
- Custom Serialization/Deserialization, such as ExternalizableLite implementations
- Backing Map Listeners
- CacheLoader/CacheStore Modules
- Query logic, such as Aggregators, Filters, ValueExtractors, and Comparators
- Entry Processors
- Triggers
- InvocationService Invocables
These entities should never make re-entrant calls back into their own services.
A Operational Configuration Elements
This appendix provides a detailed reference of the operational deployment descriptor elements and briefly describes the deployment descriptor files in which these elements can appear. This appendix includes the following sections:
Operational Deployment Descriptor
Operational Override File
Element Reference
Attribute Reference
Notes:
- The schema located in the coherence.jar library is always used at run time even if the xsi:schemaLocation attribute references the Web URL.
- The xsi:schemaLocation attribute can be omitted to disable schema validation.
- When deploying Coherence into environments where the default character set is EBCDIC rather than ASCII, ensure that the deployment descriptor file is in ASCII format and is deployed into its run-time environment in the binary format.
Element Reference
Table A1 lists all non-terminal operational configuration elements.
Table A1 Non-Terminal Operational Configuration Elements (Element: Used in)

access-controller: security-config
address-provider: well-known-addresses, address-providers
address-providers: cluster-config
authorized-hosts: cluster-config
cache-factory-builder-config: coherence
callback-handler: security-config
cluster-config: coherence
cluster-quorum-policy: cluster-config
coherence: root element
configurable-cache-factory-config: coherence
filter: filters
filters: cluster-config
flashjournal-manager: journaling-config
flow-control: packet-delivery
host-range: authorized-hosts
identity-asserter: security-config
identity-manager: ssl
identity-transformer: security-config
incoming-message-handler: cluster-config
init-param: init-params
init-params: access-controller, address-provider, callback-handler, configurable-cache-factory-config, filter, service
instance: socket-provider, service-failure-policy
journaling-config: cluster-config
key-store: identity-manager, trust-manager
license-config: coherence
logging-config: coherence
management-config: coherence
mbean: mbeans
mbeans: management-config
mbean-filter: management-config
member-identity: cluster-config
multicast-listener: cluster-config
Table A1 (Cont.) Non-Terminal Operational Configuration Elements (Element: Used in)

notification-queueing: packet-publisher
outgoing-message-handler: cluster-config
outstanding-packets: flow-control
packet-buffer: multicast-listener, packet-publisher, unicast-listener
packet-bundling: packet-delivery
packet-delivery: packet-publisher
packet-publisher: cluster-config
packet-size: packet-publisher
packet-speaker: cluster-config
pause-detection: flow-control
provider: ssl, identity-manager, trust-manager
ramjournal-manager: journaling-config
reporter: management-config
security-config: coherence
serializer: serializers
serializers: cluster-config
service-guardian: cluster-config
service: services
services: cluster-config
shutdown-listener: cluster-config
socket-address: well-known-addresses
socket-provider: socket-providers, unicast-listener
socket-providers: cluster-config
ssl: socket-provider
tcp-ring-listener: cluster-config
traffic-jam: packet-publisher
trust-manager: ssl
unicast-listener: cluster-config
volume-threshold: packet-speaker
well-known-addresses: unicast-listener
Element Reference
access-controller
Used in: security-config.
Description
The access-controller element contains the configuration information for the class that implements the com.tangosol.net.security.AccessController interface, which is used by the Coherence Security Framework to check access rights and encrypt/decrypt node-to-node communications.
Elements
Table A2 describes the subelements of the access-controller element.
Table A2 access-controller Subelements

<class-name> (Required): Specifies the name of a Java class that implements the com.tangosol.net.security.AccessController interface, which is used by the security framework to check access rights for clustered resources and encrypt/decrypt node-to-node communications regarding those rights. See Securing Oracle Coherence for more information on using an access controller. The default value is com.tangosol.net.security.DefaultController.

<init-params> (Optional): Contains one or more initialization parameter(s) for a class that implements the AccessController interface. For the default AccessController implementation the parameters are the paths to the key store file and permissions description file, specified as follows:
<init-params>
   <init-param id="1">
      <param-type>java.io.File</param-type>
      <param-value system-property="tangosol.coherence.security.keystore"></param-value>
   </init-param>
   <init-param id="2">
      <param-type>java.io.File</param-type>
      <param-value system-property="tangosol.coherence.security.permissions"></param-value>
   </init-param>
</init-params>
The preconfigured system property overrides based on the default AccessController implementation and the default parameters as specified above are tangosol.coherence.security.keystore and tangosol.coherence.security.permissions. For more information on the subelements of the init-param element, see "init-param" on page A-28.
address-provider
Used in: well-known-addresses, address-providers
Description
The address-provider element specifies either socket address information (IP, or DNS name, and port) or an implementation of the com.tangosol.net.AddressProvider interface. The interface offers a programmatic way to define socket addresses. Within the well-known-addresses element, use the address-provider element to specify an address factory that implements the com.tangosol.net.AddressProvider interface. The interface offers a programmatic way to define Well Known Addresses (WKA) members. For details on WKA, see "Using Well Known Addresses" on page 7-9. The following example demonstrates using the address-provider element within the well-known-addresses element:
<unicast-listener> <well-known-addresses> <address-provider> <class-name>package.MyAddressProvider</class-name> </address-provider> </well-known-addresses> </unicast-listener>
Within the address-providers element, use the address-provider element to specify a socket address or an address factory that implements the com.tangosol.net.AddressProvider interface. The address provider definitions are referenced by TCP/IP acceptors and TCP/IP initiators which are used to setup Coherence*Extend. The tcp-acceptor, remote-addresses, and name-service-addresses elements in the cache configuration file reference address provider definitions using the specified id attribute value. For details on the tcp-acceptor element, see "tcp-acceptor" on page B-111. For details on the remote-addresses element, see "remote-addresses" on page B-99. For details on the name-service-addresses element, see "name-service-addresses" on page B-66. The following example demonstrates defining address provider definitions within the <address-providers> element.
<address-providers> <address-provider id="ap1"> <class-name>package.MyAddressProvider</class-name> </address-provider> <address-provider id="ap2"> <socket-address> <address>192.168.1.3</address> <port>9999</port> </socket-address> </address-provider> <address-provider id="ap3"> <socket-address> <address>192.168.1.4</address> <port>9999</port> </socket-address> </address-provider> </address-providers>
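For illustration, a class such as package.MyAddressProvider referenced above might be implemented as follows. This is a minimal sketch assuming the AddressProvider contract of getNextAddress(), accept(), and reject(Throwable); the class name and addresses are placeholders.

import java.net.InetSocketAddress;
import java.util.Arrays;
import java.util.Iterator;
import com.tangosol.net.AddressProvider;

public class MyAddressProvider implements AddressProvider
    {
    private final Iterator<InetSocketAddress> m_iter = Arrays.asList(
            new InetSocketAddress("192.168.1.3", 9999),
            new InetSocketAddress("192.168.1.4", 9999)).iterator();

    public InetSocketAddress getNextAddress()
        {
        return m_iter.hasNext() ? m_iter.next() : null;   // null signals the end of the list
        }

    public void accept()
        {
        // the last returned address was usable
        }

    public void reject(Throwable eCause)
        {
        // the last returned address could not be used
        }
    }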
Elements
Table A3 describes the subelements of the address-provider element.
Table A3 address-provider Subelements

<class-name> (Optional): Specifies the fully qualified name of a class that implements the com.tangosol.net.AddressProvider interface. This element cannot be used with the <class-factory-name> element.

<class-factory-name> (Optional): Specifies the fully qualified name of a factory class for creating implementation class instances. This element cannot be used with the <class-name> element and is used with the <method-name> element.

<method-name> (Optional): Specifies the name of a static factory method on the factory class which performs object instantiation.

<init-params> (Optional): Specifies initialization parameters which are accessible by implementations that include a public constructor with a matching signature. Initialization parameters can be specified when using both the <class-name> element and the <class-factory-name> element.

<socket-address> (Optional): Specifies the address (IP, or DNS name, and port) to which a socket is bound. This element should only be used when defining socket addresses for a TCP/IP acceptor. This element cannot be used if an address provider implementation is defined using the <class-name> or <class-factory-name> element.
address-providers
Used in: cluster-config
Description
The address-providers element contains the declarative data for each address provider.
Elements
Table A4 describes the subelements of the address-providers element.
Table A4 Element <address-provider> address-providers Subelements Required/ Optional Optional Description Specifies either socket address information or an implementation of the com.tangosol.net.AddressProvider interface. Multiple address-provider elements can be specified.
authorized-hosts
Used in: cluster-config.
Description
If specified, restricts cluster membership to the cluster nodes specified in the collection of unicast addresses, or address range. The unicast address is the address value from the authorized cluster nodes' unicast-listener element. Any number of host-address and host-range elements may be specified.
Elements
Table A5 describes the subelements of the authorized-hosts element.
Table A5 authorized-hosts Subelements

<host-address> (Optional): Specifies an IP address or host name. If any are specified, only hosts with specified host-addresses or within the specified host-ranges are allowed to join the cluster. The content override attribute id can be optionally used to fully or partially override the contents of this element with an XML document that is external to the base document.

<host-range> (Optional): Specifies a range of IP addresses. If any are specified, only hosts with specified host-addresses or within the specified host-ranges are allowed to join the cluster. The content override attribute id can be optionally used to fully or partially override the contents of this element with an XML document that is external to the base document.

<host-filter> (Optional): Specifies class configuration information for a com.tangosol.util.Filter implementation that is used by the cluster to determine whether to accept a new cluster member. The evaluate() method is passed the java.net.InetAddress of the client. Implementations should return true to allow the new member to join the cluster.
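For example, an authorized-hosts declaration might combine a single address with an address range; the addresses shown are placeholders.

<cluster-config>
   <authorized-hosts>
      <host-address>192.168.1.10</host-address>
      <host-range>
         <from-address>192.168.1.20</from-address>
         <to-address>192.168.1.30</to-address>
      </host-range>
   </authorized-hosts>
</cluster-config>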
cache-factory-builder-config
Used in: coherence
Description
The cache-factory-builder-config element contains the configuration information for constructing an instance of the com.tangosol.net.CacheFactoryBuilder interface. The default implementation is the com.tangosol.net.DefaultCacheFactoryBuilder class, which can be extended in advanced use-cases to provide further domain-specific logic for creating and managing ConfigurableCacheFactory instances. A custom CacheFactoryBuilder implementation is used to build and manage multiple cache factory configurations across multiple class loaders. This is an advanced use case that allows applications that are scoped by different class loaders to use separate cache configuration files (as is the case with JavaEE and OSGI). For example, the following code uses a custom ConfigurableCacheFactory implementation from two classloaders.
CacheFactoryBuilder cfb = CacheFactory.getCacheFactoryBuilder(); //load the first configuration cfb.getConfigurableCacheFactory("example-config.xml", loader0); CacheFactory.ensureCluster(); NamedCache cache = CacheFactory.getCache("dist-example"); //load the second configuration cfb.getConfigurableCacheFactory("example-config1.xml", loader1); CacheFactory.ensureCluster(); NamedCache cache1 = CacheFactory.getCache("dist-example1");
Elements
Table A6 describes the subelements of the cache-factory-builder-config element.
Table A6 cache-factory-builder-config Subelements

<class-name> (Optional): Specifies the name of a Java class that implements the com.tangosol.net.CacheFactoryBuilder interface. The default value is com.tangosol.net.DefaultCacheFactoryBuilder.

<init-params> (Optional): Contains initialization parameters for the cache factory builder implementation.

<scope-resolver> (Optional): Specifies the configuration information for a class that implements the com.tangosol.net.ScopeResolver interface. A scope resolver implementation provides the ability to modify the scope name for a given ConfigurableCacheFactory at run time to enforce (or disable) isolation between applications running in the same cluster. The custom scope resolver implementation is specified within a <class-name> subelement. See Java API Reference for Oracle Coherence for details on the ScopeResolver interface. See the <scope-name> subelement of the <cache-config> element on page B-22 for details on specifying a scope name within a cache configuration file.
callback-handler
Used in: security-config. Table A7 describes the subelements of the callback-handler element.
Table A7 Element <class-name> <init-params> callback-handler Subelement Required/ Optional Required Optional Description Specifies the name of a Java class that provides the implementation for the javax.security.auth.callback.CallbackHandler interface. Contains one or more initialization parameter(s) for a CallbackHandler implementation.
cluster-config
Used in: <coherence>
Description
Contains the cluster configuration information, including communication and service parameters.
Elements
Table A8 describes the subelements of the cluster-config element.
Table A8 cluster-config Subelements

<member-identity> (Optional): Specifies detailed identity information that is useful for defining the location and role of the cluster member.
<unicast-listener> (Required): Specifies the configuration information for the unicast listener, used for receiving point-to-point network communications.
<multicast-listener> (Required): Specifies the configuration information for the multicast listener, used for receiving point-to-multipoint network communications.
<tcp-ring-listener>: Specifies configuration information for the TCP ring listener, used for death detection.
<shutdown-listener>: Specifies the action to take upon receiving an external shutdown request.
<service-guardian>: Specifies the configuration information for the service guardians, used for detecting and resolving service deadlock.
<packet-speaker>: Specifies configuration information for the packet speaker, used for network data transmission.
<packet-publisher>: Specifies configuration information for the packet publisher, used for managing network data transmission.
<incoming-message-handler>: Specifies configuration information for the incoming message handler, used for dispatching incoming cluster communications.
<outgoing-message-handler> (Required): Specifies configuration information for the outgoing message handler, used for dispatching outgoing cluster communications.
<authorized-hosts>: Specifies the hosts which are allowed to join the cluster.
<services>: Specifies the declarative data for all available Coherence services.
<filters>: Specifies data transformation filters, which can perform custom transformations on data being transferred between cluster nodes.
<serializers> (Optional): Specifies any number of serializer class configurations that implement com.tangosol.io.Serializer.
<address-providers> (Optional): Specifies any number of address provider definitions that can be referenced by a proxy service's TCP/IP acceptor.
Table A8 (Cont.) cluster-config Subelements

<socket-providers> (Required): Contains socket provider definitions.
<cluster-quorum-policy> (Optional): Contains the configuration information for the quorum-based action policy for the Cluster service.
<journaling-config> (Optional): Specifies configuration for the journaling subsystem.
cluster-quorum-policy
Used in: <cluster-config>
Description
The cluster-quorum-policy element contains quorum policy settings for the Cluster service.
Element
Table A9 describes the subelements of the cluster-quorum-policy element.
Table A9 Element <timeout-survivor-quorum> cluster-quorum-policy-scheme Subelements Required/ Optional Optional Description Specifies the minimum number of cluster members that must remain to terminate one or more cluster members due to a detected network timeout, irrespective of the root cause. The value must be a nonnegative integer. Use the role attribute to specify this value for cluster members of a given role (as defined in the <role-name> element). For example: <timeout-survivor-quorum role="Server">50 </timeout-survivor-quorum> <class-name> Optional Specifies a class that provides custom quorum policies. This element cannot be used with the <timeout-survivor-quorum> or the <class-factory-name> element. The class must implement the com.tangosol.net.ActionPolicy interface. Initialization parameters can be specified using the <init-params> element. <class-factory-name> Optional Specifies a factory class for creating custom action policy instances. This element cannot be used with the <timeout-survivor-quorum> or <class-name> elements. This element is used with the <method-name> element. The action policy instances must implement the com.tangosol.net.ActionPolicy interface. In addition, initialization parameters can be specified using the <init-params> element.
coherence
root element
Description
The coherence element is the root element of the operational deployment descriptor tangosol-coherence.xml.
Elements
Table A10 describes the subelements of the coherence element.
Table A10 coherence Subelements

<cluster-config> (Required): Contains the cluster configuration information. This element is where most communication and service parameters are defined.
<logging-config> (Required): Contains the configuration information for the logging facility.
<configurable-cache-factory-config>: Contains configuration information for the configurable cache factory, which controls from where and how the cache configuration settings are loaded.
<cache-factory-builder-config>: Contains the configuration information for a cache factory builder, which allows building and managing multiple cache factory configurations across multiple class loaders.
<management-config> (Required): Contains the configuration information for the Coherence Management Framework. See Managing Oracle Coherence for more information.
<security-config> (Optional): Contains the configuration information for the Coherence Security Framework.
<license-config> (Optional): Contains the edition and operational mode configuration.
configurable-cache-factory-config
Used in: coherence
Description
The configurable-cache-factory-config element contains the configuration information for constructing an instance of the com.tangosol.net.ConfigurableCacheFactory interface. The default implementation is the com.tangosol.net.ExtensibleConfigurableCacheFactory class. Using a custom ConfigurableCacheFactory implementation is an advanced use case and is typically used to allow applications that are scoped by different class loaders to use separate cache configuration files (as is the case with JavaEE and OSGI). The following example loads two configuration files which contain different cache definitions and use different ClassLoaders.
//load the first configuration and use a cache ConfigurableCacheFactory eccf= new ExtensibleConfigurableCacheFactory("example-config.xml", loader0); NamedCache cache = eccf.ensureCache("dist-example", loader0); cache.put(key, value); //load the second cache configuration and use a cache ConfigurableCacheFactory eccf1= new ExtensibleConfigurableCacheFactory("example-config1.xml", loader1); NamedCache cache1 = eccf1.ensureCache("dist-example1", loader1); cache1.put(key, value);
Note:
This example requires each cache definition to use a different service name; otherwise, an exception is thrown indicating that the service was started by a factory with a different configuration descriptor.
Elements
Table A11 describes the subelements of the configurable-cache-factory-config element.
Table A11 configurable-cache-factory-config Subelements

<class-name> (Required): Specifies the name of a Java class that implements the com.tangosol.net.ConfigurableCacheFactory interface. The default value is com.tangosol.net.ExtensibleConfigurableCacheFactory. The preconfigured system property override is tangosol.coherence.cachefactory.

<init-params> (Optional): Contains initialization parameters for the cache configuration factory implementation. For the default cache configuration factory class, a single parameter is used as follows:
<init-param>
   <param-type>java.lang.String</param-type>
   <param-value>coherence-cache-config.xml</param-value>
</init-param>
Unless an absolute or relative path is specified, such as with ./path/to/config.xml, the application's classpath is used to find the specified descriptor. The preconfigured system property override is tangosol.coherence.cacheconfig.
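For example, the default cache configuration descriptor can be replaced without editing any files by supplying the override property on the command line; the path shown is a placeholder.

java -Dtangosol.coherence.cacheconfig=/opt/app/my-cache-config.xml com.tangosol.net.DefaultCacheServer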
filter
Used in: filters.
Description
Data transformation filters can be used by a service to apply a custom transformation on data being transferred between cluster nodes. This can be used for instance to compress network traffic. See the <filter-class> element for more information.
Implementation
Data transformation filters are implementations of the com.tangosol.util.WrapperStreamFactory interface.
Note: Data transformation filters are not related to com.tangosol.util.Filter, which is part of the Coherence API for querying caches.
Elements
Table A12 describes the subelements of each filter element.
Table A12 filter Subelements

<filter-name> (Required): Specifies the canonical name of the filter. This name is unique within the cluster. For example: gzip.

<filter-class> (Required): Specifies the class name of the filter implementation. This class must have a zero-parameter public constructor and must implement the com.tangosol.util.WrapperStreamFactory interface.

<init-params> (Optional): Specifies initialization parameters for configuring filters. For example, when using a com.tangosol.net.CompressionFilter the parameters are specified as follows:
<init-param>
   <param-name>strategy</param-name>
   <param-value>gzip</param-value>
</init-param>
<init-param>
   <param-name>level</param-name>
   <param-value>default</param-value>
</init-param>
For more information on the parameter values for the standard filters, refer to "Using Network Filters" on page 10-12.
filters
Used in cluster-config.
Description
The filters element contains the declarative data for each filter.
Elements
Table A13 describes the subelements of the filters element.
Table A13 Element <filter> filters Subelements Required/ Optional Optional Description The filter element contains the declarative data of a particular filter.
flashjournal-manager
Used in: journaling-config
Description
The <flashjournal-manager> element contains the configuration for a flash journal resources manager, which manages I/O for temporary journal-based files to a solid state device.
Elements
Table A14 describes the subelements of the flashjournal-manager element.
Table A14 flashjournal-manager Subelements

<minimum-load-factor> (Optional): Specifies the factor of live data below which a journal file is eligible for compaction (garbage collection).

<maximum-value-size> (Optional): Specifies the maximum size, in bytes, of binary values that are to be stored in the flash journal. The value cannot exceed 64MB. The default value is 64MB.

<maximum-file-size> (Optional): Specifies the maximum file size of the underlying journal files. The value must be a power of two and a multiple of the block size. The value must be between 1MB and 4GB. The default value is 2GB.

<block-size> (Optional): Specifies the size of the write buffers in which writes to an underlying disk file occur. The size should match or be a multiple of the physical device's optimal block size and must be a power of two. The value must be between 4KB and 1MB. The default value is 256KB.

<maximum-pool-size> (Optional): Specifies the size, in bytes, for the buffer pool. The size does not limit the number of buffers that can be allocated or that can exist at any point in time. The size only determines the amount of buffers that are recycled. The pool size cannot exceed 1GB. The default value is 16MB.

<directory> (Optional): Specifies the directory where the journal files should be placed. The directory must exist and is not created at run time. If the directory does not exist or is not specified, the JVM/operating system default temporary directory is used. The suggested location is a local flash (SSD) drive. Specifying a directory that is located on a drive which is shared by other applications or system operations increases the potential for unplanned space usage. Use a directory location on a non-shared disk partition to ensure a more predictable environment.
Table A14 (Cont.) flashjournal-manager Subelements

<async-limit> (Optional): Specifies the maximum size, in bytes, of the backlog. The backlog is the amount of data that has yet to be persisted. Client threads are blocked if the configured limit is exceeded and remain blocked until the backlog recedes below the limit. This helps prevent out-of-memory conditions. Note: The maximum amount of memory used by the backlog is at least twice the configured amount, since the data is in binary form and rendered to the write-behind buffers. The value must be between 4KB and 1GB. The default value is 16MB.

<tmp-purge-delay> (Optional): Specifies the amount of time to wait before a temporary file that is used by the Journaling subsystem is eligible for removal. The delay begins after the file is last used. The default value is 2 hours.

<high-journal-size> (Optional): Specifies the soft limit, in bytes, on the journal size. The soft limit allows the compaction (garbage collection) thread to tune itself to remove stale values and keep the journal within the soft limit. This is not a hard limit and the journal can still grow up to the maximum file count (512). The default value is 11GB. The preconfigured system property override is -Dtangosol.coherence.flashjournal.highjournalsize.
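As an illustration, an operational override might point the flash journal at a dedicated SSD partition and raise the soft size limit. The directory and size are placeholders, and this sketch assumes memory-size values are accepted in the same form as the defaults shown above.

<cluster-config>
   <journaling-config>
      <flashjournal-manager>
         <directory>/mnt/ssd/coherence-journal</directory>
         <high-journal-size>20GB</high-journal-size>
      </flashjournal-manager>
   </journaling-config>
</cluster-config>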
flow-control
Used in: packet-delivery.
Description
The flow-control element contains configuration information related to packet throttling and remote GC detection.
Elements
Table A15 describes the subelements of the flow-control element.
Table A15 flow-control Subelements

<enabled> (Optional): Specifies if flow control is enabled. Valid values are true or false. The default value is true.
<pause-detection> (Optional): Defines the number of packets that are resent to an unresponsive cluster node after which the node is assumed to be paused.
<outstanding-packets> (Optional): Defines the number of unconfirmed packets that are sent to a cluster node before packets addressed to that node are deferred.
host-range
Used in: authorized-hosts.
Description
Specifies a range of unicast addresses of nodes which are allowed to join the cluster.
Elements
Table A16 describes the subelements of each host-range element.
Table A16 host-range Subelements

<from-address> (Required): Specifies the starting IP address for a range of host addresses. For example: 198.168.1.1.
<to-address> (Required): Specifies the ending IP address (inclusive) for a range of hosts. For example: 198.168.2.255.
identity-asserter
Used in: security-config
Description
The <identity-asserter> element contains the configuration information for a class that implements the com.tangosol.net.security.IdentityAsserter interface. The class is called to validate an identity token to establish a user's identity and is used on a Coherence*Extend proxy server. The identity asserter is used with an identity transformer (used on a Coherence*Extend client) to ensure that only valid clients are allowed to connect to an extend proxy.
Elements
Table A17 describes the subelements of the <identity-asserter> element.
Table A17 identity-asserter Subelements

<class-name> (Optional): Specifies a class that implements com.tangosol.net.security.IdentityAsserter. This element cannot be used with the <class-factory-name> element.

<class-factory-name> (Optional): Specifies a factory class for creating asserter instances. The instances must implement com.tangosol.net.security.IdentityAsserter. This element cannot be used with the <class-name> element. This element can be used with the <method-name> element.

<method-name> (Optional): Specifies the name of a static factory method on the factory class which performs object instantiation.

<init-params> (Optional): Contains class initialization parameters for the asserter implementation.
identity-manager
Used in: ssl.
Description
The <identity-manager> element contains the configuration information for initializing a javax.net.ssl.KeyManager instance. The identity manager is responsible for managing the key material which is used to authenticate the local connection to its peer. If no key material is available, the connection cannot present authentication credentials.
Elements
Table A18 describes the subelements of the identity-manager element.
Table A18 Element <algorithm> <provider> <key-store> <password> identity-manager Subelements Required/ Optional Optional Optional Optional Required Description Specifies the algorithm used by the identity manager. The default value is SunX509. Specifies the configuration for a security provider instance. Specifies the configuration for a key store implementation. Specifies the private key password.
identity-transformer
Used in: security-config
Description
The <identity-transformer> element contains the configuration information for a class that implements the com.tangosol.net.security.IdentityTransformer interface. The class is called to transform a Subject (Principal in .NET) to a token that asserts identity and is used on a Coherence*Extend client. The identity transformer is used with an identity asserter (used on a Coherence*Extend proxy server) to ensure that only valid clients are allowed to connect to an extend proxy.
Elements
Table A19 describes the subelements of the <identity-transformer> element.
Table A19 identity-transformer Subelements

<class-name> (Optional): Specifies a class that implements com.tangosol.net.security.IdentityTransformer. This element cannot be used with the <class-factory-name> element.

<class-factory-name> (Optional): Specifies a factory class for creating instances. The instances must implement com.tangosol.net.security.IdentityTransformer. This element cannot be used with the <class-name> element. This element can be used with the <method-name> element.

<method-name> (Optional): Specifies the name of a static factory method on the factory class which performs object instantiation.

<init-params> (Optional): Contains class initialization parameters for the transformer implementation.
incoming-message-handler
Used in: cluster-config.
Description
The incoming-message-handler assembles UDP packets into logical messages and dispatches them to the appropriate Coherence service for processing.
Elements
Table A20 describes the subelements of the incoming-message-handler element.
Table A20 incoming-message-handler Subelements

<maximum-time-variance> (Required): Specifies the maximum time variance between sending and receiving broadcast messages when trying to determine the difference between a new cluster member's system time and the cluster time. The smaller the variance, the more certain one can be that the cluster time is closer between multiple systems running in the cluster; however, the process of joining the cluster is extended until an exchange of messages can occur within the specified variance. Normally, a value as small as 20 milliseconds is sufficient, but with heavily loaded clusters and multiple network hops a larger value may be necessary. The default value is 16.

<use-nack-packets> (Required): Specifies whether the packet receiver uses negative acknowledgments (packet requests) to pro-actively respond to known missing packets. See "notification-queueing" on page A-45 for additional details and configuration. Legal values are true or false. The default value is true.

<priority> (Required): Specifies a priority of the incoming message handler execution thread. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 7.
init-param
Used in: init-params.
Description
Defines an initialization parameter. Any number of init-param elements may be specified.
Elements
Table A21 describes the subelements of the init-param element.
Table A21 init-param Subelements

<param-name> (Optional): Specifies the name of the initialization parameter. For example:
<init-param>
   <param-name>sTableName</param-name>
   <param-value>EmployeeTable</param-value>
</init-param>
The <param-name> element cannot be specified if the <param-type> element is specified. See "Initialization Parameter Settings" on page A-62 for information on the pre-defined parameters that are available for specific services.

<param-type> (Optional): Specifies the Java type of the initialization parameter. The following standard types are supported: java.lang.String (string), java.lang.Boolean (boolean), java.lang.Integer (int), java.lang.Long (long), java.lang.Double (double), java.math.BigDecimal, java.io.File, java.sql.Date, java.sql.Time, and java.sql.Timestamp. For example:
<init-param>
   <param-type>java.lang.String</param-type>
   <param-value>EmployeeTable</param-value>
</init-param>
The <param-type> element cannot be specified if the <param-name> element is specified.

<param-value> (Required): Specifies the value of the initialization parameter. The value is in the format specific to the Java type of the parameter.

<description> (Optional): Specifies a description for the initialization parameter.
init-params
Used in: address-provider, filter, service, configurable-cache-factory-config, access-controller, and callback-handler.
Description
Defines a series of initialization parameters.
Elements
Table A22 describes the subelements of the init-params element.
Table A22 Element <init-param> init-params Subelement Required/ Optional Optional Description Defines an individual initialization parameter.
instance
Used in: service-failure-policy, scope-resolver, and partition-assignment-strategy
Description
The <instance> element contains the configuration of an implementation class or class factory that is used to plug in custom functionality.
Elements
Table A23 describes the subelements of the instance element.
Table A23 Element <class-name> instance Subelements Required/ Optional Optional Description Specifies the fully qualified name of an implementation class. This element cannot be used with the <class-factory-name> element. <class-factory-name> Optional Specifies the fully qualified name of a factory class for creating implementation class instances. This element cannot be used with the <class-name> element and is used with the <method-name> element. <method-name> <init-params> Optional Optional Specifies the name of a static factory method on the factory class which performs object instantiation. Specifies initialization parameters which are accessible by implementations that include a public constructor with a matching signature. Initialization parameters can be specified when using both the <class-name> element and the <class-factory-name> element.
journaling-config
Used in: cluster-config
Description
The <journaling-config> element contains the configuration for the resource managers that are responsible for storing data in a binary format to flash and RAM memory.
Elements
Table A24 describes the subelements of the journaling-config element.
Table A24 Element <ramjournal-manager> <flashjournal-manager> journaling-config Subelements Required/ Optional Required Required Description Specifies the RAM Journal Resource Manager's configuration. Specifies the Flash Journal Resource Manager's configuration.
key-store
Used in: identity-manager, trust-manager.
Description
The key-store element specifies the configuration for a key store implementation to use when implementing SSL. The key store implementation is an instance of the java.security.KeyStore class.
Elements
Table A25 describes the subelements of the key-store element.
Table A25 key-store Subelements

<url> (Required): Specifies the Uniform Resource Locator (URL) to a key store.
<password> (Optional): Specifies the password for the key store.
<type> (Optional): Specifies the type of a java.security.KeyStore instance. The default value is JKS.
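Taken together, the identity-manager and key-store elements are typically nested inside an ssl socket provider definition. A minimal sketch, assuming a JKS key store file named server.jks and placeholder passwords:

<socket-providers>
   <socket-provider id="mySSL">
      <ssl>
         <identity-manager>
            <key-store>
               <url>file:server.jks</url>
               <password>password</password>
               <type>JKS</type>
            </key-store>
            <password>password</password>
         </identity-manager>
      </ssl>
   </socket-provider>
</socket-providers>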
license-config
Used in: coherence. Table A26 describes the subelements of the license-config element.
Table A26 Element <edition-name> license-config Subelements Required/ Optional Optional Description Specifies the product edition that the member uses. Valid values are: GE (Grid Edition), EE (Enterprise Edition), SE (Standard Edition), RTC (Real-Time Client), DC (Data Client). The default value is GE. Note: The edition switches no longer enforce license restrictions. Do not change the default setting (GE). <license-mode> Optional Specifies whether the product is being used in a development or production mode. Valid values are prod (Production), and dev (Development). Note: This value cannot be overridden in tangosol-coherence-override.xml. It must be specified in tangosol-coherence.xml or (preferably) supplied as system property tangosol.coherence.mode on the Java command line. The default value is dev.
logging-config
Used in: coherence.
Elements
The following table describes the subelements of the logging-config element.
Table A27 logging-config Subelements

<destination> (Required): Specifies the output device used by the logging system. Legal values are stdout, stderr, jdk, log4j, or a file name. If log4j is specified, the Log4j libraries must be in the classpath. In both cases, the appropriate logging configuration mechanism (system properties, property files, and so on) is necessary to configure the JDK/Log4j logging libraries. The preconfigured system property override is tangosol.coherence.log.

<logger-name> (Optional): Specifies a logger name within the chosen logging system that logs Coherence related messages. This value is only used by the JDK and log4j logging systems. The default value is Coherence. The preconfigured system property override is tangosol.coherence.log.logger.

<severity-level> (Required): Specifies which logged messages are emitted to the log destination. The legal values are -1 to 9. No messages are emitted if -1 is specified. More log messages are emitted as the log level is increased. The preconfigured system property override is tangosol.coherence.log.level.

<message-format> (Required): Specifies how messages that have a logging level specified are formatted before passing them to the log destination. The format can include static text and any of the following replaceable parameters: {date}, {uptime}, {product}, {version}, {level}, {thread}, {member}, {location}, {role}, {text}, and {ecid}. The default value is: {date}/{uptime} {product} {version} <{level}> (thread={thread}, member={member}): {text}

<character-limit> (Required): Specifies the maximum number of characters that the logger daemon processes from the message queue before discarding all remaining messages in the queue. All messages that are discarded are summarized by the logging system with a single log entry that details the number of messages that were discarded and their total size. Legal values are positive integers or 0. Zero implies no limit. The default value in production mode is 1048576 and 2147483647 in development mode. The preconfigured system property override is tangosol.coherence.log.limit.
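For example, logging can be redirected and made more verbose from the command line using the preconfigured overrides listed above; the values are illustrative.

java -Dtangosol.coherence.log=log4j -Dtangosol.coherence.log.level=6 -Dtangosol.coherence.log.logger=Coherence com.tangosol.net.DefaultCacheServer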
management-config
Used in: coherence.
Elements
Table A28 describes the subelements of the management-config element.
Table A28 management-config Subelements

<managed-nodes> (Required): Specifies whether a cluster node's JVM has an in-process MBean server and, if so, whether this node allows management of other nodes' managed objects. Legal values are:
- none (default): No MBean server is instantiated on this cluster node.
- local-only: Manage only MBeans which are local to this cluster node (that is, within the same JVM).
- remote-only: Manage MBeans on other remotely manageable cluster nodes. See the <allow-remote-management> subelement.
- all: Manage both local and remotely manageable cluster nodes. See the <allow-remote-management> subelement.
The preconfigured system property override is tangosol.coherence.management.

<allow-remote-management> (Required): Specifies whether this cluster node exposes its managed objects to remote MBean server(s). Legal values are true or false. The default value is true. The preconfigured system property override is tangosol.coherence.management.remote.

<refresh-policy> (Optional): Specifies the method which is used to refresh remote management information. Legal values are refresh-ahead, refresh-behind or refresh-expired. The default value is refresh-ahead. The preconfigured system property override is tangosol.coherence.management.refresh.policy.

<refresh-expiry> (Optional): Specifies the time interval (in milliseconds) after which remote MBean information is invalidated on the management node. Legal values are strings representing time intervals. The default value is 1s. The preconfigured system property override is tangosol.coherence.management.refresh.expiry.

<refresh-timeout> (Optional): Specifies the duration which the management node waits for a response from a remote node when refreshing MBean information. This value must be less than the refresh-expiry interval. Legal values are strings representing time intervals. The default value is 250ms. The preconfigured system property override is tangosol.coherence.management.refresh.timeout.
Table A28 (Cont.) management-config Subelements

<read-only> (Optional): Specifies whether the managed objects exposed by this cluster node allow operations that modify run-time attributes. Legal values are true or false. The default value is false. The preconfigured system property override is tangosol.coherence.management.readonly.

<default-domain-name> (Optional): Specifies the domain name of an existing MBean server that is used to register MBeans exposed by the Coherence management framework. This element is used only if a cluster member has management enabled and the MBean server is located in the same process as the cluster member. If a value is not specified, the first existing MBean server is used. The element should only be used to identify an existing MBean server. This element is also used when implementing the MBeanServerFinder interface. See the <server-factory> element below.

<service-name> (Optional): Specifies the name of the Invocation Service used for remote management. This element is used only if allow-remote-management is set to true.

<server-factory> (Optional): Contains the configuration information for an MBeanServer factory that implements the com.tangosol.net.management.MBeanServerFinder interface, which is used to find an MBean server that is used by the Coherence JMX framework to register new or locate existing MBeans. If a domain name is provided in the <default-domain-name> element, then it is used when instantiating the class. The class name is entered using the <class-name> subelement and supports initialization parameters using the <init-params> element.

<mbeans>: Contains a list of MBeans to be registered when a node joins the cluster.

<mbean-filter>: Contains the configuration information of a filter class that is used to filter MBeans before they are registered.

<reporter>: Contains the Reporter's configuration.

<extend-lifecycle> and member-name behavior aside, the final subelement, <extended-mbean-name>, specifies whether global MBean names that are identified with a nodeId attribute are extended to identify the corresponding member name (if specified). Legal values are true or false. The default value is false and indicates that the member name is not included in the global MBean name. The preconfigured system property override is tangosol.coherence.management.extendedmbeanname.
mbean
Used in: mbeans
Description
The mbean element contains a list of elements to be instantiated and registered with the Coherence management framework.
Elements
Table A29 describes the subelements of the mbean element.
Table A29 Subelements of mbean

<mbean-class> (Optional): Specifies the full class name of the standard MBean to instantiate and register with the Coherence management framework. The MBean class must be in the classpath to correctly instantiate. This element cannot be used with the <mbean-factory> element or the <mbean-query> element.

<mbean-factory> (Optional): Specifies the name of a class factory used to obtain MBeans to register with the Coherence management framework. The factory class must be in the classpath to correctly instantiate. This element is used with the <mbean-accessor> element. This element cannot be used with the <mbean-class> element or the <mbean-query> element.

<mbean-query> (Optional): Specifies a JMX ObjectName query pattern. The query pattern is executed against a local MBean server and the resulting objects are registered with the Coherence management framework. This allows for a single point of consolidation of MBeans for the grid. For example, the following query includes all the MBeans under the java.lang domain in the Coherence management infrastructure.
<mbean-query>java.lang:*</mbean-query>
This element cannot be used with the <mbean-class> element or the <mbean-factory> element.

<mbean-server-domain> (Optional): Specifies the name of a default domain for the source MBean server. This is used to locate the MBean server where the mbean-query should be executed.

<mbean-accessor> (Optional): Specifies the method name on the factory class (specified by the <mbean-factory> element) that is used to instantiate the MBean.

<mbean-name> (Required): Specifies the JMX ObjectName prefix for the MBean that is registered with the Coherence management framework. The prefix should be a comma-delimited Key=Value pair. The Coherence MBean naming convention stipulates that the name should begin with a type/value pair (for example, type=Platform).

<local-only> (Optional): Specifies whether the MBean is visible across the cluster. Valid values are true or false. If set to true, the MBean is registered only with a local MBean server and is not accessible by other cluster nodes. If set to false, the nodeId=... key attribute is added to its name and the MBean is visible from any of the managing nodes (nodes that set the <managed-nodes> element to values of all or remote-only). The default value is false.

<enabled> (Optional): Specifies whether the MBean should be instantiated and registered on this instance. Valid values are true or false. The default value is false.

<extend-lifecycle> (Optional): Specifies whether the MBean should extend beyond the node connection life cycle. Valid values are true or false. If true, the MBean maintains the statistics and values across connections (coincides with the JVM life cycle). If false, the MBean is destroyed and re-created when a node is disconnected from the grid. The default value is false.
mbeans
Used in: management-config
Description
The mbeans element is the root element for defining custom mbeans and is the root element of a custom mbean configuration file. It contains a list of mbean elements to be instantiated and registered with the Coherence management framework.
Elements
Table A30 describes the subelements of the mbeans element.
Table A30 Subelements of mbeans

<mbean> (Required): Specifies the MBean type, implementation, and ObjectName that are instantiated and registered with the Coherence management framework.
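For illustration, the following is a minimal sketch of a custom MBean configuration file that registers a hypothetical standard MBean; the class name and ObjectName prefix are example values, not defaults, and the exact file layout should be verified against the custom MBean schema.

<mbeans>
  <mbean>
    <mbean-class>com.example.MyStandardMBean</mbean-class>
    <mbean-name>type=MyMBean</mbean-name>
    <enabled>true</enabled>
  </mbean>
</mbeans>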
mbean-filter
Used in: management-config.
Description
The mbean-filter element is used to specify a filter that evaluates MBean names before they are registered in the MBean server. The com.tangosol.net.management.ObjectNameExcludeFilter class is the default filter and is used to exclude MBeans from being registered based on their JMX object name using standard regex patterns. The list is entered as a list of names separated by any white space characters. The following MBeans are excluded by the out-of-box configuration:
<management-config>
  <mbean-filter>
    <class-name>com.tangosol.net.management.ObjectNameExcludeFilter</class-name>
    <init-params>
      <init-param>
        <param-type>string</param-type>
        <param-value system-property="tangosol.coherence.management.exclude">
          .*type=Service,name=Management,.*
          .*type=Platform,Domain=java.lang,subType=ClassLoading,.*
          .*type=Platform,Domain=java.lang,subType=Compilation,.*
          .*type=Platform,Domain=java.lang,subType=MemoryManager,.*
          .*type=Platform,Domain=java.lang,subType=Threading,.*
        </param-value>
      </init-param>
    </init-params>
  </mbean-filter>
</management-config>
Elements
Table A31 describes the subelements of the mbean-filter element.
Table A31 mbean-filter Subelements

<class-name> (Optional): Specifies the name of a filter class for filtering MBeans. This element cannot be used with the <class-factory-name> element.

<class-factory-name> (Optional): Specifies a factory class for creating filter instances. This element cannot be used with the <name> element or the <class-name> element. This element can be used with the <method-name> element.

<method-name> (Optional): Specifies the name of a static factory method on the factory class which performs object instantiation.

<init-params> (Optional): Contains class initialization parameters for the filter implementation.
member-identity
Used in: cluster-config. The member-identity element contains detailed identity information that is useful for defining the location and role of the cluster member.
Elements
Table A32 describes the subelements of the member-identity element.
Table A32 member-identity Subelements

<cluster-name> (Optional): The cluster-name element contains the name of the cluster. To join the cluster, all members must specify the same cluster name. A cluster name should always be specified for production systems to prevent accidental cluster discovery among applications. The preconfigured system property override is tangosol.coherence.cluster.

<site-name> (Optional): The site-name element contains the name of the geographic site that the member is hosted at. For WAN clustering, this value identifies the datacenter within which the member is located. The site name can be used as the basis for intelligent routing, load balancing, and disaster recovery planning (that is, the explicit backing up of data on separate geographic sites). The site name also helps determine where to back up data when using distributed caching and the default partition assignment strategy. Lastly, the name is useful for displaying management information (for example, JMX) and interpreting log entries. The preconfigured system property override is tangosol.coherence.site.

<rack-name> (Optional): The rack-name element contains the name of the location within a geographic site that the member is hosted at and is often a cage, rack, or bladeframe identifier. The rack name can be used as the basis for intelligent routing, load balancing, and disaster recovery planning (that is, the explicit backing up of data on separate bladeframes). The rack name also helps determine where to back up data when using distributed caching and the default partition assignment strategy. Lastly, the name is useful for displaying management information (for example, JMX) and interpreting log entries. The preconfigured system property override is tangosol.coherence.rack.

<machine-name> (Optional): The machine-name element contains the name of the physical server that the member is hosted on. This is often the same name as the server identifies itself as (for example, its HOSTNAME, or its name as it appears in a DNS entry). If provided, the name is used as the basis for creating an ID, which in turn is used to guarantee that data are backed up on different computers to prevent single points of failure (SPOFs). The name is also useful for displaying management information (for example, JMX) and interpreting log entries. It is optional to provide a value for this element. However, it is strongly encouraged that a name always be provided. The preconfigured system property override is tangosol.coherence.machine.

<process-name> (Optional): The process-name element contains the name of the process (JVM) that the member is hosted on. This name makes it possible to easily differentiate among multiple JVMs running on the same computer. The name is also useful for displaying management information (for example, JMX) and interpreting log entries. It is optional to provide a value for this element. Often, a single member exists per JVM, and in that situation this name would be redundant. The preconfigured system property override is tangosol.coherence.process.

<member-name> (Optional): The member-name element contains the name of the member itself. This name makes it possible to easily differentiate among members, such as when multiple members run on the same computer (or even within the same JVM). The name is also useful for displaying management information (for example, JMX) and interpreting log entries. It is optional to provide a value for this element. However, it is strongly encouraged that a name always be provided. The preconfigured system property override is tangosol.coherence.member.

<role-name> (Optional): The role-name element contains the name of the member role. This name allows an application to organize members into specialized roles, such as cache servers and cache clients. The name is also useful for displaying management information (for example, JMX) and interpreting log entries. It is optional to provide a value for this element. However, it is strongly encouraged that a name always be provided. The preconfigured system property override is tangosol.coherence.role.

<priority> (Optional): The priority element specifies a priority of the corresponding member. The priority is used as the basis for determining tie-breakers between members. If a condition occurs in which one of two members must be ejected from the cluster, and in the rare case that it is not possible to objectively determine which of the two is at fault and should be ejected, then the member with the lower priority is ejected. Valid values are from 1 to 10 where 10 is the highest priority. The preconfigured system property override is tangosol.coherence.priority.
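As an illustration, the following is a minimal override sketch that assigns identity values to a member; the cluster, site, rack, and role names shown are example values, not defaults.

<cluster-config>
  <member-identity>
    <cluster-name>ProdCluster</cluster-name>
    <site-name>SiteA</site-name>
    <rack-name>Rack-07</rack-name>
    <role-name>CacheServer</role-name>
  </member-identity>
</cluster-config>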
multicast-listener
Used in: cluster-config.
Description
Specifies the configuration information for the Multicast listener. This element is used to specify the address and port that a cluster uses for cluster wide and point-to-multipoint communications. All nodes in a cluster must use the same multicast address and port, whereas distinct clusters on the same network should use different multicast addresses. If you are having difficulties establishing a cluster when using multicast, see Administering Oracle Coherence for instructions on performing a multicast connectivity test.
Multicast-Free Clustering
By default, Coherence uses a multicast protocol to discover other nodes when forming a cluster. If multicast networking is undesirable, or unavailable in your environment, the well-known-addresses feature may be used to eliminate the need for multicast traffic.
Elements
Table A33 describes the subelements of the multicast-listener element.
Table A33 multicast-listener Subelements

<interface> (Optional): Specifies the IP address that a multicast socket is bound to. By default, the interface (NIC) of the unicast-listener IP address is used for the multicast socket; this option allows a different interface to be specified for multicast. Setting this address to 0.0.0.0 allows the operating system to use the unicast routing table to select the interface automatically.

<address> (Required): Specifies the multicast IP address that a Socket listens or publishes on. Legal values are from 224.0.0.0 to 239.255.255.255. The default value depends on the release and build level and typically follows the convention of {build}.{major version}.{minor version}.{patch}. The preconfigured system property override is tangosol.coherence.clusteraddress.

<port> (Required): Specifies the port that the Socket listens or publishes on. Legal values are from 1 to 65535. The default value depends on the release and build level and typically follows the convention of {version}+{build}. The preconfigured system property override is tangosol.coherence.clusterport.

<time-to-live> (Required): Specifies the time-to-live setting for the multicast. This determines the maximum number of "hops" a packet may traverse, where a hop is measured as a traversal from one network segment to another by using a router. Legal values are from 0 to 255. The default value is 4. The preconfigured system property override is tangosol.coherence.ttl.

<packet-buffer> (Required): Specifies how many incoming packets the operating system is requested to buffer. The value may be expressed either in terms of packets or bytes.
Table A33 (Cont.) multicast-listener Subelements

<priority> (Required): Specifies a priority of the multicast listener execution thread. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 8.

<join-timeout-milliseconds> (Required): Specifies the number of milliseconds that a new member waits without finding any evidence of a cluster before starting its own cluster and electing itself as the senior cluster member. Legal values are from 1000 to 1000000. The default value is 3000. Note: For production use, the recommended value is 30000.

<multicast-threshold-percent> (Required): Specifies the threshold percentage value used to determine whether a packet is sent by using unicast or multicast. It is a percentage value and is in the range of 1% to 100%. In a cluster of "n" nodes, a particular node sending a packet to a set of other (that is, not counting self) destination nodes of size "d" (in the range of 0 to n-1), the packet is sent multicast if and only if both of the following hold true:
1. The packet is being sent over the network to multiple nodes, that is, (d > 1).
2. The number of nodes is greater than the threshold, that is, (d > (n-1) * (threshold/100)).
Setting this value to 1 allows the implementation to use multicast for basically all multi-point traffic. Setting it to 100 forces the implementation to use unicast for all multi-point traffic except for explicit broadcast traffic (for example, cluster heartbeat and discovery) because the 100% threshold is never exceeded. With the setting of 25, the implementation sends the packet using unicast if it is destined for less than one-fourth of all nodes, and sends it using multicast if it is destined for one-fourth or more of all nodes.
Legal values are from 1 to 100. The default value is 25. Note: This element is only used if the well-known-addresses element is empty.
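For reference, a minimal override sketch that pins the cluster to a specific multicast address and port and limits packets to a single network hop follows; the address and port shown are example values, not the release defaults.

<cluster-config>
  <multicast-listener>
    <address>239.255.0.1</address>
    <port>7574</port>
    <time-to-live>1</time-to-live>
  </multicast-listener>
</cluster-config>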
notification-queueing
Used in: packet-publisher.
Description
The notification-queueing element is used to specify the timing of notification packets sent to other cluster nodes. Notification packets are used to acknowledge the receipt of packets which require confirmation.
Elements
The following table describes the subelements of the notification-queueing element.
Table A34 notification-queueing Subelements

<ack-delay-milliseconds> (Required): Specifies the maximum number of milliseconds that the packet publisher delays before sending an ACK packet. The ACK packet may be transmitted earlier if the number of batched acknowledgments fills the ACK packet. This value should be substantially lower than the remote node's packet-delivery resend timeout, to allow ample time for the ACK to be received and processed by the remote node before the resend timeout expires. The default value is 16.

<nack-delay-milliseconds> (Required): Specifies the number of milliseconds that the packet publisher delays before sending a NACK packet. The default value is 1.
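As a sketch only, the following fragment shows where the notification-queueing settings sit within packet-publisher; the delay values simply restate the documented defaults.

<cluster-config>
  <packet-publisher>
    <notification-queueing>
      <ack-delay-milliseconds>16</ack-delay-milliseconds>
      <nack-delay-milliseconds>1</nack-delay-milliseconds>
    </notification-queueing>
  </packet-publisher>
</cluster-config>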
outgoing-message-handler
Used in: cluster-config
Description
The outgoing-message-handler element contains the outgoing message handler (also known as a dispatcher) related configuration information.
Elements
Table A35 describes the subelements of the outgoing-message-handler element.
Table A35 outgoing-message-handler Subelements

<use-filters> (Optional): Specifies a list of <filter-name> elements to be used by this handler. See the <filter> element for detailed information on defining a filter.
outstanding-packets
Used in: flow-control.
Description
Defines the number of unconfirmed packets that are sent to a cluster node before packets addressed to that node are deferred. This helps to prevent the sender from flooding the recipient's network buffers.
Elements
Table A36 describes the subelements of the outstanding-packets element.
Table A36 outstanding-packets Subelements

<maximum-packets> (Optional): The maximum number of unconfirmed packets that are sent to a cluster node before packets addressed to that node are deferred. It is recommended that this value not be set below 256. The default value is 4096.

<minimum-packets> (Optional): The lower bound on the range for the number of unconfirmed packets that are sent to a cluster node before packets addressed to that node are deferred. It is recommended that this value not be set below 16. The default value is 64.
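To show where these settings live, here is a minimal sketch of a flow-control fragment within packet-delivery; the packet counts simply restate the documented defaults.

<cluster-config>
  <packet-publisher>
    <packet-delivery>
      <flow-control>
        <outstanding-packets>
          <maximum-packets>4096</maximum-packets>
          <minimum-packets>64</minimum-packets>
        </outstanding-packets>
      </flow-control>
    </packet-delivery>
  </packet-publisher>
</cluster-config>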
packet-buffer
Used in: unicast-listener, multicast-listener, packet-publisher.
Description
Specifies the size (in packets or bytes) of the operating system buffer for datagram sockets.
Elements
Table A37 describes the subelements of the packet-buffer element.
Table A37 packet-buffer Subelements

<maximum-packets> (Optional): For unicast-listener, multicast-listener, and packet-publisher: specifies the number of packets of packet-size that the datagram socket is asked to size itself to buffer. See SO_SNDBUF and SO_RCVBUF. Actual buffer sizes may be smaller if the underlying socket implementation cannot support more than a certain size. The default values are 32 for publishing, 64 for multicast listening, and 1428 for unicast listening. The <maximum-packets> element cannot be specified if the <size> element is specified.

<size> (Optional): Specifies the requested size of the underlying socket buffer in bytes rather than the number of packets. The <size> element cannot be specified if the <maximum-packets> element is specified.
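For illustration, a sketch that requests a larger outbound buffer for the packet publisher follows; the value is in bytes (2 MB here) and is an example chosen for the sketch, not a default.

<cluster-config>
  <packet-publisher>
    <packet-buffer>
      <!-- requested socket send buffer, in bytes (illustrative value) -->
      <size>2097152</size>
    </packet-buffer>
  </packet-publisher>
</cluster-config>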
packet-bundling
Used in: packet-delivery.
Description
The packet-bundling element contains configuration information related to the bundling of multiple small packets into a single larger packet to reduce the load on the network switching infrastructure.
Elements
Table A38 describes the subelements of the packet-bundling element.
Table A38 packet-bundling Subelements

<maximum-deferral-time> (Optional): The maximum amount of time to defer a packet while waiting for additional packets to bundle. A value of zero results in the algorithm not waiting, and only bundling the readily accessible packets. A value greater than zero causes some transmission deferral while waiting for additional packets to become available. This value is typically set below 250 microseconds to avoid a detrimental throughput impact. If the units are not specified, nanoseconds are assumed. The default value is 1us (microsecond).

<aggression-factor> (Optional): Specifies the aggressiveness of the packet deferral algorithm. Whereas the maximum-deferral-time element defines the upper limit on the deferral time, the aggression-factor influences the average deferral time. The higher the aggression value, the longer the publisher may wait for additional packets. The factor may be expressed as a real number, and values between 0.0 and 1.0 often allow for high packet utilization while keeping latency to a minimum. The default value is 0.
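The following sketch shows where packet-bundling is configured within packet-delivery; the deferral time and aggression factor are example values for the sketch, not recommendations.

<cluster-config>
  <packet-publisher>
    <packet-delivery>
      <packet-bundling>
        <maximum-deferral-time>100us</maximum-deferral-time>
        <aggression-factor>0.5</aggression-factor>
      </packet-bundling>
    </packet-delivery>
  </packet-publisher>
</cluster-config>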
packet-delivery
Used in: packet-publisher.
Description
Specifies timing and transmission rate parameters related to packet delivery.
Elements
Table A39 describes the subelements of the packet-delivery element.
Table A39 packet-delivery Subelements

<resend-milliseconds> (Required): For packets which require confirmation, specifies the minimum amount of time, in milliseconds, to wait for a corresponding ACK packet before resending a packet. The default value is 200.

<timeout-milliseconds> (Required): For packets which require confirmation, specifies the maximum amount of time, in milliseconds, that a packet is resent. After this timeout expires, Coherence makes a determination if the recipient is to be considered terminated. This determination takes additional data into account, such as if other nodes are still able to communicate with the recipient. The default value is 300000. For production use, the recommended value is the greater of 300000 and two times the maximum expected full GC duration.

<heartbeat-milliseconds> (Required): Specifies the interval between heartbeats. Each member issues a unicast heartbeat, and the most senior member issues the cluster heartbeat, which is a broadcast message. The heartbeat is used by the tcp-ring-listener as part of fast death detection. The default value is 1000.

<flow-control> (Optional): Configures per-node packet throttling and remote GC detection.

<packet-bundling> (Optional): Configures how aggressively Coherence attempts to maximize packet utilization.
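As a sketch, a packet-delivery override that raises the resend timeout to cover long full-GC pauses might look like the following; the 600000 ms value is an example consistent with the "two times the maximum expected full GC duration" guidance above, not a default.

<cluster-config>
  <packet-publisher>
    <packet-delivery>
      <resend-milliseconds>200</resend-milliseconds>
      <timeout-milliseconds>600000</timeout-milliseconds>
      <heartbeat-milliseconds>1000</heartbeat-milliseconds>
    </packet-delivery>
  </packet-publisher>
</cluster-config>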
packet-publisher
Used in: cluster-config.
Description
Specifies configuration information for the Packet publisher, which manages network data transmission.
Elements
Table A40 describes the subelements of the packet-publisher element.
Table A40 packet-publisher Subelements

<packet-size> (Optional): Specifies the UDP packet sizes to use.

<packet-delivery> (Required): Specifies timing parameters related to reliable packet delivery.

<notification-queueing> (Required): Contains the notification queue related configuration information.

<traffic-jam> (Required): Specifies the maximum number of packets which can be enqueued on the publisher before client threads block.

<packet-buffer> (Required): Specifies how many outgoing packets the operating system is requested to buffer. The value may be expressed either in terms of packets or bytes.

<priority> (Required): Specifies a priority of the packet publisher execution thread. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 6.

<enabled> (Optional): Specifies if TCMP clustering is enabled. When using both Coherence*Extend and Coherence TCMP based clustering, this feature allows TCMP to be disabled to ensure that a node only connects by using the Extend protocol. Valid values are true and false. The default value is true. The preconfigured system property override is tangosol.coherence.tcmp.enabled.
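For example, the following sketch disables TCMP clustering on an Extend-only client process, as described for the <enabled> subelement; a comparable effect can typically be achieved on the command line with -Dtangosol.coherence.tcmp.enabled=false.

<cluster-config>
  <packet-publisher>
    <enabled system-property="tangosol.coherence.tcmp.enabled">false</enabled>
  </packet-publisher>
</cluster-config>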
packet-size
Used in: packet-publisher.
Description
The packet-size element specifies the maximum and preferred UDP packet sizes. All cluster nodes must use identical maximum packet sizes.
Elements
Table A41 describes the subelements of the packet-size element.
Table A41 packet-size Subelements

<maximum-length> (Required): Specifies the packet size, in bytes, which all cluster members can safely support. This value must be the same for all members in the cluster. A low value can artificially limit the maximum size of the cluster. This value should be at least 512, and defaults to 64KB.

<preferred-length> (Required): Specifies the preferred size, in bytes, of the DatagramPacket objects that are sent and received on the unicast and multicast sockets. This value can be larger or smaller than the <maximum-length> value, and need not be the same for all cluster members. The ideal value is one which fits within the network MTU, leaving enough space for either the UDP or TCP packet headers, which are 32 and 52 bytes, respectively. The preferred length should be at least 512. The default value is based on the local node's MTU.
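A sketch of an explicit packet-size override follows; the byte values are examples sized for a standard 1500-byte MTU (1500 minus the 32-byte UDP headers), not the computed defaults.

<cluster-config>
  <packet-publisher>
    <packet-size>
      <maximum-length>65535</maximum-length>
      <preferred-length>1468</preferred-length>
    </packet-size>
  </packet-publisher>
</cluster-config>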
packet-speaker
Used in: cluster-config.
Description
Specifies configuration information for the packet speaker which is used for network data transmission when packet publisher loads are high.
Elements
Table A42 describes the subelements of the packet-speaker element.
Table A42 packet-speaker Subelements

<enabled> (Optional): Specifies whether or not the packet-speaker thread is enabled. Valid values are true and false. The default value is true. The preconfigured system property override is tangosol.coherence.speaker.enabled.

<volume-threshold> (Optional): Specifies the packet load which must be present for the speaker to be activated.

<priority> (Required): Specifies a priority of the packet speaker execution thread. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 8.
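For illustration, the following sketch disables the packet speaker thread so that the publisher performs its own network writes; this is an example override, not a recommendation.

<cluster-config>
  <packet-speaker>
    <enabled system-property="tangosol.coherence.speaker.enabled">false</enabled>
  </packet-speaker>
</cluster-config>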
pause-detection
Used in: flow-control.
Description
Remote Pause detection allows Coherence to detect and react to a cluster node becoming unresponsive (likely due to a long GC). When a node is marked as paused, packets addressed to it are sent at a lower rate until the node resumes responding. This remote GC detection is used to avoid flooding a node while it is incapable of responding.
Elements
Table A43 describes the subelements of the pause-detection element.
Table A43 pause-detection Subelements

<maximum-packets> (Optional): The maximum number of packets that are resent to an unresponsive cluster node after which the node is assumed to be paused. Specifying a value of 0 disables pause detection. The default value is 16.
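The following sketch shows where pause detection is tuned within flow-control; the value of 0, which disables pause detection as described above, is used here purely as an example.

<cluster-config>
  <packet-publisher>
    <packet-delivery>
      <flow-control>
        <pause-detection>
          <maximum-packets>0</maximum-packets>
        </pause-detection>
      </flow-control>
    </packet-delivery>
  </packet-publisher>
</cluster-config>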
provider
Used in: ssl, identity-manager, trust-manager.
Description
The provider element contains the configuration information for a security provider that extends the java.security.Provider class.
Elements
Table A44 describes the subelements of the provider element.
Table A44 provider Subelements

<name> (Optional): Specifies the name of a security provider that extends the java.security.Provider class. The class name can be entered using either this element or by using the <class-name> element or by using the <class-factory-name> element.

<class-name> (Optional): Specifies the name of a security provider that extends the java.security.Provider class. This element cannot be used with the <name> element or the <class-factory-name> element.

<class-factory-name> (Optional): Specifies a factory class for creating Provider instances. The instances must implement the java.security.Provider class. This element cannot be used with the <name> element or the <class-name> element. This element can be used with the <method-name> element.

<method-name> (Optional): Specifies the name of a static factory method on the factory class which performs object instantiation.

<init-params> (Optional): Contains class initialization parameters for the provider implementation. This element cannot be used with the <name> element.
ramjournal-manager
Used in: journaling-config
Description
The <ramjournal-manager> element contains the configuration for a RAM journal resource manager, which manages memory buffers for in-memory journal-based storage. A RAM journal resource manager always uses a flash journal resource manager to store large objects, and the flash journal is also used as an overflow when the amount of total memory allocated to the RAM journal is reached. A RAM journal also uses a flash journal when the journal garbage collection is temporarily not able to keep up with demand. See "flashjournal-manager" on page A-20 for details on configuring a flash journal resource manager.
Elements
Table A45 describes the subelements of the ramjournal-manager element.
Table A45 ramjournal-manager Subelements

<minimum-load-factor> (Optional): Specifies the factor of live data below which a journal file is eligible for compaction (garbage collection).

<maximum-value-size> (Optional): Specifies the maximum size, in bytes, of binary values that are to be stored in the RAM journal. The value cannot exceed 4MB. The default value is 64KB. Binary values that exceed the maximum value size are automatically delegated to a flash journal.

<maximum-file-size> (Optional): Specifies the maximum file size of the underlying journal files. The value must be a power of two and a multiple of the block size. The value must be between 2MB and 2GB. The default value is 2MB. The maximum file size should not be changed.

<maximum-size> (Optional): Specifies the maximum amount of RAM that is used by the journal. The value can either be specified as a percentage of the maximum available heap or as a specific amount of memory. If the value contains a percentage sign (%), it is interpreted as a percentage of the maximum JVM heap (the JVM maximum heap is typically specified by the -Xmx argument on the java command line). If specified as a specific amount of memory, the value must be between 16MB and 64GB. The default value is 25%. That is, the RAM journal resource manager uses a maximum of 25% of the available JVM heap. A RAM journal is always backed up by a flash journal and all data in excess of the maximum RAM size is automatically delegated to the flash journal.

<off-heap> (Optional): Specifies whether the journal uses byte buffers on the virtual machine's heap or off-heap NIO buffers.
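As a sketch, a journaling-config fragment that caps the RAM journal at 10% of the JVM heap might look as follows; the percentage is an example value, and where journaling-config itself is declared depends on the rest of the operational configuration.

<journaling-config>
  <ramjournal-manager>
    <maximum-size>10%</maximum-size>
  </ramjournal-manager>
</journaling-config>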
reporter
Used in: management-config.
Description
The Reporter provides JMX reporting capabilities. The Reporter provides out-of-the-box reports and also supports the creation of custom reports. The reports help administrators and developers manage capacity and troubleshoot problems.
Elements
Table A46 describes the subelements of the reporter element.
Table A46 reporter Subelements

<configuration> (Required): Specifies the location for the report group deployment descriptor. The default file is reports/report-group.xml and is located in the coherence.jar library.

<autostart> (Optional): Specifies whether the Reporter automatically starts when the node starts. Valid values are true and false. The default value is false.

<distributed> (Optional): Specifies whether the reporter runs on multiple management nodes. Valid values are true and false. The default value is false.

<timezone> (Optional): Specifies the time zone to be used for timestamps that are displayed within a report. See java.util.TimeZone for supported time zone formats. The default, if no time zone is specified, is the local time zone.

<timeformat> (Optional): Specifies the time and date format to be used for timestamps that are displayed within a report. The value must be a pattern supported by the java.text.SimpleDateFormat class. The default value is EEE MMM dd HH:mm:ss zzz yyyy.
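For illustration, a sketch that starts the Reporter automatically on a management node follows; note that the element names other than <configuration> and <timeformat> are inferred from their descriptions above, so the exact spelling should be verified against the operational schema.

<management-config>
  <reporter>
    <autostart>true</autostart>
  </reporter>
</management-config>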
security-config
Used in: coherence.
Elements
Table A47 describes the subelements of the security-config element.
Table A47 security-config Subelements

<enabled> (Required): Specifies whether the access controller security feature is enabled. Legal values are true or false. The default value is false. The preconfigured system property override is tangosol.coherence.security.

<login-module-name> (Required): Specifies the name of the JAAS LoginModule that is used to authenticate the caller. This name should match a module in a configuration file used by JAAS (for example, specified by using the -Djava.security.auth.login.config Java command line attribute). For details, refer to the Oracle Login Module Developer's Guide.

<access-controller> (Required): Contains the configuration information for the class that implements the com.tangosol.net.security.AccessController interface, which is used by the security framework to check access rights for clustered resources and encrypt/decrypt node-to-node communications regarding those rights.

<callback-handler> (Optional): Contains the configuration information for the class that implements the javax.security.auth.callback.CallbackHandler interface, which is called if an attempt is made to access a protected clustered resource when there is no identity associated with the caller.

<identity-asserter> (Optional): Contains the configuration information for a class that implements the com.tangosol.net.security.IdentityAsserter interface, which is called to validate an identity token to establish a user's identity. An identity asserter is used with an identity transformer to protect connections between Coherence*Extend clients and proxies.

<identity-transformer> (Optional): Contains the configuration information for the class that implements the com.tangosol.net.security.IdentityTransformer interface, which is called to transform a Subject (Principal for .NET) to a token that asserts identity. An identity transformer is used with an identity asserter to protect connections between Coherence*Extend clients and proxies.

<subject-scope> (Optional): Specifies whether the remote cache or service reference is shared by subject. Valid values are true or false. Setting the value to true means that remote references are not globally shared; each subject gets a different reference. The default value is false.

<authorizer> (Optional): Contains the configuration information for a class that implements the com.tangosol.net.security.Authorizer interface, which represents an environment-specific facility for authorizing callers to perform actions described by the corresponding permission objects. Use the <instance> element to enter the class.
serializer
Used in: serializers
Description
The serializer element contains a serializer class configuration. Serializer classes must implement com.tangosol.io.Serializer. A Java serializer and POF serializer are predefined:
<cluster-config>
  <serializers>
    <serializer id="java">
      <class-name>com.tangosol.io.DefaultSerializer</class-name>
    </serializer>
    <serializer id="pof">
      <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
      <init-params>
        <init-param>
          <param-type>String</param-type>
          <param-value>pof-config.xml</param-value>
        </init-param>
      </init-params>
    </serializer>
  </serializers>
</cluster-config>
Serializer definitions are referenced by individual cache scheme definitions (see "serializer" on page B-107) and can be referenced by the default serializer for services that do not explicitly define a serializer (see "defaults" on page B-31). Additional serializers can be defined in an operational override file as required.
Elements
Table A48 describes the subelements of the serializer element.
Table A48 serializer Subelements

<class-name> (Optional): Specifies a class that implements com.tangosol.io.Serializer. This element cannot be used with the <class-factory-name> element.

<class-factory-name> (Optional): Specifies a factory class for creating custom serializer instances. The instances must implement com.tangosol.io.Serializer. This element cannot be used with the <class-name> element. This element can be used with the <method-name> element.

<method-name> (Optional): Specifies the name of a static factory method on the factory class which performs object instantiation.

<init-params> (Optional): Contains class initialization parameters for the serializer implementation.
serializers
Used in: cluster-config
Description
The serializers element contains the declarative data for each serializer.
Elements
Table A49 describes the subelements of the serializers element.
Table A49 serializers Subelements

<serializer> (Optional): Specifies the declarative data of a particular serializer.
service
Used in: services.
Description
Specifies the configuration for Coherence services.
Service Components
The types of services which can be configured include:

PartitionedService.PartitionedCache: A cache service which evenly partitions cache entries across the cluster nodes which run the service. This service is often referred to as the distributed cache service.

ReplicatedCache: A cache service which maintains copies of all cache entries on all cluster nodes which run the service.

ReplicatedCache.Optimistic: A version of the ReplicatedCache which uses optimistic locking.

SimpleCache: A version of the ReplicatedCache which lacks concurrency control.

LocalCache: A cache service for caches where all cache entries reside in a single cluster node.

InvocationService: A service used for performing custom operations on remote cluster nodes.

ProxyService: A service that accepts connections from Coherence*Extend clients.
Elements
Table A50 describes the subelements of the service element.
Table A50 service Subelements

<service-type> (Required): Specifies the canonical name for a service, allowing the service to be referenced from the service-name element in cache configuration caching schemes. See "caching-schemes" on page B-27 for more information.

<service-component> (Required): Specifies either the fully qualified class name of the service or the relocatable component name relative to the base Service component. Legal values are the service component names listed under "Service Components" above.

<use-filters> (Optional): Contains the list of filter names to be used by this service. For example, specify use-filters as follows:
<use-filters>
  <filter-name>gzip</filter-name>
</use-filters>
The example activates gzip compression for the network messages used by this service, which can help substantially with WAN and low-bandwidth networks.

<init-params> (Optional): Specifies the initialization parameters that are specific to each service. Each parameter is described below.
DistributedCache Service Parameters
ReplicatedCache Service Parameters
OptimisticCache Service Parameters
Invocation Service Parameters
Proxy Service Parameters

The tables in each section describe the specific <param-name>/<param-value> pairs that can be configured for each service. The Parameter Name column refers to the value of the <param-name> element and the Parameter Value Description column refers to the possible values for the corresponding <param-value> element.
DistributedCache Service Parameters
A DistributedCache service supports the parameters listed in Table A51. These settings may also be specified for each service instance as part of the <distributed-scheme> element in the coherence-cache-config.xml descriptor.
Element Reference
Table A51 DistributedCache Service Parameters

lease-granularity: Specifies the lease ownership granularity. Legal values are:
A value of thread means that locks are held by a thread that obtained them and can only be released by that thread. A value of member means that locks are held by a cluster node and any thread running on the cluster node that obtained the lock can release it.

partition-count: Specifies the number of distributed cache partitions. Each storage-enabled cluster member that is running the distributed cache service manages a balanced number of partitions. Valid values are positive integers between 1 and 32767 and should be a prime number. The default value is 257 partitions. See "Changing the Number of Partitions" on page 30-4 for additional details.

local-storage: Specifies whether this member of the DistributedCache service enables local storage. Normally this value should be left unspecified within the configuration file, and instead set on a per-process basis using the tangosol.coherence.distributed.localstorage system property. This allows cache clients and servers to use the same configuration descriptor. Legal values are true or false. The default value is true. The preconfigured system property override is tangosol.coherence.distributed.localstorage.

transfer-threshold: Specifies the threshold for the primary buckets distribution in kilobytes. When a new node joins the distributed cache service or when a member of the service leaves, the remaining nodes perform a task of bucket ownership re-distribution. During this process, the existing data gets rebalanced along with the ownership information. This parameter indicates a preferred message size for data transfer communications. Setting this value lower makes the distribution process take longer, but reduces network bandwidth utilization during this activity. Legal values are integers greater than zero. The default value is 0.5MB. The preconfigured system property override is tangosol.coherence.distributed.transfer.

backup-count: Specifies the number of members of the partitioned cache service that hold the backup data for each unit of storage in the cache. The default value is 1. The preconfigured system property override is tangosol.coherence.distributed.backupcount.

thread-count: Specifies the number of daemon threads used by the partitioned cache service. Legal values are positive integers, 0, or -1. The value 0 indicates that all relevant tasks are performed on the service thread. The value -1 indicates that tasks are performed on the caller's thread where possible. The default value is 0. Set the value to 0 for scenarios with purely in-memory data (no read-through, write-through, or write-behind) and simple access (no entry processors, aggregators, and so on). For heavy compute scenarios (such as aggregators), the number of threads should be the number of available cores for that computer. For example, if you run 4 nodes on a 16 core box, then there should be roughly 4 threads in the pool. For I/O intensive scenarios (such as read-through, write-through, and write-behind), the number of threads must be higher. In this case, increase the threads just to the point that the box is saturated. The preconfigured system property override is tangosol.coherence.distributed.threads.
Table A51 (Cont.) DistributedCache Service Parameters

key-associator/class-name: Specifies the name of a class that implements the com.tangosol.net.partition.KeyAssociator interface. This implementation must have a zero-parameter public constructor.

key-partitioning/class-name: Specifies the name of a class that implements the com.tangosol.net.partition.KeyPartitioningStrategy interface. This implementation must have a zero-parameter public constructor.

partition-listener/class-name: Specifies the name of a class that implements the com.tangosol.net.partition.PartitionListener interface. This implementation must have a zero-parameter public constructor.

task-hung-threshold: Specifies the amount of time in milliseconds that a task can execute before it is considered hung. Legal values are positive integers or zero (indicating no default timeout). Note: a posted task that has not yet started is never considered hung. This attribute is applied only if the thread pool is used (the thread-count value is positive). The preconfigured system property override is tangosol.coherence.distributed.task.hung.

task-timeout: Specifies the timeout value in milliseconds for requests executing on the service worker threads. This attribute applies only if the thread pool is used (the thread-count value is positive) and only applies to entry processor implementations that implement the PriorityTask interface. The task execution time is measured on the server side and does not include the time spent waiting in a service backlog queue before being started. Legal values are positive integers or zero. If zero is specified, the default service-guardian <timeout-milliseconds> value is used. The preconfigured system property override is tangosol.coherence.distributed.task.timeout.

request-timeout: Specifies the maximum amount of time a client waits for a response before abandoning the original request. The request time is measured on the client side as the time elapsed from the moment a request is sent for execution to the corresponding server node(s) and includes the following:
the time it takes to deliver the request to an executing node (server)
the interval between the time the task is received and placed into a service queue until the execution starts
the task execution time
the time it takes to deliver a result back to the client
The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration: MS or ms (milliseconds), S or s (seconds), M or m (minutes), H or h (hours), D or d (days). If the value does not contain a unit, a unit of milliseconds is assumed. Legal values are positive integers or zero (indicating no default timeout). The default value is an infinite timeout (0s) for clustered client requests and 30 seconds (30s) for extend client requests. The preconfigured system property override is tangosol.coherence.distributed.request.timeout.
Table A51 (Cont.) DistributedCache Service Parameters

serializer: Specifies a serializer class for object serialization. Serializer classes must implement the com.tangosol.io.Serializer interface. The preferred method for specifying a serializer is to define it within the global serializer element and then configure it for a cache within the cache configuration file.

backup-count-after-writebehind: Specifies the number of members of the partitioned cache service that hold the backup data for each unit of storage in the cache that does not require write-behind, that is, data that is not vulnerable to being lost even if the entire cluster were shut down. Specifically, if a unit of storage is marked as requiring write-behind, then it is backed up on the number of members specified by the backup-count parameter. If the unit of storage is not marked as requiring write-behind, then it is backed up by the number of members specified by the backup-count-after-writebehind parameter. This value should be set to 0 or this setting should not be specified at all. The rationale is that since this data is being backed up to another data store, no in-memory backup is required, other than the data temporarily queued on the write-behind queue to be written. The value of 0 means that when write-behind has occurred, the backup copies of that data are discarded. However, until write-behind occurs, the data is backed up in accordance with the backup-count parameter. Recommended value is 0.

guardian-timeout: Specifies the guardian timeout value to use for guarding the service and any dependent threads. If the parameter is not specified, the default guardian timeout (as specified by the <timeout-milliseconds> operational configuration element) is used. See the service-guardian element to globally configure the service guardian for all services. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration: MS or ms (milliseconds), S or s (seconds), M or m (minutes), H or h (hours), D or d (days). If the value does not contain a unit, a unit of milliseconds is assumed. The preconfigured system property override is tangosol.coherence.distributed.guard.timeout.
Table A51 (Cont.) DistributedCache Service Parameters

service-failure-policy: Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. See the service-guardian element to globally configure the service guardian for all services. Legal values are:
exit-cluster attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to stop the cluster services.
exit-process attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to exit the JVM and terminate abruptly.
logging causes any detected problems to be logged, but no corrective action to be taken.
a custom class: the class configuration information for a com.tangosol.net.ServiceFailurePolicy implementation.

member-listener/class-name: Specifies the configuration information for a class that implements the com.tangosol.net.MemberListener interface. The implementation must have a public default constructor. The MemberListener implementation receives cache service lifecycle events. The member-listener is used as an alternative to programmatically adding a MapListener on a service.

partitioned-quorum-policy-scheme: Specifies quorum policy settings for the partitioned cache service. See "partitioned-quorum-policy-scheme" on page B-85.

partition-assignment-strategy: Specifies the strategy that is used by a partitioned service to manage partition distribution. Legal values are:
legacy (deprecated) The legacy assignment strategy indicates that partition distribution is managed individually on each cluster member.
simple (default) The simple assignment strategy attempts to balance partition distribution while ensuring machine-safety and is more deterministic and efficient than the legacy strategy.
mirror:<service-name> The mirror assignment strategy attempts to co-locate the service's partitions with the partitions of the specified service. This strategy is used to increase the likelihood that key-associated, cross-service cache access remains local to a member.
custom: a class that implements the com.tangosol.net.partition.PartitionAssignmentStrategy interface.
The preconfigured system property override is tangosol.coherence.distributed.assignmentstrategy.

compressor: Specifies whether or not backup updates should be compressed in delta form or sent whole. A delta update represents the parts of a backup entry that must be changed in order to synchronize it with the primary version of the entry. Valid values are:
none (default) Disables delta backup; no compressor is used. The whole backup binary entry is replaced when the primary entry changes.
standard Automatically selects a delta compressor based on the serializer being used by the partitioned service.
the fully qualified name of a class that implements the com.tangosol.io.DeltaCompressor interface.
Table A51 (Cont.) DistributedCache Service Parameters

service-priority: Specifies the priority for the service thread. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 10.

event-dispatcher-priority: Specifies the priority for the event dispatcher thread for each service. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 10.

worker-priority: Specifies the priority for the worker threads. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 5.

reliable-transport: Specifies the name of the transport used by this service for reliable point-to-point communication. Valid values are: datagram, tmb (TCP Message Bus), sdmb (SDP Message Bus), and imb (InfiniBand Message Bus). The imb and sdmb transports are only available for Oracle Exalogic systems. The imb transport is enabled by default on Oracle Exalogic systems for all distributed cache services. The imb transport is automatically used as long as TCMP has not been configured with SSL. Note: Setting a value other than datagram results in the majority of cluster data being transported outside of the protection of SSL. Specifically, only cluster join operations are protected when enabling SSL for TCMP. The preconfigured system property override is tangosol.coherence.distributed.transport.reliable.

async-backup: Specifies whether the partitioned (distributed) cache service should backup changes asynchronously while concurrently responding to the client. Legal values are true or false. The default value is false. The preconfigured system property override is tangosol.coherence.distributed.asyncbackup.
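Because these parameters may also be set per service instance in the cache configuration (as noted above for the <distributed-scheme> element), a minimal sketch of such a scheme follows; the scheme name and parameter values are illustrative only.

<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <service-name>DistributedCache</service-name>
  <thread-count>8</thread-count>
  <backup-count>1</backup-count>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>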
A ReplicatedCache service supports the parameters listed in Table A52. These settings may also be specified for each service instance as part of the replicated-scheme element in the coherence-cache-config.xml descriptor.
Table A52 ReplicatedCache Service Parameters

standard-lease-milliseconds: Specifies the duration of the standard lease in milliseconds. When a lease has aged past this number of milliseconds, the lock is automatically released. Set this value to zero to specify a lease that never expires. The purpose of this setting is to avoid deadlocks or blocks caused by stuck threads; the value should be set higher than the longest expected lock duration (for example, higher than a transaction timeout). It is also recommended to set this value higher than the packet-delivery/timeout-milliseconds value. Legal values are positive long numbers or zero. The default value is 0.

lease-granularity: Specifies the lease ownership granularity. Available since release 2.3. Legal values are:
A value of thread means that locks are held by a thread that obtained them and can only be released by that thread. A value of member means that locks are held by a cluster node and any thread running on the cluster node that obtained the lock can release it.

mobile-issues: Specifies whether lease issues should be transferred to the most recent lock holders. Legal values are true or false. The default value is false.
Table A52 (Cont.) ReplicatedCache Service Parameters

request-timeout: Specifies the maximum amount of time a client waits for a response before abandoning the original request. The request time is measured on the client side as the time elapsed from the moment a request is sent for execution to the corresponding server node(s) and includes the following:
the time it takes to deliver the request to an executing node (server)
the interval between the time the task is received and placed into a service queue until the execution starts
the task execution time
the time it takes to deliver a result back to the client
The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration: MS or ms (milliseconds), S or s (seconds), M or m (minutes), H or h (hours), D or d (days). If the value does not contain a unit, a unit of milliseconds is assumed. Legal values are positive integers or zero (indicating no default timeout). The default value is an infinite timeout (0s) for clustered client requests and 30 seconds (30s) for extend client requests. The preconfigured system property override is tangosol.coherence.replicated.request.timeout.

serializer: Specifies a serializer class for object serialization. Serializer classes must implement the com.tangosol.io.Serializer interface. The preferred method for specifying a serializer is to define it within the global serializer element and then configure it for a cache within the cache configuration file.

guardian-timeout: Specifies the guardian timeout value to use for guarding the service and any dependent threads. If the parameter is not specified, the default guardian timeout (as specified by the <timeout-milliseconds> operational configuration element) is used. See the service-guardian element to globally configure the service guardian for all services. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration: MS or ms (milliseconds), S or s (seconds), M or m (minutes), H or h (hours), D or d (days). If the value does not contain a unit, a unit of milliseconds is assumed. The preconfigured system property override is tangosol.coherence.replicated.guard.timeout.
Table A52 (Cont.) ReplicatedCache Service Parameters Parameter Name service-failure-policy Value Description Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. See the service-guardian element to globally configure the service guardian for all services. Legal values are:
exit-cluster attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to stop the cluster services. exit-process attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy cause the local node to exit the JVM and terminate abruptly. logging causes any detected problems to be logged, but no corrective action to be taken. a custom class the class configuration information for a com.tangosol.net.ServiceFailurePolicy implementation.
member-listener/class-name
Specifies the configuration information for a class that implements the com.tangosol.net.MemberListener interface. The implementation must have a public default constructor. The MemberListener implementation receives cache service lifecycle events. The member-listener is used as an alternative to programmatically adding a MapListener on a service.
service-priority
Specifies the priority for the service thread. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 10.
event-dispatcher-priority
Specifies the priority for the event dispatcher thread for each service. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 10.
reliable-transport
Specifies the name of the transport used by this service for reliable point-to-point communication. Valid values are: datagram, tmb (TCP Message Bus), sdmb (SDP Message Bus), and imb (InfiniBand Message Bus). The imb and sdmb transports are only available for Oracle Exalogic systems. The default value is the value specified in the <reliable-transport> subelement of the <unicast-listener> element. Note: Setting a value other than datagram results in the majority of cluster data being transported outside of the protection of SSL. Specifically, only cluster join operations are protected when enabling SSL for TCMP. The preconfigured system property override is tangosol.coherence.replicated.transport.reliable.
An OptimisticCache service supports the parameters listed in Table A53. These settings may also be specified for each service instance as part of the optimistic-scheme element in the coherence-cache-config.xml descriptor.
Table A53
OptimisticCache Service Parameters Parameter Name request-timeout Value Description Specifies the maximum amount of time a client waits for a response before abandoning the original request. The request time is measured on the client side as the time elapsed from the moment a request is sent for execution to the corresponding server node(s) and includes the following:
the time it takes to deliver the request to an executing node (server) the interval between the time the task is received and placed into a service queue until the execution starts the task execution time the time it takes to deliver a result back to the client
The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. Legal values are positive integers or zero (indicating no default timeout). The default value is an infinite timeout (0s) for clustered client requests and 30 seconds (30s) for extend client requests. The preconfigured system property override is tangosol.coherence.optimistic.request.timeout.
serializer
Specifies a serializer class for object serialization. Serializer classes must implement the com.tangosol.io.Serializer interface. The preferred method for specifying a serializer is to define it within the global serializer element and then configure it for a cache within the cache configuration file.
guardian-timeout
Specifies the guardian timeout value to use for guarding the service and any dependent threads. If the parameter is not specified, the default guardian timeout (as specified by the <timeout-milliseconds> operational configuration element) is used. See the service-guardian element to globally configure the service guardian for all services. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. The preconfigured system property override is tangosol.coherence.optimistic.guard.timeout.
Table A53 (Cont.) OptimisticCache Service Parameters Parameter Name service-failure-policy Value Description Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. See the service-guardian element to globally configure the service guardian for all services. Legal values are:
exit-cluster attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to stop the cluster services. exit-process attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to exit the JVM and terminate abruptly. logging causes any detected problems to be logged, but no corrective action to be taken. a custom class the class configuration information for a com.tangosol.net.ServiceFailurePolicy implementation.
member-listener/class-name
Specifies the configuration information for a class that implements the com.tangosol.net.MemberListener interface. The implementation must have a public default constructor. The MemberListener implementation receives cache service lifecycle events. The member-listener is used as an alternative to programmatically adding a MapListener on a service.
service-priority
Specifies the priority for the service thread. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 10.
event-dispatcher-priority
Specifies the priority for the event dispatcher thread for each service. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 10.
reliable-transport
Specifies the name of the transport used by this service for reliable point-to-point communication. Valid values are: datagram, tmb (TCP Message Bus), sdmb (SDP Message Bus), and imb (InfiniBand Message Bus). The imb and sdmb transports are only available for Oracle Exalogic systems. The default value is the value specified in the <reliable-transport> subelement of the <unicast-listener> element. Note: Setting a value other than datagram results in the majority of cluster data being transported outside of the protection of SSL. Specifically, only cluster join operations are protected when enabling SSL for TCMP. The preconfigured system property override is tangosol.coherence.optimistic.transport.reliable.
An Invocation service supports the parameters listed in Table A54. These settings may also be specified for each service instance as part of the invocation-scheme element in the coherence-cache-config.xml descriptor.
Table A54
Invocation Service Parameters Parameter Name thread-count Value Description Specifies the number of daemon threads used by the invocation service. Legal values are positive integers, 0, or -1. The value 0 indicates that all relevant tasks are performed on the service thread. The value -1 indicates that tasks are performed on the caller's thread where possible. Set the value to 0 for scenarios with purely in-memory data (no read-through, write-through, or write-behind) and simple access (no entry processors, aggregators, and so on). For heavy compute scenarios (such as aggregators), the number of threads should be the number of available cores for that compute. For example, if you run 4 nodes on a 16 core box, then there should be roughly 4 threads in the pool. For I/O intensive scenarios (such as read through, write-through, and write-behind), the number of threads must be higher. In this case, increase the threads just to the point that the box is saturated. The preconfigured system property override is tangosol.coherence.invocation.threads.
task-hung-threshold
Specifies the amount of time in milliseconds that a task can execute before it is considered "hung". Note: a posted task that has not yet started is never considered as hung. This attribute is applied only if the Thread pool is used (the thread-count value is positive). The preconfigured system property override is tangosol.coherence.invocation.task.hung.
task-timeout
Specifies the default timeout value in milliseconds for tasks that can be timed-out (for example, implement the com.tangosol.net.PriorityTask interface), but do not explicitly specify the task execution timeout value. The task execution time is measured on the server side and does not include the time spent waiting in a service backlog queue before being started. This attribute is applied only if the thread pool is used (the thread-count value is positive). Legal values are positive integers or zero (indicating no default timeout). The preconfigured system property override is tangosol.coherence.invocation.task.timeout.
Table A54 (Cont.) Invocation Service Parameters Parameter Name request-timeout Value, Description Specifies the default timeout value in milliseconds for requests that can time out (for example, implement the com.tangosol.net.PriorityTask interface), but do not explicitly specify the request timeout value. The request time is measured on the client side as the time elapsed from the moment a request is sent for execution to the corresponding server node(s) and includes the following:
the time it takes to deliver the request to an executing node (server) the interval between the time the task is received and placed into a service queue until the execution starts the task execution time the time it takes to deliver a result back to the client
The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. Legal values are positive integers or zero (indicating no default timeout). The default value is an infinite timeout (0s) for clustered client requests and 30 seconds (30s) for extend client requests. The preconfigured system property override is tangosol.coherence.invocation.request.timeout.
serializer
Specifies a serializer class for object serialization. Serializer classes must implement the com.tangosol.io.Serializer interface. The preferred method for specifying a serializer is to define it within the global serializer element and then configure it for a cache within the cache configuration file.
guardian-timeout
Specifies the guardian timeout value to use for guarding the service and any dependent threads. If the parameter is not specified, the default guardian timeout (as specified by the <timeout-milliseconds> operational configuration element) is used. See the service-guardian element to globally configure the service guardian for all services. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. The preconfigured system property override is tangosol.coherence.invocation.guard.timeout.
Table A54 (Cont.) Invocation Service Parameters Parameter Name service-failure-policy Value, Description Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. See the service-guardian element to globally configure the service guardian for all services. Legal values are:
exit-cluster attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to stop the cluster services. exit-process attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to exit the JVM and terminate abruptly. logging causes any detected problems to be logged, but no corrective action to be taken. a custom class the class configuration information for a com.tangosol.net.ServiceFailurePolicy implementation.
member-listener/class-name
Specifies the configuration information for a class that implements the com.tangosol.net.MemberListener interface. The implementation must have a public default constructor. The MemberListener implementation receives cache service lifecycle events. The member-listener is used as an alternative to programmatically adding a MapListener on a service.
service-priority
Specifies the priority for the service thread. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 10.
event-dispatcher-priority
Specifies the priority for the event dispatcher thread for each service. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 10.
worker-priority
Specifies the priority for the worker threads. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 5.
reliable-transport
Specifies the name of the transport used by this service for reliable point-to-point communication. Valid values are: datagram, tmb (TCP Message Bus), sdmb (SDP Message Bus), and imb (InfiniBand Message Bus). The imb and sdmb transports are only available for Oracle Exalogic systems. The default value is the value specified in the <reliable-transport> subelement of the <unicast-listener> element. Note: Setting a value other than datagram results in the majority of cluster data being transported outside of the protection of SSL. Specifically, only cluster join operations are protected when enabling SSL for TCMP. The preconfigured system property override is tangosol.coherence.invocation.transport.reliable.
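For illustration only, the following sketch shows a few of these parameters set on a service instance within an invocation-scheme in the coherence-cache-config.xml descriptor; the scheme and service names and the values are examples, not defaults.

<invocation-scheme>
  <scheme-name>example-invocation</scheme-name>
  <service-name>InvocationService</service-name>
  <!-- Execute invocation tasks on four worker threads -->
  <thread-count>4</thread-count>
  <!-- Report a task as hung after 30 seconds and time it out after one minute -->
  <task-hung-threshold>30000</task-hung-threshold>
  <task-timeout>60000</task-timeout>
  <autostart>true</autostart>
</invocation-scheme>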
A Proxy service supports the parameters listed in Table A55. These settings may also be specified for each service instance as part of the proxy-scheme element in the coherence-cache-config.xml descriptor.
Table A55
Proxy Service Parameters Parameter Name acceptor-config Value Description Contains the configuration of the connection acceptor used by the service to accept connections from Coherence*Extend clients and to allow them to use the services offered by the cluster without having to join the cluster.
proxy-config
Contains the configuration of the clustered service proxies managed by this service.
thread-count
Specifies the exact number of daemon threads for use by the proxy service. Legal values are positive integers or 0. Note: Specifying a value disables dynamic thread pooling. The value 0 indicates that all relevant tasks are performed on the service thread. The default value is no value and indicates that dynamic thread pooling is enabled and the number of threads automatically adjusts based on proxy service load. Use the thread-count-max and thread-count-min parameters to tune dynamic thread pool behavior. Proxy service threads perform operations on behalf of the calling application. Therefore, when explicitly setting the number of threads, set the value to as many threads as there are concurrent operations that are occurring. The preconfigured system property override is tangosol.coherence.proxy.threads.
task-hung-threshold
Specifies the amount of time in milliseconds that a task can execute before it is considered hung. Legal values are positive integers or zero (indicating no default timeout). Note: a posted task that has not yet started is never considered hung. This attribute is applied only if the Thread pool is used (the thread-count value is positive). The preconfigured system property override is tangosol.coherence.proxy.task.hung.
task-timeout
Specifies the default timeout value in milliseconds for tasks that can be timed-out (for example, implement the com.tangosol.net.PriorityTask interface), but do not explicitly specify the task execution timeout value. The task execution time is measured on the server side and does not include the time spent waiting in a service backlog queue before being started. This attribute is applied only if the thread pool is used (the thread-count value is positive). Legal values are positive integers or zero (indicating no default timeout). The preconfigured system property override is tangosol.coherence.proxy.task.timeout.
Table A55 (Cont.) Proxy Service Parameters Parameter Name request-timeout Value Description Specifies the maximum amount of time a client waits for a response before abandoning the original request. The request time is measured on the client side as the time elapsed from the moment a request is sent for execution to the corresponding server node(s) and includes the following:
the time it takes to deliver the request to an executing node (server) the interval between the time the task is received and placed into a service queue until the execution starts the task execution time the time it takes to deliver a result back to the client
The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. Legal values are positive integers or zero (indicating no default timeout). The default value is an infinite timeout (0s) for clustered client requests and 30 seconds (30s) for extend client requests. The preconfigured system property override is tangosol.coherence.proxy.request.timeout.
serializer
Specifies a serializer class for object serialization. Serializer classes must implement the com.tangosol.io.Serializer interface. The preferred method for specifying a serializer is to define it within the global serializer element and then configure it for a cache within the cache configuration file.
guardian-timeout
Specifies the guardian timeout value to use for guarding the service and any dependent threads. If the parameter is not specified, the default guardian timeout (as specified by the <timeout-milliseconds> operational configuration element) is used. See the service-guardian element to globally configure the service guardian for all services. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration:
If the value does not contain a unit, a unit of milliseconds is assumed. The preconfigured system property override is tangosol.coherence.proxy.guard.timeout.
Table A55 (Cont.) Proxy Service Parameters Parameter Name Value Description
service-failure-policy Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. See the service-guardian element to globally configure the service guardian for all services. Legal values are:
exit-cluster attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to stop the cluster services. exit-process attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to exit the JVM and terminate abruptly. logging causes any detected problems to be logged, but no corrective action to be taken. a custom class the class configuration information for a com.tangosol.net.ServiceFailurePolicy implementation.
member-listener/class-name
Specifies the configuration information for a class that implements the com.tangosol.net.MemberListener interface. The implementation must have a public default constructor. The MemberListener implementation receives cache service lifecycle events. The member-listener is used as an alternative to programmatically adding a MapListener on a service.
proxy-quorum-policy-scheme
Specifies quorum policy settings for the Proxy service.
load-balancer
Specifies the default load balancing strategy that is used by a proxy service if a strategy is not explicitly configured as part of the proxy scheme. Legal values are:
proxy (default) This strategy attempts to distribute client connections equally across proxy service members based upon existing connection count, connection limit, incoming and outgoing message backlog, and daemon pool utilization. client This strategy relies upon the client address provider implementation to dictate the distribution of clients across proxy service members. If no client address provider implementation is provided, the extend client tries each proxy service in a random order until a connection is successful.
service-priority
Specifies the priority for the service thread. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 10.
event-dispatcher-priority
Specifies the priority for the event dispatcher thread for each service. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 10.
worker-priority
Specifies the priority for the worker threads. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 5.
Table A55 (Cont.) Proxy Service Parameters Parameter Name reliable-transport Value Description Specifies the name of the transport used by this service for reliable point-to-point communication. Valid values are: datagram, tmb (TCP Message Bus), sdmb (SDP Message Bus), and imb (InfiniBand Message Bus). The imb and sdmb transports are only available for Oracle Exalogic systems. The default value is the value specified in the <reliable-transport> subelement of the <unicast-listener> element. Note: Setting a value other than datagram results in the majority of cluster data being transported outside of the protection of SSL. Specifically, only cluster join operations are protected when enabling SSL for TCMP. The preconfigured system property override is tangosol.coherence.proxy.transport.reliable. thread-count-max Specifies the maximum number of daemon threads that are allowed in the thread pool. This parameter is only valid if a thread-count value is not set. Legal values are positive integers or 0. A value of 0 indicates that no daemon threads are created and all client requests are handled by the proxy service thread. The default value is 8 times the number of cores. The preconfigured system property override is tangosol.coherence.proxy.threads.max. thread-count-min Specifies the minimum number of daemon threads that are allowed (and initially created) in the thread pool. This parameter is only valid if a thread-count value is not set. Legal values are positive integers. The default value is 2 times the number of cores. The preconfigured system property override is tangosol.coherence.proxy.threads.min.
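For illustration only, the following sketch shows a proxy-scheme that tunes the dynamic thread pool and binds its acceptor to an example address; all names, addresses, and values are examples, not defaults.

<proxy-scheme>
  <scheme-name>example-proxy</scheme-name>
  <service-name>TcpProxyService</service-name>
  <!-- Let the dynamic thread pool range between 2 and 10 daemon threads -->
  <thread-count-max>10</thread-count-max>
  <thread-count-min>2</thread-count-min>
  <acceptor-config>
    <tcp-acceptor>
      <local-address>
        <address>192.168.0.100</address>
        <port>9099</port>
      </local-address>
    </tcp-acceptor>
  </acceptor-config>
  <autostart>true</autostart>
</proxy-scheme>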
service-guardian
Used in: cluster-config
Description
Specifies the configuration of the service guardian, which detects and attempts to resolve service deadlocks.
Elements
Table A56 describes the subelements of the service-guardian element.
Table A56 Element <service-failure-policy> service-guardian Subelements Required/ Optional Optional Description Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. Legal values are:
exit-cluster (default) attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to stop the cluster services. exit-process attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to exit the JVM and terminate abruptly. logging causes any detected problems to be logged, but no corrective action to be taken. a custom class an <instance> subelement is used to provide the class configuration information for a com.tangosol.net.ServiceFailurePolicy implementation.
<timeout-milliseconds>
Optional
The timeout value used to guard against deadlocked or unresponsive services. It is recommended that service-guardian/timeout-milliseconds be set equal to or greater than the packet-delivery/timeout-milliseconds value. The default value is 305000. The preconfigured system property override is tangosol.coherence.guard.timeout
The content override attribute xml-override can optionally be used to fully or partially override the contents of this element with an XML document that is external to the base document. See "Attribute Reference" on page A-95 for more information.
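For illustration only, the following operational override sketch sets a longer guardian timeout and a logging-only failure policy; the values are examples, not defaults.

<coherence>
  <cluster-config>
    <service-guardian>
      <!-- Log guardian problems instead of stopping services -->
      <service-failure-policy>logging</service-failure-policy>
      <!-- Give services up to 6 minutes before the guardian takes action -->
      <timeout-milliseconds>360000</timeout-milliseconds>
    </service-guardian>
  </cluster-config>
</coherence>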
services
Used in: cluster-config
Description
The services element contains the declarative data for each service.
Elements
Table A57 describes the subelements of the services element.
Table A57 Element <service> services Subelements Required/ Optional Optional Description Specifies the declarative data of a particular service.
shutdown-listener
Used in: cluster-config.
Description
Specifies the action a cluster node should take upon receiving an external shutdown request. External shutdown includes the "kill" command on UNIX and Ctrl-C on Windows and UNIX.
Elements
Table A58 describes the subelements of the shutdown-listener element.
Table A58 Element <enabled> shutdown-listener Subelements Required/ Optional Required Description Specifies the type of action to take upon an external JVM shutdown. Legal values:
none performs no explicit shutdown actions
force (default) performs a "hard-stop" of the node by calling Cluster.stop()
graceful performs a "normal" shutdown by calling Cluster.shutdown()
true same as force
false same as none
Note: For production use, the suggested value is none unless testing has verified that the behavior on external shutdown is exactly what is desired. The preconfigured system property override is tangosol.coherence.shutdownhook.
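For illustration only, the following operational override sketch applies the production suggestion above.

<coherence>
  <cluster-config>
    <shutdown-listener>
      <!-- Take no explicit action on an external shutdown request -->
      <enabled system-property="tangosol.coherence.shutdownhook">none</enabled>
    </shutdown-listener>
  </cluster-config>
</coherence>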
socket-address
Used in: well-known-addresses, address-provider
Description
The socket-address element specifies the address (IP, or DNS name, and port) to which a socket is bound.
Elements
Table A59 describes the subelements of the socket-address element.
Table A59 Element <address> socket-address Subelements Required/ Optional Required Description Specifies the IP address that a Socket listens or publishes on. Enter either an IP address or DNS name. Note: The localhost setting may not work on systems that define localhost as the loopback address; in that case, specify the computer name or the specific IP address. <port> Required Specifies the port that the Socket listens or publishes on. Legal values are from 1 to 65535. Note: When setting up a WKA member, the port value must match the port value that is specified for the unicast listener port. See the <unicast-listener> element.
socket-provider
Used in: socket-providers, unicast-listener, ssl.
Description
The <socket-provider> element contains the configuration information for a socket and channel factory. The following socket providers are supported and referenced using their defined id attribute name.
<socket-providers>
  <socket-provider id="system">
    <system/>
  </socket-provider>
  <socket-provider id="tcp">
    <tcp/>
  </socket-provider>
  <socket-provider id="ssl">
    <ssl>
      <identity-manager>
        <key-store>
          <url system-property="tangosol.coherence.security.keystore">file:keystore.jks</url>
          <password system-property="tangosol.coherence.security.password"/>
        </key-store>
        <password system-property="tangosol.coherence.security.password"/>
      </identity-manager>
      <trust-manager>
        <algorithm>PeerX509</algorithm>
        <key-store>
          <url system-property="tangosol.coherence.security.keystore">file:keystore.jks</url>
          <password system-property="tangosol.coherence.security.password"/>
        </key-store>
      </trust-manager>
      <socket-provider>tcp</socket-provider>
    </ssl>
  </socket-provider>
  <socket-provider id="sdp">
    <sdp/>
  </socket-provider>
</socket-providers>
Alternate SSL definitions can be created to support more elaborate SSL configurations.
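For illustration only, the following sketch adds an alternate SSL definition under its own id; the id, keystore URL, and password system property name are hypothetical.

<socket-providers>
  <socket-provider id="mySSL">
    <ssl>
      <identity-manager>
        <key-store>
          <!-- Hypothetical server keystore -->
          <url>file:server.jks</url>
          <password system-property="example.keystore.password"/>
        </key-store>
        <password system-property="example.keystore.password"/>
      </identity-manager>
    </ssl>
  </socket-provider>
</socket-providers>

A consumer such as the <unicast-listener> could then reference the definition by name, for example <socket-provider>mySSL</socket-provider>.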
Elements
Table A60 describes the subelements of the socket-provider element.
Table A60 socket-provider Subelements
Element <system> Required/ Optional Optional Description Specifies a socket provider that produces instances of the JVM's default socket and channel implementations. This is the default socket provider.
<ssl>
Optional
Specifies a socket provider that produces socket and channel implementations which use SSL.
<tcp>
Optional
Specifies a socket provider that produces TCP-based sockets and channel implementations.
<sdp>
Optional
Specifies a socket provider that produces SDP-based sockets and channel implementations provided that the JVM and underlying network stack supports SDP.
socket-providers
Used in: cluster-config
Description
The socket-providers element contains the declarative data for each socket provider implementation. Coherence supports the following socket providers: system, tcp, ssl, and sdp.
Elements
Table A61 describes the subelements of the socket-providers element.
Table A61 Element <socket-provider> socket-providers Subelements Required/ Optional Optional Description Specifies the configuration information for a socket and channel factory.
ssl
Used in: socket-provider.
Description
The <ssl> element contains the configuration information for a socket provider that produces socket and channel implementations which use SSL. If SSL is configured for the unicast listener, the listener must be configured to use well known addresses.
Elements
Table A62 describes the subelements of the ssl element.
Table A62 Element <protocol> ssl Subelements Required/ Optional Optional Description Specifies the name of the protocol used by the socket and channel implementations produced by the SSL socket provider. The default value is TLS.
<provider>
Optional
Specifies the configuration for a security provider instance.
<executor>
Optional
Specifies the configuration information for an implementation of the java.util.concurrent.Executor interface. A <class-name> subelement is used to provide the name of a class that implements the Executor interface. As an alternative, use a <class-factory-name> subelement to specify a factory class for creating Executor instances and a <method-name> subelement that specifies the name of a static factory method on the factory class which performs object instantiation. Either approach can specify initialization parameters using the <init-params> element.
<identity-manager>
Optional
Specifies the configuration information for initializing an identity manager instance.
<trust-manager>
Optional
Specifies the configuration information for initializing a trust manager instance.
<hostname-verifier>
Optional
Specifies the configuration information for an implementation of the javax.net.ssl.HostnameVerifier interface. During the SSL handshake, if the URL's host name and the server's identification host name mismatch, the verification mechanism calls back to this instance to determine if the connection should be allowed. A <class-name> subelement is used to provide the name of a class that implements the HostnameVerifier interface. As an alternative, use a <class-factory-name> subelement to specify a factory class for creating HostnameVerifier instances and a <method-name> subelement that specifies the name of a static factory method on the factory class which performs object instantiation. Either approach can specify initialization parameters using the <init-params> element.
<socket-provider>
Optional
Specifies the configuration information for a delegate provider for SSL. Valid values are tcp and sdp. The default value is tcp.
tcp-ring-listener
Used in: cluster-config.
Description
The TCP-ring provides a means for fast death detection of another node within the cluster. When enabled, the cluster nodes use a single "ring" of TCP connections spanning the entire cluster. A cluster node can use the TCP connection to detect the death of another node within a heartbeat interval (the default value is one second; see the <heartbeat-milliseconds> subelement of packet-delivery). If disabled, the cluster node must rely on detecting that another node has stopped responding to UDP packets for a considerably longer interval (see the <timeout-milliseconds> subelement of packet-delivery). When a death is detected, it is communicated to all other cluster nodes.
Elements
Table A63 describes the subelements of the tcp-ring-listener element.
Table A63 Element <enabled> tcp-ring-listener Subelements Required/ Optional Optional Description Specifies whether the tcp ring listener should be enabled to detect node failures faster. Valid values are true and false. The default value is true.
<ip-timeout>
Optional
Specifies the timeout to use for determining that a computer that is hosting cluster members has become unreachable. A number of connection attempts may be made before determining that the unreachable members should be removed. Legal values are strings representing time intervals. A timeout of 0 disables system-level monitoring and is not recommended. The default value is 5s. The values of the <ip-timeout> and <ip-attempts> elements should be high enough to insulate against allowable temporary network outages. This feature relies upon the java.net.InetAddress.isReachable mechanism; refer to the API documentation for a description of how it identifies reachability.
<ip-attempts>
Optional
Specifies the number of connection attempts to make before determining that a computer that is hosting cluster members has become unreachable, and that those cluster members should be removed. The values of the <ip-timeout> and <ip-attempts> elements should be high enough to insulate against allowable temporary network outages. Legal values are positive integers. The default value is 3.
<listen-backlog>
Optional
Specifies the size of the TCP/IP server socket backlog queue. Valid values are positive integers. The default value is operating system dependent.
<priority>
Required
Specifies a priority of the tcp ring listener execution thread. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 6.
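For illustration only, the following operational override sketch raises the machine-level death detection thresholds; the values are examples, not defaults.

<coherence>
  <cluster-config>
    <tcp-ring-listener>
      <!-- Allow longer temporary outages before declaring a machine unreachable -->
      <ip-timeout>10s</ip-timeout>
      <ip-attempts>5</ip-attempts>
    </tcp-ring-listener>
  </cluster-config>
</coherence>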
traffic-jam
Used in: packet-publisher.
Description
The traffic-jam element is used to control the rate at which client threads enqueue packets for the packet publisher to transmit on the network. When the limit is exceeded, any client thread is forced to pause until the number of outstanding packets drops below the specified limit. To limit the rate at which the Publisher transmits packets, see the flow-control element.
Elements
Table A64 describes the subelements of the traffic-jam element.
Table A64 Element <maximum-packets> traffic-jam Subelements Required/ Optional Required Description Specifies the maximum number of pending packets that the Publisher tolerates before determining that it is clogged and must slow down client requests (requests from local non-system threads). Zero means no limit. This property prevents most unexpected out-of-memory conditions by limiting the size of the resend queue. The default value is 8192.
<pause-milliseconds>
Required
Number of milliseconds that the Publisher pauses a client thread that is trying to send a message when the Publisher is clogged. The Publisher does not allow the message to go through until the clog is gone, and repeatedly sleeps the thread for the duration specified by this property. The default value is 10.
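For illustration only, the following operational override sketch adjusts both traffic-jam settings; the values are examples, not defaults.

<coherence>
  <cluster-config>
    <packet-publisher>
      <traffic-jam>
        <!-- Pause client threads once 16384 packets are pending, in 20 ms increments -->
        <maximum-packets>16384</maximum-packets>
        <pause-milliseconds>20</pause-milliseconds>
      </traffic-jam>
    </packet-publisher>
  </cluster-config>
</coherence>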
trust-manager
Used in: ssl.
Description
The <trust-manager> element contains the configuration information for initializing a javax.net.ssl.TrustManager instance. A trust manager is responsible for managing the trust material that is used when making trust decisions and for deciding whether credentials presented by a peer should be accepted. A valid trust-manager configuration contains at least one child element.
Elements
Table A65 describes the elements of the trust-manager element.
Table A65 Element <algorithm> trust-manager Subelements Required/ Optional Optional Description Specifies the algorithm used by the trust manager. The default value is SunX509.
<provider>
Optional
Specifies the configuration for a security provider instance.
<key-store>
Optional
Specifies the configuration for a key store implementation.
unicast-listener
Used in: cluster-config.
Description
Specifies the configuration information for the Unicast listener. This element is used to specify the address and port that a cluster node binds to, to listen for point-to-point cluster communications.
Multicast-Free Clustering
By default Coherence uses a multicast protocol to discover other nodes when forming a cluster. If multicast networking is undesirable, or unavailable in your environment, the well-known-addresses feature may be used to eliminate the need for multicast traffic.
Elements
Table A66 describes the subelements of the unicast-listener element.
Table A66 Element <socket-provider> unicast-listener Subelements Required/ Optional Optional Description Specifies either the configuration for a socket provider or a reference to a socket provider configuration that is defined within the <socket-providers> element. The following socket providers are available: system (default), ssl, tcp, and sdp. Refer to a socket provider configuration using its defined id attribute name. For example: <socket-provider>ssl</socket-provider> The preconfigured system property override is tangosol.coherence.socketprovider. <reliable-transport> Optional Specifies the name of the transport used by TCMP for reliable point-to-point communication. Valid values are: datagram, tmb (TCP Message Bus), sdmb (SDP Message Bus), and imb (InfiniBand Message Bus). The imb and sdmb transports are only available for Oracle Exalogic systems. The default value is datagram. Note: Setting a value other than datagram results in the majority of cluster data being transported outside of the protection of SSL. Specifically, only cluster join operations are protected when enabling SSL for TCMP. The preconfigured system property override is tangosol.coherence.transport.reliable. <well-known-addresses> Optional Contains a list of "well known" addresses (WKA) that are used by the cluster discovery protocol instead of using multicast broadcast to discover cluster members.
Table A66 (Cont.) unicast-listener Subelements Element <address> Required/ Optional Required Description Specifies the IP address that a Socket listens or publishes on. The address may also be entered using CIDR notation as a subnet and mask (for example, 192.168.1.0/24), which allows runtime resolution against the available local IP addresses. Note: The localhost setting may not work on systems that define localhost as the loopback address; in that case, specify the computer name or the specific IP address. Also, the multicast listener, by default, binds to the same interface as defined by this address. The default value is localhost. The preconfigured system property override is tangosol.coherence.localhost. <port> Required Specifies the ports that the Socket listens or publishes on. A second port is automatically opened and defaults to the next available port. Legal values are from 1 to 65535. The default value is 8088 for the first port and 8089 (if available) for the second port. The preconfigured system property override is tangosol.coherence.localport. <port-auto-adjust> Required Specifies whether the unicast port is automatically incremented if the specified port cannot be bound to because it is in use. Legal values are true or false. The default value is true. The preconfigured system property override is tangosol.coherence.localport.adjust. <packet-buffer> Required Specifies how many incoming packets the operating system is requested to buffer. The value may be expressed either in terms of packets or bytes.
<priority>
Required
Specifies a priority of the unicast listener execution thread. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 8.
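For illustration only, the following operational override sketch binds the unicast listener to an example subnet using the CIDR form described above; the subnet and port are examples only.

<coherence>
  <cluster-config>
    <unicast-listener>
      <!-- Resolve the bind address at run time against the 192.168.1.0/24 subnet -->
      <address system-property="tangosol.coherence.localhost">192.168.1.0/24</address>
      <port system-property="tangosol.coherence.localport">8088</port>
      <port-auto-adjust>true</port-auto-adjust>
    </unicast-listener>
  </cluster-config>
</coherence>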
volume-threshold
Used in: packet-speaker
Description
Specifies the minimum outgoing packet volume which must exist for the speaker daemon to be activated.
Performance Impact
Elements
Table A67 describes the subelements of the volume-threshold element.
Table A67 Element <minimum-packets> volume-threshold Subelements Required/ Optional Required Description Specifies the minimum number of packets which must be ready to be sent for the speaker daemon to be activated. A value of 0 forces the speaker to always be used, while a very high value causes it to never be used. If unspecified (the default), it matches the packet-buffer.
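For illustration only, the following operational override sketch sets an explicit activation threshold; the value is an example, not a default.

<coherence>
  <cluster-config>
    <packet-speaker>
      <volume-threshold>
        <!-- Activate the speaker daemon only when at least 32 packets are ready -->
        <minimum-packets>32</minimum-packets>
      </volume-threshold>
    </packet-speaker>
  </cluster-config>
</coherence>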
well-known-addresses
Used in: unicast-listener.
Note:
This is not a security-related feature, and does not limit the addresses which are allowed to join the cluster. See the authorized-hosts element for details on limiting cluster membership.
If you are having difficulties establishing a cluster when using multicast, see Administering Oracle Coherence for instructions on performing a multicast connectivity test.
Description
By default, Coherence uses a multicast protocol to discover other nodes when forming a cluster. If multicast networking is undesirable, or unavailable in your environment, the Well Known Addresses feature may be used to eliminate the need for multicast traffic. When in use the cluster is configured with a relatively small list of nodes which are allowed to start the cluster, and which are likely to remain available over the cluster lifetime. There is no requirement for all WKA nodes to be simultaneously active at any point in time. This list is used by all other nodes to find their way into the cluster without the use of multicast, thus at least one node that is configured as a well-known node must be running for other nodes to be able to join.
Example
The following example configures two well-known-addresses with the default port.
<cluster-config>
  <unicast-listener>
    <well-known-addresses>
      <socket-address id="1">
        <address>192.168.0.100</address>
        <port>8088</port>
      </socket-address>
      <socket-address id="2">
        <address>192.168.0.101</address>
        <port>8088</port>
      </socket-address>
    </well-known-addresses>
  </unicast-listener>
</cluster-config>
Elements
Table A68 describes the subelements of the well-known-addresses element.
Table A68 Element <socket-address> well-known-addresses Subelements Required/ Optional Optional Description Specifies a list of WKA that are used by the cluster discovery protocol instead of using multicast broadcast. If one or more WKA is specified, for a member to join the cluster it either has to be a WKA or there has to be at least one WKA member running. Additionally, all cluster communication is performed using unicast. If empty or unspecified, multicast communication is used. The preconfigured system property overrides are tangosol.coherence.wka and tangosol.coherence.wka.port.
<address-provider>
Optional
Contains the configuration for a com.tangosol.net.AddressProvider implementation that supplies the WKAs. The calling component attempts to obtain the full list upon node startup; the provider must return a terminating null address to indicate that all available addresses have been returned.
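For illustration only, the following sketch shows the address-provider alternative described above; the com.example.WkaAddressProvider class name is hypothetical and would have to implement com.tangosol.net.AddressProvider.

<unicast-listener>
  <well-known-addresses>
    <address-provider>
      <!-- Hypothetical custom provider that returns the WKA list -->
      <class-name>com.example.WkaAddressProvider</class-name>
    </address-provider>
  </well-known-addresses>
</unicast-listener>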
Attribute Reference
Table A69 describes the attributes available in the operational deployment descriptor.
Table A69 Attribute xml-override Operational Deployment Descriptor Attributes Required/ Optional Optional Description The xml-override attribute allows the content of an element to be fully or partially overridden with an XML document that is external to the base document. The legal value of this attribute is the name of the XML document, which should be accessible using ClassLoader.getResourceAsStream(String name) by the classes contained in the coherence.jar library. In general, the name should be prefixed with '/' and located in the classpath. The override XML document referred to by this attribute does not have to exist. However, if it does exist, then its root element must have the same name as the element it overrides. In cases where there are multiple elements with the same name (for example, <service>), the id attribute is used to identify the base element that is overridden and the override element itself. The elements of the override document that do not have a match in the base document are just appended to the base. The following elements can be overridden by their own XML override file: address-provider, authorized-hosts, cache-factory-builder-config, cluster-config, coherence, configurable-cache-factory-config, incoming-message-handler, logging-config, multicast-listener, outgoing-message-handler, security-config, serializer, service, service-failure-policy, shutdown-listener, tcp-ring-listener, unicast-listener, packet-speaker, packet-publisher, mbeans
id
Optional
The id attribute differentiates elements that can have multiple occurrences (for example, <service>). See "Understanding the XML Override Feature" on page 3-16.
system-property
Optional
This attribute is used to specify a system property name for any element. The system property is used to override the element value from the Java command line. This feature enables the same operational descriptor (and override file) to be used across all cluster nodes and to customize each node using system properties. See Appendix C, "System Property Overrides," for more information on this feature.
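For illustration only, the following fragment sketches how the xml-override attribute might appear in an operational descriptor; the /cluster-config-override.xml file name is hypothetical, and that override file's root element would have to be <cluster-config>.

<coherence>
  <!-- Pull in cluster configuration overrides from an external document -->
  <cluster-config xml-override="/cluster-config-override.xml"/>
</coherence>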
B Cache Configuration Elements
This appendix provides a detailed reference of the cache configuration deployment descriptor elements and includes a brief overview of the descriptor. This appendix includes the following sections:
Notes:
The schema located in the coherence.jar library is always used at run time even if the xsi:schemaLocation attribute references the Web URL. The xsi:schemaLocation attribute can be omitted to disable schema validation. When deploying Coherence into environments where the default character set is EBCDIC rather than ASCII, ensure that the deployment descriptor file is in ASCII format and is deployed into its run-time environment in the binary format.
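For reference, a cache configuration deployment descriptor typically begins with a root element similar to the following sketch, which declares the Coherence cache configuration namespace and the schema location discussed above; treat the exact layout as illustrative.

<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
  xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config
  coherence-cache-config.xsd">
  <!-- caching-scheme-mapping and caching-schemes elements are defined here -->
</cache-config>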
Element Reference
Table B1 lists all non-terminal cache configuration deployment descriptor elements.
Table B1 Element acceptor-config address-provider async-store-manager authorized-hosts back-scheme backing-map-scheme backup-storage bdb-store-manager cache-config cache-mapping cache-service-proxy caching-scheme-mapping caching-schemes class-scheme Non-Terminal Cache Configuration Elements Used In proxy-scheme name-service-addresses, remote-addresses, tcp-acceptor external-scheme, paged-external-scheme tcp-acceptor near-scheme, overflow-scheme distributed-scheme, optimistic-scheme, replicated-scheme distributed-scheme external-scheme, paged-external-scheme, async-store-manager root element caching-scheme-mapping proxy-config cache-config cache-config caching-schemes, local-scheme, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, overflow-scheme, read-write-backing-map-scheme, cachestore-scheme, listener local-scheme, read-write-backing-map-scheme external-scheme, paged-external-scheme, async-store-manager cache-config caching-schemes, near-scheme, overflow-scheme caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, overflow-scheme, read-write-backing-map-scheme acceptor-config ssl back-scheme, backing-map-scheme, caching-schemes, internal-cache-scheme near-scheme, overflow-scheme init-params class-scheme remote-cache-scheme, remote-invocation-scheme interceptors cache-mapping, distributed-scheme
Table B1 (Cont.) Non-Terminal Cache Configuration Elements Element instance invocation-scheme key-associator key-partitioning key-store lh-file-manager listener Used In serializer, socket-provider, service-failure-policy caching-schemes distributed-scheme distributed-scheme identity-manager, trust-manager external-scheme, paged-external-scheme, async-store-manager local-scheme, external-scheme, paged-external-scheme, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, overflow-scheme, read-write-backing-map-scheme http-acceptor, tcp-acceptor, tcp-initiator caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, overflow-scheme, read-write-backing-map-scheme tcp-initiator caching-schemes external-scheme, paged-external-scheme, async-store-manager external-scheme, paged-external-scheme, async-store-manager cachestore-scheme, distributed-scheme, remote-cache-scheme caching-schemes, near-scheme, overflow-scheme
local-address local-scheme
outgoing-message-handler acceptor-config, initiator-config overflow-scheme paged-external-scheme caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, read-write-backing-map-scheme caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, overflow-scheme, read-write-backing-map-scheme
partitioned-quorum-policy-scheme distributed-scheme
provider identity-manager, ssl, trust-manager
proxy-config proxy-scheme
proxy-scheme caching-schemes
read-write-backing-map-scheme caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme
remote-addresses tcp-initiator
remote-cache-scheme cachestore-scheme, caching-schemes, near-scheme
remote-invocation-scheme caching-schemes
Table B1 (Cont.) Non-Terminal Cache Configuration Elements Element replicated-scheme serializer Used In caching-schemes, near-scheme, overflow-scheme acceptor-config, defaults, distributed-scheme, initiator-config, invocation-scheme, optimistic-scheme, replicated-scheme, transactional-scheme name-service-addresses, remote-addresses tcp-acceptor, tcp-initiator socket-provider acceptor-config initiator-config caching-schemes ssl
acceptor-config
Used in: proxy-scheme
Description
The acceptor-config element specifies the configuration information for a TCP/IP or HTTP (for REST) connection acceptor. The connection acceptor is used by a proxy service to enable Coherence*Extend clients to connect to the cluster and use cluster services without having to join the cluster.
Elements
Table B2 describes the subelements of the acceptor-config element.
Table B2 Element <http-acceptor> acceptor-config Subelements Required/ Optional Optional Description Specifies the configuration information for a connection acceptor that accepts connections from remote REST clients over HTTP. This element cannot be used together with the <tcp-acceptor> element.
<tcp-acceptor>
Optional
Specifies the configuration information for a connection acceptor that enables Coherence*Extend clients to connect to the cluster over TCP/IP. This element cannot be used together with the <http-acceptor> element.
<outgoing-message-handler>
Optional
Specifies the configuration information used by the connection acceptor to detect dropped client-to-cluster connections.
Table B2 (Cont.) acceptor-config Subelements Element <use-filters> Required/ Optional Optional Description Contains the list of filter names to be used by this connection acceptor. For example, specifying <use-filters> as follows activates gzip compression for all network messages, which can help substantially with WAN and low-bandwidth networks.
<use-filters>
  <filter-name>gzip</filter-name>
</use-filters>
<serializer>
Optional
Specifies the class configuration information for a com.tangosol.io.Serializer implementation used by the connection acceptor to serialize and deserialize user types. For example, the following configures a ConfigurablePofContext that uses the my-pof-types.xml POF type configuration file to deserialize user types to and from a POF stream:
<serializer>
  <class-name>com.tangosol.io.pof.ConfigurablePofContext</class-name>
  <init-params>
    <init-param>
      <param-type>string</param-type>
      <param-value>my-pof-types.xml</param-value>
    </init-param>
  </init-params>
</serializer>
<connection-limit>
Optional
The maximum number of simultaneous connections allowed by this connection acceptor. Valid values are positive integers and zero. A value of zero implies no limit. The default value is zero.
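For illustration only, the following sketch combines several of the subelements above inside a single acceptor-config; the address, port, and connection limit are example values.

<acceptor-config>
  <tcp-acceptor>
    <local-address>
      <address>192.168.0.100</address>
      <port>9099</port>
    </local-address>
  </tcp-acceptor>
  <use-filters>
    <filter-name>gzip</filter-name>
  </use-filters>
  <!-- Refuse new extend clients once 100 connections are open -->
  <connection-limit>100</connection-limit>
</acceptor-config>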
address-provider
Used in: name-service-addresses, remote-addresses, tcp-acceptor
Description
The address-provider element specifies either socket address information (IP, or DNS name, and port) or an implementation of the com.tangosol.net.AddressProvider interface. The interface offers a programmatic way to define socket addresses. The preferred approach is to reference an address provider definition that is included in an operational override file. This approach decouples deployment configuration from application configuration. However, socket addresses can also be configured in-line and is typical during development. For additional details on defining address providers that can be referenced, see "address-provider" on page A-6. The following example references an address provider definition that contains the socket address to which a TCP/IP acceptor is bound.
<tcp-acceptor>
  <address-provider>ap2</address-provider>
</tcp-acceptor>
The following example references an address provider definition that contains the socket address of a TCP/IP acceptor on the cluster.
<tcp-initiator>
  <remote-addresses>
    <address-provider>ap3</address-provider>
  </remote-addresses>
</tcp-initiator>
The following example references an address provider definition that contains the socket address of a name service TCP/IP acceptor on the cluster.
<tcp-initiator>
  <name-service-addresses>
    <address-provider>ap4</address-provider>
  </name-service-addresses>
</tcp-initiator>
Elements
Table B3 describes the subelements of the address-provider element.
Table B3 address-provider Subelements
Element <class-name> Required/ Optional Optional Description Specifies the fully qualified name of a class that implements the com.tangosol.net.AddressProvider interface. This element cannot be used with the <class-factory-name> element.
<class-factory-name>
Optional
Specifies the fully qualified name of a factory class for creating address provider instances. The instances must implement the com.tangosol.net.AddressProvider interface. This element cannot be used with the <class-name> element and is used with the <method-name> element.
<method-name>
Optional
Specifies the name of a static factory method on the factory class which performs object instantiation.
<init-params>
Optional
Specifies initialization parameters which are accessible by implementations that include a public constructor with a matching signature. Initialization parameters can be specified for both the <class-name> element and the <class-factory-name> element.
<socket-address>
Optional
Specifies the address (IP, or DNS name, and port) to which a socket is bound. This element cannot be used if an address provider implementation is defined using the <class-name> or <class-factory-name> element.
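For illustration only, the following sketch shows the in-line style mentioned above, where a socket address is supplied directly within an address-provider rather than by reference; the address and port are example values.

<tcp-initiator>
  <remote-addresses>
    <address-provider>
      <socket-address>
        <address>192.168.0.100</address>
        <port>9099</port>
      </socket-address>
    </address-provider>
  </remote-addresses>
</tcp-initiator>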
async-store-manager
Used in: external-scheme, paged-external-scheme.
Description
The async-store-manager element adds asynchronous write capabilities to other store manager implementations. Supported store managers include:
custom-store-manager: allows definition of custom implementations of store managers
bdb-store-manager: uses Berkeley Database JE to implement an on disk cache
lh-file-manager: uses a Coherence LH on disk database cache
nio-file-manager: uses NIO to implement memory-mapped file based cache
nio-memory-manager: uses NIO to implement an off JVM heap, in-memory cache
Implementation
This store manager is implemented by the com.tangosol.io.AsyncBinaryStoreManager class.
Elements
Table B4 describes the subelements of the async-store-manager element.
Table B4 Element <class-name> async-store-manager Subelements Required/ Optional Optional Description Specifies a custom implementation of the async-store-manager. Any custom implementation must extend the com.tangosol.io.AsyncBinaryStoreManager class and declare the exact same set of public constructors.
<init-params>
Optional
Specifies initialization parameters, for use in custom async-store-manager implementations.
<bdb-store-manager>
Optional
Configures the external cache to use Berkeley Database JE on disk databases for cache storage.
<custom-store-manager>
Optional
Configures the external cache to use a custom storage manager implementation.
<lh-file-manager>
Optional
Configures the external cache to use a Coherence LH on disk database for cache storage. The LH store manager is deprecated. Use the Berkeley DB store manager <bdb-store-manager> for similar functionality.
Table B4 (Cont.) async-store-manager Subelements Element <nio-file-manager> <nio-memory-manager> <async-limit> Required/ Optional Optional Optional Optional Description Configures the external cache to use a memory-mapped file for cache storage. Configures the external cache to use an off JVM heap, memory region for cache storage. Specifies the maximum number of bytes that are queued to be written asynchronously. Setting the value to zero indicates that the implementation default for the maximum number of bytes is used. The value of this element must be in the following format: (\d)+((.)(\d)+)?[K|k|M|m]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied:
If the value does not contain a factor, a factor of one is assumed. Valid values are any positive memory sizes and zero. The default value is 4MB.
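As a sketch of how these elements nest (the scheme name, directory, and limit are illustrative), an external scheme might wrap a Berkeley Database JE store manager in an asynchronous store manager as follows:

<external-scheme>
  <scheme-name>example-async-bdb</scheme-name>
  <async-store-manager>
    <bdb-store-manager>
      <directory>/tmp/bdb-store</directory>
    </bdb-store-manager>
    <async-limit>8MB</async-limit>
  </async-store-manager>
</external-scheme>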
authorized-hosts
Used in: tcp-acceptor.
Description
This element contains the collection of IP addresses of TCP/IP initiator hosts that are allowed to connect to the cluster using a TCP/IP acceptor. If this collection is empty no constraints are imposed. Any number of host-address and host-range elements may be specified.
Elements
Table B5 describes the subelements of the authorized-hosts element.
Table B5 authorized-hosts Subelements

<host-address> (Optional): Specifies an IP address or host name. If any are specified, only hosts with the specified host-addresses or within the specified host-ranges are allowed to join the cluster. The content override attribute id can optionally be used to fully or partially override the contents of this element with an XML document that is external to the base document.

<host-range> (Optional): Specifies a range of IP addresses. If any are specified, only hosts with the specified host-addresses or within the specified host-ranges are allowed to join the cluster.

<host-filter> (Optional): Specifies class configuration information for a com.tangosol.util.Filter implementation that is used by a TCP/IP acceptor to determine whether to accept a particular TCP/IP initiator. The evaluate() method is passed the java.net.InetAddress of the client. Implementations should return true to allow the client to connect. Classes are specified using the <class-name> subelement. Any initialization parameters can be defined within an <init-params> subelement.
The content override attributes xml-override and id can optionally be used to fully or partially override the contents of this element with an XML document that is external to the base document. See "Attribute Reference" on page A-95.
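A minimal sketch of an authorized-hosts block within a <tcp-acceptor> follows (the addresses are placeholders, and the <from-address> and <to-address> subelements shown for <host-range> are assumed from the host-range reference):

<tcp-acceptor>
  <authorized-hosts>
    <host-address>192.0.2.10</host-address>
    <host-range>
      <from-address>192.0.2.20</from-address>
      <to-address>192.0.2.30</to-address>
    </host-range>
  </authorized-hosts>
</tcp-acceptor>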
back-scheme
Used in: near-scheme, overflow-scheme
Description
The back-scheme element specifies the back-tier cache of a composite cache.
Elements
Table B6 describes the subelements of the back-scheme element.
Table B6 back-scheme Subelements

<distributed-scheme> (Optional): Defines a cache scheme where storage of cache entries is partitioned across the cluster nodes.

<optimistic-scheme> (Optional): Defines a replicated cache scheme which uses optimistic rather than pessimistic locking.

<replicated-scheme> (Optional): Defines a cache scheme where each cache entry is stored on all cluster nodes.

<transactional-scheme> (Optional): Defines a cache scheme where storage of cache entries is partitioned across the cluster nodes with transactional guarantees.

<local-scheme> (Optional): Local cache schemes define in-memory "local" caches. Local caches are generally nested within other cache schemes, for instance as the front-tier of a near scheme.

<external-scheme> (Optional): External schemes define caches which are not JVM heap based, allowing for greater storage capacity.

<paged-external-scheme> (Optional): As with external-scheme, paged-external-schemes define caches which are not JVM heap based, allowing for greater storage capacity.

<class-scheme> (Optional): Class schemes provide a mechanism for instantiating an arbitrary Java object for use by other schemes. The scheme which contains this element dictates what class or interface(s) must be extended.

<flashjournal-scheme> (Optional): Specifies a scheme that uses journaling to store data to flash memory.

<ramjournal-scheme> (Optional): Specifies a scheme that uses journaling to store data to RAM memory.

<remote-cache-scheme> (Optional): Defines a cache scheme that enables caches to be accessed from outside a Coherence cluster by using Coherence*Extend.
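For example, a near scheme sketch that pairs a local front-tier with a distributed back-tier (the scheme names are illustrative):

<near-scheme>
  <scheme-name>example-near</scheme-name>
  <front-scheme>
    <local-scheme>
      <high-units>1000</high-units>
    </local-scheme>
  </front-scheme>
  <back-scheme>
    <distributed-scheme>
      <scheme-ref>example-distributed</scheme-ref>
    </distributed-scheme>
  </back-scheme>
</near-scheme>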
backing-map-scheme
Used in: distributed-scheme, optimistic-scheme, replicated-scheme
Description
Specifies what type of cache is used within the cache server to store entries. When using an overflow-based backing map, it is important that the corresponding backup-storage be configured for overflow (potentially using the same scheme as the backing-map). See "Partitioned Cache with Overflow" on page 18-6 for an example configuration.
Note:
The partitioned subelement is only used if the parent element is the distributed-scheme element.
Elements
Table B7 describes the subelements of the backing-map-scheme element.
Table B7 backing-map-scheme Subelements

<partitioned> (Optional): Specifies whether the enclosed backing map is a PartitionAwareBackingMap. (This element is respected only within a distributed-scheme.) If the value is true, the scheme that is specified as the backing map is used to configure backing maps for each individual partition of the PartitionAwareBackingMap. If the value is false, the scheme is used for the entire backing map itself. Valid values are true or false. The default value is false. Note: Backing maps that use RAM and Flash journaling are always partitioned.

<local-scheme> (Optional): Local cache schemes define in-memory "local" caches. Local caches are generally nested within other cache schemes, for instance as the front-tier of a near scheme.

<external-scheme> (Optional): External schemes define caches which are not JVM heap based, allowing for greater storage capacity.

<paged-external-scheme> (Optional): As with external-scheme, paged-external-schemes define caches which are not JVM heap based, allowing for greater storage capacity.

<overflow-scheme> (Optional): The overflow-scheme defines a two-tier cache consisting of a fast, size limited front-tier, and slower but much higher capacity back-tier cache.

<class-scheme> (Optional): Class schemes provide a mechanism for instantiating an arbitrary Java object for use by other schemes. The scheme which contains this element dictates what class or interface(s) must be extended.

<flashjournal-scheme> (Optional): Specifies a scheme that uses journaling to store data to flash memory.

<ramjournal-scheme> (Optional): Specifies a scheme that uses journaling to store data to RAM memory.

<read-write-backing-map-scheme> (Optional): The read-write-backing-map-scheme defines a backing map which provides a size limited cache of a persistent store.
backup-storage
Used in: distributed-scheme.
Description
The backup-storage element specifies the type and configuration of backup storage for a partitioned cache.
Elements
Table B8 describes the subelements of the backup-storage element.
Table B8 backup-storage Subelements

<type> (Optional): Specifies the type of the storage used to hold the backup data. Legal values are:
- on-heap (default): The corresponding implementation class is java.util.HashMap.
- off-heap: The corresponding implementation class is com.tangosol.io.nio.BinaryMap using the com.tangosol.io.nio.DirectBufferManager.
- file-mapped: The corresponding implementation class is com.tangosol.io.nio.BinaryMap using the com.tangosol.io.nio.MappedBufferManager.
- custom: The corresponding implementation class is the class specified by the class-name element.
- scheme: The corresponding implementation is specified as a caching-scheme by the scheme-name element.
The preconfigured system property override is tangosol.coherence.distributed.backup.

<initial-size> (Optional): Specifies the initial buffer size in bytes. The initial size is only applicable if the <type> element is set to off-heap or file-mapped. The value of this element must be in the following format: (\d)+[K|k|M|m|G|g|T|t]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied. If the value does not contain a factor, a factor of mega is assumed. Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023 (that is, 2,147,482,624 bytes). The default value is 1MB.

<maximum-size> (Optional): Specifies the maximum buffer size in bytes. The maximum size is only applicable if the <type> element is set to off-heap or file-mapped. The value of this element must be in the following format: (\d)+[K|k|M|m|G|g|T|t]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied. If the value does not contain a factor, a factor of mega is assumed. Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023 (that is, 2,147,482,624 bytes). The default value is 1024MB.

<directory> (Optional): Specifies the path name for the directory that the disk persistence manager (com.tangosol.io.nio.MappedBufferManager) uses as the root to store files. The directory is only applicable if the <type> element is set to file-mapped. If a value is not specified or a non-existent directory is specified, a temporary file in the default location is used. The default value is the default temporary directory designated by the Java run time.

<class-name> (Optional): Specifies a class name for the custom storage implementation. The class name is only applicable if the <type> element is set to custom.

<scheme-name> (Optional): Specifies a scheme name for the ConfigurableCacheFactory. The scheme name is only applicable if the <type> element is set to scheme.
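As an illustrative sketch, a distributed scheme could direct backup data to a memory-mapped file (the scheme name, directory, and sizes are placeholders):

<distributed-scheme>
  <scheme-name>example-file-backup</scheme-name>
  <backup-storage>
    <type>file-mapped</type>
    <initial-size>1MB</initial-size>
    <maximum-size>512MB</maximum-size>
    <directory>/tmp/backup</directory>
  </backup-storage>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>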
bdb-store-manager
Used in: external-scheme, paged-external-scheme, async-store-manager.

Note: Berkeley Database JE Java class libraries are required to use a bdb-store-manager. See the Berkeley Database JE product page for additional information: http://www.oracle.com/technology/documentation/berkeley-db/je/index.html
Description
The BDB store manager is used to define external caches which use Berkeley Database JE on disk embedded databases for storage. See the examples of Berkeley-based store configurations in "Persistent Cache on Disk" on page 18-3 and "In-memory Cache with Disk Based Overflow" on page 18-3.
Implementation
This store manager is implemented by the com.tangosol.io.bdb.BerkeleyDBBinaryStoreManager class, and produces BinaryStore objects implemented by the com.tangosol.io.bdb.BerkeleyDBBinaryStore class.
Elements
Table B9 describes the subelements of the bdb-store-manager element.
Table B9 bdb-store-manager Subelements

<class-name> (Optional): Specifies a custom implementation of the Berkeley Database BinaryStoreManager. Any custom implementation must extend the com.tangosol.io.bdb.BerkeleyDBBinaryStoreManager class and declare the exact same set of public constructors.

<init-params> (Optional): Specifies additional Berkeley DB configuration settings. See the Berkeley DB configuration instructions: http://www.oracle.com/technology/documentation/berkeley-db/je/GettingStartedGuide/administration.html#propertyfile Also used to specify initialization parameters, for use in custom implementations.

<directory> (Optional): Specifies the path name to the root directory where the Berkeley Database JE store manager stores files. If not specified or specified with a non-existent directory, a temporary directory in the default location is used.

<store-name> (Optional): Specifies the name for a database table that the Berkeley Database JE store manager uses to store data in. Specifying this parameter causes the bdb-store-manager to use non-temporary (persistent) database instances. This is intended only for local caches that are backed by a cache loader from a non-temporary store, so that the local cache can be pre-populated from the disk on startup. This setting should not be enabled with replicated or distributed caches. Normally, the <store-name> element should be left unspecified, indicating that temporary storage is to be used. When specifying this property, it is recommended to use the {cache-name} macro. See "Using Parameter Macros" on page 13-15 for more information on the {cache-name} macro.
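A minimal sketch of an external scheme backed by Berkeley Database JE (the scheme name and directory are illustrative):

<external-scheme>
  <scheme-name>example-bdb</scheme-name>
  <bdb-store-manager>
    <directory>/tmp/bdb-store</directory>
  </bdb-store-manager>
</external-scheme>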
bundle-config
Used in: operation-bundling.
Description
The bundle-config element specifies the bundling strategy configuration for one or more bundle-able operations.
Elements
Table B10 describes the subelements of the bundle-config element.
Table B10 bundle-config Subelements

<operation-name> (Optional): Specifies the operation name for which calls performed concurrently on multiple threads are "bundled" into a functionally analogous "bulk" operation that takes a collection of arguments instead of a single one. Valid values depend on the bundle configuration context. In all cases there is a pseudo operation named all, referring to all valid operations. The default value is all.

<preferred-size> (Optional): Specifies the bundle size threshold. When a bundle size reaches this value, the corresponding "bulk" operation is invoked immediately. This value is measured in context-specific units. Valid values are zero (disabled bundling) or positive values. The default value is zero.

<delay-millis> (Optional): Specifies the maximum amount of time in milliseconds that individual execution requests are allowed to be deferred for the purpose of "bundling" them and passing them into a corresponding bulk operation. If the preferred-size threshold is reached before the specified delay, the bundle is processed immediately. Valid values are positive numbers. The default value is 1.

<thread-threshold> (Optional): Specifies the minimum number of threads that must be concurrently executing individual (non-bundled) requests for the bundler to switch from a pass-through to a bundling mode. Valid values are positive numbers. The default value is 4.

<auto-adjust> (Optional): Specifies whether the auto adjustment of the preferred-size value (based on the run-time statistics) is allowed. Valid values are true or false. The default value is false.
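For illustration, a cache store scheme might bundle concurrent store operations as follows (the store class and threshold values are placeholders, and store is assumed to be a valid operation name in this context):

<cachestore-scheme>
  <class-scheme>
    <class-name>com.example.MyCacheStore</class-name>
  </class-scheme>
  <operation-bundling>
    <bundle-config>
      <operation-name>store</operation-name>
      <preferred-size>100</preferred-size>
      <delay-millis>5</delay-millis>
      <thread-threshold>4</thread-threshold>
    </bundle-config>
  </operation-bundling>
</cachestore-scheme>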
cache-config
Root Element
Description
The cache-config element is the root element of the cache configuration descriptor, coherence-cache-config.xml. For more information on this document, see "Cache Configuration Deployment Descriptor" on page B-1. At a high level, a cache configuration consists of cache schemes and cache scheme mappings. Cache schemes describe a type of cache, for instance a database backed, distributed cache. Cache mappings define what scheme to use for a given cache name.
Elements
Table B11 describes the subelements of the cache-config element.
Table B11 cache-config Subelements

<scope-name> (Optional): The use of the <scope-name> element within the <cache-config> element has been deprecated. Use the <scope-name> element within the defaults element instead.

<defaults>: Defines factory wide default settings.

<caching-scheme-mapping>: Specifies the caching-scheme that is used for caches, based on the cache's name.

<caching-schemes>: Defines the available caching-schemes for use in the cluster.
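A minimal cache configuration sketch that ties these pieces together (the scheme and cache names are illustrative, and the namespace declaration follows the usual 12.1.2 convention but should be verified against your installation):

<?xml version="1.0"?>
<cache-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-cache-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-cache-config coherence-cache-config.xsd">
  <caching-scheme-mapping>
    <cache-mapping>
      <cache-name>*</cache-name>
      <scheme-name>example-distributed</scheme-name>
    </cache-mapping>
  </caching-scheme-mapping>
  <caching-schemes>
    <distributed-scheme>
      <scheme-name>example-distributed</scheme-name>
      <backing-map-scheme>
        <local-scheme/>
      </backing-map-scheme>
      <autostart>true</autostart>
    </distributed-scheme>
  </caching-schemes>
</cache-config>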
cache-mapping
Used in: caching-scheme-mapping
Description
Each cache-mapping element specifies the caching-schemes which are to be used for a given cache name or pattern.
Elements
Table B12 describes the subelements of the cache-mapping element.
Table B12 cache-mapping Subelements

<cache-name> (Required): Specifies a cache name or name pattern. The name is unique within a cache factory. The following cache name patterns are supported:
- exact match, for example, MyCache
- prefix match, for example, My* that matches to any cache name starting with My
- any match "*", that matches to any cache name
The patterns get matched in the order of specificity (the more specific definition is selected whenever possible). For example, if both MyCache and My* mappings are specified, the scheme from the MyCache mapping is used to configure a cache named MyCache.

<scheme-name> (Required): Contains the caching scheme name. The name is unique within a configuration file. Caching schemes are configured in the caching-schemes element.

<init-params> (Optional): Allows specifying replaceable cache scheme parameters. During cache scheme parsing, any occurrence of any replaceable parameter in format {param-name} is replaced with the corresponding parameter value. Consider the following cache mapping example:

<cache-mapping>
  <cache-name>My*</cache-name>
  <scheme-name>my-scheme</scheme-name>
  <init-params>
    <init-param>
      <param-name>cache-loader</param-name>
      <param-value>com.acme.MyCacheLoader</param-value>
    </init-param>
    <init-param>
      <param-name>size-limit</param-name>
      <param-value>1000</param-value>
    </init-param>
  </init-params>
</cache-mapping>

For any cache name that matches My*, any occurrence of the literal {cache-loader} in any part of the corresponding cache-scheme element is replaced with the string com.acme.MyCacheLoader, and any occurrence of the literal {size-limit} is replaced with the value 1000.

<interceptors> (Optional): Specifies any number of event interceptors that process events for a specific cache.
cache-service-proxy
Used in: proxy-config
Description
The cache-service-proxy element contains the configuration information for a cache service proxy that is managed by a proxy service.
Elements
Table B13 describes the subelements of the cache-service-proxy element.
Table B13 cache-service-proxy Subelements

<class-name> (Optional): Specifies the fully qualified name of a class that implements the com.tangosol.net.CacheService interface. The class acts as an interceptor between a client and a proxied cache service to implement custom processing as required. For example, the class could be used to perform authorization checks before allowing the use of the proxied cache service.

<init-params> (Optional): Contains initialization parameters for the CacheService implementation.

<enabled> (Optional): Specifies whether the cache service proxy is enabled. If disabled, clients are not able to access any proxied caches. Legal values are true or false. The default value is true.

<lock-enabled> (Optional): Specifies whether lock requests from remote clients are permitted on a proxied cache. Legal values are true or false. The default value is false.

<read-only> (Optional): Specifies whether requests from remote clients that update a cache are prohibited on a proxied cache. Legal values are true or false. The default value is false.
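For example, a sketch of a read-only cache service proxy (shown without the rest of the enclosing proxy-scheme configuration):

<proxy-config>
  <cache-service-proxy>
    <lock-enabled>false</lock-enabled>
    <read-only>true</read-only>
  </cache-service-proxy>
</proxy-config>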
cachestore-scheme
Used in: local-scheme, read-write-backing-map-scheme
Description
Cache store schemes define a mechanism for connecting a cache to a back-end data store. The cache store scheme may use any class implementing either the com.tangosol.net.cache.CacheStore or the com.tangosol.net.cache.CacheLoader interface; the former offers read-write capabilities, whereas the latter is read-only. Custom implementations of these interfaces may be produced to connect Coherence to various data stores. See "Cache of a Database" on page 18-4 for an example of using a cachestore-scheme.
Elements
Table B14 describes the subelements of the cachestore-scheme element.
Table B14 cachestore-scheme Subelements

<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.

<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13.

<class-scheme> (Optional): Specifies the implementation of the cache store. The specified class must implement one of the following interfaces:
- com.tangosol.net.cache.CacheStore: for read-write support
- com.tangosol.net.cache.CacheLoader: for read-only support
- com.tangosol.net.cache.BinaryEntryStore: similar to CacheStore, but operates on BinaryEntry objects

<remote-cache-scheme> (Optional): Configures the cachestore-scheme to use Coherence*Extend as its cache store implementation.

<operation-bundling> (Optional): Specifies the configuration information for a bundling strategy.
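A sketch of a cache store wired into a read-write backing map (the store class and its constructor parameter are hypothetical):

<read-write-backing-map-scheme>
  <internal-cache-scheme>
    <local-scheme/>
  </internal-cache-scheme>
  <cachestore-scheme>
    <class-scheme>
      <class-name>com.example.DatabaseCacheStore</class-name>
      <init-params>
        <init-param>
          <param-type>java.lang.String</param-type>
          <param-value>{cache-name}</param-value>
        </init-param>
      </init-params>
    </class-scheme>
  </cachestore-scheme>
</read-write-backing-map-scheme>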
caching-scheme-mapping
Used in: cache-config
Description
Defines mappings between cache names, or name patterns, and caching-schemes. For instance, you may define that caches whose names start with accounts- use a distributed (distributed-scheme) caching scheme, while caches starting with the name rates- use a replicated-scheme caching scheme.
Elements
Table B15 describes the subelement you can define within the caching-scheme-mapping element.
Table B15 caching-scheme-mapping Subelement

<cache-mapping> (Required): Contains a single binding between a cache name and the caching scheme this cache uses.
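For example, a sketch that maps the name patterns mentioned above to schemes (the scheme names are illustrative):

<caching-scheme-mapping>
  <cache-mapping>
    <cache-name>accounts-*</cache-name>
    <scheme-name>example-distributed</scheme-name>
  </cache-mapping>
  <cache-mapping>
    <cache-name>rates-*</cache-name>
    <scheme-name>example-replicated</scheme-name>
  </cache-mapping>
</caching-scheme-mapping>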
caching-schemes
Used in: cache-config
Description
The caching-schemes element defines a series of cache scheme elements. Each cache scheme defines a type of cache, for instance a database backed partitioned cache, or a local cache with an LRU eviction policy. Scheme types are bound to actual caches using mappings (see caching-scheme-mapping).
Elements
Table B16 describes the different types of schemes you can define within the caching-schemes element.
Table B16 caching-schemes Subelements

<distributed-scheme> (Optional): Defines a cache scheme where storage of cache entries is partitioned across the cluster nodes.

<optimistic-scheme> (Optional): Defines a replicated cache scheme which uses optimistic rather than pessimistic locking.

<replicated-scheme> (Optional): Defines a cache scheme where each cache entry is stored on all cluster nodes.

<transactional-scheme> (Optional): Defines a cache scheme where storage of cache entries is partitioned across the cluster nodes with transactional guarantees.

<local-scheme> (Optional): Defines a cache scheme which provides on-heap cache storage.

<external-scheme> (Optional): Defines a cache scheme which provides off-heap cache storage, for instance on disk.

<paged-external-scheme> (Optional): Defines a cache scheme which provides off-heap cache storage that is size-limited by using time based paging.

<overflow-scheme> (Optional): Defines a two tier cache scheme where entries evicted from a size-limited front-tier overflow and are stored in a much larger back-tier cache.

<class-scheme> (Optional): Defines a cache scheme using a custom cache implementation. Any custom implementation must implement the java.util.Map interface, and include a zero-parameter public constructor. Additionally, if the contents of the Map can be modified by anything other than the CacheService itself (for example, if the Map automatically expires its entries periodically or size-limits its contents), then the returned object must implement the com.tangosol.util.ObservableMap interface.

<flashjournal-scheme> (Optional): Specifies a scheme that stores data to flash memory.

<ramjournal-scheme> (Optional): Specifies a scheme that stores data to RAM memory.

<near-scheme> (Optional): Defines a two tier cache scheme which consists of a fast local front-tier cache of a much larger back-tier cache.

<invocation-scheme> (Optional): Defines an invocation service which can be used for performing custom operations in parallel across cluster nodes.

<read-write-backing-map-scheme> (Optional): Defines a backing map scheme which provides a cache of a persistent store.

<remote-cache-scheme> (Optional): Defines a cache scheme that enables caches to be accessed from outside a Coherence cluster by using Coherence*Extend.

<remote-invocation-scheme> (Optional): Defines an invocation scheme that enables invocations from outside a Coherence cluster by using Coherence*Extend.

<proxy-scheme> (Optional): Defines a proxy service scheme that enables remote connections to a cluster using Coherence*Extend.
class-scheme
Used in: caching-schemes, local-scheme, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, front-scheme, overflow-scheme, read-write-backing-map-scheme, cachestore-scheme, listener, eviction-policy, unit-calculator.
Description
Class schemes provide a mechanism for instantiating an arbitrary Java object for use by other schemes. The scheme which contains this element dictates what class or interface(s) must be extended. See "Cache of a Database" on page 18-4 for an example of using a class-scheme. The class-scheme may be configured to either instantiate objects directly by using their class-name, or indirectly by using a class-factory-name and method-name. The class-scheme must be configured with either a class-name or class-factory-name and method-name.
Elements
Table B17 describes the subelements of the class-scheme element.
Table B17 class-scheme Subelements

<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.

<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.

<class-name> (Optional): Contains a fully specified Java class name to instantiate. This class must extend an appropriate implementation class as dictated by the containing scheme and must declare the exact same set of public constructors as the superclass. This element cannot be used with the <class-factory-name> element.

<class-factory-name> (Optional): Specifies a fully specified name of a Java class that is used as a factory for object instantiation. This element cannot be used with the <class-name> element and is used with the <method-name> element.

<method-name> (Optional): Specifies the name of a static factory method on the factory class which performs object instantiation.

<init-params> (Optional): Specifies initialization parameters which are accessible by implementations that include a public constructor with a matching signature.
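For illustration, a class scheme sketch that uses a static factory method (the factory class, method name, and parameter are hypothetical):

<class-scheme>
  <scheme-name>example-class</scheme-name>
  <class-factory-name>com.example.MapFactory</class-factory-name>
  <method-name>createMap</method-name>
  <init-params>
    <init-param>
      <param-type>java.lang.String</param-type>
      <param-value>{cache-name}</param-value>
    </init-param>
  </init-params>
</class-scheme>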
custom-store-manager
Used in: external-scheme, paged-external-scheme, async-store-manager.
Description
Used to create and configure custom implementations of a store manager for use in external caches.
Elements
Table B18 describes the subelements of the custom-store-manager element.
Table B18 custom-store-manager Subelements

<class-name> (Required): Specifies the implementation of the store manager. The specified class must implement the com.tangosol.io.BinaryStoreManager interface.

<init-params> (Optional): Specifies initialization parameters, for use in custom store manager implementations.
defaults
Used in: cache-config
Description
The defaults element defines factory wide default settings. This feature enables global configuration of serializers and socket providers used by all services which have not explicitly defined these settings.
Elements
Table B19 describes the subelements of the defaults element.
Table B19 defaults Subelements

<scope-name> (Optional): Specifies the scope name for this configuration. The scope name is added (as a prefix) to all services generated by a cache factory. The name is used to isolate services when using multiple cache factories, thus avoiding unintended joining of services with similar names from different configurations.

<serializer> (Optional): Specifies either the class configuration information for a com.tangosol.io.Serializer implementation, or a reference to a serializer class configuration that is defined within the <serializers> element in the operational configuration file. Two pre-defined serializers are available, java (default) and pof, and they are referred to using their defined id attribute name. For example: <serializer>pof</serializer>

<socket-provider> (Optional): Specifies either the configuration for a socket provider, or a reference to a socket provider configuration that is defined within the <socket-providers> element of the operational deployment descriptor. The following socket providers are available: system (default), tcp, ssl, and sdp. Refer to the socket providers using their defined id attribute name. For example: <socket-provider>ssl</socket-provider> This setting only specifies the socket provider for Coherence*Extend services. The TCMP socket provider is specified within the <unicast-listener> element in the operational configuration.
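For example, a defaults sketch that applies POF serialization to all services in the file and the ssl socket provider to its Coherence*Extend services (the scope name is illustrative, and the element ordering is assumed):

<defaults>
  <scope-name>example-scope</scope-name>
  <serializer>pof</serializer>
  <socket-provider>ssl</socket-provider>
</defaults>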
distributed-scheme
Used in: caching-schemes, near-scheme, overflow-scheme
Description
The distributed-scheme defines caches where the storage for entries is partitioned across cluster nodes. See "Distributed Cache" on page 12-1 for a more detailed description of partitioned caches. See "Partitioned Cache" on page 18-6 for examples of various distributed-scheme configurations.
Cache Clients
The partitioned cache service supports the concept of cluster nodes which do not contribute to the overall storage of the cluster. Nodes which are not storage enabled (see <local-storage> subelement) are considered "cache clients".
Cache Partitions
The cache entries are evenly segmented into several logical partitions (see <partition-count> subelement), and each storage enabled (see <local-storage> subelement) cluster node running the specified partitioned service (see <service-name> subelement) is responsible for maintaining a fair-share of these partitions.
Key Association
By default, the specific set of entries assigned to each partition is transparent to the application. In some cases it may be advantageous to keep certain related entries within the same cluster node. A key-associator (see <key-associator> subelement) may be used to indicate related entries; the partitioned cache service ensures that associated entries reside on the same partition, and thus on the same cluster node. Alternatively, key association may be specified from within the application code by using keys which implement the com.tangosol.net.cache.KeyAssociation interface.
Failover
For the purposes of failover, a configured number of backups (see <backup-count> subelement) of the cache may be maintained in backup-storage (see <backup-storage> subelement) across the cluster nodes. Each backup is also divided into partitions, and when possible a backup partition does not reside on the same computer as the primary partition. If a cluster node abruptly leaves the cluster, responsibility for its partitions
is automatically reassigned to the existing backups, and new backups of those partitions are created (on remote nodes) to maintain the configured backup count.
Partition Redistribution
When a node joins or leaves the cluster, a background redistribution of partitions occurs to ensure that all cluster nodes manage a fair-share of the total number of partitions. The amount of bandwidth consumed by the background transfer of partitions is governed by the transfer-threshold (see <transfer-threshold> subelement).
Elements
Table B20 describes the subelements of the distributed-scheme element.
Table B20 distributed-scheme Subelements

<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.

<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.

<service-name> (Optional): Specifies a name for the distributed cache service instance that manages the cache that is created from this distributed scheme. The distributed cache service definition is defined within the <services> element in the tangosol-coherence.xml file. See "DistributedCache Service Parameters" on page A-62 for more information. Different distributed schemes can use different partitioned cache service instances to maintain separate caches. The default name if no name is specified is DistributedCache.

<service-priority> (Optional): Specifies the priority for the service thread. Legal values are from 1 to 10 where 10 is the highest priority. The default value is the service-priority value specified in the tangosol-coherence.xml descriptor. See the service-priority parameter in "DistributedCache Service Parameters" on page A-62 for more information.

<event-dispatcher-priority> (Optional): Specifies the priority for the event dispatcher thread for each service. Legal values are from 1 to 10 where 10 is the highest priority. The default value is the event-dispatcher-priority value specified in the tangosol-coherence.xml descriptor. See the event-dispatcher-priority parameter in "DistributedCache Service Parameters" on page A-62 for more information.

<serializer> (Optional): Specifies either the class configuration information for a com.tangosol.io.Serializer implementation used to serialize and deserialize user types, or a reference to a serializer class configuration that is defined in the operational configuration file (see "serializer" on page A-59).

<reliable-transport> (Optional): Specifies the name of the transport used by this service instance for reliable point-to-point communication. Valid values are: datagram, tmb (TCP Message Bus), sdmb (SDP Message Bus), and imb (InfiniBand Message Bus). The imb and sdmb transports are only available for Oracle Exalogic systems. The imb transport is enabled by default on Oracle Exalogic systems for all distributed cache services. See the reliable-transport parameter in "DistributedCache Service Parameters" on page A-62 for more information.
<compressor> (Optional): Specifies whether or not backup updates should be compressed in delta form or sent whole. A delta update represents the parts of a backup entry that must be changed in order to synchronize it with the primary version of the entry. Deltas are created and applied using a compressor. The default value is the compressor value specified in the tangosol-coherence.xml descriptor. See the compressor parameter in "DistributedCache Service Parameters" on page A-62 for more information. Valid values are:
- none: Disables delta backup; no compressor is used. The whole backup binary entry is replaced when the primary entry changes.
- standard: Automatically selects a delta compressor based on the serializer being used by the partitioned service.
- <instance>: The configuration for a class that implements the com.tangosol.io.DeltaCompressor interface.
<thread-count> (Optional): Specifies the number of daemon threads used by the partitioned cache service. Legal values are positive integers, 0, or -1. The value 0 indicates that all relevant tasks are performed on the service thread. The value -1 indicates that tasks are performed on the caller's thread where possible. The default value is the thread-count value specified in the tangosol-coherence.xml descriptor. See the thread-count parameter in "DistributedCache Service Parameters" on page A-62 for more information.

<worker-priority> (Optional): Specifies the priority for the worker threads. Legal values are from 1 to 10 where 10 is the highest priority. The default value is the worker-priority value specified in the tangosol-coherence.xml descriptor. See the worker-priority parameter in "DistributedCache Service Parameters" on page A-62 for more information.

<lease-granularity> (Optional): Specifies the lease ownership granularity. Legal values are thread and member. A value of thread means that locks are held by a thread that obtained them and can only be released by that thread. A value of member means that locks are held by a cluster node and any thread running on the cluster node that obtained the lock can release it. The default value is the lease-granularity value specified in the tangosol-coherence.xml descriptor. See the lease-granularity parameter in "DistributedCache Service Parameters" on page A-62 for more information.

<local-storage> (Optional): Specifies whether a cluster node contributes storage to the cluster, that is, maintains partitions. When disabled the node is considered a cache client. Legal values are true or false. The default value is the local-storage value specified in the tangosol-coherence.xml descriptor. See the local-storage parameter in "DistributedCache Service Parameters" on page A-62 for more information.
<partition-count> (Optional): Specifies the number of distributed cache partitions. Each storage-enabled cluster member that is running the distributed cache service manages a balanced number of partitions. Valid values are positive integers between 1 and 32767 and should be a prime number. A list of primes can be found at http://primes.utm.edu/lists/. The default value is the value specified in the tangosol-coherence.xml descriptor. See the partition-count parameter in "DistributedCache Service Parameters" on page A-62.

<transfer-threshold> (Optional): Specifies the threshold for the primary buckets distribution in kilobytes. When a new node joins the partitioned cache service or when a member of the service leaves, the remaining nodes perform a task of bucket ownership re-distribution. During this process, the existing data gets re-balanced along with the ownership information. This parameter indicates a preferred message size for data transfer communications. Setting this value lower makes the distribution process take longer, but reduces network bandwidth utilization during this activity. Legal values are integers greater than zero. The default value is the transfer-threshold value specified in the tangosol-coherence.xml descriptor. See the transfer-threshold parameter in "DistributedCache Service Parameters" on page A-62 for more information.

<backup-count> (Optional): Specifies the number of members of the partitioned cache service that hold the backup data for each unit of storage in the cache. A value of 0 means that for abnormal termination, some portion of the data in the cache is lost. The default value is the backup-count value specified in the tangosol-coherence.xml descriptor. See "DistributedCache Service Parameters" on page A-62 for more information.

<backup-count-after-writebehind> (Optional): Specifies the number of members of the partitioned cache service that hold the backup data for each unit of storage in the cache that does not require write-behind, that is, data that is not vulnerable to being lost even if the entire cluster were shut down. Specifically, if a unit of storage is marked as requiring write-behind, then it is backed up on the number of members specified by the <backup-count> subelement. If the unit of storage is not marked as requiring write-behind, then it is backed up by the number of members specified by the <backup-count-after-writebehind> element. This value should be set to 0 or this setting should not be specified at all. The rationale is that since this data is being backed up to another data store, no in-memory backup is required, other than the data temporarily queued on the write-behind queue to be written. The value of 0 means that when write-behind has occurred, the backup copies of that data are discarded. However, until write-behind occurs, the data is backed up in accordance with the <backup-count> setting. The recommended value is 0, or this element should be omitted.

<backup-storage> (Optional): Specifies the type and configuration for the partitioned cache backup storage.

<key-associator> (Optional): Specifies a class that is responsible for providing associations between keys and allowing associated keys to reside on the same partition. This implementation must have a zero-parameter public constructor.
<key-partitioning> (Optional): Specifies a class that implements the com.tangosol.net.partition.KeyPartitioningStrategy interface, which is responsible for assigning keys to partitions. This implementation must have a zero-parameter public constructor. If unspecified, the default key partitioning algorithm is used, which ensures that keys are evenly segmented across partitions.

<partition-assignment-strategy> (Optional): Specifies the strategy that is used by a partitioned service to manage partition distribution. The default value is the partition-assignment-strategy value specified in the tangosol-coherence.xml descriptor. See the partition-assignment-strategy parameter in "DistributedCache Service Parameters" on page A-62 for more information. Legal values are:
- legacy (deprecated): The legacy assignment strategy indicates that partition distribution is managed individually on each cluster member.
- simple: The simple assignment strategy attempts to balance partition distribution while ensuring machine-safety and is more deterministic and efficient than the legacy strategy.
- mirror:<service-name>: The mirror assignment strategy attempts to co-locate the service's partitions with the partitions of the specified service. This strategy is used to increase the likelihood that key-associated, cross-service cache access remains local to a member.
- custom: A class that implements the com.tangosol.net.partition.PartitionAssignmentStrategy interface. Enter a custom strategy using the <instance> element.

<partition-listener> (Optional): Specifies a class that implements the com.tangosol.net.partition.PartitionListener interface.

<task-hung-threshold> (Optional): Specifies the amount of time in milliseconds that a task can execute before it is considered "hung". Note: a posted task that has not yet started is never considered as hung. This attribute is applied only if the thread pool is used (the thread-count value is positive). Legal values are positive integers or zero (indicating no default timeout). The default value is the task-hung-threshold value specified in the tangosol-coherence.xml descriptor. See the task-hung-threshold parameter in "DistributedCache Service Parameters" on page A-62 for more information.

<task-timeout> (Optional): Specifies the timeout value in milliseconds for requests executing on the service worker threads. This attribute applies only if the thread pool is used (the thread-count value is positive) and only applies to entry processor implementations that implement the PriorityTask interface. If zero is specified, the default service-guardian <timeout-milliseconds> value is used. Legal values are nonnegative integers. The default value is the value specified in the tangosol-coherence.xml descriptor. See the task-timeout parameter in "DistributedCache Service Parameters" on page A-62.
<request-timeout> (Optional): Specifies the maximum amount of time a client waits for a response before abandoning the original request. The request time is measured on the client side as the time elapsed from the moment a request is sent for execution to the corresponding server node(s) and includes the following:
- the time it takes to deliver the request to an executing node (server)
- the interval between the time the task is received and placed into a service queue until the execution starts
- the task execution time
- the time it takes to deliver a result back to the client
Legal values are positive integers or zero (indicating no default timeout). The default value is the value specified in the tangosol-coherence.xml descriptor. See the request-timeout parameter in "DistributedCache Service Parameters" on page A-62 for more information.

<guardian-timeout> (Optional): Specifies the guardian timeout value to use for guarding the service and any dependent threads. If the element is not specified for a given service, the default guardian timeout (as specified by the <timeout-milliseconds> operational configuration element) is used. See <service-guardian>. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of milliseconds is assumed.

<service-failure-policy> (Optional): Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. Legal values are:
- exit-cluster (default): attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to stop the cluster services.
- exit-process: attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to exit the JVM and terminate abruptly.
- logging: causes any detected problems to be logged, but no corrective action to be taken.
- a custom class: an <instance> subelement is used to provide the class configuration information for a com.tangosol.net.ServiceFailurePolicy implementation.
<member-listener> (Optional): Specifies the configuration information for a class that implements the com.tangosol.net.MemberListener interface. The implementation must have a public default constructor. See the subelements for "instance" on page B-50 for the elements used to define the class. The MemberListener implementation receives cache service lifecycle events. The <member-listener> element is used as an alternative to programmatically adding a MapListener on a service.

<operation-bundling> (Optional): Specifies the configuration information for a bundling strategy.

<backing-map-scheme> (Optional): Specifies what type of cache is used within the cache server to store the entries. Note that when using an off-heap backing map it is important that the corresponding <backup-storage> be configured for off-heap (potentially using the same scheme as the backing-map). Here off-heap refers to any storage where some or all entries are stored outside of the JVM's garbage collected heap space. Examples include <overflow-scheme> and <external-scheme>. See "Partitioned Cache with Overflow" on page 18-6 for an example configuration.

<partitioned-quorum-policy-scheme> (Optional): Specifies quorum policy settings for the partitioned cache service.

<listener> (Optional): Specifies an implementation of a MapListener which is notified of events occurring on the cache.

<autostart> (Optional): The autostart element is intended to be used by cache servers (that is, com.tangosol.net.DefaultCacheServer). It specifies whether the cache services associated with this cache scheme should be automatically started at a cluster node. Legal values are true or false. The default value is false.

<interceptors> (Optional): Specifies any number of event interceptors that process events for all caches of a specific distributed service.

<async-backup> (Optional): Specifies whether the partitioned (distributed) cache service backs up data asynchronously while concurrently responding to the client. Asynchronous backup is often used to increase client performance. However, applications that require strict data integrity must be designed and tested to ensure that data is not at risk. Legal values are true or false. The default value is false.
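Putting the common subelements together, a basic distributed scheme sketch might look like the following (the names and values are illustrative):

<distributed-scheme>
  <scheme-name>example-distributed</scheme-name>
  <service-name>DistributedCache</service-name>
  <thread-count>4</thread-count>
  <backup-count>1</backup-count>
  <backing-map-scheme>
    <local-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>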
external-scheme
Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, overflow-scheme, read-write-backing-map-scheme
Description
External schemes define caches which are not JVM heap based, allowing for greater storage capacity. See "Local Caches (accessible from a single JVM)" on page 18-1 for examples of various external cache configurations.
Implementation
This scheme is implemented by the com.tangosol.net.cache.SerializationCache class if the <high-units> subelement is specified and not zero; otherwise, the com.tangosol.net.cache.SerializationMap class is used.
External schemes store and retrieve entries through a pluggable store manager. The available store managers are:
- async-store-manager: a wrapper providing asynchronous write capabilities for other store manager implementations
- custom-store-manager: allows definition of custom implementations of store managers
- bdb-store-manager: uses Berkeley Database JE to implement an on disk cache
- lh-file-manager: uses a Coherence LH on disk database cache
- nio-file-manager: uses NIO to implement memory-mapped file based cache
- nio-memory-manager: uses NIO to implement an off JVM heap, in-memory cache
Eviction against disk-based caches can be expensive; consider using a paged-external-scheme for such cases.
Entry Expiration
External schemes support automatic expiration of entries based on the age of the value, as configured by the <expiry-delay> subelement.
Elements
Table B21 describes the subelements of the external-scheme element.
Table B21 external-scheme Subelements

<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.

<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.

<class-name> (Optional): Specifies a custom implementation of the external cache. Any custom implementation must extend one of the following classes and declare the exact same set of public constructors as the superclass:
- com.tangosol.net.cache.SerializationCache: for size limited caches
- com.tangosol.net.cache.SerializationMap: for unlimited size caches
- com.tangosol.net.cache.SimpleSerializationMap: for unlimited size caches

<init-params> (Optional): Specifies initialization parameters, for use in custom external cache implementations.

<async-store-manager> (Optional): Configures the external cache to use an asynchronous storage manager wrapper for any other storage manager. See "Pluggable Storage Manager" on page B-39.

<bdb-store-manager> (Optional): Configures the external cache to use Berkeley Database JE on disk databases for cache storage.

<custom-store-manager> (Optional): Configures the external cache to use a custom storage manager implementation.

<lh-file-manager> (Optional): Configures the external cache to use a Coherence LH on disk database for cache storage. The LH store manager is deprecated. Use the Berkeley DB store manager (<bdb-store-manager>) for similar functionality.

<nio-file-manager> (Optional): Configures the external cache to use a memory-mapped file for cache storage.

<nio-memory-manager> (Optional): Configures the external cache to use an off JVM heap, memory region for cache storage.
<high-units> (Optional): Used to limit the size of the cache. Contains the maximum number of units that can be placed in the cache before pruning occurs. An entry is the unit of measurement. When this limit is exceeded, the cache begins the pruning process, evicting the least recently used entries until the number of units is brought below this limit. The scheme's class-name element may be used to provide custom extensions to SerializationCache, which implement alternative eviction policies. Legal values are positive integers or zero. Zero implies no limit. The default value is zero.

<unit-calculator> (Optional): Specifies the type of unit calculator to use. A unit calculator is used to determine the cost (in "units") of a given object. This element is used only if the high-units element is set to a positive number. Legal values are:
- FIXED: A unit calculator that assigns an equal weight of 1 to all cached objects.
- BINARY (default): A unit calculator that assigns an object a weight equal to the number of bytes of memory that are required to cache the object. This calculator is used for Partitioned Caches that cache data in a binary serialized form. See com.tangosol.net.cache.BinaryMemoryCalculator for additional details.
- <class-scheme>: A custom unit calculator, specified as a class-scheme. The class specified within this scheme must implement the com.tangosol.net.cache.ConfigurableCacheMap.UnitCalculator interface.
<unit-factor> (Optional): Specifies the factor by which the low-units and high-units properties are adjusted. Using a BINARY unit calculator, for example, the factor of 1048576 could be used to count megabytes instead of bytes. Note: This element was introduced only to avoid changing the type of the low-units and high-units properties from 32-bit values to 64-bit values and is only used if the high-units element is set to a positive number. Valid values are positive integers. The default value is 1.

<expiry-delay> (Optional): Specifies the amount of time since the last update that entries are kept by the cache before being expired. Entries that have expired are not accessible and are evicted the next time a client accesses the cache. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of seconds is assumed. A value of zero implies no expiry. The default value is 0. Note: The expiry delay parameter (cExpiryMillis) is defined as an integer and is expressed in milliseconds. Therefore, the maximum amount of time can never exceed Integer.MAX_VALUE (2147483647) milliseconds or approximately 24 days.

<listener> (Optional): Specifies an implementation of a com.tangosol.util.MapListener which is notified of events occurring on the cache.
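For illustration, an external scheme sketch that stores entries off the JVM heap with NIO and expires them after an hour (the scheme name, sizes, and limits are placeholders):

<external-scheme>
  <scheme-name>example-external</scheme-name>
  <nio-memory-manager>
    <initial-size>1MB</initial-size>
    <maximum-size>100MB</maximum-size>
  </nio-memory-manager>
  <high-units>10000</high-units>
  <expiry-delay>1h</expiry-delay>
</external-scheme>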
flashjournal-scheme
Used in: back-scheme, backing-map-scheme, caching-schemes, internal-cache-scheme
Description
The flashjournal-scheme element contains the configuration information for a scheme that stores data to external block-based file stores (flash). A flash journal resource manager controls flash journal behavior. See "flashjournal-manager" on page A-20 for additional details on configuring flash journal behavior. This scheme uses the com.tangosol.net.cache.SimpleSerializationMap class as the backing map implementation and the com.tangosol.io.journal.JournalBinaryStore to store and retrieve binary key value pairs to a journal.
Elements
Table B22 describes the subelements of the flashjournal-scheme element.
Table B22 flashjournal-scheme Subelements

<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.

<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.

<class-name> (Optional): Specifies a custom implementation of the simple serialization map cache. Any custom implementation must extend the com.tangosol.net.cache.SimpleSerializationMap class and declare the exact same set of public constructors as the superclass.

<init-params> (Optional): Specifies the initialization parameters for a custom serialization map cache.

<listener> (Optional): Specifies an implementation of a com.tangosol.util.MapListener which is notified of events occurring on the cache.
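For example, a sketch that uses a flash journal as the backing map of a distributed scheme (the scheme name is illustrative):

<distributed-scheme>
  <scheme-name>example-flash</scheme-name>
  <backing-map-scheme>
    <flashjournal-scheme/>
  </backing-map-scheme>
  <autostart>true</autostart>
</distributed-scheme>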
front-scheme
Used in: near-scheme, overflow-scheme
Description
The front-scheme element specifies the front-tier cache of a composite cache.
Elements
Table B23 describes the subelements of the front-scheme element.
Table B23 front-scheme Subelements

<local-scheme> (Optional): Local cache schemes define in-memory "local" caches. Local caches are generally nested within other cache schemes, for instance as the front-tier of a near scheme.

<class-scheme> (Optional): Class schemes provide a mechanism for instantiating an arbitrary Java object for use by other schemes. The scheme which contains this element dictates what class or interface(s) must be extended.
http-acceptor
Used in: acceptor-config
Description
The http-acceptor element specifies an acceptor for connections from remote REST clients over HTTP.
Elements
Table B24 describes the subelements of the http-acceptor element.
Table B24 http-acceptor Subelements

<class-name> (Optional): Specifies an HTTP server class that implements the com.tangosol.coherence.rest.server.HttpServer interface. The HTTP server class handles inbound HTTP requests. Coherence REST provides two implementations out of the box: com.tangosol.coherence.rest.server.DefaultHttpServer (backed by Oracle's lightweight HTTP server) and com.tangosol.coherence.rest.server.GrizzlyHttpServer (backed by Grizzly). The default value if no value is specified is com.tangosol.coherence.rest.server.DefaultHttpServer.
<init-params>: Contains class initialization parameters for the HTTP server class.
<socket-provider>: Specifies the configuration for a socket and channel factory.
<local-address>: Specifies the local address (IP, or DNS name, and port) on which the HTTP server socket is bound.
<resource-config>: Specifies a Jersey resource configuration class that is used by the HTTP acceptor to load resource and provider classes.
<auth-method>: Specifies the authentication mechanism for the HTTP server. A client must have authenticated using the configured mechanism as a prerequisite to gaining access to any resources exposed by the server. Legal values are:
basic: requires the client to be authenticated using HTTP basic authentication.
cert: requires the client to be authenticated using client-side SSL certificate-based authentication. The certificate must be passed to the server to authenticate. An SSL-based socket provider must be configured using the <socket-provider> element.
cert+basic: requires the client to be authenticated using both a client-side SSL certificate and HTTP basic authentication.
none (default): does not require the client to be authenticated.
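As an illustrative sketch only, an acceptor-config fragment for a REST proxy might resemble the following; the address and port are hypothetical values.

<acceptor-config>
   <http-acceptor>
      <local-address>
         <address>192.168.0.2</address>
         <port>8080</port>
      </local-address>
      <auth-method>basic</auth-method>
   </http-acceptor>
</acceptor-config>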
identity-manager
Used in: ssl.
Description
The <identity-manager> element contains the configuration information for initializing a javax.net.ssl.KeyManager instance. The identity manager is responsible for managing the key material which is used to authenticate the local connection to its peer. If no key material is available, the connection cannot present authentication credentials.
Elements
Table B25 describes the elements you can define within the identity-manager element.
Table B25 identity-manager Subelements

<algorithm> (Optional): Specifies the algorithm used by the identity manager. The default value is SunX509.
<provider> (Optional): Specifies the configuration for a security provider instance.
<key-store> (Optional): Specifies the configuration for a key store implementation.
<password> (Required): Specifies the private key password.
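For illustration, a minimal SSL socket provider with an identity manager might look like the following sketch; the key store file name and passwords are hypothetical placeholders.

<socket-provider>
   <ssl>
      <identity-manager>
         <key-store>
            <url>file:server.jks</url>
            <password>storepassword</password>
         </key-store>
         <password>keypassword</password>
      </identity-manager>
   </ssl>
</socket-provider>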
initiator-config
Used in: remote-cache-scheme, remote-invocation-scheme.
Description
The initiator-config element specifies the configuration information for a TCP/IP connection initiator. A connection initiator allows a Coherence*Extend client to connect to a cluster (by using a connection acceptor) and use the clustered services offered by the cluster without having to first join the cluster.
Elements
Table B26 describes the subelements of the initiator-config element.
Table B26 initiator-config Subelements

<tcp-initiator> (Optional): Specifies the configuration information for a connection initiator that connects to the cluster over TCP/IP.
<outgoing-message-handler> (Optional): Specifies the configuration information used by the connection initiator to detect dropped client-to-cluster connections.
<use-filters> (Optional): Contains the list of filter names to be used by this connection initiator. For example, specifying <use-filters> as follows activates gzip compression for all network messages, which can help substantially with WAN and low-bandwidth networks.
<use-filters>
   <filter-name>gzip</filter-name>
</use-filters>
<serializer> (Optional): Specifies either the class configuration information for a com.tangosol.io.Serializer implementation used to serialize and deserialize user types, or it references a serializer class configuration that is defined in the operational configuration file (see "serializer" on page A-59).
<connect-timeout> (Optional): Specifies the maximum amount of time to wait while establishing a connection with a connection acceptor. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of milliseconds is assumed. The default value is an infinite timeout.
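As an illustrative sketch, an initiator-config within a remote cache scheme might resemble the following; the scheme name, service name, address, and port are hypothetical values.

<remote-cache-scheme>
   <scheme-name>example-remote</scheme-name>
   <service-name>ExtendTcpCacheService</service-name>
   <initiator-config>
      <tcp-initiator>
         <remote-addresses>
            <socket-address>
               <address>198.51.100.5</address>
               <port>9099</port>
            </socket-address>
         </remote-addresses>
      </tcp-initiator>
      <connect-timeout>10s</connect-timeout>
      <use-filters>
         <filter-name>gzip</filter-name>
      </use-filters>
   </initiator-config>
</remote-cache-scheme>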
init-param
Used in: init-params.
Description
Defines an individual initialization parameter.
Elements
Table B27 describes the subelements of the init-param element.
Table B27 init-param Subelements

<param-name> (Optional): Specifies the name of the initialization parameter. For example:
<init-param>
   <param-name>sTableName</param-name>
   <param-value>EmployeeTable</param-value>
</init-param>
The <param-name> element cannot be specified if the <param-type> element is specified.
<param-type> (Optional): Specifies the Java type of the initialization parameter. The following standard types are supported: java.lang.String (string), java.lang.Boolean (boolean), java.lang.Integer (int), java.lang.Long (long), java.lang.Double (double), java.math.BigDecimal, java.io.File, java.sql.Date, java.sql.Time, and java.sql.Timestamp. For example:
<init-param>
   <param-type>java.lang.String</param-type>
   <param-value>EmployeeTable</param-value>
</init-param>
The <param-type> element cannot be specified if the <param-name> element is specified.
<param-value> (Required): Specifies the value of the initialization parameter. The value is in the format specific to the Java type of the parameter.
<description> (Optional): Specifies a description for the initialization parameter.
init-params
Used in: class-scheme, cache-mapping.
Description
Defines a series of initialization parameters as name-value pairs. See "Partitioned Cache of a Database" on page 18-7 for an example of using init-params.
Elements
Table B28 describes the subelements of the init-params element.
Table B28 init-params Subelements

<init-param> (Optional): Defines an individual initialization parameter.
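For illustration, the following sketch shows init-params used to pass a constructor argument to a custom class within a class-scheme; the class name and parameter value are hypothetical.

<class-scheme>
   <class-name>com.example.MyCacheStore</class-name>
   <init-params>
      <init-param>
         <param-type>java.lang.String</param-type>
         <param-value>EmployeeTable</param-value>
      </init-param>
   </init-params>
</class-scheme>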
instance
Used in: interceptor, serializer, service-failure-policy, load-balancer, and partition-assignment-strategy
Description
The <instance> element contains the configuration of an implementation class or class factory that is used to plug in custom functionality.
Elements
Table B29 describes the subelements of the instance element.
Table B29 instance Subelements

<class-name> (Optional): Specifies the fully qualified name of an implementation class. This element cannot be used with the <class-factory-name> element.
<class-factory-name> (Optional): Specifies the fully qualified name of a factory class for creating implementation class instances. This element cannot be used with the <class-name> element and is used with the <method-name> element.
<method-name> (Optional): Specifies the name of a static factory method on the factory class which performs object instantiation.
<init-params> (Optional): Contains class initialization parameters for the implementation class.
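As a sketch of the factory style of configuration, an instance element might look like the following; the factory class, method, and parameter value are hypothetical.

<instance>
   <class-factory-name>com.example.PolicyFactory</class-factory-name>
   <method-name>createPolicy</method-name>
   <init-params>
      <init-param>
         <param-type>java.lang.String</param-type>
         <param-value>strict</param-value>
      </init-param>
   </init-params>
</instance>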
interceptor
Used in: interceptors
Description
The interceptor element defines the configuration associated with an event interceptor that is responsible for processing live events. Event interceptors extend the com.tangosol.net.events.AbstractConfigurableEventInterceptor class. The base class provides the ability to restrict an event interceptor to a specific cache or service. The base class also provides support for the @Interceptor annotation, which allows an implementation to register for a subset of events based on event types and to configure an event interceptor identifier and the ordering of event interceptors. Specifying an interceptor's identifier and ordering within the interceptor element overrides the settings in the base class.
Elements
Table B30 describes the subelements of the interceptor element.
Table B30 interceptor Subelements

<name> (Optional): Specifies a unique identifier for the interceptor.
<order> (Optional): Specifies whether the interceptor is the first interceptor in a chain of interceptors. The legal values are LOW and HIGH. A value of HIGH indicates that the interceptor is first in the chain of interceptors. A value of LOW indicates no order preference. The default value is LOW.
<instance> (Required): Specifies the interceptor class to instantiate. The interceptor class must extend the AbstractConfigurableEventInterceptor class.
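For illustration, an interceptor definition within an interceptors element might look like the following sketch; the interceptor name and implementation class are hypothetical.

<interceptors>
   <interceptor>
      <name>example-interceptor</name>
      <order>HIGH</order>
      <instance>
         <class-name>com.example.AuditInterceptor</class-name>
      </instance>
   </interceptor>
</interceptors>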
interceptors
Used in: cache-mapping and distributed-scheme
Description
The interceptors element contains any number of event interceptor definitions.
Elements
Table B31 describes the subelements of the interceptors element.
Table B31 interceptors Subelements

<interceptor> (Optional): Specifies an event interceptor implementation.
invocation-scheme
Used in: caching-schemes.
Description
Defines an Invocation Service. The invocation service may be used to perform custom operations in parallel on any number of cluster nodes. See the com.tangosol.net.InvocationService API for additional details.
Elements
Table B32 describes the subelements of the invocation-scheme element.
Table B32 invocation-scheme Subelements

<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.
<service-name> (Optional): Specifies the name of the service which manages invocations from this scheme.
<service-priority> (Optional): Specifies the priority for the service thread. Legal values are from 1 to 10 where 10 is the highest priority. The default value is the service-priority value specified in the tangosol-coherence.xml descriptor. See the service-priority parameter in "Invocation Service Parameters" on page A-71.
<event-dispatcher-priority> (Optional): Specifies the priority for the event dispatcher thread for each service. Legal values are from 1 to 10 where 10 is the highest priority. The default value is the event-dispatcher-priority value specified in the tangosol-coherence.xml descriptor. See the event-dispatcher-priority parameter in "Invocation Service Parameters" on page A-71.
<serializer> (Optional): Specifies either the class configuration information for a com.tangosol.io.Serializer implementation used to serialize and deserialize user types, or it references a serializer class configuration that is defined in the operational configuration file (see "serializer" on page A-59).
<reliable-transport> (Optional): Specifies the name of the transport used by this service instance for reliable point-to-point communication. Valid values are: datagram, tmb (TCP Message Bus), sdmb (SDP Message Bus), and imb (InfiniBand Message Bus). The imb and sdmb transports are only available for Oracle Exalogic systems. The default value is the reliable-transport value specified in the tangosol-coherence.xml descriptor. See the reliable-transport parameter in "Invocation Service Parameters" on page A-71 for more information.
<thread-count> (Optional): Specifies the number of daemon threads used by the invocation service. Legal values are positive integers, 0, or -1. The value 0 indicates that all relevant tasks are performed on the service thread. The value -1 indicates that tasks are performed on the caller's thread where possible. The default value is the thread-count value specified in the tangosol-coherence.xml descriptor. See "Invocation Service Parameters" on page A-71 for more information.
<worker-priority> (Optional): Specifies the priority for the worker threads. Legal values are from 1 to 10 where 10 is the highest priority. The default value is the worker-priority value specified in the tangosol-coherence.xml descriptor. See the worker-priority parameter in "Invocation Service Parameters" on page A-71.
<task-hung-threshold> (Optional): Specifies the amount of time in milliseconds that a task can execute before it is considered "hung". Note: a posted task that has not yet started is never considered as hung. This attribute is applied only if the thread pool is used (the thread-count value is positive). Legal values are positive integers or zero (indicating no default timeout). The default value is the task-hung-threshold value specified in the tangosol-coherence.xml descriptor. See the task-hung-threshold parameter in "Invocation Service Parameters" on page A-71.
<task-timeout> (Optional): Specifies the default timeout value in milliseconds for tasks that can be timed-out (for example, implement the com.tangosol.net.PriorityTask interface), but do not explicitly specify the task execution timeout value. The task execution time is measured on the server side and does not include the time spent waiting in a service backlog queue before being started. This attribute is applied only if the thread pool is used (the thread-count value is positive). If zero is specified, the default service-guardian <timeout-milliseconds> value is used. Legal values are nonnegative integers. The default value is the task-timeout value specified in the tangosol-coherence.xml descriptor. See the task-timeout parameter in "Invocation Service Parameters" on page A-71.
<request-timeout> (Optional): Specifies the default timeout value in milliseconds for requests that can time out (for example, implement the com.tangosol.net.PriorityTask interface), but do not explicitly specify the request timeout value. The request time is measured on the client side as the time elapsed from the moment a request is sent for execution to the corresponding server node(s) and includes the following: (1) the time it takes to deliver the request to an executing node (server); (2) the interval between the time the task is received and placed into a service queue until the execution starts; (3) the task execution time; (4) the time it takes to deliver a result back to the client. Legal values are positive integers or zero (indicating no default timeout). The default value is the request-timeout value specified in the tangosol-coherence.xml descriptor. See the request-timeout parameter in "Invocation Service Parameters" on page A-71.
<guardian-timeout> (Optional): Specifies the guardian timeout value to use for guarding the service and any dependent threads. If the element is not specified for a given service, the default guardian timeout (as specified by the <timeout-milliseconds> operational configuration element) is used. See <service-guardian>. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of milliseconds is assumed.
<service-failure-policy> (Optional): Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. Legal values are:
exit-cluster (default): attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to stop the cluster services.
exit-process: attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to exit the JVM and terminate abruptly.
logging: causes any detected problems to be logged, but no corrective action to be taken.
a custom class: an <instance> subelement is used to provide the class configuration information for a com.tangosol.net.ServiceFailurePolicy implementation.
<member-listener> (Optional): Specifies the configuration information for a class that implements the com.tangosol.net.MemberListener interface. The implementation must have a public default constructor. The MemberListener implementation receives service lifecycle events. The <member-listener> element is used as an alternative to programmatically adding a MemberListener on a service.
<autostart> (Optional): The autostart element is intended to be used by cache servers (that is, com.tangosol.net.DefaultCacheServer). It specifies whether this service should be automatically started at a cluster node. Legal values are true or false. The default value is false.
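For illustration, a minimal invocation-scheme declaration might look like the following sketch; the scheme and service names are hypothetical, and only a few of the optional subelements are shown.

<invocation-scheme>
   <scheme-name>example-invocation</scheme-name>
   <service-name>InvocationService</service-name>
   <thread-count>5</thread-count>
   <autostart>true</autostart>
</invocation-scheme>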
invocation-service-proxy
Used in: proxy-config
Description
The invocation-service-proxy element contains the configuration information for an invocation service proxy managed by a proxy service.
Elements
Table B33 describes the subelements of the invocation-service-proxy element.
Table B33 invocation-service-proxy Subelements

<class-name> (Optional): Specifies the fully qualified name of a class that implements the com.tangosol.net.InvocationService interface. The class acts as an interceptor between a client and a proxied invocation service to implement custom processing as required. For example, the class could be used to perform authorization checks before allowing the use of the proxied invocation service.
<init-params> (Optional): Contains initialization parameters for the InvocationService implementation.
<enabled> (Optional): Specifies whether the invocation service proxy is enabled. If disabled, clients are not able to execute Invocable objects on the proxy service JVM. Legal values are true or false. The default value is true.
key-associator
Used in: distributed-scheme
Description
Specifies an implementation of a com.tangosol.net.partition.KeyAssociator which is used to determine associations between keys, allowing related keys to reside on the same partition. Alternatively the cache's keys may manage the association by implementing the com.tangosol.net.cache.KeyAssociation interface.
Elements
Table B34 describes the subelements of the key-associator element.
Table B34 key-associator Subelements

<class-name> (Required): The name of a class that implements the com.tangosol.net.partition.KeyAssociator interface. This implementation must have a zero-parameter public constructor. The default value is the value of the key-associator/class-name parameter specified in the tangosol-coherence.xml descriptor. See "DistributedCache Service Parameters" on page A-62 for more information.
<class-factory-name> (Optional): Specifies the fully qualified name of a factory class for creating implementation class instances. This element cannot be used with the <class-name> element and is used with the <method-name> element.
<method-name> (Optional): Specifies the name of a static factory method on the factory class which performs object instantiation.
<init-params> (Optional): Contains class initialization parameters for the implementation class.
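As an illustrative sketch, a key-associator configured on a distributed scheme might look like the following; the scheme name and implementation class are hypothetical.

<distributed-scheme>
   <scheme-name>example-distributed</scheme-name>
   <key-associator>
      <class-name>com.example.OrderLineKeyAssociator</class-name>
   </key-associator>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</distributed-scheme>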
key-partitioning
Used in: distributed-scheme
Description
Specifies an implementation of a com.tangosol.net.partition.KeyPartitioningStrategy which is used to determine the partition in which a key resides.
Elements
Table B35 describes the subelements of the key-partitioning element.
Table B35 key-partitioning Subelements

<class-name> (Required): The name of a class that implements the com.tangosol.net.partition.KeyPartitioningStrategy interface. This implementation must have a zero-parameter public constructor. The default value is the value of the key-partitioning/class-name parameter specified in the tangosol-coherence.xml descriptor. See "DistributedCache Service Parameters" on page A-62 for more information.
<class-factory-name> (Optional): Specifies the fully qualified name of a factory class for creating implementation class instances. This element cannot be used with the <class-name> element and is used with the <method-name> element.
<method-name> (Optional): Specifies the name of a static factory method on the factory class which performs object instantiation.
<init-params> (Optional): Contains class initialization parameters for the implementation class.
key-store
Used in: identity-manager, trust-manager.
Description
The key-store element specifies the configuration for a key store implementation to use when implementing SSL. The key store implementation is an instance of the java.security.KeyStore class.
Elements
Table B36 describes the elements you can define within the key-store element.
Table B36 key-store Subelements

<url> (Required): Specifies the Uniform Resource Locator (URL) to a key store.
<password> (Optional): Specifies the password for the key store.
<type> (Optional): Specifies the type of a java.security.KeyStore instance. The default value is JKS.
lh-file-manager
Used in: external-scheme, paged-external-scheme, async-store-manager.
Description
Note:
The LH store manager is deprecated. Use the Berkeley DB store manager for similar functionality. For details on configuring the Berkeley DB store manager, see "bdb-store-manager" on page B-19.
Configures a store manager which uses a Coherence LH on-disk embedded database for storage. See "Persistent Cache on Disk" on page 18-3 and "In-memory Cache with Disk Based Overflow" on page 18-3 for examples of LH-based store configurations.
Implementation
Implemented by the com.tangosol.io.lh.LHBinaryStoreManager class. The BinaryStore objects created by this class are instances of com.tangosol.io.lh.LHBinaryStore.
Elements
Table B37 describes the subelements of the lh-file-manager element.
Table B37 lh-file-manager Subelements

<class-name> (Optional): Specifies a custom implementation of the LH BinaryStoreManager. Any custom implementation must extend the com.tangosol.io.lh.LHBinaryStoreManager class and declare the exact same set of public constructors.
<init-params> (Optional): Specifies initialization parameters, for use in custom LH file manager implementations.
<directory> (Optional): Specifies the path name for the root directory that the LH file manager uses to store files in. If not specified or specifies a non-existent directory, a temporary file in the default location is used.
<file-name> (Optional): Specifies the name for a non-temporary (persistent) file that the LH file manager uses to store data in. Specifying this parameter causes the lh-file-manager to use non-temporary database instances. Use this parameter only for local caches that are backed by a cache loader from a non-temporary file: this allows the local cache to be pre-populated from the disk file on startup. When specified, it is recommended that it use the {cache-name} macro described in "Using Parameter Macros" on page 13-15. Normally this parameter should be left unspecified, indicating that temporary storage is to be used.
listener
Used in: local-scheme, external-scheme, paged-external-scheme, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, overflow-scheme, read-write-backing-map-scheme
Description
The listener element specifies an implementation of a com.tangosol.util.MapListener which is notified of events occurring on a cache.
Elements
Table B38 describes the subelements of the listener element.
Table B38 listener Subelements

<class-scheme> (Required): Specifies the full class name of the listener implementation to use. The specified class must implement the com.tangosol.util.MapListener interface.
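For illustration, a listener attached to a local cache scheme might look like the following sketch; the scheme name and listener class are hypothetical.

<local-scheme>
   <scheme-name>example-listener-cache</scheme-name>
   <listener>
      <class-scheme>
         <class-name>com.example.LoggingMapListener</class-name>
      </class-scheme>
   </listener>
</local-scheme>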
local-address
Used in: http-acceptor, tcp-acceptor, tcp-initiator
Description
The local-address element specifies the local address (IP, or DNS name, and port) to which a socket is bound. A local address for the <tcp-acceptor> element specifies a TCP/IP server socket that is used by the proxy service to accept connections from Coherence*Extend clients. A local address for the <http-acceptor> element specifies an HTTP server socket that is used to accept connections from REST clients. The following example binds a server socket to 192.168.0.2:9099.
<local-address> <address>192.168.0.2</address> <port>9099</port> </local-address>
A local address for the <tcp-initiator> element specifies a TCP/IP client socket that is used by remote services to connect to a proxy service on the cluster. The following example binds the client socket to 192.168.0.1 on port 9099:
<local-address> <address>192.168.0.1</address> <port>9099</port> </local-address>
Note:
A socket address for the TCP/IP acceptor can also be defined using an address-provider element. For details, see "address-provider" on page B-8.
Elements
Table B39 describes the subelements of the local-address element.
Table B39 local-address Subelements

<address> (Optional): Specifies the address (IP or DNS name) on which a socket listens and publishes. If the address is a bind address, then the address may also be entered using CIDR notation as a subnet and mask (for example, 192.168.1.0/24), which allows runtime resolution against the available local IP addresses.
<port> (Optional): Specifies the port on which a TCP/IP socket listens and publishes. The legal values are from 1 to 65535.
local-scheme
Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, front-scheme, overflow-scheme, read-write-backing-map-scheme, backing-map-scheme
Description
Local cache schemes define in-memory "local" caches. Local caches are generally nested within other cache schemes, for instance as the front-tier of a near-scheme. See "Near Cache" on page 18-8 for examples of various local cache configurations.
Implementation
Local caches are implemented by the com.tangosol.net.cache.LocalCache class.
Entry Expiration
The local cache supports automatic expiration of entries based on the age of the value (see the <expiry-delay> subelement).
Elements
Table B40 describes the subelements of the local-scheme element.
Table B40 local-scheme Subelements

<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.
<class-name> (Optional): Specifies a custom implementation of the local cache. Any custom implementation must extend the com.tangosol.net.cache.LocalCache class and declare the exact same set of public constructors.
<service-name> (Optional): Specifies the name of the service which manages caches created from this scheme. Services are configured from within the <services> element in the tangosol-coherence.xml descriptor. See Appendix A, "Operational Configuration Elements" for more information.
<init-params> (Optional): Specifies initialization parameters, for use in custom local cache implementations.
<eviction-policy> (Optional): Specifies the type of eviction policy to use. Legal values are:
LRU: the Least Recently Used eviction policy chooses which entries to evict based on how recently they were last accessed, evicting those that were not accessed for the longest period first.
LFU: the Least Frequently Used eviction policy chooses which entries to evict based on how often they are being accessed, evicting those that are accessed least frequently first.
HYBRID (default): the hybrid eviction policy chooses which entries to evict based on the combination (weighted score) of how often and recently they were accessed, evicting those that are accessed least frequently and were not accessed for the longest period first.
<class-scheme>: a custom eviction policy, specified as a class-scheme. The class specified within this scheme must implement the ConfigurableCacheMap.EvictionPolicy interface or extend the AbstractEvictionPolicy class.
<high-units> (Optional): Used to limit the size of the cache. Contains the maximum number of units that can be placed in the cache before pruning occurs. An entry is the unit of measurement, unless it is overridden by an alternate unit-calculator (see the <unit-calculator> subelement). When this limit is exceeded, the cache begins the pruning process, evicting entries according to the eviction policy. Legal values are positive integers or zero. Zero implies no limit. The default value is 0.
<low-units> (Optional): Contains the lowest number of units that a cache is pruned down to when pruning takes place. A pruning does not necessarily result in a cache containing this number of units; however, a pruning never results in a cache containing less than this number of units. An entry is the unit of measurement, unless it is overridden by an alternate unit-calculator (see the <unit-calculator> subelement). When pruning occurs, entries continue to be evicted according to the eviction policy until this size is reached. Legal values are positive integers or zero. Zero implies the default. The default value is 80% of the high-units setting (that is, for a high-units setting of 1000 the default low-units is 800).
<unit-calculator> (Optional): Specifies the type of unit calculator to use. A unit calculator is used to determine the cost (in "units") of a given object. This element is used only if the high-units element is set to a positive number. Legal values are:
FIXED: a unit calculator that assigns an equal weight of 1 to all cached objects.
BINARY (default): a unit calculator that assigns an object a weight equal to the number of bytes of memory that are required to cache the object. This calculator is used for partitioned caches that cache data in a binary serialized form. See com.tangosol.net.cache.BinaryMemoryCalculator for additional details.
<class-scheme>: a custom unit calculator, specified as a class-scheme. The class specified within this scheme must implement the com.tangosol.net.cache.ConfigurableCacheMap.UnitCalculator interface.
<unit-factor> (Optional): Specifies the factor by which the low-units and high-units properties are adjusted. Using a BINARY unit calculator, for example, the factor of 1048576 could be used to count megabytes instead of bytes. Note: This element was introduced only to avoid changing the type of the low-units and high-units properties from 32-bit values to 64-bit values and is only used if the high-units element is set to a positive number. Valid values are positive integers. The default value is 1.
<expiry-delay> (Optional): Specifies the amount of time since the last update that entries are kept by the cache before being expired. Entries that have expired are not accessible and are evicted the next time a client accesses the cache. Any attempt to read an expired entry results in a reloading of the entry from the configured cache store (see <cachestore-scheme>). The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of seconds is assumed. A value of zero implies no expiry. The default value is 0. Note: The expiry delay parameter (cExpiryMillis) is defined as an integer and is expressed in milliseconds. Therefore, the maximum amount of time can never exceed Integer.MAX_VALUE (2147483647) milliseconds or approximately 24 days.
<cachestore-scheme> (Optional): Specifies the store which is being cached. If unspecified, the cached data only resides in memory and only reflects operations performed on the cache itself.
<pre-load> (Optional): Specifies whether a cache pre-loads data from its CacheLoader (or CacheStore) object. Valid values are true and false. The default value is false.
<listener> (Optional): Specifies an implementation of a com.tangosol.util.MapListener which is notified of events occurring on the cache.
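For illustration, a size-limited local scheme with expiry might look like the following sketch; the scheme name and limits are hypothetical values chosen only to show the subelements together.

<local-scheme>
   <scheme-name>example-local</scheme-name>
   <eviction-policy>LRU</eviction-policy>
   <high-units>1000</high-units>
   <unit-calculator>FIXED</unit-calculator>
   <expiry-delay>1h</expiry-delay>
</local-scheme>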
name-service-addresses
Used in: tcp-initiator
Description
The name-service-addresses element contains the address (IP, or DNS name, and port) of one or more name service TCP/IP acceptors. A TCP/IP initiator uses this information to establish a connection with a remote cluster. The TCP/IP initiator attempts to connect to the addresses in a random order until either the list is exhausted or a connection is established. See Developing Remote Clients for Oracle Coherence for additional details and example configurations.
Elements
Table B41 describes the subelements of the name-service-addresses element.
Table B41 name-service-addresses Subelements

<socket-address> (Optional): Specifies the address (IP, or DNS name, and port) on which a name service TCP/IP acceptor is listening. Multiple <socket-address> elements can be defined.
<address-provider> (Optional): Specifies the address (IP, or DNS name, and port) on which a name service TCP/IP acceptor is listening, or the configuration for a com.tangosol.net.AddressProvider implementation that supplies the address. The address-provider element also supports socket address references. A <name-service-addresses> element can include either a <socket-address> element or an <address-provider> element but not both.
near-scheme
Used in: caching-schemes.
Description
The near-scheme defines a two-tier cache consisting of a front-tier which caches a subset of a back-tier cache. The front-tier is generally a fast, size-limited cache, while the back-tier is a slower but much higher-capacity cache. A typical deployment might use a local cache for the front-tier and a distributed cache for the back-tier. The result is that a portion of a large partitioned cache is cached locally in-memory, allowing for very fast read access. See "Near Cache" on page 12-7 for a more detailed description of near caches, and "Near Cache" on page 18-8 for an example of near cache configurations.
Implementation
The near scheme is implemented by the com.tangosol.net.cache.NearCache class.
Front-tier Invalidation
The <invalidation-strategy> subelement defines a strategy that is used to keep the front tier of the near cache synchronized with the back tier. Depending on that strategy, a near cache is configured to listen to certain events occurring on the back tier and automatically update (or invalidate) the front portion of the near cache.
Elements
Table B42 describes the subelements of the near-scheme element.
Table B42 near-scheme Subelements

<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.
<class-name> (Optional): Specifies a custom implementation of the near cache. Any custom implementation must extend the com.tangosol.net.cache.NearCache class and declare the exact same set of public constructors.
<init-params> (Optional): Specifies initialization parameters for custom near cache implementations.
<front-scheme> (Required): Specifies the cache to use as the front-tier cache.
<back-scheme> (Required): Specifies the cache to use as the back-tier cache.
<invalidation-strategy> (Optional): Specifies the strategy used to keep the front-tier in sync with the back-tier. See com.tangosol.net.cache.NearCache for more details. Legal values are:
none: instructs the cache not to listen for invalidation events at all. This is the best choice for raw performance and scalability when business requirements permit the use of data which might not be absolutely current. Freshness of data can be guaranteed by use of a sufficiently brief eviction policy. The worst case performance is identical to a standard Distributed cache.
present: instructs the near cache to listen to the back map events related only to the items currently present in the front map. This strategy works best when cluster nodes have sticky data access patterns (for example, HTTP session management with a sticky load balancer).
all: instructs the near cache to listen to all back map events. This strategy is optimal for read-heavy access patterns where there is significant overlap between the front caches on each cluster member.
logical: instructs a near cache to listen to all backing map events that are not synthetic deletes. A synthetic delete event could be emitted as a result of eviction or expiration. With this invalidation strategy, it is possible for the front map to contain cache entries that have been synthetically removed from the backing map. Any subsequent re-insertion of the entries to the backing map causes the corresponding entries in the front map to be invalidated.
auto (default): This strategy is identical to the present strategy.
<listener> (Optional): Specifies an implementation of a com.tangosol.util.MapListener which is notified of events occurring on the cache.
<autostart> (Optional): The autostart element is intended to be used by cache servers (that is, com.tangosol.net.DefaultCacheServer). It specifies whether the cache services associated with this cache scheme should be automatically started at a cluster node. Legal values are true or false. The default value is false.
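As an illustrative sketch, a near scheme that fronts a distributed scheme might look like the following; the scheme names (including the referenced example-distributed scheme, assumed to be defined elsewhere) and size limit are hypothetical.

<near-scheme>
   <scheme-name>example-near</scheme-name>
   <front-scheme>
      <local-scheme>
         <high-units>1000</high-units>
      </local-scheme>
   </front-scheme>
   <back-scheme>
      <distributed-scheme>
         <scheme-ref>example-distributed</scheme-ref>
      </distributed-scheme>
   </back-scheme>
   <invalidation-strategy>present</invalidation-strategy>
   <autostart>true</autostart>
</near-scheme>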
nio-file-manager
Used in: external-scheme, paged-external-scheme, async-store-manager.
Description
Configures an external store which uses a memory-mapped file for storage.
Implementation
This store manager is implemented by the com.tangosol.io.nio.MappedStoreManager class. The BinaryStore objects created by this class are instances of the com.tangosol.io.nio.BinaryMapStore.
Elements
Table B43 describes the subelements of the nio-file-manager element.
Table B43 nio-file-manager Subelements

<class-name> (Optional): Specifies a custom implementation of the local cache. Any custom implementation must extend the com.tangosol.io.nio.MappedStoreManager class and declare the exact same set of public constructors.
<init-params> (Optional): Specifies initialization parameters, for use in custom nio-file-manager implementations.
<initial-size> (Optional): Specifies the initial buffer size in megabytes. The value of this element must be in the following format: (\d)+[K|k|M|m|G|g|T|t]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied. If the value does not contain a factor, a factor of mega is assumed. Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023 (that is, 2,147,482,624 bytes). The default value is 1MB.
<maximum-size> (Optional): Specifies the maximum buffer size in bytes. The value of this element must be in the following format: (\d)+[K|k|M|m|G|g|T|t]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied. If the value does not contain a factor, a factor of mega is assumed. Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023 (that is, 2,147,482,624 bytes). The default value is 1024MB.
<directory> (Optional): Specifies the path name for the root directory that the manager uses to store files in. If not specified or specifies a non-existent directory, a temporary file in the default location is used.
nio-memory-manager
Used in: external-scheme, paged-external-scheme, async-store-manager.
Description
Configures a store manager which uses an off-JVM-heap memory region for storage, which means that it does not affect the Java heap size and the related JVM garbage-collection performance that can be responsible for application pauses. See "NIO In-memory Cache" on page 18-2 for an example of an NIO cache configuration.
Note: JVMs require the use of a command line parameter if the total NIO buffers is greater than 64MB. For example: -XX:MaxDirectMemorySize=512M
Implementation
Implemented by the com.tangosol.io.nio.DirectStoreManager class. The BinaryStore objects created by this class are instances of the com.tangosol.io.nio.BinaryMapStore.
Elements
Table B44 describes the subelements of the nio-memory-manager element.
Table B44 nio-memory-manager Subelements

<class-name> (Optional): Specifies a custom implementation of the local cache. Any custom implementation must extend the com.tangosol.io.nio.DirectStoreManager class and declare the exact same set of public constructors.
<init-params> (Optional): Specifies initialization parameters, for use in custom nio-memory-manager implementations.
<initial-size> (Optional): Specifies the initial buffer size in bytes. The value of this element must be in the following format: (\d)+[K|k|M|m|G|g|T|t]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied. If the value does not contain a factor, a factor of mega is assumed. Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023 (that is, 2,147,482,624 bytes). The default value is 1MB.
<maximum-size> (Optional): Specifies the maximum buffer size in bytes. The value of this element must be in the following format: (\d)+[K|k|M|m|G|g|T|t]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied. If the value does not contain a factor, a factor of mega is assumed. Legal values are positive integers between 1 and Integer.MAX_VALUE - 1023 (that is, 2,147,482,624 bytes). The default value is 1024MB.
operation-bundling
Used in: cachestore-scheme, distributed-scheme, remote-cache-scheme.
Description
The operation-bundling element specifies the configuration information for a particular bundling strategy. Bundling is a process of coalescing multiple individual operations into "bundles". It could be beneficial when:
there is a continuous stream of operations on multiple threads in parallel;
individual operations have relatively high latency (network or database-related); and
there are functionally analogous "bulk" operations that take a collection of arguments instead of a single one without causing the latency to grow linearly (as a function of the collection size).
Note:
As with any bundling algorithm, there is a natural trade-off between the resource utilization and average request latency. Depending on a particular application usage pattern, enabling this feature may either help or hurt the overall application performance. Operation bundling affects cache store operations. If operation bundling is configured, the CacheStore.storeAll() method is always called even if there is only one ripe entry.
Elements
Table B45 describes the subelement for the operation-bundling element.
Table B45 operation-bundling Subelements

<bundle-config> (Required): Describes one or more bundle-able operations.
optimistic-scheme
Used in: caching-schemes, near-scheme, overflow-scheme
Description
The optimistic scheme defines a cache which fully replicates all of its data to all cluster nodes that run the service (see the <service-name> subelement). See "Optimistic Cache" on page 12-7 for a more detailed description of optimistic caches.
Optimistic Locking
Unlike the replicated-scheme and distributed-scheme caches, optimistic caches do not support concurrency control (locking). Individual operations against entries are atomic but there is no guarantee that the value stored in the cache does not change between atomic operations. The lack of concurrency control allows optimistic caches to support very fast write operations.
Elements
Table B46 describes the subelements of the optimistic-scheme element.
Table B46 optimistic-scheme Subelements

<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.
<service-name> (Optional): Specifies the name of the service which manages caches created from this scheme. Services are configured from within the <services> parameter in tangosol-coherence.xml. See Appendix A, "Operational Configuration Elements" for more information.
<service-priority> (Optional): Specifies the priority for the service thread. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 10.
<event-dispatcher-priority> (Optional): Specifies the priority for the event dispatcher thread for each service. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 10.
<serializer> (Optional): Specifies either the class configuration information for a com.tangosol.io.Serializer implementation used to serialize and deserialize user types, or it references a serializer class configuration that is defined in the operational configuration file (see "serializer" on page A-59).
<reliable-transport> (Optional): Specifies the name of the transport used by this service instance for reliable point-to-point communication. Valid values are: datagram, tmb (TCP Message Bus), sdmb (SDP Message Bus), and imb (InfiniBand Message Bus). The imb and sdmb transports are only available for Oracle Exalogic systems. The default value is the reliable-transport value specified in the tangosol-coherence.xml descriptor. See the reliable-transport parameter in "OptimisticCache Service Parameters" on page A-69 for more information.
<request-timeout> (Optional): Specifies the maximum amount of time a client waits for a response before abandoning the original request. The request time is measured on the client side as the time elapsed from the moment a request is sent for execution to the corresponding server node(s) and includes: the time it takes to deliver the request to an executing node (server); the interval between the time the task is received and placed into a service queue until the execution starts; the task execution time; and the time it takes to deliver a result back to the client. Legal values are positive integers or zero (indicating no default timeout). The default value is the value specified in the tangosol-coherence.xml descriptor. See the request-timeout parameter in "ReplicatedCache Service Parameters" on page A-67 for more information.
<guardian-timeout> (Optional): Specifies the guardian timeout value to use for guarding the service and any dependent threads. If the element is not specified for a given service, the default guardian timeout (as specified by the <timeout-milliseconds> operational configuration element) is used. See <service-guardian>. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of milliseconds is assumed.
<service-failure-policy> (Optional): Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. Legal values are:
exit-cluster (default): attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to stop the cluster services.
exit-process: attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to exit the JVM and terminate abruptly.
logging: causes any detected problems to be logged, but no corrective action to be taken.
a custom class: an <instance> subelement is used to provide the class configuration information for a com.tangosol.net.ServiceFailurePolicy implementation.
<member-listener> (Optional): Specifies the configuration information for a class that implements the com.tangosol.net.MemberListener interface. The implementation must have a public default constructor. The MemberListener implementation receives cache service lifecycle events. The <member-listener> element is used as an alternative to programmatically adding a MemberListener on a service.
<backing-map-scheme> (Optional): Specifies what type of cache is used within the cache server to store the entries. To ensure cache coherence, the backing map of an optimistic cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the distributed-scheme, which supports read-through clustered caching.
<listener> (Optional): Specifies an implementation of a com.tangosol.util.MapListener which is notified of events occurring on the cache.
<autostart> (Optional): The autostart element is intended to be used by cache servers (that is, com.tangosol.net.DefaultCacheServer). It specifies whether the cache services associated with this cache scheme should be automatically started at a cluster node. Legal values are true or false. The default value is false.
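For illustration, a minimal optimistic-scheme declaration might look like the following sketch; the scheme and service names are hypothetical.

<optimistic-scheme>
   <scheme-name>example-optimistic</scheme-name>
   <service-name>OptimisticCache</service-name>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</optimistic-scheme>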
outgoing-message-handler
Used in: acceptor-config, initiator-config.
Description
The outgoing-message-handler specifies the configuration information used to detect dropped client-to-cluster connections. For connection initiators and acceptors that use connectionless protocols, this information is necessary to detect and release resources allocated to dropped connections. Connection-oriented initiators and acceptors can also use this information as an additional mechanism to detect dropped connections.
Elements
Table B47 describes the subelements of the outgoing-message-handler element.
Table B47 outgoing-message-handler Subelements

<heartbeat-interval> (Optional): Specifies the interval between ping requests. A ping request is used to ensure the integrity of a connection. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of milliseconds is assumed. A value of zero disables ping requests. The default value is zero.
<heartbeat-timeout> (Optional): Specifies the maximum amount of time to wait for a response to the heartbeat ping request before closing the underlying connection. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of milliseconds is assumed.
<request-timeout> (Optional): Specifies the maximum amount of time to wait for a response message before abandoning the request. The connection is not closed until the <heartbeat-timeout> value is reached. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of milliseconds is assumed. The default value is an infinite timeout (0s).
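As an illustrative sketch, an outgoing-message-handler (nested within an acceptor-config or initiator-config element) might look like the following; the timeout values are hypothetical.

<outgoing-message-handler>
   <heartbeat-interval>5s</heartbeat-interval>
   <heartbeat-timeout>3s</heartbeat-timeout>
   <request-timeout>30s</request-timeout>
</outgoing-message-handler>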
overflow-scheme
Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, read-write-backing-map-scheme
Description
The overflow-scheme defines a two-tier cache consisting of a fast, size-limited front-tier and a slower but much higher-capacity back-tier cache. When the size-limited front fills up, evicted entries are transparently moved to the back. In the event of a cache miss, entries may move from the back to the front. A typical deployment might use a local-scheme for the front-tier and an external-scheme for the back-tier, allowing for fast local caches with capacities larger than the JVM heap allows. In such a deployment, the local-scheme element's high-units and eviction-policy settings control the transfer (eviction) of entries from the front to back caches.
Note:
Relying on overflow for normal cache storage is not recommended. It should only be used to help avoid eviction-related data loss in the case where the storage requirements temporarily exceed the configured capacity. In general, the overflow's on-disk storage should remain empty.
Implementation
Implemented by either the com.tangosol.net.cache.OverflowMap or the com.tangosol.net.cache.SimpleOverflowMap class; see the <expiry-enabled> subelement for details.
Entry Expiration
Overflow supports automatic expiration of entries based on the age of the value, as configured by the <expiry-delay> subelement.
Elements
Table B48 describes the subelements of the overflow-scheme element.
Table B48 overflow-scheme Subelements

<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.
<class-name> (Optional): Specifies a custom implementation of the overflow cache. Any custom implementation must extend either the com.tangosol.net.cache.OverflowMap or com.tangosol.net.cache.SimpleOverflowMap class, and declare the exact same set of public constructors.
<init-params> (Optional): Specifies initialization parameters, for use in custom overflow cache implementations.
<front-scheme> (Required): Specifies the cache to use as the front-tier cache.
<back-scheme> (Required): Specifies the cache-scheme to use in creating the back-tier cache.
<miss-cache-scheme> (Optional): Specifies a cache-scheme for maintaining information on cache misses. For caches which are not expiry-enabled (see the <expiry-enabled> subelement), the miss-cache is used to track keys which resulted in both a front and back tier cache miss. The knowledge that a key is not in either tier allows some operations to perform faster, as they can avoid querying the potentially slow back-tier. A size limited scheme may be used to control how many misses are tracked. If unspecified, no cache-miss data is maintained. Legal values are: local-scheme.
<expiry-enabled> (Optional): Turns on support for automatically-expiring data, as provided by the com.tangosol.net.cache.CacheMap API. When enabled, the overflow-scheme is implemented using com.tangosol.net.cache.OverflowMap, rather than com.tangosol.net.cache.SimpleOverflowMap. Legal values are true or false. The default value is false.
<expiry-delay> (Optional): Specifies the amount of time since the last update that entries are kept by the cache before being expired. Entries that have expired are not accessible and are evicted the next time a client accesses the cache. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of seconds is assumed. A value of zero implies no expiry. The default value is 0. Note: The expiry delay parameter (cExpiryMillis) is defined as an integer and is expressed in milliseconds. Therefore, the maximum amount of time can never exceed Integer.MAX_VALUE (2147483647) milliseconds or approximately 24 days.
<autostart> (Optional): The autostart element is intended to be used by cache servers (that is, com.tangosol.net.DefaultCacheServer). It specifies whether the cache services associated with this cache scheme should be automatically started at a cluster node. Legal values are true or false. The default value is false.
<listener> (Optional): Specifies an implementation of a com.tangosol.util.MapListener which is notified of events occurring on the cache.
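For illustration, an overflow scheme that spills a size-limited local front-tier to an NIO file-backed external back-tier might look like the following sketch; the scheme name, size limit, and directory are hypothetical.

<overflow-scheme>
   <scheme-name>example-overflow</scheme-name>
   <front-scheme>
      <local-scheme>
         <high-units>10000</high-units>
      </local-scheme>
   </front-scheme>
   <back-scheme>
      <external-scheme>
         <nio-file-manager>
            <directory>/tmp</directory>
         </nio-file-manager>
      </external-scheme>
   </back-scheme>
</overflow-scheme>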
paged-external-scheme
Used in: caching-schemes, distributed-scheme, replicated-scheme, optimistic-scheme, near-scheme, overflow-scheme, read-write-backing-map-scheme
Description
As with external-scheme, paged-external-schemes define caches which are not JVM heap based, allowing for greater storage capacity. The paged-external scheme optimizes LRU eviction by using a paging approach. See Chapter 16, "Serialization Paged Cache," for a detailed description of the paged cache functionality.
Implementation
This scheme is implemented by the com.tangosol.net.cache.SerializationPagedCache class.
Paging
Cache entries are maintained over a series of pages, where each page is a separate com.tangosol.io.BinaryStore, obtained from the configured storage manager (see "Pluggable Storage Manager"). When a page is created it is considered to be the current page and all write operations are performed against this page. On a configured interval (see <page-duration> subelement), the current page is closed and a new current page is created. Read operations for a given key are performed against the last page in which the key was stored. When the number of pages exceeds a configured maximum (see <page-limit> subelement), the oldest page is destroyed and those items which were not updated since the page was closed are evicted. For example configuring a cache with a duration of ten minutes per page, and a maximum of six pages, results in entries being cached for at most an hour. Paging improves performance by avoiding individual delete operations against the storage manager as cache entries are removed or evicted. Instead the cache simply releases its references to those entries, and relies on the eventual destruction of an entire page to free the associated storage of all page entries in a single stroke.
The following store managers can provide page storage:
- async-store-manager: a wrapper providing asynchronous write capabilities for other store-manager implementations
- custom-store-manager: allows definition of custom implementations of store-managers
- bdb-store-manager: uses Berkeley Database JE to implement an on-disk cache
- lh-file-manager: uses a Coherence LH on-disk database cache
- nio-file-manager: uses NIO to implement a memory-mapped file based cache
- nio-memory-manager: uses NIO to implement an off JVM heap, in-memory cache
Note: Paged external caches are not a means of long-term storage (persistence) and do not survive beyond the life of the JVM. Clustered persistence should be configured by using a read-write-backing-map-scheme on a distributed-scheme. If a non-clustered persistent cache is what is needed, refer to "Persistence (long-term storage)" on page B-40.
Elements
Table B49 describes the subelements of the paged-external-scheme element.
Table B49 paged-external-scheme Subelements
<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.
<class-name> (Optional): Specifies a custom implementation of the external paged cache. Any custom implementation must extend the com.tangosol.net.cache.SerializationPagedCache class and declare the exact same set of public constructors.
<init-params> (Optional): Specifies initialization parameters, for use in custom external paged cache implementations.
<async-store-manager> (Optional): Configures the paged external cache to use an asynchronous storage manager wrapper for any other storage manager. See "Pluggable Storage Manager" on page B-39 for more information.
<bdb-store-manager> (Optional): Configures the paged external cache to use Berkeley Database JE on-disk databases for cache storage.
<custom-store-manager> (Optional): Configures the paged external cache to use a custom storage manager implementation.
<lh-file-manager> (Optional): Configures the paged external cache to use a Coherence LH on-disk database for cache storage. The LH store manager is deprecated; use the Berkeley DB store manager (<bdb-store-manager>) for similar functionality.
<nio-file-manager> (Optional): Configures the paged external cache to use a memory-mapped file for cache storage.
<nio-memory-manager> (Optional): Configures the paged external cache to use an off JVM heap, in-memory region for cache storage.
Table B49 (Cont.) paged-external-scheme Subelements
<page-limit> (Optional): Specifies the maximum number of pages that the cache manages before older pages are destroyed. Legal values are zero or positive integers between 2 and 3600. The default value is zero.
<page-duration> (Optional): Specifies the length of time, in seconds, that a page in the cache is current. After the duration is exceeded, the page is closed and a new current page is created. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of seconds is assumed. Legal values are zero or values between 5 and 604800 seconds (one week). The default value is zero.
<listener> (Optional): Specifies an implementation of a com.tangosol.util.MapListener which is notified of events occurring on the cache.
partition-listener
Used in: distributed-scheme
Description
Specifies an implementation of the com.tangosol.net.partition.PartitionListener interface, which allows the service to receive partition distribution events.
Elements
Table B50 describes the subelements of the partition-listener element.
Table B50 partition-listener Subelements
<class-name> (Required): The name of a class that implements the PartitionListener interface. This implementation must have a zero-parameter public constructor. The default value is the value specified in the partition-listener/class-name parameter in the tangosol-coherence.xml descriptor. See "DistributedCache Service Parameters" on page A-62 for more information.
<class-factory-name> (Optional): Specifies the fully qualified name of a factory class for creating implementation class instances. This element cannot be used with the <class-name> element and is used with the <method-name> element.
<method-name> (Optional): Specifies the name of a static factory method on the factory class which performs object instantiation.
<init-params> (Optional): Contains class initialization parameters for the implementation class.
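For illustration, a distributed-scheme fragment might register a listener as follows; com.example.MyPartitionListener is a hypothetical implementation class.

<distributed-scheme>
   ...
   <partition-listener>
      <class-name>com.example.MyPartitionListener</class-name>
   </partition-listener>
   ...
</distributed-scheme>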
partitioned-quorum-policy-scheme
Used in: distributed-scheme
Description
The partitioned-quorum-policy-scheme element contains quorum policy settings for the partitioned cache service.
Elements
Table B51 describes the subelements of the partitioned-quorum-policy-scheme element.
Table B51 partitioned-quorum-policy-scheme Subelements
<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.
<distribution-quorum> (Optional): Specifies the minimum number of ownership-enabled members of a partitioned service that must be present to perform partition distribution. The value must be a nonnegative integer.
<restore-quorum> (Optional): Specifies the minimum number of ownership-enabled members of a partitioned service that must be present to restore lost primary partitions from backup. The value must be a nonnegative integer.
<read-quorum> (Optional): Specifies the minimum number of storage members of a cache service that must be present to process "read" requests. A "read" request is any request that does not mutate the state or contents of a cache. The value must be a nonnegative integer.
<write-quorum> (Optional): Specifies the minimum number of storage members of a cache service that must be present to process "write" requests. A "write" request is any request that may mutate the state or contents of a cache. The value must be a nonnegative integer.
<class-name> (Optional): Specifies a class that provides custom quorum policies. This element cannot be used with the default quorum elements or the <class-factory-name> element. The class must implement the com.tangosol.net.ActionPolicy interface. Initialization parameters can be specified using the <init-params> element.
<class-factory-name> (Optional): Specifies a factory class for creating custom action policy instances. This element cannot be used with the default quorum elements or the <class-name> element. This element is used with the <method-name> element. The action policy instances must implement the com.tangosol.net.ActionPolicy interface.
<method-name> (Optional): Specifies the name of a static factory method on the factory class which performs object instantiation.
<init-params> (Optional): Contains class initialization parameters for the implementation class.
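A minimal sketch of a quorum policy within a distributed-scheme follows; the quorum values are illustrative and should be chosen based on the expected cluster size.

<distributed-scheme>
   ...
   <partitioned-quorum-policy-scheme>
      <distribution-quorum>4</distribution-quorum>
      <restore-quorum>3</restore-quorum>
      <read-quorum>3</read-quorum>
      <write-quorum>5</write-quorum>
   </partitioned-quorum-policy-scheme>
   ...
</distributed-scheme>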
provider
Used in: ssl, identity-manager, trust-manager.
Description
The provider element contains the configuration information for a security provider that extends the java.security.Provider class.
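For example, a provider can be named directly; SunJSSE is shown here only as a stand-in for whatever provider is installed in the JVM.

<provider>
   <name>SunJSSE</name>
</provider>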
Elements
Table B52 describes the subelements you can define within the provider element.
Table B52 provider Subelements
<name> (Optional): Specifies the name of a security provider that extends the java.security.Provider class. The class name can be entered using either this element, the <class-name> element, or the <class-factory-name> element.
<class-name> (Optional): Specifies the name of a security provider that extends the java.security.Provider class. This element cannot be used with the <name> element or the <class-factory-name> element.
<class-factory-name> (Optional): Specifies a factory class for creating Provider instances. The instances must implement the java.security.Provider class. This element cannot be used with the <name> element or the <class-name> element. This element can be used with the <method-name> element.
<method-name> (Optional): Specifies the name of a static factory method on the factory class which performs object instantiation.
<init-params> (Optional): Contains class initialization parameters for the provider implementation. This element cannot be used with the <name> element.
proxy-config
Used in: proxy-scheme.
Description
The proxy-config element specifies the configuration information for the clustered service proxies managed by a proxy service. A service proxy is an intermediary between a remote client (connected to the cluster by using a connection acceptor) and a clustered service used by the remote client.
Elements
Table B53 describes the subelements of the proxy-config element.
Table B53 proxy-config Subelements
<cache-service-proxy> (Optional): Specifies the configuration information for a cache service proxy managed by the proxy service.
<invocation-service-proxy> (Optional): Specifies the configuration information for an invocation service proxy managed by the proxy service.
proxy-scheme
Used in: caching-schemes.
Description
The proxy-scheme element contains the configuration information for a clustered service that allows Coherence*Extend clients to connect to the cluster and use clustered services without having to join the cluster.
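A minimal sketch of a proxy scheme follows; the service name, address, and port are illustrative, and the <acceptor-config> and <autostart> subelements are described in Table B54.

<proxy-scheme>
   <service-name>ExtendTcpProxyService</service-name>
   <acceptor-config>
      <tcp-acceptor>
         <local-address>
            <address>192.168.0.2</address>
            <port>9099</port>
         </local-address>
      </tcp-acceptor>
   </acceptor-config>
   <autostart>true</autostart>
</proxy-scheme>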
Elements
Table B54 describes the subelements of the proxy-scheme element.
Table B54 proxy-scheme Subelements
<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.
<service-name> (Optional): Specifies the name of the service.
<service-priority> (Optional): Specifies the priority for the service thread. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 10, the value specified in the service-priority parameter in the tangosol-coherence.xml descriptor. See "Proxy Service Parameters" on page A-74 for more information.
<event-dispatcher-priority> (Optional): Specifies the priority for the event dispatcher thread for each service. Legal values are from 1 to 10 where 10 is the highest priority. The default value is the value specified in the event-dispatcher-priority parameter in the tangosol-coherence.xml descriptor. See "Proxy Service Parameters" on page A-74 for more information.
<reliable-transport> (Optional): Specifies the name of the transport used by this service instance for reliable point-to-point communication. Valid values are: datagram, tmb (TCP Message Bus), sdmb (SDP Message Bus), and imb (InfiniBand Message Bus). The imb and sdmb transports are only available for Oracle Exalogic systems. The default value is the reliable-transport value specified in the tangosol-coherence.xml descriptor. See the reliable-transport parameter in "Proxy Service Parameters" on page A-74 for more information.
<thread-count> (Optional): Specifies the exact number of daemon threads for use by the proxy service. Legal values are positive integers or 0. Note: specifying a value disables dynamic thread pooling. The value 0 indicates that all relevant tasks are performed on the service thread. If no value is specified, dynamic thread pooling is enabled and the number of threads automatically adjusts based on proxy service load. Use the <thread-count-max> and <thread-count-min> elements to tune dynamic thread pool behavior. The default value is the value specified in the thread-count parameter of the tangosol-coherence.xml descriptor. See "Proxy Service Parameters" on page A-74 for more information.
Table B54 (Cont.) proxy-scheme Subelements
<thread-count-max> (Optional): Specifies the maximum number of daemon threads that are allowed in the thread pool. This element is only valid if a <thread-count> value is not specified. Legal values are positive integers or zero. A value of zero indicates that no daemon threads are created and that all client requests are handled by the proxy service thread. The default value is the value specified in the thread-count-max parameter of the tangosol-coherence.xml descriptor. See "Proxy Service Parameters" on page A-74 for more information.
<thread-count-min> (Optional): Specifies the minimum number of daemon threads that are allowed (and initially created) in the thread pool. This element is only valid if a <thread-count> value is not specified. Legal values are positive integers. The default value is the value specified in the thread-count-min parameter of the tangosol-coherence.xml descriptor. See "Proxy Service Parameters" on page A-74 for more information.
<worker-priority> (Optional): Specifies the priority for the worker threads. Legal values are from 1 to 10 where 10 is the highest priority. The default value is the value specified in the worker-priority parameter in the tangosol-coherence.xml descriptor. See "Proxy Service Parameters" on page A-74 for more information.
<task-hung-threshold> (Optional): Specifies the amount of time in milliseconds that a task can execute before it is considered "hung". Note: a posted task that has not yet started is never considered hung. This attribute is applied only if the thread pool is used (the thread-count value is positive). Legal values are positive integers or zero (indicating no default timeout). The default value is the value specified in the tangosol-coherence.xml descriptor. See the task-hung-threshold parameter in "Proxy Service Parameters" on page A-74 for more information.
<task-timeout> (Optional): Specifies the timeout value in milliseconds for requests executing on the service worker threads. This attribute is applied only if the thread pool is used (the thread-count value is positive). If zero is specified, the default service-guardian <timeout-milliseconds> value is used. Legal values are nonnegative integers. The default value is the value specified in the tangosol-coherence.xml descriptor. See the task-timeout parameter in "Proxy Service Parameters" on page A-74.
<request-timeout> (Optional): Specifies the maximum amount of time a client waits for a response before abandoning the original request. The request time is measured on the client side as the time elapsed from the moment a request is sent for execution to the corresponding server node(s) and includes the following:
- the time it takes to deliver the request to an executing node (server)
- the interval between the time the task is received and placed into a service queue until the execution starts
- the task execution time
- the time it takes to deliver a result back to the client
Legal values are positive integers or zero (indicating no default timeout). The default value is the value specified in the tangosol-coherence.xml descriptor. See the request-timeout parameter in "Proxy Service Parameters" on page A-74 for more information.
Table B54 (Cont.) proxy-scheme Subelements
<guardian-timeout> (Optional): Specifies the guardian timeout value to use for guarding the service and any dependent threads. If the element is not specified for a given service, the default guardian timeout (as specified by the <timeout-milliseconds> operational configuration element) is used. See <service-guardian>. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of milliseconds is assumed.
<service-failure-policy> (Optional): Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. Legal values are:
- exit-cluster (default): attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to stop the cluster services.
- exit-process: attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to exit the JVM and terminate abruptly.
- logging: causes any detected problems to be logged, but no corrective action to be taken.
- a custom class: an <instance> subelement is used to provide the class configuration information for a com.tangosol.net.ServiceFailurePolicy implementation.
<member-listener> (Optional): Specifies the configuration information for a class that implements the com.tangosol.net.MemberListener interface. The implementation must have a public default constructor. The MemberListener implementation receives service lifecycle events. The <member-listener> element is used as an alternative to programmatically adding a MapListener on a service.
<acceptor-config> (Required): Contains the configuration of the connection acceptor used by the service to accept connections from Coherence*Extend clients and to allow them to use the services offered by the cluster without having to join the cluster.
<proxy-config> (Optional): Contains the configuration of the clustered service proxies managed by this service.
Table B54 (Cont.) proxy-scheme Subelements
<load-balancer> (Optional): Specifies a pluggable strategy used by the proxy service to distribute client connections across the set of clustered proxy service members. Legal values are:
- proxy (default): this strategy attempts to distribute client connections equally across proxy service members based upon existing connection count, connection limit, incoming and outgoing message backlog, and daemon pool utilization.
- client: this strategy relies upon the client address provider implementation to dictate the distribution of clients across proxy service members. If no client address provider implementation is provided, the extend client tries each proxy service in a random order until a connection is successful.
- a custom class: an <instance> subelement is used to provide the configuration information for a class that implements the com.tangosol.net.proxy.ProxyServiceLoadBalancer interface.
<proxy-quorum-policy-scheme> (Optional): Specifies quorum policy settings for the Proxy service.
<autostart> (Optional): The autostart element is intended to be used by cache servers (that is, com.tangosol.net.DefaultCacheServer). It specifies whether this service should be automatically started at a cluster node. Legal values are true or false. The default value is false.
proxy-quorum-policy-scheme
Used in: proxy-scheme
Description
The proxy-quorum-policy-scheme element contains quorum policy settings for the Proxy service.
Elements
Table B55 describes the subelements of the proxy-quorum-policy-scheme element.
Table B55 proxy-quorum-policy-scheme Subelements
<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.
<connect-quorum> (Optional): Specifies the minimum number of members of a proxy service that must be present to allow client connections. The value must be a nonnegative integer.
<class-name> (Optional): Specifies a class that provides custom quorum policies. This element cannot be used with the <connect-quorum> element or the <class-factory-name> element. The class must implement the com.tangosol.net.ActionPolicy interface. Initialization parameters can be specified using the <init-params> element.
<class-factory-name> (Optional): Specifies a factory class for creating custom action policy instances. This element cannot be used with the <connect-quorum> element or the <class-name> element. This element is used with the <method-name> element. The action policy instances must implement the com.tangosol.net.ActionPolicy interface.
<method-name> (Optional): Specifies the name of a static factory method on the factory class which performs object instantiation.
<init-params> (Optional): Contains class initialization parameters for the implementation class.
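For example, a proxy scheme might require at least three proxy members before accepting client connections; the value shown is illustrative.

<proxy-scheme>
   ...
   <proxy-quorum-policy-scheme>
      <connect-quorum>3</connect-quorum>
   </proxy-quorum-policy-scheme>
   ...
</proxy-scheme>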
ramjournal-scheme
Used in: back-scheme, backing-map-scheme, caching-schemes, internal-cache-scheme
Description
The ramjournal-scheme element contains the configuration information for a scheme that stores data to buffers (journal files) in-memory. A RAM journal resource manager controls RAM journal behavior. See "ramjournal-manager" on page A-56 for additional details on configuring RAM journal behavior. This scheme uses the com.tangosol.net.cache.SimpleSerializationMap class as the backing map implementation and the com.tangosol.io.journal.JournalBinaryStore class to store and retrieve binary key-value pairs in a journal.
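As a minimal sketch, a distributed scheme can place its backing map on the RAM journal with an empty ramjournal-scheme element (all subelements in Table B56 are optional):

<distributed-scheme>
   ...
   <backing-map-scheme>
      <ramjournal-scheme/>
   </backing-map-scheme>
   ...
</distributed-scheme>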
Elements
Table B56 describes the subelements of the ramjournal-scheme element.
Table B56 ramjournal-scheme Subelements
<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.
<class-name> (Optional): Specifies a custom implementation of the simple serialization map cache. Any custom implementation must extend the com.tangosol.net.cache.SimpleSerializationMap class and declare the exact same set of public constructors as the superclass.
<init-params> (Optional): Specifies the initialization parameters for a custom serialization map cache.
<listener> (Optional): Specifies an implementation of a com.tangosol.util.MapListener which is notified of events occurring on the cache.
read-write-backing-map-scheme
Used in: caching-schemes, distributed-scheme.
Description
The read-write-backing-map-scheme defines a backing map which provides a size limited cache of a persistent store. See Chapter 15, "Caching Data Sources" for more details.
Implementation
The read-write-backing-map-scheme is implemented by the com.tangosol.net.cache.ReadWriteBackingMap class.
Refresh-Ahead Caching
When enabled (see the <refresh-ahead-factor> subelement), the cache watches for recently accessed entries which are about to expire and asynchronously reloads them from the cache store. This insulates the application from potentially slow reads against the cache store as items periodically expire.
Write-Behind Caching
When enabled (see <write-delay> subelement), the cache delays writes to the back-end cache store. This allows for the writes to be batched (see <write-batch-factor> subelement) into more efficient update blocks, which occur asynchronously from the client thread.
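A minimal write-behind sketch follows; com.example.MyCacheStore stands in for a hypothetical cache store implementation, and the ten-second delay is illustrative.

<read-write-backing-map-scheme>
   <internal-cache-scheme>
      <local-scheme/>
   </internal-cache-scheme>
   <cachestore-scheme>
      <class-scheme>
         <class-name>com.example.MyCacheStore</class-name>
      </class-scheme>
   </cachestore-scheme>
   <write-delay>10s</write-delay>
</read-write-backing-map-scheme>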
Elements
Table B57 describes the subelements of the read-write-backing-map-scheme element.
Table B57 read-write-backing-map-scheme Subelements
<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.
<class-name> (Optional): Specifies a custom implementation of the read write backing map. Any custom implementation must extend the com.tangosol.net.cache.ReadWriteBackingMap class and declare the exact same set of public constructors.
<init-params> (Optional): Specifies initialization parameters, for use in custom read write backing map implementations.
Table B57 (Cont.) read-write-backing-map-scheme Subelements
<internal-cache-scheme> (Required): Specifies a cache-scheme which is used to cache entries.
<write-max-batch-size> (Optional): Specifies the maximum number of entries to write in a single storeAll operation. Valid values are positive integers or zero. The default value is 128 entries. This value has no effect if write behind is disabled.
<miss-cache-scheme> (Optional): Specifies a cache-scheme for maintaining information on cache misses. The miss-cache is used to track keys which were not found in the cache store. The knowledge that a key is not in the cache store allows some operations to perform faster, as they can avoid querying the potentially slow cache store. A size-limited scheme may be used to control how many misses are cached. If unspecified, no cache-miss data is maintained. Legal values are: local-scheme.
<cachestore-scheme> (Optional): Specifies the store to cache. If unspecified, the cached data only resides within the internal cache (see the <internal-cache-scheme> subelement) and only reflects operations performed on the cache itself.
<read-only> (Optional): Specifies if the cache is read only. If true, the cache loads data from the cachestore for read operations and does not perform any writing to the cachestore when the cache is updated. Legal values are true or false. The default value is false.
Table B57 (Cont.) read-write-backing-map-scheme Subelements
<write-delay> (Optional): Specifies the time interval to defer asynchronous writes to the cachestore for a write-behind queue. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of seconds is assumed. If zero, synchronous writes to the cachestore (without queuing) take place; otherwise, the writes are asynchronous and deferred by the specified time interval after the last update to the value in the cache. The default value is zero. This element cannot be used with the <write-delay-seconds> element.
<write-delay-seconds> (Optional): Specifies the number of seconds to defer asynchronous writes to the cachestore for a write-behind queue. If zero, synchronous writes to the cachestore (without queuing) take place; otherwise, the writes are asynchronous and deferred by the number of seconds after the last update to the value in the cache. This element cannot be used with the <write-delay> element.
Table B57 (Cont.) read-write-backing-map-scheme Subelements
<write-batch-factor> (Optional): The write-batch-factor element is used to calculate the "soft-ripe" time for write-behind queue entries. A queue entry is considered to be "ripe" for a write operation if it has been in the write-behind queue for no less than the write-delay interval. The "soft-ripe" time is the point in time before the actual ripe time after which an entry is included in a batched asynchronous write operation to the CacheStore (along with all other ripe and soft-ripe entries). In other words, a soft-ripe entry is an entry that has been in the write-behind queue for at least the following duration: D' = (1.0 - F) * D, where D = write-delay interval and F = write-batch-factor. Conceptually, the write-behind thread uses the following logic when performing a batched update (a worked example follows this table):
1. The thread waits for a queued entry to become ripe.
2. When an entry becomes ripe, the thread dequeues all ripe and soft-ripe entries in the queue.
3. The thread then writes all ripe and soft-ripe entries either by using store() (if there is only the single ripe entry) or storeAll() (if there are multiple ripe/soft-ripe entries). Note: if operation bundling (<operation-bundling>) is configured, then storeAll() is always called even if there is only a single ripe entry.
4. The thread then repeats (1).
This element is only applicable if asynchronous writes are enabled (that is, the value of the write-delay element is greater than zero) and the CacheStore implements the storeAll() method. The value of the element is expressed as a percentage of the write-delay interval. Legal values are nonnegative doubles less than or equal to 1.0. The default value is zero.
<write-requeue-threshold> (Optional): Specifies the size of the write-behind queue at which additional actions could be taken. If zero, write-behind requeuing is disabled. Otherwise, this value controls the frequency of the corresponding log messages. For example, a value of 100 produces a log message every time the size of the write queue is a multiple of 100. Legal values are positive integers or zero. The default value is zero.
<refresh-ahead-factor> (Optional): The refresh-ahead-factor element is used to calculate the "soft-expiration" time for cache entries. Soft-expiration is the point in time before the actual expiration after which any access request for an entry schedules an asynchronous load request for the entry. This attribute is only applicable if the internal cache is a local-scheme, configured with the <expiry-delay> subelement. The value is expressed as a percentage of the internal LocalCache expiration interval. If zero, refresh-ahead scheduling is disabled. If 1.0, then any get operation immediately triggers an asynchronous reload. Legal values are nonnegative doubles less than or equal to 1.0. The default value is zero.
Table B57 (Cont.) read-write-backing-map-scheme Subelements
<cachestore-timeout> (Optional): Specifies the timeout interval to use for CacheStore read and write operations. If a CacheStore operation times out, the executing thread is interrupted and may ultimately lead to the termination of the cache service. Timeouts of asynchronous CacheStore operations (for example, refresh-ahead, write-behind) do not result in service termination. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of milliseconds is assumed. If 0 is specified, the default service-guardian <timeout-milliseconds> value is used. The default value if none is specified is 0.
<rollback-cachestore-failures> (Optional): Specifies whether exceptions caught during synchronous cachestore operations are rethrown to the calling thread (possibly over the network to a remote member). Legal values are true or false. If the value of this element is false, an exception caught during a synchronous cachestore operation is logged locally and the internal cache is updated. If the value is true, the exception is rethrown to the calling thread and the internal cache is not changed. If the operation was called within a transactional context, this would have the effect of rolling back the current transaction. The default value is true.
<listener> (Optional): Specifies an implementation of a com.tangosol.util.MapListener which is notified of events occurring on the cache.
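As a worked example of the write-batch-factor calculation in Table B57: with a write-delay of 10 seconds and a write-batch-factor of 0.5, the soft-ripe duration is D' = (1.0 - 0.5) * 10s = 5s, so any entry that has been queued for at least 5 seconds is included in the batched write triggered by the first ripe entry.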
remote-addresses
Used in: tcp-initiator
Description
The remote-addresses element contains the address (IP, or DNS name, and port) of one or more TCP/IP acceptors. A TCP/IP initiator uses this information to establish a connection with a proxy service on a remote cluster. TCP/IP acceptors are configured within the proxy-scheme element. The TCP/IP initiator attempts to connect to the addresses in a random order until either the list is exhausted or a connection is established. See Developing Remote Clients for Oracle Coherence for additional details and example configurations.
Note:
The name-service-addresses element can also be used to establish a connection with a proxy service on a remote cluster. For details, see "name-service-addresses" on page B-66.
The following example configuration instructs the initiator to connect to 192.168.0.2:9099 and 192.168.0.3:9099 in a random order:
<remote-addresses>
   <socket-address>
      <address>192.168.0.2</address>
      <port>9099</port>
   </socket-address>
   <socket-address>
      <address>192.168.0.3</address>
      <port>9099</port>
   </socket-address>
</remote-addresses>
Elements
Table B58 describes the subelements of the remote-addresses element.
Table B58 remote-addresses Subelements
<socket-address> (Optional): Specifies the address (IP, or DNS name, and port) on which a TCP/IP acceptor is listening. Multiple <socket-address> elements can be used within a <remote-addresses> element.
<address-provider> (Optional): Specifies the address (IP, or DNS name, and port) on which a TCP/IP acceptor is listening or the configuration for a com.tangosol.net.AddressProvider implementation that supplies the address. The address-provider element also supports socket address references. A <remote-addresses> element can include either a <socket-address> element or an <address-provider> element but not both.
remote-cache-scheme
Used in: cachestore-scheme, caching-schemes, near-scheme.
Description
The remote-cache-scheme element contains the configuration information necessary to use a clustered cache from outside the cluster by using Coherence*Extend.
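A minimal sketch of a remote cache scheme follows; the scheme name, service name, address, and port are illustrative.

<remote-cache-scheme>
   <scheme-name>example-remote</scheme-name>
   <service-name>ExtendTcpCacheService</service-name>
   <initiator-config>
      <tcp-initiator>
         <remote-addresses>
            <socket-address>
               <address>192.168.0.2</address>
               <port>9099</port>
            </socket-address>
         </remote-addresses>
      </tcp-initiator>
   </initiator-config>
</remote-cache-scheme>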
Elements
Table B59 describes the subelements of the remote-cache-scheme element.
Table B59 remote-cache-scheme Subelements
<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.
<service-name> (Optional): Specifies the name of the service which manages caches created from this scheme.
<proxy-service-name> (Optional): Specifies the proxy service name to which this remote service connects. This element is only used if this remote service uses a name service to look up a proxy service. For details on configuring a name service, see "name-service-addresses" on page B-66. The value must match the <service-name> value of the proxy service. This element can be omitted if the <service-name> value of this remote service matches the <service-name> value of the proxy service. For details on defining proxy services, see "proxy-scheme" on page B-88.
<operation-bundling> (Optional): Specifies the configuration information for a bundling strategy.
<initiator-config> (Required): Contains the configuration of the connection initiator used by the service to establish a connection with the cluster.
<defer-key-association-check> (Optional): Specifies whether key association processing is done by the extend client or deferred to the cluster side. Valid values are true and false. The default value is false and indicates that key association processing is done by the extend client. If the value is set to true, .NET and C++ clients must include a parallel Java implementation of the key class on the cluster cache servers.
remote-invocation-scheme
Used in: caching-schemes
Description
The remote-invocation-scheme element contains the configuration information necessary to execute tasks within a cluster without having to first join the cluster. This scheme uses Coherence*Extend to connect to the cluster.
Elements
Table B60 describes the subelements of the remote-invocation-scheme element.
Table B60 remote-invocation-scheme Subelements
<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.
<service-name> (Optional): Specifies the name of the service.
<proxy-service-name> (Optional): Specifies the proxy service name to which this remote service connects. This element is only used if this remote service uses a name service to look up a proxy service. For details on configuring a name service, see "name-service-addresses" on page B-66. The value must match the <service-name> value of the proxy service. This element can be omitted if the <service-name> value of this remote service matches the <service-name> value of the proxy service. For details on defining proxy services, see "proxy-scheme" on page B-88.
<initiator-config> (Required): Contains the configuration of the connection initiator used by the service to establish a connection with the cluster.
replicated-scheme
Used in: caching-schemes, near-scheme, overflow-scheme
Description
The replicated scheme defines caches which fully replicate all their cache entries on each cluster node running the specified service. See "Replicated Cache" on page 12-5 for a detailed description of replicated caches.
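A minimal sketch of a replicated scheme backed by an on-heap local cache follows; the scheme and service names are illustrative.

<replicated-scheme>
   <scheme-name>example-replicated</scheme-name>
   <service-name>ReplicatedCache</service-name>
   <backing-map-scheme>
      <local-scheme/>
   </backing-map-scheme>
   <autostart>true</autostart>
</replicated-scheme>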
Elements
Table B61 describes the subelements of the replicated-scheme element.
Table B61 replicated-scheme Subelements
<scheme-name> (Optional): Specifies the scheme's name. The name must be unique within a configuration file.
<scheme-ref> (Optional): Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.
<service-name> (Optional): Specifies the name of the service which manages caches created from this scheme. Services are configured from within the <services> element in the tangosol-coherence.xml file. See Appendix A, "Operational Configuration Elements" for more information.
<service-priority> (Optional): Specifies the priority for the service thread. Legal values are from 1 to 10 where 10 is the highest priority. The default value is the service-priority value specified in the tangosol-coherence.xml descriptor. See "ReplicatedCache Service Parameters" on page A-67 for more information.
<event-dispatcher-priority> (Optional): Specifies the priority for the event dispatcher thread for each service. Legal values are from 1 to 10 where 10 is the highest priority. The default value is the event-dispatcher-priority value specified in the tangosol-coherence.xml descriptor. See "ReplicatedCache Service Parameters" on page A-67 for more information.
Table B61 (Cont.) replicated-scheme Subelements
<serializer> (Optional): Specifies either the class configuration information for a com.tangosol.io.Serializer implementation used to serialize and deserialize user types, or a reference to a serializer class configuration that is defined in the operational configuration file (see "serializer" on page A-59).
<reliable-transport> (Optional): Specifies the name of the transport used by this service instance for reliable point-to-point communication. Valid values are: datagram, tmb (TCP Message Bus), sdmb (SDP Message Bus), and imb (InfiniBand Message Bus). The imb and sdmb transports are only available for Oracle Exalogic systems. The default value is the reliable-transport value specified in the tangosol-coherence.xml descriptor. See the reliable-transport parameter in "ReplicatedCache Service Parameters" on page A-67 for more information.
<standard-lease-milliseconds> (Optional): Specifies the duration of the standard lease in milliseconds. When a lease has aged past this number of milliseconds, the lock is automatically released. Set this value to zero to specify a lease that never expires. The purpose of this setting is to avoid deadlocks or blocks caused by stuck threads; the value should be set higher than the longest expected lock duration (for example, higher than a transaction timeout). It is also recommended to set this value higher than the packet-delivery/timeout-milliseconds value. Legal values are positive long numbers or zero. The default value is the value specified for packet-delivery/timeout-milliseconds in the tangosol-coherence.xml descriptor. See "ReplicatedCache Service Parameters" on page A-67 for more information.
<lease-granularity> (Optional): Specifies the lease ownership granularity. Legal values are thread and member. A value of thread means that locks are held by the thread that obtained them and can only be released by that thread. A value of member means that locks are held by a cluster node and any thread running on the cluster node that obtained the lock can release it. The default value is the lease-granularity value specified in the tangosol-coherence.xml descriptor. See "ReplicatedCache Service Parameters" on page A-67 for more information.
Table B61 (Cont.) replicated-scheme Subelements
<request-timeout> (Optional): Specifies the maximum amount of time a client waits for a response before abandoning the original request. The request time is measured on the client side as the time elapsed from the moment a request is sent for execution to the corresponding server node(s) and includes the following:
- the time it takes to deliver the request to an executing node (server)
- the interval between the time the task is received and placed into a service queue until the execution starts
- the task execution time
- the time it takes to deliver a result back to the client
Legal values are positive integers or zero (indicating no default timeout). The default value is the value specified in the tangosol-coherence.xml descriptor. See the request-timeout parameter in "ReplicatedCache Service Parameters" on page A-67 for more information.
<guardian-timeout> (Optional): Specifies the guardian timeout value to use for guarding the service and any dependent threads. If the element is not specified for a given service, the default guardian timeout (as specified by the <timeout-milliseconds> operational configuration element) is used. See <service-guardian>. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of milliseconds is assumed.
Table B61 (Cont.) replicated-scheme Subelements
<service-failure-policy> (Optional): Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. Legal values are:
- exit-cluster (default): attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to stop the cluster services.
- exit-process: attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to exit the JVM and terminate abruptly.
- logging: causes any detected problems to be logged, but no corrective action to be taken.
- a custom class: an <instance> subelement is used to provide the class configuration information for a com.tangosol.net.ServiceFailurePolicy implementation.
<member-listener> (Optional): Specifies the configuration information for a class that implements the com.tangosol.net.MemberListener interface. The implementation must have a public default constructor. The MemberListener implementation receives cache service lifecycle events. The <member-listener> element is used as an alternative to programmatically adding a MapListener on a service.
<backing-map-scheme> (Optional): Specifies what type of cache is used within the cache server to store the entries. To ensure cache coherence, the backing-map of a replicated cache must not use a read-through pattern to load cache entries. Either use a cache-aside pattern from outside the cache service, or switch to the distributed-scheme, which supports read-through clustered caching.
<listener> (Optional): Specifies an implementation of a com.tangosol.util.MapListener which is notified of events occurring on the cache.
<autostart> (Optional): The autostart element is intended to be used by cache servers (that is, com.tangosol.net.DefaultCacheServer). It specifies whether the cache services associated with this cache scheme should be automatically started at a cluster node. Legal values are true or false. The default value is false.
resource-config
Used in: http-acceptor
Description
The resource-config element contains the configuration information for a class that extends the com.sun.jersey.api.core.ResourceConfig class. The instance is used by the HTTP acceptor to load resource and provider classes for the Coherence REST application that is mapped to the specified context path. Multiple resource configuration classes can be configured and mapped to different context paths.
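For illustration, an http-acceptor might map a custom resource configuration to the /api context path; com.example.MyResourceConfig is a hypothetical class that extends ResourceConfig.

<http-acceptor>
   ...
   <resource-config>
      <context-path>/api</context-path>
      <instance>
         <class-name>com.example.MyResourceConfig</class-name>
      </instance>
   </resource-config>
   ...
</http-acceptor>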
Elements
Table B62 describes the subelements of the resource-config element.
Table B62 resource-config Subelements
<context-path> (Optional): Specifies a root URI path for a REST application. The first character of the path must be /. The default value is /.
<instance> (Optional): Contains the configuration information for a class that extends the com.sun.jersey.api.core.ResourceConfig class. The default instance is the com.tangosol.coherence.rest.server.DefaultResourceConfig class.
serializer
Used in: acceptor-config, defaults, distributed-scheme, initiator-config, invocation-scheme, optimistic-scheme, replicated-scheme, transactional-scheme,
Description
The serializer element contains the class configuration information for a com.tangosol.io.Serializer implementation. The serializer element accepts either a reference to a serializer configuration or a full serializer configuration. The best practice is to reference a configuration which is defined in the operational configuration file. The operational configuration file contains two pre-defined serializer class configurations: one for Java (default) and one for POF. See "serializer" on page A-59. The following example demonstrates referring to the POF serializer definition that is in the operational configuration file:
... <serializer>pof</serializer> ...
Elements
Table B63 describes the subelements of the serializer element.
Table B63 serializer Subelements
<instance> (Optional): Contains the class configuration information for a com.tangosol.io.Serializer implementation.
socket-address
Used in: address-provider, remote-addresses
Description
The socket-address element specifies the address and port on which a TCP/IP acceptor is listening.
Elements
Table B64 describes the subelements of the socket-address element.
Table B64 socket-address Subelements
<address> (Required): Specifies the IP address (IP or DNS name) on which a TCP/IP acceptor socket is listening. If the address is a bind address, then the address may also be entered using CIDR notation as a subnet and mask (for example, 192.168.1.0/24), which allows runtime resolution against the available local IP addresses.
<port> (Required): Specifies the port on which a TCP/IP acceptor socket is listening. Legal values are from 1 to 65535.
socket-provider
Used in: tcp-acceptor, tcp-initiator, defaults, ssl.
Description
The <socket-provider> element contains the configuration information for a socket and channel factory. The socket providers that are configured within the <tcp-acceptor> and <tcp-initiator> elements are for use with Coherence*Extend. A socket provider that is configured within the <http-acceptor> element is used by Coherence REST. Socket providers for TCMP are configured in an operational override within the <unicast-listener> element. The <socket-provider> element accepts either a reference to a socket provider configuration or a full socket provider configuration. The best practice is to reference a configuration which is defined in the operational configuration file. See "socket-provider" on page A-83. The following socket providers are available: system (default), ssl, tcp, and sdp. Socket provider configurations are referred to using their id attribute name. The following example refers to the pre-defined SSL socket provider configuration:
<socket-provider>ssl</socket-provider>
Elements
Table B65 describes the subelements you can define within the socket-provider element.
Table B65 socket-provider Subelements
<system> (Optional): Specifies a socket provider that produces instances of the JVM's default socket and channel implementations.
<ssl> (Optional): Specifies a socket provider that produces socket and channel implementations which use SSL.
<tcp> (Optional): Specifies a socket provider that produces TCP-based socket and channel implementations.
<sdp> (Optional): Specifies a socket provider that produces SDP-based socket and channel implementations, provided that the JVM and underlying network stack support SDP.
ssl
Used in: socket-provider.
Description
The <ssl> element contains the configuration information for a socket provider that produces socket and channel implementations which use SSL.
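A skeletal SSL socket provider configuration follows; the identity-manager and trust-manager contents (key stores and passwords) are omitted here and must be supplied for the target environment.

<socket-provider>
   <ssl>
      <protocol>TLS</protocol>
      <identity-manager>
         ...
      </identity-manager>
      <trust-manager>
         ...
      </trust-manager>
   </ssl>
</socket-provider>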
Elements
Table B66 describes the elements you can define within the ssl element.
Table B66 ssl Subelements
<protocol> (Optional): Specifies the name of the protocol used by the socket and channel implementations produced by the SSL socket provider. The default value is TLS.
<provider> (Optional): Specifies the configuration for a security provider instance.
<executor> (Optional): Specifies the configuration information for an implementation of the java.util.concurrent.Executor interface. A <class-name> subelement is used to provide the name of a class that implements the Executor interface. As an alternative, a <class-factory-name> subelement can specify a factory class for creating Executor instances and a <method-name> subelement can specify the name of a static factory method on the factory class which performs object instantiation. Either approach can specify initialization parameters using the <init-params> element.
<identity-manager> (Optional): Specifies the configuration information for initializing an identity manager instance.
<trust-manager> (Optional): Specifies the configuration information for initializing a trust manager instance.
<hostname-verifier> (Optional): Specifies the configuration information for an implementation of the javax.net.ssl.HostnameVerifier interface. During the SSL handshake, if the URL's host name and the server's identification host name mismatch, the verification mechanism calls back to this instance to determine if the connection should be allowed. A <class-name> subelement is used to provide the name of a class that implements the HostnameVerifier interface. As an alternative, a <class-factory-name> subelement can specify a factory class for creating HostnameVerifier instances and a <method-name> subelement can specify the name of a static factory method on the factory class which performs object instantiation. Either approach can specify initialization parameters using the <init-params> element.
<socket-provider> (Optional): Specifies the configuration information for a delegate provider for SSL. Valid values are tcp and sdp. The default value is tcp.
tcp-acceptor
Used in: acceptor-config.
Description
The tcp-acceptor element specifies the configuration information for a connection acceptor that accepts connections from Coherence*Extend clients over TCP/IP. See Developing Remote Clients for Oracle Coherence for additional details and example configurations.
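For illustration, an acceptor-config fragment might bind the acceptor to a specific address and enable keep-alive; the address and port are illustrative.

<acceptor-config>
   <tcp-acceptor>
      <local-address>
         <address>192.168.0.3</address>
         <port>9099</port>
      </local-address>
      <keep-alive-enabled>true</keep-alive-enabled>
   </tcp-acceptor>
</acceptor-config>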
Elements
Table B67 describes the subelements of the tcp-acceptor element.
Table B67 tcp-acceptor Subelements
<socket-provider> (Optional): Specifies the configuration for a socket and channel factory.
<local-address> (Optional): Specifies the local address (IP, or DNS name, and port) on which the TCP/IP server socket (opened by the connection acceptor) is bound.
<address-provider> (Optional): Specifies either the local address (IP, or DNS name, and port) on which the TCP/IP server socket is bound or an implementation of the com.tangosol.net.AddressProvider interface that programmatically provides a socket address. The address-provider element also supports socket address references. A <tcp-acceptor> element can include either a <local-address> or an <address-provider> element but not both.
<reuse-address> (Optional): Specifies whether a TCP/IP socket can be bound to an in-use or recently used address. This setting is deprecated because the resulting behavior is significantly different across operating system implementations. The JVM, in general, selects a reasonable default which is safe for the target operating system. Valid values are true and false. The default value depends on the operating system.
<keep-alive-enabled> (Optional): Indicates whether the SO_KEEPALIVE option is enabled on a TCP/IP socket. The SO_KEEPALIVE option detects idle sockets and determines whether the socket should be closed. Valid values are true and false. The default value is true.
<tcp-delay-enabled> (Optional): Indicates whether TCP delay (Nagle's algorithm) is enabled on a TCP/IP socket. Valid values are true and false. TCP delay is disabled by default.
Table B67 (Cont.) tcp-acceptor Subelements
<receive-buffer-size> (Optional): Configures the size of the underlying TCP/IP socket network receive buffer. Increasing the receive buffer size can increase the performance of network I/O for high-volume connections, while decreasing it can help reduce the backlog of incoming data. The value of this element must be in the following format: (\d)+((.)(\d)+)?[K|k|M|m|G|g]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied. If the value does not contain a factor, a factor of one is assumed. The default value is operating system dependent.
<send-buffer-size> (Optional): Configures the size of the underlying TCP/IP socket network send buffer. The value of this element must be in the following format: (\d)+((.)(\d)+)?[K|k|M|m|G|g]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied. If the value does not contain a factor, a factor of one is assumed. The default value is operating system dependent.
<listen-backlog> (Optional): Configures the size of the TCP/IP server socket backlog queue. Valid values are positive integers. The default value is operating system dependent.
<linger-timeout> (Optional): Specifies a value for the SO_LINGER option on a TCP/IP socket. The SO_LINGER option controls how long to wait before a socket is forcefully closed. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of milliseconds is assumed. The default value is -1, which disables the linger timeout. A linger timeout is only set if the value is greater than zero.
<authorized-hosts> (Optional): A collection of IP addresses of TCP/IP initiator hosts that are allowed to connect to this TCP/IP acceptor.
<suspect-protocol-enabled> (Optional): Specifies whether the suspect protocol is enabled to detect and close rogue Coherence*Extend client connections. The suspect protocol is enabled by default. Valid values are true and false.
Table B67 (Cont.) tcp-acceptor Subelements
<suspect-buffer-size> (Optional): Specifies the outgoing connection backlog (in bytes) after which the corresponding client connection is marked as suspect. A suspect client connection is then monitored until it is no longer suspect or it is closed to protect the proxy server from running out of memory. The value of this element must be in the following format: (\d)+((.)(\d)+)?[K|k|M|m|G|g|T|t]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied. If the value does not contain a factor, a factor of one is assumed. The default value is 10000000.
<suspect-buffer-length> (Optional): Specifies the outgoing connection backlog (in messages) after which the corresponding client connection is marked as suspect. A suspect client connection is then monitored until it is no longer suspect or it is closed to protect the proxy server from running out of memory. The default value is 10000.
<nominal-buffer-size> (Optional): Specifies the outgoing connection backlog (in bytes) at which point a suspect client connection is no longer considered to be suspect. The value of this element must be in the following format: (\d)+((.)(\d)+)?[K|k|M|m|G|g|T|t]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied. If the value does not contain a factor, a factor of one is assumed. The default value is 2000000.
<nominal-buffer-length> (Optional) - Specifies the outgoing connection backlog (in messages) at which point a suspect client connection is no longer considered to be suspect. The default value is 2000.

<limit-buffer-size> (Optional) - Specifies the outgoing connection backlog (in bytes) at which point the corresponding client connection must be closed to protect the proxy server from running out of memory. The value of this element must be in the following format: (\d)+((.)(\d)+)?[K|k|M|m|G|g|T|t]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied. If the value does not contain a factor, a factor of one is assumed. The default value is 100000000.

<limit-buffer-length> (Optional) - Specifies the outgoing connection backlog (in messages) at which point the corresponding client connection must be closed to protect the proxy server from running out of memory. The default value is 60000.
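For reference, a minimal sketch of a <tcp-acceptor> configuration within a proxy scheme's <acceptor-config> might look as follows; the address, port, and size values are placeholders chosen for illustration only:

<acceptor-config>
   <tcp-acceptor>
      <local-address>
         <address>192.168.0.10</address>
         <port>9099</port>
      </local-address>
      <receive-buffer-size>64KB</receive-buffer-size>
      <send-buffer-size>64KB</send-buffer-size>
      <linger-timeout>2s</linger-timeout>
      <suspect-protocol-enabled>true</suspect-protocol-enabled>
      <suspect-buffer-size>10MB</suspect-buffer-size>
      <limit-buffer-size>100MB</limit-buffer-size>
   </tcp-acceptor>
</acceptor-config>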
tcp-initiator
Used in: initiator-config.
Description
The tcp-initiator element specifies the configuration information for a connection initiator that enables Coherence*Extend clients to connect to a remote cluster by using TCP/IP. See Developing Remote Clients for Oracle Coherence for additional details and example configurations.
Elements
Table B68 describes the subelements of the tcp-initiator element.
Table B68 tcp-initiator Subelements

<socket-provider> (Optional) - Specifies the configuration for a socket and channel factory.

<local-address> (Optional) - Specifies the local address (IP, or DNS name, and port) on which the TCP/IP client socket (opened by the TCP/IP initiator) is bound.

<name-service-addresses> (Optional) - Contains the address (IP, or DNS name, and port) of one or more name service TCP/IP acceptors. The TCP/IP connection initiator uses this information to establish a connection with a remote cluster. This element cannot be used if the remote-addresses element is used to configure remote addresses.

<remote-addresses> (Optional) - Contains the address (IP, or DNS name, and port) of one or more TCP/IP connection acceptors. The TCP/IP connection initiator uses this information to establish a connection with a remote cluster. This element cannot be used if the name-service-addresses element is used.

<reuse-address> (Optional) - Specifies whether a TCP/IP socket can be bound to an in-use or recently used address. This setting is deprecated because the resulting behavior differs significantly across operating system implementations. The JVM, in general, selects a reasonable default that is safe for the target operating system. Valid values are true and false. The default value depends on the operating system.

<keep-alive-enabled> (Optional) - Indicates whether the SO_KEEPALIVE option is enabled on a TCP/IP socket. The SO_KEEPALIVE option detects idle sockets and determines whether the socket should be closed. Valid values are true and false. The default value is true.

<tcp-delay-enabled> (Optional) - Indicates whether TCP delay (Nagle's algorithm) is enabled on a TCP/IP socket. Valid values are true and false. TCP delay is disabled by default.
<receive-buffer-size> (Optional) - Configures the size of the underlying TCP/IP socket network receive buffer. Increasing the receive buffer size can increase the performance of network I/O for high-volume connections, while decreasing it can help reduce the backlog of incoming data. The value of this element must be in the following format: (\d)+((.)(\d)+)?[K|k|M|m|G|g]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied. If the value does not contain a factor, a factor of one is assumed. The default value is operating system dependent.

<send-buffer-size> (Optional) - Configures the size of the underlying TCP/IP socket network send buffer. The value of this element must be in the following format: (\d)+((.)(\d)+)?[K|k|M|m|G|g]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied. If the value does not contain a factor, a factor of one is assumed. The default value is operating system dependent.

<connect-timeout> (Optional) - Deprecated. The <connect-timeout> element is now specified within the <initiator-config> element.

<linger-timeout> (Optional) - Specifies a value for the SO_LINGER option on a TCP/IP socket. The SO_LINGER option controls how long to wait before a socket is forcefully closed. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of milliseconds is assumed. The default value is -1, which disables the linger timeout. A linger timeout is only set if the value is greater than zero.
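For reference, the following sketch shows how a Coherence*Extend client might configure a <tcp-initiator> within an <initiator-config>; the host name and port are placeholders:

<initiator-config>
   <tcp-initiator>
      <remote-addresses>
         <socket-address>
            <address>proxy.example.com</address>
            <port>9099</port>
         </socket-address>
      </remote-addresses>
      <keep-alive-enabled>true</keep-alive-enabled>
      <receive-buffer-size>64KB</receive-buffer-size>
   </tcp-initiator>
</initiator-config>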
transactional-scheme
Used in: caching-schemes.
Description
The transactional-scheme element defines a transactional cache, which is a specialized distributed cache that provides transactional guarantees. Multiple transactional-scheme elements may be defined to support different configurations. Applications use transactional caches in one of three ways:
- Applications use the CacheFactory.getCache() method to get an instance of a transactional cache (see the sketch after this list). In this case, there are implicit transactional guarantees when performing cache operations. However, default transaction behavior cannot be changed.
- Applications explicitly use the Transaction Framework API to create a Connection instance that uses a transactional cache. In this case, cache operations are performed within a transaction and the application has full control to change default transaction behavior as required.
- Java EE applications use the Coherence Resource Adapter to create a Transaction Framework API Connection instance that uses a transactional cache. In this case, cache operations are performed within a transaction that can participate as part of a distributed (global) transaction. Applications can change some default transaction behavior.
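The following is a minimal sketch of the first (implicit) usage style; the cache name MyTxCache is a placeholder that is assumed to be mapped to a transactional-scheme in the cache configuration file:

import com.tangosol.net.CacheFactory;
import com.tangosol.net.NamedCache;

public class TxCacheExample {
    public static void main(String[] args) {
        // MyTxCache is assumed to be mapped to a transactional-scheme;
        // each cache operation then carries implicit transactional guarantees.
        NamedCache cache = CacheFactory.getCache("MyTxCache");
        cache.put("key", "value");
        System.out.println(cache.get("key"));
        CacheFactory.shutdown();
    }
}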
Elements
Table B69 describes the subelements of the transactional-scheme element.
Table B69 transactional-scheme Subelements

<scheme-name> (Optional) - Specifies the scheme's name. The name must be unique within a configuration file.

<scheme-ref> (Optional) - Specifies the name of another scheme to inherit from. See "Using Scheme Inheritance" on page 13-13 for more information.

<service-name> (Optional) - Specifies the name of the service which manages caches created from this scheme. The default service name if no service name is provided is TransactionalCache.

<service-priority> (Optional) - Specifies the priority for the service thread. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 10.

<event-dispatcher-priority> (Optional) - Specifies the priority for the event dispatcher thread for each service. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 10.

<serializer> (Optional) - Specifies either the class configuration information for a com.tangosol.io.Serializer implementation used to serialize and deserialize user types, or a reference to a serializer class configuration that is defined in the operational configuration file (see "serializer" on page A-59).
<reliable-transport> (Optional) - Specifies the name of the transport used by this service instance for reliable point-to-point communication. Valid values are: datagram, tmb (TCP Message Bus), sdmb (SDP Message Bus), and imb (InfiniBand Message Bus). The imb and sdmb transports are only available for Oracle Exalogic systems. The default value is the reliable-transport value specified in the tangosol-coherence.xml descriptor. See the reliable-transport parameter in "DistributedCache Service Parameters" on page A-62 for more information.

<thread-count> (Optional) - Specifies the number of daemon threads used by the partitioned cache service. Legal values are positive integers, 0, or -1. The value 0 indicates that all relevant tasks are performed on the service thread. The value -1 indicates that tasks are performed on the caller's thread where possible. The default value is the thread-count value specified in the tangosol-coherence.xml descriptor. See the thread-count parameter in "DistributedCache Service Parameters" on page A-62 for more information. Specifying a thread count changes the default behavior of the Transactional Framework's internal transaction caches that are used for transactional storage and recovery.

<worker-priority> (Optional) - Specifies the priority for the worker threads. Legal values are from 1 to 10 where 10 is the highest priority. The default value is 5.

<local-storage> (Optional) - Specifies whether a cluster node contributes storage to the cluster, that is, maintains partitions. When disabled, the node is considered a cache client. Normally this value should be left unspecified within the configuration file and instead set on a per-process basis using the tangosol.coherence.distributed.localstorage system property. This allows cache clients and servers to use the same configuration descriptor. Legal values are true or false. The default value is the local-storage value specified in the tangosol-coherence.xml descriptor. See the local-storage parameter in "DistributedCache Service Parameters" on page A-62 for more information.
<partition-count> (Optional) - Specifies the number of partitions that a partitioned (distributed) cache is "chopped up" into. Each member running the partitioned cache service that has the local-storage (<local-storage> subelement) option set to true manages a "fair" (balanced) number of partitions. The number of partitions should be a prime number and sufficiently large such that a given partition is expected to be no larger than 50MB. The following are good defaults for sample service storage sizes:

   service storage    partition-count
   100M               257
   1G                 509
   10G                2039
   50G                4093
   100G               8191

A list of the first 1,000 primes can be found at http://primes.utm.edu/lists/. Valid values are positive integers. The default value is the value specified in the tangosol-coherence.xml descriptor. See the partition-count parameter in "DistributedCache Service Parameters" on page A-62 for more information.

<high-units> (Optional) - Specifies the transaction storage size. Once the transactional storage size is reached, an eviction policy is used that removes 25% of eligible entries from storage. The value of this element must be in the following format: (\d)+((.)(\d)+)?[K|k|M|m|G|g|T|t]?[B|b]? where the first non-digit (from left to right) indicates the factor with which the preceding decimal value should be multiplied. If the value does not contain a factor, a factor of one is assumed. The default value is 10MB.

<transfer-threshold> (Optional) - Specifies the threshold for the primary buckets distribution in kilobytes. When a new node joins the partitioned cache service or when a member of the service leaves, the remaining nodes perform a task of bucket ownership re-distribution. During this process, the existing data gets re-balanced along with the ownership information. This parameter indicates a preferred message size for data transfer communications. Setting this value lower makes the distribution process take longer, but reduces network bandwidth utilization during this activity. Legal values are integers greater than zero. The default value is the transfer-threshold value specified in the tangosol-coherence.xml descriptor. See the transfer-threshold parameter in "DistributedCache Service Parameters" on page A-62 for more information.
<backup-count> (Optional) - Specifies the number of members of the partitioned cache service that hold the backup data for each unit of storage in the cache. A value of 0 means that for abnormal termination, some portion of the data in the cache is lost. The default value is the backup-count value specified in the tangosol-coherence.xml descriptor. See the backup-count parameter in "DistributedCache Service Parameters" on page A-62 for more information.

<partition-assignment-strategy> (Optional) - Specifies the strategy that is used by a partitioned service to manage partition distribution. Valid values are legacy, simple, or a class that implements the com.tangosol.net.partition.PartitionAssignmentStrategy interface. The legacy assignment strategy indicates that partition distribution is managed individually on each cluster member. The simple strategy is a centralized distribution strategy that attempts to balance partition distribution while ensuring machine-safety and is more deterministic and efficient than the legacy strategy. The default value is simple. Enter a custom strategy using the <instance> element.

<task-hung-threshold> (Optional) - Specifies the amount of time in milliseconds that a task can execute before it is considered "hung". Note: a posted task that has not yet started is never considered hung. This attribute is applied only if the thread pool is used (the thread-count value is positive). Legal values are positive integers or zero (indicating no default timeout). The default value is the task-hung-threshold value specified in the tangosol-coherence.xml descriptor. See the task-hung-threshold parameter in "DistributedCache Service Parameters" on page A-62 for more information.

<task-timeout> (Optional) - Specifies the timeout value in milliseconds for requests executing on the service worker threads. This attribute is applied only if the thread pool is used (the thread-count value is positive). If zero is specified, the default service-guardian <timeout-milliseconds> value is used. Legal values are nonnegative integers. The default value is the value specified in the tangosol-coherence.xml descriptor. See the task-timeout parameter in "DistributedCache Service Parameters" on page A-62.

<request-timeout> (Optional) - Specifies the maximum amount of time a client waits for a response before abandoning the original request. The request time is measured on the client side as the time elapsed from the moment a request is sent for execution to the corresponding server node(s) and includes:
   - the time it takes to deliver the request to an executing node (server)
   - the interval between the time the task is received and placed into a service queue until the execution starts
   - the task execution time
   - the time it takes to deliver a result back to the client
Legal values are positive integers or zero (indicating no default timeout). The default value is the value specified in the tangosol-coherence.xml descriptor. See the request-timeout parameter in "DistributedCache Service Parameters" on page A-62 for more information.
<guardian-timeout> (Optional) - Specifies the guardian timeout value to use for guarding the service and any dependent threads. If the element is not specified for a given service, the default guardian timeout (as specified by the <timeout-milliseconds> operational configuration element) is used. See <service-guardian>. The value of this element must be in the following format: (\d)+((.)(\d)+)?[MS|ms|S|s|M|m|H|h|D|d]? where the first non-digits (from left to right) indicate the unit of time duration. If the value does not contain a unit, a unit of milliseconds is assumed.

<service-failure-policy> (Optional) - Specifies the action to take when an abnormally behaving service thread cannot be terminated gracefully by the service guardian. Legal values are:
   - exit-cluster (default): attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to stop the cluster services.
   - exit-process: attempts to recover threads that appear to be unresponsive. If the attempt fails, an attempt is made to stop the associated service. If the associated service cannot be stopped, this policy causes the local node to exit the JVM and terminate abruptly.
   - logging: causes any detected problems to be logged, but no corrective action to be taken.
   - a custom class: an <instance> subelement is used to provide the class configuration information for a com.tangosol.net.ServiceFailurePolicy implementation.

<partitioned-quorum-policy-scheme> (Optional) - Specifies quorum policy settings for the partitioned cache service.

<autostart> (Optional) - The element is intended to be used by cache servers (that is, com.tangosol.net.DefaultCacheServer). It specifies whether the cache services associated with this cache scheme should be automatically started at a cluster node. Legal values are true or false. The default value is false.
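As a reference point, a transactional-scheme definition might look like the following sketch; the scheme name, service name, and sizing values are placeholders:

<transactional-scheme>
   <scheme-name>example-transactional</scheme-name>
   <service-name>TransactionalCache</service-name>
   <thread-count>10</thread-count>
   <high-units>15M</high-units>
   <autostart>true</autostart>
</transactional-scheme>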
trust-manager
Used in: ssl.
Description
The <trust-manager> element contains the configuration information for initializing a javax.net.ssl.TrustManager instance. A trust manager is responsible for managing the trust material that is used when making trust decisions and for deciding whether credentials presented by a peer should be accepted. A valid trust-manager configuration contains at least one child element.
Elements
Table B70 describes the elements you can define within the trust-manager element.
Table B70 trust-manager Subelements

<algorithm> (Optional) - Specifies the algorithm used by the trust manager. The default value is SunX509.

<provider> (Optional) - Specifies the configuration for a security provider instance.

<key-store> (Optional) - Specifies the configuration for a key store implementation.
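For reference, a trust manager is typically declared inside an <ssl> socket-provider definition; the following sketch uses placeholder key store values:

<ssl>
   <protocol>TLS</protocol>
   <trust-manager>
      <algorithm>SunX509</algorithm>
      <key-store>
         <url>file:trust.jks</url>
         <password>password</password>
         <type>JKS</type>
      </key-store>
   </trust-manager>
</ssl>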
Attribute Reference
Table B71 describes the system property attribute that is available in the cache configuration deployment descriptor.
Table B71 Cache Configuration Deployment Descriptor Attribute

system-property (Optional) - This attribute is used to specify a system property name for any element. The system property is used to override the element value from the Java command line. This feature enables the same operational descriptor (and override file) to be used across all cluster nodes and customize each node using the system properties. See Appendix C, "System Property Overrides," for more information on this feature.
C System Property Overrides
This appendix describes the system property override feature and lists many of the predefined system properties that are used to change the default settings in Coherence. This appendix contains the following sections:
Override Example
For example, suppose that you want to create an override for the IP address of a multi-homed server so that, instead of the default localhost, a specific IP address for the interface (for instance, 192.168.0.301) is used. Call this override tangosol.coherence.localhost. First, add a system-property to the cluster-config, unicast-listener, or address element, for example:
<address system-property="tangosol.coherence.localhost">localhost</address>
Then use it by modifying the Java command line and specifying an IP address instead of the default localhost:
java -Dtangosol.coherence.localhost=192.168.0.301 -jar coherence.jar
The following preconfigured system property override values are available:

tangosol.coherence.clusterport
tangosol.coherence.distributed.backup
tangosol.coherence.distributed.backupcount
tangosol.coherence.distributed.localstorage
tangosol.coherence.distributed.threads
tangosol.coherence.distributed.transfer
tangosol.coherence.edition
tangosol.coherence.invocation.threads
tangosol.coherence.localhost
tangosol.coherence.localport
tangosol.coherence.localport.adjust
tangosol.coherence.log
tangosol.coherence.log.level
tangosol.coherence.log.limit
tangosol.coherence.machine
Table C1 (Cont.) Preconfigured System Property Override Values

tangosol.coherence.management - JMX management mode. See "management-config" on page A-35.
tangosol.coherence.management.readonly - JMX management read-only flag. See "management-config" on page A-35.
tangosol.coherence.management.remote - Remote JMX management enabled flag. See "management-config" on page A-35.
tangosol.coherence.member - Member name. See "member-identity" on page A-41.
tangosol.coherence.mode - Operational mode. See "license-config" on page A-33.
tangosol.coherence.override - Deployment configuration override filename.
tangosol.coherence.priority - Priority. See "member-identity" on page A-41.
tangosol.coherence.process - Process name. See "member-identity" on page A-41.
tangosol.coherence.proxy.threads - Coherence*Extend service thread pool size. See the thread-count subelement in "Proxy Service Parameters" on page A-74.
tangosol.coherence.rack - Rack name. See "member-identity" on page A-41.
tangosol.coherence.role - Role name. See "member-identity" on page A-41.
tangosol.coherence.security - Cache access security enabled flag. See "security-config" on page A-58.
tangosol.coherence.security.keystore - Security access controller keystore file name. See "security-config" on page A-58.
tangosol.coherence.security.permissions - Security access controller permissions file name. See "security-config" on page A-58.
tangosol.coherence.shutdownhook - Shutdown listener action. See "shutdown-listener" on page A-81.
tangosol.coherence.site - Site name. See "member-identity" on page A-41.
tangosol.coherence.tcmp.enabled - TCMP enabled flag. See the <packet-publisher-enabled> subelement in "packet-publisher" on page A-51.
tangosol.coherence.ttl - Multicast packet time to live (TTL). See the <multicast-listener-ttl> subelement in "multicast-listener" on page A-43.
tangosol.coherence.wka - Well known IP address. See "well-known-addresses" on page A-93.
tangosol.coherence.wka.port - Well known IP port. See "well-known-addresses" on page A-93.
D POF User Type Configuration Elements
Notes:
- The schema located in the coherence.jar library is always used at run time even if the xsi:schemaLocation attribute references the Web URL.
- The xsi:schemaLocation attribute can be omitted to disable schema validation.
- When deploying Coherence into environments where the default character set is EBCDIC rather than ASCII, ensure that the deployment descriptor file is in ASCII format and is deployed into its run-time environment in the binary format.
Coherence-specific user types are defined in the coherence-pof-config.xml file that is also located in the root of the coherence.jar library. This file should always be referenced as follows when creating a pof-config.xml file:
<?xml version='1.0'?>
<pof-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-pof-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-pof-config coherence-pof-config.xsd">
   <user-type-list>
      <include>coherence-pof-config.xml</include>
   </user-type-list>
   ...
</pof-config>
Element Index
Table D1 lists all non-terminal POF configuration deployment descriptor elements.
Table D1 POF Configuration Elements (Element: Used In)

default-serializer: pof-config
init-param: init-params
init-params: serializer
pof-config: root element
serializer: user-type
user-type: user-type-list
user-type-list: pof-config
default-serializer
Used in: pof-config
Description
This element specifies a PofSerializer to use when serializing and deserializing all user types defined within the pof-config element. If a serializer is specified within a user-type, then that serializer is used for that user-type instead of the default serializer. If the default serializer element is omitted, the serializer defined for the specific user type is used. If the serializer for the user type is also omitted, then the user type is assumed to implement the PortableObject interface, and the PortableObjectSerializer implementation is used as the PofSerializer. If the init-params element is omitted from the default serializer element, then the following four constructors are attempted on the specific PofSerializer implementation, and in this order:
(int nTypeId, Class clz, ClassLoader loader)
(int nTypeId, Class clz)
(int nTypeId)
()
Elements
Table D2 describes the subelements of the default-serializer element.
Table D2 default-serializer Subelements

<class-name> (Required) - Specifies the fully qualified name of the PofSerializer implementation.

<init-params> (Optional) - Specifies zero or more arguments (each as an init-param) that correspond to the parameters of a constructor of the class that is being configured.
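For reference, a pof-config file that declares a default serializer might look like the following sketch; com.example.Trade and com.example.MyPofSerializer are hypothetical classes used only for illustration:

<?xml version='1.0'?>
<pof-config xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
   xmlns="http://xmlns.oracle.com/coherence/coherence-pof-config"
   xsi:schemaLocation="http://xmlns.oracle.com/coherence/coherence-pof-config coherence-pof-config.xsd">
   <user-type-list>
      <include>coherence-pof-config.xml</include>
      <user-type>
         <type-id>1001</type-id>
         <class-name>com.example.Trade</class-name>
      </user-type>
   </user-type-list>
   <default-serializer>
      <class-name>com.example.MyPofSerializer</class-name>
   </default-serializer>
</pof-config>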
init-param
Used in: init-params
Description
The init-param element provides a type for a configuration parameter and a corresponding value to pass as an argument.
Elements
Table D3 describes the subelements of the init-param element.
Table D3 init-param Subelements

<param-type> (Required) - The param-type element specifies the Java type of the initialization parameter. Supported types are:
   - string: indicates that the value is a java.lang.String
   - boolean: indicates that the value is a java.lang.Boolean
   - int: indicates that the value is a java.lang.Integer
   - long: indicates that the value is a java.lang.Long
   - double: indicates that the value is a java.lang.Double
   - decimal: indicates that the value is a java.math.BigDecimal
   - file: indicates that the value is a java.io.File
   - date: indicates that the value is a java.sql.Date
   - time: indicates that the value is a java.sql.Time
   - datetime: indicates that the value is a java.sql.Timestamp
   - xml: indicates that the value is the entire init-param XmlElement
The value is converted to the specified type, and the target constructor or method must have a parameter of that type for the instantiation to succeed.

<param-value> (Required) - The param-value element specifies a value of the initialization parameter. The value is in a format specific to the type of the parameter. There are four reserved values that can be specified. Each of these values is replaced at run time with a value before the constructor is invoked:
   - {type-id}: replaced with the Type ID of the User Type
   - {class-name}: replaced with the name of the class for the User Type
   - {class}: replaced with the Class for the User Type
   - {class-loader}: replaced with the ConfigurablePofContext's ContextClassLoader
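For reference, the following sketch shows a user-type whose serializer receives the reserved {type-id} and {class-name} values as constructor arguments; com.example.Order and com.example.OrderSerializer are hypothetical classes:

<user-type>
   <type-id>1002</type-id>
   <class-name>com.example.Order</class-name>
   <serializer>
      <class-name>com.example.OrderSerializer</class-name>
      <init-params>
         <init-param>
            <param-type>int</param-type>
            <param-value>{type-id}</param-value>
         </init-param>
         <init-param>
            <param-type>string</param-type>
            <param-value>{class-name}</param-value>
         </init-param>
      </init-params>
   </serializer>
</user-type>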
init-params
Used in: serializer, default-serializer
Description
The init-params element contains zero or more arguments (each as an init-param) that correspond to the parameters of a constructor of the class that is being configured.
Elements
Table D4 describes the subelements of the init-params element.
Table D4 init-params Subelements

<init-param> (Required) - The init-param element provides a type for a configuration parameter and a corresponding value to pass as an argument.
pof-config
root element
Description
The pof-config element is the root element of the POF user type configuration descriptor.
Elements
Table D5 describes the subelements of the pof-config element.
Table D5 pof-config Subelements

<user-type-list> (Required) - The user-type-list element contains zero or more user-type elements. Each POF user type that is used must be listed in the user-type-list. The user-type-list element may also contain zero or more include elements. Each include element is used to add user-type elements defined in another pof-config file.

<allow-interfaces> (Optional) - The allow-interfaces element indicates whether the user-type class-name can specify Java interface types in addition to Java class types. Valid values are true or false. The default value is false.

<allow-subclasses> (Optional) - The allow-subclasses element indicates whether the user-type class-name can specify a Java class type that is abstract, and whether sub-classes of any specified user-type class-name are permitted at run time and automatically mapped to the specified super-class for purposes of obtaining a serializer. Valid values are true or false. The default value is false.

<enable-references> (Optional) - The enable-references element indicates whether or not Identity/Reference type support is enabled. Valid values are true or false. The default value is false.

<default-serializer> (Optional) - The default-serializer specifies what serializer to use to serialize and deserialize all user types defined in the POF configuration file. If a user-type defines a specific serializer, then that serializer is used instead of the default serializer.
serializer
Used in: user-type
Description
The serializer element specifies what POF serializer to use to serialize and deserialize a specific user type. A PofSerializer implementation is used to serialize and deserialize user type values to and from a POF stream. If the serializer element is omitted, then the user type is assumed to implement the PortableObject interface and the PortableObjectSerializer implementation is used as the POF serializer. If POF annotations are used, then the PofAnnotationSerializer implementation is used as the POF serializer. If the init-params element is omitted, then the following four constructors are attempted (in this order) on the specific PofSerializer implementation:
(int nTypeId, Class clz, ClassLoader loader)
(int nTypeId, Class clz)
(int nTypeId)
()
Elements
Table D6 describes the subelements of the serializer element.
Table D6 serializer Subelements

<class-name> (Required) - Specifies the fully qualified name of the PofSerializer implementation.

<init-params> (Optional) - The init-params element contains zero or more arguments (each as an init-param) that correspond to the parameters of a constructor of the class that is being configured.
user-type
Used in: user-type-list
Description
The user-type element contains the declaration of a POF user type. A POF user type is a uniquely identifiable, portable, versionable object class that can be communicated among systems regardless of language, operating system, hardware and location. Within the user-type element, the type-id element is optional, but its use is strongly suggested to support schema versioning and evolution. Within the user-type element, the class-name element is required, and specifies the fully qualified name of the Java class or interface that all values of the user type are type-assignable to. If the serializer element is omitted, then the user type is assumed to implement the PortableObject interface, and the PortableObjectSerializer implementation is used as the PofSerializer.
Elements
Table D7 describes the subelements of the user-type element.
Table D7 user-type Subelements

<type-id> (Optional) - The type-id element specifies an integer value (n >= 0) that uniquely identifies the user type. If none of the user-type elements contains a type-id element, then the type IDs for the user types are based on the order in which they appear in the user-type-list, with the first user type being assigned the type ID 0, the second user type being assigned the type ID 1, and so on. However, it is strongly recommended that user type IDs always be specified, to support schema versioning and evolution. The first 1000 IDs are reserved for Coherence internal use and cannot be used.

<class-name> (Required) - The class-name element specifies the fully qualified name of a Java class or interface that all values of the user type are type-assignable to.

<serializer> (Optional) - The serializer element specifies what PofSerializer to use to serialize and deserialize a specific user type. A PofSerializer is used to serialize and deserialize user type values to and from a POF stream. Within the serializer element, the class-name element is required, and zero or more constructor parameters can be defined within an init-params element. If the serializer element is omitted, then the user type is assumed to implement the PortableObject interface, and the PortableObjectSerializer implementation is used as the PofSerializer. If the init-params element is omitted from the serializer element, then the following four constructors are attempted on the specific PofSerializer implementation, and in this order:
   (int nTypeId, Class clz, ClassLoader loader)
   (int nTypeId, Class clz)
   (int nTypeId)
   ()
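For reference, a minimal user-type declaration that relies on the PortableObject interface (no explicit serializer element) might look like the following sketch; the type ID and class name are placeholders, with the ID chosen above the first 1000 values reserved for Coherence:

<user-type>
   <type-id>1001</type-id>
   <class-name>com.example.Trade</class-name>
</user-type>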
user-type-list
Used in: pof-config
Description
The user-type-list element contains zero or more user-type elements. Each POF user type that is used must be listed in the user-type-list. The user-type-list element may also contain zero or more include elements. Each include element is used to add user-type elements defined in another pof-config file.
Elements
Table D8 describes the subelements of the user-type-list element.
Table D8 user-type-list Subelements

<user-type> (Optional) - The user-type element contains the declaration of a POF user type. A POF user type is a uniquely identifiable, portable, versionable object class that can be communicated among systems regardless of language, operating system, hardware and location. Any number of <user-type> elements may be specified. Within the user-type element, the type-id element is optional, but its use is strongly suggested to support schema versioning and evolution. Within the user-type element, the class-name element is required, and specifies the fully qualified name of the Java class or interface that all values of the user type are type-assignable to. If the serializer element is omitted, then the user type is assumed to implement the PortableObject interface, and the PortableObjectSerializer implementation is used as the PofSerializer.

<include> (Optional) - The include element specifies the location of a POF configuration file to load user-type elements from. The value is a locator string (either a valid path or URL) that identifies the location of the target file. Any number of <include> elements may be specified.
E The PIF-POF Binary Format
This appendix describes the binary streams for the Portable Object Format (POF) and the Portable Invocation Format (PIF) that are used to serialize objects in a platform and language neutral way. This appendix includes the following sections:
- Overview of the PIF-POF Binary Format
- Stream Format
- Binary Formats for Predefined Types
- Binary Format for User Types
Stream Format
The PIF-POF stream format is octet-based; a PIF-POF stream is a sequence of octet values. For the sake of clarity, this documentation treats all octets as unsigned 8-bit integer values in the range 0x00 to 0xFF (decimal 0 to 255). Byte-ordering is explicitly not a concern since (in PIF-POF) a given octet value that is represented by an unsigned 8-bit integer value is always written and read as the same unsigned 8-bit integer value. A PIF stream contains exactly one Invocation. An Invocation consists of an initial POF stream that contains an Integer Value for the remaining length of the Invocation, immediately followed by a POF stream that contains an Integer Value that is the conversation identifier, immediately followed by a POF stream that contains a User Type value that is the message object. The remaining length indicates the total number of octets used to encode the conversation identifier and the message object; the remaining length is provided so that a process receiving an Invocation can determine when the Invocation has been fully received. The conversation identifier is used to support multiple logical clients and services multiplexed through a single connection, just as TCP/IP provides multiple logical port numbers for a given IP address. The message object is defined by the particular high-level conversational protocol.
A POF stream contains exactly one Value. The Value contains a Type Identifier, and if the Type Identifier does not imply a value, then it is immediately trailed by a data structure whose format is defined by the Type Identifier.
Integer Values
The stream format relies extensively on the ability to encode integer values in a compact form. Coherence refers to this integer binary format as a packed integer. This format uses an initial octet and one or more trailing octets as necessary; it is a variable-length format. Table E1 describes the three regions in the first octet.
Table E1 Regions in the First Octet of a Packed Integer

0x80 - Continuation indicator
0x40 - Negative indicator
0x3F - Integer value (6 binary LSDs)
Example E1 illustrates writing a 32-bit integer value to an octet stream as supported in Coherence.
Example E1 Writing a 32-bit Integer Value to an Octet Stream

public static void writeInt(DataOutput out, int n) throws IOException {
    int b = 0;
    if (n < 0) {
        b = 0x40;
        n = ~n;
    }
    b |= (byte) (n & 0x3F);
    n >>>= 6;
    while (n != 0) {
        b |= 0x80;
        out.writeByte(b);
        b = (n & 0x7F);
        n >>>= 7;
    }
    out.writeByte(b);
}
Example E2 illustrates reading a 32-bit integer value from an octet stream as supported in Coherence.
Example E2 Reading a 32-bit Integer Value from an Octet Stream

public static int readInt(DataInput in) throws IOException {
    int b = in.readUnsignedByte();
    int n = b & 0x3F;
    int cBits = 6;
    boolean fNeg = (b & 0x40) != 0;
    while ((b & 0x80) != 0) {
        b = in.readUnsignedByte();
        n |= ((b & 0x7F) << cBits);
        cBits += 7;
    }
    if (fNeg) {
        n = ~n;
    }
    return n;
}
Integer values used within this documentation without an explicit Type Identifier are assumed to be 32-bit signed integer values that have a decimal range of -2^31 to 2^31-1. Table E3 illustrates some integer value examples.
Table E3 Binary Formats for Integer Values Without a Type Identifier

Value 0: 0x00
Value 1: 0x01
Value 2: 0x02
Value 99: 0xA301
Value 9999: 0x8F9C01
Value -1: 0x40
Value -2: 0x41
Value -99: 0xE201
Value -9999: 0xCE9C01
Type Identifiers
A Type Identifier is encoded in the binary stream as an Integer Value. Type Identifiers greater than or equal to zero are user Type Identifiers. Type Identifiers less than zero are predefined ("intrinsic") type identifiers. Table E4 lists the predefined identifiers.
Table E4 Predefined Type Identifiers

-1 (0x40): int16
-2 (0x41): int32
-3 (0x42): int64
-4 (0x43): int128*
-5 (0x44): float32
-6 (0x45): float64
-7 (0x46): float128*
-8 (0x47): decimal32*
-9 (0x48): decimal64*
-10 (0x49): decimal128*
-11 (0x4A): boolean
-12 (0x4B): octet
-13 (0x4C): octet-string
-14 (0x4D): char
-15 (0x4E): char-string
-16 (0x4F): date
-17 (0x50): year-month-interval*
-18 (0x51): time
-19 (0x52): time-interval*
-20 (0x53): datetime
-21 (0x54): day-time-interval*
-22 (0x55): collection
-23 (0x56): uniform-collection
-24 (0x57): array
-25 (0x58): uniform-array
-26 (0x59): sparse-array
-27 (0x5A): uniform-sparse-array
-28 (0x5B): map
-29 (0x5C): uniform-keys-map
-30 (0x5D): uniform-map
-31 (0x5E): identity
-32 (0x5F): reference
Type Identifiers less than or equal to -33 are a combination of a type and a value. This form is used to reduce space for these commonly used values. Table E5 lists the type identifiers that combine type and value.
Table E5 Type Identifiers that Combine a Type and a Value

-33 (0x60): boolean:false
-34 (0x61): boolean:true
-35 (0x62): string:zero-length
-36 (0x63): collection:empty
-37 (0x64): reference:null
-38 (0x65): floating-point:+infinity
-39 (0x66): floating-point:-infinity
-40 (0x67): floating-point:NaN
-41 (0x68): int:-1
-42 (0x69): int:0
-43 (0x6A): int:1
-44 (0x6B): int:2
-45 (0x6C): int:3
-46 (0x6D): int:4
-47 (0x6E): int:5
-48 (0x6F): int:6
-49 (0x70): int:7
-50 (0x71): int:8
-51 (0x72): int:9
-52 (0x73): int:10
-53 (0x74): int:11
-54 (0x75): int:12
-55 (0x76): int:13
-56 (0x77): int:14
-57 (0x78): int:15
-58 (0x79): int:16
-59 (0x7A): int:17
-60 (0x7B): int:18
-61 (0x7C): int:19
-62 (0x7D): int:20
-63 (0x7E): int:21
-64 (0x7F): int:22
Binary Formats for Predefined Types

Int
Four signed integer types are supported: int16, int32, int64, and int128. If a type identifier for an integer type is encountered in the stream, it is immediately followed by an Integer Value. The four signed integer types vary only by the length that is required to support the largest value of the type using the common "twos complement" binary format. The Type Identifier, one of int16, int32, int64, or int128, is followed by an Integer Value in the stream. If the Integer Value is outside of the range supported by the type (-2^15 to 2^15-1 for int16, -2^31 to 2^31-1 for int32, -2^63 to 2^63-1 for int64, or -2^127 to 2^127-1 for int128), then the result is undefined and may be bitwise truncation or an exception. Additionally, there are some Type Identifiers that combine the int designation with a value into a single byte for purposes of compactness. As a result, these Type Identifiers are not followed by an Integer Value in the stream, since the value is included in the Type Identifier. Table E6 illustrates these type identifiers.
Table E6 Type Identifiers that Combine an int Data Type with a Value

Value 0: int16 0x69, int32 0x69, int64 0x69, int128 0x69
Value 1: int16 0x6A, int32 0x6A, int64 0x6A, int128 0x6A
Value 2: int16 0x6B, int32 0x6B, int64 0x6B, int128 0x6B
Value 99: int16 0x40A301, int32 0x41A301, int64 0x42A301, int128 0x43A301
Value 9999: int16 0x408F9C01, int32 0x418F9C01, int64 0x428F9C01, int128 0x438F9C01
Value -1: int16 0x68, int32 0x68, int64 0x68, int128 0x68
Value -2: int16 0x4041, int32 0x4141, int64 0x4241, int128 0x4341
Value -99: int16 0x40E201, int32 0x41E201, int64 0x42E201, int128 0x43E201
Value -9999: int16 0x40CE9C01, int32 0x41CE9C01, int64 0x42CE9C01, int128 0x43CE9C01
The Java type equivalents are short (int16), int (int32), long (int64) and BigInteger (int128). Since BigInteger can represent much larger values, it is not possible to encode all BigInteger values in the int128 form; values out of the int128 range are basically unsupported, and would result in an exception or would use a different encoding, such as a string encoding.
Table E7 (Cont.) Type IDs of Integer Types that can be Coerced into Other Types

-6 (0x45): float64
-7 (0x46): float128
-8 (0x47): decimal32
-9 (0x48): decimal64
-10 (0x49): decimal128
-12 (0x4B): octet
-14 (0x4D): char
In other words, if the recipient reads any of the above types from the stream and it encounters an encoded integer value, it automatically converts that value into the expected type. This capability allows a set of common (that is, small-magnitude) octet, character, integer, decimal and floating-point values to be encoded using the single-octet integer form (Type Identifiers in the range -41 to -64). For purposes of unsigned types, the integer value -1 is translated to 0xFF for the octet type, and to 0xFFFF for the char type. (In the case of the char type, this does unfortunately seem to imply a UTF-16 platform encoding; however, it does not violate any of the explicit requirements of the stream format.)
Decimal
There are three floating-point decimal types supported: decimal32, decimal64, and decimal128. If a type identifier for a decimal type is encountered in the stream, it is immediately followed by two packed integer values. The first integer value is the unscaled value, and the second is the scale. These values are equivalent to the parameters to the constructor of Java's BigDecimal class: java.math.BigDecimal(BigInteger unscaledVal, int scale). In addition to the coercion of integer values into decimal values supported as described in "Coercion of Integer Types" on page E-6, the constant type+value identifiers listed in Table E8 are used to indicate special values supported by IEEE 754r.
Table E8 Type Identifiers that can Indicate Decimal Values

-38 (0x65): floating-point:+infinity
-39 (0x66): floating-point:-infinity
-40 (0x67): floating-point:NaN
Java does not provide a standard (that is, portable) decimal type; rather, it has the awkward BigDecimal implementation that was intended originally for internal use in Java's cryptographic infrastructure. In Java, the decimal values for positive and negative infinity, and not-a-number (NaN), are not supported.
Floating Point
Three base-2 floating point types are supported: float32, float64, and float128. If a type identifier for a floating point type is encountered in the stream, it is immediately
followed by a fixed-length floating point value, whose binary form is defined by IEEE 754/IEEE 754r. IEEE 754 format is used to write floating point numbers to the stream, and IEEE 754r format is used for the float128 type. In addition to the coercion of integer values into floating point values as described in "Coercion of Integer Types" on page E-6, the constants in Table E9 are used to indicate special values supported by IEEE 754.
Table E9 Type Identifiers that can Indicate IEEE 754 Special Values

-38 (0x65): floating-point:+infinity
-39 (0x66): floating-point:-infinity
-40 (0x67): floating-point:NaN
Other special values defined by IEEE-754 are encoded using the full 32-bit, 64-bit or 128-bit format, and may not be supported on all platforms. Specifically, by not providing any means to differentiate among them, Java only supports one NaN value.
Boolean
If the type identifier for Boolean occurs in the stream, it is followed by an integer value, which represents the Boolean value false for the integer value of zero, or true for all other integer values. While it is possible to encode Boolean values as described in "Coercion of Integer Types" on page E-6, the only values for the Boolean type are true and false. As such, the only expected binary formats for Boolean values are the predefined (and compact) forms described in Table E10.
Table E10 Type Identifiers that can Indicate Boolean Values

-33 (0x60): boolean:false
-34 (0x61): boolean:true
Octet
If the type identifier for Octet occurs in the stream, it is followed by the octet value itself, which is by definition in the range 0 to 255 (0x00 to 0xFF). As described in "Coercion of Integer Types" on page E-6, the compact form of integer values can be used for Octet values, with the integer value -1 being translated as 0xFF. Table E11 lists the integer values that may be used as Octet values.
Table E11 Integer Values that may be Used for Octet Values

Value 0 (0x00): 0x69
Value 1 (0x01): 0x6A
Value 2 (0x02): 0x6B
Value 99 (0x63): 0x4B63
Value 254 (0xFE): 0x4BFE
Value 255 (0xFF): 0x68
Octet String
If the type identifier for Octet String occurs in the stream, it is followed by an Integer Value for the length n of the string, and then n octet values. An Octet String of zero length is encoded using the "string:zero-length" Type Identifier.
Char
If the type identifier for Char occurs in the stream, it is followed by a UTF-8 encoded character. As described in the section on "Coercion of Integer Types" on page E-6, the compact form of integer values may be used for Char values, with the integer value -1 being translated as 0xFFFF.
Note: POF optimizes the storage of String data by using only one byte for each character when possible. Custom POF character codecs (ASCII for example) are not required and do not result in better performance.
For example, a UTF-8 encoded character value can be read from an octet stream as follows:

public static char readChar(DataInput in) throws IOException {
    char ch;
    int b = in.readUnsignedByte();
    switch ((b & 0xF0) >>> 4) {
        case 0x0: case 0x1: case 0x2: case 0x3:
        case 0x4: case 0x5: case 0x6: case 0x7:
            // 1-byte format: 0xxx xxxx
            ch = (char) b;
            break;
        case 0xC: case 0xD: {
            // 2-byte format: 110x xxxx, 10xx xxxx
            int b2 = in.readUnsignedByte();
            if ((b2 & 0xC0) != 0x80) {
                throw new UTFDataFormatException();
            }
            ch = (char) (((b & 0x1F) << 6) | b2 & 0x3F);
            break;
        }
        case 0xE: {
            // 3-byte format: 1110 xxxx, 10xx xxxx, 10xx xxxx
            int n = in.readUnsignedShort();
            int b2 = n >>> 8;
            int b3 = n & 0xFF;
            if ((b2 & 0xC0) != 0x80 || (b3 & 0xC0) != 0x80) {
                throw new UTFDataFormatException();
            }
            ch = (char) (((b & 0x0F) << 12) | ((b2 & 0x3F) << 6) | b3 & 0x3F);
            break;
        }
        default:
            throw new UTFDataFormatException("illegal leading UTF byte: " + b);
    }
    return ch;
}
Char String
If the type identifier for Char String occurs in the stream, it is followed by an Integer Value for the length n of the UTF-8 representation string in octets, and then n octet values composing the UTF-8 encoding described above. Note that the format length-encodes the octet length, not the character length. A Char String of zero length is encoded using the string:zero-length Type Identifier. Table E12 illustrates the Char String formats.
Table E12 Values for Char String Formats

"" (zero-length string): 0x62 (or 0x4E00)
"ok": 0x4E026F6B
Date
Date values are passed using ISO8601 semantics. If the type identifier for Date occurs in the stream, it is followed by three Integer Values for the year, month and day, in the ranges as defined by ISO8601.
Year-Month Interval
If the type identifier for Year-Month Interval occurs in the stream, it is followed by two Integer Values for the number of years and the number of months in the interval.
Time
Time values are passed using ISO8601 semantics. If the type identifier for Time occurs in the stream, it is followed by five Integer Values, which may be followed by two more Integer Values. The first four Integer Values are the hour, minute, second and fractional second values. Fractional seconds are encoded in one of three ways:
- 0 indicates no fractional seconds.
- [1..999] indicates the number of milliseconds.
- [-1..-999999999] indicates the negated number of nanoseconds.
The fifth Integer Value is a time zone indicator, encoded in one of three ways:
- 0 indicates no time zone.
- 1 indicates Universal Coordinated Time (UTC).
- 2 indicates a time zone offset, which is followed by two more Integer Values for the hour offset and minute offset, as described by ISO8601.
The encoding for variable fractional seconds and time zones does add complexity to the parsing of a Time Value, but it provides for much more complete support of the ISO8601 standard and the variability in the precision of clocks, while achieving a high degree of binary compactness. While time values today tend to have no fractional encoding or only millisecond encoding, the trend over time is toward higher time resolution.
Time Interval
If the type identifier for Time Interval occurs in the stream, it is followed by four Integer Values for the number of hours, minutes, seconds and nanoseconds in the interval.
Date-Time
Date-Time values are passed using ISO8601 semantics. If the type identifier for Date-Time occurs in the stream, it is followed by eight or ten Integer Values, which correspond to the Integer Values that compose the Date and Time values.
Day-Time Interval
If the type identifier for Day-Time Interval occurs in the stream, it is followed by five Integer Values for the number of days, hours, minutes, seconds and nanoseconds in the interval.
Collections
A collection of values, such as a bag, a set, or a list, is encoded in a POF stream using the Collection type. Immediately following the Type Identifier, the stream contains the Collection Size, an Integer Value indicating the number of values in the Collection, which is greater than or equal to zero. Following the Collection Size is the first value in the Collection (if any), which is itself encoded as a Value. The values in the Collection are contiguous, and there are exactly n values in the stream, where n equals the Collection Size. If all the values in the Collection have the same type, then the Uniform Collection format is used. Immediately following the Type Identifier (uniform-collection), the uniform type of the values in the collection is written to the stream, followed by the Collection Size n as an Integer Value, followed by n values without their Type Identifiers. Note that values in a Uniform Collection cannot be assigned an identity, and that (as a side-effect of the explicit type encoding) an empty Uniform Collection has an explicit content type. Table E13 illustrates examples of Collection and Uniform Collection formats for several values.
Table E13 Collection and Uniform Collection Formats for Various Values

Empty collection: Collection format 0x63 (or 0x5500); Uniform Collection format n/a
1: Collection format 0x55016A; Uniform Collection format 0x56410101
1, 2, 3: Collection format 0x55036A6B6C; Uniform Collection format 0x564103010203
1, "ok": Collection format 0x55026A4E026F6B; Uniform Collection format n/a
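As a worked example of the rules above, the generic Collection encoding of 1, 2, 3 from Table E13 can be produced by hand with the packed-integer writer from Example E1; the small values 1, 2, and 3 are written directly as the combined identifiers int:1 through int:3 (-43 to -45):

import java.io.ByteArrayOutputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

public class CollectionEncodingExample {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream raw = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(raw);
        writeInt(out, -22); // Type Identifier: collection (0x55)
        writeInt(out, 3);   // Collection Size
        writeInt(out, -43); // element 1 as the combined identifier int:1 (0x6A)
        writeInt(out, -44); // element 2 as the combined identifier int:2 (0x6B)
        writeInt(out, -45); // element 3 as the combined identifier int:3 (0x6C)
        StringBuilder hex = new StringBuilder("0x");
        for (byte b : raw.toByteArray()) {
            hex.append(String.format("%02X", b));
        }
        System.out.println(hex); // prints 0x55036A6B6C
    }

    // Packed-integer writer, as shown in Example E1
    public static void writeInt(DataOutput out, int n) throws IOException {
        int b = 0;
        if (n < 0) {
            b = 0x40;
            n = ~n;
        }
        b |= (byte) (n & 0x3F);
        n >>>= 6;
        while (n != 0) {
            b |= 0x80;
            out.writeByte(b);
            b = (n & 0x7F);
            n >>>= 7;
        }
        out.writeByte(b);
    }
}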
Arrays
An indexed array of values is encoded in a POF stream using the Array type. Immediately following the Type Identifier, the stream contains the Array Size, an Integer Value indicating the number of elements in the Array, which must be greater than or equal to zero. Following the Array Size is the value of the first element of the Array (the zero index), if there is at least one element in the array, which is itself encoded as a Value. The values of the elements of the Array are contiguous, and there must be exactly n values in the stream, where n equals the Array Size. If all the values of the elements of the Array have the same type, then the Uniform Array format is used. Immediately following the Type Identifier (uniform-array), the uniform type of the values of the elements of the Array is written to the stream, followed by the Array Size n as an Integer Value, followed by n values without their Type Identifiers. Note that values in a Uniform Array cannot be assigned an identity, and that (as a side-effect of the explicit type encoding) an empty Uniform Array has an explicit array element type. Table E14 illustrates examples of Array and Uniform Array formats for several values.
Table E14 Array and Uniform Array Formats for Various Values

Empty array: Array format 0x63 (or 0x5700); Uniform Array format 0x63 (or 0x584100), assuming an element type of Int32
1: Array format 0x57016A; Uniform Array format 0x58410101
1, 2, 3: Array format 0x57036A6B6C; Uniform Array format 0x584103010203
1, "ok": Array format 0x57026A4E026F6B; Uniform Array format n/a
Sparse Arrays
For arrays whose element values are sparse, the Sparse Array format allows indexes to be explicitly encoded, implying that any missing indexes have a default value. The default value is false for the Boolean type, zero for all numeric, octet and char types, and null for all reference types. The format for the Sparse Array is the Type Identifier (sparse-array), followed by the Array Size n as an Integer Value, followed by not more than n index/value pairs, each of which is composed of an array index encoded as an Integer Value i (0 <= i < n) whose value is greater than the previous element's array index, and an element value encoded as a Value; the Sparse Array is finally terminated with an illegal index of -1. If all the values of the elements of the Sparse Array have the same type, then the Uniform Sparse Array format is used. Immediately following the Type Identifier (uniform-sparse-array), the uniform type of the values of the elements of the Sparse Array is written to the stream, followed by the Array Size n as an Integer Value, followed by not more than n index/value pairs, each of which is composed of an array index encoded as an Integer Value i (0 <= i < n) whose value is greater than the previous element's array index, and an element value encoded as a Value without a Type Identifier; the Uniform Sparse Array is finally terminated with an illegal index of -1. Note that values in a Uniform Sparse Array cannot be assigned an identity, and that (as a side-effect of the explicit type encoding) an empty Uniform Sparse Array has an explicit array element type. Table E15 illustrates examples of Sparse Array and Uniform Sparse Array formats for several values.
Table E15 Sparse Array and Uniform Sparse Array Formats for Various Values

Empty array: Sparse Array format 0x63 (or 0x590040); Uniform Sparse Array format 0x63 (or 0x5A410040), assuming an element type of Int32
1: Sparse Array format 0x5901006A40; Uniform Sparse Array format 0x5A4101000140
1, 2, 3: Sparse Array format 0x5903006A016B026C40; Uniform Sparse Array format 0x5A410300010102020340
1,,,,5,,,,9: Sparse Array format 0x5909006A046E087240; Uniform Sparse Array format 0x5A410900010405080940
1,,,,"ok": Sparse Array format 0x5905006A044E026F6B40; Uniform Sparse Array format n/a
A map of key/value pairs is encoded in a POF stream in one of three ways:

- The generic map encoding is a sequence of keys and values.
- The uniform-keys-map encoding is a sequence of keys of a uniform type and their corresponding values.
- The uniform-map encoding is a sequence of keys of a uniform type and their corresponding values of a uniform type.
The format for the Key-Value Map is the Type Identifier (map), followed by the Key-Value Map Size n as an Integer Value, followed by n key/value pairs, each of which is composed of a key encoded as Value, and a corresponding value encoded as a Value. Table E16 illustrates several examples of key/value pairs and their corresponding binary format.
Table E16 Binary Formats for Key/Value Pairs

Empty map: 0x63 (or 0x5B00)
1="ok": 0x5B016A4E026F6B
1="ok", 2="no": 0x5B026A4E026F6B6B4E026E6F
If all of the keys of the Key-Value Map are of a uniform type, then the encoding uses a more compact format, starting with the Type Identifier (uniform-keys-map), followed by the Type Identifier for the uniform type of the keys of the Key-Value Map, followed by the Key-Value Map Size n as an Integer Value, followed by n key/value pairs, each of which is composed of a key encoded as a Value without a Type Identifier, and a corresponding value encoded as a Value. Table E17 illustrates several examples of the binary formats for Key/Value pairs where the Keys are of uniform type.
Table E17 Binary Formats for Key/Value Pairs where Keys are of Uniform Type

Values            Binary Format
                  0x63 (or 0x5C4100)
1="ok"            0x5C4101014E026F6B
1="ok", 2="no"    0x5C4102014E026F6B024E026E6F
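The same decomposition applied to the Table E17 entry 1="ok", 2="no" shows the effect of the uniform key type: the Int32 Type Identifier (0x41) is written once, after which each key appears without a Type Identifier. Again, this is an illustrative sketch rather than Coherence code.

/*
 * Illustrative decomposition of the Table E17 entry 1="ok", 2="no"
 * (0x5C4102014E026F6B024E026E6F).
 */
public class UniformKeysMapLayout {
    static final byte[] MAP_UNIFORM_KEYS = {
        0x5C,                    // Type Identifier: uniform-keys-map
        0x41,                    // uniform key type: Int32
        0x02,                    // Key-Value Map Size n = 2
        0x01,                    // key 1, written without a Type Identifier
        0x4E, 0x02, 0x6F, 0x6B,  // value "ok", encoded as a full Value
        0x02,                    // key 2, written without a Type Identifier
        0x4E, 0x02, 0x6E, 0x6F   // value "no", encoded as a full Value
    };
}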
If all of the keys of the Key-Value Map are of a uniform type, and all the corresponding values of the map are also of a uniform type, then the encoding uses a more compact format, starting with the Type Identifier (uniform-map), followed by the Type Identifier for the uniform type of the keys of the Key-Value Map, followed by the Type Identifier for the uniform type of the values of the Key-Value Map, followed by the Key-Value Map Size n as an Integer Value, followed by n key/value pairs, each of which is composed of a key encoded as a Value without a Type Identifier, and a corresponding value encoded as a Value without a Type Identifier. Table E18 illustrates several examples of the binary formats for Key/Value pairs where the Keys and Values are of uniform type.
Table E18 Binary Formats for Key/Value Pairs where Keys and Values are of Uniform Type

Values            Binary Format
                  0x63 (or 0x5D414E00)
1="ok"            0x5D414E0101026F6B
1="ok", 2="no"    0x5D414E0201026F6B02026E6F
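From the application's point of view, the choice among these three map encodings is usually made when a user type is serialized. The following sketch assumes that the PofWriter.writeMap overloads which accept explicit key (and value) classes select the uniform-keys-map and uniform-map formats; the class name, field, and property indexes are illustrative only, and the same map is written three times solely to contrast the formats.

import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
import com.tangosol.io.pof.PortableObject;

/*
 * Sketch only: requesting each of the three map encodings for the same
 * Map<Integer, String>. A real type would pick a single encoding.
 */
public class StatusCodes implements PortableObject {
    private Map<Integer, String> m_map = new HashMap<>();

    @Override
    public void writeExternal(PofWriter out) throws IOException {
        out.writeMap(0, m_map);                               // generic map format
        out.writeMap(1, m_map, Integer.class);                // uniform-keys-map: all keys are Int32
        out.writeMap(2, m_map, Integer.class, String.class);  // uniform-map: uniform keys and values
    }

    @Override
    public void readExternal(PofReader in) throws IOException {
        m_map = in.readMap(0, new HashMap<Integer, String>());
        // properties 1 and 2 would be read the same way; omitted for brevity
    }
}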
Identity
If the type identifier for Identity occurs in the stream, it is followed by an Integer Value, which is the Identity. Following the Identity is the value that is being identified, which is itself encoded as a Value. Any value within a POF stream that occurs multiple times is labeled with an Identity, and subsequent instances of that value within the same POF stream are replaced with a Reference. For platforms that support "by reference" semantics, the Identity represents a serialized form of the actual object identity. An Identity is an Integer Value that is greater than or equal to zero. A value within the POF stream has at most one Identity. Values within a uniform data structure cannot be assigned an identity.
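The following conceptual sketch (not the Coherence implementation) shows the kind of bookkeeping a POF writer needs in order to honor these rules: a value is assigned an Identity the first time it is written, and later encounters of the same value can then be replaced with a Reference.

import java.util.IdentityHashMap;
import java.util.Map;

/*
 * Conceptual sketch only: tracks which values have already been written so
 * that repeated values can be labeled with an Identity and then referenced.
 * Identities are non-negative integers assigned in order of first use.
 */
public class IdentityRegistry {
    private final Map<Object, Integer> m_ids = new IdentityHashMap<>();
    private int m_nextId;

    /** Returns the existing Identity for o, or -1 if o has not been seen yet. */
    public int lookup(Object o) {
        Integer id = m_ids.get(o);
        return id == null ? -1 : id;
    }

    /** Assigns and returns a new Identity (>= 0) for a value seen for the first time. */
    public int register(Object o) {
        int id = m_nextId++;
        m_ids.put(o, id);
        return id;
    }
}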
Reference
A Reference is a pointer to an Identity that has been encountered inside the current POF stream, or a null pointer. For platforms that support "by reference" semantics, the reference in the POF stream becomes a reference in the realized (deserialized) object, and a null reference in the POF stream becomes a null reference in the realized object. For platforms that do not support "by reference" semantics, and for cases in which a null reference is encountered in the POF stream for a non-reference value (for example, a primitive property in Java), the default value for the type of value is used. Table E19 illustrates examples of the binary formats for several "by reference" values.
Table E19 Binary Formats for "By Reference" Semantics

Value      Binary Format
Id #1      0x5F01
Id #350    0x5F9E05
null       0x60
Support for forward and outer references is not required by POF. In POF, both the identity that is referenced and the value that is being referenced by the identity must already have occurred within the POF stream. In the first case, a reference cannot be made to an identity that has not yet been encountered; in the second case, a reference cannot be made from within a complex value (such as a collection or a user type) to that complex value itself.
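On the reading side, resolving a Reference only requires remembering the values that were labeled with an Identity earlier in the stream. The following conceptual sketch, again not the Coherence implementation, models that lookup, including the null reference case and the rule that forward references do not occur.

import java.util.HashMap;
import java.util.Map;

/*
 * Conceptual sketch only: values that were labeled with an Identity are
 * remembered as they are deserialized; a later Reference to that Identity
 * returns the same object, and a null reference returns null (or the
 * default value where the target is a primitive).
 */
public class ReferenceResolver {
    private final Map<Integer, Object> m_identified = new HashMap<>();

    /** Remember a value that was labeled with the given Identity. */
    public void onIdentity(int id, Object value) {
        m_identified.put(id, value);
    }

    /** Resolve a Reference; a negative id models the null reference (0x60). */
    public Object resolve(int id) {
        if (id < 0) {
            return null;
        }
        Object value = m_identified.get(id);
        if (value == null) {
            // forward and outer references are not supported by POF
            throw new IllegalStateException("reference to unknown identity " + id);
        }
        return value;
    }
}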
User Types

The binary format for a User Type is very similar to that of a Sparse Array; conceptually, a User Type can be considered a Sparse Array of property values. The format for User Types is the Type Identifier (an Integer Value greater than or equal to zero), followed by the Version Identifier (an Integer Value greater than or equal to zero), followed by index/value pairs, each of which is composed of a Property Index encoded as an Integer Value i (0 <= i) whose value is greater than the previous Property Index, and a Property Value encoded as a Value; the User Type is finally terminated with an illegal Property Index of -1. As with the Sparse Array, any property that is not included as part of the User Type encoding is assumed to have a default value. The default value is false for the Boolean type, zero for all numeric, octet and char types, and null for all reference types.
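In the Java API, the Property Index described above is the index that a PortableObject passes to each PofWriter and PofReader call. The following minimal sketch uses a hypothetical Person type (the class name, fields, and indexes are illustrative); registration of the user type identifier in the POF configuration is omitted.

import java.io.IOException;

import com.tangosol.io.pof.PofReader;
import com.tangosol.io.pof.PofWriter;
import com.tangosol.io.pof.PortableObject;

/*
 * Minimal sketch: the property indexes passed to the PofWriter/PofReader
 * calls (0 and 1 here) are the Property Index values that appear in the
 * stream; a property left unwritten is read back as its default value
 * (zero, false, or null), as described above.
 */
public class Person implements PortableObject {
    private int    m_age;
    private String m_name;

    public Person() {
        // a public no-argument constructor is required for deserialization
    }

    @Override
    public void writeExternal(PofWriter out) throws IOException {
        out.writeInt(0, m_age);      // Property Index 0
        out.writeString(1, m_name);  // Property Index 1
    }

    @Override
    public void readExternal(PofReader in) throws IOException {
        m_age  = in.readInt(0);
        m_name = in.readString(1);
    }
}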