WebLogic 11g Tuning
Performance and Tuning for Oracle WebLogic Server 11g Release 1 (10.3.6)
E13814-06
November 2011

This document is for people who monitor performance and tune the components in a WebLogic Server environment.
Oracle Fusion Middleware Performance and Tuning for Oracle WebLogic Server, 11g Release 1 (10.3.6)

E13814-06

Copyright 2007, 2011, Oracle and/or its affiliates. All rights reserved.

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT RIGHTS Programs, software, databases, and related documentation and technical data delivered to U.S. Government customers are "commercial computer software" or "commercial technical data" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, the use, duplication, disclosure, modification, and adaptation shall be subject to the restrictions and license terms set forth in the applicable Government contract, and, to the extent applicable by the terms of the Government contract, the additional rights set forth in FAR 52.227-19, Commercial Computer Software License (December 2007). Oracle America, Inc., 500 Oracle Parkway, Redwood City, CA 94065.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.
Contents
Preface
Documentation Accessibility
Conventions
7.4.7 Tuning the Stuck Thread Detection Behavior
7.5 Tuning Network I/O
7.5.1 Tuning Muxers
7.5.1.1 Java Muxer
7.5.1.2 Native Muxers
7.5.1.3 Non-Blocking IO Muxer
7.5.2 Which Platforms Have Performance Packs?
7.5.3 Enabling Performance Packs
7.5.4 Changing the Number of Available Socket Readers
7.5.5 Network Channels
7.5.6 Reducing the Potential for Denial of Service Attacks
7.5.6.1 Tuning Message Size
7.5.6.2 Tuning Complete Message Timeout
7.5.6.3 Tuning Number of File Descriptors
7.5.7 Tune the Chunk Parameters
7.5.8 Tuning Connection Backlog Buffering
7.5.9 Tuning Cached Connections
7.6 Setting Your Compiler Options
7.6.1 Compiling EJB Classes
7.6.2 Setting JSP Compiler Options
7.6.2.1 Precompile JSPs
7.6.2.2 Optimize Java Expressions
7.7 Using WebLogic Server Clusters to Improve Performance
7.7.1 Scalability and High Availability
7.7.2 How to Ensure Scalability for WebLogic Clusters
7.7.3 Database Bottlenecks
7.7.4 Session Replication
7.7.5 Asynchronous HTTP Session Replication
7.7.5.1 Asynchronous HTTP Session Replication using a Secondary Server
7.7.5.2 Asynchronous HTTP Session Replication using a Database
7.7.6 Invalidation of Entity EJBs
7.7.7 Invalidation of HTTP sessions
7.7.8 JNDI Binding, Unbinding and Rebinding
7.7.9 Running Multiple Server Instances on Multi-Core Machines
7.8 Monitoring a WebLogic Server Domain
7.8.1 Using the Administration Console to Monitor WebLogic Server
7.8.2 Using the WebLogic Diagnostic Framework
7.8.3 Using JMX to Monitor WebLogic Server
7.8.4 Using WLST to Monitor WebLogic Server
7.8.5 Resources to Monitor WebLogic Server
7.8.6 Third-Party Tools to Monitor WebLogic Server
7.9 Tuning Class and Resource Loading
7.9.1 Filtering Loader Mechanism
7.9.2 Class Caching
8.1.1 Using the Default Persistent Store
8.1.2 Using Custom File Stores and JDBC Stores
8.1.3 Using a JDBC TLOG Store
8.1.4 Using JMS Paging Stores
8.1.4.1 Using Flash Storage to Page JMS Messages
8.1.5 Using Diagnostic Stores
8.2 Best Practices When Using Persistent Stores
8.3 Tuning JDBC Stores
8.4 Tuning File Stores
8.4.1 Basic Tuning Information
8.4.2 Tuning a File Store Direct-Write-With-Cache Policy
8.4.2.1 Using Flash Storage to Increase Performance
8.4.2.2 Additional Considerations
8.4.3 Tuning the File Store Direct-Write Policy
8.4.4 Tuning the File Store Block Size
8.4.4.1 Setting the Block Size for a File Store
8.4.4.2 Determining the File Store Block Size
8.4.4.3 Determining the File System Block Size
8.4.4.4 Converting a Store with Pre-existing Files
8.5 Using a Network File System
8.5.1 Configuring Synchronous Write Policies
8.5.2 Test Server Restart Behavior
8.5.3 Handling NFS Locking Errors
8.5.3.1 Solution 1 - Copying Data Files to Remove NFS Locks
8.5.3.2 Solution 2 - Disabling File Locks in WebLogic Server File Stores
8.5.3.2.1 Disabling File Locking for the Default File Store
8.5.3.2.2 Disabling File Locking for a Custom File Store
8.5.3.2.3 Disabling File Locking for a JMS Paging File Store
8.5.3.2.4 Disabling File Locking for a Diagnostics File Store
9 Database Tuning
9.1 General Suggestions
9.2 Database-Specific Tuning
9.2.1 Oracle
9.2.2 Microsoft SQL Server
9.2.3 Sybase
10.3.1 Tuning the Stateless Session Bean Pool
10.3.2 Tuning the MDB Pool
10.3.3 Tuning the Entity Bean Pool
10.4 CMP Entity Bean Tuning
10.4.1 Use Eager Relationship Caching
10.4.1.1 Using Inner Joins
10.4.2 Use JDBC Batch Operations
10.4.3 Tuned Updates
10.4.4 Using Field Groups
10.4.5 include-updates
10.4.6 call-by-reference
10.4.7 Bean-level Pessimistic Locking
10.4.8 Concurrency Strategy
10.5 Tuning In Response to Monitoring Statistics
10.5.1 Cache Miss Ratio
10.5.2 Lock Waiter Ratio
10.5.3 Lock Timeout Ratio
10.5.4 Pool Miss Ratio
10.5.5 Destroyed Bean Ratio
10.5.6 Pool Timeout Ratio
10.5.7 Transaction Rollback Ratio
10.5.8 Transaction Timeout Ratio
10.6 Using the JDT Compiler
12.8 Advanced Configurations for Oracle Drivers and Databases
12.9 Use Best Design Practices
13 Tuning Transactions
13.1 Logging Last Resource Transaction Optimization
13.1.1 LLR Tuning Guidelines
13.2 Read-only, One-Phase Commit Optimizations
14.15.2 Using UOO and Distributed Destinations
14.15.3 Migrating Old Applications to Use UOO
14.16 Using One-Way Message Sends
14.16.1 Configure One-Way Sends On a Connection Factory
14.16.2 One-Way Send Support In a Cluster With a Single Destination
14.16.3 One-Way Send Support In a Cluster With Multiple Destinations
14.16.4 When One-Way Sends Are Not Supported
14.16.5 Different Client and Destination Hosts
14.16.6 XA Enabled On Client's Host Connection Factory
14.16.7 Higher QOS Detected
14.16.8 Destination Quota Exceeded
14.16.9 Change In Server Security Policy
14.16.10 Change In JMS Server or Destination Status
14.16.11 Looking Up Logical Distributed Destination Name
14.16.12 Hardware Failure
14.16.13 One-Way Send QOS Guidelines
14.17 Tuning the Messaging Performance Preference Option
14.17.1 Messaging Performance Configuration Parameters
14.17.2 Compatibility With the Asynchronous Message Pipeline
14.18 Client-side Thread Pools
14.19 Best Practices for JMS .NET Client Applications
18.1.1 Disable Page Checks
18.1.2 Use Custom JSP Tags
18.1.3 Precompile JSPs
18.1.4 Disable Access Logging
18.1.5 Use HTML Template Compression
18.1.6 Use Service Level Agreements
18.1.7 Related Reading
18.2 Session Management
18.2.1 Managing Session Persistence
18.2.2 Minimizing Sessions
18.2.3 Aggregating Session Data
18.3 Pub-Sub Tuning Guidelines
B Capacity Planning
B.1 Capacity Planning Factors
B.1.1 Programmatic and Web-based Clients
B.1.2 RMI and Server Traffic
B.1.3 SSL Connections and Performance
B.1.4 WebLogic Server Process Load
B.1.5 Database Server Capacity and User Storage Requirements
B.1.6 Concurrent Sessions
B.1.7 Network Load
B.1.8 Clustered Configurations
B.1.9 Server Migration
B.1.10 Application Design
B.2 Assessing Your Application Performance Objectives
B.3 Hardware Tuning
B.3.1 Benchmarks for Evaluating Performance
B.3.2 Supported Platforms
B.4 Network Performance
B.4.1 Determining Network Bandwidth
B.5 Related Information
Preface
This preface describes the document accessibility features and conventions used in this guide, Performance and Tuning for Oracle WebLogic Server.
Documentation Accessibility
For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support

Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.
Conventions
The following text conventions are used in this document:
boldface: Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic: Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace: Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
1 Introduction and Roadmap
This chapter describes the contents and organization of this guide, Performance and Tuning for Oracle WebLogic Server.
Section 1.1, "Document Scope and Audience"
Section 1.2, "Guide to this Document"
Section 1.3, "Performance Features of this Release"
This chapter, Chapter 1, "Introduction and Roadmap," introduces the organization of this guide.
Chapter 2, "Top Tuning Recommendations for WebLogic Server," discusses the most frequently recommended steps for achieving optimal performance tuning for applications running on WebLogic Server.
Chapter 3, "Performance Tuning Roadmap," provides a roadmap to help tune your application environment to optimize performance.
Chapter 4, "Operating System Tuning," discusses operating system issues.
Chapter 5, "Tuning Java Virtual Machines (JVMs)," discusses JVM tuning considerations.
Chapter 6, "Tuning WebLogic Diagnostic Framework and JRockit Flight Recorder Integration," provides information on how WebLogic Diagnostic Framework (WLDF) works with JRockit Mission Control Flight Recorder.
Chapter 7, "Tuning WebLogic Server," contains information on how to tune WebLogic Server to match your application needs.
Chapter 8, "Tuning the WebLogic Persistent Store," provides information on how to tune a persistent store.
Chapter 9, "Database Tuning," provides information on how to tune your database.
Chapter 10, "Tuning WebLogic Server EJBs," provides information on how to tune applications that use EJBs.
Chapter 11, "Tuning Message-Driven Beans," provides information on how to tune Message-Driven beans.
Chapter 12, "Tuning Data Sources," provides information on how to tune JDBC applications.
Chapter 13, "Tuning Transactions," provides information on how to tune Logging Last Resource transaction optimization.
Chapter 14, "Tuning WebLogic JMS," provides information on how to tune applications that use WebLogic JMS.
Chapter 15, "Tuning WebLogic JMS Store-and-Forward," provides information on how to tune applications that use JMS Store-and-Forward.
Chapter 16, "Tuning WebLogic Message Bridge," provides information on how to tune applications that use the WebLogic Message Bridge.
Chapter 17, "Tuning Resource Adapters," provides information on how to tune applications that use resource adapters.
Chapter 18, "Tuning Web Applications," provides best practices for tuning WebLogic Web applications and application resources.
Chapter 19, "Tuning Web Services," provides information on how to tune applications that use Web services.
Chapter 20, "Tuning WebLogic Tuxedo Connector," provides information on how to tune applications that use WebLogic Tuxedo Connector.
Appendix A, "Using the WebLogic 8.1 Thread Pool Model," provides information on using execute queues.
Appendix B, "Capacity Planning," provides an introduction to capacity planning.
Section 8.4.2, "Tuning a File Store Direct-Write-With-Cache Policy"
Section 8.5, "Using a Network File System"
Chapter 6, "Tuning WebLogic Diagnostic Framework and JRockit Flight Recorder Integration"
2 Top Tuning Recommendations for WebLogic Server
Section 2.1, "Tune Pool Sizes"
Section 2.2, "Use the Prepared Statement Cache"
Section 2.3, "Use Logging Last Resource Optimization"
Section 2.4, "Tune Connection Backlog Buffering"
Section 2.5, "Tune the Chunk Size"
Section 2.6, "Use Optimistic or Read-only Concurrency"
Section 2.7, "Use Local Interfaces"
Section 2.8, "Use eager-relationship-caching"
Section 2.9, "Tune HTTP Sessions"
Section 2.10, "Tune Messaging Applications"
For WebLogic Server releases 9.0 and higher: A server instance uses a self-tuned thread pool. The best way to determine the appropriate pool size is to monitor the pool's current size, shrink counts, grow counts, and wait counts. See Section 7.4, "Thread Management". Tuning MDBs is a special case; see Chapter 11, "Tuning Message-Driven Beans".
For releases prior to WebLogic Server 9.0: In general, the number of connections should equal the number of threads that are expected to be required to process the requests handled by the pool. The most effective way to ensure the right pool size is to monitor it and make sure it does not shrink and grow. See Section A.1, "How to Enable the WebLogic 8.1 Thread Pool Model".
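The monitoring advice above can be scripted. The following is a minimal sketch, not WebLogic API code: the counter names and thresholds are assumptions for illustration, standing in for the pool metrics (current size, grow, shrink, and wait counts) that you would read from the server's runtime statistics.

```python
# Hypothetical classification of self-tuning pool behavior from monitored
# counters collected over one sampling interval. Metric names and the
# decision rules are illustrative assumptions, not WebLogic attributes.

def pool_health(current_size, grow_count, shrink_count, wait_count):
    """Classify pool sizing from one interval of monitored counters."""
    if wait_count > 0 and grow_count > 0:
        return "undersized"   # requests waited while the pool was growing
    if shrink_count > 0 and wait_count == 0:
        return "oversized"    # pool shrank and no request ever waited
    return "stable"

print(pool_health(current_size=25, grow_count=4, shrink_count=0, wait_count=12))
print(pool_health(current_size=40, grow_count=0, shrink_count=6, wait_count=0))
print(pool_health(current_size=30, grow_count=0, shrink_count=0, wait_count=0))
```

A "stable" reading over sustained load suggests the self-tuned pool has converged; repeated "undersized" readings are the signal to investigate further.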
Optimistic-concurrency with cache-between-transactions works best with read-mostly beans. Using verify-reads in combination with these provides high data consistency guarantees with the performance gain of caching. See Chapter 10, "Tuning WebLogic Server EJBs". Query-caching is a WebLogic Server 9.0 feature that allows the EJB container to cache results for arbitrary non-primary-key finders defined on read-only EJBs. All of these parameters can be set in the application/module deployment descriptors. See Section 10.4.8, "Concurrency Strategy".
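As a hedged illustration, the concurrency settings above are typically expressed in the weblogic-ejb-jar.xml deployment descriptor along the lines of the fragment below. The bean name is hypothetical, and element placement should be verified against the descriptor schema for your release.

```xml
<weblogic-enterprise-bean>
  <!-- AccountBean is a hypothetical bean name used for illustration -->
  <ejb-name>AccountBean</ejb-name>
  <entity-descriptor>
    <entity-cache>
      <concurrency-strategy>Optimistic</concurrency-strategy>
      <cache-between-transactions>true</cache-between-transactions>
    </entity-cache>
  </entity-descriptor>
</weblogic-enterprise-bean>
```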
In releases prior to WebLogic Server 8.1, call-by-reference is turned on by default. For releases of WebLogic Server 8.1 and higher, call-by-reference is turned off by default. Older applications migrating to WebLogic Server 8.1 and higher that do not explicitly turn on call-by-reference may experience a drop in performance.
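Explicitly turning call-by-reference back on is done per bean in weblogic-ejb-jar.xml; a hedged sketch follows (hypothetical bean name; verify the element against the descriptor schema for your release).

```xml
<weblogic-enterprise-bean>
  <!-- OrderBean is a hypothetical bean name used for illustration -->
  <ejb-name>OrderBean</ejb-name>
  <enable-call-by-reference>True</enable-call-by-reference>
</weblogic-enterprise-bean>
```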
Chapter 8, "Tuning the WebLogic Persistent Store"
Chapter 14, "Tuning WebLogic JMS"
Chapter 15, "Tuning WebLogic JMS Store-and-Forward"
Chapter 16, "Tuning WebLogic Message Bridge"
3 Performance Tuning Roadmap
This chapter provides a tuning roadmap and tuning tips you can use to improve system performance:
Section 3.1.1, "Understand Your Performance Objectives"
Section 3.1.2, "Measure Your Performance Metrics"
Section 3.1.5, "Locate Bottlenecks in Your System"
Section 3.1.6, "Minimize Impact of Bottlenecks"
Section 3.1.12, "Achieve Performance Objectives"
The anticipated number of users.
The number and size of requests.
The amount of data and its consistency.
Determining your target CPU utilization. Your target CPU usage should not be 100%; you should determine a target CPU utilization based on your application needs, including CPU cycles for peak usage. If your CPU utilization is optimized at 100% during normal load hours, you have no capacity to handle a peak load. In applications that are latency sensitive, where maintaining a fast response time is important, high CPU usage (approaching 100% utilization) can increase response times while throughput stays constant or even increases, because of work queuing up in the server. For such applications, a 70% - 80% CPU utilization is recommended. A good target for non-latency sensitive applications is about 90%.
The configuration of hardware and software such as CPU type, disk size vs. disk speed, sufficient memory. There is no single formula for determining your hardware requirements. The process of determining what type of hardware and software configuration is required to meet application needs adequately is called capacity planning. Capacity planning requires assessment of your system performance goals and an understanding of your application. Capacity planning for server hardware should focus on maximum performance requirements. See Appendix B, "Capacity Planning."
The ability to interoperate between domains, use legacy systems, support legacy data. Development, implementation, and maintenance costs.
You will use this information to set realistic performance objectives for your application environment, such as response times, throughput, and load on specific hardware.
Section 3.1.3, "Monitor Disk and CPU Utilization" Section 3.1.4, "Monitor Data Transfers Across the Network"
Application server (disk and CPU utilization) Database server (disk and CPU utilization)
The goal is to get to a point where the application server achieves your target CPU utilization. If you find that the application server CPU is underutilized, confirm whether the database is bottlenecked. If the database CPU is 100 percent utilized, then check your application's SQL query plans. For example, are your SQL calls using indexes or doing linear searches? Also, confirm whether there are too many ORDER BY clauses used in your application that are affecting the database CPU. See Chapter 4, "Operating System Tuning". If you discover that the database disk is the bottleneck (for example, if the disk is 100 percent utilized), try moving to faster disks or to a RAID (redundant array of independent disks) configuration, assuming the application is not doing more writes than required. Once you know the database server is not the bottleneck, determine whether the application server disk is the bottleneck. Some of the disk bottlenecks for application server disks are:
Persistent Store writes Transaction logging (tlogs) HTTP logging Server logging
The disk I/O on an application server can be optimized using faster disks or RAID, disabling synchronous JMS writes, using JTA direct writes for tlogs, or increasing the HTTP log buffer.
Even if you find that the CPU is 100 percent utilized, you should profile your application for performance improvements.
Section 3.1.7, "Tune Your Application" Section 3.1.8, "Tune your DB" Section 3.1.9, "Tune WebLogic Server Performance Parameters" Section 3.1.10, "Tune Your JVM" Section 3.1.11, "Tune the Operating System" Section 8, "Tuning the WebLogic Persistent Store"
Chapter 10, "Tuning WebLogic Server EJBs" Chapter 11, "Tuning Message-Driven Beans"
Tuning Tips
Chapter 12, "Tuning Data Sources" Chapter 13, "Tuning Transactions" Chapter 14, "Tuning WebLogic JMS" Chapter 15, "Tuning WebLogic JMS Store-and-Forward" Chapter 16, "Tuning WebLogic Message Bridge" Chapter 17, "Tuning Resource Adapters" Chapter 18, "Tuning Web Applications" Chapter 19, "Tuning Web Services" Chapter 20, "Tuning WebLogic Tuxedo Connector"
Performance tuning is not a silver bullet. Simply put, good system performance depends on: good design, good implementation, defined performance objectives, and performance tuning.
Performance tuning is an ongoing process. Implement mechanisms that provide performance metrics which you can compare against your performance objectives, allowing you to schedule a tuning phase before your system fails. The objective is to meet your performance objectives, not to eliminate all bottlenecks. Resources within a system are finite. By definition, at least one resource (CPU, memory, or I/O) will be a bottleneck in the system. Tuning allows you to minimize the impact of bottlenecks on your performance objectives. Design your applications with performance in mind: Keep things simple - avoid inappropriate use of published patterns. Apply Java EE performance patterns. Optimize your Java code.
4
This chapter describes how to tune your operating system. Proper OS tuning improves system performance by preventing the occurrence of error conditions. Operating system error conditions always degrade performance. Typically, most error conditions are related to TCP tuning parameters and are caused by the operating system's failure to release old sockets from a close_wait call. Common errors are "connection refused" and "too many open files" on the server side, and "address in use: connect" on the client side. In most cases, these errors can be prevented by adjusting the TCP wait_time value and the TCP queue size. Although users often find the need to make adjustments when using tunneling, OS tuning may be necessary for any protocol under sufficiently heavy loads. Tune your operating system according to your operating system documentation. For Windows platforms, the default settings are usually sufficient. However, the Solaris and Linux platforms usually need to be tuned appropriately.
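The stale-socket symptoms described above can be spotted quickly by counting connections stuck in TIME_WAIT or CLOSE_WAIT. The sketch below assumes netstat output that prints the connection state in the last field (as on Linux and Solaris); adjust it for your platform.

```shell
# Count TCP connections lingering in TIME_WAIT or CLOSE_WAIT.
# Assumes the state is the last whitespace-separated field of each
# netstat line, which holds on Linux and Solaris but not everywhere.
netstat -an 2>/dev/null | awk '$NF == "TIME_WAIT" || $NF == "CLOSE_WAIT"' | wc -l
```

A count that grows steadily under load suggests the TCP wait interval or queue sizes need tuning.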
5
This chapter describes how to configure JVM tuning options for WebLogic Server. The Java virtual machine (JVM) is a virtual "execution engine" instance that executes the bytecodes in Java class files on a microprocessor. How you tune your JVM affects the performance of WebLogic Server and your applications.
Section 5.1, "JVM Tuning Considerations" Section 5.2, "Which JVM for Your System?" Section 5.3, "Garbage Collection" Section 5.4, "Enable Spinning for IA32 Platforms"
Table 5-1 (Cont.) General JVM Tuning Considerations
Tuning Factor: UNIX threading models. Choices you make about Solaris threading models can have a large impact on the performance of your JVM on Solaris. You can choose from multiple threading models and different methods of synchronization within the model, but this varies from JVM to JVM. See "Performance Documentation For the Java Hotspot Virtual Machine: Threading" at http://java.sun.com/docs/hotspot/threads/threads.html.
Section 5.3.1, "VM Heap Size and Garbage Collection" Section 5.3.2, "Choosing a Garbage Collection Scheme" Section 5.3.3, "Using Verbose Garbage Collection to Determine Heap Size" Section 5.3.4, "Specifying Heap Size Values" Section 5.3.8, "Automatically Logging Low Memory Conditions" Section 5.3.9, "Manually Requesting Garbage Collection" Section 5.3.10, "Requesting Thread Stacks"
Garbage Collection
When an object can no longer be reached from any pointer in the running program, it is considered "garbage" and ready for collection. A best practice is to tune the time spent doing garbage collection to within 5% of execution time. The JVM heap size determines how often and how long the VM spends collecting garbage. An acceptable rate for garbage collection is application-specific and should be adjusted after analyzing the actual time and frequency of garbage collections. If you set a large heap size, full garbage collection is slower, but it occurs less frequently. If you set a small heap size, full garbage collection is faster, but occurs more frequently. The goal of tuning your heap size is to minimize the time that your JVM spends doing garbage collection while maximizing the number of clients that WebLogic Server can handle at a given time. To ensure maximum performance during benchmarking, you might set high heap size values to ensure that garbage collection does not occur during the entire run of the benchmark. You might see the following Java error if you are running out of heap space:
java.lang.OutOfMemoryError <<no stack trace available>> java.lang.OutOfMemoryError <<no stack trace available>> Exception in thread "main"
To modify heap space values, see Section 5.3.4, "Specifying Heap Size Values". To configure WebLogic Server to detect automatically when you are running out of heap space and to address low memory conditions in the server, see Section 5.3.8, "Automatically Logging Low Memory Conditions" and Section 5.3.4, "Specifying Heap Size Values".
For an overview of the garbage collection schemes available with Sun's HotSpot VM, see "Tuning Garbage Collection with the 5.0 Java Virtual Machine" at http://www.oracle.com/technetwork/java/gc-tuning-5-138395.html. For a comprehensive explanation of the collection schemes available, see "Improving Java Application Performance and Scalability by Reducing Garbage Collection Times and Sizing Memory Using JDK 1.4.1" at http://www.oracle.com/technetwork/java/index-jsp-138820.html. For a discussion of the garbage collection schemes available with the JRockit JDK, see "Using the JRockit Memory Management System" at http://download.oracle.com/docs/cd/E13150_01/jrockit_jvm/jrockit/webdocs/index.html. For some pointers about garbage collection from an HP perspective, see "Performance tuning Java: Tuning steps" at http://h21007.www2.hp.com/dspp/tech/tech_TechDocumentDetailPage_IDX/1,1701,1604,00.html.
Monitor the performance of WebLogic Server under maximum load while running your application. Use the -verbosegc option to turn on verbose garbage collection output for your JVM and redirect both the standard error and standard output to a log file. This places thread dump information in the proper context with WebLogic Server informational and error messages, and provides a more useful log for diagnostic purposes. For example, on Windows and Solaris, enter the following:
% java -ms32m -mx200m -verbosegc -classpath $CLASSPATH -Dweblogic.Name=%SERVER_NAME% -Dbea.home="C:\Oracle\Middleware" -Dweblogic.management.username=%WLS_USER% -Dweblogic.management.password=%WLS_PW% -Dweblogic.management.server=%ADMIN_URL% -Dweblogic.ProductionModeEnabled=%STARTMODE% -Djava.security.policy="%WL_HOME%\server\lib\weblogic.policy" weblogic.Server >> logfile.txt 2>&1
where the logfile.txt 2>&1 command redirects both the standard error and standard output to a log file. On HP-UX, use the following option to redirect both stderr and stdout to a single file:
-Xverbosegc:file=/tmp/gc$$.out
where $$ maps to the process ID (PID) of the Java process. Because the output includes timestamps for when garbage collection ran, you can infer how often garbage collection occurs.
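Because each entry carries a timestamp, the collection interval can be extracted mechanically. A sketch, assuming log lines that begin with an uptime timestamp in seconds followed by a colon (for example, "12.345: [GC ...]"); verbose GC formats differ between JVMs and options, so check your own log first:

```shell
# Print the interval between consecutive GC events in a verbose GC log.
# Assumes each GC line starts with an uptime timestamp in seconds followed
# by a colon, e.g. "12.345: [GC ...]" -- verify your JVM's actual format.
awk -F: '/\[GC|\[Full GC/ {
    t = $1 + 0                      # leading uptime timestamp, in seconds
    if (prev != "") printf "interval: %.1fs\n", t - prev
    prev = t
}' gc.log
```

Short, shrinking intervals under steady load usually indicate the heap (or nursery) is too small.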
3.
How often is garbage collection taking place? In the weblogic.log file, compare the time stamps around the garbage collection. How long is garbage collection taking? Full garbage collection should not take longer than 3 to 5 seconds. What is your average memory footprint? In other words, what does the heap settle back down to after each full garbage collection? If the heap always settles to 85 percent free, you might set the heap size smaller.
4.
Review the New generation heap sizes (Sun) or Nursery size (JRockit).
For JRockit, see Section 5.3.6, "JRockit JVM Heap Size Options". For Sun, see Section 5.3.7, "Java HotSpot VM Heap Size Options".
5.
Make sure that the heap size is not larger than the available free RAM on your system. Use as large a heap size as possible without causing your system to "swap" pages to disk. The amount of free RAM on your system depends on your hardware configuration and the memory requirements of running processes on your
machine. See your system administrator for help in determining the amount of free RAM on your system.
6.
If you find that your system is spending too much time collecting garbage (your allocated virtual memory is more than your RAM can handle), lower your heap size. Typically, you should use 80 percent of the available RAM (not taken by the operating system or other processes) for your JVM.
7.
If you find that you have a large amount of available free RAM remaining, run more instances of WebLogic Server on your machine. Remember, the goal of tuning your heap size is to minimize the time that your JVM spends doing garbage collection while maximizing the number of clients that WebLogic Server can handle at a given time. JVM vendors may provide other options to print comprehensive garbage collection reports. For example, you can use the JRockit JVM -Xgcreport option to print a comprehensive garbage collection report at program completion; see "Viewing Garbage Collection Behavior" at http://download.oracle.com/docs/cd/E13150_01/jrockit_jvm/jrockit/webdocs/index.html.
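The 80 percent guideline from step 6 is simple arithmetic; the figures below are illustrative, not recommendations:

```shell
# Derive a starting heap size as 80% of the RAM left over after the
# operating system and other processes. 2048 MB is an illustrative figure.
free_mb=2048
heap_mb=$(( free_mb * 80 / 100 ))
# Setting -Xms equal to -Xmx follows the production guidance elsewhere
# in this chapter.
echo "-Xms${heap_mb}m -Xmx${heap_mb}m"   # prints -Xms1638m -Xmx1638m
```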
Section 5.3.5, "Tuning Tips for Heap Sizes" Section 5.3.6, "JRockit JVM Heap Size Options" Section 5.3.7, "Java HotSpot VM Heap Size Options"
The heap sizes should be set to values such that the maximum amount of memory used by the VM does not exceed the amount of available physical RAM. If this value is exceeded, the OS starts paging and performance degrades significantly. The VM always uses more memory than the heap size. The memory required for internal VM functionality, native libraries outside of the VM, and permanent generation memory (for the Sun VM only: memory required to store classes and methods) is allocated in addition to the heap size settings. When using a generational garbage collection scheme, the nursery size should not exceed half the total Java heap size. Typically, 25% to 40% of the heap size is adequate. In production environments, set the minimum heap size and the maximum heap size to the same value to prevent wasting VM resources used to constantly grow and shrink the heap. This also applies to the New generation heap sizes (Sun) or Nursery size (JRockit).
-Xmx sets the maximum heap size.
-Xgc:parallel selects the parallel garbage collector.
-XXaggressive:memory performs adaptive optimizations as early as possible in the Java application run. To do this, the bottleneck detector runs with a higher frequency from the start and then gradually lowers its frequency. This option also tells JRockit to use the available memory aggressively.
For example, when you start a WebLogic Server instance from a java command line, you could specify the JRockit VM heap size values as follows:
$ java -Xns10m -Xms512m -Xmx512m
The default size for these values is measured in bytes. Append the letter 'k' or 'K' to the value to indicate kilobytes, 'm' or 'M' to indicate megabytes, and 'g' or 'G' to indicate gigabytes. The example above allocates 10 megabytes of memory to the Nursery heap size and 512 megabytes of memory to the minimum and maximum heap sizes for the WebLogic Server instance running in the JVM. For detailed information about setting the appropriate heap sizes for WebLogic's JRockit JVM, see "Tuning the JRockit JVM" at http://download.oracle.com/docs/cd/E13150_01/jrockit_jvm/jrockit/webdocs/index.html.
-XX:SurvivorRatio controls the size of the survivor spaces relative to Eden in the New generation.
-Xmx sets the maximum heap size.
-XX:+UseISM and -XX:+AggressiveHeap are settings for big heaps and Intimate Shared Memory. See http://java.sun.com/docs/hotspot/ism.html.
For example, when you start a WebLogic Server instance from a java command line, you could specify the HotSpot VM heap size values as follows:
$ java -XX:NewSize=128m -XX:MaxNewSize=128m -XX:SurvivorRatio=8 -Xms512m -Xmx512m
The default size for these values is measured in bytes. Append the letter 'k' or 'K' to the value to indicate kilobytes, 'm' or 'M' to indicate megabytes, and 'g' or 'G' to indicate gigabytes. The example above allocates 128 megabytes of memory to the New generation and maximum New generation heap sizes, and 512 megabytes of memory to the minimum and maximum heap sizes for the WebLogic Server instance running in the JVM.
See http://www.oracle.com/technetwork/java/javase/tech/vmoptions-jsp-140102.html for more information on the command-line options and environment variables that can affect the performance characteristics of the Java HotSpot Virtual Machine. For additional examples of the HotSpot VM options, see:
"Standard Options for Windows (Win32) VMs" at http://download.oracle.com/javase/6/docs/technotes/tools/windows/java.html. "Standard Options for Solaris VMs and Linux VMs" at http://download.oracle.com/javase/6/docs/technotes/tools/solaris/java.html.
The Java Virtual Machine document provides a detailed discussion of the Client and Server implementations of the Java virtual machine for Java SE 5.0 at http://download.oracle.com/javase/1.5.0/docs/guide/vm/index.html .
5.4.2 JRockit
The JRockit VM automatically adjusts the spinning for different locks, eliminating the need to set this parameter.
6
Section 6.1, "Using JRockit Flight Recorder" Section 6.2, "Using WLDF" Section 6.3, "Tuning Considerations"
If you use the -XX:-FlightRecorder option to disable JFR, it disables all JRA support, including the ability to turn on JRA at runtime.
Tuning Considerations
JRockit and WLDF generate global events that contain information about the recording settings, even when disabled. For example, JVM Metadata events list active recordings, and WLDF GlobalInformationEvents list the domain, server, machine, and volume level. You can use a WLDF diagnostic image capture to capture JFR recordings along with other component information from WebLogic Server, such as configuration. The Diagnostic Volume setting does not affect explicitly configured diagnostic modules. By default, the Diagnostic Volume is set to Low, and the JRockit default recording is also off. Setting the WLDF volume to Low or higher enables WLDF event generation as well as the JVM events that would have been included by default if the JRockit default recording were enabled separately. Therefore, if you set the WLDF volume to Low, Medium, or High, both WLDF events and JVM events are recorded in the JFR; there is no need to separately enable the JRockit default recording.
6.2.1 Using JRockit controls outside of WLDF to control the default JVM recording
You can enable the JVM default recording (or another recording) and continue to generate JVM events regardless of the WLDF volume setting. This allows you to continue to generate JVM events when WLDF events are off.
There is a known issue that can cause a potentially large set of JVM events to be generated when WLDF captures the JFR data to the image capture. The events generated are at the same timestamp as the WLDF image capture and are mostly large numbers of JIT compilation events. With the default JFR and WLDF settings, these events are seen only during the WLDF image capture and can be ignored. The expectation is that the VM events generated during diagnostic image capture should have little performance impact, as they are generated in a short window and are disabled during normal operation.
7
This chapter describes how to tune WebLogic Server to match your application needs.
Section 7.1, "Setting Java Parameters for Starting WebLogic Server" Section 7.2, "Development vs. Production Mode Default Tuning Values" Section 7.4, "Thread Management" Section 7.5, "Tuning Network I/O" Section 7.6, "Setting Your Compiler Options" Section 7.7, "Using WebLogic Server Clusters to Improve Performance" Section 7.8, "Monitoring a WebLogic Server Domain" Section 7.9, "Tuning Class and Resource Loading"
Change the value of the variable JAVA_HOME to the location of your JDK. For example:
set JAVA_HOME=C:\Oracle\Middleware\jdk160_11
For higher performance throughput, set the minimum Java heap size equal to the maximum heap size. For example:

-Xms512m -Xmx512m
See Section 5.3.4, "Specifying Heap Size Values" for details about setting heap size options.
The following table lists the performance-related configuration parameters that differ when switching from development to production startup mode.
Table 7-2 Differences Between Development and Production Modes

SSL: In development mode, you can use the demonstration digital certificates and the demonstration keystores provided by the WebLogic Server security services. With these certificates, you can design your application to work within environments secured by SSL. In production mode, you should not use the demonstration digital certificates and the demonstration keystores; if you do so, a warning message is displayed. For more information about managing security, see "Configuring SSL" in Securing WebLogic Server.

Deploying Applications: In development mode, WebLogic Server instances can automatically deploy and update applications that reside in the domain_name/autodeploy directory (where domain_name is the name of a domain). It is recommended that this method be used only in a single-server development environment. For more information, see "Auto-Deploying Applications in Development Domains" in Deploying Applications to Oracle WebLogic Server. In production mode, the auto-deployment feature is disabled, so you must use the WebLogic Server Administration Console, the weblogic.Deployer tool, or the WebLogic Scripting Tool (WLST). For more information, see "Understanding WebLogic Server Deployment" in Deploying Applications to Oracle WebLogic Server.
Thread Management
For information on switching the startup mode from development to production, see "Change to Production Mode" in the Oracle WebLogic Server Administration Console Help.
7.3 Deployment
The following sections provide information on how to improve deployment performance:
Section 7.3.1, "On-demand Deployment of Internal Applications" Section 7.3.2, "Use FastSwap Deployment to Minimize Redeployment Time" Section 7.3.3, "Generic Overrides"
Section 7.4.1, "Tuning a Work Manager" Section 7.4.4, "Tuning Execute Queues" Section 7.4.5, "Understanding the Differences Between Work Managers and Execute Queues" Section 7.4.7, "Tuning the Stuck Thread Detection Behavior"
Section 7.4.2, "How Many Work Managers are Needed?" Section 7.4.3, "What are the SLA Requirements for Each Work Manager?"
See "Using Work Managers to Optimize Scheduled Work" in Configuring Server Environments for Oracle WebLogic Server.
7.4.3 What are the SLA Requirements for Each Work Manager?
Service level agreement (SLA) requirements are defined by instances of request classes. A request class expresses a scheduling guideline that a server instance uses to allocate threads. See "Understanding Work Managers" in Configuring Server Environments for Oracle WebLogic Server.
7.4.5 Understanding the Differences Between Work Managers and Execute Queues
The easiest way to conceptually visualize the difference between the execute queues of previous releases and work managers is to correlate execute queues (or rather, execute-queue managers) with work managers and decouple the one-to-one relationship between execute queues and thread pools. For releases prior to WebLogic Server 9.0, incoming requests are put into a default execute queue or a user-defined execute queue. Each execute queue has an associated execute queue manager that controls an exclusive, dedicated thread pool with a fixed
number of threads in it. Requests are added to the queue on a first-come-first-served basis. The execute-queue manager then picks the first request from the queue and an available thread from the associated thread-pool and dispatches the request to be executed by that thread. For releases of WebLogic Server 9.0 and higher, there is a single priority-based execute queue in the server. Incoming requests are assigned an internal priority based on the configuration of work managers you create to manage the work performed by your applications. The server increases or decreases threads available for the execute queue depending on the demand from the various work-managers. The position of a request in the execute queue is determined by its internal priority:
The higher the priority, the closer it is placed to the head of the execute queue. The closer to the head of the queue, the more quickly the request will be dispatched to a thread.
Work managers provide you the ability to better control thread utilization (server performance) than execute-queues, primarily due to the many ways that you can specify scheduling guidelines for the priority-based thread pool. These scheduling guidelines can be set either as numeric values or as the capacity of a server-managed resource, like a JDBC connection pool.
Migrating application domains from a previous release to WebLogic Server 9.x does not automatically convert execute queues to work managers. If execute queues are present in the upgraded application configuration, the server instance assigns work requests appropriately to the execute queue specified in the dispatch-policy. Requests without a dispatch-policy use the self-tuning thread pool.
See "Roadmap for Upgrading Your Application Environment" in Upgrade Guide for Oracle WebLogic Server.
Section 7.5.1, "Tuning Muxers" Section 7.5.2, "Which Platforms Have Performance Packs?" Section 7.5.3, "Enabling Performance Packs" Section 7.5.4, "Changing the Number of Available Socket Readers" Section 7.5.5, "Network Channels" Section 7.5.6, "Reducing the Potential for Denial of Service Attacks" Section 7.5.7, "Tune the Chunk Parameters" Section 7.5.8, "Tuning Connection Backlog Buffering" Section 7.5.9, "Tuning Cached Connections"
Section 7.5.1.1, "Java Muxer" Section 7.5.1.2, "Native Muxers" Section 7.5.1.3, "Non-Blocking IO Muxer"
WebLogic Server selects which muxer implementation to use based on the following criteria:
If the Muxer Class attribute is set to weblogic.socket.NIOSocketMuxer or the -Dweblogic.MuxerClass=weblogic.socket.NIOSocketMuxer flag is set, the NIOSocketMuxer is used. If NativeIOEnabled is false and MuxerClass is null, the Java Socket Muxer is used. If NativeIOEnabled is true and MuxerClass is null, native muxers are used, if available for your platform.
Uses pure Java to read data from sockets. It is also the only muxer available for RMI clients. Blocks on reads until there is data to be read from a socket. This behavior does not scale well when there are a large number of sockets and/or when data arrives infrequently at sockets. This is typically not an issue for clients, but it can create a huge bottleneck for a server.
If the Enable Native IO parameter is not selected, the server instance exclusively uses the Java muxer. This may be acceptable if there are a small number of clients and the rate at which requests arrive at the server is fairly high. Under these conditions, the Java muxer performs as well as a native muxer and eliminates Java Native Interface
(JNI) overhead. Unlike native muxers, the number of threads used to read requests is not fixed and is tunable for Java muxers by configuring the Percent Socket Readers parameter setting in the Administration Console. See Section 7.5.4, "Changing the Number of Available Socket Readers". Ideally, you should configure this parameter so the number of threads roughly equals the number of remote concurrently connected clients up to 50% of the total thread pool size. Each thread waits for a fixed amount of time for data to become available at a socket. If no data arrives, the thread moves to the next socket.
where xx is the amount of time, in microseconds, to delay before checking if data is available. The default value is 0, which corresponds to no delay.
The Certicom SSL implementation is not supported with WebLogic Server's non-blocking IO implementation (weblogic.socket.NIOSocketMuxer). If you need to enable secure communication between applications, Oracle supports implementing JSSE (Java Secure Socket Extension). For more information, see "Secure Sockets Layer (SSL)" in Understanding Security for Oracle WebLogic Server.
See "Supported Configurations" in What's New in Oracle WebLogic Server to find links to the latest Certifications Pages. Select your platform from the list of certified platforms.
3.
Use your browser's Edit > Find to locate all instances of "Performance Pack" to verify whether it is included for the platform.
Configure multiple network channels using different IP and port settings. See "Configure custom network channels" in Oracle WebLogic Server Administration Console Help. In your client-side code, use a JNDI URL pattern similar to the pattern used in clustered environments. The following is an example for a client using two network channels:
t3://<ip1>:<port1>,<ip2>:<port2>
2.
See "Understanding Network Channels" in Configuring Server Environments for Oracle WebLogic Server.
Maximum incoming message size Complete message timeout Number of file descriptors (UNIX systems)
For optimal system performance, each of these settings should be appropriate for the particular system that hosts WebLogic Server and should be in balance with each other, as explained in the sections that follow.
weblogic.Chunksize: Sets the size of a chunk (in bytes). The primary situation in which this may need to be increased is if request sizes are large. It should be set to values that are multiples of the network's maximum transfer unit (MTU), after subtracting any Ethernet or TCP header sizes from the value. Set this parameter to the same value on the client and server.
weblogic.utils.io.chunkpoolsize: Sets the maximum size of the chunk pool. The default value is 2048. The value may need to be increased if the server starts to allocate and discard chunks in steady state. To determine whether the value needs to be increased, monitor the CPU profile or use a memory/heap profiler for call stacks invoking the constructor weblogic.utils.io.Chunk.
weblogic.PartitionSize: Sets the number of pool partitions used (default is 4). The chunk pool can be a source of significant lock contention, as each request to access the pool must be synchronized. Partitioning the chunk pool spreads the potential for contention over more than one partition.
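These are Java system properties, so they can be supplied through the server start script. A sketch of appending them to JAVA_OPTIONS; the values shown are illustrative, not recommended settings:

```shell
# Append the chunk tuning properties to JAVA_OPTIONS before starting the
# server. The values below are illustrative; measure before and after
# changing them.
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.Chunksize=8192"
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.utils.io.chunkpoolsize=4096"
JAVA_OPTIONS="${JAVA_OPTIONS} -Dweblogic.PartitionSize=8"
echo "$JAVA_OPTIONS"
```

Remember that weblogic.Chunksize must be set to the same value on the client and server.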
"Implementing Enterprise Java Beans" in Programming WebLogic Enterprise JavaBeans for Oracle WebLogic Server. "Configure compiler options" in Oracle WebLogic Server Administration Console Help.
Add more RAM if you have only 256 MB. Raise the file descriptor limit, for example:
set rlim_fd_max = 4096 set rlim_fd_cur = 1024
Server, add another WebLogic Server instance to your cluster without changing your application. Clusters provide two key benefits that are not provided by a single server: scalability and availability. WebLogic Server clusters bring scalability and high availability to Java EE applications in a way that is transparent to application developers. Scalability expands the capacity of the middle tier beyond that of a single WebLogic Server or a single computer. The only limitation on cluster membership is that all WebLogic Servers must be able to communicate by IP multicast. New WebLogic Servers can be added to a cluster dynamically to increase capacity. A WebLogic Server cluster guarantees high availability by using the redundancy of multiple servers to insulate clients from failures. The same service can be provided on multiple servers in a cluster. If one server fails, another can take over. The ability to have a functioning server take over from a failed server increases the availability of the application to clients.
Note:
Provided that you have resolved all application and environment bottleneck issues, adding additional servers to a cluster should provide linear scalability. When doing benchmark or initial configuration test runs, isolate issues in a single server environment before moving to a clustered environment.
Clustering in the Messaging Service is provided through distributed destinations; connection concentrators, and connection load-balancing (determined by connection factory targeting); and clustered Store-and-Forward (SAF). Client load-balancing with respect to distributed destinations is tunable on connection factories. Distributed destination Message Driven Beans (MDBs) that are targeted to the same cluster that hosts the distributed destination automatically deploy only on cluster servers that host the distributed destination members and only process messages from their local destination. Distributed queue MDBs that are targeted to a different server or cluster than the host of the distributed destination automatically create consumers for every distributed destination member. For example, each running MDB has a consumer for each distributed destination queue member.
Section 7.7.3, "Database Bottlenecks"
Section 7.7.4, "Session Replication"
Section 7.7.5, "Asynchronous HTTP Session Replication"
Section 7.7.6, "Invalidation of Entity EJBs"
Section 7.7.7, "Invalidation of HTTP sessions"
Section 7.7.8, "JNDI Binding, Unbinding and Rebinding"
load on the database by exploring other options. See Chapter 9, "Database Tuning" and Chapter 12, "Tuning Data Sources".
HTTP session management provides more options for handling failover, such as replication and a shared database or file. Superior scalability. Replication of the HTTP session state occurs outside of any transactions. Stateful session bean replication occurs in a transaction, which is more resource intensive. The HTTP session replication mechanism is more sophisticated and provides optimizations for a wider variety of situations than stateful session bean replication.
Section 7.7.5.1, "Asynchronous HTTP Session Replication using a Secondary Server"
Section 7.7.5.2, "Asynchronous HTTP Session Replication using a Database"
MAN
WAN
During undeployment or redeployment: The session is unregistered and removed from the update queue. The session on the secondary server is unregistered.
If the application is moved to admin mode, the sessions are flushed and replicated to the secondary server. If the secondary server is down, the system attempts to fail over to another server. A server shutdown or failure state triggers the replication of any batched sessions to minimize the potential loss of session information.
During undeployment or redeployment: The session is unregistered and removed from the update queue. The session is removed from the database.
If the application is moved to admin mode, the sessions are flushed and replicated to the database.
The memory requirements of the application. Choose the heap sizes of individual instances and the total number of instances to ensure that you're providing sufficient memory for the application and achieving good GC performance. For some applications, allocating very large heaps to a single instance may lead to longer GC pause times; in this case, performance may benefit from increasing the number of instances and giving each instance a smaller heap. Maximizing CPU utilization. While WebLogic Server is capable of utilizing multiple cores per instance, for some applications increasing the number of instances on a given machine (reducing the number of cores per instance) can improve CPU utilization and overall performance.
Section 7.8.1, "Using the Administration Console to Monitor WebLogic Server"
Section 7.8.2, "Using the WebLogic Diagnostic Framework"
Section 7.8.3, "Using JMX to Monitor WebLogic Server"
Section 7.8.4, "Using WLST to Monitor WebLogic Server"
Section 7.8.6, "Third-Party Tools to Monitor WebLogic Server"
New for this release is the ability to filter resource loading requests. The basic configuration of resource filtering is specified in the META-INF/weblogic-application.xml file and is similar to class filtering. The syntax for filtering resources is shown in the following example:
<prefer-application-resources>
  <resource-name>x/y</resource-name>
  <resource-name>z*</resource-name>
</prefer-application-resources>
In this example, resource filtering has been configured for the exact resource name "x/y" and for any resource whose name starts with "z". '*' is the only wild card pattern allowed. Resources with names matching these patterns are searched for only on the application classpath; the system classpath search is skipped.
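The matching rules above (exact names plus a single trailing '*' wildcard) can be sketched in plain Python. The function name and sample resource names are illustrative only; this is not part of any WebLogic API.

```python
def matches_filter(resource_name, patterns):
    """Return True if resource_name matches any configured
    <resource-name> pattern. Only a trailing '*' wildcard is
    supported, mirroring the weblogic-application.xml rules."""
    for pattern in patterns:
        if pattern.endswith("*"):
            # Prefix match: "z*" matches any name starting with "z".
            if resource_name.startswith(pattern[:-1]):
                return True
        elif resource_name == pattern:
            # Exact match: "x/y" matches only "x/y".
            return True
    return False

patterns = ["x/y", "z*"]
print(matches_filter("x/y", patterns))                # True (exact)
print(matches_filter("zebra.properties", patterns))   # True (prefix "z")
print(matches_filter("x/other", patterns))            # False
```

A name that matches is resolved only against the application classpath; anything else falls through to the normal search order.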
Note:
If you add a class or resource to the filtering configuration and subsequently get exceptions indicating the class or resource isn't found, the most likely cause is that the class or resource is on the system classpath, not on the application classpath.
Reduces server startup time. The package level index reduces search time for all classes and resources.
For more information, see Configuring Class Caching in Developing Applications for Oracle WebLogic Server.
Note:
Class caching is supported in development mode when starting the server using a startWebLogic script. Class caching is disabled by default and is not supported in production mode. The decrease in startup time varies among different JRE vendors.
8 Tuning the WebLogic Persistent Store
Section 8.1, "Overview of Persistent Stores"
Section 8.2, "Best Practices When Using Persistent Stores"
Section 8.3, "Tuning JDBC Stores"
Section 8.4, "Tuning File Stores"
Section 8.5, "Using a Network File System"
Before reading this chapter, Oracle recommends becoming familiar with WebLogic store administration and monitoring. See Using the WebLogic Persistent Store in Configuring Server Environments for Oracle WebLogic Server.
Section 8.1.1, "Using the Default Persistent Store"
Section 8.1.2, "Using Custom File Stores and JDBC Stores"
Section 8.1.3, "Using a JDBC TLOG Store"
Section 8.1.4, "Using JMS Paging Stores"
Section 8.1.5, "Using Diagnostic Stores"
Using the WebLogic Persistent Store in Configuring Server Environments for Oracle WebLogic Server.
Modify the Default Store Settings in Oracle WebLogic Server Administration Console Help.
Tuning the WebLogic Persistent Store 8-1
When to Use a Custom Persistent Store in Configuring Server Environments for Oracle WebLogic Server.
Comparing File Stores and JDBC Stores in Configuring Server Environments for Oracle WebLogic Server.
Creating a Custom (User-Defined) File Store in Configuring Server Environments for Oracle WebLogic Server.
Creating a JDBC Store in Configuring Server Environments for Oracle WebLogic Server.
Paged persistent messages are potentially physically stored in two different places: always in a recoverable default or custom store, and potentially in a paging directory.
Most Flash storage devices are a single point of failure and are typically only accessible as a local device. They are suitable for JMS server paging stores, which do not recover data after a failure and automatically reconstruct themselves as needed. In most cases, Flash storage devices are not suitable for custom or default stores, which typically contain data that must be safely recoverable. The configured Directory attribute of a default or custom store should not normally reference a directory that is on a single-point-of-failure device.
Use the following steps to use a Flash storage device to page JMS messages:
1. Set the JMS server Message Paging Directory attribute to the path of your flash storage device. See Section 14.12.1, "Specifying a Message Paging Directory."
2. Tune the Message Buffer Size attribute (it controls when paging becomes active). You may be able to use lower threshold values, as faster I/O operations provide improved load absorption. See Section 14.12.2, "Tuning the Message Buffer Size Option."
3. Tune JMS server quotas to safely account for any Flash storage space limitations. This ensures that your JMS server(s) will not attempt to page more messages than the device can store, potentially yielding runtime errors and/or automatic shutdowns. As a conservative rule of thumb, assume page file usage will be at least 1.5 * ((Maximum Number of Active Messages) * (512 + average message body size)), rounded up to the nearest 16MB. See Section 14.7, "Defining Quota."
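The rule of thumb in the last step can be checked with a few lines of plain Python; the function name and the sample message counts are illustrative only.

```python
def estimate_page_file_bytes(max_active_messages, avg_body_size):
    """Conservative page-file estimate from the rule of thumb:
    1.5 * (messages * (512 + average body size)),
    rounded up to the nearest 16 MB."""
    raw = 1.5 * (max_active_messages * (512 + avg_body_size))
    chunk = 16 * 1024 * 1024
    # Ceiling-divide to the next 16 MB boundary.
    return int(-(-raw // chunk) * chunk)

# e.g. 100,000 active messages averaging 2 KB bodies
est = estimate_page_file_bytes(100_000, 2048)
print(est / (1024 * 1024), "MB")  # 368.0 MB
```

An estimate like this can then be compared against the device capacity when setting JMS server quotas.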
For subsystems that share the same server instance, share one store between multiple subsystems rather than using a store per subsystem. Sharing a store is more efficient for the following reasons: A single store batches concurrent requests into single I/Os which reduces overall disk usage. Transactions in which only one resource participates are lightweight one-phase transactions. Conversely, transactions in which multiple stores participate become heavier weight two-phase transactions.
For example, configure all SAF agents and JMS servers that run on the same server instance so that they share the same store.
Add a new store only when the old store(s) no longer scale.
Under heavy JDBC store I/O loads, you can improve performance by configuring a JDBC store to use multiple JDBC connections to concurrently process I/O operations. See "Enabling I/O Multithreading for JDBC Stores" in Configuring Server Environments for Oracle WebLogic Server. When using Oracle BLOBS, you may be able to improve performance by tuning the ThreeStepThreshold value. See "Enabling Oracle BLOB Record Columns" in Configuring Server Environments for Oracle WebLogic Server. The location of the JDBC store DDL that is used to initialize empty stores is now configurable. This simplifies the use of custom DDL for database table creation, which is sometimes used for database specific performance tuning. For information, see "Create JDBC stores" in Oracle WebLogic Server Administration Console Help and "Using the WebLogic Persistent Store" in Configuring Server Environments for Oracle WebLogic Server.
Section 8.4.1, "Basic Tuning Information"
Section 8.4.2, "Tuning a File Store Direct-Write-With-Cache Policy"
Section 8.4.3, "Tuning the File Store Direct-Write Policy"
Section 8.4.4, "Tuning the File Store Block Size"
For basic (non-RAID) disk hardware, consider dedicating one disk per file store. A store can operate up to four to five times faster if it does not have to compete with any other store on the disk. Remember to consider the existence of the default file store in addition to each configured store and a JMS paging store for each JMS server. For custom and default file stores, tune the Synchronous Write Policy. There are three transactionally safe synchronous write policies: Direct-Write-With-Cache, Direct-Write, and Cache-Flush. Direct-Write-With-Cache generally has the best performance of these policies, Cache-Flush generally has the lowest performance, and Direct-Write is the default. Unlike the other policies, Direct-Write-With-Cache creates cache files in addition to primary files. The Disabled synchronous write policy is transactionally unsafe. The Disabled write policy can dramatically improve performance, especially at low client loads; however, it is unsafe because writes become asynchronous and data can be lost in the event of an operating system or power failure. See Guidelines for Configuring a Synchronous Write Policy in Configuring Server Environments for Oracle WebLogic Server.
Note:
Certain older versions of Microsoft Windows may incorrectly report storage device synchronous write completion if the Windows default Write Cache Enabled setting is used. This violates the transactional semantics of transactional products (not specific to Oracle), including file stores configured with a Direct-Write (default) or Direct-Write-With-Cache policy, because a system crash or power failure can lead to a loss or duplication of records/messages. One visible symptom of this problem is persistent message/transaction throughput that exceeds the physical capabilities of your storage device. You can address the problem by applying a Microsoft-supplied patch, disabling the Windows Write Cache Enabled setting, or using a power-protected storage device. See http://support.microsoft.com/kb/281672 and http://support.microsoft.com/kb/332023.
When performing head-to-head vendor comparisons, make sure all the write policies for the persistent store are equivalent. Some non-WebLogic vendors default to the equivalent of Disabled. Depending on the synchronous write policy, custom and default stores have a variety of additional tunable attributes that may improve performance. These include CacheDirectory, MaxWindowBufferSize, IOBufferSize, BlockSize, InitialSize, and MaxFileSize. For more information see the JMSFileStoreMBean in the Oracle WebLogic Server MBean Reference.
Note:
The JMSFileStoreMBean is deprecated, but the individual bean attributes apply to the non-deprecated beans for custom and default file stores.
If disk performance continues to be a bottleneck, consider purchasing disk or RAID controller hardware that has a built-in write-back cache. These caches significantly improve performance by temporarily storing persistent data in volatile memory. To ensure transactionally safe write-back caches, they must be protected against power outages, host machine failure, and operating system failure. Typically, such protection is provided by a battery-backed write-back cache.
whereas cache files are strictly for performance and not for high availability and can be stored locally. When the Direct-Write-With-Cache synchronous write policy is selected, there are several additional tuning options that you should consider:
Setting the CacheDirectory. For performance reasons, the cache directory should be located on a local file system. It is placed in the operating system temp directory by default.
Increasing the MaxWindowBufferSize and IOBufferSize attributes. These tune the native memory usage of the file store.
Increasing the InitialSize and MaxFileSize attributes. These tune the initial size of a store and the maximum size of an individual file in the store, respectively.
Tuning the BlockSize attribute. See Section 8.4.4, "Tuning the File Store Block Size."
For more information on individual tuning parameters, see the JMSFileStoreMBean in the Oracle WebLogic Server MBean Reference.
There may be additional security and file locking considerations when using the Direct-Write-With-Cache synchronous write policy. See Securing a Production Environment for Oracle WebLogic Server and the CacheDirectory and LockingEnabled attributes of the JMSFileStoreMBean in the Oracle WebLogic Server MBean Reference. The JMSFileStoreMBean is deprecated, but the individual bean attributes apply to the non-deprecated beans for custom and default file stores.
It is safe to delete a cache directory while the store is not running, but this may slow down the next store boot. Cache files are re-used to speed up the file store boot and recovery process, but only if the store's host WebLogic Server has been shut down cleanly prior to the current boot (not after a kill -9, nor after an OS/JVM crash) and there was no off-line change to the primary files (such as a store admin compaction). If the existing cache files cannot be safely used at boot time, they are automatically discarded and new files are created. In addition, a Warning log 280102 is generated. After a migration or failover event, this same Warning message is generated, but can be ignored. If a Direct-Write-With-Cache file store fails to load a wlfileio native driver, the synchronous write policy automatically changes to the equivalent of Direct-Write with AvoidDirectIO=true. To view a running custom or default file store's configured and actual synchronous write policy and driver, examine the server log for WL-280008 and WL-280009 messages.
To prevent unused cache files from consuming disk space, test and development environments may need to be modified to periodically delete cache files that are left over from temporarily created domains. In production environments, cache files are managed automatically by the file store.
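As one hedged sketch of such periodic cleanup for a test or development machine, a script like the following could remove cache files that have not been touched recently. The directory layout, the age threshold, and the function name are assumptions for illustration, not WebLogic behavior; in production the file store manages these files itself.

```python
import os
import time

def delete_stale_cache_files(cache_dir, max_age_days=7):
    """Remove files in cache_dir whose modification time is older
    than max_age_days. Returns the names of the removed files.
    Intended only for leftover cache files of deleted test domains."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in sorted(os.listdir(cache_dir)):
        path = os.path.join(cache_dir, name)
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```

Run only against directories known to hold cache files from temporarily created domains, never against a live store's cache directory.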
section are still supported in this release, but have been deprecated as of 11gR1PS2. Use the configurable Direct-Write-With-Cache synchronous write policy as an alternative to the Direct-Write policy. For file stores with the synchronous write policy of Direct-Write, you may be directed by Oracle Support or a release note to set weblogic.Server options on the command line or start script of the JVM that runs the store:
For the default store, where server-name is the name of the server hosting the store:
-Dweblogic.store._WLS_server-name.AvoidDirectIO=true
Setting AvoidDirectIO on an individual store overrides the setting of the global -Dweblogic.store.AvoidDirectIO option. For example: If you have two stores, A and B, and set the following options:
-Dweblogic.store.AvoidDirectIO=true -Dweblogic.store.A.AvoidDirectIO=false

then store B has AvoidDirectIO enabled and store A has it disabled.
Setting the AvoidDirectIO option may have performance implications which often can be mitigated using the block size setting described in Section 8.4.4, "Tuning the File Store Block Size."
A single WebLogic JMS producer sends persistent messages one by one. The network overhead is known to be negligible.
The file store's disk drive has a 10,000 RPM rotational rate. The disk drive has a battery-backed write-back cache.
and the messaging rate is measured at 166 messages per second. In this example, the low messaging rate matches the disk drive's latency (10,000 RPM / 60 seconds = 166 rotations per second) even though a much higher rate is expected due to the battery-backed write-back cache. Tuning the store's block size to match the file system's block size could result in a significant improvement. In some other cases, tuning the block size may result in marginal or no improvement:
The caches are observed to yield low latency (so the I/O subsystem is not a significant bottleneck). Write-back caching is not used and performance is limited by larger disk drive latencies.
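The rotational-latency arithmetic in the example above can be reproduced in a couple of lines of plain Python (illustrative only): when each synchronous commit waits one full disk rotation, throughput is bounded by RPM / 60 writes per second.

```python
def max_sync_writes_per_sec(rpm):
    """Upper bound on synchronous writes per second when every commit
    waits one full disk rotation (no effective write-back cache)."""
    return rpm / 60.0

# 10,000 RPM drive, matching the example's ~166 messages/second
print(int(max_sync_writes_per_sec(10_000)))  # 166
```

A measured rate pinned near this bound, despite a write-back cache being present, suggests the cache is not absorbing the writes and block-size tuning may help.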
There may be a trade off between performance and file space when using higher block sizes. Multiple application records are packed into a single block only when they are written concurrently. Consequently, a large block size may cause a significant increase in store file sizes for applications that have little concurrent server activity and produce small records. In this case, one small record is stored per block and the remaining space in each block is unused. As an example, consider a Web Service Reliable Messaging (WS-RM) application with a single producer that sends small 100 byte length messages, where the application is the only active user of the store. Oracle recommends tuning the store block size to match the block size of the file system that hosts the file store (typically 4096 for most file systems) when this yields a performance improvement. Alternately, tuning the block size to other values (such as paging and cache units) may yield performance gains. If tuning the block size does not yield a performance improvement, Oracle recommends leaving the block size at the default as this helps to minimize use of file system resources.
The BlockSize command line properties that are described in this section are still supported in 11gR1PS2, but are deprecated. Oracle recommends using the BlockSize configurable on custom and default file stores instead.
To set the block size of a store, use one of the following properties on the command line or start script of the JVM that runs the store:
Globally sets the block size of all file stores that don't have pre-existing files.
-Dweblogic.store.BlockSize=block-size
Sets the block size for a specific file store that doesn't have pre-existing files.
-Dweblogic.store.store-name.BlockSize=block-size
Sets the block size for the default file store, if the store doesn't have pre-existing files:
-Dweblogic.store._WLS_server-name.BlockSize=block-size
The value used to set the block size is an integer between 512 and 8192 which is automatically rounded down to the nearest power of 2.
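The rounding rule above can be sketched in plain Python. The function name is illustrative, and the sketch assumes the input is already within the valid 512-8192 range described in the text.

```python
def effective_block_size(requested):
    """Round a requested block size (valid range 512-8192) down to
    the nearest power of 2, mirroring the documented behavior."""
    power = 512
    while power * 2 <= requested:
        power *= 2
    return power

for value in (512, 1000, 4096, 8191, 8192):
    print(value, "->", effective_block_size(value))
```

So, for example, a configured value of 1000 results in an actual block size of 512, and 8191 results in 4096.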
Setting BlockSize on an individual store overrides the setting of the global -Dweblogic.store.BlockSize option. For example: If you have two stores, A and B, and set the following options:
-Dweblogic.store.BlockSize=8192 -Dweblogic.store.A.BlockSize=512
then store B has a block size of 8192 and store A has a block size of 512.
Note:
Setting the block size using command line properties only takes effect for file stores that have no pre-existing files. If a store has pre-existing files, the store continues to use the block size that was set when the store was first created.
Linux ext2 and ext3 file systems: run /sbin/dumpe2fs /dev/device-name and look for "Block size".
Windows NTFS: run fsutil fsinfo ntfsinfo drive-letter: and look for "Bytes Per Cluster".
java -Dweblogic.store.BlockSize=block-size weblogic.store.Admin
Type help for available commands.
Storeadmin-> compact -dir file-store-directory
See "Store Administration Using a Java Command-line" in Configuring Server Environments for Oracle WebLogic Server.
Section 8.5.1, "Configuring Synchronous Write Policies"
Section 8.5.2, "Test Server Restart Behavior"
Section 8.5.3, "Handling NFS Locking Errors"
You can configure an NFS v4-based Network Attached Storage (NAS) server to release locks within the approximate time required to complete server migration. If you tune and test your NFS v4 environment, you do not need to follow the procedures in this section. See your storage vendor's documentation for information on locking files stored in NFS-mounted directories on the storage device.
If Oracle WebLogic Server does not restart after an abrupt machine failure when JMS messages and transaction logs are stored on an NFS-mounted directory, the following errors may appear in the server log files:
Example 8-1 Store Restart Failure Error Message

<MMM dd, yyyy hh:mm:ss a z> <Error> <Store> <BEA-280061> <The persistent store "_WLS_server_soa1" could not be deployed: weblogic.store.PersistentStoreException: java.io.IOException: [Store:280021]There was an error while opening the file store file "_WLS_SERVER_SOA1000000.DAT"
at weblogic.store.io.file.Heap.open(Heap.java:168)
at weblogic.store.io.file.FileStoreIO.open(FileStoreIO.java:88)
...
java.io.IOException: Error from fcntl() for file locking, Resource temporarily unavailable, errno=11
This error is due to the NFS system not releasing the lock on the stores. WebLogic Server maintains locks on files used for storing JMS data and transaction logs to protect from potential data corruption if two instances of the same WebLogic Server
are accidentally started. The NFS storage device does not become aware of machine failure in a timely manner and the locks are not released by the storage device. As a result, after an abrupt machine failure followed by a restart, any subsequent attempt by WebLogic Server to acquire locks on the previously locked files may fail. Refer to your storage vendor documentation for additional information on the locking of files stored in NFS-mounted directories on the storage device. If it is not reasonably possible to tune locking behavior in your NFS environment, use one of the following two solutions to unlock the logs and data files:
Section 8.5.3.1, "Solution 1 - Copying Data Files to Remove NFS Locks"
Section 8.5.3.2, "Solution 2 - Disabling File Locks in WebLogic Server File Stores"
With this solution, the WebLogic file locking mechanism continues to provide protection from any accidental data corruption if multiple instances of the same servers were accidentally started. However, the servers must be restarted manually after abrupt machine failures. File stores create multiple consecutively numbered .DAT files when they are used to store large amounts of data. All files may need to be copied and renamed when this occurs.
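As a minimal sketch of the copy step, assuming the store's consecutively numbered .DAT files sit in a flat directory and keep their names, something like the following could stage them into a fresh directory that the NFS server holds no stale locks on. The paths and function name are illustrative; consult the full Solution 1 procedure before doing anything like this against a real store.

```python
import glob
import os
import shutil

def copy_store_files(store_dir, new_dir):
    """Copy every .DAT file from store_dir into new_dir, preserving
    names and metadata. Returns the copied file names in order."""
    os.makedirs(new_dir, exist_ok=True)
    copied = []
    for path in sorted(glob.glob(os.path.join(store_dir, "*.DAT"))):
        shutil.copy2(path, new_dir)
        copied.append(os.path.basename(path))
    return copied
```

After copying, the store (or a relocated copy of it) must be pointed at the new directory before the server is manually restarted.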
With this solution, since the WebLogic Server locking is disabled, automated server restarts and failovers should succeed. Be very cautious, however, when using this option. The WebLogic file locking feature is designed to help prevent severe file corruptions that can occur in undesired concurrency scenarios. If the server using the file store is configured for server migration, always configure the database based leasing option. This enforces additional locking mechanisms using database tables, and prevents automated restart of more than one instance of the same WebLogic Server. Additional procedural precautions must be implemented to avoid any human error and to ensure that one and only one instance of a server is manually started at any given point in time. Similarly, extra precautions must be taken to ensure that no two domains have a store with the same name that references the same directory.
You can also use the WebLogic Server Administration Console to disable WebLogic file locking mechanisms for the default file store, a custom file store, a JMS paging file store, and a Diagnostics file store, as described in the following sections:
Section 8.5.3.2.1, "Disabling File Locking for the Default File Store"
Section 8.5.3.2.2, "Disabling File Locking for a Custom File Store"
Section 8.5.3.2.3, "Disabling File Locking for a JMS Paging File Store"
Section 8.5.3.2.4, "Disabling File Locking for a Diagnostics File Store"
8.5.3.2.1 Disabling File Locking for the Default File Store Follow these steps to disable file locking for the default file store using the WebLogic Server Administration Console:
1. If necessary, click Lock & Edit in the Change Center (upper left corner) of the Administration Console to get an Edit lock for the domain.
2. In the Domain Structure tree, expand the Environment node and select Servers.
3. In the Summary of Servers list, select the server you want to modify.
4. Select the Configuration > Services tab.
5. Scroll down to the Default Store section and click Advanced.
6. Scroll down and deselect the Enable File Locking check box.
7. Click Save to save the changes.
8. If necessary, click Activate Changes in the Change Center.

Restart the server you modified for the changes to take effect.
8.5.3.2.2 Disabling File Locking for a Custom File Store Use the following steps to disable file locking for a custom file store using the WebLogic Server Administration Console:
1. If necessary, click Lock & Edit in the Change Center (upper left corner) of the Administration Console to get an Edit lock for the domain.
2. In the Domain Structure tree, expand the Services node and select Persistent Stores.
3. In the Summary of Persistent Stores list, select the custom file store you want to modify.
4. On the Configuration tab for the custom file store, click Advanced to display advanced store settings.
5. Scroll down to the bottom of the page and deselect the Enable File Locking check box.
6. Click Save to save the changes.
7. If necessary, click Activate Changes in the Change Center.

If the custom file store was in use, you must restart the server for the changes to take effect.
8.5.3.2.3 Disabling File Locking for a JMS Paging File Store Use the following steps to disable file locking for a JMS paging file store using the WebLogic Server Administration Console:
1. If necessary, click Lock & Edit in the Change Center (upper left corner) of the Administration Console to get an Edit lock for the domain.
2. In the Domain Structure tree, expand the Services node, expand the Messaging node, and select JMS Servers.
3. In the Summary of JMS Servers list, select the JMS server you want to modify.
4. On the Configuration > General tab for the JMS server, scroll down and deselect the Paging File Locking Enabled check box.
5. Click Save to save the changes.
6. If necessary, click Activate Changes in the Change Center.

Restart the server you modified for the changes to take effect.
8.5.3.2.4 Disabling File Locking for a Diagnostics File Store Use the following steps to disable file locking for a Diagnostics file store using the WebLogic Server Administration Console:
1. If necessary, click Lock & Edit in the Change Center (upper left corner) of the Administration Console to get an Edit lock for the domain.
2. In the Domain Structure tree, expand the Diagnostics node and select Archives.
3. In the Summary of Diagnostic Archives list, select the server name of the archive that you want to modify.
4. On the Settings for [server_name] page, deselect the Diagnostic Store File Locking Enabled check box.
5. Click Save to save the changes.
6. If necessary, click Activate Changes in the Change Center.

Restart the server you modified for the changes to take effect.
9 Database Tuning
This chapter describes how to tune your database to prevent it from becoming a major enterprise-level bottleneck. Configure your database for optimal performance by following the tuning guidelines in this chapter and in the product documentation for the database you are using.
Good database design
Distribute the database workload across multiple disks to avoid or reduce disk overloading. Good design also includes proper sizing and organization of tables, indexes, and logs.

Disk I/O optimization
Disk I/O optimization is related directly to throughput and scalability. Access to even the fastest disk is orders of magnitude slower than memory access. Whenever possible, optimize the number of disk accesses. In general, selecting a larger block/buffer size for I/O reduces the number of disk accesses and might substantially increase throughput in a heavily loaded production environment.

Checkpointing
This mechanism periodically flushes all dirty cache data to disk, which increases the I/O activity and system resource usage for the duration of the checkpoint. Although frequent checkpointing can increase the consistency of on-disk data, it can also slow database performance. Most database systems have checkpointing capability, but not all database systems provide user-level controls. Oracle, for example, allows administrators to set the frequency of checkpoints, while users have no control over SQL Server 7.x checkpoints. For recommended settings, see the product documentation for the database you are using.

Disk and database overhead can sometimes be dramatically reduced by batching multiple operations together and/or increasing the number of operations that run in parallel (increasing concurrency). Examples:

Increasing the value of the Message bridge BatchSize or the Store-and-Forward WindowSize can improve performance, as larger batch sizes produce fewer but larger I/Os.
Programmatically leveraging JDBC's batch APIs.
Using the MDB transaction batching feature. See Chapter 11, "Tuning Message-Driven Beans".
Database-Specific Tuning
Increasing concurrency by increasing max-beans-in-free-pool and thread pool size for MDBs (or decreasing it if batching can be leveraged).
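The JDBC batching point above can be illustrated with Python's DB-API, using an in-memory SQLite database as a stand-in for a production RDBMS. Here executemany plays the role of JDBC's addBatch()/executeBatch(), submitting many rows in one call; the table and row contents are made up for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jms_messages (id INTEGER PRIMARY KEY, body TEXT)")

# One batched submission instead of 1000 individual INSERT calls,
# analogous to JDBC addBatch()/executeBatch().
rows = [(i, "payload-%d" % i) for i in range(1000)]
conn.executemany("INSERT INTO jms_messages (id, body) VALUES (?, ?)", rows)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM jms_messages").fetchone()[0]
print(count)  # 1000
```

Against a real networked database the win is larger still, because batching also collapses per-statement network round trips, not just per-statement overhead.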
Section 9.2.1, "Oracle"
Section 9.2.2, "Microsoft SQL Server"
Section 9.2.3, "Sybase"
Note:
9.2.1 Oracle
This section describes performance tuning for Oracle.
Number of processes
On most operating systems, each connection to the Oracle server spawns a shadow process to service the connection. Thus, the maximum number of processes allowed for the Oracle server must account for the number of simultaneous users, as well as the number of background processes used by the Oracle server. The default number is usually not big enough for a system that needs to support a large number of concurrent operations. For platform-specific issues, see your Oracle administrator's guide. The current setting of this parameter can be obtained with the following query:
SELECT name, value FROM v$parameter WHERE name = 'processes';
Buffer pool size
The buffer pool usually is the largest part of the Oracle server system global area (SGA). This is the location where the Oracle server caches data that it has read from disk. For read-mostly applications, the single most important statistic that affects database performance is the buffer cache hit ratio. The buffer pool should be large enough to provide upwards of a 95% cache hit ratio. Set the buffer pool size by changing the value, in database blocks, of the db_cache_size parameter in the init.ora file.

Shared pool size
The shared pool is an important part of the Oracle server system global area (SGA). The SGA is a group of shared memory structures that contain data and control information for one Oracle database instance. If multiple users are concurrently connected to the same instance, the data in the instance's SGA is shared among the users. The shared pool portion of the SGA caches data for two major areas: the library cache and the dictionary cache. The library cache stores SQL-related information and control structures (for example, parsed SQL statements and locks). The dictionary cache stores operational metadata for SQL processing. For most applications, the shared pool size is critical to Oracle performance. If the shared pool is too small, the server must dedicate resources to managing the limited amount of available space. This consumes CPU resources and causes contention because Oracle imposes restrictions on the parallel management of the various caches. The more you use triggers and stored procedures, the larger the shared pool must be. The SHARED_POOL_SIZE initialization parameter specifies the size of the shared pool in bytes.
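The 95% buffer cache hit ratio target can be computed from v$sysstat-style counters with the classic formula 1 - (physical reads / (db block gets + consistent gets)). This plain-Python sketch uses made-up sample numbers; the actual counters come from the database's statistics views.

```python
def buffer_cache_hit_ratio(physical_reads, db_block_gets, consistent_gets):
    """Classic Oracle buffer cache hit ratio:
    1 - physical reads / logical reads."""
    logical_reads = db_block_gets + consistent_gets
    return 1.0 - (physical_reads / logical_reads)

# Illustrative counters: 5,000 physical reads against 120,000 logical reads
ratio = buffer_cache_hit_ratio(5_000, 40_000, 80_000)
print("%.1f%%" % (ratio * 100))  # 95.8%
```

A ratio persistently below the 95% target suggests increasing db_cache_size, assuming memory is available.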
The following query monitors the amount of free memory in the shared pool:
SELECT * FROM v$sgastat WHERE name = 'free memory' AND pool = 'shared pool';
Maximum open cursors To prevent any single connection from taking all the resources in the Oracle server, the OPEN_CURSORS initialization parameter lets administrators limit the maximum number of open cursors for each connection. Unfortunately, the default value for this parameter is too small for systems such as WebLogic Server. Cursor information can be monitored using the following query:
SELECT name, value FROM v$sysstat WHERE name LIKE 'opened cursor%';
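If the monitored values approach the configured limit, the parameter can be raised dynamically. A sketch, with an illustrative value:

```sql
-- Illustrative value; SCOPE=BOTH updates memory and the spfile
ALTER SYSTEM SET open_cursors=500 SCOPE=BOTH;
```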
Database block size A block is Oracle's basic unit for storing data and the smallest unit of I/O. One data block corresponds to a specific number of bytes of physical database space on disk. This concept of a block is specific to Oracle RDBMS and should not be confused with the block size of the underlying operating system. Since the block size affects physical storage, this value can be set only during the creation of the database; it cannot be changed once the database has been created. The current setting of this parameter can be obtained with the following query:
SELECT name, value FROM v$parameter WHERE name = 'db_block_size';
Sort area size Increasing the sort area increases the performance of large sorts because it allows the sort to be performed in memory during query processing. This can be important, as there is only one sort area for each connection at any point in time. The default value of this init.ora parameter is usually the size of 6 to 8 data blocks. This value is usually sufficient for OLTP operations but should be increased for decision support operations, large bulk operations, or large index-related operations (for example, recreating an index). When performing these types of operations, you should tune the following init.ora parameters (shown here for 8K data blocks):
sort_area_size = 65536
sort_area_retained_size = 65536
9.2.2 Microsoft SQL Server
The following guidelines pertain to Microsoft SQL Server databases. Store tempdb on a fast I/O device. Increase the recovery interval if perfmon shows an increase in I/O. Use an I/O block size larger than 2 KB.
9.2.3 Sybase
The following guidelines pertain to performance tuning parameters for Sybase databases. For more information about these parameters, see your Sybase documentation.
A lower recovery interval setting results in more frequent checkpoint operations, which in turn produce more I/O operations.
Use an I/O block size larger than 2 KB. Set the number of engines for a symmetric multiprocessor (SMP) environment appropriately; Sybase recommends configuring this setting to equal the number of CPUs minus 1.
10 Tuning WebLogic Server EJBs
This chapter describes how to tune WebLogic Server EJBs for your application environment.
Section 10.1, "General EJB Tuning Tips" Section 10.2, "Tuning EJB Caches" Section 10.3, "Tuning EJB Pools" Section 10.4, "CMP Entity Bean Tuning" Section 10.5, "Tuning In Response to Monitoring Statistics" Section 10.6, "Using the JDT Compiler"
Deployment descriptors are schema-based. Descriptors that are new in this release of WebLogic Server are not available as DTD-based descriptors. Avoid using the RequiresNew transaction parameter. Using RequiresNew causes the EJB container to start a new transaction after suspending any current transaction. This means additional resources, including a separate database connection, are allocated. Use local interfaces or set call-by-reference to true to avoid the overhead of serialization when one EJB calls another or an EJB is called by a servlet/JSP in the same application. Note the following: In releases prior to WebLogic Server 8.1, call-by-reference is turned on by default. In WebLogic Server 8.1 and higher, call-by-reference is turned off by default. Older applications migrating to WebLogic Server 8.1 and higher that do not explicitly turn on call-by-reference may experience a drop in performance. This optimization does not apply to calls across different applications.
Use Stateless session beans over Stateful session beans whenever possible. Stateless session beans scale better than stateful session beans because there is no state information to be maintained. WebLogic Server provides additional transaction performance benefits for EJBs that reside in a WebLogic Server cluster. When a single transaction uses multiple EJBs, WebLogic Server attempts to use EJB instances from a single WebLogic Server instance, rather than using EJBs from different servers. This approach minimizes network traffic for the transaction. In some cases, a transaction can use EJBs that reside on multiple WebLogic Server instances in a cluster. This can occur
in heterogeneous clusters, where not all EJBs have been deployed to all WebLogic Server instances. In these cases, WebLogic Server uses a multitier connection to access the datastore, rather than multiple direct connections. This approach uses fewer resources and yields better performance for the transaction. However, for best performance, the cluster should be homogeneous: all EJBs should reside on all available WebLogic Server instances.
Section 10.2.1, "Tuning the Stateful Session Bean Cache" Section 10.2.2, "Tuning the Entity Bean Cache" Section 10.2.3, "Tuning the Query Cache"
Section 10.2.2.1, "Transaction-Level Caching" Section 10.2.2.2, "Caching between Transactions" Section 10.2.2.3, "Ready Bean Caching"
The query is least recently used and the query-cache has hit its size limit. At least one of the EJBs that satisfy the query has been evicted from the entity bean cache, regardless of the reason. The query corresponds to a finder that has eager-relationship-caching enabled and the query for the associated internal relationship finder has been evicted from the related bean's query cache.
It is possible to let the size of the entity-bean cache limit the size of the query-cache by setting the max-queries-in-cache parameter to 0, since queries are evicted from the cache when the corresponding EJB is evicted. This may avoid some lock contention in the query cache, but the performance gain may not be significant.
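In weblogic-ejb-jar.xml, the setting described above can be sketched as follows (the cache sizes are illustrative values, not recommendations):

```xml
<entity-descriptor>
  <entity-cache>
    <max-beans-in-cache>1000</max-beans-in-cache>
    <!-- 0 lets entity-cache eviction bound the query cache -->
    <max-queries-in-cache>0</max-queries-in-cache>
  </entity-cache>
</entity-descriptor>
```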
Section 10.3.1, "Tuning the Stateless Session Bean Pool" Section 10.3.2, "Tuning the MDB Pool" Section 10.3.3, "Tuning the Entity Bean Pool"
The upper bound is specified by the max-beans-in-free-pool parameter. It should be set equal to the number of threads expected to invoke the EJB concurrently. Setting the value too low limits concurrency.
The lower bound is specified by the initial-beans-in-free-pool parameter. Increasing the value of initial-beans-in-free-pool increases the time it takes to deploy the application containing the EJB and contributes to startup time for the server. The advantage is the cost of creating EJB instances is not incurred at run time. Setting this value too high wastes memory.
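Both bounds are set in the bean's pool element in weblogic-ejb-jar.xml. A sketch for a stateless session bean, with an illustrative bean name and illustrative values:

```xml
<weblogic-enterprise-bean>
  <ejb-name>OrderProcessorEJB</ejb-name> <!-- illustrative bean name -->
  <stateless-session-descriptor>
    <pool>
      <!-- pre-create some instances at deployment; cap at expected concurrency -->
      <initial-beans-in-free-pool>10</initial-beans-in-free-pool>
      <max-beans-in-free-pool>50</max-beans-in-free-pool>
    </pool>
  </stateless-session-descriptor>
</weblogic-enterprise-bean>
```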
Target objects for invocation of finders via reflection. A pool of bean instances the container can recruit if it cannot find an instance for a particular primary key in the cache.
The entity pool contains anonymous instances (instances that do not have a primary key). These beans are not yet active (meaning ejbActivate() has not been invoked on them yet), though the EJB context has been set. Entity bean instances evicted from the entity cache are passivated and put into the pool. The tunables are initial-beans-in-free-pool and max-beans-in-free-pool. Unlike stateless session beans and MDBs, max-beans-in-free-pool has no relation to the thread count. You should increase the value of max-beans-in-free-pool if the entity bean constructor or setEntityContext() methods are expensive.
Section 10.4.1, "Use Eager Relationship Caching" Section 10.4.2, "Use JDBC Batch Operations" Section 10.4.3, "Tuned Updates" Section 10.4.4, "Using Field Groups" Section 10.4.5, "include-updates" Section 10.4.6, "call-by-reference" Section 10.4.7, "Bean-level Pessimistic Locking" Section 10.4.8, "Concurrency Strategy"
See "Relationship Caching" in Programming WebLogic Enterprise JavaBeans for Oracle WebLogic Server. In this release of WebLogic Server, if a CMR field has both relationship-caching and cascade-delete specified, the owner bean and related bean are loaded with a single SQL statement, which can provide an additional performance benefit.
fields from the database that are included in the field group. This means that if most transactions do not use a particular field that is slow to load, such as a BLOB, it can be excluded from a field-group. Similarly, if an entity bean has many fields, but a transaction uses only a small number of them, the unused fields can be excluded.
Note:
Be careful to ensure that fields accessed in the same transaction are not configured into separate field-groups. If they are, multiple database calls occur to load the same bean, when one would have been enough.
10.4.5 include-updates
This flag causes the EJB container to flush all modified entity beans to the database before executing a finder. If the application modifies the same entity bean more than once and executes a non-pk finder in between in the same transaction, multiple updates to the database are issued. This flag is turned on by default to comply with the EJB specification. If the application has transactions where two invocations of the same or different finders could return the same bean instance, and that bean instance could have been modified between the finder invocations, it makes sense to leave include-updates turned on. If not, this flag may be safely turned off. This eliminates an unnecessary flush to the database if the bean is modified again after executing the second finder. This flag is specified for each finder in the cmp-rdbms descriptor.
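In the cmp-rdbms descriptor (weblogic-cmp-rdbms-jar.xml), the flag is set per finder. A sketch with an illustrative finder name:

```xml
<weblogic-query>
  <query-method>
    <method-name>findPendingOrders</method-name> <!-- illustrative finder -->
    <method-params/>
  </query-method>
  <!-- safe only if no transaction modifies a bean between finder calls -->
  <include-updates>false</include-updates>
</weblogic-query>
```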
10.4.6 call-by-reference
When it is turned off, method parameters to an EJB are passed by value, which involves serialization. For mutable, complex types, this can be significantly expensive. Consider turning call-by-reference on for better performance when:
The application does not require call-by-value semantics, such as method parameters are not modified by the EJB.
or
If modified by the EJB, the changes need not be invisible to the caller of the method.
This flag applies to all EJBs, not just entity EJBs. It also applies to EJB invocations between servlets/JSPs and EJBs in the same application. The flag is turned off by default to comply with the EJB specification. This flag is specified at the bean-level in the WebLogic-specific deployment descriptor.
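In weblogic-ejb-jar.xml, the bean-level flag can be sketched as follows (the bean name is illustrative):

```xml
<weblogic-enterprise-bean>
  <ejb-name>CatalogEJB</ejb-name> <!-- illustrative bean name -->
  <!-- skips serialization of method parameters for same-application calls -->
  <enable-call-by-reference>true</enable-call-by-reference>
</weblogic-enterprise-bean>
```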
Exclusive: The EJB container ensures there is only one instance of an EJB for a given primary key, and this instance is shared among all concurrent transactions in the server, with the container serializing access to it. This concurrency setting generally does not provide good performance unless the EJB is used infrequently and the chances of concurrent access are small.

Database: This is the default value and the most commonly used concurrency strategy. The EJB container defers concurrency control to the database. The container maintains multiple instances of an EJB for a given primary key, and each transaction gets its own copy. In combination with this strategy, the database isolation level and bean-level pessimistic locking play a major role in determining whether concurrent access to the persistent state should be allowed. It is possible for multiple transactions to access the bean concurrently so long as it does not need to go to the database, as would happen when the value of cache-between-transactions is true. However, setting cache-between-transactions to true is unsafe and not recommended with the Database concurrency strategy.

Optimistic: The goal of the optimistic concurrency strategy is to minimize locking at the database while continuing to provide data consistency. The basic assumption is that the persistent state of the EJB is changed very rarely. The container attempts to load the bean in a nested transaction so that the isolation-level settings of the outer transaction do not cause locks to be acquired at the database. At commit time, if the bean has been modified, a predicated update is used to ensure its persistent state has not been changed by another transaction. If it has, an OptimisticConcurrencyException is thrown and must be handled by the application. Since EJBs that can use this concurrency strategy are rarely modified, turning cache-between-transactions on can boost performance significantly.
This strategy also allows commit-time verification of beans that have been read but not changed. This is done by setting the verify-rows parameter to Read in the cmp-rdbms descriptor. This provides very high data consistency while minimizing locks at the database. However, it does slow performance somewhat. It is recommended that the optimistic verification be performed using a version column: it is fastest, followed closely by timestamp, and more distantly by modified and read. The modified value does not apply if verify-rows is set to Read. When an optimistic concurrency bean is modified in a server that is part of a cluster, the server attempts to invalidate all instances of that bean cluster-wide in the expectation that doing so will prevent OptimisticConcurrencyExceptions. In some cases, it may be more cost effective to simply let other servers throw an OptimisticConcurrencyException. In this case, turn off cluster-wide invalidation by setting the cluster-invalidation-disabled flag in the cmp-rdbms descriptor.
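A sketch of the corresponding cmp-rdbms descriptor settings, assuming a version column named VERSION exists in the mapped table (bean name, table name, and column name are all illustrative):

```xml
<weblogic-rdbms-bean>
  <ejb-name>AccountEJB</ejb-name> <!-- illustrative bean name -->
  <table-map>
    <table-name>ACCOUNT</table-name>
    <!-- field-map entries omitted for brevity -->
    <verify-rows>Read</verify-rows>
    <!-- version-column verification is the fastest option -->
    <verify-columns>Version</verify-columns>
    <optimistic-column>VERSION</optimistic-column>
  </table-map>
</weblogic-rdbms-bean>
```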
ReadOnly: The ReadOnly value is the most performant. When selected, the container assumes the EJB is non-transactional and automatically turns on cache-between-transactions. Bean states are updated from the database at periodic, configurable intervals or when the bean has been programmatically invalidated. The interval between updates can cause the persistent state of the bean to become stale. This is the only concurrency strategy for which query caching can be used.
A high cache miss ratio could be indicative of an improperly sized cache. If your application uses a certain subset of beans (read primary keys) more frequently than others, it would be ideal to size your cache large enough so that the commonly used beans can remain in the cache as less commonly used beans are cycled in and out upon demand. If this is the nature of your application, you may be able to decrease your cache miss ratio significantly by increasing the maximum size of your cache. If your application doesn't necessarily use a subset of beans more frequently than others, increasing your maximum cache size may not affect your cache miss ratio. We recommend testing your application with different maximum cache sizes to determine which gives the lowest cache miss ratio. It is also important to keep in mind that your server has a finite amount of memory and therefore there is always a trade-off to increasing your cache size.
A high lock waiter ratio can indicate a suboptimal concurrency strategy for the bean. If acceptable for your application, a concurrency strategy of Database or Optimistic will allow for more parallelism than an Exclusive strategy and remove the need for locking at the EJB container level. Because locks are generally held for the duration of a transaction, reducing the duration of your transactions will free up beans more quickly and may help reduce your lock waiter ratio. To reduce transaction duration, avoid grouping large amounts of work into a single transaction unless absolutely necessary.
The lock timeout ratio is closely related to the lock waiter ratio. If you are concerned about the lock timeout ratio for your bean, first take a look at the lock waiter ratio and our recommendations for reducing it (including possibly changing your concurrency strategy). If you can reduce or eliminate the number of times a thread has to wait for a lock on a bean, you will also reduce or eliminate the number of timeouts that occur while waiting. A high lock timeout ratio may also be indicative of an improper transaction timeout value. The maximum amount of time a thread will wait for a lock is equal to the current transaction timeout value. If the transaction timeout value is set too low, threads may not wait long enough to obtain access to a bean and may time out prematurely. If this is the case, increasing the trans-timeout-seconds value for the bean may help reduce the lock timeout ratio. Take care when increasing trans-timeout-seconds, however, because doing so can cause threads to wait longer for a bean, and threads are a valuable server resource. Also, doing so may increase the request time, as a request may wait longer before timing out.
If your pool miss ratio is high, you must determine what is happening to your bean instances. There are three things that can happen to your beans.
Check your destroyed bean ratio to verify that bean instances are not being destroyed. Investigate the cause and try to remedy the situation. Examine the demand for the EJB, perhaps over a period of time.
One way to check this is via the Beans in Use Current Count and Idle Beans Count displayed in the Administration Console. If demand for your EJB spikes during a certain period of time, you may see a lot of pool misses as your pool is emptied and unable to fill additional requests. As the demand for the EJB drops and beans are returned to the pool, many of the beans created to satisfy requests may be unable to fit in the pool and are therefore removed. If this is the case, you may be able to reduce the number of pool misses by increasing the maximum size of your free pool. This may allow beans that were
created to satisfy demand during peak periods to remain in the pool so they can be used again when demand once again increases.
To reduce the number of destroyed beans, Oracle recommends against throwing non-application exceptions from your bean code except in cases where you want the bean instance to be destroyed. A non-application exception is an exception that is either a java.rmi.RemoteException (including exceptions that inherit from RemoteException) or is not defined in the throws clause of a method of an EJB's home or component interface. In general, you should investigate which exceptions are causing your beans to be destroyed, as they may be hurting performance and may indicate a problem with the EJB or a resource used by the EJB.
A high pool timeout ratio could be indicative of an improperly sized free pool. Increasing the maximum size of your free pool via the max-beans-in-free-pool setting will increase the number of bean instances available to service requests and may reduce your pool timeout ratio. Another factor affecting the number of pool timeouts is the configured transaction timeout for your bean. The maximum amount of time a thread will wait for a bean from the pool is equal to the default transaction timeout for the bean. Increasing the trans-timeout-seconds setting in your weblogic-ejb-jar.xml file will give threads more time to wait for a bean instance to become available. Users should exercise caution when increasing this value, however, since doing so may cause threads to wait longer for a bean and threads are a valuable server resource. Also, request time might increase because a request will wait longer before timing out.
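The trans-timeout-seconds setting mentioned above is configured per bean in weblogic-ejb-jar.xml. A sketch with illustrative names and values:

```xml
<weblogic-enterprise-bean>
  <ejb-name>InventoryEJB</ejb-name> <!-- illustrative bean name -->
  <transaction-descriptor>
    <!-- also bounds how long a thread waits for a bean from the pool -->
    <trans-timeout-seconds>60</trans-timeout-seconds>
  </transaction-descriptor>
</weblogic-enterprise-bean>
```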
Begin investigating a high transaction rollback ratio by examining the Section 10.5.8, "Transaction Timeout Ratio" reported in the Administration Console. If the transaction timeout ratio is higher than you expect, try to address the timeout problem first. An unexpectedly high transaction rollback ratio could be caused by a number of things. We recommend investigating the cause of transaction rollbacks to find potential problems with your application or a resource used by your application.
A high transaction timeout ratio could be caused by the wrong transaction timeout value. For example, if your transaction timeout is set too low, you may be timing out transactions before the thread is able to complete the necessary work. Increasing your transaction timeout value may reduce the number of transaction timeouts. You should exercise caution when increasing this value, however, since doing so can cause threads to wait longer for a resource before timing out. Also, request time might increase because a request will wait longer before timing out. A high transaction timeout ratio could be caused by a number of things such as a bottleneck for a server resource. We recommend tracing through your transactions to investigate what is causing the timeouts so the problem can be addressed.
Both JDT and Javac are supported in the EJB container; JDT is the default option. You can configure different compilers for appc and WebLogic Server. For appc, use the -compiler option, for example: java weblogic.appc -compiler javac ... For WebLogic Server, use the ejb-container tag in the config.xml file. For example:
<ejb-container>
  <java-compiler>jdt</java-compiler>
</ejb-container>
If you use JDT in appc, only the -keepgenerated and -forceGeneration command line options are currently supported. These options have the same meaning as when using Javac.
11 Tuning Message-Driven Beans
This chapter provides tuning and best practice information for Message-Driven Beans (MDBs).
Section 11.1, "Use Transaction Batching" Section 11.2, "MDB Thread Management" Section 11.3, "Best Practices for Configuring and Deploying MDBs Using Distributed Topics" Section 11.4, "Using MDBs with Foreign Destinations" Section 11.5, "Token-based Message Polling for Transactional MDBs Listening on Queues/Topics" Section 11.6, "Compatibility for WLS 10.0 and Earlier-style Polling"
Using batching may require reducing the number of concurrent MDB instances. If too many MDB instances are available, messages may be processed in parallel rather than in a batch. See Section 11.2, "MDB Thread Management". While batching generally increases throughput, it may also increase latency (the time it takes for an individual message to complete its MDB processing).
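Transaction batching is enabled per MDB in weblogic-ejb-jar.xml. A sketch, with an illustrative bean name and batch size:

```xml
<weblogic-enterprise-bean>
  <ejb-name>OrderMDB</ejb-name> <!-- illustrative bean name -->
  <message-driven-descriptor>
    <!-- process up to 10 messages per container-managed transaction -->
    <max-messages-in-transaction>10</max-messages-in-transaction>
  </message-driven-descriptor>
</weblogic-enterprise-bean>
```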
Section 11.2.1, "Determining the Number of Concurrent MDBs" Section 11.2.2, "Selecting a Concurrency Strategy" Section 11.2.3, "Thread Utilization When Using WebLogic Destinations" Section 11.2.4, "Limitations for Multi-threaded Topic MDBs"
Table 11-1 (column headings): Type of work manager or execute queue; Default work manager or unconstrained work manager; Default work manager with self-tuning disabled.
Transactional WebLogic MDBs use a synchronous polling mechanism to retrieve messages from JMS destinations if they are either: A) listening to non-WebLogic queues; or B) listening to a WebLogic queue and transaction batching is enabled. See Section 11.5, "Token-based Message Polling for Transactional MDBs Listening on Queues/Topics".
Every application is unique; select a concurrency strategy based on how your application performs in its environment.
In most situations, if the message stream has bursts of messages, using an unconstrained work manager with a high fair share is adequate. Once the messages in a burst are handled, the threads are returned to the self-tuning pool. In most situations, if the message arrival rate is high and constant or if low latency is required, it makes sense to reserve threads for MDBs. You can reserve threads by either specifying a work manager with a min-threads-constraint or by using a custom execute queue. If you migrate WebLogic Server 8.1 applications that have custom MDB execute queues, you can: Continue to use a custom MDB execute queue, see Appendix A, "Using the WebLogic 8.1 Thread Pool Model." Convert the MDB execute queue to a custom work manager that has a configured max-threads-constraint parameter and a high fair share setting.
Note:
You must configure the max-threads-constraint parameter to override the default concurrency of 16.
In WebLogic Server 8.1, you could increase the size of the default execute queue knowing that a larger default pool means a larger maximum MDB concurrency. Default thread pool MDBs upgraded to WebLogic Server 9.0 will have a fixed maximum of 16. To achieve MDB concurrency numbers higher than 16, you will need to create a custom work manager or custom execute queue. See Table 11-1.
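A custom work manager that raises MDB concurrency above the default of 16 can be sketched in weblogic-ejb-jar.xml as follows (work manager name, bean name, and thread count are illustrative):

```xml
<weblogic-enterprise-bean>
  <ejb-name>OrderMDB</ejb-name> <!-- illustrative bean name -->
  <!-- route MDB work to the custom work manager defined below -->
  <dispatch-policy>MdbWorkManager</dispatch-policy>
</weblogic-enterprise-bean>
<work-manager>
  <name>MdbWorkManager</name>
  <max-threads-constraint>
    <name>MdbMaxThreads</name>
    <count>32</count> <!-- illustrative; overrides the default concurrency of 16 -->
  </max-threads-constraint>
</work-manager>
```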
Non-transactional WebLogic MDBs allocate threads from the thread pool designated by the dispatch-policy as needed when there are new messages to be processed. If the MDB has successfully connected to its source destination but there are no messages to be processed, the MDB uses no threads. Transactional WebLogic MDBs with transaction batching disabled work the same as non-transactional MDBs, except for topic MDBs with a Topic Messages Distribution Mode of Compatibility (the default), in which case the MDB always limits the thread pool size to 1. The behavior of transactional MDBs with transaction batching enabled depends on whether the MDB is listening on a topic or a queue. MDBs listening on topics: With Topic Messages Distribution Mode = Compatibility, each deployed MDB uses a dedicated daemon polling thread that is created in the Non-Pooled Threads thread group. With Topic Messages Distribution Mode = One-Copy-Per-Server or One-Copy-Per-Application, the behavior is the same as for queues.
MDBs listening on queues Instead of a dedicated thread, each deployed MDB uses a token-based, synchronous polling mechanism that always uses at least one thread from the dispatch-policy. See Section 11.5, "Token-based Message Polling for Transactional MDBs Listening on Queues/Topics".
For information on how threads are allocated when WebLogic Server interoperates with MDBs that consume from Foreign destinations, see Section 11.4.2, "Thread Utilization for MDBs that Process Messages from Foreign Destinations".
Best Practices for Configuring and Deploying MDBs Using Distributed Topics
Caution:
Non-transactional Foreign Topics: Oracle recommends explicitly setting max-beans-in-free-pool to 1 for non-transactional MDBs that work with foreign (non-WebLogic) topics. Failure to do so may result in lost messages in the event of certain failures, such as the MDB application throwing Runtime or Error exceptions. Unit-of-Order: Oracle recommends explicitly setting max-beans-in-free-pool to 1 for non-transactional Compatibility mode MDBs that consume from a WebLogic JMS topic and process messages that have a WebLogic JMS Unit-of-Order value. Unit-of-Order messages in this use case may not be processed in order unless max-beans-in-free-pool is set to 1. Transactional MDBs automatically force concurrency to 1 regardless of the max-beans-in-free-pool setting.
11.3 Best Practices for Configuring and Deploying MDBs Using Distributed Topics
Message-driven beans provide a number of application design and deployment options that offer scalability and high availability when using distributed topics. For more detailed information, see Configuring and Deploying MDBs Using Distributed Topics in Programming Message-Driven Beans for Oracle WebLogic Server.
The term "Foreign destination" in this context refers to destinations that are hosted by a non-WebLogic JMS provider. It does not refer to remote WebLogic destinations.
The following sections provide information on the behavior of WebLogic Server when using MDBs that consume messages from Foreign destinations:
Section 11.4.1, "Concurrency for MDBs that Process Messages from Foreign Destinations" Section 11.4.2, "Thread Utilization for MDBs that Process Messages from Foreign Destinations"
11.4.1 Concurrency for MDBs that Process Messages from Foreign Destinations
The concurrency of MDBs that consume from destinations hosted by foreign providers (non-WebLogic JMS destinations) is determined using the same algorithm that is used for WebLogic JMS destinations.
11.4.2 Thread Utilization for MDBs that Process Messages from Foreign Destinations
The following section provides information on how threads are allocated when WebLogic Server interoperates with MDBs that process messages from foreign destinations:
Non-transactional MDBs use a foreign vendor's thread, not a WebLogic Server thread. In this situation, the dispatch-policy is ignored except for determining concurrency. Transactional MDBs run in WebLogic Server threads, as follows: MDBs listening on topics: each deployed MDB uses a dedicated daemon polling thread that is created in the Non-Pooled Threads thread group. MDBs listening on queues: instead of a dedicated thread, each deployed MDB uses a token-based, synchronous polling mechanism that always uses at least one thread from the dispatch-policy. See Section 11.5, "Token-based Message Polling for Transactional MDBs Listening on Queues/Topics".
Listening to non-WebLogic queues Listening to a WebLogic queue and transaction batching is enabled Listening to a WebLogic Topic where: Topic Messages Distribution Mode = One-Copy-Per-Server and transaction batching is enabled Topic Messages Distribution Mode = One-Copy-Per-Application and transaction batching is enabled
With synchronous polling, one or more WebLogic polling threads synchronously receive messages from the MDB's source destination and then invoke the MDB application's onMessage callback. As of WebLogic 10.3, the polling mechanism changed to a token-based approach to provide better control of the concurrent poller thread count under changing message loads. In previous releases, the thread count ramp-up could be too gradual in certain use cases. Additionally, child pollers, once awoken, could not be ramped down and returned back to the pool for certain foreign JMS providers. When a thread is returned to the thread pool with token-based polling, the thread's internal JMS consumer is closed rather than cached. This assures that messages will not be implicitly pre-fetched by certain foreign JMS Providers while there is no polling thread servicing the consumer. In addition, each MDB maintains a single token that provides permission for a given poller thread to create another thread.
On receipt of a message: A poller thread that already has the token, or that is able to acquire the token because it is not owned, wakes up an additional poller thread and gives the token to the new poller if the maximum concurrency has not yet been reached. If maximum concurrency has been reached, the poller thread simply releases the token (leaving it available to any other poller).

On finding an empty queue or topic: A poller tries to acquire the token; if successful, it polls the queue periodically. If it fails to acquire the token, it returns itself to the pool. This ensures that with an empty queue or topic, there is still at least one poller checking for messages.
At the server level, set the weblogic.mdb.message.81StylePolling system property to True to override the token-based polling behavior. At the MDB level, set the use81-style-polling element under message-driven-descriptor to override the token-based polling behavior. When using foreign transactional MDBs with the WLS 8.1-style polling flag, some foreign vendors require a permanently allocated thread per concurrent MDB instance. These threads are drawn from the pool specified by dispatch-policy and are not returned to the pool until the MDB is undeployed. Since these threads are not shared, the MDB can starve other resources in the same pool. In this situation, you may need to increase the number of threads in the pool. With the token-based polling approach for such foreign vendors, the thread's internal JMS message consumer is closed rather than cached to assure that messages will not be reserved by the destination for the specific consumer.
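The MDB-level override described above can be sketched in weblogic-ejb-jar.xml as follows (bean name illustrative); the server-level equivalent is adding -Dweblogic.mdb.message.81StylePolling=true to the server start command:

```xml
<weblogic-enterprise-bean>
  <ejb-name>LegacyPollingMDB</ejb-name> <!-- illustrative bean name -->
  <message-driven-descriptor>
    <!-- revert this MDB to WLS 8.1-style polling -->
    <use81-style-polling>true</use81-style-polling>
  </message-driven-descriptor>
</weblogic-enterprise-bean>
```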
12 Tuning Data Sources
This chapter provides tips on how to get the best performance from your WebLogic data sources.
Section 12.1, "Tune the Number of Database Connections" Section 12.2, "Waste Not" Section 12.3, "Use Test Connections on Reserve with Care" Section 12.4, "Cache Prepared and Callable Statements" Section 12.5, "Using Pinned-To-Thread Property to Increase Performance" Section 12.6, "Database Listener Timeout under Heavy Server Loads" Section 12.7, "Disable Wrapping of Data Type Objects" Section 12.8, "Advanced Configurations for Oracle Drivers and Databases" Section 12.9, "Use Best Design Practices"
Waste Not
JNDI lookups are relatively expensive, so caching an object that required a lookup in client or application code avoids incurring this performance hit more than once. Once client or application code has a connection, maximize the reuse of this connection rather than closing and reacquiring a new connection. While acquiring and returning an existing connection is much less expensive than creating a new one, excessive acquisitions and returns to pools create contention in the connection pool and degrade application performance. Don't hold connections any longer than necessary to complete the required work. Getting a connection once, completing all necessary work, and returning it as soon as possible provides the best balance for overall performance.
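The lookup-caching pattern can be sketched as follows. The JndiCache class and its pluggable resolver are illustrative names, not WebLogic APIs; in a real application the resolver would wrap new InitialContext().lookup(name) and return a javax.sql.DataSource, but a stand-in resolver is used here so the pattern is self-contained:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import java.util.function.Function;

// Minimal sketch of caching JNDI lookup results so each name is
// resolved only once, however many times application code asks for it.
public class JndiCache {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();
    private final Function<String, Object> resolver;

    public JndiCache(Function<String, Object> resolver) {
        this.resolver = resolver;
    }

    // Returns the cached object, performing the expensive lookup
    // at most once per JNDI name.
    public Object lookup(String name) {
        return cache.computeIfAbsent(name, resolver);
    }

    public static void main(String[] args) {
        AtomicInteger lookups = new AtomicInteger();
        JndiCache cache = new JndiCache(name -> {
            lookups.incrementAndGet();      // counts real JNDI round trips
            return "datasource:" + name;    // stand-in for a DataSource
        });
        cache.lookup("jdbc/myDS");
        cache.lookup("jdbc/myDS");          // served from cache
        System.out.println(lookups.get());  // prints 1
    }
}
```

The same idea applies to any looked-up resource (connection factories, destinations): resolve once, then reuse the cached reference.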
The exception thrown is a ResourceDeadException, and the underlying driver exception is "Socket read timed out". The workaround is to increase the connection timeout of the database server using the following settings:
sqlnet.ora: SQLNET.INBOUND_CONNECT_TIMEOUT=180
listener.ora: INBOUND_CONNECT_TIMEOUT_listener_name=180
13
Tuning Transactions
This chapter provides background and tuning information for transaction optimization.
Section 13.1, "Logging Last Resource Transaction Optimization" Section 13.2, "Read-only, One-Phase Commit Optimizations"
Typical two-phase transactions in JMS applications usually involve both a JMS server and a database server. The LLR option can as much as double performance compared to XA. The safety of the JDBC LLR option contrasts with well known but less-safe XA optimizations such as "last-agent", "last-participant", and "emulate-two-phase-commit" that are available from other vendors as well as WebLogic. JDBC LLR works by storing two-phase transaction records in a database table rather than in the transaction manager log (the TLOG).
See "Logging Last Resource Transaction Optimization" in Programming JTA for Oracle WebLogic Server.
Oracle recommends that you read and understand "Logging Last Resource Transaction Optimization" in Programming JTA for Oracle WebLogic Server and "Programming Considerations and Limitations for LLR Data Sources" in Configuring and Managing JDBC Data Sources for Oracle WebLogic Server. LLR has a number of important administration and design implications. JDBC LLR generally improves performance of two-phase transactions that involve SQL updates, deletes, or inserts. LLR generally reduces the performance of two-phase transactions where all SQL operations are read-only (just selects).
JDBC LLR pools provide no performance benefit to WebLogic JDBC stores. WebLogic JDBC stores are fully transactional but do not use JTA (XA) transactions on their internal JDBC connections. Consider using LLR instead of the less safe "last-agent" optimization for connectors, and the less safe "emulate-two-phase-commit" option for JDBC connection pools (formerly known as the "enable two-phase commit" option for pools that use non-XA drivers). On Oracle databases, heavily used LLR tables may become fragmented over time, which can lead to unused extents. This is likely due to the highly transient nature of the LLR table's data. To help avoid the issue, set PCT_FREE to 5 and PCT_USED to 95 on the LLR table. Also periodically defragment using the ALTER TABLESPACE [tablespace-name] COALESCE command.
14
This chapter explains how to get the most out of your applications by implementing the administrative performance tuning features available with WebLogic JMS.
Section 14.1, "JMS Performance & Tuning Check List" Section 14.2, "Handling Large Message Backlogs" Section 14.3, "Cache and Re-use Client Resources" Section 14.4, "Tuning Distributed Queues" Section 14.5, "Tuning Topics" Section 14.6, "Tuning for Large Messages" Section 14.7, "Defining Quota" Section 14.8, "Blocking Senders During Quota Conditions" Section 14.9, "Tuning MessageMaximum" Section 14.10, "Setting Maximum Message Size for Network Protocols" Section 14.11, "Compressing Messages" Section 14.12, "Paging Out Messages To Free Up Memory" Section 14.13, "Controlling the Flow of Messages on JMS Servers and Destinations" Section 14.14, "Handling Expired Messages" Section 14.15, "Tuning Applications Using Unit-of-Order" Section 14.16, "Using One-Way Message Sends" Section 14.17, "Tuning the Messaging Performance Preference Option" Section 14.18, "Client-side Thread Pools" Section 14.19, "Best Practices for JMS .NET Client Applications"
Always configure quotas; see Section 14.7, "Defining Quota." Verify that the default paging settings apply to your needs; see Section 14.12, "Paging Out Messages To Free Up Memory." Paging lowers performance but may be required if JVM memory is insufficient.
Avoid large message backlogs. See Section 14.2, "Handling Large Message Backlogs." Create and use custom connection factories with all applications instead of using default connection factories, including when using MDBs. Default connection factories are not tunable, while custom connection factories provide many options for performance tuning. Write applications so that they cache and re-use JMS client resources, including JNDI contexts and lookups, and JMS connections, sessions, consumers, or producers. These resources are relatively expensive to create. For information on detecting when caching is needed, as well as on built-in pooling features, see Section 14.3, "Cache and Re-use Client Resources." For asynchronous consumers and MDBs, tune MessagesMaximum on the connection factory. Increasing MessagesMaximum can improve performance; decreasing MessagesMaximum to its minimum value can lower performance, but helps ensure that messages do not end up waiting for a consumer that's already processing a message. See Section 14.9, "Tuning MessageMaximum." Avoid single-threaded processing when possible. Use multiple concurrent producers and consumers and ensure that enough threads are available to service them. Tune server-side applications so that they have enough instances. Consider creating dedicated thread pools for these applications. See Chapter 11, "Tuning Message-Driven Beans." For client-side applications with asynchronous consumers, tune client-side thread pools using Section 14.18, "Client-side Thread Pools." Tune persistence as described in Chapter 8, "Tuning the WebLogic Persistent Store." In particular, it's normally best for multiple JMS servers, destinations, and other services to share the same store so that the store can aggregate concurrent requests into single physical I/O requests, and to reduce the chance that a JTA transaction spans more than one store.
Multiple stores should only be considered once it's been established that a single store is not scaling to handle the current load. If you have large messages, see Section 14.6, "Tuning for Large Messages." Prevent unnecessary message routing in a cluster by carefully configuring connection factory targets. Messages potentially route through two servers, as they flow from a client, through the client's connection host, and then on to a final destination. For server-side applications, target connection factories to the cluster. For client-side applications that work with a distributed destination, target connection factories only to servers that host the distributed destination's members. For client-side applications that work with a singleton destination, target the connection factory to the same server that hosts the destination. If JTA transactions include both JMS and JDBC operations, consider enabling the JDBC LLR optimization. LLR is a commonly used safe "ACID" optimization that can lead to significant performance improvements, with some drawbacks. See Chapter 13, "Tuning Transactions." If you are using Java clients, avoid thin Java clients except when a small jar size is more important than performance. Thin clients use the slower IIOP protocol even when T3 is specified, so use a full Java client instead. See Programming Stand-alone Clients for Oracle WebLogic Server. Tune JMS Store-and-Forward according to Chapter 15, "Tuning WebLogic JMS Store-and-Forward."
Tune a WebLogic Messaging Bridge according to Chapter 16, "Tuning WebLogic Message Bridge." If you are using non-persistent, non-transactional remote producer clients, consider enabling one-way calls. See Section 14.16, "Using One-Way Message Sends." Consider using JMS distributed queues. See "Using Distributed Queues" in Programming JMS for Oracle WebLogic Server. If you are already using distributed queues, see Section 14.4, "Tuning Distributed Queues." Consider using advanced distributed topic features (PDTs). See "Developing Advanced Pub/Sub Applications" in Programming JMS for Oracle WebLogic Server. If your applications use topics, see Section 14.5, "Tuning Topics." Avoid configuring sorted destinations, including priority-sorted destinations. FIFO or LIFO destinations are the most efficient. Destination sorting can be expensive when there are large message backlogs; even a backlog of a few hundred messages can lower performance. Use careful selector design. See "Filtering Messages" in Programming JMS for Oracle WebLogic Server. Run applications on the same WebLogic Servers that are also hosting destinations. This eliminates networking and some or all marshalling overhead, and can heavily reduce network and CPU usage. It also helps ensure that transactions are local to a single server. This is one of the major advantages of using an application server's embedded messaging.
Indicates consumers may not be capable of handling the incoming message load, are failing, or are not properly load balanced across a distributed queue. Can lead to out-of-memory on the server, which in turn prevents the server from doing any work. Can lead to high garbage collection (GC) overhead. A JVM's GC overhead is partially proportional to the number of live objects in the JVM.
Follow the JMS tuning recommendations as described in Section 14.1, "JMS Performance & Tuning Check List." Check for programming errors in newly developed applications. In particular, ensure that non-transactional consumers are acknowledging messages, that transactional consumers are committing transactions, that plain javax.jms applications call javax.jms.Connection.start(), and that transaction timeouts are tuned to reflect the needs of your particular application. Here are some symptoms of programming errors: consumers are not receiving any messages (make sure they called start()), high "pending" counts for queues, already processed persistent messages re-appearing after a shutdown and restart, and already processed transactional messages re-appearing after a delay (the default JTA timeout is 30 seconds, and the default transacted session timeout is one hour).
Check WebLogic statistics for queues that are not being serviced by consumers. If you're having a problem with distributed queues, see Section 14.4, "Tuning Distributed Queues." Check WebLogic statistics for topics with high pending counts. This usually indicates that there are topic subscriptions that are not being serviced. There may be a slow or unresponsive consumer client that's responsible for processing the messages, it's possible that a durable subscription may no longer be needed and should be deleted, or the messages may be accumulating due to delayed distributed topic forwarding. You can check statistics for individual durable subscriptions on the administration console. A durable subscription with a large backlog may have been created by an application but never deleted. Unserviced durable subscriptions continue to accumulate topic messages until they are either administratively destroyed or unsubscribed by a standard JMS client. Understand distributed topic behavior when not all members are active. In distributed topics, each message produced to a particular topic member is forwarded to each remote topic member. If a remote topic member is unavailable, the local topic member stores each produced message for later forwarding. Therefore, if a topic member is unavailable for a long period of time, large backlogs can develop on the active members. In some applications, this backlog can be addressed by setting expiration times on the messages. See Section 14.14.1, "Defining a Message Expiration Policy." In certain applications it may be fine to automatically delete old unprocessed messages. See Section 14.14, "Handling Expired Messages." For transactional MDBs, consider using MDB transaction batching, as this can yield a five-fold improvement in some use cases. Leverage distributed queues and add more JVMs to the cluster (in order to add more distributed queue member instances).
For example, split a 200,000 message backlog across 4 JVMs at 50,000 messages per JVM, instead of 100,000 messages per JVM. For client applications, use asynchronous consumers instead of synchronous consumers when possible. Asynchronous consumers can have significantly lower network overhead and lower latency, and do not block a thread while waiting for a message. For synchronous consumer client applications, consider: enabling prefetch, using CLIENT_ACKNOWLEDGE to enable acknowledging multiple consumed messages at a time, and using DUPS_OK_ACKNOWLEDGE instead of AUTO_ACKNOWLEDGE. For asynchronous consumer client applications, consider using DUPS_OK_ACKNOWLEDGE instead of AUTO_ACKNOWLEDGE. Leverage batching. For example, include multiple messages in each transaction, or send one larger message instead of many smaller messages. For non-durable subscriber client-side applications that can tolerate missing ("dropped") messages, investigate MULTICAST_NO_ACKNOWLEDGE. This mode broadcasts messages concurrently to subscribers over UDP multicast.
Set lower quotas. See Section 14.7, "Defining Quota." Use fewer producer threads. Tune the sender blocking timeout that occurs during a quota condition, as described in Section 14.8, "Blocking Senders During Quota Conditions." The timeout is tunable on the connection factory. Tune producer flow control, which automatically slows down producer calls under threshold conditions. See Section 14.13, "Controlling the Flow of Messages on JMS Servers and Destinations." Consider modifying the application to implement flow control. For example, some applications do not allow producers to inject more messages until a consumer has successfully processed the previous batch of produced messages (a windowing protocol). Other applications might implement a request/reply algorithm where a new request isn't submitted until the previous reply is received (essentially a windowing protocol with a window size of 1). In some cases, JMS tuning is not required, as the synchronous flow from the RMI/EJB/Servlet is adequate.
It puts back-pressure on the downstream flow that is calling the producer. Sometimes the downstream flow cannot handle this back-pressure, and a hard-to-handle backlog develops behind the producer. The location of the backlog depends on what's calling the producer. For example, if the producer is being called by a servlet, the backlog might manifest as packets accumulating on the incoming network socket or network card. Blocking calls on server threads can lead to thread starvation, too many active threads, or even deadlocks. Usually the key to addressing this problem is to ensure that the producer threads run in a size-limited, dedicated thread pool, as this ensures that the blocking threads do not interfere with activity in other thread pools. For example, if an EJB or servlet is calling a "send" that might block for a significant time: configure a custom work manager with a max threads constraint, and set the dispatch-policy of the EJB/servlet to reference this work manager.
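The effect of a size-limited, dedicated pool can be sketched with a plain java.util.concurrent executor. This is only an analogy for a WebLogic work manager with a max threads constraint; the class and method names below are illustrative, not WebLogic APIs:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;

// Sketch: confine potentially blocking "send" work to its own small,
// bounded pool so blocked threads cannot starve other application work.
public class BlockingSendPool {
    // At most 2 threads ever block in sends; other pools are unaffected.
    private final ExecutorService sendPool = Executors.newFixedThreadPool(2);

    public Future<?> submitSend(Runnable send) {
        return sendPool.submit(send);
    }

    public void shutdown() throws InterruptedException {
        sendPool.shutdown();
        sendPool.awaitTermination(10, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws Exception {
        BlockingSendPool pool = new BlockingSendPool();
        Future<?> f = pool.submitSend(() -> {
            // stand-in for a JMS send that may block on a quota condition
        });
        f.get();        // wait for the send to complete
        pool.shutdown();
        System.out.println("done");
    }
}
```

In WebLogic itself you would express the same isolation declaratively: a work manager with a max-threads-constraint, referenced by the EJB/servlet dispatch-policy, rather than an executor created in application code.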
For server-side applications, WebLogic automatically wraps and pools JMS resources that are accessed using a resource reference. See "Enhanced Support for Using WebLogic JMS with EJBs and Servlets" in Programming JMS for Oracle WebLogic Server. This pooling code can be inefficient at pooling producers if the target destination changes frequently, but there's a simple work-around: use anonymous producers by passing null for the destination when the application calls createProducer() and then instead pass the desired destination into each send call.
To check for heavy JMS resource allocation or leaks, you can monitor MBean statistics and/or use your particular JVM's built-in facilities. You can monitor MBean statistics using the console, WLST, or Java code. Check JVM heap statistics for memory leaks or unexpectedly high allocation counts (called a JRA profile in JRockit). Similarly, check WebLogic statistics for memory leaks or unexpectedly high allocation counts.
Ensure that your application is creating enough consumers and that the consumers' connection factory is tuned using the available load balancing options. In particular, consider disabling the default server affinity setting. Change applications to periodically close and recreate consumers. This forces consumers to re-load balance. Consume from individual queue members instead of from the distributed queue's logical name. Each distributed queue member is individually advertised in JNDI as jms-server-name@distributed-destination-jndi-name. Configure the distributed queue to enable forwarding. Distributed queue forwarding automatically forwards messages that have idled on a member destination without consumers to a member that has consumers. This approach may not be practical for high message load applications.
Note:
Queue forwarding is not compatible with the WebLogic JMS Unit-of-Order feature, as it can cause messages to be delivered out of order.
See "Using Distributed Destinations" in Programming JMS for Oracle WebLogic Server and Configuring Advanced JMS System Resources in Configuring and Managing JMS for Oracle WebLogic Server.
You may want to convert singleton topics to distributed topics. A distributed topic with a Partitioned policy generally outperforms the Replicated policy choice. Oracle highly recommends leveraging MDBs to process Topic messages, especially when working with Distributed Topics. MDBs automate the creation and servicing of multiple subscriptions and also provide high scalability options to automatically distribute the messages for a single subscription across multiple Distributed Topic members. There is a Sharable subscription extension that allows messages on a single topic subscription to be processed in parallel by multiple subscribers on multiple JVMs. WebLogic MDBs leverage this feature when they are not in Compatibility mode. If produced messages are failing to load balance evenly across the members of a Partitioned Distributed Topic, you may need to change the configuration of your producer connection factories to disable server affinity (enabled by default). Before using any of these previously mentioned advanced features, Oracle recommends fully reviewing the following related documentation: "Configuring and Deploying MDBs Using Distributed Topics" in Programming Message-Driven Beans for Oracle WebLogic Server "Developing Advanced Pub/Sub Applications" in Configuring and Managing JMS for Oracle WebLogic Server "Advanced Programming with Distributed Destinations Using the JMS Destination Availability Helper API" in Configuring and Managing JMS for Oracle WebLogic Server
Section 14.9, "Tuning MessageMaximum" Section 14.10, "Setting Maximum Message Size for Network Protocols" Section 14.11, "Compressing Messages" Section 14.12, "Paging Out Messages To Free Up Memory"
Defining Quota
topics is to assume that each current JMS message consumes 256 bytes of memory plus an additional 256 bytes of memory for each subscriber that hasn't acknowledged the message yet. For example, if there are 3 subscribers on a topic, then a single published message that hasn't been processed by any of the subscribers consumes 256 + 256*3 = 1024 bytes even when the message is paged out. Although message header memory usage is typically significantly less than these rules of thumb indicate, it is a best practice to make conservative estimates on memory utilization. In prior releases, there were multiple levels of quotas: destinations had their own quotas and would also have to compete for quota within a JMS server. In this release, there is only one level of quota: destinations can have their own private quota or they can compete with other destinations using a shared quota. In addition, a destination that defines its own quota no longer also shares space in the JMS server's quota. Although JMS servers still allow the direct configuration of message and byte quotas, these options are only used to provide quota for destinations that do not refer to a quota resource.
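The rule of thumb above can be expressed as a small helper. This is illustrative only; the 256-byte figures are the conservative estimates stated in the text, not measured values:

```java
// Conservative per-message memory estimate for a topic, per the rule of
// thumb: 256 bytes for the message plus 256 bytes for each subscriber
// that has not yet acknowledged it (applies even when paged out).
public class TopicMemoryEstimate {
    static long bytesPerMessage(int unackedSubscribers) {
        return 256L + 256L * unackedSubscribers;
    }

    static long backlogBytes(long pendingMessages, int unackedSubscribers) {
        return pendingMessages * bytesPerMessage(unackedSubscribers);
    }

    public static void main(String[] args) {
        // 3 subscribers, none have processed the message yet: 1024 bytes
        System.out.println(bytesPerMessage(3));
        // a 100,000-message backlog with 3 unacknowledged subscribers
        System.out.println(backlogBytes(100_000, 3)); // ~100 MB of heap
    }
}
```

Estimates like this are a reasonable starting point for sizing quotas against available JVM heap.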
Quota Sharing
Quota Policy
For more information about quota configuration parameters, see QuotaBean in the Oracle WebLogic Server MBean Reference. For instructions on configuring a quota resource using the Administration Console, see "Create a quota for destinations" in the Oracle WebLogic Server Administration Console Help.
The Quota parameter of a destination defines which quota resource is used to enforce quota for the destination. This value is dynamic, so it can be changed at any time. However, if there are unsatisfied requests for quota when the quota resource is changed, then those requests will fail with a javax.jms.ResourceAllocationException.
Note:
Outstanding requests for quota will fail when the quota resource is changed. This refers not to changes to the message and byte attributes of the quota resource, but to a destination switching to a different quota resource.
Section 14.8.1, "Defining a Send Timeout on Connection Factories" Section 14.8.2, "Specifying a Blocking Send Policy on JMS Servers"
Follow the directions for navigating to the JMS Connection Factory: Configuration: Flow Control page in "Configure message flow control" in the Oracle WebLogic Server Administration Console Help. In the Send Timeout field, enter the amount of time, in milliseconds, a sender will block messages when there is insufficient space on the message destination. Once the specified waiting period ends, one of the following results will occur:
2.
If sufficient space becomes available before the timeout period ends, the operation continues.
If sufficient space does not become available before the timeout period ends, you receive a resource allocation exception. If you choose not to enable the blocking send policy by setting this value to 0, then you will receive a resource allocation exception whenever sufficient space is not available on the destination. For more information about the Send Timeout field, see "JMS Connection Factory: Configuration: Flow Control" in the Oracle WebLogic Server Administration Console Help.
3.
Click Save.
Follow the directions for navigating to the JMS Server: Configuration: Thresholds and Quotas page of the Administration Console in "Configure JMS server thresholds and quota" in Oracle WebLogic Server Administration Console Help. From the Blocking Send Policy list box, select one of the following options:
2.
FIFO - All send requests for the same destination are queued up one behind the other until space is available. No send request is permitted to complete when there is another send request waiting for space before it.
Preemptive - A send operation can preempt other blocking send operations if space is available. That is, if there is sufficient space for the current request, then that space is used even if there are previous requests waiting for space.
For more information about the Blocking Send Policy field, see "JMS Server: Configuration: Thresholds and Quota" in the Oracle WebLogic Server Administration Console Help.
3.
Click Save.
For example, if the JMS application acknowledges 50 messages at a time, set the MessagesMaximum value to 101.
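The example figure follows the pattern 2 × (messages acknowledged per batch) + 1, sketched below. The formula is inferred from the example in the text; verify it against the tuning guidance for your release:

```java
// Suggested MessagesMaximum for consumers that acknowledge in batches:
// 2 * (messages acknowledged per batch) + 1, matching the example in
// the text (50-message batches -> 101).
public class MessagesMaximumRule {
    static int messagesMaximum(int ackBatchSize) {
        return 2 * ackBatchSize + 1;
    }

    public static void main(String[] args) {
        System.out.println(messagesMaximum(50)); // prints 101
    }
}
```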
Increased memory usage on the client. Affinity to an existing client as its pipeline fills with messages. For example: if MessagesMaximum has a value of 10,000,000, the first consumer client to connect will get all messages that have already arrived at the destination. This condition leaves other consumers without any messages and creates an unnecessary backlog of messages in the first consumer that may cause the system to run out of memory. "Packet is too large" exceptions and stalled consumers. If the aggregate size of the messages pushed to a consumer is larger than the current protocol's maximum message size (the default size is 10 MB, configured on a per-WebLogic Server instance basis using the console and on a per-client basis using the -Dweblogic.MaxMessageSize command line property), message delivery fails.
Note:
This setting applies to all WebLogic Server network packets delivered to the client, not just JMS related packets.
Connection factories "Configure default delivery parameters" in the Oracle WebLogic Server Administration Console Help.
Store-and-Forward (SAF) remote contexts "Configure SAF remote contexts" in the Oracle WebLogic Server Administration Console Help.
Once configured, message compression is triggered on producers for client sends, on connection factories for message receives and message browsing, or through SAF forwarding. Messages are compressed using GZIP. Compression only occurs when message producers and consumers are located on separate server instances where messages must cross a JVM boundary, typically across a network connection when WebLogic domains reside on different machines. Decompression automatically occurs on the client side and only when the message content is accessed, except for the following situations:
Using message selectors on compressed XML messages can cause decompression, since the message body must be accessed in order to filter them. For more information on defining XML message selectors, see "Filtering Messages" in Programming JMS for Oracle WebLogic Server. Interoperating with earlier versions of WebLogic Server can cause decompression. For example, when using the Messaging Bridge, messages are decompressed when sent from the current release of WebLogic Server to a receiving side that is an earlier version of WebLogic Server.
On the server side, messages always remain compressed, even when they are written to disk.
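As noted above, WebLogic compresses message bodies with GZIP. The standard java.util.zip classes illustrate the mechanism; this sketch is independent of WebLogic and simply shows GZIP round-tripping a payload:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Round-trip a repetitive payload through GZIP, the algorithm WebLogic
// uses for message compression, showing the size reduction and that the
// content is recovered intact on decompression.
public class GzipDemo {
    static byte[] gzip(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    static byte[] gunzip(byte[] data) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(data))) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = gz.read(buf)) > 0) bos.write(buf, 0, n);
            return bos.toByteArray();
        }
    }

    public static void main(String[] args) throws IOException {
        byte[] payload = "<order item='widget'/>".repeat(1000)
                .getBytes(StandardCharsets.UTF_8);
        byte[] compressed = gzip(payload);
        // Repetitive XML-like payloads compress very well.
        System.out.println(payload.length + " -> " + compressed.length);
    }
}
```

This is also why compression pays off mainly for large, repetitive payloads crossing a JVM boundary: small or already-compressed bodies gain little and still pay the CPU cost.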
mw_home\user_projects\domains\domainname\servers\servername\tmp
where domainname is the root directory of your domain, typically c:\Oracle\Middleware\user_projects\domains\domainname, which is parallel to the directory in which WebLogic Server program files are stored, typically c:\Oracle\Middleware\wlserver_10.x. To configure the Message Paging Directory attribute, see "Configure general JMS server properties" in Oracle WebLogic Server Administration Console Help.
Section 14.13.1, "How Flow Control Works" Section 14.13.2, "Configuring Flow Control" Section 14.13.3, "Flow Control Thresholds"
As producers slow themselves down, the threshold condition gradually corrects itself until the server/destination is unarmed. At this point, a producer is allowed to increase its production rate, but not necessarily to the maximum possible rate. In fact, its message flow continues to be controlled (even though the server/destination is no longer armed) until it reaches its prescribed flow maximum, at which point it is no longer flow controlled.
Flow Interval
Table 14-2 (Cont.) Flow Control Parameters
Attribute: Flow Steps
Description: The number of steps used when a producer is adjusting its flow from the Flow Minimum amount of messages to the Flow Maximum amount, or vice versa. Specifically, the Flow Interval adjustment period is divided into the number of Flow Steps (for example, 60 seconds divided by 6 steps is 10 seconds per step). Also, the movement (that is, the rate of adjustment) is calculated by dividing the difference between the Flow Maximum and the Flow Minimum into steps. At each Flow Step, the flow is adjusted upward or downward, as necessary, based on the current conditions, as follows: The downward movement (the decay) is geometric over the specified period of time (Flow Interval) and according to the specified number of Flow Steps (for example, 100, 50, 25, 12.5). The movement upward is linear: the difference is simply divided by the number of Flow Steps.
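The up/down adjustment shapes described above can be sketched as follows. This is an illustration of geometric decay versus linear ramp-up, not WebLogic's actual implementation:

```java
import java.util.Arrays;

// Illustrates the flow control adjustment shapes described in the table:
// downward movement decays geometrically toward Flow Minimum, upward
// movement climbs linearly toward Flow Maximum, one value per Flow Step.
public class FlowSteps {
    // Geometric decay: halve the rate at each step (one possible decay
    // curve, matching the 100, 50, 25, 12.5 example in the table).
    static double[] decay(double start, int steps) {
        double[] rates = new double[steps];
        double r = start;
        for (int i = 0; i < steps; i++) {
            rates[i] = r;
            r /= 2;
        }
        return rates;
    }

    // Linear ramp: split (max - min) evenly across the steps.
    static double[] ramp(double min, double max, int steps) {
        double[] rates = new double[steps];
        double inc = (max - min) / steps;
        for (int i = 0; i < steps; i++) {
            rates[i] = min + inc * (i + 1);
        }
        return rates;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(decay(100, 4))); // [100.0, 50.0, 25.0, 12.5]
        System.out.println(Arrays.toString(ramp(12.5, 100, 4)));
    }
}
```

The asymmetry is deliberate: producers back off quickly when a threshold is armed, but recover gradually so a corrected condition is not immediately re-triggered.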
For more information about the flow control fields, and the valid and default values for them, see "JMS Connection Factory: Configuration: Flow Control" in the Oracle WebLogic Server Administration Console Help.
For detailed information about other JMS server and destination threshold and quota fields, and the valid and default values for them, see the following pages in the Administration Console Online Help:
"JMS Server: Configuration: Thresholds and Quotas" "JMS Queue: Configuration: Thresholds and Quotas" "JMS Topic: Configuration: Thresholds and Quotas"
over how the system searches for expired messages and how it handles them when they are encountered. Active message expiration ensures that expired messages are cleaned up immediately. Moreover, expired message auditing gives you the option of tracking expired messages, either by logging when a message expires or by redirecting expired messages to a defined error destination.
Section 14.14.1, "Defining a Message Expiration Policy" Section 14.14.7, "Tuning Active Message Expiration"
Section 14.14.2, "Configuring an Expiration Policy on Topics" Section 14.14.3, "Configuring an Expiration Policy on Queues" Section 14.14.4, "Configuring an Expiration Policy on Templates" Section 14.14.5, "Defining an Expiration Logging Policy"
Follow the directions for navigating to the JMS Topic: Configuration: Delivery Failure page in "Configure topic message delivery failure options" in the Oracle WebLogic Server Administration Console Help. From the Expiration Policy list box, select an expiration policy option.
2.
Discard - Expired messages are removed from the system. The removal is not logged and the message is not redirected to another location.
Log - Removes expired messages and writes an entry to the server log file indicating that the messages were removed from the system. You define the actual information that will be logged in the Expiration Logging Policy field in the next step.
Redirect - Moves expired messages from their current location into the Error Destination defined for the topic.
For more information about the Expiration Policy options for a topic, see "JMS Topic: Configuration: Delivery Failure" in the Oracle WebLogic Server Administration Console Help.
3.
If you selected the Log expiration policy in previous step, use the Expiration Logging Policy field to define what information about the message is logged. For more information about valid Expiration Logging Policy values, see Section 14.14.5, "Defining an Expiration Logging Policy".
4.
Click Save.
Follow the directions for navigating to the JMS Queue: Configuration: Delivery Failure page in "Configure queue message delivery failure options" in the Oracle WebLogic Server Administration Console Help. From the Expiration Policy list box, select an expiration policy option.
2.
Discard - Expired messages are removed from the system. The removal is not logged and the message is not redirected to another location.
Log - Removes expired messages from the queue and writes an entry to the server log file indicating that the messages were removed from the system. You define the actual information that will be logged in the Expiration Logging Policy field described in the next step.
Redirect - Moves expired messages from the queue and into the Error Destination defined for the queue.
For more information about the Expiration Policy options for a queue, see "JMS Queue: Configuration: Delivery Failure" in the Oracle WebLogic Server Administration Console Help.
3.
If you selected the Log expiration policy in the previous step, use the Expiration Logging Policy field to define what information about the message is logged. For more information about valid Expiration Logging Policy values, see Section 14.14.5, "Defining an Expiration Logging Policy".
4.
Click Save
1. Follow the directions for navigating to the JMS Template: Configuration: Delivery Failure page in "Configure JMS template message delivery failure options" in the Oracle WebLogic Server Administration Console Help.
2. In the Expiration Policy list box, select an expiration policy option:
Discard: Expired messages are removed from the messaging system. The removal is not logged and the message is not redirected to another location.
Log: Removes expired messages and writes an entry to the server log file indicating that the messages were removed from the system. The information that is logged is defined by the Expiration Logging Policy field described in the next step.
Redirect: Moves expired messages from their current location into the Error Destination defined for the destination.
For more information about the Expiration Policy options for a template, see "JMS Template: Configuration: Delivery Failure" in the Oracle WebLogic Server Administration Console Help.
3. If you selected the Log expiration policy in the previous step, use the Expiration Logging Policy field to define what information about the message is logged. For more information about valid Expiration Logging Policy values, see Section 14.14.5, "Defining an Expiration Logging Policy".
4. Click Save.
For example, valid Expiration Logging Policy values include:
JMSPriority, Name, Address, City, State, Zip
%header%, Name, Address, City, State, Zip
JMSCorrelationID, %properties%
The JMSMessageID field is always logged and cannot be turned off. Therefore, if the Expiration Logging Policy is not defined (that is, none) or is defined as an empty string, then the output to the log file contains only the JMSMessageID of the message.
If there are no header fields to display, the line for header fields is not displayed. If there are no user properties to display, that line is not displayed. If there are no header fields and no properties, the closing </ExpiredJMSMessage> tag is not necessary, as the opening tag can be terminated with a closing bracket (/>).
14-18 Performance and Tuning for Oracle WebLogic Server
For example:
<ExpiredJMSMessage JMSMessageID='ID:N<223476.1022177121567.1' />
All values are delimited with double quotes. All string values are limited to 32 characters in length. Requested fields and/or properties that do not exist are not displayed. Requested fields and/or properties that exist but have no value (a null value) are displayed as null (without single quotes). Requested fields and/or properties that are empty strings are displayed as a pair of single quotes with no space between them. For example:
<ExpiredJMSMessage JMSMessageID='ID:N<851839.1022176920344.0' >
<UserProperties First='Any string longer than 32 char ...' Second=null Third='' />
</ExpiredJMSMessage>
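The value-rendering rules above (32-character truncation, unquoted null, paired single quotes for an empty string) can be sketched in plain Java. This is an illustrative helper, not Oracle's implementation; in particular, the exact cut point before the trailing " ..." is an assumption.

```java
public class ExpiredValueFormat {
    // Illustrative sketch of the value-rendering rules described above
    // (not WebLogic's actual code). Assumption: long strings are cut at
    // 32 characters and marked with a trailing " ...".
    static String render(String value) {
        if (value == null) {
            return "null";   // null values: the word null, no quotes
        }
        if (value.isEmpty()) {
            return "''";     // empty string: a pair of single quotes
        }
        if (value.length() > 32) {
            return "'" + value.substring(0, 32) + " ...'";
        }
        return "'" + value + "'";
    }
}
```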
14.14.8 Configuring a JMS Server to Actively Scan Destinations for Expired Messages
Follow these directions to define how often a JMS server actively scans its destinations for expired messages. The default value is 30 seconds, which means the JMS server waits 30 seconds between scans.
1. Follow the directions for navigating to the JMS Server: Configuration: General page of the Administration Console in "Configure general JMS server properties" in the Oracle WebLogic Server Administration Console Help.
2. In the Scan Expiration Interval field, enter the amount of time, in seconds, that you want the JMS server to pause between its cycles of scanning its destinations for expired messages to process. To disable active scanning, enter a value of 0 seconds; expired messages are then passively removed from the system as they are discovered. For more information about the Expiration Scan Interval attribute, see "JMS Server: Configuration: General" in the Oracle WebLogic Server Administration Console Help.
3. Click Save.
There are a number of design choices that impact the performance of JMS applications. Other considerations include reliability, scalability, manageability, monitoring, user transactions, message-driven bean support, and integration with an application server. In addition, there are WebLogic JMS extensions and features that have a direct impact on performance. For more information on designing your applications for JMS, see "Best Practices for Application Design" in Programming JMS for Oracle WebLogic Server.
A dedicated consumer with a unique selector per sub-ordering
A new destination per sub-ordering, with one consumer per destination
See "Using Message Unit-of-Order" in Programming JMS for Oracle WebLogic Server.
Ideal for applications that have strict message ordering requirements. UOO simplifies administration and application design, and in most applications improves performance. Use MDB batching to:
Speed up processing of the messages within a single sub-ordering.
Consume multiple messages at a time under the same transaction.
See Chapter 11, "Tuning Message-Driven Beans".
You can configure a default UOO for the destination. Only one consumer on the destination processes messages for the default UOO at a time.
Hashing: Is generally faster and is the default UOO setting. Hashing works by using a hash function on the UOO name to determine the physical destination. It has the following drawbacks: it does not correctly handle administratively deleting or adding physical destinations to a distributed destination, and if a UOO hashes to an unavailable destination, the message send fails.
Path Service: Is a single-server UOO directory service that maps the physical destination for each UOO. The Path Service is generally slower than hashing if there are many differently named UOOs created per second. In this situation, each new UOO name implicitly forces a check of the path service before sending the message. If the number of UOOs created per second is limited, Path Service performance is not an issue, as the UOO paths are cached throughout the cluster.
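The hash-based routing described above can be pictured with a small stand-in function. This is an illustrative sketch, not WebLogic's actual hash algorithm; it shows why changing the member count remaps existing UOO names:

```java
public class UooHashSketch {
    // Illustrative sketch (not WebLogic's algorithm): the UOO name
    // deterministically selects one member of a distributed destination.
    static int memberFor(String uooName, int memberCount) {
        // floorMod keeps the result in [0, memberCount), even when
        // hashCode() is negative
        return Math.floorMod(uooName.hashCode(), memberCount);
    }
}
```

Because the mapping depends on the member count, administratively adding or deleting a member changes the result for most UOO names, which is the drawback noted above; and if the selected member is unavailable, the send fails rather than rehashing.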
Ensure the maximum asynchronous consumer message backlog (the MessagesMaximum parameter on the connection factory) was set to a value of 1.
UOO relaxes these requirements significantly as it allows for multiple consumers and allows for an asynchronous consumer message backlog of any size. To migrate older applications to take advantage of UOO, simply configure a default UOO name on the physical destination. See "Configure connection factory unit-of-order parameters" in Oracle WebLogic Server Administration Console Help and "Ordered Redelivery of Messages" in Programming JMS for Oracle WebLogic Server.
One-way message sends are disabled if your connection factory is configured with "XA Enabled". This setting disables one-way sends whether or not the sender actually uses transactions.
1. Configure the cluster-wide RMI load balancing algorithm to "Server Affinity".
2. Ensure that no two destinations are hosted on the same WebLogic Server instance.
3. Configure each destination to have the same local-jndi-name.
4. Configure a connection factory that is targeted to only those WebLogic Server instances that host the destinations.
5. Ensure sender clients use the JNDI names configured in Steps 3 and 4 to obtain their destination and connection factory from their JNDI context.
6. Ensure sender clients use URLs limited to only those WebLogic Server instances that host the destinations in Step 3.
This solution disables RMI-level load balancing for clustered RMI objects, which includes EJB homes and JMS connection factories. Effectively, the client will obtain a connection and destination based only on the network address used to establish the JNDI context. Load balancing can be achieved by leveraging network load balancing, which occurs for URLs that include a comma-separated list of WebLogic Server addresses, or for URLs that specify a DNS name that resolves to a round-robin set of IP addresses (as configured by a network administrator). For more information on Server Affinity for clusters, see "Load Balancing for EJBs and RMI Objects" in Using Clusters for Oracle WebLogic Server.
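On the client side, this setup amounts to building a JNDI context whose provider URL lists only the destination-hosting servers. A minimal sketch, assuming hypothetical host names and ports (the initial context factory class name is the standard WebLogic one):

```java
import java.util.Hashtable;
import javax.naming.Context;

public class SenderJndiEnv {
    // Sketch of a sender-side JNDI environment (hosts/ports are hypothetical).
    static Hashtable<String, String> build() {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "weblogic.jndi.WLInitialContextFactory");
        // Comma-separated URL limited to the servers that host the destinations;
        // with Server Affinity, load balancing happens at this network level.
        env.put(Context.PROVIDER_URL, "t3://host1:7001,host2:7001");
        return env;
    }
}
```

The sender would pass this environment to `new InitialContext(env)` and then look up the destination and connection factory by the JNDI names configured in Steps 3 and 4.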
When used in conjunction with the Blocking Sends feature, one-way sends on a well-running system should achieve a similar QOS to the two-way send mode. One-way send mode for topic publishers falls within the QOS guidelines set by the JMS Specification, but does entail a lower QOS than two-way mode (the WebLogic Server default mode).
One-way send mode may not improve performance if JMS consumer applications are a system bottleneck, as described in "Asynchronous vs. Synchronous Consumers" in Programming JMS for Oracle WebLogic Server.
Consider enlarging the JVM's heap size on the client and/or server to account for the increased batch size (the window) of sends. The potential memory usage is proportional to the size of the configured window and the number of senders.
The sending application will not receive all quota exceptions. One-way messages that exceed quota are silently deleted, without throwing exceptions back to the sending client. See Section 14.16.8, "Destination Quota Exceeded" for more information and a possible workaround.
Configuring one-way sends on a connection factory effectively disables any message flow control parameters configured on the connection factory.
By default, the One-way Window Size is set to "1", which effectively disables one-way sends, as every one-way message is upgraded to a two-way send. (Even in one-way mode, clients send a two-way message every One Way Send Window Size number of messages configured on the client's connection factory.) Therefore, you must set the one-way send window size much higher. Try setting the window size to "300" and then adjust it according to your application requirements.
The client application will not immediately receive network or server failure exceptions; some messages may be sent but silently deleted until the failure is detected by WebLogic Server and the producer is automatically closed. See Section 14.16.12, "Hardware Failure" for more information.
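The window-size behavior described above (every N-th message upgraded to a two-way send) can be illustrated with a tiny predicate. This is a conceptual sketch of the documented behavior, not WebLogic code:

```java
public class OneWayWindowSketch {
    // Conceptual sketch: with a One Way Send Window Size of windowSize,
    // every windowSize-th message is sent two-way so the client can
    // resynchronize with the server; the rest go one-way.
    static boolean isTwoWay(long messageNumber, int windowSize) {
        // windowSize <= 1 effectively disables one-way sends entirely
        return windowSize <= 1 || messageNumber % windowSize == 0;
    }
}
```

With the default window size of 1, every message is two-way; with a window size of 300, only one message in 300 pays the round-trip cost.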
This is an advanced option for fine tuning. It is normally best to explore other tuning options first.
JMS destinations include internal algorithms that attempt to automatically optimize performance by grouping messages into batches for delivery to consumers. The Messaging Performance Preference tuning option on JMS destinations enables you to control how long a destination should wait (if at all) before creating full batches of available messages for delivery to consumers. At the minimum value, batching is disabled. Tuning above the default value increases the amount of time a destination is willing to wait before batching available messages. The maximum message count of a full batch is controlled by the JMS connection factory's Messages Maximum per Session setting. Using the Administration Console, this advanced option is available on the General Configuration page for both standalone and uniform distributed destinations (or via the DestinationBean API), as well as for JMS templates (or via the TemplateBean API).
In response to changes in message rate and other factors, these algorithms change batch sizes and delivery times. However, it isn't possible for the algorithms to optimize performance for every messaging environment. The Messaging Performance Preference tuning option enables you to modify how these algorithms react to changes in message rate and other factors so that you can fine-tune the performance of your system.
The valid values map to the Administration Console options as follows:
Do Not Batch Messages: 0
Batch Messages Without Waiting: 25 (default)
Low Waiting Threshold for Message Batching: 50
Medium Waiting Threshold for Message Batching: 75
High Waiting Threshold for Message Batching: 100
It may take some experimentation to find out which value works best for your system. For example, if you have a queue with many concurrent message consumers, by selecting the Administration Console's Do Not Batch Messages value (or specifying "0" on the DestinationBean MBean), the queue will make every effort to promptly push messages out to its consumers as soon as they are available. Conversely, if you have a queue with only one message consumer that doesn't require fast response times, by selecting the console's High Waiting Threshold for Message Batching value (or specifying "100" on the DestinationBean MBean), the queue will strongly attempt to push messages to that consumer only in batches, which will increase the waiting period but may improve the server's overall throughput by reducing the number of sends. For instructions on configuring Messaging Performance Preference parameters on standalone destinations, uniform distributed destinations, or JMS templates using the Administration Console, see the following sections in the Administration Console Online Help:
"Configure advanced topic parameters" "Configure advanced queue parameters" "Uniform distributed topics - configure advanced parameters"
"Uniform distributed queues - configure advanced parameters" "Configure advanced JMS template parameters"
For more information about these parameters, see DestinationBean and TemplateBean in the Oracle WebLogic Server MBean Reference.
Always register a connection exception listener using an IConnection if the application needs to take action when an idle connection fails.
Have multiple .NET client threads share a single context to ensure that they use a single socket.
Cache and reuse frequently accessed JMS resources, such as contexts, connections, sessions, producers, destinations, and connection factories. Creating and closing these resources consumes significant CPU and network bandwidth.
Use DNS aliases or comma-separated addresses for load balancing JMS .NET clients across multiple JMS .NET client host servers in a cluster.
For more information on best practices and other programming considerations for JMS .NET client applications, see "Programming Considerations" in Using the WebLogic JMS Client for Microsoft .NET.
15
Avoid using SAF if remote destinations are already highly available. JMS clients can send directly to remote destinations.
Use SAF in situations where remote destinations are not highly available, such as an unreliable network or different maintenance schedules.
Use the better performing JMS SAF feature instead of a Messaging Bridge when forwarding messages to remote destinations. In general, a JMS SAF agent is significantly faster than a Messaging Bridge. One exception is a configuration that sends messages in non-persistent exactly-once mode.
Note:
A Messaging Bridge is still required to store-and-forward messages to foreign destinations and destinations from releases prior to WebLogic 9.0.
Configure separate SAF Agents for JMS SAF and Web Services Reliable Messaging Agents (WS-RM) to simplify administration and tuning. Sharing the same WebLogic Store between subsystems provides increased performance for subsystems requiring persistence. For example, transactions that include SAF and JMS operations, transactions that include multiple SAF destinations, and transactions that include SAF and EJBs. See Section 8, "Tuning the WebLogic Persistent Store".
Target imported destinations to multiple SAF agents to load balance message sends among available SAF agents.
Tuning Tips
Increase the JMS SAF Window Size for applications that handle small messages. By default, a JMS SAF agent forwards messages in batches that contain up to 10 messages. For small message sizes, it is possible to double or triple performance by increasing the number of messages in each forwarded batch. A more appropriate initial value for Window Size for small messages is 100. You can then optimize this value for your environment. Changing the Window Size for applications handling large message sizes is not likely to increase performance and is not recommended. Window Size also tunes WS-RM SAF behavior, so it may not be appropriate to tune this parameter for SAF agents of type Both.
Note:
For a distributed queue, WindowSize is ignored and the batch size is set internally at 1 message.
Increase the JMS SAF Window Interval. By default, a JMS SAF agent has a Window Interval value of 0, which forwards messages as soon as they arrive. This can lower performance, as it can make the effective window size much smaller than the configured value. A more appropriate initial value for Window Interval is 500 milliseconds. You can then optimize this value for your environment. In this context, small messages are less than a few KB, while large messages are on the order of tens of KB. Changing the Window Interval improves performance only in cases where the forwarder is already able to forward messages as fast as they arrive. In this case, instead of immediately forwarding newly arrived messages, the forwarder pauses to accumulate more messages and forward them as a batch. The resulting larger batch size improves forwarding throughput and reduces overall system disk and CPU usage at the expense of increased latency.
Note:
Set the Non-Persistent QOS value to At-Least-Once for imported destinations if your application can tolerate duplicate messages.
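The interaction between Window Size and Window Interval can be pictured with a small simulation: a batch is forwarded when it fills up, or when the interval has elapsed since its first message arrived. This is an illustrative model of the documented behavior, not the SAF agent's actual code:

```java
import java.util.ArrayList;
import java.util.List;

public class SafBatchSketch {
    // Illustrative model of SAF forwarding (not WebLogic's code): a batch is
    // forwarded when it reaches windowSize messages, or when windowIntervalMs
    // has elapsed since the first message of the batch arrived.
    private final int windowSize;
    private final long windowIntervalMs;
    private final List<String> batch = new ArrayList<>();
    private long batchStartMs;

    public SafBatchSketch(int windowSize, long windowIntervalMs) {
        this.windowSize = windowSize;
        this.windowIntervalMs = windowIntervalMs;
    }

    /** Buffers msg; returns the forwarded batch, or null if only buffered. */
    public List<String> onMessage(String msg, long nowMs) {
        if (batch.isEmpty()) {
            batchStartMs = nowMs;
        }
        batch.add(msg);
        if (batch.size() >= windowSize || nowMs - batchStartMs >= windowIntervalMs) {
            List<String> forwarded = new ArrayList<>(batch);
            batch.clear();
            return forwarded;
        }
        return null;
    }
}
```

With an interval of 0, every message is forwarded on arrival regardless of the configured window size, which is why the effective batch shrinks to one message; a non-zero interval lets the batch accumulate toward the full window.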
16
This chapter provides information on various methods to improve message bridge performance.
Section 16.1, "Best Practices" Section 16.2, "Changing the Batch Size" Section 16.3, "Changing the Batch Interval" Section 16.4, "Changing the Quality of Service" Section 16.5, "Using Multiple Bridge Instances" Section 16.6, "Changing the Thread Pool Size" Section 16.7, "Avoiding Durable Subscriptions" Section 16.8, "Co-locating Bridges with Their Source or Target Destination" Section 16.9, "Changing the Asynchronous Mode Enabled Attribute"
Avoid using a Messaging Bridge if remote destinations are already highly available. JMS clients can send directly to remote destinations.
Use a Messaging Bridge in situations where remote destinations are not highly available, such as an unreliable network or different maintenance schedules.
Use the better performing JMS SAF feature instead of a Messaging Bridge when forwarding messages to remote destinations. In general, a JMS SAF agent is significantly faster than a Messaging Bridge. One exception is a configuration that sends messages in non-persistent exactly-once mode.
Note:
A Messaging Bridge is still required to store-and-forward messages to foreign destinations and destinations from releases prior to WebLogic 9.0.
environment. See "Configure transaction properties" in Oracle WebLogic Server Administration Console Help.
Some JMS products do not seem to benefit much from using multiple bridges. However, WebLogic JMS messaging performance typically improves significantly, especially when handling persistent messages. If the CPU or disk storage is already saturated, increasing the number of bridge instances may decrease throughput.
To avoid competing with the default execute thread pool in the server, messaging bridges share a separate thread pool. This thread pool is used only in synchronous mode (Asynchronous Mode Enabled is not set). In asynchronous mode, the bridge runs in a thread created by the JMS provider for the source destination. You can manage bridge threads as follows:
Use the common thread pool. A server instance changes its thread pool size automatically to maximize throughput, including compensating for the number of bridge instances configured. See "Understanding How WebLogic Server Uses Thread Pools" in Configuring Server Environments for Oracle WebLogic Server.
Configure a work manager for the weblogic.jms.MessagingBridge class. See "Understanding Work Managers" in Designing and Configuring WebLogic Server Environments.
Use the Administration Console to set the Thread Pool Size property in the Messaging Bridge Configuration section on the Configuration: Services page for a server instance. This property is deprecated in WebLogic Server 9.0.
Asynchronous Mode Enabled Values for QOS Level
QOS / Asynchronous Mode Enabled attribute value:
Exactly-once: false
At-most-once: true
Duplicate-okay: true
If the source destination is a non-WebLogic JMS provider and the QOS is Exactly-once, then the Asynchronous Mode Enabled attribute is disabled and the messages are processed in synchronous mode.
See "Configure messaging bridge instances" in Oracle WebLogic Server Administration Console Help.
A quality of service of Exactly-once has a significant effect on bridge performance. The bridge starts a new transaction for each message and performs a two-phase commit across both JMS servers involved in the transaction. Since the two-phase commit is usually the most expensive part of the bridge transaction, as the number of messages being processed increases, the bridge performance tends to decrease.
17
Section 17.1, "Classloading Optimizations for Resource Adapters" Section 17.2, "Connection Optimizations" Section 17.3, "Thread Management" Section 17.4, "InteractionSpec Interface"
Deploy the resource adapter in an exploded format. This eliminates the nesting of JARs and hence reduces the performance hit involved in looking for classes. If deploying the resource adapter in exploded format is not an option, the JARs can be exploded within the RAR file. This also eliminates the nesting of JARs and thus improves the performance of classloading significantly.
Thread Management
18
This chapter describes Oracle best practices for tuning Web applications and managing sessions.
Section 18.1, "Best Practices" Section 18.2, "Session Management" Section 18.3, "Pub-Sub Tuning Guidelines"
Section 18.1.1, "Disable Page Checks" Section 18.1.2, "Use Custom JSP Tags" Section 18.1.3, "Precompile JSPs" Section 18.1.4, "Disable Access Logging" Section 18.1.5, "Use HTML Template Compression" Section 18.1.6, "Use Service Level Agreements" Section 18.1.7, "Related Reading"
Session Management
"Servlet Best Practices" in Developing Web Applications, Servlets, and JSPs for Oracle WebLogic Server. "Servlet and JSP Performance Tuning" at http://www.javaworld.com/javaworld/jw-06-2004/jw-0628-perform ance_p.html, by Rahul Chaudhary, JavaWorld, June 2004.
Section 18.2.1, "Managing Session Persistence" Section 18.2.2, "Minimizing Sessions" Section 18.2.3, "Aggregating Session Data"
Session Management
"Configuring Session Persistence" in Developing Web Applications, Servlets, and JSPs for Oracle WebLogic Server "HTTP Session State Replication" in Using Clusters for Oracle WebLogic Server "Using a Database for Persistent Storage (JDBC Persistence)" in Developing Web Applications, Servlets, and JSPs for Oracle WebLogic Server
Use of sessions involves a scalability trade-off. Use sessions sparingly. In other words, use sessions only for state that cannot realistically be kept on the client or if URL rewriting support is required. For example, keep simple bits of state, such as a user's name, directly in cookies. You can also write a wrapper class to "get" and "set" these cookies, in order to simplify the work of servlet developers working on the same project. Keep frequently used values in local variables.
For more information, see "Setting Up Session Management" in Developing Web Applications, Servlets, and JSPs for Oracle WebLogic Server.
Aggregate session data that changes in tandem into a single session attribute. Keep session data that changes frequently and read-only session data in separate session attributes.
For example, if you use a single large attribute that contains all the session data and only 10% of that data changes, the entire attribute has to be replicated. This causes unnecessary serialization/deserialization and network overhead. You should move the 10% of the session data that changes into a separate attribute.
Increase the number of file descriptors to cater for a large number of long-living connections, especially for applications with thousands of clients.
Tune the logging level for WebLogic Server. Disable access logging.
Tune JVM options. Suggested options: -Xms1536m -Xmx1536m -Xns512m -XXtlaSize:min=128k,preferred=256k
Increase the maximum message size. If your application publishes messages under high volumes, consider setting the value to <max-message-size>10000000</max-message-size>.
19
This chapter describes Oracle best practices for designing, developing, and deploying WebLogic Web Services applications and application resources.
Section 19.1, "Web Services Best Practices" Section 19.2, "Tuning Web Service Reliable Messaging Agents" Section 19.3, "Tuning Heavily Loaded Systems to Improve Web Service Performance"
Design Web Service applications for coarse-grained services with moderate-size payloads.
Choose the correct service style and encoding for your Web Service application.
Control serializer overheads and namespace declarations to achieve better performance.
Use MTOM/XOP or Fast Infoset to optimize the format of a SOAP message.
Carefully design SOAP attachments and security implementations for minimum performance overheads.
Consider using an asynchronous messaging model for applications with slow and unreliable transport, or complex and long-running processes.
For transactional Service Oriented Architectures (SOA), consider using the Last Logging Resource (LLR) transaction optimization to improve performance. See Section 13, "Tuning Transactions".
Use replication and caching of data and schema definitions to improve performance by minimizing network overhead.
Consider any XML compression technique only when the XML compression/decompression overhead is less than the network overhead involved.
Applications that are heavy users of XML functionality (parsers) may encounter performance issues or run out of file descriptors. This may occur because XML parser instances are bootstrapped by doing a lookup in the jaxp.properties file (JAXP API). Oracle recommends setting the properties on the command line to avoid unnecessary file operations at runtime and improve performance and resource usage.
Follow "JWS Programming Best Practices" in Getting Started With JAX-WS Web Services for Oracle WebLogic Server. Follow best practice and tuning recommendations for all underlying components, such as Section 10, "Tuning WebLogic Server EJBs", Section 18, "Tuning Web Applications", Section 12, "Tuning Data Sources", and Section 14, "Tuning WebLogic JMS".
Configure separate SAF Agents for JMS SAF and Web Services Reliable Messaging Agents to simplify administration and tuning. Sharing the same WebLogic Store between subsystems provides increased performance for subsystems requiring persistence. For example, transactions that include SAF and JMS operations, transactions that include multiple SAF destinations, and transactions that include SAF and EJBs. See Section 8, "Tuning the WebLogic Persistent Store". Consider increasing the WindowSize parameter on the remote SAF agent. For small messages of less than 1K, tuning WindowSize as high as 300 can improve throughput.
Note:
WindowSize also tunes JMS SAF behavior, so it may not be appropriate to tune this parameter for SAF agents of type both.
Ensure that retry delay is not set too low. This may cause the system to make unnecessary delivery attempts.
Section 19.3.1, "Setting the Work Manager Thread Pool Minimum Size Constraint" Section 19.3.2, "Setting the Buffering Sessions" Section 19.3.3, "Releasing Asynchronous Resources"
19.3.1 Setting the Work Manager Thread Pool Minimum Size Constraint
Define a Work Manager and set the thread pool minimum size constraint (min-threads-constraint) to a value that is at least as large as the expected number of concurrent requests or responses into the service. For example, if a Web service client issues 20 requests in rapid succession, the recommended thread pool minimum size constraint value would be 20 for the application hosting the client. If the configured constraint value is too small, performance can be severely degraded as incoming work waits for a free processing thread. For more information about the thread pool minimum size constraint, see "Constraints" in Configuring Server Environments for Oracle WebLogic Server.
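As a sketch, a Work Manager with a minimum threads constraint might be declared in a deployment descriptor roughly like this (the names and the value 20 are illustrative; check the Work Manager schema for your release):

```xml
<work-manager>
  <name>WebServiceWorkManager</name>
  <min-threads-constraint>
    <name>WebServiceMinThreads</name>
    <count>20</count>
  </min-threads-constraint>
</work-manager>
```

The count should be at least as large as the expected number of concurrent requests or responses, as described above.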
20
This chapter provides information on how to get the best performance from WebLogic Tuxedo Connector (WTC) applications. The WebLogic Tuxedo Connector (WTC) provides interoperability between WebLogic Server applications and Tuxedo services. WTC allows WebLogic Server clients to invoke Tuxedo services and Tuxedo clients to invoke WebLogic Server Enterprise Java Beans (EJBs) in response to a service request. See "WebLogic Tuxedo Connector" in Information Roadmap for Oracle WebLogic Server.
You may have more than one WTC service in your configuration, but you can only target one WTC service to a server instance.
WTC does not support connection pooling. WTC multiplexes requests through a single physical connection.
Configuration changes are implemented as follows:
Changing the session/connection configuration (local APs, remote APs, Passwords, and Resources) before a connection/session is established: the changes are accepted and are implemented in the new session/connection.
Changing the session/connection configuration (local APs, remote APs, Passwords, and Resources) after a connection/session is established: the changes are accepted but are not implemented in the existing connection/session until the connection is disconnected and reconnected. See "Assign a WTC Service to a Server" in Oracle WebLogic Server Administration Console Help.
Changing the Imported and Exported services configuration: the changes are accepted and are implemented in the next inbound or outbound request. Oracle does not recommend this practice as it can leave in-flight requests in an unknown state.
Changing the tBridge configuration: any change in a deployed WTC service causes an exception. You must untarget the WTC service before making any tBridge configuration changes. After untargeting and making configuration changes, you must retarget the WTC service to implement the changes.
Best Practices
When configuring the connection policy, use ON_STARTUP and INCOMING_ONLY. ON_STARTUP and INCOMING_ONLY are always paired. For example, if a WTC remote access point is configured with ON_STARTUP, the DM_TDOMAIN section of the Tuxedo domain configuration must be configured with the remote access point as INCOMING_ONLY. In this case, WTC always acts as the session initiator. See "Configuring the Connections Between Access Points" in the WebLogic Tuxedo Connector Administration Guide for Oracle WebLogic Server.
Avoid using connection policy ON_DEMAND. The preferred connection policy is ON_STARTUP and INCOMING_ONLY. This reduces the chance of service request failure due to the routing semantics of ON_DEMAND. See "Configuring the Connections Between Access Points" in the WebLogic Tuxedo Connector Administration Guide for Oracle WebLogic Server.
Consider using the following WTC features when designing your application: Link Level Failover, Service Level Failover, and load balancing. See "Configuring Failover and Failback" in the WebLogic Tuxedo Connector Administration Guide for Oracle WebLogic Server.
Consider using WebLogic Server clusters to provide additional load balancing and failover. To use WTC in a WebLogic Server cluster, configure a WTC instance on all the nodes of the WebLogic Server cluster. Each WTC instance in each cluster node must have the same configuration.
See "How to Manage WebLogic Tuxedo Connector in a Clustered Environment" in the WebLogic Tuxedo Connector Administration Guide for Oracle WebLogic Server.
If your WTC to Tuxedo connection uses the internet, use the following security settings: Set the value of Security to DM_PW. See "Authentication of Remote Access Points" in the WebLogic Tuxedo Connector Administration Guide for Oracle WebLogic Server. Enable Link-level encryption and set the min-encrypt-bits parameter to 40 and the max-encrypt-bits to 128. See "Link-Level Encryption" in the WebLogic Tuxedo Connector Administration Guide for Oracle WebLogic Server.
Your application logic should provide mechanisms to manage and interpret error conditions in your applications. See "Application Error Management" in the WebLogic Tuxedo Connector Programmers Guide for Oracle WebLogic Server. See "System Level Debug Settings" in the WebLogic Tuxedo Connector Administration Guide for Oracle WebLogic Server.
Avoid using embedded TypedFML32 buffers inside TypedFML32 buffers. See "Using FML with WebLogic Tuxedo Connector" in the WebLogic Tuxedo Connector Programmers Guide for Oracle WebLogic Server. If your application handles heavy loads, consider configuring more remote Tuxedo access points and let WTC load balance the work load among the access points. See "Configuring Failover and Failback" in the WebLogic Tuxedo Connector Administration Guide for Oracle WebLogic Server.
Best Practices
When using transactional applications, try to make the remote services involved in the same transaction available from the same remote access point. See "WebLogic Tuxedo Connector JATMI Transactions" in the WebLogic Tuxedo Connector Programmers Guide for Oracle WebLogic Server. The number of client threads available when dispatching services from the gateway may limit the number of concurrent services running. There is no WebLogic Tuxedo Connector attribute to increase the number of available threads; use a reasonable thread model when invoking services. See Section 7.4, "Thread Management" and "Using Work Managers to Optimize Scheduled Work" in Configuring Server Environments for Oracle WebLogic Server. WebLogic Server releases 9.2 and higher provide improved routing algorithms that enhance transaction performance. Specifically, performance is improved when more than one Tuxedo service request is involved in a two-phase commit (2PC) transaction. If your application makes only a single service request to the Tuxedo domain, you can disable this feature by setting the following WebLogic Server command-line parameter:
-Dweblogic.wtc.xaAffinity=false
Call the TypedFML32 constructor with the maximum number of objects in the buffer. Even if the maximum number is difficult to predict, providing a reasonable estimate improves performance. You can approximate the maximum number by multiplying the number of fields by 1.33.
Note:
For example, if there are 50 fields in a TypedFML32 buffer type, then the maximum number is 63, and calling the constructor TypedFML32(63, 50) performs better than TypedFML32(). If there are 50 fields in a TypedFML32 buffer type and each can have a maximum of 10 occurrences, then calling the constructor TypedFML32(625, 50) gives better performance than TypedFML32().
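The sizing rule above can be expressed as a small helper. This is an illustrative sketch, not a WTC API; note that the worked examples (63 for 50 fields, 625 for 50 fields with 10 occurrences each) correspond to a factor of 1.25 rounded up:

```java
public final class Fml32Sizing {
    // Estimate the object capacity to pass as the first argument of the
    // TypedFML32(int, int) constructor: fields x maximum occurrences,
    // padded by 25% and rounded up.
    public static int estimateCapacity(int fieldCount, int maxOccurrences) {
        return (int) Math.ceil(fieldCount * maxOccurrences * 1.25);
    }

    public static void main(String[] args) {
        // 50 fields, single occurrence -> 63, as in the example above.
        System.out.println(estimateCapacity(50, 1));
        // 50 fields, up to 10 occurrences each -> 625.
        System.out.println(estimateCapacity(50, 10));
        // Then: new TypedFML32(estimateCapacity(50, 10), 50)
    }
}
```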
When configuring Tuxedo applications that act as servers interoperating with WTC clients, take into account the parallelism that may be achieved by carefully configuring different servers on different Tuxedo machines. Be aware of the possibility of database access deadlock in Tuxedo applications; you can avoid deadlock through careful Tuxedo application configuration. If you are using WTC load balancing or service-level failover, Oracle recommends that you do not disable WTC transaction affinity. For load balancing outbound requests, configure the imported service with multiple entries, each using a different key. The imported service uses a composite key to determine each record's uniqueness. The composite key is composed of the service name, the local access point, and the primary route in the remote access point list. The following is an example of how to correctly configure load balancing for requests to service1 between TDomainSession(WDOM1,TUXDOM1) and TDomainSession(WDOM1,TUXDOM2):
Table 20-1 Example of Correctly Configured Load Balancing

LocalAccessPoint        WDOM1      WDOM1
RemoteAccessPointList   TUXDOM1    TUXDOM2
RemoteName              TOLOWER    TOLOWER2
The following is an example of incorrectly configured load balancing. This configuration results in the same composite key for service1:
Table 20-2 Example of Incorrectly Configured Load Balancing

LocalAccessPoint        WDOM1      WDOM1
RemoteAccessPointList   TUXDOM1    TUXDOM1
RemoteName              TOLOWER    TOLOWER
Using the WebLogic 8.1 Thread Pool Model
Section A.1, "How to Enable the WebLogic 8.1 Thread Pool Model"
Section A.2, "Tuning the Default Execute Queue"
Section A.3, "Using Execute Queues to Control Thread Usage"
Section A.4, "Monitoring Execute Threads"
Section A.5, "Allocating Execute Threads to Act as Socket Readers"
Section A.6, "Tuning the Stuck Thread Detection Behavior"
1. If you have not already done so, stop the WebLogic Server instance.
2. Edit the config.xml file, setting the use81-style-execute-queues element to true.
3. Reboot the server instance.
4. Explicitly create the weblogic.kernel.Default execute queue from the Administration Console.
5. Reboot the server instance.
The following example code allows an instance of myserver to use execute queues:
Example A-1 Using the use81-style-execute-queues Element

. . .
<server>
  <name>myserver</name>
  <ssl>
    <name>myserver</name>
    <enabled>true</enabled>
    <listen-port>7002</listen-port>
  </ssl>
  <use81-style-execute-queues>true</use81-style-execute-queues>
  <listen-address/>
</server>
. . .
Configured work managers are converted to execute queues at runtime by the server instance.
Unless you configure additional execute queues, and assign applications to them, the server instance assigns requests to the default execute queue. If native performance packs are not being used for your platform, you may need to tune the default number of execute queue threads and the percentage of threads that act as socket readers to achieve optimal performance. For more information, see Section A.5, "Allocating Execute Threads to Act as Socket Readers".
Note:
The value of the ThreadCount attribute depends very much on the type of work your application does. For example, if your client application is thin and does a lot of its work through remote invocation, that client application will spend more time connected and thus will require a higher thread count than a client application that does a lot of client-side processing. If you do not need to use more than 15 threads (the development default) or 25 threads (the production default) for your work, do not change the value of this attribute. As a general rule, if your application makes database calls that take a long time to return, you will need more execute threads than an application that makes calls that are short and turn over very rapidly. For the latter case, using a smaller number of execute threads could improve performance. To determine the ideal thread count for an execute queue, monitor the queue's throughput while all applications in the queue are operating at maximum load. Increase the number of threads in the queue and repeat the load test until you reach the optimal throughput for the queue. (At some point, increasing the number of threads will lead to enough context switching that the throughput for the queue begins to decrease.)
Note: The WebLogic Server Administration Console displays the cumulative throughput for all of a server's execute queues. To access this throughput value, follow steps 1-6 in Section A.3, "Using Execute Queues to Control Thread Usage".
Table A-2 shows default scenarios for adjusting available threads in relation to the number of CPUs available in the WebLogic Server domain. These scenarios also assume that WebLogic Server is running under maximum load, and that all thread requests are satisfied by using the default execute queue. If you configure additional execute queues and assign applications to specific queues, monitor results on a pool-by-pool basis.
Table A-2 Scenarios for Modifying the Default Thread Count

When Thread Count < number of CPUs, and CPUs are under-utilized but there is work that could be done: increase the thread count.

When Thread Count > number of CPUs by a moderate number of threads, and CPUs have high utilization with a moderate amount of context switching: tune the number of threads and compare performance results.

When Thread Count > number of CPUs by a large number of threads, and there is too much context switching: reduce the number of threads.
available. In such a situation, the division of threads into multiple queues may yield poorer overall performance than having a single, default execute queue. Default WebLogic Server installations are configured with a default execute queue which is used by all applications running on the server instance. You may want to configure additional queues to:
Optimize the performance of critical applications. For example, you can assign a single, mission-critical application to a particular execute queue, guaranteeing a fixed number of execute threads. During peak server loads, nonessential applications may compete for threads in the default execute queue, but the mission-critical application has access to the same number of threads at all times.

Throttle the performance of nonessential applications. For an application that can potentially consume large amounts of memory, assigning it to a dedicated execute queue effectively limits the amount of memory it can consume. Although the application can potentially use all threads available in its assigned execute queue, it cannot affect thread usage in any other queue.

Remedy deadlocked thread usage. With certain application designs, deadlocks can occur when all execute threads are currently utilized. For example, consider a servlet that reads messages from a designated JMS queue. If all execute threads in a server are used to process the servlet requests, then no threads are available to deliver messages from the JMS queue. A deadlock condition exists, and no work can progress. Assigning the servlet to a separate execute queue avoids potential deadlocks, because the servlet and JMS queue do not compete for thread resources.
Be sure to monitor each execute queue to ensure proper thread usage in the system as a whole. See Section A.2.1, "Should You Modify the Default Thread Count?" for general information about optimizing the number of threads.
1. If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
2. In the left pane of the console, expand Environment > Servers.
3. On the Summary of Servers page, select the server instance for which you will configure an execute queue.
4. Select the Configuration > Queues tab and click New.
5. Name the execute queue and click OK.
6. On the User-Defined Execute Queues page, select the execute queue you just created.
7. On the execute queue Configuration tab, modify the following attributes or accept the system defaults:
Queue Length: Always leave the Queue Length at the default value of 65536 entries. The Queue Length specifies the maximum number of simultaneous requests that the server can hold in the queue. The default of 65536 requests represents a very large number of requests; outstanding requests in the queue should rarely, if ever, reach this maximum value.
If the maximum Queue Length is reached, WebLogic Server automatically doubles the size of the queue to account for the additional work. Exceeding 65536 requests in the queue indicates a problem with the threads in the queue, rather than the length of the queue itself; check for stuck threads or an insufficient thread count in the execute queue.
Queue Length Threshold Percent: The percentage (from 1-99) of the Queue Length size that can be reached before the server indicates an overflow condition for the queue. All actual queue length sizes below the threshold percentage are considered normal; sizes above the threshold percentage indicate an overflow. When an overflow condition is reached, WebLogic Server logs an error message and increases the number of threads in the queue by the value of the Threads Increase attribute to help reduce the workload. By default, the Queue Length Threshold Percent value is 90 percent. In most situations, you should leave the value at or near 90 percent, to account for any potential condition where additional threads may be needed to handle an unexpected spike in work requests. Keep in mind that Queue Length Threshold Percent must not be used as an automatic tuning parameter; the threshold should never trigger an increase in thread count under normal operating conditions.
Thread Count: The number of threads assigned to this queue. If you do not need to use more than 15 threads (the default) for your work, do not change the value of this attribute. (For more information, see Section A.2.1, "Should You Modify the Default Thread Count?".)
Threads Increase: The number of threads WebLogic Server should add to this execute queue when it detects an overflow condition. If you specify zero threads (the default), the server changes its health state to "warning" in response to an overflow condition in the queue, but it does not allocate additional threads to reduce the workload.
Note:
If WebLogic Server increases the number of threads in response to an overflow condition, the additional threads remain in the execute queue until the server is rebooted. Monitor the error log to determine the cause of overflow conditions, and reconfigure the thread count as necessary to prevent similar conditions in the future. Do not use the combination of Threads Increase and Queue Length Threshold Percent as an automatic tuning tool; doing so generally results in the execute queue allocating more threads than necessary and suffering from poor performance due to context switching.
Threads Minimum: The minimum number of threads that WebLogic Server should maintain in this execute queue to prevent unnecessary overflow conditions. By default, the Threads Minimum is set to 5.
Threads Maximum: The maximum number of threads that this execute queue can have; this value prevents WebLogic Server from creating an overly high thread count in the queue in response to continual overflow conditions. By default, the Threads Maximum is set to 400.
8. Click Save.
9. To activate these changes, in the Change Center of the Administration Console, click Activate Changes. Not all changes take effect immediately; some require a restart.
10. You must reboot the server to use the new execute queue settings.
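The execute queue attributes described above map to ExecuteQueueMBean settings, and when 8.1-style execute queues are enabled they can also appear in config.xml as child elements of <server>. The fragment below is a sketch with assumed values; the element names follow the MBean attribute names and should be verified against a config.xml generated by your Administration Console:

```xml
<server>
  <name>myserver</name>
  <use81-style-execute-queues>true</use81-style-execute-queues>
  <execute-queue>
    <name>CriticalAppQueue</name>
    <!-- Defaults shown; tune thread-count per Section A.2.1 -->
    <queue-length>65536</queue-length>
    <queue-length-threshold-percent>90</queue-length-threshold-percent>
    <thread-count>15</thread-count>
    <threads-increase>0</threads-increase>
    <threads-minimum>5</threads-minimum>
    <threads-maximum>400</threads-maximum>
  </execute-queue>
</server>
```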
1. If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
2. In the left pane of the console, expand Environment > Servers.
3. On the Summary of Servers page, select the server instance for which you will modify the default thread count.
4. On the Configuration > Queues tab, select the execute queue for which you will modify the default thread count. You can only modify the default execute queue for the server or a user-defined execute queue.
5. Locate the Thread Count value and increase or decrease it, as appropriate.
6. Click Save.
7. To activate these changes, in the Change Center of the Administration Console, click Activate Changes. Not all changes take effect immediately; some require a restart.
8. You must reboot the server to use the new thread count value.
The threshold at which the server indicates an overflow condition. This value is set as a percentage of the configured size of the execute queue (the QueueLength value). The number of threads to add to the execute queue when an overflow condition is detected; these additional threads work to reduce the queue to its normal operating size. The minimum and maximum number of threads available to the queue. In particular, setting the maximum number of threads prevents the server from assigning an overly high thread count in response to overload conditions.
1. If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
2. In the left pane of the console, expand Environment > Servers.
3. On the Summary of Servers page, select the server instance for which you will configure overflow condition behavior.
4. Select the Configuration > Queues tab, then select the execute queue for which you will configure overflow condition behavior.
5. Specify how the server instance should detect an overflow condition for the selected queue by modifying the following attributes:
Queue Length: Specifies the maximum number of simultaneous requests that the server can hold in the queue. The default of 65536 requests represents a very large number of requests; outstanding requests in the queue should rarely, if ever, reach this maximum value. Always leave the Queue Length at the default value of 65536 entries.
Queue Length Threshold Percent: The percentage (from 1-99) of the Queue Length size that can be reached before the server indicates an overflow condition for the queue. All actual queue length sizes below the threshold percentage are considered normal; sizes above the threshold percentage indicate an overflow. By default, the Queue Length Threshold Percent is set to 90 percent.
6. To specify how this server should address an overflow condition for the selected queue, modify the following attribute:
Threads Increase: The number of threads WebLogic Server should add to this execute queue when it detects an overflow condition. If you specify zero threads (the default), the server changes its health state to "warning" in response to an overflow condition in the execute queue, but it does not allocate additional threads to reduce the workload.
7. To fine-tune the variable thread count of this execute queue, modify the following attributes:
Threads Minimum: The minimum number of threads that WebLogic Server should maintain in this execute queue to prevent unnecessary overflow conditions. By default, the Threads Minimum is set to 5.
Threads Maximum: The maximum number of threads that this execute queue can have; this value prevents WebLogic Server from creating an overly high thread count in the queue in response to continual overflow conditions. By default, the Threads Maximum is set to 400.
8. Click Save.
9. To activate these changes, in the Change Center of the Administration Console, click Activate Changes. Not all changes take effect immediately; some require a restart.
10. You must reboot the server to use the new overflow condition settings.
See "Creating and Configuring Servlets" in Developing Web Applications, Servlets, and JSPs for Oracle WebLogic Server for more information about specifying initialization parameters in web.xml.
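For example, a servlet is assigned to a user-defined execute queue through the wl-dispatch-policy initialization parameter in web.xml. The servlet and queue names below are illustrative:

```xml
<servlet>
  <servlet-name>MainServlet</servlet-name>
  <servlet-class>com.example.MainServlet</servlet-class>
  <init-param>
    <!-- Dispatch this servlet's requests to the named execute queue -->
    <param-name>wl-dispatch-policy</param-name>
    <param-value>CriticalAppQueue</param-value>
  </init-param>
</servlet>
```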
1. If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
2. In the left pane of the console, expand Environment > Servers.
3. On the Summary of Servers page, select the server instance whose execute threads you want to monitor.
4. Select the Monitoring > Threads tab. A table of the execute queues available on this server instance is displayed.
5. Select an execute queue for which you would like to view thread information. A table of execute threads for the selected execute queue is displayed.
A.5.1 Setting the Number of Socket Reader Threads For a Server Instance
To use the Administration Console to set the maximum percentage of execute threads that read messages from a socket:
1. If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
2. In the left pane of the console, expand Environment > Servers.
3. On the Summary of Servers page, select the server instance for which you will configure socket reader threads.
4. Select the Configuration > Tuning tab.
5. Specify the percentage of Java reader threads in the Socket Readers field. The number of Java socket readers is computed as a percentage of the number of total execute threads (as shown in the Thread Count field for the Execute Queue).
6. Click Save.
7. To activate these changes, in the Change Center of the Administration Console, click Activate Changes.
1. If you have not already done so, in the Change Center of the Administration Console, click Lock & Edit.
2. In the left pane of the console, expand Environment > Servers.
3. On the Summary of Servers page, select the server instance for which you will configure thread detection behavior.
4. On the Configuration > Tuning tab, update as necessary:
Stuck Thread Max Time: Amount of time, in seconds, that a thread must be continually working before a server instance diagnoses the thread as stuck.
Stuck Thread Timer Interval: Amount of time, in seconds, after which a server instance periodically scans threads to see if they have been continually working for the configured Stuck Thread Max Time.
5. Click Save.
6. To activate these changes, in the Change Center of the Administration Console, click Activate Changes. Not all changes take effect immediately; some require a restart.
7. You must reboot the server to use the new thread detection behavior values.
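These two settings correspond to the StuckThreadMaxTime and StuckThreadTimerInterval attributes of the server. As a sketch, they can appear in config.xml under <server>; the values shown are the documented defaults (600 and 60 seconds), and the element names should be verified against a config.xml generated by your Administration Console:

```xml
<server>
  <name>myserver</name>
  <!-- Seconds a thread must work continuously before it is diagnosed as stuck -->
  <stuck-thread-max-time>600</stuck-thread-max-time>
  <!-- Seconds between scans for stuck threads -->
  <stuck-thread-timer-interval>60</stuck-thread-timer-interval>
</server>
```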
Capacity Planning
This chapter provides an introduction to capacity planning. Capacity planning is the process of determining what type of hardware and software configuration is required to meet application needs adequately. Capacity planning is not an exact science. Every application is different and every user behavior is different.
Section B.1, "Capacity Planning Factors" Section B.2, "Assessing Your Application Performance Objectives" Section B.3, "Hardware Tuning" Section B.4, "Network Performance" Section B.5, "Related Information"
Table B-1 Capacity Planning Factors and Information Reference

Capacity Planning Questions:
Is WebLogic Server well-tuned?
How well-designed is the user application?
Is there enough bandwidth?
How many transactions need to run simultaneously?
Is the database a limiting factor? Are there additional user storage requirements?
What is running on the machine in addition to WebLogic Server?
Do clients use SSL to connect to WebLogic Server? (See Section B.1.3, "SSL Connections and Performance".)
What types of traffic do the clients generate? (See Section B.1.2, "RMI and Server Traffic".)
What types of clients connect to the WebLogic Server application? (See Section B.1.1, "Programmatic and Web-based Clients".)
Is your deployment configured for a cluster? (See Section B.1.8, "Clustered Configurations".)
Are your servers configured for migration? (See Section B.1.9, "Server Migration".)
Web-based clients, such as Web browsers and HTTP proxies, use the HTTP or HTTPS (secure) protocol to obtain HTML or servlet output. Programmatic clients, such as Java applications and applets, can connect through the T3 protocol and use RMI to connect to the server.
The stateless nature of HTTP requires that the server handle more overhead than is the case with programmatic clients. However, the benefits of HTTP clients are numerous, such as the availability of browsers and firewall compatibility, and are usually worth the performance costs. Programmatic clients are generally more efficient than HTTP clients because T3 does more of the presentation work on the client side. Programmatic clients typically call directly into EJBs while Web clients usually go through servlets. This eliminates the work the server must do for presentation. The T3 protocol operates using sockets and has a long-standing connection to the server. A WebLogic Server installation that relies only on programmatic clients should be able to handle more concurrent clients than an HTTP proxy that is serving installations. If you are tunneling T3 over HTTP, you should not expect this performance benefit. In fact, performance of T3 over HTTP is generally 15 percent worse than typical HTTP and similarly reduces the optimum capacity of your WebLogic Server installation.
SSL involves intensive computing operations. When supporting the cryptography operations in the SSL protocol, WebLogic Server cannot handle as many simultaneous connections. Consider the number of SSL connections required out of the total number of client connections required. Typically, for every SSL connection that the server can handle, it can handle three non-SSL connections. SSL substantially reduces the capacity of the server, depending upon the strength of encryption used in the SSL connections. Also, the amount of overhead SSL imposes is related to how many client interactions have SSL enabled. WebLogic Server includes native performance packs for SSL operations.
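Under the rough three-to-one rule above, you can estimate effective connection capacity for a mixed SSL and non-SSL client population. The sketch below assumes an illustrative baseline of 3000 plain connections; the 3x cost factor is the rule of thumb from this section, not a measured value:

```java
public final class SslCapacity {
    // Each SSL connection costs roughly as much as three plain connections,
    // so n clients with fraction f using SSL consume n * (1 + 2f) capacity
    // units. Solving for n gives the effective capacity.
    public static int effectiveCapacity(int plainCapacity, double sslFraction) {
        return (int) (plainCapacity / (1.0 + 2.0 * sslFraction));
    }

    public static void main(String[] args) {
        int base = 3000; // assumed plain-connection capacity, for illustration
        System.out.println(effectiveCapacity(base, 0.0)); // no SSL clients
        System.out.println(effectiveCapacity(base, 1.0)); // all SSL: one third
        System.out.println(effectiveCapacity(base, 0.5)); // half SSL
    }
}
```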
The number of user interactions per second with WebLogic Server represents the total number of interactions that should be handled per second by a given WebLogic Server deployment. Typically for Web deployments, user interactions access JSP pages or servlets. User interactions in application deployments typically access EJBs. Consider also the maximum number of transactions in a given period to handle spikes in demand. For example, in a stock report application, plan for a surge after the stock market opens and before it closes. If your company is broadcasting a Web site as part of an advertisement during the World Series or World Cup Soccer playoffs, you should expect spikes in demand.
Network Performance
Related Information
"Capacity Planning for Web Performance: Metrics, Models, and Methods", Prentice Hall, 1998, ISBN 0-13-693822-1, at http://www.cs.gmu.edu/~menasce/webbook/index.html.
"Configuration and Capacity Planning for Solaris Servers", Brian L. Wong, at http://btobsearch.barnesandnoble.com/booksearch/isbninquiry.asp?userid=36YYSNN1TN&isbn=0133499529&TXT=Y&itm=1.
"J2EE Applications and BEA WebLogic Server", Prentice Hall, 2001, ISBN 0-13-091111-9, at http://www.amazon.com/J2EE-Applications-BEA-WebLogic-Server/dp/0130911119.
A Web portal focusing on capacity-planning issues for enterprise application deployments, at http://www.capacityplanning.com/.