Java EE Support Patterns: IBM AIX

11.18.2012

IBM AIX: Java process size monitoring

This article provides a quick reference guide on how to calculate the memory footprint of Java VM processes running on IBM AIX 5.3+.

This is a complementary post to my original article on this subject: how to monitor the Java native memory on AIX. I highly recommend this read to anyone involved in production support or development of Java applications deployed on AIX.

Why is this knowledge important?

From my perspective, basic knowledge of how the OS manages the memory allocation of your JVM processes is very important. We often overlook this monitoring aspect and focus only on the Java heap itself.

From my experience, most Java memory-related problems are observed from the Java heap itself, such as garbage collection problems, leaks etc. However, I’m confident that you will face situations in the future involving native memory problems or OS memory challenges. Proper knowledge of your OS and its virtual memory management is crucial for proper root cause analysis, recommendations and solutions.

AIX memory vs. pages

As you may have seen from my earlier post, the AIX Virtual Memory Manager (VMM) is responsible for managing memory requests from the system and its applications.

Physical memory is partitioned into units called pages; each page is allocated either in physical RAM or stored on disk until it is needed. A page can have a size of 4 KB (small page), 64 KB (medium page) or 16 MB (large page). Typically, for a 64-bit Java process you will see a mix of all of the above.

What about the topas command?

The typical reflex when supporting applications on AIX is to run the topas command, similar to Solaris top. Find below an example of output from AIX 5.3:



The topas command is not very helpful for getting a clear view of the memory utilization since it does not provide the breakdown view that we need for our analysis. It is still useful to get a rough idea of the paging space utilization, which can quickly reveal your top "paging space" consumer processes. The same can be achieved via the ps aux command, as per the sketch below.
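
For example, assuming the BSD-style ps aux output where %MEM is the fourth column (the default on AIX), a quick sort can reveal your top memory consumers. This is a rough sketch only; please verify the column layout on your own AIX release:

# Print the ps aux header, then the 5 processes with the highest %MEM
# (a rough proxy for memory / paging space consumption)
ps aux | head -1
ps aux | sort -rn -k4 | head -5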

AIX OS command to the rescue: svmon

The AIX svmon command is by far my preferred command to deep dive into the Java process memory utilization. This is a very powerful command, similar to the Solaris pmap. It allows you to monitor the current memory page allocation along with each segment, e.g. Java heap vs. native heap segments. Analyzing the svmon output will allow you to calculate the memory footprint for each page type (4 KB, 64 KB and 16 MB).

Now find below a real example which will allow you to understand how the calculation is done:

# 64-bit JVM with -Xms2048m & -Xmx2048m (2 GB Java Heap)
# Command: svmon -P <Java PID>




In this example, the total footprint of our Java process was found to be ~2.2 GB, which is aligned with the current Java heap settings (the 2 GB Java heap plus the native heap and other segments). You should be able to easily perform the same memory footprint analysis in your AIX environment.
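
If you want to automate the calculation, find below a minimal sketch that sums the per-PageSize "Inuse" counters printed by svmon and converts them to MB. It assumes the default svmon -P summary layout with rows such as "s    4 KB   <Inuse> ..."; please verify the field positions on your own AIX release before relying on it:

# Approximate the Java process footprint from the svmon page counters
svmon -P <Java PID> | awk '
  $1 == "s" && $3 == "KB" { kb += $4 * 4 }      # small pages: 4 KB each
  $1 == "m" && $3 == "KB" { kb += $4 * 64 }     # medium pages: 64 KB each
  $1 == "L" && $3 == "MB" { kb += $4 * 16384 }  # large pages: 16 MB each
  END { printf "Approximate footprint: %.1f MB\n", kb / 1024 }'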

I hope this article has helped you to understand how to calculate the Java process size on AIX OS. Please feel free to post any comment or question.

2.08.2012

Too many open files – Case Study

This case study describes the complete root cause analysis and resolution of a File Descriptor (Too many open files) related problem that we faced following a migration from Oracle ALSB 2.6 running on Solaris OS to Oracle OSB 11g running on AIX.

This article will also provide you with proper AIX OS commands you can use to troubleshoot and validate the File Descriptor configuration of your Java VM process.

Environment specifications

-        Java EE server: Oracle Service Bus 11g
-        Middleware OS: IBM AIX 6.1
-        Java VM: IBM JRE 1.6.0 SR9 – 64 bit
-        Platform type: Service Bus – Middle Tier

Problem overview

-        Problem type: java.net.SocketException: Too many open files errors were observed under heavy load, causing our Oracle OSB managed servers to suddenly hang

Such a problem was observed only during high load and required our support team to take corrective action, e.g. shutting down and restarting the affected Weblogic OSB managed servers.

Gathering and validation of facts

As usual, a Java EE problem investigation requires gathering of technical and non-technical facts so we can either derive other facts and/or conclude on the root cause. Before applying a corrective measure, the facts below were verified in order to conclude on the root cause:

·        What is the client impact? HIGH; Full JVM hang
·        Recent change of the affected platform? Yes, recent migration from ALSB 2.6 (Solaris OS) to Oracle OSB 11g (AIX OS)
·        Any recent traffic increase to the affected platform? No
·        What is the health of the Weblogic server? Affected managed servers were no longer responsive along with closure of the Weblogic HTTP (Server Socket) port
·        Did a restart of the Weblogic Integration server resolve the problem? Yes, but only temporarily

-        Conclusion #1: The problem appears to be load related

Weblogic server log files review

A quick review of the affected managed server logs revealed the error below:

java.net.SocketException: Too many open files

This error indicates that our Java VM process was running out of File Descriptors. This is a severe condition that affects the whole Java VM process and causes Weblogic to close its internal Server Socket port (HTTP/HTTPS port), preventing any further inbound & outbound communication to the affected managed server(s).

File Descriptor – Why so important for an Oracle OSB environment?

The File Descriptor capacity is quite important for your Java VM process. The key concept you must understand is that File Descriptors are not only required for pure File Handles but also for inbound and outbound Socket communication. Each new Java Socket created to (inbound) or from (outbound) your Java VM by the Weblogic kernel Socket Muxer requires a File Descriptor allocation at the OS level.

An Oracle OSB environment can require a significant number of Sockets, depending on how much inbound load it receives and how many outbound connections (Java Sockets) it has to create in order to send and receive data from external / downstream systems (System End Points).

For that reason, you must ensure that you allocate enough File Descriptors / Sockets to your Java VM process to support your daily load, including problematic scenarios such as a sudden slowdown of external systems, which typically increases the demand on the File Descriptor allocation.

Runtime File Descriptor capacity check for Java VM and AIX OS

Following the discovery of this error, our technical team performed a quick review of the current runtime File Descriptor capacity & utilization of our OSB Java VM processes. This can be done easily via the AIX procfiles <Java PID> | grep rlimit and lsof -p <Java PID> | wc -l commands, as per the example below:

## Java VM process File Descriptor total capacity

>> procfiles 5425732 | grep rlimit
  Current rlimit: 2000 file descriptors

## Java VM process File Descriptor current utilization

>> lsof -p <Java PID> | wc -l
  1920

As you can see, the current capacity was found to be 2000, which is quite low for a medium-size Oracle OSB environment. The average utilization under heavy load was also found to be quite close to this upper limit of 2000.

The next step was to verify the default AIX OS File Descriptor limit via the ulimit -S -n command:

>> ulimit -S -n
  2000

-        Conclusion #2: The current File Descriptor limit for both the OS and the OSB Java VM was quite low and set at 2000. The File Descriptor utilization was also found to be quite close to this upper limit, which explains why so many JVM failures were observed at peak load

Weblogic File Descriptor configuration review

The File Descriptor limit can typically be overwritten when you start your Weblogic Java VM. Such configuration is managed by the WLS core layer and the relevant script can be found at the following location:

<WL_HOME>/wlserver_10.3/common/bin/commEnv.sh

..................................................
resetFd() {
  if [ ! -n "`uname -s |grep -i cygwin || uname -s |grep -i windows_nt || \
       uname -s |grep -i HP-UX`" ]
  then
    ofiles=`ulimit -S -n`
    maxfiles=`ulimit -H -n`
    if [ "$?" = "0" -a  `expr ${maxfiles} : '[0-9][0-9]*$'` -eq 0 -a `expr ${ofiles} : '[0-9][0-9]*$'` -eq 0 ]; then
      ulimit -n 4096
    else
      if [ "$?" = "0" -a `uname -s` = "SunOS" -a `expr ${maxfiles} : '[0-9][0-9]*$'` -eq 0 ]; then
        if [ ${ofiles} -lt 65536 ]; then
          ulimit -H -n 65536
        else
          ulimit -H -n 4096
        fi
      fi
    fi
  fi
.................................................

Root cause: File Descriptor override only working for Solaris OS!

As you can see from the script snippet above, the override of the File Descriptor limit via ulimit is only applicable to Solaris OS (SunOS), which explains why our current OSB Java VM running on AIX ended up with the default value of 2000 vs. our older ALSB 2.6 environment running on Solaris OS, which had a File Descriptor limit of 65536.


Solution: script tweaking for AIX OS

The resolution of this problem was done by modifying the Weblogic commEnv script, as per the sketch below. This change ensured a configuration of 65536 File Descriptors (up from 2000), including on AIX OS:
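
The exact patched script is not reproduced here, but find below a minimal sketch of the kind of tweak we applied: force the higher soft limit on every non-Windows / non-HP-UX platform instead of only on SunOS. Please adapt and test it against your own commEnv.sh version and OS hard limits:

resetFd() {
  if [ ! -n "`uname -s |grep -i cygwin || uname -s |grep -i windows_nt || \
       uname -s |grep -i HP-UX`" ]
  then
    # Raise the soft File Descriptor limit to 65536 on all remaining
    # platforms, including AIX OS (this only succeeds if the OS hard
    # limit, ulimit -H -n, is high enough)
    ulimit -n 65536
  fi
}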


** Please note that the activation of any change to the Weblogic File Descriptor configuration requires a restart of both the Node Manager (if used) along with the managed servers. **

A runtime validation was also performed following the activation of the new configuration, which confirmed the new active File Descriptor limit:

>> procfiles 6416839 | grep rlimit
  Current rlimit: 65536 file descriptors

No failure has been observed since then.

Conclusion and recommendations

-        When upgrading your Weblogic Java EE container to a new version, please ensure that you verify your current File Descriptor limit as per the above case study
-         From a capacity planning perspective, please ensure that you monitor your File Descriptor utilization on a regular basis in order to identify any potential capacity problem, Socket leak etc.; a simple monitoring sketch can be found below
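
Find below a minimal monitoring sketch (a hypothetical helper, not an official tool) that compares the current File Descriptor utilization of a Java PID against its rlimit. Please verify the output format of procfiles and lsof on your own AIX release before relying on it:

#!/bin/sh
# fdcheck.sh (hypothetical): report Java VM File Descriptor utilization vs. capacity
PID=$1
# procfiles prints e.g. "Current rlimit: 2000 file descriptors" (limit = 3rd field)
LIMIT=`procfiles $PID | grep rlimit | awk '{print $3}'`
# lsof prints one line per open File Descriptor (plus a header line)
USED=`lsof -p $PID | wc -l`
echo "PID $PID: $USED File Descriptors in use out of a capacity of $LIMIT"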

Please don’t hesitate to post any comment or question on this subject if you need any additional help.

12.23.2011

HashMap infinite loop problem – Case Study

This article will provide you with the complete root cause analysis and solution of a java.util.HashMap infinite loop problem affecting an Oracle OSB 11g environment running on an IBM JRE 1.6 JVM.

This case study will also demonstrate how you can combine the AIX ps -mp command and Thread Dump analysis to pinpoint your top CPU contributor Threads within your Java VM(s). It will also demonstrate how dangerous using a non-Thread-safe HashMap data structure can be within a multi-Threaded environment / Java EE container.

Environment specifications

-        Java EE server: Oracle Service Bus 11g
-        Middleware OS: AIX 6.1
-        Java VM: IBM JRE 1.6 SR9 – 64-bit
-        Platform type: Service Bus

Monitoring and troubleshooting tools

-        AIX nmon & topas (CPU monitoring)
-        AIX ps -mp (CPU and Thread breakdown OS command)
-        IBM JVM Java core / Thread Dump (Thread analysis and ps -mp data correlation)

Problem overview

-        Problem type: Very High CPU observed from our production environment

A high CPU problem was observed via AIX nmon monitoring of the production environment hosting a Weblogic Oracle Service Bus 11g middleware environment.

Gathering and validation of facts

As usual, a Java EE problem investigation requires gathering of technical and non-technical facts so we can either derive other facts and/or conclude on the root cause. Before applying a corrective measure, the facts below were verified in order to conclude on the root cause:

·        What is the client impact? HIGH
·        Recent change of the affected platform? Yes, platform was recently migrated from Oracle ALSB 2.6 (Solaris & HotSpot 1.5) to Oracle OSB 11g (AIX OS & IBM JRE 1.6)
·        Any recent traffic increase to the affected platform? No
·        How does this high CPU manifest itself? A sudden CPU increase was observed and did not go down, even after the load went down, e.g. to a near-zero level
·        Did an Oracle OSB recycle resolve the problem? Yes, but the problem returned after a few hours or a few days (unpredictable pattern)

-        Conclusion #1: The high CPU problem appears to be intermittent vs. purely correlated with load
-        Conclusion #2: Since the high CPU remains after the load goes down, this indicates either that some JVM threshold is triggered past a point of no return and / or the presence of hung or infinitely looping Threads

AIX CPU analysis

The AIX nmon & topas OS commands were used to monitor the CPU utilization of the system and the Java process. The CPU utilization was confirmed to go as high as 100% (saturation level).

The CPU remained at this very high level until the JVM was recycled.

AIX CPU Java Thread breakdown analysis

One of the best troubleshooting approaches for this type of issue is to generate an AIX ps -mp snapshot combined with a Thread Dump. This was achieved by executing the command below:

ps -mp <Java PID> -o THREAD

Then immediately execute:

kill -3 <Java PID>

** This will generate an IBM JRE Thread Dump / Java core file (javacorexyz..) **
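
To make sure both artifacts cover the same moment in time, the two commands can be combined in a small wrapper script, as per the hypothetical sketch below:

#!/bin/sh
# Snapshot the per-Thread CPU breakdown, then immediately request a javacore
PID=$1
ps -mp $PID -o THREAD > ps_mp.$PID.`date +%Y%m%d.%H%M%S`.txt
kill -3 $PID  # the IBM JRE writes a javacore file (location depends on your -Xdump settings)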

The AIX ps -mp command output was generated as per below:

USER      PID     PPID       TID ST  CP PRI SC    WCHAN        F     TT BND COMMAND
user 12910772  9896052         - A    97  60 98        *   342001      -   - /usr/java6_64/bin/java -Dweblogic.Nam
-        -        -   6684735 S    0  60  1 f1000f0a10006640  8410400      -   - -
-        -        -   6815801 Z    0  77  1        -   c00001      -   - -
-        -        -   6881341 Z    0 110  1        -   c00001      -   - -
-        -        -   6946899 S    0  82  1 f1000f0a10006a40  8410400      -   - -
-        -        -   8585337 S    0  82  1 f1000f0a10008340  8410400      -   - -
-        -        -   9502781 S    2  82  1 f1000f0a10009140  8410400      -   - -
-        -        -  10485775 S    0  82  1 f1000f0a1000a040  8410400      -   - -
-        -        -  10813677 S    0  82  1 f1000f0a1000a540  8410400      -  
-        -        -  21299315 S    95  62  1 f1000a01001d0598   410400      -   - -
-        -        -  25493513 S    0  82  1 f1000f0a10018540  8410400      -   - -
-        -        -  25690227 S    0  86  1 f1000f0a10018840  8410400      -   - -
-        -        -  25755895 S    0  82  1 f1000f0a10018940  8410400      -   - -
-        -        -  26673327 S    2  82  1 f1000f0a10019740  8410400      -  



As you can see in the above snapshot, one primary culprit Thread Id (21299315) was found consuming ~95% of the entire CPU.

Thread Dump analysis and ps -mp correlation

Once the primary culprit Thread was identified, the next step was to correlate this data with the Thread Dump data and identify the source / culprit at the code level.

But first, we had to convert the Thread Id from decimal to hexadecimal format, since IBM JRE Thread Dumps print the native Thread Ids in hexadecimal.

Culprit Thread Id 21299315 >> 0x1450073 (hexadecimal format)
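
The conversion itself is a simple one-liner, e.g. using the shell’s printf:

>> printf "0x%X\n" 21299315
0x1450073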

A quick search within the generated Thread Dump file did reveal the culprit Thread as per below.

Weblogic ExecuteThread #97 Stack Trace can be found below:

3XMTHREADINFO      "[STUCK] ExecuteThread: '97' for queue: 'weblogic.kernel.Default (self-tuning)'" J9VMThread:0x00000001333FFF00, j9thread_t:0x0000000117C00020, java/lang/Thread:0x0700000043184480, state:CW, prio=1
3XMTHREADINFO1            (native thread ID:0x1450073, native priority:0x1, native policy:UNKNOWN)
3XMTHREADINFO3           Java callstack:
4XESTACKTRACE                at java/util/HashMap.findNonNullKeyEntry(HashMap.java:528(Compiled Code))
4XESTACKTRACE                at java/util/HashMap.putImpl(HashMap.java:624(Compiled Code))
4XESTACKTRACE                at java/util/HashMap.put(HashMap.java:607(Compiled Code))
4XESTACKTRACE                at weblogic/socket/utils/RegexpPool.add(RegexpPool.java:20(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/HttpClient.resetProperties(HttpClient.java:129(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/HttpClient.openServer(HttpClient.java:374(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/HttpClient.New(HttpClient.java:252(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/HttpURLConnection.connect(HttpURLConnection.java:189(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/http/HttpOutboundMessageContext.send(HttpOutboundMessageContext.java(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/http/wls/HttpTransportProvider.sendMessageAsync(HttpTransportProvider.java(Compiled Code))
4XESTACKTRACE                at sun/reflect/GeneratedMethodAccessor2587.invoke(Bytecode PC:58(Compiled Code))
4XESTACKTRACE                at sun/reflect/DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37(Compiled Code))
4XESTACKTRACE                at java/lang/reflect/Method.invoke(Method.java:589(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/Util$1.invoke(Util.java(Compiled Code))
4XESTACKTRACE                at $Proxy115.sendMessageAsync(Bytecode PC:26(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/LoadBalanceFailoverListener.sendMessageAsync(LoadBalanceFailoverListener.java:141(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/LoadBalanceFailoverListener.onError(LoadBalanceFailoverListener.java(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/http/wls/HttpOutboundMessageContextWls$RetrieveHttpResponseWork.handleResponse(HttpOutboundMessageContextWls.java(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/AsyncResponseHandler$MuxableSocketHTTPAsyncResponse$RunnableCallback.run(AsyncResponseHandler.java:531(Compiled Code))
4XESTACKTRACE                at weblogic/work/ContextWrap.run(ContextWrap.java:41(Compiled Code))
4XESTACKTRACE                at weblogic/work/SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528(Compiled Code))
4XESTACKTRACE                at weblogic/work/ExecuteThread.execute(ExecuteThread.java:203(Compiled Code))
4XESTACKTRACE                at weblogic/work/ExecuteThread.run(ExecuteThread.java:171(Compiled Code))

Thread Dump analysis – HashMap infinite loop condition!

As you can see from the above Stack Trace of Thread #97, the Thread is stuck in an infinite loop / Thread race condition over a java.util.HashMap object (IBM JRE implementation).

This finding was quite interesting given that this HashMap is actually created / owned by the Weblogic 11g kernel code itself >> weblogic/socket/utils/RegexpPool

Root cause: non-Thread-safe HashMap in Weblogic 11g (10.3.5.0) code!

Following this finding and data gathering exercise, our team created an SR with Oracle support, which confirmed this defect within the Weblogic 11g code base.

As you may already know, usage of a non-Thread-safe / non-synchronized HashMap under concurrent Thread access is very dangerous and can easily lead to internal HashMap index corruption and / or infinite looping. This is also a golden rule for any middleware software such as Oracle Weblogic, IBM WAS and Red Hat JBoss, which rely heavily on HashMap data structures for various Java EE and caching services.

The most common solution is to use the ConcurrentHashMap data structure which is designed for that type of concurrent Thread execution context.

Solution

Since this problem was also affecting other Oracle Weblogic 11g customers, Oracle support was quite fast in providing us with a patch for our target WLS 11g version. Please find the patch description and details below:

Content:
========
This patch contains Smart Update patch AHNT for WebLogic Server 10.3.5.0

Description:
============
HIGH CPU USAGE AT HASHMAP.PUT() IN REGEXPPOOL.ADD()

Patch Installation Instructions:
================================
- copy content of this zip file with the exception of README file to your SmartUpdate cache directory (MW_HOME/utils/bsu/cache_dir by default)
- apply patch using Smart Update utility

Conclusion

I hope this case study has helped you understand how to pinpoint the culprits of high CPU Threads at the code level when using AIX & the IBM JRE, and the importance of proper Thread-safe data structures for highly concurrent applications.

Please don’t hesitate to post any comment or question.

12.08.2011

PRSTAT AIX – How to pinpoint high CPU Java VM Threads

Most of you are probably familiar with the powerful Solaris OS prstat command. This command allows you to generate a snapshot of all running Java VM Threads along with CPU % for each of them.

When combined with Thread Dump analysis, it allows you to pinpoint high CPU problems such as:

·         Java Threads involved in infinite or heavy looping
·         Java Threads involved in excessive / nonstop garbage collection (GC Threads)
·         Java Threads involved in heavy logging / IO activities


This analysis strategy is extremely important for any individual involved in Java EE middleware and / or application support such as Oracle Weblogic, IBM Websphere, RedHat JBoss etc.

Ok thanks but I’m using the AIX OS; what can I do?

The prstat command is not available on the AIX OS but fortunately there is an equivalent command & strategy that you can use to simulate and generate the same Thread & CPU % breakdown.

This is great news. Now please show me how it works

Please simply follow the instructions below:

1)   Identify the AIX process Id of your Java VM process
2)  When high CPU is observed, execute the following command: ps -mp <Java PID> -o THREAD

Example: ps -mp capture of a Weblogic Java process running at 40% CPU utilization

USER      PID     PPID       TID ST  CP PRI SC    WCHAN        F     TT BND COMMAND
user 12910772  9896052         - A    40  60 98        *   342001      -   - /usr/java6_64/bin/java -Dweblogic.Nam
-        -        -   6684735 S    0  60  1 f1000f0a10006640  8410400      -   - -
-        -        -   6815801 Z    0  77  1        -   c00001      -   - -
-        -        -   6881341 Z    0 110  1        -   c00001      -   - -
-        -        -   6946899 S    0  82  1 f1000f0a10006a40  8410400      -   - -
-        -        -   8585337 S    0  82  1 f1000f0a10008340  8410400      -   - -
-        -        -   9502781 S    30  82  1 f1000f0a10009140  8410400      -   - -
-        -        -  10485775 S    0  82  1 f1000f0a1000a040  8410400      -   - -
-        -        -  10813677 S    0  82  1 f1000f0a1000a540  8410400      -   - -
-        -        -  11206843 S    3  82  1 f1000f0a1000ab40  8410400      -   - -
-        -        -  11468831 S    0  82  1 f1000f0a1000af40  8410400      -   - -
-        -        -  11796597 S    0  82  1 f1000f0a1000b440  8410400      -   - -
-        -        -  19070989 S    0  82  1 f1000f0a10012340  8410400      -   - -
-        -        -  25034989 S    2  62  1 f1000a01001d0598   410400      -   - -
-        -        -  25493513 S    0  82  1 f1000f0a10018540  8410400      -   - -
-        -        -  25690227 S    0  86  1 f1000f0a10018840  8410400      -   - -
-        -        -  25755895 S    0  82  1 f1000f0a10018940  8410400      -   - -
-        -        -  26673327 S    2  82  1 f1000f0a10019740  8410400      -   - -
-        -        -  26804377 S    0  60  1 f1000a0100220998   410400      -   - -
-        -        -  27787407 S    0  82  1        -   418400      -   - -
-        -        -  28049461 S    2  82  1 f1000f0a1001ac40  8410400      -   - -
-        -        -  28114963 S    0  82  1 11a835728   c10400      -   - -
-        -        -  29491211 S    0  82  1 f1000f0a1001c240  8410400      -   - -
-        -        -  29884565 S    0  78  1 f1000f0a1001c840  8410400      -   - -



3)      Immediately after, generate a Java VM Thread Dump by executing kill -3 <Java PID>. This command will generate an AIX / IBM Thread Dump (javacore.xyz format). At this point, you should have both the ps -mp data output and an AIX Java VM Thread Dump
4)      Now analyze your ps -mp output data, identify the Thread Id(s) with the highest CPU contribution and convert the TID from decimal to hexadecimal format

Example: Thread TID: 9502781 >> 0x91003D
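
Again, the shell’s printf can perform the decimal to hexadecimal conversion for you:

>> printf "0x%X\n" 9502781
0x91003D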

5)      At this point, you are ready to pinpoint and determine why such Java Thread(s) is / are using so much CPU. The answer is in the JVM Thread Dump: simply search within the Thread Dump using the Thread TID in hexadecimal format. The final step is to analyze the affected Thread(s) Stack Trace and determine the root cause, e.g. application code problem, middleware problem etc.

Example: In our case, the primary culprit (Thread TID: 0x91003D) was identified in the Thread Dump. As you can see, this Thread is involved in an infinite loop condition over a HashMap. This is a common problem when using a non-Thread-safe HashMap data structure combined with a high number of concurrent Threads.

3XMTHREADINFO      "[STUCK] ExecuteThread: '97' for queue: 'weblogic.kernel.Default (self-tuning)'" J9VMThread:0x00000001333FFF00, j9thread_t:0x0000000117C00020, java/lang/Thread:0x0700000043184480, state:CW, prio=1
3XMTHREADINFO1            (native thread ID:0x91003D, native priority:0x1, native policy:UNKNOWN)
3XMTHREADINFO3           Java callstack:
4XESTACKTRACE                at java/util/HashMap.findNonNullKeyEntry(HashMap.java:528(Compiled Code))
4XESTACKTRACE                at java/util/HashMap.putImpl(HashMap.java:624(Compiled Code))
4XESTACKTRACE                at java/util/HashMap.put(HashMap.java:607(Compiled Code))
4XESTACKTRACE                at weblogic/socket/utils/RegexpPool.add(RegexpPool.java:20(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/HttpClient.resetProperties(HttpClient.java:129(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/HttpClient.openServer(HttpClient.java:374(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/HttpClient.New(HttpClient.java:252(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/HttpURLConnection.connect(HttpURLConnection.java:189(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/http/HttpOutboundMessageContext.send(HttpOutboundMessageContext.java(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/http/wls/HttpTransportProvider.sendMessageAsync(HttpTransportProvider.java(Compiled Code))
4XESTACKTRACE                at sun/reflect/GeneratedMethodAccessor2587.invoke(Bytecode PC:58(Compiled Code))
4XESTACKTRACE                at sun/reflect/DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:37(Compiled Code))
4XESTACKTRACE                at java/lang/reflect/Method.invoke(Method.java:589(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/Util$1.invoke(Util.java(Compiled Code))
4XESTACKTRACE                at $Proxy115.sendMessageAsync(Bytecode PC:26(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/LoadBalanceFailoverListener.sendMessageAsync(LoadBalanceFailoverListener.java:141(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/LoadBalanceFailoverListener.onError(LoadBalanceFailoverListener.java(Compiled Code))
4XESTACKTRACE                at com/bea/wli/sb/transports/http/wls/HttpOutboundMessageContextWls$RetrieveHttpResponseWork.handleResponse(HttpOutboundMessageContextWls.java(Compiled Code))
4XESTACKTRACE                at weblogic/net/http/AsyncResponseHandler$MuxableSocketHTTPAsyncResponse$RunnableCallback.run(AsyncResponseHandler.java:531(Compiled Code))
4XESTACKTRACE                at weblogic/work/ContextWrap.run(ContextWrap.java:41(Compiled Code))
4XESTACKTRACE                at weblogic/work/SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:528(Compiled Code))
4XESTACKTRACE                at weblogic/work/ExecuteThread.execute(ExecuteThread.java:203(Compiled Code))
4XESTACKTRACE                at weblogic/work/ExecuteThread.run(ExecuteThread.java:171(Compiled Code))

Need any additional help?

I hope this short tutorial has helped you understand how you can pinpoint high CPU Thread contributors using the AIX equivalent of the prstat command.

For any question or additional help, please simply post a comment or question below this article. You can also email me directly at [email protected].