IBM Technology for Java Virtual Machine in IBM i5/OS
Aleksandr Nartovich
Adam Smye-Rumsby
Paul Stimets
George Weaver
ibm.com/redbooks
International Technical Support Organization
February 2007
SG24-7353-00
Note: Before using this information and the product it supports, read the information in “Notices” on
page vii.
This edition applies to IBM® Developer Kit and Runtime Environment, Java™ 2 Technology Edition, Version
5.0.
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not give you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Enterprise JavaBeans, EJB, Java, Javadoc, JavaBeans, JavaServer, JavaServer Pages, JDBC, JDK, JMX,
JRE, JSP, JVM, J2EE, J2ME, J2SE, Solaris, Sun, Sun Microsystems, and all Java-based trademarks are
trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
Internet Explorer, Microsoft, Windows, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel
Corporation or its subsidiaries in the United States, other countries, or both.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
This IBM® Redbook gives a broad understanding of the new 32-bit Java™ Virtual Machine (JVM™) in IBM i5/OS®. With the arrival of this new JVM, the IBM System i™ platform now comfortably supports Java and WebSphere® applications on a wide array of server models: from entry-level boxes to huge enterprise systems.
This book provides in-depth information about setting up Java and IBM WebSphere environments with the new 32-bit JVM, tuning its performance, and monitoring and troubleshooting its runtime with the new set of tools.
The information in this IBM Redbook helps system architects, Java application developers, and system administrators in their work with the 32-bit JVM in i5/OS.
Important: Although this book targets the i5/OS implementation, most of the information in this book applies to all IBM server platforms on which the new 32-bit JVM is supported.
Paul Stimets is an Advisory Software Engineer at IBM, working in the Rochester Support
Center. He supports Java and WebSphere Application Server on the System i platform. He
has over 8 years of experience at IBM and has worked with Java on the System i platform since its introduction in 1998 with IBM OS/400® V4R2.
George Weaver is a software engineer with the System i Technology Center at IBM in
Rochester, MN. He has provided consultations, education, writings, Web content, and videos
to help System i clients, software vendors, business partners and solution providers enhance
their applications and services portfolios over the last 13 years. His current areas of focus are Web application development and WebSphere Application Server configuration, administration, and performance.
Many thanks to the Java developers at IBM Rochester, MN for their enthusiastic support of this IBM Redbook.
Bill Berg
Marc Blais
Arv Fisher
Steve Fullerton
Jesse Gorzinski
Sandra Marquardt
Mark Schleusner
Nishant Thakkar
Blair Wyman
IBM Rochester, Minnesota
Richard Chamberlain
Ben Corrie
Holly Cummins
IBM Hursley, UK
Your efforts help increase product acceptance and customer satisfaction. As a bonus, you
will develop a network of contacts in IBM development labs, and increase your productivity
and marketability.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our IBM Redbooks™ to be as helpful as possible. Send us your comments about
this or other IBM Redbooks in one of the following ways:
Use the online Contact us review IBM Redbook form found at:
ibm.com/redbooks
Send your comments in an email to:
[email protected]
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Because this is a technical book, it provides a brief introduction to several key definitions
related to Java.
Any programming language is based on a well-defined syntax and semantics¹. Any high-level language program, including a Java program, is a text file with a specific file extension (.java for Java programs). You cannot execute a text file on a computer as it is. Another component, called a compiler, is required to convert the human-readable text file into machine instructions. Typically, these instructions are unique to a specific hardware platform, such as Intel® or IBM PowerPC®. There is a compiler for Java too, but it works differently from other compilers. Refer to 1.2, “Java Virtual Machine” on page 4 and 1.4, “Java platform development tools and API” on page 8 for further information.
Over time, developers noticed that some program segments were duplicated in many different applications. As a result, the most frequently used functions (segments of programs) are prewritten and grouped into special libraries. These libraries have different names on different platforms and in different languages; for Java, they are called Java class libraries. You can download these packages to a computer and include (or reference) them in any program, thereby achieving code reuse.
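As a simple illustration of this kind of reuse, the following program (a minimal sketch, not part of the original text) references classes from the java.util package of the Java class libraries instead of implementing its own list handling:

import java.util.ArrayList;
import java.util.List;

public class ReuseExample {
    public static void main(String[] args) {
        // List and ArrayList come from the Java class libraries, so the
        // program reuses prewritten code instead of reimplementing it.
        List<String> platforms = new ArrayList<String>();
        platforms.add("System i");
        platforms.add("System p");
        System.out.println("Platforms: " + platforms);
    }
}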
As a language (meaning syntax and semantics), Java has not changed much, but the runtime support and the number of available packages have grown tremendously. As a result, the Java programming language is one of the most popular languages for enterprise applications.
¹ Semantics is the set of actions (or machine instructions) executed for each statement in your program.
(Figure: the Java SDK includes the JRE, which consists of supporting files, core classes, and the JVM; the main JVM components are the interpreter, memory management, class loading, the JIT compiler, and diagnostics.)
Different applications might require different Java platforms. Therefore, there are several
editions of the Java platform in existence today:
Java Platform Micro Edition (Java ME, formerly called J2ME™): This edition targets
development and deployment of the Java applications for small, mobile devices, such as
PDAs and cell phones.
Java Platform Standard Edition (Java SE, formerly J2SE™): This edition targets development and deployment of most Java applications running on servers.
Java Platform Enterprise Edition (Java EE, formerly J2EE™): This edition adds application programming interfaces (APIs) for enterprise-level applications. Examples of the additional APIs include Web services, distributed deployment of applications, and communication APIs.
However, you cannot execute this bytecode yet, because your system understands only machine instructions that are unique to your system’s architecture (PowerPC, Intel, and so on). In order to execute a Java program, you have to install a middleware component that fits between your Java program and the operating system. This component is called the Java Virtual Machine (JVM). The JVM runs as an application on top of an operating system, as shown in Figure 1-2, and provides a runtime environment for all Java applications. The JVM effectively emulates an operating system environment for Java programs; that is why it is called a virtual machine.
Figure 1-2 Java Virtual Machine: a Java application runs on the JVM, which runs on the operating system (i5/OS, AIX, Linux, Windows, and so on), which in turn runs on the hardware platform (PowerPC, Intel, and so on)
The main purpose of the JVM is to convert Java bytecode into the machine instructions that can be executed on the hardware platform where you run your application. The JVM is overhead in a Java Runtime Environment compared with traditionally compiled languages, such as C or RPG; however, this is the price you pay for Java portability.
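For example, assuming a source file named HelloWorld.java, the Java compiler first turns the source into bytecode, and the JVM then executes that bytecode (these commands are generic and not specific to i5/OS):

javac HelloWorld.java   (produces HelloWorld.class, a file that contains bytecode)
java HelloWorld         (the JVM loads HelloWorld.class and converts the bytecode into machine instructions)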
Converting Java bytecode is not the only task that the JVM performs. The next section looks at the JVM architecture, because a good understanding of the JVM components can help you in tuning, maintaining, and monitoring a JRE.
Interpreter
The interpreter is a component that processes Java bytecodes by calling a fixed set of native instructions for each kind of Java bytecode. This is the slowest way to process Java bytecodes.
Class loader
Before the Java interpreter can start processing Java bytecode, the Java class file has to be loaded. The Java class loader is responsible for supporting Java’s dynamic code loading facilities, which include the following:
Reading standard Java .class files.
Resolving class definitions in the context of the current runtime environment. The Java language allows you to define more than one class with the same name; however, these classes have to be in different packages². The class loader is responsible for correct class name resolution.
Verifying the bytecodes defined by the class file to determine whether the bytecodes are
language-legal.
Initializing the class definition after it is accepted into the managed runtime environment.
Class initialization includes memory allocation for the class and assignment of the initial
values to all class parameters and variables.
Supporting various reflection APIs for introspection on the class and its defined members.
² You can have multiple class loaders, and two versions of the same class in the same package can exist in the same JVM as long as they are loaded with two different class loaders.
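The following minimal sketch (not part of the original text) illustrates the dynamic loading and reflection facilities that the class loader supports; java.util.ArrayList is used only as a convenient example class:

public class LoaderDemo {
    public static void main(String[] args) throws Exception {
        // Ask the runtime to load a class by name; the class loader locates,
        // verifies, and initializes it before the Class object is returned.
        Class<?> loaded = Class.forName("java.util.ArrayList");
        System.out.println("Loaded class: " + loaded.getName());
        // Core classes are loaded by the bootstrap class loader, which is reported as null.
        System.out.println("Loaded by: " + loaded.getClassLoader());
    }
}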
Diagnostics component
The diagnostics component provides Reliability, Availability, and Serviceability (RAS)
facilities to the JVM. The IBM Virtual Machine for Java is distinguished by its extensive RAS
capabilities. It is designed to be deployed in business-critical operations and includes several
trace and debug utilities to assist with problem determination. If a problem occurs in the field,
it is possible to use the capabilities of the diagnostics component to trace the runtime function
of the JVM and help to identify the cause of the problem. The diagnostics component can
produce output selectively from various parts of the JVM and the just-in-time (JIT) compiler.
JVM API
The JVM API encapsulates all the interaction between external programs and the JVM.
Examples include the following:
Creation and initialization of the JVM through the invocation APIs. Creating a JVM involves native code; the JVM API provides the Java Native Interface (JNI) API to communicate with that native code.
Handling command-line directives. Any command-line arguments that you supply during JVM startup are processed by the JVM API component.
Presentation of public JVM APIs such as JNI and JVM Tool Interface (JVMTI). JVMTI
supports monitoring and debugging interfaces for the JVM tools.
Presentation and implementation of private JVM APIs used by core Java classes.
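As a hedged sketch of how Java code reaches native code through JNI, the following class declares a native method and loads a hypothetical native library; the library and method names are illustrative only, and the native side has to be built separately:

public class NativeHello {
    // Implemented in native code (for example, C) and resolved through JNI at run time.
    public native void sayHello();

    static {
        // Loads the hypothetical native library (for example, libnativehello.so).
        System.loadLibrary("nativehello");
    }

    public static void main(String[] args) {
        new NativeHello().sayHello();
    }
}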
Just-in-time compiler
Java bytecode interpretation is done by the JVM one bytecode at a time. If your Java program calls the same method twice, the JVM interprets that method two times. This is just one example; in most business programs, string manipulation, math calculations, and database access methods are among the most frequently executed operations.

However, there is a better way to handle the most frequently used methods than interpreting them again and again. This optimization mechanism improves the overall performance of Java applications and is implemented in an additional component called the just-in-time (JIT) compiler.
Note: The JIT compiler is not a required component of the JVM, but it is found in all production implementations of the JVM.
Note: You get JRE in i5/OS by installing 5722JV1 option 8 (J2SE 5.0 32-bit). JRE is
located in the following directory in the Integrated File System (IFS):
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/
In addition to the tools, the Java SDK includes examples of Java programs and development APIs, such as the internationalization and IDL APIs.
IBM also released the IBM Toolbox for Java to the open source community as JTOpen. This
enabled Java application developers to create applications that access System i resources,
even if they did not have a System i platform in their IT infrastructure. IBM continues to
enhance both the licensed program product version (5722-JC1) and the JTOpen versions of
IBM Toolbox for Java.
At that time, one of the most common uses of Java with System i applications was Java applets. A Java applet is a Java application that typically runs within the JVM support
included with a Web browser. A Java developer would create an applet that, for example,
displayed a bar graph of data. A Web developer would code a Hypertext Markup Language
(HTML) page to launch this applet within the user’s browser session.
More importantly from a System i perspective, Java applications could run directly on i5/OS. Application developers could take advantage of the “write once, run anywhere” capability that Java afforded and make their applications available on multiple platforms, including the System i platform. Workstation-based applications that used the IBM Toolbox for Java classes to access System i resources could run directly on the System i platform.
(Figure: in the Classic JVM architecture, the JVM is implemented below the TIMI as part of the trusted system code, alongside components such as database, communications, save/restore, and security.)
i5/OS provided interfaces to the Java platform via the Qshell environment. i5/OS also
provided a RUNJVA control language (CL) command that enabled Java applications to be
called directly from a command line or CL program. i5/OS also included a CRTJVAPGM CL
command to optimize application performance and create a reusable program object that
could be saved between system initial program loads (IPLs). See Figure 2-2. This is known as
the direct executable (DE) environment.
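As an illustration of the direct executable environment, the following CL commands show how a class file might be optimized and then run; the path, class name, and parameter values are examples only and are not taken from this book:

CRTJVAPGM CLSF('/home/john/Hello.class') OPTIMIZE(40)
RUNJVA CLASS(Hello) CLASSPATH('/home/john')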
The Classic JVM is used by default for Java applications run from the i5/OS Qshell environment and from the RUNJVA CL command. The Classic JVM is also the default for WebSphere Application Server workloads. IBM releases new versions of the Java Development Kit (JDK™) by creating new options for the base IBM Developer Kit for Java
Development Kit (JDK™) by creating new options for the base IBM Developer Kit for Java
licensed program product. Fixes are delivered by the standard System i program temporary
fix (PTF) process. IBM provides a Java group PTF that includes fixes for the specific i5/OS
release (which is V5R4M0) and the software development kit (SDK) versions (which are 1.3,
1.4, and 5.0). This greatly simplifies system management tasks, especially compared with
other platforms that require a separate fix pack installation for each JDK version.
IBM has invested very heavily in Java technologies for over 10 years. IBM research has
recently developed a highly configurable core JVM that you can use for micro edition
implementations, where a small footprint is of utmost importance, and you can also use it for
server and enterprise implementations, where scalability and extensibility are necessary. This
JVM also has configurable garbage collection policies, which are discussed in further detail in
5.2.1, “Garbage collection policies” on page 37, and advanced performance profiling and
optimization features. This new JVM implementation is referred to as IBM Technology for JVM
throughout the rest of this book.
IBM Technology for JVM is available and supported on most of the popular operating
systems, including i5/OS. IBM Technology for JVM is compliant with JSE version 1.4.2 and
version 5.0 specifications. IBM Technology for JVM is available on i5/OS V5R4M0 and later
releases as 5722-JV1, option 8, and is compliant with JSE version 5.
2.2.1 Similarities
Before discussing the differences between Classic JVM and IBM Technology for JVM for
System i platform, it is helpful to understand the similarities. The most important similarity is
that both VMs are fully compliant with the JSE specifications, and Java programs can run in
either VM without modification or recompilation. You can use both environments within the
i5/OS Qshell environment to compile (the javac command) and run (the java command) Java
programs. Both are shipped as part of the 5722-JV1 (IBM Developer Kit for Java).
Both can use the RUNJVA CL command, although some parameters for IBM Technology for
JVM are ignored. Both can use environment variables defined by the i5/OS ADDENVVAR CL
command. Both have their fixes bundled with the i5/OS Java group PTF (SF99291 for
V5R4M0). Both can use the IBM Toolbox for Java to access resources such as data queues
and data areas. Both can use the System i native Java Database Connectivity (JDBC™)
driver to access database resources. Both can use System i debugging APIs that are started via the STRSRVJOB or STRDBG CL commands.
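For instance, both VMs can run a simple JDBC program such as the following sketch unchanged. The driver class name (com.ibm.db2.jdbc.app.DB2Driver) and URL (jdbc:db2:*LOCAL) used here are assumptions about the native JDBC driver mentioned above and should be verified against your installation:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class JdbcSketch {
    public static void main(String[] args) throws Exception {
        // Register the assumed native JDBC driver and connect to the local database.
        Class.forName("com.ibm.db2.jdbc.app.DB2Driver");
        Connection conn = DriverManager.getConnection("jdbc:db2:*LOCAL");
        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery("SELECT CURRENT DATE FROM SYSIBM.SYSDUMMY1");
        while (rs.next()) {
            System.out.println("Current date: " + rs.getString(1));
        }
        rs.close();
        stmt.close();
        conn.close();
    }
}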
2.2.2 Differences
There are differences between IBM Technology for JVM and Classic JVM. Some are based
upon how IBM Technology for JVM is implemented on System i platform. Others are based
on the different code bases used between the two.
PASE is discussed in more detail in 3.1, “i5/OS portable application solutions environment” on
page 20. PASE on the System i platform does not use an emulation or operating system layer and tends to be very fast. PASE can access System i resources such as the file system, programs, and Transmission Control Protocol/Internet Protocol (TCP/IP) sockets. Therefore, IBM Technology for JVM can access the same resources as the Classic JVM.
(Figure: IBM Technology for JVM runs as a 32-bit JVM inside PASE, the AIX runtime environment for applications and shared libraries, directly on the PowerPC hardware; it uses syscalls to reach i5/OS services below the TIMI, where the Classic JVM and other trusted system components such as database, communications, and security are implemented.)
PASE on the System i platform supports 32-bit and 64-bit address spaces. With i5/OS V5R4M0, IBM Technology for JVM currently uses a 32-bit address space. This limits the theoretical maximum memory usage of IBM Technology for JVM to approximately 4 GB (2^32 bytes), versus approximately 17 billion GB (2^64 bytes) for the Classic JVM. IBM Technology for JVM uses approximately 20% of this memory for its own internal operations, limiting the maximum heap to approximately 3.25 GB (the practical limit for most applications is in the 2.5 GB to 3.0 GB range). The Classic JVM has a maximum heap of approximately 240 GB.
Even though the IBM Technology for JVM’s use of a 32-bit address space limits the maximum heap size compared with the Classic JVM, it also has advantages. The 32-bit address space uses less memory for the JVM compared with the Classic JVM. As a rule of thumb, the heap size required for IBM Technology for JVM is approximately 60% of that of the Classic JVM, all other things being equal. Suppose you are running WebSphere Application Server workloads today with the Classic JVM and have a heap size of 4 GB. The same environment running with IBM Technology for JVM would need a heap of approximately 2.5 GB and would most likely run fine.
IBM Technology for JVM runs as an application in PASE. This environment is very similar to the implementation of IBM Technology for JVM on AIX®. It yields good performance because it is close to the processor and has a relatively short code path. Because it is not implemented above the System i TIMI, it does not have implementation independence from underlying hardware changes. However, this should only affect the JVM itself and not end-user applications.
The i5/OS CRTJVAPGM and DSPJVAPGM commands are not applicable with the IBM
Technology for JVM. Also, the RUNJVA command parameters such as OPTIMIZE are
ignored.
Another difference is that many applications should see performance improvements such as decreased response time and higher throughput. One reason is the smaller memory requirement: object references are 32 bits with IBM Technology for JVM, compared with 64 bits for the Classic JVM. Also, IBM has optimized IBM Technology for JVM for PASE. Calls from applications running in IBM Technology for JVM into System i services (such as Integrated Language Environment® (ILE) programs and TCP/IP sockets) incur a slight increase in overhead compared to the Classic JVM. However, method execution within applications running in IBM Technology for JVM incurs less overhead than with the Classic JVM and typically more than compensates for the cost of those calls.
Finally, there are substantial differences in monitoring agents and diagnostic tools. Both VMs
support a set of tools. However, in most cases these tools are different for each VM. For
System i developers, who have been working with the native tools, Table 2-1 shows the
support of native tools for IBM Technology for JVM.
ANZJVM command: not supported
DMPJVM command: not supported
The new set of tools for IBM Technology for JVM is described in Chapter 8, “Analyzing JVM
behavior” on page 83.
There are a few other differences that are listed in 2.4, “Using IBM Technology for JVM with
existing applications” on page 16.
If you do have i5/OS V5R4 or later and, optionally, WebSphere Application Server version 6.1 or later, then how do you choose? If scalability in a single JVM process on a large multiprocessor System i platform (meaning eight or more processors) or i5/OS integration is important, then the Classic JVM might be a better fit.
In general, you should estimate the memory requirements of your application running in IBM Technology for JVM. If the application’s memory requirements fit in the 32-bit addressable space, test your application in both VMs to choose the better performing one. Refer to Chapter 4, “Making the switch to IBM Technology for JVM” on page 27 for more details.
In our tests, IBM Technology for JVM, on average, has been 7-10% faster than the Classic JVM. IBM Technology for JVM represents IBM’s strategic JVM offering. Using a common code base across multiple platforms provides a focus for IBM research and development investment in Java technology.
In the future, you can expect to see a 64-bit version of IBM Technology for JVM. This will enable you to compare the 32-bit and 64-bit versions of IBM Technology for JVM to determine which is optimal for your specific environment. The eventual introduction of 64-bit IBM Technology for JVM on i5/OS will bring i5/OS in line with other platforms, such as AIX and Linux®, for which both 32-bit and 64-bit versions of IBM Technology for JVM are available.

Finally, you can expect to see application development tools, such as WebSphere Development Studio Client for iSeries, incorporate IBM Technology for JVM. This will enable developers to extensively test and debug their stand-alone and WebSphere Application Server based IBM Technology for JVM applications prior to putting them into production on a System i platform.
To maximize the reuse of common code, i5/OS uses the AIX version of the IBM Technology
for JVM as its base. The code is customized for use under the i5/OS portable applications
solutions environment (i5/OS PASE). Because most of the code is common, the i5/OS version is able to rapidly adopt updates and fixes made to IBM Technology for JVM, especially those updates with AIX-specific benefits or considerations.
It is also important to ensure that the latest program temporary fixes (PTFs) are installed.
i5/OS requires:
Latest cumulative PTF package
Latest HIPER Group PTF (SF99539)
Note: For more information about required PTFs, visit the following Web site:
http://www.ibm.com/systems/support/i/fixes/index.html
For i5/OS PASE, the list of required PTFs can be found here:
http://www-03.ibm.com/servers/enable/site/porting/iseries/pase/misc.html
IBM Technology for JVM is included in licensed program 5722-JV1 (option 8). The install
media for 5722-JV1 is included on the V5R4 Standard Set Media. Option 8 of 5722-JV1 is
typically found on the CD labeled D29xx_07.
When the installation completes, you have to install the latest Java group PTF (SF99291). An easy way to install SF99291 is by using the GO PTF menu and taking option 8 (Install program temporary fix package). When this book was written, SF99291 was at level 4. You can use the following Web site to check whether there is a more current version:
http://www.ibm.com/systems/support/i/fixes/index.html
On a CL command line you can use the WRKPTFGRP command to verify the level of each
group PTF installed on the system.
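For example, to display only the Java group PTF mentioned above, you can enter the following (the group name is the one given earlier for V5R4M0):

WRKPTFGRP PTFGRP(SF99291)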
Note: By adding -showversion to the java command, additional lines of output are displayed that indicate which JVM is in use.
$
export JAVA_HOME=/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit
$
java -showversion -classpath /qibm/proddata/java400 Hello
java version "1.5.0"
Java(TM) 2 Runtime Environment, Standard Edition (build jclap32dev)
IBM J9 VM (build 2.3, J2RE 1.5.0 IBM J9 2.3 OS400 ppc-32 (JIT enabled)
J9VM - 20060501_06428_bHdSMR
JIT - 20060428_1800_r8
GC - 20060501_AA)
JCL - jclap32dev
Hello World
$
Figure 3-2 Running the Hello program to verify IBM Technology for JVM installation
If your system has java.version=1.3 or 1.4 specified in a properties file, you might still see output for the Classic JVM. Adding the property -Djava.version=1.5 on the command line ensures that you actually use IBM Technology for JVM, for example:
In order to activate the IBM Technology for JVM, JAVA_HOME must be set to the following:
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit
Note: This value is case sensitive. If JAVA_HOME is set to an invalid directory, any attempt
to invoke Java fails with an error similar to Figure 3-3.
If you are familiar with the Classic JVM on i5/OS you know that you can have multiple JDKs
installed (1.3, 1.4, 1.5, and so on) and you can switch between them by setting the
java.version property. The only way to use IBM Technology for JVM is by setting the
JAVA_HOME environment variable. The behavior of JAVA_HOME is as follows:
If JAVA_HOME is not set, or is set to the empty string, all forms of Java invocation use the i5/OS default Classic JVM implementation. If the Classic JVM is not installed, IBM Technology for JVM is used.
If java.version is set to 1.3 or 1.4, JAVA_HOME is ignored and Classic JVM is used.
If JAVA_HOME is set to a valid JVM installation directory, all Java invocations use the
specified VM.
If JAVA_HOME is set to any other value, Java invocation fails with an error message as
shown in Figure 3-3.
JAVA_HOME directory /BLAH not found. Java Virtual Machine not created.
Java program completed with exit code 1
Figure 3-3 Showing the error message displayed when JAVA_HOME is not set correctly
1. Set JAVA_HOME with the ADDENVVAR CL command, for example:
ADDENVVAR ENVVAR(JAVA_HOME) VALUE('/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit')
If you want to remove the JAVA_HOME variable, type the following:
RMVENVVAR JAVA_HOME
2. Set JAVA_HOME from within QSHELL. For example:
a. Open a Qshell session using:
STRQSH
b. Export the variable to your process:
export -s JAVA_HOME=/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit
To remove the JAVA_HOME environment variable from within Qshell use the unset
command. For example, use the following:
unset JAVA_HOME
3. Use a .profile file to initialize the shell. If you would like JAVA_HOME to be set every time
you start Qshell, you can create a .profile file in a user’s home directory. For example, if
your i5/OS user profile is John you would perform the following steps:
a. Create a file called .profile in /home/John.
b. Add the following text to the file:
export -s JAVA_HOME=/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit
c. Save the file.
Now, when you enter Qshell it initializes the shell with JAVA_HOME. From within the
Qshell session, run the env command to confirm that JAVA_HOME is set. See the sample
output in Figure 3-4.
$
env
LANG=/QSYS.LIB/EN_US.LOCALE
JAVA_HOME=/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit
QIBM_USE_DESCRIPTOR_STDIO=I
TRACEOPT=UNLINK
QIBM_DESCRIPTOR_STDERR=CRLN=N
QIBM_DESCRIPTOR_STDOUT=CRLN=N
QIBM_DESCRIPTOR_STDIN=CRLN=Y
LOGNAME=PAULS
SHLVL=1
HOSTTYPE=powerpc
HOSTID=123.123.123.123
HOSTNAME=RCHAS60.SERVER.IBM.COM
OSTYPE=os400
MACHTYPE=powerpc-ibm-os400
TERMINAL_TYPE=5250
...
Figure 3-4 JAVA_HOME is set correctly
If you prefer to use a method that works on all platforms, you can refer to Appendix D of the
IBM Developer Kit and Runtime Environment, Java 2 Technology Edition, Version 5.0
Diagnostics Guide, available at:
http://download.boulder.ibm.com/ibmdl/pub/software/dw/jdk/diagnosis/diag50.pdf
You can create the SystemDefault.properties file in either of the following locations:
Your user.home directory:
For example: /home/John/SystemDefault.properties
The /QIBM/UserData/Java400 directory:
/QIBM/UserData/Java400/SystemDefault.properties
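A minimal sketch of a SystemDefault.properties file is shown below; the single property used here is only an example (it selects JDK 5.0) and is discussed further in the rest of this chapter:

# SystemDefault.properties (sample contents)
java.version=1.5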
If you prefer to customize the name and location of the properties file you can define the
QIBM_JAVA_PROPERTIES_FILE environment variable. The following is a sample of a CL
command:
ADDENVVAR ENVVAR(QIBM_JAVA_PROPERTIES_FILE)
VALUE(/qibm/userdata/java400/mySystem.properties)
For additional details and examples review Chapter 17 in the IBM Developer Kit and Runtime
Environment, Java 2 Technology Edition, Version 5.0 Diagnostics Guide, available at:
http://download.boulder.ibm.com/ibmdl/pub/software/dw/jdk/diagnosis/diag50.pdf
Problem: The java.class.path property and the $CLASSPATH environment variable are ignored.
Cause: The classpath is being overridden by a java.class.path entry in a system properties file.
Solution: Modify the system properties file accordingly.

Problem: IBM Technology for JVM does not support adopted authority.
Cause: This is not a standard feature of a JVM.
Solution: Modify your program to remove the adopted authority dependency (refer to 7.3.4, “Adopted authority” on page 77).

Problem: A call to a native program fails.
Cause: The program was not compiled to support teraspace.
Solution: Recompile the native program with the following options: TERASPACE(*YES) STGMDL(*TERASPACE) DTAMDL(*LLP64).

Problem: JAVA_HOME points to IBM Technology for JVM, but your Java program starts in the Classic JVM.
Cause: java.version is set to 1.3 or 1.4 in one of the property files.
Solution: Change java.version to 1.5.
Tip: A general rule of thumb is that any application running under the Classic JVM with a heap that is less than about 5 GB in size should run in IBM Technology for JVM without a problem.
The general procedure recommended for switching an application from Classic JVM to IBM
Technology for JVM is as follows:
1. Measure the heap usage of the application under Classic JVM. Refer to 4.1.1, “Measuring
heap usage in Classic JVM”. If the heap usage is significantly larger than 5 GB under
Classic JVM, the application may run out of heap space under IBM Technology for JVM.
If you are trying to run a new application, skip this step.
2. Estimate the required maximum heap size under IBM Technology for JVM based on the usage under the Classic JVM. You can do this by multiplying your heap requirement in the Classic JVM by 0.6. For example, an application that uses a 3 GB heap under the Classic JVM needs roughly 1.8 GB under IBM Technology for JVM.
If you are testing a new application, or are not certain about the performance
characteristics of an existing application running in the Classic 64-bit VM, start by running
the application in IBM Technology for JVM with the default heap size parameters
(currently an initial heap size of 4 MB and a maximum of 2 GB).
3. Determine heap settings based on estimated heap requirements. Refer to 5.2.2, “Tuning
heap size” on page 41 for information about how to change heap settings.
4. Test application under load using IBM Technology for JVM, collecting verbose garbage
collection (GC) data. Keep the default optthruput GC policy, but use the heap settings
from step 3 listed previously.
Refer to 5.2.3, “Verbose GC output” on page 44 for information about how to collect
verbose GC data.
If performance or memory usage does not meet expectations, or OutOfMemoryErrors are generated, then perform the following steps:
a. Analyze heap usage using EVTK or manual review of verbose GC data. Refer to
“Troubleshooting garbage collection” on page 133 for information about using EVTK.
b. Adjust heap settings and repeat test cycle until satisfactory performance is achieved.
5. Optionally, once the application is using a suitable heap size, you may wish to test
different GC policies and how they interact with your application.
Refer to 5.2.1, “Garbage collection policies” on page 37 for more information about the
available GC policies and how to use a different policy.
The remainder of this chapter provides guidance on some of these steps and some other
considerations for switching to IBM Technology for JVM.
Attention: DMPJVM does hold the job briefly while it takes a snapshot of the JVM. On rare occasions, with large JVMs, this can cause a performance problem that might require a JVM restart to recover.
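A hedged example of invoking the command for a specific job follows; the job number, user, and job name are placeholders:

DMPJVM JOB(123456/QEJBSVR/WAS60SVR)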
........................................................................
. Garbage Collection .
........................................................................
Garbage collector parameters
Initial size: 98304 K
Max size: 240000000 K
Current values
Heap size: 453768 K
Garbage collections: 115
Additional values
JIT heap size: 188256 K
JVM heap size: 310524 K
Last GC cycle time: 449 ms
........................................................................
Figure 4-1 Showing total heap
Figure 4-1 shows an extract of the DMPJVM output. Use the Heap size value from the output to estimate whether this application fits into IBM Technology for JVM.
Note: If the DMPJVM command does not produce any output, the command may have
been terminated by your interactive job. Check the Default wait time of your job using the
CHGJOB command. It has a default value of 30 seconds. You may want to try increasing
this to 300 seconds to give the command sufficient time to complete.
For more information about the DMPJVM command, refer to the System i Information Center,
V5R4 at the following link:
http://publib.boulder.ibm.com/infocenter/iseries/v5r4/index.jsp
What you see on your screen is some basic information about every Classic JVM on the
system that is active. To locate the job you are interested in, look for the task name.
Figure 4-2 illustrates what you will see for a job named “WAS60SVR.”
The various heap sizes (in bytes) are shown in Figure 4-2. Use the first heap value (SIZE=513261568) from the output to estimate whether this application fits into IBM Technology for JVM.
Now you might be wondering which GC policy is best for your application. The
recommendation is that you always start with the default GC policy. The default policy,
optimize for throughput (optthruput) is typically used for applications where raw throughput is
more important than short GC pauses. The application is stopped each time that garbage is
collected. If you prefer to test another policy, it is recommended that you read about each
policy in Chapter 5, “Tuning the garbage collector” on page 35.
Tip: The recommendation is that you always start with the default GC policy. The default
policy, optimize for throughput (optthruput) is typically used for applications where raw
throughput is more important than short GC pauses.
As an example, Table 4-2 shows how the WebSphere Application Server product changes these values when switching between the Classic JVM and IBM Technology for JVM. For additional help choosing appropriate values for the minimum and maximum heap, refer to 5.2.2, “Tuning heap size” on page 41.
Table 4-2 Default heap size values for WebSphere Application Server V6.1
VM Default value for -Xms Default value for -Xmx
Table 4-3 Common properties of Classic JVM and IBM Technology for JVM
Properties Description
file.encoding Maps the coded character set identifier (CCSID) to the corresponding ISO
American Standard Code for Information Interchange (ASCII) CCSID. Also,
sets the file.encoding value to the Java value that represents the ISO ASCII
CCSID. See file.encoding values and iSeries CCSID for a table that shows the
relationship between possible file.encoding values and the closest matching
CCSID. ISO8859_1 is the default value.
java.class.path Designates the path that i5/OS uses to locate classes. Defaults to the
user-specified CLASSPATH.
java.compiler Specifies whether you use the just-in-time (JIT) compiler or not. For Classic
JVM the value can be jitc (JIT compiler) or jitc_de (both the JIT compiler
and direct execution).
For IBM Technology for JVM the values are NONE or j9jit23. If you specify any
other value, JIT compiler just ignores the setting and acts as though you
specified j9jit23.
os400.stdio.convert Allows control of the data conversion for stdin, stdout, and stderr in Java. Data
conversion occurs by default in the Java virtual machine to convert ASCII data
to or from Extended Binary Coded Decimal Interchange Code (EBCDIC). You
can turn these conversions on or off with this property, which affects the
current Java program.
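Any of these properties can also be supplied on the java command line with the standard -D syntax, for example (HelloWorld is a placeholder class name):

java -Dfile.encoding=ISO8859_1 HelloWorld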
If your Java program uses adopted authority, it has to be modified. Refer to 7.3.4,
“Adopted authority” on page 77 for more information.
To make sure that switching WebSphere Application Server to use IBM Technology for JVM
is a good idea, review the previous topics in this chapter:
Refer to 4.1, “Fitting your application into a 32-bit JVM” on page 28. Notice, however, that
the practical limit for the Java heap running WebSphere Application Server is 2.5 GB.
Refer to 4.3, “Finding dependencies to the Classic JVM” on page 32.
If you decide to switch your WebSphere Application Server to use IBM Technology for JVM,
then refer to Appendix A, “Running WebSphere Application Server with IBM Technology for
JVM” on page 167. It has detailed instructions about how to make the switch.
IBM Technology for JVM includes an option to produce verbose output, which gives great detail regarding the operation of the garbage collector. Analyzing this information gives insight into whether adjusting the garbage collection policy and command-line parameter settings might lead to increased garbage collection performance.
Memory management involves two distinct but related functions of the JVM:
A memory allocator to allocate areas of storage for objects
A garbage collector to free memory when it is no longer being used
When a JVM initializes, it reserves an area of system memory called the heap. As the Java
application runs and objects are created, these are stored in the heap. Because system
memory is finite, the heap also has a finite size. The heap has an initial, or minimum, size and
also a maximum size. The heap size can expand or contract automatically during the lifetime
of the JVM as required by the garbage collector.
The garbage collector must ensure that there is enough free space in the heap to satisfy new allocation requests. It does this by marking live objects and sweeping dead objects to free their memory. Dead objects are objects to which there are no references from elsewhere in the JVM, such as from other objects. The freed memory can then be used for new allocations.
If the heap runs out of space for some reason, the JVM can fail with an OutOfMemoryError and other messages to this effect. Therefore, garbage collection is an important function of the JVM.
In the Classic JVM, the garbage collector runs asynchronously (in the background) and concurrently (at the same time) with application threads. This collector is unique to i5/OS; it does not use a “Stop The World” approach for collecting objects.
Tuning options in the Classic JVM are limited to setting the minimum heap size and maximum
heap size. Setting the minimum heap size provides rudimentary control over the frequency of
GC cycles. There is no compaction of the heap to reduce fragmentation. This means that the
heap can grow to a larger size than actually required by the application.
Fragmentation occurs because Java objects require contiguous storage. Over the lifetime of
the application, many allocation and free operations take place, each involving different size
chunks of memory. This gradually results in the heap containing many small fragments of
memory. These fragments might represent a significant chunk of memory in total, yet there is
insufficient contiguous storage to fulfill new allocation requests. Eliminating fragmentation is
therefore a key requirement of the garbage collector. GC in IBM Technology for JVM uses a
compaction technique that moves all live objects to the beginning of the heap. Thus, compaction aggregates all free chunks of memory into a single contiguous piece of memory.
There is also scope for fine-tuning other garbage collection attributes to meet your application’s requirements; these attributes are described in 5.2, “Available options” on page 37.
It is possible that your application running on Classic JVM uses a heap larger than 3 GB and
therefore you might think that it will not run under the IBM Technology for JVM. However,
bear in mind that object references are twice the size in a 64-bit JVM compared to a 32-bit
JVM. Therefore, object references require more storage and result in a larger memory
footprint in the Classic JVM.
Conversely, running the application under IBM Technology for JVM requires less storage for object references, and therefore the memory footprint is reduced. In this way, applications that seemingly will not fit within the address space of IBM Technology for JVM can in fact fit, due to the contraction in size of object references and other factors.
In testing, memory footprints of applications decreased by between 20% and 50% when
moved from Classic JVM to IBM Technology for JVM. Therefore, applications running under
Classic JVM with up to approximately a 4 GB heap are good candidates to run under IBM
Technology for JVM. If your current application requires up to 6 GB of memory, IBM
Technology for JVM might or might not fit this application. You have to perform stress testing
of your application running in 32-bit JVM to understand memory requirements.
Refer to 4.1.1, “Measuring heap usage in Classic JVM” on page 29 for more information
about how to determine the current heap size of the application under the Classic JVM.
The garbage collection component in IBM Technology for JVM has a variety of functions available at its disposal, such as concurrent marking and concurrent sweeping. Behind the scenes, each GC policy defines which of these functions are used, and in which combination, to achieve a particular goal.

Table 5-1 lists the available policies and explains when to use each one.
Optimize for throughput (-Xgcpolicy:optthruput, optional): The default policy. Typically used for applications where raw throughput is more important than short GC pauses. The application is stopped each time that garbage is collected.
Optimize for pause time (-Xgcpolicy:optavgpause): Trades high throughput for shorter GC pauses by performing some garbage collection concurrently. The application is paused for shorter periods compared to optthruput.
Generational concurrent (-Xgcpolicy:gencon): Handles short-lived objects differently than objects that are long-lived. Applications that have many short-lived objects can see shorter pause times with this policy while still producing good throughput.
Subpooling (-Xgcpolicy:subpool): Uses an algorithm similar to the default policy but employs an allocation strategy that is more suitable for multiprocessor machines. Recommended for symmetric multiprocessing (SMP) machines with 16 or more processors. Applications that have to scale on large machines can benefit from this policy.
If you want to use a policy other than the default optthruput policy, you have to use the
-Xgcpolicy command line parameter with an appropriate argument when launching the JVM.
If you do not specify the -Xgcpolicy parameter, the default policy is used.
For example, to use the Optimize for pause time (optavgpause) GC policy, you use the
following command line:
java -Xgcpolicy:optavgpause HelloWorld
The following sections describe the characteristics of each policy in more detail.
All objects are then traced by the collector and those that are live (those objects which are still
referred to by other objects or elsewhere in the JVM) are marked.
The mark and sweep phases take place during every GC cycle. At the end of the sweep
phase there may still be insufficient memory to satisfy the allocation request. This can occur if
there is excessive fragmentation of the heap.
If this scenario arises, the heap is compacted. Compaction moves objects toward the
beginning of the heap, aligning them so that no free space remains between objects as
shown in Figure 5-1. Compaction is an expensive operation compared to the mark and sweep
phases, therefore, GC avoids it unless absolutely necessary to prevent the JVM from running
out of memory.
For some applications, varying GC pause times can be undesirable. For instance, graphical
applications which require consistent response times to user interaction might not be best
suited to optthruput.
The optavgpause policy sacrifices some application throughput in order to reduce the
average GC pause time. A mark-sweep-compact collection process is still used, but the
application threads perform some of the mark and sweep work themselves while the
application is still running as shown in Figure 5-2. One or more low-priority GC threads also
run in the background to perform concurrent marking while the application is idle.
Stop-The-World collection is still used but does not take as long as optthruput because some
of the work has already been done. The throughput penalty when using optavgpause is
around 5% but this varies by application.
Generational GC separates the heap into two areas referred to as nursery space and tenured
space. Allocations are initially in the nursery space. If an object survives enough garbage
collections, it is promoted to the tenured space, and the object is said to have tenured. The
nursery and tenured areas are collected differently for performance.
The nursery space is further split into an allocate space and a survivor space as shown in
Figure 5-3. Objects are allocated to the allocate space. When the allocate space is full, a GC
process called scavenge is triggered. The scavenge copies live objects into either the survivor
space, or tenured space if they are old enough. Age is measured in terms of how many
garbage collections the object has survived. The JVM determines dynamically how old an
object must be to tenure; however, the maximum is 14 GC cycles. Live objects are copied to the survivor space and aligned in such a way as to avoid fragmentation.
Figure 5-3 The heap is divided into nursery and tenured spaces in the gencon GC policy
After live objects have been copied into the survivor space, the allocate space contains only dead objects. The allocate space and survivor space then swap, or flip, roles: the allocate space becomes the survivor space for the next scavenge, and the survivor space becomes the allocate space. On the next scavenge, live objects overwrite dead objects from the previous scavenge for efficiency. The allocate and survivor spaces then flip roles again.
The scavenge process runs on the nursery area of the heap only, therefore, the cost of
collecting the entire heap on each GC cycle is not incurred. See Figure 5-4.
Objects in the tenured space typically live longer than those in the nursery, and therefore the
tenured space is deliberately collected less often than the nursery. The tenured space is
marked concurrently using a similar approach to the optavgpause policy, hence the policy is
called generational concurrent (gencon). However, concurrent sweep is not used, therefore,
when concurrent marking of the tenured space is completed, the application is paused for the
global collection to be completed. This global collection collects garbage in both the nursery
and tenured spaces.
Figure 5-4 Example of heap layout before and after garbage collection using the gencon policy: objects that have survived long enough are promoted from the nursery (allocate and survivor spaces) to the tenured space
All GC policies in IBM Technology for JVM employ a free list. The free list is a data structure
that keeps track of which areas of the heap are available for allocations. The free list also
contains information about the size of each area of free space. Typically a single free list is
maintained for the entire heap. Much of the list might have to be searched before a suitable
chunk of free space is found.
Subpooling makes use of multiple free lists referred to as pools. A pool contains the location
of chunks of free space of a specific size. Each pool is associated with a different size as
shown in Figure 5-5. The pools are ordered by size, therefore the memory allocator only has
to go to the first element of the right pool to allocate an object. This results in faster object
allocation compared to GC policies with a single free list, because the only search required is
to locate the correct size pool to allocate from.
Subpooling also makes use of per-processor “mini heaps” which further improve performance
on multiprocessor systems. These are managed automatically by the JVM, therefore no
mechanism is provided for adjustment.
Figure 5-5 Subpooling employs multiple pools each with a list of free chunks of a specific size
To override the default heap size settings, you use the -Xms and -Xmx command-line
parameters for minimum and maximum respectively. Default settings are used if these
parameters are not specified.
For example, if you want to change the maximum heap size to 256 MB and use the default value for the minimum heap size, then you use the -Xmx parameter:
java -Xmx256m HelloWorld
To specify a minimum heap size of 32 MB and use the default maximum heap size:
java -Xms32m HelloWorld
To specify a minimum heap size of 32 MB and maximum heap size of 256 MB:
java -Xms32m -Xmx256m HelloWorld
The IBM Technology for JVM now supports use of the G or g argument to specify the heap
size settings in gigabytes rather than megabytes as a convenience:
java -Xmx1g HelloWorld
This sets the maximum heap size to 1 GB. Only whole numbers of gigabytes are allowed,
therefore, specifying -Xmx1.5g returns an error.
If you try to specify too large a maximum heap, the java command returns the following error:
JVMJ9GC028E Option too large
The application should first be run up to steady state with no load. Analyze the verbose GC output at this stage. The current heap size, reported in the most recent entry in the verbose GC output, gives you a rough indication of the initial heap size you should use (defined by the -Xms command-line parameter).
Next, you must run the application under stress to determine an appropriate value for the
maximum heap size. When the application is processing the highest workload, check the
verbose GC output for the current heap size. Use a slightly larger value to set the maximum
heap size (-Xmx command-line parameter). For example, if JVM reports 650 MB as a heap
size under the highest workload, set the maximum heap size for your JVM to 680 MB. This
gives your JVM a small cushion in the production environment.
For fixed-size heaps, there is no concept of heaptop. For non-fixed heaps, heaptop may move toward heapmin or heapmax, depending on how much free space is in the heap after each collection.
The garbage collector by default tries to maintain a minimum of 30% free space in the heap
for new allocations. If less than 30% free space is available after garbage has been collected
in the current GC cycle, the collector might expand the heap by the amount required to reach
30% free space. You can override the minimum free space the garbage collector tries to
maintain by specifying the -Xminf command-line parameter. Specify the size as a decimal in
the range 0-1.
For example to maintain 20% minimum free space in the heap, specify the following:
java -Xminf0.2 HelloWorld
For example to maintain a maximum of 50% free space in the heap, specify the following:
java -Xmaxf0.5 HelloWorld
Note: As with other GC related settings, the default -Xminf and -Xmaxf settings work well,
therefore, most users do not have to adjust them.
Verbose GC output produced by IBM Technology for JVM is now formatted as Extensible
Markup Language (XML). This allows tools to be written to analyze verbose GC output
automatically.
You can collect verbose GC output for review if you want to analyze the GC characteristics
and performance of an application. You can use the analysis of the information in verbose GC
output to support a decision to move to a different GC policy or to tune GC options such as
the heap size. Verbose GC output can also help you identify possible causes of poor GC
performance.
Verbose GC output is enabled by setting one of two command-line options when launching
the JVM, either -verbose:gc (or -verbosegc) or -Xverbosegclog:filename. -verbose:gc
writes its output to the standard error stream, while -Xverbosegclog writes its output to the file
filename.
For example, to enable verbose GC output and capture the output in a file GClog.txt stored in
the Integrated File System (IFS), use the following:
java -Xverbosegclog:/home/rumsbya/GClog.txt HelloWorld
Tip: To analyze the GC output with tools, use the -Xverbosegclog option. Tools might not
read the standard error stream correctly because it can contain other data, for
example, exception stacks.
...
<af type="tenured" id="36" timestamp="Mon Oct 09 13:58:15 2006" intervalms="382.636">
<minimum requested_bytes="972016" />
<time exclusiveaccessms="0.117" />
<tenured freebytes="14877752" totalbytes="130819584" percent="11" >
<soa freebytes="14877752" totalbytes="130819584" percent="11" />
<loa freebytes="0" totalbytes="0" percent="0" />
</tenured>
<gc type="global" id="36" totalid="36" intervalms="85383.105">
<compaction movecount="885558" movebytes="53290192" reason="compact to meet allocation" />
<classloadersunloaded count="58" timetakenms="11.881" />
<refs_cleared soft="7" weak="429" phantom="152" />
<finalization objectsqueued="336" />
<timesms mark="233.177" sweep="8.977" compact="527.274" total="781.744" />
<tenured freebytes="77525984" totalbytes="130819584" percent="59" >
<soa freebytes="76217824" totalbytes="129511424" percent="58" />
<loa freebytes="1308160" totalbytes="1308160" percent="100" />
</tenured>
</gc>
<tenured freebytes="76553968" totalbytes="130819584" percent="58" >
<soa freebytes="75245808" totalbytes="129511424" percent="58" />
<loa freebytes="1308160" totalbytes="1308160" percent="100" />
</tenured>
<time totalms="782.403" />
</af>
...
Figure 5-7 Extract from verbose GC log for one collection using optthruput policy
The top-level element <af> indicates this GC cycle occurred due to an allocation failure.
Other possible values you might see are <sys> for collections forced by System.gc() calls and
<con> for concurrent collections.
Tip: <sys> elements in verbose GC output indicate the application is making calls to
System.gc(). You are strongly discouraged from using System.gc() calls to try to influence
garbage collection because this can severely degrade performance. Remove System.gc()
calls from the application or use the -Xdisableexplicitgc command-line option when
launching the JVM so that System.gc() calls have no effect.
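For example, launching the application as follows makes any remaining System.gc() calls
no-ops (HelloWorld is the sample class used throughout this chapter):
java -Xdisableexplicitgc HelloWorld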
The intervalms attribute of the <af> element shows that only 382.636 milliseconds have
passed since the last allocation failure. This can be an indication that the heap is filling up too
quickly. Consider increasing the maximum heap size.
The next line shows the size of the allocation request that caused the failure, in this case
nearly 950 KB:
<minimum requested_bytes="972016" />
Note: It is recommended that you use the value of totalbytes from the first <tenured>
element when you estimate the maximum heap size setting of your JVM.
Although there is over 14 MB total free space in the heap, there is not a large enough
contiguous chunk to allocate the requested 950 KB. Therefore, compaction is triggered to
move objects toward the beginning of the heap. On this occasion over 880,000 objects are
moved:
<compaction movecount="885558" movebytes="53290192" reason="compact to meet
allocation" />
From the verbose GC output you can see the high cost of compaction compared to the mark
and sweep phases of the GC cycle. This line shows the time in milliseconds taken for each
phase of the GC cycle:
<timesms mark="233.177" sweep="8.977" compact="527.274" total="781.744" />
On this occasion, compaction took more than twice as long as the mark phase and nearly
60 times longer than sweeping dead objects. The whole GC cycle took nearly 0.8 seconds.
This emphasizes the importance of avoiding compaction.
Before you consider changing the garbage collection policy, you have to determine which
performance characteristics are important for your application. Throughput and response time
are two measurements of an application’s performance that you must consider. Throughput
measures how much data is processed by the application in a given time period, usually
expressed as operations or transactions per second. Response time measures how long an
application takes to complete processing of a request, starting from when the request is
received by the application.
The default garbage collection policy is optimized to provide high application throughput while
providing GC pauses short enough for the majority of applications. It is recommended that
you run the application using the default GC policy initially. If this gives satisfactory
performance, then there is no requirement to try other GC policies, although doing so is
straightforward. Just specify the relevant -Xgcpolicy:policy command-line parameter when
launching the JVM.
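For example, to try the generational concurrent policy, launch the application as follows (the
policy names accepted by -Xgcpolicy in this release are optthruput, which is the default,
optavgpause, gencon, and subpool):
java -Xgcpolicy:gencon HelloWorld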
Note: Before considering changing the GC policy, you must be using a heap size
appropriate to your application’s requirements. For example, if the maximum heap size is
set too small, then performance suffers regardless of which GC policy you choose.
If the default GC policy does not give satisfactory performance, analyze verbose GC output
as described in “Interpreting verbose GC output” on page 44, looking for clues to find the
particular aspect of GC behavior which is causing concern.
Tip: There is an excellent article available from IBM which presents a quantitative
approach to choosing a GC policy using a case study. The article is available from IBM
developerWorks® at:
http://www-128.ibm.com/developerworks/java/library/j-ibmjava3/
For instance, excessive fragmentation of the heap might be occurring if the application is
creating many short-lived objects. As a result, you will see in the verbose GC output that
compaction is happening too often, increasing the GC pause time. Subsequently the
application response time may increase, although the extent to which this affects the
response time depends on the workload. The effect is more noticeable in applications that are
lightly-loaded compared to heavily-loaded applications in which GC pause time makes up
only a small fraction of a response time which also includes more significant factors such as
network latency. Switching to the generational concurrent GC policy reduces fragmentation
and therefore reduces pause times.
If you have a multiprocessor server with 16 or more processors (such as a 16-way IBM
System i5™ 570 or IBM System i5 595) consider using subpooling for faster object allocation.
Per-processor “mini heaps” used automatically in this policy might further improve
performance. In addition to IBM System i5, subpooling is also available on IBM System p5™
and IBM System z9™ platforms.
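For example, to select the subpooling policy on such a system, specify the subpool policy
name when launching the application:
java -Xgcpolicy:subpool HelloWorld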
This chapter presents an overview of class loading, then explains how shared classes fit into
the class loading model. The details of the shared classes feature in IBM Technology for
JVM are then explained.
The effect of shared classes on application performance is then discussed. This chapter also
gives an overview of how you can integrate shared classes support into your own
custom-written classloaders and mentions security considerations for using shared classes.
You must load classes into memory before you can create any instances of those classes or
before their static data can be referenced by other classes. You can load classes from several
sources, such as from a .class, zip, or Java archive (JAR) file present in the local filesystem,
or from a remote server.
Categories of classloader
Classes are loaded by a classloader. There are several classloaders supplied with IBM
Technology for JVM, which are used for different class-loading purposes:
Bootstrap classloader
Loads classes from the boot class path (typically classes under jre/lib, such as classes
stored in rt.jar). These classes constitute the core of the runtime environment required by
all applications (such as Object or RuntimeException). The bootstrap classloader is
built into the JVM.
Extension classloader
Loads classes from the installed extensions located on the extension class path. For
example, the IBM JCE provider classes which are included as a value-add feature of IBM
JVMs. In IBM Technology for JVM on i5/OS, such extensions are found in the following
directory by default:
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/ext
System classloader (also known as Application classloader)
Loads classes from the general class path (java.class.path system property) such as the
classes that make up an application.
When a classloader is asked to load a class, it searches for the class in the following order:
Classloader cache: if this classloader has previously loaded the class, it reads the class
from its cache.
Parent classloader
Shared class cache: if shared classes are enabled. Refer to 6.2, “Shared classes in IBM
Technology for JVM” on page 53.
Filesystem
From this you can see a delegation model is used to load classes, as shown in Figure 6-1.
The particular delegation model used in IBM Technology for JVM is referred to as parent first
delegation. This means that if the current classloader has not already loaded the class, it asks
its parent classloader if it has already loaded the class. If the parent has not loaded the class
already, it delegates to its parent, and so on up to the bootstrap classloader.
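The following minimal sketch illustrates the parent-first order of checks; it is an outline only,
not the actual JVM source code, and it assumes the classloader has a non-null parent:

protected Class<?> loadClass(String name, boolean resolve) throws ClassNotFoundException {
    // 1. Classloader cache: has this classloader already loaded the class?
    Class<?> c = findLoadedClass(name);
    if (c == null) {
        try {
            // 2. Parent-first delegation: ask the parent classloader chain
            c = getParent().loadClass(name);
        } catch (ClassNotFoundException e) {
            // 3. Shared class cache and filesystem are searched by findClass()
            c = findClass(name);
        }
    }
    if (resolve) {
        resolveClass(c);
    }
    return c;
}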
The usage of these different classloaders is largely transparent to the programmer. The
programmer must be aware of the order in which the runtime environment searches for
classes, to ensure no name clashes occur. In practice, it is rare that any conflicts will occur,
therefore, exactly how application classes get loaded is usually of little concern unless you
are using a user-defined classloader.
User-defined classloaders
It is possible to write your own classloader in Java. This can be useful to extend class-loading
behavior and capabilities, as shown in the sketch after the following list. For example, your
application might have to:
Find and load a class from a unique location in the file system.
Load a different version of the class that has been loaded by another classloader. You can
load the same class multiple times as long as each version of the class is loaded by a
different classloader.
Generate a class on-the-fly depending on the state of your application, or modify the
bytecode to add tracing.
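As a minimal sketch of the first case, the following user-defined classloader loads classes
from a specific directory; the class name DirClassLoader and its logic are illustrative only:

import java.io.*;

public class DirClassLoader extends ClassLoader {
    private final String baseDir;

    public DirClassLoader(String baseDir, ClassLoader parent) {
        super(parent); // keep parent-first delegation
        this.baseDir = baseDir;
    }

    protected Class<?> findClass(String name) throws ClassNotFoundException {
        try {
            // Map the class name to a .class file under the base directory
            File file = new File(baseDir, name.replace('.', '/') + ".class");
            InputStream in = new FileInputStream(file);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[4096];
            int read;
            while ((read = in.read(buffer)) > 0) {
                out.write(buffer, 0, read);
            }
            in.close();
            byte[] bytes = out.toByteArray();
            return defineClass(name, bytes, 0, bytes.length);
        } catch (IOException e) {
            throw new ClassNotFoundException(name, e);
        }
    }
}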
In traditional classloader models, these core classes are loaded into the private address
space of the JVM. Therefore, if multiple JVMs are running, no JVM can see what classes
other JVMs might already have loaded. Loading multiple copies of the same core classes
clearly leads to a larger memory footprint for each JVM in this scenario. This aspect of the
Java runtime environment is the driving force behind research into technologies such as
shared classes.
As mentioned in “Classloading and application startup” on page 52, hundreds of core classes
have to be loaded by the JVM for any application. On a server running multiple JVMs, this
leads to the same classes being loaded multiple times into each JVM. Class loading takes
time and also memory in which to store the loaded classes, therefore it is clearly inefficient to
load multiple copies of the same class into different areas of memory.
When enabled, shared classes remove this issue by maintaining a single copy of each class
loaded by the JVMs running on the server, which is shared between all JVMs that require it.
The classes are stored in shared memory which is accessible by all the JVMs running on the
server. When a classloader has to load a class which was not previously loaded by a parent
classloader, it looks in the shared cache in case another JVM has already loaded the required
class. Only if the class is not in the cache will the classloader search the filesystem.
Because there is only one copy of a class stored in memory, sharing between JVMs reduces
the memory footprint of each JVM. Retrieving a copy of the class from memory is clearly
faster than loading the class from a .class, zip, or JAR file stored on disk or remotely.
Another benefit of using shared classes is that the class cache remains on the system until
next initial program load (IPL). Therefore, even if you run only one JVM at a time but use
frequent restarts of a JVM, you still benefit from using the shared classes feature.
Version 5.0 of IBM Technology for JVM offers, for the first time across the major platforms, a
completely transparent and dynamic means of sharing all loaded classes (with the exception
of Sun Solaris™ and the HP hybrids). Furthermore, no restrictions are placed on JVMs that
are sharing the class data.
IBM has provided a Shared Classes Helper application programming interface (API) so that
support for shared classes can be added to user-defined classloaders which do not inherit
from java.net.URLClassLoader. The helper API is described in more detail in 6.4, “Shared
Classes Helper API” on page 63.
In addition to a cache, the first JVM to start with shared classes enabled creates the
/tmp/javasharedresources directory. This directory holds information about all the caches
on the server. Because the cache is stored in shared memory, a cache can persist only until
the next IPL, at which time it is lost. One or all caches can be explicitly destroyed at any time
if required, without the necessity to IPL.
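For example, assuming a cache named myCache already exists, the following commands
(run from a Qshell prompt) destroy that single cache or all caches on the system; these are
utility suboptions, so no JVM is started:
java -Xshareclasses:name=myCache,destroy
java -Xshareclasses:destroyAll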
A cache is created with a fixed size which cannot change after the cache is created. After the
cache is full, classes can still be loaded from the cache, but no more classes can be stored.
The default cache size on i5/OS is 16 MB. This can be overridden at runtime using the
-Xscmx command-line parameter. Refer to “Setting class cache size” on page 59.
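For example, to request a 32 MB cache when the cache is first created, you might specify:
java -Xshareclasses -Xscmx32m HelloWorld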
The cache only contains immutable (read-only) data from classes, such as fields declared as
final static. Such data is stored in the cache as an internal structure called a ROMClass. Data
which could change is stored separately in a structure called a RAMClass. RAMClasses are
stored in the JVM's private process memory and point to the relevant ROMClass. Multiple RAMClasses
can point to the same ROMClass because the ROMClass contains read-only data. This is the
fundamental principle on which class sharing is based. The distinction between ROMClass
and RAMClass is managed by the JVM and is transparent to classloaders.
Metadata is stored in the cache along with the ROMClass consisting of versioning information
and information about where each class was loaded from. This is used to make sure the
correct class is loaded from the cache in cases where the on-disk representation of the class
changes during the cache lifetime or if the same class exists on a different class path.
Important: It is recommended that you apply program temporary fix (PTF) SI25920. This
fixes an issue whereby a class name had to be specified for -Xshareclasses suboptions to
work, even for the utility options.
Figure 6-2 Creating two class caches
Figure 6-3 The listAllCaches suboption lists all the class caches present on the system
Note: Some of the -Xshareclasses suboptions are utility options. Although they use the
JVM launcher (the java command), by design a JVM is not started. Consequently, an
“Unable to create Java Virtual Machine” message is always output by the launcher
after a utility has run, as shown in Figure 6-3. This message is not an error.
# ROMClasses = 278
# Classpaths = 2
# URLs = 0
# Tokens = 0
# Stale classes = 0
% Stale classes = 0%
Cache is 6% full
Figure 6-4 Displaying summary cache statistics using printStats suboption
1: 0xE0FFF8B4 CLASSPATH
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/vm.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/core.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/charsets.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/graphics.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/security.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/ibmpkcs.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/ibmorb.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/ibmcfw.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/ibmorbapi.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/ibmjcefw.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/ibmjgssprovider.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/ibmjsseprovider2.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/ibmjaaslm.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/ibmcertpathprovider.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/server.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/xml.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/IBMmisc.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/lib/IBMi5OSJSSE.jar
1: 0xE0FFF87C ROMCLASS: java/lang/Object at 0xE0000058.
Index 0 in classpath 0xE0FFF8B4
1: 0xE0FFF854 ROMCLASS: java/lang/J9VMInternals at 0xE00006E0.
Index 0 in classpath 0xE0FFF8B4
1: 0xE0FFF82C ROMCLASS: java/lang/Class at 0xE00020A8.
Index 0 in classpath 0xE0FFF8B4
...
Figure 6-5 Displaying class cache contents using PrintAllStats suboption
Taking a closer look at the meaning of the address information given for each ROMClass in
the cache, in this case java.lang.Object:
1: 0xE0FFF87C ROMCLASS: java/lang/Object at 0xE0000058.
Index 0 in classpath 0xE0FFF8B4
Figure 6-6 Deleting a class cache
There might be several reasons why you want to delete a class cache. One example is if you
want to recreate a class cache with a different size. Another example is if you do a lot of
testing with the different versions of your applications, your class cache may contain many
classes that you do not require anymore. In order to clean the cache, you delete it. Next time
you start your applications, the JVM creates a clean class cache.
Important: Only the user profile under which a cache was created may delete the cache.
This is true even if the groupAccess suboption was specified when the cache was created.
Refer to 6.5.1, “Operating system security” on page 67 for more information about the
groupAccess suboption.
Notice the use of the verbose suboption to show that the class cache has become full when
using an artificially small cache size of 10 KB.
Figure 6-7 Setting class cache size using -Xscmx parameter
If you try to use -Xscmx to change the size of a cache after it has been created, the new size
is ignored.
The existing .class file in the filesystem is replaced with the updated class, invalidating the
version stored in the cache.
If you run the application again, you will see that the cache detects a newer version of the
class is available, and marks the cached copy as stale.
Cache naming
If multiple users are running the same application, you may want them to use different
caches. In this case, each user can use the default cache name, which incorporates the
current user profile name to ensure uniqueness.
Or if you want a more meaningful unique name, you can use the %u modifier when specifying
the cache name, which substitutes the user profile name.
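For example, the following (illustrative) cache name incorporates the %u modifier, so each
user profile that runs the application automatically gets its own cache:
java -Xshareclasses:name=appCache_%u HelloWorld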
If the same operating system group of users use the same application, you might want them
to share the same cache to maximize the number of classes that are shared.
On i5/OS, by default, caches are only accessible by the user profile that created them. To
share a cache between a group of users, they must belong to the same primary group profile.
Figure 6-9 The groupAccess suboption allows different users in the same group to share a cache
If multiple operating system groups are running the same application, the %g modifier can be
added to the cache name. Each group running the application then gets a separate cache.
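For example, combining the %g modifier with the groupAccess suboption gives each primary
group its own cache that all members of the group can use (the cache name appCache is
illustrative):
java -Xshareclasses:name=appCache_%g,groupAccess HelloWorld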
Even in these situations, it might be more efficient to create a unique modification context for
the application in the single cache, rather than to create a separate cache. Refer to
“Modification contexts” on page 68 for information about creating a modification context.
JAR and zip files are handled differently by the cache compared to .class files. When a
classloader loads a class from a JAR or zip file, it can lock the file, preventing it from being
updated. Therefore, the cache does not have to check JAR and zip files as often as .class
files, which, because they are not locked by the classloader, might disappear from the
filesystem at any time (or a newer version of the .class file could appear, invalidating the
cached version). The implication is that cache operation is more efficient when classes are
loaded from a JAR or zip file, compared to a .class file. Therefore, when designing Java
applications to exploit shared classes, consider packaging classes into JAR or zip files rather
than a number of stand-alone .class files.
It is also worth knowing that if a class in a full cache is marked as stale, there is no space left
to store updated metadata about the class. Classes are pessimistically marked stale, and a
new piece of metadata is stored if a class eventually proves not to be stale, redeeming the
class. If there is no space in the cache to store the updated metadata, the cache cannot
redeem any stale classes, and they are read from disk even though the classes have not
changed from the versions originally stored in the cache.
We suggest, therefore, that you create the cache with sufficient size for the applications that use it
so that it does not fill completely. You can do this by running the application initially using a
large cache size (set using the -Xscmx command-line parameter) then using the printStats
utility suboption to determine how much class data was stored. You must perform this after
the application has terminated because classes are loaded throughout the JVM lifetime, even
during the shutdown process. You must add a small amount to the value shown in printStats
output for contingency, as shown in Figure 6-4 on page 56. A good starting number is 10% of
the cache size, but you still have to test your application to make sure you have sufficient
cache size.
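For example, after the application has ended, you might display the statistics for a cache
named myCache (an illustrative name) with the printStats utility suboption:
java -Xshareclasses:name=myCache,printStats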
If you are in a test or development environment and are restarting the JVM often to test
different tuning strategies, shared classes lead to faster JVM restarts.
The class cache takes some of the space available in a JVM process. This is one of the
reasons why the practical memory limit for a Java heap is said to be 2.5 GB to 3 GB. For
WebSphere Application Server, the maximum memory available for a Java heap is approximately 2.5 GB.
Note: This section applies to you only if you plan to write your own classloader.
For your classloader to share classes, it must obtain a SharedClassHelper object from a
SharedClassHelperFactory. The SharedClassHelperFactory is a singleton object returned by
calling the following static method:
com.ibm.oti.shared.Shared.getSharedClassHelperFactory()
This method returns a factory if shared classes is enabled in the JVM. Otherwise it returns
null.
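As a minimal sketch, the factory is typically obtained in the classloader's constructor; the null
check is the fallback path when sharing is disabled:

import com.ibm.oti.shared.Shared;
import com.ibm.oti.shared.SharedClassHelperFactory;
...
SharedClassHelperFactory factory = Shared.getSharedClassHelperFactory();
if (factory == null) {
    // Sharing is not enabled in this JVM: load classes normally, without the cache
}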
The SharedClassHelper gives the classloader a simple API for finding and storing classes in
the class cache to which the JVM is connected. After it is created, the SharedClassHelper
belongs to the classloader that requested it and can only store classes defined by that
classloader. The classloader and SharedClassHelper have a one-to-one relationship.
Classes stored in the cache by a SharedClassTokenHelper are not dynamically updated if the
class changes in the filesystem. The reason is that the Tokens used to store classes have no
meaning to the cache, therefore, it has no way of obtaining version information to use for
comparison. Therefore, if you require cached classes dynamically updated to reflect any
changes, use a SharedClassURLHelper or SharedClassURLClasspathHelper to store
classes instead.
Important: Since the bytes returned from findSharedClass() are not actual class bytes,
the classloader must never try to define a class from these bytes and store the resulting
class in the cache using storeSharedClass(). Only classes loaded from disk must be
passed to storeSharedClass().
The following extract shows how a user-defined classloader might use the helper, first to
obtain it (typically in the classloader's constructor) and then to find and store classes:

// (Constructor code) obtain a URL classpath helper for this classloader
if (factory != null) {
    try {
        this.helper = factory.getURLClasspathHelper(this, initialClassPath);
    } catch (HelperAlreadyDefinedException ex) {
        ex.printStackTrace();
    }
}
}

// (Class-finding logic) check the shared cache before reading class bytes from disk
if (Shared.isSharingEnabled()) {
    SharedClassIndexHolder indexHolder = new SharedClassIndexHolder();
    byte[] sharedClazz = helper.findSharedClass(name, indexHolder);
    if (sharedClazz != null) {
        // Cache hit: define the class from the bytes returned by the cache
        clazz = super.defineClass(name, sharedClazz, 0, sharedClazz.length);
        return clazz;
    } else {
        // Cache miss: read the class bytes from disk, define the class,
        // then store it in the cache for other JVMs and future runs
        byte[] newClazzBytes = ... // read class bytes from disk
        Class newClazz = super.defineClass(name, newClazzBytes, 0, newClazzBytes.length);
        helper.storeSharedClass(newClazz, storeAt);
        storeAt++;
    }
} else { // shared classes disabled, load class from disk, no cache
    ...
}
return clazz;
}
The Shared Classes Helper API documentation is available in the SharedClasses folder of
the extracted apidoc.zip file. Refer to Appendix E, “Additional material” on page 199 to
download this file.
All of these operations are subject to standard operating system security. Therefore, any of
them could conceivably fail for a variety of reasons, such as insufficient authority or even
insufficient shared memory or disk space.
The cache is created with user access by default unless the groupAccess suboption of
-Xshareclasses is used, as shown in Figure 6-9 on page 61. This means that by default, only
JVMs started under the same user profile as the JVM which initially created the cache are
able to use the cache.
If JVMs are started under different user profiles, these profiles have to share the same
primary group profile in order to share the same cache. Usually, however, JVMs start under
the same user profile; therefore, it is rare that you have to use the groupAccess suboption to
share the same cache. For example, WebSphere Application Server, by default, runs all
application servers under the QEJBSVR user profile.
Example 6-2 shows granting “read/write” shared classes permissions to all classloaders in
the com.yourco.customclassloaders package.
Example 6-2 Extract from java.policy file showing shared class permission
permission com.ibm.oti.shared.SharedClassPermission
"com.yourco.customclassloaders.*", "read,write";
Restriction: You cannot use a SecurityManager to restrict or modify the shared classes
permissions of the Bootstrap, Extension, or System classloaders supplied with IBM
Technology for JVM.
You can perform bytecode instrumentation by a variety of methods. A JVM Tool Interface
(JVMTI) agent might be used to hook into the application, or the classloader itself may modify
the class bytes before defining the class. Storing such classes in the cache without any
indication they have been modified could cause major problems for JVMs that expect to
retrieve the original class from the cache.
Fortunately, IBM Technology for JVM manages the cache so that it may be shared safely by
both JVMs that use the modified class, and also those that do not.
Modification contexts
When starting the JVM, if you want to intentionally share or use a shared instrumented class,
you can specify the appropriate modification context. A modification context is a user-defined
descriptor which indicates the type of modification applied to the class bytecodes. Each
modification context is associated with a private storage area where modified classes are
stored. Specifying a modification context means that JVMs not using the modified class can
safely use the same cache as those that are.
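For example, to connect to a cache named myCache and store modified classes under a
modification context named myModification1 (the names used in the figures that follow), you
might start the JVM like this:
java -Xshareclasses:name=myCache,modified=myModification1 HelloWorld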
A cache can contain multiple modification contexts. You might have a modification context
used for debugging, which is specified when starting an application that is causing problems.
If you decide to enable debugging dynamically at runtime, any classes instrumented with
debug code are safely stored in a separate area of the cache so that they cannot interfere
with the unmodified version used by other JVMs. The application being debugged still gets
the benefit of sharing any unmodified classes while it is running in debug mode.
Bytecode instrumentation might have been performed by a JVM Tool Interface agent, or by a
user-defined classloader without using a JVMTI agent. If instrumentation is performed by a
JVMTI agent, a modification context does not strictly have to be specified when sharing
classes. The JVM can detect the modification and handle partitioning transparently.
Tip: It is recommended that you always specify a modification context when sharing classes
that have been modified by a JVMTI agent. Although a context is not strictly required for safe
sharing of modified and unmodified classes in the same cache, the JVM has to perform extra
checks at runtime on modified classes that are not associated with a modification context.
These checks impact performance slightly.
Important: If class modification was not performed by a JVMTI agent (for example, by a
user-defined classloader) you must specify a modification context. If you do not, the cache
is not partitioned and all JVMs connected to the same cache are going to use the modified
version of the class, causing unexpected and undesirable results.
We used the printAllStats suboption to view the modification contexts associated with a
cache. Refer to “Interpreting printAllStats output” on page 57 for more information.
Figure 6-10 shows the HelloWorld application creating a cache called myCache and a
modification context called myModification1.
1: 0xE0FFF8A4 CLASSPATH
(modContext=myModification1)
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/vm.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/core.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/charsets.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/graphics.jar
...
1: 0xE0FFCD08 CLASSPATH
(modContext=myModification1)
/home/rumsbya
1: 0xE0FFCCD0 ROMCLASS: HelloWorld at 0xE010DC10.
Index 0 in classpath 0xE0FFCD08
...
Figure 6-10 Creating a new cache and modification context
The output in Figure 6-10 indicates that JVM 1 stored the class HelloWorld in the cache. The
metadata about the class is stored at address 0xE0FFCCD0 and the class itself is written to
address 0xE010DC10 in the cache. It also indicates the classpath against which the class is
stored and from which index in that classpath the class was loaded, in this case from the first
(and only) entry in the classpath at 0xE0FFCD08.
1: 0xE0FFF8A4 CLASSPATH
(modContext=myModification1)
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/vm.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/core.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/charsets.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/graphics.jar
...
1: 0xE0FFCD08 CLASSPATH
(modContext=myModification1)
/home/rumsbya
1: 0xE0FFCCD0 ROMCLASS: HelloWorld at 0xE010DC10.
Index 0 in classpath 0xE0FFCD08
...
3: 0xE0FFC534 CLASSPATH
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/vm.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/core.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/charsets.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/graphics.jar
...
3: 0xE0FF99BC CLASSPATH
/home/rumsbya
3: 0xE0FF9984 ROMCLASS: HelloWorld at 0xE010DC10.
Index 0 in classpath 0xE0FF99BC
...
5: 0xE0FF91D8 CLASSPATH
(modContext=myModification2)
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/vm.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/core.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/charsets.jar
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib/graphics.jar
...
5: 0xE0FF6650 CLASSPATH
(modContext=myModification2)
/home/rumsbya
5: 0xE0FF6618 ROMCLASS: HelloWorld at 0xE010DC10.
Index 0 in classpath 0xE0FF6650
...
Figure 6-11 Content of the class cache
For this example, we use the HPROF JVMTI agent. HPROF is a simple demonstration JVMTI
agent supplied with IBM Technology for JVM. It can gather various data about an application,
such as time spent in each method and heap activity. HPROF uses bytecode instrumentation
depending on which options you specify at runtime. Not all options require the application to
be instrumented.
When you run Java with HPROF, an output file is created at the end of program execution.
This file is placed in the current working directory and is called java.hprof.txt. You can specify
the additional command-line argument file=<filename> to use a different file name.
For our purposes we want to modify the bytecodes, therefore we use the cpu=times option
with HPROF. This inserts bytecodes into all method entry and exit points, therefore, clearly
the bytecodes of the HelloWorld class are going to differ from the version in the cache after it
is instrumented by HPROF. The command line and output are shown in Figure 6-12.
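As an illustration of the general form of such a command (the exact command line we used is
shown in Figure 6-12), HPROF is typically enabled with the -agentlib option:
java -agentlib:hprof=cpu=times,file=myprof.txt HelloWorld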
JNI provides a standard way to interface with programs written in other languages. This
allows you to maintain a single version of your native method libraries on that platform.
Note: If an application uses the JNI, by virtue of relying on native code it ceases to be a
100% pure Java application. Therefore, if portability is important to you, investigate
alternatives to JNI such as the IBM Toolbox for Java classes.
You can use JNI to write native methods to handle those situations when an application
cannot be written entirely in the Java programming language. For example, you might have to
use native methods and JNI in the following situations:
The standard Java class library does not support the platform-dependent features that
your application requires.
You have a library or application that is written in another programming language and you
want to make it accessible to Java applications.
You want to implement a small portion of time-critical code in a lower-level programming
language, such as C, and have your Java application call these functions.
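As a simple sketch (the class, method, and library names are invented for illustration), the
Java side of a native method declares the method with the native keyword and loads the
library that implements it:

public class SystemInfo {
    static {
        // Load the service program or shared library that implements the native method
        System.loadLibrary("sysinfo");
    }

    // Implemented outside Java (for example, in C) and reached through JNI
    public native String getSerialNumber();

    public static void main(String[] args) {
        System.out.println(new SystemInfo().getSerialNumber());
    }
}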
Programming with the JNI framework lets you use native methods to perform many
operations. You may use native methods to represent legacy applications or explicitly to solve
a problem that is best handled outside the Java programming environment. The JNI
framework lets your native method use Java objects in the same way that Java code uses
these objects.
The JNI serves as the glue between Java and native applications.
A native method can create Java objects, including arrays and strings, and then inspect and
use these objects to perform its tasks. A native method can also inspect and use objects that
are created by Java application code. A native method can even update Java objects that it
created or that were passed to it. These updated objects are available to the Java application.
Therefore, both the native language side and the Java side of an application can create,
update, and access Java objects and then share these objects between them.
Native methods can also call Java methods. Often you will already have developed a library of
Java methods. Your native method does not have to “re-invent the wheel” to perform
functionality that is already incorporated in existing Java methods. The native method, using
the JNI framework, can call the existing Java method, pass it the required parameters, and
get the results back when the method completes.
For the System i platform, several other alternatives to using the JNI are available including:
IBM Toolbox for Java access classes such as Distributed Program Call and Data Queue
support
The Extensible Program Call Markup Language (XPCML) support available with the IBM
Toolbox for Java
The IBM Toolbox for Java access classes and XPCML are easier to program than JNI. The
advantage of using JNI is that both the calling program and the called program run in the
same process (job) on the System i server, while some other methods, such as ProgramCall,
might use a new process (job). This makes JNI calls faster at startup time and less resource
intensive. However, using IBM Toolbox for Java access classes ensures the application
remains 100% pure Java for maximum portability.
When 64-bit IBM Technology for JVM is available in i5/OS, you will be able to call 64-bit
PASE native methods.
As IBM moves to a Java implementation where the JVM is a user level program, Java can no
longer provide this feature. In order to achieve the same functionality, the most common
approach is to add an Integrated Language Environment (ILE) native method to the Java
program that adopts equivalent user profile authority and performs the required operation.
This section discusses two scenarios, referring to some attributes of the native objects.
During runtime, an i5/OS program object identifies the adoption attributes of the invocation.
One of these attributes is Allow/Drop Adopted Authority. In the scenarios, we specify the value
of this attribute in parentheses. We also highlight a method which adopts authority.
Scenario 1
A Java method adopts authority immediately before calling a native method. See Figure 7-2.
In this example, Java method J is contained in a Java program which adopts user profile P
and directly calls native method X. Native method X calls ILE method Y which requires the
adopted authority. To run this Java program using IBM Technology for JVM, select one of the
following two alternatives.
Alternative 2
Another way to preserve user profile adoption is to create an entirely new native method N
contained in a service program which adopts user profile P. This new method is called by Java
method J and calls native method X. Java method J needs to be changed to call N instead
of X, but native method X does not have to be changed or repackaged. Figure 7-4 shows
alternative 2.
In this example, a Java method J1 is contained in a Java program which adopts user profile P.
J1 calls Java method J2, which calls J3, which calls native method X. Native method X calls
ILE method Y which requires the adopted authority.
In order to preserve adopted authority in this case, a new native method N can be created.
This native method is contained in a service program which adopts user profile P. Native
method N then uses JNI to call Java method J2, which is unchanged. Java method J1
needs to be changed to call native method N instead of Java method J2.
On the other hand, calls to i5/OS PASE native methods from Java applications running under
IBM Technology for JVM, perform better than in the Classic JVM. This is because IBM
Technology for JVM runs in the same address space as i5/OS PASE native methods.
Attention: If a Java application must use JNI, it is recommended that you use i5/OS
PASE native methods wherever possible in preference to other native methods.
Performance of i5/OS PASE native methods is several orders of magnitude better than
other native methods.
Notice, however, that the presence of dump files does not indicate that you have a JNI
problem. These files can be generated for a wide variety of reasons.
There are many good resources for troubleshooting problems in a top-down manner. These
approaches assume there is some high-level symptom which is being observed or reported
by end users. For example, a Web-facing application might be timing out in a Web browser.
If you suspect the issue might be with WebSphere Application Server, it is particularly
recommended that you consult the IBM Redbook WebSphere Application Server V6 Problem
Determination for Distributed Platforms, SG24-6798.
Using a bottom-up approach to troubleshooting is suited to cases where there is some visible
symptom occurring at a low level; typically, this would be some anomaly in the JVM itself. For
example, the JVM job might disappear unexpectedly and no longer be visible through
WRKACTJOB. Or you might be seeing a memory leak in the form of heap exhaustion.
Therefore, the first stage is to make sure there is no issue with the JVM itself causing the
heap exhaustion.
If you find no evidence that the problem is with IBM Technology for JVM, then the next stage
would be to troubleshoot the middleware layer, for example WebSphere Application Server.
Logically, the next step after this would be to troubleshoot the application itself if no cause
was found for the problem in the middleware layer.
Using a bottom-up approach does not guarantee faster problem resolution. You might
eventually discover for instance that the application is making inefficient use of the heap by
preserving object references longer than is required or instantiating many unnecessary
objects. In this case, although the problem is with the application, the most visible symptom
was the heap running out of space at the JVM level.
In some situations you have to contact IBM Support in order to identify and fix the problem.
However, by following the recommendations in this chapter you can significantly reduce the
amount of time required for IBM to provide a fix or workaround.
(Flowchart: check whether the application is currently running and whether the JVM job is
active; depending on the answers, the problem is classified as an environment problem (refer
to Chapter 4), an application performance problem (refer to 8.6), or a memory leak (refer to 8.5).)
Figure 8-1 Basic problem determination steps for IBM Technology for JVM
If there is a newer fix level, you must order and install it.
If a simple program like Hello runs properly, you must investigate what properties and
environment variables are being set when you invoke your program.
Refer to Chapter 3, “New user guide” on page 19 for more information about setting
properties and environment variables for IBM Technology for JVM.
More generally, the first step in analyzing a crash must be to check any log file and also look
for javacore files. In many cases (especially when it is not a true crash of the process) these
log files give an indication of what went wrong.
Environment problems might include running out of memory, incorrectly setting environment
variables, and so on. Often problems in the environment involve some degree of user control,
while JVM problems might not be resolvable by the user beyond implementing a temporary
workaround.
JVM problems might include defects in the JVM itself or corrupt data in the JVM.
Application problems might include crashes due to poorly-written Java Native Interface (JNI)
or error-handling code.
Crashes might involve a certain degree of predictability. They might occur whenever a certain
set of conditions occur, or they might seemingly occur at random. For example, if a certain
transaction keeps crashing the JVM at the same point, then there is a good chance either the
data or methods associated with that transaction are causing the problem.
You can check the Javadump file generated at the time of the crash (refer to “Locating a
Javadump file” on page 88) to confirm if the problem is in your native code. The file has the
following format:
javacore.DDDDDDDD.nnnnnn.nnnn.txt
It is recommended that you perform this before contacting IBM Support. The Javadump file
shows which library the crash occurred in, as shown in Example 8-1.
Example 8-1 Extract from Javadump showing faulty library caused a General Protection Fault
------------------------------------------------------------------------
0SECTION TITLE subcomponent dump routine
NULL ===============================
1TISIGINFO Dump Event "gpf" (00002000) received
1TIDATETIME Date: 2006/10/04 at 17:57:02
1TIFILENAME Javacore filename:
/home/rumsbya/jars/javacore.20061004.175702.9689.txt
NULL
------------------------------------------------------------------------
0SECTION GPINFO subcomponent dump routine
NULL ================================
2XHOSLEVEL OS Level : OS400 5.4
2XHCPUS Processors -
3XHCPUARCH Architecture : ppc
3XHNUMCPUS How Many : 2
NULL
In Example 8-1, because the crash occurred in a custom library which is not part of the JVM,
you must investigate any recent changes to the library which might have introduced a bug. If
you have a previous version that was known to work, you could revert to using that while
debugging the new library.
If the crash is occurring in a test environment rather than production, then the importance of a
quick fix might be lessened. However, the approaches that follow are still useful in narrowing
the scope of where the issue is occurring and allow more specific troubleshooting steps to
be followed. This particularly applies in cases where IBM Support is involved.
If the Classic JVM also crashes, then the issue might be an environment problem. However if
the application runs properly under Classic JVM, then there might be an issue with IBM
Technology for JVM. In either case, you are closer to resolving the issue because IBM
Support has an understanding of whether it is likely a JVM or environment issue and can
apply more specific troubleshooting techniques.
Although considered very rare, a failure in the JIT compiler might cause the JVM to crash. For
this reason, if the JVM fails, it is useful to rule out the JIT compiler as a possible cause early
in troubleshooting.
Therefore, it is recommended that you run the application with the JIT compiler disabled. To
disable the JIT compiler, start the application with the -Xint option specified on the command
line. For example, issuing the following command from a Qshell prompt causes the
HelloWorld application to be run in purely interpreted mode:
java -Xint HelloWorld
If the application then runs properly, the issue might be related to the JIT compiler. In this
case it is recommended that you contact IBM Support to report the failure. You can refer to
the IBM Developer Kit and Runtime Environment, Java 2 Technology Edition, Version 5.0
Diagnostics Guide for instructions on how to identify the specific method causing the JIT
compiler failure. The IBM Developer Kit and Runtime Environment, Java 2 Technology
Edition, Version 5.0 Diagnostics Guide is available on the Web at:
http://www-128.ibm.com/developerworks/java/jdk/diagnosis/
Identifying the failing method means you can selectively disable compilation of just that
method as opposed to all methods. The benefit of applying a more granular approach and
disabling JIT compilation of just the failing method is that the vast majority of methods can still
be compiled into native code. As a result, you only lose a negligible fraction of JVM
performance in comparison to using interpreted mode only.
For example, issuing the following command from a Qshell prompt prevents JIT compilation
of just the Math.max() method when the HelloWorld application runs:
java -Xjit:exclude={java/lang/Math.max(II)I} HelloWorld
This method, instead, is executed in interpreted mode, bypassing the JIT compiler. All other
methods are eligible for JIT compilation.
Heap differences
One case where the application might run properly under the Classic JVM, but not under IBM
Technology for JVM, is where the JVM runs out of memory. Because of the approximate
3 GB heap size limit with the 32-bit IBM Technology for JVM, the application might not fit
within the 32-bit address space. In this case you must continue to run the application under
Classic JVM until such time that the 64-bit IBM Technology for JVM becomes available.
If you are experiencing crashes in IBM Technology for JVM but not Classic JVM, you must
increase the maximum heap size as much as possible. If the application still uses all the heap
and crashes, then it might be that the application’s heap requirements are too large for IBM
Technology for JVM.
A memory leak might be a more likely cause in this scenario where both JVMs crash;
therefore, refer to 8.5, “Investigating a suspected memory leak” on page 111 for guidance.
Try specifying a larger maximum heap size using the -Xmx command line parameter. For
example, the following uses a maximum 3 GB heap size instead of the default 2 GB under
IBM Technology for JVM:
java -Xmx3g HelloWorld
However, if the application crashes with an unlimited heap under Classic JVM, specifying a
larger maximum heap size under IBM Technology for JVM is unlikely to alleviate the problem.
It is straightforward to tell from the library name and path whether the crash occurred in a JVM
library or a custom library that might be used by an application. JVM libraries typically have a
libj9 prefix and are located in the following directory by default:
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/lib
As explained earlier, if the crash is in a custom library, this has to be debugged by the
provider of the custom library. Crashes in JVM libraries must be reported to IBM Support for
further investigation.
Regardless of whether the crash occurs in a JVM or custom library, using debugging tools
such as those provided with WebSphere Development Studio Client for iSeries can help
isolate the area of application code which triggers the crash. This assists with the problem
isolation and determination effort. Refer to 8.7, “Application debugging” on page 155 for more
information about using WebSphere Development Studio Client for iSeries to do application
debugging.
If you plan to contact IBM Support about a hang issue with WebSphere Application Server,
IBM Support Assistant can be used to collect the “MustGather” data required by IBM Support.
Refer to “Submitting a problem report to IBM” on page 187 for more information.
It is rare that a hang is due to an underlying JVM error. In addition to a JVM error, a hang can
occur due to a poorly-written application or problem with the system environment. For
example, if threads are not properly synchronized, deadlock can arise if resources are shared
between multiple threads and contention occurs.
It is likely that a hung JVM has to be restarted because it is unlikely to make any progress in
such a degraded state. If the hang occurs in a test environment, however, you might want to
wait to determine whether any progress is made when the application is left long enough.
Running the application under Classic JVM is a valid troubleshooting step. If the hang occurs
in Classic JVM, then the application might be at fault. If no hang occurs, then there might be
an issue with IBM Technology for JVM.
In situations where the hang occurs only under IBM Technology for JVM, it is recommended
that you run the application with the JIT compiler disabled. Refer to “Disable the just-in-time
compiler” on page 88 for information about how to disable the JIT compiler.
For situations where the hang occurs under both JVMs, you must investigate the hang further
to determine if it is an issue with the application or the environment. If the application tends to
run for a relatively long period before hanging, a temporary workaround to “buy some time”
for further investigation might be to schedule a periodic restart of the application. If you can
identify with some degree of confidence how often or how long after startup the application
hangs, you can schedule a restart before the expected window where the hang could occur,
to reset the application environment and therefore reduce the likelihood of the hang
occurring.
In all circumstances, you must take a Javadump (refer to “Generating the Javadump” on
page 93) before ending or restarting the JVM. Even if you do not intend to look at it, the
javacore can be very helpful to IBM Support.
In which order you try the troubleshooting steps depends on the symptoms you are seeing
and your preference for using system tools or the ThreadAnalyzer tool with IBM Support
Assistant. It is worth mentioning that the system tools do not readily help you detect hangs
due to deadlock. In this case it is recommended that you capture a Javadump which you can
use as input to ThreadAnalyzer for deadlock detection which is reported in the
ThreadAnalyzer summary section.
This section also describes how you can use WebSphere Development Studio Client for
iSeries to investigate a suspected deadlock scenario.
You can analyze thread usage at several different levels, starting with a high-level graphical
view, and drilling down to a detailed tally of individual threads. If any deadlocks exist in the
thread dump, ThreadAnalyzer detects and reports them. It is straightforward using
ThreadAnalyzer to establish if deadlock is the cause of a hang, which is very useful in our
“dining philosophers” example.
The philosophers are independent people, therefore, each of them eats or waits or thinks for
different lengths of time. Because of this, inevitably the situation eventually arises that one
philosopher has the fork, while another has the knife. Both are waiting for the other to release
the utensil they require. However, because of the last restriction that neither might put their
utensil down until they have eaten, the philosophers cannot make any progress and are
therefore deadlocked.
The deadlock in this example is usually resolved by lifting one of the restrictions, such as the
restriction that a philosopher might not put the knife/fork down until they have eaten
something. The philosopher instead waits for a period of time, and if they do not get the other
utensil, then they put down the one they currently hold without having eaten. Such a move
prevents deadlock from occurring in future. In an application this would be implemented by a
thread yielding the resource it currently holds, after waiting unsuccessfully for the other
resource it requires, to become available. The thread would then try again after a random
period of time.
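A minimal sketch of this idea uses the java.util.concurrent.locks package available since
Java 5 (the method is a fragment; the lock parameters stand in for the knife and fork in the
example):

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantLock;
...
boolean tryToEat(ReentrantLock fork, ReentrantLock knife) throws InterruptedException {
    if (fork.tryLock(100, TimeUnit.MILLISECONDS)) {
        try {
            if (knife.tryLock(100, TimeUnit.MILLISECONDS)) {
                try {
                    // Both utensils held: eat
                    return true;
                } finally {
                    knife.unlock();
                }
            }
        } finally {
            fork.unlock();
        }
    }
    return false; // Give up this attempt; the caller retries after a random delay
}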
Important: The JVM continues running after the kill command is issued. However,
because the application cannot progress in a hung state, you can eventually terminate the
JVM job manually through WRKACTJOB in order to restart the application.
The process ID is that of the hung JVM. It can be found at the top of the JVM’s job log. To
display the job log, enter WRKACTJOB. Put an Option “5-Work with” next to the hung JVM, then
choose Option “10-Display job log...”
In our case the process ID of the hung JVM is 9793 as shown in the job log in Figure 8-2.
We run the following command in a Qshell session to generate the Javadump required by
ThreadAnalyzer:
kill -QUIT 9793
Refresh the job log by pressing F5. You can see messages posted in the job log as the
Javadump is created, also shown in Figure 8-2.
The name and location of the Javadump file is given in the job log so that you can find it
easily. In our case the Javadump file is as follows:
/home/rumsbya/javacore.20061004.140450.9607.txt
Figure 8-3 ThreadAnalyzer is accessed from the Tools menu in IBM Support Assistant
From the summary information you can see that ThreadAnalyzer has detected a deadlock. Notice the “bang” icon next to the Javadump file name, which gives a visual indication that a deadlock has been detected.
If you click Overall monitor analysis, you can view the details of the deadlock, as shown
in Figure 8-6.
Figure 8-6 Overall monitor analysis section of ThreadAnalyzer gives more detail about the deadlock
In this case because the deadlocked threads are application threads, the application would
have to be debugged further to determine the cause. Refer to “Analyzing deadlocks in
WebSphere Development Studio Client for iSeries” on page 104 for guidance on
troubleshooting an application using WebSphere Development Studio Client for iSeries.
Using ThreadAnalyzer, you can view the call stacks of application threads. This is useful
when troubleshooting a hang due to deadlock. You can establish which method the thread
was executing at the time of the hang, narrowing the scope of where the problem could lie.
This also clarifies whether the hang is likely an application issue or (less likely) a problem with one of the Java class libraries.
Use ThreadAnalyzer to look at the Javadump from the dining philosopher’s example again to
see which methods were being executed when the deadlock occurred.
1. If ThreadAnalyzer is not currently running, start ThreadAnalyzer and load the Javadump
from the hang into ThreadAnalyzer. Refer to “Analyzing the Javadump with
ThreadAnalyzer” on page 94 for more information.
2. Select the Javadump file which was generated from the hang and click Open.
3. Expand the Overall thread analysis subfolder of the report:
a. Expand the root entry for this Javadump
b. Expand the Analysis subfolder
c. Expand the Overall thread analysis subfolder. You can see a summary of the
methods and how many times each method is found to be at the top of a thread’s
stack, as shown in Figure 8-7.
Figure 8-7 The overall thread analysis shows the occurrences of each method among the threads
Investigate the stack trace of the hung threads. Click the Eatery DINER_0 thread. The thread
information, including the call stack trace, is displayed, as shown in Figure 8-8.
Notice the stack trace reported in the thread information. This shows that the run() method
was active when the hang occurred. Therefore, for further troubleshooting you must
investigate the code in the run() method for logic errors that could lead to a hang. 8.6,
“Performance issues” on page 127 shows you how to use WebSphere Development Studio
Client for iSeries to profile the application.
Note: This section applies only to WebSphere Application Server V6.1 and later, because
support for IBM Technology for JVM was introduced in this version.
ThreadAnalyzer includes a custom filter feature that allows you to reduce the amount of
thread data that is displayed in the thread analysis view. You can also work with multiple
thread dumps simultaneously in a comparative analysis.
Figure 8-9 Multiple Dump Analysis aggregates data from multiple Javadump files
The number of threads with each method at the top of the stack is reported in a separate column for each Javadump. Comparing these figures gives an easy indication of whether the application has made any progress in the time between capturing each Javadump. For larger applications there can be hundreds of threads and, correspondingly, hundreds of methods listed in this view.
Many of the methods are executed by core WebSphere Application Server threads,
because the application we are analyzing runs in a WebSphere Application Server
environment. It is easier to work with just the application threads that are involved in the
problem and discount the WebSphere Application Server threads.
This particular application is composed mostly of servlets, so we want to focus on the threads that run the servlet code, that is, the servlet handler threads.
Figure 8-10 ThreadAnalyzer’s Custom Filter feature lets you focus on methods related to the application
You then have to analyze the methods further. Start with the top method listed in the
summary because it was the most common method being executed when the thread
dump was captured.
8. Double-click the top method, in this case java.net.SocketInputStream.socketRead().
Another view opens showing a tree structure under which the threads that have this
method at the top of their stack, are displayed.
The next stage is to analyze the call stacks of the threads that have this method at the top. You are looking for a clue, such as a common application method that appears in many threads’ stacks. If you find such a method, it warrants further investigation.
10.We did this by pressing the down arrow key and watching the stack in the right pane, to
see if and how it changed between each thread. We noticed that the method writeToLog()
was displayed in every thread’s call stack.
From the package name, we knew this was not a method from one of the classes supplied
in the standard Java Class Libraries. Therefore, we decided to investigate this method
further.
Figure 8-12 Finding occurrences of a method using ThreadAnalyzer’s custom filter feature
From Figure 8-12, you can see that only the threads with the java.net.SocketInputStream.socketRead() method at the top of the stack have also called the writeToLog() method.
Remember that java.net.SocketInputStream.socketRead() was the most common “top of stack” method in the application. The implication is that most of the threads also spend time in the writeToLog() method.
12.This insight prompts you to ask the following questions:
– What is the function of the writeToLog() method?
– Why is it called by so many threads?
– Why does it take so long to return?
13.These are questions that you resolve by going to the source code for the writeToLog()
method.
After reviewing the source code, you discover this method is used for logging to a custom
log file. The method makes a synchronous call to a remote JVM to do the logging. The
inefficient operation of this method, combined with the sheer number of times it is called,
causes noticeable perturbation to the application.
The issue is resolved in this instance by rewriting the writeToLog() method to use
asynchronous logging by the current JVM rather than using expensive remote,
synchronous calls.
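The following sketch shows the general shape of such a change, assuming a writeToLog() method as in the scenario just described. The queue-based background writer is one illustrative way to make the logging asynchronous; it is not the actual application code.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class AsyncLogger {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<String>();

    public AsyncLogger() {
        Thread writer = new Thread(new Runnable() {
            public void run() {
                try {
                    while (true) {
                        String entry = queue.take();   // blocks until an entry arrives
                        sendToRemoteLog(entry);        // slow call made off the request threads
                    }
                } catch (InterruptedException e) {
                    // shut down the writer thread
                }
            }
        });
        writer.setDaemon(true);
        writer.start();
    }

    // Called by the application threads; returns immediately.
    public void writeToLog(String entry) {
        queue.offer(entry);
    }

    private void sendToRemoteLog(String entry) {
        // The existing synchronous call to the remote JVM would go here.
    }
}

With this design, the request threads pay only the cost of adding an entry to the queue; the expensive remote call is made by a single background thread.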
Thread contention occurs when a thread waits for a lock or resource that another thread holds. Programmers often add synchronization mechanisms to avoid race conditions, but the synchronization itself can lead to deadlocks if done incorrectly. In this section you see how to identify a deadlock situation between threads.
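For reference, the classic deadlock pattern looks like the following illustrative sketch: two threads acquire the same two locks in opposite order, so each ends up waiting for the lock that the other holds. This is exactly the kind of situation the tools described in this section are designed to expose.

public class DeadlockDemo {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        Thread t1 = new Thread(new Runnable() {
            public void run() {
                synchronized (lockA) {
                    pause(100);                 // give the other thread time to take lockB
                    synchronized (lockB) { }    // waits forever: t2 holds lockB
                }
            }
        });
        Thread t2 = new Thread(new Runnable() {
            public void run() {
                synchronized (lockB) {
                    pause(100);
                    synchronized (lockA) { }    // waits forever: t1 holds lockA
                }
            }
        });
        t1.start();
        t2.start();
    }

    private static void pause(long millis) {
        try {
            Thread.sleep(millis);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}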
Restriction: At the time of this writing, WebSphere Development Studio Client for iSeries version 6.0.1 does not support the J9 virtual machine for remote debugging. However, the approach presented in this section is applicable to IBM Technology for JVM when this support becomes available.
The vertical arrows between threads are of interest. An arrow indicates that one thread
(the thread in which the arrow originates) is waiting for another thread (the thread to which
the arrow is pointing) to release a lock. A double arrow indicates that two threads are in a
deadlocked state, both waiting for the other to release a lock.
12.The initial view may be difficult to work with. Click the Switch to Compressed Time button to view a more user-friendly representation of the thread activity, as shown in Figure 8-15.
15.The UML2 Trace Interactions view opens, showing the object Interactions.
16.Scroll across the view to find the appropriate Fork.<id number> icon and select it, as
shown in Figure 8-18.
17.Scroll down the view to find the Locking Thread.<thread id> interaction.
19.In this example you can see that thread philo#1 has started to run and has seized the Fork
object (Fork:8008 in this example). In the Thread view you can see that there are no
threads waiting for locks or any deadlocks.
20.In the UML2 trace interactions view scroll down to find the request for the same resource,
that originated in a different thread (the getName() method in this example).
21.Click the getName() method; the arrow is highlighted, as shown in Figure 8-20. You also see a cursor in the Thread view for this point in time.
In this example you can see that thread philo#3 is starting to run and several other threads
have started to run but then stopped to wait for a lock. This indicates a deadlock is going
to occur shortly. The next task is to determine the method that is causing the problem.
22.Right-click the Profiling resource for the run and select Open With → UML2 Thread
Interactions.
23.In the Thread View, click the menu drop-down button and select Open Call Stack View as
shown in Figure 8-21.
Figure 8-22 View the call stack for the active thread
26.At this time you must analyze the source code where the deadlock has occurred (based
on the call stack). Select the Open source option to open source code as shown in
Figure 8-23.
Figure 8-23 Open the source for the method identified with the deadlock
With IBM Technology for JVM, however, depending on how much memory is installed, the
JVM is likely to run out of memory before i5/OS has to start paging. This is because IBM
Technology for JVM can use only approximately 3 GB of memory, which might be
containable in main memory without the requirement to page. Therefore, you cannot
necessarily rely on paging to occur to let you know that the JVM is having a memory issue
when using IBM Technology for JVM.
This section describes how to approach solving such problems. It begins by offering some simple suggestions and then moves on to more advanced analysis tools and techniques.
The contents of the native heap are more stable than those of the Java heap and are not subject to garbage collection. The native heap is typically used when application JNI code does a malloc(). Space for code that is optimized by the JIT compiler is also allocated from the native heap. The size of the native heap is influenced by the -Xmx setting: because both heaps must fit within a 32-bit address space, you can increase the space available to the native heap by decreasing the amount used by the Java heap with the -Xmx setting.
Triggered by an OutOfMemoryError
If you are familiar with the Classic JVM on i5/OS, you are probably not accustomed to seeing an OutOfMemoryError very often, because its default setting for maximum heap size is essentially *NOMAX. The default maximum heap size for IBM Technology for JVM is 2 GB. Because a default maximum is defined and users of IBM Technology for JVM are likely to tune the maximum heap size, the odds of seeing an OutOfMemoryError increase.
When the VM reaches an out of memory condition a heap dump is generated. As you can see
in Figure 8-24, the location of the dump file is logged to stderr.
$
> java -Xdump:Heap SleepTimer 400
Figure 8-25 Setting the JVM to generate a heap dump
From a separate Qshell session you can use the ps utility to list all active processes running
under a given user profile. Figure 8-26 shows an example. There is a job
(204462/pauls/qp0zspwp in our example) that corresponds to the Java application. Note that
the process ID (PID) is 298.
$
ps -u pauls
PID DEVICE TIME FUNCTION STATUS JOBID
299 - 00:00 pgm-ps run 204463/pauls/qp0zspwp
262 - 00:00 pgm-qzshsh evtw 204350/pauls/qzshsh
228 qpadev0006 00:00 cmd-qsh deqw 204242/pauls/qpadev0006
263 qpadev0008 00:00 cmd-qsh dspw 204351/pauls/qpadev0008
297 - 00:00 pgm-qzshsh evtw 204461/pauls/qzshsh
298 - 00:01 - thdw 204462/pauls/qp0zspwp
$
kill -QUIT 298
$
Figure 8-26 Find the PID and start the Javadump
Figure 8-26 also shows how to use the kill command to generate the Javadump. A
Javadump captures the current state of the JVM and can be automatically triggered under
certain error conditions. The dump includes basic information about garbage collection, lock
information, heap statistics, and a stack for each thread. In our example, (Figure 8-25) the
JVM was started with the -Xdump:heap option, so that a Heap dump is generated in addition to
the Javadump.
Important: Do not let the terms “kill” and “quit” confuse you. The JVM continues running after the kill command is issued. However, generating a heap dump (when the JVM is started with the -Xdump:heap command-line option) might take several minutes and can have a noticeable impact on your application. For this reason, it is advisable to generate heap dumps only in a development environment. If the memory leak is noticeable only when the application runs in a production environment, plan the dump for a time that has the least impact on your business if an outage were to occur.
You might have noticed in Figure 8-26 that there is no confirmation message after the kill
command is issued.
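If you prefer to trigger dumps from inside the application, for example at a known checkpoint, the IBM SDK also provides a dump API in the com.ibm.jvm.Dump class. Check the Diagnostics Guide for your SDK level to confirm the availability and behavior of these methods before relying on them; the following sketch assumes they are present.

// Illustrative only: verify that com.ibm.jvm.Dump is available in your SDK level.
import com.ibm.jvm.Dump;

public class DumpAtCheckpoint {
    public static void main(String[] args) {
        // ... application work ...
        Dump.JavaDump();   // writes a javacore*.txt file, as kill -QUIT does
        Dump.HeapDump();   // writes a heap dump, as -Xdump:heap does on an error condition
    }
}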
8.5.4 Using the Memory Dump Diagnostic for Java (MDD4J) tool
Now that you have generated some heap dumps, you have to analyze them to see if you can get closer to finding the root cause of the memory leak. These dumps are much too big and cryptic for any kind of manual analysis, so a tool is necessary to process and summarize the data. There are a few tools that can do such analysis, but we are going to
focus on one called Memory Dump Diagnostic for Java (MDD4J). The MDD4J tool combines
many of the best features from existing tools, such as Leakbot and HeapAnalyzer. IBM is
currently committed to making MDD4J the primary tool for performing memory leak analysis.
Note: The Memory Dump Diagnostic for Java tool has the following recommendations for
minimum hardware specifications:
Approximately 5 GB of disk space
1.5 GB of physical memory (RAM)
2 GHz CPU (for Intel processors)
Installing MDD4J
MDD4J is available through the IBM Support Assistant tool. IBM Support Assistant (ISA) is
the mechanism for delivering and maintaining tools such as MDD4J. If you do not have ISA
installed or if you would like to read more about it, refer to Appendix B, “IBM Support
Assistant” on page 181.
Notice in Figure 8-31, the MDD4J tool can accept two files as input. For a good comparative
analysis, primary and baseline dumps are collected during a single run of a memory leaking
application. The second file is optional and is used to do comparative analysis to the primary
file. The primary dump refers to the dump taken after the memory leak has progressed
considerably, consuming a large amount of the maximum configured heap size. The baseline
dump is captured early on, when the heap has not yet been consumed significantly due to the
memory leak. Because a comparative analysis typically yields the best results we show a
comparative analysis in our example.
Figure 8-31 MDD4J can accept two files as input for comparative analysis
We generated two heap dumps using the instructions in “Capturing a heap dump manually” on page 112 while the sample leak program was running, and specified them in the Baseline and Primary dump file fields. When you click the Upload and Analyze button, you see the screen shown in Figure 8-32.
If the analysis does not seem to be making any progress, you might want to look in the
following directory to see if there are any errors in the log files:
<IBM Support Assistant install>/workspace/logs
The default workspace directory location is shown in Figure 8-33. Some potential errors are:
– The dump file was truncated or corrupted in some way.
– There are too many objects in the dump or the size of the dump exceeds what MDD4J can process. Currently MDD4J can process dumps containing up to 30 million objects.
When the analysis finishes, a summary screen is displayed, as shown in Figure 8-34.
Figure 8-35 Following the Next Steps to view the analysis results
The Suspects view, shown in Figure 8-36, is a great place to start looking because, as the
name implies, it indicates likely causes of a memory leak.
You can also see details about selected objects in the tree by switching to the Browse tab.
In Figure 8-38 the details of the class MyLeakPgm are displayed. You can see that the total
reach size is about 98 MB. This is a very big clue because you already know that the total
heap size was about 105 MB when the second heap dump was generated, as shown in
Figure 8-34 on page 118. This means that nearly all the heap space is “rooted” in the
MyLeakPgm class.
This is a very obvious example of a memory leak but it is this type of information that would
allow a Java developer to locate and fix the faulty code.
Note: Total reach size is defined as “The size of all objects which are reachable from the
given object, in a single pass depth-first search of all the objects in the memory dump
starting from all root objects where any object is visited at most once”.
If you would like to view the heap contents directly you can bring up the object table in a
sortable view. For example, you can view the object table by performing the following steps:
1. Start at the Analysis summary page, shown in Figure 8-34 on page 118, and click To see
all the objects and object types in the heap in a sortable and tabulated format: Click
here.
2. When the object table loads click the Types Table tab.
3. Click the Growth in instances since Baseline dump column.
The object table is now sorted and showing the object types that have grown the most
since the baseline dump was taken.
Restriction: At the time of this writing, you can only use the Classic JVM for remote
debugging in WebSphere Development Studio Client for iSeries version 6.0.1. However
the approach presented in this section is applicable to IBM Technology for JVM when this
is supported for remote debugging.
7. If you do not have the profiling and logging perspective open, you might see a Confirm Perspective Switch window open. If so, click Yes.
8. You should now be in the profiling and logging perspective and see your application status as monitoring. If the console does not open, go to the toolbar and select Window → Show View → Console. This displays any output data.
You have to manually capture at least two heap dumps to do memory leak analysis.
10.You must see a heap dump listed under your running application. Let the application
continue running.
11.After the application has run a few more times, click the Capture Heap Dump button
again.
12.You must see two heap dumps, as shown in Figure 8-42. Click the Terminate button to
stop the application and the profiling.
Note: The memory leak analysis tools use technology from MDD4J. You can import and analyze standard heap dumps (generated using the IBM_HEAPDUMP environment variable), HPROF heap dumps, OS/390® heap dumps, or Hyades optimized heap dumps.
14.At the “select leak analysis options” window you should see your two heap dumps selected. You should also see the threshold set to 20 (the default). A lower threshold causes the analysis to look for more leaks; a higher threshold finds fewer. In this example the default is used. Click OK after specifying the appropriate heap dumps and threshold.
It might take a few minutes to analyze the heap dumps. You must see the progress
statistics in the lower right part of the window, as shown in Figure 8-44.
If one or more memory leaks have been detected, they are listed in the leak candidates
view. The higher the likelihood value, the higher the probability that a memory leak has
been detected. You must also see the parent application, the object type causing the leak
and the amount of memory.
15.Double click the line with the leak candidate data entries. You must see an object
reference view open, with the “path” to the offending container and object. If you hover the
mouse over an object or the link, you see statistics on the number of objects and memory
leaked, as shown in Figure 8-46.
In this example the memory leak is in the SecondaryQueue class, from a vector object that
is retaining references to String objects.
16.The next step is to go back to the Java perspective and open up the source file for the
offending class, as shown in Example 8-2. This is the offending code. myQ is a vector
object and retains references to all objects that it contains.
17.Look at the add(Object obj) method. Each invocation of this method adds an object to
the myQ vector. These could be String or other objects. As objects are added to the
vector, the memory usage increases accordingly.
18.Look at the getNext() method. This returns the object at currentPos, but the object still
stays in the myQ vector. This is a form of a memory leak. In order to properly remove an
object from Vector, one must use the remove method:
public E remove(int index)
This method removes an object from the Vector and returns it to the caller.
The next code snippet shown in Example 8-3 illustrates how to use this to fix the memory
leak. In this case the object is removed from the Vector after the getNext() method. Now
the myQ vector object cannot grow indefinitely like it could in the previous example.
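The code in Example 8-3 is not reproduced in full here. The following is a minimal sketch of that kind of fix, assuming a Vector-backed queue with a myQ field as described above; the class and method names are taken from the discussion, and the actual code in Example 8-3 may differ.

import java.util.Vector;

public class SecondaryQueue {
    private final Vector<Object> myQ = new Vector<Object>();

    public void add(Object obj) {
        myQ.add(obj);
    }

    // Remove the object from the vector as it is handed back to the caller,
    // so the queue no longer retains a reference to it and cannot grow indefinitely.
    public Object getNext() {
        if (myQ.isEmpty()) {
            return null;
        }
        return myQ.remove(0);
    }
}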
Generally speaking, performance problems can be caused by the client, network, or server.
Common client problems include old hardware, inadequate memory, missing fixes, and
poorly written code. Common network problems include inadequate bandwidth, latency,
excessive broadcasting or file transfer activity, and poor design. Your first step in dealing with perceived performance issues should be to investigate whether the client or network is the issue, and then look at potential issues at the server level.
The Java virtual machine guidelines are covered next in 8.6.3, “Java virtual machine” on
page 132.
Hardware
General guidelines for hardware include having adequate processor, memory, disk, and
communications resources. System i IBM POWER™ 5 and IBM POWER 5+ based servers
provide faster response time and more throughput than previous generation IBM POWER 4
based servers. You can use i5/OS commands such as WRKACTJOB, WRKSYSSTS, and
WRKDSKSTS to perform real time sampling to understand processor, memory, and disk
usage on your System i platform. You can also use i5/OS Performance Collection Services to
perform long term monitoring.
You can use the IBM Systems Workload Estimator (WLE) to size a new system, a proposed
upgrade to an existing system, or a consolidation of several systems. The WLE tool currently
does not have the capability to size generic Java workloads. However, it can size WebSphere
Application Server-based workloads and provide basic guidelines on processor, memory, and
disk requirements for a given workload. WLE is available at no charge on the Web at:
http://www-912.ibm.com/supporthome.nsf/document/16533356
Java applications tend to consume more memory and processor resources than compiled
applications written in RPG or COBOL. Java applications also tend to use more high level
language instructions than traditional System i applications and benefit from newer System i
models. The IBM Accelerator for System i5 is available on selected System i5 520 models
and can also significantly improve Java application performance. The accelerator is a
software feature code that enables the entire processor’s capacity to be available for batch
processing workloads. For example, feature number 7354, available for the System i 7143
520 Express Configuration, increases the commercial processing workload (CPW) rating for
System i platform from 1200 to 3800. Another key recommendation is for Java based
applications to be run on systems with L3 cache. Refer to the IBM System i5 Handbook IBM i5/OS Version 5 Release 4 January 2006, SG24-7486.
Running Java based applications on System i platform might require you to change your
i5/OS work management and tuning strategies. Java is inherently a multithreaded
environment and the JVM spawns threads based on how the application was written and
other runtime events. Java based middleware such as WebSphere Application Server also
makes extensive use of threads for its internal operations, and also accessing System i
applications and data resources. An execution thread on System i platform is represented as
an activity level. Activity levels can be specified at multiple levels (memory pool, subsystem,
and so on).
In general, the activity level must be high enough to avoid wait → ineligible and active →
ineligible transitions, but not so high as to cause system instability. You can use the i5/OS
WRKSYSSTS command to view the current activity level in your memory pools. Refer to the
Max Active column shown in Figure 8-47.
Figure 8-47 WRKSYSSTS screen shows the size and activity levels for each memory pool
There are many other i5/OS tuning tips and techniques you can use, such as ensuring the
prestart jobs for database requests (job QSQSRVR in the QSYSWRK subsystem and job
QZDASOINIT in the QUSRWRK subsystem) are set high enough. For a comprehensive list of
recommendations on i5/OS tuning for Java workloads, refer to chapter 5 of the IBM Redbook
titled Maximum Performance with WebSphere Application Server V5.1 on iSeries,
SG24-6383.
Generally speaking, Java based applications use a Java Database Connectivity (JDBC)
driver to connect to relational database resources such as DB2 for i5/OS. There are two
JDBC drivers available for DB2 for i5/OS:
The native driver
The IBM Toolbox for Java driver
Generally speaking, the toolbox driver is more flexible but the native driver performs better.
Also, an XA enabled JDBC driver (for transactional integrity such as two phase commit)
imposes additional overhead compared with a non-XA enabled JDBC driver.
Table 8-1 Comparison between the System i native and toolbox JDBC drivers

Implementation
– Native JDBC driver: Database and Java application must be on the same machine or partition.
– IBM Toolbox for Java JDBC driver: Database and Java application can be on the same or different partitions.

Application usage
– Native JDBC driver: Can be used with JDBC driver manager or data source connections.
– IBM Toolbox for Java JDBC driver: Can be used with JDBC driver manager or data source connections.

Interface to DB2 for i5/OS
– Native JDBC driver: Uses direct call level interface to DB2 for i5/OS.
– IBM Toolbox for Java JDBC driver: Uses database host server job and TCP/IP sockets interface to DB2 for i5/OS.
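For reference, the two drivers are loaded with different class names and connection URL formats. The class names and URLs in the following sketch reflect common usage of the native and toolbox drivers; verify the exact values against your driver documentation, and note that mysystem, user, and password are placeholders.

import java.sql.Connection;
import java.sql.DriverManager;

public class DriverComparison {
    public static void main(String[] args) throws Exception {
        // Native JDBC driver: JVM and database on the same partition.
        Class.forName("com.ibm.db2.jdbc.app.DB2Driver");
        Connection local = DriverManager.getConnection("jdbc:db2:*LOCAL");

        // IBM Toolbox for Java JDBC driver: connects over TCP/IP sockets,
        // so the database can be on the same or a different partition.
        Class.forName("com.ibm.as400.access.AS400JDBCDriver");
        Connection remote = DriverManager.getConnection(
                "jdbc:as400://mysystem", "user", "password");

        local.close();
        remote.close();
    }
}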
Coding techniques
Another key to optimizing performance is coding techniques related to DB2 access. For
overall Java coding recommendations refer to chapter 9 in the IBM Redbook titled Maximum
Performance with WebSphere Application Server V5.1 on iSeries, SG24-6383.
Using a data source connection within your application (rather than the JDBC driver manager) allows you to take advantage of database connection pooling. Connection pooling provides a set of reusable i5/OS database access jobs, which improves scalability. Connection pools also facilitate the reuse of prepared statements and open data paths, which can further improve scalability. Both the native and toolbox JDBC drivers support connection pooling.
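In a WebSphere Application Server environment, a pooled connection is typically obtained by looking up a configured data source in JNDI rather than by calling the JDBC driver manager directly, as in the following sketch. The JNDI name jdbc/MyDataSource is a placeholder for whatever name you configure for your data source.

import java.sql.Connection;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class PooledAccess {
    public Connection getPooledConnection() throws Exception {
        InitialContext ctx = new InitialContext();
        // Look up the data source configured in the application server;
        // the container manages the underlying connection pool.
        DataSource ds = (DataSource) ctx.lookup("jdbc/MyDataSource");
        return ds.getConnection();
    }
}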
Stored procedures are another option for improving database access performance. Stored
procedures represent callable programs on the System i platform, which improve
performance in several ways. Stored procedures help reduce the number of calls between
the application and the DB2 for i5/OS runtime environment. Stored procedures also minimize
the overhead associated with dynamic Structured Query Language (SQL) such as parsing
and validating SQL statements, and creating an access plan. The toolbox JDBC driver
supports SQL packages, which can also improve overall scalability of your applications.
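Calling a stored procedure from Java uses the standard JDBC CallableStatement interface, as in the following sketch. The procedure name MYLIB.GET_CUSTOMER and its parameter are purely illustrative.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.ResultSet;

public class StoredProcCall {
    public void callProcedure(Connection conn) throws Exception {
        // One round trip runs the whole procedure on the server.
        CallableStatement stmt = conn.prepareCall("CALL MYLIB.GET_CUSTOMER(?)");
        stmt.setInt(1, 12345);
        ResultSet rs = stmt.executeQuery();
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }
        rs.close();
        stmt.close();
    }
}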
Efficient SQL coding techniques, such as only accessing the data you require, are also
essential. For examples, refer to section 9.2.4 of the IBM Redbook titled Maximum
Performance with WebSphere Application Server V5.1 on iSeries, SG24-6383.
Finally, ensure you have good database design concepts in place, such as indexes. DB2 for i5/OS supports binary radix indexes and also encoded vector indexes. Depending on your specific database environment and the SQL statements used in your applications, one might be much preferable over the other. You can use the Visual Explain tool (part of the iSeries Navigator product) to create and analyze database monitor jobs on your System i platform.
For more detailed information about DB2 for i5/OS performance, the following publications
are helpful:
Preparing for and Tuning the SQL Query Engine on DB2 for i5/OS, SG24-6598
Stored Procedures, Triggers, and User-Defined Functions on DB2 Universal Database for
iSeries, SG24-6503
SQL Performance Diagnosis on IBM DB2 Universal Database for iSeries, SG24-6654
It is worth repeating that for the majority of applications the default GC policy gives satisfactory performance. Adjust the GC policy only when your application performance does not meet your goals. Keep in mind, though, that you should perform tuning activities at the application level first, for example, based on application profiling.
The performance gain to be achieved by using an alternative garbage collection policy varies
by application, but can be quite small. In some cases a particular GC policy might in fact have
a negative impact on application performance.
General guidelines
The IBM Technology for JVM has many built-in performance features, such as an advanced
JIT compiler that optimizes code. For example, you do not have to “tune” the JIT threshold
parameter. In Classic JVM, you can set the os400.jit.mmi.threshold=xyz custom property to a
value other than the default of 2000, however, this applies to the entire JVM and is not widely
used in practice. Refer to chapter 5 of IBM Developer Kit and Runtime Environment, Java 2
Technology Edition, Version 5.0 Diagnostics Guide, for details on the IBM Technology for
JVM JIT optimization.
Verbose garbage collection typically imposes modest overhead and provides useful JVM
runtime information. However, verbose class loading and verbose JNI impose overhead and
are most useful for debugging, rather than runtime performance analysis. You can also adjust
the size of your shared class cache, as described in 6.2.2, “Deploying shared classes” on
page 54.
Important: Be very careful when you specify JVM command line arguments. Mistakes
as simple as case sensitivity in the parameters can prevent the JVM from starting.
Overview of EVTK
EVTK parses and plots verbose GC logs and -Xtgc output. It provides graphical display of a
wide range of verbose GC data values and at the time of writing is compatible with optthruput,
optavgpause, and gencon GC policies.
You can use EVTK to support GC tuning and also general GC analysis and troubleshooting.
We present a scenario where EVTK is used to analyze verbose GC output from a
WebSphere Application Server environment which is not performing as well as expected.
After identifying the possible cause of the performance bottleneck from the EVTK analysis,
we make changes and use EVTK to confirm that the problem has been resolved.
Note: At the time of writing this book IBM was planning to make EVTK available in IBM
Support Assistant.
The scenario presented here involved a WebSphere Application Server V6.1.0.1 Base server
with the Trade 6 performance sample application installed.
Trade 6 application
IBM Trade Performance Benchmark Sample for WebSphere Application Server (otherwise
known as Trade 6) is the WebSphere end-to-end benchmark and performance sample
application. The new Trade benchmark has been re-designed and developed to cover
WebSphere's significantly expanding programming model. This provides a real world
workload driving WebSphere's implementation of Java 2 Enterprise Edition (J2EE) 1.4 and
Web Services, including key WebSphere performance components and features.
Test scenario
To stress the application server, we generated a simulated workload for the Trade 6 application using Rational® Performance Tester V6.0.0.1. We configured verbose GC output prior to starting WebSphere Application Server by specifying the following argument in the Generic JVM arguments field:
-Xverbosegclog:filename
Refer to “Configuring the WebSphere Application Server JVM” on page 173 for instructions
on setting JVM arguments.
These parameters were also set in the Java Virtual Machine settings of our test server.
The resource constraints in this scenario are clearly highly exaggerated for the purposes of
demonstrating the usage of EVTK for analyzing verbose GC data. Correspondingly, the
results are not typical, but show that for a highly-loaded and constrained application, GC
pauses can have a measurable impact on application throughput and response time, as you
might expect. EVTK allows us to determine that on this occasion the cause of poor
performance is related to increased garbage collection due to insufficient resources.
The server was placed under load by Rational Performance Tester for ten minutes and the verbose GC log was collected for analysis. Garbage collection cycles that occurred prior to the start of the test were not due to the test workload, so the data for those collections was manually removed from the verbose GC output before analysis with EVTK. This was necessary so that the graphs produced by EVTK and Rational Performance Tester covered the same period and could therefore be compared like-for-like.
It is possible to change the period of verbose GC data displayed in EVTK, but you would have
to adjust both the start and end times displayed through EVTK to correlate to the start of the
test (as opposed to the startup of the JVM) for the same reason. We felt it was more accurate
to edit the verbose GC log file manually because we were able to refer to the time stamp in
each GC cycle entry and tell for certain whether it occurred during the test period.
Figure 8-48 The default view displayed in EVTK after you open a verbose GC log file for analysis
The default graph did not meet our requirements. The data you want to visualize changes
during the troubleshooting process, but fortunately displaying new graphs in EVTK is
easy. You simply add or remove the data you want to plot through the VGC Data menu.
5. For instance, we wanted to look at how frequent and how long GC pauses were during the
test period. To do this starting from the default graph shown in Figure 8-48:
a. Select VGC Data then uncheck Tenured heap size from the menu, as shown in
Figure 8-49.
Notice EVTK dynamically updates the graph to reflect the selections in the VGC Data
menu.
b. Select VGC Data then uncheck Free tenured heap (after collection).
This gives a blank chart onto which you can plot the required data in the next step.
c. Select VGC Data then check Pause times (mark-sweep-compact) collections to
display GC pause data on the graph.
This results in the chart shown in Figure 8-49.
Figure 8-49 Deselect the Tenured heap size data to give a better scale for the free heap space data
Test results
The effect of insufficient space in the heap on application throughput, application response
times, and GC pause times is clear from Figure 8-50, Figure 8-51, and Figure 8-52.
Figure 8-50 shows that average response times increase as the workload on the WebSphere
Application Server increases.
Figure 8-51 shows that the throughput of the Trade 6 application drops as the workload on the
WebSphere Application Server increases, reaching a point of zero throughput when the
garbage collector is running very frequently to ensure the heap does not completely run out of
free space.
Figure 8-51 Throughput drops as response time (and GC pause times) increases
In Figure 8-52 you can see that as the workload increases, garbage collections occur more
frequently, and take longer. This is because of the very low maximum heap size. As the
workload increases, the garbage collector has to frequently compact the heap to prevent the
heap running out of free space completely:
Figure 8-52 Garbage collection pauses increase in length and frequency as workload increases
In our case, we initially created a chart of the GC pause times in EVTK from the verbose GC data, using the steps in “Viewing verbose GC data in EVTK” on page 134. This clearly showed that GC pauses were occurring more frequently and getting longer as the workload increased. This led us to suspect that the heap was running out of space under load.
You can use EVTK to easily verify your suspicion that the heap is running out of space.
Re-plot the chart in EVTK to show the free heap space remaining against time. This gives the
chart shown in Figure 8-53.
Figure 8-53 Plotting free heap size clearly shows that the heap is running out of space as workload increases
You can see that at the half-way point in the test, the available free heap space suddenly and
sharply decreases to very low levels and the JVM eventually runs out of heap space.
Restart WebSphere Application Server so that the new heap size settings take effect.
Figure 8-54 Average response time decreases after adjusting heap settings to more appropriate values
The application throughput also increases with the new heap settings, as shown in
Figure 8-55.
Figure 8-55 Application throughput increases after adjusting heap settings to more appropriate values
EVTK allows you to plot data from multiple verbose GC logs simultaneously on the same
chart. This is very useful for comparison purposes. In our scenario we overlaid the free heap
data from the application run with new heap settings, over the data from the problematic run:
1. Copy the new verbose GC log file from i5/OS to the PC running EVTK.
2. Open the original verbose GC log file if it is not currently open and plot the “Free tenured
heap (after collection)” data only:
a. Select File → Open File.
b. Select the original verbose GC log file and click Open. The default graph of the original
data displays in EVTK showing Tenured heap size and Free tenured heap.
c. Select VGC Data, then uncheck Tenured heap size.
3. Add the data from the latest test with new heap settings, to the graph:
a. Select File → Add File.
b. Select the new verbose GC log file and click Open.
The new data is then displayed as the blue plot line in the chart shown in Figure 8-56.
Figure 8-56 More free heap space is available after adjusting heap settings to more appropriate values
You can clearly see from the new data plot in Figure 8-56 that the heap no longer rapidly runs
out of space as the application workload increases.
If you have a lot of string manipulation in your program, for example, concatenating three or more strings, use a StringBuffer as the target object for the concatenated string. This is especially important if you do string concatenation in a loop.
As a result, in the first code example 8 new objects are created, while in the second example only 5. If you run this code segment as part of a loop, you create 3 x <number of iterations> fewer objects with the second approach.
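The original code examples are not reproduced here, but the following illustrative pair shows the same idea; the exact number of objects created depends on the code and the compiler.

public class ConcatExample {
    // Concatenating with + inside a loop creates a new String
    // (and a temporary buffer) on every iteration.
    public String withString(String[] parts) {
        String result = "";
        for (int i = 0; i < parts.length; i++) {
            result = result + parts[i];
        }
        return result;
    }

    // Reusing one StringBuffer as the target avoids those temporary objects.
    public String withStringBuffer(String[] parts) {
        StringBuffer result = new StringBuffer();
        for (int i = 0; i < parts.length; i++) {
            result.append(parts[i]);
        }
        return result.toString();
    }
}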
The JVMPI has been deprecated in JDK version 5.0 in favor of the Java Virtual Machine
Tools Interface (JVMTI). JVMTI is the strategic interface for application monitoring and
performance analysis.
IBM JVMs use a remote agent controller server daemon to implement JVMPI and JVMTI.
These are available on a variety of platforms (including System i platform) and are currently
packaged with application development tools such as WebSphere Development Studio Client
for iSeries. There are two key client application interfaces to JVMPI and JVMTI:
Tivoli Performance Viewer, which is integrated within the WebSphere Application Server
administration console
Rational Software Development Platform profiling and logging perspective
Tivoli Performance Viewer provides basic information about response time for servlets,
JavaServer™ Pages™ (JSP™), and Enterprise JavaBeans™ (EJB™) methods, and also
detailed information about JVM statistics such as garbage collection. Refer to Appendix A,
“Running WebSphere Application Server with IBM Technology for JVM” on page 167 for more
details.
WebSphere Development Studio Client for iSeries Advanced Edition provides detailed
information about method execution time, memory usage, thread analysis and code usage in
J2EE and Java 2 Standard Edition (J2SE) applications. The next few sections illustrate how
to identify potential performance and functional problems in Java applications using
WebSphere Development Studio Client for iSeries.
Attention: At the time of this writing, WebSphere Development Studio Client for iSeries was using the JVMPI interface. However, the concepts still apply to JVMTI when the appropriate support is available in the tool.
Restriction: At the time of this writing, only the Classic JVM could be used for remote
debugging in WebSphere Development Studio Client for iSeries version 6.0.1. However,
the approach presented in this section is applicable to IBM Technology for JVM when this
is supported for remote debugging.
Figure 8-58 Monitor basic memory usage and method response times
10.Click Next and you must see the default filter set selected.
11.In the Contents of selected filter set section click the Add button.
12.Specify the package you want to monitor. You can use the wildcard character * to select
all classes in the package.
By default, all methods are selected and the filter rule is included.
Click OK when you are finished.
13.You can repeat the previous steps to monitor any additional packages. After you have
specified all classes to profile click Finish.
14.You are returned to the properties window, as shown in Figure 8-57 on page 143. Click OK to profile the application.
15.If you do not have the profiling and logging perspective open, you see a “confirm
perspective switch?” window open. If so, click Yes.
16.You will be in the profiling and logging perspective and see your application status as
monitoring. At the toolbar select Window → Show View → Console. This displays any
standard out data.
17.After your application completes you must see your application status reported as
terminated. Verify that all output data is correct, because you want to ensure the
application ran properly.
18.Expand the terminated link and you see options for:
– Basic Memory Analysis
– Execution Time Analysis
– Thread analysis
Figure 8-60 Gather basic memory usage statistics in the profiling and logging perspective
20.You see statistics for memory and object usage, sorted by application package. You must
see these metrics, as shown in Figure 8-61:
– Total Instances
The total number of instances that have been created of the selected package, class,
or method.
– Live Instances
The number of instances of the selected package, class, or method that have not yet been garbage collected.
– Collected
The number of instances of the selected package, class, or method that were removed
during garbage collection.
– Total Size
The total size (in bytes) of the selected package, class, or method of all instances that
were created for it, including objects that were garbage collected.
– Active Size
The summed size of all live instances.
Figure 8-61 View basic memory usage statistics in the profiling and logging perspective
In this example you can see that the default package, which is where the test application
resides, generates the most activity. If you expand the default package you can see
statistics for primitives such as character, byte, integer, and so on.
If the application is still active, you can continue running it and analyze the number of
objects and memory usage. The profiling and logging perspective has icons which show
increases or decreases. You can use this technique to help detect excessive object
creation or a memory leak. See Figure 8-62.
Figure 8-62 Memory statistics view shows increases and decreases in objects
Figure 8-63 Gather basic response time statistics in the profiling and logging perspective
22.You must see a view with response times sorted by application package. See Figure 8-64.
You can then expand the packages and the response times for individual methods. You
must see the following metrics:
– Base Time
This is the time taken to execute the invocation, excluding the time spent in other
methods that were called during the invocation. This is typically the most useful metric.
– Average Base Time
The base time divided by the number of calls.
– Cumulative Time
This is the time taken to execute all methods called from an invocation. If an invocation
has no additional method calls, then the cumulative time is equal to the base time.
– Calls
The number of calls made by a selected method.
In this example you can see that the mypackage package has the largest contribution to
response time.
In this example you can see that the most lengthy method (getdtaara.setDtaaraparms())
has the thickest line. You can also see that the application is executed in a single thread. If
you hover the mouse cursor over the individual methods, you see a pop-up window that
lists the response time metrics and source file.
This example illustrates creating a Performance Explorer (PEX) trace for a Java application
that utilizes IBM Technology for JVM, then interpreting the results with Performance Trace
Data Visualizer (PTDV). You can see how to determine the CPU time for the JVM process,
and also the jobs that access i5/OS resources via the IBM Toolbox for Java. You can also see
how to determine the relative impact of garbage collection. PTDV can be downloaded from
the IBM alphaWorks® Web site at:
http://www.alphaworks.ibm.com/tech/ptdv
Note: i5/OS provides the support for creating a Performance Explorer trace. You must
install the Performance Tools for iSeries product (5722-PT1) if you want to print reports.
9. Select the Thick Client tab and enter the server name, user name and password.
10.You can click the Browse button to search for your PEX collection. Otherwise, enter the
library and collection name manually.
11.Click the Start Processing button to start the analysis.
12.You must see a TPROF options window open. Accept the defaults and click OK.
You should now be at the cumulative information tab, which shows information about when the trace was taken, the number of events recorded, the sampling interval, and the sample time. You also see system information such as the system model, the number of processors, the number of disk arms, and other data.
The application performs a database query, using the Native JDBC driver. Notice that the
associated QSQSRVR jobs consumed a fairly small amount of CPU.
16.The next step is to view the details for the TEST32JVM job. Double-click it.
18.You must see a component level breakdown view for the number of hits to SLIC (System i
microcode), MI (i5/OS tasks), PASE (where IBM Technology for JVM runs), and JAVA.
Note that the PASE component has the most hits in this example. This is because the
applications were run with IBM Technology for JVM, and there was very little other
workload running during the trace. If they had been run with Classic JVM the hits would be
displayed in the SLIC and Java breakdown categories.
19.Expand the PASE breakdown category. See Figure 8-71.
20.The PASE breakdown shows the relative activity for the IBM Technology for JVM
components.
Restriction: At the time of this writing, only the Classic JVM can be used for remote
debugging in WebSphere Development Studio Client for iSeries version 6.0.1. However
the approach presented in this section is applicable to IBM Technology for JVM when this
support is available.
The following are the basic steps you have to follow to debug a Java application remotely on
a System i platform:
1. Create or import a Java application project in your WebSphere Development Studio Client
for iSeries workspace.
2. Create, import, or modify the Java source files for your application.
3. Specify the appropriate breakpoints in your application code.
4. Test the application locally to ensure it actually works.
5. Define a remote system explorer connection to the System i platform.
Steps 1 and 2 are not covered here. If necessary, refer to Rational Application Developer V6
Programming Guide, SG24-6449.
Figure 8-72 shows that an icon is placed next to the line of code where a breakpoint is set.
Figure 8-72 Setting breakpoints in WebSphere Development Studio Client for iSeries
If there is no graphical user interface, all input and output is done at the console. If there are
graphics they must open in a window on your workstation. You can check the console for any
error messages or output data as shown in Figure 8-74.
If you have already created a remote system explorer (RSE) connection to your System i
platform you can ignore this section. Otherwise, these are the steps you have to perform:
1. At your WebSphere Development Studio Client for iSeries workspace open the remote
system explorer perspective by selecting Window → Open Perspective → Other... →
Remote System Explorer.
2. At the top of the Remote Systems view expand New Connection.
3. Right-click iSeries... and select New connection.
If you have an existing profile, it appears as the parent profile option. You can use that, or specify a new profile, as shown in Figure 8-75.
4. Enter a descriptive value for the location name, and the host name or TCP/IP address of
your System i platform.
5. Click Finish.
6. In the Remote Systems view right-click your server connection and select the Connect
option. Enter an appropriate user profile and password. You must be connected. You can
view library objects, the System i file systems, and other capabilities.
7. Now that you have connected to the server you can copy the application to the System i
platform.
2. Right-click your Java application project and select the Export option.
3. You must see numerous options, including file system (mapped network drive), FTP, and
remote file system (remote system explorer connection created earlier). To use the
connection created earlier, select the Remote file system option and click Next.
4. You must see all the objects in your local project selected (check boxes selected). Click
the Browse... button next to the To directory field.
6. Click the OK button, then Finish to copy the files to the System i file system.
7. You can use a mapped network drive or the i5/OS WRKLNK command to verify that the
Java classes and other resources have been copied to the System i file system.
Now that the objects are on the System i file system you can run the application in debug
mode.
6. At the Browse for folder window expand the remote system connection link you specified
earlier and locate the folder you copied the Java application to and click OK.
See Figure 8-80. Note that the Include working folder in classpath option is selected.
7. Click the Properties tab. Enter java.version for the property name and 1.5 for the value.
8. Click the Append button. This ensures that JDK 1.5 is used on System i platform. See
Figure 8-81.
Restriction: At the time of this writing, only Classic JVM can be used for remote
debugging on System i platform.
9. The classpath includes the working folders you uploaded to the System i platform. If you
have to specify additional classpath information or other environment variables, click the
Environment tab and specify each variable and the value. In this example the IBM
Toolbox for Java has been added.
10.Enter the appropriate properties, environment variables, library list, and any other
necessary parameters. Then click Apply.
11.Click the Debug button. You might be prompted to enter a user profile and password to
connect to the System i platform. Enter the necessary values and click OK.
12.You must see a progress window open, then another window asking you to switch to the
debug perspective. Click Yes.
Figure 8-83 Debug session for a Java application running on System i platform
3. If you are interested in looking at the i5/OS job details, start a 5250 emulator session and
sign in. You can then use the WRKACTJOB or WRKJOB command to view the job log.
Figure 8-84 shows an example of what you must see.
Figure 8-85 Use the Resume action to step through the application
5. Click the Resume icon to continue stepping through the application and determine if and
where there is an application error. You can also monitor the values of variables and
determine the status of each thread. See Figure 8-86.
6. Figure 8-86 shows an example of the mainpgm class instantiating a JavaBean (the
getdtaara class). The getdtaara class includes a method that reads a decimal data area on
the System i platform, into a variable called “thevalue”. In this example, “thevalue” is .66.
7. You can continue clicking the Resume icon to continue stepping through the application
(or other icons, like Step over or Step into) and determine if and where there is an
application error. You can also monitor the console output.
8. Depending on how many and where the breakpoints were set, you might have to
experiment some to determine the exact cause of the application problem.
You can also use the remote debugging function to get an idea of the application response
time components.
Important: If you install any of the listed products after installing the i5/OS cumulative PTF package, you must reinstall the cumulative PTF package to ensure that all required PTFs are applied. Reinstall the WebSphere Application Server group PTF if you installed 5722-JV1 option 7 or 8, or 5722-SS1 option 33 (i5/OS PASE), after installing the group PTF. Refer to the WebSphere Application Server group PTF cover letter for installation instructions. Ensure you run the update script, which actually applies the WebSphere Application Server fixes.
Attention: WebSphere Application Server 6.1 supports installing the product multiple
times on the same server or logical partition. To get a list of all of your installed
environments including the default profile location and install library, you can use the
following script:
/QIBM/ProdData/WebSphere/AppServer/V61/base/bin/querywasinstalls
WebSphere Application Server version 6.1 for iSeries includes an i5/OS Qshell script that can switch the JVM type. If you do not use the default installation directory, adjust these instructions accordingly:
/QIBM/ProdData/WebSphere/AppServer/V61/Base/bin/enableJvm
This script requires the user to have *ALLOBJ authority. The command syntax is:
enableJVM parameters
Important: If a profile is not specified, all profiles are switched to the specified JVM.
Enable an existing WebSphere Application Server profile with IBM Technology for JVM
These instructions provide a complete set of steps for switching to IBM Technology for JVM:
1. Sign in to the System i platform with an appropriate userID and password.
2. Enter STRQSH on the CL command line and press Enter.
3. At the Qshell prompt, run the following command:
cd /QIBM/ProdData/WebSphere/AppServer/V61/Base/bin
4. If your profile is running, stop it (we use the default profile) as shown in Figure A-1:
stopServer -profileName default
$
> cd /QIBM/ProdData/WebSphere/AppServer/V61/Base/bin
$
> stopServer -profileName default
ADMU0116I: Tool information is being logged in file
/QIBM/UserData/WebSphere/AppServer/V61/Base/profiles/default/logs/server1/stopS
erver.log
ADMU0128I: Starting tool with the default profile
ADMU3100I: Reading configuration for server: server1
ADMU3201I: Server stop request issued. Waiting for stop status.
ADMU4000I: Server server1 stop completed.
===>
F3=Exit F6=Print F9=Retrieve F12=Disconnect
F13=Clear F17=Top F18=Bottom F21=CL command entry
Figure A-1 Ending the profile
5. Invoke the enableJVM script to change your profile to use the IBM Technology for JVM:
enableJVM -profile default -jvm std32
See Figure A-2.
The job log option is the easier one to use and is illustrated.
See Figure A-4.
If it is IBM Technology for JVM, then you see the message shown in Figure A-5.
4. Note that the process identifier PID(343) is shown in the job log. You can use this
information in case you have to do a heap dump or other diagnostics described in
Chapter 8, “Analyzing JVM behavior” on page 83.
Figure A-6 WebSphere Application Server V6.1 console makes it easy to customize JVM properties
3. In this example you can see that verbose garbage collection has been specified. If you
see no values in the Initial Heap Size and Maximum Heap Size fields, then the default
values are used:
initial heap size: 50 MB
maximum heap size: 256 MB
These are defined in the following file:
<WAS_install_root>/<edition>/classes/properties/os400j9.systemlaunch.properties
4. You can change these values by specifying the appropriate parameters and clicking the
Apply button. You are then prompted to save or review the changes. After ensuring that all
parameters have been entered correctly, click the Save link.
5. Note that the generic JVM arguments parameter is blank in Figure A-6. This tells you that
the default garbage collection policy (optimum throughput) is in place.
These are some commonly used JVM arguments for the WebSphere Application Server
version 6.1 environment:
– -agentlib:QWASJVMTI (enables JVM profiling)
– -Xgcpolicy:optthruput (specifies the optimum throughput garbage collection policy)
– -Xgcpolicy:optavgpause (specifies the minimum average pause garbage collection
policy)
– -Xgcpolicy:gencon (specifies the generational garbage collection policy)
– -Xgcpolicy:subpool (specifies the subpool garbage collection policy)
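For example, the Generic JVM arguments field could contain a value such as the following (a sketch only; the heap sizes shown are illustrative and should be chosen for your workload):
-Xgcpolicy:optavgpause -Xms256m -Xmx512m
Here -Xms and -Xmx are the command-line equivalents of the Initial Heap Size and Maximum Heap Size fields described earlier.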
Because IBM Technology for JVM uses the just-in-time (JIT) compiler exclusively, you do
not specify the -Djava.compiler=jitc parameter like you would with Classic JVM.
Attention: Be very careful when you specify JVM command line arguments. Mistakes
as simple as case sensitivity in the parameters can prevent the JVM from starting. JVM
command line arguments are specified in the profile’s server.xml file.
JVM parameters exclusive to Classic JVM may prevent the IBM Technology for JVM
from starting, and vice versa.
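For reference, these settings are stored in the jvmEntries element of the profile's server.xml file. The following fragment is illustrative only; the attribute values are examples, and the exact set of attributes in your file can differ:
<jvmEntries verboseModeGarbageCollection="true"
    initialHeapSize="50" maximumHeapSize="256"
    genericJvmArguments="-Xgcpolicy:gencon"/>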
Important: At the time of this writing, the JVM profiling functions were not working.
3. Click the Runtime tab. This enables you to dynamically specify the performance
monitoring policies.
4. Expand the basic monitoring statistic link and scroll down to the JVM options. You should
see the following JVM metrics available for monitoring:
– JVM Runtime.ProcessCpuUsage (the percentage of CPU usage of the JVM runtime)
– JVM Runtime.UpTime (the total time, in seconds, that the JVM has been running)
– JVM Runtime.UsedMemory (the used memory, in KB, of the JVM runtime)
– JVM Runtime.HeapSize (the total memory, in KB, of the JVM runtime)
These metrics are available if JVM profiling is enabled:
– Garbage Collection.GCTime
– Garbage Collection.GCIntervalTime
– Garbage Collection.GCCount
– Object.ObjectFreedCount
– Object.ObjectAllocateCount
– Object.ObjectMovedCount
– Thread.ThreadEndedCount
– Thread.ThreadStartedCount
– Monitor.WaitForLockTime
– Monitor.WaitsForLockCount
5. At your console session click Performance Viewer → Current Activity → <application
server name>.
6. Expand the Performance Modules link and click the JVM Runtime box.
See Figure A-8.
7. Click the View Module(s) button to see the JVM runtime data.
8. You should see the JVM statistics in either tabular or graphic format. The following are the
requirements for viewing graphics:
– The Adobe Scalable Vector Graphics plugin for your browser
– i5/OS PASE (required for IBM Technology for JVM)
– Set the following parameters:
-Djava.awt.headless=true
-Dos400.awt.native=true
– These parameters are set in the os400j9.systemlaunch.properties file, which is located in:
<WAS_install_root>/<edition>/classes/properties/os400j9.systemlaunch.properties
9. Figure A-9 shows an example of the JVM runtime data in graphical format. In this case the
graph scaling was modified to improve the view and show how the used memory changes.
10.In Figure A-9 you can also see that the total heap size is relatively constant and that CPU
usage is minimal.
11.Figure A-10 shows an example of the JVM runtime data in tabular format.
12.You can use the Tivoli Performance Viewer to get a conceptual view of your JVM’s
memory usage. If the memory usage is fairly consistent over time, it is a good indicator
that currently there are no memory leaks in your applications.
On the other hand, if memory usage continues to grow, even with a steady workload, it
might indicate a memory leak and you have to utilize the techniques in Chapter 8,
“Analyzing JVM behavior” on page 83 to analyze the problem in more detail.
In the profile's logs directory (for example, /QIBM/UserData/WebSphere/AppServer/V61/Base/profiles/default/logs/server1), the verbose garbage collection statistics are written to the following files:
For IBM Technology for JVM: native_stderr.log
For Classic JVM: native_stdout.log
The SystemOut.log file in the aforementioned directory includes helpful JVM information. The
excerpt in Example A-1 shows that the IBM Technology for JVM is used.
Example: A-1 Excerpt showing the use of IBM Technology for JVM
WebSphere Platform 6.1 [BASE 6.1.0.1 cf10631.18] running with process name
RCHAS60_j9res_GW\RCHAS60_j9res_GW\j9res_GW and process id 212236/QEJBSVR/J9RES_GW
Host Operating System is OS/400, version V5R4M0
Java version = J2RE 1.5.0 IBM J9 2.3 OS400 ppc-32 (JIT enabled)
J9VM - 20060501_06428_bHdSMR
JIT - 20060428_1800_r8
GC - 20060501_AA, Java Compiler = j9jit23, Java VM name = IBM J9 VM
The excerpt in Example A-2 shows the heap monitor status during the server startup, in
addition to the i5/OS memory pool details.
Example: A-2 Excerpt showing heap monitor status during server startup
Heap Monitor started for 212236/QEJBSVR/J9RES_GW in subsystem QWAS61
in Pool *BASE pool ID=2
Poolsize(MB)=25107 Reserved(MB)=2
Heap total(MB)=50 Free(MB)=19 UsedHeap(MB)=30
MaxHeap(MB)=256
InitHeap(MB)=50
Tip: It is recommended that you collect the verbose GC output to a separate file using the
-Xverbosegclog parameter in the Generic JVM arguments field. See Figure A-6 on
page 173. For further information, refer to 5.2.3, “Verbose GC output” on page 44.
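For example, you might specify the following in the Generic JVM arguments field (a sketch; the log file path is an example, and IBM Technology for JVM also accepts optional file-rotation parameters on this option):
-Xverbosegclog:/QIBM/UserData/WebSphere/AppServer/V61/Base/profiles/default/logs/server1/verbosegc.log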
Additional information
The WebSphere Application Server information center also has helpful information. It is
available at:
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/topic/com.ibm.websphere.base.iseries.doc/info/welcome_base.html
Appendix B. IBM Support Assistant
The focus of this appendix is on demonstrating the typical tasks for downloading, installing,
and using the IBM Support Assistant (ISA) tool. For more information, visit the ISA Web site at:
http://www.ibm.com/software/support/isa
Important: ISA is the strategic delivery mechanism for IBM Java virtual machine (JVM)
tools.
You can use ISA when you experience a problem. ISA offers self-help resources that can
enable you to identify, assess, and resolve questions or problems without having to contact
IBM. When it is necessary to contact IBM, ISA offers resources for fast submission of problem
reports and immediate, automated collection of diagnostic data that can accelerate problem
resolution.
Updates to the ISA application are delivered via the Updater function within ISA (refer to the
documentation delivered with the product).
Prerequisites
The following list describes the operating systems that are supported by ISA:
Microsoft® Windows® XP SP1, 2000, and 2003 server
Linux RedHat Advanced Server 3, Linux SuSE 9.0
HP/UX 11
Solaris 9
AIX 5.2 and 5.3
ISA, along with the tools described in this IBM Redbook, requires a minimum of 200 MB of
free space for installation.
You can use ISA with i5/OS to gather product information, run problem determination tools,
and analyze problems, but you must install ISA on a supported workstation that can connect
to your i5/OS system. It is also recommended that you set up an integrated file system share
so that you can easily access i5/OS files from your workstation.
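For example, on a Windows workstation you might map a drive letter to an i5/OS NetServer file share (a sketch; the system name, share name, and drive letter are placeholders, and your share configuration may differ):
net use U: \\MYSYSTEM\HOME
The mapped drive can then be used as the output location for ISA collections, as shown later in this appendix.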
The following steps document how to download ISA for the Windows platform; modify the
steps as required for your operating system:
1. Open the following URL in your Web browser:
https://www14.software.ibm.com/webapp/iwm/web/preLogin.do?source=isa
2. Click Sign in to sign in to the Web site using your IBM ID and password. If you do not
have one, register by clicking the register now link.
3. Select the latest version for your operating system and click Continue. See Figure B-1.
4. View the license agreement and select the I agree check box to accept it.
5. Click I confirm.
6. Select the check box next to your operating system’s version and click Download now.
7. Click Save in the dialog box to save the zip file into a temporary directory on your
workstation.
8. When the download of the zip file has completed, unzip the zip files using a zip
compression utility (PKZIP, WinZip, or InfoZip) to a temporary directory, for example
C:\Downloads\isa_v3.
Note: Do not unzip the files into a directory name that contains spaces.
9. After it is extracted, run the executable file (setupwin32.exe) to start the installation.
3. A new pop-up window is displayed. Be sure to read the license and description for each
feature and tool. If you agree with the license, click I agree.
4. The installation starts. When all selected tools and features are installed, a pop-up window
opens. Click OK to restart ISA.
At the time of publication, the following features were available for use with WebSphere
Application Server:
IBM Guided Activity Assistant
IBM Pattern Modeling and Analysis Tool for Java Garbage Collector (PMAT)
Memory Dump Diagnostic for Java (MDD4J)
ThreadAnalyzer
Keeping ISA current is critical to getting the most value out of the tool, because new tools
and features are released on an intermittent basis.
Important: It is important to occasionally check for new updates for the installed tools or to
see what other tools for your products have been made available since your installation or
last update.
Note: You can also uninstall ISA in the console or silent mode. Run the following
command from a command prompt:
<install_root>\_uninst\<uninstaller_executable_file> -console
Or:
<install_root>\_uninst\<uninstaller_executable_file> -silent -options options.txt
You can also use the ISA interface to check for new IBM products and tools that are
developed for use with ISA.
To start ISA on Windows, click Start → All Programs → IBM Support Assistant → IBM
Support Assistant V3.
Note: If you do not have the Mozilla browser installed, you have to manually open the
ISA URL in another supported Web browser. The URL used to access ISA is also
written to the console window where you ran the startisa.sh script. The URL is also
listed in the <install_root>/workspace/logs/isaurl.log file. Open this file, copy the URL, and
paste it into a Web browser window.
4. Click the link for the tool that you want to use. In our example, we click Memory Dump
Diagnostic for Java (MDD4J). If you see any warning message, read its text and click OK. Some tools
have prerequisites.
5. A new window opens. Now you can use the tool. Refer to Chapter 8, “Analyzing JVM
behavior” on page 83 for the instructions about how to use each tool.
You also can submit problems directly to IBM by creating a problem report and attaching a
generated collector file at the same time.
Perform the following steps to collect data from your i5/OS and submit a problem report with
the collected data:
1. Start ISA. Refer to “Starting IBM Support Assistant” on page 186 for more information.
2. Click Service.
3. Click Create Portable Collector.
4. Select System collector in the Select a product box.
5. Enter the integrated file system path to your i5/OS in the Output directory field. For
example, as follows:
U:\home\collector_output
Here U: is a mapped network drive to your system’s Integrated File System (IFS) share.
6. Enter the name of the .jar file to be created in the Output file name (*.jar) field. You must
enter the .jar extension. For example, as follows:
problem.jar
7. Click Export. See Figure B-5.
There is an installation and uninstallation log file available in the location where ISA is
installed. The file name is log.txt.
There are log files available for problems starting ISA in the <install_root>/workspace/logs
directory. The isa_*.log files represent a set of rolling log files, where the most recent log file
is always named isa_0.log.
This file is part of the directory that is created when you expand the ZIP file downloaded from
the Web. Refer to “Downloading IBM Support Assistant from Web” on page 183.
Appendix C. Diagnostic Tooling Framework for Java (DTFJ)
One of the core components of DTFJ is an application programming interface (API) to provide
access to data stored in the operating system image, making it simpler to write Java
diagnostics tools. In the implementation of the DTFJ API shipped with IBM Technology for
JVM, the basic abstract concepts of an image and runtime environment have been extended
by adding interfaces for Java-specific entities such as Java threads.
Tools that use the DTFJ API can gain access to the huge array of information about a Java
process that is available in a system dump. No knowledge of the system dump format or how
Java objects/structures are laid out in memory is required by a tool-writer.
Using the DTFJ API, cross-platform tools can be written in Java to process the wide range of
data available in system dump files. This includes information about the platform on which the
process is running, including:
Physical memory
CPU number and type
Libraries
Commands
Thread stacks
Registers
The API can also provide information about the state of the Java runtime and the Java
application it is running, including:
Class loaders
Threads
Monitors
Heaps
Objects
Java threads
Methods
Compiled code
Fields and their values
Because you can create system dump files non-destructively (for example, using the -Xdump
command-line option or calling the static method com.ibm.jvm.Dump.SystemDump()), you
can create tools to perform analysis of system dumps collected while the JVM in question is
still running, in addition to traditional analysis of system dumps generated due to a JVM
crash.
jextract
Due to the differences in system dump format between platforms and Java releases, the
jextract utility must run against a system dump file before it can be processed using any tool
that utilizes the DTFJ API methods. jextract must be run on the system that produced the
system dump. jextract handles the JVM-version-specific and platform-specific details for you.
jextract is supplied with IBM Technology for JVM in the following directory:
/QOpenSys/QIBM/ProdData/JavaVM/jdk50/32bit/jre/bin
You supply the name of the input system dump file as a command-line argument to jextract.
For example, issue the following command in a Qshell session:
jextract /home/rumsbya/core.20061002.140109.9288.dmp
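To illustrate the general shape of a tool that consumes this output, the following Java sketch opens the file produced by jextract and lists the Java threads found in the dump. This is a minimal sketch based on the DTFJ interfaces described in this appendix and in the Diagnostics Guide; verify the factory class name and method signatures against the DTFJ Javadoc shipped with your SDK before relying on it.

import java.io.File;
import java.util.Iterator;

import com.ibm.dtfj.image.Image;
import com.ibm.dtfj.image.ImageAddressSpace;
import com.ibm.dtfj.image.ImageFactory;
import com.ibm.dtfj.image.ImageProcess;
import com.ibm.dtfj.java.JavaRuntime;
import com.ibm.dtfj.java.JavaThread;

public class ListDumpThreads {
    public static void main(String[] args) throws Exception {
        // Load the J9 implementation of the DTFJ ImageFactory.
        ImageFactory factory = (ImageFactory)
            Class.forName("com.ibm.dtfj.image.j9.ImageFactory").newInstance();

        // args[0] is the file produced by jextract from the system dump
        // (refer to the Diagnostics Guide for the exact input expected by your SDK level).
        Image image = factory.getImage(new File(args[0]));

        // Walk down from the image to the Java runtime(s) that it contains.
        for (Iterator asIt = image.getAddressSpaces(); asIt.hasNext();) {
            ImageAddressSpace space = (ImageAddressSpace) asIt.next();
            for (Iterator procIt = space.getProcesses(); procIt.hasNext();) {
                ImageProcess process = (ImageProcess) procIt.next();
                for (Iterator rtIt = process.getRuntimes(); rtIt.hasNext();) {
                    Object runtime = rtIt.next();
                    // Iterators can also return CorruptData objects, so check the type.
                    if (runtime instanceof JavaRuntime) {
                        for (Iterator thIt = ((JavaRuntime) runtime).getThreads(); thIt.hasNext();) {
                            Object next = thIt.next();
                            if (next instanceof JavaThread) {
                                System.out.println("Java thread: " + ((JavaThread) next).getName());
                            }
                        }
                    }
                }
            }
        }
    }
}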
The DTFJ API allows speedier development of further diagnostics tools because of the
common API and the planned reuse of common tool components. A tool's use of the DTFJ
API is transparent to the tool's end users. As improvements are made to the diagnostics
tools and new features are implemented, the tool interfaces will be updated to reflect them.
For guidance on how to use the DTFJ API for tool development, refer to the following IBM
developerWorks article:
http://www-128.ibm.com/developerworks/java/library/j-ibmjava5/
We also suggest consulting the IBM Developer Kit and Runtime Environment, Java 2
Technology Edition, Version 5.0 Diagnostics Guide for more information about DTFJ:
http://www-128.ibm.com/developerworks/java/jdk/diagnosis/
The DTFJ API and JVMTI have different goals and implementations. Therefore, although
there is a degree of crossover in terms of the data that can be obtained through these
mechanisms, the way in which the data is gathered and used is markedly different in most
cases. The DTFJ API is not intended in any way to replace the JVMTI; the two are designed
for different purposes.
Table C-1 presents the major differences between the DTFJ API and JVMTI.
Table C-1 Major differences between the DTFJ API and JVMTI

Interface type
   DTFJ API: Java-based, extendable API
   JVMTI: Native code, based on JVM callbacks

Tool portability
   DTFJ API: Java code can be run on any compatible JVM on any platform
   JVMTI: Native code has to be recompiled for different platforms

Live or post-mortem analysis?
   DTFJ API: The tool can run on a system dump gathered non-destructively from a live JVM or generated automatically in the event of a JVM failure
   JVMTI: The tool runs while the JVM being analyzed is live

Impact on JVM being analyzed
   DTFJ API: For post-mortem analysis, none. Taking a non-destructive system dump can take time. The tool runs in a separate JVM, therefore there is no impact after the system dump is generated
   JVMTI: Variable. Depends on what type of events the agent has asked to be notified about
Appendix D. jconsole
jconsole is a standard tool to monitor runtime parameters of the Java virtual machine (JVM).
This appendix demonstrates how to start jconsole from your workstation. At the time of
writing this book, however, jconsole is not supported in WebSphere Application Server.
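As an illustration of the general approach for a standalone JVM (a sketch using the standard J2SE 5.0 remote JMX system properties; the port number is an example, and disabling authentication and SSL is acceptable only in a controlled test environment), start the target JVM with the following properties and then point jconsole at it from your workstation:
-Dcom.sun.management.jmxremote.port=9999
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
jconsole <host name>:9999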
Select Additional materials and open the directory that corresponds to the IBM Redbook
form number, SG247353.
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this IBM Redbook.
IBM Redbooks
For information about ordering these publications, see “How to get IBM Redbooks” on
page 201. Note that some of the documents referenced here may be available in softcopy
only.
WebSphere Application Server V6 Problem Determination for Distributed Platforms,
SG24-6798
Online resources
These Web sites are also relevant as further information sources:
IBM eServer iSeries Information Center
http://publib.boulder.ibm.com/iseries/
IBM® Developer Kit and Runtime Environment, Java™ 2 Technology Edition, Version 5.0,
Diagnostics Guide:
http://publib.boulder.ibm.com/infocenter/javasdk/v5r0/index.jsp
IBM Systems Workload Estimator
http://www.ibm.com/systems/support/tools/estimator/index.html
Java Native Interface documentation
http://java.sun.com/j2se/1.5.0/docs/guide/jni/index.html
Performance Management for IBM System i
http://www-03.ibm.com/servers/eserver/iseries/perfmgmt/resource.html
System i group PTFs by release
http://www-912.ibm.com/s_dir/sline003.NSF/GroupPTFs?OpenView&view=GroupPTFs
WebSphere Application Server V6.1 Information Center
http://publib.boulder.ibm.com/infocenter/wasinfo/v6r1/index.jsp
Index

Symbols
.class 4
.class file 50, 52, 61
.java 2
.profile 24

Numerics
32-bit address space 37
5722-JC1 10
5722-JV1 10
64-bit PASE native methods 76

A
activity levels 129
ADDENVVAR 25
address space 80
adopted authority 77
allocate space 40
allocation
   failure 45
   request 45
application
   performance 99, 141
   startup 52
asynchronous garbage collection 12

B
basic configuration 23
bootstrap classloader 50, 54, 67
bytecode 51
   modification 53, 68

C
C function 76
C programming language 74–75
C++ 75
C++ programming language 74
call stack 98
class loading 49
class path 50
classes 50
Classic JVM 35, 37, 42, 52, 91, 110
classloader 50, 61
   cache 50
   delegation model 50, 54
classpath 58
code reuse 2
command-line options 25
common problems 25
compaction 36
compiler 2
CRTJVAPGM 14

D
data queue 75, 128
dead objects 39–40
deadlock 91–92, 94, 97, 99
debugging
   application 97
default garbage collection policy 38, 47
default heap size 42
defineClass() 64
defining heap 111
Diagnostic Tooling Framework for Java (DTFJ)
   API 193–194
   API documentation 193
   overview 192
direct execution program 11
distributed program call 75
DMPJVM 29
DSPJVAPGM 14
dynamic linking 50

E
enableJVM 168–169
EVTK tool 138, 140, 193
   overview 133
Extensible Program Call Markup Language 75
extension
   class path 50
   classloader 50, 54, 67

F
findHelperForClassLoader() 63
finding dependencies 32
findSharedClass() 64
fixed-size heaps 43
free list 41

G
garbage collection 6, 35–36, 41, 46
   "Stop The World" 38–39
   available options 37
   choosing a policy 47
   collection cycles 36
   configuring 30
   default GC policy 47
   fine-tuning operation 37
   free list 41
   garbage collector threads 39
   identifying causes of poor GC performance 44
   in Classic JVM 36
   interpreting verbose GC output 44
   live object tracing 38
   mark phase 36, 38–39, 46
   marking live objects 38
M
machine interface 10
maximum heap 31
MDD4J 185
MDD4J tool 193
   launching 115
   memory requirements 114

Q
QEJBSVR user profile 67
QPFRADJ 130
Qshell 11
   creating a Javadump in 93
QWAS61 subsystem 99
QZDASOINIT 130
R
RAMClass 53
RAS 6
Redbooks Web site 201
   contact us x
Reliability, Availability, and Serviceability 6
requirements for i5/OS 168
response time 47
   effect of garbage collection on 139
ROMClass 53, 57, 63–64, 70–71
RPG 74–75
run() method 98
RUNJVA 12
running WebSphere Application Server
   with new JVM 33

S
scavenge 40, 46
SDK 3
semantic 2
Servlet 100
Servlet handler threads 100
shared class
   cache 50, 53
   metadata 53
   permissions 67
shared classes 49
   cache size 53
   connecting to a class cache 53
   deleting the class cache 58
   deployment 54
   dynamic cache update 59
   helper API 53–54, 63
      example 65
      Javadoc 66
   history 52
   listAllCaches suboption 55
   metadata 58, 62
   overview 52
   printAllStats suboption 57, 69
   printStats suboption 56, 62
   recommendations 60
   security considerations 67
   utility options 55
   viewing class cache contents 57
   viewing summary cache statistics 56
shared memory 52, 67
shared semaphore 67
SharedClassHelper 63
SharedClassHelperFactory 63
SharedClassTokenHelper 64
SharedClassURLClasspathHelper 64
SharedClassURLHelper 64
SLIC 80
source code 103
stale classes 59, 64
Stop The World 36
storeSharedClass() 65
subpool 41
subpooling 38, 41, 48
subsystem 129
survivor space 40
syntax 2
system API reference 76
system classloader 50–51, 54
system dump
   creating non-destructively 192
System i 41
   performance management Web site 129
System Licensed Internal Code 80
system.gc() 45
SystemDefault.properties 25

T
TCP/IP 13
tenured space 40
teraspace storage model 76
thread
   dump 98
   resource contention 93
   safety 76
ThreadAnalyzer 95, 185
   tool 92, 95, 193
   analyzing Javadump using 94
   monitor analysis 96
   summary 96
   thread analysis 97
threads 100, 103
throughput 47
   effect of garbage collection on 139
   effect of optthruput GC policy on 38
Tivoli Performance Viewer 15
TMPDIR 88
Trade 6 performance benchmark sample 44, 133
transaction-based applications 40
troubleshooting tips 179

U
uninstalling 22
user-defined classloader 51, 68
user-defined classloaders 53
using the JAVAGCINFO 30

V
-verbose
   gc 44
verbose garbage collection 132
verbose GC output 42
virtual machine 4

W
weak generational hypothesis 40
Web application 99
WebSphere Administrative Console 173
WebSphere Application Server 12, 44, 51, 62, 67, 92, 98
   default heap size 31
   hang detection option 91

X
-Xdisableexplicitgc 45
-Xdump
   heap 113
-Xgcpolicy 47
   gencon 38
   optavgpause 38
   optthruput 38
   subpool 38
-Xmaxf 44
-Xminf 43
XML 44
-Xms 42
-Xmx 42–43
-Xnoclassgc 133
XPCML 75
-Xscmx 59, 62
-Xshareclasses 54–56, 67, 69
-Xverbosegclog 44