Thread Dump
A thread dump is simply a list of all threads and the full stack trace of the code running on each thread. A Java thread dump is a way of
finding out what every thread in the JVM is doing at a particular point in time. This is especially useful if your Java application
sometimes seems to hang when running under load, as an analysis of the dump will show where the threads are stuck.
A stack trace is a dump of the current execution stack that shows the method calls running on that thread, with the most recent call at the top and the oldest at the bottom.
Here is an example stack trace for a thread running in WebLogic: "Execute Thread: '2' for queue: 'weblogic.socket.Muxer'"
daemon prio=1 tid=0x0938ac90 nid=0x2f53 waiting for monitor entry [0x80c77000...0x80c78040]
at weblogic.socket.PosixSocketMuxer.processSockets(PosixSocketMuxer.java:95)
- waiting to lock <0x8d3f6df0> (a weblogic.socket.PosixSocketMuxer$1)
at weblogic.socket.SocketReaderRequest.run(SocketReaderRequest.java:29)
at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:42)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:145)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:117)
The key here is knowing that the method currently running (or waiting) is always the one at the top of the trace.
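A thread dump like the one above can also be produced programmatically, which is handy for embedding diagnostics in an application. The following is a minimal sketch using the standard Thread.getAllStackTraces() API (the class and method names here are illustrative, not from the original text):

```java
import java.util.Map;

public class ThreadDumpExample {
    // Build a text thread dump: every live thread plus its stack, top frame first.
    static String dumpAllThreads() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            Thread t = e.getKey();
            sb.append(String.format("\"%s\" daemon=%b prio=%d state=%s%n",
                    t.getName(), t.isDaemon(), t.getPriority(), t.getState()));
            for (StackTraceElement frame : e.getValue()) {
                sb.append("    at ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(dumpAllThreads());
    }
}
```

The output deliberately mimics the "at package.Class.method(File.java:line)" layout of a real dump, so the top-frame rule described above applies to it as well.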
Thread Pool:
Most application servers use thread pools to manage execution tasks of a certain type. A thread pool is merely a collection of
threads set aside for a specific type of task.
In most cases, when someone reports that the application server isn't responding, it usually means that the application code
deployed to the application server isn't working right - and you'll need to figure out why. So you'll need to identify the thread
pool that your application code runs on and find those threads in the Java thread dump to see what's going on.
The easiest way to get a thread dump is to send the QUIT (on UNIX) or BREAK (on Windows) signal to the Java process. On UNIX, this is done by figuring
out the PID (process ID) of the Java virtual machine and doing a "kill -3 <pid>".
Where you go to find the thread dump usually depends on Java implementation. For most vendors, you'll go to the "standard
out" log file. In WebLogic, it's often referred to as "weblogic.out", "nohup.out" or something you've created yourself by
redirecting standard output to a file.
A thread dump only shows the thread status at the moment it is taken, so in order to see how thread status changes over time, it is
recommended to take 5 to 10 dumps at 5-second intervals.
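The same repeated-capture idea can be sketched in code with the standard JMX ThreadMXBean (the class name below is made up for illustration; the count and interval mirror the recommendation above):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class RepeatedDumps {
    // Capture `count` thread dumps, sleeping `intervalMillis` between captures.
    static String[] capture(int count, long intervalMillis) throws InterruptedException {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        String[] dumps = new String[count];
        for (int i = 0; i < count; i++) {
            StringBuilder sb = new StringBuilder();
            // dumpAllThreads(false, false): skip lock/monitor detail for brevity
            for (ThreadInfo info : mx.dumpAllThreads(false, false)) {
                sb.append(info.toString());
            }
            dumps[i] = sb.toString();
            if (i < count - 1) Thread.sleep(intervalMillis);
        }
        return dumps;
    }

    public static void main(String[] args) throws InterruptedException {
        // 5 dumps at 5-second intervals, as recommended in the text
        for (String dump : capture(5, 5000)) {
            System.out.println(dump);
        }
    }
}
```

Comparing consecutive dumps shows which threads stay stuck on the same frame, which is exactly the signal you look for in a hang.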
Heap Dump:
A heap dump is a snapshot of the memory of a Java process. The snapshot contains information about the Java objects and
classes in the heap at the moment the snapshot is triggered. Because there are different formats for persisting this data, there
might be some differences in the information provided. Typically, a full garbage collection is triggered before the heap dump is
written, so the dump contains information about the remaining objects in the heap.
You can use jmap to get a dump of any running process, assuming you know the PID. Use Task Manager or Resource Monitor to
get the PID. Then run jmap -dump:format=b,file=heap.bin <pid> to get the heap for that process.
Heap dump file extension: .hprof. A heap dump does not contain allocation information; therefore you cannot
work out what created the objects or where the objects were created.
Memory leaks are notoriously hard to debug. Java, with its built in garbage collector, handles most memory leak issues.
A true memory leak happens when objects are stored in memory but are not accessible by running code. These kinds of
inaccessible objects are handled by the Java garbage collector (in most cases). Another type of memory leak happens when we
have an unneeded reference to an object somewhere. These are not true memory leaks, as the objects are still accessible, but
nonetheless they can cause some nasty bugs.
For example, storing large objects in the session and never cleaning up their references. This kind of issue will often go unnoticed until we
get a large number of concurrent users and the application starts throwing out-of-memory errors. Without a load test, this will
most probably first happen in production.
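As a hypothetical illustration of this pattern (the class and method names below are invented, not taken from any real application), the following sketch parks a large object in a session-like map on every request and never removes it, so heap usage grows until an out-of-memory error:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SessionLeakSketch {
    // Simulates a server-side session store that is never cleaned up.
    static final Map<String, List<byte[]>> SESSIONS = new HashMap<>();

    static void handleRequest(String sessionId) {
        // Each request appends a "large object" to the session, and nothing
        // ever removes it -> the references keep the objects reachable, so
        // the garbage collector can never reclaim them.
        SESSIONS.computeIfAbsent(sessionId, k -> new ArrayList<>())
                .add(new byte[1024 * 1024]); // 1 MB retained per request
    }

    public static void main(String[] args) {
        for (int i = 0; i < 10; i++) {
            handleRequest("user-42");
        }
        System.out.println("Objects retained: " + SESSIONS.get("user-42").size());
    }
}
```

In a heap dump of such a process, the session map would show up as the dominant retainer, exactly the kind of finding described in the analysis below.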
One way to find memory leaks is analyzing heap dumps. There are several ways to get a heap dump (not including 3rd party
tools):
1. HeapDumpOnCtrlBreak
Add this option (-XX:+HeapDumpOnCtrlBreak) when starting the JVM. It will create a heap dump every time a Ctrl+Break (kill -3) signal is sent to the JVM. -XX:HeapDumpPath
is used to set the location of the heap dumps.
2. HeapDumpOnOutOfMemoryError
This JVM option (-XX:+HeapDumpOnOutOfMemoryError) will create a heap dump every time your application throws an OutOfMemoryError. -XX:HeapDumpPath is
used to set the location of the heap dumps.
3. Jmap
Jmap is a tool that comes with JDK installation. To use it we need a PID of our java process.
Then we can use it like this:
jmap -dump:file=D:\temp\heapdumps\dump.bin 1234
4. HotSpotDiagnosticMXBean
This option is available in Java 1.6+ and uses sun.management.ManagementFactory:
ManagementFactory.getDiagnosticMXBean().dumpHeap("D:\\temp\\heapdumps\\dump.bin", true);
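On HotSpot-based JVMs the same thing can be done through the documented com.sun.management.HotSpotDiagnosticMXBean; a minimal sketch (the helper class name is illustrative):

```java
import java.io.File;
import java.lang.management.ManagementFactory;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    // Write a binary heap dump to `path`. With liveOnly=true the JVM runs a
    // full GC first, so only reachable objects land in the dump.
    static void dumpHeap(String path, boolean liveOnly) throws Exception {
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                ManagementFactory.getPlatformMBeanServer(),
                "com.sun.management:type=HotSpotDiagnostic",
                HotSpotDiagnosticMXBean.class);
        bean.dumpHeap(path, liveOnly);
    }

    public static void main(String[] args) throws Exception {
        // The file must not already exist; recent JDKs also require .hprof
        String path = System.getProperty("java.io.tmpdir") + File.separator
                + "dump-" + System.nanoTime() + ".hprof";
        dumpHeap(path, true);
        System.out.println("Heap written to " + path);
    }
}
```

The resulting .hprof file can be opened in any heap-analysis tool, just like a dump produced by jmap.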
How to collect server hung or performance issues information for OMNIbus WebGUI:
Heap and thread dumps are essential when troubleshooting memory leak issues. The heap dump provides a
snapshot of the JVM's memory; the thread dump provides information about the running threads. This information can tell us
which thread may be hanging the application, or help troubleshoot high CPU usage.
A. Environment variables
Exact steps for configuring your JVM to produce a heap dump may vary by JVM, for precise instructions please consult your JVM
vendor's documentation. Below we'll describe typical settings. This information is also provided in the JProbe Reference Guide.
Environment variables control the IBM JVM's heap dump behavior. To enable the JVM to produce heapdumps on demand, you
may need to set either, or both, of the following environment variables:
IBM_HEAP_DUMP=true
IBM_HEAPDUMP=true
If you want to have the JVM create a heap dump file and/or a java core automatically on an Out-of-Memory (OOM) condition,
set the following additional environment variables:
IBM_HEAPDUMP_OUTOFMEMORY=true
IBM_JAVACORE_OUTOFMEMORY=true
Note that more recent IBM JVMs set these latter two environment variables automatically. If you want to disable the automatic
heapdump and/or javacore on OOM, you can set either or both of these environment variables to 'false'.
You may also wish to specify the heapdump location with an optional environment variable:
IBM_HEAPDUMPDIR=<directory>
B. JAVA_DUMP_OPTS
Before starting the JVM, dump behavior can also be configured with the JAVA_DUMP_OPTS parameter.
Note that this method does not allow modification of the maximum number of dumps once the process is started.
C. wsadmin
For example, you might have a scenario where one of your applications is frequently running into an OutOfMemory, but the
Application Server keeps running. In this case, you may want to stop the JVM from writing heapdumps at each OOM, without
restarting the Application Server.
To generate a heap dump while the application server is running, run the command below:
wsadmin>AdminControl.invoke(jvm, "generateHeapDump")
'C:\\WebSphere\\AppServer\\profiles\\AppSrv01\\.\\heapdump.20110328.162836.1880.0001.phd'
Ways to setup for collecting thread dumps
In case of a server crash, that is, the application server spontaneously dies, look for a javacore file. The JVM creates the file in the product
directory structure, with a name like javacore.[number].txt.
In case of a server hang, you can force an application to create a thread dump (or javacore).
Using the wsadmin command prompt, get a handle to the problem application server:
Look for an output file in the installation root directory with a name like javacore.date.time.id.txt.
One of our customers – a large online retail store – ran into such an issue. They run one of their online gift card self-service
interfaces on two JVMs. Especially during peak holiday seasons – when users are activating their gift cards or checking the
balance – crashes due to OOM (Out Of Memory) were more frequent which caused bad user experience. The first “measure”
they took was to double the JVM Heap Size. This didn’t solve the problem as JVMs were still crashing, so they followed the
memory diagnostics approach for production as explained in Java Memory Leaks to identify and fix the root cause of the
problem.
Before we walk through the individual steps, let’s look at the memory graph that shows the problems they had in December
during the peak of the holiday season. The problem persisted even after increasing the memory. They could fix the problem
after identifying the real root cause and applying specific configuration changes to a 3rd party software component:
Only after identifying the actual root cause and applying the necessary configuration changes did the memory leak issue go away;
increasing memory was not even a temporary solution.
Java Heap Size of both JVMs showed significant growth starting Dec 2nd and Dec 4th resulting in a crash on Dec 6th for both
JVMs when the 512MB Max Heap Size was exceeded.
A closer look at an instance of VPReportEntry4 shows that it contains 5 Strings, with one consuming 23KB (compared to
the few bytes of the other String objects). This also explains the high GC size of the String class in the overall heap dump.
Following the referrer chain further up reveals the complete picture. The EventQueue keeps LogEvents in an Array which itself
keeps VPReportEntrys in an Array. All of these objects seem to be kept in memory as the objects are being added to these
arrays but never removed and therefore not garbage collected:
Following the referrer tree reveals that global EventQueue objects hold on to the LogEvent and VPReportEntry objects in array
lists which are never removed from these arrays
It is the report method that allocates the VPReportEntry objects, which stay on the heap for quite a while.
Step 4: Why are these objects not removed from the Heap?
The premise of the 3rd party logging framework is that log entries will be created by the application and written in batches at
certain times by sending these log entries to a remote logging service using JMS. The memory behavior indicates that – even
though these log entries might be sent to the service, these objects are not always removed from the EventQueue leading to
the out-of-memory exception.
Further analysis revealed that the background batch writer thread calls a logBatch method which loops through the event
queue (calling EventQueue.next) to send the current log events in the queue. The question is whether as many messages were
taken out of the queue (using next) as were put into the queue (using add), and whether the batch job is really called frequently
enough to keep up with the incoming event entries. The following chart shows the method executions of add, as well as the calls
to logBatch, highlighting that logBatch is actually not called frequently enough and is therefore not calling next to remove
messages from the queue:
The highlighted area shows that messages are put into the queue but not taken out because the background batch job is not
executed. Once this leads to an OOM and the system restarts it goes back to normal operation but older log messages will be
lost.
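The mechanism described above can be reduced to a minimal sketch. The EventQueue, add, next and logBatch names follow the article's description, but the implementation below is hypothetical: events are appended by the application, and only an explicit logBatch() drains them, so whenever the batch job stops running the queue grows without bound.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical reduction of the leak: add() is called once per log event,
// but entries are only removed when the batch writer calls next().
class EventQueue {
    private final List<String> events = new ArrayList<>();

    void add(String event) {
        events.add(event);                       // grows on every log call
    }

    String next() {
        // removes the oldest entry so it can be garbage collected
        return events.isEmpty() ? null : events.remove(0);
    }

    int size() {
        return events.size();
    }
}

public class LogBatchDemo {
    // Drains the queue in one batch, as the background writer thread would.
    static int logBatch(EventQueue q) {
        int sent = 0;
        while (q.next() != null) {
            sent++;
        }
        return sent;
    }

    public static void main(String[] args) {
        EventQueue q = new EventQueue();
        for (int i = 0; i < 1000; i++) {
            q.add("event-" + i);                 // application logs faster...
        }
        // ...than the batch job runs; until logBatch fires, all 1000
        // entries are retained on the heap.
        System.out.println("retained=" + q.size() + " sent=" + logBatch(q));
    }
}
```

The fix in the story amounts to making sure the drain side (logBatch/next) keeps pace with the add side, via the vendor's configuration.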
After making the recommended changes the system could again run with the previous heap memory size without experiencing
any out-of-memory exceptions.
The Memory Leak issue has been solved and the application now runs even with the initial 512MB Heap Space without any
problem.
They still use the same dashboards they have built to troubleshoot this issue, to monitor for any future excessive logging
problems.
These dashboards allow them to verify that the logging framework can keep up with log messages after they applied the
changes.
Conclusion
Adding additional memory to crashing JVMs is most often not even a temporary fix. If you have a real Java memory leak, it will just
take longer until the Java runtime crashes, and larger heaps incur even more garbage-collection overhead.
The real answer to this is to use the simple approach explained here. Look at the memory metrics to identify whether you have
a leak or not. Then identify which objects are causing the issue and why they are not collected by the GC. Working with
engineers or 3rd party providers (as in this case) will help you find a permanent solution that allows you to run the system
without impacting end users and without additional resource requirements.
Citrix:
ctrx_key ("DOWN_ARROW_KEY", 0, "", CTRX_LAST);
ctrx_key ("UP_ARROW_KEY", 0, "", CTRX_LAST);
ctrx_key ("RIGHT_ARROW_KEY", 0, "", CTRX_LAST);
ctrx_key ("LEFT_ARROW_KEY", 0, "", CTRX_LAST);
[CITRIX]
DesktopColors=16
Window=800 x 600
Compression=1
Cache=0
Queue=0
BitmapSyncLevel=Exact
web_url("launcher.aspx",
"URL=https://gpsii.novartis.net/Citrix/GPSII/site/launcher.aspx?
CTX_Application=Citrix.MPS.App.CTX_V4P_EU.GPSII%20Start%20Menu%20PHCHBS-
T3635&CTX_Token={CitrixXenApp_Session_Token}&LaunchId=1363954877969",
"TargetFrame=",
"Resource=0",
"RecContentType=text/html",
"Referer=https://gpsii.novartis.net/Citrix/GPSII/site/default.aspx?CTX_CurrentFolder=%5cLoad%20Test
%20GPSII",
"Snapshot=t21.inf",
"Mode=HTML",
EXTRARES,
"URL=../media/LaunchSpinner.gif", "Referer=https://gpsii.novartis.net/Citrix/GPSII/site/default.aspx?
CTX_CurrentFolder=%5cLoad%20Test%20GPSII", ENDITEM,
LAST);
web_reg_save_param("icadata","LB=[Encoding]","RB=",LAST);
web_url("appembed.aspx", "URL=https://gpsii.novartis.net/Citrix/GPSII/site/appembed.aspx?
CTX_Application=Citrix.MPS.App.CTX_V4P_EU.GPSII%20Start%20Menu%20PHCHBS-
T3635&CTX_AppFriendlyNameURLENcoded=GPSII%20Start%20Menu%20PHCHBS-
T3635&CTX_Token={CitrixXenApp_Session_Token}&LaunchId=1363954877969&CTX_WindowWidth=1280&CTX_WindowHeigh
t=800&Title=GPSII%20Start%20Menu%20PHCHBS-T3635",
"TargetFrame=",
"Resource=0",
"RecContentType=text/html", "Referer=https://gpsii.novartis.net/Citrix/GPSII/site/launcher.aspx?
CTX_Application=Citrix.MPS.App.CTX_V4P_EU.GPSII%20Start%20Menu%20PHCHBS-
T3635&CTX_Token={CitrixXenApp_Session_Token}&LaunchId=1363954877969",
"Snapshot=t22.inf",
"Mode=HTML",
EXTRARES, "URL=launch.ica?CTX_UID=1&CTX_Application=Citrix.MPS.App.CTX_V4P_EU.GPSII
%20Start%20Menu%20PHCHBS-T3635&CTX_Token={CitrixXenApp_Session_Token}&LaunchId=1363954877969",
ENDITEM,LAST);
lr_output_message(lr_eval_string("{icadata}"));
sprintf(icafile,"[Encoding]%s",lr_eval_string("{icadata}"));
tmp=(char *) getenv("TEMP");
sprintf(filename,"%s\\%s.%s",tmp,lr_eval_string("{ICANAME}"),"ica");
fp = fopen(filename, "w");
fprintf(fp, "%s", icafile); /* write the ICA data verbatim, not as a format string */
fclose(fp);
ctrx_set_connect_opt(ICAFILE, filename);
Useful Links
http://wendyliu327.wordpress.com/2010/06/28/configurationtroubleshooting-the-setup-of-controller-and-load-generator-
machines/ MI Listener
http://www.teamquest.com/pdfs/whitepaper/tqwp23.pdf
An MI Listener is used when carrying out performance testing over a firewall. The common ports are blocked when a
firewall is used, so for communication between the Controller and the Load Generators, an MI Listener is used along with a MOFW
(Monitor Over Firewall) machine. Using both of these together, communication is done through port 443.
1. Assess the problem and establish numeric values that categorize acceptable behavior.
2. Measure the performance of the system before modification.
3. Identify the part of the system that is critical for improving the performance. This is called the bottleneck.
4. Modify that part of the system to remove the bottleneck.
5. Measure the performance of the system after modification.
6. If the modification makes the performance better, adopt it. If the modification makes the performance worse, put it
back the way it was.
2. The hard disk holds the original copy of a program permanently; when you want to use a program, a
temporary copy is put into RAM, and that is the copy you use.
3. When working on a file, the original file is left untouched on the hard drive until you do a "save"; the "save"
copies the new version of the file from RAM onto the hard disk (and usually replaces the original file). The
file you are modifying, plus all the changes you make, are kept in RAM until you do a "save".
RAM holds the programs currently running on your desktop. If you open a program when RAM is full, your OS will try to locate
programs in RAM which are not currently in use. It will then transfer those programs to an area of the hard disk, so
that space is freed in RAM for your new program to run. So effectively, though there was no space in RAM,
your OS created memory space with the help of your hard disk. This memory is called virtual memory. The
area of the hard disk where the RAM image is copied is known as the page file, and the process as paging.
You might ask why we can't eliminate the use of the hard disk or RAM, given the above scenario. Here is a beautiful explanation:
The read/write speed of a hard drive is much slower than RAM, and the technology of a
hard drive is not geared toward accessing small pieces of data at a time. If your system has
to rely too heavily on virtual memory, you will notice a significant performance drop. The
key is to have enough RAM to handle everything you tend to work on simultaneously —
then, the only time you “feel” the slowness of virtual memory is when there’s a slight pause
when you’re changing tasks. When that’s the case, virtual memory is perfect.
When it is not the case, the operating system has to constantly swap information back and
forth between RAM and the hard disk. This is called thrashing, and it can make your computer feel incredibly slow.
Memory usage:
In terms of LoadRunner, you should ensure that the commit charge is always less than physical memory (RAM)
on your load generator machines, so that minimal paging is required.
Memory leak: a condition caused by a program that fails to release memory when it is no longer needed. This condition
is normally the result of a bug that prevents the program from freeing up memory that it
no longer needs. The term has the potential to be confusing, since memory is not physically
lost from the computer; rather, memory is allocated to a program, and that program no longer uses it but never releases it.
Page fault: an interrupt that occurs when a program requests data that is not currently in real
memory. The interrupt triggers the operating system to fetch the data from virtual memory and load it into RAM.
An invalid page fault or page fault error occurs when the operating system cannot find the
data in virtual memory. This usually happens when the virtual memory area, or the table
that maps virtual addresses to real addresses, becomes corrupted.
Now the most important question comes up, how do they affect Load Runner functioning?
As you might guess, a memory leak, if left unattended and uncorrected, could prove fatal. Memory leaks can be
detected by running tests for a long duration (say about an hour) and continuously checking memory usage.
Issues caused by memory leaks are essentially based on two variables for a standalone Windows application: 1)
frequency of usage, and 2) size of the memory leak. If either one or both are very high, the computer might reach a point
where no memory is available for other applications, which could lead to a computer crash. If it is a network-based
application then you will also have to consider network traffic: if each network transaction causes a memory leak,
the server's memory will be exhausted far more quickly under load.
The memory required per Vuser depends on the Vuser type (Web, Citrix, SAP GUI) and on the application and system where you intend to run the test.
So if your Web/HTTP Vuser takes 5MB when running as a process, 50 Vusers would take 250MB. Then remote
desktop connections, third-party programs continuously running in the background, and the memory consumed by the
application, all summed, will give you the memory requirement for a test.
A process is an independently running instance of a program, while threads are a way for a program to split itself into two or more simultaneously running tasks. In
general, a thread is contained inside a process, and different threads in the same process share some resources.
In terms of LoadRunner, when we run Vusers as processes, LoadRunner creates one process called mmdrv.exe per Vuser.
When we run Vusers as threads, LoadRunner creates one thread per Vuser. So if we have 10 Vusers, then we will
have 1 process with 10 threads running inside it, given a limit of at least 10 threads per process.
Running Vusers as threads is more memory-efficient than running Vusers as processes, for the obvious reason that threads share the memory of their parent process.
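The reason threads are cheaper can be sketched in plain Java: all threads inside one process read and write the same heap, so per-Vuser data need not be duplicated. The class below is an illustrative toy, not LoadRunner code:

```java
public class SharedHeapDemo {
    // Runs n "Vusers" as threads; they all increment one array that lives
    // on the heap shared by the whole process.
    static int run(int n) throws InterruptedException {
        final int[] shared = new int[1];          // one heap object, visible to every thread
        Thread[] vusers = new Thread[n];
        for (int i = 0; i < n; i++) {
            vusers[i] = new Thread(() -> {
                synchronized (shared) {           // serialize the concurrent updates
                    shared[0]++;
                }
            });
            vusers[i].start();
        }
        for (Thread t : vusers) {
            t.join();                             // wait for all "Vusers" to finish
        }
        return shared[0];
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("Updates seen by all threads: " + run(10));
    }
}
```

If each "Vuser" were a separate process instead, each would need its own copy of that data, which is exactly the extra memory cost the text describes.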
Message Functions:
1. lr_debug_message --> Sends a debug message to the Output window or the Business
Process Monitor log files.
2. lr_error_message --> Sends an error message to the Output window or the Business Process
Monitor log files.
4. lr_output_message --> Sends a message to the Output window or the Business Process
Monitor log files.
5. lr_message --> Sends a message to the Vuser log and Output window or the Business
Process Monitor log files.
Run-Time Functions
5. lr_rendezvous --> Sets a rendezvous point in a Vuser script. Not applicable for Application
Management.
Transaction Functions:
3. lr_end_transaction_instance --> Marks the end of a transaction instance for performance analysis.
4. lr_fail_trans_with_error --> Sets the status of open transactions to LR_FAIL and sends an error
message.
5. lr_get_trans_instance_duration --> Gets the duration of a transaction instance specified by its handle.
6. lr_get_trans_instance_wasted_time --> Gets the wasted time of a transaction instance by its handle.
10. lr_resume_transaction --> Resumes collecting transaction data for performance analysis.
11. lr_resume_transaction_instance --> Resumes collecting transaction instance data for performance
analysis.
17. lr_start_transaction_instance --> Starts a nested transaction specified by its parent’s handle.
19. lr_stop_transaction_instance --> Stops collecting data for a transaction specified by its handle.
20. lr_wasted_time --> Removes wasted time from all open transactions.
Application Performance
Application performance is an area of increasing importance. We are building bigger and bigger applications. The
functionality of today’s applications is getting more and more powerful. At the same time we use highly distributed,
large scale architectures which also integrate external services as central pieces of the application landscape
When discussing performance we must carefully differentiate between these two aspects, response time and scalability (people are likely to say that "the
performance of the application is bad", but does this mean that response times are too high, or that the application
cannot scale up to more than some large number of concurrent users?). It's not always an easy task, because the
two are interrelated, and one can, and often does, affect the other.
By the way, you can increase the size of the Java heap space based on your application's needs, and I always recommend this
to avoid using the default JVM heap values. If your application is large and creates lots of objects, you can change the size of
the heap space by using the JVM options -Xms and -Xmx: -Xms denotes the starting size of the heap, while -Xmx denotes the
maximum size of the heap in Java. There is another parameter, -Xmn, which denotes the size of the new generation
of the Java heap space. The only catch is that you cannot change the size of the heap in Java dynamically; you can only
provide the Java heap size parameters while starting the JVM.
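You can observe the effect of -Xms and -Xmx from inside a program via the standard Runtime API; a small sketch (the class name is illustrative):

```java
public class HeapSettings {
    // Returns {max (-Xmx), committed, used} heap sizes in bytes.
    static long[] heapStats() {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();             // reflects -Xmx
        long committed = rt.totalMemory();     // starts near -Xms, grows toward -Xmx
        long used = committed - rt.freeMemory();
        return new long[] { max, committed, used };
    }

    public static void main(String[] args) {
        long[] s = heapStats();
        System.out.printf("max=%dMB committed=%dMB used=%dMB%n",
                s[0] >> 20, s[1] >> 20, s[2] >> 20);
    }
}
```

Running this with different -Xms/-Xmx values (e.g. java -Xms64m -Xmx256m HeapSettings) is a quick way to confirm which settings the JVM actually picked up.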
I have shared some more useful JVM options related to Java Heap space and Garbage collection on my post 10 JVM
options Java programmer must know, you may find useful.
Update:
Regarding default heap size in Java, from Java 6 update 18 there are significant changes in how JVM calculates
default heap size in 32 and 64 bit machine and on client and server JVM mode:
1) Initial heap space and maximum heap space are larger, for improved performance.
2) Default maximum heap space is 1/2 of physical memory for machines with up to 192MB of RAM, and 1/4 of physical memory for
machines with up to 1GB; so for a 1GB machine the maximum heap size is 256MB. The maximum heap size will not be used until the program
creates enough objects to fill the initial heap space, which is much smaller: at least 8MB, or 1/64th of physical
memory up to 1GB.
3) For the server Java virtual machine, the default maximum heap space is 1GB for 4GB of physical memory on a 32-bit JVM;
for a 64-bit JVM it is 32GB for a physical memory of 128GB.
Reference: http://www.oracle.com/technetwork/java/javase/6u18-142093.html
1) The main difference between heap and stack is that stack memory is used to store local variables and function calls,
while heap memory is used to store objects in Java. No matter where an object is created in code, e.g. as a member
variable, local variable or class variable, it is always created inside the heap space in Java.
2) Each thread in Java has its own stack, whose size can be specified using the -Xss JVM parameter; similarly, you can
specify the heap size of a Java program using the JVM options -Xms and -Xmx, where -Xms is the starting size of the heap and
-Xmx is the maximum size of the Java heap.
3) If there is no memory left in the stack for storing a function call or local variable, the JVM will
throw java.lang.StackOverflowError, while if there is no more heap space for creating an object, the JVM will
throw java.lang.OutOfMemoryError: Java heap space.
4) If you are using recursion, in which a method calls itself, you can quickly fill up the stack memory. Another difference
between stack and heap is that the size of stack memory is a lot smaller than the size of heap memory in Java.
5) Variables stored on the stack are only visible to the owning thread, while objects created on the heap are visible to all
threads. In other words, stack memory is a kind of private memory of each Java thread, while heap memory is shared among
all threads.
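Points 3) and 4) can be demonstrated directly: the sketch below exhausts the stack with unbounded recursion while heap allocation continues to work (the class name is illustrative):

```java
public class StackVsHeap {
    // Each call adds a frame to the calling thread's stack; with no base
    // case the stack fills up and the JVM throws StackOverflowError.
    static int depth(int n) {
        return depth(n + 1);
    }

    // True if unbounded recursion exhausts the stack.
    static boolean blowsStack() {
        try {
            depth(0);
            return false;
        } catch (StackOverflowError e) {
            return true;               // the stack, not the heap, ran out
        }
    }

    public static void main(String[] args) {
        System.out.println("stack exhausted: " + blowsStack());
        // Objects, by contrast, always live on the shared heap:
        int[] onHeap = new int[1_000];
        System.out.println("heap allocation ok, length=" + onHeap.length);
    }
}
```

Shrinking the stack with -Xss (e.g. java -Xss256k StackVsHeap) makes the error arrive after fewer frames, which confirms point 2) as well.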
How can you retrieve the additional attribute values from run time settings in loadrunner
vugen?
The additional attribute values from run-time settings in VuGen can be retrieved using the function lr_get_attrib_<type of
attribute>.
If you are retrieving a string-type attribute, you use lr_get_attrib_string. The complete syntax for retrieving the value
of an attribute into a string is:
Define the variable sHostName.
char* sHostName;
sHostName = lr_get_attrib_string("aHostName");
Note: Here the aHostName is the additional attribute name, defined in run time settings.
What is the function used for converting the string variable value to a parameter in
loadrunner vugen?
Using variables directly in a VuGen test script is a little difficult and involves a lot of coding. So, to avoid using
variable values directly in the test script, we convert them to parameters and use those in the test script instead.
Converting a string variable value to a parameter in LoadRunner VuGen can be done with lr_save_<type of variable>.
To convert a string variable to a parameter, use lr_save_string(sHostName, "pHostName").
Note: in the above case sHostName is a variable and pHostName is a parameter.
What is the function used for converting the parameter value to a string value in loadrunner
vugen?
The function used for converting a parameter value to a string value in VuGen is lr_eval_<data type>.
To convert a parameter value to a string, use it as follows:
sHostName = lr_eval_string("{pHostName}");
Note: In this case sHostName is a variable
pHostName is a parameter
What are the types of load testing or performance testing will be done using loadrunner?
The types of load testings supported by loadrunner are:
Load Testing: by using LoadRunner, a load test can be simulated for multiple simultaneous users, similar to a realistic
number of users.
Endurance / Longevity / Soak Testing: by using LoadRunner, the load test can be run for longer durations.
Stress Testing: by using LoadRunner, the load test can be simulated with multiple users until the stress point is reached,
with an incremental increase in the number of users.
Spike Testing: by using LoadRunner, spikes can be introduced during load test execution by suddenly adding users.
Capacity Testing: using LoadRunner, a system's capacity can be tested for future growth in the business. For example, if
the current AUT system configuration supports 1000 users and, due to expansion in the business, the number of users
increases to 10000, will the existing environment cope, or must new systems be procured? The decision can be
taken by conducting load tests using LoadRunner.
Scalability Testing: Scalability can be compared using some of the auto generated graphs, like average response times
under load.
Monitoring JVM Memory usage inside a Java Application
How to implement a JVM memory usage threshold warning without hurting application performance.
We can get the JVM's current memory usage by using the Runtime.getRuntime().totalMemory() and freeMemory() methods. So
one way to monitor memory consumption would be to create a separate thread that calls these methods every second
and checks whether usage exceeds what we expect. But this polling harms application performance, because each
check costs CPU cycles.
A better alternative is to use a platform MXBean, which is an MBean (managed bean) used to monitor
and manage the JVM.
In our case we want to monitor memory consumption, so we will use MemoryMXBean together with the MemoryPoolMXBeans. They have a
neat feature that allows us to set a memory usage threshold: if the threshold is crossed, a notification is fired, so there is no
need to check the memory usage periodically.
A. The first step is to create a class that will catch the notification emitted by the JVM when the threshold is crossed.

package com.inoneo.util;

import java.lang.management.MemoryNotificationInfo;
import java.util.logging.Logger;
import javax.management.Notification;
import javax.management.NotificationListener;

public class LowMemoryListener implements NotificationListener {

    private static final Logger log = Logger.getLogger(LowMemoryListener.class.getName());

    @Override
    public void handleNotification(Notification notification, Object handback) {
        String notifType = notification.getType();
        if (notifType.equals(MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED)) {
            // potential low memory, log a warning
            log.warning("Memory usage threshold reached");
        }
    }
}
The memory is divided into several memory pools. In the code below we first register the LowMemoryListener with the
MemoryMXBean (which emits the notifications), then loop over the pools and set a usage threshold on each pool that
supports the feature. This code should be placed where you want to start monitoring JVM memory usage, usually at the
beginning of the program.

package com.inoneo;

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryPoolMXBean;
import java.util.List;
import javax.management.NotificationEmitter;
import com.inoneo.util.LowMemoryListener;

public class MemoryMonitor {

    public static void start() {
        // The MemoryMXBean is the source of the threshold-exceeded notifications
        MemoryMXBean memoryBean = ManagementFactory.getMemoryMXBean();
        ((NotificationEmitter) memoryBean)
                .addNotificationListener(new LowMemoryListener(), null, null);

        // Set a usage threshold on every pool that supports it
        List<MemoryPoolMXBean> pools = ManagementFactory.getMemoryPoolMXBeans();
        for (MemoryPoolMXBean pool : pools) {
            if (pool.isUsageThresholdSupported()) {
                // JVM heap memory threshold in bytes (1.00 GB = 1000000000); 0 disables it
                pool.setUsageThreshold(1000000000L);
            }
        }
    }
}
In this example, we set a threshold at 1 GB of memory that means an event will be fired if the memory
consumption reaches or exceeds 1 GB.
Planning
We created some scripts that emulated a user journey through the new application. Before we were able to begin
performance testing we needed to see what throughput we were required to achieve.
Little’s Law: L = λ × W
L – the average number of items in the system, e.g. the number of customers queuing to pay at a till.
λ – Throughput: the average arrival rate, e.g. the number of customers being served at the till per second.
W – Response Time: the average time an item spends in the system, e.g. the time a customer has spent from the time they entered the
queue to the time they have been served at the till.
Calculating
We already had figures from the production system, so we could see the throughput we would require per
second; in this case we were required to achieve a throughput of 2.37TPS (transactions per second).
After our planning was complete we knew that our overall response time (from beginning to end of the system) was
4.52s. We also knew the throughput we were required to achieve was 2.37TPS, and from these two values we can
calculate L:
λ = 2.37TPS, W = 4.52s
Therefore L = 2.37 × 4.52 ≈ 10.7
So now we had a complete formula: we knew that with 10.7 Vusers we could achieve the required TPS rate,
which would be our constant.
However, we were performance testing and it was vital we put a realistic number of users through the system.
We wanted to run a test with 50 Vusers; the throughput λ would stay constant, so we would
alter W accordingly.
Setting L = 50 and solving L = λ × W for W gives W = 50 / 2.37 ≈ 21.12s:
50 = λ × 21.12
Think Times
So now we have everything to calculate the think times required for a 50 Vuser performance test to achieve a set
throughput.
W is equal to 21.12s, and we know that the overall response time for a single user is 4.52s.
So we simply subtract the single-user response time from the response time with 50 Vusers: 21.12 − 4.52 ≈ 16.6s.
So we now know that we require 16.6s of think time in our performance-testing scripts to ensure a realistic result.
If pacing is used in place of response time plus think time, the above formula would look like this: Number of Vusers = TPS * (Pacing).
Because TPS is a rate of transactions with respect to time, it is also called throughput.
So Little’s law is
V = (R + TT) * TPS
Where V = number of users; R = average response time (as noted above, it can be pacing too); TT = think time;
TPS = throughput.
Example: if V = 100 and R = 2 sec, then 100 = (2 + TT) * TPS; hence if TT = 18, TPS = 5.
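The arithmetic above is easy to sanity-check in code; here is a small sketch of the rearranged formula TT = V / TPS − R (the class name is illustrative):

```java
public class LittlesLaw {
    // Little's Law rearranged: V = (R + TT) * TPS  =>  TT = V / TPS - R
    static double thinkTime(double vusers, double responseTime, double tps) {
        return vusers / tps - responseTime;
    }

    public static void main(String[] args) {
        // The example from the text: V = 100, R = 2 s, TPS = 5 => TT = 18 s
        System.out.println(thinkTime(100, 2.0, 5.0));
        // The 50-Vuser test above: TPS = 2.37, single-user R = 4.52 s => ~16.6 s
        System.out.printf("%.1f%n", thinkTime(50, 4.52, 2.37));
    }
}
```

Running the same function before each test is a cheap way to confirm that the chosen Vuser count, pacing and think time actually produce the target throughput.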