Thread Dump

A thread dump shows the status and stack trace of every thread running in the JVM at a given time. It is useful for troubleshooting hangs and performance issues. A heap dump shows objects in memory and can help identify memory leaks. Tools like jmap can generate thread and heap dumps. Collecting these when issues occur can provide insight for resolving server problems.

Thread Dump:

A thread dump is simply a list of all threads and the full stack trace of the code running on each thread. A Java thread dump is a way of
finding out what every thread in the JVM is doing at a particular point in time. This is especially useful if your Java application
sometimes seems to hang when running under load, as an analysis of the dump will show where the threads are stuck.

A stack trace is a dump of the current execution stack that shows the method calls running on that thread from the bottom up.

Here is an example stack trace for a thread running in WebLogic: "ExecuteThread: '2' for queue: 'weblogic.socket.Muxer'"
daemon prio=1 tid=0x0938ac90 nid=0x2f53 waiting for monitor entry [0x80c77000...0x80c78040]

at weblogic.socket.PosixSocketMuxer.processSockets(PosixSocketMuxer.java:95)
- waiting to lock <0x8d3f6df0> (a weblogic.socket.PosixSocketMuxer$1)
at weblogic.socket.SocketReaderRequest.run(SocketReaderRequest.java:29)
at weblogic.socket.SocketReaderRequest.execute(SocketReaderRequest.java:42)
at weblogic.kernel.ExecuteThread.execute(ExecuteThread.java:145)
at weblogic.kernel.ExecuteThread.run(ExecuteThread.java:117)

The key here is knowing that whatever is currently running (or waiting) is always the top method.
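The same information can be captured programmatically; below is a minimal sketch using the standard Thread.getAllStackTraces API (the output format only approximates what a real thread dump looks like):

```java
import java.util.Map;

public class ThreadDumpDemo {
    // Build a thread-dump-like listing: thread name, daemon flag, state,
    // then the stack frames top-down, mirroring what "kill -3" prints.
    static String dump() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> e : Thread.getAllStackTraces().entrySet()) {
            Thread t = e.getKey();
            sb.append('"').append(t.getName()).append("\" ")
              .append(t.isDaemon() ? "daemon " : "")
              .append(t.getState()).append('\n');
            for (StackTraceElement frame : e.getValue()) {
                // The first frame printed is the method currently executing.
                sb.append("    at ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.print(dump());
    }
}
```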

Thread Pool:

Most application servers use thread pools to manage execution tasks of a certain type. A thread pool is merely a collection of
threads set aside for a specific task.
In most cases, when someone reports that the application server isn't responding, it usually means that the application code
deployed to the application server isn't working right - and you'll need to figure out why. So you'll need to identify the thread
pool that your application code runs on and find those threads in the Java thread dump to see what's going on.
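A minimal sketch of such a pool using java.util.concurrent (the pool size and thread names here are illustrative, not any server's actual configuration); giving the workers a distinctive name makes them easy to find in a thread dump:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolDemo {
    static final AtomicInteger SEQ = new AtomicInteger();

    // A dedicated, named pool: its workers stand out in a thread dump.
    static final ExecutorService POOL = Executors.newFixedThreadPool(2, r -> {
        Thread t = new Thread(r, "app-worker-" + SEQ.incrementAndGet());
        t.setDaemon(true);   // don't keep the JVM alive for idle workers
        return t;
    });

    static String runTask() throws Exception {
        // The task simply reports which pool thread executed it.
        Future<String> f = POOL.submit(() -> Thread.currentThread().getName());
        return f.get();
    }

    public static void main(String[] args) throws Exception {
        System.out.println(runTask());   // e.g. app-worker-1
    }
}
```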

Getting a thread dump:

The easiest way to get a thread dump is to send the QUIT or BREAK signal to the Java process. On UNIX, this is done by figuring
out the PID (process ID) of the Java virtual machine and running "kill -3 <pid>".

Where you go to find the thread dump usually depends on Java implementation. For most vendors, you'll go to the "standard
out" log file. In WebLogic, it's often referred to as "weblogic.out", "nohup.out" or something you've created yourself by
redirecting standard output to a file.

A thread dump can only show the thread status at the time of measurement, so in order to see changes in thread status, it is
recommended to capture dumps 5 to 10 times at 5-second intervals.
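The repeated capture can also be automated in-process; a sketch using the standard ThreadMXBean API (the count and interval are just the values suggested above):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class PeriodicDump {
    // Capture several dumps at fixed intervals so changes in thread state
    // between snapshots become visible.
    static String[] capture(int count, long intervalMillis) throws InterruptedException {
        ThreadMXBean mx = ManagementFactory.getThreadMXBean();
        String[] dumps = new String[count];
        for (int i = 0; i < count; i++) {
            StringBuilder sb = new StringBuilder();
            for (ThreadInfo ti : mx.dumpAllThreads(false, false)) {
                sb.append(ti);   // ThreadInfo.toString() includes state and a (truncated) stack
            }
            dumps[i] = sb.toString();
            if (i < count - 1) Thread.sleep(intervalMillis);
        }
        return dumps;
    }

    public static void main(String[] args) throws InterruptedException {
        for (String d : capture(5, 5000)) System.out.println(d);
    }
}
```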

Heap Dump:

A heap dump is a snapshot of the memory of a Java process. The snapshot contains information about the Java objects and
classes in the heap at the moment the snapshot is triggered. Because there are different formats for persisting this data, there
might be some differences in the information provided. Typically, a full garbage collection is triggered before the heap dump is
written, so the dump contains information about the remaining objects in the heap.

You can use jmap to get a dump of any running process, assuming you know the PID. Use Task Manager or Resource Monitor to
get the PID. Then run "jmap -dump:format=b,file=heap.bin <pid>" to get the heap for that process.

Heap Dump extension: heap dumps (.hprof files). A heap dump does not contain allocation information; therefore you cannot
work out what created the objects or where the objects were created.

Memory leaks are notoriously hard to debug. Java, with its built-in garbage collector, handles most memory leak issues.
A true memory leak happens when objects are stored in memory but are not accessible by running code. These kinds of
inaccessible objects are handled by the Java garbage collector (in most cases). Another type of memory leak happens when we
have an unneeded reference to an object somewhere. These are not true memory leaks, as the objects are still accessible, but
nonetheless they can cause some nasty bugs.

For example, storing large objects in the session and never clearing the references. This kind of issue often goes unnoticed until we
get a large number of concurrent users and the application starts throwing out-of-memory errors. Without a load test, this will
most probably first happen in production.
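A minimal sketch of this kind of leak (all names hypothetical): a static collection that is only ever added to keeps every object reachable, so the garbage collector can never reclaim them.

```java
import java.util.ArrayList;
import java.util.List;

public class LeakDemo {
    // Hypothetical session store: entries are added per "user" but never
    // evicted, so every byte[] stays reachable and the GC cannot free it.
    static final List<byte[]> SESSION_DATA = new ArrayList<>();

    static void onUserLogin() {
        SESSION_DATA.add(new byte[1024 * 1024]);   // 1 MB "session" payload
    }

    public static void main(String[] args) {
        for (int i = 0; i < 100; i++) onUserLogin();
        // With enough concurrent users this pattern ends in OutOfMemoryError;
        // the fix is to remove entries on logout or session timeout.
        System.out.println(SESSION_DATA.size() + " sessions retained");
    }
}
```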
One way to find memory leaks is analyzing heap dumps. There are several ways to get a heap dump (not including 3rd party
tools):

1. HeapDumpOnCtrlBreak
Add this option (-XX:+HeapDumpOnCtrlBreak) when starting the JVM. It creates a heap dump every time a Ctrl+Break (kill -3) signal is sent to the JVM. HeapDumpPath
is used to set the location of the heap dumps.

2. HeapDumpOnOutOfMemoryError
This JVM option (-XX:+HeapDumpOnOutOfMemoryError) creates a heap dump every time your application throws an OutOfMemoryError. HeapDumpPath is
used to set the location of the heap dumps.

3. jmap
jmap is a tool that ships with the JDK. To use it we need the PID of our Java process.
Then we can use it like this:
jmap -dump:file=D:\temp\heapdumps\dump.bin 1234

4. Jmap (from the application)


We can also invoke jmap from our code. To get the PID from code we can use java.lang.management.ManagementFactory:

String name = ManagementFactory.getRuntimeMXBean().getName();
String pid = name.substring(0, name.indexOf("@"));

After that we can start the jmap process like this:

String[] cmd = { "jmap", "-dump:file=D:\\temp\\heapdumps\\dump.bin", pid };
Process p = Runtime.getRuntime().exec(cmd);

5. HotSpotDiagnosticMXBean
This option is available in Java 1.6+ and uses the internal sun.management.ManagementFactory API:
ManagementFactory.getDiagnosticMXBean().dumpHeap("D:\\temp\\heapdumps\\dump.bin", true);
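On current HotSpot JVMs the same dump can be taken through the supported com.sun.management.HotSpotDiagnosticMXBean interface instead of the internal sun.management class; a sketch:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import com.sun.management.HotSpotDiagnosticMXBean;

public class HeapDumper {
    // On HotSpot JVMs the diagnostic bean is registered under this name.
    private static final String BEAN = "com.sun.management:type=HotSpotDiagnostic";

    static void dumpHeap(String path, boolean liveOnly) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        HotSpotDiagnosticMXBean bean = ManagementFactory.newPlatformMXBeanProxy(
                server, BEAN, HotSpotDiagnosticMXBean.class);
        // liveOnly=true dumps only reachable objects (a GC runs first);
        // the file must not already exist or dumpHeap throws an exception.
        bean.dumpHeap(path, liveOnly);
    }

    public static void main(String[] args) throws Exception {
        dumpHeap("dump.hprof", true);
        System.out.println("heap dump written");
    }
}
```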
How to collect server hang or performance issue information for OMNIbus WebGUI:

Heap and thread dumps are essential when troubleshooting memory leak issues. The heap dump provides a
snapshot of the JVM's memory. The thread dump provides information about the running threads; it can tell us
which thread may be hanging the application, and helps troubleshoot high CPU usage.

Ways to setup for collecting heap dumps

A. Environment variables

Exact steps for configuring your JVM to produce a heap dump may vary by JVM; for precise instructions, please consult your JVM
vendor's documentation. Below we describe typical settings. This information is also provided in the JProbe Reference Guide.

Environment variables control the IBM JVM's heap dump behavior. To enable the JVM to produce heapdumps on demand, you
may need to set either, or both, of the following environment variables:

IBM_HEAP_DUMP=true

IBM_HEAPDUMP=true

If you want to have the JVM create a heap dump file and/or a java core automatically on an Out-of-Memory (OOM) condition,
set the following additional environment variables:

IBM_HEAPDUMP_OUTOFMEMORY=true

IBM_JAVACORE_OUTOFMEMORY=true

Note that more recent IBM JVMs set these latter two environment variables automatically. If you want to disable the automatic
heapdump and/or javacore on OOM, you can set either or both of these environment variables to 'false'.

You may also wish to specify the heapdump location with an optional environment variable:

IBM_HEAPDUMPDIR=<directory>

B. JVM startup options

Before starting the JVM, this can be achieved with the JAVA_DUMP_OPTS environment variable, for example:

SET JAVA_DUMP_OPTS=ONANYSIGNAL(SYSDUMP[3],HEAPDUMP[3])

Note that this method does not allow modification of the maximum number of dumps once the process is started.

C. wsadmin

For example, you might have a scenario where one of your applications is frequently running into an OutOfMemory, but the
Application Server keeps running. In this case, you may want to stop the JVM from writing heapdumps at each OOM, without
restarting the Application Server.

To generate a heap dump while the application server is running, run the command below:

wsadmin>AdminControl.invoke(jvm, "generateHeapDump")
'C:\\WebSphere\\AppServer\\profiles\\AppSrv01\\.\\heapdump.20110328.162836.1880.0001.phd'
Ways to setup for collecting thread dumps

For Linux (server)

In the case of a server crash (that is, the application server spontaneously dies), look for a javacore file. The JVM creates the file in the product
directory structure, with a name like javacore.[number].txt.

In case of a server hang, you can force an application to create a thread dump (or javacore).

Using the wsadmin command prompt, get a handle to the problem application server:

wsadmin>set jvm [$AdminControl completeObjectName type=JVM, process=server1,*]

Generate the thread dump:

wsadmin>$AdminControl invoke $jvm dumpThreads

Look for an output file in the installation root directory with a name like javacore.date.time.id.txt.


Fix Memory Leaks in Java Production Applications


Adding more memory to your JVMs (Java Virtual Machines) might be a temporary solution to fixing memory leaks in Java
applications, but it for sure won’t fix the root cause of the issue. Instead of crashing once per day it may just crash every other
day. “Preventive” restarts are also just another desperate measure to minimize downtime – but – let’s be frank: this is not how
production issues should be solved.

One of our customers – a large online retail store – ran into such an issue. They run one of their online gift card self-service
interfaces on two JVMs. Especially during peak holiday seasons – when users are activating their gift cards or checking the
balance – crashes due to OOM (Out Of Memory) were more frequent which caused bad user experience. The first “measure”
they took was to double the JVM Heap Size. This didn’t solve the problem as JVMs were still crashing, so they followed the
memory diagnostics approach for production as explained in Java Memory Leaks to identify and fix the root cause of the
problem.

Before we walk through the individual steps, let’s look at the memory graph that shows the problems they had in December
during the peak of the holiday season. The problem persisted even after increasing the memory. They could fix the problem
after identifying the real root cause and applying specific configuration changes to a 3rd party software component:
Only after identifying the actual root cause and applying the necessary configuration changes did the memory leak issue go away.
Increasing memory was not even a temporary solution that worked.

Step 1: Identify a Java Memory Leak


The first step is to monitor the JVM/CLR Memory Metrics such as Heap Space. This will tell us whether there is a potential
memory leak. In this case we see memory usage constantly growing resulting in an eventual runtime crash when the memory
limit is reached.

Java Heap Size of both JVMs showed significant growth starting Dec 2nd and Dec 4th resulting in a crash on Dec 6th for both
JVMs when the 512MB Max Heap Size was exceeded.

Step 2: Identify problematic Java Objects


The out-of-memory exception automatically triggers a full memory dump that allows for analysis of which objects consumed
the heap and are most likely to be the root cause of the out-of-memory crash. Looking at the objects that consumed most of
the heap below indicates that they are related to a 3rd party logging API used by the application.
Sorting by GC (Garbage Collection) Size and focusing on custom classes (instead of system classes) shows that 80% of the heap
is consumed by classes of a 3rd party logging framework

A closer look at an instance of VPReportEntry4 shows that it contains 5 Strings - with one consuming 23KB (compared to
several bytes for the other String objects). This also explains the high GC size of the String class in the overall heap dump.

Individual very large String objects as part of the ReportEntry object

Following the referrer chain further up reveals the complete picture. The EventQueue keeps LogEvents in an Array which itself
keeps VPReportEntrys in an Array. All of these objects seem to be kept in memory as the objects are being added to these
arrays but never removed and therefore not garbage collected:
Following the referrer tree reveals that global EventQueue objects hold on to the LogEvent and VPReportEntry objects in array
lists which are never removed from these arrays
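The pattern described can be sketched as follows (class and method names are illustrative, not the vendor's actual code): events are appended to a queue, and only a drain step makes them collectable. If logBatch is never invoked, the queue grows without bound.

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class EventQueueDemo {
    // A producer adds log events; a batch writer is supposed to drain them.
    // If the writer never runs, every event stays reachable from this queue
    // and the heap eventually fills up.
    static final Queue<String> EVENTS = new ArrayDeque<>();

    static void log(String entry) {
        EVENTS.add(entry);                       // "add" side of the leak
    }

    static int logBatch() {
        int sent = 0;
        String e;
        while ((e = EVENTS.poll()) != null) {    // poll() removes the event,
            sent++;                              // making it garbage-collectable
        }
        return sent;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 1000; i++) log("event-" + i);
        System.out.println("drained " + logBatch() + " events");
    }
}
```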

Step 3: Who allocates these objects?


Analyzing object allocation allows us to figure out which part of the code is creating these objects and adding them to the
queue. Creating what is called a “Selective Memory Dump” when the application reached 75% Heap Utilization showed the
customer that the ReportWriter.report method allocated these entries and that they have been “living” on the heap for quite a
while.

It is the report method that allocates the VPReportEntry objects which stay on the heap for quite a while

Step 4: Why are these objects not removed from the Heap?
The premise of the 3rd party logging framework is that log entries will be created by the application and written in batches at
certain times by sending these log entries to a remote logging service using JMS. The memory behavior indicates that – even
though these log entries might be sent to the service, these objects are not always removed from the EventQueue leading to
the out-of-memory exception.

Further analysis revealed that the background batch writer thread calls a logBatch method which loops through the event
queue (calling EventQueue.next) to send the current log events in the queue. The question is whether as many messages were
taken out of the queue (using next) as were put into the queue (using add), and whether the batch job is really called frequently
enough to keep up with the incoming event entries. The following chart shows the method executions of add, as well as the calls
to logBatch, highlighting that logBatch is actually not called frequently enough and therefore next is not removing
messages from the queue:
The highlighted area shows that messages are put into the queue but not taken out because the background batch job is not
executed. Once this leads to an OOM and the system restarts it goes back to normal operation but older log messages will be
lost.

Step 5: Fixing the Java Memory Leak problem


After providing this information to the 3rd party provider and discussing the number of log entries and their system
environment, the conclusion was that our customer used a special logging mode that was not supposed to be used in high-load
production environments - comparable to running with DEBUG log level in production. This overwhelmed
the remote logging service, which is why the batch logging thread was stopped and log events remained in the EventQueue
until the out-of-memory occurred.

After making the recommended changes the system could again run with the previous heap memory size without experiencing
any out-of-memory exceptions.

The Memory Leak issue has been solved and the application now runs even with the initial 512MB Heap Space without any
problem.

They still use the same dashboards they have built to troubleshoot this issue, to monitor for any future excessive logging
problems.
These dashboards allow them to verify that the logging framework can keep up with log messages after they applied the
changes.

Conclusion
Adding additional memory to crashing JVMs is most often not even a temporary fix. If you have a real Java memory leak, it will just
take longer until the Java runtime crashes, and larger heaps incur more garbage-collection overhead.
The real answer to this is to use the simple approach explained here. Look at the memory metrics to identify whether you have
a leak or not. Then identify which objects are causing the issue and why they are not collected by the GC. Working with
engineers or 3rd party providers (as in this case) will help you find a permanent solution that allows you to run the system
without impacting end users and without additional resource requirements.

Citrix:

ctrx_sync_on_window ("Document1 – Microsoft Word", ACTIVATE, x, y, h, w, "snapshot", CTRX_LAST);

ctrx_sync_on_window ("Rational ClearCase Explorer - {UserID4_3635}_view (V:\\{UserID4_3635}_view\\", ACTIVATE, -4, -4, 809, 609, "snapshot36", CTRX_LAST);

ctrx_key ("f", MODIF_ALT, "", CTRX_LAST); → Press Alt+f.

ctrx_type ("x", "", CTRX_LAST); → Press "x".

ctrx_type ("prashant", "", CTRX_LAST); → Type any text.

ctrx_type ("", "", CTRX_LAST); → Press Space-Bar to select any checkbox.

ctrx_disconnect_server ("", CTRX_LAST);

ctrx_key ("ENTER_KEY", 0, "", CTRX_LAST);

ctrx_key ("TAB_KEY", 0, "", CTRX_LAST);

ctrx_key ("TAB_KEY", MODIF_CONTROL); → Press Ctrl+Tab.

ctrx_mouse_click (x, y, LEFT_BUTTON, 0, "window_name=snapshot", CTRX_LAST); → Mouse click.

ctrx_wait_for_event ("Window_Name", CTRX_LAST);

ctrx_key ("DOWN_ARROW_KEY", 0, "", CTRX_LAST);

ctrx_key ("UP_ARROW_KEY", 0, "", CTRX_LAST);

ctrx_key ("RIGHT_ARROW_KEY", 0, "", CTRX_LAST);

ctrx_key ("LEFT_ARROW_KEY", 0, "", CTRX_LAST);

Paste the following code into the default.cfg file of your Citrix scripts:

[CITRIX]

DesktopColors=16

Colors=High Color (16 bit)

Encryption=Use Server Default

Window=800 x 600

Latency=Use Server Default

Compression=1

Cache=0

Queue=0

Sound=Use Server Default

BitmapSyncLevel=Exact

Additional steps you need to take while creating Citrix scripts

web_url("launcher.aspx",

"URL=https://gpsii.novartis.net/Citrix/GPSII/site/launcher.aspx?
CTX_Application=Citrix.MPS.App.CTX_V4P_EU.GPSII%20Start%20Menu%20PHCHBS-
T3635&CTX_Token={CitrixXenApp_Session_Token}&LaunchId=1363954877969",

"TargetFrame=",
"Resource=0",

"RecContentType=text/html",

"Referer=https://gpsii.novartis.net/Citrix/GPSII/site/default.aspx?CTX_CurrentFolder=%5cLoad%20Test
%20GPSII",

"Snapshot=t21.inf",

"Mode=HTML",

EXTRARES,

"URL=../media/LaunchSpinner.gif", "Referer=https://gpsii.novartis.net/Citrix/GPSII/site/default.aspx?
CTX_CurrentFolder=%5cLoad%20Test%20GPSII", ENDITEM,

LAST);

web_reg_save_param("icadata","LB=[Encoding]","RB=",LAST);
web_url("appembed.aspx", "URL=https://gpsii.novartis.net/Citrix/GPSII/site/appembed.aspx?
CTX_Application=Citrix.MPS.App.CTX_V4P_EU.GPSII%20Start%20Menu%20PHCHBS-
T3635&CTX_AppFriendlyNameURLENcoded=GPSII%20Start%20Menu%20PHCHBS-
T3635&CTX_Token={CitrixXenApp_Session_Token}&LaunchId=1363954877969&CTX_WindowWidth=1280&CTX_WindowHeigh
t=800&Title=GPSII%20Start%20Menu%20PHCHBS-T3635",

"TargetFrame=",

"Resource=0",

"RecContentType=text/html", "Referer=https://gpsii.novartis.net/Citrix/GPSII/site/launcher.aspx?
CTX_Application=Citrix.MPS.App.CTX_V4P_EU.GPSII%20Start%20Menu%20PHCHBS-
T3635&CTX_Token={CitrixXenApp_Session_Token}&LaunchId=1363954877969",

"Snapshot=t22.inf",

"Mode=HTML",

EXTRARES, "URL=launch.ica?CTX_UID=1&CTX_Application=Citrix.MPS.App.CTX_V4P_EU.GPSII
%20Start%20Menu%20PHCHBS-T3635&CTX_Token={CitrixXenApp_Session_Token}&LaunchId=1363954877969",
ENDITEM,LAST);

char *icafile, *tmp;
char filename[1024];
long fp; /* LoadRunner scripts typically declare file handles as long */

lr_output_message(lr_eval_string("{icadata}"));

icafile = (char *)malloc(strlen(lr_eval_string("{icadata}")) + 100);
sprintf(icafile, "[Encoding]%s", lr_eval_string("{icadata}"));

tmp = (char *)getenv("TEMP");
sprintf(filename, "%s\\%s.%s", tmp, lr_eval_string("{ICANAME}"), "ica");

fp = fopen(filename, "w");
fprintf(fp, "%s", icafile); /* "%s" guards against '%' characters in the ICA data */
fclose(fp);

ctrx_set_connect_opt(ICAFILE, filename);

Useful links:

http://wendyliu327.wordpress.com/2010/06/28/configurationtroubleshooting-the-setup-of-controller-and-load-generator-machines/ → MI Listener

http://www.teamquest.com/pdfs/whitepaper/tqwp23.pdf

An MI Listener is used when carrying out performance testing over a firewall. The common port 80 is blocked when a
firewall is used, so for communication between the Controller and the load generator, an MI Listener is used along with MOFW. Using
both of these together, communication is done through port 443.

What is the difference between thread and process?


1. Both processes and threads are independent sequences of execution. The typical difference is that threads (of the
same process) run in a shared memory space, while processes run in separate memory spaces.
2. New threads are easily created; new processes require duplication of the parent process.
3. Threads can exercise considerable control over threads of the same process; processes can only exercise control over
child processes.
4. Threads have direct access to the data segment of their process; processes have their own copy of the data segment of
the parent process.
5. Threads are used for small tasks, whereas processes are used for more ‘heavyweight’ tasks – basically the execution
of applications.
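Point 1 can be demonstrated directly in Java: two threads mutate one shared counter object, something separate processes could not do without explicit inter-process communication. A minimal sketch:

```java
public class SharedMemoryDemo {
    // Threads of one process share the same heap: both workers see and
    // mutate the same static counter. Separate processes would each get
    // their own copy of this data.
    static int counter = 0;
    static final Object LOCK = new Object();

    static int runTwoThreads() throws InterruptedException {
        Runnable inc = () -> {
            for (int i = 0; i < 10000; i++) {
                synchronized (LOCK) { counter++; }   // lock prevents lost updates
            }
        };
        Thread a = new Thread(inc), b = new Thread(inc);
        a.start(); b.start();
        a.join(); b.join();
        return counter;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTwoThreads()); // 20000: both threads updated shared state
    }
}
```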

Determining the Maximum Number of Vusers per Load Generator:


1. This formula gives you the number of Vusers the CPU can sustain (limited to 70% so as not to overuse the CPU):
Number of Vusers per CPU =
70% x (Number of core processors) / (Vuser average CPU usage)
2. This formula gives you the number of Vusers the memory can accommodate:
Number of Vusers per Memory =
((Total GB RAM of Load Generator) – (GB RAM for the OS and other processes)) / (Vuser Peak Memory Usage)
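The two formulas can be turned into a quick calculation; the figures below are hypothetical examples, not recommendations:

```java
public class Capacity {
    // Number of Vusers per CPU = 70% x cores / average CPU usage per Vuser
    static long vusersByCpu(int cores, double avgCpuPerVuser) {
        return Math.round(0.70 * cores / avgCpuPerVuser);
    }

    // Number of Vusers per memory =
    // (total RAM - RAM for OS and other processes) / peak memory per Vuser (all in GB)
    static long vusersByMemory(double totalGb, double osGb, double peakGbPerVuser) {
        return Math.round((totalGb - osGb) / peakGbPerVuser);
    }

    public static void main(String[] args) {
        // Hypothetical generator: 4 cores, each Vuser averaging 5% of a CPU,
        // 16 GB RAM with 4 GB reserved for the OS, 0.05 GB peak per Vuser.
        long byCpu = vusersByCpu(4, 0.05);        // 0.70 * 4 / 0.05 -> 56
        long byMem = vusersByMemory(16, 4, 0.05); // (16 - 4) / 0.05 -> 240
        // The generator sustains the smaller of the two limits.
        System.out.println(Math.min(byCpu, byMem) + " Vusers");
    }
}
```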
Systematic tuning follows these steps:

1. Assess the problem and establish numeric values that categorize acceptable behavior.
2. Measure the performance of the system before modification.
3. Identify the part of the system that is critical for improving the performance. This is called the bottleneck.
4. Modify that part of the system to remove the bottleneck.
5. Measure the performance of the system after modification.
6. If the modification makes the performance better, adopt it. If the modification makes the performance worse, put it
back the way it was.

Server Throughput = # Concurrent Users / (Avg. Response Time + Think Time)
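A worked example of this formula (the numbers are hypothetical):

```java
public class Throughput {
    // Server Throughput = concurrent users / (avg response time + think time),
    // with both times in seconds, giving requests per second.
    static double perSecond(int users, double avgResponseSec, double thinkSec) {
        return users / (avgResponseSec + thinkSec);
    }

    public static void main(String[] args) {
        // 100 concurrent users, 2 s average response time, 8 s think time:
        // each user issues one request every 10 s, so the server sees
        // 100 / 10 = 10 requests per second.
        System.out.println(perSecond(100, 2.0, 8.0) + " requests/sec"); // 10.0
    }
}
```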

Hard Disk vs RAM:


1. The Hard Disk is used for long-term storage of work, while RAM is used to store your current work.

2. The Hard Disk holds the original copy of a program permanently; when you want to use a program, a temporary copy is put into RAM, and that's the copy you use.

3. When working on a file, the original file is left untouched on the Hard Disk until you do a "save"; the "save" copies the new version of the file from RAM onto the Hard Disk (and usually replaces the original file). The file you are modifying, plus all the changes you make, are kept in RAM until you do a "save".

Virtual Memory and Paging:


Virtual Memory is an essential part of all operating systems. As we saw above, RAM stores information about all the programs currently running on your desktop. If you open a program when RAM is full, your OS will try to locate programs in RAM which are not currently in use. It will then transfer those programs to an area of the hard disk, so that space is freed in RAM for your new programs to run. So effectively, although there was no space in RAM, your OS created memory space with the help of your hard disk. This memory is called Virtual Memory. The area of the hard disk where the RAM image is copied is known as the page file, and the process as paging.

You might ask why we can't eliminate the use of the hard disk or RAM, given the above scenario. Here is a beautiful explanation of this, from the source cited below.

The read/write speed of a hard drive is much slower than RAM, and the technology of a hard drive is not geared toward accessing small pieces of data at a time. If your system has to rely too heavily on virtual memory, you will notice a significant performance drop. The key is to have enough RAM to handle everything you tend to work on simultaneously; then, the only time you "feel" the slowness of virtual memory is when there's a slight pause when you're changing tasks. When that's the case, virtual memory is perfect.

When it is not the case, the operating system has to constantly swap information back and forth between RAM and the hard disk. This is called thrashing, and it can make your computer feel incredibly slow.

Memory usage:

In terms of Load Runner, you should ensure that the Commit Charge is always less than Physical Memory (RAM)
on your load generator machines, so that minimal paging is required.

What is memory leak?

A memory leak is a particular type of unintentional memory consumption by a computer program where the program fails to release memory when it is no longer needed. This condition is normally the result of a bug in a program that prevents it from freeing up memory that it no longer needs. This term has the potential to be confusing, since memory is not physically lost from the computer. Rather, memory is allocated to a program, and that program subsequently loses the ability to access it due to program logic flaws.

What is a page fault?

An interrupt that occurs when a program requests data that is not currently in real memory. The interrupt triggers the operating system to fetch the data from virtual memory and load it into RAM.

An invalid page fault or page fault error occurs when the operating system cannot find the data in virtual memory. This usually happens when the virtual memory area, or the table that maps virtual addresses to real addresses, becomes corrupt.

Now the most important question comes up: how do they affect Load Runner functioning?

As you might guess, a memory leak, if left unattended and not corrected, could prove to be fatal. Memory leaks can be found by running tests for a long duration (say about an hour) and continuously checking memory usage.

Issues caused by memory leaks essentially depend on two variables for a standalone Windows application: 1) frequency of usage and 2) size of the memory leak. If either one or both are very high, the computer might reach a point where no memory is available for other applications, which could lead to a computer crash. If it is a network-based application, then you will also have to consider network traffic. If each network transaction causes a memory leak, then a high volume of network transactions could also prove dangerous.

How to calculate memory requirement for Vusers


The memory requirement, also called the memory footprint, of a Vuser is a function of the Vuser type (Web, Citrix, SAP GUI), the application, and the system where you intend to run the test.

So if your Web/HTTP Vuser takes 5MB when running as a process, 50 Vusers would take 250 MB. Then the remote desktop connection, third-party programs (continuously running in the background), and the memory consumed by the application, all summed, will give you the memory requirement for a test.

What is the difference between a process and a thread?


A process is defined as the virtual address space and the control information necessary for the execution of a program, while threads are a way for a program to split itself into two or more simultaneously running tasks. In general, a thread is contained inside a process, and different threads in the same process share some resources while different processes do not.

In terms of Load Runner, when we run Vusers as a process, Load Runner creates 1 process called mmdrv.exe per Vuser. So if we have 10 Vusers, we will have 10 mmdrv.exe processes on our machines.

When we run Vusers as a thread, Load Runner creates 1 thread per Vuser. So if we have 10 Vusers, then we will have 1 process with 10 threads running inside it (if the limit is 10 threads per process).

Running Vusers as threads is more memory-efficient than running Vusers as processes, for the obvious reason that fewer memory resources are used when we run them as threads.

Difference between concurrent and simultaneous Vusers

All the Vusers in a particular scenario are called concurrent Vusers. They may or may not perform the same tasks. On
the other hand, simultaneous Vusers have more to do with rendezvous points. When we set rendezvous points, we
instruct the system to wait until a certain number of Vusers arrive so that they can all do a particular task simultaneously.
These Vusers performing the same task at the same time are called simultaneous Vusers.
Monitors: What metrics/counters to monitor for Windows System Resources?
Processor (_Total)\% Processor Time
Processor (_Total)\% Privileged Time
Processor (_Total)\% User Time
Processor (_Total)\Interrupts/sec
Process (instance)\% Processor Time
Process (instance)\Working Set
Physical Disk (instance)\Disk Transfers/sec
System\Processor Queue Length
System\Context Switches/sec
Memory\Pages/sec
Memory\Available Bytes
Memory\Cache Bytes
Memory\Transition Faults/sec

String Functions: C Virtual User Function

1. lr_eval_string --> Replaces a parameter with its current value.

2. lr_save_string --> Saves a null-terminated string to a parameter.

3. lr_save_var --> Saves a variable length string to a parameter.

4. lr_save_datetime --> Saves the current date and time to a parameter.

5. lr_decrypt --> Decrypts an encoded string.

Message Functions:

1. lr_debug_message --> Sends a debug message to the Output window or the Business
Process Monitor log files.

2. lr_error_message --> Sends an error message to the Output window or the Business Process
Monitor log files.

3. lr_log_message --> Sends a message to a log file.

4. lr_output_message --> Sends a message to the Output window or the Business Process
Monitor log files.

5. lr_message --> Sends a message to the Vuser log and Output window or the Business
Process Monitor log files.

Run-Time Functions

1. lr_load_dll --> Loads an external DLL.

2. lr_peek_events --> Indicates where a Vuser script can be paused.


3. lr_think_time --> Pauses script execution to emulate think time—the time a real user pauses to
think between actions.

4. lr_continue_on_error --> Specifies an error handling method.

5. lr_rendezvous --> Sets a rendezvous point in a Vuser script. Not applicable for Application
Management.

Transaction Functions:

1. lr_end_sub_transaction --> Marks the end of a sub-transaction for performance analysis.

2. lr_end_transaction --> Marks the end of a transaction.

3. lr_end_transaction_instance --> Marks the end of a transaction instance for performance analysis.

4. lr_fail_trans_with_error --> Sets the status of open transactions to LR_FAIL and sends an error
message.

5. lr_get_trans_instance_duration --> Gets the duration of a transaction instance specified by its handle.

6. lr_get_trans_instance_wasted_time --> Gets the wasted time of a transaction instance by its handle.

7. lr_get_transaction_duration --> Gets the duration of a transaction by its name.

8. lr_get_transaction_think_time --> Gets the think time of a transaction by its name.

9. lr_get_transaction_wasted_time --> Gets the wasted time of a transaction by its name.

10. lr_resume_transaction --> Resumes collecting transaction data for performance analysis.

11. lr_resume_transaction_instance --> Resumes collecting transaction instance data for performance
analysis.

12. lr_set_transaction_instance_status --> Sets the status of a transaction instance.

13. lr_set_transaction_status --> Sets the status of open transactions.

14. lr_set_transaction_status_by_name --> Sets the status of a transaction.

15. lr_start_sub_transaction --> Marks the beginning of a subtransaction.

16. lr_start_transaction --> Marks the beginning of a transaction.

17. lr_start_transaction_instance --> Starts a nested transaction specified by its parent’s handle.

18. lr_stop_transaction --> Stops the collection of transaction data.

19. lr_stop_transaction_instance --> Stops collecting data for a transaction specified by its handle.

20. lr_wasted_time --> Removes wasted time from all open transactions.

Application Performance

Application performance is an area of increasing importance. We are building bigger and bigger applications, and the functionality of today's applications is increasingly powerful. At the same time we use highly distributed, large-scale architectures that integrate external services as central pieces of the application landscape.

When discussing performance we must carefully distinguish two concerns: response time and scalability. (People often say that "the performance of the application is bad", but does that mean response times are too high, or that the application cannot scale beyond some number of concurrent users?) Separating them is not always easy, because the two are interrelated and one can, and often does, affect the other.

What is Heap space in Java?


When a Java program starts, the Java Virtual Machine (JVM) obtains some memory from the operating system. The JVM uses this memory for all its needs, and part of this memory is called the Java heap. The heap is generally located at the bottom of the address space and grows upwards. Whenever we create an object using the new operator (or by any other means), the object is allocated memory from the heap, and when the object dies and is garbage collected, the memory goes back to the heap.
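
The allocation behaviour described above can be observed from inside a program. Here is a minimal sketch (the class name HeapInfo is ours, purely for illustration) using only java.lang.Runtime:

```java
// Querying the JVM's heap figures at runtime; class name HeapInfo is illustrative.
public class HeapInfo {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("max heap   (-Xmx bound): " + rt.maxMemory());
        System.out.println("total heap (reserved)  : " + rt.totalMemory());
        System.out.println("free heap  (unused)    : " + rt.freeMemory());

        // Objects created with 'new' are allocated from this heap:
        byte[] buffer = new byte[1024 * 1024]; // 1 MB taken from heap space
        System.out.println("allocated " + buffer.length + " bytes on the heap");
    }
}
```

Running the same program with different -Xmx values shows maxMemory() change accordingly.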

Read more: http://javarevisited.blogspot.com/2011/05/java-heap-space-memory-size-jvm.html

Java Heap and Garbage Collection


As we know, objects are created inside heap memory, and garbage collection is the process that removes dead objects from the Java heap and returns the memory to it. For the sake of garbage collection, the heap is divided into three main regions: the New (Young) Generation, the Old (Tenured) Generation, and the Perm space. The New Generation is the part of the heap where newly created objects are stored. Over the course of the application many objects are created and die, but those that remain live are moved to the Old Generation by the garbage collector during a major (full) garbage collection. The Perm space is where the JVM stores metadata about classes and methods, the String pool, and class-level details.


OutOfMemoryError in Java Heap


When the JVM starts, the heap is equal to the initial size specified by the -Xms parameter. As the application creates more objects, the heap is expanded to accommodate them, and the JVM periodically runs the garbage collector to reclaim memory from dead objects. The JVM can expand the heap up to the maximum size specified by -Xmx; if there is no memory left for creating new objects, the JVM throws java.lang.OutOfMemoryError and the application dies. Before throwing the error, the JVM tries to run the garbage collector to free any available space; only if not enough space can be reclaimed does the OutOfMemoryError result. To resolve this error you need to understand your application's object profile: what kinds of objects you are creating and how much memory each kind takes. You can use a profiler or heap analyzer to troubleshoot OutOfMemoryError in Java. The message "java.lang.OutOfMemoryError: Java heap space" denotes that the Java heap has insufficient space and cannot be expanded further, while "java.lang.OutOfMemoryError: PermGen space" appears when the permanent generation is full and the application fails to load a class or to allocate an interned string.


Java Heap dump


A Java heap dump is a snapshot of the Java heap memory at a particular point in time. It is very useful for analyzing or troubleshooting a memory leak or a java.lang.OutOfMemoryError. The JDK ships with tools that help you take a heap dump, and there are analyzer tools that help you examine it. You can use the "jmap" command to create a heap dump file, and then "jhat" (the Java Heap Analysis Tool) to analyze it.


How to increase the size of the Java Heap


The default heap size is 128 MB on most 32-bit Sun JVMs, but it varies greatly from JVM to JVM. For example, the default starting and maximum heap sizes for the 32-bit Solaris Operating System (SPARC Platform Edition) are -Xms 3670k and -Xmx 64m, and the default heap size values on 64-bit systems are increased by approximately 30%. If you are using the throughput garbage collector in Java 1.5, the default maximum heap size is physical memory / 4 and the default initial heap size is physical memory / 16. Another way to find the default heap size of a JVM is to start an application with the default heap parameters and monitor it using JConsole, which is available from JDK 1.5 onwards; on the VM Summary tab you will see the maximum heap size.

You can increase the Java heap size based on your application's needs, and I always recommend this over relying on the default JVM heap values. If your application is large and creates lots of objects, change the heap size using the JVM options -Xms and -Xmx: -Xms denotes the starting size of the heap, while -Xmx denotes its maximum size. There is another parameter, -Xmn, which denotes the size of the new generation of the Java heap. Note that you cannot change the size of the heap dynamically; you can only provide the heap size parameters when starting the JVM.

I have shared some more useful JVM options related to Java heap space and garbage collection in my post "10 JVM options a Java programmer must know", which you may find useful.

Update:

Regarding default heap sizes in Java: from Java 6 update 18 there are significant changes in how the JVM calculates the default heap size on 32-bit and 64-bit machines and in client and server JVM modes:

1) The initial heap space and maximum heap space are larger, for improved performance.

2) The default maximum heap space is 1/2 of physical memory for machines with up to 192 MB of memory, and 1/4 of physical memory for machines with up to 1 GB; so for a 1 GB machine the maximum heap size is 256 MB. The maximum heap size will not be used until the program creates enough objects to fill the initial heap space, which is much smaller but at least 8 MB or 1/64th of physical memory up to 1 GB.

3) For the server Java virtual machine, the default maximum heap space is 1 GB for 4 GB of physical memory on a 32-bit JVM; for a 64-bit JVM it is 32 GB for 128 GB of physical memory.
Reference: http://www.oracle.com/technetwork/java/javase/6u18-142093.html


Difference between Stack and Heap in Java


Here are a few differences between stack and heap memory in Java:

1) The main difference between heap and stack is that stack memory is used to store local variables and function call frames, while heap memory is used to store objects. No matter where an object is created in code, e.g. as a member variable, local variable, or class variable, it is always allocated in heap space.

2) Each thread in Java has its own stack, whose size can be specified using the -Xss JVM parameter; similarly, you can specify the heap size of a Java program using the JVM options -Xms (starting size) and -Xmx (maximum size).

3) If there is no stack memory left for storing a function call or local variable, the JVM throws java.lang.StackOverflowError, while if there is no heap space left for creating an object, the JVM throws java.lang.OutOfMemoryError: Java heap space.

4) If you use recursion, in which a method calls itself, you can quickly fill up the stack. Another difference between stack and heap is that stack memory is much smaller than heap memory.

5) Variables stored on a stack are visible only to the owning thread, while objects created on the heap are visible to all threads. In other words, stack memory is a kind of private memory for each Java thread, while heap memory is shared among all threads.

That's all on the difference between stack and heap memory in Java.
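
Point 3 above can be demonstrated with a short sketch (class and method names are ours): unbounded recursion exhausts the per-thread stack long before the shared heap fills.

```java
// Deep recursion fills the per-thread stack, producing StackOverflowError.
public class StackDemo {
    static int depth = 0;

    static void recurse() {
        depth++;     // each call pushes one more frame onto this thread's stack
        recurse();
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            // the stack filled up while the heap was barely touched
            System.out.println("stack exhausted after " + depth + " frames");
        }
    }
}
```

A larger -Xss value lets the recursion go deeper before the error is thrown.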


Read more: http://javarevisited.blogspot.com/2013/01/difference-between-stack-and-heap-java.html

How can you retrieve additional attribute values from run-time settings in LoadRunner
VuGen?
Additional attribute values from the run-time settings can be retrieved using the function lr_get_attrib_<type of attribute>. To retrieve a string-typed attribute, use lr_get_attrib_string. The complete syntax for retrieving an attribute value into a string is:
char* sHostName; /* define the variable sHostName */
sHostName = lr_get_attrib_string("aHostName");
Note: Here aHostName is the additional attribute name defined in the run-time settings.
What function is used for converting a string variable value to a parameter in LoadRunner
VuGen?
Using variables directly in a VuGen test script is a little difficult and involves a lot of coding, so to avoid using variable values directly in the script we convert them to parameters and use the parameters instead. Converting a string variable value to a parameter is done with lr_save_<type of variable>; for a string variable, use:
lr_save_string(sHostName, "pHostName");
Note: In the above case sHostName is a variable and pHostName is a parameter.
What function is used for converting a parameter value to a string value in LoadRunner
VuGen?
The function is lr_eval_<data type>. To convert a parameter value to a string, use:
sHostName = lr_eval_string("{pHostName}");
Note: In this case sHostName is a variable and pHostName is a parameter.

What types of load testing or performance testing can be done using LoadRunner?
The types of load testing supported by LoadRunner are:
Load Testing: the load can be simulated for multiple simultaneous users, similar to a realistic number of users.
Endurance / Longevity / Soak Testing: the load test can be run for longer durations.
Stress Testing: the load can be simulated with an incrementally increasing number of users until the system's stress point is reached.
Spike Testing: during load test execution, spikes can be introduced by suddenly adding users.
Capacity Testing: the system's capacity can be tested against future growth in the business. For example, if the current AUT configuration supports 1,000 users and business expansion will raise that to 10,000, does the existing environment cope, or must new systems be procured? The decision can be informed by conducting load tests with LoadRunner.
Scalability Testing: scalability can be compared using some of the auto-generated graphs, such as average response times under load.
Monitoring JVM Memory usage inside a Java Application
How to implement a JVM memory usage threshold warning without hurting application performance.

We can get the current memory usage of the JVM using the Runtime.getRuntime().totalMemory() method, so one way to monitor memory consumption would be to create a separate thread that calls this method every second and checks whether totalMemory() exceeds what we expect. But this would harm application performance, because each call to totalMemory() is costly (70 to 100 ns).

A better alternative is to use a platform MXBean, a managed bean (MBean) used to monitor and manage the JVM.

Here is a list of what can be monitored using MXBeans:

- Number of classes loaded and threads running.
- Java VM uptime, system properties, and VM input arguments.
- Thread state, thread contention statistics, and stack traces of live threads.
- Memory consumption.
- Garbage collection statistics.
- Low memory detection.
- On-demand deadlock detection.
- Operating system information.

In our case we want to monitor memory consumption, so we will use a MemoryMXBean. MemoryMXBean has a neat feature that allows us to set a memory threshold: if the threshold is reached, an event is fired, so there is no need to check the memory usage periodically.

A. The first step is to create a class that will catch the event thrown by the MemoryMXBean.

To do so, simply implement the NotificationListener interface and override the handleNotification method. In this method, we log a warning if the notification is of type MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED.

package com.inoneo.util;

import java.lang.management.MemoryNotificationInfo;
import java.util.logging.Logger;

import javax.management.Notification;
import javax.management.NotificationListener;

public class LowMemoryListener implements NotificationListener {

    private static final Logger log =
            Logger.getLogger(LowMemoryListener.class.getName());

    public LowMemoryListener() {
        super();
    }

    @Override
    public void handleNotification(Notification notification, Object handback) {
        String notifType = notification.getType();
        if (notifType.equals(MemoryNotificationInfo.MEMORY_THRESHOLD_EXCEEDED)) {
            // potential low memory, log a warning
            log.warning("Memory usage threshold reached");
        }
    }
}

B. Now, we need to register a usage threshold that will trigger the event.

Memory is divided into several memory pools. In the code below we loop over each of them and register a threshold on every memory pool that supports this feature. This piece of code should be placed where you want to start monitoring JVM memory usage, usually at the beginning of the program.

package com.inoneo;

import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryPoolMXBean;
import java.util.List;

import javax.management.NotificationEmitter;

import com.inoneo.util.LowMemoryListener;

// enclosing class name is arbitrary
public class MemoryMonitor {

    public static void main(String[] args) {

        // Start to monitor memory usage
        MemoryMXBean mbean = ManagementFactory.getMemoryMXBean();
        NotificationEmitter emitter = (NotificationEmitter) mbean;
        LowMemoryListener listener = new LowMemoryListener();
        emitter.addNotificationListener(listener, null, null);

        // Set the threshold on every memory pool that supports it
        List<MemoryPoolMXBean> pools = ManagementFactory.getMemoryPoolMXBeans();
        for (MemoryPoolMXBean pool : pools) {
            if (pool.isUsageThresholdSupported()) {
                // JVM heap memory threshold in bytes (1.00 GB = 1000000000), 0 to disable
                pool.setUsageThreshold(1000000000L);
            }
        }
    }
}

In this example we set a threshold of 1 GB, meaning an event will be fired if the memory consumption reaches or exceeds 1 GB.

Using Little’s Law to set Think Times in Performance Testing

Planning

We created some scripts that emulated a user journey through the new application. Before we could begin performance testing we needed to know what throughput we were required to achieve, and there is a formula that relates throughput to response time and the number of users in the system.

Little’s Law

L = λ × W

L – Number in the system

The average number of objects in the system or queue, e.g. the number of customers queuing to pay at a till.

λ – Throughput

The average arrival rate, e.g. the number of customers being served at the till per second.

W – Response Time

The average time an object spends in the system, e.g. the time from when a customer enters the queue to when they have been served at the till.

Calculating

We already had figures from the production system, so we could see the throughput we were required to achieve per second: in this case 2.37 TPS (transactions per second). After our planning was complete we also knew that our overall response time (from beginning to end of the system) was 4.52 s. From these two values we can calculate L:

λ = 2.37 TPS, W = 4.52 s

Therefore L = λ × W = 2.37 × 4.52 ≈ 10.7

So now we had a complete formula: we knew that with about 10.7 Vusers (and no think time) we could achieve the required TPS rate, which would be our constant.

However, we were performance testing and it was vital we put a realistic number of users through the system. We wanted to run the test with 50 Vusers, so λ would stay constant and we would alter W accordingly.

If we set L to 1, then W = L / λ = 1 / 2.37 ≈ 0.422 s. Multiplying both sides by 50 gives:

50 = λ × 21.1

Think Times

So now we have everything needed to calculate the think time required for a 50 Vuser performance test to achieve the set throughput.

W is now 21.1 s per user cycle, and we know that the overall response time for a single user is 4.52 s. So we simply subtract the single-user response time from the per-user cycle time with 50 Vusers:

21.1 s - 4.52 s ≈ 16.6 s

So we now know that we require 16.6 s of think time in our performance testing scripts to ensure a realistic result.
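
The arithmetic above can be sketched in a few lines (class and variable names are ours; the figures are the ones from the text):

```java
// Little's law think-time calculation; names and structure are illustrative.
public class ThinkTimeCalc {
    public static void main(String[] args) {
        double tps = 2.37;           // required throughput (lambda)
        double responseTime = 4.52;  // single-user end-to-end response time W, seconds

        // Little's law: L = lambda * W -> Vusers needed with no think time
        double usersNoThink = tps * responseTime;    // approx 10.7

        // For 50 Vusers at the same lambda, each user's cycle takes W = L / lambda
        int vusers = 50;
        double cycleTime = vusers / tps;             // approx 21.1 s

        // Think time is the cycle time minus the real response time
        double thinkTime = cycleTime - responseTime; // approx 16.6 s

        System.out.printf("usersNoThink=%.1f cycle=%.1f think=%.1f%n",
                usersNoThink, cycleTime, thinkTime);
    }
}
```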

Pacing and ThinkTime Calculation based on TPS-1

As per the NFR we have the following information/requirements:


Requirement 1: User load: 300
Requirement 2: Response time: < 2 sec
Requirement 3: System capacity: 10 million page views per month, with a 25% YoY increase
Total no. of transactions per day: 10,000,000 / 30 ≈ 333,333
Total no. of transactions per hour: 333,333 / 24 ≈ 13,889
Total no. of transactions per sec (TTPS): 13,889 / 3600 ≈ 3.86
So Requirement 3 gives TTPS: 3.86
Then what think time and pacing do we give between transactions and test runs?
Scenario iterations per hour: total transactions per hour / transactions in the scenario = 13,889 / 23 ≈ 604
No. of iterations per user: 604 / 300 ≈ 2.01, i.e. each Vuser will do 2 iterations of 23 transactions in 1 hr.
Time per iteration = total test duration in sec / iterations per user = 3600 / 2 = 1800 sec
Hence we need to distribute these 1800 sec among the 23 transactions as think time and pacing, like so:
Tranx1:
(Think time: 60 sec)
----
Tranx2:
(Think time: 60 sec)
----
----
Tranx22:
(Think time: 60 sec)
----
Tranx23.
Pacing: 420 sec (23 * 60 = 1380; 1800 - 1380 = 420 sec)
Hence the remaining 420 sec can be given as pacing time between iterations.
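
The calculation above can be sketched as follows (class and variable names are ours; the figures are from the NFR in the text):

```java
// NFR-driven pacing calculation; names and structure are illustrative.
public class PacingCalc {
    public static void main(String[] args) {
        double pageViewsPerMonth = 10_000_000;
        double perHour = pageViewsPerMonth / 30 / 24;  // approx 13,889 transactions/hour
        double ttps = perHour / 3600;                  // approx 3.86 transactions/sec

        int trxPerScenario = 23;
        int vusers = 300;
        double scenarioRunsPerHour = perHour / trxPerScenario;        // approx 604
        int iterationsPerUser = (int) (scenarioRunsPerHour / vusers); // 2 per user per hour
        int secPerIteration = 3600 / iterationsPerUser;               // 1800 sec

        int thinkTimePerTrx = 60; // sec of think time before each transaction
        int pacing = secPerIteration - trxPerScenario * thinkTimePerTrx; // 420 sec
        System.out.printf("ttps=%.2f pacing=%d sec%n", ttps, pacing);
    }
}
```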

Pacing and Think-Time Calculation based on TPS-2

If pacing is set to zero, then:

Number of Vusers = TPS * (Response Time + Think Time)

If pacing is non-zero and pacing > (response time + think time), then the formula becomes:

Number of Vusers = TPS * Pacing

Because TPS is a rate of transactions with respect to time, it is also called throughput. So Little's law, average number of users in the system = average response time * throughput, becomes:

V = (R + TT) * TPS

where V = number of users; R = average response time (as noted above, pacing can take its place); TT = think time; TPS = throughput.

Example: if V = 100 and R = 2 sec, then 100 = (2 + TT) * TPS; hence if TT = 18 sec, TPS = 5.
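
As a quick check of the example (class and variable names are ours), rearranging V = (R + TT) * TPS for TPS:

```java
// Rearranged Vuser formula: TPS = V / (R + TT), using the example's numbers.
public class VuserFormula {
    public static void main(String[] args) {
        double r = 2.0;   // average response time in seconds
        double tt = 18.0; // think time in seconds
        int v = 100;      // number of Vusers
        double tps = v / (r + tt); // 100 / 20 = 5.0 transactions per second
        System.out.println("tps = " + tps);
    }
}
```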
