Annex 9 - Performance and Load Test Plan
ikubINFO
Audit Trail:
Table of Contents
1. References
2. Summary
3. Tools Used
4. JMeter Glossary
7. Throughput Charts
9. Conclusions
Table of Figures
1. References
i. Adisa.csv
ii. Adisa.jmx
iii. HtmlReports
2. Summary
Performance testing was conducted on ADISA to determine its baseline performance.
Testing was done with a basic set of tools configured in each worksite. Concurrent user testing
began with a small number of users and gradually increased to support more and more users.
This process also helped to debug the test environment itself, fixing configuration errors and
fine-tuning the setup. The results show that the system is stable, with no dramatic decrease in
performance as more users are added to the test. As a result, a maximum of 30 users was used
in the final tests. In addition to debugging the test environment and gathering the initial test
results, three bugs may have been uncovered. These bugs would most likely not have been
visible in normal functional testing. Therefore, performance testing can claim an additional
measure of success in discovering bugs that might otherwise have gone unnoticed.
3. Tools Used
The Apache JMeter™ application is open-source software, a 100% pure Java application
designed to load test functional behavior and measure performance. It was originally designed
for testing Web Applications but has since expanded to other test functions.
Apache JMeter may be used to test performance on both static and dynamic resources and on
dynamic Web applications.
It can be used to simulate a heavy load on a server, group of servers, network or object to test its
strength or to analyze overall performance under different load types.
4. JMeter Glossary
APDEX (Application Performance Index) - an open standard for measuring the performance
of software applications. Its purpose is to convert measurements into insights about
user satisfaction by specifying a uniform way to analyze and report on the degree to which
measured performance meets user expectations.
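As an illustration of the standard Apdex formula (a sketch, not JMeter's implementation; the 500 ms satisfaction threshold and the 4T tolerating bound are the canonical defaults and are configurable in JMeter's report):

```python
def apdex(response_times_ms, t=500):
    """Apdex = (satisfied + tolerating / 2) / total.

    satisfied : samples with response time <= T
    tolerating: samples with T < response time <= 4 * T
    """
    satisfied = sum(1 for rt in response_times_ms if rt <= t)
    tolerating = sum(1 for rt in response_times_ms if t < rt <= 4 * t)
    return (satisfied + tolerating / 2) / len(response_times_ms)

# 2 satisfied (120, 300), 1 tolerating (800), 1 frustrated (2500)
print(apdex([120, 300, 800, 2500], t=500))  # -> 0.625
```

A score of 1.0 means every sample satisfied the threshold; values near 0 indicate mostly frustrated users.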
Response Times over Time - Graph that will display for each sampler the average response
time in milliseconds.
Response Times Percentiles - Graph that will display the percentiles for the response time
values. The X-axis represents a percentage, the Y-axis response time values. One point (P, Value)
means that, for the whole scenario, P percent of the values are below Value ms.
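A point on this graph can be reproduced from the raw samples. The sketch below uses the nearest-rank method (an assumption; JMeter's exact interpolation may differ slightly):

```python
import math

def percentile(values_ms, p):
    """Return the response time below which roughly p percent of samples fall
    (nearest-rank method on the sorted samples)."""
    ordered = sorted(values_ms)
    rank = max(1, math.ceil(p / 100 * len(ordered)))  # 1-based rank
    return ordered[rank - 1]

samples = [110, 120, 130, 150, 180, 200, 240, 300, 450, 900]
print(percentile(samples, 90))  # -> 450 : the point (90, 450) on the graph
```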
Active Threads Over Time - a simple listener showing how many active threads there are in
each thread group during a test run.
Response Times vs Threads - Graph that shows how Response Time changes with the number
of parallel threads. Naturally, the server takes longer to respond when many users request it
simultaneously.
Latency Vs Request - Latency is generally considered the amount of time from when
the user makes a request until the response gets back to that user. On a first
request, for the first 14 KB, latency is longer because it includes a DNS lookup, a TCP
handshake, and the TLS negotiation. Subsequent requests have less latency because
the connection to the server is already established.
Time Vs Threads - Graph that shows how Response Time changes with the number of parallel
threads. Naturally, the server takes longer to respond when many users request it simultaneously.
Response Times Distribution - Graph that will display the response time distribution of the test.
The X-axis shows the response times grouped by interval and the Y-axis the number of samples
contained in each interval.
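The grouping behind this chart amounts to binning each sample into a fixed-width interval. A minimal sketch (the 100 ms bin width is an assumption; JMeter's granularity is configurable):

```python
from collections import Counter

def distribution(response_times_ms, width=100):
    """Map each interval's lower bound to the number of samples falling in it."""
    return Counter((rt // width) * width for rt in response_times_ms)

# Two samples in [0, 100), two in [100, 200), one in [400, 500)
print(dict(distribution([40, 90, 120, 160, 420])))
```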
Figure 2 APDEX
A Statistics table providing, in one table, a summary of all metrics per transaction, including
7 configurable percentiles.
An error table providing a summary of all errors and their proportion in the total requests.
A Top 5 Errors by Sampler table providing, for every Sampler (excluding Transaction
Controller by default), the top 5 errors.
Table reports are among the most informative outputs in JMeter, as the "Summary Report" shows.
This report displays all the main indicators for every request in the Test Plan, including the
number of requests sent. From the table, bottlenecks and other problems can be spotted at once
and addressed immediately.
This graph will display for each sampler the average response time in milliseconds.
This graph will display the percentiles for the response time values. The X-axis represents
a percentage, the Y-axis response time values. One point (P, Value) means that, for the whole
scenario, P percent of the values are below Value ms.
Active Threads over Time is a simple listener showing how many active threads there are
in each thread group during a test run.
This graph will display the number of bytes sent and received by JMeter during the load
test.
This graph will display the response latencies during the load test. Latency here is the duration
between the end of the request and the beginning of the server response.
This graph will display the average time to establish connection during the load test.
7. Throughput Charts
Throughput is one of the non-functional metrics JMeter reports, which falls under the
performance-testing category. It is calculated as the total number of requests in a given time,
or TPS (transactions per second).
In JMeter, throughput is used to measure server performance under load; in other words, it
tells us the application's ability to handle the load.
Throughput is a significant way of measuring application performance: the higher the
throughput, the better the result is considered, although throughput may vary with the number
of threads per second.
Throughput is the number of requests sent to the server per second.
The formula is Throughput = (number of requests) / (total time).
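The formula above can be sketched directly, using JMeter's convention that the total time runs from the start of the first sample to the end of the last sample (the timestamps below are illustrative):

```python
def throughput(sample_starts_ms, sample_ends_ms):
    """Requests per second over the whole test window, measured from the
    start of the first sample to the end of the last sample."""
    span_s = (max(sample_ends_ms) - min(sample_starts_ms)) / 1000
    return len(sample_starts_ms) / span_s

starts = [0, 100, 250, 400]
ends = [80, 210, 390, 500]
print(throughput(starts, ends))  # 4 requests over 0.5 s -> 8.0 req/s
```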
This graph will display the hits generated by the test plan to the server per second. Hits
include child samples from transactions and embedded resource hits.
HTTP Codes per second over time (200 OK, 500 Internal Error etc.)
This graph shows the number of transactions per second for each sampler. It counts, for
each second, the number of finished transactions.
This graph shows the number of transactions per second for each sampler. It counts, for
each second, the number of finished transactions.
This graph shows how Response Time changes with the number of parallel threads. Naturally,
the server takes longer to respond when many users request it simultaneously. This graph
visualizes such dependencies.
Response time measures the time taken for one system node to respond to the request of
another: the time a system takes from receiving a given input until the process is over. For
example, if you have an API, you may want to know exactly how much time it takes to execute
and return data as JSON.
Response time measures the server response of every single transaction or query. It starts
when a user sends a request and ends when the application states that the request has completed.
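The start-to-completion definition above can be sketched with a simple timer around a stubbed handler (the handler stands in for a real server call and is purely illustrative):

```python
import time

def timed_call(handler, *args):
    """Invoke handler and return its result plus the elapsed response time in ms,
    measured from request start to completion."""
    start = time.perf_counter()
    result = handler(*args)
    elapsed_ms = (time.perf_counter() - start) * 1000
    return result, elapsed_ms

result, ms = timed_call(lambda x: x * 2, 21)
print(result, ms >= 0)
```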
This graph will display the percentiles for the response time values. The X-axis represents
a percentage, the Y-axis response time values. One point (P, Value) means that, for the whole
scenario, P percent of the values are below Value ms.
This graph will display the response time distribution of the test. The X-axis shows the
response times grouped by interval and the Y-axis the number of samples contained in
each interval.
This graph shows how Response Time changes with the number of parallel threads. Naturally,
the server takes longer to respond when many users request it simultaneously. This graph
visualizes such dependencies.
This graph will display the response time distribution of the test. The X-axis shows the
response times grouped by interval and the Y-axis the number of samples contained in
each interval.
9. Conclusions
This test concluded with a maximum error rate of 4.38%, out of 23,893 samples.
Throughput - Throughput is the number of requests that are processed per time
unit (seconds, minutes, or hours) by the server. This time is calculated from the
start of the first sample to the end of the last sample.
The maximum throughput was 1,777.16 requests/s.
KB/sec - This indicates the amount of data downloaded from the server during the
performance test execution.
The amount of data downloaded from the server during this test was 4,648.27 kilobytes.