Performance Testing Approach
1. INTRODUCTION
Performance testing is similar to a crime investigation: we need to drill down into the
details very exhaustively, and if we find the right thread we can easily reach our
target. The purpose of this paper is to briefly explain performance testing
approaches: how to drill down into the requirements more effectively, what the
prerequisites for performance testing are, how to approach analysis and
recommendations, and so on.
2. PREREQUISITES
Basic knowledge of performance (load/stress) testing.
Basic knowledge of any performance testing tool used for testing web applications.
Understanding of load testing objectives and desired output.
3. AUDIENCE
Performance Test Analyst.
Quality Assurance Engineers.
To define the workload, we need to answer questions such as:
How many users are we planning to consider in the performance test?
What are the main business activities, and how frequently are they executed during business hours?
What are the peak business timings?
What is the network bandwidth, and so on?
In most cases this information can be extracted from the business requirements and
from end users, but in a few cases we need to figure it out based on current trends and
future extensibility. For this purpose we need to drill down further:
Look into historical data (which could be invoices, web logs, etc.).
Interviews with stakeholders.
Existing production logs.
Audit logs (if any), to get more information about the business requirement
and to authenticate the existing requirements.
Based on this information and these logs we can get an actual picture of the business
transactions; a log-mining sketch is given below.
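As a concrete illustration of mining historical data, the following sketch (in Python) counts requests per hour from a web log to find the busiest hour for each business transaction. It is only a minimal sketch: the log file name and the "timestamp transaction" line format are assumptions, and real logs will need their own parsing.

from collections import Counter

# Minimal sketch: estimate the peak hourly volume of each business transaction
# from a web/access log. The file name "access.log" and the line format
# "<ISO timestamp> <transaction name>" are hypothetical assumptions.
def peak_hourly_volume(log_path="access.log"):
    per_hour = Counter()
    with open(log_path) as log:
        for line in log:
            parts = line.split()
            if len(parts) < 2:
                continue
            timestamp, transaction = parts[0], parts[1]
            hour = timestamp[:13]  # e.g. "2017-05-01T14" keeps the date and hour
            per_hour[(hour, transaction)] += 1
    busiest = {}
    for (hour, transaction), count in per_hour.items():
        if count > busiest.get(transaction, ("", 0))[1]:
            busiest[transaction] = (hour, count)
    return busiest

if __name__ == "__main__":
    for transaction, (hour, count) in peak_hourly_volume().items():
        print(f"{transaction}: peak of {count} requests in hour {hour}")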
6. TYPES OF TESTS
Baseline (single-user) test: It provides the initial basis of comparison and the 'best case'.
Benchmark testing: It compares the performance of a new or unknown target-of-
test to a known reference standard, such as existing software or measurements.
Contention test: This type of test verifies that the target-of-test can acceptably
handle multiple users' demands on the same resource (data records, memory,
and so forth).
Load testing: It verifies the acceptability of the target-of-test's performance behavior under
varying operational conditions (such as number of users, number of transactions, and so on)
while the configuration remains constant.
Stress testing: It verifies the acceptability of the target-of-test's performance
behavior when abnormal or extreme conditions are encountered, such as
diminished resources or an extremely high number of users.
7. TOOL SELECTION
Tool selection is also one of the major activities. When we talk about performance
testing tools, we should be concerned with three specific categories: load generators,
end-to-end response time tools, and monitors.
A performance test requires all three categories of tool, properly configured and working
together.
Monitors are essential for finding bottleneck areas (which could be the network, application
server, or database) and for solving any performance problems identified during the
testing.
There are many tools available in the market (some of them are listed below). We can
select one based on our needs and budget.
Commercial Tools
HP LoadRunner
IBM RPT (Rational Performance Tester)
SilkPerformer
WAPT, etc.
Open Source Tools
OpenSTA
WebLOAD
Apache JMeter
CLIF, etc.
Selecting the right set of tools is essential to a successful performance test.
If the performance test environment is set up locally, then it should be isolated from the
other test environments, such as the functional test environment, the UAT (user
acceptance testing) environment, and any other environment.
The performance test should be conducted in an isolated environment so that it is easy to
segregate bottlenecks and to avoid unnecessary extra load during the test.
It is recommended that an isolated network be used for the performance test; if the
network is heavily loaded with other traffic (functional testing, UAT, etc.), then the amount
of data sent to and from the client and server will obviously be affected.
Load testing tools are normally used to create the required load on the server, and this
is done using load generators (agent machines from which we generate virtual users).
Since each virtual user requires some resources, we need to find out what the resource
requirements are for each virtual user. However, this can vary from tool to tool and from
protocol to protocol.
For example, when the performance test tool is configured to simulate 1000 virtual
users, the tool generates 1000 threads or processes (depending on the tool's
configuration settings), which send the client requests to the server. Our load generator
machine should have enough hardware resources (CPU and memory) to generate these
threads; a minimal sketch of the idea is shown below.
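The sketch below (in Python) shows the basic idea: each virtual user is a thread that repeatedly sends a request, waits for the response, and records the response time. It only illustrates what the tools do internally; the target URL, user count, duration, and think time are hypothetical values, and real tools add much more (protocol support, ramp-up control, correlation, reporting, and so on).

import threading
import time
import urllib.request

# Minimal sketch of a load generator: one thread per virtual user, each sending
# requests to the server for a fixed duration. All values below are illustrative.
TARGET_URL = "http://test-server.example.com/"   # hypothetical system under test
VIRTUAL_USERS = 50                               # real agents may host hundreds
TEST_DURATION_SECONDS = 60
THINK_TIME_SECONDS = 1

def virtual_user(results):
    end_time = time.time() + TEST_DURATION_SECONDS
    while time.time() < end_time:
        start = time.time()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=10) as response:
                response.read()
            results.append(time.time() - start)   # successful request: response time
        except Exception:
            results.append(None)                  # failed request
        time.sleep(THINK_TIME_SECONDS)            # think time between requests

if __name__ == "__main__":
    results = []
    threads = [threading.Thread(target=virtual_user, args=(results,))
               for _ in range(VIRTUAL_USERS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(f"{len(results)} requests sent by {VIRTUAL_USERS} virtual users")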
There is no specific formula to calculate the resources of the load generator machine,
but ideally one virtual user needs 4 to 5 MB of RAM when running as a process for a
normal application (this can also vary from application to application), and based on the
requirement (number of users) we can configure the performance test lab. For example,
if we are planning to execute a 1000-user load and each load generator machine has 2 GB
of available RAM, then we could generate up to about 400 users from one machine, so
roughly three such machines would be needed; the calculation below illustrates this.
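The following worked calculation (in Python) shows the arithmetic behind this sizing, assuming the 4 to 5 MB per virtual user figure quoted above; the actual footprint depends on the tool, protocol, and application.

import math

# Worked sizing example based on the figures in the text: ~5 MB of RAM per
# virtual user and 2 GB (2048 MB) of available RAM per load generator machine.
RAM_PER_USER_MB = 5
AVAILABLE_RAM_MB = 2048
TARGET_USERS = 1000

users_per_machine = AVAILABLE_RAM_MB // RAM_PER_USER_MB        # about 400 users
machines_needed = math.ceil(TARGET_USERS / users_per_machine)  # about 3 machines

print(f"Users per load generator machine: ~{users_per_machine}")
print(f"Load generator machines needed for {TARGET_USERS} users: {machines_needed}")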
Factors affecting the amount of resources each virtual user needs include:
Size of the test script being executed.
Complexity of the application / business points.
Resources used, such as threads, etc.
It is recommended that we run a cooperative test with a variety of loads: we can start
with a small amount of load and gradually increase it.
During a cooperative test we can get the actual status of the application, as all
cooperative team members can provide the actual picture of their own area, such as:
How is the network behaving during the run?
What is the status of the database during the test?
What is the resource utilization of the different servers (ideally the database, application,
and web servers)?
The performance tester, or performance testing team, is a critical component of this
cooperative team as tuning typically requires additional monitoring of components,
resources, and response times under a variety of load conditions and configurations.
Although some commercial tools provide all these statistics, and we could obtain this
monitoring from the performance tool itself, for exhaustive analysis it is recommended to
execute a cooperative test; a simple monitoring sketch is given below.
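As a sketch of the kind of monitoring each team member can contribute during a cooperative run, the Python snippet below samples CPU and memory utilization on a server at a fixed interval. It assumes the third-party psutil package is installed; the sampling interval and duration are illustrative values only.

import time
import psutil   # third-party package, assumed to be installed on the monitored server

# Minimal monitoring sketch for a cooperative test: sample CPU and memory
# utilization while the load test runs, so the results can be correlated with
# the load profile afterwards. Interval and sample count are illustrative.
SAMPLE_INTERVAL_SECONDS = 5
SAMPLES = 12   # roughly one minute of monitoring

for _ in range(SAMPLES):
    cpu_percent = psutil.cpu_percent(interval=SAMPLE_INTERVAL_SECONDS)
    memory_percent = psutil.virtual_memory().percent
    print(f"{time.strftime('%H:%M:%S')}  CPU: {cpu_percent:5.1f}%  "
          f"Memory: {memory_percent:5.1f}%")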
END-USER RESPONSE TIMES
End-user response time is by far the most commonly requested and reported metric in performance
testing. This is a measure of presumed user satisfaction with the performance characteristics of the system
or application. Stakeholders are interested in end-user response times to judge the degree to which users
will be satisfied with the application. Technical team members are interested because they want to know
if they are achieving the overall performance goals from a user’s perspective.
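End-user response times are usually summarized as an average together with high percentiles, since a few slow requests can hide behind a healthy average. The sketch below (in Python) shows one simple way to compute these summaries; the sample values are made up, and in practice the raw timings come from the load testing tool.

# Minimal sketch: summarize end-user response times collected during a test run.
# The sample values below are made up for illustration.
def percentile(sorted_values, pct):
    """Return the value below which roughly pct percent of the samples fall."""
    index = max(0, int(round(pct / 100.0 * len(sorted_values))) - 1)
    return sorted_values[index]

response_times = sorted([0.8, 1.2, 0.9, 2.4, 1.1, 0.7, 3.0, 1.5, 1.0, 1.3])  # seconds

print(f"Average response time : {sum(response_times) / len(response_times):.2f} s")
print(f"90th percentile       : {percentile(response_times, 90):.2f} s")
print(f"95th percentile       : {percentile(response_times, 95):.2f} s")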
RESOURCE UTILIZATIONS
Resource utilizations are the second most requested and reported metrics in
performance testing. Most frequently, resource utilization metrics are reported verbally
or in a narrative fashion. For example, “The CPU utilization of the application server
never exceeded 45 percent. The target is to stay below 80 percent.” It is generally
valuable to report resource utilizations graphically when there is an issue to be
communicated.
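A minimal sketch of such a graphical report is shown below (in Python, assuming the matplotlib plotting library is available). The utilization samples are made up; in a real report they come from the monitors used during the test.

import matplotlib.pyplot as plt   # third-party plotting library, assumed available

# Minimal sketch: plot application server CPU utilization against the 80% target.
# The sample values below are made up for illustration.
elapsed_minutes = list(range(0, 60, 5))
cpu_utilization = [12, 20, 28, 35, 41, 44, 45, 43, 40, 33, 22, 15]   # percent

plt.plot(elapsed_minutes, cpu_utilization, marker="o", label="Application server CPU")
plt.axhline(80, color="red", linestyle="--", label="80% target ceiling")
plt.xlabel("Elapsed test time (minutes)")
plt.ylabel("CPU utilization (%)")
plt.title("CPU utilization during the load test")
plt.legend()
plt.savefig("cpu_utilization.png")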
VOLUMES, CAPACITIES, AND RATES
Volume, capacity, and rate metrics are also frequently requested by stakeholders, even
though the implications of these metrics are often more challenging to interpret. For this
reason, it is important to report these metrics in relation to specific performance criteria
or a specific performance issue.
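For example, a rate metric such as throughput is easiest to interpret when it is reported next to the criterion it must meet. The small sketch below (in Python) illustrates this; all figures are hypothetical.

# Minimal sketch: report an achieved rate against its performance criterion.
# All figures below are made up for illustration.
completed_orders = 18_000            # volume processed during the test window
test_window_seconds = 2 * 60 * 60    # a two-hour test window
required_rate = 2.0                  # criterion: orders per second

achieved_rate = completed_orders / test_window_seconds
verdict = "meets" if achieved_rate >= required_rate else "misses"
print(f"Achieved {achieved_rate:.2f} orders/sec, which {verdict} "
      f"the {required_rate} orders/sec target")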
COMPONENT RESPONSE TIMES
Even though component response times are not reported to stakeholders as commonly
as end-user response times or resource utilization metrics, they are frequently collected
and shared with the technical team. These response times help the team determine
which sub-part or parts of the system are responsible for the majority of the end-user
response time.
14. ROI
ROI is a performance measure used to evaluate the efficiency of an investment or to compare the
efficiency of a number of different investments. To calculate ROI, the benefit (return) of an investment is
divided by the cost of the investment; the result is expressed as a percentage or a ratio.
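A small worked example of this formula is shown below (in Python); the monetary figures are hypothetical and used only to illustrate the calculation.

# Worked example of the ROI formula from the text: benefit divided by cost,
# expressed as a percentage. The monetary figures are hypothetical.
def roi_percent(benefit, cost):
    return (benefit / cost) * 100

benefit_of_testing = 150_000   # e.g. estimated cost of outages and rework avoided
cost_of_testing = 50_000       # e.g. tools, hardware, and effort for the test

print(f"ROI: {roi_percent(benefit_of_testing, cost_of_testing):.0f}%")   # 300%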
The hardware used, as well as the load generator tools, is highly expensive.
Performance-related bugs are difficult to detect and even more difficult to fix.
Performance testing can serve different purposes in the testing area, as described below.
It can serve to validate and verify other quality attributes of the system, such as
scalability, reliability, and resource usage.
It can measure which parts of the system or workload cause the system to
perform badly.
The testing helps in reducing the risk of downtime and thus improves deployment
quality, which in turn increases customer satisfaction.
This type of testing minimizes the risks related to performance requirements,
and thus reduces the cost of failure.
15. DISCLAIMER
The methodology described in this paper is purely based on experience and on various
performance case studies. This article does not claim that these are the best
execution steps for any performance test, but it does give us an assurance of
correct and positive results.