Performance Testing Approach

The document provides an overview of the key steps and considerations for performance testing. It discusses identifying business goals and requirements, analyzing requirements, selecting appropriate testing tools, configuring the test environment, generating test data, and the types of performance tests that should be conducted including baseline tests, load tests, and stress tests. The overall goal is to effectively test the performance of an application under different loads and scenarios to identify any bottlenecks or points of failure.
Copyright
© Attribution Non-Commercial (BY-NC)

1. INTRODUCTION
Performance testing is similar to a crime investigation: we need to examine things exhaustively, and if we pick up the right thread we can easily reach our target. The purpose of this paper is to briefly explain performance testing approaches: how to drill down into the requirements effectively, what the prerequisites for performance testing are, how to approach analysis and recommendations, and so on.

2. PREREQUISITES

  Basic knowledge of performance (load/stress) testing.
  Basic knowledge of any performance testing tool for testing web applications.
  Understanding of load testing objectives and desired output.
 

3. AUDIENCE
 
         Performance Test Analyst.
         Quality Assurance Engineers.

4. IDENTIFYING BUSINESS GOALS


The first step in performance testing is to understand the business requirements and the desired performance under the stated load; this is one of the most critical and complex tasks.
One of the major challenges is that a requirement extracted from a contract or an existing marketing document is likely to stay unchangeable. When we face a requirement such as "six-second average response time" with "3,500 concurrent users", we have to figure out what those requirements mean and what additional information we need in order to make them testable.
There is no absolute formula for getting the requirements right. The basic idea is to interpret the requirements written in common language and supplement them with the most common or expected scenarios for the application.
Generally, we find that end users are not well versed in the term 'performance testing' and its key attributes (think time, iterations, pacing, throughput, etc.). End users talk in terms of business transactions, operation flows, and their response times.
We have to identify the goals and objectives of the performance test. For that we need to prepare a questionnaire covering basic objectives such as:

  How many users are we planning to simulate in the performance test?
  What are the main business activities, and how often is each executed during business hours?
  What are the peak business timings?
  What is the available network bandwidth? And so on.
In most cases this information can be extracted from the business requirements and end users, but in a few cases we need to work it out from current trends and expected future growth. For this purpose we need to drill down further:

  Historical data (invoices, web logs, etc.)
  Interviews with stakeholders
  Existing production logs
  Audit logs (if any), to get more information about the business requirements and to validate the existing ones

Based on this information and these logs we can build an accurate picture of the business transactions.
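As a rough illustration of mining historical data, the sketch below (Python, with made-up timestamps standing in for entries parsed from a web or audit log) counts transactions per hour to surface the peak business timings asked about in the questionnaire:

```python
from collections import Counter
from datetime import datetime

def peak_hours(timestamps, top=3):
    """Count transactions per hour and return the busiest hours."""
    per_hour = Counter(ts.strftime("%Y-%m-%d %H:00") for ts in timestamps)
    return per_hour.most_common(top)

# Hypothetical order timestamps, as would be parsed from a web/audit log.
log = [
    datetime(2024, 3, 1, 9, 15), datetime(2024, 3, 1, 9, 40),
    datetime(2024, 3, 1, 9, 55), datetime(2024, 3, 1, 14, 5),
    datetime(2024, 3, 1, 14, 20), datetime(2024, 3, 1, 17, 30),
]
print(peak_hours(log))
```

In practice the timestamps would come from parsing real production or audit logs; the point is that peak load figures can be derived rather than guessed.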

5. ANALYZE THE REQUIREMENT


Once we are through with the business requirements, we need to analyze and refine them, for example:
  Which business points do we need to consider for performance testing?
  What can we measure, and what can we not?
  What do our own experiences with similar applications tell us? And many more.
Based on this analysis and walkthroughs, we finalize the performance requirements and obtain business sign-off.

6. TYPES OF TESTS

The following types of tests are included in Performance Testing:

  Baseline (single-user) test: the initial basis for comparison and the 'best case'.
  Benchmark testing: compares the performance of a new or unknown target-of-test to a known reference standard, such as existing software or measurements.
  Contention test: verifies that the target-of-test can acceptably handle multiple users' demands on the same resource (data records, memory, and so forth).
  Performance profiling: verifies the acceptability of the target-of-test's performance behavior using varying configurations while the operational conditions remain constant.

  Load testing: It verifies the acceptability of the target-of-test's performance behavior under
varying operational conditions (such as number of users, number of transactions, and so on)
while the configuration remains constant.
  Stress testing: It verifies the acceptability of the target-of-test's performance
behavior when abnormal or extreme conditions are encountered, such as
diminished resources or an extremely high number of users.
7. TOOL SELECTION
Tool selection is also a major activity. When we talk about performance testing tools, we should consider three specific categories: load generators, end-to-end response time tools, and monitors.

A performance test requires all three categories, properly configured and working together.

A load generator creates the user load (i.e., virtual users).

An end-to-end response time tool is essential when a large amount of business logic runs on the client.

Monitors are essential for finding bottleneck areas (network, application server, database) and solving any performance problems identified during the testing.
Many tools are available in the market (some are listed below); we can select one based on our needs and budget.
 
Commercial tools
  HP LoadRunner
  IBM Rational Performance Tester (RPT)
  Silk Performer
  WAPT, etc.
 
Open-source tools
  OpenSTA
  WebLOAD
  Apache JMeter
  CLIF, etc.
 
Selecting the right set of tools is essential to a successful performance test.
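To make the load-generator category concrete, the sketch below shows the mechanism in plain Python: a throwaway local HTTP server stands in for the application under test, a handful of threads act as virtual users, and each request's response time is recorded. This is only an illustration of what tools such as JMeter or LoadRunner do at scale, not a substitute for them:

```python
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Trivial local server standing in for the application under test.
class Echo(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):  # silence per-request logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), Echo)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

response_times = []
lock = threading.Lock()

def virtual_user(iterations=5):
    """One 'virtual user': issue requests and record response times."""
    for _ in range(iterations):
        start = time.perf_counter()
        urllib.request.urlopen(url).read()
        with lock:
            response_times.append(time.perf_counter() - start)

threads = [threading.Thread(target=virtual_user) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
server.shutdown()

print(f"{len(response_times)} requests, "
      f"avg {sum(response_times) / len(response_times) * 1000:.1f} ms")
```

Real tools add the monitoring and end-to-end measurement categories on top of this basic load-generation loop.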

8. CONFIGURE THE TEST ENVIRONMENT


Prepare the test environment based on the performance test requirements and the selected testing tools (check whether licenses and the required load generator hardware are available to execute each strategy/scenario).
Generally, there is a huge difference between the production and test environments, as the two use different hardware platforms to deploy the application. The main reason for the difference is the high cost of the production environment. We need to identify the system configuration details of the server machines in the production environment, such as:
  How many CPUs are being used?
  What is the capacity (speed) of the CPU, RAM, and disk?
  How much free disk space is available?
  What is the available network bandwidth? Etc.
 
All these details need to be identified before commencing the performance test and should be documented in the test plan / test strategy document for future reference.

If the performance test environment is set up locally, it should be isolated from the other test environments, such as the functional test environment, the UAT (user acceptance testing) environment, and any others.
Performance tests should be conducted in an isolated environment so that it is easy to isolate bottlenecks and avoid unnecessary extra load during the test. An isolated network is recommended: if the network is heavily loaded with other traffic (functional tests, UAT, etc.), the amount of data sent between client and server will obviously be affected.
Load testing tools are normally used to create the required load on the server, and this is done using load generators (agent machines from which we generate virtual users).
Since each virtual user consumes resources, we need to find out what the resource requirements are for each virtual user; this can vary from tool to tool and protocol to protocol.
For example, when the performance test tool is configured to simulate 1,000 virtual users, the tool generates 1,000 threads or processes (depending on the tool's configuration) that send the client requests to the server. Our load generator machines should have enough hardware resources (CPU and memory) to generate these threads.
There is no specific formula for sizing the load generator machine, but ideally one virtual user needs 4 to 5 MB of RAM when running as a process for a normal application (this also varies from application to application), and based on the required number of users we can configure the performance test lab. For example, to execute a 1,000-user load with 2 GB of available RAM per machine, we could generate up to roughly 400 users from one machine, so we would need three machines.
Factors affecting the amount of resources each virtual user needs include:
  Size of the test script being executed
  Complexity of the application / business points
  Resources used, such as threads, etc.
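The sizing rule of thumb above can be turned into a small helper. The 5 MB-per-virtual-user footprint and 2 GB machine size are this paper's rule-of-thumb figures (the paper rounds the result to "up to 400 users"), not fixed constants; measure your own tool and protocol before trusting them:

```python
import math

def generators_needed(virtual_users, ram_per_vu_mb=5, ram_per_machine_gb=2):
    """Estimate load-generator machines from a rule-of-thumb RAM footprint.

    Returns (machines_needed, virtual_users_per_machine).
    """
    vus_per_machine = (ram_per_machine_gb * 1024) // ram_per_vu_mb
    return math.ceil(virtual_users / vus_per_machine), vus_per_machine

# 1,000-user load on 2 GB machines at ~5 MB per virtual user.
machines, per_machine = generators_needed(1000)
print(f"{machines} machines, up to {per_machine} virtual users each")
```

Swapping in the measured per-user footprint for your tool and protocol is the only change needed to reuse the estimate.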

9. TEST DATA GENERATION


The next step is identifying the sources of test data. Where will the data come from, and will there be sufficient volume and variety to meet our test requirements? During the execution phase of the testing, when changes are made to various aspects of the system, we may have to update these data elements. There are several possibilities for acquiring the necessary data.
Test data generation can be based on the prerequisites or on our test scenarios. We need to baseline our test data.
We can generate test data in several ways:
  Automated data generation scripts (we can even use our own performance test scripts)
  Data feeds from production systems
  Manual creation of data
When we are finished with the baseline data generation, create a replica of the current baseline test data and keep it as a backup. Performance tuning and fixing activities may require a regression test (re-run), which could affect the baseline test data, so it is recommended to keep a backup of the baseline data for further use.
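As a sketch of scripted data generation plus the recommended backup step, the following writes a baseline CSV and immediately snapshots it. The order schema (order_id, customer, amount) is invented for illustration:

```python
import csv
import random
import shutil
import string
import tempfile
from pathlib import Path

def make_orders(path, count):
    """Generate baseline test data: one synthetic purchase order per row."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["order_id", "customer", "amount"])
        for i in range(1, count + 1):
            customer = "".join(random.choices(string.ascii_uppercase, k=6))
            writer.writerow([i, customer, round(random.uniform(10, 500), 2)])

workdir = Path(tempfile.mkdtemp())
baseline = workdir / "orders_baseline.csv"
make_orders(baseline, 100)

# Keep a backup copy so tuning re-runs can restore the baseline data.
backup = workdir / "orders_baseline.bak.csv"
shutil.copyfile(baseline, backup)
print(f"baseline and backup written under {workdir}")
```

The same idea scales to feeding a database instead of a CSV; the essential habits are scripting the generation so it is repeatable and snapshotting the baseline before any tuning run touches it.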

10. SCRIPTS GENERATION


Almost all automation testing tools work on the concept of record and replay. We can record a performance test script based on the business requirements and modify/customize it as needed. We can parameterize the scripts to mimic actual users.
A very interesting fact is that whenever we find problems during the initial phase of testing, the test tool and scripts are always blamed first. People will claim that our scripts are not working correctly, so it is essential that we have an in-depth understanding of the tool: how it interacts with the system, how virtual users hit the server, and so on.
Apart from this, we have to make sure at our end that our scripts are working fine and that there are no other issues (for example, related to correlation and parameterization).
To ensure that our scripts are functionally working, we need to verify the script's transactions in the database or through the front end (if possible). For example, if our script generates a purchase order, then after executing the script we should check the purchase order table in the database to confirm that the order has been created.
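The purchase order check described above can be sketched as follows; the in-memory sqlite3 database and the purchase_orders table are stand-ins for the real application database and schema, and run_order_script stands in for the recorded performance script:

```python
import sqlite3

# Stand-in database; in practice this is the application's own DB.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE purchase_orders (order_id INTEGER, status TEXT)")

def run_order_script(order_id):
    """Stand-in for the performance test script creating a purchase order."""
    db.execute("INSERT INTO purchase_orders VALUES (?, 'CREATED')", (order_id,))
    db.commit()

def order_exists(order_id):
    """Back-end check: did the scripted transaction really reach the DB?"""
    row = db.execute(
        "SELECT COUNT(*) FROM purchase_orders WHERE order_id = ?", (order_id,)
    ).fetchone()
    return row[0] == 1

run_order_script(42)
print("order 42 present:", order_exists(42))
```

A check like this, run once after recording and again after any correlation or parameterization change, settles the "blame the script" argument with evidence.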
 

11. TEST EXECUTION


Cooperative Test
 
Performance testing is team work, and the test execution process is most effective when it is a cooperative effort between all of those concerned with the application or system under test, including:
   
  Testers
  Server administrators (for log collection)
  Database administrators
  System administrators
  Architects
  Network administrators
  Developers
Without the cooperation of a cross-functional team, it is almost impossible to gain the
system-wide perspective necessary to resolve performance issues effectively or
efficiently.

It is recommended to run a cooperative test with a variety of loads: start with a small load and gradually increase it.
During a cooperative test we can get the actual status of the application, as each cooperative team member can provide the real picture of their area:
  How is the network behaving during the run?
  What is the status of the database during the test?
  What is the resource utilization of the different servers (ideally the DB, application, and web servers)?
The performance tester, or performance testing team, is a critical component of this cooperative team, as tuning typically requires additional monitoring of components, resources, and response times under a variety of load conditions and configurations.
Although some commercial tools provide all these statistics, and we could get all this monitoring from the performance tool itself, for exhaustive analysis it is recommended to execute a cooperative test.

12. FREQUENTLY REPORTED PERFORMANCE METRICS 


The following are the most frequently reported types of results data. These metrics help developers, architects, database administrators (DBAs), and administrators find the bottlenecks.
 End-user response times

 Resource utilizations

 Volumes, capacities, and rates

 Component response times

  END-USER RESPONSE TIMES

End-user response time is by far the most commonly requested and reported metric in performance
testing. This is a measure of presumed user satisfaction with the performance characteristics of the system
or application. Stakeholders are interested in end-user response times to judge the degree to which users
will be satisfied with the application. Technical team members are interested because they want to know
if they are achieving the overall performance goals from a user’s perspective.
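Because an average such as "six seconds" can hide slow outliers, end-user response times are usually summarized with percentiles as well. A minimal nearest-rank sketch, using made-up sample times in seconds:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile of response-time samples."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical end-user response times (seconds) from a test run.
times = [1.2, 5.8, 2.1, 3.4, 2.9, 4.7, 2.2, 3.1, 6.3, 2.5]
avg = sum(times) / len(times)
p90 = percentile(times, 90)
print(f"avg {avg:.2f}s, 90th percentile {p90:.2f}s")
```

Here the average looks comfortable while the 90th percentile is much worse, which is exactly the gap a bare "average response time" requirement can conceal.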

  RESOURCE UTILIZATIONS
Resource utilizations are the second most requested and reported metrics in
performance testing. Most frequently, resource utilization metrics are reported verbally
or in a narrative fashion. For example, “The CPU utilization of the application server
never exceeded 45 percent. The target is to stay below 80 percent.” It is generally
valuable to report resource utilizations graphically when there is an issue to be
communicated.
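The narrative report above ("never exceeded 45 percent; target below 80 percent") amounts to comparing the peak of the utilization samples against the target ceiling, which can be sketched as:

```python
def utilization_verdict(samples, target_pct=80):
    """Summarize resource-utilization samples against a target ceiling."""
    peak = max(samples)
    return {"peak": peak, "within_target": peak < target_pct}

# Hypothetical CPU utilization samples (%) from the application server.
cpu = [22, 31, 45, 40, 38, 29]
print(utilization_verdict(cpu))
```

In practice the samples come from the tool's monitors or OS counters; the verdict is the same comparison whether reported verbally or graphically.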
  VOLUMES, CAPACITIES, AND RATES
Volume, capacity, and rate metrics are also frequently requested by stakeholders, even
though the implications of these metrics are often more challenging to interpret. For this
reason, it is important to report these metrics in relation to specific performance criteria
or a specific performance issue.
  COMPONENT RESPONSE TIMES
Even though component response times are not reported to stakeholders as commonly as end-user response times or resource utilization metrics, they are frequently collected and shared with the technical team. These response times help the team determine which sub-part or parts of the system are responsible for the majority of the end-user response time.

13. ANALYSIS AND RECOMMENDATIONS


I believe that result analysis is the most challenging job in performance testing. During the analysis we have to look into the scenarios, run-time settings, data volumes, the network, resources, environment status, and much more.
 
The best way to start the analysis is with resource monitoring; for that we can refer to the resource utilization report (resource utilization of the servers during the run).
Let's take an example: our performance results are not up to the mark, and we start investigating bottleneck areas. During the resource utilization analysis we learn that CPU utilization of the DB server was too high, above roughly 85% for a while (say, 10 minutes). Based on this first analysis it is clear that there was some issue with the DB during the run. Now we need to find out what the reason for this issue could be. We need to look into different aspects, such as: what activities were running on the DB server during the test? Was any batch process scheduled during the execution? Etc.
Let's say we find that everything was fine during the run; then we can request query execution reports for the last run (for example, AWR reports if Oracle is the back end). With the AWR report we can get the top 10 queries that are taking the most time.
We need to execute the same performance test repeatedly (at least three times) to check the consistency of the DB's query issue. We need to collect the AWR report after each run and confirm whether the query execution times and the set of queries are almost the same in all three executions.
Based on this analysis we get the list of queries that take the most time during the test execution. The next step is to map those queries to our script's transactions: which query is related to which transaction?
We can then report that these queries are taking too long and could be written in a better (optimized) way.
Once we receive a new build (after the query fixes), we execute the same steps as discussed earlier.
Best practice says there are two main rules of thumb for any successful analysis:
 
  Save every result together with its run-time settings.
 
  Keep track of every change.
 
We need to track every change, whether it is related to our performance scripts, various server settings, run-time settings, database settings, or test scenario settings.
 
When we have all these records, we can easily find the differences between poor and good performance. They also help us track progress.
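The repeated-run query comparison described above can be sketched as a set intersection over per-run timings; the query names, elapsed times, and the one-second threshold here are invented for illustration:

```python
def consistently_slow(runs, threshold_s=1.0):
    """Queries exceeding the threshold in every run (AWR-style comparison)."""
    slow_sets = [
        {q for q, t in run.items() if t >= threshold_s} for run in runs
    ]
    return set.intersection(*slow_sets)

# Hypothetical query -> elapsed-seconds maps from three repeated runs.
run1 = {"SELECT_ORDERS": 4.2, "UPDATE_STOCK": 0.3, "REPORT_JOIN": 2.8}
run2 = {"SELECT_ORDERS": 3.9, "UPDATE_STOCK": 0.4, "REPORT_JOIN": 2.5}
run3 = {"SELECT_ORDERS": 4.5, "UPDATE_STOCK": 0.2, "REPORT_JOIN": 0.6}
print(consistently_slow([run1, run2, run3]))
```

A query that is slow in only one run (REPORT_JOIN here) points to environment noise rather than the query itself, which is exactly why the paper insists on at least three executions before recommending a fix.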

14. ROI
ROI is a performance measure used to evaluate the efficiency of an investment or to compare the
efficiency of a number of different investments. To calculate ROI, the benefit (return) of an investment is
divided by the cost of the investment; the result is expressed as a percentage or a ratio.

ROI = Benefit / Cost
  Myths about performance testing
There are certain myths, briefly stated below:

 Unlike functionality, performance counters are not so important.

 The hardware used and load generator tools are highly expensive.

 Performance-related bugs are difficult to detect and even more difficult to fix.

 A window of time is not required for the testing.

 Management has to increase infrastructure to increase performance.

 All users require the same response time during the scenario.


  SAMPLE CASE STUDY
Consider the example of an online shopping organization, ABC.com, whose business is online shopping and which accepts online orders through an order entry application.
 
Normally the system takes 5 to 8 minutes (say an average of 6 minutes per order) to execute an end-to-end order.
 
After performance testing and tuning, the system started responding within 2 minutes per order entry: an overall saving of 4 minutes per transaction.
Calculation of the ROI: for instance, this application generates a profit of USD 0.75 per completed end-to-end (E2E) order transaction.
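Putting the case-study numbers together (6 minutes per order before, 2 minutes after, USD 0.75 profit per order), the ROI arithmetic works out as below. The business hours, business days, and testing cost are clearly hypothetical figures added for illustration (the paper does not state them), and the sketch assumes orders are processed serially through a single channel:

```python
# Throughput per order channel, from the case-study timings.
orders_per_hour_before = 60 / 6   # avg 6 min per end-to-end order
orders_per_hour_after = 60 / 2    # 2 min per order after tuning
extra_orders_per_hour = orders_per_hour_after - orders_per_hour_before

profit_per_order = 0.75           # USD, from the case study
extra_profit_per_hour = extra_orders_per_hour * profit_per_order

# Hypothetical figures (not from the paper): 8 business hours/day,
# 250 business days/year, and a 30,000 USD performance-testing cost.
annual_benefit = extra_profit_per_hour * 8 * 250
testing_cost = 30_000.0
roi = annual_benefit / testing_cost
print(f"extra profit/hour: {extra_profit_per_hour:.2f} USD, ROI: {roi:.2f}")
```

With these assumed figures the channel goes from 10 to 30 orders per hour, yielding USD 15 of extra profit per hour; the ROI scales directly with the real order volume and testing cost, which each organization must substitute for itself.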
  BENEFITS OF PERFORMANCE TESTING
In software engineering, a performance test determines how fast some aspect of a system performs under a particular workload and particular stress.

Performance testing can serve different purposes, such as:

 It can validate and verify other quality attributes of the system, such as scalability, reliability, and resource usage.

 It can improve the system as well as the business.

 It can help demonstrate whether or not the system meets its performance criteria.

 It can compare two systems to find which performs better.

 It can measure what parts of the system or workload cause the system to perform badly.

 It helps reduce the risk of downtime and thus improves deployment quality, which in turn increases customer satisfaction.

 It reduces risk associated with SLAs (if any).

 It minimizes the risks related to performance requirements, thus reducing the cost of failure.

15. DISCLAIMER
The methodology described in this paper is based purely on experience and various performance case studies. This article does not claim that these are the best execution steps for every performance test, but they do give us reasonable assurance of correct and positive results.
