How To Do Performance Testing
#1. Requirement Gathering and Analysis
The performance team interacts with the client to identify and gather requirements, both technical and business. This includes getting information on the application's architecture, the technologies and database used, intended users, functionality, application usage, test requirements, hardware & software requirements, etc.
#2. POC (Proof of Concept)
Once the key functionalities are identified, a POC (proof of concept, which is a sort of demonstration of the real-time activity, but in a limited sense) is done with the available tools. The choice of performance test tool depends on the cost of the tool, the protocol the application uses, the technologies used to build the application, the number of users to be simulated for the test, etc. During the POC, scripts are created for the identified key functionalities and executed with 10-15 virtual users.
#3. Performance Test Planning and Designing
Depending on the information collected in the preceding stages, test planning and designing is conducted.
Test planning covers how the performance test is going to take place: the test environment, the application, workload, hardware, etc.
Test designing is mainly about the type of test to be conducted, the metrics to be measured, metadata, scripts, the number of users, and the execution plan.
During this activity, a Performance Test Plan is created. This serves as an agreement before moving ahead and also as a roadmap for the entire activity. Once created, this document is shared with the client to establish transparency on the type of the application, test objectives, prerequisites, deliverables, entry and exit criteria, acceptance criteria, etc. Key sections of the plan include:
d) Test Approach (User Distribution, Test data requirements, Workload criteria, Entry & Exit criteria, Deliverables, etc.)
e) In-Scope and Out-of-Scope
f) Test Environment (Configuration, Tool, Hardware, Server Monitoring, Database, test
configuration, etc.)
g) Reporting & Communication
h) Test Metrics
i) Roles & Responsibilities
j) Risks & Mitigation
k) Configuration Management
#4. Performance Test Design (Scripting)
Use cases are created for the functionality identified in the test plan as the scope of PT. These use cases are shared with the client for approval, to make sure the scripts will be recorded with the correct steps.
Once approved, script development starts with a recording of the steps in the use cases with the performance test tool selected during the POC, and the scripts are enhanced by performing Correlation (for handling dynamic values), Parameterization (value substitution), and custom functions as the situation or need demands. More on these techniques in our video tutorials.
The scripts are then validated against different users.
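To make correlation and parameterization concrete, here is a minimal, tool-agnostic Java sketch of what the two techniques do under the hood. The login URL, csrf token pattern, and usernames are hypothetical placeholders, not from any specific tool or application:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class CorrelationDemo {
    // Parameterization: each virtual user draws different test data.
    static final List<String> USERS = List.of("user01", "user02", "user03");

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String user = USERS.get(0); // in a real test, indexed per virtual user

        // Step 1: request the login page.
        HttpResponse<String> page = client.send(
                HttpRequest.newBuilder(URI.create("https://example.com/login")).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        // Correlation: capture the server-generated (dynamic) token from the
        // response, so the next request replays a live value instead of the
        // stale one captured at recording time.
        Matcher m = Pattern.compile("name=\"csrf\" value=\"([^\"]+)\"").matcher(page.body());
        if (!m.find()) throw new IllegalStateException("token not found");
        String token = m.group(1);

        // Step 2: substitute both the parameterized username and the correlated token.
        String form = "username=" + user + "&csrf=" + token;
        HttpResponse<String> login = client.send(
                HttpRequest.newBuilder(URI.create("https://example.com/login"))
                        .header("Content-Type", "application/x-www-form-urlencoded")
                        .POST(HttpRequest.BodyPublishers.ofString(form))
                        .build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println("Login status: " + login.statusCode());
    }
}
```

Performance test tools automate exactly these two steps: the recorded dynamic value is replaced with a captured one (correlation), and the hard-coded username is replaced with values drawn from a data file (parameterization).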
In parallel to script creation, the performance team also keeps working on setting up the test environment (software and hardware).
The performance team will also take care of metadata (back-end data) through scripts if this activity is not taken up by the client.
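Where the team handles back-end data itself, a typical approach is a small data-seeding script. A hedged JDBC sketch, assuming a PostgreSQL test database and an accounts table; all connection details, credentials, and table names are placeholders:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class SeedTestData {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; point this at the test environment's DB.
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://testdb:5432/app", "perf_user", "secret")) {
            con.setAutoCommit(false);
            String sql = "INSERT INTO accounts (username, balance) VALUES (?, ?)";
            try (PreparedStatement ps = con.prepareStatement(sql)) {
                // Seed enough accounts for the planned number of virtual users.
                for (int i = 1; i <= 1000; i++) {
                    ps.setString(1, String.format("vuser%04d", i));
                    ps.setInt(2, 10_000);
                    ps.addBatch();
                }
                ps.executeBatch();
            }
            con.commit(); // one transaction keeps the seed run all-or-nothing
        }
    }
}
```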
#5. Performance Test Modeling
A Performance Load Model is created for the test execution. The main aim of this step is to validate whether the given performance metrics (provided by the client) are achieved during the test or not. There are different approaches to creating a load model; Little's Law is used in most cases.
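As a worked example, Little's Law states N = X × (R + Z): the number of concurrent users N equals the target throughput X multiplied by the sum of response time R and think time Z. A small Java sketch, using illustrative targets rather than any real client figures:

```java
public class LittlesLaw {
    public static void main(String[] args) {
        // Illustrative targets only; substitute the client's real figures.
        double throughput = 50.0;   // X: target transactions per second
        double responseTime = 2.0;  // R: expected response time in seconds
        double thinkTime = 8.0;     // Z: user think time in seconds

        // Little's Law for a closed system: N = X * (R + Z)
        double users = throughput * (responseTime + thinkTime);
        System.out.printf("Required concurrent virtual users: %.0f%n", users); // 500
    }
}
```

With these assumed figures, the load model would need about 500 concurrent virtual users to sustain 50 transactions per second.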
#6. Test Execution
The scenario is designed according to the load model in the Controller or Performance Center, but the initial tests are not executed with the maximum number of users from the load model. Test execution is done incrementally. For example, if the maximum number of users is 100, the scenarios are first run with 10, 25, and 50 users, and so on, eventually moving up to 100 users.
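A minimal, tool-agnostic Java sketch of such stepped execution; simulateUser() is a hypothetical stand-in for one scripted user journey:

```java
import java.util.concurrent.CountDownLatch;

public class StepLoad {
    // Incremental user counts from the load model; 100 is the target maximum.
    static final int[] STEPS = {10, 25, 50, 100};

    public static void main(String[] args) throws Exception {
        for (int users : STEPS) {
            System.out.println("Running step with " + users + " virtual users");
            CountDownLatch done = new CountDownLatch(users);
            for (int i = 0; i < users; i++) {
                new Thread(() -> {
                    simulateUser(); // stand-in for one scripted user journey
                    done.countDown();
                }).start();
            }
            done.await(); // wait for this step to finish before ramping up
        }
    }

    static void simulateUser() {
        try { Thread.sleep(200); } catch (InterruptedException ignored) { }
    }
}
```

In a real tool such as the Controller, the same effect is achieved through the scenario's ramp-up schedule rather than hand-rolled threads.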
#7. Test Result Analysis
Test results are the most important deliverable for the performance tester. This is where we can prove the ROI (Return on Investment) and productivity that a performance testing effort can provide.
Some best practices that help the result analysis process:
a) Give a unique and meaningful name to every test result; this helps in understanding the purpose of the test.
b) Include the following information in the test result summary:
Errors occurred
Recommendations
There might be recommendations, such as configuration changes, for the next test. Server logs also help in identifying the root cause of a problem (like bottlenecks); deep-diagnostic tools are used for this purpose.
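As an example of turning raw results into summary metrics, here is a short Java sketch computing the average and 90th-percentile response time from sample timings. The figures are placeholder data, not real results; in practice the samples come from the test tool's raw output:

```java
import java.util.Arrays;

public class ResponseTimeStats {
    public static void main(String[] args) {
        // Placeholder response times in milliseconds.
        double[] samples = {120, 250, 180, 900, 210, 150, 300, 175, 220, 260};
        Arrays.sort(samples);

        double avg = Arrays.stream(samples).average().orElse(0);
        // 90th percentile: the value below which 90% of samples fall.
        double p90 = samples[(int) Math.ceil(0.9 * samples.length) - 1];

        System.out.printf("Average: %.1f ms, 90th percentile: %.1f ms%n", avg, p90);
    }
}
```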
#8. Report
Test results should be simplified so that the conclusion is clearer and does not need any derivation. The development team needs more information on the analysis, comparison of results, and details of how the results were obtained. The final report typically includes:
Execution Summary
System Under test
Testing Strategy
Summary of Test Results
Problems Identified
Recommendations
Along with the final report, all the deliverables as per the test plan should be shared with the client.
Conclusion
We hope this article has given you process-oriented, conceptual, and detailed information on how performance testing is carried out from beginning to end.
Can we do performance testing manually?
Yes, you can do performance testing manually. For this, you should open many active sessions of the application and test it out. It also depends on what type of performance test you want to do. In general, however, you can judge the number of active sessions, the number of DB connections open, the number of threads running (taking Java-based web applications as an example), and the amount of CPU time and memory being used, with a performance viewer such as IBM Tivoli Performance Viewer (also available as a trial version). Usually the test is done by deploying the application on the server, accessing the application from multiple client machines, and making multiple threads run. The performance viewer should, of course, be installed on the server.
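A minimal sketch of this manual approach, assuming a Java 11+ environment; the endpoint URL and thread count are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ManualLoad {
    public static void main(String[] args) throws Exception {
        String url = "http://testserver:8080/app/home"; // placeholder endpoint
        int threads = 20;                               // simulated client sessions
        HttpClient client = HttpClient.newHttpClient();

        for (int i = 0; i < threads; i++) {
            int id = i;
            new Thread(() -> {
                try {
                    long start = System.nanoTime();
                    HttpResponse<Void> resp = client.send(
                            HttpRequest.newBuilder(URI.create(url)).GET().build(),
                            HttpResponse.BodyHandlers.discarding());
                    long ms = (System.nanoTime() - start) / 1_000_000;
                    // While these run, watch CPU, memory, and DB connections
                    // on the server side in the performance viewer.
                    System.out.println("session " + id + ": HTTP "
                            + resp.statusCode() + " in " + ms + " ms");
                } catch (Exception e) {
                    System.out.println("session " + id + " failed: " + e);
                }
            }).start();
        }
    }
}
```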
Good performance testing isn't just about generating load onto the system; it's about generating the correct load on a system and achieving accurate results. I view performance testing as a sub-project in its own right. A performance consultant should spend time elaborating and fleshing out the requirements; they should also take time to put the results of a performance test back into the context of the business. A performance testing assignment is a small technical project that requires more than just scripting skills; it involves communication with disparate parties within a project. Here is a broad, high-level outline of the steps I attempt to put in place when faced with an assignment:
It may seem like a lot of steps just to execute a performance test, but I find that the documents produced do not have to be extensive; they just have to be a few pages.
When to start Performance Testing?
Performance testing starts in parallel with the Software Development Life Cycle (SDLC); NFR (non-functional requirement) elicitation happens in parallel with the System Requirement Specification (SRS).
Now we will look at the phases of the Performance Testing Life Cycle (PTLC).
1. Non-Functional Requirements Elicitation and Analysis
Entry Criteria
Tasks
Growth pattern
Exit Criteria
2. Performance Test Strategy
This phase defines how to approach performance testing for the identified critical scenarios. The following are to be addressed during this phase:
Data set up
SLA
Workload Model
Exit Criteria
3. Performance Test Design
This phase involves script generation using the identified testing tool in a dedicated environment. All script enhancements should be done and unit tested.
Entry Criteria
Test Environment
Test Data
Activities
Test Scripting
Data Parameterization
Correlation
Unit Testing
Exit Criteria
4. Performance Test Execution
This phase is dedicated to the test engineers, who design scenarios based on the identified workload and load the system with concurrent virtual users (VUsers).
Entry Criteria
Activities
Exit Criteria
5. Performance Test Result Analysis
The collected log files are analyzed and reviewed by experienced test engineers. Tuning recommendations will be given if any conflicts are identified.
Entry Criteria
Activities
Tuning recommendation
Exit Criteria
6. Benchmarks and Recommendations
This is the last phase in PTLC, which involves benchmarking and providing recommendations.
Entry Criteria
Activities
Exit Criteria