Performance Testing Overview
Performance testing is the process of determining the speed or effectiveness of a computer, network, device, or software application.
Testing that evaluates the response time (speed), throughput, and resource utilization of a system as it executes its required functions, often in comparison with earlier versions of the same product or with a competing product, is called performance testing.
• Prevents revenue and credibility loss due to poor Web site performance.
• To ensure that the system meets performance expectations such as response time and throughput under given levels of load.
• Exposes bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, and buffer overflows.
Physical Architecture
Throughput
• Capability of a product to handle multiple transactions in a given period.
Response Time
• It is equally important to find out how much time each transaction takes to complete.
• Response time is defined as the delay between the point of request and the first response from the product.
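The two metrics above can be measured directly. Below is a minimal sketch (not tied to any particular tool): `operation` is a placeholder standing in for whatever transaction is under test.

```python
import time

def measure(operation, n_requests):
    """Run `operation` n_requests times; return (avg response time in s,
    throughput in requests/s). `operation` is any callable transaction."""
    latencies = []
    start = time.perf_counter()
    for _ in range(n_requests):
        t0 = time.perf_counter()
        operation()                                   # the transaction under test
        latencies.append(time.perf_counter() - t0)    # per-request response time
    elapsed = time.perf_counter() - start
    avg_response = sum(latencies) / len(latencies)
    throughput = n_requests / elapsed                 # completed transactions per second
    return avg_response, throughput

# Demo: a fake transaction that takes ~10 ms.
avg_rt, tps = measure(lambda: time.sleep(0.01), 20)
print(f"avg response time: {avg_rt:.3f}s, throughput: {tps:.1f} req/s")
```

Real tools report percentiles as well as averages, since a few slow outliers can hide behind a healthy mean.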
Design Phase:
Test pages containing many images and multimedia elements for reasonable wait times. Heavy loads are less important here than learning which types of content cause slowdowns.
Development Phase:
To check results of individual pages and processes, looking for breaking points, unnecessary code and bottlenecks.
Deployment Phase:
To identify the minimum hardware and software requirements for the application.
• Requirements gathering -- is difficult if the customer does not have a clear idea of the non-functional requirements.
• Setting up an exact replica of the production environment -- The performance environment where all test activities take place needs to match, or scale proportionally with, the production environment.
• Identification of functional flows -- The scenarios/workflows identified should give the maximum coverage of the product and should include the critical and database-transaction workflows.
• Re-usability of scripts -- Scripts that have been used in previous rounds of testing for an application are often unreliable at best on new builds. This issue frequently adds time and cost for re-scripting on the latest build.
• Code changes affecting the Scripting -- Code changes after the Performance tuning will adversely affect the script and it is
not guaranteed that the same script will work to validate the memory leaks. With that said, additional time should be
reserved to perform script enhancement.
• Tool-related issues -- If any tool-related issue is encountered in the middle of execution, the entire test has to be repeated.
• High frequency transactions: The most frequently used transactions have the potential to impact the performance of all of
the other transactions if they are not efficient.
• Mission Critical transactions: The more important transactions that facilitate the core objectives of the system should be
included, as failure under load of these transactions has, by definition, the greatest impact.
• Read Transactions: At least one READ ONLY transaction should be included, so that performance of such transactions can be
differentiated from other more complex transactions.
• Update Transactions: At least one update transaction should be included so that performance of such transactions can be
differentiated from other transactions.
1. Planning
2. Record
Record the defined testing activities that will be used as a foundation for your load test
scripts. One activity per task or multiple activities depending on user task definition
3. Modify
Modify load test scripts defined by recorder to reflect more realistic Load test simulations.
Defining the project, users
Randomize parameters (Data, times, environment)
Randomize user activities that occur during the load test
4. Execute
Test Script:
One typical user from login through completion.
5. Monitor
Monitoring the scenario: We monitor scenario execution using the various online runtime monitors.
6. Analyze
Analysing test results: During scenario execution, the tool records the performance of the
application under different loads. We use the graphs and reports to analyse the application’s
performance.
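The "Modify" step above — turning a recorded script into a realistic simulation — can be sketched as follows. This is an illustrative fragment, not the output of any real recorder; `USERS` and the field names are assumptions.

```python
import random

# Sketch of the "Modify" step: a recorded script parameterized with
# randomized data and think times.
USERS = ["alice", "bob", "carol"]   # data pool replacing the single recorded value

def virtual_user_script(rng):
    """One iteration of a virtual user's workload."""
    user = rng.choice(USERS)        # parameterize: different data each iteration
    pause = rng.uniform(1.0, 3.0)   # randomized think time between actions
    return {"user": user, "think_time": round(pause, 2)}

rng = random.Random(42)             # seeded only to make the demo reproducible
for _ in range(3):
    print(virtual_user_script(rng))
```

Randomizing data and think times prevents every virtual user from hitting the same cache entries and the same instant in lockstep, which would understate real load.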
Load Testing
• Process of exercising the system under test by feeding it the largest
tasks it can operate with.
• Constantly increasing the load on the system via automated tools to simulate real-world scenarios with virtual users.
Examples:
• Testing a word processor by editing a very large document.
• For a Web application, load is defined in terms of concurrent users or HTTP connections.
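Simulating concurrent users and stepping the load up, as described above, can be sketched with a thread pool. This is a toy harness under stated assumptions: `transaction` stands in for a real request (e.g. an HTTP call), and the step sizes are arbitrary.

```python
import concurrent.futures
import time

def transaction():
    # Placeholder for one virtual user's request (e.g. an HTTP call).
    time.sleep(0.02)
    return True

def run_load(concurrent_users, requests_per_user):
    """Drive the transaction with a fixed pool of virtual users."""
    start = time.perf_counter()
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(transaction)
                   for _ in range(concurrent_users * requests_per_user)]
        ok = sum(f.result() for f in futures)   # count successful requests
    elapsed = time.perf_counter() - start
    return ok, elapsed

# Step up the load, as an automated load tool would.
for users in (1, 5, 10):
    ok, elapsed = run_load(users, 4)
    print(f"{users:2d} users: {ok} requests in {elapsed:.2f}s")
```

Stress testing would continue this ramp past the expected baseline (e.g. doubling it) until the system degrades, then verify that it fails and recovers gracefully.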
Stress Testing
• Trying to break the system under test by overwhelming its resources
or by taking resources away from it.
• Purpose is to make sure that the system fails and recovers gracefully.
Examples:
• Double the baseline number for concurrent users/HTTP connections.
• Randomly shut down and restart ports on the network switches/routers that connect the servers.
Build test scenarios that accurately emulate your working environment: Load testing means testing the application under
typical working conditions, and checking for system performance, reliability, capacity, and so forth.
Understand which resources are required for testing: Application testing requires hardware, software, and human
resources. Before beginning testing, we should know which resources are available and decide how to use them effectively.
Define success criteria in measurable terms: Focused testing goals and test criteria ensure successful testing. For example,
it’s not enough to define vague objectives like “Check server response time under heavy load.” A more focused success
criterion would be “Check that 50 customers can check their account balance simultaneously and that server response time does not exceed 1 minute.”
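A measurable criterion like the one above can be checked mechanically against collected response times. A minimal sketch, assuming a percentile-based pass/fail rule (the 95th percentile and the 60-second limit are illustrative choices, not from the source):

```python
def meets_criterion(response_times, max_seconds=60.0, percentile=0.95):
    """Return True if the given percentile of response times (in seconds)
    is within max_seconds -- a measurable success criterion."""
    if not response_times:
        return False            # no data: cannot claim success
    ordered = sorted(response_times)
    # Index of the requested percentile, clamped to the last sample.
    idx = min(len(ordered) - 1, int(percentile * len(ordered)))
    return ordered[idx] <= max_seconds

samples = [0.8, 1.2, 2.5, 0.9, 55.0, 1.1]   # measured response times (s)
print(meets_criterion(samples))
```

Percentiles are preferred over averages here because one pathological outlier should not pass or fail a run on its own.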
Gathering Requirements
• All the requirements and resources should be evaluated and collected beforehand to avoid any last minute hurdles.
• Load testing does not require as much knowledge of the application as functional testing does.
• Load tester should have some operational knowledge of the application to be tested.
• Load tester should know how the application is actually used in production, to make informed estimates.
• Load tester must know the application architecture (Client Server, Local Deployment, Live URL), Platform and Database used.
• Performance, Load or Stress testing: Type and scope of testing should be clear as each type of testing has different requirements.
• Common Objectives:
Measuring end-user response time
Defining optimal hardware configuration
Checking reliability
Assist the development team in determining the performance characteristics for various configuration options
Ensure that the new production hardware is no slower than the previous release
Provide input data for scalability and capacity-planning efforts
Determine if the application is ready for deployment to production
Detect bottlenecks to be tuned
Gathering Requirements
Users: Identify all the types of people and processes that can put load on the application or system.
Defining the types of primary end users of the application or system such as purchasers, claims processors, and sales
reps
Add other types of users such as system administrators, managers, and report readers who use the application or
system but are not the primary users.
Add types of non-human users such as batch processes, system backups, bulk data loads and anything else that may
add load or consume system resources.
Transactions: For each type of user we identified in the previous step, identify the tasks that the user performs.
Production Environment:
Performance and capacity of an application is significantly affected by the hardware and software components on
which it executes.
Record the speed, capacity, IP address and host name, version numbers, and other significant information for each component.
Test Environment:
Should be as similar to the production environment as possible, to obtain meaningful performance results.
It is important that the databases be set up with the same amount of data in the same proportions as the production
environment as that can substantially affect the performance.
Scenarios:
Select the use cases to include
Determine how many instances of each use case will run concurrently
Determine how often the use cases will execute per hour
Select the test environment
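Given the execution rate per hour from the scenario definition above, the number of concurrent instances can be estimated with Little's Law (N = arrival rate × time in system). A small sketch; the 720/hour rate and 300 s session length are made-up example figures:

```python
def concurrent_users(executions_per_hour, avg_session_seconds):
    """Little's Law: concurrent instances = arrival rate * time in system."""
    arrival_rate_per_sec = executions_per_hour / 3600.0
    return arrival_rate_per_sec * avg_session_seconds

# e.g. 720 executions/hour, each use case active for ~300 s:
print(round(concurrent_users(720, 300)))  # -> 60 concurrent instances
```

This turns a business-level requirement ("720 orders per hour") into the concurrency figure a load tool actually needs.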
Commercial
• LoadRunner
• Rational Performance Tester
• WebLoad
• Wapt
• NeoLoad

Open Source
• Apache JMeter
• The Grinder
• Pylot
• OpenSTA
• LoadUI
• Benerator
• FunkLoad
• Hammerora
• Tsung
• SLAMD Distributed Load Generation Engine (SLAMD)
Profiling
• Memory profiling
• Heap Walker
• CPU Profiling
• Thread profiling
Performance Tuning
• Formulate system performance parameters
• Identify performance bottlenecks

Capacity Planning
• Generate test report
• Based on the reports, estimate application scalability and stability
Q&A
Thank you