Performance Testing

Performance testing plays a critical role in establishing acceptable quality levels for end users. There are various types of performance testing objectives and methods. Key objectives include evaluating time behavior, resource utilization, and capacity. Common types of performance testing include load testing, stress testing, scalability testing, and others. Performance tests must be aligned with stakeholder expectations, reproducible, understandable, and affordable within project timelines.


Principles of Performance Testing

• Performance efficiency (or simply “performance”) is an essential part of providing a “good experience” for users when they use their applications on a variety of fixed and mobile platforms.
• Performance testing plays a critical role in establishing acceptable quality levels for the end user and is often closely integrated with other disciplines such as usability engineering and performance engineering.
Common performance testing objectives
• Time Behavior:
1. Generally, the evaluation of time behavior is the most common performance testing objective.
2. This aspect of performance testing examines the ability of a component or system to respond to user or system inputs within a specified time and under specified conditions.
3. Measurements of time behavior may vary from the “end-to-end” time taken by the system to respond to user input, to the number of CPU cycles required by a software component to execute a particular task.
• Resource Utilization:
• If the availability of system resources is identified as a risk, the utilization of those resources (e.g.,
the allocation of limited RAM) may be investigated by conducting specific performance tests.
• Capacity:
• If issues of system behavior at the required capacity limits of the system (e.g., numbers of users or volumes of data) are identified as a risk, performance tests may be conducted to evaluate the suitability of the system architecture.
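As a minimal illustration of measuring “end-to-end” time behavior, the sketch below times a single request against a hypothetical local endpoint (the URL is an assumption, not part of the original material):

```python
# Minimal sketch: measuring "end-to-end" time behavior for a single request.
# The URL is a hypothetical example endpoint for the system under test.
import time
import urllib.request

URL = "http://localhost:8080/api/orders"  # hypothetical system under test

start = time.perf_counter()
with urllib.request.urlopen(URL, timeout=10) as response:
    response.read()                       # consume the full response body
elapsed = time.perf_counter() - start

print(f"End-to-end response time: {elapsed * 1000:.1f} ms")
```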
The following performance testing principles are particularly relevant.

1. Tests must be aligned to the defined expectations of different stakeholder groups, in particular users, system designers and operations staff.
2. The tests must be reproducible. Statistically identical results (within a
specified tolerance) must be obtained by repeating the tests on an unchanged
system.
3. The tests must yield results that are both understandable and can be readily
compared to stakeholder expectations.
4. The tests can be conducted, where resources allow, either on complete or
partial systems or test environments that are representative of the production
system.
5. The tests must be practically affordable and executable within the timeframe
set by the project.
Performance Testing
• Performance testing is an umbrella term including any
kind of testing focused on performance
(responsiveness) of the system or component under
different volumes of load.
Types of Performance Testing

• Different types of performance testing can be defined. Each of these may be applicable to a given
project, depending on the objectives of the test.
Load Testing

• Load testing focuses on the ability of a system to handle increasing levels of anticipated realistic
loads resulting from transaction requests generated by controlled numbers of concurrent users or
processes.
Stress Testing
• Stress testing focuses on the ability of a system or component
to handle peak loads that are at or beyond the limits of its
anticipated or specified workloads.
• Stress testing is also used to evaluate a system’s ability to
handle reduced availability of resources such as accessible
computing capacity, available bandwidth, and memory.
Scalability Testing

• Scalability testing focuses on the ability of a system to meet future efficiency requirements which may be beyond those currently required.
• The objective of these tests is to determine the system’s ability to grow (e.g., with more users, larger amounts of data stored) without violating the currently specified performance requirements or failing.
• Once the limits of scalability are known, threshold values can be set and monitored in production to provide a warning of problems which may be about to arise.
• In addition, the production environment may be adjusted with appropriate amounts of hardware.
Spike Testing

• Spike testing focuses on the ability of a system to respond correctly to sudden bursts of peak loads
and return afterwards to a steady state.
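As a rough illustration, a spike test’s load profile can be described as a simple function of time; the sketch below (all numbers are illustrative assumptions) models a steady baseline, a sudden one-minute burst, and a return to the steady state:

```python
# Minimal sketch of a spike-shaped load profile (values are illustrative):
# a steady baseline of virtual users, a sudden burst, then a return to baseline.
def spike_profile(t_seconds: int) -> int:
    """Target number of concurrent virtual users at time t."""
    baseline, spike = 50, 500
    if 300 <= t_seconds < 360:   # 60-second burst starting at the 5-minute mark
        return spike
    return baseline

# Example: sample the profile once per minute over 10 minutes.
for minute in range(11):
    print(minute, spike_profile(minute * 60))
```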
Endurance Testing

• Endurance testing focuses on the stability of the system over a time frame specific to the system’s
operational context. This type of testing verifies that there are no resource capacity problems (e.g.,
memory leaks, database connections, thread pools) that may eventually degrade performance
and/or cause failures at breaking points.
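A minimal sketch of one way such resource problems can be watched for during an endurance run, assuming a Linux host and a placeholder server process ID, is to sample the resident memory of the system under test at intervals and look for steady growth:

```python
# Minimal sketch (Linux-only assumption): sampling the resident memory of a
# server process under endurance test by reading /proc/<pid>/status, to spot
# gradual growth such as a memory leak. The PID below is a placeholder.
import time

def rss_kb(pid: int) -> int:
    """Return the current resident set size of a process in kB."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmRSS:"):
                return int(line.split()[1])
    raise RuntimeError("VmRSS not found")

SERVER_PID = 12345           # hypothetical PID of the system under test
samples = []
for _ in range(6):           # a real endurance run would sample for hours
    samples.append(rss_kb(SERVER_PID))
    time.sleep(60)           # one sample per minute

print("RSS growth over the run (kB):", samples[-1] - samples[0])
```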
Concurrency Testing
• Concurrency testing focuses on the impact of situations where specific actions occur simultaneously (e.g., when large numbers of users log in at the same time).
• Concurrency issues are notoriously difficult to find and reproduce, particularly when the problem occurs in an environment where testing has little or no control, such as production.
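A minimal sketch of driving such a concurrency scenario, assuming a hypothetical login endpoint, uses a barrier so that all simulated users issue their login requests at (almost) the same instant:

```python
# Minimal sketch: exercising a concurrency scenario in which many virtual users
# log in at (almost) the same instant. The login URL is a hypothetical example.
import threading
import urllib.parse
import urllib.request

LOGIN_URL = "http://localhost:8080/api/login"   # hypothetical endpoint
USERS = 100
start_barrier = threading.Barrier(USERS)        # release all users together

def login(user_id: int) -> None:
    data = urllib.parse.urlencode({"user": f"user{user_id}", "pw": "secret"}).encode()
    start_barrier.wait()                        # synchronize the burst
    with urllib.request.urlopen(LOGIN_URL, data=data, timeout=30) as resp:
        assert resp.status == 200

threads = [threading.Thread(target=login, args=(i,)) for i in range(USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```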
Capacity Testing
• Capacity testing determines how many users and/or transactions a given system will support and still meet the stated performance objectives. These objectives may also be stated with regard to the data volumes resulting from the transactions.
Ways of Performance Testing
• Static performance testing
• Dynamic performance testing
• Static testing activities for performance can include:
• Reviews of requirements with focus on performance aspects and risks
• Reviews of database schemas, entity-relationship diagrams, metadata, stored procedures and queries
• Reviews of the system and network architecture
• Reviews of critical segments of the system code (e.g., complex algorithms)
Dynamic performance testing includes:

• During unit testing, including using profiling information to determine potential bottlenecks and dynamic analysis to evaluate resource utilization
• During component integration testing, across key use cases and workflows, especially when integrating different use case features or integrating with the “backbone” structure of a workflow
The Concept of Load Generation

• In order to carry out the various types of performance testing, representative system loads must be
modeled, generated and submitted to the system under test.
• Loads are comparable to the data inputs used for functional test cases, but differ in the following
principal ways:
1. A performance test load must represent many user inputs, not just one
2. A performance test load may require dedicated hardware and tools for generation
3. Generation of a performance test load is dependent on the absence of any functional defects in the system under test which may impact test execution
• The efficient and reliable generation of a specified
load is a key success factor when conducting
performance tests.
There are different options for load
generation.
• Load Generation via the User Interface
• Load Generation using Crowds
• Load Generation via the Application Programming Interface (API)
• Load Generation using Captured Communication Protocols
Load Generation via the User Interface

• This may be an adequate approach if only a small number of users are to be represented and if the
required numbers of software clients are available from which to enter required inputs.
• This approach may also be used in conjunction with functional test execution tools, but may rapidly
become impractical as the numbers of users to be simulated increases.
• The stability of the user interface (UI) also represents a critical dependency.
• Frequent changes can impact the repeatability of performance tests and may significantly affect the
maintenance costs.
• Testing through the UI may be the most representative approach for end-to-end tests.
Load Generation using Crowds

• This approach depends on the availability of a large number of testers who will represent real users.

• In crowd testing, the testers are organized such that the desired load can be generated.

• This may be a suitable method for testing applications that are reachable from anywhere in the
world (e.g., web-based), and may involve the users generating a load from a wide range of different
device types and configurations.

• Although this approach may enable very large numbers of users to be utilized, the load generated
will not be as reproducible and precise as other options and is more complex to organize.
Load Generation via the Application Programming Interface
(API)

• This approach is similar to using the UI for data entry, but uses the application’s API instead of the
UI to simulate user interaction with the system under test.

• The approach is therefore less sensitive to changes (e.g., delays) in the UI and allows the
transactions to be processed in the same way as they would if entered directly by a user via the UI.

• Dedicated scripts may be created which repeatedly call specific API routines and enable more
users to be simulated compared to using UI inputs.
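A minimal sketch of such a script, assuming a hypothetical API endpoint and illustrative user and iteration counts, might use a thread pool in which each worker repeatedly calls the API and records its response times:

```python
# Minimal sketch of API-level load generation: a pool of worker threads that
# repeatedly call an API routine and record each response time.
# The endpoint and the user/iteration counts are illustrative assumptions.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

API_URL = "http://localhost:8080/api/orders"    # hypothetical API endpoint
VIRTUAL_USERS = 50
CALLS_PER_USER = 20

def virtual_user(user_id: int) -> list[float]:
    timings = []
    for _ in range(CALLS_PER_USER):
        start = time.perf_counter()
        with urllib.request.urlopen(API_URL, timeout=30) as resp:
            resp.read()
        timings.append(time.perf_counter() - start)
    return timings

with ThreadPoolExecutor(max_workers=VIRTUAL_USERS) as pool:
    results = list(pool.map(virtual_user, range(VIRTUAL_USERS)))

all_timings = [t for user in results for t in user]
print(f"{len(all_timings)} calls, mean {sum(all_timings) / len(all_timings) * 1000:.1f} ms")
```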
Load Generation using Captured Communication Protocols

• This approach involves capturing user interaction with the system under test at the communications protocol level and then replaying these scripts to simulate potentially very large numbers of users in a repeatable and reliable manner.
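A minimal sketch of the replay idea, using placeholder requests rather than real captured traffic, sends the recorded HTTP calls directly at the protocol level:

```python
# Minimal sketch: replaying captured HTTP-level requests against the system
# under test, without driving the UI. The captured requests below are
# illustrative placeholders, not real recorded traffic.
import http.client

CAPTURED_REQUESTS = [                       # e.g., exported from a proxy capture
    ("GET", "/catalog?page=1", None),
    ("POST", "/cart/items", "item_id=42&qty=1"),
    ("GET", "/cart", None),
]

def replay(host: str, port: int = 8080) -> None:
    conn = http.client.HTTPConnection(host, port, timeout=30)
    headers = {"Content-Type": "application/x-www-form-urlencoded"}
    for method, path, body in CAPTURED_REQUESTS:
        conn.request(method, path, body=body, headers=headers)
        resp = conn.getresponse()
        resp.read()                         # drain the response before reusing the connection
        print(method, path, resp.status)
    conn.close()

replay("localhost")                         # hypothetical host of the system under test
```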
Common Performance Efficiency Failure Modes and Their
Causes

1. Slow response under all load levels
• In some cases, response is unacceptable regardless of load.
• This may be caused by underlying performance issues, including, but not limited to:
A. bad database design or implementation
B. network latency and other background loads
• Such issues can be identified during functional and usability testing, not just performance testing, so test analysts should keep an eye open for them and report them.
Slow response under moderate-to-heavy load levels

• In some cases, response degrades unacceptably with moderate-to-heavy load, even when such
loads are entirely within normal, expected, allowed ranges.
• Underlying defects include saturation of one or more resources and varying background loads.
Degraded response over time

• In some cases, response degrades gradually or severely over time.


• Underlying causes include:
A. memory leaks,
B. disk fragmentation,
C. increasing network load over time,
D. growth of the file repository,
E. and unexpected database growth.
Inadequate or graceless error handling under heavy or
over-limit load

• In some cases, response time is acceptable but error handling degrades at high and beyond-limit load levels.
• Underlying defects include:
A. insufficient resource pools
B. undersized queues and stacks
C. too rapid time-out settings
Performance Testing
• Performance Testing is a type of software testing which ensures that the application performs well under the workload.
• The goal of performance testing is not to find bugs but to eliminate performance bottlenecks.
• It measures the quality attributes of the system.
The attributes of Performance Testing include:

• Speed – It determines whether the application responds quickly.
• Scalability – It determines the maximum user load the software application can handle.
• Stability – It determines if the application is stable under varying loads.
Common performance problems faced by users:
• Some of the common performance problems faced by users
are:
• Longer loading time
• Poor response time
• Poor Scalability
• Bottlenecking such as coding errors or hardware issues
Some of the common Performance Testing Tools.

• The market offers a number of tools for test management, performance testing, GUI testing, functional testing, etc. Opt for a tool that is available on demand, easy to learn given your skills, and generic and effective for the required type of testing. Some of the common Performance Testing tools are:
• LoadView
• Apache JMeter
• LoadUI Pro
• WebLoad
• NeoLoad
Some common Performance bottlenecks.

• Some common performance bottlenecks include:


• CPU Utilization
• Memory Utilization
• Network Utilization
• Disk Usage
Parameters considered for Performance Testing:

The Parameters for Performance Testing are:


1. Memory usage
2. Processor usage
3. Bandwidth
4. Memory pages
5. Network output queue length
6. Response time
7. CPU interruption per second
8. Committed memory
9. Thread counts
10. Top waits
Factors for selecting Performance Testing Tools

• Customer-preferred tool
• Availability of a license on the customer's machines
• Availability of the test environment
• Additional protocol support
• License cost
• Efficiency of the tool
• User options for testing
• Vendor support
Throughput in Performance Testing

• Throughput refers to the amount of data transported to the server in response to client requests in a given period of time.
• It is calculated in terms of requests per second, calls per day, reports per year, hits per second, etc.

• The performance of an application depends on the throughput value: the higher the throughput, the higher the performance of the application.
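For example, with illustrative numbers, throughput in requests per second is simply the number of completed requests divided by the length of the measurement window:

```python
# Minimal sketch: computing throughput from a test run (illustrative numbers).
completed_requests = 18_000          # requests completed during the measurement window
window_seconds = 600                 # a 10-minute measurement window

throughput = completed_requests / window_seconds
print(f"Throughput: {throughput:.1f} requests/second")   # 30.0 requests/second
```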
What is Performance Tuning?
• Performance tuning is the improvement of system performance. In computer systems, the motivation for such activity is typically a performance problem.
• The problem can be either real or anticipated; most systems will respond to increased load with some degree of decreasing performance.
• Tuning, as its name suggests, is an iterative process. As you optimize one parameter, other aspects of performance may degrade.
Typical Metrics Collected in Performance Testing

• A common way to categorize performance measurements and metrics is to consider:
1. the technical environment,
2. the business environment, and
3. the operational environment in which the assessment of performance is needed.
Technical Environment

• Performance metrics will vary by the type of the technical environment, as shown in the following list:
• Web-based
• Mobile
• Internet-of-Things (IoT)
• Desktop client devices
• Server-side processing
• Mainframe
• Databases
• Networks
• The nature of software running in the environment (e.g., embedded)
The metrics include the following:

• Response time (e.g., per transaction, per concurrent user, page load times)
• Resource utilization (e.g., CPU, memory, network bandwidth, network latency, available disk space, I/O rate, idle and busy threads)
• Throughput rate of key transactions (i.e., the number of transactions that can be processed in a given period of time)
• Batch processing time (e.g., wait times, throughput times, database response times, completion times)
• Number of errors impacting performance
• Completion time (e.g., for creating, reading, updating, and deleting data)
• Background load on shared resources (especially in virtualized environments)
• Software metrics (e.g., code complexity)
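As a small example of turning raw measurements into reportable metrics, the sketch below (with illustrative sample values) aggregates response-time samples into mean, 90th percentile and maximum values using only the standard library:

```python
# Minimal sketch: aggregating raw response-time samples (in seconds) into the
# kind of summary metrics typically reported. Sample values are illustrative.
import statistics

response_times = [0.18, 0.21, 0.25, 0.19, 0.95, 0.22, 0.20, 0.31, 0.24, 0.23]

mean = statistics.mean(response_times)
p90 = statistics.quantiles(response_times, n=10)[-1]    # 90th percentile
worst = max(response_times)

print(f"mean={mean * 1000:.0f} ms  p90={p90 * 1000:.0f} ms  max={worst * 1000:.0f} ms")
```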
Business Environment

• From the business or functional perspective, performance metrics may include the following:
• Business process efficiency (e.g., the speed of performing an overall business process including normal, alternate and exceptional use case flows)
• Throughput of data, transactions, and other units of work performed (e.g., orders processed per hour, data rows added per minute)
• Service Level Agreement (SLA) compliance or violation rates (e.g., SLA violations per unit of time)
• Scope of usage (e.g., percentage of global or national users conducting tasks at a given time)
• Concurrency of usage (e.g., the number of users concurrently performing a task)
• Timing of usage (e.g., the number of orders processed during peak load times)
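For instance, an SLA violation rate can be computed directly from measured response times; the threshold and samples below are illustrative assumptions:

```python
# Minimal sketch: computing an SLA violation rate from measured response times.
# The 2-second threshold and the sample values are illustrative assumptions.
SLA_THRESHOLD_S = 2.0
response_times = [1.2, 0.8, 2.6, 1.9, 3.1, 1.4, 0.9, 2.2]

violations = sum(1 for t in response_times if t > SLA_THRESHOLD_S)
violation_rate = violations / len(response_times)
print(f"SLA violations: {violations} of {len(response_times)} ({violation_rate:.0%})")
```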
Operational Environment

• The operational aspect of performance testing focuses on tasks that are generally not considered to be user-facing in nature. These include the following:
• Operational processes (e.g., the time required for environment start-up, backups, shutdown and resumption times)
• System restoration (e.g., the time required to restore data from a backup)
• Alerts and warnings (e.g., the time needed for the system to issue an alert or warning)
Performance Testing in the Software Lifecycle
Performance Testing Activities

• Performance testing is iterative in nature.


• Each test provides valuable insights into application and system performance.
• The information gathered from one test is used to correct or optimize application and system
parameters.
• The next test iteration will then show the results of modifications, and so on until test objectives are
reached.
Test Planning

• Test planning is particularly important for performance testing due to the need for the allocation of
test environments, test data, tools and human resources.
• In addition, this is the activity in which the scope of performance testing is established.
• During test planning, risk identification and risk analysis activities are completed and relevant information is updated in any test planning documentation (e.g., the test plan).
Test Monitoring and Control

• Increasing the load generation capacity if the infrastructure does not generate the desired loads as planned for particular performance tests
• Changed, new or replaced hardware
• Changes to network components
• Changes to software implementation
Test Analysis

• Effective performance tests are based on an analysis of performance requirements, test objectives, Service Level Agreements (SLA), IT architecture, process models and other items that comprise the test basis.
• This activity may be supported by modeling and analysis of system resource requirements and/or behavior using spreadsheets or capacity planning tools.
• Specific test conditions are identified such as load levels, timing conditions, and transactions to be tested. The required type(s) of performance test (e.g., load, stress, scalability) are then decided.
Test Design

• Performance test cases are designed.
• These are generally created in modular form so that they may be used as the building blocks of larger, more complex performance tests.
Test Implementation

• One test implementation activity is establishing and/or resetting the test environment before each test execution.
• Since performance testing is typically data-driven, a process is needed to establish test data that is
representative of actual production data in volume and type so that production use can be
simulated.
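A minimal sketch of creating such data, with hypothetical field names and an illustrative row count, generates a production-like volume of synthetic records:

```python
# Minimal sketch: generating a production-like volume of synthetic test data
# as part of test implementation. Field names and row count are illustrative.
import csv
import random

ROWS = 100_000                                   # scaled to resemble production volume
with open("customers.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["customer_id", "region", "order_total"])
    for i in range(ROWS):
        writer.writerow([i,
                         random.choice(["EU", "US", "APAC"]),
                         round(random.uniform(5, 500), 2)])
```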
Test Execution

• Test execution occurs when the performance test is conducted, often by using performance test
tools.
• Test results are evaluated to determine if the system’s performance meets the requirements and
other stated objectives. Any defects are reported.
Test Completion

• Performance test results are provided to the stakeholders (e.g., architects, managers, product
owners) in a test summary report.
• The results are expressed through metrics which are often aggregated to simplify the meaning of
the test results.
• Visual means of reporting such as dashboards are often used to express performance test results in
ways that are easier to understand than text-based metrics.
• Performance testing is often considered to be an ongoing activity.
• It is performed multiple times and at all test levels (component, integration, system, system integration and acceptance testing).
Categories of Performance Risks for Different Architectures

Application or system performance varies considerably based on the architecture, application and host
environment.
While it is not possible to provide a complete list of performance risks for all systems, the list below includes
some typical types of risks associated with particular architectures:
• Single Computer Systems
• Multi-tier Systems
• Distributed Systems
• Virtualized Systems
• Dynamic/Cloud-based Systems
• Client-Server Systems
• Mobile Applications
• Embedded Real-time Systems
• Mainframe Applications
Single Computer Systems
• These are systems or applications that run entirely on one non-virtualized computer.
• Performance can degrade due to excessive resource consumption, including:
• memory leaks
• background activities such as security software, slow storage subsystems (e.g., low-speed external devices or disk fragmentation), and operating system mismanagement
• inefficient implementation of algorithms which do not make use of available resources (e.g., main memory) and as a result execute slower than required
Multi-tier Systems

• These are systems of systems that run on multiple servers, each of which performs a specific set of
tasks, such as database server, application server, and presentation server.
• Each server is, of course, a computer and subject to the risks given earlier.
• In addition, performance can degrade due to poor or non-scalable database design, network
bottlenecks, and inadequate bandwidth or capacity on any single server.
Distributed Systems

• These are systems of systems, similar to a multi-tier architecture, but the various servers may
change dynamically, such as an e-commerce system that accesses different inventory databases
depending on the geographic location of the person placing the order.
• In addition to the risks associated with multi-tier architectures, this architecture can experience
performance problems due to critical workflows or dataflows to, from, or through unreliable or
unpredictable remote servers, especially when such servers suffer periodic connection problems or
intermittent periods of intense load.
Virtualized Systems

• These are systems where the physical hardware hosts multiple virtual computers.
• These virtual machines may host single-computer systems and applications as well as servers that
are part of a multi-tier or distributed architecture.
• Performance risks that arise specifically from virtualization include excessive load on the hardware
across all the virtual machines or improper configuration of the host virtual machine resulting in
inadequate resources.
Dynamic/Cloud-based Systems

• These are systems that offer the ability to scale on demand, increasing capacity as the level of load
increases.
• These systems are typically distributed and virtualized multitier systems, albeit with self-scaling
features designed specifically to mitigate some of the performance risks associated with those
architectures.
• However, there are risks associated with failures to properly configure these features during initial
setup or subsequent updates.
Client-Server Systems

• These are systems running on a client that communicate via a user interface with a single server,
multi-tier server, or distributed server. Since there is code running on the client, the single computer
risks apply to that code, while the server-side issues mentioned above apply as well. Further,
performance risks exist due to connection speed and reliability issues, network congestion at the
client connection point (e.g., public Wi-Fi), and potential problems due to firewalls, packet inspection
and server load balancing.
Mobile Applications
• These are applications running on a smartphone, tablet, or other mobile device. Such applications are subject to the risks mentioned for client-server and browser-based (web app) applications. In addition, performance issues can arise due to the limited and variable resources and connectivity available on the mobile device (which can be affected by location, battery life, charge state, available memory on the device and temperature). For those applications that use device sensors or radios such as accelerometers or Bluetooth, slow dataflows from those sources could create problems. Finally, mobile applications often have heavy interactions with other local mobile apps and remote web services, any of which can potentially become a performance efficiency bottleneck.
Embedded Real-time Systems
• These are systems that work within or even control everyday things such as cars (e.g., entertainment systems and intelligent braking systems), elevators, traffic signals, Heating, Ventilation and Air Conditioning (HVAC) systems, and more. These systems often have many of the risks of mobile devices, including (increasingly) connectivity-related issues since these devices are connected to the Internet. However, the diminished performance of a mobile video game is usually not a safety hazard for the user, while such slowdowns in a vehicle braking system could prove catastrophic.
Mainframe Applications
• These are applications (in many cases decades-old applications) supporting often mission-critical business functions in a data center, sometimes via batch processing. Most are quite predictable and fast when used as originally designed, but many of these are now accessible via APIs, web services, or through their database, which can result in unexpected loads that affect the throughput of established applications.
Performance Metrics
Typical Communication Protocols
• Communication protocols define a set of communications rules between computers and systems.
• Designing tests properly to target specific parts of the system requires understanding protocols.
• Communication protocols are often described by the Open Systems Interconnection (OSI) model
layers (see ISO/IEC 7498-1), although some protocols may fall outside of this model.
• For performance testing, protocols from Layer 5 (Session Layer) to Layer 7 (Application Layer) are most commonly used.
• Common protocols include:

• Database - ODBC, JDBC, other vendor-specific protocols
• Web - HTTP, HTTPS, HTML
• Web Service - SOAP, REST
Additional protocols used in performance testing include:

• Network - DNS, FTP, IMAP, LDAP, POP3, SMTP, Windows Sockets, CORBA
• Mobile - TruClient, SMP, MMS
• Remote Access - Citrix ICA, RTE
• SOA - MQSeries, JSON, WSCL
