Performance Test Report
Prepared By: (Group – 6)
Sukanta Basak (ID # 102 1025 050)
Farhana Afroz (ID # 071 486 050)
Instructor:
Md. Shazzad Hosain
Assistant Professor
Index:
SL #  Topic
1   Abstract
2   Chapter 1: Introduction
9   Chapter 8: Plan and Design Tests
      8.1 Introduction
      8.2 Approaches for Modeling Application Usage
      8.3 Determine Navigation Paths for Key Scenarios
      8.4 Determine the Relative Distribution of Scenarios
      8.5 Identify Target Load Levels
12  Chapter 11: Conclusion
13  Reference
Abstract:
Performance testing is a type of testing intended to determine the responsiveness,
throughput, reliability, and/or scalability of a system under a given workload.
Performance testing is commonly conducted to accomplish the following:
Assess production readiness
Evaluate against performance criteria
Compare performance characteristics of multiple systems or system
configurations
Find the source of performance problems
Support system tuning
Find throughput levels
Chapter 1: Introduction
In software engineering, performance testing is testing that is performed, from one
perspective, to determine how fast some aspect of a system performs under a particular
workload. It can also serve to validate and verify other quality attributes of the system, such
as
Scalability,
Reliability and
Resource usage.
Performance testing is a subset of Performance engineering, an emerging computer
science practice that strives to build performance into the design and architecture of a
system, prior to the onset of actual coding effort.
The performance testing approach used in this guide consists of the following activities:
1. Activity 1. Identify the Test Environment. Identify the physical test environment and the
production environment as well as the tools and resources available to the test team. The
physical environment includes hardware, software, and network configurations. Having a
thorough understanding of the entire test environment at the outset enables more efficient test
design and planning and helps us identify testing challenges early in the project. In some
situations, this process must be revisited periodically throughout the project’s life cycle.
2. Activity 2. Identify Performance Acceptance Criteria. Identify the response time,
throughput, and resource utilization goals and constraints. In general, response time is a user
concern, throughput is a business concern, and resource utilization is a system concern.
Additionally, identify project success criteria that may not be captured by those goals and
constraints; for example, using performance tests to evaluate which combination of
configuration settings will result in the most desirable performance characteristics. (A small
sketch of machine-checkable acceptance criteria follows this list of activities.)
3. Activity 3. Plan and Design Tests. Identify key scenarios, determine variability among
representative users and how to simulate that variability, define test data, and establish
metrics to be collected. Consolidate this information into one or more models of system
usage to be implemented, executed, and analyzed.
4. Activity 4. Configure the Test Environment. Prepare the test environment, tools, and
resources necessary to execute each strategy as features and components become available
for test. Ensure that the test environment is instrumented for resource monitoring as
necessary.
5. Activity 5. Implement the Test Design. Develop the performance tests in accordance
with the test design.
6. Activity 6. Execute the Test. Run and monitor our tests. Validate the tests, test data, and
results collection. Execute validated tests for analysis while monitoring the test and the test
environment.
7. Activity 7. Analyze Results, Report, and Retest. Consolidate and share results data.
Analyze the data both individually and as a cross-functional team. Reprioritize the remaining
tests and re-execute them as needed. When all of the metric values are within accepted limits,
none of the set thresholds have been violated, and all of the desired information has been
collected, we have finished testing that particular scenario on that particular configuration.
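As an illustration of Activity 2, the following minimal sketch (our own, with hypothetical metric names and target values that are not taken from any real project) shows how response time, throughput, and resource utilization goals can be captured in a machine-checkable form and compared against measured results after a test run:

# Hypothetical acceptance criteria: the metric names and targets are illustrative only.
ACCEPTANCE_CRITERIA = {
    "search_response_time_s": {"target": 2.0, "kind": "max"},   # user concern
    "orders_per_second":      {"target": 50.0, "kind": "min"},  # business concern
    "app_server_cpu_percent": {"target": 70.0, "kind": "max"},  # system concern
}

def evaluate(measured):
    """Return (metric, measured value, target) for every violated criterion."""
    violations = []
    for metric, rule in ACCEPTANCE_CRITERIA.items():
        value = measured.get(metric)
        if value is None:
            continue  # the metric was not collected in this run
        if rule["kind"] == "max" and value > rule["target"]:
            violations.append((metric, value, rule["target"]))
        elif rule["kind"] == "min" and value < rule["target"]:
            violations.append((metric, value, rule["target"]))
    return violations

if __name__ == "__main__":
    # Made-up measurements from one test run; only the response-time criterion is violated.
    print(evaluate({"search_response_time_s": 2.6,
                    "orders_per_second": 62.0,
                    "app_server_cpu_percent": 55.0}))

Expressing the criteria as data keeps the pass/fail decision repeatable from run to run.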
At the highest level, performance testing is almost always conducted to address one or
more risks related to expense, opportunity costs, continuity, and/or corporate reputation.
Some more specific reasons for conducting performance testing include:
Assessing the adequacy of the developed software's performance.
In performance testing, it is often crucial (and often difficult to arrange) for the test
conditions to be similar to the expected actual use. This is, however, not entirely possible
in actual practice. The reason is that the workloads of production systems have a random
nature, and while the test workloads do their best to mimic what may happen in the
production environment, it is impossible to exactly replicate this workload variability -
except in the simplest system.
Performance goals will differ depending on the application technology and purpose;
however, they should always include some of the following:
Concurrency / Throughput
If an application identifies end-users by some form of login procedure then a concurrency
goal is highly desirable. By definition this is the largest number of concurrent application
users that the application is expected to support at any given moment. The workflow of
our scripted transaction may impact true application concurrency, especially if the
iterative part contains the Login and Logout activity.
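A minimal sketch of that point, assuming a hypothetical virtual-user script (the functions below are placeholders, not a real load-testing API): keeping Login and Logout outside the iterated business transaction keeps each simulated user logged in for the whole run, so the number of running virtual users matches the number of concurrent application users.

import time

# Placeholder actions; a real tool would issue HTTP requests here.
def login(user_id):
    print(f"user {user_id}: login")

def browse_and_purchase(user_id):
    time.sleep(0.1)  # the iterated business transaction

def logout(user_id):
    print(f"user {user_id}: logout")

def virtual_user(user_id, iterations=3):
    # Login and Logout sit outside the loop, so the user stays authenticated for
    # the whole session and counts toward true application concurrency.
    login(user_id)
    for _ in range(iterations):
        browse_and_purchase(user_id)
    logout(user_id)

if __name__ == "__main__":
    virtual_user(user_id=1)

If Login and Logout were moved inside the loop, each virtual user would repeatedly leave and re-enter the application, and the number of users logged in at any moment would no longer equal the number of running virtual users.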
Render response time
Render response time is difficult for load-testing tools to deal with, as they generally have
no concept of what happens within a node apart from recognizing a period of time where
there is no activity 'on the wire'. To measure render response time, it is generally necessary
to include functional test scripts as part of the performance test scenario, which is a feature
not offered by many load-testing tools.
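As a rough sketch only (both hooks below are placeholders standing in for a functional-test driver, not a real API), the idea is to time the 'on the wire' portion and the client-side rendering portion separately:

import time

def request_page(url):
    time.sleep(0.2)   # stands in for "last byte received on the wire"

def wait_for_render_complete():
    time.sleep(0.3)   # stands in for client-side rendering and scripting finishing

def measure(url):
    start = time.perf_counter()
    request_page(url)
    wire_time = time.perf_counter() - start      # what most load-testing tools report
    wait_for_render_complete()
    render_time = time.perf_counter() - start    # needs a functional script to observe
    return wire_time, render_time

if __name__ == "__main__":
    print(measure("http://example.invalid/home"))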
Load
Stress
Endurance
Spike
Configuration
Isolation
3.1.5 Configuration Testing
Configuration testing is another variation on traditional performance testing. Rather than
testing for performance from the perspective of load, we are testing the effects of
configuration changes in the application landscape on application performance and
behavior. A common example would be experimenting with different methods of load
balancing.
• Identifies mismatches between performance-related expectations and reality.
Stress test: To determine or validate an application's behavior when it is pushed beyond
normal or peak load conditions. The goal of stress testing is to reveal application bugs that
surface only under high-load conditions. These bugs can include such things as
synchronization issues, race conditions, and memory leaks. Stress testing enables us to
identify our application's weak points, and shows how the application behaves under
extreme load conditions.
Although the potential benefits far outweigh the challenges related to performance
testing, uncertainty over the relevance of the resulting data — based on the sheer
impossibility of testing all of the reasonable combinations of variables, scenarios and
situations — makes some organizations question the value of conducting performance
testing at all. In practice, however, the likelihood of catastrophic performance failures
occurring in a system that has been through reasonable (not even rigorous) performance
testing is dramatically reduced, particularly if the performance tests are used to help
determine what to monitor in production so that the team will get early warning signs if
the application starts drifting toward a significant performance-related failure.
Performance testing is a broad and complex activity that can take many forms, address
many risks, and provide a wide range of value to an organization.
It is important to understand the different performance test types in order to reduce risks,
minimize cost, and know when to apply the appropriate test over the course of a given
performance-testing project. To apply different test types over the course of a
performance test, we need to evaluate the following key points:
The objectives of the performance test.
The context of the performance test; for example, the resources involved, cost,
and potential return on the testing effort.
Are the network components adequate?

Smoke:
Is this build/configuration ready for additional performance testing?
What type of performance testing should I conduct next?
Does this build exhibit better or worse performance than the last one?

Spike:
What happens if the production load exceeds the anticipated peak load?
What kinds of failures should we plan for?
What indicators should we look for?

Stress:
What happens if the production load exceeds the anticipated load?
What kinds of failures should we plan for?
What indicators should we look for in order to intervene prior to failure?

Unit:
Is this segment of code reasonably efficient?
Did I stay within my performance budgets?
Is this code performing as anticipated under load?

Validation:
Does the application meet the goals and requirements?
Is this version faster or slower than the last one?
Will I be in violation of my contract/Service Level Agreement (SLA) if I release?
Is a Web Service responding within the maximum expected response time before
an error is thrown?
Ensure that our performance requirements and goals represent the needs and
desires of our users, not someone else’s.
Compare our speed measurements against previous versions and competing
applications.
Design load tests that replicate actual workload at both normal and anticipated
peak times.
Conduct performance testing with data types, distributions, and volumes similar
to those used in business operations during actual production (e.g., number of
products, orders in pending status, size of user base). We can allow data to
accumulate in databases and file servers, or additionally create the data volume,
before load test execution.
Use performance test results to help stakeholders make informed architecture and
business decisions.
Solicit representative feedback about users’ satisfaction with the system while it is
under peak expected load.
Include time-critical transactions in our performance tests.
Ensure that at least some of our performance tests are conducted while periodic
system processes are executing (e.g., downloading virus-definition updates, or
during weekly backups).
Measure speed under various conditions, load levels, and scenario mixes. (Users
value consistent speed.)
Validate that all of the correct data was displayed and saved during our
performance test. (For example, a user updates information, but the confirmation
screen still displays the old information because the transaction has not completed
writing to the database.)
Scalability-Related Risks
Scalability risks concern not only the number of users an application can support,
but also the volume of data the application can contain and process, as well as the
ability to identify when an application is approaching capacity. Common
scalability risks that can be addressed via performance testing include:
Can the application provide consistent and acceptable response times for the
entire user base?
Can the application store all of the data that will be collected over the life of the
application?
Are there warning signs to indicate that the application is approaching peak
capacity?
Will the application still be secure under heavy usage?
Will functionality be compromised under heavy usage?
4.5 Scalability-Related Risk-Mitigation Strategies
The following strategies are valuable in mitigating scalability-related risks:
Compare measured speeds under various loads. (Keep in mind that the end user
does not know or care how many other people are using the application at the
same time that he/she is.)
Design load tests that replicate actual workload at both normal and anticipated
peak times.
Conduct performance testing with data types, distributions, and volumes similar
to those used in business operations during actual production (e.g., number of
products, orders in pending status, size of user base). We can allow data to
accumulate in databases and file servers, or additionally create the data volume,
before load test execution.
Use performance test results to help stakeholders make informed architecture and
business decisions.
Work with more meaningful performance tests that map to the real-world
requirements.
When we find a scalability limit, incrementally reduce the load and retest to help us
identify a metric that can serve as a reliable indicator that the application is
approaching that limit, in enough time for us to apply countermeasures (see the sketch
after this list).
Validate the functional accuracy of the application under various loads by
checking database entries created or validating content returned in response to
particular user requests.
Conduct performance tests beyond expected peak loads and observe behavior by
having representative users and stakeholders access the application manually
during and after the performance test.
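The sketch referred to above follows. It is only an illustration of the step-down idea; run_load_test, the synthetic measurements, and the threshold value are hypothetical stand-ins for driving the load tool and reading resource monitors.

RESPONSE_TIME_LIMIT_S = 3.0   # assumed acceptance threshold

def run_load_test(users):
    # Hypothetical measurement: returns (average response time in seconds, queue length)
    # at the given user load. A real implementation would run the load tool and read
    # monitors instead of computing synthetic values.
    response_time = 0.5 + max(0, users - 400) * 0.01   # degrades past ~400 users
    queue_length = max(0, users - 350) // 10           # starts growing slightly earlier
    return response_time, queue_length

def find_early_warning_levels(limit_users, step=25):
    """Step the load down from the observed limit and record candidate indicators."""
    observations = []
    for users in range(limit_users, 0, -step):
        response_time, queue_length = run_load_test(users)
        observations.append((users, response_time, queue_length))
        if response_time <= RESPONSE_TIME_LIMIT_S and queue_length == 0:
            break  # both indicators are quiet well below the limit; stop stepping down
    return observations

if __name__ == "__main__":
    for row in find_early_warning_levels(limit_users=650):
        print(row)

In this synthetic example the queue length starts growing before response times violate the limit, which is exactly the kind of early-warning metric the strategy is looking for.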
Can the application run for long periods of time without data corruption,
slowdown, or servers needing to be rebooted?
If the application does go down unexpectedly, what happens to partially
completed transactions?
When the application comes back online after scheduled or unscheduled
downtime, will users still be able to see/do everything they expect?
When the application comes back online after unscheduled downtime, does it
resume at the correct point? In particular, does it not attempt to resume cancelled
transactions?
Can combinations of errors or repeated functional errors cause a system crash?
Can one leg of the load-balanced environment be taken down and still provide
uninterrupted service to users?
Can the system be patched or updated without taking it down?
Execute identical tests immediately before and after a system reboot. Compare the
results. We can use an identical approach for recycling services or processes.
Include error or exception cases in our performance test scenarios (for example,
users trying to log on with improper credentials).
Apply a patch to the system during a performance test.
Force a backup and/or virus definition update during a performance test.
5.2 Iterative Performance Testing Activities
This approach can be represented using the following nine activities:
5.3 Relationship to Core Performance Testing Activities
The following graphic displays how the seven core activities map to these nine activities:
Chapter 6 – Managing the Performance Test Cycle in a
Regulated (CMMI) Environment
6.1 Introduction:
The nature of performance testing makes it difficult to predict what type of test will add
value, or even be possible. Obviously, this makes planning all the more challenging. This
chapter describes an industry-validated approach to planning and managing performance
testing. This approach is sensitive to the need for auditability, progress tracking, and changes
to plans that require approval, without being oppressively procedural.
The approach described in this chapter can be represented by the following 12 activities.
6.3 Relationship to Core Performance-Testing Activities
The following graphic shows how the seven core activities from Chapter 4 map to these
twelve activities:
6.4 CMMI Performance Testing Activity Flow
Chapter 7 – Evaluating Systems to Increase Performance
Testing Effectiveness
7.1 Introduction:
Evaluating the system includes, but is not limited to, the following activities:
Logical architecture, as it is used in this chapter, refers to the structure, interaction, and
abstraction of software and/or code. That code may include everything from objects,
functions, and classes to entire applications. We will have to learn the code-level
architecture from our team. When doing so, remember to additionally explore the concept
of logical architectural tiers.
The most basic architecture for Web-based applications is known as the three-tier
architecture, where those tiers often correspond to physical machines with roles defined
as follows:
More complex architectures may include more tiers, clusters of machines that serve the
same role, or even single machines serving as the host for multiple logical tiers.
7.4 Physical Architecture
It should be clear that the physical architecture of the environment — that is, the actual
hardware that runs the software — is at least as important as the logical architecture.
Many teams refer to the actual hardware as the “environment” or the “network
architecture,” but neither term actually encompasses everything of interest to a
performance tester. What concerns the tester is generally represented in diagrams where
actual, physical computers are shown and labeled with the roles they play, along with the
other actual, physical computers with which they communicate. The following diagram
shows an example of one such physical architecture diagram.
Putting these two pieces of the puzzle together adds the most value to the performance-
testing effort. Having this information at our fingertips, along with the more detailed code
architecture of which functions or activities are handled on which tiers, allows us to
design tests that can determine and isolate bottlenecks.
Chapter 8 Plan and Design Tests
8.1 Introduction:
The most common purpose of Web load tests is to simulate the user’s experience as
realistically as possible. For performance testing to yield results that are directly
applicable to understanding the performance characteristics of an application in
production, the tested workloads must represent a real-world production scenario. To
create a reasonably accurate representation of reality, we must understand the business
context for the use of the application, expected transaction volumes in various situations,
expected user path(s) by volume, and other usage factors. By focusing on groups of users
and how they interact with the application, this chapter demonstrates an approach to
developing workload models that approximate production usage based on various data
sources.
Testing a Web site in such a way that the test can reliably predict performance is often
more art than science. As critical as it is to create load and usage models that will
predict performance accurately, the data necessary to create these models is typically not
directly available to the individuals who conduct the testing; when it is available, it is
typically not complete or comprehensive.
While it is certainly true that simulating unrealistic workload models can provide a team
with valuable information when conducting performance testing, we can only make
accurate predictions about performance in a production environment, or prioritize
performance optimizations, when realistic workload models are simulated.
There is no way to be certain until we test it. There are many methods to determine
navigation paths, including:
Identifying the user paths within our Web application that are expected to have
significant performance impact and that accomplish one or more of the identified key
scenarios
After the application is released for unscripted user acceptance testing, beta testing, or
production, we will be able to determine how the majority of users accomplish activities
on the system under test by evaluating Web server logs. It is always a good idea to
compare our models against reality and make an informed decision about whether to do
additional testing based on the similarities and differences found.
8.4 Determine the Relative Distribution of Scenarios
Having determined which scenarios to simulate and what the steps and associated data
are for those scenarios, and having consolidated those scenarios into one or more
workload models, we now need to determine how often users perform each activity
represented in the model relative to the other activities needed to complete the workload
model.
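A minimal sketch of how such a relative distribution can drive a simulation; the scenario names and percentages below are hypothetical and not taken from this report.

import random

# Hypothetical relative distribution of user activities in one workload model.
SCENARIO_MIX = {
    "browse_catalog": 0.50,
    "search":         0.30,
    "add_to_cart":    0.13,
    "checkout":       0.07,
}

def pick_scenario(rng=random):
    """Choose the next scenario for a virtual user according to the workload mix."""
    names = list(SCENARIO_MIX)
    weights = list(SCENARIO_MIX.values())
    return rng.choices(names, weights=weights, k=1)[0]

if __name__ == "__main__":
    # Over many iterations the simulated mix should converge on the model.
    sample = [pick_scenario() for _ in range(10_000)]
    for name in SCENARIO_MIX:
        print(name, round(sample.count(name) / len(sample), 3))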
8.5 Identify Target Load Levels
A customer visit to a Web site comprises a series of related requests known as a user
session. Users with different behaviors who navigate the same Web site are unlikely to
cause overlapping requests to the Web server during their sessions. Therefore, instead of
modeling the user experience on the basis of concurrent users, it is more useful to base
our model on user sessions. User sessions can be defined as a sequence of actions in a
navigational page flow, undertaken by a customer visiting a Web site.
Figure 8.3 above represents usage volume from the perspective of the server (in this case,
a Web server). Reading the graph from top to bottom and from left to right, we can
see that user 1 navigates first to page “solid black” and then to pages “white,” “polka
dot,” “solid black,” “white,” and “polka dot.” User 2 also starts with page “solid black,”
but then goes to pages “zebra stripe,” “grey,” etc. We will also notice that virtually any
vertical slice of the graph between the start and end times will reveal 10 users accessing
the system, showing that this distribution is representative of 10 concurrent, or
simultaneous, users. What should be clear is that the server knows that 10 activities are
occurring at any moment in time, but not how many actual users are interacting with the
system to generate those 10 activities.
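When only session-level figures are available, a rough estimate of the equivalent concurrent load can be derived from the standard queueing approximation (Little's Law) that the average number of sessions in progress equals the session arrival rate multiplied by the average session duration. The figures below are invented purely to illustrate the arithmetic.

# Invented traffic figures, used only to illustrate the arithmetic.
sessions_per_hour = 3_600          # expected user sessions arriving per hour
avg_session_duration_min = 10      # average length of one user session

arrival_rate_per_min = sessions_per_hour / 60                       # 60 sessions per minute
concurrent_sessions = arrival_rate_per_min * avg_session_duration_min

print(concurrent_sessions)         # -> 600.0 sessions in progress at any moment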
Figure 8.4 Actual Distributions of User Activities Over Time
Figure 8.4 depicts another distribution of activities by individual users that would
generate the server perspective graph above.
Chapter 9 Execute Tests
9.1 Introduction
Performance test execution is the activity that occurs between developing test scripts and
reporting and analyzing test results. Much of the performance testing–related training
available today treats this activity as little more than starting a test and monitoring it to
ensure that the test appears to be running as expected. In reality, this activity is
significantly more complex than just clicking a button and monitoring machines.
Poor load simulations can render all previous work useless. To
understand the data collected from a test run, the load simulation must accurately reflect
the test design. When the simulation does not reflect the test design, the results are prone
to misinterpretation. Even if our tests accurately reflect the test design, there are still
many ways that the test can yield invalid or misleading results. Although it may be
tempting to simply trust our tests, it is almost always worth the time and effort to validate
the accuracy of our tests before we need to depend on them to provide results intended to
assist in making the “go-live” decision. It may be useful to think about test validation in
terms of the following four categories:
Test design implementation. To validate that we have implemented our test design
accurately (using whatever method we have chosen), we will need to run the test and
examine exactly what the test does.
Concurrency. After we have validated that our test conforms to the test design when run
with a single user, run the test with several users. Ensure that each user is seeded with
unique data, and that users begin their activity within a few seconds of
one another, not all at the same second, as this is likely to create an unrealistically
stressful situation that would add complexity to validating the accuracy of our test design
implementation. One method of validating that tests run as expected with multiple users
is to use three test runs: one with 3 users, one with 5 users, and one with 11 users. These
three tests have a tendency to expose many common issues with both the configuration of
the test environment (such as a limited license being installed on an application
component) and the test itself (such as parameterized data not varying as intended).
Combinations of tests. Having validated that a test runs as intended with a single user
and with multiple users, the next logical step is to validate that the test runs accurately in
combination with other tests. Generally, when testing performance, tests get mixed and
matched to represent various combinations and distributions of users, activities, and
scenarios. If we do not validate that our tests have been both designed and implemented
to handle this degree of complexity prior to running critical test projects, we can end up
wasting a lot of time debugging our tests or test scripts when we could have been
collecting valuable performance information.
Test data validation. Once we are satisfied that our tests are running properly, the last
critical validation step is to validate our test data. Performance testing can utilize and/or
consume large volumes of test data, thereby increasing the likelihood of errors in our
dataset. In addition to the data used by our tests, it is important to validate that our tests
share that data as intended, and that the application under test is seeded with the correct
data to enable our tests.
The following are some commonly employed methods of test validation, which are
frequently used in combination with one another:
Run the test first with a single user only. This makes initial validation much less
complex.
Observe our test while it is running and pay close attention to any behavior we feel is
unusual. Our instincts are usually right, or at least valuable.
Use the system manually during test execution so that we can compare our
observations with the results data at a later time.
Make sure that the test results and collected metrics represent what we intended them
to represent.
Check to see if any of the parent requests or dependent requests failed.
Check the content of the returned pages, as load-generation tools sometimes report
summary results that appear to “pass” even though the correct page or data was not
returned.
Run a test that loops through all of our data to check for unexpected errors.
If appropriate, validate that we can reset test and/or application data following a test
run.
At the conclusion of our test run, check the application database to ensure that it has
been updated (or not) according to our test design. Consider that many transactions in
which the Web server returns a success status with a “200” code might be failing
internally; for example, errors due to a previously used user name in a new user
registration scenario, or an order number that is already in use.
Consider cleaning the database entries between error trials to eliminate data that
might be causing test failures; for example, order entries that we cannot reuse in
subsequent test execution.
Run tests in a variety of combinations and sequences to ensure that one test does not
corrupt data needed by another test in order to run properly.
Although the process and flow of running tests are extremely dependent on our tools,
environment, and project context, there are some fairly universal tasks and considerations to
keep in mind when running tests.
Once it has been determined that the application under test is in an appropriate state to have
performance tests run against it, the testing generally begins with the highest-priority
performance test that can reasonably be completed based on the current state of the project
and application. After each test run, compile a brief summary of what happened during the
test and add these comments to the test log for future reference. These comments may
address machine failures, application exceptions and errors, network problems, or exhausted
disk space or logs. After completing the final test run, ensure that we have saved all of the
test results and performance logs before we dismantle the test environment.
Whenever possible, limit tasks to one to two days each to ensure that no time will be lost if
the results from a particular test or battery of tests turn out to be inconclusive, or if the initial
test design needs modification to produce the intended results. One of the most important
tasks when running tests is to remember to modify the tests, test designs, and subsequent
strategies as results analysis leads to new priorities.
A widely recommended guiding principle is: Run test tasks in one- to two-day batches. See
the tasks through to completion, but be willing to take important detours along the way if an
opportunity to add additional value presents itself.
Using the same data value causes artificial usage of caching because the system
will retrieve data from copies in memory. This can happen throughout different
layers and components of the system, including databases, file caches of the
operating systems, hard drives, storage controllers, and buffer managers. Reusing
data from the cache during performance testing might account for faster testing
results than would occur in the real world.
Some business scenarios require a relatively small range of data selection. In such
a case, repeatedly hitting the same small data set (and therefore the cache) will itself
reproduce other performance-related problems, such as database deadlocks and slower
response times due to timeouts caused by queries to the same items.
marketing campaigns and seasonal sales events.
Some business scenarios require using unique data during load testing; for
example, if the server returns session-specific identifiers during a session after
login to the site with a specific set of credentials, reusing the same login data
would cause the server to return a bad session identifier error. Another frequent
scenario is when the user must enter a unique set of data or the system will fail to
accept the input; for example, registering new users requires entering a unique
user ID on the registration page. (A data-generation sketch follows this list.)
In some business scenarios, we need to control the number of parameterized
items; for example, a caching component that needs to be tested for its memory
footprint to evaluate server capacity, with a varying number of products in the
cache.
In some business scenarios, we need to reduce the script size or the number of
scripts; for example, several instances of an application will live in one server,
reproducing a scenario where an independent software vendor (ISV) will host
them. In this scenario, the Uniform Resource Locators (URLs) need to be
parameterized during load test execution for the same business scenarios.
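The data-generation sketch referred to above follows. It writes a parameter file of unique, per-virtual-user records; the field names and the CSV format are illustrative assumptions, since each load-testing tool has its own way of consuming such data.

import csv
import io
import uuid

def generate_registration_data(count):
    """Yield one unique registration record per virtual user."""
    for i in range(count):
        yield {
            "user_id": f"loadtest_user_{i:05d}_{uuid.uuid4().hex[:8]}",  # unique user ID
            "email": f"loadtest_{i:05d}@example.invalid",
            "order_ref": f"ORD-{i:07d}",
        }

if __name__ == "__main__":
    # Write the records as CSV, the kind of parameter file many tools can feed
    # to their virtual users.
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=["user_id", "email", "order_ref"])
    writer.writeheader()
    for record in generate_registration_data(5):
        writer.writerow(record)
    print(buffer.getvalue())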
Chapter 10 – Performance Test Reporting Fundamentals
10.1 Introduction:
Managers and stakeholders need more than simply the results from various tests — they
need conclusions based on those results, and consolidated data that supports those
conclusions. Technical team members also need more than just results — they need
analysis, comparisons, and details of how the results were obtained. Team members of all
types get value from performance results being shared more frequently. In this chapter,
we will learn how to satisfy the needs of all the consumers of performance test results and
data by employing a variety of reporting and results-sharing techniques, and by learning
exemplar scenarios where each technique tends to be well received.
End-user response time is by far the most commonly requested and reported metric in
performance testing. If we have captured goals and requirements effectively, this is a
measure of presumed user satisfaction with the performance characteristics of the system
or application. Stakeholders are interested in end-user response times to judge the degree
to which users will be satisfied with the application. Technical team members are
interested because they want to know if they are achieving the overall performance goals
from a user’s perspective, and if not, in what areas those goals are not being met.
Even though end-user response times are the most commonly reported performance-
testing metric, there are still important points to consider.
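One such point, offered here as our own illustration with made-up sample values, is that an average alone can mask a small number of very slow responses, so end-user response times are commonly summarized with percentiles as well:

import statistics

# Made-up end-user response times (seconds) collected during one test run.
samples = [0.8, 0.9, 1.1, 1.2, 1.3, 1.4, 1.6, 1.9, 2.4, 6.5]

mean = statistics.mean(samples)
median = statistics.median(samples)
# quantiles(n=10) returns the nine cut points between deciles; the last one is an
# estimate of the 90th percentile, a value commonly quoted alongside the average.
p90 = statistics.quantiles(samples, n=10)[-1]

print(f"mean={mean:.2f}s  median={median:.2f}s  90th percentile={p90:.2f}s")

Here the single slow outlier pulls the mean well above the median, which the percentile view makes visible.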
Figure 10.1 Response Time
10.4 Resource Utilizations
Resource utilizations are the second most requested and reported metrics in performance
testing. Most frequently, resource utilization metrics are reported verbally or in a
narrative fashion. For example, “The CPU utilization of the application server never
exceeded 45 percent. The target is to stay below 70 percent.” It is generally valuable to
report resource utilizations graphically when there is an issue to be communicated.
Stakeholders also frequently request volume, capacity, and rate metrics, even though the
implications of these metrics are often more challenging to interpret. For this reason, it is
important to report these metrics in relation to specific performance criteria or a specific
performance issue. Some examples of commonly requested volume, capacity, and rate
metrics include:
Bandwidth consumed
Throughput
Transactions per second
Hits per second
Number of supported registered users
Number of records/items able to be stored in the database
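As a simple illustration with invented totals, rate metrics such as these are derived from counts observed over the measurement interval:

# Invented totals from a one-hour measurement interval.
test_duration_s = 3_600
completed_transactions = 90_000
http_requests = 1_260_000
bytes_transferred = 42_000_000_000

print("transactions per second:", completed_transactions / test_duration_s)   # 25.0
print("hits per second:", http_requests / test_duration_s)                    # 350.0
print("bandwidth consumed (Mbit/s):",
      bytes_transferred * 8 / test_duration_s / 1_000_000)                    # about 93.3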
Even though component response times are not reported to stakeholders as commonly as
end-user response times or resource utilization metrics, they are frequently collected and
shared with the technical team. These response times help developers, architects,
database administrators (DBAs), and administrators determine what sub-part or parts of
the system are responsible for the majority of end-user response times.
Figure 10.6 Sequential Consecutive Database Updates
10.7 Trends
Trends are one of the most powerful but least-frequently used data-reporting methods.
Trends can show whether performance is improving or degrading from build to build, or
the rate of degradation as load increases. Trends can help technical team members
quickly understand whether the changes they recently made achieved the desired
performance impact.
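A minimal sketch of the idea, with invented per-build measurements: tracking the same metric across builds makes a degradation trend visible even when every individual build still appears acceptable on its own.

# Invented 90th-percentile response times (seconds) for the same scenario, per build.
builds = {
    "build_101": 1.8,
    "build_102": 1.9,
    "build_103": 2.1,
    "build_104": 2.4,
}

previous = None
for build, p90 in builds.items():
    change = "" if previous is None else f" ({(p90 - previous) / previous:+.0%} vs. previous build)"
    print(f"{build}: {p90:.1f}s{change}")
    previous = p90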
10.8 Creating a Technical Report
Although six key components of a technical report are listed below, all six may not be
appropriate for every technical report. Similarly, there may be additional information that
should be included based on exactly what message we are trying to convey with the
report. While these six components will result in successful technical reports most of the
time, remember that sometimes creativity is needed to make us message clear and
intuitive.
Consider including the following key components when preparing a technical report:
A results graph
A table for single-instance measurements (e.g., maximum throughput achieved)
Workload model (graphic)
Test environment (annotated graphic)
Short statements of observations, concerns, questions, and requests for
collaboration
References section
10.8.1 Exemplar Tables for Single-Instance Measurements
10.8.3 Exemplar Test Environment Graphic
Chapter 11 Conclusion:
Performance test execution involves activities such as validating test
environments/scripts, running the test, and generating the test results. It can also include
creating baselines and/or benchmarks of the performance characteristics. It is important
to validate the test environment to ensure that the environment truly represents the
production environment. Validate test scripts to check if correct metrics are being
collected, and if the test script design is correctly simulating workload characteristics.
There are many reasons for load-testing a Web application. The most basic type of load
testing is used to determine the Web application’s behavior under both normal and
anticipated peak load conditions. As we begin load testing, it is recommended that we
start with a small number of virtual users and then incrementally increase the load from
normal to peak. We can then observe how our application performs during this gradually
increasing load condition. Eventually, we will cross a threshold limit for our performance
objectives. For example, we might continue to increase the load until the server processor
utilization reaches 75 percent, or when end-user response times exceed 8 seconds.
Stress testing allows us to identify potential application issues that surface only under
extreme conditions. Such conditions range from exhaustion of system resources such as
memory, processor cycles, network bandwidth, and disk capacity to excessive load due to
unpredictable usage patterns, common in Web applications. Stress testing centers
around objectives and key user scenarios with an emphasis on the robustness, reliability,
and stability of the application. The effectiveness of stress testing relies on applying the
correct methodology and being able to effectively analyze testing results. Applying the
correct methodology is dependent on the capacity for reproducing workload conditions
for both user load and volume of data, reproducing key scenarios, and interpreting the
key performance metrics.
Reference