
UNIT IV

Performance Testing
• Performance testing is the process of determining the speed or
effectiveness of a computer, network, software program or device
• Performance testing is the process by which software is tested to determine
the current system performance
• Factors that govern performance testing:
– Throughput
– Response time
– Tuning
– Benchmarking
Throughput
• Capability of the product to handle multiple transactions in a given period
• Throughput represents the number of requests/business transactions processed
by the product in a specified time duration.
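The definition above reduces to a simple ratio. A minimal sketch in Python; the request count and time window below are illustrative values, not measurements from a real system:

```python
def throughput(requests_processed, duration_seconds):
    """Requests (or business transactions) handled per second."""
    return requests_processed / duration_seconds

# Hypothetical example: 12,000 requests served over a 60-second window
print(throughput(12000, 60))  # 200.0 requests/second
```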
Response Time
• Response time is defined as the delay between the point of the request and
the first response from the product.
• Response time typically increases as the user load increases.
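Response time can be measured by timing the gap between issuing a request and receiving its result. A hedged sketch; the sleeping lambda is a stand-in for a real service call, not an actual system under test:

```python
import time

def measure_response_time(request_fn):
    """Return (result, elapsed_seconds) for a single request."""
    start = time.monotonic()  # monotonic clock is safe for interval timing
    result = request_fn()
    return result, time.monotonic() - start

# Stand-in "service" that takes at least 10 ms to respond
result, elapsed = measure_response_time(lambda: time.sleep(0.01) or "ok")
print(result, elapsed >= 0.01)  # ok True
```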
Tuning
• Tuning is an iterative process used to identify and eliminate
bottlenecks until the application meets its performance objectives.
• We establish a baseline, then collect data, analyze the results,
identify the bottlenecks, make configuration changes, and measure
again.
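The measure/analyze/change loop can be sketched as follows. The dict-based "system" and the fixed 150 ms improvement per change are invented stand-ins, not a real application:

```python
def tune(measure, apply_change, target_ms, max_iterations=10):
    """Return the latency once the objective is met, else None."""
    baseline = measure()  # recorded so later changes can be judged against it
    for _ in range(max_iterations):
        current = measure()
        if current <= target_ms:
            return current  # performance objective met
        apply_change()  # e.g. adjust pool sizes, caches, indexes
    return None  # objective not met within the iteration budget

# Hypothetical system whose latency drops with each configuration change
state = {"latency_ms": 500}
result = tune(lambda: state["latency_ms"],
              lambda: state.update(latency_ms=state["latency_ms"] - 150),
              target_ms=200)
print(result)  # 200
```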
Benchmarking
• Even well-improved performance of a product makes no business sense
if that performance does not match up to competitive products.
• A careful analysis is required to chalk out the list of transactions to be
compared across products so that an apples-to-apples comparison becomes
possible.
Load Testing
• To test the performance and behaviour at the peak load (or speed or
configuration)
• Ex: If 100 users is the supported limit, testing the system by applying
100 users is called load testing.
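Simulating the 100-user peak can be sketched with a thread pool, one worker per simulated user. `fake_transaction` is a stand-in for a real request to the system under test, not an actual client:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_transaction(user_id):
    time.sleep(0.01)  # mimic service latency
    return user_id

def run_load(num_users):
    """Fire one transaction per simulated user, all concurrently."""
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        return list(pool.map(fake_transaction, range(num_users)))

results = run_load(100)  # apply the peak load of 100 users
print(len(results))  # 100
```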
Stress Testing
• Stress Testing is used to find ways to break the system. The test also
provides the range of maximum load the system can hold.
• It is to test the stress limit of the system (maximum number of users , peak
demands etc).
• Ex: Applying more than 100 users and driving the system towards a crash
is called stress testing.
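Stress testing amounts to ramping the load past the supported limit until the system breaks. A hedged sketch; `run_step` is a stand-in that reports whether the system survived a given load level:

```python
def ramp_until_failure(run_step, start=100, step=50, max_users=1000):
    """Return the first load level at which the system fails, or None."""
    users = start
    while users <= max_users:
        if not run_step(users):
            return users  # breaking point found
        users += step
    return None  # no failure observed within the cap

# Invented system that starts failing above 250 concurrent users
print(ramp_until_failure(lambda u: u <= 250))  # 300
```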
Volume Testing
• Volume Testing is to verify that the performance of the application is not
affected by the volume of data that is being handled by the application.
• Testing the application for the large volume of data is called volume
testing.
• This is mainly performed to check for memory leaks and the capacity of
the server to handle large volumes of data.
Fail Over Testing
• Failover testing is a technique that validates if a system can allocate extra
resources and back up all the information and operations when a system
fails abruptly due to some reason.
• This test determines the ability of a system to handle critical failures
and fail over to backup servers.
Recovery Testing
• Recovery Testing (also called Reliability Testing) verifies whether the
application is able to return to its normal state after a failure or
abnormal behaviour, and how long it takes to do so (in other words, a
time estimation).
• It tests how well the system is able to recover from crashes and
hardware failures.
• It tests the system’s response to the presence of errors and loss of data.
Configuration Testing
• Configuration Testing is the process of testing the system under each
configuration of the supported software and hardware.
• Here, the different configurations of hardware and software mean
multiple operating system versions, various browsers, various supported
drivers, distinct memory sizes, different hard drive types, various types
of CPU, etc.
Compatibility Testing
• Compatibility testing is a type of non-functional testing performed on
an application to check its compatibility (running capability) on
different platforms/environments.
• This testing is done only when the application becomes stable.
Usability Testing
• Usability Testing is a type of testing done from an end user’s
perspective to determine if the system is easily usable. It is generally
the practice of testing how easy a design is to use with a group of
representative users.
• Types
– Guerilla Testing
– Usability Lab
– Screen or Video Recording
Testing the Documentation
• Before Testing:
– SRS Document
– Test Policy document
– Test Strategy document
– Traceability Matrix document
• During Testing:
– Test Case Document
– Test description
– Test case report
– Test logs
• After Testing:
– Test Summary
Security Testing
• Security Testing is a type of Software Testing that uncovers vulnerabilities
of the system and determines that the data and resources of the system are
protected from possible intruders.
• It ensures that the software system and application are free from any
threats or risks that can cause a loss.
• Security Testing is the process to determine that an information system
protects data and maintain the functionality as intended.
Testing in Agile
• In Agile frameworks, both testing and development are carried out in the
same sprint. Once the testing team has a clear idea of what is being
developed, the testers work out how the system should behave after the
new changes are incorporated and whether it satisfies the business needs
or acceptance criteria.
• While the development team works on the design and code changes, the
testing team works on the testing approach and comes up with the test
scenarios and test scripts.
Contd…..
• Test Strategy and Planning
– It’s the responsibility of the testing team to have a structured and definite
testing approach and strategy in the project.
• Importance of Automation for Testing in Agile
– Agile needs continuous testing of the functionalities developed in past Sprints.
The use of Automation helps complete this regression testing on time.
• Test Coverage in Agile
– Test coverage for a user story is usually discussed and finalized during the
backlog grooming sessions and later during the Sprint Planning meetings.
• Test Data Management in Agile
– Test data is one of the most important factors in testing. Creating and
manipulating the data for testing various test cases is one of the challenging
tasks in testing. Agile Testing in Sprints gets difficult when the system is
dependent on another team for test data.
Contd…..
• Impact Analysis in Agile
– In Agile, there is a more significant role of the testing team in impact analysis
and design reviews as they are involved in the design discussions happening
for a story.
– This helps the developer in the impact analysis and debugging of the issues
• Testing Practices in Agile:
– Types of Testing:
• Unit Testing
• Integration Testing
• Smoke Testing
• System Testing
• Regression Testing
• Performance Testing
• Exploratory Testing
• Client/User Acceptance Testing
Contd…
• Defect Management in Agile
Whenever stories are available to test, the testing team starts testing;
when a bug is found, a defect is raised to make the team and the relevant
stakeholders aware of the issue. Defects have to be tracked formally in a
defect management tool. Critical defects are discussed with the team
along with the Product Owner, who finalizes the criticality of the defect
and of the story under which the defect has been logged.
• Defect Removal Effectiveness
Defect removal effectiveness is a metric that tells how many defects the
testers identified in their testing and how many slipped through to the
next stage.
Defect Removal Effectiveness (DRE) = (No. of in-process defects / Total
no. of defects) × 100
(Total number of defects = in-process defects + defects found in
Sprint/Release review)
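The DRE formula above translates directly into code. The defect counts below are illustrative, not real project data:

```python
def defect_removal_effectiveness(in_process_defects, review_defects):
    """DRE = (in-process defects / total defects) * 100."""
    total = in_process_defects + review_defects
    return (in_process_defects / total) * 100

# Hypothetical sprint: 45 defects caught in-process, 5 slipped to review
print(defect_removal_effectiveness(45, 5))  # 90.0
```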
Bug Life Cycle in Agile
