ST Unit4 Slides
Unit 4
Acceptance Testing
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
Acceptance Testing
List of Contents
Acceptance Testing
- What is It?
- Importance
- Acceptance Testing [Criteria & Execution]
- Acceptance Testing – Challenges
- Alpha Testing
- Beta Testing
- Alpha vs. Beta Testing
3
Acceptance Testing - Importance
4
Acceptance Testing - Criteria
5
Acceptance Testing - Criteria
6
Acceptance Testing - Criteria
7
Acceptance Testing - Execution
8
Acceptance Testing – Practical Challenges
9
Alpha Testing
- Prior to Beta
- Product stability still poor; more ad-hoc process
10
Beta Testing
11
Beta Testing
Process
1. Select & list representative customers
2. Work out a beta test plan
3. Initiate the product and support throughout
4. Carefully monitor the progress and the feedback, both good and bad
5. Have a good response system to avoid frustration for customers
6. Analyze the whole feedback and plough it back for product
improvement
Incentivized participation
12
Alpha Testing vs. Beta Testing
During alpha testing, data is not real and, typically, the data set is very small.
13
THANK YOU
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
Software Testing
Unit 4
Non-Functional Testing
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
Non-Functional Testing
List of Contents
- Overview
- Non-Functional Tests
- Definitions
- Test Planning & Phases
- Scalability
- Reliability
- Stress Test
- Performance Test
- Test Automation
- Test Execution
- Test Analysis
- NF Test Tools
- Security Testing
- Other NF Tests
Functionality Testing
● To evaluate the functional requirements of a system, focusing on the outputs generated in response to selected inputs and execution conditions
● Focus is on testing the product’s behavior (both positive & negative)
from a direct output point of view
● We have seen many aspects of it so far
3
Overview
4
Why Non-Functional Tests?
➔ To find design faults and to help in fixing them
➔ To find the limits of the product
➔ Max no. of concurrent accesses, min memory, max no. of rows, …
➔ To get tunable parameters for the best performance
➔ Optimal combination of values
➔ To find out whether resource upgrades can improve performance (ROI)
➔ To find out whether the product can behave gracefully during stress and load conditions
➔ To ensure that the product can work without degrading for a long duration
➔ To compare with other products and with previous versions of the product under test
➔ To avoid unintended side effects
5
Common Characteristics of NFT
NF behavior
● Depends heavily on the deployment environment
● “multi-” in most of the control parameters
6
Quick Definitions
Scalability test
Testing conducted to find out the maximum capability of the product's parameters.
Performance test
Testing conducted to evaluate the time taken, or response time, of the product to perform its required functions under stated conditions, in comparison with different versions of the same product and with competitive products.
7
Quick Definitions
Reliability test
Testing conducted to evaluate the ability of the product to perform its
required functions under stated conditions for a specified period of time or
number of iterations.
Stress test
Testing conducted to evaluate a system beyond the limits of the specified
requirements or environment resources (such as disk space, memory,
processor utilization, network congestion) to ensure the product behavior is
acceptable.
8
Test Phases
9
Test Planning – Test Strategy Basis
● What are all the inputs that can be used to design the test cases?
○ Product Requirement Document
○ Customer Deployment Information.
○ Key NF requirements & priorities
● Industry / competitor products / current customer product behavior data for
benchmarking.
● What automation can be used.
● Test execution in stages
● What percentage of TCs need to or can be automated.
10
Test Planning - Scope
11
Test Planning - Estimations
12
NF Test Planning – Entry/Exit Criteria
13
Entry/Exit Criteria - Examples
14
Test Planning – Defect Management
15
Test Design – Typical TC Contains
1. Ensures all the non-functional and design requirements are
implemented as specified in the documentation.
2. Inputs – No. of clients, Resources, No. of iterations, Test Configuration
3. Steps to execute, with some verification steps.
4. Tunable Parameters if any
5. Output (Pass/Fail definition): Time taken, resource utilization, operations per unit time, …
6. What data to be collected, at what intervals.
7. Data presentation format
8. The test case priority
16
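The fields listed on the "Typical TC Contains" slide above map naturally onto a small record type. Below is a minimal sketch in Python; all field names and the example instance are illustrative, not part of any actual course tooling.

```python
from dataclasses import dataclass, field

@dataclass
class NFTestCase:
    """Illustrative record for one non-functional test case."""
    name: str
    priority: int                                   # test case priority (1 = highest)
    inputs: dict                                    # no. of clients, resources, iterations, configuration
    steps: list                                     # steps to execute, including verification steps
    tunable_params: dict = field(default_factory=dict)
    pass_criteria: str = ""                         # pass/fail definition
    data_to_collect: list = field(default_factory=list)   # metrics and collection intervals
    report_format: str = "csv"                      # data presentation format

# Hypothetical example instance
tc = NFTestCase(
    name="add_1M_records_perf",
    priority=1,
    inputs={"clients": 50, "iterations": 1, "records": 1_000_000},
    steps=["start server", "add records from all clients", "verify record count"],
    pass_criteria="throughput >= 5000 records/s, CPU < 80%",
    data_to_collect=["response time every 60 s", "memory every 60 s"],
)
```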
Test Design – Test Scenarios
17
What is Scalability?
18
Test Design – Scalability Test
The test cases will focus on testing the maximum limits of the features and utilities while performing some basic operations
Few examples:
Backup and restore of a DB with 1 GB of records.
Add records until the DB size grows beyond 2 GB.
Repair a DB of 2 million records.
19
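As a concrete illustration of the "add records until the DB grows beyond 2 GB" example above, here is a minimal sketch in Python against a local SQLite database; the 2 GB threshold, table layout, and batch size are illustrative assumptions, not part of the slides.

```python
import os
import sqlite3

LIMIT_BYTES = 2 * 1024**3          # assumed 2 GB scalability target (shrink for a dry run)
DB_PATH = "scale_test.db"          # hypothetical test database file

conn = sqlite3.connect(DB_PATH)
conn.execute("CREATE TABLE IF NOT EXISTS records (id INTEGER PRIMARY KEY, payload TEXT)")

rows = 0
batch = ["x" * 1024] * 10_000      # ~10k rows of ~1 KB each per batch
try:
    while os.path.getsize(DB_PATH) < LIMIT_BYTES:
        conn.executemany("INSERT INTO records (payload) VALUES (?)", [(p,) for p in batch])
        conn.commit()
        rows += len(batch)
    print(f"DB exceeded {LIMIT_BYTES} bytes after {rows} rows without failure")
except sqlite3.Error as exc:
    # the point of a scalability test: record where and how the product breaks
    print(f"Failure after {rows} rows: {exc}")
finally:
    conn.close()
```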
Example of Scalability Test
20
Scalability Test– Outcomes
21
Reliability
Probability of failure-free software operation for a specified period
of time OR number of operations in a specified environment
The test cases will focus on the product failures when the operations
are executed continuously for a given duration or iterations – Long
and many!
Examples:
Query continuously for 48 hrs for entries that are not available on a particular server but are available on the server it refers to.
23
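A minimal sketch of the 48-hour reliability (soak) loop described above, in Python; `query_server` is a hypothetical placeholder for the real product call, and the MTBF figure is simply the mean interval between observed failures.

```python
import time

DURATION_S = 48 * 3600                  # assumed 48-hour soak window; shorten for a dry run

def query_server(key):
    """Hypothetical stand-in for the real client call being soak-tested."""
    return True                         # replace with the actual product API call

failures, operations = 0, 0
failure_times = []
start = time.monotonic()
while time.monotonic() - start < DURATION_S:
    operations += 1
    try:
        # entry is absent on this server but resolvable via the server it refers to
        query_server("entry-not-on-this-server")
    except Exception:
        failures += 1
        failure_times.append(time.monotonic() - start)

print(f"{operations} operations, {failures} failures")
if failures:
    # crude MTBF estimate: mean interval between successive observed failures
    intervals = [b - a for a, b in zip([0.0] + failure_times, failure_times)]
    print(f"MTBF ~ {sum(intervals) / len(intervals):.1f} s")
```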
Test Design – Reliability Test
24
Reliability Test – Outcomes
● # of failures in given run
● Interval of defect free operation - MTBF
● Sensitivity to some configuration parameters – which ones cause more or less unreliability
● Identifying causes of failures is hardest.
● Possible failure reasons?
○ Memory leaks
○ Deep defects due to uninitialized values
○ Weak error handling
○ Unstable environment
○ Unintended “side effects”
25
Test Design – Stress Test
To test the behavior of the system under very severe conditions – high load / low resource
A good system should show graceful degradation in output and safe/acceptable behavior under extremes
Example:
Performing login, query, add, repair, backup etc operations randomly from 50 clients
simultaneously at half the rated resource levels – say half memory / low speed
processor…
Since stressed conditions are randomly applied over a period of time, this is similar
to reliability tests
26
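A minimal sketch of the stress example above: many simultaneous clients firing a random mix of operations. Written in Python with threads; the operation stubs, client count, and workload size are illustrative placeholders for the real product calls.

```python
import random
import threading

CLIENTS = 50                 # assumed number of simultaneous clients
OPS_PER_CLIENT = 200         # illustrative workload per client
errors = []
errors_lock = threading.Lock()

# Hypothetical product operations; replace the bodies with real client calls.
def login(cid): pass
def query(cid): pass
def add(cid): pass
def backup(cid): pass

OPERATIONS = [login, query, add, backup]

def client(cid):
    for _ in range(OPS_PER_CLIENT):
        op = random.choice(OPERATIONS)        # stress = a random mix of operations
        try:
            op(cid)
        except Exception as exc:              # record failures instead of stopping
            with errors_lock:
                errors.append((cid, op.__name__, repr(exc)))

threads = [threading.Thread(target=client, args=(i,)) for i in range(CLIENTS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(f"{len(errors)} failed operations out of {CLIENTS * OPS_PER_CLIENT}")
```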
Stress Test - Techniques
27
Stress Test - Outcomes
28
Stress Test – Failure Cases
29
Test Design – Performance Test
The test cases focus on measuring response time and throughput for different operations under a defined environment, and on tracking resource consumption when resources are shared with other systems
Example:
Performance for adding 1 million records
30
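A minimal sketch of measuring response time and throughput for the "add records" example above, in Python; `add_record` is a hypothetical stand-in for the product operation and the record count is scaled down for illustration.

```python
import statistics
import time

def add_record(i):
    """Hypothetical stand-in for the product's 'add record' operation."""
    pass

N = 100_000                                 # illustrative; the slide example uses 1 million
latencies = []
start = time.perf_counter()
for i in range(N):
    t0 = time.perf_counter()
    add_record(i)
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

print(f"throughput  : {N / elapsed:,.0f} records/s")
print(f"mean latency: {statistics.mean(latencies) * 1e6:.1f} us")
print(f"p95 latency : {statistics.quantiles(latencies, n=20)[18] * 1e6:.1f} us")
```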
Performance Test – Multiple Needs
31
Performance Test – Typical Outcome
32
Performance Test – Methodology
33
Test Automation – Considerations
● Test automation is itself a software development activity
● Specialized tools and/or Shell script driven batch programs
● Input/Configuration parameters, which are not hard-coded
● Modularization and Reusability
● Selective and Random execution of test cases
● Reporting Data and test logs
● Handling abnormal test termination
● Tool should be maintainable and reliable
34
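A minimal sketch of a test driver reflecting the considerations above (external configuration rather than hard-coded values, selective and random execution, report logs, handling abnormal termination); all file names, flags, and keys are illustrative, not an existing tool.

```python
import argparse
import json
import logging
import random

logging.basicConfig(filename="nf_run.log", level=logging.INFO)

def run_case(case, cfg):
    """Hypothetical dispatcher: run one test case against the given configuration."""
    logging.info("running %s with %s", case, cfg)
    # ... call into the real test modules here ...

def main():
    parser = argparse.ArgumentParser(description="NF test driver (illustrative)")
    parser.add_argument("--config", default="nf_config.json",
                        help="input/configuration parameters, kept out of the code")
    parser.add_argument("--select", nargs="*", help="run only these test cases")
    parser.add_argument("--random", type=int, metavar="N",
                        help="run N randomly chosen test cases")
    args = parser.parse_args()

    with open(args.config) as f:
        cfg = json.load(f)

    cases = args.select or cfg["test_cases"]
    if args.random:
        cases = random.sample(cases, args.random)

    for case in cases:
        try:
            run_case(case, cfg)
        except Exception:
            logging.exception("abnormal termination in %s", case)  # log and keep going

if __name__ == "__main__":
    main()
```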
Test Setup
● Non-functional test setups are usually huge
● Different kinds of tests may need different setups
● Good idea to build the setup before execution.
Test objectives:
● Testing to improve the product quality factors by finding and helping in fixing the defects
● Testing to gain confidence on the product quality factors
● Tunable parameters
36
Test Execution – Data Collection
Example
37
Test Execution – Outcomes
38
Test Analysis - Sample Charts
39
Test Analysis - Scalability
Memory ∝ number of clients ⇒ the ROI from a memory upgrade is higher as the number of clients increases.
40
Test Analysis - Performance
CPU and memory ∝ number of clients: better performance can be obtained by increasing these resources.
Memory: after 100 clients, process creation fails or the memory allocator in the S/W fails.
This indicates a defect, or that upgrading memory beyond a certain limit does not yield any ROI.
41
Test Analysis - ROI
Assuming the customers already have 128 MB RAM, the ROI from a memory upgrade is short term, as beyond 256 MB there is no improvement in performance.
42
NF Test Tools
1. LoadRunner (HP)
2. JMeter (Apache)
3. Performance Tester (Rational)
4. LoadUI (SmartBear)
5. Silk Performer (Borland)
43
Security Testing
● Both static and dynamic
● Weak spots are called security vulnerabilities
● Many test tools to identify vulnerabilities at application level. Ex.,
○ Access control – application level
○ Direct usage of resources through low level code
○ SQL injection through input
○ Buffer overflow
○ Usage of encryption
○ Sensitive info on a non-secure channel (HTTP)
○ API Interfaces
● OWASP Certifications / Standard
44
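To make the "SQL injection through input" item concrete, here is a small self-contained Python/SQLite sketch contrasting a query built by string concatenation with a parameterized query; the table and payload are illustrative only.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"   # classic injection payload used as a negative test input

# Vulnerable pattern: query text built by string concatenation
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
print("unsafe rows:", conn.execute(unsafe).fetchall())            # returns every row

# Safer pattern: parameterized query, the input is treated purely as data
safe = "SELECT * FROM users WHERE name = ?"
print("safe rows  :", conn.execute(safe, (user_input,)).fetchall())  # returns nothing
```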
Few Other NF Test Types
1. Endurance testing
2. Load testing
3. Compatibility testing
4. Standards Compliance testing
5. Usability testing
6. Accessibility & Internationalization testing
45
THANK YOU
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
Software Testing
Unit 4
Regression Testing
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
Regression Testing
List of Contents
- Regression Testing & Types
- Methodology
- Selecting Test Cases
- Classifying Test Cases
- Resetting Test Cases
- How to Conclude Results
- Popular Strategies
- Best Practices
Not just defect fixes; other modifications also call for regression testing.
Regression happens in fields other than software as well, wherever a degree of complexity is involved.
3
Regression Testing – Types
I. Final regression testing
● Unchanged build exercised for the minimum period of “cook time” (gold master build)
● To ensure that “the same build of the product that was tested reaches the customer”
● To have full confidence on product prior to release
II. Regular regression testing
● To validate the product builds between test cycles
● Used to get a comfort feeling on the bug fixes, and to carry on with next cycle of
testing
● Also used for making intermediate releases (alpha, beta)
4
Regression Testing – Types
5
Regression Testing - Methodology
6
Performing Initial Smoke Test
Smoke testing ensures that the basic functionality works and indicates
that the build can be considered for further testing.
7
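A minimal sketch of what such a smoke suite could look like in pytest style; the module `myapp` and its functions are hypothetical placeholders for the product under test.

```python
# Illustrative smoke suite: a handful of trivial checks that decide whether the
# build is worth further testing at all.
import pytest

myapp = pytest.importorskip("myapp")   # skip the whole file if the build did not install

def test_process_starts():
    assert myapp.start() is not None          # the product comes up at all

def test_version_reported():
    assert myapp.version().startswith("4.")   # the build carries the expected version

def test_basic_operation():
    assert myapp.ping() == "pong"             # one trivial end-to-end operation works
```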
What Is Needed for Selecting Test Cases?
● Bug fixes and how they affect the system
● Area of frequent defects
● Area that has undergone many / recent code changes
● Area that is highly visible to the users
● Area that has more risks
● Core features of the product which are mandatory requirements of the customer
Points to Remember…
● Emphasis is more on the criticality of bug fixes than the criticality of the defect itself
● More positive test cases than negative test cases for final regression
● “Constant set” of regression test cases is rare
8
Classifying Test Cases
9
Classifying Test Cases
10
Selecting Test Cases
Criteria
● Bug fixes work
● No side-effects
11
Resetting Test Cases
12
Resetting Test Cases (Contd.)
It is done
● When there is a major change in the product
● When there is a change in the build procedure that affects the
product
● In a large release cycle where some test cases have not been
executed for a long time
● When you are in the final regression test cycle with a few selected
test cases
● In a situation in which the expected results could be quite different
from history
13
How to Conclude Results
14
Popular Strategies
1. Regress all: Rerun all priority 1, 2 & 3 TCs. Time becomes the constraint and ROI is less.
2. Priority-based regression: Rerun priority 1, 2 & 3 TCs based on time availability; the cut-off is decided by the time available.
3. Regress changes: Compare code changes and select test cases based on impact (grey box strategy).
4. Random regression: Select random test cases and execute. Tests can include both automated and non-automated test cases.
5. Context-based dynamic regression: Execute a few of the priority-1 TCs; based on context (e.g., new defects found, boundary values) and their outcome, select additional related cases.
An effective regression strategy is a combination of all of the above, not any of them in isolation.
15
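A minimal sketch of strategy 2 above (priority-based regression) in Python: pick test cases in priority order until the available execution time is used up. The suite data and time budget are illustrative.

```python
def select_by_priority(test_cases, time_budget_min):
    """test_cases: list of (name, priority, est_minutes); lower priority number = more important."""
    chosen, used = [], 0
    for name, prio, minutes in sorted(test_cases, key=lambda t: t[1]):
        if used + minutes <= time_budget_min:       # keep only what still fits the budget
            chosen.append(name)
            used += minutes
    return chosen

suite = [("login_basic", 1, 10), ("bulk_import", 2, 45),
         ("report_export", 3, 30), ("backup_restore", 1, 25)]
print(select_by_priority(suite, time_budget_min=60))
# -> ['login_basic', 'backup_restore']
```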
Some Guidelines
● Do not select test cases that are bound to fail and have little or no relevance to the bug fixes.
● Select more positive test cases than negative test cases for the final regression test cycle, as more of the latter may create confusion and unexpected heat.
● The regression guidelines are equally applicable when you have a major release of a product, have executed all test cycles, and are planning a regression test cycle.
16
Best Practices
17
Summary
18
THANK YOU
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
Software Testing
Unit 4
Agile & AdHoc Testing
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
Agile & AdHoc Testing
List of Contents
- Iterative Testing
- Agile Testing
- Methodology
- AdHoc Testing
- Defect Seeding
- Examples of AdHoc Testing
3
Agile Testing
Agile testing is software testing that follows the best practices of the Agile development framework. Agile development takes an incremental approach to development; similarly, Agile testing takes an incremental approach to testing.
4
Advantages of Agile Testing
5
Agile Testing Methodology
1. Impact assessment
2. Planning
3. Daily stand-ups
4. Reviews
6
Defect Seeding
Defect seeding is a method of intentionally introducing defects into a product to
check the rate of detection and residual defects.
7
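One common way the detection rate of seeded defects is turned into an estimate of residual defects is the proportional estimator sketched below; this formula is a standard illustration, not something prescribed by the slide.

```python
# If testing finds the same fraction of seeded and real defects, then
#   estimated real defects ~= real_found * seeded_total / seeded_found
def estimate_total_defects(seeded_total, seeded_found, real_found):
    detection_rate = seeded_found / seeded_total
    return real_found / detection_rate

# 100 seeded defects, 80 of them rediscovered, 40 real defects found so far
print(estimate_total_defects(100, 80, 40))   # -> 50.0 estimated real defects
# estimated residual real defects ~= 50 - 40 = 10
```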
AdHoc Testing
When software testing is performed without proper planning and documentation, it is said to be AdHoc Testing.
AdHoc tests are done after formal testing is performed on the application.
AdHoc methods are the least formal type of testing, as they are NOT a structured approach. Hence, defects found using this method are hard to replicate, as there are no test cases aligned to those scenarios.
8
AdHoc Testing Examples
9
THANK YOU
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
Software Testing
Unit 4
Software Testing Tools
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
1
Software Testing Tools
List of Contents
- Software Testing Tools
- Selenium
- Advantages & Disadvantages of Selenium
- Test Management Tools
- Bugzilla
- Advantages & Disadvantages of Bugzilla
- Jira
- Advantages & Disadvantages of Jira
- Bugzilla vs Jira (A Comparison)
3
Selenium
Selenium is an open-source,
automated testing tool used to test
web applications across various
browsers.
4
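A minimal Selenium WebDriver sketch in Python, assuming the `selenium` package and a matching Chrome driver are installed; the page checked (example.com) is just an illustration.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    driver.get("https://example.com")
    assert "Example Domain" in driver.title            # simple title check
    heading = driver.find_element(By.TAG_NAME, "h1")   # locate an element on the page
    assert heading.text == "Example Domain"
finally:
    driver.quit()                                      # always release the browser
```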
Advantages & Disadvantages of Selenium
5
Test Management Tools
Test management tools are used to store information on how testing is to be done, plan
testing activities and report the status of quality assurance activities.
Examples include Bugzilla & Jira
6
Bugzilla
Bugzilla is an open-source tool used to
track bugs and issues of a project or a
software. It helps the developers and other
stakeholders to keep track of unresolved
problems with the product.
7
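As an illustration of programmatic access, here is a hedged sketch that queries a Bugzilla instance through its REST API (the `/rest/bug` endpoint) using the `requests` library; the server URL, product name, and API key are assumptions, not course material.

```python
import requests

BASE = "https://bugzilla.example.org"      # hypothetical Bugzilla instance
params = {
    "product": "MyProduct",                # illustrative product name
    "status": "NEW",
    "api_key": "<your-api-key>",
}
resp = requests.get(f"{BASE}/rest/bug", params=params, timeout=30)
resp.raise_for_status()
for bug in resp.json().get("bugs", []):
    print(bug["id"], bug["summary"], bug["severity"])
```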
Features of Bugzilla
8
Advantages & Disadvantages of Bugzilla
9
Jira
10
Jira
11
Some Jira Use-Cases
12
Advantages & Disadvantages of Jira
13
Bugzilla vs Jira
14
THANK YOU
Prof Raghu B. A. Rao
Department of Computer Science and Engineering
15