
ST - Module 4

Test Selection & Minimization for Regression Testing


Regression Testing
• Software undergoes constant changes. Such changes are necessitated because of
defects to be fixed, enhancements to be made to existing functionality, or new
functionality to be added.
• Anytime such changes are made, it is important to ensure that
o The changes or additions work as designed; and
o The changes or additions do not break something that is already working and
should continue to work.
• Regression testing is designed to address the above two purposes.
• Regression testing is done to ensure that enhancements or defect fixes made to
the software work properly and do not affect the existing functionality.

Example:
• Assume that in a given release of a product, there were three defects—D1, D2, and
D3.
• When these defects are reported, the development team will fix these defects and
the testing team will perform tests to ensure that these defects are indeed fixed.
• When the customers start using the product (modified to fix defects D1, D2, and
D3), they may encounter new defects—D4 and D5.
• Again, the development and testing teams will fix and test these new defect fixes.
• But, in the process of fixing D4 and D5, as an unintended side-effect, D1 may
resurface.
• Thus, the testing team should not only ensure that the fixes take care of the
defects they are supposed to fix, but also that they do not break anything else
that was already working.

Uses of Regression Testing


• Regression testing is essential to make quick and frequent releases and also
deliver stable software.
• Regression testing ensures that any new feature introduced into the existing
product does not adversely affect the current functionality.
• Regression testing follows selective re-testing technique. Whenever the defect
fixes are done, only a set of test cases that need to be run to verify the defect fixes
are selected by the test team.
• An impact analysis is done to find out what areas may get impacted due to those
defect fixes. Based on the impact analysis, some more test cases are selected to
take care of the impacted areas.
• Since this testing technique focuses on reuse of existing test cases that have
already been executed, the technique is called selective re-testing.

Types of Regression Testing


• There are two types of regression testing in practice.
1. Regular regression testing
2. Final regression testing
• Regular regression testing:
o A regular regression testing is done between test cycles to ensure that the
defect fixes that are done and the earlier functionality that were working
with the earlier test cycles continue to work.
o A regular regression testing can use more than one product build for the test
cases to be executed.
• Final regression testing:
o A final regression testing is done to validate the final build before release.
o The SCM (software configuration management) engineer delivers the final
build with the media and other contents exactly as it would go to the
customer.
o The final regression test cycle is conducted for a specific period of duration,
which is mutually agreed upon between the development and testing
teams. This is called the “cook time” for regression testing.
o Cook time is necessary to keep testing the product for a certain duration,
since some of the defects (for example, Memory leaks) can be found only
after the product has been used for a certain time duration.
• The regression testing types discussed above are represented in Figure 8.1.
• Reg. 1 and Reg. 2 are regular regression test cycles, and the final regression is
shown as Final Reg. in the figure.
When to do Regression Testing?
• It is necessary to perform regression testing when
1. A reasonable amount of initial testing is already carried out.
2. A good number of defects have been fixed.
3. Defect fixes that can produce side-effects are taken care of.
• Regression testing may also be performed periodically, as a pro-active measure.
• RT can be performed irrespective of which test phase the product is in.
• Regression testing is both a planned test activity and a need-based activity and it is
done between builds and test cycles.
• Hence, RT is applicable to all phases in a software development life cycle (SDLC)
and also to component, integration, system, and acceptance test phases.
Regression Testing Process
• A well-defined methodology for regression testing is very important as this is often
the final type of testing that is performed just before a release.
• There are several methodologies for regression testing; the methodology discussed
here is made up of the following steps:
1. Performing an initial “Smoke” or “Sanity” test
2. Understanding the criteria for selecting the test cases
3. Classifying the test cases into different priorities
4. A methodology for selecting test cases
5. Resetting the test cases for test execution
6. Concluding the results of a regression cycle

1. Performing an Initial “Smoke” or “Sanity” Test


• Whenever changes are made to a product, it should first be made sure that
nothing basic breaks.
• In addition, you may want to ensure that the key interfaces to other products also
work properly.
• This is to be done before performing any other more detailed tests on the product.
Smoke testing consists of
1. Identifying the basic functionality that a product must satisfy;
2. Designing test cases to ensure that these basic functionalities work and packaging
them into a smoke test suite;
3. Ensuring that every time a product is built, this suite is run successfully before
anything else is run; and
4. If this suite fails, escalating to the developers to identify the changes and
perhaps change or roll back the changes to a state where the smoke test suite
succeeds.
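
• As a rough illustration of steps 3 and 4, a build pipeline can gate all further testing
on the smoke suite. The sketch below is only a hypothetical example (the test names
and checks are invented); it shows the control flow of running the packaged smoke
cases first and escalating if any of them fails:

#include <functional>
#include <iostream>
#include <string>
#include <vector>

// A smoke test is a named check of one piece of basic functionality.
struct SmokeTest {
    std::string name;
    std::function<bool()> check;   // returns true when the basic behaviour works
};

int main() {
    // Hypothetical smoke suite packaged for every build (steps 1 and 2 above).
    std::vector<SmokeTest> smokeSuite = {
        {"application starts",      [] { return true;  }},
        {"login screen loads",      [] { return true;  }},
        {"key interface reachable", [] { return false; }},   // pretend this broke
    };

    // Step 3: run the suite before anything else on every new build.
    bool allPassed = true;
    for (const auto& test : smokeSuite) {
        bool ok = test.check();
        std::cout << (ok ? "[PASS] " : "[FAIL] ") << test.name << '\n';
        allPassed = allPassed && ok;
    }

    // Step 4: if the suite fails, escalate to the developers and stop;
    // no further testing is attempted on this build.
    if (!allPassed) {
        std::cout << "Smoke suite failed: escalate, then fix or roll back the changes.\n";
        return 1;
    }
    std::cout << "Smoke suite passed: build accepted for further testing.\n";
    return 0;
}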
Difference between Smoke and Sanity Testing
• Purpose: Smoke testing is performed to ascertain that the critical functionalities of
the program are operating properly. Sanity testing is done to check whether bugs
have been fixed after the build and to verify new functionalities.
• Documentation: Smoke testing can be documented and is scripted. Sanity testing
cannot be documented and is unscripted.
• Basis of testing: The major goal of smoke testing is to ensure that the newly
generated build is stable enough to withstand further rigorous testing. The major
goal of sanity testing is to determine the system’s rationality and correctness to
ensure that the proposed functionality performs as intended.
• Executed by: Smoke testing is done by both developers and testers. Sanity testing
is done by testers.
• Subset: Smoke testing is a subset of acceptance testing. Sanity testing is a subset of
regression testing.
• Performed on: Smoke testing is first performed on the initial build. Sanity testing is
performed on a stable build or on the new features in the software.
• Coverage: Smoke testing covers the end-to-end basic functionalities of the system.
Sanity testing covers the specific modules in which code changes have been made.

2. Understanding the Criteria for Selecting the Test Cases


• There are two approaches to selecting the test cases for a regression run.
• First, an organization can choose to have a constant set of regression tests that are
run for every build or change. In such a case, deciding what tests to run is simple.
But this approach is likely to be sub-optimal because:
o In order to cover all fixes, the constant set will have to encompass all features,
so tests that are not required may be run every time; and
o A given set of defect fixes or changes may introduce problems for which
there may not be ready-made test cases in the constant set. Hence, even
after running all the regression test cases, the newly introduced defects may
go undetected.
• A second approach is to select the test cases dynamically for each build by making
judicious choices of the test cases. The selection of test cases for regression testing
requires knowledge of:
o The defect fixes and changes made in the current build;
o The ways to test the current changes;
o The impact that the current changes may have on other parts of the system;
o The ways of testing the other impacted parts.
• Some of the criteria to select test cases for regression testing are as follows:
1. Include test cases that have produced the maximum defects in the past
2. Include test cases for a functionality in which a change has been made
3. Include test cases in which problems are reported
4. Include test cases that test the basic functionality or the core features of the
product which are mandatory requirements of the customer
5. Include test cases that test the end-to-end behavior of the application
6. Include test cases to test the positive test conditions
7. Include test cases for areas that are highly visible to the users

3. Classifying Test Cases


• To enable choosing the right tests for a regression run, the test cases can be
classified into various priorities based on importance and customer usage as:
o Priority-0:
▪ These test cases can be called sanity test cases which check basic
functionality and are run for accepting the build for further testing.
▪ They are also run when a product goes through a major change.
▪ These test cases deliver a very high project value both to the product
development teams and to the customers.
o Priority-1: These test cases use the basic and normal setup and deliver high
project value both to the development team and to the customers.
o Priority-2:
▪ These test cases deliver moderate project value.
▪ They are executed as part of the testing cycle and selected for
regression testing on a need basis.
4. Methodology for Selecting Test Cases
• Once the test cases are classified into different priorities, the test cases can be
selected.
• There could be several right approaches to regression testing, which need to be
decided on a case-to-case basis.
• Case 1:
o If the criticality and impact of the defect fixes are low, then it is enough that
a test engineer selects a few test cases from test case database (TCDB), (it is
a repository that stores all the test cases that can be used for testing a
product) and executes them.
o These test cases can fall under any priority (0, 1, or 2).
• Case 2:
o If the criticality and the impact of the defect fixes are medium, then we need
to execute all Priority-0 and Priority-1 test cases.
o If defect fixes need additional test cases (few) from Priority-2, then those test
cases can also be selected and used for regression testing.
o Selecting Priority-2 test cases in this case is desirable but not necessary.
• Case 3:
o If the criticality and impact of the defect fixes are high, then we need to
execute all Priority-0, Priority-1 and a carefully selected subset of Priority-2
test cases
• The above methodology requires that the impact of defect fixes be analyzed for all
defects.
• This can be a time-consuming procedure. If, for some reason, there is not enough
time and the risk of not doing an impact analysis is low, then the alternative
methodologies given below can be considered:
o Regress all: For regression testing, all priority 0, 1, and 2 test cases are rerun.
This means all the test cases in the regression test bed/suite are executed.
o Priority based regression: For regression testing based on this priority, all
priority 0, 1, and 2 test cases are run in order, based on the availability of time.
Deciding when to stop the regression testing is based on the availability of time.
o Regress changes: For regression testing using this methodology, code changes
are compared to the last cycle of testing and test cases are selected based on
their impact on the code.
o Random regression: Random test cases are selected and executed for this
regression methodology.
o Context based dynamic regression: A few Priority-0 test cases are selected, and
based on the context created by analyzing the execution and outcome of those
test cases, additional related test cases are selected for continuing the
regression testing.
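
• The three cases above can be read as a simple selection rule driven by the
criticality and impact of the defect fixes. The following sketch is only illustrative
(the enum values, test-case fields, and TCDB contents are assumed, not taken from
any real tool); it picks Priority-0/1/2 test cases according to Cases 1-3:

#include <iostream>
#include <string>
#include <vector>

// Criticality/impact of the current set of defect fixes.
enum class Impact { Low, Medium, High };

// A hypothetical test-case record from the TCDB.
struct TestCase {
    std::string id;
    int priority;        // 0, 1 or 2, as classified earlier
    bool coversFix;      // exercises an area directly touched by the current fixes
    bool impactedArea;   // exercises an area flagged by impact analysis
};

// Select a regression suite from the TCDB using the Case 1/2/3 rules above.
std::vector<TestCase> selectRegressionTests(const std::vector<TestCase>& tcdb, Impact impact) {
    std::vector<TestCase> selected;
    for (const auto& tc : tcdb) {
        bool pick = false;
        switch (impact) {
        case Impact::Low:       // Case 1: a few test cases around the fixes, any priority
            pick = tc.coversFix;
            break;
        case Impact::Medium:    // Case 2: all P0 and P1, plus P2 only where the fixes need it
            pick = tc.priority <= 1 || (tc.priority == 2 && tc.coversFix);
            break;
        case Impact::High:      // Case 3: all P0 and P1, plus a chosen subset of P2
            pick = tc.priority <= 1 ||
                   (tc.priority == 2 && (tc.coversFix || tc.impactedArea));
            break;
        }
        if (pick) selected.push_back(tc);
    }
    return selected;
}

int main() {
    std::vector<TestCase> tcdb = {
        {"TC1", 0, false, false}, {"TC2", 1, true, false},
        {"TC3", 2, true, false},  {"TC4", 2, false, true}};
    for (const auto& tc : selectRegressionTests(tcdb, Impact::Medium))
        std::cout << tc.id << " selected\n";   // TC1, TC2, TC3
}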

5. Resetting the Test Cases for Regression Testing


• A method or procedure that uses test case result history to indicate that some of
the test cases should be selected for regression testing is called a reset procedure.
• Resetting test cases reduces the risk involved in testing defect fixes by making the
testers go through all the test cases and selecting appropriate test cases based on
the impact of those defect fixes.
• Resetting of test cases is not expected to be done often, and it needs to be done
with the following considerations in mind:
1. When there is a major change in the product.
2. When there is a change in the build procedure which affects the product.
3. Large release cycle where some test cases were not executed for a long time.
4. When the product is in the final regression test cycle with a few selected test
cases.
5. When there is a situation where the expected results of the test cases could be
quite different from those of the previous cycles.
6. The test cases relating to defect fixes and production problems need to be
evaluated release after release. In case they are found to be working fine, they
can be reset.
7. Whenever existing application functionality is removed, the related test cases
can be reset.
8. Test cases that consistently produce a positive result can be removed.
9. Test cases relating to a few negative test conditions (not producing any defects)
can be removed.

6. Concluding the Results of Regression Testing


• Since regression testing uses test cases that have already been executed more than
once, it is expected that 100% of those test cases pass on the same build if the
defect fixes are done right.
• In situations where the pass percentage is not 100%, the test manager can compare
the current results with the previous results of the test cases to conclude whether
the regression was successful or not.
• If the result of a particular test case was a pass using the previous builds and a fail
in the current build, then regression has failed. A new build is required and the
testing must start from scratch after resetting the test cases.
• If the result of a particular test case was a fail using the previous builds and a pass
in the current build, then it is safe to assume the defect fixes worked.
• If the result of a particular test case was a fail using the previous builds and a fail in
the current build and if there are no defect fixes for this particular test case, it may
mean that the result of this test case should not be considered for the pass
percentage. This may also mean that such test cases should not be selected for
regression.
• If the result of a particular test case was a fail using the previous builds but it works
with a documented workaround, and if you are satisfied with the workaround, then
it should be considered a pass.
• If you are not satisfied with the workaround, then it should be considered as a fail
for a system test cycle but may be considered as a pass for regression test cycle.
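
• The decision rules above can be summarised as a comparison of each test case's
previous and current verdicts. The sketch below is a minimal illustration with
made-up verdict values; it only encodes the pass/fail/workaround logic described
in the bullets:

#include <iostream>
#include <string>

enum class Verdict { Pass, Fail, PassWithWorkaround };

// Interpret one test case's regression result according to the rules above.
// Returns a short description of how the result should be treated.
std::string interpret(Verdict previous, Verdict current, bool fixedInThisBuild) {
    if (previous == Verdict::Pass && current == Verdict::Fail)
        return "regression failed: a new build is required and testing restarts";
    if (previous == Verdict::Fail && current == Verdict::Pass)
        return "defect fix worked";
    if (previous == Verdict::Fail && current == Verdict::Fail && !fixedInThisBuild)
        return "exclude from the pass percentage; reconsider selecting this test";
    if (current == Verdict::PassWithWorkaround)
        return "count as a pass for regression if the workaround is acceptable";
    return "count the result normally";
}

int main() {
    std::cout << interpret(Verdict::Pass, Verdict::Fail, false) << '\n';
    std::cout << interpret(Verdict::Fail, Verdict::Pass, true)  << '\n';
    std::cout << interpret(Verdict::Fail, Verdict::Fail, false) << '\n';
}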
Regression Testing Techniques
1. Retest All
• As the name suggests, all the test cases in the test suite are re-executed to
ensure that no new bugs have been introduced because of a change in the code.
• This is an expensive method as it requires more time and resources when
compared to the other techniques.

2. Regression Test Selection


• In this method, test cases are selected from the test suite to be re-executed.
• The selection of test cases is done on the basis of code change in the module.
• Test cases are divided into two categories, one is Reusable test cases and
another one is Obsolete test cases.
• The reusable test cases can be used in future regression cycles whereas
obsolete ones are not used in the upcoming regression cycles.

3. Test Case Prioritization


• Test cases with high priority are executed first, before the ones with medium
and low priority.
• The priority of the test case depends on its criticality and its impact on the
product and also on the functionality of the product which is used more often.
4. Hybrid
• The hybrid technique is a combination of Regression Test Selection and
Test case Prioritization.
• Rather than executing the entire test suite, only the selected test cases are
re-executed, depending on their priority.

Regression Testing Tools


• If the software undergoes frequent changes, regression testing costs escalate, and
manual execution of test cases increases both test execution time and cost.
• Automation of regression test cases is the smart choice in such cases. Following are
the most important tools used for both functional and regression testing:

1. Avo Assure
• It is a technology-agnostic (cross-compatible), no-code test automation solution
that helps test end-to-end business processes with a few clicks. This makes
regression testing more straightforward and faster.
Features
• Autogenerate test cases with a 100% no-code approach.
• Test across the web, desktop, mobile, ERP applications, Mainframes, associated
emulators, and more with a single solution.
• Enable accessibility testing.
• Execute test cases in a single VM independently or in parallel with Smart
Scheduling
• Integrate with Jira, Jenkins, ALM, QTest, Salesforce, Sauce Labs, TFS, etc.

2. Telerik Test Studio


• Telerik Test Studio is an automated testing platform for web, desktop and
responsive applications, supporting functional UI, load and RESTful API testing.
• Test Studio helps teams eliminate regressions and makes sure their applications
still work the way they did before any changes were introduced.
• It comes with standalone IDE and Visual Studio integration.
Features
• Visual test recorder for codeless end-to-end tests
• Cross-browser support
• Three times faster test execution
• Element location based on element ID and image
• CI/CD Integration and Docker support for Continuous Testing
• Data-driven testing
• Remote test scheduling and execution
• Test results and reports

3. testRigor
• testRigor helps you to directly express tests as executable specifications in plain
English.
• Users of all technical abilities are able to build end-to-end tests of any
complexity covering mobile, web, and API steps in one test.
• Test steps are expressed on the end-user level instead of relying on details of
implementation like XPaths or CSS Selectors.
Features
• Free forever public version
• Test cases are in English
• Unlimited users & Unlimited tests
• The easiest way to learn automation
• Recorder for web steps
• Integrations with CI/CD and Test case management
• Email & SMS testing
• Web + Mobile + API steps in one test

4. Eggplant
• Eggplant’s AI-driven test automation streamlines regression testing through
prioritization of test cases and minimization of test maintenance.
Features
• A.I.-driven test execution enables Eggplant to test the most important areas of
each release.
• Reuse testing models and scripts to test multiple versions with one set of assets.
• Reduce the burden of test maintenance through self-healing functional tests.
• Discover bugs otherwise missed, through automated exploratory testing.
• Understand and focus on the problematic areas of your application that put your
release at risk.
• Reduce the time required to test key functionality of applications after updates.

5. Selenium:
• This is an open-source tool used for automating web applications.
• Selenium can be used for browser-based regression testing.
6. Quick Test Professional (QTP):
• HP Quick Test Professional is automated software designed to automate
functional and regression test cases.
• It uses VBScript language for automation.
• It is a Data-driven, Keyword based tool.
7. Rational Functional Tester (RFT):
• IBM’s rational functional tester is a Java tool used to automate the test cases of
software applications.
• This is primarily used for automating regression test cases and it also integrates
with Rational Test Manager.

Regression Test Selection: The Problem


• Given test set T, our goal is to determine Tr such that successful execution of P’
against Tr implies that modified or newly added code in P’ has not broken the code
carried over from P.
• Note that some tests might become obsolete when P is modified to P’. Such tests
are not included in the regression subset Tr.
• The task of identifying such obsolete tests is known as test revalidation.
Overview of a test selection method using Execution Trace
• Step 1: Given P and test set T, find the execution trace of P for each test in T.
• Step 2: Extract test vectors from the execution traces for each node in the CFG of P
• Step 3: Construct syntax trees for each node in the CFGs of P and P’. This step can
be executed while constructing the CFGs of P and P’.
• Step 4: Traverse the CFGs and determine a subset of T appropriate for
regression testing of P’.

Execution Trace
• Let P be a program containing one or more functions. P has been tested against
tests in T as shown previously.
• P is modified to P' by adding new functionality and fixing some known errors.
• Our goal now is to test P' to ensure that the changes made do not affect the
functionality carried over from P.
• While this could be achieved by executing P' against all the non-obsolete tests in T,
we want to select only those that are necessary to check whether the modifications
made do not affect the functionality common to P and P'.
• Our first technique for selecting a subset of T is based on the use of execution slice
obtained from the execution trace of P. The technique can be split into two phases:
o In the first phase, P is executed and the execution slice recorded for each test
case in Tno = Tu ∪ Tr
o Tno contains non-obsolete test cases and hence is a candidate for full
regression test.
o In the second phase, the modified program P' is compared with P and Tr is
isolated from Tno by an analysis of the execution slice obtained in the first
phase.
• Let G = (N, E) denote the CFG of program P.
• N is a finite set of nodes and E a finite set of edges connecting the nodes.
• Suppose that nodes in N are numbered 1, 2, and so on and that Start and End are
two special nodes.
• Tno is the set of all valid tests for P’. It is obtained by discarding all tests that have
become obsolete for some reason.
• An execution trace of program P for some test t in Tno is the sequence of nodes in
G traversed when P is executed against t.
• As an example, consider the following program:

• Here is a CFG for our example program:

• Now consider the following test set:


The execution trace as a sequence of nodes traversed:

• Let test(n) denote the set of tests such that each test in test(n) traversed node n at
least once.
• Given the execution trace for each test in Tno (the set of non-obsolete tests), it is
easy to find test(n) for each n ∈ N. test(n) is also known as the test vector
corresponding to node n.
• Test vectors for each node in the CFGs shown previously:

• 1, 2, 3 and 4 represent the node numbers in each CFG.


• Since node 4 is not present in the CFGs of functions g1 and g2, its entry is marked
as “--”.
• Explanation:
o Tests t1, t2 and t3 run through (execute) node 1 in the main function.
o Only tests t1 and t3 execute node 2 in the main function.
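
• Given the execution traces, test(n) can be computed with a single pass over each
trace. The sketch below uses assumed traces for tests t1, t2, and t3 (the actual
program and CFG figures are not reproduced here); it only shows how the test
vector for each node is collected:

#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

int main() {
    // Assumed execution traces: for each test, the sequence of CFG nodes
    // traversed when the program is executed against that test.
    std::map<std::string, std::vector<int>> trace = {
        {"t1", {1, 2, 3, 4}},
        {"t2", {1, 3, 4}},
        {"t3", {1, 2, 3, 4}},
    };

    // test(n): the set of tests that traversed node n at least once.
    std::map<int, std::set<std::string>> testVector;
    for (const auto& [testId, nodes] : trace)
        for (int n : nodes)
            testVector[n].insert(testId);

    for (const auto& [node, tests] : testVector) {
        std::cout << "test(" << node << ") = { ";
        for (const auto& t : tests) std::cout << t << ' ';
        std::cout << "}\n";
    }
}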
• Now, construct the syntax tree:
Dynamic Slice
• Let L be a location in program P and v a variable used at L.
• Let trace(t) be the execution trace of P when executed against test t.
• The dynamic slice of P with respect to t and v, denoted as DS(t, v, L), is the set of
statements in P that
o (a) lie in trace(t)
o (b) affect the value of v at L.
• Dynamic dependence graph (DDG): The DDG is needed to obtain a dynamic slice.
Here is how a DDG named G is constructed.
o Step 1: Initialize G with a node for each declaration. There are no edges
among these nodes.
o Step 2: Add to G, the first node in trace(t).
o Step 3: For each successive statement in trace(t), a new node n is added to G.
Control and data dependence edges are added from n to the existing nodes in
G on which n depends.
Construction of a DDG: Example
• Let t: <x=2, y=4>
• Assume successive values of x to be 0 and 5, and
f1(2)=1, f1(0)=2, f1(5)=3
• trace(t)={1, 2, 3, 4, 6, 7, 2, 3, 5, 6, 7, 2, 8}
• Ignore declarations for simplicity. To begin with, the node labelled 1 is added to G.
• Node 2 is added next. This node is data dependent on node 1 as it uses variable x
defined at node 1; hence an edge, indicated as a solid line, is added from node 2
to node 1.
• Next, node 3 is added. This node is data dependent on node 1. Hence an edge is
added from node 3 to node 1.
• Node 3 is also control dependent on node 2 and hence an edge, indicated as a
dotted line, is added from node 3 to node 2.
• Node 4 is added next and data and control dependence edges are added,
respectively, from node 4 to node 1 and to node 3.
• The process continues as described until the node corresponding to the last
statement in the trace, i.e., at line 8, is added. The final DDG is as shown in the
figure.

Obtaining Dynamic Slice


• Step 1: Execute P against test t and obtain trace(t).
• Step 2: Construct the dynamic dependence graph (DDG) G from P and trace(t).
• Step 3: Identify in G, node n labeled L that contains the last assignment to v.
If no such node exists then the dynamic slice is empty, otherwise execute Step 4.
• Step 4: Find in G, the set DS(t, v, n) of all nodes reachable from n, including n.
DS(t, v, n) is the dynamic slice of P with respect to v at location L and test t.
Example:
• Suppose we want to compute the dynamic slice of P with respect to variable w at
line 8 and test t shown earlier.
• We already have the DDG of P for t.
• First identify the last definition of w in the DDG. This occurs at line 7 as marked.
• Traverse the DDG backwards from node 7 and collect all nodes reachable from 7.
This gives us the following dynamic slice: {1, 2, 3, 5, 6, 7, 8}.
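
• Steps 3 and 4 amount to a reachability search over the DDG, starting at the node
holding the last assignment to the variable of interest. The sketch below uses a
small assumed DDG (the exact edges of the example's graph are not reproduced
here) and collects every node reachable from the start node:

#include <iostream>
#include <map>
#include <set>
#include <vector>

// Dependence edges of a DDG: edge n -> m means statement n is data- or
// control-dependent on statement m (so m can influence the value computed at n).
using DDG = std::map<int, std::vector<int>>;

// Steps 3-4: starting from the node holding the last assignment to the
// variable of interest, collect every node reachable along dependence edges.
std::set<int> dynamicSlice(const DDG& g, int lastDefNode) {
    std::set<int> slice;
    std::vector<int> work = {lastDefNode};
    while (!work.empty()) {
        int n = work.back();
        work.pop_back();
        if (!slice.insert(n).second) continue;       // already visited
        auto it = g.find(n);
        if (it == g.end()) continue;
        for (int m : it->second) work.push_back(m);  // follow dependences
    }
    return slice;
}

int main() {
    // A small assumed DDG (not the exact graph from the figure):
    // 7 depends on 5 and 6, 6 on 2 and 3, 5 on 3, 3 on 1 and 2, 2 on 1.
    DDG g = {{7, {5, 6}}, {6, {2, 3}}, {5, {3}}, {3, {1, 2}}, {2, {1}}};
    for (int n : dynamicSlice(g, 7)) std::cout << n << ' ';   // 1 2 3 5 6 7
    std::cout << '\n';
}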
Test selection using Dynamic Slice
• Let T be the test set used to test P. P' is the modified program.
• Let n1, n2, ... nk be the nodes in the CFG of P modified to obtain P'.
• Which tests from T should be used to obtain a regression test T' for P'?
• For each test t in T, find DS(t) for P. If any of the modified nodes is in DS(t), then
add t to T'.
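
• Put together, the selection rule is an intersection check between each test's
dynamic slice and the set of modified nodes. A minimal sketch (the slices and the
modified nodes below are assumed values, not computed from a real program):

#include <iostream>
#include <map>
#include <set>
#include <string>

int main() {
    // Assumed dynamic slices DS(t) for each test in T, and the CFG nodes of P
    // that were modified to obtain P'.
    std::map<std::string, std::set<int>> ds = {
        {"t1", {1, 2, 3}}, {"t2", {1, 4, 6}}, {"t3", {1, 2, 5, 7}}};
    std::set<int> modified = {4, 7};

    // Add t to the regression suite T' if DS(t) contains any modified node.
    for (const auto& [test, slice] : ds)
        for (int n : modified)
            if (slice.count(n)) { std::cout << test << " added to T'\n"; break; }
}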

Slicing
• Slicing, or program slicing, is a technique used in software testing that takes a slice
(a group of program statements) that may affect a value at a particular point of
interest, for testing particular test conditions or cases.
• It can also be used for debugging, in order to find bugs more easily and quickly.
• There are 2 types of slicing: static slicing and dynamic slicing
Static slicing vs dynamic slicing:
• Definition: A static slice of a program contains all statements that may affect the
value of a variable at any point for any execution of the program. A dynamic slice
contains only the statements that actually affect the value of a variable at a point
for a particular execution of the program.
• Size: Static slices are generally larger; dynamic slices are generally smaller.
• Executions considered: Static slicing considers every possible execution of the
program; dynamic slicing considers only a particular execution.
• Cost: Static slicing is cheaper; dynamic slicing is more expensive.
• Paths: Static slicing considers all possible paths; dynamic slicing considers only the
specific executed path.
• Input: Static slicing makes no assumption about the input; dynamic slicing uses the
input to find the path taken.
• Usefulness of results: Static slicing results are generally not useful; dynamic slicing
results are useful for applications like debugging and testing.
• Information retrieved: Static slicing retrieves only static information; dynamic
slicing retrieves dynamic information.


Example:

Example code:
int z = 10;
int n;
cin >> n;
int sum = 0;
if (n > 10)
    sum = sum + n;
else
    sum = sum - n;
cout << "done";

Static slice for variable 'sum':
int n;
cin >> n;
int sum = 0;
if (n > 10)
    sum = sum + n;
else
    sum = sum - n;

Dynamic slice for variable 'sum' when n = 22:
int n;
cin >> n;
int sum = 0;
if (n > 10)
    sum = sum + n;

• From the example shown above, we observe that the static slice takes into account
all possible executions of the program that may affect the value of the variable sum.
• In the case of dynamic slicing, only a particular execution of the program (when
n = 22) that affects the value of the variable sum is considered.
• Hence, the dynamic slice is smaller than the static slice.

Test Selection using Test Minimization


• Test minimization is yet another method for selecting tests for regression testing.
• To illustrate test minimization, suppose that P contains 2 functions, main and f.
Now suppose that P is tested using test cases t1 and t2.
• During testing it was observed that
o t1 causes the execution of main but not of f
o t2 does cause the execution of both main and f
• Now suppose that P’ is obtained from P by making some modification to f.
Which of the two test cases should be included in the regression test suite?
• Obviously, there is no need to execute P’ against t1 as it does not cause the
execution of f. Thus, the regression test suite consists of only t2.
• In this example we have used function coverage to minimize the test suite {t1, t2}
and obtain the regression test suite {t2}.
• Test minimization is based on the coverage of testable entities in P.
• Testable entities include, for example, program statements, decisions, def-use
chains, and mutants.
• One uses the following procedure to minimize a test set based on a selected
testable entity.

A Procedure for Test Minimization


• Step 1: Identify the type of testable entity to be used for test minimization.
Let e1, e2, ..., ek be the k testable entities present in P. In our previous example the
testable entities were functions.
• Step 2: Execute P against all elements of test set T and for each test t in T,
determine which of the k testable entities are covered.
• Step 3: Find a minimal subset T’ of T such that each testable entity is covered by at
least one test in T’.

Test Minimization: Example


• Step 1: Let the basic block be the testable entity of interest. The basic blocks for a
sample program are shown here for both main and function f1.
• Step 2: Suppose the coverage of the basic blocks, when P is executed against the
three tests in T = {t1, t2, t3}, is as follows:
o t1: main: 1, 2, 3. f1: 1, 3
o t2: main: 1, 3. f1: 1, 3
o t3: main: 1, 3. f1: 1, 2, 3
• Step 3:
o Tests in T cover all six blocks, three in main and three in f1.
o However, it is easy to check that these six blocks can also be covered by t1
and t3 alone.
o A minimal test set for regression testing is therefore {t1, t3}.
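
• Finding a truly minimal covering subset is an instance of the set-cover problem, so
in practice a greedy approximation is often used: repeatedly pick the test that
covers the most not-yet-covered entities. The sketch below encodes the block
coverage from the example (renumbering f1's blocks as 4-6, which is an
assumption) and reproduces the reduced suite {t1, t3}; the greedy choice itself is a
common heuristic, not a procedure prescribed by the text:

#include <iostream>
#include <map>
#include <set>
#include <string>
#include <vector>

int main() {
    // Coverage from the example: blocks 1-3 are main's basic blocks,
    // blocks 4-6 are f1's basic blocks (an assumed renumbering).
    std::map<std::string, std::set<int>> covers = {
        {"t1", {1, 2, 3, 4, 6}},      // main: 1, 2, 3   f1: 1, 3
        {"t2", {1, 3, 4, 6}},         // main: 1, 3      f1: 1, 3
        {"t3", {1, 3, 4, 5, 6}},      // main: 1, 3      f1: 1, 2, 3
    };

    // Entities that must be covered by the minimized suite.
    std::set<int> uncovered = {1, 2, 3, 4, 5, 6};

    // Greedy set cover: keep picking the test that covers most uncovered entities.
    std::vector<std::string> minimized;
    while (!uncovered.empty()) {
        std::string best;
        size_t bestGain = 0;
        for (const auto& [test, blocks] : covers) {
            size_t gain = 0;
            for (int b : blocks) gain += uncovered.count(b);
            if (gain > bestGain) { bestGain = gain; best = test; }
        }
        if (bestGain == 0) break;                  // remaining entities cannot be covered
        for (int b : covers[best]) uncovered.erase(b);
        minimized.push_back(best);
    }

    for (const auto& t : minimized) std::cout << t << ' ';   // t1 t3
    std::cout << '\n';
}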
Ad Hoc Testing
• Test carried out in an unplanned manner (hence the name ad hoc testing).
• Testing done without using any formal testing technique is called ad hoc testing.
• Types of ad hoc testing:
o buddy testing
o exploratory testing
o pair testing
o iterative testing
o agile and extreme testing
o defect seeding
• Ad hoc testing does not make use of any of the test case design techniques like
equivalence partitioning, boundary value analysis, and so on.
• It is done to explore the undiscovered areas in the product by using
o intuition, previous experience in working with the product,
o expert knowledge of the platform or technology, and
o experience of testing a similar product.
• It is generally done to uncover defects that are not covered by planned testing.

• Ad hoc testing can be planned in one of two ways:


o Method 1
▪ After a certain number of planned test cases are executed.
▪ In this case, the product is likely to be in a better shape and thus newer
perspectives and defects can be uncovered.
▪ Since ad hoc testing does not require all the test cases to be documented
immediately, this provides an opportunity to catch multiple missing
perspectives with minimal time delay.

o Method 2
▪ Prior to planned testing.
▪ This will enable gaining better clarity on requirements and assessing the
quality of the product upfront.

• The following figure shows the various steps of ad hoc testing and illustrates the
basic differences between ad hoc testing and planned testing.
• One of the most fundamental differences between planned testing and ad hoc
testing is that test execution and test report generation take place before
test case design in ad hoc testing.
• This testing gets its name by virtue of the fact that execution precedes design.
• Drawbacks of Ad-Hoc Testing and their Resolution

• Drawback: Difficult to ensure that the learnings gained in ad hoc testing are used
in future.
Possible resolution: Document ad hoc tests after test completion.
• Drawback: Large number of defects found in ad hoc testing.
Possible resolution: Schedule a meeting to discuss defect impacts; improve the
test cases for planned testing.
• Drawback: Lack of comfort on coverage of ad hoc testing.
Possible resolution: When producing test reports, combine planned tests and
ad hoc tests; plan for additional planned test and ad hoc test cycles.
• Drawback: Difficult to track the exact steps.
Possible resolution: Write detailed defect reports in a step-by-step manner;
document ad hoc tests after test execution.
• Drawback: Lack of data for metrics analysis.
Possible resolution: Plan the metrics collection for both planned tests and
ad hoc tests.
Pair Testing
• Here, 2 testers pair up to test a product's feature on the same machine.
• The objective of this exercise is to maximize the exchange of ideas between the
two testers.
• While one person is executing the tests, the other person takes notes, suggests
ideas, and helps in providing additional perspectives.
• Example: two people traveling in a car in a new area to find a place, with one
person driving the car and another person navigating with the help of a map.

Advantages of Pair Testing


• The presence of one senior member can also help in pairing.
This can cut down on the time spent on the learning curve of the product.
• It may prove effective when 2 members work very well together and share a good
understanding.
• It can be done during any phase of testing.
• It encourages idea generation right from the requirements analysis phase, taking it
forward to the design, coding, and testing phases.
• Defect reproduction becomes easy in pair testing as both testers can discuss and
record the steps for reproduction, which helps in resolving and fixing the bug.
• When the product is in a new domain and not many people have the desired
domain knowledge, pair testing is useful.
• Pair testing helps in getting feedback on each tester’s abilities from each other.
This testing can be used to coach the inexperienced members in the team by
pairing them with experienced testers.
• Pair testing is an extension of the “pair programming” concept used as a technique
in the extreme programming model.

Disadvantages of Pair Testing


• If the paired individuals do not try to understand and respect each other, pair
testing may lead to frustration and domination.
• When one member is working on the computer and the other is playing the role of
a scribe, a mismatch in their speed of understanding and execution may result in
loss of attention.
• Pair testing may result in delays (if not properly planned) due to the time spent on
interactions between the testers.
• Sometimes pairing up juniors with experienced members may result in the former
doing tasks that the senior may not want to do.
• At the end of the session, there is no accountability as to who is responsible for
steering the work, providing directions, and delivering results.

Exploratory Testing
• Exploratory testing tries to
o explore the product,
o covering more depth and breadth
o with specific objectives, tasks, and plans.

• Exploratory testing can be done during any phase of testing.


• Testers may execute their tests based on their past experience in testing
o a similar product, or a product in a similar domain, or
o a product in a similar technology area.
• A developer's knowledge of a similar technology can help in the unit testing phase
to explore the limitations or the constraints imposed by that technology.

• Exploratory testing can be used to test software that is


o un-tested,
o un-known, or
o un-stable.

• Exploring can happen not only for functionality but also for different
environments, configuration parameters, test data, and so on.
• Since there is a large creative element to exploratory testing, similar test cases may
result in different kinds of defects when run by two different individuals.
• Example: The person driving a car will use various common techniques to reach a
new place, such as:
o Getting a map of the area
o Traveling in some random direction to figure out the place
o Calling up and asking a friend for the route
o Asking for directions by going to a nearby gas station
• Several ways to perform exploratory testing:
o Guesses:
▪ These are used to find the part of the program that is likely to have
more errors.
▪ Previous experience on working with a similar product or software or
technology helps in guessing.

o Architecture diagrams and use cases:


▪ Architecture diagrams depict the interactions and relationships
between different components and modules.
▪ Use cases give an insight of the product's usage from the end user's
perspective.
▪ A use case explains a set of
• business events,
• input required,
• people involved in those events and
• expected output.

o Study of past defects:


▪ Studying the defects reported in the previous releases helps in
understanding of the error prone functionality/modules in a product
development environment.
▪ It acts as a pointer to explore an area of the product further.
o Error handling:
▪ Error handling is a portion of the code which prints appropriate
messages or provides appropriate actions in case of failures.
▪ For example, in the case of a catastrophic error, termination should be
with a meaningful error message.
▪ In case of an action by the user that is invalid or unexpected, the system
may misbehave.
▪ Error handling provides a message or corrective action in such
situations.

o Discussions:
▪ Exploration may be planned based on the understanding of the system
gained during project discussions or meetings.
▪ These include various presentations of the product implementation, such
as architecture and design presentations, or even presentations made to
customers.

o Questionnaires and checklists:


▪ Questions like “what, when, how, who and why” can provide leads to
explore areas in the product.
▪ To understand the implementation of functionality in a product, open-
ended questions like the following can be asked:
• What does this module do?
• When is it being called or used?
• How is the input processed?
• Who are the users of this module?

Iterative Testing
• The iterative (or spiral) model is where the requirements keep coming and the
product is developed iteratively for each requirement.
• The testing associated for this process is called iterative testing.
• The biggest challenge in this model is ensuring that all the requirements already
tested continue to work when a new requirement is added.
• Hence, iterative testing requires repetitive testing.
• As the new requirements may involve a large number of changes at the product
level, the majority of these tests are executed manually because automation in
this case is very difficult.
• Iterative testing aims at testing the product for all requirements, irrespective of
the phase they belong to in the spiral model.
• Customers have a usable product at the end of every iteration.
• Product undergoes all the phases of the life cycle each time.
• Errors due to omission and misunderstanding can be corrected at regular intervals.
• Customers and the management can notice the impact of defects and the product
functionality at the end of each iteration.
• A test plan is created at the beginning of the first iteration and updated for every
subsequent iteration.
• This can broadly define the type and scope of testing to be done for each of the
iterations.
• Some type of tests that are performed in later iterations may not be possible to
perform during earlier iterations.
• For example, performance or load testing may come under the scope of testing
only during the last few iterations when the product becomes complete.
• Hence test plan preparation becomes an important activity during the beginning
phase.
• This document gets updated after each iteration since the scope of testing, type
of testing, and the effort involved vary.
• After each iteration, unit test cases are added, edited, or deleted to keep up with
the revised requirement for the current phase.
• Regression tests may be repeated at least every alternate iteration (if not every
iteration) so that the current functionality is preserved.
• Since iterative testing involves repetitive test execution, it becomes a tiresome
exercise for the testers.
• In order to increase test efficiency, tests may be automated, wherever possible.
Defect Seeding
• Defect seeding is also known as Bebugging.
• It acts as a reliability measure for the release of the product.
• Usually, one group of members in the project injects the defects while another
group tests to remove them.
• While finding the known seeded defects, the unseeded (real) defects may also be
uncovered.
• Known bugs are randomly added to a program source code and the software
tester is tasked to find them.
• The percentage of the known bugs not found gives an indication of the real bugs
that remain.
• Defects that are seeded are similar to real defects. Therefore, they are not very
obvious and easy to detect.
• Defects that can be seeded may vary from severe or critical defects to cosmetic
errors.
• Defect seeding may act as a guide to check the efficiency of the inspection or
testing process.
• It serves as a confidence measure to know the percentage of defect removal rates.
• It acts as a measure to estimate the number of defects yet to be discovered.
For example
• Assume that 20 defects that range from critical to cosmetic errors are seeded on a
product.
• Suppose when the test team completes testing, it has found 12 seeded defects and
25 original (not seeded) defects.
• The total number of defects that may be latent (hidden) in the product is
calculated as follows.
Total latent defects = (Defects seeded / Seeded defects found) × Original defects found

• So, the number of estimated defects, based on the above example,
= (20 / 12) × 25 = 41.67 ≈ 42
• Based on the above calculation, the estimated number of latent defects in the
product is approximately 42.
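
• The same arithmetic expressed as a small sketch; the numbers are simply the
figures from the example above:

#include <cmath>
#include <iostream>

int main() {
    // Figures from the example above.
    double defectsSeeded = 20;   // defects deliberately injected
    double seededFound   = 12;   // seeded defects the test team located
    double originalFound = 25;   // real (unseeded) defects found

    // Total latent defects = (defects seeded / seeded defects found) * original defects found
    double estimatedLatent = (defectsSeeded / seededFound) * originalFound;

    std::cout << "Estimated latent defects: " << estimatedLatent          // 41.67
              << " (~" << std::lround(estimatedLatent) << ")\n";          // ~42
}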
• When a group knows that there are seeded defects in the system, it acts as a
challenge to testers.
• It adds new energy into their testing.
• In case of manual testing, defects are seeded before the start of the testing
process.
• When the tests are automated, defects can be seeded any time.

The following are the issues to be looked into while using defect seeding:
1. Care should be taken during the defect seeding process to ensure that all the
seeded defects are removed before the release of the product.

2. The code should be written in such a way that the errors introduced can be
identified easily. A minimum number of lines should be added to seed defects so
that the effort involved in removing them is reduced.

3. It is necessary to estimate the effort required to clean up the seeded defects,
along with the effort for identifying them. Effort may also be needed to fix the
real defects found due to the injection of some defects.
