UNIT-3 Regression Testing STM
Regression testing: Progressive vs. regressive testing, Regression testability, Objectives of regression testing, Regression testing types, Regression testing techniques.
• Introduction
• When a new module is added as part of integration testing, the software changes.
• Whenever a new bug appears in the working system, the software needs some changes.
• The new modifications may affect other parts of the software too.
• It is therefore important to re-check the software with all test cases so that the new modifications do not affect other parts of the software.
• Progressive testing: Test case design methods or testing techniques used to test newly developed software are referred to as progressive testing or development testing.
Definition:
Regression testing is the selective retesting of a system or component to verify that
modifications have not caused unintended effects and that the system or component still complies
with its specified requirements.
REGRESSION TESTABILITY
• Regression testability refers to the property of a program, modification, or test suite that lets
it be effectively and efficiently regression-tested.
• Regression testability is a function of both the design of the program and the test suite.
• To assess regression testability, a regression number is computed.
• It is the average number of test cases in the test suite that are affected by a modification to a single instruction.
• It is computed using information about the test suite's coverage of the program, and it can provide significant savings in the cost of development and maintenance of software.
• Regression number R
• Test suite: {t1, t2, t3, …, tn}
• Example: if a modification to one instruction affects test cases t5, t8, and t9, that instruction contributes a count of 3 affected test cases; R is the average of such counts over all modified instructions (a sketch of this computation follows).
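The averaging step above can be pictured with a small Python sketch; the coverage map, instruction names, and test identifiers below are hypothetical and only illustrate how a regression number could be computed from coverage information.

    # Hypothetical coverage information: instruction -> test cases that execute it.
    coverage = {
        "i1": {"t1", "t2"},
        "i2": {"t5", "t8", "t9"},   # modifying i2 affects 3 test cases
        "i3": {"t3"},
    }
    modified_instructions = ["i2", "i3"]   # instructions touched by a change

    def regression_number(coverage, modified_instructions):
        # Average number of test cases affected by modifying a single instruction.
        counts = [len(coverage[i]) for i in modified_instructions]
        return sum(counts) / len(counts) if counts else 0.0

    print(regression_number(coverage, modified_instructions))   # (3 + 1) / 2 = 2.0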
Objectives of Regression Testing:
--It tests to check that the bug has been addressed: The first objective of bug-fix testing is to check whether the bug fix has worked or not.
--It finds other related bugs: Regression tests are necessary to validate that the system does not have any related bugs.
--It tests to check the effect on other parts of the program: Bug-fixing may have unwanted consequences on other parts of a program. Therefore, it is necessary to check the influence of changes in one part of the program on its other parts.
• Software Maintenance
Corrective maintenance: Changes made to correct a system after a failure has been observed.
Adaptive maintenance: Changes made to achieve continuing compatibility with the target
environment or other systems.
Perfective maintenance: Changes designed to improve or add capabilities.
Preventive maintenance: Changes made to increase robustness, maintainability, portability, and other features.
• Rapid Iterative Development: The extreme programming approach requires that a test be developed for each class and that this test be re-run every time the class changes (a minimal sketch follows).
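As a minimal illustration of this practice, the sketch below pairs a hypothetical Account class with its unit test; the test is intended to be re-run whenever the class is modified. The class, method, and test names are assumptions for illustration only.

    import unittest

    class Account:                      # hypothetical class under test
        def __init__(self, balance=0):
            self.balance = balance

        def deposit(self, amount):
            self.balance += amount
            return self.balance

    class TestAccount(unittest.TestCase):
        # This test accompanies the Account class and is re-run on every change to it.
        def test_deposit_increases_balance(self):
            self.assertEqual(Account(100).deposit(50), 150)

    if __name__ == "__main__":
        unittest.main()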
• First Step of Integration: Re-running accumulated test suites, as new components are added
to successive test configurations, builds the regression suite incrementally and reveals
regression bugs.
• Compatibility Assessment and Benchmarking: Some test suites are designed to be run on a wide range of platforms and applications to establish conformance with a standard (benchmarking) or to evaluate time and space performance.
a) General Test Case Prioritization: For a given program P and test suite T, we prioritize the test cases in T that will be useful over a succession of subsequent modified versions of P, without any knowledge of the modified versions.
b) Version-Specific Test Case Prioritization: We prioritize the test cases in T, when P is modified to P', with knowledge of the changes made in P (a sketch of this idea follows).
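A rough Python sketch of version-specific prioritization, assuming a hypothetical coverage map and a set of code elements changed between P and P': tests covering more of the changed elements are ordered first.

    # Hypothetical coverage information: test case -> code elements it exercises.
    coverage = {
        "t1": {"f1", "f2"},
        "t2": {"f3"},
        "t3": {"f2", "f3", "f4"},
    }
    changed = {"f3", "f4"}   # elements modified when P becomes P'

    def prioritize(coverage, changed):
        # Order tests by the number of changed elements each one covers.
        return sorted(coverage, key=lambda t: len(coverage[t] & changed), reverse=True)

    print(prioritize(coverage, changed))   # ['t3', 't2', 't1']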
--> Test Suite Reduction Technique: It reduces testing costs by permanently eliminating redundant test cases from test suites, in terms of the code or functionality they exercise (a greedy reduction sketch follows).
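A possible greedy sketch of test suite reduction, under the assumption that each test case is mapped to the requirements (or functionality) it exercises; the identifiers are illustrative, and real reduction techniques may use other heuristics.

    # Hypothetical mapping: test case -> requirements it exercises.
    coverage = {
        "t1": {"r1", "r2"},
        "t2": {"r2"},
        "t3": {"r3", "r4"},
        "t4": {"r1", "r2", "r3"},
    }

    def reduce_suite(coverage):
        # Greedily keep the test covering the most still-uncovered requirements.
        uncovered = set().union(*coverage.values())
        kept = []
        while uncovered:
            best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
            kept.append(best)
            uncovered -= coverage[best]
        return kept

    print(reduce_suite(coverage))   # ['t4', 't3']; t1 and t2 become redundant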
The selective retest technique attempts to reduce the cost of testing by identifying the portions of P' (the modified version of program P) that must be exercised by the regression test suite. Following are the characteristic features of the selective retest technique:
--> It minimizes the resources required to regression test a new version.
--> It is achieved by minimizing the number of test cases applied to the new version.
--> It analyses the relationship between the test cases and the software elements they cover.
-->It uses the information about changes to select test cases.
Steps in the selective retest technique (a sketch of the selection step follows the list):
1. Select T', a subset of T, as the set of test cases to execute on P'.
2. Test P' with T', establishing the correctness of P' with respect to T'.
3. If necessary, create T'', a set of new functional test cases for P'.
4. Test P' with T'', establishing the correctness of P' with respect to T''.
5. Create T''', a new test suite and test execution profile for P', from T, T', and T''.
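A minimal sketch of step 1, assuming coverage information that relates each test case to the code elements it exercises and a set of elements changed between P and P'; all identifiers are hypothetical.

    # Hypothetical coverage information: test case -> code elements it exercises.
    coverage = {
        "t1": {"e1", "e2"},
        "t2": {"e3"},
        "t3": {"e2", "e4"},
    }
    changed = {"e2"}   # elements of P modified to produce P'

    def select_retests(coverage, changed):
        # T' = every test whose covered elements intersect the changed elements.
        return [t for t, elems in coverage.items() if elems & changed]

    print(select_retests(coverage, changed))   # ['t1', 't3']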
Minimization Techniques:
Minimization-based regression test selection techniques attempt to select minimal sets of test cases
from T that yield coverage of modified or affected portions of P.
Examples: techniques based on systems of linear equations or integer programming algorithms.
• E.g., requiring that every program statement added to or modified for P' be executed (if possible) by at least one test case in T' (a greedy sketch of this criterion follows).
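The criterion above can be approximated with a simple greedy sketch (real minimization techniques may instead solve systems of linear equations or integer programs); the statement names and coverage data are hypothetical.

    # Hypothetical coverage information: test case -> program statements it executes.
    coverage = {
        "t1": {"s1", "s2", "s5"},
        "t2": {"s2", "s3"},
        "t3": {"s4"},
    }
    modified = {"s2", "s4"}   # statements added to or modified for P'

    def minimal_selection(coverage, modified):
        # Greedily pick tests until every modified statement is covered (if possible).
        uncovered = set(modified)
        selected = []
        while uncovered:
            best = max(coverage, key=lambda t: len(coverage[t] & uncovered))
            if not coverage[best] & uncovered:
                break   # some modified statement is not covered by any test
            selected.append(best)
            uncovered -= coverage[best]
        return selected

    print(minimal_selection(coverage, modified))   # ['t1', 't3']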
• Ad Hoc/Random Techniques:
• Developers often select test cases based on 'intuition' or loose associations of test cases with functionality.
• Another simple approach is to randomly select a predetermined number of test cases from T.
Retest-All Technique: The retest-all technique simply reuses all existing test cases. To test P', the technique effectively "selects" all test cases in T.
Inclusiveness: Let M be a regression test selection technique. Inclusiveness measures the extent to
which M chooses modification revealing tests from T for inclusion in T'. We define inclusiveness
relative to a particular program, modified program, and test suite, as follows:
DEFINITION
Suppose T contains n tests that are modification revealing for P and P', and suppose M selects m of
these tests. The inclusiveness of M relative to P, P', and T is
1. INCL(M) = 100 * (m / n) %, if n ≠ 0
2. INCL(M) = 100 %, if n = 0
For example, if T contains 50 tests of which eight are modification-revealing for P and P', and M selects two of these eight tests, then M is 25% inclusive relative to P, P', and T. If T contains no modification-revealing tests, then every test selection technique is 100% inclusive relative to P, P', and T.
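The worked example can be checked with a small sketch of the inclusiveness formula (2 of 8 modification-revealing tests selected):

    def inclusiveness(selected_mr, total_mr):
        # Percentage of modification-revealing tests that the technique selects.
        return 100.0 if total_mr == 0 else 100.0 * selected_mr / total_mr

    print(inclusiveness(2, 8))   # 25.0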
Precision: Let M be a regression test selection technique. Precision measures the extent to which M
omits tests that are non modification-revealing. We define precision relative to a particular program,
modified program, and test suite, as follows:
DEFINITION
Suppose T contains n tests that are non-modification-revealing for P and P', and suppose M omits m of these tests. The precision of M relative to P, P', and T is
1. Precision = 100 * (m / n) %, if n ≠ 0
2. Precision = 100 %, if n = 0
For example, if T contains 50 tests of which 44 are non modification-revealing for P and P', and M
omits 33 of these 44 tests, then M is 75% precise relative to P, P', and T. If T contains no non-
modification-revealing tests, then every test selection technique is 100% precise relative to P, P',
and T.
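A similar sketch for the precision formula, applied to the worked example above (33 of 44 non-modification-revealing tests omitted):

    def precision(omitted_nmr, total_nmr):
        # Percentage of non-modification-revealing tests that the technique omits.
        return 100.0 if total_nmr == 0 else 100.0 * omitted_nmr / total_nmr

    print(precision(33, 44))   # 75.0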
Efficiency: We measure the efficiency of regression test selection techniques in terms of their space
and time requirements. Where time is concerned, a test selection technique is more economical than the retest-all technique if the cost of selecting T' is less than the cost of running the tests in T - T'.
Space efficiency primarily depends on the test history and program analysis information a technique
must store. Thus, both space and time efficiency depend on the size of the test suite that a technique
selects, and on the computational cost of that technique.
Regression Test Prioritization:
The regression test prioritization approach is different as compared to selective retest
techniques. Regression test prioritization attempts to reorder a regression test suite so that those
tests with the highest priority, according to some established criterion, are executed earlier in the
regression testing process than those with a lower priority.
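As an illustration, the sketch below orders a hypothetical test suite by an "additional coverage" criterion, placing next the test that covers the most statements not yet covered by earlier tests; both the coverage data and the choice of criterion are assumptions for illustration.

    # Hypothetical coverage information: test case -> statements it executes.
    coverage = {
        "t1": {"s1", "s2"},
        "t2": {"s1", "s2", "s3"},
        "t3": {"s4"},
        "t4": {"s3", "s4"},
    }

    def prioritize_additional(coverage):
        # Repeatedly place the test adding the most not-yet-covered statements.
        remaining = dict(coverage)
        covered = set()
        order = []
        while remaining:
            best = max(remaining, key=lambda t: len(remaining[t] - covered))
            order.append(best)
            covered |= remaining.pop(best)
        return order

    print(prioritize_additional(coverage))   # ['t2', 't3', 't1', 't4']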