Unit 4 Software Testing - Integration Level
The goal of testing is to find errors, and a good test is one that has a high probability of finding
an error.
Test Characteristics:
Kaner, Falk, and Nguyen suggest the following attributes of a “good” test:
A good test has a high probability of finding an error. To achieve this goal, the tester must
understand the software and attempt to develop a mental picture of how the software might
fail. Ideally, the classes of failure are probed.
A good test is not redundant. Testing time and resources are limited. There is no point in
conducting a test that has the same purpose as another test. Every test should have a
different purpose.
A good test should be “best of breed.” In a group of tests that have a similar intent, time
and resource limitations may permit the execution of only a subset of these tests.
In such cases, the test that has the highest likelihood of uncovering a whole class of errors
should be used.
A good test should be neither too simple nor too complex. Although it is sometimes
possible to combine a series of tests into one test case, the possible side effects associated
with this approach may mask errors. In general, each test should be executed separately.
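To make these attributes concrete, the following sketch (written with Python's unittest, against a hypothetical apply_discount function invented purely for illustration) shows a small suite in which each test has a single, distinct purpose and is executed separately rather than being folded into one combined case:

import unittest

# Hypothetical function under test, assumed only for illustration: it applies a
# percentage discount and rejects percentages outside 0-100.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    # Each test probes one distinct way the function could fail; none repeats
    # the purpose of another, and each can be run (and fail) on its own.
    def test_typical_discount(self):
        self.assertEqual(apply_discount(100.0, 20), 80.0)

    def test_zero_discount_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(59.99, 0), 59.99)

    def test_full_discount_gives_zero(self):
        self.assertEqual(apply_discount(100.0, 100), 0.0)

    def test_out_of_range_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()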
Knowing the specified function that a product has been designed to perform, tests can be conducted that demonstrate each function is fully operational while at the same time searching for errors in each function; this is the black-box view. Knowing the internal workings of a product, tests can also be conducted to ensure that internal operations are performed according to specification and that all internal components have been adequately exercised; this is the white-box view.
WHITE-BOX TESTING
White-box testing, sometimes called glass-box testing, uses the internal logical structure of a component to derive test cases that exercise independent paths, the true and false sides of logical decisions, loops at and within their boundaries, and internal data structures.
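As a minimal white-box illustration (the classify_reading component below is hypothetical, assumed only for this sketch), test cases are derived from the internal decision structure so that each branch is exercised at least once:

import unittest

# Hypothetical component, assumed only for illustration.
def classify_reading(value, threshold):
    if value < 0:
        return "invalid"
    elif value >= threshold:
        return "alarm"
    return "normal"

class ClassifyReadingPathTests(unittest.TestCase):
    # One test per branch of the internal decision logic.
    def test_negative_value_branch(self):
        self.assertEqual(classify_reading(-1, 10), "invalid")

    def test_alarm_branch(self):
        self.assertEqual(classify_reading(12, 10), "alarm")

    def test_normal_branch(self):
        self.assertEqual(classify_reading(3, 10), "normal")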
BLACK-BOX TESTING
Black-box testing, also called behavioral testing, focuses on the functional requirements of
the software. That is, black-box testing techniques enable you to derive sets of input conditions
that will fully exercise all functional requirements for a program.
Black-box testing is not an alternative to white-box techniques. Rather, it is a complementary
approach that is likely to uncover a different class of errors than white-box methods. Black-box
testing attempts to find errors in the following categories: (1) incorrect or missing functions, (2)
interface errors, (3) errors in data structures or external database access, (4) behavior or
performance errors, and (5) initialization and termination errors.
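For example, given only the stated requirement of a hypothetical withdraw operation ("debit the balance; reject amounts that are non-positive or exceed the balance"), black-box test cases can be derived without looking at the implementation. The sketch below targets some of the error categories listed above:

import unittest

# Hypothetical class under test, assumed only for illustration; the tests are
# derived from the stated requirement, not from the internal code structure.
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount <= 0 or amount > self.balance:
            raise ValueError("invalid withdrawal amount")
        self.balance -= amount
        return self.balance

class AccountBlackBoxTests(unittest.TestCase):
    def test_valid_withdrawal_debits_balance(self):
        # probes "incorrect or missing functions"
        self.assertEqual(Account(100).withdraw(40), 60)

    def test_overdraw_is_rejected(self):
        # probes behavior errors at the requirement's stated limit
        with self.assertRaises(ValueError):
            Account(100).withdraw(150)

    def test_non_positive_amount_is_rejected(self):
        # probes interface (input) errors
        with self.assertRaises(ValueError):
            Account(100).withdraw(0)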
Integration Testing
Integration testing is a systematic technique for constructing the software architecture
while at the same time conducting tests to uncover errors associated with interfacing. The objective
is to take unit-tested components and build a program structure that has been dictated by design.
There is often a tendency to attempt nonincremental integration; that is, to construct the program using a “big bang” approach. All components are combined in advance, and the entire program is tested as a whole. If a set of errors is encountered, correction is difficult because isolation of causes is complicated by the vast expanse of the entire program. Once these errors are corrected, new ones appear and the process continues in a seemingly endless loop.
Incremental integration is the antithesis of the big bang approach. The program is
constructed and tested in small increments, where errors are easier to isolate and correct; interfaces
are more likely to be tested completely; and a systematic test approach may be applied. There are
two different incremental integration strategies: top-down integration and bottom-up integration.
Top-down integration. Top-down integration testing is an incremental approach to construction
of the software architecture. Modules are integrated by moving downward through the control
hierarchy, beginning with the main control module (main program). Modules subordinate to the
main control module are incorporated into the structure in either a depth-first or breadth-first
manner. Referring to the following figure, depth-first integration integrates all components on a
major control path of the program structure. For example, selecting the left-hand path,
components M1, M2, and M5 would be integrated first. Next, M8 or M6 would be integrated. Then,
the central and right-hand control paths are built. Breadth-first integration incorporates all
components directly subordinate at each level, moving across the structure horizontally. From the
figure, components M2, M3, and M4 would be integrated first. The next control level, M5, M6,
and so on, follows.
Bottom-up integration. Bottom-up integration testing begins construction and testing with atomic modules (components at the lowest levels in the program structure). A bottom-up integration strategy may be implemented with the following steps:
1. Low-level components are combined into clusters (sometimes called builds) that
perform a specific software subfunction.
2. A driver (a control program for testing) is written to coordinate test case input and
output.
3. The cluster is tested.
4. Drivers are removed and clusters are combined moving upward in the program
structure.
Integration follows the pattern illustrated in the following figure. Components are combined to form
clusters 1, 2, and 3. Each of the clusters is tested using a driver (shown as a dashed block).
Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed and the
clusters are interfaced directly to Ma. Similarly, driver D3 for cluster 3 is removed prior to
integration with module Mb. Both Ma and Mb will ultimately be integrated with component Mc,
and so forth.
Fig: Bottom-up integration
As integration moves upward, the need for separate test drivers lessens. In fact, if the top two levels
of program structure are integrated top down, the number of drivers can be reduced substantially
and integration of clusters is greatly simplified.
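The sketch below (module and test names are hypothetical) shows the role a driver plays in bottom-up integration: it stands in for the not-yet-integrated superordinate module, coordinating test-case input and output for a low-level cluster, and it is discarded once the real control module is attached:

# Low-level cluster under test (hypothetical, for illustration only).
def read_sensor(raw):
    return raw / 10.0

def is_alarm(reading, threshold=5.0):
    return reading >= threshold

def run_cluster_driver():
    # The driver supplies test-case inputs and checks the cluster's outputs,
    # acting as a temporary control program in place of the higher-level module.
    cases = [
        (80, True),    # 8.0 >= 5.0  -> alarm expected
        (30, False),   # 3.0 <  5.0  -> no alarm expected
        (50, True),    # boundary: 5.0 >= 5.0 -> alarm expected
    ]
    for raw, expected in cases:
        assert is_alarm(read_sensor(raw)) == expected, f"cluster failed for input {raw}"
    print("cluster passed all driver test cases")

if __name__ == "__main__":
    run_cluster_driver()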
Continuous Integration:
Continuous integration is the practice of merging components into the evolving software increment once
or more each day.
Regression testing. Regression testing is the reexecution of some subset of tests that have already
been conducted to ensure that changes have not propagated unintended side effects. Regression
testing helps to ensure that changes do not introduce unintended behavior or additional errors.
Regression testing may be conducted manually, by reexecuting a subset of all test cases or
using automated capture/playback tools. Capture/playback tools enable the software engineer to
capture test cases and results for subsequent playback and comparison. The regression test suite
(the subset of tests to be executed) contains three different classes of test cases:
• A representative sample of tests that will exercise all software functions.
• Additional tests that focus on software functions that are likely to be affected by the
change.
• Tests that focus on the software components that have been changed.
As integration testing proceeds, the number of regression tests can grow quite large.
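A minimal sketch of how such a suite might be assembled is shown below; the test names and the mapping of tests to components are hypothetical, but the three classes of test cases correspond to those just described:

# Class 1: a representative sample that exercises all major software functions.
REPRESENTATIVE_SAMPLE = {"test_login", "test_checkout", "test_search"}

# Which tests exercise which components (hypothetical mapping, for illustration).
TESTS_BY_COMPONENT = {
    "payment": {"test_checkout", "test_refund"},
    "search":  {"test_search", "test_search_filters"},
}

def build_regression_suite(changed_components, affected_components):
    suite = set(REPRESENTATIVE_SAMPLE)                     # class 1
    for component in affected_components:                  # class 2: likely affected
        suite |= TESTS_BY_COMPONENT.get(component, set())
    for component in changed_components:                   # class 3: actually changed
        suite |= TESTS_BY_COMPONENT.get(component, set())
    return sorted(suite)

print(build_regression_suite(changed_components=["payment"],
                             affected_components=["search"]))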
Smoke testing. Smoke testing is an integration testing approach that is commonly used when
product software is developed. It is designed as a pacing mechanism for time-critical projects,
allowing the software team to assess the project on a frequent basis. In essence, the smoke-testing
approach encompasses the following activities:
1. Software components that have been translated into code are integrated into a build. A
build includes all data files, libraries, reusable modules, and engineered components that
are required to implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from properly
performing its function. The intent should be to uncover “showstopper” errors that have
the highest likelihood of throwing the software project behind schedule.
3. The build is integrated with other builds, and the entire product (in its current form) is
smoke tested daily. The integration approach may be top down or bottom up.
McConnell describes the smoke test in the following manner:
The smoke test should exercise the entire system from end to end. It does not have to be
exhaustive, but it should be capable of exposing major problems. The smoke test should
be thorough enough that if the build passes, you can assume that it is stable enough to
be tested more thoroughly.
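A minimal sketch of what a daily smoke test might look like is shown below; the SafeHomeBuild class is a hypothetical stand-in for the current build, and the checks are deliberately shallow, aiming only to expose showstopper failures:

import unittest

# Hypothetical stand-in for the integrated daily build, assumed for illustration;
# in practice these calls would exercise the real build end to end.
class SafeHomeBuild:
    def start(self):
        self.running = True
        return True

    def arm_alarm(self):
        return self.running

    def poll_sensors(self):
        return ["door", "window"] if self.running else []

class DailySmokeTest(unittest.TestCase):
    def test_build_starts_and_core_functions_respond(self):
        build = SafeHomeBuild()
        self.assertTrue(build.start())         # the build comes up at all
        self.assertTrue(build.arm_alarm())     # a core product function responds
        self.assertTrue(build.poll_sensors())  # end-to-end path is exercised

if __name__ == "__main__":
    unittest.main()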
Smoke testing provides a number of benefits when it is applied on complex, time-critical software projects: integration risk is minimized, the quality of the end product is improved, error diagnosis and correction are simplified, and progress is easier to assess.
An overall plan for integration of the software and a description of specific tests are documented in a test specification.
This work product incorporates a test plan and a test procedure and becomes part of the
software configuration.
Testing is divided into phases and incremental builds that address specific functional and
behavioral characteristics of the software.
For example, integration testing for the SafeHome security system might be divided into
the following test phases: user interaction, sensor processing, communications functions,
and alarm processing.
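One way to organize such phases, sketched below with hypothetical test classes, is to keep each phase's tests in a separate suite so an incremental build runs only the phases it supports:

import unittest

class UserInteractionTests(unittest.TestCase):
    def test_keypad_accepts_valid_code(self):
        self.assertTrue(True)   # placeholder for a real user-interaction test

class SensorProcessingTests(unittest.TestCase):
    def test_door_sensor_event_is_reported(self):
        self.assertTrue(True)   # placeholder for a real sensor-processing test

def build_phase_suite(*test_classes):
    # Assemble the suites for the phases covered by the current build.
    loader = unittest.TestLoader()
    suite = unittest.TestSuite()
    for cls in test_classes:
        suite.addTests(loader.loadTestsFromTestCase(cls))
    return suite

if __name__ == "__main__":
    unittest.TextTestRunner().run(
        build_phase_suite(UserInteractionTests, SensorProcessingTests))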
VALIDATION TESTING
Validation testing begins at the culmination of integration testing, when individual components
have been exercised, the software is completely assembled as a package, and interfacing errors
have been uncovered and corrected.
Validation can be defined in many ways, but a simple definition is that validation succeeds
when software functions in a manner that can be reasonably expected by the customer.
Validation-Test Criteria
Software validation is achieved through a series of tests that demonstrate conformity with
requirements. After each validation test case has been conducted, one of two possible conditions
exists: (1) The function or performance characteristic conforms to specification and is accepted
or (2) a deviation from specification is uncovered and a deficiency list is created.
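A small sketch of recording those two outcomes is shown below (the types and fields are hypothetical); a conforming result is simply accepted, while a deviation is added to the deficiency list:

from dataclasses import dataclass, field

@dataclass
class ValidationResult:
    test_case: str
    conforms: bool
    deviation: str = ""

@dataclass
class DeficiencyList:
    items: list = field(default_factory=list)

    def record(self, result):
        # Only deviations from specification are added to the list.
        if not result.conforms:
            self.items.append(f"{result.test_case}: {result.deviation}")

deficiencies = DeficiencyList()
deficiencies.record(ValidationResult("VT-07", False, "response time exceeds the specified limit"))
deficiencies.record(ValidationResult("VT-08", True))
print(deficiencies.items)   # only the deviation is listed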
Configuration Review
An important element of the validation process is a configuration review. The intent of the review is to ensure that all elements of the software configuration have been properly developed, are cataloged, and have the necessary detail to bolster the support activities. The configuration review is sometimes called an audit.
Alpha and Beta Testing
When custom software is built for one customer, a series of acceptance tests is conducted to
enable the customer to validate all requirements. Conducted by the end user rather than software
engineers, an acceptance test can range from an informal “test drive” to a planned and
systematically executed series of tests. In fact, acceptance testing can be conducted over a period
of weeks or months, thereby uncovering cumulative errors that might degrade the system over
time.
The alpha test is conducted at the developer’s site by a representative group of end users.
The software is used in a natural setting with the developer “looking over the shoulder” of the
users and recording errors and usage problems. Alpha tests are conducted in a controlled
environment.
The beta test is conducted at one or more end-user sites. Unlike alpha testing, the
developer generally is not present. Therefore, the beta test is a “live” application of the software
in an environment that cannot be controlled by the developer. The customer records all problems
that are encountered during beta testing and reports these to the developer at regular intervals.
A variation on beta testing, called customer acceptance testing, is sometimes performed
when custom software is delivered to a customer under contract. The customer performs a series
of specific tests in an attempt to uncover errors before accepting the software from the developer.
Testing patterns
Testing patterns describe common testing problems and solutions that can assist you in dealing with
them.
The following three testing patterns (presented in abstract form only) provide representative examples: