4.1 Software Testing - Component Level
Software Testing: Testing is the process of exercising a program with the specific
intent of finding errors prior to delivery to the end user.
A number of software testing strategies have been proposed in the literature. All provide
a template for testing and all have the following generic characteristics:
• To perform effective testing, you should conduct effective technical reviews. By doing
this, many errors will be eliminated before testing commences.
• Testing begins at the component level and works “outward” toward the integration of
the entire computer-based system.
• Different testing techniques are appropriate for different software engineering
approaches and at different points in time.
• Testing is conducted by the developer of the software and (for large projects) an
independent test group.
• Testing and debugging are different activities, but debugging must be accommodated in
any testing strategy.
Verification refers to the set of tasks that ensure that software correctly implements a specific
function.
Validation refers to a different set of tasks that ensure that the software that has been built is
traceable to customer requirements.
Boehm states this another way:
Verification: “Are we building the product right?”
Validation: “Are we building the right product?”
Verification and validation include a wide array of SQA activities:
▪ performance monitoring,
▪ simulation,
▪ feasibility study,
▪ documentation review,
▪ database review,
▪ algorithm analysis,
▪ development testing.
Testing does provide the last bastion from which quality can be assessed and, more pragmatically,
errors can be uncovered.
Quality is not measured by the number of errors alone: the application of sound methods, an
appropriate process model, good tools, and formal technical reviews all build quality in, and that
quality is then confirmed during testing.
A strategy for software testing may also be viewed in the context of the spiral.
Unit testing begins at the vortex of the spiral and concentrates on each unit of the software
as implemented in source code. Testing progresses by moving outward along the spiral to
integration testing, where the focus is on design and the construction of the software architecture.
Taking another turn outward on the spiral, you encounter validation testing, where requirements
established as part of requirements modeling are validated against the software that has been
constructed. Finally, you arrive at system testing, where the software and other system elements
are tested as a whole.
Considering the process from a procedural point of view, testing within the context of
software engineering is actually a series of four steps that are implemented sequentially. The steps
are shown in the following figure. Initially, tests focus on each component individually, ensuring that
it functions properly as a unit. Hence, the name unit testing. Unit testing makes heavy use of
testing techniques that exercise specific paths in a component’s control structure to ensure
complete coverage and maximum error detection.
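For example, a minimal unit-test sketch in Python, assuming a hypothetical component
apply_discount whose control structure has two paths; each test forces execution of one path
(all names and values here are illustrative, not from the original text):

    import unittest

    # Hypothetical component under test: two control paths
    # (discount applied vs. not applied).
    def apply_discount(total, is_member):
        if is_member:
            return round(total * 0.90, 2)  # path 1: member discount
        return total                       # path 2: no discount

    class ApplyDiscountTests(unittest.TestCase):
        def test_member_path(self):
            self.assertEqual(apply_discount(100.0, True), 90.0)

        def test_non_member_path(self):
            self.assertEqual(apply_discount(100.0, False), 100.0)

    if __name__ == "__main__":
        unittest.main()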
Next, components must be assembled or integrated to form the complete software package.
Integration testing addresses the issues associated with the dual problems of verification and
program construction. Test case design techniques that focus on inputs and outputs are more
prevalent during integration, although techniques that exercise specific program paths may be used
to ensure coverage of major control paths. After the software has been integrated (constructed), a
set of high-order tests is conducted. Validation criteria must be evaluated. Validation testing
provides final assurance that software meets all informational, functional, behavioral, and
performance requirements.
◼ A testing strategy that is chosen by most software teams falls between the two extremes:
waiting until the system is fully constructed before any testing begins, and testing every new
increment as soon as it is built.
◼ It takes an incremental view of testing, beginning with the testing of individual program units,
moving to tests designed to facilitate the integration of the units, and culminating with tests that
exercise the constructed system.
Tom Gilb argues that a software testing strategy will succeed when software testers:
• Specify product requirements in a quantifiable manner long before testing commences.
Although the overriding objective of testing is to find errors, a good testing strategy also
assesses other quality characteristics such as portability, maintainability, and usability. These
should be specified in a way that is measurable so that testing results are unambiguous.
• State testing objectives explicitly. The specific objectives of testing should be stated in
measurable terms.
• Understand the users of the software and develop a profile for each user category. Use cases
that describe the interaction scenario for each class of user can reduce overall testing effort by
focusing testing on actual use of the product.
• Develop a testing plan that emphasizes “rapid cycle testing.” Gilb recommends that a
software team “learn to test in rapid cycles.” The feedback generated from these rapid-cycle tests
can be used to control quality levels and the corresponding test strategies.
• Build “robust” software that is designed to test itself. Software should be designed in a manner
that uses antibugging techniques. That is, software should be capable of diagnosing certain
classes of errors. In addition, the design should accommodate automated testing and regression
testing.
• Use effective technical reviews as a filter prior to testing. Technical reviews can be as
effective as testing in uncovering errors.
• Conduct technical reviews to assess the test strategy and test cases themselves. Technical
reviews can uncover inconsistencies, omissions, and outright errors in the testing approach.
This saves time and also improves product quality.
• Develop a continuous improvement approach for the testing process. The test strategy should
be measured. The metrics collected during testing should be used as part of a statistical process
control approach for software testing.
◼ The test cases and directions for their use are developed and reviewed by the stakeholders as
the code needed to implement each user story is created.
◼ Testing results are shared with all team members as soon as practical to allow changes in both
existing and future code development.
◼ For this reason, many teams choose to keep their test recordkeeping in online documents.
◼ The test cases can be recorded in a shared online spreadsheet (e.g., Google Docs) that briefly
describes each test case, contains a pointer to the requirement being tested, and records the
expected output from the test-case data or the criteria for success.
◼ The spreadsheet should also allow testers to indicate whether a test passed or failed and the
dates the test case was run, and should have room for comments about why a test may have
failed, to aid in debugging.
◼ This type of online form can be viewed as needed for analysis, and it is easy to summarize at
team meetings.
◼ Drivers and stubs represent overhead. That is, both are software that must be written but that
is not delivered with the final software product; a sketch appears after this list.
◼ When a component cannot be adequately unit tested with simple drivers and stubs, complete
testing can be postponed until the integration test step.
◼ Unit testing is simplified when a component with high cohesion is designed.
◼ When only one function is addressed by a component, the number of test cases is reduced and
errors can be more easily predicted and uncovered.
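As promised above, a sketch of a driver and a stub, assuming a hypothetical component
compute_invoice_total that depends on a tax service; the stub stands in for the service, and the
driver feeds test data and reports results. All names are assumptions for illustration:

    # A stub: stands in for a collaborator (here, a tax service) that
    # is not yet available. Returns a fixed, predictable value.
    def tax_rate_stub(region):
        return 0.10  # always 10%, so expected results are easy to compute

    # The component under test (hypothetical example).
    def compute_invoice_total(subtotal, region, get_tax_rate):
        return round(subtotal * (1 + get_tax_rate(region)), 2)

    # A driver: a throwaway program that invokes the component with
    # test data and reports results. It is overhead -- never delivered.
    def driver():
        cases = [(100.00, "US", 110.00), (0.00, "US", 0.00)]
        for subtotal, region, expected in cases:
            actual = compute_invoice_total(subtotal, region, tax_rate_stub)
            print(f"{subtotal=} {expected=} {actual=} "
                  f"{'PASS' if actual == expected else 'FAIL'}")

    if __name__ == "__main__":
        driver()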
Cost-Effective Testing
◼ Exhaustive testing requires every possible combination of input values and test-case
orderings be processed by the component being tested (e.g., consider the move generator in
a computer chess game).
◼ In some cases, this would require the creation of a near-infinite number of data sets. The
return on exhaustive testing is often not worth the effort, since testing alone cannot be used
to prove a component is correctly implemented.
◼ There are some situations in which you will not have the resources to do comprehensive unit
testing.
◼ In these cases, testers should focus unit testing on the modules crucial to the success of the
project and on those suspected to be error-prone because they have high complexity metrics.
Unit Testing
Unit testing focuses verification effort on the smallest unit of software design. The unit test
focuses on the internal processing logic and data structures within the boundaries of a component.
This type of testing can be conducted in parallel for multiple components.
Unit-test considerations. Unit tests are illustrated schematically in the following figure.
• The module interface is tested to ensure that information properly flows into and out of
the program unit under test.
• Local data structures are examined to ensure that data stored temporarily maintains its
integrity during all steps in an algorithm’s execution.
• All independent paths through the control structure are exercised to ensure that all
statements in a module have been executed at least once.
• Boundary conditions are tested to ensure that the module operates properly at boundaries
established to limit or restrict processing.
• And finally, all error-handling paths are tested.
Fig.: Unit Test
Selective testing of execution paths is an essential task during the unit test. Test cases
should be designed to uncover errors due to erroneous computations, incorrect comparisons, or
improper control flow.
Boundary testing is one of the most important unit testing tasks. Software often fails at its
boundaries. That is, errors often occur when the nth element of an n-dimensional array is
processed, when the ith repetition of a loop with i passes is invoked, or when the maximum or
minimum allowable value is encountered.
A good design anticipates error conditions and establishes error-handling paths to reroute
or cleanly terminate processing when an error does occur. Yourdon calls this approach
antibugging.
Among the potential errors that should be tested when error handling is evaluated are:
(1) error description is unintelligible,
(2) error noted does not correspond to error encountered,
(3) error condition causes system intervention prior to error handling,
(4) exception-condition processing is incorrect, or
(5) error description does not provide enough information to assist in the location of the
cause of the error.
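To make the error-handling checks concrete, here is a minimal sketch assuming a hypothetical
component get_element that must reject out-of-range indices with an intelligible message (the
component and its behavior are assumptions for this example):

    import unittest

    # Hypothetical component under test: returns the element at a
    # 1-based index, raising IndexError with a clear message otherwise.
    def get_element(items, index):
        if index < 1 or index > len(items):
            raise IndexError(f"index {index} out of range 1..{len(items)}")
        return items[index - 1]

    class BoundaryAndErrorTests(unittest.TestCase):
        def setUp(self):
            self.items = ["a", "b", "c"]  # n = 3

        def test_boundaries(self):
            # Exercise the module at its established boundaries (1 and n).
            self.assertEqual(get_element(self.items, 1), "a")
            self.assertEqual(get_element(self.items, 3), "c")

        def test_error_handling_paths(self):
            # Just below and just above the boundaries must raise, and
            # the error description must be intelligible.
            for bad in (0, 4):
                with self.assertRaises(IndexError) as ctx:
                    get_element(self.items, bad)
                self.assertIn("out of range", str(ctx.exception))

    if __name__ == "__main__":
        unittest.main()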
◼ Because software so often fails at its boundaries, boundary value analysis (BVA) should
always be performed as one of the final unit-test tasks.
◼ Test cases that exercise data structure, control flow, and data values just below, at, and just
above maxima and minima are very likely to uncover errors.
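A minimal sketch of BVA test data for an input accepted over a range, assuming hypothetical
limits MIN = 1 and MAX = 100:

    # Boundary value analysis: test just below, at, and just above
    # each boundary of the valid range (limits assumed for illustration).
    MIN, MAX = 1, 100

    def accepts(value):
        # Hypothetical validator for the component under test.
        return MIN <= value <= MAX

    bva_cases = [
        (MIN - 1, False),  # just below minimum
        (MIN,     True),   # at minimum
        (MIN + 1, True),   # just above minimum
        (MAX - 1, True),   # just below maximum
        (MAX,     True),   # at maximum
        (MAX + 1, False),  # just above maximum
    ]

    for value, expected in bva_cases:
        assert accepts(value) == expected, f"BVA case failed at {value}"
    print("all BVA cases passed")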
Traceability
◼ To ensure that the testing process is auditable, each test case needs to be traceable back to specific
functional or nonfunctional requirements.
◼ Nonfunctional requirements need to be traceable to specific business or architectural
requirements.
◼ Regression testing requires retesting selected components that may be affected by changes made
to other software components with which they collaborate.
Any engineered product can be tested in one of two ways: (1) knowing the specified function that
a product has been designed to perform, tests can be conducted that demonstrate each function is
fully operational while at the same time searching for errors in each function (black-box testing);
and (2) knowing the internal workings of a product, tests can be conducted to ensure that internal
operations are performed according to specifications (white-box testing).
WHITE-BOX TESTING
White-box testing uses the control structure described as part of component-level design to derive
test cases. Basis path testing, first proposed by Tom McCabe, is one such technique: it begins by
mapping the component's logic onto a flow graph. Referring to figure (b), each circle, called a flow
graph node, represents one or more procedural statements. A sequence of process boxes and a
decision diamond can map into a single node. The arrows on the flow graph, called edges or links,
represent flow of control and are analogous to flowchart arrows. An edge must terminate at a node,
even if the node does not represent any procedural statements. Areas bounded by edges and nodes
are called regions. When counting regions, we include the area outside the graph as a region. Each
node that contains a condition is called a predicate node and is characterized by two or more edges
emanating from it.
How do you know how many paths to look for? The computation of cyclomatic complexity
provides the answer. Cyclomatic complexity is a software metric that provides a quantitative
measure of the logical complexity of a program. When used in the context of the basis path testing
method, the value computed for cyclomatic complexity defines the number of independent paths
in the basis set of a program and provides you with an upper bound for the number of tests that
must be conducted to ensure that all statements have been executed at least once.
Cyclomatic complexity has a foundation in graph theory and provides you with an extremely
useful software metric. Complexity is computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity V(G) for a flow graph G is defined as
V(G) = E - N + 2
where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity V(G) for a flow graph G is also defined as
V(G) = P + 1
where P is the number of predicate nodes contained in the flow graph G.
Referring once more to the flow graph in figure (b), the cyclomatic complexity can be computed
using each of the algorithms just noted:
1. The flow graph has four regions.
2. V(G) = 11 edges - 9 nodes + 2 = 4.
3. V(G) = 3 predicate nodes + 1 = 4.
Therefore, the cyclomatic complexity of the flow graph in figure (b) is 4.
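The formulas can be checked mechanically. The sketch below encodes an illustrative flow graph
with the counts quoted above (11 edges, 9 nodes, 3 predicate nodes); the topology is an
assumption, since the figure itself is not reproduced here:

    from collections import Counter

    # Cyclomatic complexity from a flow graph encoded as a set of
    # directed edges between numbered nodes (illustrative topology).
    edges = {
        (1, 2), (2, 3), (2, 7), (3, 4), (3, 5), (4, 6),
        (5, 6), (6, 8), (7, 8), (8, 9), (8, 2),
    }  # 11 edges
    nodes = {n for edge in edges for n in edge}  # 9 nodes

    # Predicate nodes have two or more edges emanating from them.
    out_degree = Counter(src for src, _ in edges)
    predicates = [n for n, d in out_degree.items() if d >= 2]  # 2, 3, 8

    print(len(edges) - len(nodes) + 2)  # V(G) = E - N + 2 = 4
    print(len(predicates) + 1)          # V(G) = P + 1     = 4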
Deriving Test Cases
The basis path testing method can be applied to a procedural design or to source code. The
following steps can be applied to derive the basis set:
1. Using the design or code as a foundation, draw a corresponding flow graph.
2. Determine the cyclomatic complexity of the resultant flow graph.
3. Determine a basis set of linearly independent paths.
4. Prepare test cases that will force execution of each path in the basis set.
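As an illustration of steps 3 and 4, a sketch for a small hypothetical function with V(G) = 3;
the basis paths and the inputs that force them are assumptions chosen for the example:

    # Hypothetical component: two predicate nodes, so V(G) = 2 + 1 = 3
    # and the basis set contains three linearly independent paths.
    def classify(x):
        if x < 0:       # predicate 1
            return "negative"
        if x == 0:      # predicate 2
            return "zero"
        return "positive"

    # Step 3: a basis set of independent paths (described informally):
    #   path 1: x < 0  -> "negative"
    #   path 2: x == 0 -> "zero"
    #   path 3: x > 0  -> "positive"
    # Step 4: one test case forcing execution of each basis path.
    basis_cases = [(-5, "negative"), (0, "zero"), (7, "positive")]

    for value, expected in basis_cases:
        assert classify(value) == expected
    print("all basis paths exercised")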
Condition testing, data flow testing, and loop testing broaden control structure testing coverage
and improve the quality of white-box testing.
Condition Testing
Condition testing is a test-case design method that exercises the logical conditions
contained in a program module. A simple condition is a Boolean variable or a relational expression,
possibly preceded with one NOT (¬) operator. A relational expression takes the form
E1 <relational-operator>E2
where E1 and E2 are arithmetic expressions and <relational-operator> is one of the following:
<, <=, =, ≠, >, or >=. A compound condition is composed of two or more simple conditions, Boolean
operators, and parentheses. Boolean operators allowed in a compound condition include OR ( | ),
AND (&), and NOT (¬). A condition without relational expressions is referred to as a Boolean
expression.
If a condition is incorrect, then at least one component of the condition is incorrect.
Therefore, types of errors in a condition include Boolean operator errors (incorrect/missing/extra
Boolean operators), Boolean variable errors, Boolean parenthesis errors, relational operator errors,
and arithmetic expression errors. The condition testing method focuses on testing each condition
in the program to ensure that it does not contain errors.
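A minimal sketch: for a hypothetical compound condition (a > 0) & (b <= max_b), test cases are
chosen so that each simple condition evaluates both true and false at least once:

    # Hypothetical guard under test.
    def in_window(a, b, max_b=10):
        return (a > 0) and (b <= max_b)

    # Each simple condition is driven true and false at least once,
    # exposing Boolean-operator and relational-operator errors.
    condition_cases = [
        (1,  5, True),    # a > 0 true,  b <= max_b true
        (1, 11, False),   # a > 0 true,  b <= max_b false
        (0,  5, False),   # a > 0 false, b <= max_b true
        (0, 11, False),   # a > 0 false, b <= max_b false
    ]

    for a, b, expected in condition_cases:
        assert in_window(a, b) == expected
    print("all condition cases passed")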
Data Flow Testing
The data flow testing method selects test paths of a program according to the locations of
definitions and uses of variables in the program. To illustrate the data flow testing approach,
assume that each statement in a program is assigned a unique statement number and that each
function does not modify its parameters or global variables.
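A brief sketch of definitions and uses, with statement numbers in comments; the function is
hypothetical:

    # Data flow testing tracks where a variable is defined (DEF) and
    # where it is used (USE); test paths cover definition-use pairs.
    def running_total(values):        # S1: DEF(values)
        total = 0                     # S2: DEF(total)
        for v in values:              # S3: USE(values), DEF(v)
            total = total + v         # S4: USE(total), USE(v), DEF(total)
        return total                  # S5: USE(total)

    # One test path covers the DEF of total at S2 and its USE at S5
    # without passing through S4 (the loop body is skipped):
    assert running_total([]) == 0
    # Another path covers the S4 DEF of total and its USE at S5:
    assert running_total([2, 3]) == 5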
Loop Testing
Loops are the cornerstone for the vast majority of all algorithms implemented in software.
Loop testing is a white-box testing technique that focuses exclusively on the validity of
loop constructs. Four different classes of loops can be defined: simple loops, concatenated loops,
nested loops, and unstructured loops (shown in figure).
Simple loops. The following set of tests can be applied to simple loops, where n is the maximum
number of allowable passes through the loop.
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop, where m < n.
5. n - 1, n, n + 1 passes through the loop.
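A minimal sketch of this schedule, assuming a hypothetical loop bounded at n = 10 passes:

    # Simple-loop test schedule for a loop with at most n passes.
    n = 10          # assumed maximum number of allowable passes
    m = 5           # some value with m < n

    def sum_first(k, limit=n):
        # Hypothetical loop under test: sums 1..k, capped at `limit` passes.
        total = 0
        for i in range(1, min(k, limit) + 1):
            total += i
        return total

    # Iteration counts drawn from the five classes of simple-loop tests:
    for passes in (0, 1, 2, m, n - 1, n, n + 1):
        expected = sum(range(1, min(passes, n) + 1))
        assert sum_first(passes) == expected
    print("simple-loop schedule exercised")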
Nested loops. If we were to extend the test approach for simple loops to nested loops, the number
of possible tests would grow geometrically as the level of nesting increases. This would result in
an impractical number of tests.
Beizer suggests an approach that will help to reduce the number of tests:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimum iteration parameter (e.g., loop counter) values. Add other tests for out-of-range
or excluded values.
3. Work outward, conducting tests for the next loop, but keeping all other outer loops at
minimum values and other nested loops to “typical” values.
4. Continue until all loops have been tested.
Concatenated loops. Concatenated loops can be tested using the approach defined for simple
loops, if each of the loops is independent of the other. However, if two loops are concatenated and
the loop counter for loop 1 is used as the initial value for loop 2, then the loops are not independent.
When the loops are not independent, the approach applied to nested loops is recommended.
Unstructured loops. Whenever possible, this class of loops should be redesigned to reflect the
use of the structured programming constructs.
BLACK-BOX TESTING
Black-box testing, also called behavioral testing, focuses on the functional requirements of
the software. That is, black-box testing techniques enable you to derive sets of input conditions
that will fully exercise all functional requirements for a program.
Black-box testing is not an alternative to white-box techniques. Rather, it is a complementary
approach that is likely to uncover a different class of errors than white-box methods.
Black-box testing attempts to find errors in the following categories:
(1) incorrect or missing functions,
(2) interface errors,
(3) errors in data structures or external database access,
(4) behavior or performance errors, and
(5) initialization and termination errors.
By applying black-box techniques, you derive a set of test cases that satisfy the following
criteria: (1) test cases that reduce, by a count that is greater than one, the number of additional
test cases that must be designed to achieve reasonable testing; and (2) test cases that tell you
something about the presence or absence of classes of errors, rather than an error associated only
with the specific test at hand.
Interface Testing
• Interface testing is used to check that the program component accepts information passed to it
in the proper order and with the proper data types, and returns information in the proper order
and data format.
• Interface testing is often considered part of integration testing.
• Because most components are not stand-alone programs, it is important to make sure that when
the component is integrated into the evolving program it will not break the build.
• Stubs and drivers sometimes incorporate test cases to be passed to the component or accessed
by the component.
• In other cases, debugging code may need to be inserted inside the component to check that data
passed was received correctly.
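A small sketch, assuming a hypothetical component parse_record that must return fields in a
documented order and with documented types:

    import unittest

    # Hypothetical component: parses "name,age" into (str, int).
    def parse_record(line):
        name, age = line.split(",")
        return name.strip(), int(age)

    class InterfaceTests(unittest.TestCase):
        def test_order_and_types(self):
            # Information must come back in the proper order and format.
            name, age = parse_record("Ada, 36")
            self.assertIsInstance(name, str)
            self.assertIsInstance(age, int)
            self.assertEqual((name, age), ("Ada", 36))

    if __name__ == "__main__":
        unittest.main()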
Equivalence Partitioning
Equivalence partitioning is a black-box testing method that divides the input domain of a program
into classes of data from which test cases can be derived. Test-case design for equivalence
partitioning is based on an evaluation of equivalence classes for an input condition. Using concepts
introduced in the preceding section, if a set of objects can be linked by relationships that are
symmetric, transitive, and reflexive, an equivalence class is present.
Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes
are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence
class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
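As an illustration of guideline 1, a sketch of equivalence classes for a hypothetical input that
must be an integer month in the range 1 to 12:

    # Equivalence partitioning for a range input (assumed: month in 1..12).
    # Guideline 1: a range yields one valid and two invalid classes.
    def is_valid_month(m):
        return isinstance(m, int) and 1 <= m <= 12

    partitions = {
        "valid: within range":  [6],    # one representative is enough
        "invalid: below range": [0],
        "invalid: above range": [13],
    }

    for label, samples in partitions.items():
        for m in samples:
            print(f"{label:24} m={m:<3} -> {is_valid_month(m)}")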
Boundary Value Analysis
Boundary value analysis (BVA) complements equivalence partitioning by selecting test cases that
exercise bounding values. Its guidelines are similar in many respects to those for equivalence
partitioning:
1. If an input condition specifies a range bounded by values a and b, test cases should be
designed with values a and b and just above and just below a and b.
2. If an input condition specifies a number of values, test cases should be developed that
exercise the minimum and maximum numbers. Values just above and below minimum and
maximum are also tested.
3. Apply guidelines 1 and 2 to output conditions.
4. If internal program data structures have prescribed boundaries, be certain to design a test
case to exercise the data structure at its boundary. Most software engineers intuitively
perform BVA to some degree.