
SOFTWARE TESTING – COMPONENT LEVEL

Software Testing: Testing is the process of exercising a program with the specific
intent of finding errors prior to delivery to the end user.

A STRATEGIC APPROACH TO SOFTWARE TESTING

A number of software testing strategies have been proposed in the literature. All provide
a template for testing and all have the following generic characteristics:
• To perform effective testing, you should conduct effective technical reviews. By doing
this, many errors will be eliminated before testing commences.
• Testing begins at the component level and works “outward” toward the integration of
the entire computer-based system.
• Different testing techniques are appropriate for different software engineering
approaches and at different points in time.
• Testing is conducted by the developer of the software and (for large projects) an
independent test group.
• Testing and debugging are different activities, but debugging must be accommodated in
any testing strategy.

Verification and Validation


Software testing is one element of a broader topic that is often referred to as verification and
validation (V&V).

Verification refers to the set of tasks that ensure that software correctly implements a specific
function.

Validation refers to a different set of tasks that ensure that the software that has been built is
traceable to customer requirements.
Boehm states this another way:
Verification: “Are we building the product right?”
Validation: “Are we building the right product?”
Verification and validation include a wide array of SQA activities:

▪ formal technical reviews,

▪ quality and configuration audits,

▪ performance monitoring,

▪ simulation,

▪ feasibility study,

▪ documentation review,

▪ database review,

▪ algorithm analysis,

▪ development testing,

▪ qualification testing, and installation testing

Testing does provide the last bastion from which quality can be assessed and, more pragmatically,
errors can be uncovered.

Quality is not measured only by the number of errors found. The application of sound methods,
process models, tools, and formal technical reviews all build quality in, and that quality is then
confirmed during testing.

Organizing for Software Testing


For every software project, there is an inherent conflict of interest that occurs as testing
begins. The people who have built the software are now asked to test the software.
The software developer is always responsible for testing the individual units (components)
of the program, ensuring that each performs the function or exhibits the behavior for which it was
designed. In many cases, the developer also conducts integration testing—a testing step that leads
to the construction (and test) of the complete software architecture. Only after the software
architecture is complete does an independent test group become involved.
The role of an independent test group (ITG) is to remove the inherent problems associated
with letting the builder test the thing that has been built. Independent testing removes the conflict
of interest that may otherwise be present. The developer and the ITG work closely throughout a
software project to ensure that thorough tests will be conducted. While testing is conducted, the
developer must be available to correct errors that are uncovered.
Software Testing Strategy for conventional software architecture
The software process may be viewed as the spiral illustrated in the following figure. Initially,
system engineering defines the role of software and leads to software requirements analysis, where
the information domain, function, behavior, performance, constraints, and validation criteria for
software are established. Moving inward along the spiral, you come to design and finally to coding.
To develop computer software, you spiral inward (counterclockwise) along streamlines that
decrease the level of abstraction on each turn.

Fig : Testing Strategy

A strategy for software testing may also be viewed in the context of the spiral.

Unit testing begins at the vortex of the spiral and concentrates on each unit of the software
as implemented in source code. Testing progresses by moving outward along the spiral to
integration testing, where the focus is on design and the construction of the software architecture.
Taking another turn outward on the spiral, you encounter validation testing, where requirements
established as part of requirements modeling are validated against the software that has been
constructed. Finally, you arrive at system testing, where the software and other system elements
are tested as a whole.
Considering the process from a procedural point of view, testing within the context of
software engineering is actually a series of four steps that are implemented sequentially. The steps
are shown in the following figure. Initially, tests focus on each component individually, ensuring that
it functions properly as a unit. Hence, the name unit testing. Unit testing makes heavy use of
testing techniques that exercise specific paths in a component’s control structure to ensure
complete coverage and maximum error detection.
Next, components must be assembled or integrated to form the complete software package.
Integration testing addresses the issues associated with the dual problems of verification and
program construction. Test case design techniques that focus on inputs and outputs are more
prevalent during integration, although techniques that exercise specific program paths may be used
to ensure coverage of major control paths. After the software has been integrated (constructed), a
set of high-order tests is conducted. Validation criteria must be evaluated. Validation testing
provides final assurance that software meets all informational, functional, behavioral, and
performance requirements.

Fig : Software testing steps


The last high-order testing step falls outside the boundary of software engineering and into
the broader context of computer system engineering. Software, once validated, must be combined
with other system elements (e.g., hardware, people, databases). System testing verifies that all
elements mesh properly and that overall system function/performance is achieved.

Criteria for Completion of Testing


“When are we done testing—how do we know that we’ve tested enough?” Sadly, there is no
definitive answer to this question, but there are a few pragmatic responses and early attempts at
empirical guidance.
One response to the question is: “You’re never done testing; the burden simply shifts from
you (the software engineer) to the end user.” Every time the user executes a computer program,
the program is being tested.
Although few practitioners would argue with these responses, you need more rigorous
criteria for determining when sufficient testing has been conducted. The cleanroom software
engineering approach suggests statistical use techniques that execute a series of tests derived
from a statistical sample of all possible program executions by all users from a targeted
population.
By collecting metrics during software testing and making use of existing software
reliability models, it is possible to develop meaningful guidelines for answering the question:
“When are we done testing?”

PLANNING AND RECORD KEEPING

◼ A testing strategy that is chosen by most software teams falls between two extremes: waiting
until the entire system is constructed before any testing begins, and testing every new piece of
code as soon as it is written.
◼ It takes an incremental view of testing, beginning with the testing of individual program units,
moving to tests designed to facilitate the integration of the units, and culminating with tests that
exercise the constructed system.
◼ Unit testing focuses verification effort on the smallest unit of software design. The unit test
focuses on the internal processing logic and data structures within the boundaries of a component.
This type of testing can be conducted in parallel for multiple components.

Tom Gilb argues that a software testing strategy will succeed when software testers:
• Specify product requirements in a quantifiable manner long before testing commences.
Although the overriding objective of testing is to find errors, a good testing strategy also
assesses other quality characteristics such as portability, maintainability, and usability. These
should be specified in a way that is measurable so that testing results are unambiguous.
• State testing objectives explicitly. The specific objectives of testing should be stated in
measurable terms.
• Understand the users of the software and develop a profile for each user category. Use cases
that describe the interaction scenario for each class of user can reduce overall testing effort by
focusing testing on actual use of the product.

• Develop a testing plan that emphasizes “rapid cycle testing.” Gilb recommends that a
software team “learn to test in rapid cycles.” The feedback generated from these rapid-cycle tests
can be used to control quality levels and the corresponding test strategies.
• Build “robust” software that is designed to test itself. Software should be designed in a manner
that uses antibugging techniques. That is, software should be capable of diagnosing certain
classes of errors. In addition, the design should accommodate automated testing and regression
testing.
• Use effective technical reviews as a filter prior to testing. Technical reviews can be as
effective as testing in uncovering errors.
• Conduct technical reviews to assess the test strategy and test cases themselves. Technical
reviews can uncover inconsistencies, omissions, and outright errors in the testing approach.
This saves time and also improves product quality.
• Develop a continuous improvement approach for the testing process. The test strategy should
be measured. The metrics collected during testing should be used as part of a statistical process
control approach for software testing.

◼ The test cases and directions for their use are developed and reviewed by the stakeholders as
the code needed to implement each user story is created.
◼ Testing results are shared with all team members as soon as practical to allow changes in both
existing and future code development.
◼ For this reason, many teams choose to keep their test recordkeeping in online documents.
◼ The test cases can be recorded in a shared online spreadsheet (e.g., a Google Docs sheet) that
briefly describes each test case, contains a pointer to the requirement being tested, and records
the expected output from the test-case data or the criteria for success.
◼ The form should allow testers to indicate whether the test passed or failed and the dates on
which it was run, and should have room for comments about why a test may have failed, to aid
in debugging.
◼ This type of online form can be viewed as needed for analysis, and it is easy to summarize at
team meetings.
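
To make this concrete, a single row of such a form might be captured as the minimal Python sketch below. Every field name and value here is an illustrative assumption, not a prescribed schema:

```python
# A minimal sketch of one test-case record as it might appear in a team's
# shared spreadsheet or JSON log. All field names and values are
# illustrative assumptions rather than a prescribed schema.
test_case_record = {
    "id": "TC-042",
    "description": "Reject login when the password field is empty",
    "requirement": "REQ-AUTH-003",   # pointer to the requirement being tested
    "expected": "Error message displayed; no session created",
    "status": "failed",              # "passed" or "failed"
    "dates_run": ["2024-03-11"],
    "comments": "Fails when username has a trailing space; noted to aid debugging",
}
```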

Role of Scaffolding – Unit Test Procedures


◼ The design of unit tests can occur before coding or after source code has been generated.
◼ A review of design information provides guidance for establishing test cases. Each test case
should be coupled with a set of expected results.
◼ Because a component is not a stand-alone program, some type of scaffolding is required to
create a testing framework; that is, driver and/or stub software must be developed for each unit
test.
◼ In most applications a driver is nothing more than a "main program" that accepts test case
data, passes such data to the component (to be tested), and prints relevant results.
◼ A stub or "dummy subprogram" uses the subordinate module's interface, may do minimal
data manipulation, prints verification of entry, and returns control to the module undergoing
testing.
◼ Stubs serve to replace modules that are subordinate to the component being tested. (A minimal
sketch of a driver and a stub follows this list.)
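
The sketch below illustrates the driver/stub idea in Python under simple assumptions: the component under test, the subordinate tax-lookup module it depends on, and all names are hypothetical.

```python
# Hypothetical component under test: computes a total price by calling a
# subordinate tax-lookup module that is not yet available.
def compute_total(amount, tax_lookup):
    return amount + amount * tax_lookup(amount)

# Stub: a "dummy subprogram" standing in for the real tax-lookup module.
# It prints verification of entry, does minimal data manipulation, and
# returns control to the module under test.
def tax_lookup_stub(amount):
    print(f"stub entered with amount={amount}")
    return 0.10  # fixed, predictable rate for testing

# Driver: a "main program" that accepts test-case data, passes it to the
# component, and prints relevant results.
if __name__ == "__main__":
    for amount in (0.0, 100.0, 999.99):
        result = compute_total(amount, tax_lookup_stub)
        print(f"compute_total({amount}) -> {result}")
```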

Unit test Environment

◼ Drivers and stubs represent overhead. That is, both are software that must be written but that
is not delivered with the final software product.
◼ When the overhead needed to build adequate drivers and stubs is high, complete unit testing
may be postponed until the integration test step.
◼ Unit testing is simplified when a component with high cohesion is designed.
◼ When only one function is addressed by a component, the number of test cases is reduced and
errors can be more easily predicted and uncovered.

Cost-Effective Testing
◼ Exhaustive testing requires every possible combination of input values and test-case
orderings be processed by the component being tested (e.g., consider the move generator in
a computer chess game).
◼ In some cases, this would require the creation of a near-infinite number of data sets. The
return on exhaustive testing is often not worth the effort, since testing alone cannot be used
to prove a component is correctly implemented.
◼ There are some situations in which you will not have the resources to do comprehensive unit
testing.
◼ In these cases, testers should select modules crucial to the success of the project, along with
those suspected to be error-prone because of high complexity metrics, as the focus for unit
testing.

TEST CASE DESIGN

A testing strategy that is chosen by most software teams falls between these two extremes. It
takes an incremental view of testing, beginning with the testing of individual program units and
moving to tests designed to facilitate the integration of the units.
Unit Testing
Unit testing focuses verification effort on the smallest unit of software design. The unit test
focuses on the internal processing logic and data structures within the boundaries of a component.
This type of testing can be conducted in parallel for multiple components.
Unit-test considerations. Unit tests are illustrated schematically in the following figure.
• The module interface is tested to ensure that information properly flows into and out of
the program unit under test.
• Local data structures are examined to ensure that data stored temporarily maintains its
integrity during all steps in an algorithm’s execution.
• All independent paths through the control structure are exercised to ensure that all
statements in a module have been executed at least once.
• Boundary conditions are tested to ensure that the module operates properly at boundaries
established to limit or restrict processing.
• And finally, all error-handling paths are tested.
Fig : Unit Test
Selective testing of execution paths is an essential task during the unit test. Test cases
should be designed to uncover errors due to erroneous computations, incorrect comparisons, or
improper control flow.
Boundary testing is one of the most important unit testing tasks. Software often fails at its
boundaries. That is, errors often occur when the nth element of an n-dimensional array is
processed, when the ith repetition of a loop with i passes is invoked, or when the maximum or
minimum allowable value is encountered.
A good design anticipates error conditions and establishes error-handling paths to reroute
or cleanly terminate processing when an error does occur. Yourdon calls this approach
antibugging.
Among the potential errors that should be tested when error handling is evaluated are:
(1) error description is unintelligible,
(2) error noted does not correspond to error encountered,
(3) error condition causes system intervention prior to error handling,
(4) exception-condition processing is incorrect, or
(5) error description does not provide enough information to assist in the location of the
cause of the error.
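
As a sketch of how one of these error-handling paths might be exercised in practice, consider the hypothetical pytest case below; the function, the error message, and the values are assumptions made for illustration.

```python
import pytest

# Hypothetical component with an explicit error-handling path.
def withdraw(balance, amount):
    if amount > balance:
        # The error description names the cause, addressing items (1) and (5) above.
        raise ValueError(f"insufficient funds: balance={balance}, requested={amount}")
    return balance - amount

def test_error_handling_path():
    # Forces the error-handling path and checks that the error noted
    # corresponds to the error encountered (item (2) above).
    with pytest.raises(ValueError, match="insufficient funds"):
        withdraw(balance=50, amount=100)
```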

◼ Because software so often fails at its boundaries, a boundary value analysis (BVA) test should
always be among the final tasks of any unit test.
◼ Test cases that exercise data structure, control flow, and data values just below, at, and just
above maxima and minima are very likely to uncover errors.

Requirements and Use Cases


◼ Use cases and models can be used to guide the systematic creation of test cases for testing the
functional requirements of each software component and provide good test coverage overall.
◼ The analysis artifacts do not provide much insight into the creation of test cases for many
nonfunctional requirements (e.g., usability or reliability).
◼ Test-case developers make use of additional information, based on their professional experience,
to quantify acceptance criteria and make them testable.
◼ Testing nonfunctional requirements may require the use of integration testing methods.

Traceability
◼ To ensure that the testing process is auditable, each test case needs to be traceable back to specific
functional or nonfunctional requirements.
◼ Nonfunctional requirements need to be traceable to specific business or architectural
requirements.
◼ Regression testing requires retesting selected components that may be affected by changes made
to other software components with which they collaborate.

INTERNAL AND EXTERNAL VIEWS OF TESTING

Any engineered product can be tested in one of two ways:

(1) Knowing the specified function that a product has been designed to perform, tests can
be conducted that demonstrate each function is fully operational while at the same time searching
for errors in each function.

(2) Knowing the internal workings of a product, tests can be conducted to ensure that internal
operations are performed according to specification and that all internal components have been
adequately exercised.


The first test approach takes an external view and is called black-box testing.
The second requires an internal view and is termed white-box testing.
Black-box testing alludes to tests that are conducted at the software interface. A black-box
test examines some fundamental aspect of a system with little regard for the internal logical
structure of the software.
White-box testing of software is predicated on close examination of procedural detail.
Logical paths through the software and collaborations between components are tested by
exercising specific sets of conditions and/or loops.

WHITE-BOX TESTING

White-box testing, sometimes called glass-box testing, is a test-case design philosophy
that uses the control structure described as part of component-level design to derive test cases.
Using white-box testing methods, you can derive test cases that
(1) guarantee that all independent paths within a module have been exercised at least once,
(2) exercise all logical decisions on their true and false sides,
(3) execute all loops at their boundaries and within their operational bounds, and
(4) exercise internal data structures to ensure their validity.

BASIS PATH TESTING


Basis path testing is a white-box testing technique first proposed by Tom McCabe. The
basis path method enables the test-case designer to derive a logical complexity measure of a
procedural design and use this measure as a guide for defining a basis set of execution paths. Test
cases derived to exercise the basis set are guaranteed to execute every statement in the program at
least one time during testing.
Flow Graph Notation
A flow graph (or program graph) is a simple notation for the representation of control
flow. The flow graph depicts logical control flow using the notation illustrated in the following
figure.
Fig : Flow Graph Notation
To illustrate the use of a flow graph, consider the procedural design representation in the following
figure (a). Here, a flowchart is used to depict program control structure. Figure (b) maps the
flowchart into a corresponding flow graph.

Referring to figure (b), each circle, called a flow graph node, represents one or more procedural
statements. A sequence of process boxes and a decision diamond can map into a single node. The
arrows on the flow graph, called edges or links, represent flow of control and are analogous to
flowchart arrows. An edge must terminate at a node, even if the node does not represent any
procedural statements. Areas bounded by edges and nodes are called regions. When counting
regions, we include the area outside the graph as a region. Each node that contains a condition is
called a predicate node and is characterized by two or more edges emanating from it.

Fig : (a) Flowchart and (b) flow graph


Independent Program Paths
An independent path is any path through the program that introduces at least one new set
of processing statements or a new condition. When stated in terms of a flow graph, an independent
path must move along at least one edge that has not been traversed before the path is defined. For
example, a set of independent paths for the flow graph illustrated in figure (b) is
Path 1: 1-11
Path 2: 1-2-3-4-5-10-1-11
Path 3: 1-2-3-6-8-9-10-1-11
Path 4: 1-2-3-6-7-9-10-1-11
Note that each new path introduces a new edge. The path
1-2-3-4-5-10-1-2-3-6-8-9-10-1-11 is not considered to be an independent path
because it is simply a combination of already specified paths and does not traverse
any new edges.

How do you know how many paths to look for? The computation of cyclomatic complexity
provides the answer. Cyclomatic complexity is a software metric that provides a quantitative
measure of the logical complexity of a program. When used in the context of the basis path testing
method, the value computed for cyclomatic complexity defines the number of independent paths
in the basis set of a program and provides you with an upper bound for the number of tests that
must be conducted to ensure that all statements have been executed at least once.
Cyclomatic complexity has a foundation in graph theory and provides you with an extremely
useful software metric. Complexity is computed in one of three ways:
1. The number of regions of the flow graph corresponds to the cyclomatic complexity.
2. Cyclomatic complexity V(G) for a flow graph G is defined as
V(G) = E - N + 2
where E is the number of flow graph edges and N is the number of flow graph nodes.
3. Cyclomatic complexity V(G) for a flow graph G is also defined as
V(G) = P + 1
where P is the number of predicate nodes contained in the flow graph G.
Referring once more to the flow graph in figure (b), the cyclomatic complexity can be computed
using each of the algorithms just noted:
1. The flow graph has four regions.
2. V(G) = 11 edges - 9 nodes + 2 = 4.
3. V(G) = 3 predicate nodes + 1 = 4.
Therefore, the cyclomatic complexity of the flow graph in figure (b) is 4.
Deriving Test Cases
The basis path testing method can be applied to a procedural design or to source code. The
following steps can be applied to derive the basis set:
1. Using the design or code as a foundation, draw a corresponding flow graph.
2. Determine the cyclomatic complexity of the resultant flow graph.
3. Determine a basis set of linearly independent paths.
4. Prepare test cases that will force execution of each path in the basis set.
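
As a minimal sketch of these four steps, consider the hypothetical Python function below. It contains two predicate nodes, so V(G) = P + 1 = 3, and the basis set therefore needs three test cases; the flow graph itself is described in the comments rather than drawn.

```python
# Hypothetical function for steps 1-4. Its flow graph has two predicate
# nodes, so V(G) = P + 1 = 3: the basis set contains three independent paths.
def classify(score):
    if score < 0:        # predicate node 1
        return "invalid"
    if score >= 60:      # predicate node 2
        return "pass"
    return "fail"

# Step 4: one test case per basis path.
def test_basis_path_1():
    assert classify(-5) == "invalid"   # predicate 1 true

def test_basis_path_2():
    assert classify(75) == "pass"      # predicate 1 false, predicate 2 true

def test_basis_path_3():
    assert classify(30) == "fail"      # both predicates false
```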

CONTROL STRUCTURE TESTING

The techniques described in this section broaden control structure testing coverage and improve
the quality of white-box testing.
Condition Testing
Condition testing is a test-case design method that exercises the logical conditions
contained in a program module. A simple condition is a Boolean variable or a relational expression,
possibly preceded with one NOT (¬) operator. A relational expression takes the form
E1 <relational-operator> E2
where E1 and E2 are arithmetic expressions and <relational-operator> is one of the following:
<, <=, =, ≠, >, or >=. A compound condition is composed of two or more simple conditions, Boolean
operators, and parentheses. Boolean operators allowed in a compound condition include OR ( | ),
AND (&), and NOT (¬). A condition without relational expressions is referred to as a Boolean
expression.
If a condition is incorrect, then at least one component of the condition is incorrect.
Therefore, types of errors in a condition include Boolean operator errors (incorrect/missing/extra
Boolean operators), Boolean variable errors, Boolean parenthesis errors, relational operator errors,
and arithmetic expression errors. The condition testing method focuses on testing each condition
in the program to ensure that it does not contain errors.
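
As a hedged illustration, the sketch below tests a hypothetical compound condition so that each simple condition is exercised on both its true and false sides; the function and values are assumptions.

```python
# Hypothetical compound condition: (a > 10) AND flag. The four cases below
# exercise each simple condition on its true and false sides, so a Boolean
# operator error ("or" instead of "and") or a relational operator error
# (">=" instead of ">") would change at least one expected outcome.
def eligible(a, flag):
    return a > 10 and flag

def test_condition_cases():
    assert eligible(11, True) is True     # both simple conditions true
    assert eligible(11, False) is False   # relational true, Boolean false
    assert eligible(10, True) is False    # relational false at the boundary
    assert eligible(5, False) is False    # both simple conditions false
```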
Data Flow Testing
The data flow testing method selects test paths of a program according to the locations of
definitions and uses of variables in the program. To illustrate the data flow testing approach,
assume that each statement in a program is assigned a unique statement number and that each
function does not modify its parameters or global variables.
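
A minimal sketch of the idea follows, assuming a small hypothetical function with numbered statements:

```python
# A hypothetical illustration of definition-use (du) pairs. Statement
# numbers appear as comments; data flow testing selects paths that cover
# each du pair for the variable total.
def average(values):
    total = 0                     # S1: definition of total
    for v in values:              # S2: loop predicate
        total = total + v         # S3: use of total, followed by a redefinition
    return total / len(values)    # S4: use of total

# du pairs for total include (S1, S3), (S3, S3), (S3, S4), and (S1, S4).
# The pair (S1, S4) is covered only by a path that skips the loop entirely,
# i.e., an empty list -- which in this sketch also exposes a latent
# division-by-zero fault that statement coverage alone could miss.
```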
Loop Testing
Loops are the cornerstone for the vast majority of all algorithms implemented in software.

Loop testing is a white-box testing technique that focuses exclusively on the validity of
loop constructs. Four different classes of loops can be defined: simple loops, concatenated loops,
nested loops, and unstructured loops (shown in the following figure).
Simple loops. The following set of tests can be applied to simple loops, where n is the maximum
number of allowable passes through the loop.
1. Skip the loop entirely.
2. Only one pass through the loop.
3. Two passes through the loop.
4. m passes through the loop where m < n.
5. n - 1, n, n + 1 passes through the loop.
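
A sketch of these five test classes applied to a hypothetical component follows; the loop limit n = 100 and all values are assumptions made for illustration.

```python
import pytest

# A sketch of the five simple-loop tests for a hypothetical component whose
# loop allows at most n = 100 passes (the limit and values are assumptions).
def sum_readings(readings, n=100):
    total = 0
    for i, r in enumerate(readings):
        if i >= n:
            raise ValueError("too many readings")
        total += r
    return total

def test_simple_loop_classes():
    assert sum_readings([]) == 0              # 1. skip the loop entirely
    assert sum_readings([5]) == 5             # 2. one pass through the loop
    assert sum_readings([5, 7]) == 12         # 3. two passes
    assert sum_readings([1] * 50) == 50       # 4. m passes, where m < n
    assert sum_readings([1] * 99) == 99       # 5. n - 1 passes
    assert sum_readings([1] * 100) == 100     #    n passes
    with pytest.raises(ValueError):
        sum_readings([1] * 101)               #    n + 1 passes
```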

Fig : Classes of Loops

Nested loops. If we were to extend the test approach for simple loops to nested loops, the number
of possible tests would grow geometrically as the level of nesting increases. This would result in
an impractical number of tests.
Beizer suggests an approach that will help to reduce the number of tests:
1. Start at the innermost loop. Set all other loops to minimum values.
2. Conduct simple loop tests for the innermost loop while holding the outer loops at their
minimum iteration parameter (e.g., loop counter) values. Add other tests for out-of-range
or excluded values.
3. Work outward, conducting tests for the next loop, but keeping all other outer loops at
minimum values and other nested loops to “typical” values.
4. Continue until all loops have been tested.

Concatenated loops. Concatenated loops can be tested using the approach defined for simple
loops, if each of the loops is independent of the other. However, if two loops are concatenated and
the loop counter for loop 1 is used as the initial value for loop 2, then the loops are not independent.
When the loops are not independent, the approach applied to nested loops is recommended.
Unstructured loops. Whenever possible, this class of loops should be redesigned to reflect the
use of the structured programming constructs.

BLACK-BOX TESTING

Black-box testing, also called behavioral testing, focuses on the functional requirements of
the software. That is, black-box testing techniques enable you to derive sets of input conditions
that will fully exercise all functional requirements for a program.
Black-box testing is not an alternative to white-box techniques. Rather, it is a complementary
approach that is likely to uncover a different class of errors than white-box methods.
Black-box testing attempts to find errors in the following categories:
(1) incorrect or missing functions,
(2) interface errors,
(3) errors in data structures or external database access,
(4) behavior or performance errors, and
(5) initialization and termination errors.

Tests are designed to answer the following questions:


• How is functional validity tested?
• How are system behavior and performance tested?
• What classes of input will make good test cases?
• Is the system particularly sensitive to certain input values?
• How are the boundaries of a data class isolated?
• What data rates and data volume can the system tolerate?
• What effect will specific combinations of data have on system operation?
By applying black-box techniques, you derive a set of test cases that satisfy the following criteria:
(1) test cases that reduce, by a count that is greater than one, the number of additional test cases
that must be designed to achieve reasonable testing, and

(2) test cases that tell you something about the presence or absence of classes of errors, rather
than an error associated only with the specific test at hand.

Interface Testing

• Interface testing is used to check that the program component accepts information passed to it
in the proper order and with the proper data types, and returns information in the proper order
and format.
• Interface testing is often considered part of integration testing.
• Because most components are not stand-alone programs, it is important to make sure that when
the component is integrated into the evolving program it will not break the build.
• Stubs and drivers sometimes incorporate test cases to be passed to the component or accessed
by the component.
• In other cases, debugging code may need to be inserted inside the component to check that data
passed was received correctly.
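
A minimal sketch of such an interface check in Python follows; the component and its documented contract are assumptions made for illustration.

```python
# A sketch of an interface test: checks that a hypothetical component
# accepts data in the documented order and types and returns data in the
# documented format. The component and its contract are assumptions.
def make_record(name: str, age: int) -> dict:
    return {"name": name, "age": age}

def test_interface_contract():
    record = make_record("Ada", 36)           # arguments in the proper order
    assert isinstance(record, dict)           # proper return format
    assert isinstance(record["name"], str)    # data types preserved
    assert isinstance(record["age"], int)
```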

Equivalence Partitioning

Equivalence partitioning is a black-box testing method that divides the input domain of a program
into classes of data from which test cases can be derived. Test-case design for equivalence
partitioning is based on an evaluation of equivalence classes for an input condition. If a set of
objects can be linked by relationships that are symmetric, transitive, and reflexive, an equivalence
class is present.
Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes
are defined.
2. If an input condition requires a specific value, one valid and two invalid equivalence
classes are defined.
3. If an input condition specifies a member of a set, one valid and one invalid equivalence
class are defined.
4. If an input condition is Boolean, one valid and one invalid class are defined.
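
As an illustration of guideline 1, consider a hypothetical input condition specifying that an age must lie in the range 18 to 65. The sketch below draws one representative value from the valid class and one from each invalid class:

```python
# A sketch of guideline 1 for a hypothetical input condition: an age field
# that must lie in the range 18..65. One valid and two invalid equivalence
# classes are defined, and one representative is drawn from each.
def accept_age(age):
    return 18 <= age <= 65

def test_equivalence_classes():
    assert accept_age(40) is True    # valid class: value inside the range
    assert accept_age(10) is False   # invalid class 1: value below the range
    assert accept_age(90) is False   # invalid class 2: value above the range
```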

Boundary Value Analysis


A greater number of errors occurs at the boundaries of the input domain rather than in the
“center.” It is for this reason that boundary value analysis (BVA) has been developed as a testing
technique. Boundary value analysis leads to a selection of test cases that exercise bounding values.
Boundary value analysis is a test-case design technique that complements equivalence
partitioning. Rather than selecting any element of an equivalence class, BVA leads to the selection
of test cases at the “edges” of the class. Rather than focusing solely on input conditions, BVA
derives test cases from the output domain as well.
Guidelines for BVA are similar in many respects to those provided for equivalence partitioning:
1. If an input condition specifies a range bounded by values a and b, test cases should be
designed with values a and b and just above and just below a and b.

2. If an input condition specifies a number of values, test cases should be developed that
exercise the minimum and maximum numbers. Values just above and below minimum and
maximum are also tested.
3. Apply guidelines 1 and 2 to output conditions.

4. If internal program data structures have prescribed boundaries, be certain to design a test
case to exercise the data structure at its boundary. Most software engineers intuitively
perform BVA to some degree.
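
Applying guideline 1 to the same hypothetical range (a = 18, b = 65) yields the following sketch:

```python
# A sketch of BVA guideline 1 for the hypothetical range a = 18, b = 65:
# test at a and b, and just above and just below each bound.
def accept_age(age):
    return 18 <= age <= 65

def test_boundary_values():
    assert accept_age(17) is False   # just below a
    assert accept_age(18) is True    # at a
    assert accept_age(19) is True    # just above a
    assert accept_age(64) is True    # just below b
    assert accept_age(65) is True    # at b
    assert accept_age(66) is False   # just above b
```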
