
TESTING

A test case is a triplet [I, S, R], where I is the data input to the program under test, S is the state
of the program at which the data is to be input, and R is the result expected to be produced by
the program. The state of a program is also called its execution mode.

A test scenario is an abstract test case in the sense that it only identifies the aspects of the
program that are to be tested without identifying the input, state, or output. A test case can be
said to be an implementation of a test scenario.

A test script is an encoding of a test case as a short program. Test scripts are developed for
automated execution of the test cases.

A test suite is the set of all test cases that have been designed by a tester to test a given program.

▪ Differentiate between verification and validation.


Verification: “Are we building the product right?”
Validation: “Are we building the right product?”

Verification is the process of determining whether the output of one phase of software
development conforms to that of its previous phase, whereas validation is the process of
determining whether a fully developed system conforms to its requirements specification.
Thus, while verification is concerned with phase containment of errors, the aim of validation
is that the final product be error free. Verification does not require execution of the software,
whereas validation requires execution of the software. Verification is carried out during the
development process to check whether the development activities are proceeding correctly, whereas
validation is carried out to check whether the right product, as required by the customer, has been developed.

Error detection techniques = Verification techniques + Validation techniques

▪ Driver and stub modules


In order to test a single module, we need a complete environment to provide all relevant code
that is necessary for execution of the module. That is, besides the module under test, the
following are needed to test the module:
• The procedures belonging to other modules that the module under test calls.
• Non-local data structures that the module accesses.
• A procedure to call the functions of the module under test with appropriate parameters.
Modules required to provide the necessary environment (which either call or are called by the
module under test) are usually not available until they too have been unit tested. In this context,
stubs and drivers are designed to provide the complete environment for a module so that
testing can be carried out.
Stub: The role of stub and driver modules is pictorially shown.
A stub procedure is a dummy procedure that has the same I/O
parameters as the function called by the unit under test but has a
highly simplified behaviour. For example, a stub procedure may
produce the expected behaviour using a simple table look up
mechanism.
Driver: A driver module should contain the non-local data
structures accessed by the module under test. Additionally, it should also have the code to call
the different functions of the unit under test with appropriate parameter values for testing.
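As a concrete sketch, the C fragment below shows a stub and a driver for a hypothetical unit convertCurrency that calls a not-yet-tested getExchangeRate module. All names, the rate table, and the test values are illustrative assumptions, not taken from the text.

```c
/* Hypothetical unit under test: converts an amount using a rate
   obtained from another (not yet unit-tested) module. */
int getExchangeRate(int currencyCode);          /* satisfied by the stub below */

int convertCurrency(int amount, int currencyCode) {
    return amount * getExchangeRate(currencyCode);
}

/* Stub: same I/O parameters as the real getExchangeRate, but with a
   highly simplified behaviour -- a table look-up of canned rates. */
int getExchangeRate(int currencyCode) {
    static const int rateTable[] = { 1, 75, 90 };
    return rateTable[currencyCode];
}

/* Driver: calls the unit under test with the chosen test-case
   parameters and reports the number of failed checks. */
int runDriver(void) {
    int failures = 0;
    if (convertCurrency(10, 1) != 750) failures++;  /* 10 * 75 expected */
    if (convertCurrency(0, 2) != 0) failures++;     /* zero amount      */
    return failures;
}
```

The stub keeps the module under test executable before its callees exist; the driver supplies the call environment that its callers would normally provide.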

1. Testing in the Large versus Testing in the Small


A software product is normally tested in three levels or stages:
A. Unit testing
B. Integration testing
C. System testing

A. UNIT TESTING
Unit testing is undertaken after a module has been coded and reviewed. This activity is typically
undertaken by the coder of the module himself in the coding phase. Before carrying out unit
testing, the unit test cases have to be designed and the test environment for the unit under test
has to be developed. In this section, we first discuss the environment needed to perform unit
testing. There are essentially two main approaches to systematically design test cases:
❖ Black-box approach
❖ White-box approach

I. BLACK-BOX TESTING
In black-box testing, test cases are designed from an examination of the input/output values
only and no knowledge of design or code is required. The following are the two main
approaches available to design black box test cases:
a. Equivalence class partitioning
b. Boundary value analysis

a) Equivalence Class Partitioning


The main idea behind defining equivalence classes of input data is that testing the code with
any one value belonging to an equivalence class is as good as testing the code with any other
value belonging to the same equivalence class. Equivalence classes for a unit under test can be
designed by examining the input data and output data.
Example 1 Consider a program that computes the square root of an input integer that can assume
values in the range 0 to 5000. Determine the equivalence classes and the black-box test suite.
Answer: There are three equivalence classes—the set of negative integers, the set of integers
in the range 0 to 5000, and the set of integers larger than 5000. Therefore, the test cases
must include a representative from each of the three equivalence classes. A possible test suite
is: {–5, 500, 6000}.

Example 2 Design an equivalence class partitioning test suite for a function that reads a character
string of size less than five characters and displays whether it is a palindrome.
Answer: The equivalence classes are palindromes, non-palindromes, and invalid inputs.
Selecting one representative value from each equivalence class, we have the required test suite:
{abc, aba, abcdef}.
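A minimal sketch of the Example 2 function makes the suite runnable. The return convention (1 for a palindrome, 0 for a non-palindrome, –1 for an invalid input of five or more characters) is our assumption, since the text does not fix the interface.

```c
#include <string.h>

/* Hypothetical implementation: classifies a string as palindrome (1),
   non-palindrome (0), or invalid input of 5 or more characters (-1). */
int checkPalindrome(const char *s) {
    size_t n = strlen(s);
    if (n >= 5) return -1;                      /* invalid-input class */
    for (size_t i = 0; i < n / 2; i++)
        if (s[i] != s[n - 1 - i]) return 0;     /* mismatch: not a palindrome */
    return 1;
}
```

Running the suite {abc, aba, abcdef} then exercises each of the three equivalence classes exactly once.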

b) Boundary Value Analysis


Boundary value analysis-based test suite design involves designing test cases using the values
at the boundaries of different equivalence classes. A type of programming error that is
frequently committed by programmers is missing out on the special consideration that should
be given to the values at the boundaries of different equivalence classes of inputs. For example,
programmers may improperly use < instead of <=, or conversely <= for <, etc.

Example 3 For a function that computes the square root of the integer values in the range 0
to 5000, determine the boundary value test suite.
Answer: There are three equivalence classes—the set of negative integers, the set of integers
in the range 0 to 5000, and the set of integers larger than 5000. The boundary value-based
test suite is: {0, –1, 5000, 5001}.
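Both suites for the square-root function can be run against a hypothetical implementation. The rounded-down integer result and the –1 error code for out-of-range inputs are our assumptions, since the examples do not fix the interface.

```c
/* Hypothetical implementation: integer square root (rounded down) for
   inputs in [0, 5000]; -1 signals a value outside the valid range. */
int computeSqrt(int n) {
    if (n < 0 || n > 5000) return -1;    /* both invalid equivalence classes */
    int r = 0;
    while ((r + 1) * (r + 1) <= n) r++;  /* largest r with r*r <= n */
    return r;
}
```

The equivalence class suite {–5, 500, 6000} hits each class once, while the boundary suite {0, –1, 5000, 5001} probes both sides of each class boundary.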
Example 4 Design a boundary value test suite for the function described in Example 2.
Answer: There is a boundary between the valid and invalid equivalence classes at the string
length of five characters. Thus, the boundary value test suite is {abcd, abcde}, containing the
longest valid (four-character) and the shortest invalid (five-character) strings.

II. WHITE-BOX TESTING


White-box testing is an important type of unit testing. A large number of white-box testing
strategies exist. Each testing strategy essentially designs test cases based on analysis of some
aspect of source code and is based on some heuristic. We first discuss some basic concepts
associated with white-box testing, and follow it up with a discussion on specific testing
strategies.
A white-box testing strategy can either be coverage-based or fault-based. A fault-based testing
strategy aims to detect certain types of faults. The faults that a test strategy focuses on
constitute the fault model of the strategy. An example of a fault-based strategy is
mutation testing, which is discussed later in this section.
A coverage-based testing strategy attempts to execute (or cover) certain elements of a program.
Popular examples of coverage-based testing strategies are statement coverage, branch
coverage, multiple condition coverage, and path coverage-based testing.

a) Statement Coverage
The principal idea governing the statement coverage strategy is that unless a statement is
executed, there is no way to determine whether an error exists in that statement.

Example 5 Design a statement coverage-based test suite for the following Euclid’s GCD
computation program (the statements are numbered):
int computeGCD(int x, int y)
{
1    while (x != y) {
2        if (x > y)
3            x = x - y;
4        else y = y - x;
5    }
6    return x;
}
Answer: To design the test cases for the statement coverage, the conditional expression of the
while statement needs to be made true and the conditional expression of the if statement needs
to be made both true and false. By choosing the test set {(x = 3, y = 3), (x = 4, y = 3), (x = 3, y
= 4)}, all statements of the program would be executed at least once.
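The claim can be checked mechanically by running the suite through the program, rewritten here as a self-contained, compilable C function.

```c
/* Euclid's GCD program of Example 5 as a compilable function. The suite
   {(3,3), (4,3), (3,4)} makes the while condition both false and true,
   and the if condition both true and false, so every statement runs. */
int computeGCD(int x, int y) {
    while (x != y) {
        if (x > y)
            x = x - y;     /* executed by the case (4, 3) */
        else
            y = y - x;     /* executed by the case (3, 4) */
    }
    return x;              /* reached by every case, e.g. (3, 3) */
}
```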

b) Branch Coverage
A test suite satisfies branch coverage if it makes each branch condition in the program
assume both true and false values in turn. In other words, for branch coverage each branch in the
CFG representation of the program must be taken at least once, when the test suite is executed.
Branch testing is also known as edge testing, since in this testing scheme, each edge of a
program’s control flow graph is traversed at least once.

Example 6 For the program of Example 5, determine a test suite to achieve branch coverage.
Answer: The test suite {(x = 3, y = 3), (x = 3, y = 2), (x = 4, y = 3), (x = 3, y = 4)} achieves
branch coverage.

c) Multiple Condition Coverage


In multiple condition (MC) coverage-based testing, test cases are designed to make each
component of a composite conditional expression assume both true and false values.

Example 7 Give an example of a fault that is detected by multiple condition coverage, but not
by branch coverage.
Answer: Consider the following C program segment:
if (temperature > 150 || temperature > 50)
    setWarningLightOn();
The program segment has a bug in the second component condition: it should have been
temperature < 50. The test suite {temperature = 160, temperature = 40} achieves branch
coverage, but it is not able to check that setWarningLightOn(); should not be called for
temperature values between 50 and 150.
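The fault can be made observable by wrapping the buggy condition and its intended version as predicates (the helper names are hypothetical). The MC coverage requirement forces a case where the first component is false and the second is true, e.g. temperature = 100, which branch coverage alone does not demand.

```c
/* Buggy condition from Example 7, and the intended one, as predicates
   that report whether the warning light would be switched on. */
int buggyWarning(int temperature)   { return temperature > 150 || temperature > 50; }
int correctWarning(int temperature) { return temperature > 150 || temperature < 50; }
```

At temperature = 100 the buggy predicate turns the warning light on while the intended one does not, so the MC coverage test case exposes the fault.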

d) Path Coverage
A test suite achieves path coverage if it executes each linearly independent path (or basis
path) at least once. A linearly independent path can be defined in terms of the control flow
graph (CFG) of a program. Therefore, to understand path coverage-based testing strategy, we
need to first understand how the CFG of a program can be drawn.
▪ Control flow graph (CFG)
A control flow graph describes how the control flows
through the program. We can define a control flow
graph as the following:
A control flow graph describes the sequence in
which the different instructions of a program get
executed.
In order to draw the control flow graph of a program,
we need to first number all the statements of a
program. The different numbered statements serve as
nodes of the control flow graph (see Figure). There
exists an edge from one node to another, if the
execution of the statement representing the first node can result in the transfer of control to the
other node.
We can easily draw the CFG for any program, if we know how to represent the sequence,
selection, and iteration types of statements in the CFG. After all, every program is constructed
by using these three types of constructs only.
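For instance, taking the statement numbers of the Example 5 GCD program as nodes, its CFG can be written down as an edge list. The pairing below is our reading of the control flow, with node 5 being the loop-closing brace.

```c
/* CFG of the Example 5 GCD program: nodes are the numbered statements
   1..6; an edge (a, b) means executing a can transfer control to b. */
int cfgEdges[][2] = {
    {1, 2},          /* while condition true  -> if statement        */
    {1, 6},          /* while condition false -> return              */
    {2, 3},          /* if condition true     -> x = x - y           */
    {2, 4},          /* if condition false    -> y = y - x           */
    {3, 5}, {4, 5},  /* either assignment falls through to loop end  */
    {5, 1},          /* back edge to the while condition             */
};
int cfgEdgeCount(void) { return (int)(sizeof(cfgEdges) / sizeof(cfgEdges[0])); }
```

This graph has N = 6 nodes and E = 7 edges, the counts used later when computing the cyclomatic complexity.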

• Path
A path through a program is a node and edge sequence from the starting node to a terminal
node of the control flow graph of a program. There can be more than one terminal node in a
program. Writing test cases to cover all the paths of a typical program is impractical. For this
reason, the path-coverage testing does not require coverage of all paths but only coverage of
linearly independent paths.

• Linearly independent path


A linearly independent path is any path through the program that introduces at least one new
edge not included in any other linearly independent path. If a path has one new node
compared to all other linearly independent paths, then the path is also linearly independent,
because any path having a new node automatically has a new edge. Thus, a path that is a
subpath of another path is not considered to be a linearly independent path.

▪ Cyclomatic complexity
McCabe’s cyclomatic complexity defines an upper bound on the number of linearly
independent paths through a program, and it is very simple to compute. Thus, the cyclomatic
complexity metric provides a practical way of determining the maximum number of linearly
independent paths in a program. Though the metric does not directly identify the linearly
independent paths, it indicates approximately how many paths to look for. There are three
different ways to compute the cyclomatic complexity, and the answers computed by the three
methods are guaranteed to agree.
Method 1:
Given a control flow graph G of a program, the cyclomatic complexity V(G) can be computed
as: V(G) = E – N + 2
where N is the number of nodes of the control flow graph and E is the number of edges in the
control flow graph. For the CFG of the example shown in the figure, E = 7 and N = 6. Therefore, the
cyclomatic complexity = 7 – 6 + 2 = 3.

Method 2:
An alternative way of computing the cyclomatic complexity of a program from an inspection
of its control flow graph is as follows:
V(G) = Total number of non-overlapping bounded areas + 1
In the program’s control flow graph G, any region enclosed by nodes and edges can be called
a bounded area. This is an easy way to determine McCabe’s cyclomatic complexity. But what
if the graph G is not planar, i.e., however you draw the graph, two or more edges intersect?
It can be shown that structured programs always yield planar graphs, but the presence of
GOTOs can easily add intersecting edges. Therefore, for non-structured programs, this way
of computing McCabe’s cyclomatic complexity cannot be used.

Method 3: The cyclomatic complexity of a program can also be easily computed from the
number of decision and loop statements in the program. If D is the number of decision and
loop statements of a program, then McCabe’s metric is equal to D + 1.
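As a quick consistency check, the three methods can be coded directly and applied to the Example 5 GCD program, which has E = 7 edges, N = 6 nodes, two bounded areas, and two decision/loop statements (the while and the if).

```c
/* The three equivalent ways of computing McCabe's cyclomatic complexity. */
int methodOne(int e, int n)     { return e - n + 2; }        /* V(G) = E - N + 2 */
int methodTwo(int boundedAreas) { return boundedAreas + 1; } /* areas + 1        */
int methodThree(int decisions)  { return decisions + 1; }    /* D + 1            */
```

All three give V(G) = 3 for the GCD program, matching the value computed under Method 1.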
