
MODULE-2

CHAPTER 1: Functional Testing: Boundary Value Testing

CHAPTER 2: Equivalence Class Testing

CHAPTER 3: Decision Table Based Testing

FUNCTIONAL TESTING
BOUNDARY VALUE TESTING, EQUIVALENCE CLASS TESTING,
DECISION TABLE-BASED TESTING
Any program can be considered to be a function in the sense that program inputs form
its domain and program outputs form its range. Input domain testing (also called
“boundary value testing”) is the best-known specification-based testing technique.
Historically, this form of testing has focused on the input domain; however, it is often a
good supplement to apply many of these techniques to develop range-based test cases.

There are two independent considerations that apply to input domain testing. The first
asks whether or not we are concerned with invalid values of variables. Normal
boundary value testing is concerned only with valid values of the input variables.
Robust boundary value testing considers invalid and valid variable values. The second
consideration is whether we make the “single fault” assumption common to reliability
theory. This assumes that faults are due to incorrect values of a single variable. If this
is not warranted, meaning that we are concerned with interaction among two or more
variables, we need to take the cross product of the individual variables. Taken
together, the two considerations yield four variations of boundary value testing:

 Normal boundary value testing


 Robust boundary value testing
 Worst-case boundary value testing
 Robust worst-case boundary value testing

Boundary Value Analysis


For the sake of comprehensible drawings, consider a function, F, of two variables x1 and x2.
When the function F is implemented as a program, the input variables x1 and x2 will
have some (possibly unstated) boundaries:
a ≤ x1 ≤ b
c ≤ x2 ≤ d

Input domain of a function of two variables.

Unfortunately, the intervals [a, b] and [c, d] are referred to as the ranges of x1 and x2,
so right away we have an overloaded term. The intended meaning will always be clear
from its context. Strongly typed languages (such as Ada and Pascal) permit explicit
definition of such variable ranges. In fact, part of the historical reason for strong typing
was to prevent programmers from making the kinds of errors that result in faults that
are easily revealed by boundary value testing. Other languages (such as COBOL,
FORTRAN, and C) are not strongly typed, so boundary value testing is more
appropriate for programs coded in these languages. The input space (domain) of our
function F is shown in Figure above. Any point within the shaded rectangle and
including the boundaries is a legitimate input to the function F.

Normal boundary value testing


All four forms of boundary value testing focus on the boundary of the input space to
identify test cases. The rationale behind boundary value testing is that errors tend to
occur near the extreme values of an input variable. Loop conditions, for example, may
test for < when they should test for ≤, and counters often are “off by one.” (Does
counting begin at zero or at one?) The basic idea of boundary value analysis is to use
input variable values at their minimum, just above the minimum, a nominal value, just
below their maximum, and at their maximum.

The next part of boundary value analysis is based on a critical assumption; it is known
as the “single fault” assumption in reliability theory. This says that failures are only
rarely the result of the simultaneous occurrence of two (or more) faults. The All Pairs
testing approach contradicts this, with the observation that, in software-controlled
medical systems, almost all faults are the result of interaction between a pair of
variables. Thus, the normal and robust variations are obtained by holding the
values of all but one variable at their nominal values and letting the remaining
variable assume its extreme values.

Boundary testing is the process of testing between extreme ends, or boundaries
between partitions, of the input values.

1. So, these extreme ends like Start–End, Lower–Upper, Maximum–Minimum, and Just
Inside–Just Outside values are called boundary values, and the testing is called
"boundary testing".
2. The basic idea in boundary value testing is to select input variable values at their:
1. Minimum
2. Just above the minimum
3. A nominal value
4. Just below the maximum
5. Maximum

Generalizing Boundary Value Analysis


This black-box testing technique extends the observation that defect density is
higher toward the boundaries. This is so for the following reasons:
a) Programmers are often unsure whether to use the <= operator or the < operator
when making comparisons.
b) Differing terminating conditions of for, while, and repeat loops may cause
defects at or around the boundary conditions.
c) The requirements themselves may not be clearly understood, especially around the
boundaries, so even a correctly coded program may not behave in the intended
way.

The basic idea of BVA is to use input variable values at their minimum, just above the
minimum, a nominal value, just below their maximum and at their maximum. Meaning
thereby (min, min+, nom, max-, max), as shown in the figure above.

BVA is based upon a critical assumption that is known as “Single fault assumption
theory”. According to this assumption, we derive the test cases on the basis of the fact
that failures are not due to simultaneous occurrence of two (or more) faults. So, we
derive test cases by holding the values of all but one variable at their nominal values
and allowing that variable to assume its extreme values.
If we have a function of n-variables, we hold all but one at the nominal values and let
the remaining variable assume the min, min+, nom, max-and max values, repeating
this for each variable. Thus, for a function of n variables, BVA yields (4n + 1) test cases.
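The 4n + 1 count is easy to make concrete. The Python sketch below is ours, not the text's; the helper name is hypothetical and the nominal value is assumed to be the midpoint of the range. It holds every variable at nominal and walks one variable at a time through its five boundary values:

```python
def normal_bva(variables):
    """Normal boundary value test cases for independent variables.

    `variables` maps each name to its (min, max) bounds; the nominal
    value is assumed to be the midpoint. Holding all but one variable
    at nominal and letting the remaining one take min, min+, nom,
    max-, max yields 4n + 1 cases (the all-nominal case is shared).
    """
    nominals = {v: (lo + hi) // 2 for v, (lo, hi) in variables.items()}
    cases = [dict(nominals)]              # the single all-nominal case
    for v, (lo, hi) in variables.items():
        for value in (lo, lo + 1, hi - 1, hi):
            case = dict(nominals)
            case[v] = value
            cases.append(case)
    return cases

# Two variables with a <= x1 <= b and c <= x2 <= d, as in the figure:
cases = normal_bva({"x1": (1, 100), "x2": (1, 100)})
print(len(cases))  # 4*2 + 1 = 9
```

For three variables the same call yields 4*3 + 1 = 13 cases, matching the formula.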

Limitations of Boundary Value Analysis


1) Boolean and logical variables present a problem for Boundary Value Analysis.
2) BVA assumes the variables to be truly independent which is not always possible.
3) BVA test cases have been found to be rudimentary because they are obtained with
very little insight and imagination.

Robust Boundary Value Testing


In BVA, we remain within the legitimate boundary of our range i.e. for testing we
consider values like (min, min+, nom, max-, max) whereas in Robustness testing, we
try to cross these legitimate boundaries as well. Thus, for testing here we consider the
values like (min-, min, min+, nom, max-, max, max+). Again, with robustness testing,

we can focus on exception handling. With strongly typed languages, robustness testing
may be very awkward. For example, in PASCAL, if a variable is defined to be within a
certain range, values outside that range result in run-time errors thereby aborting the
normal execution.
For a program with n-variables, robustness testing will yield (6n + 1) test-cases. Thus,
we can draw the following Robustness Test Cases graph. Each dot represents a test
value at which the program is to be tested. In Robustness testing, we cross the
legitimate boundaries of input domain. In the above graph, we show this by dots that
are outside the range [a, b] of variable x1. Similarly, for variable x2, we have crossed
its legitimate boundary of [c, d] of variable x2. This type of testing is quite common in
electric and electronic circuits. Furthermore, this type of testing also works on 'single
fault assumption theory'.

Robustness test cases for a function of two variables.
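As a sketch (our hypothetical helper, with the nominal again assumed to be the midpoint), the robust value set simply adds min− and max+ outside each variable's range, giving 6n + 1 cases:

```python
def robust_bva(variables):
    """Robust boundary value test cases: like normal BVA, but each
    variable also takes the invalid values min- and max+ while the
    others stay at nominal, for a total of 6n + 1 cases."""
    nominals = {v: (lo + hi) // 2 for v, (lo, hi) in variables.items()}
    cases = [dict(nominals)]
    for v, (lo, hi) in variables.items():
        for value in (lo - 1, lo, lo + 1, hi - 1, hi, hi + 1):
            case = dict(nominals)
            case[v] = value
            cases.append(case)
    return cases

cases = robust_bva({"x1": (1, 100), "x2": (1, 100)})
print(len(cases))  # 6*2 + 1 = 13
```

The two extra values per variable are exactly the out-of-range dots in the graph, so these cases exercise the exception handling paths.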


Worst-Case Boundary Value Testing

Worst-case test cases for a function of two variables. Robust worst-case test cases
for a function of two variables.

If we reject the single fault assumption and ask what happens when more than one
variable has an extreme value, we obtain worst-case testing. In electronic circuit
analysis, this is called "worst-case analysis". We use this idea here to generate
worst-case test cases.
For each variable, we start with the five-element set that contains the min, min+, nom,
max-, and max values. We then take the Cartesian product of these sets to generate test
cases, as shown in the graph above. For a program with n variables, 5^n test cases are
generated.
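The Cartesian product is a one-liner with itertools. This sketch (hypothetical helper, midpoint assumed as nominal) generates the full worst-case set:

```python
from itertools import product

def worst_case_bva(variables):
    """Worst-case boundary value testing: the Cartesian product of the
    five boundary values of every variable, giving 5**n test cases.
    (The robust worst-case variation uses seven values, giving 7**n.)"""
    names = list(variables)
    value_sets = []
    for lo, hi in variables.values():
        nom = (lo + hi) // 2
        value_sets.append((lo, lo + 1, nom, hi - 1, hi))
    return [dict(zip(names, combo)) for combo in product(*value_sets)]

cases = worst_case_bva({"x1": (1, 100), "x2": (1, 100)})
print(len(cases))  # 5**2 = 25
```

For the three-variable triangle and NextDate problems the same product gives 5**3 = 125 cases, which is where the 125-row tables below come from.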

Special Value Testing


Special value testing is probably the most widely practiced form of functional testing. It
also is the most intuitive and the least uniform. Special value testing occurs when a
tester uses domain knowledge, experience with similar programs, and information about
“soft spots” to devise test cases. We might also call this ad hoc testing. No guidelines are
used other than “best engineering judgment.” As a result, special value testing is very
dependent on the abilities of the tester. Even though special value testing is highly
subjective, it often results in a set of test cases that is more effective in revealing faults
than the test sets generated by boundary value methods—testimony to the craft of
software testing.

Normal Boundary Value Test Cases


case # a b c Expected
1. 100 100 1 isosceles
2. 100 100 2 isosceles
3. 100 100 100 equilateral
4. 100 100 199 isosceles
5. 100 100 200 not a triangle
6. 100 1 100 isosceles
7. 100 2 100 isosceles
8. 100 100 100 equilateral
9. 100 199 100 isosceles
10. 100 200 100 not a triangle
11. 1 100 100 isosceles
12. 2 100 100 isosceles
13. 100 100 100 equilateral
14. 199 100 100 isosceles
15. 200 100 100 not a triangle
(Selected) Worst-Case Boundary Value Test Cases
case # a b c Expected
1 1 1 1 equilateral
2 1 1 2 not a triangle
3 1 1 100 not a triangle
4 1 1 199 not a triangle
5 1 1 200 not a triangle
6 1 2 1 not a triangle

7 1 2 2 isosceles
8 1 2 100 not a triangle
9 1 2 199 not a triangle
10 1 2 200 not a triangle
11 1 100 1 not a triangle
12 1 100 2 not a triangle
13 1 100 100 isosceles
14 1 100 199 not a triangle
15 1 100 200 not a triangle
16 1 199 1 not a triangle
17 1 199 2 not a triangle
18 1 199 100 not a triangle
19 1 199 199 isosceles
20 1 199 200 not a triangle
21 1 200 1 not a triangle
22 1 200 2 not a triangle
23 1 200 100 not a triangle
24 1 200 199 not a triangle
25 1 200 200 isosceles

Test Cases for the NextDate Function


All 125 worst-case test cases for NextDate are listed in Table 5.3. Take some time to
examine it for gaps of untested functionality and for redundant testing. For example,
would anyone actually want to test January 1 in five different years? Is the end of
February tested sufficiently?

case #   day   month   year   Expected: day   month   year
1 1 1 1812 2 1 1812
2 1 1 1813 2 1 1813
3 1 1 1912 2 1 1912
4 1 1 2011 2 1 2011
5 1 1 2012 2 1 2012
6 2 1 1812 3 1 1812
7 2 1 1813 3 1 1813
8 2 1 1912 3 1 1912
9 2 1 2011 3 1 2011
10 2 1 2012 3 1 2012
11 15 1 1812 16 1 1812
12 15 1 1813 16 1 1813
13 15 1 1912 16 1 1912

14 15 1 2011 16 1 2011
15 15 1 2012 16 1 2012
16 30 1 1812 31 1 1812
17 30 1 1813 31 1 1813
18 30 1 1912 31 1 1912
19 30 1 2011 31 1 2011
20 30 1 2012 31 1 2012
21 31 1 1812 1 2 1812
22 31 1 1813 1 2 1813
23 31 1 1912 1 2 1912
24 31 1 2011 1 2 2011
25 31 1 2012 1 2 2012

Test Cases for the Commission Problem


Instead of going through 125 boring test cases again, we will look at some more
interesting test cases for the commission problem. This time, we will look at boundary
values derived from the output range, especially near the threshold points of $1000 and
$1800 where the commission percentage changes. The output space of the
commission is shown in Figure below. The intercepts of these threshold planes with
the axes are shown.

Output Boundary Value Analysis Test Cases

Case Locks Stocks Barrels Sales Comm Comment


1 1 1 1 100 10 Output minimum
2 1 1 2 125 12.5 Output minimum +
3 1 2 1 130 13 Output minimum +
4 2 1 1 145 14.5 Output minimum +
5 5 5 5 500 50 Midpoint

6 10 10 9 975 97.5 Border point –
7 10 9 10 970 97 Border point –
8 9 10 10 955 95.5 Border point –
9 10 10 10 1000 100 Border point
10 10 10 11 1025 103.75 Border point +
11 10 11 10 1030 104.5 Border point +
12 11 10 10 1045 106.75 Border point +
13 14 14 14 1400 160 Midpoint
14 18 18 17 1775 216.25 Border point –
15 18 17 18 1770 215.5 Border point –
16 17 18 18 1755 213.25 Border point –
17 18 18 18 1800 220 Border point
18 18 18 19 1825 225 Border point +
19 18 19 18 1830 226 Border point +
20 19 18 18 1845 229 Border point +
21 48 48 48 4800 820 Midpoint
22 70 80 89 7775 1415 Output maximum –
23 70 79 90 7770 1414 Output maximum –
24 69 80 90 7755 1411 Output maximum –
25 70 80 90 7800 1420 Output maximum

The volume between the origin and the lower plane corresponds to sales below the $1000
threshold. The volume between the two planes is the 15% commission range. Part of
the reason for using the output range to determine test cases is that cases from the
input range are almost all in the 20% zone. We want to find input variable
combinations that stress the sales/commission boundary values: $100, $1000, $1800,
and $7800. The minimum and maximum were easy, and the numbers happen to work
out so that the border points are easy to generate.
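The border points in the table can be checked mechanically. The sketch below is ours; it assumes the usual commission schedule for this problem (10% on the first $1000 of sales, 15% on the next $800, 20% above $1800) together with the sales formula given later in the text:

```python
def commission(locks, stocks, barrels):
    """Sales and commission, assuming the 10%/15%/20% schedule with
    thresholds at $1000 and $1800."""
    sales = 45 * locks + 30 * stocks + 25 * barrels
    if sales <= 1000:
        comm = 0.10 * sales
    elif sales <= 1800:
        comm = 100 + 0.15 * (sales - 1000)    # 10% of 1000 = 100
    else:
        comm = 220 + 0.20 * (sales - 1800)    # 100 + 15% of 800 = 220
    return sales, round(comm, 2)

# A few rows from the table: border point, border point +, output maximum
print(commission(10, 10, 10))   # (1000, 100.0)
print(commission(10, 10, 11))   # (1025, 103.75)
print(commission(70, 80, 90))   # (7800, 1420.0)
```

Every row in the table above satisfies this function, which is a quick sanity check on the hand-derived expected values.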

Output Special Value Test Cases


Case Locks Stocks Barrels Sales Comm Comment
1 10 11 9 1005 100.75 Border point +
2 18 17 19 1795 219.25 Border point –
3 18 19 17 1805 221 Border point +

Random Testing
At least two decades of discussion of random testing are included in the literature.
Most of this interest is among academics, and in a statistical sense, it is interesting. Our
three sample problems lend themselves nicely to random testing. The basic idea is
that, rather than always choose the min, min+, nom, max–, and max values of a
bounded variable, use a random number generator to pick test case values. This
avoids any form of bias in testing. It also raises a serious question: how many random
test cases are sufficient? Later, when we discuss structural test coverage metrics, we
will have an elegant answer.

Random Test Cases for Triangle Program

Test Cases Nontriangles Scalene Isosceles Equilateral


1289 663 593 32 1
15,436 7696 7372 367 1
17,091 8556 8164 367 1
2603 1284 1252 66 1
6475 3197 3122 155 1
5978 2998 2850 129 1
9008 4447 4353 207 1
Percentage 49.83% 47.87% 2.29% 0.01%

Random Test Cases for Commission Program

Test Cases 10% 15% 20%


91 1 6 84
27 1 1 25
72 1 1 70
176 1 6 169
48 1 1 46
152 1 6 145
125 1 4 120
Percentage 1.01% 3.62% 95.37%

The random test case values in the tables above were generated with

x = Int((b – a + 1) * Rnd + a)

where the function Int returns the integer part of a floating-point number, and the
function Rnd generates random numbers in the interval [0, 1]. The program keeps
generating random test cases until at least one of each output occurs. In each table, the
program went through seven “cycles” that ended with the “hard-to-generate” test case.
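In Python, the same generation scheme is a dictionary comprehension; random.randint is inclusive on both ends, matching Int((b − a + 1) * Rnd + a). The helper name and bounds below are ours for illustration:

```python
import random

def random_test_case(bounds, rng=random):
    """One random test case: an in-range value for each variable,
    equivalent to x = Int((b - a + 1) * Rnd + a) in the text."""
    return {v: rng.randint(a, b) for v, (a, b) in bounds.items()}

# Triangle-style bounds, seeded for repeatability:
rng = random.Random(42)
case = random_test_case({"a": (1, 200), "b": (1, 200), "c": (1, 200)}, rng)
print(case)  # every value falls in 1..200
```

A driver would loop this until every output class has occurred at least once, which is exactly the stopping rule the tables above used.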
Guidelines for Boundary Value Testing

With the exception of special value testing, the test methods based on the input
domain of a function (program) are the most rudimentary of all specification-based
testing methods. They share the common assumption that the input variables are truly
independent; and when this assumption is not warranted, the methods generate
unsatisfactory test cases.

Another useful form of output-based test cases is for systems that generate error
messages. The tester should devise test cases to check that error messages are
generated when they are appropriate, and are not falsely generated.
Boundary value analysis can also be used for internal variables, such as loop control
variables, indices, and pointers. Strictly speaking, these are not input variables;
however, errors in the use of these variables are quite common. Robustness testing is
a good choice for testing internal variables.

Random Test Cases for NextDate Program

Test Cases   Days 1–30 of 31-Day Months   Day 31 of 31-Day Months   Days 1–29 of 30-Day Months   Day 30 of 30-Day Months
913 542 17 274 10
1101 621 9 358 8
4201 2448 64 1242 46
1097 600 21 350 9
5853 3342 100 1804 82
3959 2195 73 1252 42
1436 786 22 456 13
Percentage 56.76% 1.65% 30.91% 1.13%
Probability 56.45% 1.88% 31.18% 1.88%

Days 1–27 of Feb.   Feb. 28 of a Leap Year   Feb. 28 of a Non-Leap Year   Feb. 29 of a Leap Year   Impossible Days
45 1 1 1 22
83 1 1 1 19
312 1 8 3 77
92 1 4 1 19
417 1 11 2 94
310 1 6 5 75
126 1 5 1 26
Percentage 7.46% 0.04% 0.19% 0.08% 1.79%
Probability 7.26% 0.07% 0.20% 0.07% 1.01%

Equivalence Class Testing


The use of equivalence classes as the basis for functional testing has two motivations:
we would like to have a sense of complete testing, and, at the same time, we would
hope to avoid redundancy. Equivalence class testing echoes the two deciding factors of
boundary value testing, robustness and the single/multiple fault assumption.

Equivalence Classes
The important aspect of equivalence classes is that they form a partition of a set,
where partition refers to a collection of mutually disjoint subsets, the union of which is
the entire set. This has two important implications for testing—the fact that the entire
set is represented provides a form of completeness, and the disjointedness ensures a
form of non-redundancy. The idea of equivalence class testing is to identify test cases
by using one element from each equivalence class. If the equivalence classes are
chosen wisely, this greatly reduces the potential redundancy among test cases.
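The two partition properties can be checked directly for finite sets. A small sketch (our hypothetical helper; the month classes anticipate the NextDate discussion below):

```python
def is_partition(classes, universe):
    """True when the classes are mutually disjoint (non-redundancy)
    and their union is the whole universe (completeness)."""
    union = set()
    for c in classes:
        if union & c:        # overlap -> redundant test cases
            return False
        union |= c
    return union == universe

months = set(range(1, 13))
thirty = {4, 6, 9, 11}                  # 30-day months
thirty_one = {1, 3, 5, 7, 8, 10, 12}    # 31-day months
february = {2}
print(is_partition([thirty, thirty_one, february], months))  # True
print(is_partition([thirty, thirty_one], months))            # False: incomplete
```

Dropping a class breaks completeness, and letting two classes share an element breaks disjointness; either defect shows up immediately.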
The key of equivalence class testing is the choice of the equivalence relation that
determines the classes. The four forms of equivalence class testing all address the

problems of gaps and redundancies that are common to the four forms of boundary
value testing.
Weak Normal Equivalence Class Testing
1. Identify equivalence classes of valid values.
2. Test cases have all valid values.
3. Detects faults due to calculations with valid values of a single variable.
4. OK for regression testing.
5. Needs an expanded set of valid classes:
 valid classes: {a <= x1 < b}, {b <= x1 < c}, {c <= x1 <= d}, {e <= x2 < f}, {f <= x2 <= g}
 invalid classes: {x1 < a}, {x1 > d}, {x2 < e}, {x2 > g}

Weak Robust Equivalence Class Testing

1. Identify equivalence classes of valid and invalid values.
2. Test cases have all valid values except one invalid value.
3. Detects faults due to calculations with valid values of a single variable.
4. Detects faults due to invalid values of a single variable.
5. OK for regression testing.

Strong Normal Equivalence Class Testing

1. Identify equivalence classes of valid values.
2. Test cases from Cartesian product of valid values.
3. Detects faults due to interactions with valid values of any number of variables.
4. OK for regression testing, better for progression testing.

Strong Robust Equivalence Class Testing
1. Identify equivalence classes of valid and invalid values.
2. Test cases from Cartesian product of all classes.
3. Detects faults due to interactions with any values of any number of variables.
4. OK for regression testing, better for progression testing.
 (Most rigorous form of equivalence class testing, but Jorgensen's First Law of
 Software Engineering applies.)
5. Jorgensen's First Law of Software Engineering:
 The product of two big numbers is a really big number.
 (More elegantly: scaling up can be problematic.)

Equivalence Class Test Cases for the Triangle Problem


In the problem statement, we note that four possible outputs can occur: NotATriangle,
Scalene, Isosceles, and Equilateral. We can use these to identify output (range)
equivalence classes as follows.

R1 = {<a, b, c>: the triangle with sides a, b, and c is equilateral}


R2 = {<a, b, c>: the triangle with sides a, b, and c is isosceles}
R3 = {<a, b, c>: the triangle with sides a, b, and c is scalene}
R4 = {<a, b, c>: sides a, b, and c do not form a triangle}
Four weak normal equivalence class test cases, chosen arbitrarily from each class,
are as follows:

Test Case a b c Expected Output


WN1 5 5 5 Equilateral

WN2 2 2 3 Isosceles
WN3 3 4 5 Scalene
WN4 4 1 2 Not a triangle
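These expected outputs are easy to confirm against a straightforward classifier. This is a sketch of the usual triangle logic (triangle inequality first, then side equality), not the book's reference implementation:

```python
def triangle_type(a, b, c):
    """Map a triple of sides onto the output equivalence classes
    R1-R4: Equilateral, Isosceles, Scalene, Not a triangle."""
    if a >= b + c or b >= a + c or c >= a + b:
        return "Not a triangle"
    if a == b == c:
        return "Equilateral"
    if a == b or a == c or b == c:
        return "Isosceles"
    return "Scalene"

# The four weak normal equivalence class test cases from the table:
print(triangle_type(5, 5, 5))   # Equilateral
print(triangle_type(2, 2, 3))   # Isosceles
print(triangle_type(4, 1, 2))   # Not a triangle
```

Note that the triangle inequality check must come first: <4, 1, 2> has two equal-looking near-misses but fails a >= b + c, so it lands in R4.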

Because no valid subintervals of variables a, b, and c exist, the strong normal
equivalence class test cases are identical to the weak normal equivalence class test
cases. Considering the invalid values for a, b, and c yields the following additional
weak robust equivalence class test cases.

Test Case a b c Expected Output


WR1 –1 5 5 Value of a is not in the range of permitted values
WR2 5 –1 5 Value of b is not in the range of permitted values
WR3 5 5 –1 Value of c is not in the range of permitted values
WR4 201 5 5 Value of a is not in the range of permitted values
WR5 5 201 5 Value of b is not in the range of permitted values
WR6 5 5 201 Value of c is not in the range of permitted values

Here is one “corner” of the cube in three-space of the additional strong robust
equivalence class test cases:

Test Case a b c Expected Output


SR1 –1 5 5 Value of a is not in the range of permitted values
SR2 5 –1 5 Value of b is not in the range of permitted values
SR3 5 5 –1 Value of c is not in the range of permitted values
SR4 –1 –1 5 Values of a, b are not in the range of permitted values
SR5 5 –1 –1 Values of b, c are not in the range of permitted values
SR6 –1 5 –1 Values of a, c are not in the range of permitted values
SR7 –1 –1 –1 Values of a, b, c are not in the range of permitted values

Equivalence class testing is clearly sensitive to the equivalence relation used to define
classes.

D1 = {<a, b, c>: a = b = c}
D2 = {<a, b, c>: a = b, a ≠ c}
D3 = {<a, b, c>: a = c, a ≠ b}
D4 = {<a, b, c>: b = c, a ≠ b}
D5 = {<a, b, c>: a ≠ b, a ≠ c, b ≠ c}
D6 = {<a, b, c>: a ≥ b + c}
D7 = {<a, b, c>: b ≥ a + c}
D8 = {<a, b, c>: c ≥ a + b}

Equivalence Class Test Cases for the NextDate Function


The NextDate function illustrates very well the craft of choosing the underlying
equivalence relation. Recall that NextDate is a function of three variables: month, day,
and year, and these have intervals defined as follows:

Valid equivalence classes:
M1 = {month: 1 ≤ month ≤ 12}
D1 = {day: 1 ≤ day ≤ 31}
Y1 = {year: 1812 ≤ year ≤ 2012}

Invalid equivalence classes:
M2 = {month: month < 1}
M3 = {month: month > 12}
D2 = {day: day < 1}
D3 = {day: day > 31}
Y2 = {year: year < 1812}
Y3 = {year: year > 2012}

Because the number of valid classes equals the number of independent variables, only
one weak normal equivalence class test case occurs, and it is identical to the strong
normal equivalence class test case:

Case ID Month Day Year Expected Output


WN1, SN1 6 15 1912 6/16/1912

Here is the full set of weak robust test cases:

Case ID Month Day Year Expected Output


WR1 6 15 1912 6/16/1912
WR2 –1 15 1912 Value of month not in the range 1 ... 12
WR3 13 15 1912 Value of month not in the range 1 ... 12
WR4 6 –1 1912 Value of day not in the range 1 ... 31
WR5 6 32 1912 Value of day not in the range 1 ... 31
WR6 6 15 1811 Value of year not in the range 1812 ... 2012
WR7 6 15 2013 Value of year not in the range 1812 ... 2012

As with the triangle problem, here is one “corner” of the cube in three-space of the
additional strong robust equivalence class test cases:

Case ID Month Day Year Expected Output


SR1 –1 15 1912 Value of month not in the range 1 ... 12
SR2 6 –1 1912 Value of day not in the range 1 ... 31
SR3 6 15 1811 Value of year not in the range 1812 ... 2012
SR4 –1 –1 1912 Value of month not in the range 1 ... 12; value of day not in the range 1 ... 31
SR5 6 –1 1811 Value of day not in the range 1 ... 31; value of year not in the range 1812 ... 2012
SR6 –1 15 1811 Value of month not in the range 1 ... 12; value of year not in the range 1812 ... 2012
SR7 –1 –1 1811 Value of month not in the range 1 ... 12; value of day not in the range 1 ... 31; value of year not in the range 1812 ... 2012

We can postulate the following equivalence classes:


M1 = {month: month has 30 days}
M2 = {month: month has 31 days}
M3 = {month: month is February}
D1 = {day: 1 ≤ day ≤ 28}
D2 = {day: day = 29}
D3 = {day: day = 30}
D4 = {day: day = 31}
Y1 = {year: year = 2000}
Y2 = {year: year is a non-century leap year}
Y3 = {year: year is a common year}
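These day and year classes map directly onto the branches of the function itself. A compact NextDate sketch (ours, not the book's; it assumes the standard Gregorian leap year rule and omits the 1812–2012 range check):

```python
def next_date(month, day, year):
    """Return the day after (month, day, year) as a (month, day, year)
    triple, using the standard Gregorian leap year rule."""
    def is_leap(y):
        return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
    days_in = [31, 29 if is_leap(year) else 28, 31, 30, 31, 30,
               31, 31, 30, 31, 30, 31]
    if day < days_in[month - 1]:
        return month, day + 1, year
    if month < 12:
        return month + 1, 1, year
    return 1, 1, year + 1

print(next_date(6, 15, 1912))   # (6, 16, 1912) -- test case WN1/SN1
print(next_date(2, 28, 1912))   # (2, 29, 1912) -- 1912 is a leap year
print(next_date(2, 28, 1913))   # (3, 1, 1913)  -- 1913 is a common year
```

The equivalence classes above correspond to the three decisions the code makes: which month length applies, whether the day is the last of the month, and whether the year rolls over.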
Equivalence Class Test Cases for the Commission Problem
The input domain of the commission problem is “naturally” partitioned by the limits
on locks, stocks, and barrels. These equivalence classes are exactly those that would
also be identified by traditional equivalence class testing. The first class is the valid
input; the other two are invalid. The input domain equivalence classes lead to very
unsatisfactory sets of test cases. Equivalence classes defined on the output range of the
commission function will be an improvement.

The valid classes of the input variables are


L1 = {locks: 1 ≤ locks ≤ 70}
L2 = {locks = –1} (occurs if locks = –1 is used to control input iteration)
S1 = {stocks: 1 ≤ stocks ≤ 80}
B1 = {barrels: 1 ≤ barrels ≤ 90}
The corresponding invalid classes of the input variables are

L3 = {locks: locks = 0 OR locks < –1}


L4 = {locks: locks > 70}
S2 = {stocks: stocks < 1}
S3 = {stocks: stocks > 80}
B2 = {barrels: barrels < 1}
B3 = {barrels: barrels > 90}
We will have eight weak robust test cases.

Case ID Locks Stocks Barrels Expected Output


WR1 10 10 10 $100
WR2 –1 40 45 Program terminates
WR3 –2 40 45 Value of locks not in the range 1 ... 70
WR4 71 40 45 Value of locks not in the range 1 ... 70
WR5 35 –1 45 Value of stocks not in the range 1 ... 80
WR6 35 81 45 Value of stocks not in the range 1 ... 80
WR7 35 40 –1 Value of barrels not in the range 1 ... 90
WR8 35 40 91 Value of barrels not in the range 1 ... 90

Here is one “corner” of the cube in 3-space of the additional strong robust equivalence
class test cases:

Case ID Locks Stocks Barrels Expected Output
SR1 –2 40 45 Value of locks not in the range 1 ... 70
SR2 35 –1 45 Value of stocks not in the range 1 ... 80
SR3 35 40 –2 Value of barrels not in the range 1 ... 90
SR4 –2 –1 45 Value of locks not in the range 1 ... 70; value of stocks not in the range 1 ... 80
SR5 –2 40 –1 Value of locks not in the range 1 ... 70; value of barrels not in the range 1 ... 90
SR6 35 –1 –1 Value of stocks not in the range 1 ... 80; value of barrels not in the range 1 ... 90
SR7 –2 –1 –1 Value of locks not in the range 1 ... 70; value of stocks not in the range 1 ... 80; value of barrels not in the range 1 ... 90

Sales is a function of the number of locks, stocks, and barrels sold:


Sales = 45 × locks + 30 × stocks + 25 × barrels
We could define equivalence classes of three variables by commission ranges:

S1 = {<locks, stocks, barrels>: sales ≤ 1000}


S2 = {<locks, stocks, barrels>: 1000 < sales ≤ 1800}
S3 = {<locks, stocks, barrels>: sales > 1800}
As was the case with the triangle problem, the fact that our input is a triplet means that
we no longer take test cases from a Cartesian product.

Test Case Locks Stocks Barrels Sales Commission


OR1 5 5 5 500 50
OR2 15 15 15 1500 175
OR3 25 25 25 2500 360

Guidelines and Observations


Now that we have gone through three examples, we conclude with some observations
about, and guidelines for, equivalence class testing.
1. Obviously, the weak forms of equivalence class testing (normal or robust) are not as
comprehensive as the corresponding strong forms.
2. If the implementation language is strongly typed (and invalid values cause run-
time errors), it makes no sense to use the robust forms.
3. If error conditions are a high priority, the robust forms are appropriate.
4. Equivalence class testing is appropriate when input data is defined in terms of
intervals and sets of discrete values. This is certainly the case when system
malfunctions can occur for out-of-limit variable values.

5. Equivalence class testing is strengthened by a hybrid approach with boundary value
testing. (We can “reuse” the effort made in defining the equivalence classes.)
6. Equivalence class testing is indicated when the program function is complex. In
such cases, the complexity of the function can help identify useful equivalence
classes, as in the NextDate function.
7. Strong equivalence class testing makes a presumption that the variables are
independent, and the corresponding multiplication of test cases raises issues of
redundancy. If any dependencies occur, they will often generate “error” test cases,
as they did in the NextDate function. (The decision table technique in Chapter 7
resolves this problem.)
8. Several tries may be needed before the “right” equivalence relation is discovered,
as we saw in the NextDate example. In other cases, there is an “obvious” or
“natural” equivalence relation. When in doubt, the best bet is to try to second-guess
aspects of any reasonable implementation. This is sometimes known as the
“competent programmer hypothesis.”
9. The difference between the strong and weak forms of equivalence class testing is
helpful in the distinction between progression and regression testing.

Decision Table-Based Testing


Decision tables have been used to represent and analyze complex logical relationships.
They are ideal for describing situations in which a number of combinations of actions
are taken under varying sets of conditions.

A decision table has four portions: the part to the left of the bold vertical line is the
stub portion; to the right is the entry portion. The part above the bold horizontal line is
the condition portion, and below is the action portion. Thus, we can refer to the
condition stub, the condition entries, the action stub, and the action entries. A column
in the entry portion is a rule. Rules indicate which actions, if any, are taken for the
circumstances indicated in the condition portion of the rule. In the decision table in
Table below, when conditions c1, c2, and c3 are all true, actions a1 and a2 occur. When
c1 and c2 are both true and c3 is false, then actions a1 and a3 occur. The entry for c3
in the rule where c1 is true and c2 is false is called a “don’t care” entry. The don’t care
entry has two major interpretations: the condition is irrelevant, or the condition does
not apply. Sometimes people will enter the “n/a” symbol for this latter interpretation.

When we use decision tables for test case identification, this completeness property of
a decision table guarantees a form of complete testing. Decision tables in which all the
conditions are binary are called Limited Entry Decision Tables (LEDTs). If conditions
are allowed to have several values, the resulting tables are called Extended Entry
Decision Tables (EEDTs).

Portions of a Decision Table


Stub Rule 1 Rule 2 Rules 3, 4 Rule 5 Rule 6 Rules 7, 8
c1 T T T F F F
c2 T T F T T F
c3 T F — T F —
a1 X X X
a2 X X
a3 X X
a4 X X
Decision Table Techniques

To identify test cases with decision tables, we interpret conditions as inputs and actions
as outputs. Sometimes conditions end up referring to equivalence classes of inputs, and
actions refer to major functional processing portions of the item tested. The rules are
then interpreted as test cases. Because the decision table can mechanically be forced to
be complete, we have some assurance that we will have a comprehensive set of test
cases. The don’t care entries (—) really mean “must be false.”

Decision Table for Triangle Problem

Rule:                         1  2  3  4  5  6  7  8  9
c1: a, b, c form a triangle?  F  T  T  T  T  T  T  T  T
c2: a = b?                    —  T  T  T  T  F  F  F  F
c3: a = c?                    —  T  T  F  F  T  T  F  F
c4: b = c?                    —  T  F  T  F  T  F  T  F
a1: Not a triangle  X (rule 1)
a2: Scalene         X (rule 9)
a3: Isosceles       X X X (rules 5, 7, 8)
a4: Equilateral     X (rule 2)
a5: Impossible      X X X (rules 3, 4, 6)
Refined Decision Table for Triangle Problem

Rule:           1   2   3   4   5   6   7   8   9   10  11
c1: a < b + c?  F   T   T   T   T   T   T   T   T   T   T
c2: b < a + c?  —   F   T   T   T   T   T   T   T   T   T
c3: c < a + b?  —   —   F   T   T   T   T   T   T   T   T
c4: a = b?      —   —   —   T   T   T   T   F   F   F   F
c5: a = c?      —   —   —   T   T   F   F   T   T   F   F
c6: b = c?      —   —   —   T   F   T   F   T   F   T   F
a1: Not a triangle  X X X (rules 1, 2, 3)
a2: Scalene         X (rule 11)
a3: Isosceles       X X X (rules 7, 9, 10)
a4: Equilateral     X (rule 4)
a5: Impossible      X X X (rules 5, 6, 8)
Use of don't care entries has a subtle effect on the way in which complete decision
tables are recognized. For a limited entry decision table with n conditions, there must
be 2^n independent rules. When don't care entries really indicate that the condition is
irrelevant, we can develop a rule count as follows: rules in which no don't care entries
occur count as one rule, and each don't care entry in a rule doubles the count of that
rule.
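The counting scheme can be written down directly. The following sketch (mine, not the text's) computes the rule counts for the refined triangle-problem table and checks completeness against 2^n:

```python
# Sketch: rule counts for a limited entry decision table. A rule with no
# don't care entries counts as 1; each don't care ("-") doubles the count.
def rule_count(rule):
    """rule is a string of condition entries: 'T', 'F', or '-'."""
    return 2 ** rule.count("-")

# Condition entries (c1..c6) for the 11 rules of the refined
# triangle-problem decision table.
rules = [
    "F-----", "TF----", "TTF---",
    "TTTTTT", "TTTTTF", "TTTTFT", "TTTTFF",
    "TTTFTT", "TTTFTF", "TTTFFT", "TTTFFF",
]
counts = [rule_count(r) for r in rules]
print(counts)                 # [32, 16, 8, 1, 1, 1, 1, 1, 1, 1, 1]
print(sum(counts) == 2 ** 6)  # True: the table is complete
```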
Decision Table with Rule Counts

Rule:           1   2   3   4   5   6   7   8   9   10  11
c1: a < b + c?  F   T   T   T   T   T   T   T   T   T   T
c2: b < a + c?  —   F   T   T   T   T   T   T   T   T   T
c3: c < a + b?  —   —   F   T   T   T   T   T   T   T   T
c4: a = b?      —   —   —   T   T   T   T   F   F   F   F
c5: a = c?      —   —   —   T   T   F   F   T   T   F   F
c6: b = c?      —   —   —   T   F   T   F   T   F   T   F
Rule count      32  16  8   1   1   1   1   1   1   1   1
a1: Not a triangle  X X X (rules 1, 2, 3)
a2: Scalene         X (rule 11)
a3: Isosceles       X X X (rules 7, 9, 10)
a4: Equilateral     X (rule 4)
a5: Impossible      X X X (rules 5, 6, 8)

The rule counts sum to 32 + 16 + 8 + 8 × 1 = 64 = 2^6, so the table is complete.
Rule Counts for a Decision Table with Mutually Exclusive Conditions

Conditions       R1  R2  R3
c1: Month in M1  T   —   —
c2: Month in M2  —   T   —
c3: Month in M3  —   —   T
Rule count       4   4   4
a1

The rule counts sum to 12, but a table with three binary conditions can have at most
2^3 = 8 distinct rules. The discrepancy signals that the don't care entries here cannot
mean "the condition is irrelevant": because the conditions are mutually exclusive, a
don't care entry must be false.
Impossible Rules

Conditions       1.1  1.2  1.3  1.4  2.1  2.2  2.3  2.4  3.1  3.2  3.3  3.4
c1: Month in M1  T    T    T    T    T    T    F    F    T    T    F    F
c2: Month in M2  T    T    F    F    T    T    T    T    T    F    T    F
c3: Month in M3  T    F    T    F    T    F    T    F    T    T    T    T
Rule count       1    1    1    1    1    1    1    1    1    1    1    1
a1: Impossible   X    X    X    —    X    X    X    —    X    X    X    —
Mutually Exclusive Conditions with Impossible Rules

Conditions       1.1  1.2  1.3  1.4  2.3  2.4  3.4  FFF
c1: Month in M1  T    T    T    T    F    F    F    F
c2: Month in M2  T    T    F    F    T    T    F    F
c3: Month in M3  T    F    T    F    T    F    T    F
Rule count       1    1    1    1    1    1    1    1
a1: Impossible   X    X    X    —    X    —    —    X

The duplicate rules from the previous table have been removed. The final (all-false)
rule, in which the month is in none of the three classes, is also impossible when M1,
M2, and M3 together cover every month.
Redundant Decision Table

Conditions  1–4  5  6  7  8  9
c1          T    F  F  F  F  T
c2          —    T  T  F  F  F
c3          —    T  F  T  F  F
a1          X    X  X  —  —  X
a2          —    X  X  X  —  —
a3          X    —  X  X  X  X

Rule 9's condition entries (T, F, F) are already covered by rules 1–4 (T, —, —), but
its actions agree with theirs, so the table is merely redundant.
An Inconsistent Decision Table

Conditions  1–4  5  6  7  8  9
c1          T    F  F  F  F  T
c2          —    T  T  F  F  F
c3          —    T  F  T  F  F
a1          X    X  X  —  —  —
a2          —    X  X  X  —  X
a3          X    —  X  X  X  —

Rule 9 overlaps rules 1–4 (both cover the condition combination T, F, F), but the two
rules specify different actions, so the table is inconsistent: the action for such
inputs is nondeterministic.
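Redundancy and inconsistency can be detected mechanically by expanding don't care entries and looking for rules whose expansions overlap. The sketch below uses an assumed representation (condition strings plus action sets) and is not code from the text:

```python
from itertools import product

def expand(conds):
    """Yield every T/F combination covered by a rule's condition entries."""
    options = [("T", "F") if e == "-" else (e,) for e in conds]
    return product(*options)

def find_overlaps(rules):
    """Report overlapping rules: same actions -> redundant, else inconsistent."""
    seen = {}  # expanded combination -> (rule index, action set)
    problems = []
    for i, (conds, acts) in enumerate(rules):
        for combo in expand(conds):
            if combo in seen:
                j, other = seen[combo]
                kind = "redundant" if acts == other else "inconsistent"
                problems.append((kind, j, i))
            else:
                seen[combo] = (i, acts)
    return problems

# The redundant table above: rule 9 repeats (T, F, F) with the same actions.
redundant = [("T--", {"a1", "a3"}), ("FTT", {"a1", "a2"}),
             ("FTF", {"a1", "a2", "a3"}), ("FFT", {"a2", "a3"}),
             ("FFF", {"a3"}), ("TFF", {"a1", "a3"})]
print(find_overlaps(redundant))     # [('redundant', 0, 5)]

# The inconsistent table: rule 9 repeats (T, F, F) with different actions.
inconsistent = redundant[:-1] + [("TFF", {"a2"})]
print(find_overlaps(inconsistent))  # [('inconsistent', 0, 5)]
```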
Test Cases for the Triangle Problem

We obtain 11 functional test cases: three impossible cases, three ways to fail the triangle
property, one way to get an equilateral triangle, one way to get a scalene triangle, and
three ways to get an isosceles triangle.

Case ID  a  b  c  Expected Output
DT1 4 1 2 Not a triangle
DT2 1 4 2 Not a triangle
DT3 1 2 4 Not a triangle
DT4 5 5 5 Equilateral
DT5 ? ? ? Impossible
DT6 ? ? ? Impossible
DT7 2 2 3 Isosceles
DT8 ? ? ? Impossible
DT9 2 3 2 Isosceles
DT10 3 2 2 Isosceles
DT11 3 4 5 Scalene
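The realizable rows above can be checked against a direct implementation. This is an illustrative classifier (not the text's code) driven by conditions c1–c6 of the refined decision table; the three impossible rules (DT5, DT6, DT8) have no realizable inputs and so cannot appear:

```python
def triangle_type(a, b, c):
    """Classify a triangle per the refined decision table's conditions c1-c6."""
    # c1-c3: the three triangle inequalities; any failure -> not a triangle
    if not (a < b + c and b < a + c and c < a + b):
        return "Not a triangle"
    # c4-c6: the pairwise equalities
    if a == b and b == c:
        return "Equilateral"
    if a == b or a == c or b == c:
        return "Isosceles"
    return "Scalene"

# The realizable decision-table test cases (DT1-DT4, DT7, DT9-DT11).
cases = [(4, 1, 2, "Not a triangle"), (1, 4, 2, "Not a triangle"),
         (1, 2, 4, "Not a triangle"), (5, 5, 5, "Equilateral"),
         (2, 2, 3, "Isosceles"), (2, 3, 2, "Isosceles"),
         (3, 2, 2, "Isosceles"), (3, 4, 5, "Scalene")]
for a, b, c, expected in cases:
    assert triangle_type(a, b, c) == expected
print("all 8 realizable test cases pass")
```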
Test Cases for the NextDate Function

M1 = {month: month has 30 days}
M2 = {month: month has 31 days, except December}
M3 = {month: month is December}
M4 = {month: month is February}
D1 = {day: 1 ≤ day ≤ 27}
D2 = {day: day = 28}
D3 = {day: day = 29}
D4 = {day: day = 30}
D5 = {day: day = 31}
Y1 = {year: year is a leap year}
Y2 = {year: year is a common year}
Decision Table for NextDate Function

Rule:          1   2   3   4   5   6   7   8   9   10
c1: Month in   M1  M1  M1  M1  M1  M2  M2  M2  M2  M2
c2: Day in     D1  D2  D3  D4  D5  D1  D2  D3  D4  D5
c3: Year in    —   —   —   —   —   —   —   —   —   —
Actions
a1: Impossible       X (rule 5)
a2: Increment day    X X X X X X X (rules 1–3, 6–9)
a3: Reset day        X X (rules 4, 10)
a4: Increment month  X X (rules 4, 10)
a5: Reset month
a6: Increment year

Rule:          11  12  13  14  15  16  17  18  19  20  21  22
c1: Month in   M3  M3  M3  M3  M3  M4  M4  M4  M4  M4  M4  M4
c2: Day in     D1  D2  D3  D4  D5  D1  D2  D2  D3  D3  D4  D5
c3: Year in    —   —   —   —   —   —   Y1  Y2  Y1  Y2  —   —
Actions
a1: Impossible       X X X (rules 20, 21, 22)
a2: Increment day    X X X X X X (rules 11–14, 16, 17)
a3: Reset day        X X X (rules 15, 18, 19)
a4: Increment month  X X (rules 18, 19)
a5: Reset month      X (rule 15)
a6: Increment year   X (rule 15)
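The 22 rules collapse into a few branches in code. The sketch below is one possible realization of actions a1–a6 (mine, not the text's); it assumes the input date is valid, so the table's impossible rules, such as April 31, are not flagged:

```python
def next_date(month, day, year):
    """Return tomorrow's (month, day, year), assuming the input date is valid."""
    def is_leap(y):
        return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)

    if month == 2:                   # M4: February (D2/D3 depend on Y1/Y2)
        last = 29 if is_leap(year) else 28
    elif month in (4, 6, 9, 11):     # M1: 30-day months
        last = 30
    else:                            # M2 and M3: 31-day months
        last = 31

    if day < last:                   # a2: increment day
        return (month, day + 1, year)
    if month == 12:                  # a3, a5, a6: December 31 rolls the year
        return (1, 1, year + 1)
    return (month + 1, 1, year)      # a3, a4: reset day, increment month

print(next_date(2, 28, 2001))   # (3, 1, 2001)  rule 18: common-year February
print(next_date(2, 28, 2000))   # (2, 29, 2000) rule 17: leap-year February
print(next_date(12, 31, 2000))  # (1, 1, 2001)  rule 15: year rollover
```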
Test Cases for the Commission Problem

The commission problem is not well served by a decision table analysis. This is not
surprising because very little decisional logic is used in the problem. Because the
variables in the equivalence classes are truly independent, no impossible rules will occur
in a decision table in which conditions correspond to the equivalence classes. Thus, we
will have the same test cases as we did for equivalence class testing.
Cause-and-Effect Graphing

The most that can be learned from a cause-and-effect graph is that, if there is a
problem at an output, the path(s) back to the inputs that affected the output can be
retraced. There is little support for actually identifying test cases. The figure below
shows a cause-and-effect graph for the commission problem.

[Figure: cause-and-effect graph for the commission problem]
Guidelines and Observations
As with the other testing techniques, decision table–based testing works well for some
applications (such as NextDate) and is not worth the trouble for others (such as the
commission problem). Not surprisingly, the situations in which it works well are those
in which a lot of decision making takes place (such as the triangle problem), and those
in which important logical relationships exist among input variables (the NextDate
function).
1. The decision table technique is indicated for applications characterized by any of the
following:
a. Prominent if–then–else logic
b. Logical relationships among input variables
c. Calculations involving subsets of the input variables
d. Cause-and-effect relationships between inputs and outputs
e. High cyclomatic complexity
2. Decision tables do not scale up very well (a limited entry table with n conditions
has 2^n rules). There are several ways to deal with this: use extended entry
decision tables, algebraically simplify tables, "factor" large tables into smaller ones,
and look for repeating patterns of condition entries. Try factoring the extended
entry table for NextDate.
3. As with other techniques, iteration helps. The first set of conditions and actions you
identify may be unsatisfactory. Use it as a stepping stone and gradually improve on
it until you are satisfied with a decision table.