Software Testing (1)
Boundary value analysis, Robustness testing, Worst-case testing, Robust worst-case testing for the triangle problem, NextDate problem and commission problem; Equivalence classes, Equivalence test cases for the triangle problem, NextDate function, and the commission problem, Guidelines and observations; Decision tables, Test cases for the triangle problem, NextDate function, and the commission problem, Guidelines and observations. Fault Based Testing: Overview, Assumptions in fault based testing, Mutation analysis, Fault-based adequacy criteria, Variations on mutation analysis.
1. Minimum
2. Just above the minimum
3. A nominal value
4. Just below the maximum
5. Maximum
Identify the exact boundary values of this partition class: 16 and 60.
Get the boundary values one less than the exact boundaries: 15 and 59.
Get the boundary values one more than the exact boundaries: 17 and 61.
If we combine them all, we get the following boundary value combinations for the Age criterion.
Valid boundary conditions: Age = 16, 17, 59, 60
Invalid boundary conditions: Age = 15, 61
It is straightforward to see that the valid boundary conditions fall under the valid partition class, and the invalid boundary conditions fall under the invalid partition class.
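To make the example concrete, here is a minimal Python sketch that exercises these boundary values; the predicate is_valid_age is a hypothetical stand-in for the application's actual age check:

    # Hypothetical validity check for the Age criterion (valid range 16..60).
    def is_valid_age(age: int) -> bool:
        return 16 <= age <= 60

    # Boundary values derived above: exact boundaries, one below, one above.
    valid_boundaries = [16, 17, 59, 60]
    invalid_boundaries = [15, 61]

    for age in valid_boundaries:
        assert is_valid_age(age), f"{age} should be accepted"
    for age in invalid_boundaries:
        assert not is_valid_age(age), f"{age} should be rejected"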
The Focus of BVA
Boundary Value Analysis focuses on the input variables of the function. For the purposes of this report I will define two variables, X1 and X2 (only two, so that further examples can be kept concise), where X1 lies between A and B and X2 lies between C and D:
A ≤ X1 ≤ B
C ≤ X2 ≤ D
The values of A, B, C and D are the extremities of the input domain. These are best demonstrated by
figure 4.1.
The yellow shaded area of the graph shows the acceptable/legitimate input domain of the given function. As the name suggests, Boundary Value Analysis focuses on the boundary of the input space to identify test cases. The idea and motivation behind BVA is that errors tend to occur near the extreme values of the input variables. Defects found on the boundaries of these input variables can of course have countless causes, but many common faults produce errors concentrated towards the boundaries: for example, a programmer who forgot to count from zero or simply miscalculated, loop counters that are off by one, or the use of a < operator instead of ≤. These are all very common mistakes, and together with other common errors they create an increasing need to perform Boundary Value Analysis.
5.0 Applying Boundary Value Analysis
In general, Boundary Value Analysis can be applied in a uniform manner. The basic form of implementation is to hold all but one of the variables at their nominal (normal or average) values while allowing the remaining variable to take on its extreme values. The values used to test the extremities are:
• Min ------------------------------------- Minimal
• Min+ ------------------------------------- Just above Minimal
• Nom ------------------------------------- Average
• Max- ------------------------------------- Just below Maximum
• Max ------------------------------------- Maximum
Continuing our example, this results in the following 9 test cases, shown in figures 5.1 and 5.2:
{<x1nom, x2min>, <x1nom, x2min+>, <x1nom, x2nom>, <x1nom, x2max->,
<x1nom, x2max>, <x1min, x2nom>, <x1min+, x2nom>,
<x1max-, x2nom>, <x1max, x2nom>}
You may be wondering why we are only concerned with one variable taking on its extreme values at any one time. The reason is that Boundary Value Analysis generally relies on the Critical Fault Assumption, discussed below. There are advantages and shortcomings to this method.
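As a minimal sketch of how these 4n + 1 test cases can be generated mechanically (the function name bva_cases and the numeric ranges below are illustrative assumptions, not taken from the original report):

    # Generate Boundary Value Analysis test cases under the critical fault
    # assumption: all variables stay at their nominal value except one, which
    # takes its min, min+, max- and max values. Yields 4n + 1 cases.
    def bva_cases(ranges):
        # ranges: list of (min, max) pairs, one per input variable
        noms = [(lo + hi) // 2 for lo, hi in ranges]   # nominal values
        cases = [tuple(noms)]                          # the all-nominal case
        for i, (lo, hi) in enumerate(ranges):
            for v in (lo, lo + 1, hi - 1, hi):         # min, min+, max-, max
                case = list(noms)
                case[i] = v
                cases.append(tuple(case))
        return cases

    # X1 in [A, B] = [1, 100] and X2 in [C, D] = [1, 100] (assumed ranges)
    print(len(bva_cases([(1, 100), (1, 100)])))        # 4*2 + 1 = 9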
5.1 Some Important Examples
To demonstrate the need for certain methods and their relative merits, I will introduce two testing examples proposed by P.C. Jorgensen [1]. These examples provide more extensive ranges to show where certain testing techniques are required and give a better overview of each method's usability.
The NextDate problem is a function of three variables: day, month and year. Given an input date, it returns the date of the day after the input. The input variables have the obvious conditions:
1 ≤ Day ≤ 31.
1 ≤ Month ≤ 12.
1812 ≤ Year ≤ 2012.
(Here the year has been restricted so that the test cases are not too large.) There are more complicated issues to consider due to the dependencies between variables; for example, there is never a 31st of April, no matter what year we are in. The nature of these dependencies is what makes this example so useful to us. All errors in the NextDate problem are denoted by “Invalid Input Date.”
• The Triangle problem
The Triangle problem was first introduced by Gruenburger in 1973. There have been many references to this problem since, making it one of the most popular examples used in the testing literature.
The triangle problem accepts three integers (a, b and c) as its input, each of which are taken to be sides
of a triangle. The values of these inputs are used to determine the type of the triangle (Equilateral,
Isosceles, Scalene or not a triangle).
For the inputs to be declared as being a triangle they must satisfy the six conditions:
C1. 1 ≤ a ≤ 200.
C2. 1 ≤ b ≤ 200.
C3. 1 ≤ c ≤ 200.
C4. a < b + c.
C5. b < a + c.
C6. c < a + b.
Otherwise the input is declared not to be a triangle. Provided the conditions are met, the type of the triangle is determined as follows: Equilateral if all three sides are equal, Isosceles if exactly two sides are equal, and Scalene if all three sides are different.
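A minimal sketch of this classification in Python (the function name triangle_type and the return strings are illustrative assumptions; the conditions C1-C6 are transcribed from above):

    def triangle_type(a: int, b: int, c: int) -> str:
        # C1-C3: each side must lie in [1, 200]
        if not all(1 <= s <= 200 for s in (a, b, c)):
            return "Not a triangle"
        # C4-C6: each side must be shorter than the sum of the other two
        if a >= b + c or b >= a + c or c >= a + b:
            return "Not a triangle"
        if a == b == c:
            return "Equilateral"
        if a == b or b == c or a == c:
            return "Isosceles"
        return "Scalene"

    print(triangle_type(3, 3, 3))   # Equilateral
    print(triangle_type(2, 3, 5))   # Not a triangle (5 >= 2 + 3 violates C6)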
The Critical Fault Assumption
The Critical Fault Assumption is also known as the single fault assumption in reliability theory. It relies on the statistic that failures are only rarely the product of two or more simultaneous faults. Using this assumption, we can reduce the required calculations dramatically.
The number of test cases for our example, as you may recall, was 9. Upon inspection we find that the function f that computes the number of test cases for a given number of variables n is:
f = 4n + 1
As there are four extreme values per variable, this accounts for the 4n. The constant one accounts for the single instance where all variables assume their nominal values.
There are two ways to generalize Boundary Value Analysis: by the number of variables or by the ranges those variables use. Generalizing by the number of variables is relatively simple; it is the approach taken by the general Boundary Value Analysis technique under the critical fault assumption, as shown above.
Generalizing by ranges depends on the types of the variables. For example, in the NextDate example proposed by P.C. Jorgensen [1], we have variables for the year, month and day. Languages such as FORTRAN would normally encode the month variable so that January corresponds to 1, February to 2, and so on. In some languages it is also possible to declare an enumerated type {Jan, Feb, Mar, …, Dec}. Either way, this type of declaration is relatively simple because the ranges have set values.
When we do not have explicit bounds on these variable ranges, we have to create our own. These are known as artificial bounds and can be illustrated using the Triangle problem. The point raised by P.C. Jorgensen is that we can easily impose a lower bound on the length of an edge, since a triangle edge with a negative length would be “silly”. The problem occurs when trying to decide upon an upper bound for the length of each side. We could use a fixed integer, or we could allow the program to use the highest possible integer (normally denoted by something like MaxInt). The arbitrary nature of this choice can lead to messy results or imprecise test cases.
For example, in the NextDate problem, where Boundary Value Analysis would spread the testing regime evenly over the range, a tester's intuition and common sense show that we require more emphasis towards the end of February and on leap years.
The reason for this poor performance is that BVA cannot take into consideration the nature of a function or the dependencies between its variables. This lack of insight into the nature of the variables means that BVA can be seen as quite rudimentary.
Robustness testing
Robustness testing can be seen as an extension of Boundary Value Analysis. The idea behind Robustness testing is to test for both clean and dirty test cases: by clean I mean input values that lie within the legitimate input range, and by dirty I mean input values that fall just outside this input domain.
In addition to the aforementioned five testing values (min, min+, nom, max-, max) we use two more values for each variable (min-, max+), which are designed to fall just outside the input range.
If we adapt our function f to apply to Robustness testing we find the following equation:
f = 6n + 1
I have derived this equation by the same reasoning that led to the standard BVA equation. Each variable now has to assume 6 different values while the other variables assume their nominal values (hence the 6n), and there is again one instance whereby all variables assume their nominal values (hence the addition of the constant 1). These results can be seen in figures 6.1 and 6.2.
Robustness testing brings a shift in interest: where the previous interest lay in the input to the program, the main focus of attention in Robustness testing is the expected output when an input variable has exceeded the given input domain. For example, in the NextDate problem, given an entry like the 31st of June we would expect an error message to the effect of “that date does not exist; please try again”. Robustness testing has the desirable property that it forces attention on exception handling. Although Robustness testing can be somewhat awkward in strongly typed languages, it can expose problems there: in Pascal, if a value is defined to reside in a certain range, then any value that falls outside that range results in a run-time error that would terminate any normal execution. For this reason, exception handling mandates Robustness testing.
Worst-case testing
Boundary Value Analysis uses the critical fault assumption and therefore only tests a single variable at a time assuming its extreme values. By disregarding this assumption we are able to test the outcome when more than one variable assumes its extreme values. In electronic circuit analysis this is called Worst Case Analysis; in Worst-Case testing we use the same idea to create test cases.
To generate test cases we take the original 5-tuple set (min, min+, nom, max-, max) and perform the
Cartesian product of these values. The end product is a much larger set of results than we have seen
before.
We can see from the results in figures 7.1 and 7.2 that worst case testing is a more comprehensive
testing technique. This can be shown by the fact that standard Boundary Value Analysis test cases are a
proper subset of Worst-Case test cases.
These test cases, although more comprehensive in their coverage, require much more effort. To compare: Boundary Value Analysis results in 4n + 1 test cases, whereas Worst-Case testing results in 5^n test cases, since each variable has to assume each of its five values for every permutation (the Cartesian product), giving 5 to the power n test cases.
For this reason, Worst-Case testing is generally reserved for situations that require a higher degree of testing (where failure of the program would be very costly), with less regard for the time and effort required, since for many situations the cost is too great to justify.
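A sketch of worst-case generation using the Cartesian product (here via Python's itertools.product; the ranges are again illustrative):

    from itertools import product

    # Worst-case testing: every variable simultaneously takes each of its
    # five boundary values (min, min+, nom, max-, max), giving 5**n cases.
    def worst_case(ranges):
        value_sets = [(lo, lo + 1, (lo + hi) // 2, hi - 1, hi)
                      for lo, hi in ranges]
        return list(product(*value_sets))

    cases = worst_case([(1, 100), (1, 100)])
    print(len(cases))   # 5**2 = 25 for two variables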
If the function under test is of the greatest importance, we can use a method named Robust Worst-Case testing, which, as the name suggests, draws its attributes from Robustness and Worst-Case testing. Test cases are constructed by taking the Cartesian product of the 7-tuple set defined in the Robustness testing section. Obviously this results in the largest set of test cases we have seen so far and requires the most effort to produce.
We can see that the function f (to calculate the number of test cases required) can be adapted to calculate the number of Robust Worst-Case test cases. As there are now 7 values that each variable can assume, we find the function f to be:
f = 7^n
The same function is reached in the paper A Testing and Analysis Tool for Certain 3-Variable Functions [2].
The results for the continuing example can be seen in figures 7.3 and 7.4.
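The earlier sketch extends directly to Robust Worst-Case testing: taking the product of the seven-value set per variable yields the 7^n cases (illustrative, continuing the assumed two-variable example):

    from itertools import product   # as before

    # Robust worst-case testing: Cartesian product of the seven-value set
    # (min-, min, min+, nom, max-, max, max+) per variable: 7**n cases.
    def robust_worst_case(ranges):
        value_sets = [(lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1)
                      for lo, hi in ranges]
        return list(product(*value_sets))

    print(len(robust_worst_case([(1, 100), (1, 100)])))  # 7**2 = 49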
For each example I will show test cases for the standard Boundary Value Analysis and Worst-Case testing techniques. These show how the test cases are derived and how comprehensive the results are. There will be no test cases for Robustness testing or Robust Worst-Case testing, as the cases covered should explain how the process works; too many test cases would prove monotonous when trying to explain a concept. However, in a real project, where the figures are more “necessary”, all test cases should be detailed and explained to their full extent.
For an int variable in some program, it might be possible to test the program by inputting every possible value for the variable. This is true because, on any specific machine, only a finite number of values can be assigned to an int variable. However, the number of int values is large, and the testing would be very time consuming and not likely worthwhile. The number of possible values is much larger for variables of type float or String. Thus, for almost every program, it is impossible to test all possible input values.
To get around the impossibility of testing every possible input value, the possible input values for a variable are normally divided into categories, usually called blocks or equivalence classes. The objective is to put values into the same equivalence class if the program should have similar (equivalent) behavior for each value of that equivalence class. Now, rather than testing the program for all possible input values, the program is tested with one input value from each equivalence class.
The rationale for defining an equivalence class is as follows: If one test case for a particular
equivalence class exposes an error, all other test cases in that equivalence class will likely
expose the same error.
Using standard notation from discrete mathematics, the objective is to partition the
input values for each variable, where a partition is defined as follows:
Definition 16.1: A partition of a set A is the division of the set into subsets Ai, i = 1, 2, . . . , m, such that the subsets are pairwise disjoint and their union is A; that is, every element of A belongs to exactly one subset.
Given the three sides of a triangle as integers x, y, and z, it is desired to have a program to determine the
type of the triangle: equilateral, isosceles, or scalene.
The behavior (i.e., output) of the program depends on the values of the three integers. However, as
previously remarked, it is infeasible to try all possible combinations of the possible integer values.
Traditional equivalence class testing simply partitions the input values into valid and invalid values, with one equivalence class for the valid values and one for each type of invalid value. Note that this implies an individual test case to cover each invalid equivalence class. The rationale is that, since an invalid input can contain multiple errors, the detection of one error may result in other error checks not being made.
For the triangle example, there are several types of invalid values. The constraints can be divided into
the following categories:
To guarantee that each invalid situation is checked independently, an invalid equivalence class should be
set up for each of the variables having a nonpositive value:
1. {(x, y, z) | x ≤ 0, y, z > 0}
2. {(x, y, z) | y ≤ 0, x, z > 0}
3. {(x, y, z) | z ≤ 0, x, y > 0}
However, each of the variables can be the one that has the largest value (i.e., corresponds to the longest
side). Thus, three more invalid equivalence classes are needed:
4. {(x, y, z) | x ≥ y, x ≥ z, x ≥ y + z}
5. {(x, y, z) | y ≥ x, y ≥ z, y ≥ x + z}
6. {(x, y, z) | z ≥ x, z ≥ y, z ≥ x + y}
In the current example, possible test cases for each equivalence class are the following:
1. (−1, 2, 3), (0, 2, 3)
2. (2, −1, 3), (2, 0, 3)
3. (2, 3, −1), (2, 3, 0)
4. (5, 2, 3), (5, 1, 2)
5. (2, 5, 3), (1, 5, 2)
6. (2, 3, 5), (1, 2, 5)
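As a sanity check, a small sketch (the class predicates are transcribed from the definitions above; the driver code is illustrative) can confirm that each sample test case lands in its intended invalid equivalence class:

    # Invalid equivalence classes 1-6 for the triangle problem, transcribed
    # from the definitions above.
    CLASSES = {
        1: lambda x, y, z: x <= 0 and y > 0 and z > 0,
        2: lambda x, y, z: y <= 0 and x > 0 and z > 0,
        3: lambda x, y, z: z <= 0 and x > 0 and y > 0,
        4: lambda x, y, z: x >= y and x >= z and x >= y + z,
        5: lambda x, y, z: y >= x and y >= z and y >= x + z,
        6: lambda x, y, z: z >= x and z >= y and z >= x + y,
    }

    # One sample test case per class, taken from the list above.
    samples = {1: (-1, 2, 3), 2: (2, -1, 3), 3: (2, 3, -1),
               4: (5, 2, 3), 5: (2, 5, 3), 6: (2, 3, 5)}

    for cls, case in samples.items():
        assert CLASSES[cls](*case), f"{case} not in class {cls}"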
These cases are not handled well by the BVA technique, as we can see massive redundancy in its tables of test cases. In equivalence class testing, the input and output domains are instead divided into a finite number of equivalence classes.
Strong Normal Equivalence Test Cases for the Triangle Problem
Since there are no further sub-intervals inside the valid inputs for the three sides a, b, and c, Strong Normal Equivalence testing is the same as Weak Normal Equivalence testing.
NextDate function
So, now let us again identify the various equivalence class test
cases:
Test Case ID Month (mm) Day (dd) Year (yyyy) Expected Output
SN1  6 14 2000 6/15/2000
SN2  6 14 1996 6/15/1996
SN3  6 14 2002 6/15/2002
SN4  6 29 2000 6/30/2000
SN5  6 29 1996 6/30/1996
SN6  6 29 2002 6/30/2002
SN7  6 30 2000 7/1/2000
SN8  6 30 1996 7/1/1996
SN9  6 30 2002 7/1/2002
SN10 6 31 2000 Invalid Input Date
SN11 6 31 1996 Invalid Input Date
SN12 6 31 2002 Invalid Input Date
SN13 7 14 2000 7/15/2000
SN14 7 14 1996 7/15/1996
SN15 7 14 2002 7/15/2002
SN16 7 29 2000 7/30/2000
SN17 7 29 1996 7/30/1996
SN18 7 29 2002 7/30/2002
SN19 7 30 2000 7/31/2000
SN20 7 30 1996 7/31/1996
SN21 7 30 2002 7/31/2002
SN22 7 31 2000 8/1/2000
SN23 7 31 1996 8/1/1996
SN24 7 31 2002 8/1/2002
SN25 2 14 2000 2/15/2000
SN26 2 14 1996 2/15/1996
SN27 2 14 2002 2/15/2002
SN28 2 29 2000 3/1/2000 (2000 is a leap year)
SN29 2 29 1996 3/1/1996 (1996 is a leap year)
SN30 2 29 2002 Invalid Input Date (2002 is not a leap year)
SN31 2 30 2000 Invalid Input Date
SN32 2 30 1996 Invalid Input Date
SN33 2 30 2002 Invalid Input Date
SN34 2 31 2000 Invalid Input Date
SN35 2 31 1996 Invalid Input Date
SN36 2 31 2002 Invalid Input Date
So, three month classes, four day classes and three year classes result in 3 × 4 × 3 = 36 strong normal equivalence class test cases. Furthermore, adding two invalid classes for each variable results in 5 × 6 × 5 = 150 strong robust equivalence class test cases, far too many to list here.
Commission problem
Test data: lock price Rs 45.0, stock price Rs 30.0, barrel price Rs 25.0.
sales = (total locks × lock price) + (total stocks × stock price) + (total barrels × barrel price)
Commission: 10% on sales up to Rs 1000, 15% on the next Rs 800, and 20% on any sales in excess of Rs 1800.
Precondition: lock = -1 to exit, and 1 ≤ lock ≤ 70, 1 ≤ stock ≤ 80, 1 ≤ barrel ≤ 90.
Brief description: the salesperson had to sell at least one complete rifle per month.
Checking boundary values for locks, stocks, barrels and commission.
Valid Classes
L1 = {locks : 1 ≤ locks ≤ 70}
L2 = {locks = -1} (occurs when locks = -1 is used to terminate the input iteration)
S1 = {stocks : 1 ≤ stocks ≤ 80}
B1 = {barrels : 1 ≤ barrels ≤ 90}
Invalid Classes
L3 = {locks : locks = 0 OR locks < -1}
L4 = {locks : locks > 70}
S2 = {stocks : stocks < 1}
S3 = {stocks : stocks > 80}
B2 = {barrels : barrels < 1}
B3 = {barrels : barrels > 90}
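For reference, a minimal sketch of the sales and commission computation described above (the function names are illustrative; the prices and commission bands follow the stated test data):

    # Prices from the test data: lock Rs 45.0, stock Rs 30.0, barrel Rs 25.0.
    def sales(locks: int, stocks: int, barrels: int) -> float:
        return locks * 45.0 + stocks * 30.0 + barrels * 25.0

    # Commission: 10% on sales up to Rs 1000, 15% on the next Rs 800,
    # and 20% on anything above Rs 1800.
    def commission(s: float) -> float:
        if s <= 1000:
            return 0.10 * s
        if s <= 1800:
            return 0.10 * 1000 + 0.15 * (s - 1000)
        return 0.10 * 1000 + 0.15 * 800 + 0.20 * (s - 1800)

    s = sales(1, 1, 1)          # one complete rifle: 45 + 30 + 25 = 100
    print(s, commission(s))     # 100.0 10.0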
Decision tables
The decision table technique is one of several test case design techniques; others include Equivalence Partitioning and Boundary Value Analysis. Here we decide under what conditions each action should be performed. Now let us look at the table below.
The first column lists all the conditions and actions related to the requirement; each of the other columns represents a test case. From cases 3 and 4 we can see that if condition 2 fails, the system will execute Action 3, so we can take either case 3 or case 4.
The ability to recognize a complete decision table presents the challenge of identifying redundant and inconsistent rules (for example, rules 4 and 9 of an inconsistent decision table).
The NextDate problem illustrates the problem of dependencies in the input domain. Decision tables can highlight such dependencies, and impossible dates can be clearly marked as a separate action.
M1 = {month : 1 .. 12 | days(month) = 30}
M2 = {month : 1 .. 12 | days(month) = 31}
M3 = {month : {2}}
D1 = {day : 1 .. 28}
D2 = {day : {29}}
D3 = {day : {30}}
D4 = {day : {31}}
With these equivalence classes in place, we go for a third try at the decision table.
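To see why these day/month dependencies matter, here is a minimal NextDate sketch (the function name and internal structure are illustrative assumptions consistent with the problem statement above):

    def next_date(day: int, month: int, year: int) -> str:
        def is_leap(y):  # Gregorian leap-year rule
            return y % 4 == 0 and (y % 100 != 0 or y % 400 == 0)
        days_in = [31, 29 if is_leap(year) else 28, 31, 30, 31, 30,
                   31, 31, 30, 31, 30, 31]
        if not (1812 <= year <= 2012 and 1 <= month <= 12
                and 1 <= day <= days_in[month - 1]):
            return "Invalid Input Date"
        day += 1
        if day > days_in[month - 1]:     # roll over to the next month
            day, month = 1, month + 1
            if month > 12:               # roll over to the next year
                month, year = 1, year + 1
        return f"{month}/{day}/{year}"

    print(next_date(29, 2, 1996))   # 3/1/1996 (1996 is a leap year)
    print(next_date(30, 2, 1996))   # Invalid Input Date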
Commission problem