
Functional Testing: Boundary value analysis, Robustness testing, Worst-case
testing, Robust worst-case testing for the triangle problem, NextDate problem and
commission problem; Equivalence classes, Equivalence test cases for the triangle
problem, NextDate function, and the commission problem, Guidelines and
observations; Decision tables, Test cases for the triangle problem, NextDate
function, and the commission problem, Guidelines and observations. Fault-Based
Testing: Overview, Assumptions in fault-based testing, Mutation analysis,
Fault-based adequacy criteria, Variations on mutation analysis.

Boundary value analysis

Boundary testing is the process of testing at the extreme ends, or boundaries,
of the partitions of the input values.

 These extreme ends, such as Start/End, Lower/Upper, Maximum/Minimum,
and Just Inside/Just Outside values, are called boundary values,
and testing with them is called "boundary testing".
 The basic idea in boundary value testing is to select input variable
values at their:

1. Minimum
2. Just above the minimum
3. A nominal value
4. Just below the maximum
5. Maximum

The first step of Boundary Value Analysis is to create equivalence
partitions. Suppose, for example, that an Age field accepts values from 16 to 60.
Now concentrate on the valid partition, which ranges from 16 to 60. We have a
three-step approach to identify boundaries:

 Identify the exact boundary values of this partition class – which are 16 and 60.
 Get the boundary values that are one less than the exact boundaries – which
are 15 and 59.
 Get the boundary values that are one more than the exact boundaries –
which are 17 and 61.
If we combine them all, we get the following boundary value combinations
for the Age criterion.
Valid Boundary Conditions : Age = 16, 17, 59, 60
Invalid Boundary Conditions : Age = 15, 61

It’s straightforward to see that the valid boundary conditions fall in the valid
partition class, and the invalid boundary conditions fall in the invalid partition
class.
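As a quick illustration, here is a minimal Python sketch (my own, not from the
source; the helper name is hypothetical) that derives these boundary values from
a partition's limits:

def boundary_values(lower, upper):
    """Return the boundary test values for a closed range [lower, upper]."""
    valid = [lower, lower + 1, upper - 1, upper]   # on and just inside the boundary
    invalid = [lower - 1, upper + 1]               # just outside the boundary
    return valid, invalid

valid, invalid = boundary_values(16, 60)
print(valid)    # [16, 17, 59, 60]
print(invalid)  # [15, 61]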

The Focus of BVA

Boundary Value Analysis focuses on the input variables of the function. For the
purposes of this report I will define two variables, X1 and X2 (only two, so that further
examples can be kept concise), where X1 lies between A and B and X2 lies between C and D.

A ≤ X1 ≤ B
C ≤ X2 ≤ D

The values of A, B, C, and D are the extremities of the input domain, as shown in
figure 4.1. The yellow shaded area of the graph shows the acceptable/legitimate input domain of the given
function. As the name suggests, Boundary Value Analysis focuses on the boundary of the input space to
identify test cases. The idea and motivation behind BVA is that errors tend to occur near the
extremities of the input variables. The defects found on the boundaries of these input variables can
be the result of countless possibilities, but many common faults produce errors clustered
towards the boundaries of input variables: for example, the programmer forgot to count from zero,
miscalculated a limit, wrote a loop counter that is off by one, or used a < operator instead of ≤.
These are all very common mistakes, and together with other common errors they give us an
increasing need to perform Boundary Value Analysis.

5.0 Applying Boundary Value Analysis

In general, Boundary Value Analysis can be applied in a uniform manner. The basic form of
implementation is to hold all but one of the variables at their nominal (normal or average)
values and allow the remaining variable to take on its extreme values. The values used to
test the extremities are:
• Min ------------------------------------- Minimum
• Min+ ------------------------------------- Just above minimum
• Nom ------------------------------------- Average
• Max- ------------------------------------- Just below maximum
• Max ------------------------------------- Maximum

In continuing our example this results in the following nine test cases, shown in figures 5.1 and 5.2:
{<x1nom, x2min>, <x1nom, x2min+>, <x1nom, x2nom>, <x1nom, x2max->,
<x1nom, x2max>, <x1min, x2nom>, <x1min+, x2nom>,
<x1max-, x2nom>, <x1max, x2nom>}
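A small Python sketch (my own illustration, not from the source) that generates this
4n + 1 test set for any number of variables under the single fault assumption:

def bva_test_cases(ranges):
    """ranges: list of (min, max) pairs, one per input variable.
    Returns the 4n + 1 boundary value test cases: each variable in turn
    takes min, min+1, max-1, and max while the others stay nominal."""
    noms = [(lo + hi) // 2 for lo, hi in ranges]   # nominal values
    cases = [tuple(noms)]                           # the all-nominal case
    for i, (lo, hi) in enumerate(ranges):
        for v in (lo, lo + 1, hi - 1, hi):
            case = list(noms)
            case[i] = v
            cases.append(tuple(case))
    return cases

cases = bva_test_cases([(1, 200), (1, 200)])   # two variables, as in the example
print(len(cases))                               # 9, i.e. 4*2 + 1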

You may be wondering why we are only concerned with one of the variables taking on its extreme
values at any one time. The reason is that Boundary Value Analysis generally uses the
Critical Fault Assumption, discussed below. There are advantages and shortcomings to this method.

5.1 Some Important Examples

To demonstrate the need for certain methods and their relative merits, I will introduce
two testing examples proposed by P.C. Jorgensen [1]. These examples provide more extensive
ranges to show where certain testing techniques are required and give a better overview
of each method's usability.

• The NextDate problem

The NextDate problem is a function of three variables: day, month, and year. Given an
input date, it returns the date of the following day. The input variables have the obvious
conditions:

1 ≤ Day ≤ 31.
1 ≤ Month ≤ 12.
1812 ≤ Year ≤ 2012.

(The year has been restricted so that test cases are not too large.) There are more complicated
issues to consider due to the dependencies between variables. For example, there is never a 31st of April,
no matter what year we are in. The nature of these dependencies is the reason this example is so useful
to us. All errors in the NextDate problem are denoted by “Invalid Input Date.”
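For concreteness, here is a minimal sketch of such a function in Python (one plausible
implementation; the source only specifies the behaviour):

def next_date(day, month, year):
    """Return the date following (day, month, year), or raise on bad input."""
    if not (1812 <= year <= 2012 and 1 <= month <= 12 and 1 <= day <= 31):
        raise ValueError("Invalid Input Date.")
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    if day > days_in_month[month - 1]:
        raise ValueError("Invalid Input Date.")   # e.g. the 31st of April
    if day < days_in_month[month - 1]:
        return day + 1, month, year
    if month < 12:
        return 1, month + 1, year
    return 1, 1, year + 1

print(next_date(28, 2, 1996))   # (29, 2, 1996) -- 1996 is a leap year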

• The Triangle problem

The Triangle problem was first introduced by Gruenburger in 1973. There have been many
references to this problem since, making it one of the most popular examples used in
testing literature.

The triangle problem accepts three integers (a, b, and c) as its input, each of which is taken to be a side
of a triangle. The values of these inputs are used to determine the type of the triangle (Equilateral,
Isosceles, Scalene, or not a triangle).

For the inputs to be declared as being a triangle they must satisfy the six conditions:

C1. 1 ≤ a ≤ 200.
C2. 1 ≤ b ≤ 200.
C3. 1 ≤ c ≤ 200.

C4. a < b + c.
C5. b < a + c.
C6. c < a + b.
Otherwise the input is declared not to be a triangle. The type of the triangle, provided the conditions are met,
is determined as follows:

1. If all three sides are equal, the output is Equilateral.
2. If exactly one pair of sides is equal, the output is Isosceles.
3. If no pair of sides is equal, the output is Scalene.
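A straightforward Python rendering of this specification (my sketch, not code from the source):

def triangle_type(a, b, c):
    """Classify sides a, b, c per conditions C1-C6 above."""
    if not all(1 <= s <= 200 for s in (a, b, c)):     # C1-C3
        return "Not a Triangle"
    if not (a < b + c and b < a + c and c < a + b):   # C4-C6
        return "Not a Triangle"
    if a == b == c:
        return "Equilateral"
    if a == b or b == c or a == c:                    # exactly one pair equal
        return "Isosceles"
    return "Scalene"

print(triangle_type(3, 3, 3))   # Equilateral
print(triangle_type(5, 2, 3))   # Not a Triangle (5 is not < 2 + 3)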

5.2 Critical Fault Assumption

The Critical Fault Assumption is also known as the single fault assumption in reliability theory. The
assumption relies on the statistic that failures are only rarely the product of two or more simultaneous
faults. Using this assumption, we can reduce the required calculations dramatically.

The number of test cases for our example, as you may recall, was 9. Upon inspection we find that the
function f that computes the number of test cases for a given number of variables n is:

f = 4n + 1

As there are four extreme values per variable, this accounts for the 4n. The constant one accounts
for the single instance where all variables assume their nominal values.

5.3 Generalizing BVA

There are two approaches to generalizing Boundary Value Analysis: by the number of
variables, or by the ranges those variables use. Generalizing by the number of variables is relatively
simple; this is the approach taken by the general Boundary Value Analysis technique using the
critical fault assumption, as shown above.

Generalizing by ranges depends on the types of the variables. For example, in the NextDate example
proposed by P.C. Jorgensen [1], we have variables for the year, month, and day. Languages like
FORTRAN would normally encode the month variable so that January corresponds to 1,
February to 2, and so on. In some languages it is also possible to declare an enumerated
type {Jan, Feb, Mar, ..., Dec}. Either way, this type of declaration is relatively simple because the ranges
have set values.

When we do not have explicit bounds on these variable ranges, we have to create our own. These
are known as artificial bounds and can be illustrated with the Triangle problem. The point
raised by P.C. Jorgensen was that we can easily impose a lower bound on the length of an edge of the
triangle, as an edge with a negative length would be “silly”. The problem occurs when trying to decide
on an upper bound for the length of each side. We could use a fixed integer, or we could allow
the program to use the highest possible integer (normally denoted by something like
MaxInt). The arbitrary nature of this choice can lead to messy results or imprecise test cases.

5.4 Limitations of BVA


Boundary Value Analysis works well when the Program Under Test (PUT) is a “function of several
independent variables that represent bounded physical quantities” [1]. When these conditions are met
BVA works well, but when they are not we find deficiencies in the results.

Take, for example, the NextDate problem, where Boundary Value Analysis would spread the testing
effort evenly over the range: a tester's intuition and common sense tell us that we need more emphasis
towards the end of February and on leap years.

The reason for this poor performance is that BVA cannot take into consideration the
nature of a function or the dependencies between its variables. This lack of intuition or understanding
of the variables' nature means that BVA can be seen as quite rudimentary.

Robustness testing

6.0 Robustness Testing

Robustness testing can be seen as an extension of Boundary Value Analysis. The idea behind
Robustness testing is to test with clean and dirty test cases. By clean I mean input values that lie in the
legitimate input range. By dirty I mean input values that fall just outside this input domain.

In addition to the aforementioned 5 testing values (min, min+, nom, max-, max) we use two more values
for each variable (min-, max+), which are designed to fall just outside of the input range.

If we adapt our function f for Robustness testing we find the following equation:

f = 6n + 1

I have derived this by the same reasoning that led to the standard BVA equation. Each
variable now has to assume 6 different extreme values while the other variables assume their nominal
values (hence the 6n), and there is again one instance where all variables assume their nominal
values (hence the addition of the constant 1). These results can be seen in figures 6.1 and 6.2.

Robustness testing brings a shift in interest: where the previous interest lay in the inputs to the
program, the main focus of attention in Robustness testing is the expected output
when an input variable has exceeded the given input domain. For example, in the NextDate problem,
for an entry like the 31st of June we would expect an error message to the effect of “that date does
not exist; please try again”. Robustness testing has the desirable property that it forces attention on
exception handling. Although Robustness testing can be somewhat awkward in strongly typed languages,
it can reveal discrepancies. In Pascal, if a value is defined to reside in a certain range, then any value
that falls outside that range results in a run-time error that terminates normal execution. For
this reason exception handling mandates Robustness testing.
Worst-case testing

Boundary Value Analysis uses the critical fault assumption and therefore only tests a single variable
at a time assuming its extreme values. By disregarding this assumption we are able to test the outcome
when more than one variable assumes its extreme values. In electronic circuit analysis this is called Worst
Case Analysis. In Worst-Case testing we use this idea to create test cases.

To generate test cases we take the original 5-tuple set (min, min+, nom, max-, max) and form the
Cartesian product of these values across variables. The end product is a much larger set of results than we have seen
before.

We can see from the results in figures 7.1 and 7.2 that Worst-Case testing is a more comprehensive
testing technique. This is shown by the fact that the standard Boundary Value Analysis test cases are a
proper subset of the Worst-Case test cases.
These test cases, although more comprehensive in their coverage, require much more effort. To
compare: Boundary Value Analysis results in 4n + 1 test cases, whereas Worst-Case testing
results in 5^n test cases. As each variable has to assume each of its five values in every permutation (the
Cartesian product), we have 5 to the power n test cases.

For this reason Worst-Case testing is generally used in situations that require a higher degree of testing
(where failure of the program would be very costly), with less regard for the time and effort required, as
in many situations this is too expensive to justify.
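A sketch of Worst-Case test case generation in Python using itertools.product
(illustrative, not from the source):

from itertools import product

def worst_case_test_cases(ranges):
    """Cartesian product of the five boundary values of every variable: 5**n cases."""
    value_sets = [(lo, lo + 1, (lo + hi) // 2, hi - 1, hi) for lo, hi in ranges]
    return list(product(*value_sets))

cases = worst_case_test_cases([(1, 200), (1, 200), (1, 200)])
print(len(cases))   # 5**3 = 125 test cases for the triangle problem's three sides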

Robust worst-case testing for the triangle problem

If the function under test were of the greatest importance, we could use a method named Robust
Worst-Case testing, which, as the name suggests, draws its attributes from Robustness and Worst-Case testing.

Test cases are constructed by taking the Cartesian product of the 7-tuple sets defined in the Robustness
testing chapter. Obviously this results in the largest set of test cases we have seen so far and requires
the most effort to produce.

The function f (to calculate the number of test cases required) can be adapted to
count Robust Worst-Case test cases. As there are now 7 values each variable can
assume, we find the function f to be:

f = 7^n

This function has also been reached in the paper “A Testing and Analysis Tool for Certain 3-Variable
Functions” [2].
The results for the continuing example can be seen in figures 7.3 and 7.4.
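Extending the previous sketch, the robust worst-case set is simply the Cartesian product
over seven values per variable (again illustrative):

from itertools import product

def robust_worst_case_test_cases(ranges):
    """Cartesian product of seven values per variable (min-1 .. max+1): 7**n cases."""
    value_sets = [(lo - 1, lo, lo + 1, (lo + hi) // 2, hi - 1, hi, hi + 1)
                  for lo, hi in ranges]
    return list(product(*value_sets))

print(len(robust_worst_case_test_cases([(1, 200)] * 3)))   # 7**3 = 343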
For each example I will show test cases for the standard Boundary Value Analysis and Worst-Case
testing techniques. These will show how the test cases are constructed and how comprehensive the
results are. There will not be test cases for Robustness testing or Robust Worst-Case testing, as the cases
covered should explain how the process works. Too many test cases would prove monotonous
when explaining a concept; however, when presenting a real project, where the figures are more
“necessary”, all test cases should be detailed and explained to their full extent.

Robust worst-case testing for the triangle problem

Robust worst-case testing for the NextDate problem (worst-case analysis test cases)

Robust worst-case testing for the commission problem

Equivalence classes
Equivalence Class Testing, also known as Equivalence Class Partitioning (ECP) or
Equivalence Partitioning, is an important software testing technique in which the
testers group and partition the test input data into a number of different classes.

These classes reflect the specified requirements and the common behaviour or
attributes of the aggregated inputs. Test cases are then designed and created
based on each class's attributes, and one element or input from each class is
used in test execution to validate the software's functioning. This
simultaneously validates that the software works in the same way for all the
other inputs in their respective classes.

For an int variable in some program, it might be possible to test the program with every
value that could be input for the variable. This is true because, on any specific machine,
only a finite number of values can be assigned to an int variable. However, the number of
values is large, and the testing would be very time consuming and not likely worthwhile.
The number of possible values is much larger for variables of type float or String.
Thus, for almost every program, it is impossible to test all possible input values.

To get around the impossibility of testing every possible input value, the possible
input values for a variable are normally divided into categories, usually called blocks or
equivalence classes.

The objective is to put values into the same equivalence class if the
program should have similar (equivalent) behavior for each value of the equivalence class.
Now, rather than testing the program for all possible input values, the program is tested with
one input value from each equivalence class.

The rationale for defining an equivalence class is as follows: if one test case for a particular
equivalence class exposes an error, all other test cases in that equivalence class will likely
expose the same error.

Using standard notation from discrete mathematics, the objective is to partition the
input values for each variable, where a partition is defined as follows:

Definition 16.1: A partition of a set A is the division of the set into subsets
Ai, i = 1, 2, ..., m, called blocks or equivalence classes, such that each element
of A is in exactly one of the equivalence classes.
Often the behavior of a program is a function of the relative values of several variables.
In this case, it is necessary for the partition to reflect the values of all the variables involved.
As an example, consider the following informal specification of a program:

Given the three sides of a triangle as integers x, y, and z, it is desired to have a program to determine the
type of the triangle: equilateral, isosceles, or scalene.

The behavior (i.e., output) of the program depends on the values of the three integers. However, as
previously remarked, it is infeasible to try all possible combinations of the possible integer values.

Traditional equivalence class testing simply partitions the input values into valid and invalid values,
with one equivalence class for the valid values and another for each type of invalid value. Note that this
implies an individual test case to cover each invalid equivalence class. The rationale is that an invalid
input may contain multiple errors, and the detection of one error may result in other error checks
not being made.

For the triangle example, there are several types of invalid values. The constraints can be divided into
the following categories:

C1. The values of x, y, and z are greater than zero.
C2. The length of the longest side is less than the sum of the lengths of the other two sides.

To guarantee that each invalid situation is checked independently, an invalid equivalence class should be
set up for each of the variables having a nonpositive value:

1. {(x, y, z) | x ≤ 0, y, z > 0}

2. {(x, y, z) | y ≤ 0, x, z > 0}

3. {(x, y, z) | z ≤ 0, x, y > 0}

However, each of the variables can be the one that has the largest value (i.e., corresponds to the longest
side). Thus, three more invalid equivalence classes are needed:

4. {(x, y, z) | x ≥ y, x ≥ z, x ≥ y + z}

5. {(x, y, z) | y ≥ x, y ≥ z, y ≥ x + z}

6. {(x, y, z) | z ≥ x, z ≥ y, z ≥ x + y}

In the current example, possible test cases for each equivalence class are the following:
1. (−1, 2, 3), (0, 2, 3)
2. (2, −1, 3), (2, 0, 3)
3. (2, 3, −1), (2, 3, 0)
4. (5, 2, 3), (5, 1, 2)
5. (2, 5, 3), (1, 5, 2)
6. (2, 3, 5), (1, 2, 5)
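These six invalid classes can be written down directly as predicates; the following
Python sketch (my illustration, not from the source) checks that each sample test
case lands in its intended class:

# Predicates for the six invalid equivalence classes above.
invalid_classes = [
    lambda x, y, z: x <= 0 and y > 0 and z > 0,         # class 1
    lambda x, y, z: y <= 0 and x > 0 and z > 0,         # class 2
    lambda x, y, z: z <= 0 and x > 0 and y > 0,         # class 3
    lambda x, y, z: x >= y and x >= z and x >= y + z,   # class 4
    lambda x, y, z: y >= x and y >= z and y >= x + z,   # class 5
    lambda x, y, z: z >= x and z >= y and z >= x + y,   # class 6
]

samples = [(-1, 2, 3), (2, -1, 3), (2, 3, -1), (5, 2, 3), (2, 5, 3), (2, 3, 5)]
for i, (case, pred) in enumerate(zip(samples, invalid_classes), start=1):
    assert pred(*case), f"case {case} should be in invalid class {i}"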

The above cases are not handled well by the BVA technique, where we see massive
redundancy in the tables of test cases. In equivalence class testing, the input
and output domains are divided into a finite number of equivalence classes.

We then select one representative of each class and test our program against it.
The tester assumes that if one representative from a class is able to detect an
error, there is no need to consider the other cases; furthermore, if this single
representative test case does not detect any error, we assume that no other test
case of this class can detect an error. In this method we consider both
valid and invalid input domains. The system is still treated as a
black box, meaning that we are not concerned with its internal logic.
The idea of equivalence class testing is to identify test cases by
using one element from each equivalence class. If the equivalence
classes are chosen wisely, the potential redundancy among test
cases can be reduced.

Types of equivalence class testing:

The following four types of equivalence class testing are presented here:
1) Weak Normal Equivalence Class Testing.
2) Strong Normal Equivalence Class Testing.
3) Weak Robust Equivalence Class Testing.
4) Strong Robust Equivalence Class Testing.

1) Weak Normal Equivalence Class Testing:

The word ‘weak’ refers to the single fault assumption. This type of
testing is accomplished by using one value from each equivalence
class in a test case. We would thus end up with the weak
equivalence class test cases shown in the following figure.
Each dot in the figure indicates a test case, and each class contributes
one dot, meaning there is one representative element per class. In fact,
we will always have the same number of weak equivalence class test cases
as there are classes in the partition.

2) Strong Normal Equivalence Class Testing:

This type of testing is based on the multiple fault assumption, so now
we need a test case for each element of the Cartesian product of the
equivalence classes, as shown in the following figure.
Just as we have truth tables in digital logic, there are similarities
between those truth tables and our pattern of test cases. The
Cartesian product guarantees a notion of “completeness” in two ways:
a) We cover all the equivalence classes.
b) We have one of each possible combination of inputs.

3) Weak Robust Equivalence Class Testing:

The name for this form of testing is counterintuitive and
oxymoronic. The word ‘weak’ refers to the single fault assumption
and the word ‘robust’ refers to invalid values. The test cases
resulting from this strategy are shown in the following figure.
Two problems occur with robust equivalence testing:
a) Very often the specification does not define what the expected
output for an invalid test case should be. Thus, testers spend a lot of
time defining expected outputs for these cases.
b) Strongly typed languages like Pascal and Ada eliminate the need to
consider invalid inputs. Traditional equivalence testing is
a product of the time when languages such as FORTRAN, C, and
COBOL were dominant, and this type of error was quite common then.

4) Strong Robust Equivalence Class Testing:

This form of equivalence class testing is neither counterintuitive nor
oxymoronic, but it is redundant. As explained earlier, ‘robust’
means consideration of invalid values and ‘strong’ means the
multiple fault assumption. We obtain a test case from each
element of the Cartesian product of all the equivalence classes, as
shown in the following figure.
We find here that we have 8 robust (invalid) test cases and 12
strong (valid) ones, each represented by a dot, so in total we have
20 test cases using this technique.
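The difference between the weak and strong forms boils down to "one representative
per class" versus "the Cartesian product of class representatives". A compact Python
sketch (my illustration; cycling classes in the weak form is one common convention):

from itertools import product

def weak_normal(class_reps):
    """class_reps: per variable, a list of representative values, one per class.
    Weak normal: as many cases as the largest class count; classes are cycled."""
    width = max(len(reps) for reps in class_reps)
    return [tuple(reps[i % len(reps)] for reps in class_reps) for i in range(width)]

def strong_normal(class_reps):
    """Strong normal: every combination of class representatives."""
    return list(product(*class_reps))

month_reps, day_reps, year_reps = [6, 7, 2], [14, 29, 30, 31], [2000, 1996, 2002]
print(len(weak_normal([month_reps, day_reps, year_reps])))    # 4
print(len(strong_normal([month_reps, day_reps, year_reps])))  # 3*4*3 = 36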

Guidelines for Equivalence Class Testing:


The following guidelines are helpful for equivalence class
testing
1) The weak forms of equivalence class testing (normal or robust)
are not as comprehensive as the corresponding strong forms.
2) If the implementation language is strongly typed and invalid
values cause run-time errors then there is no point in using the
robust form.
3) If error conditions are a high priority, the robust forms are
appropriate.
4) Equivalence class testing is appropriate when input data is
defined in terms of intervals and sets of discrete values. This is
certainly the case when system malfunctions can occur for out-of-
limit variable values.
5) Equivalence class testing is strengthened by a hybrid approach
with boundary value testing (BVA).
6) Equivalence class testing is used when the program function is
complex. In such cases, the complexity of the function can help
identify useful equivalence classes.
7) Strong equivalence class testing makes a presumption that the
variables are independent and the corresponding multiplication of
test cases raises issues of redundancy. If any dependencies occur,
they will often generate “error” test cases.
8) Several tries may be needed before the “right” equivalence
relation is established.
9) The difference between the strong and weak forms of
equivalence class testing is helpful in the distinction between
progression and regression testing.

Equivalence test cases for the triangle problem

There are four possible outputs:
Not-a-Triangle, Scalene, Isosceles, and Equilateral.

R1 = { <a, b, c> : the triangle with sides a, b, and c is equilateral }
R2 = { <a, b, c> : the triangle with sides a, b, and c is isosceles }
R3 = { <a, b, c> : the triangle with sides a, b, and c is scalene }
R4 = { <a, b, c> : sides a, b, and c do not form a triangle }

Strong Normal Equivalence Test Cases for the Triangle Problem
• Since there are no further sub-intervals inside the valid inputs for the 3 sides a, b, and c,
strong normal equivalence is the same as weak normal equivalence.
NextDate function

NextDate Function Problem

Valid Equivalence Classes:
M1 = { month : 1 ≤ month ≤ 12 }
D1 = { day : 1 ≤ day ≤ 31 }
Y1 = { year : 1812 ≤ year ≤ 2012 }

Invalid Equivalence Classes:
M2 = { month : month < 1 }
M3 = { month : month > 12 }
D2 = { day : day < 1 }
D3 = { day : day > 31 }
Y2 = { year : year < 1812 }
Y3 = { year : year > 2012 }
These test cases were poor, so we focus on the equivalence relation: what must
be done to an input date? This produces a new set of equivalence classes.

M1 = { month : month has 30 days }
M2 = { month : month has 31 days }
M3 = { month : month is February }
D1 = { day : 1 ≤ day ≤ 28 }
D2 = { day : day = 29 }
D3 = { day : day = 30 }
D4 = { day : day = 31 }
Y1 = { year : year = 2000 }
Y2 = { year : year is a leap year }
Y3 = { year : year is a common year }
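These revised classes can be expressed as simple classifier functions. A Python sketch
(my illustration; Y2 is read as "leap year other than 2000", which the decision-table
section later makes explicit):

def month_class(m):
    if m == 2:              return "M3"   # February
    if m in (4, 6, 9, 11):  return "M1"   # 30-day months
    return "M2"                            # 31-day months

def day_class(d):
    return {29: "D2", 30: "D3", 31: "D4"}.get(d, "D1")   # D1 covers 1..28

def year_class(y):
    if y == 2000:                                      return "Y1"
    if y % 4 == 0 and (y % 100 != 0 or y % 400 == 0):  return "Y2"
    return "Y3"

print(month_class(6), day_class(14), year_class(2000))   # M1 D1 Y1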

So, now let us again identify the various equivalence class test cases:

1) Weak Normal Equivalence Class: As before, the inputs are mechanically
selected from the approximate middle of the corresponding class.
Test Case ID Month (mm) Day (dd) Year (yyyy) Expected Output
WN1 6 14 2000 6/15/2000
WN2 7 29 1996 7/30/1996
WN3 2 30 2002 2/31/2002 (Impossible)
WN4 6 31 2000 7/1/2000 (Impossible)
The random/mechanical selection of input values takes no account of our
domain knowledge, and thus we have two impossible dates. This will always
be a problem with ‘automatic’ test case generation, because not all of our
domain knowledge is captured in the choice of equivalence classes.
2) Strong Normal Equivalence Class: The strong normal
equivalence class test cases for the revised classes are:

Test Case ID Month (mm) Day (dd) Year (yyyy) Expected Output
SN1 6 14 2000 6/15/2000
SN2 6 14 1996 6/15/1996
SN3 6 14 2002 6/15/2002
SN4 6 29 2000 6/30/2000
SN5 6 29 1996 6/30/1996
SN6 6 29 2002 6/30/2002
SN7 6 30 2000 6/31/2000 (Impossible)
SN8 6 30 1996 6/31/1996 (Impossible)
SN9 6 30 2002 6/31/2002 (Impossible)
SN10 6 31 2000 7/1/2000 (Invalid Input)
SN11 6 31 1996 7/1/1996 (Invalid Input)
SN12 6 31 2002 7/1/2002 (Invalid Input)
SN13 7 14 2000 7/15/2000
SN14 7 14 1996 7/15/1996
SN15 7 14 2002 7/15/2002
SN16 7 29 2000 7/30/2000
SN17 7 29 1996 7/30/1996
SN18 7 29 2002 7/30/2002
SN19 7 30 2000 7/31/2000
SN20 7 30 1996 7/31/1996
SN21 7 30 2002 7/31/2002
SN22 7 31 2000 8/1/2000
SN23 7 31 1996 8/1/1996
SN24 7 31 2002 8/1/2002
SN25 2 14 2000 2/15/2000
SN26 2 14 1996 2/15/1996
SN27 2 14 2002 2/15/2002
SN28 2 29 2000 3/1/2000
SN29 2 29 1996 3/1/1996
SN30 2 29 2002 3/1/2002 (Impossible Date)
SN31 2 30 2000 3/1/2000 (Impossible Date)
SN32 2 30 1996 3/1/1996 (Impossible Date)
SN33 2 30 2002 3/1/2002 (Impossible Date)
SN34 2 31 2000 3/1/2000 (Impossible Date)
SN35 2 31 1996 3/1/1996 (Impossible Date)
SN36 2 31 2002 3/1/2002 (Impossible Date)

So, three month classes, four day classes, and three year classes result in
3 * 4 * 3 = 36 strong normal equivalence class test cases.
Furthermore, adding two invalid classes for each variable results in
5 * 6 * 5 = 150 strong robust equivalence class test cases, which are
too many to list here.

Commission problem

Equivalence Classes for the Commission Problem

Test data: price in Rs – lock 45.0, stock 30.0, and barrel 25.0

sales = total locks * lock price + total stocks * stock price + total barrels * barrel price

Commission: 10% on sales up to Rs 1000, 15% on the next Rs 800, and 20% on any sales in excess
of Rs 1800.

Precondition: lock = -1 to exit, and 1 <= lock <= 70, 1 <= stock <= 80, 1 <= barrel <= 90.

Brief description: The salesperson had to sell at least one complete rifle per month.
We check boundary values for locks, stocks, barrels, and the commission.

Valid Classes

L1 = { locks : 1 <= locks <= 70 }
L2 = { locks = -1 } (occurs if locks = -1 is used to terminate the input iteration)
S1 = { stocks : 1 <= stocks <= 80 }
B1 = { barrels : 1 <= barrels <= 90 }

Invalid Classes
L3 = { locks : locks = 0 OR locks < -1 }
L4 = { locks : locks > 70 }
S2 = { stocks : stocks < 1 }
S3 = { stocks : stocks > 80 }
B2 = { barrels : barrels < 1 }
B3 = { barrels : barrels > 90 }
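A small Python sketch of the sales and commission computation described above
(my own rendering of the stated rules, not code from the source):

def commission(locks, stocks, barrels):
    """Sales and tiered commission: 10% up to Rs 1000, 15% on the next Rs 800,
    20% on anything above Rs 1800."""
    sales = locks * 45.0 + stocks * 30.0 + barrels * 25.0
    if sales <= 1000:
        comm = 0.10 * sales
    elif sales <= 1800:
        comm = 100.0 + 0.15 * (sales - 1000)
    else:
        comm = 100.0 + 120.0 + 0.20 * (sales - 1800)
    return sales, comm

print(commission(10, 10, 10))   # sales = 1000.0, commission = 100.0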

Commission Problem: Output Equivalence Class Testing
(Weak and Strong Normal Equivalence Classes)
Guidelines and observations

Guidelines and Observations


Now that we have gone through three examples, we conclude with some observations about, and
guidelines for, equivalence class testing.
1. The traditional form of equivalence class testing is generally not as thorough as weak
equivalence class testing, which, in turn, is not as thorough as the strong form of equivalence class testing.
2. The only time it makes sense to use the traditional approach is when the implementation
language is not strongly typed.
3. If error conditions are a high priority, we could extend strong equivalence class testing to
include invalid classes.
4. Equivalence class testing is appropriate when input data is defined in terms of ranges and
sets of discrete values. This is certainly the case when system malfunctions can occur for out-of-limit
variable values.
5. Equivalence class testing is strengthened by a hybrid approach with boundary value testing. (We can
“reuse” the effort made in defining the equivalence classes.)
6. Equivalence class testing is indicated when the program function is complex. In such
cases, the complexity of the function can help identify useful equivalence classes, as in the NextDate
function.
7. Strong equivalence class testing makes a presumption that the variables are independent
when the Cartesian Product is taken. If there are any dependencies, these will often generate “error” test
cases, as they did in the NextDate function. (The decision table technique in Chapter 7 resolves this
problem.)
8. Several tries may be needed before “the right” equivalence relation is discovered, as we
saw in the NextDate example. In other cases, there is an “obvious” or “natural” equivalence relation.
When in doubt, the best bet is to try to second guess aspects of any reasonable implementation.

Decision tables
The decision table test case design technique is one of several testing techniques;
others include Equivalence Partitioning and Boundary Value Analysis.

In the decision table technique, we deal with combinations of inputs. To identify the
test cases with a decision table, we consider conditions and actions: we take
conditions as inputs and actions as outputs.

Example of the Decision Table Test Case Design Technique:

Take the example of transferring money online to an account which has already
been added and approved.

Here the conditions to transfer money are: ACCOUNT ALREADY APPROVED,
OTP (One Time Password) MATCHED, and SUFFICIENT MONEY IN THE ACCOUNT.

The actions performed are: TRANSFER MONEY, SHOW A MESSAGE AS
INSUFFICIENT AMOUNT, and BLOCK THE TRANSACTION IN CASE OF
SUSPICIOUS TRANSACTION.

Here we decide under which conditions each action is performed. Now let's look at
the table below. In the first column are all the conditions and actions related to
the requirement; every other column represents a test case.

T = True, F = False, X = Not possible
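The table itself is not reproduced in these notes, so the following Python sketch
reconstructs one plausible version of it from the conditions and actions described
(the rule outcomes are my assumptions, not the source's table):

# Conditions: (approved, otp_matched, sufficient_money)
# Each rule maps a combination of condition values to an action.
decision_table = {
    (True,  True,  True ): "TRANSFER MONEY",
    (True,  True,  False): "SHOW A MESSAGE AS INSUFFICIENT AMOUNT",
    (True,  False, True ): "BLOCK THE TRANSACTION",   # suspicious: OTP mismatch
    (True,  False, False): "BLOCK THE TRANSACTION",
    # If the account is not approved, the remaining conditions don't matter
    # (don't care entries), so all four combinations block the transaction.
    (False, True,  True ): "BLOCK THE TRANSACTION",
    (False, True,  False): "BLOCK THE TRANSACTION",
    (False, False, True ): "BLOCK THE TRANSACTION",
    (False, False, False): "BLOCK THE TRANSACTION",
}

print(decision_table[(True, True, False)])   # SHOW A MESSAGE AS INSUFFICIENT AMOUNT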

From cases 3 and 4 we can see that if condition 2 fails, the system executes
Action 3, so we could keep either case 3 or case 4. We conclude with the
reduced table below.

Decision Table Interpretation

Conditions are interpreted as:
Inputs
Equivalence classes of inputs

Actions are interpreted as:
Outputs
Major functional processing portions

With a complete decision table, we have a complete set of test cases.

The ability to recognize a complete decision table also presents the challenge of
identifying redundant and inconsistent rules. The figures show a redundant decision
table (rules 4 and 9) and an inconsistent decision table (rules 4 and 9).

Test cases for the triangle problem

Don't Care Entries and Rule Counts
Limited entry tables with N conditions have 2^N rules.
Don't care entries reduce the number of explicit rules by implying the existence of
non-explicitly stated rules.
How many rules does a table contain, including all the rules implied by don't care entries?

Don't Care Entries and Rule Counts – 2

Each don't care entry in a rule doubles the count for that rule:
For each rule, determine the corresponding rule count.
Total the rule counts.
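A short worked example (my own numbers, not from the source): in a limited entry table
with N = 3 conditions there are 2^3 = 8 possible rules. A rule with entries (T, -, -),
i.e. two don't cares, has a rule count of 2^2 = 4; a rule (F, T, -) counts 2; and a rule
(F, F, T) counts 1. If the table also lists (F, F, F) with count 1, the counts total
4 + 2 + 1 + 1 = 8, so the table is complete.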
NextDate function

The NextDate problem illustrates the problem of dependencies in the input domain.
Decision tables can highlight such dependencies, and impossible dates can be clearly
marked as a separate action.

NextDate Equivalence Classes – for 1st try


M1 = {month : 1 .. 12 | days(month) = 30 }
M2 = {month : 1 .. 12 | days(month) = 31 }
M3 = {month : {2} }
D1 = {day : 1 .. 28}
D2 = {day : {29} }
D3 = {day : {30} }
D4 = {day : {31} }
Y1 = {year : 1812 .. 2012 | leap_year (year) }
Y2 = {year : 1812 .. 2012 | common_year (year) }

First try decision table yields 256 rules

NextDate Equivalence Classes – for 2nd try

M1 = {month : 1 .. 12 | days(month) = 30 }
M2 = {month : 1 .. 12 | days(month) = 31 }
M3 = {month : {2} }
D1 = {day : 1 .. 28}
D2 = {day : {29} }
D3 = {day : {30} }
D4 = {day : {31} }

Y1 = {year : {2000} }
Y2 = {year : 1812 .. 2012 | leap_year (year) ∧ year ≠ 2000 }
Y3 = {year : 1812 .. 2012 | common_year (year) }

The second try decision table yields 36 rules:
3 month classes * 4 day classes * 3 year classes = 36 rules.
A December problem appears in rule 8, and a February 28 problem appears in
rules 9, 11, and 12, so we go on to a third try.
Commission problem

Guidelines and observations.

Fault Based Testing: Overview, Assumptions in fault based testing,
