
Software Testing Techniques

Software testing involves executing a program with test cases and observing the behavior to validate it. There are different levels of testing such as unit, integration, and system testing. Testing can be specification-based (black-box) or code-based (white-box). Black-box testing focuses on requirements while white-box testing validates internal logic. Common techniques include equivalence partitioning, boundary value analysis, cause-effect graphing, and basis path testing.


Software Testing Techniques

Software Testing
- Most commonly used method for software validation
- Executing the code under test (CUT) with a set of test cases and observing the behavior
- Different from static approaches
- Different levels:
  - Unit
  - Integration
  - System
  - Acceptance Testing

Testing strategies
- Specification-based (black-box)
- Code-based (white-box)
- Specialized testing

Errors, Defects, Faults & Failures

- Error: A mistake made by a programmer.
  Example: misunderstood the requirements.
- Defect/fault/bug: Manifestation of an error in a program.
  Example:
    Incorrect code: if (a<b) {foo(a,b);}
    Correct code:   if (a>b) {foo(a,b);}
- Failure: Manifestation of one or more faults in the observed program behavior.

Failure
- Incorrect program behavior due to a fault in the program.
- A failure can be determined only with respect to a set of requirement specifications.
- A necessary condition for a failure to occur is that execution of the program forces the erroneous portion of the program to be executed.
- Sufficient condition?

Testing ?

Basic Testing Strategies

Black-box testing
- Tests that validate business requirements (what the system is supposed to do).
- Test cases are derived from the requirements specification of the CUT. No knowledge of internal program structure is used.
- Also known as functional, data-driven, or input/output testing.

White-box testing
- Tests that validate internal program logic (control flow, data structures, data flow).
- Test cases are derived by examination of the internal structure of the CUT.
- Also known as structural or logic-driven testing.

[Figure: Comparing Black-box Testing Methods]

[Figure: Comparing White-box Testing Methods]

Basic Testing Strategies

Black-box testing (Specification Based)
- Equivalence Class Partitioning
- Boundary Value Analysis
- Cause-effect graphing
- Model-based Testing

White-box testing (Program Based)
- Control Flow Based
- Data Flow Based
- Mutation Testing

Equivalence Class Partitioning
- Partition the program input domain into equivalence classes (according to the specifications).
- The rationale is that a test of a representative value of each class is equivalent to a test of any other value of the same class.
- Identify valid as well as invalid equivalence classes.
- Select one test case from each equivalence class.

[Figures: equivalence classes over a two-variable input domain (x1, x2), illustrating Weak Normal, Strong Normal, Weak Robust, and Strong Robust Equivalence Class Testing]

Example: input condition 0 <= x <= max
- Valid equivalence class: 0 <= x <= max
- Invalid equivalence classes: x < 0, x > max
- 3 test cases (one per class)
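A minimal sketch of these three test cases in C, assuming a hypothetical check_x(x, max) function that accepts exactly the values 0 <= x <= max (the function name and max value are our assumptions, not from the slides):

#include <assert.h>
#include <stdbool.h>

/* Hypothetical function under test: accepts x only if 0 <= x <= max. */
static bool check_x(int x, int max) {
    return x >= 0 && x <= max;
}

int main(void) {
    int max = 100;                       /* assumed value of max          */
    assert(check_x(50, max)  == true);   /* valid class:   0 <= x <= max  */
    assert(check_x(-7, max)  == false);  /* invalid class: x < 0          */
    assert(check_x(150, max) == false);  /* invalid class: x > max        */
    return 0;
}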

Guidelines for Identifying Equivalence Classes

Input condition: range of values (e.g. 1 - 200)
  Valid Eq. classes: one valid (value within range)
  Invalid Eq. classes: two invalid (one below, one above the range)

Input condition: number N of valid values
  Valid Eq. classes: one valid
  Invalid Eq. classes: two invalid (none, more than N)

Input condition: set of input values, each handled differently by the program (e.g. A, B, C; n values)
  Valid Eq. classes: one valid Eq. class for each value (total n)
  Invalid Eq. classes: one (e.g. any value not in the valid input set)

Input condition: "must be" condition (e.g. Id name must begin with a letter)
  Valid Eq. classes: one (e.g. it is a letter)
  Invalid Eq. classes: one (e.g. it is not a letter)

Identifying Test Cases for Equivalence Classes
- Assign a unique number to each equivalence class.
- Until all valid equivalence classes have been covered by test cases, write a new test case covering as many of the uncovered valid equivalence classes as possible.
- Cover each invalid equivalence class with a separate test case.

Boundary Value Analysis
- Generally combined with Equivalence Class Partitioning.
- Design test cases that exercise values that lie at the boundaries of an input equivalence class.
- Also identify output equivalence classes, and write test cases to generate output at the boundaries of the output equivalence classes.
- Example: input condition 0 <= x <= max
  Test values: 0, 1, max-1, max (valid inputs)
               -1, max+1 (invalid inputs)
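A minimal sketch, again assuming the hypothetical check_x(x, max) from the previous sketch (with max taken as 100), that exercises exactly these boundary values:

#include <assert.h>
#include <stdbool.h>

/* Same hypothetical check_x as in the equivalence-class sketch. */
static bool check_x(int x, int max) {
    return x >= 0 && x <= max;
}

int main(void) {
    int max = 100;                              /* assumed value of max     */
    int valid[]   = { 0, 1, max - 1, max };     /* boundaries inside range  */
    int invalid[] = { -1, max + 1 };            /* just outside the range   */
    for (int i = 0; i < 4; i++) assert(check_x(valid[i], max));
    for (int i = 0; i < 2; i++) assert(!check_x(invalid[i], max));
    return 0;
}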

[Figures: boundary value test cases over a two-variable input domain (x1, x2), illustrating basic BVA, Robustness Testing, Worst-Case Testing, and Robust Worst-Case Testing]

Cause-Effect Graphing Technique
A technique that aids in selecting test cases for combinations of input conditions in a systematic way.
Steps:
1. Identify the causes (input conditions) and effects (output conditions) of the program under test.
2. For each effect, identify the causes that can produce that effect. Draw a Cause-Effect Graph.
3. Generate a test case for each combination of input conditions that makes some effect true.

Example
Consider a program with the following:

Input conditions
- C1: command is credit
- C2: command is debit
- C3: A/C is valid
- C4: transaction amount not valid

Output conditions
- E1: print invalid command
- E2: print invalid A/C
- E3: print debit amount not valid
- E4: debit A/C
- E5: credit A/C

Example: Cause-Effect Graph

[Figure: Cause-Effect Graph linking causes C1-C4 through not/or/and nodes to effects E1-E5]

Example
Decision table showing the combinations of input conditions that make an effect true (summarized from the Cause-Effect Graph).
Write test cases to exercise each rule in the decision table.

[Decision table: rows for causes C1-C4 and effects E1-E5, one column per rule of the Cause-Effect Graph]
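As an illustration only: the C sketch below uses a hypothetical process() routine whose rule logic is inferred from the effect names E1-E5, with one test case per rule. The function name, signature, and exact rule combinations are our assumptions, not from the slides.

#include <assert.h>
#include <string.h>

/* Hypothetical effects, named after E1-E5 in the slides. */
enum effect { INVALID_COMMAND, INVALID_AC, DEBIT_AMOUNT_NOT_VALID, DEBIT_AC, CREDIT_AC };

/* Hypothetical program under test; the rule logic is inferred, not given in the slides. */
static enum effect process(const char *cmd, int ac_valid, int amount_valid) {
    if (strcmp(cmd, "credit") != 0 && strcmp(cmd, "debit") != 0)
        return INVALID_COMMAND;                       /* E1 */
    if (!ac_valid)
        return INVALID_AC;                            /* E2 */
    if (strcmp(cmd, "debit") == 0 && !amount_valid)
        return DEBIT_AMOUNT_NOT_VALID;                /* E3 */
    if (strcmp(cmd, "debit") == 0)
        return DEBIT_AC;                              /* E4 */
    return CREDIT_AC;                                 /* E5 */
}

int main(void) {
    /* One test case per assumed rule/effect. */
    assert(process("transfer", 1, 1) == INVALID_COMMAND);
    assert(process("debit",    0, 1) == INVALID_AC);
    assert(process("debit",    1, 0) == DEBIT_AMOUNT_NOT_VALID);
    assert(process("debit",    1, 1) == DEBIT_AC);
    assert(process("credit",   1, 1) == CREDIT_AC);
    return 0;
}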

White Box Testing

White-box test-case design techniques

Control-Flow Testing
- Statement coverage
- Decision coverage
- Condition coverage
- Decision-condition coverage
- Multiple condition coverage
- Basis Path Testing
- Loop testing

Data Flow Testing
- All p-uses
- All c-uses
- All d-uses
- All uses

White Box Testing

The program structure used in structural testing is the CFG, i.e. the control flow graph.

1 read x,y;
2 z:=1;
3 while not y=0 do
4   if x mod 2 =1
5     z:=z*x
6   y:=y/2
7   x:=x*x
  end

[Figure: CFG for this fragment, with nodes 2, 3, 4, 5, 6-7 and an exit node]

Nodes represent statements; edges represent the flow of control.

White Box Test-Case Design

Statement coverage
- Write enough test cases to execute every statement at least once.

TER (Test Effectiveness Ratio)
- TER = coverage achieved = statements exercised / total statements

Example

void function eval (int A, int B, int X)
{
  if (A > 1) and (B = 0)
  then X = X / A;
  if (A = 2) or (X > 1)
  then X = X + 1;
}

Statement coverage test case:
1) A = 2, B = 0, X = ? (X can be assigned any value)
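A runnable C rendering of the pseudocode above (a sketch: the driver, the return value, and the chosen X are our additions for illustration), together with the single statement-coverage test case:

#include <stdio.h>

/* C rendering of the pseudocode; X is returned so the effect is observable. */
static int eval(int A, int B, int X) {
    if (A > 1 && B == 0)
        X = X / A;
    if (A == 2 || X > 1)
        X = X + 1;
    return X;
}

int main(void) {
    /* Statement coverage: A = 2, B = 0 makes both decisions true, so every
       statement executes at least once; X can be any value (here 4). */
    printf("eval(2, 0, 4) = %d\n", eval(2, 0, 4));   /* prints 3 */
    return 0;
}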

White Box Test-Case Design

Decision coverage (Branch coverage)
- Write test cases to exercise the true and false outcomes of every decision.
- TER = branches exercised / total branches

Condition coverage (Predicate coverage)
- Write test cases such that each condition in a decision takes on all possible outcomes at least once.
- May not always satisfy decision coverage.

Example

void function eval (int A, int B, int X)
{
  if (A > 1) and (B = 0)
  then X = X / A;
  if (A = 2) or (X > 1)
  then X = X + 1;
}

[Flow graph: decision a is (A > 1) and (B = 0), leading to X = X / A; decision b is (A = 2) or (X > 1), leading to X = X + 1]

Decision coverage test cases:
1) A = 3, B = 0, X = 1 (path acd)
2) A = 2, B = 1, X = ? (path abe)

Example

Condition coverage test cases must cover the conditions:
A>1, A<=1, B=0, B!=0
A=2, A!=2, X>1, X<=1

Test cases:
1) A = 1, B = 0, X = 3 (path abe)
2) A = 2, B = 1, X = 1 (path abe)

These do not satisfy decision coverage: decision a, (A > 1) and (B = 0), is never true, and decision b, (A = 2) or (X > 1), is never false.

[Flow graph as above: decision a is (A > 1) and (B = 0); decision b is (A = 2) or (X > 1)]

White Box Test-Case Design

Decision-condition coverage
- Write test cases such that each condition in a decision takes on all possible outcomes at least once, and each decision takes on all possible outcomes at least once.

Multiple condition coverage (Full Predicate)
- Write test cases to exercise all possible combinations of True and False outcomes of the conditions within a decision.

Example

Decision-condition coverage test cases must cover the conditions:
A>1, A<=1, B=0, B!=0
A=2, A!=2, X>1, X<=1
and also (A > 1) and (B = 0): T, F
         (A = 2) or (X > 1): T, F

Test cases:
1) A = 2, B = 0, X = 4 (path ace)
2) A = 1, B = 1, X = 1 (path abd)

[Flow graph as above: decision a is (A > 1) and (B = 0); decision b is (A = 2) or (X > 1)]

Example

Multiple condition coverage must cover the condition combinations:
1) A>1, B=0        5) A=2, X>1
2) A>1, B!=0       6) A=2, X<=1
3) A<=1, B=0       7) A!=2, X>1
4) A<=1, B!=0      8) A!=2, X<=1

Test cases:
1) A = 2, B = 0, X = 4 (covers 1, 5)
2) A = 2, B = 1, X = 1 (covers 2, 6)
3) A = 1, B = 0, X = 2 (covers 3, 7)
4) A = 1, B = 1, X = 1 (covers 4, 8)

[Flow graph as above: decision a is (A > 1) and (B = 0); decision b is (A = 2) or (X > 1)]

Basis Path Testing
1. Draw the control flow graph of the program from its detailed design or code.
2. Compute the cyclomatic complexity V(G) of the flow graph using any of the formulas:
   V(G) = #edges - #nodes + 2
   V(G) = #regions in the flow graph
   V(G) = #predicates + 1
   V(G) = #linearly independent paths

Example

[Figure: flow graph with 13 nodes, 17 edges, 5 predicate nodes, and regions R1-R6]

V(G) = 6 regions
V(G) = #edges - #nodes + 2 = 17 - 13 + 2 = 6
V(G) = 5 predicate nodes + 1 = 6
So there are 6 linearly independent paths.
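A tiny arithmetic check (ours, not from the slides) that the three formulas agree for this example graph:

#include <stdio.h>

int main(void) {
    int edges = 17, nodes = 13, predicates = 5, regions = 6;
    printf("V(G) = E - N + 2      = %d\n", edges - nodes + 2);   /* 6 */
    printf("V(G) = predicates + 1 = %d\n", predicates + 1);      /* 6 */
    printf("V(G) = regions        = %d\n", regions);             /* 6 */
    return 0;
}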

Basis Path Testing (contd.)
3. Determine a basis set of linearly independent paths.
4. Prepare test cases that will force execution of each path in the basis set.

The value of the cyclomatic complexity provides an upper bound on the number of tests that must be designed to guarantee coverage of all program statements.

Loop Testing
Aims to expose bugs in loops.
Fundamental loop test criteria (a sketch follows this list):
1) bypass the loop altogether
2) one pass through the loop
3) two passes through the loop before exiting
4) a typical number of passes through the loop, unless covered by some other test
5) the maximum number of passes (loop exit condition, as in a for loop)
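A sketch of these five criteria applied to a hypothetical sum_first_n() loop (the function and the chosen values are assumptions for illustration, not from the slides):

#include <assert.h>

/* Hypothetical loop under test: sums the first n elements of a[0..len-1]. */
static int sum_first_n(const int a[], int len, int n) {
    int s = 0;
    for (int i = 0; i < n && i < len; i++)
        s += a[i];
    return s;
}

int main(void) {
    int a[10] = { 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
    assert(sum_first_n(a, 10, 0)  == 0);    /* 1) bypass the loop              */
    assert(sum_first_n(a, 10, 1)  == 1);    /* 2) one pass                     */
    assert(sum_first_n(a, 10, 2)  == 3);    /* 3) two passes before exiting    */
    assert(sum_first_n(a, 10, 5)  == 15);   /* 4) a typical number of passes   */
    assert(sum_first_n(a, 10, 10) == 55);   /* 5) the maximum number of passes */
    return 0;
}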

Loop Testing: Nested Loops
1) Set all but one loop to a typical value and run through the single-loop cases for that loop. Repeat for all loops.
2) Do minimum values for all loops simultaneously.
3) Set all loops but one to the minimum value and repeat the test cases for that loop. Repeat for all loops.
4) Do maximum looping values for all loops simultaneously.

Data Flow Testing
- Select test paths of a program based on the Definition-Use (DU) chains of variables in the program.
- Write test cases so that every DU chain is covered at least once.
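A small illustration (ours, using a hypothetical function f) of the DU chains for a variable s and test cases that cover them:

#include <stdio.h>

/* Hypothetical function; definitions and uses of s are marked in comments. */
static int f(int x) {
    int s = 0;             /* def d1 of s                                   */
    if (x > 0)
        s = x * 2;         /* def d2 of s (redefines s on this path)        */
    printf("%d\n", s);     /* c-use of s: DU chains (d1, use) and (d2, use) */
    return s;
}

int main(void) {
    f(-1);   /* covers the DU chain d1 -> use (the if branch is not taken) */
    f(3);    /* covers the DU chain d2 -> use                              */
    return 0;
}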

Black-box vs. White-box Testing

Black-box testing can detect errors such as incorrect functions and missing functions.
It cannot detect design errors, coding errors, unreachable code, or hidden functions.

White-box testing can detect errors such as logic errors and design errors.
It cannot detect whether the program is performing its expected functions, or missing functionality.

Both methods of testing are required.

Is Complete Testing Possible?
- Exhaustive black-box testing is generally not possible because the input domain of a program may be infinite or incredibly large.
- Exhaustive white-box testing is generally not possible because a program usually has a very large number of paths.

Implications ...

Test-case design
- Careful selection of a subset of all possible test cases.
- The objective should be to maximize the number of errors found by a small, finite number of test cases.

Test-completion criteria

Testing Principles (1) --- Glen Myers
- A good test case is one likely to show an error.
- A description of the expected output or result is an essential part of test-case definition.
- A programmer should avoid attempting to test his/her own program.
- Testing is more effective and successful if performed by an independent test team.

Testing Principles (2) --- Glen Myers
- Avoid on-the-fly testing. Document all test cases.
- Test valid as well as invalid cases.
- Thoroughly inspect all test results.
- More detected errors implies even more errors present.

Testing Principles (3) --- Glen Myers
- Decide in advance when to stop testing.
- Do not plan the testing effort under the tacit assumption that no errors will be found.
- Testing is an extremely creative and intellectually challenging task.

What is a good test?

Summary
- Testing can show the presence of bugs, but not their absence.
- Exhaustive testing is generally not possible.
- Testing strategies: white-box and black-box.
- Both strategies are required for comprehensive testing of software.
