SWVV L16 Component Testing
Istvan Majzik
[email protected]
[Figure: V-model development lifecycle: system specification, architecture design, component design, component implementation, system integration, system delivery, operation and maintenance. Verification techniques at the component implementation level: source code analysis, proof of program correctness by theorem proving, software model checking with abstraction-based methods, and component testing (the topic of this lecture).]
Inputs and outputs of the phase
[Figure: Software component testing takes the software component design, the software component test plan, and the software quality assurance plan as inputs; it produces the software component test report and the software component verification report. The verification report summarizes all component-level verification and testing activities.]
Goals of testing
Testing:
o Running the program in order to detect faults
Exhaustive testing:
o Running programs in all possible ways (with all possible inputs)
o Hard to implement in practice
Observations:
o Dijkstra: Testing is able to show the presence of faults,
but not able to show the absence of faults.
o Hoare: Testing can be applied with the principle of induction:
If the program runs correctly for a given test input, then it will
run correctly for similar inputs.
Test environment: Component testing
[Figure: test environment for component testing; a test program or test script drives the component under test.]
Test approaches
Specification based testing
(functional testing)
Goals and overview
Goals:
o Based on the functional specification, find representative
inputs (test data) for checking the correctness of the implementation
Overview of techniques:
1. Equivalence partitioning
2. Boundary value analysis
3. Cause-effect analysis
4. Combinatorial techniques
5. Finite state automaton based techniques
6. Use case based testing
Example: Requirements in standards (EN 50128)
Software design and implementation:
[Table: techniques and measures required by EN 50128 for software design and implementation.]
1. Equivalence partitioning
Input and output equivalence partitions (classes)
o Data that are expected to cover the same faults
(typically: execute the same part of the program)
o Each equivalence class is represented by a test input
o Correctness for the other inputs in the equivalence
class follows from the principle of induction
Equivalence partitioning and test data selection are
heuristic procedures
o Based on specification: input/output data belonging to
the same functionality, mode of operation, or service
o Valid and invalid input data yield valid and invalid
equivalence classes
• Invalid data: robustness testing
Example: Equivalence classes (partitions)
Classic example: Triangle characterization program
o Inputs: lengths of the sides (here: 3 integers)
o Outputs: equilateral, isosceles, scalene
[Figure: two panels contrasting a weak combination and a strong
combination of the normal (valid) equivalence classes.]
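As an illustration, a minimal sketch in Python (the deck gives no code; classify_triangle is a hypothetical implementation under test), choosing one representative input per equivalence class, including invalid classes for robustness testing:

```python
# Minimal equivalence partitioning sketch for the triangle example.
# classify_triangle is a hypothetical implementation under test.
def classify_triangle(a: int, b: int, c: int) -> str:
    if min(a, b, c) <= 0 or a + b <= c or a + c <= b or b + c <= a:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# One representative test input per equivalence class:
tests = [
    ((3, 3, 3), "equilateral"),  # valid class: all sides equal
    ((3, 3, 5), "isosceles"),    # valid class: exactly two sides equal
    ((3, 4, 5), "scalene"),      # valid class: all sides different
    ((0, 4, 5), "invalid"),      # invalid class: non-positive side
    ((1, 2, 5), "invalid"),      # invalid class: triangle inequality violated
]

for (a, b, c), expected in tests:
    assert classify_triangle(a, b, c) == expected, (a, b, c)
print("all equivalence-class representatives passed")
```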
Example: Complex equivalence classes
Gearbox controller: input and output partitions are not
independent, which makes testing hard
2. Boundary value analysis
Examining the boundaries of data partitions
(equivalence classes)
o Input and output partitions are both examined
o To be applied for upper/lower bounds
Motivation: Bugs typically occur at boundaries
o Incorrect relational operations in the code
o Incorrect input/output conditions in loops
o Incorrect size of data structures (accessed), …
Typical test data:
o Testing a boundary requires 3 tests; a partition requires 5-7 tests
[Figure: a partition delimited by boundaries b1 and b2, with test
points on, just below, and just above each boundary.]
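A minimal sketch, assuming a hypothetical accept_age function specified for the valid range [18, 65]; each boundary gets three tests (just below, on, and just above it):

```python
# Minimal boundary value analysis sketch for an assumed valid range [18, 65].
# accept_age is a hypothetical implementation under test.
def accept_age(age: int) -> bool:
    return 18 <= age <= 65

# Three tests per boundary: just below, on, and just above the boundary.
boundary_tests = [
    (17, False), (18, True), (19, True),   # lower boundary b1 = 18
    (64, True), (65, True), (66, False),   # upper boundary b2 = 65
]

for age, expected in boundary_tests:
    assert accept_age(age) == expected, age
print("all boundary tests passed")
```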
3. Cause-effect analysis
Used if the relation of inputs and outputs is
combinational
o Causes: input equivalence classes
o Effects: output equivalence classes
o Using Boolean relations between causes and effects
Boole-graph: representing the relations
o AND, OR nodes
o Implicitly represented: invalid combinations
Decision table: Covering the Boole-graph
o Rows: Inputs and corresponding outputs
o Columns represent test data
Example: Cause-effect analysis
Causes (inputs): 1 Owner ID, 2 Administrator ID, 3 Authorization code
Effects (outputs): A No access, B Full access, C Restricted access
[Figure: Boole-graph with an OR node joining causes 1 and 2, and an
AND node combining the result with cause 3.]
Decision table (columns are test cases):
           T1  T2  T3  ...
Inputs  1   0   1   0
        2   1   0   0
        3   1   1   1
Outputs A   0   0   1
        B   1   1   0
        C   0   0   0
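The relation read off the decision table can be sketched in code (hypothetical access_level function; effect C, restricted access, is not exercised by columns T1-T3 and is omitted from the sketch):

```python
# Minimal sketch of the cause-effect relation from the decision table.
# Causes: owner ID (1), administrator ID (2), authorization code (3).
def access_level(owner_id: bool, admin_id: bool, auth_code: bool) -> str:
    if (owner_id or admin_id) and auth_code:
        return "full"   # effect B
    return "none"       # effect A

# Columns T1-T3 of the decision table as test cases:
table = [
    # (1, 2, 3)            expected effect
    ((False, True, True),  "full"),  # T1
    ((True, False, True),  "full"),  # T2
    ((False, False, True), "none"),  # T3
]
for (c1, c2, c3), expected in table:
    assert access_level(c1, c2, c3) == expected
print("decision table covered")
```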
4. Combinatorial techniques
Goal: Testing the combinations of discrete inputs
o Problems are often caused by rare combinations
o But the number of all combinations can be high
“Best guess” ad-hoc testing
o Based on intuition, covering typical faults
“Each choice” testing
o All input values shall be tested at least once
“n-wise” testing
o For each selection of n inputs (out of all m inputs), testing
all possible combinations of their potential values
o “Pairwise” testing: Special case with n = 2
o Tool support: e.g., http://www.pairwise.org
Example: Pairwise testing
Given 3 inputs and their potential values:
o OS: Windows, Linux
o CPU: Intel, AMD
o Protocol: IPv4, IPv6
All combinations:
o 8 combinations are possible; 8 test cases
“Pairwise” testing: 4 test cases suffice, because for each pair of
inputs all combinations of their values are tested:
o T1: Windows, Intel, IPv4
o T2: Windows, AMD, IPv6
o T3: Linux, Intel, IPv6
o T4: Linux, AMD, IPv4
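A short sketch that verifies the pairwise property of T1-T4, i.e., that every value pair of every two inputs occurs in at least one test case (plain Python, no assumed tooling):

```python
from itertools import combinations, product

# Inputs and their values, as in the example above.
factors = {
    "OS": ["Windows", "Linux"],
    "CPU": ["Intel", "AMD"],
    "Protocol": ["IPv4", "IPv6"],
}
tests = [
    {"OS": "Windows", "CPU": "Intel", "Protocol": "IPv4"},  # T1
    {"OS": "Windows", "CPU": "AMD",   "Protocol": "IPv6"},  # T2
    {"OS": "Linux",   "CPU": "Intel", "Protocol": "IPv6"},  # T3
    {"OS": "Linux",   "CPU": "AMD",   "Protocol": "IPv4"},  # T4
]

# For each pair of inputs, every combination of their values
# must occur in at least one test case.
for f1, f2 in combinations(factors, 2):
    for v1, v2 in product(factors[f1], factors[f2]):
        assert any(t[f1] == v1 and t[f2] == v2 for t in tests), (f1, v1, f2, v2)
print("4 test cases cover all pairs")
```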
Efficiency of n-wise testing
[Figure: data from 10 projects comparing manual ad hoc testing with
pairwise testing.]
5. Finite state automaton based testing
Specification is given as a finite state automaton
Typical testing goals:
o Covering (testing) all states, all transitions
o Also trying transitions that are not allowed (implicit)
Problems:
o Determining the state of the tested system
o Setting the initial state
Methods:
o Automated test input generation (see later)
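A minimal sketch with an assumed two-state automaton (states and events are hypothetical): the test covers every specified transition and also tries an input that is not allowed in the current state:

```python
# Minimal sketch: transition coverage for an assumed two-state automaton.
# States: "idle", "active"; specified transitions as (state, event) -> state.
transitions = {
    ("idle", "start"): "active",
    ("active", "stop"): "idle",
}

def step(state: str, event: str) -> str:
    # Implicit (not allowed) transitions: stay in the same state.
    return transitions.get((state, event), state)

# Cover all specified transitions:
state = "idle"
for event, expected in [("start", "active"), ("stop", "idle")]:
    state = step(state, event)
    assert state == expected

# Also try a transition that is not allowed in the current state:
assert step("idle", "stop") == "idle"
print("all transitions (and one implicit transition) tested")
```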
6. Use case based testing
Deriving test cases from the specified use cases
o Use cases: often given together with preconditions
and post-conditions
o Test oracles: checking the post-conditions
Typical test goals:
o Main path (“happy path”, “mainstream”)
o Alternative paths: Separate test cases
o Robustness testing: Tests for violating preconditions
Mainly higher level testing
o System tests, acceptance tests
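Although use case based testing is mainly applied at higher levels, the idea of checking post-conditions as the test oracle can be sketched on a small scale (hypothetical withdraw use case; precondition: sufficient balance, postcondition: balance reduced by the amount):

```python
# Minimal sketch: use case "withdraw" with pre- and postconditions.
class Account:
    def __init__(self, balance: int):
        self.balance = balance
    def withdraw(self, amount: int) -> None:
        if amount > self.balance:          # precondition violated
            raise ValueError("insufficient balance")
        self.balance -= amount

# Happy path: precondition holds, oracle checks the postcondition.
acc = Account(100)
acc.withdraw(30)
assert acc.balance == 70                   # postcondition: balance reduced

# Robustness test: violating the precondition must be rejected.
try:
    acc.withdraw(1000)
    assert False, "precondition violation not detected"
except ValueError:
    pass
print("main path and robustness test passed")
```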
Using the methods together
Typical application of the basic methods:
1. Equivalence partitioning
2. Boundary value analysis
3. Cause-effect analysis, or combinatorial, or finite state
automaton based (depending on the specification)
Structure based testing
The internal structure
Well-specified representation:
o Model-based: state machine, activity diagram
o Source code based: control flow graph (program graph)
[Figure: a state machine (states S1-S4, transitions labeled
event[guard]/action, e.g. e2[g]/a1) and a control flow graph
(nodes A1-A5 between entry and exit) as example representations.]
Overview of test coverage criteria
Control flow based test coverage criteria
o Statement coverage
o Decision coverage
o Condition coverage (several metrics)
o Path coverage
Data flow based test coverage criteria
o Definition – usage coverage
o Definition-clear path coverage
Combination of techniques
Basic concepts
Statement
Block (of statements)
o A sequence of one or more consecutive executable statements
without branches
Decision
o Determines true/false branches in if(…), while(…), etc.
o Logical expression consisting of one or more conditions
combined by logical operators (AND, OR, …)
Condition
o Logical expression without logical operators (AND, OR, …)
Path
o A sequence of executable statements of a component,
typically from an entry point to an exit point
1. Statement coverage
Definition:
statement coverage = (number of statements executed during testing)
/ (number of all statements)
Does not take into account all branches:
o Example: k=0; if (a>0) then k=1; m=1/k
A single test with a>0 executes every statement (100% statement
coverage), but the empty a<=0 branch is never taken, so the
division by zero in m=1/k remains undetected.
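The trap can be demonstrated with a minimal sketch (illustrative code, following the example above):

```python
# Minimal sketch: 100% statement coverage can miss a fault.
def f(a: int) -> float:
    k = 0
    if a > 0:        # decision with an empty else branch
        k = 1
    return 1 / k     # divides by zero when a <= 0

# This single test executes every statement (100% statement coverage) ...
assert f(5) == 1.0
# ... yet f(-1) still raises ZeroDivisionError: the untested (empty)
# else branch hides the fault. Decision coverage would require a test
# with a <= 0 as well.
print("statement coverage reached, fault not detected")
```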
3. Condition coverage
Generic definition:
condition coverage = (number of tested combinations of conditions)
/ (number of targeted combinations of conditions)
Definitions (regarding the “targeted combinations”):
• Every condition is set to both true and false during testing
• Example for 100% condition coverage of the decision [a || b]:
1. a = true, b = false
2. a = false, b = true
• Does not yield 100% decision coverage (the decision is true in
both tests)
4. Condition/decision coverage (C/DC)
Definition:
o Each decision takes every possible outcome (branch)
o Each condition in a decision takes every possible
outcome
Does not take into account whether each condition has an effect
(e.g., effect of changing b from false to true can be tested when a = false)
5. Modified condition/decision coverage (MC/DC)
Definition:
o Each decision takes every possible outcome (branch)
o Each condition in a decision takes every possible
outcome
o Each condition in a decision is shown to independently
affect the outcome of the decision
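For the decision [a || b] from the earlier example, a minimal sketch of an MC/DC test set (illustrative; three tests suffice for two conditions):

```python
# Minimal MC/DC sketch for the decision (a or b): 3 test cases.
def decision(a: bool, b: bool) -> bool:
    return a or b

tests = [(True, False), (False, True), (False, False)]
outcomes = {t: decision(*t) for t in tests}

# Condition a independently affects the outcome:
# (True, False) vs (False, False): only a changes, the outcome flips.
assert outcomes[(True, False)] != outcomes[(False, False)]
# Condition b independently affects the outcome:
# (False, True) vs (False, False): only b changes, the outcome flips.
assert outcomes[(False, True)] != outcomes[(False, False)]
# Each decision outcome (true and false) also occurs, and each
# condition takes both truth values across the tests.
print("MC/DC achieved with 3 tests instead of all 4 combinations")
```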
6. Multiple condition coverage
Definition:
o All combinations of conditions are tested
• For n conditions: up to 2^n test cases may be necessary
(fewer with lazy evaluation)
• Sometimes not practical
(e.g. in avionics systems there are programs with more than
30 conditions in a decision)
7. Basic path coverage
Definition:
basic path coverage = (number of independent paths traversed during testing)
/ (number of all independent paths)
100% path coverage implies:
o 100% statement coverage, 100% decision coverage
o 100% multiple condition coverage is not implied
[Figure: control flow graph (nodes A1-A5) where path coverage is
80% (4 out of 5 independent paths) although statement coverage is
already 100%.]
Independent paths of a component
Goal: Covering independent paths
o Independent paths from the point of view of testing:
each path contains a statement or decision branch
that is not included in any other path
The maximal number of independent paths:
o CK: cyclomatic complexity
o In regular control flow graphs (connected, single entry/exit):
CK(G)=E-N+2, where
E: number of edges
N: number of nodes
in the control flow graph G
The set of independent paths is not unique
Example: For the control flow graph shown earlier (N = 5 nodes,
E = 8 edges): CK(G) = 8 - 5 + 2 = 5, so there are at most 5
independent paths.
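A minimal sketch computing CK(G) = E - N + 2 (the edge list is hypothetical, chosen only to match the example counts N = 5, E = 8):

```python
# Minimal sketch: cyclomatic complexity CK(G) = E - N + 2.
# Hypothetical control flow graph with N = 5 nodes and E = 8 edges
# (single entry A1, single exit A5), matching the example counts.
edges = [
    ("A1", "A2"),
    ("A2", "A3"), ("A2", "A4"), ("A2", "A5"),
    ("A3", "A4"), ("A3", "A5"),
    ("A4", "A5"),
    ("A5", "A2"),   # back edge (loop)
]
nodes = {n for e in edges for n in e}

N, E = len(nodes), len(edges)
ck = E - N + 2
print(f"N={N}, E={E}, CK={ck}")   # N=5, E=8, CK=5
assert ck == 5                     # at most 5 independent paths
```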
Covering the paths of a component
Conceptual algorithm:
o Selecting maximum CK independent paths
o Generating inputs to traverse these paths,
one after the other
Problems:
o Not all paths can be traversed:
Conditions along the selected path may be contradictory
o Loops: The number of loop executions shall be limited
There are no fully automated tools to generate test
sequences for 100% path coverage
o Symbolic execution: With SMT solver (see later)
o Limitations: Loops, data types, external libraries, …
Other coverage metrics (examples)
Loop coverage
o Loops executed 0 (if applicable), 1, or n times
(see the sketch after this list)
Race coverage
o Multiple threads executed on the same block of statements
Relational operator coverage
o Boundary values tried in case of relational operators
Table coverage
o Jump tables (state machine implementation) testing
Object code branch coverage
o Machine instruction level coverage of conditional branches
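A minimal loop coverage sketch (illustrative summing loop, not from the deck), exercising the loop 0, 1, and n times:

```python
# Minimal loop coverage sketch: execute the loop 0, 1, and n times.
def total(values: list[int]) -> int:
    s = 0
    for v in values:   # loop under test
        s += v
    return s

assert total([]) == 0            # loop executed 0 times
assert total([7]) == 7           # loop executed exactly once
assert total([1, 2, 3]) == 6     # loop executed n (> 1) times
print("loop coverage: 0, 1, and n iterations tested")
```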
Data flow based test criteria
Goals of testing
o Checking definition (value assignment) and use of the variables
o Is there an incorrect assignment? Is the value used in a correct way?
Labeling the program graph:
o def(v): definition of variable v (by assigning a value)
o use(v): using variable v
o p-use(v): using v in a predicate (condition for a decision)
o c-use(v): using v in computation
Notation for paths:
o def-clear v path: there is no def v label on the path
o def-use v (shortly d-u v) path:
• Starts with def v label, ends with p-use v or c-use v label
• Between these there is a def-clear v path
• There is no internal loop (or the full d-u v path is a loop segment)
Example: Labeling the program graph
[Figure: program graph over variables x, y, z, a; the statement
y=24 is labeled def y, and the decision if (x>12) is labeled
p-use x on both of its outgoing branches.]
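The labels can also be written as comments on a small piece of code (illustrative):

```python
# Minimal sketch: def/use labels as comments on a small program.
def demo(x: int) -> int:
    y = 24            # def y
    if x > 12:        # p-use x (x used in a predicate)
        z = y + 1     # c-use y (computation), def z
    else:
        z = y - 1     # c-use y, def z
    return z          # c-use z

# The path from "def y" to either "c-use y" is def-clear for y:
# y is not redefined in between.
assert demo(20) == 25 and demo(0) == 23
```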
All-defs coverage criterion
All-defs:
For all v variables, from all def v statements in the program:
At least one use v statement is reached
by at least one def-clear v path
(here use v may be either p-use v or c-use v)
[Figure: from each def v, at least one def-clear path to one of
the use v statements is tested.]
All-p-uses, all-c-uses, all-uses criteria
All-p-uses / all-c-uses:
For all v variables, from all def v statements:
All p-use v / c-use v statements are reached
by at least one def-clear v path
[Figure: from each def v, at least one def-clear path to every
p-use v / c-use v statement is tested.]
All-uses:
For all v variables, from all def v statements:
All use v statements are reached by
at least one def-clear v path
All-paths and all-du-paths criteria
All-paths:
o For all v variables, from all def v statements:
To all use v statements, all executable def-clear v paths are tested
o In case of loops, multiple executions are distinguished
All-du-paths (less strict):
o For all v variables, from all def v statements:
To all use v statements, all d-u v paths are tested (without internal loops)
[Figure: for each def v, all-paths tests every executable def-clear
path to every use v, while all-du-paths tests only the d-u paths.]
Hierarchy of data flow based test coverage criteria
(a criterion higher in the hierarchy subsumes those below it)
o all-du-paths
o all-uses
o all-defs and all-p-uses (both subsumed by all-uses)
o all-edges, i.e., 100% decision coverage (subsumed by all-p-uses)
o all-nodes, i.e., 100% statement coverage (subsumed by all-edges)
Using test coverage metrics
What are these good for?
o Finding parts of the program (source code) where testing is weak
• Test suite shall be extended to cover untested elements
o Finding redundant test cases (that cover the same part of the
program)
• However, data dependency shall be considered: different types of
faults can be tested by different data on the same path
o The coverage of successful tests is not a direct measure of code quality
• Rather, it is a measure of the completeness of the test suite
• Testing is often terminated on the basis of the coverage measure
What are these not good for?
o To identify requirements that were not implemented
o To aim at simply “covering” program parts, without considering the
specification and the related ways of test data selection
Execution of test cases
Execution order (prioritization) of the test cases:
If the number of faults is expected to be low:
First the more efficient tests (with higher fault coverage)
o Covering longer paths
o Covering more difficult decisions
[Figure: fault coverage as a function of testing time t, up to the
testing deadline.]
Summary: Component test design techniques
Specification and structure based techniques
o Many (more or less orthogonal) techniques
o Specification based testing is the primary approach
Commonly, only basic techniques are used
o Exception: Safety-critical systems
o E.g., DO-178B requires MC/DC coverage analysis
Combination of techniques is useful:
• Example (Microsoft report):
Specification based testing: 83% code coverage achieved
+ exploratory testing: 86% code coverage
+ structure based testing: 91% code coverage
Not tested: Internal checks, defensive programming