
Software Verification and Validation (VIMMD052)

Software component testing

Istvan Majzik
[email protected]

Budapest University of Technology and Economics


Dept. of Artificial Intelligence and Systems Engineering
Where are we now in the development process?

Phases of the development process: requirement analysis, system
specification, architecture design, component design, component
implementation, system integration, system delivery, operation and
maintenance.

 Verification activities attached to the component implementation phase:
o Source code analysis
o Proof of program correctness by theorem proving
o Software model checking with abstraction-based methods
o Component testing
Inputs and outputs of the phase

 Inputs:
o Software component design
o Software component test plan
o Software quality assurance plan
 Activity: Software component testing
 Outputs:
o Software component test report
o Software component verification report
(summarizes all component level verification and testing activities)
Goals of testing
 Testing:
o Running the program in order to detect faults
 Exhaustive testing:
o Running programs in all possible ways (with all possible inputs)
o Hard to implement in practice
 Observations:
o Dijkstra: Testing is able to show the presence of faults,
but not able to show the absence of faults.
o Hoare: Testing can be applied with the principle of induction:
if the program runs correctly for a given test input, then it will
run correctly for similar inputs.
Test environment: Component testing

Test driver → MUT (CUT: module/component under test) → Stub

 Test driver (a test program or test script):
o Test execution: providing inputs
o Test evaluation: checking outputs
 Stub: substitutes the dependencies of the component (module) under
test, with test-oriented functionality
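A minimal sketch of this setup in C, assuming a hypothetical component
compute_fee() that depends on an external get_rate(); the stub fixes the
rate so the driver can check outputs deterministically (all names here
are illustrative, not from the lecture):

#include <assert.h>
#include <stdio.h>

/* Stub: replaces the real dependency with test-oriented functionality. */
double get_rate(void) {
    return 0.5;                 /* fixed, predictable value for testing */
}

/* Component (module) under test. */
double compute_fee(double amount) {
    return amount * get_rate();
}

/* Test driver: provides inputs, checks outputs. */
int main(void) {
    assert(compute_fee(100.0) == 50.0);  /* expected output with stubbed rate */
    assert(compute_fee(0.0)   == 0.0);
    printf("component tests passed\n");
    return 0;
}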
Test approaches

 Specification based (functional) testing
o The component is considered as a “black box”:
only the specified functionality (external behaviour)
is known, the internal structure is not
o Test goals: checking the existence of the specified
functionality and the absence of extra functionality

 Structure based testing
o The component is a “white box”:
the internal structure (source code) is known
o Test goals: ensuring coverage of the
internal structure (e.g., program graph)
Specification based testing
(functional testing)

Goals and overview
Goals:
o Based on the functional specification,
o find representative inputs (test data)
for checking the correctness of the implementation

Overview of techniques:
1. Equivalence partitioning
2. Boundary value analysis
3. Cause-effect analysis
4. Combinatorial techniques
5. Finite state automaton based techniques
6. Use case based testing
Example: Requirements in standards (EN 50128)
 Software design and implementation:
 Functional/black box testing (D3):
(requirement tables from the standard omitted)
1. Equivalence partitioning
 Input and output equivalence partitions (classes)
o Data that are expected to reveal the same faults
(typically: execute the same part of the program)
o Each equivalence class is represented by a single test input
o Correctness for the other inputs of the equivalence class
follows from the principle of induction
 Equivalence partitioning and test data selection are
heuristic procedures
o Based on the specification: input/output data belonging to
the same functionality, mode of operation, or service
o Valid and invalid input data
 valid and invalid equivalence classes
• Invalid data: robustness testing
Example: Equivalence classes (partitions)
 Classic example: Triangle characterization program
o Inputs: Lengths of the sides a, b, c (here: 3 integers)
o Outputs: Equilateral, isosceles, scalene

 Test data for the equivalence classes (see the test sketch below):
o Equilateral: 3, 3, 3
o Isosceles: 5, 5, 2 (similarly for the other sides)
o Scalene: 5, 6, 7
o Not a triangle: 1, 2, 5 (similarly for the other sides)
o Just not a triangle: 1, 2, 3
o Invalid inputs
• Zero value: 0, 1, 1
• Negative value: -3, -5, -3
• Not an integer: 2, 2, ’a’
• Fewer inputs than needed: 3, 4
o …
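A minimal test sketch in C for the integer-valued classes above, assuming
a hypothetical classify_triangle() implementation and return codes (the
non-integer and missing-input classes need a different interface and are
omitted):

#include <assert.h>

enum kind { NOT_A_TRIANGLE, EQUILATERAL, ISOSCELES, SCALENE };

enum kind classify_triangle(int a, int b, int c) {
    if (a <= 0 || b <= 0 || c <= 0)             return NOT_A_TRIANGLE;
    if (a + b <= c || a + c <= b || b + c <= a) return NOT_A_TRIANGLE;
    if (a == b && b == c)                       return EQUILATERAL;
    if (a == b || b == c || a == c)             return ISOSCELES;
    return SCALENE;
}

int main(void) {
    /* one representative test input per equivalence class */
    assert(classify_triangle(3, 3, 3)    == EQUILATERAL);
    assert(classify_triangle(5, 5, 2)    == ISOSCELES);
    assert(classify_triangle(5, 6, 7)    == SCALENE);
    assert(classify_triangle(1, 2, 5)    == NOT_A_TRIANGLE); /* clearly not */
    assert(classify_triangle(1, 2, 3)    == NOT_A_TRIANGLE); /* just not */
    assert(classify_triangle(0, 1, 1)    == NOT_A_TRIANGLE); /* zero side */
    assert(classify_triangle(-3, -5, -3) == NOT_A_TRIANGLE); /* negative */
    return 0;
}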
Using equivalence classes
 Tests when the component has multiple inputs:
o Valid (normal) equivalence classes:
Test data should cover as many equivalence classes as possible
o Invalid equivalence classes:
First covering each invalid equivalence class separately,
then combining them systematically
 Weak and strong combination of normal equivalence classes:
o Weak combination: each equivalence class of each input is
covered by at least one test
o Strong combination: all combinations of the equivalence
classes of the inputs are tested
Example: Complex equivalence classes
 Gearbox controller: Input and output partitions are not
independent – hard to test

2. Boundary value analysis
 Examining the boundaries of data partitions
(equivalence classes)
o Input and output partitions are both examined
o To be applied to upper/lower bounds
 Motivation: bugs typically occur at the boundaries
o Incorrect relational operators in the code
o Incorrect input/output conditions in loops
o Incorrect size of (accessed) data structures, …
 Typical test data for a partition with boundaries b1 and b2:
o Testing a boundary requires 3 tests; a partition requires 5-7 tests
(see the sketch below)
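A minimal sketch, assuming a hypothetical in_range() predicate over a
partition [b1, b2]: three tests around each boundary plus a nominal
interior value, giving the 5-7 tests per partition mentioned above:

#include <stdio.h>

int in_range(int x, int b1, int b2) {
    return b1 <= x && x <= b2;   /* a typical relational-operator bug site */
}

int main(void) {
    int b1 = 10, b2 = 20;
    /* 3 tests per boundary (just below, on, just above) + interior value */
    int tests[] = { b1 - 1, b1, b1 + 1, (b1 + b2) / 2, b2 - 1, b2, b2 + 1 };
    for (unsigned i = 0; i < sizeof tests / sizeof tests[0]; i++)
        printf("in_range(%d) = %d\n", tests[i], in_range(tests[i], b1, b2));
    return 0;
}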
3. Cause-effect analysis
 Used if the relation of inputs and outputs is
combinational
o Causes: input equivalence classes
o Effects: output equivalence classes
o Using Boolean relations between causes and effects
 Boolean graph (Boole-graph): representing the relations
o AND, OR nodes
o Invalid combinations are represented implicitly
 Decision table: covering the Boolean graph
o Rows: inputs and the corresponding outputs
o Columns: test cases (test data)
Example: Cause-effect analysis

Inputs (causes):           Outputs (effects):
1 Owner ID                 A No access
2 Administrator ID         B Full access
3 Authorization code       C Restricted access

Boolean graph: full access (B) requires the authorization code (3)
AND (the owner ID (1) OR the administrator ID (2)).

Decision table (columns are test cases):

            T1  T2  T3  …
Inputs   1   0   1   0
         2   1   0   0
         3   1   1   1
Outputs  A   0   0   1
         B   1   1   0
         C   0   0   0
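A minimal executable sketch of the table above in C, assuming the rule
reconstructed from the Boolean graph (full access iff the authorization
code and at least one ID are present); the rule for restricted access is
not shown on the slide, so it is omitted here:

#include <assert.h>

int full_access(int owner, int admin, int code) {
    return (owner || admin) && code;
}

int main(void) {
    /* columns T1..T3 of the decision table */
    assert(full_access(0, 1, 1) == 1);  /* T1: B = 1 */
    assert(full_access(1, 0, 1) == 1);  /* T2: B = 1 */
    assert(full_access(0, 0, 1) == 0);  /* T3: A (no access) */
    return 0;
}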
4. Combinatorial techniques
 Goal: Testing the combinations of discrete inputs
o Problems are often caused by rare combinations
o But the number of all combinations can be high
 “Best guess” ad-hoc testing
o Based on intuition, covering typical faults
 “Each choice” testing
o Every input value shall be tested at least once
 “n-wise” testing
o For every subset of n inputs (out of all m inputs), all possible
combinations of their potential values are tested
o “Pairwise” testing: special case with n = 2
o Tool support: e.g., http://www.pairwise.org
Example: Pairwise testing
 Given 3 inputs and their potential values:
o OS: Windows, Linux
o CPU: Intel, AMD
o Protocol: IPv4, IPv6
 All combinations:
o 8 combinations are possible; 8 test cases
 “Pairwise” testing: 4 test cases suffice, because for each possible
pair of inputs, all combinations of values are tested:
o T1: Windows, Intel, IPv4
o T2: Windows, AMD, IPv6
o T3: Linux, Intel, IPv6
o T4: Linux, AMD, IPv4
(A sketch that checks this pair coverage follows below.)
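A small sketch that checks the pairwise property of T1-T4: for every pair
of the three inputs (each with 2 values, encoded here as 0/1), some test
case covers every value combination:

#include <assert.h>

int main(void) {
    /* T1..T4 encoded as {OS, CPU, Protocol}, each value 0 or 1 */
    int t[4][3] = {
        {0, 0, 0},  /* Windows, Intel, IPv4 */
        {0, 1, 1},  /* Windows, AMD,   IPv6 */
        {1, 0, 1},  /* Linux,   Intel, IPv6 */
        {1, 1, 0},  /* Linux,   AMD,   IPv4 */
    };
    /* for every pair of inputs (i, j) and every value pair (u, v)... */
    for (int i = 0; i < 3; i++)
        for (int j = i + 1; j < 3; j++)
            for (int u = 0; u < 2; u++)
                for (int v = 0; v < 2; v++) {
                    int covered = 0;
                    for (int k = 0; k < 4; k++)
                        covered |= (t[k][i] == u && t[k][j] == v);
                    assert(covered);   /* ...some test case covers it */
                }
    return 0;
}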
Efficiency of n-wise testing
 Comparing manual ad hoc and pairwise testing (10 projects):
many faults are triggered by combinations of only 2 or 3 inputs.

Source: R. Kuhn et al., “Combinatorial Software Testing”,
IEEE Computer, 42:8, 2009
5. Finite state automaton based testing
 Specification is given as a finite state automaton
 Typical testing goals:
o Covering (testing) all states and all transitions
o Trying also transitions that are not allowed (implicit transitions)
 Problems:
o Determining the current state of the tested system
o Setting the initial state
 Methods:
o Automated test input generation (see later)
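A minimal sketch of transition coverage, assuming a hypothetical
two-state automaton (states S0, S1; events e0, e1 encoded as 0/1): the
test sequence drives the implementation through all four transitions and
uses the expected next state as the oracle:

#include <assert.h>

enum state { S0, S1 };

enum state step(enum state s, int event) {
    switch (s) {
    case S0: return event == 0 ? S1 : S0;   /* e0: S0->S1, e1: S0->S0 */
    case S1: return event == 0 ? S0 : S1;   /* e0: S1->S0, e1: S1->S1 */
    }
    return s;
}

int main(void) {
    /* input sequence covering all 4 transitions of the automaton */
    int events[]          = { 1,  0,  1,  0 };
    enum state expected[] = { S0, S1, S1, S0 };
    enum state s = S0;                       /* known initial state */
    for (int i = 0; i < 4; i++) {
        s = step(s, events[i]);
        assert(s == expected[i]);            /* oracle: expected next state */
    }
    return 0;
}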
6. Use case based testing
 Deriving test cases from the specified use cases
o Use cases are often given together with preconditions
and post-conditions
o Test oracles: checking the post-conditions
 Typical test goals:
o Main path (“happy path”, “mainstream”)
o Alternative paths: separate test cases
o Robustness testing: tests that violate the preconditions
 Mainly used for higher level testing
o System tests, acceptance tests
Using the methods together
 Typical application of the basic methods:
1. Equivalence partitioning
2. Boundary value analysis
3. Cause-effect analysis, or combinatorial, or finite state
automaton based techniques (depending on the specification)
 Extension: Random testing
o Generating random test data
• Fast test generation, with low computational effort
o Fault coverage cannot be estimated
o Test results are difficult to evaluate:
• Computing the expected results (e.g., by simulation), or
• Only “smoke checking” (identifying crude failures such as crashes)
Structure based testing

The internal structure
 Well-specified representation:
o Model-based: state machine, activity diagram
o Source code based: control flow graph (program graph)
(Example models: an activity diagram with activities A1-A5, and a state
machine with states S1-S4, events e0-e2, guards g, g1, and actions a0-a2.)
Example source code and its control flow graph (labels a-f mark the
nodes of the graph; a and b are decisions, c and d are the branches of
b; a sequence of nodes from entry to exit forms a path):

a: for (i = 0; i < MAX; i++) {
b:     if (i == a) {
c:         n = n - i;
       } else {
d:         m = n - i;
       }
e:     printf("%d\n", n);
   }
f: printf("Ready.");
Test coverage metrics
 Characterizing the quality of the test suite:
which testable elements were tested?
1. Statements → Statement coverage
2. Decisions → Decision coverage
3. Conditions → Condition coverage
4. Execution paths → Path coverage

 This is not fault coverage!

 Standards require test coverage (DO-178B, EN 50128, …)
o 100% statement coverage is a typical basic requirement
Overview of test coverage criteria
 Control flow based test coverage criteria
o Statement coverage
o Decision coverage
o Condition coverage (several metrics)
o Path coverage
 Data flow based test coverage criteria
o Definition – usage coverage
o Definition-clear path coverage
 Combination of techniques

Basic concepts
 Statement
 Block (of statements)
o A sequence of one or more consecutive executable statements
without branches
 Decision
o Determines true/false branches in if(…), while(…), etc.
o Logical expression consisting of one or more conditions
combined by logical operators (AND, OR, …)
 Condition
o Logical expression without logical operators (AND, OR, …)
 Path
o A sequence of executable statements of a component,
typically from an entry point to an exit point

1. Statement coverage
 Definition:
number of statements executed during testing
/ number of all statements
 Does not take all branches into account. Example (blocks A1-A5):
A1: k=0; decision [a>0] / [a<=0]; A3: k=1; A4: m=1/k
o A single test with a > 0 executes every block: 100% statement
coverage, yet the program is not tested with a <= 0
(where k remains 0 and m = 1/k fails)
o A test with a <= 0 alone yields only 80% statement coverage
(4 out of the 5 blocks)
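The example as a C sketch (block labels follow the figure): a single
passing test with a > 0 reaches 100% statement coverage, while the
failing a <= 0 case remains untested:

#include <assert.h>

int f(int a) {
    int k = 0;                 /* A1 */
    if (a > 0)                 /* A2: decision */
        k = 1;                 /* A3 */
    return 1 / k;              /* A4: fails when a <= 0 (k stays 0) */
}

int main(void) {
    assert(f(5) == 1);         /* a > 0: 100% statement coverage, test passes */
    /* f(-1) would divide by zero: the untested branch hides the fault */
    return 0;
}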
2. Decision coverage
 Definition:
number of decision branches reached during testing
/ number of all potential decision branches
 Does not take into account all settings of the conditions.
Example: a decision [a || b] with its true and false branches:
o 100% decision coverage is possible without ever setting b = true
(which may cause problems elsewhere)
3. Condition coverage
 Generic definition:
number of tested combinations of conditions
/ number of targeted combinations of conditions
 Variants (regarding the “targeted combinations”):
o Every condition is set to both true and false during testing
• Example for 100% condition coverage of the decision [a || b]:
1. a = true, b = false
2. a = false, b = true
• Does not yield 100% decision coverage
(the false branch of the decision is never taken)
o Every condition is evaluated to both true and false
• Not the same as above, due to lazy (short-circuit) evaluation
(see the sketch below)
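A minimal C sketch of the difference, using a hypothetical instrumented
condition: with short-circuit evaluation, setting b in the test data
does not guarantee that b is ever evaluated:

#include <stdio.h>

int b_called;                           /* records whether b was evaluated */

int b_value(int v) { b_called = 1; return v; }

int main(void) {
    int a = 1;
    int d = (a || b_value(1));          /* a is true: b_value() never runs */
    printf("decision=%d, b evaluated=%d\n", d, b_called);  /* prints 1, 0 */
    return 0;
}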
4. Condition/decision coverage (C/DC)
 Definition:
o Each decision takes every possible outcome (branch)
o Each condition in a decision takes every possible outcome
 Example for 100% C/DC coverage of [a || b]:
1. a = true, b = true
2. a = false, b = false
 Does not take into account whether each condition has an effect
(e.g., the effect of changing b from false to true can only be
observed when a = false)
5. Modified condition/decision coverage (MC/DC)
 Definition:
o Each decision takes every possible outcome (branch)
o Each condition in a decision takes every possible outcome
o Each condition in a decision is shown to independently
affect the outcome of the decision
 Example for 100% MC/DC coverage of [a || b]
(checked in the sketch below):
1. a = true, b = false
2. a = false, b = true
3. a = false, b = false
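A minimal sketch checking these three vectors against the decision
(a || b): each pair of vectors that differs in only one condition
demonstrates that condition's independent effect on the outcome:

#include <assert.h>

int decision(int a, int b) { return a || b; }

int main(void) {
    /* the three MC/DC tests: (T,F), (F,T), (F,F) */
    assert(decision(1, 0) == 1);
    assert(decision(0, 1) == 1);
    assert(decision(0, 0) == 0);
    /* a independently affects the outcome: compare (1,0) with (0,0) */
    assert(decision(1, 0) != decision(0, 0));
    /* b independently affects the outcome: compare (0,1) with (0,0) */
    assert(decision(0, 1) != decision(0, 0));
    return 0;
}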
6. Multiple condition coverage
 Definition:
o All combinations of the conditions are tested
• For n conditions: 2^n test cases may be necessary
(fewer with lazy evaluation)
• Sometimes not practical
(e.g., in avionics systems there are programs with more than
30 conditions in a decision)
 Example for 100% multiple condition coverage of [a || b]:
1. a = true, b = false
2. a = false, b = true
3. a = false, b = false
4. a = true, b = true
7. Basic path coverage
 Definition:
number of independent paths traversed during testing
/ number of all independent paths
 100% path coverage implies:
o 100% statement coverage and 100% decision coverage
o 100% multiple condition coverage is not implied
 Example (control flow graph with blocks A1-A5, two decisions):
o Path coverage: 80% (4 out of 5 independent paths)
o Statement coverage: 100%
Independent paths of a component
 Goal: Covering independent paths
o Independent paths from the point of view of testing:
each path contains a statement or decision branch
that is not included in the other paths
 The maximal number of independent paths:
o CK: cyclomatic complexity
o In regular control flow graphs (connected, single entry/exit):
CK(G) = E - N + 2, where
E: number of edges,
N: number of nodes
in the control flow graph G
 The set of independent paths is not unique
 Example: for the control flow graph with blocks A1-A5 above:
N = 5, E = 8, so CK = E - N + 2 = 5, i.e., at most 5 independent paths.
Covering the paths of a component
 Conceptual algorithm:
o Selecting at most CK independent paths
o Generating inputs to traverse these paths,
one after the other
 Problems:
o Not all paths can be traversed:
conditions along the selected path may be contradictory
o Loops: the number of loop executions shall be limited
 There are no fully automated tools to generate test
sequences for 100% path coverage
o Symbolic execution: with SMT solvers (see later)
o Limitations: loops, data types, external libraries, …
Other coverage metrics (examples)
 Loop coverage
o Loops executed 0 (if applicable), 1, or n times
 Race coverage
o Multiple threads executing the same block of statements
 Relational operator coverage
o Boundary values tried in case of relational operators
 Table coverage
o Testing jump tables (e.g., in state machine implementations)
 Object code branch coverage
o Machine instruction level coverage of conditional branches
Data flow based test criteria
 Goals of testing:
o Checking the definition (value assignment) and the use of variables
o Is there an incorrect assignment? Is the value used in a correct way?
 Labeling the program graph:
o def(v): definition of variable v (by assigning a value)
o use(v): use of variable v
o p-use(v): use of v in a predicate (condition of a decision)
o c-use(v): use of v in a computation
 Notation for paths:
o def-clear v path: there is no def v label on the path
o def-use v (shortly d-u v) path:
• starts with a def v label, ends with a p-use v or c-use v label
• between these there is a def-clear v path
• there is no internal loop (or the full d-u v path is a loop segment)
Example: Labeling the program graph (variables x, y, z, a)

Statement       Labels
x = a + 2       def x, c-use a
y = 24          def y
z = x + y       def z, c-use x, c-use y
if (x > 12)     p-use x (on both outgoing branches)
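The same labeling written as comments on a small C function (the
enclosing function, types, and return values are assumptions added
for the sake of a compilable fragment):

int component(int a) {
    int x, y, z;
    x = a + 2;          /* def x, c-use a */
    y = 24;             /* def y */
    z = x + y;          /* def z, c-use x, c-use y */
    if (x > 12)         /* p-use x (on both branches) */
        return z;       /* c-use z */
    else
        return 0;
}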
All-defs coverage criterion
 All-defs:
For every variable v and every def v statement in the program:
at least one use v statement is reached
by at least one def-clear v path
(here use v may be either p-use v or c-use v)

 In other words: for each def v, one def-clear v path
to one use v statement is tested.
All-p-uses, all-c-uses, all-uses criteria
 All-p-uses / all-c-uses:
For every variable v and every def v statement:
all p-use v / c-use v statements are reached
by at least one def-clear v path

 All-uses:
For every variable v and every def v statement:
all use v statements are reached
by at least one def-clear v path
All-paths and all-du-paths criteria
 All-paths:
o For every variable v and every def v statement:
to every use v statement, all executable def-clear v paths are tested
o In case of loops, multiple executions are distinguished
 All-du-paths (less strict):
o For every variable v and every def v statement:
to every use v statement, all d-u v paths are tested
(i.e., without internal loops)
Hierarchy of data flow based test coverage criteria
(each criterion subsumes the ones below it):

all-paths (difficult to achieve)
  → all-du-paths
    → all-uses
      → all-c-uses / some-p-uses → all-defs
      → all-p-uses / some-c-uses → all-p-uses
        → all-edges (100% decision coverage)
          → all-nodes (100% statement coverage)
Using test coverage metrics
 What are these good for?
o Finding parts of the program (source code) where testing is weak
• The test suite shall be extended to cover the untested elements
o Finding redundant test cases (that cover the same part of the program)
• However, data dependency shall be considered: different types of
faults can be detected by different data on the same path
o The coverage of successful tests is not a direct measure of code quality
• Rather, it measures the completeness of the test suite
• Testing is often terminated on the basis of the coverage measure
 What are these not good for?
o Identifying requirements that were not implemented
o Aiming simply at “covering” program parts, without considering the
specification and the related other ways of test data selection
Execution of test cases
 Execution order (prioritization) of the test cases:
if the number of faults is expected to be low,
run the more efficient tests (with higher fault coverage) first:
o tests covering longer paths
o tests covering more difficult decisions
(Illustration: fault coverage over testing time grows faster when the
more efficient test cases are executed first.)
Summary: Component test design techniques
 Specification and structure based techniques
o Many (more or less orthogonal) techniques
o Specification based testing is the primary approach
 Commonly, only the basic techniques are used
o Exception: safety-critical systems
o E.g., DO-178B requires MC/DC coverage analysis
 Combining the techniques is useful. Example (Microsoft report):
o Specification based testing: 83% code coverage achieved
o + exploratory testing: 86% code coverage
o + structure based testing: 91% code coverage
o Not tested: internal checks, defensive programming
