SWVV_L17a_Code-based_testing
(Overview figure: the V-model lifecycle — system specification, architecture design, component design, component implementation, system integration, system delivery, operation and maintenance — with the techniques covered at the component implementation level: source code analysis, proof of program correctness by theorem proving, software model checking with abstraction-based methods, component testing with classic techniques, and component testing with code-based testing, the topic of this lecture.)
Motivation
Goal: Developer testing of software components
(modules, units)
o At this level, a detailed specification may be missing
Idea: Generate test inputs based on the source code
o Execute all parts of the code
(this way source code coverage criteria can be satisfied)
How are test outputs checked?
o Based on overall expectations (derived from higher level
specifications)
o Using generic criteria: no crash, no OS-level error signal, no
exception, no timeout, no violated assertion (see the sketch below)
o Re-using the outputs of previous test runs (regression testing)
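A minimal sketch in Java of such a generic-oracle test; the Calculator class and the generated input are illustrative assumptions, not from the slides:

import org.junit.Test;
import static org.junit.Assert.fail;

// Hypothetical class under test (stands in for any component without a detailed spec)
class Calculator {
    int divide(int a, int b) { return a / b; }
}

// Generated-style test: no functional oracle, only generic criteria are checked
public class CalculatorGeneratedTest {

    @Test(timeout = 1000)                 // generic criterion: no timeout
    public void generatedTest01() {
        Calculator c = new Calculator();
        try {
            c.divide(42, 7);              // generated input; the result value itself is not checked
        } catch (RuntimeException e) {
            fail("Generic oracle violated: unexpected exception " + e);  // generic criterion: no unexpected exception
        }
    }
}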
1. Random test generation
Random selection of input data from the input domain
Advantage:
o Very fast
o Very cheap
Ideas:
o If no error is found: try different parts of the input domain
o Selection based on the "difference" or "distance" of values
(a minimal sketch follows below)
Example tool for Java: Randoop
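A minimal sketch of random, distance-based input selection; the distance threshold and the method standing in for the component under test are illustrative assumptions:

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Sketch: pick random inputs from the input domain, but keep only values
// that are "far" from the already selected ones (distance-based selection).
public class RandomInputGeneration {

    public static void main(String[] args) {
        Random rnd = new Random();
        List<Integer> selected = new ArrayList<>();

        while (selected.size() < 10) {
            int candidate = rnd.nextInt();                   // random value from the int input domain
            boolean farEnough = selected.stream()
                    .allMatch(v -> Math.abs((long) v - candidate) > 1_000_000);  // illustrative distance threshold
            if (farEnough) {
                selected.add(candidate);
            }
        }

        // Execute the (hypothetical) component under test with each selected input;
        // the oracle here is only the generic "no exception" criterion.
        for (int input : selected) {
            System.out.println("input " + input + " -> " + Math.abs(input));
        }
    }
}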
Random test generation: The Randoop tool
Generating sequences of method calls
Creating object parameters by passing random values to
constructors (an illustrative generated test is sketched below)
Heuristics:
o Executing each generated test case and observing its behavior
o Discarding random tests that appear redundant
o Using invalid values for robustness testing
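An illustrative sketch of the kind of call-sequence test such a tool might emit for java.util.LinkedList; the concrete sequence and assertions are assumptions for illustration, not actual Randoop output:

import java.util.LinkedList;
import org.junit.Assert;
import org.junit.Test;

// Sketch of a generated call sequence: objects are built step by step with
// random constructor/method arguments, and observed behavior is turned into assertions.
public class GeneratedSequenceTest {

    @Test
    public void sequence01() {
        LinkedList<Integer> list = new LinkedList<>();   // step 1: create the receiver object
        list.add(-5);                                    // step 2: random argument
        list.addFirst(100);                              // step 3: another random call
        int first = list.getFirst();                     // step 4: observe a return value

        // Assertions derived from the observed execution (regression-style oracle)
        Assert.assertEquals(100, first);
        Assert.assertEquals(2, list.size());
    }
}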
2. Annotation-based test generation
The code shall contain:
o Pre- and post-conditions (e.g.: design by contract)
o Other annotations (e.g., loop invariants)
These can guide test generation:
o Preconditions: to be satisfied
(or violated if robustness is tested)
o Postconditions, invariants: to be checked
/*@ requires amt > 0 && amt <= acc.bal;
  @ assignable acc.bal;
  @ ensures acc.bal == \old(acc.bal) - amt;
  @*/
public void transfer(int amt, Account acc) {
    acc.withdraw(amt);
    deposit(amt);
}
Annotation-based test generation: Example tools
AutoTest
o For the Eiffel language, with design by contract
o Input: object pool
• Random generation of inputs that satisfy the preconditions
o Expected output: checked on the basis of the contracts
o Ref: Bertrand Meyer et al., "Programs That Test Themselves", IEEE Computer
42(9), 2009.
3. Search-based techniques
Approach: Search-based Software Engineering (SBSE)
Representing a problem as a search:
o Search space:
Possible test inputs + structure of the tested program
o Objective function: reaching a test goal
• Coverage criteria: statements, decision branches
• Minimization of the size of the test suite
Using metaheuristic algorithms
o Genetic algorithms: start from a random initial test set, apply
mutation or crossover, and evaluate with the objective function
• Mutations: Changes in test inputs, adding / removing calls
• Crossover: Combining existing test cases to generate a new set
• Keeping the new test cases that increase the objective function
Search-based techniques: Mutation and crossover
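The slide's figure illustrates these two operators on test cases. A minimal sketch, assuming a test case is represented simply as an array of int inputs (this representation is an assumption for illustration):

import java.util.Arrays;
import java.util.Random;

// Sketch: a test case is an array of int inputs.
// Mutation perturbs one input; crossover splices two parent test cases.
public class TestCaseOperators {

    private static final Random RND = new Random();

    // Mutation: randomly change one input value of the test case
    static int[] mutate(int[] testCase) {
        int[] child = Arrays.copyOf(testCase, testCase.length);
        int pos = RND.nextInt(child.length);
        child[pos] = RND.nextInt();          // replace one input with a random value
        return child;
    }

    // Crossover: take a prefix from one parent and a suffix from the other
    static int[] crossover(int[] parentA, int[] parentB) {
        int len = Math.min(parentA.length, parentB.length);
        int cut = RND.nextInt(len);
        int[] child = Arrays.copyOf(parentA, parentA.length);
        for (int i = cut; i < len; i++) {
            child[i] = parentB[i];
        }
        return child;
    }

    public static void main(String[] args) {
        int[] t1 = {1, 2, 3};
        int[] t2 = {40, 50, 60};
        System.out.println(Arrays.toString(mutate(t1)));
        System.out.println(Arrays.toString(crossover(t1, t2)));
        // In the genetic algorithm, children that increase the objective
        // function (e.g., branch coverage) are kept in the test suite.
    }
}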
4. Symbolic execution
Static program analysis technique
Basic idea:
o Following the computation of program paths on the
basis of the source code, with symbolic variables
o Deriving reachability conditions as path constraints (PC)
o Constraint solving on path constraints (e.g., using an
SMT solver):
A solution provides inputs to execute the given path
Popular nowadays:
o Efficient SMT solvers exist
o Used to generate test inputs for covering given paths
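A worked sketch of path constraints on a small illustrative method (the method itself and the example solver solution are assumptions, not from the slides):

// Symbolic execution treats x and y as symbolic values X and Y and collects
// the branch conditions along each path into a path constraint (PC).
class PathConstraintExample {
    int classify(int x, int y) {
        if (x > 0) {                  // decision d1
            if (x + y == 10) {        // decision d2
                return 1;             // path 1, PC: X > 0 && X + Y == 10
            }
            return 2;                 // path 2, PC: X > 0 && X + Y != 10
        }
        return 3;                     // path 3, PC: X <= 0
    }
}
// Handing each PC to an SMT solver yields concrete test inputs for that path,
// e.g., a solution of PC1 such as x = 1, y = 9 drives execution down path 1.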
Symbolic execution: Program paths and related inputs
(Figure: the tree of program paths together with the concrete test inputs that cover each path.)
Symbolic execution: Example tools
Name                 Platform   Language            Notes
KLEE                 Linux      C (LLVM bitcode)
IntelliTest          Windows    .NET assembly       Formerly: Pex
SAGE                 Windows    x86 binary          Security testing
Jalangi              -          JavaScript
Symbolic PathFinder  -          Java
Symbolic execution: Microsoft IntelliTest
Generate unit tests for your code with IntelliTest
https://learn.microsoft.com/en-us/visualstudio/test/generate-unit-tests-for-your-code-with-intellitest
Symbolic execution: Challenges for symbolic execution
Loops: every additional loop iteration opens a new feasible path, so
the number of paths to explore can grow very large or even unbounded
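An illustrative sketch (not the original slide example) of how a loop multiplies the number of paths:

// Each additional iteration of the loop opens a new feasible path:
// the condition i < n can hold 0, 1, 2, ... times, so the path count grows with n.
class LoopPathExplosion {
    int countUp(int n) {
        int i = 0;
        while (i < n) {   // symbolic execution must decide how often to unroll this loop
            i++;
        }
        return i;
    }
}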
Solution ideas:
o Various loop traversal algorithms (instead of DFS)
o “Method summary”: simple representation of methods
Symbolic execution: Challenge 2
Complex arithmetic expressions
Most SMT solvers cannot handle these
int hardToTest2(int x){
    if (log(x) > 10)
        return x;
    else
        return -x;
}
Symbolic execution: Challenge 6
Interaction with the environment
Calls to platform and external libraries
int hardToTest3(string s){
    FileStream fs = File.Open(s, FileMode.Open);
    if (fs.Length > 1024){
        return 1;
    } else {
        return 0;
    }
}
Idea:
o "Environment models" (KLEE): for simple C programs
o Special objects representing the environment (Java)
(a sketch of this idea follows below)
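A hedged sketch in Java of the "special object representing the environment" idea; the FileSystem interface and the in-memory implementation are illustrative assumptions, not an API of any listed tool:

// Sketch: the code under test talks to the environment through an interface,
// so testing / symbolic execution can substitute a simple in-memory model.
interface FileSystem {
    long length(String path);            // abstraction of the real file-system call
}

class InMemoryFileSystem implements FileSystem {
    @Override
    public long length(String path) {
        return path.endsWith(".big") ? 2048 : 10;   // deterministic, easily controllable model
    }
}

class FileChecker {
    private final FileSystem fs;

    FileChecker(FileSystem fs) { this.fs = fs; }

    int hardToTest3(String path) {
        return fs.length(path) > 1024 ? 1 : 0;      // branch now depends on the model, not on real files
    }
}

class EnvironmentModelDemo {
    public static void main(String[] args) {
        FileChecker checker = new FileChecker(new InMemoryFileSystem());
        System.out.println(checker.hardToTest3("report.big"));   // covers the "> 1024" branch
        System.out.println(checker.hardToTest3("note.txt"));     // covers the other branch
    }
}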
Evaluation: Applying code-based techniques
A large-scale embedded system (C)
o Execution of CREST and KLEE on a project of the company ABB
o ~60% branch coverage reached
o Failures and issues in several cases
X. Qu, B. Robinson: A Case Study of Concolic Testing Tools and Their Limitations, ESEM 2011