Dutta MCDC Test Generation
[email protected]
† National Institute of Technology Warangal, India
Artifact Available
Badges obtained: Available, Functional, and Reusable
Sangharatna Godboley, Joxan Jaffar, Rasool Maghareh, & Arpita Dutta. CUSTOM-Interpolation: ISSTA artifact evaluation. In 30th ACM SIGSOFT
International Symposium on Software Testing and Analysis (ISSTA), Aarhus, Denmark. Zenodo: http://doi.org/10.5281/zenodo.4771439
Website: https://tracer-x.github.io/
Github: https://github.com/tracer-x/
1 Introduction
2 Survey
3 Proposed Idea
4 Experimental Evaluation
5 Conclusion
Figure: Automatic vs. Manual!
Figure: Which strategy?
Figure: Exploration of Symbolic Execution Tree in Non-pruning DSE vs. Pruning DSE
(Recovered figure content: the program is instrumented with a sequence variable, k = 0 initially; the branch taken at each decision <1>, <2>, <3> appends a digit to k (k = k*10 + 1 or k = k*10 + 2), a skipped decision appends 0 (k = k*10), and the post-dominator carries assert( k ∉ {101, 102, 211, 212, 220} ).)

Table: Paths from sequences
Id | Seq | Path
S1 | 101 | 1→3→T
S2 | 102 | 1→3→F
S3 | 211 | 1→2→3→T
S4 | 212 | 1→2→3→F
S5 | 220 | 1→2→F
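Below is a minimal C sketch of the instrumentation just described. The guard directions (b < 0, c < 0, d < 0) and which branch skips a decision are assumptions inferred from the path table; only the k-encoding and the final assert come from the figure.

#include <assert.h>

/* Sketch of the instrumented program behind the figure (assumed guards).
 * A DSE or BMC tool reports every assert violation as a test input, and
 * each violation realizes one of the target sequences S1..S5. */
void instrumented(int b, int c, int d) {
    int k = 0, reach3 = 1;
    if (b < 0) {                     /* decision <1>, outcome 1 */
        k = k * 10 + 1;
        k = k * 10;                  /* decision <2> skipped: digit 0 */
    } else {                         /* decision <1>, outcome 2 */
        k = k * 10 + 2;
        if (c < 0) {                 /* decision <2>, outcome 1 */
            k = k * 10 + 1;
        } else {                     /* decision <2>, outcome 2 */
            k = k * 10 + 2;
            k = k * 10;              /* decision <3> skipped: digit 0 */
            reach3 = 0;
        }
    }
    if (reach3) {
        if (d < 0) k = k * 10 + 1;   /* decision <3>, outcome 1 -> exit T */
        else       k = k * 10 + 2;   /* decision <3>, outcome 2 -> exit F */
    }
    assert(k != 101 && k != 102 && k != 211 && k != 212 && k != 220);
}

int main(void) {
    instrumented(-1, 0, -1);   /* e.g. b < 0, d < 0 gives k = 101 (sequence S1) */
    return 0;
}

Running this sketch natively aborts at the assert, which is the point: a checker such as CBMC or TracerX would return that input as a test covering S1.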
(Figure detail: x = 0 at the root; interpolant x ≠ 1 ∧ x ≠ 16 ∧ x ≠ 13 ∧ x ≠ 28 annotated at node <1a>; nodes <7a>, <7b>; final assert( x ≠ 28 ).)
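The residue above suggests the classic chain-of-decisions program used to motivate pruning DSE. A minimal sketch under assumed guards and increments (only the final assert( x ≠ 28 ) is taken from the slide): seven independent decisions give 2^7 paths for non-pruning DSE, whereas interpolation lets the search subsume already-explained suffixes and visit dramatically fewer nodes.

#include <assert.h>

/* Assumed example program: each decision independently adds a power of two
 * to x, so naive DSE explores 2^7 = 128 paths. Interpolation-based pruning
 * (as in TRACER/TracerX) records, at each node, a condition under which the
 * remaining program cannot violate the assert, and prunes later visits
 * whose symbolic state implies that condition. */
void chain(int c1, int c2, int c3, int c4, int c5, int c6, int c7) {
    int x = 0;
    if (c1) x += 1;
    if (c2) x += 2;
    if (c3) x += 4;
    if (c4) x += 8;
    if (c5) x += 16;
    if (c6) x += 32;
    if (c7) x += 64;
    /* Fails exactly when c3, c4, c5 are taken and the others are not
     * (x = 4 + 8 + 16 = 28); a DSE engine reports that input as a test. */
    assert(x != 28);
}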
Example:
if (a < 0) b = 3;
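For this single-condition decision, MC/DC reduces to branch coverage: one test where (a < 0) is true and one where it is false. A minimal sketch with arbitrarily chosen inputs:

#include <assert.h>

/* The example decision wrapped in a function. */
int example(int a) {
    int b = 0;
    if (a < 0) b = 3;
    return b;
}

/* Two tests suffice for MC/DC here: the lone condition (a < 0) takes both
 * truth values and each value independently determines the decision.
 * The concrete inputs -1 and 5 are arbitrary choices. */
int main(void) {
    assert(example(-1) == 3);   /* (a < 0) true  */
    assert(example(5)  == 0);   /* (a < 0) false */
    return 0;
}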
Figure: Scatter chart for MC/DC Proved Sequences (CBMC vs. CUSTOM; x-axis: Program, 0–200; y-axis: proved sequences, 0–10000)
Figure: Scatter chart for MC/DC Unproven Sequences (CBMC vs. CUSTOM; x-axis: Program, 0–200; y-axis: unproven sequences, 0–10000)
We surveyed industrial practice and found that automatic MC/DC test generation is woefully inadequate; most practitioners rely on manual effort.
Our algorithm, if it terminates, generates an optimal set of MC/DC test cases.
We compared CUSTOM against CBMC, the only practical method available that addresses large programs.
A comprehensive experimental evaluation shows that our implementation performs at a higher level than CBMC.
1 Hayhurst, K.J., Veerhusen, D.S., Chilenski, J.J., and Rierson, L.K. A Practical Tutorial on Modified Condition/Decision Coverage. NASA Langley Technical Report Server (2001).
2 Cadar, C., Dunbar, D., and Engler, D.R. KLEE: Unassisted and Automatic Generation of High-Coverage Tests for Complex Systems Programs. In OSDI 2008 (Vol. 8, pp. 209-224).
3 Jaffar, J., Murali, V., Navas, J.A., and Santosa, A.E. TRACER: A Symbolic Execution Tool for Verification. In: Madhusudan, P., Seshia, S.A. (eds) Computer Aided Verification. CAV 2012. Lecture Notes in Computer Science, vol 7358. Springer, Berlin, Heidelberg (2012).
4 Kroening, D., and Tautschnig, M. CBMC – C Bounded Model Checker. In: Ábrahám, E., Havelund, K. (eds) Tools and Algorithms for the Construction and Analysis of Systems. TACAS 2014. Lecture Notes in Computer Science, vol 8413. Springer, Berlin, Heidelberg (2014).
5 Jaffar, J., Maghareh, R., Godboley, S., and Ha, X.L. TracerX: Dynamic Symbolic Execution with Interpolation (Competition Contribution). In: Wehrheim, H., Cabot, J. (eds) Fundamental Approaches to Software Engineering. FASE 2020. Lecture Notes in Computer Science, vol 12076. Springer, Cham (2020).
6 Jaffar, J., Godboley, S., and Maghareh, R. Optimal MC/DC Test Case Generation. In Proceedings of the 41st International Conference on Software Engineering: Companion Proceedings (ICSE '19). IEEE Press, 288-289 (2019). DOI: https://doi.org/10.1109/ICSE-Companion.2019.00118
7 Jaffar, J., Maghareh, R., Godboley, S., and Ha, X.L. TracerX: Dynamic Symbolic Execution with Interpolation. KLEE 2020 (2nd International KLEE Workshop on Symbolic Execution), Imperial College London, South Kensington Campus (2020).