An Empirical Study of Fault Localisation Techniques for Deep Learning

N Humbatova, J Kim, G Jahangirova, S Yoo… - arXiv preprint arXiv:2412.11304, 2024 - arxiv.org
With the increased popularity of Deep Neural Networks (DNNs), the need for tools that assist developers in DNN implementation, testing and debugging also grows. Several approaches have been proposed that automatically analyse and localise potential faults in DNNs under test. In this work, we evaluate and compare existing state-of-the-art fault localisation techniques, which operate based on both dynamic and static analysis of the DNN. The evaluation is performed on a benchmark consisting of both real faults obtained from bug reporting platforms and faulty models produced by a mutation tool. Our findings indicate that using a single, specific ground truth (e.g., the human-defined one) for the evaluation of DNN fault localisation tools results in rather low performance (maximum average recall of 0.31 and precision of 0.23). However, such figures increase when alternative, equivalent patches that exist for a given faulty DNN are also considered. Results indicate that DeepFD is the most effective tool, achieving an average recall of 0.61 and precision of 0.41 on our benchmark.
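The abstract scores localisation tools by recall and precision against either a single ground truth or the best-matching among alternative, equivalent patches. The sketch below is not from the paper; it only illustrates that scoring idea under the assumption that fault locations are represented as plain string labels, and the helper names (precision_recall, best_match_scores) and the example values are hypothetical.

# Minimal sketch of scoring a DNN fault localisation result against one or
# several equivalent ground truths; representation and names are assumptions,
# not the paper's actual benchmark code.

def precision_recall(predicted: set[str], ground_truth: set[str]) -> tuple[float, float]:
    """Precision and recall of a predicted fault set against one ground truth."""
    if not predicted or not ground_truth:
        return 0.0, 0.0
    true_positives = len(predicted & ground_truth)
    return true_positives / len(predicted), true_positives / len(ground_truth)

def best_match_scores(predicted: set[str], alternative_truths: list[set[str]]) -> tuple[float, float]:
    """Score against every equivalent patch and keep the best result,
    mirroring the idea of accepting alternative, equivalent ground truths."""
    return max((precision_recall(predicted, gt) for gt in alternative_truths),
               key=lambda pr: pr[1])  # keep the alternative that maximises recall

# Hypothetical usage: the tool flags the learning rate and the loss function,
# while two equivalent fixes exist for the faulty model.
predicted = {"learning_rate", "loss_function"}
truths = [{"learning_rate"}, {"optimizer", "loss_function"}]
print(best_match_scores(predicted, truths))  # (0.5, 1.0)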