REVIEW1
Team members:
V.CHAITANYA REG: 21K61A05I9
A.THARUN REG: 21K61A0505
D.VISHNU REG: 21K61A0536
D.VINAY REG: 21K61A0537
Under the supervision of:
Dr. M. PARTHIBAN
Professor / CSE
Batch Number – 21CSEB017
Table of Contents:
• Problem Introduction
• Literature Survey
• Comparison Table
• Gap Identification
• Objective of the project
• Tools and Dataset Used for Implementation
• Anticipated Outcome
• References
PROBLEM INTRODUCTION
• The increasing complexity of software systems demands innovative solutions for quality assurance, as traditional methods prove inadequate.
• Machine learning and deep learning are being integrated into defect prediction and testing methodologies, improving accuracy and efficiency (a minimal code sketch of such a defect predictor follows this list).
• New frameworks are emerging to meet the quality assurance needs of AI-driven software, emphasizing the importance of tailored testing approaches.
• Automation, including the use of large language models for test case generation, is significantly transforming testing processes and enhancing software reliability.
• Effective maintenance strategies, such as the automated classification of bug reports, are crucial for maintaining software quality throughout its lifecycle.
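The sketch below illustrates the kind of machine-learning defect prediction referred to in the list above: a classifier trained on per-module code metrics to flag defect-prone modules. It is a minimal sketch only; the placeholder data, the metric interpretation, and the random-forest choice are assumptions for illustration, not the project's final design.

```python
# Minimal sketch: a machine-learning defect predictor trained on static code
# metrics. The placeholder data and the random-forest choice are illustrative
# assumptions, not the final project design.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# Placeholder data: each row stands for a software module described by a few
# code metrics (e.g. lines of code, cyclomatic complexity, Halstead effort);
# the label marks whether the module was found defective.
n_modules = 500
X = rng.normal(size=(n_modules, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n_modules) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# AUC is a metric commonly reported in the surveyed defect-prediction papers.
probs = model.predict_proba(X_test)[:, 1]
print("AUC on held-out modules:", round(roc_auc_score(y_test, probs), 3))
```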
Literature Overview:

| Authors | Algorithm Used | Methodologies / Approaches | Key Findings | Accuracy / Performance |
| --- | --- | --- | --- | --- |
| Firas Alghanim et al. (2023) | Deep Learning Model | Deep learning for defect density prediction | Outperforms traditional ML in sparse data settings, enhancing defect density prediction. | Significant improvement (exact 9.3%) |
| Chuanqi Tao et al. (2023) | Metamorphic Testing | Testing frameworks for AI applications | Proposes robust testing methodologies for AI, addressing quality assurance gaps. | Demonstrated feasibility (exact 12.3%) |
| Lee et al. (2022) | HS-CSDT (Harmony Search CSDT) | Holistic parameter optimization in SDP | Enhances model performance by optimizing parameters throughout the SDP process. | Outperforms traditional methods (exact 21.8%) |
| Khan et al. (2022) | opt-aiNet (Artificial Immune Network) | Hyper-parameter optimization for SBP | Significant improvements in accuracy and AUC metrics for various classifiers. | Up to 6% increase in accuracy; AUC improvements up to 41% |
| Li et al. (2024) | HSBF (Hierarchical Selection-Based Filter) | Data filtering strategies for CPDP | HSBF significantly improves prediction accuracy, outperforming traditional methods. | Exact 18.6%, higher than WPDP models |
| Shatha Abed Alsaedi et al. (2023) | Ensemble ML Algorithm | Natural language processing for bug classification | Achieves high accuracy in classifying bug reports, enhancing maintenance processes. | Notable improvements observed with 31% |
| Xiaoyuan Xie et al. (2023) | METTLE | Metamorphic testing for unsupervised learning | Facilitates effective validation of unsupervised systems, enhancing user understanding. | Effective validation (23.6%) |
| Grzegorz Siewruk & Wojciech Mazurczyk (2023) | Machine Learning Algorithms | Context-aware vulnerability classification | Automated classification improves accuracy in vulnerability management. | Exact % not specified; improved accuracy noted |
| Angelo Afeltra et al. (2023) | Machine Learning Models | Analysis of flaky tests in cross-project scenarios | Highlights challenges in cross-project predictions and the importance of filtering methods. | Improved prediction accuracy (exact % not specified) |
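Several of the surveyed studies (Lee et al. 2022, Khan et al. 2022) attribute their gains to hyper-parameter optimization of the defect-prediction model. The sketch below illustrates that general idea with a plain scikit-learn grid search; it is not the harmony-search or artificial-immune-network optimizers those papers propose, and the data and parameter grid are assumptions for illustration only.

```python
# Minimal sketch: hyper-parameter optimization for a defect-prediction model.
# A plain grid search stands in for the harmony-search / opt-aiNet optimizers
# used in the surveyed papers; the data and parameter grid are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))                     # placeholder code metrics
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=300) > 0).astype(int)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 5, 10],
    "min_samples_leaf": [1, 5],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    scoring="roc_auc",    # AUC, as reported by several surveyed papers
    cv=5,
)
search.fit(X, y)

print("Best cross-validated AUC:", round(search.best_score_, 3))
print("Best parameters:", search.best_params_)
```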
Dataset link:
https://www.kaggle.com/datasets/semustafacevik/software-defect-prediction
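For implementation, the dataset above would typically be downloaded from Kaggle and loaded as a CSV. The snippet below is a minimal loading-and-inspection sketch; the file name and the label column name are assumptions about the dataset's layout and should be adjusted to the files actually downloaded.

```python
# Minimal sketch for loading the Kaggle software-defect-prediction dataset.
# The CSV file name and the label column name are assumptions; adjust them to
# match the files downloaded from the link above.
import pandas as pd

df = pd.read_csv("software_defects.csv")   # assumed file name

print(df.shape)      # number of modules and metric columns
print(df.head())     # first few rows of code metrics

# Class balance matters in defect prediction (defective modules are rare).
if "defects" in df.columns:                # assumed label column name
    print(df["defects"].value_counts(normalize=True))
```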