
ADVANCEMENTS IN SOFTWARE DEFECT PREDICTION AND QUALITY ASSURANCE USING MACHINE LEARNING

Team members:
V.CHAITANYA   REG: 21K61A05I9
A.THARUN      REG: 21K61A0505
D.VISHNU      REG: 21K61A0536
D.VINAY       REG: 21K61A0537

Under the supervision of:
Dr. M. PARTHIBAN, Professor / CSE

Batch Number – 21CSEB017
Table of Contents:

• Problem Introduction
• Literature Survey
• Comparison Table
• Gap Identification
• Objective of the project
• Tools And Dataset used for Implementation
• Anticipated Outcome
• References

PROBLEM INTRODUCTION

• The increasing complexity of software systems demands innovative solutions for quality assurance, as traditional methods prove inadequate.
• Machine learning and deep learning are being integrated into defect prediction and testing methodologies, improving accuracy and efficiency.
• New frameworks are emerging to meet the quality assurance needs of AI-driven software, emphasizing the importance of tailored testing approaches.
• Automation, including the use of large language models for test case generation, is significantly transforming testing processes and enhancing software reliability.
• Effective maintenance strategies, such as the automated classification of bug reports, are crucial for maintaining software quality throughout its lifecycle (see the sketch after this list).
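As a concrete illustration of the last point, the sketch below classifies free-text bug reports with TF-IDF features and a random-forest ensemble, in the spirit of the ensemble bug-classification work surveyed later. It is only a minimal sketch assuming scikit-learn is available; the example reports and category labels are invented for demonstration and are not taken from any of the cited studies.

```python
# Minimal sketch: bug-report classification with TF-IDF + random forest.
# Illustrative only; the reports and categories below are made up.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

reports = [
    "App crashes with NullPointerException when saving a file",
    "Button label overlaps the icon on small screens",
    "Memory usage keeps growing until the service is killed",
    "Typo in the settings dialog title",
]
labels = ["functional", "ui", "performance", "ui"]  # hypothetical categories

# TF-IDF turns each report into a sparse numeric vector; the random forest
# then votes on a category for unseen reports.
model = make_pipeline(TfidfVectorizer(), RandomForestClassifier(n_estimators=100))
model.fit(reports, labels)

print(model.predict(["UI freezes for several seconds after login"]))
```

In a real maintenance pipeline the same structure would be trained on labelled historical reports and evaluated on a held-out split before being trusted for triage.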
Literature Overview:

Authors | Algorithm Used | Methodologies/Approaches | Key Findings | Accuracy/Performance
Firas Alghanim et al. (2023) | Deep Learning Model | Deep learning for defect density prediction | Outperforms traditional ML in sparse data settings, enhancing defect density prediction | Significant improvement (9.3%)
Chuanqi Tao et al. (2023) | Metamorphic Testing | Testing frameworks for AI applications | Proposes robust testing methodologies for AI, addressing quality assurance gaps | Demonstrated feasibility (12.3%)


Literature Overview:

Authors | Algorithm Used | Methodologies/Approaches | Key Findings | Accuracy/Performance
Lee et al. (2022) | HS-CSDT (Harmony Search CSDT) | Holistic parameter optimization in software defect prediction (SDP) | Enhances model performance by optimizing parameters throughout the SDP process | Outperforms traditional methods (21.8%)
Khan et al. (2022) | opt-aiNet (Artificial Immune Network) | Hyper-parameter optimization for software bug prediction (SBP) | Significant improvements in accuracy and AUC metrics for various classifiers | Up to 6% increase in accuracy; AUC improvements up to 41%


Literature Overview:

Authors | Algorithm Used | Methodologies/Approaches | Key Findings | Accuracy/Performance
Li et al. (2024) | HSBF (Hierarchical Selection-Based Filter) | Data filtering strategies for cross-project defect prediction (CPDP) | HSBF significantly improves prediction accuracy, outperforming traditional methods | 18.6%, higher than within-project (WPDP) models
Shatha Abed Alsaedi et al. (2023) | Ensemble ML Algorithm | Natural language processing for bug classification | Achieves high accuracy in classifying bug reports, enhancing maintenance processes | Notable improvements observed (31%)


Literature Overview:

Authors | Algorithm Used | Methodologies/Approaches | Key Findings | Accuracy/Performance
Afric et al. (2024) | RoBERTa | Issue classification methods | RoBERTa-generated datasets show lower mislabeling and improve defect prediction outcomes | Average mislabeling of 14.36%; better performance overall
Xiaoyuan Xie et al. (2023) | METTLE | Metamorphic testing for unsupervised learning | Facilitates effective validation of unsupervised systems, enhancing user understanding | Effective validation (23.6%)


Literature Overview:

Authors | Algorithm Used | Methodologies/Approaches | Key Findings | Accuracy/Performance
Grzegorz Siewruk & Wojciech Mazurczyk (2023) | Machine Learning Algorithms | Automated vulnerability classification | Context-aware classification improves accuracy in vulnerability management | Exact % not specified; improved accuracy noted
Angelo Afeltra et al. (2023) | Machine Learning Models | Analysis of flaky tests in cross-project scenarios | Highlights challenges in cross-project predictions and the importance of filtering methods | Improved prediction accuracy (exact % not specified)


COMPARISON TABLE:

Year | Author(s) | Proposed Algorithm | Proposed Work
2023 | F. Alghanim et al. | Deep Learning for Defect Density Prediction | Sparse data, deep learning
2023 | C. Tao et al. | Metamorphic Testing for AI Software | AI testing, metamorphic testing
2022 | Lee et al. | Harmony Search for SDP Optimization (HS-CSDT) | Holistic SDP optimization
2022 | Khan et al. | Opt-aiNet for Hyperparameter Optimization | Hyperparameter optimization
2023 | G. Siewruk et al. | Machine Learning for Vulnerability Classification | Vulnerability classification
COMPARISON TABLE:

Year | Author(s) | Proposed Algorithm | Proposed Work
2024 | Li et al. | Hierarchical Selection-Based Filter (HSBF) | Cross-project, data filtering
2023 | S. Alsaedi et al. | Ensemble Machine Learning for Bug Classification | NLP, bug classification
2024 | Afric et al. | RoBERTa for Issue Classification | NLP, issue classification
2023 | X. Xie et al. | METTLE for Unsupervised Learning Validation | Metamorphic testing, validation


GAP IDENTIFICATION

• Data sparsity in defect prediction models can lead to inaccurate predictions and inefficient testing processes.
• Integration of AI into software applications complicates quality assurance due to unpredictable behaviors.
• Fragmented approaches to defect prediction limit overall effectiveness by neglecting parameter optimization across the entire process.
• The evolving software landscape necessitates continuous research to address emerging technologies and context-aware systems in vulnerability management.
OBJECTIVE OF THE PROJECT

• Machine learning improves software defect prediction and handles data sparsity.
• AI-specific frameworks ensure reliable testing of AI-driven functionalities.
• Automated testing using large language models boosts efficiency and reliability.
• Ensemble machine learning enhances bug classification and software maintenance.


TOOLS

1. Machine Learning Algorithms:
   Support Vector Machines (SVM)
   Random Forest
2. Deep Learning Models:
   Convolutional Neural Networks (CNN)
   Long Short-Term Memory Networks (LSTM)
3. Natural Language Processing (NLP) Tools:
   RoBERTa for Bug and Issue Classification
4. Testing Methodologies:
   Metamorphic Testing for AI Software Validation (see the sketch after this list)
   Hierarchical Selection-Based Filtering (HSBF) for Cross-Project Defect Prediction
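To make the metamorphic-testing entry concrete, here is a minimal sketch of one metamorphic relation for an unsupervised model: permuting the order of the input samples should not change which samples end up clustered together. This illustrates the principle behind frameworks such as METTLE rather than reproducing them; scikit-learn's KMeans, the synthetic data, and the specific relation are assumptions made for the example.

```python
# Minimal metamorphic-testing sketch for an unsupervised model (illustrative).
# Relation checked: permuting the rows of the input must not change which
# samples share a cluster (memberships agree up to relabelling).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score

X, _ = make_blobs(n_samples=200, centers=3, cluster_std=0.6, random_state=0)

# Source test case: cluster the data as given.
source_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

# Follow-up test case: cluster a row-permuted copy of the same data.
perm = np.random.default_rng(0).permutation(len(X))
followup_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[perm])

# Map the follow-up labels back to the original row order and compare the two
# partitions; an adjusted Rand index near 1.0 means the relation holds.
score = adjusted_rand_score(source_labels, followup_labels[np.argsort(perm)])
print(f"Adjusted Rand index between source and follow-up runs: {score:.3f}")
assert score > 0.99, "Metamorphic relation violated: clustering changed under permutation"
```

A test suite in this style adds one relation per expected invariance (permutation, duplication, uniform scaling, and so on) and flags the model whenever a follow-up run breaks its relation.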


DATASET:
Source: NASA Metrics Data Program
Dataset Characteristics:
• Each record describes a single software module from a NASA project.
• Features are static code metrics such as McCabe cyclomatic complexity, Halstead measures, and lines-of-code counts.
• The target is a binary label indicating whether the module was reported as defective.
• The classes are imbalanced: defective modules are a minority, which should be accounted for during training and evaluation.

Link: https://www.kaggle.com/datasets/semustafacevik/software-defect-prediction
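A minimal training sketch for this dataset using the classifiers listed under Tools is shown below. The file name jm1.csv, the defects label column, and the clean-up steps are assumptions about the downloaded files; adjust them to whatever the actual CSVs contain.

```python
# Minimal sketch: training Random Forest and SVM defect predictors on a
# NASA-MDP-style CSV. File and column names are assumptions; adjust as needed.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

df = pd.read_csv("jm1.csv")  # assumed file from the Kaggle download

# The label may be stored as booleans or "true"/"false" strings; normalise to 0/1.
y = df["defects"].astype(str).str.lower().isin(["true", "1"]).astype(int)

# Features are static code metrics (McCabe, Halstead, LOC, ...); coerce any
# stray non-numeric markers and fill gaps with column medians.
X = df.drop(columns=["defects"]).apply(pd.to_numeric, errors="coerce")
X = X.fillna(X.median())

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

models = {
    "Random Forest": RandomForestClassifier(n_estimators=200, random_state=42),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf", class_weight="balanced")),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, model.predict(X_test)))
```

Because defective modules are the minority class, the per-class precision and recall reported above are more informative than raw accuracy.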



ANTICIPATED OUTCOME:



PRESENTATION CERTIFICATE_IEEE CONFERENCE



REFERENCES
1. Alghanim, F., Alsaedi, S. A., & Others. (2023). A deep learning approach for defect density prediction in software
systems. Journal of Software Engineering Research and Development, 11(2), 45-62.
2. Tao, C., Zhang, L., & Others. (2023). Quality validation frameworks for AI-driven software applications. International
Journal of Software Engineering and Applications, 14(1), 23-37.
3. Lee, Y., Kim, H., & Others. (2022). Holistic parameter optimization for software defect prediction using harmony
search. Software: Practice and Experience, 52(4), 865-883.
4. Khan, M. S., Sadiq, A., & Others. (2022). Hyper-parameter optimization in software bug prediction using Artificial
Immune Networks. Journal of Systems and Software, 181, 111021.
5. Li, J., Zhang, Y., & Others. (2024). Data filtering strategies for cross-project defect prediction. Empirical Software
Engineering, 29(1), 45-73.
6. Abed Alsaedi, S., Sadiq, A., & Others. (2023). Automated classification of bug reports using ensemble machine
learning. Journal of Software Maintenance and Evolution, 35(2), e2365.
7. Afric, K., Zhang, X., & Others. (2024). Impact of issue classification on software defect prediction datasets. ACM
Transactions on Software Engineering and Methodology, 33(1), 1-29.
8. Xie, X., Liu, Y., & Others. (2023). METTLE: A framework for validating unsupervised learning systems using
metamorphic testing. IEEE Transactions on Software Engineering, 49(3), 567-584.
9. Siewruk, G., & Mazurczyk, W. (2023). Mixeway: A context-aware system for automating vulnerability classification in
large-scale networks. Journal of Network and Computer Applications, 215, 102728.
10. Afeltra, A., Lanza, M., & Others. (2023). Exploring flaky tests in cross-project scenarios. IEEE Transactions on Reliability, 72(2), 345-362.
