
Volume 8, Issue 4, April – 2023 International Journal of Innovative Science and Research Technology

ISSN No:-2456-2165

A Comprehensive Review on Test Case Prioritization in Continuous Integration Platforms

R. Shankar
Assistant Professor
Department of ICT and Cognitive Systems
Sri Krishna Arts and Science College
Coimbatore, Tamil Nadu, India

Dr. D. Sridhar
Assistant Professor
Department of Computer Science
Sri Krishna Adithya College of Arts and Science
Coimbatore, Tamil Nadu, India

Abstract:- Continuous Integration (CI) platforms enable frequent integration of software changes, making software development faster and more cost-effective. In these platforms, integration and regression testing play an essential role, and Test Case Prioritization (TCP) is used to determine a test case order that improves specific objectives such as early failure detection. Recently, Artificial Intelligence (AI) models have been widely adopted to solve complex software testing problems such as integration and regression testing, which generate large amounts of data from iterative code commits and test executions. In CI testing scenarios, AI models comprising machine and deep learning predictors can be trained on large volumes of test data to predict failing test cases and speed up the discovery of regression faults during code integration. However, the effectiveness of these models varies with the context and factors of CI testing, such as varying time budgets or the amount of test execution history used to prioritize failing test cases. Earlier research on AI-based TCP rarely examines these variables, which are crucial for CI testing. In this article, a comprehensive review of different TCP models using deep learning algorithms, including Reinforcement Learning (RL), is presented, with a focus on the software testing field. The merits and demerits of these models are also examined to understand the challenges of TCP in CI testing. Based on the observed challenges, possible solutions are suggested to enhance the accuracy and stability of deep learning models for TCP in CI testing.

Keywords:- Software Testing, Continuous Integration Testing, Regression Testing, Test Case Prioritization, Artificial Intelligence, Deep Learning, Reinforcement Learning.

I. INTRODUCTION

Agile adoption by software firms is increasing the popularity of continuous integration (CI) solutions [1]. CI platforms make it possible to incorporate software upgrades more frequently, which leads to quicker and more affordable software testing [2]. They automate functions including build, test execution, and test results analysis to address problems and identify faults. Regression testing is a process that takes up a significant amount of time in a CI cycle (also known as a build) [3]. Minimization, selection, and prioritization are the three categories of techniques that support regression testing [4-5]. Typically, the test case minimization method reduces the test set depending on specific conditions and eliminates redundant test cases. Test case selection methods choose a subset of test cases, keeping only the most crucial ones for software testing. Test Case Prioritization (TCP) approaches attempt to rearrange a test suite in order to determine the ideal arrangement of test cases, which improves some goals, such as early failure detection. A broad comparison of these three regression testing techniques is shown in Table 1.

TCP methods, one of the three types of approaches, are the most well-known in the field and the focus of this study. To deal with the test cost, insufficient resources, and limitations of CI platforms, most traditional TCP approaches require adjustments [6]. For instance, due to time constraints and test cost for a design, the adoption of search-based approaches or others that need extensive code analysis and coverage may be unfeasible.

Table 1. Comparison of Different Regression Test Methods in CI Platform


Component | Minimization | Selection | Prioritization
Plan | Remove test cases. | Select change-aware test cases. | Permute test cases by ordering and prioritizing.
Benefit | Useful in decreasing test cases. | Useful in choosing change-aware test cases. | Suitable when new test cases can often be considered in the test case permutation.
Drawback | Test cases are not change-aware. | New test cases might be missed by the temporary, change-aware selection. | Time-consuming for a large test suite.



A. CONTINUOUS INTEGRATION PLATFORMS
In former times, a small number of developers would work independently for an extended period of time and would only integrate their adjustments to the master unit once they were done. However, this method includes drawbacks such as a significant time commitment, unneeded administrative expenses for the projects, and faults that go undetected over extended periods of time. The quick delivery of updates to customers was delayed by such factors.

Continuous software engineering practices like CI, Continuous Deployment (CD), and Continuous Delivery (CDE) have become popular and accepted by many industries [7] as a result of the growth of the agile development model. These practices allow for regular integration, testing, deployment, and quick client feedback in a very short cycle. Figure 1 shows how these practices are related to one another.

Fig.1 Overview of Association between CI, CD, and CDE Practices (Source: [3])

Before CD and CDE, CI is a vital practice to implement. CI platforms include automated software design and testing, helping engineering teams scale up personnel and distribute results while also enabling software designers to work independently on feature sets concurrently. They can accomplish this independently and quickly if they are willing to incorporate such traits into the final result. Buildbot [8], GoCD [9], Integrity [10], Jenkins [11], and Travis CI [12] are the most well-known public CI servers.

CDE focuses on packaging an artifact (i.e., the application's construction-ready state) for distribution for acceptance testing. The artifact must be ready to be provided to end-clients at any time. CD, on the other hand, can autonomously package, start, and deliver the software artifact to the build site. Using CI, the artifact used in CD and CDE is transported to the integration stage [13].

To evaluate and release new software updates quickly and affordably and to improve error identification and software performance, integration and regression testing are critical tasks in CI platforms. This is because CI allows for quick test feedback, which results in test cycles being time-constrained [14]. The time costs might vary from cycle to cycle, and they include time for selecting critical tests to run, running the tests, and reporting test results to designers.

Therefore, using traditional TCP techniques in the CI platform requires certain modifications. The methods must take into account specific aspects of CI platforms, such as parallel test case execution and resource distribution, the unpredictable nature of test cases, which can be added and removed in subsequent commits, and the discovery of numerous errors as quickly as possible.

B. Test Case Prioritization
The TCP problem is described as follows, per Rothermel et al. [15]: given a test suite T, the collection of all possible prioritizations (orderings) of T, denoted PT, and a function f that measures the effectiveness of a certain prioritization by mapping PT to real numbers, find

T′ ∈ PT such that (∀T″ ∈ PT)(T″ ≠ T′)[f(T′) ≥ f(T″)]

The goal of a TCP problem is to find the best T′ possible while achieving specific goals. Due to insufficient resources, a full regression test suite cannot be executed in a regression testing scenario. TCP approaches can be time-consuming and, in some circumstances, retain every test case; the fact that coverage is thereby maintained, however, also makes this attribute advantageous. Additionally, TCP permits failing test cases to be executed first despite the time and cost constraints of the test activity that delay the execution of each test case. This ensures that the greatest possible fault coverage is established while using fewer resources and lowering test costs.
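To make this definition concrete, the following minimal Python sketch (an illustration added here, not an artifact of the reviewed studies) instantiates f as the widely used APFD (Average Percentage of Faults Detected) metric and searches the prioritization space PT exhaustively, which is only feasible for very small suites; the test names and fault data are assumed examples.

```python
# Minimal sketch: Rothermel et al.'s TCP definition with f instantiated as APFD.
# Test identifiers and the fault-detection matrix below are illustrative assumptions.
from itertools import permutations

def apfd(ordering, faults_detected_by):
    """APFD = 1 - (sum of first-detection positions) / (n * m) + 1 / (2n)."""
    n = len(ordering)
    m = len(faults_detected_by)
    first_positions = []
    for fault, revealing_tests in faults_detected_by.items():
        # 1-based position of the first test in the ordering that reveals this fault.
        pos = min(ordering.index(t) + 1 for t in revealing_tests if t in ordering)
        first_positions.append(pos)
    return 1 - sum(first_positions) / (n * m) + 1 / (2 * n)

def best_prioritization(test_suite, faults_detected_by):
    """Exhaustively search PT for the ordering T' maximizing f, as in the definition above."""
    return max(permutations(test_suite), key=lambda t: apfd(list(t), faults_detected_by))

# Usage: three tests, two historically known faults.
suite = ["t1", "t2", "t3"]
faults = {"f1": ["t3"], "f2": ["t2", "t3"]}
print(best_prioritization(suite, faults))  # places t3 first, since it reveals both faults
```

In practice the permutation space is far too large to enumerate, which is why the heuristic and learning-based approaches surveyed below approximate this objective instead.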



The following are a few objectives of TCP techniques, according to Rothermel et al. [15]: (1) raising the test suite's error recognition rate even before regression test execution begins, (2) raising the code coverage of the system under test, (3) raising the rate at which high-risk errors are identified, and (4) raising the likelihood that errors connected to specific code modifications will be discovered. Several TCP approaches have been developed in the literature to achieve these goals.

Such approaches are divided into cost-aware, coverage-based, distribution-based, history-based, requirement-based, model-based, AI-based, and search-based categories based on the data used in the TCP (as shown in Table 2) [16].

Table 2. Different Categories of TCP Methods


Category | Description
Cost-aware | It prioritizes test cases depending on the test case costs, since their costs are not equal.
Coverage-based | It prioritizes test cases depending on the code coverage.
Distribution-based | It prioritizes test cases depending on the test case profile distribution.
History-based | It prioritizes test cases depending on test case execution history data and code modifications.
Requirement-based | It prioritizes test cases depending on data extracted from requirements.
Model-based | It prioritizes test cases depending on data extracted from models like UML graphs.
AI-based | It prioritizes test cases depending on AI models such as machine learning, deep learning, and probabilistic theories.
Search-based | It prioritizes test cases depending on multiple objectives of TCP or test case execution.
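As an illustration of the coverage-based category in Table 2, the sketch below implements the classic "additional greedy" strategy, which repeatedly picks the test covering the most not-yet-covered code elements. The coverage data is a made-up example, and the snippet is an illustration added for this review rather than an algorithm from the cited survey [16].

```python
# Illustrative "additional greedy" coverage-based prioritization (assumed example data).
def additional_greedy_prioritize(coverage):
    """Repeatedly pick the test that adds the most not-yet-covered code elements."""
    remaining = dict(coverage)          # test -> set of covered statements/branches
    covered, order = set(), []
    while remaining:
        best = max(remaining, key=lambda t: len(remaining[t] - covered))
        if not remaining[best] - covered:
            # No test adds new coverage: append the rest in any order and stop.
            order.extend(remaining)
            break
        order.append(best)
        covered |= remaining.pop(best)
    return order

coverage = {
    "t1": {"s1", "s2"},
    "t2": {"s2", "s3", "s4"},
    "t3": {"s4", "s5"},
}
print(additional_greedy_prioritize(coverage))  # ['t2', 't1', 't3']
```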

The exploration of AI models incorporating machine learning, deep learning, reinforcement learning, and probabilistic theories, along with search-based algorithms, is a recent trend. Utilizing deep learning and reinforcement learning models for TCP in the CI platform has the potential to be a promising solution.

The deep reinforcement learning-based TCP models used in the CI platform for software testing are covered in depth in this paper. The advantages and disadvantages of the various models are also examined in order to identify open issues and offer viable options for enhancing TCP in the CI platform. The remainder of the paper is organized as follows: the various AI models for TCP in the CI platform are reviewed in Section II, and the study's conclusion and recommendations for further research are presented in Section III.

II. SURVEY ON TEST CASE PRIORITIZATION IN CONTINUOUS INTEGRATION PLATFORM USING ARTIFICIAL INTELLIGENCE MODELS

In order to automatically learn test case prioritization and selection in the CI platform, Spieker et al. [17] presented a novel approach called Retecs. This shortens the time between code pushes and developer feedback on failed test cases. Based on their execution time, previous executions, and failure histories, test cases in Retecs were chosen and prioritized using RL. Additionally, Retecs was trained to recognize earlier CI cycles and rank error-prone test cases according to reward value.

An improved regression testing technique called CTFF was created by Ali et al. [18] for CI and agile software development. Initially, test cases that frequently change were grouped together and given a priority. In the event of a tie, test cases were ranked based on their corresponding failure frequencies and coverage requirements. Then, from all clusters, test cases with a greater frequency of failure or coverage requirements were picked for execution.

To independently forecast the initial build in a string of build failures as well as the residual build failures, Jin & Servant [19] created a system called SmartBuildSkip. With the aid of this approach, developers were given the freedom to decide how much they wanted to sacrifice by skipping numerous builds or waiting until a build had failed. Based on the automated build-result prediction, it can lower the cost of CI.

An approach using Combinatorial VOlatiLE Multi-Armed Bandit (COLEMAN) for TCP in the CI platform was presented by Lima & Vergilio [20]. Using historical test case failure data and RL, the MAB was merged with combinatorial and volatile elements to adaptively discover a sufficiently prioritized test suite for all CI cycles. To prioritize test cases and improve software security, Shi et al. [21] created a new ReLU (Rectified Linear Unit)-weighted Historical Execution (RHE) data reward function. Many historical execution outcomes received weighted rewards based on previous execution data with varied lengths; hence, weighted reward functions with multiple history lengths were established.
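The following simplified sketch illustrates the general idea behind history-based rewards in RL prioritizers such as Retecs [17] and the weighted historical execution reward of [21]. The decay weights, scoring rule, and reward shape are assumptions made for illustration and do not reproduce the exact formulations of those papers.

```python
# Highly simplified sketch (assumptions, not the exact formulations of [17] or [21]):
# each test is scored from its recent verdict history, and the prioritizer is rewarded
# after a CI cycle according to how early the failing tests were placed.
import random

def score(history, weights=(0.6, 0.3, 0.1)):
    """Priority score from the most recent verdicts (1 = failed, 0 = passed), newest first.
    The decaying weights stand in for a learned value function."""
    recent = history[-len(weights):][::-1]
    return sum(w * v for w, v in zip(weights, recent))

def prioritize(histories):
    """Order tests by descending score; ties are broken randomly."""
    return sorted(histories, key=lambda t: (score(histories[t]), random.random()), reverse=True)

def cycle_reward(order, verdicts):
    """Reward the cycle: failing tests ranked near the front earn more (normalized ranks)."""
    n = len(order)
    return sum((n - i) / n for i, t in enumerate(order) if verdicts[t] == 1)

histories = {"t1": [0, 0, 1], "t2": [1, 1, 0], "t3": [0, 0, 0]}
order = prioritize(histories)
print(order, cycle_reward(order, {"t1": 1, "t2": 0, "t3": 0}))  # t1 ranked first, reward 1.0
```

In the actual approaches, the scoring function is replaced by a learned agent (e.g., a neural network or tableau) that is updated from such rewards across CI cycles.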
In order to prioritize DNN testing based on a statistical view of how DNNs classify high-dimensional objects, Feng et al. [22] created a method called DeepGini. With this approach, the problem of computing misclassification probability was reduced to the problem of measuring set impurity. As a result, the tests that were presumably misclassified were quickly found, and the DNN's resilience in various classification tasks was increased.

By implementing a batch update method akin to Monte Carlo control for automatically ranking test cases, Rosenbauer et al. [23] enhanced the learning strategy of XCS as an RL model. Additionally, they investigated whether prioritized experience replay has similarly favorable impacts on XCS for the test prioritization problem as it does on neural networks.



A scalable strategy for CI and regression testing in IoT-based systems was created by Medhat et al. [24]. This model was built using IoT-related TCP and selection criteria. An optimized prioritized set of test cases was initially obtained using search-based approaches. In order to periodically ensure the overall dependability of IoT-based systems, the selection was then based on a prediction model for standard IoT devices trained using supervised deep learning algorithms.

For the purpose of prioritizing hybrid and consensus regression tests, Mondal & Nasre [25] presented the Hansie approach. TCP was represented as a rank aggregation problem from social choice theory. Priority-aware hybridization and priority-blind computation of a consensus ordering from different prioritizations were the two components of Hansie. It utilized normal windows in the absence of ties and irregular windows in the presence of ties to execute the combined test case orderings in parallel using multiple processes, leading to high test execution speed.

In order to increase the number of test failures found while reducing the number of tests, Nguyen & Le [26] created the RLTCP test prioritization technique. To represent the underlying association between test cases in user interface testing, a weighted coverage graph was constructed. The coverage graph and RL for TCP were integrated in RLTCP.

For the purpose of prioritizing regression tests in CI, Sharif et al. [27] created a time-effective deep learning-based regression system called DeepOrder. The DNN was trained as a regression approach using historical test data regarding the timing and status of test cases. As a result, the number of failed test cases was decreased and the priority of the test cases inside a particular test suite was determined. By incorporating the multidimensional properties of the Extended Finite State Machine (EFSM) under test, Huang et al. [28] created a new learn-to-rank technique for prioritizing test cases. In order to train the ranking model for TCP, the random forest approach included numerous heuristic prioritization techniques.

Two distinct approaches for TCP, the test suite-based dynamic sliding window and the individual test case-based dynamic sliding window, were presented by Yang et al. [29]. A fixed-size sliding window with a preset length of recent historical data was first implemented for all CI tests. Following that, dynamic sliding window techniques were created, where the window size continuously adapts across CI testing.
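The sketch below illustrates the sliding-window idea in a simplified form: each test is ranked by its failure rate over a window of recent CI cycles, and the window is enlarged when it contains too few failures. The doubling rule and thresholds are assumptions made for illustration and are not the exact algorithm of Yang et al. [29].

```python
# Minimal, assumption-laden illustration of dynamic sliding-window prioritization.
def windowed_failure_rate(history, window):
    """Failure rate of one test over its last `window` executions (1 = failed)."""
    recent = history[-window:]
    return sum(recent) / len(recent) if recent else 0.0

def dynamic_window_prioritize(histories, base_window=5, min_failures=1):
    """Enlarge each test's window until it holds at least `min_failures` failures
    (or the full history), then rank tests by windowed failure rate."""
    ranked = []
    for test, history in histories.items():
        window = base_window
        while sum(history[-window:]) < min_failures and window < len(history):
            window *= 2  # illustrative adaptation rule
        ranked.append((windowed_failure_rate(history, window), test))
    return [test for _, test in sorted(ranked, reverse=True)]

histories = {
    "t1": [1, 0, 0, 0, 0, 0, 0],   # failed long ago only
    "t2": [0, 0, 0, 0, 0, 1, 0],   # failed recently
    "t3": [0] * 7,                 # never failed
}
print(dynamic_window_prioritize(histories))  # ['t2', 't1', 't3']
```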
At a conceptual level, Yaraghi et al. [30] proposed a data model that gathers data sources and their linkages in a typical CI platform. Then, using this data model, a thorough set of features was developed, consisting of every feature previously used by related studies. Additionally, these attributes were used to appropriately prioritize test cases and train machine learning models.

A new black-box TCP (BTCP) model, LogTCP, with log preprocessing, log representation, and TCP modules was created by Chen et al. [31]. In the LogTCP framework, different log-based BTCP schemes were built by combining different log representation technologies with diverse prioritization techniques. Using various ranking algorithms, Bagherzadeh et al. [32] modeled the sequential interactions between the CI platform and the TCP agent as an RL problem. Additionally, the TCP strategy was automatically and continually learned using RL models, with the aim of getting as close to the optimal strategy as possible.

Table 3 compares all of the aforementioned AI models for TCP in CI platforms based on their benefits and drawbacks.

Table 3. Comparison of Different AI Models for TCP in CI Platforms

[17] Retecs
  Benefits: Time consumption was low since it did not need to execute computationally intensive tasks during prioritization.
  Drawbacks: Only a few metadata of a test case and its history were used.
  Dataset: Paint Control, IOF/ROL, and GSDTSR.
  Performance: -

[18] CTFF
  Benefits: It achieved a high error recognition rate and can detect more errors quickly.
  Drawbacks: The results were not statistically verified due to the use of already collected datasets.
  Dataset: Case A (CA), Case B (CB), and Case C (CC) datasets from already presented case studies.
  Performance: Precision: CA=0.92, CB=0.9, CC=0.92; Recall: CA=1, CB=0.99, CC=1; F-measure: CA=0.96, CB=0.93, CC=0.96.

[19] SmartBuildSkip
  Benefits: It can save cost in CI while maintaining most of its value, with the ability to modify its cost-value trade-off.
  Drawbacks: It may not be useful for software projects that cannot afford a single delay in observing failing builds.
  Dataset: TravisTorrent dataset.
  Performance: Recall=100%; Saved builds=89%; Saving efficiency=92%.

[20] COLEMAN
  Benefits: Better performance for early fault recognition.
  Drawbacks: It needs to learn the relationship between the number of test cases and the prioritization time to increase model scalability.
  Dataset: Paint Control, IOF/ROL, and GSDTSR.
  Performance: Prioritization period=0.0654 sec.

[21] RHE reward function
  Benefits: It can maximize the number of test cases that had already discovered faults within the available time.
  Drawbacks: The time spent executing functions based on the overall rewards was relatively longer than that consumed by the partial rewards.
  Dataset: Paint Control, IOF/ROL, and GSDTSR.
  Performance: Execution time=595000 sec.

[22] DeepGini
  Benefits: Highly beneficial for prioritizing massive DNN tests.
  Drawbacks: It needs more features to increase scalability.
  Dataset: MNIST, CIFAR-10, Fashion (Zalando's article images), and Street View House Numbers (SVHN).
  Performance: Mean accuracy=0.96.

[23] Learning strategy of XCS
  Benefits: It was a viable solution to the adaptive test case selection problem.
  Drawbacks: The sample efficiency and reproducibility needed to be improved further.
  Dataset: Paint Control, IOF/ROL, and GSDTSR.
  Performance: Estimated variances of the agents=0.019997.

[24] Scalable model using Long Short-Term Memory (LSTM) classifier
  Benefits: It can ensure the total reliability of IoT-based systems.
  Drawbacks: Advanced deep learning such as RL was needed to increase the accuracy.
  Dataset: IoT device connection efficiency dataset and MIoT dataset.
  Performance: Accuracy: Regression testing=90%; Integration testing=92%.

[25] Hansie
  Benefits: It achieved better performance without considering the test execution history from earlier code modifications.
  Drawbacks: It considered only independent unit test cases.
  Dataset: 12 real-time codes from the Software-artifact Infrastructure Repository and 8 public projects from GitHub.
  Performance: Effectiveness of change coverage=1.

[26] RLTCP
  Benefits: It can be resilient to drastic and structural changes in the test suite because of the weighted coverage graph.
  Drawbacks: The performance was influenced by the reward policy.
  Dataset: Spectrum_OR and Mattermost_OR.
  Performance: Mean ratio of test suite execution needed for 100% fault coverage=55.6%.

[27] DeepOrder
  Benefits: Better flexibility and time complexity to efficiently deal with large-scale datasets.
  Drawbacks: It needs more features of test cases or data, such as modifications in source code, to increase efficiency.
  Dataset: Cisco dataset, Paint Control, IOF/ROL, and GSDTSR.
  Performance: Average mean ratio of errors recognized=0.723.

[28] Learn-to-rank method
  Benefits: Efficient for prioritizing test cases.
  Drawbacks: The performance may affect the construct validity of the experiment because it does not consider the time cost of test case execution and the fault severity level. Also, it was time-consuming.
  Dataset: Monitor, INRES, Class II, OLSR, and SCP from EFSM.
  Performance: Average mean ratio of errors recognized=0.884; Total time cost=3.21.

[29] Test suite-based and individual test case-based dynamic sliding windows
  Benefits: It can effectively improve the prioritization effect of test cases.
  Drawbacks: The failure rate was low, resulting in many test executions without effective reward values, which leads to slow learning or complicated convergence.
  Dataset: 14 different industrial datasets.
  Performance: Average normalized mean ratio of errors recognized=37.45%; Mean recall=72.5%.

[30] Data model at a conceptual level
  Benefits: It can achieve promising results across most subjects.
  Drawbacks: High-quality datasets were required to evaluate TCP models.
  Dataset: Four publicly accessible datasets.
  Performance: Mean accuracy=83.975%; Average mean ratio of errors recognized=0.82.

[31] LogTCP
  Benefits: Effective for detecting faults.
  Drawbacks: More advanced techniques were needed to improve the efficiency of real-time TCP.
  Dataset: -
  Performance: Average mean ratio of errors recognized=0.7884.

[32] RL algorithms like pairwise ranking, ACER, and an actor-critic-based RL
  Benefits: It can achieve a significant accuracy enhancement for prioritizing test cases.
  Drawbacks: Hyperparameters of RL were not optimized. Only a limited number of reward functions were analyzed, which influences the RL efficiency.
  Dataset: Simple and enriched history datasets.
  Performance: Average mean ratio of errors recognized=0.79.

III. CONCLUSION

The numerous TCP approaches based on RL and deep learning in the CI platform were discussed in detail in this article. Additionally, the strengths and shortcomings of each model were highlighted along with its efficiency. This investigation revealed that the TCP problem requires the optimum integration of various data, which deep learning or RL enables. The TCP problem is a crucial topic in CI platforms, with their regular builds and regression testing activities, and RL-based or deep learning-based TCP models address many of its ongoing issues. In the future, more sophisticated deep learning and RL algorithms will need to be investigated in order to create highly reliable TCP models. Furthermore, because prior RL research mainly used execution history information for TCP, a more comprehensive feature set may be taken into consideration to further enhance RL performance.

REFERENCES

[1]. I. C. Donca, O. P. Stan, M. Misaros, D. Gota, and L. Miclea, Method for continuous integration and deployment using a pipeline generator for agile software projects. Sensors, 22(12), 1-18, 2022.
[2]. V. K. Makam, Continuous integration on cloud versus on premise: a review of integration tools. Advances in Computing, 10(1), 10-14, 2020.
[3]. M. Shahin, M. A. Babar, and L. Zhu, Continuous integration, delivery and deployment: a systematic review on approaches, tools, challenges and practices. IEEE Access, 5, 3909-3943, 2017.
[4]. D. Di Nardo, N. Alshahwan, L. Briand, and Y. Labiche, Coverage-based regression test case selection, minimization and prioritization: a case study on an industrial system. Software Testing, Verification and Reliability, 25(4), 371-396, 2015.
[5]. M. Khatibsyarbini, M. A. Isa, D. N. Jawawi, and R. Tumeng, Test case prioritization approaches in regression testing: a systematic literature review. Information and Software Technology, 93, 74-93, 2018.
[6]. R. Pan, M. Bagherzadeh, T. A. Ghaleb, and L. Briand, Test case selection and prioritization using machine learning: a systematic literature review. Empirical Software Engineering, 27(2), 1-34, 2022.
[7]. O. Cico, L. Jaccheri, A. Nguyen-Duc, and H. Zhang, Exploring the intersection between software industry and software engineering education - a systematic mapping of software engineering trends. Journal of Systems and Software, 172, 1-28, 2021.
[8]. "Buildbot Basics," Buildbot. [Online]. Available: https://www.buildbot.net/. [Accessed: 21-Apr-2023].
[9]. "Open Source Continuous Delivery and Release Automation Server," GoCD. [Online]. Available: https://www.gocd.org/. [Accessed: 21-Apr-2023].
[10]. Integrity. [Online]. Available: https://integrity.github.io/. [Accessed: 21-Apr-2023].
[11]. Jenkins. [Online]. Available: https://www.jenkins.io/. [Accessed: 21-Apr-2023].
[12]. "Home - Travis CI," Travis CI, 29-Nov-2022. [Online]. Available: https://www.travis-ci.com/. [Accessed: 21-Apr-2023].
[13]. S. Buchanan, J. Rangama, and N. Bellavance, CI/CD with Azure Kubernetes Service. Introducing Azure Kubernetes Service: A Practical Guide to Container Orchestration, 191-219, 2020.


[14]. E. A. Da Roza, J. A. P. Lima, R. C. Silva, and S. R. Vergilio, Machine learning regression techniques for test case prioritization in continuous integration environment. In IEEE International Conference on Software Analysis, Evolution and Reengineering, pp. 196-206, 2022.
[15]. G. Rothermel, R. H. Untch, C. Chu, and M. J. Harrold, Prioritizing test cases for regression testing. IEEE Transactions on Software Engineering, 27(10), 929-948, 2001.
[16]. R. Mukherjee and K. S. Patnaik, A survey on different approaches for software test case prioritization. Journal of King Saud University - Computer and Information Sciences, 33(9), 1041-1054, 2021.
[17]. H. Spieker, A. Gotlieb, D. Marijan, and M. Mossige, Reinforcement learning for automatic test case prioritization and selection in continuous integration. In ACM Proceedings of the 26th International Symposium on Software Testing and Analysis, pp. 12-22, 2017.
[18]. S. Ali, Y. Hafeez, S. Hussain, and S. Yang, Enhanced regression testing technique for agile software development and continuous integration strategies. Software Quality Journal, 28, 397-423, 2020.
[19]. X. Jin and F. Servant, A cost-efficient approach to building in continuous integration. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering, pp. 13-25, 2020.
[20]. J. A. P. Lima and S. R. Vergilio, A multi-armed bandit approach for test case prioritization in continuous integration environments. IEEE Transactions on Software Engineering, 48(2), 453-465, 2020.
[21]. T. Shi, L. Xiao, and K. Wu, Reinforcement learning based test case prioritization for enhancing the security of software. In IEEE 7th International Conference on Data Science and Advanced Analytics, pp. 663-672, 2020.
[22]. Y. Feng, Q. Shi, X. Gao, J. Wan, C. Fang, and Z. Chen, DeepGini: prioritizing massive tests to enhance the robustness of deep neural networks. In ACM Proceedings of the 29th International Symposium on Software Testing and Analysis, pp. 177-188, 2020.
[23]. L. Rosenbauer, A. Stein, R. Maier, D. Pätzel, and J. Hähner, XCS as a reinforcement learning approach to automatic test case prioritization. In Proceedings of the Genetic and Evolutionary Computation Conference Companion, pp. 1798-1806, 2020.
[24]. N. Medhat, S. M. Moussa, N. L. Badr, and M. F. Tolba, A framework for continuous regression and integration testing in IoT systems based on deep learning and search-based techniques. IEEE Access, 8, 215716-215726, 2020.
[25]. S. Mondal and R. Nasre, Hansie: Hybrid and consensus regression test prioritization. Journal of Systems and Software, 172, 1-42, 2021.
[26]. V. Nguyen and B. Le, RLTCP: a reinforcement learning approach to prioritizing automated user interface tests. Information and Software Technology, 136, 1-16, 2021.
[27]. A. Sharif, D. Marijan, and M. Liaaen, DeepOrder: Deep learning for test case prioritization in continuous integration testing. In IEEE International Conference on Software Maintenance and Evolution, pp. 525-534, 2021.
[28]. Y. Huang, T. Shu, and Z. Ding, A learn-to-rank method for model-based regression test case prioritization. IEEE Access, 9, 16365-16382, 2021.
[29]. Y. Yang, C. Pan, Z. Li, and R. Zhao, Adaptive reward computation in reinforcement learning-based continuous integration testing. IEEE Access, 9, 36674-36688, 2021.
[30]. A. S. Yaraghi, M. Bagherzadeh, N. Kahani, and L. Briand, Scalable and accurate test case prioritization in continuous integration contexts. IEEE Transactions on Software Engineering, 1-27, 2022.
[31]. Z. Chen, J. Chen, W. Wang, J. Zhou, M. Wang, X. Chen, and J. Wang, Exploring better black-box test case prioritization via log analysis. ACM Transactions on Software Engineering and Methodology, 1-33, 2022.
[32]. M. Bagherzadeh, N. Kahani, and L. Briand, Reinforcement learning for test case prioritization. IEEE Transactions on Software Engineering, 48(8), 2836-2856, 2022.

