
Computers in Biology and Medicine 179 (2024) 108803


Chaotic RIME optimization algorithm with adaptive mutualism for feature selection problems

Mahmoud Abdel-Salam a,*, Gang Hu b, Emre Çelik c, Farhad Soleimanian Gharehchopogh d, Ibrahim M. EL-Hasnony a

a Faculty of Computer and Information Science, Mansoura University, Mansoura, 35516, Egypt
b Department of Applied Mathematics, Xi’an University of Technology, Xi’an, 710054, PR China
c Department of Electrical and Electronics Engineering, Faculty of Engineering, Düzce University, Düzce, Turkey
d Department of Computer Engineering, Urmia Branch, Islamic Azad University, Urmia, Iran

ARTICLE INFO

Keywords: Optimization; Metaheuristics; RIME; Feature selection; Chaos theory; Wilcoxon test

ABSTRACT

The RIME optimization algorithm is a newly developed physics-based algorithm for solving optimization problems. RIME has performed well across various fields and domains, providing high-quality solutions. Nevertheless, like many swarm-based optimization algorithms, RIME suffers from several limitations: the balance between exploration and exploitation is poor, the likelihood of falling into local optima is high, and the convergence speed still needs improvement. Hence, there is room to enhance the search mechanism so that the search agents can discover new solutions. The authors propose an adaptive chaotic version of the RIME algorithm named ACRIME, which incorporates four main improvements: an intelligent population initialization using chaotic maps, a novel adaptive modified Symbiotic Organism Search (SOS) mutualism phase, a novel mixed mutation strategy, and a restart strategy. The main goal of these improvements is to increase population diversity, achieve a better balance between exploration and exploitation, and strengthen RIME’s local and global search abilities. The study assesses the effectiveness of ACRIME using the standard benchmark functions of the CEC2005 and CEC2019 suites. The proposed ACRIME is also applied as a feature selector to fourteen datasets to test its applicability to real-world problems. In addition, ACRIME is applied to a real COVID-19 classification problem to further test its applicability and performance. The suggested algorithm is compared to other sophisticated classical and advanced metaheuristics, and its performance is assessed using statistical tests such as the Wilcoxon rank-sum and Friedman rank tests. The study demonstrates that ACRIME is highly competitive and often outperforms the competing algorithms: it discovers the optimal subset of features, enhancing classification accuracy while minimizing the number of features employed. Overall, this study focuses on improving the equilibrium between exploration and exploitation and extending the scope of local search.

1. Introduction

There are many optimization-related problem-solving opportunities in scientific research and technical applications. In the past, when attempting to resolve optimization problems, individuals often relied on methods that needed accurate computations, such as the Lagrange multiplier, quasi-Newton, and Newton’s methods. Nevertheless, as the number of problem variables grows, the computational complexity of these algorithms progressively intensifies, notwithstanding their accomplishments. Therefore, this can lead to complex computing processes that require significantly more time to complete [1].

In addition, the complete information included in the feature space is often unknown in certain fields, such as mechanical engineering and machine learning. Hence, using an exact mathematical formulation to address optimization problems and obtain optimal solutions is a difficult challenge [2], one that arises from the frequent lack of sufficient data in such areas. Scientists have developed metaheuristic algorithms using several concepts to address this problem [3]. Stochastic

* Corresponding author.
E-mail address: [email protected] (M. Abdel-Salam).

https://doi.org/10.1016/j.compbiomed.2024.108803
Received 15 April 2024; Received in revised form 17 May 2024; Accepted 24 June 2024
Available online 1 July 2024
0010-4825/© 2024 Elsevier Ltd. All rights are reserved, including those for text and data mining, AI training, and similar technologies.

optimization approaches do not require earlier information and consider the issue as a black box, focusing solely on the output and input values as the parameters of interest. Upon completing the several stages of the designated optimization process, a feasible solution to the problem is achieved, although not always the globally optimal one. However, the approach can generate an optimal solution for different problems while keeping the computational burden at an acceptable level [4], which is a favorable consequence. In recent times, there has been significant interest in metaheuristic algorithms due to their simplicity, randomness, lack of reliance on gradients, and ability to avoid local optimum solutions. As a result, scholars have been working on creating more effective metaheuristics [5]. Currently, research on metaheuristic techniques mostly concentrates on two specific areas: the creation of novel algorithms and the enhancement of established algorithms already being utilized. There are generally three approaches to enhancing existing algorithms: combining many algorithms into one, modifying the search structure of a present algorithm, and introducing new operators to an existing algorithm.

Metaheuristic algorithms can be categorized into two main types based on their structure: population-based and trajectory-based. Trajectory-based algorithms enhance search efficiency by tracing the trajectory of a solitary solution. The proposed solution is modified through the use of randomness and guided by criteria such as the greedy principle, all within a limited number of repetitions; examples include Iterated Local Search (ILS) [6], Tabu Search (TS) [7], and Simulated Annealing (SA) [8]. The advantages of these algorithms include a low computing expense and a rapid convergence rate. Nevertheless, they experience a lack of progress in local solutions, particularly when dealing with high-dimensional optimization problems.

Conversely, population-based metaheuristic algorithms address the optimization problem by generating and preserving multiple potential solutions in each iteration. These algorithms utilize collaboration, interaction, and information sharing across these solutions to gradually enhance the quality of the final result. Within the domain of population-based algorithms, numerous renowned algorithms are present, such as Grey Wolf Optimizer (GWO) [9], Gravitational Search Algorithm (GSA) [10], Arithmetic Optimization Algorithm (AOA) [11], Gorilla Troops Optimizer (GTO) [12], Particle Swarm Optimization (PSO) [13], Seagull Optimization Algorithm (SOA) [14], Farmland Fertility Algorithm (FFA) [15], Sine-Cosine Algorithm (SCA) [16], Whale Optimization Algorithm (WOA) [17], Dung Beetle Optimizer (DBO) [18], African Vultures Optimization Algorithm (AVOA) [19], Golden Jackal Optimization (GJO) [20], and Mountain Gazelle Optimizer (MGO) [21].

Population-based algorithms enhance exploration by avoiding local solutions [22]. Nevertheless, they are more costly regarding computational resources and require the exchange of information among multiple solutions. Consequently, the likelihood of stagnating in local optima is reduced. Typically, employing numerous search agents simultaneously to locate the optimal solution is advantageous for overall performance. Furthermore, population-based algorithms can employ several stochastic processes, such as mutation [23], crossover [24], and selection [25], to enhance their ability to explore and exploit. Recently, metaheuristics that employ a population-based approach have gained significant popularity due to the advantages mentioned above.

In [26], the authors introduced RIME, which is derived from the natural occurrence of frost. The RIME algorithm possesses a sophisticated structural design and demonstrates strong optimization abilities, enabling the achievement of exceptional optimal solutions. Nevertheless, the algorithm experiences sluggish convergence when optimizing intricate real-world problems. Furthermore, in multi-modal problems, RIME often becomes stuck in local optima and encounters difficulties in identifying the global optimal solution.

Multiple works in the literature have demonstrated that RIME is a direct, flexible, and convenient algorithm, allowing it to effectively tackle a diverse array of problems. Due to these advantages, RIME has been effectively utilized in a wide range of optimization problems, including image segmentation [27], solar power parameter estimation [28], and engineering problems [29]. Despite the authors’ efforts to develop several revised versions of RIME to boost its efficiency in addressing global problems, the RIME algorithm remains relatively new and requires further improvements to optimize its performance. In addition, there is very little literature that applies RIME to the feature selection problem.

Hence, this study introduces four enhancements to RIME aimed at improving its capabilities in global optimization and feature selection applications: an intelligent population initialization using chaotic maps, a novel adaptive SOS mutualism phase, a new mixed mutation operator, and a restart strategy. The adaptive chaotic variant of RIME is referred to as ACRIME. By employing chaotic maps, RIME can enlarge its search space, enhancing population diversity during the initial phases and effectively mitigating the problems associated with the greedy technique that often leads to local optima. In addition, the mixed mutation promotes thorough investigation at the present location in RIME, preventing the occurrence of suboptimal solutions while exploring novel solution regions. Moreover, the proposed modification of the SOS mutualism phase enables the agents to improve the exploitation phase of RIME, hence preventing the occurrence of local optima. Furthermore, the restart tactic aids in the exploration of new search regions and also helps to avoid local optimal solutions. Integrating these techniques optimizes RIME’s capability in exploration and exploitation. To assess the effectiveness of ACRIME, this study conducted several experiments using the CEC2005 and CEC2019 benchmark tests. The results were compared with those obtained from other sophisticated algorithms, revealing notable benefits for ACRIME. To summarize, this paper’s main contributions include:

• An adaptive enhanced version of RIME, called ACRIME, is proposed.
• RIME is combined with chaotic population initialization, enhancing population diversity and hence the exploration of the early stages.
• A novel adaptive SOS mutualism phase enhances the exploitation ability and boosts the convergence speed.
• A mixed mutation strategy is combined with RIME to expand the local search and balance exploration and exploitation.
• RIME is combined with a restart strategy to enhance exploration and avoid local optimal solutions.
• The ACRIME algorithm was subjected to rigorous experimentation on CEC2005 and CEC2019 and compared to other advanced algorithms. The findings consistently demonstrated that ACRIME outperforms its counterparts.
• A binary adaptation of ACRIME is proposed for feature selection in classification tasks.

The remainder of this paper is structured as follows: Section 2 provides an overview of related work in the field, highlighting the application of metaheuristic algorithms to feature selection problems. Section 3 delves into the methodology, detailing the algorithms used. Section 4 details the proposed algorithm and the improvements applied. Section 5 presents the experimental results and analysis on a set of benchmark functions. Section 6 demonstrates the applicability of ACRIME to FS problems. Finally, Section 7 concludes the paper, summarizing the key contributions and outlining avenues for future research.

2. Related works

In this section, we review several related works applied to the FS problem domain. We describe both classical and advanced optimization algorithms that have been applied to the FS problem.

In [30], the authors proposed the dynamic butterfly optimization algorithm as an improved approach for addressing feature selection problems. The core BOA has undergone essential modifications. A local


search algorithm based on mutation has been implemented to address the issue of local optima. In addition, it has been employed to enhance the diversity of solutions generated by the BOA. A total of twenty benchmark datasets from the UCI library were used. The trials demonstrated that DBOA has superior performance compared to similar algorithms.

In [31], the authors proposed a new optimization algorithm for the feature selection problem. The authors combined the Particle Swarm Optimization algorithm with the Attack-Retreat-Surrender strategy to improve the behavior of BES and strengthen its exploitation-exploration phases. The proposed algorithm was tested and investigated using 27 different FS datasets and compared with ten widely used optimizers for FS problems. The authors showed the performance of their proposed FS algorithm and presented a promising new contribution to the literature.

In [32], the authors introduced a novel GJO-based FS algorithm for improving feature selection accuracy on datasets with a high number of dimensions. They introduced mGJO as a new variant by adding Copula entropy to enhance the initialization of the population in the GJO algorithm and achieve diverse populations. The authors suggested a suite of enhancements to enrich the exploration-exploitation phases of GJO toward the achievement of global optimal solutions. The mGJO algorithm was tested on fourteen different FS datasets and compared with classical optimization algorithms and FS algorithms to show the superiority of mGJO as a newly proposed FS solution.

In [33], the authors proposed a hybrid approach to optimize the feature selection (FS) problem and, consequently, improve classification accuracy. The authors combined the strengths of the SMA and GBO algorithms to create the hybrid algorithm GBOSMA. The GBOSMA algorithm was tested and evaluated using the CEC2017 benchmark functions and compared to eight other algorithms. Furthermore, the authors applied GBOSMA to FS problems using 16 benchmark datasets. The results showed that the classification accuracy achieved by GBOSMA outperformed that of its competitors, and the authors deduced that there is a statistically significant difference between GBOSMA and the other competing algorithms.

In [34], the authors presented a novel feature selection approach for finding the optimal subset of features. They presented an improved version of STOA incorporating three different improvements: a new self-adaptive control parameter, a novel mechanism for balancing exploration and exploitation, and a population reduction strategy. All these improvements enhanced the performance of the STOA algorithm. The authors measured the performance of their proposed approach, mSTOA, on the CEC2020 test suite and applied mSTOA to ten different feature selection datasets. They compared the performance of mSTOA with seven different optimizers, establishing the superiority of mSTOA over the other algorithms.

In [35], the authors proposed a wrapper-based feature selection approach with the aim of improving FS performance. The authors implemented three main enhancements to the BES algorithm to find the optimum feature subset for obtaining high accuracy: the integration of chaotic local search, the OBL strategy, and the phasor strategy to improve the global performance of BES. The proposed variant, mBES, was tested and validated on the CEC2017 and CEC2020 test suites to establish its performance and competence as a new global optimizer. In addition, the authors considered fifteen datasets to investigate the applicability of mBES in solving FS problems.

In [36], the authors proposed a new feature selection approach based on the novel Black Widow Optimization algorithm (BWO). They proposed a new variant of BWO called SDABWO that includes three basic development strategies. The first improvement strategy consists of a new method of spouse selection for the procreation phase in BWO. The second is a new approach for mutation with a better comprehensive search process of the search agent. The third is adapting and adjusting three primary parameters, which can improve the global exploration-exploitation balance. The proposed algorithm was tested by the authors on 25 benchmark functions, and the results were compared with ten different optimization algorithms. The proposed SDABWO was also applied to the feature selection problem on 12 benchmark datasets. The results indicated that the SDABWO algorithm is promising for the feature selection problem. However, it suffered from high computation time and still faced issues of getting into local optima.

In [37], the authors introduced a new feature selection (FS) method specifically designed for datasets with a high number of features. The new approach developed by the authors is called MGWO. It incorporates three key enhancements: the utilization of copula entropy and ReliefF as a filter-based step for the FS problem, the integration of DE to enhance GWO’s search capability in exploring new search regions, and the introduction of a novel position update strategy to improve both local and global search ability. The researchers assessed the performance of MGWO by testing it on ten diverse high-dimensional datasets and comparing it to six other optimization techniques. The authors’ results surpass those of their competitors, but there is still room for improvement in the classification accuracy of some datasets, particularly when the number of features is increased.

In [38], the authors utilized an unsupervised PSO technique called the filter-based bare-bone particle swarm optimization algorithm (FBPSO) to conduct a feature selection procedure. Two filter-based strategies were proposed to enhance the convergence of the algorithm. The first technique involved reducing the space by focusing on the average reciprocal content. The second technique employed a local filter to search for redundancy of features. The effectiveness and superiority of the provided FBPSO have been proven by experimental results on standard datasets.

In [39], the authors introduced an enhanced Meerkat Optimization Algorithm (MOA), which aims to address the feature selection (FS) problem by identifying the optimal collection of features that yields the highest accuracy. The authors combined the Periodic Mode Boundary Handling approach with a novel local search tactic to improve the Meerkat algorithm’s powers in exploiting and exploring. The performance of the new binary variant, IBMOA, was assessed by testing and evaluating it on twenty-one distinct benchmark FS datasets. The conducted studies demonstrated its superior performance compared to other algorithms in achieving a high accuracy percentage.

In [40], the authors introduced three versions of the Rat Swarm Optimizer (RSO) algorithm. The indigenous notions of PSO enhance the capacity for exploitation, while the application of crossover operators enhances the diversity of the algorithms. These versions achieved outstanding outcomes in comparison to 25 metaheuristic algorithms and five filter approaches. In Ref. [41], the authors incorporated S-, U-, and V-shaped transfer functions into the Horse Herd Optimization Algorithm (HOA) to transform the results into binary format. In addition, the algorithm’s efficiency is enhanced by combining three crossover operators. Among all the versions, the BHOA with an S-shape and one-point crossover yielded results that were comparable to the other approaches. In Ref. [42], the authors introduced a two-stage hybrid ant colony algorithm explicitly designed for high-dimensional feature selection. The algorithm employs an interval technique to ascertain the ideal subset size of features to be explored by the supplementary stage.

In [43], the authors introduced a hybrid model that combines an improved RIME algorithm and the fuzzy K-nearest neighbor (FKNN) technique for feature selection. Their variant presented the triangle game search technique, which enhances the algorithm’s ability to explore globally by carefully selecting different search agents throughout the exploration domain. This method promoted both competitive competition and collaborative synergy among various entities. Additionally, a random follower search method is implemented to impart a unique trajectory to the primary search agent, enhancing the range of search orientations.

In [44], the authors presented an approach, called HFSIA, with the purpose of improving the performance of the feature selection process to


achieve high accuracy in classification, which can be seen as a novel method for FS. They proposed a new mutation strategy to update the positions of the solutions in the population. They also included a new Cauchy operator with an adaptive regulator to improve global exploration and diversity. Evaluation of HFSIA was done by applying it to eleven different benchmark FS datasets, and comparisons were made with traditional FS approaches and other optimization algorithms. Although the provided results appeared optimistic, some issues in terms of computational time were pointed out, and further improvement in the exploration phase of HFSIA was suggested to increase classification accuracy.

Table 1 summarizes the previously mentioned literature, stating the methodology, advantages, and disadvantages of each approach to highlight the research gap addressed by the proposed algorithm. The feature selection problem is thus still an open issue that needs to be handled. In addition, the literature on the RIME optimization algorithm is still limited, and the presented works do not handle all of RIME’s related problems. Also, there is little work applying RIME to the feature selection problem, which motivates enhancing the RIME algorithm and adapting it to the FS problem. Therefore, the new, improved RIME algorithm is proposed to overcome most of the issues in the existing literature, including premature convergence, falling into local solutions, and a weak exploration-exploitation balance.

Table 1
Related work advantages and disadvantages.

Reference | Year | Methodology | Advantages | Disadvantages
Tubishat et al. [30] | 2020 | A dynamic butterfly optimization algorithm (BOA) based on a novel mutated local search strategy. | High accuracy; presented an adequate set of selected features; improved quality of solutions. | The exploitation ability of BOA lacks diversity; time consumption is high compared to other algorithms; convergence speed needs more improvement.
Kwakye et al. [31] | 2024 | Enhanced variant of the Bald Eagle Search algorithm (BES) based on combining PSO with the Attack-Retreat-Surrender strategy. | Enhanced exploration-exploitation balance; enhanced classification accuracy; enhanced solution quality. | Convergence speed on some datasets is slow at early iterations; time consumption is high; exploitation ability is low.
Askr et al. [32] | 2024 | An improved variant of the GJO optimizer based on Copula entropy and new update operators for the leader solutions. | Enhanced exploitation ability of GJO; enhanced classification accuracy; obtains the best feature set; applicable to high-dimensional datasets. | Convergence speed still has room for improvement; exploration ability is poor on some datasets; still stuck in local solutions on some datasets.
Ewees et al. [33] | 2023 | A hybrid FS approach based on the SMA and GBO optimizers according to the fitness value of each solution. | Provides good accuracy on some datasets and avoids local optima on low-dimensional datasets. | Exploration and exploitation ability need more improvement, especially for a high-dimensional number of features; convergence speed at early iterations is poor; performs badly compared with more advanced algorithms.
Houssein et al. [34] | 2023 | A novel variant of the STOA algorithm for the FS problem based on a self-adaptive control parameter, a novel mechanism for balancing exploration and exploitation, and a population reduction strategy. | Presents good exploration ability and avoids local optima on most datasets. | The exploitation ability is poor on some datasets when the number of features is increased; the exploration-exploitation balance needs more enhancement; time consumption is high compared with more advanced algorithms.
Chhabra et al. [35] | 2023 | A novel variant of the BES algorithm for FS based on chaotic local search, phasor operator, and OBL strategy. | Enhanced classification accuracy and enhanced exploration-exploitation balance. | The exploration phase needs more improvement when the number of features is increased; it does not handle large datasets well and has poor (continued on next page)

3. Background

This section provides the background needed to present the proposed algorithm. It introduces the original RIME algorithm and its phases and provides a brief overview. In addition, it describes the mutualism phase of SOS, which is modified and incorporated into RIME to enhance its search process.

3.1. RIME overview

The RIME optimization algorithm is a physics-based optimizer [26]. It is a novel and effective optimization technique that takes its motivation from the natural properties of ice. When faced with an optimization problem, it conceptualizes a potential solution as a rime agent positioned inside a vast search space, where the complete collection of possible solutions forms a rime population. For a population with N solutions, each with d dimensions, each solution is denoted as Xi,j = [xi,1, xi,2, …, xi,d]. Similar to other intelligent optimization methods, the initial population is created as the basis for the optimization process. Consequently, by simulating the growth processes of hard rime and soft rime, RIME incorporates a search strategy for soft rime and a mechanism for penetrating hard rime. This results in a complex interaction between exploitation and exploration within the optimization algorithm. The primary objective is to thoroughly investigate the range of possibilities and discover the most optimal solution, denoted as Xbest,j. Eqs. (1)-(5) give the formulation of the soft-rime search as follows:

Xi,j^new = Xbest,j + RimeFactor · (h · (ubj − lbj) + lbj),  a2 < E    (1)

RimeFactor = a1 · cos θ · β    (2)

θ = π · t / (10 · Iter_Max)    (3)

E = √(t / Iter_Max)    (4)

β = 1 − [b · t / Iter_Max] / b    (5)

where the best agent’s location in the jth dimension is denoted by Xbest,j, a1 is a random value belonging to [-1, 1], while a2 and h are two random


values that belong to the interval from 0 to 1. Xi,j^new is the next iteration’s location of the ith solution in the jth dimension of the optimization problem, and b is 5. The parameter E signifies the level of attachment, which has an impact on the probability of condensation for an agent and increases proportionally with the number of assessments performed. ubj and lbj are the search space boundaries of the optimization problem at dimension j. Also, t and Iter_Max are the present and total number of generations/iterations used, respectively. The formulation of the hard-rime phase is defined as follows:

Xi,j^new = Xbest,j,  a3 < Fnormr(Fiti)    (6)

where a3 represents a value generated randomly within the interval [-1, 1], and the normalized value of the fitness of the ith solution is referred to as Fnormr(Fiti). Algorithm 1 describes the main procedure of the original RIME optimization algorithm.

3.2. Symbiotic Organism Search’s mutualism phase

SOS is a well-known metaheuristic technique inspired by the diverse symbiotic interactions observed in real ecosystems [45]. The common symbiotic relationships found in natural environments include mutualism, commensalism, parasitism, and predation. This paper focuses on a modified adaptive form of the SOS algorithm that deals specifically with the mutualism phase; therefore, we only discuss the mutualism phase of SOS to provide the readers with the relevant information. Symbiotic mutualism is the state in which two species rely on each other for survival, and both organisms derive benefits from their relationship. A mutualistic relationship can be seen in the symbiotic contact between bees and flowers: bees traverse flowers and collect nectar to procure sustenance, while, conversely, bees perform pollination, which is advantageous for the flowers. The mathematical formulations of the mutualistic relationship can be represented as:

Xi,j^(t+1) = Xi,j^t + rndm · (Xbest,j − MVj · BF1)    (7)

Xz,j^(t+1) = Xz,j^t + rndm · (Xbest,j − MVj · BF2)    (8)

where Xi,j^(t+1) is the newly updated position of agent i in dimension j, and Xz,j is a randomly selected agent from the population that cooperates with the current agent i through mutualism. Also, rndm is a random number uniformly distributed in the range 0-1, and Xbest,j represents the best agent in the population at iteration t. The mutual vector and benefit factors are defined by MVj and BF, respectively, which are modeled as follows:

MVj = (Xi,j + Xz,j) / 2    (9)

Table 1 (continued)

Reference | Year | Methodology | Advantages | Disadvantages
Chhabra et al. [35] (cont.) | | | | exploitation ability for some datasets.
Hu et al. [36] | 2022 | A novel FS approach based on the BWO algorithm in which a new approach for spouse selection, a novel mutation approach, and adaptive parameters are incorporated. | Enhanced the exploitation ability of BWO and the classification accuracy, especially with large datasets. | The exploration-exploitation balance for some datasets is not good; the exploration ability for many datasets is poor; the convergence speed at early iterations is not adequate.
Pan et al. [37] | 2023 | A novel FS approach using enhanced GWO with ReliefF and copula entropy approaches, besides the integration of DE into the basic operation of GWO. | Enhanced the searchability of GWO; enhanced exploration ability for many datasets; enhanced solution quality of the obtained features. | The exploration-exploitation balance is poor for some datasets and falls into local optima for some datasets.
Zhang et al. [38] | 2019 | A novel filter-wrapper approach for FS based on the PSO algorithm. | Enhanced solution quality and enhanced classification accuracy. | The exploration-exploitation balance needs more improvement, as the convergence speed is slow at early iterations on many datasets.
Hussien et al. [39] | 2024 | A novel PSO FS approach developed based on a local search and the Periodic Mode Boundary Handling approach. | Enhanced the classification accuracy, enhanced the obtained feature subset, and avoided local solutions for many datasets. | The exploitation ability and diversity are a challenge, and the exploitation ability failed for many datasets.
Awadallah et al. [40] | 2022 | An enhanced FS approach based on the Rat Swarm Optimizer (RSO) and different crossover operators. | Enhanced exploitation ability and solution quality. | Poor exploration on some datasets and falls into local optima on high-dimensional datasets.
Awadallah et al. [41] | 2022 | An FS-based approach using the Horse Herd Optimizer (HOA) and three crossover operators. | Enhanced exploitation ability and enhanced classification accuracy. | High time consumption and poor exploration-exploitation balance on many datasets; population diversity is low on many datasets.
Ma et al. [42] | 2021 | A novel ant colony wrapper-based FS approach. | Enhanced exploitation ability for high-dimensional datasets and enhanced classification accuracy. | Poor exploration-exploitation balance; population diversity and stability need more improvements.
Yu et al. [43] | 2023 | A novel RIME optimizer based on FKNN. | Enhanced the exploration ability and the … | The exploitation ability is poor in some cases; falls into local optima.
Zhu et al. [44] | 2023 | A new FS approach based on an Artificial Immune Optimization Algorithm utilizing new mutation and Cauchy operators. | Enhanced classification accuracy, enhanced exploration, and global search ability. | Low convergence speed and falls into local optima for some datasets.
on FKNN. ability and the some cases; the MVj = Xi,j + Xz,j 2 (9)
solution quality. convergence
speed is slow and
BF = round (1 + rndm) (10)
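As an illustration only (this is a minimal NumPy sketch, not the authors' code; the population shape, index choices, and random generator are assumptions), one mutualism interaction of Eqs. (7)-(10) can be written as:

```python
import numpy as np

def mutualism_step(X, i, z, best, rng):
    """One SOS mutualism interaction between agents i and z (Eqs. (7)-(10)).

    X    : (N, d) population matrix
    best : (d,) best agent found so far
    Returns candidate positions for agents i and z (greedy acceptance
    against their current fitness would happen elsewhere).
    """
    rndm = rng.random(X.shape[1])            # uniform random in [0, 1) per dimension
    mv = (X[i] + X[z]) / 2.0                 # mutual vector, Eq. (9)
    bf1 = rng.integers(1, 3)                 # benefit factor, Eq. (10): round(1 + rndm) -> 1 or 2
    bf2 = rng.integers(1, 3)
    xi_new = X[i] + rndm * (best - mv * bf1)   # Eq. (7)
    xz_new = X[z] + rndm * (best - mv * bf2)   # Eq. (8)
    return xi_new, xz_new

rng = np.random.default_rng(0)
X = rng.uniform(-5, 5, size=(6, 3))
xi_new, xz_new = mutualism_step(X, i=0, z=4, best=X[2], rng=rng)
print(xi_new.shape, xz_new.shape)  # (3,) (3,)
```

Note how each benefit factor is drawn independently as either 1 or 2, which is exactly the all-or-nothing behavior that the adaptive variant in Section 4.2 later replaces.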

5
M. Abdel-Salam et al. Computers in Biology and Medicine 179 (2024) 108803

4. The proposed ACRIME

This section introduces the four main improvements integrated into the RIME algorithm. The following subsections explain each improvement in detail, including the chaotic population initialization, the modified adaptive mutualism phase, the novel mixed mutation strategy, and the restart strategy. The main phases, along with the proposed enhancements, are shown in Fig. 1. To begin, chaotic sequence maps have been integrated to diversify the population initialization process of the RIME algorithm. This inclusion diversifies the initial population, thereby improving the exploration phase's ability to search for the global optimum. Next, the adaptive enhanced mutualism strategy improves the searchability of the RIME algorithm, helps to avoid falling into local optima, and enhances the exploration ability of the RIME algorithm. Furthermore, the mixed mutation mechanism has been implemented to introduce mutations and enhance the top individuals selected from the population. Lastly, a restart strategy has been introduced to modify the least performing individuals and counteract the tendency to converge prematurely into local optima.

4.1. Chaotic strategy

Most optimization algorithms initialize their population randomly, which affects the exploration phase. Random initialization limits population diversity, leaving many regions of the search space unreached. Therefore, the initial population diversity is an optimization challenge. In recent times, the utilization of chaotic sequences in intelligence algorithms for various optimization applications has gained significant attention [46,47], and chaotic sequences have been explored to enhance population diversity and prevent premature convergence in different fields. This paper employs chaotic sequences as a means to enhance the population variety of the RIME algorithm. Various chaotic models, including the Logistic map, Tent map, Cubic map, and Kent map, can be utilized to generate chaotic sequences. The authors in Ref. [48] provide evidence that the Cubic map exhibits superior uniformity compared to alternative mapping techniques. Hence, cubic-map chaotic sequences are employed for the formation of the RIME population. The population diversity of RIME is enhanced through the ergodicity and the initial-value sensitivity of chaotic maps. The mathematical formulation is as follows:

X_{i,j} = lb_j + (ub_j − lb_j) × (s_{i,j} + 1) / 2   (11)

s_{i+1,j} = 4 s_{i,j}^3 − 3 s_{i,j},  −1 < s_{i,j} < 1, s_{i,j} ≠ 0, i = 0, 1, …, N   (12)

where lb_j and ub_j represent the search space limits for an optimization problem with N individuals/solutions, and X_{i,j} is the ith solution in dimension j. For a d-dimensional optimization problem, a d-dimensional vector is first formed with values in [−1, 1] randomly assigned in each dimension. Subsequently, Eq. (12) is used to iterate each dimension of this initial operator to derive the remaining (N − 1) operators, and Eq. (11) maps the values produced by the cubic map into the search space. The utilization of the chaotic cubic map in the population initialization of RIME enhances the exploration phase of ACRIME and helps the algorithm discover new regions of the search space at the initialization phase, which provides more diversity in the population and reduces the probability of agents falling into local optima.

4.2. Enhanced adaptive mutualism phase

The formulas representing the mutualism phase can be found in Eqs. (7) and (8). The mutualism phase of the SOS algorithm allows candidates to be more effectively exploited by utilizing the optimal solution from the search space [49]. Consequently, it exhibits an inherent inclination to gravitate towards the most optimal solution it has found so far,
which can result in becoming trapped in local optima. One drawback of
this phase is the selection of benefit factors (BFs). The BFs’ values are
selected randomly from either 1 or 2. Organisms in an interaction can
either receive no benefit or derive full benefit. However, it is practically impossible for them to experience either extreme; instead, the advantages they obtain lie somewhere in between. Therefore, in this paper, we have implemented two adjustments in the mutualism phase. First, based on a randomly generated number q, the two chosen organisms either imitate the average of the top two best organisms (Xα = (Xbest1 + Xbest2)/2) or imitate a random organism Xr from the population. This approach reduces the likelihood of the algorithm becoming trapped in local optima. If the value of q is less than 0.5, the organisms adhere to Eq. (13); otherwise, they update themselves according to Eq. (14). Furthermore, the BFs are adaptively adjusted within the range of 1–2 to enhance the system's dependability. The mathematical expression for the modified mutualism phase is provided below:
X_{i,j}^{t+1} = X_{i,j}^{t} + rndm × (X_{α,j} − MV_j × BF1)
X_{z,j}^{t+1} = X_{z,j}^{t} + rndm × (X_{α,j} − MV_j × BF2)    when q < 0.5   (13)

X_{i,j}^{t+1} = X_{i,j}^{t} + rndm × (X_{r,j} − MV_j × BF1)
X_{z,j}^{t+1} = X_{z,j}^{t} + rndm × (X_{r,j} − MV_j × BF2)    when q ≥ 0.5   (14)

MV_j = (X_{i,j} + X_{z,j}) / 2   (15)

BF1 = 1 + rand[0, 1]
BF2 = 3 − BF1   (16)
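As an illustration, the two population-level components introduced so far, the cubic-map initialization of Eqs. (11)-(12) and the adaptive mutualism update of Eqs. (13)-(16), might be sketched as follows. This is a minimal NumPy sketch with assumed shapes and helper names, not the authors' implementation:

```python
import numpy as np

def cubic_map_init(N, d, lb, ub, rng):
    """Chaotic population initialization (Eqs. (11)-(12)).

    A chaotic vector s in (-1, 1), s != 0, is iterated with the cubic map
    s <- 4*s**3 - 3*s and rescaled into the search bounds [lb, ub]."""
    s = rng.uniform(-1.0, 1.0, size=d)
    s[s == 0.0] = 0.1                              # the map requires s != 0
    X = np.empty((N, d))
    for i in range(N):
        X[i] = lb + (ub - lb) * (s + 1.0) / 2.0    # Eq. (11): scale (-1, 1) into [lb, ub]
        s = 4.0 * s**3 - 3.0 * s                   # Eq. (12): next chaotic state
    return X

def adaptive_mutualism(X, i, z, best1, best2, rng):
    """Enhanced adaptive mutualism step (Eqs. (13)-(16)) for agents i and z."""
    rndm = rng.random(X.shape[1])
    mv = (X[i] + X[z]) / 2.0                       # mutual vector, Eq. (15)
    bf1 = 1.0 + rng.random()                       # BF1 in [1, 2], Eq. (16)
    bf2 = 3.0 - bf1                                # BF2 = 3 - BF1
    if rng.random() < 0.5:                         # q < 0.5: follow mean of the two best
        guide = (best1 + best2) / 2.0              # X_alpha, Eq. (13)
    else:                                          # q >= 0.5: follow a random organism
        guide = X[rng.integers(X.shape[0])]        # X_r, Eq. (14)
    xi_new = X[i] + rndm * (guide - mv * bf1)
    xz_new = X[z] + rndm * (guide - mv * bf2)
    return xi_new, xz_new

rng = np.random.default_rng(7)
X = cubic_map_init(N=10, d=4, lb=-100.0, ub=100.0, rng=rng)
xi_new, xz_new = adaptive_mutualism(X, 0, 5, best1=X[1], best2=X[2], rng=rng)
print(X.shape, xi_new.shape)  # (10, 4) (4,)
```

Note that BF1 + BF2 = 3 always holds, so the two partners share a fixed total benefit in every interaction instead of the discrete {1, 2} factors of the original SOS.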
Fig. 1. The proposed ACRIME algorithm's flowchart.

Hence, the selection of either the average of the top two optimal
solutions or a solution picked at random during this phase will help maintain a balanced search process for the algorithm. By selecting the optimal solutions, the algorithm can effectively utilize the potential solutions it has previously acquired. On the other hand, opting for a solution generated at random enables the algorithm to escape from local optima and provides more diversity in the population. In addition, the exploitation ability of RIME is further enhanced.

4.3. Mixed Cauchy-Gaussian mutation

With increasing iterations, individuals tend to gravitate towards the best solutions. However, this can cause a lack of diversity in the population, potentially resulting in premature convergence of the algorithm. In other words, if the best solution falls into stagnation, the other agents will fall into local optima. Therefore, to solve this problem, an adaptive mixed Cauchy-Gaussian mutation [49] is introduced. The solution with the highest fitness is chosen for mutation. The best solution, pre- and post-mutation, is compared, and the better one (with the minimum fitness value) is chosen to enter the next iteration. Using this approach to improve the best individual can help other individuals explore more regions of the search space and provide more diversity among the population. The mathematical modeling of the adaptive mixed Cauchy-Gaussian approach is defined as follows:

A_{best}^{t+1} = X_{best}^{t} × [1 + λ1 × Cauchy(0, σ^2) + λ2 × Gauss(0, σ^2)]   (17)

σ = 1, if f(X_{best}) < f(X_i); otherwise σ = exp((f(X_{best}) − f(X_i)) / |f(X_{best})|)   (18)

X_{best}^{t+1} = A_{best}^{t+1}, if f(A_{best}^{t+1}) ≤ f(X_{best}^{t}); otherwise X_{best}^{t+1} = X_{best}^{t}   (19)

where A_{best}^{t+1} and X_{best}^{t} are the positions of the best solution after and before applying the mutation, respectively; Gauss(0, σ^2) is a random variable drawn from the Gaussian distribution with standard deviation σ^2, while Cauchy(0, σ^2) denotes the Cauchy distribution. Furthermore, f(•) represents the objective function of a solution among the population. λ1 and λ2 are two dynamic parameters adjusted as the number of iterations changes, which are defined as follows:

λ1 = 1 − t^2 / IterMax^2   (20)

λ2 = t^2 / IterMax^2   (21)

According to Eqs. (20) and (21), λ1 starts with a larger value at the first iterations and decreases as iterations proceed, which yields a large mutation step to explore a wide range of the search space. Meanwhile, λ2 yields a small mutation step to help search closer to the best solution. Therefore, during the search process, λ1 keeps decreasing while λ2 keeps increasing as the number of iterations increases.

4.4. Restart strategy (RS)

Moving to the fourth proposed enhancement strategy, a poor individual may fall into a local optimum for a long time and become unable to move toward better positions. Therefore, poor individuals need to relocate to other, better regions. The restart strategy [50] can help handle this problem and move poor individuals from local-optima regions to more diverse regions. Hence, the restart strategy is used to change the poorer solutions' positions. In this paper, a sample vector TR(i) is used to capture the number of times the fitness of an individual has not changed, indicating that it has fallen into a local optimum. Each time the fitness value does not change, TR(i) is incremented by 1; otherwise, it is reset. When the count of TR(i) reaches a defined threshold, new vectors are used to reposition the ith solution to other regions. Better vectors T1 and T2 are used to change the position of the poorer individuals to enhance their performance outside of the local region. The T1 and T2 vectors are defined as follows:

T1,j = lb_j + rnd() × (ub_j − lb_j)   (22)

T2,j = rnd() × (ub_j + lb_j) − X_{i,j}   (23)

T2,j = lb_j + rnd() × (ub_j − lb_j), if T2,j ≥ ub_j or T2,j ≤ lb_j   (24)

X_i = T1, if f(T1) < f(T2) and TR(i) ≥ limit; X_i = T2, if f(T1) > f(T2) and TR(i) ≥ limit   (25)

where T1,j and T2,j represent the values at the jth position of T1 and T2, respectively. A random number within the interval from 0 to 1 is generated using rnd(). When the value of T2,j surpasses the upper boundary ub_j or falls below the lower boundary lb_j in the jth dimension, it is adjusted using Eq. (24), and the sample vector TR(i) is reset to zero. Moreover, this paper introduces √t as the constraint limit. During the initial iterations of the algorithm, this small limit enhances the overall global performance; conversely, the larger limit during later iterations effectively prevents the algorithm from deviating from the global solution. The restart strategy helps the RIME algorithm enhance its searchability and allows poor agents to be improved and restarted, thereby avoiding being stuck in local optima for a large number of iterations. Therefore, the exploration phase of ACRIME is also enhanced.

4.5. The architecture of ACRIME

The RIME algorithm is a physics-based metaheuristic algorithm that is distinguished by four primary phases: 1) the initialization of the RIME clusters; 2) a search method that mimics the behavior of soft-rime particles, enabling the exploration of the algorithm; 3) exploitation through a hard-rime puncture mechanism, which simulates the crossover behavior shown by hard-rime agents; and 4) a greedy selection method with a positive bias. It is important to note that the initialization process lacks specialized population diversity and control. To mitigate the randomness in the initial population, a sequence of chaotic maps is applied to enhance the exploration phase, potentially impacting the convergence rate. Additionally, Gaussian random walks have proven effective in all aspects of MAs throughout the iterative process in metaheuristic algorithms. The present study proposes a novel enhanced adaptive version of the mutualism phase to effectively handle population formation and updates within the RIME method. The adaptive enhanced mutualism phase is introduced at the start of the next iteration and continues until the final iteration. Following this strategy, the RIME phases are executed normally. Then, another enhancement strategy, referred to as the mixed mutation, is introduced into RIME. This addition is aimed at improving the best solutions, especially guiding individual population members in later iterations. Consequently, incorporating the mixed mutation can lead to significant enhancements in the exploitation process by improving the best solution. Finally, individuals with lower fitness values that persist over extended periods are repositioned to more favorable locations within the search space using the restart strategy. In summary, the enhanced RIME (ACRIME) algorithm is detailed and consolidated in Algorithm 2.

5. Performance analysis of ACRIME for global optimization

This section presents the evaluation experiments to assess the efficiency of the proposed ACRIME. A series of experiments are conducted to evaluate and assess the performance of ACRIME. The first experiment is applied to test the efficiency of ACRIME using different benchmark
mathematical functions, including 23 standard benchmark functions and ten recent CEC'19 functions. Then, ACRIME is applied as a feature selection method to evaluate the classification accuracy on fourteen different benchmark FS datasets. Finally, to evaluate real-application applicability, ACRIME is applied to a real COVID-19 problem as a feature selection tool for further evaluation.

5.1. Performance analysis using different benchmark functions

In this section, the evaluation of ACRIME is conducted using two separate sets of benchmark functions: the standard twenty-three benchmark functions and the CEC2019 complex functions. Evaluating the performance of a novel algorithm using benchmark functions is an essential task to monitor the behavior of the algorithm under different simple and complex classical functions. The description of the functions is presented in Section 5.1.1. Nine comparative algorithms are used to evaluate the proposed ACRIME. The competitors are both classical, highly cited algorithms and more recent algorithms: GWO [9], GSA [10], AOA [11], PSO [13], SOA [14], SCA [16], WOA [17], DBO [18], and the original RIME algorithm are used for the comparison with the proposed ACRIME. The parameter settings of the comparative algorithms are shown in Table 2. Regarding the simulation environment, we used a 64-bit Core-i7 10750H CPU running at 2.60 GHz with 32 GB of main memory and MATLAB R2020a for the software implementation.

5.1.1. Test functions description

The proposed ACRIME algorithm, along with the other algorithms, was implemented in simulations conducted on a set of 23 classical benchmark functions [51] and the CEC2019 function set [52]. The primary aim of these optimization benchmarks is to minimize the values of the fitness targets. Within the classical benchmark, the functions can be categorized into three distinct groups. The first to seventh functions fall under unimodal functions, while the eighth to thirteenth functions are classified as multimodal functions. The remaining functions belong to the category of fixed-dimension functions. The first category of benchmark functions exhibits a single global optimum, whether a maximum or a minimum. A multimodal function is characterized by the existence of multiple local optima in addition to a single global optimum. The final category pertains to fixed-dimension multimodal functions, in which the dimensionality of each function remains constant, in contrast to the two other types of benchmark functions. The efficacy and efficiency of metaheuristic algorithms in solving global optimization problems are evaluated through the utilization of these benchmark functions.

The information about the unimodal, multimodal, and fixed-dimension functions is displayed in Tables 3–5, respectively. The CEC 2019 benchmark typically classifies its functions into two distinct groups. The characteristics of CEC-01 to CEC-03 include multimodality, non-scalability, and non-separability. On the other hand, the functions CEC-04 to CEC-10 have the properties of multimodality, non-separability, and scalability. Table 6 displays the information about the CEC 2019 test set. The variable dim in this table represents the number of dimensions of the optimization problem, while fmin refers to its global minimum. The range denotes the limits of the search space for the optimization problem. By conducting a comprehensive set of benchmark tests, 33 tests in total, the efficacy of ACRIME in addressing a large set of optimization problems may be evaluated. To ensure fair comparisons among the 33 benchmark functions, the population size is 30 and the maximum number of iterations is 2000. Moreover, employing default settings mitigates the potential for comparison bias, as no other tuning options are deemed more appropriate. The simulation is conducted autonomously over 30 runs to validate the integrity of the benchmarking comparison.

5.1.2. Evaluation metrics

To evaluate the conducted comparisons, different evaluation metrics are used. Best, worst, standard deviation, and average fitness values are statistical measures applied to estimate the comparison between ACRIME and its competitors. Moreover, two non-parametric statistical tests are utilized to measure the statistical performance of ACRIME: the Friedman rank [53] and the Wilcoxon test [54]. Friedman ranks are used to statistically rank the obtained results among different algorithms. On the other hand, the Wilcoxon test measures the statistical difference between ACRIME and the other algorithms.

1. BestFit refers to the best-obtained fitness value during N runs, which is defined as follows:

BestFit = min(Fit1, Fit2, …, FitN)   (26)

2. WorstFit refers to the worst (maximum) fitness value obtained by an algorithm during N independent runs, which is defined as follows:

WorstFit = max(Fit1, Fit2, …, FitN)   (27)

3. AvgFit refers to the average fitness value when an algorithm is executed N times, which is defined as follows:

AvgFit = (1/N) Σ_{i=1}^{N} BestFit_i   (28)

Table 2
Parameter settings of comparative algorithms.

AOA: MOP min, max = 0.2, 1; α = 5; μ = 0.499
WOA: a decreases from 2 to 0; a2 decreases from −1 to −2
GWO: a linearly decreasing from 2 to 0
SCA: A = 2
SOA: Umax = 0.95; Umin = 0.0111; Wmax = 0.9; Wmin = 0.1
PSO: W_min = 0.4; W_max = 0.9; C1 = C2 = 1.49
GSA: gravitational constant G0 = 100; α = 20
DBO: k = 0.1; b = 0.3; S = 0.5

Table 3
Standard unimodal benchmark functions (function; range; fmin; Dim).

G1(Z) = Σ_{k=1}^{Dim} Z_k^2; [−100, 100]; 0; 30
G2(Z) = Σ_{k=1}^{Dim} |Z_k| + Π_{k=1}^{Dim} |Z_k|; [−10, 10]; 0; 30
G3(Z) = Σ_{k=1}^{Dim} (Σ_{j=1}^{k} Z_j)^2; [−100, 100]; 0; 30
G4(Z) = max_k {|Z_k|, 1 ≤ k ≤ Dim}; [−100, 100]; 0; 30
G5(Z) = Σ_{k=1}^{Dim−1} [100 (Z_{k+1} − Z_k^2)^2 + (Z_k − 1)^2]; [−30, 30]; 0; 30
G6(Z) = Σ_{k=1}^{Dim} (Z_k + 0.5)^2; [−100, 100]; 0; 30
G7(Z) = Σ_{k=1}^{Dim} k Z_k^4 + rand[0, 1]; [−1.28, 1.28]; 0; 30

4. Standard deviation (stdFit) refers to the statistical standard deviation between the different fitness values obtained during N runs of an algorithm, which is defined as follows:
stdFit = sqrt( (1/(N − 1)) Σ_{i=1}^{N} (Fit_i − AvgFit)^2 )   (29)

Table 4
Standard multimodal benchmark functions (function; range; fmin; Dim).

G8(Z) = Σ_{k=1}^{Dim} −Z_k sin(√|Z_k|); [−500, 500]; −418.9829 × Dim; 30
G9(Z) = Σ_{k=1}^{Dim} [Z_k^2 − 10 cos(2πZ_k) + 10]; [−5.12, 5.12]; 0; 30
G10(Z) = −20 exp(−0.2 √((1/Dim) Σ_{k=1}^{Dim} Z_k^2)) − exp((1/Dim) Σ_{k=1}^{Dim} cos(2πZ_k)) + 20 + e; [−32, 32]; 0; 30
G11(Z) = (1/4000) Σ_{k=1}^{Dim} Z_k^2 − Π_{k=1}^{Dim} cos(Z_k/√k) + 1; [−600, 600]; 0; 30
G12(Z) = (π/Dim) {10 sin^2(πY_1) + Σ_{k=1}^{Dim−1} (Y_k − 1)^2 [1 + 10 sin^2(πY_{k+1})] + (Y_Dim − 1)^2} + Σ_{k=1}^{Dim} u(Z_k, 10, 100, 4), where Y_k = 1 + (Z_k + 1)/4 and u(Z_k, a, b, m) = b (Z_k − a)^m if Z_k > a; 0 if −a < Z_k < a; b (−Z_k − a)^m if Z_k < −a; [−50, 50]; 0; 30
G13(Z) = 0.1 {sin^2(3πZ_1) + Σ_{k=1}^{Dim} (Z_k − 1)^2 [1 + sin^2(3πZ_k + 1)] + (Z_Dim − 1)^2 [1 + sin^2(2πZ_Dim)]} + Σ_{k=1}^{Dim} u(Z_k, 5, 100, 4); [−50, 50]; 0; 30

Table 5
Standard fixed-dimension benchmark functions (function; range; fmin; Dim).

G14(Z) = [1/500 + Σ_{j=1}^{25} 1/(j + Σ_{k=1}^{2} (Z_k − a_{jk})^6)]^{−1}; [−65, 65]; 1; 2
G15(Z) = Σ_{j=1}^{11} [a_j − Z_1 (b_j^2 + b_j Z_2) / (b_j^2 + b_j Z_3 + Z_4)]^2; [−5, 5]; 0.0003; 4
G16(Z) = 4 Z_1^2 − 2.1 Z_1^4 + (1/3) Z_1^6 + Z_1 Z_2 − 4 Z_2^2 + 4 Z_2^4; [−5, 5]; −1.0316; 2
G17(Z) = (Z_2 − (5.1/4π^2) Z_1^2 + (5/π) Z_1 − 6)^2 + 10 (1 − 1/8π) cos(Z_1) + 10; [−5, 5]; 0.398; 2
G18(Z) = [1 + (Z_1 + Z_2 + 1)^2 (19 − 14Z_1 + 3Z_1^2 − 14Z_2 + 6Z_1 Z_2 + 3Z_2^2)] × [30 + (2Z_1 − 3Z_2)^2 (18 − 32Z_1 + 12Z_1^2 + 48Z_2 − 36Z_1 Z_2 + 27Z_2^2)]; [−2, 2]; 3; 2
G19(Z) = −Σ_{j=1}^{4} c_j exp(−Σ_{k=1}^{3} a_{jk} (Z_k − p_{jk})^2); [1, 3]; −3.86; 3
G20(Z) = −Σ_{j=1}^{4} c_j exp(−Σ_{k=1}^{6} a_{jk} (Z_k − p_{jk})^2); [0, 1]; −3.32; 6
G21(Z) = −Σ_{j=1}^{5} [(Z − a_j)(Z − a_j)^T + c_j]^{−1}; [0, 10]; −10.1532; 4
G22(Z) = −Σ_{j=1}^{7} [(Z − a_j)(Z − a_j)^T + c_j]^{−1}; [0, 10]; −10.4028; 4
G23(Z) = −Σ_{j=1}^{10} [(Z − a_j)(Z − a_j)^T + c_j]^{−1}; [0, 10]; −10.5363; 4

Table 6
CEC2019 benchmark functions (function; range; Dim; fmin).

F1: [−8192, 8192]; 9; 1
F2: [−16384, 16384]; 16; 1
F3: [−4, 4]; 18; 1
F4: [−100, 100]; 10; 1
F5: [−100, 100]; 10; 1
F6: [−100, 100]; 10; 1
F7: [−100, 100]; 10; 1
F8: [−100, 100]; 10; 1
F9: [−100, 100]; 10; 1
F10: [−100, 100]; 10; 1

5.1.3. Performance analysis using standard functions

Table 7 exhibits the statistical comparison of ACRIME with the other comparative algorithms across the 23 standard benchmark functions. The statistical measures used to compare the algorithms are the average, i.e., the mean fitness value over 30 different independent runs, and std, the statistical standard deviation of the fitness values obtained through the 30 independent runs on each function. It is worth revealing that ACRIME achieves the best efficiency on most functions except F5 and F12. For F5, GWO was superior to ACRIME in terms of average fitness; still, the standard deviation of ACRIME on this function was better than that of the other algorithms, which indicates the excellent stability of ACRIME. This is because the novel proposed enhanced mutualism phase provided a good balance between exploration and exploitation in ACRIME. For F12, DBO's performance was better than the other algorithms, but ACRIME ranked second on this function, outperforming most of the algorithms. On the other hand, ACRIME obtained performance similar to its competitors on some functions, such as F1, F6, and F10, because the chaotic map integrated with ACRIME enhanced its exploration phase. It is also noticed that ACRIME obtained the best global value on some functions where the other methods could not, such as F2, F3, F4, F13, F15, F21, F22, and F23, because of the mixed mutation, which enhanced the searchability of ACRIME compared with the other algorithms. The proposed improvements helped ACRIME avoid falling into local optima on most functions. Generally, ACRIME is better than the other methods on most functions, and no other method wins more functions than ACRIME. The Friedman rank used to rank the methods according to the obtained fitness values is shown in the last rows of Table 7. ACRIME obtained the first rank with 1.7391, the recent DBO obtained the second rank with 3.6087, and GWO obtained 4.4348 for the third rank. The rest of the Friedman ranks of the other competitors are displayed in Table 7. Therefore, ACRIME presents good performance on the 23 standard benchmark functions compared to the other methods through 2000 iterations.

The convergence curve is utilized to determine the rate of
Table 7
Statistical comparisons between ACRIME and other algorithms for 23 standard functions.

Func. Metric GSA GWO PSO SCA SOA WOA DBO AOA RIME ACRIME
F1 AVE 4.41E+03 1.38E-121 1.22E+02 7.86E-08 7.44E-59 4.51E-124 0.00E+00 1.72E-127 6.91E-02 0.00E+00
   STD 1.24E+03 4.86E-121 1.47E+01 2.20E-07 2.26E-58 2.44E-123 0.00E+00 9.45E-127 2.99E-02 0.00E+00
F2 AVE 1.42E+12 1.29E-70 6.17E+01 1.45E-11 5.04E-36 1.01E-202 1.87E-247 1.10E-67 1.53E-01 0.00E+00
   STD 3.53E+12 1.48E-70 1.19E+01 6.42E-11 2.19E-35 0.00E+00 0.00E+00 6.01E-67 5.55E-02 0.00E+00
F3 AVE 2.58E+04 4.70E-34 3.02E+02 1.08E+03 1.77E-32 6.23E+03 4.83E-59 1.10E-03 8.41E+01 0.00E+00
   STD 1.55E+04 1.74E-33 4.40E+01 1.39E+03 5.81E-32 6.71E+03 2.65E-58 5.01E-03 2.86E+01 0.00E+00
F4 AVE 2.38E+01 1.63E-29 4.19E+00 9.92E+00 1.02E-14 2.46E+01 1.59E-223 1.44E-02 1.18E+00 0.00E+00
   STD 3.54E+00 2.88E-29 2.87E-01 9.69E+00 5.57E-14 2.90E+01 1.62E-41 1.86E-02 5.23E-01 0.00E+00
F5 AVE 1.69E+06 2.64E+01 1.25E+05 2.84E+01 2.80E+01 2.63E+01 2.38E+01 2.80E+01 5.87E+02 2.68E+01
   STD 7.46E+05 6.62E-01 2.38E+04 1.46E+00 6.72E-01 3.23E-01 3.32E-01 4.83E-01 9.03E+02 3.96E-01
F6 AVE 4.22E+03 0.00E+00 1.22E+02 6.67E-02 0.00E+00 0.00E+00 0.00E+00 0.00E+00 1.70E+00 0.00E+00
   STD 9.42E+02 0.00E+00 1.48E+01 2.54E-01 0.00E+00 0.00E+00 0.00E+00 0.00E+00 1.29E+00 0.00E+00
F7 AVE 1.27E+02 2.98E-04 1.04E+02 1.38E-02 4.16E-04 8.98E-04 3.69E-04 2.38E-04 1.16E-02 1.57E-05
   STD 1.76E+01 1.38E-04 2.64E+01 1.25E-02 3.96E-04 1.02E-03 2.97E-04 1.82E-04 4.81E-03 1.50E-05
F8 AVE -2.47E+03 -6.17E+03 -6.77E+03 -4.14E+03 -5.55E+03 -8.76E+03 -1.00E+04 -6.54E+03 -1.10E+04 -1.12E+04
   STD 4.02E+02 5.52E+02 8.39E+02 3.87E+02 7.14E+02 1.74E+03 2.39E+03 5.06E+02 3.78E+02 1.28E+03
F9 AVE 3.58E+02 2.43E-01 3.63E+02 1.36E+00 0.00E+00 3.79E-15 1.09E+00 0.00E+00 3.46E+01 0.00E+00
   STD 2.50E+01 1.33E+00 1.98E+01 4.17E+00 0.00E+00 1.44E-14 4.26E+00 0.00E+00 9.96E+00 0.00E+00
F10 AVE 1.25E+01 9.89E-15 8.17E+00 1.19E+01 2.00E+01 3.49E-15 1.13E-15 8.88E-16 9.71E-01 8.88E-16
    STD 8.48E-01 2.91E-15 3.98E-01 9.04E+00 1.36E-03 2.63E-15 9.01E-16 0.00E+00 6.33E-01 0.00E+00
F11 AVE 3.78E+01 2.09E-03 1.03E+00 3.22E-02 0.00E+00 3.04E-03 0.00E+00 5.78E-02 2.23E-01 0.00E+00
    STD 1.07E+01 5.03E-03 1.01E-02 8.03E-02 0.00E+00 1.17E-02 0.00E+00 5.03E-02 7.64E-02 0.00E+00
F12 AVE 1.05E+05 3.53E-02 4.51E+00 5.03E-01 3.11E-01 1.57E-03 3.43E-19 2.89E-01 1.43E-01 2.17E-04
    STD 3.11E+05 2.12E-02 7.69E-01 1.20E-01 1.47E-01 3.74E-03 1.49E-18 4.42E-02 3.60E-01 8.56E-05
F13 AVE 1.83E+06 5.34E-01 1.99E+01 2.28E+00 2.01E+00 4.16E-02 1.45E-01 2.76E+00 1.02E-02 4.39E-03
    STD 1.56E+06 2.18E-01 2.90E+00 1.64E-01 1.76E-01 5.51E-02 1.37E-01 1.16E-01 1.01E-02 4.40E-03
F14 AVE 6.37E+00 3.02E+00 3.07E+00 9.98E-01 1.53E+00 1.88E+00 1.53E+00 9.56E+00 9.98E-01 9.98E-01
    STD 3.28E+00 3.42E+00 2.44E+00 8.79E-05 8.92E-01 2.49E+00 8.54E-01 3.59E+00 1.18E-13 1.04E-13
F15 AVE 1.42E-01 2.31E-03 1.05E-03 8.85E-04 1.16E-03 6.20E-04 5.34E-04 1.47E-02 4.51E-03 3.08E-04
    STD 1.37E-01 6.12E-03 1.61E-04 3.92E-04 2.33E-04 3.70E-04 2.37E-04 2.84E-02 8.07E-03 7.92E-08
F16 AVE -4.77E-01 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00 -1.03E+00
    STD 5.79E-01 2.84E-09 7.88E-04 1.78E-05 1.95E-07 1.06E-11 2.58E-09 5.75E-08 3.98E-09 6.52E-16
F17 AVE 5.44E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 3.98E-01 4.04E-01 3.98E-01 3.98E-01
    STD 1.47E-01 1.82E-07 4.31E-04 3.81E-04 2.52E-05 1.21E-07 4.84E-09 5.36E-03 6.15E-09 3.24E-16
F18 AVE 6.04E+01 3.00E+00 3.06E+00 3.00E+00 3.00E+00 3.00E+00 3.00E+00 1.58E+01 3.00E+00 3.00E+00
    STD 6.86E+01 1.53E-06 6.50E-02 8.02E-06 1.65E-05 1.75E-06 1.64E-15 2.01E+01 3.73E-08 1.19E-08
F19 AVE -3.66E+00 -3.86E+00 -3.85E+00 -3.86E+00 -3.86E+00 -3.86E+00 -3.86E+00 -3.86E+00 -3.86E+00 -3.86E+00
    STD 1.50E-01 3.19E-03 4.58E-03 2.56E-03 1.44E-03 3.11E-03 3.21E-03 2.68E-03 1.01E-08 6.38E-09
F20 AVE -2.51E+00 -3.28E+00 -2.78E+00 -2.95E+00 -2.87E+00 -3.25E+00 -3.23E+00 -3.14E+00 -3.28E+00 -3.28E+00
    STD 2.21E-01 6.44E-02 2.86E-01 3.30E-01 4.68E-01 8.13E-02 7.46E-02 4.04E-02 5.83E-02 5.70E-02
F21 AVE -9.90E-01 -8.97E+00 -4.71E+00 -2.93E+00 -2.32E+00 -9.64E+00 -6.95E+00 -4.15E+00 -8.97E+00 -1.02E+01
    STD 5.21E-01 2.18E+00 1.68E+00 2.13E+00 3.66E+00 1.56E+00 2.48E+00 1.05E+00 2.18E+00 1.40E-05
F22 AVE -1.09E+00 -1.04E+01 -5.22E+00 -3.79E+00 -4.53E+00 -9.72E+00 -7.78E+00 -4.97E+00 -1.01E+01 -1.04E+01
    STD 4.16E-01 5.69E-05 1.37E+00 2.25E+00 4.47E+00 2.12E+00 2.91E+00 1.66E+00 1.34E+00 1.45E-05
F23 AVE -1.38E+00 -1.05E+01 -5.39E+00 -4.65E+00 -8.17E+00 -9.56E+00 -9.46E+00 -4.83E+00 -9.78E+00 -1.05E+01
    STD 4.87E-01 5.63E-05 1.50E+00 2.03E+00 3.86E+00 2.26E+00 2.18E+00 1.55E+00 1.98E+00 1.65E-05
Friedman rank 9.0870 4.4348 7.6522 6.9565 5.9348 4.7391 3.6087 5.5000 5.3478 1.7391
Final rank 10 3 9 8 7 4 2 6 5 1
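The Friedman ranking reported in the last two rows of Table 7 can be reproduced schematically: each algorithm is ranked per function by its average fitness (1 = best), and the per-function ranks are averaged over all functions. A small sketch with made-up numbers follows (ties are ignored for brevity; the paper's exact tie-handling is not shown here):

```python
import numpy as np

def friedman_mean_ranks(avg_fitness):
    """avg_fitness: (n_functions, n_algorithms) array of per-function average
    fitness values (lower is better). Each row is ranked 1..n_algorithms,
    and the ranks are averaged per algorithm (no tie correction)."""
    # double argsort turns each row of values into its within-row rank (0-based)
    ranks = np.argsort(np.argsort(avg_fitness, axis=1), axis=1) + 1
    return ranks.mean(axis=0)

# Toy example: three algorithms evaluated on three functions (made-up values)
F = np.array([
    [0.5, 0.1, 0.9],
    [2.0, 1.0, 3.0],
    [0.3, 0.2, 0.1],
])
ranks = friedman_mean_ranks(F)
print(ranks)  # [2.33333333 1.33333333 2.33333333]
```

The algorithm with the smallest mean rank (here the second one) receives the final rank 1, which mirrors how ACRIME's 1.7391 translates to final rank 1 in Table 7.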

Fig. 2. Convergence rate for some selected classical functions.

convergence speed and behavior of the algorithms on the used functions. Some convergence curves for different function types of the standard benchmark functions are shown in Fig. 2. As shown in Fig. 2, a log scale is used to display large-scale fitness values. It is noticed that ACRIME has better exploration for finding the global solution on functions F1, F3, and F4 than the other methods. At F5, some methods have convergence similar to ACRIME, but ACRIME has a higher convergence speed than the compared methods. This is because of the chaotic initialization of the population diversity and the restart strategies that enhance the exploration ability of ACRIME. Generally, the convergence curves show that ACRIME is better than the other methods at finding the global solution in early iterations. For example, for F11, where PSO and DBO present good performance, ACRIME's convergence speed was superior to the other algorithms. According to the convergence behavior of ACRIME on different types of functions, ACRIME performs well on most of the functions and presents a high level of optimization performance compared with the other algorithms.

5.1.4. Performance comparison on CEC'19
This section thoroughly examines the results and comparisons,


emphasizing the fundamental exploration and exploitation elements of ACRIME relative to the original RIME algorithm and other well-established algorithms using the complex benchmark test suite CEC'19. The standard deviation, best, average fitness, and worst fitness values attained by ACRIME are presented in Table 8. This table provides a comparison of ACRIME with other algorithms over a collection of 10 functions, each having a solution dimension of 10. The optimum values, shown by the minimum, are emphasized in bold. Table 8 demonstrates that the ACRIME algorithm developed in this work yields the best possible result for the F1 test function. Moreover, in the case of functions F2, F4, and F5, the ACRIME algorithm demonstrates superior efficiency compared to the other algorithms. This is because the novel mixed mutation and chaotic strategies enhanced the exploration phase and made the population more diverse. In the context of function F3, the ACRIME algorithm demonstrates commendable performance alongside the original RIME method, but ACRIME was superior to RIME in the stability of its standard deviation. Notably, both algorithms surpass the other existing algorithms in this regard of average fitness values. ACRIME has superior performance in terms of average fitness for the functions F5 and F6, but it is worth noting that the original RIME algorithm outperforms ACRIME, specifically on the F7 test function. Accordingly, ACRIME achieved the second position on F7, following the RIME algorithm. The ACRIME algorithm demonstrates superior performance compared to the other algorithms when considering the functions F8–F10. From Table 8, it is clear that the presented improvements helped ACRIME avoid local solutions and, hence, obtain promising solutions compared with other algorithms. In general, the findings of the study imply that the ACRIME algorithm exhibits superior performance in comparison with the competitor algorithms in effectively solving nine functions from the CEC'19 benchmark, as evidenced by the statistical average of fitness values. Furthermore, it exhibited superior performance in terms of standard deviation on most functions compared to six other algorithms. The modified mutualism phase enhanced the exploration-exploitation balance of ACRIME, thereby attaining highly stable performance on most functions. Additionally, ACRIME and AOA achieved the same standard deviation and ranked first on F1. Moreover, the ACRIME algorithm exhibited superior performance for the worst and best fitness values achieved, surpassing the performance of the other algorithms. Furthermore, ACRIME achieved the first rank in the Friedman mean rank with a value of 1.6875. RIME earned a rank of 3.725, placing it second in the Friedman ranking. GWO obtained a rank of 3.875, placing it third. The ranks of the other algorithms can be found in Table 8.

Moving ahead, we conduct a convergence analysis of ACRIME against other algorithms of a similar nature. Fig. 3 presents the convergence curves of various optimization algorithms, namely AOA, DBO, WOA, PSO, GWO, SCA, GSA, SOA, and the original RIME. These algorithms are compared with the proposed ACRIME on the CEC'19 functions, with a dimensionality of 10. For F1, the ACRIME algorithm demonstrates a tendency for early exploration in its pursuit of achieving the optimal solution, in contrast to many other closely related algorithms. Over the test function F2, RIME outperforms ACRIME in the exploration phase, but ACRIME still performs well compared with many other algorithms. For F3, ACRIME and RIME have similar performance in finding the global optimal values through the exploitation and exploration phases. Moreover, as shown in Fig. 3(d–x), the proposed ACRIME has better performance in handling the functions F4–F10, and ACRIME presents the best performance. Through the functions F4–F10, the proposed ACRIME reaches the global optimal value in early iterations, proving ACRIME's stability compared with the other algorithms. Generally, it is apparent that the ACRIME algorithm reliably converges to a stable state across a range of test functions exhibiting diverse properties. This observation suggests that the ACRIME algorithm exhibits accurate convergence and tends to approach the ideal solution.

In addition, it is worth observing that the ACRIME algorithm exhibits superior performance in terms of attaining the lowest average of optimal solutions and displaying a commendable rate of rapid convergence when compared to the other competing algorithms on the CEC'19 benchmark. This emphasizes the reliable performance of ACRIME in achieving optimal solutions and its usefulness as a valuable tool for tackling challenging tasks.

5.1.5. Stability analysis of ACRIME
Boxplot analysis is a useful tool that can be applied to visualize characteristics of the distribution of data, especially in cases where there exist many local minima related to a certain class of functions. In order to provide a clearer insight into the distribution of the results, Figs. 4 and 5 are used to plot the boxplots of the outcomes produced by each algorithm and function. Boxplots, which partition data into quartiles, provide a compact representation summarizing the distribution of data. They identify the minimum and maximum values attained by the algorithm, which are illustrated through the ends of the whiskers. The upper and lower quartiles are identified through the box extents. A narrow boxplot implies a better consensus of the data points. Fig. 4 presents the boxplot results of some selected functions from the classical functions, while Fig. 5 shows the boxplots of the ten CEC2019 functions. It can be observed that the boxplots for the proposed ACRIME are narrow for most functions, indicating a better performance and attaining lower values than the other algorithms. According to Fig. 4, the proposed ACRIME is better than its competitors for most test functions, although it is slightly inferior for F8. On the other hand, the similarly excellent performance of ACRIME continued on CEC2019, where ACRIME obtained the smaller boxplot on most functions and presented a competitive performance on other functions, such as F8.

5.1.6. Statistical analysis
A Wilcoxon rank test was employed to assess the statistical significance of the difference between the ACRIME algorithm and the other competing algorithms. The findings of this examination are displayed in Table 9 and Table 10 for the classical benchmarks and the CEC'19 test set, respectively. The p-value for ACRIME against all methods is recorded. When the value of p is smaller than 5 %, there is a statistical difference between ACRIME and the other methods. From Tables 9 and 10, it is noticed that ACRIME is statistically different from the other competitors on most functions. For example, ACRIME is statistically different from DBO on all functions except F8 in Table 9. At the same time, ACRIME differs from GWO on all functions except F6, F9, and F20, as shown in Table 10, based on the CEC2019 test functions. From Table 10, it is worth noticing that some values of p equal 1, which indicates that the corresponding methods have performance similar to ACRIME on these functions. These functions are not complex enough to produce large statistical differences. Values at which there is no significant difference between ACRIME and the other competitors are bolded in the tables. Generally, the results obtained from Tables 9 and 10 show that ACRIME has a good effect on most CEC19 and standard functions. Also, the Friedman rank test comparison between all algorithms is recorded in Fig. 6 for the standard benchmark functions and CEC'19. We realize that ACRIME obtained the first rank among all algorithms on both test suites, indicating its superior performance compared to all its rivals. Hence, the introduced improvements present ACRIME as a novel, promising algorithm for solving complex optimization problems.

5.1.7. Time analysis of ACRIME
This subsection evaluates the time behavior of ACRIME in comparison with the other competitors on the CEC2019 and classical functions. The proposed enhancements put more computational time on ACRIME compared to the original RIME. Figs. 7 and 8 record the execution times captured by ACRIME during the classical function evaluations and the CEC2019 function evaluations, respectively. According to Fig. 7, it is clear that DBO is the worst in terms of execution times on the first six functions, followed by GSA and ACRIME. For functions F7–F11, DBO and GSA still obtain the largest execution time due to their complex internal structure. For

Table 8
Statistical comparisons between ACRIME and other algorithms for CEC19 benchmark functions.
Algorithm metric GSA GWO PSO SCA SOA WOA DBO AOA RIME ACRIME

F1 AVE 4.28E+04 3.25E+01 1.77E+02 2.57E+02 5.99E+01 1.78E+03 7.17E+01 1.86E+02 1.86E+02 1.00E+00
STD 2.86E+04 7.82E+01 2.52E+02 4.24E+02 1.67E+02 1.91E+03 1.63E+02 0.00E+00 2.47E+02 0.00E+00
BEST 9.91E+03 1.00E+00 1.00E+00 1.00E+00 1.00E+00 1.46E+00 1.00E+00 1.00E+00 1.00E+00 1.00E+00
WORST 1.36E+05 3.38E+02 1.10E+03 1.97E+03 8.52E+02 8.84E+03 6.31E+02 1.00E+00 1.05E+03 1.00E+00
F2 AVE 3.21E+01 3.41E+00 3.64E+00 6.33E+00 3.51E+00 1.21E+01 3.41E+00 3.20E+00 3.25E+00 3.15E+00
STD 1.60E+01 3.56E-01 2.12E-01 2.71E+00 4.84E-01 8.30E+00 4.77E-01 6.25E-02 6.82E-02 4.14E-02
BEST 9.54E+00 2.91E+00 3.30E+00 3.25E+00 2.74E+00 4.17E+00 2.27E+00 3.16E+00 3.18E+00 3.05E+00
WORST 7.38E+01 4.33E+00 4.26E+00 1.44E+01 4.70E+00 3.19E+01 4.83E+00 3.47E+00 3.50E+00 3.20E+00
F3 AVE 1.37E+01 1.25E+01 1.27E+01 1.27E+01 1.27E+01 1.14E+01 1.11E+01 1.09E+01 1.07E+01 1.07E+01
STD 1.27E-01 6.10E-01 3.48E-04 1.56E-02 1.72E-01 9.59E-01 7.11E-01 5.52E-01 3.05E-04 8.30E-05
BEST 1.30E+01 1.07E+01 1.27E+01 1.27E+01 1.18E+01 1.07E+01 1.07E+01 1.07E+01 1.07E+01 1.07E+01
WORST 1.37E+01 1.27E+01 1.27E+01 1.28E+01 1.27E+01 1.27E+01 1.27E+01 1.27E+01 1.07E+01 1.07E+01
F4 AVE 1.45E+02 1.45E+01 4.09E+01 4.74E+01 2.35E+01 5.57E+01 3.12E+01 4.88E+01 1.25E+01 1.24E+01
STD 2.12E+01 6.95E+00 1.10E+01 6.96E+00 8.90E+00 2.18E+01 1.22E+01 1.83E+01 5.70E+00 5.63E+00
BEST 1.04E+02 4.00E+00 2.16E+01 3.62E+01 8.61E+00 1.70E+01 1.05E+01 2.29E+01 3.99E+00 2.99E+00
WORST 1.90E+02 3.47E+01 7.00E+01 6.00E+01 3.90E+01 9.85E+01 6.48E+01 9.51E+01 2.49E+01 3.18E+01
F5 AVE 1.75E+02 1.89E+00 1.96E+00 6.52E+00 3.18E+00 1.97E+00 1.19E+00 4.53E+01 1.96E+00 1.12E+00
STD 4.87E+01 1.99E+00 1.04E-01 1.67E+00 2.18E+00 4.65E-01 1.52E-01 1.52E+01 3.06E-01 6.18E-02
BEST 3.95E+01 1.13E+00 1.73E+00 3.12E+00 1.40E+00 1.38E+00 1.02E+00 2.08E+01 1.53E+00 1.00E+00
WORST 2.85E+02 1.21E+01 2.09E+00 1.18E+01 1.30E+01 3.23E+00 1.66E+00 7.58E+01 1.18E+01 1.31E+00
F6 AVE 1.33E+01 2.63E+00 4.78E+00 6.95E+00 6.90E+00 7.93E+00 6.39E+00 1.09E+01 6.37E+00 2.62E+00
STD 1.13E+00 1.20E+00 1.49E+00 1.02E+00 1.68E+00 1.75E+00 1.49E+00 1.27E+00 1.77E+00 1.32E+00
BEST 1.05E+01 1.10E+00 2.56E+00 5.41E+00 3.77E+00 4.29E+00 3.30E+00 8.41E+00 1.90E+00 1.01E+00
WORST 1.54E+01 5.14E+00 7.89E+00 9.10E+00 1.11E+01 1.08E+01 8.64E+00 1.36E+01 9.10E+00 6.58E+00
F7 AVE 2.62E+03 6.37E+02 1.18E+03 1.37E+03 8.23E+02 1.22E+03 1.08E+03 1.12E+03 4.18E+02 5.24E+02
STD 1.82E+02 2.52E+02 2.73E+02 2.18E+02 1.99E+02 3.56E+02 3.30E+02 3.19E+02 2.00E+02 1.98E+02
BEST 2.24E+03 1.93E+02 5.51E+02 8.57E+02 4.20E+02 4.85E+02 4.57E+02 6.28E+02 1.20E+02 4.79E+00
WORST 2.96E+03 1.30E+03 1.66E+03 1.74E+03 1.33E+03 1.84E+03 1.83E+03 1.69E+03 9.02E+02 9.50E+02
F8 AVE 5.29E+00 3.78E+00 4.01E+00 4.27E+00 4.35E+00 4.61E+00 4.05E+00 4.73E+00 3.49E+00 3.31E+00
STD 1.94E-01 4.97E-01 5.10E-01 2.65E-01 3.11E-01 3.32E-01 4.89E-01 2.97E-01 4.53E-01 5.32E-01
BEST 4.52E+00 2.59E+00 2.91E+00 3.73E+00 3.46E+00 3.87E+00 2.97E+00 3.81E+00 2.07E+00 1.95E+00
WORST 5.51E+00 4.51E+00 4.98E+00 4.85E+00 4.64E+00 5.04E+00 5.00E+00 5.09E+00 4.13E+00 4.06E+00
F9 AVE 5.30E+00 1.15E+00 1.20E+00 1.53E+00 1.35E+00 1.40E+00 1.36E+00 1.79E+00 1.21E+00 1.15E+00
STD 8.47E-01 5.21E-02 5.23E-02 1.14E-01 9.72E-02 1.34E-01 1.61E-01 4.71E-01 7.27E-02 4.77E-02
BEST 3.43E+00 1.05E+00 1.13E+00 1.34E+00 1.13E+00 1.19E+00 1.12E+00 1.22E+00 1.10E+00 1.08E+00
WORST 6.87E+00 1.32E+00 1.31E+00 1.82E+00 1.50E+00 1.71E+00 1.69E+00 2.61E+00 1.35E+00 1.26E+00
F10 AVE 2.15E+01 2.09E+01 2.14E+01 2.12E+01 2.13E+01 2.11E+01 2.13E+01 2.11E+01 2.10E+01 1.70E+01
STD 1.09E-01 1.94E+00 8.31E-02 1.16E+00 7.61E-02 1.10E-01 1.23E-01 4.01E-02 1.23E-02 1.15E-02
BEST 2.13E+01 1.19E+01 2.12E+01 1.51E+01 2.12E+01 2.10E+01 2.10E+01 2.09E+01 2.10E+01 1.03E+00
WORST 4.28E+04 3.25E+01 1.77E+02 2.57E+02 5.99E+01 1.78E+03 7.17E+01 1.86E+02 1.86E+02 1.00E+00
Friedman rank test 9.1 3.875 5.75 6.775 5.55 7.2625 5.2375 6.0375 3.725 1.6875
Rank 10 3 6 8 5 9 4 7 2 1
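The Friedman mean ranks in the last rows of these comparison tables are simply the per-function ranks of each algorithm averaged over all functions, with ties sharing the average of the tied positions. A minimal sketch with made-up fitness values (not the paper's data):

```python
def friedman_mean_ranks(scores):
    """scores[f][a]: average fitness of algorithm a on function f (lower is
    better). Returns each algorithm's mean rank across all functions; tied
    values share the average of the tied rank positions."""
    n_algos = len(scores[0])
    totals = [0.0] * n_algos
    for row in scores:
        order = sorted(range(n_algos), key=lambda a: row[a])
        ranks = [0.0] * n_algos
        i = 0
        while i < n_algos:
            j = i
            # extend j over a run of tied values
            while j + 1 < n_algos and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1  # average of tied positions, 1-based
            for k in range(i, j + 1):
                ranks[order[k]] = avg_rank
            i = j + 1
        for a in range(n_algos):
            totals[a] += ranks[a]
    return [t / len(scores) for t in totals]

# Illustrative (made-up) fitness values: 3 algorithms on 3 functions.
ranks = friedman_mean_ranks([[0.9, 0.5, 0.1],
                             [0.8, 0.8, 0.2],
                             [0.7, 0.4, 0.3]])
```

The mean ranks over all algorithms always sum to n(n + 1)/2, which is a quick sanity check on any reported Friedman row.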

Fig. 3. Convergence curve for CEC’19 test set.

functions F12–F17, ACRIME is the worst due to the proposed four enhancements, which slightly increase the computational time compared with the original algorithm. For the remaining functions, DBO consumes more time compared to all other algorithms. On the other hand, the computational times of the comparative algorithms on CEC2019 are recorded in Fig. 8. According to Fig. 8, RIME and ACRIME obtained the best execution times that fit the complexity of the defined CEC2019 test functions. The reason for the increased time complexity of ACRIME is that the integrated improvements largely enhanced the solution quality of ACRIME and boosted its convergence speed. However, the computational time increased, indicating that ACRIME still has room for improvement in reducing the computational time.

6. Performance analysis of ACRIME for FS problem

This section aims to assess the feasibility of ACRIME through a benchmark test involving the FS problem. Feature selection is an essential stage in solving classification problems since it entails navigating through a search area with multiple dimensions. The goal of feature selection (FS) is to discover the most compact and informative feature sets from a given data set to maximize the accuracy of classification. FS employs a method to decrease the dimensionality of the data. Furthermore, the classifier's classification performance is utilized to validate the efficacy of lowering dimensionality in this process.

6.1. Data description and experimentation settings

The effectiveness of the strategy was evaluated on a total of fourteen datasets, which include datasets from the UCI machine learning repository [55]. Table 11 displays the fundamental information of these datasets. The datasets exhibit a variety of properties, ranging from a minimum of 13 to a maximum of more than 7000 features. The datasets contain a varying number of samples. The FS algorithm is considered sufficiently complete to evaluate its effectiveness [56]. To evaluate the efficacy of the proposed ACRIME algorithm in FS applications, it is compared to six well-established algorithms and five recently published algorithms. The first group includes Binary PSO (BPSO) [57], Binary WOA (BWOA) [58], Binary MFO (BMFO) [59], Binary BA (BBA) [60], Binary INFO (bINFO) [58], and the original RIME algorithm. In addition, this work includes recently published algorithms that exhibit exceptional performance in the field of feature selection (FS), such as


Fig. 4. Boxplots for some selected functions from classical functions.
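The boxplot statistics discussed in Section 5.1.5 reduce to a five-number summary (minimum, lower quartile, median, upper quartile, maximum) of each algorithm's per-run results; a minimal sketch using Python's standard library (the run values below are made up, not the paper's data):

```python
import statistics

def five_number_summary(values):
    """Minimum, Q1, median, Q3, maximum - the quantities a boxplot draws.
    A small Q3 - Q1 gap (a narrow box) indicates tightly clustered runs."""
    q1, q2, q3 = statistics.quantiles(values, n=4, method="inclusive")
    return min(values), q1, q2, q3, max(values)

# Made-up final fitness values from six repeated runs of one algorithm:
summary = five_number_summary([0.1, 0.2, 0.2, 0.3, 0.4, 0.9])
```

The whisker ends correspond to the first and last elements of the summary, and the box extents to the two quartiles, matching the description of Figs. 4 and 5.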

Binary COBHCOOT [61], bISSA [62], bRLTLBO [63], Binary IPSO [64], and Binary TACPSO [65]. The specific characteristics of the specified algorithms remain unchanged, as mentioned in their original papers. The population size for BPSO, bINFO, BMFO, BWOA, bRLTLBO, BBA, IPSO, bISSA, COBHCOOT, and TACPSO is fixed at 30 agents. The maximum number of iterations in this experiment is set to 100. To ensure the impartiality of the comparative studies, MATLAB 2020a was employed as the operating environment. Additionally, 30 independent runs were undertaken to mitigate any potential bias in the obtained data.

6.2. Architecture of ACRIME-based FS algorithm

The wrapper feature selection strategy is employed to locate the essential features among several features and discard unnecessary ones. ACRIME generates N swarm agents as the initial population in this stage. Every agent is considered a component of molecular descriptors chosen for evaluation. The indicated stage is crucial for attaining convergence and fitness to acquire the ideal solution (feature set). During this stage, the population X is formed using random initialization as described below:


Fig. 5. Boxplots for CEC2019 functions.

Xi = Lbi + randi × (Ub − Lb), i = 1, 2, …, N (30)

where Lbi and Ubi are the lower and upper limits for each solution agent i (from 1 to N), which in the FS problem lie in the range between 0 and 1. Also, randi is a random number generated for each agent, randi ∈ [0, 1]. The intermediary step of binary conversion is crucial for picking a feature subset before the fitness evaluation stage. Therefore, each solution Xi undergoes binary conversion from continuous into discrete Xnewi as follows:

Xnewi = 1 if Xi > 0.5, and 0 otherwise (31)

After identifying the suitable subset of features, the objective function is computed for each agent to evaluate the quality of the chosen features. The incorporation of the objective function in the feature selection problem aims to minimize the number of selected features while maximizing classification accuracy, i.e., minimizing classification error. From this viewpoint, the best solution is characterized as the one that attains the utmost level of accuracy in classification while concurrently reducing the number of selected features to a minimum. The k-NN classifier evaluates the Xnewi solutions. During each iteration, every solution chooses a reduced subset of features. The kNN classification technique is employed to train the data samples utilizing the chosen subset of features and compute the accuracy [66]. The fitness of the i-th solution is determined by the objective function outlined in Equation (32).

f(x) = α × γr(D) + β × |SF| / |AF| (32)

The classification error rate attained by the kNN classifier is represented as γr(D). The classification weighting parameters are α and β, with β being equal to (1 − α). The variable α takes on values between 0 and 1. Conversely, SF denotes the features that are chosen in each iteration, and AF represents the total number of features in the dataset. In light of the classifier's significance in this paper, α and β values of 0.99 and 0.01, respectively, have been chosen. The values were derived from the available literature.

6.3. Evaluation metrics

The following metrics are used to assess the effectiveness of ACRIME performance in feature selection tasks.
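The initialization, binary conversion, and fitness evaluation of Eqs. (30)–(32) can be sketched as follows; `knn_error_rate` is a hypothetical placeholder for the paper's kNN error estimate γr(D), which in practice requires training a classifier on the selected feature subset:

```python
import random

# Hypothetical stand-in: in the paper, gamma_r(D) is the error rate of a kNN
# classifier trained on the currently selected feature subset.
def knn_error_rate(selected):
    return 0.10  # fixed placeholder value, for illustration only

def init_population(n_agents, n_features, lb=0.0, ub=1.0):
    # Eq. (30): X_i = Lb_i + rand_i * (Ub - Lb), continuous values in [0, 1]
    return [[lb + random.random() * (ub - lb) for _ in range(n_features)]
            for _ in range(n_agents)]

def binarize(agent):
    # Eq. (31): a feature is selected (1) when its component exceeds 0.5
    return [1 if x > 0.5 else 0 for x in agent]

def fitness(binary_agent, alpha=0.99):
    # Eq. (32): f = alpha * gamma_r(D) + beta * |SF| / |AF|, beta = 1 - alpha
    beta = 1.0 - alpha
    n_selected = sum(binary_agent)
    return alpha * knn_error_rate(binary_agent) + beta * n_selected / len(binary_agent)

random.seed(0)
population = [binarize(agent) for agent in init_population(5, 10)]
scores = [fitness(agent) for agent in population]
```

With α = 0.99 the error term dominates, so the small β|SF|/|AF| penalty only breaks ties in favor of smaller feature subsets, matching the stated objective.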


Table 9
Wilcoxon test analysis of the proposed ACRIME with other compared algorithms for the classical functions.
DBO AOA GSA GWO PSO RIME SCA SOA WOA

F1 1.000000E+00 1.953100E-03 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06
F2 2.563100E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06
F3 1.734400E-06 8.857500E-05 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06
F4 1.734400E-06 1.920900E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06
F5 1.734400E-06 1.920900E-06 1.734400E-06 7.271000E-03 1.734400E-06 2.126600E-06 1.734400E-06 6.339100E-06 5.307000E-05
F6 1.000000E+00 1.000000E+00 1.734400E-06 1.000000E+00 1.726800E-06 6.002500E-06 5.000000E-01 1.000000E+00 1.000000E+00
F7 1.920900E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06
F8 3.682600E-02 1.734400E-06 1.734400E-06 1.734400E-06 1.920900E-06 8.612100E-01 1.734400E-06 1.734400E-06 1.057000E-04
F9 1.734400E-06 1.000000E+00 1.734400E-06 1.000000E+00 1.734400E-06 1.734400E-06 1.734400E-06 1.000000E+00 5.000000E-01
F10 1.734400E-06 1.000000E+00 1.734400E-06 6.831400E-07 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.522800E-04
F11 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 2.414700E-03 1.734400E-06 1.972900E-05
F12 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 4.533600E-04 1.734400E-06 8.450800E-01 1.734400E-06
F13 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06
F14 1.734400E-06 1.734400E-06 1.483900E-03 5.751700E-06 3.515200E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06
F15 9.626600E-04 1.734400E-06 7.690900E-06 1.972900E-05 4.991600E-03 7.864700E-02 2.584600E-03 4.681800E-03 2.613400E-04
F16 1.734400E-06 1.734400E-06 1.734400E-06 2.603300E-06 1.734400E-06 2.126600E-06 1.734400E-06 4.991600E-03 1.734400E-06
F17 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 3.001000E-02
F18 1.734400E-06 8.589600E-02 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 2.126600E-06 3.515200E-06 1.920900E-06
F19 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06
F20 1.245300E-02 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 8.729700E-03
F21 2.613400E-04 1.734400E-06 1.734400E-06 2.126600E-06 1.734400E-06 1.149900E-04 1.734400E-06 1.734400E-06 3.515200E-06
F22 2.067100E-02 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 3.001000E-02 1.734400E-06 1.734400E-06 1.734400E-06
F23 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 1.734400E-06 9.842100E-03 1.734400E-06 1.734400E-06 4.729200E-06

Table 10
Wilcoxon test analysis of the proposed ACRIME with other compared algorithms for the CEC'19 test set.
DBO AOA GSA GWO PSO RIME SCA SOA WOA

F1 8.860000E-05 1.000000E+00 1.730000E-06 1.730000E-06 1.730000E-06 1.730000E-06 1.730000E-06 1.730000E-06 1.730000E-06
F2 2.610000E-04 6.160000E-04 1.730000E-06 3.590000E-04 1.730000E-06 2.600000E-06 1.730000E-06 1.360000E-04 1.730000E-06
F3 3.070000E-04 1.730000E-06 1.730000E-06 1.730000E-06 1.730000E-06 9.710000E-05 1.730000E-06 1.730000E-06 2.130000E-06
F4 2.350000E-06 1.730000E-06 1.730000E-06 2.450000E-04 1.730000E-06 9.430000E-05 1.730000E-06 7.510000E-05 1.730000E-06
F5 1.780000E-01 1.730000E-06 1.730000E-06 1.730000E-06 1.730000E-06 1.730000E-06 1.730000E-06 1.730000E-06 1.730000E-06
F6 5.750000E-06 1.730000E-06 1.730000E-06 5.860000E-04 7.510000E-05 1.730000E-06 1.730000E-06 1.920000E-06 1.730000E-06
F7 7.690000E-06 3.180000E-06 1.730000E-06 1.060000E-01 1.920000E-06 5.710000E-06 1.730000E-06 3.410000E-05 2.130000E-06
F8 1.240000E-05 1.920000E-06 1.730000E-06 4.390000E-03 1.060000E-04 9.780000E-02 1.920000E-06 1.730000E-06 1.730000E-06
F9 3.880000E-06 1.730000E-06 1.730000E-06 7.500000E-01 8.310000E-04 1.200000E-03 1.730000E-06 3.520000E-06 1.730000E-06
F10 1.730000E-06 1.020000E-05 1.730000E-06 1.360000E-04 1.730000E-06 4.950000E-05 1.800000E-05 1.730000E-06 1.730000E-06

Fig. 6. Friedman rank test for the compared algorithms based on different benchmark functions.

Fig. 7. Time analysis of ACRIME and other algorithms at classical functions.

Fig. 8. Time analysis of ACRIME and other algorithms of CEC2019 functions.
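Execution-time comparisons such as those in Figs. 7 and 8 are typically gathered by timing repeated independent runs with a wall-clock timer and averaging; a minimal sketch (the timed callable here is a stand-in, not an actual optimizer):

```python
import time

def timed_runs(optimizer, n_runs=3):
    """Average wall-clock time of n_runs repeated calls to optimizer(),
    as used for per-algorithm execution-time comparisons."""
    elapsed = []
    for _ in range(n_runs):
        start = time.perf_counter()
        optimizer()
        elapsed.append(time.perf_counter() - start)
    return sum(elapsed) / n_runs

# Placeholder workload standing in for one optimizer run:
avg_seconds = timed_runs(lambda: sum(i * i for i in range(10_000)))
```

`time.perf_counter` is a monotonic high-resolution clock, so differences between its readings are suitable for comparing algorithm runtimes on the same machine.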

• Average classification accuracy (ACC): The mean accuracy achieved by assessing the chosen subset of features using an optimizer across N repetitions.

ACC = (Σ_{i=1}^{N} ACCi) / N (33)

• The standard deviation of classification accuracy (STD): The discrepancy between the classification accuracy achieved in each run and the average accuracy, calculated by subtracting each run's accuracy from the average accuracy. This statistic quantifies the level of accuracy achieved during N repetitions, serving as a measure of quality.

STD = sqrt(Σ_{i=1}^{N} (ACCi − ACC)^2 / (N − 1)) (34)
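Equations (33) and (34) are the sample mean and the sample standard deviation (divisor N − 1) of the per-run accuracies, and the feature-count metrics of Eqs. (35) and (36) apply the same statistics to the selected-feature counts; a minimal check using Python's standard library:

```python
import statistics

def accuracy_metrics(run_accuracies):
    """Eq. (33): mean accuracy over N runs; Eq. (34): its sample standard
    deviation, which statistics.stdev computes with N - 1 in the denominator."""
    acc = statistics.mean(run_accuracies)
    std = statistics.stdev(run_accuracies)
    return acc, std

# Made-up accuracies from 5 independent runs:
acc, std = accuracy_metrics([0.95, 0.96, 0.94, 0.95, 0.95])
```

The same two calls applied to the per-run numbers of selected features give SF and STDf of Eqs. (35) and (36).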


Table 11
Dataset description.
Dataset #ofSamples #ofFeatures
Zoo 101 16
CongressEW 435 16
BreastEW 569 30
Vote 300 16
ALLAML 72 7129
CLEAN 476 166
Lymphography 148 18
HeartEW 270 13
M-of-n 1000 13
Waveform 5000 40
Parkinsons 195 22
Prostate_GE 102 5966
Exactly2 1000 13
PenglungEW 73 325

• Average selected features (SF): The mean number of features selected by an optimizer throughout N repetitions.

SF = (Σ_{i=1}^{N} fi) / N (35)

where fi is the number of selected features in run i.

• The standard deviation of the selected number of features (STDf): A metric used to quantify the number of selected features in each repetition to assess the stability of an optimizer when selecting the optimal subset of features.

STDf = sqrt(Σ_{i=1}^{N} (SFi − SF)^2 / (N − 1)) (36)

6.4. Analysis and discussion of results

This section evaluates and measures the effectiveness of ACRIME in comparison to various classical, recent, and advanced metaheuristics-based FS algorithms. The evaluation of ACRIME in FS involves the assessment of several metrics, such as the standard deviation of the obtained accuracy, the average accuracy, the average number of obtained features, and the standard deviation of the obtained relevant features. This experiment utilizes eleven distinct optimization algorithms to assess the quality of ACRIME. These algorithms include bINFO, BPSO, bRIME, bISSA, bRLTLBO, BBA, TACPSO, COBHCOOT, IPSO, BMFO, and BWOA. These algorithms range from classical to current, and certain complex algorithms are utilized for FS problems. The average accuracy achieved in 30 distinct and separate trials is shown in Table 12. The final row of this table displays the Friedman rank test, which is used to evaluate the various algorithms based on their categorization error. Table 12 demonstrates that the binary version of ACRIME outperforms all other methods, achieving the highest accuracy across the majority of datasets.

The algorithms bINFO, bRLTLBO, and ACRIME achieved a perfect accuracy of 1.0000, surpassing all other algorithms on the ALLAML dataset. In the case of the Prostate_GE dataset, both bINFO and ACRIME achieved comparable classification accuracy, surpassing all other techniques. In terms of average accuracy, ACRIME outperformed the other models in ten out of fourteen datasets. Therefore, the Friedman rank of ACRIME is 11.54, making it the top-ranked algorithm among all others. Table 13 presents comparable results, specifically the standard deviation (STD) of the classification accuracy. Of the fourteen datasets, eight demonstrated that ACRIME performs better than the other methods. This suggests that ACRIME has a stable ability to handle FS difficulties. ACRIME utilized adaptive mutualism and chaotic initialization to efficiently identify the most advantageous subset of features throughout each iteration. This iterative approach consistently enhanced the selected subset of features, resulting in a superior feature subset for the classification procedure. In addition, the classification method of the provided inadequate solution was effective in removing the worst subset of features to enhance the accuracy by improving these features.

When considering the average and standard deviation of the selected features, ACRIME consistently achieves outstanding performance by obtaining the smallest subset of features across most datasets while maintaining the highest accuracy. Tables 14 and 15 contain data on the average number of selected features and the standard deviation of the selected number of features, respectively. This data was collected from 30 distinct runs. Table 14 shows that the ACRIME method achieved the lowest number of selected features in twelve of fourteen datasets, which is better than all other techniques. The advanced method bINFO demonstrates superiority over ACRIME in feature selection on a single FS dataset, whereas COBHCOOT outperforms ACRIME on the other one. The average number of selected features is ranked using the Friedman rank and given in Table 14. ACRIME is ranked first in this table. Table 15 demonstrates that the ACRIME algorithm outperforms the others in terms of the standard deviation of the selected number of features across 30 distinct runs. Following ACRIME, the PSO algorithm ranks second. This is due to the recent enhancements that direct the presumed agents towards the optimal minimum number of features that align with the employed fitness function.

Fig. 9 illustrates the rate at which the best fitness values converge over 100 iterations. Fig. 9 displays the convergence curves of fourteen different FS datasets using twelve different comparative algorithms. ACRIME had superior convergence speed compared to the other methods across the majority of datasets. In the Prostate_GE dataset, the ACRIME method achieved the highest fitness value before the 30th iteration, surpassing all other algorithms. While ACRIME and bRIME achieved

Table 12
Average classification accuracy of ACRIME and other competitors.
BPSO bINFO bISSA BBA bRIME BWOA bRLTLBO IPSO BMFO COBHCOOT TACPSO ACRIME

ALLAML 0.9821 0.9942 0.9642 0.9333 0.9624 0.9641 1.0000 0.9522 0.9442 0.9227 0.9401 1.0000
BreastEW 0.9525 0.9212 0.9533 0.9513 0.9550 0.9534 0.9683 0.9213 0.9579 0.9632 0.9583 0.9631
CongressEW 0.9642 0.9637 0.9623 0.9633 0.9629 0.9612 0.9635 0.9542 0.9532 0.9558 0.9645 0.9696
Exactly2 0.7622 0.7933 0.7555 0.7855 0.7213 0.7627 0.7962 0.7933 0.7707 0.7993 0.7655 0.7902
HeartEW 0.8235 0.8737 0.8826 0.8255 0.8027 0.7913 0.8699 0.8835 0.8523 0.8896 0.8893 0.8891
CLEAN 0.8842 0.8852 0.8535 0.8219 0.8274 0.8125 0.8812 0.8014 0.8247 0.8746 0.8641 0.8892
Zoo 0.9523 0.9593 0.9622 0.9542 0.9433 0.9372 0.9681 0.9664 0.9501 0.9651 0.9542 0.9908
Vote 0.9532 0.9691 0.9632 0.9672 0.9537 0.9181 0.9649 0.9623 0.9677 0.9682 0.9671 0.9635
Waveform 0.7741 0.7931 0.7636 0.7651 0.8143 0.8251 0.7943 0.7841 0.8041 0.8241 0.8125 0.8256
Lymphography 0.9041 0.9020 0.9083 0.8432 0.8839 0.8647 0.9333 0.8435 0.9166 0.8289 0.8571 0.9416
M-of-n 0.9433 0.9712 0.9521 0.9429 0.9553 0.8972 0.9643 0.9374 0.9419 0.9649 0.9527 1.0000
Parkinsons 0.7739 0.8236 0.8129 0.7793 0.7937 0.7941 0.8127 0.8259 0.8145 0.8496 0.8274 0.8522
PenglungEW 0.9633 0.9729 0.9504 0.8919 0.8759 0.8872 0.9123 0.8846 0.8235 0.8143 0.8237 0.9972
Prostate_GE 0.9941 1.0000 1.0000 0.9253 0.9832 0.9418 0.9842 0.9647 0.9327 0.9941 0.9438 1.0000
Friedman rank 5.75 8.54 6.36 4.18 4.86 4.00 8.68 5.04 5.14 7.75 6.82 10.89
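The Friedman ranks reported in the final rows of Tables 12–15 are average per-dataset ranks. A minimal sketch of the computation (assuming, as Table 12 suggests, that ranks are assigned in ascending order of the score so the best-scoring algorithm on each dataset receives the highest rank, with ties averaged; this helper is illustrative, not the authors' code):

```python
def average_friedman_ranks(scores):
    # scores: one row per dataset, one column per algorithm.
    # Rank each row ascending (higher score -> higher rank), average ties,
    # then average the ranks column-wise across all datasets.
    n_algo = len(scores[0])
    totals = [0.0] * n_algo
    for row in scores:
        order = sorted(range(n_algo), key=lambda j: row[j])
        ranks = [0.0] * n_algo
        i = 0
        while i < n_algo:
            k = i
            while k + 1 < n_algo and row[order[k + 1]] == row[order[i]]:
                k += 1  # extend the run of tied values
            avg = (i + k) / 2 + 1  # mean of the tied 1-based rank positions
            for idx in order[i:k + 1]:
                ranks[idx] = avg
            i = k + 1
        for j in range(n_algo):
            totals[j] += ranks[j]
    return [t / len(scores) for t in totals]
```

With twelve algorithms the best possible average rank under this convention is 12, consistent with ACRIME's 10.89 in Table 12; Tables 13–15 invert the convention because lower values are better there.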

19
M. Abdel-Salam et al. Computers in Biology and Medicine 179 (2024) 108803

Table 13
STD of classification accuracy of ACRIME and other competitors.
BPSO bINFO bISSA BBA bRIME BWOA bRLTLBO IPSO BMFO COBHCOOT TACPSO ACRIME

ALLAML 0.0000 0.0000 0.0000 0.0337 0.0220 0.0354 0.0000 0.0132 0.0027 0.0009 0.0151 0.0000
BreastEW 0.0033 0.0033 0.0002 0.0053 0.0049 0.0141 0.0342 0.0176 0.0327 0.0010 0.0043 0.0004
CongressEW 0.0271 0.0044 0.0014 0.0196 0.0279 0.0164 0.0092 0.0541 0.0078 0.0062 0.0084 0.0005
Exactly2 0.0131 0.0052 0.0039 0.0061 0.0132 0.0014 0.0033 0.0049 0.0056 0.0041 0.0072 0.0039
HeartEW 0.0274 0.0354 0.0379 0.0542 0.0379 0.0406 0.0381 0.0337 0.0305 0.0408 0.0339 0.0026
CLEAN 0.1436 0.1767 0.0152 0.1139 0.0982 0.1752 0.2227 0.0792 0.1971 0.0071 0.0182 0.0037
Zoo 0.0141 0.0352 0.0219 0.0171 0.0213 0.0141 0.0082 0.0367 0.0171 0.0093 0.0366 0.0041
Vote 0.0271 0.0035 0.0171 0.0352 0.0032 0.0387 0.0171 0.0152 0.0099 0.0067 0.0081 0.0053
Waveform 0.0141 0.0174 0.1063 0.1152 0.1217 0.0973 0.1036 0.0982 0.1117 0.0867 0.0117 0.0003
Lymphography 0.0142 0.0325 0.0371 0.0373 0.0542 0.0419 0.0382 0.0382 0.0391 0.0317 0.0436 0.0174
M-of-n 0.0471 0.0382 0.0517 0.0473 0.0516 0.0671 0.0371 0.0392 0.0472 0.0139 0.0371 0.0000
Parkinsons 0.0223 0.0219 0.0371 0.0334 0.0206 0.0402 0.0214 0.0398 0.0341 0.0274 0.0325 0.0497
PenglungEW 0.0574 0.0316 0.0372 0.0389 0.0416 0.0307 0.0472 0.0438 0.0379 0.0308 0.0272 0.0182
Prostate_GE 0.0000 0.0000 0.0000 0.0096 0.0372 0.0127 0.0000 0.0067 0.0182 0.0002 0.0081 0.0000
Friedman rank 5.93 4.96 5.75 8.82 8.46 8.39 6.46 7.96 7.89 4.50 6.32 2.54

Table 14
Average number of selected features of ACRIME and other competitors.
BPSO bINFO bISSA BBA bRIME BWOA bRLTLBO IPSO BMFO COBHCOOT TACPSO ACRIME

ALLAML 4.62 14.90 15.96 467.37 4.72 96.73 3.72 217.73 353.13 272.96 3011.61 3.17
BreastEW 3.02 3.00 3.17 12.71 4.19 5.52 3.79 9.25 9.41 13.91 8.76 3.14
CongressEW 4.74 5.63 5.71 6.46 5.83 4.97 4.98 5.12 6.73 8.32 5.76 4.31
Exactly2 2.19 5.52 3.94 5.76 3.78 1.36 3.39 5.78 5.49 1.42 5.97 4.46
HeartEW 4.66 5.37 5.76 6.82 3.19 4.74 5.16 6.42 5.72 5.17 6.12 3.02
CLEAN 80.14 73.14 77.52 101.17 110.14 83.14 80.73 88.19 92.36 83.12 114.52 85.25
Zoo 4.41 8.82 5.21 16.13 3.74 5.82 6.13 8.72 14.16 3.30 7.81 2.18
Vote 14.90 33.71 20.45 19.53 17.71 22.79 16.74 28.19 19.21 12.18 16.94 8.36
Waveform 22.41 37.72 25.48 36.85 40.19 39.23 41.49 25.72 24.59 19.46 25.73 19.14
Lymphography 6.25 7.41 5.76 8.85 4.72 5.72 6.36 8.76 8.82 7.47 7.25 4.83
M-of-n 6.74 9.63 7.25 9.82 7.19 8.52 7.71 12.25 8.76 12.72 9.19 4.63
Parkinsons 4.52 6.78 3.19 8.25 3.78 4.49 3.52 5.79 7.25 5.85 5.58 2.28
PenglungEW 8.63 6.25 12.74 178.52 8.52 42.72 9.25 214.73 325.72 12.19 123.35 4.85
Prostate_GE 16.85 12.74 15.25 1438.65 9.74 143.19 12.52 1204.41 2931.52 62.18 2312.71 8.96
Friedman rank 3.50 6.64 5.36 10.43 5.00 6.00 4.93 9.07 9.36 6.79 8.86 2.07

Table 15
The standard deviation of selected features of ACRIME and other competitors.
BPSO bINFO bISSA BBA bRIME BWOA bRLTLBO IPSO BMFO COBHCOOT TACPSO ACRIME

ALLAML 3.1408 325.4196 6.1473 44.1962 4.1921 73.3562 0.8471 93.3592 53.4196 76.1396 123.7419 0.4942
BreastEW 0.0000 0.0000 0.1598 4.3349 0.5412 4.1003 0.5298 4.7495 5.7585 4.6632 3.7896 0.7252
CongressEW 4.2574 4.3674 1.2574 3.1496 1.5209 4.5509 3.6349 3.6509 3.5298 4.4825 3.5592 0.0000
Exactly2 3.3674 1.1429 3.3674 1.3574 4.6512 3.6627 3.4585 1.9674 1.2574 0.8241 3.3674 1.6274
HeartEW 3.3674 3.8519 0.3674 3.8526 3.3574 3.7195 3.2597 0.3621 3.5217 1.5519 0.7412 0.2419
CLEAN 2.1423 3.7496 2.1452 4.5298 3.8574 4.0025 4.9132 2.1450 3.1705 3.9812 2.1492 2.1475
Zoo 3.4002 2.6325 1.7781 1.2519 3.5294 3.1421 0.7936 3.3324 3.2279 3.4219 4.3674 1.5521
Vote 4.5327 3.4214 3.1745 3.2214 3.2974 4.4278 4.4403 3.2574 3.5503 3.7481 3.9930 1.2201
Waveform 1.0142 2.0098 1.1751 2.7491 3.0185 3.2174 2.5173 1.0214 1.1152 2.2413 3.1048 2.0041
Lymphography 4.6385 2.7002 3.1297 1.8825 3.3674 3.5741 3.1102 3.1402 3.2593 3.7458 1.4201 0.2647
M-of-n 0.9674 1.2687 0.8529 1.4621 0.7452 1.9912 1.5263 1.0014 1.8544 1.4852 1.7452 0.0000
Parkinsons 0.5274 3.1974 0.7742 3.1695 0.8521 1.7452 0.3364 1.7125 3.1025 1.7452 0.2569 0.3697
PenglungEW 3.3367 1.0025 3.4251 3.2634 1.2321 8.3610 1.5216 6.3321 4.2014 3.3602 4.1423 1.0152
Prostate_GE 3.4285 5.6325 12.3674 3.2214 8.8541 73.1196 1.2549 73.3697 43.2579 43.3697 82.2001 1.0201
Friedman rank 5.89 6.32 4.50 6.36 6.50 9.96 5.93 6.71 7.57 8.11 7.64 2.50

comparable fitness values after iterations, ACRIME demonstrated a higher convergence speed. Furthermore, in the case of the ALLAML dataset, the ACRIME algorithm outperformed all other algorithms by achieving the lowest fitness value during the 20th iteration. On the other hand, alternative algorithms were unable to attain equivalent levels of fitness. ACRIME outperformed other models in terms of performance when tested on certain datasets. Additionally, ACRIME demonstrated greater performance by achieving the highest fitness value during the initial iterations. ACRIME, when implemented with the proposed enhancements, demonstrates superior convergence speed and achieves the highest fitness values for the majority of datasets. The adaptive mutualism phase and chaotic improvements, especially when applied to datasets with a limited number of features, have the potential to yield improved outcomes with faster convergence speed.

6.5. Application of proposed ACRIME to a real-world problem: COVID-19 case study

COVID-19 is an infectious disease caused by the SARS-CoV-2 virus that appeared in the second half of 2019 and was found to spread fast around the world [67]. The first case, reported in Wuhan, China, in December 2019, was declared a global crisis by the World Health Organization (WHO) [68]. COVID-19 is characterized by rapidly increasing suspected cases and mortality rates. It is a critical situation to


Fig. 9. Convergence curves of different FS datasets.
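Convergence curves like those in Fig. 9 are typically produced by logging the best fitness found so far at every iteration. A minimal sketch (the `optimizer_step` callback standing in for one iteration of an optimizer such as RIME/ACRIME is hypothetical; fitness is minimized):

```python
def track_convergence(optimizer_step, init_fitness, iters=100):
    # Record the best (lowest) fitness seen so far after each iteration,
    # yielding the monotonically non-increasing curve that convergence
    # plots display.
    fitness = list(init_fitness)
    best, curve = min(fitness), []
    for _ in range(iters):
        fitness = optimizer_step(fitness)  # one iteration of the optimizer
        best = min(best, min(fitness))
        curve.append(best)
    return curve
```

Plotting `curve` against the iteration index for each algorithm on shared axes reproduces a comparison in the style of Fig. 9.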

develop an effective method for the detection of suspected cases and isolation from other people. Several optimization methods are designed to combat COVID-19 for tasks like monitoring [69], screening [70], diagnosis [71], and prediction [72]. In the wake of the current studies, the research focuses on designing diagnosis methods using features extracted from the clinical dataset [73].

In this context, ACRIME is evaluated for its applicability and performance over the novel coronavirus 2019 dataset [74], which is a preprocessed dataset from the original dataset of COVID-19 [75]. The preprocessed dataset is given in Table 16. Categorical columns from the preprocessed dataset were encoded numerically in a manner in which each unique value from a categorical column gets a certain number.

The experiment was run 30 times to assess the performance of ACRIME against the other comparative algorithms. All parameters for all the algorithms were set based on the specifications described in Subsection 6.1. In each run, a model was built using the k-nearest neighbor (k-NN) classifier, using a value of k equal to 5, under the 5-fold cross-validation method. Calibration curves were used to visually study the accuracy of the probability.

Experimental results showing the average fitness value obtained by


Table 16
COVID-19 dataset description.
Feature Description

Country The patient's country
Location The patient's location
Age The patient's age
Gender The patient's gender
vis_wuhan Whether the patient visited the city of Wuhan
from_wuhan Whether the patient is from the city of Wuhan
Symp 1 Fever
Symp 2 Cough
Symp 3 Cold
Symp 4 Fatigue
Symp 5 Body pain
Symp 6 Malaise
diff_sym_hos The time elapsed between the onset of symptoms being noticed and admission to the hospital
Class_label Recovery or Death
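The numerical encoding of the categorical columns in Table 16 can be sketched as a simple first-appearance label encoder (the sample values are illustrative, not drawn from the dataset):

```python
def label_encode(column):
    # Map each unique categorical value to a consecutive integer code,
    # assigned in order of first appearance in the column.
    codes = {}
    return [codes.setdefault(value, len(codes)) for value in column]

# e.g. encoding a Country-like column of string categories
encoded = label_encode(["China", "Italy", "China", "USA"])  # [0, 1, 0, 2]
```

Applying this per column turns the categorical features into the numeric matrix the feature-selection algorithms operate on.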

Fig. 11. The average classification accuracy was obtained based on the COVID-19 dataset.

all algorithms are depicted in Fig. 10. Results validate the performance of the proposed algorithm, which selected the best feature subset compared to other candidate algorithms. Similarly, Fig. 11 compares the performance of the algorithms by the number of correctly classified instances. ACRIME ranked at the top position with an accuracy of 93.44 %, followed closely by BPSO with an accuracy of 93.41 %. On the other hand, Fig. 12 discusses the total features obtained during each run. It is observed that the ACRIME algorithm performed better by achieving higher classification accuracy with lower numbers of selected features compared to all other algorithms.
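The evaluation pipeline used throughout this case study, a k-NN classifier scoring each candidate feature subset, can be sketched as below. The fitness weighting `alpha` and the leave-one-out shortcut are illustrative assumptions, not the authors' exact formulation (the study uses 5-fold cross-validation with k = 5):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=5):
    # Vote among the k nearest training rows (Euclidean distance).
    nearest = sorted(
        (math.dist(row, x), label) for row, label in zip(train_X, train_y)
    )[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def subset_fitness(X, y, subset, alpha=0.99, k=5):
    # Hypothetical wrapper objective (to be minimized): a weighted sum of
    # the k-NN error rate and the fraction of features kept.
    cols = [j for j, bit in enumerate(subset) if bit]
    if not cols:
        return 1.0  # an empty subset cannot classify anything
    Xs = [[row[j] for j in cols] for row in X]
    errors = sum(
        knn_predict(Xs[:i] + Xs[i + 1:], y[:i] + y[i + 1:], Xs[i], k) != y[i]
        for i in range(len(Xs))
    )  # leave-one-out here for brevity; the study uses 5-fold CV
    return alpha * errors / len(Xs) + (1 - alpha) * len(cols) / len(subset)
```

An optimizer such as ACRIME would search over binary `subset` vectors, calling `subset_fitness` as its objective, which is why accuracy and subset size (Figs. 11 and 12) trade off against each other.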

6.6. Limitations of the proposed ACRIME

While the proposed algorithm shows suitable performance for low-dimensional as well as high-dimensional global optimization and feature selection tasks, it bears the following limitations.

• Randomization-based optimization: Because ACRIME is a randomization-based optimization algorithm, the chosen features for the feature selection problem might be different every time. There is no guarantee that a feature subset chosen in one run would be replicated in another, potentially causing confusion among users.
• ACRIME is mainly designed for single-objective problems. However, for real-world scenarios, the problem becomes a multi-objective problem for which the framework needs to be extended. Also, the feature selection problem for high-dimensional datasets with large samples requires further exploration and development.
• Increased running time: The integration of four improvement strategies may lead to increased running time; however, this extra time is the cost of delivering better performance than the original RIME.
• In this study, we used the kNN algorithm as the learning algorithm in a wrapper-based feature selection technique because it is the simplest learning algorithm and requires minimal computational power. However, the limitations of kNN should be considered: it has a low learning rate and is unable to handle noisy input data.
• The NFL theorem proves that no single optimization technique can best solve all optimization problems. Even though the ACRIME algorithm follows the same rule as other metaheuristic algorithms, the authors claim that it is better than many of the state-of-the-art and advanced algorithms under most settings.

Fig. 12. The average number of selected features obtained based on the COVID-19 real dataset.

Fig. 10. The average and STD fitness values were obtained based on the COVID-19 dataset.

In summary, from the analysis of the experimental results, it can be concluded that ACRIME is reliable when applied to handle continuous problems and has the potential for further enhancement in handling discrete feature selection tasks. Looking ahead, the powerful global optimization capability of ACRIME proposed in this work implies applications in different domains: image segmentation [23], medical image augmentation [76,77], train delay scheduling [78], CT image processing methods [79], EEG decoding [80], and automated segmentation [81]. Furthermore, the performance characteristics of ACRIME


suggest its possible success in handling even more complex feature spaces related to drug discovery [82,83], remote photoplethysmography [84], and disease identification [85,86].

7. Conclusion and future work

This paper proposed an enhanced version of the RIME optimization algorithm named ACRIME. ACRIME utilizes four strategies to enhance the original RIME: a novel chaotic population initialization strategy, a novel adaptive mutualism phase, a mixed mutation strategy, and a restart strategy. By implementing these strategies, ACRIME can avoid local optima and achieve a smooth transition/balance between the exploration and exploitation stages. The performance of ACRIME is verified using the CEC2005 classical benchmark functions and the CEC2019 benchmark test suite. Additionally, a variety of graphical representation methods are offered, including convergence curves. The results indicate ACRIME's outstanding performance. Additionally, the introduced ACRIME is utilized to enhance the feature selection problem by employing a collection of fourteen widely recognized datasets and classification methods. Moreover, using ACRIME as a feature extractor greatly improved the performance of kNN classification. Ultimately, the experimental findings demonstrated superior classification outcomes for the suggested algorithm compared to alternative approaches. In future applications, the suggested multi-objective RIME optimization algorithm can be employed to address several other complex and practical optimization problems on a wide scale. Also, other tasks involving task scheduling and parameter identification can be handled.

CRediT authorship contribution statement

Mahmoud Abdel-Salam: Writing – original draft, Software, Methodology, Data curation. Gang Hu: Visualization, Supervision, Formal analysis, Conceptualization. Emre Çelik: Writing – review & editing, Validation, Supervision, Investigation, Formal analysis, Conceptualization. Farhad Soleimanian Gharehchopogh: Supervision, Project administration, Data curation, Conceptualization. Ibrahim M. EL-Hasnony: Visualization, Software, Methodology, Investigation, Formal analysis.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgement

Funding information is not available.

References

[1] J. Song, C. Chen, A.A. Heidari, J. Liu, H. Yu, H. Chen, Performance optimization of annealing salp swarm algorithm: frameworks and applications for engineering design, J. Computat. Desig. Eng. 9 (2022) 633–669.
[2] H. Yu, W. Li, C. Chen, J. Liang, W. Gui, M. Wang, H. Chen, Dynamic Gaussian bare-bones fruit fly optimizers with abandonment mechanism: method and analysis, Eng. Comput. (2020) 1–29.
[3] G.-G. Wang, S. Deb, A.H. Gandomi, A.H. Alavi, Opposition-based krill herd algorithm with Cauchy mutation and position clamping, Neurocomputing 177 (2016) 147–157.
[4] Z. Wang, H. Ding, J. Wang, P. Hou, A. Li, Z. Yang, X. Hu, Adaptive guided salp swarm algorithm with velocity clamping mechanism for solving optimization problems, J. Computat. Desig. Eng. 9 (2022) 2196–2234.
[5] H. Su, D. Zhao, F. Yu, A.A. Heidari, Z. Xu, F.S. Alotaibi, M. Mafarja, H. Chen, A horizontal and vertical crossover cuckoo search: optimizing performance for the engineering problems, J. Computat. Desig. Eng. 10 (2023) 36–64.
[6] H.R. Lourenço, O.C. Martin, T. Stützle, Iterated Local Search: Framework and Applications, Handbook of Metaheuristics, 2019, pp. 129–168.
[7] E. Taillard, Tabu search, Metaheuristics (2016) 51–76.
[8] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671–680.
[9] S. Mirjalili, S.M. Mirjalili, A. Lewis, Grey wolf optimizer, Adv. Eng. Software 69 (2014) 46–61, https://doi.org/10.1016/j.advengsoft.2013.12.007.
[10] E. Rashedi, H. Nezamabadi-Pour, S. Saryazdi, GSA: a gravitational search algorithm, Inf. Sci. 179 (2009) 2232–2248.
[11] L. Abualigah, A. Diabat, S. Mirjalili, M. Abd Elaziz, A.H. Gandomi, The arithmetic optimization algorithm, Comput. Methods Appl. Mech. Eng. 376 (2021) 113609.
[12] B. Abdollahzadeh, F. Soleimanian Gharehchopogh, S. Mirjalili, Artificial gorilla troops optimizer: a new nature-inspired metaheuristic algorithm for global optimization problems, Int. J. Intell. Syst. 36 (2021) 5887–5958.
[13] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of ICNN'95-International Conference on Neural Networks, IEEE, 1995, pp. 1942–1948.
[14] G. Dhiman, V. Kumar, Seagull optimization algorithm: theory and its applications for large-scale industrial engineering problems, Knowl. Base Syst. 165 (2019) 169–196.
[15] H. Shayanfar, F.S. Gharehchopogh, Farmland fertility: a new metaheuristic algorithm for solving continuous optimization problems, Appl. Soft Comput. 71 (2018) 728–746.
[16] S. Mirjalili, SCA: a sine cosine algorithm for solving optimization problems, Knowl. Base Syst. 96 (2016) 120–133.
[17] M. Mafarja, S. Mirjalili, Whale optimization approaches for wrapper feature selection, Appl. Soft Comput. 62 (2018) 441–453, https://doi.org/10.1016/j.asoc.2017.11.006.
[18] J. Xue, B. Shen, Dung beetle optimizer: a new meta-heuristic algorithm for global optimization, J. Supercomput. 79 (2023) 7305–7336.
[19] B. Abdollahzadeh, F.S. Gharehchopogh, S. Mirjalili, African vultures optimization algorithm: a new nature-inspired metaheuristic algorithm for global optimization problems, Comput. Ind. Eng. 158 (2021) 107408.
[20] N. Chopra, M.M. Ansari, Golden jackal optimization: a novel nature-inspired optimizer for engineering applications, Expert Syst. Appl. 198 (2022) 116924, https://doi.org/10.1016/j.eswa.2022.116924.
[21] B. Abdollahzadeh, F.S. Gharehchopogh, N. Khodadadi, S. Mirjalili, Mountain gazelle optimizer: a new nature-inspired metaheuristic algorithm for global optimization problems, Adv. Eng. Software 174 (2022) 103282.
[22] N. Singh, S. Singh, E.H. Houssein, Hybridizing salp swarm algorithm with particle swarm optimization algorithm for recent optimization functions, Evolutionary Intelligence 15 (2022) 23–56.
[23] J. Wang, J. Bei, H. Song, H. Zhang, P. Zhang, A whale optimization algorithm with combined mutation and removing similarity for global optimization and multilevel thresholding image segmentation, Appl. Soft Comput. 137 (2023) 110130.
[24] S. Chakraborty, A.K. Saha, A.E. Ezugwu, R. Chakraborty, A. Saha, Horizontal crossover and co-operative hunting-based Whale Optimization Algorithm for feature selection, Knowl. Base Syst. 282 (2023) 111108.
[25] H. Hu, W. Shan, J. Chen, L. Xing, A.A. Heidari, H. Chen, X. He, M. Wang, Dynamic individual selection and crossover boosted forensic-based investigation algorithm for global optimization and feature selection, Journal of Bionic Engineering 20 (2023) 2416–2442.
[26] H. Su, D. Zhao, A.A. Heidari, L. Liu, X. Zhang, M. Mafarja, H. Chen, RIME: a physics-based optimization, Neurocomputing 532 (2023) 183–214.
[27] L. Guo, L. Liu, Z. Zhao, X. Xia, An improved RIME optimization algorithm for lung cancer image segmentation, Comput. Biol. Med. (2024) 108219.
[28] A.A. Ismaeel, E.H. Houssein, D.S. Khafaga, E.A. Aldakheel, M. Said, Performance of rime-ice algorithm for estimating the PEM fuel cell parameters, Energy Rep. 11 (2024) 3641–3652.
[29] R. Zhong, J. Yu, C. Zhang, M. Munetomo, SRIME: a strengthened RIME with Latin hypercube sampling and embedded distance-based selection for engineering optimization problems, Neural Comput. Appl. (2024) 1–20.
[30] M. Tubishat, M. Alswaitti, S. Mirjalili, M.A. Al-Garadi, T.A. Rana, Dynamic butterfly optimization algorithm for feature selection, IEEE Access 8 (2020) 194303–194314.
[31] B.D. Kwakye, Y. Li, H.H. Mohamed, E. Baidoo, T.Q. Asenso, Particle guided metaheuristic algorithm for global optimization and feature selection problems, Expert Syst. Appl. 248 (2024) 123362.
[32] H. Askr, M. Abdel-Salam, A.E. Hassanien, Copula entropy-based golden jackal optimization algorithm for high-dimensional feature selection problems, Expert Syst. Appl. 238 (2024) 121582.
[33] A.A. Ewees, F.H. Ismail, A.T. Sahlol, Gradient-based optimizer improved by Slime Mould Algorithm for global optimization and feature selection for diverse computation problems, Expert Syst. Appl. 213 (2023) 118872.
[34] E.H. Houssein, D. Oliva, E. Celik, M.M. Emam, R.M. Ghoniem, Boosted sooty tern optimization algorithm for global optimization and feature selection, Expert Syst. Appl. 213 (2023) 119015.
[35] A. Chhabra, A.G. Hussien, F.A. Hashim, Improved bald eagle search algorithm for global optimization and feature selection, Alex. Eng. J. 68 (2023) 141–180.
[36] G. Hu, B. Du, X. Wang, G. Wei, An enhanced black widow optimization algorithm for feature selection, Knowl. Base Syst. 235 (2022) 107638.
[37] H. Pan, S. Chen, H. Xiong, A high-dimensional feature selection method based on modified Gray Wolf Optimization, Appl. Soft Comput. 135 (2023) 110031.
[38] Y. Zhang, H.-G. Li, Q. Wang, C. Peng, A filter-based bare-bone particle swarm optimization algorithm for unsupervised feature selection, Appl. Intell. 49 (2019) 2889–2898.
[39] R.M. Hussien, A.A. Abohany, A.A. Abd El-Mageed, K.M. Hosny, Improved Binary Meerkat Optimization Algorithm for efficient feature selection of supervised learning classification, Knowl. Base Syst. (2024) 111616.
[40] M.A. Awadallah, M.A. Al-Betar, M.S. Braik, A.I. Hammouri, I.A. Doush, R.A. Zitar, An enhanced binary Rat Swarm Optimizer based on local-best concepts of PSO and collaborative crossover operators for feature selection, Comput. Biol. Med. 147 (2022) 105675.
[41] M.A. Awadallah, A.I. Hammouri, M.A. Al-Betar, M.S. Braik, M. Abd Elaziz, Binary Horse herd optimization algorithm with crossover operators for feature selection, Comput. Biol. Med. 141 (2022) 105152.
[42] W. Ma, X. Zhou, H. Zhu, L. Li, L. Jiao, A two-stage hybrid ant colony optimization for high-dimensional feature selection, Pattern Recogn. 116 (2021) 107933.
[43] X. Yu, W. Qin, X. Lin, Z. Shan, L. Huang, Q. Shao, L. Wang, M. Chen, Synergizing the enhanced RIME with fuzzy K-nearest neighbor for diagnose of pulmonary hypertension, Comput. Biol. Med. 165 (2023) 107408.
[44] Y. Zhu, W. Li, T. Li, A hybrid artificial immune optimization for high-dimensional feature selection, Knowl. Base Syst. 260 (2023) 110111.
[45] M.-Y. Cheng, D. Prayogo, Symbiotic organisms search: a new metaheuristic optimization algorithm, Comput. Struct. 139 (2014) 98–112.
[46] W. Yanling, Image scrambling method based on chaotic sequences and mapping, in: 2009 First International Workshop on Education Technology and Computer Science, IEEE, 2009, pp. 453–457.
[47] J. Gálvez, E. Cuevas, H. Becerra, O. Avalos, A hybrid optimization approach based on clustering and chaotic sequences, Int. J. Machine Learn. Cybernet. 11 (2020) 359–401.
[48] T.K. Ksheerasagar, S. Anuradha, G. Avadhootha, K.S.R. Charan, P.S.H. Rao, Performance analysis of DS-CDMA using different chaotic sequences, in: 2016 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET), IEEE, 2016, pp. 2421–2425.
[49] C. Li, N. Zhang, X. Lai, J. Zhou, Y. Xu, Design of a fractional-order PID controller for a pumped storage unit using a gravitational search algorithm based on the Cauchy and Gaussian mutation, Inf. Sci. 396 (2017) 162–181.
[50] C. Wen, H. Jia, D. Wu, H. Rao, S. Li, Q. Liu, L. Abualigah, Modified remora optimization algorithm with multistrategies for global optimization problem, Mathematics 10 (2022) 3604.
[51] J.G. Digalakis, K.G. Margaritis, An experimental study of benchmarking functions for genetic algorithms, Int. J. Comput. Math. 79 (2002) 403–416.
[52] K. Price, N. Awad, M. Ali, P. Suganthan, Problem Definitions and Evaluation Criteria for the 100-digit Challenge Special Session and Competition on Single Objective Numerical Optimization, Nanyang Technological University Singapore, 2018. Technical report.
[53] M. Friedman, The use of ranks to avoid the assumption of normality implicit in the analysis of variance, J. Am. Stat. Assoc. 32 (1937) 675–701.
[54] J. Derrac, S. García, D. Molina, F. Herrera, A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms, Swarm Evol. Comput. 1 (2011) 3–18.
[55] D. Dua, C. Graff, UCI Machine Learning Repository, 2017.
[56] A.N. Alkhateeb, Z.Y. Algamal, Variable selection in gamma regression model using chaotic firefly algorithm with application in chemometrics, Electronic J. Appl. Statist. Anal. 14 (2021) 266–276.
[57] X. Yuan, H. Nie, A. Su, L. Wang, Y. Yuan, An improved binary particle swarm optimization for unit commitment problem, Expert Syst. Appl. 36 (2009) 8049–8055.
[58] A.G. Hussien, D. Oliva, E.H. Houssein, A.A. Juan, X. Yu, Binary whale optimization algorithm for dimensionality reduction, Mathematics 8 (2020) 1821.
[59] S. Kashef, H. Nezamabadi-pour, An advanced ACO algorithm for feature subset selection, Neurocomputing 147 (2015) 271–279.
[60] S. Mirjalili, S.M. Mirjalili, X.-S. Yang, Binary bat algorithm, Neural Comput. Appl. 25 (2014) 663–681.
[61] G. Hu, J. Zhong, X. Wang, G. Wei, Multi-strategy assisted chaotic coot-inspired optimization algorithm for medical feature selection: a cervical cancer behavior risk study, Comput. Biol. Med. 151 (2022) 106239.
[62] M. Tubishat, N. Idris, L. Shuib, M.A. Abushariah, S. Mirjalili, Improved Salp Swarm Algorithm based on opposition based learning and novel local search algorithm for feature selection, Expert Syst. Appl. 145 (2020) 113122.
[63] K. Yu, X. Wang, Z. Wang, An improved teaching-learning-based optimization algorithm for numerical and engineering optimization problems, J. Intell. Manuf. 27 (2016) 831–843.
[64] P. Wu, L. Gao, D. Zou, S. Li, An improved particle swarm optimization algorithm for reliability problems, ISA Trans. 50 (2011) 71–81.
[65] A.R. Kashani, R. Chiong, S. Mirjalili, A.H. Gandomi, Particle swarm optimization variants for solving geotechnical problems: review and comparative analysis, Arch. Comput. Methods Eng. 28 (2021) 1871–1927.
[66] O.S. Qasim, N.A. Al-Thanoon, Z.Y. Algamal, Feature selection based on chaotic binary black hole algorithm for data classification, Chemometr. Intell. Lab. Syst. 204 (2020) 104104.
[67] H. Li, S.-M. Liu, X.-H. Yu, S.-L. Tang, C.-K. Tang, Coronavirus disease 2019 (COVID-19): current status and future perspectives, Int. J. Antimicrob. Agents 55 (2020) 105951.
[68] M.E. Chowdhury, T. Rahman, A. Khandakar, R. Mazhar, M.A. Kadir, Z.B. Mahbub, K.R. Islam, M.S. Khan, A. Iqbal, N. Al Emadi, Can AI help in screening viral and COVID-19 pneumonia? IEEE Access 8 (2020) 132665–132676.
[69] X. Kong, K. Wang, S. Wang, X. Wang, X. Jiang, Y. Guo, G. Shen, X. Chen, Q. Ni, Real-time mask identification for COVID-19: an edge-computing-based deep learning framework, IEEE Internet Things J. 8 (2021) 15929–15938.
[70] S. Jin, B. Wang, H. Xu, C. Luo, L. Wei, W. Zhao, X. Hou, W. Ma, Z. Xu, Z. Zheng, AI-assisted CT imaging analysis for COVID-19 screening: building and deploying a medical AI system in four weeks, medRxiv (2020) 20039354, 2020.2003.2019.
[71] G.L.F. Da Silva, T.L.A. Valente, A.C. Silva, A.C. De Paiva, M. Gattass, Convolutional neural network-based PSO for lung nodule false positive reduction on CT images, Comput. Methods Progr. Biomed. 162 (2018) 109–118.
[72] F.T. Fernandes, T.A. de Oliveira, C.E. Teixeira, A.F.d.M. Batista, G. Dalla Costa, A.D.P. Chiavegatto Filho, A multipurpose machine learning approach to predict COVID-19 negative prognosis in São Paulo, Brazil, Sci. Rep. 11 (2021) 3343.
[73] S.E. Snyder, G. Husari, Thor: A Deep Learning Approach for Face Mask Detection to Prevent the COVID-19 Pandemic, SoutheastCon 2021, IEEE, 2021, pp. 1–8.
[74] N.C. Virus, COVID-19 dataset, https://www.kaggle.com/datasets/sudalairajkumar/novel-corona-virus-2019-dataset.
[75] C. Iwendi, A.K. Bashir, A. Peshkar, R. Sujatha, J.M. Chatterjee, S. Pasupuleti, R. Mishra, S. Pillai, O. Jo, COVID-19 patient health prediction using boosted random forest algorithm, Front. Public Health 8 (2020) 357.
[76] Y. Li, Y. Zhang, W. Cui, B. Lei, X. Kuang, T. Zhang, Dual encoder-based dynamic-channel graph convolutional network with edge enhancement for retinal vessel segmentation, IEEE Trans. Med. Imag. 41 (2022) 1975–1989.
[77] Q. Guan, Y. Chen, Z. Wei, A.A. Heidari, H. Hu, X.-H. Yang, J. Zheng, Q. Zhou, H. Chen, F. Chen, Medical image augmentation for lesion detection using a texture-constrained multichannel progressive GAN, Comput. Biol. Med. 145 (2022) 105444.
[78] Y. Song, X. Cai, X. Zhou, B. Zhang, H. Chen, Y. Li, W. Deng, W. Deng, Dynamic hybrid mechanism-based differential evolution algorithm and its application, Expert Syst. Appl. 213 (2023) 118834.
[79] Y. Zhuang, N. Jiang, Y. Xu, Progressive distributed and parallel similarity retrieval of large CT image sequences in mobile telemedicine networks, Wireless Commun. Mobile Comput. 2022 (2022) 1–13.
[80] Y. Li, L. Guo, Y. Liu, J. Liu, F. Meng, A temporal-spectral-based squeeze-and-excitation feature fusion network for motor imagery EEG decoding, IEEE Trans. Neural Syst. Rehabil. Eng. 29 (2021) 1534–1545.
[81] Y. Jin, G. Yang, Y. Fang, R. Li, X. Xu, Y. Liu, X. Lai, 3D PBV-Net: an automated prostate MRI data segmentation method, Comput. Biol. Med. 128 (2021) 104160.
[82] F. Zhu, X.X. Li, S.Y. Yang, Y.Z. Chen, Clinical success of drug targets prospectively predicted by in silico study, Trends Pharmacol. Sci. 39 (2018) 229–231.
[83] Y.H. Li, X.X. Li, J.J. Hong, Y.X. Wang, J.B. Fu, H. Yang, C.Y. Yu, F.C. Li, J. Hu, W.W. Xue, Clinical trials, progression-speed differentiating features and swiftness rule of the innovative targets of first-in-class drugs, Briefings Bioinf. 21 (2020) 649–662.
[84] C. Zhao, H. Wang, H. Chen, W. Shi, Y. Feng, JAMSNet: a remote pulse extraction network based on joint attention and multi-scale fusion, IEEE Trans. Circ. Syst. Video Technol. 21 (2022) 2783–2797.
[85] Y. Chen, H. Gan, H. Chen, Y. Zeng, L. Xu, A.A. Heidari, X. Zhu, Y. Liu, Accurate iris segmentation and recognition using an end-to-end unified framework based on MADNet and DSANet, Neurocomputing 517 (2023) 264–278.
[86] Y. Su, S. Li, C. Zheng, X. Zhang, A heuristic algorithm for identifying molecular signatures in cancer, IEEE Trans. NanoBioscience 19 (2019) 132–141.