A Systematic Review of The Emerging Metaheuristic Algorithms On Solving Complex Optimization Problems
https://doi.org/10.1007/s00521-023-08481-5
ORIGINAL ARTICLE
Received: 2 November 2022 / Accepted: 8 March 2023 / Published online: 26 March 2023
© The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2023
Abstract
The scientific field of optimization has witnessed an increasing trend in the development of metaheuristic algorithms within the current decade. The vast majority of the proposed algorithms have been proclaimed by their own developers as superior and highly efficient compared to their contemporary counterparts, a claim that should be verified on a set of benchmark cases if it is to give conducive insights into their true capabilities. This study completes a comprehensive investigation of the general optimization capabilities of the recently developed nature-inspired metaheuristic algorithms, which have not been thoroughly discussed in past literature studies due to their new emergence. To overcome this deficiency in the existing literature, optimization benchmark problems with different functional characteristics are solved by some of the widely used recent optimizers. Unconstrained standard test functions comprised of thirty-four unimodal and multimodal scalable optimization problems with varying dimensionalities have been solved by these competitive algorithms, and the respective estimated solutions have been evaluated relying on performance metrics defined by the statistical analysis of the predictive results. Convergence curves of the algorithms have been construed to observe the evolution trends of objective function values. To further extend the analysis on unconstrained test cases, CEC 2013 problems have been considered as comparison tools since they resemble the features of real-world complex problems. The optimization capabilities of eleven metaheuristic algorithms have been comparatively analyzed on twenty-eight multidimensional problems. Finally, fourteen complex engineering problems have been optimized by the algorithms to scrutinize their effectiveness in handling the imposed design constraints.
Keywords Algorithm comparison · Algorithm scalability · Metaheuristic algorithms · Real-world design problems
responsible search agents, which enables them to escape the local pitfalls over the solution domain [1]. They can be perceived as a high-level optimization framework applicable to a wide range of problem domains, guided by a set of defined search strategies to develop an efficient heuristic algorithm. They can successfully cope with the challenges of nonlinear or non-convex problems while expending a relatively low computational budget compared to traditional optimizers. This prolific feature of metaheuristic optimizers makes them one step ahead of traditional optimizers such as Newton-based algorithms and gradient descent optimizers. Despite their ease of implementation, conventional optimization methods are easily trapped in local solutions and desperately stagnate at these obstructive points during the course of iterations, leading to inferior solution outcomes. Metaheuristic algorithms come to the rescue in these unprosperous situations where conventional methods collapse, and employ alternative options to overcome the obstacles of the optimization problem.

Metaheuristics can be broadly classified under four different main branches: evolutionary algorithms, physics-based algorithms, swarm-based algorithms, and human-based algorithms. Evolutionary Algorithms (EAs) simulate the tendencies of living organisms relying on foundational concepts of Darwinian-like natural selection [2] to develop intelligent optimization techniques. Genetic Algorithms [3] are the most famous member of EAs, imitating different aspects of biological evolution based on the principles of natural selection. Differential Evolution [4] is another reputed optimizer that iteratively adjusts the set of candidate solutions, simulating the basic principles of Darwinian evolution to achieve the optimal answer of the problem. Evolutionary Strategies (ES) [5], Genetic Programming (GP) [6], and Biogeography-Based Optimization (BBO) [7] also belong to the group of well-known EAs. Physics-based algorithms are conceptualized on the fundamental principles of Newton's laws of physics. The Gravitational Search Algorithm (GSA) proposed by Rashedi et al. [8] is one of the pioneers of the physics-based algorithms, imitating the gravitational interactions between masses. The Big Bang–Big Crunch (BB–BC) [9] algorithm is inspired by the evolution stages of the universe. In the incipient phase, the Big Bang occurs, in which trial solutions are generated to be used for manipulation in the later stage. The Big Crunch phase modifies the initial solutions iteratively to retain the global best answer of the problem. The Water Cycle Algorithm (WCA) [10] draws its inspiration from the water cycle process in nature, the formation of rivers and streams, and simulates how they flow toward the sea in the real world. Charged System Search (CSS) [11] is inspired by the governing mechanisms of Coulomb's law, where each search agent is an interactive charged particle influencing the others based on their respective fitness values and separation distances.

Swarm Intelligence (SI) algorithms are nature-inspired solution strategies that take their main foundations from the collective behaviors of self-organized and decentralized artificial systems. Development of the Particle Swarm Optimization (PSO) [12] algorithm is one of the first pioneering attempts in the gradual evolution of SI-based algorithms. PSO simulates the cooperative behavior of swarming individuals such as birds, insects, herds, etc. Each particle in the swarm takes different roles relying on its search patterns with a view to obtaining available food sources, and benefits from its previous search experiences and cumulative domain knowledge to adjust the most conducive search activity to reach the global optimum solution. Ant Colony Optimization (ACO) [13] mimics the intrinsic foraging behaviors of intelligent ants, which is based on following the intensity of the pheromones left by foraging ants on their way to probe around the available food resources. The Artificial Bee Colony (ABC) [14] algorithm is inspired by the intelligent foraging behaviors of artificial bees, which are, in essence, search agents to be iteratively optimized during the course of consecutive function evaluations. The Salp Swarm Optimization (SSA) [15] algorithm is a swarm intelligence metaphor-based optimizer taking its main inspiration from the swarming proclivities of salp individuals while they are navigating across the ocean during foraging activities.

Human-based metaheuristic algorithms are based on mathematical models and intelligently devised procedures mimicking the characteristics of human activities. Teaching–Learning-Based Optimization (TLBO) [16] is one of the most famous members of the human-based optimizer family, metaphorically simulating the mutualist teaching and learning process taking place in a classroom. The Poor and Rich Optimization (PRO) [17] algorithm simulates the tedious efforts between two distinct groups comprised of poor and rich individuals to improve their current wealth situations while sharing useful domain knowledge within the whole population. The Harmony Search Optimization Algorithm (HS) [18] is one of the prominent members belonging to this group, simulating the exhaustive process of a musician trying to find the perfect tune during harmony improvisation. The Imperialist Competitive Algorithm (ICA) [19] is a socio-political human-inspired metaheuristic algorithm conceptualized on the imperialist competition among evolving countries, which are essentially the search agents of this multi-agent optimization algorithm.

Despite the undisputable success of the literature metaheuristic algorithms, emerging novel algorithms come into existence to fill the gap for solving the complex optimization problems where available existing optimizers collapse and are not able to yield feasible solutions. In addition, the notable No Free Lunch theorem [20] explaining
why there is an unceasing need for developing brand-new algorithms despite the abundance of available optimizers of various types in the literature states that there is no single optimization algorithm capable of solving all kinds of optimization problems. It requires an exhaustive analysis employing various strategies to decide which algorithm performs well for a given optimization problem. Statistical analysis of the compiled set of objective function values has been consistently utilized in previous literature studies in order to decide the suitability of the associated algorithm for the given problem. Some algorithms get better results for specific optimization problems than the other methods. These are the main reasons accounting for the enormous exponential growth in the development of metaheuristic optimizers within the recent two decades. Competitive research studies related with the development of effective metaheuristic algorithms during the era between 2010 and 2020 are mostly inspired by either the swarming behavior of a flocking particle in its natural habitat or by simulating the evolutionary mechanisms of living organisms. In many optimization cases, these new emerging algorithms are able to provide the best possible answer of the problems, surpassing the former existing optimizers with respect to the considered comparative measures.

As the existing literature suggests, metaheuristic algorithms have been developed, modified, or hybridized with other intelligent algorithms for a period of more than two decades. These types of algorithms find their place in many fields of engineering applications, ranging from Proton Exchange Membrane Fuel Cell (PEMFC) design [21] to well placement optimization [22]. They are efficient problem-solving strategies even when the objective function is strictly constrained or characteristically has mixed-integer design variables. Continuous search agents are rounded off to their nearest integer values for mixed-integer optimization problems, and the traditional penalty approach is employed to penalize the infeasible solutions obtained during the iterations with a view to converting the constrained optimization problem to an unconstrained one. Decision parameters to be optimized are bounded to their allowable predefined upper and lower limits prior to commencing the optimization process. These features are common to nearly all metaheuristic algorithms and should be carefully practiced to acquire a feasible solution for the problem.

As the extensive literature survey indicates, metaheuristics have become a commonplace approach for solving different domains of optimization problems. However, a formidable question often confuses the mind as to their effectivity and convergence capabilities for a general class of problems. Although there are numerous emerging metaheuristic algorithms developed in the existing literature, only a few researchers discuss and address reasonable answers for this raised issue, as it is hard to deal with the complexities of real-world problems since the global optimum is intractable in most of the cases. The problem-independent feature of metaheuristic algorithms still remains questionable to the community, and arduous efforts have been paid by researchers to unravel the true nature of these algorithms since their first emergence. Recent studies try to shed light on this intriguing research subject, yet most of them consider a particular problem domain rather than looking from a broader perspective for investigating the comparative performances of the available metaheuristic algorithms. Kumar et al. [23] compared the performances of six metaheuristic optimizers including Iterated Local Search (ILS), Simulated Annealing (SA), Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Tabu Search (TS) and Crow Search Algorithm (CSA) on quadratic assignment problems and concluded that TS has the lowest deviation and average runtime among them, declaring its superiority among the contestant optimizers. Lara-Montano et al. [24] comparatively investigated the optimization performances of seven metaheuristic methods on a shell-and-tube heat exchanger design optimization problem having mixed-integer decision parameters. It is observed that the Differential Evolution and Grey Wolf Optimization (GWO) algorithms provide the most stable and accurate prediction results. Abdor-Sierra et al. [25] performed a comprehensive evaluation between ten different metaheuristic algorithms on solving the inverse kinematics problem of a robot manipulator. Statistical analysis of the algorithm runs reveals that different PSO variants along with the DE algorithm provide the most accurate estimation values and are highly recommended for solving the inverse kinematics of mobile robots. Sonmez [26] considered eight literature optimizers for their performance comparison on optimal design of space trusses and concluded that when the dimensionality of the design problem is increased to higher extents, computationally effective solutions are obtained from the Jaya, GWO, and ABC algorithms. Ahmet et al. [27] hybridized three emergent physics-inspired metaheuristic algorithms with a multi-layer perceptron to approximate the amount of streamflow in rivers for the near future, where the compiled streamflow data set is collected from 130 years of periodic water level changes in the High Aswan Dam. Meng et al. [28] comprehensively investigated the optimization capabilities of ten different metaheuristic algorithms on various cases of reliability-based design problems. Assessment of the algorithms is carried out on different performance measures, including global convergence, solution accuracy and robustness, and runtime analysis. Predictive results obtained from the benchmark reliability problems indicate that WCA shows superior adaptability to different benchmark cases and is recommended for solving reliability-based
optimization problems. Katebi et al. [29] independently integrated six intelligent metaheuristic optimizers into the active mechanism of a Wavelet-based Linear Quadratic Regulator to conquer the local optimality problem as well as to enhance general system efficiency. ICA achieves the most optimal responses for the optimal control problem. Naranjo et al. [30] applied three different well-reputed metaheuristic optimizers to the multi-objective optimization of the dimensional synthesis of a spherical parallel manipulator. Simulation results obtained after repetitive runs demonstrate that the Decomposition-based Evolutionary Algorithm obtains estimation results with the lowest deviations and generates a uniform and smooth solution distribution along the Pareto curve. Advanced design optimization of a hydrogen-based microgrid system has been carried out by employing six different metaheuristic algorithms, and their comparative performances are assessed relying on their corresponding fitness function values accounting for the total cost of the integrated sustainable energy system. It is seen that the Moth Flame Optimization algorithm results in a significant reduction of the overall cost expenditure and outperforms the remaining compared optimizers with respect to solution efficiency [31]. Gupta et al. [32] investigated the search behavior of metaheuristic algorithms on real-world mechanical engineering problems with mixed-integer decision variables, binding constraints, and conflicting highly nonlinear problem objectives. Nine metaheuristic optimizers have been considered for analyzing their comparative performances on solving these mechanical engineering problems in terms of convergence rates and solution qualities. Ezugwu et al. [33] put forward a systematic analysis approach for evaluating the solution consistencies and runtime complexities of twelve different metaheuristic optimizers on continuous unconstrained optimization problems. The GA, DE, and PSO algorithms perform well under a variety of optimization test problems, slightly surpassing the Symbiotic Organism Search (SOS) and Cuckoo Search (CS) algorithms concerning the best results.

One can clearly deduce from the extensive survey that previous research studies focus on a particular subject, providing limited insight into the overall capabilities of the implemented metaheuristic algorithms for evaluating their comparative prediction performances. In the domain of evolutionary computation and metaheuristic algorithms, researchers tend to apply their best optimizers among their contemporary alternatives to solve the optimization problem at hand. Selecting the best optimization method out of the compared contestant optimizers for a given set of benchmark problems is decided by the conclusive remarks of the inventor of the proposed algorithm, which may be deceptive and lead to unreasonable inferences regarding the veracity of its search performance. Comprehensive evaluations and assessments should be carried out to comprehend which algorithm performs better or which algorithm is most suitable to the type of the problem being solved. Furthermore, it is an undisputable necessity to keep up with the recent impact in the development and implementation of new metaheuristic algorithms along with their successful applications to various real-world problems. In addition, there should be a continuous effort to seek improvement in existing optimizers or to develop state-of-the-art metaheuristic algorithms, relying on the implications of the No Free Lunch theorem, postulating the general belief that there is no available metaheuristic algorithm capable of solving all optimization problems, as previously mentioned in the former paragraphs. To widen the general perspective on scrutinizing the estimation accuracies of the new emerging metaheuristic algorithms, this study proposes a more insightful and reasonable performance benchmark strategy. The overall search effectivities of the eleven recently developed metaheuristic optimizers of Runge–Kutta Optimization (RUNGE) [34], Gradient-Based Optimizer (GRAD) [35], Poor and Rich Optimization (PRO) [17], Reptile Search Algorithm (REPTILE) [36], Snake Optimizer (SNAKE) [37], Equilibrium Optimizer (EQUIL) [38], Manta Ray Optimization Algorithm (MANTA) [39], African Vultures Optimization Algorithm (AFRICAN) [40], Aquila Optimization Algorithm (AQUILA) [41], Harris Hawks Optimization (HARRIS) [42], and Barnacles Mating Optimizer (BARNA) [43] will be benchmarked against multidimensional optimization problems of various types in this research study. Despite their new emergence and insufficient recognition from the metaheuristic community, there are many existing literature applications regarding their successful employment on real-world design problems. The RUNGE algorithm was previously applied to the optimal design of a photovoltaic system to mitigate partial shading conditions [44]. Optimal parameter estimation of the PEM fuel cell model was conducted by the newly emerged GRAD algorithm, and it is seen that the electrical model parameter estimation results for different PEM fuel cell devices obtained from the GRAD optimizer are much better and more accurate than those retained by the compared literature optimizers [45]. A modified version of the PRO algorithm is employed for grouping similar documents by using text classification and outperforms the contestant algorithms of Particle Swarm Optimization, Whale Optimization, Grey Wolf Optimization, and the Dragonfly algorithm with respect to the clustering accuracy of the text documents [46]. A Levy Flight-assisted Reptile Search Algorithm (REPTILE) is developed for tuning the proportional-integral-derivative model parameters of a vehicle cruise control [47]. Hu et al. [48] proposed a multi-strategy boosted Snake Inspired Optimizer (SNAKE) developed for multidimensional engineering design problems. They benchmarked the
optimization capability of the proposed method against some of the well-known optimizers, and clear dominance of this developed optimizer is observed. Optimal energy load dispatch in a multi-chiller system was carried out by the Equilibrium Optimizer, and the best results were compared with the previous efforts made by the Genetic Algorithm and Simulated Annealing optimization method [49]. Hu et al. [50] enhanced the general optimization capability of Manta Ray Foraging Optimization (MANTA) by integrating the search equations of Wavelet mutation and a quadratic interpolation strategy, and applied this ameliorated version of the algorithm to the successful shape optimization of a complex composite cubic generalized ball. Chen et al. [51] utilized the African Vulture Optimization Algorithm (AFRICAN) for optimal modeling of a combined power system operated in a watersport complex. The main optimization objective is to minimize the total energy losses as much as possible by optimizing the considered design parameters of a number of gas engines, boiler heating capacity, and cooling capacity of the electric and absorption chillers. The Aquila Optimization Algorithm (AQUILA) is put into practice to effectively design a feedforward proportional-integral controller to achieve the optimum air–fuel ratio in a spark ignition system, which plays an important role in regulating the fuel consumption as well as protecting the environment from harmful emissions to some degree [52]. An improved Harris Hawks (HARRIS) optimization algorithm enhanced with the chaotic Tent map and integrated with an extreme learning machine was employed for constructing a generic holistic model to predict the intensity level of rock bursts. The developed model reaches a high degree of prediction accuracy of 94.12%, providing a quick convergence rate to its optimum solution [53]. The Barnacles Mating Optimizer (BARNA) combined with a support vector machine is proposed for obtaining precise state-of-charge estimation, which is an important concern for a reliable battery management system [54].

The majority of the members of the metaheuristic optimization community do not have in-depth knowledge on the general optimization performance of the new emerging nature-inspired metaheuristic algorithms. The underlying novelty brought out by this review paper is to scrutinize the solution effectivity of the recent nature-inspired algorithms on multidimensional constrained and unconstrained optimization problems. One of the demanding goals in this work is to conduct an in-depth analysis of the general behaviors and proclivities of the above-mentioned newly emerged metaheuristic approaches on test functions with different functional characteristics. All algorithms will be analyzed on the same benchmark functions, and their respective pros and cons are comparatively evaluated by the same performance measures. The following contributions are provided to the current literature by this study, which can be concisely listed as:

1. Comprehensive analyses are made on eleven algorithms through the statistical performance of thirty-four unconstrained optimization functions comprised of unimodal and multimodal test functions. Convergence graphs for the compared algorithms are plotted for each unconstrained test problem to observe which algorithm converges faster to its optimum, and the best optimizer between them is decided by its success in obtaining the most accurate solutions within the lowest computation time.
2. The CEC-2013 (Congress on Evolutionary Computation-2013) test suite involving twenty-eight benchmark functions with different modalities is solved by the compared eleven algorithms, and the most successful algorithm among the competitive methods is determined by the corresponding statistical analysis regarding the best, worst, mean, and standard deviation results.
3. Fourteen real-world constrained complex engineering design problems are optimized through these eleven metaheuristics, and their comparative performances are evaluated based on the statistical analysis of the fitness function values.

The remainder of the paper is organized as follows. Section 2 gives a brief description of the compared metaheuristic algorithms and lists some of the high-impact past studies related with the performance comparison of metaheuristic algorithms. Section 3 gives mention of the contributing motivations behind the comparison between the existing newly emerged metaheuristics. Section 4 provides the statistical comparison results for the standard unconstrained benchmark problems and compares the optimization accuracies of the compared algorithms on the test functions belonging to the CEC 2013 test suite. Section 5 analyses the behavior of the contested algorithms on engineering design problems and decides which algorithm gives the least erroneous predictions satisfying the challenging problem constraints. Section 6 provides a comprehensive discussion on the search tendencies of the compared algorithms and gives an explicit investigation of their algorithmic structure as to why they perform well on some types of test problems and collapse for the other types. Section 7 concludes this comprehensive research study with remarkable comments and insightful future directions.
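The statistical protocol referred to in the contributions above (best, worst, mean, and standard deviation of the final fitness values gathered over repeated independent runs) can be illustrated with a minimal sketch. The code below is not taken from the paper; the run count, the sample objective function, and the `random_search` stand-in optimizer are illustrative assumptions only.

```python
import numpy as np

def sphere(x):
    # Simple unimodal benchmark used here purely as a placeholder objective.
    return float(np.sum(x ** 2))

def random_search(obj, dim, lb, ub, evals=5000, seed=0):
    # Stand-in "optimizer": any metaheuristic exposing the same interface could be plugged in.
    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(evals):
        x = lb + rng.random(dim) * (ub - lb)
        best = min(best, obj(x))
    return best

def run_statistics(optimizer, obj, dim, lb, ub, runs=30):
    # Compile the per-run best fitness values and report the usual summary metrics.
    finals = np.array([optimizer(obj, dim, lb, ub, seed=r) for r in range(runs)])
    return {"best": finals.min(), "worst": finals.max(),
            "mean": finals.mean(), "std": finals.std(ddof=1)}

print(run_statistics(random_search, sphere, dim=30, lb=-100.0, ub=100.0))
```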
2 Metaheuristic algorithms considered for comparative performance evaluations

This section is related with giving a brief overview of the previous literature studies about the performance comparison of metaheuristic optimizers and introducing each metaheuristic algorithm for benchmarking their prediction performance on various test functions with different domains. Due to the limited space, only a brief description of the metaheuristic algorithms is provided in this section. Interested readers can find more information about the related algorithm in its original paper.

2.1 Previous works

Recent developments in chip technologies entail a rapid escalation in generating efficient stochastic metaheuristic algorithms whose overall optimization success majorly depends on approaching the global optimum solution within a reasonable computational time. Algorithms draw their inspiration from various sources, such as the intrinsic foraging skills of honey bees [55] or the efforts of a skillful musician seeking improvisation for perfect harmony [18]. Some of the existing algorithms have shown great potential to solve a diverse range of optimization instances covering from combinatorial problems to constrained engineering design cases. Moreover, some stochastic algorithms can be utilized along with the metaheuristic optimizers to solve an optimization problem with faster convergence to the optimal solution [56–58]. It is noteworthy to mention that the majority of the proposed metaheuristic optimizers up to now have achieved notable success on solving complex real-world problems from different domains of engineering fields [59–61].

The recent two decades have witnessed the development of a considerable amount of metaheuristic algorithms with a wide range of inspiration sources; however, their comparative prediction performances against each other have not been extensively investigated so far. Keeping in mind that it is extremely difficult to systematically and exhaustively evaluate the favorable merits of the emerging metaheuristics within a limited number of optimization benchmark cases, most of the previous literature studies associated with the optimization performance assessment of the contemporary metaheuristic algorithms focus on a particular engineering design problem or cover a restricted number of optimization benchmark cases, none of which are sufficiently conducive to provide general insights on the true optimization capabilities of the benchmarked algorithms. There are several attempts in the existing literature to present the comparative performances of algorithms on various fields of test instances. The below paragraphs present an explicit discussion of the mentioned literature works and investigate which test instances they utilized for the performance comparison of the considered metaheuristic algorithms.

One of the early few attempts to evaluate the comparative prediction performances of the stochastic metaheuristic algorithms was carried out by Ali et al. [62], in which a reasonable procedure is proposed for testing the estimation accuracies of the algorithms through an appropriate selection of the test suite of benchmark problems, and a straightforward methodology is put forward to present the optimization results. They compiled a diverse set of continuous test problems from different domains and investigated the macroscopic search behaviors of five stochastic optimizers, including improving hit-and-run, hide-and-seek, controlled random search, real-coded genetic algorithm, and differential evolution algorithms. An informative performance plot is drawn to observe gradual improvements in the objective function values with an increasing number of iterations. Numerical experiments made on the selected test functions are hinged on considering three different maximum numbers of iterations defined for the termination criterion (100D^2, 10D^2, and 10D, where D is the dimensionality of the problem). Respective results reveal that careful selection of the maximum number of iterations, which decides the elapsed algorithm runtime, has a significant impact on the optimization behavior of the running algorithms.

Civicioglu and Besdok [63] analyzed the algorithmic concepts of the Cuckoo Search, Particle Swarm Optimization, Differential Evolution, and Artificial Bee Colony optimizers employing different performance measures. The optimization success of these four well-established optimizers has been assessed on fifty different continuous optimization benchmark functions with varying problem dimensionalities, and it is revealed that the overall solution success of the Cuckoo Search algorithm is very close to that of the Differential Evolution algorithm, both of which provide much more robust and accurate prediction outcomes compared to those obtained for the Particle Swarm Optimizer and Artificial Bee Colony algorithms. The total number of function evaluations required for achieving the optimum answer of the problem along with the runtime complexities of the algorithms have also been comprehensively evaluated, and it is found that Differential Evolution requires fewer function evaluations without burdening a significant amount of computational load in most of the test cases.

Ma et al. [64] conducted a comprehensive comparative study between some of the prevalent evolutionary stochastic optimizers of Genetic Algorithm (GA), Biogeography-based Optimization (BBO), Differential Evolution (DE), Evolutionary Strategy (ES), and Particle Swarm Optimization (PSO). Firstly, they made a
conceptual discussion on the equivalences of these mentioned algorithms and found out that the basic versions of these methods have similar optimization performances compared to GA with global uniform recombination under specific test conditions. They also discussed the differences based on their biological inspirations and concluded that the enhanced solution diversity of EAs is the direct result of these distinctions. Furthermore, the optimization capabilities of these above-mentioned metaheuristic optimizers are extensively assessed on a set of real-world optimization problems with different functional characteristics. Empirical results obtained from exhaustive numerical experiments reveal that the BBO algorithm gives the best predictive results among the compared standard optimizers. When it comes to examining the improved versions of the algorithms, it is observed that DE and ES provide the best prediction accuracy compared to that of the other remaining algorithms. Ma et al. [65] extended their previous research study by introducing a conceptual comparison between the algorithmic equivalences of some swarm intelligence (SI) optimizers, including Particle Swarm Optimization (PSO), Shuffled Frog Leaping Algorithm (SFLA), Group Search Optimizer (GSO), Firefly Algorithm (FA), Artificial Bee Colony (ABC) Algorithm, and Gravitational Search Algorithm. After exhaustive elaborations on the considered test instances that cover the unconstrained test functions employed in CEC 2013 competitions and combinatorial knapsack problems, it is seen that the advanced version of the ABC algorithm numerically outperforms the remaining algorithms in terms of solution accuracy and robustness for the CEC 2013 benchmark problems, and improved versions of the SFLA and GSA algorithms yield the best prediction outcomes on combinatorial knapsack problems.

Ezugwu et al. [33] comparatively examined the prediction capabilities and convergence characteristics of twelve metaheuristic optimizers, including the standard Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Firefly Algorithm (FA), Ant Colony Optimization (ACO), Symbiotic Organism Search (SOS), Cuckoo Search (CS), Artificial Bee Colony (ABC), Bat Algorithm (BA), Differential Evolution (DE), Flower Pollination Algorithm (FPA), Invasive Weed Optimization (IWO), and Bee Algorithm (BeeA). The main purpose of their accomplished study is to carry out an in-depth analysis that would provide deep insight on the search characteristics of each representative metaheuristic optimization algorithm. All algorithms are evaluated on 36 different standard multidimensional test functions, and comprehensive statistical analysis has been performed that entails an unbiased and objective assessment of the reflected effectiveness of the algorithms. Furthermore, the minimum required function evaluations to acquire the optimum solution of the problem and the runtime complexities of the algorithms have also been comparatively investigated. They concluded their research with remarkable decisive comments, including some favorable outcomes of the numerical experiments, such that the number of function evaluations defined as the termination criterion has a great influence on the accuracy of the predicted solutions yet increases the computational runtime, which is not a desired situation for an end-user. According to the respective estimation results of the optimization problems with varying domains, it is seen that PSO, DE, and GA show brilliant performances, each obtaining the maximum value of 11% success ratio.

This research study is mainly concerned with novel metaheuristic algorithms, particularly optimizers developed after 2019. The main reason behind the consideration of these algorithms is general confusion, mostly resulting from a surplus of metaheuristic optimizers developed in such a small span of time, as to which algorithms perform well for optimization problems with varying degrees of dimensionality, functional characteristics, and types. To be clear, there is now an ongoing ambiguity in the metaheuristic community over the general optimization capabilities of the recent metaheuristic optimizers. The majority of researchers do not have a conclusive opinion regarding the optimization search efficiency of the related algorithm, and comparative analysis should be made on various types of optimization benchmark functions with various functional features if it is to get clear insight on its overall effectivity. In addition, comparative performance analysis between the newly emerging algorithms is still in question as there has not been a published literature study concerning this issue. Of course, there are available options in the selection of different algorithms which were developed between the years of 2019 and 2022. However, we consider two important aspects in their optional selection. The first qualification requires their frequent application to different kinds of design problems, while the second is their general optimization performance compared to the remaining algorithms, considering two deterministic aspects of average computational burden and solution accuracy obtained after a defined number of algorithm runs. Among twenty-five recently developed metaheuristic algorithms, these eleven metaheuristic optimizers yield the most fruitful outcomes with respect to these above-defined two complementary performance measures. Therefore, we consider these eleven algorithms to fill this gap in the existing literature because of their widespread application ranges of scientific fields compared to the remaining contestant algorithms.

Most of the literature studies concerning the comprehensive survey of the metaheuristic algorithms focus on a particular subject, such as optimizing control parameters of PID models [66], optimizing mechanical design problems [67], solving load balancing problems in cloud
environments [68], solving the inverse kinematics of robot manipulators [69], and solving feature selection problems [70]. Furthermore, most of the review papers present in the current literature only report the published studies and their corresponding results without providing comparative solution outcomes between them. This research paper takes advantage of eleven newly emerged nature-inspired metaheuristic optimizers to solve a wide spectrum of constrained engineering design problems and unconstrained benchmark functions, posing extreme challenges to the researchers of the metaheuristic optimization community. To the best knowledge of the authors, this kind of performance assessment has not ever been conducted in literature approaches yet. Apart from imparting knowledge on the current trends in metaheuristic algorithm development, this study also provides an exhaustive comparative study on the prediction performance of the newly emerged algorithms, and conclusive remarks will be given with regards to their respective solution accuracy and efficacy based on the estimation results of various benchmark cases.

These eleven algorithms have been previously hybridized with some literature metaheuristic optimizers to compensate for their intrinsic algorithmic deficiencies. Rawa et al. [71] hybridized the Runge–Kutta Optimization algorithm with the Gradient-Based Optimizer to establish a power system planning model in the presence of renewable energy sources considering the techno-economic aspects of the whole integrated unit. Ewees et al. [72] improved the search efficiency of the Gradient-Based Optimizer by using the Slime Mold algorithm [73] and applied this hybrid to feature selection and benchmark problems used in CEC 2017 competitions. It is seen that the proposed hybrid can successfully improve the classification accuracy and yields promising predictions, outperforming the contender algorithms taking place in the competitions with respect to solution efficiency. Almotairi and Abualigah [74] developed a hybrid optimization model integrating the Reptile Search Algorithm and the Remora Optimization Algorithm [75] and tested its effectivity over a set of benchmark cases, including eight data clustering problems and multidimensional unconstrained test problems widely employed in literature studies. Results retrieved from the performance evaluations show that the hybrid algorithm can effectively tackle the complexities of hard-to-solve challenging optimization problems. The Reptile Search Algorithm was hybridized with the Snake Optimizer to determine the optimal features of datasets collected from the UCI repository as well as to optimize two real-world optimization problems. The results show that the hybrid approach can provide practical and accurate solutions within comparatively lower computational runtimes [76]. Rizk-allah and Hassanien [77] proposed a hybrid optimization model composed of the search equations of Equilibrium Optimization and Pattern Search algorithms [78] to locate the optimum siting places of the wind turbines in a wind farm. A multi-objective Manta Ray Foraging Optimization and SHADE [79] hybrid algorithm was proposed for solving structural design problems. The proposed hybrid is applied to six challenging truss optimization problems having discrete design variables of up to 942 parameters, and the corresponding results have been compared to those obtained for nine state-of-the-art metaheuristic optimizers [80]. Xiao et al. [81] combined the governing manipulation equations of the African Vultures Optimization Algorithm and the Aquila Optimizer for solving global optimization problems. Comparative estimation results indicate that the hybrid algorithm can achieve superior solution accuracy and stability. Ramachandran et al. [82] proposed a hybrid optimizer whose integrated components are the Grasshopper Optimization Algorithm [83] and the Harris Hawks Optimizer for solving combined heat and power economic dispatch problems. The Sine–Cosine Algorithm [84] was combined with the Barnacles Mating Optimizer [85] to solve data clustering problems. Experimental results obtained for various clustering cases show that the proposed hybrid provides superior performance improvement resulting from the improved balance between the exploration and exploitation mechanisms.

The following sections will provide brief, yet explanatory descriptions of these eleven algorithms, and their algorithmic structure will be explained.

2.2 Runge–Kutta optimizer

The Runge–Kutta Optimizer (RUNGE) aims to bring a new dimension to the optimization community by proposing a metaphor-free algorithm, avoiding cliché methods such as mimicking foraging strategies of animals or evolutionary search trends. The RUNGE algorithm depends on the differential equation-solving process and utilizes the slopes that are employed in computing the iterative solution steps of a differential equation. The algorithm is comprised of two different strategies. The first phase is concerned with employing the search process governed by the fundamental rules of the RUNGE algorithm, mainly dealing with exploration. The second phase is mainly ruled by the ''Enhanced Solution Quality (ESQ)'' mechanism, focusing on the promising solutions obtained in the first phase of the algorithm. The general mathematical formulation of the algorithm is composed of a different set of stages that will be introduced in the following.

In the first stage, population individuals X are initialized within the defined search bounds LB (lower bound) and UB (upper bound) by conducting the below scheme,
X_{i,j} = LB_j + rnd_1 × (UB_j − LB_j),  i = 1, 2, ..., N,  j = 1, 2, ..., D    (1)

where N is the population size, D is the problem dimension, and rnd_1 is a random number between 0 and 1. The RUNGE algorithm employs a novel search mechanism (SM) to update the current solutions by the given scheme,

X_i = X_CF + SFM + μ × rnd_2 × X_mc,   if rand < 0.5
X_i = X_mF + SFM + μ × rnd_3 × X_ra,   otherwise    (2)

where X_CF = X_c + r_1 × SF × g × X_c and SFM = SF × SM, X_mF = X_m + r_2 × SF × g × X_m, X_ra = (X_{r1} − X_{r2}), X_mc = (X_m − X_c), and r_1, r_2 ∈ {−1, 1} can be either −1 or 1, used to change the direction of the search process. Random numbers g ∈ [0, 2] and μ ∈ [0, 1] help the algorithm probe around the search space more effectively. The adaptive scale factor SF can be defined as,

SF = 2 × (0.5 − rnd_3) × a × exp(−b × rnd_4 × (iter / Maxiter))    (3)

where Maxiter is the maximum number of iterations defined for the termination criterion. Parameters X_c and X_m given in Eq. (2) are calculated by the following,

X_c = X_i × rnd_5 + (1 − rnd_5) × X_{r1}    (4)

X_m = X_b × rnd_6 + (1 − rnd_6) × X_pb    (5)

where X_b and X_pb are, respectively, the so-far-obtained best solution and the current best solution within the current iteration. The SM parameter given in Eq. (2) is computed by the below formula,

SM = (X_RK × ΔX) / 6    (6)

where X_RK can be computed by,

X_RK = k_1 + 2 × k_2 + 2 × k_3 + k_4
k_1 = (rnd_7 × X_w − u × X_b) / (2ΔX)
k_2 = (rnd_8 × (X_w + rnd_9 × k_1 × ΔX) − UX) / (2ΔX)
k_3 = (rnd_10 × (X_w + rnd_11 × 0.5k_1 × ΔX) − UXb) / (2ΔX)
k_4 = (rnd_12 × (X_w + rnd_13 × k_3 × ΔX) − UXb2) / (2ΔX)
u = round(1 + rnd_14) × (1 − rnd_15)
UX = (u × X_b + rnd_16 × k_1 × ΔX)
UXb = (u × X_b + rnd_17 × 0.5k_2 × ΔX)
UXb2 = (u × X_b + rnd_18 × k_3 × ΔX)    (7)

The numerical value of ΔX is computed by,

ΔX = 2 × rnd_19 × |rnd_20 × X_b − rnd_21 × X_avg + c|    (8)

c = rnd_22 × (X_i − rnd_23 × (UB − LB)) × exp(−4 × iter / Maxiter)    (9)

where the numerical values of X_b and X_w are updated by the following algorithmic scheme,

if f(X_i) < f(X_pb)
    X_b = X_i
    X_w = X_pb
else    (10)
    X_b = X_pb
    X_w = X_i
end

The Enhanced Solution Quality (ESQ) phase is concerned with improving the general solution quality by using different mutation operators with a view to avoiding local optimum points in the search space,

X_new,2 = X_new,1 + r × w × |X_new,1 − X_avg + randn_1|,   if w < 1
X_new,2 = (X_new,1 − X_avg) + r × w × X_na,                otherwise    (11)

where r ∈ {−1, 0, 1},

X_na = |u × X_new,1 − X_avg + randn_1|,  w = rnd(0, 2) × exp(−5 × rnd_24 × iter / Maxiter),
X_avg = (X_{r1} + X_{r2} + X_{r3}) / 3,  X_new,1 = rnd_25 × X_avg + (1 − rnd_25) × X_b    (12)

In the case that the fitness value of X_new,2 is not better than that of the ith solution X_i, the algorithm provides another option to modify and update the current value of X_i by employing the following simple formulation,

X_new,3 = (X_new,2 − rnd_26 × X_new,2) + SF × (rnd_27 × X_RK + (2 × rnd_28 × X_b − X_new,2))    (13)

The below algorithm provides the pseudo-code of the Runge–Kutta optimizer.
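The pseudo-code figure itself is not reproduced in this extract, but the overall flow of RUNGE can be summarized by the following simplified Python sketch. It is a minimal illustration, not the authors' reference implementation: the constant choices (a, b), the reduced Runge–Kutta search mechanism, and the simplified ESQ step are assumptions made for brevity.

```python
import numpy as np

def runge_sketch(f, lb, ub, dim, pop=30, max_iter=500, a=20.0, b=12.0, seed=1):
    rng = np.random.default_rng(seed)
    X = lb + rng.random((pop, dim)) * (ub - lb)          # Eq. (1): random initialization
    fit = np.array([f(x) for x in X])
    best, best_fit = X[fit.argmin()].copy(), fit.min()

    for it in range(1, max_iter + 1):
        # Eq. (3): adaptive scale factor decaying over the iterations.
        SF = 2.0 * (0.5 - rng.random()) * a * np.exp(-b * rng.random() * it / max_iter)
        worst = X[fit.argmax()]
        for i in range(pop):
            r1, r2, r3 = X[rng.choice(pop, 3, replace=False)]
            # Runge-Kutta-like search mechanism (simplified RK4 weighting, cf. Eqs. (6)-(7)).
            dX = 2.0 * rng.random(dim) * np.abs(rng.random(dim) * best
                                                - rng.random(dim) * (r1 + r2 + r3) / 3.0)
            k1 = (rng.random(dim) * worst - rng.random(dim) * best) / 2.0
            k2 = (rng.random(dim) * (worst + k1 * dX) - best) / 2.0
            k3 = (rng.random(dim) * (worst + 0.5 * k1 * dX) - best) / 2.0
            k4 = (rng.random(dim) * (worst + k3 * dX) - best) / 2.0
            SM = (k1 + 2 * k2 + 2 * k3 + k4) * dX / 6.0
            base = X[i] if rng.random() < 0.5 else best
            cand = np.clip(base + SF * SM + rng.random() * (r1 - r2), lb, ub)
            # Enhanced Solution Quality step (cf. Eqs. (11)-(13)), reduced here to a mutation around the best.
            esq = np.clip(best + SF * rng.normal(size=dim) * np.abs(best - X[i]), lb, ub)
            for trial in (cand, esq):
                ft = f(trial)
                if ft < fit[i]:
                    X[i], fit[i] = trial, ft
                    if ft < best_fit:
                        best, best_fit = trial.copy(), ft
    return best, best_fit
```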
2.3 Gradient-based optimizer

y^q_i = rnd_9 × (|y_{i+1} + x_i| / 2 − rnd_10 × Δx)    (23)

y_{i+1} = x_i − randn × (2Δx × x_i) / (x_worst − x_best + ε) + rnd_11 × w_1 × (x_best − x_i)    (24)

Using X1 and X2, the new position of the current solution for the next iteration is calculated by the following expression,

X_i^{iter+1} = rnd_12 × (rnd_13 × X1 + (1 − rnd_13) × X2) + (1 − rnd_12) × X3    (25)

X3 = x_i − ρ × (X2 − X1)    (26)

The Local Escaping Operator (LEO) is a conducive operator to avoid local optimum points over the search space. LEO updates the current solution by considering the contributions of x_best, X1, X2, and two randomly selected trial solutions from the population, x_{r1} and x_{r2}. The below-given manipulation scheme describes the formulation of the LEO mechanism,

if rand < 0.5
    if rand < 0.5
        X_new = x_i + rnd_14 × (f_1 × (u_1 × x_best − u_2 × x_k)) + f_2 × ρ × (u_3 × (X2 − X1)) + u_2 × 0.5 × (x_{r1} − x_{r2})
    else
        X_new = x_best + rnd_14 × (f_1 × (u_1 × x_best − u_2 × x_k)) + f_2 × ρ × (u_3 × (X2 − X1)) + u_2 × 0.5 × (x_{r1} − x_{r2})
    end
end    (27)

where f_1 stands for a random number between [−1, 1], and f_2 is a random number drawn from a normal distribution with a standard deviation of 1 and a mean value of 0. Random numbers u_1, u_2, and u_3 are calculated by the following expressions,

u_1 = 2 × rnd_15,   if rand < 0.5
u_1 = 1.0,          otherwise    (28)

u_2 = rnd_16,   if rand < 0.5
u_2 = 1.0,      otherwise    (29)

u_3 = rnd_17,   if rand < 0.5
u_3 = 1.0,      otherwise    (30)
x_k = x_rand,   if rand < 0.5
x_k = x_p,      otherwise    (31)

x_rand = LB + rand × (UB − LB)    (32)

where x_rand is a random solution produced between the upper and lower bounds, and x_p is a random solution selected from the trial population. The below algorithm gives the pseudo-code of the Gradient-Based Optimization algorithm.

2.4 Poor and rich optimization algorithm

Proposed by Mosavi and Bardsiri in 2019 [17], the Poor and Rich Optimization (PRO) algorithm is a multi-population human-based optimization approach inspired by the social differences of the individuals living in a particular community. It is basically conceptualized upon the below-given two decisive points.

• Each poor population member aims to improve his or her social situation by gaining wealth or learning an experience or knowledge from the rich individuals.
• Each rich member of the population aims to broaden the social gap with the poor individuals by grasping their limited wealth.

First, a random population is initialized between the predefined upper and lower bounds to construct trial population members composed of poor and rich individuals. Then, each member is evaluated based on their respective fitness values and sorted in an ascending order based on their corresponding objective function values. As mentioned, there are two distinct subpopulations formed by the poor and rich members, expressed by,

N_main = N_rich + N_poor    (33)

where N_rich and N_poor are the sizes of the rich (POP_rich) and poor (POP_poor) subpopulations. The current position of each rich member is updated by the below formulation,

POP^new_rich,i = POP^old_rich,i + rand(0,1) × (POP^old_rich,i − POP^old_poor,best)    (34)

where POP^new_rich,i is the new position of the ith member of the rich population; POP^old_rich,i is the current position of the ith member of the rich population; POP^old_poor,best is the current best population member of the poor population; and rand(0,1) is a randomly generated value between 0 and 1 drawn from a uniform distribution. The new position of the poor population individuals within the search space is updated by the following simple formulation,

POP^new_poor,i = POP^old_poor,i + rand(0,1) × (Pattern − POP^old_poor,i)    (35)

where POP^new_poor,i is the new position of the ith poor member; POP^old_poor,i is the current position of the ith poor member; and the Pattern variable results from the collective contribution of the best, worst and mean values of the rich population members, expressed by the below-given formulation,

Pattern = (POP^old_rich,best + POP^old_rich,mean + POP^old_rich,worst) / 3    (36)

where POP^old_rich,best is the current best member of the rich population; POP^old_rich,mean is the average member of the rich population; and POP^old_rich,worst is the worst member of the rich population. There may occur sharp and rapid declines or increases in the wealth status of the population members resulting from unpredictable or unexpected changes in socio-economic affairs. Since it is nearly impossible to predict the ongoing trends of these decisive factors, a mutation operator is applied to the poor and rich individuals, which is realized by the implementation of a random number with zero mean and variance one into the ruling equation through the following expressions,

if rand(0,1) < P_mut
    POP^new_rich,i = POP^new_rich,i + randn    (37)
end

if rand(0,1) < P_mut
    POP^new_poor,i = POP^new_poor,i + randn    (38)
end

where P_mut is the mutation probability whose numerical value is decided by the user experience; POP^new_rich,i and POP^new_poor,i are, respectively, the updated position vectors of the rich and poor members of the population after being perturbed by the random parameter randn, which is generated with a mean of 0 and variance of 1.
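Since the PRO update rules in Eqs. (33)-(38) are compact, they translate almost directly into code. The sketch below is an illustrative, unofficial rendering of one PRO iteration; the equal population split, the mutation probability value, and the greedy replacement of parents are assumptions, not prescriptions from the original paper.

```python
import numpy as np

def pro_iteration(pop, fit, f, lb, ub, p_mut=0.1, rng=np.random.default_rng()):
    # Sort ascending by fitness (minimization): first half is treated as "rich", second half as "poor".
    order = np.argsort(fit)
    pop, fit = pop[order], fit[order]
    n_rich = len(pop) // 2
    rich, poor = pop[:n_rich], pop[n_rich:]

    poor_best = poor[np.argmin(fit[n_rich:])]
    # Eq. (36): pattern built from the best, mean, and worst rich members.
    pattern = (rich[0] + rich.mean(axis=0) + rich[-1]) / 3.0

    new_pop = pop.copy()
    for i in range(n_rich):                      # Eq. (34): rich members widen the gap to the poor best.
        new_pop[i] = rich[i] + rng.random() * (rich[i] - poor_best)
    for i in range(n_rich, len(pop)):            # Eq. (35): poor members move toward the rich pattern.
        new_pop[i] = pop[i] + rng.random() * (pattern - pop[i])

    # Eqs. (37)-(38): occasional Gaussian perturbation of the updated members.
    for i in range(len(new_pop)):
        if rng.random() < p_mut:
            new_pop[i] = new_pop[i] + rng.normal(size=pop.shape[1])

    new_pop = np.clip(new_pop, lb, ub)
    new_fit = np.array([f(x) for x in new_pop])
    # Greedy selection (an assumption): keep the better of the old and new positions.
    improved = new_fit < fit
    pop[improved], fit[improved] = new_pop[improved], new_fit[improved]
    return pop, fit
```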
2.5 Reptile search algorithm

ES^iter = 2 × rand(−1, 1) × (1 − iter / Maxiter)    (43)

where ε is a small value fixed to 1E−10, r_2 is a random integer between 1 and N, r_3 stands for a random value between −1 and 1, and P_{i,j} is the percentage difference between the position of the best crocodile and the current crocodile, computed by,

P_{i,j} = α + (X_{i,j} − X_{aver,j}) / (Best_j × (UB_j − LB_j) + ε)    (44)

where X_{aver,j} is the average solution of the jth dimension, and α is a sensitive parameter defined for controlling the exploration accuracy of the algorithm, set to 0.1. The exploitation phase of the REPTILE algorithm occurs through the hunting process, related with the cooperation and coordination of the encircling crocodiles. After the completion of the exploration phase, foraging crocodiles focus on the target prey individuals, and the employed hunting strategies make it easier for the crocodile individuals to get closer to the target prey. The mathematical model representing the exploitation mechanism taking place in the second phase of the algorithm can be expressed by the following,

X_{i,j}^{iter+1} = Best_j^iter × P_{i,j}^iter × rand(0, 1),                       if iter < 0.75 × Maxiter and iter ≥ 0.5 × Maxiter
X_{i,j}^{iter+1} = Best_j^iter − g_{i,j}^iter × ε − R_{i,j}^iter × rand(0, 1),    if iter ≤ Maxiter and iter ≥ 0.75 × Maxiter    (45)

where Best_j^iter is the current best solution obtained so far until the current iteration, g_{i,j}^iter is the mathematical operator structured by the contribution of the current best solution Best_j^iter and the P_{i,j}^iter parameter calculated by Eq. (41), and R_{i,j}^iter is the reduce function defined for the iterative reduction of the search space, which is computed by Eq. (42). The below simple algorithmic scheme describes the essential steps of the REPTILE algorithm.
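The referenced step-by-step scheme is not reproduced in this extract, but the iteration-phased switching that Eq. (45) describes can be sketched as follows. The exploration branches (governed by Eqs. (39)-(42), which are not shown above) are replaced here by a simple randomized walk around the best solution, and the hunting operator g and reduce function R are approximated, so the code is an illustrative approximation rather than the published algorithm.

```python
import numpy as np

def rsa_position_update(X, best, it, max_iter, lb, ub, alpha=0.1, eps=1e-10,
                        rng=np.random.default_rng()):
    # One position-update sweep, switching behaviour by iteration quarter as in Eq. (45).
    n, dim = X.shape
    X_new = X.copy()
    x_avg = X.mean(axis=0)
    for i in range(n):
        # Percentage difference between the best and the current candidate, Eq. (44).
        P = alpha + (X[i] - x_avg) / (best * (ub - lb) + eps)
        if it < 0.5 * max_iter:
            # Stand-in for the exploration phase (high/belly walking, Eqs. (39)-(42) not shown here).
            X_new[i] = best - rng.random(dim) * P * (1 - it / max_iter) * (ub - lb)
        elif it < 0.75 * max_iter:
            # Hunting coordination branch of Eq. (45).
            X_new[i] = best * P * rng.random(dim)
        else:
            # Hunting cooperation branch of Eq. (45); P stands in for g, and R approximates Eq. (42).
            R = (best - X[rng.integers(n)]) / (best + eps)
            X_new[i] = best - P * eps - R * rng.random(dim)
    return np.clip(X_new, lb, ub)
```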
2.6 Snake optimizer

This nature-inspired metaheuristic algorithm simulates the intrinsic mating behaviors of snakes, which likely occur at high temperatures where food sources are abundant; otherwise, snakes only concentrate on food searching rather than mating. The proposed algorithm is built upon the two complementary mechanisms of exploration and exploitation. The exploration process is influenced by environmental factors such that cold surroundings prevail and food is not present; in this case, only exhaustive food searching is dominant. The exploitation phase includes many shifts and transitions to obtain the global optimum point. In conditions where food is available but hot temperatures are also evident, snake individuals only focus on eating the available food. On the contrary, when food is available at cold environmental conditions, snakes opt for mating. The mating process also has two cases, which are fight mode and mating mode. In the fighting phase, each male snake fights to mate with the best female snake, while each female snake seeks to select the best male snake. In the mating process, mating occurs between each selected pair depending on the availability of the food resources in the habitat.

The algorithm is initialized by generating the trial snake individuals by the below-given equation,

X_{i,j} = LB_j + rand(0,1) × (UB_j − LB_j),  i = 1, 2, ..., N,  j = 1, 2, ..., D    (46)

where X_{i,j} is the jth dimension of the ith snake in the swarm, and LB_j and UB_j are, respectively, the lower and upper bounds of the jth dimension of the optimization problem. The algorithm assumes that the whole population is divided into two subgroups consisting of females and males, such that 50% of the population is male while the remaining individuals are female. The snake swarm is divided into two equal subgroups by the following equations,

N_male = N / 2    (47)

N_female = N − N_male    (48)

where N_male and N_female are, respectively, the sizes of the male and female snake groups. The best individuals in the male and female snake populations are determined and symbolized as Best_male and Best_female. In addition, the food position f_food is also obtained. The temperature of the surrounding environment (Temp) is calculated by the following,

Temp = exp(−iter / Maxiter)    (49)

where iter is the current iteration and Maxiter is the maximum number of iterations defined for the termination condition. The available food quantity (Q) is computed by,

Q = 0.5 × exp((iter − Maxiter) / Maxiter)    (50)

The exploration phase, in which snakes are only searching for food, occurs when the available food quantity (Q) is lower than the threshold limit of 0.25. To model this phase, the following equation is put into practice,

X_male,i^{iter+1} = X_male,rand^{iter} ± 2 × A_male × ((UB − LB) × rand(0,1) + LB)    (51)

where X_male,i is the position of the ith male, X_male,rand is a random male in the population, rand(0,1) is a uniform random number between 0 and 1, and A_male is the ability of the male to find food resources, which can be computed by,

A_male = exp(−fit_male,rand / fit_male,i)    (52)

where fit_male,rand is the fitness value of the random male X_male,rand and fit_male,i is the fitness value of the ith male,

X_female,i^{iter+1} = X_female,rand^{iter} ± 0.05 × A_female × ((UB_j − LB_j) × rand(0,1) + LB_j)    (53)

where X_female,i is the position of the ith female, X_female,rand is the position of a random female, and A_female is the female ability to find food resources, calculated by the following expression,

A_female,i = exp(−fit_female,rand / fit_female,i)    (54)

where fit_female,rand is the fitness value of a random female, and fit_female,i is the fitness value of the ith female in the population. The exploitation phase takes place when there is an abundant amount of food, sufficient to supply energy for the mating process. This phase occurs if the available food quantity (Q) is above the defined threshold limit. Furthermore, if the surrounding environment temperature is higher than the temperature threshold limit of 0.6, then the snakes will only employ foraging activities, which is modeled by,

X_{i,j}^{iter+1} = X_food ± 2 × Temp × rand(0,1) × (X_food − X_{i,j}^{iter})    (55)

where X_{i,j}^{iter} is an individual in the snake swarm (female or male) for the current iteration, and X_food is the food location in the search space, which is in essence the best solution obtained so far. If the environment temperature is lower than the threshold limit of 0.6, which indicates that cold air conditions are prevalent, then the snakes will perform
where X_{female,i} is the position of the ith female individual; X_{male,best} is the best male in the population; FF is the fighting ability of the female agents. FM and FF are, respectively, calculated by the following terms,

FM = exp(-fit_{female,best} / fit_i)   (58)
FF = exp(-fit_{male,best} / fit_i)   (59)

where fit_{female,best} is the fitness of the best female; fit_{male,best} is the fitness of the best male; fit_i is the fitness value of the ith search agent. Mating mode is activated by the below equations,

X_{male,i}^{iter+1} = X_{male,i}^{iter} + c_3 × M_{male} × rand(0,1) × (Q × X_{female,i}^{iter} - X_{male,i}^{iter})   (60)

If an egg hatches, then the worst male and female are replaced by the following equations,

X_{worst,male} = LB + rand(0,1) × (UB - LB)   (64)
X_{worst,female} = LB + rand(0,1) × (UB - LB)   (65)

where X_{worst,male} and X_{worst,female} are the worst members of the male and female subgroups in the population. The flag direction operator ± facilitates the mechanism of improving the overall population diversity, enabling an abrupt change in the direction of the responsible search agents to achieve a good probing around the search space. The below algorithm provides the pseudo-code of the Snake Optimizer, explaining the step-by-step implementation of the above-defined manipulation equations into the algorithm framework.
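To make the phase-switching logic above more concrete, the following minimal Python sketch updates a single male snake according to the temperature- and food-quantity-based branching of Eqs. (49)-(55) as reconstructed here; the helper names, the random ± flag, and the constants c2 = 0.05 and c3 = 2 are illustrative assumptions rather than the original authors' implementation.

```python
import numpy as np

def snake_male_step(x_male, x_rand_male, fit_rand, fit_i, x_food,
                    lb, ub, iter_, max_iter, Q, c2=0.05, c3=2.0, rng=None):
    """Illustrative update of one male snake following Eqs. (49)-(55)."""
    rng = np.random.default_rng() if rng is None else rng
    temp = np.exp(-iter_ / max_iter)                 # Eq. (49)
    flag = rng.choice([-1.0, 1.0])                   # diversity (flag direction) operator
    if Q < 0.25:                                     # exploration: exhaustive food searching
        a_male = np.exp(-fit_rand / fit_i)           # Eq. (52)
        return x_rand_male + flag * c2 * a_male * ((ub - lb) * rng.random() + lb)  # Eq. (51)
    if temp > 0.6:                                   # hot environment: move toward the food
        return x_food + flag * c3 * temp * rng.random() * (x_food - x_male)        # Eq. (55)
    # cold environment with food: fight/mating modes (Eqs. 56-65) would follow here
    return x_male
```

In a full implementation the returned position would additionally be clipped to [LB, UB] and accepted only after a fitness comparison with the current position.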
it and move toward the highly concentrated plankton areas. Cyclone foraging movement is modeled by the following mathematical expression,

x_i^{iter+1} = x_{best}^{iter} + rand_5(0,1) × (x_{best}^{iter} - x_i^{iter}) + β × (x_{best}^{iter} - x_i^{iter}),  i = 1
x_i^{iter+1} = x_{best}^{iter} + rand_6(0,1) × (x_{i-1}^{iter} - x_i^{iter}) + β × (x_{best}^{iter} - x_i^{iter}),  i = 2, 3, ..., N
β = 2 × exp(rand_7(0,1) × (Maxiter - iter + 1) / Maxiter) × sin(2π × rand_8(0,1))   (75)

It can be observed from Eq. (75) that the best food location is taken as a reference point for this search mechanism, which accounts for the full exploitation of the promising regions obtained by the previous chain foraging mechanism. In addition, the cyclone foraging mechanism makes a significant contribution to the global exploration capability by introducing a random solution taken as a pivot reference point, which is defined by the following,

x_i^{iter+1} = x_{rand}^{iter} + rand_9(0,1) × (x_{rand}^{iter} - x_i^{iter}) + β × (x_{rand}^{iter} - x_i^{iter}),  i = 1
x_i^{iter+1} = x_{rand}^{iter} + rand_{10}(0,1) × (x_{i-1}^{iter} - x_i^{iter}) + β × (x_{rand}^{iter} - x_i^{iter}),  i = 2, 3, ..., N
x_{rand}^{iter} = LB + rand_{11}(0,1) × (UB - LB)   (76)

where LB and UB are, respectively, the lower and upper bounds of the search space.

2.8.3 Somersault foraging

This foraging mechanism considers the food location as a reference point where each artificial search agent pivots around this point to somersault to a new fertile region. The agents position themselves around the best solution and update their current position by using the below-given mathematical model simulating the somersault movement,

x_i^{iter+1} = x_i^{iter} + 2 × (rand_{12}(0,1) × x_{best}^{iter} - rand_{13}(0,1) × x_i^{iter}),  i = 1, 2, ..., N   (77)

The below algorithm explains the implementation of the Manta Ray Foraging Optimization algorithm in the form of a descriptive pseudo-code representation.
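As a complementary illustration, a minimal Python sketch of the somersault foraging update of Eq. (77) is given below, with the somersault factor of 2 as written above; the function and variable names are illustrative assumptions, not the reference code.

```python
import numpy as np

def somersault_foraging(population, x_best, rng=None, somersault_factor=2.0):
    """Illustrative somersault update (Eq. 77): pivot every manta ray around the best solution."""
    rng = np.random.default_rng() if rng is None else rng
    n, dim = population.shape
    r12 = rng.random((n, dim))   # rand12(0,1), drawn per dimension
    r13 = rng.random((n, dim))   # rand13(0,1)
    return population + somersault_factor * (r12 * x_best - r13 * population)
```

After this move, the new positions would typically be clipped to the search bounds and kept only if they improve the fitness of the corresponding individual.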
random manner, which is decided by a parameter P1 valued between 0 and 1. To realize the exploration process, a random number within [0,1] is generated. Each vulture chooses its search environment based on its satiation level, which is decided by the below procedure,

X_i^{iter+1} = R_i^{iter} - D_i^{iter} × F_i^{iter},  if P1 ≥ rand_1(0,1)
X_i^{iter+1} = R_i^{iter} - F_i^{iter} + rand_2(0,1) × ((UB - LB) × rand_3(0,1) + LB),  if P1 < rand_1(0,1)   (82)

where X_i^{iter+1} is the location of the ith vulture for the next iteration, rand_i(0,1), i = 1, 2, 3, is a uniform random number defined within [0,1], R_i is calculated by Eq. (78), F_i is computed by Eq. (80), LB and UB are the lower and upper bounds of the search space, and D_i is the spatial distance between the specific vulture and the current optimal value,

D_i^{iter} = |rand(0,2) × R_i^{iter} - X_i^{iter}|   (83)

where rand(0,2) is a random number between 0 and 2.

2.9.4 Strategy 4—performing exploitation phase

To maintain the balance between the exploration and exploitation phases, the absolute value of the F parameter is evaluated. If this value is lower than 1.0, AFRICAN enters the exploitation phase, which is composed of two different complementary mechanisms. P2 and P3 are two important decisive parameters responsible for determining the governing search strategy. Parameter P2 is used to choose the search strategy employed in the first exploitation phase, while parameter P3 is utilized to determine the available search strategy practiced in the second phase. The first phase of the exploitation occurs when the numerical value of |F| is between 0.5 and 1.0, in which two different strategies of rotating flight and siege fight are carried out in a random manner. Parameter P2 is used to decide which strategy is performed at this stage of the algorithm; it should be valued before the search operation and should be between 0 and 1. A random value between 0 and 1, randP2(0,1), is generated at the initial stage of this phase. If the numerical value of this random number randP2(0,1) is higher than or equal to that of parameter P2, then the siege fight strategy is implemented. Otherwise, if this random number is lower than parameter P2, the rotating flight strategy is employed. This selection process is modeled by,

X_i^{iter+1} = Eq. (85),  if P2 ≥ randP2(0,1)
X_i^{iter+1} = Eq. (88),  if P2 < randP2(0,1)   (84)

In the case of conflict between the competent vultures on food acquisition, when food sources are limited and the detected food area is crowded, strong and powerful vultures do not prefer to share their food with the weak ones. On the contrary, weak vultures aim to tire the strong vultures and grasp the collected food from the healthy strong vultures, causing some small conflicts and arguments between them. These attacking behaviors can be modeled by the below-given equations,

X_i^{iter+1} = D_i^{iter} × (F + rand(0,1)) - dt   (85)
dt = R_i - X_i   (86)

where D_i is calculated by Eq. (83), the satiation rate of the vultures F is calculated by Eq. (80), rand(0,1) is a random number defined between 0 and 1, R_i is one of the best vultures of the two different groups computed by Eq. (78), and X_i is the current position of the vultures.

Vultures employ a spiral attacking movement, which mathematically models the rotational flight movement between all vultures and the best vultures of the two different groups,

S_1 = R_i × (rand(0,1) × X_i / (2π)) × cos(X_i)
S_2 = R_i × (rand(0,1) × X_i / (2π)) × sin(X_i)   (87)

X_i^{iter+1} = R_i - (S_1 + S_2)   (88)

The second phase of the exploitation commences with the consistent siege and aggressive strife of the accumulated vultures over the previously explored search regions. If the numerical value of |F| is lower than 0.5, a random value between 0 and 1 (randP3) is generated. If this random value is lower than or equal to the user-defined parameter P3, then the considered search strategy is to crowd the explored prey location with different types of vultures. Otherwise, the aggressive siege fight strategy is employed. The following procedure is used to decide which available search strategy is employed between the two above-mentioned alternatives,

X^{iter+1} = Eq. (91),  if P3 ≥ randP3
X^{iter+1} = Eq. (92),  if P3 < randP3   (89)

Artificial vultures accumulate over the possible food sources by examining the movements of all vultures in the population. The below formulations represent the typical foraging behaviors performed by the vultures, facilitating the second phase of the exploitation.
A_1 = Bestvul1^{iter} - (Bestvul1^{iter} × X_i^{iter}) / (Bestvul1^{iter} - (X_i^{iter})^2) × F
A_2 = Bestvul2^{iter} - (Bestvul2^{iter} × X_i^{iter}) / (Bestvul2^{iter} - (X_i^{iter})^2) × F   (90)

In Eq. (90), Bestvul1^{iter} and Bestvul2^{iter} are, respectively, the best vultures of the first and second groups for the current iteration, F is the current satiation rate of the vultures, and X_i^{iter} is the position of the ith vulture for the current iteration. Then, the updated spatial position of the ith vulture can be computed by the following scheme,

X_i^{iter+1} = (A_1 + A_2) / 2   (91)

When |F| < 0.5, vultures become unhealthy and weak due to starvation and do not have the power to deal with the strong vultures in the population, therefore showing aggressive behaviors on their hard quest for food, which is mathematically modeled by the following scheme,

X_i^{iter+1} = R_i - |d(t)| × F × Levy(D)   (92)

In Eq. (92), d(t) is the distance between a vulture in the population and one of the best vultures of the two groups, which is calculated by Eq. (86), and the Levy() function stands for the levy flight distribution [87], calculated by the following,

Levy(D) = 0.01 × (u × σ) / |v|^{1/β},
σ = ( Γ(1 + β) × sin(πβ / 2) / ( Γ((1 + β) / 2) × β × 2^{(β - 1)/2} ) )^{1/β}   (93)

where D is the problem dimension, u and v are random numbers between 0 and 1, and β is a constant value fixed to 1.5. The below algorithm shows the explicit pseudo-code representation of the African vultures optimization algorithm.
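Since the Levy flight term of Eq. (93) also appears in several of the other reviewed optimizers (AQUILA, HARRIS), a small Python helper may be useful for illustration; drawing u and v uniformly follows the wording above, and the scaling constant 0.01 and β = 1.5 are taken from Eq. (93), so the sketch should be read as an assumption-laden approximation rather than the reference code.

```python
import numpy as np
from math import gamma, sin, pi

def levy_flight(dim, beta=1.5, scale=0.01, rng=None):
    """Illustrative Levy-flight step of length `dim`, following Eq. (93)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.random(dim)   # the text above defines u and v as numbers between 0 and 1
    v = rng.random(dim)
    return scale * (u * sigma) / np.abs(v) ** (1 / beta)
```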
2.9.5 Aquila optimization algorithm

Aquila Optimization Algorithm (AQUILA) is a swarm intelligence metaheuristic algorithm simulating the intelligent swarming and foraging behaviors of artificial aquilas, which are among the most skilled and crafty hunters after humans, with strong legs and sharp claws. These physical characteristics enable aquilas to catch various types of prey in their living habitat. They live in high mountains and other elevated locations. The AQUILA algorithm starts with initializing the trial candidate aquila population defined between the prescribed upper and lower bounds. Each candidate solution is generated by the below-given equation,

X_{i,j} = LB_j + rand(0,1) × (UB_j - LB_j),  i = 1, 2, ..., N;  j = 1, 2, ..., D   (94)

where N is the population size, D is the problem dimension, and UB and LB are correspondingly the upper and lower bounds of the search space. The algorithm imitates the hunting behaviors of the foraging aquilas, whose ruling attacking strategies can be categorized into four different complementary steps.

2.9.6 Step 1: increased exploration (X1)

This stage of the optimization process is based on soaring high up in the sky, searching for the prey individuals way above the ground, and finding the most favorable prey among the suitable alternatives. Once the prey is detected, a smooth vertical dive toward the prey is performed. The mathematical model of this foraging skill can be expressed by,

X1^{iter+1} = X_{best}^{iter} × (1 - iter / Maxiter) + (X_{mean}^{iter} - X_{best}^{iter}) × rand(0,1)   (95)
X_{mean}^{iter} = (1/N) × Σ_{i=1}^{N} X_i^{iter},  i = 1, 2, ..., N   (96)

where X1^{iter+1} is the location of the ith foraging aquila for the next iteration, X_{best}^{iter} is the best solution obtained until the current iteration, which is the estimated spatial location of the prey within the D-dimensional search hyperspace, iter is the current iteration while Maxiter is the maximum number of iterations defined as the termination criterion, and X_{mean}^{iter} is the mean position of the aquilas in the population at the current iteration.

2.9.7 Step 2: narrowed exploration (X2)

The second step of the algorithm takes place when the foraging aquila soars up and detects the prey victims. Following that, aquilas decide to make spiral circles around the detected prey and perform rapid attacks. This attacking strategy is called contour flight with a short glide attack and is simulated by the following mathematical equation,

X2^{iter+1} = X_{best}^{iter} × Levy(D) + X_{rand}^{iter} + (y - x) × rand(0,1)   (97)

where X2^{iter+1} is the solution for the next iteration, obtained by the second search strategy (X2); D is the search dimension of the problem; Levy() is the function that draws a random number from a levy flight distribution for each problem dimension j = 1, 2, ..., D; and X_{rand}^{iter} is a randomly chosen aquila from the swarm population. The spiral shape of the attacking movement is expressed by the implementation of the y and x variables into the search scheme, which are calculated by the following,

y = r × cos(θ)   (98)
x = r × sin(θ)   (99)

where

r = n_1 + 0.00565 × l   (100)
θ = -0.005 × l + 1.5π   (101)

The parameter n_1 takes a random integer value between 1 and 20; l is an integer number from 1 to the length of the search space (D).

2.9.8 Step 3: expanded exploitation (X3)

The third foraging skill is based on a vertical landing of the attacking aquila when it pinpoints the prey location. This method is called low flight with slow descent attack, which is found to be very effective in exploiting the fertile regions previously explored by the skillful foraging aquilas. This hunting behavior is modeled by the below search scheme,

X3^{iter+1} = 0.1 × (X_{best}^{iter} - X_{mean}^{iter}) - rand(0,1) + 0.1 × ((UB - LB) × rand(0,1) + LB)   (102)

X3^{iter+1} is the solution obtained by the third search strategy for the next iteration, X_{best}^{iter} is the best solution retained until the current iteration, X_{mean}^{iter} is the mean value of the aquila population individuals, UB and LB are the upper and lower bounds of the search space of the given optimization problem, and rand(0,1) is a uniform random value between 0 and 1.

2.9.9 Step 4: narrowed exploitation (X4)

This search strategy is activated when the intelligent foraging aquila is getting close to the prey and quickly attacks using random stochastic movements. This method is
called walk and grab prey, which is mathematically modeled by the following expression,

X4^{iter+1} = QF × X_{best}^{iter} - (G_1 × X^{iter} × rand(0,1)) - G_2 × Levy(D) + rand(0,1) × G_1   (103)

where QF is a quality factor calculated by Eq. (104); G_1 represents the various hunting behaviors of aquilas that are utilized for chasing the prey individuals, calculated by Eq. (105); and G_2 represents an iteratively decreasing number from 2 to 0 computed by Eq. (106),

QF^{iter} = iter^{(2 × rand(0,1) - 1) / (1 - Maxiter)^2}   (104)
G_1 = 2 × rand(0,1) - 1   (105)
G_2 = 2 × (1 - iter / Maxiter)   (106)

Algorithm 9 provides the pseudo-code of the Aquila Optimization algorithm.
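For illustration, a brief Python sketch of the narrowed exploitation step of Eqs. (103)-(106) is given below; the levy_flight helper refers to the earlier sketch, and all names and the calling interface are assumptions made for the example, not the original authors' code.

```python
import numpy as np

def aquila_step4(x_i, x_best, iter_, max_iter, dim, rng=None):
    """Illustrative 'walk and grab prey' update of AQUILA, Eqs. (103)-(106)."""
    rng = np.random.default_rng() if rng is None else rng
    qf = iter_ ** ((2 * rng.random() - 1) / (1 - max_iter) ** 2)   # Eq. (104)
    g1 = 2 * rng.random() - 1                                      # Eq. (105)
    g2 = 2 * (1 - iter_ / max_iter)                                # Eq. (106)
    levy = levy_flight(dim, rng=rng)     # Levy-flight helper sketched earlier for Eq. (93)
    return qf * x_best - (g1 * x_i * rng.random()) - g2 * levy + rng.random() * g1   # Eq. (103)
```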
V_2 = X_{rabbit}^{iter} - E × |J × X_{rabbit}^{iter} - X_{mean}^{iter}|   (115)
Z_2 = X_{rabbit}^{iter} - E × |J × X_{rabbit}^{iter} - X_{mean}^{iter}| + rand(1,D) × Levy(D)   (116)

A simple pseudo-code of the Harris Hawks optimization algorithm is provided below.

2.9.11 Barnacles mating optimizer

Barnacles Mating Optimization algorithm (BARNA) mimics the characteristic mating principles of barnacles living in their natural habitat. Barnacles are hermaphroditic microorganisms, which means that they have both male and female reproduction organs. One famous physical feature of barnacles is their large penis size relative to their body, which is seven or eight times larger than their total body length, enabling them to cope with challenging environmental conditions of varying difficulty. The mating group of a barnacle consists of all neighboring individuals within the reach of its penis size. Variations in the penis length have a significant influence on determining the optimal group size. The basic principles of Hardy and Weinberg are utilized for generating new offspring in the Barnacles Mating Optimization algorithm, whose elementary steps are defined sequentially in the following paragraphs.

The initial population of barnacle individuals is produced by,

X_{i,j} = LB_j + rand(0,1) × (UB_j - LB_j),  i = 1, 2, ..., N;  j = 1, 2, ..., D   (117)

where X_{i,j} is the jth dimension of the ith barnacle in the population, N is the population size, D is the problem dimension, rand(0,1) is a random number within the range [0,1], and UB_j and LB_j, respectively, stand for the jth dimension of the upper and lower search limits. The choice of the barnacles to be mated is decided by the length of the penis size, pl. Selection of candidate barnacles for reproduction is based on some assumptions listed below.

• Penis length is the most important factor in selecting random barnacles for reproduction.
• A barnacle in the population is limited in its reproduction, with only one barnacle within each generation, since it receives its sperm from only another barnacle, not from itself.
• If the same barnacle is considered for the mating procedure at a certain point, the algorithm disregards this individual, and iterations proceed without employing the reproduction process.
• The sperm cast process occurs if the selection for the current iteration is larger than the penis size pl.

The reproduction phase of the algorithm commences with the random selection of the mating parents, which is formulated as,

x_{barna}^{D} = randperm(X)   (118)
x_{barna}^{M} = randperm(X)   (119)

where x_{barna}^{D} and x_{barna}^{M} are the randomly selected parents to be mated, and randperm() randomly shuffles the row elements of the main population matrix X to generate the trial population of the mated parents.

The search equations proposed for modeling the reproduction phase of the BARNA algorithm are quite different compared to other evolutionary optimization algorithms in the literature. As there is no specific mathematical model to be employed for the reproduction of offspring, the BARNA algorithm emphasizes the inheritance characteristics of the parents in generating new offspring individuals, taking into account the fundamentals of the Hardy–Weinberg principle. To put it simply, the following proposed search scheme is used to produce new offspring members resulting from the mating parents,

x_{new} = p × x_{barna}^{D} + q × x_{barna}^{M}   (120)

where p is a random number between 0 and 1 drawn from a uniform distribution, q = (1 - p), and x_{barna}^{D} and x_{barna}^{M} are, respectively, the dad and mum of the generated barnacles. The exploration phase is simulated by the sperm cast process, which happens when the selected barnacle's choice exceeds the penis length pl, a predetermined algorithm parameter assigned to a certain value before the iterative process commences. Offspring generation through sperm cast can be modeled by the following,

x_{new} = rand(0,1) × x_{barna}^{M}   (121)

The pseudo-code of the Barnacles Mating Optimizer is provided in the below algorithm.
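A minimal Python sketch of the offspring generation described by Eqs. (118)-(121) could look as follows; the population is stored row-wise, the threshold test against pl and the q = 1 - p relation reflect the description above, and the names and the way the partner distance is drawn are illustrative assumptions.

```python
import numpy as np

def barnacles_offspring(population, pl, rng=None):
    """Illustrative BARNA reproduction: mating (Eq. 120) or sperm cast (Eq. 121)."""
    rng = np.random.default_rng() if rng is None else rng
    n = population.shape[0]
    dads = population[rng.permutation(n)]   # Eq. (118): shuffled parent set
    mums = population[rng.permutation(n)]   # Eq. (119)
    offspring = np.empty_like(population)
    for k in range(n):
        if rng.integers(1, n + 1) <= pl:    # selected partner within reach of the penis length pl
            p = rng.random()                # Hardy-Weinberg style inheritance, Eq. (120)
            offspring[k] = p * dads[k] + (1 - p) * mums[k]
        else:                               # sperm cast (exploration), Eq. (121)
            offspring[k] = rng.random() * mums[k]
    return offspring
```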
3 Main motivation for the current study

This section aims to provide essential insights into why the current research study is performed and what future directions it offers to the existing literature, resulting from the outcomes of the numerical experiments made on the considered test subjects. It also demonstrates some of the key advantages and disadvantages of the compared algorithms in terms of solution efficiency and accuracy.

The recent surge in the rapid development and implementation of nature-inspired metaheuristic algorithms within the last five years is the main motivation behind this research study, as their comparative performances over a broad range of problem domains have not been elaborately investigated. This not only impedes the spread of general knowledge among the community on the utmost capabilities of these proposed optimization methods but also gives limited insight into their optimization accuracies for problems in which a variety of dimensional or functional complexities occur. Since numerical investigations into the estimation performances of the proposed algorithms are made on a restricted range of benchmark problems and established under their own experimental conditions, there is no evidential literature approach unfolding the true optimization capabilities of these algorithms. Relying on the postulates of the No Free Lunch (NFL) theorem [20], which indicates that there is no single metaheuristic algorithm able to solve all optimization problems of varying types and functional characteristics, researchers seek alternative ways to conquer the challenging outcomes of the NFL theorem, such as hybridizing two complementary metaheuristic algorithms [88], implementing chaotic sequences into the base algorithm rather than using uniformly generated random numbers [89], or introducing the fundamentals of reinforcement learning into metaheuristic algorithms [90]. They have also developed novel metaheuristic algorithms utilizing innovative nature-inspired search schemes, which entail a considerable improvement in accurate solutions for complex and large-scale benchmark problems compared to existing literature stochastic optimizers.

Therefore, due to the lack of knowledge on the general performances of the recently proposed nature-inspired optimizers, this research study aims to explore the inherent pros and cons of eleven selected algorithms, including the RUNGE, GRAD, PRO, REPTILE, SNAKE, EQUIL, MANTA, AFRICAN, AQUILA, HARRIS, and BARNA optimizers, over a wide range of optimization problems. Despite their recent emergence, these methods have been applied to many engineering design cases covering a diverse set of scientific fields. Table 1 reports the inspirational sources of these algorithms and lists some of their major contributions to the existing literature approaches. Particularly for the HARRIS and EQUIL algorithms, there are plenty of engineering applications available in the existing accumulated literature, a small portion of which is reported in Table 1.
Table 1 Inspirational sources of the compared algorithms and some of their major literature applications

Runge Kutta optimization (RUNGE): inspired by the mathematical foundations of the Runge–Kutta differential equation solver. Applications: parameter identification of photovoltaic models [93]; multi-hydropower reservoir optimization [94].
Gradient-based optimizer (GRAD): inspired by Newton's famous gradient-based search method. Applications: multi-objective optimization of real-world structural design optimization problems [95]; parameter estimation of PEM models [45].
Poor and rich optimization (PRO): inspired by the desire to improve the current wealth levels of the poor and rich people in a community. Applications: feature selection for text classification [96]; classification of similar documents [46].
Reptile search algorithm (REPTILE): inspired by the foraging behaviors of crocodiles. Applications: power systems engineering design [97]; selecting the important subsets for churn prediction [98].
Snake optimizer (SNAKE): inspired by the reproduction and hunting behaviors of snakes. Applications: avoiding cascading failures through a transmission expansion planning model [99].
Equilibrium optimizer (EQUIL): inspired by mathematical models to determine the equilibrium states of non-reacting particles in a control volume. Applications: feature selection [100]; optimal operation of hybrid AC/DC grids [101].
Manta ray foraging optimizer (MANTA): inspired by the hunting behaviors of manta rays. Applications: economic dispatch problems [102]; global optimization and image segmentation [103]; optimal power flow problem [104].
African vultures optimization (AFRICAN): inspired by the hunting styles of African vultures. Applications: shell-and-tube heat exchanger design [105]; tuning PI controllers for hybrid renewable energy systems [106]; parameter estimation of three-diode solar photovoltaic models [107].
Aquila optimization (AQUILA): inspired by the foraging behaviors of intelligent aquilas. Applications: optimizing ANFIS model parameters for oil production forecasting [108]; gene selection in cancer classification [109]; optimal distribution of the generated energy across the grid network [110].
Harris Hawks Optimization (HARRIS): inspired by the foraging behaviors of Harris hawks. Applications: optimal selection of the most significant chemical descriptors and chemical compound activities for drug design [111]; optimal design of microchannel heat sinks [112]; roller bearing design [113].
Barnacles mating optimizer (BARNA): inspired by the mating behaviors of barnacles. Applications: control of a pendulum system [114]; optimal chiller loading [115]; training radial basis function neural networks for parameter estimation of induction motors [116].
Although their implementation into a well-organized computer code is somewhat strict and challenging compared to the other algorithms, their wide utilization across different fields of the engineering domain is noticed and recognized by the researchers of the optimization community.

It is worth emphasizing that there is no clear evidential data or reassuring experimental findings on the true optimization performances of these algorithms, since each representative research study associated with evaluating their prediction ability is accomplished with its own methodology and experimental conditions. Therefore, there is no common ground provided for discussing the weaknesses and strengths of these algorithms, and all decisive inferences regarding their inherent merits are based on a limited number of numerical tests dealing with a particular design case or solving a suite of specific unconstrained benchmark functions, without ever mentioning the effects of the "curse of dimensionality" in most of the comparative cases. One interesting point that should be emphasized is that most of the new emerging metaheuristic algorithms do not have tunable algorithm parameters employed in the responsible search equations, which results in a significant improvement in the overall prediction accuracies of these algorithms and entails a quick and robust convergence, as it avoids the time-consuming and tiresome iterative parameter adjustment process.
Table 2 Multimodal and unimodal test functions considered for benchmarking the optimization accuracies of the compared algorithms (name, type, search range, dimension D, and global optimum point are listed for each function)
4 Experimental methodology

This section presents the comparative investigation of the eleven above-mentioned emerging metaheuristic algorithms, taking into account different benchmark suites. The comparative algorithms are firstly evaluated on thirty-four scalable unconstrained optimization test functions composed of unimodal and multimodal test functions, and the respective predictive results are comparatively analyzed. These test functions have been commonly used by researchers as convenient test beds for evaluating the performance of their proposed algorithms. The functional characteristics, problem dimensionalities, search ranges, and global optimum points of each employed test function are correspondingly reported in Table 2. Unimodal test functions characteristically have no local optimum but only a global optimum point, whereas multimodal test functions locate many local optimum points within their defined
search ranges. Unimodal test functions are efficient benchmark samples for assessing only the exploitation performances of the algorithms, while multimodal test functions are prolific instruments for evaluating the exploration capabilities of the employed algorithm. These thirty-four benchmark functions, comprised of unimodal and multimodal problems with varying dimensionalities of 30, 500, and 1000D, are considered for the overall performance evaluation of the compared algorithms. There are many alternative benchmark cases in the existing literature; some of them are given in their corresponding references [91, 92]. However, the main advantage of using these artificially produced problems is their common and frequent application in benchmarking the optimization effectivity of developed algorithms in literature approaches, which makes them reliable test alternatives, not only for their widespread utilization on various types of metaheuristics but also for establishing a credible environment for assessing the search tendencies of the algorithms. The convergence success of the algorithms is investigated and comparatively discussed based on the optimization results of these thirty-four test functions. Following that, the scalabilities of these algorithms are evaluated on their respective results for the 500D and 1000D unimodal and multimodal test functions. Computational runtimes of each algorithm for each test function are compared for 2000 function evaluations, and a decisive conclusion is drawn as to which algorithm burdens the processors with the minimum computational load. The second phase of the comparative investigation between the algorithms is based on the optimization results of the continuous benchmark functions from the 2013 IEEE Congress on
Evolutionary Computation [117]. These functions are briefly summarized in Table 3. The evolution of the design variables to their optimal solutions is iteratively plotted in the convergence curves, each of which is constructed for each of the 28 test instances of the CEC 2013 benchmark problems for the eleven compared metaheuristic algorithms. For the standard continuous test functions composed of thirty-four unimodal and multimodal problems, a total of 1000 function evaluations have been performed for 30 independent algorithm runs due to the stochastic natures of the algorithms. Statistical analysis has been performed on the obtained set of solutions, and the prediction accuracies of the algorithms have been assessed in terms of the best, mean, worst, and standard deviation results of the consecutive runs. The compared metaheuristic algorithms are developed in the MATLAB environment and run on a desktop computer with an Intel Core i5-8300H CPU @ 2.30 GHz and 8.0 GB RAM. Parameter settings of the algorithms are given in Table 4 and remain constant during the course of iterations. Previous cumulative experiences of the authors, along with the insightful recommendations of the algorithm developers in their respective original articles concerning the accurate values of the algorithm constants, play an important role during the exhaustive parameter setting process. The next section provides an extensive and conducive discussion on the optimization results of the standard scalable test functions.
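As an illustration of this evaluation protocol, the short Python sketch below runs a generic optimizer several times on a scalable test function (the Rastrigin function, one of the multimodal problems listed in Table 2) and collects the best, mean, worst, and standard deviation statistics; the `optimizer` callable and its interface are placeholder assumptions standing in for any of the eleven compared algorithms.

```python
import numpy as np

def rastrigin(x):
    """Scalable multimodal test function with global optimum 0 at the origin."""
    return 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))

def evaluate(optimizer, dim=30, runs=30, evaluations=1000, seed=0):
    """Collect the statistics reported in the result tables over independent runs."""
    results = []
    for run in range(runs):
        rng = np.random.default_rng(seed + run)
        best = optimizer(rastrigin, dim=dim, budget=evaluations, rng=rng)  # placeholder interface
        results.append(best)
    results = np.asarray(results)
    return {"best": results.min(), "mean": results.mean(),
            "worst": results.max(), "std": results.std()}
```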
4.1 Comprehensive analysis on exploration performance of the algorithms

The exploration abilities of the eleven mentioned metaheuristic optimizers are evaluated through the multimodal test functions, which are the challenging benchmark problems (f1–f18) defined in Table 2. These test functions include a multitude of local optimum points located in the search space, and the inherent complexities of these functions dramatically increase with increasing problem dimensionality. Therefore, they are efficient test beds for evaluating the local minimum avoidance of optimization algorithms. Tables 5 and 6 report the predictions of the compared metaheuristic algorithms for the 30-dimensional multimodal benchmark functions.

REPTILE algorithm provides the best predictions for 13 out of 18 multimodal test functions and becomes one of the trailblazing algorithms among the competitive optimizers. MANTA algorithm obtains the best results for the f1, f2, f3, f8, f11, f13, and f16 test functions and becomes one of the successful algorithms regarding the overall estimation performance. AFRICAN algorithm obtains the best results for the f1, f2, f3, f8, f11, f13, and f15 test functions. AQUILA algorithm is another effective method, acquiring the most accurate predictions for f2, f3, f8, f11, f13, f14, f15, and f16. Table 7 reports the ranking points of the algorithms according to their best prediction results for the 30D multimodal test functions, in which the best-performing algorithm obtains a ranking point of 1 while the worst method's ranking point is 11.
Table 5 Statistical results of the compared algorithms for test problems between f1—Ackley and f8—Schaffer (f1 Ackley, f2 Rastrigin, f3 Griewank, f4 Zakharov, f5 Salomon, f6 Alpine, f7 Csendes, f8 Schaffer; best, mean, standard deviation, and worst values are reported for each algorithm)
Table 6 Comparison of twelve optimization algorithms on multimodal test functions from f9—Yang2 to f18—Levy (including f9 Yang2, f10 inverted cosine mixture, f15 Path, and f16 Quintic; best, mean, standard deviation, and worst values are reported for each algorithm)
It is seen that, despite the dominating performance of the REPTILE algorithm, which reaches the global optimum points of twelve multimodal 30D test functions and becomes the leading algorithm for these cases, the prediction accuracies obtained by this algorithm for the remaining multimodal benchmark functions are so erroneous and deceptive that this optimizer is put in third place in terms of overall best estimation, as observed in Table 7. In this context, MANTA algorithm becomes the best performer with a ranking point of 3.05, followed by AFRICAN algorithm with a respective ranking point of 3.11. Among them, EQUIL algorithm yields the worst estimations with the corresponding ranking point value of 10.16. Table 8 evaluates these competitive metaheuristic algorithms with respect to their ranking points obtained for their mean fitness values. Taking into account the mean deviation rates, it is observed that the best-performing methods are, respectively, the MANTA, REPTILE, and PRO algorithms, sorted based on their order of prediction success. EQUIL algorithm has the worst mean results for the 30D multimodal problems. According to the descriptive statistical results obtained for the 30D multimodal optimization benchmark problems, which include the best, mean, standard deviation, and worst solutions found by the compared metaheuristic algorithms, Friedman's test analysis for multiple comparisons of the representative algorithms has been performed considering the optimization outcomes of 34 benchmark functions. Relying on the ranking points of the algorithms assigned to their averaged mean fitness values of 30 independent runs for the eighteen different multimodal functions given in Table 8, a statistically significant difference between the compared algorithms is observed with the corresponding p-value of 1.64E-12, which is much less than the predetermined threshold value of 0.05. This numerical behavior indicates that there are significant differences in the general performances of the algorithms in solving multimodal test problems.
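A hedged sketch of how such a multiple-comparison test can be carried out in Python is given below, assuming SciPy is available; the mean-fitness matrix is a random placeholder with one row per test function and one column per algorithm.

```python
import numpy as np
from scipy.stats import friedmanchisquare

# Placeholder: mean fitness of each algorithm (columns) on each test function (rows).
mean_fitness = np.random.default_rng(1).random((18, 11))

# Friedman's test compares the algorithms across the benchmark functions (blocks).
statistic, p_value = friedmanchisquare(
    *[mean_fitness[:, j] for j in range(mean_fitness.shape[1])])
print(f"Friedman statistic = {statistic:.3f}, p-value = {p_value:.3e}")
if p_value < 0.05:
    print("Significant differences among the compared algorithms at the 5% level.")
```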
4.2 Comprehensive analysis on the exploitation capabilities of the compared algorithms

This section aims to analyze and comparatively investigate the general capabilities of the algorithms regarding the intensification of the fertile areas discovered in the preceding exploration phase. Algorithms with strong exploitation capabilities are able to cope with the complexities of the search space, with a view to reaching the one and only global optimum point of the problem. Tables 9 and 10 report the optimal results obtained for the 30D standard unimodal benchmark problems by the compared eleven algorithms.
Table 7 Ranking points of the algorithms based on the best prediction results of 30 D multimodal functions
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE
f1 1 7 6 11 1 8 1 1 1 9 10
f2 1 1 1 11 1 1 1 1 1 9 10
f3 1 1 1 11 1 1 1 1 1 1 10
f4 5 7 4 11 6 8 3 2 1 10 9
f5 4 6 5 11 7 8 3 2 1 10 9
f6 5 8 6 11 4 7 3 2 1 9 10
f7 4 7 6 11 5 8 3 2 1 9 10
f8 1 1 1 11 1 1 1 1 1 9 10
f9 6 2 1 10 4 4 7 11 9 8 2
f10 4 7 6 11 5 8 3 2 1 9 10
f11 1 1 1 11 1 1 1 1 1 9 10
f12 4 9 6 11 7 8 3 2 1 5 10
f13 1 1 1 11 1 1 1 1 1 10 9
f14 4 1 10 9 2 3 7 11 8 6 5
f15 1 1 1 11 8 7 1 1 1 10 9
f16 5 1 10 9 2 3 7 11 8 6 4
f17 3 6 11 4 7 5 2 9 9 1 8
f18 5 3 9 8 1 2 7 10 11 6 4
Average rank 3.11 3.88 4.77 10.16 3.55 4.66 3.05 3.94 3.22 7.55 8.27
Overall rank 2 5 8 11 4 7 1 6 3 9 10
Table 8 Ranking points of the algorithms based on their mean deviation results for 30D multimodal test problems
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE
f1 5 8 6 10 4 7 1 1 1 9 11
f2 5 8 6 11 1 7 1 1 1 9 10
f3 6 7 1 11 1 8 1 1 1 9 10
f4 9 7 5 11 4 10 2 1 3 8 6
f5 6 7 3 11 8 5 4 2 1 9 10
f6 5 9 4 11 6 7 3 2 1 8 10
f7 5 7 6 11 4 8 3 2 1 9 10
f8 6 5 4 11 7 8 1 1 1 10 9
f9 4 1 10 9 2 7 3 11 8 6 5
f10 5 8 6 11 4 7 3 2 1 9 10
f11 6 7 4 11 5 8 1 1 1 9 10
f12 4 9 6 11 7 8 3 2 1 5 10
f13 5 7 1 11 8 1 1 1 6 10 9
f14 5 1 10 8 2 3 7 11 9 4 6
f15 6 4 3 11 8 7 9 1 1 10 5
f16 4 1 10 8 2 3 7 11 9 5 6
f17 3 8 10 4 6 5 2 11 9 1 7
f18 5 3 9 8 1 2 6 11 10 4 7
Average rank 5.22 5.94 5.77 9.94 4.44 6.16 3.22 4.05 3.61 7.44 8.38
Overall rank 5 7 6 11 4 8 1 3 2 9 10
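The ranking points reported in Tables 7, 8, and the later ranking tables can be reproduced with a few lines of Python as sketched below; ties on a shared best value all receive rank 1, consistent with the rows of Table 7, although the exact tie-handling used in the study is an assumption here.

```python
import numpy as np
from scipy.stats import rankdata

def ranking_points(fitness_matrix):
    """Competition-style ranks per test function: best value gets 1, ties share the lowest rank."""
    return np.vstack([rankdata(row, method="min") for row in fitness_matrix])

def average_ranks(fitness_matrix):
    """Average ranking point of every algorithm over all test functions, as in the last rows of Table 7."""
    return ranking_points(fitness_matrix).mean(axis=0)
```

Feeding the matrix of best (or mean) fitness values for the eighteen 30D multimodal functions would yield average ranking points analogous to those listed in the last rows of Tables 7 and 8.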
Table 9 Statistical comparison of twelve algorithms for 30D unimodal functions from f19 – Sphere to f26 – Discus (including f19 Sphere and f20 Brown; best, mean, standard deviation, and worst values are reported for each algorithm)
Table 10 Statistical results of the compared algorithms for 30D unimodal test functions from f27 – Dixon-Price to f34 – Powell (including f27 Dixon-Price, f28 Trid, f33 Stretched Sine Wave, and f34 Powell; best, mean, standard deviation, and worst values are reported for each algorithm)
Table 11 Comparative performances and ranking points of the competitive algorithms based on their best predictions
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE
f19 4 7 5 11 6 8 3 2 1 9 10
f20 4 8 6 11 5 7 3 2 1 9 10
f21 4 9 5 11 6 8 3 2 1 7 10
f22 4 8 6 11 5 7 3 2 1 9 10
f23 4 7 6 11 5 8 3 2 1 9 10
f24 1 1 1 11 1 1 1 1 1 9 9
f25 4 5 8 11 3 1 6 9 10 7 2
f26 4 9 5 11 6 7 3 2 1 8 10
f27 2 3 9 11 1 4 6 10 8 6 5
f28 2 4 8 11 3 1 7 10 9 6 5
f29 4 7 6 11 5 8 3 2 1 9 10
f30 4 7 6 11 5 8 3 1 1 9 10
f31 5 1 9 8 2 3 7 10 11 6 4
f32 4 7 6 11 5 8 3 2 1 9 10
f33 4 8 5 11 6 7 3 2 1 9 10
f34 4 9 7 11 5 8 3 2 1 6 10
Average rank 3.62 6.25 6.12 10.81 4.31 5.87 3.75 3.81 3.12 7.93 8.43
Overall rank 2 8 7 11 5 6 3 4 1 9 10
REPTILE algorithm shows the best performance for 12 unimodal functions out of 16 instances and becomes the leading algorithm for the 30D unimodal test problems. Furthermore, this algorithm approaches the global optimum answers of the f20, f21, f22, f23, f24, f26, f29, f30, f32, f33, and f34 problems. According to the ranking results reported in Table 11 for the best predictions of the compared algorithms, AFRICAN algorithm has the second-best average ranking point value of 3.62, yet it only reaches the global optimum value of the f24 function. MANTA is the third-best algorithm concerning the best-solutions-based ranking points, with the corresponding value of 3.75. Although PRO is the most accurate algorithm for the test functions f24 and f30 and becomes the second-best algorithm for ten test functions, including the f19, f20, f21, f22, f23, f26, f29, f32, f33, and f34 test problems, its performance for the remaining cases is so dissatisfactory that this algorithm is put in fourth place among the other methods. EQUIL algorithm is again the worst performer, as occurred for the multimodal test functions, with the respective ranking point value of 10.81. Table 12 compares the eleven algorithms with the respective ranking points obtained for the mean deviation rates. REPTILE continues its dominance regarding the mean results, having the best ranking point value of 3.06. It is interesting to see the GRAD
Table 12 Evaluation of the prediction successes of the algorithms relying on their mean fitness values
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE
f19 5 6 8 11 4 7 3 2 1 9 10
f20 5 8 6 11 4 7 3 2 1 9 10
f21 5 9 6 11 4 8 3 2 1 7 10
f22 5 8 6 11 4 7 3 2 1 9 10
f23 5 8 6 11 4 7 3 2 1 9 10
f24 1 6 1 11 6 6 1 1 1 9 10
f25 4 3 8 11 1 2 6 9 9 7 5
f26 6 11 5 10 4 7 3 2 1 8 9
f27 4 2 7 11 1 3 5 9 8 6 10
f28 3 2 8 11 4 1 7 9 10 5 6
f29 6 7 5 11 4 8 3 2 1 9 10
f30 5 7 6 11 4 8 3 2 1 9 10
f31 4 3 9 8 1 2 7 11 10 5 6
f32 6 8 5 11 4 7 3 2 1 9 10
f33 3 8 4 11 6 7 5 2 1 9 10
f34 6 9 7 11 4 8 3 2 1 5 10
Average rank 4.56 6.56 6.06 10.75 3.68 5.93 3.81 3.81 3.06 7.75 9.12
Overall rank 5 8 7 11 2 6 3 3 1 9 10
algorithm in second place for the mean results, as there is not a clear superiority of this method considering the best results. MANTA and PRO algorithms share the third-best ranking points assigned for the mean fitness values. In addition, Friedman's test results obtained for the 30D unimodal test functions indicate a statistical difference among the compared algorithms with the respective p-value of 2.45E-14, which is obtained by considering the mean values of the ranking points for the 30D unimodal test functions tabulated in Table 12.

4.3 Investigation on the scalability of the compared algorithms

This section scrutinizes the optimization performance of the eleven compared metaheuristics on hyper-dimensional 500D and 1000D test functions and makes a comprehensive investigation of how the prediction accuracy of the algorithms is influenced by the increasing problem dimensionalities, particularly for hyper-dimensional problems. There is a common and strong belief in the optimization community that the vast majority of metaheuristic algorithms suffer from the curse of dimensionality, in which a large number of function evaluations is required to conquer the inherent drawbacks of the increased search space, whose spatial volume grows exponentially with the increasing problem dimensionality. Tables 13 and 14 provide the statistical analysis results of the compared eleven algorithms for the 500D variants of the previously investigated test functions, dealing only with the multimodal benchmark functions from f1-Ackley to f18-Levy. A total of 30 consecutive and independent algorithm runs are performed under experimental conditions covering a predefined population size of N = 20 and a termination criterion fixed to 100 iterations. It is seen that there is no clear deterioration in the solution accuracies of the compared algorithms when the problem dimensionalities are increased from 30 to 500. Some algorithms among them are able to reach the global optimum points of the f2, f3, f4, f5, f6, f8, f10, f11, f12, f13, and f15 test problems even after 2000 function evaluations, which is relatively small as far as the high problem dimensionality is concerned. It is noteworthy that PRO shows a consistent prediction performance for the 500D test functions f2, f3, f8, f11, and f15, for which it obtains the same global optimum answer in each algorithm run. REPTILE algorithm again proves its competitiveness in hyper-dimensional multimodal problems in terms of obtaining the most accurate results, as it reaches the best-known solutions of the f2, f3, f4, f5, f6, f8, f10, f11, f12, f13, and f15 benchmark problems. Tables 15 and 16 report the statistical results of the 500D unimodal test functions obtained for the eleven compared algorithms. REPTILE acquires the best solutions of the f19, f20, f21, f22, f23, f24, f26, f29, f30, f32, f33, and f34 test
Table 13 Statistical results for 500D multimodal test functions from f1 – Ackley to f8 – Schaffer (including f1 Ackley, f2 Rastrigin, f7 Csendes, and f8 Schaffer; best, mean, standard deviation, and worst values are reported for each algorithm)
problems, which is a quite successful achievement for a stochastic optimizer, as it is not obviously influenced by the adverse effects of the curse of dimensionality. The other algorithms are also not affected by the increased problem dimensionalities, as most of the optimizers obtain predictions very close to the global optimum solutions of the problems. Only EQUIL algorithm fails to get close to the optimal solutions within 2000 function evaluations, while the other methods show satisfactory convergence tendencies for the 500D problems. Table 17 reports the ranking points of the compared algorithms obtained for the mean results of the 500D multimodal and unimodal test problems. PRO algorithm has the best average point of 3.55 for the multimodal problems and 3.18 for the unimodal test problems, which puts this algorithm in first place considering the mean deviation results. MANTA algorithm has the second-best overall ranking point of 3.67, with respective average ranking points of 3.94 for the multimodal test functions and 3.37 for the unimodal test functions. Although REPTILE algorithm finds the best-known solutions of 12 benchmark functions out of 16 test instances, the mean solutions acquired by this algorithm are not as successful and persistent as the answers obtained for the best results, since it attains the best mean solution in only seven test cases, which puts this algorithm in third place with respect to the overall ranking point, taking into account the average points of the unimodal and multimodal test problems. The indisputable estimation performance of REPTILE is evident from its successful achievements in acquiring the best solutions of the multimodal and unimodal optimization problems, with the minimum overall ranking point of 3.00, as reported in Table 18. This algorithm obtains the minimum fitness value among the other methods after the completion of the consecutive function evaluations for the multimodal test functions f1, f2, f3, f4, f5, f6, f7, f8, f10, f11, f12, f13, and f15, despite its poor performance in predicting the accurate solutions of the f9, f14, f16, and f17 test problems. For the unimodal test functions, REPTILE only fails in the f25, f27, f28, and f31 benchmark functions while reaching the lowest fitness values for the remaining test functions, which makes it the best optimizer among the compared ones, with an average point of 2.87. PRO has the second-best average point of 3.68 for the unimodal test functions, being the second-best algorithm for most of the unimodal optimization problems. However, this relatively satisfying performance for the unimodal functions does not prevent it from being in fourth place in the overall ranking points. Despite being in third place for the unimodal test functions and having only one best position out of the 16 unimodal test functions, MANTA algorithm occupies second place with the overall average point of 3.52. AFRICAN algorithm is the third-best method among them considering the general prediction performances obtained for the unimodal and multimodal test problems, with a respective overall average point of 3.61. Friedman test analysis based on the mean results of the 500D test functions shows that there is a statistical significance between the algorithms, which is validated by the corresponding p-value of 7.51E-12.

Tables 19 and 20 report the statistical results of the 1000D multimodal test functions for the eleven compared algorithms. Although there is a mammoth increase in the dimensionality of the test problems, which imposes a great deal of complexity on the search domain, there is no clear deterioration in the general solution qualities except for the f17-Qing function. AFRICAN and REPTILE algorithms share the best seats considering the best results of the 1000D multimodal test functions when their respective average points (3.00), given in Table 23, are examined. GRAD algorithm is the third-best optimizer with an average point of 3.55, slightly followed by MANTA algorithm, which is in fourth place. Tables 21 and 22 provide the statistical results obtained for the 1000D unimodal test functions by the compared algorithms. No significant deterioration in the general solution qualities is observed for the algorithms. However, RUNGE algorithm is not able to converge to the
Table 14 Comparison of the statistical results acquired by the compared algorithms for 500D test functions from f9 – Yang2 to f18 – Levy (including f9 Yang2, f10 Inverted Cosine Mixture, f15 Path, and f16 Quintic; best, mean, standard deviation, and worst values are reported for each algorithm)
optimal solution of the f21-Sum of Difference test function and is labeled as "N/A", which indicates that no feasible answer is attained throughout the sequential algorithm runs. Furthermore, SNAKE algorithm finds only one feasible solution during the course of the independent runs for this test instance. Similar tendencies of these algorithms are also evident for the unimodal test function f34—Powell, for which RUNGE algorithm does not find any valid solution after the consecutive runs, and SNAKE algorithm attains a feasible best solution of 2.81E-06. AFRICAN algorithm also struggles to obtain feasible solutions during the algorithm runs, finding only one solution, which is 1.74E-46, for the f34-Powell function. REPTILE algorithm is superior to the remaining algorithms in obtaining the global best solutions for the 1000D test functions f19, f20, f21, f22, f23, f24, f26, f29, f30, f32, f33, and f34. PRO algorithm sits in the second-best seat for most of the test cases, which makes this algorithm the second-best performer when its respective average values obtained for the unimodal test functions, given in Table 23, are thoroughly examined. The third-best algorithm for the unimodal test problems is MANTA algorithm, with the corresponding average point of 3.62. When all ranking points are averaged for the unimodal and multimodal test problems, it is seen that REPTILE is the best performer while AFRICAN yields the second-best predictions. MANTA algorithm is in the third-best seat, outperforming the PRO algorithm, which has the overall fourth-best predictions when the best results are primarily considered. Table 24 reports the ranking points of the algorithms for the mean results of the 1000D benchmark functions. REPTILE algorithm does not sustain its leadership position when comparisons are made based on the mean solutions and loses its place to PRO algorithm in the overall ranking points. MANTA algorithm yields the second-best mean results, which are better than its previous ranking order related to the comparisons of the best results. REPTILE algorithm's overall average point puts it in third place, which also indicates that its solution consistency is hampered by the increased problem dimensionalities, particularly for hyper-dimensional problems.
Table 15 Estimation results for 500D unimodal test functions from f19—Sphere to f26—Discus (including f19 Sphere and f20 Brown; best, mean, standard deviation, and worst values are reported for each algorithm)
Table 16 Statistical results obtained for 500D unimodal functions from f27 – Dixon and Price to f34—Powell (including f27 Dixon and Price and f28 Trid; best, mean, standard deviation, and worst values are reported for each algorithm)
Table 17 Ranking points of the algorithms assigned to the mean fitness results for 500D test functions
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE
f1 6 7 5 11 4 8 1 1 1 9 10
f2 7 6 1 11 1 8 1 1 1 9 10
f3 7 1 1 11 1 8 1 1 1 9 10
f4 7 11 3 9 5 10 2 1 4 8 6
f5 6 4 3 11 8 7 5 1 2 9 10
f6 5 9 6 11 4 5 3 2 1 8 10
f7 5 6 7 11 3 8 2 1 4 9 10
f8 6 5 4 11 7 8 1 1 1 10 9
f9 5 1 6 11 2 3 8 9 10 7 4
f10 6 7 5 11 4 8 3 2 1 9 10
f11 8 7 5 11 4 6 1 2 3 9 10
f12 5 8 4 9 6 7 3 1 2 10 11
f13 7 3 2 11 6 4 5 1 8 10 9
f14 4 1 9 11 2 3 7 10 8 5 6
f15 6 3 4 11 8 7 9 1 2 10 5
f16 4 1 9 11 2 3 7 10 8 6 5
f17 4 6 8 11 2 3 5 10 7 9 1
f18 4 3 8 11 1 2 7 9 10 6 5
Average point 5.66 4.94 5.00 10.77 3.88 6.00 3.94 3.55 4.11 8.44 7.83
Ranking point 7 5 6 11 2 8 3 1 4 10 9
f19 7 6 5 11 4 8 3 2 1 9 10
f20 7 6 5 11 4 8 3 1 2 9 10
f21 5 8 6 9 4 7 2 1 3 11 10
f22 5 8 6 11 4 7 3 2 1 9 10
f23 5 6 7 11 4 8 3 2 1 9 10
f24 1 6 1 11 8 6 1 1 1 9 10
f25 5 3 8 11 1 2 7 8 8 6 4
f26 5 8 6 11 4 7 3 2 1 9 10
f27 4 3 6 11 5 2 1 7 8 8 8
f28 4 2 6 11 3 1 5 8 7 9 10
f29 7 6 5 11 4 8 3 1 2 9 10
f30 7 6 5 11 4 8 3 1 2 9 10
f31 4 3 8 11 1 2 7 10 9 6 5
f32 5 7 6 11 4 8 3 2 1 9 10
f33 6 9 3 11 4 7 5 2 1 8 10
f34 6 8 5 9 4 7 2 1 3 10 11
Average point 5.18 5.80 5.50 10.75 3.87 6.00 3.37 3.18 3.19 8.68 9.25
Ranking point 5 7 6 11 4 8 3 1 2 9 10
Overall point 5.43 5.34 5.23 10.76 3.88 6.00 3.67 3.37 3.68 8.55 8.49
Overall ranking point 7 6 5 11 4 8 2 1 3 10 9
4.4 Comparison on the runtime complexities of the algorithms

This section is devoted to investigating the runtime complexities of the algorithms on the 30D unimodal and multimodal test functions. This feature of any stochastic metaheuristic algorithm should be thoroughly analyzed in order to get clear insights into their general performances when solving exhaustive and tedious optimization problems. The computational load burdened by the excessive running of the search equations of the metaheuristic
Table 18 Performance comparison of the algorithms based on the ranking points obtained for the best results of 500D test functions
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE
f1 1 7 6 11 1 8 1 1 1 9 10
f2 1 1 1 11 1 1 1 1 1 9 10
f3 1 1 1 11 1 1 1 1 1 9 10
f4 8 5 4 11 6 9 3 2 1 10 7
f5 4 6 5 11 7 8 3 2 1 10 9
f6 4 7 6 11 5 8 3 2 1 9 10
f7 5 7 6 11 4 8 3 2 1 9 10
f8 1 1 1 11 1 1 1 1 1 10 9
f9 5 1 8 11 4 2 7 9 10 6 3
f10 4 7 6 11 5 8 3 2 1 9 10
f11 1 1 1 11 1 1 1 1 1 9 10
f12 4 9 5 10 6 7 3 2 1 8 11
f13 1 1 1 11 1 1 1 1 1 10 9
f14 5 1 9 11 3 2 8 10 7 6 4
f15 1 1 1 11 8 7 1 1 1 10 9
f16 5 1 9 11 3 4 8 10 7 6 2
f17 4 7 6 11 2 3 5 10 8 9 1
f18 5 1 8 10 3 4 7 9 11 6 2
Average point 3.33 3.61 4.66 10.88 3.44 4.61 3.32 3.72 3.11 8.55 7.55
Ranking point 3 5 6 11 4 7 2 6 1 10 9
f19 5 7 4 11 6 8 3 2 1 9 10
f20 3 7 5 11 6 8 4 2 1 9 10
f21 4 8 6 10 5 7 3 2 1 9 11
f22 4 7 6 11 5 8 3 2 1 9 10
f23 4 7 5 11 6 8 3 2 1 9 10
f24 1 1 1 11 1 1 1 1 1 10 9
f25 5 4 8 11 2 1 7 8 8 6 3
f26 4 8 5 11 6 7 3 2 1 9 10
f27 5 1 6 11 3 2 4 9 8 7 10
f28 3 2 8 11 4 1 7 10 9 6 5
f29 4 7 5 11 6 8 3 2 1 9 10
f30 4 8 6 11 5 7 3 1 1 9 10
f31 5 1 8 11 2 4 7 10 9 6 3
f32 4 6 7 11 5 8 3 2 1 9 10
f33 4 7 5 11 6 8 3 2 1 9 10
f34 4 8 5 10 6 7 3 2 1 9 11
Average point 3.93 5.56 5.63 10.87 4.62 5.81 3.75 3.68 2.87 8.37 8.87
Ranking point 4 6 7 11 5 8 3 2 1 9 10
Overall point 3.61 4.52 5.11 10.88 4.00 5.17 3.52 3.70 3.00 8.46 8.17
Overall ranking point 3 6 7 11 5 8 2 4 1 10 9
algorithms needs to be deeply scrutinized if one is to hold a firm opinion on the total expended runtime and decide which algorithm should be preferred for a specific type of optimization problem. Figures 1 and 2 comparatively visualize the elapsed runtimes of each contestant algorithm for each benchmark function. A total of 2000 function evaluations are performed for the unimodal and multimodal test functions, and the execution times of the algorithms are averaged over 30 independent runs.
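A hedged Python sketch of the timing protocol just described is given below; `optimizer` and `test_function` are placeholder callables, and measuring wall-clock time with `time.perf_counter` is one reasonable choice rather than the exact procedure used in the study.

```python
import time
import numpy as np

def average_runtime(optimizer, test_function, dim=30, budget=2000, runs=30):
    """Average elapsed wall-clock time of `optimizer` over independent runs."""
    elapsed = []
    for run in range(runs):
        rng = np.random.default_rng(run)
        start = time.perf_counter()
        optimizer(test_function, dim=dim, budget=budget, rng=rng)  # placeholder interface
        elapsed.append(time.perf_counter() - start)
    return float(np.mean(elapsed))
```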
Table 19 Comparison of the competitive algorithms with respect to the statistical results of the 1000D multimodal test functions from f1 – Ackley to f8 – Schaffer (including f1 Ackley and f2 Rastrigin; best, mean, standard deviation, and worst values are reported for each algorithm)
Table 20 Statistical comparison of the optimal results for 1000D multimodal test functions from f9 – Yang2 to f18 – Levy (including f9 Yang2, f10 inverted cosine mixture, f15 Path, and f16 Quintic; best, mean, standard deviation, and worst values are reported for each algorithm)
It is seen from the figures that the expended computational effort for the unimodal and multimodal test functions is quite similar, which can be deduced from the runtimes of the algorithms obtained for the different problems. RUNGE algorithm consumes a considerable amount of computational time, as the general manipulation scheme of this algorithm includes an excessive number of complementary and well-tailored search equations which metaphorically mimic the algorithmic steps of the Runge–Kutta differential equation solver. Interested readers could examine the governing search equations of the algorithms in the associated sections of this study and comparatively investigate which algorithm requires the most demanding manipulation equations that burden the highest computational load among them. RUNGE algorithm has the highest runtimes, taking on average four or five times longer than the remaining algorithms, except BARNA, to arrive at the optimal solution of the problem. BARNA algorithm is another relatively time-consuming algorithm, which mainly results from the consistent employment of the randperm() function throughout the iterations, responsible for shuffling the current positions of the population members. BARNA algorithm also requires an ascending sorting of the population individuals based on their respective fitness values, which is another decisive factor explaining the excessive computational time required to complete the predefined number of iterations. As is evident from the figures, the remaining algorithms accomplish the predefined 2000 function evaluations within a runtime band between 0.02 and 0.03 s, which is much quicker than the time elapsed for the RUNGE and BARNA algorithms.

4.5 Evaluation on the convergence behavior of the algorithms

Convergence curves give a provisional insight to the end users on how quickly the algorithm reaches its optimal solution and provide a visual understanding of the tendencies of the iterative declines in fitness values. The ongoing evolution in the objective function value of an optimization problem is directly related to the predefined number of iterations, which means that any increase in the number of iterations leads to an increase in the probability of reaching the global optimum answer of the problem. A detailed examination of the proclivities of the convergence graphs helps to analyze the gradual decreases (or increases) in the objective function values of the employed optimization problem, which is conducive to fully comprehending and monitoring the ruling search mechanism operated during the course of iterations. The general trend in the evolution of the convergence curves is based on the sudden and rapid changes in the early phases of iterations, which are
Table 21 Comparison of the statistical results for 1000D unimodal test functions from f19 – Sphere to f26—Discus
Problem f19- Sphere f20- Brown
Best Mean Std Dev Worst Best Mean Std Dev Worst
Table 22 Comparison of the statistical results for 1000D unimodal test functions from f27 – Dixon-Price to f34—Powell
Problem f27- Dixon and Price f28- Trid
Best Mean Std Dev Worst Best Mean Std Dev Worst
Table 23 Ranking points of the compared algorithms for 1000D test functions relying on the best results of the predictions
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE
f1 1 7 5 11 5 8 1 1 1 9 10
f2 1 1 1 11 1 1 1 1 1 9 10
f3 1 1 1 11 1 1 1 1 1 9 10
f4 7 5 4 11 8 10 3 2 1 9 6
f5 4 6 5 11 7 8 3 2 1 10 9
f6 4 7 5 11 6 8 3 2 1 9 10
f7 4 7 5 11 6 8 3 2 1 9 10
f8 1 1 1 11 1 1 1 1 1 10 9
f9 1 1 7 11 1 1 8 9 10 6 1
f10 4 7 5 11 6 8 3 2 1 9 10
f11 1 1 1 11 1 1 1 1 1 9 10
f12 3 9 5 8 7 6 4 2 1 11 10
f13 1 1 1 10 4 1 1 1 1 11 8
f14 5 1 9 11 3 4 8 10 7 6 2
f15 1 1 1 11 1 8 1 1 1 10 9
f16 5 1 9 11 3 2 8 10 7 6 4
f17 5 6 4 11 2 3 7 10 8 9 1
f18 5 2 8 11 1 4 7 10 9 6 3
Average point 3.00 3.61 4.27 10.78 3.55 4.61 3.57 3.77 3.00 8.72 7.33
Ranking point 1 5 7 11 3 8 4 6 1 10 9
f19 4 7 5 11 6 8 3 2 1 9 10
f20 4 7 5 11 6 8 3 2 1 9 10
f21 4 8 5 9 7 6 3 2 1 11 10
f22 5 7 4 11 6 8 3 2 1 9 10
f23 4 7 5 11 6 8 3 2 1 9 10
f24 1 1 1 11 1 1 1 1 1 9 10
f25 5 3 8 11 1 2 7 8 8 6 4
f26 5 8 4 11 6 7 3 2 1 9 10
f27 5 1 8 11 3 2 4 9 6 7 10
f28 3 1 7 11 4 2 6 8 9 10 5
f29 4 6 7 11 5 8 3 2 1 10 9
f30 4 7 5 11 6 8 3 2 1 9 10
f31 5 4 8 11 2 3 7 9 10 6 1
f32 4 8 5 11 6 7 3 2 1 9 10
f33 4 8 5 11 6 7 3 2 1 9 10
f34 4 8 5 9 7 6 3 2 1 11 10
Average point 4.06 5.68 5.43 10.75 4.87 5.69 3.62 3.56 2.81 8.87 8.68
Ranking point 4 7 6 11 5 8 3 2 1 10 9
Overall point 3.49 4.58 4.81 10.76 4.17 5.11 3.59 3.67 2.91 8.79 7.96
Overall ranking point 2 6 7 11 5 8 3 4 1 10 9
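The "Average point" and "Ranking point" rows of Tables 18, 23, and 24 follow from per-function ranks of the reported results. A minimal MATLAB sketch of this bookkeeping is given below; the results matrix, its synthetic data, and all variable names are illustrative assumptions, not part of the benchmark implementation.

```matlab
% Illustrative computation of ranking points, assuming a matrix
% bestResults(numFuncs x numAlgs) of best objective values per algorithm
% (smaller is better); tied algorithms receive the same (minimum) rank.
bestResults = rand(18, 11);          % placeholder data: 18 functions, 11 algorithms
[numFuncs, numAlgs] = size(bestResults);
points = zeros(numFuncs, numAlgs);
for f = 1:numFuncs
    row = bestResults(f, :);
    for a = 1:numAlgs
        % rank = 1 + number of strictly better algorithms on this function
        points(f, a) = 1 + sum(row < row(a));
    end
end
averagePoint = mean(points, 1);       % "Average point" row of the tables
[~, order] = sort(averagePoint);      % smaller average point = better overall
rankingPoint = zeros(1, numAlgs);
rankingPoint(order) = 1:numAlgs;      % "Ranking point" row of the tables
```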
Table 24 Prediction performances of the algorithms based on the ranking points obtained for mean results of 1000D test functions
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE
f1 6 7 5 11 4 8 1 2 3 9 10
f2 6 7 1 11 1 8 1 1 1 9 10
f3 6 7 1 11 1 8 1 1 1 9 10
f4 8 11 3 10 6 9 2 1 4 7 5
f5 5 7 3 11 8 6 4 1 2 9 10
f6 7 9 5 11 4 6 3 1 2 8 10
f7 6 5 4 11 3 7 2 1 9 8 10
f8 7 5 4 10 6 8 1 1 1 11 9
f9 5 1 10 11 1 1 7 8 9 6 1
f10 5 6 7 11 4 8 3 2 1 9 10
f11 8 6 5 11 4 7 1 1 1 9 10
f12 5 9 4 6 7 8 3 1 2 10 11
f13 6 3 1 11 9 5 4 2 8 9 8
f14 4 1 9 11 2 3 7 10 8 5 6
f15 6 4 3 11 7 5 9 1 2 10 8
f16 4 1 8 11 2 3 7 9 10 6 5
f17 6 7 3 11 2 4 5 9 8 10 1
f18 4 3 8 11 1 2 7 10 9 5 6
Average point 5.77 5.50 4.66 10.61 4.00 5.88 3.77 3.44 4.50 8.27 9.43
Ranking point 7 6 6 11 3 8 2 1 4 9 10
f19 6 7 5 11 4 8 3 1 2 9 10
f20 5 7 6 11 4 8 3 1 2 9 10
f21 9 7 4 8 6 5 2 1 3 10 11
f22 6 7 5 11 4 8 3 1 2 9 10
f23 6 7 5 11 4 8 3 1 2 9 10
f24 1 7 1 11 8 1 1 1 1 9 10
f25 5 3 8 11 1 2 7 9 10 6 4
f26 6 8 5 11 4 7 3 2 1 9 10
f27 5 2 7 11 3 4 1 8 6 9 10
f28 4 2 6 11 3 1 5 7 8 9 10
f29 8 6 5 11 4 7 3 1 2 9 10
f30 7 6 5 11 4 8 2 1 3 9 10
f31 4 3 8 11 1 2 7 9 10 6 5
f32 6 7 5 11 4 8 3 2 1 9 10
f33 4 9 5 11 6 7 3 2 1 8 10
f34 9 6 4 7 8 5 2 1 3 10 11
Average point 5.68 5.87 5.25 10.56 4.25 5.56 3.18 3.00 3.56 8.68 9.4
Ranking point 7 8 5 11 4 6 2 1 3 9 10
Overall point 5.72 5.67 4.93 10.58 4.11 5.73 3.49 3.23 4.05 8.46 9.41
Overall ranking point 7 6 5 11 4 8 2 1 3 9 10
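The convergence analysis discussed next is based on best-so-far objective values recorded at every iteration and averaged over the independent runs. A minimal MATLAB sketch of this bookkeeping is shown below; the random-search loop and all identifiers are placeholders under stated assumptions, not the compared optimizers themselves.

```matlab
% Illustrative recording of a convergence curve: the best-so-far objective
% value after each function evaluation, averaged over 30 independent runs
% (the quantity plotted in Figs. 3-8). The inner loop is only a placeholder.
numRuns = 30; maxFEs = 1000; dim = 30; lb = -100; ub = 100;
rastrigin = @(x) sum(x.^2 - 10*cos(2*pi*x) + 10);    % placeholder multimodal objective
curves = zeros(numRuns, maxFEs);
for r = 1:numRuns
    rng(r);
    bestF = inf;
    for fe = 1:maxFEs
        x = lb + (ub - lb) .* rand(1, dim);          % stand-in candidate generation
        bestF = min(bestF, rastrigin(x));
        curves(r, fe) = bestF;                       % best-so-far value at this evaluation
    end
end
semilogy(mean(curves, 1));                           % mean convergence curve over runs
xlabel('Function evaluations'); ylabel('Mean best objective value');
```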
controlled by the search schemes responsible for the exploration mechanism. Next, the variational declines in the fitness values are administered by the search mechanism of the exploitation phase, in which search agents focus on the promising areas meticulously located in the previous phase. In order to deeply analyze the convergence performance of these eleven metaheuristic algorithms, the decreases in the fitness values are plotted against the increasing number of iterations for the 30D unimodal and multimodal test problems, as depicted in Figs. 3, 4, 5, 6, 7, 8. The convergence curves of the compared algorithms illustrated in these figures are obtained for the mean
Fig. 1 Elapsed runtimes of the algorithms for 30D multimodal test functions
Fig. 2 Computational runtimes of compared algorithms for 30D unimodal test problems
values of 30 independent algorithm runs and 1000 function evaluations. Figures 3, 4, 5 depict the convergence curves plotted for the 30D multimodal test functions by the eleven metaheuristic optimizers. The compared algorithms show different convergence behaviors for different test functions. The REPTILE algorithm shows gradual decreases in the earliest phases and completes the iterations with rapid and sudden declines for the multimodal test functions f1, f2, f3, f5, f6, f7, f8, f10, f11, f12, and f15. This convergence behavior is the direct result of the influences of the search equations
Fig. 3 Evolution histories of the 30D multimodal problems from f1 – Ackley to f6- Alpine
Fig. 4 Evolution plots of the compared algorithms for 30D multimodal test functions from f7 -Csendes to f12 -Yang1
associated with the exploration mechanism in the early stages, which is followed by intensification on the fruitful regions discovered in the preceding iterations. As can be noticed from the search equations of the REPTILE algorithm, the responsible search agents tend to probe the domain around the so-far-obtained best solutions, emphasizing exploitation rather than exploration of the unvisited regions of the search space. This tendency gives rise to an acceleration in the general convergence speed of the algorithm and results in a quick reach to the global optimum point. However, this behavior may not be conducive, and can even be detrimental, for some test instances, as seen from the evolution plots obtained for the f14, f16, f17, and f18 functions. The solution space of these benchmark problems
Fig. 5 Convergence histories of the competitive algorithms for 30D multimodal test functions from f13-Yang4 to f18-Levy
Fig. 6 Iterative evolution of the fitness function values for 30D unimodal benchmark functions from f19-Sphere to f24-Dropwave
needs more inquisitive exploration instead of consistent intensification of the promising regions, which explains the relatively unsuccessful prediction performance of REPTILE for these test functions. The remaining algorithms perform stepwise and gradual decreases throughout the iteration process for most of the multimodal test functions, which indicates that the exploration mechanism is more dominant and prevalent for these algorithms. Too much emphasis on probing the feasible regions of the search space not only entails redundant diversification over the solution space, disregarding the necessary intensification when it is needed, but also consumes an excessive amount of computational resources, yielding longer than expected algorithm runtimes. To
Fig. 7 Convergence plots for 30D unimodal test problems from f25-Rosenbrock to f30-Schwefel 2.23
Fig. 8 Evolution histories of the objective function values for test problems from f31-Schwefel 2.25 to f34-Powell
summarize the general convergence behavior of the algorithms for the multimodal test functions in a nutshell: when the overall convergence performance is averaged over the eighteen test functions, it can be conveniently stated that all compared algorithms can capitalize on the promising search regions explored in the early iterations to effectively pinpoint the exact locations of the optimum solution of the problem in most of the cases.
Similar search tendencies are also observed for the 30D unimodal test functions, whose convergence graphs are visualized in Figs. 6, 7, 8. The REPTILE algorithm is again able to maintain a proper balance between the
Table 25 Error comparison of the algorithms with respect to the statistical results of CEC-2013 test functions
CEC 01 CEC 02
Best Mean Std Dev Worst Best Mean Std Dev Worst
CEC 03 CEC 04
Best Mean Std Dev Worst Best Mean Std Dev Worst
CEC 05 CEC 06
Best Mean Std Dev Worst Best Mean Std Dev Worst
CEC 07 CEC 08
Best Mean Std Dev Worst Best Mean Std Dev Worst
Table 26 Comparison of the error analysis results obtained for CEC-2013 test problems
CEC 09 CEC 10
Best Mean Std Dev Worst Best Mean Std Dev Worst
CEC 13 CEC 14
Best Mean Std Dev Worst Best Mean Std Dev Worst
CEC 15 CEC 16
Best Mean Std Dev Worst Best Mean Std Dev Worst
Table 26 (continued)
CEC 15 CEC 16
Best Mean Std Dev Worst Best Mean Std Dev Worst
exploration and exploitation phases for the standard unimodal problems, which indicates that it is capable of eliminating the local pitfalls located in the search domain and obtaining the best estimations for the unimodal test functions f19, f20, f21, f22, f23, f24, f26, f29, f30, f33, and f34. This algorithm continues to explore the search domain until the point where the maximum number of iterations is reached. Nearly all algorithms have a suitable convergence speed for the unimodal test problems; however, they experience some difficulties in converging to the optimal solution of the f25-Rosenbrock function, except for the GRAD algorithm. The Rosenbrock test function is a challenging unimodal benchmark case, frequently used for assessing the optimization capabilities of stochastic algorithms, whose global optimum point resides in a long, narrow, parabolic-shaped valley that is very hard to locate by any type of optimization algorithm. None of the algorithms is able to converge to the global optimum answer of the f27-Dixon and Price test function, which is another compelling test case for the algorithms. Again, nearly all optimizers fail to arrive at the optimal solution of the f32-Schwefel 2.25 function and tend to be trapped in local solutions, except the GRAD algorithm, which shows consistent and stepwise decreases throughout the iterations. All compared algorithms prematurely converge to one of the many local optimum points of the f28-Trid function, showing no clear sign of balance between the complementary exploration and exploitation phases.

4.6 Performance assessment on CEC-2013 benchmark problems

This section comparatively investigates the performance of the eleven metaheuristic algorithms by evaluating their prediction accuracies on a test suite of twenty-eight thirty-dimensional benchmark functions employed in the CEC-2013 competition. Multidimensional test functions taking place in CEC competitions are artificially generated benchmark cases composed of shifted, rotated, highly multimodal, and discontinuous benchmark problems that are most likely to simulate the challenges and difficulties posed by real-world optimization problems. They have been consistently employed by metaheuristic algorithm developers to assess their proposed optimizers. When the mathematical characteristics of these test functions are carefully examined in detail, the problems belonging to the test suites used in the competitions organized in 2013, 2014, and 2015 are very similar; only negligible functional nuances make the difference between them. Test functions utilized in events occurring after 2017 consist of multi-objective or constrained benchmark cases; those are not the main concern of this section, which deals with solving unconstrained optimization problems. Therefore, the authors consider the twenty-eight test functions for the performance evaluation of the compared algorithms.
Similar to the previous cases, exhaustive comparisons between the competitive algorithms have been realized by recording the error results of the predictions in terms of best, mean, worst, and standard deviation values for each benchmark function in the suite. The population size of each algorithm is set to N = 20, and a total of 3000 iterations is performed for each algorithm on each test function. Statistical results are obtained after 50 independent algorithm runs. For all problems, the 30D search space is restricted between the lower bound of -100 and the upper bound of 100. During the numerical experiments, the same algorithm parameters have been considered for the competitive algorithms as were previously utilized for the thirty-four unconstrained test functions. As was mentioned, the competitive algorithms have been run many times, and their prediction capabilities on the twenty-eight test functions composed of unimodal, multimodal, and composition functions have been evaluated in terms of well-organized performance metrics. The optimization performance of the algorithms is compared with respect to the statistical results obtained after the consecutive algorithm runs. Furthermore, the Friedman mean ranks for each optimized test function are tabulated in the respective tables to decide the degree of statistical significance between the algorithms. Similarly, the convergence tendencies of the algorithms have been evaluated and
Table 27 Statistical results of the compared algorithms for CEC 2013 test problems
Problem CEC 17 CEC 18
Best Mean Std Dev Worst Best Mean Std Dev Worst
CEC 21 CEC 22
Best Mean Std Dev Worst Best Mean Std Dev Worst
CEC 23 CEC 24
Best Mean Std Dev Worst Best Mean Std Dev Worst
Table 27 (continued)
CEC 23 CEC 24
Best Mean Std Dev Worst Best Mean Std Dev Worst
Fig. 9 Convergence graphs for CEC-2013 test problems from first to sixth test instances
compared through the evolution curves of the objective functions visualized in descriptive graphs. First and foremost, the optimization abilities of the compared methods have been evaluated by performing 50 runs on the unimodal test functions, which are famous for having only one minimum or maximum point residing in a relatively large search domain. A successful algorithm is one having the ability to circumvent the trapping local extremum points and arrive at the global optimum point. The number of potential local optimum points dramatically increases with increasing problem dimensionality, which makes these functions appropriate test beds for evaluating the exploration capabilities of the algorithms. According to the comparative optimum results obtained for the unimodal test functions from CEC-01 to CEC-05 given in Table 25, MANTA achieves the best predictions, while the RUNGE algorithm occupies the second-best seat. MANTA obtains the best optimum solutions for four out of five unimodal test functions and becomes the best performer for the unimodal functions. When the general performance analysis of the compared algorithms is founded upon the multimodal test functions, whose prediction results are given in tabular form in Tables 25, 26, 27, a clear dominance of the EQUIL algorithm is evident in most of the test instances, as it obtains the most accurate estimations in eleven out of fourteen benchmark cases and significantly surpasses the remaining algorithms in terms of solution accuracy. The AFRICAN algorithm attains the second-best predictions for the multimodal test functions. It is also
Fig. 10 Convergence histories of the algorithms for multimodal test problems employed in CEC 2013 competition
Fig. 11 Evolution histories of the compared algorithms for multimodal test function of CEC 2013 optimization benchmark problems
observed that the MANTA algorithm is the third-best method, providing closer results to the global optimum points of the test functions. Experimental results indicate that EQUIL can effectively elude the deceitful local stalls with a higher convergence rate, as seen from the evolution trends of the objective function values depicted in Figs. 9, 10, 11, 12. One can easily interpret from the figures that the vibrant fluctuations in the convergence curves are the direct outcomes of the dominant exploration phase activated in the AFRICAN algorithm, in which responsible search agents roam around the feasible domain to explore the unvisited regions, resulting in the sudden, shock-like changes in fitness values. It is also worth mentioning that the success of the EQUIL algorithm is the direct influence of its balanced
Fig. 12 Comparison of the convergence performance of the algorithms based on the composition function employed in CEC 2013 competition
search mechanisms of exploration and exploitation. Feasible predictive solutions of the composite test functions are reported in Tables 27 and 28. Composite benchmark problems combine the functional characteristics of their forming sub-functions, whose successful and accurate solutions require a well-organized task division between local and global search mechanisms. The predictive results for the composite functions show that EQUIL has the best quality of estimations, providing the lowest error results and obtaining the best answers for six out of nine problems. The AFRICAN and MANTA algorithms correspondingly share the second- and third-best seats for the composite test functions. When the convergence curves constructed for the composite test functions are carefully investigated in Figs. 12 and 13, faster convergence to the optimum point for the EQUIL algorithm is valid in most of the problem instances, which also verifies the superiority of this algorithm in diversifying the candidate solutions as much as possible. It is also interesting to see the collapse of the REPTILE and PRO algorithms in all departments of the CEC test functions, although they are indisputably victorious optimizers for the standard unimodal and multimodal test problems. This failure can be attributed to the fact that the governing search equations of these two algorithms are mainly controlled by the so-far-obtained best solution throughout the iterations, which can sometimes overemphasize intensification on the regions where the current best search agents are located rather than probing around the unexplored areas in which possible promising answers are occupied. In addition, the glamorous success of the EQUIL algorithm for the CEC 2013 test functions can be conceived as contradictory and confusing based on its relatively poor performance on the standard unconstrained problems. The search equations of the EQUIL algorithm are designed and tailored in such an intrinsic way that, collaboratively as a whole, they put too much emphasis on exploration over undiscovered areas and tend to disregard intensification on the promising regions at certain stages of the iterations. This behavior results in a time-consuming search process accompanied by redundant memory usage, without showing any sign of proceeding to the optimal results, due to the unorganized balance between the active exploration and exploitation mechanisms. Average rankings of the algorithms according to the Friedman results have been reported and compared against each other in Table 29. A smaller ranking point corresponds to a better optimization performance of the algorithm. The robustness and accuracy of the EQUIL algorithm are undeniably confirmed since it attains the best average ranking point among the other algorithms, with the respective p-value of 5.91E-08.

5 Comprehensive benchmark analysis on real-world engineering problems

This section analyzes the optimization performances of the investigated metaheuristic optimizers by applying them to some selected constrained engineering design optimization
problems and comparing the outcomes. Fourteen different engineering design optimization problems have been selected with various decision variable, constraint, and objective function characteristics, and the feasible solutions obtained from the Runge Kutta Optimizer (RUNGE), Gradient-based Optimizer (GRAD), Poor and Rich Optimization Algorithm (PRO), Reptile Search Algorithm (REPTILE), Snake Optimizer (SNAKE), Equilibrium Optimizer (EQUIL), Manta Ray Foraging Optimization (MANTA), African Vultures Optimization Algorithm (AFRICAN), Aquila Optimizer (AQUILA), and Harris Hawks Optimization (HARRIS) are benchmarked against each other. The Barnacles Mating Optimizer (BARNA) has not been included in the comprehensive comparative investigations, as it is not able to find any feasible solutions during the consecutive and independent runs for each considered engineering design problem and collapses at certain stages of the iterations. There are many alternative constrained benchmark cases concerning real-world engineering problems available in the literature. The majority of the engineering design problems that have been frequently employed for benchmarking the optimization accuracy of a developed algorithm were utilized in the CEC 2020 competition [118], which includes a set of fifty-seven challenging constrained engineering problems. These problems can capture a wide range of difficulties that are highly likely to be posed by the challenges of real-world problems. In the CEC'20 competition, state-of-the-art optimizers proposed by the participants were exhaustively tested on these synthetic benchmark cases. Another comprehensive database is provided by Floudas et al. [92], reflecting their long-term efforts in designing challenging test instances composed of non-convex optimization problems with varying degrees of difficulty. After a comprehensive investigation of the past literature studies related to solving constrained optimization problems, the authors of this study choose the fourteen most widely employed complex engineering design
Fig. 13 Convergence graphs constructed for composition test functions employed in CEC 2013 competition
cases, which were also previously utilized in CEC'2020 [118] and Floudas et al. [92].
All the above-mentioned algorithms are developed and run in the MATLAB programming environment on the same personal computer that has been utilized to accomplish the previous benchmarking studies. The algorithms have been independently run 50 times for each engineering design problem, and the outcomes have been statistically evaluated in terms of the best, worst, mean, and standard deviation of the acquired solutions. The maximization problems have been transformed into minimization problems by multiplying the objective function by minus one. The equality constraints in the optimization problems have been converted to inequality constraints by $|h(x)| - \varepsilon \le 0$, in which $\varepsilon$ is taken as 1E-10. The Inverse Tangent Constraint Handling method [119] has been employed to deal with the constraints during the optimization process. An optimization problem can be mathematically represented as follows,

$$\arg\min f(\vec{x}), \quad \vec{x} \in S \subseteq \mathbb{R}^D$$
$$\text{subject to} \quad g_i(\vec{x}) \le 0, \quad g_i: \mathbb{R}^D \rightarrow \mathbb{R}, \quad i = 1, 2, \ldots, k \qquad (122)$$

where $f(\vec{x})$ is the objective function, $\vec{x}$ is the vector of design variables, $D$ is the number of dimensions, $S$ is the search space, and $g_i(\vec{x})$ are the inequality constraints. In the Inverse Tangent Constraint Handling method, the infeasible regions in the search space are disregarded through the following adjustment made to the objective function,

$$\min f(\vec{x}) = \begin{cases} h_{\max}(\vec{x}) & \text{if } h_{\max}(\vec{x}) > 0 \\ \operatorname{atan}\left[f(\vec{x})\right] - \dfrac{\pi}{2} & \text{otherwise} \end{cases} \qquad (123)$$

where

$$h_{\max}(\vec{x}) = \max\left[h_1(\vec{x}), h_2(\vec{x}), \ldots, h_k(\vec{x})\right] \qquad (124)$$

where atan() is the arctangent function. Table 30 reports the functional properties and the maximum number of function evaluations performed to solve each engineering design optimization problem. In Table 30, D represents the number of decision variable dimensions, gnum and hnum, respectively, represent the number of inequality and equality constraints in the problem, and NFE stands for the maximum number of function evaluations.
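A minimal MATLAB sketch of the adjustment in Eqs. (123)-(124) is given below. The function name, its arguments, and the assumption that each constraint handle returns the violation value (positive when violated, with equality constraints pre-converted via |h(x)| - eps) are illustrative and not taken from the study's code.

```matlab
% Minimal sketch of the Inverse Tangent Constraint Handling rule of
% Eqs. (122)-(124): infeasible solutions are scored by their largest
% constraint violation, feasible ones by atan(f) - pi/2, which is always
% negative and therefore always preferred over any infeasible solution.
function F = penalizedObjective(x, objFun, conFuns)
    % objFun  : handle returning f(x)
    % conFuns : cell array of handles g_i(x), with g_i(x) <= 0 when satisfied
    g = cellfun(@(gi) gi(x), conFuns);   % constraint values at x
    hmax = max(g);                       % largest violation, Eq. (124)
    if hmax > 0
        F = hmax;                        % infeasible: minimize the violation
    else
        F = atan(objFun(x)) - pi/2;      % feasible: bounded, strictly negative score
    end
end
```

As a usage example, penalizedObjective(x, @(x) sum(x.^2), {@(x) 1 - x(1)}) would return a negative value for any x with x(1) >= 1 and the violation 1 - x(1) otherwise.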
The total number of function evaluations assigned to a particular problem is directly associated with its functional complexity; that is, a design problem having a nonlinear objective function and a high number of imposed design constraints would necessitate a high number of function evaluations to successfully reach its global optimum solution. Based on the functional characteristics and the degree of challenge of the problem, a different number of function evaluations is utilized. Therefore, the authors consider a similar performance criterion previously employed in their past effort on
Table 29 Performance rankings of the algorithms based on the mean deviation results of fifty independent runs
AFRICAN AQUILA BARNA EQUIL GRAD HARRIS MANTA PRO REPTILE RUNGE SNAKE
CEC-01 1 6 9 1 7 1 1 11 10 1 8
CEC-02 4 6 9 3 7 5 1 11 8 2 10
CEC-03 3 5 9 2 7 6 1 11 8 4 10
CEC-04 8 5 7 3 6 4 2 10 9 1 11
CEC-05 1 6 8 1 7 5 1 11 10 1 9
CEC-06 3 5 9 2 7 6 1 11 8 4 10
CEC-07 2 7 9 1 6 5 3 11 8 4 10
CEC-08 1 1 1 1 1 1 1 1 1 1 1
CEC-09 2 6 10 1 6 5 3 9 8 4 10
CEC-10 4 6 8 1 7 5 1 11 10 1 9
CEC-11 2 6 9 1 8 5 3 11 7 4 10
CEC-12 2 9 6 1 5 8 3 11 7 4 10
CEC-13 2 10 6 1 5 8 3 11 7 4 9
CEC-14 2 6 10 1 7 5 4 11 8 3 9
CEC-15 2 5 9 2 7 6 4 11 8 1 10
CEC-16 1 3 7 1 3 3 3 7 7 7 7
CEC-17 2 5 9 1 8 6 3 11 7 4 10
CEC-18 2 5 8 1 9 6 3 11 7 4 10
CEC-19 1 4 8 2 7 5 3 11 10 6 9
CEC-20 4 4 4 1 4 4 2 10 4 2 10
CEC-21 4 5 8 2 7 3 1 11 10 5 9
CEC-22 1 6 10 2 7 5 4 11 8 3 9
CEC-23 4 6 9 2 7 5 3 11 8 1 10
CEC-24 2 9 8 1 5 5 3 11 9 3 7
CEC-25 3 10 6 1 6 6 2 6 11 4 5
CEC-26 3 11 7 6 7 5 1 10 4 1 9
CEC-27 2 8 10 1 6 5 4 9 10 3 7
CEC-28 2 10 5 1 8 7 3 11 6 4 9
Average rank 2.5 6.25 7.78 1.57 6.32 5.0 2.39 10.0 7.78 3.07 8.82
Overall rank 3 6 8 1 7 5 2 11 8 4 10
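As a sketch of how the average ranks and the reported p-value associated with Table 29 can be reproduced, the following MATLAB fragment ranks the algorithms on each function and applies the chi-square approximation of the Friedman test. The error matrix is synthetic, the variable names are illustrative, and chi2cdf assumes the Statistics and Machine Learning Toolbox is available.

```matlab
% Illustrative Friedman mean-rank computation following the layout of
% Table 29: algorithms are ranked on each CEC-2013 function (here by a
% placeholder matrix of mean errors over the runs), ranks are averaged
% over the 28 functions, and a chi-square approximation gives the p-value.
numFuncs = 28; numAlgs = 11;
meanErrors = abs(randn(numFuncs, numAlgs));      % placeholder mean errors per function
ranks = zeros(numFuncs, numAlgs);
for f = 1:numFuncs
    [~, order] = sort(meanErrors(f, :));         % smaller error = better rank
    ranks(f, order) = 1:numAlgs;
end
avgRank = mean(ranks, 1);                        % average rank per algorithm
% Friedman test statistic (chi-square approximation) and p-value
chi2 = 12*numFuncs/(numAlgs*(numAlgs+1)) * ...
       (sum(avgRank.^2) - numAlgs*(numAlgs+1)^2/4);
pValue = 1 - chi2cdf(chi2, numAlgs - 1);
```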
a research study concerning metaheuristic algorithm design given in [120].
The Inverse Tangent Constraint Handling mechanism is a versatile and easy-to-apply procedure, eliminating the exhaustive and tedious trial-and-error-based penalty value assignment process. This intelligently devised constraint handling tool is found to be the best available mechanism among the possible alternatives according to the outcomes of the research study given in [121]. Tables 31 and 32 show the statistical outcomes of the compared metaheuristic optimizers for each engineering design problem considered in this study.

5.1 Tension/compression spring design problem

The tension/compression spring design problem, first introduced by Arora [122], deals with minimizing the weight of a tension/compression spring by taking several constraints into account, such as shear stress and surge frequency. The design parameters of the problem are the wire diameter (d), the mean coil diameter (D), and the number of active coils (N). The mathematical representation of the optimization problem is,
$$\arg\min f(\vec{x}) = (N + 2) D d^2$$
subject to
$$g_1(\vec{x}) = 1 - \frac{D^3 N}{71785 d^4} \le 0, \qquad g_2(\vec{x}) = \frac{4D^2 - dD}{12566\left(Dd^3 - d^4\right)} + \frac{1}{5108 d^2} - 1 \le 0$$
$$g_3(\vec{x}) = 1 - \frac{140.45 d}{D^2 N} \le 0, \qquad g_4(\vec{x}) = \frac{D + d}{1.5} - 1 \le 0 \qquad (125)$$
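For illustration, the formulation in Eq. (125) can be coded as an objective handle plus a vector of inequality constraints, for example in MATLAB as below; the handles, the sample point, and all names are illustrative assumptions and not part of the study's implementation. The constraint vector can be paired with the inverse tangent handling rule sketched earlier.

```matlab
% Minimal sketch of the tension/compression spring problem of Eq. (125)
% with x = [d, D, N]; every entry of springCon(x) should be <= 0 for a
% feasible design. The sample point below is only an approximately
% feasible illustration, not an optimized solution.
springObj = @(x) (x(3) + 2) * x(2) * x(1)^2;
springCon = @(x) [ ...
    1 - (x(2)^3 * x(3)) / (71785 * x(1)^4); ...
    (4*x(2)^2 - x(1)*x(2)) / (12566 * (x(2)*x(1)^3 - x(1)^4)) ...
        + 1 / (5108 * x(1)^2) - 1; ...
    1 - 140.45 * x(1) / (x(2)^2 * x(3)); ...
    (x(1) + x(2)) / 1.5 - 1 ];
x0 = [0.052, 0.357, 11.7];             % illustrative design point
f0 = springObj(x0);                    % spring weight at x0
g0 = springCon(x0);                    % constraint values at x0 (all <= 0 here)
```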
5.2 Belleville spring design problem

The design variables of the problem are the spring thickness t (x1), the height h (x2), the inner diameter Di (x3), and the external diameter De (x4) of the spring. Figure 15 shows the physical representation of the Belleville spring. The objective function and constraints of the problem are given as,

$$\arg\min f(x) = 0.07075\,\pi\left(D_e^2 - D_i^2\right) t$$
subject to
$$g_1(x) = S - \frac{4 E \delta_{\max}}{\left(1 - \mu^2\right)\alpha D_e^2}\left[\beta\left(h - \frac{\delta_{\max}}{2}\right) + \gamma t\right] \ge 0$$
$$g_2(x) = \left.\frac{4 E \delta}{\left(1 - \mu^2\right)\alpha x_4^2}\left[\left(h - \frac{\delta}{2}\right)\left(h - \delta\right) t + t^3\right]\right|_{\delta = \delta_{\max}}$$
Table 31 Statistical outcomes for the first set of benchmark constrained engineering optimization problems
Problem Tension/compression spring Bellevile spring
Best Worst Mean Std Dev Best Worst Mean Std Dev
behavior of MANTA and EQUIL can be observed in Fig. 16.

5.3 Optimal design of a flywheel

The optimal design of a flywheel problem [123] has a nonlinear objective function and is subjected to two inequality constraints. The problem is mathematically represented as,

$$\arg\min f(\vec{x}) = \left(0.0201/10^7\right) x_1^4 x_2 x_3^2$$
subject to
$$g_1(\vec{x}) = 675 - x_1^2 x_2 \le 0, \qquad g_2(\vec{x}) = 0.419 - x_1^2 x_3^2/10^7 \le 0 \qquad (127)$$
Allowable search bounds: $0 \le x_1 \le 36$, $0 \le x_2 \le 5$, $0 \le x_3 \le 125$.

The reported results in Table 31 show that MANTA, EQUIL, and AFRICAN find the most desirable best results after several runs. However, it is realized that EQUIL outperformed its peers when the mean and standard deviation results are examined. Table 36 gives the decision variable and constraint values of the best results for the compared optimization algorithms. Figure 17 shows the variations of the decision variables with the increasing number of iterations.

5.4 Car side impact design problem

This problem, first illustrated by Gu et al. [124], handles the FE (finite element) model of a car. The main objective is to reduce the weight and ameliorate the strength and resilience of the vehicle to a satisfactory amount in the case of an instantaneous crash. The procedures and experimental configurations implemented by the European Enhanced Vehicle-Safety Committee (EEVC) have been utilized to accomplish the test studies of the car impact. A regression model has been developed by employing Latin hypercube sampling and quadratic backward-stepwise regression methods. The problem consists of nine design parameters; these are the B-Pillar inner, B-Pillar reinforcement, floor side inner, cross members, door beam, door beltline reinforcement, and roof rail thicknesses (x1–x7), the B-Pillar inner and floor side inner materials (x8 and x9), and the barrier height and hitting position (x10 and x11). Among them, two of the decision variables are discrete (x8 and x9), and the rest of them are continuous. The optimization problem is shown below,

$$\arg\min f(x) = 1.98 + 4.90x_1 + 6.67x_2 + 6.98x_3 + 4.01x_4 + 1.78x_5 + 2.73x_7$$
subject to
$$g_1(x) = F_a \le 1.0\ \text{kN}, \quad g_2(x) = V_{Cu} \le 0.32\ \text{m/s}, \quad g_3(x) = V_{Cm} \le 0.32\ \text{m/s}, \quad g_4(x) = V_{Cl} \le 0.32\ \text{m/s}$$
$$g_5(x) = D_{ur} \le 32\ \text{mm}, \quad g_6(x) = D_{mr} \le 32\ \text{mm}, \quad g_7(x) = D_{lr} \le 32\ \text{mm}, \quad g_8(x) = F_p \le 4\ \text{kN}$$
$$g_9(x) = V_{MBP} \le 9.9\ \text{mm/ms}, \quad g_{10}(x) = V_{FD} \le 15.7\ \text{mm/ms}$$
where
$$F_a = 1.16 - 0.3717x_2x_4 - 0.00931x_2x_{10} - 0.484x_3x_9 + 0.01343x_6x_{10}$$
$$V_{Cu} = 0.261 - 0.0159x_1x_2 - 0.1881x_1x_8 - 0.019x_2x_7 + 0.0144x_3x_5 + 0.0008757x_5x_{10} + 0.08045x_6x_9 + 0.00139x_8x_{11} + 0.00001575x_{10}x_{11}$$
$$V_{Cm} = 0.214 + 0.00817x_5 - 0.131x_1x_8 - 0.0704x_1x_9 + 0.03099x_2x_6 - 0.018x_2x_7 + 0.0208x_3x_8 + 0.121x_3x_9 - 0.00364x_5x_6 + 0.0007715x_5x_{10} - 0.0005354x_6x_{10} + 0.00121x_8x_{11} + 0.00184x_9x_{10} - 0.02x_2^2$$
$$V_{Cl} = 0.74 - 0.61x_2 - 0.163x_3x_8 + 0.001232x_3x_{10} - 0.166x_7x_9 + 0.277x_2^2$$
$$D_{ur} = 28.98 + 3.818x_3 - 4.2x_1x_2 + 0.0207x_5x_{10} + 6.63x_6x_9 - 7.7x_7x_8 + 0.32x_9x_{10}$$
$$D_{mr} = 33.86 + 2.95x_3 + 0.1792x_{10} - 5.057x_1x_2 - 11.0x_2x_8 - 0.0215x_5x_{10} - 9.98x_7x_8 + 22.0x_8x_9$$
$$D_{lr} = 46.36 - 9.9x_2 - 12.9x_1x_8 + 0.1107x_3x_{10}$$
$$F_p = 4.72 - 0.5x_4 - 0.0122x_4x_{10} + 0.009325x_6x_{10} + 0.000191x_{11}^2$$
$$V_{MBP} = 10.58 - 0.674x_1x_2 - 1.95x_2x_8 + 0.02054x_3x_{10} - 0.0198x_4x_{10} + 0.028x_6x_{10}$$
$$V_{FD} = 16.45 - 0.489x_3x_7 - 0.843x_3x_6 + 0.0432x_9x_{10} - 0.0556x_9x_{11} - 0.000786x_{11}^2$$
where
$$0.5 \le x_1, \ldots, x_7 \le 1.5, \quad -30 \le x_{10}, x_{11} \le 30, \quad x_8, x_9 \in \{0.192, 0.345\} \qquad (128)$$

Similar to the previous case, MANTA and EQUIL outperform the competing optimization algorithms and find the best result f(x) = 22.843092, as reported in Table 31. It is recognized that MANTA performs slightly better over the consecutive runs compared to EQUIL, which can be seen by investigating the
Table 32 Statistical results for the second set of benchmark constrained engineering optimization problems
Problem Design of a heat exchanger Hydrostatic thrust bearing model
Best Worst Mean Std Dev Best Worst Mean Std Dev
other metrics. Table 37 lists the decision variable and constraint outcomes for the best results found by the optimizers. Figure 18 demonstrates the convergence chart of the design variables for each optimizer.

5.5 Optimal welded beam design problem

The welded beam design optimization problem [125], shown in Fig. 19, tackles the minimization of the production costs of the welded beam. The problem has four design variables, namely the thickness of the weld (h - x1), the length of the weld (l - x2), the beam height (t - x3), and the beam width (b - x4). Furthermore, there are seven inequality
constraints subjected to the shear stress (τ), the bending stress (σ) in the beam, the buckling load on the bar (P), the end deflection of the beam (δ), and a few side constraints. The mathematical formulation of the problem is given as,

$$\arg\min f(x) = 1.10471x_1^2x_2 + 0.04811x_3x_4\left(14 + x_2\right)$$
subject to
$$g_1(x) = \tau(x) - \tau_{\max} \le 0, \quad g_2(x) = \sigma(x) - \sigma_{\max} \le 0, \quad g_3(x) = x_1 - x_4 \le 0$$
$$g_4(x) = 0.1047x_1^2 + 0.04811x_3x_4\left(14 + x_2\right) - 5 \le 0, \quad g_5(x) = 0.125 - x_1 \le 0$$
$$g_6(x) = \delta(x) - \delta_{\max} \le 0, \quad g_7(x) = P - P_c(x) \le 0$$
$$0.1 \le x_1, x_4 \le 2, \quad 0.1 \le x_2, x_3 \le 10$$
where
$$\tau(x) = \sqrt{(\tau')^2 + 2\tau'\tau''\frac{x_2}{2R} + (\tau'')^2}, \quad \tau' = \frac{P}{\sqrt{2}\,x_1x_2}, \quad \tau'' = \frac{MR}{J}$$
$$M = P\left(L + \frac{x_2}{2}\right), \quad R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2}, \quad P_c(x) = \frac{4.013E\sqrt{x_3^2x_4^6/36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right) \qquad (129)$$
The industrial refrigeration system design problem (see Table 40) is formulated as,

$$\arg\min f(x) = 63098.88x_2x_4x_{12} + 5441.5x_2^2x_{12} + 115055.5x_2^{1.664}x_6 + 6172.27x_2^2x_6 + 63098.88x_1x_3x_{11} + 5441.5x_1^2x_{11} + 115055.5x_1^{1.664}x_5 + 6172.27x_1^2x_5 + 140.53x_1x_{11} + 281.29x_3x_{11} + 70.26x_1^2 + 281.29x_1x_3 + 281.20x_3^2 + 14437x_1^2x_7x_8^{1.8812}x_9^{-1}x_{10}x_{12}^{0.3424}x_{14}^{-1} + 20479.2x_1^2x_7^{2.893}x_{11}^{0.316}$$
subject to
$$g_1(x) = 1.524x_7^{-1} \le 1, \quad g_2(x) = 1.524x_8^{-1} \le 1, \quad g_3(x) = 0.07789x_1 - 2x_7^{-1}x_9 \le 1$$
$$g_8(x) = 0.0488x_7^{1.893}x_9^{-1}x_{11}^{0.316} \le 1, \quad g_9(x) = 0.0099x_1x_3^{-1} \le 1, \quad g_{10}(x) = 0.0193x_2x_4^{-1} \le 1$$
$$g_{11}(x) = 0.0298x_1x_5^{-1} \le 1, \quad g_{12}(x) = 0.056x_2x_6^{-1} \le 1$$
As seen from the results in Table 31, RUNGE outperforms the competing optimizers in all optimization performance metrics. One of the best performing algorithms so far, MANTA, also shows a desirable performance in this case; however, it is not enough to overthrow RUNGE from the top spot. REPTILE is the worst performer for this case, with a best objective function value far behind those of the competing optimizers.

The heat exchanger design problem (see Table 32) is formulated as,

$$\arg\min f(x) = x_1 + x_2 + x_3$$
subject to
$$g_1(x) = -1 + 0.0025\left(x_4 + x_6\right) \le 0, \qquad g_2(x) = -1 + 0.0025\left(-x_4 + x_5 + x_7\right) \le 0$$
$$g_3(x) = -1 + 0.01\left(-x_5 + x_8\right) \le 0, \qquad g_4(x) = 100x_1 - x_1x_6 + 833.33252x_4 - 83333.333 \le 0$$
$$g_5(x) = x_2x_4 - x_2x_7 - 1250x_4 + 1250x_5 \le 0, \qquad g_6(x) = x_3x_5 - x_3x_8 - 2500x_5 + 1250000 \le 0$$
$$10 \le x_i \le 1000, \quad i = 4, 5, 6, 7, 8 \qquad (133)$$

According to the statistical results reported in Table 32, MANTA takes the first spot with the best objective function value.

Table 36 Optimal outcomes for flywheel design problem
thrust bearing is shown in Fig. 26. The problem has four design variables, namely the bearing step radius R, the recess radius Ro, the oil viscosity μ, and the flow rate Q. There are also seven nonlinear inequality constraints imposed on the objective function, associated with the oil pressure at the inlet, the load carrying capacity, the oil film thickness, the oil temperature rise, and some other physical constraints. The mathematical formalization of the problem is given as follows,

$$\arg\min f(x) = \frac{Q P_o}{0.7} + E_f$$
subject to
$$g_1(x) = W - W_s \ge 0, \quad g_2(x) = P_{\max} - P_o \ge 0, \quad g_3(x) = \Delta T_{\max} - \Delta T \ge 0, \quad g_4(x) = h - h_{\min} \ge 0$$
$$g_5(x) = R - R_o \ge 0, \quad g_6(x) = 0.001 - \frac{\gamma}{g P_o}\left(\frac{Q}{2\pi R h}\right) \ge 0, \quad g_7(x) = 5000 - \frac{W}{\pi\left(R^2 - R_o^2\right)} \ge 0$$
where
$$W = \frac{\pi P_o}{2}\,\frac{R^2 - R_o^2}{\ln(R/R_o)}, \quad P_o = \frac{6\mu Q}{\pi h^3}\ln\frac{R}{R_o}, \quad E_f = 9336\,Q\gamma C\Delta T, \quad P = \frac{\log_{10}\left(\log_{10}\left(8.122\times10^6\mu + 0.8\right)\right) - C_1}{n}$$
$$h = \left(\frac{2\pi N}{60}\right)^2 \frac{2\pi\mu}{E_f}\left(\frac{R^4 - R_o^4}{4}\right) \qquad (134)$$
$$\gamma = 0.0307,\ C = 0.5,\ n = 3.35,\ C_1 = 10.04,\ W_s = 101000,\ P_{\max} = 1000,\ \Delta T_{\max} = 50,\ h_{\min} = 0.001,\ g = 386.4,\ N = 750$$
$$1 \le R, R_o, Q \le 16, \quad 1\times10^{-6} \le \mu \le 16\times10^{-6}$$

HARRIS, which has not shown any presence until this case, is the second-best performer with the objective function value f(x) = 19521.2487819. SNAKE finds the least desirable solution for this problem. Table 43 reports the optimal results of the best solutions obtained by the optimizers. Figure 27 depicts the variations of the decision variables for each algorithm.

5.9.2 Stepped cantilever beam design problem

The main aim of this problem is to minimize the volume of a stepped cantilever beam, depicted in Fig. 28, which is subject to a load at its end. As can be seen from the figure, the beam consists of five segments. The decision variables of the problem are the height and width of each segment. Therefore, the problem has ten decision variables, which are the widths {x1, x3, x5, x7, x9} = {b1, b2, b3, b4, b5} and the heights {x2, x4, x6, x8, x10} = {h1, h2, h3, h4, h5}. The first six decision variables are discrete, and the remaining ones are continuous. The mathematical representation of the optimization problem is given below,
Table 37 Optimal results for car side impact design problem
Method EQUIL MANTA RUNGE PRO AFRICAN REPTILE GRAD HARRIS AQUILA SNAKE
Table 38 Optimal decision variables, constraint satisfaction, and objective function values for the welded beam design problem
Method EQUIL MANTA RUNGE PRO AFRICAN GRAD HARRIS SNAKE AQUILA REPTILE
The operation of alkylation unit problem (see Table 46) is formulated as,

$$\arg\min f(x) = a_1x_1 + a_2x_1x_6 + a_3x_3 + a_4x_2 + a_5 - a_6x_3x_5$$
subject to
$$g_1(x) = 1 - c_1x_6^2 - c_2x_3/x_1 - c_3x_6 \ge 0, \qquad g_2(x) = 1 - c_4x_1/x_3 - c_5x_1x_6/x_3 - c_6x_1x_6^2/x_3 \ge 0$$
$$g_3(x) = 1 - c_7x_6^2 - c_8x_5 - c_9x_4 - c_{10}x_6 \ge 0, \qquad g_4(x) = 1 - c_{11}/x_5 - c_{12}x_6/x_5 - c_{13}x_4/x_5 - c_{14}x_6^2/x_5 \ge 0$$
$$g_8(x) = 1 - c_{23}x_5 - c_{24}x_7 \ge 0, \qquad g_9(x) = 1 - c_{25}x_3 - c_{26}x_1 \ge 0 \qquad (136)$$

Table 45 lists the values of the a and c parameters utilized in this study.
takes continuous values. The mathematical formalization of the problem is as follows [131],

$$\arg\min f(x) = 0.7854x_1x_2^2\left(3.3333x_3^2 + 14.933x_3 - 43.0934\right) - 1.508x_1\left(x_6^2 + x_7^2\right) + 7.4777\left(x_6^3 + x_7^3\right) + 0.7854\left(x_4x_6^2 + x_5x_7^2\right)$$
subject to
$$g_1(x) = \frac{27}{x_1x_2^2x_3} - 1 \le 0, \qquad g_2(x) = \frac{397}{x_1x_2^2x_3^2} - 1 \le 0, \qquad g_3(x) = \frac{1.93x_4^3}{x_2x_3x_6^4} - 1 \le 0$$
$$g_4(x) = \frac{1.93x_5^3}{x_2x_3x_7^4} - 1 \le 0, \qquad g_5(x) = \frac{\sqrt{\left(745x_4/(x_2x_3)\right)^2 + 16900000}}{110x_6^3} - 1 \le 0$$
$$g_6(x) = \frac{\sqrt{\left(745x_5/(x_2x_3)\right)^2 + 15750000}}{85x_7^3} - 1 \le 0, \qquad g_7(x) = \frac{x_2x_3}{40} - 1 \le 0$$
$$g_8(x) = \frac{5x_2}{x_1} - 1 \le 0, \qquad g_9(x) = \frac{x_1}{12x_2} - 1 \le 0, \qquad g_{10}(x) = \frac{1.5x_6 + 1.9}{x_4} - 1 \le 0, \qquad g_{11}(x) = \frac{1.1x_7 + 1.9}{x_5} - 1 \le 0$$
$$2.6 \le x_1 \le 3.6,\ 0.7 \le x_2 \le 0.8,\ 17 \le x_3 \le 28,\ 7.3 \le x_4 \le 8.3,\ 7.3 \le x_5 \le 8.3,\ 2.9 \le x_6 \le 3.9,\ 5.0 \le x_7 \le 5.5 \qquad (137)$$

Once again, EQUIL and MANTA outperformed the competing optimizers with their solution accuracy and consistent performances. It is realized that the standard deviation of MANTA is better than that of EQUIL. RUNGE shows a remarkable performance and takes the third spot with the best objective function value f(x) = 2823.6811432. REPTILE continues its poor performance on this problem and finds the worst optimal objective function value f(x) = 2858.3486103. BARNA fails to find an optimal solution to this problem and is excluded from the tables. Table 47 reports the optimal results for the best objective function outcomes obtained by the optimization algorithms. Figure 32 demonstrates the convergence map of the design variables.

5.9.5 Optimal design of a reactor

This geometric programming problem is taken from Dembo [132]. The problem is mathematically shown as follows,

$$\arg\min f(x) = 0.4x_1^{0.67}x_7^{-0.67} + 0.4x_2^{0.67}x_8^{-0.67} - x_1 - x_2 + 10$$
subject to
$$g_1(x) = 0.0588x_5x_7 + 0.1x_1 \le 1, \qquad g_2(x) = 0.0588x_6x_8 + 0.1x_1 + 0.1x_2 \le 1$$
$$g_3(x) = 4x_3x_5^{-1} + 2x_3^{-0.71}x_5^{-1} + 0.0588x_3^{-1.3}x_7 \le 1, \qquad g_4(x) = 4x_4x_6^{-1} + 2x_4^{-0.71}x_6^{-1} + 0.0588x_4^{-1.3}x_8 \le 1$$
$$0.1 \le x_i \le 10, \quad i = 1, 2, \ldots, 8 \qquad (138)$$
Table 40 Optimal results for the industrial refrigeration system design problem
Method MANTA EQUIL RUNGE AFRICAN GRAD PRO REPTILE HARRIS AQUILA SNAKE
Fig. 23 Convergence chart of the decision variables for the industrial refrigeration system design problem
The results reported in Table 32 indicate that MANTA outperforms the competing algorithms in all performance metrics, namely best, worst, mean, and standard deviation. EQUIL shows a significant performance and takes the second spot with the best objective function value f(x) = 3.9526041. Once again, REPTILE is the worst performer in this problem. Table 48 lists the optimal results of the best objective function values, and Fig. 33 depicts the variations of the design variables throughout the iterations for each algorithm.

6 Discussion and critical analysis

This research study presents a detailed discussion and comprehensive investigation of some newly emerging well-reputed metaheuristic algorithms, with a supportive emphasis on their motivations, underlying algorithmic concepts, and the advantages and disadvantages of each optimizer's computer implementation. The exploration and exploitation capabilities of the eleven emerging metaheuristic optimizers are compared over eighteen multimodal and sixteen unimodal test functions, respectively. The MANTA algorithm proved its dominance with respect to the average ranking points assigned to the competing algorithms for the best and mean results on the multimodal test functions. The REPTILE algorithm took over the first seat and proclaimed a clear dominance for the unimodal test functions based on the obtained average ranking points for best and mean results. The optimizers are also applied to 500D and 1000D hyper-dimensional variants of the same benchmark optimization problems to see how well they scale with increasing dimensions. The PRO algorithm became the best predictor for the multimodal test functions when the problem dimensionality is increased from 30 to 500. REPTILE was again the best performer for the unimodal test functions, even in the hyper-dimensional 500D variant. It is also observed that the REPTILE algorithm shows the fastest convergence to the optimal solution compared to its counterparts in most of the cases for the standard 30D unimodal and multimodal problems, as far as the evolutionary tendencies of the convergence curves are concerned. An algorithm complexity analysis is conducted by assessing the runtimes of the optimizers on the 30D unimodal and multimodal problems. The results revealed that the RUNGE algorithm burdens the most computational resources under the same operational conditions in comparison to the other remaining algorithms. Moreover, the optimization capabilities of the competing optimizers are assessed over the CEC-2013 benchmark optimization problems. Contrary to its absolute failure in the standard test functions with varying dimensionalities, the EQUIL algorithm shows a significant prediction performance across the different departments of the CEC-2013 test problems.
Table 41 Optimal results for the multi-spindle automatic lathe design problem
Method RUNGE SNAKE MANTA AQUILA EQUIL AFRICAN HARRIS PRO GRAD REPTILE
The MANTA algorithm is the second-best optimizer considering the accurate predictions, while AFRICAN sits in third place for the CEC-2013 problems. Finally, a detailed inspection of the contestant algorithms is hinged upon the exhaustive evaluation on constrained engineering design problems. The MANTA algorithm provides the best optimal results while satisfying all imposed design constraints in six out of fourteen constrained engineering design cases and becomes the best performer for this phase of the comprehensive performance evaluations. EQUIL is the second-best estimator according to its satisfactory optimization accuracies obtained for four engineering design problems. The RUNGE algorithm acquires the best results for three different design cases and occupies the third-best seat. It is interesting to see that the BARNA algorithm is not able to find any feasible result within the consecutive algorithm runs for each design problem and is removed from the comparative investigations, as seen from the tabulated results. The authors aim to include various types of optimization problems, such as multi-objective or dynamic ones, and to compare the performances of a broader set of metaheuristic optimizers as future work of this study.
It is interesting to see the complete failure of the EQUIL algorithm in the unconstrained test cases, given that it is one of the best-performing algorithms for the engineering design problems. This is because of the algorithmic structure of the EQUIL optimizer, which requires a high number of evaluations to achieve the global answer of the problem. It is observed that the total number of 50 iterations is not sufficient to get close to the optimum solution, since EQUIL suffers from an improper balance between the exploration and exploitation phases. Resulting from its unbalanced algorithmic structure, EQUIL puts too much emphasis on the exploration phases in the early phases of the iterations, neglecting the intensification of the visited regions, which results in premature convergence to local points. When the number of function evaluations is increased, as employed for the constrained engineering problems, the overall prediction accuracy of EQUIL is enormously enhanced compared to that obtained for the unconstrained problems with varying dimensionalities. Similar probing inclinations are also observed for the RUNGE and SNAKE algorithms, both of which give priority to exploration rather than exploitation in the early phases of the iterative process. On the contrary, the PRO and REPTILE algorithms are two of the best predictors for the unconstrained test problems; however, their overall prediction accuracy deteriorates when constrained problems are under consideration. This deterioration in optimization ability occurs because the algorithmic design of these two optimizers is predicated upon solving artificially generated optimization test functions, such as the multidimensional Rastrigin, Sphere, Griewank, etc. benchmark problems, disregarding their comparative performance in optimizing constrained engineering design problems or the test cases used in CEC competitions, which can effectively simulate the
Table 42 Optimal results for the heat exchanger design problem
Method MANTA EQUIL GRAD AFRICAN PRO RUNGE HARRIS AQUILA SNAKE REPTILE
Fig. 25 Variations of the decision variables with increasing iterations for each optimizer
related section of this study, the MANTA algorithm consists of three complementary search mechanisms, including Chain foraging, Cyclone foraging, and Somersault foraging, which recall the search equations of Salp Swarm Optimization [15] and are effective tools in generating diverse solution candidates. The first part of the cyclone foraging phase can produce a high solution diversity in the population thanks to the randomness created through the commanding parameter xrand, which generates a random individual in the search space.

7 Conclusive comments

The following conclusions can be drawn from the comparative evaluation of the African Vultures Optimization Algorithm (AFRICAN), Aquila
Fig. 27 Convergence chart of the design variables for the hydrostatic thrust bearing design problem
Optimizer (AQUILA), Barnacles Mating Optimizer (BARNA), Equilibrium Optimizer (EQUIL), Gradient-based Optimizer (GRAD), Harris Hawks Optimization Algorithm (HARRIS), Manta Ray Foraging Optimizer (MANTA), Poor and Rich Optimization Algorithm (PRO), Reptile Search Algorithm (REPTILE), Runge–Kutta Optimizer (RUNGE), and Snake Optimizer (SNAKE) over various benchmark suites having different functional complexities.
• The MANTA algorithm is the best optimizer among the compared methods when all optimum results obtained for the different test functions are averaged, relying on its superior search mechanism, which enables it to pinpoint the fertile areas during the course of iterations thanks to the well-balanced exploration and exploitation mechanism of the algorithm.
• EQUIL is the second-best algorithm despite its unsatisfactory performance on the standard test functions, which is compensated by the accurate and robust predictions
Table 44 Best result decision variables, constraint satisfaction, and objective function values for the stepped cantilever beam design problem
Method MANTA EQUIL RUNGE AFRICAN GRAD HARRIS AQUILA PRO SNAKE REPTILE
Table 46 Optimal results for the operation of alkylation unit problem
Method MANTA EQUIL RUNGE AFRICAN HARRIS PRO GRAD SNAKE AQUILA REPTILE
Fig. 31 Schematic representation of the speed reducer
obtained for the CEC-2013 problems and the engineering design cases.
• The REPTILE and PRO algorithms are found to be very competitive algorithms for the standard test functions with varying dimensionalities, yet they are considerably outperformed by the other methods for the CEC 2013 and engineering design problems.
• Most of the algorithms experience difficulties in solving constrained engineering problems due to the nature of the design parameters, such as mixed-integer or continuous decision components, and the various types of problem constraints imposed on the objective function of the problem. The exhaustive comparison among the algorithms indicates that MANTA is much better than the other algorithms in terms of solution feasibility and proves its superior ability in coping with the challenging imposed design constraints without any significant violation. Another remarkable conclusion that can be drawn from the comparative study between the algorithms is that metaheuristic optimizers can be an important and indispensable alternative to traditional problem solvers for solving complex real-world engineering problems, relying on their stochastic nature, which enables them to circumvent the singular points in the
Table 47 Optimal results for speed reducer design optimization problem
Method EQUIL MANTA RUNGE PRO AFRICAN GRAD HARRIS SNAKE AQUILA REPTILE
Fig. 32 Fluctuations of the design variables for the speed reducer design problem
search space and helps them to approach the global optimum of the problem more rapidly and accurately.
• In most of the literature approaches regarding the development of a novel metaheuristic algorithm and its associated performance evaluations based on comparative studies, the judgment of selecting the best optimizer among the compared methods is usually decided by the developer's own concluding remarks, lacking the descriptive and thorough analyses as to which algorithm would be a better option for a specific type of problem. It is also observed that there are many factors influencing the optimization accuracies of metaheuristic optimization algorithms, which include the tunable algorithm parameters, the total number of function evaluations, the design of the algorithmic structure of the optimizer, and its successful implementation into a computer code, to name a few. Researchers should avoid using algorithm-specific tunable parameters, which not only significantly jeopardize the overall optimization performance of the algorithm but also require a trial-and-error-based parameter adjustment procedure, dropping the applicability of the algorithm below a certain level. In addition, most of the algorithms suffer from an imbalance between the exploration and exploitation mechanisms. Reducing the instability between these two contradictory but complementary phases entails quick and accurate convergence to the optimal solution as well as preventing entrapment in the local pitfalls residing in specific regions of the search space. A final suggestion for the direction of future research can be given as the development of a novel, reliable performance measure that simultaneously takes into account the solution accuracy and the algorithmic runtime of the intended metaheuristic optimizer.
• This research study deals with the performance comparison of newly emerged nature-inspired algorithms over unconstrained and constrained multidimensional optimization problems. After an exhaustive investigation covering twenty-five recently developed algorithms, the eleven best-performing methods are considered based on their success over a wide range of optimization benchmark problems, including a set of unimodal and multimodal test functions along with fourteen complex engineering design cases. However, it would be a more comprehensive and conducive survey if more recently developed optimizers were included and reviewed, but the majority of them are neglected due to space restrictions. Furthermore, more reliable performance evaluations could be conducted by employing more unconstrained test evaluations. Possible future work regarding the comprehensive optimization performance assessment of the developed metaheuristic algorithms should include a test suite
123
14374
123
Table 48 Optimal outcomes for reactor design optimization problem
Method MANTA EQUIL RUNGE AFRICAN HARRIS PRO AQUILA GRAD SNAKE REPTILE
composed of fifty-seven constrained test problems used 5. Beyer HG, Schwefel HP (2002) Evolution strategies – a com-
in CEC’2020 due to their challenging and complex prehensive introduction. Nat Comput 1:3–52
6. Koza JR (1992) Genetic programming II, automatic discovery of
natures. reusable subprograms. MIT Press, Cambridge
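The averaging-based comparison referred to in the first conclusion can be illustrated with a minimal sketch. The snippet below is an assumption-laden illustration rather than the exact procedure used in this study: it converts the best objective values recorded by each optimizer on each benchmark function into per-function ranks and averages them. The algorithm abbreviations match this study, but the numeric values are placeholders.

```python
import numpy as np

def mean_ranks(best_values):
    """best_values maps algorithm name -> list of best objective values,
    one per benchmark function (minimization assumed). Returns each
    algorithm's mean rank over all functions (rank 1 = best; ties not handled)."""
    names = list(best_values)
    table = np.array([best_values[n] for n in names])  # shape: (algorithms, functions)
    # Rank the algorithms column-wise: the smallest objective value gets rank 1.
    ranks = table.argsort(axis=0).argsort(axis=0) + 1
    return dict(zip(names, ranks.mean(axis=1)))

# Placeholder results for three hypothetical test functions (illustration only).
scores = {
    "MANTA": [1.0e-8, 3.0e-5, 0.12],
    "EQUIL": [4.0e-7, 2.0e-5, 0.25],
    "RUNGE": [1.0e-3, 8.0e-4, 0.40],
}
print(mean_ranks(scores))  # a smaller mean rank indicates better averaged performance
```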
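For the constrained engineering problems discussed above, a common way to let a metaheuristic handle design constraints is a static penalty added to the objective. The sketch below is a generic illustration under that assumption; the penalty weight `rho` and the toy constraint are hypothetical and are not taken from this paper.

```python
def penalized_objective(objective, constraints, x, rho=1.0e6):
    """objective(x) is the value to minimize; constraints is a list of
    functions g with the convention g(x) <= 0 for feasibility. The summed
    violation is scaled by the (assumed) penalty weight rho."""
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + rho * violation

# Toy example: minimize x^2 subject to x >= 2, written as g(x) = 2 - x <= 0.
f = lambda x: x * x
g = [lambda x: 2.0 - x]
print(penalized_objective(f, g, 1.5))  # infeasible point -> heavily penalized
print(penalized_objective(f, g, 2.5))  # feasible point -> plain objective value
```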
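The suggested performance measure that jointly accounts for solution accuracy and runtime could take many forms. The following sketch shows one hypothetical weighting scheme; the weights, time budget, and the name `combined_score` are assumptions for illustration only, not a measure defined in this study.

```python
import math

def combined_score(best_value, known_optimum, runtime_s,
                   w_acc=0.7, w_time=0.3, time_budget_s=60.0):
    """Lower is better. The accuracy term is a log-scaled error so that
    improvements from 1e-3 to 1e-6 still register; the runtime term is the
    fraction of an assumed time budget consumed by the run."""
    error = abs(best_value - known_optimum)
    acc_term = math.log10(error + 1.0e-16)  # about -16 (exact) up to large positive (poor)
    time_term = runtime_s / time_budget_s
    return w_acc * acc_term + w_time * time_term

# Two hypothetical runs on one test function with a known optimum of zero.
print(combined_score(best_value=1.2e-7, known_optimum=0.0, runtime_s=12.0))
print(combined_score(best_value=3.5e-2, known_optimum=0.0, runtime_s=4.0))
```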
Funding The authors received no financial support for the research, authorship and publication of this article.

Data availability Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.

Declarations

Conflict of interest On behalf of all authors, the corresponding author states that there is no conflict of interest.