RUN Beyond The Metaphor-An Efficient Optimization Algorithm Based On Runge Kutta Method-ESWA 2021
PII: S0957-4174(21)00520-0
DOI: https://doi.org/10.1016/j.eswa.2021.115079
Reference: ESWA 115079
Please cite this article as: Ahmadianfar, I., Asghar Heidari, A., Gandomi, A.H., Chu, X., Chen, H., RUN Beyond
the Metaphor: An Efficient Optimization Algorithm Based on Runge Kutta Method, Expert Systems with
Applications (2021), doi: https://doi.org/10.1016/j.eswa.2021.115079
This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover
page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version
will undergo additional copyediting, typesetting and review before it is published in its final form, but we are
providing this version to give early visibility of the article. Please note that, during the production process, errors
may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.
a Department of Civil Engineering, Behbahan Khatam Alanbia University of Technology, Behbahan, Iran
Email: [email protected], [email protected]
b School of Surveying and Geospatial Engineering, College of Engineering, University of Tehran, Tehran 1439957131,
Iran.
Email: [email protected], [email protected]
c Department of Computer Science, School of Computing, National University of Singapore, Singapore 117417, Singapore
Email: [email protected], [email protected]
d University of Technology Sydney, Ultimo, NSW 2007, Australia.
Email: [email protected]
e Department of Civil & Environmental Engineering, North Dakota State University, Department 2470, Fargo, ND, USA.
Email: [email protected]
f College of Computer Science and Artificial Intelligence, Wenzhou University, Wenzhou, Zhejiang
325035, China
Email: [email protected]
Abstract
The optimization field suffers from metaphor-based, "pseudo-novel" or "fancy"
optimizers. Most of these clichéd methods mimic animals' searching trends and
contribute little to the optimization process itself. They also suffer from
locally efficient performance, biased verification on easy problems, and high
similarity between their components' interactions. This study
attempts to go beyond the traps of metaphors and introduce a novel metaphor-free
population-based optimizer based on the mathematical foundations and ideas of the
Runge Kutta (RK) method, which is well known in mathematics. The proposed RUNge
Kutta optimizer (RUN) was developed to deal with various types of optimization
problems in the future. RUN utilizes the logic of slope variations computed by the
RK method as a promising and logical searching mechanism for global optimization.
This search mechanism benefits from two active phases, exploration and
exploitation, for exploring promising regions in the feature space and moving
constructively toward the global best solution. Furthermore, an enhanced solution
quality (ESQ) mechanism is employed to avoid local optima and increase the
convergence speed. The RUN algorithm's efficiency was evaluated by comparing it
with other metaheuristic algorithms on 50 mathematical test functions and four
real-world engineering problems. RUN provided very promising and competitive
results, showing superior exploration and exploitation tendencies, a fast
convergence rate, and local-optima avoidance. The metaphor-free RUN also
demonstrated suitable performance in optimizing the constrained engineering
problems. The authors invite the community to evaluate this deep-rooted optimizer
extensively as a promising tool for real-world optimization. The source codes,
supplementary materials, and guidance for the developed method will be publicly
available at http://imanahmadianfar.com, http://aliasgharheidari.com/RUN.html, and
http://mdm.wzu.edu.cn/RUN.html.
Keywords: Genetic algorithms; Evolutionary algorithm; Runge Kutta optimization;
Optimization; Swarm intelligence; Performance.
1. Introduction
Most real-world problems are complicated and difficult to optimize. These
problems are often characterized by nonlinearity, multimodality, non-
differentiability, and high dimensionality. Because of these properties,
conventional gradient-based optimization methods, such as the quasi-Newton,
conjugate gradient, and sequential quadratic programming methods, are virtually
unable to optimize such problems (Nocedal & Wright, 2006; Wu, 2016). Therefore, existing literature suggests
that other optimization techniques need to be developed for more efficient and
effective optimization. An optimization problem can take many-objective forms (Cao,
Dong, et al., 2020; Cao, Wang, et al., 2020); other problems can be multi-
objective (Cao, Zhao, Yang, et al., 2019), memetic (Fu, et al., 2020), fuzzy (Chen, Qiao,
et al., 2019), robust (Qu, et al., 2020), large-scale (Cao, Fan, et al., 2020; Cao, Zhao, et
al., 2020), or single-objective. Real-world problems are faced every day, and we need
to develop solvers for deep learning applications (Chen, Chen, et al., 2020; Li, et al.,
2019; Qiu, et al., 2019), decision-making procedures (Liu, et al., 2016; Liu, et al.; Wu, et
al., 2020), optimal resource allocation (Yan, et al., 2020), image improvement
optimization (Wang, et al., 2020), deployment optimization in networks (Cao, Zhao,
Gu, et al., 2019), water-energy optimization (Chen, et al., 2017), training systems and
methods in artificial neural networks (Mousavi, et al., 2020), and optimization of the
parameters (Zhang, et al., 2006). Numerous metaheuristic optimization algorithms
(MOAs) have been developed and widely employed as suitable alternative optimizers to
solve various problems due to their flexibility and straightforward implementation
procedure (Chen, Fan, et al., 2020; Yang & Chen, 2019). MOAs can be categorized into
three groups (Kaveh & Bakhshpoori, 2016): evolutionary algorithms (EAs), physics-
based algorithms (PBAs), and swarm-based algorithms (SBAs). Nevertheless, they
present some drawbacks, including high sensitivity to their control parameter settings.
Also, they do not always converge toward the globally optimal solution (Wu, et al.,
2015). Because they utilize some randomly generated components within the procedure (Sun,
et al., 2019), an appropriate balance between exploration and exploitation cannot always be
ensured. This limitation is one of the fundamental challenges within all kinds of methods in
this area.
The methods under the class of EAs are based on the principles of evolution in
nature, such as selection, recombination, and mutation. The genetic algorithm (GA),
one of the most widely used EAs, was inspired by Darwin's theory of evolution (Holland, 1975).
Other EAs include genetic programming (GP) (Koza, 1994), differential evolution
(DE) (Storn & Price, 1995), and evolution strategy (Beyer & Schwefel, 2002). The
methods in this category have the deepest roots in their foundation theory compared to
other approaches, as Darwin's theory reshaped our vision of the tree of life. Later, the
development of physics-based algorithms (PBAs) emerged as a trend in the field
inspired by physics laws governing the surrounding world. For instance, among these
emerging PBA algorithms, simulated annealing (SA) is the most popular one
(Kirkpatrick, et al., 1983). Other PBAs include gravitational search algorithm (GSA)
(Rashedi, et al., 2009), central force optimization (Formato, 2007), differential search
(DS) (Liu, et al., 2015), vortex search algorithm (VSA) (Doğan & Ölmez, 2015), and
gradient-based optimizer (GBO) (Ahmadianfar, Bozorg-Haddad, et al., 2020).
In later years, researchers tried to simulate the cooperative behaviors of organisms
in flocks, whether natural or artificial (Baykasoğlu & Ozsoydan, 2017). For example, the
main inspiration in particle swarm optimization (PSO) (Eberhart & Kennedy, 1995) is a
flock of birds' social behaviors. Other SBA examples include the Bat algorithm (BA)
(Yang, 2010b), cuckoo search (CS) (Gandomi, et al., 2013), ant colony optimization
(ACO) (Dorigo & Di Caro, 1999), artificial bee colony (ABC) (Karaboga & Basturk,
2007), firefly algorithm (FA) (Yang, 2010a), slime mould algorithm (SMA)1 (Li, et al.,
2020), and Harris hawks optimization (HHO)2 (Heidari, Mirjalili, et al., 2019).
On the other hand, an unfortunate trend evolved within the swarm-based methods
themselves. Two large influences on these evolving algorithms were the search
for an "unused" biological source of inspiration and its use as a dress for a
set of equations. These unwanted, ambiguous directions first surfaced when the
black hole optimizer appeared as a modified PSO in a new
dress (Piotrowski, et al., 2014). Later, another issue was raised by a team of researchers
in China, who proved that the widespread grey wolf optimizer (GWO) has a defect,
and there is a problem in the verification process (Niu, et al., 2019). It was also exposed
that there is no novelty in GWO: its structure resembles some variants of PSO
dressed in a metaphor, and the metaphor itself is not implemented as described in
the original work (Camacho Villalón, et al., 2020). Such
inaccuracies affect the reliability of these methods and question the validity of metaphor-
based methods like GWO and the black hole algorithm. Despite the weaknesses,
metaphors, and structural differences of various optimization algorithms (Tzanetos &
Dounias, 2020), they all employ two typical phases, exploration and exploitation, to
search the solution space regions (Salcedo-Sanz, 2016). Exploration is an optimization
algorithm's ability to search the entire solution space broadly and discover its
promising areas, while exploitation is the capability of an optimization
algorithm to search around near-optimal solutions. Generally, the exploration phase of
an optimizer should randomly produce solutions in various regions of the solution
space during the early iterations of the optimization process (Heidari, Aljarah, et al., 2019).
In contrast, the exploitation phase of an optimization algorithm should perform a robust
local search. Thus, a well-designed algorithm should be capable of creating a suitable balance
between the exploration and exploitation phases.
Generally, creating an appropriate trade-off between exploration and
exploitation is an essential task for any optimization algorithm (Ahmadianfar,
Kheyrandish, et al., 2020). In this regard, many researchers have attempted to improve
the optimizers' performance by selecting appropriate control parameters or hybridizing
with other optimizers (Abdel-Baset, et al., 2019; Ahmadianfar, et al., 2019; Luo, et al.,
2017; Zhang, et al., 2018). Nevertheless, creating a robust algorithm that can balance
exploration and exploitation is a complex and challenging issue. Moreover, as there are
many real-world problems, more accurate and more consistent optimizers are needed.
To fill such a gap, a well-designed population-based optimization procedure is
proposed in this research. The proposed algorithm, Runge Kutta optimizer (RUN), was
designed according to the foundations of the Runge Kutta method3 (Kutta, 1901;
1 https://aliasgharheidari.com/SMA.html
2 https://aliasgharheidari.com/HHO.html
3 For better presentation, we use the term Runge Kutta throughout this paper
Runge, 1895). RUN uses a specific slope calculation concept based on the Runge Kutta
method as an effective search engine for global optimization. The proposed algorithm
consists of two main parts: a search mechanism based on the Runge Kutta method and
an enhanced solution quality (ESQ) mechanism to increase solutions' quality. RUN's
performance was evaluated by using 50 mathematical test functions, and the results
were compared with those of other state-of-the-art optimizers. Furthermore, the
proposed RUN was employed to solve four engineering design problems to test its
ability and efficiency in solving a number of real-world optimization problems.
This paper is organized as follows. Section 2 reviews related works. Section 3
presents a summarized review of the Runge Kutta method. Section 4 provides the
mathematical formulation and optimization procedures of the RUN algorithm.
Section 5 evaluates the efficiency of RUN in optimizing different benchmark test
functions. Section 6 assesses the ability of the proposed RUN in solving
engineering design problems. Section 7 presents the main conclusions and some
useful suggestions for future studies.
2. Related works
Generally, stochastic optimization algorithms can be categorized into two
classes: single-based and population-based algorithms. In the first class, the
algorithm begins the optimization procedure with a single random position and
updates it during each iteration (Mirjalili, et al., 2016). Simulated annealing (SA) (Kirkpatrick, et
al., 1983), tabu search (TS) (Glover & Laguna, 1998), and hill-climbing (HC)
(Tsamardinos, et al., 2006) belong to this class. The primary benefits of the single-based
optimizers include easy implementation and a low number of function evaluations,
while their main drawback is the high possibility of getting caught up in local solutions.
In contrast, the population-based methods start the optimization procedure with a set
of random solutions and update their positions at each iteration. The well-known GA,
PSO, DE, ACO, ABC, and biogeography-based optimization (BBO) (Simon, 2008)
belong to this category. Population-based optimization algorithms also have a relatively
acceptable ability to avoid the local optimal solutions because they employ a set of
solutions at each iteration instead of only evolving on a single agent.
Accordingly, the population-based algorithms can handle the landscape of the
feature space and increase the convergence speed. Furthermore, they can share
information between solutions, enabling a more convenient search in complex and
challenging feature spaces (Mirjalili, et al., 2016). Notwithstanding these advantages,
these optimizers require many function evaluations during the optimization process and
a relatively complicated implementation. Another unavoidable issue is that
these methods rely on a random-based vision to probe the problem's
topography, which can make them unbalanced, inaccurate, or unable to find the best
solution. However, a locally accurate solution can sometimes satisfy the practitioners
and requirements of real-world problems. Many studies indicate that the population-
based optimizers are regarded as more reliable and accurate than the single-based
algorithms because of the advantages mentioned above. Their applications in a broad
range of fields have demonstrated their worthiness and high capability. Generally, these
optimization algorithms have been largely inspired by the laws of physics, the
social behaviors of creatures, and natural phenomena.
Of pertinent mention, a study by Sörensen on the low-quality contributions to
optimization methods opened the eyes of many researchers (Sörensen, 2015). As
per this research, shallow mathematical models dressed in metaphor-based outfits
must be avoided to make real improvements in the field (Lones, 2020). These metaphors
are often perplexing and irrelevant to experts, decision-makers, algorithm designers,
and those who utilize these methods for real-world cases. It has also been discovered
that some methods, such as the popular harmony search, are not very original, in that
their core mathematical models are a version of (μ+1)-evolutionary search (Saka, et al., 2016).
Regardless of these shortcomings, optimization algorithms consist of exploration and
exploitation phases, as previously mentioned. Since establishing a reasonable balance
between these two phases is a challenge for any optimization technique, designing a
powerful and accurate optimization algorithm to achieve this goal is necessary. Hence, a
novel population-based metaheuristic optimization algorithm based on the Runge
Kutta method was developed in this study. The following two sections focus on the
formulation of this new RUN algorithm.
3. Overview of Runge Kutta method in differential equations
The Runge Kutta method (RKM) is broadly used to solve ordinary differential
equations (Kutta, 1901; Runge, 1895). RKM can be applied to create a high-precision
numerical method by using functions without requiring their high-order derivatives
(Zheng & Zhang, 2017). The primary formulation of the RKM is described as follows.
Consider the following first-order ordinary differential equation for an initial
value problem:
dy/dx = f(x, y),   y(x_0) = y_0   (1)
In RKM, the main idea is to regard f(x, y) as the slope (S) of the best straight
line fitted to the graph at the point (x_0, y_0). Using the slope at (x_0, y_0),
another point can be obtained with the best-fitted straight line:
y_1 = y_0 + S_0·Δx, where x_1 = x_0 + Δx and S_0 = f(x_0, y_0). Similarly,
y_2 = y_1 + S_1·Δx with S_1 = f(x_1, y_1). This
process can be repeated m times, which yields an approximate solution in the range of
[x_0, x_m].
The derivation of RKM is based on the Taylor series, which is given by:

y(x + Δx) = y(x) + y'(x)·Δx + (y''(x)/2!)·Δx^2 + ...   (2)

By dropping the higher-order terms, the following approximate equation can be
obtained:

y(x + Δx) ≈ y(x) + y'(x)·Δx   (3)
According to Eq. (3), the formula for the first-order Runge Kutta method (or
Euler method) can be expressed as:

y_{n+1} = y_n + f(x_n, y_n)·Δx   (4)

where y'(x) = f(x, y) and Δx = x_{n+1} - x_n.
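For illustration (the toy problem and step size are not from the paper), one
Euler step of Eq. (4) can be sketched as:

```python
# One first-order (Euler) step of Eq. (4): y_{n+1} = y_n + f(x_n, y_n)*dx,
# applied to the toy problem dy/dx = y with y(0) = 1 and dx = 0.1.
def euler_step(f, x_n, y_n, dx):
    return y_n + f(x_n, y_n) * dx

f = lambda x, y: y            # dy/dx = y, exact solution y(x) = e^x
y1 = euler_step(f, 0.0, 1.0, 0.1)
```

A single step of size 0.1 gives y_1 = 1.1, while the exact value e^0.1 ≈ 1.105;
the gap is the truncation error dropped in Eq. (3).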
In this study, the fourth-order Runge Kutta method (RK4) (England, 1969) derived
from Eq. (2) was used to develop the proposed optimization method. The formula for
the RK4 method, which is based on the weighted average of four increments (as shown
in Fig. 1), can be expressed as:

y_{n+1} = y_n + (1/6)·(k1 + 2·k2 + 2·k3 + k4)·Δx   (7)

in which the four weighted factors (k1, k2, k3, and k4) are respectively given by:

k1 = f(x_n, y_n)
k2 = f(x_n + Δx/2, y_n + k1·Δx/2)
k3 = f(x_n + Δx/2, y_n + k2·Δx/2)
k4 = f(x_n + Δx, y_n + k3·Δx)   (8)
where k1 is the first increment and determines the slope at the beginning of the interval
[x_n, x_{n+1}] using y_n; k2 is the second increment and specifies the slope at the
midpoint using y_n and k1; k3 is the third increment and defines the slope at the
midpoint using y_n and k2; and k4 is the fourth increment and is determined based
on the slope at the end of the interval using y_n and k3. According to RK4, the next
value (y_{n+1}) is specified by the current value (y_n) plus the weighted average of the
four increments.
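The RK4 update of Eqs. (7)-(8) can be sketched as follows; the test problem
dy/dx = y is an illustrative choice, not one taken from the paper:

```python
import math

# One RK4 step per Eqs. (7)-(8): four slope evaluations, then the
# weighted average (k1 + 2*k2 + 2*k3 + k4)/6 advances the solution.
def rk4_step(f, x_n, y_n, dx):
    k1 = f(x_n, y_n)
    k2 = f(x_n + dx / 2, y_n + k1 * dx / 2)
    k3 = f(x_n + dx / 2, y_n + k2 * dx / 2)
    k4 = f(x_n + dx, y_n + k3 * dx)
    return y_n + (dx / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

# Integrate dy/dx = y from y(0) = 1 to x = 1; the exact answer is e.
y = 1.0
for i in range(10):
    y = rk4_step(lambda x, yy: yy, i * 0.1, y, 0.1)
err = abs(y - math.e)
```

With the same step size as the Euler method, the fourth-order scheme reduces the
global error from roughly 1e-1 to below 1e-5, which is why its slope logic makes
an attractive search engine.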
Fig. 1. One step of the Runge Kutta method: the exact solution y(x + Δx) versus
the Runge Kutta estimate built from the weighted slopes k1, k2, k3, and k4, with
the associated error.
4. The proposed RUNge Kutta optimizer (RUN)
4.1. Initialization step
In this step, the logic is to set an initial swarm to be evolved within the allowed
number of iterations. In RUN, N positions are randomly generated for a population
of size N. Each member of the population, x_l (l = 1, 2, ..., N), is a solution
with a dimension of D for an optimization problem. In general, the initial positions are
randomly created by the following rule:

x_l = lb_l + rand·(ub_l - lb_l)   (9)

where lb_l and ub_l are the lower and upper bounds of the l-th variable of the problem
(l = 1, 2, ..., D), and rand is a random number in the range [0, 1]. This rule simply
generates solutions within the limits.
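Eq. (9) can be sketched as below; the population size, dimension, and bounds are
illustrative values, not the paper's experimental settings:

```python
import random

# Sketch of Eq. (9): each of the N initial solutions gets D coordinates
# drawn uniformly from [lb, ub].
def initialize_population(N, D, lb, ub, rng=random.Random(0)):
    return [[lb + rng.random() * (ub - lb) for _ in range(D)]
            for _ in range(N)]

pop = initialize_population(N=50, D=30, lb=-100.0, ub=100.0)
```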
4.2. Root of search mechanism
The power of any optimizer depends on its iterative cores for generating
the exploration and exploitation patterns. In the exploration core, an optimization
algorithm uses a set of random solutions with a high randomness rate to explore the
promising areas of the feasible space. In the exploitation core, variations in the
solutions are small and gradual, and random behaviors are remarkably lower than in the
exploration mechanism (Mirjalili, 2015a). In this study, RUN's leading search
mechanism is based on the RK method to search the decision space using a set of
random solutions and implement a proper global and local search.
The RK4 method was employed to determine the search mechanism in the
proposed RUN. The first-order derivative was utilized to define the coefficient k1,
which is calculated by Eq. (5). Moreover, the proposed optimization algorithm uses
a position instead of its fitness (f(x)), because evaluating the objective function of a
position needs considerable computing time. According to Eq. (5), x - Δx and x + Δx
are two neighboring positions of x. By considering f(x) as a minimization
problem, these two positions take the roles of the best and worst positions, respectively.
Therefore, to create a population-based algorithm, the position x - Δx is replaced by x_b (i.e.,
x_b is the best position around x), while the position x + Δx is replaced by x_w (i.e., x_w is
the worst position around x). Therefore, k1 is defined as:
k1 = (x_w - x_b) / (2·Δx)   (10)

where x_w and x_b are the worst and best solutions obtained at each iteration, which are
determined based on three random solutions selected from the members of the
population (x_r1, x_r2, x_r3), with r1 ≠ r2 ≠ r3 ≠ n.
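As a scalar sketch of Eq. (10) (the helper name and sample values are
hypothetical, and the population bookkeeping is omitted):

```python
# Scalar sketch of Eq. (10): k1 = (x_w - x_b) / (2*dx). A large gap
# between the worst and best positions yields a steep search slope.
def k1_slope(x_worst, x_best, dx):
    return (x_worst - x_best) / (2.0 * dx)

k1 = k1_slope(x_worst=4.0, x_best=1.0, dx=0.5)
```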
In order to enhance the exploration search and create a randomness behavior,
Eq. (10) can be rewritten as follows:

k1 = (1/(2·Δx))·(rand·x_w - u·x_b)   (10-1)

u = round(1 + rand)·(1 - rand)   (10-2)
where rand is a random number in the range [0, 1]. Overall, the best solution (x_b)
plays a crucial role in finding promising areas and moving toward the global best
solution. Therefore, in this study, the random parameter u is used to increase the
importance of the best solution (x_b) during the optimization process. In Eq. (10), Δx
can be specified by:

Δx = 2·rand·|Stp|   (11-1)

Stp = rand·((x_best - rand·x_avg) + γ)   (11-2)

γ = rand·(x_n - rand·(ub - lb))·exp(-4·i/Maxi)   (11-3)

where Stp is a step size, x_avg is the average position of the population, ub and lb
are the upper and lower bounds of the decision variables, i is the current iteration,
and Maxi is the maximum number of iterations. Similarly, the coefficients k2, k3,
and k4 are defined as:

k2 = (1/(2·Δx))·(rand·(x_w + rand1·k1·Δx) - (u·x_b + rand2·k1·Δx))   (12)

k3 = (1/(2·Δx))·(rand·(x_w + rand1·(k2/2)·Δx) - (u·x_b + rand2·(k2/2)·Δx))   (13)

k4 = (1/(2·Δx))·(rand·(x_w + rand1·k3·Δx) - (u·x_b + rand2·k3·Δx))   (14)
where rand1 and rand2 are two random numbers in the range [0, 1]. In this study,
x_w and x_b are determined by the following:

x_b = x_n and x_w = x_r,   if f(x_n) < f(x_r)
x_b = x_r and x_w = x_n,   otherwise   (15)

where x_r is the best random solution, which is selected from the three random
solutions (x_r1, x_r2, and x_r3). According to Eq. (15), if the fitness of the current
solution (f(x_n)) is better than that of x_r, the best and worst solutions (x_b and x_w) are
equal to x_n and x_r, respectively. Otherwise, they are equal to x_r and x_n, respectively.
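The selection rule of Eq. (15) can be sketched as follows (the helper name and
toy objective are illustrative, for a minimization problem):

```python
# Sketch of Eq. (15): the better of the current solution and the best
# random solution becomes x_b; the other becomes x_w.
def assign_best_worst(x_n, x_r, f):
    if f(x_n) < f(x_r):
        return x_n, x_r       # x_b = x_n, x_w = x_r
    return x_r, x_n           # x_b = x_r, x_w = x_n

f = lambda x: x * x           # toy minimization objective
xb, xw = assign_best_worst(0.5, -2.0, f)
```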
Therefore, the leading search mechanism in RUN can be defined as:

SM = (1/6)·x_RK·Δx   (16)

in which

x_RK = k1 + 2·k2 + 2·k3 + k4   (16-1)

Using this search mechanism, the next position is generated as:

x_{n+1} = (x_c + r·SF·g·x_c) + SF·SM + μ·x_s,    if rand < 0.5 (exploration phase)
x_{n+1} = (x_m + r·SF·g·x_m) + SF·SM + μ·x_s',   otherwise (exploitation phase)   (17)

Fig. 2. Slopes (k1, k2, k3, and k4) employed by the RK method to obtain the next
position (x_{n+1}) in the RUN algorithm
The formulas of x_s and x_s' are expressed as:

x_s = randn·(x_m - x_c)   (17-1)
x_s' = randn·(x_r1 - x_r2)   (17-2)

x_c and x_m can be calculated as follows:

x_c = φ·x_n + (1 - φ)·x_r1   (17-3)
x_m = φ·x_best + (1 - φ)·x_lbest   (17-4)

where φ is a random number in the range (0, 1) and randn is a normally distributed
random number. x_best is the best-so-far solution. x_lbest is the best position
obtained at each iteration. SF is an adaptive factor, which is given by:

SF = 2·(0.5 - rand)·f   (17-5)

in which

f = a·exp(-b·rand·(i/Maxi))   (17-6)
where a and b are two constant numbers, i is the number of iterations, and Maxi
is the maximum number of iterations. In this study, SF was employed to provide a
suitable balance between exploration and exploitation. Based on Eq. (17-5), a
large value of SF is specified in the early iterations to increase the diversity
and enhance the exploration search; its value then decreases with the number of
iterations to promote the exploitation capability. The main control parameters
of RUN are the two parameters employed in f (Eq. (17-6)), which are a and b.
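A sketch of how the adaptive factor decays, assuming SF = 2·(0.5 - rand)·f with
f = a·exp(-b·rand·(i/Maxi)) as described above; the constants a = 20 and b = 12
are illustrative assumptions, not values confirmed by this excerpt:

```python
import math
import random

# Sketch of Eqs. (17-5)-(17-6): |SF| is large early on (exploration)
# and shrinks as iteration i approaches Maxi (exploitation).
def adaptive_sf(i, max_i, a=20.0, b=12.0, rng=random.Random(0)):
    f = a * math.exp(-b * rng.random() * (i / max_i))
    return 2.0 * (0.5 - rng.random()) * f

early = [abs(adaptive_sf(1, 500)) for _ in range(200)]    # near the start
late = [abs(adaptive_sf(490, 500)) for _ in range(200)]   # near the end
```

On average, the early-iteration magnitudes dominate the late ones by more than
an order of magnitude, which matches the intended exploration-to-exploitation
transition.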
The rule in Eq. (17) shows that the proposed RUN selects between the exploration and
exploitation phases based on the condition rand < 0.5. This procedure used for
optimization in RUN ensures that if rand < 0.5, a global search is applied in the
solution space while a local search around solution x_c is performed simultaneously. By
implementing this global search (exploration), RUN can explore the search
space's superior promising regions. On the other hand, if rand ≥ 0.5, RUN uses a
local search around solution x_m. By applying this local search phase, the proposed
algorithm can effectively increase the convergence speed and focus on high-quality
solutions.
To perform the local search around the solutions x_best and x_lbest and explore the
promising regions in the search space, Eq. (17) is rewritten as follows:
x_{n+1} = (x_c + r·SF·g·x_c) + SF·SM + μ·(x_best - x_c),    if rand < 0.5 (exploration phase)
x_{n+1} = (x_m + r·SF·g·x_m) + SF·SM + μ·(x_lbest - x_m),   otherwise (exploitation phase)   (18)
where r is an integer number, which is 1 or -1. This parameter changes the search
direction and increases diversity. g is a random number in the range [0, 2]. According
to Eq. (18), the local search around the best solutions decreases as the number of
iterations increases. Fig. 3 displays the search mechanism of RUN, indicating how
position x_{n+1} is generated at the next iteration.
Fig. 3. Search mechanism of RUN in the feasible space: the next position is
obtained as x_{n+1} = x_c + (1/6)·x_RK·Δx + μ·(x_m - x_c), using the slopes
k1, k2, k3, and k4.
By employing the ESQ, the RUN algorithm ensures that each solution moves toward a
better position. In the proposed ESQ, the average of three random solutions (x_avg)
is calculated and combined with the best position (x_best) to generate a new solution
(x_new1). The following scheme is executed to create the solution x_new2 by using the ESQ:

x_new2 = x_new1 + r·w·|(x_new1 - x_avg) + randn|,                 if w < 1
x_new2 = (x_new1 - x_avg) + r·w·|(u·x_new1 - x_avg) + randn|,     otherwise   (19)

in which

w = rand(0, 2)·exp(-c·(i/Maxi))   (19-1)

x_avg = (x_r1 + x_r2 + x_r3) / 3   (19-2)

and x_new1 = β·x_avg + (1 - β)·x_best, where β is a random number in the range
[0, 1], c is a random number, which is equal to 5 in this study, w is a random
number that decreases with the increasing number of iterations, r is an integer
number, which is 1, 0, or -1, and x_best is the best solution explored so far.
According to the above scheme, for w < 1 (i.e., the later iterations), solution
x_new2 tends to create an exploitation search, while for w > 1 (i.e., the early
iterations), it tends to make an exploration search. Note that, in the latter
condition, the parameter u is included to increase the diversity. It is noteworthy
that the ESQ is applied when the condition rand < 0.5 is met.
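Assuming w follows the decaying form rand(0, 2)·exp(-c·(i/Maxi)) with c = 5 as
stated in the text, its behavior can be sketched as:

```python
import math
import random

# Sketch of Eq. (19-1): w starts near rand(0, 2) and decays towards 0,
# so early iterations favor exploration (w > 1 is possible) and later
# iterations favor exploitation (w < 1 almost surely).
def esq_weight(i, max_i, c=5.0, rng=random.Random(1)):
    return rng.uniform(0.0, 2.0) * math.exp(-c * (i / max_i))

w_early = [esq_weight(10, 500) for _ in range(200)]
w_late = [esq_weight(450, 500) for _ in range(200)]
```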
The solution calculated in this part (x_new2) may not have better fitness than
that of the current solution (i.e., f(x_new2) > f(x_n)). To have another chance of
creating a good solution, another new solution (x_new3) is generated, which is defined as
follows:

if rand < w
x_new3 = (x_new2 - rand·x_new2) + SF·(rand·x_RK + (v·x_b - x_new2))   (20)
end

where v is a random number equal to 2·rand. In fact, the new solution
(x_new3) is computed only when the condition rand < w is met. The main objective of Eq.
(20) is to move the solution x_new2 towards a better position. In the first term of this
equation, a local search around x_new2 is generated, and in the second term, RUN
attempts to explore the promising regions by moving towards the best
solution. Hence, to emphasize the importance of the best solution, the coefficient v is
used. It should be noted that, to calculate x_new3, solutions x_new2 and x_n become
x_w and x_b, respectively, because the fitness value of x_n is less than that of x_new2
(f(x_n) < f(x_new2)). The pseudo-code and flowchart of RUN are presented in
Algorithm 1 and Fig. 4, respectively.
Fig. 4. Flowchart of the RUN algorithm
As shown in Fig. 5, three paths are considered for optimization in RUN. The
proposed algorithm first uses the RK search mechanism to generate position x_{n+1} and
then employs the ESQ mechanism to explore the promising regions in the search
space. According to this mechanism, RUN follows three paths to reach a better
solution. In the first and second paths, position x_new2 calculated by the ESQ is
compared with the position x_{n+1}. If the fitness of x_new2 is worse than that of x_{n+1}
(i.e., f(x_new2) > f(x_{n+1})), another position (x_new3) is generated. If f(x_new3) <
f(x_{n+1}), the best solution is x_new3 (second path). Otherwise, it is x_{n+1} (first path). In
the third path, if f(x_new2) < f(x_{n+1}), the best solution is x_new2.
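The three-path selection can be sketched as below; the helper name and the toy
objective are hypothetical, and the positions are scalars for clarity:

```python
# Sketch of the three-path selection among the RK position, the ESQ
# solution, and the retry generated when the ESQ solution fails.
def select_next(x_rk, x_new2, x_new3, f):
    if f(x_new2) < f(x_rk):       # third path: the ESQ solution wins
        return x_new2
    if f(x_new3) < f(x_rk):       # second path: the retry wins
        return x_new3
    return x_rk                   # first path: keep the RK position

f = lambda x: abs(x - 3.0)        # toy minimization objective
best = select_next(5.0, 4.0, 2.0, f)
```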
The following characteristics theoretically demonstrate the proficiency of RUN
in solving various complex optimization problems:
Table 1. Unimodal test functions (f1-f6): each is defined in 30 dimensions over
the range [-100, 100] with a global optimum of 0.

Table 2. Multimodal test functions (f7-f14): each is defined in 30 dimensions
with a global optimum of 0; the search range is [-100, 100] for most functions,
with one function on [-600, 600] and one on [-50, 50].
The population size and the total number of iterations were set to 50 and 500,
respectively, for the UFs and MFs, and to 50 and 1000 for the HFs. All results were
presented and compared in terms of the optimization algorithms' average performance
over 30 independent runs. For GWO, WOA, CS, IWO, and WCA, the control
parameters were the same as those suggested in the original works. Table 4 lists the
parameters used in this study.
Fig. 6. Search history, 2D trajectory, and convergence curves of RUN for
selected test functions (f1, f2, f4, f7, f10, and f12).
Table 5. Results of the UFs and MFs from RUN and five other meta-heuristic optimization algorithms
UFs
Optimizer
f1 f2 f3 f4 f5 f6
Average 1.75E-132 6.68E-267 2.16E-129 2.45E+01 1.26E-137 2.35E-130
RUN Best 5.31E-145 3.55E-278 1.81E-145 2.29E+01 6.74E-147 1.20E-145
SD 9.04E-132 0.00E+00 1.18E-128 1.04E+00 5.31E-137 1.29E-129
Average 3.87E-27 4.17E-97 5.78E-29 2.68E+01 5.60E-33 5.14E-30
GWO Best 4.33E-29 2.8E-108 2.25E-31 2.52E+01 1.61E-34 1.12E-31
SD 7.73E-27 1.87E-96 1.48E-28 7.53E-01 5.84E-33 8.14E-30
Average 2.52E-02 1.81E+01 9.00E-01 1.39E+02 5.16E-04 1.88E-01
CS Best 4.44E-05 1.46E-06 5.38E-03 2.96E+01 6.67E-06 1.22E-02
SD 1.17E-01 8.44E+01 1.70E+00 2.37E+02 7.63E-04 3.04E-01
Average 2.31E-05 6.77E-07 5.02E-09 7.38E+01 6.27E-07 2.86E+03
WCA Best 2.22E-07 4.05E-25 1.11E-10 8.80E-01 3.13E-12 7.39E-08
SD 7.01E-05 3.70E-06 9.07E-09 6.54E+01 3.00E-06 7.78E+03
Average 6.75E-80 1.56E-110 5.52E+03 2.75E+01 2.86E-84 1.30E-81
WOA Best 9.43E-89 9.17E-141 2.88E+01 2.69E+01 2.63E-94 2.90E-89
SD 2.45E-79 7.86E-110 3.85E+03 4.12E-01 1.11E-83 5.59E-81
Average 3.18E+03 1.53E+03 4.24E+02 4.10E+04 5.69E+04 5.01E+06
IWO Best 8.84E+01 1.06E-05 6.12E-05 2.37E+01 4.21E+04 1.25E+06
SD 3.14E+03 1.96E+03 6.40E+02 9.02E+04 1.23E+04 2.57E+06
MFs
f7 f8 f9 f10 f11 f12 f13 f14
Average 0.00E+00 2.04E-01 3.82E-04 8.88E-16 1.04E-13 3.42E-01 0.00E+00 6.59E-08
RUN
Best 0.00E+00 4.21E-07 3.82E-04 8.88E-16 6.39E-14 2.33E-01 0.00E+00 3.33E-08
SD 0.00E+00 1.13E-01 0.00E+00 0.00E+00 1.63E-14 7.53E-02 0.00E+00 1.95E-08
Average 5.91E+00 1.01E+00 3.82E-04 4.46E-14 2.91E+01 6.39E-01 6.13E-03 3.20E-02
GWO
Best 2.11E+00 6.36E-01 3.82E-04 3.64E-14 2.27E+01 4.41E-01 0.00E+00 6.40E-03
SD 2.20E+00 1.59E-01 8.72E-13 4.19E-15 3.34E+00 9.60E-02 1.20E-02 2.33E-02
Average 9.86E+00 2.41E+00 4.12E-04 3.73E-03 6.23E-02 5.93E-01 1.47E-02 1.69E-01
CS
Best 7.74E+00 6.28E-01 3.82E-04 4.69E-04 8.53E-14 4.42E-01 4.35E-10 5.29E-08
SD 8.36E-01 2.27E+00 4.54E-05 3.44E-03 9.52E-02 8.40E-02 1.80E-02 2.68E-01
Average 1.20E+01 2.92E+03 5.19E-03 3.40E+00 1.20E-01 5.30E-01 3.13E-02 3.64E-01
WCA
Best 1.03E+01 1.10E+03 3.82E-04 2.19E-02 8.53E-14 2.53E-01 5.08E-12 1.53E-12
SD 6.12E-01 1.36E+03 2.63E-02 2.28E+00 5.12E-01 1.50E-01 3.86E-02 7.04E-01
Average 3.00E+00 5.12E-01 3.82E-04 3.73E-15 1.92E-14 5.24E-01 3.05E-03 1.03E-02
WOA
Best 0.00E+00 6.99E-02 3.82E-04 8.88E-16 7.11E-15 2.60E-01 0.00E+00 1.30E-03
SD 4.43E+00 3.58E-01 5.55E-13 2.70E-15 6.62E-14 1.88E-01 1.67E-02 1.59E-02
Average 1.30E+01 4.59E+03 6.89E+02 1.24E+00 5.29E+00 3.58E-01 1.67E+02 1.27E-01
IWO Best 1.21E+01 3.25E+03 3.90E-04 5.07E-03 3.00E+00 2.21E-01 9.25E+01 4.80E-02
SD 4.09E-01 6.11E+02 3.84E+02 4.71E+00 1.57E+00 8.82E-02 3.99E+01 8.93E-02
Table 6. Statistical results of the HFs from RUN and five other optimizers
HFs
Optimizer
f15 f16 f17 f18 f19 f20
RUN
Average 104191.21 3435.33 1919.53 3519.30 48127.89 2674.29
Best 26504.80 2149.82 1911.91 2345.66 10865.46 2229.29
SD 42897.96 801.49 5.01 2215.65 22065.81 227.33
GWO
Average 2017606.11 9419404.67 1945.42 23438.34 865855.49 2581.81
Best 243778.74 4056.06 1912.26 11065.71 66706.84 2250.33
SD 2197530.17 22146302.91 26.45 12065.16 1222558.84 145.41
CS
Average 1638591.37 8614.09 1931.73 94953.78 405641.76 3114.17
Best 168986.27 2070.91 1909.39 3577.19 16508.82 2364.87
SD 1608329.34 8165.00 30.62 309592.19 577986.74 364.57
WCA
Average 1096464.13 5561515.91 1927.69 24082.37 339962.26 2832.20
Best 177033 2413.67 1910.27 5378.61 23640.99 2579.874
SD 742290.81 30411215.42 29.31 15291.10 223453.44 136.44
WOA
Average 11178976.28 93612.11 1964.90 76381.26 3876550.62 3084.20
Best 2520022.97 9512.03 1919.07 28141.42 189834.25 2476.51
SD 7349962.08 94864.91 34.80 48244.50 4182086.86 252.11
IWO
Average 110385.61 5178.86 1922.03 30483.82 53137.20 3263.82
Best 15620.9 2229.473 1907.79 3739.462 11885.51 2729.88
SD 73296.20 3721.69 21.40 13771.33 31510.29 283.44
Fig. 7. Convergence graphs of the RUN and five other optimizers for the selected UFs, MFs, and HFs
According to the convergence curves (Fig. 7), the following conclusions can be drawn:
- Concerning convergence rate, the IWO, WCA, and CS algorithms performed poorly on the UFs and MFs, followed by the WOA and GWO algorithms.
- The RUN optimizer converged faster than the other algorithms on the unimodal and multimodal test functions, owing to its proper balance between exploration and exploitation.
- For the HFs, the convergence rate of RUN accelerated as the number of function evaluations increased, thanks to the ESQ and the adaptive mechanism, which helped it explore the promising areas of the solution space in the early iterations and then converge more quickly towards the optimal solution after spending about 15% of the total number of function evaluations.
- Overall, the convergence curves show that RUN achieved a more suitable convergence speed on the test functions than the other optimizers.
5.7. Ranking analysis
The Friedman and Quade tests (Derrac, et al., 2011) were conducted to rank the performances of the six optimizers. Both tests employ a nonparametric two-way analysis of variance, which allows several samples to be compared. The Friedman test treats all samples as equally important. In contrast, the Quade test accounts for the fact that some samples are more difficult than others and thus provides a weighted ranking analysis of the samples (Derrac, et al., 2011).
Tables 7 and 8 show the Friedman and Quade test ranks, including the
individual, average, and final ranks for the average fitness values from RUN and the
five other optimizers on all UF, MF, and HF test functions. The Friedman and Quade
test results indicated that the RUN algorithm performed the best among the six
algorithms on all test functions.
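The average ranks in Tables 7 and 11 and the Friedman statistics in Table 9 can be reproduced from a matrix of average fitness values. A minimal plain-Python sketch (ties are broken by column order here, a simplification relative to the average-rank treatment in Derrac et al., 2011):

```python
def friedman_ranks(results):
    """Average Friedman ranks and the Friedman chi-square statistic.

    results[i][j] is the average fitness of optimizer j on test
    function i (lower is better, rank 1 = best).
    """
    n, k = len(results), len(results[0])
    totals = [0.0] * k
    for row in results:
        # rank the optimizers on this function (ties broken by order)
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            totals[j] += rank
    avg = [t / n for t in totals]
    # Friedman chi-square statistic over the mean ranks
    stat = 12.0 * n / (k * (k + 1)) * sum(r * r for r in avg) - 3.0 * n * (k + 1)
    return avg, stat
```

For instance, an optimizer that is best on every function receives an average rank of exactly 1.00, as RUN does on the UFs in Table 7.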
Table 7. Friedman ranks for the UFs, MFs, and HFs for RUN and five other optimizers
UFs
Optimizers f1 f2 f3 f4 f5 f6 Average Rank Final Rank
RUN 1 1 1 1 1 1 1.00 1
GWO 3 3 2 2 3 3 2.67 2
CS 5 6 4 6 5 4 5.00 5
WCA 4 4 5 4 4 5 4.33 4
WOA 2 2 6 3 2 2 2.83 3
IWO 6 5 3 5 6 6 5.17 6
MFs
f7 f8 f9 f10 f11 f12 f13 f14
RUN 1 1 2 1 2 1 1 1 1.25 1
GWO 3 3 2 3 6 6 3 3 3.63 3
CS 4 4 4 4 3 5 4 5 4.13 4
WCA 5 5 5 6 4 4 5 6 5.00 5
WOA 2 2 2 2 1 3 2 2 2.00 2
IWO 6 6 6 5 5 2 6 4 5.00 5
HFs
f15 f16 f17 f18 f19 f20
RUN 1 1 1 1 1 2 1.17 1
GWO 5 6 5 3 5 1 4.17 4
CS 4 3 4 6 4 5 4.33 5
WCA 3 5 3 2 3 3 3.17 3
WOA 6 4 6 5 6 4 5.17 6
IWO 2 2 2 4 2 6 3.00 2
Table 8. Quade ranks for the UFs, MFs, and HFs for RUN and five other optimizers
UFs
Optimizers f1 f2 f3 f4 f5 f6 Average Rank Final Rank
RUN 5 1 2 6 3 4 1.00 1
GWO 10 2 8 12 4 6 2.67 2
CS 6 15 12 18 3 9 4.57 5
WCA 16 12 4 20 8 24 4.14 4
WOA 20 5 30 25 10 15 2.76 3
IWO 18 12 6 24 30 36 5.86 6
MFs
f7 f8 f9 f10 f11 f12 f13 f14
RUN 1.5 7 6 3 4 8 1.5 5 1.33 1
GWO 28 24 8 4 32 20 12 16 3.31 3
CS 24 21 3 6 12 18 9 15 3.94 4
WCA 35 40 5 30 15 25 10 20 4.97 5
WOA 16 12 6 2 4 14 8 10 1.89 2
IWO 30 48 42 18 24 12 36 6 5.56 6
HFs
f15 f16 f17 f18 f19 f20
RUN 6 3 1 4 5 2 1.10 1
GWO 25 30 5 15 20 10 4.57 5
CS 18 9 3 12 15 6 4.14 4
WCA 20 24 4 12 16 8 3.33 3
WOA 36 24 6 18 30 12 5.19 6
IWO 12 6 2 8 10 4 2.67 2
Table 9 displays the statistics and p-values of the Friedman and Quade tests for all test functions. The p-values computed for both tests indicate significant differences among the optimizers.
Table 9. Statistic and p-value computed by the Friedman and Quade tests for the
UFs, MFs, and HFs
Average ranking
Friedman Quade
UFs
Statistic 24.7619 10.3445
p-value 1.55E-04 1.83E-05
MFs
Statistic 28.3333 12.9663
p-value 3.13E-05 3.61E-07
HFs
Statistic 16.6667 5.0844
p-value 5.20E-03 2.40E-03
Table 10. Statistical results of the RUN and eight advanced optimizers on CEC-BC-2017
RUN CGSCA SCADE BMWOA BWOA OBLGWO CMAES GL25 CLPSO
Best 1.44E+04 1.53E+10 1.87E+10 5.20E+08 1.94E+09 4.44E+07 1.04E+02 6.83E+09 7.65E+09
f1 Average 3.75E+04 2.51E+10 2.97E+10 1.10E+09 5.58E+09 1.57E+08 5.45E+03 1.69E+10 1.16E+10
SD 1.40E+04 5.37E+09 4.86E+09 3.73E+08 2.05E+09 8.59E+07 5.75E+03 5.28E+09 2.59E+09
Best 2.92E+14 9.54E+33 6.98E+34 6.58E+22 1.25E+27 2.68E+17 2.02E+10 2.93E+30 4.62E+32
f2 Average 4.17E+17 8.96E+38 1.13E+40 1.86E+30 4.23E+35 3.80E+22 2.59E+31 4.01E+38 1.29E+43
SD 1.15E+18 2.88E+39 3.27E+40 1.01E+31 1.58E+36 9.92E+22 1.42E+32 1.32E+39 7.05E+43
Best 3.59E+04 5.40E+04 5.72E+04 5.00E+04 5.78E+04 3.27E+04 1.23E+05 1.22E+05 1.09E+05
f3 Average 5.05E+04 7.16E+04 7.68E+04 7.99E+04 7.51E+04 4.97E+04 1.94E+05 1.72E+05 1.56E+05
SD 8.29E+03 1.03E+04 7.59E+03 1.03E+04 7.58E+03 8.31E+03 5.92E+04 3.46E+04 2.38E+04
Best 4.71E+02 1.45E+03 4.93E+03 6.09E+02 8.77E+02 5.19E+02 5.02E+02 1.58E+03 1.97E+03
f4 Average 5.13E+02 3.57E+03 6.99E+03 7.31E+02 1.41E+03 5.57E+02 9.98E+02 3.22E+03 3.08E+03
SD 1.81E+01 9.87E+02 1.29E+03 1.11E+02 3.98E+02 3.05E+01 3.64E+02 1.07E+03 8.66E+02
Best 5.92E+02 7.79E+02 8.19E+02 7.10E+02 7.23E+02 6.10E+02 5.79E+02 7.44E+02 7.54E+02
f5 Average 6.53E+02 8.52E+02 8.74E+02 8.09E+02 8.20E+02 6.84E+02 1.22E+03 8.46E+02 8.08E+02
SD 2.91E+01 3.21E+01 2.41E+01 4.46E+01 3.44E+01 5.05E+01 1.92E+02 3.96E+01 2.60E+01
Best 6.23E+02 6.54E+02 6.58E+02 6.53E+02 6.55E+02 6.07E+02 6.74E+02 6.44E+02 6.41E+02
f6 Average 6.40E+02 6.70E+02 6.74E+02 6.68E+02 6.74E+02 6.25E+02 6.97E+02 6.66E+02 6.58E+02
SD 8.22E+00 7.50E+00 9.15E+00 8.50E+00 9.29E+00 1.28E+01 1.30E+01 9.60E+00 7.81E+00
Best 8.02E+02 1.16E+03 1.17E+03 1.10E+03 1.10E+03 8.78E+02 7.71E+02 1.18E+03 1.13E+03
f7 Average 9.36E+02 1.26E+03 1.26E+03 1.25E+03 1.30E+03 1.01E+03 4.29E+03 1.33E+03 1.23E+03
SD 5.73E+01 4.96E+01 5.32E+01 8.54E+01 7.04E+01 6.22E+01 1.19E+03 8.26E+01 4.37E+01
Best 8.75E+02 1.07E+03 1.06E+03 9.74E+02 9.59E+02 8.99E+02 8.59E+02 1.07E+03 1.04E+03
f8 Average 9.21E+02 1.11E+03 1.12E+03 1.03E+03 1.02E+03 9.61E+02 1.37E+03 1.12E+03 1.10E+03
SD 2.62E+01 1.99E+01 2.12E+01 2.46E+01 2.51E+01 4.16E+01 1.68E+02 2.65E+01 2.67E+01
Best 2.09E+03 5.18E+03 7.68E+03 5.69E+03 6.20E+03 1.34E+03 1.00E+04 5.04E+03 5.13E+03
f9 Average 3.52E+03 8.85E+03 1.06E+04 8.59E+03 7.51E+03 4.46E+03 1.50E+04 9.25E+03 1.14E+04
SD 8.96E+02 1.65E+03 1.05E+03 1.27E+03 1.11E+03 2.28E+03 2.48E+03 2.41E+03 2.35E+03
Best 3.85E+03 7.34E+03 7.69E+03 6.28E+03 5.96E+03 4.44E+03 4.93E+03 8.60E+03 7.27E+03
f10 Average 5.14E+03 8.92E+03 8.70E+03 7.80E+03 7.37E+03 6.96E+03 6.21E+03 9.51E+03 8.12E+03
SD 7.73E+02 3.96E+02 3.50E+02 5.80E+02 8.46E+02 1.44E+03 6.32E+02 5.11E+02 3.77E+02
Best 1.19E+03 2.57E+03 3.56E+03 1.38E+03 2.27E+03 1.28E+03 1.35E+03 4.48E+03 3.24E+03
f11 Average 1.26E+03 4.22E+03 5.28E+03 2.19E+03 3.82E+03 1.38E+03 1.91E+03 1.18E+04 6.44E+03
SD 3.23E+01 9.46E+02 1.11E+03 5.28E+02 9.31E+02 5.50E+01 9.17E+02 3.70E+03 2.16E+03
Best 2.65E+06 8.42E+08 1.26E+09 2.31E+07 5.69E+07 5.42E+06 3.41E+05 3.42E+08 8.45E+08
f12 Average 1.38E+07 2.67E+09 3.88E+09 1.44E+08 4.57E+08 4.21E+07 4.20E+06 1.16E+09 1.47E+09
SD 9.36E+06 1.02E+09 1.14E+09 7.19E+07 2.57E+08 3.56E+07 6.29E+06 6.36E+08 5.55E+08
Best 1.23E+04 5.76E+08 5.87E+08 2.37E+05 1.68E+06 2.06E+05 1.98E+04 1.07E+07 1.64E+08
f13 Average 2.63E+04 1.37E+09 1.51E+09 2.17E+06 1.30E+07 2.08E+06 1.63E+07 3.46E+08 9.58E+08
SD 1.45E+04 5.20E+08 6.76E+08 2.97E+06 1.02E+07 3.41E+06 3.52E+07 3.12E+08 4.91E+08
Best 1.22E+04 1.44E+05 4.44E+05 6.09E+04 1.55E+05 7.65E+03 1.16E+04 9.30E+04 9.18E+03
f14 Average 2.27E+05 1.04E+06 1.25E+06 1.13E+06 2.21E+06 2.57E+05 2.11E+05 2.31E+06 7.30E+05
SD 1.87E+05 7.10E+05 7.27E+05 9.47E+05 2.27E+06 2.61E+05 1.74E+05 1.78E+06 6.25E+05
Best 7.28E+03 5.83E+06 6.36E+06 3.22E+04 3.87E+04 3.59E+04 2.71E+04 1.71E+05 2.51E+05
f15 Average 1.42E+04 4.39E+07 2.91E+07 2.87E+05 6.37E+06 2.18E+05 2.52E+05 1.12E+07 7.93E+07
SD 3.59E+03 4.01E+07 2.41E+07 2.64E+05 7.07E+06 2.22E+05 3.27E+05 1.91E+07 5.37E+07
Best 2.04E+03 3.93E+03 3.74E+03 2.70E+03 3.19E+03 2.14E+03 2.03E+03 3.95E+03 3.41E+03
f16 Average 2.84E+03 4.44E+03 4.23E+03 3.59E+03 4.33E+03 2.97E+03 3.25E+03 4.49E+03 4.03E+03
SD 3.28E+02 2.10E+02 2.45E+02 4.91E+02 5.50E+02 3.48E+02 6.90E+02 2.90E+02 3.11E+02
Best 1.83E+03 2.37E+03 2.29E+03 1.99E+03 2.24E+03 1.89E+03 1.79E+03 2.65E+03 2.40E+03
f17 Average 2.24E+03 2.91E+03 2.84E+03 2.47E+03 2.71E+03 2.33E+03 2.35E+03 3.00E+03 2.80E+03
SD 2.22E+02 1.92E+02 1.61E+02 2.68E+02 3.23E+02 2.04E+02 3.89E+02 2.23E+02 2.05E+02
Best 5.21E+04 3.98E+06 1.40E+06 4.82E+05 2.07E+05 1.49E+05 2.03E+05 5.71E+05 9.28E+05
f18 Average 6.11E+05 1.53E+07 1.25E+07 5.32E+06 1.03E+07 3.17E+06 2.24E+06 2.52E+07 8.03E+06
SD 7.60E+05 7.94E+06 8.79E+06 5.31E+06 1.23E+07 2.56E+06 1.87E+06 1.52E+07 4.97E+06
Best 1.53E+04 3.44E+07 1.28E+07 2.22E+05 4.04E+05 4.41E+04 2.97E+05 4.11E+05 2.52E+06
f19 Average 4.43E+05 1.12E+08 7.79E+07 1.64E+06 1.21E+07 1.01E+06 1.24E+06 2.28E+07 9.84E+07
SD 3.45E+05 6.03E+07 5.14E+07 1.44E+06 1.41E+07 8.83E+05 1.00E+06 4.25E+07 8.57E+07
Table 10. Statistical results of the RUN and eight advanced optimizers on CEC-BC-2017 (Continued)
RUN CGSCA SCADE BMWOA BWOA OBLGWO CMAES GL25 CLPSO
Best 2.27E+03 2.71E+03 2.65E+03 2.40E+03 2.44E+03 2.27E+03 2.53E+03 2.96E+03 2.63E+03
f20 Average 2.56E+03 2.95E+03 2.99E+03 2.76E+03 2.81E+03 2.62E+03 3.15E+03 3.26E+03 2.87E+03
SD 1.70E+02 1.36E+02 1.52E+02 1.85E+02 1.94E+02 1.86E+02 3.46E+02 1.64E+02 9.18E+01
Best 2.40E+03 2.57E+03 2.57E+03 2.49E+03 2.56E+03 2.42E+03 2.33E+03 2.57E+03 2.53E+03
f21 Average 2.44E+03 2.62E+03 2.62E+03 2.56E+03 2.64E+03 2.49E+03 2.59E+03 2.62E+03 2.60E+03
SD 2.52E+01 2.45E+01 2.80E+01 4.40E+01 5.41E+01 5.34E+01 2.67E+02 2.59E+01 2.39E+01
Best 2.30E+03 4.08E+03 4.96E+03 2.55E+03 3.49E+03 2.33E+03 6.23E+03 3.31E+03 4.30E+03
f22 Average 3.31E+03 5.39E+03 6.48E+03 5.68E+03 7.74E+03 3.33E+03 8.15E+03 5.31E+03 7.40E+03
SD 1.86E+03 1.23E+03 1.08E+03 3.15E+03 1.86E+03 1.97E+03 1.32E+03 2.11E+03 1.83E+03
Best 2.74E+03 3.02E+03 3.01E+03 2.87E+03 2.95E+03 2.76E+03 2.94E+03 2.99E+03 2.96E+03
f23 Average 2.80E+03 3.09E+03 3.09E+03 2.98E+03 3.19E+03 2.85E+03 4.22E+03 3.10E+03 3.09E+03
SD 2.95E+01 3.74E+01 4.62E+01 7.05E+01 1.16E+02 5.76E+01 5.82E+02 6.85E+01 4.95E+01
Best 2.90E+03 3.19E+03 3.18E+03 3.04E+03 3.07E+03 2.94E+03 3.07E+03 3.12E+03 3.09E+03
f24 Average 2.98E+03 3.25E+03 3.25E+03 3.13E+03 3.28E+03 2.99E+03 3.12E+03 3.24E+03 3.25E+03
SD 4.61E+01 4.25E+01 3.36E+01 6.49E+01 9.54E+01 3.23E+01 2.04E+01 6.14E+01 4.78E+01
Best 2.89E+03 3.30E+03 3.35E+03 2.99E+03 3.10E+03 2.90E+03 2.88E+03 3.34E+03 3.44E+03
f25 Average 2.93E+03 3.70E+03 3.81E+03 3.08E+03 3.20E+03 2.95E+03 2.89E+03 3.72E+03 3.77E+03
SD 2.67E+01 2.36E+02 2.47E+02 5.70E+01 7.47E+01 2.82E+01 6.37E+00 2.51E+02 2.22E+02
Best 2.80E+03 6.36E+03 7.36E+03 3.74E+03 4.71E+03 3.56E+03 2.80E+03 7.35E+03 6.52E+03
f26 Average 4.50E+03 8.02E+03 8.21E+03 6.82E+03 8.33E+03 5.73E+03 5.39E+03 8.47E+03 7.92E+03
SD 1.27E+03 5.81E+02 3.96E+02 1.22E+03 1.12E+03 7.41E+02 1.84E+03 5.37E+02 5.68E+02
Best 3.25E+03 3.41E+03 3.39E+03 3.25E+03 3.33E+03 3.22E+03 3.35E+03 3.51E+03 3.43E+03
f27 Average 3.31E+03 3.52E+03 3.57E+03 3.33E+03 3.47E+03 3.25E+03 3.51E+03 3.66E+03 3.58E+03
SD 3.57E+01 6.53E+01 8.54E+01 6.37E+01 1.52E+02 1.57E+01 3.47E+02 1.01E+02 7.67E+01
Best 3.23E+03 4.08E+03 4.48E+03 3.39E+03 3.50E+03 3.27E+03 3.19E+03 3.95E+03 4.25E+03
f28 Average 3.28E+03 4.76E+03 5.03E+03 3.50E+03 3.82E+03 3.35E+03 3.23E+03 4.88E+03 4.95E+03
SD 2.06E+01 4.47E+02 3.53E+02 7.26E+01 2.00E+02 3.69E+01 3.00E+01 4.09E+02 4.03E+02
Best 3.69E+03 4.67E+03 5.18E+03 4.25E+03 4.31E+03 3.84E+03 3.42E+03 4.91E+03 4.54E+03
f29 Average 4.24E+03 5.29E+03 5.67E+03 5.00E+03 5.45E+03 4.28E+03 3.76E+03 5.56E+03 5.13E+03
SD 2.74E+02 3.17E+02 3.15E+02 5.16E+02 6.21E+02 3.41E+02 2.50E+02 3.28E+02 3.12E+02
Best 3.55E+05 6.81E+07 6.71E+07 1.00E+06 6.87E+06 7.09E+05 7.94E+05 1.76E+07 1.74E+07
f30 Average 3.99E+06 2.19E+08 2.01E+08 8.83E+06 5.03E+07 6.50E+06 3.18E+06 5.03E+07 7.24E+07
SD 2.71E+06 8.81E+07 8.13E+07 4.83E+06 4.07E+07 4.35E+06 2.42E+06 3.73E+07 4.27E+07
Table 11. Average ranks of RUN and eight advanced optimizers based on the Friedman test
Optimizer Average rank Overall rank
RUN 1.33 1
CGSCA 6.53 7
SCADE 7.40 9
BMWOA 4.00 3
BWOA 5.70 5
OBLGWO 2.23 2
CMAES 4.43 4
GL25 7.17 8
CLPSO 6.20 6
Fig. 8. Convergence graphs of RUN and eight other algorithms for the selected CEC 2017
benchmark functions
A sensitivity analysis of the control parameters of RUN (i.e., a and b) was performed, which demonstrated that RUN has very low sensitivity to parameter changes. Different combinations of the control parameters were evaluated on 34 mathematical test functions divided into two groups: the 14 unimodal and multimodal test functions (group 1) and 20 test functions of CEC-BC-2017 (group 2). The candidate values of each parameter were defined as a = [5, 10, 20, 30, 40] and b = [4, 8, 12, 16, 20]. Since each parameter had 5 values, there were 25 combinations of the design parameters. Each combination was evaluated by the average fitness values obtained from 30 different runs. Fig. 9(a) illustrates the rank values for the two groups, and Fig. 9(b) presents the average rank values over all combinations. Accordingly, the best rank belongs to C13 (a = 20 and b = 12), and the rank of C19 is very close to that of C13. Moreover, the ranks of most combinations are very close, indicating that the proposed algorithm is not very sensitive to parameter changes.
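The grid-based protocol described above (evaluate every (a, b) pair by its mean best fitness over repeated runs, then rank the pairs) can be sketched as follows. The optimizer below is a hypothetical stand-in (a tiny random search on the sphere function), not the RUN update rule; it only illustrates the ranking protocol, and the stand-in ignores b entirely.

```python
import itertools
import random

def sensitivity_grid(optimize, a_values, b_values, n_runs=30):
    """Rank every (a, b) combination by mean best fitness over n_runs.

    optimize(a, b, seed) stands in for one run of the optimizer and
    must return the best fitness found (lower is better).
    Returns (ranks, means) with rank 1 = best combination.
    """
    combos = list(itertools.product(a_values, b_values))
    means = {c: sum(optimize(c[0], c[1], s) for s in range(n_runs)) / n_runs
             for c in combos}
    ranked = sorted(combos, key=means.get)
    return {c: r + 1 for r, c in enumerate(ranked)}, means

def toy_optimizer(a, b, seed, dim=5, evals=200):
    # Hypothetical stand-in: random search on the sphere function.
    # The parameter b is unused here; a real study would pass both
    # into the optimizer's update equations.
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(evals):
        x = [rng.uniform(-a, a) for _ in range(dim)]
        best = min(best, sum(v * v for v in x))
    return best

ranks, means = sensitivity_grid(toy_optimizer,
                                [5, 10, 20, 30, 40],
                                [4, 8, 12, 16, 20], n_runs=3)
```

With 5 values per parameter this yields exactly the 25 ranked combinations discussed in the text.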
Fig. 9. Sensitivity analysis of RUN, (a) ranks of uni- and multi-modal test functions
and CEC-2017 (b) average ranks of all combinations
6.1. Rolling element bearing design problem
The primary goal of this problem is to maximize the fatigue life, which is a function of the dynamic load-carrying capacity. The problem has ten variables and nine constraints covering modeling and geometric limitations; it is described in detail in (Gupta, et al., 2007). The related mathematical formulation is detailed in Appendix A.
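This excerpt does not state how the nine constraints are handled inside the optimizer. A common choice in the metaheuristic literature (an assumption here, not necessarily the authors' implementation) is a static penalty that folds constraint violations into the objective; since this problem maximizes fatigue life, its objective would first be negated for minimization:

```python
def penalized(objective, constraints, x, rho=1e6):
    """Static penalty for minimization.

    Each g in `constraints` must satisfy g(x) <= 0 when feasible;
    violations are squared and scaled by rho before being added to
    the objective value.
    """
    violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
    return objective(x) + rho * violation
```

At a feasible point the penalty term vanishes and the plain objective is returned; any violated constraint dominates the fitness, steering the search back toward the feasible region.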
Fig. 10 displays the schematic view of the rolling element bearing design
problem.
Table 12. Statistical results from RUN, TLBO, GA, PVS, and HHO for the rolling element bearing design problem
RUN GA (Gupta, et al., 2007) TLBO (Rao, et al., 2011) PVS (Savsani & Savsani, 2016) HHO (Heidari, Mirjalili, et al., 2019)
SD 977.95 NA NA NA NA
Table 13. Comparison of the results from RUN, TLBO, GA, PVS, and HHO for the rolling element
bearing design problem
Variables RUN TLBO (Rao, et al., 2011) GA (Gupta, et al., 2007) PVS (Savsani & Savsani, 2016) HHO (Heidari, Mirjalili, et al., 2019)
21.59796 21.42559 21.42300 21.42559 21.0000
125.2142 125.7191 125.71710 125.71906 125.0000
0.51500 0.51500 0.51500 0.51500 0.51500
0.51500 0.51500 0.51500 0.51500 0.51500
11.4024 11.0000 11.0000 11.0000 11.0920
0.40059 0.42426 0.41590 0.40043 0.4000
0.61467 0.63394 0.65100 0.68016 0.6000
0.30530 0.30000 0.30004 0.30000 0.3000
0.02000 0.06885 0.02230 0.07999 0.0504
0.63665 0.79994 0.75100 0.70000 0.6000
6.2. Speed reducer design problem
RUN's optimal results were compared with those of the CS (Gandomi, et al., 2013), HGSO (Hashim, et al., 2019), GWO, and WOA optimizers. Table 14 gives the results of these optimization algorithms for this problem. It can be observed that RUN achieved the best solution and outperformed the compared optimizers. In addition, the optimal variables of the problem are tabulated in Table 15.
Table 14. Statistical results from RUN, CS, HGSO, GWO, and WOA for the speed
reducer design problem
Table 15. Comparison of the results from RUN, CS, HGSO, GWO, and WOA
for the speed reducer design problem
Variables RKO CS (Gandomi, et al., 2013) HGSO (Hashim, et al., 2019) GWO (Hashim, et al., 2019) WOA (Hashim, et al., 2019)
3.5001 3.5015 3.4970 3.5000 3.4210
0.7000 0.7000 0.7100 0.7000 0.7000
17.000 17.000 17.020 17.000 17.000
7.0000 7.6050 7.6700 7.3000 7.3000
7.8000 7.8181 7.8100 7.8000 7.8000
3.3500 3.3520 3.3600 2.9000 2.9000
5.2900 5.2875 5.2850 2.9000 5.0000
Fitness 2996.73 3000.98 2997.10 2998.83 2998.40
6.3. Three-bar truss design problem
Table 16. Comparison of statistical results of RUN with literature for the three-bar truss
problem
RUN MVO (S. Mirjalili, et al., 2016) GOA (S. Z. Mirjalili, et al., 2018) MFO (S. Mirjalili, 2015b) MBA (Sadollah, Bahreininejad, Eskandar, & Hamdi, 2013) CS (Gandomi, et al., 2013) HHO (Heidari, Mirjalili, et al., 2019)
Table 17. Best solutions achieved by the seven algorithms for the
three-bar truss problem
Algorithm
RKO 0.788679110 0.408237045
MVO (S. Mirjalili, et al., 2016) 0.78860276 0.408453070
GOA (S. Z. Mirjalili, et al., 2018) 0.78889755 0.40761957
MFO (S. Mirjalili, 2015b) 0.78824477 0.40946690
MBA (Sadollah, Bahreininejad, Eskandar, & Hamdi, 2013) 0.7885650 0.4085597
CS (Gandomi, et al., 2013) 0.78867 0.40902
HHO (Heidari, Mirjalili, et al., 2019) 0.7886628 0.4082313
6.4. Cantilever beam problem
Fig. 13 depicts the five-stepped cantilever beam problem, for which the main
variables are the height and width of the beam [63]. The main goal of the problem is to
minimize the beam weight. The main formulation of the problem is defined in
Appendix A.
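The cantilever beam problem referenced to Appendix A is commonly stated (e.g., in Chickermane & Gea, 1996, and Gandomi et al., 2013) as minimizing the weight 0.0624(x1 + x2 + x3 + x4 + x5) subject to a single deflection constraint. The constants and the near-optimal design below are taken from that standard formulation in the literature, not from this excerpt:

```python
def cantilever_weight(x):
    # Beam weight in the standard five-variable formulation
    return 0.0624 * sum(x)

def cantilever_constraint(x):
    # Deflection constraint: feasible when g(x) <= 0
    coeffs = [61.0, 37.0, 19.0, 7.0, 1.0]
    return sum(c / xi ** 3 for c, xi in zip(coeffs, x)) - 1.0

# Near-optimal design commonly reported for this formulation
x_best = [6.016, 5.309, 4.494, 3.502, 2.153]
```

Evaluating x_best gives a weight of about 1.34 with the deflection constraint active (g(x) close to zero), which is the behavior a well-converged optimizer should reproduce.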
Table 18. Statistical results of RUN, CS, SOS, MMA, GCA I, and GCA II for the
cantilever beam problem
RUN CS (Gandomi, et al., 2013) SOS (Cheng & Prayogo, 2014) MMA (Chickermane & Gea, 1996) GCA I (Chickermane & Gea, 1996) GCA II (Chickermane & Gea, 1996)
SD 4.68E-06 NA 1.10E-05 NA NA NA
RUN was applied to this problem, and its results were compared with those of CS (Gandomi, et al., 2013), the method of moving asymptotes (MMA) (Chickermane & Gea, 1996), generalized convex approximation (GCA I) (Chickermane & Gea, 1996), GCA II (Chickermane & Gea, 1996), and SOS (Cheng & Prayogo, 2014). As shown in Table 18, RUN provided more promising results than the five other optimizers, which confirms the RUN algorithm's high efficiency in approximating the global best solution for this problem. The optimal variables calculated by all six optimizers are listed in Table 19.
Table 19. Optimal variables obtained by the RUN, CS, SOS, MMA, GCA
I, and GCA II algorithms for the cantilever beam problem
Algorithm
Appendix A
I- Rolling element bearing design problem
The objective function (maximizing the dynamic load-carrying capacity), the nine constraints g1–g9, the associated geometric relations, and the variable ranges follow the formulation given by Gupta et al. (2007); the equations could not be recovered from this version of the document.
II- Three-bar truss design problem
Minimize f(x) = (2√2 x1 + x2) × l,
Subject to:
g1(x) = ((√2 x1 + x2) / (√2 x1² + 2 x1 x2)) P − σ ≤ 0,
g2(x) = (x2 / (√2 x1² + 2 x1 x2)) P − σ ≤ 0,
g3(x) = (1 / (x1 + √2 x2)) P − σ ≤ 0,
where l = 100 cm, P = 2 kN/cm², and σ = 2 kN/cm².
Variable ranges: 0 ≤ x1, x2 ≤ 1.
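The three-bar truss problem has a compact standard formulation (e.g., Gandomi et al., 2013): minimize (2√2 x1 + x2)·l subject to three stress constraints. A sketch assuming the usual constants l = 100 cm and P = σ = 2 kN/cm², used here to check RUN's solution from Table 17:

```python
import math

L_BAR, P_LOAD, SIGMA = 100.0, 2.0, 2.0  # cm, kN/cm^2 (standard values)

def truss_weight(x1, x2):
    # Structural weight of the three-bar truss
    return (2.0 * math.sqrt(2.0) * x1 + x2) * L_BAR

def truss_constraints(x1, x2):
    # Stress constraints, each feasible when g <= 0
    r2 = math.sqrt(2.0)
    d = r2 * x1 ** 2 + 2.0 * x1 * x2
    g1 = (r2 * x1 + x2) / d * P_LOAD - SIGMA
    g2 = x2 / d * P_LOAD - SIGMA
    g3 = 1.0 / (x1 + r2 * x2) * P_LOAD - SIGMA
    return g1, g2, g3

x_best = (0.788679110, 0.408237045)  # RUN's solution from Table 17
```

At RUN's reported point the weight is about 263.89 and the first stress constraint is essentially active, consistent with the known optimum of this problem.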
References
Abdel-Baset, M., Zhou, Y., & Hezam, I. (2019). Use of a sine cosine algorithm combined with
Simpson method for numerical integration. International Journal of Mathematics in
Operational Research, 14, 307-318.
Ahmadianfar, I., Bozorg-Haddad, O., & Chu, X. (2020). Gradient-based optimizer: A new
Metaheuristic optimization algorithm. Information Sciences, 540, 131-159.
Ahmadianfar, I., Khajeh, Z., Asghari-Pari, S.-A., & Chu, X. (2019). Developing optimal policies
for reservoir systems using a multi-strategy optimization algorithm. Applied Soft
Computing, 80, 888-903.
Ahmadianfar, I., Kheyrandish, A., Jamei, M., & Gharabaghi, B. (2020). Optimizing operating
rules for multi-reservoir hydropower generation systems: An adaptive hybrid
differential evolution algorithm. Renewable Energy.
Baykasoğlu, A., & Ozsoydan, F. B. (2017). Evolutionary and population-based methods versus
constructive search strategies in dynamic combinatorial optimization. Information
Sciences, 420, 159-183.
Beyer, H.-G., & Schwefel, H.-P. (2002). Evolution strategies–A comprehensive introduction.
Natural computing, 1, 3-52.
Camacho Villalón, C. L., Stützle, T., & Dorigo, M. (2020). Grey Wolf, Firefly and Bat
Algorithms: Three Widespread Algorithms that Do Not Contain Any Novelty. In M.
Dorigo, T. Stützle, M. J. Blesa, C. Blum, H. Hamann, M. K. Heinrich & V. Strobel
(Eds.), Swarm Intelligence (pp. 121-133). Cham: Springer International Publishing.
Cao, B., Dong, W., Lv, Z., Gu, Y., Singh, S., & Kumar, P. (2020). Hybrid Microgrid Many-
Objective Sizing Optimization with Fuzzy Decision. IEEE Transactions on Fuzzy
Systems.
Cao, B., Fan, S., Zhao, J., Yang, P., Muhammad, K., & Tanveer, M. (2020). Quantum-enhanced
multiobjective large-scale optimization via parallelism. Swarm and Evolutionary
Computation, 57, 100697.
Cao, B., Wang, X., Zhang, W., Song, H., & Lv, Z. (2020). A Many-Objective Optimization
Model of Industrial Internet of Things Based on Private Blockchain. IEEE Network,
34, 78-83.
Cao, B., Zhao, J., Gu, Y., Fan, S., & Yang, P. (2019). Security-aware industrial wireless sensor
network deployment optimization. IEEE Transactions on Industrial Informatics, 16, 5309-
5316.
Cao, B., Zhao, J., Gu, Y., Ling, Y., & Ma, X. (2020). Applying graph-based differential grouping
for multiobjective large-scale optimization. Swarm and Evolutionary Computation, 53,
100626.
Cao, B., Zhao, J., Yang, P., Gu, Y., Muhammad, K., Rodrigues, J. J., & de Albuquerque, V. H.
C. (2019). Multiobjective 3-D Topology Optimization of Next-Generation Wireless
Data Center Network. IEEE Transactions on Industrial Informatics, 16, 3597-3605.
Chen, H., Chen, A., Xu, L., Xie, H., Qiao, H., Lin, Q., & Cai, K. (2020). A deep learning CNN
architecture applied in smart near-infrared analysis of water pollution for agricultural
irrigation resources. Agricultural Water Management, 240, 106303.
Chen, H., Fan, D. L., Fang, L., Huang, W., Huang, J., Cao, C., Yang, L., He, Y., & Zeng, L.
(2020). Particle swarm optimization algorithm with mutation operator for particle filter
noise reduction in mechanical fault diagnosis. International Journal of Pattern Recognition
and Artificial Intelligence, 2058012.
Chen, H., Qiao, H., Xu, L., Feng, Q., & Cai, K. (2019). A Fuzzy Optimization Strategy for the
Implementation of RBF LSSVR Model in Vis–NIR Analysis of Pomelo Maturity.
IEEE Transactions on Industrial Informatics, 15, 5971-5979.
Chen, H., Xu, Y., Wang, M., & Zhao, X. (2019). A balanced whale optimization algorithm for
constrained engineering design problems. Applied Mathematical Modelling, 71, 45-59.
Chen, Y., He, L., Guan, Y., Lu, H., & Li, J. (2017). Life cycle assessment of greenhouse gas
emissions and water-energy optimization for shale gas supply chain planning based on
multi-level approach: Case study in Barnett, Marcellus, Fayetteville, and Haynesville
shales. Energy Conversion and Management, 134, 382-398.
Cheng, M.-Y., & Prayogo, D. (2014). Symbiotic organisms search: a new metaheuristic
optimization algorithm. Computers & Structures, 139, 98-112.
Chickermane, H., & Gea, H. (1996). Structural optimization using a new local approximation
method. International Journal for Numerical Methods in Engineering, 39, 829-846.
Derrac, J., García, S., Molina, D., & Herrera, F. (2011). A practical tutorial on the use of
nonparametric statistical tests as a methodology for comparing evolutionary and swarm
intelligence algorithms. Swarm and Evolutionary Computation, 1, 3-18.
Doğan, B., & Ölmez, T. (2015). A new metaheuristic for numerical function optimization:
Vortex Search algorithm. Information Sciences, 293, 125-145.
Dorigo, M., & Di Caro, G. (1999). Ant colony optimization: a new meta-heuristic. In Proceedings
of the 1999 congress on evolutionary computation-CEC99 (Cat. No. 99TH8406) (Vol. 2, pp.
1470-1477): IEEE.
Eberhart, R. C., & Kennedy, J. (1995). A new optimizer using particle swarm theory. In
Proceedings of the sixth international symposium on micro machine and human science (Vol. 1, pp.
39-43): New York, NY.
England, R. (1969). Error estimates for Runge-Kutta type solutions to systems of ordinary
differential equations. The computer journal, 12, 166-170.
Eskandar, H., Sadollah, A., Bahreininejad, A., & Hamdi, M. (2012). Water cycle algorithm–A
novel metaheuristic optimization method for solving constrained engineering
optimization problems. Computers & Structures, 110, 151-166.
Formato, R. A. (2007). Central force optimization. Prog Electromagn Res, 77, 425-491.
Fu, X., Pace, P., Aloi, G., Yang, L., & Fortino, G. (2020). Topology Optimization Against
Cascading Failures on Wireless Sensor Networks Using a Memetic Algorithm. Computer
Networks, 107327.
Gandomi, A. H., Yang, X.-S., & Alavi, A. H. (2013). Cuckoo search algorithm: a metaheuristic
approach to solve structural optimization problems. Engineering with computers, 29, 17-35.
García-Martínez, C., Lozano, M., Herrera, F., Molina, D., & Sánchez, A. M. (2008). Global and
local real-coded genetic algorithms based on parent-centric crossover operators.
European Journal of Operational Research, 185, 1088-1113.
Glover, F., & Laguna, M. (1998). Tabu search. In Handbook of combinatorial optimization (pp.
2093-2229): Springer.
Gupta, S., Tiwari, R., & Nair, S. B. (2007). Multi-objective design optimisation of rolling
bearings using genetic algorithms. Mechanism and Machine Theory, 42, 1418-1443.
Hansen, N., Müller, S. D., & Koumoutsakos, P. (2003). Reducing the time complexity of the
derandomized evolution strategy with covariance matrix adaptation (CMA-ES).
Evolutionary computation, 11, 1-18.
Hashim, F. A., Houssein, E. H., Mabrouk, M. S., Al-Atabany, W., & Mirjalili, S. (2019). Henry
gas solubility optimization: A novel physics-based algorithm. Future Generation Computer
Systems, 101, 646-667.
Heidari, A. A., Abbaspour, R. A., & Chen, H. (2019). Efficient boosted grey wolf optimizers
for global search and kernel extreme learning machine training. Applied Soft Computing,
81, 105521.
Heidari, A. A., Aljarah, I., Faris, H., Chen, H., Luo, J., & Mirjalili, S. (2019). An enhanced
associative learning-based exploratory whale optimizer for global optimization. Neural
Computing and Applications, 1-27.
Heidari, A. A., Mirjalili, S., Faris, H., Aljarah, I., Mafarja, M., & Chen, H. (2019). Harris hawks
optimization: Algorithm and applications. Future Generation Computer Systems, 97, 849-
872.
Holland, J. H. (1975). Adaptation in natural and artificial systems: an introductory analysis with
applications to biology, control, and artificial intelligence: U Michigan Press.
Hosseini, H. S. (2007). Problem solving by intelligent water drops. In 2007 IEEE congress on
evolutionary computation (pp. 3226-3231): IEEE.
Huang, Q., Zhang, K., Song, J., Zhang, Y., & Shi, J. (2019). Adaptive differential evolution with
a Lagrange interpolation argument algorithm. Information Sciences, 472, 180-202.
Karaboga, D., & Basturk, B. (2007). A powerful and efficient algorithm for numerical function
optimization: artificial bee colony (ABC) algorithm. Journal of global optimization, 39, 459-
471.
Kaveh, A., & Bakhshpoori, T. (2016). Water evaporation optimization: a novel physically
inspired optimization algorithm. Computers & Structures, 167, 69-85.
Kirkpatrick, S., Gelatt, C. D., & Vecchi, M. P. (1983). Optimization by simulated annealing.
science, 220, 671-680.
Koza, J. R. (1994). Genetic programming II: Automatic discovery of reusable subprograms.
Cambridge, MA, USA, 13, 32.
Kumar, N., Hussain, I., Singh, B., & Panigrahi, B. K. (2017). Single sensor-based MPPT of
partially shaded PV system for battery charging by using cauchy and gaussian sine
cosine optimization. IEEE Transactions on Energy Conversion, 32, 983-992.
Kutta, W. (1901). Beitrag zur naherungsweisen integration totaler differentialgleichungen. Z.
Math. Phys., 46, 435-453.
Li, S., Chen, H., Wang, M., Heidari, A. A., & Mirjalili, S. (2020). Slime mould algorithm: A new
method for stochastic optimization. Future Generation Computer Systems, 111, 300-323.
Li, T., Xu, M., Zhu, C., Yang, R., Wang, Z., & Guan, Z. (2019). A deep learning approach for
multi-frame in-loop filter of HEVC. IEEE Transactions on Image Processing, 28, 5663-
5678.
Liang, J. J., Qin, A. K., Suganthan, P. N., & Baskar, S. (2006). Comprehensive learning particle
swarm optimizer for global optimization of multimodal functions. IEEE Transactions on
Evolutionary Computation, 10, 281-295.
Liu, J., Wu, C., Wu, G., & Wang, X. (2015). A novel differential search algorithm and
applications for structure design. Applied Mathematics and Computation, 268, 246-269.
Liu, S., Chan, F. T., & Ran, W. (2016). Decision making for the selection of cloud vendor: An
improved approach under group decision-making with integrated weights and
objective/subjective attributes. Expert Systems with Applications, 55, 37-47.
Liu, S., Yu, W., Chan, F. T. S., & Niu, B. A variable weight-based hybrid approach for multi-
attribute group decision making under interval-valued intuitionistic fuzzy sets.
International Journal of Intelligent Systems, n/a.
Lones, M. A. (2020). Mitigating metaphors: A comprehensible guide to recent nature-inspired
algorithms. SN Computer Science, 1, 49.
Luo, Q., Zhang, S., & Zhou, Y. (2017). Stochastic Fractal Search Algorithm for Template
Matching with Lateral Inhibition. Scientific Programming, 2017.
Mezura-Montes, E., & Coello, C. A. C. (2005). Useful infeasible solutions in engineering
optimization with evolutionary algorithms. In Mexican International Conference on Artificial
Intelligence (pp. 652-662): Springer.
Mirjalili, S. (2015a). The ant lion optimizer. Advances in Engineering Software, 83, 80-98.
Mirjalili, S. (2015b). Moth-flame optimization algorithm: A novel nature-inspired heuristic
paradigm. Knowledge-Based Systems, 89, 228-249.
Mirjalili, S., & Lewis, A. (2016). The whale optimization algorithm. Advances in engineering software,
95, 51-67.
Mirjalili, S., Mirjalili, S. M., & Hatamlou, A. (2016). Multi-verse optimizer: a nature-inspired
algorithm for global optimization. Neural Computing and Applications, 27, 495-513.
Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey wolf optimizer. Advances in engineering
software, 69, 46-61.
Mirjalili, S. Z., Mirjalili, S., Saremi, S., Faris, H., & Aljarah, I. (2018). Grasshopper optimization
algorithm for multi-objective optimization problems. Applied Intelligence, 48, 805-820.
Mousavi, A. A., Zhang, C., Masri, S. F., & Gholipour, G. (2020). Structural Damage
Localization and Quantification Based on a CEEMDAN Hilbert Transform Neural
Network Approach: A Model Steel Truss Bridge Case Study. Sensors, 20, 1271.
Nenavath, H., & Jatoth, R. K. (2018). Hybridizing sine cosine algorithm with differential
evolution for global optimization and object tracking. Applied Soft Computing, 62, 1019-
1043.
Niu, P., Niu, S., & Chang, L. (2019). The defect of the Grey Wolf optimization algorithm and
its verification method. Knowledge-Based Systems, 171, 37-43.
Nocedal, J., & Wright, S. (2006). Numerical optimization: Springer Science & Business Media.
Patil, P., & Verma, U. (2006). Numerical Computational Methods. Alpha Science International
Ltd. In: Oxford UK.
Piotrowski, A. P., Napiorkowski, J. J., & Rowinski, P. M. (2014). How novel is the "novel" black hole optimization approach? Information Sciences, 267, 191-200.
Qiu, T., Shi, X., Wang, J., Li, Y., Qu, S., Cheng, Q., Cui, T., & Sui, S. (2019). Deep learning: A
rapid and efficient route to automatic metasurface design. Advanced Science, 6, 1900128.
Qu, S., Han, Y., Wu, Z., & Raza, H. (2020). Consensus Modeling with Asymmetric Cost Based
on Data-Driven Robust Optimization. Group Decision and Negotiation, 1-38.
Rao, R. V., Savsani, V. J., & Vakharia, D. (2011). Teaching–learning-based optimization: a
novel method for constrained mechanical design optimization problems. Computer-
Aided Design, 43, 303-315.
Rashedi, E., Nezamabadi-Pour, H., & Saryazdi, S. (2009). GSA: a gravitational search
algorithm. Information Sciences, 179, 2232-2248.
Runge, C. (1895). Über die numerische Auflösung von Differentialgleichungen. Mathematische
Annalen, 46, 167-178.
Sadollah, A., Bahreininejad, A., Eskandar, H., & Hamdi, M. (2013). Mine blast algorithm: A
new population based algorithm for solving constrained engineering optimization
problems. Applied Soft Computing, 13, 2592-2612.
Saka, M. P., Hasançebi, O., & Geem, Z. W. (2016). Metaheuristics in structural optimization
and discussions on harmony search algorithm. Swarm and Evolutionary Computation, 28,
88-97.
Salcedo-Sanz, S. (2016). Modern meta-heuristics based on nonlinear physics processes: A
review of models and design procedures. Physics Reports, 655, 1-70.
Savsani, P., & Savsani, V. (2016). Passing vehicle search (PVS): A novel metaheuristic
algorithm. Applied Mathematical Modelling, 40, 3951-3978.
Simon, D. (2008). Biogeography-based optimization. IEEE Transactions on Evolutionary
Computation, 12, 702-713.
Sörensen, K. (2015). Metaheuristics—the metaphor exposed. International Transactions in
Operational Research, 22, 3-18.
Storn, R., & Price, K. (1995). Differential evolution-a simple and efficient adaptive scheme for global
optimization over continuous spaces (Vol. 3): ICSI Berkeley.
Sun, G., Yang, B., Yang, Z., & Xu, G. (2019). An adaptive differential evolution with combined
strategy for global numerical optimization. Soft Computing, 1-20.
Tian, M., & Gao, X. (2017). An improved differential evolution with information intercrossing
and sharing mechanism for numerical optimization. Swarm and Evolutionary Computation.
Tsamardinos, I., Brown, L. E., & Aliferis, C. F. (2006). The max-min hill-climbing Bayesian
network structure learning algorithm. Machine learning, 65, 31-78.
Tzanetos, A., & Dounias, G. (2020). Nature inspired optimization algorithms or simply
variations of metaheuristics? Artificial Intelligence Review, 1-22.
Wang, B., Zhang, B., Liu, X., & Zou, F. (2020). Novel infrared image enhancement
optimization algorithm combined with DFOCS. Optik, 224, 165476.
Wu, C., Wu, P., Wang, J., Jiang, R., Chen, M., & Wang, X. (2020). Critical review of data-driven
decision-making in bridge operation and maintenance. Structure and Infrastructure
Engineering, 1-24.
Wu, G. (2016). Across neighborhood search for numerical optimization. Information Sciences,
329, 597-618.
Wu, G., Pedrycz, W., Suganthan, P. N., & Mallipeddi, R. (2015). A variable reduction strategy
for evolutionary algorithms handling equality constraints. Applied Soft Computing, 37,
774-786.
Yan, J., Pu, W., Zhou, S., Liu, H., & Greco, M. S. (2020). Optimal Resource Allocation for
Asynchronous Multiple Targets Tracking in Heterogeneous Radar Networks. IEEE
Transactions on Signal Processing, 68, 4055-4068.
Yang, L., & Chen, H. (2019). Fault diagnosis of gearbox based on RBF-PF and particle swarm
optimization wavelet neural network. Neural Computing and Applications, 31, 4463-4478.
Yang, X.-S. (2010a). Firefly algorithm, stochastic test functions and design optimisation. arXiv
preprint arXiv:1003.1409.
Yang, X.-S. (2010b). A new metaheuristic bat-inspired algorithm. In Nature inspired cooperative
strategies for optimization (NICSO 2010) (pp. 65-74): Springer.
Yang, X.-S., & Deb, S. (2010). Engineering optimisation by cuckoo search. International Journal of
Mathematical Modelling and Numerical Optimisation, 1, 330-343.
Zhang, C. W., Ou, J. P., & Zhang, J. Q. (2006). Parameter optimization and analysis of a vehicle
suspension system controlled by magnetorheological fluid dampers. Structural Control
and Health Monitoring: The Official Journal of the International Association for Structural Control
and Monitoring and of the European Association for the Control of Structures, 13, 885-896.
Zhang, J., Zhou, Y., & Luo, Q. (2018). An improved sine cosine water wave optimization
algorithm for global optimization. Journal of Intelligent & Fuzzy Systems, 34, 2129-2141.
Zhao, W., Wang, L., & Zhang, Z. (2019). Atom search optimization and its application to solve
a hydrogeologic parameter estimation problem. Knowledge-Based Systems, 163, 283-304.
Zheng, L., & Zhang, X. (2017). Modeling and analysis of modern fluid problems: Academic Press.