A Generalized Evolutionary Metaheuristic (GEM) Algorithm for Engineering Optimization
Xin-She Yang
School of Science and Technology, Middlesex University London,
The Burroughs, London NW4 4BT, United Kingdom.
Abstract
Many optimization problems in engineering and industrial design applications can be formulated
as optimization problems with highly nonlinear objectives, subject to multiple complex constraints.
Solving such optimization problems requires sophisticated algorithms and optimization techniques. A
major trend in recent years is the use of nature-inspired metaheuristic algorithms (NIMA). Despite the
popularity of nature-inspired metaheuristic algorithms, there are still some challenging issues and open
problems to be resolved. Two main issues related to current NIMAs are: there are over 540 algorithms
in the literature, and there is no unified framework to understand the search mechanisms of different
algorithms. Therefore, this paper attempts to analyse some similarities and differences among different
algorithms and then presents a generalized evolutionary metaheuristic (GEM) in an attempt to unify
some of the existing algorithms. After a brief discussion of some insights into nature-inspired algorithms
and some open problems, we propose a generalized evolutionary metaheuristic algorithm to unify more
than 20 different algorithms so as to understand their main steps and search mechanisms. We then test
the unified GEM using 15 test benchmarks to validate its performance. Finally, further research topics
are briefly discussed.
Keywords: Algorithm; Derivative-free algorithm; Evolutionary computation; Metaheuristic; Nature-
inspired computing; Swarm intelligence; Optimization.
Citation Detail:
Xin-She Yang, A Generalized Evolutionary Metaheuristic (GEM) Algorithm for Engineering Optimiza-
tion, Cogent Engineering, vol. 11, no. 1, Article 2364041 (2024).
https://doi.org/10.1080/23311916.2024.2364041
1 Introduction
Many design problems in engineering and industry can be formulated as optimization problems subject
to multiple nonlinear constraints. To solve such optimization problems, sophisticated optimization algorithms and techniques are often used. Traditional algorithms such as the Newton-Raphson method are efficient, but they use derivatives, and calculating these derivatives, especially the second derivatives in a high-dimensional space, can be costly. In addition, such derivative-based algorithms usually perform local search, and the final solutions may depend on the starting point if the optimization problems are highly nonlinear and multimodal (Boyd and Vandenberghe, 2004; Yang, 2020a). An alternative approach
is to use derivative-free algorithms and many evolutionary algorithms, especially recent nature-inspired
algorithms, do not use derivatives (Kennedy and Eberhart, 1995; Storn and Price, 1997; Pham et al.,
2005; Yang, 2020b). These nature-inspired metaheuristic algorithms are flexible and easy to implement,
and yet they are usually very effective in solving various optimization problems in practice.
Algorithms have been important throughout history (Beer, 2016; Chabert, 1999; Schrijver, 2005). There is a vast spectrum of algorithms in the literature, ranging from fundamental algorithms to combinatorial optimization techniques (Chabert, 1999; Schrijver, 2005). For some special classes of optimization
problems, effective algorithms exist for linear programming (Karmarkar, 1984) and quadratic program-
ming (Zdenek, 2009) as well as convex optimization (Bertsekas et al., 2003; Boyd and Vandenberghe,
2004). However, for nonlinear optimization problems, techniques vary and often approximations, heuris-
tic algorithms and metaheuristic algorithms are needed. Even so, optimal solutions cannot always be
obtained for nonlinear optimization.
Metaheuristic algorithms are approximation optimization techniques, and they use some form of
heuristics with trial and error and some form of memory and solution selections (Glover, 1986; Glover and
Laguna, 1997). Most metaheuristic algorithms are evolution-based and/or nature-inspired. Evolution-
based algorithms such as genetic algorithm (Holland, 1975; Goldberg, 1989) are often called evolutionary
algorithms. Algorithms such as particle swarm optimization (PSO) (Kennedy and Eberhart, 1995), bees
algorithm (Pham et al., 2005; Pham and Castellani, 2009) and firefly algorithm (Yang, 2009) are often
called swarm intelligence based algorithms (Kennedy et al., 2001).
However, terminologies in this area are not well defined and different researchers may use different
terminologies to refer to the same things. In this paper, we use nature-inspired algorithms to mean all
the metaheuristic algorithms that are inspired by some forms of evolutionary characteristics in nature, be they biological, behavioural, social, physical or chemical (Yang, 2020a; Yang and He,
2019). In this broad sense, almost all algorithms can be called nature-inspired algorithms, including bees
algorithms (Pham and Castellani, 2014, 2015), PSO (Kennedy et al., 2001), ant colony optimization, bat
algorithm, flower pollination algorithm, cuckoo search algorithm, genetic algorithm, and many others.
Nature-inspired algorithms have become popular in recent years, and it is estimated that there are
several hundred algorithms and variants in the current literature (Yang, 2020a), and the relevant literature
is expanding with more algorithms emerging every year. An exhaustive review of metaheuristic algorithms
by Rajwar et al. (Rajwar et al., 2023) indicated that there are over 540 metaheuristic algorithms, with over 350 of them developed in the last 10 years. Many such new variants have been
developed based on different characteristics/species from nature, social interactions and/or artificial
systems, or based on the hybridization of different algorithms or algorithmic components, or based on
different strategies of selecting candidate solutions and information sharing characteristics (Mohamed
et al., 2020; Rajwar et al., 2023; Zelinka, 2015).
From the application perspective, nature-inspired algorithms have been shown to solve
a wide range of optimization problems (Abdel-Basset and Shawky, 2019; Bekasş et al., 2018; Pham
and Castellani, 2015; Osaba et al., 2016), from continuous optimization (Pham and Castellani, 2014)
and engineering design optimization problems (Bekasş et al., 2018) to combinatorial optimization prob-
lems (Ouaarab et al., 2014; Osaba et al., 2016, 2017), multi-robots systems (Rango et al., 2018; Palmieri
et al., 2018) and many other applications (Zelinka, 2015; Gavvala et al., 2019; Mohamed et al., 2020;
Rajwar et al., 2023).
Despite the wide applications of nature-inspired algorithms, theoretical analysis, in contrast, lags behind. Though there are some rigorous analyses concerning the genetic algorithm (Greenhalgh and Marshal, 2000), PSO (Clerc and Kennedy, 2002) and the bat algorithm (Chen et al., 2018), many new algorithms have not been analyzed in detail. Ideally, a systematic analysis and review should be carried out in a similar way to convex analysis (Bertsekas et al., 2003) and convex optimization (Boyd and
Vandenberghe, 2004). In addition, since there are so many different algorithms, it is difficult to figure
out what search mechanisms can be effective in determining the performance of a specific algorithm.
Furthermore, some of these 540 algorithms can be very similar in terms of their search mechanisms or updating equations, though they may look very different on the surface. This can often cause confusion and frustration for readers and researchers trying to follow developments in this research community.
In fact, there are many open problems and unresolved issues concerning nature-inspired metaheuristic
algorithms (Yang et al., 2018; Yang and He, 2019; Yang, 2020b; Rajwar et al., 2023).
Therefore, the purpose of this paper is two-fold: outlining some of the challenging issues and open
problems, and then developing a generalized evolutionary metaheuristic (GEM) to unify many existing
algorithms. The rest of the paper is organized as follows. Section 2 first provides some insights into
nature-inspired computing and then outlines some of the open problems concerning nature-inspired
algorithms. Section 3 presents a unified framework of more than 20 different algorithms so as to view all the relevant algorithms through the same set of mathematical equations. Section 4 discusses 15 benchmark
functions and case studies, whereas Section 5 carries out some numerical experiments to test and validate
the generalized algorithm. Finally, Section 6 concludes with a brief discussion of future research topics.
[Figure 1: Qualitative features used in analyzing an algorithm: its components (exploration/exploitation), search mechanism, convergence, statistical metrics, and stability/robustness.]
When analyzing an algorithm, we typically look at its algorithmic components and their role in exploration and exploitation, and we also study its search mechanism so as to understand how local search and global search moves are carried out. Many studies in the literature have provided numerical convergence curves over iterations when solving different function optimization problems and sometimes real-world case studies, and such convergence curves are often presented with various statistical quantities according to a specific set of performance metrics, such as solution accuracy and success rate as well as the number of iterations. In addition, stability and robustness have also been studied for some algorithms (Clerc and Kennedy, 2002). Such analyses, though very important, are largely qualitative studies of algorithmic features, as summarized in Fig. 1.
The analysis of algorithms can be carried out more rigorously from a quantitative perspective, as
shown in Fig. 2. For a given algorithm, it is possible to analyze the iterative behaviour of the algorithm
using fixed-point theory. However, the assumptions required for such theories may not be realistic or
relevant to the actual algorithm under consideration. Thus, it is not always possible to carry out such
analysis. One good way is to use complexity theory to analyze an algorithm to see its time complexity. Interestingly, most nature-inspired algorithms have a complexity of O(nt), where n is typically the population size and t is the number of iterations. It is still a mystery how such low-complexity algorithms can solve the highly complex nonlinear optimization problems demonstrated in various applications.
From the dynamical system point of view, an algorithm is a system of updating equations, which
can be formulated as a discrete dynamical system. The eigenvalues of the main matrix of such a system
determine the main characteristics of the algorithm. It can be expected that these eigenvalues can
depend on the parameter values of the algorithm, and thus parameter settings can be important. In fact,
the analyses on PSO (Clerc and Kennedy, 2002) and the bat algorithm (Chen et al., 2018) show that
parameter values are important. If the parameter values are in the wrong ranges, the algorithm may
become unstable and become less effective. This also indicates the important of parameter tuning in
nature-inspired algorithms (Eiben and Smit, 2011; Joy et al., 2023). However, parameter tuning itself is
a challenging task because its aim is to find the optimal parameter setting for an optimization algorithm
for a given set of optimization problems.
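To make the eigenvalue argument concrete, the snippet below is a toy illustration (not the actual analysis of Clerc and Kennedy, 2002): it replaces the random coefficients of a simplified PSO-style update v ← p v + φ(g* − x), x ← x + v by deterministic values, writes the iteration as a 2 × 2 matrix in the error coordinate e = x − g*, and checks whether the spectral radius stays below 1.

```python
import numpy as np

def spectral_radius(p, phi):
    """Spectral radius of the deterministic update
    v' = p*v - phi*e,  e' = e + v'  (with e = x - g*)."""
    M = np.array([[p, -phi],
                  [p, 1.0 - phi]])
    return max(abs(np.linalg.eigvals(M)))

# Moderate inertia keeps the iteration stable,
# while too much inertia pushes an eigenvalue outside the unit circle.
print(spectral_radius(0.7, 1.0))  # below 1: perturbations decay
print(spectral_radius(1.2, 1.0))  # above 1: the update diverges
```

Sweeping p and φ over a grid reproduces the kind of stability boundary that parameter-tuning studies aim to locate.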
From the probability point of view, an algorithm can be considered as a set of interacting Markov
chains, thus it is possible to do some approximation analysis in terms of convergence using Markov
chain Monte Carlo (MCMC) theory (Chen et al., 2018). However, the conditions required for MCMC
can be stringent, and thus not all algorithms can be analyzed in this way. From the perspective of the
analysis of variance, it is possible to see how the variances may change with iteration to gain some useful
understanding (Zaharie, 2009).
[Figure 2: Quantitative perspectives for analyzing algorithms: fixed-point theory, dynamical systems, complexity theory, Markov chain (MCMC) theory, Bayesian framework, and swarm intelligence.]
An alternative approach is to use a Bayesian statistical framework to gain some insights into the algorithm under analysis. Loosely speaking, the initialization of the population in an algorithm with a given probability distribution forms the prior of the algorithm. As the algorithm evolves, the solutions also evolve, leading to a posterior distribution of solutions and parameters. Thus, the evolution of the algorithm can be understood from this perspective. However, since the Bayesian framework often requires extensive integral evaluations, it is not straightforward to gain rigorous results in general.
A more ambitious approach is to build a mathematical framework so as to analyze algorithms in a
unified way, though such a framework does not exist in the current literature. Ideally, a theoretical
framework should provide enough insights into the rise of swarm intelligence, which is still an open
problem (Yang, 2020b; Yang and He, 2019).
As we have seen, algorithms can potentially be analyzed from different perspectives and there are
many issues that need further research in the general area of swarm intelligence and nature-inspired
computation. We can highlight a few important open problems.
1. Theoretical framework. Though there are some good theoretical analyses of a few algorithms such
as genetic algorithm (Greenhalgh and Marshal, 2000), PSO (Clerc and Kennedy, 2002) and the bat
algorithm (Chen et al., 2018), there is no unified theoretical framework that can be used to analyze
all algorithms or at least a major subset of nature-inspired algorithms. There is a strong need to
build a mathematical framework so that the convergence and rate of convergence of any algorithm
can be analyzed with rigorous and quantitative results.
In addition, stability of an algorithm and its robustness should also be analyzed using the same
mathematical framework, based on theories such as dynamical systems and perturbation as well as
probability. The insights gained from such a theoretical framework should provide enough guidance
for tuning and setting parameter values for a given algorithm. However, how to construct this
theoretical framework is still an open problem. It may be that a multidisciplinary approach is needed so that algorithms can be examined from different perspectives.
2. Parameter tuning. The setting of parameters in an algorithm can influence the performance of
an algorithm, though the extent of such influence may largely depend on the algorithm itself and
potentially on the problem to be solved (Joy et al., 2023). There are different methods for parameter
tuning, but it is not clear which method(s) should be used for a given algorithm. In addition,
different tuning methods may produce different results for parameter settings for different problems,
which leads to the question of whether a truly optimal parameter setting exists. It seems that there are
different optimality conditions concerning parameter setting (Yang et al., 2013; Joy et al., 2023),
and parameter settings may be both algorithm-dependent and problem-dependent, depending on
the performance metric used for tuning. Many of these questions remain unresolved.
3. Benchmarking. All new algorithms should be tested and validated using a diverse set of benchmarks and case studies. In the current literature, one of the main issues is that most tests use
smooth functions as benchmarks, and it seems that these functions have nothing to do with real-
world applications. Thus, it is not clear how such tests can actually validate the algorithm to gain
any insight into the potential performance of the algorithm to solve much more complicated real-
world applications. There is a strong need to systematically investigate the role of benchmarking
and what types of benchmarks and case studies should be used.
4. Performance metrics. It can be expected that the performance of an algorithm depends on the
performance metrics used to measure the performance. In the current literature, performance mea-
sures are mostly accuracy compared to the true objective values, success rate of multiple functions,
number of iterations as computational efforts, computational time, and the combination of these
measures. Whether these measures are fair or sufficient is still an open question. In addition, these performance metrics tend to produce rankings that depend on both the algorithms and the benchmark functions used. Again, this may not be consistent with the no-free-lunch theorems (Wolpert and Macready, 1997). It is not clear whether other performance measures should be designed and used, and what theory the design of performance metrics should be based on. All these are still open questions.
5. Search mechanism. In many nature-inspired metaheuristic algorithms, certain forms of random-
ization and probability distributions are used to generate solutions with exploration abilities. One
of the main tasks is to balance exploration and exploitation or diversification and intensification
using different search moves or search mechanisms. However, how to balance these two components
is still an open problem. In addition, exploration is usually achieved via randomness, random walks and perturbations, whereas exploitation usually exploits derivative information and memory. It is
not clear what search moves can be used to achieve both exploration and exploitation effectively.
6. Scalability. Most studies of metaheuristic algorithms in the literature are concerned with problems
with a few parameters or a few dozen parameters. These problems, though very complicated and
useful, are small-scale problems. It is not clear if the algorithms used and the implementation
realized can be directly applied to large-scale problems in practice. Simply scaling up by using high-performance computing or cloud computing facilities may not be enough. How to scale up to solve
real-world, large-scale problems is still a challenging issue. In fact, more efficient algorithms are
always desirable in this context.
7. Rise of Swarm Intelligence. Various discussions about swarm intelligence have attracted attention
in the current literature. It is not clear what swarm intelligence exactly means and what conditions
are necessary to achieve such collective intelligence. There is a strong need to understand swarm
intelligence theoretically and practically so as to gain insights into the rise of swarm intelligence.
With newly gained insights, we may be able to design better and more effective algorithms.
In addition, from both the theoretical perspective and the practical point of view, the no-free-lunch theorems (Wolpert and Macready, 1997) have had a significant impact on the understanding of algorithm
behaviour. Studies also indicate that free lunches may exist for co-evolution (Wolpert and Macready,
2005), continuous problems (Auger and Teytaud, 2010) and multi-objective optimization (Corne and
Knowles, 2003; Zitzler et al., 2003) under certain conditions. The main question is how to use such
possibilities to build more effective algorithms.
x^{t+1} = x^t − [∇²f(x^t)]^{−1} ∇f(x^t),

where ∇f is the gradient vector of the objective function at x^t and ∇²f is the Hessian matrix. Loosely speaking, all iterative algorithms can schematically be written as

x^{t+1} = A(x^t, p_1, ..., p_K),

where A represents the algorithm with parameters p_1, ..., p_K. For a set of n solution vectors x_i, where i = 1, 2, ..., n, these vectors evolve with the iteration t (the pseudo-time) and are denoted by x_i^t. Among these n solutions, there is one solution g* that gives the best objective value (i.e., the highest for maximization or the lowest for minimization). For minimization,

g* = argmin{ f(x_i^t), i = 1, 2, ..., n },

where the argmin is to find the corresponding solution vector with the best objective value. Here, x̄_g is
the average of the top m best solutions among the n solutions (m ≤ n). That is,

x̄_g = (1/m) Σ_{j=1}^{m} x_j.   (5)
In case of m = n, this becomes the centre of the gravity of the whole population. Thus, we can refer to
this step as the centrality move.
The main updating equations for our proposed unified algorithm are guided randomization and position update. The guided randomization search step consists of

v_i^{t+1} = p v_i^t + q ϵ1 (g* − x_i^t) + r ϵ2 (x_i* − x_i^t),   (6)

where the three terms are, respectively, the inertia term, the motion towards the current best, and the motion towards the individual best. The position update is

x_{i,new}^{t+1} = a x_i^t + (1 − a) x̄_g + b (x_j^t − x_i^t) + c v_i^{t+1} + Θ h(x_i^t) ζ_i,   (7)

where the four groups of terms correspond to centrality, similarity convergence, kinetic motion and perturbation. Here, a, b, c, Θ, p, q, and r are parameters, and x_i* is the individual best solution for agent or particle i. The function h(x_i^t) depends on the current solution, and in most cases we can set h(x_i^t) = 1, a constant. In addition, ζ_i is a vector of random numbers, typically drawn from a standard normal distribution. That is,

ζ_i ∼ N(0, 1).   (8)
It is worth pointing out that the effect of each term can be different, and thus we can name each term based on its effect and meaning. The inertia term is p v_i^t, whereas q ϵ1 (g* − x_i^t) simulates the motion or move towards the current best solution. The term r ϵ2 (x_i* − x_i^t) represents the move towards the individual best solution. The centrality term has a weighting coefficient (1 − a), which tries to balance the importance of the current solution x_i^t and the importance of the centre of gravity of the swarm x̄_g. The main term b (x_j^t − x_i^t) suggests that similar solutions should converge with subtle or minor changes. The term c v_i^{t+1} is the kinetic motion of each solution vector. The perturbation term Θ h(x_i^t) ζ_i is controlled by the strength parameter Θ, where h(x_i^t) can be used to simulate certain specialized moves for each solution agent. If no specialization is needed, h(x_i^t) = 1 can be used.
The selection and acceptance criterion for minimization is

x_i^{t+1} = x_{i,new}^{t+1} if f(x_{i,new}^{t+1}) ≤ f(x_i^t), otherwise x_i^{t+1} = x_i^t.   (9)
These four main steps/equations are summarized in the pseudocode, shown in Algorithm 1, where the initialization of the population follows the standard Monte Carlo randomization

x_i = Lb + rand(1, D) (Ub − Lb),   (10)

where Lb and Ub are, respectively, the lower bound and upper bound of the feasible decision variables. In addition, rand(1, D) is a vector of D random numbers drawn from a uniform distribution.
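These updating rules can be turned into working code almost directly. The following is a minimal Python sketch of the GEM loop built from Eqs. (6), (7), (9) and (10); it is an illustrative implementation rather than the reference code of the paper, and it assumes h(x) = 1, uniform ϵ1, ϵ2 ∈ [0, 1], a randomly chosen partner j for the similarity term, and clipping to the bounds (an added assumption), with default parameter values taken from Section 4.3.

```python
import numpy as np

def gem_minimize(f, Lb, Ub, n=10, t_max=1000,
                 a=1.0, b=0.7, c=1.0, p=0.7, q=1.0, r=1.0, theta=0.97,
                 seed=0):
    """Minimal sketch of the GEM loop: Eqs. (6), (7), (9) and (10)."""
    rng = np.random.default_rng(seed)
    D = len(Lb)
    # Eq. (10): Monte Carlo initialization within the bounds.
    x = Lb + rng.random((n, D)) * (Ub - Lb)
    v = np.zeros((n, D))
    fx = np.array([f(xi) for xi in x])
    x_best = x.copy()            # individual best x_i*
    f_best = fx.copy()
    g = x[np.argmin(fx)].copy()  # current best g*

    for t in range(1, t_max + 1):
        Theta = theta ** t       # decaying perturbation strength
        for i in range(n):
            eps1, eps2 = rng.random(D), rng.random(D)
            # Eq. (6): inertia + move to current best + move to individual best.
            v[i] = p * v[i] + q * eps1 * (g - x[i]) + r * eps2 * (x_best[i] - x[i])
            j = rng.integers(n)      # random partner for the similarity term
            xbar = x.mean(axis=0)    # Eq. (5) with m = n (centre of gravity)
            # Eq. (7): centrality + similarity + kinetic motion + perturbation (h = 1).
            x_new = (a * x[i] + (1 - a) * xbar + b * (x[j] - x[i])
                     + c * v[i] + Theta * rng.standard_normal(D))
            x_new = np.clip(x_new, Lb, Ub)
            f_new = f(x_new)
            # Eq. (9): greedy selection.
            if f_new <= fx[i]:
                x[i], fx[i] = x_new, f_new
            if fx[i] <= f_best[i]:
                x_best[i], f_best[i] = x[i].copy(), fx[i]
        g = x_best[np.argmin(f_best)].copy()
    return g, f_best.min()

# Usage: minimize the 5-D sphere function.
sphere = lambda x: float(np.sum(x ** 2))
g, fmin = gem_minimize(sphere, np.full(5, -10.0), np.full(5, 10.0))
```

On the sphere function this sketch steadily drives the best objective value down; changing a, b, c, p, q, r and Θ recovers the special cases discussed in this section.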
Special cases of this framework correspond to more than 20 different algorithms. It is worth pointing out that in many cases there is more than one way to represent the same algorithm by setting different parameter values. The following representations are among the possible ways of expressing these algorithms:
1. Differential evolution (DE) (Storn and Price, 1997; Price et al., 2005): a = 1, b = F , c = 0 and
Θ = 0.
2. Particle swarm optimization (PSO) (Kennedy and Eberhart, 1995; Kennedy et al., 2001): a = 1,
b = 0, p = 1, c = 1, and Θ = 0. In addition, q = α and r = β.
3. Firefly algorithm (FA) (Yang, 2009, 2013): a = 1, b = β exp(−γ r_ij²), c = 0 and Θ = 0.97^t.
4. Simulated annealing (SA) (Kirkpatrik et al., 1983): a = 1, b = c = 0 and Θ = 1.
5. Artificial bee algorithm (ABC) (Karaboga and Basturk, 2008): a = 1, b = Φ ∈ [−1, 1], c = 0 and
Θ = 0.
6. Artificial cooperative search (ACS) (Civioglu, 2013): a = 1, b = R, c = 0, Θ = 0, and its two predator keys are drawn from its α and β.
7. Charged system search (CSS) (Kaveh and Talatahari, 2010): a = 1, b = A(R) where R is the
normalized distance with the maximum at R = 1. In addition, c = 0 and Θ = 0.
8. Cuckoo search (CS) (Yang and Deb, 2010): a = 1, c = 0, and b = α s H(p_a − ϵ), where α is a scaling parameter, H is a Heaviside function with a switch probability p_a and a uniformly distributed random number ϵ. The step size s is drawn from a Lévy distribution. The other branch with the switch probability p_a is a = 1, b = c = 0, Θ = 1, but ζ is drawn from the Lévy distribution

L(s, β) ∼ [β Γ(β) sin(πβ/2) / π] · 1/s^{1+β},   β = 1.5,

where Γ is the standard Gamma function.
9. Gravitational search algorithm (GSA) (Rashedi et al., 2009): a = 1, p = rand ∈ [0, 1], q = r = 0, c = 1 and Θ = 0. In addition, b = rand × G(t), where G(t) = G0 exp(−αt/T).
10. Gradient evolution algorithm (GEA) (Kuo and Zulvia, 2015): a = 1, c = 0 and Θ = 0, but b = r_g Δx_ij^t / [2(x_ij^w − x_ij^t + x_ij^B)].
13. Harmony search (HS) (Geem et al., 2001): a = 0, b = c = 0, Θ = 1, h(x) = xti for pitch adjustment,
but for harmony selection b = 1 and Θ = 0 can be used.
14. Ant lion optimizer (ALO) (Mirjalili, 2015): a = 1, b = 0, c = 1, Θ = d_i − c_i, where d_i is the scaled random walk length and c_i is its corresponding minimum. In addition, p = 0, q = 0, r = 1, with the variation that g* is the average of two selected elite ant-lion solutions.
15. Whale optimization algorithm (WOA) (Mirjalili and Lewis, 2016): a = 1, c = 0, p = q = r = 0, b = −A, where A = 2dr − d with r drawn randomly from [0, 1] and d decreasing linearly from 2 to 0. In their spiral branch, Θ = 1 with an equivalent b = e^R cos(2πR), where R is uniformly distributed in [−1, 1].
16. Lion optimization algorithm (LOA) (Yazdani and Jolai, 2016): a = 1, b = 0, c = 1, Θ = 0, p = 0, q = −P I, and r = 0. In the other moving branch of the LOA, Θ is proportional to D R, where D is their scaled distance and R can be drawn either from [0, 1] or [−1, 1].
17. Mayfly optimization algorithm (MOA) (Zervoudakis and Tsafarakis, 2020): a = 1, b = 0, c = 1, Θ = 0, p = g, q = a1 exp(−β r_p²), and r = a2 exp(−β r_g²).
18. Big bang-big crunch (BBBC) (Erol and Eksin, 2006): the modification of solutions is mainly around the centre of gravity x̄_g of the population with a = 0, b = c = 0, and Θ = Ub/t_max.
19. Social spider algorithm (SSA) (James and Li, 2015): a = 1, b = w e^{d_ij²}, where w is a weight coefficient and d_ij is the distance between two spiders. In addition, c = 0, Θ = 1, and ζ = rand − 1/2.
20. Moth search algorithm (MSA) (Wang, 2018): a = λ, b = 0, c = 1, p = 0, q = ϕ, r = 0, and Θ = 0 in one equation. The other equation corresponds to a = 1, b = c = 0, Θ = S_max/t², and ζ = L(s) drawn from a Lévy distribution.
21. Multi-verse optimizer (MVO) (Mirjalili et al., 2016): a = 1, b = c = 0, but Θ = 1 − (t/t_max)^{1/p}, where t_max is the maximum number of iterations. Its randomized perturbation is given by ζ = ±[Lb + rand (Ub − Lb)].
22. Water cycle algorithm (WCA) (Eskandar et al., 2012): a = 1, c = Θ = 0, b = C rand, where C ∈ [1, 2] and rand is a uniformly distributed random number in [0, 1], for both water drops in rivers and streams in the WCA. For the additional new-stream search step, a = 1, b = c = 0, but Θ = √µ as its standard deviation, and ζ is drawn from a Gaussian distribution with unity mean.
As pointed out earlier, there is more than one way of representing an algorithm under consideration using the unified framework, and the minor details and unimportant components of some algorithms may be ignored. In essence, the unified framework intends to extract the key components of multiple algorithms so that we can figure out which main search mechanisms or moves are important in metaheuristic algorithms. Therefore, it can be expected that many other algorithms may be considered as special cases of the GEM framework if the right combinations of parameter values are used.
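As a worked example of such a reduction, take the DE setting from case 1 above (a = 1, b = F, c = 0, Θ = 0). Substituting into Eq. (7), the centrality term reduces to x_i^t while the kinetic and perturbation terms vanish:

```latex
\mathbf{x}_{i,\mathrm{new}}^{t+1} = \mathbf{x}_i^t + F\,\bigl(\mathbf{x}_j^t - \mathbf{x}_i^t\bigr),
```

which is a differential-evolution-style mutation with scale factor F, while the greedy acceptance (9) is exactly DE's selection step. Strictly speaking, classical DE/rand/1 perturbs a base vector with the difference of two other randomly chosen vectors and adds a crossover step, so this reduction captures the main move rather than every detail.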
with
xi ∈ [−32.768, 32.768], (14)
whose global minimum fmin = 0 occurs at x∗ = (0, 0, ..., 0).
4. Dixon-Price function

f4(x) = (x1 − 1)² + Σ_{i=2}^{D} i (2x_i² − x_{i−1})²,   x_i ∈ [−10, 10],   (15)

whose global minimum fmin = 0 occurs at x_i = 2^{−(2^i − 2)/2^i} for i = 1, 2, ..., D.
5. Schwefel function (Schwefel, 1995)
f5 (x) = −x1 x2 (72 − 2x1 − 2x2 ), xi ∈ [0, 500], (16)
its global minimum fmin = −3456 occurs at x∗ = (12, 12).
6. Booth function
f6 (x) = (x1 + 2x2 − 7)2 + (2x1 + x2 − 5)2 , xi ∈ [−10, 10], (17)
whose global minimum fmin = 0 occurs at x∗ = (1, 3).
7. Holder table function

f7(x) = −| sin(x1) cos(x2) exp(|1 − √(x1² + x2²)/π|) |,   x_i ∈ [−10, 10],   (18)
8. Beale function
f8 (x) = (1.5 − x1 + x1 x2 )2 + (2.25 − x1 + x1 x22 )2 + (2.625 − x1 + x1 x32 )2 , (19)
where
xi ∈ [−4.5, +4.5]. (20)
The global minimum fmin = 0 occurs at x∗ = (3, 0.5).
9. Trid function

f9(x) = Σ_{i=1}^{D} (x_i − 1)² − Σ_{i=2}^{D} x_i x_{i−1},   x_i ∈ [−D², D²],   (21)

whose global minimum fmin = −D(D + 4)(D − 1)/6 occurs at x_i = i(D + 1 − i) for i = 1, 2, ..., D.
10. Rastrigin function

f10(x) = 10D + Σ_{i=1}^{D} [x_i² − 10 cos(2πx_i)],   x_i ∈ [−5.12, 5.12],   (22)
12. For the three-bar truss system design with two cross-sectional areas x1 = A1 and x2 = A2, the objective is to minimize

min f(x) = 100 (2√2 x1 + x2),   (30)

subject to

g1(x) = (√2 x1 + x2) P / (√2 x1² + 2 x1 x2) − σ ≤ 0,   (31)

g2(x) = x2 P / (√2 x1² + 2 x1 x2) − σ ≤ 0,   (32)

g3(x) = P / (x1 + √2 x2) − σ ≤ 0,   (33)

where σ = 2 kN/cm² is the stress limit and P = 2 kN is the load. In addition, x1, x2 ∈ [0, 1]. The best solution so far in the literature is (Bekasş et al., 2018)

fmin = 263.8958,   x* = (0.78853, 0.40866).   (34)
Table 1: Measured data for a vibration problem.

Time ti   1        2        3        4        5        6        7        8        9        10
y(ti)     1.0706   1.3372   0.8277   0.9507   1.0848   0.9814   0.9769   1.0169   1.0012   0.9933
13. Beam design. For the beam design to support a vertical load at the free end of the beam, the objective is to minimize

min f(x) = 0.0624 (x1 + x2 + x3 + x4 + x5),   (35)

subject to

g(x) = 61/x1³ + 37/x2³ + 19/x3³ + 7/x4³ + 1/x5³ − 1 ≤ 0,   (36)

with the simple bounds 0.01 ≤ x_i ≤ 100. The best solution found so far in the literature is (Bekasş et al., 2018)

fmin = 1.33997,   x* = (6.0202, 5.3082, 4.5042, 3.4856, 2.1557).   (37)
14. Pressure vessel design. The main objective is to minimize

min f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3² + 3.1661 x1² x4 + 19.84 x1² x3,   (38)

subject to

g1(x) = −x1 + 0.0193 x3 ≤ 0,   (39)

g2(x) = −x2 + 0.00954 x3 ≤ 0,   (40)

g3(x) = −π x3² x4 − (4π/3) x3³ + 1296000 ≤ 0,   (41)

g4(x) = x4 − 240 ≤ 0.   (42)

The first two design variables must be integer multiples of the basic thickness h = 0.0625 inches (Cagnina et al., 2008). Therefore, the lower and upper bounds are

Lb = [h, h, 10, 10],   (43)

and

Ub = [99h, 99h, 200, 200].   (44)

Its true global optimal solution fmin = 6059.714335 occurs at

x1 = 0.8125,   x2 = 0.4375,   x3 = 42.098446,   x4 = 176.636596.   (45)
15. Parameter estimation of an ODE. For a vibration problem with a unit step input, we have its mathematical equation as an ordinary differential equation (Yang, 2023)

ÿ/ω² + 2ζ ẏ/ω + y = u(t),   (46)

where ω and ζ are the two parameters to be estimated. Here, the unit step function is given by

u(t) = 0 if t < 0,   u(t) = 1 if t ≥ 0.   (47)
The initial conditions are y(0) = y′(0) = 0. For a given system, we have observed its actual response. The relevant measurements are given in Table 1.

In order to estimate the two unknown parameter values ω and ζ, we can define the objective function as

f(x) = Σ_{i=0}^{10} [y(t_i) − y_s(t_i)]²,   (48)

where y(t_i) for i = 0, 1, ..., 10 are the observed values and y_s(t_i) are the values obtained by solving the differential equation (46), given a guessed set of values ζ and ω. Here, we have used x = (ζ, ω). The true values are ζ = 1/4 and ω = 2. The aim of this benchmark is to solve the differential equation iteratively so as to find the best parameter values that minimize the objective or best-fit errors.
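The objective (48) can also be evaluated without a numerical ODE solver, because Eq. (46) with a unit step input and zero initial conditions has the standard underdamped closed-form response y(t) = 1 − e^{−ζωt}[cos(ω_d t) + (ζω/ω_d) sin(ω_d t)] with ω_d = ω√(1 − ζ²), valid for 0 < ζ < 1. The snippet below is an illustrative sketch that uses this closed form in place of an iterative solver and checks that the objective is essentially zero at the true parameters (ζ, ω) = (1/4, 2); the i = 0 term of (48) vanishes since y(0) = y_s(0) = 0.

```python
import math

# Measured response from Table 1 at t = 1, 2, ..., 10.
T = list(range(1, 11))
Y = [1.0706, 1.3372, 0.8277, 0.9507, 1.0848,
     0.9814, 0.9769, 1.0169, 1.0012, 0.9933]

def step_response(t, zeta, omega):
    """Closed-form unit-step response of Eq. (46) for 0 < zeta < 1."""
    wd = omega * math.sqrt(1.0 - zeta ** 2)   # damped natural frequency
    decay = math.exp(-zeta * omega * t)
    return 1.0 - decay * (math.cos(wd * t)
                          + (zeta * omega / wd) * math.sin(wd * t))

def objective(zeta, omega):
    """Sum-of-squares misfit of Eq. (48); the t = 0 term is zero."""
    return sum((y - step_response(t, zeta, omega)) ** 2 for t, y in zip(T, Y))

print(objective(0.25, 2.0))   # near zero at the true parameters
print(objective(0.30, 2.0))   # clearly worse away from them
```

Any of the metaheuristics discussed above can then be used to minimize objective over (ζ, ω).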
4.3 Parameter Settings
In our simulations, the parameter settings are: population size n = 10 with parameter values of a = 1, b = 0.7, c = 1, p = 0.7, q = r = 1 and Θ = θ^t with θ = 0.97. The maximum number of iterations is set to tmax = 1000.
For the benchmark functions, D = 5 is used for f1 , f2 , f3 , f4 , and f10 . For f9 , D = 4 is used so as
to give fmin as a nice integer. For all other problems, their dimensionality has been given earlier in this
section.
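Before running the optimizer, the stated analytical minima of the benchmark functions can be sanity-checked directly from their formulas. The snippet below is a small illustrative check (not part of the original experiments) for the Dixon-Price (f4, D = 5), Trid (f9, D = 4) and Rastrigin (f10) functions at their quoted minimizers.

```python
import math

def dixon_price(x):
    """Eq. (15): (x1 - 1)^2 + sum_{i=2}^{D} i (2 x_i^2 - x_{i-1})^2."""
    return (x[0] - 1.0) ** 2 + sum(
        (i + 1) * (2.0 * x[i] ** 2 - x[i - 1]) ** 2 for i in range(1, len(x)))

def trid(x):
    """Eq. (21): sum (x_i - 1)^2 - sum x_i x_{i-1}."""
    return (sum((xi - 1.0) ** 2 for xi in x)
            - sum(x[i] * x[i - 1] for i in range(1, len(x))))

def rastrigin(x):
    """Eq. (22): 10 D + sum [x_i^2 - 10 cos(2 pi x_i)]."""
    return 10.0 * len(x) + sum(xi ** 2 - 10.0 * math.cos(2.0 * math.pi * xi)
                               for xi in x)

D = 5
x_dp = [2.0 ** (-(2.0 ** i - 2.0) / 2.0 ** i) for i in range(1, D + 1)]
print(dixon_price(x_dp))                         # ~0 at x_i = 2^{-(2^i - 2)/2^i}
print(trid([i * (5 - i) for i in range(1, 5)]))  # -16 = -D(D+4)(D-1)/6 for D = 4
print(rastrigin([0.0] * D))                      # 0 at the origin
```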
The pressure vessel design problem is a mixed-integer programming problem because the first two variables
x1 and x2 can take only integer multiples of the basic thickness h = 0.0625 due to manufacturing
constraints, whereas the other two variables x3 and x4 can take continuous real values. For each case
study, the algorithm has been run 10 times. For example, the 10 runs of the pressure vessel design are
summarized in Table 3. As we can see, Run 2 finds a solution fmin = 5850.3851 that is much better than
the best solution reported so far in the literature (6059.7143). All the constraints are satisfied, which
means that this is a valid new solution.
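As a sanity check on these numbers, the sketch below evaluates the widely used pressure vessel formulation from the literature (see, e.g., Cagnina et al., 2008) at the best-known solution, and shows how the thickness variables can be snapped to the h = 0.0625 grid. The formulation is the standard one, not reproduced from this paper's equations, and the helper names are illustrative.

```python
import math

H = 0.0625  # basic thickness increment for x1 and x2 (manufacturing constraint)

def snap(x):
    """Round a thickness to the nearest integer multiple of H."""
    return H * round(x / H)

def cost(x1, x2, x3, x4):
    """Standard pressure vessel cost function from the literature."""
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def constraints(x1, x2, x3, x4):
    """Constraint values g_i, each required to satisfy g_i <= 0."""
    g1 = -x1 + 0.0193 * x3
    g2 = -x2 + 0.00954 * x3
    g3 = -math.pi * x3**2 * x4 - (4.0 / 3.0) * math.pi * x3**3 + 1296000.0
    g4 = x4 - 240.0
    return g1, g2, g3, g4

# Best-known solution reported in the literature (fmin ~ 6059.7143):
best = (0.8125, 0.4375, 42.098446, 176.636596)
```

At `best`, the cost evaluates to about 6059.71, and g1 and g3 are essentially active (zero to within the rounding of the printed digits), which is consistent with this point being a constrained optimum.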
Table 3: Pressure vessel design benchmarks.
x1 x2 x3 x4 fmin
Run 1 1.2500 0.6250 64.7668 11.9886 7332.8419
Run 2 0.7500 0.3750 38.8601 221.3657 5850.3851
Run 3 1.1875 0.6250 61.5285 26.9310 7273.5127
Run 4 1.0000 0.5000 51.8135 84.5785 6410.0869
Run 5 0.8750 0.4375 45.3367 140.2544 6090.5325
Run 6 0.8750 0.4375 45.3368 140.2538 6090.5262
Run 7 0.7500 0.3750 38.8601 221.3655 5850.3831
Run 8 1.0000 0.5625 51.8135 84.5785 6708.4337
Run 9 0.8125 0.4375 42.0984 176.6366 6059.7143
Run 10 0.7500 0.4375 38.8601 221.3655 6018.2036
Following the same procedure, each case study has been simulated 20 times. The results for the
five design case studies are summarized in Table 4. As we can see, all the best-known optimal solutions
have been found by the algorithm with a population size of n = 10.
The above simulations and results have shown that the unified GEM algorithm performs well for
all the 15 test benchmarks, and in some cases it can achieve even better results. This indicates that
this unified algorithm is effective and can potentially be applied to solve a wide range of optimization
problems. This will form part of our further research.
References
Abdel-Basset, M. and Shawky, L. A. (2019). Flower pollination algorithm: a comprehensive review.
Artificial Intelligence Review, 52(4):2533–2557.
Auger, A. and Teytaud, O. (2010). Continuous lunches are free plus the design of optimal optimization
algorithms. Algorithmica, 57(2):121–146.
Beer, D. (2016). The social power of algorithms. Information, Communication & Society, 20(1):1–13.
Bekdaş, G., Nigdeli, M., and Yang, X.-S. (2018). A novel bat algorithm based optimum tuning of mass
dampers for improving the seismic safety of structures. Engineering Structures, 159(1):89–98.
Bertsekas, D. P., Nedic, A., and Ozdaglar, A. (2003). Convex Analysis and Optimization. Athena
Scientific, Belmont, MA, second edition.
Boyd, S. P. and Vandenberghe, L. (2004). Convex Optimization. Cambridge University Press, Cambridge
UK.
Cagnina, L. C., Esquivel, S. C., and Coello Coello, A. C. (2008). Solving engineering optimization
problems with the simple constrained particle swarm optimizer. Informatica, 32(2):319–326.
Chabert, J. L. (1999). A History of Algorithms: From the Pebble to the Microchip. Springer-Verlag,
Heidelberg.
Chen, S., Peng, G.-H., He, X.-S., and Yang, X.-S. (2018). Global convergence analysis of the bat algo-
rithm using a Markovian framework and dynamic system theory. Expert Systems with Applications,
114(1):173–182.
Civicioglu, P. (2013). Artificial cooperative search algorithm for numerical optimization problems. Infor-
mation Sciences, 229(1):58–76.
Clerc, M. and Kennedy, J. (2002). The particle swarm: explosion, stability, and convergence in a
multidimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1):58–73.
Coello, C. A. (2000). Use of self-adaptive penalty approach for engineering optimization problems.
Computers in Industry, 41(2):113–127.
Corne, D. and Knowles, J. (2003). Some multiobjective optimizers are better than others. In Proceedings
of the 2003 Congress on Evolutionary Computation (CEC 2003), volume 4, pages 2506–2512. IEEE.
Eiben, A. E. and Smit, S. K. (2011). Parameter tuning for configuring and analyzing evolutionary
algorithms. Swarm and Evolutionary Computation, 1(1):19–31.
Erol, O. and Eksin, I. (2006). A new optimization method: big bang-big crunch. Advances in Engineering
Software, 37(2):106–111.
Eskandar, H., Sadollah, A., Bahreininejad, A., and Hamdi, M. (2012). Water cycle algorithm–a novel
metaheuristic optimization method for solving constrained engineering optimization problems. Com-
puters & Structures, 110-111(1):151–166.
Gavvala, S. K., Jatoth, C., Gangadharan, G. R., and Buyya, R. (2019). QoS-aware cloud service compo-
sition using eagle strategy. Future Generation Computer Systems, 90:273–290.
Geem, Z. W., Kim, J. H., and Loganathan, G. V. (2001). A new heuristic optimization algorithm:
harmony search. Simulation, 76(2):60–68.
Glover, F. (1986). Future paths for integer programming and links to artificial intelligence. Computers
& Operations Research, 13(5):533–549.
Glover, F. and Laguna, M. (1997). Tabu Search. Kluwer Academic Publishers, Boston, MA, USA.
Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Addison-
Wesley, Reading, MA, USA.
Greenhalgh, D. and Marshall, S. (2000). Convergence criteria for genetic algorithms. SIAM Journal on
Computing, 30(1):269–282.
Hashim, F. A., Houssein, E. H., Mabrouk, M. S., Al-Atabany, W., and Mirjalili, S. (2019). Henry
gas solubility optimization: a novel physics-based algorithm. Future Generation Computer Systems,
101:646–667.
Heidari, A. A., Mirjalili, S., Faris, H., Aljarah, I., Mafarja, M., and Chen, H. L. (2019). Harris hawks
optimization: Algorithm and applications. Future Generation Computer Systems, 97:849–872.
Holland, J. (1975). Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann
Arbor, MI, USA.
James, J. and Li, V. (2015). A social spider algorithm for global optimization. Applied Soft Computing,
30:614–627.
Jamil, M. and Yang, X.-S. (2013). A literature survey of benchmark functions for global optimization
problems. Int. J. of Mathematical Modelling and Numerical Optimisation, 4(2):150–194.
Joy, G., Huyck, C., and Yang, X.-S. (2023). Review of Parameter Tuning Methods for Nature-Inspired
Algorithms, pages 33–47. Springer Nature Singapore, Singapore.
Karaboga, D. and Basturk, B. (2008). On the performance of artificial bee colony (ABC) algorithm.
Applied Soft Computing, 8(1):687–697.
Kaveh, A. and Talatahari, S. (2010). A novel heuristic optimization method: charged system search.
Acta Mechanica, 213(3-4):267–289.
Kennedy, J. and Eberhart, R. (1995). Particle swarm optimization. In Proceedings of the IEEE Interna-
tional Conference on Neural Networks, pages 1942–1948, Piscataway, NJ, USA. IEEE.
Kennedy, J., Eberhart, R. C., and Shi, Y. (2001). Swarm Intelligence. Academic Press, London, UK.
Kirkpatrick, S., Gelatt, C. D., and Vecchi, M. P. (1983). Optimization by simulated annealing. Science,
220(4598):671–680.
Kuo, R. and Zulvia, F. (2015). The gradient evolution algorithm: a new metaheuristic. Information
Sciences, 316(2):246–265.
Mirjalili, S. (2015). The ant lion optimizer. Advances in Engineering Software, 83(1):80–98.
Mirjalili, S. and Lewis, A. (2016). The whale optimization algorithm. Advances in Engineering Software,
95(1):51–67.
Mirjalili, S., Mirjalili, S., and Hatamlou, A. (2016). Multi-verse optimizer: a nature-inspired algorithm
for global optimization. Neural Computing and Applications, 27(2):495–513.
Mohamed, A., Hadi, A., and Mohamed, A. (2020). Gaining-sharing knowledge based algorithm for
solving optimization problems: a novel nature-inspired algorithm. International Journal of Machine
Learning and Cybernetics, 11(7):1501–1529.
Osaba, E., Yang, X.-S., Diaz, F., Lopez-Garcia, P., and Carballedo, R. (2016). An improved discrete bat
algorithm for symmetric and asymmetric travelling salesman problems. Engineering Applications of
Artificial Intelligence, 48(1):59–71.
Osaba, E., Yang, X.-S., Diaz, F., Onieva, E., Masegosa, A., and Perallos, A. (2017). A discrete firefly
algorithm to solve a rich vehicle routing problem modelling a newspaper distribution system with
recycling policy. Soft Computing, 21(18):5295–5308.
Ouaarab, A., Ahiod, B., and Yang, X.-S. (2014). Discrete cuckoo search algorithm for the travelling
salesman problem. Neural Computing and Applications, 24(7-8):1659–1669.
Palmieri, N., Yang, X.-S., Rango, F. D., and Santamaria, A. F. (2018). Self-adaptive decision-making
mechanisms to balance the execution of multiple tasks for a multi-robots team. Neurocomputing,
306(1):17–36.
Pham, D., Ghanbarzadeh, A., Koc, E., Otri, S., Rahim, S., and Zaidi, M. (2005). The bees algorithm,
technical note. Technical report, Cardiff University, Manufacturing Engineering Center, Cardiff.
Pham, D. T. and Castellani, M. (2009). The bees algorithm: Modelling foraging behaviour to solve
continuous optimization problems. Proceedings of the Institution of Mechanical Engineers, Part C:
Journal of Mechanical Engineering Science, 223(12):2910–2938.
Pham, D. T. and Castellani, M. (2015). A comparative study of the bees algorithm as a tool for function
optimisation. Cogent Engineering, 2(1):1091540.
Price, K., Storn, R., and Lampinen, J. (2005). Differential Evolution: A Practical Approach to Global
Optimization. Springer, Berlin, Germany.
Rajwar, K., Deep, K., and Das, S. (2023). An exhaustive review of the metaheuristic algorithms for
search and optimization: taxonomy, applications, and open challenges. Artificial Intelligence Review,
56:13187–13257.
Rango, F. D., Palmieri, N., Yang, X.-S., and Marano, S. (2018). Swarm robotics in wireless distributed
protocol design for coordinating robots involved in cooperative tasks. Soft Computing, 22(13):4251–
4266.
Rashedi, E., Nezamabadi-Pour, H., and Saryazdi, S. (2009). GSA: a gravitational search algorithm.
Information Sciences, 179(13):2232–2248.
Rosenbrock, H. H. (1960). An automatic method for finding the greatest or least value of a function.
The Computer Journal, 3(3):175–184.
Schrijver, A. (2005). On the history of combinatorial optimization (till 1960). In Aardal, K., Nemhauser,
G. L., and Weismantel, R., editors, Handbook of Discrete Optimization, pages 1–68. Elsevier, Amster-
dam.
Schwefel, H. (1995). Evolution and Optimum Seeking. John Wiley & Sons, New York.
Ser, J. D., Osaba, E., Molina, D., Yang, X.-S., Salcedo-Sanz, S., Camacho, D., Das, S., Suganthan, P. N.,
Coello, C. A. C., and Herrera, F. (2019). Bio-inspired computation: Where we stand and what’s next.
Swarm Evol. Comput., 48:220–250.
Storn, R. and Price, K. (1997). Differential evolution: a simple and efficient heuristic for global opti-
mization. J Global Optimization, 11(4):341–359.
Suganthan, P., Hansen, N., Liang, J., Deb, K., Chen, Y., Auger, A., and Tiwari, S. (2005). Problem
definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization.
Technical report, Nanyang Technological University (NTU), Singapore.
Wang, G. (2018). Moth search algorithm: a bio-inspired metaheuristic algorithm for global optimization
problems. Memetic Computing, 10(2):151–164.
Wolpert, D. H. and Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Transac-
tions on Evolutionary Computation, 1(1):67–82.
Yang, X.-S. (2009). Firefly algorithms for multimodal optimization. In Watanabe, O. and Zeugmann,
T., editors, Proceedings of Fifth Symposium on Stochastic Algorithms, Foundations and Applications,
volume 5792, pages 169–178. Lecture Notes in Computer Science, Springer.
Yang, X.-S. (2013). Cuckoo Search and Firefly Algorithm: Theory and Applications, volume 516 of
Studies in Computational Intelligence. Springer, Heidelberg, Germany.
Yang, X.-S. (2020a). Nature-Inspired Optimization Algorithms. Academic Press, London, second edition.
Yang, X.-S. (2020b). Nature-inspired optimization algorithms: Challenges and open problems. Journal
of Computational Science, 46:101104.
Yang, X.-S. (2023). Ten new benchmarks for optimization. In Yang, X.-S., editor, Benchmarks and
Hybrid Algorithms in Optimization and Applications, pages 19–32. Springer.
Yang, X.-S. and Deb, S. (2010). Engineering optimisation by cuckoo search. International Journal of
Mathematical Modelling and Numerical Optimisation, 1(4):330–343.
Yang, X.-S., Deb, S., Loomes, M., and Karamanoglu, M. (2013). A framework for self-tuning optimization
algorithm. Neural Computing and Applications, 23(7-8):2051–2057.
Yang, X.-S., Deb, S., Zhao, Y.-X., Fong, S., and He, X. (2018). Swarm intelligence: past, present and
future. Soft Computing, 22(18):5923–5933.
Yang, X.-S. and He, X.-S. (2019). Mathematical Foundations of Nature-Inspired Algorithms. Springer
Briefs in Optimization. Springer, Cham, Switzerland.
Yazdani, M. and Jolai, F. (2016). Lion optimization algorithm (loa): A nature-inspired metaheuristic
algorithm. Journal of Computational Design and Engineering, 3(1):24–36.
Zaharie, D. (2009). Influence of crossover on the behavior of the differential evolution algorithm. Applied
Soft Computing, 9(3):1126–1138.
Zdenek, D. (2009). Optimal Quadratic Programming Algorithms: With Applications to Variational In-
equalities. Springer, Heidelberg.
Zelinka, I. (2015). A survey on evolutionary algorithms dynamics and its complexity: mutual relations,
past, present and future. Swarm and Evolutionary Computation, 25(1):2–14.
Zervoudakis, K. and Tsafarakis, S. (2020). A mayfly optimization algorithm. Computers & Industrial
Engineering, 145:106559.
Zitzler, E., Thiele, L., Laumanns, M., Fonseca, C. M., and Fonseca, V. G. D. (2003). Performance
assessment of multiobjective optimizers: An analysis and review. IEEE Transactions on Evolutionary
Computation, 7(2):117–132.