Harris Hawks Optimization: Algorithm and Applications
Ali Asghar Heidari^a, Seyedali Mirjalili^b, Hossam Faris^c, Ibrahim Aljarah^c, Majdi Mafarja^d, Huiling Chen^e,*

^a School of Surveying and Geospatial Engineering, University of Tehran, Tehran, Iran ([email protected])
^b School of Information and Communication Technology, Griffith University, Nathan, Brisbane, QLD 4111, Australia (seyedali.mirjalili@griffithuni.edu.au)
^c King Abdullah II School for Information Technology, The University of Jordan, Amman, Jordan ({i.aljarah,hossam.faris}@ju.edu.jo)
^d Department of Computer Science, Birzeit University, PO Box 14, West Bank, Palestine ([email protected])
^e Department of Computer Science, Wenzhou University, Wenzhou 325035, China ([email protected])
Abstract
In this paper, a novel population-based, nature-inspired optimization paradigm called Harris Hawks Optimizer (HHO) is proposed. The main inspiration of HHO is the cooperative behavior and chasing style of Harris' hawks in nature, known as the surprise pounce. In this intelligent strategy, several hawks cooperatively pounce on a prey from different directions in an attempt to surprise it. Harris' hawks can reveal a variety of chasing patterns depending on the dynamic nature of the scenario and the escaping patterns of the prey. This work mathematically mimics such dynamic patterns and behaviors to develop an optimization algorithm. The effectiveness of the proposed HHO optimizer is verified, through a comparison with other nature-inspired techniques, on 29 benchmark problems and several real-world engineering problems. The statistical results and comparisons show that the HHO algorithm provides very promising and occasionally competitive results compared to well-established metaheuristic techniques.
Keywords: Nature-inspired computing, Harris hawks optimization algorithm, Swarm intelligence, Optimization, Metaheuristic
1 Introduction
Many real-world problems in machine learning and artificial intelligence generally have a continuous, discrete, constrained, or unconstrained nature [1, 2]. Due to these characteristics, it is hard to tackle some classes of problems using conventional mathematical programming approaches such as conjugate gradient, sequential quadratic programming, fast steepest descent, and quasi-Newton methods [3, 4]. Several studies have verified that these methods are not always efficient enough in dealing with many large-scale, real-world, multimodal, non-continuous, and non-differentiable problems [5]. Accordingly, metaheuristic algorithms have been designed and utilized as competitive alternative solvers for tackling many such problems, owing in part to their simplicity and flexibility.
* Corresponding author: Huiling Chen ([email protected])
Figure 1: Classification of meta-heuristic techniques (meta-heuristic diamond), grouping representative algorithms into evolutionary (GA, GP, DE, BBO), swarm-based (PSO, ABC, ACO, CS), physics-based (GSA, BBBC, CFO), and human-based (TS, TLBO, HS) families.
Otherwise, the possibility of being trapped in local optima (LO) and suffering from immature convergence increases.
We have witnessed a growing interest in the successful, inexpensive, and efficient application of EAs and SI algorithms in recent years. However, according to the No Free Lunch (NFL) theorem [36], all optimization algorithms proposed so far show equivalent performance on average when applied to all possible optimization tasks. Hence, no algorithm can theoretically be considered a general-purpose, universally best optimizer. Instead, the NFL theorem encourages the search for and development of more efficient optimizers. As a result, besides the widespread studies on the efficacy and performance of traditional EAs and SI algorithms, new optimizers with specific global and local searching strategies have emerged in recent years to provide a greater variety of choices for researchers and experts in different fields.
In this paper, a new nature-inspired optimization technique is proposed to compete with other optimizers. The main idea behind the proposed optimizer is inspired by the cooperative behaviors of one of the most intelligent birds, Harris' hawks, in hunting escaping prey (rabbits in most cases) [37]. For this purpose, a new mathematical model is developed in this paper. Then, a stochastic metaheuristic is designed based on the proposed mathematical model to tackle various optimization problems.
The rest of this research is organized as follows. Section 2 presents the background inspiration and information about the cooperative life of Harris' hawks. Section 3 presents the mathematical model and computational procedures of the HHO algorithm. The results of HHO in solving different benchmark and real-world case studies are presented in Section 4. Finally, Section 6 concludes the work with some useful perspectives.
2 Background
In 1997, Louis Lefebvre proposed an approach to measure avian "IQ" based on observed innovations in feeding behaviors [38]. Based on his studies [38, 39, 40, 41], hawks can be listed amongst the most intelligent birds in nature. The Harris' hawk (Parabuteo unicinctus) is a well-known bird of prey that survives in somewhat steady groups found in the southern half of Arizona, USA [37]. Harmonized foraging involving several animals for catching and then sharing the slain animal has been persuasively observed in only a few mammalian carnivores. The Harris' hawk is distinguished by its unique cooperative foraging activities together with other family members living in the same stable group, whereas other raptors usually discover, attack, and catch a quarry alone. This avian desert predator shows evolved, innovative team-chasing capabilities in tracing, encircling, flushing out, and eventually attacking potential quarry. These smart birds can organize dinner parties consisting of several individuals in the non-breeding season. They are known as truly cooperative predators in the raptor realm. As reported by Bednarz [37] in 1988, they begin the team mission at morning twilight, leaving their rest roosts and often perching on giant trees or power poles inside their home realm. They know their family members and try to be aware of their moves during the attack. Once assembled and the party gets started, some hawks one after the other make short tours and then land on rather high perches. In this manner, the hawks occasionally perform a "leapfrog" motion all over the target site, and they rejoin and split several times to actively search for the covered animal, which is usually a rabbit.²
The main tactic of Harris' hawks to capture a prey is the "surprise pounce", also known as the "seven kills" strategy. In this intelligent strategy, several hawks cooperatively attack from different directions and simultaneously converge on a detected escaping rabbit outside cover. The attack may be completed rapidly by capturing the surprised prey within a few seconds; occasionally, however, depending on the escaping capabilities and behaviors of the prey, the seven kills may consist of multiple short, quick dives near the prey over several minutes. Harris' hawks can demonstrate a variety of chasing styles depending on the dynamic nature of circumstances and the escaping patterns of the prey. A switching tactic occurs when the best hawk (leader) stoops at the prey and gets lost, and the chase is continued by one of the party members. These switching activities can be observed in different situations because they are beneficial for confusing the escaping rabbit. The main advantage of these cooperative tactics is that the Harris' hawks can pursue the detected rabbit to exhaustion, which increases its vulnerability. Moreover, the perplexed escaping prey cannot recover its defensive capabilities and finally cannot escape from the confronting team's besiege, since one of the hawks, often the most powerful and experienced one, effortlessly captures the tired rabbit and shares it with the other party members. Harris' hawks and their main behaviors can be seen in nature, as captured in Fig. 2.³
² Interested readers can refer to the following documentary videos: (a) https://bit.ly/2Qew2qN, (b) https://bit.ly/2qsh8Cl, (c) https://bit.ly/2P7OMvH, (d) https://bit.ly/2DosJdS
³ These images were obtained from (a) https://bit.ly/2qAsODb, (b) https://bit.ly/2zBFo9l
3 Harris hawks optimization (HHO)

In this section, we model the exploratory and exploitative phases of the proposed HHO, inspired by the exploration of prey, the surprise pounce, and the different attacking strategies of Harris' hawks. HHO is a population-based, gradient-free optimization technique; hence, it can be applied to any optimization problem subject to a proper formulation. Figure 3 shows all phases of HHO, which are described in the next subsections.
[Figure 3: Different phases of HHO; the boundary |E| = 1 separates the exploration phase (|E| ≥ 1) from the exploitation phase (|E| < 1).]
where Xm(t) is the average position of the current population of hawks. We proposed a simple model to generate random locations inside the group's home range (LB, UB). The first rule generates solutions based on a random location and the positions of other hawks. In the second rule of Eq. (1), we have the difference between the location of the best so far and the average position of the group, plus a randomly scaled component based on the range of variables; r3 is a scaling coefficient that further increases the random nature of the rule once r4 takes values close to 1 and similar distribution patterns may occur. In this rule, we add a randomly scaled movement length to LB. Then, we considered a random scaling coefficient for the component to provide more diversification trends and to explore different regions of the feature space. It is possible to construct different updating rules, but we utilized the simplest rule, which is able to mimic the behaviors of hawks. The average position of hawks is attained using Eq. (2):
$$X_m(t) = \frac{1}{N} \sum_{i=1}^{N} X_i(t) \qquad (2)$$
where Xi(t) indicates the location of each hawk in iteration t and N denotes the total number of hawks. It is possible to obtain the average location in different ways, but we utilized the simplest rule.
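To make these rules concrete, the following is a minimal Python sketch of the exploration update of Eq. (1) together with the averaging of Eq. (2). The NumPy representation, array shapes, and function names are our own illustrative assumptions, and the exact form of Eq. (1) follows the published algorithm, since the equation itself is not reproduced in this excerpt.

    import numpy as np

    def exploration_step(X, i, x_rabbit, lb, ub, rng):
        """Exploration update for hawk i when |E| >= 1 (sketch of Eq. (1)).

        X: (N, D) array of hawk positions, x_rabbit: best location so far,
        lb/ub: (D,) lower/upper bounds of the group's home range.
        """
        N, _ = X.shape
        x_m = X.mean(axis=0)                         # average position, Eq. (2)
        q, r1, r2, r3, r4 = rng.random(5)
        x_rand = X[rng.integers(N)]                  # a randomly selected hawk
        if q >= 0.5:
            # First rule: perch based on a random hawk and the current hawk
            return x_rand - r1 * np.abs(x_rand - 2.0 * r2 * X[i])
        # Second rule: best-so-far minus the group's average position, plus a
        # randomly scaled movement length added to LB (r3, r4 add randomness)
        return (x_rabbit - x_m) - r3 * (lb + r4 * (ub - lb))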
[Figure 4: Behavior of the escaping energy E = 2E0(1 − t/T) over 500 iterations (y-axis: escaping energy in [−2, 2]; x-axis: iteration).]
increase their chances of cooperatively killing the rabbit by performing the surprise pounce. After several minutes, the escaping prey loses more and more energy; the hawks then intensify the besiege process to effortlessly catch the exhausted prey. To model this strategy and enable HHO to switch between soft and hard besiege processes, the E parameter is utilized. In this regard, when |E| ≥ 0.5, the soft besiege happens, and when |E| < 0.5, the hard besiege occurs.
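In code form, this switching logic might be expressed as follows; the schedule E = 2E0(1 − t/T) is taken from Fig. 4 and Algorithm 1 (with E0 = 2rand() − 1 drawn anew each iteration), while the function names are illustrative.

    def escaping_energy(t, T, rng):
        """Escaping energy E = 2*E0*(1 - t/T), with E0 drawn anew in (-1, 1)."""
        E0 = 2.0 * rng.random() - 1.0
        return 2.0 * E0 * (1.0 - t / T)

    def phase(E):
        """Phase selection: exploration for |E| >= 1, then soft/hard besiege."""
        if abs(E) >= 1.0:
            return "exploration"
        return "soft besiege" if abs(E) >= 0.5 else "hard besiege"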
A simple example of this step with one hawk is depicted in Fig. 5.
7
B ΔX|
−E|
X rabbit
E
A X(t)
O
O1 X(t+1)
Xrabbit
Figure 5: Example Bof overall vectors in the case of hard besiege
ΔX
To mathematically model the escaping patterns of the prey and the leapfrog movements (as called in [37]), the levy flight (LF) concept is utilized in the HHO algorithm. The LF is used to mimic the real zigzag deceptive motions of prey (particularly rabbits) during the escaping phase and the irregular, abrupt, and rapid dives of hawks around the escaping prey. In fact, hawks perform several rapid team dives around the rabbit and try to progressively correct their location and direction with regard to the deceptive motions of the prey. This mechanism is also supported by real observations in other competitive situations in nature. It has been confirmed that LF-based activities are the optimal searching tactics for foragers/predators in non-destructive foraging conditions [42, 43]. In addition, LF-based patterns can be detected in the chasing activities of animals like monkeys and sharks [44, 45, 46, 47]. Hence, LF-based motions are utilized within this phase of the HHO technique.
Inspired by the real behaviors of hawks, we suppose that they can progressively select the best possible dive toward the prey when they wish to catch it in competitive situations. Therefore, to perform a soft besiege, we suppose that the hawks can evaluate (decide) their next move based on the following rule in Eq. (7):

$$Y = X_{rabbit} - E\,\left| J X_{rabbit} - X(t) \right| \qquad (7)$$

They then compare the possible result of such a movement to the previous dive to detect whether it would be a good dive or not. If it is not reasonable (when they see that the prey is performing more deceptive motions), they start to perform irregular, abrupt, and rapid dives when approaching the rabbit. We suppose that they will dive based on LF-based patterns using the following rule:
$$Z = Y + S \times LF(D) \qquad (8)$$

where D is the dimension of the problem, S is a random vector of size 1 × D, and LF is the levy flight function, which is calculated using Eq. (9) [48]:

$$LF(x) = 0.01 \times \frac{u \times \sigma}{|v|^{1/\beta}}, \qquad \sigma = \left( \frac{\Gamma(1+\beta) \times \sin(\pi \beta / 2)}{\Gamma\!\left(\frac{1+\beta}{2}\right) \times \beta \times 2^{\left(\frac{\beta - 1}{2}\right)}} \right)^{1/\beta} \qquad (9)$$

where u and v are random values inside (0, 1), and β is a default constant set to 1.5.
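A direct transcription of Eq. (9) might look as follows. The text describes u and v simply as random values; we follow the common Mantegna-style implementation and draw them from normal distributions, which is an assumption on our part, as are the NumPy usage and the function name.

    from math import gamma, pi, sin

    def levy_flight(D, beta=1.5, rng=None):
        """Levy flight step of Eq. (9): LF = 0.01 * (u * sigma) / |v|^(1/beta)."""
        rng = np.random.default_rng() if rng is None else rng
        sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
                 (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
        u = sigma * rng.standard_normal(D)   # numerator draw, scaled by sigma
        v = rng.standard_normal(D)           # denominator draw
        return 0.01 * u / np.abs(v) ** (1 / beta)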
Hence, the final strategy for updating the positions of hawks in the soft besiege phase can be performed by Eq. (10):

$$X(t+1) = \begin{cases} Y & \text{if } F(Y) < F(X(t)) \\ Z & \text{if } F(Z) < F(X(t)) \end{cases} \qquad (10)$$
where Y and Z are obtained using Eqs. (7) and (8).

A simple illustration of this step for one hawk is demonstrated in Fig. 6. Note that the position history of the LF-based leapfrog movement patterns during some iterations is also recorded and shown in this illustration. The colored dots are the location footprints of LF-based patterns in one trial, after which HHO reaches the location Z. In each step, only the better position Y or Z is selected as the next location. This strategy is applied to all search agents.
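Taken together, Eqs. (7), (8), and (10) define the soft besiege with progressive rapid dives. The Python sketch below assembles them, reusing the levy_flight helper sketched above; the function name, the jump strength J = 2(1 − rand()) taken from Algorithm 1, and returning the unchanged position when neither dive improves are our own illustrative choices for the cases the excerpt leaves unspecified.

    def soft_besiege_rapid_dives(x, x_rabbit, E, f, rng):
        """Soft besiege with progressive rapid dives, Eqs. (7), (8), (10) (sketch)."""
        D = x.size
        J = 2.0 * (1.0 - rng.random())               # random jump strength (Algorithm 1)
        Y = x_rabbit - E * np.abs(J * x_rabbit - x)  # Eq. (7): evaluated soft-besiege dive
        if f(Y) < f(x):                              # Eq. (10): keep Y if it improves
            return Y
        S = rng.random(D)                            # random vector of size 1 x D
        Z = Y + S * levy_flight(D, rng=rng)          # Eq. (8): LF-based rapid dive
        return Z if f(Z) < f(x) else x               # Eq. (10): keep Z only if it improves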
Figure 6: Example of overall vectors in the case of soft besiege with progressive rapid dives.
Figure 7: Example of overall vectors in the case of hard besiege with progressive rapid dives in 2D and 3D space (vectors shown include Y = Xrabbit − E |J Xrabbit − Xm| and Z = Y + S × LF(D)).
Algorithm 1 Pseudo-code of HHO algorithm
Inputs: The population size N and maximum number of iterations T
Outputs: The location of rabbit and its fitness value
Initialize the random population Xi (i = 1, 2, . . . , N)
while (stopping condition is not met) do
    Calculate the fitness values of hawks
    Set Xrabbit as the location of rabbit (best location)
    for (each hawk (Xi)) do
        Update the initial energy E0 and jump strength J    ▷ E0 = 2rand() − 1, J = 2(1 − rand())
        Update the E using Eq. (3)
        if (|E| ≥ 1) then                                   ▷ Exploration phase
            Update the location vector using Eq. (1)
        if (|E| < 1) then                                   ▷ Exploitation phase
            if (r ≥ 0.5 and |E| ≥ 0.5) then                 ▷ Soft besiege
                Update the location vector using Eq. (4)
            else if (r ≥ 0.5 and |E| < 0.5) then            ▷ Hard besiege
                Update the location vector using Eq. (6)
            else if (r < 0.5 and |E| ≥ 0.5) then            ▷ Soft besiege with progressive rapid dives
                Update the location vector using Eq. (10)
            else if (r < 0.5 and |E| < 0.5) then            ▷ Hard besiege with progressive rapid dives
                Update the location vector using Eq. (11)
Return Xrabbit
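Read end to end, Algorithm 1 translates almost line for line into a compact driver loop. The sketch below ties together the helpers sketched earlier (exploration_step, escaping_energy, levy_flight, soft_besiege_rapid_dives, all illustrative names); the besiege rules labeled Eqs. (4), (6), and (11) are written out from the published algorithm, since their defining text is not part of this excerpt, and the bound clipping is our own addition.

    def hho(f, lb, ub, N=30, T=500, seed=None):
        """Minimal HHO driver following Algorithm 1 (illustrative sketch)."""
        rng = np.random.default_rng(seed)
        D = lb.size
        X = lb + rng.random((N, D)) * (ub - lb)      # random initial population
        fit = np.array([f(x) for x in X])
        x_rabbit, f_rabbit = X[fit.argmin()].copy(), fit.min()
        for t in range(T):
            for i in range(N):
                E = escaping_energy(t, T, rng)
                r = rng.random()
                J = 2.0 * (1.0 - rng.random())
                if abs(E) >= 1:                      # exploration, Eq. (1)
                    X[i] = exploration_step(X, i, x_rabbit, lb, ub, rng)
                elif r >= 0.5 and abs(E) >= 0.5:     # soft besiege, Eq. (4)
                    dX = x_rabbit - X[i]
                    X[i] = dX - E * np.abs(J * x_rabbit - X[i])
                elif r >= 0.5:                       # hard besiege, Eq. (6)
                    X[i] = x_rabbit - E * np.abs(x_rabbit - X[i])
                elif abs(E) >= 0.5:                  # soft besiege w/ rapid dives, Eq. (10)
                    X[i] = soft_besiege_rapid_dives(X[i], x_rabbit, E, f, rng)
                else:                                # hard besiege w/ rapid dives, Eq. (11)
                    Y = x_rabbit - E * np.abs(J * x_rabbit - X.mean(axis=0))
                    Z = Y + rng.random(D) * levy_flight(D, rng=rng)
                    if f(Y) < f(X[i]):
                        X[i] = Y
                    elif f(Z) < f(X[i]):
                        X[i] = Z
                X[i] = np.clip(X[i], lb, ub)         # keep hawks inside the home range
            fit = np.array([f(x) for x in X])
            if fit.min() < f_rabbit:
                x_rabbit, f_rabbit = X[fit.argmin()].copy(), fit.min()
        return x_rabbit, f_rabbit

For example, hho(lambda x: float(np.sum(x**2)), np.full(30, -100.0), np.full(30, 100.0)) would minimize a 30-dimensional sphere function of the F1 type.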
and exploitation inclinations and escaping from LO when dealing with challenging problems. Details of the CM test problems are also reported in Table 19 in Appendix A. Figure 8 demonstrates three of the composition test problems.
The results and performance of the proposed HHO are compared with other well-established optimization techniques such as the GA [22], BBO [22], DE [22], PSO [22], CS [34], TLBO [29], BA/BAT [52], FPA [53], FA [54], GWO [55], and MFO [56] algorithms based on the best, worst, standard deviation (STD), and average (AVG) of the results. These algorithms cover both recently proposed techniques, such as MFO, GWO, CS, TLBO, BAT, FPA, and FA, and the most widely utilized optimizers in the field, such as the GA, DE, PSO, and BBO algorithms.
As recommended by Derrac et al. [57], the non-parametric Wilcoxon statistical test with a 5% level of significance is also performed along with the experimental assessments to detect significant differences between the results attained by the different techniques.
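For readers who wish to reproduce this kind of analysis, the two-sided Wilcoxon rank-sum test is available, for example, in SciPy; the run data below are purely hypothetical placeholders for the final best-fitness values of two optimizers over 30 independent runs.

    import numpy as np
    from scipy.stats import ranksums

    rng = np.random.default_rng(42)
    hho_runs = rng.normal(1.0e-8, 1.0e-9, size=30)    # hypothetical HHO results
    other_runs = rng.normal(1.0e-3, 1.0e-4, size=30)  # hypothetical competitor results

    stat, p = ranksums(hho_runs, other_runs)
    print(f"p = {p:.2e}")   # p < 0.05 indicates a significant difference at the 5% level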
[Figure 8: Parameter space of three composition test problems (3D landscapes of F(x) over x1 and x2).]
[Figure 9: Qualitative results for representative benchmark problems; panels show the parameter space, search history, escaping energy E = 2E0(1 − t/T), trajectory of the first hawk, average fitness of all hawks, and convergence curve.]
Figure 10: Qualitative results for F7, F9, and F10 problems (panels: parameter space, search history, escaping energy E = 2E0(1 − t/T), trajectory of the first hawk, average fitness of all hawks, and convergence curve).
Table 1: The parameter settings
[Figure 11: Qualitative results for a further test problem, with the same panel layout as Figs. 9 and 10.]
From the history of sampled locations in Figs. 9-11, it can be observed that HHO reveals a similar pattern in dealing with different cases: the hawks attempt to initially boost diversification and explore the favorable areas of the solution space, and then exploit the vicinity of the best locations. The trajectory diagrams help us comprehend the searching behavior of the foremost hawk (as a representative of the rest of the hawks). With this metric, we can check whether the foremost hawk faces abrupt changes during the early phases and gradual variations in the concluding steps. Referring to Van Den Bergh and Engelbrecht [59], such behavior can guarantee that a P-metaheuristic finally converges to a position and exploits the target region.

As per the trajectories in Figs. 9-11, we see that the foremost hawk starts the searching procedure with sudden movements. The amplitude of these variations covers more than 50% of the solution space. This observation discloses the exploration propensities of the proposed HHO. As time passes, the amplitude of these fluctuations gradually decreases, which guarantees the transition of HHO from exploratory trends to exploitative steps. Eventually, the motion pattern of the first hawk becomes very stable, showing that HHO exploits the promising regions during the concluding steps. By monitoring the average fitness of the population, the next measure, we can notice the reduction patterns in fitness values as HHO enriches the quality of the randomized candidate hawks. Based on the diagrams in Figs. 9-11, HHO enhances the quality of all hawks during the first half of the iterations, and there is an accelerating decreasing pattern in all curves. Again, the amplitude of the variations in fitness decreases with more iterations. Hence, HHO can dynamically focus on more promising areas during the iterations. According to the convergence curves in Figs. 9-11, which show the fitness of the best hawk found so far, we can detect accelerated decreasing patterns in all curves, especially after the first half of the iterations. We can also estimate the moment that HHO shifts from exploration to exploitation. In this regard, it is observed that HHO reveals an accelerated convergence trend.
16
F(x) F(x) F(x)
1.0E+020 1.0E+300
HHO
GA 1.0E+000
1.0E+000 1.0E+250 PSO
HHO BBO
GA FPA
PSO
1.0E+200 GWO
1.0E-020 BAT
BBO 1.0E-020 HHO
FPA 1.0E+150 FA GA
GWO CS PSO
1.0E-040 BAT MFO BBO
FA 1.0E+100 TLBO FPA
CS DE 1.0E-040 GWO
1.0E-060 MFO BAT
TLBO 1.0E+050 FA
DE CS
1.0E-080 1.0E+000 MFO
1.0E-060 TLBO
DE
1.0E-100 1.0E-050
30 100 500 1000 30 100 500 1000 30 100 500 1000
Dimension Dimension Dimension
1.0E+000
1.0E+010
1.0E+005
1.0E-010
HHO HHO HHO
GA 1.0E+005 GA GA
PSO PSO PSO
1.0E-020 BBO BBO 1.0E+000 BBO
FPA FPA FPA
GWO
1.0E+000 GWO GWO
1.0E-030 BAT BAT BAT
FA FA FA
CS CS
1.0E-005 CS
1.0E-005
1.0E-040 MFO MFO MFO
TLBO TLBO TLBO
DE DE DE
1.0E-050 1.0E-010 1.0E-010
30 100 500 1000 30 100 500 1000 30 100 500 1000
Dimension Dimension Dimension
1.0E+006
HHO
GA
PSO
BBO
FPA
1.0E+000 GWO
BAT
FA
CS
MFO
TLBO
1.0E-006 DE
(m) F13
Figure 12: Scalability results of the HHO versus other methods in dealing with the F1-F13 cases with different
dimensions
17
Table 2: Results of HHO for different dimensions of scalable F1-F13 problems
almost all cases. It is seen that HHO has reached the best global optimum for the F9 and F11 cases in any dimension.

In order to further check the efficacy of HHO, we recorded the running time taken by the optimizers to find solutions for the F1-F13 problems with 1000 dimensions; the results are reported in Table 7. As per Table 7, HHO shows a reasonably fast and competitive performance in finding the best solutions compared to other well-established optimizers, even for high-dimensional unimodal and multimodal cases. Based on the average running time over the 13 problems, HHO performs faster than the BBO, PSO, GA, CS, GWO, and FA algorithms. These observations are also in accordance with the computational complexity of HHO.

The results in Table 8 verify that HHO provides superior and very competitive results on the F14-F23 fixed-dimension MM test cases. The results on F16-F18 are very competitive, and all algorithms have attained high-quality results. Based on the results in Table 8, the proposed HHO has
Table 5: Results of benchmark functions (F1-F13), with 500 dimensions.
Benchmark       HHO        GA         PSO        BBO        FPA        GWO        BAT        FA         CS         MFO        TLBO       DE
F1   AVG   1.46E-92   6.06E+05   6.42E+05   1.60E+05   8.26E+04   1.42E-03   1.52E+06   6.30E+04   6.80E+00   1.15E+06   2.14E-77   7.43E+05
F1   STD   8.01E-92   7.01E+04   2.96E+04   9.76E+03   1.32E+04   3.99E-04   3.58E+04   8.47E+03   4.93E-01   3.54E+04   1.94E-77   3.67E+04
F2   AVG   7.87E-49   1.94E+03   6.08E+09   5.95E+02   5.13E+02   1.10E-02   8.34E+09   7.13E+02   4.57E+01   3.00E+08   2.31E-39   3.57E+09
F2   STD   3.11E-48   7.03E+01   1.70E+10   1.70E+01   4.84E+01   1.93E-03   1.70E+10   3.76E+01   2.05E+00   1.58E+09   1.63E-39   1.70E+10
F3   AVG   6.54E-37   5.79E+06   1.13E+07   2.98E+06   5.34E+05   3.34E+05   3.37E+07   1.19E+06   2.03E+02   4.90E+06   1.06E+00   1.20E+07
F3   STD   3.58E-36   9.08E+05   1.43E+06   3.87E+05   1.34E+05   7.95E+04   1.41E+07   1.88E+05   2.72E+01   1.02E+06   3.70E+00   1.49E+06
F4   AVG   1.29E-47   9.59E+01   8.18E+01   9.35E+01   4.52E+01   6.51E+01   9.82E+01   5.00E+01   4.06E-01   9.88E+01   4.02E-31   9.92E+01
F4   STD   4.11E-47   1.20E+00   1.49E+00   9.05E-01   4.28E+00   5.72E+00   3.32E-01   1.73E+00   3.03E-02   4.15E-01   2.67E-31   2.33E-01
F5   AVG   3.10E-01   1.79E+09   1.84E+09   2.07E+08   3.30E+07   4.98E+02   6.94E+09   2.56E+07   1.21E+03   5.01E+09   4.97E+02   4.57E+09
F5   STD   3.73E-01   4.11E+08   1.11E+08   2.08E+07   8.76E+06   5.23E-01   2.23E+08   6.14E+06   7.04E+01   2.50E+08   3.07E-01   1.25E+09
F6   AVG   2.94E-03   6.27E+05   6.57E+05   1.68E+05   8.01E+04   9.22E+01   1.53E+06   6.30E+04   8.27E+01   1.16E+06   7.82E+01   7.23E+05
F6   STD   3.98E-03   7.43E+04   3.29E+04   8.23E+03   9.32E+03   2.15E+00   3.37E+04   8.91E+03   2.24E+00   3.48E+04   2.50E+00   3.28E+04
F7   AVG   2.51E-04   9.10E+03   1.43E+04   2.62E+03   2.53E+02   4.67E-02   2.23E+04   3.71E+02   8.05E+01   3.84E+04   1.71E-03   2.39E+04
F7   STD   2.43E-04   2.20E+03   1.51E+03   3.59E+02   6.28E+01   1.12E-02   1.15E+03   6.74E+01   1.37E+01   2.24E+03   4.80E-04   2.72E+03
F8   AVG   -2.09E+05  -1.31E+05  -1.65E+04  -1.42E+05  -3.00E+04  -5.70E+04  -9.03E+03  -7.27E+04  -2.10E+17  -6.29E+04  -5.02E+04  -2.67E+04
F8   STD   2.84E+01   2.31E+04   9.99E+02   1.98E+03   1.14E+03   3.12E+03   2.12E+03   1.15E+04   1.14E+18   5.71E+03   1.00E+04   1.38E+03
F9   AVG   0.00E+00   3.29E+03   6.63E+03   7.86E+02   4.96E+03   7.84E+01   6.18E+03   2.80E+03   2.54E+03   6.96E+03   0.00E+00   7.14E+03
F9   STD   0.00E+00   1.96E+02   1.07E+02   3.42E+01   7.64E+01   3.13E+01   1.20E+02   1.42E+02   5.21E+01   1.48E+02   0.00E+00   1.05E+02
F10  AVG   8.88E-16   1.96E+01   1.97E+01   1.44E+01   8.55E+00   1.93E-03   2.04E+01   1.24E+01   1.07E+00   2.03E+01   7.62E-01   2.06E+01
F10  STD   4.01E-31   2.04E-01   1.04E-01   2.22E-01   8.66E-01   3.50E-04   3.25E-02   4.46E-01   6.01E-02   1.48E-01   2.33E+00   2.45E-01
F11  AVG   0.00E+00   5.42E+03   5.94E+03   1.47E+03   6.88E+02   1.55E-02   1.38E+04   5.83E+02   2.66E-02   1.03E+04   0.00E+00   6.75E+03
F11  STD   0.00E+00   7.32E+02   3.19E+02   8.10E+01   8.17E+01   3.50E-02   3.19E+02   7.33E+01   2.30E-03   4.43E+02   0.00E+00   2.97E+02
F12  AVG   1.41E-06   2.79E+09   3.51E+09   1.60E+08   4.50E+06   7.42E-01   1.70E+10   8.67E+05   3.87E-01   1.20E+10   4.61E-01   1.60E+10
F12  STD   1.48E-06   1.11E+09   4.16E+08   3.16E+07   3.37E+06   4.38E-02   6.29E+08   6.23E+05   2.47E-02   6.82E+08   2.40E-02   2.34E+09
F13  AVG   3.44E-04   8.84E+09   6.82E+09   5.13E+08   3.94E+07   5.06E+01   3.17E+10   2.29E+07   6.00E+01   2.23E+10   4.98E+01   2.42E+10
F13  STD   4.75E-04   2.00E+09   8.45E+08   6.59E+07   1.87E+07   1.30E+00   9.68E+08   9.46E+06   1.13E+00   1.13E+09   9.97E-03   6.39E+09
Table 7: Comparison of average running time results (seconds) over 30 runs for larger-scale problems with 1000 variables.
always achieved the best results on the F14-F23 problems in comparison with the other approaches. Based on the results for the F24-F29 hybrid CM functions in Table 8, HHO is capable of achieving high-quality solutions and outperforming the other competitors. The p-values in Table 24 also confirm the meaningful advantage of HHO compared to the other optimizers for the majority of cases.
Table 9: Brief description of the tackled engineering design tasks (D: dimension, CV: continuous variables, DV: discrete variables, NC: number of constraints, AC: active constraints, F/S: ratio of the feasible solutions (F) to the whole search domain (S), OB: objective).
Consider $\vec{X} = [x_1\ x_2] = [A_1\ A_2]$,

Minimize $f(\vec{X}) = (2\sqrt{2}\,x_1 + x_2) \times l$,

Subject to
$$g_1(\vec{X}) = \frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\,P - \sigma \le 0,$$
$$g_2(\vec{X}) = \frac{x_2}{\sqrt{2}\,x_1^2 + 2 x_1 x_2}\,P - \sigma \le 0,$$
$$g_3(\vec{X}) = \frac{1}{\sqrt{2}\,x_2 + x_1}\,P - \sigma \le 0,$$

Variable range: $0 \le x_1, x_2 \le 1$,

where $l = 100$ cm, $P = 2$ kN/cm², $\sigma = 2$ kN/cm².
Figure 13 demonstrates the shape of the formulated truss and the related forces on this structure. With regard to Fig. 13 and the formulation, we have two parameters: the area of bars 1 and 3, and the area of bar 2. The objective of this task is to minimize the total weight of the structure. In addition, this design case has several constraints, including stress, deflection, and buckling.
[Figure 13: Three-bar truss design problem (bars 1 and 3 have equal areas A1 = A3; bar 2 has area A2; load P).]
The HHO is applied to this case using 30 independent runs, with 30 hawks and 500 iterations in each run. Since this benchmark case has several constraints, we need to integrate HHO with a constraint handling technique. For the sake of simplicity, we used a barrier penalty approach [63] in HHO. The results of HHO are compared to those reported for DEDS [64], MVO [65], GOA [62], MFO [56], PSO-DE [66], SSA [60], MBA [67], Tsa [68], Ray and Sain [69], and CS [34] in the previous literature. Table 10 shows the detailed results of the proposed HHO compared to other techniques. Based on the results in Table 10, it is observed that HHO can reveal very competitive results compared to the DEDS, PSO-DE, and SSA algorithms. Additionally, HHO significantly outperforms the other optimizers. The obtained results show that HHO is capable of dealing with a constrained space.
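To illustrate how a penalty scheme can couple the truss formulation above to an unconstrained optimizer such as HHO, a minimal sketch follows; the static quadratic penalty and its coefficient mu are our own assumptions, since the exact barrier penalty of [63] is not reproduced in this excerpt.

    import numpy as np

    P, SIGMA, L_BAR = 2.0, 2.0, 100.0   # load and stress limit (kN/cm^2), bar length (cm)

    def truss_penalized(x, mu=1.0e6):
        """Truss weight f(X) = (2*sqrt(2)*x1 + x2)*l plus a penalty on violated g1..g3."""
        x1, x2 = x
        f = (2.0 * np.sqrt(2.0) * x1 + x2) * L_BAR
        g = [
            (np.sqrt(2.0) * x1 + x2) / (np.sqrt(2.0) * x1**2 + 2.0 * x1 * x2) * P - SIGMA,
            x2 / (np.sqrt(2.0) * x1**2 + 2.0 * x1 * x2) * P - SIGMA,
            1.0 / (np.sqrt(2.0) * x2 + x1) * P - SIGMA,
        ]
        return f + mu * sum(max(0.0, gi) ** 2 for gi in g)

    # e.g. hho(truss_penalized, np.array([1e-3, 1e-3]), np.array([1.0, 1.0]))
    # (a small positive lower bound avoids division by zero at x1 = x2 = 0)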
Table 10: Comparison of results for three-bar truss design problem.
Consider $\vec{z} = [z_1\ z_2\ z_3] = [d\ D\ N]$,

Minimize $f(\vec{z}) = (z_3 + 2)\, z_2 z_1^2$,

Subject to
$$g_1(\vec{z}) = 1 - \frac{z_2^3 z_3}{71785\, z_1^4} \le 0,$$
$$g_2(\vec{z}) = \frac{4 z_2^2 - z_1 z_2}{12566\,(z_2 z_1^3 - z_1^4)} + \frac{1}{5108\, z_1^2} \le 0,$$
$$g_3(\vec{z}) = 1 - \frac{140.45\, z_1}{z_2^2 z_3} \le 0,$$
$$g_4(\vec{z}) = \frac{z_1 + z_2}{1.5} - 1 \le 0.$$
Several optimizers have previously been applied to this case, such as the SSA [60], TEO [70], MFO [56], SFS [71], GWO [55], WOA [18], the method presented by Arora [72], GA2 [73], GA3 [74], the method presented by Belegundu [75], CPSO [76], DEDS [64], GSA [25], DELC [77], HEAA [78], WEO [79], BA [80], ESs [81], Rank-iMDDE [82], CWCA [14], and WCA [61]. The results of HHO are compared to the aforementioned techniques in Table 11.
Table 11: Comparison of results for tension/compression spring problem.
Table 11 shows that the proposed HHO can achieve high-quality solutions very effectively when tackling this benchmark problem, and it exposes the best design. It is evident that the results of HHO are very competitive with those of SFS and TEO.
[Figure 14: Schematic of the pressure vessel design problem (labels recovered: head thickness Th, shell thickness Ts, inner radius r).]
The design space for this case is limited to 0 ≤ z1, z2 ≤ 99 and 0 ≤ z3, z4 ≤ 200. The results of HHO are compared to those of GWO [55], GA [73], HPSO [83], G-QPSO [84], WEO [79], IACO [85], BA [80], MFO [56], CSS [86], ESs [81], CPSO [76], BIANCA [87], MDDE [88], DELC [77], WOA [18], GA3 [74], the Lagrangian multiplier method (Kannan) [18], and the branch-and-bound method (Sandgren) [18]. Table 12 reports the optimum designs attained by HHO and the listed optimizers. Inspecting the results in Table 12, we detect that HHO is the best optimizer for this problem and can attain superior results compared to the other techniques.
25
l h t
P
b
L
Figure 15: Welded beam design problem
The optimal results of HHO versus those attained by RANDOM [89], DAVID [89], SIMPLEX [89], APPROX [89], GA1 [73], GA2 [63], HS [90], GSA [18], ESs [81], and CDE [91] are presented in Table 13. From Table 13, it can be seen that the proposed HHO finds the best design settings with the minimum fitness value compared to the other optimizers.
Table 13: Comparison of results for welded beam design problem.
$$v_{rz} = \frac{2 \pi n\,(r_o^3 - r_i^3)}{90\,(r_o^2 - r_i^2)}, \qquad T = \frac{I_z\, \pi n}{30\,(M_h + M_f)}$$

$\Delta r = 20$ mm, $I_z = 55$ kg·mm², $P_{max} = 1$ MPa, $F_{max} = 1000$ N, $T_{max} = 15$ s, $\mu = 0.5$, $s = 1.5$, $M_s = 40$ Nm, $M_f = 3$ Nm, $n = 250$ rpm, $v_{sr,max} = 10$ m/s, $l_{max} = 30$ mm, $r_{i,min} = 60$, $r_{i,max} = 80$, $r_{o,min} = 90$, $r_{o,max} = 110$, $t_{min} = 1.5$, $t_{max} = 3$, $F_{min} = 600$, $F_{max} = 1000$, $Z_{min} = 2$, $Z_{max} = 9$.
Table 14: Comparison of results for multi-plate disc clutch brake.
A schematic view of this problem is illustrated in Fig. 16.

Figure 16: Rolling element bearing design problem (labeled quantities include Dh, do, D, di, ro, ri, and Bw).
This case covers roughly 1.5% of the feasible area of the target space. The results of HHO are compared to the GA4 [94], TLBO [93], and PVS [92] techniques. Table 15 tabulates the results of HHO versus those of the other optimizers. From Table 15, we see that the proposed HHO has detected the best solution, with the maximum cost, with a substantial improvement compared to the GA4, TLBO, and PVS algorithms.
Table 15: Comparison of results for rolling element bearing design problem
and competitive solutions based on a stable balance between the diversification and intensification inclinations and a smooth transition between the searching modes. The results also support the superior exploratory strengths of HHO. The results for the six well-known constrained cases in Tables 10-15 also disclose that HHO obtains the best solutions and is one of the top optimizers compared to many state-of-the-art techniques. The results highlight that the proposed HHO has several exploratory and exploitative mechanisms and, consequently, has efficiently avoided LO and immature convergence drawbacks when solving different classes of problems; in the case of any LO stagnation, the proposed HHO has shown a higher potential for jumping out of local optimum solutions.
The following features can theoretically assist us in realizing why the proposed HHO can be beneficial in exploring or exploiting the search space of a given optimization problem:

• The escaping energy parameter E has a dynamic, randomized, time-varying nature, which can further boost the exploration and exploitation patterns of HHO. This factor also requires HHO to perform a smooth transition between exploration and exploitation.
• Different diversification mechanisms with regard to the average location of hawks can boost the exploratory behavior of HHO in the initial iterations.
• Different LF-based patterns with short-length jumps enhance the exploitative behaviors of HHO when conducting a local search.
• The progressive selection scheme assists search agents to progressively improve their positions and to select only a better position, which can improve the quality of solutions and the intensification powers of HHO during the course of iterations.
• HHO utilizes a series of searching strategies based on the E and r parameters and then selects the best movement step. This capability also has a constructive impact on the exploitation potential of HHO.
• The randomized jump strength J can assist candidate solutions in balancing the exploration and exploitation tendencies.
• The use of adaptive and time-varying parameters allows HHO to handle the difficulties of a search space, including local optimal solutions, multi-modality, and deceptive optima.
We designed HHO to be as simple as possible, with few exploratory and exploitative mechanisms. It is possible to utilize other evolutionary schemes such as mutation and crossover schemes, multi-swarm and multi-leader structures, evolutionary updating structures, and chaos-based phases. Such operators and ideas are beneficial for future works. In future works, binary and multi-objective versions of HHO can be developed. In addition, it can be employed to tackle various problems in engineering and other fields. Another interesting direction is to compare different constraint handling strategies in dealing with real-world constrained problems.
Appendix A
Table 18: Description of fixed-dimension multimodal benchmark functions.
Table 19: Details of hybrid composition functions F24-F29 (MM: Multi-modal, R: Rotated, NS: Non-Separable, S: Scalable, D: Dimension).
Appendix B
Table 20: p-values of the Wilcoxon rank-sum test with 5% significance for F1-F13 with 30 dimensions (p-values ≥ 0.05 are shown in bold face; NaN means "Not a Number" returned by the test).
GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 2.85E-11 2.88E-11 2.52E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F2 2.72E-11 2.52E-11 4.56E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F3 2.71E-11 2.63E-11 2.79E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F4 2.62E-11 2.84E-11 2.62E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F5 2.62E-11 2.52E-11 2.72E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F6 2.72E-11 2.71E-11 2.62E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 2.25E-04 3.02E-11
F7 2.52E-11 2.71E-11 9.19E-11 3.02E-11 3.69E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F8 7.83E-09 2.71E-11 7.62E-09 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F9 9.49E-13 1.00E-12 NaN 1.21E-12 4.35E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 4.57E-12 1.21E-12
F10 1.01E-12 1.14E-12 1.05E-12 1.21E-12 1.16E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 4.46E-13 1.21E-12
F11 9.53E-13 9.57E-13 9.54E-13 1.21E-12 2.79E-03 1.21E-12 1.21E-12 1.21E-12 1.21E-12 NaN 1.21E-12
F12 2.63E-11 2.51E-11 2.63E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 1.01E-08 3.02E-11 1.07E-06 3.02E-11
F13 2.51E-11 2.72E-11 2.61E-11 3.02E-11 3.02E-11 3.02E-11 5.49E-11 3.02E-11 3.02E-11 2.00E-06 3.02E-11
Table 21: p-values of the Wilcoxon rank-sum test with 5% significance for F1-F13 with 100 dimensions (p-values ≥ 0.05 are shown in bold face).
GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 2.98E-11 2.52E-11 2.52E-11 3.01E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F2 2.88E-11 2.72E-11 2.72E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F3 2.72E-11 2.72E-11 2.52E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F4 2.40E-11 2.52E-11 2.51E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.01E-11 3.02E-11
F5 2.72E-11 2.62E-11 2.84E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F6 2.52E-11 2.52E-11 2.52E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F7 2.71E-11 2.79E-11 2.52E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 4.20E-10 3.02E-11
F8 2.72E-11 2.51E-11 2.83E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 5.57E-10 3.02E-11 3.02E-11 3.02E-11
F9 1.06E-12 9.57E-13 9.54E-13 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 3.34E-01 1.21E-12
F10 9.56E-13 9.57E-13 1.09E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 4.16E-14 1.21E-12
F11 1.06E-12 9.55E-13 9.56E-13 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 NaN 1.21E-12
F12 2.72E-11 2.52E-11 2.52E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F13 2.72E-11 2.72E-11 2.52E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
Table 22: p-values of the Wilcoxon rank-sum test with 5% significance for F1-F13 with 500 dimensions (p-values ≥ 0.05 are shown in bold face).
GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 2.94E-11 2.79E-11 2.72E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F2 2.52E-11 2.63E-11 2.52E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F3 2.88E-11 2.52E-11 2.72E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F4 2.25E-11 2.52E-11 2.59E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F5 2.72E-11 2.72E-11 2.72E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F6 2.52E-11 2.52E-11 2.52E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F7 2.52E-11 2.79E-11 2.52E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 4.98E-11 3.02E-11
F8 2.52E-11 2.72E-11 2.63E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F9 1.06E-12 1.06E-12 1.06E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 NaN 1.21E-12
F10 9.57E-13 9.57E-13 1.06E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 6.14E-14 1.21E-12
F11 9.57E-13 9.57E-13 1.06E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 NaN 1.21E-12
F12 2.52E-11 2.52E-11 2.79E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F13 2.79E-11 2.52E-11 2.72E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
Table 23: p-values of the Wilcoxon rank-sum test with 5% significance for F1-F13 with 1000 dimensions (p-values ≥ 0.05 are shown in bold face).
GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F1 3.01E-11 2.52E-11 2.52E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F2 2.63E-11 1.21E-12 2.72E-11 3.02E-11 3.02E-11 1.21E-12 1.21E-12 3.02E-11 1.21E-12 1.21E-12 1.21E-12
F3 2.86E-11 2.52E-11 2.52E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F4 1.93E-11 2.52E-11 2.07E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F5 2.72E-11 2.52E-11 2.52E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F6 2.63E-11 2.63E-11 2.63E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F7 2.63E-11 2.52E-11 2.52E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F8 2.52E-11 2.52E-11 2.52E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F9 1.01E-12 1.06E-12 9.57E-13 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 NaN 1.21E-12
F10 1.01E-12 1.01E-12 9.57E-13 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 8.72E-14 1.21E-12
F11 1.06E-12 1.01E-12 9.57E-13 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.17E-13 1.21E-12
F12 2.52E-11 2.52E-11 2.72E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
F13 2.52E-11 2.63E-11 2.72E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11 3.02E-11
Table 24: p-values of the Wilcoxon rank-sum test with 5% significance for F14-F29 problems (p-values ≥ 0.05 are shown in bold face).
GA PSO BBO FPA GWO BAT FA CS MFO TLBO DE
F14 8.15E-02 2.89E-08 8.15E-03 1.08E-01 5.20E-08 7.46E-12 1.53E-09 6.13E-14 9.42E-06 8.15E-02 1.00E+00
F15 2.78E-11 7.37E-11 2.51E-11 9.76E-10 1.37E-01 3.34E-11 3.16E-10 8.69E-10 5.00E-10 5.08E-06 3.92E-02
F16 1.05E-12 9.53E-13 9.49E-13 NaN NaN 5.54E-03 NaN NaN NaN NaN NaN
F17 1.87E-12 1.89E-12 2.06E-12 1.61E-01 1.61E-01 5.97E-01 1.61E-01 1.61E-01 1.61E-01 1.61E-01 1.61E-01
F18 NaN 9.53E-13 NaN NaN 1.09E-02 1.34E-03 NaN NaN NaN NaN NaN
F19 2.50E-11 5.24E-02 1.91E-09 1.65E-11 1.06E-01 5.02E-10 1.65E-11 1.65E-11 4.54E-10 1.65E-11 1.65E-11
F20 8.74E-03 2.54E-04 8.15E-03 6.15E-03 5.74E-06 5.09E-06 1.73E-07 NaN 1.73E-04 1.73E-04 1.73E-04
F21 1.22E-04 6.25E-05 5.54E-03 1.91E-08 5.54E-03 6.85E-07 1.71E-07 1.91E-08 9.42E-06 1.73E-04 1.79E-04
F22 1.64E-07 5.00E-10 8.15E-08 2.51E-11 8.15E-08 6.63E-07 5.24E-04 1.73E-08 8.15E-08 8.81E-10 1.21E-12
F23 1.54E-05 5.00E-10 8.88E-08 2.51E-11 8.88E-08 1.73E-08 5.14E-04 1.69E-08 8.88E-08 8.81E-10 NaN
F24 2.40E-01 4.69E-08 1.64E-05 1.17E-05 2.84E-04 3.02E-11 3.03E-03 3.08E-08 8.89E-10 8.35E-08 3.20E-09
F25 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F26 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F27 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12 1.21E-12
F28 0.012732 1.17E-09 5.07E-10 0.001114 1.01E-08 3.02E-11 2.37E-10 2.02E-08 8.35E-08 0.446419 2.71E-11
F29 1.85E-08 6.52E-09 3.02E-11 1.29E-06 7.12E-09 3.02E-11 1.17E-09 3.02E-11 3.02E-11 2.6E-08 3.02E-11
Acknowledgments
This research was funded by the Zhejiang Provincial Natural Science Foundation of China (LY17F020012) and the Science and Technology Plan Project of Wenzhou, China (ZG2017019). We also acknowledge the comments of the anonymous reviewers.
References
[1] R. Abbassi, A. Abbassi, A. A. Heidari, S. Mirjalili, An efficient salp swarm-inspired algorithm for parameters identification of
photovoltaic cell models, Energy Conversion and Management 179 (2019) 362–372.
[2] H. Faris, A. M. Al-Zoubi, A. A. Heidari, I. Aljarah, M. Mafarja, M. A. Hassonah, H. Fujita, An intelligent system for spam
detection and identification of the most relevant features based on evolutionary random weight networks, Information Fusion 48
(2019) 67 – 83.
[3] J. Nocedal, S. J. Wright, Numerical Optimization, 2nd ed., Springer, 2006.
[4] G. Wu, Across neighborhood search for numerical optimization, Information Sciences 329 (2016) 597–618.
[5] G. Wu, W. Pedrycz, P. N. Suganthan, R. Mallipeddi, A variable reduction strategy for evolutionary algorithms handling equality
constraints, Applied Soft Computing 37 (2015) 774–786.
[6] J. Dréo, A. Pétrowski, P. Siarry, E. Taillard, Metaheuristics for hard optimization: methods and case studies, Springer Science &
Business Media, 2006.
[7] E.-G. Talbi, Metaheuristics: from design to implementation, volume 74, John Wiley & Sons, 2009.
[8] S. Kirkpatrick, C. D. Gelatt, M. P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671–680.
[9] J. H. Holland, Genetic algorithms, Scientific American 267 (1992) 66–73.
[10] J. Luo, H. Chen, Y. Xu, H. Huang, X. Zhao, et al., An improved grasshopper optimization algorithm with application to financial
stress prediction, Applied Mathematical Modelling 64 (2018) 654–668.
[11] M. Wang, H. Chen, B. Yang, X. Zhao, L. Hu, Z. Cai, H. Huang, C. Tong, Toward an optimal kernel extreme learning machine
using a chaotic moth-flame optimization strategy with applications in medical diagnoses, Neurocomputing 267 (2017) 69–84.
[12] L. Shen, H. Chen, Z. Yu, W. Kang, B. Zhang, H. Li, B. Yang, D. Liu, Evolving support vector machines using fruit fly optimization
for medical data classification, Knowledge-Based Systems 96 (2016) 61–75.
[13] Q. Zhang, H. Chen, J. Luo, Y. Xu, C. Wu, C. Li, Chaos enhanced bacterial foraging optimization for global optimization, IEEE
Access (2018).
[14] A. A. Heidari, R. A. Abbaspour, A. R. Jordehi, An efficient chaotic water cycle algorithm for optimization tasks, Neural Computing
and Applications 28 (2017) 57–85.
[15] M. Mafarja, I. Aljarah, A. A. Heidari, A. I. Hammouri, H. Faris, A.-Z. AlaM, S. Mirjalili, Evolutionary population dynamics and
grasshopper optimization approaches for feature selection problems, Knowledge-Based Systems 145 (2018) 25 – 45.
[16] M. Mafarja, I. Aljarah, A. A. Heidari, H. Faris, P. Fournier-Viger, X. Li, S. Mirjalili, Binary dragonfly optimization for feature
selection using time-varying transfer functions, Knowledge-Based Systems 161 (2018) 185 – 204.
[17] I. Aljarah, M. Mafarja, A. A. Heidari, H. Faris, Y. Zhang, S. Mirjalili, Asynchronous accelerating multi-leader salp chains for
feature selection, Applied Soft Computing 71 (2018) 964–979.
[18] S. Mirjalili, A. Lewis, The whale optimization algorithm, Advances in Engineering Software 95 (2016) 51–67.
[19] H. Faris, M. M. Mafarja, A. A. Heidari, I. Aljarah, A.-Z. AlaM, S. Mirjalili, H. Fujita, An efficient binary salp swarm algorithm
with crossover scheme for feature selection problems, Knowledge-Based Systems 154 (2018) 43–67.
[20] J. R. Koza, Genetic Programming II, Automatic Discovery of Reusable Subprograms, MIT Press, Cambridge, MA, 1992.
[21] R. Storn, K. Price, Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces, Journal
of global optimization 11 (1997) 341–359.
[22] D. Simon, Biogeography-based optimization, IEEE transactions on evolutionary computation 12 (2008) 702–713.
[23] O. K. Erol, I. Eksin, A new optimization method: big bang–big crunch, Advances in Engineering Software 37 (2006) 106–111.
[24] R. A. Formato, Central force optimization, Progress in Electromagnetics Research 77 (2007) 425–491.
[25] E. Rashedi, H. Nezamabadi-Pour, S. Saryazdi, GSA: a gravitational search algorithm, Information Sciences 179 (2009) 2232–2248.
[26] S. Salcedo-Sanz, Modern meta-heuristics based on nonlinear physics processes: A review of models and design procedures, Physics
Reports 655 (2016) 1–70.
[27] F. Glover, Tabu search - part I, ORSA Journal on Computing 1 (1989) 190–206.
[28] M. Kumar, A. J. Kulkarni, S. C. Satapathy, Socio evolution & learning optimization algorithm: A socio-inspired optimization
methodology, Future Generation Computer Systems 81 (2018) 252–272.
[29] R. V. Rao, V. J. Savsani, D. Vakharia, Teaching–learning-based optimization: an optimization method for continuous non-linear
large scale problems, Information Sciences 183 (2012) 1–15.
[30] A. Baykasoğlu, F. B. Ozsoydan, Evolutionary and population-based methods versus constructive search strategies in dynamic
combinatorial optimization, Information Sciences 420 (2017) 159–183.
[31] A. A. Heidari, H. Faris, I. Aljarah, S. Mirjalili, An efficient hybrid multilayer perceptron neural network with grasshopper
optimization, Soft Computing (2018) 1–18.
[32] R. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Micro Machine and Human Science, 1995. MHS’95.,
Proceedings of the Sixth International Symposium on, IEEE, pp. 39–43.
[33] M. Dorigo, V. Maniezzo, A. Colorni, Ant system: optimization by a colony of cooperating agents, IEEE Transactions on Systems,
Man, and Cybernetics, Part B (Cybernetics) 26 (1996) 29–41.
[34] A. H. Gandomi, X.-S. Yang, A. H. Alavi, Cuckoo search algorithm: a metaheuristic approach to solve structural optimization
problems, Engineering with computers 29 (2013) 17–35.
[35] X.-S. Yang, Review of meta-heuristics and generalised evolutionary walk algorithm, International Journal of Bio-Inspired Com-
putation 3 (2011) 77–84.
[36] D. H. Wolpert, W. G. Macready, No free lunch theorems for optimization, IEEE transactions on evolutionary computation 1
(1997) 67–82.
[37] J. C. Bednarz, Cooperative hunting in Harris' hawks (Parabuteo unicinctus), Science 239 (1988) 1525.
[38] L. Lefebvre, P. Whittle, E. Lascaris, A. Finkelstein, Feeding innovations and forebrain size in birds, Animal Behaviour 53 (1997)
549–560.
[39] D. Sol, R. P. Duncan, T. M. Blackburn, P. Cassey, L. Lefebvre, Big brains, enhanced cognition, and response of birds to novel
environments, Proceedings of the National Academy of Sciences of the United States of America 102 (2005) 5460–5465.
[40] F. Dubois, L.-A. Giraldeau, I. M. Hamilton, J. W. Grant, L. Lefebvre, Distraction sneakers decrease the expected level of aggression
within groups: a game-theoretic model, The American Naturalist 164 (2004) E32–E45.
[41] EurekAlert!/AAAS, Bird IQ test takes flight, 2005.
[42] N. E. Humphries, N. Queiroz, J. R. Dyer, N. G. Pade, M. K. Musyl, K. M. Schaefer, D. W. Fuller, J. M. Brunnschweiler, T. K.
Doyle, J. D. Houghton, et al., Environmental context explains lévy and brownian movement patterns of marine predators, Nature
465 (2010) 1066–1069.
[43] G. M. Viswanathan, V. Afanasyev, S. Buldyrev, E. Murphy, P. Prince, H. E. Stanley, Lévy flight search patterns of wandering
albatrosses, Nature 381 (1996) 413.
[44] D. W. Sims, E. J. Southall, N. E. Humphries, G. C. Hays, C. J. Bradshaw, J. W. Pitchford, A. James, M. Z. Ahmed, A. S.
Brierley, M. A. Hindell, et al., Scaling laws of marine predator search behaviour, Nature 451 (2008) 1098–1102.
[45] A. O. Gautestad, I. Mysterud, Complex animal distribution and abundance from memory-dependent kinetics, ecological complexity
3 (2006) 44–55.
[46] M. F. Shlesinger, Levy flights: Variations on a theme, Physica D: Nonlinear Phenomena 38 (1989) 304–309.
[47] G. Viswanathan, V. Afanasyev, S. V. Buldyrev, S. Havlin, M. Da Luz, E. Raposo, H. E. Stanley, Lévy flights in random searches,
Physica A: Statistical Mechanics and its Applications 282 (2000) 1–12.
[48] X.-S. Yang, Nature-inspired metaheuristic algorithms, Luniver press, 2010.
[49] X. Yao, Y. Liu, G. Lin, Evolutionary programming made faster, IEEE Transactions on Evolutionary computation 3 (1999) 82–102.
[50] J. G. Digalakis, K. G. Margaritis, On benchmarking functions for genetic algorithms, International journal of computer mathe-
matics 77 (2001) 481–506.
[51] S. Garcı́a, D. Molina, M. Lozano, F. Herrera, A study on the use of non-parametric tests for analyzing the evolutionary algorithms
behaviour: a case study on the cec2005 special session on real parameter optimization, Journal of Heuristics 15 (2009) 617.
[52] X.-S. Yang, A. Hossein Gandomi, Bat algorithm: a novel approach for global engineering optimization, Engineering Computations
29 (2012) 464–483.
[53] X.-S. Yang, M. Karamanoglu, X. He, Flower pollination algorithm: a novel approach for multiobjective optimization, Engineering
Optimization 46 (2014) 1222–1237.
[54] A. H. Gandomi, X.-S. Yang, A. H. Alavi, Mixed variable structural optimization using firefly algorithm, Computers & Structures
89 (2011) 2325–2336.
[55] S. Mirjalili, S. M. Mirjalili, A. Lewis, Grey wolf optimizer, Advances in Engineering Software 69 (2014) 46–61.
[56] S. Mirjalili, Moth-flame optimization algorithm: A novel nature-inspired heuristic paradigm, Knowledge-Based Systems 89 (2015)
228–249.
[57] J. Derrac, S. Garcı́a, D. Molina, F. Herrera, A practical tutorial on the use of nonparametric statistical tests as a methodology
for comparing evolutionary and swarm intelligence algorithms, Swarm and Evolutionary Computation 1 (2011) 3–18.
[58] X.-S. Yang, Firefly algorithm, stochastic test functions and design optimisation, International Journal of Bio-Inspired Computation
2 (2010) 78–84.
[59] F. Van Den Bergh, A. P. Engelbrecht, A study of particle swarm optimization particle trajectories, Information sciences 176
(2006) 937–971.
[60] S. Mirjalili, A. H. Gandomi, S. Z. Mirjalili, S. Saremi, H. Faris, S. M. Mirjalili, Salp swarm algorithm: A bio-inspired optimizer
for engineering design problems, Advances in Engineering Software (2017).
[61] H. Eskandar, A. Sadollah, A. Bahreininejad, M. Hamdi, Water cycle algorithm–a novel metaheuristic optimization method for
solving constrained engineering optimization problems, Computers & Structures 110 (2012) 151–166.
[62] S. Saremi, S. Mirjalili, A. Lewis, Grasshopper optimisation algorithm: Theory and application, Advances in Engineering Software
105 (2017) 30–47.
[63] C. A. C. Coello, Use of a self-adaptive penalty approach for engineering optimization problems, Computers in Industry 41 (2000)
113–127.
[64] M. Zhang, W. Luo, X. Wang, Differential evolution with dynamic stochastic selection for constrained optimization, Information
Sciences 178 (2008) 3043–3074.
[65] S. Mirjalili, S. M. Mirjalili, A. Hatamlou, Multi-verse optimizer: a nature-inspired algorithm for global optimization, Neural
Computing and Applications 27 (2016) 495–513.
[66] H. Liu, Z. Cai, Y. Wang, Hybridizing particle swarm optimization with differential evolution for constrained numerical and
engineering optimization, Applied Soft Computing 10 (2010) 629–640.
[67] A. Sadollah, A. Bahreininejad, H. Eskandar, M. Hamdi, Mine blast algorithm: A new population based algorithm for solving
constrained engineering optimization problems, Applied Soft Computing 13 (2013) 2592–2612.
[68] J.-F. Tsai, Global optimization of nonlinear fractional programming problems in engineering design, Engineering Optimization
37 (2005) 399–409.
[69] T. Ray, P. Saini, Engineering design optimization using a swarm with an intelligent information sharing among individuals,
Engineering Optimization 33 (2001) 735–748.
[70] A. Kaveh, A. Dadras, A novel meta-heuristic optimization algorithm: Thermal exchange optimization, Advances in Engineering
Software 110 (2017) 69 – 84.
[71] H. Salimi, Stochastic fractal search: a powerful metaheuristic algorithm, Knowledge-Based Systems 75 (2015) 1–18.
[72] J. S. Arora, Introduction to Optimum Design, McGraw-Hill Book Company, 1989.
[73] K. Deb, Optimal design of a welded beam via genetic algorithms, AIAA journal 29 (1991) 2013–2015.
[74] C. A. C. Coello, E. M. Montes, Constraint-handling in genetic algorithms through the use of dominance-based tournament
selection, Advanced Engineering Informatics 16 (2002) 193–203.
[75] A. D. Belegundu, J. S. Arora, A study of mathematical programming methods for structural optimization. part i: Theory,
International Journal for Numerical Methods in Engineering 21 (1985) 1583–1599.
[76] Q. He, L. Wang, An effective co-evolutionary particle swarm optimization for constrained engineering design problems, Engineering
Applications of Artificial Intelligence 20 (2007) 89–99.
[77] L. Wang, L.-p. Li, An effective differential evolution with level comparison for constrained engineering design, Structural and
Multidisciplinary Optimization 41 (2010) 947–963.
[78] Y. Wang, Z. Cai, Y. Zhou, Z. Fan, Constrained optimization based on hybrid evolutionary algorithm and adaptive constraint-
handling technique, Structural and Multidisciplinary Optimization 37 (2009) 395–413.
[79] A. Kaveh, T. Bakhshpoori, Water evaporation optimization: a novel physically inspired optimization algorithm, Computers &
Structures 167 (2016) 69–85.
[80] A. H. Gandomi, X.-S. Yang, A. H. Alavi, S. Talatahari, Bat algorithm for constrained optimization tasks, Neural Computing and
Applications 22 (2013) 1239–1255.
[81] E. Mezura-Montes, C. A. C. Coello, A simple multimembered evolution strategy to solve constrained optimization problems,
IEEE Transactions on Evolutionary computation 9 (2005) 1–17.
[82] W. Gong, Z. Cai, D. Liang, Engineering optimization by means of an improved constrained differential evolution, Computer
Methods in Applied Mechanics and Engineering 268 (2014) 884–904.
[83] Q. He, L. Wang, A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization, Applied
mathematics and computation 186 (2007) 1407–1422.
[84] L. dos Santos Coelho, Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design
problems, Expert Systems with Applications 37 (2010) 1676–1683.
[85] H. Rosenbrock, An automatic method for finding the greatest or least value of a function, The Computer Journal 3 (1960)
175–184.
[86] A. Kaveh, S. Talatahari, A novel heuristic optimization method: charged system search, Acta Mechanica 213 (2010) 267–289.
[87] M. Montemurro, A. Vincenti, P. Vannucci, The automatic dynamic penalisation method (adp) for handling constraints with
genetic algorithms, Computer Methods in Applied Mechanics and Engineering 256 (2013) 70–87.
[88] E. Mezura-Montes, C. Coello Coello, J. Velázquez-Reyes, L. Muñoz-Dávila, Multiple trial vectors in differential evolution for
engineering design, Engineering Optimization 39 (2007) 567–589.
[89] K. Ragsdell, D. Phillips, Optimal design of a class of welded structures using geometric programming, Journal of Engineering for
Industry 98 (1976) 1021–1025.
[90] K. S. Lee, Z. W. Geem, A new structural optimization method based on the harmony search algorithm, Computers & structures
82 (2004) 781–798.
[91] F.-z. Huang, L. Wang, Q. He, An effective co-evolutionary differential evolution for constrained optimization, Applied Mathematics
and computation 186 (2007) 340–356.
[92] P. Savsani, V. Savsani, Passing vehicle search (pvs): A novel metaheuristic algorithm, Applied Mathematical Modelling 40 (2016)
3951–3978.
[93] R. V. Rao, V. J. Savsani, D. Vakharia, Teaching–learning-based optimization: a novel method for constrained mechanical design
optimization problems, Computer-Aided Design 43 (2011) 303–315.
[94] S. Gupta, R. Tiwari, S. B. Nair, Multi-objective design optimisation of rolling bearings using genetic algorithms, Mechanism and
Machine Theory 42 (2007) 1418–1443.
Ali Asghar Heidari is now a Ph.D. research intern at the School of Computing, National University of Singapore (NUS). He is also an exceptionally talented Ph.D. candidate at the University of Tehran, awarded and funded by Iran's National Elites Foundation (INEF). His main research interests are advanced machine learning, evolutionary computation, meta-heuristics, prediction, information systems, and spatial modeling. He has published more than ten papers in international journals such as Information Fusion, Energy Conversion and Management, Applied Soft Computing, and Knowledge-Based Systems.
Majdi Mafarja received his B.Sc. in Software Engineering and M.Sc. in Computer Information Systems from Philadelphia University and the Arab Academy for Banking and Financial Sciences, Jordan, in 2005 and 2007, respectively. Dr. Mafarja did his Ph.D. in Computer Science at the National University of Malaysia (UKM). He was a member of the Data Mining and Optimization Research Group (DMO). He is now an assistant professor at the Department of Computer Science at Birzeit University. His research interests include evolutionary computation, meta-heuristics, and data mining.