Dung beetle optimizer: a new meta-heuristic algorithm for global optimization
J. Xue, B. Shen
https://doi.org/10.1007/s11227-022-04959-6
Abstract
In this paper, a novel population-based technique called the dung beetle optimizer (DBO) algorithm is presented, which is inspired by the ball-rolling, dancing, foraging, stealing, and reproduction behaviors of dung beetles. The newly proposed DBO algorithm takes into account both global exploration and local exploitation, thereby having the characteristics of a fast convergence rate and satisfactory solution accuracy. A series of well-known mathematical test functions (including both 23 benchmark functions and 29 CEC-BC-2017 test functions) is employed to evaluate the search capability of the DBO algorithm. From the simulation results, it is observed that the DBO algorithm presents substantially competitive performance against state-of-the-art optimization approaches in terms of convergence rate, solution accuracy, and stability. In addition, the Wilcoxon signed-rank test and the Friedman test are used to evaluate the experimental results of the algorithms, which proves the superiority of the DBO algorithm over other currently popular optimization techniques. In order to further illustrate its practical application potential, the DBO algorithm is successfully applied to three engineering design problems. The experimental results demonstrate that the proposed DBO algorithm can effectively deal with real-world application problems.
* Corresponding author: Bo Shen ([email protected])

1 College of Information Science and Technology, Donghua University, Shanghai, China
2 Engineering Research Center of Digitalized Textile and Fashion Technology, Ministry of Education, Shanghai, China
1 Introduction
Optimization problems have long been the focus of research and exist in a variety of real-world systems, including fault diagnosis systems [1, 2], energy management systems [3, 4], forecasting systems [5, 6], and so on [7, 8]. It should be noticed that a large number of complex optimization problems (e.g., NP-complete problems) are particularly difficult to settle using conventional mathematical programming techniques such as the conjugate gradient and quasi-Newton methods [9]. In this regard, a great number of swarm intelligence (SI) optimization algorithms have been introduced, with the merits of easy implementation, self-learning ability, and a simple framework. Specifically, an SI system can be viewed as a swarm where each individual denotes a candidate solution in the entire search space. In addition, the characteristic of an SI system is that individual interactions promote the emergence of intelligent behavior. It is worth pointing out that the realization of the optimization process mainly includes the following two steps: 1) creating a group of random individuals within the scope of the search space and 2) combining, moving, or evolving these random individuals during the iteration process. This is also the main framework of almost all SI-based techniques in dealing with optimization problems. Note that what differentiates each optimization algorithm is how it designs new strategies (especially for combining, moving, or evolving) during the optimization process.
For example, a well-known population-based technique, namely the particle swarm optimization (PSO) technique, has gained much research attention owing to the advantages of a fast convergence rate, few parameters, and satisfactory solution accuracy [10, 11]. To be specific, in a standard PSO algorithm, all particles thoroughly explore and exploit the search space of the optimization problem according to their velocity and position. Then, the movement of the whole group evolves from disorder to order, and eventually all particles cluster at the best position. It should be mentioned that the position of each individual can gradually converge to the global optimal solution during the evolution process, which mainly depends on the following two important positions: 1) the personal best position of each individual and 2) the global best position of the entire population. In addition, the ant colony optimization (ACO) algorithm has become as famous an SI-based technique as the PSO approach [12, 13]. More specifically, in an ACO algorithm, the path of each ant indicates a feasible solution, and the paths of all ants constitute the solution space. Note that the selection of the pheromone is of crucial importance for the ant colony to search for the globally optimal path in the whole optimization problem space. In such a case, the shortest path is obtained along the direction with the stronger level of pheromone.
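The standard PSO update described above, with its personal best and global best attractors, can be sketched as follows (an illustrative NumPy implementation; the coefficient values are typical textbook choices, not values taken from this paper):

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One velocity/position update of standard PSO (illustrative sketch).

    x, v      : (N, D) particle positions and velocities
    pbest     : (N, D) personal best positions
    gbest     : (D,)   global best position of the entire population
    w, c1, c2 : inertia and acceleration coefficients (assumed typical values)
    """
    rng = np.random.default_rng() if rng is None else rng
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    # each particle is pulled toward its personal best and the global best
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

With zero velocity and all attractors equal to the current positions, the swarm stays put, which matches the intuition that movement arises only from the two "best position" terms.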
Up to now, many novel SI-based techniques have been proposed with the purpose of obtaining more optimizers with good search ability that can solve various practical optimization problems more efficiently, and their performance has been verified by a large number of experiments in the existing literature. Similarly, these new SI-based algorithms also mainly mimic the social behaviors of living creatures (e.g., fish, insects, and birds) in nature [14]. For example, the grey wolf optimizer (GWO) algorithm has been presented in [15], which simulates the leadership hierarchy (including alpha, beta, delta, and omega) and hunting behavior of grey wolves. Note that the hunting mechanism of the GWO algorithm mainly consists of searching for, encircling, and attacking prey. Recently, the whale optimization algorithm (WOA) has been put forward in [16], where the bubble-net hunting strategy that mimics the social behavior of humpback whales was first proposed. More recently, a new SI-based technique, the Harris hawks optimizer (HHO) algorithm, has been proposed in [17], which simulates the living behavior of Harris' hawks. The HHO algorithm has attracted the attention of researchers from both industrial and academic societies due to its strong capability of searching around the global optimum [18-20]. Moreover, there are several other promising SI-based algorithms as well; see [21-25] for more discussions.
It is well known that a great number of SI-based algorithms have played a vitally important role in many practical applications. For example, it has been shown in [26, 27] that the ACO algorithm has satisfactory search capability in dealing with traveling salesman problems. Similarly, during the past few decades, the PSO algorithm has been widely implemented for solving complex optimization problems in areas such as large-scale complex networks [28], fuzzy systems [29], complicated multimodal optimization systems [30], and other fields [31]. Furthermore, in the past few years, newly developed SI-based approaches have also been successfully applied to various applications with the purpose of providing more suitable solutions for different optimization problems. For instance, in [32], by introducing two new strategies, namely reverse learning and the Levy flight disturbance, an improved WOA (IWOA) has been proposed and then used to identify the unknown parameters of a static var compensator (SVC) model in order to verify the optimization performance of the IWOA. In [33], a hybrid HHO algorithm, namely the comprehensive learning Harris hawks-equilibrium optimization (CLHHEO) algorithm, has been designed by utilizing the comprehensive learning strategy, the operator of the equilibrium optimizer, and the terminal replacement mechanism. The CLHHEO algorithm has been successfully used in the optimization of several engineering design problems (e.g., the car side impact design and the multiple disc clutch brake design). The sparrow search algorithm (SSA) has been designed in [23], and a random walk SSA (RWSSA) has been proposed in [34] to optimize the distribution and signal coverage of 5G networks in open-pit mines. In summary, the emerging SI-based techniques have received much research attention in dealing with various practical applications.
On the other hand, according to the No Free Lunch (NFL) theorem, we know that no single algorithm can deal with all optimization problems. In other words, an algorithm may perform well on one set of problems and poorly on another. Therefore, the NFL theorem encourages finding and developing more optimizers with satisfactory performance. Motivated by the above discussions, a new SI-based technique called the dung beetle optimizer (DBO) is developed with the aim of providing a more efficient optimizer for solving complex optimization problems. Correspondingly, the main contributions of this paper can be highlighted in the following three aspects: 1) the DBO algorithm is put forward, in which five different updating rules inspired by the ball-rolling, dancing, foraging, stealing, and reproduction behaviors of dung beetles are designed to help find high-quality solutions; 2) the DBO algorithm is comprehensively evaluated through a series of mathematical test functions (including both 23 benchmark functions and 29 CEC-BC-2017 test functions), and the experimental results indicate that the DBO algorithm exhibits more competitive performance than state-of-the-art optimization techniques; and 3) the DBO algorithm is successfully employed in several practical engineering design problems, which illustrates that this algorithm is promising for solving real-world application problems.
The remainder of this paper is structured as follows. The main inspiration and biological foundations of this paper, together with the proposed DBO algorithm, are introduced in Sect. 2. Benchmark test functions, parameter settings, optimization results, and discussions are presented in Sect. 3. Section 4 introduces the applications of the DBO algorithm to three engineering design problems in detail. Finally, the conclusions of the paper are given in Sect. 5.
2 Dung beetle optimizer (DBO)
In this section, a novel SI-based optimization technique called the DBO algorithm is discussed in detail, covering the following two aspects: 1) the inspiration and 2) the mathematical model.
2.1 Inspiration
There are various species of dung beetles, such as Copris ochus Motschulsky, Onthophagus gibbulus, Caccobius jessoensis Harold, and so on. It is well known that the dung beetle, as a common insect in nature, feeds on the dung of animals. Note that dung beetles are found in most parts of the world and act as decomposers in nature, which means that they are of vital importance in the ecosystem. Research has shown that dung beetles have an interesting habit of making dung into a ball and then rolling it away, as captured in Fig. 1. It is worth mentioning that dung beetles aim to move their dung ball as quickly and efficiently as possible, which prevents the ball from being contested by other dung beetles [35].
As shown in Fig. 1, a dung beetle rolls backward a dung ball that is larger than itself. Moreover, a fascinating behavior of the dung beetle is its ability to use celestial cues (especially the sun, the moon, and polarized light) to navigate and make the dung ball roll along a straight line [36, 37]. However, if there is no light source at all (that is, in total darkness), the dung beetle's path is no longer straight but curved, and sometimes even slightly circular [35]. Note that a number of natural factors (such as wind and uneven ground) can cause dung beetles to deviate from their original direction. In addition, the dung beetle is likely to encounter obstacles that leave it unable to move forward during the rolling process. In this regard, dung beetles usually climb on top of the dung ball and dance (performing a series of rotations and pauses), which determines their direction of movement [38].
Another interesting behavior observed in the dung beetle lifestyle is that the obtained dung balls serve the following two main purposes: 1) some dung balls are used to lay eggs and raise the next generation, and 2) the rest are used as food. Specifically, dung beetles bury the dung balls, and the females lay their eggs in them. It should be noted that the dung ball not only serves as a developmental site for the larva, but also provides the larva with food that is essential for life. Therefore, dung balls play an irreplaceable role in the survival of dung beetles.
The new SI optimization algorithm, the DBO technique, is mainly inspired by the ball-rolling, dancing, foraging, stealing, and reproduction behaviors of dung beetles. In the next subsection, the behavior of the dung beetle is modeled mathematically with the hope of developing an optimizer with satisfactory search performance.
According to the above discussion, we know that dung beetles need to navigate via celestial cues during the rolling process in order to keep the dung ball rolling along a straight path. To simulate the ball-rolling behavior, dung beetles are required to move in a given direction throughout the entire search space. The trajectory of a dung beetle is given in Fig. 2, where a dung beetle uses the sun to navigate and the red arrow indicates the rolling direction. In this paper, we assume that the intensity of the light source also affects the dung beetle's path. During the rolling process, the position of the ball-rolling dung beetle is updated as

x_i(t + 1) = x_i(t) + α × k × x_i(t − 1) + b × Δx,
Δx = |x_i(t) − X^w|,    (1)

where t represents the current iteration number, x_i(t) denotes the position information of the ith dung beetle at the tth iteration, k ∈ (0, 0.2] is a constant indicating the deflection coefficient, b is a constant belonging to (0, 1), α is a natural coefficient that is assigned −1 or 1, X^w indicates the global worst position, and Δx is used to simulate changes of the light intensity.
Remark 1 In (1), it is crucial to select appropriate values for the two parameters k and b. Note that α reflects the fact that many natural factors (such as wind and uneven ground) can deflect dung beetles from their original direction. Specifically, α = 1 indicates no deviation, and α = −1 indicates deviation from the original direction. In this paper, α is set to 1 or −1 by a probability method in order to simulate the complex environment of the real world (see Algorithm 1). Similarly, a higher value of Δx indicates a weaker light source. In addition, k and b are set to 0.1 and 0.3, respectively. The term Δx brings the ball-rolling dung beetle the following two merits: 1) exploring the whole problem space as thoroughly as possible during the optimization process; and 2) pursuing stronger search performance and reducing the possibility of falling into local optima. Hence, the global worst position X^w is used to control Δx so as to expand the search scope.
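The ball-rolling update of Eq. (1) can be sketched as follows (an illustrative NumPy implementation; the probability used to choose α is not specified in the text, so the 0.1 deflection probability below is an assumption):

```python
import numpy as np

def roll_ball(x_t, x_prev, x_worst, k=0.1, b=0.3, rng=None):
    """Ball-rolling update of Eq. (1):
        x_i(t+1) = x_i(t) + alpha * k * x_i(t-1) + b * |x_i(t) - X_w|

    x_t, x_prev : position at iterations t and t-1
    x_worst     : global worst position X^w
    k, b        : deflection coefficient and light-intensity weight (paper's values)
    """
    rng = np.random.default_rng() if rng is None else rng
    # alpha = -1 models deflection by wind/uneven ground; the 0.1 probability
    # is an illustrative assumption, not taken from the paper
    alpha = 1.0 if rng.random() > 0.1 else -1.0
    dx = np.abs(x_t - x_worst)  # simulated change of light intensity
    return x_t + alpha * k * x_prev + b * dx
```

Because Δx is measured against the global worst position, agents far from X^w take larger steps, which is what widens the search scope.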
When the dung beetle encounters an obstacle and cannot move forward, it needs to reorient itself through dancing with the purpose of obtaining a new route. Notably, the dance behavior plays an important role for ball-rolling dung beetles.

Fig. 3 Conceptual model of the tangent function and the dance behavior of a dung beetle

In order to mimic the dance behavior, we use the tangent function to obtain the new rolling direction. It should be mentioned that we only need to consider the values of the tangent function defined on the interval [0, π], as shown in Fig. 3. Once the dung beetle has successfully determined a new orientation, it continues to roll the ball backward. Therefore, the position of the ball-rolling dung beetle is updated as follows:

x_i(t + 1) = x_i(t) + tan(θ) |x_i(t) − x_i(t − 1)|,    (2)

where θ is the deflection angle belonging to [0, π].
Remark 2 In (2), |x_i(t) − x_i(t − 1)| is the difference between the position of the ith dung beetle at the tth iteration and its position at the (t − 1)th iteration. Thus, the position update of the ball-rolling dung beetle is closely related to both the current and the historical information. Note that, if θ equals 0, π/2, or π, the position of the dung beetle is not updated (see Algorithm 2).
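The dance reorientation of Eq. (2) and the special cases of Remark 2 can be sketched as follows (an illustrative NumPy implementation; sampling θ uniformly on [0, π] is an assumption, since the text does not state how θ is drawn):

```python
import numpy as np

def dance(x_t, x_prev, rng=None):
    """Dance update of Eq. (2):
        x_i(t+1) = x_i(t) + tan(theta) * |x_i(t) - x_i(t-1)|,  theta in [0, pi].

    If theta is 0, pi/2, or pi, the position is left unchanged (Remark 2).
    """
    rng = np.random.default_rng() if rng is None else rng
    theta = rng.random() * np.pi  # assumed uniform draw on [0, pi]
    if np.isclose(theta, 0.0) or np.isclose(theta, np.pi / 2) or np.isclose(theta, np.pi):
        return x_t.copy()  # degenerate angles: no update
    return x_t + np.tan(theta) * np.abs(x_t - x_prev)
```

Note that when the beetle has not moved between iterations (x_t equals x_prev), the step length is zero and the position stays put regardless of θ.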
In nature, dung balls are rolled to a safe place and hidden by dung beetles (see Fig. 4). For the purpose of providing a safe environment for their offspring, the choice of the right place to lay their eggs is crucial for dung beetles. Inspired by the above discussions, a boundary selection strategy is proposed to simulate the area where female dung beetles lay their eggs, which is defined by
Some dung beetles that have grown into adults emerge from the ground to find food (see Fig. 6); in this paper, we call them small dung beetles. In addition, we need to establish the optimal foraging area to guide the foraging beetles, which simulates the foraging process of these dung beetles in nature. Specifically, the boundary of the optimal foraging area is defined as follows:
Based on the previous discussions, the pseudo code of the proposed DBO algorithm is shown in Algorithm 4. Firstly, let T_max be the maximum number of iterations and N be the population size. Then, all agents of the DBO algorithm are randomly initialized, and their distribution settings are shown in Fig. 8. In this figure, the number of small rectangles denotes the population size; specifically, the population size is assumed to be 30. It is worth mentioning that the blue, yellow, green, and red rectangles represent the ball-rolling dung beetle, the brood ball, the small dung beetle, and the thief, respectively. After that, according to steps 2-27 of Algorithm 4, the positions of the ball-rolling dung beetle, the brood ball, the small dung beetle, and the thief are constantly updated during the optimization process. Finally, the best position X^b and its fitness value are output. In summary, for any optimization problem, the DBO algorithm, as a novel SI-based optimization technique, mainly consists of six steps, which can be outlined as follows: 1) initialize the dung beetle swarm and the parameters of the DBO algorithm; 2) calculate the fitness values of all agents according to the objective function; 3) update the locations of all dung beetles; 4) judge whether each agent is out of the boundary; 5) renew the current optimal solution and its fitness value; and 6) repeat the above steps until t meets the termination criterion, and output the global optimal solution and its fitness value.
Remark 3 According to the developed DBO algorithm, each dung beetle swarm consists of four distinct kinds of agents, i.e., the ball-rolling dung beetle, the brood ball, the small dung beetle, and the thief. More specifically, in the DBO algorithm, a dung beetle population includes N agents, where each agent i represents a candidate solution. The position vector of the ith agent at the tth iteration is denoted by x_i(t) = (x_i^1(t), x_i^2(t), ..., x_i^D(t)), where D is the dimension of the search space. The distribution ratio of the four kinds of agents is not fixed and can be set according to the real-world application problem. For instance, as shown in Fig. 8, the size of the dung beetle swarm is N = 30, and the numbers of ball-rolling dung beetles, brood balls, small dung beetles, and thieves are 6, 6, 7, and 11, respectively. It should be noticed that the sum of these numbers equals the entire population size, which is set to 30.
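The six steps above can be sketched as a loop (illustrative only: the five role-specific updating rules of Algorithm 4 are collapsed here into a single placeholder move toward the current best, so this is a skeleton of the step structure, not the full DBO):

```python
import numpy as np

def dbo_sketch(f, dim, lb, ub, n=30, t_max=100, seed=0):
    """Skeleton of the six DBO steps; the role-specific moves (ball-rolling,
    brood ball, small dung beetle, thief) are abbreviated to one placeholder."""
    rng = np.random.default_rng(seed)
    # 1) initialize the swarm (roles would be split e.g. 6/6/7/11 as in Fig. 8)
    x = rng.uniform(lb, ub, size=(n, dim))
    fit = np.apply_along_axis(f, 1, x)          # 2) fitness of all agents
    best = x[fit.argmin()].copy()
    best_f = fit.min()
    for t in range(t_max):
        # 3) update all positions (placeholder move toward the best agent)
        x = x + rng.random((n, dim)) * (best - x)
        x = np.clip(x, lb, ub)                  # 4) boundary check
        fit = np.apply_along_axis(f, 1, x)
        if fit.min() < best_f:                  # 5) renew the current optimum
            best, best_f = x[fit.argmin()].copy(), fit.min()
    return best, best_f                         # 6) stop after t_max iterations
```

Even with the simplified move, the skeleton shows where each of the six steps lives in the loop and how the global best X^b and its fitness are carried forward.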
3 Experimental results
In this section, the search performance of the developed DBO technique is verified via 23 classical test functions (including unimodal, multimodal, and fixed-dimension multimodal functions) and another 29 competition functions known as CEC-BC-2017 [39]. Meanwhile, the developed DBO algorithm is compared with seven well-studied optimization techniques (the HHO [17], PSO [10], WOA [16], MVO [40], SCA [41], SSA [22], and GWO [15] algorithms). It should be noticed that these algorithms cover almost all of the recently proposed techniques, such as the HHO, WOA, MVO, SCA, SSA, and GWO algorithms, and also include the most classical optimization approach, namely the PSO algorithm. Figure 9 demonstrates the two-dimensional shape of some test functions that this study employs to evaluate the DBO algorithm.
3.1 Experimental settings
All experiments are conducted in the same environment with the purpose of ensuring the fairness of the comparison. Specifically, the evaluation criteria adopted in the experiments are intended to fairly compare the comprehensive search ability of different optimization approaches. Therefore, for the 23 classical functions, the size of
3.2 Performance indicators
Two statistical tools are used in this paper, namely, the mean value and the standard
deviation (Std Dev). Their mathematical formulations are given as follows:
M = (1/P) ∑_{i=1}^{P} f_i,

Std = √( (1/(P − 1)) ∑_{i=1}^{P} (f_i − M)² ),    (8)

where P is the number of independent runs and f_i is the fitness value obtained in the ith run.
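The two statistics of Eq. (8) amount to the sample mean and the sample standard deviation with a P − 1 divisor, as this short sketch shows:

```python
import numpy as np

def mean_and_std(fitness):
    """Mean M and standard deviation Std of Eq. (8) over P independent runs."""
    f = np.asarray(fitness, dtype=float)
    m = f.mean()
    # P - 1 in the denominator makes this the sample (Bessel-corrected) std,
    # i.e. the same as np.std(f, ddof=1)
    std = np.sqrt(np.sum((f - m) ** 2) / (len(f) - 1))
    return m, std
```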
It is worth mentioning that F1-F7 are typical unimodal test functions, since each includes only one global best solution; they have been widely used in the swarm intelligence optimization community. Note that these test functions are applied to evaluate the exploitation capability of the introduced DBO algorithm. The detailed simulation results are given in Table 2, where the Std Dev and the mean of the fitness value are shown to measure the search performance of the algorithms. It should be noticed that the Std Dev is presented in parentheses.
It is observable from Table 2 that, for the classical test functions F1-F4, the DBO algorithm demonstrates superiority over the other seven optimization algorithms in terms of the evaluation indices, including the mean and the Std Dev. Specifically, the mean fitness value of the DBO algorithm is closer to the theoretical optimum than those of the other algorithms for F1-F4, indicating that the DBO algorithm has high exploitation ability. Moreover, although the DBO algorithm cannot obtain the best mean for F5 and F7, it still achieves the second rank after the HHO algorithm.
Unlike the functions F1-F7 with only one global best solution, multimodal functions feature many local minima, which makes it hard to find the best solution in the search space. Meanwhile, as the dimension increases, the number of local optima of multimodal functions grows exponentially. Therefore, the functions F8-F23 (including the high-dimensional functions F8-F13 and the fixed-dimension test functions F14-F23) are very useful for verifying the exploration ability of the DBO approach. The statistical results of the DBO algorithm on the sixteen multimodal test functions are illustrated in Table 2.
In Table 2, for the high-dimensional multimodal functions F9-F11, the proposed DBO algorithm shows better search ability than the PSO, GWO, SSA, SCA, and MVO algorithms. Moreover, it is apparent that, on F9 and F11, the DBO algorithm is able to find the global optimum. For function F8, the developed DBO algorithm achieves the third rank after the WOA and HHO algorithms, which demonstrates more competitive results than those of the PSO, GWO, SSA, SCA, and MVO algorithms. For function F13, the HHO algorithm exhibits better optimization performance and obtains the first rank, while the DBO algorithm still shows competitive performance compared with the other SI-based approaches. For the fixed-dimension multimodal functions, the search capability of all algorithms is similar, and the optimization results obtained by the DBO algorithm are competitive. For functions F16-F18, the proposed algorithm can obtain the global optimum.
In this section, the sensitivity of the three control parameters (k, b, and S) employed in the DBO algorithm is studied in detail. This study is mainly used to demonstrate which parameters are robust and which ones are sensitive to different inputs and thus affect the search performance of the algorithm. Three benchmark test functions (the unimodal function F6, the multimodal function F9, and the fixed-dimension multimodal function F14) are used to form a full factorial design over these control parameters. The parameter values are k = {0.01, 0.05, 0.1, 0.15, 0.2}, b = {0.1, 0.3, 0.5, 0.7, 0.9}, and S = {0.1, 0.5, 1, 1.5, 2}. Thus, there is a total of 5 × 5 × 5 = 125 combinations in the full factorial design.
The sensitivity analysis is displayed in Fig. 10, where the horizontal coordinate indicates the mean fitness and the vertical coordinate lists the three control parameters. We can see from Fig. 10a that S demonstrates highly sensitive behavior, and S = 0.5 yields the best performance on F6. In addition, k and b show similar sensitivity to different inputs. Note that k = 0.1 and b = 0.3 produce the optimal behavior. It is observable in Fig. 10b that b shows more sensitivity than k and S, while S demonstrates robust behavior for all values. Thus, k = 0.1 and b = 0.3 are the best selection for the multimodal function F9. In Fig. 10c, S exhibits sensitive behavior near its left boundary while showing lower sensitivity once its value reaches 0.5, and b demonstrates robust behavior for different values. Moreover, we can see that k = 0.1 obtains the best search performance on F14. Based on the above discussions, in this paper, k = 0.1, b = 0.3, and S = 0.5 are selected as the recommended parameter values for the DBO algorithm.
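The 125-run full factorial design above can be enumerated directly (a sketch; the five k levels are assumed to be {0.01, 0.05, 0.1, 0.15, 0.2}, since five distinct values are needed for a 5 × 5 × 5 design):

```python
from itertools import product

# Full factorial design over the three control parameters of the DBO algorithm
k_vals = [0.01, 0.05, 0.1, 0.15, 0.2]   # deflection coefficient k (assumed levels)
b_vals = [0.1, 0.3, 0.5, 0.7, 0.9]      # light-intensity weight b
s_vals = [0.1, 0.5, 1, 1.5, 2]          # parameter S

# every (k, b, S) triple: 5 x 5 x 5 = 125 combinations in total
combinations = list(product(k_vals, b_vals, s_vals))
```

Each triple would then be run on F6, F9, and F14, and the mean fitness per setting plotted as in Fig. 10; the recommended setting (k, b, S) = (0.1, 0.3, 0.5) is one of these 125 combinations.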
The convergence curves of the developed DBO algorithm and the other seven optimization approaches are displayed in Fig. 11, where the horizontal coordinate represents the iteration number and the vertical coordinate indicates the average best-so-far value. Nine benchmark test functions (including four unimodal test functions (F1-F3 and F7), three multimodal test functions (F10-F12), and two fixed-dimension test functions (F16 and F18)) are employed to evaluate the convergence performance of the DBO algorithm.
From Fig. 11, we can see that, compared with the other algorithms, the DBO algorithm obtains a satisfactory convergence rate for the unimodal functions F1-F3. This is because the DBO algorithm can thoroughly search for promising regions in the early part of the iterations and converge toward the global best position as quickly as possible in the later part. On F7, the convergence speed of the DBO approach is better than those of the PSO, GWO, WOA, SSA, SCA, and MVO algorithms. By observing the convergence curves for the multimodal functions F10-F12, it can be seen that the DBO approach is able to avoid many local optima effectively and eventually converge to the best position in the search space. On F16 and F18, the DBO algorithm converges rapidly in the early part of the iterations. These results indicate that the proposed DBO approach has a higher success ratio than the state-of-the-art optimization techniques.
Table 2 Results of DBO, PSO, GWO, WOA, SSA, SCA, MVO and HHO on benchmark test functions
Function DBO HHO GWO WOA SSA SCA MVO PSO
F15 6.89E−04 3.69E−04 0.003803 0.001240 0.001530 0.001042 0.004062 8.83E−04
(3.14E−04) (1.95E−04) (0.007538) (0.002267) (0.003572) (3.37E−04) (0.007420) (2.79E−04)
F16 -1.03162 -1.03162 -1.03162 -1.03162 -1.03162 -1.03158 -1.03162 -1.03162
(5.97E−16) (1.68E−09) (3.03E−08) (8.65E−10) (3.22E−14) (3.01E−05) (3.79E−07) (6.58E−16)
F17 0.397887 0.397892 0.397889 0.397892 0.397886 0.400546 0.397888 0.397887
(0.0000) (1.189E−05) (3.7E−06) (9.74E−06) (6.48E−16) (0.00271) (1.27E−06) (0.0000)
F18 3.00000 3.00000 3.00003 3.00014 3.00000 3.00000 3.00000 3.00000
(4.96E−15) (4.17E−07) (3.40E−05) (3.65E−04) (2.57E−13) (1.56E−04) (3.11E−06) (2.57E−15)
F19 -3.86146 -3.85836 -3.86177 -3.85445 -3.86278 -3.85489 -3.86278 -3.86278
(0.002987) (0.00550) (0.002109) (0.01253) (3.62E−11) (0.00355) (1.12E−06) (2.68E−15)
F20 -3.23117 -3.11713 -3.28175 -3.22736 -3.23454 -2.88365 -3.27375 -3.25462
(0.08642) (0.10387) (0.07149) (0.10708) (0.06349) (0.41253) (0.06009) (0.05992)
F21 -8.00253 -5.21137 -9.30944 -7.84871 -8.23099 -2.14380 -7.37213 -6.47169
(2.51305) (0.87671) (1.91518) (2.68595) (3.06179) (1.75588) (2.91485) (3.20279)
F22 -8.51348 -5.25759 -10.40132 -7.95908 -9.71800 -3.34127 -8.48390 -7.53426
(2.75257) (0.95917) (0.00102) (3.11684) (2.12043) (1.74926) (2.81493) (3.44292)
F23 -9.01011 -5.18520 -10.53463 -6.90157 -8.33435 -3.75404 -8.51946 -8.49766
(2.61416) (0.99404) (0.00105) (3.30492) (3.23138) (1.65319) (3.20394) (3.25049)
+/-/= ∼ 14/5/4 16/3/4 14/1/8 15/1/7 23/0/0 16/2/5 13/4/6
Mean 2.36956 3.10869 4.08695 4.76086 4.93478 7.06521 4.97826 4.56521
Rank 1 2 3 5 6 8 7 4
From Table 3, we can see that the developed DBO technique is superior to the PSO approach on 13 functions, to the GWO algorithm on 16 functions, to the WOA on 14 functions, to the SSA on 15 functions, to the SCA on 23 functions, to the MVO algorithm on 16 functions, and to the HHO algorithm on 14 functions. Based on the Friedman test results, the mean ranking value (Mean) of the DBO algorithm is 2.36956, which is lower than those of the other optimization approaches. In general, the developed DBO algorithm can keep an adequate balance between the local and global search abilities.
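The Friedman mean ranks reported in the "Mean" row of Table 2 can be reproduced with a short rank computation (a sketch: ties are broken by column order here rather than assigned average ranks, and the corresponding significance tests are available as scipy.stats.wilcoxon and scipy.stats.friedmanchisquare):

```python
import numpy as np

def friedman_mean_ranks(results):
    """Mean Friedman ranks for a (n_functions, n_algorithms) array of mean
    fitness values, lower being better. Returns the average rank per algorithm
    (1 = best). Ties are broken by column order for simplicity, not averaged."""
    results = np.asarray(results, dtype=float)
    # double argsort converts each row of values into within-row ranks
    ranks = results.argsort(axis=1).argsort(axis=1) + 1
    return ranks.mean(axis=0)
```

The algorithm with the smallest mean rank (2.36956 for DBO in Table 2) is then assigned overall rank 1.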
To further verify the search performance of the DBO technique, a set of challenging test functions, CEC-BC-2017, is employed. It is worth pointing out that f2 has been removed from the CEC-BC-2017 suite. The experimental results of the DBO algorithm and the other optimizers are illustrated in Table 4, where the mean and the Std Dev of the fitness value are used to evaluate the search accuracy and stability of each algorithm (the Std Dev is presented in parentheses).
In Table 4, the developed DBO approach achieves a smaller mean fitness value than the GWO, WOA, SCA, MVO, and HHO algorithms for function f1. In addition, the DBO algorithm is able to find the global optimum for function f3, which effectively proves that this technique has satisfactory search performance. The results for f4-f10 show that the DBO algorithm exhibits more competitive performance than most of the compared algorithms, because it reduces the possibility of falling into local optima and efficiently balances exploitation and exploration. For the hybrid functions (f13, f17, and f20), the DBO approach ranks first among these optimization techniques. For function f14, the DBO algorithm obtains the third rank after the PSO and MVO algorithms but outperforms the other five optimization algorithms (namely the GWO, WOA, SCA, SSA, and HHO algorithms). For functions f18 and f19, the developed DBO algorithm achieves the second rank after the PSO and MVO algorithms, respectively. Furthermore, on f21-f30, the search capability of the DBO method still outperforms that of several well-known high-performance optimizers.
Figure 12 gives the convergence curves of the DBO, PSO, HHO, WOA, MVO, SCA, SSA, and GWO algorithms on the CEC-BC-2017 suite. Six CEC-BC-2017 functions, covering the unimodal (f3), multimodal (f8), hybrid (f13 and f17), and composition (f24 and f27) cases, are applied to evaluate the convergence performance of the proposed DBO algorithm. Obviously, the DBO algorithm can find relatively satisfactory mean fitness values with a faster convergence speed than the other optimization approaches during the iteration process. Similarly, the Wilcoxon signed-rank test and the Friedman test are also used to evaluate the experimental results of the algorithms. According to the p-values shown in Table 5, the DBO algorithm is superior to the PSO algorithm on 14 functions, the GWO algorithm on 13 functions, the WOA on 23 functions, the SSA on 10 functions, the SCA on 21 functions, the MVO algorithm on 13 functions, and the HHO algorithm on 20 functions. Moreover, from the results of the Friedman test listed in Table 4, the mean rank of the DBO algorithm is 2.67241, significantly outperforming the PSO, GWO, WOA, SSA, SCA, MVO, and HHO algorithms.

Fig. 10 Sensitivity analysis of the DBO's parameters for a F6, b F9, and c F14
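The Friedman comparison above boils down to ranking every algorithm on each test function and averaging the ranks (lower fitness is better, so rank 1 is best). A minimal sketch of that computation follows; the fitness values in `demo` are made-up illustrations, not the paper's data, and ties are simply broken by algorithm name here.

```python
def friedman_mean_ranks(results):
    """results: dict mapping algorithm name -> list of mean fitness values,
    one entry per test function (lower is better). Returns the mean rank of
    each algorithm averaged over all functions."""
    names = list(results)
    n_funcs = len(next(iter(results.values())))
    totals = {name: 0.0 for name in names}
    for f in range(n_funcs):
        # Sort algorithms by their fitness on function f (ties broken by name).
        scores = sorted((results[name][f], name) for name in names)
        for rank, (_, name) in enumerate(scores, start=1):
            totals[name] += rank
    return {name: total / n_funcs for name, total in totals.items()}

# Illustrative (fabricated) mean fitness values on three functions:
demo = {"DBO": [1.0, 2.0, 3.0], "PSO": [2.0, 1.0, 4.0], "GWO": [3.0, 3.0, 5.0]}
print(friedman_mean_ranks(demo))
```

A smaller mean rank corresponds to more consistent wins across the function suite, which is what the reported value of 2.67241 for DBO expresses.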
It is well known that the balance between global exploration and local exploitation is crucial to finding the optimal solution successfully. In order to better quantify the local and global search processes of the DBO algorithm in solving the CEC-BC-2017 suite, the percentages of exploration (Xpl) and exploitation (Xpt) are calculated by

Xpl% = (Div / Div_max) × 100,

Xpt% = (|Div − Div_max| / Div_max) × 100,
                                                                        (9)
Div_j = (1/N) Σ_{i=1}^{N} |median(x^j) − x_i^j|,

Div = (1/D) Σ_{j=1}^{D} Div_j
Table 3 p-values of Wilcoxon’s signed-rank test at 5% level of significance in benchmark test functions
Function HHO GWO WOA SSA SCA MVO PSO
13
7326 J. Xue, B. Shen
20
F1 10
F2 20
F3
10 10 10
0 0
10 10
0
10
Average Best−so−far
Average Best−so−far
Average Best−so−far
−20 −10
10 10
−20
−40 −20 10
10 10
PSO PSO PSO
−60 −30
10 DBO 10 DBO −40 DBO
10
GWO GWO GWO
−80 WOA −40 WOA WOA
10 10
SSA SSA SSA
−60
SCA SCA 10 SCA
−100 −50
10 MVO 10 MVO MVO
HHO HHO HHO
−120 −60 −80
10 10 10
0 100 200 300 400 500 0 100 200 300 400 500 0 100 200 300 400 500
Iteration Number Iteration Number Iteration Number
3
F7 F10 F11
10 10
5
700
PSO PSO PSO
DBO
10
2 DBO DBO
GWO 600
WOA 0 GWO GWO
10 WOA
SSA WOA
1
SSA
Average Best−so−far
Average Best−so−far
10
Average Best−so−far
−4 −20
10 10 0
0 100 200 300 400 500 0 100 200 300 400 500 0 100 200 300 400 500
Iteration Number Iteration Number Iteration Number
F12
10
10
F16 F18
PSO 0.4 45
PSO PSO
DBO
0.2 DBO 40 DBO
GWO GWO GWO
10
5 WOA WOA WOA
35
Average Best−so−far
Average Best−so−far
SCA SCA 30 SCA
MVO −0.2 MVO MVO
HHO HHO
0 HHO 25
10
−0.4
20
−0.6
15
−5
10 −0.8
10
−1 5
−10
10 −1.2 0
0 100 200 300 400 500 0 100 200 300 400 500 0 100 200 300 400 500
Iteration Number Iteration Number Iteration Number
Fig. 11 The convergence curves by the DBO algorithm and other optimizers on benchmark test functions
where N is the size of the dung beetle population, median(x^j) denotes the median of dimension j over all search agents, x_i^j indicates the jth dimension of the ith agent, and D represents the dimension of the optimization problem.

During the evolution process, the exploration and exploitation ratios can be graphically observed in Fig. 13, where the X axis is the iteration number and the Y axis is the percentage of exploration or exploitation. It can be clearly seen from Fig. 13 that the DBO algorithm maintains a powerful exploration capability in the early stage of the iteration process, and the percentage of exploitation then increases with the iteration number. Therefore, the DBO approach explores the search space more extensively in the early part of the optimization process and exhibits stronger local exploitation in the later part. To summarize, our proposed DBO algorithm achieves a good balance between local and global search.
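The quantities in Eq. (9) can be sketched in a few lines. In this sketch, `population` is a hypothetical list of N agents with D coordinates each, and `div_max` is assumed to be the largest diversity value observed over the whole run:

```python
import statistics

def diversity(population):
    """Div of Eq. (9): for each dimension j, average |median(x^j) - x_i^j|
    over the N agents, then average those values over the D dimensions."""
    n = len(population)
    d = len(population[0])
    div_per_dim = []
    for j in range(d):
        col = [agent[j] for agent in population]
        med = statistics.median(col)
        div_per_dim.append(sum(abs(med - x) for x in col) / n)
    return sum(div_per_dim) / d

def xpl_xpt(div, div_max):
    """Exploration (Xpl%) and exploitation (Xpt%) percentages at one iteration."""
    xpl = 100.0 * div / div_max
    xpt = 100.0 * abs(div - div_max) / div_max
    return xpl, xpt
```

Note that Xpl% and Xpt% always sum to 100 whenever Div ≤ Div_max, which is why the curves in Fig. 13 mirror one another.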
In this section, three well-known engineering design problems are applied to test the
optimization performance of the developed DBO algorithm in solving various practical
applications. It should be mentioned that the penalty function is employed to deal with
inequality constraints in this paper. Similar to the mathematical benchmark functions
described previously, for the DBO algorithm, the population size is N = 30 and the
maximum iteration number is Tmax = 500.
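The paper states that a penalty function handles the inequality constraints but does not spell out its form. A common static-penalty sketch is shown below; the quadratic violation term and the weight `mu` are assumptions for illustration, not the paper's exact settings.

```python
def penalized(objective, constraints, mu=1e6):
    """Wrap an objective f(s) so that any violated inequality constraint
    g_i(s) <= 0 adds a quadratic penalty mu * sum(max(0, g_i(s))^2)."""
    def f(s):
        violation = sum(max(0.0, g(s)) ** 2 for g in constraints)
        return objective(s) + mu * violation
    return f
```

With this wrapper, an unconstrained optimizer such as DBO can minimize `penalized(obj, [g1, g2, g3])` directly: feasible points are scored by the raw objective, while infeasible ones are pushed away in proportion to how badly they violate the constraints.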
Table 4 Results of DBO, PSO, GWO, WOA, SSA, SCA, MVO and HHO on CEC-BC-2017 test functions (columns: Function, DBO, HHO, GWO, WOA, SSA, SCA, MVO, PSO)
The design of the three-bar truss is a classical engineering application problem, which has become one of the most studied cases. The goal is to achieve the minimum weight by adjusting two parameters (s1 and s2) while satisfying three constraints (g1, g2, and g3). The objective function of this problem can be described as follows:
min f(s) = (2√2·s1 + s2) × V

s.t. g1 = [(√2·s1 + s2) / (√2·s1² + 2·s1·s2)]·Q − σ ≤ 0,

     g2 = [s2 / (√2·s1² + 2·s1·s2)]·Q − σ ≤ 0,                          (10)

     g3 = [1 / (√2·s2 + s1)]·Q − σ ≤ 0
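The truss objective and constraints above translate directly into code. The constants used below (V = 100 cm, load Q = 2 kN/cm², stress limit σ = 2 kN/cm²) are the values commonly used for this benchmark in the literature; treat them as assumptions if the paper's setup differs.

```python
import math

V, Q, SIGMA = 100.0, 2.0, 2.0  # assumed benchmark constants, see lead-in

def weight(s1, s2):
    """Objective of Eq. (10): truss weight (2*sqrt(2)*s1 + s2) * V."""
    return (2 * math.sqrt(2) * s1 + s2) * V

def constraints(s1, s2):
    """Stress constraints g1, g2, g3 of Eq. (10); feasible when all <= 0."""
    denom = math.sqrt(2) * s1**2 + 2 * s1 * s2
    g1 = (math.sqrt(2) * s1 + s2) / denom * Q - SIGMA
    g2 = s2 / denom * Q - SIGMA
    g3 = 1.0 / (math.sqrt(2) * s2 + s1) * Q - SIGMA
    return g1, g2, g3
```

Evaluated at the well-known near-optimal point (s1, s2) ≈ (0.78868, 0.40825), the weight comes out close to 263.9 with all three constraints satisfied, which matches the figures typically reported for this problem.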
The design of the pressure vessel is to minimize the fabrication cost of a vessel based on four parameters (s1, s2, s3, and s4) and four constraints (g1, g2, g3, and g4). The fabrication cost can be formulated as follows:
The design of the welded beam is presented as the third practical application problem of this paper. The objective of this problem is to find the minimum manufacturing cost of the welded beam design. It should be mentioned that the welded beam design problem has four parameters (s1, s2, s3, and s4) and seven constraints (g1, g2, g3, g4, g5, g6, and g7). The manufacturing cost can be formulated as follows:
where
Table 5 p-values obtained from Wilcoxon's signed-rank test in CEC-BC-2017 test functions (columns: Function, HHO, GWO, WOA, SSA, SCA, MVO, PSO)
Fig. 12 Convergence curves of the DBO, PSO, GWO, WOA, SSA, SCA, MVO and HHO on CEC-BC-2017 test functions (six panels: f3, f8, f13, f17, f24, and f27; x-axis: Function Evaluations (FEs), y-axis: Average Best-so-far)
Fig. 13 Exploration and exploitation of the DBO algorithm with different functions on the CEC 2017 (nine panels; x-axis: Iteration, y-axis: Percentage)
τ = √( τ1² + 2·τ1·τ2·(s2 / (2r)) + τ2² ),   τ1 = p / (√2·s1·s2),   τ2 = m·r / j,

m = p·(l + s2/2),   j = 2·{ √2·s1·s2·[ s2²/12 + ((s1 + s3)/2)² ] },

r = √( s2²/4 + ((s1 + s3)/2)² ),   σ = 6·p·l / (s4·s3²),   δ = 6·p·l³ / (E·s3²·s4),

pc = (4.013·E·√(s3²·s4⁶ / 36) / l²)·(1 − (s3 / (2l))·√(E / (4G))),

G = 12 × 10⁶ psi,   E = 30 × 10⁶ psi,   p = 6000 lb,   l = 14 in.
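The auxiliary quantities for the welded beam can be evaluated in a small helper; the variable names below mirror the paper's symbols, and the material constants default to the values listed above (G = 12×10⁶ psi, E = 30×10⁶ psi, p = 6000 lb, l = 14 in). This is a sketch of the auxiliary formulas only, not the full cost function.

```python
import math

def welded_beam_aux(s1, s2, s3, s4, p=6000.0, l=14.0, E=30e6, G=12e6):
    """Shear stress tau, bending stress sigma, deflection delta, and
    buckling load pc for a candidate welded-beam design (s1..s4)."""
    tau1 = p / (math.sqrt(2) * s1 * s2)
    m = p * (l + s2 / 2.0)
    r = math.sqrt(s2**2 / 4.0 + ((s1 + s3) / 2.0) ** 2)
    j = 2.0 * (math.sqrt(2) * s1 * s2 * (s2**2 / 12.0 + ((s1 + s3) / 2.0) ** 2))
    tau2 = m * r / j
    tau = math.sqrt(tau1**2 + 2 * tau1 * tau2 * s2 / (2 * r) + tau2**2)
    sigma = 6 * p * l / (s4 * s3**2)
    delta = 6 * p * l**3 / (E * s3**2 * s4)
    pc = (4.013 * E * math.sqrt(s3**2 * s4**6 / 36.0) / l**2) * (
        1 - s3 / (2 * l) * math.sqrt(E / (4 * G)))
    return tau, sigma, delta, pc
```

These four quantities feed the constraints g1-g7 (e.g. τ and σ must stay below their allowable limits), so a constrained optimizer evaluates this helper at every candidate design.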
Several optimization algorithms have previously been applied to the welded beam design problem, such as the HHO algorithm [17], GSA [46], WOA [16], MVO algorithm [40], MPA [48], CPSO algorithm [49], HS algorithm [50], and SSA [22]. The optimal results of each algorithm for this design problem are illustrated in Table 8. It can be seen from Table 8 that the DBO algorithm obtains the optimal solution s = (0.20162, 3.3477, 9.0405, 0.20606) with a minimum manufacturing cost equal to 1.7050958. It is worth mentioning that the DBO technique represents substantial progress compared with the GSA and HS algorithms.
Once more, all these simulation results prove the advantages of the developed DBO approach in solving practical engineering applications with many inequality constraints. Therefore, we can conclude that the developed DBO algorithm shows competitive performance compared to the state-of-the-art optimization algorithms.
Table 6 Optimum results and comparison for the three-bar truss design problem (columns: Algorithm, s1, s2, Optimal weight)

Table 7 Optimum results and comparison for the pressure vessel design problem (columns: Algorithm, s1, s2, s3, s4, Optimal cost)
5 Conclusion
A novel SI-based technique called the DBO algorithm has been put forward in this
paper with the hope of providing a more efficient optimizer to deal with complex
optimization problems. It is worth mentioning that five different updating rules
inspired by the ball-rolling, dancing, foraging, stealing, and reproduction behaviors
of dung beetles in nature have been designed to help find high-quality solutions.
The DBO algorithm has demonstrated competitive search performance in terms of the convergence rate, solution accuracy, and stability by comparison with seven well-studied optimization techniques (the HHO, PSO, WOA, MVO, SCA, SSA, and GWO algorithms) on a total of 52 test functions, including both 23 classical functions and 29 CEC-BC-2017 test functions. In addition, the DBO algorithm has been successfully applied to three engineering design problems, which further demonstrates its practical application potential.

Table 8 Optimum results and comparison for the welded beam design problem (columns: Algorithm, s1, s2, s3, s4, Optimal cost)
Funding This work was supported in part by the National Natural Science Foundation of China under
Grants 61873059 and 61922024, and the Program of Shanghai Academic/Technology Research Leader
under Grant 20XD1420100.
Data availability All data generated or analysed during this study are included in this published article.
Declarations
Conflict of interest The authors declare that they have no conflict of interest.
References
1. Qin Y, Jin L, Zhang A, He B (2020) Rolling bearing fault diagnosis with adaptive harmonic kurtosis and
improved bat algorithm. IEEE Trans Instrum Meas 70:1–12
2. Li M, Yan C, Liu W, Liu X, Zhang M, Xue J (2022) Fault diagnosis model of rolling bearing based on
parameter adaptive AVMD algorithm. Appl Intell. https://doi.org/10.1007/s10489-022-03562-9
3. Karami H, Ehteram M, Mousavi S-F, Farzin S, Kisi O, El-Shafie A (2019) Optimization of energy man-
agement and conversion in the water systems based on evolutionary algorithms. Neural Comput Appl
31(10):5951–5964
4. Singh AR, Ding L, Raju DK, Raghav LP, Kumar RS (2022) A swarm intelligence approach for energy management of grid-connected microgrids with flexible load demand response. Int J Energy Res 46(4):4301–4319
5. Li J, Lei Y, Yang S (2022) Mid-long term load forecasting model based on support vector machine opti-
mized by improved sparrow search algorithm. Energy Rep 8:491–497
6. Wei D, Wang J, Li Z, Wang R (2021) Wind power curve modeling with hybrid copula and grey wolf opti-
mization. IEEE Trans Sustain Energy 13(1):265–276
7. Zhang Y, Mo Y (2022) Chaotic adaptive sailfish optimizer with genetic characteristics for global optimiza-
tion. J Supercomput 78:10950–10996. https://doi.org/10.1007/s11227-021-04255-9
8. Abdulhammed O (2022) Load balancing of IoT tasks in the cloud computing by using sparrow search
algorithm. J Supercomput 78:3266–3287. https://doi.org/10.1007/s11227-021-03989-w
9. Wu G (2016) Across neighborhood search for numerical optimization. Inf Sci 329:597–618
10. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of IEEE International Con-
ference on Neural Networks, 1942–1948
11. Liu W, Wang Z, Yuan Y, Zeng N, Hone K, Liu X (2021) A novel sigmoid-function-based adaptive
weighted particle swarm optimizer. IEEE Trans Cybern 51(2):1085–1093
12. Liu J, Yang J, Liu H, Tian X, Gao M (2017) An improved ant colony algorithm for robot path planning.
Soft Comput 21(19):5829–5839
13. Dorigo M (1996) The ant system: optimization by a colony of cooperating agents. IEEE Trans Syst Man Cybern Part B Cybern 26(1):1–13
14. Li M, Xu G, Fu B, Zhao X (2022) Whale optimization algorithm based on dynamic pinhole imaging
and adaptive strategy. J Supercomput 78:6090–6120. https://doi.org/10.1007/s11227-021-04116-5
15. Mirjalili S, Mirjalili SM, Lewis A (2014) Grey wolf optimizer. Adv Eng Softw 69:46–61
16. Mirjalili S, Lewis A (2016) The whale optimization algorithm. Adv Eng Softw 95:51–67
17. Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H (2019) Harris hawks optimization:
algorithm and applications. Future Gener Comput Syst 97:849–872
18. Abbasi A, Firouzi B, Sendur P (2021) On the application of Harris Hawks Optimization (HHO) algo-
rithm to the design of microchannel heat sinks. Eng Comput 37(2):1409–1428
19. Cai J, Luo T, Xu G, Tang Y (2022) A novel biologically inspired approach for clustering and multi-level image thresholding: modified harris hawks optimizer. Cogn Comput. https://doi.org/10.1007/s12559-022-09998-y
20. Liu C (2021) An improved Harris Hawks Optimizer for job-shop scheduling problem. J Supercomput
77:14090–14129. https://doi.org/10.1007/s11227-021-03834-0
21. Gandomi AH, Alavi AH (2012) Krill herd: a new bio-inspired optimization algorithm. Commun Non-
linear Sci 17(12):4831–4845
22. Mirjalili S, Gandomi AH, Mirjalili SZ, Saremi S, Faris H, Mirjalili SM (2017) Salp swarm algorithm: a
bio-inspired optimizer for engineering design problems. Adv Eng Softw 114:163–191
23. Xue J, Shen B (2020) A novel swarm intelligence optimization approach: sparrow search algorithm.
Syst Sci Control Eng 8(1):22–34
24. Yang X-S (2010) Firefly algorithm, stochastic test functions and design optimisation. Int J Bio-Inspired
Comput 2(2):78–84
25. Yang X-S (2010) A new metaheuristic bat-inspired algorithm. In: Nature Inspired Cooperative Strate-
gies for Optimization (NICSO 2010), pp 65–74
26. Ebadinezhad S (2020) DEACO: adopting dynamic evaporation strategy to enhance ACO algorithm for
the traveling salesman problem. Eng Appl Artif Intel 92:103649
27. Yang K, You X, Liu S, Pan H (2020) A novel ant colony optimization based on game for traveling
salesman problem. Appl Intell 50(12):4529–4542
28. Liu Y, Chen S, Guan B, Xu P (2019) Layout optimization of large-scale oil-gas gathering system based
on combined optimization strategy. Neurocomputing 332:159–183
29. Huang M, Lin H, Yunkai H, Jin P, Guo Y (2012) Fuzzy control for flux weakening of hybrid excit-
ing synchronous motor based on particle swarm optimization algorithm. IEEE Trans Magn
48(11):2989–2992
30. Zeng N, Wang Z, Liu W, Zhang H, Hone K, Liu X (2020) A dynamic neighborhood-based switching particle swarm optimization algorithm. IEEE Trans Cybern. https://doi.org/10.1109/TCYB.2020.3029748
31. Liu W, Wang Z, Liu X, Zeng N, Bell D (2018) A novel particle swarm optimization approach for
patient clustering from emergency departments. IEEE Trans Evol Comput 23(4):632–644
32. Guo Q, Gao L, Chu X, Sun H (2022) Parameter identification of static var compensator model using
sensitivity analysis and improved whale optimization algorithm. CSEE J Power Energy 8(2):535–547
33. Zhong C, Li G (2022) Comprehensive learning Harris Hawks-Equilibrium optimization with terminal replacement mechanism for constrained optimization problems. Expert Syst Appl. https://doi.org/10.1016/j.eswa.2021.116432
34. Chang Z, Gu Q, Lu C, Zhang Y, Ruan S, Jiang S (2021) 5G private network deployment optimization
based on RWSSA in open-pit mine. IEEE Trans Ind Inform. https://doi.org/10.1109/TII.2021.3132041
35. Dacke M, Baird E, El JB, Warrant EJ, Byrne M (2021) How dung beetles steer straight. Annu Rev
Entomol 66:243–256
36. Byrne M, Dacke M, Nordström P, Scholtz C, Warrant E (2003) Visual cues used by ball-rolling dung
beetles for orientation. J Comp Physiol A 189(6):411–418
37. Dacke M, Nilsson D-E, Scholtz CH, Byrne M, Warrant EJ (2003) Insect orientation to polarized moon-
light. Nature 424(6944):33–33
38. Yin Z, Zinn-Björkman L (2020) Simulating rolling paths and reorientation behavior of ball-rolling
dung beetles. J Theor Biol 486:110106
39. Awad NH, Ali MZ, Suganthan PN (2017) Ensemble sinusoidal differential covariance matrix adapta-
tion with euclidean neighborhood for solving cec2017 benchmark problems. In: 2017 IEEE Congress
on Evolutionary Computation (CEC), pp 372–379
40. Mirjalili S, Mirjalili SM, Hatamlou A (2016) Multi-verse optimizer: a nature-inspired algorithm for
global optimization. Neural Comput Appl 27(2):495–513
41. Mirjalili S (2016) SCA: a sine cosine algorithm for solving optimization problems. Knowl Based Syst
96:120–133
42. Liu H, Cai Z, Wang Y (2010) Hybridizing particle swarm optimization with differential evolution for
constrained numerical and engineering optimization. Appl Soft Comput 10(2):629–640
43. Sadollah A, Bahreininejad A, Eskandar H, Hamdi M (2013) Mine blast algorithm: a new popula-
tion based algorithm for solving constrained engineering optimization problems. Appl Soft Comput
13(5):2592–2612
44. Gandomi AH, Yang X-S, Alavi AH (2013) Cuckoo search algorithm: a metaheuristic approach to solve
structural optimization problems. Eng Comput 29(1):17–35
45. He Q, Wang L (2007) An effective co-evolutionary particle swarm optimization for constrained engi-
neering design problems. Eng Appl Artif Intel 20(1):89–99
46. Rashedi E, Nezamabadi-Pour H, Saryazdi S (2009) GSA: a gravitational search algorithm. Inf Sci
179(13):2232–2248
47. He Q, Wang L (2007) A hybrid particle swarm optimization with a feasibility-based rule for constrained
optimization. Appl Math Comput 186(2):1407–1422
48. Faramarzi A, Heidarinejad M, Mirjalili S, Gandomi AH (2020) Marine predators algorithm: A nature-
inspired metaheuristic. Expert Syst Appl. https://doi.org/10.1016/j.eswa.2020.113377
49. Krohling RA, Coelho LS (2006) Coevolutionary particle swarm optimization using gaussian distri-
bution for solving constrained optimization problems. IEEE Trans Syst Man Cybern Part B Cybern
36(6):1407–1416
50. Lee KS, Geem ZW (2005) A new meta-heuristic algorithm for continuous engineering optimization:
harmony search theory and practice. Comput Method Appl Mech Eng 194(36–38):3902–3933
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under
a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted
manuscript version of this article is solely governed by the terms of such publishing agreement and
applicable law.