
Engineering Applications of Artificial Intelligence 114 (2022) 105075

Dandelion Optimizer: A nature-inspired metaheuristic algorithm for engineering applications

Shijie Zhao a,b,∗, Tianran Zhang a, Shilin Ma a, Miao Chen a
a Institute of Intelligence Science and Optimization, Liaoning Technical University, Fuxin, 123000, China
b Institute for Optimization and Decision Analytics, Liaoning Technical University, Fuxin, 123000, China

ARTICLE INFO

Keywords: Nature-inspired metaheuristic algorithm; Swarm intelligence; Dandelion optimizer; Real-world optimization problems

ABSTRACT

This paper proposes a novel swarm-intelligence bioinspired optimization algorithm, called the Dandelion Optimizer (DO), for solving continuous optimization problems. DO simulates the process of dandelion seed long-distance flight relying on wind, which is divided into three stages. In the rising stage, seeds rise in a spiral manner due to the eddies above them, or drift locally in communities, according to the weather conditions. In the descending stage, flying seeds descend steadily by constantly adjusting their direction in the global space. In the landing stage, seeds land in randomly selected positions so that they can grow. The moving trajectories of a seed in the descending stage and the landing stage are described by Brownian motion and a Lévy random walk, respectively. CEC2017 benchmark functions are utilized to evaluate the performance of DO, including its optimization accuracy, stability, convergence, and scalability, through a comparison with 9 well-known nature-inspired metaheuristic algorithms. Finally, the applicability of DO is verified by solving 4 real-world optimization problems. The experimental results indicate that the proposed DO is a higher-performing optimizer with outstanding iterative optimization and strong robustness compared with well-established algorithms. Source code for DO is publicly available at https://ww2.mathworks.cn/matlabcentral/fileexchange/114680-dandelion-optimizer.

1. Introduction

Optimization is the process of seeking the optimal value of an objective function under a series of constraints. Presently, the complexity and diversity of engineering applications give rise to optimization problems that are increasingly characterized by large-scale variable dimensions and high-dimensional objective functions. However, some canonical algorithms, such as Newton's method (Galli and Lin, 2021), the steepest descent method (Pu et al., 2013), and integer programming (Cornuéjols, 2008), tend to sink into local optima due to the lack of random operators when tackling such complex optimization problems. Moreover, canonical algorithms place high requirements on the continuity and differentiability of the objective function, so it is generally acknowledged that they are not conducive to the development of modern universal optimization technology (Jain et al., 2019). Therefore, to handle modern optimization problems effectively, it is crucial to introduce stochastic techniques into the optimization method, which has led to the emergence of metaheuristic algorithms. In essence, metaheuristics are general algorithmic frameworks, often nature-inspired (Bianchi et al., 2009; Blum and Roli, 2003). Randomness is the main characteristic of nature-inspired metaheuristic algorithms (Back, 1996). Such randomness is applied in an intelligent way: new solutions are moved or generated through learning strategies so as to approach the optimal solution effectively (Dorigo and Stützle, 2019). Thus, a nature-inspired metaheuristic algorithm has a greater capacity to escape local extremes than canonical optimization techniques. Accordingly, nature-inspired metaheuristic algorithms do not require much prior information about the problem, but instead rely on several heuristic paradigms to describe the candidate solutions (Halim et al., 2021). Hence, they are broadly applied to black-box problems. Nature-inspired metaheuristic algorithms, which do not depend on gradient information and benefit from an uncomplicated core idea, are successful substitutes for canonical algorithms and can tackle challenging problems more effectively. They have been widely used in many fields, such as mechanical design (Gupta et al., 2021), image segmentation (Kurban et al., 2021), parameter optimization (Zhou et al., 2021), deep learning (Chan-Ley and Olague, 2020), and video tracking (Soubervielle-Montalvo et al., 2022).

In general, nature-inspired metaheuristic algorithms can be roughly divided into four categories based on different heuristic mechanisms (see Fig. 1). The first category is evolutionary algorithms (EAs), based on genetics and organism mutation. Such algorithms generate new individuals through evolutionary operators; the individuals gradually develop into better individuals than those of the last generation and finally search the vicinity of the optimal solution (Fonseca and Fleming, 1995). Therefore, EAs possess strong global optimization ability and local extremum avoidance. The main representative

∗ Corresponding author at: Institute of Intelligence Science and Optimization, Liaoning Technical University, Fuxin, 123000, China.
E-mail address: [email protected] (S. Zhao).

https://doi.org/10.1016/j.engappai.2022.105075
Received 13 January 2022; Received in revised form 28 May 2022; Accepted 11 June 2022
Available online 16 July 2022
0952-1976/© 2022 Elsevier Ltd. All rights reserved.

Fig. 1. Classification of the nature-inspired metaheuristic algorithms.

algorithm of this kind is the genetic algorithm (GA) (Holland, 1992), which simulates the natural selection and genetic mechanisms of Darwinian evolution. Heredity and variation are the core operations of the GA. Other popular examples of EAs include evolutionary programming (EP) (Fogel, 1998), evolution strategy (ES) (Nand et al., 2021), and differential evolution (DE) (Storn and Price, 1997). The second category is the swarm-intelligence (SI) optimization algorithm, which is inspired by the intelligent behaviour of animals, plants or other organisms and belongs to a more mature class of nature-inspired metaheuristic algorithms. Predation, reproduction, and hunting are common social behaviours. SI is activated by a group of randomly produced solutions in the search space and constructs several heuristic methods to mimic such social behaviours, so as to achieve global and local iterative optimization. Multiple search agents can communicate with each other and exhibit complex, ordered swarm-intelligence behaviour through cooperation among individuals (Zahedi et al., 2016). Many well-known algorithms in this class have been proposed. Particle swarm optimization (PSO) (Kennedy and Eberhart, 1995) was developed to mimic the predation behaviour of birds: each particle uses its velocity and position to collect important information during the search process, and the particles are continuously updated to realize iterative optimization. The artificial bee colony (ABC) (Karaboga and Basturk, 2007) approach simulates the honey-gathering behaviour of bees. The motivation for ant colony optimization (ACO) (Dorigo and Blum, 2005) comes from the paths found by ants in the process of looking for food. In recent years, several new SI algorithms have been proposed, including the whale optimization algorithm (WOA) (Mirjalili and Lewis, 2016), the moth swarm algorithm (MSA) (Mohamed et al., 2017), Harris hawks optimization (HHO) (Heidari et al., 2019), the seagull optimization algorithm (SOA) (Dhiman and Kumar, 2019), the tunicate swarm algorithm (TSA) (Kaur et al., 2020), the chimp optimization algorithm (ChOA) (Khishe and Mosavi, 2020), Lévy flight distribution (LFD) (Houssein et al., 2020), the horse herd optimization algorithm (HOA) (MiarNaeimi et al., 2021), the aquila optimizer (AO) (Abualigah et al., 2021), the dwarf mongoose optimization algorithm (DMO) (Agushaka et al., 2022), and the snake optimizer (SO) (Hashim and Hussien, 2022). In addition, some algorithms based on human-specific behaviour have been developed, including the forensic-based investigation algorithm (FBI) (Chou and Nguyen, 2020) and the political optimizer (PO) (Askari et al., 2020). The third category is based on physics or mathematics. This kind of algorithm describes physical phenomena, borrowing physical theories or mathematical formulas to achieve individual iterative optimization (Mohammadi-Balani et al., 2021). Benefiting from the fact that such algorithms can usually explain the resulting heuristic paradigm logically, an increasing number of mathematics-based or physics-based techniques are being studied. Physics-based algorithms include the multiverse optimizer (MVO) (Mirjalili et al., 2016), yin-yang-pair optimization (YYPO) (Punnathanam and Kotecha, 2016), the Lichtenberg algorithm (LA) (Pereira et al., 2021), the Archimedes optimization algorithm (AOA) (Hashim et al., 2021), and atomic orbital search (AOS) (Azizi, 2021). Methods based on mathematics include the sine cosine algorithm (SCA) (Mirjalili, 2016), the golden ratio optimization method (GROM) (Nematollahi et al., 2020), and the Runge Kutta method (RUN) (Ahmadianfar et al., 2021). The last category includes other methods based on real-life situations, such as the fireworks algorithm (FA) (Tan and Zhu, 2010) and chaos game optimization (CGO) (Talatahari and Azizi, 2021).

However, nature-inspired metaheuristic algorithms also have some shortcomings. For example, the sine cosine algorithm and the whale optimization algorithm are sensitive to dimension expansion: although they perform well on low-dimensional problems, they converge slowly and easily fall into local optima on high-dimensional ones. Harris hawks optimization uses only linear decline factors, resulting in a poor balance between exploitation and exploration; it is therefore weak at avoiding the local extreme values of complex functions. The horse herd optimization algorithm has too many parameters that need to be tuned. Lévy flight distribution has higher convergence accuracy but a longer running time.

Proverbially, exploration and exploitation are two of the most important aspects of nature-inspired metaheuristic algorithms (Hussain et al., 2019). Excessive exploration ultimately makes it difficult for the algorithm to converge, while focusing only on exploitation causes the model to easily fall into local optima. Hence, how to strike a balance between exploration and exploitation remains an open problem. The no free lunch (NFL) theorem (Wolpert and Macready, 1997) states that for any algorithm, an improvement in performance on one class of problems will be offset by a decrease in performance on another; no algorithm can be completely suitable for all kinds of problems. Optimization theory therefore encourages the development of more algorithms for solving complex problems.

In this paper, a novel swarm-intelligence bioinspired optimization algorithm, called the Dandelion Optimizer (DO), is proposed to tackle continuous optimization problems. By comparison, previous studies of dandelion plants were mainly inspired by sowing behaviour. In detail, the dandelion algorithm (Gong et al., 2018) updated the next generation of individuals by dynamically regulating the seeding radius of dandelions and their autonomous learning. Building on it, a variant dandelion algorithm (Li et al., 2017) divided the population of dandelions into two subgroups, a core dandelion and assistant dandelions, and employed different sowing behaviours for the two subgroups. On the other hand, the long-distance flight of a dandelion seed is another behaviour important for biological evolution, and no reports based on this behaviour exist in previous studies. Under the influence of wind and weather, different flight strategies cause dandelions to land


Fig. 2. Dandelions in nature.

in different locations. DO simulates the lifetime journey of dandelion seeds flying in the wind as they mature. The contributions of this work are presented as follows.

• According to the characteristics of the long-distance flight of dandelion seeds, mathematical models of dandelion seeds in the rising stage, descending stage and landing stage are constructed under different weather conditions, and the design expressions are explained in detail.
• The international standard CEC2017 benchmark functions include unimodal, multimodal and composition functions, whose optimization is challenging. The proposed algorithm is tested on the CEC2017 suite.
• Statistical analyses, convergence analyses, scalability analyses, Wilcoxon tests, and Friedman tests are employed to evaluate the proposed DO algorithm, and the results are strictly compared with those of 9 well-known nature-inspired metaheuristic algorithms.
• Four classic real-world optimization problems, namely, speed reducer design, tension/compression spring design, welded beam design, and pressure vessel design, are utilized to evaluate the constrained optimization ability of the proposed DO algorithm. Furthermore, fewer evolutionary iterations than used by existing algorithms are employed to verify the efficiency of the proposed DO algorithm.

The rest of this paper is organized as follows. Section 2 introduces the inspiration for DO and the detailed mathematical model building process. In Section 3, the performance of DO is evaluated on the CEC2017 benchmark functions and compared with 9 well-known nature-inspired metaheuristic algorithms. Section 4 describes the application of DO to real-world optimization problems. Finally, Section 5 summarizes the conclusions and gives prospects for future work.

2. Dandelion optimizer

This section describes DO in detail. First, the biological mechanism and motivation of the proposed DO are presented. Then, the mathematical model of DO is formulated, and its expressions are given. Finally, the complexity of DO is analysed and compared with that of other algorithms.

2.1. Inspiration

A dandelion (see Fig. 2), scientifically known as Herba taraxaci, is a perennial herb in the Asteraceae family. These plants can reach more than 20 cm in height. Dandelion heads are shaped like inflorescences. The seeds are usually composed of hundreds of crested hairs, a beak, and an achene (Meng et al., 2016). When the seeds mature, the wind carries them to new locations to breed life. Crown hairs play an important role in the dispersal of Asteraceae seeds: because they prolong the descent of the seeds, these hairs enable the seeds to be blown farther by the wind (Sheldon and Burrows, 1973). The dandelion is one of the most representative plants that rely on wind for seed propagation. Its seed can fly dozens of kilometres in the wind under the right conditions. As a dandelion seed flies, it forms two vortices, which create upwards drag. When seeds fall at lower speeds, the two vortices above become larger and symmetrical (Cummins et al., 2018). A symmetrical vortex ensures the steady descent of the seed; namely, the filament is level with the ground, and the fruit points downwards. For dandelion seeds to fly long distances, they need to be kept at a relatively stable altitude. The separated vortex is maintained at a fixed distance below the dandelion crown (Cavieres et al., 2008). Curiously, the porosity of the dandelion crown appears to be precisely regulated to stabilize the vortex ring. Crested hairs are made of slender filaments that radiate from a central handle, similar to spokes on a bicycle wheel. Such seeds always have between 90 and 110 spokes. This consistency is the key to the stability of the separation vortex above a dandelion seed, thus helping the seed to remain stable during long-distance flight (Casseau et al., 2015).

Wind speed and weather are the two primary factors that affect the spread of dandelion seeds. The wind speed determines whether a seed flies a long or short distance (Soons et al., 2004). The weather controls whether dandelion seeds can fly at all and influences the dandelion's ability to grow in nearby or faraway spaces. Dandelion seeds travel through three stages, which are reported below.

• In the rising stage, in sunny and windy weather, a vortex is generated above the dandelion seed, and the seed rises under the action of a dragging force. Conversely, when the weather is rainy, there are no eddies above the seeds; in this case, the search can only be performed locally.
• In the descending stage, after seeds rise to a certain height, they drop steadily.
• In the landing stage, dandelion seeds eventually land randomly in one place, under the influence of wind and weather, to grow new dandelions.

Dandelions evolve their population by passing their seeds to the next generation on this three-stage basis. The main inspiration for DO in this paper comes from the above three stages. The following subsections describe how these behaviours are modelled in DO.

2.2. Mathematical model

The mathematical expressions for DO are described specifically in this subsection. First, the mathematical expressions for the two kinds of weather conditions are given, and then the mathematical models for the descending stage and landing stage are analysed.
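Before the formal model, the three-stage cycle can already be read as the outline of an optimization loop: rise (explore), descend (move with the group), land (exploit), then let the best seed of each generation lead the next. The sketch below is only a structural orientation in Python (the released source is MATLAB): the three stage functions here are simple placeholder perturbations of our own, not the DO update rules, which are specified by the equations in the following subsections.

```python
import numpy as np

# Placeholder stage updates (our own simplifications, NOT the DO equations):
def rising(X, elite, t, T, rng):      # global move with shrinking noise
    return X + rng.standard_normal(X.shape) * (1.0 - t / T)

def descending(X, elite, t, T, rng):  # drift toward the population mean
    return X + 0.5 * (X.mean(axis=0) - X)

def landing(X, elite, t, T, rng):     # exploit around the current elite
    return X + 0.1 * rng.random(X.shape) * (elite - X)

def dandelion_skeleton(f, lb, ub, pop=20, T=50, seed=0):
    """Three-stage loop: rise, descend, land, then sort by fitness so
    the best seed becomes the elite of the next generation."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    X = rng.random((pop, lb.size)) * (ub - lb) + lb       # uniform seeding
    elite = min(X, key=f).copy()                          # best initial seed
    for t in range(1, T + 1):
        for stage in (rising, descending, landing):       # three stages per iteration
            X = np.clip(stage(X, elite, t, T, rng), lb, ub)
        X = X[np.argsort([f(x) for x in X])]              # ascending fitness order
        if f(X[0]) < f(elite):                            # elite only ever improves
            elite = X[0].copy()
    return elite, f(elite)

best, val = dandelion_skeleton(lambda x: float(np.sum(x * x)), [-5.0] * 3, [5.0] * 3)
```

The elite-preserving sort at the end of each iteration mirrors the population-generation rule described later in Section 2.2.5.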


2.2.1. Initialization

Similar to other nature-inspired metaheuristic algorithms, DO fulfils population evolution and iterative optimization on the basis of population initialization. In the proposed DO algorithm, each dandelion seed is assumed to represent a candidate solution, and the population is expressed as

population = [ x_1^1 ... x_1^Dim ; ⋮ ⋱ ⋮ ; x_pop^1 ... x_pop^Dim ]   (1)

where pop denotes the population size and Dim is the dimension of the variable. Each candidate solution is randomly generated between the upper bound (UB) and the lower bound (LB) of the given problem, and the expression for the ith individual X_i is

X_i = rand × (UB − LB) + LB   (2)

where i is an integer between 1 and pop and rand denotes a random number between 0 and 1. LB and UB are expressed as

LB = [lb_1, ..., lb_Dim],  UB = [ub_1, ..., ub_Dim]   (3)

During initialization, DO regards the individual with the optimal fitness value as the initial elite, which is approximately considered the most suitable position for a dandelion seed to flourish. Taking minimization as an example, the mathematical expression for the initial elite X_elite is

f_best = min(f(X_i)),  X_elite = X(find(f_best == f(X_i)))   (4)

where find() returns the index at which the two fitness values are equal.

Fig. 3. Dynamic trend of α.
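As a concrete illustration of Eqs. (1)–(4), the initialization can be sketched in a few NumPy lines. This is our own sketch, not the authors' MATLAB release, and the helper name `initialize` is ours.

```python
import numpy as np

def initialize(f, lb, ub, pop, rng):
    """Eqs. (1)-(4): uniform seeds in [LB, UB]; the seed with the best
    (minimum) fitness becomes the initial elite."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    # Eq. (2): X_i = rand * (UB - LB) + LB, one row per dandelion seed
    population = rng.random((pop, lb.size)) * (ub - lb) + lb
    fitness = np.array([f(x) for x in population])
    f_best = fitness.min()                          # Eq. (4), minimization
    X_elite = population[fitness.argmin()].copy()   # position matching f_best
    return population, fitness, X_elite, f_best

rng = np.random.default_rng(1)
population, fitness, X_elite, f_best = initialize(
    lambda x: float(np.sum(x * x)), [-10.0, -10.0], [10.0, 10.0], pop=5, rng=rng)
```
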

2.2.2. Rising stage

In the rising stage, dandelion seeds need to reach a certain height before they can float away from their parent. Under the influence of wind speed, air humidity, etc., dandelion seeds rise to different heights. Here, the weather is divided into the following two cases.

Case 1. On a clear day, wind speeds can be regarded as following a lognormal distribution ln Y ∼ N(μ, σ²). Under this distribution, random numbers are distributed more densely along the Y-axis, which increases the chance for dandelion seeds to travel to far regions. Therefore, DO emphasizes exploration in this case. In the search space, dandelion seeds are blown randomly to various locations by the wind. The rising height of a dandelion seed is determined by the wind speed: the stronger the wind, the higher the dandelion flies and the farther the seeds scatter. Affected by the wind speed, the vortices above the dandelion seeds are constantly adjusted, making them rise in a spiral form. The corresponding mathematical expression in this case is

X_{t+1} = X_t + α · v_x · v_y · ln Y · (X_s − X_t)   (5)

where X_t represents the position of the dandelion seed at iteration t, and X_s represents a randomly selected position in the search space at iteration t. Eq. (6) provides the expression for this randomly generated position:

X_s = rand(1, Dim) · (UB − LB) + LB   (6)

ln Y denotes a lognormal distribution subject to μ = 0 and σ² = 1, with density

ln Y = { (1 / (y√(2π))) · exp[−(ln y)² / (2σ²)],  y ≥ 0;  0,  y < 0 }   (7)

In Eq. (7), y denotes the standard normal distribution N(0, 1). α is an adaptive parameter used to adjust the search step length, and its mathematical expression is

α = rand() · ((1/T²)·t² − (2/T)·t + 1)   (8)

Fig. 3 visualizes the dynamic change in α with the number of iterations. According to Fig. 3, α is a random perturbation in [0, 1] undergoing a nonlinear decrease towards 0. Such fluctuations make the algorithm pay much attention to the global search in the early stage and turn to a local search in the later stage, which helps ensure accurate convergence after a full global search. v_x and v_y represent the lift component coefficients of a dandelion due to the separated eddy action. Eq. (9) is utilized to calculate the force in each variable dimension:

r = 1/e^θ,  v_x = r·cos θ,  v_y = r·sin θ   (9)

where θ is a random number in [−π, π].

Case 2. On a rainy day, dandelion seeds cannot rise properly with the wind because of air resistance, humidity and other factors. In this case, dandelion seeds are exploited in their local neighbourhoods, and the corresponding mathematical expression is

X_{t+1} = X_t · k   (10)

where k is used to regulate the local search domain of a dandelion and is calculated by Eq. (11):

q = (1/(T² − 2T + 1))·t² − (2/(T² − 2T + 1))·t + 1 + 1/(T² − 2T + 1),  k = 1 − rand()·q   (11)

Fig. 4. Dynamic trend of k.

Fig. 4 shows the dynamic wave of k. Clearly, k exhibits a "downwards convex" oscillation, which is conducive to local exploitation of the algorithm with a long stride in the early stage and a small step length in the later stage.
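The two weather cases of Eqs. (5)–(11), together with the randn < 1.5 switch that combines them (given as Eq. (12) below), can be sketched as one per-seed update. Again this is an illustrative NumPy sketch, not the reference implementation; `rising_step` and `alpha` are our names.

```python
import numpy as np

def alpha(t, T, rng):
    # Eq. (8): random step length shrinking nonlinearly toward 0
    return rng.random() * ((t / T) ** 2 - 2.0 * t / T + 1.0)

def rising_step(X, t, T, lb, ub, rng):
    """One rising-stage update: clear weather (randn < 1.5) takes the
    spiral lognormal move of Eq. (5); otherwise the seed contracts
    locally via k (rainy day, Eqs. (10)-(11))."""
    X_new = np.empty_like(X)
    dim = X.shape[1]
    denom = T * T - 2 * T + 1                         # (T - 1)^2, from Eq. (11)
    for i in range(X.shape[0]):
        if rng.standard_normal() < 1.5:               # clear-weather branch
            Xs = rng.random(dim) * (ub - lb) + lb                 # Eq. (6)
            lnY = rng.lognormal(mean=0.0, sigma=1.0)              # Eq. (7)
            theta = rng.uniform(-np.pi, np.pi)
            r = 1.0 / np.exp(theta)                               # Eq. (9)
            vx, vy = r * np.cos(theta), r * np.sin(theta)
            X_new[i] = X[i] + alpha(t, T, rng) * vx * vy * lnY * (Xs - X[i])  # Eq. (5)
        else:                                         # rainy day: local search
            q = t * t / denom - 2.0 * t / denom + 1.0 + 1.0 / denom  # Eq. (11)
            k = 1.0 - rng.random() * q
            X_new[i] = X[i] * k                                   # Eq. (10)
    return X_new

rng = np.random.default_rng(0)
X0 = rng.random((6, 4)) * 10.0 - 5.0
X1 = rising_step(X0, t=3, T=50, lb=np.full(4, -5.0), ub=np.full(4, 5.0), rng=rng)
```

Note that α = rand·(t/T − 1)² stays in [0, 1] and vanishes at t = T, which matches the global-to-local schedule described around Fig. 3.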


Fig. 5. Schematic diagram of the rising stage of dandelion seeds.

At the end of the iterations, the parameter k gradually approaches 1, which guarantees that the population eventually converges to the optimal search agent.

In conclusion, the mathematical expression for dandelion seeds in the rising stage is

X_{t+1} = { X_t + α · v_x · v_y · ln Y · (X_s − X_t),  randn < 1.5;  X_t · k,  else }   (12)

where randn() is a random number that follows the standard normal distribution.

Fig. 5 shows the behaviour of dandelion seeds flying under different weather conditions, together with the approximate regeneration locations of the seeds. First, when the weather is clear, dandelion seeds are updated based on randomly selected location information to emphasize the exploration process. The eddy above a seed acts on the moving vector by multiplying the x and y components to correct the direction of the dandelion's movement into a spiral. In the second case, dandelion seeds are exploited in all directions within the local community. The normal distribution of random numbers is used to dynamically control exploitation and exploration. To make the algorithm more global-search oriented, the cut-off point is set to 1.5. This setting makes dandelion seeds traverse the entire search space as much as possible in the first stage, providing the correct direction for the next stage of iterative optimization.

2.2.3. Descending stage

In this stage, the proposed DO algorithm also emphasizes exploration. Dandelion seeds descend steadily after rising to a certain height. In DO, Brownian motion is used to simulate the moving trajectory of dandelions. Because Brownian motion obeys a normal distribution at each change, it is easy for individuals to traverse more search communities in the process of iterative updating. To reflect the stability of dandelion descent, the average position information after the rising stage is employed. This facilitates the development of the population as a whole towards promising communities. The corresponding mathematical expression is

X_{t+1} = X_t − α · β_t · (X_mean_t − α · β_t · X_t)   (13)

where β_t denotes Brownian motion and is a random number drawn from the standard normal distribution (Einstein, 1956). X_mean_t denotes the average position of the population in the tth iteration, and its mathematical expression is

X_mean_t = (1/pop) · Σ_{i=1}^{pop} X_i   (14)

Fig. 6. Schematic diagram of the descending stage of dandelion seeds.

Fig. 6 shows the regeneration process of dandelion seeds during descent. According to Fig. 6, the average position information of the population is essential for the iterative updating of individuals, as it directly determines the evolution direction of individuals. The trajectory of Brownian motion, which is based on a global search, is also presented in the figure. The irregular movement causes the search agents to escape local extrema with a high probability during the iterative update and then pushes the population to seek the region near the global optimum.

2.2.4. Landing stage

In this part, the DO algorithm focuses on exploitation. Based on the first two stages, the dandelion seed randomly chooses where to land. As the iterations progress, the algorithm will hopefully converge to the global optimal solution. Therefore, the obtained optimal solution is the approximate position where dandelion seeds most easily survive. To converge accurately to the global optimum, search agents exploit the eminent information of the current elite in their local neighbourhoods. With the evolution of the population, the global optimal solution can eventually be found. This behaviour is expressed in Eq. (15):

X_{t+1} = X_elite + levy(λ) · α · (X_elite − X_t · δ)   (15)

where X_elite represents the optimal position of the dandelion seed in the tth iteration, and levy(λ) represents the Lévy flight function, calculated using Eq. (16) (Mantegna, 1994):

Levy(λ) = s × (w × σ) / |t|^{1/β}   (16)

In Eq. (16), β is a random number between [0, 2] (β = 1.5 in this paper), s is a fixed constant equal to 0.01, and w and t are random numbers between [0, 1]. The mathematical expression for σ is

σ = ( Γ(1 + β) × sin(πβ/2) ) / ( Γ((1 + β)/2) × β × 2^{(β−1)/2} )   (17)

where β is fixed at 1.5.
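The descending update of Eqs. (13)–(14) and the landing update of Eqs. (15)–(17) (with the linear ramp δ = 2t/T given as Eq. (18) below) translate into a few NumPy lines. This is an illustrative sketch of ours, not the authors' MATLAB release; the function names are assumptions, and σ is computed exactly as printed in Eq. (17).

```python
import numpy as np
from math import gamma, pi, sin

def descending_step(X, alpha_t, rng):
    """Eq. (13): Brownian drift relative to the mean position of Eq. (14)."""
    X_mean = X.mean(axis=0)                   # Eq. (14)
    beta_t = rng.standard_normal(X.shape)     # Brownian (standard normal) increments
    return X - alpha_t * beta_t * (X_mean - alpha_t * beta_t * X)

def levy_steps(shape, rng, beta=1.5, s=0.01):
    """Eqs. (16)-(17): Mantegna-style Levy step with sigma as printed."""
    sigma = (gamma(1.0 + beta) * sin(pi * beta / 2.0)
             / (gamma((1.0 + beta) / 2.0) * beta * 2.0 ** ((beta - 1.0) / 2.0)))
    w, t = rng.random(shape), rng.random(shape)
    return s * w * sigma / np.abs(t) ** (1.0 / beta)

def landing_step(X, X_elite, alpha_t, t, T, rng):
    """Eq. (15): Levy-scaled exploitation around the elite, using the
    linearly increasing delta = 2t/T of Eq. (18)."""
    delta = 2.0 * t / T
    return X_elite + levy_steps(X.shape, rng) * alpha_t * (X_elite - X * delta)

rng = np.random.default_rng(2)
X0 = rng.random((5, 3))
Xd = descending_step(X0, alpha_t=0.5, rng=rng)
Xl = landing_step(X0, X0[0], alpha_t=0.5, t=10, T=100, rng=rng)
```

Because the Brownian factor β_t multiplies both terms of Eq. (13), each seed takes a normally distributed step relative to the gap between itself and the population mean, which is what lets the descent both stabilize and occasionally jump communities.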

δ is a linearly increasing function between [0, 2], calculated by Eq. (18):

δ = 2t/T   (18)

Fig. 7. Schematic diagram of the dandelion seed landing stage.

Fig. 7 shows the process of the search agent gradually updating towards the global optimal solution in the final phase. To converge accurately to the global optimum, the linearly increasing function is applied to individuals to avoid excessive exploitation. In this stage, the Lévy flight coefficient is used to simulate the individual movement step size. The reason is that agents can use the Lévy flight coefficient to stride to other positions with a large probability under a Gaussian distribution, which develops more local search domains within a limited number of iterations.

2.2.5. DO algorithm execution

This subsection completely describes the specific execution flow of DO. For DO, groups of vectors are randomly generated in the search space to initiate the optimization process. Then, dandelion seeds perform iterative optimization through the three stages of rising, descending, and landing. It is worth noting the rules for generating new populations before the next iteration. Taking minimization as an example, each dandelion seed is arranged in ascending order from top to bottom according to its fitness value. The individual with the minimum fitness value is the elite individual of the next generation of the population, and the sorted population is the initial population of the next iteration. This kind of sorting method benefits the inheritance of good information and prevents the algorithm from overwriting the accurate direction of optimization during the iteration process. The algorithm ends the optimization process by setting the maximum number of iterations. The pseudocode for the proposed DO algorithm is detailed in Algorithm 1.

2.3. Computational complexity

Computational complexity is an important indicator for measuring algorithm efficiency. This subsection analyses the time complexity and space complexity of the proposed DO algorithm.

2.3.1. Time complexity

As with the sine cosine algorithm, the whale optimization algorithm, the moth swarm algorithm, Harris hawks optimization, the seagull optimization algorithm, the chimp optimization algorithm, Lévy flight distribution, the horse herd optimization algorithm, and the aquila optimizer, the population initialization of the proposed DO takes O(pop × Dim) time, where pop represents the population size and Dim


Table 1
Summary of the CEC2017 benchmark functions (Awad et al., 2017).

No. | Function | Range | Fbest

Unimodal functions:
F1  Shifted and Rotated Bent Cigar Function  [−100, 100]  100
F3  Shifted and Rotated Zakharov Function  [−100, 100]  300

Simple multimodal functions:
F4  Shifted and Rotated Rosenbrock's Function  [−100, 100]  400
F5  Shifted and Rotated Rastrigin's Function  [−100, 100]  500
F6  Shifted and Rotated Expanded Scaffer's F6 Function  [−100, 100]  600
F7  Shifted and Rotated Lunacek Bi_Rastrigin Function  [−100, 100]  700
F8  Shifted and Rotated Non-Continuous Rastrigin's Function  [−100, 100]  800
F9  Shifted and Rotated Levy Function  [−100, 100]  900
F10  Shifted and Rotated Schwefel's Function  [−100, 100]  1000

Hybrid functions:
F11  Hybrid Function 1 (N = 3)  [−100, 100]  1100
F12  Hybrid Function 2 (N = 3)  [−100, 100]  1200
F13  Hybrid Function 3 (N = 3)  [−100, 100]  1300
F14  Hybrid Function 4 (N = 4)  [−100, 100]  1400
F15  Hybrid Function 5 (N = 4)  [−100, 100]  1500
F16  Hybrid Function 6 (N = 4)  [−100, 100]  1600
F17  Hybrid Function 7 (N = 5)  [−100, 100]  1700
F18  Hybrid Function 8 (N = 5)  [−100, 100]  1800
F19  Hybrid Function 9 (N = 5)  [−100, 100]  1900
F20  Hybrid Function 10 (N = 6)  [−100, 100]  2000

Composition functions:
F21  Composition Function 1 (N = 3)  [−100, 100]  2100
F22  Composition Function 2 (N = 3)  [−100, 100]  2200
F23  Composition Function 3 (N = 4)  [−100, 100]  2300
F24  Composition Function 4 (N = 4)  [−100, 100]  2400
F25  Composition Function 5 (N = 5)  [−100, 100]  2500
F26  Composition Function 6 (N = 5)  [−100, 100]  2600
F27  Composition Function 7 (N = 6)  [−100, 100]  2700
F28  Composition Function 8 (N = 6)  [−100, 100]  2800
F29  Composition Function 9 (N = 3)  [−100, 100]  2900
F30  Composition Function 10 (N = 3)  [−100, 100]  3000
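As the function names in Table 1 indicate, every benchmark is a basic function composed with a shift and a rotation, plus a bias Fbest. A minimal pure-Python sketch of this construction follows; the 2-D rotation angle and shift vector are hypothetical stand-ins for the official CEC2017 transformation data:

```python
import math

def bent_cigar(z):
    # Base Bent Cigar function: f(z) = z_1^2 + 1e6 * sum_{i>=2} z_i^2
    return z[0] ** 2 + 1e6 * sum(v * v for v in z[1:])

def rotate_2d(v, theta):
    # Hypothetical 2-D rotation standing in for the CEC2017 rotation matrices.
    c, s = math.cos(theta), math.sin(theta)
    return [c * v[0] - s * v[1], s * v[0] + c * v[1]]

def shifted_rotated(x, base, shift, theta, f_best):
    # z = M (x - o); the bias f_best is the Fbest column of Table 1.
    z = rotate_2d([a - b for a, b in zip(x, shift)], theta)
    return base(z) + f_best

shift = [17.0, -42.0]  # hypothetical shift vector o inside [-100, 100]
# At the shifted optimum x = o, the benchmark returns exactly its bias Fbest = 100.
print(shifted_rotated(shift, bent_cigar, shift, 0.7, 100.0))  # -> 100.0
```

Because the rotation is orthogonal and the base function is non-negative with a unique zero, every point other than the shifted optimum evaluates strictly above the bias, which is why the Fbest column doubles as the known global minimum.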

represents the variable dimension. Calculating the fitness values of the population takes O(pop × f) time, where f is the objective function that defines the problem.

The three stages of rising, descending and landing need a total of O(T × pop × Dim) time, where T represents the maximum number of iterations. In each iteration, it takes O(M) time to find the current optimal solution. The total time complexity of DO in the iteration stage is O(M × T × pop × Dim × f). The total time complexity of the sine cosine algorithm, the whale optimization algorithm, the moth swarm algorithm, Harris hawks optimization, the seagull optimization algorithm, Levy flight distribution, and the aquila optimizer is also O(M × T × pop × Dim × f) in the iteration stage. The chimp optimization algorithm requires O(K) time to update the population and O(K × M × T × pop × Dim × f) time for the entire iteration. The time complexity of horse herd optimization is O(((T × pop²)/2 + (T × D × pop)/2) × f × M). Hence, the calculation time of the DO algorithm is better than that of the chimp optimization algorithm and the horse herd optimization algorithm, and the calculation efficiency of the DO algorithm is equal to that of the other algorithms, such as the sine cosine algorithm.

2.3.2. Space complexity

Initializing the population can be regarded as the maximum amount of space occupied by DO at any time. Therefore, the space complexity of the proposed DO is O(pop × Dim).

3. Experimental results and discussion

This section describes the DO simulation experiments on 29 benchmark functions. The optimization ability, convergence, scalability, and statistical tests of DO were evaluated. All experiments were run on the same operating system.

3.1. Benchmark functions and comparison algorithms

To verify the local exploitation, local extremum avoidance, global exploration, and other performance indicators of DO, the CEC2017 unconstrained benchmark functions were selected (Awad et al., 2017). These functions are composed of basic functions through translations, rotations, combinations, etc. Table 1 shows the specific information of the CEC2017 suite. F1 and F3 are unimodal functions, which are used to test the convergence accuracy of the algorithms. The global optimization ability of the methods is measured on the simple multimodal functions F4–F10 and the hybrid functions F11–F20. F21–F30 are composition functions used to test the local extremum avoidance of the algorithms. Fig. 8 shows the topology of the two-dimensional version of some of the benchmark functions.

Fig. 8. Topological structures of the benchmark functions in the two-dimensional version.

The proposed DO algorithm was compared with 9 well-known algorithms, namely, the sine cosine algorithm (Mirjalili, 2016), the whale optimization algorithm (Mirjalili and Lewis, 2016), the moth swarm algorithm (Mohamed et al., 2017), Harris hawks optimization (Heidari et al., 2019), the seagull optimization algorithm (Dhiman and Kumar, 2019), the chimp optimization algorithm (Khishe and Mosavi, 2020), Levy flight distribution (Houssein et al., 2020), the horse herd optimization algorithm (MiarNaeimi et al., 2021), and the aquila optimizer (Abualigah et al., 2021).

Table 2 shows the parameter settings of all algorithms, among which the comparison algorithm parameters are set according to the original literature. To ensure fair competition among the algorithms, each function was run independently 30 times. The mean fitness value (mean) and standard deviation (std) of the 30 results were used as the final statistical indexes.

3.2. Qualitative analysis

To clearly observe the optimization behaviour of DO during the iterative process, this subsection conducts a qualitative analysis of the proposed DO algorithm. The experimental dimension is Dim = 10, the population size is pop = 30, and the maximum number of iterations is T = 1000. Fig. 9 shows the qualitative measurement results of DO on the CEC2017 benchmark functions. The selected functions include two unimodal, three multimodal and four composition functions. In Fig. 9, the first column depicts the topological structure of the functions in the two-dimensional version. In addition, the last four columns represent the search history on the first two dimensions of an individual, the

search trajectory on the first dimension of the search agents, the average fitness value of the population, and the convergence curve.

The search history paints the points that dandelions pass through during the iterations to find the global optimal solution. It can be seen from the second column in the figure that DO searches globally. The red ‘‘dots’’ represent the best solution that DO gains at the preset maximum number of iterations. Numerous search agents are around the optimal solution, indicating that DO is more inclined to exploit promising solutions. Compared with the unimodal functions, the search agents are more scattered on the multimodal and composition functions, which reflects the balance DO strikes among numerous local optimal solutions.

To visualize the dandelion seed behaviour, the iterative trajectory in the first variable is approximately regarded as the movement trajectory of a search agent. From the third column in the figure, it can be seen that the curve exhibits a large oscillation state in the early stage of iteration. Thereafter, the curve gradually flattens out in the middle and late iterations. It is worth noting that the oscillation durations on the multimodal and composition functions are longer than those on the unimodal functions. The reason for this phenomenon is that such a function has many local extreme values, making it more difficult to search for an optimal solution. Under the action of the adaptive parameter α and Brownian motion, DO is more inclined to explore. At the end of the iterations, exploitation is performed to ensure that DO converges to a higher precision. In addition, the figure clearly reveals the switching time of DO from exploration to exploitation.

The average fitness value denotes the average target optimal value of all dimensions in each iteration and reflects the average tendency of population evolution. As seen from column 4 in Fig. 9, the mean fitness value has a distinct and frequent oscillation in the early iterations, while the oscillation gradually weakens and tends to be gentle in the late iterations. This process indicates that DO fully explores in the early stage and precisely exploits in the later stage. This implies that the transition occurs from the rising stage to the descending stage of dandelion seed migration.

The convergence curve shows the optimal behaviour of a dandelion seed to obtain the optimal solution thus far. On unimodal functions, the curve drops rapidly at the beginning of the iteration, and later, the precision is refined. Different from the unimodal functions, the curve on the multimodal functions descends step by step, which is caused by jumping out of local optimal solutions and gradually searching near the global optimal solution.

Table 2
Parameter settings of the algorithms.

All algorithms: Population size = 60; Maximum iterations = 1000
SCA (Mirjalili, 2016): Number of elites α = 2
WOA (Mirjalili and Lewis, 2016): α = [2, 0]; α2 = [−2, −1]
MSA (Mohamed et al., 2017): Number of Pathfinders = 12
HHO (Heidari et al., 2019): E0 = [−1, 1]
SOA (Dhiman and Kumar, 2019): Control parameter (A) = [2, 0]; fc = 2
ChOA (Khishe and Mosavi, 2020): r1, r2 = Random; m = Chaotic
LFD (Houssein et al., 2020): Threshold = 2; CSV = 0.5; β = 1.5; α1 = 10; α2 = 0.00005; α3 = 0.005; ∂1 = 0.9; ∂2 = 0.1
HOA (MiarNaeimi et al., 2021): hβ, hγ = 0.9, 0.5; sβ, sγ = 0.2, 0.1; iγ = 0.3; dα, dβ, dγ = 0.5, 0.2, 0.1; rδ, rγ = 0.1, 0.05
DO: α = [0, 1]; k = [0, 1]
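Under the common budget of Table 2, the statistical protocol of this section — 30 independent runs per function, summarized by mean and standard deviation — can be sketched as follows. Here random_search is a deliberately simple placeholder optimizer standing in for DO and its competitors, and sphere stands in for a CEC'17 benchmark:

```python
import random
import statistics

def sphere(x):
    # Placeholder objective standing in for a CEC'17 benchmark function.
    return sum(v * v for v in x)

def random_search(objective, dim, evals, seed):
    # Placeholder optimizer (pure random search); it illustrates only the
    # evaluation protocol, not the DO update rules.
    rng = random.Random(seed)
    best = float("inf")
    for _ in range(evals):
        x = [rng.uniform(-100.0, 100.0) for _ in range(dim)]
        best = min(best, objective(x))
    return best

# 30 independent runs, each with its own seed, as in Section 3.1
# (the budget here is scaled down from pop = 60 x 1000 iterations).
results = [random_search(sphere, 10, 2000, seed) for seed in range(30)]
mean = statistics.mean(results)
std = statistics.stdev(results)  # sample standard deviation of the 30 runs
print(f"mean={mean:.4e} std={std:.4e}")
```

Each table entry in Tables 3 and 4 is exactly such a (mean, std) pair for one algorithm on one function.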


Fig. 9. Qualitative analysis results of DO for unimodal, multimodal, and composition functions.
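The four diagnostics plotted in Fig. 9 — search history, first-dimension trajectory, average population fitness, and convergence curve — can be logged around any population-based loop. In the sketch below, a Gaussian jitter is only a stand-in for DO's three-stage update:

```python
import random

random.seed(1)
dim, pop, iters = 2, 30, 100

def sphere(x):
    # Placeholder objective standing in for a CEC'17 benchmark function.
    return sum(v * v for v in x)

population = [[random.uniform(-100, 100) for _ in range(dim)] for _ in range(pop)]

search_history = []   # column 2 of Fig. 9: visited points (first two dims)
trajectory = []       # column 3: first variable of one tracked agent
avg_fitness = []      # column 4: mean fitness of the population
convergence = []      # column 5: best fitness found so far
best = float("inf")

for _ in range(iters):
    # Stand-in update: jitter each agent (DO's rising/descending/landing
    # stages are omitted here).
    population = [[v + random.gauss(0, 5.0) for v in ind] for ind in population]
    fits = [sphere(ind) for ind in population]
    best = min(best, min(fits))
    search_history.extend((ind[0], ind[1]) for ind in population)
    trajectory.append(population[0][0])
    avg_fitness.append(sum(fits) / pop)
    convergence.append(best)

# The convergence curve is non-increasing by construction.
assert all(a >= b for a, b in zip(convergence, convergence[1:]))
```

Plotting trajectory and avg_fitness against the iteration counter reproduces the oscillate-then-flatten shape discussed above, and convergence gives the monotone curve in the last column.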

3.3. Statistical results

To evaluate the general optimization performance of DO, this section tests the proposed DO algorithm and the 9 comparison algorithms on the 50-dimensional CEC'17 functions. The statistical results are shown in Table 3. Considering the results of the different algorithms on the 50-dimensional CEC'17 functions, DO obtained better results on 33/58 indicators, among which better mean fitness values were obtained on 22/29 functions. On the unimodal functions F1 and F3, DO shows superior optimization accuracy and stability compared with the other algorithms. DO has the best optimization effect on the simple multimodal functions F4, F5, F6, F7, F8, and F10. On function F4, DO obtains the minimum std indicator. In addition, on function F9, the optimization result of DO is second only to that of MSA and better than those of the other algorithms. On the F11, F12, F13, F15, F16, F17, and F20 hybrid functions, DO has excellent exploration performance. On the F14, F18 and F19 functions, DO achieves better results than all the other algorithms except MSA. DO has better std indexes on most complex hybrid functions. Among the composition functions, DO outperforms the other algorithms on the F21, F22, F23, F25, F27, F28, and F29 functions. On the remaining composition functions, DO acquires suboptimal results. In summary, DO shows accurate exploitation performance and superior exploration performance on the unimodal


Table 3
Statistical results of the different algorithms on the 50-dimensional CEC’17 functions. (The best result is marked in bold, and italics indicate that the algorithm achieves suboptimal
results.)
Fun. SCA WOA MSA HHO ChOA SOA LFD HOA AO DO
mean 5.38E+10 2.21E+09 1.71E+06 1.06E+08 5.08E+10 3.62E+10 3.07E+08 1.92E+10 1.13E+09 1.29E+06
𝐹1
std 6.97E+09 9.40E+08 1.06E+06 1.54E+07 5.66E+09 6.97E+09 1.89E+08 2.97E+09 4.03E+08 8.42E+05
mean 1.48E+05 2.13E+05 8.35E+04 8.70E+04 1.90E+05 1.19E+05 2.93E+05 1.54E+05 1.87E+05 2.78E+04
𝐹3
std 2.07E+04 6.97E+04 2.34E+04 1.42E+04 1.49E+04 1.83E+04 2.79E+04 1.82E+04 3.57E+04 9.93E+03
mean 9.88E+03 1.38E+03 6.33E+02 7.29E+02 1.14E+04 4.00E+03 1.40E+03 4.36E+03 9.52E+02 5.64E+02
𝐹4
std 2.37E+03 2.83E+02 6.61E+01 1.03E+02 1.56E+03 1.48E+03 1.21E+03 5.39E+02 1.48E+02 5.69E+01
mean 1.10E+03 1.01E+03 8.59E+02 8.91E+02 1.08E+03 9.31E+02 8.64E+02 1.10E+03 8.56E+02 8.29E+02
𝐹5
std 3.89E+01 7.08E+01 7.02E+01 2.66E+01 3.07E+01 4.99E+01 3.03E+01 3.20E+01 3.67E+01 6.58E+01
mean 6.77E+02 6.87E+02 6.55E+02 6.73E+02 6.81E+02 6.61E+02 6.68E+02 6.87E+02 6.64E+02 6.51E+02
𝐹6
std 6.04E+00 1.03E+01 6.59E+00 4.89E+00 5.60E+00 8.49E+00 1.18E+01 5.29E+00 8.68E+00 8.60E+00
mean 1.74E+03 1.78E+03 1.55E+03 1.83E+03 1.72E+03 1.55E+03 1.69E+03 1.53E+03 1.52E+03 1.35E+03
𝐹7
std 9.12E+01 1.11E+02 1.20E+02 8.54E+01 8.86E+01 7.74E+01 8.47E+01 4.72E+01 1.17E+02 1.10E+02
mean 1.41E+03 1.29E+03 1.15E+03 1.18E+03 1.35E+03 1.24E+03 1.18E+03 1.41E+03 1.17E+03 1.13E+03
𝐹8
std 3.70E+01 6.84E+01 6.95E+01 3.00E+01 3.26E+01 4.97E+01 4.26E+01 2.89E+01 3.86E+01 5.70E+01
mean 2.77E+04 2.86E+04 1.25E+04 2.54E+04 3.06E+04 2.07E+04 1.69E+04 2.89E+04 2.22E+04 1.46E+04
𝐹9
std 3.43E+03 9.32E+03 3.05E+03 2.92E+03 2.91E+03 4.52E+03 4.96E+03 3.07E+03 3.59E+03 3.92E+03
mean 1.51E+04 1.14E+04 8.84E+03 9.18E+03 1.50E+04 1.22E+04 1.03E+04 1.51E+04 8.90E+03 7.98E+03
𝐹10
std 3.82E+02 1.42E+03 1.26E+03 1.19E+03 4.29E+02 1.11E+03 1.38E+03 6.34E+02 1.23E+03 7.81E+02
mean 9.03E+03 2.93E+03 1.42E+03 1.54E+03 1.09E+04 6.58E+03 2.31E+03 9.32E+03 2.13E+03 1.33E+03
𝐹11
std 2.02E+03 4.37E+02 9.39E+01 8.06E+01 1.68E+03 2.40E+03 8.40E+02 3.49E+03 2.79E+02 5.77E+01
mean 1.59E+10 8.32E+08 2.79E+07 1.47E+08 2.80E+10 5.68E+09 1.19E+09 5.83E+09 3.75E+08 2.76E+07
𝐹12
std 3.32E+09 3.96E+08 1.50E+07 9.64E+07 8.16E+09 3.29E+09 3.27E+09 1.09E+09 2.75E+08 1.41E+07
mean 4.35E+09 2.53E+07 2.18E+05 3.86E+06 1.43E+10 1.32E+09 4.62E+07 1.36E+09 7.51E+06 1.07E+05
𝐹13
std 1.33E+09 2.17E+07 1.33E+05 3.32E+06 7.60E+09 1.65E+09 2.49E+08 3.59E+08 5.62E+06 6.24E+04
mean 4.65E+06 2.91E+06 1.65E+05 1.45E+06 2.85E+06 1.45E+06 5.50E+06 3.11E+06 3.37E+06 2.34E+05
𝐹14
std 2.62E+06 1.79E+06 4.65E+04 1.79E+06 8.33E+05 1.41E+06 4.93E+06 1.41E+06 2.97E+06 1.19E+05
mean 6.73E+08 1.19E+06 6.63E+04 5.47E+05 5.74E+08 9.10E+07 2.36E+08 3.46E+08 5.70E+05 5.12E+04
𝐹15
std 3.96E+08 2.05E+06 4.30E+04 2.30E+05 1.29E+09 2.27E+08 5.25E+08 9.53E+07 2.36E+05 2.78E+04
mean 5.93E+03 5.69E+03 3.94E+03 4.26E+03 5.71E+03 4.05E+03 5.16E+03 6.09E+03 4.23E+03 3.78E+03
𝐹16
std 3.75E+02 7.10E+02 4.69E+02 5.30E+02 3.99E+02 4.52E+02 1.07E+03 3.66E+02 6.32E+02 3.39E+02
mean 4.78E+03 4.26E+03 3.89E+03 3.69E+03 5.05E+03 3.61E+03 3.87E+03 4.42E+03 3.68E+03 3.35E+03
𝐹17
std 3.67E+02 5.09E+02 4.36E+02 4.78E+02 5.68E+02 4.05E+02 4.40E+02 2.21E+02 4.75E+02 3.35E+02
mean 3.09E+07 2.25E+07 1.19E+06 4.97E+06 1.45E+07 6.90E+06 1.94E+07 2.59E+07 7.69E+06 2.49E+06
𝐹18
std 1.59E+07 2.10E+07 7.73E+05 6.22E+06 6.71E+06 4.61E+06 1.62E+07 8.31E+06 4.33E+06 1.04E+06
mean 4.01E+08 6.51E+06 6.74E+04 1.17E+06 8.84E+08 8.74E+07 4.17E+06 1.26E+08 1.95E+06 1.59E+05
𝐹19
std 1.29E+08 5.48E+06 6.60E+04 9.75E+05 1.01E+09 1.98E+08 7.81E+06 2.94E+07 1.66E+06 1.05E+05
mean 4.03E+03 3.78E+03 3.51E+03 3.43E+03 4.15E+03 3.53E+03 3.69E+03 4.04E+03 3.30E+03 3.28E+03
𝐹20
std 2.54E+02 3.89E+02 3.70E+02 3.80E+02 2.20E+02 3.85E+02 3.22E+02 2.65E+02 2.40E+02 3.98E+02
mean 2.92E+03 2.96E+03 2.67E+03 2.87E+03 2.93E+03 2.73E+03 2.89E+03 2.91E+03 2.70E+03 2.64E+03
𝐹21
std 3.99E+01 1.15E+02 6.79E+01 6.74E+01 4.70E+01 5.27E+01 1.04E+02 3.91E+01 5.85E+01 6.51E+01
mean 1.67E+04 1.31E+04 1.09E+04 1.13E+04 1.70E+04 1.35E+04 1.21E+04 1.67E+04 1.07E+04 9.55E+03
𝐹22
std 4.48E+02 1.46E+03 1.23E+03 8.23E+02 3.91E+02 1.38E+03 1.53E+03 9.09E+02 1.63E+03 8.15E+02
mean 3.59E+03 3.70E+03 3.48E+03 3.85E+03 3.58E+03 3.20E+03 3.94E+03 4.09E+03 3.43E+03 3.20E+03
𝐹23
std 5.72E+01 1.51E+02 1.27E+02 1.60E+02 6.31E+01 5.75E+01 1.85E+02 1.40E+02 1.06E+02 8.19E+01
mean 3.81E+03 3.77E+03 3.62E+03 4.19E+03 6.80E+03 3.29E+03 4.21E+03 4.00E+03 3.47E+03 3.45E+03
𝐹24
std 7.50E+01 1.46E+02 1.39E+02 2.28E+02 3.29E+02 5.60E+01 2.02E+02 1.24E+02 8.79E+01 1.19E+02
mean 7.42E+03 3.58E+03 3.14E+03 3.21E+03 9.46E+03 5.52E+03 3.40E+03 5.39E+03 3.37E+03 3.10E+03
𝐹25
std 8.15E+02 1.80E+02 4.38E+01 5.57E+01 1.23E+03 6.36E+02 1.91E+02 3.60E+02 1.29E+02 3.27E+01
mean 1.29E+04 1.39E+04 1.15E+04 1.07E+04 1.15E+04 8.32E+03 1.17E+04 8.70E+03 7.93E+03 9.05E+03
𝐹26
std 6.26E+02 1.72E+03 1.43E+03 1.61E+03 5.45E+02 6.12E+02 1.79E+03 1.26E+03 2.28E+03 1.01E+03
mean 4.66E+03 4.54E+03 4.30E+03 4.41E+03 4.73E+03 3.79E+03 4.22E+03 6.01E+03 4.06E+03 3.77E+03
𝐹27
std 1.62E+02 4.63E+02 3.77E+02 3.89E+02 1.87E+02 1.43E+02 8.66E+02 5.21E+02 2.43E+02 1.82E+02
mean 7.86E+03 4.46E+03 3.42E+03 3.59E+03 6.52E+03 8.53E+03 4.38E+03 5.73E+03 4.19E+03 3.35E+03
𝐹28
std 8.29E+02 3.71E+02 5.40E+01 6.80E+01 3.70E+02 1.44E+03 1.24E+03 2.07E+02 2.96E+02 3.07E+01
mean 7.73E+03 8.47E+03 6.25E+03 6.09E+03 8.63E+03 6.46E+03 8.56E+03 7.51E+03 6.35E+03 4.77E+03
𝐹29
std 6.20E+02 1.28E+03 5.27E+02 6.47E+02 1.63E+03 9.05E+02 2.96E+03 5.47E+02 5.33E+02 3.36E+02
mean 9.29E+08 2.09E+08 7.74E+06 4.15E+07 1.26E+09 2.74E+08 1.59E+08 4.44E+08 1.05E+08 1.09E+07
𝐹30
std 2.80E+08 9.59E+07 3.62E+06 1.17E+07 1.23E+09 1.36E+08 2.33E+08 6.09E+07 3.49E+07 2.90E+06

and multimodal functions and has strong local extremum avoidance on composite functions.

To intuitively present the distribution of the 30 results, Fig. 10 provides the boxplots of the different algorithms on the 50-dimensional CEC'17 functions. If a ‘+’ appears beyond the upper edge, it represents an extreme result among the algorithm's 30 runs. In contrast, if it appears beyond the lower edge, it means that the algorithm adequately exploits and explores in the search space. According to


Fig. 10. Boxplots of the different algorithms on the 50-dimensional CEC’17 functions.

Fig. 10, the intermediate results of DO optimization are better than the results of the 9 comparison algorithms. The amplitudes of the upper and lower limits of DO do not change significantly compared with those of the other competitors, which is particularly prominent on the F25, F28, and F30 functions. The stability and robustness of DO are verified by these functions. On the F5 and F10 functions, DO accounts for the majority of the upper and lower ‘+’ indicators. This reflects the randomness and uncertainty of DO in solving similar multimodal problems. For the other functions, DO has fewer ‘+’ cases, and the difference between the upper bounds and lower bounds is not obvious. This demonstrates that DO still has good optimization results in extreme situations. Based on the above analysis, the boxplots further verify the effectiveness and strong robustness of DO.

3.4. Scalability analysis

Two aspects mainly affect the complexity of engineering optimization problems: numerous local extrema and higher dimensional variables. This subsection focuses on the performance of DO in high dimensions. Table 4 shows the statistical results of all methods on the 100-dimensional CEC'17 functions. According to Table 4, DO achieves better results on 36/58 indicators, among which the best mean fitness value is obtained on 23/30 functions. Comparing the results in Tables 3 and 4, it can be seen that DO shows outstanding optimization performance in the case of high dimensions (i.e., F1, F4, F5, F6, F7, F8, F10, F11, F12). Except for F9 and F18, DO can obtain suboptimal or optimal mean values. On the F19, F24, and F30 functions, even though the mean accuracy of DO is inferior to that of MSA on the 50-dimensional functions, the mean optimization accuracy of DO is superior to that of MSA on the 100-dimensional functions. In addition, HHO obtains better mean and std indicators than the other competitors on the F3 function. MSA ranked first and DO ranked second on the F13 function. For the case of 100 dimensions, DO continues to maintain the positive optimization performance it exhibited in the case of 50 dimensions on the rest of the functions. The statistical results show that DO still has good applicability to high-dimensional optimization problems.

Fig. 11 displays the boxplots of the different algorithms on the 100-dimensional CEC'17 functions. As the dimension increases, function optimization becomes more difficult. Therefore, compared with 50 dimensions, the extreme cases appear more frequently, and the instances of full exploration and exploitation on the functions decrease in the case of 100 dimensions. However, the DO algorithm can still outperform the other algorithms on these functions. On the F1, F12, F15, F25, F28, and F30 functions, the upper and lower limits of DO, MSA, and HHO vary slightly and are similar to the distributions of fitness values in the case of 50 dimensions, as shown in Fig. 10. The boxplot results in the case of 100 dimensions show that the proposed DO algorithm still maintains good local extremum avoidance in high dimensions, which proves that DO is more applicable than the other algorithms.

Fig. 12 shows the log-mean fitness values of the different algorithms in the cases of 10, 30, 50 and 100 dimensions. To give a fair comparison of the scalability of the different algorithms, each function was independently run 30 times under the premise that all methods kept the same population size and maximum iteration number. In Fig. 12, there is no significant change in the mean optimization behaviour of the algorithms from 10 to 30 dimensions. However, the slope of the DO curve changes more gently than that of the other algorithms at 50 and 100 dimensions. DO shows the best search performance except for some dimensions of the F23 function. On the F23 function, although the optimization performance of DO is slightly lower than that of MSA at 30 and 50 dimensions, it is still better than that of MSA at 100 dimensions. This test shows that DO can synchronously track the increase in optimization difficulty caused by an increase in function dimension and obtain better convergence accuracy.

3.5. Convergence analysis

Convergence analysis is conducted on DO and the 9 comparison algorithms to reveal the exploration and exploitation of the different algorithms. Fig. 13 describes the convergence curves of the 10 algorithms on the CEC2017 benchmark functions at 100 dimensions. The selected benchmark functions include unimodal, multimodal, hybrid and composition functions to reflect the changing tendency of the algorithms with iterative optimization on different functions.

According to Fig. 13, the dynamic iteration process of the different algorithms on each test function is varied. On the unimodal function F1, the convergence speed of the whale optimization algorithm is slightly faster than that of DO in the early stage of iteration, but as the iterations gradually progress, DO converges faster than the other algorithms and


Table 4
Statistical results of the different algorithms on the 100-dimensional CEC’17 functions. (The best result is marked in bold, and italics indicate that the algorithm achieves suboptimal
results.)
Fun. SCA WOA MSA HHO SOA ChOA LFD HOA AO DO
mean 1.84E+11 3.06E+10 3.47E+07 1.96E+09 1.40E+11 1.70E+11 1.12E+10 8.20E+10 1.87E+10 3.01E+07
𝐹1
std 1.11E+10 4.94E+09 1.13E+07 3.70E+08 1.40E+10 1.04E+10 3.25E+09 9.55E+09 3.97E+09 1.05E+07
mean 4.33E+05 8.70E+05 4.78E+05 2.72E+05 3.27E+05 5.02E+05 7.12E+05 3.73E+05 3.30E+05 2.95E+05
𝐹3
std 5.51E+04 1.35E+05 8.76E+04 2.05E+04 2.82E+04 9.50E+04 7.46E+04 3.40E+04 1.27E+04 4.98E+04
mean 3.91E+04 5.74E+03 1.08E+03 1.59E+03 1.80E+04 3.73E+04 3.48E+03 1.64E+04 3.79E+03 8.73E+02
𝐹4
std 5.22E+03 1.29E+03 9.05E+01 2.34E+02 3.60E+03 6.81E+03 8.36E+02 2.07E+03 7.34E+02 6.41E+01
mean 1.98E+03 1.74E+03 1.37E+03 1.55E+03 1.62E+03 1.89E+03 1.39E+03 1.95E+03 1.51E+03 1.29E+03
𝐹5
std 6.57E+01 9.92E+01 1.32E+02 3.72E+01 9.84E+01 4.86E+01 6.95E+01 7.29E+01 6.16E+01 1.11E+02
mean 6.98E+02 6.97E+02 6.70E+02 6.84E+02 6.82E+02 6.96E+02 6.72E+02 7.02E+02 6.82E+02 6.65E+02
𝐹6
std 4.60E+00 9.44E+00 6.47E+00 3.11E+00 5.63E+00 4.68E+00 8.17E+00 4.83E+00 4.06E+00 7.74E+00
mean 3.79E+03 3.57E+03 3.06E+03 3.70E+03 3.26E+03 3.54E+03 3.31E+03 3.38E+03 3.15E+03 2.66E+03
𝐹7
std 1.63E+02 1.92E+02 1.32E+02 1.14E+02 1.60E+02 1.08E+02 1.27E+02 2.54E+02 1.83E+02 2.30E+02
mean 2.34E+03 2.19E+03 1.76E+03 1.99E+03 1.97E+03 2.24E+03 1.84E+03 2.34E+03 1.94E+03 1.64E+03
𝐹8
std 6.18E+01 1.18E+02 1.33E+02 6.55E+01 7.48E+01 4.39E+01 6.36E+01 7.58E+01 9.23E+01 7.75E+01
mean 8.15E+04 6.41E+04 3.03E+04 5.67E+04 5.62E+04 7.42E+04 3.37E+04 7.32E+04 5.58E+04 3.82E+04
𝐹9
std 8.57E+03 1.64E+04 5.97E+03 4.72E+03 8.18E+03 4.79E+03 4.34E+03 5.76E+03 6.39E+03 9.09E+03
mean 3.25E+04 2.62E+04 2.07E+04 2.19E+04 2.75E+04 3.23E+04 2.32E+04 3.24E+04 2.11E+04 1.67E+04
𝐹10
std 5.59E+02 2.23E+03 2.81E+03 1.67E+03 2.14E+03 4.79E+02 3.69E+03 6.84E+02 2.04E+03 1.63E+03
mean 1.21E+05 1.60E+05 1.21E+04 3.21E+04 8.61E+04 1.42E+05 2.78E+05 1.08E+05 1.76E+05 3.30E+03
𝐹11
std 1.80E+04 5.97E+04 3.66E+03 1.23E+04 1.77E+04 1.60E+04 3.66E+04 1.25E+04 3.27E+04 4.72E+02
mean 8.06E+10 5.17E+09 2.56E+08 8.16E+08 4.00E+10 9.14E+10 5.35E+09 3.12E+10 3.41E+09 2.34E+08
𝐹12
std 1.21E+10 1.41E+09 9.40E+07 2.77E+08 1.17E+10 1.11E+10 5.14E+09 5.36E+09 9.45E+08 9.46E+07
mean 1.32E+10 8.84E+07 2.47E+05 1.09E+07 5.74E+09 2.32E+10 9.90E+06 4.44E+09 3.59E+07 5.20E+05
𝐹13
std 2.41E+09 4.21E+07 3.80E+05 2.45E+06 2.13E+09 4.46E+09 5.08E+06 6.99E+08 1.53E+07 2.05E+06
mean 4.39E+07 1.10E+07 1.10E+06 3.96E+06 9.35E+06 1.43E+07 1.57E+07 1.68E+07 1.04E+07 2.09E+06
𝐹14
std 1.79E+07 4.39E+06 3.50E+05 1.16E+06 4.62E+06 4.26E+06 1.75E+07 4.80E+06 3.72E+06 9.01E+05
mean 4.28E+09 1.41E+07 1.12E+05 3.02E+06 1.90E+09 9.08E+09 1.25E+06 1.31E+09 4.65E+06 5.12E+04
𝐹15
std 1.47E+09 1.43E+07 6.17E+04 2.03E+06 1.14E+09 3.61E+09 5.78E+06 1.99E+08 1.98E+06 2.13E+04
mean 1.41E+04 1.38E+04 7.63E+03 7.94E+03 8.85E+03 1.41E+04 9.03E+03 1.39E+04 8.87E+03 6.52E+03
𝐹16
std 7.91E+02 1.76E+03 1.27E+03 9.17E+02 1.18E+03 1.20E+03 1.33E+03 9.50E+02 1.07E+03 7.74E+02
mean 2.18E+04 9.32E+03 6.82E+03 6.64E+03 7.83E+03 2.11E+04 8.06E+03 1.06E+04 7.39E+03 5.61E+03
𝐹17
std 1.49E+04 1.49E+03 8.93E+02 7.69E+02 1.99E+03 1.17E+04 4.94E+03 1.02E+03 6.30E+02 5.79E+02
mean 8.11E+07 9.87E+06 1.40E+06 4.59E+06 9.81E+06 2.41E+07 1.06E+07 2.28E+07 9.02E+06 3.43E+06
𝐹18
std 3.07E+07 4.33E+06 4.84E+05 2.00E+06 5.85E+06 7.70E+06 7.63E+06 8.66E+06 3.59E+06 1.66E+06
mean 3.60E+09 5.65E+07 1.90E+06 1.13E+07 1.41E+09 6.17E+09 9.92E+06 1.30E+09 1.59E+07 1.00E+06
𝐹19
std 9.89E+08 4.42E+07 1.50E+06 4.00E+06 8.56E+08 4.20E+09 7.46E+06 3.09E+08 1.32E+07 4.89E+05
mean 7.77E+03 6.69E+03 5.87E+03 5.91E+03 6.28E+03 7.64E+03 6.42E+03 7.59E+03 5.71E+03 5.61E+03
𝐹20
std 2.65E+02 6.87E+02 5.59E+02 5.15E+02 8.25E+02 4.69E+02 6.06E+02 3.41E+02 3.81E+02 5.50E+02
mean 4.07E+03 4.25E+03 3.56E+03 4.13E+03 3.56E+03 4.20E+03 4.22E+03 4.06E+03 3.82E+03 3.27E+03
𝐹21
std 8.12E+01 2.25E+02 1.59E+02 1.71E+02 1.03E+02 1.11E+02 2.15E+02 1.14E+02 2.09E+02 1.12E+02
mean 3.48E+04 2.92E+04 2.32E+04 2.53E+04 2.97E+04 3.49E+04 2.65E+04 3.51E+04 2.46E+04 1.96E+04
𝐹22
std 4.79E+02 1.93E+03 3.08E+03 1.17E+03 1.58E+03 6.64E+02 3.35E+03 8.20E+02 1.31E+03 1.52E+03
mean 5.07E+03 5.09E+03 4.68E+03 5.34E+03 4.04E+03 5.11E+03 5.94E+03 7.64E+03 4.55E+03 3.82E+03
𝐹23
std 1.34E+02 2.32E+02 3.29E+02 3.66E+02 1.18E+02 1.87E+02 3.24E+02 5.16E+02 2.62E+02 1.36E+02
mean 6.97E+03 6.29E+03 6.25E+03 7.15E+03 4.79E+03 6.69E+03 8.87E+03 7.29E+03 5.72E+03 4.76E+03
𝐹24
std 2.46E+02 5.30E+02 4.44E+02 5.05E+02 1.76E+02 3.00E+02 6.48E+02 4.40E+02 3.91E+02 1.91E+02
mean 1.88E+04 5.98E+03 3.71E+03 4.13E+03 1.31E+04 1.55E+04 5.07E+03 1.09E+04 5.18E+03 3.54E+03
𝐹25
std 2.50E+03 4.19E+02 7.29E+01 1.38E+02 1.58E+03 1.11E+03 7.26E+02 6.85E+02 3.49E+02 6.71E+01
mean 3.76E+04 3.53E+04 3.07E+04 2.75E+04 2.08E+04 2.83E+04 3.10E+04 2.79E+04 2.65E+04 2.18E+04
𝐹26
std 1.85E+03 3.33E+03 3.99E+03 1.37E+03 1.48E+03 1.13E+03 5.55E+03 2.36E+03 3.99E+03 2.20E+03
mean 7.85E+03 5.41E+03 5.34E+03 4.96E+03 4.56E+03 6.75E+03 7.72E+03 8.36E+03 5.19E+03 4.10E+03
𝐹27
std 2.96E+02 6.56E+02 6.17E+02 5.04E+02 2.71E+02 3.42E+02 2.72E+03 7.01E+02 3.74E+02 2.25E+02
mean 2.39E+04 8.29E+03 3.90E+03 4.54E+03 2.31E+04 1.51E+04 7.42E+03 1.45E+04 7.07E+03 3.62E+03
𝐹28
std 2.60E+03 1.09E+03 1.14E+02 3.95E+02 3.84E+03 1.31E+03 2.26E+03 1.19E+03 7.30E+02 4.33E+01
mean 2.30E+04 1.68E+04 1.06E+04 1.05E+04 1.40E+04 3.70E+04 1.54E+04 1.51E+04 1.23E+04 8.15E+03
𝐹29
std 6.82E+03 1.97E+03 1.48E+03 7.39E+02 1.61E+03 1.79E+04 7.15E+03 1.12E+03 1.22E+03 7.02E+02
mean 9.60E+09 9.80E+08 1.87E+07 8.55E+07 3.52E+09 1.60E+10 7.48E+08 4.65E+09 3.47E+08 1.55E+07
𝐹30
std 1.52E+09 3.72E+08 1.10E+07 3.05E+07 1.91E+09 3.01E+09 1.29E+09 9.79E+08 1.49E+08 5.24E+06

continues to exploit the global optimal value to obtain higher convergence accuracy. The reason for this result is that under the effect of Levy flight, search agents are drawn to other communities with larger strides and longer spans. The elite information inherited from each iteration also promotes the population's development towards promising regions. For multimodal functions, the different algorithms show different global search performances, among which DO has the best search performance. On the F10, F15, and F20 functions, even though


Fig. 11. Boxplots of the different algorithms on the 100-dimensional CEC’17 functions.

Fig. 12. Scalability results of the different algorithms when dealing with different dimensions.

DO does not converge the fastest during the early iteration stages, it can successfully jump out of local extrema and search near the optimal value as the iterations progress. Nevertheless, the other algorithms cannot adequately achieve better accuracy or they stop at local extrema without effective improvement. The curves of the composition functions show that DO has strong local extrema avoidance and high convergence accuracy in the case of a complicated function. For example, DO converges to the optimal value and then refines exploitation on the F27 and F29 functions. The curves of the multimodal and composition functions clearly show the balance between DO's full exploration of the search space in the first two stages and its precise exploitation of the local neighbourhood in the last stage. The convergence analysis proves that the proposed DO is effective to a certain extent.

3.6. Statistical tests

Because individual test results are subject to chance, a direct comparison between the algorithms cannot completely guarantee superiority and validity. Therefore, this subsection uses a variety of statistical tests to demonstrate the statistical superiority of DO. The statistical tests are carried out on the results of the algorithms on the 100-dimensional CEC'17 functions.
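The rank-based comparison in this subsection reduces to ranking the algorithms on each function (1 = best, ties sharing the average rank) and averaging the ranks across functions, as in the Friedman procedure. A pure-Python sketch, with made-up scores rather than the paper's data:

```python
def mean_ranks(scores):
    # scores[f][a]: mean fitness of algorithm a on function f (lower is better).
    n_alg = len(scores[0])
    totals = [0.0] * n_alg
    for row in scores:
        # Rank the algorithms on this function; ties share the average rank.
        order = sorted(range(n_alg), key=lambda a: row[a])
        ranks = [0.0] * n_alg
        i = 0
        while i < n_alg:
            j = i
            while j + 1 < n_alg and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average of 1-based positions i..j
            for k in range(i, j + 1):
                ranks[order[k]] = avg
            i = j + 1
        for a in range(n_alg):
            totals[a] += ranks[a]
    return [t / len(scores) for t in totals]

# Made-up mean-fitness table: 3 algorithms on 4 functions.
scores = [
    [5.0, 2.0, 1.0],
    [9.0, 4.0, 3.0],
    [7.0, 7.0, 2.0],
    [6.0, 3.0, 3.0],
]
print(mean_ranks(scores))  # -> [2.875, 2.0, 1.125]
```

The algorithm with the smallest mean rank (here the third one) is the overall Friedman winner, which is the sense in which DO "ranked first" in Table 6; the full Friedman test additionally turns these mean ranks into a significance statistic.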


Fig. 13. Convergence curves of the different algorithms on the 100-dimensional CEC’17 functions.
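A pairwise Wilcoxon rank-sum comparison of two algorithms' 30-run samples, of the kind used in this subsection, can be sketched in pure Python. The normal approximation to the rank-sum distribution and the two fitness samples below are illustrative assumptions:

```python
import math

def rank_sum_test(a, b):
    """Two-sided Wilcoxon rank-sum p-value via the normal approximation."""
    pooled = [(v, 0) for v in a] + [(v, 1) for v in b]
    pooled.sort(key=lambda t: t[0])
    # Assign ranks; tied values share the average rank.
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        r = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[k] = r
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(r for r, (_, src) in zip(ranks, pooled) if src == 0)
    mu = n1 * (n1 + n2 + 1) / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sigma
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

# Hypothetical 30-run fitness samples for two algorithms.
do_runs = [1.0 + 0.01 * i for i in range(30)]
sca_runs = [5.0 + 0.01 * i for i in range(30)]
p = rank_sum_test(do_runs, sca_runs)
print(p < 0.05)  # -> True: a clear separation is flagged as significant
```

A p-value below the 5% significance level corresponds to a ‘+’ in a comparison table of this kind; sample sizes of 30 are large enough that the normal approximation is a reasonable substitute for exact rank-sum tables.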

First, the Wilcoxon rank-sum test (Wilcoxon, 1992) is performed at a significance level of 5%. DO is taken as the control algorithm and compared with each competitor in pairs to generate 𝑝 values. Table 5 shows the results of this test (equivalent to the Mann-Whitney U test) for the proposed DO algorithm at the 5% significance level. ‘+’ and ‘−’ indicate that DO has a statistically significant advantage over the competitor and no clear advantage, respectively. Among the 261 competitive indicators of DO, 156 show significant superiority. The Wilcoxon rank-sum test results therefore indicate that DO outperforms the other comparison algorithms.

In addition, the Friedman test is a nonparametric method that uses ranks to determine whether there is a significant difference between multiple population distributions, and thus gives an overall picture of algorithm performance. Hence, the Friedman test is used to evaluate the comprehensive optimization performance of DO on the 100-dimensional CEC’17 functions. In Table 6, DO ranks first among the 10 algorithms. The Friedman test results reveal that the exploration and exploitation strategies of DO are effective.

3.7. Discussions of existing works

From the experimental results of Sections 3.3 to 3.6, it can be concluded that the existing methods are usually sensitive to dimensional changes or perform poorly on the CEC2017 benchmark functions, whereas the proposed dandelion algorithm alleviates these deficiencies. The main reasons for these experimental results are analysed below.

SCA moves solutions inward or towards the best search agent with the help of sine and cosine formulas. The heuristic is simple and fast, but the optimization results of SCA are not satisfactory on high-dimensional optimization problems. Both WOA and SOA use spiral expressions and move the current individual by borrowing random search agents or the global optimal solution. However, they easily fall into local optima on complex optimization problems: the scope of the spiral expression is limited, so the global search of the solution space is insufficient. MSA shows good performance on the 50-dimensional CEC’17 functions but poor performance on the 100-dimensional ones. The optimization performance of AO, HHO, ChOA and HOA on the CEC2017 benchmark functions, e.g., 𝐹12 , 𝐹19 and 𝐹30 , is poor. AO, HHO and MSA use Levy-distributed random numbers, which improve the optimization performance to a certain extent, but on complex multimodal functions they do not conduct a sufficiently wide random search. The proposed DO algorithm instead adopts lognormally distributed random numbers in the rising stage, which effectively avoids falling into local optima; that is why DO performs better on these functions. ChOA and HOA have fewer random operators acting in different roles, which increases the risk of falling into local optima. LFD decides whether to update the current search agent by a threshold between two individuals: according to a random number, the search agent either searches randomly in the global scope or performs a Levy random walk. This setting helps to jump out of local extrema, but it takes more time to find two suitable individuals to update, and when local extrema are close to each other, threshold-based updating may not be a good trade-off. Thus, the optimization performance of LFD on the CEC’17 𝐹12 , 𝐹13 and 𝐹14 functions is poor. As a result, the proposed DO algorithm has a comparatively better optimization capacity.

4. DO for engineering design problems

In this section, the optimization efficiency of the proposed DO algorithm in solving real-world optimization problems is evaluated. Four well-known practical engineering problems are selected: the speed reducer design problem, the tension/compression spring design problem, the welded beam design problem, and the pressure vessel design problem. DO is compared with existing nature-inspired metaheuristic algorithms that have been applied to these problems. In general, real-world optimization problems are constrained by equalities or inequalities. To let the algorithm address constrained optimization easily, this paper adopts a simple constraint-handling method, namely, the static penalty function. DO uses 25,000 (50 × 500) as the maximum number of function evaluations and is run independently 30 times on each engineering problem.


Table 5
Wilcoxon rank-sum test results of the different algorithms on the 100-dimensional CEC’17 functions.
Fun. DO vs. SCA DO vs. WOA DO vs. MSA DO vs. HHO DO vs. SOA DO vs. ChOA DO vs. LFD DO vs. HOA DO vs. AO
𝐹1 3.0E−11 (+) 3.0E−11 (+) 5.9E−02 (−) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+)
𝐹3 4.6E−10 (+) 3.0E−11 (+) 3.5E−10 (+) 2.2E−02 (+) 6.7E−03 (+) 4.1E−11 (+) 3.0E−11 (+) 5.5E−08 (+) 1.2E−03 (+)
𝐹4 3.0E−11 (+) 3.0E−11 (+) 3.2E−10 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+)
𝐹5 3.0E−11 (+) 3.0E−11 (+) 2.6E−02 (+) 1.8E−10 (+) 1.5E−10 (+) 3.0E−11 (+) 1.0E−04 (+) 3.0E−11 (+) 4.6E−09 (+)
𝐹6 3.0E−11 (+) 5.0E−11 (+) 6.1E−03 (+) 1.1E−10 (+) 1.7E−09 (+) 3.0E−11 (+) 6.9E−04 (+) 3.0E−11 (+) 6.1E−10 (+)
𝐹7 3.0E−11 (+) 5.5E−11 (+) 4.6E−09 (+) 3.7E−11 (+) 6.1E−10 (+) 4.1E−11 (+) 3.5E−10 (+) 1.8E−10 (+) 2.2E−09 (+)
𝐹8 3.0E−11 (+) 3.0E−11 (+) 6.8E−05 (+) 3.3E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 5.6E−10 (+) 3.0E−11 (+) 6.7E−11 (+)
𝐹9 3.0E−11 (+) 2.7E−09 (+) 3.8E−04 (+) 1.3E−09 (+) 1.6E−08 (+) 3.0E−11 (+) 7.7E−02 (−) 3.0E−11 (+) 3.8E−09 (+)
𝐹10 3.0E−11 (+) 3.0E−11 (+) 3.1E−08 (+) 6.7E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 1.9E−09 (+) 3.0E−11 (+) 5.1E−10 (+)
𝐹11 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+)
𝐹12 3.0E−11 (+) 3.0E−11 (+) 3.5E−01 (−) 6.7E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+)
𝐹13 3.0E−11 (+) 3.0E−11 (+) 1.6E−05 (+) 1.8E−10 (+) 3.0E−11 (+) 3.0E−11 (+) 1.8E−10 (+) 3.0E−11 (+) 3.0E−11 (+)
𝐹14 3.0E−11 (+) 3.3E−11 (+) 1.4E−06 (+) 7.1E−08 (+) 5.5E−11 (+) 3.0E−11 (+) 4.1E−11 (+) 3.0E−11 (+) 7.4E−11 (+)
𝐹15 3.0E−11 (+) 3.0E−11 (+) 7.1E−08 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 9.8E−08 (+) 3.0E−11 (+) 3.0E−11 (+)
𝐹16 3.0E−11 (+) 3.0E−11 (+) 7.2E−05 (+) 2.2E−07 (+) 2.0E−10 (+) 3.0E−11 (+) 1.2E−10 (+) 3.0E−11 (+) 8.1E−10 (+)
𝐹17 1.2E−12 (+) 1.2E−12 (+) 7.5E−10 (+) 1.9E−07 (+) 3.4E−11 (+) 1.2E−12 (+) 3.4E−11 (+) 1.2E−12 (+) 1.2E−12 (+)
𝐹18 3.0E−11 (+) 1.5E−09 (+) 7.1E−09 (+) 9.1E−03 (+) 1.2E−08 (+) 3.0E−11 (+) 3.6E−08 (+) 3.0E−11 (+) 9.3E−09 (+)
𝐹19 3.0E−11 (+) 3.0E−11 (+) 3.4E−02 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 2.4E−10 (+) 3.0E−11 (+) 3.0E−11 (+)
𝐹20 3.0E−11 (+) 2.0E−07 (+) 6.1E−02 (−) 4.2E−02 (+) 4.5E−04 (+) 5.0E−11 (+) 1.1E−05 (+) 3.0E−11 (+) 6.6E−02 (−)
𝐹21 3.0E−11 (+) 3.0E−11 (+) 1.3E−08 (+) 3.0E−11 (+) 2.6E−10 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.7E−11 (+)
𝐹22 3.0E−11 (+) 3.0E−11 (+) 5.9E−06 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 6.7E−11 (+) 3.0E−11 (+) 4.1E−11 (+)
𝐹23 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 2.2E−07 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+)
𝐹24 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 5.7E−01 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 4.1E−11 (+)
𝐹25 3.0E−11 (+) 3.0E−11 (+) 8.9E−10 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+)
𝐹26 3.0E−11 (+) 3.0E−11 (+) 1.3E−10 (+) 1.3E−10 (+) 4.1E−02 (+) 6.7E−11 (+) 1.6E−10 (+) 2.4E−10 (+) 1.7E−06 (+)
𝐹27 3.0E−11 (+) 6.1E−11 (+) 6.7E−11 (+) 5.1E−10 (+) 5.1E−08 (+) 3.0E−11 (+) 2.8E−04 (+) 3.0E−11 (+) 4.1E−11 (+)
𝐹28 3.0E−11 (+) 3.0E−11 (+) 3.7E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 9.5E−06 (+) 3.0E−11 (+) 3.0E−11 (+)
𝐹29 3.0E−11 (+) 3.0E−11 (+) 8.9E−10 (+) 9.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 4.2E−10 (+) 3.0E−11 (+) 3.7E−11 (+)
𝐹30 3.0E−11 (+) 3.0E−11 (+) 4.9E−01 (−) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+) 3.0E−11 (+)
+∕−. 29/0 29/0 25/4 29/0 29/0 29/0 28/1 29/0 28/1
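The pairwise comparison summarized in Table 5 can be reproduced in outline with a few lines of code. The sketch below is a dependency-free, normal-approximation version of the two-sided Wilcoxon rank-sum test; it is not the authors' test script (the paper's 𝑝 values come from 30 recorded runs per algorithm), and the helper name and synthetic samples are illustrative.

```python
import math

def rank_sum_p(a, b):
    """Two-sided Wilcoxon rank-sum test via the normal approximation.

    `a` and `b` are two samples (e.g. 30 best fitness values recorded for
    two algorithms). Returns the p-value under the null hypothesis that
    both samples come from the same distribution; ties get mid-ranks.
    """
    pooled = sorted((v, i) for i, v in enumerate(a + b))
    ranks = [0.0] * len(pooled)
    i = 0
    while i < len(pooled):
        j = i
        while j + 1 < len(pooled) and pooled[j + 1][0] == pooled[i][0]:
            j += 1
        mid = (i + j) / 2.0 + 1.0              # shared mid-rank of a tie group
        for k in range(i, j + 1):
            ranks[pooled[k][1]] = mid
        i = j + 1
    n1, n2 = len(a), len(b)
    w = sum(ranks[:n1])                        # rank sum of sample `a`
    mu = n1 * (n1 + n2 + 1) / 2.0              # mean of W under the null
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    return math.erfc(abs(w - mu) / sigma / math.sqrt(2.0))

# Two completely separated 30-run samples vs. two identical ones.
low = [0.01 * k for k in range(30)]
high = [10.0 + 0.01 * k for k in range(30)]
assert rank_sum_p(low, high) < 0.05   # significant difference
assert rank_sum_p(low, low) > 0.05    # no detectable difference
```

Under this approximation, two fully separated samples of size 30 yield a p-value of roughly 3E−11, which plausibly explains why that value recurs throughout Table 5 wherever DO's 30 runs never overlap a competitor's.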

Table 6
Friedman test results of the different algorithms on the 100-dimensional CEC’17 functions.
Algorithm Friedman rank test Rank
DO 1.2414 1
SCA 8.9655 10
WOA 6.7931 7
MSA 2.5517 2
HHO 4.1724 3
SOA 5.3448 5
ChOA 8.3793 9
LFD 5.7241 6
HOA 7.6207 8
AO 4.2069 4

4.1. Speed reducer design problem

The objective of the speed reducer problem is to minimize the weight of a mechanical device under 11 constraints, such as the shaft pressure, bearing diameter, surface pressure, and gear bending force. A diagram of the specific problem is shown in Fig. 14. There are 7 decision variables used to control this problem, namely, the face width 𝑏, the gear module 𝑚, the number of teeth of the pinion 𝑝, the length of the first shaft between bearings 𝑙1 , the length of the second shaft between bearings 𝑙2 , the diameter of the first shaft 𝑑1 , and the diameter of the second shaft 𝑑2 . The mathematical expression of the problem is as follows.

Consider z = [z1, z2, z3, z4, z5, z6, z7] = [b, m, p, l1, l2, d1, d2]

min f = 0.7854 z1 z2^2 (3.3333 z3^2 + 14.9334 z3 − 43.0934) − 1.508 z1 (z6^2 + z7^2) + 7.4777 (z6^3 + z7^3) + 0.7854 (z4 z6^2 + z5 z7^2)

s.t. g1(z) = 27 / (z1 z2^2 z3) − 1 ≤ 0
g2(z) = 397.5 / (z1 z2^2 z3^2) − 1 ≤ 0
g3(z) = 1.93 z4^3 / (z2 z3 z6^4) − 1 ≤ 0
g4(z) = 1.93 z5^3 / (z2 z3 z7^4) − 1 ≤ 0
g5(z) = (1 / (110 z6^3)) sqrt((745 z4 / (z2 z3))^2 + 16.9 × 10^6) − 1 ≤ 0
g6(z) = (1 / (85 z7^3)) sqrt((745 z5 / (z2 z3))^2 + 157.5 × 10^6) − 1 ≤ 0
g7(z) = z2 z3 / 40 − 1 ≤ 0
g8(z) = 5 z2 / z1 − 1 ≤ 0
g9(z) = z1 / (12 z2) − 1 ≤ 0
g10(z) = (1.5 z6 + 1.9) / z4 − 1 ≤ 0
g11(z) = (1.1 z7 + 1.9) / z5 − 1 ≤ 0

2.6 ≤ z1 ≤ 3.6, 0.7 ≤ z2 ≤ 0.8, 17 ≤ z3 ≤ 28, 7.3 ≤ z4 ≤ 8.3, 7.3 ≤ z5 ≤ 8.3, 2.9 ≤ z6 ≤ 3.9, 5.0 ≤ z7 ≤ 5.5

DO is compared with the harmony search algorithm (Dhiman and Kumar, 2017), particle swarm optimization (Dhiman and Kumar, 2019), the sine cosine algorithm (Dhiman and Kumar, 2017), the grey wolf optimizer (Dhiman and Kumar, 2017), MDE (Kamboj et al., 2020), the spotted hyena optimizer (Dhiman and Kumar, 2017), elephant herding optimization (Hashim et al., 2019), Henry gas solubility optimization (Hashim et al., 2019) and the aquila optimizer (Abualigah et al., 2021). Table 7 shows the results of all algorithms and the best solution, and Table 8 records the statistical results of all algorithms. By combining these two tables, it can be seen that DO achieves


Table 7
Optimal results of the different algorithms on the speed reducer design problem.
Algorithms 𝑏 𝑚 𝑝 𝑙1 𝑙2 𝑑1 𝑑2 Optimal value
HS (Dhiman and Kumar, 2017) 3.520124 0.7 17 8.37 7.8 3.366970 5.288719 3029.002
PSO (Dhiman and Kumar, 2019) 3.500019 0.7 17 8.3 7.8 3.352412 5.286715 3005.763
SCA (Dhiman and Kumar, 2017) 3.508755 0.7 17 7.3 7.8 3.461020 5.289213 3030.563
GWO (Dhiman and Kumar, 2017) 3.506690 0.7 17 7.380933 7.815726 3.357847 5.286768 3001.288
MDE (Kamboj et al., 2020) 3.50001 0.7 17 7.300156 7.800027 3.350221 5.286685 2996.35669
SHO (Dhiman and Kumar, 2017) 3.50159 0.7 17 7.3 7.8 3.35127 5.28874 2998.5507
EHO (Hashim et al., 2019) 2.900 0.70 17 7.30 7.80 3.10 5.200 3019.01
HGSO (Hashim et al., 2019) 3.498 0.71 17.02 7.67 7.810 3.36 5.289 2997.10
AO (Abualigah et al., 2021) 3.5021 0.7000 17.0000 7.3099 7.7476 3.3641 5.2994 3007.7328
DO 3.5000 0.7000 17.000 7.3057 7.7193 3.3507 5.2867 2994.7531

Table 8
Statistical results of the different algorithms on the speed reducer design problem.
Algorithms Eval Best Mean Worst Std
HS (Dhiman and Kumar, 2017) 30,000 3029.002 3295.329 3619.465 5.7E+01
PSO (Dhiman and Kumar, 2019) 100,000 3005.763 3105.252 3211.174 7.96E+01
SCA (Dhiman and Kumar, 2017) 30,000 3030.563 3065.917 3104.779 1.81E+01
GWO (Dhiman and Kumar, 2017) 30,000 3001.288 3005.845 3008.752 5.84E+00
MDE (Kamboj et al., 2020) / 2996.35669 / / /
SHO (Dhiman and Kumar, 2017) 30,000 2998.5507 2999.640 3003.889 1.93E+00
EHO (Hashim et al., 2019) 50,000 3019.0124 3100.12 3100.145 2.5262E+01
HGSO (Hashim et al., 2019) 50,000 2997.10 2996.4 2996.9 4.39E−05
AO (Abualigah et al., 2021) 25,000 3007.7328 / / /
DO 25,000 2994.7531 3000.70 3011.70 4.53E+00
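As a concrete illustration of the static penalty handling used throughout this section, the speed reducer objective and its 11 constraints can be folded into a single fitness function as below. This is a minimal sketch of the formulation above, not the authors' code; the helper name and the penalty weight of 1e6 are illustrative assumptions.

```python
def speed_reducer_penalized(z, penalty=1e6):
    """Speed reducer weight plus a static penalty for constraint violations.

    z = (b, m, p, l1, l2, d1, d2). Each violated inequality g_i(z) <= 0
    adds `penalty * violation` to the weight being minimized. The penalty
    weight 1e6 is an illustrative choice, not taken from the paper.
    """
    z1, z2, z3, z4, z5, z6, z7 = z
    f = (0.7854 * z1 * z2**2 * (3.3333 * z3**2 + 14.9334 * z3 - 43.0934)
         - 1.508 * z1 * (z6**2 + z7**2)
         + 7.4777 * (z6**3 + z7**3)
         + 0.7854 * (z4 * z6**2 + z5 * z7**2))
    g = [
        27.0 / (z1 * z2**2 * z3) - 1,
        397.5 / (z1 * z2**2 * z3**2) - 1,
        1.93 * z4**3 / (z2 * z3 * z6**4) - 1,
        1.93 * z5**3 / (z2 * z3 * z7**4) - 1,
        ((745.0 * z4 / (z2 * z3))**2 + 16.9e6)**0.5 / (110 * z6**3) - 1,
        ((745.0 * z5 / (z2 * z3))**2 + 157.5e6)**0.5 / (85 * z7**3) - 1,
        z2 * z3 / 40.0 - 1,
        5.0 * z2 / z1 - 1,
        z1 / (12.0 * z2) - 1,
        (1.5 * z6 + 1.9) / z4 - 1,
        (1.1 * z7 + 1.9) / z5 - 1,
    ]
    return f + penalty * sum(max(0.0, gi) for gi in g)

# DO's reported design from Table 7 scores ~2994.75 (penalty disabled here,
# since the 4-decimal rounding of the variables sits on active constraints).
best = (3.5000, 0.7000, 17.0, 7.3057, 7.7193, 3.3507, 5.2867)
assert abs(speed_reducer_penalized(best, penalty=0.0) - 2994.7531) < 1.0
```

A metaheuristic such as DO then simply minimizes this penalized fitness over the box constraints, with no special constraint-handling logic inside the search loop.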

Fig. 14. Speed reducer design.

better results with fewer function evaluations. The best value of DO is obviously better than those of HS, SCA, and EHO with 30,000 evolutions. Compared with the suboptimal MDE, DO also shows a relatively significant improvement. In terms of the mean indicator, DO obtains a better result with fewer evolutionary iterations. For the std and worst indicators, DO performs well compared to the other algorithms and is within acceptable limits. These results show that the algorithm has good applicability to this problem.

4.2. Tension/compression spring design problem

The second engineering problem is the tension/compression spring design problem shown in Fig. 15. The purpose of this problem is to minimize the weight of a spring. The problem is controlled by three decision variables: the wire diameter 𝑑, the average coil diameter 𝐷 and the number of active coils 𝑁. The mathematical expression of the problem is as follows.

x = [x1, x2, x3] = [d, D, N]

min f(x) = (x3 + 2) x2 x1^2

s.t. g1(x) = 1 − x2^3 x3 / (71785 x1^4) ≤ 0
g2(x) = (4 x2^2 − x1 x2) / (12566 (x2 x1^3 − x1^4)) + 1 / (5108 x1^2) ≤ 0
g3(x) = 1 − 140.45 x1 / (x2^2 x3) ≤ 0
g4(x) = (x1 + x2) / 1.5 − 1 ≤ 0

0.05 ≤ x1 ≤ 2.00, 0.25 ≤ x2 ≤ 1.30, 2.00 ≤ x3 ≤ 15.00

The nature-inspired metaheuristic algorithms used to solve this problem include the harmony search algorithm (Dhiman and Kumar, 2017), particle swarm optimization (Dhiman and Kumar, 2017), the cultural algorithm (Coello Coello and Becerra, 2004), coevolutionary particle swarm optimization (He and Wang, 2007), elephant herding optimization (Hashim et al., 2019), the multiverse optimizer (Dhiman and Kumar, 2017), the sine cosine algorithm (Hashim et al., 2021), the whale optimization algorithm (Hashim et al., 2021), Harris hawks optimization (Hashim et al., 2021), and the Archimedes optimization algorithm (Hashim et al., 2021). DO is compared with these algorithms. Table 9 shows the optimal results obtained by the different algorithms, and Table 10 shows their statistical results. DO achieves a better optimal value with fewer function evaluations than the other algorithms. For the mean indicator, DO ranks third, behind only CPSO and HS. For the worst indicator, DO ranks second among all algorithms. Although CPSO has a better std result, its number of evaluations is approximately 10 times that of DO, and the std of DO improves as the number of evolution iterations increases. The above results verify that DO has a strong constraint-programming ability and can be used to solve such problems.

4.3. Welded beam design

The design cost of a welded beam is minimized under the constraints of the weld shear stress 𝜏, the bending stress 𝜎 in the beam, the buckling load 𝑃𝑐 on the rod and the deflection of the beam end. Fig. 16 is a diagram of the welded beam design. This problem


Fig. 15. Tension/compression spring design problem.

Table 9
Optimal results of the different algorithms on the tension/compression spring design problem.
Algorithms 𝑑 𝐷 𝑁 Optimal value
HS (Dhiman and Kumar, 2017) 0.001622 0.316351 15.23960 0.012776352
PSO (Dhiman and Kumar, 2017) 0.05000 0.310414 15.0000 0.013192580
CA (Coello Coello and Becerra, 2004) 0.050000 0.317395 14.031795 0.012721
CPSO (He and Wang, 2007) 0.051728 0.357644 11.244543 0.012674
EHO (Hashim et al., 2019) 0.0580 0.5278 5.5820 0.0135
MVO (Dhiman and Kumar, 2017) 0.05000 0.315956 14.22623 0.012816930
SCA (Hashim et al., 2021) 0.0500 0.3171 14.1417 0.012797
WOA (Hashim et al., 2021) 0.0507 0.3339 12.7645 0.012683
HHO (Hashim et al., 2021) 0.0562 0.4754 6.6670 0.013016
AOA (Hashim et al., 2021) 0.0508 0.3348 11.7020 0.012681
DO 0.051215 0.345416 11.983708 0.012669

Table 10
Statistical results of the different algorithms on the tension/compression spring design problem.
Algorithms Eval Best Mean Worst Std
HS (Dhiman and Kumar, 2017) 30,000 0.012776352 0.013069872 0.015214230 3.75E−04
PSO (Dhiman and Kumar, 2017) 30,000 0.013192580 0.014817181 0.017862507 2.272E−03
CA (Coello Coello and Becerra, 2004) 50,000 0.012721 0.013568 0.0151156 8.4E−04
CPSO (He and Wang, 2007) 200,000 0.012674 0.012730 0.012924 5.19E−05
EHO (Dhiman and Kumar, 2017) 50,000 0.0135 0.0155 0.0189 1.1E−03
MVO (Dhiman and Kumar, 2017) 30,000 0.012816930 0.014464372 0.017839737 1.622E−03
SCA (Hashim et al., 2021) 30,000 0.012807 0.013859 0.015869 4.30E−04
WOA (Hashim et al., 2021) 30,000 0.012683 0.014709 0.017211 2.30E−03
HHO (Hashim et al., 2021) 30,000 0.013026 0.014160 0.016034 1.64E−03
AOA (Hashim et al., 2021) 30,000 0.012681 0.013369 0.015625 7.44E−04
DO 25,000 0.012669 0.013242 0.014830 6.15E−04
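Under the same static-penalty scheme, the spring problem's fitness can be sketched as below. The helper name and penalty weight are illustrative assumptions, not the authors' implementation.

```python
def spring_penalized(x, penalty=1e6):
    """Tension/compression spring weight plus a static penalty.

    x = (d, D, N): wire diameter, average coil diameter, active coils.
    The penalty weight 1e6 is an illustrative choice, not the paper's.
    """
    x1, x2, x3 = x
    f = (x3 + 2.0) * x2 * x1**2
    g = [
        1.0 - x2**3 * x3 / (71785.0 * x1**4),
        (4.0 * x2**2 - x1 * x2) / (12566.0 * (x2 * x1**3 - x1**4))
            + 1.0 / (5108.0 * x1**2),
        1.0 - 140.45 * x1 / (x2**2 * x3),
        (x1 + x2) / 1.5 - 1.0,
    ]
    return f + penalty * sum(max(0.0, gi) for gi in g)

# DO's reported design from Table 9 weighs about 0.012669 (penalty
# disabled: the rounded variables sit on an active constraint boundary).
best = (0.051215, 0.345416, 11.983708)
assert abs(spring_penalized(best, penalty=0.0) - 0.012669) < 1e-4
```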

mainly optimizes four decision variables: the welding thickness ℎ, the rod attachment length 𝑙, the rod height 𝑡, and the rod thickness 𝑏. The mathematical expression of this problem is as follows.

x = [x1, x2, x3, x4] = [h, l, t, b]

min f(x) = 1.10471 x1^2 x2 + 0.04811 x3 x4 (14.0 + x2)

s.t. g1(x) = τ(x) − τmax ≤ 0
g2(x) = σ(x) − σmax ≤ 0
g3(x) = δ(x) − δmax ≤ 0
g4(x) = x1 − x4 ≤ 0
g5(x) = P − Pc(x) ≤ 0
g6(x) = 0.125 − x1 ≤ 0
g7(x) = 1.10471 x1^2 + 0.04811 x3 x4 (14.0 + x2) − 5.0 ≤ 0

0.1 ≤ x1 ≤ 2.00, 0.1 ≤ x2, x3 ≤ 10, 0.1 ≤ x4 ≤ 2.00

where τ(x) = sqrt((τ′)^2 + 2 τ′ τ″ x2 / (2R) + (τ″)^2)

τ′ = P / (sqrt(2) x1 x2), τ″ = M R / J, M = P (L + x2 / 2)

R = sqrt(x2^2 / 4 + ((x1 + x3) / 2)^2)

J = 2 { sqrt(2) x1 x2 [ x2^2 / 4 + ((x1 + x3) / 2)^2 ] }

σ(x) = 6 P L / (x4 x3^2), δ(x) = 6 P L^3 / (E x3^2 x4)

Pc(x) = (4.013 E sqrt(x3^2 x4^6 / 36) / L^2) (1 − (x3 / (2L)) sqrt(E / (4G)))

P = 6000 lb, L = 14 in., δmax = 0.25 in.

E = 30 × 10^6 psi, G = 12 × 10^6 psi

τmax = 13,600 psi, σmax = 30,000 psi


Fig. 16. Welded beam design.

Table 11
Optimal results of the different algorithms on the welded beam design problem.
Algorithms ℎ 𝑙 𝑡 𝑏 Optimal value
GA (Dhiman and Kumar, 2017) 0.164171 4.032541 10.00000 0.223647 1.873971
PSO (Dhiman and Kumar, 2017) 0.197411 3.315061 10.00000 0.201395 1.820395
CPSO (He and Wang, 2007) 0.2024 3.5442 9.04821 0.2057 1.7280
SCA (Dhiman and Kumar, 2017) 0.204695 3.536291 9.004290 0.210025 1.759173
WOA (Hashim et al., 2019) 0.1876 3.9298 8.9907 0.2308 1.9428
HHO (Hashim et al., 2021) 0.2134 3.5601 8.4629 0.2346 1.8561
HGSO (Hashim et al., 2019) 0.2054 3.4476 9.0269 0.2060 1.7260
DO 0.2061 3.4656 9.0286 0.2061 1.7249

Table 12
Statistical results of the different algorithms on the welded beam design problem.
Algorithms Eval Best Mean Worst Std
GA (Dhiman and Kumar, 2017) 30,000 1.873971 2.119240 2.320125 3.4820E−02
PSO (Dhiman and Kumar, 2017) 30,000 1.820395 2.230310 3.048231 3.2453E−01
CPSO (He and Wang, 2007) 200,000 1.7280 1.7280 1.782143 1.2926E−02
SCA (Dhiman and Kumar, 2017) 30,000 1.759173 1.817657 1.873408 2.7543E−02
WOA (Hashim et al., 2019) 50,000 1.9428 3.3865 5.9905 8.251E−01
HHO (Hashim et al., 2021) 30,000 1.8561 1.9302 1.9759 6.47E−02
HGSO (Hashim et al., 2019) 50,000 1.7260 1.7265 1.7325 7.66E−03
DO 25,000 1.7249 1.7276 1.7456 4.38E−03
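The welded beam fitness, with its stress, deflection and buckling sub-expressions, can be sketched the same way. The code mirrors the formulas exactly as printed above (including the deflection constant); the helper name and the 1e6 penalty weight are illustrative assumptions.

```python
import math

# Problem constants as given in the text.
P, L = 6000.0, 14.0
E, G = 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def welded_beam_penalized(x, penalty=1e6):
    """Welded beam fabrication cost plus a static (illustrative) penalty."""
    x1, x2, x3, x4 = x                       # h, l, t, b
    f = 1.10471 * x1**2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)
    # Shear stress tau(x): combination of primary and secondary shear.
    tau_p = P / (math.sqrt(2.0) * x1 * x2)
    M = P * (L + x2 / 2.0)
    R = math.sqrt(x2**2 / 4.0 + ((x1 + x3) / 2.0)**2)
    J = 2.0 * (math.sqrt(2.0) * x1 * x2 * (x2**2 / 4.0 + ((x1 + x3) / 2.0)**2))
    tau_s = M * R / J
    tau = math.sqrt(tau_p**2 + 2.0 * tau_p * tau_s * x2 / (2.0 * R) + tau_s**2)
    sigma = 6.0 * P * L / (x4 * x3**2)        # bending stress
    delta = 6.0 * P * L**3 / (E * x3**2 * x4) # beam-end deflection, as printed
    p_c = (4.013 * E * math.sqrt(x3**2 * x4**6 / 36.0) / L**2
           * (1.0 - x3 / (2.0 * L) * math.sqrt(E / (4.0 * G))))  # buckling load
    g = [
        tau - TAU_MAX,
        sigma - SIGMA_MAX,
        delta - DELTA_MAX,
        x1 - x4,
        P - p_c,
        0.125 - x1,
        1.10471 * x1**2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0,
    ]
    return f + penalty * sum(max(0.0, gi) for gi in g)

# DO's reported design from Table 11 costs about 1.7249 (to within the
# 4-decimal rounding of the decision variables; penalty disabled here).
best = (0.2061, 3.4656, 9.0286, 0.2061)
assert abs(welded_beam_penalized(best, penalty=0.0) - 1.7249) < 0.01
```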

Table 11 shows the best results obtained by the proposed DO algorithm, the genetic algorithm (Dhiman and Kumar, 2017), particle swarm optimization (Dhiman and Kumar, 2017), coevolutionary particle swarm optimization (He and Wang, 2007), the sine cosine algorithm (Dhiman and Kumar, 2017), the whale optimization algorithm (Hashim et al., 2019), Harris hawks optimization (Hashim et al., 2021), and Henry gas solubility optimization (Hashim et al., 2019) on the welded beam problem. Table 12 displays the statistical results of these algorithms. It can be seen that DO performs better than the other algorithms. With fewer function evaluations, DO has the best std result, indicating that DO has good robustness. In terms of the mean and worst indicators, DO ranks second only to HGSO, while its number of evaluations is half that of HGSO. Therefore, DO is effective at optimizing this problem.

Fig. 17. Pressure vessel design.

4.4. Pressure vessel design

The last engineering problem is the pressure vessel design problem, which is shown in Fig. 17. The objective of this problem is to minimize the total cost of the materials, shaping, and welding of a cylindrical vessel. The problem contains four decision variables: the thickness of the shell 𝑇𝑠 , the thickness of the head 𝑇ℎ , the inner radius 𝑅, and the length of the cylindrical section 𝐿 without considering the head. The four constraints and the objective function of this problem are mathematically expressed as follows.

x = [x1, x2, x3, x4] = [Ts, Th, R, L]

min f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3^2 + 3.1661 x1^2 x4 + 19.84 x1^2 x3

s.t. g1(x) = −x1 + 0.0193 x3 ≤ 0
g2(x) = −x2 + 0.00954 x3 ≤ 0
g3(x) = −π x3^2 x4 − (4/3) π x3^3 + 1,296,000 ≤ 0
g4(x) = x4 − 240 ≤ 0

0 ≤ x1 ≤ 99, 0 ≤ x2 ≤ 99, 10 ≤ x3 ≤ 200, 10 ≤ x4 ≤ 200

The algorithms that have been applied to this problem include the harmony search algorithm (Dhiman and Kumar, 2017), particle swarm optimization (Dhiman and Kumar, 2017), coevolutionary particle swarm optimization (He and Wang, 2007), the sine cosine algorithm (Dhiman and Kumar, 2017), the marine predators algorithm (Faramarzi et al., 2020), and the aquila optimizer (Abualigah et al., 2021). The proposed DO algorithm is compared with these algorithms; the best results obtained by each algorithm are shown in Table 13, and Table 14 displays the statistical results of the 7 algorithms. It can be seen from


Table 13
Optimal results of the different algorithms on the pressure vessel design problem.
Algorithms 𝑇𝑠 𝑇ℎ 𝑅 𝐿 Optimal value
HS (Dhiman and Kumar, 2017) 1.099523 0.906579 44.456397 179.65887 6550.0230
PSO (Dhiman and Kumar, 2017) 0.778961 0.384683 40.320913 200.00000 5891.3879
CPSO (He and Wang, 2007) 0.8125 0.4375 42.091266 176.746500 6061.0777
SCA (Dhiman and Kumar, 2017) 0.817577 0.417932 41.74939 183.57270 6137.3724
MPA (Faramarzi et al., 2020) 0.8125 0.4375 42.098445 176.636607 6059.7144
AO (Abualigah et al., 2021) 1.0540 0.182806 59.6219 38.8050 5949.2258
DO 0.7784 0.3848 40.3310 199.8421 5885.7766

Table 14
Statistical results of the different algorithms on the pressure vessel design problem.
Algorithms Eval Best Mean Worst Std
HS (Dhiman and Kumar, 2017) 30,000 6550.0230 6643.9870 8005.4397 6.5752E+02
PSO (Dhiman and Kumar, 2017) 30,000 5891.3879 6531.5032 7394.5879 5.341E+02
CPSO (He and Wang, 2007) 200,000 6061.0777 6147.1332 6363.8041 8.6455E+01
SCA (Dhiman and Kumar, 2017) 30,000 6137.3724 6326.7606 6512.3541 1.266E+01
MPA (Faramarzi et al., 2020) 25,000 6059.7144 6102.8271 6410.0929 1.0661E+02
AO (Abualigah et al., 2021) 25,000 5949.2258 / / /
DO 25,000 5885.7766 6374.0396 7318.5197 5.2906E+02
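The pressure vessel fitness completes the set; again, the helper name and penalty weight are illustrative assumptions rather than the authors' code.

```python
import math

def pressure_vessel_penalized(x, penalty=1e6):
    """Pressure vessel total cost plus a static (illustrative) penalty.

    x = (Ts, Th, R, L): shell thickness, head thickness, inner radius,
    and length of the cylindrical section.
    """
    x1, x2, x3, x4 = x
    f = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
         + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)
    g = [
        -x1 + 0.0193 * x3,                                  # shell thickness
        -x2 + 0.00954 * x3,                                 # head thickness
        -math.pi * x3**2 * x4
            - 4.0 / 3.0 * math.pi * x3**3 + 1296000.0,      # minimum volume
        x4 - 240.0,                                         # maximum length
    ]
    return f + penalty * sum(max(0.0, gi) for gi in g)

# DO's reported design from Table 13 costs about 5885.78 (to within the
# rounding of the decision variables; penalty disabled here).
best = (0.7784, 0.3848, 40.3310, 199.8421)
assert abs(pressure_vessel_penalized(best, penalty=0.0) - 5885.7766) < 1.0
```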

Table 13 that DO with 25,000 evolutions is significantly better than the HS and SCA algorithms with 30,000 evolutions. With the same number of evolutions, DO achieves a better solution than MPA and AO, both of which were proposed in recent years. Therefore, DO solves this problem at a lower cost than the previous nature-inspired metaheuristic algorithms.

5. Conclusion and future works

This paper proposes a novel swarm intelligence optimization algorithm called the dandelion optimizer. DO simulates the flight of a dandelion seed in three stages: rising, descending, and landing. The CEC2017 unconstrained benchmark functions were used to evaluate the optimization performance of DO against 9 well-known comparison algorithms. Finally, DO was applied to 4 real-world problems and compared with many nature-inspired metaheuristic algorithms to verify its constrained-programming performance.

The statistical results demonstrate that DO has strong optimization performance on the 50- and 100-dimensional CEC’17 functions. The 100-dimensional convergence curves further show that the proposed DO algorithm can jump out of local extreme values and refine the exploitation accuracy during iterative optimization. The scalability analysis shows that the proposed DO algorithm retains high optimization accuracy across all dimensions. The results of the Wilcoxon rank-sum test and the Friedman test verify the effectiveness of DO in terms of statistical significance. Finally, the results on real-world problems show that DO can replace previous nature-inspired metaheuristic algorithms and achieve better convergence accuracy with fewer evolutions.

In future works, although DO achieves satisfactory results, other well-known operators, such as opposition-based learning mechanisms and chaos mapping, can be introduced into DO to further enhance its optimization performance. It is also necessary to develop a binary version of DO to solve classification problems, and a multiobjective version of DO can be developed to solve multiobjective optimization problems. Finally, DO can be applied to the hyperparameter optimization of machine learning algorithms, image segmentation and other fields.

CRediT authorship contribution statement

Shijie Zhao: Conceptualization, Methodology, Formal analysis, Writing – original draft, Writing – review & editing, Software, Visualization, Investigation. Tianran Zhang: Conceptualization, Methodology, Formal analysis, Writing – original draft, Writing – review & editing, Software, Visualization, Investigation. Shilin Ma: Writing – review & editing, Visualization, Investigation, Supervision. Miao Chen: Writing – review & editing, Visualization, Investigation, Supervision.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgements

This work was supported in part by the Education Department of Liaoning Province Fund, China Project (LJ2019JL017), and the Doctoral Research Start-up Fund of the Science and Technology Department of Liaoning Province, China (2019-BS-118).

References

Abualigah, L., Yousri, D., Abd Elaziz, M., Ewees, A.A., Al-qaness, M.A., Gandomi, A.H., 2021. Aquila optimizer: A novel meta-heuristic optimization algorithm. Comput. Ind. Eng. 157, 107250.
Agushaka, J.O., Ezugwu, A.E., Abualigah, L., 2022. Dwarf mongoose optimization algorithm. Comput. Methods Appl. Mech. Engrg. 391, 114570.
Ahmadianfar, I., Heidari, A.A., Gandomi, A.H., Chu, X., Chen, H., 2021. RUN beyond the metaphor: an efficient optimization algorithm based on Runge Kutta method. Expert Syst. Appl. 181, 115079.
Askari, Q., Younas, I., Saeed, M., 2020. Political optimizer: A novel socio-inspired meta-heuristic for global optimization. Knowl.-Based Syst. 195, 105709.
Awad, N.H., Ali, M.Z., Suganthan, P.N., Liang, J.J., Qu, B.Y., 2017. Problem definitions and evaluation criteria for the CEC 2017 special session and competition on single objective real-parameter numerical optimization. In: 2017 IEEE Congress on Evolutionary Computation (CEC).
Azizi, M., 2021. Atomic orbital search: A novel metaheuristic algorithm. Appl. Math. Model. 93, 657–683.
Back, T., 1996. Evolutionary Algorithms in Theory and Practice: Evolution Strategies, Evolutionary Programming, Genetic Algorithms. Oxford University Press.
Bianchi, L., Dorigo, M., Gambardella, L.M., Gutjahr, W.J., 2009. A survey on metaheuristics for stochastic combinatorial optimization. Nat. Comput. 8 (2), 239–287.
Blum, C., Roli, A., 2003. Metaheuristics in combinatorial optimization: Overview and conceptual comparison. ACM Comput. Surv. 35 (3), 268–308.
Casseau, V., De Croon, G., Izzo, D., Pandolfi, C., 2015. Morphologic and aerodynamic considerations regarding the plumed seeds of Tragopogon pratensis and their implications for seed dispersal. PLoS One 10 (5), e0125040.
Cavieres, L.A., Quiroz, C.L., Molina-Montenegro, M.A., 2008. Facilitation of the non-native Taraxacum officinale by native nurse cushion species in the high Andes of central Chile: are there differences between nurses? Funct. Ecol. 22 (1), 148–156.
Chan-Ley, M., Olague, G., 2020. Categorization of digitized artworks by media with brain programming. Appl. Opt. 59 (14), 4437–4447.
Chou, J.S., Nguyen, N.M., 2020. FBI inspired meta-optimization. Appl. Soft Comput. 93, 106339.


Coello Coello, C.A., Becerra, R.L., 2004. Efficient evolutionary optimization through the use of a cultural algorithm. Eng. Optim. 36 (2), 219–236.
Cornuéjols, G., 2008. Valid inequalities for mixed integer linear programs. Math. Program. 112 (1), 3–44.
Cummins, C., Seale, M., Macente, A., Certini, D., Mastropaolo, E., Viola, I.M., Nakayama, N., 2018. A separated vortex ring underlies the flight of the dandelion. Nature 562 (7727), 414–418.
Dhiman, G., Kumar, V., 2017. Spotted hyena optimizer: a novel bio-inspired based metaheuristic technique for engineering applications. Adv. Eng. Softw. 114, 48–70.
Dhiman, G., Kumar, V., 2019. Seagull optimization algorithm: Theory and its applications for large-scale industrial engineering problems. Knowl.-Based Syst. 165, 169–196.
Dorigo, M., Blum, C., 2005. Ant colony optimization theory: A survey. Theoret. Comput. Sci. 344 (2–3), 243–278.
Dorigo, M., Stützle, T., 2019. Ant colony optimization: overview and recent advances. In: Handbook of Metaheuristics. pp. 311–351.
Einstein, A., 1956. Investigations on the Theory of the Brownian Movement. Courier Corporation.
Faramarzi, A., Heidarinejad, M., Mirjalili, S., Gandomi, A.H., 2020. Marine predators algorithm: A nature-inspired metaheuristic. Expert Syst. Appl. 152, 113377.
Fogel, D.B., 1998. Artificial Intelligence Through Simulated Evolution. Wiley-IEEE Press, pp. 227–296.
Fonseca, C.M., Fleming, P.J., 1995. An overview of evolutionary algorithms in multiobjective optimization. Evol. Comput. 3 (1), 1–16.
Galli, L., Lin, C.J., 2021. A study on truncated Newton methods for linear classification. IEEE Trans. Neural Netw. Learn. Syst.
Gong, C., Han, S., Li, X., Zhao, L., Liu, X., 2018. A new dandelion algorithm and optimization for extreme learning machine. J. Exp. Theor. Artif. Intell. 30 (1), 39–52.
Gupta, S., Abderazek, H., Yıldız, B.S., Yildiz, A.R., Mirjalili, S., Sait, S.M., 2021. Comparison of metaheuristic optimization algorithms for solving constrained mechanical design optimization problems. Expert Syst. Appl. 183, 115351.
Halim, A.H., Ismail, I., Das, S., 2021. Performance assessment of the metaheuristic optimization algorithms: an exhaustive review. Artif. Intell. Rev. 54 (3), 2323–2409.
Hashim, F.A., Houssein, E.H., Mabrouk, M.S., Al-Atabany, W., Mirjalili, S., 2019. Henry gas solubility optimization: A novel physics-based algorithm. Future Gener. Comput. Syst. 101, 646–667.
Hashim, F.A., Hussain, K., Houssein, E.H., Mabrouk, M.S., Al-Atabany, W., 2021. Archimedes optimization algorithm: a new metaheuristic algorithm for solving optimization problems. Appl. Intell. 51 (3), 1531–1551.
Hashim, F.A., Hussien, A.G., 2022. Snake optimizer: A novel meta-heuristic optimization algorithm. Knowl.-Based Syst. 242, 108320.
He, Q., Wang, L., 2007. An effective co-evolutionary particle swarm optimization for constrained engineering design problems. Eng. Appl. Artif. Intell. 20 (1), 89–99.
Heidari, A.A., Mirjalili, S., Faris, H., Aljarah, I., Mafarja, M., Chen, H., 2019. Harris hawks optimization: Algorithm and applications. Future Gener. Comput. Syst. 97, 849–872.
Holland, J.H., 1992. Genetic algorithms. Sci. Am. 267 (1), 66–73.
Houssein, E.H., Saad, M.R., Hashim, F.A., Shaban, H., Hassaballah, M., 2020. Lévy flight distribution: A new metaheuristic algorithm for solving engineering optimization problems. Eng. Appl. Artif. Intell. 94, 103731.
Hussain, K., Salleh, M.N.M., Cheng, S., Shi, Y., 2019. On the exploration and exploitation in popular swarm-based metaheuristic algorithms. Neural Comput. Appl. 31 (11), 7665–7683.
Jain, M., Singh, V., Rani, A., 2019. A novel nature-inspired algorithm for optimization:
Khishe, M., Mosavi, M.R., 2020. Chimp optimization algorithm. Expert Syst. Appl. 149, 113338.
Kurban, R., Durmus, A., Karakose, E., 2021. A comparison of novel metaheuristic algorithms on color aerial image multilevel thresholding. Eng. Appl. Artif. Intell. 105, 104410.
Li, X., Han, S., Zhao, L., Gong, C., Liu, X., 2017. New dandelion algorithm optimizes extreme learning machine for biomedical classification problems. Comput. Intel. Neurosci. 2017.
Mantegna, R.N., 1994. Fast, accurate algorithm for numerical simulation of Levy stable stochastic processes. Phys. Rev. E 49 (5), 4677.
Meng, Q.A., Wang, Q., Zhao, K., Wang, P., Liu, P., Liu, H., Jiang, L., 2016. Hydroactuated configuration alteration of fibrous dandelion pappi: Toward self-controllable transport behavior. Adv. Funct. Mater. 26 (41), 7378–7385.
MiarNaeimi, F., Azizyan, G., Rashki, M., 2021. Horse herd optimization algorithm: A nature-inspired algorithm for high-dimensional optimization problems. Knowl.-Based Syst. 213, 106711.
Mirjalili, S., 2016. SCA: a sine cosine algorithm for solving optimization problems. Knowl.-Based Syst. 96, 120–133.
Mirjalili, S., Lewis, A., 2016. The whale optimization algorithm. Adv. Eng. Softw. 95, 51–67.
Mirjalili, S., Mirjalili, S.M., Hatamlou, A., 2016. Multi-verse optimizer: a nature-inspired algorithm for global optimization. Neural Comput. Appl. 27 (2), 495–513.
Mohamed, A.A.A., Mohamed, Y.S., El-Gaafary, A.A., Hemeida, A.M., 2017. Optimal power flow using moth swarm algorithm. Electr. Power Syst. Res. 142, 190–206.
Mohammadi-Balani, A., Nayeri, M.D., Azar, A., Taghizadeh-Yazdi, M., 2021. Golden eagle optimizer: A nature-inspired metaheuristic algorithm. Comput. Ind. Eng. 152, 107050.
Nand, R., Sharma, B.N., Chaudhary, K., 2021. Stepping ahead firefly algorithm and hybridization with evolution strategy for global optimization problems. Appl. Soft Comput. 109, 107517.
Nematollahi, A.F., Rahiminejad, A., Vahidi, B., 2020. A novel meta-heuristic optimization method based on golden ratio in nature. Soft Comput. 24 (2), 1117–1151.
Pereira, J.L.J., Francisco, M.B., Diniz, C.A., Oliver, G.A., Cunha, Jr., S.S., Gomes, G.F., 2021. Lichtenberg algorithm: A novel hybrid physics-based meta-heuristic for global optimization. Expert Syst. Appl. 170, 114522.
Pu, Y.F., Zhou, J.L., Zhang, Y., Zhang, N., Huang, G., Siarry, P., 2013. Fractional extreme value adaptive training method: fractional steepest descent approach. IEEE Trans. Neural Netw. Learn. Syst. 26 (4), 653–662.
Punnathanam, V., Kotecha, P., 2016. Yin-Yang-pair optimization: A novel lightweight optimization algorithm. Eng. Appl. Artif. Intell. 54, 62–79.
Sheldon, J.C., Burrows, F.M., 1973. The dispersal effectiveness of the achene–pappus units of selected compositae in steady winds with convection. New Phytol. 72 (3), 665–675.
Soons, M.B., Heil, G.W., Nathan, R., Katul, G.G., 2004. Determinants of long-distance seed dispersal by wind in grasslands. Ecology 85 (11), 3056–3068.
Soubervielle-Montalvo, C., Perez-Cham, O.E., Puente, C., Gonzalez-Galvan, E.J., Olague, G., Aguirre-Salado, C.A., Cuevas-Tello, J.C., Ontanon-Garcia, L.J., 2022. Design of a low-power embedded system based on a SoC-FPGA and the honeybee search algorithm for real-time video tracking. Sensors 22 (3), 1280.
Storn, R., Price, K., 1997. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 11 (4), 341–359.
Talatahari, S., Azizi, M., 2021. Chaos game optimization: a novel metaheuristic
Squirrel search algorithm. Swarm Evol. Comput. 44, 148–175. algorithm. Artif. Intell. Rev. 54 (2), 917–1004.
Kamboj, V.K., Nandi, A., Bhadoria, A., Sehgal, S., 2020. An intensify Harris Hawks Tan, Y., Zhu, Y., 2010. Fireworks algorithm for optimization. In: International
optimizer for numerical and engineering optimization problems. Appl. Soft Comput. Conference in Swarm Intelligence. Springer, Berlin, Heidelberg, pp. 355–364.
89, 106018. Wilcoxon, F., 1992. Individual comparisons by ranking methods. In: Breakthroughs in
Karaboga, D., Basturk, B., 2007. A powerful and efficient algorithm for numerical Statistics. Springer, New York, NY, pp. 196–202.
function optimization: artificial bee colony (ABC) algorithm. J. Glob. Optim. 39 Wolpert, D.H., Macready, W.G., 1997. No free lunch theorems for optimization. IEEE
(3), 459–471. Trans. Evol. Comput. 1 (1), 67–82.
Kaur, S., Awasthi, L.K., Sangal, A.L., Dhiman, G., 2020. Tunicate swarm algorithm: A Zahedi, Z.M., Akbari, R., Shokouhifar, M., Safaei, F., Jalali, A., 2016. Swarm intelligence
new bio-inspired based metaheuristic paradigm for global optimization. Eng. Appl. based fuzzy routing protocol for clustered wireless sensor networks. Expert Syst.
Artif. Intell. 90, 103541. Appl. 55, 313–328.
Kennedy, J., Eberhart, R., 1995. Particle swarm optimization. In: Proceedings Zhou, J., Qiu, Y., Zhu, S., Armaghani, D.J., Li, C., Nguyen, H., Yagiz, S., 2021.
of ICNN’95-International Conference on Neural Networks, Vol. 4. IEEE, pp. Optimization of support vector machine through the use of metaheuristic algorithms
1942–1948. in forecasting TBM advance rate. Eng. Appl. Artif. Intell. 97, 104015.