Secretary bird optimization algorithm
https://doi.org/10.1007/s10462-024-10729-y
Abstract
This study introduces a novel population-based metaheuristic algorithm called secretary
bird optimization algorithm (SBOA), inspired by the survival behavior of secretary birds
in their natural environment. Survival for secretary birds involves continuous hunting for
prey and evading pursuit from predators. This information is crucial for proposing a new
metaheuristic algorithm that utilizes the survival abilities of secretary birds to address real-
world optimization problems. The algorithm’s exploration phase simulates secretary birds
hunting snakes, while the exploitation phase models their escape from predators. During
this phase, secretary birds observe the environment and choose the most suitable way to
reach a secure refuge. These two phases are iteratively repeated, subject to termination cri-
teria, to find the optimal solution to the optimization problem. To validate the performance
of SBOA, experiments were conducted to assess convergence speed, convergence behavior,
and other relevant aspects. Furthermore, we compared SBOA with 15 advanced algorithms
using the CEC-2017 and CEC-2022 benchmark suites. All test results consistently dem-
onstrated the outstanding performance of SBOA in terms of solution quality, convergence
speed, and stability. Lastly, SBOA was employed to tackle 12 constrained engineering
design problems and perform three-dimensional path planning for Unmanned Aerial Vehi-
cles. The results demonstrate that, compared to contrasted optimizers, the proposed SBOA
can find better solutions at a faster pace, showcasing its significant potential in addressing
real-world optimization problems.
* Dan Liu
[email protected]
Youfa Fu
[email protected]; [email protected]
Jiadui Chen
[email protected]
Ling He
[email protected]
1 Key Laboratory of Advanced Manufacturing Technology, Ministry of Education, Guizhou University, Guiyang 550025, Guizhou, China
1 Introduction
With the continuous development of society and technology, optimization problems have
become increasingly complex and challenging in various domains. The nature of these
problems encompasses a wide range of areas, including manufacturing, resource alloca-
tion, path planning, financial portfolio optimization, and others. They involve multiple
decision variables, numerous constraints, and diverse objective functions. In the face of
real-world constraints such as resource scarcity, cost control, and efficiency requirements,
finding optimal solutions has become an urgent imperative (Zhou et al. 2011).
Traditional mathematical optimization methods, while performing well in certain
cases, often exhibit limitations when dealing with complex, high-dimensional, nonlinear,
and multimodal problems. Issues such as local optima, slow convergence rates, difficulty
in parameter tuning, high-dimensional problems, and computational costs have been per-
sistent challenges for researchers and practitioners in the field of optimization (Faramarzi
et al. 2020b). Researchers are therefore seeking new methods and technologies to address these challenges. In this context, metaheuristic algorithms have emerged. They belong to a category of intelligent search algorithms inspired by natural phenomena and mechanisms,
designed to find solutions to optimization problems through stochastic methods. Unlike tra-
ditional mathematical optimization methods, metaheuristic algorithms are better suited for
complex, multimodal, high-dimensional, and nonlinear optimization problems. These algo-
rithms, by simulating processes like evolution, swarm intelligence, and simulated anneal-
ing found in nature, exhibit robustness and global search capabilities, and as a result, they
have gained widespread popularity in various practical applications.
Despite the significant progress that metaheuristic algorithms have made in various fields, they still face challenges, including susceptibility to getting stuck in local optima, slow convergence rates, insufficient robustness, and high computational costs (Agrawal et al. 2021). Moreover, the No Free Lunch (NFL) theorem (Wolpert and Macready 1997) explicitly states that strong performance of an algorithm on a specific set of optimization problems does not guarantee that it will perform equally well on other optimization problems; therefore, no single algorithm excels in all optimization applications. The NFL theorem drives researchers to design innovative algorithms that solve optimization problems more effectively by providing better solutions. In certain scenarios, algorithmic convergence to local optima may occur due to an imbalance between exploitation and exploration. To address
this issue, various approaches have been proposed. Nama et al. introduced a novel inte-
grated algorithm, denoted as e-mPSOBSA (Nama et al. 2023), based on the Backtracking
Search Algorithm (BSA) and Particle Swarm Optimization (PSO). This integration aims
to mitigate the imbalance between exploitation and exploration. Nama et al. proposed an
improved version of the Backtracking Search Algorithm, named gQR-BSA (Nama et al.
2022b), to address challenges arising from the imbalance between exploitation and exploration. Nama and Saha presented a bio-inspired multi-population self-adaptive Backtracking Search Algorithm, referred to as ImBSA (Nama and Saha 2022), as a solution to the exploitation-exploration imbalance. Nama also presented an enhanced symbiotic organism search algorithm, mISOS (Nama 2021), to overcome challenges associated with the same imbalance. Chakraborty introduced an enhanced version of the Symbiotic Organism Search
algorithm, namely nwSOS (Chakraborty et al. 2022a), designed for solving optimization
problems in higher dimensions. Saha combined the exploration capability of SQI with the exploitation potential of SOS, proposing the hybrid symbiotic organism search (HSOS)
algorithm (Saha et al. 2021) and a new parameter setting-based modified differential evo-
lution for function optimization (Nama and Saha 2020). This combination enhances the
algorithm’s robustness and overall performance.
Engineering optimization problems have consistently posed significant challenges
within the field of engineering. These problems involve achieving specific objectives, typically cost minimization, efficiency maximization, or performance optimization, under limited resources. The challenges in engineering optimization arise from their
diversity and complexity. Problems can be either discrete or continuous, involve multiple
decision variables, are subject to various constraints, and may incorporate elements of ran-
domness and uncertainty. Consequently, selecting appropriate optimization algorithms is
crucial for solving these problems. In recent decades, researchers have developed various
types of optimization algorithms, including traditional mathematical programming meth-
ods, heuristic algorithms, and evolutionary algorithms, among others. However, these algo-
rithms still exhibit certain limitations when addressing such optimization problems. For
example, when dealing with more complex problems, they tend to have slow convergence
rates, limited search precision, and are prone to becoming trapped in local optima.
To overcome these challenges, this paper introduces a new optimization algorithm—the
secretary bird optimization algorithm (SBOA). Our motivation is to address the shortcom-
ings of existing algorithms in tackling complex engineering optimization problems. Spe-
cifically, our focus lies in improving the convergence speed of optimization algorithms,
enhancing optimization precision, and effectively avoiding local optima. By introducing
the secretary bird optimization algorithm, we aim to provide a more efficient and reliable
solution for the engineering domain, fostering new breakthroughs in the research and prac-
tical application of optimization problems. The primary contributions of this study are as
follows:
The structure of this paper is arranged as follows: the literature review is presented in
Sect. 2. Section 3 introduces the proposed SBOA and models it. Section 4 presents a simu-
lation analysis of the convergence behavior, the exploration and exploitation balance ratio
and search agent distribution of SBOA when dealing with optimization problems. Section 5
uses twelve engineering optimization problems to verify the efficiency of SBOA in solving
practical engineering optimization problems and addresses a path planning scenario for unmanned aerial vehicles (UAVs). Section 6 provides the conclusion and outlines several
directions for future research.
2 Literature review
et al. 2022b), network anomaly detection (Choura et al. 2021), data clustering (Manjarres
et al. 2013), scheduling problems (Attiya et al. 2022), and many other engineering applica-
tions and issues.
In the field of biology, the simulation of collective behaviors of animals, aquatic life,
birds, and other organisms has long been a source of inspiration for the development of
swarm intelligence algorithms. For example, the ant colony optimization (ACO) algorithm
(Dorigo et al. 2006) was inspired by the intelligent behavior of ants in finding the shortest
path to their nest and food sources. The particle swarm optimization (PSO) algorithm
(Kennedy and Eberhart 1995) was inspired by the collective behaviors and movement pat-
terns of birds or fish in their natural environments while searching for food. The grey wolf
optimizer (GWO) (Mirjalili et al. 2014) is based on the social hierarchy and hunting behav-
iors observed in wolf packs. The nutcracker optimizer (NOA) (Abdel-Basset et al. 2023b)
is proposed based on the behavior of nutcrackers, specifically inspired by Clark’s nut-
cracker, which locates seeds and subsequently stores them in appropriate caches. The algo-
rithm also considers the use of various objects or markers as reference points and involves
searching for hidden caches marked from different angles. The sea-horse optimizer (SHO)
(Zhao et al. 2023) draws inspiration from the natural behaviors of seahorses, encompassing
their movements, predatory actions, and reproductive processes. The SHO algorithm incor-
porates principles observed in seahorse behavior for optimization purposes. The African
vulture optimization algorithm (AVOA) (Abdollahzadeh et al. 2021a) draws inspiration
from the foraging and navigation behaviors of African vultures. The fox optimization algo-
rithm (FOX) (Mohammed and Rashid 2023) models the hunting behavior of foxes in the
wild when pursuing prey. The Chameleon swarm algorithm (CSA) (Braik 2021) simulates
the dynamic behaviors of chameleons as they search for food in trees, deserts, and swamps.
The Golden Jackal optimization algorithm (GJO) (Chopra and Mohsin Ansari 2022) is
inspired by the cooperative hunting behavior of golden jackals. The Chimpanzee optimiza-
tion algorithm (ChOA) (Khishe and Mosavi 2020) is based on the hunting behavior of
chimpanzee groups. Finally, the whale optimization algorithm (WOA) (Mirjalili and Lewis
2016) takes inspiration from the behavior of whales when hunting for prey by encircling
them. Additionally, there are several other nature-inspired optimization algorithms in the
field, each drawing inspiration from the foraging and collective behaviors of various animal
species. These algorithms include the marine predator algorithm (MPA) (Faramarzi et al.
2020a), which is inspired by the hunting behavior of marine predators; the rat swarm optimization algorithm (RSO) (Dhiman et al. 2021), inspired by the population behavior of rats when chasing and attacking prey; and the artificial rabbits optimization (ARO) algorithm (Wang et al. 2022), which draws inspiration from the survival strategies of rabbits in nature, namely detour foraging and random hiding. The detour-foraging strategy entails rabbits eating grass near the nests of other rabbits, compelling them to consume vegetation around different burrows. The shrike mocking optimizer (SMO) (Zamani et al. 2022) draws inspiration from the astonishing noise-making behavior of shrikes, while the spider wasp optimizer (SWO) (Abdel-Basset et al. 2023c) takes inspiration from the hunting, nesting, and mating behaviors of female spider wasps in the natural world. Other examples include the white shark optimizer
(WSO) (Braik et al. 2022), which takes inspiration from the exceptional auditory and olfac-
tory abilities of great white sharks during navigation and hunting; the tunicate swarm algo-
rithm (TSA) (Kaur et al. 2020), inspired by jet propulsion and clustering behavior in tuni-
cates; the honey badger algorithm (HBA) (Hashim et al. 2022), which simulates the
digging and honey-searching dynamic behaviors of honey badgers; the dung beetle optimi-
zation algorithm (DBO) (Xue and Shen 2022), inspired by the rolling, dancing, foraging,
stealing, and reproductive behaviors of dung beetles; and the salp swarm algorithm (SSA)
(Mirjalili et al. 2017), inspired by the collective behavior of salps in the ocean. The fennec fox optimization (FFO) (Trojovska et al. 2022) is inspired by the survival strategies of fennec foxes in desert environments. The quantum-based avian navigation algorithm (QANA) (Zamani et al. 2021) was proposed, inspired by the extraordinary precision
navigation behavior of migratory birds along long-distance aerial routes; the Northern
Goshawk Optimization (NGO) (Dehghani et al. 2021), which draws inspiration from the
hunting process of northern goshawks; the pathfinder algorithm (PFA) (Yapici and Cetin-
kaya 2019), inspired by animal group collective actions when searching for optimal food
areas or prey; the snake optimizer (SO) (Hashim and Hussien 2022), which models unique
snake mating behaviors. The crested porcupine optimizer (CPO) (Abdel-Basset et al.
2024), introduced by Abdel-Basset et al. (2024), is inspired by the four defense behaviors
of the crested porcupine, encompassing visual, auditory, olfactory, and physical attack
mechanisms. The CPO algorithm incorporates these defensive strategies observed in
crested porcupines for optimization purposes. On the other hand, the Genghis Khan Shark
Optimizer (GKSO) (Hu et al. 2023) is proposed based on the hunting, movement, foraging
(from exploration to exploitation), and self-protection mechanisms observed in the Geng-
his Khan shark. This optimizer draws inspiration from the diverse behaviors exhibited by
Genghis Khan sharks, integrating them into optimization algorithms. The Slime Mould Algorithm (SMA) (Li et al. 2020) is inspired by slime mold foraging and diffusion behaviors. Chakraborty combined Second-order Quadratic Approximation with SMA, presenting the hybrid algorithm HSMA (Chakraborty et al. 2023) to enhance the algorithm’s exploitation capabilities for achieving global optimality, and Nama introduced a novel quasi-reflective slime mould algorithm (QRSMA) (Nama 2022). The evolutionary
crow search algorithm (ECSA) (Zamani and Nadimi-Shahraki 2024) was introduced by Zamani to optimize the hyperparameters of artificial neural networks for diagnosing chronic
diseases. ECSA successfully addresses issues such as reduced population diversity and
slow convergence speed commonly encountered in the Crow Search Algorithm. Sahoo pro-
posed an improved Moth Flame Optimization algorithm (Sahoo et al. 2023) based on a
dynamic inverse learning strategy. Combining binary opposition algorithm (BOA) with
moth flame optimization (MFO), a new hybrid algorithm h-MFOBOA (Nama et al. 2022a)
was introduced. Fatahi proposed an improved binary quantum avian navigation algorithm (IBQANA) (Fatahi et al. 2023) in the context of fuzzy set systems (FSS) for medical data preprocessing, aiming to address the suboptimal solutions generated by binary versions of heuristic algorithms. Chakraborty introduced an enhancement to the Butterfly Optimization Algorithm, named mLBOA (Chakraborty et al. 2022b), utilizing the Lagrange interpola-
tion formula and embedding Levy flight search strategy; Sharma proposed an improved
Lagrange interpolation method for global optimization of butterfly optimization algorithm
(Sharma et al. 2022); and Zamani proposed a conscious crow search algorithm (CCSA) (Zamani et al. 2019) based on neighborhood awareness to address global optimization and engineering design problems, successfully tackling the unbalanced search strategy and premature convergence often encountered in the Crow Search Algorithm. Further examples include the gorilla troop optimization algorithm (GTO) (Abdollahzadeh et al. 2021b), based on the social behaviors of gorilla groups, and the Walrus Optimization Algorithm (WOA) (Trojovský and Dehghani 2022), inspired by walrus behaviors in feeding, migration, escaping, and confronting predators. Regardless of the differences between these metaheuristic
algorithms, they share a common characteristic of dividing the search process into two phases: exploration and exploitation.
In this section, the proposed secretary bird optimization algorithm (SBOA) is described
and the behavior of the secretary bird is modeled mathematically.
The Secretary Bird (Scientific name: Sagittarius serpentarius) is a striking African raptor
known for its distinctive appearance and unique behaviors. It is widely distributed in grass-
lands, savannas, and open riverine areas south of the Sahara Desert in Africa. Secretary
birds typically inhabit tropical open grasslands, savannas with sparse trees, and open areas
with tall grass, and they can also be found in semi-desert regions or wooded areas with
open clearings. The plumage of secretary birds is characterized by grey-brown feathers on
their backs and wings, while their chests are pure white, and their bellies are deep black
(De Swardt 2011; Hofmeyr et al. 2014), as shown in Fig. 1.
The Secretary Bird is renowned for its unique hunting style, characterized by its long
and sturdy legs and talons that enable it to run and hunt on the ground (Portugal et al.
2016). It typically traverses grasslands by walking or trotting, mimicking the posture of a
“secretary at work” by bowing its head and attentively scanning the ground to locate prey
hidden in the grass. Secretary birds primarily feed on insects, reptiles, small mammals,
and other prey. Once they spot prey, they swiftly charge towards it and capture it with their
sharp talons. They then strike the prey against the ground, ultimately killing it and consum-
ing it (Portugal et al. 2016).
The remarkable aspect of Secretary Birds lies in their ability to combat snakes, mak-
ing them a formidable adversary to these reptiles. When hunting snakes, the Secretary
Bird displays exceptional intelligence. It takes full advantage of its height by looking
down on the slithering snakes on the ground, using its sharp eyes to closely monitor
their every move. Drawing from years of experience in combat with snakes, the Sec-
retary Bird can effortlessly predict the snake’s next move, maintaining control of the
Table 1 Advantages and disadvantages of different algorithms

Evolutionary algorithms (EA)

Genetic algorithm (GA) (Holland 1992)
Advantages: global optimization and parallel search capability; easy to handle complex problems
Disadvantages: convergence rate is slow and depends on the setting of the algorithm parameters

Differential evolution (DE) (Storn and Price 1997)
Advantages: strong robustness; strong global search ability; not prone to local optima
Disadvantages: requires large computational resources; slow convergence

Genetic programming (GP) (Angeline 1994)
Advantages: automatic search; wide applicability; strong adaptability
Disadvantages: computationally expensive; difficult parameter tuning; prone to local optima

Cultural algorithm (CA) (Reynolds 1994)
Advantages: knowledge sharing speeds up the search and improves efficiency; strong adaptability
Disadvantages: high complexity; dependence on parameter settings; high computational cost

Evolution strategy (ES) (Asselmeyer et al. 1997)
Advantages: strong adaptability and robustness
Disadvantages: computationally expensive; prone to local optima

Physical phenomena-based algorithms (PhA)

Simulated annealing (SA) (Kirkpatrick et al. 1983)
Advantages: strong global search ability; strong adaptability; simple and easy to implement
Disadvantages: slow convergence; limited solution accuracy; prone to local optima

Gravitational search algorithm (GSA) (Rashedi et al. 2009)
Advantages: simple structure; strong adaptability; strong global search ability
Disadvantages: high computational cost; unstable convergence rate; prone to local optima

Big bang-big crunch algorithm (BB-BC) (Erol and Eksin 2006)
Advantages: simple and easy to understand; few parameter settings; strong global search ability
Disadvantages: high computational cost; unstable convergence rate; prone to local optima

Multiverse optimizer (MVO) (Mirjalili et al. 2016)
Advantages: relatively few parameters; strong adaptability; strong global search ability
Disadvantages: poor performance on large-scale optimization problems

Swarm intelligence (SI)

Particle swarm optimization (PSO) (Kennedy and Eberhart 1995)
Advantages: simple and easy to implement; strong global search ability; strong parallelism; gradient-free optimization
Disadvantages: tends to converge to local optima; poor convergence on high-dimensional problems

Ant colony optimization (ACO) (Dorigo et al. 2006)
Advantages: strong global search ability; strong interpretability
Disadvantages: premature convergence; time-consuming pheromone updates; prone to local optima

Grey wolf optimizer (GWO) (Mirjalili et al. 2014)
Advantages: simple and easy to implement; few parameters; strong global search ability; fast convergence
Disadvantages: parameter-sensitive; prone to local optima; not suitable for high-dimensional problems

African vultures optimization algorithm (AVOA) (Abdollahzadeh et al. 2021a)
Advantages: simple structure, easy to implement; fast convergence; strong robustness
Disadvantages: prone to local optima on high-dimensional problems

Fox optimization algorithm (FOX) (Mohammed and Rashid 2023)
Advantages: few parameters; easy implementation; strong global search ability
Disadvantages: parameter-sensitive; prone to local optima; not suitable for high-dimensional problems

Chameleon swarm algorithm (CSA) (Braik 2021)
Advantages: strong global search ability and adaptability
Disadvantages: difficult parameter selection; computationally intensive

Whale optimization algorithm (WOA) (Mirjalili and Lewis 2016)
Advantages: strong global search ability; does not rely on gradient information
Disadvantages: slow convergence makes the optimum hard to find; increases computation time on multidimensional problems

Chimp optimization algorithm (ChOA) (Khishe and Mosavi 2020)
Advantages: simple structure, easy to implement; good robustness; strong global search ability
Disadvantages: prone to local optima; slow convergence; low optimization accuracy

Golden jackal optimization algorithm (GJO) (Chopra and Mohsin Ansari 2022)
Advantages: few parameters; adaptability; strong global search ability
Disadvantages: prone to local optima owing to the large amount of computation

Marine predator algorithm (MPA) (Faramarzi et al. 2020a)
Advantages: strong global search ability, diversity, and convergence speed
Disadvantages: difficult parameter selection; complex owing to the large amount of computation

Rat swarm optimization (RSO) (Dhiman et al. 2021)
Advantages: global search capability; does not rely on gradient information
Disadvantages: high computational complexity; slow convergence; prone to local optima

Tunicate swarm algorithm (TSA) (Kaur et al. 2020)
Advantages: simple operation; few parameters to adjust; strong ability to escape local optima
Disadvantages: slow convergence; the optimum is difficult to find

White shark optimizer (WSO) (Braik et al. 2022)
Advantages: strong global search ability; diversity
Disadvantages: slow convergence; prone to local optima

Honey badger algorithm (HBA) (Hashim et al. 2022)
Advantages: faster convergence; better robustness
Disadvantages: prone to local optima

Salp swarm algorithm (SSA) (Mirjalili et al. 2017)
Advantages: simple structure; few parameters to adjust; strong adaptability
Disadvantages: prone to local optima; slow convergence

Dung beetle optimization algorithm (DBO) (Xue and Shen 2022)
Advantages: faster convergence
Disadvantages: less robust; more parameter settings

Fennec fox optimization (FFO) (Trojovska et al. 2022)
Advantages: fast convergence; high optimization accuracy
Disadvantages: poor robustness; prone to local optima

Northern goshawk algorithm (NGO) (Dehghani et al. 2021)
Advantages: strong global search capability; easy to implement and understand
Disadvantages: may fall into local optima; slow convergence

Snake optimizer (SO) (Hashim and Hussien 2022)
Advantages: global optimization and parallel search capability; easy to handle complex problems
Disadvantages: slow convergence in later stages; prone to local optima

Slime mould algorithm (SMA) (Li et al. 2020)
Advantages: simple structure, easy to realize and understand; global optimization and parallel search capability
Disadvantages: slow convergence; low solution accuracy; prone to local optima

Artificial gorilla troops optimizer (GTO) (Abdollahzadeh et al. 2021b)
Advantages: strong global optimization ability; excellent performance on high-dimensional problems
Disadvantages: prone to local optima; slow convergence

Walrus optimization algorithm (WOA) (Trojovský and Dehghani 2022)
Advantages: fast convergence; strong global search ability
Disadvantages: prone to local optima; poor robustness
situation. It gracefully hovers around the snake, leaping, and provoking it. It behaves
like an agile martial arts master, while the snake, trapped within its circle, struggles in
fear. The relentless teasing exhausts the snake, leaving it weakened. At this point, the
Secretary Bird avoids the snake’s frontal attacks by jumping behind it to deliver a lethal
blow. It uses its sharp talons to grasp the snake’s vital points, delivering a fatal strike.
Dealing with larger snakes can be challenging for the Secretary Bird, as large snakes
possess formidable constriction and crushing power. In such cases, the Secretary Bird
may lift the snake off the ground, either by carrying it in its beak or gripping it with its
talons. It then soars into the sky before releasing the snake, allowing it to fall to the hard
ground, resulting in a predictable outcome (Feduccia and Voorhies 1989; De Swardt
2011).
Furthermore, the intelligence of the Secretary Bird is evident in its strategies for evad-
ing predators, which encompass two distinct approaches. The first strategy involves the
bird’s ability to camouflage itself when it detects a nearby threat. If suitable camouflage
surroundings are available, the Secretary Bird will blend into its environment to evade
potential threats. The second strategy comes into play when the bird realizes that the sur-
rounding environment is not conducive to camouflage. In such cases, it will opt for flight
or rapid walking as a means to swiftly escape from the predator (Hofmeyr et al. 2014). The
correspondence between the Secretary Bird’s behavior and the secretary bird optimization
algorithm (SBOA) is illustrated in Fig. 2. In this context, the preparatory hunting behav-
ior of the secretary bird corresponds to the initialization stage of secretary bird optimiza-
tion algorithm (SBOA). The subsequent stages of the secretary bird’s hunting process align
with the three exploration stages of SBOA. The two strategies employed by the secretary
bird to evade predators correspond to C1 and C2, the two strategies in the exploitation stage
of SBOA.
In this subsection, the natural behaviors of secretary birds in hunting snakes and evading natural enemies are modeled mathematically to form SBOA.
The secretary bird optimization algorithm (SBOA) method belongs to the category of pop-
ulation-based metaheuristic approaches, where each Secretary Bird is considered a mem-
ber of the algorithm’s population. The position of each Secretary Bird in the search space
determines the values of decision variables. Consequently, in the SBOA method, the posi-
tions of the Secretary Birds represent candidate solutions to the problem at hand. In the
initial implementation of the SBOA, Eq. (1) is employed for the random initialization of
the Secretary Birds’ positions in the search space.
X_{i,j} = lb_j + r \times (ub_j - lb_j), \quad i = 1, 2, \ldots, N, \quad j = 1, 2, \ldots, Dim \qquad (1)

where X_{i,j} denotes the position of the ith secretary bird in the jth dimension, lb_j and ub_j are the lower and upper bounds, respectively, and r denotes a random number between 0 and 1.
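As a concrete illustration, the random initialization of Eq. (1) can be sketched in plain Python. The population size, dimensionality, bounds, and seed below are illustrative assumptions, not settings taken from the paper:

```python
import random

def initialize_population(n, dim, lb, ub, seed=None):
    """Randomly place n secretary birds in the search space per Eq. (1):
    X[i][j] = lb[j] + r * (ub[j] - lb[j]), with r drawn uniformly from [0, 1)."""
    rng = random.Random(seed)
    return [[lb[j] + rng.random() * (ub[j] - lb[j]) for j in range(dim)]
            for _ in range(n)]

# Hypothetical example: 5 birds on a 3-dimensional problem bounded by [-10, 10].
pop = initialize_population(n=5, dim=3, lb=[-10.0] * 3, ub=[10.0] * 3, seed=42)
assert all(-10.0 <= x <= 10.0 for bird in pop for x in bird)
```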
The secretary bird optimization algorithm (SBOA) is a population-based approach in which optimization starts from a population of candidate solutions, as shown in Eq. (2). These candidate solutions X are randomly generated within the lower-bound (lb) and upper-bound (ub) constraints of the given problem, and the best solution obtained so far is treated as the approximate optimal solution in each iteration.

X = \begin{bmatrix} X_1 \\ \vdots \\ X_i \\ \vdots \\ X_N \end{bmatrix} = \begin{bmatrix} X_{1,1} & \cdots & X_{1,j} & \cdots & X_{1,Dim} \\ \vdots & & \vdots & & \vdots \\ X_{i,1} & \cdots & X_{i,j} & \cdots & X_{i,Dim} \\ \vdots & & \vdots & & \vdots \\ X_{N,1} & \cdots & X_{N,j} & \cdots & X_{N,Dim} \end{bmatrix}_{N \times Dim} \qquad (2)

where X denotes the secretary bird population, X_i denotes the ith secretary bird, X_{i,j} denotes the value of the jth problem variable for the ith secretary bird, N denotes the number of population members (secretary birds), and Dim denotes the number of problem variables.
123 Page 16 of 102 Y. Fu et al.
Each secretary bird represents a candidate solution to optimize the problem. Therefore,
the objective function can be evaluated based on the values proposed by each secretary bird
for the problem variables. The resulting objective function values are then compiled into a
vector using Eq. (3).
    ⎡ F1 ⎤   ⎡ F(X1) ⎤
    ⎢ ⋮  ⎥   ⎢   ⋮   ⎥
F = ⎢ Fi ⎥ = ⎢ F(Xi) ⎥   (3)
    ⎢ ⋮  ⎥   ⎢   ⋮   ⎥
    ⎣ FN ⎦   ⎣ F(XN) ⎦

(both vectors are of size N × 1)
Here, F represents the vector of objective function values, and Fi represents the objec-
tive function value obtained by the ith secretary bird. By comparing the obtained objec-
tive function values, the quality of the corresponding candidate solutions is effectively
analyzed, determining the best candidate solution for a given problem. In minimization
problems, the secretary bird with the lowest objective function value is the best candidate
solution, whereas in maximization problems, the secretary bird with the highest objective
function value is the best candidate solution. Since the positions of the secretary birds and
the values of the objective function are updated in each iteration, it is necessary to deter-
mine the best candidate solution in each iteration as well.
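The evaluation of Eq. (3) and the per-iteration tracking of the best candidate can be sketched as follows (a minimal illustration for the minimization case; the function name and the toy sphere objective are ours, not the paper's):

```python
import numpy as np

def evaluate_and_select(X, objective, X_best=None, F_best=np.inf):
    """Eq. (3): stack objective values into a vector F, then keep the best (minimization)."""
    F = np.array([objective(x) for x in X])  # F[i] = F(X_i), shape (N,)
    i = int(np.argmin(F))                    # lowest value wins in minimization
    if F[i] < F_best:
        X_best, F_best = X[i].copy(), float(F[i])
    return F, X_best, F_best

# Toy minimization objective and a tiny 3-bird population
sphere = lambda x: float(np.sum(x ** 2))
X = np.array([[3.0, 4.0], [1.0, 1.0], [0.0, 2.0]])
F, X_best, F_best = evaluate_and_select(X, sphere)
# F = [25.0, 2.0, 4.0]; the second bird is the current best candidate
```

Because positions change every iteration, this selection step is repeated after each position update.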
Two distinct natural behaviors of the secretary bird have been utilized for updating the SBOA members. These two types of behaviors encompass: (i) the hunting strategy (exploration phase), in which the bird searches for, consumes, and attacks snakes; and (ii) the escape strategy (exploitation phase), in which the bird evades its natural enemies. Thus, in each iteration, each member of the secretary bird colony is updated in two different stages.
The hunting behavior of secretary birds when feeding on snakes is typically divided into
three stages: searching for prey, consuming prey, and attacking prey. The hunting behavior
of the secretary bird is shown in Fig. 3.
Based on the biological statistics of the secretary bird's hunting phases and the time durations of each phase, we divide the entire hunting process into three equal time intervals, namely t < (1/3)T, (1/3)T < t < (2/3)T, and (2/3)T < t < T, corresponding to the three phases of the secretary bird's predation: searching for prey, consuming prey, and attacking prey.
Therefore, the modeling of each phase in SBOA is as follows:
Stage 1 (Searching for Prey): The hunting process of secretary birds typically begins
with the search for potential prey, especially snakes. Secretary birds have incredibly sharp
vision, allowing them to quickly spot snakes hidden in the tall grass of the savannah.
They use their long legs to slowly sweep the ground while paying attention to their sur-
roundings, searching for signs of snakes (Feduccia and Voorhies 1989). Their long legs
and necks enable them to maintain a relatively safe distance to avoid snake attacks. This
situation arises in the initial iterations of optimization, where exploration is crucial. There-
fore, this stage employs a differential evolution strategy. Differential evolution uses differ-
ences between individuals to generate new solutions, enhancing algorithm diversity and
global search capabilities. By introducing differential mutation operations, diversity can
help avoid getting trapped in local optima. Individuals can explore different regions of the
solution space, increasing the chances of finding the global optimum. Therefore, updating
the secretary bird’s position in the Searching for Prey stage can be mathematically modeled
using Eqs. (4) and (5).
While t < (1/3)T,  xi,j^(new,P1) = xi,j + (xrandom_1 − xrandom_2) × R1   (4)

Xi = { Xi^(new,P1), if Fi^(new,P1) < Fi
     { Xi,          else                  (5)
where t represents the current iteration number, T the maximum iteration number, Xi^(new,P1) the new state of the ith secretary bird in the first stage, and xrandom_1 and xrandom_2 two random candidate solutions drawn in the first-stage iteration. R1 represents a randomly generated 1 × Dim array from the interval [0, 1], where Dim is the dimensionality of the solution space; xi,j^(new,P1) represents the value of its jth dimension, and Fi^(new,P1) its objective-function fitness value.
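The differential update of Eqs. (4) and (5) with greedy acceptance can be sketched as follows (function name, loop structure, and the demo sphere objective are illustrative assumptions, not the paper's code):

```python
import numpy as np

def stage1_search(X, F, objective, rng):
    """Eq. (4): differential move using two random peers; Eq. (5): greedy acceptance."""
    N, dim = X.shape
    for i in range(N):
        a, b = rng.choice(N, size=2, replace=False)  # two random candidate solutions
        R1 = rng.random(dim)                         # 1 x Dim array in [0, 1]
        x_new = X[i] + (X[a] - X[b]) * R1
        f_new = objective(x_new)
        if f_new < F[i]:                             # accept only if fitness improves
            X[i], F[i] = x_new, f_new
    return X, F

# Demo run on a toy sphere objective
sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(1)
X = rng.uniform(-5.0, 5.0, (8, 3))
F = np.array([sphere(x) for x in X])
F0 = F.copy()
X, F = stage1_search(X, F, sphere, rng)
```

The greedy acceptance of Eq. (5) guarantees that no bird's fitness ever worsens in this stage.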
Stage 2 (Consuming Prey): After a secretary bird discovers a snake, it engages in a dis-
tinctive method of hunting. Unlike other raptors that immediately dive in for combat, the
secretary bird employs its agile footwork and maneuvers around the snake. The secretary
bird stands its ground, observing every move of the snake from a high vantage point. It
uses its keen judgment of the snake’s actions to hover, jump, and provoke the snake gradu-
ally, thereby wearing down its opponent’s stamina (Hofmeyr et al. 2014). In this stage, we
introduce Brownian motion (RB) to simulate the random movement of the secretary bird.
Brownian motion can be mathematically modeled using Eq. (6). This "peripheral combat"
strategy gives the secretary bird a significant physical advantage. Its long legs make it dif-
ficult for the snake to entangle its body, and the bird’s talons and leg surfaces are covered
with thick keratin scales, like a layer of thick armor, making it impervious to the fangs of
venomous snakes. During this stage, the secretary bird may frequently pause to lock onto
the snake's location with its sharp eyesight. Here, we use the concept of xbest (the individual historical best position) together with Brownian motion. By using xbest, individuals can perform local searches towards the best positions they have previously found, better exploring the
surrounding solution space. Additionally, this approach not only helps individuals avoid prematurely converging to local optima but also accelerates the algorithm's convergence to
the best positions in the solution space. This is because individuals can search based on
both global information and their own historical best positions, increasing the chances of
finding the global optimum. The introduction of the randomness of Brownian motion ena-
bles individuals to explore the solution space more effectively and provides opportunities
to avoid being trapped in local optima, leading to better results when addressing complex
problems. Therefore, updating the secretary bird’s position in the Consuming Prey stage
can be mathematically modeled using Eqs. (7) and (8).
RB = randn(1, Dim)   (6)

While (1/3)T < t < (2/3)T,  xi,j^(new,P1) = xbest + exp((t/T)^4) × (RB − 0.5) × (xbest − xi,j)   (7)

Xi = { Xi^(new,P1), if Fi^(new,P1) < Fi
     { Xi,          else                  (8)
where randn(1, Dim) represents a randomly generated array of dimension 1 × Dim from a
standard normal distribution (mean 0, standard deviation 1), and xbest represents the current
best value.
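The Brownian-motion update of Eqs. (6)-(8) can be sketched as follows (a hedged illustration: the function name, the vectorized per-bird loop, and the demo objective are our assumptions):

```python
import numpy as np

def stage2_consume(X, F, x_best, objective, t, T, rng):
    """Eq. (7): move around x_best scaled by exp((t/T)^4); Eq. (8): greedy acceptance."""
    N, dim = X.shape
    for i in range(N):
        RB = rng.standard_normal(dim)  # Eq. (6): Brownian motion, randn(1, Dim)
        x_new = x_best + np.exp((t / T) ** 4) * (RB - 0.5) * (x_best - X[i])
        f_new = objective(x_new)
        if f_new < F[i]:               # keep the move only if it improves fitness
            X[i], F[i] = x_new, f_new
    return X, F

# Demo run in the middle phase of the search (t between T/3 and 2T/3)
sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(2)
X = rng.uniform(-5.0, 5.0, (8, 3))
F = np.array([sphere(x) for x in X])
F0 = F.copy()
x_best = X[np.argmin(F)].copy()
X, F = stage2_consume(X, F, x_best, sphere, t=200, T=500, rng=rng)
```

Note how each move is anchored at xbest, so the random Brownian step explores the neighborhood of the best position found so far.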
Stage 3 (Attacking Prey): When the snake is exhausted, the secretary bird perceives the opportune moment and swiftly takes action, using its powerful leg muscles to launch an
attack. This stage typically involves the secretary bird’s leg-kicking technique, where it
rapidly raises its leg and delivers accurate kicks using its sharp talons, often targeting the
snake’s head. The purpose of these kicks is to quickly incapacitate or kill the snake, thereby
avoiding being bitten in return. The sharp talons strike at the snake’s vital points, leading to
its demise. Sometimes, when the snake is too large to be immediately killed, the secretary
bird may carry the snake into the sky and release it, causing it to fall to the hard ground and
meet its end. In the random search process, we introduce Levy flight strategy to enhance the
optimizer’s global search capabilities, reduce the risk of SBOA getting stuck in local solu-
tions, and improve the algorithm’s convergence accuracy. Levy flight is a random move-
ment pattern characterized by short, continuous steps and occasional long jumps in a short
amount of time. It is used to simulate the flight ability of the secretary bird, enhancing its
exploration of the search space. Large steps help the algorithm explore the global range
of the search space, bringing individuals closer to the best position more quickly, while
small steps contribute to improving optimization accuracy. To make SBOA more dynamic, adaptive, and flexible during the optimization process, achieving a better balance between exploration and exploitation, avoiding premature convergence, accelerating convergence, and enhancing algorithm performance, we introduce a nonlinear perturbation factor represented as (1 − t/T)^(2 × t/T). Therefore, updating the secretary bird's position in the Attacking Prey stage can be mathematically modeled using Eqs. (9) and (10).
While t > (2/3)T,  xi,j^(new,P1) = xbest + (1 − t/T)^(2 × t/T) × xi,j × RL   (9)

Xi = { Xi^(new,P1), if Fi^(new,P1) < Fi
     { Xi,          else                  (10)
To enhance the optimization accuracy of the algorithm, we use a weighted Levy flight, denoted as RL:

RL = 0.5 × Levy(Dim)   (11)
Here, Levy(Dim) represents the Levy flight distribution function. It is calculated as
follows:
Levy(D) = s × (u × σ) / |v|^(1/η)   (12)
Here, s is a fixed constant of 0.01 and 𝜂 is a fixed constant of 1.5. u and v are random
numbers in the interval [0, 1]. The formula for σ is as follows:
σ = ( (Γ(1 + η) × sin(πη/2)) / (Γ((1 + η)/2) × η × 2^((η−1)/2)) )^(1/η)   (13)
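The Levy step of Eqs. (11)-(13) can be sketched as follows. We follow the text's statement that u and v are random numbers in [0, 1]; note that many Levy-flight implementations instead draw u from a normal distribution, so treat this as a sketch of the formulas as printed (the function name is ours):

```python
import numpy as np
from math import gamma, sin, pi

def levy_flight(dim, rng, s=0.01, eta=1.5):
    """Eqs. (12)-(13): Levy(D) = s * u * sigma / |v|^(1/eta), with s = 0.01, eta = 1.5."""
    # Eq. (13): the scale factor sigma depends only on eta
    sigma = (gamma(1 + eta) * sin(pi * eta / 2)
             / (gamma((1 + eta) / 2) * eta * 2 ** ((eta - 1) / 2))) ** (1 / eta)
    u = rng.random(dim)  # per the text: uniform in [0, 1]
    v = rng.random(dim)
    return s * u * sigma / np.abs(v) ** (1 / eta)

rng = np.random.default_rng(3)
RL = 0.5 * levy_flight(5, rng)  # Eq. (11): weighted Levy flight used in Eq. (9)
```

The heavy-tailed |v|^(-1/eta) term produces the occasional long jumps that characterize Levy flight.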
The natural enemies of secretary birds are large predators such as eagles, hawks, foxes, and
jackals, which may attack them or steal their food. When encountering these threats, sec-
retary birds typically employ various evasion strategies to protect themselves or their food.
These strategies can be broadly categorized into two main types. The first strategy involves
flight or rapid running. Secretary birds are known for their exceptionally long legs, ena-
bling them to run at remarkable speeds. They can cover distances of 20 to 30 km in a
single day, earning them the nickname “marching eagles”. Additionally, secretary birds are
skilled flyers and can swiftly take flight to escape danger, seeking safer locations (Feduccia
and Voorhies 1989). The second strategy is camouflage. Secretary birds may use the colors
or structures in their environment to blend in, making it harder for predators to detect them.
Their evasion behaviors when confronted with threats are illustrated in Fig. 4. In the design
of the SBOA, it is assumed that one of the following two conditions occurs with equal
probability:
In the first strategy, when secretary birds detect the proximity of a predator, they initially
search for a suitable camouflage environment. If no suitable and safe camouflage environ-
ment is found nearby, they will opt for flight or rapid running to escape. In this context, we introduce a dynamic perturbation factor, denoted as (1 − t/T)^2. This dynamic perturbation factor helps the algorithm strike a balance between exploration (searching for new solutions) and exploitation (using known solutions). By adjusting these factors, it is possible
to increase the level of exploration or enhance exploitation at different stages. In summary,
both evasion strategies employed by secretary birds can be mathematically modeled using
Eq. (14), and this updated condition can be expressed using Eq. (15).
xi,j^(new,P2) = { C1: xbest + (2 × RB − 1) × (1 − t/T)^2 × xi,j,   if rand < ri
               { C2: xi,j + R2 × (xrandom − K × xi,j),             else       (14)

Xi = { Xi^(new,P2), if Fi^(new,P2) < Fi
     { Xi,          else                  (15)
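A sketch of the exploitation-stage update of Eqs. (14) and (15) follows. The quantities ri, R2, K, and xrandom are not defined in this excerpt, so the sketch assumes ri = 0.5, R2 uniform in [0, 1], K a random integer in {1, 2}, and xrandom a randomly selected population member; these are labeled assumptions, not the paper's confirmed definitions:

```python
import numpy as np

def stage_escape(X, F, x_best, objective, t, T, rng, r_i=0.5):
    """Eq. (14): C1 camouflage move around x_best, or C2 flight past a peer; Eq. (15): greedy."""
    N, dim = X.shape
    for i in range(N):
        if rng.random() < r_i:                  # C1: camouflage near the best position
            RB = rng.standard_normal(dim)
            x_new = x_best + (2 * RB - 1) * (1 - t / T) ** 2 * X[i]
        else:                                   # C2: run/fly relative to a random bird
            R2 = rng.random(dim)                # assumed uniform in [0, 1]
            K = int(rng.integers(1, 3))         # assumed: K in {1, 2}
            x_rand = X[int(rng.integers(N))]    # assumed: random population member
            x_new = X[i] + R2 * (x_rand - K * X[i])
        f_new = objective(x_new)
        if f_new < F[i]:                        # Eq. (15): keep only improving moves
            X[i], F[i] = x_new, f_new
    return X, F

# Demo run late in the search (large t/T shrinks the C1 perturbation)
sphere = lambda x: float(np.sum(x ** 2))
rng = np.random.default_rng(5)
X = rng.uniform(-5.0, 5.0, (8, 3))
F = np.array([sphere(x) for x in X])
F0 = F.copy()
x_best = X[np.argmin(F)].copy()
X, F = stage_escape(X, F, x_best, sphere, t=400, T=500, rng=rng)
```

As t approaches T, the (1 − t/T)^2 factor shrinks the C1 perturbation, tightening the search around xbest.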
Different algorithms take varying amounts of time to optimize the same problems, and assessing the computational complexity of an algorithm is an essential way to evaluate its execution time. In this paper, we utilize Big O notation (Tallini et al. 2016) to analyze the time complexity of SBOA. Let N represent the population size of secretary birds, Dim denote the dimensionality, and T the maximum number of iterations. Following the rules of operation for the time complexity symbol O, the time complexity for randomly initializing the population is O(N). During the solution update process, the computational complexity is O(T × N) + O(T × N × Dim), which encompasses both finding the best positions and updating the positions of all solutions. Therefore, the total computational complexity of the proposed SBOA can be expressed as O(N × (T × Dim + 1)).
In this section, to assess the effectiveness of the proposed algorithm SBOA in optimization
and providing optimal solutions, we conducted a series of experiments. First, we designed
experiments to evaluate the convergence as well as exploration vs. exploitation capabili-
ties of the algorithm. Secondly, we compared this algorithm with 14 other algorithms in
the context of CEC-2017 and CEC-2022 to validate its performance. Finally, we subjected
it to a rank-sum test to determine whether there were significant performance differences
between the SBOA algorithm and the other algorithms. The algorithms are executed using
a consistent system configuration, implemented on a desktop computer featuring a 13th-generation Intel(R) Core(TM) i5-13400 processor (16 CPUs, ~2.5 GHz) and 16 GB RAM. These
experiments were conducted utilizing the MATLAB 2022b platform.
4.1 Qualitative analysis
In this section, the CEC-2017 test set is used to validate the SBOA in terms of exploration
and exploitation balance and convergence behavior in 30 dimensions. The CEC-2017 test
set functions are shown in Table 3.
Among metaheuristic algorithms, exploration and exploitation are considered two crucial
factors. Exploration involves searching for new solutions in the solution space, aiming to
discover better solutions in unknown regions. Exploitation, on the other hand, focuses on
known solution spaces and conducts searches within the local neighborhoods of solutions
to find potentially superior solutions. A well-balanced combination of exploration and
exploitation not only helps the algorithm converge quickly to optimal solutions, enhancing
search efficiency, but also allows for flexibility in addressing diverse optimization problems
and complexities, showcasing exceptional adaptability and robustness (Morales-Castaneda
et al. 2020). A high-quality algorithm should strike a good balance between these two fac-
tors. Therefore, we use Eqs. (17) and (18) to calculate the percentages of exploration and
exploitation, respectively, allowing us to assess the algorithm’s balance between these two
factors. Div(t) is a measure of dimension diversity calculated by Eq. (19). Here, xi^d (t) represents the dth dimension of the position of the ith individual, and Divmax denotes the maximum diversity throughout the entire iteration process.
Exploration(%) = (Div(t) / Divmax) × 100   (17)

Exploitation(%) = (|Div(t) − Divmax| / Divmax) × 100   (18)

Div(t) = (1/D) Σ_{d=1}^{D} (1/N) Σ_{i=1}^{N} |median(x^d(t)) − xi^d(t)|   (19)
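The diversity measure and the two percentages of Eqs. (17)-(19) can be computed from a recorded history of population snapshots, sketched below (the function name and the synthetic shrinking-spread history are illustrative assumptions):

```python
import numpy as np

def exploration_exploitation(history):
    """Eqs. (17)-(19): per-iteration diversity Div(t) and the derived percentages."""
    # Eq. (19): mean absolute deviation from the per-dimension population median,
    # averaged over all N individuals and all D dimensions
    div = np.array([np.mean(np.abs(np.median(X, axis=0) - X)) for X in history])
    div_max = div.max()                                     # Div_max over all iterations
    exploration = div / div_max * 100.0                     # Eq. (17)
    exploitation = np.abs(div - div_max) / div_max * 100.0  # Eq. (18)
    return exploration, exploitation

# Synthetic history: population spread shrinks over three "iterations"
rng = np.random.default_rng(4)
history = [rng.uniform(-s, s, (30, 5)) for s in (100.0, 10.0, 1.0)]
expl, expt = exploration_exploitation(history)
```

Since Div(t) never exceeds Divmax, the two percentages always sum to 100 at every iteration, which is what makes them a balance measure.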
primarily occurs during the mid-iterations of the problem search process. In the initial
stages, there is a comprehensive exploration of the global search space, gradually tran-
sitioning into the phase of local exploitation. It’s worth noting that the SBOA algorithm
maintains a relatively high exploitation ratio in the later iterations across all functions, con-
tributing to enhanced problem convergence speed and search precision. The SBOA algo-
rithm maintains a dynamic equilibrium between exploration and exploitation throughout
the iteration process. Therefore, SBOA exhibits outstanding advantages in avoiding local
optima and premature convergence.
These results demonstrate that, although SBOA may temporarily fall into local optima
in certain situations, with an increase in the number of iterations, it is capable of break-
ing out of local optima and gradually approaching and converging on the global optimal
solution.
4.2 Quantitative analysis
differential evolution algorithm, MadDE (Biswas et al. 2021). (2) Highly-Cited Algo-
rithms: DE (Storn and Price 1997), Grey wolf optimizer (GWO) (Mirjalili et al. 2014),
Whale optimization algorithm (WOA) (Mirjalili and Lewis 2016), CPSOGSA (Rather
and Bala 2021) and African vultures optimization algorithm (AVOA) (Abdollahzadeh
et al. 2021a); (3) Advanced Algorithms: snake optimizer (SO) (Hashim and Hussien
2022), Artificial gorilla troops optimizer (GTO) (Abdollahzadeh et al. 2021b), crayfish
optimization algorithm (COA) (Jia et al. 2023), Rime optimization algorithm (RIME)
(Su et al. 2023), Golden jackal optimization (GJO) (Chopra and Mohsin Ansari 2022),
dung beetle optimizer (DBO) (Xue and Shen 2022) and nutcracker optimization algo-
rithm (NOA) (Abdel-Basset et al. 2023b). The parameter settings for the compared algo-
rithms are detailed in Table 2. We set the maximum number of iterations and population
size for all algorithms to 500 and 30, respectively. Each algorithm is independently run
30 times, and the experimental results will be presented in the following text. The best
results for each test function and its corresponding dimension are highlighted in bold.
even in the later stages of iteration. Although functions F5, F8, F10, F12, F16, F20, and F22 briefly fall into local optima during certain periods of the subsequent iterations, as the number of iterations increases, SBOA demonstrates the ability to break free from these local optima and continues to explore more deeply, ultimately achieving higher
convergence accuracy. This suggests that the introduced differential evolution strategy,
Brownian motion strategy, and Levy flight strategy are effective. These strategies not only
help the algorithm escape local optima but also enhance the algorithm’s convergence speed
and accuracy.
Figures 13, 14, 15 present the results of 16 algorithms on three dimensions of the
CEC2017 test set in the form of box plots. It is evident from the figures that the majority of
SBOA box plots are the narrowest, indicating that SBOA exhibits higher robustness com-
pared to other algorithms. Furthermore, the boxes are positioned near the optimal function
values, suggesting that SBOA achieves higher precision in solving problems compared to
other algorithms.
To assess the scalability of SBOA, in this section, we compare it with the 15 benchmark
algorithms using the CEC-2022 test functions in both 10 and 20 dimensions. Similar to
the CEC-2017 test functions, the CEC-2022 test functions consist of unimodal functions,
multimodal functions, hybrid functions, and composite functions (Luo et al. 2022). The
specific details can be found in Table 7.
The test results for SBOA and the 15 other algorithms on the CEC-2022 test suite are
presented in Tables 8 and 9, and the convergence curves can be seen in Figs. 16 and 17. Box plots are shown in Figs. 19 and 20. The results indicate that SBOA outperforms the other algorithms on 6 of the functions in both the 10-dimensional and 20-dimensional CEC-2022 test sets. To clearly illustrate the comparative ranking of SBOA against other
algorithms, a stacked ranking chart is presented in Fig. 18. Rankings are divided into five
categories: average best ranking, average second-best ranking, average third-best ranking,
average worst ranking, and other rankings. From the chart, it is evident that SBOA does not
have the worst ranking in any of the test functions, demonstrating its strong scalability and
effectiveness. When considering the overall ranking, SBOA ranks the highest among the
16 algorithms and significantly outperforms the others. Figure 17 displays the convergence
curves of SBOA and the 15 benchmark algorithms. From the figure, we can observe that
SBOA exhibits higher convergence speed and accuracy compared to the other algorithms.
The results indicate that our proposed SBOA demonstrates faster convergence speed and
Table 3 Details of the CEC-2017 test functions (Type, No., Function, Range, Dim, Optimum)
Unimodal F1 Shifted and rotated bent cigar function [−100, 100] 30/50/100 100
F2 Shifted and rotated sum of different power function [−100, 100] 30/50/100 200
F3 Shifted and rotated Zakharov function [−100, 100] 30/50/100 300
Multimodal F4 Shifted and rotated Rosenbrock's function [−100, 100] 30/50/100 400
F5 Shifted and rotated Rastrigin's function [−100, 100] 30/50/100 500
F6 Shifted and rotated expanded Scaffer's F6 function [−100, 100] 30/50/100 600
F7 Shifted and rotated Lunacek Bi_Rastrigin function [−100, 100] 30/50/100 700
F8 Shifted and rotated non-continuous Rastrigin's function [−100, 100] 30/50/100 800
F9 Shifted and rotated Levy function [−100, 100] 30/50/100 900
F10 Shifted and rotated Schwefel's function [−100, 100] 30/50/100 1000
Hybrid F11 Hybrid function 1 (N = 3) [−100, 100] 30/50/100 1100
F12 Hybrid function 2 (N = 3) [−100, 100] 30/50/100 1200
F13 Hybrid function 3 (N = 3) [−100, 100] 30/50/100 1300
F14 Hybrid function 4 (N = 4) [−100, 100] 30/50/100 1400
F15 Hybrid function 5 (N = 4) [−100, 100] 30/50/100 1500
F16 Hybrid function 6 (N = 4) [−100, 100] 30/50/100 1600
F17 Hybrid function 7 (N = 5) [−100, 100] 30/50/100 1700
F18 Hybrid function 8 (N = 5) [−100, 100] 30/50/100 1800
F19 Hybrid function 9 (N = 5) [−100, 100] 30/50/100 1900
F20 Hybrid function 10 (N = 6) [−100, 100] 30/50/100 2000
Composition F21 Composition function 1 (N = 3) [−100, 100] 30/50/100 2100
F22 Composition function 2 (N = 3) [−100, 100] 30/50/100 2200
F23 Composition function 3 (N = 4) [−100, 100] 30/50/100 2300
F24 Composition function 4 (N = 4) [−100, 100] 30/50/100 2400
F25 Composition function 5 (N = 5) [−100, 100] 30/50/100 2500
F26 Composition function 6 (N = 5) [−100, 100] 30/50/100 2600
F27 Composition function 7 (N = 6) [−100, 100] 30/50/100 2700
F28 Composition function 8 (N = 6) [−100, 100] 30/50/100 2800
F29 Composition function 9 (N = 3) [−100, 100] 30/50/100 2900
F30 Composition function 10 (N = 3) [−100, 100] 30/50/100 3000
4.3 Statistical test
In this section, we use the Wilcoxon rank-sum test and the Friedman test to statistically analyze the differences between SBOA and the other comparison algorithms.
Table 4 The experimental results of 16 algorithms in CEC-2017 (Dim = 30): average (Ave) and standard deviation (Std) of the objective values obtained by each algorithm on F1–F30, with a final win|tie|loss (W|T|L) row summarizing each competitor against SBOA (full table omitted)

Table 5 The experimental results of 16 algorithms in CEC-2017 (Dim = 50) (full table omitted)

Table 6 The experimental results of 16 algorithms in CEC-2017 (Dim = 100) (full table omitted)
The results for CEC-2022 can be found in Tables 13 and 14. To highlight the comparison results, p-values exceeding 0.05 are shown in bold.
The absence of “NaN” values in CEC-2017 and the minimal presence of “NaN” val-
ues in CEC-2022 test functions suggest that SBOA’s optimization results are generally
dissimilar to those of the other algorithms. Furthermore, as the tables show, DE has no prominently highlighted entries in the CEC-2017 results, and the other comparative algorithms have few bolded entries, particularly in the 100-dimensional results for the 2017 test functions. Thus, SBOA shows significant differences from DE and the other compared metaheuristic algorithms.
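The bolding rule just described is easy to sketch; the algorithm names and p-values below are illustrative placeholders, not entries from Tables 13 and 14:

```python
# Sketch of the bolding rule: p-values above the 0.05 significance level
# (no statistically significant difference from SBOA) are emphasized.
# Names and values are illustrative only.
pvals = {"DE": 3.02e-11, "GWO": 8.15e-11, "AlgX": 7.28e-01}

def format_p(p, alpha=0.05):
    cell = f"{p:.2E}"
    return f"**{cell}**" if p > alpha else cell

for name, p in pvals.items():
    print(name, format_p(p))
```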
In conclusion, based on the analysis results presented in Sects. 4.1 and 4.2, it is evident
that SBOA exhibits the best overall performance among various metaheuristic algorithms.
This underscores the effectiveness of the strategies employed in SBOA, such as the differential evolution strategy, Levy flight strategy, dynamic perturbation factor, and other mechanisms.
Fig. 13 Boxplot of SBOA and competitor algorithms in optimization of the CEC-2017 test suite (Dim = 30)
4.3.2 Friedman's test
By employing the non-parametric Friedman average rank test to rank the experimental
results of the SBOA algorithm and other algorithms on the CEC-2017 and CEC-2022 test
sets, we obtained the rankings presented in Table 15. Clearly, SBOA consistently ranks
first, indicating that our proposed optimizer outperforms the other benchmark algorithms
on the considered test sets.
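Friedman's average rank test can be reproduced with a short pure-Python sketch. The error table below is illustrative, not the paper's CEC data; smaller errors receive better (lower) ranks:

```python
# Hypothetical mean errors of three optimizers on five benchmark functions
# (rows: functions, columns: algorithms A, B, C). Illustrative numbers only.
errors = [
    [1.2, 3.4, 2.1],
    [0.8, 2.9, 1.5],
    [2.2, 4.1, 3.0],
    [1.0, 3.8, 2.6],
    [1.7, 2.5, 2.4],
]

def friedman_ranks(table):
    """Average rank of each column (rank 1 = smallest error on that row)."""
    n, k = len(table), len(table[0])
    totals = [0.0] * k
    for row in table:
        order = sorted(range(k), key=lambda j: row[j])
        for rank, j in enumerate(order, start=1):
            totals[j] += rank
    return [t / n for t in totals]

def friedman_statistic(table):
    """Friedman chi-square statistic (no tie correction)."""
    n, k = len(table), len(table[0])
    rank_sums = [r * n for r in friedman_ranks(table)]
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)

print(friedman_ranks(errors))      # [1.0, 3.0, 2.0] -> algorithm A ranks first
print(friedman_statistic(errors))  # 10.0
```

With scipy available, `scipy.stats.friedmanchisquare` yields the same statistic together with a p-value.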
5 Application of SBOA
After conducting the experiments and analysis in the fourth section, we have confirmed that
SBOA exhibits superior optimization performance in testing functions. However, the pri-
mary objective of metaheuristic algorithms is to address real-world problems. Therefore,
Fig. 14 Boxplot of SBOA and competitor algorithms in optimization of the CEC-2017 test suite (Dim = 50)
in this section, we will further validate the effectiveness and applicability of SBOA through
its application to practical problems. To assess the practical applicability and scalability of
the proposed algorithm, we applied it to twelve typical real engineering problems (Kumar
et al. 2020a; b). These problems encompass: three-bar truss design (TBTD), pressure vessel
design (PVD), tension/compression spring design (TCPD (case 1)), welded beam design
(WBD), weight minimization of speed reducer design (WMSRD), rolling element bearing
design (REBD), gear train design (GTD), hydrostatic thrust bearing design (HSTBD), single
cone pulley design (SCPD), gas transmission compressor design (GTCD), planetary gear
train design (PGTD) and four-stage gear box design (F-sGBD). Furthermore, to demonstrate
the superiority of SBOA, we compared its results with the optimization results obtained
from fifteen state-of-the-art algorithms mentioned earlier in this study.
Fig. 15 Boxplot of SBOA and competitor algorithms in optimization of the CEC-2017 test suite
(Dim = 100)
The Three-Bar Truss Design problem originates from the field of civil engineering. Its
objective is to minimize the overall structure weight by controlling two parameter vari-
ables. The structure is depicted in Fig. 21, and its mathematical model is described by
Eq. (20).
$$
\begin{aligned}
&\text{Consider } \vec{x} = [x_1\; x_2] = [A_1\; A_2],\\
&\text{Minimize } f(\vec{x}) = l\left(2\sqrt{2}\,x_1 + x_2\right),\\
&\text{Subject to } g_1(\vec{x}) = \frac{\sqrt{2}\,x_1 + x_2}{\sqrt{2}\,x_1^2 + 2x_1x_2}\,P - \sigma \le 0,\\
&\qquad g_2(\vec{x}) = \frac{x_2}{\sqrt{2}\,x_1^2 + 2x_1x_2}\,P - \sigma \le 0, \qquad (20)\\
&\qquad g_3(\vec{x}) = \frac{1}{\sqrt{2}\,x_2 + x_1}\,P - \sigma \le 0,\\
&\text{Parameter range } 0 \le x_1, x_2 \le 1,\\
&\text{where } l = 100\ \text{cm},\quad P = 2\ \text{kN/cm}^2,\quad \sigma = 2\ \text{kN/cm}^2.
\end{aligned}
$$
Table 16 shows the optimization results of SBOA and eleven other competitors for the three-bar truss design problem. As seen in the table, SBOA, LSHADE_cnEpSin, LSHADE_SPACMA, MadDE and GTO simultaneously achieve an optimal cost of 2.64E + 02 while producing different solutions.
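The model in Eq. (20) is straightforward to evaluate in code. A minimal sketch, using a near-optimal design reported in the broader literature (an assumption, not a solution taken from Table 16):

```python
import math

# Objective and constraints of the three-bar truss model in Eq. (20);
# l = 100 cm, P = sigma = 2 kN/cm^2, 0 <= x1, x2 <= 1.
L_BAR, P, SIGMA = 100.0, 2.0, 2.0

def tbtd_objective(x1, x2):
    return L_BAR * (2.0 * math.sqrt(2.0) * x1 + x2)

def tbtd_constraints(x1, x2):
    """Return [g1, g2, g3]; the design is feasible when every value is <= 0."""
    denom = math.sqrt(2.0) * x1 ** 2 + 2.0 * x1 * x2
    g1 = (math.sqrt(2.0) * x1 + x2) / denom * P - SIGMA
    g2 = x2 / denom * P - SIGMA
    g3 = 1.0 / (math.sqrt(2.0) * x2 + x1) * P - SIGMA
    return [g1, g2, g3]

# Well-known near-optimal design from the literature: x1 ~ 0.7887, x2 ~ 0.4083
x = (0.7887, 0.4083)
print(round(tbtd_objective(*x), 2))               # ~263.91, i.e. 2.64E+02
print(all(g <= 0 for g in tbtd_constraints(*x)))  # True
```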
The Pressure Vessel Design problem features the structure shown in Fig. 22. The design objective is to minimize cost while meeting usage requirements. The four optimization parameters are the vessel thickness (Ts), head thickness (Th), inner radius (R), and head length (L). Equation (21) provides its mathematical model.
Table 8 Experimental results of 16 algorithms in CEC-2022 (Dim = 10)
Function LSHADE_cnEpSin LSHADE_SPACMA MadDE DE AVOA GWO WOA CPSOGSA
F1 Ave 1.06E + 03 3.00E + 02 3.22E + 03 3.01E + 02 2.72E + 03 1.91E + 03 3.00E + 02 3.00E + 02
Std 7.15E + 02 3.49E − 04 2.32E + 03 6.06E − 01 1.93E + 03 2.23E + 03 1.01E + 00 1.17E − 01
Table 9 (continued)
F1 Ave 1.68E + 04 2.41E + 03 4.41E + 04 1.27E + 03 1.88E + 04 3.46E + 04 2.29E + 03 1.21E + 03
Std 5.05E + 03 9.23E + 02 1.40E + 04 5.36E + 02 6.20E + 03 1.00E + 04 1.22E + 03 1.37E + 03
$$
\begin{aligned}
&\text{Consider } \vec{x} = [x_1\; x_2\; x_3\; x_4] = [T_s\; T_h\; R\; L],\\
&\text{Minimize } f(\vec{x}) = 0.6224x_1x_3x_4 + 1.7781x_2x_3^2 + 3.1661x_1^2x_4 + 19.84x_1^2x_3,\\
&\text{Subject to } g_1(\vec{x}) = -x_1 + 0.0193x_3 \le 0,\\
&\qquad g_2(\vec{x}) = -x_2 + 0.00954x_3 \le 0, \qquad (21)\\
&\qquad g_3(\vec{x}) = -\pi x_3^2 x_4 - \tfrac{4}{3}\pi x_3^3 + 1296000 \le 0,\\
&\qquad g_4(\vec{x}) = x_4 - 240 \le 0,\\
&\text{Parameter range } 0 \le x_1, x_2 \le 99,\quad 10 \le x_3, x_4 \le 200.
\end{aligned}
$$
From the results in Table 17, it is evident that SBOA, LSHADE_cnEpSin, MadDE and NOA outperform all other competitors, achieving a minimum cost of 5.89E + 03.
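A minimal evaluation of the Eq. (21) model, using a classical near-optimal design from the literature (an assumption, not this paper's reported solution):

```python
import math

# Pressure vessel model of Eq. (21); x = (Ts, Th, R, L).
def pvd_objective(x1, x2, x3, x4):
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3 ** 2
            + 3.1661 * x1 ** 2 * x4 + 19.84 * x1 ** 2 * x3)

def pvd_constraints(x1, x2, x3, x4):
    """Return [g1..g4]; the design is feasible when every value is <= 0."""
    return [
        -x1 + 0.0193 * x3,
        -x2 + 0.00954 * x3,
        -math.pi * x3 ** 2 * x4 - 4.0 / 3.0 * math.pi * x3 ** 3 + 1296000.0,
        x4 - 240.0,
    ]

# Classical near-optimal design from the literature (assumed, illustrative)
x = (0.8125, 0.4375, 42.0984, 176.65)
print(round(pvd_objective(*x), 1))               # ~6060
print(all(g <= 0 for g in pvd_constraints(*x)))  # True
```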
This design problem aims to minimize the weight of tension/compression springs by optimizing three critical parameters: wire diameter (d), coil diameter (D), and the number of
Fig. 19 Boxplot of SBOA and competitor algorithms in optimization of the CEC-2022 test suite (Dim = 10)
coils (N). The structure of this engineering problem is illustrated in Fig. 23, with the mathematical model presented in Eq. (22).
$$
\begin{aligned}
&\text{Consider } \vec{x} = [x_1\; x_2\; x_3] = [d\; D\; N],\\
&\text{Minimize } f(\vec{x}) = \left(x_3 + 2\right)x_2x_1^2,\\
&\text{Subject to } g_1(\vec{x}) = 1 - \frac{x_2^3 x_3}{71785x_1^4} \le 0,\\
&\qquad g_2(\vec{x}) = \frac{4x_2^2 - x_1x_2}{12566\left(x_2x_1^3 - x_1^4\right)} + \frac{1}{5108x_1^2} - 1 \le 0, \qquad (22)\\
&\qquad g_3(\vec{x}) = 1 - \frac{140.45x_1}{x_2^2 x_3} \le 0,\\
&\qquad g_4(\vec{x}) = \frac{x_1 + x_2}{1.5} - 1 \le 0,\\
&\text{Parameter range } 0.05 \le x_1 \le 2,\quad 0.25 \le x_2 \le 1.3,\quad 2 \le x_3 \le 15.
\end{aligned}
$$
Fig. 20 Boxplot of SBOA and competitor algorithms in optimization of the CEC-2022 test suite (Dim = 20)
Table 18 presents the optimization results of SBOA compared to fourteen competing algorithms for the tension/compression spring design problem. It is evident from the table that SBOA outperforms the other algorithms, achieving the optimal value of 1.27E − 02.
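Eq. (22) can be checked numerically at the best-known design from the literature (an assumption, not the paper's listed solution); two constraints are active there, so a small tolerance is used:

```python
# Tension/compression spring model of Eq. (22); x = (d, D, N).
def tcpd_objective(d, D, N):
    return (N + 2.0) * D * d ** 2

def tcpd_constraints(d, D, N):
    """Return [g1..g4]; the design is feasible when every value is <= 0."""
    return [
        1.0 - D ** 3 * N / (71785.0 * d ** 4),
        (4.0 * D ** 2 - d * D) / (12566.0 * (D * d ** 3 - d ** 4))
        + 1.0 / (5108.0 * d ** 2) - 1.0,
        1.0 - 140.45 * d / (D ** 2 * N),
        (d + D) / 1.5 - 1.0,
    ]

# Best-known design from the literature (assumed, illustrative)
x = (0.051689, 0.356718, 11.288966)
print(round(tcpd_objective(*x), 6))       # ~0.012665, i.e. 1.27E-02
# g1 and g2 are active at the optimum, so allow a tiny tolerance:
print(max(tcpd_constraints(*x)) < 1e-3)   # True
```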
Welded beam design is a typical nonlinear programming problem, aiming to minimize the manufacturing cost of a welded beam by controlling parameters such as beam thickness (h), length (l), height (t), width (b), and weld size. The structure of the optimization problem is depicted in Fig. 24, and its mathematical model is described by Eq. (23).
Table 10 P-value on CEC-2017 (Dim = 30)
Function LSHADE_cnEpSin LSHADE_SPACMA MadDE DE AVOA GWO WOA CPSOGSA
F27 8.29E − 06 1.11E − 03 1.17E − 09 3.02E − 11 8.15E − 11 3.65E − 08 3.02E − 11 3.02E − 11
F28 7.28E − 01 3.04E − 01 5.26E − 04 3.02E − 11 5.07E − 10 3.02E − 11 3.02E − 11 5.46E − 09
Table 11 P-value on CEC-2017 (Dim = 50)
F1 1.11E − 06 2.68E − 06 6.07E − 11 3.02E − 11 1.69E − 09 3.02E − 11 3.02E − 11 3.02E − 11
F2 1.21E − 12 1.21E − 12 1.21E − 12 3.02E − 11 1.61E − 10 3.02E − 11 3.02E − 11 3.02E − 11
Table 11 (continued)
F21 5.83E − 03 8.15E − 11 6.70E − 11 9.06E − 08 3.02E − 11 3.02E − 11 3.02E − 11
F22 7.22E − 06 4.80E − 07 2.67E − 09 4.64E − 03 1.09E − 10 5.57E − 10 3.01E − 07
Table 12 (continued)
F27 1.33E − 10 2.57E − 07 3.02E − 11 3.02E − 11 8.99E − 11 4.08E − 11 3.02E − 11 3.02E − 11
F28 1.32E − 04 6.74E − 06 3.02E − 11 3.02E − 11 2.19E − 08 3.02E − 11 3.02E − 11 3.02E − 11
F1 3.01E − 11 6.34E − 05 3.02E − 11 3.02E − 11 6.07E − 11 3.02E − 11 3.02E − 11 6.70E − 11
F2 4.33E − 02 9.22E − 01 3.09E − 05 1.97E − 02 1.37E − 02 4.83E − 05 1.81E − 07 9.70E − 03
Table 15 Friedman average rank sum test results
Dimensions 30 50 100 10 20
Algorithms Ave. rank Overall rank Ave. rank Overall rank Ave. rank Overall rank Ave. rank Overall rank Ave. rank Overall rank
$$
\begin{aligned}
&\text{Consider } \vec{x} = [x_1\; x_2\; x_3\; x_4] = [h\; l\; t\; b],\\
&\text{Minimize } f(\vec{x}) = 1.10471x_1^2x_2 + 0.04811x_3x_4\left(14.0 + x_2\right),\\
&\text{Subject to } g_1(\vec{x}) = \tau(\vec{x}) - \tau_{\max} < 0,\\
&\qquad g_2(\vec{x}) = \sigma(\vec{x}) - \sigma_{\max} < 0,\\
&\qquad g_3(\vec{x}) = \delta(\vec{x}) - \delta_{\max} < 0,\\
&\qquad g_4(\vec{x}) = x_1 - x_4 < 0,\\
&\qquad g_5(\vec{x}) = P - P_c(\vec{x}) < 0, \qquad (23)\\
&\qquad g_6(\vec{x}) = 0.125 - x_1 < 0,\\
&\qquad g_7(\vec{x}) = 1.10471x_1^2 + 0.04811x_3x_4\left(14.0 + x_2\right) - 5.0 < 0,\\
&\text{Parameter range } 0.1 < x_1, x_4 < 2,\quad 0.1 < x_2, x_3 \le 10,\\
&\text{where } \tau(\vec{x}) = \sqrt{(\tau')^2 + 2\tau'\tau''\frac{x_2}{2R} + (\tau'')^2},\quad
\tau' = \frac{P}{\sqrt{2}\,x_1x_2},\quad \tau'' = \frac{MR}{J},\\
&M = P\left(L + \frac{x_2}{2}\right),\quad
R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2},\quad
J = 2\left\{\sqrt{2}\,x_1x_2\left[\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2\right]\right\},\\
&\sigma(\vec{x}) = \frac{6PL}{x_4x_3^2},\quad
\delta(\vec{x}) = \frac{6PL^3}{Ex_3^3x_4},\quad
P_c(\vec{x}) = \frac{4.013E\sqrt{x_3^2x_4^6/36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right),\\
&P = 6000\ \text{lb},\quad L = 14\ \text{in},\quad \delta_{\max} = 0.25\ \text{in},\quad
E = 30\times10^6\ \text{psi},\\
&G = 12\times10^6\ \text{psi},\quad \tau_{\max} = 13600\ \text{psi},\quad \sigma_{\max} = 30000\ \text{psi}.
\end{aligned}
$$
The optimization results for the Welded Beam Design problem are presented
in Table 19. As per the test results, SBOA achieves the lowest economic cost after
optimization.
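A sketch evaluating the Eq. (23) model at the classical best-known design (an assumption, not necessarily the solution in Table 19); several constraints are active there, so the maximum violation hovers near zero:

```python
import math

# Welded beam model of Eq. (23); x = (h, l, t, b).
P, L, E, G = 6000.0, 14.0, 30e6, 12e6
TAU_MAX, SIGMA_MAX, DELTA_MAX = 13600.0, 30000.0, 0.25

def wbd_objective(x1, x2, x3, x4):
    return 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)

def wbd_constraints(x1, x2, x3, x4):
    """Return [g1..g7]; the design is feasible when every value is <= 0."""
    tau_p = P / (math.sqrt(2.0) * x1 * x2)              # primary shear stress
    M = P * (L + x2 / 2.0)                              # bending moment
    half = ((x1 + x3) / 2.0) ** 2
    R = math.sqrt(x2 ** 2 / 4.0 + half)
    J = 2.0 * math.sqrt(2.0) * x1 * x2 * (x2 ** 2 / 4.0 + half)
    tau_pp = M * R / J                                  # torsional shear stress
    tau = math.sqrt(tau_p ** 2 + tau_p * tau_pp * x2 / R + tau_pp ** 2)
    sigma = 6.0 * P * L / (x4 * x3 ** 2)                # bending stress
    delta = 6.0 * P * L ** 3 / (E * x3 ** 3 * x4)       # beam deflection
    Pc = (4.013 * E * math.sqrt(x3 ** 2 * x4 ** 6 / 36.0) / L ** 2
          * (1.0 - x3 / (2.0 * L) * math.sqrt(E / (4.0 * G))))  # buckling load
    return [tau - TAU_MAX, sigma - SIGMA_MAX, delta - DELTA_MAX,
            x1 - x4, P - Pc, 0.125 - x1,
            1.10471 * x1 ** 2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0]

# Classical best-known design from the literature (assumed, illustrative)
x = (0.20573, 3.47049, 9.03662, 0.20573)
print(round(wbd_objective(*x), 4))        # ~1.7249
print(max(wbd_constraints(*x)) <= 1.0)    # True (near-active constraints)
```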
This problem originates from the gearbox of a small aircraft engine, aiming to find the minimum gearbox weight subject to certain constraints. The weight minimization design of the gearbox involves seven variables: gear face width (x1), teeth module (x2), number of teeth on the pinion (x3), length between bearings for the first shaft (x4), length between bearings for the second shaft (x5), diameter of the first shaft (x6), and diameter of the second shaft (x7). The mathematical model for this problem is represented by Eq. (24).
$$
\begin{aligned}
&\text{Minimize } f(\vec{x}) = 0.7854x_2^2x_1\left(14.9334x_3 - 43.0934 + 3.3333x_3^2\right)\\
&\qquad + 0.7854\left(x_5x_7^2 + x_4x_6^2\right) - 1.508x_1\left(x_7^2 + x_6^2\right) + 7.477\left(x_7^3 + x_6^3\right),\\
&\text{Subject to } g_1(\vec{x}) = -x_1x_2^2x_3 + 27 \le 0,\\
&\qquad g_2(\vec{x}) = -x_1x_2^2x_3^2 + 397.5 \le 0,\\
&\qquad g_3(\vec{x}) = -x_2x_6^4x_3x_4^{-3} + 1.93 \le 0,\\
&\qquad g_4(\vec{x}) = -x_2x_7^4x_3x_5^{-3} + 1.93 \le 0,\\
&\qquad g_5(\vec{x}) = 10x_6^{-3}\sqrt{16.91\times10^6 + \left(745x_4x_2^{-1}x_3^{-1}\right)^2} - 1100 \le 0, \qquad (24)\\
&\qquad g_6(\vec{x}) = 10x_7^{-3}\sqrt{157.5\times10^6 + \left(745x_5x_2^{-1}x_3^{-1}\right)^2} - 850 \le 0,\\
&\qquad g_7(\vec{x}) = x_2x_3 - 40 \le 0,\\
&\qquad g_8(\vec{x}) = -x_1x_2^{-1} + 5 \le 0,\\
&\qquad g_9(\vec{x}) = x_1x_2^{-1} - 12 \le 0,\\
&\qquad g_{10}(\vec{x}) = 1.5x_6 - x_4 + 1.9 \le 0,\\
&\qquad g_{11}(\vec{x}) = 1.1x_7 - x_5 + 1.9 \le 0,\\
&\text{Parameter range } 2.6 \le x_1 \le 3.6,\quad 0.7 \le x_2 \le 0.8,\quad 17 \le x_3 \le 28,\\
&\qquad 7.3 \le x_4, x_5 \le 8.3,\quad 2.9 \le x_6 \le 3.9,\quad 5 \le x_7 \le 5.5.
\end{aligned}
$$
From the experimental results shown in Table 20, we can observe that SBOA achieves the best optimization performance, obtaining an optimal result of 2.99E + 03.
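A sketch of the Eq. (24) model evaluated at the best-known speed reducer design from the literature (an assumption, not the Table 20 entry):

```python
import math

# Speed reducer model of Eq. (24); x = (x1, ..., x7).
def wmsrd_objective(x):
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854 * x2 ** 2 * x1 * (14.9334 * x3 - 43.0934 + 3.3333 * x3 ** 2)
            + 0.7854 * (x5 * x7 ** 2 + x4 * x6 ** 2)
            - 1.508 * x1 * (x7 ** 2 + x6 ** 2)
            + 7.477 * (x7 ** 3 + x6 ** 3))

def wmsrd_constraints(x):
    """Return [g1..g11]; the design is feasible when every value is <= 0."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return [
        -x1 * x2 ** 2 * x3 + 27.0,
        -x1 * x2 ** 2 * x3 ** 2 + 397.5,
        -x2 * x6 ** 4 * x3 * x4 ** -3 + 1.93,
        -x2 * x7 ** 4 * x3 * x5 ** -3 + 1.93,
        10.0 * x6 ** -3 * math.sqrt(16.91e6 + (745.0 * x4 / (x2 * x3)) ** 2) - 1100.0,
        10.0 * x7 ** -3 * math.sqrt(157.5e6 + (745.0 * x5 / (x2 * x3)) ** 2) - 850.0,
        x2 * x3 - 40.0,
        -x1 / x2 + 5.0,
        x1 / x2 - 12.0,
        1.5 * x6 - x4 + 1.9,
        1.1 * x7 - x5 + 1.9,
    ]

# Best-known design from the literature (assumed, illustrative)
x_best = (3.5, 0.7, 17.0, 7.3, 7.71532, 3.350541, 5.286654)
print(round(wmsrd_objective(x_best), 1))   # ~2.99E+03
print(max(wmsrd_constraints(x_best)) < 1.0)
```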
Table 20 Experimental results of reducer weight minimization design
The design of rolling bearings presents complex nonlinear challenges. The bearing’s capac-
ity to support loads is constrained by ten parameters, encompassing five design variables:
pitch circle diameter (Dm ), ball diameter (Db ), curvature coefficients of the outer and inner
races ( fo and fi ), and the total number of balls (Z). The remaining five design parameters,
including e, 𝜖 , 𝜁 , KDmax , and KDmin, are used solely in the constraint conditions. The struc-
ture of the optimization problem for rolling bearings is illustrated in Fig. 25. The math-
ematical model for this problem can be represented by Eq. (25).
$$
\begin{aligned}
&\text{Consider } \vec{x} = [x_1\; x_2\; \dots\; x_{10}]
= \left[D_m\; D_b\; f_o\; f_i\; Z\; e\; \epsilon\; \zeta\; K_{D\max}\; K_{D\min}\right],\\
&\text{Minimize } f(\vec{x}) =
\begin{cases}
f_cZ^{2/3}D_b^{1.8}, & \text{if } D_b \le 25.4\ \text{mm}\\
3.647f_cZ^{2/3}D_b^{1.4}, & \text{otherwise}
\end{cases}\\
&\text{Subject to } g_1(\vec{x}) = Z - \frac{\phi_0}{2\sin^{-1}\left(D_b/D_m\right)} - 1 \le 0,\\
&\qquad g_2(\vec{x}) = K_{D\min}(D - d) - 2D_b \le 0,\\
&\qquad g_3(\vec{x}) = 2D_b - K_{D\max}(D - d) \le 0,\\
&\qquad g_4(\vec{x}) = D_b - \zeta B_w \le 0,\\
&\qquad g_5(\vec{x}) = 0.5(D + d) - D_m \le 0,\\
&\qquad g_6(\vec{x}) = D_m - (0.5 + e)(D + d) \le 0, \qquad (25)\\
&\qquad g_7(\vec{x}) = \epsilon D_b - 0.5\left(D - D_m - D_b\right) \le 0,\\
&\qquad g_8(\vec{x}) = 0.515 - f_i \le 0,\\
&\qquad g_9(\vec{x}) = 0.515 - f_o \le 0,\\
&\text{where } f_c = 37.91\left[1 + \left\{1.04\left(\frac{1 - \gamma}{1 + \gamma}\right)^{1.72}\left(\frac{f_i\left(2f_o - 1\right)}{f_o\left(2f_i - 1\right)}\right)^{0.41}\right\}^{10/3}\right]^{-0.3},\\
&\gamma = \frac{D_b\cos(\alpha)}{D_m},\quad f_i = \frac{r_i}{D_b},\quad f_o = \frac{r_o}{D_b},\\
&\phi_0 = 2\pi - 2\cos^{-1}\left[\frac{\left\{(D - d)/2 - 3(T/4)\right\}^2 + \left\{D/2 - T/4 - D_b\right\}^2 - \left\{d/2 + T/4\right\}^2}{2\left\{(D - d)/2 - 3(T/4)\right\}\left\{D/2 - T/4 - D_b\right\}}\right],\\
&T = D - d - 2D_b,\quad D = 160,\quad d = 90,\quad B_w = 30,\\
&\text{Parameter range } 0.5(D + d) \le D_m \le 0.6(D + d),\quad 0.15(D - d) \le D_b \le 0.45(D - d),\\
&4 \le Z \le 50,\quad 0.515 \le f_i, f_o \le 0.6,\quad 0.4 \le K_{D\min} \le 0.5,\quad 0.6 \le K_{D\max} \le 0.7,\\
&0.3 \le \epsilon \le 0.4,\quad 0.02 \le e \le 0.1,\quad 0.6 \le \zeta \le 0.85.
\end{aligned}
$$
Table 21 displays the optimization results for the rolling bearing design problem using different comparative algorithms. It is evident that SBOA, LSHADE_cnEpSin, LSHADE_SPACMA, SO and DBO simultaneously achieve optimal results, yielding the lowest value of 1.70E + 04 while generating different solutions.
The gear train design problem is a practical issue in the field of mechanical engineering.
The objective is to minimize the ratio of output to input angular velocity of the gear train
by designing relevant gear parameters. Figure 26 illustrates the structure of the optimiza-
tion problem, and Eq. (26) describes the mathematical model for the optimization problem.
$$
\begin{aligned}
&\text{Consider } \vec{x} = [x_1\; x_2\; x_3\; x_4] = \left[n_A\; n_B\; n_C\; n_D\right],\\
&\text{Minimize } f(\vec{x}) = \left(\frac{1}{6.931} - \frac{x_1x_2}{x_3x_4}\right)^2, \qquad (26)\\
&\text{Parameter range } 12 \le x_1, x_2, x_3, x_4 \le 60.
\end{aligned}
$$
From Table 22, it is evident that the parameters optimized by SBOA, AVOA, WOA, GTO
and NOA result in the minimum cost for gear train design, achieving a cost of 0.00E + 00.
The main objective of this design problem is to minimize bearing power loss using four design variables: oil viscosity (μ), bearing step radius (R), recess radius (R0), and oil flow rate (Q). The problem includes seven nonlinear constraints related to load-carrying capacity, inlet oil pressure, oil temperature rise, oil film thickness, and related physical requirements. The mathematical model for this problem is represented by Eq. (27).
Table 21 Experimental results of rolling element bearing design problems
Algorithm Optimal values for Parameters Optimal Ranking
value
x1 x2 x3 x4 x5 x6 x7 x8 x9 x10
LSHADE_ 1.31E + 02 1.80E + 01 4.58E + 00 6.00E − 01 6.00E − 01 4.05E − 01 6.24E − 01 3.00E − 01 9.10E − 02 6.00E − 01 1.70E + 04 1
cnEpSin
LSHADE_ 1.31E + 02 1.80E + 01 4.85E + 00 6.00E − 01 6.00E − 01 4.43E − 01 7.00E − 01 3.00E − 01 1.00E − 01 6.00E − 01 1.70E + 04 1
SPACMA
MadDE 1.31E + 02 1.80E + 01 5.35E + 00 6.00E − 01 6.00E − 01 4.61E − 01 6.53E − 01 3.00E − 01 6.26E − 02 6.00E − 01 1.70E + 04 7
DE 1.31E + 02 1.80E + 01 5.28E + 00 6.00E − 01 6.00E − 01 4.97E − 01 6.88E − 01 3.02E − 01 3.97E − 02 6.00E − 01 1.70E + 04 10
AVOA 1.29E + 02 1.80E + 01 4.51E + 00 6.00E − 01 6.00E − 01 4.86E − 01 6.94E − 01 3.65E − 01 8.23E − 02 6.00E − 01 1.70E + 04 12
GWO 1.30E + 02 1.80E + 01 5.11E + 00 6.00E − 01 6.00E − 01 4.65E − 01 6.83E − 01 3.17E − 01 2.12E − 02 6.00E − 01 1.70E + 04 11
WOA 1.26E + 02 1.80E + 01 5.34E + 00 6.00E − 01 6.00E − 01 4.11E − 01 6.09E − 01 3.00E − 01 2.33E − 02 6.00E − 01 1.70E + 04 15
CPSOGSA 1.27E + 02 1.80E + 01 5.00E + 00 6.00E − 01 6.00E − 01 4.08E − 01 6.09E − 01 3.03E − 01 3.17E − 02 6.00E − 01 1.70E + 04 14
SO 1.31E + 02 1.80E + 01 4.51E + 00 6.00E − 01 6.00E − 01 5.00E − 01 6.00E − 01 3.00E − 01 1.00E − 01 6.00E − 01 1.70E + 04 1
GTO 1.25E + 02 1.80E + 01 4.51E + 00 6.00E − 01 6.00E − 01 4.00E − 01 6.00E − 01 4.00E − 01 7.24E − 02 6.00E − 01 1.71E + 04 16
COA 1.31E + 02 1.80E + 01 5.29E + 00 6.00E − 01 6.00E − 01 4.11E − 01 6.00E − 01 3.00E − 01 9.96E − 02 6.00E − 01 1.70E + 04 8
RIME 1.31E + 02 1.80E + 01 4.51E + 00 6.00E − 01 6.00E − 01 4.95E − 01 6.49E − 01 3.00E − 01 8.25E − 02 6.00E − 01 1.70E + 04 9
GJO 1.29E + 02 1.80E + 01 5.34E + 00 6.00E − 01 6.00E − 01 4.82E − 01 6.04E − 01 3.56E − 01 7.31E − 02 6.00E − 01 1.70E + 04 13
DBO 1.31E + 02 1.80E + 01 4.51E + 00 6.00E − 01 6.00E − 01 5.00E − 01 7.00E − 01 3.00E − 01 1.00E − 01 6.00E − 01 1.70E + 04 1
NOA 1.31E + 02 1.80E + 01 4.64E + 00 6.00E − 01 6.00E − 01 4.21E − 01 6.83E − 01 3.00E − 01 9.88E − 02 6.00E − 01 1.70E + 04 6
SBOA 1.31E + 02 1.80E + 01 5.26E + 00 6.00E − 01 6.00E − 01 4.97E − 01 6.06E − 01 3.00E − 01 1.00E − 01 6.00E − 01 1.70E + 04 1
Secretary bird optimization algorithm: a new metaheuristic… Page 83 of 102 123
$$
\begin{aligned}
&\text{Consider } \vec{x} = [\mu\; R\; R_0\; Q],\\
&\text{Minimize } f(\vec{x}) = \frac{QP_0}{0.7} + E_f,\\
&\text{Subject to } g_1(\vec{x}) = P_0 - 1000 \le 0,\\
&\qquad g_2(\vec{x}) = 101000 - W \le 0,\\
&\qquad g_3(\vec{x}) = \frac{W}{\pi\left(R^2 - R_0^2\right)} - 5000 \le 0,\\
&\qquad g_4(\vec{x}) = \Delta T - 50 \le 0,\\
&\qquad g_5(\vec{x}) = 0.001 - \frac{0.0307}{386.4P_0}\cdot\frac{Q}{2\pi Rh} \le 0,\\
&\qquad g_6(\vec{x}) = R_0 - R \le 0,\\
&\qquad g_7(\vec{x}) = 0.001 - h \le 0, \qquad (27)\\
&\text{where } W = \frac{\pi P_0\left(R^2 - R_0^2\right)}{2\ln\left(R/R_0\right)},\quad
P_0 = \frac{6\mu Q}{\pi h^3}\ln\frac{R}{R_0},\\
&E_f = 9336Q \times 0.0307 \times 0.5\,\Delta T,\quad
\Delta T = 2\left(10^P - 559.7\right),\\
&P = \frac{\log_{10}\log_{10}\left(8.122\times10^6\mu + 0.8\right) + 3.55}{10.04},\\
&h = \left(\frac{2\pi\times750}{60}\right)^2\frac{2\pi\mu}{E_f}\left(\frac{R^4}{4} - \frac{R_0^4}{4}\right),\\
&\text{With bounds } 1 \le R \le 16,\quad 1 \le R_0 \le 16,\quad
1\times10^{-6} \le \mu \le 16\times10^{-6},\quad 1 \le Q \le 16.
\end{aligned}
$$
From Table 23, it is evident that the parameters optimized by SBOA result in the mini-
mum cost for the hydrostatic thrust bearing design.
The primary objective of the step-cone pulley design is to minimize the weight of the four-stage cone pulley by optimizing five variables. The first four parameters represent the diameter of each pulley stage, and the last variable represents the pulley's width. The mathematical model for this problem is described by Eq. (28).
$$
\begin{aligned}
&\text{Minimize } f(\vec{x}) = \rho w\left[d_1^2\left\{1 + \left(\frac{N_1}{N}\right)^2\right\} + d_2^2\left\{1 + \left(\frac{N_2}{N}\right)^2\right\} + d_3^2\left\{1 + \left(\frac{N_3}{N}\right)^2\right\} + d_4^2\left\{1 + \left(\frac{N_4}{N}\right)^2\right\}\right],\\
&\text{Subject to } h_1(\vec{x}) = C_1 - C_2 = 0,\quad h_2(\vec{x}) = C_1 - C_3 = 0,\quad h_3(\vec{x}) = C_1 - C_4 = 0,\\
&\qquad g_{i=1,2,3,4}(\vec{x}) = 2 - R_i \le 0,\\
&\qquad g_{i=5,6,7,8}(\vec{x}) = \left(0.75\times745.6998\right) - P_i \le 0, \qquad (28)\\
&\text{where } C_i = \frac{\pi d_i}{2}\left(1 + \frac{N_i}{N}\right) + \frac{\left(\frac{N_i}{N} - 1\right)^2 d_i^2}{4a} + 2a,\quad i = (1,2,3,4),\\
&R_i = \exp\left[\mu\left(\pi - 2\sin^{-1}\left\{\left(\frac{N_i}{N} - 1\right)\frac{d_i}{2a}\right\}\right)\right],\quad i = (1,2,3,4),\\
&P_i = stw\left(1 - R_i^{-1}\right)\frac{\pi d_i N_i}{60},\quad i = (1,2,3,4),\\
&t = 8\ \text{mm},\quad s = 1.75\ \text{MPa},\quad \mu = 0.35,\quad \rho = 7200\ \text{kg/m}^3,\quad a = 3\ \text{mm}.
\end{aligned}
$$
From Table 24, it is evident that the parameters optimized by SBOA result in the lowest cost for the step-cone pulley design, amounting to 8.18E + 00.
The mathematical model for the gas transmission compressor design problem is repre-
sented by Eq. (29).
$$
\begin{aligned}
&\text{Minimize } f(\vec{x}) = 8.61\times10^5x_1^{1/2}x_2x_3^{-2/3}x_4^{-1/2} + 3.69\times10^4x_3\\
&\qquad + 7.72\times10^8x_1^{-1}x_2^{0.219} - 765.43\times10^6x_1^{-1},\\
&\text{Subject to } g_1(\vec{x}) = x_4x_2^{-2} + x_2^{-2} - 1 \le 0, \qquad (29)\\
&\text{With bounds } 20 \le x_1 \le 50,\quad 1 \le x_2 \le 10,\quad 20 \le x_3 \le 50,\quad 0.1 \le x_4 \le 60.
\end{aligned}
$$
Table 25 shows that SBOA, LSHADE_cnEpSin and NOA all achieve the best result, 2.96E + 06, while producing different solutions.
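A sketch of the Eq. (29) model; the last two objective terms are taken from the standard gas transmission compressor benchmark formulation (an assumption), and the sample point is illustrative rather than optimal:

```python
import math

# Gas transmission compressor model of Eq. (29); x = (x1, x2, x3, x4).
def gtcd_objective(x1, x2, x3, x4):
    return (8.61e5 * math.sqrt(x1) * x2 * x3 ** (-2.0 / 3.0) * x4 ** -0.5
            + 3.69e4 * x3
            + 7.72e8 * x2 ** 0.219 / x1   # assumed standard-benchmark term
            - 765.43e6 / x1)              # assumed standard-benchmark term

def gtcd_constraint(x1, x2, x3, x4):
    return x4 / x2 ** 2 + x2 ** -2 - 1.0  # feasible when <= 0

# Illustrative (not optimal) feasible sample point
x = (50.0, 1.178, 24.6, 0.38)
print(gtcd_constraint(*x) <= 0)  # True
```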
The primary objective of this problem is to minimize the error in the gear ratio by optimiz-
ing the parameters. To achieve this, the total number of gears in the automatic planetary
transmission system is calculated. It involves six variables, and the mathematical model is
represented by Eq. (30):
$$
\begin{aligned}
&\text{Minimize } f(\vec{x}) = \max\left|i_k - i_{0k}\right|,\quad k = \{1, 2, \dots, R\}, \qquad (30)\\
&\text{where } i_1 = \frac{N_6}{N_4},\quad i_{01} = 3.11,\quad
i_2 = \frac{N_6\left(N_1N_3 + N_2N_4\right)}{N_1N_3\left(N_6 - N_4\right)},\quad i_{02} = 1.84,\\
&I_R = -\frac{N_2N_6}{N_1N_3},\quad i_{0R} = -3.11,\quad
\vec{x} = \left\{p, N_6, N_5, N_4, N_3, N_2, N_1, m_2, m_1\right\},\\
&\text{Subject to } g_1(\vec{x}) = m_3\left(N_6 + 2.5\right) - D_{\max} \le 0,\\
&\qquad g_2(\vec{x}) = m_1\left(N_1 + N_2\right) + m_1\left(N_2 + 2\right) - D_{\max} \le 0,\\
&\qquad g_3(\vec{x}) = m_3\left(N_4 + N_5\right) + m_3\left(N_5 + 2\right) - D_{\max} \le 0,\\
&\qquad \dots\\
&\qquad g_9(\vec{x}) = N_4 - N_6 + 2N_5 + 2\delta_{56} + 4 \le 0,\\
&\qquad h_1(\vec{x}) = \frac{N_6 - N_4}{p} = \text{integer},\\
&\text{where } \delta_{22} = \delta_{33} = \delta_{55} = \delta_{35} = \delta_{56} = 0.5,\\
&\beta = \cos^{-1}\left[\frac{\left(N_4 + N_5\right)^2 + \left(N_6 - N_3\right)^2 - \left(N_3 + N_5\right)^2}{2\left(N_6 - N_3\right)\left(N_4 + N_5\right)}\right],\quad D_{\max} = 220,\\
&17 \le N_1 \le 96,\quad 14 \le N_2 \le 54,\quad 14 \le N_3 \le 51,\quad \text{and } N_i = \text{integer}.
\end{aligned}
$$
Table 26 displays the optimization results for the Planetary Gear Train design. The results indicate that SBOA achieves the minimum error, with an optimal value of 5.23E − 01.
Table 26 Experimental results of planetary gear train design
Algorithm Optimal values for Variable Optimal value Ranking
X1 X2 X3 X4 X5 X6 X7 X8 X9
LSHADE_cnEpSin 4.22E + 01 3.05E + 01 2.72E + 01 2.73E + 01 2.44E + 01 9.80E + 01 1.05E + 00 1.53E + 00 2.10E + 00 5.26E − 01 7
LSHADE_SPACMA 5.24E + 01 2.13E + 01 1.67E + 01 3.02E + 01 2.55E + 01 1.09E + 02 5.10E − 01 2.80E + 00 5.80E − 01 5.23E − 01 2
MadDE 4.97E + 01 3.34E + 01 2.51E + 01 2.74E + 01 2.17E + 01 9.75E + 01 1.46E + 00 1.08E + 00 1.78E + 00 5.24E − 01 3
DE 6.43E + 01 2.84E + 01 1.87E + 01 3.07E + 01 2.69E + 01 1.12E + 02 2.04E + 00 5.74E − 01 1.36E + 00 5.31E − 01 14
AVOA 3.55E + 01 1.87E + 01 1.35E + 01 1.86E + 01 1.65E + 01 6.88E + 01 5.10E − 01 5.10E − 01 5.10E − 01 5.28E − 01 12
GWO 4.66E + 01 2.58E + 01 1.67E + 01 2.22E + 01 1.68E + 01 7.97E + 01 1.35E + 00 6.55E − 01 1.92E + 00 5.27E − 01 9
WOA 4.06E + 01 3.37E + 01 3.48E + 01 2.96E + 01 2.68E + 01 1.09E + 02 1.44E + 00 1.35E + 00 1.01E + 00 5.27E − 01 10
CPSOGSA 3.07E + 01 2.03E + 01 1.95E + 01 2.13E + 01 1.61E + 01 7.57E + 01 8.08E − 01 1.71E + 00 5.30E − 01 5.29E − 01 13
SO 2.88E + 01 1.85E + 01 2.61E + 01 2.97E + 01 1.49E + 01 1.09E + 02 5.46E − 01 6.49E + 00 1.36E + 00 5.28E − 01 11
GTO 3.66E + 01 2.44E + 01 2.04E + 01 2.19E + 01 2.03E + 01 8.01E + 01 6.97E − 01 8.41E − 01 1.06E + 00 5.26E − 01 8
COA 2.15E + 01 1.41E + 01 1.52E + 01 1.66E + 01 1.41E + 01 6.20E + 01 5.33E − 01 2.63E + 00 1.01E + 00 5.37E − 01 15
RIME 4.82E + 01 3.31E + 01 2.57E + 01 2.71E + 01 1.94E + 01 9.82E + 01 2.17E + 00 1.45E + 00 2.21E + 00 5.26E − 01 6
GJO 4.17E + 01 1.35E + 01 1.35E + 01 3.01E + 01 2.72E + 01 1.09E + 02 5.56E − 01 5.73E + 00 5.81E − 01 5.25E − 01 4
DBO 2.16E + 01 1.35E + 01 1.35E + 01 1.65E + 01 1.36E + 01 5.18E + 01 1.40E + 00 5.10E − 01 5.10E − 01 7.94E − 01 16
NOA 3.54E + 01 1.95E + 01 2.40E + 01 2.95E + 01 1.92E + 01 1.09E + 02 3.49E + 00 4.73E + 00 7.96E − 01 5.25E − 01 4
SBOA 2.37E + 01 1.80E + 01 2.04E + 01 1.87E + 01 1.62E + 01 6.92E + 01 5.56E − 01 3.20E + 00 2.02E + 00 5.23E − 01 1
The Four-stage Gear Box problem is relatively complex compared to other engineering
problems, involving 22 variables for optimization. These variables include the positions
of gears, positions of small gears, blank thickness, and the number of teeth, among others.
The problem comprises 86 nonlinear design constraints related to pitch, kinematics, con-
tact ratio, gear strength, gear assembly, and gear dimensions. The objective is to minimize
the weight of the gearbox. The mathematical model is represented by Eq. (31).
$$
\begin{aligned}
&\text{Consider } \vec{x} = [x_1\; x_2\; \dots\; x_{21}\; x_{22}] = \left[N_{pi}\; N_{gi}\; b_i\; x_{pi}\; x_{gi}\; y_{pi}\; y_{gi}\right],\ i = (1,2,3,4),\\
&\text{Minimize } f(\bar{x}) = \frac{\pi}{1000}\sum_{i=1}^{4}\frac{b_i c_i^{2}\left(N_{pi}^{2} + N_{gi}^{2}\right)}{\left(N_{pi} + N_{gi}\right)^{2}},\\
&\text{Subject to (bending strength, } i = 1,\dots,4\text{):}\\
&g_{1\text{–}4}(\bar{x}) = \left[\frac{366000}{\pi\omega_1}\prod_{j<i}\frac{N_{gj}}{N_{pj}} + \frac{2c_i N_{pi}}{N_{pi} + N_{gi}}\right]\frac{\left(N_{pi} + N_{gi}\right)^{2}}{4 b_i c_i^{2} N_{pi}} - \frac{\sigma_N J_R}{0.0167\,W K_o K_m} \le 0,\\
&\text{(pitting resistance, } i = 1,\dots,4\text{):}\\
&g_{5\text{–}8}(\bar{x}) = \left[\frac{366000}{\pi\omega_1}\prod_{j<i}\frac{N_{gj}}{N_{pj}} + \frac{2c_i N_{pi}}{N_{pi} + N_{gi}}\right]\frac{\left(N_{pi} + N_{gi}\right)^{3}}{4 b_i c_i^{2} N_{gi} N_{pi}^{2}} - \left(\frac{\sigma_H}{C_p}\right)^{2}\frac{\sin(\phi)\cos(\phi)}{0.0334\,W K_o K_m} \le 0,\\
&\text{(contact ratio):}\\
&g_{9\text{–}12}(\bar{x}) = CR_{\min}\,\pi\cos(\phi) - N_{pi}\sqrt{\frac{\sin^{2}(\phi)}{4} + \frac{1}{N_{pi}} + \frac{1}{N_{pi}^{2}}} - N_{gi}\sqrt{\frac{\sin^{2}(\phi)}{4} + \frac{1}{N_{gi}} + \frac{1}{N_{gi}^{2}}} + \frac{\sin(\phi)\left(N_{pi} + N_{gi}\right)}{2} \le 0,\\
&\text{(minimum pinion and gear sizes):}\\
&g_{13\text{–}16}(\bar{x}) = d_{\min} - \frac{2c_i N_{pi}}{N_{pi} + N_{gi}} \le 0,\qquad
g_{17\text{–}20}(\bar{x}) = d_{\min} - \frac{2c_i N_{gi}}{N_{pi} + N_{gi}} \le 0,\\
&\text{(housing dimensions):}\\
&g_{21}(\bar{x}) = x_{p1} + \frac{\left(N_{p1} + 2\right)c_1}{N_{p1} + N_{g1}} - L_{\max} \le 0,\qquad
g_{22\text{–}24}(\bar{x}) = -L_{\max} + \frac{\left(N_{pi} + 2\right)c_i}{N_{pi} + N_{gi}} + x_{g(i-1)} \le 0,\ i = 2,3,4,\\
&g_{25}(\bar{x}) = -x_{p1} + \frac{\left(N_{p1} + 2\right)c_1}{N_{p1} + N_{g1}} \le 0,\qquad
g_{26\text{–}28}(\bar{x}) = \frac{\left(N_{pi} + 2\right)c_i}{N_{pi} + N_{gi}} - x_{g(i-1)} \le 0,\ i = 2,3,4,\\
&g_{29}(\bar{x}) = y_{p1} + \frac{\left(N_{p1} + 2\right)c_1}{N_{p1} + N_{g1}} - L_{\max} \le 0, \qquad (31)
\end{aligned}
$$
Table 27 Experimental results of four-stage gear box design
Algorithm LSHADE_cnEpSin LSHADE_SPACMA MadDE DE AVOA GWO WOA CPSOGSA
Optimal for variable
X1 1.19E + 01 6.51E + 00 1.33E + 01 1.73E + 01 1.16E + 01 1.20E + 01 2.05E + 01 1.82E + 01
$$
\begin{aligned}
&g_{30\text{–}32}(\bar{x}) = -L_{\max} + \frac{\left(N_{pi} + 2\right)c_i}{N_{pi} + N_{gi}} + y_{g(i-1)} \le 0,\ i = 2,3,4,\\
&g_{33}(\bar{x}) = -y_{p1} + \frac{\left(N_{p1} + 2\right)c_1}{N_{p1} + N_{g1}} \le 0,\qquad
g_{34\text{–}36}(\bar{x}) = \frac{\left(N_{pi} + 2\right)c_i}{N_{pi} + N_{gi}} - y_{g(i-1)} \le 0,\ i = 2,3,4,\\
&\qquad \dots\\
&g_{49\text{–}52}(\bar{x}) = -y_{gi} + \frac{\left(N_{gi} + 2\right)c_i}{N_{pi} + N_{gi}} \le 0,\\
&g_{53\text{–}56}(\bar{x}) = \left(b_i - 8.255\right)\left(b_i - 5.715\right)\left(b_i - 12.70\right)\left(-N_{pi} + 0.945c_i - N_{gi}\right)(-1) \le 0.
\end{aligned}
$$
Unmanned aerial vehicles (UAVs) play a vital role in various civil and military applica-
tions, and their importance and convenience are widely recognized. As a core task of the
autonomous control system for UAVs, path planning and design aim to solve a complex
constrained optimization problem: finding a reliable and safe path from a starting point
to a goal point under certain constraints. In recent years, with the widespread applica-
tion of UAVs, research on the path planning problem has garnered considerable attention.
Therefore, we employ SBOA to address the UAV path planning problem and verify the algorithm's practical effectiveness.
Flight Altitude: The flight altitude of the unmanned aerial vehicle significantly affects
the control system and safety. The mathematical model for this constraint is shown in
Eq. (34).
A common form bounds each waypoint's altitude between the minimum and maximum permitted flight heights: \(h_{\min} \le z_i \le h_{\max},\ i = 1, \dots, n\). (34)
Maximum Turning Angle: The turning angle of the unmanned aerial vehicle must be
within a specified maximum turning angle. The constraint for the maximum turning angle
can be expressed as:
$$
H = \sqrt{\sum_{i=1}^{n}\left(z_i - \frac{1}{n}\sum_{k=1}^{n} z_k\right)^{2}} \qquad (35)
$$
where \(\cos\theta_i = \dfrac{\vec{\varphi}_{i+1}\cdot\vec{\varphi}_i}{\left|\vec{\varphi}_{i+1}\right|\left|\vec{\varphi}_i\right|}\) gives the turning angle \(\theta_i\) when moving from segment \(\vec{\varphi}_i\) to \(\vec{\varphi}_{i+1}\), the smoothing cost \(S\) accumulates these angles over the path, and each \(\theta_i\) must not exceed the maximum turning angle.
To evaluate the planned trajectory, it is common to consider multiple factors, including
the maneuverability of the UAV, trajectory length, altitude above ground, and the magni-
tude of threats from various sources. Based on a comprehensive analysis of the impact of
these factors, the trajectory cost function is calculated using the formula shown in Eq. (36)
to assess the trajectory.
$$
\vec{\varphi}_i = \left(x_{i+1} - x_i,\ y_{i+1} - y_i,\ z_{i+1} - z_i\right),\qquad
C = \sum_{i=1}^{n}\left(\omega_1 \times L + \omega_2 \times H + \omega_3 \times S\right) \qquad (36)
$$
where \(L\) and \(H\) are the length and altitude above ground of the trajectory, respectively; \(S\) is the smoothing cost of the planned path; and \(\omega_1\), \(\omega_2\) and \(\omega_3\) are weight coefficients satisfying \(\omega_1 + \omega_2 + \omega_3 = 1\). By adjusting these weight coefficients, the influence of each factor on the trajectory can be modulated.
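The cost terms above can be computed for a toy waypoint path; the waypoints and weights below are illustrative, not values from the paper's experiments:

```python
import math

# Toy 3-D waypoint path; each tuple is (x, y, z). Illustrative values only.
path = [(0, 0, 5), (10, 0, 6), (20, 5, 7), (30, 5, 6)]

# Segment vectors phi_i = (x_{i+1}-x_i, y_{i+1}-y_i, z_{i+1}-z_i)
segments = [tuple(b - a for a, b in zip(p, q)) for p, q in zip(path, path[1:])]

# L: total trajectory length (sum of segment norms)
length = sum(math.sqrt(sum(c * c for c in s)) for s in segments)

# H: root-sum-square deviation of waypoint altitudes from their mean,
# mirroring the altitude cost of Eq. (35)
zs = [p[2] for p in path]
z_mean = sum(zs) / len(zs)
H = math.sqrt(sum((z - z_mean) ** 2 for z in zs))

# S: sum of turning angles between consecutive segments; a path-planning
# constraint would cap each angle at the maximum turning angle
def turn_angle(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

S = sum(turn_angle(u, v) for u, v in zip(segments, segments[1:]))

# Weighted trajectory cost in the spirit of Eq. (36); weights assumed to sum to 1
w1, w2, w3 = 0.5, 0.3, 0.2
C = w1 * length + w2 * H + w3 * S
print(round(length, 2), round(H, 2), round(S, 2))
```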
demonstrating the feasibility of the generated trajectories. In Fig. 30b, the trajectory
obtained by SBOA is the shortest and relatively smoother, while DBO’s trajectory is the
worst, requiring ascent to a certain height and navigating a certain distance to avoid threat
areas. The results indicate that SBOA can effectively enhance the efficiency of trajectory
planning, demonstrating a certain advantage.
This article introduces a new bio-inspired optimization algorithm named the secretary bird
optimization algorithm (SBOA), which aims to simulate the behavior of secretary birds in
nature. The hunting strategy of secretary birds in capturing snakes and their mechanisms
for evading predators have been incorporated as fundamental inspirations in the design of
SBOA. The implementation of SBOA is divided into two phases: a simulated exploration
phase based on the hunting strategy and a simulated exploitation phase based on the eva-
sion strategy, both of which are mathematically modeled. To validate the effectiveness of
SBOA, we conducted a comparative analysis of exploration versus exploitation and con-
vergence behavior. The analysis results demonstrate that SBOA exhibits outstanding per-
formance in balancing exploration and exploitation, convergence speed, and convergence
accuracy.
To evaluate the performance of SBOA, experiments were conducted using the CEC-
2017 and CEC-2022 benchmark functions. The results demonstrate that, both in dif-
ferent dimensions of CEC-2017 and on CEC-2022 test functions, SBOA consistently
ranks first in terms of average values and function average rankings. Notably, even in
high-dimensional problems of CEC-2017, SBOA maintains its strong problem-solving
capabilities, obtaining the best results in 20 out of 30 test functions. Furthermore, to
assess the algorithm’s ability to solve real-world problems, it was applied to twelve clas-
sical engineering practical problems and a three-dimensional path planning problem
for Unmanned Aerial Vehicles (UAVs). In comparison to other benchmark algorithms,
SBOA consistently obtained the best results. This indicates that SBOA maintains strong
optimization capabilities when dealing with constrained engineering problems and
practical optimization problems and outperforms other algorithms in these scenarios,
providing optimal optimization results. In summary, the proposed SBOA demonstrates
excellent performance in both unconstrained and constrained problems, showcasing its
robustness and wide applicability.
In future research, there are many aspects that require continuous innovation. We
intend to optimize SBOA from the following perspectives:
…In future research, we will place greater emphasis on enhancing SBOA's capability to address multi-objective problems, providing more comprehensive solutions for optimization challenges.
4. Expanding Application Domains: The initial design of metaheuristic algorithms was
intended to address real-life problems. Therefore, we will further expand the application
domains of SBOA to make it applicable in a wider range of fields. This includes docu-
ment classification and data mining, circuit fault diagnosis, wireless sensor networks,
and 3D reconstruction, among others.
Supplementary Information The online version contains supplementary material available at https://doi.org/10.1007/s10462-024-10729-y.
Acknowledgements This work was supported by the Technology Plan of Guizhou Province (Contract No: Qian Kehe Support [2023] General 117), (Contract No: Qiankehe Support [2023] General 124) and (Contract No: Qiankehe Support [2023] General 302), Guizhou Provincial Science and Technology Projects QKHZC[2023]118, and the National Natural Science Foundation of China (72061006).
Author contributions YF: Conceptualization, Methodology, Writing—Original Draft, Data Curation, Writ-
ing—Review and Editing, Software. DL: Supervision, Writing-Reviewing and Editing. JC: Writing-Origi-
nal Draft, Formal analysis. LH: Writing-Reviewing and Editing, Drawing, Funding Acquisition.
Funding Funding was provided by the Natural Science Foundation of Guizhou Province (Contract No.: Qian Kehe Support [2023] General 117), (Contract No: Qiankehe Support [2023] General 124) and (Contract No: Qiankehe Support [2023] General 302), Guizhou Provincial Science and Technology Projects QKHZC[2023]118, and the National Natural Science Foundation of China (Grant No. 72061006).
Data availability All data generated or analyzed in this study are included in this paper.
Declarations
Competing interests The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and
institutional affiliations.