
Intelligent Systems with Applications 25 (2025) 200486

Contents lists available at ScienceDirect

Intelligent Systems with Applications


journal homepage: www.journals.elsevier.com/intelligent-systems-with-applications

Integrated multi-strategy sand cat swarm optimization for path planning applications

Yourui Huang a,b, Quanzeng Liu a,*, Tao Han a, Tingting Li a, Hongping Song a
a Anhui University of Science and Technology, Huainan, 232001, China
b West Anhui University, Lu'an, 237012, China

A R T I C L E  I N F O

Keywords:
Sand cat swarm optimization
Path planning
Convergence performance
Hybrid metaheuristics

A B S T R A C T

An integrated multi-strategy sand cat swarm optimization algorithm is proposed to address the shortcomings of the sand cat swarm algorithm, such as inefficient solutions, insufficient optimization accuracy, and a tendency to fall into local optima. The algorithm introduces an improved Circle chaotic mapping to balance the population distribution, a water wave dynamic convergence factor to maintain population diversity, and lens opposition-based learning to enhance the global optimization capability. Additionally, the golden sine strategy is incorporated to improve the local search ability. Experiments on 23 test functions demonstrate the new algorithm's optimal average performance on 18 of them. It was further applied to 9 2D path planning instances and 2 3D path planning instances, in all of which it found the shortest path. The results show that the improved algorithm is less likely to fall into local optima, exhibits high stability, and can effectively solve path planning problems.

1. Introduction

As an important branch of stochastic optimization algorithms, metaheuristic algorithms have been increasingly valued for their excellent performance in solving complex optimization problems. These algorithms are known for their conceptual simplicity and high degree of flexibility, and are able to cope efficiently with a wide range of challenging problems, such as feature selection (Nguyen et al., 2024) and parameter optimization (Kim et al., 2024). In terms of solving complex optimization problems, Junbo Lian et al. proposed the Parrot Optimizer (PO), which has been applied effectively in several fields, including engineering design problems, disease diagnosis, and medical image segmentation (Lian et al., 2024). Yajun Leng et al. proposed a novel renewable energy development assessment method based on an improved sparrow search algorithm and a projection tracking model to achieve optimal selection and assessment of renewable energy technology solutions (Leng et al., 2024). To solve the hyperrelational interaction problem, Yuhuan Lu et al. proposed a novel Hyperrelational Multimodal Trajectory Prediction (HyperMTP) method, which learns directly from hyperrelational interactions (Lu et al., 2024). In recent years, these algorithms have been particularly used in several fields. In the context of solving optimal control problems, Venkata Satya Durga Manohar Sahu et al. proposed the Tyrannosaurus Optimization Algorithm (TROA), which achieves efficient solutions on several benchmarking problems and real optimal control problems (Sahu et al., 2023). To address the problem of parameter optimization for digital infinite impulse response (IIR) systems in system identification, Serdar Ekinci et al. proposed a novel adaptive algorithm that achieves optimized performance on complex benchmark test functions and higher-order IIR system identification problems (Ekinci and Izci, 2023). Benyamin Abdollahzadeh et al. proposed the Gorilla Troops Optimizer (GTO), which exhibits excellent performance on multidimensional optimization problems (Abdollahzadeh et al., 2021).

Metaheuristic algorithms are powerful optimization tools that have also shown great potential and wide application value in the field of path planning. With their flexibility and efficiency, these algorithms can effectively solve complex path planning problems. Some classical metaheuristic algorithms, such as genetic algorithms, ant colony algorithms and particle swarm algorithms, have been used to optimize specific problems in path planning. Mohd Nadhir Ab Wahab et al. proposed a new population initialization method and combined several genetic operators to improve the existing genetic algorithm model for the static mobile robot global path planning (MRGPP) problem (Wahab et al., 2024). Vasileios Kourepinis et al. proposed an artificial fish swarm optimization algorithm for efficiently solving the UTRP for the route

* Corresponding author.
E-mail address: [email protected] (Q. Liu).

https://doi.org/10.1016/j.iswa.2025.200486
Received 2 November 2024; Received in revised form 27 December 2024; Accepted 22 January 2025
Available online 27 January 2025
2667-3053/© 2025 The Author(s). Published by Elsevier Ltd. This is an open access article under the CC BY-NC license (http://creativecommons.org/licenses/by-nc/4.0/).

Nomenclature

ABC_RL     reinforcement learning-based ABC algorithm
CDD-SCSO   chaotic dynamic disturbance sand cat swarm optimization
DBO        dung beetle optimizer
DWA        dynamic window approach
DYACO      dynamic ant colony optimization
ESCSO      enhanced sand cat swarm optimization
ESSA       enhanced sparrow search algorithm
FHO        fire hawk optimizer
GSA        golden sine algorithm
GSCSO      integrated multi-strategy sand cat swarm optimization
GTO        gorilla troops optimizer
GWO        gray wolf optimizer
HBA        honey badger algorithm
HGWODE     hybrid GWO and differential evolution algorithm
HHO        Harris hawks optimization
IIR        infinite impulse response
LOBL       lens opposition-based learning
MOSCSO     multi-objective sand cat swarm optimization
MRGPP      mobile robot global path planning
OBL        opposition-based learning
PA-C-IACO  planning algorithm for clustering and improved ant colony optimization
PO         parrot optimizer
PSO        particle swarm optimization
RL         reinforcement learning
SCSO       sand cat swarm optimization
SSA        salp swarm algorithm
TROA       tyrannosaurus optimization algorithm
UAV        unmanned aerial vehicle
UTRP       urban transit routing problem
2D         two dimensions
3D         three dimensions

design problem of public transportation systems, which produces a very high direct trip coverage share (Kourepinis et al., 2024). Qiting Li et al. used quantum bits to solve the collision problem in route planning for consumer electronics supply chains, accommodating the current limitations of quantum computing resources, and validated the effectiveness of the proposed methods, models, and algorithms through quantum simulation computations (Li et al., 2024a). Weixing Liang et al. proposed an enhanced ant colony algorithm, called DYACO, for the mining truck operation path planning problem, which effectively reduces energy consumption (Liang et al., 2024). Ziwei Wang et al. proposed a hybrid algorithm, PESSA, for UAV path planning, in which the particle swarm optimization algorithm (PSO) and an enhanced sparrow search algorithm (ESSA) work in parallel, which strengthens random jumps of producer locations and ensures global search capability (Wang et al., 2023). Xiaobing Yu et al. proposed HGWODE, a hybrid of the gray wolf optimization algorithm and the differential evolution algorithm, for autonomous navigation of UAVs, which makes the UAV path smoother and shorter (Yu et al., 2023). Bing Yang et al. proposed an improved ant colony algorithm called the Planning Algorithm for Clustering and Improved Ant Colony Optimization (PA-C-IACO) for the rescue path planning problem, to provide an effective route plan for rescue (Yang et al., 2023). Xiaobing Yu et al. proposed a multi-strategy cuckoo search algorithm based on reinforcement learning to address the poor searchability and slow convergence of existing optimization methods for UAV path planning; the algorithm can dynamically and accurately adjust the search strategy to provide better searchability (Yu and Luo, 2023). Yibing Cui et al. proposed a reinforcement learning (RL)-based ABC algorithm (ABC_RL) and applied it to the robot path planning problem; the ABC_RL algorithm has a great advantage in terms of path length and running time (Cui et al., 2023). Boliang Lin et al. used a simulated annealing algorithm to combine traffic path optimization with train formation planning for rolling updates of railroad transportation planning (Lin et al., 2021). Inkyung Sung et al. investigated an online path planning algorithm applying neural networks to obtain smoother paths under the uncertainty of the vehicle operating environment in path planning for self-driving vehicles (Sung et al., 2021). Although the sand cat swarm optimization algorithm has strong optimization ability and fast convergence speed, it still has some defects and shortcomings when solving path planning problems. The main problems include weak global exploration ability, a tendency to fall into local optima in the late stage of the algorithm, low solution accuracy, and slow convergence in the late stage of iteration. These problems limit the performance and efficiency of the SCSO algorithm in solving complex path planning problems.

Sand Cat Swarm Optimization (SCSO) (Niu et al., 2024) is a new type of metaheuristic algorithm that is widely used by scholars at home and abroad because it possesses few parameters and is easy to adjust and implement. Yanbiao Niu et al. proposed an enhanced sand cat swarm optimization algorithm (ESCSO), based on an adaptive social neighborhood search mechanism and a Lévy flight strategy, for UAVs to obtain safe and efficient obstacle avoidance paths; the weights and biases of a neural network are optimized using ESCSO to achieve offline training of the network (Li et al., 2024b). Junhong Liu et al. proposed a parameter identification method based on the Chaotic Dynamic Disturbance Sand Cat Swarm Optimization (CDD-SCSO) algorithm, which effectively solves the problem of identifying numerous unknown parameters (Wang et al., 2024). Zhihua Wang et al. proposed a novel multi-objective sand cat swarm optimization (MOSCSO) two-layer model for constructing a multi-energy complementary integrated energy system in response to the uncertainties at the source and load sides in the optimal design and operation of the system (Arul and Jebaselvi, 2024). S. Benjamin Arul et al. introduced an enhanced SCSO as a new technique for selecting anchor nodes at suitable locations in agricultural wireless sensor networks, extracting meaningful features from sensor data and training linear classifiers to accurately predict disease outbreaks in cattle herds (Seyyedabbasi and Kiani, 2023). Although SCSO has achieved certain results in optimization problems, there are still some shortcomings. Firstly, the stability of the algorithm needs to be improved, because it easily falls into local optima during the optimization process, especially in the later stage of the algorithm. In addition, the SCSO algorithm relies on simulating the auditory characteristics of sand cats to search, which may lead to low search efficiency. In the path planning problem, the algorithm may fall into a local optimum and fail to find the globally optimal path. In order to effectively solve the problems of inefficient solutions and insufficient optimization accuracy encountered during path planning, this study proposes a chaotic sand cat swarm optimization algorithm incorporating the golden sine strategy, called GSCSO. First, by introducing the improved Circle chaotic mapping, the population distribution becomes more uniform, which helps the algorithm to conduct a wider exploration of the solution space and thus enhances the global search capability. Second, the adopted water wave dynamic convergence factor expands the search range and maintains population diversity, which improves the algorithm's ability to jump out of local optima and facilitates the exploration of globally optimal solutions. The incorporation of the lens opposition-based learning strategy further improves the exploration efficiency of the algorithm, which makes the
algorithm wander through the solution space more efficiently and reduces the risk of the solution falling into a local optimum. Meanwhile, combined with the golden sine strategy, the local search accuracy of the algorithm is enhanced, ensuring fast and accurate localization of the optimal solution in the region. The combined application of these strategies not only accelerates the convergence of the algorithm but also improves its ability to solve complex optimization problems, enabling it to demonstrate excellent performance on multiple test functions and path planning problems. The novelty and main contributions of the algorithm are:

(1) An improved circle chaotic mapping is introduced to achieve a more balanced population distribution and enhance the global search capability of the algorithm. By employing a water-wave dynamic convergence factor, the search range is extended and the diversity of the population is maintained, thus improving the ability of the algorithm to jump out of local optima.
(2) Lens opposition-based learning and the golden sine strategy are integrated, which enhances the ability of SCSO to explore the solution space and enables it to find the optimal solution in the search region quickly.
(3) GSCSO was tested on 23 test functions and successfully solved path planning problems for 9 2D maps and 2 3D maps.

2. Algorithm design

2.1. Sand cat swarm optimization

The Sand Cat Swarm Optimization (SCSO) algorithm simulates the behavior of sand cats in nature by dividing the optimization process into two phases, exploration and exploitation, which correspond to the searching and attacking behaviors of sand cats, respectively.

When searching for prey, the sand cat relies on its ability to hear low-frequency noise sensitively, which is translated in the algorithm into the sensitivity of each sand cat. The sand cat is able to perceive low-frequency sounds below 2 kHz; using this, the sensitivity range of the sand cat is modeled to decrease linearly from 2 kHz to 0 over the iterative process so as to accurately locate the prey, as shown in Eq. (1).

rG = SM − (SM × iterc) / itermax   (1)

where itermax is the maximum number of iterations, iterc is the current number of iterations, and SM is the hearing characteristic parameter of the sand cat swarm, which is generally set to 2 and can be adjusted for different optimization situations.

The main parameter controlling the transition between exploration and exploitation is R, a vector obtained according to Eq. (2).

R = 2 × rG × rand(0, 1) − rG   (2)

The sensitivity range of each individual sand cat is obtained from Eq. (3).

r = rG × rand(0, 1)   (3)

Each sand cat updates its position based on the optimal position, its current position, and its sensitivity range. Thus, the sand cat can find other best prey locations, as shown in Eq. (4).

Pos(t + 1) = r × (Posb(t) − rand(0, 1) × Posc(t))   (4)

where Posb is the optimal position, Posc is the current position, Pos(t + 1) is the updated position, and r is the sensitivity range.

By using the sensitivity range parameter to control the movement of the sand cat, the algorithm is able to efficiently avoid local optima and rapidly converge to a globally optimal solution over a wide solution space. This design ensures fast and accurate optimization results with few parameters and operations.

The distance between the optimal position and the current position of the sand cat when attacking the prey is calculated according to Eq. (5). The sensitivity range of the sand cat is assumed to be circular, and the direction of its movement is determined by selecting a random angle on the circumference. The value domain of this random angle is between 0 and 360°, corresponding to the interval from −1 to 1. Thus, each member of the population is able to move in a different circumferential direction in the search space. SCSO utilizes a roulette selection mechanism to select a random angle for each sand cat.

Posrnd = |rand(0, 1) × Posb(t) − Posc(t)|
Pos(t + 1) = Posb(t) − r · Posrnd · cos(θ)   (5)

The SCSO algorithm forces the sand cat swarm to exploit when |R| is less than or equal to 1, and otherwise forces it to explore, as shown in Eq. (6).

Pos(t + 1) = Posb(t) − r · Posrnd · cos(θ),         |R| ≤ 1 (exploitation)
Pos(t + 1) = r · (Posb(t) − rand(0, 1) · Posc(t)),  |R| > 1 (exploration)   (6)

The pseudo-code of the algorithm is shown below:

Algorithm 1. Standard SCSO pseudocode
  Initialize the population
  Calculate the fitness function based on the objective function
  Obtain a random angle
  Initialize rG, R, r based on Eqs. (1)–(3)
  While (t ≤ maximum iteration)
    For each search agent
      Get a random angle based on roulette wheel selection (0° ≤ θ ≤ 360°)
      If (|R| ≤ 1)
        Update the search agent position based on Eq. (5)
      Else
        Update the search agent position based on Eq. (4)
      End
    End
    t = t + 1
  End
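As a concrete illustration, the search loop of Algorithm 1 and Eqs. (1)–(6) can be sketched in Python. This is a minimal sketch, not the authors' implementation: the function and parameter names, the fixed random seed, and the use of a uniformly drawn angle in place of the roulette-wheel selection are assumptions made here for clarity.

```python
import numpy as np

def scso(objective, lb, ub, dim, pop_size=30, max_iter=1000, sm=2.0):
    """Minimal sketch of standard SCSO following Eqs. (1)-(6)."""
    rng = np.random.default_rng(0)
    pos = rng.uniform(lb, ub, size=(pop_size, dim))     # random initialization
    fitness = np.apply_along_axis(objective, 1, pos)
    best_idx = int(np.argmin(fitness))
    best_pos, best_fit = pos[best_idx].copy(), float(fitness[best_idx])

    for t in range(max_iter):
        r_g = sm - (sm * t) / max_iter                  # Eq. (1): linear decay 2 -> 0
        for i in range(pop_size):
            R = 2.0 * r_g * rng.random() - r_g          # Eq. (2)
            r = r_g * rng.random()                      # Eq. (3): per-cat sensitivity
            theta = rng.uniform(0.0, 2.0 * np.pi)       # random angle (roulette wheel in the paper)
            if abs(R) <= 1.0:                           # exploitation, Eq. (5)
                pos_rnd = np.abs(rng.random(dim) * best_pos - pos[i])
                pos[i] = best_pos - r * pos_rnd * np.cos(theta)
            else:                                       # exploration, Eq. (4)
                pos[i] = r * (best_pos - rng.random(dim) * pos[i])
            pos[i] = np.clip(pos[i], lb, ub)
            f = float(objective(pos[i]))
            if f < best_fit:                            # greedy best-solution update
                best_fit, best_pos = f, pos[i].copy()
    return best_pos, best_fit
```

On a simple sphere function such as f1 from Table 1, this loop contracts toward the best-so-far position as rG decays, which is the behavior the improvements in Section 2.2 are designed to refine.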
During the initialization phase of the algorithm, sand cats are randomly assigned positions in the search space within predefined boundaries. Upon entering the search phase, each sand cat's position update is adjusted based on a randomly selected reference point, thus facilitating the exploration of new regions within the vast search space. In order to prevent the algorithm from falling into a local optimal solution prematurely, SCSO is specifically designed so that each sand cat has a unique sensitivity range, a property that ensures the algorithm's versatility and adaptability during the global search process, as shown in Eq. (3). Thus, rG represents the general sensitivity range (linearly decreasing from 2 to 0), while r is the sensitivity range of each individual cat.

2.2. Proposed algorithm

2.2.1. Improved circle chaotic mapping for population initialization
Chaotic mapping has the properties of randomness, non-repeatability and chaotic traversal, and is able to generate uniformly distributed populations, making it an important tool in the design of optimization algorithms (Shi and Li, 2019). The purpose of introducing chaotic mapping into the algorithm design is to increase the diversity of potential solutions by using chaotic mapping to generate an initial population.


Fig. 1. Distributions of random initialization, Circle mapping, and improved Circle mapping.

Fig. 2. Six different convergence factors. (a) the original convergence factor in Eq. (1), (b) the nonlinear convergence factor in Eq. (9), (c) the sinusoidal convergence
factor shown in Eq. (10), (d) the fast convergence factor proposed in this paper shown in Eq. (11), (e) the quintuple convergence factor demonstrated in Eq. (12), and
(f) the Water wave dynamic factor shown in Eq. (13).

Based on this, the algorithm is able to explore the search space more dynamically and comprehensively, helping to improve the efficiency and accuracy of the algorithm.

Circle mapping is stable and its chaotic values have high coverage. Circle chaotic mapping introduces nonlinearity through a sinusoidal function and uses the modulo operation in Eq. (7) to ensure that the result lies in the interval [0, 1). Its mathematical expression is:

xi+1 = mod(xi + 0.2 − (0.5/2π) sin(2πxi), 1)   (7)

Considering that the Circle mapping takes denser values in [0.2, 0.6], the improved Circle mapping (Song et al., 2023) is introduced to make the values more uniform, as shown in Eq. (8). Fig. 1 shows the distribution of random initialization, Circle mapping, and improved Circle mapping.
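The two mappings of Eqs. (7) and (8), and their use for population initialization, can be sketched as follows. This is a minimal sketch under assumptions made here: the seed value x0 = 0.7 and the helper names (circle_map, improved_circle_map, chaotic_init) are illustrative, not from the paper.

```python
import numpy as np

def circle_map(n, x0=0.7):
    """Standard Circle chaotic map, Eq. (7); values stay in [0, 1)."""
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = (x[i] + 0.2 - (0.5 / (2 * np.pi)) * np.sin(2 * np.pi * x[i])) % 1.0
    return x

def improved_circle_map(n, x0=0.7):
    """Improved Circle chaotic map, Eq. (8); spreads values more uniformly."""
    x = np.empty(n)
    x[0] = x0
    for i in range(n - 1):
        x[i + 1] = (3.85 * x[i] + 0.4
                    - (0.7 / (3.85 * np.pi)) * np.sin(3.85 * np.pi * x[i])) % 1.0
    return x

def chaotic_init(pop_size, dim, lb, ub):
    """Scale the chaotic sequence from [0, 1) onto the search box [lb, ub]."""
    seq = improved_circle_map(pop_size * dim).reshape(pop_size, dim)
    return lb + seq * (ub - lb)
```

Plotting histograms of the two sequences reproduces the qualitative difference shown in Fig. 1: the improved mapping avoids the clustering of values in [0.2, 0.6].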


Fig. 3. LOBL schematic.
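The construction shown in Fig. 3 is formalized in Eqs. (14)–(17) of Section 2.2.3; the computation of the opposite point can be sketched as below. This is a minimal sketch: the function name is an assumption made here, and the boundary checking and greedy acceptance described in the text are omitted for brevity.

```python
import numpy as np

def lobl_opposite(pos, lb, ub, iter_c, iter_max):
    """Lens opposition-based learning point, Eqs. (16)-(17).

    At iter_c = 0, k = 1 and the result is the classic OBL reflection
    lb + ub - pos; as iterations grow, k -> 2**10 and the opposite point
    collapses toward the interval midpoint (the lens 'focusing' effect).
    """
    k = (1.0 + np.sqrt(iter_c / iter_max)) ** 10       # Eq. (16)
    mid = (lb + ub) / 2.0
    return mid + mid / k - pos / k                     # Eq. (17)
```

Writing Eq. (17) as mid + (mid − pos)/k makes the geometry explicit: the opposite point always lies on the far side of the midpoint, at a distance shrunk by the factor k.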

Fig. 4. Flowchart of GSCSO algorithm.

xi+1 = mod(3.85xi + 0.4 − (0.7/3.85π) sin(3.85πxi), 1)   (8)

As shown in Fig. 1, the left column shows random initialization, the middle column shows the Circle mapping, and the right column shows the improved Circle mapping. As can be seen from the figure, the improved Circle mapping greatly improves the uniformity of the distribution. Therefore, the improved Circle mapping can be applied to the initialization to improve the diversity of the population and strengthen the global search ability of the algorithm.

2.2.2. Water wave dynamic factor
In SCSO, the value of the sensitivity parameter rG, which simulates the ability of a sand cat to detect low-frequency sounds in nature, is used to control the balance of the algorithm between the exploration and exploitation phases. The initial value of rG in the SCSO algorithm is set to 2. As the number of iterations increases, this value decreases linearly and eventually reduces to 0, as shown in Eq. (1). However, the linear decreasing mechanism can cause the algorithm to fall into a local optimum trap that is difficult to escape from, leading to low exploitation accuracy. This problem is especially obvious when dealing with complex and high-dimensional problems, as the algorithm


Table 1
Benchmark functions.

Function                                                                          Dim  Domain         Theoretical optimum
f1(x)  = Σ_{i=1}^{n} x_i^2                                                         30  [-100, 100]    0
f2(x)  = Σ_{i=1}^{n} |x_i| + Π_{i=1}^{n} |x_i|                                     30  [-10, 10]      0
f3(x)  = Σ_{i=1}^{n} (Σ_{j=1}^{i} x_j)^2                                           30  [-100, 100]    0
f4(x)  = max_i {|x_i|, 1 ≤ i ≤ n}                                                  30  [-100, 100]    0
f5(x)  = Σ_{i=1}^{n-1} [100 (x_{i+1} − x_i^2)^2 + (x_i − 1)^2]                     30  [-30, 30]      0
f6(x)  = Σ_{i=1}^{n} ([x_i + 0.5])^2                                               30  [-100, 100]    0
f7(x)  = Σ_{i=1}^{n} i·x_i^4 + random[0, 1)                                        30  [-1.28, 1.28]  0
f8(x)  = − Σ_{i=1}^{n} x_i sin(√|x_i|)                                             30  [-500, 500]    −12,569.4
f9(x)  = Σ_{i=1}^{n} [x_i^2 − 10 cos(2πx_i) + 10]                                  30  [-5.12, 5.12]  0
f10(x) = −20 exp(−0.2 √((1/n) Σ_{i=1}^{n} x_i^2))
         − exp((1/n) Σ_{i=1}^{n} cos(2πx_i)) + 20 + e                              30  [-32, 32]      0
f11(x) = (1/4000) Σ_{i=1}^{n} x_i^2 − Π_{i=1}^{n} cos(x_i/√i) + 1                  30  [-600, 600]    0
f12(x) = (π/n) {10 sin^2(πy_1) + Σ_{i=1}^{n-1} (y_i − 1)^2 [1 + 10 sin^2(πy_{i+1})]
         + (y_n − 1)^2} + Σ_{i=1}^{n} u(x_i, 10, 100, 4),
         where y_i = 1 + (x_i + 1)/4 and
         u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a
                                                                                   30  [-50, 50]      0
f13(x) = 0.1 {sin^2(3πx_1) + Σ_{i=1}^{n-1} (x_i − 1)^2 [1 + sin^2(3πx_{i+1})]
         + (x_n − 1)^2 [1 + sin^2(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4)         30  [-50, 50]      0
f14(x) = (1/500 + Σ_{j=1}^{25} 1/(j + Σ_{i=1}^{2} (x_i − a_{ij})^6))^{-1}           2  [-65, 65]      1
f15(x) = Σ_{i=1}^{11} [a_i − x_1 (b_i^2 + b_i x_2)/(b_i^2 + b_i x_3 + x_4)]^2       4  [-5, 5]        0.0003075
f16(x) = 4x_1^2 − 2.1x_1^4 + x_1^6/3 + x_1 x_2 − 4x_2^2 + 4x_2^4                    2  [-5, 5]        −1.0316285
f17(x) = (x_2 − (5.1/4π^2) x_1^2 + (5/π) x_1 − 6)^2 + 10 (1 − 1/(8π)) cos x_1 + 10  2  [-5, 5]        0.398
f18(x) = [1 + (x_1 + x_2 + 1)^2 (19 − 14x_1 + 3x_1^2 − 14x_2 + 6x_1x_2 + 3x_2^2)]
         · [30 + (2x_1 − 3x_2)^2 (18 − 32x_1 + 12x_1^2 + 48x_2 − 36x_1x_2 + 27x_2^2)]
                                                                                    2  [-2, 2]        3
f19(x) = − Σ_{i=1}^{4} c_i exp(− Σ_{j=1}^{3} a_{ij} (x_j − p_{ij})^2)               3  [0, 1]         −3.86
f20(x) = − Σ_{i=1}^{4} c_i exp(− Σ_{j=1}^{6} a_{ij} (x_j − p_{ij})^2)               6  [0, 1]         −3.32
f21(x) = − Σ_{i=1}^{5}  [(x − a_i)(x − a_i)^T + c_i]^{-1}                           4  [0, 10]        −10
f22(x) = − Σ_{i=1}^{7}  [(x − a_i)(x − a_i)^T + c_i]^{-1}                           4  [0, 10]        −10
f23(x) = − Σ_{i=1}^{10} [(x − a_i)(x − a_i)^T + c_i]^{-1}                           4  [0, 10]        −10

may not be able to cover and explore the whole search space effectively, resulting in limited exploitation accuracy. It is therefore important to design an appropriate convergence factor. The introduction of nonlinear variables increases the complexity of the search process and improves the search capability of the algorithm (Seyyedabbasi et al., 2023), as shown in Eq. (9). Other algorithms have proposed different convergence factors: the literature (Duan and Yu, 2022) proposes a convergence factor based on a sinusoidal function, which facilitates the search for the optimal value over the global range, as shown in Eq. (10). We also propose a convergence factor that enables the algorithm to converge quickly, which speeds up the convergence of the algorithm, as shown in Eq. (11). The literature (Huang et al., 2024) proposes a quintic nonlinear convergence factor that enhances the search ability of the algorithm, as shown in Eq. (12). The literature (Zheng, 2015) proposes a water wave dynamic factor that draws on the inherent uncertainty of water wave dynamics to improve the algorithm's adaptability to complex functions and enhance its solution capability, as shown in Eq. (13). Fig. 2 illustrates these six convergence factors: (a) the original convergence factor of Eq. (1), (b) the nonlinear convergence factor of Eq. (9), (c) the sinusoidal convergence factor of Eq. (10), (d) the fast convergence factor proposed in this paper, Eq. (11), (e) the quintic convergence factor of Eq. (12), and (f) the water wave dynamic factor of Eq. (13).

a = 2 × (1 − (iterc/itermax)^2)   (9)

a = 1 + sin(π/2 + π × iterc/itermax)   (10)

a = 2 − 2 × (iterc/itermax)^(1/10)   (11)

a = 2 − 2 × (iterc/itermax)^5   (12)

rG = 2 × rand × S × exp(−iterc/itermax)^k   (13)

where itermax is the maximum number of iterations, iterc is the current number of iterations, S is a random number with S ∈ [0, 1], and k ∈ [1, 3]. The larger k is, the smaller rG becomes; when k is smaller, the situation is just the opposite. In this paper, we set k to 3.

When dealing with complex multimodal functions, the SCSO algorithm often encounters the problem of insufficient accuracy.
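Under the notation above, the six decay schedules of Eqs. (1) and (9)–(13) can be sketched as follows. This is an illustrative sketch: the dictionary keys and the fixed seed are assumptions made here, and Eq. (13) is implemented literally as written, with S drawn once per call as a random number in [0, 1].

```python
import numpy as np

def convergence_factors(iter_c, iter_max, k=3, rng=None):
    """Evaluate the six convergence-factor schedules of Fig. 2 at one iteration."""
    rng = rng if rng is not None else np.random.default_rng(0)
    s = rng.random()                       # S in [0, 1], Eq. (13)
    x = iter_c / iter_max
    return {
        "linear (1)":      2.0 - 2.0 * x,                       # Eq. (1) with SM = 2
        "nonlinear (9)":   2.0 * (1.0 - x ** 2),                # Eq. (9)
        "sinusoidal (10)": 1.0 + np.sin(np.pi / 2 + np.pi * x), # Eq. (10)
        "fast (11)":       2.0 - 2.0 * x ** (1.0 / 10.0),       # Eq. (11)
        "quintic (12)":    2.0 - 2.0 * x ** 5,                  # Eq. (12)
        "water wave (13)": 2.0 * rng.random() * s * np.exp(-x) ** k,  # Eq. (13)
    }
```

Evaluating the dictionary over a grid of iterations reproduces the shapes in Fig. 2: all five deterministic schedules start at 2 and end at 0, differing only in how aggressively they shrink, while the water wave factor adds stochastic fluctuation.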


Table 2
Sensitivity analysis of population size P and maximum iteration number t.

Function  Criterion  P/t = 15/2000  P/t = 30/1000  P/t = 60/500
f1        Mean       0.0000E+00     0.0000E+00     0.0000E+00
          Std        0.0000E+00     0.0000E+00     0.0000E+00
          Rank       2              2              2
f5        Mean       6.4612E+00     6.2003E+00     6.1599E+00
          Std        1.1996E+00     8.0128E-01     7.4337E-01
          Rank       3              2              1
f7        Mean       4.2843E-05     3.2259E-05     3.1646E-05
          Std        4.0022E-05     3.0189E-05     3.3746E-05
          Rank       3              1              2
f8        Mean       -3.4544E+03    -3.3226E+03    -3.3233E+03
          Std        5.8800E+02     6.2504E+02     6.1590E+02
          Rank       1              3              2
f11       Mean       0.0000E+00     0.0000E+00     0.0000E+00
          Std        0.0000E+00     0.0000E+00     0.0000E+00
          Rank       2              2              2
f15       Mean       3.1956E-04     3.0749E-04     3.0749E-04
          Std        8.5353E-05     1.0589E-09     1.5754E-09
          Rank       3              1              2
f18       Mean       3.0000E+00     3.0000E+00     3.0000E+00
          Std        3.5396E-06     1.5728E-06     3.9795E-06
          Rank       2              1              3
f20       Mean       -3.2671E+00    -3.8628E+00    -3.2503E+00
          Std        6.0066E-02     4.4033E-02     5.9134E-02
          Rank       3              1              2
f22       Mean       -1.0403E+01    -1.0403E+01    -1.0403E+01
          Std        1.4144E-07     7.6504E-08     2.6908E-07
          Rank       2              1              3
Rank-Count           21             14             19
Ave-Rank             2.3333         1.5556         2.1111
Overall-Rank         3              1              2

Table 3
Specific parameter settings for the comparison algorithms.

Algorithm    Parameter   Value
SSA          c1          2exp(−(4t/T)^2)
HHO          β           1.5
PSO          c1          2
             c2          2
             wstart      0.9
             wend        0.6
GWO          amax        2
             amin        0
GSA          Alpha       20
             Rpower      1
             Rnorm       2
             G0          100
HBA          β           6
             C           2
             vec_flag    [−1, 1]
DBO          P_percent   0.2
FHO          HN          [1, ceil(nPop/5)]
SCSO series  rG          [2, 0]
             R           [−2rG, 2rG]

To solve this problem, the hydrodynamic evolution factor is introduced, which enables the algorithm to explore a wider search range by incorporating the principle of hydrodynamics. This dynamic uncertainty brings new vitality to the algorithm, reduces the blindness of the search process, and motivates the individuals in the algorithm to exchange information and learn more frequently and deeply. This enhanced information exchange not only helps to maintain the diversity of the population but also effectively prevents the algorithm from falling into local optimal solutions prematurely.

2.2.3. Lens opposition-based learning
Lens Opposition-Based Learning (LOBL) integrates the concept of Opposition-Based Learning (OBL) with the principles of lens imaging, with the aim of improving the search capability of the multi-objective optimization algorithm (Yu et al., 2024). The core idea is to utilize the lens imaging principle to guide the updating of the solution in order to improve the global search capability and convergence speed of the algorithm, complementing the traditional random search strategy. The LOBL strategy demonstrates more efficient performance than OBL in finding the optimal or near-optimal solution.

According to the principle of optical imaging, when an object is located beyond twice the focal length of a lens, it forms an inverted and reduced real image between one and two focal lengths on the opposite side of the lens. As shown in Fig. 3, assume that the midpoint of the interval [lb, ub] is O, and consider the y axis as a convex lens. An object of height h is located at point x, at twice the focal length of the lens. When this object is imaged through the lens, the coordinates of the vertex of its image become (x*, h*), as expressed in Eq. (14).

((lb + ub)/2 − x) / (x* − (lb + ub)/2) = h/h*   (14)

Assuming k = h/h*, Eq. (14) can be simplified to Eq. (15).

x* = (lb + ub)/2 + (lb + ub)/(2k) − x/k   (15)

In this paper, k is a dynamically varying coefficient that increases with the number of iterations. Its growth rate is controlled by (iterc/itermax)^(1/2) and is scaled up by a power of 10 to simulate the focusing effect of the lens, as shown in Eq. (16). The opposite position Poso(t) is then given by Eq. (17).

k = (1 + (iterc/itermax)^(1/2))^10   (16)

Poso(t) = (lb + ub)/2 + (lb + ub)/(2k) − Posc(t)/k   (17)

where itermax is the maximum number of iterations, iterc is the current number of iterations, Posc is the position of the current individual, Poso is the position of the opposite point, and ub and lb are the upper and lower bounds, respectively.

The new location needs to be checked and adapted to ensure that it lies within the boundaries of the search space. If the fitness of the new position is lower than the fitness of the current position, the current position is updated to the new position and the new fitness is recorded. If the fitness of the new position is lower than the currently recorded global optimal fitness, the global optimal fitness is updated and the corresponding optimal position is recorded.

The advantage of the LOBL strategy is that the geometric relationship between the current solution and the optimal solution can be utilized instead of searching completely at random, which helps to improve the global exploration ability of the algorithm. By dynamically adjusting the parameter k, a balance between exploration and exploitation can be achieved, which improves the convergence speed and stability of the algorithm. The new solution update mechanism better exploits the geometric properties of the problem itself and improves the solution accuracy of the algorithm. The LOBL strategy complements the original stochastic search strategy, and together they promote the overall improvement of the algorithm's performance.

2.2.4. Golden sine
The Golden Sine algorithm (GSA) is an intelligent algorithm proposed by Tanyildizi et al. in 2017 based on the sine function, which has the advantages of fast search speed, simple parameter tuning and good robustness (Tanyildizi and Demir, 2017). The central idea is to combine the periodicity of the sine function and the nonlinear properties of the golden section ratio to guide an individual's exploration of the search space. The sine function can produce periodic oscillations, while the golden section ratio can produce

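A minimal Python sketch of the LOBL step of Eqs. (16) and (17), together with the greedy acceptance rule described in Section 2.2.3. This is illustrative only: the paper's experiments were run in MATLAB, and the function names and the minimization setting here are our own assumptions.

```python
def lobl_opposite(pos, lb, ub, iter_c, iter_max):
    """Map a position to its lens-based opposite point (Eqs. 16-17)."""
    # Eq. (16): the scaling factor k grows with the iteration count
    k = (1.0 + (iter_c / iter_max) ** 0.5) ** 10
    # Eq. (17): opposite point of each coordinate, per dimension
    opposite = [(l + u) / 2 + (l + u) / (2 * k) - x / k
                for x, l, u in zip(pos, lb, ub)]
    # clamp the candidate back into the search-space boundaries
    return [min(max(x, l), u) for x, l, u in zip(opposite, lb, ub)]


def lobl_step(pos, fitness, lb, ub, iter_c, iter_max):
    """Keep the opposite point only if it improves the fitness (minimization)."""
    cand = lobl_opposite(pos, lb, ub, iter_c, iter_max)
    return cand if fitness(cand) < fitness(pos) else pos
```

At early iterations k ≈ 1 and the mapping reduces to classical OBL (the mirror image about the centre of the search space); as k grows, the opposite point collapses toward the centre, matching the exploration-to-exploitation transition described above.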
Y. Huang et al. Intelligent Systems with Applications 25 (2025) 200486

Table 4
Experimental results of GSCSO and comparison algorithms under 50 cycles of 23 test functions.
Function Algorithm Mean Std Function Algorithm Mean Std

f1 SSA 3.9070E-08 7.8902E-09 f13 SSA 3.9364E-03 5.7121E-03


HHO 9.0319E-186 0.0000E+00 HHO 8.8174E-07 0.0000E+00
PSO 4.7149E-09 1.2228E-08 PSO 4.6147E-03 5.4780E-03
GWO 1.3206E-58 6.1024E-58 GWO 5.0964E-01 1.6733E-01
GSA 0.0000E+00 0.0000E+00 GSA 1.3495E-05 3.9599E-05
DBO 2.4435E-229 0.0000E+00 DBO 2.6306E-01 3.1091E-01
FHO 5.2142E-158 3.6070E-157 FHO 6.5557E-03 3.5521E-03
HBA 2.8431E-276 0.0000E+00 HBA 9.7404E-02 1.2985E-01
SCSO 1.1002E-267 0.0000E+00 SCSO 1.4033E-01 1.3870E-01
SCSO1 0.0000E+00 0.0000E+00 SCSO1 1.6244E+00 5.6402E-01
SCSO2 9.1832E-228 0.0000E+00 SCSO2 1.2370E+00 1.1177E+00
SCSO3 0.0000E+00 0.0000E+00 SCSO3 2.2142E+00 5.3727E-01
GSCSO 0.0000E+00 0.0000E+00 GSCSO 6.2944E-02 8.9261E-02
f2 SSA 1.3414E+02 5.6577E+01 f14 SSA 1.0378E+00 1.9677E-01
HHO 1.2125E-92 8.5730E-92 HHO 1.2549E+00 9.9274E-01
PSO 5.3692E-04 1.2418E-03 PSO 3.3623E+00 2.7210E+00
GWO 8.9338E-35 8.5018E-35 GWO 3.4685E+00 3.6676E+00
GSA 9.4649E-224 0.0000E+00 GSA 1.0179E+00 1.4058E-01
DBO 2.0926E-115 1.4791E-114 DBO 1.1965E+00 5.6700E-01
FHO 1.0562E-40 3.0456E-40 FHO 2.0105E+00 1.2141E+00
HBA 2.0698E-145 1.3469E-144 HBA 1.5269E+00 2.0453E+00
SCSO 2.4428E-136 1.7272E-135 SCSO 3.5876E+00 3.6163E+00
SCSO1 0.0000E+00 0.0000E+00 SCSO1 6.5533E+00 4.4022E+00
SCSO2 5.0813E-120 3.4991E-119 SCSO2 2.3683E+00 3.1822E+00
SCSO3 0.0000E+00 0.0000E+00 SCSO3 9.8908E+00 3.7111E+00
GSCSO 0.0000E+00 0.0000E+00 GSCSO 1.4946E+00 6.7269E-01
f3 SSA 1.6053E+02 2.0700E+02 f15 SSA 1.5377E-03 8.8789E-04
HHO 3.7108E-137 2.6239E-136 HHO 3.2262E-04 1.7967E-05
PSO 1.6328E+01 7.9732E+00 PSO 8.3467E-04 2.0966E-04
GWO 2.2557E-14 1.0036E-13 GWO 4.6210E-03 8.0948E-03
GSA 0.0000E+00 0.0000E+00 GSA 3.6195E-04 5.7515E-05
DBO 6.7373E-138 4.7640E-137 DBO 7.4619E-04 3.7025E-04
FHO 1.0086E-154 6.0995E-154 FHO 1.1620E-03 1.3840E-03
HBA 3.0316E-198 0.0000E+00 HBA 5.6875E-03 9.2842E-03
SCSO 6.1300E-227 0.0000E+00 SCSO 3.6443E-04 2.2313E-04
SCSO1 0.0000E+00 0.0000E+00 SCSO1 5.3174E-04 2.8363E-04
SCSO2 1.2830E-198 0.0000E+00 SCSO2 3.2845E-04 1.3045E-04
SCSO3 0.0000E+00 0.0000E+00 SCSO3 3.3061E-03 3.1805E-03
GSCSO 0.0000E+00 0.0000E+00 GSCSO 3.0749E-04 1.0589E-09
f4 SSA 1.6592E+00 2.5872E+00 f16 SSA -1.0316E+00 7.6253E-14
HHO 1.3044E-89 8.6660E-89 HHO -1.0316E+00 4.3720E-11
PSO 6.1624E-01 1.5088E-01 PSO -1.0316E+00 2.3093E-16
GWO 1.9449E-14 2.9132E-14 GWO -1.0316E+00 6.0299E-09
GSA 9.1422E-217 0.0000E+00 GSA -1.0306E+00 1.3052E-03
DBO 5.4387E-111 3.4458E-110 DBO -1.0316E+00 2.6158E-16
FHO 1.8608E-64 6.2056E-64 FHO -1.0316E+00 1.0649E-05
HBA 8.1870E-116 5.2930E-115 HBA -1.0316E+00 2.8372E-16
SCSO 1.6283E-113 9.3147E-113 SCSO -1.0316E+00 1.5558E-10
SCSO1 0.0000E+00 0.0000E+00 SCSO1 -1.0296E+00 6.4773E-03
SCSO2 4.8773E-102 2.1732E-101 SCSO2 -1.0316E+00 1.9915E-10
SCSO3 0.0000E+00 0.0000E+00 SCSO3 -1.0242E+00 1.3944E-02
GSCSO 0.0000E+00 0.0000E+00 GSCSO -1.0316E+00 4.6111E-12
f5 SSA 9.2491E+02 1.9655E+03 f17 SSA 3.9789E-01 5.6916E-14
HHO 3.2861E-03 3.9962E-03 HHO 3.9789E-01 3.6842E-06
PSO 7.0990E+01 5.3554E+01 PSO 3.9789E-01 3.3645E-16
GWO 2.6918E+01 7.5204E-01 GWO 3.9789E-01 1.0025E-06
GSA 2.1121E-03 3.2765E-03 GSA 3.9835E-01 1.7214E-03
DBO 2.4894E+01 1.8625E-01 DBO 3.9789E-01 0.0000E+00
FHO 2.0638E-01 1.6263E-01 FHO 3.9809E-01 2.2526E-04
HBA 2.1742E+01 6.2141E-01 HBA 3.9789E-01 0.0000E+00
SCSO 6.9200E+00 8.0497E-01 SCSO 3.9789E-01 2.5625E-08
SCSO1 2.8417E+01 3.4598E-01 SCSO1 4.0029E-01 1.1287E-02
SCSO2 2.4063E+01 9.7633E+00 SCSO2 3.9789E-01 2.2838E-08
SCSO3 2.8870E+01 3.8758E-02 SCSO3 5.7768E-01 2.2565E-01
GSCSO 6.2003E+00 8.0128E-01 GSCSO 3.9789E-01 4.2148E-10
f6 SSA 3.9125E-08 6.7602E-09 f18 SSA 3.0000E+00 6.2653E-13
HHO 3.7438E-05 6.1832E-05 HHO 3.0000E+00 1.4412E-08
PSO 1.8534E-08 8.6260E-08 PSO 3.0000E+00 2.0370E-15
GWO 6.4982E-01 3.0093E-01 GWO 3.0000E+00 1.0575E-05
GSA 7.0830E-05 1.7411E-04 GSA 6.8603E+00 9.5972E+00
DBO 0.0000E+00 0.0000E+00 DBO 3.5400E+00 3.8184E+00
FHO 0.0000E+00 0.0000E+00 FHO 3.0011E+00 1.0031E-03
HBA 0.0000E+00 0.0000E+00 HBA 6.7800E+00 1.6374E+01
SCSO 9.0067E-02 1.4094E-01 SCSO 3.0000E+00 2.2061E-06
SCSO1 2.0107E+00 5.4227E-01 SCSO1 3.0105E+00 2.9162E-02


SCSO2 1.4755E+00 8.1776E-01 SCSO2 3.0000E+00 2.1883E-06
SCSO3 3.8989E+00 8.9072E-01 SCSO3 1.5760E+01 1.9121E+01
GSCSO 9.8926E-03 4.8957E-02 GSCSO 3.0000E+00 1.5728E-06
f7 SSA 5.9820E-02 1.9357E-02 f19 SSA -3.8628E+00 1.5496E-14
HHO 7.6651E-05 7.7196E-05 HHO -3.8617E+00 1.6928E-03
PSO 6.7901E-02 1.9881E-02 PSO -3.8628E+00 3.1235E-15
GWO 8.9154E-04 5.1126E-04 GWO -3.8619E+00 2.1471E-03
GSA 9.2440E-05 7.1102E-05 GSA -3.8225E+00 6.8472E-02
DBO 6.6876E-04 5.6351E-04 DBO -3.8619E+00 2.3973E-03
FHO 6.4127E-04 4.1846E-04 FHO -3.8369E+00 5.8712E-02
HBA 2.0117E-04 1.7805E-04 HBA -3.8465E+00 1.0923E-01
SCSO 3.3289E-05 3.6466E-05 SCSO -3.8608E+00 3.4253E-03
SCSO1 3.5568E-05 3.0303E-05 SCSO1 -3.8319E+00 4.1656E-02
SCSO2 6.6611E-05 6.3589E-05 SCSO2 -3.8606E+00 3.5738E-03
SCSO3 3.6552E-05 3.0977E-05 SCSO3 -3.3262E+00 8.3029E-01
GSCSO 3.2259E-05 3.0189E-05 GSCSO -3.8628E+00 1.1677E-07
f8 SSA -2.8492E+03 3.8597E+02 f20 SSA -3.2089E+00 2.9013E-02
HHO -1.2549E+04 1.4334E+02 HHO -3.1334E+00 1.1114E-01
PSO -6.5705E+03 1.0067E+03 PSO -3.2507E+00 5.8837E-02
GWO -6.2406E+03 5.8330E+02 GWO -3.2700E+00 7.3190E-02
GSA -1.2569E+04 2.6806E-02 GSA -3.0847E+00 1.8403E-01
DBO -8.6588E+03 1.9618E+03 DBO -3.2428E+00 7.8482E-02
FHO -1.2569E+04 9.6500E-01 FHO -3.2096E+00 9.4334E-02
HBA -8.8415E+03 9.5579E+02 HBA -3.2296E+00 1.1506E-01
SCSO -2.5801E+03 3.3542E+02 SCSO -3.2038E+00 1.5748E-01
SCSO1 -5.9214E+03 3.4755E+03 SCSO1 -3.2152E+00 1.0801E-01
SCSO2 -1.1782E+04 1.4846E+03 SCSO2 -3.2358E+00 2.1448E-01
SCSO3 -1.1736E+04 1.7565E+03 SCSO3 -2.4600E+00 4.0340E-01
GSCSO -3.3226E+03 6.2504E+02 GSCSO -3.3030E+00 4.4033E-02
f9 SSA 1.3368E+02 3.9415E+01 f21 SSA -6.7728E+00 2.9013E+00
HHO 0.0000E+00 0.0000E+00 HHO -5.5509E+00 1.5052E+00
PSO 4.5724E+01 1.0954E+01 PSO -8.5886E+00 2.4752E+00
GWO 3.9089E-01 1.7029E+00 GWO -9.7002E+00 1.5788E+00
GSA 0.0000E+00 0.0000E+00 GSA -1.0153E+01 1.0438E-03
DBO 0.0000E+00 0.0000E+00 DBO -7.4643E+00 2.6317E+00
FHO 0.0000E+00 0.0000E+00 FHO -8.3961E+00 1.9655E+00
HBA 0.0000E+00 0.0000E+00 HBA -9.8173E+00 1.6716E+00
SCSO 0.0000E+00 0.0000E+00 SCSO -5.8427E+00 2.1486E+00
SCSO1 0.0000E+00 0.0000E+00 SCSO1 -5.0392E+00 1.6779E-02
SCSO2 0.0000E+00 0.0000E+00 SCSO2 -8.5291E+00 2.3915E+00
SCSO3 0.0000E+00 0.0000E+00 SCSO3 -3.4367E+00 1.3510E+00
GSCSO 0.0000E+00 0.0000E+00 GSCSO -1.0153E+01 2.6744E-07
f10 SSA 1.8475E+00 3.7620E+00 f22 SSA -8.0912E+00 3.0720E+00
HHO 8.8818E-16 0.0000E+00 HHO -5.3541E+00 1.3096E+00
PSO 5.9447E-04 3.1238E-03 PSO -8.6374E+00 2.6257E+00
GWO 1.5881E-14 3.1524E-15 GWO -1.0190E+01 1.0521E+00
GSA 8.8818E-16 0.0000E+00 GSA -1.0402E+01 1.3221E-03
DBO 9.5923E-16 5.0243E-16 DBO -8.6629E+00 2.5827E+00
FHO 8.8818E-16 0.0000E+00 FHO -9.6536E+00 7.7093E-01
HBA 1.5939E+00 5.4602E+00 HBA -9.6782E+00 2.2096E+00
SCSO 8.8818E-16 0.0000E+00 SCSO -6.1242E+00 2.5851E+00
SCSO1 8.8818E-16 0.0000E+00 SCSO1 -5.2659E+00 9.9278E-01
SCSO2 8.8818E-16 0.0000E+00 SCSO2 -9.2393E+00 2.2132E+00
SCSO3 8.8818E-16 0.0000E+00 SCSO3 -3.6611E+00 1.5882E+00
GSCSO 8.8818E-16 0.0000E+00 GSCSO -1.0403E+01 7.6504E-08
f11 SSA 8.8648E-03 9.0987E-03 f23 SSA -1.0403E+01 2.3923E-07
HHO 0.0000E+00 0.0000E+00 HHO -8.3818E+00 3.2678E+00
PSO 9.3594E-03 8.0781E-03 PSO -5.4427E+00 1.2608E+00
GWO 3.0684E-03 9.9472E-03 GWO -9.9886E+00 1.8960E+00
GSA 0.0000E+00 0.0000E+00 GSA -1.0265E+01 1.3659E+00
DBO 0.0000E+00 0.0000E+00 DBO -8.7095E+00 2.5708E+00
FHO 0.0000E+00 0.0000E+00 FHO -9.7685E+00 3.9977E-01
HBA 0.0000E+00 0.0000E+00 HBA -8.8764E+00 3.0182E+00
SCSO 0.0000E+00 0.0000E+00 SCSO -1.0536E+01 1.4546E-03
SCSO1 0.0000E+00 0.0000E+00 SCSO1 -6.5755E+00 2.5421E+00
SCSO2 0.0000E+00 0.0000E+00 SCSO2 -5.1117E+00 1.6353E-02
SCSO3 0.0000E+00 0.0000E+00 SCSO3 -9.3495E+00 2.2576E+00
GSCSO 0.0000E+00 0.0000E+00 GSCSO -1.0536E+01 2.3004E-07
f12 SSA 2.3698E+00 2.0879E+00
HHO 2.9610E-06 5.1491E-06
PSO 2.0734E-03 1.4661E-02
GWO 5.3676E-02 7.7687E-02
GSA 2.9757E-06 6.1622E-06
DBO 2.3704E-09 8.9460E-09
FHO 8.6984E-04 4.7194E-04
HBA 2.1905E-08 6.2144E-08


SCSO 1.5673E-02 1.5856E-02
SCSO1 1.2922E-01 5.4229E-02
SCSO2 4.9564E-02 4.3541E-02
SCSO3 3.8629E-01 1.7830E-01
GSCSO 4.3561E-08 2.7870E-07

Table 5
Friedman test results for the 13 algorithms.
Function SSA HHO PSO GWO GSA DBO FHO HBA SCSO SCSO1 SCSO2 SCSO3 GSCSO

f1 13 9 12 11 2.5 7 10 5 6 2.5 8 2.5 2.5


f2 13 9 12 11 4 8 10 5 6 2 7 2 2
f3 13 10 12 11 2.5 9 8 7 5 2.5 6 2.5 2.5
f4 13 9 12 11 4 7 10 5 6 2 8 2 2
f5 13 2 12 8 1 7 3 6 5 9 11 10 4
f6 4 6 5 10 7 2 2 2 9 12 11 13 8
f7 12 6 13 11 7 10 9 8 4 2 5 3 1
f8 11 3 9 8 1 7 2 6 13 12 4 5 10
f9 13 5.5 12 11 5.5 5.5 5.5 5.5 5.5 5.5 5.5 5.5 5.5
f10 12 4.5 11 10 4.5 9 4.5 13 4.5 4.5 4.5 4.5 4.5
f11 13 5.5 12 11 5.5 5.5 5.5 5.5 5.5 5.5 5.5 5.5 5.5
f12 13 4 7 10 5 1 6 2 8 11 9 12 3
f13 3 1 4 10 2 9 5 7 8 11 12 13 6
f14 2 5 9 10 1 3 6 7 11 12 8 13 4
f15 9 2 7 12 3 8 10 13 5 6 4 11 1
f16 5.5 5.5 5.5 5.5 11 5.5 5.5 5.5 5.5 12 5.5 13 5.5
f17 5 5 5 5 11 5 10 5 5 12 5 13 5
f18 4 4 4 4 11 10 8 12 4 9 4 13 4
f19 2 4 2 5 11 6 10 12 7 9 8 13 2
f20 4 10 3 2 12 5 6 7 9 8 11 13 1
f21 11 10 7 4 2 9 5 3 12 8 6 13 1
f22 9 11 8 3 2 7 4 5 12 10 6 13 1
f23 3 10 12 6 5 8 4 9 2 13 11 7 1
Rank-Count 200.5 141 195.5 189.5 120.5 153.5 149 155.5 158 180.5 165 202.5 82
Ave-Rank 8.72 6.13 8.50 8.24 5.24 6.67 6.48 6.76 6.87 7.85 7.17 8.80 3.57
Overall-Rank 12 3 11 10 2 5 4 6 7 9 8 13 1

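Each row of Table 5 ranks the algorithms on one test function, with tied results sharing the average rank (for instance, the four algorithms that reach 0.0000E+00 on f1 all share rank 2.5); Rank-Count and Ave-Rank then aggregate these per-function ranks. A small Python sketch of that ranking step, assuming lower objective values are better (the function names are ours):

```python
def friedman_ranks(scores):
    """Rank one function's results (lower is better), averaging tied ranks."""
    n = len(scores)
    order = sorted(range(n), key=lambda i: scores[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        # extend j over the run of scores tied with position i
        while j + 1 < n and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        shared = (i + j + 2) / 2  # average of 1-based positions i+1 .. j+1
        for idx in order[i:j + 1]:
            ranks[idx] = shared
        i = j + 1
    return ranks


def ave_rank(rank_rows, algo):
    """Ave-Rank of one algorithm over all per-function rank rows."""
    return sum(row[algo] for row in rank_rows) / len(rank_rows)
```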
nonlinear variations. Combining these two properties allows individuals to strike an effective balance between local and global optimization, thus improving the overall performance of the algorithm.

GSA introduces golden section coefficients x1 and x2 in the location updating process so that exploration and exploitation achieve a good balance. These coefficients reduce the search space to lead the individual close to the optimal value:

x1 = a·(1 − τ) + b·τ (18)

x2 = a·τ + b·(1 − τ) (19)

where a and b are the initial values of the golden section search, with a = −π and b = π in this paper, and τ is the golden ratio, as shown in Eq. (20).

τ = (√5 − 1)/2 (20)

As the number of iterations increases, the Gold-SA algorithm used in this paper performs a position update via Eq. (21).

Pos(t + 1) = Posc(t)·|sin(R1)| − R2·sin(R1)·|x1·Posb(t) − x2·Posc(t)| (21)

where Posb is the optimal position, Posc is the current position, Pos(t + 1) is the updated position, and t is the number of iterations; R1 and R2 are random numbers in [0, 2π] and [0, π], respectively, representing the distance and direction of the next generation of individuals; x1 and x2 are golden section coefficients used to narrow the search space so that individuals converge to the optimal value.

The advantage of introducing GSA is that periodic oscillations can help individuals switch effectively between local and global optima and enhance the exploration ability of the algorithm. The nonlinear characteristic of the golden section ratio can guide the individual to search more effectively in the search space and avoid falling into a local optimum. By generating a new Pos(t + 1) and comparing it with the original position, it guides the individual to better balance local exploitation and global exploration during the search. This helps to improve the GSCSO algorithm's ability to solve complex optimization problems.

2.2.5. Algorithmic process

Algorithm 2 gives the pseudo-code for GSCSO.

Algorithm 2. GSCSO pseudocode
Initialize the population using the improved Circle chaotic map according to Eq. (8)
Calculate the fitness function based on the objective function
Execute LOBL for each agent based on Eq. (17)
Initialize rG, R, r based on Eqs. (13), (2) and (3)
While (t ≤ maximum iteration)
  For each search agent
    Get a random angle based on Roulette Wheel Selection (0° ≤ θ ≤ 360°)
    If (|R| ≤ 1)
      Update the search agent position based on Eq. (5)
    Else
      Update the search agent position based on Eq. (4)
    End
  End
  Update with the golden sine based on Eq. (21)
  t = t + 1
End

In this paper, the SCSO is improved as shown in the pseudo-code, in which the initialization uses the improved Circle chaotic mapping. A


Table 6
Wilcoxon signed rank test results.
Functions SSA vs GSCSO HHO vs GSCSO PSO vs GSCSO GWO vs GSCSO

p h p h p h p h

f1 3.3111E-20 1 3.3111E-20 1 3.3111E-20 1 3.3111E-20 1


f2 3.3111E-20 1 3.3111E-20 1 3.3111E-20 1 3.3111E-20 1
f3 3.3111E-20 1 3.3111E-20 1 3.3111E-20 1 3.3111E-20 1
f4 3.3111E-20 1 3.3111E-20 1 3.3111E-20 1 3.3111E-20 1
f5 7.0661E-18 1 7.0661E-18 1 1.3493E-16 1 7.0661E-18 1
f6 3.4621E-14 1 2.5603E-15 1 9.1443E-05 1 9.5403E-18 1
f7 7.0661E-18 1 8.5865E-04 1 7.0661E-18 1 7.0661E-18 1
f8 8.8867E-05 1 7.0661E-18 1 3.5254E-17 1 7.0661E-18 1
f9 3.3111E-20 1 NaN 0 3.3111E-20 1 7.5052E-06 1
f10 3.3111E-20 1 NaN 0 3.3111E-20 1 2.9053E-21 1
f11 3.3111E-20 1 NaN 0 3.3111E-20 1 1.2285E-02 1
f12 7.0279E-18 1 9.4820E-17 1 1.0589E-14 1 7.0279E-18 1
f13 5.1397E-10 1 3.8951E-02 1 1.2164E-07 1 3.9657E-17 1
f14 9.3675E-17 1 2.5733E-02 1 8.4871E-02 0 2.3953E-08 1
f15 7.0661E-18 1 7.0661E-18 1 7.0661E-18 1 1.0100E-16 1
f16 2.3840E-14 1 9.1438E-05 1 4.7330E-20 1 7.0661E-18 1
f17 7.0343E-18 1 5.1825E-04 1 3.3111E-20 1 7.0661E-18 1
f18 7.0661E-18 1 1.0701E-16 1 4.3619E-19 1 9.4090E-05 1
f19 7.0454E-18 1 9.5403E-18 1 6.6308E-20 1 2.6246E-17 1
f20 8.9415E-14 1 1.8349E-15 1 9.4691E-01 0 7.4363E-12 1
f21 8.5431E-02 0 7.0661E-18 1 1.8222E-03 1 7.0661E-18 1
f22 3.8951E-02 1 7.0661E-18 1 1.7147E-03 1 7.0661E-18 1
f23 1.9432E-03 1 7.0661E-18 1 6.4585E-14 1 7.0661E-18 1

Functions GSA vs GSCSO DBO vs GSCSO FHO vs GSCSO HBA vs GSCSO

p h p h p h p h

f1 NaN 0 3.3111E-20 1 3.3111E-20 1 3.3111E-20 1


f2 3.3111E-20 1 3.3111E-20 1 3.3111E-20 1 3.3111E-20 1
f3 NaN 0 3.3111E-20 1 3.3111E-20 1 3.3111E-20 1
f4 3.3111E-20 1 3.3111E-20 1 3.3111E-20 1 3.3111E-20 1
f5 7.0661E-18 1 7.0661E-18 1 7.0661E-18 1 7.0661E-18 1
f6 2.7060E-15 1 3.3111E-20 1 3.3111E-20 1 3.3111E-20 1
f7 3.5505E-06 1 2.1396E-16 1 8.4620E-18 1 9.9168E-13 1
f8 7.0660E-18 1 7.0661E-18 1 7.0661E-18 1 7.0661E-18 1
f9 NaN 0 NaN 0 NaN 0 NaN 0
f10 NaN 0 3.2709E-01 0 NaN 0 4.3349E-02 1
f11 NaN 0 NaN 0 NaN 0 NaN 0
f12 3.8338E-14 1 5.0325E-12 1 7.0279E-18 1 7.8662E-03 1
f13 2.0353E-02 1 1.2358E-06 1 8.4185E-02 0 7.8707E-03 1
f14 6.8233E-02 0 6.0710E-12 1 8.4030E-07 1 6.5465E-14 1
f15 7.0661E-18 1 1.5181E-09 1 7.0661E-18 1 8.5391E-02 0
f16 7.0661E-18 1 2.1215E-19 1 7.0661E-18 1 5.1961E-19 1
f17 7.0661E-18 1 3.3111E-20 1 7.0661E-18 1 3.3111E-20 1
f18 7.0661E-18 1 1.1096E-16 1 7.0661E-18 1 2.2840E-14 1
f19 7.0661E-18 1 2.0000E-11 1 7.0661E-18 1 1.8230E-11 1
f20 1.9072E-16 1 7.5237E-01 0 5.0501E-12 1 7.9483E-01 0
f21 1.0129E-17 1 1.6261E-04 1 7.0661E-18 1 8.4528E-16 1
f22 7.0661E-18 1 1.6725E-01 0 7.0661E-18 1 1.5218E-12 1
f23 7.0661E-18 1 1.7477E-01 0 7.0661E-18 1 6.6590E-06 1

Functions SCSO vs GSCSO SCSO1 vs GSCSO SCSO2 vs GSCSO SCSO3 vs GSCSO

p h p h p h p h

f1 3.3111E-20 1 NaN 0 3.3111E-20 1 NaN 0


f2 3.3111E-20 1 NaN 0 3.3111E-20 1 NaN 0
f3 3.3111E-20 1 NaN 0 3.3111E-20 1 NaN 0
f4 3.3111E-20 1 NaN 0 3.3111E-20 1 NaN 0
f5 5.1097E-07 1 7.0661E-18 1 5.6109E-10 1 7.0661E-18 1
f6 1.2414E-15 1 7.0661E-18 1 1.6330E-17 1 7.0661E-18 1
f7 8.1201E-01 0 5.2368E-01 0 6.3574E-04 1 3.7570E-01 0
f8 3.5360E-09 1 3.0556E-04 1 7.0661E-18 1 7.0661E-18 1
f9 NaN 0 NaN 0 NaN 0 NaN 0
f10 NaN 0 NaN 0 NaN 0 NaN 0
f11 NaN 0 NaN 0 NaN 0 NaN 0
f12 2.9385E-17 1 7.0279E-18 1 9.4892E-18 1 7.0279E-18 1
f13 1.7212E-05 1 7.0661E-18 1 7.1436E-08 1 7.0661E-18 1
f14 4.0034E-09 1 7.0661E-18 1 1.4473E-03 1 9.5211E-18 1
f15 1.4066E-12 1 7.0661E-18 1 3.2384E-11 1 7.0661E-18 1
f16 8.8643E-16 1 7.0661E-18 1 2.8498E-16 1 7.0661E-18 1
f17 2.1396E-16 1 7.0661E-18 1 9.0385E-15 1 7.0661E-18 1
f18 2.7152E-01 0 9.6788E-07 1 9.7325E-02 0 3.2384E-11 1
f19 8.0528E-14 1 7.0661E-18 1 2.0520E-13 1 7.0661E-18 1
f20 2.1605E-13 1 7.4363E-12 1 1.3486E-10 1 7.0661E-18 1
f21 8.9852E-18 1 7.0661E-18 1 7.9688E-18 1 7.0661E-18 1


f22 7.0661E-18 1 7.0661E-18 1 7.0661E-18 1 7.0661E-18 1
f23 7.0661E-18 1 7.0661E-18 1 7.0661E-18 1 7.0661E-18 1

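The p and h entries in Table 6 come from running a Wilcoxon signed-rank test on the paired per-run results of GSCSO and each competitor. A Python sketch using the large-sample normal approximation (the paper's MATLAB implementation, in particular its handling of ties and zero differences, may differ):

```python
import math

def wilcoxon_signed_rank(x, y):
    """Two-sided Wilcoxon signed-rank p-value (normal approximation)."""
    d = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    n = len(d)
    if n == 0:
        return float("nan")  # identical samples: reported as NaN in Table 6
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        for idx in order[i:j + 1]:
            ranks[idx] = (i + j + 2) / 2  # tied |d| share the average rank
        i = j + 1
    w_plus = sum(r for r, v in zip(ranks, d) if v > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    # two-sided p-value from the standard normal distribution
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
```

The significance symbol is then h = 1 if p < 0.05 and h = 0 otherwise.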
Fig. 5. Average convergence curves of the GSCSO algorithm and the comparative algorithm on the single-peak test functions.

water wave dynamic convergence factor is adopted to improve the convergence and stability of the algorithm. A lens opposition-based learning strategy is added to enhance the exploration ability of the algorithm and escape the trap of local optima. The golden sine strategy is incorporated to enhance the local search capability of the algorithm so that it can quickly find the optimal solution in the search region. The flowchart of the GSCSO algorithm is shown in Fig. 4, and the blue parts of the figure show the improvements made to the original algorithm in this paper.

2.3. Analysis of complexity

2.3.1. Computational complexity
Computational complexity indicates the relationship between the execution time of an algorithm and the amount of input data. It is often used to characterize the time or space resources required by an algorithm in the worst-case scenario, and is usually expressed in O notation.

The original SCSO defines the complexity of control-parameter computation as O(P × N), where P denotes the population size and N denotes the problem dimension. In the initialization phase, the algorithm also takes O(P × N) time to complete the initialization process. The computational complexity of the agent position update is likewise O(P × N). It is clear from the pseudo-code of GSCSO that the computational cost of the initialization phase is O(P × N), and GSCSO evaluates the fitness of each individual throughout the iteration process with a complexity of O(T × P × N), where T denotes the number of iterations. Golden Sine and LOBL are added, so the computational complexity becomes O(3 × T × P × N). In summary, the overall computational complexity of GSCSO is O(T × P × N), which is equal to that of the original SCSO.

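The iteration structure this accounting refers to can be illustrated with a compact Python skeleton of Algorithm 2. It is a sketch under our own naming, not the paper's MATLAB code: the chaotic initialization, the water wave factor and the SCSO roaming/attacking moves of Eqs. (4)-(5) are abstracted into a plain greedy loop, and only the golden sine step of Eqs. (18)-(21) is spelled out; each iteration visits every agent in every dimension once, i.e. O(T × P × N) work.

```python
import math
import random

def golden_sine_update(pos, best, a=-math.pi, b=math.pi):
    """Golden sine position update, Eq. (21), with Eqs. (18)-(20)."""
    tau = (math.sqrt(5.0) - 1.0) / 2.0          # Eq. (20): golden ratio
    x1 = a * (1.0 - tau) + b * tau              # Eq. (18)
    x2 = a * tau + b * (1.0 - tau)              # Eq. (19)
    r1 = random.uniform(0.0, 2.0 * math.pi)     # distance of the next move
    r2 = random.uniform(0.0, math.pi)           # direction of the next move
    return [p * abs(math.sin(r1)) - r2 * math.sin(r1) * abs(x1 * g - x2 * p)
            for p, g in zip(pos, best)]

def gscso_sketch(fitness, pop, lb, ub, iters):
    """Greedy skeleton of the GSCSO loop (minimization)."""
    best = min(pop, key=fitness)
    for _ in range(iters):                      # T iterations
        for i, agent in enumerate(pop):         # P agents
            cand = golden_sine_update(agent, best)   # N dimensions
            cand = [min(max(x, l), u) for x, l, u in zip(cand, lb, ub)]
            if fitness(cand) < fitness(agent):  # keep only improvements
                pop[i] = cand
        best = min(min(pop, key=fitness), best, key=fitness)
    return best
```

Because positions are only ever replaced by strictly better candidates, the best fitness found is non-increasing over the iterations.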

Fig. 6. Average convergence curves of the GSCSO algorithm and the comparison algorithm on multimodal test functions.

2.3.2. Space complexity
Space complexity is a key metric for measuring the storage space required for an algorithm to run. In metaheuristic algorithms, this metric is usually determined by both the population size and the problem dimension. When the population size of the algorithm is P and the problem dimension is N, the space complexity of the basic SCSO algorithm can be expressed as O(P × N). For GSCSO, although it improves in performance, its population size is also P and its problem dimension is N. Therefore, its space complexity is also maintained at O(P × N), which means that there is no difference between GSCSO and SCSO in terms of space complexity. GSCSO thus improves the performance of the algorithm without requiring extra storage space.

3. Experiments

In order to validate the improvement of the GSCSO algorithm, this paper evaluates its performance on 23 benchmark test functions. The results are also compared with nine meta-heuristic algorithms and three variants of SCSO. The functions used in this paper are categorized into three groups: single-peaked, multimodal and fixed-dimension test functions.

3.1. Experimental setup and evaluation criteria

The experimental environment for the GSCSO algorithm and the other algorithms in this paper is Windows 10 with an Intel® Core™ i5-12490F at 3.0 GHz and 16 GB of RAM. The programming environment is MATLAB. The simulation parameters for each algorithm are given in Table 1. Table 2 gives the details of each benchmark function: Dim is the dimension of each function, the value domain is the boundary between the lower and upper bounds of each function in the search space, and the theoretical optimum is the global optimum.

The performance of each algorithm is evaluated using the following criteria.

(1) Mean

The average value describes the concentration of all values in the data set. The general performance and stability of the algorithm over multiple experiments can be reflected by calculating the average of the test results after the algorithm has executed the test function over multiple cycles. The average value used in this paper is shown in Eq. (22).

Mean = (1/S) Σ_{i=1}^{S} Ci (22)

where S is the number of cycles and Ci is the result of the i-th independent experiment.

(2) Std

The standard deviation measures how much the values in a dataset deviate from the mean. In algorithm optimization and machine learning, it is often used to assess the consistency of an algorithm's performance over multiple runs, through which we can gain a deeper understanding of the stability of an algorithm across different tests. In this paper, the standard deviation is calculated as shown in Eq. (23).

Std = sqrt((1/S) Σ_{i=1}^{S} (Ci − (1/S) Σ_{i=1}^{S} Ci)^2) (23)

where S is the number of cycles and Ci is the result of the i-th independent experiment.

(3) Rank

The Friedman test results for all algorithms are ranked. The ranking is determined by calculating the average rank of each algorithm in each test, with tied results sharing the average rank. Ultimately, the ranking reflects each algorithm's combined performance across all comparisons. Rank-Count is the sum of the ranks, Ave-Rank is the average of the ranks, and Overall-Rank is the final rank.

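Eqs. (22) and (23) can be computed directly; note that Eq. (23) is the population form of the standard deviation (dividing by S, not S − 1). A small Python sketch (the function name is ours):

```python
import math

def mean_std(results):
    """Mean and Std of S independent runs, per Eqs. (22) and (23)."""
    s = len(results)
    mean = sum(results) / s                                     # Eq. (22)
    std = math.sqrt(sum((c - mean) ** 2 for c in results) / s)  # Eq. (23)
    return mean, std
```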

Fig. 7. Average convergence curves of the GSCSO algorithm and the comparative algorithms on the fixed dimensional test function.

3.2. Test functions

In this paper, 23 benchmark test functions are used for the experiments, as shown in Table 1, where f1–f7 are single-peak (unimodal) test functions, f8–f13 are multimodal test functions, and f14–f23 are fixed-dimension test functions. Single-peaked benchmark functions contain only one global optimal solution and no local optimal solutions. In contrast, multimodal functions provide a more complex optimization challenge; they have multiple local optimal solutions but only a unique global optimal solution. This tests not only the algorithm's ability to explore the space of different solutions, but also its efficiency during the exploitation phase. In the case of fixed-dimension functions, the dimension of each benchmark function is predetermined and does not provide the flexibility to adjust the dimension.

3.3. The sensitivity analysis about P, t

The results obtained by a meta-heuristic algorithm are influenced by the number of fitness evaluations (FES), usually FES = P ∗ t. In this paper the number of FES is set to 30,000. However, for the same FES, different settings of P and t may lead to differences in performance. Therefore, we chose three different P/t settings, 15/2000 (Set-1), 30/1000 (Set-2) and 60/500 (Set-3), to analyze their effects on GSCSO.

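The three settings keep the evaluation budget fixed, since FES = P ∗ t in each case; a quick check (the dictionary and function names are ours):

```python
# population size P and iteration count t for the three settings of Section 3.3
pt_sets = {"Set-1": (15, 2000), "Set-2": (30, 1000), "Set-3": (60, 500)}

def fes(p, t):
    """Fitness-evaluation budget of a run: FES = P * t."""
    return p * t
```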

Fig. 8. Box plots of GSCSO and the comparison algorithms.

As shown in Table 2, we selected nine functions for comparison, including three single-peak functions (f1, f5 and f7), two multimodal functions (f8 and f11) and four fixed-dimension functions (f15, f18, f20 and f22). For f1 and f11, the same results were obtained for all three P/t sets. For f8, Set-1 gives the best results. For f7, f15, f18, f20 and f22, Set-2 gives the best results. For f5, Set-3 had the best performance. Overall, the Overall-Rank of Set-2 is the best among the three groups. Therefore, with the same FES, this paper chooses Set-2 for the experiments.

3.4. Experimental results

The above 23 test functions were used to test SCSO and the variants of SCSO (SCSO1, SCSO2 and SCSO3), where SCSO1 is the SCSO that incorporates the golden sine strategy, SCSO2 replaces the original population initialization with Eq. (8), and SCSO3 incorporates the golden sine strategy and replaces the original convergence factor with Eq. (11). GSCSO is also compared with the Salp Swarm Algorithm (SSA) (Mirjalili et al., 2017), Harris Hawks Optimization (HHO) (Heidari et al., 2019), Particle Swarm Optimization (PSO) (Kennedy and Eberhart, 1995), the Grey Wolf Optimizer (GWO) (Mirjalili et al., 2014), the Golden Sine Algorithm (GSA), the Dung Beetle Optimizer (DBO) (Xue and Shen, 2023), the Honey Badger Algorithm (HBA) (Hashim et al., 2022), and the Fire Hawk Optimizer (FHO) (Azizi et al., 2023). Table 3 gives the specific parameter settings for the comparison algorithms.

In evaluating the performance of the GSCSO algorithm against the other optimization algorithms, we used a uniform experimental setup to ensure fairness. In particular, the population size was set to 30 and the maximum number of iterations was limited to 1000. To obtain more stable and reliable results, each algorithm was run 50 independent times. This allows the mean and standard deviation of each objective function to be collected, providing a comprehensive comparison and analysis of the efficiency and effectiveness of the different algorithms in solving the optimization problems. Table 4 gives the experimental results of GSCSO and the comparison algorithms over 50 runs on the 23 test functions.

According to the numerical results in Table 4: GSCSO finds the global optimal solution on f1–f4. On f5, GSA obtained the best performance and GSCSO did not show excellent performance. On f6, DBO, HBA and FHO all obtain the global optimum, while GSCSO does not, although it improves over the original SCSO. On f7 and f9–f11, GSCSO has the highest precision and the smallest standard deviation. On f8, GSA and FHO reach the theoretical optimum, while GSCSO, although an improvement over the original SCSO, does not achieve optimal


algorithms are better at dealing with specific benchmark functions.


Taking the GSCSO algorithm as an example, its performance ranking
with other 9 algorithms on 23 benchmark functions is shown in Table 5.
According to the data presented in Table 5, GSCSO has a rank-count
of 82 and an Ave-Rank of 3.56, and the comparison reveals that the
GSCSO algorithm excels among all the algorithms involved in the
comparison. This result indicates that GSCSO tops the list in terms of
comprehensive performance among the 23 benchmark functions
involved. The statistical analysis results of Friedman’s test further
corroborate the superiority of the GSCSO algorithm over other algo­
rithms for optimization problems.

3.6. Wilcoxon signed-rank test

The Wilcoxon signed rank test, as a nonparametric statistical


method, is an effective tool for assessing the difference in performance
between two algorithms (Dewan and Prakasa Rao, 2005). In this study,
this test was used to meticulously compare the GSCSO algorithm with
nine other algorithms, and the corresponding p-values and significance
symbols (h) were calculated. In the paper, we set a significance level
threshold of 0.05 to determine the significant difference in performance
between the algorithms. When the p-value is less than 0.05, i.e., the sign
Fig. 9. Grid environment model. of significance is 1, we consider that the GSCSO algorithm significantly
outperforms the comparison algorithms in terms of performance. On the
contrary, if the p-value is greater than 0.05 and the significance sign is 0,
this indicates that there is no significant difference in performance be­
tween the GSCSO algorithm and the comparison algorithm. In partic­
ular, when the p-value is NaN (i.e., non-numeric), we interpret this as an
indication that the algorithm is able to achieve similar results in terms of
performance as the GSCSO algorithm. By applying the Wilcoxon
signed-rank test on the 23 benchmark functions, we obtained the results
of the performance comparison between the GSCSO and the other al­
gorithms, which are presented in detail in Table 6.
Fig. 10. Possible directions of movement of the current point.

Table 6 presents the results of the Wilcoxon signed-rank test. In comparison with GWO, the test results for GSCSO are significant, with all p-values less than 0.05 and h-values equal to 1, indicating a significant performance difference between the two algorithms. When comparing with SSA and PSO, the frequency of h-values equal to 0 is low, which confirms a significant difference between GSCSO and these two algorithms. Since HHO, GSA, DBO, FHO, HBA and GSCSO are all able to find the global optimal solution on some test functions, the p-value becomes NaN and the h-value is 0 on those functions. However, there are significant differences between GSCSO and the other algorithms on the remaining test functions, for which the global optimal solution cannot be found. In comparison with GSA, SCSO and their variants, the Wilcoxon signed-rank test shows a higher occurrence of h-values equal to 0, due to a certain degree of similarity among the algorithms. Nevertheless, GSCSO still demonstrates significant differences when compared to these algorithms.
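The per-function decision rule described above (p < 0.05 gives h = 1; identical paired results give p = NaN and h = 0) can be sketched as follows. This is an illustrative, self-contained implementation, not the authors' code: it uses exact enumeration of the sign assignments, which is only practical for small samples (for the paper's 50 runs per function one would use a library routine or a normal approximation). The function name and the run values are our own.

```python
from itertools import product

def wilcoxon_signed_rank(x, y, alpha=0.05):
    """Exact two-sided Wilcoxon signed-rank test on paired samples (small n only).
    Follows the paper's convention: h = 1 marks a significant difference;
    identical samples give p = NaN and h = 0."""
    d = [a - b for a, b in zip(x, y) if a != b]  # drop zero differences
    if not d:
        return float("nan"), 0
    # Rank the absolute differences, averaging ranks for ties.
    abs_d = sorted(abs(v) for v in d)
    def rank(v):
        idxs = [i + 1 for i, w in enumerate(abs_d) if w == v]
        return sum(idxs) / len(idxs)
    ranks = [rank(abs(v)) for v in d]
    w_pos = sum(r for r, v in zip(ranks, d) if v > 0)
    # Exact null distribution: each difference is + or - with equal probability.
    n = len(ranks)
    stats = [sum(r for r, s in zip(ranks, signs) if s)
             for signs in product([0, 1], repeat=n)]
    mean_w = sum(ranks) / 2
    extreme = sum(1 for s in stats if abs(s - mean_w) >= abs(w_pos - mean_w))
    p = extreme / len(stats)
    return p, (1 if p < alpha else 0)

# Hypothetical best-fitness values from 10 independent runs of two algorithms.
gscso = [1.2e-8, 3.4e-9, 8.1e-9, 2.2e-8, 5.5e-9, 1.9e-8, 7.3e-9, 4.4e-9, 1.1e-8, 6.2e-9]
gwo   = [2.1e-3, 4.5e-4, 1.7e-3, 9.8e-4, 3.3e-3, 1.2e-3, 7.6e-4, 2.4e-3, 1.5e-3, 8.9e-4]
p, h = wilcoxon_signed_rank(gscso, gwo)
```

Because every difference has the same sign here, the test reports a significant difference (h = 1), matching the pattern seen in the GWO column of Table 6.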
performance, which is in accordance with the No Free Lunch theorem (Wolpert and Macready, 1997). On f12, DBO is the most effective of all the algorithms, while GSCSO is second only to DBO and HBA. On f15-f20, GSCSO has the highest average accuracy, but its standard deviation is not optimal, while on f21-f23 both the average accuracy and the standard deviation of GSCSO are optimal. The experimental results show that GSCSO has a clear advantage on the 23 test functions, indicating that GSCSO has better stability and a stronger ability to jump out of local optima.

3.5. Friedman test

In order to evaluate the comprehensive performance of the mentioned 10 algorithms in depth, the Friedman test, a statistical method, can be used for ranking comparisons among the algorithms (Röhmel, 1997). The Friedman test not only provides a relative ranking of algorithmic performance, but also helps us to identify which

3.7. Convergence analysis

Convergence analysis is an important method of assessing the nature of an algorithm, focusing on its ability to gradually approach a goal or a steady state during the iteration process. In this paper, the number of iterations is 1000 and the population size is 30, so each algorithm is evaluated for fitness up to 30,000 times. Figs. 5 to 7 show the average convergence curves of the GSCSO algorithm and the comparison algorithms on the different types of test functions: Fig. 5 shows the 7 single-peak test functions, Fig. 6 the 6 multimodal test functions, and Fig. 7 the 10 fixed-dimension test functions. The vertical coordinate of each graph is the average over 50 repeated experiments, and the horizontal coordinate is the number of iterations.

Fig. 5 shows the average convergence curves of GSCSO and the comparison algorithms on f1 to f7. It can be seen that the algorithms can

Y. Huang et al. Intelligent Systems with Applications 25 (2025) 200486

Fig. 11. Nine raster maps: (a)-(c) are 20 × 20 maps, (d)-(f) are 30 × 30 maps, (g)-(i) are 40 × 40 maps.

converge to a minimum value as quickly as possible on all single-peak test functions except f5 and f6. On the f5 and f6 test functions, although the GSCSO algorithm does not achieve the highest accuracy of all the algorithms, it is able to converge to a better value faster. This shows that GSCSO has a faster convergence speed.

Fig. 6 shows the average convergence curves of GSCSO and its comparison algorithms on f8-f13. On the f9, f10, f11 and f12 test functions, GSCSO, shown by the red line with the pentagram marker, is located in the lower left of the graph, indicating that on these four test functions the GSCSO algorithm is able to reach more accurate values faster. On f8, GSCSO performed poorly, which fits the description of the No Free Lunch theorem. On f13, although the GSCSO algorithm did not achieve the highest accuracy among all algorithms, it was able to converge to a good value more quickly.

Fig. 7 presents the average convergence curves of the GSCSO algorithm and the other comparison algorithms on the f14 to f23 test functions. From the figure, it can be seen that GSCSO shows excellent results on f15, f17 and f19-f23, with the fastest convergence rate and the highest convergence accuracy. On f14, GSCSO does not achieve the highest accuracy, but it has the fastest convergence speed. On f16 and f18, although GSCSO does not have the fastest convergence speed, it has the highest final convergence accuracy.

In summary, GSCSO has higher convergence efficiency compared to the other algorithms, indicating that the improvement of SCSO is effective


Fig. 12. Average path maps obtained by the 4 algorithms on 9 raster maps.

Table 7
Mean and standard deviation of the 4 algorithms in 9 raster maps.

Algorithm   Path planning in (a)       Path planning in (b)       Path planning in (c)
            Mean      Std              Mean      Std              Mean      Std
GSCSO       30.1798   3.9296E-01       31.5563   0.0000E+00       32.7279   1.4580E-14
GSA         30.3262   1.8030E-01       31.5563   0.0000E+00       32.7279   1.4580E-14
HHO         30.3848   0.0000E+00       31.5563   0.0000E+00       32.7279   1.4580E-14
SCSO        30.3848   0.0000E+00       31.5563   0.0000E+00       32.7279   1.4580E-14

Algorithm   Path planning in (d)       Path planning in (e)       Path planning in (f)
            Mean      Std              Mean      Std              Mean      Std
GSCSO       46.1905   6.7046E-01       45.9968   1.0478E+00       45.7533   1.2729E+00
GSA         46.4269   4.4721E-01       46.5160   7.0339E-01       46.7018   1.3727E+00
HHO         46.4562   3.1623E-01       46.5453   5.9426E-01       46.5311   1.3346E+00
SCSO        46.5269   7.2900E-15       46.6624   5.3206E-01       47.2311   8.2012E-01

Algorithm   Path planning in (g)       Path planning in (h)       Path planning in (i)
            Mean      Std              Mean      Std              Mean      Std
GSCSO       64.0988   7.4142E-01       61.4222   3.3462E-01       63.0122   0.0000E+00
GSA         64.5394   7.8526E-01       61.4222   2.7541E-01       63.0122   0.0000E+00
HHO         64.8637   3.8551E-01       61.5980   0.0000E+00       63.0122   0.0000E+00
SCSO        64.6344   7.1538E-01       61.5980   0.0000E+00       63.0122   0.0000E+00


Fig. 13. Optimal path diagram.

Table 8
Comparison of optimal paths.

Map   GSCSO     GSA       HHO       SCSO
(a)   29.2132   29.7990   30.3848   30.3848
(d)   44.5269   44.5269   45.1127   46.5269
(g)   62.7696   62.4264   63.5980   62.7696

and enhances the convergence performance of SCSO.

3.8. Stability analysis

In this section of the study, we analyze the stability of the various algorithms in depth using box plots. Each algorithm was tested in 50 independent runs. Three single-peak test functions, three multimodal test functions, and three fixed-dimension test functions were selected for comparison. As shown in Fig. 8, the box plot of the GSCSO algorithm is displayed side by side with those of the other compared algorithms to visualize the performance differences between them.

Fig. 8 illustrates the box plots of GSCSO and the other 9 metaheuristics, each run independently 50 times. Among them, GSCSO shows the best stability on f1, f7, f10, f12, f15, f18 and f20. On f6, GSCSO is stable apart from individual outliers. On f8, the improvement on SCSO is not significant, which is consistent with the results of the previous numerical analysis. GSCSO shows better stability in the experiment, indicating that the improvement on SCSO is effective.

4. Application of GSCSO to path planning

4.1. 2D environmental modeling

The global path planning environment model is a key tool for simulating real application scenarios, providing a simulation environment for the implementation of algorithms. The grid method has become a commonly used environment-map modeling method in path planning due to its simplicity and effectiveness, its adaptability to obstacles, and its significant reduction of environment-modeling complexity (Gong et al., 2021). In this paper, we chose the grid method to construct the working environment model, where a value of 0 represents free space, drawn as a white grid, and a value of 1 represents an obstacle area, drawn as a black grid. The start point is identified by a blue grid, while a red grid represents the target point. In order to identify the grids effectively, this paper adopts a combination of the Cartesian coordinate system and the sequence-number method (Vince, 2010), as shown in Fig. 9.

In the grid-model-based path search, an 8-neighborhood representation is used, which allows the agent to move from the current grid to eight possible neighboring grids, provided these moving directions do not encounter obstacles. The center point of each grid is set as the moving position, and the possible moving directions are shown


Fig. 14. Maps: (a) the map with 6 peaks; (b) the map with 7 peaks.

in Fig. 10.

4.2. 2D map boundaries and obstacle constraints

Once the raster map is built, the ideal paths that fulfill all the requirements need to be found in the map. Additionally, a fitness function that incorporates the constraints is created, retaining the solutions that satisfy it and eliminating those that do not.

In path planning, the position coordinates of the population are updated at each iteration to represent the route of movement. The algorithm is used to find the best path from the starting point to the goal point that satisfies the constraints in the 2D raster map.

The movement path of the robot must be restricted to the boundaries of the map, and these boundaries delimit the search space for path planning. Any node (x_i, y_i) in the robot's movement path must satisfy the following boundary conditions, as shown in Eq. (24):

    lb_x ≤ x_i ≤ ub_x,  lb_y ≤ y_i ≤ ub_y,  ∀i    (24)

where lb_x and ub_x are the lower and upper bounds of the horizontal boundary, and lb_y and ub_y are the lower and upper bounds of the vertical boundary.

At the same time, the agent cannot cross the obstacle region, and any node (x_i, y_i) must avoid every obstacle region O_j along its path, as shown in Eq. (25):

    (x_i, y_i) ∉ O_j,  ∀i, j    (25)

The movement path of the robot in the accessible area should avoid overlapping segments and detours. Assume that the robot's position coordinate at moment t is (x_t, y_t). The robot's position coordinate (x_{t+1}, y_{t+1}) at the next moment should then satisfy Eq. (26):

    x_{t+1} > x_t  or  y_{t+1} > y_t    (26)

To realize path planning, the shortest path from the starting point to the goal point must be found while satisfying the boundary constraints and the path continuity conditions. The length of the path is an important indicator of its merit, and the goal of optimization is to minimize the total path length, which can be calculated from the Euclidean distances between all neighboring nodes on the path. Specifically, the total length fit of the path is calculated as shown in Eq. (27):

    fit = Σ_{i=1}^{m} √((x_{i+1} − x_i)² + (y_{i+1} − y_i)²)    (27)

where (x_i, y_i) and (x_{i+1}, y_{i+1}) are the coordinates of two neighboring nodes on the path, and m is the total number of nodes on the path.
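To make Eqs. (24)-(27) and the 8-neighborhood movement concrete, the constraint checks and the path-length fitness can be sketched as below. This is a minimal illustration with a toy 4 × 4 grid of our own; the helper names are not from the paper.

```python
import math

# Toy raster map: 0 = free cell (white), 1 = obstacle (black).
grid = [
    [0, 0, 0, 0],
    [0, 1, 1, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 0],
]

# The 8-neighborhood moves of Fig. 10 (4 straight and 4 diagonal directions).
MOVES = [(-1, -1), (0, -1), (1, -1), (-1, 0), (1, 0), (-1, 1), (0, 1), (1, 1)]

def neighbors(x, y, grid):
    """Free cells reachable from (x, y) in one 8-neighborhood step."""
    rows, cols = len(grid), len(grid[0])
    return [(x + dx, y + dy) for dx, dy in MOVES
            if 0 <= x + dx < cols and 0 <= y + dy < rows
            and grid[y + dy][x + dx] == 0]

def is_feasible(path, grid):
    """Boundary constraint, Eq. (24), and obstacle constraint, Eq. (25)."""
    rows, cols = len(grid), len(grid[0])
    for x, y in path:
        if not (0 <= x < cols and 0 <= y < rows):
            return False
        if grid[y][x] == 1:
            return False
    return True

def path_length(path):
    """Total Euclidean length of the path, Eq. (27)."""
    return sum(math.dist(path[i], path[i + 1]) for i in range(len(path) - 1))

# A candidate route from the start (0, 0) to the goal (3, 3).
path = [(0, 0), (0, 1), (1, 2), (2, 3), (3, 3)]
feasible = is_feasible(path, grid)
fit = path_length(path)
```

A metaheuristic such as GSCSO would then minimize `path_length` over candidate paths that pass `is_feasible`.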


Fig. 15. Convergence plots: (a) the convergence plot on the 6-peak map; (b) the convergence plot on the 7-peak map.

The smaller the fit value, the shorter the path, and accordingly the path is evaluated as better or worse. To achieve path optimization, the planning objective is to minimize the path length fit. By minimizing the path length, the best path that satisfies all the constraints can be found.

4.3. 2D experimental results and analysis

Based on the results of the Friedman test, GSCSO, the original algorithm SCSO, and the two other most competitive algorithms were selected for the path planning experiments, i.e., GSCSO, GSA, HHO and SCSO. The four algorithms were tested on three 20 × 20 maps, three 30 × 30 maps, and three 40 × 40 maps; the nine raster maps are shown in Fig. 11. In order to ensure accuracy and exclude other interfering factors, all algorithms are set with common parameters. After the parameters are set, each algorithm is run independently 20 times to calculate the average value of the optimal path. The average paths obtained by the four algorithms are shown in Fig. 12, their statistics are listed in Table 7, and the optimal values are bolded in the table.

As shown in Fig. 12, on (b), (c) and (i), the four algorithms reach the same result in 50 iterations. On (h), GSA achieves the same accuracy as GSCSO at the fiftieth iteration. In all other sub-maps, the curves of GSCSO lie below the curves of the other algorithms, indicating that GSCSO can find shorter paths in these maps. These results prove that the improvement of SCSO in this paper is effective and improves the convergence accuracy of the algorithm.

As shown in Table 7, on (b), (c), and (i), all four algorithms reach the same result in 50 iterations, with mean values of 31.5563, 32.7279, and 63.0122, respectively. On (h), GSA and GSCSO have the same mean value at the fiftieth iteration, 61.4222, which is in line with the results demonstrated in Fig. 12. In the other subgraphs, GSCSO finds paths with shorter distances compared to the other algorithms.

In order to demonstrate more intuitively the ability of the GSCSO algorithm to solve the path planning problem, we selected map (a), map (d), and map (h) in Fig. 11 for optimal path drawing, as shown in Fig. 13. The optimal path comparison is shown in Table 8.

Considering that some algorithms plan the same paths, this paper plots each algorithm separately on each map to present the paths more clearly.

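The aggregation of the 20 independent runs into the mean/standard-deviation entries of Table 7 can be sketched as follows. This is illustrative only: the run values are invented, and whether the paper reports the sample or the population standard deviation is not stated, so the population form is assumed here.

```python
import statistics

def summarize(best_path_lengths):
    """Mean and standard deviation of the best path length over repeated runs,
    formatted like the entries of Table 7."""
    mean = statistics.fmean(best_path_lengths)
    std = statistics.pstdev(best_path_lengths)  # assumption: population std
    return round(mean, 4), f"{std:.4E}"

# Hypothetical best path lengths from 20 independent runs on one map.
runs = [30.3848] * 12 + [29.2132] * 8
mean, std = summarize(runs)
```

A zero standard deviation, as in columns (b), (c) and (i) of Table 7, simply means every run found the same best path length.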

Fig. 16. Optimal path maps: (a) the optimal path map on the 6-peak map; (b) the optimal path map on the 7-peak map.

4.4. 3D environmental modeling

Additionally, we perform path planning experiments in two 3D maps. Each map is a space with a length and width of 200, in which the number of peaks, their coordinates, and their slopes are defined. Fig. 14 shows the two maps, where (a) is the map with 6 peaks and (b) is the map with 7 peaks.

The objective to be optimized in this paper is to minimize the trajectory length, i.e., the length of the trajectory from the start point to the end point.

4.5. 3D experimental results and analysis

In order to demonstrate the computational efficiency and time complexity of the algorithms in complex terrain environments, the number of iterations for the experiments in the 3-dimensional environment is set to only 10, and the experiment is repeated 50 times. According to the results of the Friedman test, we choose the 9 best-ranked algorithms to conduct the experiments. We choose the best single experimental result to draw the convergence curves, as shown in Fig. 15, where (a) is the convergence curve on the 6-peak map and (b) is the convergence curve on the 7-peak map.

As shown in panel (a) of Fig. 15, the GSCSO algorithm is able to converge to a good value by the 6th iteration, with fast convergence speed and high convergence accuracy. In panel (b), it also converges to a good value by the 7th iteration. This proves that the GSCSO algorithm is highly practicable for solving the path planning problem. Its optimal routes are shown in Fig. 16, where (a) shows the optimal path on the 6-peak map and (b) shows the optimal path on the 7-peak map.

The experimental results show that GSCSO significantly improves the solving ability and optimization accuracy of the path planning problem, and demonstrates good stability and global search ability.

5. Summary and outlook

In this study, an integrated multi-strategy sand cat swarm optimization algorithm, the global sand cat swarm optimization algorithm (GSCSO), is proposed to address the problems of the sand cat swarm optimization algorithm, such as its lack of efficiency and accuracy, its tendency to fall into local optima, and its low stability in path planning. The innovations and main contributions of GSCSO can be summarized as follows. By introducing the improved Circle chaotic mapping, the distribution of the population is optimized, giving the algorithm stronger global search ability. Adopting the dynamic convergence factor of water waves


effectively expands the search range, while maintaining the diversity of the population and improving the ability of the algorithm to jump out of the local optimum. Incorporating the prismatic opposition learning strategy enhances the exploratory nature of the algorithm, effectively avoids falling into local optimal solutions, and improves the possibility of discovering the global optimal solution. Incorporating the golden sine strategy enhances the algorithm's ability to search locally, enabling it to quickly find the optimal solution in the search region. The performance of GSCSO was verified on 23 test functions and successfully applied to 9 2D path planning instances and 2 3D path planning instances. The experimental results show that, by comprehensively applying multiple strategies, the GSCSO algorithm significantly improves the solution efficiency and optimization accuracy of the path planning problem, demonstrates good stability and global search capability, and provides an effective solution to the robot path planning problem.

Although we have made significant progress, there are still many opportunities for further improvement. The literature (Guo et al., 2024) achieves global path planning and local obstacle avoidance through the fusion of the A* algorithm and the DWA algorithm. Path planning is the core of the mobile robotics field; reference (Wu et al., 2024) proposed an ant colony optimization algorithm based on farthest-point optimization and a multi-objective strategy, using a multi-objective comprehensive evaluation metric to judge the quality of a path. In future work, the improved algorithm can be combined with the Dynamic Window Approach (DWA) to improve its ability in dynamic obstacle avoidance, and multi-objective comprehensive evaluation indexes can be introduced to use path length, smoothness, and safety as constraints of a multi-objective optimization function.

Resource availability

Lead contact

Further information and requests for resources should be directed to and will be fulfilled by the lead author, Quanzeng Liu (lqz990709@163.com).

Materials availability

This study did not generate new, unique materials.

Approval of the submission

All authors and responsible authorities where the work was carried out have approved its publication.

Funding

This work was supported by the Anhui Provincial Colleges and Universities Collaborative Innovation Project (GXXT-2023–068) and the Anhui University of Science and Technology Graduate Innovation Fund Project (2023CX2086).

Inclusion and diversity

We support inclusive, diverse, and equitable conduct of research.

Declaration of generative AI and AI-assisted technologies in the writing process

During the preparation of this work, the authors used ChatGPT 3.5 in order to improve language and readability. After using this tool/service, the authors reviewed and edited the content as needed and take full responsibility for the content of the publication.

CRediT authorship contribution statement

Yourui Huang: Writing – review & editing, Funding acquisition, Conceptualization. Quanzeng Liu: Writing – review & editing, Writing – original draft, Software, Methodology. Tao Han: Visualization, Supervision, Resources. Tingting Li: Writing – review & editing, Validation, Software, Methodology, Conceptualization. Hongping Song: Supervision, Resources.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

We would like to thank the School of Electrical and Information Engineering at Anhui University of Science and Technology for providing the laboratory.

Data availability

No data was used for the research described in the article.

References

Abdollahzadeh, B., Soleimanian Gharehchopogh, F., & Mirjalili, S. (2021). Artificial gorilla troops optimizer: A new nature-inspired metaheuristic algorithm for global optimization problems. International Journal of Intelligent Systems, 36. https://doi.org/10.1002/int.22535
Arul, S. B., & Jebaselvi, G. D. A. (2024). OptiLoc: Integrating SCSO and DV-Hop for wireless sensor network localization with application to disease forecasting in cattle farm monitoring. Expert Systems with Applications, 255(Part C). https://doi.org/10.1016/j.eswa.2024.124658
Azizi, M., Talatahari, S., & Gandomi, A. H. (2023). Fire Hawk optimizer: A novel metaheuristic algorithm. Artificial Intelligence Review, 56, 287–363. https://doi.org/10.1007/s10462-022-10173-w
Cui, Y., Hu, W., & Rahmani, A. (2023). Fractional-order artificial bee colony algorithm with application in robot path planning. European Journal of Operational Research, 306, 47–64. https://doi.org/10.1016/j.ejor.2022.11.007
Dewan, I., & Prakasa Rao, B. L. S. (2005). Wilcoxon-signed rank test for associated sequences. Statistics & Probability Letters, 71, 131–142. https://doi.org/10.1016/j.spl.2004.10.034
Duan, Y., & Yu, X. (2022). A collaboration-based hybrid GWO-SCA optimizer for engineering optimization problems. Expert Systems with Applications, 213. https://doi.org/10.1016/j.eswa.2022.119017
Ekinci, S., & Izci, D. (2023). Enhancing IIR system identification: Harnessing the synergy of gazelle optimization and simulated annealing algorithms. e-Prime - Advances in Electrical Engineering, Electronics and Energy, 5. https://doi.org/10.1016/j.prime.2023.100225
Gong, S., Chen, R., Xie, Z., & Li, X. (2021). Major approaches to robot environmental modeling. In Proceedings of the 6th international symposium on computer and information processing technology (ISCIPT) (pp. 218–222). https://doi.org/10.1109/ISCIPT53667.2021.00051
Guo, H., Li, Y., Wang, H., Wang, C., Zhang, J., Wang, T., Rong, L., Wang, H., Wang, Z., Huo, Y., Guo, S., & Yang, F. (2024). Path planning of greenhouse electric crawler tractor based on the improved A* and DWA algorithms. Computers and Electronics in Agriculture, Article 109596. https://doi.org/10.1016/j.compag.2024.109596
Hashim, F. A., Houssein, E. H., Hussain, K., Mabrouk, M. S., & Al-Atabany, W. (2022). Honey Badger algorithm: New metaheuristic algorithm for solving optimization problems. Mathematics and Computers in Simulation, 192, 84–110. https://doi.org/10.1016/j.matcom.2021.08.013
Heidari, A. A., Mirjalili, S., Faris, H., Aljarah, I., Mafarja, M., & Chen, H. (2019). Harris hawks optimization: Algorithm and applications. Future Generation Computer Systems, 97, 849–872. https://doi.org/10.1016/j.future.2019.02.028
Huang, Y., Liu, Q., Song, H., Han, T., & Li, T. (2024). CMGWO: Grey wolf optimizer for fusion cell-like P systems. Heliyon, 10. https://doi.org/10.1016/j.heliyon.2024.e34496
Kennedy, J., & Eberhart, R. (1995). Particle swarm optimization. In Proceedings of the ICNN'95 - international conference on neural networks, Vol. 4 (pp. 1942–1948). https://doi.org/10.1109/ICNN.1995.488968
Kim, G., Park, S., Choi, J. G., Yang, S. M., Park, H. W., & Lim, S. (2024). Developing a data-driven system for grinding process parameter optimization using machine learning and metaheuristic algorithms. CIRP Journal of Manufacturing Science and Technology, 51, 20–35. https://doi.org/10.1016/j.cirpj.2024.04.001
Kourepinis, V., Iliopoulou, C., Tassopoulos, I., & Beligiannis, G. (2024). An artificial fish swarm optimization algorithm for the urban transit routing problem. Applied Soft Computing, 155. https://doi.org/10.1016/j.asoc.2024.111446
Leng, Y. J., Zhang, H., & Li, X. S. (2024). A novel evaluation method for renewable energy development based on improved sparrow search algorithm and projection pursuit model. Expert Systems with Applications, 244. https://doi.org/10.1016/j.eswa.2023.122991
Li, Q., Huang, Z., Jiang, W., Tang, Z., & Song, M. (2024a). Quantum algorithms using infeasible solution constraints for collision-avoidance route planning. IEEE Transactions on Consumer Electronics, 1-1. https://doi.org/10.1109/TCE.2024.3476156
Li, J., Xiao, K., Zhang, H., Hua, L., & Gu, J. (2024b). Identification of multiple-input and single-output Hammerstein controlled autoregressive moving average system based on chaotic dynamic disturbance sand cat swarm optimization. Engineering Applications of Artificial Intelligence, 133(B). https://doi.org/10.1016/j.engappai.2024.108188
Lian, J., Hui, G., Ma, L., Zhu, T., Wu, X., Heidari, A. A., Chen, Y., & Chen, H. (2024). Parrot optimizer: Algorithm and applications to medical problems. Computers in Biology and Medicine, 172. https://doi.org/10.1016/j.compbiomed.2024.108064
Liang, W., Lou, M., Chen, Z., Qin, H., Zhang, C., Cui, C., & Wang, Y. (2024). An enhanced ant colony optimization algorithm for global path planning of deep-sea mining vehicles. Ocean Engineering, 301. https://doi.org/10.1016/j.oceaneng.2024.117415
Lin, B., Zhao, Y., Lin, R., & Liu, C. (2021). Integrating traffic routing optimization and train formation plan using simulated annealing algorithm. Applied Mathematical Modelling, 93, 811–830. https://doi.org/10.1016/j.apm.2020.12.031
Lu, Y., Wang, W., Bai, R., Zhou, S., Garg, L., Bashir, A., Jiang, W., & Hu, X. (2024). Hyper-relational interaction modeling in multi-modal trajectory prediction for intelligent connected vehicles in smart cities. Information Fusion, 114. https://doi.org/10.1016/j.inffus.2024.102682
Mirjalili, S., Mirjalili, S. M., & Lewis, A. (2014). Grey Wolf optimizer. Advances in Engineering Software, 69, 46–61. https://doi.org/10.1016/j.advengsoft.2013.12.007
Mirjalili, S., Gandomi, A. H., Mirjalili, S. Z., Saremi, S., Faris, H., & Mirjalili, S. M. (2017). Salp swarm algorithm: A bio-inspired optimizer for engineering design problems. Advances in Engineering Software, 114, 163–191. https://doi.org/10.1016/j.advengsoft.2017.07.002
Nguyen, V. T., Nguyen, N. H., & Heidari, A. A. (2024). Feature selection using metaheuristics made easy: Open source MAFESE library in Python. Future Generation Computer Systems, 160, 340–358. https://doi.org/10.1016/j.future.2024.06.006
Niu, Y., Yan, X., Wang, Y., & Niu, Y. (2024). 3D real-time dynamic path planning for UAV based on improved interfered fluid dynamical system and artificial neural network. Advanced Engineering Informatics, 59. https://doi.org/10.1016/j.aei.2023.102306
Röhmel, J. (1997). The permutation distribution of the Friedman test. Computational Statistics & Data Analysis, 26, 83–99. https://doi.org/10.1016/S0167-9473(97)00019-4
Sahu, V., Samal, P., & Panigrahi, C. K. (2023). Tyrannosaurus optimization algorithm: A new nature-inspired meta-heuristic algorithm for solving optimal control problems. e-Prime - Advances in Electrical Engineering, Electronics and Energy, 5. https://doi.org/10.1016/j.prime.2023.100243
Seyyedabbasi, A., & Kiani, F. (2023). Sand Cat swarm optimization: A nature-inspired algorithm to solve global optimization problems. Engineering with Computers, 39, 2627–2651. https://doi.org/10.1007/s00366-022-01604-x
Seyyedabbasi, A., Kiani, F., Allahviranloo, T., Fernandez-Gamiz, U., & Noeiaghdam, S. (2023). Optimal data transmission and pathfinding for WSN and decentralized IoT systems using I-GWO and ex-GWO algorithms. Alexandria Engineering Journal, 63, 339–357. https://doi.org/10.1016/j.aej.2022.08.009
Shi, X., & Li, M. (2019). Whale optimization algorithm improved effectiveness analysis based on compound chaos optimization strategy and dynamic optimization parameters. In Proceedings of the international conference on virtual reality and intelligent systems (ICVRIS) (pp. 338–341). https://doi.org/10.1109/ICVRIS.2019.00088
Song, L., Chen, W., Chen, W., et al. (2023). Improvement and application of hybrid strategy-based sparrow search algorithm. Journal of Beijing University of Aeronautics and Astronautics, 49(8), 2187–2199. https://doi.org/10.13195/j.kzyjc.2019.1362
Sung, I., Choi, B., & Nielsen, P. (2021). On the training of a neural network for online path planning with offline path planning algorithms. International Journal of Information Management, 57. https://doi.org/10.1016/j.ijinfomgt.2020.102142
Tanyildizi, E., & Demir, G. (2017). Golden sine algorithm: A novel math-inspired algorithm. Advances in Electrical and Computer Engineering, 17, 71–78. https://doi.org/10.4316/AECE.2017.02010
Vince, J. (2010). Cartesian coordinates. Mathematics for computer graphics. Undergraduate topics in computer science. London: Springer. https://doi.org/10.1007/978-1-84996-023-6_5
Wahab, M., Nazir, A., Khalil, A., Wong, J. H., Akbar, M. F., Noor, M., & Mohamed, A. (2024). Improved genetic algorithm for mobile robot path planning in static environments. Expert Systems with Applications, 249(Part C). https://doi.org/10.1016/j.eswa.2024.123762
Wang, Z., Sun, G., Zhou, K., & Zhu, L. (2023). A parallel particle swarm optimization and enhanced sparrow search algorithm for unmanned aerial vehicle path planning. Heliyon, 9(4), e14784. https://doi.org/10.1016/j.heliyon.2023.e14784
Wang, Z. H., Ren, X. Y., Cui, H. J., Wang, W. Q., Liu, J., & He, Z. F. (2024). A multi-stage two-layer stochastic design model for integrated energy systems considering multiple uncertainties. Energy, 304. https://doi.org/10.1016/j.energy.2024.131729
Wolpert, D. H., & Macready, W. G. (1997). No free lunch theorems for optimization. IEEE Transactions on Evolutionary Computation, 1, 67–82. https://doi.org/10.1109/4235.585893
Wu, S., Dong, A., Li, Q., Wei, W., Zhang, Y., & Ye, Z. (2024). Application of ant colony optimization algorithm based on farthest point optimization and multi-objective strategy in robot path planning. Applied Soft Computing, Article 112433. https://doi.org/10.1016/j.asoc.2024.112433
Xue, J., & Shen, B. (2023). Dung beetle optimizer: A new meta-heuristic algorithm for global optimization. The Journal of Supercomputing, 79, 7305–7336. https://doi.org/10.1007/s11227-022-04959-6
Yang, B., Wu, L., Xiong, J., Zhang, Y., & Chen, L. (2023). Location and path planning for urban emergency rescue by a hybrid clustering and ant colony algorithm approach. Applied Soft Computing, 147. https://doi.org/10.1016/j.asoc.2023.110783
Yu, X., & Luo, W. (2023). Reinforcement learning-based multi-strategy cuckoo search algorithm for 3D UAV path planning. Expert Systems with Applications, 223. https://doi.org/10.1016/j.eswa.2023.119910
Yu, X., Jiang, N., Wang, X., & Li, M. (2023). A hybrid algorithm based on grey wolf optimizer and differential evolution for UAV path planning. Expert Systems with Applications, 215. https://doi.org/10.1016/j.eswa.2022.119327
Yu, F., Guan, J., Wu, H., Chen, Y., & Xia, X. (2024). Lens imaging opposition-based learning for differential evolution with cauchy perturbation. Applied Soft Computing, 152. https://doi.org/10.1016/j.asoc.2023.111211
Zheng, Y. J. (2015). Water wave optimization: A new nature-inspired metaheuristic. Computers & Operations Research, 55, 1–11. https://doi.org/10.1016/j.cor.2014.10.008
