
Chaos, Solitons and Fractals 192 (2025) 116049

Contents lists available at ScienceDirect
Journal homepage: www.elsevier.com/locate/chaos

Chaotic evolution optimization: A novel metaheuristic algorithm inspired by chaotic dynamics

Yingchao Dong a, Shaohua Zhang b,*, Hongli Zhang b, Xiaojun Zhou c, Jiading Jiang a

a School of Energy Engineering, Xinjiang Institute of Engineering, Urumqi, Xinjiang 830023, China
b School of Electrical Engineering, Xinjiang University, Urumqi, Xinjiang 830017, China
c School of Automation, Central South University, Changsha 410083, China

A R T I C L E  I N F O

Keywords: Chaotic evolution optimization; Metaheuristic; Optimization; Discrete memristor; Hyperchaos

A B S T R A C T

In this paper, a novel population-based metaheuristic algorithm inspired by chaotic dynamics, called chaotic evolution optimization (CEO), is proposed. The main inspiration for CEO is derived from the chaotic evolution process of a two-dimensional discrete memristive map. By leveraging the hyperchaotic properties of the memristive map, the CEO algorithm is mathematically modeled to introduce random search directions for evolutionary processes. Then, CEO is developed by integrating the crossover and mutation operations from the differential evolution (DE) framework. The proposed algorithm is evaluated by conducting experiments on 15 benchmark test problems and a sensor network localization problem, comparing its performance with 12 other metaheuristic algorithms. Experimental results demonstrate that CEO exhibits highly promising and competitive performance in comparison to widely used, classical, and well-established metaheuristic algorithms. Moreover, CEO effectively addresses the zero-bias problem observed in many recently proposed algorithms. The source code for the CEO algorithm will be publicly available at: https://github.com/Running-Wolf1010/CEO.

1. Introduction

Optimization algorithms have played a crucial role in solving complex, nonlinear, and multimodal problems across various fields, including engineering, economics, and artificial intelligence [1]. However, in practice, these problems often exhibit multimodal, discontinuous, non-convex, or high-dimensional characteristics. Traditional mathematical optimization methods, such as gradient descent, sequential quadratic programming, and quasi-Newton methods, tend to show limitations when dealing with such challenges [2]. These approaches rely heavily on the differentiability and convexity of the objective function, making them prone to getting trapped in local optima and inefficient when handling high-dimensional or multimodal problems [3]. In contrast, metaheuristic algorithms, a branch of optimization techniques, have emerged as valuable tools due to their independence from derivative information and their relatively relaxed requirements regarding the nature of the optimization problem [4]. Additionally, they offer robust global search capabilities, making them well-suited for addressing complex optimization tasks.

Since the 1970s, metaheuristic algorithms have experienced nearly three decades of a golden era, marked by the development and widespread application of numerous classical optimization algorithms. In 1975, Holland introduced the genetic algorithm (GA) [5], which evolved into an important family of algorithms known as evolutionary algorithms. In 1983, Kirkpatrick proposed the simulated annealing (SA) [6] algorithm based on the Metropolis criterion. In 1992, Dorigo, in his doctoral thesis, introduced the ant colony optimization (ACO) [7,8] algorithm, which demonstrated remarkable performance in solving combinatorial optimization problems. In 1995, Eberhart and colleagues, inspired by the social behavior and flight patterns of birds, proposed the particle swarm optimization (PSO) [9] algorithm. These foundational algorithms laid a critical groundwork for subsequent research in metaheuristic optimization.

In the 21st century, the golden era of metaheuristic algorithm development has gradually passed, but various new algorithms continue to emerge, particularly those inspired by natural processes. Examples include the grey wolf optimizer (GWO) [10], whale optimization algorithm (WOA) [11], bat algorithm (BA) [12], ant lion optimizer (ALO) [13], firefly algorithm (FA) [14], Harris hawks optimization (HHO) [15], and Runge-Kutta optimizer (RUN) [16]. However, as the number of nature-inspired algorithms has surged in recent years, researchers have observed that despite their diverse designs, many of these algorithms

* Corresponding author.
E-mail address: [email protected] (S. Zhang).

https://doi.org/10.1016/j.chaos.2025.116049
Received 28 September 2024; Received in revised form 11 December 2024; Accepted 19 January 2025
Available online 29 January 2025
0960-0779/© 2025 Elsevier Ltd. All rights are reserved, including those for text and data mining, AI training, and similar technologies.

exhibit "core" operational similarities, often concealed behind different models [17–21]. This has led to calls from the evolutionary computation community, with many scholars expressing concern that most newly proposed algorithms are hidden behind metaphor-rich terminology and lack genuine innovation [22].

Despite the extensive number of metaheuristic algorithms that have been proposed and applied to various optimization tasks, several common issues and challenges persist. For instance, many algorithms are prone to premature convergence or stagnation during the search process, where they become trapped in local optima or stall without fully exploring the entire search space. Additionally, as pointed out by Jakub Kudela, most newly proposed metaheuristic algorithms, such as GWO, HHO, the slime mould algorithm (SMA) [23], and RUN, suffer from a significant zero-bias problem. Specifically, these algorithms perform exceptionally well when the optimal solution of a test problem is zero, but their performance degrades significantly when the optimal solution deviates from zero [24,25].

In Swarm Intelligence, Kennedy, the creator of PSO, emphasized that "the degree of randomness determines the level of intelligence", highlighting the essence of intelligence as rooted in randomness [26]. This insight offers guidance on addressing the issues of local-optima entrapment and stagnation in metaheuristic algorithms. Given that chaos exhibits strong randomness and unpredictability, many studies have introduced chaotic operators to enhance global search capabilities, achieving promising results. For example, Liu et al. [27] incorporated chaotic search into PSO, proposing the chaotic PSO (CPSO), and demonstrated its advantages over basic PSO. Kumar et al. [28] explored the impact of 10 chaotic maps on the search performance of the marine predators algorithm (MPA) [29]. Kaur et al. [30] introduced a chaotic map into WOA to develop the chaotic WOA (CWOA). Altay [31] applied a chaotic map to the SMA, accelerating its global convergence rate. Raj et al. [32] integrated the logistic map into the sine cosine algorithm (SCA) [33] to improve its search capabilities. However, previous studies have mainly used traditional one-dimensional chaotic maps to enhance the search capabilities of metaheuristic algorithms. In these approaches, individuals evolve independently during chaotic searches, ignoring collaborative interactions within the population.

Inspired by the studies in Refs. [27–33], a novel population-based metaheuristic algorithm, called chaotic evolution optimization (CEO), is proposed in this paper. Specifically, the CEO algorithm leverages the hyperchaotic map of a two-dimensional discrete memristive system to guide the search directions of two individuals simultaneously, which forms the basis for the "Chaotic" aspect of the CEO name. Compared to traditional one-dimensional chaotic maps, the hyperchaotic map considers interactions among individuals and incorporates memory functionality. This enhances search diversity and allows the population to explore more promising regions of the search space. Moreover, CEO is classified as an evolutionary algorithm because it incorporates the mutation, crossover, and selection operations from the differential evolution (DE) [34] framework. Although CEO follows the overall structure of DE, making it a specialized form of DE, there are fundamental distinctions between CEO and traditional DE, which are outlined in the following three aspects:

(1) The mutation operator in CEO differs from that in DE. CEO utilizes the sequence of the memristive hyperchaotic map to guide population evolution, providing more random evolution directions than DE. This helps avoid the issue encountered in DE where the differential term approaches zero in later stages of evolution, leading to local-optima entrapment or stagnation.

(2) In DE, each individual has only a single evolution direction. In CEO, by contrast, each individual can generate multiple chaotic evolution directions. This enhancement significantly improves the algorithm's ability to explore and exploit the current individual, thus increasing the likelihood of finding the global optimal solution.

(3) In DE, the crossover control parameter Cr is typically a fixed value within the interval [0,1]. In contrast, CEO does not maintain a fixed Cr value but instead randomly selects a value from the interval [0,1] at each iteration. Additionally, CEO discards the fixed scaling factor F used in DE, replacing it with a random number from the interval [0,1]. These modifications introduce greater randomness into the crossover control parameter and scaling factor, thereby enhancing search diversity, reducing the risk of getting trapped in local optima, and simplifying the algorithm by reducing the number of parameters, thus making it easier to use.

The remainder of this paper is organized as follows. Section 2 introduces the background and inspiration behind the development of CEO. Section 3 presents the mathematical model and computational process of the CEO algorithm. Section 4 showcases the experimental results of CEO on 15 benchmark test problems and a sensor network localization problem, with comparative discussions against various algorithms. Finally, Section 5 concludes the work and provides directions for future research.

2. Background

Many dynamic evolutionary phenomena in real life exhibit chaos, which can be utilized to uncover the objective laws of nature and facilitate the rapid advancement of industrial applications. Chaotic behavior is characterized by dense periodic orbits and exhibits properties such as sensitivity to initial conditions, topological mixing, and unpredictability [35]. Current research is keenly focused on the construction of diverse chaotic dynamical systems, facilitating the broad application of chaos across numerous industrial fields. For example, leveraging the randomness inherent in chaotic systems, researchers have developed various secure communication strategies [36], metaheuristic algorithms [31], and privacy protection systems for the IoT [37]. Thus, it is evident that chaotic sequences play an irreplaceable and significant role in industrial processes.

However, with advances in dynamical analysis techniques and the development of artificial intelligence, chaotic degradation has gradually been discovered, and chaotic evolution can now be accurately predicted, which undoubtedly brings great risks to chaos-based applications. Thus, a phenomenon known as hyperchaos, which is more complex than chaos itself and is characterized by two positive Lyapunov exponents (LEs), has received considerable attention. Undeniably, realizing hyperchaos in a continuous system comes at the expense of high dimensionality and large analog circuits, which is inconsistent with the requirements of practical applications. Therefore, realizing hyperchaos within a discrete map represents a reliable path, as it requires only two dimensions, whereas continuous dynamical systems require a minimum of four dimensions [38].

In 1971, the memristor, the fourth nonlinear electronic component, was proposed by Chua based on circuit symmetry theory [39]. In recent years, ongoing explorations have unveiled the nonlinear, nanoscale, and biomimetic characteristics of memristors, thereby facilitating the development of chaotic and hyperchaotic systems. As its latest variant, the discrete memristor is widely employed in the construction of hyperchaotic maps. At the application level, discrete memristive maps have been reported in various fields, including optical communication [40], geolocation grid encryption [41], and reservoir computing systems [42]. However, the development of high-performance metaheuristic algorithms utilizing the discrete memristor hyperchaotic map remains a novel area of research. A charge-controlled discrete memristor model has been formulated as:

$$\begin{cases} v_t = M(q_t)\, i_t \\ q_{t+1} = q_t + i_t \end{cases} \tag{1}$$


Fig. 1. The bifurcation diagram and LEs curve controlled by the parameter $k$ in the E-DM map with $k \in [2.2, 2.8]$ and $(x_0, y_0) = (-0.5, 0.4)$.

Fig. 2. For $k = 2.66$, the local basin of attraction exhibited by the E-DM map in the $x_0$–$y_0$ plane with $x_0 \in [-2, 2]$ and $y_0 \in [-1, 1]$.

Fig. 3. When $(x_0, y_0) = (-0.5, 0.4)$, the phase portrait in the $x$–$y$ plane is illustrated.

Fig. 4. When $(x_0, y_0) = (-0.5, 0.4)$, the time series concerning states $x$ and $y$ are illustrated.

where $v_t$, $i_t$, and $q_t$ represent the sampled values of the voltage $v(t)$, current $i(t)$, and charge $q(t)$ at the $t$-th iteration of the continuous memristor. In Eq. (1), the discrete memductance equation is deliberately chosen as $M(q_t) = e^{-\cos \pi q_t} - 1$, thereby deriving the exponential discrete memristor. According to the literature [43], when the voltage $v_t$ is processed through a proportional controller $k$ and utilized as delayed feedback input, a unified memristive map can be obtained. Essentially, by substituting the output $v_t$ and input $i_t$ in Eq. (1) with $x_{t+1}$ and $x_t$, respectively, an exponential discrete memristor (E-DM) map is constructed as:

$$\begin{cases} x_{t+1} = k \cdot (e^{-\cos \pi y_t} - 1) \cdot x_t \\ y_{t+1} = y_t + x_t \end{cases} \tag{2}$$

When the initial conditions are set to $(x_0, y_0) = (-0.5, 0.4)$ and $k$ is varied within the interval $[2.2, 2.8]$, the bifurcation points $x$ and $y$ of the E-DM map are extracted using the maximum value method, as depicted by the brown and cyan point sets in the lower section of Fig. 1. Next, the LEs are computed using the QR decomposition method and are represented by the red and dark blue curves in the upper section of Fig. 1. The bifurcation diagram reveals that the E-DM map undergoes forward period-doubling bifurcations at $k = 2.28$, $k = 2.44$, $k = 2.476$, and $k = 2.483$, resulting in period-1, period-2, period-4, and period-8 behaviors. During this process, neither LE1 nor LE2 exceeds 0. However, the E-DM map enters a wide chaotic bifurcation interval at $k = 2.486$, characterized by a dense set of bifurcation points. In the interval $k \in [2.486, 2.77]$, the presence of two positive LEs identifies hyperchaotic behavior, whereas a single positive LE corresponds to chaotic behavior. Interestingly, there are several narrow periodic windows in the chaotic bifurcation interval, and the map exits chaos and enters single-period operation at $k = 2.77$ through a crisis scenario. Therefore, the bifurcation diagram and LEs curve demonstrate the complex dynamical behaviors induced by parameter variations in the E-DM map.

Further, $k = 2.66$ is selected according to the bifurcation diagram, and an initial dynamical analysis of the E-DM map is conducted. Using the initial conditions $x_0$ and $y_0$ as control parameters, the dynamical attraction region of the E-DM map is calculated in the $x_0$–$y_0$ plane. By calculating the LEs and the number of periodic cycles, the
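The dynamics described above are easy to reproduce numerically. The following Python sketch (our illustration, not part of the paper's released code) iterates the E-DM map of Eq. (2) with the parameter values quoted in the text, $k = 2.66$ and $(x_0, y_0) = (-0.5, 0.4)$, and checks the sensitivity to initial conditions that the positive LEs imply:

```python
import numpy as np

def e_dm_map(x, y, k=2.66):
    """One iteration of the E-DM map (Eq. (2))."""
    return k * (np.exp(-np.cos(np.pi * y)) - 1.0) * x, y + x

def trajectory(x0, y0, steps, k=2.66):
    """Iterate the map `steps` times and return the x and y time series."""
    xs, ys = [x0], [y0]
    x, y = x0, y0
    for _ in range(steps):
        x, y = e_dm_map(x, y, k)
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)

# Two trajectories whose initial x differs by only 1e-10: since the map is
# hyperchaotic at k = 2.66, they should decorrelate after some hundreds of steps.
xa, ya = trajectory(-0.5, 0.4, 2000)
xb, yb = trajectory(-0.5 + 1e-10, 0.4, 2000)
```

Plotting `xa` against `ya` should reproduce the kind of phase portrait shown in Fig. 3; the rapid divergence of `xa` and `xb` is the property CEO later exploits as a source of random search directions.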


hyperchaos (HCH), stable point (SP), and divergent (DI) attractors are respectively marked in green, yellow, and blue, thereby rendering a local basin of attraction with a singular fractal boundary in the $x_0$–$y_0$ plane, as illustrated in Fig. 2. Within this basin, the parameter domains of different attractors can be directly identified. For instance, the robust hyperchaotic attractor can be obtained within the region $x_0 \in [-0.5, 0.5]$ and $y_0 \in [-0.4, 0.4]$. When $(x_0, y_0) = (-0.5, 0.4)$ is selected, Fig. 3 presents two separated hyperchaotic phase point sets and illustrates the complex evolutionary pathway of the hyperchaotic attractor. Simultaneously, the state variables corresponding to each iteration are represented by the time series shown in Fig. 4. It is evident that the iterative processes of the sequences $x$ and $y$ are random and unpredictable, which is particularly beneficial for providing evolutionary directions in metaheuristic algorithms. This enhances the randomness of the search and increases the probability of discovering the global optimal solution.

3. Chaotic evolution optimization (CEO)

In this section, we mathematically model the proposed CEO algorithm. The overall framework of CEO is similar to the DE algorithm, including mutation, crossover, and selection operations. The key difference lies in the design of the mutation operator, where CEO employs a two-dimensional discrete memristive hyperchaotic map to provide the mutation direction for each individual.

3.1. Mutation operation

CEO, as a population-based evolutionary algorithm, utilizes the following unified search framework for its mutation operator:

$$\tilde{x}_{t+1} = x_t + a \cdot d_t \tag{3}$$

where $x_t$ and $\tilde{x}_{t+1}$ are the current individual and the mutated individual, respectively; $a$ represents the search step size; and $d_t$ is the evolution direction, which is generated by the dynamical map of Eq. (2).

The main idea of CEO is to utilize the hyperchaotic properties of the two-dimensional memristive hyperchaotic map in Eq. (2) to provide evolutionary directions for the population. It is important to emphasize that, for the proposed CEO algorithm to effectively exploit the hyperchaotic characteristics of Eq. (2), the two individuals $x_t$ and $y_t$ selected from the population must be mapped to the ranges $[-0.5, 0.5]$ and $[-0.25, 0.25]$, respectively, as specified in Eq. (4):

$$\begin{cases} x'_t = \dfrac{x_t - lb}{ub - lb} - 0.5 \\[2mm] y'_t = \dfrac{y_t - lb}{ub - lb} \times 0.5 - 0.25 \end{cases} \tag{4}$$

where $x'_t$ and $y'_t$ are the chaotic initial positions after mapping, with values in the intervals $[-0.5, 0.5]$ and $[-0.25, 0.25]$, respectively; $lb$ and $ub$ are the lower and upper bounds of the current population variables, respectively.

$N$ chaotic candidate individuals $x\_chaos$ and $y\_chaos$ can be generated by applying Eq. (2) to the individuals $x'_t$ and $y'_t$. The specific pseudocode is as follows:

1: for n ← 1 to N do
2:   x_chaos(n, :) ← k·(e^(−cos π·y't) − 1)·x't
3:   y_chaos(n, :) ← y't + x't
4: end for

In the aforementioned pseudocode, $N$ represents the number of chaotic samples. Thus, by applying Eq. (2), $N$ chaotic candidate individuals $x\_chaos = \{x\_chaos^1, \ldots, x\_chaos^N\}$ and $y\_chaos = \{y\_chaos^1, \ldots, y\_chaos^N\}$ can be generated. These chaotic individuals can then be mapped back to the actual positions $x\_chaos' = \{x\_chaos^{1\prime}, \ldots, x\_chaos^{N\prime}\}$ and $y\_chaos' = \{y\_chaos^{1\prime}, \ldots, y\_chaos^{N\prime}\}$ of the optimization problem through the inverse mapping described in Eq. (5):

$$\begin{cases} x\_chaos^{n\prime} = (x\_chaos^n + 0.5) \times (ub - lb) + lb \\ y\_chaos^{n\prime} = (y\_chaos^n + 0.25) \times 2 \times (ub - lb) + lb \end{cases} \tag{5}$$

Obviously, based on $x\_chaos^{n\prime}$ and $y\_chaos^{n\prime}$, $N$ evolutionary directions can be generated for the individuals $x_t$ and $y_t$, as shown in Eq. (6):

$$\begin{cases} d^n_{x,t} = x\_chaos^{n\prime} - x_t \\ d^n_{y,t} = y\_chaos^{n\prime} - y_t \end{cases} \tag{6}$$

where $n = \{1, 2, \ldots, N\}$; $d^n_{x,t}$ and $d^n_{y,t}$ are the evolutionary directions of $x_t$ and $y_t$, respectively.

In summary, by combining Eqs. (3) and (6), the mutation operator of CEO can be derived, as shown in Eq. (7):

$$\begin{cases} \tilde{x}^n_{t+1} = x_t + a \cdot (x\_chaos^{n\prime} - x_t) \\ \tilde{y}^n_{t+1} = y_t + a \cdot (y\_chaos^{n\prime} - y_t) \end{cases} \tag{7}$$

where $a$ is the search step size, also known as the scale factor, which is a random number on the interval [0,1]. Obviously, $N$ mutant individuals can be generated for each of $x_t$ and $y_t$.

It can be seen from Eq. (7) that this operator has strong global exploration capabilities, but it may lead to slow convergence. Therefore, to further improve the local exploitation capabilities of the algorithm, Eq. (8) is used to search near the best solution in the current population and thereby accelerate convergence:

$$\begin{cases} \tilde{x}^n_{t+1} = Best_t + a \cdot (x\_chaos^{n\prime} - x_t) \\ \tilde{y}^n_{t+1} = Best_t + a \cdot (y\_chaos^{n\prime} - y_t) \end{cases} \tag{8}$$

where $Best_t$ is the best solution in the current population.

3.2. Crossover operation

After mutation, a binomial crossover operator is applied to $(x_t, \tilde{x}^n_{t+1})$ and $(y_t, \tilde{y}^n_{t+1})$ respectively to generate the trial vectors $x\_trial^n_t = (x\_trial^n_{1,t}, x\_trial^n_{2,t}, \ldots, x\_trial^n_{Dim,t})$ and $y\_trial^n_t = (y\_trial^n_{1,t}, y\_trial^n_{2,t}, \ldots, y\_trial^n_{Dim,t})$. Taking $(x_t, \tilde{x}^n_{t+1})$ as an example, the specific crossover process is shown in Eq. (9):

$$x\_trial^n_{j,t} = \begin{cases} \tilde{x}^n_{j,t+1}, & \text{if } (rand_j(0,1] \le Cr) \text{ or } (j = j_{rand}) \\ x_{j,t}, & \text{otherwise} \end{cases} \tag{9}$$

where $Dim$ is the dimension of the optimization problem, $j = 1, 2, \ldots, Dim$; $j_{rand}$ is an integer randomly selected from the interval $[1, Dim]$; $rand_j(0,1]$ is a random number generated for each $j$ and evenly distributed between 0 and 1; and $Cr$ is the crossover control parameter. In CEO, the value of $Cr$ is a random number in [0,1] for each iteration.

3.3. Selection operation

For $x_t$ and $y_t$, $N$ trial vectors $x\_trial^n_t$ and $y\_trial^n_t$ can be generated, respectively. CEO adopts a greedy criterion to select among the generated trial vectors. The selection operators are shown in Eqs. (10) and (11):

$$x_{t+1} = \begin{cases} x\_trial^*_t, & \text{if } f(x\_trial^*_t) \le f(x_t) \\ x_t, & \text{otherwise} \end{cases} \tag{10}$$
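The crossover rule of Eq. (9) and the greedy selection of Eqs. (10) and (11) are standard DE-style operators. A minimal Python sketch (our illustration, with hypothetical helper names, not the authors' code) is:

```python
import numpy as np

def binomial_crossover(parent, mutant, cr, rng):
    """Binomial crossover (Eq. (9)): take each gene from the mutant with
    probability cr, and force at least one mutant gene via j_rand."""
    mask = rng.random(parent.size) <= cr
    mask[rng.integers(parent.size)] = True  # j_rand guarantees one mutant gene
    return np.where(mask, mutant, parent)

def greedy_select(trial, current, func):
    """Greedy selection (Eqs. (10)-(11)): keep the trial iff it is no worse."""
    return trial if func(trial) <= func(current) else current

rng = np.random.default_rng(1)
# With an all-zeros parent and an all-ones mutant, the trial vector mixes
# the two, but always contains at least one gene from the mutant.
trial = binomial_crossover(np.zeros(5), np.ones(5), cr=0.5, rng=rng)
```

Because `j_rand` forces one mutant gene, the trial vector can never be identical to the parent, which keeps the search from silently stalling.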


Fig. 5. Flowchart of the proposed CEO algorithm.

Table 1
Fifteen well-known benchmark functions.

Spherical: $f(x) = \sum_{i=1}^{n} x_i^2$; range $[-100, 100]$; $x^* = [0, 0, \ldots]$; $f(x^*) = 0$

Schwefel 2.22: $f(x) = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i|$; range $[-10, 10]$; $x^* = [0, 0, \ldots]$; $f(x^*) = 0$

Schwefel 1.2: $f(x) = \sum_{i=1}^{n} \big( \sum_{j=1}^{i} x_j \big)^2$; range $[-100, 100]$; $x^* = [0, 0, \ldots]$; $f(x^*) = 0$

Rosenbrock: $f(x) = \sum_{i=1}^{n-1} \big[ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \big]$; range $[-30, 30]$; $x^* = [1, 1, \ldots]$; $f(x^*) = 0$

Schwefel 2.4: $f(x) = \sum_{i=1}^{n} \big[ (x_i - 1)^2 + (x_1 - x_i^2)^2 \big]$; range $[0, 10]$; $x^* = [1, 1, \ldots]$; $f(x^*) = 0$

High conditioned elliptic: $f(x) = \sum_{i=1}^{n} (10^6)^{\frac{i-1}{n-1}} x_i^2$; range $[-100, 100]$; $x^* = [0, 0, \ldots]$; $f(x^*) = 0$

Tablet: $f(x) = 10^6 x_1^2 + \sum_{i=2}^{n} x_i^6$; range $[-100, 100]$; $x^* = [0, 0, \ldots]$; $f(x^*) = 0$

Zakharov: $f(x) = \sum_{i=1}^{n} x_i^2 + \big( \sum_{i=1}^{n} 0.5\, i\, x_i \big)^2 + \big( \sum_{i=1}^{n} 0.5\, i\, x_i \big)^4$; range $[-5, 10]$; $x^* = [0, 0, \ldots]$; $f(x^*) = 0$

Penalized 1: $f(x) = \frac{\pi}{n} \big\{ 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \big[ 1 + 10 \sin^2(\pi y_{i+1}) \big] + (y_n - 1)^2 \big\} + \sum_{i=1}^{n} u(x_i, 10, 100, 4)$, with $y_i = 1 + \frac{1}{4}(x_i + 1)$; range $[-50, 50]$; $x^* = [-1, -1, \ldots]$; $f(x^*) = 0$

Penalized 2: $f(x) = 0.1 \big\{ \sin^2(3\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 \big[ 1 + \sin^2(3\pi x_{i+1}) \big] + (x_n - 1)^2 \big[ 1 + \sin^2(2\pi x_n) \big] \big\} + \sum_{i=1}^{n} u(x_i, 5, 100, 4)$; range $[-50, 50]$; $x^* = [1, 1, \ldots]$; $f(x^*) = 0$

Ackley: $f(x) = 20 + e - 20 \exp\big( -0.2 \sqrt{\tfrac{1}{n} \sum_{i=1}^{n} x_i^2} \big) - \exp\big( \tfrac{1}{n} \sum_{i=1}^{n} \cos(2\pi x_i) \big)$; range $[-32, 32]$; $x^* = [0, 0, \ldots]$; $f(x^*) = 0$

Griewank: $f(x) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\big( \frac{x_i}{\sqrt{i}} \big) + 1$; range $[-600, 600]$; $x^* = [0, 0, \ldots]$; $f(x^*) = 0$

Rastrigin: $f(x) = \sum_{i=1}^{n} \big[ x_i^2 - 10 \cos(2\pi x_i) + 10 \big]$; range $[-5.12, 5.12]$; $x^* = [0, 0, \ldots]$; $f(x^*) = 0$

Levy and Montalvo 1: $f(x) = \frac{\pi}{n} \big( 10 \sin^2(\pi y_1) + \sum_{i=1}^{n-1} (y_i - 1)^2 \big[ 1 + 10 \sin^2(\pi y_{i+1}) \big] + (y_n - 1)^2 \big)$, with $y_i = 1 + \frac{1}{4}(x_i + 1)$; range $[-10, 10]$; $x^* = [-1, -1, \ldots]$; $f(x^*) = 0$

Levy and Montalvo 2: $f(x) = 0.1 \big( \sin^2(3\pi x_1) + \sum_{i=1}^{n-1} (x_i - 1)^2 \big( 1 + \sin^2(3\pi x_{i+1}) \big) + (x_n - 1)^2 \big( 1 + \sin^2(2\pi x_n) \big) \big)$; range $[-5, 5]$; $x^* = [1, 1, \ldots]$; $f(x^*) = 0$

The penalty function used in Penalized 1 and Penalized 2 is

$$u(x_i, a, k, m) = \begin{cases} k (x_i - a)^m, & x_i > a \\ 0, & -a \le x_i \le a \\ k (-x_i - a)^m, & x_i < -a \end{cases}$$

Note: $x^*$, optimal solution; $f(x^*)$, the optimal function value.
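The entries in Table 1 are straightforward to implement, and shifting the optimum away from the origin is the standard probe for the zero-bias problem discussed in the introduction. A small Python sketch (our illustration; the shift value is an arbitrary example):

```python
import numpy as np

def rastrigin(x):
    """Rastrigin function from Table 1; global minimum f([0, ..., 0]) = 0."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def shifted(func, shift):
    """Move the optimum from the origin to `shift` (a zero-bias probe)."""
    shift = np.asarray(shift, dtype=float)
    return lambda x: func(np.asarray(x, dtype=float) - shift)

# The shifted function attains its minimum at the shift vector, not at zero.
f_shifted = shifted(rastrigin, np.full(5, 2.5))
```

An algorithm that implicitly favors solutions near the origin will look strong on `rastrigin` but degrade on `f_shifted`, which is exactly the zero-bias effect examined later with the shifted benchmark functions.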


Fig. 6. Illustration of the CEO evolution process on the Rastrigin function.

$$y_{t+1} = \begin{cases} y\_trial^*_t, & \text{if } f(y\_trial^*_t) \le f(y_t) \\ y_t, & \text{otherwise} \end{cases} \tag{11}$$

where $x\_trial^*_t$ and $y\_trial^*_t$ are the best of the trial vectors $x\_trial^n_t = (x\_trial^n_{1,t}, x\_trial^n_{2,t}, \ldots, x\_trial^n_{Dim,t})$ and $y\_trial^n_t = (y\_trial^n_{1,t}, y\_trial^n_{2,t}, \ldots, y\_trial^n_{Dim,t})$, respectively.

3.4. Pseudocode of CEO

By integrating the mutation, crossover, and selection processes of CEO, a detailed pseudocode for solving minimization problems is presented in Algorithm 1.

Algorithm 1. Pseudocode of CEO.
Input: func (objective function); N (number of chaotic samples); Np (population size); MaxFES (maximum number of function evaluations)
Output: Best (the optimal variable); fBest (the optimal function value)
1: t = 1 /* Initialize the iteration number */
2: /* Initialize the population and evaluate it */
3: [Population, fit, fBest, Best] = Initialization(func, Np, Dim)
4: FEvals = Np
5: while FEvals < MaxFES do
6:   repeat
7:     [xt, yt] ← Select two different individuals from the Population.
8:     [x't, y't] ← Perform interval mapping on xt and yt by executing Eq. (4).
9:     [x_chaos, y_chaos] ← Obtain N chaotic individuals by executing Eq. (2).
10:    [x_chaos', y_chaos'] ← Execute Eq. (5) to obtain the actual positions for the corresponding optimization problem.
11:    if rand < 0.5 then
12:      [x̃_{t+1}, ỹ_{t+1}] ← Execute Eq. (7). ⊳ Mutation phase
13:    else
14:      [x̃_{t+1}, ỹ_{t+1}] ← Execute Eq. (8). ⊳ Mutation phase
15:    end if
16:    Cr = rand(0,1)
17:    [x_trial_t, y_trial_t] ← Execute Eq. (9). ⊳ Crossover phase
18:    [x_{t+1}, y_{t+1}] ← Execute Eqs. (10) and (11). ⊳ Selection phase
19:    [Population, fit] ← Update the population and evaluate it.
20:    FEvals = FEvals + 2·N
21:  until all individuals in the population have been selected once
22:  t = t + 1
23: end while
24: Return Best, fBest

In Algorithm 1, since two individuals are selected from the population of CEO in each iteration, the population size Np is set to an even number greater than 2.

Fig. 5 also shows the flowchart of the proposed CEO algorithm.
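For readers who prefer runnable code over pseudocode, the following condensed Python sketch follows the structure of Algorithm 1. It is our illustrative reimplementation under simplifying assumptions (pairs drawn from a random permutation, per-dimension population bounds for the mapping of Eq. (4) as described in Section 3.1, and boundary clipping of trial vectors), not the authors' released MATLAB code:

```python
import numpy as np

def e_dm_map(x, y, k=2.66):
    """One iteration of the E-DM map (Eq. (2)), applied elementwise."""
    return k * (np.exp(-np.cos(np.pi * y)) - 1.0) * x, y + x

def ceo(func, dim, lb, ub, pop_size=10, n_chaos=1, max_fes=5000, seed=0):
    """A condensed sketch of Algorithm 1 for minimization."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lb, ub, (pop_size, dim))
    fit = np.array([func(p) for p in pop])
    fevals = pop_size
    while fevals < max_fes:
        order = rng.permutation(pop_size)
        for i in range(0, pop_size - 1, 2):        # pairs (x_t, y_t)
            ia, ib = int(order[i]), int(order[i + 1])
            xt, yt = pop[ia].copy(), pop[ib].copy()
            # Eq. (4): bounds of the *current population*, per dimension
            plb, pub = pop.min(axis=0), pop.max(axis=0)
            span = np.maximum(pub - plb, 1e-12)
            xp = (xt - plb) / span - 0.5
            yp = (yt - plb) / span * 0.5 - 0.25
            best = pop[int(np.argmin(fit))].copy()
            for _ in range(n_chaos):
                xp, yp = e_dm_map(xp, yp)          # chaotic sample, Eq. (2)
                xc = (xp + 0.5) * span + plb       # inverse mapping, Eq. (5)
                yc = (yp + 0.25) * 2.0 * span + plb
                a = rng.random()                   # random scale factor
                if rng.random() < 0.5:             # Eq. (7): explore
                    mx, my = xt + a * (xc - xt), yt + a * (yc - yt)
                else:                              # Eq. (8): exploit around Best_t
                    mx, my = best + a * (xc - xt), best + a * (yc - yt)
                cr = rng.random()                  # random crossover rate
                for idx, parent, mutant in ((ia, xt, mx), (ib, yt, my)):
                    mask = rng.random(dim) <= cr   # Eq. (9): binomial crossover
                    mask[rng.integers(dim)] = True
                    trial = np.clip(np.where(mask, mutant, parent), lb, ub)
                    f_trial = func(trial)
                    fevals += 1
                    if f_trial <= fit[idx]:        # Eqs. (10)-(11): greedy selection
                        pop[idx], fit[idx] = trial, f_trial
            if fevals >= max_fes:
                break
    b = int(np.argmin(fit))
    return pop[b].copy(), float(fit[b])

# Demo: minimize a 2-D sphere function.
best, fbest = ceo(lambda v: float(np.sum(v**2)), dim=2, lb=-100.0, ub=100.0)
```

Because selection is greedy, the best fitness in the population is non-increasing over iterations; the chaotic samples supply the randomized directions that the paper credits with avoiding stagnation.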


Fig. 7. Comparison of GWO, HHO, SMA, RUN, and CEO on 15 unshifted benchmark functions.

3.5. A detailed description of the CEO evolution process

In this section, the evolutionary process of the CEO algorithm is described in detail using the Rastrigin function provided in Table 1 as an example. Fig. 6 illustrates the search range of the population at the 1st and 31st iterations of CEO, including the chaos sampling, mutation, crossover, and selection stages.

In this experiment, the population size Np is set to 10, and the number of chaotic samples is set to 5. The following conclusions can be drawn from Fig. 6:

(1) In the 1st iteration of CEO, the population's position is far from the global optimal solution, whereas by the 31st iteration, the population is much closer to the global optimal solution.

(2) For the individuals $x_t$ and $y_t$, multiple distinct evolutionary directions are generated through chaotic dynamical evolution, leading to $N$ trial vectors via the mutation and crossover operations. Subsequently, the selection operation updates the current solution, retaining the best individuals. Since each individual generates multiple trial vectors in each iteration, CEO effectively leverages information from the current solutions, significantly enhancing the algorithm's search capabilities.

(3) As the evolutionary direction in CEO is derived from the dynamical map, and the step size is controlled by the scaling factor $a$, the differential term does not converge to zero as long as $a$ remains non-zero. This prevents the algorithm from stagnating or becoming trapped in local optima during the later stages of evolution.

4. Results and discussion

4.1. Experimental setup

To evaluate the performance of CEO, fifteen widely used benchmark functions from the literature [44] are employed. The specific expressions of these functions are provided in Table 1. CEO's results are compared with those of 12 other metaheuristic algorithms, including some recently popular methods such as GWO [10], HHO [15], SMA [23], and RUN [16], which have been cited >16,000 times, 4600 times, 2300 times, and 700 times, respectively, since their introduction. The comparison also includes classical evolutionary and swarm intelligence algorithms such as ABC [45], RCGA [46], PSO [9], and DE [34], as well as several successful and widely used improved metaheuristics, such as CLPSO [47], GL-25 [48], SaDE [49], and LSHADE [50]. Among these, LSHADE has performed exceptionally well in several CEC (IEEE Congress on Evolutionary Computation) competitions, achieving top placements and winning global optimization contests, making it a frequent baseline algorithm.

In the experiments, the population size is set to 50, and the dimensionality of the test functions is set to 2, 5, 10, 20, and 30, with the maximum number of function evaluations (MaxFES) set to Dim × 10,000. To ensure a fair comparison, the detailed parameters of the comparison algorithms are kept consistent with their respective original studies. Except for the Ackley and Rastrigin functions, where the chaotic sample numbers for CEO are set to 5 and 20, respectively, all other functions use a sample number of 1. All algorithms are independently run 51 times in MATLAB (version R2020b) on a Windows 10 desktop with an Intel(R) Core(TM) i5-9500F CPU @ 3.00 GHz. During the execution of each algorithm, iterations terminate either when the number of function evaluations (FEvals) exceeds MaxFES or when the error tolerance of the optimal solution reaches $10^{-8}$.

4.2. Comparisons with GWO, HHO, SMA, and RUN

In this section, the proposed CEO algorithm is compared with four popular metaheuristic algorithms: GWO, HHO, SMA, and RUN. Fig. 7 illustrates the average number of function evaluations (AvgFEvals) required by the five algorithms to reach an error precision of $10^{-8}$ upon the completion of optimization on 15 non-shifted benchmark functions. As shown in Fig. 7, all five algorithms achieve an error precision of $10^{-8}$ on the non-shifted functions Spherical, Schwefel 2.22, Schwefel 1.2, Elliptic, Tablet, Zakharov, Ackley, Griewank, and Rastrigin across dimensions of 2, 5, 10, 20, and 30, indicating that the algorithms terminate before reaching the MaxFES. However, for the non-shifted functions Rosenbrock, Schwefel 2.4, Penalized 1, Penalized 2, Levy and Montalvo 1, and Levy and Montalvo 2, GWO, HHO, SMA, and RUN do not achieve the $10^{-8}$ error precision across all dimensions. Specifically, on the non-shifted Rosenbrock function, these algorithms only reach the precision of $10^{-8}$ in the 2-dimensional case. In contrast, the proposed CEO algorithm consistently attains an error precision of $10^{-8}$ across all 15 non-shifted benchmark functions.

Moreover, an interesting phenomenon is observed in Table 1. The optimal solutions of the functions Spherical, Schwefel 2.22, Schwefel 1.2, Elliptic, Tablet, Zakharov, Ackley, Griewank, and Rastrigin are all

Fig. 8. Comparison of GWO, HHO, SMA, RUN, and CEO on 15 shifted benchmark functions.

Fig. 9. Comparison of ABC, RCGA, PSO, DE, and CEO on 15 shifted benchmark functions.

with those in Fig. 7, it is found that when the MaxFES is set to
located at zero, while the functions Rosenbrock, Schwefel 2.4, Panalized Dim*10000, GWO, HHO, SMA, and RUN only achieve an error precision
1, Panalized 2, Levy and Montalvo 1, and Levy and Montalvo 2 have of 10− 8 on some 2-dimensional test functions. Moreover, SMA achieves
optimal solutions away from zero. As pointed out in the literature [25], 10− 8 precision on only a few higher-dimensional test functions. In
when an algorithm performs exceptionally well on test functions with contrast, the proposed CEO algorithm reaches 10− 8 precision on all test
optimal solutions at zero but performs poorly on non-zero optimal functions, and the AvgFEvals on both the shifted and non-shifted func-
functions, the algorithm may suffer from the zero-bias problem. This tions is very similar. These experimental results show that GWO, HHO,
indicates that the algorithm might incorporate operators that are biased SMA, and RUN exhibit the zero-bias problem, while the proposed CEO
toward zero, causing it to perform “exceptionally well” on zero-optimal algorithm does not. CEO demonstrates superior optimization perfor-
functions. mance and convergence speed compared to the other four algorithms.
To verify whether the five comparative algorithms exhibit the zero- Furthermore, it is worth noting that the zero-bias problem in GWO,
bias problem, Fig. 8 presents the AvgFEvals required by these algorithms HHO, SMA, and RUN has already been validated in the literature
on 15 shifted benchmark functions. By comparing the results of Fig. 8 [24,25].
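To make the zero-bias mechanism concrete, the sketch below builds a deliberately zero-biased toy operator (hypothetical, for illustration only, not any of the algorithms tested here): every candidate is contracted toward the origin, so it looks excellent on the zero-optimum Spherical function but stalls once the optimum is shifted.

```python
import numpy as np

def sphere(x, shift=0.0):
    # Spherical benchmark: optimum value 0 at x = shift (shift = 0 is the non-shifted case).
    return float(np.sum((x - shift) ** 2))

def zero_biased_search(f, dim, iters=200, seed=0):
    # Toy zero-biased operator: each proposal shrinks the incumbent toward 0,
    # which masquerades as fast convergence whenever the optimum sits at zero.
    rng = np.random.default_rng(seed)
    best = rng.uniform(-5.0, 5.0, dim)
    for _ in range(iters):
        cand = 0.5 * best + rng.normal(0.0, 1e-3, dim)  # the bias: contraction toward 0
        if f(cand) < f(best):
            best = cand
    return f(best)

err_zero = zero_biased_search(sphere, dim=5)                       # optimum at 0
err_shifted = zero_biased_search(lambda x: sphere(x, 2.0), dim=5)  # optimum shifted to 2
```

On the zero-optimum function the contraction drives the error to the noise floor, while on the shifted function the same operator stagnates far from the optimum; this is exactly the asymmetry that the shifted benchmarks in Fig. 8 are designed to expose.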


Fig. 10. Comparison of CLPSO, GL-25, SaDE, LSHADE, and CEO on 15 shifted benchmark functions.
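Observation (3) above, that the chaotic mutation step cannot collapse to zero while the scaling factor a is non-zero, unlike DE's population-difference vector, can be sketched as follows. The paper's 2-D discrete memristive map is not reproduced in this excerpt, so a Hénon map stands in purely for illustration; all names here are illustrative, not the authors' implementation.

```python
import numpy as np

def henon(state, a=1.4, b=0.3):
    # Stand-in 2-D chaotic map; in CEO this role is played by the
    # two-dimensional discrete memristive map (not shown in this excerpt).
    x, y = state
    return np.array([1.0 - a * x * x + y, b * x])

def chaotic_trial(parent, chaos_state, scale=0.5, cr=0.9, rng=None):
    # One DE-style trial vector whose mutation direction comes from the
    # chaotic map instead of a difference vector. The chaotic orbit never
    # settles at zero, so the step stays non-zero whenever scale != 0,
    # even after the population has converged.
    rng = np.random.default_rng() if rng is None else rng
    chaos_state = henon(chaos_state)                 # advance the chaotic dynamics
    direction = np.resize(chaos_state, parent.size)  # spread the 2-D state over D dims
    mutant = parent + scale * direction              # chaotic mutation step
    mask = rng.random(parent.size) < cr              # binomial crossover
    mask[rng.integers(parent.size)] = True           # keep at least one mutant gene
    return np.where(mask, mutant, parent), chaos_state

parent = np.zeros(5)
trial, state = chaotic_trial(parent, np.array([0.1, 0.2]), rng=np.random.default_rng(0))
```

Even when the parent equals the incumbent optimum, the trial differs from it, which is the mechanism behind observation (3).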

4.3. Comparisons with ABC, RCGA, PSO, and DE

In this section, the proposed CEO algorithm is compared with four classic evolutionary and swarm intelligence algorithms: ABC, RCGA, PSO, and DE, none of which suffer from the zero-bias problem. Fig. 9 shows the AvgFEvals of the five comparative algorithms on 15 shifted benchmark functions. As shown in Fig. 9, for the functions Spherical, Schwefel 2.4, Tablet, Penalized 1, Penalized 2, Levy and Montalvo 1, and Levy and Montalvo 2, all five algorithms achieve an optimal precision of 10⁻⁸. However, the CEO algorithm generally consumes fewer FEvals, indicating that its convergence speed is faster than that of ABC, RCGA, PSO, and DE. For the remaining test functions, only CEO achieves the precision of 10⁻⁸ across all dimensional settings (2, 5, 10, 20, and 30 dimensions). The other four algorithms fail to reach the preset precision of 10⁻⁸ in some cases. Notably, for the Rosenbrock function, only the DE algorithm meets the precision of 10⁻⁸ in 2D and 5D, while ABC, RCGA, and PSO do not achieve the desired precision in any dimension, even after reaching the MaxFES. Moreover, RCGA shows weaker performance on the Schwefel 1.2, Elliptic, Ackley, and Rastrigin functions. ABC performs poorly on the Schwefel 1.2 and Zakharov functions, while DE performs poorly on the Rastrigin function in 20D and 30D. Overall, the CEO algorithm consumes the fewest function evaluations, while RCGA consumes the most. The proposed CEO algorithm exhibits significantly better search capability and convergence speed than ABC, RCGA, PSO, and DE, with RCGA performing the worst among them.

4.4. Comparisons with CLPSO, GL-25, SaDE, and LSHADE

In this section, the proposed CEO algorithm is compared through experimental simulations with several popular improved metaheuristic algorithms, including CLPSO, GL-25, SaDE, and LSHADE, to further validate its performance. Fig. 10 illustrates the AvgFEvals of the five comparative algorithms on 15 shifted benchmark functions. Specifically, as shown in Fig. 10, for functions such as Spherical, Schwefel 2.22, Elliptic, Tablet, Penalized 1, Penalized 2, Ackley, Levy and Montalvo 1, and Levy and Montalvo 2, all five algorithms achieve an optimal precision of 10⁻⁸ across all dimensions. However, CEO consistently requires fewer FEvals than CLPSO, GL-25, SaDE, and LSHADE. For the Schwefel 1.2 function, when the dimension exceeds 2, CLPSO fails to obtain a satisfactory solution before stopping, and for the 30-dimensional case, GL-25 is also unable to meet the precision requirement of 10⁻⁸. The Rosenbrock function, which has a valley-shaped structure and becomes non-convex when the dimension exceeds 2, poses a challenge: its global minimum lies within a long, narrow parabolic valley that is easy to locate, but finding the exact global minimum is difficult due to the subtle changes within the valley. As shown in Fig. 10, when the dimension of the Rosenbrock function exceeds 2, only SaDE, LSHADE, and CEO are able to locate the global optimum. For the Schwefel 2.4 and Zakharov functions, CLPSO performs worse than the other four algorithms when the dimension exceeds 5. Additionally, for the Griewank function with dimensions of 5 and 10, CLPSO fails to locate the global optimum within the fixed FEvals. Overall, CEO performs the best in this experiment, followed by LSHADE, while CLPSO shows the weakest performance. These results demonstrate the advantages and effectiveness of the proposed CEO algorithm.

4.5. Application to sensor network localization

To further verify the effectiveness of the proposed CEO algorithm on practical application problems, this subsection applies it to sensor network localization (SNL) problems.

4.5.1. Sensor network localization problem

The SNL problem can be stated as follows. Given the positions of n anchor points a_1, a_2, ⋯, a_n ∈ R^d (d = 2 in this paper), the distance between the ith sensor and the kth anchor point is d_ik if (i, k) ∈ N_a, and the distance between the ith sensor and the jth sensor is e_ij if (i, j) ∈ N_x, where N_a = {(i, k) : ‖x_i − a_k‖ = d_ik ≤ r_d} and N_x = {(i, j) : ‖x_i − x_j‖ = e_ij ≤ r_d}; here r_d is the radio range. The SNL problem is to estimate the m different sensor positions x_i, i = 1, 2, ⋯, m, such that

‖x_i − a_k‖² = d_ik², ∀(i, k) ∈ N_a    (12)

‖x_i − x_j‖² = e_ij², ∀(i, j) ∈ N_x    (13)

Since the distances d_ik and e_ij can contain noise, which may make Eqs. (12) and (13) infeasible, this study uses the least-squares method to model the SNL problem, which can be expressed as the following non-convex optimization problem:

min Σ_{(i,j)∈N_x} (‖x_i − x_j‖² − e_ij²)² + Σ_{(i,k)∈N_a} (‖x_i − a_k‖² − d_ik²)²    (14)
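In code, the least-squares objective of Eq. (14) can be written directly. This is a minimal sketch; the container layout chosen for N_x and N_a is an assumption for illustration, not taken from the paper.

```python
import numpy as np

def snl_objective(x, anchors, sensor_pairs, anchor_pairs):
    """Least-squares SNL objective of Eq. (14).

    x            : (m, 2) array of candidate sensor positions x_i
    anchors      : (n, 2) array of known anchor positions a_k
    sensor_pairs : iterable of (i, j, e_ij) for (i, j) in N_x
    anchor_pairs : iterable of (i, k, d_ik) for (i, k) in N_a
    """
    total = 0.0
    for i, j, e in sensor_pairs:   # first sum: sensor-sensor residuals over N_x
        total += (np.sum((x[i] - x[j]) ** 2) - e ** 2) ** 2
    for i, k, d in anchor_pairs:   # second sum: sensor-anchor residuals over N_a
        total += (np.sum((x[i] - anchors[k]) ** 2) - d ** 2) ** 2
    return total
```

At the true sensor positions with noise-free distances the objective is exactly zero, which is what makes it a natural fitness function for CEO and the comparison algorithms.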


Table 2
Experimental results for the SNL instances.

Index            CLPSO        GL-25        SMA          DE           LSHADE       CEO
Best             3.48 × 10⁻³  2.62 × 10⁻⁴  5.43 × 10⁻³  2.56 × 10⁻⁵  2.51 × 10⁻⁵  1.78 × 10⁻⁷
Mean             9.42 × 10⁻³  1.33 × 10⁻²  1.24 × 10⁻²  2.46 × 10⁻²  2.42 × 10⁻²  2.88 × 10⁻³
Worst            1.69 × 10⁻²  5.30 × 10⁻²  1.78 × 10⁻²  5.55 × 10⁻²  6.02 × 10⁻²  6.78 × 10⁻³
Running time/s   235.8        252.7        388.5        199.3        208.6        203.3

4.5.2. Experiments on SNL problems

The sensor example is created randomly by SFSDP [51], a MATLAB package for solving SNL problems. This study created an SNL example with 4 anchor points a_1, a_2, a_3, a_4, whose positions are (0,0), (0,1), (1,0), and (1,1), respectively. The SNL problem has 50 sensors, using a radio range of 0.3 and a noise factor of 0.001. When solving the SNL problem with 50 sensors, the experimental results are compared with those of CLPSO, GL-25, SMA, DE, and LSHADE. The MaxFES of all algorithms is set to 5 × 10⁶, the search range is set to [0,1], and the other parameters are the same as those in the numerical experiments. All algorithms run independently 30 times. The experimental results of all SNL instances are shown in Table 2 and Figs. 11 and 12.

For the SNL problem with 50 sensors, only the CEO algorithm is able to find the global optimum. More specifically, as shown in Table 2, the best, average, and worst values obtained by CEO outperform those of the other five comparative algorithms. Additionally, CEO's running time is only slightly longer than that of the DE algorithm. Fig. 11 presents a comparison between the best sensor positions found by the algorithms and the actual sensor positions, where circles indicate the actual positions and asterisks represent the sensor positions identified by the algorithms. From Fig. 11, it is evident that the sensor positions found by the CLPSO, GL-25, and SMA algorithms deviate significantly from the actual positions, while the DE and LSHADE algorithms each place two of the 50 sensors away from their true locations. Only the CEO algorithm identifies sensor positions that coincide with the actual sensor positions, indicating that CEO has successfully found the global optimum of the SNL problem, whereas CLPSO, GL-25, SMA, DE, and LSHADE all become trapped in local optima.

Furthermore, as seen in the fitness iteration curves in Fig. 12, it is evident that, due to some level of noise, the sensor positions found by CEO still exhibit an error on the order of 10⁻⁷ compared to the true positions. However, compared with the other optimization algorithms, CEO converges to an approximately global optimal solution with significantly higher precision and faster convergence. This demonstrates that CEO outperforms the other algorithms in both search accuracy and convergence speed.

Fig. 11. Best results obtained by CLPSO, GL-25, SMA, DE, LSHADE, and CEO for the SNL problem with 50 sensors.

Fig. 12. Fitness values iteration curve of three different algorithms for 50 sensors and 4 anchor points.
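The random instance described above (four anchors at the unit-square corners, 50 sensors, radio range 0.3, noise factor 0.001) can be sketched as follows. This is a simplified stand-in for the SFSDP generator, with a multiplicative Gaussian noise model assumed for illustration; SFSDP's exact noise model may differ.

```python
import numpy as np

def make_snl_instance(m=50, rd=0.3, noise=0.001, seed=1):
    # Anchors at the four corners of the unit square, as in the experiment above;
    # m ground-truth sensor positions drawn uniformly from [0, 1]^2.
    rng = np.random.default_rng(seed)
    anchors = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
    sensors = rng.uniform(0.0, 1.0, (m, 2))
    sensor_pairs, anchor_pairs = [], []
    for i in range(m):
        for j in range(i + 1, m):                   # sensor-sensor edges in N_x
            dist = float(np.linalg.norm(sensors[i] - sensors[j]))
            if dist <= rd:                          # observed only within radio range
                sensor_pairs.append((i, j, dist * (1 + noise * rng.standard_normal())))
        for k in range(4):                          # sensor-anchor edges in N_a
            dist = float(np.linalg.norm(sensors[i] - anchors[k]))
            if dist <= rd:
                anchor_pairs.append((i, k, dist * (1 + noise * rng.standard_normal())))
    return sensors, anchors, sensor_pairs, anchor_pairs

sensors, anchors, sensor_pairs, anchor_pairs = make_snl_instance()
```

Feeding the resulting pair lists into the Eq. (14) objective, with the 100-dimensional decision vector reshaped to (50, 2), reproduces the search problem the six algorithms solve here.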

5. Conclusions

This paper proposes a novel population-based metaheuristic optimization algorithm inspired by chaotic dynamics, called chaotic evolution optimization (CEO). The primary inspiration for CEO stems from the dynamical evolution process of a two-dimensional discrete memristive map. The hyperchaotic characteristics of the map are utilized to model the algorithm and provide random search directions for evolution. While CEO broadly adopts the framework of the DE algorithm, it differs in its mutation operator. Unlike traditional DE, CEO employs a chaotic dynamical map to generate more random evolution directions for the population, mitigating the local-optima and stagnation issues caused by diminishing difference terms in the later stages of DE. Additionally, CEO introduces multiple chaotic evolution directions, enhancing the exploration and exploitation capabilities of the algorithm. By randomizing the crossover rate and scaling factor, CEO further increases search diversity, reducing the risk of being trapped in local optima while also lowering the difficulty of parameter tuning. The paper also provides graphical representations to visually and comprehensively describe the evolutionary process of CEO, clearly illustrating the mechanism of the algorithm.

Through experimental comparisons on 15 benchmark test problems and a sensor network localization problem with 50 sensors, CEO's performance is comprehensively evaluated against 12 other metaheuristic algorithms. The results demonstrate that CEO achieves promising and competitive outcomes, outperforming the comparative algorithms in terms of robustness and effectiveness, and avoiding the zero-bias problem prevalent in many recent algorithms, such as GWO, HHO, SMA, and RUN. Overall, CEO emerges as an advanced and reliable optimization tool, capable of efficiently and accurately solving complex continuous optimization problems. It is worth noting that the proposed CEO can be regarded as a framework for chaos-based evolutionary optimization algorithms. This framework allows for the integration of other chaotic maps to extend or enhance the CEO algorithm. For example, it can incorporate offset-boosted chaotic maps [52,53], dual-memristor-based hyperchaotic maps [54,55], or memristor-coupled hyperchaotic maps [56,57]. This flexibility also provides new avenues for the development of chaos-based optimization algorithms.

Future research will focus on expanding CEO by exploring chaos-based evolutionary algorithms, as well as on extending CEO to discrete optimization tasks, multi-objective optimization problems, and broader real-world applications, such as system identification, economic dispatch in power systems, and integrated energy system planning and operation.

CRediT authorship contribution statement

Yingchao Dong: Writing – review & editing, Writing – original draft, Validation, Software, Methodology, Conceptualization. Shaohua Zhang: Writing – review & editing, Visualization, Validation, Supervision, Software. Hongli Zhang: Writing – review & editing, Validation, Supervision, Funding acquisition, Formal analysis, Data curation. Xiaojun Zhou: Visualization, Validation, Supervision, Formal analysis, Conceptualization. Jiading Jiang: Visualization, Validation, Supervision, Project administration, Formal analysis, Data curation, Conceptualization.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Acknowledgments

This work was partially supported by the Natural Science Foundation of Xinjiang Uygur Autonomous Region (Grant No. 2022D01C367), the National Natural Science Foundation of China (Grant Nos. 52267010, 72361033, 52266018), the Tianchi Talent Introduction Plan Project of Xinjiang Autonomous Region (2024XGYTCYC12), and the Innovation Team at Xinjiang Institute of Engineering.

Data availability

No data was used for the research described in the article.

References

[1] Dong Y, Zhang H, Wang C, et al. Robust optimal scheduling for integrated energy systems based on multi-objective confidence gap decision theory. Expert Syst Appl 2023;228:120304. https://doi.org/10.1016/j.eswa.2023.120304.
[2] Nocedal J, Wright SJ. Numerical optimization. New York: Springer; 2006.
[3] Zhang Q, Gao H, Zhan ZH, et al. Growth optimizer: a powerful metaheuristic algorithm for solving continuous and discrete global optimization problems. Knowledge-Based Syst 2023;261:110206. https://doi.org/10.1016/j.knosys.2022.110206.
[4] Bahbouhi JE, Elkouay A, Bouderba SI, et al. The whale optimization algorithm and the evolution of cooperation in the spatial public goods game. Chaos Solitons Fractals 2024;182:114873. https://doi.org/10.1016/j.chaos.2024.114873.
[5] Holland JH. Genetic algorithms. Sci Am 1992;267(1):66–73. https://www.jstor.org/stable/24939139.
[6] Kirkpatrick S, Gelatt Jr CD, Vecchi MP. Optimization by simulated annealing. Science 1983;220(4598):671–80. https://doi.org/10.1126/science.220.4598.671.
[7] Dorigo M, Gambardella LM. Ant colony system: a cooperative learning approach to the traveling salesman problem. IEEE Trans Evol Comput 1997;1(1):53–66. https://doi.org/10.1109/4235.585892.
[8] Bonabeau E, Dorigo M, Theraulaz G. Inspiration for optimization from social insect behaviour. Nature 2000;406(6791):39–42. https://doi.org/10.1038/35017500.
[9] Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of ICNN'95 international conference on neural networks, vol. 4. IEEE; 1995. p. 1942–8.
[10] Mirjalili S, Mirjalili SM, Lewis A. Grey wolf optimizer. Adv Eng Software 2014;69:46–61. https://doi.org/10.1016/j.advengsoft.2013.12.007.
[11] Mirjalili S, Lewis A. The whale optimization algorithm. Adv Eng Software 2016;95:51–67. https://doi.org/10.1016/j.advengsoft.2016.01.008.
[12] Yang XS, Hossein Gandomi A. Bat algorithm: a novel approach for global engineering optimization. Eng Comput 2012;29(5):464–83. https://doi.org/10.1108/02644401211235834.
[13] Mirjalili S. The ant lion optimizer. Adv Eng Software 2015;83:80–98. https://doi.org/10.1016/j.advengsoft.2015.01.010.
[14] Yang XS, He X. Firefly algorithm: recent advances and applications. Int J Swarm Intell 2013;1(1):36–50. https://doi.org/10.1504/IJSI.2013.055801.
[15] Heidari AA, Mirjalili S, Faris H, et al. Harris hawks optimization: algorithm and applications. Future Gener Comput Syst 2019;97:849–72. https://doi.org/10.1016/j.future.2019.02.028.
[16] Ahmadianfar I, Heidari AA, Gandomi AH, et al. RUN beyond the metaphor: an efficient optimization algorithm based on Runge Kutta method. Expert Syst Appl 2021;181:115079. https://doi.org/10.1016/j.eswa.2021.115079.
[17] Camacho-Villalón CL, Dorigo M, Stützle T. Exposing the grey wolf, moth-flame, whale, firefly, bat, and antlion algorithms: six misleading optimization techniques inspired by bestial metaphors. Int Tran Oper Res 2023;30(6):2945–71. https://doi.org/10.1111/itor.13176.
[18] Weyland D. A rigorous analysis of the harmony search algorithm: how the research community can be misled by a "novel" methodology. Int J Appl Metaheuristic Comput 2010;1(2):50–60. https://doi.org/10.4018/jamc.2010040104.
[19] Camacho-Villalón CL, Dorigo M, Stützle T. The intelligent water drops algorithm: why it cannot be considered a novel algorithm: a brief discussion on the use of metaphors in optimization. Swarm Intell 2019;13:173–92. https://doi.org/10.1007/s11721-019-00165-y.
[20] Camacho-Villalón CL, Stützle T, Dorigo M. Cuckoo search ≡ (μ + λ)-evolution strategy—A rigorous analysis of an algorithm that has been misleading the research community for more than 10 years and nobody seems to have noticed. Technical Report TR/IRIDIA/2021-006. Belgium: IRIDIA, Université Libre de Bruxelles; 2021.
[21] Piotrowski AP, Napiorkowski JJ, Rowinski PM. How novel is the "novel" black hole optimization approach? Inform Sci 2014;267:191–200. https://doi.org/10.1016/j.ins.2014.01.026.
[22] Aranha C, Camacho Villalón CL, Campelo F, et al. Metaphor-based metaheuristics, a call for action: the elephant in the room. Swarm Intell 2022;16(1):1–6. https://doi.org/10.1007/s11721-021-00202-9.
[23] Li S, Chen H, Wang M, et al. Slime mould algorithm: a new method for stochastic optimization. Future Gener Comput Syst 2020;111:300–23. https://doi.org/10.1016/j.future.2020.03.055.
[24] Kudela J. The evolutionary computation methods no one should use. arXiv preprint arXiv:2301.01984; 2023. https://doi.org/10.48550/arXiv.2301.01984.
[25] Kudela J. A critical problem in benchmarking and analysis of evolutionary computation methods. Nat Mach Intell 2022;4(12):1238–45. https://doi.org/10.1038/s42256-022-00579-0.

[26] Kennedy J. Swarm intelligence. In: Handbook of nature-inspired and innovative computing: integrating classical models with emerging technologies. Boston, MA: Springer US; 2006. p. 187–219.
[27] Liu B, Wang L, Jin YH, et al. Improved particle swarm optimization combined with chaos. Chaos Solitons Fractals 2005;25(5):1261–71. https://doi.org/10.1016/j.chaos.2004.11.095.
[28] Kumar S, Yildiz BS, Mehta P, et al. Chaotic marine predators algorithm for global optimization of real-world engineering problems. Knowledge-Based Syst 2023;261:110192. https://doi.org/10.1016/j.knosys.2022.110192.
[29] Faramarzi A, Heidarinejad M, Mirjalili S, et al. Marine predators algorithm: a nature-inspired metaheuristic. Expert Syst Appl 2020;152:113377. https://doi.org/10.1016/j.eswa.2020.113377.
[30] Kaur G, Arora S. Chaotic whale optimization algorithm. J Comput Des Eng 2018;5(3):275–84. https://doi.org/10.1016/j.jcde.2017.12.006.
[31] Altay O. Chaotic slime mould optimization algorithm for global optimization. Artif Intell Rev 2022;55(5):3979–4040. https://doi.org/10.1007/s10462-021-10100-5.
[32] Raj S, Shiva CK, Vedik B, et al. A novel chaotic chimp sine cosine algorithm part-I: for solving optimization problem. Chaos Solitons Fractals 2023;173:113672. https://doi.org/10.1016/j.chaos.2023.113672.
[33] Mirjalili S. SCA: a sine cosine algorithm for solving optimization problems. Knowledge-Based Syst 2016;96:120–33. https://doi.org/10.1016/j.knosys.2015.12.022.
[34] Storn R, Price K. Differential evolution–a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 1997;11:341–59. https://doi.org/10.1023/A:1008202821328.
[35] Zhang S, Zhang H, Wang C. Memristor initial-boosted extreme multistability in the novel dual-memristor hyperchaotic maps. Chaos Solitons Fractals 2023;174:113885. https://doi.org/10.1016/j.chaos.2023.113885.
[36] Hua Z, Wu Z, Zhang Y, et al. Two-dimensional cyclic chaotic system for noise-reduced OFDM-DCSK communication. IEEE Trans Circuits Syst I Regul Pap 2024. https://doi.org/10.1109/TCSI.2024.3454535.
[37] Lai Q, Hu G. A nonuniform pixel split encryption scheme integrated with compressive sensing and its application in IoM. IEEE Trans Industr Inform 2024;20(9):11262–72. https://doi.org/10.1109/TII.2024.3403266.
[38] Lin H, Wang C, Cui L, et al. Brain-like initial-boosted hyperchaos and application in biomedical image encryption. IEEE Trans Industr Inform 2022;18(12):8839–50. https://doi.org/10.1109/TII.2022.3155599.
[39] Chua LO. Memristor-the missing circuit element. IEEE Trans Circuit Theory 1971;18(5):507–19. https://doi.org/10.1109/TCT.1971.1083337.
[40] Li Y, Li C, Lei T, et al. Offset boosting-entangled complex dynamics in the memristive Rulkov neuron. IEEE Trans Ind Electron 2024;71(8):9569–79. https://doi.org/10.1109/TIE.2023.3325558.
[41] Bao H, Su Y, Hua Z, et al. Grid homogeneous coexisting hyperchaos and hardware encryption for 2-D HNN-like map. IEEE Trans Circuits Syst I Regul Pap 2024;71(9):4145–55. https://doi.org/10.1109/TCSI.2024.3423805.
[42] Deng Y, Li Y. A 2D hyperchaotic discrete memristive map and application in reservoir computing. IEEE Trans Circuits Syst II Express Briefs 2022;69(3):1817–21. https://doi.org/10.1109/TCSII.2021.3118646.
[43] Bao H, Hua Z, Li H, et al. Discrete memristor hyperchaotic maps. IEEE Trans Circuits Syst I Regul Pap 2021;68(11):4534–44. https://doi.org/10.1109/TCSI.2021.3082895.
[44] Dong Y, Zhang H, Wang C, et al. An adaptive state transition algorithm with local enhancement for global optimization. Appl Soft Comput 2022;121:108733. https://doi.org/10.1016/j.asoc.2022.108733.
[45] Karaboga D, Basturk B. A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm. J Global Optim 2007;39:459–71. https://doi.org/10.1007/s10898-007-9149-x.
[46] Tran TD, Jin GG. Real-coded genetic algorithm benchmarked on noiseless black-box optimization testbed. In: Proceedings of the 12th annual conference companion on genetic and evolutionary computation; 2010. p. 1731–8.
[47] Liang JJ, Qin AK, Suganthan PN, et al. Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans Evol Comput 2006;10(3):281–95. https://doi.org/10.1109/TEVC.2005.857610.
[48] García-Martínez C, Lozano M, Herrera F, et al. Global and local real-coded genetic algorithms based on parent-centric crossover operators. Eur J Oper Res 2008;185(3):1088–113. https://doi.org/10.1016/j.ejor.2006.06.043.
[49] Qin AK, Huang VL, Suganthan PN. Differential evolution algorithm with strategy adaptation for global numerical optimization. IEEE Trans Evol Comput 2008;13(2):398–417. https://doi.org/10.1109/TEVC.2008.927706.
[50] Tanabe R, Fukunaga AS. Improving the search performance of SHADE using linear population size reduction. In: 2014 IEEE congress on evolutionary computation (CEC). IEEE; 2014. p. 1658–65.
[51] Kim S, Kojima M, Waki H, et al. Algorithm 920: SFSDP: a sparse version of full semidefinite programming relaxation for sensor network localization problems. ACM Trans Math Software 2012;38(4):1–19. https://doi.org/10.1145/2331130.2331135.
[52] Wang Z, Li C, Li Y, et al. A chaotic map with two-dimensional offset boosting. Chaos Interdiscip J Nonlinear Sci 2024;34(6):063130. https://doi.org/10.1063/5.0207875.
[53] Lai Q, Yang L, Chen G. Design and performance analysis of discrete memristive hyperchaotic systems with stuffed cube attractors and ultraboosting behaviors. IEEE Trans Ind Electron 2024;71(7):7819–28. https://doi.org/10.1109/TIE.2023.3299016.
[54] He S, Hu K, Wang M, et al. Design and dynamics of discrete dual-memristor chaotic maps and its application in speech encryption. Chaos Solitons Fractals 2024;188:115517. https://doi.org/10.1016/j.chaos.2024.115517.
[55] Zhang S, Ma P, Zhang H, et al. Dual memristors-radiated discrete Hopfield neuron with complexity enhancement. Nonlinear Dyn 2024. https://doi.org/10.1007/s11071-024-10364-w.
[56] Wang Z, Li C, Li Y, et al. A class of memristive Hénon maps. Phys Scr 2024;99(10):105227. https://doi.org/10.1088/1402-4896/ad71fe.
[57] Zhao Q, Bao H, Zhang X, et al. Complexity enhancement and grid basin of attraction in a locally active memristor-based multi-cavity map. Chaos Solitons Fractals 2024;182:114769. https://doi.org/10.1016/j.chaos.2024.114769.
