
Information Sciences 702 (2025) 121909

Contents lists available at ScienceDirect

Information Sciences
journal homepage: www.elsevier.com/locate/ins

Multi-level particle swarm optimizer for multimodal optimization problems

Hao Pan a,c,d,e, Hui Yuan b,∗, Qiang Yue a, Haibin Ouyang f, Fangqing Gu g, Fei Li a,c,d,e,∗
a School of Electrical and Information Engineering, Anhui University of Technology, Maanshan, 243002, China
b School of Mechanical Engineering, Anhui University of Technology, Maanshan, 243032, China
c Anhui Province Key Laboratory of Special Heavy Load Robot, Anhui University of Technology, Maanshan, 243032, China
d Power Electronics and Motion Control of Anhui Higher Education Institutions, Anhui University of Technology, Maanshan, 243032, China
e Key Laboratory of Metallurgical Emission Reduction and Comprehensive Utilization of Resources of the Ministry of Education, Anhui University of Technology, Maanshan, 243032, China
f School of Mechanical and Electric Engineering, Guangzhou University, Guangzhou, 510006, China
g Guangdong University of Technology, Guangzhou, Guangdong Province, 510006, China

ARTICLE INFO

Keywords: Multimodal optimization; Multi-level particle swarm optimizer; Fast density-peak clustering; Hierarchical iterative local search; Equilibrium point setting strategy

ABSTRACT

Multimodal optimization problems (MMOPs) involve finding multiple optimal solutions, which is essential in many real-world applications. Key challenges in solving MMOPs include identifying more global optima and improving solution accuracy. To tackle these issues, we propose a multi-level particle swarm optimizer (MLPSO) that utilizes fast density-peak clustering along with three innovative mechanisms. Firstly, a new niching technique based on fast density-peak clustering is introduced to form multiple sub-populations, aiming to discover as many optimal solutions as possible. Secondly, we implement an equilibrium point setting strategy that adjusts based on the current evolutionary state to balance exploration and exploitation; different sub-populations use varying parameters during the learning process. Additionally, an exclusion mechanism is designed to prevent multiple sub-populations from converging on the same peak. Lastly, a hierarchical iterative local search is proposed to refine the accuracy of the identified optima. Experimental results on 20 benchmark functions demonstrate that MLPSO performs competitively with leading multimodal optimization algorithms.

1. Introduction

Multimodal optimization problems (MMOPs) which contain multiple optimal solutions are the main challenges in the field of
evolutionary algorithms (EAs). MMOPs appear in many application areas, including engineering design [1,2], financial portfolio
optimization [3], and machine learning [4,5]. For example, in computer vision [6,7], a multimodal optimization algorithm can be
used to recognize objects in images that have multiple possible representations.

* Corresponding authors.
E-mail addresses: [email protected] (H. Pan), [email protected] (H. Yuan), [email protected] (Q. Yue), [email protected] (H. Ouyang),
[email protected] (F. Gu), [email protected], [email protected] (F. Li).

https://doi.org/10.1016/j.ins.2025.121909
Received 2 October 2024; Received in revised form 1 January 2025; Accepted 25 January 2025

Available online 27 January 2025


0020-0255/© 2025 Elsevier Inc. All rights are reserved, including those for text and data mining, AI training, and similar technologies.
EAs are well-suited for solving MMOPs because they can maintain a diverse set of potential solutions in each iteration. However,
most traditional EAs can find only a single global optimum [8--10]. The essence of solving MMOPs is to locate more global optimal solutions
and refine their accuracy. To address MMOPs, some researchers have proposed the technique of ``niching'', where populations can
be divided into multiple niches [11--17]. Many researchers have proposed various niching techniques, such as clustering [12,18,19],
crowding [20,21], species formation [22--24], and fitness sharing [25]. On the basis of these techniques, several EAs have emerged for
solving MMOPs, such as particle swarm optimization (PSO) [26--29], the estimation distribution algorithm [30], the genetic algorithm
[31,32], and differential evolution [33--35].
However, traditional EAs encounter the critical challenge of striking a balance between exploration and exploitation while solving
MMOPs. Specifically, an algorithm must maintain sufficient exploration to find multiple optimal solutions [17,36].
Additionally, it must find the optimal solutions quickly and accurately. Thus, any optimization algorithm should
enhance exploitation when solving MMOPs [37]. However, exploration and exploitation mutually influence each other:
enhancing population diversity may weaken the algorithm's exploitation performance. Therefore, to
address MMOPs, researchers strive to maximize the discovery of global optimal solutions while maintaining precision, making this
their primary focus and challenge.
Recently, researchers have designed many different diversity preservation strategies to enhance the population distribution. Gu et
al. [38] employed an adaptive resource allocation strategy, which improves the population’s exploration and exploitation capabilities
by assigning different weights. Chen et al. [39] proposed a distributed individuals differential evolution (DIDE) algorithm, targeting
multiple peaks and incorporating two novel mechanisms. In this line of research, most multimodal optimization algorithms
employ crossover, mutation operators, or environmental selection strategies to enhance population diversity, and they are well suited
for solving MMOPs. Nevertheless, few studies have considered the possibility that multiple sub-populations may search for the same
peak at the same time. This situation can result in a loss of population diversity. Therefore, the second issue to be addressed is how
to avoid multiple sub-populations from searching for the same peak.
Traditional EAs often perform poorly in solving MMOPs. The main issue is that these algorithms lack a local search strategy. Thus,
many researchers have proposed multimodal optimization algorithms [40--42] that incorporate different local search techniques to
enhance the algorithm’s performance. Wang et al. [43] proposed a contour prediction approach and a two-level local search strategy
to improve solution accuracy. Traditional iterative local search [44] suffers from high complexity. Therefore, the third challenge is
designing an effective local search algorithm that refines solution accuracy while maintaining low complexity.
To address the aforementioned challenges, we propose a multi-level particle swarm optimizer for multimodal optimization problems (MLPSO). The main contributions of our work are as follows:

• This paper introduces a novel niching technique based on fast density-peak clustering. We use fast
density-peak clustering to automatically partition the population into numerous sub-populations. Thus, population diversity
is preserved without introducing any extra parameters.
• An equilibrium point setting strategy is proposed to balance exploration and exploitation within the population. If the optimal
particle fitness value is greater than the equilibrium point, we accelerate the exploitation for this sub-population. If the fitness
value of the optimal particle is less than the equilibrium point, the exploration ability is enhanced. Therefore, the equilibrium
point setting strategy can significantly improve exploration and exploitation.
• An exclusion mechanism is proposed to address the issue that multiple sub-populations formed by the niching technique may
search for the same peak. We compare the distance between the optimal particles of each sub-population in the decision
variable space to a certain threshold. If the Euclidean distance between adjacent optimal particles is less than this threshold, the
sub-population with poorer performance is reinitialized. Therefore, the exclusion mechanism increases population diversity and
avoids meaningless search processes.
• The hierarchical iterative local search technique is introduced to enhance solution accuracy. The first step involves selecting the
appropriate direction by adjusting the minimum value of the optimal particle. The second step involves selecting the point with
the highest fitness value as the new optimal particle after uniform sampling. The uniform sampling step size of each iteration
gradually decreases. As the iterations continue, a refined uniform sampling process with smaller steps is applied to the optimal
particle, so the hierarchical iterative local search technique progressively refines the solution's accuracy.

The rest of this article is as follows. Section 2 provides an overview of particle swarm optimization, fast density-peak clustering, and
some existing multimodal optimization algorithms. The proposed MLPSO is then described in Section 3. Section 4 presents the experimental
results from the multimodal competition. Finally, Section 5 summarizes this paper.

2. Related works

2.1. Particle swarm optimizer

PSO is a commonly used search algorithm [45]. Particles learn from their own best historical positions and the best position found
by the entire population [46]. The update equation for the velocity 𝑣𝑖 is given as follows:

𝑣𝑖 = 𝜔𝑣′𝑖 + 𝑐1 𝑟1 (𝑥𝑝𝑏𝑒𝑠𝑡𝑖 − 𝑥′𝑖 ) + 𝑐2 𝑟2 (𝑥𝑔𝑏𝑒𝑠𝑡 − 𝑥′𝑖 ), (1)

where 𝑣𝑖 and 𝑣′𝑖 denote the updated and current velocities of particle 𝑖, 𝑥𝑝𝑏𝑒𝑠𝑡𝑖 represents the personal best position of particle 𝑖, 𝑥𝑔𝑏𝑒𝑠𝑡 is
the global best position of the swarm, and 𝜔 is the inertia weight. The update equation for the position 𝑥𝑖 is given as follows:

𝑥𝑖 = 𝑥′𝑖 + 𝑣𝑖 , (2)
where 𝑥𝑖 denotes the updated position of particle 𝑖. Higher values of 𝜔 yield greater global search ability: the
algorithm converges more slowly and behaves more exploratively. Conversely, lower values of
𝜔 yield better local search ability: the algorithm converges faster and behaves more exploitatively. Traditional
strategies for choosing 𝜔 are either adaptive or non-adaptive as iterations increase [47]. Properly selecting
the value of 𝜔, comprehensively considering exploration and exploitation, is critical during the optimization procedure [48].
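To make Eqs. (1) and (2) concrete, one velocity and position update can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name and the default coefficients (𝜔 = 0.7, 𝑐1 = 𝑐2 = 2.0) are our own assumptions.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    """One PSO update per Eqs. (1)-(2): x, v, pbest are (N, D) arrays of
    positions, velocities and personal bests; gbest is the (D,) swarm best."""
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)  # random factors in [0, 1)
    r2 = rng.random(x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (1)
    x_new = x + v_new                                              # Eq. (2)
    return x_new, v_new
```

A quick sanity check: with 𝑐1 = 𝑐2 = 0 the update reduces to the pure inertia term 𝜔𝑣′𝑖.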

2.2. Density-peak clustering

Density-peak clustering (DPC) [49] can automatically identify cluster centers and efficiently cluster data of various shapes. The
method does not introduce any sensitive parameters. Thus, DPC can quickly find as many peaks as possible.
In DPC, the high-density points are typically located at the center of clusters, and clustering is performed by analyzing the local
density and relative distance of data points. Local density is the total number of other points around a data point. The equation for
local density is shown in Equation (3).

𝜌𝑖 = ∑_{𝑗} 𝜒(𝑑𝑖𝑗 − 𝑑𝑐 ), (3)

𝜒(𝑥) = 1 if 𝑥 < 0, and 𝜒(𝑥) = 0 if 𝑥 ≥ 0, (4)
where 𝜌𝑖 represents the local density of point 𝑖; 𝑑𝑖𝑗 denotes the Euclidean distance between point 𝑖 and point 𝑗 , and 𝑑𝑐 stands for a
truncation distance, which signifies the shortest distance between a point in a sub-population and another sub-population.
The relative distance 𝛿𝑖 represents the minimum distance between sample point 𝑖 and other high-density points. The equation for
relative distance is shown in Equation (5).
𝛿𝑖 = max_{𝑗}(𝑑𝑖𝑗 ) if 𝑖 is the densest particle, and 𝛿𝑖 = min_{𝑗:𝜌𝑗 >𝜌𝑖 }(𝑑𝑖𝑗 ) for other data points, (5)
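The two DPC quantities above can be computed directly. The following is a minimal sketch (the function name is ours) using the cut-off kernel of Eq. (3) and, for 𝛿𝑖, the distance to the nearest denser point per Eq. (5):

```python
import numpy as np

def dpc_scores(X, d_c):
    """Local density rho (Eq. 3, cut-off kernel) and relative distance
    delta (Eq. 5) for every row of the (N, D) data matrix X."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)      # ignore self-distances
    rho = np.sum(d < d_c, axis=1)    # number of neighbours within d_c
    delta = np.empty(len(X))
    for i in range(len(X)):
        denser = d[i, rho > rho[i]]  # distances to strictly denser points
        if denser.size:
            delta[i] = denser.min()
        else:                        # densest point: take its farthest distance
            delta[i] = d[i][np.isfinite(d[i])].max()
    return rho, delta
```

Points combining large 𝜌𝑖 and large 𝛿𝑖 are the density peaks that DPC treats as cluster centers.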

2.3. Optimization algorithms

Multimodal optimization algorithms are used to locate multiple global optimal solutions to ensure the comprehensiveness of the
solutions. These algorithms aim to increase population diversity and improve convergence speed. Niching techniques have been
embedded into EAs to address MMOPs. Crowding, speciation, and clustering are three popular methods for solving MMOPs. Moreover, some
researchers convert MMOPs into multi-objective optimization problems (MOPs) and employ evolutionary multi-objective optimization algorithms to tackle the transformed
problems.
1) Crowding Niching Technique: In crowding niching methods, each offspring is compared to its nearest parent among a randomly
selected group of C parent individuals (C is the crowd size). The offspring replaces that parent if its
fitness value is better; otherwise, the offspring is discarded. The crowding niching
technique has been embedded into differential evolution (DE), yielding crowding DE (CDE) [11]. The algorithm SAMCDE [50]
combines CDE with strategy adaptation and a fine-search technique to handle MMOPs. Osuna and Sudholt [21] provide the initial
comprehensive runtime analyses of probabilistic crowding and generalized crowding, integrated into a steady-state EA to preserve
population diversity. Overall, crowding niching techniques are effective and efficient for solving MMOPs. However, the technique
highly depends on the parameter settings.
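The replacement step described above can be sketched as follows. This is a generic crowding survivor selection under a maximization assumption, not the SAMCDE implementation; the function name is ours.

```python
import numpy as np

def crowding_replace(pop, fit, offspring, off_fit, C, rng=None):
    """Crowding survivor selection: the offspring competes with the nearest
    of C randomly chosen parents (C = crowd size) and replaces it only if
    its fitness is better (maximization assumed)."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(pop), size=C, replace=False)
    # nearest parent (Euclidean distance) within the random crowd
    nearest = idx[np.argmin(np.linalg.norm(pop[idx] - offspring, axis=1))]
    if off_fit > fit[nearest]:
        pop[nearest] = offspring
        fit[nearest] = off_fit
    return pop, fit
```

Because replacement is restricted to the nearest member of the crowd, similar individuals compete with each other, which is what preserves niches on separate peaks.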
2) Speciation Niching Technique: In speciation niching methods, the population is divided into species or sub-populations according
to some niche parameters (the size of sub-populations or species radius). Each sub-population will locate as many peaks as possible.
The technique maintains good population diversity to escape local optima. Li [51] embedded speciation niching into
DE to create the SDE algorithm, which aims to identify multiple global optima simultaneously by adaptively forming multiple
sub-populations at each iteration step. Gao et al. [12] proposed an adaptive parameter mechanism and a clustering technique based on SDE.
Qu et al. [52] integrated the neighborhood mutation operation into SDE, which is called NSDE. In addition, Gu et al. [38] introduced
peak density clustering in the species formation method for creating sub-populations. It has been validated that using speciation is an
effective technique for MMOPs. However, these algorithms all have a certain sensitivity to niche parameters. Additionally, it remains
a challenge for the population to locate various peaks in MMOPs.
3) Clustering Niching Technique: In MMOPs, clustering niching techniques are used to partition the population into sub-populations.
The clustering niching methods employ a less sensitive parameter for population division. Moreover, these methods are able to find
solutions with similar characteristics in different regions, thereby improving performance and efficiency on MMOPs. Most traditional
clustering methods, including k-means clustering, hierarchical clustering, and density-based clustering, have been embedded into EAs.
For example, affinity propagation clustering is employed in DIDE [39] to generate sub-populations. Furthermore, researchers have


Fig. 1. Framework of MLPSO.

introduced clustering-based DE algorithms for optimizing in dynamic environments. Similarly, Cai et al. [53] have developed another
clustering-based DE algorithm which employs the k-means clustering technique as multiple multi-parental crossover operators.

2.4. Motivation

The commonly used niching techniques are often influenced by sensitive parameters, resulting in clustering results that are highly
dependent on the selection of initial parameters. To address this issue, we choose DPC, which is not affected by sensitive parameters. DPC
automatically determines cluster centers by searching for density peaks, without the need to preset the number of clusters, effectively
reducing the impact of parameter sensitivity on clustering results.
In MMOPs, the global optimal solution is often hidden among multiple local optimal solutions, and these local optimal solutions
may have similar objective function values. Therefore, the equilibrium point setting strategy has been proposed to balance global
search capability and local convergence performance. If the fitness value of the optimal particle is greater than the equilibrium point,
we should accelerate the exploitation of this sub-population to promote convergence. If the fitness value of the optimal particle is
less than the equilibrium point, the exploration ability should be enhanced to promote diversity.
In multiple sub-populations, different sub-populations may explore the same area due to similar search strategies, resulting in low
algorithm efficiency. Therefore, we propose an exclusion mechanism to address this issue in MMOPs. We can measure the Euclidean
distance between the optimal particles in each sub-population and compare it with a predetermined threshold. Once the distance
between adjacent optimal particles is found to be less than this threshold, the sub-population that performs poorly is reinitialized.
Local search plays a crucial role in the optimization process. We propose a hierarchical iterative local search technique to improve
the accuracy of the solution. Firstly, we select a suitable search direction based on the fitness value of the optimal particle. Then, we
select the point with the highest fitness as the new optimal particle on the basis of uniform sampling. As the iteration progresses, we
gradually reduce the sampling step size in order to achieve higher solution accuracy.

3. Proposed method: MLPSO

We first outline the overall framework of MLPSO. Then we explain the multi-level strategy based on DPC in detail. In addition,
the equilibrium point setting strategy, the modified PSO, and the exclusion mechanism are presented. Finally, we propose a hierarchical
iterative local search to refine the solution's accuracy.

3.1. MLPSO

The illustration and pseudo-code of MLPSO are given in Fig. 1 and Algorithm 1. As shown in Fig. 1, we first employ the
DPC technique to divide the population into five sub-populations, P1 to P5. The five-pointed star represents the best particle
within each sub-population. The sub-populations can be classified into two parts according to the equilibrium point setting strategy:
P2 and P5 focus on accelerating convergence, while P1, P3, and P4 increase the population's diversity
during evolution. Then, the winner group (P2W, P4W) and the loser group (P2L, P4L) are determined based on their fitness values.

Algorithm 1 Framework of MLPSO.
INITIALIZE: 𝑃 population, 𝐶𝑙 the sub-populations, 𝑁 total number of particles, 𝑛 the number of sub-populations, 𝑓 fitness value, 𝑅𝑒𝑥 the threshold value, 𝐷 variable
dimension, 𝑣 velocity, the global best for each sub-population 𝐺𝑏𝑒𝑠𝑡𝑖, (𝑖 = 1, …, 𝑛), 𝑤 inertia weight, the individual best for each particle 𝑃 𝑏𝑒𝑠𝑡, 𝑒𝑣𝑎𝑙𝑠 ← 0;
1: (𝐶𝑙, 𝑛) ← DensityPeakClustering(𝑃 );
2: Calculate the Euclidean distance 𝑑𝑖𝑗 between each particle;
3: Calculate the local density 𝜌𝑖 of each particle;
4: Calculate the relative distance 𝛿𝑖 of each particle;
5: for 𝑖 = 1 to 𝑁 do
6: 𝛾𝑖 = 𝜌𝑖 ∗ 𝛿𝑖 ;
7: end for
8: The particle whose 𝛾𝑖 value is greater than 𝛾𝑘 is selected as the cluster center;
9: The remaining particles are allocated to clusters whose nearest neighbors are locally denser than them;
10: while 𝑓𝑒𝑣𝑎𝑙 ≤ 𝑓𝑒𝑣𝑎𝑙−𝑚𝑎𝑥 do
11: 𝐶𝑙 ← Initialize search;
12: Execute the equilibrium point setting strategy
13: for 𝑖 = 1 to 𝑛 do
14: if 𝑓𝐺𝑏𝑒𝑠𝑡𝑖 ≥ 𝑓𝑒𝑞 then
15: Accelerated convergence of sub-population 𝑖;
16: else
17: Accelerated exploration of sub-population 𝑖;
18: end if
19: end for
20: Execute improved PSO using different 𝑤.
21: (𝐶𝑙, 𝐺𝑏𝑒𝑠𝑡) ← Exclusion(𝐶𝑙, 𝑅𝑒𝑥 , 𝐺𝑏𝑒𝑠𝑡)
22: for 𝑗 = 1 to 𝑛 do
23: for 𝑘 = 1 to 𝑛 do
24: Calculate the Euclidean distance 𝑑𝑗𝑘 between 𝐺𝑏𝑒𝑠𝑡𝑗 and 𝐺𝑏𝑒𝑠𝑡𝑘;
25: if 𝑑𝑗𝑘 ≤ 𝑅𝑒𝑥 then
26: if 𝑓𝐺𝑏𝑒𝑠𝑡𝑗 ≥ 𝑓𝐺𝑏𝑒𝑠𝑡𝑘 then
27: 𝐶𝑙(𝑗) + 𝐶𝑙(𝑘) → 𝐶𝑙(𝑗), the worst |𝐶𝑙(𝑘)| particles are reinitialized from 𝐶𝑙(𝑗);
28: else
29: 𝐶𝑙(𝑗) + 𝐶𝑙(𝑘) → 𝐶𝑙(𝑘), the worst |𝐶𝑙(𝑗)| particles are reinitialized from 𝐶𝑙(𝑘);
30: end if
31: end if
32: end for
33: end for
34: Execute hierarchical iterative local search strategy
35: for 𝑖 = 1 to 𝑛 do
36: for 𝑗 = 1 to 5 do
37: 𝑚 = (0.1)^𝑗 ;
38: for 𝑘 = 1 to 𝐷 do
39: 𝐺′ 𝑏𝑒𝑠𝑡𝑖(𝑘) = 𝐺𝑏𝑒𝑠𝑡𝑖(𝑘) + 𝜖 ;
40: if 𝑓 (𝐺′ 𝑏𝑒𝑠𝑡𝑖) ≥ 𝑓 (𝐺𝑏𝑒𝑠𝑡𝑖) then
41: 𝑎(𝑘) = 1;
42: else
43: 𝑎(𝑘) = −1;
44: end if
45: end for
46: for 𝑙 = 1 to 9 do
47: 𝑚𝑒𝑚′ (𝑙) = 𝑙 ∗ 𝑚 ∗ 𝑎;
48: end for
49: Set 𝐺𝑏𝑒𝑠𝑡𝑖 to the point in 𝑚𝑒𝑚′ with the maximum fitness value;
50: end for
51: end for
52: end while

Different groups adopt different 𝜔 to balance exploration and exploitation. The hierarchical iterative local search strategy is performed
after the learning procedure.
Algorithm 1 is the instantiation of our proposed MLPSO. To be specific, we use DPC to partition the population 𝑃 into multiple
sub-populations from Step 1 to Step 9 of Algorithm 1. In Step 12 of Algorithm 1, we calculate the equilibrium point 𝑓𝑒𝑞 to balance
exploration and exploitation. Following the equilibrium point setting strategy, the sub-populations can be classified into two parts. If
the global best objective function value 𝑓𝐺𝑏𝑒𝑠𝑡 of a sub-population is larger than 𝑓𝑒𝑞 , that sub-population accelerates
convergence in Step 15 of Algorithm 1. In contrast, the other sub-populations explore the area further to enhance population
diversity in Step 17. An improved PSO is employed to search for the optimal solution in Step 20. Next, we employ an exclusion
mechanism to avoid multiple sub-populations searching for the same peak from Step 21 to Step 33. Finally, a hierarchical iterative
local search strategy is used to improve the solution's accuracy from Step 34 to Step 51 of Algorithm 1.


Fig. 2. Illustration of equilibrium point 𝑓𝑒𝑞 and exclusion mechanism, where the Euclidean distance between 𝑂𝑃 4 and 𝑂𝑃 5 is less than the threshold.

3.2. Multi-level strategy based density peak clustering

The primary goal in solving MMOPs is to locate distinct global peaks. Hence, we introduce a clustering mechanism to partition the population
into multiple sub-populations, allowing each sub-population the opportunity to converge to one or more optimal solutions.
We execute the clustering mechanism on the initialized population. Frequently used clustering methods, like k-means clustering,
are highly sensitive to predefined parameters. We therefore introduce DPC instead of a traditional clustering method. DPC first selects
the cluster centers, which are determined by the local density and relative distance of particles. However, the original DPC cannot
automatically create sub-populations. To address this issue, we formulate 𝛾𝑖 , which is calculated from the local density 𝜌𝑖 and the
relative distance 𝛿𝑖 as given in Eq. (6).

𝛾𝑖 = 𝜌𝑖 ∗ 𝛿𝑖 , (6)
where particles with relatively large 𝛾𝑖 values are chosen as cluster centers. However, determining how many particles to use
as clustering centers presents a challenge. Luo et al. [38] set the number of clusters to 0.3*N. The remaining individuals are then
distributed among the nearest clusters, forming multiple sub-populations.
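Center selection via Eq. (6) can be sketched as follows; the function name is ours, and the 0.3*N rule is the one cited above.

```python
import numpy as np

def select_centers(rho, delta, N):
    """Rank particles by gamma = rho * delta (Eq. 6) and take the top
    0.3*N as cluster centers, following the rule cited in the text."""
    gamma = np.asarray(rho, dtype=float) * np.asarray(delta, dtype=float)
    k = max(1, int(0.3 * N))                 # number of cluster centers
    return np.argsort(gamma)[::-1][:k]       # indices of the k largest gamma
```

The remaining particles would then be assigned to the cluster of their nearest neighbor with higher local density, as described for DPC above.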
A multi-level strategy is employed to distinguish the whole population. The first level involves obtaining sub-populations through
population clustering. The second level then further divides sub-populations into two parts based on an equilibrium point setting
strategy. On the one hand, the first part will enhance convergence ability. On the other hand, the second part will expand the scope
of exploration to find multiple peaks, in order to maintain population diversity. The third level divides each sub-population into a
winner group and a loser group based on individual fitness. Each group uses different parameters for PSO. Therefore, the multi-level
strategy can effectively balance exploitation and exploration.

3.3. Equilibrium point setting strategy

EAs often suffer from diversity loss when solving MMOPs. We have designed the equilibrium point setting strategy to maintain
population diversity. The equilibrium point can be calculated by the current optimal solution in each sub-population. To be specific,
we first preserve the optimal particle of each sub-population. We sort the optimal particles in descending order so that the differences
between adjacent particles can be calculated. Then, we take the point with the largest difference as the inflection point 𝑓𝑖𝑛 of the optimal
particles. We collect the particles that are better than the inflection point; the average value of the collected particles is denoted
as the equilibrium point 𝑓𝑒𝑞 . The equation for 𝑓𝑒𝑞 is as follows:
𝑓𝑒𝑞 = (1∕𝑚) ∑_{𝑖=1}^{𝑚} 𝑓𝑖 , 𝑖 ∈ {𝑖 | 𝑓𝐺𝑏𝑒𝑠𝑡𝑖 > 𝑓𝑖𝑛 }, (7)
where 𝑓𝐺𝑏𝑒𝑠𝑡𝑖 is the fitness value of the optimal particle in the 𝑖-th sub-population; 𝑓𝑖𝑛 is the fitness value of the inflection point; 𝑚 is
the number of optimal particles with fitness values greater than the inflection point, and 𝑓𝑖 represents a fitness value greater than
the inflection point.
Fig. 2 gives a comprehensive illustration of the equilibrium point 𝑓𝑒𝑞 . As shown in Fig. 2, {𝑂𝑃 1, 𝑂𝑃 2, 𝑂𝑃 3, 𝑂𝑃 4, 𝑂𝑃 5} are
the optimal particles in each sub-population. The sorted array is {𝑂𝑃 5, 𝑂𝑃 2, 𝑂𝑃 4, 𝑂𝑃 3, 𝑂𝑃 1}. The adjacent particles {𝑂𝑃 4, 𝑂𝑃 3}
have the largest difference, so 𝑂𝑃 3 is denoted as the inflection point 𝑓𝑖𝑛 , and 𝑓𝑖 contains {𝑂𝑃 5, 𝑂𝑃 2, 𝑂𝑃 4}. The
equilibrium point 𝑓𝑒𝑞 obtained from Eq. (7) is shown as the dotted line. If the objective value of a sub-population's optimal particle
is better than 𝑓𝑒𝑞 , that sub-population should improve the accuracy of the found solutions. In contrast, if it is worse than 𝑓𝑒𝑞 ,
the sub-population should maintain population diversity. This
accelerated convergence and exploration is achieved by adjusting the parameter 𝜔 of particle swarm optimization.
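The procedure above (sort, widest gap, average of the better side) can be sketched as follows; the function name is ours, and maximization is assumed.

```python
import numpy as np

def equilibrium_point(gbest_fits):
    """Eq. (7): sort the sub-population bests in descending order, take the
    largest adjacent gap as the inflection point f_in, and average every
    value strictly better than f_in (maximization assumed)."""
    s = np.sort(np.asarray(gbest_fits, dtype=float))[::-1]
    gaps = s[:-1] - s[1:]          # differences of adjacent sorted bests
    f_in = s[np.argmax(gaps) + 1]  # lower end of the widest gap
    return s[s > f_in].mean()
```

For the Fig. 2 example, the three bests above the gap ({𝑂𝑃 5, 𝑂𝑃 2, 𝑂𝑃 4}) would be averaged to give 𝑓𝑒𝑞 .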

3.4. PSO

The basic principles of PSO are mentioned in Section 2.1. Traditional PSO can easily fall into local optima. To address this issue,
researchers have proposed improved PSO, such as multi-strategy and adaptive PSO. As for the proposed MLPSO algorithm, the first
level partitions the population into some sub-populations by adopting the DPC. The second level subdivides sub-populations into


Fig. 3. Hierarchical iteration local search in the first iteration of the graph. (a) Determine the optimal particle movement direction. (b) Sample nine points uniformly
along the optimal particle movement direction.

two parts according to equilibrium point setting strategy. The third level divides each sub-population into the winner group and the
loser group according to individual fitness. We have embedded the competition mechanism into the traditional PSO. Specifically, the
winner group employs a small 𝜔 to accelerate convergence and enhance local search ability. The loser group chooses a relatively large
𝜔 to promote the exploration ability and maintain population diversity. After cooperating with the equilibrium point setting strategy,
the smaller 𝜔1 and 𝜔2 are applied to the sub-populations that need to improve the accuracy of the found solutions, where 𝜔1 is the
winner-group parameter and 𝜔2 is the loser-group parameter. The same is done for the second part, in which the sub-populations
need to enhance their exploration ability; there, the relatively larger 𝜔3 and 𝜔4 parameters are selected.
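The four-way weight assignment can be sketched as below. The concrete values 0.4/0.5/0.8/0.9 are purely illustrative assumptions (the paper does not give them here); the text only requires that exploiting sub-populations use the smaller pair 𝜔1, 𝜔2 and exploring ones the larger pair 𝜔3, 𝜔4, with the winner group getting the smaller weight of each pair.

```python
def pick_inertia(exploit, winner, w1=0.4, w2=0.5, w3=0.8, w4=0.9):
    """Return the inertia weight for a particle: exploiting sub-populations
    use the smaller pair (w1, w2), exploring ones the larger pair (w3, w4);
    within each pair the winner group gets the smaller weight."""
    if exploit:
        return w1 if winner else w2
    return w3 if winner else w4
```

Each particle's weight thus encodes both its sub-population's role (from the equilibrium point strategy) and its own competitive status.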

3.5. Exclusion mechanism

The main purpose of the niching technique is to locate more peaks in MMOPs. However, different sub-populations will search
for the same region during evolution. In existing multimodal optimization algorithms, little attention has been paid to this aspect.
Thus, we introduce an exclusion mechanism to ensure search efficiency. Fig. 2 not only introduces a comprehensive illustration of the
equilibrium point, but also gives the motivation of the exclusion mechanism. It is important to avoid repeatedly searching for the same
region in each sub-population. The critical issue is how to detect whether there are various sub-populations searching for the same
region [54]. If the Euclidean distance 𝑑𝑖𝑗 between 𝑥𝐺𝑏𝑒𝑠𝑡(𝑖) and 𝑥𝐺𝑏𝑒𝑠𝑡(𝑗) is smaller than 𝑅𝑒𝑥 , it can be considered that sub-populations
𝑖 and 𝑗 search for the same region. As shown in Fig. 2, there are two sub-populations searching for the same region. The distance
between 𝑂𝑃 4 and 𝑂𝑃 5 is less than the threshold value 𝑅𝑒𝑥 in the decision variable space. The 𝑂𝑃 4 sub-population is initialized to
enhance diversity. The 𝑅𝑒𝑥 parameter can be calculated as follows:
𝑅𝑒𝑥 = 𝑆∕(20 𝑛^{1∕𝐷}), (8)
where 𝑆 is the scope of the search space; 𝐷 is the decision variable dimension, and 𝑛 is the number of sub-populations. If 𝑓 (𝑥𝐺𝑏𝑒𝑠𝑡(𝑖) )
is greater than 𝑓 (𝑥𝐺𝑏𝑒𝑠𝑡(𝑗) ), two sub-populations merge into the combined population 𝑀𝐶𝑙(𝑖). Then, we sort the 𝑀𝐶𝑙(𝑖) according
to objective values. Finally, we maintain the initial particle count for the better sub-population. The best particles are filled into the
sub-population. The remaining particles are reinitialized.
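The threshold test can be sketched as follows. This assumes the reconstruction of Eq. (8) as 𝑅𝑒𝑥 = 𝑆∕(20 𝑛^{1∕𝐷}); the function names are ours.

```python
import numpy as np

def exclusion_radius(S, D, n):
    """Eq. (8): R_ex = S / (20 * n**(1/D)), with S the search-space range,
    D the decision-variable dimension and n the number of sub-populations."""
    return S / (20.0 * n ** (1.0 / D))

def same_peak(gbest_i, gbest_j, r_ex):
    """True when two sub-population bests are closer than R_ex, i.e. the
    two sub-populations are judged to be searching the same region."""
    diff = np.asarray(gbest_i, dtype=float) - np.asarray(gbest_j, dtype=float)
    return float(np.linalg.norm(diff)) < r_ex
```

When `same_peak` is true, the sub-population with the worse best fitness would be merged and its worst particles reinitialized, as described above.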

3.6. Hierarchical iterative local search

Traditional PSO can explore potential optimal solutions and accelerate convergence. However, PSO alone converges slowly near the optima;
Fig. 3 illustrates the difficulty. Specifically, point A is the global optimal solution after executing PSO, but the main purpose is
to find the multiple peaks P. Therefore, we formulate an effective local search mechanism to find these peaks more accurately. A
hierarchical iterative local search method is proposed to enhance the exploitation ability of the algorithm and improve the solution's
accuracy. Fig. 3 shows the hierarchical iteration local search in the first iteration. Fig. 3(a) shows how to determine the movement
direction. We observe the changes in the fitness value of the optimal solution for each sub-population by increasing or decreasing 𝜀
in each dimension. If the fitness value increases, it is considered that the direction in this dimension is correct. We then combine the
correct directions from all dimensions to obtain the overall movement direction. Fig. 3(b) represents nine points (𝐴1 − 𝐴9) uniformly
sampled along the direction of motion. Among the nine points sampled, the point (A5) with the largest fitness value is selected as the
new optimal solution, where the sampling step is 0.1. Similar to the first iteration, each of the following four iterations first determines the direction and then uniformly samples nine points along it. Unlike the first iteration, the second iteration uses a step size of 1/10 of that of the first iteration; in general, the step size of each iteration is 1/10 of that of the previous one. As the evolution progresses, the sampling range is gradually reduced, and the search stops when it approaches the optimal solution. The hierarchical iterative local search thus alleviates the slow convergence of the global search algorithm and improves the accuracy of the solution.

H. Pan, H. Yuan, Q. Yue et al., Information Sciences 702 (2025) 121909

Fig. 4. The final fitness landscape of four complex functions. (a) 𝐹6 (b) 𝐹7 (c) 𝐹13 (d) 𝐹15 .
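The two steps of each level, direction probing and uniform sampling, can be sketched as follows. This is a maximization sketch: the probe offset, the initial step of 0.1, and the factor-of-10 refinement over five levels follow the description above, while the objective and interfaces are placeholders:

```python
def probe_direction(f, x, eps=1e-3):
    """Step 1: perturb each dimension by +/- eps and keep the sign that
    increases the fitness (0 if neither direction improves)."""
    base = f(x)
    direction = []
    for d in range(len(x)):
        plus = list(x)
        plus[d] += eps
        minus = list(x)
        minus[d] -= eps
        if f(plus) > base:
            direction.append(1.0)
        elif f(minus) > base:
            direction.append(-1.0)
        else:
            direction.append(0.0)
    return direction

def hierarchical_local_search(f, x, step=0.1, levels=5):
    """Each level: probe a direction, uniformly sample nine points along it,
    move to the best sampled point, then shrink the step by a factor of 10."""
    for _ in range(levels):
        d = probe_direction(f, x)
        candidates = [[xi + step * k * di for xi, di in zip(x, d)]
                      for k in range(1, 10)]  # nine sampled points
        x = max(candidates + [x], key=f)
        step /= 10.0  # refine on the next level
    return x
```

Five levels reduce the sampling step from 0.1 to 1e-5, which is how the method trades a coarse global move for progressively finer refinement.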

4. Empirical results

4.1. Benchmark functions

In this section, we use 20 commonly used benchmark functions from the CEC2015 competition to evaluate the performance of
MLPSO. The CEC2015 competition features problems identical to those in the CEC2013 test suite [55], and the properties of these
functions are provided by the CEC2015 competition [55]. Fig. 4 gives the fitness landscapes of the four more complex test functions.
In addition, we introduce the performance metrics employed in the CEC2015 competition: the peak rate (PR) and the success
rate (SR).

4.1.1. PR
The PR is the average fraction of the predetermined, known global optima that the algorithm finds over all runs, expressed in percentage form. The equation for PR is shown below:

𝑃 𝑅 = (∑_{𝑖=1}^{𝑟} 𝑃 𝐹𝑖) / (𝑘𝑜 ∗ 𝑟),  (9)

where 𝑃 𝐹𝑖 represents the number of global optima found in the 𝑖-th run; 𝑟 represents the number of runs, and 𝑘𝑜 represents the number of known
global optima.
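As a concrete illustration, Eq. (9) can be computed directly from the per-run counts of optima found (the numbers below are hypothetical, not results from the paper):

```python
def peak_rate(found_per_run, known_optima):
    """PR, Eq. (9): average fraction of the known global optima
    found per run, over all runs."""
    r = len(found_per_run)
    return sum(found_per_run) / (known_optima * r)

# e.g. three runs on a problem with 4 known global optima,
# finding 4, 3, and 4 of them respectively
print(peak_rate([4, 3, 4], 4))  # → 0.9166666666666666
```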

Table 1
Experimental results of MLPSO for PR and SR.

| 𝜀 | f1 PR | f1 SR | f2 PR | f2 SR | f3 PR | f3 SR | f4 PR | f4 SR | f5 PR | f5 SR |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.1 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
| 0.01 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
| 0.001 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
| 0.0001 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
| 0.00001 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |

| 𝜀 | f6 PR | f6 SR | f7 PR | f7 SR | f8 PR | f8 SR | f9 PR | f9 SR | f10 PR | f10 SR |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.1 | 0.999 | 0.999 | 0.999 | 0.999 | 0.917 | 0.233 | 0.999 | 0.999 | 0.999 | 0.999 |
| 0.01 | 0.999 | 0.999 | 0.999 | 0.999 | 0.915 | 0.267 | 0.999 | 0.999 | 0.999 | 0.999 |
| 0.001 | 0.999 | 0.999 | 0.999 | 0.999 | 0.906 | 0.033 | 0.999 | 0.999 | 0.999 | 0.999 |
| 0.0001 | 0.999 | 0.999 | 0.999 | 0.999 | 0.915 | 0.167 | 0.999 | 0.999 | 0.999 | 0.999 |
| 0.00001 | 0.999 | 0.999 | 0.999 | 0.999 | 0.911 | 0.167 | 0.999 | 0.999 | 0.999 | 0.999 |

| 𝜀 | f11 PR | f11 SR | f12 PR | f12 SR | f13 PR | f13 SR | f14 PR | f14 SR | f15 PR | f15 SR |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.1 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
| 0.01 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
| 0.001 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
| 0.0001 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |
| 0.00001 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 | 0.999 |

| 𝜀 | f16 PR | f16 SR | f17 PR | f17 SR | f18 PR | f18 SR | f19 PR | f19 SR | f20 PR | f20 SR |
|---|---|---|---|---|---|---|---|---|---|---|
| 0.1 | 1.000 | 1.000 | 0.996 | 0.967 | 0.872 | 0.667 | 0.046 | 0.001 | 0.001 | 0.001 |
| 0.01 | 1.000 | 1.000 | 0.933 | 0.677 | 0.783 | 0.400 | 0.033 | 0.001 | 0.001 | 0.001 |
| 0.001 | 0.989 | 0.933 | 0.750 | 0.333 | 0.678 | 0.200 | 0.025 | 0.001 | 0.001 | 0.001 |
| 0.0001 | 0.856 | 0.533 | 0.475 | 0.000 | 0.417 | 0.067 | 0.013 | 0.001 | 0.001 | 0.001 |
| 0.00001 | 0.656 | 0.033 | 0.275 | 0.000 | 0.156 | 0.000 | 0.004 | 0.001 | 0.001 | 0.001 |

4.1.2. SR
The SR is the percentage of all runs that successfully find all the global optima. Its equation is shown below:

𝑆𝑅 = 𝑠𝑟 / 𝑟,  (10)

where 𝑠𝑟 stands for the number of successful runs.
An optimal solution counts as found when the difference between its fitness value and the predefined fitness value of the known optimum
is less than the precision level 𝜀. In addition, the number of particles 𝑁 is set to 80√(𝐷 ∗ 𝑞), where 𝑞 is the number of global optimal
solutions. The parameter 𝜔 of PSO is given later in the parameter sensitivity analysis.
In addition, MaxFEs is the maximum number of fitness evaluations, which differs across test functions. F1 to F5
are relatively simple test problems with MaxFEs of 5.0E+04; F6, F7, and F10 to F13 are relatively difficult
test problems with MaxFEs of 2.0E+05; F8, F9, and F14 to F20 are the most difficult test problems with
MaxFEs of 5.0E+05.
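Under this setup, a run is successful when every known optimum is matched to within 𝜀 in fitness; SR (Eq. (10)) is then the fraction of such runs. A minimal sketch (the run data below are hypothetical):

```python
def success_rate(runs, known_fitness, ko, eps):
    """SR, Eq. (10): fraction of runs that locate all ko global optima.
    Each run is given as the fitness values of the distinct optima it found;
    an optimum counts as found if its fitness is within eps of the known one."""
    successes = 0
    for fitnesses in runs:
        found = sum(1 for fv in fitnesses if abs(fv - known_fitness) < eps)
        if found >= ko:
            successes += 1
    return successes / len(runs)

# two runs on a problem with ko = 2 optima of fitness 1.0:
# the first run finds both, the second misses one
print(success_rate([[1.0, 1.0], [1.0, 0.9]], 1.0, 2, 1e-4))  # → 0.5
```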

4.2. Comparison algorithms

We compared MLPSO with 13 advanced multimodal optimization algorithms: CDE [11], SDE [51], NCDE [56], NSDE
[56], MOMMOP [57], DSDE [39], LIPS [58], NMMSO [59], LoICDE [14], OPDDE [60], PNPCDE [61], EMO-MMO [62], and DIDE
[39]. Table 1 shows the experimental results of MLPSO. In Table 2 and Table 3, the symbols “ + ”, “ − ” and “ ≈ ” denote that MLPSO
outperformed, underperformed, or had no statistically significant difference from the compared algorithm, respectively. Most of the
comparison results are taken from the algorithms' respective publications. The best results are highlighted in bold.
According to the data in Table 2 and Table 3, MLPSO achieved better PR results than most of the other algorithms.
1) For the first five problems (F1 to F5), most of the multimodal algorithms can find every global optimal solution within the
solution space. This is because the test problems are relatively simple, so it is easy for the algorithm to search for optimal solutions.
2) The test problems F6 to F10 contain many global peaks. For example, F8 has a total of 81 global peaks. For these test problems,
MLPSO performed well. Although slightly inferior to MOMMOP and EMO-MMO for the F8 problem, MLPSO still performed much
better than the other algorithms. This is because MLPSO employs an equilibrium point setting strategy to increase population diversity.

Table 2
Experimental results of PR and SR for MLPSO and compared algorithms on test problems F1-F20.

| Func | MLPSO PR | MLPSO SR | CDE PR | CDE SR | SDE PR | SDE SR | NCDE PR | NCDE SR | NSDE PR | NSDE SR | MOMMOP PR | MOMMOP SR | DSDE PR | DSDE SR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| f1 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.661(+) | 0.369 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f2 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.741(+) | 0.531 | 0.999(≈) | 0.999 | 0.781(+) | 0.659 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f3 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f4 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.279(+) | 0.001 | 0.999(≈) | 0.999 | 0.239(+) | 0.001 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f5 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.919(+) | 0.839 | 0.999(≈) | 0.999 | 0.739(+) | 0.489 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f6 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.049(+) | 0.001 | 0.299(+) | 0.001 | 0.061(+) | 0.001 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f7 | 0.999 | 0.999 | 0.859(+) | 0.001 | 0.061(+) | 0.001 | 0.869(+) | 0.001 | 0.049(+) | 0.001 | 0.999(≈) | 0.999 | 0.891(+) | 0.001 |
| f8 | 0.909 | 0.171 | 0.001(+) | 0.001 | 0.014(+) | 0.001 | 0.001(+) | 0.000 | 0.012(+) | 0.001 | 0.999(-) | 0.999 | 0.661(+) | 0.001 |
| f9 | 0.999 | 0.999 | 0.469(+) | 0.001 | 0.009(+) | 0.001 | 0.459(+) | 0.001 | 0.005(+) | 0.001 | 0.999(≈) | 1.000 | 0.363(+) | 0.001 |
| f10 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.151(+) | 0.001 | 0.989(+) | 0.861 | 0.097(+) | 0.001 | 0.999(≈) | 0.999 | 1.000(≈) | 1.000 |
| f11 | 1.000 | 1.000 | 0.330(+) | 0.000 | 0.314(+) | 0.000 | 0.729(+) | 0.059 | 0.248(+) | 0.000 | 0.716(+) | 0.020 | 1.000(≈) | 1.000 |
| f12 | 1.000 | 1.000 | 0.002(+) | 0.000 | 0.208(+) | 0.000 | 0.252(+) | 0.000 | 0.135(+) | 0.000 | 0.939(+) | 0.549 | 1.000(≈) | 1.000 |
| f13 | 0.999 | 0.999 | 0.141(+) | 0.001 | 0.289(+) | 0.001 | 0.670(+) | 0.001 | 0.219(+) | 0.001 | 0.670(+) | 0.001 | 0.912(+) | 0.549 |
| f14 | 0.999 | 0.999 | 0.026(+) | 0.001 | 0.221(+) | 0.001 | 0.670(+) | 0.001 | 0.189(+) | 0.001 | 0.670(+) | 0.001 | 0.670(+) | 0.001 |
| f15 | 0.999 | 0.999 | 0.004(+) | 0.001 | 0.111(+) | 0.001 | 0.320(+) | 0.001 | 0.131(+) | 0.001 | 0.618(+) | 0.001 | 0.635(+) | 0.001 |
| f16 | 0.856 | 0.533 | 0.001(+) | 0.001 | 0.111(+) | 0.001 | 0.670(+) | 0.001 | 0.169(+) | 0.001 | 0.650(+) | 0.001 | 0.667(+) | 0.001 |
| f17 | 0.475 | 0.001 | 0.001(+) | 0.001 | 0.080(+) | 0.001 | 0.249(+) | 0.001 | 0.110(+) | 0.001 | 0.505(-) | 0.001 | 0.375(+) | 0.001 |
| f18 | 0.417 | 0.067 | 0.171(+) | 0.001 | 0.030(+) | 0.001 | 0.490(-) | 0.001 | 0.159(+) | 0.001 | 0.497(-) | 0.001 | 0.667(-) | 0.001 |
| f19 | 0.019 | 0.001 | 0.001(+) | 0.001 | 0.095(-) | 0.001 | 0.348(-) | 0.001 | 0.100(-) | 0.001 | 0.223(-) | 0.001 | 0.395(-) | 0.001 |
| f20 | 0.001 | 0.001 | 0.001(≈) | 0.001 | 0.001(≈) | 0.001 | 0.250(-) | 0.001 | 0.119(-) | 0.001 | 0.125(-) | 0.001 | 0.314(-) | 0.001 |
| +∕−∕≈ | — | | 12/0/8 | | 17/1/2 | | 12/3/8 | | 16/2/2 | | 6/5/9 | | 8/3/9 | |

Table 3
Experimental results of PR and SR for MLPSO and compared algorithms on test problems F1-F20 (continued).

| Func | LIPS PR | LIPS SR | NMMSO PR | NMMSO SR | LoICDE PR | LoICDE SR | OPDDE PR | OPDDE SR | PNPCDE PR | PNPCDE SR | EMO-MMO PR | EMO-MMO SR | DIDE PR | DIDE SR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| f1 | 0.829(+) | 0.679 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 | 1.000(≈) | 1.000 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f2 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 |
| f3 | 0.961(+) | 0.961 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 |
| f4 | 0.990(+) | 0.961 | 1.000(≈) | 1.000 | 0.950(+) | 0.800 | 1.000(≈) | 1.000 | 0.185(+) | 0.000 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 |
| f5 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 | 1.000(≈) | 1.000 |
| f6 | 0.246(+) | 0.001 | 0.993(+) | 0.880 | 0.981(+) | 0.840 | 1.000(≈) | 1.000 | 0.942(+) | 0.680 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f7 | 0.400(+) | 0.001 | 0.999(≈) | 0.999 | 0.701(+) | 0.000 | 0.995(-) | 0.833 | 0.697(+) | 0.001 | 0.999(≈) | 0.999 | 0.921(+) | 0.040 |
| f8 | 0.084(+) | 0.001 | 0.904(≈) | 0.001 | 0.408(+) | 0.001 | 0.995(+) | 0.667 | 0.414(+) | 0.001 | 0.999(≈) | 0.999 | 0.692(+) | 0.001 |
| f9 | 0.104(+) | 0.001 | 0.975(+) | 0.220 | 0.276(+) | 0.001 | 0.848(+) | 0.000 | 0.267(+) | 0.000 | 0.950(+) | 0.001 | 0.571(+) | 0.001 |
| f10 | 0.748(+) | 0.000 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 | 1.000(≈) | 1.000 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f11 | 0.974(+) | 0.843 | 0.999(≈) | 0.999 | 0.674(+) | 0.000 | 1.000(≈) | 1.000 | 0.623(+) | 0.001 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f12 | 0.574(+) | 0.001 | 0.983(+) | 0.860 | 0.020(+) | 0.001 | 1.000(≈) | 1.000 | 0.003(+) | 0.001 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f13 | 0.794(+) | 0.176 | 0.990(+) | 0.940 | 0.440(+) | 0.001 | 1.000(≈) | 1.000 | 0.430(+) | 0.001 | 0.997(≈) | 0.980 | 0.987(+) | 0.920 |
| f14 | 0.644(+) | 0.001 | 0.700(+) | 0.001 | 0.603(+) | 0.001 | 0.733(+) | 0.000 | 0.317(+) | 0.001 | 0.733(+) | 0.061 | 0.773(+) | 0.020 |
| f15 | 0.336(+) | 0.001 | 0.620(+) | 0.001 | 0.235(+) | 0.001 | 0.704(+) | 0.000 | 0.093(+) | 0.001 | 0.595(+) | 0.001 | 0.748(+) | 0.001 |
| f16 | 0.304(+) | 0.001 | 0.663(+) | 0.001 | 0.420(+) | 0.001 | 0.667(+) | 0.000 | 0.133(+) | 0.001 | 0.657(+) | 0.001 | 0.667(+) | 0.001 |
| f17 | 0.162(+) | 0.001 | 0.478(≈) | 0.001 | 0.233(+) | 0.001 | 0.563(-) | 0.000 | 0.003(+) | 0.001 | 0.335(+) | 0.001 | 0.593(-) | 0.001 |
| f18 | 0.098(+) | 0.001 | 0.660(-) | 0.001 | 0.170(+) | 0.001 | 0.667(-) | 0.000 | 0.160(+) | 0.001 | 0.327(+) | 0.001 | 0.667(-) | 0.001 |
| f19 | 0.000(+) | 0.000 | 0.350(-) | 0.001 | 0.003(+) | 0.001 | 0.504(-) | 0.000 | 0.001(+) | 0.001 | 0.135(-) | 0.001 | 0.543(-) | 0.001 |
| f20 | 0.000(≈) | 0.000 | 0.170(-) | 0.001 | 0.125(-) | 0.001 | 0.467(-) | 0.000 | 0.008(≈) | 0.001 | 0.080(-) | 0.001 | 0.335(-) | 0.001 |
| +∕−∕≈ | 17/0/3 | | 7/3/10 | | 14/1/5 | | 5/5/10 | | 14/0/6 | | 6/2/12 | | 7/4/9 | |

3) MLPSO produced the best results on the five low-dimensional composite functions F11 to F15. F11-F13 are simpler low-dimensional
composite test problems, where only some algorithms can find all the global optima. The F14 and F15 problems are more
complex and are important problems in the low-dimensional composite tests. Most state-of-the-art algorithms are unable to find all the
optimal solutions on these problems. However, our proposed MLPSO has strong global search capability. The reason is that MLPSO employs
tailored strategies for the various states of the population, allowing it to efficiently search for the global optimal solutions.
4) MLPSO's results on the five high-dimensional composite functions F16-F20 were limited. MLPSO obtained better results on
the F16 problem and was slightly inferior to a few advanced comparison algorithms on the F17 and F18 problems. The experimental
results on the F19 and F20 problems are not as good. This is because the F19-F20 problems are high-dimensional composite test
problems, which are much more difficult and complex than the first three groups. In addition, the algorithm's effect is somewhat
limited because the curse of dimensionality can hardly be avoided as the dimension of the search space increases.
We classified the 20 test problems into two groups, F1-F15 and F16-F20, based on their dimensionality. Table 4 and Table 5
respectively show the statistical rankings of the algorithms on the 20 test problems [63,64]. The results show that MLPSO performed

Table 4
The statistical ranking on F1-F20 test problems. The “+∕−∕≈” rows count comparisons against MLPSO.

| Functions | Measures | MLPSO | CDE | SDE | NCDE | NSDE | MOMMOP | DSDE |
|---|---|---|---|---|---|---|---|---|
| F1-F15 | +∕−∕≈ | — | 8/0/7 | 14/0/1 | 10/0/5 | 13/0/2 | 9/1/5 | 6/0/9 |
| | Average ranking | 1.20 | 6.87 | 11.80 | 6.33 | 11.33 | 3.07 | 3.13 |
| F16-F20 | +∕−∕≈ | — | 4/0/1 | 3/1/1 | 2/3/0 | 3/2/0 | 1/4/0 | 2/3/0 |
| | Average ranking | 6.60 | 12.00 | 11.80 | 5.80 | 9.80 | 5.60 | 3.00 |

Table 5
The statistical ranking on F1-F20 test problems (continued).

| Functions | Measures | LIPS | NMMSO | LoICDE | OPDDE | PNPCDE | EMO-MMO | DIDE |
|---|---|---|---|---|---|---|---|---|
| F1-F15 | +∕−∕≈ | 13/0/2 | 7/0/8 | 10/0/5 | 5/1/9 | 10/0/5 | 3/1/11 | 6/0/9 |
| | Average ranking | 9.00 | 2.93 | 7.07 | 1.93 | 7.80 | 1.73 | 2.40 |
| F16-F20 | +∕−∕≈ | 4/0/1 | 1/3/1 | 4/1/0 | 1/4/0 | 4/1/0 | 3/2/0 | 1/4/0 |
| | Average ranking | 11.00 | 4.40 | 8.80 | 1.60 | 11.60 | 7.40 | 1.40 |

Table 6
Statistical results acquired by MLPSO and its variants MLPSO𝑣1 –MLPSO𝑣4 on F1-F20 test problems.

| Func | MLPSO PR | MLPSO SR | MLPSO𝑣1 PR | MLPSO𝑣1 SR | MLPSO𝑣2 PR | MLPSO𝑣2 SR | MLPSO𝑣3 PR | MLPSO𝑣3 SR | MLPSO𝑣4 PR | MLPSO𝑣4 SR |
|---|---|---|---|---|---|---|---|---|---|---|
| f1 | 0.999 | 0.999 | 0.900(+) | 0.800 | 0.783(+) | 0.700 | 0.933(+) | 0.867 | 0.900(+) | 0.767 |
| f2 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.483(+) | 0.067 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f3 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.067(+) | 0.067 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f4 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.050(+) | 0.001 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f5 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.600(+) | 0.233 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f6 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.002(+) | 0.001 | 0.999(≈) | 0.999 | 0.734(+) | 0.001 |
| f7 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.237(+) | 0.001 | 0.999(≈) | 0.999 | 0.528(+) | 0.001 |
| f8 | 0.915 | 0.617 | 0.680(+) | 0.001 | 0.001(+) | 0.001 | 0.716(+) | 0.001 | 0.370(+) | 0.001 |
| f9 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.014(+) | 0.001 | 0.999(≈) | 0.999 | 0.241(+) | 0.001 |
| f10 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.006(+) | 0.001 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f11 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.004(+) | 0.001 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f12 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.001(+) | 0.001 | 0.999(≈) | 0.999 | 0.999(≈) | 0.700 |
| f13 | 0.999 | 0.999 | 0.999(≈) | 0.999 | 0.001(+) | 0.001 | 0.999(≈) | 0.999 | 0.999(≈) | 0.999 |
| f14 | 0.999 | 0.999 | 0.989(+) | 0.933 | 0.001(+) | 0.001 | 0.999(≈) | 0.999 | 0.844(+) | 0.267 |
| f15 | 0.999 | 0.999 | 0.438(+) | 0.001 | 0.001(+) | 0.001 | 0.999(≈) | 0.999 | 0.933(+) | 0.733 |
| f16 | 0.856 | 0.533 | 0.167(+) | 0.001 | 0.001(+) | 0.001 | 0.433(+) | 0.001 | 0.133(+) | 0.001 |
| f17 | 0.475 | 0.001 | 0.025(+) | 0.001 | 0.001(+) | 0.001 | 0.401(+) | 0.001 | 0.113(+) | 0.001 |
| f18 | 0.417 | 0.067 | 0.044(+) | 0.001 | 0.001(+) | 0.001 | 0.267(+) | 0.001 | 0.063(+) | 0.001 |
| f19 | 0.019 | 0.001 | 0.001(+) | 0.001 | 0.001(+) | 0.001 | 0.002(+) | 0.001 | 0.001(+) | 0.001 |
| f20 | 0.001 | 0.001 | 0.001(≈) | 0.001 | 0.001(≈) | 0.001 | 0.001(≈) | 0.001 | 0.001(≈) | 0.001 |
| +∕−∕≈ | — | | 8/0/12 | | 19/0/1 | | 6/0/14 | | 11/0/9 | |

well on the F1-F15 test problems, while it was relatively inferior on F16-F20. Therefore, MLPSO exhibits
strong competitiveness on low-dimensional test problems, but it has certain limitations when dealing with high-dimensional ones.
The Wilcoxon rank-sum test statistics reveal that MLPSO significantly outperformed the other algorithms on a significantly larger
number of functions. This result indicates that MLPSO achieved a superior balance between convergence and diversity, suggesting
that the proposed strategies are effective.

4.3. Component analysis

This section explores the main MLPSO strategies, including the equilibrium point setting strategy and the hierarchical iterative
local search. The next section describes the characteristics and applications of each component.

4.3.1. Equilibrium point setting strategy analysis

We refer to the MLPSO variant without the equilibrium point setting strategy as MLPSO𝑣1 and compare it with the standard MLPSO
to investigate the impact of the equilibrium point setting strategy on algorithm performance. MLPSO𝑣1 was compared with
MLPSO on the F1-F20 test problems, and Table 6 shows the comparison results of PR and SR at an accuracy level of 𝜀 = 1.0𝐸 − 04.
MLPSO performed better than MLPSO𝑣1 on these 20 problems, which validates the effectiveness of our equilibrium point setting strategy.
MLPSO significantly outperformed MLPSO𝑣1 on the seven test problems F1, F8, and F15 through F19 in Table 6. This may be due
to the higher peaks of the F8 test problem and the relative complexity of the F15-F19 test problems. Through the
equilibrium point setting strategy, MLPSO can accelerate convergence or exploration depending on the state of each sub-population.
Therefore, the equilibrium point setting strategy promotes a balance between diversity and convergence by dynamically adjusting
the equilibrium point position of each sub-population.
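The branching behavior analyzed here reduces to a one-line decision rule (a schematic only; the actual equilibrium point computation and the two update modes are those defined in Section 3):

```python
def choose_mode(best_fitness, equilibrium_fitness):
    """Mode selection of a sub-population relative to its equilibrium point:
    exploit (accelerate convergence) when the best particle already exceeds
    the equilibrium fitness, otherwise explore (enhance diversity)."""
    return "exploit" if best_fitness > equilibrium_fitness else "explore"

print(choose_mode(0.95, 0.80))  # → exploit
print(choose_mode(0.60, 0.80))  # → explore
```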

4.3.2. Hierarchical iterative local search analysis


The local search method can significantly improve the accuracy of the solution, thereby enhancing the problem-solving capability
of the algorithm. Therefore, we focus on verifying the effectiveness of the hierarchical iterative local search by designing the variant
MLPSO𝑣2 without local search. The results of these comparisons allow for specific evaluation and improvement of the algorithm’s
performance.
Table 6 compares the experimental results of MLPSO and MLPSO𝑣2 on the test functions F1-F20. MLPSO with hierarchical iterative
local search outperformed MLPSO𝑣2 without local search on all test problems. MLPSO𝑣2 has a PR of nearly 0 on the more complex test
problems F11-F20, while MLPSO with hierarchical iterative local search performed much better, probably because MLPSO𝑣2 lacks
a local search strategy. In contrast, MLPSO is able to find the global optimal solutions by using hierarchical iterative local search. On
F8, the PR of MLPSO𝑣2 is nearly 0, probably because it is difficult for MLPSO𝑣2 to locate the global optima with sufficient
accuracy. On F1-F7 and F9-F12, although the PR of MLPSO𝑣2 is not 0, its performance still falls short of that of MLPSO.
This shows that local search plays an important role in the performance of optimization algorithms by improving
the accuracy of the solution. Therefore, hierarchical iterative local search proves to be a crucial optimization strategy in MLPSO.

4.3.3. Exclusion mechanism analysis


The exclusion mechanism can improve the search efficiency of solutions. We tested the performance of the variant
MLPSO𝑣3 without the exclusion mechanism and compared it with MLPSO on the 20 test functions F1-F20 to demonstrate the mechanism's effectiveness.
Table 6 shows that MLPSO with the exclusion mechanism performs better than MLPSO𝑣3 on F1, F8, and F16 to F19. On F1, the exclusion
mechanism enables the algorithm to obtain all the optimal solutions. On F8 and F16 to F19, due to the complexity
of the test problems, the exclusion mechanism makes the obtained solutions more effective to some extent. Therefore, the
exclusion mechanism promotes the effectiveness of the solutions.

4.3.4. DPC analysis


DPC is a parameter-free clustering method that typically exhibits good clustering performance. We compared the effectiveness of
DPC against the affinity propagation clustering variant MLPSO𝑣4 on the 20 test functions.
In Table 6, we observe that MLPSO performed better than MLPSO𝑣4 on test problems such as F1, F6-F9, and F14-F19, while they
perform similarly on the remaining test problems. This may be due to the varying distribution shapes of the particles on these test
problems, which affect the clustering results. This indicates that the DPC method can obtain more reasonable clustering
results for data distributions of different shapes.
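For reference, the quantities DPC is built on (Rodriguez and Laio's density peaks) can be sketched as follows; the Gaussian-kernel density and the cutoff d_c are illustrative choices, not necessarily the exact variant used in MLPSO:

```python
import math

def density_peaks(points, d_c):
    """Density-peak quantities: rho_i is a Gaussian-kernel local density,
    delta_i the distance to the nearest point of higher density.
    Points with large rho_i * delta_i act as density peaks (cluster centers)."""
    n = len(points)
    dist = [[math.dist(points[i], points[j]) for j in range(n)] for i in range(n)]
    rho = [sum(math.exp(-(dist[i][j] / d_c) ** 2) for j in range(n) if j != i)
           for i in range(n)]
    delta = []
    for i in range(n):
        higher = [dist[i][j] for j in range(n) if rho[j] > rho[i]]
        delta.append(min(higher) if higher else max(dist[i]))
    return rho, delta
```

On two well-separated groups of points, the densest point of each group receives a large rho_i · delta_i product, which is why DPC picks out one center per mode without a preset cluster count.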

4.4. Parameter analysis

We designed experiments to investigate the effect of the parameters 𝜔1 , 𝜔2 , 𝜔3 , 𝜔4 of the competition-based PSO on the
performance of MLPSO. Each 𝜔 ∈ {0.1, 0.2, 0.3, ..., 1.9, 2.0} was evaluated. Tests
were performed on the F16 test problem, each configuration was run independently 30 times, and the peak rate (PR) was used as the performance
indicator.
Fig. 5 shows the relationship between the four inertia weights and the algorithm's performance. 𝜔1 and 𝜔2 are the inertia
weights of the winners and losers of the accelerated-convergence sub-populations. For 𝜔1 , the best
performance is between 0.3 and 0.4, and we chose 0.3 in the experiments. The best performance for the loser weight 𝜔2 is between 0.8 and 0.9,
and we chose 0.9. 𝜔3 and 𝜔4 are the inertia weights of the winners and losers of the accelerated-exploration sub-populations. Fig. 5
also shows the effect of 𝜔3 , the inertia weight of the winners of the exploration sub-populations: choosing 𝜔3 between 0.6 and 0.8 yields better performance, so 𝜔3 = 0.6 was chosen. For the losers of the exploration sub-populations, the algorithm showed optimal performance when 𝜔4 ranged from
0.9 to 1.0, so 𝜔4 = 1.0 was chosen.
In exploration sub-populations, both winners and losers use relatively large inertia weights. This is because the main goal of exploration
sub-populations is to search the space extensively to discover new potential optimal solutions: a larger inertia weight helps particles
maintain a larger movement distance, thereby increasing the chance of discovering new regions. Meanwhile, the inertia
weight of the winner (𝜔3 = 0.6) is slightly smaller than that of the loser (𝜔4 = 1.0), possibly because the winner has already
found some good solutions and can slightly reduce its movement distance for a more refined search. In summary, in the accelerated-convergence
sub-populations, smaller inertia weights aid exploitation; in the accelerated-exploration sub-populations, larger inertia weights aid exploration. The experiments also validated the effectiveness of assigning different
inertia weights to winners and losers in different sub-populations.
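The role of the winner/loser inertia weights can be illustrated with a schematic pairwise-competition update. This loosely follows the competitive swarm optimizer; MLPSO's exact update rule is defined in Section 3, and this sketch only shows where a winner weight and a loser weight enter:

```python
import random

def competition_step(positions, velocities, f, w_winner, w_loser):
    """One round of pairwise competition: particles are paired at random,
    the winner keeps moving under its own (smaller) inertia weight, and the
    loser moves under a larger inertia weight while also being pulled
    toward the winner's position."""
    idx = list(range(len(positions)))
    random.shuffle(idx)
    for a, b in zip(idx[::2], idx[1::2]):
        win, lose = (a, b) if f(positions[a]) >= f(positions[b]) else (b, a)
        for d in range(len(positions[win])):
            r = random.random()
            velocities[win][d] = w_winner * velocities[win][d]
            velocities[lose][d] = (w_loser * velocities[lose][d]
                                   + r * (positions[win][d] - positions[lose][d]))
            positions[win][d] += velocities[win][d]
            positions[lose][d] += velocities[lose][d]
    return positions, velocities
```

With w_winner = 0.3 and w_loser = 0.9 (the convergence-oriented pair above), the winner takes small refining steps while the loser retains more momentum and is drawn toward the winner.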

Fig. 5. Experimental results for four different 𝜔 on F16.

5. Conclusion

We have designed MLPSO for MMOPs, utilizing DPC to eliminate the influence of sensitive parameters. MLPSO combines three key techniques to improve the effectiveness of the algorithm: the equilibrium point setting strategy, the exclusion mechanism, and hierarchical iterative local search. Among them, the equilibrium point setting strategy helps to achieve a balance
between the exploration and exploitation capabilities of the population: if the fitness value of the optimal particle is higher than
the equilibrium point, we accelerate the exploitation process of the sub-population; otherwise, we enhance its ability to explore.
The exclusion mechanism ensures that sub-populations do not search the same area repeatedly, thereby enhancing population
diversity: if the Euclidean distance between adjacent optimal particles is less than the threshold, the sub-population with the poorer
performance is reinitialized. Hierarchical iterative local search further improves the solution's accuracy: the first step selects an
appropriate search direction by perturbing the optimal particle by a small value in each dimension, and the second step selects the point with the
highest fitness value among uniformly sampled points as the new optimal particle, with the uniform sampling step size gradually
decreasing over the iterations. In addition, we conducted comparative tests for each proposed strategy, and the experimental results verified
the effectiveness of these strategies.
In the comparative experiments, we selected 20 test functions and compared MLPSO with various advanced algorithms. The
results indicate that MLPSO is highly competitive on MMOPs. However, the performance of our algorithm on high-dimensional
problems still needs improvement. Therefore, our next focus is to study high-dimensional MMOPs with the aim of
enhancing the effectiveness of the algorithm in high-dimensional settings [65]. We also plan to conduct research on
dynamic environments with the aim of enhancing the adaptability and robustness of the algorithm [66].

CRediT authorship contribution statement

Hao Pan: Writing -- review & editing, Writing -- original draft, Formal analysis, Conceptualization. Hui Yuan: Writing -- review &
editing, Writing -- original draft, Supervision. Qiang Yue: Software, Methodology, Investigation. Haibin Ouyang: Resources, Project
administration, Methodology. Fangqing Gu: Visualization, Validation, Software. Fei Li: Writing -- review & editing, Writing -- original
draft, Funding acquisition, Data curation.

Declaration of competing interest

The authors declare the following financial interests/personal relationships which may be considered as potential competing
interests: Fei Li reports financial support was provided by the National Natural Science Foundation of China. The authors declare
that they have no other known competing financial interests or personal relationships that could have appeared to influence
the work reported in this paper.

Acknowledgement

This work was supported by the National Natural Science Foundation of China under Grant 61903003, the Anhui
Provincial Natural Science Foundation (Grants 2308085MF199 and 2008085QE227), the Scientific Research Projects in Colleges and
Universities of Anhui Province (Grant 2023AH051124), the Open Project of the Anhui Province Engineering Laboratory
of Intelligent Demolition Equipment (Grant APELIDE2022A007), the Open Project of the Anhui Province Key Laboratory
of Special and Heavy Load Robot (Grants TZJQR001-2021 and TZJQR011-2024), the Anhui Province Key Laboratory of Metallurgical
Engineering and Resources Recycling (SKF24-05), and the Opening Project of the Key Laboratory of Power Electronics and
Motion Control of Anhui Higher Education Institutions (PEMC24001).

Data availability

Data will be made available on request.

References

[1] B.D.S.G. Vidanalage, M.S. Toulabi, S. Filizadeh, Multimodal design optimization of V-shaped magnet IPM synchronous machines, IEEE Trans. Energy Convers. 33 (3) (2018) 1547--1556, https://doi.org/10.1109/TEC.2018.2807618.
[2] C.-H. Yoo, D.-K. Lim, H.-K. Jung, A novel multimodal optimization algorithm for the design of electromagnetic machines, IEEE Trans. Magn. 52 (3) (2016) 1--4, https://doi.org/10.1109/TMAG.2015.2478060.
[3] J. Jang, N. Seong, Deep reinforcement learning for stock portfolio optimization by connecting with modern portfolio theory, Expert Syst. Appl. 218 (C) (May 2023), https://doi.org/10.1016/j.eswa.2023.119556.
[4] Y.-J. Gong, J. Zhang, Y. Zhou, Learning multimodal parameters: a bare-bones niching differential evolution approach, IEEE Trans. Neural Netw. Learn. Syst. 29 (7) (2018) 2944--2959, https://doi.org/10.1109/TNNLS.2017.2708712.
[5] D. Ramachandram, M. Lisicki, T.J. Shields, M.R. Amer, G.W. Taylor, Bayesian optimization on graph-structured search spaces: optimizing deep multimodal fusion architectures, Neurocomputing 298 (2018) 80--89, https://doi.org/10.1016/j.neucom.2017.11.071.
[6] D.K. Jain, X. Zhao, G. González-Almagro, C. Gan, K. Kotecha, Multimodal pedestrian detection using metaheuristics with deep convolutional neural network in crowded scenes, Inf. Fusion 95 (2023) 401--414, https://doi.org/10.1016/j.inffus.2023.02.014.
[7] Z. Yi, J. Yu, Y. Tan, Q. Wu, A multimodal adversarial attack framework based on local and random search algorithms, Int. J. Comput. Intell. Syst. 14 (2021) 1934--1947, https://doi.org/10.2991/ijcis.d.210624.001.
[8] J. Liang, A. Qin, P. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Trans. Evol. Comput. 10 (3) (2006) 281--295, https://doi.org/10.1109/TEVC.2005.857610.
[9] S. Das, P.N. Suganthan, Problem definitions and evaluation criteria for CEC 2011 competition on testing evolutionary algorithms on real world optimization problems, https://al-roomi.org/multimedia/CEC_Database/CEC2011/CEC2011_TechnicalReport.pdf, 2011.
[10] J. Brest, M.S. Maučec, B. Bošković, The 100-digit challenge: algorithm jde100, in: 2019 IEEE Congress on Evolutionary Computation (CEC), 2019, pp. 19--26.
[11] R. Thomsen, Multimodal optimization using crowding-based differential evolution, in: Proceedings of the 2004 Congress on Evolutionary Computation (IEEE
Cat. No. 04TH8753), vol. 2, 2004, pp. 1382--1389.
[12] W. Gao, G.G. Yen, S. Liu, A cluster-based differential evolution with self-adaptive strategy for multimodal optimization, IEEE Trans. Cybern. 44 (8) (2014) 1314--1327, https://doi.org/10.1109/TCYB.2013.2282491.
[13] B.Y. Qu, P.N. Suganthan, J.J. Liang, Differential evolution with neighborhood mutation for multimodal optimization, IEEE Trans. Evol. Comput. 16 (5) (2012) 601--614, https://doi.org/10.1109/TEVC.2011.2161873.
[14] S. Biswas, S. Kundu, S. Das, Inducing niching behavior in differential evolution through local information sharing, IEEE Trans. Evol. Comput. 19 (2) (2015) 246--263, https://doi.org/10.1109/TEVC.2014.2313659.
[15] S. Hui, P.N. Suganthan, Ensemble and arithmetic recombination-based speciation differential evolution for multimodal optimization, IEEE Trans. Cybern. 46 (1) (2016) 64--74, https://doi.org/10.1109/TCYB.2015.2394466.
[16] B.-Y. Qu, P.N. Suganthan, Novel multimodal problems and differential evolution with ensemble of restricted tournament selection, in: IEEE Congress on Evolutionary Computation, 2010, pp. 1--7.
[17] M.G. Epitropakis, X. Li, E.K. Burke, A dynamic archive niching differential evolution algorithm for multimodal optimization, in: 2013 IEEE Congress on Evolutionary Computation, 2013, pp. 79--86.
[18] S. Huang, H. Jiang, Multimodal estimation of distribution algorithm based on cooperative clustering strategy, in: 2018 Chinese Control and Decision Conference
(CCDC), 2018, pp. 5297--5302.
[19] G. Zhang, L. Yu, Q. Shao, Y. Feng, A clustering based GA for multimodal optimization in uneven search space, in: 2006 6th World Congress on Intelligent Control and Automation, vol. 1, 2006, pp. 3134--3138.
[20] C.-H. Chen, T.-K. Liu, J.-H. Chou, A novel crowding genetic algorithm and its applications to manufacturing robots, IEEE Trans. Ind. Inform. 10 (3) (2014) 1705--1716, https://doi.org/10.1109/TII.2014.2316638.
[21] E. Covantes Osuna, D. Sudholt, Runtime analysis of crowding mechanisms for multimodal optimization, IEEE Trans. Evol. Comput. 24 (3) (2020) 581--592, https://doi.org/10.1109/TEVC.2019.2914606.
[22] Y.-H. Zhang, Y.-J. Gong, W.-N. Chen, J. Zhang, Composite differential evolution with queueing selection for multimodal optimization, in: 2015 IEEE Congress
on Evolutionary Computation (CEC), 2015, pp. 425--432.
[23] R. Stoean, C. Stoean, D. Dumitrescu, Investigating landscape topology for subpopulation differentiation in genetic chromodynamics, in: 2008 10th International
Symposium on Symbolic and Numeric Algorithms for Scientific Computing, 2008, pp. 551--554.
[24] R. Patel, M.M. Raghuwanshi, A.N. Jaiswal, Modifying genetic algorithm with species and sexual selection by using k-means algorithm, in: 2009 IEEE International
Advance Computing Conference, 2009, pp. 114--119.
[25] D. Fan, W. Sheng, S. Chen, A diverse niche radii niching technique for multimodal function optimization, in: 2013 Chinese Automation Congress, 2013, pp. 70--74.
[26] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proceedings of ICNN’95 - International Conference on Neural Networks, vol. 4, 1995, pp. 1942--1948.
[27] X. Ji, Y. Zhang, D. Gong, X. Sun, Dual-surrogate-assisted cooperative particle swarm optimization for expensive multimodal problems, IEEE Trans. Evol. Comput.
25 (4) (2021) 794--808, https://doi.org/10.1109/TEVC.2021.3064835.
[28] B.Y. Qu, P.N. Suganthan, S. Das, A distance-based locally informed particle swarm model for multimodal optimization, IEEE Trans. Evol. Comput. 17 (3) (2013)
387--402, https://doi.org/10.1109/TEVC.2012.2203138.
[29] J. Zou, Q. Deng, J. Zheng, S. Yang, A close neighbor mobility method using particle swarm optimizer for solving multimodal optimization problems, Inf. Sci. 519
(2020) 332--347, https://doi.org/10.1016/j.ins.2020.01.049, https://www.sciencedirect.com/science/article/pii/S002002552030061X.
[30] Q. Yang, W.-N. Chen, Y. Li, C.L.P. Chen, X.-M. Xu, J. Zhang, Multimodal estimation of distribution algorithms, IEEE Trans. Cybern. 47 (3) (2017) 636--650,
https://doi.org/10.1109/TCYB.2016.2523000.
[31] T. Friedrich, T. Kötzing, M.S. Krejca, A.M. Sutton, The compact genetic algorithm is efficient under extreme gaussian noise, IEEE Trans. Evol. Comput. 21 (3)
(2017) 477--490, https://doi.org/10.1109/TEVC.2016.2613739.
[32] M. Wang, X. Li, L. Chen, An enhance multimodal multiobjective optimization genetic algorithm with special crowding distance for pulmonary hypertension feature
selection, Comput. Biol. Med. 146 (2022) 105536, https://doi.org/10.1016/j.compbiomed.2022.105536, https://www.sciencedirect.com/science/article/pii/
S0010482522003286.
[33] R.A. Sarker, S.M. Elsayed, T. Ray, Differential evolution with dynamic parameters selection for optimization problems, IEEE Trans. Evol. Comput. 18 (5) (2014)
689--707, https://doi.org/10.1109/TEVC.2013.2281528.
[34] Z.-J. Wang, Y.-R. Zhou, J. Zhang, Adaptive estimation distribution distributed differential evolution for multimodal optimization problems, IEEE Trans. Cybern.
52 (7) (2022) 6059--6070, https://doi.org/10.1109/TCYB.2020.3038694.

14
H. Pan, H. Yuan, Q. Yue et al.
Information Sciences 702 (2025) 121909