A Novel Multi-Swarm Particle Swarm Optimization With Dynamic Learning Strategy

Article history:
Received 3 April 2016
Received in revised form 25 July 2017
Accepted 26 August 2017
Available online 1 September 2017

Keywords:
Particle swarm optimization
Multi-swarm
Dynamic learning strategy
Particle classification
Dynamic evolution
CEC 2015

Abstract

In this paper, we propose a novel multi-swarm particle swarm optimization with dynamic learning strategy (PSO-DLS) to improve the performance of PSO. To promote information exchange among sub-swarms, the particle classification mechanism divides the particles of each sub-swarm into ordinary particles and communication particles with different tasks at each iteration. The ordinary particles focus on exploitation under the guidance of the local best position of their own sub-swarm, while the communication particles, with their dynamic ability, focus on exploration under the guidance of a united local best position in a new search region and thereby promote information exchange among sub-swarms. Moreover, the strategy sets a dynamic control mechanism with an increasing parameter p for implementing the classification operation, which gives ordinary particles an increasing chance of evolving into communication particles during the searching process. A simple analysis of the searching behavior supports its remarkable impact on maintaining diversity and finding better solutions. Experimental results on the 15 function problems of CEC 2015 for 10 and 30 dimensions also demonstrate its promising effectiveness in solving complex problems, compared statistically with other algorithms. What is more, the computational times reveal the subtle design of PSO-DLS.

http://dx.doi.org/10.1016/j.asoc.2017.08.051
© 2017 Elsevier B.V. All rights reserved.
particle swarm optimization with dynamic learning strategy (PSO-DLS). First, the original single swarm is divided into several sub-swarms, and several lbests replace gbest to instruct the updating, which obviously lessens the influence of gbest on the whole population. Second, the particles in each sub-swarm are classified into two types by their updating formula. Ordinary particles (Type A) utilize lbest to exploit within their own sub-swarms, while communication particles (Type B) utilize a united lbest to explore toward a new search field. Communication particles with dynamic ability can help collective information spread among the separate sub-swarms. Moreover, the proposed strategy sets a dynamic control mechanism that allows more and more ordinary particles to evolve into communication ones, which provides good performance in both exploration and exploitation. With constant collaborative information instead of private information, each sub-swarm maintains diversity during the searching process. In this manner, the separate sub-swarms reunite to some extent, and achieving a satisfactory solution becomes possible for multimodal function optimization.

The rest of this paper is arranged as follows: Section 2 reviews the basics and improvements of PSO. PSO-DLS is analyzed in detail in Section 3. Numerical results used for the experimental study and a discussion of efficiency are given in Section 4. The conclusions are presented in Section 5.
2. Review of standard PSO and its variants

2.1. Standard PSO
In the particle swarm optimization algorithm, each solution to a specific problem can be described by a particle in the search space, and the particle swarm consists of these randomly initialized individuals in the feasible space. According to its current position, each particle possesses a fitness value and a velocity to adjust its flying direction. The flying direction and distance depend on the difference between each particle's position and the guiding position, and the fitness evaluation is determined by the objective function of the specific issue. The best positions instruct the particles to fly through the solution space. Once a certain stopping criterion is met, the best individual stands for the best solution found by the algorithm.

The updating process can be converted into a mathematical model. Suppose that several particles are generated to find the potential global optimum. The position of the $i$-th particle can be represented by a $D$-dimensional vector $x_i = (x_i^1, x_i^2, \ldots, x_i^D)$, where $x_i^d \in [x_{\min}, x_{\max}]$, and the corresponding velocity is $v_i = (v_i^1, v_i^2, \ldots, v_i^D)$, where $v_i^d \in [v_{\min}, v_{\max}]$. The best position previously visited by the $i$-th particle is denoted by $pbest_i = (pbest_i^1, pbest_i^2, \ldots, pbest_i^D)$, and the global best position of the whole swarm obtained so far is denoted by $gbest = (gbest^1, gbest^2, \ldots, gbest^D)$.
The velocity and position of the particles at the next iteration are updated according to the following equations:

$v_i^d = \omega v_i^d + c_1 r_1 (pbest_i^d - x_i^d) + c_2 r_2 (gbest^d - x_i^d)$   (1)

$x_i^d = x_i^d + v_i^d$   (2)

where $c_1$ and $c_2$ are positive constants reflecting the weighting of the stochastic acceleration terms that pull each particle toward the $pbest_i$ and $gbest$ positions, respectively, during the flight process. Generally speaking, $c_1$ scales the acceleration with which the $i$-th particle flies toward its own previous best position $pbest_i$, and $c_2$ scales the acceleration with which it flies toward the best position $gbest$ found by the whole swarm. All particles use the same values of $c_1$ and $c_2$; $r_1$ and $r_2$ are random variables distributed uniformly in the range $(0, 1)$, and $\omega$ is the inertia weight used to balance the global and local search abilities. At each iteration, any better position is stored in memory to update $pbest_i$ and $gbest$. The experiences of the individual and its companions thus both influence the flight: when a particle achieves a new improved position, all the other particles move closer to it. This process is repeated until a satisfactory solution is found or a certain stopping criterion is met.

In the PSO domain, there are two main variants: global PSO and local PSO. The global PSO formula is introduced above, where the whole population is well-mixed and there is only a single group. In the local version, each particle's velocity is adjusted according to its historical best position $pbest_i$ and the best position achieved so far within its group, $lbest$. The velocity update strategy is described as follows:

$v_i^d = \omega v_i^d + c_1 r_1 (pbest_i^d - x_i^d) + c_2 r_2 (lbest^d - x_i^d)$   (3)
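For concreteness, the sketch below (ours, not the authors' implementation; the synchronous update and the clipping bounds are our assumptions) shows one iteration of Eqs. (1) and (2) in NumPy; passing each particle's group best in place of gbest turns it into the local update of Eq. (3).

```python
import numpy as np

def pso_step(x, v, pbest, guide, w=0.7, c1=1.49445, c2=1.49445,
             v_max=100.0, x_lo=-100.0, x_hi=100.0):
    """One synchronous PSO update of Eqs. (1)-(2).

    `guide` is gbest (shape (D,)) for global PSO, or a per-particle
    lbest array (shape (n, D)) for the local version of Eq. (3).
    """
    n, d = x.shape
    r1 = np.random.rand(n, d)  # r1, r2 ~ U(0, 1), drawn per particle and dimension
    r2 = np.random.rand(n, d)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (guide - x)  # Eq. (1) / Eq. (3)
    v = np.clip(v, -v_max, v_max)    # keep velocity within [v_min, v_max]
    x = np.clip(x + v, x_lo, x_hi)   # Eq. (2), confined to the search range
    return x, v
```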
2.2. Some variants of PSO

Despite the success of PSO in certain applications, traditional PSO still has room for improvement. Many works with specific techniques have been proposed over the past decades. Most studies on improving PSO fall into three categories.

(1) Modifying the inertia weight and acceleration coefficients [32–36]. Shi and Eberhart introduced an inertia weight to reduce the restriction on velocity and control the scope of the search [32]. Ratnaweera et al. [33] proposed that the acceleration coefficients c1 and c2 be decreased and increased, respectively, with time, to regulate the global and local search abilities. Adaptive methods were also introduced to adjust the value of the inertia weight according to evolutionary states [34–36].

(2) Designing the topology structure [37–44]. Among the previous works on topology structure, the concept of multi-swarm has been adopted by many researchers. In [37], multiple subpopulations (also named sub-swarms) were investigated for multimodal functions, where the original single population is split into several subpopulations so that the whole population is no longer dominated by gbest and maintains its diversity. In species-based PSO (SPSO) [38], the population is also divided into sub-swarms, by similarity, and the update of standard PSO is applied to each sub-swarm independently. The multi-swarm idea is used by Cooperative PSO (CPSO) [39] in another manner: the solution is separated into sub-vectors, each of which is found by a corresponding sub-swarm, and the solutions from the sub-swarms are finally combined. In [40], Liang and Suganthan designed a dynamic multi-swarm particle swarm optimization (DMS-PSO), whose particles dynamically combine into many small swarms at random and regroup frequently to ensure that diverse information is exchanged among the multi-modal regions. To strengthen information communication, Ma et al. [41] introduced a strategy of particle migration in which particles are able to migrate from one group to another during the evolution. An adaptive population size was investigated by Chen and Zhao [42]. PSO has also been studied on complex networks with heterogeneous structure, including scale-free and small-world networks [43,44].

(3) Altering the evolutionary learning strategy or integrating specific techniques [45–57]. Kaveh et al. [46] developed an efficient hybrid method for truss weight minimization problems by transplanting the swallow swarm into the particle swarm. Mendes et al. [47] proposed a novel fully informed particle swarm (FIPS), in which every neighbor of a particle is allowed to contribute to the velocity update as a cumulative effect of the information; this mechanism increases the sources of information for each particle. Wang et al. [48] proposed the multi-layer particle swarm optimization (MLPSO), in which the upper layer leads the lower layer in thoroughly searching the multi-modal regions.
Algorithm 1 (PSO-DLS).
Input: each swarm's population size N, number of swarms M, maximum iteration iter_max
Output: the best solution gbest
1:  Initialize M × N particles with positions x and velocities v. Divide the population into M sub-swarms, each containing N particles. Initialize the best positions pbest, lbest and gbest.
2:  for t = 1 to iter_max do
3:      for i = 1 to M × N do
4:          Generate a random number rand uniformly distributed in (0, 1)
5:          if rand < t/iter_max then
6:              Execute the velocity updating formula of Eq. (3)
7:          else
8:              Execute the velocity updating formula of Eq. (4)
9:          end if
10:         Execute the position updating formula of Eq. (2)
11:     end for
12:     Evaluate the fitness of each particle according to its position vector x
13:     Update the best positions pbest, lbest and gbest
14: end for
15: return the best position gbest
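To make the control flow concrete, here is a minimal Python sketch of Algorithm 1 (our illustration, not the authors' code). The communication-particle update of Eq. (4) around a united lbest is defined in Section 3, which is not reproduced in this excerpt, so the sketch assumes the united lbest is the best of the M sub-swarm lbests and, following the paper's account that ordinary particles evolve into communication particles with increasing probability, treats the branch taken with probability t/iter_max as the communication update.

```python
import numpy as np

def pso_dls(f, M=10, N=4, D=2, iter_max=2500, w=0.7, c1=1.49445, c2=1.49445,
            x_lo=-100.0, x_hi=100.0, v_max=100.0):
    """Sketch of Algorithm 1 (PSO-DLS). Eq. (4)'s united lbest is *assumed*
    here to be the best of the M sub-swarm lbests."""
    x = np.random.uniform(x_lo, x_hi, (M, N, D))  # M sub-swarms of N particles
    v = np.zeros((M, N, D))
    pbest = x.copy()
    pcost = np.apply_along_axis(f, 2, x)          # cost of each pbest, shape (M, N)
    for t in range(1, iter_max + 1):
        lbest = pbest[np.arange(M), pcost.argmin(axis=1)]  # best position per sub-swarm
        united = lbest[pcost.min(axis=1).argmin()]         # assumption standing in for Eq. (4)
        for m in range(M):
            for i in range(N):
                r1, r2 = np.random.rand(D), np.random.rand(D)
                # Dynamic control: the communication branch grows with p = t/iter_max.
                guide = united if np.random.rand() < t / iter_max else lbest[m]
                v[m, i] = np.clip(w * v[m, i]
                                  + c1 * r1 * (pbest[m, i] - x[m, i])
                                  + c2 * r2 * (guide - x[m, i]), -v_max, v_max)
                x[m, i] = np.clip(x[m, i] + v[m, i], x_lo, x_hi)  # Eq. (2)
        cost = np.apply_along_axis(f, 2, x)       # step 12: evaluate fitness
        better = cost < pcost                     # step 13: update pbest
        pcost = np.where(better, cost, pcost)     # (lbest and gbest derive from pbest)
        pbest = np.where(better[..., None], x, pbest)
    return pbest.reshape(M * N, D)[pcost.argmin()]  # gbest
```

Here f is any callable mapping a D-dimensional vector to a scalar cost, for example the 2-D Griewank function of Eq. (5) in Section 4.1.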
4. Experiments

4.1. Searching behavior

To test the performance of the proposed algorithm, a multimodal function (Griewank) is chosen to illustrate the simulation results of the multi-swarm PSO with dynamic learning strategy. The 2-D optimized function is given by

$f(x, y) = \frac{1}{4000}(x^2 + y^2) - \cos x \cos \frac{y}{\sqrt{2}} + 1$   (5)

Fig. 2 shows the 3-D surface and contour of the optimized function.

Fig. 2. The 3-D surface and contour of the multimodal function: the 2-D Griewank.

It can easily be seen from Eq. (5) or the contour that there are a lot of local optima around the global minimum at (x, y) = (0, 0), which easily leaves the whole population trapped in local search regions.

To show the performance of the proposed algorithm, Figs. 3 and 4 demonstrate the step-by-step optimization process of PSO-DLS compared with the original PSO. The convergence dynamics of all particles are displayed at iterations 0, 5, 10, and 20, respectively.

Fig. 3 shows the convergence behavior of the particles in one swarm (the original PSO) with 40 particles. It is apparent that all the particles are initially distributed uniformly, and the global best position is found approximately at (10, 5) at the beginning. After a few iterations of search, a large proportion of the particles has reached that best region at iterations 5 and 10. At iteration 20, almost all the particles have stagnated in the region, and the best position has not changed. Therefore, a poor gbest easily dominates all the particles in multi-modal functions: it may influence the particles to move toward the current best area despite its location in a remote region. Furthermore, it can be deduced that a multi-swarm PSO without interaction among sub-swarms leads to a similar situation, just like the global PSO.

Fig. 4 displays the simulation results of PSO-DLS composed of 10 sub-swarms with 4 particles in each sub-swarm. The initialization generates diversified particles as well as 10 local best positions. Even though the global best position gbest achieves the minimal value, it cannot instruct all the particles to move toward its direction, due to the influence of lbest in each sub-swarm. After a few iterations of search, the gbest changes into a position farther away from (0, 0) at iteration 5. At the same time, two lbest have been found around the global optimum region, although they are not gbest. Subsequently, the global best position is found around the global optimum region at iteration 10, thanks to the guidance of these two lbest on the search. From iteration 10 to 20, before all the particles evolve into communication particles and converge to gbest, the whole population still maintains diversity. It can be concluded that the multi-swarm structure can lessen the dominant impact of gbest on the whole population and help maintain diversity, and that the dynamic strategy promotes interaction among sub-swarms while the searching scope is gradually reduced around the global optimum region.
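For reference, Eq. (5) translates directly into code (a two-line sketch; the vectorized NumPy form is our choice):

```python
import numpy as np

def griewank_2d(x, y):
    # Eq. (5): global minimum f(0, 0) = 0, ringed by many shallow local optima
    return (x**2 + y**2) / 4000.0 - np.cos(x) * np.cos(y / np.sqrt(2.0)) + 1.0
```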
Fig. 3. Searching behavior of the original PSO, step by step. The multi-modal problem is the two-dimensional Griewank function (shown by its contour map). The population size is 40. Blue circles represent the particles, and the red star represents the best position gbest found so far. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
4.2. Benchmark functions and compared algorithms

In this subsection, the performance of PSO-DLS and the other methods is studied on the CEC 2015 learning-based benchmark functions of Liang et al. [58]. This problem set is the latest collection of 15 unconstrained continuous optimization problems with various difficulty levels. According to their properties, these functions are divided into four groups: unimodal functions, simple multimodal functions, hybrid functions, and composition functions. All problems are treated as black-box problems, and all the guidelines of CEC 2015 have been strictly followed in the experimentation. Following the instructions of the test suite, every problem is run 31 times independently. Results are calculated for search spaces of dimension 10 and 30; the range of the search space is [−100, 100]. The population is initialized uniformly in the specified search space by a random number generator seeded with the clock time. The termination criterion is a maximum of 10000·D function evaluations. The fitness value shown in this paper is Fi(x) − Fi(x*) after the maximum iteration, where Fi(x*) is the given optimal value of the corresponding function.

4.3. Experimental settings

The benchmark functions defined in the previous subsection are used to test the performance of PSO-DLS. Here we report the algorithms tested on all the benchmark problems: five PSOs and two state-of-the-art algorithms are compared on the benchmark functions.

I The traditional global PSO algorithm with inertia weight (GPSO) [13].
II PSO with multiple subpopulations (MSPSO) [37]. The best particle within each subpopulation is recorded and then applied in the velocity updating formula to replace the original global best particle of the whole population.
III The fully informed PSO (FIPS) [47]. The goodness-weighted FIPS with the USquare topology is adopted.
IV PSO with dynamic tournament topology strategy (DTT-PSO) [49]. Each particle is guided by several better solutions chosen from the entire population.
V The dynamic multi-swarm PSO (DMS-PSO) [40], a dynamic topological method that reorganizes neighborhoods during evolution.
VI The Artificial Bee Colony (ABC) [10], an algorithm that finds the optimal solution by sending employed bees to find food sources.
VII Teaching-Learning-Based Optimization (TLBO) [11], an algorithm inspired by the teaching-learning process, based on the influence of a teacher on the output of the learners in a class.

For all the algorithms, the population size is set to 40, so the maximum number of iterations is 2500 for 10-D and 7500 for 30-D. For all the PSO variants, the parameter settings are ω = 0.9 to 0.4, c1 = c2 = 1.49445, and Vmax = 0.5 × search range = 100. For PSO-DLS, DMS-PSO and MSPSO, the swarm size is N = 4 and the number of swarms is M = 10. For FIPS, χ = 0.729 and Σci = 4.1. For DMS-PSO, the regrouping period is 5. For DTT-PSO, the acceleration constant is set to 4.1, the probability of reorganization is P = 0.05, the number of neighbors is M = 6, and the ratio of contestants is K = 0.1. For ABC, the limit on the number of iterations for which a food source cannot be improved (Trial limit) is 100, and the number of food sources (Food number) is 20. The experiments are conducted using MATLAB R2015b on a personal computer with an Intel Core i7-6500U 2.5 GHz CPU and 8 GB of memory.
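The setting "ω = 0.9 to 0.4" is stated without its schedule in this excerpt; a linear decrease over the run, the common convention, would look as follows (an assumption on our part, not a detail confirmed by the paper):

```python
def inertia_weight(t, iter_max, w_start=0.9, w_end=0.4):
    # Linearly decreasing inertia weight over the run; the linear form is assumed.
    return w_start - (w_start - w_end) * t / iter_max
```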
Fig. 4. Searching behavior of the multi-swarm PSO with dynamic learning strategy, step by step. Swarm size N = 4, number of swarms M = 10. Blue circles represent ordinary particles, red circles represent communication particles, blue stars mark lbest, and the red star marks gbest. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
4.4. Results and discussions

(1) Results for the 10-D problems: Tables 1 and 2 present the statistical information, in terms of best, worst, average, and median, for the eight algorithms on the fifteen test functions with D = 10 and D = 30. The best results among the eight algorithms are shown in bold. In order to perform a two-sided rank sum test of the hypothesis that two independent samples come from distributions with equal medians, the nonparametric Wilcoxon rank sum tests [59,60] are conducted at the 5% significance level between PSO-DLS and each of the other algorithms for each problem. The results of the tests are presented after the median value in Tables 1 and 2, summarized as "†/§/‡" to denote the functions on which PSO-DLS performs significantly better than, almost the same as, and significantly worse than its peer algorithm, respectively.

From Table 1, one can see that PSO-DLS provides a solution closer to the true optimum values for nine of the fifteen benchmark problems (F2, F4, F6, F7, F8, F10, F13, F14, and F15), which cover all four types of functions: unimodal, simple multimodal, hybrid, and composition functions. According to the results of the statistical tests, PSO-DLS dominates the other algorithms on F4, F8, F11, F13, F14, and F15. Especially on F6, F7, and F10, PSO-DLS achieves the highest accuracy and surpasses all the other methods. On the remaining functions, PSO-DLS fails to find the best solutions: TLBO achieves better accuracy on F1 and F11, DMS-PSO finds better results on F3, F5, and F12, while ABC locates better local optima on F9. Although PSO-DLS is not able to find the best solutions on these problems, it achieves satisfactory accuracy with strong robustness and statistically surpasses the other classical PSOs.

Figs. 5 and 6 present the convergence characteristics, as the average best fitness value versus iteration, of each algorithm on each 10-D test function. From these figures, one can see that TLBO is an effective algorithm with a fast convergence speed at the early stage for almost all problems, owing to its specific way of generating a new solution. On the other hand, almost all the local versions of PSO are slower than the global PSO in convergence speed because of their topology structures. Particularly for DMS-PSO and PSO-DLS, the small size of the multi-swarms evidently inhibits the exploitation ability. In fact, PSO-DLS has a large potential search space, and thus it cannot converge as fast as the others at the early stage. Obviously, it is the dynamic topology and the dynamic learning strategy that produce the exploration ability, at the expense of convergence speed. As a result, better solutions closer to the true optimum region are achieved by the PSO-DLS algorithm in the end.

(2) Results for the 30-D problems: The experiments conducted on the 10-D problems are repeated on the 30-D problems, and the results are presented in Table 2. As the convergence graphs are similar to those of the 10-D problems, they are not presented. Slightly different from the 10-D data, PSO-DLS achieves the highest accuracy for only six functions, drawn from the simple multimodal, hybrid, and composition groups (F4, F6, F8, F10, F11 and F13). On the other functions, PSO-DLS fails to find the best solutions: DMS-PSO achieves better accuracy on five functions (F3, F7, F10, F12, and F15), while TLBO finds better results on F1, F2, and F5 from the unimodal and simple multimodal groups. Nevertheless, we can observe that the algorithms achieve a similar ranking as in the 10-D problems. According to the results of the statistical tests, PSO-DLS dominates the other algorithms on F7, F10, and F13. Specifically on F4, F6, F8, and F11, PSO-DLS achieves the highest accuracy and surpasses all the algorithms.
Table 1
Comparison of optimization accuracy (best, worst, average, and median) on the 10-dimensional problems (F10–F15 shown), where †, §, and ‡ after the median denote that PSO-DLS performs significantly better than, almost the same as, and significantly worse than the peer algorithm on the corresponding function, respectively. Columns follow the algorithm order of Section 4.3: GPSO, MSPSO, FIPS, DTT-PSO, DMS-PSO, ABC, TLBO, PSO-DLS.

F10  Best     79.56     149.65    1153.27   397.15    487.90    403.91    203.62    66.76
     Worst    2460.34   2156.24   2176.81   1931.59   2109.57   6947.17   1192.83   959.38
     Average  1122.89   952.28    1718.39   1290.00   1304.07   2703.41   772.84    424.23
     Median   1192.85†  846.23†   1733.34†  1192.87†  1287.56†  2367.63†  877.01†   366.65
F11  Best     0.58      1.28      3.53      0.66      0.29      1.34      0.88      0.45
     Worst    368.35    203.94    208.72    300.00    300.00    4.35      200.21    2.18
     Average  184.79    105.18    105.98    184.67    69.71     3.09      121.12    1.23
     Median   200.33†   200.29†   102.53†   200.25†   6.59§     3.21†     200.11†   1.23
F12  Best     102.00    102.60    101.71    100.98    100.44    102.19    101.13    100.74
     Worst    106.25    105.30    103.16    102.59    101.56    106.78    105.58    102.34
     Average  103.53    104.08    102.42    101.81    100.94    105.67    102.54    101.91
     Median   103.49†   104.09†   102.38†   101.82§   100.87‡   105.87†   102.44§   102.02
F13  Best     11.31     18.60     19.22     13.15     7.35      18.80     17.87     5.77
     Worst    37.00     32.73     32.55     32.93     31.83     29.81     29.13     20.64
     Average  26.65     27.40     26.39     24.02     16.43     24.74     24.13     15.76
     Median   27.11†    27.89†    26.87†    20.29†    15.60§    25.17†    25.11†    16.30
F14  Best     100       100.04    100       100       100       100       100       100
     Worst    2729.50   117.29    202.06    2665.05   2735.57   100       2731.82   100
     Average  785.31    102.68    114.30    272.45    466.84    100       817.60    100
     Median   100.00†   101.29†   101.45†   100†      100†      100§      100†      100
F15  Best     205.193   205.201   205.197   205.202   205.193   205.210   205.197   205.192
     Worst    205.241   205.354   205.208   205.215   205.209   205.232   255.911   205.202
     Average  205.211   205.238   205.204   205.207   205.198   205.220   208.590   205.199
     Median   205.213†  205.231†  205.204†  205.207†  205.197§  205.220†  205.210†  205.199
Although PSO-DLS is not the most effective algorithm on F2, where it obtains prominent results for both the 10-D and 30-D problems, it provides better performance on the 10-D functions than on the 30-D functions, as TLBO does, whereas the other algorithms show a poor searching ability on the higher-dimensional problems. For the other functions, PSO-DLS performs better than all the algorithms except the best one.
Table 2
Comparison of optimization accuracy (best, worst, average, and median) on the 30-dimensional problems (F10–F15 shown), where †, §, and ‡ after the median denote that PSO-DLS performs significantly better than, almost the same as, and significantly worse than the peer algorithm on the corresponding function, respectively. Columns follow the algorithm order of Section 4.3: GPSO, MSPSO, FIPS, DTT-PSO, DMS-PSO, ABC, TLBO, PSO-DLS.

F10  Best     6.64e+03   2.84e+04   1.67e+04   3.42e+03   2.78e+03   6.61e+04   3.03e+03   2.36e+03
     Worst    3.67e+05   8.80e+05   2.53e+05   2.16e+05   1.52e+05   1.05e+06   4.40e+04   4.48e+04
     Average  8.77e+04   3.81e+05   1.06e+05   4.66e+04   3.58e+04   4.62e+05   1.63e+04   1.36e+04
     Median   2.58e+04†  3.56e+05†  9.83e+04†  1.62e+04†  2.76e+04†  4.37e+05†  1.25e+04§  1.25e+04
F11  Best     207.96     212.46     213.40     203.49     203.94     208.42     205.41     203.41
     Worst    891.36     770.25     389.61     472.89     521.31     226.31     963.06     224.32
     Average  581.00     306.93     320.43     352.00     413.68     212.62     612.00     211.26
     Median   625.14†    241.50†    349.44†    399.57†    424.04†    219.39†    824.99†    209.69
F12  Best     111.01     111.51     108.78     107.46     105.64     110.45     111.36     108.24
     Worst    116.71     114.51     110.85     111.42     109.99     111.92     118.28     110.39
     Average  113.41     113.16     109.54     110.00     108.42     111.42     114.38     109.35
     Median   113.68†    113.27†    109.54†    109.98†    108.37‡    111.48†    114.18†    109.39
F13  Best     93.45      95.91      100.37     100.33     87.31      97.81      105.78     82.73
     Worst    121.14     118.81     114.12     120.35     105.18     110.21     120.36     100.20
     Average  106.73     111.22     108.31     113.13     95.94      105.11     111.75     94.40
     Median   106.27†    112.52†    108.39†    114.03†    95.61§     104.98†    111.06†    95.22
F14  Best     2.85e+04   2.98e+04   2.80e+04   2.84e+04   2.80e+04   2.65e+03   2.80e+04   2.80e+04
     Worst    3.73e+04   3.25e+04   3.13e+04   3.78e+04   3.57e+04   3.08e+04   5.70e+04   3.10e+04
     Average  3.22e+04   3.11e+04   3.02e+04   3.12e+04   3.10e+04   2.67e+04   3.46e+04   2.84e+04
     Median   3.14e+04†  3.11e+04†  3.10e+04†  3.04e+04†  3.10e+04†  2.74e+04‡  3.23e+04†  2.84e+04
F15  Best     274.02     278.38     274.05     273.39     273.36     274.49     275.12     273.49
     Worst    278.15     283.68     275.46     273.90     275.17     274.93     282.06     274.08
     Average  275.43     281.02     274.33     273.59     273.97     274.70     278.37     273.91
     Median   275.22†    281.03†    274.29†    273.45‡    273.85§    274.70†    278.64†    273.95
To present the total comparison of performance between PSO-DLS and the other algorithms, Table 3 shows the detailed results from the non-parametric Wilcoxon rank sum tests. The number of benchmark functions for which PSO-DLS is significantly better than, almost the same as, and significantly worse than the other algorithms is illustrated in that table. The total score is the difference between the "Better" score and the "Worse" score. Despite the poorer performance in best accuracy on the 30-dimensional problems than on the 10-dimensional problems, PSO-DLS maintains a positive total score.
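The per-function significance marks can be reproduced with SciPy's two-sided rank sum test; the sketch below uses synthetic stand-ins for the 31 run results and assumes "‡" as the "significantly worse" marker:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)
pso_dls_runs = rng.normal(1.2, 0.3, 31)  # placeholder: 31 final errors for PSO-DLS
peer_runs = rng.normal(1.8, 0.4, 31)     # placeholder: 31 final errors for a peer

stat, p = ranksums(pso_dls_runs, peer_runs)   # two-sided Wilcoxon rank sum test
if p >= 0.05:
    mark = "§"   # no significant difference at the 5% level
elif np.median(pso_dls_runs) < np.median(peer_runs):
    mark = "†"   # PSO-DLS significantly better (smaller error)
else:
    mark = "‡"   # PSO-DLS significantly worse
print(f"p = {p:.4f}, mark = {mark}")
```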
Fig. 5. Convergence graphs of PSO-DLS and the other algorithms on the 10-D benchmark functions F1–F9.
Fig. 6. Convergence graphs of PSO-DLS and the other algorithms on the 10-D benchmark functions F10–F15.
5. Conclusion

A novel multi-swarm particle swarm optimization with dynamic learning strategy (PSO-DLS) has been investigated to improve the performance of PSO. The proposed strategy suggests splitting the whole population into several sub-swarms so that the population is able to keep its diversity without the dominant influence of the global best position on all particles. However, PSO with small and separate sub-swarms becomes weaker at searching for a potential solution. Thus, the dynamic learning strategy is introduced to promote the information exchanged among sub-swarms: collective information among the separate sub-swarms is utilized through the communication particles' updating. In this manner, the separate sub-swarms reunite to some extent, making a satisfactory solution attainable for multimodal function optimization.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grant no. 61572233) and the National Social Science Foundation of China (Grant no. 16BTJ032). The authors would like to thank the anonymous reviewers for their helpful suggestions and comments on a previous version of the present paper.

References
[1] D.E. Goldberg, J.H. Holland, Genetic algorithms and machine learning, Mach. Learn. 3 (1988) 95–99, http://dx.doi.org/10.1023/A:1022602019183.
[2] K.S. Lee, Z.W. Geem, A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice, Comput. Method Appl. M. 194 (2005) 3902–3933, http://dx.doi.org/10.1016/j.cma.2004.09.007.
[3] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671–680, http://dx.doi.org/10.1126/science.220.4598.671.
[4] F. Glover, Tabu search – part I, ORSA J. Comput. 1 (1989) 190–206, http://dx.doi.org/10.1287/ijoc.1.3.190.
[5] J. Zhang, A.C. Sanderson, JADE: adaptive differential evolution with optional external archive, IEEE Trans. Evol. Comput. 13 (2009) 945–958, http://dx.doi.org/10.1109/tevc.2009.2014613.
[6] A.K. Qin, V.L. Huang, P.N. Suganthan, Differential evolution algorithm with strategy adaptation for global numerical optimization, IEEE Trans. Evol. Comput. 13 (2009) 398–417, http://dx.doi.org/10.1109/tevc.2008.927706.
[7] N. Hansen, A. Ostermeier, Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation, in: Proc. IEEE Int. Conf. Evol. Comput., 1996, pp. 312–317, http://dx.doi.org/10.1109/icec.1996.542381.
[8] S. Yazdani, H. Nezamabadi-Pour, S. Kamyab, A gravitational search algorithm for multimodal optimization, Swarm Evol. Comput. 14 (2014) 1–14, http://dx.doi.org/10.1016/j.swevo.2013.08.001.
[9] K.N. Krishnanand, D. Ghose, Glowworm swarm optimization for simultaneous capture of multiple local optima of multimodal functions, Swarm Intell. 3 (2009) 87–124, http://dx.doi.org/10.1007/s11721-008-0021-5.
[10] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Appl. Soft Comput. 8 (2008) 687–697, http://dx.doi.org/10.1016/j.asoc.2007.05.007.
[11] R.V. Rao, V.J. Savsani, D.P. Vakharia, Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems, Comput. Aided Des. 43 (2011) 303–315, http://dx.doi.org/10.1016/j.cad.2010.12.015.
[12] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proc. IEEE Int. Conf. Neural Netw., Vol. 4, 1995, pp. 1942–1948, http://dx.doi.org/10.1109/icnn.1995.488968.
[13] R. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proc. Int. Symp. on Micro Mach. and Human Sci., 1995, pp. 39–43, http://dx.doi.org/10.1109/mhs.1995.494215.
[14] Y. Shi, R. Eberhart, Particle swarm optimization: developments, applications and resources, in: Proc. Congr. Evol. Comput., Vol. 1, 2001, pp. 81–86, http://dx.doi.org/10.1109/cec.2001.934374.
[15] M. Clerc, J. Kennedy, The particle swarm – explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evol. Comput. 6 (2002) 58–73, http://dx.doi.org/10.1109/4235.985692.
[16] F. Khoshahval, A. Zolfaghari, H. Minuchehr, M. Abbasi, A new hybrid method for multi-objective fuel management optimization using parallel PSO-SA, Prog. Nucl. Energy 76 (2014) 112–121, http://dx.doi.org/10.1016/j.pnucene.2014.05.014.
[17] R. Liu, J. Li, J. Fan, C. Mu, L. Jiao, A coevolutionary technique based on multi-swarm particle swarm optimization for dynamic multi-objective optimization, Eur. J. Oper. Res. 261 (2017) 1028–1051, http://dx.doi.org/10.1016/j.ejor.2017.03.048.
[18] R. Mohammadi, S.F. Ghomi, F. Jolai, Prepositioning emergency earthquake response supplies: a new multi-objective particle swarm optimization algorithm, Appl. Math. Model. 40 (2016) 5183–5199, http://dx.doi.org/10.1016/j.apm.2015.10.022.
[19] N. Delgarm, B. Sajadi, F. Kowsary, S. Delgarm, Multi-objective optimization of the building energy performance: a simulation-based approach by means of particle swarm optimization (PSO), Appl. Energy 170 (2016) 293–303, http://dx.doi.org/10.1016/j.apenergy.2016.02.141.
[20] A. Chander, A. Chatterjee, P. Siarry, A new social and momentum component adaptive PSO algorithm for image segmentation, Expert Syst. Appl. 38 (2011) 4998–5004, http://dx.doi.org/10.1016/j.eswa.2010.09.151.
[21] S. Suresh, S. Lal, Multilevel thresholding based on chaotic Darwinian particle swarm optimization for segmentation of satellite images, Appl. Soft Comput. 55 (2017) 503–522, http://dx.doi.org/10.1016/j.asoc.2017.02.005.
[22] M. Mahi, Ö.K. Baykan, H. Kodaz, A new hybrid method based on particle swarm optimization, ant colony optimization and 3-opt algorithms for traveling salesman problem, Appl. Soft Comput. 30 (2015) 484–490, http://dx.doi.org/10.1016/j.asoc.2015.01.068.
[23] Y. Marinakis, G.-R. Iordanidou, M. Marinaki, Particle swarm optimization for the vehicle routing problem with stochastic demands, Appl. Soft Comput. 13 (2013) 1693–1704, http://dx.doi.org/10.1016/j.asoc.2013.01.007.
[24] P. Moradi, M. Gholampour, A hybrid particle swarm optimization for feature subset selection by integrating a novel local search strategy, Appl. Soft Comput. 43 (2016) 117–130, http://dx.doi.org/10.1016/j.asoc.2016.01.044.
[25] Y. Lu, M. Liang, Z. Ye, L. Cao, Improved particle swarm optimization algorithm and its application in text feature selection, Appl. Soft Comput. 35 (2015) 629–636, http://dx.doi.org/10.1016/j.asoc.2015.07.005.
[26] L.M. Abualigah, A.T. Khader, M.A. Al-Betar, O.A. Alomari, Text feature selection with a robust weight scheme and dynamic dimension reduction to text document clustering, Expert Syst. Appl. 84 (2017) 24–36, http://dx.doi.org/10.1016/j.eswa.2017.05.002.
[27] L.-Y. Chuang, C.-H. Yang, J.-C. Li, Chaotic maps based on binary particle swarm optimization for feature selection, Appl. Soft Comput. 11 (2011) 239–248, http://dx.doi.org/10.1016/j.asoc.2009.11.014.
[28] X. Wang, S. Lv, J. Quan, The evolution of cooperation in the prisoner's dilemma and the snowdrift game based on particle swarm optimization, Physica A 482 (2017) 286–295, http://dx.doi.org/10.1016/j.physa.2017.04.080.
[29] C. Leboucher, H.-S. Shin, P. Siarry, S.L. Ménec, R. Chelouah, A. Tsourdos, Convergence proof of an enhanced particle swarm optimisation method integrated with evolutionary game theory, Inf. Sci. 346 (2016) 389–411, http://dx.doi.org/10.1016/j.ins.2016.01.011.
[30] H. Duan, C. Sun, Swarm intelligence inspired shills and the evolution of cooperation, Sci. Rep. 4 (2014) 5210, http://dx.doi.org/10.1038/srep05210.
[31] J. Kennedy, Some issues and practices for particle swarms, in: Proc. Swarm Intell. Symp., 2007, pp. 162–169, http://dx.doi.org/10.1109/sis.2007.368041.
[32] Y. Shi, R. Eberhart, A modified particle swarm optimizer, in: Proc. IEEE World Congr. Comput. Intell., 1998, pp. 69–73, http://dx.doi.org/10.1109/ICEC.1998.699146.
[33] A. Ratnaweera, S.K. Halgamuge, H.C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Trans. Evol. Comput. 8 (2004) 240–255, http://dx.doi.org/10.1109/TEVC.2004.826071.
[34] A. Nickabadi, M.M. Ebadzadeh, R. Safabakhsh, A novel particle swarm optimization algorithm with adaptive inertia weight, Appl. Soft Comput. 11 (2011) 3658–3670, http://dx.doi.org/10.1016/j.asoc.2011.01.037.
[35] G. Xu, An adaptive parameter tuning of particle swarm optimization algorithm, Appl. Math. Comput. 219 (2013) 4560–4569, http://dx.doi.org/10.1016/j.amc.2012.10.067.
[36] L. Zhang, Y. Tang, C. Hua, X. Guan, A new particle swarm optimization algorithm with adaptive inertia weight based on Bayesian techniques, Appl. Soft Comput. 28 (2015) 138–149, http://dx.doi.org/10.1016/j.asoc.2014.11.018.
[37] W.-D. Chang, A modified particle swarm optimization with multiple subpopulations for multimodal function optimization problems, Appl. Soft Comput. 33 (2015) 170–182, http://dx.doi.org/10.1016/j.asoc.2015.04.002.
[38] L. Xiaodong, Adaptively choosing neighbourhood bests using species in a particle swarm optimizer for multimodal function optimization, in: Conf. Genetic and Evol. Comput., Part I, 2004, pp. 105–116, http://dx.doi.org/10.1007/978-3-540-24854-5_10.
[39] F. van den Bergh, A.P. Engelbrecht, A cooperative approach to particle swarm optimization, IEEE Trans. Evol. Comput. 8 (2004) 225–239, http://dx.doi.org/10.1109/TEVC.2004.826069.
[40] J.J. Liang, P.N. Suganthan, Dynamic multi-swarm particle swarm optimizer, in: Proc. Swarm Intell. Symp., 2005, pp. 124–129, http://dx.doi.org/10.1109/sis.2005.1501611.
[41] M. Gang, Z. Wei, C. Xiaolin, A novel particle swarm optimization algorithm based on particle migration, Appl. Math. Comput. 218 (2012) 6620–6626, http://dx.doi.org/10.1016/j.amc.2011.12.032.
[42] D. Chen, C. Zhao, Particle swarm optimization with adaptive population size and its application, Appl. Soft Comput. 9 (2009) 39–48, http://dx.doi.org/10.1016/j.asoc.2008.03.001.
[43] C. Liu, W.B. Du, W.X. Wang, Particle swarm optimization with scale-free interactions, PLoS ONE 9 (5) (2014) e97822, http://dx.doi.org/10.1371/journal.pone.0097822.
[44] J. Kennedy, Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance, in: Proc. IEEE Congr. Evol. Comput., Vol. 3, 1999, p. 1938, http://dx.doi.org/10.1109/CEC.1999.785509.
[45] M.-R. Chen, X. Li, X. Zhang, Y.-Z. Lu, A novel particle swarm optimizer hybridized with extremal optimization, Appl. Soft Comput. 10 (2010) 367–373, http://dx.doi.org/10.1016/j.asoc.2009.08.014.
[46] A. Kaveh, T. Bakhshpoori, E. Afshari, An efficient hybrid particle swarm and swallow swarm optimization algorithm, Comput. Struct. 143 (2014) 40–59, http://dx.doi.org/10.1016/j.compstruc.2014.07.012.
[47] R. Mendes, J. Kennedy, J. Neves, The fully informed particle swarm: simpler, maybe better, IEEE Trans. Evol. Comput. 8 (2004) 204–210, http://dx.doi.org/10.1109/tevc.2004.826074.
[48] L. Wang, B. Yang, Y. Chen, Improving particle swarm optimization using multi-layer searching strategy, Inf. Sci. 274 (2014) 70–94, http://dx.doi.org/10.1016/j.ins.2014.02.143.
[49] L. Wang, B. Yang, J. Orchard, Particle swarm optimization using dynamic tournament topology, Appl. Soft Comput. 48 (2016) 584–596, http://dx.doi.org/10.1016/j.asoc.2016.07.041.
[50] N. Netjinda, T. Achalakul, B. Sirinaovakul, Particle swarm optimization inspired by starling flock behavior, Appl. Soft Comput. 35 (2015) 411–422, http://dx.doi.org/10.1016/j.asoc.2015.06.052.
[51] Y. Gao, W. Du, G. Yan, Selectively-informed particle swarm optimization, Sci. Rep. 5 (2015) 9295, http://dx.doi.org/10.1038/srep09295.
[52] C. Li, S. Yang, T.T. Nguyen, A self-learning particle swarm optimizer for global optimization problems, IEEE Trans. Syst. Man Cybern. B Cybern. 42 (2012) 627–646, http://dx.doi.org/10.1109/TSMCB.2011.2171946.
[53] Ş. Gülcü, H. Kodaz, A novel parallel multi-swarm algorithm based on comprehensive learning particle swarm optimization, Eng. Appl. Artif. Intell. 45 (2015) 33–45, http://dx.doi.org/10.1016/j.engappai.2015.06.013.
[54] J. Jie, J. Zeng, C. Han, Q. Wang, Knowledge-based cooperative particle swarm optimization, Appl. Math. Comput. 205 (2008) 861–873, http://dx.doi.org/10.1016/j.amc.2008.05.100 (special issue on Advanced Intelligent Computing Theory and Methodology in Applied Mathematics and Computation).
[55] Y. Jiang, C. Liu, C. Huang, X. Wu, Improved particle swarm algorithm for hydrological parameter optimization, Appl. Math. Comput. 217 (2010) 3207–3215, http://dx.doi.org/10.1016/j.amc.2010.08.053.
[56] X. Xu, Y. Tang, J. Li, C. Hua, X. Guan, Dynamic multi-swarm particle swarm optimizer with cooperative learning strategy, Appl. Soft Comput. 29 (2015) 169–183, http://dx.doi.org/10.1016/j.asoc.2014.12.026.
[57] S.Z. Zhao, P.N. Suganthan, Q.-K. Pan, M. Fatih Tasgetiren, Dynamic multi-swarm particle swarm optimizer with harmony search, Expert Syst. Appl. 38 (2011) 3735–3742, http://dx.doi.org/10.1016/j.eswa.2010.09.032.
[58] J. Liang, B. Qu, P. Suganthan, Problem definitions and evaluation criteria for the CEC 2015 competition on learning-based real-parameter single objective optimization, Tech. Rep., Nanyang Technological University (Singapore) and Zhengzhou University (China), available at www.ntu.edu.sg/home/epnsugan/ (Nov. 2014).
[59] F. Wilcoxon, Individual comparisons by ranking methods, Biometrics 1 (1945) 80–83.
[60] S. García, D. Molina, M. Lozano, F. Herrera, A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC 2005 special session on real parameter optimization, J. Heurist. 15 (2009) 617–644, http://dx.doi.org/10.1007/s10732-008-9080-4.