
Applied Soft Computing 61 (2017) 832–843

Contents lists available at ScienceDirect

Applied Soft Computing

journal homepage: www.elsevier.com/locate/asoc

A novel multi-swarm particle swarm optimization with dynamic learning strategy

Wenxing Ye, Weiying Feng, Suohai Fan ∗
College of Information Science and Technology, Jinan University, Guangzhou 510632, China

Article history:
Received 3 April 2016
Received in revised form 25 July 2017
Accepted 26 August 2017
Available online 1 September 2017

Keywords:
Particle swarm optimization
Multi-swarm
Dynamic learning strategy
Particle classification
Dynamic evolution
CEC 2015

Abstract: In this paper, we propose a novel multi-swarm particle swarm optimization with dynamic learning strategy (PSO-DLS) to improve the performance of PSO. To promote information exchange among sub-swarms, the particle classification mechanism classifies the particles of each sub-swarm into ordinary particles and communication particles with different tasks at each iteration. Ordinary particles focus on exploitation under the guidance of the local best position of their sub-swarm, while communication particles, with dynamic ability, focus on exploration under the guidance of a united local best position in a new search region and promote information exchange among sub-swarms. Moreover, the strategy sets a dynamic control mechanism with an increasing parameter p for implementing the classification operation, which gives ordinary particles an increasing chance of evolving into communication particles during the search process. A simple analysis of searching behavior supports its remarkable impact on maintaining diversity and finding better solutions. Experimental results on the 15 function problems of CEC 2015 for 10 and 30 dimensions also demonstrate its promising effectiveness in solving complex problems, statistically compared with other algorithms. Moreover, the computational times reveal the subtle design of PSO-DLS.

© 2017 Elsevier B.V. All rights reserved.

1. Introduction

Optimization is a fundamental challenge in many real-world problems. However, traditional gradient-based optimization algorithms are unable to solve complex multimodal problems owing to inflexible structure, high dimensionality, and noisy data. For real-parameter optimization problems, extensive meta-heuristic, nature-inspired algorithms have been put forward and developed in recent years, such as the Genetic Algorithm (GA) [1], Harmony Search (HS) [2], Simulated Annealing (SA) [3], Tabu Search (TS) [4], Differential Evolution (DE) [5,6], and other evolutionary algorithms [7–11].

As one example, particle swarm optimization (PSO) is a population-based stochastic, heuristic optimization algorithm with inherent parallelism, first introduced by Kennedy and Eberhart in 1995 [12,13]. The algorithm is motivated by the emergent motion of the foraging behavior of a flock of birds. In PSO, each particle is regarded as a solution. All particles have fitness values and velocities, and they fly through the D-dimensional parameter space by learning from historical information, which contains their memories of the personal best positions and knowledge of the global best position of the group during the search process. It is because of its fast-converging behavior and simplicity of implementation [14,15] that PSO has been successfully applied in many fields such as multi-objective optimization [16–19], image segmentation [20,21], the vehicle routing problem [22,23], feature selection [24–27], and evolutionary game theory [28–30].

PSO is considered an effective optimizer in many research fields; however, it sometimes leads to premature convergence on multimodal functions with a great number of local minima [31]. Hence, it is necessary to improve the global search performance when applying PSO to these complex multimodal problems.

What accounts for the unstable performance of PSO? In general, compared with the personal best position pbest, the global best position gbest in a single swarm possesses absolute power and easily dominates all the particles on multi-modal functions. Each particle will move toward the global best area despite its location in a remote region. In particular, when gbest and pbest are located in the same region, traditional PSO may stagnate in the local optimum area due to rapid convergence and diversity loss.

To address this myopic tendency towards a single global best particle, we focus on the topology structure and the evolutionary updating formula, and propose an improved multi-swarm particle swarm optimization with dynamic learning strategy (PSO-DLS).

∗ Corresponding author.
E-mail address: [email protected] (S. Fan).
http://dx.doi.org/10.1016/j.asoc.2017.08.051
1568-4946/© 2017 Elsevier B.V. All rights reserved.

First, the original single swarm is divided into several sub-swarms, and several lbest positions replace gbest to guide the updating, which clearly lessens the influence of gbest on the whole population. Second, particles in each sub-swarm are classified into two types by the updating formula. Ordinary particles (Type A) use lbest to exploit within their sub-swarms, while communication particles (Type B) use a united lbest to explore toward a new search field. Communication particles with dynamic ability help collective information spread among the separate sub-swarms. Moreover, the proposed strategy sets a dynamic control mechanism that allows more and more ordinary particles to evolve into communication ones, which provides good performance in both exploration and exploitation. With constant collaborative information instead of private information, each sub-swarm maintains diversity during the search process. In this manner, separate sub-swarms reunite to some extent, and achieving a satisfactory solution becomes possible for multimodal function optimization.

The rest of this paper is arranged as follows: Section 2 reviews the basics and improvements of PSO. PSO-DLS is analyzed in detail in Section 3. Numerical results for the experimental study and a discussion of efficiency are given in Section 4. Conclusions are presented in Section 5.

2. Review of standard PSO and its variants

2.1. Standard PSO

In the particle swarm optimization algorithm, each solution to a specific problem can be described by a particle in the search space, and the particle swarm consists of randomly initialized individuals in the feasible space. According to its position, each particle possesses a fitness value and a velocity to adjust its direction. The flying direction and distance depend on the difference between each particle's position and the guiding position, and the fitness evaluation is determined by the objective function of the specific problem. The best positions guide the particles' flight through the solution space. Once a stopping criterion is met, the best individual stands for the best solution found by the algorithm.

The updating process can be converted into a mathematical model. Suppose that several particles are generated to find the potential global optimum. The position of the ith particle can be represented by a D-dimensional vector x_i = (x_i^1, x_i^2, ..., x_i^D), where x_i^d ∈ [x_min, x_max], and the corresponding velocity is v_i = (v_i^1, v_i^2, ..., v_i^D), where v_i^d ∈ [v_min, v_max]. The best previously visited position of the ith particle is denoted by pbest_i = (pbest_i^1, pbest_i^2, ..., pbest_i^D), and the global best position of the whole swarm obtained so far is denoted by gbest = (gbest^1, gbest^2, ..., gbest^D).

The velocity and position of the particles at the next iteration are updated according to the following equations:

v_i^d = ω v_i^d + c1 r1 (pbest_i^d − x_i^d) + c2 r2 (gbest^d − x_i^d)  (1)

x_i^d = x_i^d + v_i^d  (2)

where c1 and c2 are positive constants reflecting the weighting of the stochastic acceleration terms that pull each particle toward the pbest_i and gbest positions, respectively, during flight. Generally speaking, c1 scales the acceleration of the ith particle toward its previous best position pbest_i, and c2 scales its acceleration toward the best previous position gbest. All particles use the same values of c1 and c2. r1 and r2 are random variables distributed uniformly in the range (0,1). ω is the inertia weight used to balance the global and local search abilities. At each iteration, any better position is stored in memory as pbest_i or gbest. The experiences of an individual and its companions both influence its flight. When a particle achieves a new improved position, all the other particles move closer to it. This process is repeated until a satisfactory solution is found or a stopping criterion is met.

In the PSO domain, there are two main variants: global PSO and local PSO. The global PSO formula is introduced above, where the whole population is well mixed and there is only a single group. In the local version, each particle's velocity is adjusted according to its historical best position pbest_i and the best position achieved so far within its group, lbest. The velocity update is described as follows:

v_i^d = ω v_i^d + c1 r1 (pbest_i^d − x_i^d) + c2 r2 (lbest^d − x_i^d)  (3)

2.2. Some PSO variants

Despite the success of PSO in certain applications, traditional PSO still has room for improvement. Many works with specific techniques have been proposed over the past decades. Most studies on improving PSO fall into three categories.

(1) Modifying the inertia weight and acceleration coefficients [32–36]. Shi and Eberhart introduced an inertia weight to reduce the restriction on velocity and control the scope of the search [32]. Ratnaweera et al. [33] proposed that the acceleration coefficients c1 and c2 be decreased and increased, respectively, over time, to regulate the global and local search abilities. Adaptive methods have also been introduced to adjust the value of the inertia weight according to evolutionary states [34–36].

(2) Designing the topology structure [37–44]. Among previous works on topology structure, the concept of multi-swarm has been adopted by many researchers. In [37], multiple subpopulations (also called sub-swarms) were investigated for multimodal functions: the original single population is split into several subpopulations so that the whole population is no longer dominated by gbest and maintains diversity. In species-based PSO (SPSO) [38], the population is also divided into sub-swarms, by similarity, and the standard PSO update is applied to each sub-swarm independently. The multi-swarm idea is used by Cooperative PSO (CPSO) [39] in another manner: the solution is separated into sub-vectors, each of which is found by a corresponding sub-swarm, and the solutions from the sub-swarms are finally combined. In [40], Liang and Suganthan designed a dynamic multi-swarm particle swarm optimization (DMS-PSO), whose particles dynamically combine into many small swarms and regroup frequently to ensure that diverse information is exchanged among the multi-modal regions. To strengthen information communication, Ma et al. [41] introduced a strategy of particle migration in which particles are able to migrate from one group to another during evolution. An adaptive population size was investigated in the ultimate time of the current ladder by Chen and Zhao [42]. PSO has also been studied on complex networks with heterogeneous structure, including scale-free and small-world networks [43,44].

(3) Altering the evolutionary learning strategy or integrating specific techniques [45–57]. Kaveh et al. [46] developed an efficient hybrid method for truss weight minimization problems by transplanting the swallow swarm into the particle swarm. Mendes et al. [47] proposed a novel fully informed particle swarm (FIPS), in which every neighbor of a particle is allowed to contribute to the velocity update as a cumulative effect of the information; this mechanism increases the sources of information for each particle. Lin et al. [48] proposed multi-layer particle swarm optimization (MLPSO), in which the upper layer leads the lower layer in thoroughly searching the multi-modal regions.

Based on FIPS, a dynamic tournament topology strategy was introduced into PSO (DTT-PSO) [49]. The strategy suggests that several better positions be chosen from the entire population; the selection is stochastic but still favors particles with better solutions. The starling PSO [50], inspired by the collective response behavior of starlings, adds diversity and explores a wider area of the search space. Yang et al. [51] employed complex networks to study selectively-informed PSO, where particles choose different learning strategies based on their connections. In [52], self-learning PSO (SLPSO) advocated that each particle adaptively choose one of four learning strategies in different situations, with respect to convergence, exploitation, exploration, and jumping out of the basins of attraction of local optima.
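As a concrete point of reference for the variants above, the global-best update of Eqs. (1) and (2) can be sketched in a few lines. The sphere objective, parameter values, and random seed below are our own illustrative choices, not settings from the paper; the local variant of Eq. (3) only replaces gbest with the group's lbest.

```python
import numpy as np

# Minimal sketch of the standard (global) PSO update of Eqs. (1)-(2).
rng = np.random.default_rng(0)
N, D = 20, 2                        # particles, dimensions (illustrative)
w, c1, c2 = 0.7, 1.49445, 1.49445   # inertia weight and acceleration constants

f = lambda x: np.sum(x * x, axis=-1)   # sphere objective (minimized)
x = rng.uniform(-100, 100, (N, D))     # positions
v = np.zeros((N, D))                   # velocities
pbest = x.copy()                       # personal best positions
pbest_f = f(pbest)

for t in range(200):
    gbest = pbest[np.argmin(pbest_f)]  # Eq. (1) uses gbest; the local
                                       # variant of Eq. (3) would use lbest
    r1 = rng.random((N, D))
    r2 = rng.random((N, D))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)  # Eq. (1)
    x = x + v                                                  # Eq. (2)
    fx = f(x)
    better = fx < pbest_f              # store improved positions in memory
    pbest[better], pbest_f[better] = x[better], fx[better]

print(round(float(pbest_f.min()), 4))
```

On a unimodal objective such as this one, the single gbest attractor converges quickly; the premature-convergence issue discussed above only appears on multimodal landscapes.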

3. Particle swarm optimization with dynamic learning strategy (PSO-DLS)

Two key factors affect the performance of a PSO algorithm. One is a proper balance between global exploration and local exploitation, to obtain better solutions rapidly. The other is maintaining the diversity of the population, or jumping out of local optima, in case of premature convergence. In general, it is difficult for a new variant to satisfy both requirements at the same time, especially for multimodal function optimization, because a multimodal function contains several optima, including a global optimal solution and local optimal solutions.

To tackle this problem, we first introduce the multi-swarm idea into PSO, as in previous works: the original single population is divided into several sub-swarms simply according to the order of the particles. For example, suppose that the M * N particles of the original population are divided into M sub-swarms. In this case, the first sub-swarm consists of particles No. 1 to No. N, the second sub-swarm consists of particles No. N+1 to No. 2N, and so forth. Such a setting can be viewed as dividing one big PSO into several separate small PSOs. Hence, the best particle of each sub-swarm is recorded as lbest, and the M lbest positions guide the particles' movement in the corresponding sub-swarms via Eq. (3). Following this idea, gbest is no longer the only source of information for the particles' search for the global optimum, and each particle becomes active without the single constraint from gbest, while M lbest positions mean that at most M local optima are tracked for multimodal function optimization. This separation operation is the same as that in Ref. [37]. Related works have verified that the multi-swarm approach helps the whole population avoid premature convergence and maintain diversity to some extent [37–39].

The mere separation operation described above has an evident drawback: each sub-swarm works independently and there is no interaction between sub-swarms, so a big PSO is just cut into several small PSOs. What is worse, a larger population size is required to maintain the exploitation performance of each sub-swarm, which increases the execution time of the algorithm. To improve the exploration performance while maintaining the exploitation performance of PSO, techniques based on cooperative learning strategies and interaction operations have been proposed for multi-swarms [40,53–57]. Motivated by these previous works, we propose the dynamic learning strategy for multi-swarm PSO. The dynamic learning strategy affects the velocity update in two ways.

First, the strategy introduces another updating equation to generate particles with dynamic ability. In this algorithm, there are two parallel types of evolutionary learning methods (Type A and Type B) for each particle at every iteration, and only one of them is applied in a given update. The proposed strategy thus endows different particles with different tasks. Type A, applied with probability 1 − p: the particle is considered an ordinary particle and focuses on exploitation by learning from lbest via Eq. (3). Type B, applied with probability p: the particle is considered a communication particle and explores a new search region using a new velocity formula:

v_i^d = ω v_i^d + c1 r1 (pbest_i^d − x_i^d) + c2 r2 ((1/M) Σ_{m=1}^{M} lbest_m^d − x_i^d)  (4)

where lbest_m denotes the best position achieved so far in the mth sub-swarm. Clearly, lbest, the last term of the velocity update in Eq. (3), is replaced by the average of the M lbest positions of the sub-swarms. We name it the united lbest.

The strategy thus classifies the particles of each sub-swarm into ordinary particles and communication particles. As pioneers, ordinary particles are responsible for searching for a better lbest in their sub-swarms, still passive within the traditional framework. In comparison, communication particles not only become dynamic enough to break through lbest's control, but also play an active role in the interaction between sub-swarms, and gather as much potentially useful information about the lbest positions as possible. Superficially, the velocity update of Eq. (4) appears similar to the fully-informed updating of Refs. [47] and [49], which takes the information of all neighbors into consideration. In fact, the fully-informed mechanism provides so much neighbor information that it may give an unclear direction for the position update. Here, the information of the M lbest positions is reasonable and precise because of their superiority in the search history; therefore, information can be effectively exchanged among sub-swarms. Fig. 1 illustrates the proposed concept of multi-swarm PSO with dynamic learning strategy.

Fig. 1. The dynamic learning mechanism scheme.

Second, the strategy sets a dynamic control mechanism to implement the classification during the search process. Note that the two learning methods depend on a probability p ∈ [0, 1], which can be considered the level of sense for evolution into communication particles. For p = 0, all the particles learn from lbest, so the whole population behaves the same as in [37]. The case p = 1 leads to a complete communication environment where all the particles evolve by learning from the united lbest of Eq. (4). As p increases from 0 to 1, the awareness transits from exploitation to exploration gradually.
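One iteration of this classification step and the united-lbest update of Eq. (4) can be sketched as follows; array shapes, variable names, and parameter values are our own illustrative assumptions, not code from the paper.

```python
import numpy as np

# Sketch of the dynamic learning step: with probability p a particle acts as
# a communication particle and learns from the united lbest (Eq. (4));
# otherwise it is an ordinary particle learning from its sub-swarm's lbest
# (Eq. (3)). All names and values here are illustrative.
rng = np.random.default_rng(1)
M, N, D = 10, 4, 2                  # sub-swarms, particles per swarm, dims
w, c1, c2 = 0.7, 1.49445, 1.49445

x = rng.uniform(-10, 10, (M, N, D))
v = np.zeros((M, N, D))
pbest = x.copy()
lbest = x[:, 0, :].copy()           # per-sub-swarm best (illustrative init)
united = lbest.mean(axis=0)         # (1/M) * sum_m lbest_m from Eq. (4)

t, iter_max = 50, 100
p = t / iter_max                    # dynamic control parameter

for m in range(M):
    for i in range(N):
        r1, r2 = rng.random(D), rng.random(D)
        if rng.random() < p:        # Type B: communication particle, Eq. (4)
            guide = united
        else:                       # Type A: ordinary particle, Eq. (3)
            guide = lbest[m]
        v[m, i] = w * v[m, i] + c1 * r1 * (pbest[m, i] - x[m, i]) \
                              + c2 * r2 * (guide - x[m, i])
        x[m, i] = x[m, i] + v[m, i]  # position update, Eq. (2)
```

The only difference between the two branches is the guiding position, which is what lets communication particles pull their sub-swarm toward a compromise of all M sub-swarms' discoveries.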

Accordingly, we set the parameter p as a function of the iteration t, p = t/iter_max, to control the classification mechanism. This relation means that p rises as the iteration t increases, resulting in a growing number of communication particles in each sub-swarm. In this way, the control mechanism has a continuously effective impact on the exploitation and exploration abilities, since it provides an opportunity for ordinary particles to evolve into communication particles. On one hand, each sub-swarm concentrates on its own search region with a large proportion of ordinary particles at the early search stage. On the other hand, the increasing number of communication particles in each sub-swarm prevents the plunge into local optima caused by exploitation around lbest. The increasing percentage also enables the M sub-swarms to exchange collaborative information more and more frequently, and finally maintains the diversity of the whole population. The detailed algorithm of PSO-DLS is presented in Algorithm 1.

Algorithm 1 (PSO-DLS).
Input: each sub-swarm's population size N, number of swarms M, max iteration iter_max
Output: the best solution gbest
1: Initialize M * N particles with positions x and velocities v. Divide the population into M sub-swarms, each containing N particles. Initialize the best positions pbest, lbest and gbest.
2: for t = 1 to iter_max do
3:   for i = 1 to M * N do
4:     generate a random number rand uniformly distributed in (0,1)
5:     if rand < t/iter_max then
6:       execute the velocity updating formula of Eq. (4)
7:     else
8:       execute the velocity updating formula of Eq. (3)
9:     end if
10:    execute the position updating formula of Eq. (2)
11:  end for
12:  evaluate the fitness of each particle according to its position vector x
13:  update the best positions pbest, lbest and gbest
14: end for
15: return the best position gbest

4. Experiments

4.1. Searching behavior

To test the performance of the proposed algorithm, a multimodal function (Griewank) is chosen to illustrate the simulation results of the multi-swarm PSO with dynamic learning strategy. The 2-D optimized function is given by

f(x, y) = (1/4000)(x² + y²) − cos(x) cos(y/√2) + 1  (5)

Fig. 2 shows the 3-D surface and contour of the optimized function. It can easily be seen from Eq. (5) or the contour that there are many local optima around the global minimum at (x, y) = (0, 0), which easily traps the whole population in local search regions.

Fig. 2. The 3-D surface and contour of the multimodal function, the 2-D Griewank.

To show the performance of the proposed algorithm, Figs. 3 and 4 demonstrate the step-by-step optimization process of PSO-DLS compared with the original PSO. The convergence dynamics of all particles are displayed at iterations 0, 5, 10, and 20, respectively.

Fig. 3 shows the convergence behavior of particles in a single swarm (original PSO) with 40 particles. It is apparent that the particles are initially distributed uniformly, and the global best position found at the beginning lies at approximately (10, 5). After a few iterations of search, a large proportion of the particles has reached that best region at iterations 5 and 10. At iteration 20, almost all the particles have stagnated in the region, and the best position has not changed. Therefore, a poor gbest easily dominates all the particles on multi-modal functions. It may draw the particles toward the global best area despite their location in a remote region. Furthermore, it can be deduced that multi-swarm PSO without interaction among sub-swarms leads to a similar situation, just like the global PSO.

Fig. 4 displays the simulation results of PSO-DLS composed of 10 sub-swarms with 4 particles each. The initialization generates diversified particles as well as 10 local best positions. Even though the global best position gbest achieves the minimal value, it cannot direct all the particles toward it, owing to the influence of lbest in each sub-swarm. After a few iterations of search, gbest changes to a position farther away from (0, 0) at iteration 5. At the same time, two lbest positions have been found around the global optimum region, although they are not gbest. Subsequently, the global best position is found around the global optimum region at iteration 10, thanks to the guidance of those two lbest positions. From iteration 10 to 20, before all the particles evolve into communication particles and converge to gbest, the whole population still maintains diversity. It can be concluded that the multi-swarm lessens the dominant impact of gbest on the whole population and helps maintain diversity, and that the dynamic strategy promotes interaction among sub-swarms while the search scope gradually shrinks around the global optimum region.

4.2. Benchmark functions and compared algorithms

In this subsection, the performance of PSO-DLS and the other methods is studied on the CEC 2015 learning-based benchmark functions of Liang et al. [58]. This problem set is the latest collection of 15 unconstrained continuous optimization problems with various difficulty levels. According to their properties, these functions are divided into four groups: unimodal functions, simple multimodal functions, hybrid functions, and composition functions. These problems are treated as black-box problems. All the guidelines of CEC 2015 have been strictly followed in the experimentation. Following the instructions of the test suite, every problem is run 31 times independently. Results are calculated for search spaces of dimension 10 and 30. The range of the search space is [−100, 100].
836 W. Ye et al. / Applied Soft Computing 61 (2017) 832–843

[Fig. 3 panels: t=10, Fitness=2.8870e-02; t=20, Fitness=2.7143e-02]

Fig. 3. Searching behavior of the original PSO, step by step. The multi-modal problem is the two-dimensional Griewank function (shown by the contour map). The population size is 40. Blue circles represent the particles, and the red star represents the best position gbest found so far. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)

Population initialization is generated uniformly in the specified search space by a random number generator seeded with the clock time. The termination criterion is a maximum of 10000·D function evaluations. The fitness value shown in this paper is Fi(x) − Fi(x*) after the maximum iteration, where Fi(x*) is simply a reference value for the corresponding function.

4.3. Experimental settings

The benchmark functions defined in the previous subsection are used to test the performance of PSO-DLS. Here we report the algorithms tested on all the benchmark problems. Five PSO variants and two state-of-the-art algorithms are compared on the benchmark functions.

I Traditional global PSO algorithm with inertia weight (GPSO) [13].
II PSO with multiple subpopulations (MSPSO) [37]. The best particle within each subpopulation is recorded and then used in the velocity updating formula in place of the global best particle of the whole population.
III Fully informed PSO (FIPS) [47]. The goodness-weighted FIPS with the USquare topology is adopted.
IV PSO with dynamic tournament topology strategy (DTT-PSO) [49]. Each particle is guided by several better solutions chosen from the entire population.
V Dynamic multi-swarm PSO (DMS-PSO) [40], a dynamic topological method that reorganizes neighborhoods during evolution.
VI Artificial Bee Colony (ABC) [10], an algorithm that finds the optimal solution by sending employed bees to find food sources.
VII Teaching-Learning-Based Optimization (TLBO) [11], an algorithm inspired by the teaching-learning process, based on the influence of a teacher on the output of the learners in a class.

For all the algorithms, the population size is set to 40, so the maximum number of iterations is 2500 for 10-D and 7500 for 30-D. For all the PSO variants, the parameter settings are: ω decreasing from 0.9 to 0.4, c1 = c2 = 1.49445, Vmax = 0.5 × search range = 100. For PSO-DLS, DMS-PSO and MSPSO, the sub-swarm size is N = 4 and the number of swarms is M = 10. For FIPS, the constriction coefficient is 0.729 and the sum of the acceleration coefficients is 4.1. For DMS-PSO, the regrouping period is 5. For DTT-PSO, the acceleration constant is set to 4.1, the probability of reorganization is P = 0.05, the number of neighbors is M = 6, and the ratio of contestants is K = 0.1. For ABC, the limit on the number of iterations for which a food source cannot be improved (trial limit) is 100, and the number of food sources (food number) is 20. The experiments are conducted using MATLAB R2015b on a personal computer with an Intel Core i7-6500U 2.5 GHz CPU and 8 GB of memory.

4.4. Results and discussions

(1) Results for the 10-D problems: Tables 1 and 2 present statistical information in terms of the best, worst, average, and median results of the eight algorithms on the fifteen test functions with D = 10 and D = 30.
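The significance marks attached to these results come from pairwise Wilcoxon rank-sum comparisons at the 5% level, described below. A simplified, self-contained version of such a test (large-sample normal approximation, average ranks for ties, no tie correction in the variance; all names are ours) might look like:

```python
import math

def rank_sum_test(a, b):
    # Two-sided Wilcoxon rank-sum test via the normal approximation.
    # Simplified sketch: ties receive average ranks, but no tie correction
    # is applied to the variance, so this is only an approximation.
    n1, n2 = len(a), len(b)
    pooled = sorted(list(a) + list(b))
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2.0   # mean of ranks i+1 .. j
        i = j
    w = sum(ranks[v] for v in a)               # rank sum of the first sample
    mu = n1 * (n1 + n2 + 1) / 2.0              # mean of w under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (w - mu) / sigma
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return z, p

# Example: two clearly separated samples of final fitness values.
z, p = rank_sum_test([1.2, 0.9, 1.1, 1.3, 1.0, 1.15],
                     [2.2, 2.4, 2.1, 2.3, 2.5, 2.35])
print(p < 0.05)
```

In practice one would use a library implementation (for example MATLAB's ranksum, which the paper's environment provides) with an exact test for small samples; the sketch only shows the mechanics behind the 5% decision rule.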

[Fig. 4 panels: t=0, Fitness=2.0438e-01; t=5, Fitness=2.5886e-02; t=10, Fitness=6.3432e-03; t=20, Fitness=1.1402e-04]

Fig. 4. Searching behavior of multi-swarm PSO with dynamic learning strategy, step by step. Sub-swarm size N = 4, number of swarms M = 10. Blue circles represent ordinary particles, red circles represent communication particles, blue stars mark lbest, and red stars mark gbest. (For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
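The experiment illustrated in Figs. 3 and 4, Algorithm 1 applied to the 2-D Griewank function of Eq. (5), can be condensed into a runnable sketch. The sub-swarm size N = 4 and M = 10 sub-swarms follow the paper's setting; the random seed, the inertia-weight schedule implementation, and all helper names are our own assumptions.

```python
import numpy as np

# Sketch of Algorithm 1 (PSO-DLS) on the 2-D Griewank function of Eq. (5).
rng = np.random.default_rng(3)

def griewank(x):                       # Eq. (5); global minimum 0 at (0, 0)
    return (x[..., 0]**2 + x[..., 1]**2) / 4000.0 \
        - np.cos(x[..., 0]) * np.cos(x[..., 1] / np.sqrt(2.0)) + 1.0

M, N, D, iter_max = 10, 4, 2, 200      # 10 sub-swarms of 4 particles
c1 = c2 = 1.49445
x = rng.uniform(-10, 10, (M, N, D))
v = np.zeros((M, N, D))
pbest, pbest_f = x.copy(), griewank(x)

for t in range(1, iter_max + 1):
    w = 0.9 - 0.5 * t / iter_max       # inertia decreasing from 0.9 to 0.4
    lbest = pbest[np.arange(M), pbest_f.argmin(axis=1)]  # per-sub-swarm best
    united = lbest.mean(axis=0)        # united lbest of Eq. (4)
    p = t / iter_max                   # dynamic control parameter
    comm = rng.random((M, N)) < p      # True -> communication particle
    guide = np.where(comm[..., None], united, lbest[:, None, :])
    r1, r2 = rng.random((M, N, D)), rng.random((M, N, D))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (guide - x)
    x = x + v                          # Eq. (2)
    fx = griewank(x)
    improved = fx < pbest_f
    pbest[improved], pbest_f[improved] = x[improved], fx[improved]

print(round(float(pbest_f.min()), 3))  # best fitness found
```

As in Fig. 4, early iterations (small p) keep the sub-swarms searching their own regions around their lbest positions, while late iterations (p near 1) pull the population together through the united lbest.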

The best results among the eight algorithms are shown in bold. To test the hypothesis that two independent samples come from distributions with equal medians, nonparametric two-sided Wilcoxon rank sum tests [59,60] are conducted at the 5% significance level between PSO-DLS and each of the other algorithms for each problem. The test results are reported after the median values in Tables 1 and 2, summarized as "†/§/" to denote the functions for which PSO-DLS performs significantly better than, almost the same as, or significantly worse than its peer algorithm, respectively.

From Table 1, one can see that PSO-DLS provides a solution closer to the true optimum for nine of the fifteen benchmark problems (F2, F4, F6, F7, F8, F10, F13, F14, and F15), which span all types of functions: unimodal, simple multimodal, hybrid, and composition functions. According to the statistical tests, PSO-DLS dominates the other algorithms on F4, F8, F11, F13, F14, and F15. In particular, for F6, F7, and F10, PSO-DLS achieves the highest accuracy and surpasses all the other methods. On the remaining functions, PSO-DLS fails to find the best solutions: TLBO achieves better accuracy on F1 and F11, DMS-PSO finds better results on F3, F5, and F12, while ABC locates better local optima on F9. Although PSO-DLS does not find the best solutions on those problems, it achieves satisfactory accuracy with strong robustness and statistically surpasses the other classical PSOs.

Figs. 5 and 6 present the convergence characteristics, as the average best fitness value versus iteration, of each algorithm on each 10-D test function. From these figures, one can see that TLBO converges quickly at the early stage for almost all function problems, owing to its specific way of generating new solutions. On the other hand, almost all the local versions of PSO converge more slowly than global PSO because of their topology structures. Particularly for DMS-PSO and PSO-DLS, the characteristic multi-swarms of small size evidently restrain the exploitation ability. In fact, PSO-DLS has a large potential search space, and thus it cannot converge as fast as the others at the early stage. Clearly, it is the dynamic topology and dynamic learning strategy that yield the exploration ability at the expense of convergence speed. As a result, better solutions closer to the true optimum region are achieved by PSO-DLS in the end.

(2) Results for the 30-D problems: The experiments conducted on the 10-D problems are repeated on the 30-D problems, and the results are presented in Table 2. As the convergence graphs are similar to those for the 10-D problems, they are not shown. Slightly different from the 10-D results, PSO-DLS achieves the highest accuracy for only six functions, drawn from the simple multimodal, hybrid, and composition functions (F4, F6, F8, F10, F11 and F13). On the remaining functions, PSO-DLS fails to find the best solutions: DMS-PSO achieves better accuracy on five functions (F3, F7, F10, F12, and F15), while TLBO finds better results on F1, F2, and F5 from the unimodal and simple multimodal functions. Nevertheless, we observe that the algorithms achieve a ranking similar to that on the 10-D problems. According to the statistical tests, PSO-DLS dominates the other algorithms on F7, F10, and F13. Specifically, for F4, F6, F8, and F11, PSO-DLS achieves the highest accuracy and surpasses all the algorithms. Although PSO-DLS is not the most effective algorithm
838 W. Ye et al. / Applied Soft Computing 61 (2017) 832–843
Table 1
The comparison of optimization accuracy, including best, worst, average, and median, on the 10-dimensional problems, where "†/§/" denote the results for which PSO-DLS performs significantly better than, almost the same as, and significantly worse than its peer algorithm, respectively, for the corresponding functions.

          GPSO       MSPSO      FIPS       DTT-PSO    DMS-PSO    ABC        TLBO       PSO-DLS
F1  Best     8.37e−10   2.81e+02   4.26e+02   2.72e−07   2.11e−05   6.23e+03   0          1.03e−11
    Worst    3.22e−04   6.36e+04   9.83e+04   1.12e+00   2.09e−02   2.21e+05   5.68e−14   2.31e−07
    Average  1.41e−05   1.97e+04   2.89e+04   5.99e−02   4.46e−03   8.58e+04   2.37e−14   4.39e−08
    Median   3.36e−07†  1.62e+04†  2.35e+04†  7.20e−03†  2.19e−03†  9.16e+04†  1.42e−14   2.24e−08
F2  Best     1.01       5.50       49.44      1.74       15.68      1.05       0.65       0.05
    Worst    13170.68   5415.04    15272.37   13557.20   14150.21   832.13     300.24     1796.26
    Average  3353.88    974.08     3657.27    3890.00    4432.92    133.38     94.05      478.79
    Median   1165.19†   442.24§    2514.17†   3875.92†   2325.72†   71.98      51.17      309.50
F3  Best     20.01      20.03      16.60      0          0          3.88       2.32       5.68e−14
    Worst    20.40      20.32      20.53      20.52      20.07      20.10      20.44      20.26
    Average  20.17      20.16      20.02      19.70      14.33      17.62      18.99      17.90
    Median   20.13§     20.15§     20.34†     20.33†     20.03      20.07      20.31†     20.20
F4  Best     2.98       2.01       3.91       13.48      0.99       4.03       3.98       0.03
    Worst    11.94      11.97      17.00      30.92      6.92       13.07      16.81      4.97
    Average  7.03       6.54       11.46      23.00      3.96       8.47       9.47       3.39
    Median   6.96†      5.97†      12.46†     23.08†     3.98§      8.47†      9.52†      3.98
F5  Best     10.24      14.33      224.51     18.60      0.25       33.09      15.37      10.24
    Worst    716.09     506.72     1121.02    1292.18    243.27     348.33     602.85     252.54
    Average  253.39     276.65     692.77     895.00     51.86      191.15     259.18     119.32
    Median   255.29†    276.36†    666.96†    874.48†    18.66      189.31†    233.88†    129.07
F6  Best     124.42     75.96      315.32     196.82     56.23      4407.01    249.96     35.79
    Worst    2197.10    4508.42    2289.64    1496.28    1090.15    273505.32  1314.40    1123.80
    Average  663.92     1190.97    810.64     817.47     372.75     66755.19   672.89     261.79
    Median   561.78†    919.04†    696.30†    857.79†    276.83†    53906.77†  590.32†    165.59
F7  Best     0.039      0.361      1.061      0.087      0.084      0.466      0.128      0.037
    Worst    3.06       2.54       1.81       3.04       2.47       1.62       2.57       1.09
    Average  1.61       1.43       1.47       1.32       1.04       0.95       1.20       0.46
    Median   1.57†      1.54†      1.54†      1.24†      1.06†      0.92†      1.34†      0.14
F8  Best     49.23      165.37     273.68     151.36     104.10     635.51     68.72      41.27
    Worst    6137.40    2487.64    3743.34    2030.31    4902.14    55007.20   898.04     526.08
    Average  2414.30    834.41     1115.43    900.00     1886.90    11735.24   299.24     276.16
    Median   1912.53†   719.25†    909.41†    873.79†    1700.79†   6871.98†   179.56§    260.61
F9  Best     102.19     105.62     102.28     118.80     100.22     18.06      105.37     102.77
    Worst    200.50     117.50     129.80     132.52     109.94     112.67     112.31     108.48
    Average  124.54     110.41     113.66     128.07     105.36     100.52     108.11     106.58
    Median   111.94†    109.86†    112.35†    124.75†    105.09     108.65†    107.52†    106.89
F10 Best     79.56      149.65     1153.27    397.15     487.90     403.91     203.62     66.76
    Worst    2460.34    2156.24    2176.81    1931.59    2109.57    6947.17    1192.83    959.38
    Average  1122.89    952.28     1718.39    1290.00    1304.07    2703.41    772.84     424.23
    Median   1192.85†   846.23†    1733.34†   1192.87†   1287.56†   2367.63†   877.01†    366.65
F11 Best     0.58       1.28       3.53       0.66       0.29       1.34       0.88       0.45
    Worst    368.35     203.94     208.72     300.00     300.00     4.35       200.21     2.18
    Average  184.79     105.18     105.98     184.67     69.71      3.09       121.12     1.23
    Median   200.33†    200.29†    102.53†    200.25†    6.59§      3.21†      200.11†    1.23
F12 Best     102.00     102.60     101.71     100.98     100.44     102.19     101.13     100.74
    Worst    106.25     105.30     103.16     102.59     101.56     106.78     105.58     102.34
    Average  103.53     104.08     102.42     101.81     100.94     105.67     102.54     101.91
    Median   103.49†    104.09†    102.38†    101.82§    100.87     105.87†    102.44§    102.02
F13 Best     11.31      18.60      19.22      13.15      7.35       18.80      17.87      5.77
    Worst    37.00      32.73      32.55      32.93      31.83      29.81      29.13      20.64
    Average  26.65      27.40      26.39      24.02      16.43      24.74      24.13      15.76
    Median   27.11†     27.89†     26.87†     20.29†     15.60§     25.17†     25.11†     16.30
F14 Best     100        100.04     100        100        100        100        100        100
    Worst    2729.50    117.29     202.06     2665.05    2735.57    100        2731.82    100
    Average  785.31     102.68     114.30     272.45     466.84     100        817.60     100
    Median   100.00†    101.29†    101.45†    100†       100†       100§       100†       100
F15 Best     205.193    205.201    205.197    205.202    205.193    205.210    205.197    205.192
    Worst    205.241    205.354    205.208    205.215    205.209    205.232    255.911    205.202
    Average  205.211    205.238    205.204    205.207    205.198    205.220    208.590    205.199
    Median   205.213†   205.231†   205.204†   205.207†   205.197§   205.220†   205.210†   205.199
with prominent results on F2 both for the 10-D and 30-D problems, it provides better performance on the 10-D functions than on the 30-D functions, as TLBO does, whereas the other algorithms show poor searching ability on the higher-dimensional problems. For the other functions, PSO-DLS performs better than all the algorithms except the best one. To present the total performance comparison between PSO-DLS and the other algorithms, Table 3 shows the detailed results
Table 2
The comparison of optimization accuracy, including best, worst, average, and median, on the 30-dimensional problems, where "†/§/" denote the results for which PSO-DLS performs significantly better than, almost the same as, and significantly worse than its peer algorithm, respectively, for the corresponding functions.

          GPSO       MSPSO      FIPS       DTT-PSO    DMS-PSO    ABC        TLBO       PSO-DLS
F1  Best     2.21e+04   4.15e+06   2.57e+06   1.26e+05   1.58e+04   1.61e+06   1.52e+03   1.91e+04
    Worst    2.03e+07   2.06e+07   1.11e+07   9.51e+06   1.16e+07   4.03e+06   5.13e+05   6.39e+05
    Average  6.48e+06   1.03e+07   5.83e+06   1.28e+06   3.15e+06   2.92e+06   8.51e+04   1.82e+05
    Median   5.81e+06†  1.05e+07†  5.75e+06†  8.71e+05†  2.69e+06†  3.02e+06†  3.46e+04   7.82e+04
F2  Best     6.48e−08   2.91e+06   2.06e+01   7.71e−03   5.77e−03   1.30e−01   1.71e−13   8.76e−11
    Worst    1.43e−01   3.00e+07   1.16e+04   1.81e+04   1.24e+03   2.09e+02   1.80e−09   4.50e−06
    Average  6.98e−03   1.26e+07   4.24e+03   5.20e+03   1.67e+02   4.02e+01   1.22e−10   1.27e−06
    Median   1.51e−04†  1.17e+07†  3.02e+03†  1.57e+03†  3.76e+01†  1.99e+01†  3.69e−13   8.34e−07
F3  Best     20.41      20.56      20.86      20.78      20.19      20.29      20.76      20.68
    Worst    21.00      20.86      21.03      21.01      20.45      20.43      21.02      20.82
    Average  20.74      20.75      20.95      21.00      20.30      20.35      20.93      20.77
    Median   20.75§     20.76§     20.96†     20.95†     20.31      20.34      20.94†     20.78
F4  Best     41.79      50.55      71.29      144.21     17.92      61.04      63.68      14.54
    Worst    110.44     111.48     147.42     193.88     61.74      106.16     142.61     47.76
    Average  64.25      87.79      113.28     172.00     45.24      84.16      97.34      39.57
    Median   59.70†     88.10†     113.73†    176.50†    47.44†     84.41†     96.51†     42.78
F5  Best     1419.44    2570.48    5186.89    5870.23    2551.80    2177.80    1132.28    1564.64
    Worst    4579.03    4125.64    7016.50    6873.70    3990.88    2834.57    6733.25    2790.90
    Average  2948.61    3425.50    6287.79    6340.00    3246.61    2476.75    3875.29    2416.98
    Median   3022.93†   3437.39†   6350.08†   6469.30†   3216.85†   2475.73§   3172.79†   2480.61
F6  Best     9.62e+04   1.06e+05   1.03e+05   3.73e+04   1.44e+04   8.15e+05   2.57e+04   8.75e+03
    Worst    1.46e+06   3.56e+06   1.05e+06   1.67e+06   1.72e+06   5.73e+06   1.09e+05   6.00e+04
    Average  4.71e+05   8.52e+05   4.12e+05   3.68e+05   2.63e+05   3.25e+06   5.99e+04   3.71e+04
    Median   2.44e+05†  8.39e+05†  3.84e+05†  2.42e+05†  1.88e+05†  3.26e+06†  5.76e+04†  3.78e+04
F7  Best     5.18       6.77       3.80       3.41       2.76       5.33       5.88       3.66
    Worst    17.41      14.25      11.08      11.38      7.83       9.34       14.06      6.23
    Average  10.02      10.41      6.90       6.56       5.28       7.78       10.20      5.09
    Median   9.29†      10.10†     6.98†      7.04†      5.29§      7.87†      10.48†     4.99
F8  Best     3.40e+03   7.21e+03   1.07e+04   7.69e+03   3.04e+03   2.79e+05   2.88e+03   2.76e+03
    Worst    4.14e+05   2.54e+05   2.73e+05   3.42e+05   1.09e+05   9.01e+05   7.60e+04   6.24e+04
    Average  4.38e+04   8.28e+04   7.42e+04   7.71e+04   3.61e+04   5.41e+05   2.14e+04   1.76e+04
    Median   3.10e+04†  6.97e+04†  6.16e+04†  5.84e+04†  3.07e+04†  5.18e+05†  1.72e+04†  1.22e+04
F9  Best     152.78     169.31     165.50     201.00     200.83     168.77     184.12     158.33
    Worst    212.57     202.33     239.33     293.98     201.10     201.97     221.67     201.08
    Average  193.88     198.56     201.36     238.00     200.98     187.68     201.58     198.65
    Median   201.12†    201.74†    201.17†    201.28†    200.98§    186.75     201.33†    201.04
F10 Best     6.64e+03   2.84e+04   1.67e+04   3.42e+03   2.78e+03   6.61e+04   3.03e+03   2.36e+03
    Worst    3.67e+05   8.80e+05   2.53e+05   2.16e+05   1.52e+05   1.05e+06   4.40e+04   4.48e+04
    Average  8.77e+04   3.81e+05   1.06e+05   4.66e+04   3.58e+04   4.62e+05   1.63e+04   1.36e+04
    Median   2.58e+04†  3.56e+05†  9.83e+04†  1.62e+04†  2.76e+04†  4.37e+05†  1.25e+04§  1.25e+04
F11 Best     207.96     212.46     213.40     203.49     203.94     208.42     205.41     203.41
    Worst    891.36     770.25     389.61     472.89     521.31     226.31     963.06     224.32
    Average  581.00     306.93     320.43     352.00     413.68     212.62     612.00     211.26
    Median   625.14†    241.50†    349.44†    399.57†    424.04†    219.39†    824.99†    209.69
F12 Best     111.01     111.51     108.78     107.46     105.64     110.45     111.36     108.24
    Worst    116.71     114.51     110.85     111.42     109.99     111.92     118.28     110.39
    Average  113.41     113.16     109.54     110.00     108.42     111.42     114.38     109.35
    Median   113.68†    113.27†    109.54†    109.98†    108.37     111.48†    114.18†    109.39
F13 Best     93.45      95.91      100.37     100.33     87.31      97.81      105.78     82.73
    Worst    121.14     118.81     114.12     120.35     105.18     110.21     120.36     100.20
    Average  106.73     111.22     108.31     113.13     95.94      105.11     111.75     94.40
    Median   106.27†    112.52†    108.39†    114.03†    95.61§     104.98†    111.06†    95.22
F14 Best     2.85e+04   2.98e+04   2.80e+04   2.84e+04   2.80e+04   2.65e+03   2.80e+04   2.80e+04
    Worst    3.73e+04   3.25e+04   3.13e+04   3.78e+04   3.57e+04   3.08e+04   5.70e+04   3.10e+04
    Average  3.22e+04   3.11e+04   3.02e+04   3.12e+04   3.10e+04   2.67e+04   3.46e+04   2.84e+04
    Median   3.14e+04†  3.11e+04†  3.10e+04†  3.04e+04†  3.10e+04†  2.74e+04   3.23e+04†  2.84e+04
F15 Best     274.02     278.38     274.05     273.39     273.36     274.49     275.12     273.49
    Worst    278.15     283.68     275.46     273.90     275.17     274.93     282.06     274.08
    Average  275.43     281.02     274.33     273.59     273.97     274.70     278.37     273.91
    Median   275.22†    281.03†    274.29†    273.45     273.85§    274.70†    278.64†    273.95
from the non-parametric Wilcoxon rank sum tests. The number of benchmark functions for which PSO-DLS is significantly better than, almost the same as, and significantly worse than the other algorithms is summarized in this table. The total score is the difference between the "Better" score and the "Worse" score. Despite the poorer best-accuracy performance on the 30-dimensional problems than on the 10-dimensional ones, PSO-DLS maintains positive total scores.
Fig. 5. Convergence graphs of PSO-DLS and other algorithms on 10-D benchmark functions F1–F9.
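Convergence curves like those in Figs. 5 and 6 are obtained by averaging, over independent runs, the best fitness found so far at each iteration. A minimal sketch of that bookkeeping (the toy run data below are illustrative, not the paper's results):

```python
import numpy as np

def average_convergence(fitness_histories):
    """Average the best-so-far fitness across runs.

    fitness_histories: array of shape (n_runs, n_iterations) holding the
    best fitness found by each run at every iteration (minimization).
    """
    histories = np.asarray(fitness_histories, dtype=float)
    # Enforce monotonicity: the best-so-far value can never get worse.
    best_so_far = np.minimum.accumulate(histories, axis=1)
    return best_so_far.mean(axis=0)

# Toy data: 3 runs, 5 iterations.
runs = [[9.0, 7.0, 8.0, 3.0, 4.0],
        [5.0, 5.0, 2.0, 2.0, 1.0],
        [7.0, 6.0, 6.0, 4.0, 2.0]]
curve = average_convergence(runs)
print(curve)  # → [7. 6. 5. 3. 2.]
```

Plotting such a curve against the iteration index, usually on a logarithmic fitness axis as in Figs. 5 and 6, makes the early-stage exploitation and late-stage exploration trade-off visible.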
Table 3
The statistical analysis of the Wilcoxon tests: the number of benchmark functions (out of the 15 tested functions) for which PSO-DLS is significantly better than (Better), about the same as (Same), and significantly worse than (Worse) the other algorithms. The total score is calculated by subtracting Worse from Better.

           GPSO  MSPSO  FIPS  DTT-PSO  DMS-PSO  ABC  TLBO
10-D  Better  14     13    15       14        6   13    11
      Same     1      2     0        0        5    0     2
      Worse    0      0     0        1        4    2     2
      Total   14     13    15       13        2   11     9
30-D  Better  14     14    15       14        9   11    12
      Same     1      1     0        0        4    1     1
      Worse    0      0     0        1        2    3     2
      Total   14     14    15       13        7    8    10

(3) Comparison of the computational time: The computational time of all versions of PSO is presented in Table 4. All the PSO codes are simplified as far as possible for fast running. The time shown is for one hundred iterations on the 10-D problems, which is enough to compare the computational time of all the PSOs. It can be observed that the computational time of DTT-PSO is the largest: the dynamic tournament topology, which needs to select the neighborhoods for each particle, increases the computational time, and the information of every particle is exchanged at each iteration, so extra time consumption is produced. On the contrary, PSO-DLS costs much less time, as do DMS-PSO and FIPS. The time it costs is no more than twice that of the original PSO on F1 to F10, and for some functions the time costs of PSO-DLS and the original PSO are nearly equal. The computational time demonstrates the superiority of the proposed algorithm.
PSO-DLS even obtains higher total scores than most of the compared algorithms, which shows its evident advantage.
Fig. 6. Convergence graphs of PSO-DLS and other algorithms on 10-D benchmark functions F10–F15.
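Per-function timings like those reported in Table 4 are plain wall-clock measurements over one hundred iterations. A hedged sketch of such a harness, timing a bare-bones global-best PSO on a toy sphere function (the swarm size, update constants, and objective are illustrative assumptions, not the paper's exact setup):

```python
import random
import time

def time_100_iterations(dim=10, n_particles=40, seed=1):
    """Wall-clock seconds for 100 iterations of a bare-bones PSO update
    on the sphere function (an illustrative stand-in for the benchmarks)."""
    rng = random.Random(seed)

    def sphere(x):
        return sum(v * v for v in x)

    pos = [[rng.uniform(-100, 100) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=sphere)

    start = time.perf_counter()
    for _ in range(100):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if sphere(pos[i]) < sphere(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=sphere)
    return time.perf_counter() - start

print(f"100 iterations took {time_100_iterations():.4f} s")
```

Because wall-clock times fluctuate with machine load, averaging several repetitions of such a measurement, as done per function in Table 4, gives a more stable comparison.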
Table 4
The computational time of the PSOs.

Time (s)  GPSO    MSPSO   FIPS    DTT-PSO  DMS-PSO  PSO-DLS
F1        0.0116  0.0166  0.0353  0.0478   0.0178   0.0236
F2        0.0072  0.0111  0.0329  0.0484   0.0156   0.0169
F3        0.0061  0.0098  0.0357  0.0490   0.0157   0.0175
F4        0.0092  0.0159  0.0345  0.0482   0.0154   0.0177
F5        0.0110  0.0185  0.0386  0.0514   0.0200   0.0215
F6        0.0077  0.0176  0.0265  0.0486   0.0141   0.0201
F7        0.0402  0.0563  0.0802  0.0865   0.0626   0.0614
F8        0.0076  0.0176  0.0366  0.0505   0.0178   0.0190
F9        0.0181  0.0333  0.0483  0.0583   0.0321   0.0328
F10       0.0527  0.0796  0.0921  0.1121   0.0734   0.0782
F11       0.2332  0.2511  0.2666  0.2915   0.2536   0.2576
F12       0.0337  0.0395  0.0595  0.0704   0.0443   0.0467
F13       0.0368  0.0449  0.0607  0.0817   0.0445   0.0459
F14       0.0406  0.0482  0.0631  0.0874   0.0498   0.0449
F15       0.3593  0.3530  0.3712  0.3801   0.3467   0.3520

5. Conclusion

A novel multi-swarm particle swarm optimization with dynamic learning strategy (PSO-DLS) has been investigated to improve the performance of PSO. The proposed strategy splits the whole population into several sub-swarms so that the population is able to keep its diversity without the dominant influence of the global best position on all particles. However, PSO with small and separate sub-swarms becomes weaker at searching for a potential solution. Thus, the dynamic learning strategy is introduced to promote the information exchanged among sub-swarms: collective information among the separate sub-swarms is utilized through the communication particles' updating. In this manner, the separate sub-swarms keep in touch through constant collaborative information and reunite to some extent.

The simple analysis of searching behavior supports its superiority in maintaining diversity and searching for a better solution. Experimental results on the 15 function problems of CEC 2015 for 10 and 30 dimensions also demonstrate its promising effectiveness in solving complex problems, statistically compared with other algorithms. What's more, the computational times reveal the subtle design of PSO-DLS.

It is noted that the slow convergence speed arises from the topology structure of PSO-DLS: the specific characteristic of multi-swarms with small size evidently inhibits the exploitation ability. We will carry out more studies to make good use of information to increase the efficiency and effectiveness of updating and exchange. Our future work also includes applications to practical engineering problems such as vehicle routing, job-shop scheduling, and image segmentation.

Acknowledgements

This work is supported by the National Natural Science Foundation of China (Grant no. 61572233) and the National Social Science Foundation of China (Grant no. 16BTJ032). The authors would like to thank the anonymous reviewers for their helpful suggestions and comments on a previous version of the present paper.

References

[1] D.E. Goldberg, J.H. Holland, Genetic algorithms and machine learning, Mach. Learn. 3 (1988) 95–99, http://dx.doi.org/10.1023/A:1022602019183.
[2] K.S. Lee, Z.W. Geem, A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice, Comput.
Method Appl. M. 194 (2005) 3902–3933, http://dx.doi.org/10.1016/j.cma.2004.09.007.
[3] S. Kirkpatrick, C.D. Gelatt, M.P. Vecchi, Optimization by simulated annealing, Science 220 (1983) 671–680, http://dx.doi.org/10.1126/science.220.4598.671.
[4] F. Glover, Tabu search – part I, ORSA J. Comput. 1 (1989) 190–206, http://dx.doi.org/10.1287/ijoc.1.3.190.
[5] J. Zhang, A.C. Sanderson, Jade: adaptive differential evolution with optional external archive, IEEE Trans. Evol. Comput. 13 (2009) 945–958, http://dx.doi.org/10.1109/tevc.2009.2014613.
[6] A.K. Qin, V.L. Huang, P.N. Suganthan, Differential evolution algorithm with strategy adaptation for global numerical optimization, IEEE Trans. Evol. Comput. 13 (2009) 398–417, http://dx.doi.org/10.1109/tevc.2008.927706.
[7] N. Hansen, A. Ostermeier, Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation, in: Proc. IEEE Int. Conf. Evol. Comput., 1996, pp. 312–317, doi:10.1109/icec.1996.542381.
[8] S. Yazdani, H. Nezamabadi-Pour, S. Kamyab, A gravitational search algorithm for multimodal optimization, Swarm Evol. Comput. 14 (2014) 1–14, http://dx.doi.org/10.1016/j.swevo.2013.08.001.
[9] K.N. Krishnanand, D. Ghose, Glowworm swarm optimization for simultaneous capture of multiple local optima of multimodal functions, Swarm Intell. 3 (2009) 87–124, http://dx.doi.org/10.1007/s11721-008-0021-5.
[10] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Appl. Soft Comput. 8 (2008) 687–697, http://dx.doi.org/10.1016/j.asoc.2007.05.007.
[11] R.V. Rao, V.J. Savsani, D.P. Vakharia, Teaching-learning-based optimization: a novel method for constrained mechanical design optimization problems, Comput. Aided Des. 43 (2011) 303–315, http://dx.doi.org/10.1016/j.cad.2010.12.015.
[12] J. Kennedy, R. Eberhart, Particle swarm optimization, in: Proc. IEEE Int. Conf. Neural Netw., Vol. 4, 1995, pp. 1942–1948, http://dx.doi.org/10.1109/icnn.1995.488968.
[13] R. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proc. Int. Symp. on Micro Mach. and Human Sci., 1995, pp. 39–43, http://dx.doi.org/10.1109/mhs.1995.494215.
[14] R. Eberhart, Y. Shi, Particle swarm optimization: developments, applications and resources, in: Proc. Congr. Evol. Comput., Vol. 1, 2001, pp. 81–86, http://dx.doi.org/10.1109/cec.2001.934374.
[15] M. Clerc, J. Kennedy, The particle swarm-explosion, stability, and convergence in a multidimensional complex space, IEEE Trans. Evol. Comput. 6 (2002) 58–73, http://dx.doi.org/10.1109/4235.985692.
[16] F. Khoshahval, A. Zolfaghari, H. Minuchehr, M. Abbasi, A new hybrid method for multi-objective fuel management optimization using parallel PSO-SA, Prog. Nucl. Energy 76 (2014) 112–121, http://dx.doi.org/10.1016/j.pnucene.2014.05.014.
[17] R. Liu, J. Li, J. Fan, C. Mu, L. Jiao, A coevolutionary technique based on multi-swarm particle swarm optimization for dynamic multi-objective optimization, Eur. J. Oper. Res. 261 (2017) 1028–1051, http://dx.doi.org/10.1016/j.ejor.2017.03.048.
[18] R. Mohammadi, S.F. Ghomi, F. Jolai, Prepositioning emergency earthquake response supplies: a new multi-objective particle swarm optimization algorithm, Appl. Math. Model. 40 (2016) 5183–5199, http://dx.doi.org/10.1016/j.apm.2015.10.022.
[19] N. Delgarm, B. Sajadi, F. Kowsary, S. Delgarm, Multi-objective optimization of the building energy performance: a simulation-based approach by means of particle swarm optimization (PSO), Appl. Energy 170 (2016) 293–303, http://dx.doi.org/10.1016/j.apenergy.2016.02.141.
[20] A. Chander, A. Chatterjee, P. Siarry, A new social and momentum component adaptive PSO algorithm for image segmentation, Expert Syst. Appl. 38 (2011) 4998–5004, http://dx.doi.org/10.1016/j.eswa.2010.09.151.
[21] S. Suresh, S. Lal, Multilevel thresholding based on chaotic Darwinian particle swarm optimization for segmentation of satellite images, Appl. Soft Comput. 55 (2017) 503–522, http://dx.doi.org/10.1016/j.asoc.2017.02.005.
[22] M. Mahi, Ö.K. Baykan, H. Kodaz, A new hybrid method based on particle swarm optimization, ant colony optimization and 3-opt algorithms for traveling salesman problem, Appl. Soft Comput. 30 (2015) 484–490, http://dx.doi.org/10.1016/j.asoc.2015.01.068.
[23] Y. Marinakis, G.-R. Iordanidou, M. Marinaki, Particle swarm optimization for the vehicle routing problem with stochastic demands, Appl. Soft Comput. 13 (2013) 1693–1704, http://dx.doi.org/10.1016/j.asoc.2013.01.007.
[24] P. Moradi, M. Gholampour, A hybrid particle swarm optimization for feature subset selection by integrating a novel local search strategy, Appl. Soft Comput. 43 (2016) 117–130, http://dx.doi.org/10.1016/j.asoc.2016.01.044.
[25] Y. Lu, M. Liang, Z. Ye, L. Cao, Improved particle swarm optimization algorithm and its application in text feature selection, Appl. Soft Comput. 35 (2015) 629–636, http://dx.doi.org/10.1016/j.asoc.2015.07.005.
[26] L.M. Abualigah, A.T. Khader, M.A. Al-Betar, O.A. Alomari, Text feature selection with a robust weight scheme and dynamic dimension reduction to text document clustering, Expert Syst. Appl. 84 (2017) 24–36, http://dx.doi.org/10.1016/j.eswa.2017.05.002.
[27] L.-Y. Chuang, C.-H. Yang, J.-C. Li, Chaotic maps based on binary particle swarm optimization for feature selection, Appl. Soft Comput. 11 (2011) 239–248, http://dx.doi.org/10.1016/j.asoc.2009.11.014.
[28] X. Wang, S. Lv, J. Quan, The evolution of cooperation in the prisoner's dilemma and the snowdrift game based on particle swarm optimization, Physica A 482 (2017) 286–295, http://dx.doi.org/10.1016/j.physa.2017.04.080.
[29] C. Leboucher, H.-S. Shin, P. Siarry, S.L. Ménec, R. Chelouah, A. Tsourdos, Convergence proof of an enhanced particle swarm optimisation method integrated with evolutionary game theory, Inf. Sci. 346 (2016) 389–411, http://dx.doi.org/10.1016/j.ins.2016.01.011.
[30] H. Duan, C. Sun, Swarm intelligence inspired shills and the evolution of cooperation, Sci. Rep. 4 (2014) 5210, http://dx.doi.org/10.1038/srep05210.
[31] J. Kennedy, Some issues and practices for particle swarms, in: Proc. Swarm Intell. Symp., 2007, pp. 162–169, http://dx.doi.org/10.1109/sis.2007.368041.
[32] Y. Shi, R. Eberhart, A modified particle swarm optimizer, in: Proc. IEEE World Congr. Comput. Intell., 1998, pp. 69–73, http://dx.doi.org/10.1109/ICEC.1998.699146.
[33] A. Ratnaweera, S.K. Halgamuge, H.C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Trans. Evol. Comput. 8 (2004) 240–255, http://dx.doi.org/10.1109/TEVC.2004.826071.
[34] A. Nickabadi, M.M. Ebadzadeh, R. Safabakhsh, A novel particle swarm optimization algorithm with adaptive inertia weight, Appl. Soft Comput. 11 (2011) 3658–3670, http://dx.doi.org/10.1016/j.asoc.2011.01.037.
[35] G. Xu, An adaptive parameter tuning of particle swarm optimization algorithm, Appl. Math. Comput. 219 (2013) 4560–4569, http://dx.doi.org/10.1016/j.amc.2012.10.067.
[36] L. Zhang, Y. Tang, C. Hua, X. Guan, A new particle swarm optimization algorithm with adaptive inertia weight based on bayesian techniques, Appl. Soft Comput. 28 (2015) 138–149, http://dx.doi.org/10.1016/j.asoc.2014.11.018.
[37] W.-D. Chang, A modified particle swarm optimization with multiple subpopulations for multimodal function optimization problems, Appl. Soft Comput. 33 (2015) 170–182, http://dx.doi.org/10.1016/j.asoc.2015.04.002.
[38] L. Xiaodong, Adaptively choosing neighbourhood bests using species in a particle swarm optimizer for multimodal function optimization, in: Conf. Genetic and Evol. Comput., Part I, 2004, pp. 105–116, http://dx.doi.org/10.1007/978-3-540-24854-5_10.
[39] F. van den Bergh, A.P. Engelbrecht, A cooperative approach to particle swarm optimization, IEEE Trans. Evol. Comput. 8 (2004) 225–239, http://dx.doi.org/10.1109/TEVC.2004.826069.
[40] J.J. Liang, P.N. Suganthan, Dynamic multi-swarm particle swarm optimizer, in: Proc. Swarm Intell. Symp., 2005, pp. 124–129, http://dx.doi.org/10.1109/sis.2005.1501611.
[41] M. Gang, Z. Wei, C. Xiaolin, A novel particle swarm optimization algorithm based on particle migration, Appl. Math. Comput. 218 (2012) 6620–6626, http://dx.doi.org/10.1016/j.amc.2011.12.032.
[42] D. Chen, C. Zhao, Particle swarm optimization with adaptive population size and its application, Appl. Soft Comput. 9 (2009) 39–48, http://dx.doi.org/10.1016/j.asoc.2008.03.001.
[43] C. Liu, W.B. Du, W.X. Wang, Particle swarm optimization with scale-free interactions, PLoS ONE 9 (5) (2014) e97822, http://dx.doi.org/10.1371/journal.pone.0097822.
[44] J. Kennedy, Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance, in: Proc. IEEE Congr. Evol. Comput., Vol. 3, 1999, pp. 1938, http://dx.doi.org/10.1109/CEC.1999.785509.
[45] M.-R. Chen, X. Li, X. Zhang, Y.-Z. Lu, A novel particle swarm optimizer hybridized with extremal optimization, Appl. Soft Comput. 10 (2010) 367–373, http://dx.doi.org/10.1016/j.asoc.2009.08.014.
[46] A. Kaveh, T. Bakhshpoori, E. Afshari, An efficient hybrid particle swarm and swallow swarm optimization algorithm, Comput. Struct. 143 (2014) 40–59, http://dx.doi.org/10.1016/j.compstruc.2014.07.012.
[47] R. Mendes, J. Kennedy, J. Neves, The fully informed particle swarm: simpler, maybe better, IEEE Trans. Evol. Comput. 8 (2004) 204–210, http://dx.doi.org/10.1109/tevc.2004.826074.
[48] L. Wang, B. Yang, Y. Chen, Improving particle swarm optimization using multi-layer searching strategy, Inf. Sci. 274 (2014) 70–94, http://dx.doi.org/10.1016/j.ins.2014.02.143.
[49] L. Wang, B. Yang, J. Orchard, Particle swarm optimization using dynamic tournament topology, Appl. Soft Comput. 48 (2016) 584–596, http://dx.doi.org/10.1016/j.asoc.2016.07.041.
[50] N. Netjinda, T. Achalakul, B. Sirinaovakul, Particle swarm optimization inspired by starling flock behavior, Appl. Soft Comput. 35 (2015) 411–422, http://dx.doi.org/10.1016/j.asoc.2015.06.052.
[51] Y. Gao, W. Du, G. Yan, Selectively-informed particle swarm optimization, Sci. Rep. 5 (2015) 9295, http://dx.doi.org/10.1038/srep09295.
[52] C. Li, S. Yang, T.T. Nguyen, A self-learning particle swarm optimizer for global optimization problems, IEEE Trans. Syst. Man Cybern. B Cybern. 42 (2012) 627–646, http://dx.doi.org/10.1109/TSMCB.2011.2171946.
[53] Ş. Gülcü, H. Kodaz, A novel parallel multi-swarm algorithm based on comprehensive learning particle swarm optimization, Eng. Appl. Artif. Intell. 45 (2015) 33–45, http://dx.doi.org/10.1016/j.engappai.2015.06.013.
[54] J. Jie, J. Zeng, C. Han, Q. Wang, Knowledge-based cooperative particle swarm optimization, Appl. Math. Comput. 205 (2008) 861–873, http://dx.doi.org/10.1016/j.amc.2008.05.100 (special issue on Advanced Intelligent Computing Theory and Methodology in Applied Mathematics and Computation).
[55] Y. Jiang, C. Liu, C. Huang, X. Wu, Improved particle swarm algorithm for hydrological parameter optimization, Appl. Math. Comput. 217 (2010) 3207–3215, http://dx.doi.org/10.1016/j.amc.2010.08.053.
[56] X. Xu, Y. Tang, J. Li, C. Hua, X. Guan, Dynamic multi-swarm particle swarm optimizer with cooperative learning strategy, Appl. Soft Comput. 29 (2015) 169–183, http://dx.doi.org/10.1016/j.asoc.2014.12.026.
[57] S.Z. Zhao, P.N. Suganthan, Q.-K. Pan, M. Fatih Tasgetiren, Dynamic multi-swarm particle swarm optimizer with harmony search, Expert Syst. Appl. 38 (2011) 3735–3742, http://dx.doi.org/10.1016/j.eswa.2010.09.032.
[58] J. Liang, B. Qu, P. Suganthan, Problem definitions and evaluation criteria for the CEC 2015 competition on learning-based real-parameter single objective optimization, Tech. rep., Nanyang Technological University (Singapore) and Zhengzhou University (China), available at: www.ntu.edu.sg/home/epnsugan/ (Nov. 2014).
[59] F. Wilcoxon, Individual comparisons by ranking methods, Biometrics 1 (1945) 80–83.
[60] S. García, D. Molina, M. Lozano, F. Herrera, A study on the use of non-parametric tests for analyzing the evolutionary algorithms' behaviour: a case study on the CEC 2005 special session on real parameter optimization, J. Heurist. 15 (2009) 617–644, http://dx.doi.org/10.1007/s10732-008-9080-4.