A Modified Particle Swarm Optimization With Multiple Subpopulations for Multimodal Function Optimization Problems
Article history: Received 21 July 2014; Received in revised form 15 December 2014; Accepted 2 April 2015; Available online 27 April 2015

Keywords: Particle swarm optimization (PSO); Multiple subpopulations; Multimodal optimization problem

Abstract

In this paper, a modified particle swarm optimization (PSO) algorithm is developed for solving multimodal function optimization problems. The difference between the proposed method and the general PSO is that the original single population is split up into several subpopulations according to the order of particles. The best particle within each subpopulation is recorded and then used in the velocity updating formula to replace the original global best particle of the whole population. This modified velocity formula is used to update all particles in each subpopulation. Based on the idea of multiple subpopulations, several optima of a multimodal function, including the global and local solutions, may be found by these best particles separately. To show the efficiency of the proposed method, two kinds of function optimizations are provided: a single modal function optimization and a complex multimodal function optimization. Simulation results demonstrate the convergence behavior of the particles over the iterations and show that the global and local system solutions are found by the best particles of the subpopulations.

© 2015 Elsevier B.V. All rights reserved.

http://dx.doi.org/10.1016/j.asoc.2015.04.002
…a few particles for PSO, chromosomes for GA, or parameter vectors for the DE algorithm for some special purposes. Then, certain evolving mechanisms and operations are executed to achieve the system design. In [20], a sub-population genetic algorithm (SPGA) was utilized and combined with a mining gene structure technique for multiobjective flowshop scheduling problems. In the SPGA, the original population is divided into numerous subpopulations which are assigned different weights to search for optimal solutions in different directions. They claimed that this approach can consider both the expanding and converging natures of the system solutions [20]. In [21], the author presented a two-layer particle swarm optimization (TLPSO) consisting of a top layer and a bottom layer to increase the search diversity of the particles so that the drawback of trapping in some local optimum can be avoided. The developed structure contains M swarms of particles in the bottom layer and one swarm of particles in the top layer. The global best position of each swarm in the bottom layer is set to be the position of a particle in the swarm of the top layer. A mutation mechanism is then introduced into the particles of each swarm in the bottom layer so that the particles can leap over local minima to search for the global one [21]. Furthermore, a competitive and cooperative co-evolutionary approach to multiobjective PSO algorithm design was presented, in which subswarms are employed to produce reasonable problem decompositions by exploiting any correlation and interdependency between components of the problem [23]. It was validated by comparisons with existing state-of-the-art algorithms.
This paper proposes a simple modified version of the PSO algorithm with multiple subpopulations for multimodal function optimization. In the beginning, the population is split up into several subpopulations, and in each subpopulation the best particle is recorded; that is, each subpopulation has its own best particle. All particles contained in a subpopulation are updated using the information of this best particle instead of the global best particle. In this manner, multiple optima can probably be found simultaneously for the multimodal function optimization. The remainder of this paper is organized as follows. In Section 2, the difference between the general PSO algorithm and the proposed one is introduced in detail, and a modified version of the velocity updating formula is described. In Section 3, a single modal function optimization problem is first simulated by the proposed method, and the results reveal that all particles converge to the single optimum. In Section 4, the proposed method is applied to a complex multimodal function optimization problem which consists of several global and local minimum points. Simulation results reveal that the proposed method can find most of the optima. Finally, Section 5 gives a brief conclusion and future research directions.

…denoted by gbest, is the best particle in the population. Every particle contained in the population is updated according to these two main pieces of information, pbest and gbest; i.e., they guide the search direction of all particles over the search space. The pbest of each particle and the gbest of the whole population are evaluated through the corresponding cost function value: generally, a particle whose cost function value is smaller is the better one.

In the general PSO, let the vector X = [x_1, x_2, ..., x_n] be a particle, where x_i for i = 1, 2, ..., n is the decision variable or design variable of the optimized problem. The following two updating formulas are utilized to guide the movement of each particle, the velocity updating formula of Eq. (1) and the position updating formula of Eq. (2):

v_ij(t + 1) = w · v_ij(t) + c1 · r1 · (pbest_ij(t) − x_ij(t)) + c2 · r2 · (gbest_j(t) − x_ij(t)),   (1)

x_ij(t + 1) = x_ij(t) + v_ij(t + 1),   (2)

where x_ij, pbest_ij, and gbest_j represent the jth position components of the ith particle, the ith individual best particle, and the global best particle, respectively; v_ij is the jth velocity component of the ith particle; w is the inertia weight for balancing the global and local search; c1 and c2 are two positive constants assigned by the designer; and r1 and r2 are two uniformly distributed random numbers chosen from the interval [0, 1]. It is worth noting that only one global best particle of the population, gbest, is enrolled in the general PSO algorithm. In this manner, only one corresponding system solution can be caught by the gbest for the optimized problem. For the multimodal function optimization this is rather improper and not applicable, because a multimodal function possesses several optima, including global optimal solutions and local optimal solutions.

To tackle this problem, a modified version of the PSO algorithm is developed in which the concept of subpopulations is utilized for solving multimodal functions. In the proposed algorithm, the original single population is first divided into M subpopulations simply according to the order of particles. For instance, assume that there are 40 particles in the original population and that it is to be split up into four subpopulations. In this case, the first subpopulation is composed of particles number one to number ten, the second subpopulation consists of particles from number …

[Figure: schematic of the original population with N particles and its global best.]
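To make the modification concrete, the following minimal NumPy sketch performs one sweep of Eqs. (1) and (2) over the particles of a single subpopulation, with that subpopulation's best particle standing in for gbest. The function name and array layout are illustrative choices, not taken from the paper.

```python
import numpy as np

def update_subpopulation(x, v, pbest, sub_best, w=0.5, c1=0.5, c2=0.5):
    """One sweep of Eqs. (1) and (2) over the particles of a single subpopulation.

    x, v, pbest : arrays of shape (n_particles, n_dims) with positions,
                  velocities, and individual best positions.
    sub_best    : best particle of this subpopulation, shape (n_dims,);
                  it stands in for the global best gbest of Eq. (1).
    """
    r1 = np.random.rand(*x.shape)  # uniform random numbers in [0, 1]
    r2 = np.random.rand(*x.shape)
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (sub_best - x)  # Eq. (1), gbest -> subpopulation best
    x_new = x + v_new                                                 # Eq. (2)
    return x_new, v_new
```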
Table 1
Design flow chart of the modified PSO algorithm with multiple subpopulations for multimodal function optimization.

Begin → create a population consisting of N particles which are generated from some search interval [x_min, x_max] randomly → divide the population into M subpopulations simply according to the order of particles, each subpopulation including N/M particles → calculate the optimized function value for each particle and enroll the individual best particle pbest and the best particle gbest in each subpopulation, respectively → execute the velocity updating formula of Eq. (1), in which the global best is replaced by the best particle of each subpopulation, and the position updating formula of Eq. (2), respectively, for all particles of each subpopulation → …

[Figure: 3-D surface plot of f(x, y) over −6 ≤ x ≤ 6 and −6 ≤ y ≤ 6.]

I. Create a population consisting of N particles which are generated from some search interval [x_min, x_max] randomly.
II. Divide the population into M subpopulations simply according to the order of particles. Each subpopulation includes N/M particles.
III. If the number of iterations is achieved, then the algorithm stops.
IV. Calculate the optimized function value for each particle and enroll the individual best particle pbest and the best particle gbest in each subpopulation, respectively.
V. Execute the velocity updating formula of Eq. (1), in which the global best is replaced by the best particle of each subpopulation, and the position updating formula of Eq. (2), respectively, for all particles of each subpopulation.
VI. Check the position of each particle by

x_i = x_min if x_i < x_min;  x_i if x_min ≤ x_i ≤ x_max;  x_max if x_i > x_max,   for i = 1, 2, ..., n.   (3)
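As one possible reading of steps I–VI, the sketch below wraps steps IV–VI in the iteration loop of step III and applies the position check of Eq. (3) by clamping. The function and parameter names are illustrative; the cost function is supplied by the caller.

```python
import numpy as np

def modified_pso(cost, n_particles=40, n_subpops=4, n_dims=2,
                 x_min=-6.0, x_max=6.0, w=0.5, c1=0.5, c2=0.5, max_iter=100):
    # Step I: random initial positions in [x_min, x_max]; velocities start at zero.
    x = np.random.uniform(x_min, x_max, (n_particles, n_dims))
    v = np.zeros((n_particles, n_dims))
    pbest = x.copy()
    pbest_val = np.array([cost(p) for p in x])

    # Step II: split by particle order into M subpopulations of about N/M particles.
    subpops = np.array_split(np.arange(n_particles), n_subpops)

    for _ in range(max_iter):  # Step III: stop once the iteration budget is spent.
        for idx in subpops:
            # Step IV: best particle (smallest cost) recorded within this subpopulation.
            sub_best = pbest[idx[np.argmin(pbest_val[idx])]]
            # Step V: Eq. (1) with the subpopulation best in place of gbest, then Eq. (2).
            r1 = np.random.rand(len(idx), n_dims)
            r2 = np.random.rand(len(idx), n_dims)
            v[idx] = w * v[idx] + c1 * r1 * (pbest[idx] - x[idx]) + c2 * r2 * (sub_best - x[idx])
            x[idx] = x[idx] + v[idx]
            # Step VI: clamp every component to [x_min, x_max] as in Eq. (3).
            x[idx] = np.clip(x[idx], x_min, x_max)
        # Refresh the individual best particles after the move.
        vals = np.array([cost(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]

    # The best particle of each subpopulation is a candidate optimum.
    return [pbest[idx[np.argmin(pbest_val[idx])]] for idx in subpops]
```

With M = 1 this reduces to the general PSO of Eqs. (1) and (2); with M > 1 each subpopulation best may settle on a different optimum.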
3. A single modal function optimization problem
… w = 0.5, positive constants c1 = c2 = 0.5 in Eq. (1), and the search interval [x_min, x_max] = [−6, 6] in Eq. (3), respectively. Three different examinations are considered, including Case 1: only one population, Case 2: two subpopulations, and Case 3: four subpopulations, for solving such a single modal function optimization. Convergence dynamics of all particles will be …

… single modal function, the green point means the best particle in this single population, and the black point represents a common particle. In this case, all particles quickly and fully converge to the global minimum point of the optimized function after about 20 iterations. Fig. 5 displays the simulation results for Case 2 with two subpopulations. It contains two best particles (two green points) corresponding to the two subpopulations. After a few iterations, the other common black points as well as the two green points quickly converge to the same point, the red point (1, 2). In Case 3, the popula…
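Cases 1–3 can be reproduced in outline with the sketch given above. The single modal test function itself is not reproduced in this excerpt; the quadratic below, whose only minimum is at (1, 2), is a hypothetical stand-in chosen because (1, 2) is the point the particles are reported to converge to.

```python
# Hypothetical stand-in for the paper's single modal test function: a quadratic
# whose only minimum is at (1, 2), the point the particles converge to in Cases 1-3.
def single_modal(p):
    x, y = p
    return (x - 1.0) ** 2 + (y - 2.0) ** 2

for m in (1, 2, 4):  # Case 1: one population, Case 2: two, Case 3: four subpopulations
    bests = modified_pso(single_modal, n_particles=40, n_subpops=m, max_iter=100)
    print(m, [b.round(3) for b in bests])  # each subpopulation best should settle near (1, 2)
```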
Fig. 12. Case 4: two subpopulations (converging to (−2.7870, 3.1282) and (3, 2), respectively).
Fig. 14. Case 6: two subpopulations (converging to (3.5814, −1.8208) and (3, 2), respectively).
Fig. 15. Case 7: four subpopulations (converging to (−2.7870, 3.1282) and (3, 2), respectively).
Fig. 16. Case 8: four subpopulations (converging to (−3.7634, −3.2660), (3.5814, −1.8208), and (3, 2), respectively).
Fig. 17. Case 9: four subpopulations (converging to (−2.7870, 3.1282), (3.5814, −1.8208), and (3, 2), respectively).
Fig. 18. Case 10: two subpopulations and population size N = 20 (converging to (−2.7870, 3.1282) and (3, 2), respectively). Particles in the first subpopulation are denoted
by black points and particles in the second subpopulation are denoted by blue points.
Fig. 19. Case 11: two subpopulations and population size N = 30 (converging to (−3.7634, −3.2660) and (3, 2), respectively).
Fig. 20. Case 12: two subpopulations and population size N = 40 (converging to (−2.7870, 3.1282) and (3, 2), respectively).
This function is called the modified Himmelblau function, which is often used as a benchmark minimization problem. Fig. 7 displays the 3-D surface of this multimodal function over the defined space −6 ≤ x ≤ 6 and −6 ≤ y ≤ 6, and its corresponding contour is shown in Fig. 8. It can easily be found from Figs. 7 and 8 that this optimized function has four minimum points, including one global minimum at f(3, 2) = 0 and three local minimum points at f(−3.7634, −3.2660) ≈ 7.3673, f(3.5814, −1.8208) ≈ 1.5043, and f(−2.7870, 3.1282) ≈ 3.4871, respectively. In addition, the parameter settings employed in the algorithm are the same as those of the above example.

For solving such a multimodal function optimization problem, a large number of different testing cases are provided to demonstrate the efficiency of the proposed method, including a single population, two subpopulations, and four subpopulations. In Case 1, Fig. 9 shows the convergence behavior of the particles at iteration = 0, 5, 10, and 20, respectively, where the four red points represent the four minimization points and the green point denotes the global best in this single population. In this case, the only global best (green point) also quickly converges to the global minimum (x, y) = (3, 2), one of these four minimization points. However, the global best particle may instead converge to one of the other local minimum points, for example to (x, y) = (−3.7634, −3.2660) in Fig. 10 or to (x, y) = (3.5814, −1.8208) in Fig. 11, respectively, if the algorithm's initial conditions are given different values. As can be seen from Figs. 9–11, the PSO algorithm with a single population is insufficient and not applicable to solving the complex multimodal function problem. The solution derived may be a local minimum, not a global minimum of the optimized function, due to the effect of the algorithm's initial conditions.

In the following, the population is further divided into a few subpopulations so that the best particle in each subpopulation can separately search for other minimum points. To explain this, Cases 4–6 are simulated for two subpopulations under different initial conditions. Fig. 12 displays the two best particles converging to two minimum points, one being the global minimum (3, 2) and the other the local minimum (−2.7870, 3.1282), respectively. Certainly, they may converge to only one minimum point (−3.7634, −3.2660), as shown in Fig. 13, or to another two minimum points (3.5814, −1.8208) and (3, 2), as in Fig. 14. From these simulation results, it can be concluded that the PSO algorithm with two subpopulations is better than a single population for solving the multimodal function optimization. As for the algorithm with four subpopulations, Cases 7–9 are executed with the number of particles set to N = 40. All simulation results are displayed in Figs. 15–17, respectively. In Fig. 15, two minimum points, including the global minimum (3, 2) and a local minimum (−2.7870, 3.1282) of the optimized function, are separately caught by these four best particles (green points). In Fig. 16, on the other hand, the four best particles finally converge to the points (3, 2), (−3.7634, −3.2660), and (3.5814, −1.8208), …
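A corresponding multimodal run can be sketched with the earlier code. The paper's exact expression for the modified Himmelblau function is not included in this excerpt; the form below, the classic Himmelblau function plus a small quadratic bias toward (3, 2), is an assumption that yields the reported global minimum f(3, 2) = 0 and local minima close to the values quoted above.

```python
# Assumed form of the modified Himmelblau function (its expression is not reproduced
# in this excerpt): the classic Himmelblau function plus a small quadratic term that
# keeps (3, 2) as the only global minimum (f = 0) and turns the other three basins
# into local minima with values close to 7.3673, 1.5043, and 3.4871.
def modified_himmelblau(p):
    x, y = p
    return (x**2 + y - 11)**2 + (x + y**2 - 7)**2 + 0.1 * ((x - 3)**2 + (y - 2)**2)

bests = modified_pso(modified_himmelblau, n_particles=40, n_subpops=4, max_iter=200)
print([b.round(4) for b in bests])  # the subpopulation bests typically spread over several minima
```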