
International Journal of Computational Intelligence Systems, Vol. 4, No. 1 (February 2011).

Particle Swarm Optimization with Novel Processing Strategy and Its Application

Yuanxia Shen
School of Information Science and Technology, Southwest Jiaotong University, Chengdu 600031, China;
Institute of Computer Science and Technology, Chongqing University of Posts and Telecommunications,
Chongqing 400065, China;
Department of Computer Science, Chongqing University of Arts and Sciences, Chongqing 402160, China
E-mail: [email protected]

Guoyin Wang
Institute of Computer Science and Technology, Chongqing University of Posts and Telecommunications,
Chongqing 400065, China;
E-mail:[email protected]

Chunmei Tao
Institute of Computer Science and Technology, Chongqing University of Posts and Telecommunications,
Chongqing 400065, China;
E-mail:[email protected]

Received: 01-12-2009
Accepted: 02-10-2010

Abstract

The loss of population diversity is one of the main reasons that standard particle swarm optimization (SPSO) suffers from premature convergence when solving complex multimodal problems. In SPSO, the personal experience and the sharing experience are processed with a completely random strategy. Whether this completely random processing strategy is good for maintaining the population diversity is still an unsolved problem. To study this problem, this paper presents a correlation PSO model in which a novel correlative strategy is used to process the personal experience and the sharing experience. The relational expression between the correlation coefficient and the population diversity is developed through theoretical analysis, and it is found that the processing strategy with positive linear correlation is helpful for maintaining the population diversity. A positive linear correlation PSO, PLCPSO, is then proposed, in which particles adopt the positive linear correlation strategy to process the personal experience and the sharing experience. Finally, PLCPSO is applied to solve single-objective and multi-objective optimization problems. The experimental results show that PLCPSO is a robust and effective optimization method for complex optimization problems.

Keywords: Particle swarm optimization; correlation coefficient; population diversity; multi-objective optimization.

Published by Atlantis Press
Copyright: the authors

1. Introduction

Particle swarm optimization (PSO) is a swarm intelligence model inspired by social behaviors such as bird flocking and fish schooling [1]. Due to its simple operators and few parameters, PSO has been applied successfully to real-world optimization problems [2-5], including power systems, image processing, economic dispatch, neural networks and other engineering applications.

PSO emulates swarm behavior, and the individuals represent points in the search space. Assume a D-dimensional search space S ⊂ R^D and a swarm consisting of N particles. The current position of the i-th particle is a D-dimensional vector X_i = [x_i1, x_i2, ..., x_iD]^T. The velocity of the i-th particle is also a D-dimensional vector V_i = [v_i1, v_i2, ..., v_iD]^T. In every iteration, each particle is updated by following two "best" values, P_i and P_g. P_i is the best prior position of the i-th particle (also known as pbest), i.e. the personal experience. P_g is the best position found by the particles in the swarm (also known as gbest), i.e. the sharing experience. The velocity V_id and position X_id of the d-th dimension of the i-th particle are updated with the following equations:

    V_id(t+1) = w·V_id(t) + c1·r1_id(t)·(P_id(t) − X_id(t)) + c2·r2_id(t)·(P_gd(t) − X_id(t))    (1)
    X_id(t+1) = X_id(t) + V_id(t+1)    (2)

where the random factors r1_id and r2_id are two independent random numbers in the range [0, 1], w is an inertia weight, and c1 and c2 are acceleration coefficients reflecting the weighting of the stochastic acceleration terms that pull each particle toward the pbest and gbest positions, respectively. The first part of Eq. (1) represents the previous velocity, which provides the necessary momentum for particles to roam across the search space. The second part, known as the cognitive component, represents the natural tendency of individuals to return to environments where they experienced their best performance. The third part, known as the social component, represents the tendency of individuals to follow the success of other individuals.

Although PSO has been applied to solve many optimization problems successfully, it may easily suffer from premature convergence when solving complex problems. Many researchers have worked on improving the performance of PSO in various ways, and maintaining population diversity is a main objective of much of this work. Shi and Eberhart proposed a linearly decreasing inertia weight (LDIW) and a fuzzy adaptive inertia weight, which are used to balance the global and local search abilities [6-7]. To weaken the search density surrounding the historical best position found by the whole swarm in the early evolution, Asanga [8] developed a dynamic strategy in which the cognitive coefficient decreases linearly from 2.5 to 0.5 while the social coefficient increases linearly from 0.5 to 2.5. Another active research trend in PSO is the design of different topological structures. Kennedy [9] claimed that PSO with a small neighborhood might perform better on complex problems, while PSO with a large neighborhood would perform better on simple problems. The ring topological structure and the von Neumann topological structure were proposed to restrict the information interaction among particles and thereby relieve the loss of population diversity [10]. Suganthan [11] applied a dynamically adjusted neighborhood in which the neighborhood of a particle gradually increases until it includes all particles. Parsopoulos and Vrahatis [12] combined the global version and the local version together to construct a unified particle swarm optimizer (UPSO). Mendes et al. [13] proposed the fully informed particle swarm (FIPS), where the information of the entire neighborhood is used to guide the particles. To increase the population diversity, a perturbation operator [14], an evolution operator [15] and other search algorithms [16] have been introduced into PSO. In addition, Xie and Zhang [17] presented a self-organizing PSO based on the dissipative system, in which negative entropy is introduced to improve the population diversity. Jie et al. [18] introduced a knowledge billboard to record varieties of search information, taking advantage of multiple swarms to maintain the population diversity and to guide their evolution by the shared information.

However, these PSO algorithms follow the same principle: each particle adopts the completely random strategy for processing the pbest and gbest, which lets the cognitive and social components of the whole swarm contribute randomly to the position of each particle in the next iteration. Although the original objective of the completely random processing strategy is to keep the search random, it is still an unsolved problem whether this strategy is good for maintaining the population diversity. To study this problem, we propose a correlation PSO model in which a novel correlative processing strategy is used to process the pbest and gbest. The relationship between the degree of correlation and population diversity is then presented, which shows that the processing strategy with positive linear correlation has an advantage in maintaining population diversity. In order to improve the global optimization ability of PSO, a positive linear correlation PSO (PLCPSO) is proposed in the context of the correlation PSO model.

Optimization plays a major role in modern-day design and decision-making activities.
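As a concrete illustration of Eqs. (1)-(2), the SPSO update can be sketched in Python. The swarm size, iteration count, coefficient values and the Sphere objective below are illustrative assumptions, not settings taken from this paper:

```python
import random

# Illustrative constants (assumptions, not the paper's settings).
W, C1, C2 = 0.7, 1.5, 1.5       # inertia weight and acceleration coefficients
N, D = 20, 2                    # swarm size and dimensionality
LO, HI = -100.0, 100.0          # search bounds

def sphere(x):                  # objective to minimize
    return sum(v * v for v in x)

random.seed(1)
X = [[random.uniform(LO, HI) for _ in range(D)] for _ in range(N)]
V = [[0.0] * D for _ in range(N)]
P = [row[:] for row in X]                       # personal bests (pbest)
g = min(range(N), key=lambda i: sphere(P[i]))   # index of the gbest

for _ in range(300):
    for i in range(N):
        for d in range(D):
            r1, r2 = random.random(), random.random()  # independent factors
            V[i][d] = (W * V[i][d]
                       + C1 * r1 * (P[i][d] - X[i][d])   # cognitive part
                       + C2 * r2 * (P[g][d] - X[i][d]))  # social part
            X[i][d] += V[i][d]                           # Eq. (2)
        if sphere(X[i]) < sphere(P[i]):                  # update pbest
            P[i] = X[i][:]
    g = min(range(N), key=lambda i: sphere(P[i]))        # update gbest

print(sphere(P[g]))   # best fitness found
```

Note that the two random factors are drawn independently per dimension; the correlation PSO model introduced below changes exactly this step.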
In particular, multi-objective optimization (MOO) becomes a challenging problem due to the inherently conflicting nature of the objectives to be optimized. As evolutionary algorithms (EAs) can deal simultaneously with a set of possible solutions in a single run, they are especially suitable for solving MOO problems. Many evolutionary multi-objective optimization algorithms have been developed in the last few years, in areas such as evolutionary computation and swarm intelligence [19-21]. As a new form of swarm intelligence, PSO has been used to solve MOO problems. To maintain the population diversity, several techniques [21-24] have been introduced into PSO, e.g. an adaptive-grid mechanism and an adaptive mutation operation. In this paper, PLCPSO with a disturbance operation is used to solve MOO problems, where the correlative processing strategy is employed to maintain the population diversity.

The remainder of this paper is organized as follows. A positive linear correlation PSO is introduced in Section 2. Simulation experiment results on some benchmark optimization problems are discussed in Section 3. Conclusions are drawn in Section 4.

2. PLCPSO

2.1. The correlation PSO model

In PSO, each particle follows the pbest and gbest to search for a better position. Therefore, the strategy for processing the pbest and gbest influences the course of the search. From the aspect of cognition, if a particle considers the pbest important for the search toward the optimum solution, then the gbest should also be important, because the gbest is the best among all pbests. Therefore, the exploitation of the pbest and that of the gbest should be related. However, in SPSO, each particle adopts a completely random processing strategy, which exploits the pbest and gbest independently. Hence, how to make proper use of the pbest and gbest is a real concern. From Eq. (1), the acceleration coefficients and the random factors jointly affect the exploitation of the pbest and gbest. Much work has focused on adjusting the acceleration coefficients to balance global exploration and local exploitation, but little attention has been paid to the random factors. In this paper, we are concerned with the effect of the random factors on the performance of PSO. To study this problem, a correlation PSO model is presented, where the random factors are correlated and particles adopt the correlative strategy to process the pbest and gbest.

In this paper, we focus on the linear correlation between the random factors. The correlation coefficient ρ_{X,Y} is a useful tool for measuring the strength of the linear correlation between random variables X and Y [25]; thereby Spearman's ρ is used to measure the correlation of the random factors. The velocity of the i-th particle in the correlation PSO model is updated as follows:

    V_id(t+1) = w·V_id(t) + c1·r1_id(t)·(P_id(t) − X_id(t)) + c2·r2_id(t)·(P_gd(t) − X_id(t)),
    with ρ^i_{r1,r2}(t) = α, −1 ≤ α ≤ 1    (3)

where ρ^i_{r1,r2}(t) is the correlation coefficient of the random factors of the i-th particle and α is a number in [−1, 1]. The position of each particle in the correlation PSO model is updated by Eq. (2). In Eq. (3), different correlation coefficients decide different correlative strategies for processing the pbest and gbest. The range of the correlation coefficient is from −1 to 1. When α = 0, the algorithm is SPSO, where particles adopt the completely random strategy; SPSO is thus a special case of the correlation PSO model. When −1 ≤ α < 0, the algorithm is called the negative linear correlation PSO (NLCPSO), where the processing strategy with negative linear correlation means that when particles enhance the exploitation of one of the pbest and gbest, they weaken the exploitation of the other. When 0 < α ≤ 1, the algorithm is called the positive linear correlation PSO (PLCPSO), where the processing strategy with positive linear correlation means that when particles enhance the exploitation of one of the pbest and gbest, they also enhance the exploitation of the other.

The correlation PSO model extends the information processing mechanism of PSO. The population diversity affects the performance of PSO greatly, so the question for the different processing strategies is which kind of strategy can improve the global optimization ability of PSO. In the next section, the relationship between the population diversity and the processing strategies is analyzed.

2.2. Analysis of the population diversity of PLCPSO

High population diversity should directly imply that a large area of the search space is being explored [26]. Similarly, low population diversity should imply that the particles are exploiting a small area of the search space.
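The three processing strategies distinguished by Eq. (3) differ only in how the pair of random factors is drawn. A minimal Python sketch follows; the constructions below are one illustrative way to realize the three correlation coefficients exactly for uniform [0, 1] factors, together with the diversity measure of Eq. (4):

```python
import random

def draw_factors(alpha):
    """Draw (r1, r2) with correlation coefficient alpha in {-1, 0, 1}.

    For uniform [0, 1] factors, r2 = r1 realizes rho = +1 (PLCPSO),
    r2 = 1 - r1 realizes rho = -1 (NLCPSO), and an independent
    second draw realizes rho = 0 (SPSO).
    """
    r1 = random.random()
    if alpha == 1:
        return r1, r1
    if alpha == -1:
        return r1, 1.0 - r1
    return r1, random.random()

def diversity(X):
    """Population diversity of Eq. (4): mean squared deviation of the
    particle coordinates from the swarm's per-dimension mean, Eq. (5)."""
    n, d = len(X), len(X[0])
    mean = [sum(row[k] for row in X) / n for k in range(d)]
    return sum((row[k] - mean[k]) ** 2
               for row in X for k in range(d)) / (n * d)
```

Plugging `draw_factors` into the velocity update of Eq. (3) and tracking `diversity` over the iterations reproduces the kind of comparison reported in Fig. 1.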
In SPSO, the loss of population diversity is one of the main reasons that SPSO suffers from premature convergence when solving complex multimodal problems. Therefore, maintaining population diversity is an important method for improving the global optimization ability of PSO. In this section, we discuss the relationship between the population diversity and the correlation coefficient in the correlation PSO model.

Generally, the degree of dispersion of the particles in the swarm is used to measure the population diversity. This measure is given in [27] by:

    div(X(t)) = (1/ND) Σ_{i=1..N} Σ_{d=1..D} [X_id(t) − X̄_d(t)]²    (4)

where N is the swarm size, D is the dimensionality of the problem and X_id is the d-th dimension of the i-th particle position. X̄_d(t) is calculated by the following equation:

    X̄_d(t) = (1/N) Σ_{i=1..N} X_id(t)    (5)

In the correlation PSO model, particles can adopt different strategies to process the pbest and gbest. In order to obtain the relationship between the different correlative strategies and population diversity, the change of population diversity at the next time step is studied in the context of the current state. The population diversity at time step t+1 can be represented by

    div(X(t+1)) = (1/ND) Σ_{i=1..N} Σ_{d=1..D} [X_id(t+1) − X̄_d(t+1)]²    (6)

where the position and velocity of each particle at time step t and before are known. The positions of the particles at time step t+1 are calculated by Eqs. (2) and (3). Obviously, X_id(t+1) and X̄_d(t+1) are random variables because r1_id and r2_id are random numbers, so the population diversity div(X(t+1)) at time step t+1 is also a random variable.

Because of the randomness of the population diversity at time step t+1, we take the maximization of expected utility to decide which of the correlative strategies is favorable for maintaining population diversity. Expanding the square around the means and using the independence of the random factors of different particles, the expectation of div(X(t+1)) is calculated by

    E[div(X(t+1))] = (1/ND) Σ_{i=1..N} Σ_{d=1..D} E[X_id(t+1) − X̄_d(t+1)]²
                  = (1/ND) Σ_{i=1..N} Σ_{d=1..D} (1 − 1/N)·Var[X_id(t+1)] + div(E[X(t+1)])    (7)

The expectation of X_id(t+1) can be calculated as follows:

    E[X_id(t+1)] = X_id(t) + w·V_id(t) + 0.5·c1·(P_id(t) − X_id(t)) + 0.5·c2·(P_gd(t) − X_id(t))    (8)

Then the variance of X_id(t+1) can be obtained:

    Var[X_id(t+1)] = (1/12)·{c1²·[P_id(t) − X_id(t)]² + 2ρ·c1·c2·[P_id(t) − X_id(t)]·[P_gd(t) − X_id(t)] + c2²·[P_gd(t) − X_id(t)]²}    (9)

Substituting Eq. (9) into Eq. (7), we get the expectation of div(X(t+1)):

    E[div(X(t+1))] = E0[div(X(t+1))] + ((N−1)·ρ·c1·c2 / (6·N²·D)) Σ_{i=1..N} Σ_{d=1..D} [P_id(t) − X_id(t)]·[P_gd(t) − X_id(t)]    (10)

where E0[div(X(t+1))] is the expectation of the population diversity at the next time step t+1 when the correlation coefficient is zero; E0[div(X(t+1))] is therefore also the expectation of the population diversity of SPSO at the next time step. E0[div(X(t+1))] is calculated by

    E0[div(X(t+1))] = ((N−1) / (12·N²·D)) Σ_{i=1..N} Σ_{d=1..D} {c1²·[P_id(t) − X_id(t)]² + c2²·[P_gd(t) − X_id(t)]²} + div(E[X(t+1)])    (11)

To clarify the relationship between the correlation coefficient and the population diversity, it is crucial to analyze the sign of the second term in Eq. (10). The second term ΣΣ [P_id(t) − X_id(t)]·[P_gd(t) − X_id(t)] is the sum of ND products of (P_id(t) − X_id(t)) and (P_gd(t) − X_id(t)). According to the relative positions of P_id(t), X_id(t) and P_gd(t), if X_id(t) does not lie between P_gd(t) and P_id(t), then [P_id(t) − X_id(t)]·[P_gd(t) − X_id(t)] > 0; there are two such possibilities, namely X_id(t) lies either to the left or to the right of both P_gd(t) and P_id(t). If X_id(t) lies between P_gd(t) and P_id(t), then [P_id(t) − X_id(t)]·[P_gd(t) − X_id(t)] < 0. Hence, of the three orderings, two make the product positive, i.e. P([P_id(t) − X_id(t)]·[P_gd(t) − X_id(t)] > 0) = 2/3. Moreover, the swarm contains at least one particle whose best personal position P_id(t) equals the best position P_gd(t) among all the particles; in this case [P_id(t) − X_id(t)]·[P_gd(t) − X_id(t)] > 0, and then P([P_id(t) − X_id(t)]·[P_gd(t) − X_id(t)] > 0) > 2/3.

Without loss of generality, assume |[P_id(t) − X_id(t)]·[P_gd(t) − X_id(t)]| = θ + η, where θ is a positive constant and η ~ N(0, δ) is white noise. Then μ = E{[P_id(t) − X_id(t)]·[P_gd(t) − X_id(t)]} > 0.

According to the law of large numbers in probability theory, for any given number ε > 0, the following equation can be obtained:
    lim_{ND→∞} P( | (1/ND) Σ_{i=1..N} Σ_{d=1..D} [P_id(t) − X_id(t)]·[P_gd(t) − X_id(t)] − μ | < ε ) = 1    (12)

From Eq. (12), Eq. (13) can be deduced:

    lim_{ND→∞} P( (1/ND) Σ_{i=1..N} Σ_{d=1..D} [P_id(t) − X_id(t)]·[P_gd(t) − X_id(t)] > 0 ) = 1    (13)

Synthesizing Eqs. (10) and (13), the conclusion is obtained that the expectation of the population diversity increases with the correlation coefficient. This conclusion shows that PLCPSO is helpful for maintaining the population diversity, which gives particles in PLCPSO more chance to get away from local optima than in SPSO and NLCPSO. From Eq. (10), it is also obvious that NLCPSO loses the population diversity more easily than SPSO. When the correlation coefficient is set to 1, the expectation of the population diversity reaches its maximum value; likewise, when the correlation coefficient is set to −1, the expectation of the population diversity reaches its minimum value. In this paper, we only consider these two special cases, i.e. the correlation coefficient set to −1 and 1. In the following, NLCPSO and PLCPSO specifically denote the PSO algorithms in which the correlation coefficients are set to −1 and 1, respectively.

In order to test the above analysis of the population diversity, PLCPSO, SPSO and NLCPSO are run 20 times on a (unimodal) Sphere function and a (multimodal) Rastrigin function defined in Section 3. The changes of population diversity with iterations for each function are shown in Fig.1.

Fig.1 Comparison of SPSO's, PLCPSO's and NLCPSO's population diversity. (a) Curve of population diversity for the Sphere function. (b) Curve of population diversity for the Rastrigin function

As can be seen from Fig.1, PLCPSO maintains higher population diversity than SPSO and NLCPSO. The population diversity of SPSO and NLCPSO decreases with increasing iterations, which makes SPSO and NLCPSO easily get trapped in local optima in later evolution. Meanwhile, the population diversity of SPSO decreases more slowly than that of NLCPSO. The experimental results agree with the analysis of the population diversity.

2.3. Implementation of the search velocity

To enhance the speed of the search, a small modification is introduced to the velocity of the particle: if the velocity of a particle is zero, then the velocity of this particle is set to a random number in the range from the lower bound to the upper bound of the velocity. The bounds of the velocity are specified by the user and applied to clamp the maximum velocity of each particle. Usually, the bounds of the velocity are set as the search bounds of the position.

3. Applications

3.1. Experiment 1: single-objective optimization

In order to test the effectiveness of PLCPSO, six well-known single-objective benchmark functions were optimized by PSO with linearly decreasing inertia weight (PSO-LDIW), PSO with time-varying acceleration coefficients (PSO-TVAC), the fully informed particle swarm (FIPS), NLCPSO and PLCPSO.

3.1.1. Test functions

The six benchmark functions include three unimodal functions (f1–f3) and three multimodal functions (f4–f6).
The multimodal functions have complex multimodal distributions with one or multiple global optima enclosed by many local minima. All test functions are to be minimized. The properties and formulas of the functions are presented below.

Sphere's function:
    f1(x) = Σ_{i=1..D} x_i²
    x ∈ [−100, 100]^D, D = 30, min(f1) = f1(0, 0, ..., 0) = 0.

Quadric's function:
    f2(x) = Σ_{i=1..D} (Σ_{k=1..i} x_k)²
    x ∈ [−100, 100]^D, D = 30, min(f2) = f2(0, 0, ..., 0) = 0.

Rosenbrock's function:
    f3(x) = Σ_{i=1..D−1} [100·(x_{i+1} − x_i²)² + (x_i − 1)²]
    x ∈ [−10, 10]^D, D = 30, min(f3) = f3(1, 1, ..., 1) = 0.

Rastrigin's function:
    f4(x) = Σ_{i=1..D} [x_i² − 10·cos(2π·x_i) + 10]
    x ∈ [−5.12, 5.12]^D, D = 30, min(f4) = f4(0, 0, ..., 0) = 0.

Noncontinuous Rastrigin's function:
    f5(x) = Σ_{i=1..D} [y_i² − 10·cos(2π·y_i) + 10]
    where y_i = x_i if |x_i| < 0.5, and y_i = round(2·x_i)/2 if |x_i| ≥ 0.5
    x ∈ [−5.12, 5.12]^D, D = 30, min(f5) = f5(0, 0, ..., 0) = 0.

Schaffer's function:
    f6(x) = Σ_{i=1..D−1} (x_i² + x_{i+1}²)^0.25 · [sin²(50·(x_i² + x_{i+1}²)^0.1) + 1.0]
    x ∈ [−100, 100]^D, D = 30, min(f6) = f6(0, 0, ..., 0) = 0.

3.1.2. Parameter settings for the PSO algorithms

The parameter settings for PSO-LDIW, PSO-TVAC and FIPS come from Refs. [6], [8] and [13]. In FIPS, the ring topology structure is implemented with weighted FIPS for a higher success ratio, as recommended in [13]. In PSO-LDIW, FIPS, NLCPSO and PLCPSO, the inertia weight is decreased linearly from 0.9 to 0.4, and c1 = c2 = 2. In PSO-TVAC, the cognitive coefficient decreases linearly from 2.5 to 0.5, while the social coefficient increases linearly from 0.5 to 2.5. For a fair comparison among all the PSO algorithms, they are tested using the same population size of 40. Further, the maximum number of fitness evaluations (FEs) is set at 200000 for each test function. To reduce statistical errors, each function is independently simulated 30 times, and the mean results are used in the comparison.

3.1.3. Experiment results and discussion

Table 1 presents the means and standard deviations of the 30 runs of the five algorithms on the six test functions. The best results among the five algorithms are shown in bold. Fig.2 presents the comparison in terms of the convergence characteristics of the evolutionary processes of each algorithm for each test function.

Table 1 Results of different PSO algorithms

Function            PSO-LDIW      PSO-TVAC     FIPS         NLCPSO        PLCPSO
f1  Mean            7.321×10^-140 6.628×10^-81 5.466×10^-6  2.243×10^-191 8.371×10^-101
    Std. Dev        2.011×10^-137 9.538×10^-79 1.387×10^-5  2.739×10^-194 9.426×10^-105
f2  Mean            3.455×10^-42  6.390×10^-22 7.882×10^-3  4.990×10^-119 1.982×10^-13
    Std. Dev        5.330×10^-36  9.816×10^-20 6.001×10^-2  3.221×10^-120 1.012×10^-14
f3  Mean            29.82         13.84        21.99        1.102         2.239
    Std. Dev        20.53         18.77        23.32        5.628         1.872×10^-3
f4  Mean            28.56         26.72        51.29        48.97         1.414×10^-9
    Std. Dev        17.35         24.93        22.87        10.76         8.322×10^-8
f5  Mean            31.66         25.12        46.99        45.89         3.947×10^-5
    Std. Dev        11.57         13.81        21.13        10.12         6.445×10^-4
f6  Mean            7.857         8.531×10^-1  25.91        2.379         7.548×10^-2
    Std. Dev        9.891         1.676        13.64        5.967         2.126×10^-3
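For reference, three of the benchmarks of Section 3.1.1 can be written directly in Python. This is a sketch: the rounding rule in f5 follows the definition above, implemented here with Python's built-in `round` (which rounds exact halves to the nearest even value, a minor deviation from round-half-up):

```python
import math

def sphere(x):
    """Unimodal Sphere function f1."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Multimodal Rastrigin function f4."""
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def noncont_rastrigin(x):
    """Noncontinuous Rastrigin function f5: coordinates with |x_i| >= 0.5
    are snapped to the nearest multiple of 0.5 before applying the f4 form."""
    y = [v if abs(v) < 0.5 else round(2.0 * v) / 2.0 for v in x]
    return rastrigin(y)
```

All three attain their minimum value 0 at the origin, matching the definitions above.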
Fig.2 Convergence curves of each algorithm on each test function. (a) Sphere's function. (b) Quadric's function. (c) Rosenbrock's function. (d) Rastrigin's function. (e) Noncontinuous Rastrigin's function. (f) Schaffer's function

For the unimodal functions, the results show that NLCPSO achieved the best means, because its low population diversity enhances its local search ability. PLCPSO also obtains good results, especially for the difficult Rosenbrock's function. For the multimodal functions, PLCPSO surpasses all the other algorithms and avoids premature convergence, which benefits from its high population diversity. The experimental results show that low population diversity is helpful for simple unimodal functions, while high population diversity is good for complex multimodal problems.

Comparing the results and the convergence graphs among these five algorithms, PSO-LDIW, PSO-TVAC and NLCPSO get trapped in local optima for the difficult unimodal functions (e.g. Rosenbrock's function f3) and for the multimodal functions because of the rapid loss of population diversity. FIPS with a ring topology has a local neighborhood, which can avoid falling into premature convergence; however, the local neighborhood leads FIPS to converge slowly, and FIPS cannot achieve satisfactory results.

Since PLCPSO has high population diversity, it cannot converge as fast as NLCPSO on unimodal functions; hence PLCPSO does not perform best on simple unimodal functions. According to the "no free lunch" theorem [28], one algorithm cannot offer better performance than all the others on every aspect or on every kind of problem. Therefore, we may not expect the best performance on all classes of problems, as the proposed PLCPSO focuses on improving the performance of PSO on complex multimodal problems.

3.2. Experiment 2: multi-objective optimization (MOO)

3.2.1. Basic concepts of MOO

In general, many real-world applications involve complex optimization problems with various competing specifications and constraints. Without loss of generality, we consider a minimization problem with decision space Y, a subset of the real vector space. The minimization problem seeks a parameter set y for

    min_{y∈Y} F(y), y ∈ R^D    (14)

where y = [y1, y2, ..., yD] is a vector with D decision variables and F = [f1, f2, ..., fM] are the M objectives to be minimized.

In the absence of any preference information, a set of solutions is obtained in which each solution is equally significant. The obtained set of solutions is called the non-dominated or Pareto optimal set of solutions. A solution y = [y1, y2, ..., yD] dominates z = [z1, z2, ..., zD] if and only if y is partially less than z, i.e.,

    ∀i ∈ {1, ..., M}: f_i(y) ≤ f_i(z)  ∧  ∃i ∈ {1, ..., M}: f_i(y) < f_i(z)    (15)

The front obtained by mapping the Pareto optimal set (OS) into the objective space is called the Pareto optimal front (POF):

    POF = { f = (f_1(x), ..., f_M(x)) | x ∈ OS }    (16)

The determination of a complete POF is a very difficult task owing to the presence of a large number of suboptimal Pareto fronts. Considering the existing memory constraints, the determination of the complete Pareto front becomes infeasible, and this requires the solutions to be diverse, covering the maximum possible regions of the front.

3.2.2. Performance metrics

Knowledge of the Pareto front of a problem provides an alternative for selection from a list of efficient solutions. It thus helps in making decisions, and the knowledge gained can also be used in situations where the requirements are continually changing. In order to provide a quantitative assessment of the performance of a multi-objective optimizer, two issues are taken into consideration, i.e. the convergence to the Pareto-optimal set and the maintenance of diversity in the solutions of the Pareto-optimal set. In this paper, the convergence metric γ [22] and the diversity metric δ [22] are used as quantitative measures. The convergence metric measures the extent of convergence of the obtained set of solutions: the smaller the value of γ, the better the convergence toward the POF. The diversity metric measures the spread of the solutions lying on the POF: for the most widely and uniformly spread-out set of non-dominated solutions, the diversity metric δ is very small.

3.2.3. Description of PLC-MOPSO

This section describes the application of PLCPSO to MOO problems, called PLC-MOPSO. The motivation is to attain better convergence to the Pareto-optimal front. In PSO, the term gbest represents the best solution obtained by the whole swarm, but the conflicting nature of the multiple objectives involved in MOO problems often makes the choice of a single optimum solution difficult. To resolve this problem, the concept of non-dominance is used and an archive of non-dominated solutions is maintained, from which a solution is picked as the gbest in PLC-MOPSO. The historical archive stores non-dominated solutions to prevent the loss of good particles. The archive is updated at each cycle: if a candidate solution is not dominated by any member of the archive, it is added to the archive; likewise, any archive members dominated by this solution are removed from the archive. To obtain more solutions, a disturbance operation is applied to randomly selected non-dominated solutions in the archive. PLC-MOPSO is described in Fig.3.

3.2.4. Benchmark problems and PLC-MOPSO performance
Particle Swarm Optimization with Novel Processing Strategy and Its Application

Table 2 Definition of the MOO problems


/Ns: size of the swarm; MaxIter: maximum member
of iterations; d: the dimensions of the search space./ Test problem Definition
(1) t = 0, randomly initialize S0; /St: swarm at
Schaffer’s Minimize F=(f1(x),f2(x)), where
iteration t / study(SCH) f1 ( x) = x 2
• initialize xi,j, i∈{1, . . . ,Ns} and j∈{1, . . . ,d}
f 2 ( x) = ( x − 2) 2
/ xi,j: the j-th coordinate of the i-th particle /
• initialize vi,j, i∈{1, . . . ,Ns} and j∈{1, . . . ,d} x ∈ [−103 ,103 ]
Fonseca’s and Minimize F=(f1(x),f2(x)), where
/vi,j: the velocity of i-th particle in j-th dimension / Fleming’s study
( )
2
f1 ( x) = 1 − exp(−∑ i =1 xi − 1/ 3 )
3
• Pi ← xi, i∈{1, . . . ,Ns} / Pi: the coordinate of (FON)
the personal best of the i-th particle / f ( x) = 1 − exp(−∑ ( x + 1/ 3 ) )
3 2
2 i =1 i
(2)Evaluate each of the particles in S0.
xi ∈ [−4, 4], i = 1, 2,3
(3)A0←non_dominated(S0) /returns the non-
Poloni’s study Minimize F=(f1(x),f2(x)), where
dominated solutions from the swarm; At: archive at (POL) f1 ( x) = [1 + ( A1 − B1 )2 + ( A2 − B2 )2 ]
iteration t /
f 2 ( x) = [( x1 + 3)2 + ( x2 + 1)2 ]
(4) for t = 1 to t = MaxIter:,
A1 = 0.5sin1 − 2cos1 + sin 2 − 1.5cos 2
• for i = 1 to i = Ns / update the swarm /
A2 = 1.5sin1 − cos1 + 2sin 2 − 0.5cos 2
/ updating the velocity of each particle /
B1 = 0.5sin x1 − 2cos x1 + sin x2 − 1.5cos x2
• vi= w vi + c1r1(Pi -xi) + c2r2(Regb-xi) B2 = 1.5sin x1 − cos x1 + 2sin x2 − 0.5cos x2
• ρt=1; r1=rand( ); r2= r1 xi ∈[−π , π ], i = 1, 2
/ Regb is a value that is randomly taken form the
Kursawe’s study Minimize F=(f1(x),f2(x)), where
archive/ (KUR) f1 ( x ) = ∑ i =1[−10 exp(−0.2 xi2 + xi2+1 )]
2
/updating coordinates /
• xi = xi + vi f 2 ( x ) = ∑ i =1[| xi |0.8 +5sin( xi3 )]
3

(5) Evaluate each of the particles in St. xi ∈ [−5,5], i = 1, 2,3


(6) /updating the archive /
At←non_dominated(St).
(7)/disturbance operation to the randomly selected Table 3 Results of the convergence metric for test problems
solutions in At/ γ MOPSO IPSO PLC-MOPSO
At←Selected (non_dominated(St))(1+0.5*rand( )); Best 2.941e-3 2.258e-3 2.612e-3
(8) END while Mean 3.242e-3 2.638e-3 2.567e-3
Return At SCH
Worst 3.799e-3 3.430e-3 3.031e-3
Std 4.900e-4 4.410e-4 4.562e-4
Fig.3 Pseudo code of PLC-MOPSO Best 1.506e-03 1.378e-03 1.234e-03
Mean 1.806e-03 1.517e-03 1.448e-03
FON
Worst 2.418e-03 2.437e-03 2.123e-03
In the context of MOO, the benchmark problems must
Std 1.100e-03 3.000e-04 2.900e-04
pose sufficient difficulty to impede searching for the Best 1.540e-02 9.431e-03 9.960e-03
Pareto optimal solutions. Mean 1.694e-02 1.253e-02 1.254e-02
POL
In this paper, four benchmark problems are selected to Worst 1.820e-02 1.343e-03 1.270e-03
test the performance of the proposed PLC-MOPSO. Std 2.300e-06 1.400e-06 1.100e-06
Many researchers such as the authors in [21], [23] and Best 2.136e-02 2.634e-02 2.156e-02
[24] have applied these problems to examine their Mean 2.647e-02 3.128e-02 3.045e-02
KUR
Worst 3.242e-02 3.715e-02 3.167e-02
proposed algorithms. The definition of these test
Std 2.700e-04 4.500e-04 3.800e-04
functions is summarized in Table 2.
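For reference, the four benchmark objectives of Table 2 can be written down directly in Python. This is an illustrative sketch, not code from the paper; the function names sch, fon, pol and kur are ours.

```python
import math

def sch(x):
    # Schaffer's study: a single decision variable, x in [-1e3, 1e3]
    return (x ** 2, (x - 2) ** 2)

def fon(x):
    # Fonseca's and Fleming's study: x_i in [-4, 4], i = 1..3
    s = 1.0 / math.sqrt(3)
    f1 = 1 - math.exp(-sum((xi - s) ** 2 for xi in x))
    f2 = 1 - math.exp(-sum((xi + s) ** 2 for xi in x))
    return (f1, f2)

def pol(x):
    # Poloni's study: x1, x2 in [-pi, pi]
    a1 = 0.5 * math.sin(1) - 2 * math.cos(1) + math.sin(2) - 1.5 * math.cos(2)
    a2 = 1.5 * math.sin(1) - math.cos(1) + 2 * math.sin(2) - 0.5 * math.cos(2)
    b1 = (0.5 * math.sin(x[0]) - 2 * math.cos(x[0])
          + math.sin(x[1]) - 1.5 * math.cos(x[1]))
    b2 = (1.5 * math.sin(x[0]) - math.cos(x[0])
          + 2 * math.sin(x[1]) - 0.5 * math.cos(x[1]))
    f1 = 1 + (a1 - b1) ** 2 + (a2 - b2) ** 2
    f2 = (x[0] + 3) ** 2 + (x[1] + 1) ** 2
    return (f1, f2)

def kur(x):
    # Kursawe's study: x_i in [-5, 5], i = 1..3
    f1 = sum(-10 * math.exp(-0.2 * math.sqrt(x[i] ** 2 + x[i + 1] ** 2))
             for i in range(2))
    f2 = sum(abs(xi) ** 0.8 + 5 * math.sin(xi ** 3) for xi in x)
    return (f1, f2)
```

For SCH, for instance, the trade-off between the two objectives occurs for x in [0, 2], which is where its true Pareto front is traced out.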
In this experiment, the maximum number of fitness evaluations (FE) is set at 10000, and the population size is set at 100 for all problems.

Results for the convergence metric obtained using PLC-MOPSO are given in Table 3, where the results of MOPSO and IPSO come from Ref. [24]. From the results, it is evident that PLC-MOPSO converges better than MOPSO on all of the test functions. For SCH and POL, the performance of PLC-MOPSO is very close to that of IPSO in the convergence metric, and PLC-MOPSO converges slightly better than IPSO for FON and KUR. PLC-MOPSO and IPSO have better diversity than MOPSO for SCH, POL and FON. However, for KUR, MOPSO performs better in the diversity metric than PLC-MOPSO and IPSO.

Table 5 Results of the diversity metric for the test problems

δ             MOPSO       IPSO        PLC-MOPSO
SCH   Best    3.847e-1    3.385e-1    3.134e-1
      Mean    4.524e-1    4.388e-1    4.412e-1
      Worst   5.319e-1    5.189e-1    4.819e-1
      Std     3.570e-3    3.430e-3    3.541e-3
FON   Best    2.987e-1    2.751e-1    2.856e-1
      Mean    3.729e-1    3.162e-1    3.098e-1
      Worst   4.527e-1    3.794e-1    3.527e-1
      Std     8.500e-3    1.140e-4    9.800e-3
POL   Best    2.896e-1    2.962e-1    2.755e-1
      Mean    3.726e-1    3.140e-1    3.041e-1
      Worst   4.826e-1    3.419e-1    3.154e-1
      Std     2.435e-3    1.980e-4    2.000e-4
KUR   Best    3.725e-1    3.927e-1    3.913e-1
      Mean    4.541e-1    4.408e-1    4.106e-1
      Worst   4.286e-1    4.939e-1    4.912e-1
      Std     8.470e-4    1.200e-3    1.500e-3

Y. X. Shen, G. Y. Wang and C. M. Tao

In order to clearly visualize the quality of the solutions obtained, the obtained Pareto fronts have been plotted together with the true POF. As can be seen from Fig. 4, the front obtained by PLC-MOPSO has a high extent of coverage and uniform diversity for all test problems. In a word, the performance of PLC-MOPSO is better than that of MOPSO, and close to that of IPSO in both the convergence metric and the diversity metric. It must be noted that MOPSO adopts an adaptive mutation operator and an adaptive-grid division strategy to improve its search potential, while IPSO adopts search methods including an adaptive-grid mechanism, a self-adaptive mutation operator, and a novel decision-making strategy to enhance the balance between the exploration and exploitation capabilities. PLC-MOPSO adopts only the disturbance operation to solve MOO problems, and no other parameters are introduced. This shows that the correlative strategy in PLC-MOPSO plays an important role in the global search for MOO problems.
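The mechanics discussed above (the Pareto-dominance archive filter, the positively correlated velocity update with r2 = r1, and the disturbance operation of step (7) in Fig. 3) can be sketched in Python as follows. This is our illustrative reading of the pseudocode, not the authors' code; the inertia weight and acceleration coefficients shown are common PSO defaults rather than values stated in this excerpt.

```python
import random

def dominates(a, b):
    # a dominates b: no worse in every objective, strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    # keep only the points that no other point dominates (the archive contents)
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

def plc_velocity(v, x, pbest, regb, w=0.729, c1=1.49445, c2=1.49445):
    # PLC strategy: the two random factors are fully positively correlated,
    # r2 = r1 (correlation coefficient rho = 1), instead of being drawn
    # independently as in standard PSO
    r1 = random.random()
    r2 = r1
    return [w * vj + c1 * r1 * (pj - xj) + c2 * r2 * (gj - xj)
            for vj, xj, pj, gj in zip(v, x, pbest, regb)]

def disturb(solution):
    # step (7) of Fig. 3: scale a randomly selected archive member
    # by (1 + 0.5 * rand())
    factor = 1 + 0.5 * random.random()
    return [xj * factor for xj in solution]
```

Note that with c1 = c2 and Pi = Regb, the correlated update collapses to a single attraction term scaled by 2·c1·r1; it is this coupling of the two stochastic terms that distinguishes the PLC strategy from the independently sampled r1, r2 of SPSO.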

Fig. 4 Nondominated solutions obtained by PLC-MOPSO for the four MOO problems, plotted in objective space (F1 vs. F2) together with the true Pareto front: (a) SCH, (b) FON, (c) POL, (d) KUR.
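Tables 3 and 5 compare the algorithms via a convergence metric (γ) and a diversity metric (δ) whose exact definitions are not repeated in this excerpt. A common formulation in the MOO literature, which we assume here, measures respectively the mean distance from the obtained front to a dense sampling of the true POF, and the uniformity of the gaps between consecutive front points; a sketch under that assumption:

```python
import math

def euclid(a, b):
    # Euclidean distance between two objective vectors
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def convergence_metric(front, true_front):
    # gamma-like metric: mean distance from each obtained point to its
    # nearest neighbour on a dense sampling of the true Pareto front
    return sum(min(euclid(p, q) for q in true_front) for p in front) / len(front)

def diversity_metric(front):
    # delta-like spread: mean absolute deviation of the gaps between
    # consecutive front points, normalized by the mean gap
    # (0 for perfectly uniform spacing)
    pts = sorted(front)
    gaps = [euclid(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    mean_gap = sum(gaps) / len(gaps)
    return sum(abs(g - mean_gap) for g in gaps) / (len(gaps) * mean_gap)
```

Lower values are better for both quantities, which matches the direction of comparison used in the discussion of Tables 3 and 5 above.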

4. Conclusions

The contribution of this paper includes three parts. Firstly, a correlation PSO model is proposed, which extends the information-processing mechanism of particles and opens new avenues for improving the performance of PSO. Secondly, the relational expression between the correlation of the random factors and the population diversity is derived through theoretical analysis, leading to an important conclusion: a processing strategy with positive linear correlation is helpful for maintaining the population diversity. Thirdly, PLCPSO is presented, in which particles adopt the positive-linear-correlation strategy to process the personal experience and the sharing experience. This strategy maintains the population diversity and thereby improves the global search ability of PSO.

Experimental results show that PLCPSO is competitive in performance with SPSO on complex multimodal single-objective and multi-objective optimization problems. Another attractive property of PLCPSO is that it introduces no complex operations or new parameters into the PSO framework; PLCPSO is therefore as simple and easy to implement as SPSO. Apart from PLCPSO, it is interesting that NLCPSO surpasses the other algorithms on unimodal functions, though it may easily suffer from premature convergence.

In order to improve the performance of PSO in complex dynamic environments, our future research will be devoted to dynamic processing strategies.

Acknowledgements

This paper is supported by the National Natural Science Foundation of China under Grant No. 60773113; the Natural Science Foundation of Chongqing under Grants No. 2008BA2017 and No. 2008BA2041; the Science & Technology Research Program of Chongqing Education Commission under Grant No. KJ090512; and the Chongqing Key Lab of Computer Network and Communication Technology under Grant No. CY-CNCL-2009-03.

References

1. J. Kennedy and R. C. Eberhart, Particle swarm optimization, in Proceedings of the International Conference on Neural Networks (Perth, Australia, 1995), pp. 1942-1948.
2. D. V. Yamille, K. V. Ganesh and M. Salman, Particle swarm optimization: basic concepts, variants and applications in power systems, IEEE Transactions on Evolutionary Computation 12(2) (2008) 171-195.
3. M. Madhubanti and C. Amitava, A hybrid cooperative-comprehensive learning based PSO algorithm for image segmentation using multilevel threshold, Expert Systems with Applications 34(2) (2008) 1341-1350.
4. A. Mahor, V. Prasad and S. Rangnekar, Economic dispatch using particle swarm optimization: a review, Renewable and Sustainable Energy Reviews 13(8) (2009) 2134-2141.
5. J. B. Yu, S. J. Wang and L. F. Xi, Evolving artificial neural networks using an improved PSO and DPSO, Neurocomputing 71(4-6) (2008) 1054-1060.
6. Y. H. Shi and R. C. Eberhart, A modified particle swarm optimizer, in Proceedings of the IEEE Congress on Evolutionary Computation (Piscataway, NJ, 1998), pp. 69-73.
7. Y. H. Shi and R. C. Eberhart, Particle swarm optimization with fuzzy adaptive inertia weight, in Proceedings of the Workshop on Particle Swarm Optimization (Indianapolis, IN, 2001), pp. 101-106.
8. R. Asanga, K. H. Saman and C. W. Harry, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Transactions on Evolutionary Computation 8(3) (2004) 240-255.
9. J. Kennedy, Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance, in Proceedings of the IEEE Congress on Evolutionary Computation (Washington, DC, USA, 1999), pp. 1931-1938.
10. J. Kennedy and R. Mendes, Population structure and particle swarm performance, in Proceedings of the IEEE Congress on Evolutionary Computation (Honolulu, HI, USA, 2002), pp. 1671-1676.
11. P. N. Suganthan, Particle swarm optimizer with neighborhood operator, in Proceedings of the IEEE Congress on Evolutionary Computation (Washington, DC, USA, 1999), pp. 1958-1962.
12. K. E. Parsopoulos and M. N. Vrahatis, UPSO: a unified particle swarm optimization scheme, in Proceedings of Computational Methods in Sciences and Engineering (Zeist, Netherlands, 2004), pp. 868-873.
13. R. Mendes, J. Kennedy and J. Neves, The fully informed particle swarm: simpler, maybe better, IEEE Transactions on Evolutionary Computation 8(3) (2004) 204-210.
14. X. C. Zhao, A perturbed particle swarm algorithm for numerical optimization, Applied Soft Computing 10(1) (2010) 119-124.
15. W. J. Zhang and X. F. Xie, DEPSO: hybrid particle swarm with differential evolution operator, in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics (Washington, DC, USA, 2003), pp. 3816-3821.
16. W. B. Langdon and P. Riccardo, Evolving problems to learn about particle swarm optimizers and other search algorithms, IEEE Transactions on Evolutionary Computation 11(5) (2007) 561-579.
17. X. F. Xie, W. J. Zhang and D. C. Bi, Optimizing semiconductor devices by self-organizing particle swarm, in Proceedings of the IEEE Congress on Evolutionary Computation (Oregon, USA, 2004), pp. 2017-2022.
18. J. Jie, J. C. Zeng, C. Z. Han and Q. H. Wang, Knowledge-based cooperative particle swarm optimization, Applied Mathematics and Computation 205(2) (2008) 861-873.
19. Y. B. Liu and J. Huang, A novel fast multi-objective evolutionary algorithm for QoS multicast routing in MANET, International Journal of Computational Intelligence Systems 2(3) (2009) 288-297.
20. S. K. Chaharsooghi and A. H. M. Kermani, An effective ant colony optimization algorithm (ACO) for multi-objective resource allocation problem (MORAP), Applied Mathematics and Computation 200(1) (2008) 167-177.
21. C. A. C. Coello, G. T. Pulido and M. S. Lechuga, Handling multiple objectives with particle swarm optimization, IEEE Transactions on Evolutionary Computation 8(3) (2004) 256-279.
22. P. K. Tripathi, S. Bandyopadhyay and S. K. Pal, Multi-objective particle swarm optimization with time variant inertia and acceleration coefficients, Information Sciences 177(22) (2007) 5033-5049.
23. D. S. Liu, K. C. Tan, C. K. Goh and W. K. Ho, A multiobjective memetic algorithm based on particle swarm optimization, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 37(1) (2007) 42-61.
24. S. Agrawal, Y. Dashora, M. K. Tiwari and Y. J. Son, Interactive particle swarm: a Pareto-adaptive metaheuristic to multiobjective optimization, IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans 38(2) (2008) 258-278.
25. J. A. Rice, Mathematical Statistics and Data Analysis, 2nd edn. (Thomson Learning, Wadsworth, 2004).
26. O. Olorunda and A. P. Engelbrecht, Measuring exploration/exploitation in particle swarms using swarm diversity, in Proceedings of the IEEE Congress on Evolutionary Computation (Hong Kong, China, 2008), pp. 1128-1134.
27. Y. H. Shi and R. C. Eberhart, Population diversity of particle swarms, in Proceedings of the IEEE Congress on Evolutionary Computation (Hong Kong, China, 2008), pp. 1063-1068.
28. D. H. Wolpert and W. G. Macready, No free lunch theorems for optimization, IEEE Transactions on Evolutionary Computation 1(1) (1997) 67-82.
