Particle Swarm Optimization With Novel Processing Strategy and Its Application
Yuanxia Shen
School of Information Science and Technology, Southwest Jiaotong University, Chengdu 600031, China;
Institute of Computer Science and Technology, Chongqing University of Posts and Telecommunications,
Chongqing 400065, China;
Department of Computer Science, Chongqing University of Arts and Sciences, Chongqing 402160, China
E-mail: [email protected]
Guoyin Wang
Institute of Computer Science and Technology, Chongqing University of Posts and Telecommunications,
Chongqing 400065, China;
E-mail: [email protected]
Chunmei Tao
Institute of Computer Science and Technology, Chongqing University of Posts and Telecommunications,
Chongqing 400065, China;
E-mail: [email protected]
Accepted: 01-12-2009
Received: 02-10-2010
Abstract
The loss of population diversity is one of the main reasons that lead standard particle swarm optimization (SPSO) to suffer from premature convergence when solving complex multimodal problems. In SPSO, the personal experience and the sharing experience are processed with a completely random strategy. Whether this completely random processing strategy is good for maintaining the population diversity is still an unsolved problem. To study this problem, this paper presents a correlation PSO model in which a novel correlative strategy is used to process the personal experience and the sharing experience. A relational expression between the correlation coefficient and the population diversity is developed through theoretical analysis. It is found that a processing strategy with positive linear correlation is helpful for maintaining the population diversity. A positive linear correlation PSO (PLCPSO) is then proposed, in which particles adopt the positive linear correlation strategy to process the personal experience and the sharing experience. Finally, PLCPSO is applied to single-objective and multi-objective optimization problems. The experimental results show that PLCPSO is a robust and effective optimization method for complex optimization problems.
Keywords: Particle swarm optimization; correlation coefficient; population diversity; multi-objective optimization.
1. Introduction

Particle swarm optimization (PSO) is a swarm intelligence model inspired by social behaviors such as bird flocking and fish schooling [1]. Due to its simple operators and few parameters, PSO has been applied successfully to real-world optimization problems [2-5], including power systems, image processing, economic dispatch, neural networks and various engineering applications. PSO emulates swarm behavior, and the individuals represent points in the search space. Assume a D-dimensional search space $S \subset R^D$ and a swarm consisting of N particles. The current position of the i-th particle is a D-dimensional vector $X_i = [x_{i1}, x_{i2}, \ldots, x_{iD}]^T$. The velocity of the i-th particle is also a D-dimensional vector $V_i = [v_{i1}, v_{i2}, \ldots, v_{iD}]^T$. In every iteration, each particle is updated by following two "best" values, called $P_i$ and $P_g$. $P_i$ is the best prior position of the i-th particle (also known as pbest), i.e. the personal experience. $P_g$ is the best position found by the particles in the swarm (also known as gbest), i.e. the sharing experience. The velocity $V_{id}$ and position $X_{id}$ of the d-th dimension of the i-th particle are updated with the following equations:

$$V_{id}(t+1) = wV_{id}(t) + c_1 r1_{id}(t)\,(P_{id}(t) - X_{id}(t)) + c_2 r2_{id}(t)\,(P_{gd}(t) - X_{id}(t)) \qquad (1)$$

$$X_{id}(t+1) = X_{id}(t) + V_{id}(t+1) \qquad (2)$$

where the random factors $r1_{id}$ and $r2_{id}$ are two independent random numbers in the range [0, 1]; w is the inertia weight; $c_1$ and $c_2$ are acceleration coefficients reflecting the weighting of the stochastic acceleration terms that pull each particle toward the pbest and gbest positions, respectively. The first part of Eq. (1) represents the previous velocity, which provides the necessary momentum for particles to roam across the search space. The second part, known as the cognitive component, represents the natural tendency of individuals to return to environments where they experienced their best performance. The third part, known as the social component, represents the tendency of individuals to follow the success of other individuals.

Although PSO has been applied successfully to many optimization problems, it may easily suffer from premature convergence when solving complex problems. Many researchers have worked on improving the performance of PSO in various ways, and maintaining the population diversity is a main objective of much of this work. Shi and Eberhart proposed a linearly decreasing inertia weight (LDIW) and a fuzzy adaptive inertia weight, which are used to balance the global and local search abilities [6-7]. To weaken the search density surrounding the historical best position found by the whole swarm in the early evolution, Asanga [8] developed a dynamic strategy in which the cognitive coefficient decreases linearly from 2.5 to 0.5 while the social coefficient increases linearly from 0.5 to 2.5. Another active research trend in PSO is the design of different topological structures. Kennedy [9] claimed that PSO with a small neighborhood might perform better on complex problems, while PSO with a large neighborhood would perform better on simple problems. The ring topology and the von Neumann topology were proposed to restrict the information interaction among particles and so relieve the loss of population diversity [10]. Suganthan [11] applied a dynamically adjusted neighborhood in which the neighborhood of a particle gradually increases until it includes all particles. Parsopoulos and Vrahatis [12] combined the global version and the local version to construct a unified particle swarm optimizer (UPSO). Mendes et al. [13] proposed the fully informed particle swarm (FIPS), where the information of the entire neighborhood is used to guide the particles. To increase the population diversity, a perturbation operator [14], an evolution operator [15] and other search algorithms [16] have been introduced into PSO. In addition, Xie and Zhang [17] presented a self-organizing PSO based on the dissipative system, in which negative entropy is introduced to improve the population diversity. Jie et al. [18] introduced a knowledge billboard to record varieties of search information and took advantage of multiple swarms to maintain the population diversity and guide their evolution by the shared information.

However, these PSO algorithms follow the same principle: each particle adopts the completely random strategy for processing the pbest and gbest, which lets the cognitive and social components of the whole swarm contribute randomly to the position of each particle in the next iteration. Although the original objective of the completely random processing strategy is to keep the randomness of the search, whether this strategy is good for maintaining the population diversity is still an unsolved problem. To study this problem, we propose a correlation PSO model in which a novel correlative processing strategy is used to process the pbest and gbest. Then the relationship between the degree of correlation and the population diversity is presented, which shows that the processing strategy with positive linear correlation has the advantage of maintaining population diversity. In order to improve the global optimization ability of PSO, a positive linear correlation PSO (PLCPSO) is proposed in the context of the correlation PSO model.

Optimization plays a major role in modern-day design and decision-making activities. In particular, multi-objective optimization (MOO) is a challenging problem due to the inherently conflicting nature of the objectives to be optimized. As evolutionary algorithms (EAs) can deal simultaneously with a set of possible solutions in a single run, they are especially suitable for solving MOO problems, and many evolutionary multi-objective optimization algorithms have been developed in the last few years, in both evolutionary computation and swarm intelligence [19-21]. As a new form of swarm intelligence, PSO has been used to solve MOO problems; to maintain the population diversity, several techniques [21-24] have been introduced into it, e.g. an adaptive-grid mechanism and an adaptive mutation operation.

In this paper, we focus on the linear correlation between the random factors. The correlation coefficient, Spearman's $\rho_{X,Y}$, is a useful tool for measuring the strength of the linear correlation between random variables X and Y [25], so Spearman's ρ is used here to measure the correlation of the random factors. The velocity of the i-th particle in the correlation PSO model is updated as follows:

$$\begin{cases} V_{id}(t+1) = wV_{id}(t) + c_1 r1_{id}(t)\,(P_{id}(t) - X_{id}(t)) + c_2 r2_{id}(t)\,(P_{gd}(t) - X_{id}(t)) \\ \rho^{i}_{r_1,r_2}(t) = \alpha, \quad (-1 \le \alpha \le 1) \end{cases} \qquad (3)$$

where $\rho^{i}_{r_1,r_2}(t)$ is the correlation coefficient of the random factors $r1_{id}$ and $r2_{id}$.
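The updates of Eqs. (1)-(3) can be sketched in a few lines of Python. This is an illustrative sketch, not the authors' code; the array names (`X`, `V`, `P`, `Pg`) and the helper `correlated_factors` are ours. The two extreme strategies studied in this paper are obtained by setting r2 = r1 (Spearman correlation +1) and r2 = 1 − r1 (Spearman correlation −1); independent factors recover the standard SPSO update.

```python
import numpy as np

def correlated_factors(shape, alpha, rng):
    """Random factors r1, r2 in [0, 1) whose Spearman correlation is
    alpha, for the three cases considered in the paper: -1, 0 and 1."""
    r1 = rng.random(shape)
    if alpha == 1:          # positive linear correlation (PLCPSO)
        r2 = r1
    elif alpha == -1:       # negative linear correlation (NLCPSO)
        r2 = 1.0 - r1
    elif alpha == 0:        # independent factors (SPSO)
        r2 = rng.random(shape)
    else:
        raise ValueError("only alpha in {-1, 0, 1} is sketched here")
    return r1, r2

def pso_step(X, V, P, Pg, w=0.9, c1=2.0, c2=2.0, alpha=0, rng=None):
    """One synchronous update of Eqs. (1)-(3) for the whole swarm.
    X, V, P are (N, D) arrays; Pg is a (D,) array."""
    if rng is None:
        rng = np.random.default_rng()
    r1, r2 = correlated_factors(X.shape, alpha, rng)
    V = w * V + c1 * r1 * (P - X) + c2 * r2 * (Pg - X)   # Eq. (1) / Eq. (3)
    X = X + V                                            # Eq. (2)
    return X, V
```

For alpha = 0 this reduces to the standard SPSO update of Eqs. (1)-(2); alpha = ±1 gives the two correlative strategies analysed in the following section.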
where N is the swarm size, D is the dimensionality of the problem and $X_{id}$ is the d-th dimension of the i-th particle's position. $\bar{X}_d(t)$ is calculated by the following equation:

$$\bar{X}_d(t) = \sum_{i=1}^{N} X_{id}(t)/N \qquad (5)$$

In the correlation PSO model, particles can adopt different strategies to process the pbest and gbest. In order to obtain the relationship between the different correlative strategies and the population diversity, the change of the population diversity at the next time step is studied in the context of the current state. The population diversity at time step t+1 can be represented by

$$div(X(t+1)) = \frac{1}{ND}\sum_{i=1}^{N}\sum_{d=1}^{D}\left[X_{id}(t+1) - \bar{X}_d(t+1)\right]^2 \qquad (6)$$

where the position and velocity of each particle at time step t and before are known. The positions of the particles at time step t+1 are calculated by Eqs. (2) and (3). Obviously, $X_{id}(t+1)$ and $\bar{X}_d(t+1)$ are random variables because $r1_{id}$ and $r2_{id}$ are random numbers, so the population diversity div(X(t+1)) at time step t+1 is also a random variable.

Because of this randomness, we take the maximization of expected utility to decide which of the correlative strategies is favorable for maintaining the population diversity. The expectation of div(X(t+1)) is calculated by

$$\begin{aligned}
E[div(X(t+1))] &= \frac{1}{ND}\sum_{i=1}^{N}\sum_{d=1}^{D} E\left[X_{id}(t+1) - \bar{X}_d(t+1)\right]^2 \\
&= \frac{1}{ND}\sum_{i=1}^{N}\sum_{d=1}^{D} E\left[X_{id}(t+1) - E(X_{id}(t+1)) + E(X_{id}(t+1)) - \bar{X}_d(t+1)\right]^2 \\
&= \frac{1}{ND}\sum_{i=1}^{N}\sum_{d=1}^{D}\Big\{E\left[X_{id}(t+1) - EX_{id}(t+1)\right]^2 - 2E\big(\left[X_{id}(t+1) - EX_{id}(t+1)\right]\left[\bar{X}_d(t+1) - EX_{id}(t+1)\right]\big) \\
&\qquad\qquad + E\left[\bar{X}_d(t+1) - EX_{id}(t+1)\right]^2\Big\} \\
&= \frac{1}{ND}\sum_{i=1}^{N}\sum_{d=1}^{D}\Big\{Var[X_{id}(t+1)] - \frac{2}{N}Var[X_{id}(t+1)] + \frac{1}{N^2}\sum_{j=1}^{N}Var[X_{jd}(t+1)] \\
&\qquad\qquad + \Big(\frac{1}{N}\sum_{j=1}^{N}E[X_{jd}(t+1)] - E[X_{id}(t+1)]\Big)^2\Big\} \\
&= \frac{1}{ND}\sum_{i=1}^{N}\sum_{d=1}^{D}\Big(1-\frac{1}{N}\Big)Var[X_{id}(t+1)] + div(E[X(t+1)]) \qquad (7)
\end{aligned}$$

Substituting Eq. (9) into Eq. (7), we get the expectation of div(X(t+1)):

$$E[div(X(t+1))] = E_0[div(X(t+1))] + \frac{\rho c_1 c_2}{6ND}\sum_{i=1}^{N}\sum_{d=1}^{D}\big\{[P_{id}(t) - X_{id}(t)][P_{gd}(t) - X_{id}(t)]\big\} \qquad (10)$$

where $E_0[div(X(t+1))]$ is the expectation of the population diversity at the next time step t+1 when the correlation coefficient is zero; $E_0[div(X(t+1))]$ is therefore also the expectation of the population diversity of SPSO at the next time step. It is calculated by

$$E_0[div(X(t+1))] = \frac{N-1}{12N^2 D}\sum_{i=1}^{N}\sum_{d=1}^{D}\big\{c_1^2[P_{id}(t) - X_{id}(t)]^2 + c_2^2[P_{gd}(t) - X_{id}(t)]^2\big\} + div(E[X(t+1)]) \qquad (11)$$

To clarify the relationship between the correlation coefficient and the population diversity, it is crucial to analyze the sign of the second term in Eq. (10). This term, $\sum\sum[P_{id}(t)-X_{id}(t)][P_{gd}(t)-X_{id}(t)]$, is the sum of ND products of $(P_{id}(t)-X_{id}(t))$ and $(P_{gd}(t)-X_{id}(t))$. According to the relative positions of $P_{id}(t)$, $X_{id}(t)$ and $P_{gd}(t)$, if $X_{id}(t)$ does not lie between $P_{gd}(t)$ and $P_{id}(t)$, then $[P_{id}(t)-X_{id}(t)][P_{gd}(t)-X_{id}(t)]>0$; there are two such possibilities, namely $X_{id}(t)$ lies either to the left or to the right of both $P_{gd}(t)$ and $P_{id}(t)$. If $X_{id}(t)$ lies between $P_{gd}(t)$ and $P_{id}(t)$, then $[P_{id}(t)-X_{id}(t)][P_{gd}(t)-X_{id}(t)]<0$. Hence the probability of the inequality $[P_{id}(t)-X_{id}(t)][P_{gd}(t)-X_{id}(t)]>0$ is 2/3, i.e. $P([P_{id}(t)-X_{id}(t)][P_{gd}(t)-X_{id}(t)]>0)=2/3$. Moreover, the swarm contains at least one particle whose personal best position $P_{id}(t)$ equals the best position $P_{gd}(t)$ among all the particles; in this case $[P_{id}(t)-X_{id}(t)][P_{gd}(t)-X_{id}(t)]>0$, and therefore $P([P_{id}(t)-X_{id}(t)][P_{gd}(t)-X_{id}(t)]>0)>2/3$.

Without loss of generality, assume $|[P_{id}(t)-X_{id}(t)][P_{gd}(t)-X_{id}(t)]| = \theta + \eta$, where θ is a positive constant and $\eta \sim N(0,\delta)$ is white noise. Then $\mu = E\{[P_{id}(t)-X_{id}(t)][P_{gd}(t)-X_{id}(t)]\} > 0$.

According to the law of large numbers in probability theory, for any given number ε>0, the following equation can be obtained.
$$\lim_{ND\to\infty} P\left(\left|\frac{1}{ND}\sum_{i=1}^{N}\sum_{d=1}^{D}\big\{[P_{id}(t) - X_{id}(t)][P_{gd}(t) - X_{id}(t)]\big\} - \mu\right| < \varepsilon\right) = 1 \qquad (12)$$

Eq. (12) implies Eq. (13):

$$\lim_{ND\to\infty} P\left(\frac{1}{ND}\sum_{i=1}^{N}\sum_{d=1}^{D}\big\{[P_{id}(t) - X_{id}(t)][P_{gd}(t) - X_{id}(t)]\big\} > 0\right) = 1 \qquad (13)$$

Combining Eqs. (10) and (13), we conclude that the expectation of the population diversity increases with the correlation coefficient. This conclusion shows that PLCPSO is helpful for maintaining the population diversity, which gives particles in PLCPSO more chance of getting away from local optima than in SPSO and NLCPSO. From Eq. (10), it is also obvious that NLCPSO loses the population diversity more easily than SPSO. When the correlation coefficient is set to 1, the expectation of the population diversity reaches its maximum value; likewise, when the correlation coefficient is set to -1, it reaches its minimum value. In this paper, we consider only these two special cases, i.e. the correlation coefficient set to -1 and to 1. In the following, NLCPSO and PLCPSO specifically denote the PSO algorithms in which the correlation coefficient is set to -1 and 1, respectively.

In order to test the above analysis of the population diversity, PLCPSO, SPSO and NLCPSO are run 20 times on a (unimodal) Sphere function and a (multimodal) Rastrigin function defined in Section 3. The changes of population diversity with iterations for each function are shown in Fig. 1. As can be seen from Fig. 1, PLCPSO maintains a higher population diversity than SPSO and NLCPSO. The population diversity of SPSO and NLCPSO decreases with increasing iterations, which makes SPSO and NLCPSO easily get trapped in local optima in the later evolution. Meanwhile, the population diversity of SPSO decreases more slowly than that of NLCPSO. The experimental results agree with the analysis of the population diversity.

[Fig. 1. Comparison of the population diversity of SPSO, PLCPSO and NLCPSO. (a) Population diversity curves for the Sphere function. (b) Population diversity curves for the Rastrigin function.]

2.3. Implementation of search velocity

To enhance the speed of the search, a small modification is introduced to the velocity of the particle. If the velocity of a particle is zero, then the velocity of this particle is set to a random number in the range from the lower bound to the upper bound of the velocity. The bounds of the velocity are specified by the user and applied to clamp the maximum velocity of each particle. Usually, the bounds of the velocity are set as the search bounds of the position.

3. Applications

3.1. Experiment 1: single-objective optimization

In order to test the effectiveness of PLCPSO, six famous single-objective benchmark functions were optimized by PSO with linearly decreasing inertia weight (PSO-LDIW), PSO with time-varying acceleration coefficients (PSO-TVAC), the fully informed particle swarm (FIPS), NLCPSO and PLCPSO.

3.1.1. Test Functions

The six benchmark functions include three unimodal functions (f1~f3) and three multimodal functions (f4~f6). The multimodal functions have complex multimodal distributions with one or multiple global optima enclosed by many local minima. All test functions have to be minimized. The properties and formulas of the functions are presented below.

Sphere's function
$$f_1(x) = \sum_{i=1}^{D} x_i^2,$$
x∈[-100,100], D=30, min(f1) = f1(0,0,…,0) = 0.

Quadric's function
$$f_2(x) = \sum_{i=1}^{D}\Big(\sum_{k=1}^{i} x_k\Big)^2,$$
x∈[-100,100], D=30, min(f2) = f2(0,0,…,0) = 0.

Rosenbrock's function
$$f_3(x) = \sum_{i=1}^{D-1}\big[100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2\big],$$
x∈[-10,10], D=30, min(f3) = f3(1,1,…,1) = 0.

Rastrigin's function
$$f_4(x) = \sum_{i=1}^{D}\big(x_i^2 - 10\cos(2\pi x_i) + 10\big)$$

3.1.2. Parameters Setting for PSO Algorithms

The parameter settings for PSO-LDIW, PSO-TVAC and FIPS come from Refs. [6], [8] and [13]. In FIPS, the ring topology is implemented with weighted FIPS for a higher success ratio, as recommended in [13]. In PSO-LDIW, FIPS, NLCPSO and PLCPSO, the inertia weight decreases linearly from 0.9 to 0.4, and c1 = c2 = 2. In PSO-TVAC, the cognitive coefficient decreases linearly from 2.5 to 0.5, while the social coefficient increases linearly from 0.5 to 2.5. For a fair comparison, all the PSO algorithms are tested with the same population size of 40. Further, the maximum number of fitness evaluations (FEs) is set to 200000 for each test function. To reduce statistical errors, each function is independently run 30 times, and the mean results are used in the comparison.
Fig. 2. Convergence curves of the compared PSO algorithms on the test functions. (a) Sphere's function. (b) Quadric's function. (c) Rosenbrock's function. (d) Rastrigin's function. (e) Noncontinuous Rastrigin's function. (f) Schaffer's function.
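A bare-bones version of the experimental setup above (population 40, inertia weight decreasing from 0.9 to 0.4, c1 = c2 = 2) can be sketched as follows. This is an illustrative simplification, not the authors' code: it runs only the Sphere function with a small iteration budget and omits the zero-velocity re-randomization of Section 2.3.

```python
import numpy as np

def sphere(x):
    """f1: Sphere's function, evaluated row-wise on an (n, D) array."""
    return np.sum(x ** 2, axis=-1)

def plcpso(f, dim=30, n=40, iters=500, lo=-100.0, hi=100.0, alpha=1, seed=0):
    """Bare-bones PSO with linearly decreasing inertia weight and
    c1 = c2 = 2. alpha = 1 couples the random factors (r2 = r1), i.e. the
    positive linear correlation strategy; alpha = 0 gives plain SPSO."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n, dim))
    V = np.zeros((n, dim))
    P = X.copy()
    pbest = f(P)
    g = np.argmin(pbest)
    Pg, gbest = P[g].copy(), pbest[g]
    vmax = hi - lo                        # velocity clamped to the search range
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters         # 0.9 -> 0.4 linear schedule
        r1 = rng.random((n, dim))
        r2 = r1 if alpha == 1 else rng.random((n, dim))
        V = np.clip(w * V + 2.0 * r1 * (P - X) + 2.0 * r2 * (Pg - X),
                    -vmax, vmax)
        X = np.clip(X + V, lo, hi)
        fx = f(X)
        better = fx < pbest
        P[better], pbest[better] = X[better], fx[better]
        g = np.argmin(pbest)
        if pbest[g] < gbest:
            Pg, gbest = P[g].copy(), pbest[g]
    return gbest
```

Running `plcpso(sphere)` returns the best Sphere value found; longer runs only improve the incumbent, since gbest is monotonically non-increasing.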
For the unimodal functions, NLCPSO achieved the best means because its low population diversity enhances its local search ability; PLCPSO obtained good results, especially for the difficult Rosenbrock's function. For the multimodal functions, PLCPSO surpasses all the other algorithms and avoids premature convergence, which benefits from its high population diversity. The experimental results show that low population diversity is helpful for simple unimodal functions, while high population diversity is good for complex multimodal problems.

Comparing the results and the convergence graphs among these five algorithms, PSO-LDIW, PSO-TVAC and NLCPSO get trapped in local optima on the difficult unimodal functions (e.g. Rosenbrock's function f3) and on the multimodal functions because of the rapid loss of population diversity. FIPS with a ring topology has a local neighborhood, which can avoid premature convergence; however, the local neighborhood leads FIPS to converge slowly, and FIPS cannot achieve satisfactory results.

Since PLCPSO has a high population diversity, it cannot converge as fast as NLCPSO on unimodal functions; hence PLCPSO does not perform best on simple unimodal functions. According to the "no free lunch" theorem [28], one algorithm cannot offer better performance than all the others on every aspect or on every kind of problem. Therefore, we may not expect the best performance on all classes of problems, as the proposed PLCPSO focuses on improving the performance of PSO on complex multimodal problems.

3.2. Experiment 2: multi-objective optimization (MOO)

3.2.1. Basic concepts on MOO

In general, many real-world applications involve complex optimization problems with various competing specifications and constraints. Without loss of generality, we consider a minimization problem with decision space Y, a subset of the real numbers. The minimization problem seeks a parameter set y for

$$\min_{y\in Y} F(y), \quad y \in R^D \qquad (14)$$

where y = [y1, y2, . . . , yD] is a vector of D decision variables and F = [f1, f2, . . . , fM] are the M objectives to be minimized.

In the absence of any preference information, a set of solutions is obtained in which each solution is equally significant. This obtained set of solutions is called the non-dominated, or Pareto optimal, set of solutions. A solution y = [y1, y2, . . . , yD] dominates z = [z1, z2, . . . , zD] if and only if y is partially less than z, i.e.,

$$\forall i \in \{1,\ldots,M\}: f_i(y) \le f_i(z) \;\wedge\; \exists i \in \{1,\ldots,M\}: f_i(y) < f_i(z) \qquad (15)$$

The front obtained by mapping the Pareto optimal set (OS) into the objective space is called the Pareto optimal front (POF):

$$POF = \{\,f = (f_1(x), \ldots, f_M(x)) \mid x \in OS\,\} \qquad (16)$$

The determination of a complete POF is a very difficult task, owing to the presence of a large number of suboptimal Pareto fronts. Considering the existing memory constraints, the determination of the complete Pareto front becomes infeasible and thus requires the solutions to be diverse, covering the maximum possible regions.

3.2.2. Performance metrics

The knowledge of the Pareto front of a problem provides an alternative for selection from a list of efficient solutions. It thus helps in making decisions, and the knowledge gained can also be used in situations where the requirements are continually changing. In order to provide a quantitative assessment of the performance of an MO optimizer, two issues are taken into consideration, i.e. the convergence to the Pareto-optimal set and the maintenance of diversity in the solutions of the Pareto-optimal set. In this paper, the convergence metric γ [22] and the diversity metric δ [22] are used as quantitative measures. The convergence metric measures the extent of convergence of the obtained set of solutions: the smaller the value of γ, the better the convergence toward the POF. The diversity metric measures the spread of the solutions lying on the POF: for the most widely and uniformly spread-out set of non-dominated solutions, the diversity metric δ would be very small.

3.2.3. Description of PLC-MOPSO

This section describes the application of PLCPSO to MOO problems, called PLC-MOPSO. The motivation is to attain better convergence to the Pareto-optimal front. In PSO, the term gbest represents the best solution obtained by the whole swarm. The conflicting nature of the multiple objectives involved in MOO problems often makes the choice of a single optimum solution difficult. To resolve this problem, the concept of non-dominance is used and an archive of non-dominated solutions is maintained, from which a solution is picked as the gbest in PLC-MOPSO. The historical archive stores non-dominated solutions to prevent the loss of good particles. The archive is updated at each cycle: if the candidate solution is not dominated by any member of the archive, it is added to the archive; likewise, any archive members dominated by this solution are removed from the archive. To obtain more solutions, a disturbance operation is applied to randomly selected non-dominated solutions in the archive. PLC-MOPSO is described in Fig. 3.

3.2.4. Benchmark problems and PLC-MOPSO performance

PLC-MOPSO is compared with MOPSO and IPSO. In order to clearly visualize the quality of the obtained solutions, the obtained Pareto fronts have been plotted together with the POF. As can be seen from Fig. 4, the front obtained by PLC-MOPSO has a high extent of coverage and uniform diversity on all test problems. In a word, the performance of PLC-MOPSO is better than that of MOPSO, and is nearly close to that of IPSO in the convergence metric and the diversity metric. It must be noted that MOPSO adopts an adaptive mutation operator and an adaptive-grid division strategy to improve its search potential, while IPSO adopts search methods including an adaptive-grid mechanism, a self-adaptive mutation operator, and a novel decision-making strategy to enhance the balance between its exploration and exploitation capabilities. PLC-MOPSO adopts only the disturbance operation, and no other parameters are introduced. This shows that the correlative strategy in PLC-MOPSO plays an important role in the global search for MOO problems.

Table 5. Results of the diversity metric δ for the test problems

| Problem | δ     | MOPSO     | IPSO      | PLC-MOPSO |
|---------|-------|-----------|-----------|-----------|
| SCH     | Best  | 3.847e-01 | 3.385e-01 | 3.134e-01 |
|         | Mean  | 4.524e-01 | 4.388e-01 | 4.412e-01 |
|         | Worst | 5.319e-01 | 5.189e-01 | 4.819e-01 |
|         | Std   | 3.570e-03 | 3.430e-03 | 3.541e-03 |
| FON     | Best  | 2.987e-01 | 2.751e-01 | 2.856e-01 |
|         | Mean  | 3.729e-01 | 3.162e-01 | 3.098e-01 |
|         | Worst | 4.527e-01 | 3.794e-01 | 3.527e-01 |
|         | Std   | 8.500e-03 | 1.140e-04 | 9.800e-03 |
| POL     | Best  | 2.896e-01 | 2.962e-01 | 2.755e-01 |
|         | Mean  | 3.726e-01 | 3.140e-01 | 3.041e-01 |
|         | Worst | 4.826e-01 | 3.419e-01 | 3.154e-01 |
|         | Std   | 2.435e-03 | 1.980e-04 | 2.000e-04 |
| KUR     | Best  | 3.725e-01 | 3.927e-01 | 3.913e-01 |
|         | Mean  | 4.541e-01 | 4.408e-01 | 4.106e-01 |
|         | Worst | 4.286e-01 | 4.939e-01 | 4.912e-01 |
|         | Std   | 8.470e-04 | 1.200e-03 | 1.500e-03 |
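The dominance relation of Eq. (15) and the archive-update rule described in Section 3.2.3 can be sketched as follows. The function names are ours, duplicates are rejected for simplicity, and the disturbance operation is omitted.

```python
import numpy as np

def dominates(fy, fz):
    """Eq. (15): y dominates z iff y is no worse in every objective
    and strictly better in at least one (objective vectors fy, fz)."""
    fy, fz = np.asarray(fy), np.asarray(fz)
    return bool(np.all(fy <= fz) and np.any(fy < fz))

def update_archive(archive, candidate):
    """Insert `candidate` (an objective vector) into the list of
    non-dominated solutions, dropping any members it dominates."""
    if any(np.array_equal(a, candidate) or dominates(a, candidate)
           for a in archive):
        return archive                    # dominated or duplicate: reject
    kept = [a for a in archive if not dominates(candidate, a)]
    kept.append(candidate)
    return kept
```

By construction, after any sequence of updates the archive contains only mutually non-dominated objective vectors, from which a gbest can then be selected.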
[Fig. 4. Non-dominated solutions obtained by PLC-MOPSO, plotted in the F1-F2 objective space together with the true Pareto front, for four MOO problems: (a) SCH, (b) FON, (c) POL, (d) KUR.]
in Proceedings of the IEEE Congress on Evolutionary Computation (Oregon, USA, 2004), pp. 2017-2022.
18. J. Jie, J. C. Zeng, C. Z. Han and Q. H. Wang, Knowledge-based cooperative particle swarm optimization, Applied Mathematics and Computation 205(2) (2008) 861-873.
19. Y. B. Liu and J. Huang, A novel fast multi-objective evolutionary algorithm for QoS multicast routing in MANET, International Journal of Computational Intelligence Systems 2(3) (2009) 288-297.
20. S. K. Chaharsooghi and A. H. M. Kermani, An effective ant colony optimization algorithm (ACO) for multi-objective resource allocation problem (MORAP), Applied Mathematics and Computation 200(1) (2008) 167-177.
21. C. A. C. Coello, G. T. Pulido and M. S. Lechuga, Handling multiple objectives with particle swarm optimization, IEEE Transactions on Evolutionary Computation 8(3) (2004) 256-279.
22. P. K. Tripathi, S. Bandyopadhyay and S. K. Pal, Multi-objective particle swarm optimization with time variant inertia and acceleration coefficients, Information Sciences 177(22) (2007) 5033-5049.
23. D. S. Liu, K. C. Tan, C. K. Goh and W. K. Ho, A multiobjective memetic algorithm based on particle swarm optimization, IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics 37(1) (2007) 42-61.
24. S. Agrawal, Y. Dashora, M. K. Tiwari and Y. J. Son, Interactive particle swarm: a Pareto-adaptive metaheuristic to multiobjective optimization, IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans 38(2) (2008) 258-278.
25. J. A. Rice, Mathematical Statistics and Data Analysis, 2nd edn. (Thomson Learning, Wadsworth, 2004).
26. O. Olorunda and A. P. Engelbrecht, Measuring exploration/exploitation in particle swarms using swarm diversity, in Proceedings of the IEEE Congress on Evolutionary Computation (Hong Kong, China, 2008), pp. 1128-1134.
27. Y. H. Shi and R. C. Eberhart, Population diversity of particle swarms, in Proceedings of the IEEE Congress on Evolutionary Computation (Hong Kong, China, 2008), pp. 1063-1068.
28. D. H. Wolpert and W. G. Macready, No free lunch theorems for optimization, IEEE Transactions on Evolutionary Computation 1(1) (1997) 67-82.