Particle Swarm Optimization
A tutorial prepared for SEAL'06. Xiaodong Li, School of Computer Science and IT, RMIT University, Melbourne, Australia
Outline
- Speciation and niching methods in PSO
- PSO for optimization in dynamic environments
- PSO for multiobjective optimization
Swarm Intelligence
Swarm intelligence (SI) is an artificial intelligence technique based around the study of collective behavior in decentralized, self-organized systems. SI systems are typically made up of a population of simple agents interacting locally with one another and with their environment. Although there is normally no centralized control structure dictating how individual agents should behave, local interactions between such agents often lead to the emergence of global behavior. Examples of systems like this can be found in nature, including ant colonies, bird flocking, animal herding, bacteria molding and fish schooling (from Wikipedia).
Mind is social
Human intelligence results from social interaction: Evaluating, comparing, and imitating one another, learning from experience and emulating the successful behaviours of others, people are able to adapt to complex environments through the discovery of relatively optimal patterns of attitudes, beliefs, and behaviours. (Kennedy & Eberhart, 2001). Culture and cognition are inseparable consequences of human sociality: Culture emerges as individuals become more similar through mutual social learning. The sweep of culture moves individuals toward more adaptive patterns of thought and behaviour.
To model human intelligence, we should model individuals in a social context, interacting with one another.
[Photos: James Kennedy and Russell Eberhart, who introduced PSO in 1995.]
Individuals in a particle swarm can be conceptualized as cells in a cellular automaton (CA), whose states change in many dimensions simultaneously.
Why are they called particles, not points? Both Kennedy and Eberhart felt that velocities and accelerations are more appropriately applied to particles.
PSO Precursors
Reynolds' (1987) Boids simulation: a simple flocking model consisting of three simple local rules:
- Collision avoidance: pull away before crashing into one another;
- Velocity matching: try to go about the same speed as neighbours in the flock;
- Flock centering: try to move toward the center of the flock as they perceive it.
A demo: http://www.red3d.com/cwr/boids/
With just these 3 rules, Boids show very realistic flocking behaviour.
Heppner (1990) was interested in rules that enable large numbers of birds to flock synchronously.
PSO vs. EC
Both PSO and EC are population-based. PSO also uses the fitness concept, but less-fit particles do not die: there is no survival of the fittest, and no evolutionary operators such as crossover and mutation. Instead, each particle (candidate solution) is varied according to its past experience and its relationship with other particles in the population. Having said that, there are hybrid PSOs where some EC concepts, such as selection and mutation, are adopted.
PSO applications
Problems with continuous, discrete, or mixed search spaces, with multiple local minima. Examples:
- Evolving neural networks: human tumor analysis; computer numerically controlled milling optimization; battery pack state-of-charge estimation; real-time training of neural networks (diabetes among Pima Indians); servomechanism (time series prediction optimizing a neural network);
- Reactive power and voltage control;
- Ingredient mix optimization;
- Pressure vessel (design a container of compressed air, with many constraints);
- Compression spring (cylindrical compression spring with certain mechanical characteristics);
- Moving Peaks (multiple peaks dynamic environment);
and more. PSO can be tailor-designed to deal with specific real-world problems.
The canonical PSO

In the canonical PSO, each particle updates its velocity and position as follows:

v_{t+1} = v_t + R_1[0, φ_1] ⊗ (p_i - x_t) + R_2[0, φ_2] ⊗ (p_g - x_t)
x_{t+1} = x_t + v_{t+1}

The first difference term is the cognitive component: the difference between the individual's personal best p_i and its current position. The second is the social component: the difference between the neighbourhood's best p_g and its current position (Kennedy & Eberhart, 2001). R_1[0, φ_1] and R_2[0, φ_2] are vectors of random numbers drawn uniformly from [0, φ_1] and [0, φ_2] (often φ_1 = φ_2 = φ/2), and ⊗ denotes point-wise vector multiplication.

Since the random weights are drawn from a specified range, the particle will cycle unevenly around a point defined as the weighted average of p_i and p_g:

(φ_1 p_i + φ_2 p_g) / (φ_1 + φ_2)
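As a concrete illustration, one such update for a single particle might be coded as follows; a minimal NumPy sketch in which the names (canonical_update, rng, phi1, phi2) are illustrative, not from the tutorial:

import numpy as np

rng = np.random.default_rng(0)

def canonical_update(x, v, p_i, p_g, phi1=2.05, phi2=2.05):
    # x, v, p_i, p_g: NumPy arrays of equal dimension.
    r1 = rng.uniform(0.0, phi1, size=x.shape)    # R1[0, phi1], one draw per dimension
    r2 = rng.uniform(0.0, phi2, size=x.shape)    # R2[0, phi2]
    v_new = v + r1 * (p_i - x) + r2 * (p_g - x)  # point-wise multiplication
    return x + v_new, v_new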
The PSO algorithm (pseudocode):

Randomly generate an initial population;
repeat
    for each particle i do
        if f(x_i) < f(p_i) then p_i = x_i;   // update personal best (assuming minimization)
        p_g = min(p_neighbours);             // best of the neighbourhood
        for d = 1 to dimensions do
            velocity_update();
            position_update();
        end
    end
until termination criterion is met.
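Putting the loop and the update together, a compact gbest PSO sketch in Python; the sphere objective, parameter values, and optional clamp are illustrative choices, not prescribed by the tutorial, and without clamping, inertia, or constriction this bare update tends to explode, as the next slide notes:

import numpy as np

def pso(f, dim=2, n_particles=20, iters=200, phi=4.1, v_max=None, seed=1):
    # Minimal gbest PSO minimizing f; a sketch, not a tuned implementation.
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))   # positions
    v = np.zeros((n_particles, dim))                 # velocities
    p = x.copy()                                     # personal bests
    p_val = np.apply_along_axis(f, 1, x)
    g = p[p_val.argmin()].copy()                     # global best
    for _ in range(iters):
        r1 = rng.uniform(0.0, phi / 2.0, x.shape)    # R1[0, phi/2]
        r2 = rng.uniform(0.0, phi / 2.0, x.shape)    # R2[0, phi/2]
        v = v + r1 * (p - x) + r2 * (g - x)          # velocity update
        if v_max is not None:
            v = np.clip(v, -v_max, v_max)            # optional Vmax clamp (next slide)
        x = x + v                                    # position update
        vals = np.apply_along_axis(f, 1, x)
        better = vals < p_val                        # update personal bests
        p[better], p_val[better] = x[better], vals[better]
        g = p[p_val.argmin()].copy()
    return g, p_val.min()

best, best_val = pso(lambda z: float(np.sum(z * z)), v_max=1.0)  # sphere function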
Parameters
Tendency to explode: velocities can grow without bound. To prevent this, a parameter Vmax can be used: if a velocity value exceeds Vmax, it is reset to Vmax accordingly.
The control parameter φ_d = φ_{1d} + φ_{2d} for the d-th dimension, called the acceleration constant:
- if it is set too small, the trajectory of a particle falls and rises slowly over time;
- as its value is increased, the frequency of the particle's oscillation around the weighted average of p_{id} and p_{gd} also increases.
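In code, the clamp is a one-liner; a sketch assuming the NumPy arrays of the earlier sketches, reading the rule as a symmetric clamp to plus or minus Vmax:

import numpy as np

def clamp_velocity(v, v_max):
    # Reset any velocity component exceeding Vmax in magnitude back to +/-Vmax.
    return np.clip(v, -v_max, v_max)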
PSO with an inertia weight

Shi and Eberhart (1998) introduced an inertia weight ω applied to the previous velocity:

v_{t+1} = ω v_t + R_1[0, φ_1] ⊗ (p_i - x_t) + R_2[0, φ_2] ⊗ (p_g - x_t)
x_{t+1} = x_t + v_{t+1}

Eberhart and Shi suggested using an inertia weight that decreases over time, typically from 0.9 to 0.4, so the swarm moves from an explorative towards a more exploitative mode.
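A linear schedule is one simple way to realize this decrease; a sketch in which only the 0.9 and 0.4 endpoints come from the slide, the linear form being an assumption:

def inertia(t, t_max, w_start=0.9, w_end=0.4):
    # Linearly decrease the inertia weight from w_start at t=0 to w_end at t=t_max.
    return w_start - (w_start - w_end) * t / t_max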
Visualizing PSO
[Figure: vector diagram of a single PSO step. Starting from x_t with previous velocity v_t, the cognitive pull R_1[0, φ_1] ⊗ (p_i - x_t) and the social pull R_2[0, φ_2] ⊗ (p_g - x_t) are added to v_t, giving the new position x_{t+1}.]
Constriction PSO
Clerc and Kennedy (2000) suggested a more generalized PSO, where a constriction coefficient χ (Type 1'' coefficient) is applied to both terms of the velocity formula. Clerc showed that the constriction PSO can converge without using Vmax:

v_{t+1} = χ (v_t + R_1[0, φ_1] ⊗ (p_i - x_t) + R_2[0, φ_2] ⊗ (p_g - x_t))
x_{t+1} = x_t + v_{t+1}

where φ_1 and φ_2 are positive numbers, often set to φ_1 = φ_2 = 2.05, and the constriction factor χ is set to 0.7298 (Clerc and Kennedy, 2002). By using the constriction coefficient, the amplitude of the particle's oscillation decreases, resulting in its convergence over time.
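The value 0.7298 follows from the closed form χ = 2 / |2 - φ - sqrt(φ² - 4φ)| with φ = φ_1 + φ_2 > 4 (Clerc and Kennedy, 2002); a small sketch:

import math

def constriction(phi1=2.05, phi2=2.05):
    # chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)|, valid for phi > 4.
    phi = phi1 + phi2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi * phi - 4.0 * phi))

print(constriction())  # ~0.7298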
The equations can be rewritten as:

v_{t+1} = χ (v_t + φ ⊗ (p_m - x_t))

where φ = φ_1 + φ_2 and p_m = (φ_1 p_i + φ_2 p_g) / (φ_1 + φ_2).

This shows that a particle tends to converge towards a point determined by p_m, which is a weighted average of its previous best p_i and the neighbourhood's best p_g. p_m can be further generalized to any number of terms:

v_{t+1} = χ (v_t + Σ_{k∈N} R[0, φ/|N|] ⊗ (p_k - x_t))
x_{t+1} = x_t + v_{t+1}

where N denotes the neighbourhood, and p_k the best previous position found by the k-th particle in N. If |N| equals 2, then the above is a generalization of the canonical PSO.
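Read literally, this fully informed update is easy to sketch in Python; a hedged version, assuming each random vector is drawn from [0, φ/|N|] as in the equation above (names are illustrative):

import numpy as np

rng = np.random.default_rng(2)

def fully_informed_update(x, v, p_neigh, phi=4.1, chi=0.7298):
    # x, v: this particle's position/velocity; p_neigh: list of the
    # previous-best positions of all particles in its neighbourhood N.
    n = len(p_neigh)
    pull = sum(rng.uniform(0.0, phi / n, x.shape) * (p_k - x) for p_k in p_neigh)
    v_new = chi * (v + pull)
    return x + v_new, v_new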
Persistence

If we substitute v_t = x_t - x_{t-1} into the above, we get:

x_{t+1} = x_t + χ ((x_t - x_{t-1}) + Σ_{k∈N} R[0, φ/|N|] ⊗ (p_k - x_t))

Persistence indicates the tendency of a particle to keep moving in the same direction it was moving previously.
The social central tendency can be estimated, for example, by taking the mean of previous bests relative to the particle's current position (still an open-ended question). Social dispersion may be estimated by taking the distance from a particle's previous best to any neighbour's previous best, or by averaging pair-wise distances between the particle and some neighbours. Distributions such as Gaussian, double-exponential and Cauchy were used by Kennedy (2006).
Bare bones PSO

If p_i and p_g were kept constant, a canonical PSO would sample the search space following a bell-shaped distribution centered exactly between p_i and p_g. The bare bones PSO (Kennedy, 2003) produces normally distributed random numbers around the mean (p_{id} + p_{gd}) / 2 (for each dimension d), with the standard deviation of the Gaussian distribution being |p_{id} - p_{gd}|.
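The bare bones update replaces the velocity rule with direct Gaussian sampling; a minimal sketch per the description above, assuming NumPy arrays per dimension:

import numpy as np

rng = np.random.default_rng(3)

def bare_bones_sample(p_i, p_g):
    # New position: Gaussian with mean (p_i + p_g)/2 and std |p_i - p_g|, per dimension.
    return rng.normal((p_i + p_g) / 2.0, np.abs(p_i - p_g))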
Test functions
[Figure slides showing the test functions used.]
Two most common models:
- gbest: each particle is influenced by the best found from the entire swarm.
- lbest: each particle is influenced only by particles in its local neighbourhood.
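For lbest, the neighbourhood is defined by a topology rather than by distance; a sketch of the common ring topology, where the parameter k (neighbours per side) is an illustrative choice:

def ring_neighbours(i, n_particles, k=1):
    # Indices of particle i's neighbours on a ring, k on each side, plus itself.
    return [(i + j) % n_particles for j in range(-k, k + 1)]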
Clustering-based PSO
[Figure: a one-dimensional fitness landscape f(x) with two particle clusters, Cluster A and Cluster B.]
Cluster A's center performs better than all members of cluster A, whereas cluster B's center performs better than some members and worse than others.
Speciation-based PSO
[Figure: a fitness landscape with species seeds s1, s2, and s3, each defining a species of radius r_s (diameter 2 r_s); particle p lies within the radius of s2.]
An example of how to determine the species seeds from the population at each iteration: s1, s2, and s3 are chosen as the species seeds. Note that p follows s2.
Speciation-based PSO
Step 1: Generate an initial population with randomly generated particles;
Step 2: Evaluate all particle individuals in the population;
Step 3: Sort all particles in descending order of their fitness values (i.e., from the best-fit to the least-fit ones);
Step 4: Determine the species seeds for the current population;
Step 5: Assign each species seed as the neighbourhood best to all particles identified in the same species;
Step 6: Adjust particle positions according to the PSO velocity and position update equations (1) and (2);
Step 7: Go back to Step 2, unless the termination criterion is met.
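Step 4 can be sketched directly from this description; a minimal Python version, assuming maximization (per the sort in Step 3) and Euclidean distance, with r_s the species radius from the earlier figure:

import numpy as np

def find_species_seeds(positions, fitness, r_s):
    # positions: (n, d) array; fitness: (n,) array; returns seed indices, best-first.
    order = np.argsort(-fitness)            # descending fitness (Step 3)
    seeds = []
    for i in order:
        # i joins an existing species if it falls within r_s of some seed;
        # otherwise it becomes a new species seed.
        if all(np.linalg.norm(positions[i] - positions[s]) > r_s for s in seeds):
            seeds.append(i)
    return seeds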
[Figure slides: multimodal problems, multimodal functions, and simulation runs.]
Optimization in dynamic environments

E.g., traffic conditions in a city change dynamically and continuously: what is regarded as an optimal route at one moment might not be optimal the next minute. In contrast to optimization towards a static optimum, in a dynamic environment the goal is to track the dynamically changing optima as closely as possible.
[Figure: a three-peak multimodal environment, before (left) and after (right) movement of the optima. Note that the small peak to the right becomes hidden and that the highest point switches optimum (Parrott and Li, 2006).]
Why PSO?
With a population of candidate solutions, a PSO algorithm can maintain useful information about characteristics of the environment. PSO, characterized by its fast convergence behaviour, has an in-built ability to adapt to a changing environment, and some early works have shown that PSO is effective for locating and tracking optima in both static and dynamic environments. Two major issues must be resolved when dealing with dynamic problems:
- How to detect that a change in the environment has actually occurred?
- How to respond appropriately to the change so that the optima can still be tracked?
Related work
- Tracking the changing optimum of a unimodal parabolic function (Eberhart and Shi, 2001).
- Carlisle and Dozier (2002) used a randomly chosen sentry particle to detect if a change has occurred.
- Hu and Eberhart (2002) proposed to re-evaluate the global best particle and a second-best particle.
- Carlisle and Dozier (2002) proposed to re-evaluate all personal bests of all particles when a change has been detected.
- Hu and Eberhart (2002) studied the effects of re-randomizing various proportions of the swarm.
- Blackwell and Bentley (2002) introduced charged swarms.
- Blackwell and Branke (2004, 2006) proposed an interacting multi-swarm PSO (using quantum particles) as a further improvement to the charged swarms.
[Figure: an atom as an analogy for a charged swarm, with electrons orbiting a nucleus of neutrons and protons.]
[Figure: a multi-swarm with |s| particles, |s_q| of them quantum.]
In this quantum swarm model, a swarm is made up of neutral (i.e., conventional) particles and quantum particles. The quantum particles are positioned as a cloud centered around p_g, providing a constant level of particle diversity within a species (Li et al., 2006).
Multiobjective optimization
"The great decisions of human life have as a rule far more to do with the instincts and other mysterious unconscious factors than with conscious will and wellmeaning reasonableness. The shoe that fits one person pinches another; there is no recipe for living that suits all cases. Each of us carries his own life-form - an indeterminable form which cannot be superseded by any other." Carl Gustav Jung, Modern Man in Search of a Soul, 1933, p. 69
Many real-world problems involve multiple conflicting objectives that need to be optimized simultaneously. The task is to find the best possible solutions that still satisfy all objectives and constraints. This type of problem is known as a multiobjective optimization problem.
Multiobjective optimization
[Figure: a trade-off between two conflicting objectives, Comfort (40% vs. 90%) against Cost (10k vs. 100k).]
Concept of domination
A solution vector x is said to dominate another solution vector y if the following two conditions are true:
- x is no worse than y in all objectives;
- x is strictly better than y in at least one objective.
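This test translates directly into code; a minimal sketch, assuming minimization of every objective:

def dominates(x_obj, y_obj):
    # True if objective vector x_obj dominates y_obj (all objectives minimized).
    no_worse_everywhere = all(a <= b for a, b in zip(x_obj, y_obj))
    strictly_better_somewhere = any(a < b for a, b in zip(x_obj, y_obj))
    return no_worse_everywhere and strictly_better_somewhere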
[Figure: six solutions in objective space (f1, f2), both objectives minimized, showing the non-dominated front and the Pareto-optimal front.]
Solutions 1 and 3 are non-dominated with each other. Solution 6 dominates 2, but not 4 or 5.
[Figure: objective space (f1, f2), showing F(P_2^t) and F(X_2^{t+1}).]
Dominance relationships among four particles, including the personal bests of two particles and their potential offspring, assuming minimization of f1 and f2.
NSPSO Algorithm
The basic idea:
- Instead of comparing solely a particle's personal best with its potential offspring, the entire population of N particles' personal bests and these N particles' offspring are first combined to form a temporary population of 2N particles.
- Domination comparisons among all 2N individuals in this temporary population are then carried out, and the entire population is sorted into different non-domination levels (as in NSGA II); see the sketch below.
- This sorting introduces a selection bias in favour of individuals closer to the true Pareto front.
- At each iteration step, only N individuals out of the 2N are chosen for the next iteration, based on the non-domination levels and two niching methods.
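The non-domination-level sorting mentioned above can be sketched compactly; this is a simple repeated-scan version for illustration, whereas NSGA II uses a faster book-keeping scheme:

def non_dominated_sort(objs):
    # Sort objective vectors into non-domination levels (fronts), minimization.
    # objs: list of objective tuples; returns a list of index lists, best front first.
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    remaining = set(range(len(objs)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining -= set(front)
    return fronts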
[Figure: selection pressure towards the true Pareto-optimal front in (f1, f2) objective space.]
Niching techniques
[Figure: a non-dominated front in (f1, f2) space with two solutions A and B; the arrow indicates selection pressure.]
A will be preferred over B, since A has a smaller niche count than B.
[Figure: another non-dominated front with solutions A and B in less- and more-crowded regions.]
Particles in the less-crowded areas of the non-dominated front are more likely to be chosen as p_g for particles in the population, e.g., A is more likely than B.
Performance metrics
Two commonly used metrics:

- Closeness to the Pareto-optimal front, e.g., the generational distance:

GD = (Σ_{i=1}^{|Q|} d_i^p)^{1/p} / |Q|

- Diversity of the solutions along the Pareto front in the final population:

Δ = (Σ_{m=1}^{M} d_m^e + Σ_{i=1}^{|Q|} |d_i - d̄|) / (Σ_{m=1}^{M} d_m^e + |Q| d̄)

where Q is the obtained non-dominated set. In GD, d_i is the distance of the i-th solution of Q to the nearest point of the true Pareto-optimal front (typically p = 2). In Δ, d_i are distances between neighbouring solutions along the front, d̄ is their mean, and d_m^e is the distance between the extreme solutions of the true front and of Q in the m-th of the M objectives (Deb, 2001).
In all problems except ZDT5, the Pareto-optimal front is formed with g(x) = 1. Note that more scalable test functions, such as the DTLZ functions (with more than 2 objectives), have also been proposed.
ZDT series

Each ZDT problem minimizes two objectives, f1 and f2 = g(x) h(f1, g).

ZDT1:
f1(x) = x_1,  g(x) = 1 + 9 (Σ_{i=2}^{n} x_i) / (n - 1),
h(f1, g) = 1 - sqrt(f1 / g).

ZDT2:
f1(x) = x_1,  g(x) = 1 + 9 (Σ_{i=2}^{n} x_i) / (n - 1),
h(f1, g) = 1 - (f1 / g)².

ZDT3:
f1(x) = x_1,  g(x) = 1 + 9 (Σ_{i=2}^{n} x_i) / (n - 1),
h(f1, g) = 1 - sqrt(f1 / g) - (f1 / g) sin(10 π f1).

ZDT4:
f1(x) = x_1,  g(x) = 1 + 10 (n - 1) + Σ_{i=2}^{n} (x_i² - 10 cos(4 π x_i)),
h(f1, g) = 1 - sqrt(f1 / g).

Note: n = 30 (30 variables); x_i lie in the range [0, 1], except for ZDT4, where x_2 to x_30 lie in the range [-5, 5].
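Coding these definitions up is mechanical; a sketch of ZDT1 (both objectives minimized; the function name is illustrative):

import numpy as np

def zdt1(x):
    # x: array of n = 30 variables in [0, 1]; returns the objective pair (f1, f2).
    f1 = x[0]
    g = 1.0 + 9.0 * np.sum(x[1:]) / (len(x) - 1)
    h = 1.0 - np.sqrt(f1 / g)
    return f1, g * h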
Experimental results
[Figures comparing NSPSO and NSGA II on the test functions.]
[Figures: population snapshots at steps 1, 3, 9, and 15.]
Constraint handling
The most common approach for solving constrained problems is the use of a penalty function: the constrained problem is transformed into an unconstrained one by penalizing constraint violations, creating a single objective function. Non-stationary penalty functions (Parsopoulos and Vrahatis, 2002): a penalty function whose penalty value is dynamically modified during a run. This method is problem-dependent, but its results are generally superior to those obtained through stationary penalty functions.
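The penalty idea can be sketched as an objective wrapper; here the quadratic penalty form and the growing (c t)^α multiplier are illustrative assumptions, not Parsopoulos and Vrahatis's exact scheme:

def penalized(f, constraints, t, c=0.5, alpha=2.0):
    # Wrap objective f with a non-stationary (time-varying) penalty.
    # constraints: list of functions g, where feasibility means g(x) <= 0.
    def f_pen(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + (c * t) ** alpha * violation  # penalty grows with time t
    return f_pen

The wrapped f_pen can then be handed to a PSO, e.g., the pso() sketch from earlier, in place of the raw objective.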
Please see A/Prof. Ponnuthurai Suganthan's tutorial for further information on PSO for constraint handling.
More information
Particle Swarm Central: http://www.particleswarm.info
References (incomplete)
Background:
1) Reynolds, C.W.: Flocks, herds and schools: a distributed behavioral model. Computer Graphics, 21(4), pp. 25-34, 1987.
2) Heppner, F. and Grenander, U.: A stochastic nonlinear model for coordinated bird flocks. In S. Krasner, ed., The Ubiquity of Chaos. AAAS Publications, Washington, DC, 1990.
3) Kennedy, J. and Eberhart, R.: Particle swarm optimization. In Proceedings of the Fourth IEEE International Conference on Neural Networks, Perth, Australia. IEEE Service Center (1995) 1942-1948.
4) Kennedy, J., Eberhart, R. C., and Shi, Y.: Swarm Intelligence. San Francisco: Morgan Kaufmann Publishers, 2001.
5) Clerc, M.: Particle Swarm Optimization. ISTE Ltd, 2006.
References continued
New improvements and variants:
1) Y. Shi and R. C. Eberhart: A modified particle swarm optimizer. In Proc. IEEE Congr. Evol. Comput., 1998, pp. 69-73.
2) Clerc, M. and Kennedy, J.: The particle swarm - explosion, stability and convergence in a multidimensional complex space. IEEE Trans. Evol. Comput., vol. 6, no. 2, pp. 58-73, Feb. 2002.
3) Kennedy, J. and Mendes, R.: Population structure and particle swarm performance. In Proc. of the 2002 World Congress on Computational Intelligence, 2002.
4) T. Krink, J. S. Vesterstroem, and J. Riget: Particle swarm optimization with spatial particle extension. In Proc. Congr. Evol. Comput., Honolulu, HI, 2002, pp. 1474-1479.
5) M. Lovbjerg and T. Krink: Extending particle swarm optimizers with self-organized criticality. In Proc. Congr. Evol. Comput., Honolulu, HI, 2002, pp. 1588-1593.
6) X. Xie, W. Zhang, and Z. Yang: A dissipative particle swarm optimization. In Proc. Congr. Evol. Comput., Honolulu, HI, 2002, pp. 1456-1461.
7) T. Peram, K. Veeramachaneni, and C. K. Mohan: Fitness-distance-ratio based particle swarm optimization. In Proc. Swarm Intelligence Symp., 2003, pp. 174-181.
8) Kennedy, J.: Bare bones particle swarms. In Proc. of the Swarm Intelligence Symposium (SIS 2003), 2003.
9) Mendes, R.: Population Topologies and Their Influence in Particle Swarm Performance. PhD thesis, Universidade do Minho, Portugal, 2004.
10) R. Mendes, J. Kennedy, and J. Neves: The fully informed particle swarm: simpler, maybe better. IEEE Trans. Evol. Comput., vol. 8, pp. 204-210, Jun. 2004.
11) F. van den Bergh and A. P. Engelbrecht: A cooperative approach to particle swarm optimization. IEEE Trans. Evol. Comput., vol. 8, pp. 225-239, Jun. 2004.
12) A. Ratnaweera, S. Halgamuge, and H. Watson: Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans. Evol. Comput., vol. 8, pp. 240-255, Jun. 2004.
13) J. J. Liang, A. K. Qin, P. N. Suganthan, and S. Baskar: Comprehensive learning particle swarm optimizer for global optimization of multimodal functions. IEEE Trans. Evol. Comput., vol. 10, no. 3, pp. 281-295, Jun. 2006.
References continued
Speciation and niching:
1) A. Petrowski: A clearing procedure as a niching method for genetic algorithms. In Proc. of the 1996 IEEE International Conference on Evolutionary Computation, 1996, pp. 798-803.
2) R. Brits, A. P. Engelbrecht, and F. van den Bergh: A niching particle swarm optimizer. In Proc. of the 4th Asia-Pacific Conference on Simulated Evolution and Learning (SEAL 2002), 2002, pp. 692-696.
3) J. P. Li, M. E. Balazs, G. Parks and P. J. Clarkson: A species conserving genetic algorithm for multimodal function optimization. Evol. Comput., vol. 10, no. 3, pp. 207-234, 2002.
4) X. Li: Adaptively choosing neighbourhood bests using species in a particle swarm optimizer for multimodal function optimization. In Proc. of the Genetic and Evolutionary Computation Conference 2004 (GECCO'04), LNCS 3102, eds. Deb, K. et al., 2004, pp. 105-116.
5) K. E. Parsopoulos and M. N. Vrahatis: On the computation of all global minimizers through particle swarm optimization. IEEE Trans. Evol. Comput., vol. 8, no. 3, Jun. 2004, pp. 211-224.
6) Bird, S. and Li, X.: Adaptively choosing niching parameters in a PSO. In Proc. of the Genetic and Evolutionary Computation Conference 2006 (GECCO'06), eds. M. Keijzer et al., pp. 3-9, ACM Press, 2006.
7) Bird, S. and Li, X.: Enhancing the robustness of a speciation-based PSO. In Proc. of the 2006 Congress on Evolutionary Computation (CEC'06), pp. 3185-3192, IEEE Service Center, Piscataway, NJ, 2006.
References continued
Optimization in dynamic environments:
1) R. C. Eberhart and Y. Shi: Tracking and optimizing dynamic systems with particle swarms. In Proc. of the 2001 Congress on Evolutionary Computation (CEC 2001), pp. 94-100. IEEE Press, 2001.
2) J. Branke: Evolutionary Optimization in Dynamic Environments. Norwell, MA: Kluwer Academic Publishers, 2002.
3) A. Carlisle and G. Dozier: Tracking changing extrema with adaptive particle swarm optimizer. In Proc. World Automation Congress, pp. 265-270, Orlando, FL, USA, 2002.
4) X. Hu and R. Eberhart: Adaptive particle swarm optimisation: detection and response to dynamic systems. In Proc. Congress on Evolutionary Computation, pp. 1666-1670, 2002.
5) T. Blackwell and P. Bentley: Dynamic search with charged swarms. In Proc. of the Workshop on Evolutionary Algorithms for Dynamic Optimization Problems (EvoDOP-2003), pp. 19-26, 2002.
6) T. Blackwell and J. Branke: Multi-swarm optimization in dynamic environments. In LNCS No. 3005, Applications of Evolutionary Computing: EvoWorkshops 2004, pp. 489-500, 2004.
7) D. Parrott and X. Li: A particle swarm model for tracking multiple peaks in a dynamic environment using speciation. In Proc. of the 2004 Congress on Evolutionary Computation, 2004, pp. 98-103.
8) T. Blackwell and J. Branke: Multi-swarms, exclusion, and anti-convergence in dynamic environments. IEEE Trans. Evol. Comput., vol. 10, no. 4, pp. 459-472, August 2006.
9) Parrott, D. and Li, X.: Locating and tracking multiple dynamic optima by a particle swarm model using speciation. IEEE Trans. Evol. Comput., vol. 10, no. 4, pp. 440-458, August 2006.
10) Li, X., Branke, J. and Blackwell, T.: Particle swarm with speciation and adaptation in a dynamic environment. In Proc. of the Genetic and Evolutionary Computation Conference 2006 (GECCO'06), eds. M. Keijzer et al., pp. 51-58, ACM Press, 2006.
References continued
Multiobjective optimization:
1) Deb, K.: Multi-Objective Optimization using Evolutionary Algorithms. John Wiley & Sons, Chichester, UK (2001).
2) Deb, K., Agrawal, S., Pratap, A. and Meyarivan, T.: A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6(2): 182-197 (2002).
3) Hu, X. and Eberhart, R.: Multiobjective optimization using dynamic neighbourhood particle swarm optimization. In Proceedings of the IEEE World Congress on Computational Intelligence, Hawaii, May 12-17, 2002. IEEE Press (2002).
4) Coello, C. A. C. and Lechuga, M. S.: MOPSO: a proposal for multiple objective particle swarm optimization. In Proceedings of the Congress on Evolutionary Computation (CEC'2002), Vol. 2, IEEE Press (2002) 1051-1056.
5) Mostaghim, S. and Teich, J.: Strategies for finding good local guides in multi-objective particle swarm optimization (MOPSO). In Proc. 2003 IEEE Swarm Intelligence Symp., Indianapolis, IN, Apr. 2003, pp. 26-33.
6) Fieldsend, J. E. and Singh, S.: A multi-objective algorithm based upon particle swarm optimisation, an efficient data structure and turbulence. In Proc. 2002 U.K. Workshop on Computational Intelligence, Birmingham, U.K., Sept. 2002, pp. 37-44.
7) Li, X.: A non-dominated sorting particle swarm optimizer for multiobjective optimization. In Erick Cantú-Paz et al. (eds.), Genetic and Evolutionary Computation - GECCO 2003, Proceedings, Part I, Springer, LNCS Vol. 2723 (2003) 37-48.
8) C. A. C. Coello, G. T. Pulido, and M. S. Lechuga: Handling multiple objectives with particle swarm optimization. IEEE Trans. Evol. Comput., vol. 8, no. 3, pp. 256-279, Jun. 2004.
References continued
Constraint handling:
1) Z. Michalewicz and M. Schoenauer: Evolutionary algorithms for constrained parameter optimization problems. Evolutionary Computation, 4(1): 1-32, 1996.
2) T. P. Runarsson and X. Yao: Stochastic ranking for constrained evolutionary optimization. IEEE Transactions on Evolutionary Computation, 4(3): 284-294, September 2000.
3) X. Hu and R. Eberhart: Solving constrained nonlinear optimization problems with particle swarm optimization. In Proc. 6th World Multiconference on Systemics, Cybernetics and Informatics (SCI 2002), Orlando, USA, 2002.
4) K. Parsopoulos and M. Vrahatis: Particle swarm optimization method for constrained optimization problems. In P. Sincak, J. Vascak, V. Kvasnicka, and J. Pospichal (eds.), Intelligent Technologies - Theory and Applications: New Trends in Intelligent Technologies, pp. 214-220. IOS Press, 2002. Frontiers in Artificial Intelligence and Applications series, Vol. 76, ISBN 1-58603-256-9.
5) G. Coath and S. K. Halgamuge: A comparison of constraint-handling methods for the application of particle swarm optimization to constrained nonlinear optimization problems. In Proceedings of the 2003 Congress on Evolutionary Computation, pp. 2419-2425. IEEE, December 2003.
6) J. Zhang and F. Xie: DEPSO: hybrid particle swarm with differential evolution operator. In Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, pp. 3816-3821. IEEE, October 2003.
7) G. Toscano and C. Coello: A constraint-handling mechanism for particle swarm optimization. In Proceedings of the 2004 Congress on Evolutionary Computation, pp. 1396-1403. IEEE, June 2004.