Particle Swarm Optimization (PSO) (Kennedy and Eberhart, 1995)

Particle swarm optimization (PSO) is an algorithm that mimics the movement of social organisms like bird flocking to optimize a problem. It initializes a population of random solutions called particles and updates their positions based on their own experience and neighboring particles' experience to move closer to optimal solutions. The algorithm uses simple mathematical equations to update each particle's velocity and position. An example problem minimizes a quadratic function using PSO to demonstrate how particle positions and velocities are updated iteratively until reaching optimal or near-optimal solutions.


Particle Swarm Optimization (PSO)
(Kennedy and Eberhart, 1995)
Local and global optima
Most agents are near the global optimum.
The Particle Dynamics Used in a Simple PSO Program (Kennedy and Eberhart, 1995)

Vi(t+1) = φ·Vi(t) + C1·rand(0,1)·(Plb - Xi(t)) + C2·rand(0,1)·(Pgb - Xi(t))
Xi(t+1) = Xi(t) + Vi(t+1)
where
φ is the inertia factor, with 0 < φ < 1,
C1 is the local acceleration constant of a particle,
C2 is the global acceleration constant of a particle,
C1 and C2 are usually selected in (0, 2],
Plb is the local best position of a particle, and
Pgb is the global best position found so far by all the particles.
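
The update rule above translates directly into a few lines of code. The sketch below is a minimal one-dimensional version in Python; the function name update_particle and the default parameter values are illustrative choices, not part of the original formulation.

```python
import random

# Minimal one-dimensional sketch of the PSO update rule above.
# The function name and default parameter values are illustrative.
def update_particle(x, v, p_lb, p_gb, phi=0.5, c1=2.0, c2=2.0):
    r1, r2 = random.random(), random.random()  # two independent rand(0,1) draws
    v_new = phi * v + c1 * r1 * (p_lb - x) + c2 * r2 * (p_gb - x)  # velocity update
    x_new = x + v_new                                              # position update
    return x_new, v_new
```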
An Example: Minimizing f(x) = x(x - 8) by PSO

Consider a small swarm of particles for this single-dimensional function.
Initial positions and velocities of the particles at time t = 0 are randomly initialized in the range (-10, 10):

Particle number   Position x(0) at t = 0   Velocity v at t = 0   f(x(0))
1                  7                        3                     -7
2                 -2                        5                     20
3                  9                        6                      9
4                 -6                       -4                     84

So the fittest particle is particle 1, and we set Pgb = 7 and Plb = Xi(0) for each particle i.
Change in the positions of the particles in the next iteration:
Vi(t+1) = φ·Vi(t) + C1·rand(0,1)·(Plb - Xi(t)) + C2·rand(0,1)·(Pgb - Xi(t))
Xi(t+1) = Xi(t) + Vi(t+1)

For this small-scale PSO problem, we set C1 = C2 = 2.0 and φ = 0.5.

Particle 1>
V1(1) = 0.5*3 + 2*0.6*(7 - 7) + 2*0.4*(7 - 7) = 1.5
X1(1) = 7 + 1.5 = 8.5
Fitness f(X1(1)) = 4.25

Particle 2>
V2(1) = 0.5*5 + 2*0.3*(-2 + 2) + 2*0.4*(7 - (-2)) = 2.5 + 0 + 7.2 = 9.7
X2(1) = -2 + 9.7 = 7.7
Fitness f(X2(1)) = -2.31
Particle 3>
V3(1) = 0.5*6 + 2*0.8*(9 - 9) + 2*0.95*(7 - 9) = 3 + 0 - 3.8 = -0.8
X3(1) = 9 + (-0.8) = 8.2
Fitness f(X3(1)) = 1.64

Particle 4>
V4(1) = 0.5*(-4) + 2*0.38*(-6 + 6) + 2*0.45*(7 - (-6)) = -2 + 0 + 11.7 = 9.7
X4(1) = -6 + 9.7 = 3.7
Fitness f(X4(1)) = -15.91
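
These four hand calculations can be checked with a short script. The sketch below is a plausible Python rendering; the helper name step is illustrative, and the rand(0,1) draws are hard-coded to the values used in the calculations above.

```python
# Verify the first-iteration hand calculations for f(x) = x(x - 8).
def step(x, v, p_lb, p_gb, r1, r2, phi=0.5, c1=2.0, c2=2.0):
    v_new = phi * v + c1 * r1 * (p_lb - x) + c2 * r2 * (p_gb - x)
    return x + v_new, v_new

f = lambda x: x * (x - 8)
# (x(0), v(0), r1, r2) for particles 1..4, with the rand(0,1) draws from the example
swarm = [(7, 3, 0.6, 0.4), (-2, 5, 0.3, 0.4), (9, 6, 0.8, 0.95), (-6, -4, 0.38, 0.45)]
p_gb = 7  # global best after initialization (particle 1)

for i, (x0, v0, r1, r2) in enumerate(swarm, start=1):
    x1, v1 = step(x0, v0, p_lb=x0, p_gb=p_gb, r1=r1, r2=r2)
    print(f"Particle {i}: V(1) = {v1:.2f}, X(1) = {x1:.2f}, f = {f(x1):.2f}")
# Matches the hand calculations:
# Particle 1: V(1) = 1.50, X(1) = 8.50, f = 4.25
# Particle 2: V(1) = 9.70, X(1) = 7.70, f = -2.31
# Particle 3: V(1) = -0.80, X(1) = 8.20, f = 1.64
# Particle 4: V(1) = 9.70, X(1) = 3.70, f = -15.91
```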
Here we go for the next iteration:

Particle number   Current position (previous)   Current velocity (previous)   f(current x) (f(previous x))   Plb for t = 2   Pgb for t = 2
1                  8.5 (7)                        1.5 (3)                        4.25 (-7)                      7               3.7
2                  7.7 (-2)                       9.7 (5)                       -2.31 (20)                      7.7             3.7
3                  8.2 (9)                       -0.8 (6)                        1.64 (9)                       8.2             3.7
4                  3.7 (-6)                       9.7 (-4)                     -15.91 (84)                      3.7             3.7
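
To see the example through to convergence, a complete minimal PSO run for f(x) = x(x - 8) might look like the sketch below. The swarm size, iteration count, random seed, and velocity initialization range are illustrative assumptions, not values from the slides; the true minimum of the function is at x = 4, where f(4) = -16.

```python
import random

# Minimal full PSO run for f(x) = x(x - 8); parameter choices are illustrative.
random.seed(1)
f = lambda x: x * (x - 8)
phi, c1, c2 = 0.5, 2.0, 2.0

xs = [random.uniform(-10, 10) for _ in range(4)]   # initial positions
vs = [random.uniform(-10, 10) for _ in range(4)]   # initial velocities
p_lb = xs[:]                                       # personal (local) best positions
p_gb = min(xs, key=f)                              # global best position so far

for _ in range(50):
    for i in range(len(xs)):
        r1, r2 = random.random(), random.random()
        vs[i] = phi * vs[i] + c1 * r1 * (p_lb[i] - xs[i]) + c2 * r2 * (p_gb - xs[i])
        xs[i] = xs[i] + vs[i]
        if f(xs[i]) < f(p_lb[i]):   # improve personal best
            p_lb[i] = xs[i]
        if f(xs[i]) < f(p_gb):      # improve global best
            p_gb = xs[i]

print(f"best x = {p_gb:.3f}, f(best x) = {f(p_gb):.3f}")  # should land near x = 4, f = -16
```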
