
Implementing the Particle Swarm Optimization (PSO) Algorithm in Python | by Iran Macedo | Analytics Vidhya | Medium

A Gentle Introduction to Particle Swarm Optimization (machinelearningmastery.com)

Iran Macedo
Dec 24, 2018 · 6 min read

Implementing the Particle Swarm Optimization (PSO) Algorithm in Python

Photo by Johnny Chen on Unsplash

There are lots of definitions of AI. According to the Merriam-Webster dictionary,

Artificial Intelligence is a large area of computer science that simulates intelligent
behavior in computers. Based on this, an algorithm based on a metaheuristic
called Particle Swarm Optimization (originally proposed to simulate birds
searching for food, the movement of fish shoals, etc.) is able to simulate the
behavior of swarms in order to optimize a numeric problem iteratively. It can be
classified as a swarm intelligence algorithm, like the Ant Colony Algorithm, the
Artificial Bee Colony Algorithm and Bacterial Foraging, for example.

Proposed in 1995 by J. Kennedy and R. Eberhart, the article “Particle Swarm
Optimization” became very popular because its continuous optimization process
allows variations for multiple targets and more. Consisting of a constant search for
the best solution, the method moves the particles (in this case represented as (x, y)
positions) with a certain velocity calculated in every iteration. Each particle’s
movement is influenced by its own best known position and also by the best
known position in the search space. The expected final result is that the particle
swarm converges to the best solution. It is important to mention that PSO doesn’t
use gradient descent, so it can be applied to nonlinear problems, since it doesn’t
require the problem to be differentiable.

Algorithm

Let’s observe the pseudo algorithm:


PSO pseudocode (Ganesh K. Venayagamoorthy)
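In plain text, the standard PSO loop that the figure describes goes roughly like this:

```
for each particle i = 1, ..., S do
    for each dimension d = 1, ..., n do
        initialize position x[i][d] ~ Uniform(lower_bound, upper_bound)
        initialize velocity v[i][d]
while stopping criterion not met do
    for each particle i do
        evaluate fitness f(x[i])
        if f(x[i]) better than f(p_best[i]) then p_best[i] <- x[i]
        if f(x[i]) better than f(g_best)    then g_best    <- x[i]
    for each particle i and each dimension d do
        v[i][d] <- W*v[i][d] + C1*rand1*(p_best[i][d] - x[i][d])
                             + C2*rand2*(g_best[d]    - x[i][d])
        x[i][d] <- x[i][d] + v[i][d]
```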

First, in the two for loops, it initializes the particles’ positions with a random
uniform distribution within a permissible range for all their dimensions (some
problems require handling several dimensions). After that, for each particle, it
calculates the fitness value, compares it with the particle’s own best position
(p_best is the best position that specific particle has ever reached), and then
chooses the best position among all particles as g_best.
Let’s take a closer look at the equation that defines the velocity of the next iteration
of a particle’s dimension:

Vᵢ(k+1) = W·Vᵢ(k) + C₁·rand₁·(Pᵢ − Xᵢ(k)) + C₂·rand₂·(Pg − Xᵢ(k))

• Vᵢ(k+1) is the velocity at the next iteration.

• W is an inertia parameter. It controls how much of the movement given by the
last velocity value propagates into the new one.

• C₁ and C₂ are acceleration coefficients. C₁ weighs the importance of the
personal best value and C₂ the importance of the social (global) best value.

• Pᵢ is the best individual position and Pg is the best position of all particles. In
the equation, the distance from each of these positions to the particle’s current
position is measured.

• rand₁ and rand₂ are random numbers with 0 ≤ rand ≤ 1; they control the
influence of the individual and social terms.

After that, each particle’s new position is calculated, and the process repeats until
the specified number of iterations or an error criterion is reached.
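Written out with NumPy, one velocity-and-position update for a single particle looks like this (all the numeric values here are made-up assumptions for illustration):

```python
import numpy as np

# Illustrative values only: W, C1, C2 and all vectors below are assumptions
W, C1, C2 = 0.5, 0.8, 0.9
position = np.array([3.0, -2.0])   # current position Xi(k)
velocity = np.array([0.1, 0.4])    # current velocity Vi(k)
pbest = np.array([1.0, -1.0])      # best position this particle has visited
gbest = np.array([0.0, 0.0])       # best position found by the whole swarm

rand1, rand2 = np.random.random(), np.random.random()
new_velocity = (W * velocity
                + C1 * rand1 * (pbest - position)
                + C2 * rand2 * (gbest - position))
new_position = position + new_velocity
```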

Implementation

Our goal is to find the minimum point of a certain function. In this case, the
function is f(x,y) = x² + y² + 1. Thus, the algorithm will work with 2-dimensional
position arrays and the fitness value will be the Z-coordinate. We also know that
our target is the coordinate [0,0], which is the minimum of f(x,y).

To implement the algorithm in Python, an OOP approach was used (at this point
it’s assumed you know the basics of it) to help us implement and understand all
the steps in code. The numpy library is used (check more information here) to
handle array operations, since we work with a multidimensional space.
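As a small sketch, the fitness evaluation for this test function could be written as follows (the standalone fitness function here is just for illustration; in the implementation it lives inside the SearchSpace class):

```python
import numpy as np

def fitness(position):
    # f(x, y) = x^2 + y^2 + 1; the Z-coordinate is the fitness value
    x, y = position
    return x ** 2 + y ** 2 + 1

print(fitness(np.array([0.0, 0.0])))  # the minimum: prints 1.0
```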

3D graphic of x²+y² (Wolframalpha)


Contour plot of x² + y² (Wolframalpha)

Let’s begin with the Particle class.

Particle


When a Particle is instantiated, it automatically draws two random coordinates
limited to the range −50 to +50. The pbest_position (the best individual position
of that particle) is initialized with the initial position and, since we’re looking for
the minimum value, the pbest_value is initialized with +inf (it could be any
sufficiently large value). A __str__() method is also defined, just to print the
current position and the best individual value. The move() method adds to the
positional vector the per-dimension velocity calculated in the search, as we will
see ahead.
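A sketch of such a Particle class, following the description above (the details are reconstructed, not the original code):

```python
import random
import numpy as np

class Particle:
    def __init__(self):
        # Two random coordinates drawn in the range -50 to +50
        self.position = np.array([random.uniform(-50, 50),
                                  random.uniform(-50, 50)])
        self.velocity = np.array([0.0, 0.0])
        # The best individual position starts at the initial position;
        # since we minimize, its value starts at +inf
        self.pbest_position = self.position
        self.pbest_value = float("inf")

    def __str__(self):
        return f"I am at {self.position}, my pbest is {self.pbest_position}"

    def move(self):
        # Add the velocity calculated in the search to the position
        self.position = self.position + self.velocity
```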

Search Space

The SearchSpace is the entity that controls the algorithm’s routine. In this
implementation, it is responsible for keeping all the particles, identifying and
setting each particle’s best individual position, managing the target error
criterion, calculating the best global value and setting the best global position. In
short, it encapsulates all the principal steps.

The methods set_pbest and set_gbest have similar implementations. Each goes
through all the particles, calculates the fitness value at the particle’s position and
compares it with the best individual position (in set_pbest) or the best global
position (in set_gbest), updating them when a better value is found.

The method move_particles calculates the new velocity vector for each particle in
each dimension, as explained before.
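A sketch of the SearchSpace following that description (the coefficient values W, C₁ and C₂ are assumptions, and it expects Particle objects like the ones described above):

```python
import random
import numpy as np

# Assumed hand-picked coefficients
W, C1, C2 = 0.5, 0.8, 0.9

class SearchSpace:
    def __init__(self, target, target_error, n_particles):
        self.target = target              # target fitness value (here, 1)
        self.target_error = target_error  # acceptable |gbest_value - target|
        self.n_particles = n_particles
        self.particles = []
        self.gbest_value = float("inf")
        self.gbest_position = np.array([random.uniform(-50, 50),
                                        random.uniform(-50, 50)])

    def fitness(self, particle):
        # f(x, y) = x^2 + y^2 + 1
        x, y = particle.position
        return x ** 2 + y ** 2 + 1

    def set_pbest(self):
        # Update each particle's best individual position
        for particle in self.particles:
            candidate = self.fitness(particle)
            if candidate < particle.pbest_value:
                particle.pbest_value = candidate
                particle.pbest_position = particle.position

    def set_gbest(self):
        # Update the best global position over all particles
        for particle in self.particles:
            candidate = self.fitness(particle)
            if candidate < self.gbest_value:
                self.gbest_value = candidate
                self.gbest_position = particle.position

    def move_particles(self):
        # Apply the velocity equation to every particle, then move it
        for particle in self.particles:
            particle.velocity = (W * particle.velocity
                                 + C1 * random.random() * (particle.pbest_position - particle.position)
                                 + C2 * random.random() * (self.gbest_position - particle.position))
            particle.move()
```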

Main loop

First, a SearchSpace is initialized with target 1. This is the target fitness value,
meaning the target is f(x,y) = 1: the algorithm will look for the values of x and y
that give a result equal to 1, as shown before in the shape of the function we want
to minimize. The contour plot also shows us that the point we want to find is
[0,0]. The target error and the number of particles (n_particles) are set by the
user. Then all the particles are created with a list comprehension, and the
iterations begin.

In each iteration, the best individual positions and the best global position are
found first, in order to evaluate the target error criterion. If the criterion does not
reach the minimum error, each particle calculates its velocity, moves to a new
position, and the process repeats until the number of iterations or the minimum
error criterion is reached. The last line just prints out the best result found.

Check out the full code below

The values of W, C₁ and C₂ are chosen by hand, so be careful with them: poor
choices can make the algorithm take much longer to converge.
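Putting it all together, a complete runnable sketch of the full code (again, W = 0.5, C₁ = 0.8 and C₂ = 0.9 are assumed hand-picked values, with fixed bounds of ±50):

```python
import random
import numpy as np

W, C1, C2 = 0.5, 0.8, 0.9  # assumed hand-picked coefficients

class Particle:
    def __init__(self):
        self.position = np.array([random.uniform(-50, 50),
                                  random.uniform(-50, 50)])
        self.velocity = np.array([0.0, 0.0])
        self.pbest_position = self.position
        self.pbest_value = float("inf")

    def move(self):
        self.position = self.position + self.velocity

class SearchSpace:
    def __init__(self, target, target_error, n_particles):
        self.target = target
        self.target_error = target_error
        self.n_particles = n_particles
        self.particles = []
        self.gbest_value = float("inf")
        self.gbest_position = np.array([random.uniform(-50, 50),
                                        random.uniform(-50, 50)])

    def fitness(self, particle):
        x, y = particle.position
        return x ** 2 + y ** 2 + 1

    def set_pbest(self):
        for particle in self.particles:
            candidate = self.fitness(particle)
            if candidate < particle.pbest_value:
                particle.pbest_value = candidate
                particle.pbest_position = particle.position

    def set_gbest(self):
        for particle in self.particles:
            candidate = self.fitness(particle)
            if candidate < self.gbest_value:
                self.gbest_value = candidate
                self.gbest_position = particle.position

    def move_particles(self):
        for particle in self.particles:
            particle.velocity = (W * particle.velocity
                                 + C1 * random.random() * (particle.pbest_position - particle.position)
                                 + C2 * random.random() * (self.gbest_position - particle.position))
            particle.move()

# These values would normally be entered by the user
n_iterations = 100
search_space = SearchSpace(target=1, target_error=1e-6, n_particles=30)
search_space.particles = [Particle() for _ in range(search_space.n_particles)]

iteration = 0
while iteration < n_iterations:
    search_space.set_pbest()
    search_space.set_gbest()
    # Stop once the best fitness is close enough to the target f(x, y) = 1
    if abs(search_space.gbest_value - search_space.target) <= search_space.target_error:
        break
    search_space.move_particles()
    iteration += 1

print("The best solution is:", search_space.gbest_position,
      "in n_iterations:", iteration)
```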

Results
Using the Python interpreter, we enter the values for the number of iterations, the
target error criterion and the number of particles in the swarm.

Then, it prints out the best found result:


The best solution is: [ 5.06237850e-04 -1.14851873e-05] in n_iterations: 22

It’s an easy algorithm to implement and use. Hope you all enjoyed!

Any criticism, tips or hints will be welcome too :)
