
Particle Swarm Optimization

The Particle Swarm Optimization (PSO) algorithm is a computational algorithm inspired by the social behavior of animals such as bird flocks and fish schools. PSO is a population-based search algorithm that is simple to implement, effective, and regarded as a global optimization algorithm. It relies only on primitive mathematical operators and is inexpensive in both speed and memory requirements. PSO was developed by Kennedy and Eberhart in 1995. Compared with other evolutionary algorithms, PSO is built on a distinctive concept: particles (potential solutions) fly through the search space and accelerate toward better solutions, which gives the algorithm the ability to find a feasible solution quickly.

The swarm in PSO consists of particles, each representing a potential solution to the optimization problem. A particle has two main attributes: position and velocity. The position of each particle is updated according to its own experience and the experience of its neighbors, while the velocity is adjusted to determine the direction in which the particle should move. During swarm movement, a particle updates its position from its new velocity and its previous position in the search space, whereas the update of the particle's velocity depends on its previous velocity, its personal best position (Pbest), and the global best position, or leader (Gbest). Equations (1) and (2) are used to update the velocity and position, respectively.

V_{i,j}(t+1) = w V_{i,j}(t) + r_1 c_1 [Pbest_{i,j}(t) - X_{i,j}(t)] + r_2 c_2 [Gbest(t) - X_{i,j}(t)]    (1)

X_{i,j}(t+1) = X_{i,j}(t) + V_{i,j}(t+1)    (2)


where V_{i,j}(t) is the velocity of particle i in dimension j at iteration t; X_{i,j}(t) is the position of particle i in dimension j at iteration t, which depends on the previous position and the new velocity; w is the inertia weight, used to control the influence of the previous velocity on the current velocity [6]; r_1 and r_2 are two random numbers drawn from (0, 1); c_1 and c_2 are learning factors, or acceleration factors, that are fixed numbers; Pbest_{i,j}(t) is the personal best position of particle i, i.e., the position with the best fitness value (the smallest, for minimization) found by that particle up to iteration t; and Gbest(t) is the leader, the global best position, at iteration t.
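As a minimal sketch of how equations (1) and (2) can be coded, assuming NumPy arrays and illustrative parameter values for w, c1, and c2 (the function name update_particle and its defaults are assumptions, not taken from the text):

import numpy as np

def update_particle(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
    # x, v, pbest, gbest: 1-D arrays of length j (the number of dimensions).
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(x.shape)   # random numbers in (0, 1), one per dimension
    r2 = rng.random(x.shape)
    v_new = w * v + r1 * c1 * (pbest - x) + r2 * c2 * (gbest - x)   # equation (1)
    x_new = x + v_new                                               # equation (2)
    return x_new, v_new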

The leader particle in each generation guides the other particles to move toward the optimal position. The performance of each particle in the swarm is evaluated according to the objective function, or fitness function, of the optimization problem.
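For example, a minimal sketch of a fitness function for a minimization problem, assuming the sphere function (an illustrative choice, not a function named in the text):

import numpy as np

def fitness(x):
    # Sphere function: sum of squares of the position components; smaller is better.
    return np.sum(x ** 2)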

It is assumed that the search space has j dimensions and that each particle i (a potential solution) has a fitness value F(x) and a velocity V that moves it through the search space. The steps of the PSO algorithm are as follows:

Step 1: Initialize a random population (positions X and velocities V of all particles).

Step 2: Set the personal best positions equal to the initial positions, Pbest_{i,j} = X_{i,j}, and evaluate the fitness value F(x)_i of each particle (the fitness value is measured in different ways depending on the problem). Then take the best value (either maximum or minimum) from this set as the global best position (Gbest), called the leader.

Step 3: Update the particle’s velocity according to equation (1) and then
update the particle’s position according to equation (2).

Step 4: Evaluate the fitness value of each particle at its new position and compare it with the fitness of the particle's previous best position. If the current position is better, update the personal best: Pbest_{i,j}(t+1) = X_{i,j}(t+1); else Pbest_{i,j}(t+1) = Pbest_{i,j}(t).

Then choose the best value (either maximum or minimum) among the personal bests to update the leader of the swarm (Gbest): if Pbest_i(t+1) is better than Gbest(t), then Gbest(t+1) = Pbest_i(t+1); else Gbest(t+1) = Gbest(t).

Step 5: If t = Tmax then stop and return the solution, else go to step 3.
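The following Python sketch puts Steps 1 to 5 together for a minimization problem; the swarm size, search bounds, Tmax value, and the sphere fitness function used in the example are illustrative assumptions, not values given in the text:

import numpy as np

def pso(fitness, dim, n_particles=30, t_max=100, w=0.7, c1=1.5, c2=1.5,
        lower=-5.0, upper=5.0, seed=0):
    rng = np.random.default_rng(seed)

    # Step 1: initialize a random population (positions X and velocities V).
    x = rng.uniform(lower, upper, size=(n_particles, dim))
    v = np.zeros((n_particles, dim))

    # Step 2: Pbest = X, evaluate fitness, take the best particle as the leader Gbest.
    pbest = x.copy()
    pbest_fit = np.array([fitness(p) for p in pbest])
    gbest = pbest[np.argmin(pbest_fit)].copy()   # minimization: smallest is best
    gbest_fit = pbest_fit.min()

    for t in range(t_max):
        # Step 3: update velocities with equation (1) and positions with equation (2).
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + r1 * c1 * (pbest - x) + r2 * c2 * (gbest - x)
        x = x + v

        # Step 4: update Pbest where the new position is better, then update Gbest.
        fit = np.array([fitness(p) for p in x])
        improved = fit < pbest_fit
        pbest[improved] = x[improved]
        pbest_fit[improved] = fit[improved]
        if pbest_fit.min() < gbest_fit:
            gbest = pbest[np.argmin(pbest_fit)].copy()
            gbest_fit = pbest_fit.min()

    # Step 5: t has reached Tmax, so stop and return the solution.
    return gbest, gbest_fit

# Example usage: minimize the sphere function in 3 dimensions.
best_x, best_f = pso(lambda x: np.sum(x ** 2), dim=3)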
