Module V

CHAPTER 3

Basic Particle Swarm Optimization

This chapter presents a conceptual overview of the PSO algorithm, its parameter selection strategies, a geometrical illustration and neighborhood topologies, the advantages and disadvantages of PSO, and a mathematical explanation.

3.1 The Basic Model of the PSO Algorithm

Kennedy and Eberhart first proposed a solution to complex non-linear optimization problems by imitating the behavior of bird flocks, introducing the concept of function optimization by means of a particle swarm [15]. Consider the global optimum of an n-dimensional function defined by

$$f(x_1, x_2, \ldots, x_n) = f(\mathbf{X}), \tag{3.1}$$

where $\mathbf{X} = (x_1, x_2, \ldots, x_n)$ is the search variable, which represents the set of free variables of the given function. The aim is to find a value $\mathbf{X}^*$ such that the function $f(\mathbf{X}^*)$ is either a maximum or a minimum in the search space.

Consider the two functions $f_1$ (unimodal) and $f_2$ (multimodal) given by (3.2) and (3.3), which are plotted in Figure 3.1.

Figure 3.1: Plot of the functions $f_1$ and $f_2$: (a) unimodal, (b) multimodal.

From Figure 3.1 (a), it is clear that the global minimum of the function $f_1$ is at the origin of the search space. That is, $f_1$ is a unimodal function, which has only one minimum. However, finding the global optimum is not so easy for multimodal functions, which have multiple local minima. Figure 3.1 (b) shows that the function $f_2$ has a rough search space with multiple peaks, so many agents have to start from different initial locations and continue exploring the search space until at least one agent reaches the globally optimal position. During this process all agents can communicate and share their information among themselves [15]. This thesis discusses how to solve such multimodal function problems.

The Particle Swarm Optimization (PSO) algorithm is a multi-agent parallel search technique which maintains a swarm of particles, where each particle represents a potential solution. All particles fly through a multidimensional search space, and each particle adjusts its position according to its own experience and that of its neighbors. Let $x_i(t)$ denote the position vector of particle $i$ in the multidimensional search space (i.e. $\mathbb{R}^n$) at time step $t$; then the position of each particle is updated in the search space by

$$x_i(t+1) = x_i(t) + v_i(t+1), \quad \text{with } x_i(0) \sim U(x_{\min}, x_{\max}), \tag{3.4}$$

where $v_i(t)$ is the velocity vector of particle $i$ that drives the optimization process and reflects both the particle's own experience knowledge and the social experience knowledge from all particles, and $U(x_{\min}, x_{\max})$ is the uniform distribution with $x_{\min}$ and $x_{\max}$ as its minimum and maximum values respectively.

Therefore, in a PSO method, all particles are initialized randomly and evaluated to compute their fitness, together with finding the personal best (the best value of each particle) and the global best (the best value of any particle in the entire swarm). After that, a loop starts to find an optimum solution. In the loop, the particles' velocities are first updated using the personal and global bests, and then each particle's position is updated using its current velocity. The loop ends when a stopping criterion, predetermined in advance, is satisfied [22].

Basically, two PSO algorithms, namely the Global Best (gbest) and Local Best
(lbest) PSO, have been developed which differ in the size of their neighborhoods.
These algorithms are discussed in Sections 3.1.1 and 3.1.2 respectively.

3.1.1 Global Best PSO

The global best PSO (or gbest PSO) is a method where the position of each particle is influenced by the best-fit particle in the entire swarm. It uses a star social network topology (Section 3.5), where the social information is obtained from all particles in the entire swarm [2] [4]. In this method each individual particle $i$ has a current position $x_i(t)$ in the search space, a current velocity $v_i(t)$, and a personal best position $P_{best,i}(t)$ in the search space. The personal best position corresponds to the position in the search space where particle $i$ had the smallest value as determined by the objective function $f$, considering a minimization problem. In addition, the position yielding the lowest value amongst all the personal bests is called the global best position, denoted by $G_{best}$ [20]. The following equations (3.5) and (3.6) define how the personal and global best values are updated, respectively.

Considering minimization problems, the personal best position $P_{best,i}$ at the next time step $t+1$ is calculated as

$$P_{best,i}(t+1) = \begin{cases} P_{best,i}(t) & \text{if } f(x_i(t+1)) \ge f(P_{best,i}(t)), \\ x_i(t+1) & \text{if } f(x_i(t+1)) < f(P_{best,i}(t)), \end{cases} \tag{3.5}$$

where $f: \mathbb{R}^n \rightarrow \mathbb{R}$ is the fitness function. The global best position $G_{best}(t)$ at time step $t$ is calculated as

$$G_{best}(t) \in \{P_{best,1}(t), \ldots, P_{best,n}(t)\} \quad \text{with} \quad f(G_{best}(t)) = \min\{f(P_{best,1}(t)), \ldots, f(P_{best,n}(t))\}. \tag{3.6}$$

It is therefore important to note that the personal best $P_{best,i}$ is the best position that the individual particle $i$ has visited since the first time step, whereas the global best position $G_{best}$ is the best position discovered by any of the particles in the entire swarm [4].

For the gbest PSO method, the velocity of particle $i$ is calculated by

$$v_{ij}(t+1) = v_{ij}(t) + c_1 r_{1j}(t)\,[P_{best,ij}(t) - x_{ij}(t)] + c_2 r_{2j}(t)\,[G_{best,j}(t) - x_{ij}(t)], \tag{3.7}$$

where
$v_{ij}(t)$ is the velocity of particle $i$ in dimension $j$ at time $t$;
$x_{ij}(t)$ is the position of particle $i$ in dimension $j$ at time $t$;
$P_{best,ij}(t)$ is the personal best position of particle $i$ in dimension $j$ found from initialization through time $t$;
$G_{best,j}(t)$ is the global best position in dimension $j$ found from initialization through time $t$;
$c_1$ and $c_2$ are positive acceleration constants used to scale the contribution of the cognitive and social components respectively;
$r_{1j}(t)$ and $r_{2j}(t) \sim U(0,1)$ are random numbers from the uniform distribution at time $t$.

The following Flowchart 1 shows the gbest PSO algorithm.

Start: initialize the positions $x_{ij}^0$, the constants $c_1$ and $c_2$, and the velocities $v_{ij}^0$; evaluate $f_{ij}^0$ using $x_{ij}^0$; set $D$ = maximum number of dimensions, $P$ = maximum number of particles, $N$ = maximum number of iterations; set $t = 0$.

1. Choose the random numbers $r_{1j}^t$ and $r_{2j}^t$.
2. For each particle $i = 1, \ldots, P$ and each dimension $j = 1, \ldots, D$, update the velocity and position:
   $v_{ij}^{t+1} = v_{ij}^t + c_1 r_{1j}^t [P_{best,i}^t - x_{ij}^t] + c_2 r_{2j}^t [G_{best} - x_{ij}^t]$,
   $x_{ij}^{t+1} = x_{ij}^t + v_{ij}^{t+1}$.
3. Evaluate $f_{ij}^t$ using $x_{ij}^t$.
4. If $f_{ij}^t \le f_{best,i}$, set $f_{best,i} = f_{ij}^t$ and $P_{best,i}^t = x_{ij}^t$.
5. If $f_{ij}^t \le f_{gbest}$, set $f_{gbest} = f_{ij}^t$ and $G_{best} = x_{ij}^t$.
6. If $t \le N$, set $t = t + 1$ and repeat from step 1; otherwise stop.

Flowchart 1: gbest PSO
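As a rough illustration of the flow above, the following Python sketch implements the gbest PSO loop; the test function (a sphere), bounds, swarm size and acceleration constants are assumptions chosen for the example, not values taken from the text.

```python
import numpy as np

def gbest_pso(f, bounds, n_particles=20, n_iters=100, c1=2.0, c2=2.0):
    """Minimal gbest PSO sketch (minimization), following Flowchart 1."""
    dim = len(bounds)
    lo, hi = np.array(bounds).T
    x = np.random.uniform(lo, hi, (n_particles, dim))   # initial positions
    v = np.zeros((n_particles, dim))                     # initial velocities
    pbest = x.copy()                                     # personal best positions
    pbest_f = np.array([f(p) for p in x])                # personal best fitness values
    gbest = pbest[np.argmin(pbest_f)]                    # global best position
    for t in range(n_iters):
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update, cf. (3.7)
        x = x + v                                               # position update, cf. (3.4)
        fx = np.array([f(p) for p in x])
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        gbest = pbest[np.argmin(pbest_f)]
    return gbest, pbest_f.min()

# Example usage on an assumed test function (sphere), global minimum at the origin.
best_x, best_f = gbest_pso(lambda p: np.sum(p**2), bounds=[(-5, 5), (-5, 5)])
```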

3.1.2 Local Best PSO

The local best PSO (or lbest PSO) method only allows each particle to be influenced by the best-fit particle chosen from its neighborhood, and it reflects a ring social topology (Section 3.5). Here the social information is exchanged within the neighborhood of the particle, denoting local knowledge of the environment [2] [4]. In this case, the velocity of particle $i$ is calculated by

$$v_{ij}(t+1) = v_{ij}(t) + c_1 r_{1j}(t)\,[P_{best,ij}(t) - x_{ij}(t)] + c_2 r_{2j}(t)\,[L_{best,ij}(t) - x_{ij}(t)], \tag{3.8}$$

where $L_{best,ij}(t)$ is the best position, in dimension $j$, that any particle has had in the neighborhood of particle $i$ from initialization through time $t$.

The following Flowchart 2 summarizes the lbest PSO algorithm:


Start: initialize the positions $x_{ij}^0$, the constants $c_1$ and $c_2$, and the velocities $v_{ij}^0$; evaluate $f_{ij}^0$ using $x_{ij}^0$; set $D$ = maximum number of dimensions, $P$ = maximum number of particles, $N$ = maximum number of iterations; set $t = 0$.

1. Choose the random numbers $r_{1j}^t$ and $r_{2j}^t$.
2. For each particle $i = 1, \ldots, P$ and each dimension $j = 1, \ldots, D$, update the velocity and position:
   $v_{ij}^{t+1} = v_{ij}^t + c_1 r_{1j}^t [P_{best,i}^t - x_{ij}^t] + c_2 r_{2j}^t [L_{best,i} - x_{ij}^t]$,
   $x_{ij}^{t+1} = x_{ij}^t + v_{ij}^{t+1}$.
3. Evaluate $f_{ij}^t$ using $x_{ij}^t$.
4. If $f_{ij}^t \le f_{best,i}$, set $f_{best,i} = f_{ij}^t$ and $P_{best,i}^t = x_{ij}^t$.
5. If $\min(f_{best,i-1}^t, f_{best,i}^t, f_{best,i+1}^t) \le f_{lbest}$, set $f_{lbest} = f_{ij}^t$ and $L_{best,i} = x_{ij}^t$.
6. If $t \le N$, set $t = t + 1$ and repeat from step 1; otherwise stop.

Flowchart 2: lbest PSO
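For the ring neighborhood used above, the following minimal Python sketch shows one way the local best $L_{best,i}$ could be selected; the three-particle neighborhood $(i-1, i, i+1)$ with wrap-around is an assumption of this sketch.

```python
import numpy as np

def local_bests(pbest, pbest_f):
    """For each particle i, return the best personal-best position among the
    ring neighbors (i-1, i, i+1), wrapping around at the ends (minimization)."""
    n = len(pbest_f)
    lbest = np.empty_like(pbest)
    for i in range(n):
        neigh = [(i - 1) % n, i, (i + 1) % n]         # ring neighbors of particle i
        best = min(neigh, key=lambda k: pbest_f[k])   # neighbor with smallest fitness
        lbest[i] = pbest[best]
    return lbest
```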

Finally, comparing Sections 3.1.1 and 3.1.2, in the gbest PSO algorithm every particle obtains information from the best particle in the entire swarm, whereas in the lbest PSO algorithm each particle obtains information only from its immediate neighbors in the swarm [1].

3.2 Comparison of ‘gbest’ to ‘lbest’

There are two main differences between the gbest PSO and the lbest PSO. First, because of the larger particle interconnectivity of the gbest PSO, it sometimes converges faster than the lbest PSO. Second, due to the larger diversity of the lbest PSO, it is less susceptible to being trapped in local minima [4].

3.3 PSO Algorithm Parameters

There are some parameters in the PSO algorithm that may affect its performance. For any given optimization problem, the values and choices of some of these parameters have a large impact on the efficiency of the PSO method, while other parameters have little or no effect [9]. The basic PSO parameters are the swarm size (number of particles), the number of iterations, the velocity components, and the acceleration coefficients, illustrated below. In addition, PSO is also influenced by the inertia weight, velocity clamping, and velocity constriction; these parameters are described in Chapter IV.

3.3.1 Swarm size

The swarm size, or population size, is the number of particles n in the swarm. A larger swarm allows larger parts of the search space to be covered per iteration, and a large number of particles may reduce the number of iterations needed to obtain a good optimization result. In contrast, a very large number of particles increases the computational complexity per iteration and makes the search more time consuming. A number of empirical studies have shown that most PSO implementations use a swarm size within a fairly small, empirically chosen interval.

3.3.2 Iteration numbers

The number of iterations required to obtain a good result is also problem-dependent. Too few iterations may stop the search process prematurely, while too many iterations add unnecessary computational complexity and require more time [4].

3.3.3 Velocity Components

The velocity components are very important for updating a particle's velocity. There are three terms in the particle's velocity in equations (3.7) and (3.8):

1. The term $v_{ij}(t)$ is called the inertia component. It provides a memory of the previous flight direction, i.e. of the movement in the immediate past. This component acts as a momentum which prevents the particle from drastically changing direction and biases it towards the current direction.

2. The term $c_1 r_{1j}(t)[P_{best,ij}(t) - x_{ij}(t)]$ is called the cognitive component, which measures the performance of particle $i$ relative to its past performances. This component acts like an individual memory of the position that was best for the particle. The effect of the cognitive component represents the tendency of individuals to return to positions that satisfied them most in the past; it is referred to as the nostalgia of the particle.

3. The term $c_2 r_{2j}(t)[G_{best,j}(t) - x_{ij}(t)]$ for gbest PSO, or $c_2 r_{2j}(t)[L_{best,ij}(t) - x_{ij}(t)]$ for lbest PSO, is called the social component, which measures the performance of particle $i$ relative to a group of particles or neighbors. The effect of the social component is that each particle flies towards the best position found in the particle's neighborhood.

3.3.4 Acceleration coefficients

The acceleration coefficients $c_1$ and $c_2$, together with the random values $r_{1j}(t)$ and $r_{2j}(t)$, maintain the stochastic influence of the cognitive and social components of the particle's velocity respectively. The constant $c_1$ expresses how much confidence a particle has in itself, while $c_2$ expresses how much confidence a particle has in its neighbors [4]. Some properties of $c_1$ and $c_2$ are:

● When $c_1 = c_2 = 0$, all particles continue flying at their current speed until they hit the search space's boundary. Therefore, from equations (3.7) and (3.8), the velocity update equation reduces to

$$v_{ij}(t+1) = v_{ij}(t). \tag{3.9}$$

● When $c_1 > 0$ and $c_2 = 0$, all particles are independent. The velocity update equation becomes

$$v_{ij}(t+1) = v_{ij}(t) + c_1 r_{1j}(t)\,[P_{best,ij}(t) - x_{ij}(t)]. \tag{3.10}$$

On the contrary, when $c_1 = 0$ and $c_2 > 0$, all particles are attracted to a single point in the entire swarm and the velocity update becomes

$$v_{ij}(t+1) = v_{ij}(t) + c_2 r_{2j}(t)\,[G_{best,j}(t) - x_{ij}(t)] \quad \text{for gbest PSO,} \tag{3.11}$$

or

$$v_{ij}(t+1) = v_{ij}(t) + c_2 r_{2j}(t)\,[L_{best,ij}(t) - x_{ij}(t)] \quad \text{for lbest PSO.} \tag{3.12}$$

● When $c_1 = c_2$, all particles are attracted towards the average of $P_{best,i}$ and $G_{best}$.

● When $c_1 \gg c_2$, each particle is more strongly influenced by its personal best position, resulting in excessive wandering. In contrast, when $c_2 \gg c_1$, all particles are much more influenced by the global best position, which causes all particles to run prematurely to the optima [4] [11].

Normally, $c_1$ and $c_2$ are kept static, with their optimized values found empirically. Wrong initialization of $c_1$ and $c_2$ may result in divergent or cyclic behavior [4]. Based on various empirical studies, it has been proposed that the two acceleration constants be set to $c_1 = c_2 = 2$.
3.4 Geometrical illustration of PSO

The updated velocity of a particle consists of the three components in equations (3.7) and (3.8). Consider the movement of a single particle in a two-dimensional search space.

Figure 3.2: Velocity and position update for a particle in a two-dimensional search space: (a) time step $t$, (b) time step $t+1$. The figure shows the inertia velocity $v_i^t$, the cognitive velocity $P_{best,i}^t - x_i^t$, the social velocity $G_{best} - x_i^t$, the resulting new velocity $v_i^{t+1}$, and the positions $x_i^t$, $x_i^{t+1}$ and $x_i^{t+2}$.

Figure 3.2 illustrates how the three velocity components contribute to moving the particle towards the global best position at time steps $t$ and $t+1$ respectively.

Figure 3.3: Velocity and position update for multiple particles in gbest PSO: (a) at time $t = 0$, (b) at time $t = 1$.

Figure 3.3 shows the position updates for more than one particle in a two-dimensional search space and illustrates the gbest PSO; the optimum position is marked in the figure. Figure 3.3 (a) shows the initial positions of all particles together with the global best position. The cognitive component is zero at $t = 0$, and all particles are attracted toward the best position by the social component only. Here the global best position does not change. Figure 3.3 (b) shows the new positions of all particles and a new global best position after the first iteration, i.e. at $t = 1$.

Figure 3.4: Velocity and position update for multiple particles in lbest PSO: (a) at time $t = 0$, (b) at time $t = 1$. The particles a–j are grouped into the neighborhood subsets 1, 2 and 3, each with its own local best $L_{best}$.

Figure 3.4 illustrates how all particles are attracted by their immediate neighbors in the search space using lbest PSO. There are several subsets of particles, where one subset is defined for each particle and the local best particle is then selected from it. Figure 3.4 (a) shows particles a, b and c moving towards particle d, which is the best position in subset 1. In subset 2, particles e and f move towards particle g. Similarly, particle h moves towards particle i, as does j, in subset 3 at time step $t = 0$. In Figure 3.4 (b), at time step $t = 1$, particle d remains the best position for subset 1, so particles a, b and c again move towards d.

3.5 Neighborhood Topologies

A neighborhood must be defined for each particle [7]. This neighborhood determines the extent of social interaction within the swarm and influences a particular particle's movement. Less interaction occurs when the neighborhoods in the swarm are small [4]. For a small neighborhood the convergence will be slower, but it may improve the quality of solutions. For a larger neighborhood the convergence will be faster, but the risk is that convergence sometimes occurs prematurely [7]. To solve this problem, the search process can start with a small neighborhood size which is then increased over time. This technique ensures an initially high diversity with faster convergence as the particles move towards a promising search region [4].

The PSO algorithm relies on social interaction among the particles in the entire swarm. Particles communicate with one another by exchanging information about their success. When a particle in the swarm finds a better position, all particles move towards it. This performance of the particles is determined by the particles' neighborhood [4]. Researchers have worked on improving this performance by designing different types of neighborhood structures [15]. Some neighborhood structures or topologies are discussed below:

Figure 3.5: Neighborhood topologies: (a) star or gbest, (b) ring or lbest, (c) wheel (with a focal particle), (d) four clusters.

Figure 3.5 (a) illustrates the star topology, where each particle connects with every other particle. This topology leads to faster convergence than the other topologies, but it is more susceptible to being trapped in local minima. Because all particles know each other, this topology is referred to as the gbest PSO.

Figure 3.5 (b) illustrates the ring topology, where each particle is connected only to its immediate neighbors. In this process, when one particle finds a better result, it passes it to its immediate neighbors, and these two immediate neighbors pass it to their immediate neighbors, until it reaches the last particle. Thus the best result found spreads very slowly around the ring. Convergence is slower, but larger parts of the search space are covered than with the star topology. This topology is referred to as the lbest PSO.

Figure 3.5 (c) illustrates the wheel topology, in which only one particle (the focal particle) connects to the others, and all information is communicated through this particle. The focal particle compares the best performances of all particles in the swarm and adjusts its position towards the best-performing particle. The new position of the focal particle is then communicated to all particles.

Figure 3.5 (d) illustrates a four clusters topology, where four clusters (or cliques)
are connected with two edges between neighboring clusters and one edge between
opposite clusters.

There are further neighborhood structures or topologies (for instance, the pyramid topology, the Von Neumann topology, and so on), but no single best topology is known for finding the optimum of all kinds of optimization problems.

3.6 Problem Formulation of PSO algorithm

Problem: Find the maximum of the given function over the given range of the search variable using the PSO algorithm. Use 9 particles with the initial positions given in the problem, and show the detailed computations for iterations 1, 2 and 3.

Solution:

Step 1: Choose the number of particles as 9, with the acceleration constants and the initial positions $x_1^0, x_2^0, \ldots, x_9^0$ as given in the problem.

The initial population (i.e. at iteration number $t = 0$) can be represented as $x_1^0, x_2^0, \ldots, x_9^0$.

Evaluate the objective function values $f_1^0, f_2^0, \ldots, f_9^0$ at these initial positions.

Set the initial velocities of each particle to zero: $v_1^0 = v_2^0 = \cdots = v_9^0 = 0$.

Step 2: Set the iteration number to $t = 1$ and go to step 3.

Step 3: Find the personal best for each particle by

$$P_{best,i} = \begin{cases} P_{best,i} & \text{if } f(x_i) \le f(P_{best,i}), \\ x_i & \text{if } f(x_i) > f(P_{best,i}), \end{cases}$$

(for a maximization problem), which gives the personal best positions of the nine particles.

Step 4: Find the global best by

$$f(G_{best}) = \max\{f(P_{best,i})\}, \quad i = 1, 2, \ldots, 9.$$

Since the maximum personal best value identifies the best particle, $G_{best}$ is set to that particle's position.

Step 5: Considering random numbers in the range (0, 1) as $r_1^1$ and $r_2^1$, find the velocities of the particles by

$$v_i^{t+1} = v_i^{t} + c_1 r_1^{t}\,[P_{best,i} - x_i^{t}] + c_2 r_2^{t}\,[G_{best} - x_i^{t}], \quad i = 1, 2, \ldots, 9,$$

which gives the velocities of the nine particles.

Step 6: Find the new values of $x_i^{t+1}$, $i = 1, 2, \ldots, 9$, by

$$x_i^{t+1} = x_i^{t} + v_i^{t+1},$$

which gives the new positions of the nine particles.

Step 7: Find the objective function values at the new positions $x_i^{t+1}$.

Step 8: Stopping criterion: if the terminal rule is satisfied, stop the iteration and output the results; otherwise go to step 2.

Step 2: Set the iteration number to $t = 2$ and go to step 3.

Step3: Find the personal best for each particle.

Step4: Find the global best.

Step 5: Considering random numbers in the range (0, 1) as $r_1^2$ and $r_2^2$, find the velocities of the particles using the same update equation, which gives the new velocities of the nine particles.

Step 6: Find the new values of $x_i$ by $x_i^{t+1} = x_i^{t} + v_i^{t+1}$, which gives the new positions of the nine particles (one of which is 1.9240).

Step 7: Find the objective function values at the new positions.

Step 8: Stopping criterion: if the terminal rule is satisfied, stop the iteration and output the results; otherwise go to step 2.

Step 2: Set the iteration number to $t = 3$ and go to step 3.

Step3: Find the personal best for each particle.

Step4: Find the global best.

Step 5: Considering random numbers in the range (0, 1) as $r_1^3$ and $r_2^3$, find the velocities of the particles using the same update equation, which gives the new velocities of the nine particles.

Step 6: Find the new values of $x_i$ by $x_i^{t+1} = x_i^{t} + v_i^{t+1}$, which gives the new positions of the nine particles.

Step 7: Find the objective function values at the new positions.

Step 8: Stopping criterion: if the terminal rule is satisfied, stop the iteration and output the results; otherwise go to step 2.

Finally, the values of $x_1, x_2, \ldots, x_9$ did not converge, so we increment the iteration number to $t = 4$ and go to step 2. When the positions of all particles converge to similar values, the method has converged and the corresponding value of $x$ is the optimum solution. The iterative process is therefore continued until all particles converge to a single value.
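Because the specific objective function and initial positions are not reproduced above, the following Python sketch only illustrates how Steps 1–8 could be carried out in code for a maximization problem; the objective function, the spread of the initial positions, and the acceleration constants used here are assumptions for the example.

```python
import numpy as np

def pso_maximize(f, x0, c1=2.0, c2=2.0, n_iters=3, seed=0):
    """Sketch of Steps 1-8 for a one-dimensional maximization problem."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)        # Step 1: initial positions
    v = np.zeros_like(x)                 # initial velocities set to zero
    pbest, pbest_f = x.copy(), f(x)      # personal bests
    gbest = pbest[np.argmax(pbest_f)]    # global best (maximization)
    for t in range(1, n_iters + 1):      # Step 2: iteration counter
        r1, r2 = rng.random(), rng.random()                     # Step 5: random numbers in (0, 1)
        v = v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = x + v                                               # Step 6: new positions
        fx = f(x)                                               # Step 7: objective values
        better = fx > pbest_f                                   # Step 3: update personal bests
        pbest[better], pbest_f[better] = x[better], fx[better]
        gbest = pbest[np.argmax(pbest_f)]                       # Step 4: update global best
    return gbest, f(np.array([gbest]))[0]

# Assumed illustrative objective and 9 initial positions (not those of the original problem).
f = lambda x: -(x - 2.0) ** 2 + 11.0
x_best, f_best = pso_maximize(f, x0=np.linspace(-4, 4, 9))
```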

3.7 Advantages and Disadvantages of PSO

The PSO algorithm is said to be one of the most powerful methods for solving non-smooth global optimization problems, although it also has some disadvantages. The advantages and disadvantages of PSO are discussed below.

Advantages of the PSO algorithm [14] [15]:

1) The PSO algorithm is a derivative-free algorithm.

2) It is easy to implement, so it can be applied to both scientific research and engineering problems.

3) It has a limited number of parameters, and the impact of the parameters on the solutions is small compared to other optimization techniques.

4) The calculations in the PSO algorithm are very simple.

5) There are techniques which ensure convergence, and the optimum value of the problem can be calculated easily within a short time.

6) PSO is less dependent on the set of initial points than other optimization techniques.

7) It is conceptually very simple.

Disadvantages of the PSO algorithm [13]:

1) The PSO algorithm suffers from partial optimism, which degrades the regulation of its speed and direction.

2) Problems with non-coordinate systems (for instance, in the energy field) exist.

Ant Colony Optimization
What is an Algorithm?
An algorithm is a process or an optimized procedure for solving a complex problem. There is always a principle behind any algorithm design. Sometimes these algorithms are designed from natural laws and events; evolutionary algorithms are an example of this. Such an algorithm uses natural events and behavior as they are, to obtain a low-cost and best possible solution to a complex problem.
There are many algorithms based on natural behavior, and they are called metaheuristics. The word metaheuristic is made of two parts: meta, which means one level above, and heuristic, which means to find.
Particle Swarm Optimization and Ant Colony Optimization are examples of swarm intelligence algorithms. The objective of swarm intelligence algorithms is to obtain the optimal solution from the behavior of insects, ants, bees, etc.
Principle of Ant Colony Optimization
This technique is derived from the behavior of ant colonies. Ants are social insects that live in groups or colonies instead of living individually. For communication they use pheromones, which are chemicals secreted by the ants on the soil; ants from the same colony can smell them and follow the trail.
To get food, ants use the shortest path available from the food source to the colony. Ants going for the food secrete pheromone, and other ants follow this pheromone to follow the shortest route. Since more ants use the shortest route, the concentration of pheromone on it increases, while the pheromone on the other paths evaporates without being reinforced. These are the two major factors that determine the shortest path from the food source to the colony.
We can understand this through the following stages:

Stage 1: In this stage there is no pheromone on any path, and the paths between the food and the ant colony are empty.

Stage 2: In this stage the ants are divided into two groups following two different paths, each with a probability of 0.5; so we have four ants on the longer path and four on the shorter path.

Stage 3: Now the ants which follow the shorter path will reach the food first, so the pheromone concentration will be higher on this path and more ants from the colony will follow the shorter path.

Stage 4: Now more ants will return via the shortest path, and the concentration of pheromone on it will be even higher. Also, the pheromone on the longer path will evaporate more, as fewer ants are using that path. Consequently, more ants from the colony will use the shortest path.
Algorithm Design
The above behavior of the ants can be used to design an algorithm to find the shortest path. We can consider the ant colony and the food source as the nodes (vertices) of a graph and the paths as the edges between these vertices. The pheromone concentration can then be treated as a weight associated with each path.
Let's suppose there are only two paths, P1 and P2, with C1 and C2 as the pheromone concentrations (weights) along each path, respectively. So we can represent the system as a graph G(V, E), where V represents the vertices and E represents the edges of the graph.
Initially, the probability of choosing the i-th path is

$$P_i = \frac{C_i}{C_1 + C_2}, \quad i = 1, 2.$$

If C1 > C2, then the probability of choosing path 1 is greater than that of path 2; if C1 < C2, then path 2 will be more favorable.
For the return path, the length of the path and the rate of evaporation of the pheromone are
the two factors.
1. Concentration of pheromone added according to the length of the path:

$$\Delta C_i = \frac{K}{L_i},$$

where $L_i$ is the length of the i-th path and $K$ is a constant depending on the length of the path. If the path is shorter, more concentration will be added to the existing pheromone concentration.
2. Change in concentration according to the rate of evaporation:

$$C_i \leftarrow (1 - v)\,C_i,$$

where the parameter $v$ varies from 0 to 1. If $v$ is higher, the remaining pheromone concentration will be lower.
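To make these two update rules concrete, here is a small Python simulation sketch under the stated assumptions (two paths, choice probability proportional to pheromone, a deposit of K/L_i per ant, and an evaporation factor of (1 - v)); all numeric values are illustrative.

```python
import random

def two_path_simulation(L=(1.0, 2.0), K=1.0, v=0.3, n_ants=100, n_rounds=20):
    """Simulate ants choosing between two paths using the pheromone rules above."""
    C = [1.0, 1.0]                                   # initial pheromone on each path
    for _ in range(n_rounds):
        counts = [0, 0]
        for _ in range(n_ants):
            p1 = C[0] / (C[0] + C[1])                # probability of choosing path 1
            counts[0 if random.random() < p1 else 1] += 1
        for i in range(2):
            C[i] = (1 - v) * C[i]                    # evaporation
            C[i] += counts[i] * K / L[i]             # deposit, larger for shorter paths
    return C

print(two_path_simulation())   # pheromone ends up concentrated on the shorter path
```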
Pseudo Code:
Procedure ACO:
1. Initialize the necessary parameters and pheromone concentration;
2. while not termination do:
3. Generate initial ant population;
4. Calculate the fitness values for each ant of the colony;
5. Find optimal solution using selection methods;
6. Update pheromone concentration;
7. end while
8. end procedure

Ant Colony Optimization is used in various problems, such as the Travelling Salesman Problem.
Q. Difference Between PSO and ACO:

Inspiration: PSO is based on the social behaviour of birds or fish; ACO is modelled after the foraging behaviour of ants and their use of pheromones.

Solution Approach: PSO iteratively adjusts the positions of particles based on individual and group knowledge; ACO constructs solutions incrementally using paths marked by pheromone trails.

Performance: PSO shows faster convergence for continuous problems but may suffer from premature convergence; ACO is better for discrete and combinatorial problems but converges more slowly.

Parameter Tuning: PSO requires fewer parameters (e.g., inertia weight, velocity limits) and is easier to tune; ACO is sensitive to parameters like the pheromone evaporation rate and requires more tuning.

Applications: PSO is suitable for continuous optimization and neural architecture search; ACO is effective for routing, scheduling, and distance optimization.

Strengths: PSO offers simplicity, speed, and versatility in continuous spaces; ACO is better at exploring multiple paths and avoiding local optima.

Q. How does ant colony optimization (ACO) solve the traveling salesman problem?

Ant Colony Optimization (ACO) is a metaheuristic algorithm inspired by the foraging behavior of ants. It's particularly effective in solving combinatorial optimization problems like the Traveling Salesman Problem (TSP).
Here's how ACO works to solve the TSP:
1. Initialization:
• Graph Representation: The TSP problem is represented as a graph where nodes
represent cities and edges represent the distances or costs between them.
• Pheromone Trails: Initially, a small amount of pheromone is deposited on each edge
of the graph.
2. Ant Movement:
• Artificial Ants: A number of artificial ants are placed on random nodes.
• Path Construction: Each ant constructs a tour by moving from one city to another.
• Pheromone Influence: The probability of an ant choosing an edge to move to is
influenced by two factors:
o Pheromone Level: Ants tend to choose edges with higher pheromone levels,
as these indicate shorter or more efficient paths.
o Heuristic Information: Ants also consider the distance or cost of the edge,
with shorter distances being more attractive.
3. Pheromone Update:
• Pheromone Evaporation: After each iteration, the pheromone on all edges is reduced
by a certain percentage.
• Pheromone Deposition: Ants deposit pheromone on the edges they traverse,
reinforcing the quality of their tours.
• Positive Feedback: The best ant (or ants) deposits more pheromone on its tour,
making it more likely to be chosen by subsequent ants.
4. Iteration:
• Steps 2 and 3 are repeated for a predefined number of iterations.
5. Convergence:
• Over time, the pheromone levels on the shortest paths increase, guiding more ants to
choose those paths.
• The algorithm converges to a near-optimal solution, representing the shortest possible
tour.
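A compact Python sketch of these steps is given below; it assumes a symmetric distance matrix and the commonly used choice rule in which the probability of an edge is proportional to (pheromone)^alpha * (1/distance)^beta. The parameter names and values (alpha, beta, rho, Q) are assumptions for the example, not taken from the text above.

```python
import numpy as np

def aco_tsp(dist, n_ants=20, n_iters=100, alpha=1.0, beta=2.0, rho=0.5, Q=1.0, seed=0):
    """Sketch of ACO for the TSP. dist: (n, n) NumPy array of pairwise distances."""
    rng = np.random.default_rng(seed)
    n = len(dist)
    tau = np.ones((n, n))                        # initial pheromone on every edge
    eta = 1.0 / (dist + np.eye(n))               # heuristic information (inverse distance)
    best_tour, best_len = None, np.inf
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [rng.integers(n)]             # each ant starts at a random city
            while len(tour) < n:
                i = tour[-1]
                mask = np.ones(n, bool)
                mask[tour] = False               # cities not yet visited
                w = (tau[i] ** alpha) * (eta[i] ** beta) * mask
                tour.append(rng.choice(n, p=w / w.sum()))
            length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= (1 - rho)                         # pheromone evaporation on all edges
        for tour, length in tours:               # deposition, stronger for shorter tours
            for k in range(n):
                a, b = tour[k], tour[(k + 1) % n]
                tau[a, b] += Q / length
                tau[b, a] += Q / length
    return best_tour, best_len
```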

Q. Describe how particle swarm optimization (PSO) works to find the global minimum of f(x) = (x − 2)².

PSO is a meta-heuristic optimization algorithm inspired by the social behavior of bird flocking or fish schooling. It works by iteratively adjusting the positions of a population of particles (potential solutions) in the search space.
1. Initialization:
o Population: A group of particles is randomly initialized within the search
space.
o Position and Velocity: Each particle has a position (x) and a velocity (v).
o Personal Best (pbest): Each particle remembers its best position so far.
o Global Best (gbest): The best position found by any particle in the swarm is
tracked.
2. Evaluation:
o The fitness of each particle is evaluated using the objective function f(x).
3. Update Personal Best:
o If the current position of a particle yields a better fitness value than its
previous best, the current position becomes the new pbest.
4. Update Global Best:
o The particle with the best fitness value in the entire swarm becomes the new
gbest.
5. Update Velocity and Position:
o The velocity and position of each particle are updated using the following equations:
o v(t+1) = w * v(t) + c1 * rand() * (pbest(t) - x(t)) + c2 * rand() * (gbest(t) - x(t))
o x(t+1) = x(t) + v(t+1)
Where:
▪ w: Inertia weight, controls the impact of the previous velocity.
▪ c1 and c2: Cognitive and social coefficients, respectively, balance the
influence of personal and global best positions.
▪ rand(): Random number between 0 and 1.
6. Iteration:
o Steps 2-5 are repeated for a predefined number of iterations or until a
satisfactory solution is found.
The global minimum of f(x) is at x = 2, with f(x) = 0. The particles collectively converge to this solution through exploration and exploitation of the search space.
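A minimal Python sketch of these steps for this one-dimensional function is shown below; the parameter values (w, c1, c2, swarm size, bounds) are assumptions chosen for illustration.

```python
import random

def pso_1d(f, n_particles=10, n_iters=50, w=0.7, c1=1.5, c2=1.5, lo=-10.0, hi=10.0):
    """Minimal 1-D PSO minimizing f, following steps 1-6 above."""
    x = [random.uniform(lo, hi) for _ in range(n_particles)]   # positions
    v = [0.0] * n_particles                                    # velocities
    pbest = x[:]                                               # personal best positions
    pbest_f = [f(p) for p in x]
    gbest = pbest[pbest_f.index(min(pbest_f))]                 # global best position
    for _ in range(n_iters):
        for i in range(n_particles):
            v[i] = (w * v[i]
                    + c1 * random.random() * (pbest[i] - x[i])
                    + c2 * random.random() * (gbest - x[i]))   # velocity update
            x[i] += v[i]                                       # position update
            fx = f(x[i])
            if fx < pbest_f[i]:                                # update personal best
                pbest[i], pbest_f[i] = x[i], fx
        gbest = pbest[pbest_f.index(min(pbest_f))]             # update global best
    return gbest

print(pso_1d(lambda x: (x - 2) ** 2))   # converges towards x = 2
```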
Q. Describe two advantages of using PSO over genetic algorithms.

Here are two advantages of using Particle Swarm Optimization (PSO) over Genetic
Algorithms (GA):
1. Faster Convergence:
• Simple Mechanism: PSO employs a simpler mechanism compared to GA, which
involves operations like crossover and mutation. This simplicity often leads to faster
convergence.
• Direct Search: PSO particles directly move towards promising regions of the search
space, guided by their personal best and global best positions. This direct approach
can accelerate the search process.
2. Fewer Parameters:
• Reduced Complexity: PSO typically requires fewer parameters to tune compared to
GA. This makes it easier to implement and less prone to overfitting.
• Efficient Tuning: With fewer parameters, the optimization process for PSO is often
more efficient, as there are fewer combinations to explore.
