
Optimization Problems and Algorithms

Prof. G. K. Mahanti
Department of ECE
NIT, Durgapur
Overview

 Definition of Optimization
 Definition of Optimization Problems
 Types of Optimization Techniques
 Meta-heuristic Algorithms
 An Example: Whale Optimization Algorithm, QPSO, PSO, TLBO, etc.
Definition of Optimization

The process of finding the best values for the variables of a particular
problem to minimize or maximize an objective function.
Definition of Optimization Problem

An optimization problem is characterized by three components:

 Variables: continuous or discrete
 Constraints: constrained or unconstrained
 Objective function: single or multi-objective

Definition of Optimization Problem (cont.)

Single objective, constrained:

Min f(z1, z2, z3) = (-100 - (z1-5)^2 - (z2-5)^2 + (z3-5)^2)/100

Subject to:
h(z1, z2, z3) = (z1-3)^2 + (z2-2)^2 + (z3-5)^2 - 0.0625 ≤ 0
where 0 ≤ zi ≤ 10

The constraint is handled with a penalty term: minimize
F = f + penalty*max(h, 0), with penalty = 10^6.
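
As a concrete illustration, here is a minimal MATLAB sketch of this
penalty formulation (the handle names f_obj and h_con are illustrative,
not from the slides):

% Penalized objective for the single-objective constrained example.
f_obj = @(z) (-100 - (z(1)-5)^2 - (z(2)-5)^2 + (z(3)-5)^2)/100;
h_con = @(z) (z(1)-3)^2 + (z(2)-2)^2 + (z(3)-5)^2 - 0.0625;
penalty = 1e6;
F = @(z) f_obj(z) + penalty*max(h_con(z), 0);  % minimize F over 0 <= zi <= 10

F([3 2 5])   % feasible point (h = -0.0625 <= 0): no penalty is added
F([0 0 0])   % infeasible point: the penalty term dominates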
Definition of Optimization Problem (cont.)

Multi-objective, unconstrained:

Objective function: Min(f1), Min(f2) and Min(f3)

Definition of Optimization Problem (cont.)

Multi-objective to single objective conversion:

Min F = Min Σ(i=1..N) wi*fi, such that Σ(i=1..N) wi = 1

where the wi are positive weighting coefficients that express the
relative importance of each objective.
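
A minimal MATLAB sketch of this weighted-sum conversion (the three
objectives below are placeholders chosen only for illustration):

% Weighted-sum scalarization: combine N objectives into one function F.
f1 = @(x) sum(x.^2);           % placeholder objective 1
f2 = @(x) sum(abs(x));         % placeholder objective 2
f3 = @(x) max(abs(x));         % placeholder objective 3
w  = [0.5 0.3 0.2];            % positive weights, sum(w) == 1
F  = @(x) w(1)*f1(x) + w(2)*f2(x) + w(3)*f3(x);
F([1 -2 3])                    % single scalar value to be minimized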
Definition of Optimization Problem (cont.)

Multi-objective, constrained (penalty method):

Min F = w1*f1 + w2*f2 + w3*f3
        + penalty1*max(g1, 0)
        + penalty2*max(g2, 0)

where the constraints g1 ≤ 0 and g2 ≤ 0 are absorbed into F through the
penalty terms.
Types of Optimization Techniques

 Conventional: mathematical programming, calculus methods, network methods
 Nonconventional: meta-heuristic algorithms
Meta-heuristic Algorithms

A meta-heuristic is a general algorithmic framework that can be applied
to different optimization problems with relatively few modifications to
adapt it to a specific problem.
Meta-heuristic Algorithms (cont.)

Meta-heuristic algorithms fall into four families:

 Evolutionary algorithms: Genetic Algorithm (GA), Genetic Programming (GP)
 Physics-based algorithms: Charged System Search (CSS), Simulated Annealing (SA)
 Swarm-based algorithms: Ant Colony Optimization, Whale Optimization Algorithm
 Human-based algorithms: Teaching Learning Based Optimization (TLBO),
  Exchange Market Algorithm (EMA)
QUANTUM PARTICLE SWARM OPTIMIZATION FOR
MINIMIZATION OF BENCHMARK FUNCTIONS

Dr. G. K. Mahanti

Department of Electronics and Communication Engg.
National Institute of Technology, Durgapur
What is optimization?

It is either minimization or maximization of a function/problem.

Minimize Y = f(x1, x2, x3, ..., xn)
where x1, x2, ..., xn are the n variables, each restricted to
[lower bound, upper bound]. Example: x1 ∈ [-10, 10].

Maximize Y = Minimize(-Y)

Unimodal vs multimodal functions:

 A function is unimodal if it has a single optimum.
 A function is multimodal if it has two or more local optima.

[Figure: examples of unimodal, bimodal and multimodal functions]


Example: Minimize Y = x1^2 + x2^2 + x3^2 (the sphere function),
with population size ps = 10 and problem dimension D = 3.

VRmin = -5;  % lower bound of all variables
VRmax = 5;   % upper bound of all variables
D = 3;       % dimension of the problem
ps = 10;     % population size
if length(VRmin) == 1            % scalar bounds: expand to one per dimension
    VRmin = repmat(VRmin, 1, D);
    VRmax = repmat(VRmax, 1, D);
end
VRmin = repmat(VRmin, ps, 1);    % replicate the bounds for every particle
VRmax = repmat(VRmax, ps, 1);
pos = VRmin + (VRmax - VRmin).*rand(ps, D)     % random initial positions
Y = pos(:,1).^2 + pos(:,2).^2 + pos(:,3).^2    % fitness of each particle
1st iteration:

pos:
    X1       X2       X3                        Y
  3.1472  -3.4239   1.5574   POP 1    24.0535   Y1
  4.0579   4.7059  -4.6429   POP 2    60.1688   Y2
 -3.7301   4.5717   3.4913   POP 3    47.0032   Y3
  4.1338  -0.1462   4.3399   POP 4    35.9444   Y4
  1.3236   3.0028   1.7874   POP 5    13.9634   Y5
 -4.0246  -3.5811   2.5774   POP 6    35.6649   Y6
 -2.2150  -0.7824   2.4313   POP 7    11.4298   Y7
  0.4688   4.1574  -1.0777   POP 8    18.6649   Y8
  4.5751   2.9221   1.5548   POP 9    31.8871   Y9
  4.6489   4.5949  -3.2881   POP 10   53.5373   Y10

POP 7 (Y7 = 11.4298) is the minimum, so it becomes gbest.
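
In MATLAB the gbest lookup shown in this table is a one-liner; a minimal
sketch continuing the initialization code above:

% Global best: the particle with the smallest fitness value so far.
[gbestval, idx] = min(Y);  % for iteration 1 above: gbestval = 11.4298, idx = 7
gbest = pos(idx, :);       % coordinates of the best particle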


2nd iteration:

pos:                              Y:
  2.0605  -0.6126  -2.2397     9.6372
 -4.6817  -1.1844   1.7970    26.5502
 -2.2308   2.6552   1.5510    14.4318
 -4.5383   2.9520  -3.3739    40.6934
 -4.0287  -3.1313  -3.8100    40.5514
  3.2346  -0.1024  -0.0164    10.4732
  1.9483  -0.5441   4.5974    25.2284
 -1.8290   1.4631  -1.5961     8.0337
  4.5022   2.0936   0.8527    25.3804
 -4.6555   2.5469  -2.7619    35.7886

POP 8 (Y8 = 8.0337) is the new minimum, so gbest is updated.


WHAT IS PARTICLE SWARM OPTIMIZATION?

Particle swarm optimization (PSO) is a robust stochastic evolutionary
computation technique based on the movement and intelligence of swarms,
developed in 1995 by Kennedy and Eberhart [3].

 Particle position = potential solution
 Particle velocity/position dimensions = number of variables in the problem
 Cost/fitness = physical link between the optimization engine and the
  actual problem; the cost function is to be minimized or maximized
 Number of particles = 2 to 3 times the number of variables

Particle Swarm Optimization:

PSO emulates the swarm behavior of insects, herding animals, flocking
birds and schooling fish, where these swarms search for food in a
collaborative manner.

In PSO, a member of the swarm, called a particle, represents a potential
solution, which is a point in the search space.

The global optimum is regarded as the location of food.

Each particle has a fitness value and a velocity with which it adjusts
its flying direction according to the best experiences of the swarm,
searching for the global optimum in the D-dimensional solution space.

The PSO algorithm is easy to implement, and it is simpler and faster than GA.
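
For reference, a minimal MATLAB sketch of the standard PSO velocity and
position update (the textbook form with inertia weight w and acceleration
coefficients c1, c2; these equations and parameter values are not from
the slides):

% Standard PSO update for a single particle (textbook form).
D = 3; w = 0.7; c1 = 1.5; c2 = 1.5;    % typical parameter choices
pos = rand(1, D); vel = zeros(1, D);   % one particle, for illustration
pbest = pos; gbest = pos;              % best positions seen so far
vel = w*vel ...
    + c1*rand(1,D).*(pbest - pos) ...  % cognitive pull toward own best
    + c2*rand(1,D).*(gbest - pos);     % social pull toward swarm best
pos = pos + vel;                       % move the particle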
What is QPSO?

 The QPSO algorithm proposed in (Mikki and Kishk, 2006) is a novel
  optimization algorithm based on the fundamental theory of particle
  swarms and on properties of quantum mechanics.

 Unlike standard PSO, there is no velocity vector in QPSO; a particle is
  represented only by its position vector.

 In the quantum world, according to the uncertainty principle, the
  trajectory becomes redundant because the position and velocity of a
  particle cannot be determined simultaneously.
Different steps of QPSO:

Step 1: Initialize the positions of all particles (potential solutions)
in the population randomly between the maximum and minimum operating
limits of the search range in the D-dimensional space.

Step 2: Evaluate the fitness value of all particles.

Step 3: Compare the personal best (pbest) of every particle with its
current fitness value. If the current fitness value is better, assign it
to pbest and assign the current coordinates to the pbest coordinates.

Step 4: Determine the current best fitness value in the whole population
and its coordinates. If the current best fitness value is better than the
global best (gbest), assign it to gbest and assign the current
coordinates to the gbest coordinates.

Step 5: Determine the mean best position of all M particles using:

mbest = (1/M) * Σ(i=1..M) [ rand(1,D).*pbest_i + (1 - rand(1,D)).*gbest ]
Step 6: Determine the local focus (attractor) of each particle using:

p_id(t) = ( r1_id(t)*pbest_id + r2_id(t)*gbest_d ) / ( r1_id(t) + r2_id(t) )

Step 7: Update the position X_id of the d-th dimension of the i-th
particle using:

X_id(t) = p_id(t) + (-1)^ceil(0.5 + r3_id(t)) * β * |mbest_d - X_id(t-1)| * ln(1/r4_id(t))

If X_id(t) < Xmin_d or X_id(t) > Xmax_d, the coordinate is re-placed
randomly inside the search range:

X_id(t) = Xmin_d + r5_id(t) * (Xmax_d - Xmin_d)

where t is the current generation and r1, r2, r3, r4 and r5 are uniform
random numbers between 0 and 1.
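
A vectorized MATLAB sketch of Steps 6 and 7 for one generation (pos,
pbest, gbest and mbest are given placeholder values here; in the full
algorithm they come from Steps 1-5):

% Steps 6-7: local focus, quantum position update, and bound handling.
ps = 10; D = 3; beta = 0.75;                 % contraction-expansion coefficient
VRmin = repmat(-5, ps, D); VRmax = repmat(5, ps, D);
pos = VRmin + (VRmax - VRmin).*rand(ps, D);  % current positions (placeholder)
pbest = pos; gbest = pos(1, :);              % placeholder best positions
mbest = mean(pbest, 1);                      % placeholder mean best position
r1 = rand(ps, D); r2 = rand(ps, D);
P = (r1.*pbest + r2.*repmat(gbest, ps, 1)) ./ (r1 + r2);  % Step 6: local focus
r3 = rand(ps, D); r4 = rand(ps, D);
sgn = (-1).^ceil(0.5 + r3);                  % random +/-1 sign per coordinate
pos = P + sgn.*beta.*abs(repmat(mbest, ps, 1) - pos).*log(1./r4);   % Step 7
out = pos < VRmin | pos > VRmax;             % coordinates outside the range
r5 = rand(ps, D);
pos(out) = VRmin(out) + r5(out).*(VRmax(out) - VRmin(out));  % random re-placement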
Step 8: Repeat Steps 2-7 until a stopping criterion is satisfied; usually
the algorithm stops when there is no further update of the best fitness
value or when the maximum number of iterations is reached. To avoid
premature convergence, the mean best position (mbest) is regarded as the
barycenter of all particles, and β is the contraction-expansion
coefficient that can be adjusted to control the convergence speed of the
particles. The value of β used here is 0.75.
Flow chart of QPSO algorithm:

Start → Initialize the particles → Compute the fitness of each particle →
Update pbest → Update gbest → Calculate mbest → Update the position of
each particle → Converged? If not, set Gen = Gen + 1 and repeat from the
fitness computation; if yes, Stop.
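
Putting the pieces together, the following compact MATLAB sketch
implements the flow chart end to end for the sphere function used earlier
(a minimal illustration, not the author's exact code):

% Minimal QPSO for the sphere function, following the flow chart above.
fit = @(pos) sum(pos.^2, 2);                 % fitness of every particle (ps-by-1)
ps = 10; D = 3; maxgen = 100; beta = 0.75;
VRmin = repmat(-5, ps, D); VRmax = repmat(5, ps, D);
pos = VRmin + (VRmax - VRmin).*rand(ps, D);  % Step 1: initialize positions
pbest = pos; pbestval = fit(pos);
[gbestval, idx] = min(pbestval); gbest = pbest(idx, :);
for gen = 1:maxgen
    Y = fit(pos);                            % Step 2: evaluate fitness
    better = Y < pbestval;                   % Step 3: update pbest
    pbest(better, :) = pos(better, :); pbestval(better) = Y(better);
    [val, idx] = min(pbestval);              % Step 4: update gbest
    if val < gbestval, gbestval = val; gbest = pbest(idx, :); end
    r = rand(ps, D);                         % Step 5: mean best position
    mbest = mean(r.*pbest + (1 - r).*repmat(gbest, ps, 1), 1);
    r1 = rand(ps, D); r2 = rand(ps, D);      % Step 6: local focus
    P = (r1.*pbest + r2.*repmat(gbest, ps, 1)) ./ (r1 + r2);
    r3 = rand(ps, D); r4 = rand(ps, D);      % Step 7: position update
    pos = P + (-1).^ceil(0.5 + r3).*beta ...
            .*abs(repmat(mbest, ps, 1) - pos).*log(1./r4);
    out = pos < VRmin | pos > VRmax;         % re-place out-of-range coordinates
    r5 = rand(ps, D);
    pos(out) = VRmin(out) + r5(out).*(VRmax(out) - VRmin(out));
end
gbestval    % best fitness found; approaches 0 for the sphere function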


Thank You All
