
2012 American Control Conference

Fairmont Queen Elizabeth, Montréal, Canada


June 27-June 29, 2012

A Generic Particle Swarm Optimization Matlab Function


Soren Ebbesen, Pascal Kiwitz and Lino Guzzella

Abstract— Particle swarm optimization (PSO) is rapidly gaining popularity, but an official implementation of the PSO algorithm in MATLAB is yet to be released. In this paper, we present a generic particle swarm optimization MATLAB function. The syntax necessary to interface the function is practically identical to that of existing MATLAB functions such as fmincon and ga. We demonstrate our PSO function by means of two examples: the first example is an academic test problem; the second example is a simplified problem of optimizing the gear ratios in a hybrid electric drivetrain. The PSO function is available online.

I. INTRODUCTION

Particle swarm optimization is a numerical search algorithm which is used to find parameters that minimize a given objective, or fitness, function. The PSO algorithm was introduced less than two decades ago by Kennedy and Eberhart [1]. Over the years, PSO has gained significant popularity due to its simple structure and high performance. The fitness function can be non-linear and can be subjected to any number of linear and non-linear constraints. Numerous publications, such as [2]–[6] to name a few, demonstrate the merit of PSO in a diverse range of applications.

MATLAB by MathWorks, Inc. is widely used software for numerical computing. While MATLAB makes several algorithms for numerical optimization available, the PSO algorithm is yet to be included. In 2003, Birge [7] published a PSO function for MATLAB. In addition, a few other more recent implementations of the PSO algorithm are available online from the MATLAB file-exchange server¹, for example the PSO Research Toolbox by Evers. Yet, none of those were accompanied by published documentation.

The aforementioned PSO function by Birge is a simple yet capable implementation. Unfortunately, its syntax differs significantly from standard MATLAB optimization functions such as fmincon and ga. Thus, switching between the different search methods is somewhat involved. In addition, while the PSO algorithm is very well suited for parallelization, the function does not easily allow deploying the algorithm on a MATLAB Distributed Computing Server.

In this paper, we introduce a generic PSO function for MATLAB. The syntax used to interface the function is practically identical to that of fmincon and ga. In addition, it is possible to specify whether the fitness function is vectorized. In this context, vectorized means that the function accepts a vector of input values and returns a vector of corresponding output values, rather than being called with one input at a time. This can reduce the runtime significantly and enables the algorithm to run on a computer cluster. Other useful functionalities include the possibility to specify a hybrid function, such as fmincon or fminsearch, which automatically continues the optimization after the PSO algorithm terminates. Moreover, it is possible to specify user-defined output and plotting functions that are called periodically while the algorithm is running. Furthermore, our PSO function enables the user to recover a swarm from an intermediate state, rather than restarting the optimization, in case the function terminated prematurely due to some extraordinary event. Finally, for simplicity, the function and all necessary sub-functions are contained in a single file.

This paper is organized in the following way: in Section II the PSO algorithm is introduced; in Section III we present the syntax and commands necessary to interface the generic PSO MATLAB function; in Section IV we demonstrate the function using two examples; finally, in Section V we state our conclusions.

Soren Ebbesen, Pascal Kiwitz and Lino Guzzella are with the Institute for Dynamic Systems and Control, ETH Zurich, 8092 Zurich, Switzerland; [email protected], [email protected], [email protected]
¹www.mathworks.com/matlabcentral/fileexchange

978-1-4577-1096-4/12/$26.00 ©2012 AACC 1519

II. PARTICLE SWARM OPTIMIZATION

Particle swarm optimization is a stochastic search method inspired by the coordinated motion of animals living in groups. The change in direction and velocity of each individual particle is the effect of cognitive, social and stochastic influences. The common goal of all group members is to find the most favorable location within a specified search space. Mathematically speaking, particle swarm optimization can be used to solve optimization problems of the form

min_x : f(x)    (1)

subject to :  A · x ≤ b
              Aeq · x = beq
              c(x) ≤ 0    (2)
              ceq(x) = 0
              lb ≤ x ≤ ub

where x, b, beq, lb, and ub are vectors, and A and Aeq are matrices. The functions f(x), c(x), and ceq(x) can be nonlinear functions. The fitness function f(x) quantifies the performance of x.
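The constraint set (2) mixes linear and nonlinear, equality and inequality, and bound constraints. As an illustration of this problem form only (a Python sketch for readers outside MATLAB, not part of the paper's code; all names are ours), a candidate x can be checked for feasibility as follows:

```python
def is_feasible(x, A=None, b=None, Aeq=None, beq=None,
                c=None, ceq=None, lb=None, ub=None, tol=1e-9):
    """Check a candidate x against constraints of the form (2).

    A, b / Aeq, beq : linear inequality / equality constraints (lists of rows)
    c, ceq          : callables returning lists of nonlinear constraint values
    lb, ub          : element-wise bounds on x
    """
    def dot(row, x):
        return sum(a * xi for a, xi in zip(row, x))

    if A is not None and any(dot(row, x) > bi + tol for row, bi in zip(A, b)):
        return False                       # A*x <= b violated
    if Aeq is not None and any(abs(dot(row, x) - bi) > tol
                               for row, bi in zip(Aeq, beq)):
        return False                       # Aeq*x = beq violated
    if c is not None and any(ci > tol for ci in c(x)):
        return False                       # c(x) <= 0 violated
    if ceq is not None and any(abs(ci) > tol for ci in ceq(x)):
        return False                       # ceq(x) = 0 violated
    if lb is not None and any(xi < li - tol for xi, li in zip(x, lb)):
        return False                       # lower bound violated
    if ub is not None and any(xi > ui + tol for xi, ui in zip(x, ub)):
        return False                       # upper bound violated
    return True
```

For instance, with the single linear constraint x1 ≤ x2 (A = [[1, -1]], b = [0]), the point (0.5, 1) is feasible while (1, 0.5) is not.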


A. Algorithm

Over the years, several modifications to the original PSO algorithm have been suggested. We adopted the following intuitive formulation

v_i^{k+1} = φ^k · v_i^k + α1 γ1,i [P_i − x_i^k] + α2 γ2,i [G − x_i^k]    (3)

x_i^{k+1} = x_i^k + v_i^{k+1}.    (4)

The vectors x_i^k and v_i^k are the current position and velocity of the i-th particle in the k-th generation. The swarm consists of N particles, i.e., i = {1, . . . , N}. Furthermore, P_i is the personal best position of each individual and G is the global best position observed among all particles up to the current generation. The parameters γ1,2 ∈ [0, 1] are uniformly distributed random values and α1,2 are acceleration constants. The function φ is the particle inertia, which gives rise to a certain momentum of the particles.

Figure 1 shows a graphical interpretation of the PSO algorithm in a two-dimensional space: the new velocity v_i^{k+1} is the sum of a momentum that tends to keep the particle on its current path; an attraction towards its personal best position P_i; and, finally, an attraction towards the global best position of all group members G. Finally, the new position x_i^{k+1} is the sum of the current position x_i^k and the velocity v_i^{k+1}.

Fig. 1. Graphical interpretation of the PSO algorithm.

B. Stability

Necessary and sufficient conditions for stability of the swarm were derived in [8]. The conditions are

α1 + α2 < 4    (5)

and

(α1 + α2)/2 − 1 < φ < 1.    (6)

These conditions guarantee convergence to a stable equilibrium. However, there is no guarantee that the proposed solution is the global optimum.

C. Constraints

There are several ways of dealing with the constraints (2) in particle swarm optimization. Some methods are demonstrated in [9], [10]. Three distinct constraint handling methods are implemented in the current PSO function. These we refer to as penalize, absorb, and nearest.

The first method penalizes particles violating a constraint by assigning a high objective function value to the particle. This value must be higher than the highest value attainable within the feasible region of the search space. Consequently, particles are free to move across the constraints, yet they remain attracted to the feasible region of the search space, which they will eventually re-enter.

The second method absorbs the particles on the boundary of the feasible region, defined by the constraints, if the particle would otherwise move across it. In contrast to the former method, the fitness value of the particles is evaluated. This method may yield better results than penalizing, assuming the global minimum is located on or near a constraint.

The last method retracts the particle from beyond the violated constraints and places the particle on the nearest constraint. Identically to absorb, the fitness function is evaluated.

Figure 2 illustrates all three methods. The set Ω is the feasible region of the search space. The position x^{k+1} and velocity v^{k+1} are the final position and velocity according to the PSO algorithm, respectively. The white circles indicate the final position after the chosen constraint handling method has been applied. The velocity of the particle is altered accordingly.

Fig. 2. Graphical interpretation of the constraint handling methods.
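The core of Section II can be summarized in a compact sketch. The following Python code (an illustrative re-implementation using the paper's symbols, not the authors' MATLAB function) performs one generation of the update (3)–(4) and checks the stability conditions (5)–(6):

```python
import random

def is_stable(alpha1, alpha2, phi):
    """Stability conditions (5) and (6) from [8]."""
    return alpha1 + alpha2 < 4 and (alpha1 + alpha2) / 2 - 1 < phi < 1

def pso_step(x, v, P, G, phi, alpha1=1.5, alpha2=2.0, rnd=random.random):
    """One PSO generation, Eqs. (3) and (4).

    x, v : lists of particle positions / velocities (each a list of floats)
    P    : personal best position of each particle
    G    : global best position observed so far
    phi  : particle inertia for this generation
    """
    x_new, v_new = [], []
    for xi, vi, Pi in zip(x, v, P):
        vi_next = [phi * vij
                   + alpha1 * rnd() * (Pij - xij)   # cognitive term, gamma1 ~ U[0,1]
                   + alpha2 * rnd() * (Gj - xij)    # social term,    gamma2 ~ U[0,1]
                   for xij, vij, Pij, Gj in zip(xi, vi, Pi, G)]
        x_new.append([xij + vij for xij, vij in zip(xi, vi_next)])  # Eq. (4)
        v_new.append(vi_next)
    return x_new, v_new
```

With α1 = 1.5 and α2 = 2.0 (the social attraction larger than the cognitive, as recommended in Section III-A), condition (5) gives 3.5 < 4 and condition (6) requires an inertia φ in (0.75, 1).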

III. GENERIC PSO MATLAB FUNCTION

The generic pso MATLAB function presented herein is an implementation of the PSO algorithm introduced in Section II. It can be used to solve optimization problems of the form (1) and (2). In this section, the syntax and commands needed to solve such problems using the pso function are demonstrated. The syntax and commands of the pso function are largely identical to those of the MATLAB genetic algorithm function ga. Thus, pso can be applied with little effort, in particular if a ga-based optimization routine is already in place. For the same reason, this section is limited to explaining those points that are unique to the pso function. For details on shared functionality, we recommend the official documentation of ga, which is openly available at http://www.mathworks.com.

The pso function is normally called using the following syntax

>> [x,fval,exitflag,output] = pso(fitnessfcn,...
       nvars,A,b,Aeq,Beq,lb,ub,nonlcon,options)

Users familiar with ga or fmincon will recognize this syntax. In fact, the only differences in syntax between those functions and pso are in the options structure. For convenience, the role of each input and output argument is summarized in Tables I and II, respectively. Alternatively, pso may be called using the syntax

>> [x,fval,exitflag,output] = pso(problem)

where problem is a structure containing the input arguments of Table I.

TABLE I
INPUT ARGUMENTS

fitnessfcn:  Function handle of fitness function
nvars:       Number of design variables
Aineq:       A matrix for linear inequality constraints
bineq:       b vector for linear inequality constraints
Aeq:         A matrix for linear equality constraints
beq:         b vector for linear equality constraints
lb:          Lower bound on x
ub:          Upper bound on x
nonlcon:     Function handle of nonlinear constraint function
options:     Options structure created by calling pso with no inputs and a single output

TABLE II
OUTPUT ARGUMENTS

x:         Variables minimizing fitness function
fval:      The value of the fitness function at x
exitflag:  Integer identifying the reason the algorithm terminated
output:    Structure containing output from each generation and other information about the performance of the algorithm

A. Options

The options structure controls the behavior of the PSO function. The options available to pso can be inspected at any time by calling the function with neither input nor output arguments, that is

>> pso

This generates an output in the command window indicating all available options, their default values, and their class, e.g., matrix, scalar, function handle, etc.

The default options structure is generated by calling pso without input arguments but with a single output argument, that is

>> options = pso

Table III shows the available options. Notice that all but the last eight options are identical to those found in the ga options structure. The eight unique options are introduced in detail below.

TABLE III
OPTIONS STRUCTURE (options)

PopInitRange:         Range of random initial population
PopulationSize:       Number of particles in swarm
Generations:          Maximum number of generations
TimeLimit:            Maximum time before pso terminates
FitnessLimit:         Fitness value at which pso terminates
StallGenLimit:        Terminate if fitness value changes less than TolFun over StallGenLimit generations
StallTimeLimit:       Terminate if fitness value changes less than TolFun over StallTimeLimit
TolFun:               Tolerance on fitness value
TolCon:               Acceptable constraint violation
HybridFcn:            Function called after pso terminates
Display:              Display output in command window
OutputFcns:           User-specified output functions called after each generation
PlotFcns:             User-specified plot functions called after each generation
Vectorized:           Specify whether fitness function is vectorized
InitialPopulation:    Initial position of particles
InitialVelocities:    Initial velocities of particles
InitialGeneration:    Initial generation number
PopInitBest:          Initial personal best of particles
CognitiveAttraction:  Attraction towards personal best
SocialAttraction:     Attraction towards global best
VelocityLimit:        Limit absolute velocity of particles
BoundaryMethod:       Set method of enforcing constraints

The option InitialPopulation is used to specify the exact initial position of one or more particles. This is practical if knowledge of one or more possible locations of the global optimum is available. In addition, this option together with the options InitialVelocities, InitialGeneration, and PopInitBest may be used to recover the algorithm in a certain state. This is useful if the algorithm terminated prematurely due to some extraordinary event. To ensure the swarm can be recovered, the state of the swarm must be recorded after each generation. This can be done by appropriately defining an output function in which the current state is saved.

The two options CognitiveAttraction and SocialAttraction correspond to the parameters α1 and α2 of Eq. (3), respectively. Thus, they are used to specify the attraction of the particles towards their personal best position and the global best position of the entire swarm, respectively.
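The recovery workflow built around InitialPopulation requires snapshotting the swarm after every generation. A hypothetical Python analogue of such an output function (pso itself expects a MATLAB function handle; the file path and state fields here are our own illustration):

```python
import json

def make_snapshot_fcn(path):
    """Return an output function that saves the swarm state on each call.

    The returned callable mimics the pso output-function signature
    state = fcn(options, state, flag); 'state' is assumed to be a
    JSON-serializable dict holding positions, velocities, personal and
    global bests, and the generation number.
    """
    def snapshot(options, state, flag):
        # Overwrite the snapshot file at init, every generation, and at the end.
        with open(path, "w") as fh:
            json.dump(state, fh)
        return state
    return snapshot
```

After a premature termination, the saved positions, velocities, and bests can be fed back through InitialPopulation, InitialVelocities, InitialGeneration, and PopInitBest.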

A warning is issued if the values of these two constants do not respect the conditions for stability (6). In general, it is recommended to set the social attraction larger than the cognitive attraction.

The option VelocityLimit may be used to limit the absolute velocity of the particles in either direction of the search space. If no limits are imposed, a certain degree of oscillation may occur, where the particles bounce back and forth between the boundaries of the search space. This causes slow convergence. On the other hand, a too stringent velocity limit may also lead to slow convergence because the particles need more time to move across the search space.

Finally, the option BoundaryMethod specifies how particles violating one or more constraints are handled (cf. Section II-C). If set to 'penalize', then the fitness value of particles violating one or more constraints is assigned realmax (the largest positive floating-point number). In contrast, if set to 'absorb', then the particles are absorbed on the constraint rather than crossing it. The exact position of the particle will be the intersection between the constraint and the straight line between the old and new positions. The method uses a bi-section algorithm. The last method, 'nearest', places the particles on the nearest constraint. This is done internally using either linear or sequential quadratic programming, depending on the type of constraint. The latter two methods are computationally more demanding than penalizing because the position of the particle on the constraint is computed. However, they may yield better results if the global optimum is on or near a constraint.

B. Output and Plotting Functions

The options OutputFcns and PlotFcns can be cell-arrays of function-handles to user-defined functions. These functions are called once during initialization, once at the end of each generation, and once after the particle swarm algorithm terminates. The functions are called automatically from within the pso function using the syntax

state = function_handle(options,state,flag)

The input argument flag is either 'init', 'iter' or 'done', indicating the point at which the function is called. The argument state is a structure containing information about the current state of the swarm, such as generation number, particle positions, velocities, personal bests, global best, and a stop-flag. The PSO algorithm is interrupted if state.StopFlag = true. The stop-flag is false per default.

C. Particle Inertia Function

The particle inertia function φ^k of (3) is typically defined as a linearly decreasing function of k, that is

φ^k = ((φb − φa)/K) · (k − 1) + φa    for k = 1, . . . , K    (7)

where K is the maximum number of generations, defined by the option Generations. The parameters φa and φb are defined to comply with the stability condition (6), that is

φa = 1 − ε    and    φb = (α1 + α2)/2 − 1 + ε    (8)

with ε ≪ 1. A linearly decreasing particle inertia improves initial exploration of the search space, but finally ensures stronger attraction towards the personal and global best positions as the generation count increases.

IV. EXAMPLES

In this section, the functionality of pso is demonstrated by two different examples. The first example is the Ackley problem, which is a common algebraic test problem. The second example demonstrates pso applied to a simplified automotive engineering problem of finding fuel-optimal gear ratios of a manual transmission for a hybrid electric drivetrain.

A. Ackley Problem

The fitness function to be minimized takes the following form

f(x) = 20 + e − 20 · exp(−0.2 · sqrt((1/n) · Σ_{i=1}^{n} x_i²))
            − exp((1/n) · Σ_{i=1}^{n} cos(2π · x_i)).    (9)

The global minimum of this function is the origin, unless the origin is excluded by constraints. This is true regardless of the dimension of x, i.e., the value of n ∈ N. For this example, we chose n = 2 to be able to visualize the problem. We implemented the fitness function in the following way

function f = ackley(x)

% Dimension
n = size(x,2);

% Ackley function
f = 20 + exp(1) ...
    - 20*exp(-0.2*sqrt((1/n).*sum(x.^2,2))) ...
    - exp((1/n).*sum(cos(2*pi*x),2));

Moreover, we impose the following constraints

x1 ≤ x2    (10)

x1² ≤ 4 · x2    (11)

−2 ≤ [x1, x2]^T ≤ 2    (12)

Note that the analytical solution to this constrained problem remains in the origin. The corresponding optimal fitness value is equal to zero. The non-linear inequality constraint (11) was implemented in the following way

function [c,ceq] = mynonlcon(x)

% Non-linear inequality constraints
c(1) = x(:,1).^2 - 4*x(:,2);

% Non-linear equality constraints
ceq = [];

Figure 3 shows the fitness function (9) including the constraints (10) to (12). The shaded area Ω indicates the feasible region of the search space. Notice the existence of several local minima.
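The MATLAB implementation above is vectorized: each row of x is one candidate, which is exactly what the Vectorized option exploits. The same convention in Python (an illustrative translation of Eq. (9), not part of the paper's code) also lets us verify the stated minimum of zero at the origin:

```python
import math

def ackley(X):
    """Vectorized Ackley function (9).

    X is a list of candidates, each a list of n coordinates;
    returns one fitness value per candidate.
    """
    out = []
    for x in X:
        n = len(x)
        term1 = -20.0 * math.exp(-0.2 * math.sqrt(sum(xi * xi for xi in x) / n))
        term2 = -math.exp(sum(math.cos(2.0 * math.pi * xi) for xi in x) / n)
        out.append(20.0 + math.e + term1 + term2)
    return out
```

A call such as ackley([[0.0, 0.0], [1.0, 1.0]]) evaluates both candidates at once; the first value is zero up to round-off, since the exponential terms reduce to −20 and −e at the origin.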
Fig. 3. The fitness function of the Ackley problem including linear and non-linear constraints.

The Ackley problem can be set up and solved by means of the pso function using the syntax and commands given below.

% EXAMPLE 1: ACKLEY PROBLEM

% Options
options = pso;
options.PopulationSize = 24;
options.Vectorized = 'on';
options.BoundaryMethod = 'absorb';
options.Display = 'off';

% Problem
problem = struct;
problem.fitnessfcn = @ackley;
problem.nvars = 2;
problem.Aineq = [ 1 -1];
problem.bineq = 0;
problem.lb = [-2 -2];
problem.ub = [ 2 2];
problem.nonlcon = @mynonlcon;
problem.options = options;

% Optimize
[x,fval,exitflag,output] = pso(problem)

Executing the code above gave rise to the following output:

x = 1.0e-13 *

    0.1166   -0.1011

fval = 4.1744e-14

exitflag = 1

output = problemtype: 'nonlinerconstr'
           generations: 61
             funccount: 1464
               message: [1x173 char]
         maxconstraint: 3.7907e-12

The exitflag and output.message indicate that the average cumulative change in value of the fitness function over options.StallGenLimit generations was less than options.TolFun and the constraint violation was less than options.TolCon. The vanishing value of output.maxconstraint confirms that at least one constraint is active in the solution. Finally, the values of x and fval show that we indeed found the analytical solution, albeit with some insignificant numerical error.

B. Hybrid Vehicle Gear Ratio Optimization

In this example, we demonstrate a proficient interaction between the pso function and a generic dynamic programming MATLAB function by Sundström [11], namely the dpm function. The system under investigation is a hybrid electric vehicle comprising a manual transmission with six gears. The optimization problem is to find the six gear ratios that minimize the fuel consumed by the engine over a given driving cycle (JN1015). The vehicle is described using a discrete-time quasi-static model [12]. The control input u_j is the torque-split factor determining how the traction torque, dictated by the driving cycle, is distributed between the engine and the electric motor. Subscript j = 0, . . . , N denotes the discrete time steps of the driving cycle.

The optimal values of the six gear ratios x = [x1, x2, x3, x4, x5, x6]^T are influenced by the chosen control strategy and vice versa. Thus, to find the gear ratios with the largest possible potential of reducing fuel consumption, we only consider the globally optimal strategy u*. This strategy is unique to every possible x. Thus, two optimization problems exist: one problem is the dynamic optimization problem of finding u* given x. This problem is solved by means of dynamic programming (DP) using the dpm function. The other problem is the static optimization problem of finding the optimal gear ratios x* given u*. This problem is solved by means of particle swarm optimization using the pso function. Figure 4 illustrates how these two optimization problems are combined. The fitness function f(x^k | u*) is the optimal fuel consumption (L/100km) with respect to the six gear ratios x given the optimal control strategy u*. The variable k is the generation number.

Fig. 4. Optimization of gearbox ratios using a compound of dynamic programming and particle swarm optimization.

The static optimization problem solved by PSO takes the following form

min_x : f(x | u*) = Σ_{j=0}^{N} Δm_f(x | u*_j)    (13)

subject to : x_{i+1} ≤ x_i    for i = {1, . . . , 5}    (14)
             x_min ≤ x ≤ x_max    (15)

where the constraints (14) ensure that the gear ratios decrease monotonically. The bounds (15) ensure the solution remains within reasonable values.
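The coupling of Fig. 4 is a bi-level optimization: the outer PSO evaluates a fitness that itself solves the inner DP problem (13). A schematic Python sketch of this pattern (the inner solver here is a stand-in for dpm, and the memoization is our addition, not a feature claimed by the paper):

```python
def make_bilevel_fitness(inner_opt):
    """Wrap an inner optimizer so the outer PSO sees a plain fitness f(x | u*).

    inner_opt(x) is assumed to return the optimal cost of the inner
    (dynamic programming) problem for fixed design parameters x, i.e.
    the fuel consumption under the globally optimal control u*.
    """
    cache = {}
    def fitness(x):
        key = tuple(x)
        if key not in cache:          # inner DP runs are expensive; memoize
            cache[key] = inner_opt(x)
        return cache[key]
    return fitness
```

With a toy stand-in for the DP solver, e.g. inner_opt = lambda x: sum(xi**2 for xi in x), repeated queries for the same gear-ratio vector are served from the cache, which matters because every fitness evaluation otherwise triggers a full DP sweep over the driving cycle.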

The problem can be set up and solved using the syntax and commands given below.

% EXAMPLE 2: GEAR RATIO OPTIMIZATION

% Options
options = pso;
options.PopulationSize = 24;
options.PlotFcns = @psoplotbestf;
options.Display = 'iter';
options.Vectorized = 'off';
options.TolFun = 1e-6;
options.StallGenLimit = 50;

% Problem
problem = struct;
problem.fitnessfcn = @hev_main;
problem.nvars = 6;
problem.Aineq = [-1  1  0  0  0  0
                  0 -1  1  0  0  0
                  0  0 -1  1  0  0
                  0  0  0 -1  1  0
                  0  0  0  0 -1  1];
problem.bineq = zeros(size(problem.Aineq,1),1);
problem.lb = [13.5,  7.6, 5.0, 3.9, 3.0, 2.8];
problem.ub = [20.4, 11.5, 7.6, 5.5, 4.4, 4.2];
problem.options = options;

% Optimize
[x,fval,exitflag,output] = pso(problem);

The fitness function hev_main comprises the dynamic optimization problem, i.e., the DP algorithm and the vehicle model, and returns the minimum fuel consumption (13) given x. Executing the code above returned the following output:

x = 16.4048  7.6920  5.9285  3.7483  3.6259  3.5890

fval = 3.6764

exitflag = 1

output = problemtype: 'linearconstraints'
           generations: 95
             funccount: 2280
               message: [1x173 char]
         maxconstraint: -0.0085

The fuel consumption corresponding to the solution x was 3.6764 L/100km. The less-than-zero value of output.maxconstraint indicates that the solution is strictly within the interior of the search space and not on (or beyond) the boundaries. Figure 5 shows the output of the plotting function psoplotbestf. The mean value of the swarm is close to the global best value. This observation raises confidence that all particles have converged to approximately the same solution.

Fig. 5. The global best fitness value and the mean of all particles over generation number.

V. CONCLUSION

In this paper, we introduced a generic PSO function for MATLAB. The function uses practically the same syntax as common MATLAB functions such as fmincon and ga. Thus, the learning curve is flat for users already familiar with the syntax of those functions. In addition, the pso function can be substituted into existing fmincon- or ga-based optimization frameworks with little effort. The pso function and the sample functions presented herein, including the plotting function psoplotbestf, can be downloaded at http://www.idsc.ethz.ch/Downloads.

VI. ACKNOWLEDGMENTS

We would like to thank Daimler AG for having supported this project. We also thank our colleagues for testing the pso function and providing useful feedback.

REFERENCES

[1] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Neural Networks, IEEE International Conference on, vol. 4, Nov./Dec. 1995, pp. 1942–1948.
[2] J. Duro and J. de Oliveira, "Particle swarm optimization applied to the chess game," in Evolutionary Computation, 2008. CEC 2008 (IEEE World Congress on Computational Intelligence), IEEE Congress on, June 2008, pp. 3702–3709.
[3] P. Faria, Z. Vale, J. Soares, and J. Ferreira, "Particle swarm optimization applied to integrated demand response resources scheduling," in Computational Intelligence Applications in Smart Grid (CIASG), 2011 IEEE Symposium on, April 2011, pp. 1–8.
[4] G. Lambert-Torres, H. Martins, M. Coutinho, C. Salomon, and F. Vieira, "Particle swarm optimization applied to system restoration," in PowerTech, 2009 IEEE Bucharest, 2009, pp. 1–6.
[5] M. Lanza, J. R. Perez, and J. Basterrechea, "Particle swarm optimization applied to planar arrays synthesis using subarrays," in Antennas and Propagation (EuCAP), 2010 Proceedings of the Fourth European Conference on, 2010, pp. 1–5.
[6] I. Oumarou, D. Jiang, and C. Yijia, "Particle swarm optimization applied to optimal power flow solution," in Natural Computation, 2009. ICNC '09. Fifth International Conference on, vol. 3, August 2009, pp. 284–288.
[7] B. Birge, "PSOt - a particle swarm optimization toolbox for use with Matlab," in Swarm Intelligence Symposium 2003, Proceedings of the IEEE, 2003, pp. 182–186.
[8] R. Perez and K. Behdinan, "Particle swarm approach for structural design optimization," Computers & Structures, vol. 85, no. 19-20, pp. 1579–1588, 2007.
[9] G. Pulido and C. Coello, "A constraint-handling mechanism for particle swarm optimization," in Evolutionary Computation, 2004. CEC2004. Congress on, vol. 2, 2004, pp. 1396–1403.
[10] C. Coello, G. Pulido, and M. Lechuga, "Handling multiple objectives with particle swarm optimization," Evolutionary Computation, IEEE Transactions on, vol. 8, no. 3, pp. 256–279, June 2004.
[11] O. Sundström and L. Guzzella, "A generic dynamic programming Matlab function," in Control Applications (CCA) & Intelligent Control (ISIC), 2009 IEEE, July 2009, pp. 1625–1630.
[12] L. Guzzella and A. Sciarretta, Vehicle Propulsion Systems: Introduction to Modeling and Optimization. Springer, 2005.

