Article
Effects of Random Values for Particle Swarm
Optimization Algorithm
Hou-Ping Dai 1,2, Dong-Dong Chen 1,3,* and Zhou-Shun Zheng 1,*
1 School of Mathematics and Statistics, Central South University, Changsha 410083, China;
[email protected]
2 School of Mathematics and Statistics, Jishou University, Jishou 416000, China
3 State Key Laboratory of High Performance Complex Manufacturing, Changsha 410083, China
* Correspondence: [email protected] or [email protected] (D.-D.C.); [email protected] (Z.-S.Z.);
Tel.: +86-0183-9099-9143 (D.-D.C.); +86-0137-8713-6098 (Z.-S.Z.)
Abstract: The particle swarm optimization (PSO) algorithm is generally improved by adaptively adjusting
the inertia weight or by combining it with other evolutionary algorithms. However, in most modified PSO
algorithms, the random values are always generated by uniform distribution in the range of [0, 1].
In this study, the random values, which are generated by uniform distribution in the ranges of [0, 1]
and [−1, 1], and Gauss distribution with mean 0 and variance 1 (U [0, 1], U [−1, 1] and G (0, 1)), are
respectively used in the standard PSO and linear decreasing inertia weight (LDIW) PSO algorithms.
For comparison, the deterministic PSO algorithm, in which the random values are set as 0.5, is also
investigated in this study. Some benchmark functions and the pressure vessel design problem are
selected to test these algorithms with different types of random values in three space dimensions
(10, 30, and 100). The experimental results show that the standard PSO and LDIW-PSO algorithms
with random values generated by U [−1, 1] or G (0, 1) are more likely to avoid falling into local optima
and quickly obtain the global optima. This is because the large-scale random values can expand the
range of particle velocity to make the particle more likely to escape from local optima and obtain
the global optima. Although the random values generated by U [−1, 1] or G (0, 1) are beneficial
to improve the global searching ability, the local searching ability for a low dimensional practical
optimization problem may be decreased due to the finite number of particles.
Keywords: particle swarm optimization algorithm; random values; uniform distribution; Gauss
distribution
1. Introduction
Based on the intelligent collective behaviors of some animals such as fish schooling and
bird flocking, the particle swarm optimization (PSO) algorithm was first introduced by Kennedy and
Eberhart [1]. This algorithm is a stochastic population-based heuristic global optimization technique,
and it has advantages of simple implementation and rapid convergence capability [2–4]. Therefore,
PSO algorithm has been widely utilized in function optimization [5], neural network training [6–9],
parameters optimization of fuzzy system [10–12], and control system [13–17], etc.
However, the PSO algorithm is easily trapped in local optima when it is used to solve
complex problems [18–31]. This disadvantage seriously limits the application of the PSO algorithm.
In order to deal with this issue, many modifications or improvements are proposed to improve
the performance of the PSO algorithm. Generally, the improved methods include changing the
parameter values [19–21], tuning the inertia weight or population size [22–25], and combining with
other evolution algorithms [26–31], etc. In recent years, PSO algorithms for dynamic optimization
problems have been developed. Multi-swarm PSO algorithms, such as multi-swarm charged/quantum
PSO [32], species-based PSO [33], clustering-based PSO [34], child and parent swarms PSO [35],
multi-strategy ensemble PSO [36], chaos mutation-based PSO [37], distributed adaptive PSO [38],
and stochastic diffusion search-aided PSO [39], have been developed to improve performance on such problems.
Furthermore, some dynamic neighborhood topology-based PSO algorithms have been developed to deal
with dynamic optimization problems [40,41]. These improvements or modifications have improved
the global optimization ability of the PSO algorithm to some extent. However, these methods cannot
effectively prevent the stagnation of optimization and premature convergence. This is because the
velocity of the particle becomes very small near a local optimum, which renders the particle unable
to escape. Therefore, it is necessary to propose an effective way to
make the particle jump out of the local optimum.
In the PSO algorithm, the velocity of particle is updated according to its previous velocity and
the distances of its current position from its own best position and the group’s best position [20].
The coefficients of the previous velocity and the two distances are the inertia weight and random values,
respectively. In previous research, a variety of inertia weight strategies were proposed and developed
to improve the performance of the PSO algorithm. However, the random values for most modified PSO
algorithms are always generated by uniform distribution in the range of [0, 1]. Obviously, the random
values represent the weights of two distances for updating the particle velocity. If the range of random
values is small, these two distances have little influence on the new particle velocity, which means
that the velocity cannot be effectively increased or changed to escape from local optima. In order to
improve the global optimization ability of the PSO algorithm, it is necessary to expand the range of
random values.
In this paper, the random values generated by different probability distributions are utilized to
investigate their effects on the PSO algorithms. In addition, the deterministic PSO algorithm, in which
the random values are set as 0.5, is investigated for comparison. The performances of PSO algorithms
with different types of random values are tested and compared by the experiments of benchmark
functions in three space dimensions. The rest of the paper is organized as follows. Section 2 presents
the standard PSO algorithm and its modification strategies. The different types of random values are
provided in Section 3. Section 4 provides the performances of PSO algorithms with different types of
random values, and the effects of random values on PSO algorithms are also analyzed in this section.
Finally, Section 5 concludes this paper.
2. Standard Particle Swarm Optimization Algorithm and Its Modification Strategies

In the standard PSO algorithm, the velocity and position of the i-th particle are updated according to,

vi(t + 1) = w·vi(t) + c1·r1·(pi − xi(t)) + c2·r2·(pg − xi(t)) (1)

xi(t + 1) = xi(t) + vi(t + 1) (2)

where w is the inertia weight and can be used to control the influence of the previous velocity on the
new one; the parameters c1 and c2 are two constants which determine the weights of pi and pg; pi
represents the best previous position of the i-th individual and pg denotes the best previous position of
all particles in the current generation; r1 and r2 represent two separately generated random values which
are uniformly distributed in the range of [0, 1]. The pseudocode of the standard particle swarm optimization
algorithm is shown in Figure 1.
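To make the update rules concrete, a minimal Python sketch of Equations (1) and (2) is given below. This is an illustration rather than the authors' implementation; the function name pso, the NumPy vectorization, and the default parameter values (which follow the settings later stated in Section 4.1) are assumptions.

import numpy as np

def pso(objective, dim, bounds, n_particles=100, n_iter=100,
        w=0.7, c1=2.0, c2=2.0, rand=None, seed=0):
    # Minimal sketch of Equations (1)-(2); rand(shape...) draws r1 and r2
    # (uniform on [0, 1] by default, as in the standard algorithm).
    rng = np.random.default_rng(seed)
    if rand is None:
        rand = lambda *shape: rng.uniform(0.0, 1.0, shape)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))   # particle positions
    v = np.zeros((n_particles, dim))              # particle velocities
    p_best = x.copy()                             # personal bests p_i
    p_val = np.array([objective(p) for p in x])
    g_best = p_best[p_val.argmin()].copy()        # global best p_g

    for _ in range(n_iter):
        r1, r2 = rand(n_particles, dim), rand(n_particles, dim)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # Eq. (1)
        x = x + v                                                    # Eq. (2)
        val = np.array([objective(p) for p in x])
        better = val < p_val                      # update personal bests
        p_best[better], p_val[better] = x[better], val[better]
        g_best = p_best[p_val.argmin()].copy()    # update global best
    return g_best, float(p_val.min())

# Example: Sphere in 30 dimensions.
# best_x, best_f = pso(lambda z: float(np.sum(z ** 2)), 30, (-100, 100))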
The linear decreasing inertia weight (LDIW) strategy can be expressed as,

w(iter) = (wmax − wmin)·(itermax − iter)/itermax + wmin (4)
where iter is the current iteration of the algorithm and itermax represents the maximum number of
iterations; wmax and wmin are the upper and lower bounds of inertia weight, and they are 0.9 and
0.4, respectively.
Based on the linear decreasing inertia weight strategy, a nonlinear decreasing strategy for the inertia
weight was proposed for the PSO algorithm [47], and it can be expressed as,
w(iter) = (wmax − wmin)·((itermax − iter)/itermax)^n + wmin (5)
where n is the nonlinear modulation index. Obviously, with n = 1, this strategy becomes the linearly
decreasing inertia weight strategy.
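Both strategies can be written as one small helper; in the sketch below the function name is hypothetical, and the defaults wmax = 0.9 and wmin = 0.4 are the values used in this paper.

def decreasing_inertia_weight(iteration, iter_max, w_max=0.9, w_min=0.4, n=1.0):
    # Equation (5); n = 1 reduces to the linear rule of Equation (4).
    return (w_max - w_min) * ((iter_max - iteration) / iter_max) ** n + w_min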
In addition, some similar methods [48–51], which use linear or nonlinear decreasing inertia
weight, have been proposed to improve the performance of the PSO algorithm.
The inertia weight can also be adaptively adjusted according to the ratio of the global best to the
average of the personal bests, expressed as [52],

w(iter) = 1.1 − pg(iter) / ((1/N)·Σ_{i=1}^{N} pi(iter)) (6)
where pi(iter) and pg(iter) represent the best previous positions of the i-th individual and of all particles,
respectively; N is the number of particles.
The inertia weight updated by the particle rank can be expressed as [53],
wi(iter) = wmin + (wmax − wmin)·ranki/N (7)
where wi(iter) is the inertia weight of the i-th particle in the current iteration; ranki represents the position
of the i-th particle when the particles are ordered based on their best fitness.
The inertia weight adjusted by the distance to the global best position can be expressed as [54],
wi = w0·(1 − disti/max_dist) (8)
where the inertia weight w0 = rand(0.5, 1); disti is the current Euclidean distance of the i-th particle
from the global best, and it can be expressed as,
" #1
D 2 2
disti = ∑ pdg − xid (9)
d =1
and max_dist is the maximum distance of a particle from the global best in the previous generation.
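The three adaptive strategies above can be sketched as follows. The helper names are hypothetical; for Equation (6) the global and average personal bests are treated as scalar values, which is one reading of the source text.

import numpy as np

def weight_from_best_ratio(gbest, pbest_values):
    # Equation (6): w = 1.1 - p_g / ((1/N) * sum of p_i).
    return 1.1 - gbest / np.mean(pbest_values)

def weight_from_rank(rank_i, n_particles, w_max=0.9, w_min=0.4):
    # Equation (7): particles ordered by best fitness; rank_i in 1..N.
    return w_min + (w_max - w_min) * rank_i / n_particles

def weight_from_distance(x_i, g_best, max_dist, w0):
    # Equations (8)-(9); the source draws w0 = rand(0.5, 1).
    dist_i = np.linalg.norm(np.asarray(g_best) - np.asarray(x_i))  # Eq. (9)
    return w0 * (1.0 - dist_i / max_dist)                          # Eq. (8)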
In recent research, PSO algorithms with the inertia weight adjusted by the average absolute value
of velocity or by the situation of the swarm have been proposed to keep the balance between local search and global
search [56–58]. In addition, the adaptive population size strategy is an effective way to improve the
accuracy and efficiency of the PSO algorithm [59–62].
It is obvious that the improvements of the PSO algorithm are generally implemented by adaptively
adjusting the inertia weight or population size. These methods can avoid falling into local optima by
adaptively updating the velocity of the particle to some extent. However, the effect of random values on
the particle velocity has never been discussed. Therefore, the PSO algorithm with different types of
random values will be studied in Section 3 in detail.
3. Particle Swarm Optimization Algorithm with Different Types of Random Values
3.1. Random Values with Uniform Distribution in the Range of [0, 1]
In the traditional PSO algorithm, the random values r1 and r2 are generated by uniform
distribution in the range of [0, 1] (U [0, 1]). As shown in Figure 2, the probability of each random value
is similar in the range.
Figure 2. Random values with uniform distribution in the range of [0, 1].
3.2. Random Values with Uniform Distribution in the Range of [−1, 1]

Figure 3 shows the distribution of random values generated by uniform distribution in the range
of [−1, 1] (U [−1, 1]).

Figure 3. Random values with uniform distribution in the range of [−1, 1].
3.3. Random Values with Gauss Distribution

Figure 4 shows the distribution of random values generated by Gauss distribution with mean 0
and variance 1 (G (0, 1)).

Figure 4. Random values with Gauss distribution.
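In code, the four types of random values compared in this study can be produced by simple NumPy generators. This is a sketch; the dictionary random_values and the use of numpy.random.Generator are assumptions, not part of the paper.

import numpy as np

rng = np.random.default_rng()

# Each entry returns an array of the requested shape, to be used as r1 or r2
# in Equation (1).
random_values = {
    "U[0,1]":  lambda *shape: rng.uniform(0.0, 1.0, shape),   # Figure 2
    "U[-1,1]": lambda *shape: rng.uniform(-1.0, 1.0, shape),  # Figure 3
    "G(0,1)":  lambda *shape: rng.normal(0.0, 1.0, shape),    # Figure 4
    "0.5":     lambda *shape: np.full(shape, 0.5),            # deterministic
}

# e.g., r1 = random_values["U[-1,1]"](100, 30)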
4. Experiments and Analysis

4.1. Experimental Setup
In order to investigate the performances of PSO algorithms with different types of random values,
some commonly used benchmark functions are adopted and shown in Table 1. The dimensions of the
search space are 10, 30 and 100 in this study. The standard PSO algorithm is selected to investigate
the effects of random values. In addition, because the linear decreasing inertia weight (LDIW) PSO
algorithm has a better global search ability in the starting phase, which helps the algorithm converge
to a promising area quickly, and a stronger local search ability in the latter phase, which yields
high-precision values, the LDIW-PSO algorithm is also utilized to study the effects of random values.
Moreover, although the effects of setting parameters on the deterministic PSO algorithm have been
studied [63], the deterministic PSO algorithm (r = 0.5) is adopted for comparison with the standard
PSO and LDIW-PSO algorithms.

To have a fair comparison, the parameter settings of all algorithms are the same. In this study,
the population size is 100, and the maximum number of function evaluations is 10,000. The parameters
c1 and c2 are both 2. For the standard PSO algorithm, the inertia weight w is 0.7. For the LDIW-PSO
algorithm, wmax and wmin are 0.9 and 0.4, respectively. In order to eliminate random discrepancy,
the results of all experiments are averaged over 30 independent runs.

4.2. Experimental Results and Comparisons

For some benchmark functions, the comparisons of the standard PSO algorithm with different types
of random values are shown in Table 2. The bold numbers indicate the best solutions for each test
function in the certain space dimension. Obviously, the performances of the deterministic PSO algorithm
(r = 0.5) are the worst for all the benchmark functions. This is because the random values are
deterministic, which decreases the diversity of particles. For the random values generated by U [0, 1],
the standard PSO algorithm can only obtain the optimal solutions of Sphere1, Sphere2, Alpine and
Moved axis parallel hyper-ellipsoid when the space dimension is 10, but this algorithm is useless for
other test functions or higher space dimensions. However, for the random values generated by U [−1, 1]
or G (0, 1), the standard PSO algorithm can obtain the optimal solutions of all test functions in every
space dimension except Rosenbrock. In the low space dimension (10), the best solution of Rosenbrock
is obtained by the standard PSO algorithm with random values distributed in U [0, 1]. However, in the
high space dimension (30 or 100), its best solution is obtained by the standard PSO algorithm with
random values generated by U [−1, 1] or G (0, 1). In addition, the best solutions of Levy and Montalvo
2 (30 dimensions) and Sinusoidal (100 dimensions) are also obtained by the standard PSO algorithm
with random values distributed in U [0, 1]. Obviously, the random values have an important effect on
the performance of the standard PSO algorithm, and its performance is highly improved when the range
of random values is expanded. This implies that the standard PSO algorithm with large-scale random
values can avoid falling into local optima and obtain the global optima.

The comparisons of the LDIW-PSO algorithm with different types of random values for benchmark
functions are shown in Table 3. Again, the bold numbers indicate the best solutions for each
test function in the certain space dimension. For all the benchmark functions, the performances
of the deterministic LDIW-PSO algorithm (r = 0.5) are also the worst. For U [−1, 1] and G (0, 1),
the performance of the LDIW-PSO algorithm is similar to that of the standard PSO algorithm. However,
for the random values distributed in U [0, 1], the performance of the LDIW-PSO algorithm is improved
compared to the standard PSO algorithm, which is also reported in some references [5,20,45,46].
It should be noted that the LDIW-PSO algorithm with random values generated by U [0, 1] can obtain
the optimal solutions of all test functions in the low space dimensions (10 and 30) except Sphere2, Rotated
Expanded Scaffer and Schwefel. In addition, for Levy and Montalvo 2, Sinusoidal and Alpine in 30
dimensions, the performance of the LDIW-PSO algorithm with random values generated by U [0, 1] is
better than that with random values generated by U [−1, 1] or G (0, 1). However, in the high space
dimension (100), the LDIW-PSO algorithm with random values generated by U [0, 1] cannot obtain the
optimal solutions of these test functions, which implies that this algorithm is useless for problems
in high dimension space. Therefore, the performance of the improved PSO (LDIW-PSO) algorithm is
also influenced by the random values, especially for solving problems in high dimension space.
Furthermore, the LDIW-PSO algorithm with a wide range of random values is more likely to
escape from local optima and obtain the global optima.
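A sketch of the comparison protocol described in Section 4.1 is given below, reusing the hypothetical pso function from Section 2 and the random_values generators from Section 3; 100 iterations of a 100-particle swarm corresponds to the stated 10,000 function evaluations.

import numpy as np

def compare_random_values(objective, dim, bounds, n_runs=30):
    # Average final best fitness over independent runs for each r1/r2
    # generator, mirroring the protocol of Section 4.1 (standard PSO only).
    results = {}
    for name, gen in random_values.items():
        finals = [pso(objective, dim, bounds, n_particles=100, n_iter=100,
                      w=0.7, c1=2.0, c2=2.0, rand=gen, seed=run)[1]
                  for run in range(n_runs)]
        results[name] = float(np.mean(finals))
    return results

# e.g., Sphere1 in 30 dimensions:
# compare_random_values(lambda z: float(np.sum(z ** 2)), 30, (-100, 100))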
Figure 5 shows the mean best fitness of the standard PSO algorithm with different types of random
values for benchmark functions. For the same type of random values, the performance of the standard
PSO algorithm improves with decreasing the space dimension, and the convergence velocity also
improves with decreasing the space dimension. For the same space dimension of each test function,
the performances of the deterministic PSO algorithm (r = 0.5) are the worst. However, the performances
of the standard PSO algorithm with random values generated by U [−1, 1] or G (0, 1) are the best for
most benchmark functions. In the low dimensions (10 and 30), the performances of the standard PSO
algorithm with random values generated by U [0, 1] and U [−1, 1] are the best for Levy and Montalvo 2
and Sinusoidal, respectively. The performance of the standard PSO algorithm with random values
distributed in U [−1, 1] is slightly worse than that of the standard PSO algorithm with random values
distributed in G (0, 1). In addition, the global optima can be obtained within 50 iterations by the
standard PSO algorithm with random values generated by U [−1, 1] or G (0, 1). This indicates that the
standard PSO algorithm with large-scale random values can more quickly obtain the global optima.
The mean best fitness of the LDIW-PSO algorithm with different types of random values for
benchmark functions is shown in Figure 6. In addition, the performance and convergence velocity of
the LDIW-PSO algorithm are all improved with decreasing the space dimension for the same random
values of each test function. When the random values are generated by U [0, 1], compared to the
standard PSO algorithm, the LDIW-PSO algorithm has better performance. In the space dimensions
10 and 30, the global optima of some test functions can be obtained by the LDIW-PSO algorithm
with random values generated by U [0, 1], but its convergence velocity is slower than that of the
LDIW-PSO algorithm with random values generated by U [−1, 1] or G (0, 1). Moreover, in the low
dimension (10 and 30), the performance of the LDIW-PSO algorithm with random values generated by
U [0, 1] is the best for Levy and Montalvo 2, Sinusoidal and Alpine. When the space dimension is 100,
the global optima cannot be obtained by the LDIW-PSO algorithm with random values distributed in
U [0, 1], but can be obtained by the LDIW-PSO algorithm with random values distributed in U [−1, 1]
or G (0, 1). This implies that the LDIW-PSO algorithm with large-scale random values can be more
likely to obtain the global optima with fewer iterations.
Table 1. Benchmark functions (fragment).

Rotated Expanded Scaffer: f9(x) = F(x1, x2) + F(x2, x3) + · · · + F(xD−1, xD) + F(xD, x1),
where F(x, y) = 0.5 + (sin²(√(x² + y²)) − 0.5)/(1 + 0.001(x² + y²))²;
range [−100, 100]^D; initialization [−100, 100]; x* = [0, · · · , 0]; f(x*) = 0.

Alpine: f(x) = Σ_{i=1}^{D} |xi·sin xi + 0.1 xi|;
range [−9, 7]^D; initialization [−7, 7]; x* = [0, · · · , 0]; f(x*) = 0.
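For reference, the two recovered benchmark functions can be written directly from their formulas (the helper names are hypothetical):

import numpy as np

def scaffer_F(x, y):
    # F(x, y) component of the Rotated Expanded Scaffer function (Table 1).
    num = np.sin(np.sqrt(x**2 + y**2))**2 - 0.5
    return 0.5 + num / (1.0 + 0.001 * (x**2 + y**2))**2

def rotated_expanded_scaffer(z):
    # f9(z) = F(z1, z2) + F(z2, z3) + ... + F(zD, z1); wraps around with %.
    return float(sum(scaffer_F(z[i], z[(i + 1) % len(z)]) for i in range(len(z))))

def alpine(z):
    # Alpine: sum of |z_i sin z_i + 0.1 z_i|; global minimum 0 at the origin.
    return float(np.sum(np.abs(z * np.sin(z) + 0.1 * z)))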
Table 2. Comparisons of standard particle swarm optimization (PSO) algorithm with different types of random values for benchmark functions.
Table 3. Comparisons of LDIW-PSO algorithm with different types of random values for benchmark functions.
Figure 5. The mean best fitness of the standard PSO algorithm with different types of random values for
benchmark functions: (a) Sphere1; (b) Sphere2; (c) Rastrigin; (d) Rosenbrock; (e) Griewank; (f) Ackley;
(g) Levy and Montalvo 2; (h) Sinusoidal; (i) Rotated Expanded Scaffer; (j) Alpine; (k) Moved axis
parallel hyper-ellipsoid; (l) Schwefel. (The solid, dash, short dash and short dash dot lines represent
the random values generated by uniform distribution in the ranges of [0, 1] and [−1, 1], Gauss
distribution, and 0.5, respectively; the black, red and blue lines represent the space dimensions 10, 30,
and 100, respectively.)
Figure 6. The mean best fitness of the LDIW-PSO algorithm with different types of random values for
benchmark functions: (a) Sphere1; (b) Sphere2; (c) Rastrigin; (d) Rosenbrock; (e) Griewank; (f) Ackley;
(g) Levy and Montalvo 2; (h) Sinusoidal; (i) Rotated Expanded Scaffer; (j) Alpine; (k) Moved axis
parallel hyper-ellipsoid; (l) Schwefel. (The solid, dash, short dash and short dash dot lines represent
the random values generated by uniform distribution in the ranges of [0, 1] and [−1, 1], Gauss
distribution, and 0.5, respectively; the black, red and blue lines represent the space dimensions 10, 30,
and 100, respectively.)
4.3. Application and Analysis

4.3.1. Application in Engineering Problem
The pressure vessel design, which was initially introduced by Sandgren [64], is a real-world
engineering problem. There are four involved variables, including the thickness (x1), the thickness of
the head (x2), the inner radius (x3), and the length of the cylindrical section of the vessel (x4). The highly
constrained problem of pressure vessel design can be expressed as,
Min: f(x) = 0.6224 x1 x3 x4 + 1.7781 x2 x3² + 3.1611 x1² x4 + 19.84 x2² x4 (10)

Subject to:
g1 = 0.0163 x3 − x1 ≤ 0,
g2 = 0.00954 x3 − x2 ≤ 0,
g3 = 1,296,000 − π x3² x4 − (4/3) π x3³ ≤ 0, (11)
g4 = x4 − 240 ≤ 0,
g5 = 1.1 − x1 ≤ 0,
g6 = 0.6 − x2 ≤ 0.
where x1 and x2 are integer multiples of 0.0625; x3 and x4 are continuous variables in the ranges of
40 ≤ x3 ≤ 80 and 20 ≤ x4 ≤ 60. In this study, the standard PSO and LDIW-PSO algorithms with
different types of random values are utilized to solve this engineering problem.

The parameters of the standard PSO and LDIW-PSO algorithms for the engineering problem are
the same as those for the benchmark functions. In order to eliminate random discrepancy, the results
are averaged over 30 independent runs. The optimization results are shown in Table 4. Obviously,
the results of all algorithms are similar. However, the performances of the LDIW-PSO algorithm with random
values generated by U [−1, 1] and G (0, 1) are slightly poorer than those of the other algorithms. This is
because the pressure vessel design is a low dimensional optimization problem. Although the random
values generated by U [−1, 1] or G (0, 1) are beneficial to improve the diversity of particles, the local
searching ability may be decreased due to the finite number of particles.
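A common way to handle the constraints (11) when applying PSO is a static penalty added to the objective (10). The sketch below is one such formulation; the penalty weight is an assumption, not taken from the paper, and x1 and x2 should be rounded to multiples of 0.0625 before evaluation.

import numpy as np

def pressure_vessel(x, penalty=1e6):
    # Objective (10) plus a static quadratic penalty for violations of (11).
    x1, x2, x3, x4 = x
    f = (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
         + 3.1611 * x1**2 * x4 + 19.84 * x2**2 * x4)           # Eq. (10)
    g = [0.0163 * x3 - x1,                                      # g1
         0.00954 * x3 - x2,                                     # g2
         1296000 - np.pi * x3**2 * x4 - (4/3) * np.pi * x3**3,  # g3
         x4 - 240,                                              # g4
         1.1 - x1,                                              # g5
         0.6 - x2]                                              # g6
    return f + penalty * sum(max(0.0, float(gi))**2 for gi in g)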
Table 4. Comparisons of standard PSO and LDIW-PSO algorithms with different types of random
values for pressure vessel design.

Type        r1, r2               U [0, 1]   0.5       U [−1, 1]   G (0, 1)
SPSO        Average solution     5975.93    5975.93   5975.93     5975.94
            Standard deviation   0.00       0.00      0.01        0.01
            The worst solution   5975.93    5975.93   5975.96     5975.99
            The best solution    5975.93    5975.93   5975.93     5975.93
LDIW-PSO    Average solution     5975.93    5975.93   6001.34     6026.76
            Standard deviation   0.00       0.00      139.18      193.41
            The worst solution   5975.93    5975.93   6738.24     6738.26
            The best solution    5975.93    5975.93   5975.93     5975.93
4.3.2. Analysis
According to the experimental results and comparisons, it can be concluded that the performances
of the standard PSO and LDIW-PSO algorithms are all highly improved by expanding the range of random
values. This is because the large-scale random values are helpful in increasing the velocity of
particles, so that the particles avoid falling into the local optima. As shown in Figure 7, in the local
optima areas (A or C), if the velocity cannot be increased or its direction cannot be changed, the particle
will gradually fall into the local optimum and cannot jump out. However, if the velocity of the particle can
be increased or changed according to a certain probability, the global optima will be obtained more
easily and quickly.
Figure 7. Schematic diagram of particles’ velocity.
For the standard PSO and LDIW-PSO algorithms, if the random values are set as 0.5 or generated
by U [0, 1], the diversity of particles is decreased, and the variation range of particle velocity is limited
to a narrow band. Therefore, the probability of escaping local optima is very small. This is because the
particle velocity gradually tends to 0 when the particle falls into local optima. In addition, the random
value is always positive, which may lead to the monotonous variation of particle velocity.
This also decreases the possibility of escaping local optima. However, the PSO algorithms with
large-scale random values (distributed in U [−1, 1] or G (0, 1)) can overcome these problems to some
extent. Furthermore, for a low dimensional practical optimization problem, the random values
generated by U [−1, 1] or G (0, 1) can improve the diversity of particles, but the local searching
ability may be decreased due to the finite number of particles. However, keeping the balance between local
search and global search is very important for the performances of these PSO algorithms. So, the PSO algorithm
with random values distributed in U [0, 1] and the deterministic PSO algorithm (r = 0.5) have better local
searching ability for some low dimensional optimization problems.
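The sign argument above can be checked with a few lines of NumPy: for a positive distance to the attractor, r drawn from U [0, 1] (or fixed at 0.5) never reverses the attraction term, while U [−1, 1] and G (0, 1) reverse it about half the time. This is a toy illustration, not an experiment from the paper.

import numpy as np

rng = np.random.default_rng(0)
n = 100_000
d = 1.0  # a fixed positive distance (p_i - x_i) toward an attractor

for name, r in [("U[0,1]",  rng.uniform(0.0, 1.0, n)),
                ("0.5",     np.full(n, 0.5)),
                ("U[-1,1]", rng.uniform(-1.0, 1.0, n)),
                ("G(0,1)",  rng.normal(0.0, 1.0, n))]:
    term = r * d  # attraction term r * (p - x); constants c1/c2 omitted
    print(f"{name:8s} mean={term.mean():+.3f}  P(term<0)={np.mean(term < 0.0):.2f}")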
5. Conclusions
In this paper, the standard PSO algorithm and one of its modifications (LDIW-PSO algorithm)
are adopted to study and analyze the influences of random values generated by uniform distribution
in the ranges of [0, 1] and [−1, 1], Gauss distribution with mean 0 and variance 1 (U [0, 1], U [−1, 1]
and G (0, 1)). In addition, the deterministic PSO algorithm, in which the random values are set as
0.5, is also investigated in this study. Some benchmark functions and the pressure vessel design
problem are utilized to test and compare the performances of two PSO algorithms with different types
of random values in three space dimensions (10, 30, and 100). The experimental results show that
the performances of the deterministic PSO algorithms are the worst. Moreover, the performances of the two
PSO algorithms with random values generated by U [−1, 1] or G (0, 1) are much better than those of
the algorithms with random values generated by U [0, 1] for most benchmark functions. In addition,
the convergence velocities of the algorithms with random values distributed in U [−1, 1] or G (0, 1) are
much faster than those of the algorithms with random values distributed in U [0, 1]. It is concluded that
the PSO algorithms with large-scale random values can effectively avoid falling into the local optima
and quickly obtain the global optima. However, for a low dimensional practical optimization problem,
the random values generated by U [−1, 1] or G (0, 1) are beneficial to improve the global searching
ability, but the local searching ability may be decreased due to the finite number of particles.
Acknowledgments: This work was supported by the National Key Research and Development Program of China
(Grant No. 2017YFB0701700), Educational Commission of Hunan Province of China (Grant No. 16c1307) and
Innovation Foundation for Postgraduate of Hunan Province of China (Grant No. CX2016B045).
Author Contributions: Hou-Ping Dai and Dong-Dong Chen conceived and designed the experiments;
Hou-Ping Dai performed the experiments; Hou-Ping Dai and Dong-Dong Chen analyzed the data;
Zhou-Shun Zheng contributed reagents/materials/analysis tools; Dong-Dong Chen wrote the paper. All authors
have read and approved the final manuscript.
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Kennedy, J.; Eberhart, R.C. Particle swarm optimization. In Proceedings of the IEEE International Conference
on Neural Networks, Perth, WA, Australia, 27 November–1 December 1995; pp. 1942–1948.
2. Wang, Y.; Li, B.; Weise, T.; Wang, J.Y.; Yuan, B.; Tian, Q.J. Self-adaptive learning based particle swarm
optimization. Inf. Sci. 2011, 180, 4515–4538. [CrossRef]
3. Liang, J.J.; Qin, A.K.; Suganthan, P.N.; Baskar, S. Comprehensive learning particle swarm optimizer for global
optimization of multimodal functions. IEEE Trans. Evol. Comput. 2006, 10, 281–295. [CrossRef]
4. Chen, D.B.; Zhao, C.X. Particle swarm optimization with adaptive population size and its application.
Appl. Soft Comput. 2009, 9, 39–48. [CrossRef]
5. Xu, G. An adaptive parameter tuning of particle swarm optimization algorithm. Appl. Math. Comput. 2013,
219, 4560–4569. [CrossRef]
6. Mirjalili, S.A.; Hashim, S.Z.M.; Sardroudi, H.M. Training feedforward neural networks using hybrid particle
swarm optimization and gravitational search algorithm. Appl. Math. Comput. 2012, 218, 11125–11137.
[CrossRef]
7. Ren, C.; An, N.; Wang, J.; Li, L.; Hu, B.; Shang, D. Optimal parameters selection for BP neural network based
on particle swarm optimization: A case study of wind speed forecasting. Knowl. Based Syst. 2014, 56, 226–239.
[CrossRef]
8. Zhang, J.R.; Zhang, J.; Lok, T.M.; Lyu, M.R. A hybrid particle swarm optimization–back-propagation
algorithm for feedforward neural network training. Appl. Math. Comput. 2007, 185, 1026–1037.
9. Das, G.; Pattnaik, P.K.; Padhy, S.K. Artificial Neural Network trained by Particle Swarm Optimization for
non-linear channel equalization. Expert Syst. Appl. 2014, 41, 3491–3496. [CrossRef]
Algorithms 2018, 11, 23 18 of 20
10. Lin, C.J.; Chen, C.H.; Lin, C.T. A hybrid of cooperative particle swarm optimization and cultural algorithm
for neural fuzzy networks and its prediction applications. IEEE Trans. Syst. Man Cybern. Part C Appl. Rev.
2009, 39, 55–68.
11. Juang, C.F.; Hsiao, C.M.; Hsu, C.H. Hierarchical cluster-based multispecies particle-swarm optimization for
fuzzy-system optimization. IEEE Trans. Fuzzy Syst. 2010, 18, 14–26. [CrossRef]
12. Kuo, R.J.; Hong, S.Y.; Huang, Y.C. Integration of particle swarm optimization-based fuzzy neural network
and artificial neural network for supplier selection. Appl. Math. Model. 2010, 34, 3976–3990. [CrossRef]
13. Tang, Y.; Ju, P.; He, H.; Qin, C.; Wu, F. Optimized control of DFIG-based wind generation using sensitivity
analysis and particle swarm optimization. IEEE Trans. Smart Grid 2013, 4, 509–520. [CrossRef]
14. Sui, X.; Tang, Y.; He, H.; Wen, J. Energy-storage-based low-frequency oscillation damping control using
particle swarm optimization and heuristic dynamic programming. IEEE Trans. Power Syst. 2014, 29,
2539–2548. [CrossRef]
15. Jiang, H.; Kwong, C.K.; Chen, Z.; Ysim, Y.C. Chaos particle swarm optimization and T–S fuzzy modeling
approaches to constrained predictive control. Expert Syst. Appl. 2012, 39, 194–201. [CrossRef]
16. Moharam, A.; El-Hosseini, M.A.; Ali, H.A. Design of optimal PID controller using hybrid differential
evolution and particle swarm optimization with an aging leader and challengers. Appl. Soft Comput. 2016,
38, 727–737. [CrossRef]
17. Arumugam, M.S.; Rao, M.V.C. On the improved performances of the particle swarm optimization algorithms
with adaptive parameters, cross-over operators and root mean square (RMS) variants for computing optimal
control of a class of hybrid systems. Appl. Soft Comput. 2008, 8, 324–336. [CrossRef]
18. Pehlivanoglu, Y.V. A new particle swarm optimization method enhanced with a periodic mutation strategy
and neural networks. IEEE Trans. Evolut. Comput. 2013, 17, 436–452. [CrossRef]
19. Ratnaweera, A.; Halgamuge, S.; Waston, H. Self-organizing hierarchical particle optimizer with time-varying
acceleration coefficients. IEEE Trans. Evol. Comput. 2004, 8, 240–255. [CrossRef]
20. Shi, Y.H.; Eberhart, R.C. A modified particle swarm optimizer. In Proceedings of the IEEE International
Conference on Computational Intelligence, Anchorage, AK, USA, 4–9 May 1998; pp. 69–73.
21. Xing, J.; Xiao, D. New Metropolis coefficients of particle swarm optimization. In Proceedings of the IEEE
Chinese Control and Decision Conference, Yantai, China, 2–4 July 2008; pp. 3518–3521.
22. Taherkhani, M.; Safabakhsh, R. A novel stability-based adaptive inertia weight for particle swarm
optimization. Appl. Soft Comput. 2016, 38, 281–295. [CrossRef]
23. Nickabadi, A.; Ebadzadeh, M.M.; Safabakhsh, R. A novel particle swarm optimization algorithm with
adaptive inertia weight. Appl. Soft Comput. 2011, 11, 3658–3670. [CrossRef]
24. Zhang, L.; Tang, Y.; Hua, C.; Guan, X. A new particle swarm optimization algorithm with adaptive inertia
weight based on Bayesian techniques. Appl. Soft Comput. 2015, 28, 138–149. [CrossRef]
25. Hu, M.; Wu, T.; Weir, J.D. An adaptive particle swarm optimization with multiple adaptive methods.
IEEE Trans. Evol. Comput. 2013, 17, 705–720. [CrossRef]
26. Shi, X.H.; Liang, Y.C.; Lee, H.P.; Lu, C.; Wang, L.M. An improved GA and a novel PSOGA-based hybrid
algorithm. Inf. Process. Lett. 2005, 93, 255–261. [CrossRef]
27. Zhang, J.; Sanderson, A.C. JADE: Adaptive differential evolution with optional external archive. IEEE Trans.
Evol. Comput. 2009, 13, 945–958. [CrossRef]
28. Mousa, A.A.; El-Shorbagy, M.A.; Abd-El-Wahed, W.F. Local search based hybrid particle swarm optimization
algorithm for multiobjective optimization. Swarm Evol. Comput. 2012, 3, 1–14. [CrossRef]
29. Liu, Y.; Niu, B.; Luo, Y. Hybrid learning particle swarm optimizer with genetic disturbance. Neurocomputing
2015, 151, 1237–1247. [CrossRef]
30. Duan, H.B.; Luo, Q.A.; Shi, Y.H.; Ma, G.J. Hybrid Particle Swarm Optimization and Genetic Algorithm for
Multi-UAV Formation Reconfiguration. IEEE Computat. Intell. Mag. 2013, 8, 16–27. [CrossRef]
31. Epitropakis, M.G.; Plagianakos, V.P.; Vrahatis, M.N. Evolving cognitive and social experience in particle
swarm optimization through differential evolution: A hybrid approach. Inf. Sci. 2012, 216, 50–92. [CrossRef]
32. Blackwell, T.; Branke, J. Multiswarms, exclusion, and anti-convergence in dynamic environments. IEEE Trans.
Evol. Comput. 2006, 10, 459–472. [CrossRef]
33. Parrott, D.; Li, X. Locating and tracking multiple dynamic optima by a particle swarm model using speciation.
IEEE Trans. Evol. Comput. 2006, 10, 440–458. [CrossRef]
Algorithms 2018, 11, 23 19 of 20
34. Li, C.; Yang, S. A clustering particle swarm optimizer for dynamic optimization. In Proceedings of the 2009
Congress on Evolutionary Computation, Trondheim, Norway, 18–21 May 2009; pp. 439–446.
35. Kamosi, M.; Hashemi, A.B.; Meybodi, M.R. A new particle swarm optimization algorithm for dynamic
environments. In Proceedings of the 2010 Congress on Swarm, Evolutionary, and Memetic Computing,
Chennai, India, 16–18 December 2010; pp. 129–138.
36. Du, W.; Li, B. Multi-strategy ensemble particle swarm optimization for dynamic optimization. Inf. Sci. 2008,
178, 3096–3109. [CrossRef]
37. Dong, D.M.; Jie, J.; Zeng, J.C.; Wang, M. Chaos-mutation-based particle swarm optimizer for dynamic
environment. In Proceedings of the 2008 Conference on Intelligent System and Knowledge Engineering,
Xiamen, China, 17–19 November 2008; pp. 1032–1037.
38. Cui, X.; Potok, T.E. Distributed adaptive particle swarm optimizer in dynamic environment. In Proceedings
of the 2007 Conference on Parallel and Distributed Processing Symposium, Rome, Italy, 26–30 March 2007;
pp. 1–7.
39. De, M.K.; Slawomir, N.J.; Mark, B. Stochastic diffusion search: Partial function evaluation in swarm
intelligence dynamic optimization. In Stigmergic Optimization; Springer: Berlin/Heidelberg, Germany,
2006; pp. 185–207.
40. Janson, S.; Middendorf, M. A hierarchical particle swarm optimizer for noisy and dynamic environments.
Genet. Program. Evol. Mach. 2006, 7, 329–354. [CrossRef]
41. Zheng, X.; Liu, H. A different topology multi-swarm PSO in dynamic environment. In Proceedings of the
2009 Conference on Medicine & Education, Jinan, China, 14–16 August 2009; pp. 790–795.
42. Shi, Y.H.; Eberhart, R.C. Parameter selection in particle swarm optimization. In Proceedings of the 7th
Annual International Conference on Evolutionary Programming, San Diego, CA, USA, 25–27 March 1998;
pp. 591–601.
43. Eberhart, R.C.; Shi, Y.H. Tracking and optimizing dynamic systems with particle swarms. In Proceedings of
the 2001 Congress on Evolutionary Computation, Seoul, Korea, 27–30 May 2001; pp. 81–86.
44. Yang, C.; Gao, W.; Liu, N.; Song, C. Low-discrepancy sequence initialized particle swarm optimization
algorithm with high-order nonlinear time-varying inertia weight. Appl. Soft Comput. 2015, 29, 386–394.
[CrossRef]
45. Shi, Y.H.; Eberhart, R.C. Empirical study of particle swarm optimization. In Proceedings of the 1999 Congress
on Evolutionary Computation, Washington, DC, USA, 6–9 July 1999; pp. 1945–1950.
46. Eberhart, R.C.; Shi, Y.H. Comparing inertia weights and constriction factors in particle swarm optimization.
In Proceedings of the IEEE Congress on Evolutionary Computation, La Jolla, CA, USA, 16–19 July 2000;
pp. 84–88.
47. Chatterjee, A.; Siarry, P. Nonlinear inertia weight variation for dynamic adaptation in particle swarm
optimization. Comput. Oper. Res. 2006, 33, 859–871. [CrossRef]
48. Feng, Y.; Teng, G.F.; Wang, A.X.; Yao, Y.M. Chaotic inertia weight in particle swarm optimization.
In Proceedings of the 2nd International Conference on Innovative Computing, Information and Control,
Kumamoto, Japan, 5–7 September 2007; p. 475.
49. Fan, S.K.S.; Chiu, Y.Y. A decreasing inertia weight particle swarm optimizer. Eng. Optimiz. 2007, 39, 203–228.
[CrossRef]
50. Jiao, B.; Lian, Z.; Gu, X. A dynamic inertia weight particle swarm optimization algorithm. Chaos Solitons
Fract. 2008, 37, 698–705. [CrossRef]
51. Lei, K.; Qiu, Y.; He, Y. A new adaptive well-chosen inertia weight strategy to automatically harmonize global
and local search ability in particle swarm optimization. In Proceedings of the 1st International Symposium
on Systems and Control in Aerospace and Astronautics, Harbin, China, 19–21 January 2006; pp. 977–980.
52. Yang, X.; Yuan, J.; Mao, H. A modified particle swarm optimizer with dynamic adaptation. Appl. Math.
Comput. 2007, 189, 1205–1213. [CrossRef]
53. Panigrahi, B.K.; Pandi, V.R.; Das, S. Adaptive particle swarm optimization approach for static and dynamic
economic load dispatch. Energ. Convers. Manag. 2008, 49, 1407–1415. [CrossRef]
54. Suresh, K.; Ghosh, S.; Kundu, D.; Sen, A.; Das, S.; Abraham, A. Inertia-adaptive particle swarm optimizer
for improved global search. In Proceedings of the Eighth International Conference on Intelligent Systems
Design and Applications, Kaohsiung, Taiwan, 26–28 November 2008; pp. 253–258.
Algorithms 2018, 11, 23 20 of 20
55. Tanweer, M.R.; Suresh, S.; Sundararajan, N. Self-regulating particle swarm optimization algorithm. Inf. Sci.
2015, 294, 182–202. [CrossRef]
56. Nakagawa, N.; Ishigame, A.; Yasuda, K. Particle swarm optimization using velocity control. IEEJ Trans.
Electr. Inf. Syst. 2009, 129, 1331–1338. [CrossRef]
57. Clerc, M.; Kennedy, J. The particle swarm: Explosion stability and convergence in a multi-dimensional
complex space. IEEE Trans. Evol. Comput. 2002, 6, 58–73. [CrossRef]
58. Iwasaki, N.; Yasuda, K.; Ueno, G. Dynamic parameter tuning of particle swarm optimization. IEEJ Trans.
Electr. Electr. 2006, 1, 353–363. [CrossRef]
59. Leong, W.F.; Yen, G.G. PSO-based multiobjective optimization with dynamic population size and adaptive
local archives. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2008, 38, 1270–1293. [CrossRef] [PubMed]
60. Rada-Vilela, J.; Johnston, M.; Zhang, M. Population statistics for particle swarm optimization:
Single-evaluation methods in noisy optimization problems. Soft Comput. 2014, 19, 1–26. [CrossRef]
61. Hsieh, S.T.; Sun, T.Y.; Liu, C.C.; Tsai, S.J. Efficient population utilization strategy for particle swarm optimizer.
IEEE Trans. Syst. Man Cybern. Part B Cybern. 2009, 39, 444–456. [CrossRef] [PubMed]
62. Ruan, Z.H.; Yuan, Y.; Chen, Q.X.; Zhang, C.X.; Shuai, Y.; Tan, H.P. A new multi-function global particle
swarm optimization. Appl. Soft Comput. 2016, 49, 279–291. [CrossRef]
63. Serani, A.; Leotardi, C.; Iemma, U.; Campana, E.F.; Fasano, G.; Diez, M. Parameter selection in synchronous
and asynchronous deterministic particle swarm optimization for ship hydrodynamics problems. Appl. Soft
Comput. 2016, 49, 313–334. [CrossRef]
64. Sandgren, E. Nonlinear integer and discrete programming in mechanical design optimization. J. Mech. Des.
ASME 1990, 112, 223–229. [CrossRef]
© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access
article distributed under the terms and conditions of the Creative Commons Attribution
(CC BY) license (http://creativecommons.org/licenses/by/4.0/).