An enhanced particle swarm optimization with levy flight for global optimization

R. Jensi, G. Wiselin Jiji
Article history:
Received 12 October 2015
Received in revised form 28 January 2016
Accepted 12 February 2016
Available online 21 February 2016

Keywords:
Particle swarm optimization
Levy flight
Nature-inspired strategy
Global optimization

Abstract

Hüseyin Haklı and Harun Uğuz (2014) proposed a novel approach for global function optimization using particle swarm optimization with levy flight (LFPSO) [Hüseyin Haklı, Harun Uğuz, A novel particle swarm optimization algorithm with levy flight, Appl. Soft Comput. 23 (2014) 333–345]. In our study, we enhance the LFPSO algorithm so that the modified algorithm (PSOLF) outperforms LFPSO and other PSO variants. The enhancement introduces a levy flight method for updating the particle velocity; after this update, the particle velocity becomes the new position of the particle. The proposed method is examined on well-known benchmark functions, and the results show that PSOLF is better than the standard PSO (SPSO), LFPSO and other PSO variants. The experimental results are also tested using Wilcoxon's rank sum test to assess the statistical significance of the differences between the methods, and the test confirms that the proposed PSOLF method is significantly better than SPSO and LFPSO. Combining levy flight with PSO yields global search competence and a high convergence rate.

© 2016 Elsevier B.V. All rights reserved.
1. Introduction

In the last few decades, many nature-inspired evolutionary algorithms have been developed for solving engineering design optimization problems which are highly nonlinear and involve many design variables and complex constraints. Owing to their global search capability and low time consumption, metaheuristic algorithms have attracted much attention nowadays for solving real-world problems. Nature-inspired algorithms [3–5] imitate the behaviors of living things in nature, so they are also called Swarm Intelligence (SI) algorithms. For example, Ant Colony Optimization (ACO) [6,9] imitates the food-searching paths of ants in nature; the Bees Algorithm [7], developed by Pham DT in 2005, imitates the food foraging behavior of honey bee colonies; and particle swarm optimization (PSO), introduced by Eberhart and Kennedy [8] in 1995, simulates the social behavior of bird flocks and fish schools.

The power of all popular metaheuristics comes from the fact that they imitate the best characteristics in nature. Two important characteristics are selection of the fittest and adaptation to the environment. These two properties translate into exploitation and exploration. Exploitation directs the search around the current best solutions and selects the best among them, while exploration gives assurance that the algorithm can explore the search space efficiently.

Among the several metaheuristic algorithms, the particle swarm optimization (PSO) algorithm has been widely used for its ease of implementation and the small number of parameters to be controlled. It has been applied to solve many scientific and real-world problems [10–17]. However, the PSO algorithm suffers from two major problems: trapping in local minima with premature convergence, and a decreased convergence rate in the later period of evolution. To overcome these problems, several variants of the particle swarm optimization algorithm were developed in the literature. Shi and Eberhart [19] introduced an effective, linearly time-varying parameter called the inertia weight into the original PSO and showed that the modified PSO finds the global optimum by making use of its global search capability. Liang et al. [18] used a new learning strategy to update a particle's velocity by utilizing all other particles' best experience; as a result, premature convergence was avoided by preserving the diversity of the swarm. Zhan et al. [20] used an orthogonal learning strategy to guide the particles to fly in better directions based on a particle's own experience and its neighborhood's experiences. Xinchao [21] introduced a new velocity updating strategy based on a perturbed gbest, which prevented the loss of swarm diversity. Wang et al. [22] utilized a special velocity update procedure to prevent the premature convergence problem. In [23], Tsoulos modified the velocity update method by adding a stopping rule, a similarity check and some local search. Ratnaweera et al. [24] proposed a novel particle swarm

∗ Corresponding author. Tel.: +91 04639235224.
E-mail addresses: r [email protected] (R. Jensi), [email protected] (G.W. Jiji).
http://dx.doi.org/10.1016/j.asoc.2016.02.018
1568-4946/© 2016 Elsevier B.V. All rights reserved.
R. Jensi, G.W. Jiji / Applied Soft Computing 43 (2016) 248–261 249
Fig. 1. Pseudocode of the PSOLF algorithm.

1: Initialize the parameters NP, c1, c2, Xmin, Xmax, Max_FEs
2: Initialize the particles with random positions within the initialization range in the problem space
3: Evaluate the fitness function value and assign X to be pbest; find gbest amongst the swarm
4: iteration = 1
5: while (stopping conditions are not met) do
6:     ω = 0.1 + 0.8 × (1 − iteration/MaxIter)
7:     for j = 1 to NP do
8:         if rand() < 0.5 then
9:             Change the velocity and position of the particle by Eqs. (12) and (13), respectively
10:        else
11:            Change the velocity and position of the particle by Eqs. (1) and (2), respectively
12:        end if
13:        Bring position values back to the boundary value when they move out of the boundary of the search space
14:        Evaluate the fitness function for the new particle X′_j
15:        if f(X′_j) < f(pbest_j) then
16:            Assign pbest_j equal to the current location of particle j in N-dimensional space
17:        end if
18:    end for
19:    Record the gbest solution
20:    iteration = iteration + 1
21: end while
22: Output the global best (gbest) solution
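The pseudocode above can be sketched end-to-end as a small, self-contained Python toy implementation (a sketch, not the authors' Matlab code: the Sphere objective, the swarm size and the acceleration coefficients c1 = c2 = 2.0 are illustrative assumptions, since Table 1's settings are not recoverable here):

```python
import math
import random

def sphere(x):
    """Toy objective: global minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def levy_step(beta=1.5):
    """Mantegna's algorithm (Eqs. (7)-(10)): step = 0.01 * u / |v|^(1/beta)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return 0.01 * random.gauss(0, sigma_u) / abs(random.gauss(0, 1)) ** (1 / beta)

def psolf(f, dim, np_, max_iter, xmin=-10.0, xmax=10.0, c1=2.0, c2=2.0):
    """Toy PSOLF run following the Fig. 1 pseudocode."""
    X = [[random.uniform(xmin, xmax) for _ in range(dim)] for _ in range(np_)]
    V = [[0.0] * dim for _ in range(np_)]
    pbest = [row[:] for row in X]
    pfit = [f(p) for p in pbest]
    g = min(range(np_), key=lambda i: pfit[i])
    gbest, gfit = pbest[g][:], pfit[g]
    for it in range(1, max_iter + 1):
        w = 0.1 + 0.8 * (1 - it / max_iter)             # line 6 of Fig. 1
        for j in range(np_):
            use_levy = random.random() < 0.5            # line 8 of Fig. 1
            for d in range(dim):
                attract = (c1 * random.random() * (pbest[j][d] - X[j][d])
                           + c2 * random.random() * (gbest[d] - X[j][d]))
                if use_levy:                             # Eqs. (12)-(13)
                    V[j][d] = w * (X[j][d] + levy_step() * X[j][d]) + attract
                    X[j][d] = V[j][d]                    # velocity becomes the position
                else:                                    # Eqs. (1)-(2)
                    V[j][d] = w * V[j][d] + attract
                    X[j][d] = X[j][d] + V[j][d]
                X[j][d] = min(max(X[j][d], xmin), xmax)  # boundary handling (line 13)
            fit = f(X[j])
            if fit < pfit[j]:                            # lines 15-16
                pfit[j], pbest[j] = fit, X[j][:]
                if fit < gfit:
                    gbest, gfit = X[j][:], fit
    return gbest, gfit
```

A seeded run such as `random.seed(1); psolf(sphere, dim=3, np_=15, max_iter=100)` shows the gbest fitness decreasing monotonically, since gbest is only replaced on improvement.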
optimization algorithm called HPSO-TVAC, in which the social and cognitive parts of the particle swarm were considered to estimate the particles' velocity, and particles are reinitialized whenever they are unable to explore the search space. Yang et al. [25] proposed a new velocity updating formula where the inertia weight is dynamically changed based on the generation and the evolution state. In [26], the authors used full information of the entire neighbourhood for guiding the particle to fly in better directions. A topological structure is used to determine particles' neighbourhoods in [27]. Liang et al. [28] utilized dynamic multiple swarms and small neighborhoods' information to diversify the swarm. A new PSO algorithm based on a fused global-local topology (FGLT-PSO) was proposed in [29]. Hüseyin Haklı and Harun Uğuz [1] proposed a novel particle swarm optimization algorithm with levy flight, in which the randomness phenomenon of levy flight and the advantages of the original PSO algorithm are utilized. Levy flight is heavily applied in various global optimization methods such as the cuckoo search algorithm [30–33], the firefly algorithm [34], the bat algorithm [35], etc. In our study, we enhance the particle swarm optimization algorithm with levy flight by using the PSOLF method to update the particles' velocity.

The rest of the paper is organized as follows. In Section 2 the simple PSO algorithm is explained, and in Section 3 Levy flight is presented. The proposed PSOLF algorithm is presented in Section 4, and the conducted experiments and results are shown in Section 5. Finally, the paper is concluded with a summary in Section 6.

2. Simple PSO algorithm

Particle swarm optimization (PSO) [8] is a population-based stochastic optimization technique for the solution of continuous optimization problems. It is inspired by social behaviors in flocks of birds and schools of fish. In PSO, a set of software agents called particles search for good solutions to a given continuous optimization problem. Each particle is a solution of the considered problem. For a d-dimensional problem, each particle i maintains two vectors:

Position vector X_i = [x_i1, x_i2, ..., x_id] and
Velocity vector V_i = [v_i1, v_i2, ..., v_id].

Each particle uses its own best experience (pbest) and the best experience of all particles (gbest) to choose how to move in the search space. For a d-dimensional search space, the pbest of particle i is represented as

pbest_i = [p_i1, p_i2, p_i3, ..., p_id]

and gbest is represented as

gbest = [g_1, g_2, g_3, ..., g_d].

In practice, in the initialization phase each particle is given a random initial position and an initial velocity. The position of the particle represents a solution of the problem and therefore has a value, given by the objective function. While moving in the search space, particles memorize the position of the best solution they have found. At each iteration of the algorithm, each particle moves with a velocity that is a weighted sum of three components: the old velocity, a velocity component that drives the particle towards the location in the search space where it previously found the best solution so far, and a velocity component that drives the particle towards the location in the search space where the neighbouring particles have found the best solution so far. The velocity and position of particle i at each iteration are calculated as follows:

V_i^(t+1) = ω × V_i^(t) + c1 × rand() ⊕ (pbest_i − X_i^(t)) + c2 × rand() ⊕ (gbest − X_i^(t))    (1)

X_i^(t+1) = X_i^(t) + V_i^(t+1)    (2)
where V_i^(t+1) is the velocity of particle i at iteration t + 1; V_i^(t) is the velocity of particle i at iteration t; X_i^(t) is the position of the ith particle at iteration t; X_i^(t+1) is the position of the ith particle at iteration t + 1; c1, the cognitive weighting factor, and c2, the social weighting factor, are acceleration coefficients; and rand() supplies the stochastic components of the algorithm.

Fig. 2. Flowchart of the PSOLF algorithm (for each particle j = 1..NP: update position and velocity by Eqs. (1)–(2) or Eqs. (12)–(13), evaluate the fitness f(X′_i) of the new particle, and update pbest if f(X′_i) < f(pbest_i); after all particles, update gbest and increment the iteration counter until the stopping condition is met, then output the solution).

3. Levy flight

Levy flight follows [1,2,30–35]; the generation of random numbers with levy flight consists of two steps: the choice of a random direction and the generation of steps which obey the chosen levy distribution. Random walks are drawn from a Levy stable distribution. This distribution is a simple power-law formula L(s) ∼ |s|^(−1−β), where 0 < β < 2 is an index.

Definition 3.1. Mathematically, a simple version of the Levy distribution can be defined as

L(s, γ, μ) = sqrt(γ/(2π)) × exp(−γ/(2(s − μ))) × 1/(s − μ)^(3/2),  if 0 < μ < s < ∞
L(s, γ, μ) = 0,  if s ≤ 0

where the parameter μ is the location or shift parameter, and the parameter γ > 0 is the scale parameter (it controls the scale of the distribution).

Definition 3.2. In general, the Levy distribution should be defined in terms of its Fourier transform:

F(k) = exp(−α|k|^β),  0 < β ≤ 2.

For a random walk, the step length S can be calculated by Mantegna's algorithm as

S = u / |v|^(1/β)    (7)

where u and v are drawn from normal distributions. That is,
Table 1
Control parameters and their values for the SPSO, LFPSO and PSOLF algorithms.

Table 2
Benchmark test functions (No.; category: * unimodal, + multimodal, # rotated; name; formula; dimensions; search range/initialization range; optimal value; optimal solution).

2 * Schwefel 2.22: f2(x) = Σ_{i=1}^{d} |x_i| + Π_{i=1}^{d} |x_i|; D = 30/50/100/500/1000; [−10, 10]^d / [−10, 5]^d; 0; {0}^d
3 * Rosenbrock: f3(x) = Σ_{i=1}^{d−1} [(1 − x_i)^2 + 100(x_{i+1} − x_i^2)^2]; D = 30/50/100/500/1000; [−10, 10]^d / [−10, 10]^d; 0; {0}^d
6 + Rastrigin: f6(x) = 10d + Σ_{i=1}^{d} [x_i^2 − 10 cos(2πx_i)]; D = 30/50/100/500/1000; [−5.12, 5.12]^d / [−5.12, 2]^d; 0; {0}^d
7 + Ackley: f7(x) = −20 exp(−0.2 √((1/d) Σ_{i=1}^{d} x_i^2)) − exp((1/d) Σ_{i=1}^{d} cos(2πx_i)) + 20 + e; D = 30/50; [−32, 32]^d / [−32, 16]^d; 0; {0}^d
8 + Griewank: f8(x) = (1/4000) Σ_{i=1}^{d} x_i^2 − Π_{i=1}^{d} cos(x_i/√i) + 1; D = 30/50/100/500/1000; [−600, 600]^d / [−600, 200]^d; 0; {0}^d
9 + Penalized 1: f9(x) = (π/n) {10 sin^2(πy_1) + Σ_{i=1}^{n−1} (y_i − 1)^2 [1 + 10 sin^2(πy_{i+1})] + (y_n − 1)^2} + Σ_{i=1}^{n} u(x_i, 10, 100, 4), where y_i = 1 + (x_i + 1)/4 and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a; D = 30/50; [−50, 50]^d / [−50, 25]^d; 0; {0}^d
10 + Penalized 2: f10(x) = 0.1 {sin^2(3πx_1) + Σ_{i=1}^{n−1} (x_i − 1)^2 [1 + sin^2(3πx_{i+1})] + (x_n − 1)^2 [1 + sin^2(2πx_n)]} + Σ_{i=1}^{n} u(x_i, 5, 100, 4); D = 30/50; [−50, 50]^d / [−50, 25]^d; 0; {0}^d
11 # Rotated Schwefel 2.26: f11(x) = 418.9828 × d − Σ_{i=1}^{d} z_i, where z_i = y_i sin(√|y_i|) if |y_i| ≤ 500 and 0 otherwise, y_i = y′_i + 420.96, y′ = M(x − 420.96), M an orthogonal matrix; D = 30/50; [−500, 500]^d / [−500, 500]^d; 0; {0}^d
12 # Rotated Rastrigin: f12(x) = Σ_{i=1}^{d} [y_i^2 − 10 cos(2πy_i) + 10], where y = M × x, M an orthogonal matrix; D = 30/50; [−5.12, 5.12]^d / [−5.12, 2]^d; 0; {0}^d
13 # Rotated Ackley: f13(x) = −20 exp(−0.2 √((1/d) Σ_{i=1}^{d} y_i^2)) − exp((1/d) Σ_{i=1}^{d} cos(2πy_i)) + 20 + e, where y = M × x, M an orthogonal matrix; D = 30/50; [−32, 32]^d / [−32, 16]^d; 0; {0}^d

Table 2 (Continued)

19 + Schaffer: f19(x) = 0.5 + (sin^2(√(Σ_{i=1}^{n} x_i^2)) − 0.5) / (1 + 0.001 Σ_{i=1}^{n} x_i^2)^2; D = 30/50/100/500/1000; [−100, 100]^d / [−100, 100]^d; 0; {0}^d
21 + Non-continuous Rastrigin: f21(x) = Σ_{i=1}^{d} [y_i^2 − 10 cos(2πy_i) + 10], where y_i = x_i if |x_i| < 1/2 and y_i = round(2x_i)/2 if |x_i| ≥ 1/2; D = 30/50/100/500/1000; [−5.12, 5.12]^d / [−5.12, 2]^d; 0; {0}^d

* Unimodal.
+ Multimodal.
# Rotated.
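To show how the formulas in Table 2 translate into code, two of the benchmark functions can be written directly from their definitions (a Python sketch; both have their global minimum 0 at the origin in the unrotated case):

```python
import math

def rastrigin(x):
    """f6: 10*d + sum(x_i^2 - 10*cos(2*pi*x_i)); minimum 0 at x = 0."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi) for xi in x)

def ackley(x):
    """f7: -20*exp(-0.2*sqrt(mean(x_i^2))) - exp(mean(cos(2*pi*x_i))) + 20 + e."""
    d = len(x)
    s1 = sum(xi * xi for xi in x) / d
    s2 = sum(math.cos(2 * math.pi * xi) for xi in x) / d
    return -20 * math.exp(-0.2 * math.sqrt(s1)) - math.exp(s2) + 20 + math.e
```

Evaluating either function at the origin for any dimension returns 0, matching the "optimal value" column of Table 2.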
Table 3
Experimental result values obtained by SPSO, LFPSO, and PSOLF for dimension 30 through 50 independent runs (opt., optimum; std. dev., standard deviation).
Function | Opt. | SPSO: Mean, Std. dev., Time (s) | LFPSO: Mean, Std. dev., Time (s) | PSOLF: Mean, Std. dev., Time (s)
f1 | 0.00E+00 | 0.00E+00, 0.00E+00, 9.39 | 0.00E+00, 0.00E+00, 5.29 | 0.00E+00, 0.00E+00, 1.35
f2 | 0.00E+00 | 0.00E+00, 0.00E+00, 10.35 | 2.00E−01, 1.41E+00, 6.21 | 0.00E+00, 0.00E+00, 2.63
f3 | 0.00E+00 | 3.55E+00, 2.41E+00, 10.88 | 2.39E+01, 2.47E−01, 7.13 | 2.69E+01, 1.01E+00, 7.79
f4 | 0.00E+00 | 3.25E−03, 1.341E−03, 11.21 | 1.20E−03, 4.9E−04, 7.91 | 2.11E−05, 1.89E−05, 7.66
f5 | 0.00E+00 | 7.02E+03, 7.19E+02, 14.04 | 1.76E+03, 9.42E+02, 7.27 | 2.06E+03, 5.67E+02, 7.78
f6 | 0.00E+00 | 4.00E+01, 1.13E+01, 13.91 | 3.15E+00, 8.80E+00, 6.24 | 0.00E+00, 0.00E+00, 0.09
f7 | 0.00E+00 | 1.30E+00, 9.35E−01, 11.02 | 1.72E−14, 3.87E−15, 7.38 | 8.88E−16, 0.00E+00, 7.82
f8 | 0.00E+00 | 1.88E−02, 2.69E−02, 10.06 | 5.41E−04, 3.83E−03, 7.34 | 0.00E+00, 0.00E+00, 0.12
f9 | 0.00E+00 | 3.16E−01, 5.04E−01, 14.27 | 0.00E+00, 0.00E+00, 9.87 | 2.08E−02, 1.8E−02, 9.57
f10 | 0.00E+00 | 1.51E−12, 7.45E−12, 15.67 | 0.00E+00, 0.00E+00, 4.24 | 2.24E−08, 6.91E−08, 10.2
f11 | 0.00E+00 | 3.72E+03, 9.83E+02, 16.88 | 3.22E+03, 9.18E+02, 11.33 | 1.64E+03, 4.12E+02, 10.4
f12 | 0.00E+00 | 6.23E+01, 5.11E+01, 15.88 | 1.36E+02, 2.02E+01, 9.29 | 0.00E+00, 0.00E+00, 0.13
f13 | 0.00E+00 | 1.63E+00, 7.73E−01, 18.23 | 1.75E−14, 4.96E−15, 11.1 | 0.00E+00, 0.00E+00, 0.22
f14 | 0.00E+00 | 1.04E−02, 1.11E−02, 13.67 | 2.86E−03, 7.01E−03, 13.5 | 0.00E+00, 0.00E+00, 0.14
f15 | 0.00E+00 | 0.00E+00, 0.00E+00, 9.69 | 0.00E+00, 0.00E+00, 6.6 | 0.00E+00, 0.00E+00, 1.39
f16 | 0.00E+00 | 4.04E+00, 1.59E+01, 4.25 | 0.00E+00, 0.00E+00, 2.55 | 0.00E+00, 0.00E+00, 0.02
f17 | 0.00E+00 | 0.00E+00, 0.00E+00, 10.91 | 0.00E+00, 0.00E+00, 7.71 | 0.00E+00, 0.00E+00, 0.76
f18 | 0.00E+00 | 1.06E+00, 9.59E−01, 14.21 | 0.00E+00, 0.00E+00, 9.17 | 9.39E−01, 5.19E−01, 9.26
f19 | 0.00E+00 | 4.21E−02, 1.34E−02, 13.83 | 1.01E−02, 4.15E−03, 7.54 | 0.00E+00, 0.00E+00, 0.47
f20 | 0.00E+00 | 1.8E−15, 1.52E−15, 14.79 | 1.78E−07, 1.16E−06, 7.06 | 0.00E+00, 0.00E+00, 2.67
f21 | 0.00E+00 | 2.31E+01, 1.32E+02, 14.69 | 1.89E+00, 5.54E+00, 6.36 | 0.00E+00, 0.00E+00, 0.12
The best mean result value and the best standard deviation value of the test functions obtained by the algorithms are marked in bold.
Table 4
Experimental result values obtained by SPSO, LFPSO, and PSOLF for dimension 50 through 50 independent runs (opt, optimum; std. dev., standard deviation).
Function | Opt. | SPSO: Mean, Std. dev., Time (s) | LFPSO: Mean, Std. dev., Time (s) | PSOLF: Mean, Std. dev., Time (s)
f1 0.00E+00 0.00E+00 0.00E+00 11.28 9.17E−17 3.2E−16 6.4 0.00E+00 0.00E+00 1.36
f2 0.00E+00 0.00E+00 0.00E+00 11.19 8.00E−01 4.44E+00 6.53 0.00E+00 0.00E+00 2.63
f3 0.00E+00 5.11E+01 3.09E+01 12.55 4.41E+01 2.82E−01 7.57 4.74E+01 9.84E−01 7.67
f4 0.00E+00 1.38E−02 7.54E−03 13.77 2.365E−03 1.01E−03 8.06 1.6E−05 1.17E−05 8.03
f5 0.00E+00 1.22E+04 1.03E+03 15.61 4.36E+03 1.27E+03 8.63 5.07E+03 9.74E+02 8.30
f6 0.00E+00 7.56E+01 1.81E+01 15.14 1.26E+01 1.61E+01 8.41 0.00E+00 0.00E+00 0.09
f7 0.00E+00 2.17E+00 6.09E−01 12.49 6.53E−10 2.46E−09 7.93 8.88E−16 0.00E+00 7.69
f8 0.00E+00 1.89E−02 2.52E−02 12.83 4.86E−03 1.36E−02 10.23 0.00E+00 0.00E+00 0.13
f9 0.00E+00 2.89E−01 5.03E−01 15.96 1.94E−17 7.21E−17 10.35 3.15E−02 1.66E−02 9.97
f10 0.00E+00 0.00E+00 0.00E+00 0.68 2.39E−04 1.69E−03 4.57 4.23E−08 5.84E−08 10.79
f11 0.00E+00 7.01E+03 1.69E+03 17.38 5.72E+03 1.64E+03 12.02 1.79E+03 4.44E+02 10.93
f12 0.00E+00 1.89E+02 1.20E+02 15.43 2.56E+02 3.44E+01 9.8 0.00E+00 0.00E+00 0.13
f13 0.00E+00 2.25E+00 .79E−01 16.87 7.97E−09 2.99E−08 11.49 0.00E+00 0.00E+00 0.23
f14 0.00E+00 5.03E−03 8.04E−03 16.59 3.74E−03 7.97E−03 13.41 0.00E+00 0.00E+00 0.15
f15 0.00E+00 0.00E+00 0.00E+00 11.35 1.96E−17 1.37E−16 6.95 0.00E+00 0.00E+00 1.39
f16 0.00E+00 9.08E+00 1.96E+01 10.27 0.00E+00 0.00E+00 3.26 0.00E+00 0.00E+00 0.02
f17 0.00E+00 0.00E+00 0.00E+00 12.81 0.00E+00 0.00E+00 8.23 0.00E+00 0.00E+00 0.81
f18 0.00E+00 2.23E+00 1.69E+00 17.48 1.07E−16 5.42E−16 10.91 2.37E+00 6.58E−01 10.9
f19 0.00E+00 1.12E−01 5.21E−02 14.56 1.83E−02 1.49E−02 7.91 0.00E+00 0.00E+00 0.59
f20 0.00E+00 3.94E−15 2.05E−15 16.08 3.2E−05 2.21E−04 7.7 0.00E+00 0.00E+00 2.7
f21 0.00E+00 4.65E+01 2.79E+01 16.22 1.74E+01 2.18E+01 8.4 0.00E+00 0.00E+00 0.12
The best mean result value and the best standard deviation value of the test functions obtained by the algorithms are marked in bold.
Table 5
Experimental result values obtained by PSOLF for dimension 100, 500, and 1000 through 10 independent runs (opt, optimum; std. dev., standard deviation).
Function | Opt. | D = 100: Mean, Std. dev., Time (s) | D = 500: Mean, Std. dev., Time (s) | D = 1000: Mean, Std. dev., Time (s)
f1 0.00E+00 0.00E+00 0.00E+00 1.51 0.00E+00 0.00E+00 2.89 0.00E+00 0.00E+00 4.71
f2 0.00E+00 0.00E+00 0.00E+00 3.21 0.00E+00 0.00E+00 7.37 0.00E+00 0.00E+00 12.82
f3 0.00E+00 9.04E+01 2.57E+01 8.82 4.61E+02 1.32E+02 16.98 9.25E+02 2.66E+02 27.82
f4 0.00E+00 2.38E−05 1.92E−05 10.12 6.77E−04 1.97E−05 25.42 1.8E−05 1.67E−05 44.89
f6 0.00E+00 0.00E+00 0.00E+00 0.11 0.00E+00 0.00E+00 0.19 0.00E+00 0.00E+00 0.31
f8 0.00E+00 0.00E+00 0.00E+00 0.13 0.00E+00 0.00E+00 0.26 0.00E+00 0.00E+00 0.42
f15 0.00E+00 0.00E+00 0.00E+00 1.48 0.00E+00 0.00E+00 2.88 0.00E+00 0.00E+00 4.87
f16 0.00E+00 0.00E+00 0.00E+00 0.03 0.00E+00 0.00E+00 0.07 0.00E+00 0.00E+00 0.11
f17 0.00E+00 0.00E+00 0.00E+00 0.97 0.00E+00 0.00E+00 2.49 0.00E+00 0.00E+00 4.49
f19 0.00E+00 0.00E+00 0.00E+00 0.46 0.00E+00 0.00E+00 2.29 0.00E+00 0.00E+00 3.73
f20 0.00E+00 0.00E+00 0.00E+00 3.24 0.00E+00 0.00E+00 7.49 0.00E+00 0.00E+00 13.19
f21 0.00E+00 0.00E+00 0.00E+00 0.14 0.00E+00 0.00E+00 0.26 0.00E+00 0.00E+00 0.41
Table 6
Comparison of PSOLF with SPSO and LFPSO for the unimodal and multimodal functions using Wilcoxon's rank sum test at α = 0.05 (D = 30, 50) from f1 to f11.

Function | SPSO vs. PSOLF: D = 30, D = 50 | LFPSO vs. PSOLF: D = 30, D = 50
u ∼ N(0, σ_u^2),  v ∼ N(0, σ_v^2),    (8)

where

σ_u = { Γ(1 + β) sin(πβ/2) / ( Γ[(1 + β)/2] × β × 2^((β−1)/2) ) }^(1/β),  σ_v = 1.    (9)

Then the step size is calculated by

step size = 0.01 × S    (10)

Here the factor 0.01 comes from the fact that L/100 should be the typical step size of walks, where L is the typical length scale; otherwise, Levy flights may become too aggressive and make new solutions jump outside of the design domain (thus wasting evaluations).

4. The proposed PSOLF algorithm

Researchers have investigated the particle swarm optimization algorithm to avoid the problems of premature convergence and trapping in local optima. Some researchers improved PSO's performance by designing different techniques for updating the velocity. In order to improve the performance of the PSO algorithm, this study aims at updating the velocity using the levy flight method. As in the original PSO, particles are initially distributed randomly within the search space, the fitness values of all particles are evaluated, and each particle's pbest as well as the swarm's gbest are found. Then, for each particle, the velocity and position are updated based on a random probability: with probability greater than or equal to 0.5 they are updated as in the original PSO by Eqs. (1) and (2), respectively, and if the random value is less than 0.5 the particle's velocity is updated as given in Eq. (12), after which the particle's velocity becomes its position. By employing the levy flight method in updating the particle's velocity, the particle takes long jumps towards its pbest and gbest, thereby enhancing the diversity of the swarm and facilitating global exploration throughout the search space. In the levy flight method, the β parameter plays a major role in the distribution; applying different values of β changes the random distribution. In our study, we choose a constant value of β = 1.5.

Loss of diversity is thus avoided by using the random phenomenon of levy flight while updating the velocity. As the performance of the PSO algorithm is enhanced by incorporating the advantages of the random walk, the particles' positions improve in each iteration through high exploration and exploitation of the search space.

In our study, ω is defined as

ω = 0.1 + 0.8 × (1 − iteration / Maxiter)    (11)

The next position of each particle, computed in Eq. (13), is based on Eq. (12), which computes the Levy flight random walk on the current position and uses the computed values of pbest (the best position found by the particle) and gbest (the best position found by the swarm):

V_i^(t+1) = ω × Levywalk(X_i^(t)) + c1 × rand() ⊕ (pbest_i − X_i^(t)) + c2 × rand() ⊕ (gbest − X_i^(t))    (12)
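The step generation of Eqs. (7)–(10) and the levy-flight velocity update of Eq. (12) can be sketched in Python as follows (a sketch, not the authors' Matlab code; the Levywalk term follows the definitions of Eqs. (14) and (15), and the values c1 = c2 = 2.0 are illustrative assumptions):

```python
import math
import random

BETA = 1.5  # the constant beta used in this study

def mantegna_sigma_u(beta=BETA):
    """Eq. (9): sigma_u for Mantegna's algorithm (sigma_v = 1)."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    return (num / den) ** (1 / beta)

def levy_step(beta=BETA):
    """Eqs. (7), (8) and (10): step size = 0.01 * u / |v|^(1/beta)."""
    u = random.gauss(0, mantegna_sigma_u(beta))
    v = random.gauss(0, 1)
    return 0.01 * u / abs(v) ** (1 / beta)

def psolf_velocity(x, pbest, gbest, w, c1=2.0, c2=2.0):
    """Eq. (12): a levy walk on the current position (Eqs. (14)-(15))
    replaces the old-velocity term of the standard PSO update."""
    vel = []
    for d in range(len(x)):
        levy_walk = x[d] + levy_step() * x[d]          # step = stepsize * x_d
        vel.append(w * levy_walk
                   + c1 * random.random() * (pbest[d] - x[d])
                   + c2 * random.random() * (gbest[d] - x[d]))
    return vel  # per Eq. (13), this vector becomes the particle's new position
```

Per Eq. (13), the returned vector is assigned directly as the new position rather than added to the old one, which is what gives the method its long exploratory jumps.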
Table 7
Comparison of PSOLF with SPSO and LFPSO for the unimodal and multimodal functions using Wilcoxon's rank sum test at α = 0.05 (D = 30, 50) from f12 to f21.

Function | SPSO vs. PSOLF: D = 30, D = 50 | LFPSO vs. PSOLF: D = 30, D = 50
X_i^(t+1) = V_i^(t+1)    (13)

where

Levywalk(X_i^(t)) = X_i^(t) + step ⊕ random(size(X_i))    (14)

and

step = stepsize ⊕ X_i^(t)    (15)

where stepsize is the value obtained from Eq. (10), and ⊕ represents element-by-element multiplication.

After updating a particle's velocity and position, its fitness value is calculated. If the fitness value of the new particle is better than its pbest fitness value, then pbest is updated; otherwise pbest is left unchanged. Then the gbest of the entire swarm is found. The same procedure is repeated until the maximum number of function evaluations is reached or the global optimum value is obtained. The pseudocode of the PSOLF algorithm is given in Fig. 1 and the flowchart is given in Fig. 2.

5. Experimental results and discussion

In this section, a detailed evaluation of the proposed PSOLF algorithm is presented. For comparison, two other algorithms are used: the standard PSO (SPSO) [36], recently developed by M. Omran, which is one of the state-of-the-art PSO variants for avoiding being trapped in local optima, and LFPSO [1], proposed by Hüseyin Haklı. Since this paper proposes an enhanced PSO algorithm with levy flight, we compare our proposed algorithm with the above two algorithms. The source code for SPSO is available at [36], and the LFPSO program code was kindly provided by the corresponding author of [1] upon request. The goal of this paper is to improve the performance of the PSO algorithm for solving both unimodal and multimodal functions.

All three algorithms (SPSO, LFPSO and the proposed PSOLF) are executed in Matlab 8.2 in a Windows OS environment on an Intel Core i3, 3.30 GHz, with 3.41 GB RAM. The population size is automatically calculated based on the dimension for SPSO; for more information, [36] can be referred to. The control parameter values for LFPSO are set as per the suggestions of the authors in [1]. The control parameter settings for the above algorithms are shown in Table 1.

5.1. Benchmark functions used

Twenty-one benchmark test functions listed in Table 2 are used for evaluating the performance of the proposed PSOLF algorithm. The selected benchmark functions fall into three categories: unimodal, multimodal and rotated multimodal functions. The first category, unimodal functions having a single optimal solution, includes seven functions, namely f1 (Sphere), f2 (Schwefel 2.22), f4 (Noise), f15 (Sum Square), f16 (Step), f17 (Quartic) and f3 (Rosenbrock), where the last is a simple unimodal function in 2-D or 3-D search spaces but can also be considered a multimodal function in high-dimensional cases. The second category, multimodal functions having two or more local optima, includes ten complex high-dimensional functions: f5 (Schwefel 2.26), f6 (Rastrigin), f7 (Ackley), f8 (Griewank), f9 (Penalized 1), f10 (Penalized 2), f18 (Levy), f19 (Schaffer), f20 (Alpine) and f21 (Non-continuous Rastrigin). The third category contains four rotated multimodal functions: f11 (Rotated Schwefel), f12 (Rotated Rastrigin), f13 (Rotated Ackley) and f14 (Rotated Griewank). Table 2 gives the global optimal solution (column 7), global optimal value (column 6), search space range and initialization range (column 5), dimension being used (column 4),
Fig. 3. Convergence graphs for (a) Sphere, (b) Schwefel 2.22, (c) Rosenbrock, (d) Noise, (e) Schwefel 2.26, (f) Rastrigin, (g) Ackley and (h) Griewank functions for D = 30. (Each panel plots the fitness value against function evaluations, FEs, up to 2 × 10^5, for SPSO, LFPSO and PSOLF.)
Fig. 4. Convergence graphs for (i) Penalized 1, (j) Penalized 2, (k) Rotated Schwefel 2.26, (l) Rotated Rastrigin, (m) Quartic, (n) Levy, (o) Schaffer and (p) Alpine functions for D = 30. (Each panel plots the fitness value against FEs for SPSO, LFPSO and PSOLF.)
Fig. 5. Convergence graphs for (q) Rotated Ackley, (r) Rotated Griewank, (s) Non-continuous Rastrigin, (t) Sum Square and (u) Step functions for D = 30. (Each panel plots the fitness value against FEs for SPSO, LFPSO and PSOLF.)
function name and category (column 2) and formula (column 3) for each test function.

5.2. Comparison of simulation results for benchmark functions

For each test function, the three algorithms SPSO, LFPSO and PSOLF are executed independently 50 times in dimensions 30 and 50. The mean, standard deviation and mean CPU time (in seconds) over the 50 runs at the maximum number of function evaluations (FEs) with dimensions 30 and 50 are given in Tables 3 and 4, respectively. The best results produced by the algorithms for each test function are shown in boldface. To make the comparison easier and clearer, result values below 10^−18 are considered as 0.

As given in Table 3 for dimension 30, the PSOLF algorithm obtains optimal results for 16 of the 21 benchmark functions. The PSOLF algorithm outperforms the other two algorithms on 11 benchmark functions, i.e., f4, f6, f7, f8, f11, f12, f13, f14, f19, f20, and f21. The SPSO algorithm acquires better results than LFPSO and PSOLF on function f3 only, whilst LFPSO obtains better results on functions f5, f9, f10, and f18, i.e., 4 of the 21 functions. Both LFPSO and PSOLF obtain optimal results on f1, f15, f16 and f17, whilst SPSO and PSOLF obtain optimal results on f1, f2, f15 and f17. In general, the algorithms find global optimal or near-global-optimal values as follows: SPSO on 5 functions (f1, f2, f3, f15 and f17), LFPSO on 8 functions (f1, f5, f9, f10, f15, f16, f17, and f18), and PSOLF on 16 functions (f1, f2, f4, f6, f7, f8, f11, f12, f13, f14, f15, f16, f17, f19, f20, and f21). SPSO performs well on Sphere and Schwefel 2.22, which are simple unimodal functions. Since Rosenbrock is considered a multimodal function in high-dimensional search spaces, the SPSO algorithm achieves a better result of 3.55E+00 (2.41E+00) than the 2.39E+01 (2.47E−01) and 2.69E+01 (1.01E+00) obtained by LFPSO and PSOLF, respectively. The SPSO algorithm also achieves a better result on function f10, 1.51E−12 (7.45E−12), than PSOLF. However, SPSO fails to achieve optimum results for the multimodal and
Table 8
Comparison of PSOLF with PSO variants on 14 benchmark functions.
Functions | CLPSO [18] | HPSO-TVAC [24] | FIPSO [26] | SPSO-40 [36] | LPSO [27] | DMS-PSO [28] | LFPSO [1] | PSOLF
rotated multimodal functions, as it gets stuck in local optima. While the SPSO algorithm is trapped in local minima for multimodal and rotated multimodal functions, the LFPSO algorithm provides better solutions than SPSO, but this improvement is smaller when compared with the PSOLF algorithm. The proposed PSOLF algorithm performs well for unimodal, multimodal and rotated multimodal functions, except for the Rosenbrock, Schwefel 2.26, Penalized 1, Penalized 2 and Levy functions.

When examining the values given in Table 4 for dimension 50, PSOLF still achieves the best results on f1, f2, f4, f6, f7, f8, f11, f12, f13, f14, f15, f16, f17, f19, f20, and f21, i.e., 16 out of the 21 functions. The PSOLF algorithm escapes local minima and provides better results than SPSO and LFPSO for both unimodal and multimodal functions.

For dimension 50, while the SPSO algorithm performs better than the PSOLF algorithm on function f10 only, both algorithms obtain the optimum result for functions f1, f2, f15, and f17. For the remaining benchmark functions, the PSOLF algorithm gives better results than the SPSO algorithm. LFPSO achieves the best results for functions f3, f5, f9, f16, f17 and f18, but the PSOLF algorithm performs well for multimodal, unimodal and rotated functions alike. The SPSO algorithm gets stuck in local minima, while the PSOLF algorithm escapes them and obtains better results than SPSO and LFPSO.

In summary, the proposed PSOLF algorithm performs much better than the SPSO and LFPSO algorithms and reaches optimal solutions for most of the test functions. The small standard deviation values show the PSOLF algorithm to be robust, as it obtains optimal solutions over all runs when compared to the SPSO and LFPSO algorithms.

In order to analyze the scalability of the proposed PSOLF algorithm, it is tested in higher dimensions: 100, 500 and 1000. For this experiment, functions f1, f2, f3, f4, f6, f8, f15, f16, f17, f19, f20 and f21 are used. The mean, standard deviation and mean CPU time (in seconds) over 10 independent runs for these benchmark functions are reported in Table 5. As the dimension increases, the PSOLF algorithm continues to find the best solutions. This is achieved using the same parameter settings as in the above experiments and requires no increase in population size or number of function evaluations. Thus, it is concluded that the PSOLF algorithm is insensitive to growing dimensions and has superior scalability.

5.3. Wilcoxon's rank sum test results

In order to determine whether there is a statistically significant difference between SPSO and PSOLF, and also between LFPSO and PSOLF, Wilcoxon's test is performed on the results of 50 independent runs. The test is conducted with significance level α = 0.05, and
260 R. Jensi, G.W. Jiji / Applied Soft Computing 43 (2016) 248–261
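The per-function comparison described in this section can be sketched with SciPy's implementation of the rank sum test. The snippet below is an illustrative sketch using synthetic run results (not the paper's experimental data); the h-value is derived from the p-value at α = 0.05.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(42)

# Synthetic stand-ins for the best fitness values of 50 independent
# runs of two algorithms on one benchmark function (illustrative only).
spso_runs = rng.normal(loc=1.0e-2, scale=2.0e-3, size=50)
psolf_runs = rng.normal(loc=1.0e-6, scale=2.0e-7, size=50)

# Wilcoxon rank sum test: z statistic and two-sided p-value.
z_value, p_value = ranksums(psolf_runs, spso_runs)

# h-value: 1 if the null hypothesis of equal medians is rejected
# at the 0.05 significance level, 0 otherwise.
alpha = 0.05
h_value = int(p_value < alpha)

print(f"z = {z_value:+.3f}, p = {p_value:.3g}, h = {h_value}")
```

A negative z here indicates that the first sample (PSOLF) tends toward lower fitness values; the three quantities p, h and z correspond to what Tables 6 and 7 report.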
The obtained results, in terms of p-value, h-value and z-value, are shown in Tables 6 and 7 for each test function. As given in Table 6, for functions f1 to f11 there is a significant difference between the performance of SPSO and PSOLF for dimension 30, and for dimension 50 excluding function f3. Similarly, the results obtained by the PSOLF algorithm are statistically significant compared with the LFPSO algorithm for dimensions 30 and 50. The z-values also indicate that SPSO performs better than PSOLF on functions f3 and f10 for dimension 30 and on function f3 for dimension 50, and that LFPSO performs better than PSOLF on functions f3, f5, f9 and f10, as discussed earlier in Section 5.2. The test values for functions f12–f21 given in Table 7 imply that there is a statistically significant difference between SPSO and PSOLF for dimensions 30 and 50 except on f18, and that LFPSO and PSOLF are also statistically different for dimensions 30 and 50 except on f16, since both algorithms produce the same optimal result of 0 on f16 in all runs. Thus, the tests show that PSOLF performs better than SPSO and LFPSO.

5.4. Convergence progress of the methods

Figs. 3–5 graphically compare the convergence characteristics of the three methods SPSO, LFPSO and PSOLF in solving the benchmark test functions for dimension 30. Fig. 3 plots the convergence graphs for the Sphere, Schwefel2.22, Rosenbrock, Noise, Schwefel2.26, Rastrigin, Ackley and Griewank functions. Fig. 4 plots the convergence behaviors for the Penalized1, Penalized2, RotatedSchwefel2.26, RotatedRastrigin, Quartic, Levy, Schaffer and Alpine functions. Fig. 5 plots the convergence behaviors for the RotatedAckley, RotatedGriewank, SumSquare, Step and Non-continuous Rastrigin functions.
On the Sphere function, all three algorithms SPSO, LFPSO and PSOLF obtain good results. PSOLF reaches the optimal result of 0 at about 22,000 FEs, which shows that it converges quickly and also achieves the global optimum, as shown in Fig. 3(a).
On the Rastrigin and Griewank functions, PSOLF converges very fast, achieving the optimal result of 0 at about 1350 and 1550 FEs respectively, as seen in Fig. 3(f) and (h). SPSO and LFPSO do not perform well and get stuck in local minima.
On the Schwefel2.22 function, SPSO improves the solution continually, while LFPSO does not obtain the optimal value. PSOLF performs well, achieving the optimal result of 0 at about 41,100 FEs, as shown in Fig. 3(b).
On the Rosenbrock function, SPSO converges quickly and subsequently improves the solution. LFPSO initially improves and then gets trapped in a local minimum; PSOLF similarly gets stuck in a local minimum, hence both algorithms reach close results, as shown in Fig. 3(c).
Although SPSO does not improve the solution on the Noise and Ackley functions, LFPSO and PSOLF obtain better results, as shown in Fig. 3(d) and (g), respectively.
On the Schwefel2.26 function, SPSO converges very fast but gets trapped in a local minimum, and PSOLF improves the solution slowly but also gets stuck in a local minimum. LFPSO achieves a better result than both SPSO and PSOLF.
As seen from Figs. 4 and 5, the proposed PSOLF algorithm obtains the global optimum of 0 for unimodal, multimodal and rotated functions such as Quartic, Schaffer, Rotated Rastrigin, Alpine, Rotated Ackley, Rotated Griewank, Sum Square, Step and Non-continuous Rastrigin at about 11,200, 5300, 1625, 39,750, 2375, 1525, 21,775, 325 and 1650 FEs, respectively. The LFPSO algorithm also performs well on the Quartic, Sum Square and Step functions, reaching optimal or near-optimal solutions at about 200,000, 200,000 and 71,680 FEs, respectively. SPSO reaches the optimal value at about 200,000 FEs for the Sum Square and Step functions (simple unimodal functions). On the Levy and Penalized1 functions, LFPSO outperforms SPSO and PSOLF, which converge very fast but get stuck in local minima, as shown in Fig. 4(n) and (i).
On the Penalized2 function, LFPSO and SPSO achieve the optimal result of 0 at about 64,880 and 4860 FEs respectively, whilst PSOLF performs poorly on this function, as seen in Fig. 4(j).
In conclusion, examining the convergence behavior of the three methods shows that the proposed PSOLF algorithm offers the best performance: it converges very fast and achieves optimal results quickly.

5.4.1. Computational time analysis

In order to analyze the computational complexity of the proposed algorithm, the average time required to reach the optimal result or the maximum number of function evaluations on the test functions is given in Tables 3–5. As the PSOLF algorithm converges quickly, it takes less than 3 s for most of the functions. As seen in Table 5, PSOLF does not take much additional time as the dimension increases, and it scales as O(n), where n is the problem dimension.
Table 8 shows the results obtained by the proposed PSOLF algorithm and various PSO variants, namely CLPSO [18], HPSO-TVAC [24], FIPSO [26], SPSO-40 [36], LPSO [27], DMS-PSO [28] and LFPSO [1]. The results in Table 8 are obtained from 25 experimental runs with dimension 30. The results of CLPSO [18], HPSO-TVAC [24], FIPSO [26], SPSO-40 [36], LPSO [27] and DMS-PSO [28] are taken directly from [20], and the LFPSO results from [1]. For SPSO-40, the number of particles is not determined automatically; it is set to 40. The results in Table 8 are displayed without error rate.
When examining Table 8, the proposed PSOLF algorithm reaches the optimal solution, performs well compared with the other seven algorithms and obtains the first rank.

6. Conclusion

In this work, particle swarm optimization is combined with Levy flight (PSOLF) to enhance the global search capability and to increase convergence efficiency. The PSOLF algorithm is a global search algorithm with several advantages: it is simple to implement, easy to understand and insensitive to dimensionality. A set of standard benchmark functions is used to evaluate the proposed method, and the experimental results obtained with PSOLF are compared with those of SPSO2007 and LFPSO; the comparison shows that the proposed PSOLF improves the solution quality and has a high convergence rate. In future, the proposed method will be applied to data clustering problems.

References

[1] H. Haklı, H. Uguz, A novel particle swarm optimization algorithm with Levy flight, Appl. Soft Comput. 23 (2014) 333–345.
[2] A.V. Chechkin, R. Metzler, J. Klafter, V.Y. Gonchar, Introduction to the theory of Lévy flights, in: R. Klages, G. Radons, I.M. Sokolov (Eds.), Anomalous Transport: Foundations and Applications, John Wiley & Sons, 2008, pp. 129–162.
[3] R.C. Eberhart, Y. Shi, J. Kennedy, Swarm Intelligence, Morgan Kaufmann, 2001.
[4] X.-S. Yang, Engineering Optimization: An Introduction with Metaheuristic Applications, 1st ed., John Wiley & Sons, New Jersey, 2010.
[5] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms, 2nd ed., Luniver Press, 2010.
[6] N. Zhang, J. Ji, C. Liu, N. Zhong, An ant colony optimization algorithm for learning classification rules, in: Proceedings of the 2006 IEEE/WIC, 2006, pp. 1034–1037.
[7] D.T. Pham, A. Ghanbarzadeh, E. Koc, S. Otri, S. Rahim, M. Zaidi, The Bees Algorithm, Technical Note, Manufacturing Engineering Centre, Cardiff University, UK, 2005.
[8] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proceedings of the IEEE International Joint Conference on Neural Networks, vol. 4, 1995, pp. 1942–1948.
[9] R.S. Parpinelli, H.S. Lopes, A.A. Freitas, Data mining with an ant colony optimization algorithm, IEEE Trans. Evolut. Comput. 6 (4) (2002) 321–332.
[10] D.M. Van, A.P. Engelbrecht, Data clustering using particle swarm optimization, in: Proceedings of the IEEE Congress on Evolutionary Computation, Canberra, Australia, 2003, pp. 215–220.
[11] X. Cui, T. Potok, P. Palathingal, Document clustering using particle swarm optimization, in: Proceedings of the IEEE Swarm Intelligence Symposium, SIS 2005, IEEE Press, 2005.
[12] A. Sarangi, R.K. Mahapatra, S.P. Panigrahi, DEPSO and PSO-QI in digital filter design, Expert Syst. Appl. 38 (2011) 10966–10973.
[13] Y.-P. Chang, C.-N. Ko, A PSO method with nonlinear time-varying evolution based on neural network for design of optimal harmonic filters, Expert Syst. Appl. 36 (2009) 6809–6816.
[14] H.-C. Yang, S.-B. Zhang, K.-Z. Deng, P.-J. Du, Research into a feature selection method for hyperspectral imagery using PSO and SVM, J. China Univ. Min. Technol. 17 (2007) 473–478.
[15] L.-Y. Chuang, H.-W. Chang, C.-J. Tu, C.-H. Yang, Improved binary PSO for feature selection using gene expression data, Comput. Biol. Chem. 32 (2008) 29–38.
[16] Y. Zhang, D. Huang, M. Ji, F. Xie, Image segmentation using PSO and PCM with Mahalanobis distance, Expert Syst. Appl. 38 (2011) 9036–9040.
[17] J.-P. Yang, C.-K. Kung, F.-T. Liu, Y.-J. Chen, C.-Y. Chang, R.-C. Hwang, Logic circuit design by neural network and PSO algorithm, in: 2010 First International Conference on Pervasive Computing Signal Processing and Applications (PCSPA), Harbin, China, 2010, pp. 456–459.
[18] J.J. Liang, A.K. Qin, P.N. Suganthan, S. Baskar, Comprehensive learning particle swarm optimizer for global optimization of multimodal functions, IEEE Trans. Evolut. Comput. (2006) 281–295.
[19] Y. Shi, R. Eberhart, A modified particle swarm optimizer, in: The 1998 IEEE International Conference on Evolutionary Computation Proceedings, Anchorage, AK, 1998, pp. 69–73.
[20] Z.-H. Zhan, J. Zhang, Y. Li, Y.-H. Shi, Orthogonal learning particle swarm optimization, IEEE Trans. Evolut. Comput. 15 (2011) 832–846.
[21] Z. Xinchao, A perturbed particle swarm algorithm for numerical optimization, Appl. Soft Comput. 10 (2010) 119–124.
[22] Y. Wang, B. Li, T. Weise, J. Wang, B. Yuan, Q. Tian, Self-adaptive learning based particle swarm optimization, Inf. Sci. 181 (2011) 4515–4538.
[23] I.G. Tsoulos, A. Stavrakoudis, Enhancing PSO methods for global optimization, Appl. Math. Comput. 216 (2010) 2988–3001.
[24] A. Ratnaweera, S.K. Halgamuge, H.C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Trans. Evolut. Comput. 8 (3) (2004) 240–255.
[25] X. Yang, J. Yuan, J. Yuan, H. Mao, A modified particle swarm optimizer with dynamic adaptation, Appl. Math. Comput. 189 (2007) 1205–1213.
[26] R. Mendes, J. Kennedy, J. Neves, The fully informed particle swarm: simpler, maybe better, IEEE Trans. Evolut. Comput. 8 (3) (2004) 204–210.
[27] J. Kennedy, R. Mendes, Population structure and particle swarm performance, in: IEEE Congress on Evolutionary Computation, Honolulu, 2002, pp. 1671–1676.
[28] J.J. Liang, P.N. Suganthan, Dynamic multi-swarm particle swarm optimizer, in: Swarm Intelligence Symposium, California, 2005, pp. 124–129.
[29] Z. Beheshti, S.M. Shamsuddin, S. Sulaiman, Fusion global-local-topology particle swarm optimization for global optimization problems, Math. Probl. Eng. (2014).
[30] X.-S. Yang, S. Deb, Cuckoo search via Lévy flights, in: World Congress on Nature & Biologically Inspired Computing, IEEE Publications, 2009, pp. 210–214.
[31] G. Wang, L. Guo, A.H. Gandomi, L. Cao, A.H. Alavi, H. Duan, J. Li, Lévy-flight krill herd algorithm, 2013. http://dx.doi.org/10.1155/2013/682073.
[32] N. Bacanin, An object-oriented software implementation of a novel cuckoo search algorithm, in: Proceedings of the 5th European Conference on European Computing Conference (ECC'11), 2011, pp. 245–250.
[33] M. Tuba, M. Subotic, N. Stanarevic, Modified cuckoo search algorithm for unconstrained optimization problems, in: Proceedings of the 5th European Conference on European Computing Conference (ECC'11), 2011, pp. 263–268.
[34] X.-S. Yang, Firefly algorithm, Lévy flights and global optimization, in: M. Bramer, R. Ellis, M. Petridis (Eds.), Research and Development in Intelligent Systems, vol. XXVI, Springer, London, 2010, pp. 209–218.
[35] J. Xie, Y.Q. Zhou, H. Chen, A novel bat algorithm based on differential operator and Levy flights trajectory, Comput. Intell. Neurosci. 2013 (2013) 453812. www.hindawi.com/journals/cin/aip/453812.pdf.
[36] M. Omran, SPSO 2007 Matlab, 2007. http://www.particleswarm.info/Programs.html (accessed 04.01.16).