A Simple Hybrid Particle Swarm Optimization
1. Introduction
As a novel stochastic optimization technique, particle swarm optimization (PSO) has gained much attention in a wide range of applications during the past decade, both for solving global optimization problems and for finding a good approximate solution to a given problem with high probability. PSO was first introduced by Kennedy and Eberhart [Kennedy and Eberhart, 1995]. It belongs to the category of swarm intelligence methods, inspired by the metaphor of social interaction and communication such as bird flocking and fish schooling. It is also related to the broader family of evolutionary algorithms, in that individual improvement is combined with population-level cooperation and competition. Since PSO was first introduced to optimize various continuous nonlinear functions, it has been successfully applied to a wide range of applications owing to the inherent simplicity of the concept, easy implementation, and quick convergence [Trelea 2003].
PSO is initialized with a population of random solutions. Each individual, called a particle, is assigned a randomized velocity based on its own and its companions' flying experience, and the particles are then flown through the search hyperspace. PSO leads to an effective combination of partial solutions found by other particles and speeds up the search at an early stage of the run. To apply PSO, several parameters, including the population size (N), the cognition learning factor (cp), the social learning factor (cg), the inertia weight (w), and the number of iterations (T) or the CPU time, must be properly determined. Updating the velocities and positions is the most important part of PSO, as it plays a vital role in exchanging information among particles. The details are given in the following sections.
The simple PSO often suffers from being trapped in local optima. In this paper, therefore, PSO is revised with a simple adaptive inertia weight factor that is proposed to efficiently control the global search and the convergence to the global best solution. Moreover, a local search method is incorporated into PSO to construct a hybrid PSO (HPSO), in which the parallel population-based evolutionary searching ability of PSO and local searching behavior are reasonably combined. Simulation results and comparisons demonstrate the effectiveness and efficiency of the proposed HPSO.
The paper is organized as follows. Section 2 describes the acronyms and notations. Section 3 outlines the proposed method in detail. In Section 4, the methodology of the proposed HPSO is discussed. Numerical simulations and comparisons are provided in Section 5. Finally, concluding remarks and directions for future work are given in Section 6.
3. The PSO
In PSO, a solution is encoded as a finite-length string called a particle. All of the particles have fitness values, which are evaluated by the fitness function to be optimized, and velocities, which direct the flying of the particles [Parsopoulos et al. 2001]. PSO is initialized with a population of random particles with random positions and velocities inside the problem space, and then searches for optima by updating generations. It combines local and global search, resulting in high search efficiency. In every iteration, each particle moves towards its best previous position and towards the best particle in the whole swarm. The former is a local best and its value is called pBest, and the latter is a global best and its value is called gBest in the literature. After finding the two best values, the particle updates its velocity and position with the following equations in continuous PSO:
$$v_{t,i,j} = w\,v_{t-1,i,j} + c_p r_{p,i,j}\,(p_{t-1,i,j} - x_{t-1,i,j}) + c_g r_{g,i,j}\,(g_{t-1,j} - x_{t-1,i,j}), \qquad (1)$$
$$x_{t,i,j} = x_{t-1,i,j} + v_{t,i,j}, \qquad (2)$$

where $v_{t,i,j}$ and $x_{t,i,j}$ denote the $j$th components of the velocity and position of particle $i$ at iteration $t$, $p_{t-1,i,j}$ is the $j$th component of its pBest, $g_{t-1,j}$ is the $j$th component of the gBest, and $r_{p,i,j}$ and $r_{g,i,j}$ are uniform random numbers in $[0,1]$.
The terms $c_p r_{p,i,j}$ and $c_g r_{g,i,j}$ determine the weights of the two parts, and $c_p + c_g$ is usually limited to 4 [Trelea 2003]. Generally, each component of $X_{t,i}$ and $V_{t,i}$ can be clamped to the ranges $[L(X), U(X)]$ and $[L(V), U(V)]$, respectively, to keep each particle feasible.
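As a minimal sketch of Eqs. (1)-(2) together with this clamping, the following function updates a single particle. The function name, the NumPy-based implementation, and the bound handling via np.clip are illustrative assumptions rather than code from the chapter; rng is a numpy.random.Generator, e.g. np.random.default_rng().

```python
import numpy as np

def update_particle(x, v, p_best, g_best, w, cp, cg, x_bounds, v_bounds, rng):
    """Apply Eqs. (1)-(2) to one particle; x, v, p_best, g_best are length-D
    arrays, w/cp/cg are scalars, x_bounds and v_bounds are (low, high) pairs."""
    rp = rng.random(x.size)   # r_{p,i,j}: one random number per dimension
    rg = rng.random(x.size)   # r_{g,i,j}: one random number per dimension
    v = w * v + cp * rp * (p_best - x) + cg * rg * (g_best - x)   # Eq. (1)
    v = np.clip(v, *v_bounds)          # clamp velocity to [L(V), U(V)]
    x = np.clip(x + v, *x_bounds)      # Eq. (2), clamped to [L(X), U(X)]
    return x, v
```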
For example, let

$X_{3,4} = (1.5,\ 3.6,\ 3.7,\ -3.4,\ -1.9)$, (3)
$U(X) = (2,\ 4,\ 4,\ 0,\ 0)$, (10)
$U(V) = (4,\ 4,\ 4,\ 4,\ 4)$, (12)
$w = 0.9$, (13)
$c_p = c_g = 2$. (14)
STEP 2: Evaluation: Evaluate the fitness value (the desired objective function) of each particle.
STEP 3: Find the pBest: If the fitness value of particle i is better than its best fitness value in history (pBest), then set the current fitness value as the new pBest of particle i.
STEP 4: Find the gBest: If any pBest is updated and is better than the current gBest, then set gBest to that value.
STEP 5: Update and adjust the velocity: Update the velocity according to Eq. (1). Adjust the velocity to meet its range if necessary.
STEP 6: Update and adjust the position: Update the position, i.e., move to the next position, according to Eq. (2). Adjust the position to meet its range if necessary.
STEP 7: Stopping criterion: If the maximum number of iterations or the CPU-time limit is met, then stop; otherwise go back to STEP 2.
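The following is a minimal sketch of the loop formed by STEPs 2-7, with random initialization of positions and velocities as described earlier. Minimization is assumed, the NumPy-based style and the function name pso are illustrative, and the default parameter values are only examples.

```python
import numpy as np

def pso(objective, D, N=20, T=1000, w=0.9, cp=2.0, cg=2.0,
        x_bounds=(-30.0, 30.0), v_bounds=(-4.0, 4.0), seed=0):
    """A sketch of the basic PSO loop (STEPs 2-7), assuming minimization."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(*x_bounds, size=(N, D))      # random initial positions
    v = rng.uniform(*v_bounds, size=(N, D))      # random initial velocities
    p_best = x.copy()                            # personal bests (pBest)
    p_val = np.array([objective(xi) for xi in x])
    g_idx = p_val.argmin()
    g_best, g_val = p_best[g_idx].copy(), p_val[g_idx]   # global best (gBest)
    for _ in range(T):                                            # STEP 7
        rp, rg = rng.random((N, D)), rng.random((N, D))
        v = w * v + cp * rp * (p_best - x) + cg * rg * (g_best - x)  # STEP 5, Eq. (1)
        v = np.clip(v, *v_bounds)
        x = np.clip(x + v, *x_bounds)                             # STEP 6, Eq. (2)
        f = np.array([objective(xi) for xi in x])                 # STEP 2
        improved = f < p_val                                      # STEP 3
        p_best[improved], p_val[improved] = x[improved], f[improved]
        if p_val.min() < g_val:                                   # STEP 4
            g_idx = p_val.argmin()
            g_best, g_val = p_best[g_idx].copy(), p_val[g_idx]
    return g_best, g_val
```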
In the proposed HPSO, the inertia weight is not fixed but decreases linearly with the iteration counter $t$, from an initial value $w_{\max}$ down to a final value $w_{\min}$ over the $T$ iterations:
$$w_t = w_{\max} - (t-1)\,\frac{w_{\max} - w_{\min}}{T}. \qquad (19)$$
Fig. 1. The vector synthesis of PSO.
Eqs. (1) and (2) are more complicated than the vector-synthesis concept illustrated in Fig. 1, because a separate random number is drawn for every dimension of every particle in order to increase diversity. Hence, the following equations are implemented in the proposed HPSO instead of Eqs. (1) and (2):
$$V_{t,i} = w_t\,V_{t-1,i} + c_p r_p\,(P_{t-1,i} - X_{t-1,i}) + c_g r_g\,(G_{t-1} - X_{t-1,i}), \qquad (20)$$
$$X_{t,i} = X_{t-1,i} + V_{t,i}. \qquad (21)$$
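As a minimal sketch of Eqs. (19)-(21), the following function advances the whole swarm by one iteration using only two scalar random numbers. The function name, the NumPy-based style, and the default values w_max = 0.9 and w_min = 0.4 (taken from the worked example that follows) are illustrative assumptions, not code from the chapter.

```python
import numpy as np

def hpso_update(x, v, p_best, g_best, t, T, cp=2.0, cg=2.0,
                w_max=0.9, w_min=0.4, rng=None):
    """One HPSO move for the whole swarm: x, v, p_best are (N, D) arrays,
    g_best is a (D,) array, and t is the current iteration (1-based)."""
    rng = rng if rng is not None else np.random.default_rng()
    w_t = w_max - (t - 1) * (w_max - w_min) / T        # Eq. (19)
    rp, rg = rng.random(), rng.random()                # two scalars, not vectors
    v = w_t * v + cp * rp * (p_best - x) + cg * rg * (g_best - x)   # Eq. (20)
    x = x + v                                          # Eq. (21)
    return x, v
```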
In the traditional PSO, each particle needs two random number vectors (i.e., $R_p$ and $R_g$) to move to its next position. In the proposed HPSO, however, only two random numbers (i.e., $r_p$ and $r_g$) are needed per iteration. Besides, Eqs. (20) and (21) are very easy and efficient in deciding the next positions for problems with continuous variables. For example, let $P_{3,4}$, $X_{3,4}$, $G_3$, $L(X)$, and $U(X)$ be the same as defined in the example above, and assume $r_p = 0.34$ and $r_g = 0.79$. From Eq. (19),
$$w = 0.9 - (4-1)\,\frac{0.9 - 0.4}{1000} = 0.8985. \qquad (22)$$
Plugging $w$, $r_p$, $r_g$, and the other required values into Eq. (20), we have
$$V_{3,4} = (0.6974,\ 0.28985,\ -1.56505,\ -1.75795,\ -3.97275). \qquad (23)$$
The proposed LS is very simple and similar to the well-known pairwise-exchange local improvement method. In LS, the dth dimension of the current best particle of the whole population (i.e., the gBest) is replaced by the dth dimension of the pBest of another particle. If the fitness function value is improved, the current gBest is updated accordingly; otherwise, there is no need to change the current gBest. This procedure is repeated in the proposed HPSO until all dimensions of the gBest have been examined.
To reduce the number of duplicated evaluations of the same fitness function in LS, only one non-gBest particle is randomly selected for each dimension of the gBest in the local search. The complete procedure of the local-search part of the proposed HPSO can be summarized as follows:
STEP 0. Let d = 1.
STEP 1. Let n = 1.
STEP 2. If $G = P_{t,n}$ or $g_{t,d} = p_{t,n,d}$, go to STEP 4. Otherwise, let $F^* = F(G)$, $g = g_{t,d}$, and $g_{t,d} = p_{t,n,d}$.
STEP 3. If $F(G)$ is better than $F^*$, then let $F^* = F(G)$. Otherwise, let $g_{t,d} = g$.
STEP 4. If n < N, let n = n + 1 and go to STEP 2.
STEP 5. If d < D, let d = d + 1 and go to STEP 1.
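The sketch below is a direct transcription of LS STEPs 0-5 under the assumption of a minimization problem; the random pre-selection of a single non-gBest particle per dimension mentioned above is left out, and the function name and array layout are illustrative assumptions.

```python
import numpy as np

def local_search(objective, g_best, g_val, p_best):
    """LS STEPs 0-5: g_best is the (D,) gBest position with fitness g_val,
    p_best is the (N, D) array of pBest positions; g_best is edited in place."""
    N, D = p_best.shape
    for d in range(D):                                    # STEP 0 / STEP 5
        for n in range(N):                                # STEP 1 / STEP 4
            if np.array_equal(p_best[n], g_best) or p_best[n, d] == g_best[d]:
                continue                                  # STEP 2: nothing new to try
            old = g_best[d]                               # STEP 2: remember g_{t,d}
            g_best[d] = p_best[n, d]                      # STEP 2: replace one dimension
            f = objective(g_best)
            if f < g_val:                                 # STEP 3: keep the improvement
                g_val = f
            else:
                g_best[d] = old                           # STEP 3: otherwise restore
    return g_best, g_val
```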
5. Numerical examples
To evaluate the performance of the proposed algorithms, the famous benchmark optimization problems listed in Table 1 [Jiang et al. 2007] are used.
Table 1. The benchmark functions.

Function   | Formula | Range | Optimal solution
Rosenbrock | $f_1(x) = \sum_{i=1}^{n-1} \left[ 100\,(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right]$ | $[-30, 30]^n$ | 0 at (1, ..., 1)
Rastrigin  | $f_2(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right]$ | $[-5.12, 5.12]^n$ | 0 at (0, ..., 0)
Griewank   | $f_3(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\!\left(\frac{x_i}{\sqrt{i}}\right) + 1$ | $[-600, 600]^n$ | 0 at (0, ..., 0)
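For reference, a minimal sketch of the three benchmark functions in Table 1, written to accept a NumPy vector; these are the standard textbook definitions rather than code from the chapter.

```python
import numpy as np

def rosenbrock(x):
    """f1: global minimum 0 at (1, ..., 1), searched in [-30, 30]^n."""
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

def rastrigin(x):
    """f2: global minimum 0 at (0, ..., 0), searched in [-5.12, 5.12]^n."""
    return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)

def griewank(x):
    """f3: global minimum 0 at (0, ..., 0), searched in [-600, 600]^n."""
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0
```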
The population size N is set to 20, 40, 80, and 160, the dimension D to 10, 20, and 30, and the corresponding maximum number of iterations T to 1000, 1500, and 2000, respectively. Hence, there are 12 different test sets for each benchmark problem:
Set A1: N=20, D=10, T=1000;
Set A2: N=20, D=20, T=1500;
Set A3: N=20, D=30, T=2000;
Set B1: N=40, D=10, T=1000;
Set B2: N=40, D=20, T=1500;
Set B3: N=40, D=30, T=2000;
Set C1: N=80, D=10, T=1000;
Set C2: N=80, D=20, T=1500;
Set C3: N=80, D=30, T=2000;
Set D1: N=160, D=10, T=1000;
Set D2: N=160, D=20, T=1500;
Set D3: N=160, D=30, T=2000.
Each algorithm is executed on each parameter set in 50 independent runs. The average fitness values of the best particle found over the 50 runs for the three functions are listed in Table 2, where the best result for each function and each set is marked with an asterisk. From Table 2, it can be seen that IPSO outperforms HPSO on the Rosenbrock function for N = 20 and N = 80. In the remaining cases of the Rosenbrock function and for the Griewank function, however, the proposed HPSO achieves better results than SPSO and IPSO in almost all instances. Furthermore, HPSO is superior to SPSO and IPSO in all instances of the Rastrigin function.
Table 2. Average fitness value of the best particle over 50 runs (best result per set marked with *).

Rosenbrock
SET | PSO | IPSO | HPSO
A1 | 42.6162 | 10.5172 | 3.3025*
A2 | 87.2870 | 75.7246* | 124.3305
A3 | 132.5973 | 99.8039* | 122.7829
B1 | 24.3512 | 1.2446 | 0*
B2 | 47.7243 | 8.7328 | 0.0797*
B3 | 66.6341 | 14.7301* | 120.7434
C1 | 15.3883 | 0.1922 | 0.0797*
C2 | 40.6403 | 1.5824* | 60.3717
C3 | 63.4453 | 1.5364* | 4.7461
D1 | 11.6283 | 0.0598 | 0*
D2 | 28.9142 | 0.4771 | 0*
D3 | 56.6689 | 0.4491 | 0*
Average | 39.48832 | 3.22272* | 20.66896

Rastrigin
SET | PSO | IPSO | HPSO
A1 | 5.2062 | 3.2928 | 0*
A2 | 22.7724 | 16.4137 | 0.4975*
A3 | 49.2942 | 35.0189 | 1.0760*
B1 | 3.5697 | 2.6162 | 0*
B2 | 17.2975 | 14.8894 | 0*
B3 | 38.9142 | 27.7637 | 0*
C1 | 2.3835 | 1.7054 | 0*
C2 | 12.9020 | 7.6689 | 0*
C3 | 30.0375 | 13.8827 | 0*
D1 | 1.4418 | 0.8001 | 0*
D2 | 10.0438 | 4.2799 | 0*
D3 | 24.5105 | 11.9521 | 0*
Average | 15.67783 | 9.50649 | 0*

Griewank
SET | PSO | IPSO | HPSO
A1 | 0.0920 | 0.0784 | 0.0071*
A2 | 0.0317 | 0.0236 | 0.0168*
A3 | 0.0482 | 0.0165* | 0.0190
B1 | 0.0762 | 0.0648 | 0.0002*
B2 | 0.0227 | 0.0182 | 0.0026*
B3 | 0.0153 | 0.0151 | 0.0012*
C1 | 0.0658 | 0.0594 | 0*
C2 | 0.0222 | 0.0091 | 0*
C3 | 0.0121 | 0.0004 | 0*
D1 | 0.0577 | 0.0507 | 0*
D2 | 0.0215 | 0.0048 | 0*
D3 | 0.0121 | 0.0010 | 0*
Average | 0.03396 | 0.02483 | 0.00044*
Table 3. Success rate and convergence iterations (max, min, mean, std) of the proposed HPSO over 50 runs.

Rosenbrock
SET | max | min | mean | std
A1 | 962 | 520 | 720.3 | 73.77
A2 | 1264 | 520 | 1130.3 | 141.50
A3 | 1987 | 914 | 1603.5 | 268.45
B1 | 787 | 607 | 660.1 | 39.34
B2 | 1129 | 813 | 1024.7 | 64.66
B3 | 1635 | 1176 | 1477.9 | 114.45
C1 | 782 | 556 | 602.7 | 37.37
C2 | 1089 | 649 | 946.9 | 60.43
C3 | 1457 | 1217 | 1346.7 | 56.36
D1 | 681 | 533 | 567.4 | 28.35
D2 | 1048 | 818 | 887.3 | 47.31
D3 | 1452 | 1073 | 1232.0 | 73.77

Rastrigin
SET | Success rate | max | min | mean | std
A1 | 100% | 112 | 5 | 30.3 | 20.52
A2 | 98% | 329 | 11 | 56.1 | 49.95
A3 | 96% | 357 | 19 | 86.4 | 74.91
B1 | 100% | 58 | 7 | 25.4 | 12.34
B2 | 100% | 104 | 11 | 36.4 | 20.96
B3 | 100% | 196 | 20 | 47.7 | 28.55
C1 | 100% | 56 | 5 | 19.3 | 11.02
C2 | 100% | 106 | 9 | 32.9 | 17.43
C3 | 100% | 127 | 18 | 45.2 | 25.74
D1 | 100% | 38 | 5 | 15.6 | 7.43
D2 | 100% | 63 | 9 | 26.4 | 10.87
D3 | 100% | 97 | 9 | 34.7 | 15.96

Griewank
SET | Success rate | max | min | mean | std
A1 | 86% | 711 | 6 | 110.7 | 169.22
A2 | 74% | 1401 | 18 | 262.1 | 303.67
A3 | 76% | 812 | 15 | 275.0 | 272.40
B1 | 98% | 518 | 4 | 53.1 | 74.79
B2 | 96% | 823 | 10 | 90.7 | 151.76
B3 | 96% | 738 | 10 | 95.7 | 140.38
C1 | 100% | 124 | 2 | 20.5 | 23.07
C2 | 100% | 120 | 10 | 40.8 | 26.87
C3 | 100% | 131 | 8 | 48.1 | 28.07
D1 | 100% | 30 | 6 | 14.8 | 6.09
D2 | 100% | 104 | 9 | 29.0 | 16.99
D3 | 100% | 77 | 9 | 32.9 | 14.44
As the dimension increases, the solution space becomes more complex, and the PSO algorithm is more likely to be trapped in local optima. The experimental data in Table 2 do not, by themselves, clearly indicate that HPSO outperforms the other PSOs in terms of average fitness values. However, the success rates in Table 3 are all over 74%. Therefore, the proposed HPSO can find global optima with very high probability, and it is concluded that HPSO has the strongest exploration ability and is not easily trapped in local optima. Table 3 also shows that the proposed HPSO uses at most 3.18 seconds in the worst case and 0.6 seconds on average. Thus, HPSO is effective, efficient, robust, and reliable for complex numerical optimization.
6. Conclusions
A successful evolutionary algorithm is one with a proper balance between exploration (searching for good solutions) and exploitation (refining the solutions by combining information gathered during the exploration phase). In this study, a new hybrid version of PSO, called HPSO, is proposed. The HPSO is a vector-based PSO method with a linearly varying inertia weight, combined with a local search.
A novel, simple, and efficient mechanism is employed to move the gBest to its next position in the proposed HPSO. The HPSO combines the population-based evolutionary searching ability of PSO with local searching behavior to efficiently balance exploration and exploitation. The results obtained by HPSO have been compared with those obtained by the traditional simple PSO (SPSO) and a recently proposed improved PSO (IPSO). Computational results show that the proposed HPSO improves both searching efficiency and searching quality. In summary, the results presented in this work are encouraging and promising for the application of the proposed HPSO to other complex problems.
Further analysis is necessary to see how other soft computing methods (e.g., the genetic algorithm, tabu search, etc.) react to local searches, for future researchers who may want to develop their own heuristics and make further improvements. Our research is still in progress, and it opens avenues for future work in directions such as how to adjust the parameters, increase the success rates, reduce the running times, use other local searches, and aggregate different and new concepts into PSO.
7. References
B. Liu, L. Wang, Y.-H. Jin, F. Tang and D.-X. Huang (2005), Improved particle swarm optimization combined with chaos, Chaos, Solitons & Fractals, Vol. 25, 2005, pp. 1261-1271.
I.C. Trelea (2003), The particle swarm optimization algorithm: convergence analysis and parameter selection, Information Processing Letters, Vol. 85, 2003, pp. 317-325.
J. Kennedy and R.C. Eberhart (1995), Particle swarm optimization, Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, USA, 1995, pp. 1942-1948.
J. Kennedy, R.C. Eberhart and Y. Shi (2001), Swarm intelligence, San Francisco, CA: Morgan Kaufmann, 2001.
J. Kennedy and R.C. Eberhart (1997), A discrete binary version of the particle swarm algorithm, Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics: Computational Cybernetics and Simulation, Vol. 5, 1997, pp. 4104-4108.