I. INTRODUCTION
In the canonical PSO, the velocity and position of particle i are updated as follows:
\[
V_i(t+1) = \omega V_i(t) + c_1 R_1(t)\bigl(pbest_i(t) - X_i(t)\bigr) + c_2 R_2(t)\bigl(gbest(t) - X_i(t)\bigr), \tag{1}
\]
\[
X_i(t+1) = X_i(t) + V_i(t+1). \tag{2}
\]
The velocity update in (1) can be rewritten as
\[
V_i(t+1) = \omega V_i(t) + \theta_1\bigl(p_1 - X_i(t)\bigr), \tag{3}
\]
where
\[
\theta_1 = c_1 R_1(t) + c_2 R_2(t),
\]
\[
p_1 = \frac{c_1 R_1(t)}{c_1 R_1(t) + c_2 R_2(t)}\,pbest_i(t) + \frac{c_2 R_2(t)}{c_1 R_1(t) + c_2 R_2(t)}\,gbest(t). \tag{4}
\]
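As a minimal illustration, the canonical update in (1) and (2) can be sketched in Python as below. The particles are assumed to be stored as NumPy arrays, and the inertia weight and acceleration coefficients (w, c1, c2) are typical values from the PSO literature, not parameters taken from this paper.

```python
import numpy as np

def pso_update(X, V, pbest, gbest, w=0.7298, c1=1.49618, c2=1.49618, rng=np.random):
    """One canonical PSO step, cf. (1)-(2).

    X, V, pbest: arrays of shape (m, n); gbest: array of shape (n,).
    """
    m, n = X.shape
    R1 = rng.random_sample((m, n))  # random vectors in [0, 1]^n
    R2 = rng.random_sample((m, n))
    V_new = w * V + c1 * R1 * (pbest - X) + c2 * R2 * (gbest - X)  # (1)
    X_new = X + V_new                                              # (2)
    return X_new, V_new
```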
II. ALGORITHM
Without loss of generality, the minimization problem considered in this paper is
\[
\min\ f(X) \quad \text{s.t.}\ X \in \mathbb{X}. \tag{5}
\]
Fig. 1. The general idea of CSO. During each generation, particles are pairwise randomly selected from the current swarm P(t) for competitions. After each competition, the loser, whose fitness value is worse, will be updated by learning from the winner, while the winner is directly passed to the swarm of the next generation P(t+1).
After the k-th competition in generation t, the loser X_{l,k}(t) learns from the winner X_{w,k}(t) and is updated as
\[
V_{l,k}(t+1) = R_1(k,t)\,V_{l,k}(t) + R_2(k,t)\bigl(X_{w,k}(t) - X_{l,k}(t)\bigr) + \varphi R_3(k,t)\bigl(\bar{X}_k(t) - X_{l,k}(t)\bigr), \tag{6}
\]
\[
X_{l,k}(t+1) = X_{l,k}(t) + V_{l,k}(t+1), \tag{7}
\]
where R_1(k,t), R_2(k,t), R_3(k,t) ∈ [0,1]^n are three randomly generated vectors after the k-th competition and learning process in generation t, \bar{X}_k(t) is the mean position value of the relevant particles, and φ is a parameter that controls the influence of \bar{X}_k(t). Although \bar{X}(t) is shared by several particles, it depends on the mean current position of the whole swarm, which is less likely to introduce bias towards any particular particle.
Finally, it is noted that in CSO, only half of the particles will
be updated in each generation, while in PSO, all particles are
updated.
To illustrate the above intuitive observation, three typical
cases are considered below to show how CSO is potentially
able to perform more explorative search than the canonical
PSO.
Algorithm 1 The pseudocode of the Competitive Swarm Optimizer (CSO). t is the generation number. U denotes a set of particles that have not yet participated in a competition. Unless otherwise specified, the terminal condition is the maximum number of fitness evaluations.
1: t = 0;
2: randomly initialize P(0);
3: while terminal condition is not satisfied do
4:   calculate the fitness of all particles in P(t);
5:   U = P(t), P(t+1) = ∅;
6:   while U ≠ ∅ do
7:     randomly choose two particles X1(t), X2(t) from U;
8:     if f(X1(t)) ≤ f(X2(t)) then
9:       Xw(t) = X1(t), Xl(t) = X2(t);
10:    else
11:      Xw(t) = X2(t), Xl(t) = X1(t);
12:    end if
13:    add Xw(t) into P(t+1);
14:    update Xl(t) using (6) and (7);
15:    add the updated Xl(t+1) to P(t+1);
16:    remove X1(t), X2(t) from U;
17:  end while
18:  t = t + 1;
19: end while
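As a concrete illustration of Algorithm 1, one generation of CSO can be sketched in Python as follows. This is a minimal sketch rather than the authors' implementation: it assumes an even swarm size, uses the global swarm mean for X̄(t) in (6), and takes the fitness function f as any callable mapping a position vector to a scalar.

```python
import numpy as np

def cso_generation(X, V, f, phi=0.1, rng=np.random):
    """One CSO generation: random pairwise competitions, winners kept,
    losers updated according to (6) and (7). X and V have shape (m, n)."""
    m, n = X.shape
    fitness = np.apply_along_axis(f, 1, X)         # evaluate all particles in P(t)
    X_mean = X.mean(axis=0)                        # global mean position, X̄(t)
    pairs = rng.permutation(m).reshape(m // 2, 2)  # random pairing (m assumed even)
    X_next, V_next = X.copy(), V.copy()
    for a, b in pairs:
        w, l = (a, b) if fitness[a] <= fitness[b] else (b, a)  # winner / loser
        R1, R2, R3 = rng.random_sample((3, n))
        V_next[l] = R1 * V[l] + R2 * (X[w] - X[l]) + phi * R3 * (X_mean - X[l])  # (6)
        X_next[l] = X[l] + V_next[l]                                             # (7)
        # the winner is passed to P(t+1) unchanged
    return X_next, V_next
```

Repeating this generation until the budget of fitness evaluations is exhausted reproduces the outer loop of Algorithm 1.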
[Figure: a one-dimensional fitness landscape f(X) over X ∈ [0, 50], marking X_w(t), X_l(t), pbest_w(t), pbest_l(t), gbest(t), the global optimum, and the points p_{1w} = (pbest_w(t) + gbest(t))/2 and p_{1l} = (pbest_l(t) + gbest(t))/2.]
III. SEARCH DYNAMICS AND CONVERGENCE ANALYSIS
Similar to (3) and (4), the loser's velocity update in (6) can be rewritten as
\[
V_{l,k}(t+1) = R_1(k,t)\,V_{l,k}(t) + \theta_2\bigl(p_2 - X_{l,k}(t)\bigr), \tag{8}
\]
where
\[
\theta_2 = R_2(k,t) + \varphi R_3(k,t), \quad
p_2 = \frac{R_2(k,t)}{R_2(k,t) + \varphi R_3(k,t)}\,X_{w,k}(t) + \frac{\varphi R_3(k,t)}{R_2(k,t) + \varphi R_3(k,t)}\,\bar{X}_k(t). \tag{9}
\]
Compared to (4), it can be observed that (9) has a better chance to generate a higher degree of diversity. On the one hand, particle X_w(t) is randomly chosen from the swarm before the competition.
[Figures: one-dimensional illustrations of the typical cases, plotting f(X) over X ∈ [0, 50] and marking X_w(t), X_l(t), X_w(t+1), X_l(t+1), pbest_w(t), pbest_l(t) and the global optimum.]
According to the definitions of gbest and pbest in the canonical PSO, the following relationship can be obtained:
\[
\begin{cases}
f(gbest(t)) \le f(pbest_w(t)) \le f(X_w(t)),\\
f(gbest(t)) \le f(pbest_l(t)) \le f(X_l(t)).
\end{cases} \tag{11}
\]
When t becomes very large in the late search stage, the following relationship between pbest_w(t), pbest_l(t) and gbest(t) holds:
\[
\begin{cases}
pbest_w(t) \approx gbest(t),\\
pbest_l(t) \approx gbest(t).
\end{cases} \tag{12}
\]
For any particle i in P(t), there are two possible behaviors after it participates in a competition:
1) X_i(t+1) = X_i(t), if X_i(t) is a winner;
2) X_i(t+1) is updated using (6) and (7), if X_i(t) is a loser.
In case X_i(t) is a winner, the particle will not be updated. Therefore, we only need to consider the case in which X_i(t) is a loser and is then updated. Without loss of generality, we can rewrite (6) and (7) by considering a one-dimensional deterministic case:
\[
V_i(t+1) = \frac{1}{2}V_i(t) + \frac{1}{2}\bigl(X_w(t) - X_i(t)\bigr) + \frac{\varphi}{2}\bigl(\bar{X}(t) - X_i(t)\bigr), \tag{15}
\]
\[
X_i(t+1) = X_i(t) + V_i(t+1). \tag{16}
\]

Proof. Let
\[
\theta = \frac{1+\varphi}{2}, \tag{17}
\]
\[
p = \frac{1}{1+\varphi}X_w(t) + \frac{\varphi}{1+\varphi}\bar{X}(t). \tag{18}
\]
Then (15) and (16) can be rewritten as
\[
y(t+1) = A\,y(t) + B\,p,
\]
where
\[
y(t) = \begin{bmatrix} V_i(t) \\ X_i(t) \end{bmatrix}, \quad
A = \begin{bmatrix} \frac{1}{2} & -\theta \\[2pt] \frac{1}{2} & 1-\theta \end{bmatrix}, \quad
B = \begin{bmatrix} \theta \\ \theta \end{bmatrix}. \tag{19}
\]
The eigenvalues of A satisfy
\[
\lambda^2 - \Bigl(\frac{3}{2}-\theta\Bigr)\lambda + \frac{1}{2} = 0, \tag{21}
\]
whose roots are
\[
\lambda_1 = \frac{3-2\theta}{4} + \frac{1}{2}\sqrt{\Bigl(\frac{3-2\theta}{2}\Bigr)^2 - 2}, \tag{22}
\]
\[
\lambda_2 = \frac{3-2\theta}{4} - \frac{1}{2}\sqrt{\Bigl(\frac{3-2\theta}{2}\Bigr)^2 - 2}. \tag{23}
\]
It should also be pointed out that the proof does not guarantee convergence to the global optimum.
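As a quick numeric check of the deterministic system reconstructed above, the eigenvalues of A in (19) can be evaluated for a few values of φ; since det(A) = 1/2, complex eigenvalues have modulus sqrt(1/2) < 1, so the iteration converges in those cases. The snippet below is only a sanity check under the stated reconstruction, not part of the original analysis.

```python
import numpy as np

for phi in (0.0, 0.1, 0.2, 0.3):
    theta = (1 + phi) / 2                  # (17)
    A = np.array([[0.5, -theta],
                  [0.5, 1.0 - theta]])     # (19)
    spectral_radius = np.abs(np.linalg.eigvals(A)).max()
    print(f"phi = {phi:.1f}  spectral radius = {spectral_radius:.4f}")
```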
All statistical results, unless otherwise specified, are averaged over 25 independent runs. For each independent run, the maximum number of fitness evaluations (FEs), as recommended in [43], is set to 5000 × n, where n is the search dimension of the test functions. In the comparisons between different statistical results, two-tailed t-tests are conducted at a significance level of α = 0.05.
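To make the comparison protocol concrete, the two-tailed t-test between two sets of 25 run errors can be carried out with SciPy as sketched below; the error arrays are randomly generated placeholders, not data from the paper.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
errors_a = rng.lognormal(mean=-5.0, sigma=0.3, size=25)  # placeholder: 25 run errors of algorithm A
errors_b = rng.lognormal(mean=-4.0, sigma=0.3, size=25)  # placeholder: 25 run errors of algorithm B

t_stat, p_value = ttest_ind(errors_a, errors_b)
print(t_stat, p_value, p_value < 0.05)  # significant at the 0.05 level?
```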
A. Parameter settings
Fig. 6. Statistical results of optimization errors obtained by CSO on 2 non-separable functions f2, f3 and 2 separable functions f1, f6 of 500 dimensions with different swarm sizes m varying from 25 to 300. Panels (a) f2, (b) f3, (c) f1 and (d) f6 plot the optimization result against the swarm size.
Fig. 7. Fitting curves describing the relationship between the social factor φ and the swarm size m that lead to the best search performance, obtained using logarithmic linear regression analysis. The plot shows the data and the fitting curves of φmin and φmax.
TABLE I
STATISTICAL RESULTS (MEAN VALUES AND STANDARD DEVIATIONS) OF OPTIMIZATION ERRORS OBTAINED BY CSO ON 2 NON-SEPARABLE FUNCTIONS f2, f3 AND 2 SEPARABLE FUNCTIONS f1, f6 OF 500 DIMENSIONS WITH THE SWARM SIZE m VARYING FROM 200 TO 1000 AND φ FROM 0 TO 0.3.

Swarm size | Function | φ = 0 | φ = 0.1 | φ = 0.2 | φ = 0.3
m = 200  | f2 | 4.79E+01(1.97E+00) | 8.26E+01(2.85E+00) | 8.58E+01(3.48E+00) | 8.45E+01(1.27E+00)
m = 200  | f3 | 5.95E+02(1.55E+02) | 8.25E+02(5.28E+01) | 1.07E+07(4.57E+06) | 4.33E+09(6.06E+08)
m = 200  | f1 | 1.08E-09(2.26E-10) | 4.73E-23(8.70E-25) | 1.51E+01(1.77E+01) | 3.50E+04(3.77E+03)
m = 200  | f6 | 3.24E-06(2.96E-07) | 3.57E-13(1.02E-14) | 2.78E+00(1.75E-01) | 1.04E+01(7.18E-01)
m = 400  | f2 | 6.09E+01(1.06E+00) | 5.47E+01(3.46E+00) | 7.41E+01(7.92E-01) | 6.09E+01(1.06E+00)
m = 400  | f3 | 1.31E+06(1.22E+05) | 4.90E+02(1.27E-01) | 5.01E+03(6.21E+02) | 2.75E+08(3.96E+07)
m = 400  | f1 | 3.17E+00(3.86E-01) | 4.38E-16(5.76E-17) | 3.22E-22(2.41E-23) | 1.29E+03(2.69E+02)
m = 400  | f6 | 2.94E-01(1.76E-02) | 1.49E-09(4.36E-11) | 8.82E-13(1.42E-14) | 4.00E+00(1.85E-01)
m = 600  | f2 | 6.52E+01(7.68E-01) | 2.74E+01(3.76E+00) | 6.72E+01(1.26E+00) | 7.17E+01(8.32E-01)
m = 600  | f3 | 2.75E+08(3.96E+07) | 4.92E+02(4.00E-01) | 1.39E+03(3.84E+02) | 5.28E+07(1.09E+07)
m = 600  | f1 | 3.26E+02(2.84E+01) | 2.57E-08(1.69E-09) | 5.46E-22(2.72E-23) | 3.73E+01(1.61E+01)
m = 600  | f6 | 3.33E+00(3.07E-02) | 1.27E-05(6.24E-07) | 1.10E-12(1.48E-14) | 1.91E+00(8.08E-02)
m = 1000 | f2 | 7.12E+01(7.47E-01) | 3.22E+01(4.68E-01) | 6.11E+01(1.61E+00) | 6.89E+01(7.57E-01)
m = 1000 | f3 | 1.40E+09(1.26E+08) | 6.43E+02(1.68E+01) | 5.97E+02(6.90E+01) | 8.34E+06(1.76E+06)
m = 1000 | f1 | 5.74E+03(4.94E+02) | 1.19E-02(5.35E-04) | 7.26E-12(2.89E-13) | 2.01E-18(7.69E-19)
m = 1000 | f6 | 6.75E+00(4.61E-02) | 9.58E-03(3.70E-04) | 1.60E-07(8.03E-09) | 2.55E-11(5.88E-12)

Two-tailed t-tests have been conducted between the statistical results in each row. If one result is significantly better than all the other errors, it is highlighted. Note that the statistical results are shown in the order of f2, f3 (non-separable functions) and f1, f6 (separable functions), to clearly see the different φ values for non-separable and separable functions.
TABLE II
THE BEST COMBINATIONS OF THE SWARM SIZE m AND THE SOCIAL FACTOR φ IN THE OPTIMIZATION OF 500-D f1, f2, f3 AND f6.

Swarm size | φmin | φmax
m = 200    | 0    | 0.1
m = 400    | 0.1  | 0.2
m = 600    | 0.1  | 0.2
m = 1000   | 0.1  | 0.3

φmax and φmin denote the maximal and minimal φ that perform best with the corresponding m, respectively.
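The logarithmic linear regression behind Fig. 7 can be illustrated by fitting φ = a·ln(m) + b to the (m, φ) pairs of Table II. The sketch below uses np.polyfit as a simple stand-in for whatever fitting procedure was actually used, so the resulting coefficients are illustrative only.

```python
import numpy as np

m = np.array([200.0, 400.0, 600.0, 1000.0])
phi_min = np.array([0.0, 0.1, 0.1, 0.1])   # Table II, best minimal phi
phi_max = np.array([0.1, 0.2, 0.2, 0.3])   # Table II, best maximal phi

# fit phi = a * ln(m) + b separately for the lower and upper boundary
a_min, b_min = np.polyfit(np.log(m), phi_min, 1)
a_max, b_max = np.polyfit(np.log(m), phi_max, 1)

def phi_range(swarm_size):
    """Predicted [phi_min, phi_max] interval for a given swarm size."""
    x = np.log(swarm_size)
    return a_min * x + b_min, a_max * x + b_max

print(phi_range(500))
```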
TABLE III
PARAMETER SETTINGS FOR THE SEVEN FUNCTIONS OF 100-D, 500-D AND 1000-D.

Parameter | Dimension | Separable functions | Non-separable functions
m         | 100-D     | 100  | 100
m         | 500-D     | 250  | 250
m         | 1000-D    | 500  | 500
φ         | 100-D     | 0    | 0
φ         | 500-D     | 0.1  | 0.05
φ         | 1000-D    | 0.15 | 0.1
B. Benchmark comparisons
In order to verify the performance of CSO for large scale optimization, CSO has been compared with a number of state-of-the-art algorithms tailored for large scale optimization on the CEC08 test functions with dimensions of 100, 500 and 1000. The compared algorithms include CCPSO2 [52], MLCC [53], sep-CMA-ES [54], EPUS-PSO [55] and DMS-PSO [30]. The same criteria proposed in
[Figure: convergence profiles of the fitness error versus the number of FEs (×10^6) obtained by CSO, CCPSO2 and MLCC on (a) f1 and (b) f5.]
TABLE IV
THE STATISTICAL RESULTS AND THE t VALUES (IN SQUARE BRACKETS) OF OPTIMIZATION ERRORS ON 100-D TEST FUNCTIONS.

100-D | CSO | CCPSO2 | MLCC | sep-CMA-ES | EPUS-PSO | DMS-PSO
f1 | 9.11E-29(1.10E-28) | 7.73E-14(3.23E-14) [-1.20E+01] | 6.82E-14(2.32E-14) [-1.47E+01] | 9.02E-15(5.53E-15) [-8.16E+00] | 7.47E-01(1.70E-01) [-2.20E+01] | 0.00E+00(0.00E+00) [4.14E+00]
f2 | 3.35E+01(5.38E+00) | 6.08E+00(7.83E+00) [1.44E+01] | 2.53E+01(8.73E+00) [4.00E+00] | 2.31E+01(1.39E+01) [3.49E+00] | 1.86E+01(2.26E+00) [1.28E+01] | 3.65E+00(7.30E-01) [2.75E+01]
f3 | 3.90E+02(5.53E+02) | 4.23E+02(8.65E+02) [-1.61E-01] | 1.50E+02(5.72E+01) [2.16E+00] | 4.31E+00(1.26E+01) [3.49E+00] | 4.99E+03(5.35E+03) [-4.28E+00] | 2.83E+02(9.40E+02) [4.91E-01]
f4 | 5.60E+01(7.48E+00) | 3.98E-02(1.99E-01) [3.74E+01] | 4.39E-13(9.21E-14) [3.74E+01] | 2.78E+02(3.43E+01) [-3.16E+01] | 4.71E+02(5.94E+01) [-3.47E+01] | 1.83E+02(2.16E+01) [-2.78E+01]
f5 | 0.00E+00(0.00E+00) | 3.45E-03(4.88E-03) [-3.53E+00] | 3.41E-14(1.16E-14) [-1.47E+01] | 2.96E-04(1.48E-03) [-1.00E+00] | 3.72E-01(5.60E-02) [-3.32E+01] | 0.00E+00(0.00E+00) [0.00E+00]
f6 | 1.20E-14(1.52E-15) | 1.44E-13(3.06E-14) [-2.15E+01] | 1.11E-13(7.87E-15) [-6.18E+01] | 2.12E+01(4.02E-01) [-2.64E+02] | 2.06E+00(4.40E-01) [-2.34E+01] | 0.00E+00(0.00E+00) [3.95E+01]
f7 | -7.28E+05(1.88E+04) | -1.50E+03(1.04E+01) [-1.93E+02] | -1.54E+03(2.52E+00) [-1.93E+02] | -1.39E+03(2.64E+01) [-1.93E+02] | -8.55E+02(1.35E+01) [-1.93E+02] | -1.14E+03(8.48E+00) [-1.93E+02]
w/t/l | - | 4/1/2 | 4/0/3 | 4/1/2 | 6/0/1 | 2/2/3
TABLE V
THE STATISTICAL RESULTS AND THE t VALUES (IN SQUARE BRACKETS) OF OPTIMIZATION ERRORS ON 500-D TEST FUNCTIONS.

500-D | CSO | CCPSO2 | MLCC | sep-CMA-ES | EPUS-PSO | DMS-PSO
f1 | 6.57E-23(3.90E-24) | 7.73E-14(3.23E-14) [-1.20E+01] | 4.30E-13(3.31E-14) [-6.50E+01] | 2.25E-14(6.10E-15) [-1.84E+01] | 8.45E+01(6.40E+00) [-6.60E+01] | 0.00E+00(0.00E+00) [8.42E+01]
f2 | 2.60E+01(2.40E+00) | 5.79E+01(4.21E+01) [-3.78E+00] | 6.67E+01(5.70E+00) [-3.29E+01] | 2.12E+02(1.74E+01) [-5.29E+01] | 4.35E+01(5.51E-01) [-3.55E+01] | 6.89E+01(2.01E+00) [-6.85E+01]
f3 | 5.74E+02(1.67E+02) | 7.24E+02(1.54E+02) [-3.30E+00] | 9.25E+02(1.73E+02) [-7.30E+00] | 2.93E+02(3.59E+01) [8.23E+00] | 5.77E+04(8.04E+03) [-3.55E+01] | 4.67E+07(5.87E+06) [-3.98E+01]
f4 | 3.19E+02(2.16E+01) | 3.98E-02(1.99E-01) [7.38E+01] | 1.79E-11(6.31E-11) [7.38E+01] | 2.18E+03(1.51E+02) [-6.10E+01] | 3.49E+03(1.12E+02) [-1.39E+02] | 1.61E+03(1.04E+02) [-6.08E+01]
f5 | 2.22E-16(0.00E+00) | 1.18E-03(4.61E-03) [-1.28E+00] | 2.13E-13(2.48E-14) [-4.29E+01] | 7.88E-04(2.82E-03) [-1.40E+00] | 1.64E+00(4.69E-02) [-1.75E+02] | 0.00E+00(0.00E+00) [7.85E+84]
f6 | 4.13E-13(1.10E-14) | 5.34E-13(8.61E-14) [-6.97E+00] | 5.34E-13(7.01E-14) [-8.53E+00] | 2.15E+01(3.10E-01) [-3.47E+02] | 6.64E+00(4.49E-01) [-7.39E+01] | 2.00E+00(9.66E-02) [-1.04E+02]
f7 | -1.97E+06(4.08E+04) | -7.23E+03(4.16E+01) [-2.41E+02] | -7.43E+03(8.03E+00) [-2.41E+02] | -6.37E+03(7.59E+01) [-2.41E+02] | -3.51E+03(2.10E+01) [-2.41E+02] | -4.20E+03(1.29E+01) [-2.41E+02]
w/t/l | - | 5/1/1 | 6/0/1 | 5/1/1 | 7/0/0 | 5/0/2
TABLE VI
THE STATISTICAL RESULTS AND THE t VALUES (IN SQUARE BRACKETS) OF OPTIMIZATION ERRORS ON 1000-D TEST FUNCTIONS.

1000-D | CSO | CCPSO2 | MLCC | sep-CMA-ES | EPUS-PSO | DMS-PSO
f1 | 1.09E-21(4.20E-23) | 5.18E-13(9.61E-14) [-2.70E+01] | 8.46E-13(5.01E-14) [-8.44E+01] | 7.81E-15(1.52E-15) [-2.57E+01] | 5.53E+02(2.86E+01) [-9.67E+01] | 0.00E+00(0.00E+00) [1.30E+02]
f2 | 4.15E+01(9.74E-01) | 7.82E+01(4.25E+01) [-4.32E+00] | 1.09E+02(4.75E+00) [-6.96E+01] | 3.65E+02(9.02E+00) [-1.78E+02] | 4.66E+01(4.00E-01) [-2.42E+01] | 9.15E+01(7.14E-01) [-2.07E+02]
f3 | 1.01E+03(3.02E+01) | 1.33E+03(2.63E+02) [-6.04E+00] | 1.80E+03(1.58E+02) [-2.46E+01] | 9.10E+02(4.54E+01) [9.17E+00] | 8.37E+05(1.52E+05) [-2.75E+01] | 8.98E+09(4.39E+08) [-1.02E+02]
f4 | 6.89E+02(3.10E+01) | 1.99E-01(4.06E-01) [1.11E+02] | 1.37E-10(3.37E-10) [1.11E+02] | 5.31E+03(2.48E+02) [-9.24E+01] | 7.58E+03(1.51E+02) [-2.24E+02] | 3.84E+03(1.71E+02) [-9.07E+01]
f5 | 2.26E-16(2.18E-17) | 1.18E-03(3.27E-03) [-1.80E+00] | 4.18E-13(2.78E-14) [-7.51E+01] | 3.94E-04(1.97E-03) [-1.00E+00] | 5.89E+00(3.91E-01) [-7.53E+01] | 0.00E+00(0.00E+00) [5.18E+01]
f6 | 1.21E-12(2.64E-14) | 1.02E-12(1.68E-13) [5.59E+00] | 1.06E-12(7.68E-14) [9.24E+00] | 2.15E+01(3.19E-01) [-3.37E+02] | 1.89E+01(2.49E+00) [-3.80E+01] | 7.76E+00(8.92E-02) [-4.35E+02]
f7 | -3.83E+06(4.82E+04) | -1.43E+04(8.27E+01) [-3.96E+02] | -1.47E+04(1.51E+01) [-3.96E+02] | -1.25E+04(9.36E+01) [-3.96E+02] | -6.62E+03(3.18E+01) [-3.97E+02] | -7.50E+03(1.63E+01) [-3.97E+02]
w/t/l | - | 4/1/2 | 5/0/2 | 5/1/1 | 7/0/0 | 5/0/2
In the w/t/l rows, w, t and l denote the number of functions on which CSO wins, ties and loses, respectively, in comparison with the corresponding algorithm.
The statistical results of the optimization errors show that CSO has significantly better overall performance than all five compared algorithms on the 500-D and 1000-D functions. CSO and DMS-PSO have similar performance on the 100-D functions, and both outperform the remaining four algorithms. DMS-PSO is always able to find the global optimum of f1 and f5, regardless of the number of search dimensions, but performs poorly on the other five functions in comparison with CSO, especially when the dimensionality becomes higher. In comparison, MLCC has yielded the best results on f4, which is a shifted Rastrigin function with a large number of local optima. Such outstanding
From the statistical results of the optimization errors summarized in Tables IV, V and VI, it can be noticed that CSO shows very good scalability to the search dimension, i.e., its performance does not deteriorate seriously as the dimension increases.
To further examine the search ability of CSO on the
functions of even higher dimensions, e.g., 2000-D or even
5000-D, additional experiments have been performed on f1
to f6 of 2000 and 5000 dimensions. f7 is excluded from this
experiment for the reason that its global optimum is dimension
dependent and thus it is not easy to evaluate the scalability.
It must be stressed that optimization of problems of 2000
and 5000 dimensions is very challenging for CSO since it
has not adopted any particular strategies tailored for solving
large scale optimization, e.g., the divide-and-conquer strategy.
Furthermore, to the best of our knowledge, optimization of
problems of a dimension larger than 1000 has only been
reported by Li and Yao in [52], where 2000-dimensional f1 ,
f3 and f7 have been employed to test their proposed CCPSO2.
TABLE VII
PARAMETER SETTINGS OF CSO ON 2000-D AND 5000-D FUNCTIONS.

Parameter | Dimension | Separable | Non-separable
m         | 2000-D    | 1000 | 1000
m         | 5000-D    | 1500 | 1500
φ         | 2000-D    | 0.2  | 0.15
φ         | 5000-D    | 0.2  | 0.15

TABLE VIII
STATISTICAL RESULTS OF THE OPTIMIZATION ERRORS OBTAINED BY CSO ON 2000-D AND 5000-D FUNCTIONS.

Function | D = 2000 | D = 5000
f1 | 1.66E-20(3.36E-22) | 1.43E-19(3.33E-21)
f2 | 6.17E+01(1.31E+00) | 9.82E+01(9.78E-01)
f3 | 2.10E+03(5.14E+01) | 7.30E+03(1.26E+02)
f4 | 2.81E+03(3.69E+01) | 7.80E+03(8.73E+01)
f5 | 3.33E-16(0.00E+00) | 4.44E-16(0.00E+00)
f6 | 3.26E-12(5.43E-14) | 6.86E-12(5.51E-14)

Since the time cost of a single run on a 5000-D function is extremely high, the statistical results of optimization errors are averaged over 10 independent runs.
with
\[
\bar{x}_j = \frac{1}{m}\sum_{i=1}^{m} x_{ji},
\]
i.e., \bar{x}_j is the mean of the j-th coordinate over the m particles in the swarm.
Fig. 9. The statistical results of optimization errors obtained by CSO, CCPSO2, MLCC, sep-CMA-ES, EPUS-PSO and DMS-PSO on 100-D, 500-D and 1000-D f1 to f6, together with the statistical results of optimization errors obtained by CCPSO2 and sep-CMA-ES on 2000-D f1, f3 and the statistical results of optimization errors obtained by CSO on 2000-D and 5000-D f1 to f6. Note that due to the logarithmic scale used in the plots, errors of 0 cannot be shown. Panels (a) to (f) correspond to f1 to f6 and plot the optimization result against the dimension.
TABLE IX
STATISTICAL RESULTS OF OPTIMIZATION ERRORS OBTAINED BY CSO-N AND CSO ON 500-D FUNCTIONS.

Function | CSO-n | CSO | t value
f1 | 2.71E-11(5.77E-12) | 6.57E-23(3.90E-24) | 2.35E+01
f2 | 4.61E+01(1.02E+00) | 2.60E+01(2.40E+00) | 3.85E+01
f3 | 5.37E+02(4.00E+01) | 5.74E+02(1.67E+02) | -1.08E+00
f4 | 3.95E+03(1.32E+02) | 3.19E+02(2.16E+01) | 1.36E+02
f5 | 4.04E-12(7.00E-13) | 2.22E-16(0.00E+00) | 2.89E+01
f6 | 4.90E-07(5.98E-08) | 4.13E-13(1.10E-14) | 4.10E+01
f7 | -1.68E+06(8.17E+03) | -1.97E+06(4.08E+04) | 3.48E+01
TABLE X
STATISTICAL RESULTS OF OPTIMIZATION ERRORS OBTAINED BY CSO-N AND CSO ON 1000-D FUNCTIONS.

m = 500 | CSO-n | CSO | t value
f1 | 7.77E-01(2.30E-02) | 1.09E-21(4.20E-23) | 1.69E+02
f2 | 8.11E+01(6.48E-01) | 4.15E+01(9.74E-01) | 1.69E+02
f3 | 1.31E+07(7.74E+05) | 1.01E+03(3.02E+01) | 8.46E+01
f4 | 1.02E+04(5.51E+01) | 6.89E+02(3.10E+01) | 7.52E+02
f5 | 4.22E-02(1.77E-03) | 2.26E-16(2.18E-17) | 1.19E+02
f6 | 5.95E-02(6.32E-03) | 1.21E-12(2.64E-14) | 4.71E+01
f7 | -2.58E+06(3.06E+04) | -3.83E+06(4.82E+04) | 1.09E+02
Fig. 10. The swarm diversity profiles during 500,000 fitness evaluations (FEs) of CSO with neighborhood control (denoted as CSO-n) and the original CSO on 500-D functions: (a) the separable function f1 and (b) the non-separable function f3. Both panels plot the swarm diversity against the number of FEs (×10^5).
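The swarm diversity plotted in Fig. 10 can be computed, for instance, as the average distance of the particles to the swarm's mean position (whose components x̄_j are defined above). This particular definition is a common choice and is assumed here as an illustration rather than quoted from the paper.

```python
import numpy as np

def swarm_diversity(X):
    """Average Euclidean distance of the m particles (rows of X, shape (m, n))
    to the mean position of the swarm."""
    x_bar = X.mean(axis=0)                      # x̄_j = (1/m) * sum_i x_ji
    return np.linalg.norm(X - x_bar, axis=1).mean()
```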
TABLE XI

Function | CSO-n | CSO | t value
f1 | 1.51E-25(3.21E-27) | 4.10E-23(9.28E-25) | -2.20E+02
f2 | 5.23E+01(1.11E+01) | 8.20E+01(4.53E+00) | -1.24E+01
f3 | 7.93E+02(1.03E+02) | 9.32E+02(4.15E+02) | -1.63E+00
f4 | 4.18E+02(3.04E+01) | 6.45E+02(2.66E+01) | -2.81E+01
f5 | 3.11E-16(4.44E-17) | 2.46E-03(4.93E-03) | -2.49E+00
f6 | 4.09E-14(1.74E-15) | 1.08E+00(1.41E-01) | -3.83E+01
f7 | -1.79E+06(1.28E+04) | -2.10E+06(5.73E+03) | 1.11E+02

TABLE XII
STATISTICAL RESULTS OF OPTIMIZATION ERRORS OBTAINED BY CSO-N AND CSO ON 1000-D FUNCTIONS.

m = 200 | CSO-n | CSO | p value
f1 | 3.60E-18(9.38E-19) | 5.22E-13(3.70E-13) | -7.05E+00
f2 | 6.50E+01(1.10E+00) | 1.03E+02(3.24E+00) | -5.55E+01
f3 | 1.61E+03(7.96E+01) | 1.95E+03(2.08E+02) | -7.63E+00
f4 | 1.04E+03(4.85E+01) | 2.14E+03(7.51E+01) | -6.15E+01
f5 | 7.77E-16(0.00E+00) | 2.46E-03(4.93E-03) | -2.49E+00
f6 | 1.37E-10(1.84E-11) | 3.03E+00(2.67E-01) | -5.67E+01
f7 | -2.90E+06(1.69E+04) | -4.24E+06(3.05E+04) | 1.92E+02

V. CONCLUSION
[8] Y.-J. Gong, J. Zhang, H. Chung, W.-N. Chen, Z.-H. Zhan, Y. Li, and Y.-H. Shi, "An efficient resource allocation scheme using particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 16, no. 6, pp. 801-816, 2012.
[9] S.-Y. Ho, H.-S. Lin, W.-H. Liauh, and S.-J. Ho, "OPSO: Orthogonal particle swarm optimization and its application to task assignment problems," IEEE Transactions on Systems, Man and Cybernetics, Part A: Systems and Humans, vol. 38, no. 2, pp. 288-298, 2008.
[10] Z. Zhu, J. Zhou, Z. Ji, and Y.-H. Shi, "DNA sequence compression using adaptive particle swarm optimization-based memetic algorithm," IEEE Transactions on Evolutionary Computation, vol. 15, no. 5, pp. 643-658, 2011.
[11] Y. Shi et al., "Particle swarm optimization: developments, applications and resources," in Proceedings of IEEE Congress on Evolutionary Computation, vol. 1. IEEE, 2001, pp. 81-86.
[12] Y. Yang and J. O. Pedersen, "A comparative study on feature selection in text categorization," in Proceedings of International Conference on Machine Learning. Morgan Kaufmann Publishers, 1997, pp. 412-420.
[13] W.-N. Chen, J. Zhang, Y. Lin, and E. Chen, "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241-258, 2013.
[14] Y. Shi and R. Eberhart, "Parameter selection in particle swarm optimization," in Evolutionary Programming VII. Springer, 1998, pp. 591-600.
[15] Y. Shi and R. C. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of IEEE Congress on Evolutionary Computation. IEEE, 1999, pp. 1945-1950.
[16] Y. Shi and R. C. Eberhart, "Fuzzy adaptive particle swarm optimization," in Proceedings of IEEE Congress on Evolutionary Computation, vol. 1. IEEE, 2001, pp. 101-106.
[17] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240-255, 2004.
[18] R. Cheng and M. Yao, "Particle swarm optimizer with time-varying parameters based on a novel operator," Applied Mathematics and Information Sciences, vol. 5, no. 2, pp. 33-38, 2011.
[19] M. Hu, T. Wu, and J. D. Weir, "An adaptive particle swarm optimization with multiple adaptive methods," IEEE Transactions on Evolutionary Computation, vol. 17, no. 5, pp. 705-720, 2013.
[20] P. N. Suganthan, "Particle swarm optimiser with neighbourhood operator," in Proceedings of IEEE Congress on Evolutionary Computation, vol. 3. IEEE, 1999, pp. 1958-1962.
[21] J. Kennedy, "Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance," in Proceedings of IEEE Congress on Evolutionary Computation, vol. 3. IEEE, 1999, pp. 1931-1938.
[22] J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of IEEE Congress on Evolutionary Computation, vol. 2. IEEE, 2002, pp. 1671-1676.
[23] R. Mendes, J. Kennedy, and J. Neves, "The fully informed particle swarm: simpler, maybe better," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 204-210, 2004.
[24] J. J. Liang, A. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281-295, 2006.
[25] B. Qu, P. Suganthan, and S. Das, "A distance-based locally informed particle swarm model for multimodal optimization," IEEE Transactions on Evolutionary Computation, vol. 17, no. 3, pp. 387-402, 2013.
[26] J. Robinson, S. Sinton, and Y. Rahmat-Samii, "Particle swarm, genetic algorithm, and their hybrids: optimization of a profiled corrugated horn antenna," in Proceedings of IEEE Antennas and Propagation Society International Symposium, vol. 1. IEEE, 2002, pp. 314-317.
[27] C.-F. Juang, "A hybrid of genetic algorithm and particle swarm optimization for recurrent network design," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 34, no. 2, pp. 997-1006, 2004.
[28] N. Holden and A. A. Freitas, "A hybrid particle swarm/ant colony algorithm for the classification of hierarchical biological data," in Proceedings of the IEEE Swarm Intelligence Symposium. IEEE, 2005, pp. 100-107.
[29] P. Shelokar, P. Siarry, V. K. Jayaraman, and B. D. Kulkarni, "Particle swarm and ant colony algorithms hybridized for improved continuous optimization," Applied Mathematics and Computation, vol. 188, no. 1, pp. 129-142, 2007.