
A Comparative Study of Differential Evolution, Particle Swarm Optimization, and Evolutionary Algorithms on Numerical Benchmark Problems
Jakob Vesterstrøm and René Thomsen
BiRC - Bioinformatics Research Center
University of Aarhus, Ny Munkegade, Bldg. 540
DK-8000 Aarhus C, Denmark
Email: [email protected], [email protected]
Abstract-Several extensions to evolutionary algorithms (EAs) and particle swarm optimization (PSO) have been suggested during the last decades, offering improved performance on selected benchmark problems. Recently, another search heuristic termed differential evolution (DE) has shown superior performance in several real-world applications. In this paper we evaluate the performance of DE, PSO, and EAs regarding their general applicability as numerical optimization techniques. The comparison is performed on a suite of 34 widely used benchmark problems. The results from our study show that DE generally outperforms the other algorithms. However, on two noisy functions, both DE and PSO were outperformed by the EA.

Overall, the experimental results show that DE was far more efficient and robust (with respect to reproducing the results in several runs) compared to PSO and the EA. This suggests that more emphasis should be put on DE when solving numerical problems with real-valued parameters. However, on two noisy test problems, DE was outperformed by the other algorithms.
I. INTRODUCTION
The evolutionary computation (EC) community has shown
a significant interest in optimization for many years. In par-
ticular, there has been a focus on global optimization of nu-
merical, real-valued 'black-box' problems for which exact and
analytical methods do not apply. Since the mid-sixties many
general-purpose optimization algorithms have been proposed
for finding near-optimal solutions to this class of problems;
most notably: evolution strategies (ES) [8], evolutionary pro-
gramming (EP) [3], and genetic algorithms (GA) [6].
Many efforts have also been devoted to comparing these algorithms with each other. Typically, such comparisons have been
based on artificial numerical benchmark problems. The goal of
many studies was to verify that one algorithm outperformed
another on a given set of problems. In general, it has been
possible to improve a given standard method within a restricted
set of benchmark problems by making minor modifications to
it.
Recently, particle swarm optimization (PSO) [7] and differential evolution (DE) [11] have been introduced, and particularly PSO has received increased interest from the EC community. Both techniques have shown great promise in several real-world applications [4], [5], [12], [14]. However, to our knowledge, a comparative study of DE, PSO, and GAs on a large and diverse set of problems has never been made.
In this study, we investigated the performance of DE, PSO, and an evolutionary algorithm (EA)¹ on a selection of 34 numerical benchmark problems. The main objective was to examine whether one of the tested algorithms would outperform all others on a majority of the problems. Additionally, since we used a rather large number of benchmark problems, the experiments would also reveal whether the algorithms would have any difficulties or preferences.
The paper is organized as follows. In Section II we introduce
the methods used in the study: DE, PSO, and the EA. Further,
Section III outlines the experimental setup, parameter settings,
and benchmark problems used. The experimental results are
presented in Section IV. Finally, Section V contains a discus-
sion of the experimental results.
II. METHODS
A. Differential Evolution
The DE algorithm was introduced by Storn and Price in 1995 [11]. It resembles the structure of an EA, but differs from traditional EAs in its generation of new candidate solutions and by its use of a 'greedy' selection scheme. DE works as follows: First, all individuals are randomly initialized and evaluated using the fitness function provided. Afterwards, the following process will be executed as long as the termination condition is not fulfilled: For each individual in the population, an offspring is created using the weighted difference of parent solutions. In this study we used the DE/rand/1/exp scheme shown in Figure 1. The offspring replaces the parent if it is fitter. Otherwise, the parent survives and is passed on to the next iteration of the algorithm.
¹The EA used in this study resembled a real-valued GA.
procedure create offspring O[i] from parent P[i] {
    O[i] = P[i]  // copy parent genotype to offspring
    randomly select parents P[i1], P[i2], P[i3],
        where i1 != i2 != i3 != i
    n = U(0, dim)
    for (j = 0; j < dim AND U(0,1) < CR; j++) {
        O[i][n] = P[i1][n] + F * (P[i2][n] - P[i3][n])
        n = (n + 1) mod dim
    }
}
Fig. 1. Pseudo-code for creating an offspring in DE. U(0, x) is a uniformly distributed number between 0 and x, CR is the probability of crossover, F is the scaling factor, and dim is the number of problem parameters (problem dimensionality).
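To make the scheme concrete, the following is a minimal Python sketch of DE/rand/1/exp offspring creation and the greedy replacement described above, for minimization problems. The NumPy-based representation and the function names are ours, not the paper's:

    import numpy as np

    def create_offspring(pop, i, F=0.5, CR=0.9, rng=np.random.default_rng()):
        """DE/rand/1/exp: perturb a copy of parent i with the weighted
        difference of two other randomly chosen population members."""
        popsize, dim = pop.shape
        offspring = pop[i].copy()
        # pick three distinct indices, all different from i
        i1, i2, i3 = rng.choice([j for j in range(popsize) if j != i],
                                size=3, replace=False)
        n = rng.integers(dim)  # random starting parameter, as in U(0, dim)
        j = 0
        # exponential crossover: mutate consecutive (wrapped) parameters
        # while U(0,1) < CR, touching at most dim of them
        while j < dim and rng.random() < CR:
            offspring[n] = pop[i1][n] + F * (pop[i2][n] - pop[i3][n])
            n = (n + 1) % dim
            j += 1
        return offspring

    def de_step(pop, fitness, f, **kw):
        """One generation with DE's greedy selection: the offspring
        replaces the parent only if it is at least as fit."""
        for i in range(len(pop)):
            child = create_offspring(pop, i, **kw)
            fc = f(child)
            if fc <= fitness[i]:
                pop[i], fitness[i] = child, fc
        return pop, fitness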
B. Particle Swarm Optimization
PSO was introduced by Kennedy and Eberhart in 1995. It was inspired by the swarming behavior displayed by a flock of birds, a school of fish, or even human social behavior being influenced by other individuals [7].
PSO consists of a swarm of particles moving in an n-dimensional, real-valued search space of possible problem solutions. Every particle has a position vector x encoding a candidate solution to the problem (similar to the genotype in EAs) and a velocity vector v. Moreover, each particle contains a small memory that stores its own best position seen so far, p, and a global best position, g, obtained through communication with its neighbor particles. In this study we used the fully connected network topology for passing on information (see [15] for more details).
Intuitively, the information about good solutions spreads through the swarm, and thus the particles tend to move to good areas in the search space. At each time step t, the velocity is updated and the particle is moved to a new position. This new position is calculated as the sum of the previous position and the new velocity:

x(t+1) = x(t) + v(t+1)

The update of the velocity from the previous velocity to the new velocity is determined by the following equation:

v(t+1) = w·v(t) + U(0, φ1)·(p(t) − x(t)) + U(0, φ2)·(g(t) − x(t)),

where U(a, b) is a uniformly distributed number between a and b. The parameter w is called the inertia weight [10] and controls the magnitude of the old velocity v(t) in the calculation of the new velocity, whereas φ1 and φ2 determine the significance of p(t) and g(t), respectively. Furthermore, at any time step of the algorithm, v_i is constrained by the parameter v_max.

The swarm in PSO is initialized by assigning each particle to a uniformly and randomly chosen position in the search space. Velocities are initialized randomly in the range [−v_max, v_max].
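A minimal Python sketch of one synchronous PSO time step under the update rules above, using the fully connected topology; the helper names and array layout are ours:

    import numpy as np

    def pso_step(x, v, pbest, pbest_fit, f, w=0.6, phi1=1.8, phi2=1.8,
                 vmax=None, rng=np.random.default_rng()):
        """One PSO step for a minimization problem.
        x, v, pbest: (swarmsize, dim) arrays; pbest_fit: (swarmsize,)."""
        gbest = pbest[np.argmin(pbest_fit)]   # fully connected topology
        # v(t+1) = w*v(t) + U(0,phi1)*(p - x) + U(0,phi2)*(g - x)
        v = (w * v
             + rng.uniform(0, phi1, x.shape) * (pbest - x)
             + rng.uniform(0, phi2, x.shape) * (gbest - x))
        if vmax is not None:
            v = np.clip(v, -vmax, vmax)       # constrain by v_max
        x = x + v                             # x(t+1) = x(t) + v(t+1)
        fit = np.apply_along_axis(f, 1, x)
        improved = fit < pbest_fit            # update personal bests
        pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
        return x, v, pbest, pbest_fit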
C. Attractive and Repulsive PSO
The attractive and repulsive PSO (arPSO) [9], [15] was introduced by Vesterstrøm and Riget to overcome the problem of premature convergence [1]. The modification of the basic PSO scheme is to change the velocity update formula when the swarm diversity becomes less than a value d_low. This modification corresponds to repulsion of the particles instead of the usual attraction scheme. Thus, the velocity is updated according to:

v(t+1) = w·v(t) − U(0, φ1)·(p(t) − x(t)) − U(0, φ2)·(g(t) − x(t))

This will increase the diversity over some iterations, and eventually, when another value d_high is reached, the commonly used velocity update formula is used again. Thus, arPSO is able to zoom out when an optimum has been reached, followed by zooming in on another hot spot, possibly discovering a new optimum in the vicinity of the old one. Previously, arPSO was shown to be more robust than the basic PSO on problems with many optima [9].

The arPSO algorithm was included in this study as a representative of the large number of algorithmic extensions to PSO that try to avoid the problem of premature convergence. Other PSO extensions could have been chosen, but we selected this particular one since the performance of arPSO was as good as (or better than) many other extensions [15].
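The section does not spell out how diversity is measured; the original arPSO report [9] reportedly uses the particles' average distance to the swarm's average point. Under that assumption, a minimal Python sketch of the attract/repulse switching logic:

    import numpy as np

    def diversity(x):
        """Assumed measure: mean Euclidean distance of the particles to
        the swarm centroid (following the arPSO report [9]; the report
        may additionally normalize by the search-space diagonal)."""
        centroid = x.mean(axis=0)
        return np.linalg.norm(x - centroid, axis=1).mean()

    def update_mode(x, attracting, d_low=0.000005, d_high=0.25):
        """Return the sign of the two difference terms in the velocity
        update (+1 attraction, -1 repulsion) and the new mode."""
        d = diversity(x)
        if attracting and d < d_low:
            attracting = False    # swarm has collapsed: start repelling
        elif not attracting and d > d_high:
            attracting = True     # diversity restored: attract again
        return (1.0 if attracting else -1.0), attracting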
D. Evolutionary Algorithm
In this study we used a simple EA (SEA) that was previously found to work well on real-world problems [13]. The SEA works as follows: First, all individuals are randomly initialized and evaluated according to a given fitness function. Afterwards, the following process will be executed as long as the termination condition is not fulfilled: Each individual has a probability of being exposed to either mutation or recombination (or both). Mutation and recombination operators are applied with probability p_m and p_c, respectively. The mutation and recombination operators used are Cauchy mutation using an annealing scheme and arithmetic crossover, respectively. Finally, tournament selection [2] comparing pairs of individuals is applied to weed out the least fit individuals.
The Cauchy mutation operator is similar to the well-known Gaussian mutation operator, but the Cauchy distribution has thick tails that enable it to generate considerable changes more frequently than the Gaussian distribution. The Cauchy distribution has the form:

f(x) = (1/π) · β / (β² + (x − α)²),

where α ≥ 0, β > 0, −∞ < x < ∞ (α and β are parameters that affect the mean and spread of the distribution). All of the solution parameters are subject to mutation, and the variance is scaled with 0.1 x the range of the specific parameter in question.

Moreover, an annealing scheme was applied to decrease the value of β as a function of the elapsed number of generations t; α was fixed to 0. [The annealing function used in the study is not legible in the scanned original.]
In arithmetic crossover the offspring is generated as a weighted mean of each gene of the two parents, i.e.,

offspring_i = r · parent1_i + (1 − r) · parent2_i,

where offspring_i is the ith gene of the offspring and parent1_i and parent2_i refer to the ith gene of the two parents, respectively. The weight r is a uniformly random value between 0 and 1.
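A Python sketch of the two SEA variation operators. Since the paper's exact annealing function did not survive the scan, the decay β(t) = 1/(1+t) below is our assumption, not the authors':

    import numpy as np

    rng = np.random.default_rng()

    def cauchy_mutation(x, lower, upper, t):
        """Cauchy mutation with alpha = 0 and annealed beta. The step is
        scaled with 0.1 x the range of each parameter, as in the paper;
        beta(t) = 1/(1+t) is an assumed annealing schedule."""
        beta = 1.0 / (1.0 + t)
        scale = 0.1 * (upper - lower)            # per-parameter range scaling
        step = beta * rng.standard_cauchy(x.shape)
        return np.clip(x + scale * step, lower, upper)

    def arithmetic_crossover(p1, p2):
        """Offspring is a random convex combination of the parents:
        offspring_i = r*parent1_i + (1-r)*parent2_i with r ~ U(0,1)."""
        r = rng.random()
        return r * p1 + (1.0 - r) * p2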
E. Additional Remarks
All methods described above used truncation of infeasible solutions to the nearest boundary of the search space. Further, the termination criterion for all methods was to stop the search process once the current number of fitness evaluations exceeded the maximum number of evaluations allowed.
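Both remarks translate into a small shared driver; a hedged sketch, in which the step-function interface is our invention:

    import numpy as np

    def truncate(x, lower, upper):
        """Clip infeasible solutions to the nearest search-space boundary."""
        return np.clip(x, lower, upper)

    def run(step, state, max_evals):
        """Run any of the methods until the budget of fitness evaluations
        is exhausted; `step` performs one iteration and reports its cost."""
        evals = 0
        while evals < max_evals:
            state, evals_used = step(state)
            evals += evals_used
        return state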
III. EXPERIMENTS
A. Experimental Setup and Data Sampling
The algorithms used for comparison were DE, PSO, arPSO, and the SEA. For all algorithms the parameter settings were manually tuned, based on a few preliminary experiments. The specific settings for each of the algorithms are described below. Each algorithm was tested with all of the numerical benchmarks shown in Table I. In addition, we tested the algorithms on f1-f13 in 100 dimensions, yielding a total of 34 numerical benchmarks. For each algorithm, the maximum number of evaluations allowed was set to 500,000 for the 30-dimensional (or less) benchmarks and to 5,000,000 for the 100-dimensional benchmarks. Each of the experiments was repeated 30 times with different random seeds, and the average fitness of the best solutions (e.g. individuals or particles) throughout the optimization run was recorded.
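In code, this data sampling amounts to something like the following sketch; the `algorithm` callable and its signature are our assumption:

    import numpy as np

    def sample_performance(algorithm, problem, max_evals, repeats=30):
        """Repeat a run with 30 different random seeds and report the mean
        and standard deviation of the best fitness, as in Tables II-III."""
        bests = [algorithm(problem, max_evals,
                           rng=np.random.default_rng(seed))
                 for seed in range(repeats)]
        return float(np.mean(bests)), float(np.std(bests))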
B. DE Settings
DE has three parameters: the size of the population (popsize), the crossover constant (CR), and the scaling factor (F). In all experiments they were set to the following values: popsize = 100, CR = 0.9, F = 0.5.
C. PSO Settings
PSO has several parameters: the number of particles in the swarm (swarmsize), the maximum velocity (v_max), the parameters for attraction towards the personal best and the neighborhood's best found solutions (φ1 and φ2), and the inertia weight (w). For PSO we used these settings: swarmsize = 25, v_max = 15% of the longest axis-parallel interval in the search space, φ1 = 1.8, φ2 = 1.8, and w = 0.6.

Often the inertia weight is decreased linearly over time. The setting for PSO in this study is a bit unusual, because the inertia weight was held constant during the run. However, it was found that for the easier problems, the chosen settings outperformed the setting where w was annealed. The setting with constant w, on the other hand, performed poorly on multimodal problems. To be fair to PSO, we therefore included the arPSO algorithm, which was known to outperform PSO (even with annealed w) on multimodal problems [9].
D. arPSO Settings
In addition to the basic PSO settings, arPSO used the following parameter settings: d_low = 0.000005 and d_high = 0.25. The two parameters were only marginally dependent on the problem, and these settings were consistent with the settings found in previous studies [9].
E. SEA Settings
The SEA used a population size of 100. The probability of mutating and crossing over individuals was fixed at p_m = 0.9 and p_c = 0.7, respectively. Tournament selection with a tournament size of two was used to select the individuals for the next generation. Further, elitism with an elite size of one was used to keep the overall best solution found in the population.
F. Numerical Benchmark Functions
For evaluating the four algorithms, we used a test suite of
benchmark functions previously introduced by Yao and Liu
[16]. The suite contained a diverse set of problems, including
unimodal as well as multimodal functions, and functions with
correlated and uncorrelated variables. Additionally, two noisy
problems and a single problem with plateaus were included.
The dimensionality of the problems originally varied from 2
to 30, but we extended the set with 100-dimensional variants
to allow for comparison on more difficult problem instances.
Table I lists the benchmark problems, the ranges of their search spaces, their dimensionalities, and their global minimum fitnesses. We omitted f19 and f20 from Yao and Liu's study [16] because of difficulties in obtaining the definitions of the constants used in these functions (they were not provided in [16]).
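For flavor, here are two representative members of the suite in Python, transcribed by us from the standard definitions in Yao and Liu [16]: the unimodal sphere function f1 and the noisy quartic f7 (this is our transcription, not the paper's code):

    import numpy as np

    def f1_sphere(x):
        """f1: sum of squares; unimodal, global minimum 0 at the origin."""
        return float(np.sum(x ** 2))

    def f7_noisy_quartic(x, rng=np.random.default_rng()):
        """f7: quartic function with additive uniform noise in [0, 1);
        re-evaluating the same point gives a different fitness value."""
        i = np.arange(1, len(x) + 1)
        return float(np.sum(i * x ** 4) + rng.random())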
IV. RESULTS
A. Problems of Dimensionality 30 or Less
The results for the benchmark problems f1-f23 are shown in Tables II and III. Moreover, Figure 2 shows the convergence graphs for selected benchmark problems. All results below a certain precision threshold were reported as 0.0000000e+00.

For functions f1-f4 there is a consistent performance pattern across all algorithms: PSO is the best, and DE is almost as good. They both converge exponentially fast toward the fitness optimum (resulting in a straight line when plotted using a logarithmic y-axis). The SEA is many times slower than the two other methods, and even though it eventually might converge toward the optimum, it requires several hours to find solutions that PSO and DE can reach in a few seconds. This analysis is illustrated in Figure 2 (a).
On function f5, DE is superior to both PSOs and the SEA. Only DE converges toward the optimum. After 300,000 evaluations, it commences to fine-tune around the optimum at an exponentially progressing rate. We may note that PSO performs moderately better than the SEA.
TABLE I
NUMERICAL BENCHMARK FUNCTIONS WITH A VARYING NUMBER OF DIMENSIONS (DIM). REMARKS: 1) THE FUNCTIONS SINE AND COSINE TAKE ARGUMENTS IN RADIANS. 2) THE NOTATION (a)^T(b) DENOTES THE DOT PRODUCT BETWEEN VECTORS a AND b. 3) THE FUNCTION u AND THE VALUES y_i REFERRED TO IN f12 AND f13 ARE GIVEN BY u(x, a, b, c) = b(x − a)^c IF x > a, u(x, a, b, c) = b(−x − a)^c IF x < −a, u(x, a, b, c) = 0 IF −a ≤ x ≤ a, AND FINALLY y_i = 1 + (1/4)(x_i + 1). THE MATRIX a USED IN f14, THE VECTORS a AND b USED IN f15, AND THE MATRIX a AND THE VECTOR c USED IN f21-f23 ARE ALL DEFINED IN THE APPENDIX.

[The body of the table is too garbled in the scan to reproduce entry by entry. It lists, for each function f1-f18 and f21-f23, its definition (e.g. f5(x) = Σ_{i=1}^{dim−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²]), its search range (e.g. −500 ≤ x_i ≤ 500, −5.12 ≤ x_i ≤ 5.12), its dimensionality (30/100 for f1-f13, fixed low dimensions for f14-f23), and its global minimum. Recoverable minima include f1-f7 = 0, f8 = −12569.5 (30-D) / −41898.3 (100-D), f9-f12 = 0, f13 ≈ −1.1428, f14 ≈ 0.998, f15 ≈ 0.0003075, f16 ≈ −1.0316, f17 ≈ 0.398, f18 = 3, f21 = −10.2, f22 = −10.4, and f23 = −10.5.]
DE and the SEA easily find the optimum for the f6 function, whereas both PSOs fail. This test function consists of plateaus, and apparently both PSO methods have difficulties with functions of this kind. The average best fitness value of 0.04 for basic PSO comes from failing twice in 30 runs.

Function f7 is a noisy problem. All algorithms seem to converge in a similar pattern, see Figure 2 (d). The SEA had the best convergence speed, followed by arPSO, PSO, and finally DE.
Functions f8-f13 are highly multimodal. On all of them, DE clearly performs best, and it finds the global optimum in all cases. Neither the SEA nor the PSOs find the global optimum for these functions in any of the runs. Further, the SEA consistently outperforms both PSOs, with f10 being an exception.

DE, the SEA, and arPSO all come very close to the global optimum on f14 in all runs, but only DE hits the exact optimum every time, making it the best algorithm on this problem. PSO occasionally stagnates at a local optimum, which is the reason for its poor average best fitness.
On f15 both PSOs perform worse than DE and the SEA. DE converges very fast to good values near the optimum, but seems to stagnate suboptimally. The SEA converges slowly, but outperforms DE after 500,000 fitness evaluations. To investigate whether the SEA would continue its convergence and ultimately reach the optimum, we let it run for 2 million evaluations. In the majority of runs, it actually found the optimum after approximately 1 million evaluations (data not shown).
Functions f16-f18 are all easy problems, and all algorithms are able to find near-optimum solutions quickly. Both PSO methods have particularly fast convergence. For some reason, the SEA seems to be able to fine-tune its results slightly better than the other algorithms.
The last three problems are f21-f23. Again, DE is superior compared to the other algorithms, and it finds the optimum in all cases. DE, the SEA, and PSO all converge quickly, but the SEA stagnates before finding the optimum, and both PSO methods converge even earlier. arPSO performs better than PSO, and is almost as good as the SEA.
B. Problems of Dimensionality 100
On f1, PSO and DE have good exponential convergence to the optimum (similar to the results in 30 dimensions) and the SEA is much slower. However, on f2 the picture has changed. DE still has exponential convergence to the optimum, but both PSOs fail to find the optimum; they are now performing worse than the SEA. The same pattern occurs for f3 and f4. This contrasts with the 30-dimensional cases, where PSO performed exceptionally well on f1-f4.

On the difficult f5 problem, DE is superior and finds the optimum after 3.5 million evaluations. The other algorithms fail to find the optimum. However, the SEA is slightly better than the basic PSO.
Both DE and the SEA quickly find the optimum of f6 in all runs. PSO only finds the optimum in 9 out of 30 runs, which is the reason for its average of 2.1 on this problem.

The results for the 100-dimensional version of f7 are similar to those in the 30-dimensional case. The SEA has the best convergence, arPSO is slightly slower, followed by PSO, and finally DE.

For problems f8-f13 the results also resemble those from 30 dimensions. One exception is that arPSO is now marginally worse than the SEA on f10. Thus, in 100 dimensions the SEA is consistently better than both PSOs on these six problems.
V. DISCUSSION
Overall, DE is clearly the best performing algorithm in this
study. It finds the lowest fitness value for most of the problems.
The only exceptions are: 1) the noisy f7 problem, where the nature of the convergence of DE is similar to, but still slower than, the other algorithms; apparently, DE faces difficulties on noisy problems. 2) On f15, DE stagnates at a suboptimal value (and both PSOs even earlier). Only the SEA is able to find the optimum on this problem, which has only four problem parameters but still appears to be very difficult.

We have not tried to tune DE to these problems. Most likely, one could improve the performance of DE by altering the crossover scheme, varying the parameters CR and F, or using a more greedy offspring generation strategy (e.g. DE/best/1/exp).
DE is robust; it is able to reproduce the same results consistently over many trials, whereas the performance of PSO and arPSO is far more dependent on the randomized initialization of the individuals. This difference is profound on the 100-dimensional benchmark problems. As a result, both versions of PSO must be executed several times to ensure good results, whereas one run of DE or the SEA usually suffices.

PSO is more sensitive to parameter changes than the other algorithms. When changing the problem, one probably needs to change the parameters as well to sustain optimal performance. This is not the case for the SEA and DE, as the 100-dimensional problems illustrate: the settings for both PSOs do not generalize to 100 dimensions, whereas DE and the SEA can be used with the same settings and still give the same type of convergence.
In general, DE shows great fine-tuning abilities, but on f16 and f17 it falls short in comparison to the SEA. We have not determined why DE fails to fine-tune these particular problems, but it would be interesting to investigate.

Regarding convergence speed, PSO is always the fastest, whereas the SEA or arPSO is always the slowest. However, the SEA could be further improved with a more greedy selection scheme similar to DE's. Especially on the very easy functions f16-f18, PSO has very fast convergence (3-4 times faster than DE). This may be of practical relevance for some real-world problems where the evaluation is computationally expensive and the search space is relatively simple and of low dimensionality.
TABLE II
RESULTS FOR ALL ALGORITHMS ON BENCHMARK PROBLEMS OF DIMENSIONALITY 30 OR LESS (MEAN OF 30 RUNS AND STANDARD DEVIATIONS (STDDEV)). FOR EACH PROBLEM, THE BEST PERFORMING ALGORITHM(S) IS EMPHASIZED IN BOLDFACE.

[The table reports, for each benchmark problem f1-f18 and f21-f23, the mean and standard deviation of the best fitness over 30 runs for DE, PSO, arPSO, and the SEA. The column alignment in the scan is too damaged to reproduce the individual entries reliably.]
TABLE III
RESULTS FOR ALL ALGORITHMS ON BENCHMARK PROBLEMS OF DIMENSIONALITY 100 (MEAN OF 30 RUNS AND STANDARD DEVIATIONS (STDDEV)). FOR EACH PROBLEM, THE BEST PERFORMING ALGORITHM(S) IS EMPHASIZED IN BOLDFACE.

[The table reports, for each of the 100-dimensional benchmarks f1-f13, the mean and standard deviation of the best fitness over 30 runs for DE, PSO, arPSO, and the SEA. The column alignment in the scan is too damaged to reproduce the individual entries reliably.]
Fig. 2. Average best fitness curves for selected benchmark problems. All results are means of 30 runs. [The plots, showing average best fitness against the number of evaluations for PSO, arPSO, DE, and the SEA, are not recoverable from the scan; legible panel labels include (e) f10 and (g) f17.]
To conclude, the performance of DE is outstanding in comparison to the other algorithms tested. It is simple, robust, converges fast, and finds the optimum in almost every run. In addition, it has few parameters to set, and the same settings can be used for many different problems. Previously, DE has shown its worth on real-world problems, and in this study it outperformed PSO and EAs on the majority of the numerical benchmark problems as well. Among the tested algorithms, DE can rightfully be regarded as an excellent first choice when faced with a new optimization problem to solve. The results for the two noisy benchmark functions call for further investigation. More experiments are required to determine why and when the DE and PSO methods fail on noisy problems.
ACKNOWLEDGMENTS
The authors would like to thank Wouter Boomsma for proofreading the paper. Also, we would like to thank our colleagues at BiRC for valuable comments on early versions of the manuscript. This work has been supported by the Danish Research Council.
APPENDIX

f14: the matrix a is the 2 x 25 Shekel's foxholes matrix, whose first row repeats (−32, −16, 0, 16, 32) five times and whose second row holds each of −32, −16, 0, 16, 32 five times in turn:

a = ( −32 −16 0 16 32 −32 −16 ... 0 16 32
      −32 −32 −32 −32 −32 −16 −16 ... 32 32 32 )

f15:

a = (0.1957, 0.1947, 0.1735, 0.1600, 0.0844, 0.0627, 0.0456, 0.0342, 0.0323, 0.0235, 0.0246)
b⁻¹ = (0.25, 0.5, 1, 2, 4, 6, 8, 10, 12, 14, 16)

f21-f23: the 10 x 4 matrix a and the vector c are

a = ( 4.0 4.0 4.0 4.0
      1.0 1.0 1.0 1.0
      8.0 8.0 8.0 8.0
      6.0 6.0 6.0 6.0
      3.0 7.0 3.0 7.0
      2.0 9.0 2.0 9.0
      5.0 5.0 3.0 3.0
      8.0 1.0 8.0 1.0
      6.0 2.0 6.0 2.0
      7.0 3.6 7.0 3.6 )

c = (0.1, 0.2, 0.2, 0.4, 0.4, 0.6, 0.3, 0.7, 0.5, 0.5)
REFERENCES
[1] P. J. Angeline. Evolutionary optimization versus particle swarm optimization: Philosophy and performance differences. In V. W. Porto, N. Saravanan, D. Waagen, and A. E. Eiben, editors, Evolutionary Programming VII, pp. 601-610. Springer, 1998.
[2] T. Bäck, D. B. Fogel, and Z. Michalewicz, editors. Handbook of Evolutionary Computation, chapter C2.3. Institute of Physics Publishing and Oxford University Press, 1997.
[3] L. J. Fogel, A. J. Owens, and M. J. Walsh. Artificial intelligence through a simulation of evolution. In M. Maxfield, A. Callahan, and L. J. Fogel, editors, Biophysics and Cybernetic Systems: Proc. of the 2nd Cybernetic Sciences Symposium, pp. 131-155. Spartan Books, 1965.
[4] Y. Fukuyama, S. Takayama, Y. Nakanishi, and H. Yoshida. A particle swarm optimization for reactive power and voltage control in electric power systems. In W. Banzhaf, J. Daida, A. E. Eiben, M. H. Garzon, V. Honavar, M. Jakiela, and R. E. Smith, editors, Proceedings of the Genetic and Evolutionary Computation Conference, pp. 1523-1528. Morgan Kaufmann Publishers, 1999.
[5] D. Gies and Y. Rahmat-Samii. Particle swarm optimization for reconfigurable phase-differentiated array design. Microwave and Optical Technology Letters, Vol. 38, No. 3, pp. 168-175, 2003.
[6] J. H. Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, MI, 1975.
[7] J. Kennedy and R. C. Eberhart. Particle swarm optimization. In Proceedings of the 1995 IEEE International Conference on Neural Networks, Vol. 4, pp. 1942-1948. IEEE Press, 1995.
[8] I. Rechenberg. Evolution strategy: Optimization of technical systems by means of biological evolution. Frommann-Holzboog, 1973.
[9] J. Riget and J. S. Vesterstrøm. A diversity-guided particle swarm optimizer - the arPSO. Technical report, EVALife, Dept. of Computer Science, University of Aarhus, Denmark, 2002.
[10] Y. Shi and R. C. Eberhart. A modified particle swarm optimizer. In Proceedings of the IEEE Conference on Evolutionary Computation, pp. 69-73. IEEE Press, 1998.
[11] R. Storn and K. Price. Differential evolution - a simple and efficient adaptive scheme for global optimization over continuous spaces. Technical report, International Computer Science Institute, Berkeley, 1995.
[12] R. Thomsen. Flexible ligand docking using differential evolution. In Proceedings of the 2003 Congress on Evolutionary Computation, Vol. 4, pp. 2354-2361. IEEE Press, 2003.
[13] R. Thomsen. Flexible ligand docking using evolutionary algorithms: investigating the effects of variation operators and local search hybrids. BioSystems, Vol. 72, No. 1-2, pp. 57-73, 2003.
[14] R. K. Ursem and P. Vadstrup. Parameter identification of induction motors using differential evolution. In Proceedings of the 2003 Congress on Evolutionary Computation, Vol. 2, pp. 790-796. IEEE Press, 2003.
[15] J. S. Vesterstrøm and J. Riget. Particle swarms: Extensions for improved local, multi-modal, and dynamic search in numerical optimization. Master's thesis, EVALife, Dept. of Computer Science, University of Aarhus, Denmark, 2002.
[16] X. Yao and Y. Liu. Fast evolutionary programming. In L. J. Fogel, P. J. Angeline, and T. Bäck, editors, Proceedings of the 5th Annual Conference on Evolutionary Programming, pp. 451-460. MIT Press, 1996.
