A Literature Survey of Benchmark Functions For Global Optimisation Problems
Momin Jamil*
Blekinge Institute of Technology
SE-37179, Karlskrona, Sweden
and
Harman International,
Corporate Division,
Becker-Goering Str. 16,
D-76307 Karlsbad, Germany
E-mail: [email protected]
*Corresponding author
Xin-She Yang
Middlesex University,
School of Science and Technology,
Hendon Campus, London NW4 4BT, UK
E-mail: [email protected]
Reference to this paper should be made as follows: Jamil, M. and Yang, X-S.
(2013) ‘A literature survey of benchmark functions for global optimisation
problems’, Int. J. Mathematical Modelling and Numerical Optimisation,
Vol. 4, No. 2, pp.150–194.
Biographical notes: Momin Jamil received his BSc from the University of the
Punjab, Lahore, Pakistan in 1991, BSc in Electrical and Electronic Engineering
from Technical University of Budapest, Hungary in 1996, and Master of
Engineering from the University of Pretoria, Pretoria, South Africa in 1999.
From 2001–2005, he worked as a Development Engineer at Siemens Mobile
Phone Development Center in Ulm, Germany. From 2006–2011, he worked as
1 Introduction
The goal of any global optimisation (GO) task is to find the best possible solution x∗ from a set X
according to a set of criteria F = {f1, f2, · · ·, fn}. These criteria, called objective
functions, are expressed in the form of mathematical functions.
1 What aspects of the function landscape make the optimisation process difficult?
2 What type of a priori knowledge is most effective for searching particular types
of function landscape?
2.1 Modality
The number of ambiguous peaks in the function landscape corresponds to the modality
of a function. If an algorithm encounters these peaks during the search process, there is
a tendency for it to become trapped in one of them. This has a
negative impact on the search, as it can direct the search away from the true
optimal solutions.
2.2 Basins
2.3 Valleys
An algorithm can easily be attracted to this region. The progress of the search may then be slowed
down considerably on the floor of the valley.
2.4 Separability
$$\frac{\partial f(\mathbf{x})}{\partial x_i} = g(x_i)\, h(\mathbf{x}) \qquad (2)$$
where g(xi) denotes any function of xi alone and h(x) any function of x. If this
condition is satisfied, the function is called partially separable and is easy to optimise,
because a solution for each xi can be obtained independently of all the other parameters.
This separability condition can be illustrated by the following two examples.
For example, function (f105) is not separable, because it does not satisfy
condition (2):

$$\frac{\partial f_{105}(x_1, x_2)}{\partial x_1} = 400\left( x_1^2 - x_2 \right)x_1 + 2x_1 - 2$$

$$\frac{\partial f_{105}(x_1, x_2)}{\partial x_2} = -200\left( x_1^2 - x_2 \right)$$
On the other hand, the sphere function (f137) with two variables does satisfy
condition (2), since ∂f/∂xi = 2xi can be written as g(xi)h(x) with h(x) = 1. More formally, a function of p variables is called separable if

$$\underset{x_1,\dots,x_p}{\arg\min}\; f(x_1,\dots,x_p) = \left( \underset{x_1}{\arg\min}\; f(x_1,\dots),\;\dots,\; \underset{x_p}{\arg\min}\; f(\dots,x_p) \right) \qquad (3)$$
In other words, the function can be decomposed into p sub-objectives, each of which involves only one decision variable while treating all the others as
constant:

$$f(x_1, x_2, \cdots, x_p) = \sum_{i=1}^{p} f_i(x_i) \qquad (4)$$
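To make the separability idea concrete, the sketch below (not from the survey; the function and helper names are ours) minimises a separable objective one coordinate at a time. A single pass of such coordinate-wise optimisation recovers the global minimum only because the sphere function satisfies equation (4).

```python
import numpy as np
from scipy.optimize import minimize_scalar

def sphere(x):
    # Fully separable objective: f(x) = sum_i x_i^2.
    return float(np.sum(np.asarray(x) ** 2))

def minimise_separable(f, dim, bounds=(-5.0, 5.0)):
    """Minimise each coordinate independently, holding the others fixed.

    This one-pass strategy is only guaranteed to find the global minimum
    when f is separable in the sense of equation (4)."""
    x = np.zeros(dim)
    for i in range(dim):
        def f_i(t, i=i):
            y = x.copy()
            y[i] = t
            return f(y)
        x[i] = minimize_scalar(f_i, bounds=bounds, method="bounded").x
    return x

if __name__ == "__main__":
    x_star = minimise_separable(sphere, dim=4)
    print(x_star, sphere(x_star))  # both close to zero
```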
2.5 Dimensionality
subject to −35 ≤ xi ≤ 35. The global minimum is located at the origin x∗ = (0, · · · , 0), f(x∗) = 0.
2 Ackley Function 2 (Ackley, 1987) (continuous, differentiable, non-separable,
non-scalable, unimodal)
$$f_2(\mathbf{x}) = -200\, e^{-0.02\sqrt{x_1^2 + x_2^2}}$$
subject to −32 ≤ xi ≤ 32. The global minimum is located at the origin x∗ = (0, 0),
f (x∗ ) = −200.
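A direct transcription of f2 (our own sketch, not code from the survey) confirms the stated minimum value at the origin:

```python
import numpy as np

def ackley_2(x):
    """Ackley function 2: f2(x) = -200 * exp(-0.02 * sqrt(x1^2 + x2^2))."""
    x1, x2 = x
    return -200.0 * np.exp(-0.02 * np.sqrt(x1 ** 2 + x2 ** 2))

print(ackley_2([0.0, 0.0]))  # -200.0, the global minimum
```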
3 Ackley Function 3 (Ackley, 1987) (continuous, differentiable, non-separable,
non-scalable, unimodal)
$$f_3(\mathbf{x}) = 200\, e^{-0.02\sqrt{x_1^2 + x_2^2}} + 5\, e^{\cos(3x_1) + \sin(3x_2)}$$
$$f_4(\mathbf{x}) = \sum_{i=1}^{D} \left( e^{-0.2}\sqrt{x_i^2 + x_{i+1}^2} + 3\left( \cos(2x_i) + \sin(2x_{i+1}) \right) \right)$$
$$f_5(\mathbf{x}) = \cos(x_1)\sin(x_2) - \frac{x_1}{x_2^2 + 1}$$
$$f_6(\mathbf{x}) = \sum_{i=1}^{D} \left| x_i \sin(x_i) + 0.1\, x_i \right|$$
$$f_7(\mathbf{x}) = \prod_{i=1}^{D} \sqrt{x_i}\, \sin(x_i)$$
$$f_8(\mathbf{x}) = \sum_{i=1}^{15} \left[ y_i - x_1 - \frac{u_i}{v_i x_2 + w_i x_3} \right]^2$$
$$f_{15}(\mathbf{x}) = \sum_{i=1}^{13} \left( x_3 e^{-t_i x_1} - x_4 e^{-t_i x_2} + x_6 e^{-t_i x_5} - y_i \right)^2$$
$$f_{21}(\mathbf{x}) = \sum_{i=0}^{D-1} g(x_i)^2$$
where
$$g(x) = e^{-0.1(i+1)x_1} - e^{-0.1(i+1)x_2} - \left( e^{-0.1(i+1)} - e^{-(i+1)} \right) x_3$$
with domain −10 ≤ xi ≤ 10. The global minimum is located at x∗ = (0, 0),
f(x∗) = 0.
25 Brown Function (Begambre and Laier, 2009) (continuous, differentiable,
non-separable, scalable, unimodal)
$$f_{25}(\mathbf{x}) = \sum_{i=1}^{n-1} \left( x_i^2 \right)^{\left( x_{i+1}^2 + 1 \right)} + \left( x_{i+1}^2 \right)^{\left( x_i^2 + 1 \right)}$$
Bukin functions (Silagadze, 2007) are almost fractal (with fine seesaw edges) in
the neighbourhood of their minimum points. Due to this property, they are extremely
difficult to optimise by any global or local optimisation method.
26 Bukin Function 2 (continuous, differentiable, non-separable, non-scalable,
multimodal)
$$f_{30}(\mathbf{x}) = \left( 4 - 2.1x_1^2 + \frac{x_1^4}{3} \right) x_1^2 + x_1 x_2 + \left( 4x_2^2 - 4 \right) x_2^2$$
$$f_{31}(\mathbf{x}) = -\frac{0.001}{(0.001)^2 + (x_1 - 0.4x_2 - 0.1)^2} - \frac{0.001}{(0.001)^2 + (2x_1 + x_2 - 1.5)^2}$$
$$f_{32}(\mathbf{x}) = -\frac{0.001}{(0.001)^2 + (x_1^2 + x_2^2 - 1)^2} - \frac{0.001}{(0.001)^2 + (x_1^2 + x_2^2 - 0.5)^2} - \frac{0.001}{(0.001)^2 + (x_1^2 - x_2^2)^2}$$
$$f_{34}(\mathbf{x}) = \left( \sum_{i=1}^{D} x_i^2 \right)^2$$
where
$$v_i = |x_i - z_i|, \qquad A = 0.05,$$
$$z_i = 0.2 \left\lfloor \left| \frac{x_i}{0.2} \right| + 0.49999 \right\rfloor \mathrm{sgn}(x_i), \qquad d_i = (1, 1000, 10, 100)$$
$$f_{38}(\mathbf{x}) = -0.1 \sum_{i=1}^{n} \cos(5\pi x_i) - \sum_{i=1}^{n} x_i^2$$
$$f_{40}(\mathbf{x}) = \sum_{i=1}^{D} x_i^6 \left( 2 + \sin\frac{1}{x_i} \right)$$
$$f_{43}(\mathbf{x}) = -\frac{1}{D} \sum_{i=1}^{D} \sin^6(5\pi x_i)$$
$$f_{44}(\mathbf{x}) = -\frac{1}{D} \sum_{i=1}^{D} \sin^6\left( 5\pi\left( x_i^{3/4} - 0.05 \right) \right)$$
$$f_{45}(\mathbf{x}) = 10^5 x_1^2 + x_2^2 - \left( x_1^2 + x_2^2 \right)^2 + 10^{-5} \left( x_1^2 + x_2^2 \right)^4$$
subject to −20 ≤ xi ≤ 20. The two global minima are located at x∗ = (0, ±15),
f(x∗) = −24777.
46 deVilliers Glasser Function 1 (deVillers and Glasser, 1981)(continuous,
differentiable, non-separable, non-scalable, multimodal)
$$f_{46}(\mathbf{x}) = \sum_{i=1}^{24} \left[ x_1 x_2^{t_i} \sin(x_3 t_i + x_4) - y_i \right]^2$$
$$f_{47}(\mathbf{x}) = \sum_{i=1}^{16} \left[ x_1 x_2^{t_i} \tanh\left( x_3 t_i + \sin(x_4 t_i) \right) \cos\left( t_i e^{x_5} \right) - y_i \right]^2$$
$$f_{48}(\mathbf{x}) = (x_1 - 1)^2 + \sum_{i=2}^{D} i\left( 2x_i^2 - x_{i-1} \right)^2$$
f (x∗ ) = 0.
$$f_{53}(\mathbf{x}) = \sum_{i=1}^{m-1} \left[ -(x_{i+1} + 47)\sin\sqrt{\left| x_{i+1} + x_i/2 + 47 \right|} - x_i \sin\sqrt{\left| x_i - (x_{i+1} + 47) \right|} \right]$$
$$f_{57}(\mathbf{x}) = 0.6 + \sum_{i=1}^{2} \left[ \sin\left( \frac{16}{15}x_i - 1 \right) + \sin^2\left( \frac{16}{15}x_i - 1 \right) + \frac{1}{50}\sin\left( 4\left( \frac{16}{15}x_i - 1 \right) \right) \right]$$
$$f_{59}(\mathbf{x}) = \sum_{i=1}^{n} \frac{x_i^2}{4000} - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$$
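A vectorised sketch of f59 for arbitrary dimension (our own transcription of the formula above):

```python
import numpy as np

def griewank(x):
    """Griewank function: sum(x_i^2 / 4000) - prod(cos(x_i / sqrt(i))) + 1."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return np.sum(x ** 2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0

print(griewank(np.zeros(10)))  # 0.0 at the global minimum x* = (0, ..., 0)
```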
$$f_{61}(\mathbf{x}) = \sum_{i=0}^{4} (i+1)\cos(ix_1 + i + 1) \sum_{j=0}^{4} (j+1)\cos\left( (j+2)x_2 + j + 1 \right)$$
subject to 0 ≤ xj ≤ 1, j ∈ {1, 2, 3}, where the constants aij, ci and pij are given by

$$A = [a_{ij}] = \begin{bmatrix} 3 & 10 & 30 \\ 0.1 & 10 & 35 \\ 3 & 10 & 30 \\ 0.1 & 10 & 35 \end{bmatrix}, \qquad c = c_i = \begin{bmatrix} 1 \\ 1.2 \\ 3 \\ 3.2 \end{bmatrix},$$

$$p = [p_{ij}] = \begin{bmatrix} 0.3689 & 0.1170 & 0.2673 \\ 0.4699 & 0.4837 & 0.7470 \\ 0.1091 & 0.8732 & 0.5547 \\ \vdots & \vdots & \vdots \end{bmatrix}$$
$$\cdots + x_3^2$$
where
$$\theta = \begin{cases} \dfrac{1}{2\pi}\tan^{-1}\left( \dfrac{x_1}{x_2} \right), & \text{if } x_1 \geq 0 \\[2mm] \dfrac{1}{2\pi}\tan^{-1}\left( \dfrac{x_1}{x_2} \right) + 0.5, & \text{if } x_1 < 0 \end{cases}$$
$$f_{67}(\mathbf{x}) = \sum_{i=1}^{10} \left( 2 + 2i - \left( e^{ix_1} + e^{ix_2} \right) \right)^2$$
subject to 0 ≤ xi ≤ 10.
The multiple global minima are located at x∗ = (0, 1.39325) and (1.39325, 0),
f(x∗) = −0.673668.
70 Leon Function (Lavi and Vogel, 1966) (continuous, differentiable, non-separable,
non-scalable, unimodal)
$$\cdots + \left( \tan(x_3 - x_4) \right)^4 + x_1^8$$
$$f_{74}(\mathbf{x}) = \left( 1 + D - \sum_{i=1}^{N-1} x_i \right)^{N - \sum_{i=1}^{N-1} x_i}$$
$$f_{75}(\mathbf{x}) = \left( 1 + D - \sum_{i=1}^{N-1} 0.5(x_i + x_{i+1}) \right)^{N - \sum_{i=1}^{N-1} 0.5(x_i + x_{i+1})}$$
$$\cdots\, 1334x_1^4 - 15360x_1^3 + 11520x_1^2 - 5120x_1 + 2624 \,\Big]^2$$
$$x_2^4 + 12x_2^3 + 54x_2^2 + 108x_2 + 81$$
$$f_{84}(\mathbf{x}) = \left[ \frac{1}{D} \sum_{i=1}^{D} x_i - \left( \prod_{i=1}^{D} x_i \right)^{\frac{1}{N}} \right]^2$$
$$f_{88}(\mathbf{x}) = \sum_{i=1}^{10} \left[ \left( \ln(x_i - 2) \right)^2 + \left( \ln(10 - x_i) \right)^2 \right] - \left( \prod_{i=1}^{10} x_i \right)^{0.2}$$
$$f_{89}(\mathbf{x}) = \sum_{i=1}^{D} i x_i^2 + \sum_{i=1}^{D} 20 i \sin^2 A + \sum_{i=1}^{D} i \log_{10}\left( 1 + i B^2 \right)$$
where
$$f_{91}(\mathbf{x}) = \sum_{i=1}^{D/4} \left[ \left( x_{4i-3} + 10x_{4i-2} \right)^2 + 5\left( x_{4i-1} - x_{4i} \right)^2 + \left( x_{4i-2} - x_{4i-1} \right)^4 + 10\left( x_{4i-3} - x_{4i} \right)^4 \right]$$
$$f_{92}(\mathbf{x}) = \sum_{i=1}^{D-2} \left[ \left( x_{i-1} + 10x_i \right)^2 + 5\left( x_{i+1} - x_{i+2} \right)^2 + \left( x_i - 2x_{i+1} \right)^4 + 10\left( x_{i-1} - x_{i+2} \right)^4 \right]$$
$$f_{98}(\mathbf{x}) = \sum_{i=1}^{D} \left( x_i^2 - i \right)^2$$
subject to −500 ≤ xi ≤ 500. The global minima are located at xi = ±√i,
f(x∗) = 0.
99 Quadratic Function (continuous, differentiable, non-separable, non-scalable)
100 Quartic Function (Storn and Price, 1996) (continuous, differentiable, separable,
scalable)
$$f_{100}(\mathbf{x}) = \sum_{i=1}^{D} i x_i^4 + \mathrm{random}[0, 1)$$
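Because of the additive random[0, 1) term, f100 returns a different value on every call, even at a fixed point, so algorithm comparisons on it are usually averaged over repeated runs. A sketch, with NumPy's uniform generator standing in for random[0, 1):

```python
import numpy as np

rng = np.random.default_rng()

def quartic_noisy(x):
    """Quartic (De Jong 4) function with noise: sum(i * x_i^4) + random[0, 1)."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    return np.sum(i * x ** 4) + rng.uniform(0.0, 1.0)

# Repeated evaluation at the same point shows the stochastic component.
print([round(quartic_noisy(np.zeros(5)), 3) for _ in range(3)])
```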
$$f_{101}(\mathbf{x}) = \sum_{i=1}^{D} \left| x_i^5 - 3x_i^4 + 4x_i^3 + 2x_i^2 - 10x_i - 4 \right|$$
$$f_{102}(\mathbf{x}) = \sum_{i=0}^{D-2} \left[ (x_{i+1} + 1)\cos(t_2)\sin(t_1) + x_i \cos(t_1)\sin(t_2) \right]$$
subject to −500 ≤ xi ≤ 500, where t1 = √|x_{i+1} + x_i + 1| and t2 = √|x_{i+1} − x_i + 1|.
103 Ripple Function 1 (continuous, differentiable, non-separable, non-scalable,
multimodal)
$$f_{103}(\mathbf{x}) = \sum_{i=1}^{2} -e^{-2\ln 2 \left( \frac{x_i - 0.1}{0.8} \right)^2} \left( \sin^6(5\pi x_i) + 0.1\cos^2(500\pi x_i) \right)$$
subject to 0 ≤ xi ≤ 1. It has one global minimum and 252004 local minima. The
global structure of the function consists of 25 holes, which form a 5 × 5 regular grid.
Additionally, the whole function landscape is covered with small ripples caused by the
high-frequency cosine term, which creates a large number of local minima.
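A two-variable sketch of f103 (our transcription); the cos²(500πxi) term is what injects the fine ripples described above:

```python
import numpy as np

def ripple_1(x):
    """Ripple function 1 on [0, 1]^2."""
    x = np.asarray(x, dtype=float)
    envelope = np.exp(-2.0 * np.log(2.0) * ((x - 0.1) / 0.8) ** 2)
    return np.sum(-envelope * (np.sin(5.0 * np.pi * x) ** 6
                               + 0.1 * np.cos(500.0 * np.pi * x) ** 2))

# Evaluates to -2.2 at (0.1, 0.1), the deepest of the 25 holes (our own check).
print(ripple_1([0.1, 0.1]))
```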
104 Ripple Function 25 (continuous, differentiable, non-separable, non-scalable,
multimodal)
$$f_{104}(\mathbf{x}) = \sum_{i=1}^{2} -e^{-2\ln 2 \left( \frac{x_i - 0.1}{0.8} \right)^2} \sin^6(5\pi x_i)$$
subject to 0 ≤ xi ≤ 1. It has the same global structure as Ripple function 1, but without
the small ripples, owing to the absence of the cosine term.
$$f_{105}(\mathbf{x}) = \sum_{i=1}^{D-1} \left[ 100\left( x_{i+1} - x_i^2 \right)^2 + \left( x_i - 1 \right)^2 \right]$$
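f105 is the classical Rosenbrock valley; a vectorised sketch (ours) with a check at the known minimiser xi = 1:

```python
import numpy as np

def rosenbrock(x):
    """Rosenbrock: sum_i 100 * (x_{i+1} - x_i^2)^2 + (x_i - 1)^2."""
    x = np.asarray(x, dtype=float)
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (x[:-1] - 1.0) ** 2)

print(rosenbrock(np.ones(10)))  # 0.0 at the global minimum x* = (1, ..., 1)
```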
$$f_{118}(\mathbf{x}) = \left( \sum_{i=1}^{D} x_i^2 \right)^{\alpha}$$
$$f_{120}(\mathbf{x}) = \sum_{i=1}^{D} \left[ (x_i - 1)^2 + (x_1 - x_i^2)^2 \right]$$
$$f_{122}(\mathbf{x}) = -\sum_{i=1}^{n} |x_i|$$
$$f_{124}(\mathbf{x}) = \sum_{i=1}^{D} |x_i| + \prod_{i=1}^{D} |x_i|$$
$$f_{125}(\mathbf{x}) = \sum_{i=1}^{D} x_i^{10}$$
$$f_{126}(\mathbf{x}) = \sum_{i=2}^{D} \left[ (x_i - 1)^2 + (x_1 - x_i^2)^2 \right]$$
$$f_{127}(\mathbf{x}) = -\frac{1}{D} \sum_{i=1}^{D} x_i \sin\sqrt{|x_i|}$$
$$f_{129}(\mathbf{x}) = -\sum_{i=1}^{5} \frac{1}{\sum_{j=1}^{4} (x_j - a_{ij})^2 + c_i}$$
where
$$A = [a_{ij}] = \begin{bmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \end{bmatrix}, \qquad c = c_i = \begin{bmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \end{bmatrix}$$
subject to 0 ≤ xj ≤ 10. The global minimum is located at x∗ = (4, 4, 4, 4),
f(x∗) ≈ −10.1499.
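A sketch of f129 using the tabulated A and c constants; the printed check agrees with the f(x∗) ≈ −10.1499 quoted above (the exact minimiser lies very slightly off (4, 4, 4, 4)):

```python
import numpy as np

A = np.array([[4, 4, 4, 4],
              [1, 1, 1, 1],
              [8, 8, 8, 8],
              [6, 6, 6, 6],
              [3, 7, 3, 7]], dtype=float)
c = np.array([0.1, 0.2, 0.2, 0.4, 0.4])

def shekel_5(x):
    """Shekel function with m = 5 terms (f129)."""
    x = np.asarray(x, dtype=float)
    return -np.sum(1.0 / (np.sum((x - A) ** 2, axis=1) + c))

print(shekel_5([4.0, 4.0, 4.0, 4.0]))  # about -10.15
```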
130 Shekel Function 7 (Opačić, 1973) (continuous, differentiable, non-separable,
scalable, multimodal)
$$f_{130}(\mathbf{x}) = -\sum_{i=1}^{7} \frac{1}{\sum_{j=1}^{4} (x_j - a_{ij})^2 + c_i}$$
where
$$A = [a_{ij}] = \begin{bmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \\ 2 & 9 & 2 & 9 \\ 5 & 5 & 3 & 3 \end{bmatrix}, \qquad c = c_i = \begin{bmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \\ 0.6 \\ 0.3 \end{bmatrix}$$
subject to 0 ≤ xj ≤ 10. The global minimum is located at x∗ = (4, 4, 4, 4),
f(x∗) ≈ −10.3999.
131 Shekel Function 10 (Opačić, 1973) (continuous, differentiable, non-separable,
scalable, multimodal)
$$f_{131}(\mathbf{x}) = -\sum_{i=1}^{10} \frac{1}{\sum_{j=1}^{4} (x_j - a_{ij})^2 + c_i}$$
where
$$A = [a_{ij}] = \begin{bmatrix} 4 & 4 & 4 & 4 \\ 1 & 1 & 1 & 1 \\ 8 & 8 & 8 & 8 \\ 6 & 6 & 6 & 6 \\ 3 & 7 & 3 & 7 \\ 2 & 9 & 2 & 9 \\ 5 & 5 & 3 & 3 \\ 8 & 1 & 8 & 1 \\ 6 & 2 & 6 & 2 \\ 7 & 3.6 & 7 & 3.6 \end{bmatrix}, \qquad c = c_i = \begin{bmatrix} 0.1 \\ 0.2 \\ 0.2 \\ 0.4 \\ 0.4 \\ 0.6 \\ 0.3 \\ 0.7 \\ 0.5 \\ 0.5 \end{bmatrix}$$
subject to 0 ≤ xj ≤ 10. The global minimum is located at x∗ = (4, 4, 4, 4),
f(x∗) ≈ −10.5319.
132 Shubert Function (Hennart, 1982) (continuous, differentiable, separable,
non-scalable, multimodal)
$$f_{132}(\mathbf{x}) = \prod_{i=1}^{n} \sum_{j=1}^{5} j\cos\left( (j+1)x_i + j \right)$$
f (x∗ ) ≃ −186.7309.
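A sketch of f132 for n = 2 (ours); the minimiser used in the check is one of the 18 commonly cited global minima, taken from the wider literature rather than from this survey:

```python
import numpy as np

def shubert(x):
    """Shubert function: product over i of sum_{j=1..5} j * cos((j+1) * x_i + j)."""
    j = np.arange(1, 6)
    return float(np.prod([np.sum(j * np.cos((j + 1) * xi + j)) for xi in x]))

print(shubert([-7.0835, 4.8580]))  # about -186.73, matching f(x*) above
```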
133 Shubert Function 3 (Adorio and Dilman, 2005) (continuous, differentiable,
separable, non-scalable, multimodal)
$$f_{133}(\mathbf{x}) = \sum_{i=1}^{D} \sum_{j=1}^{5} j\sin\left( (j+1)x_i + j \right)$$
$$f_{136}(\mathbf{x}) = \sum_{i=1}^{D} x_i^2$$
$$f_{137}(\mathbf{x}) = \sum_{i=1}^{D} \lfloor |x_i| \rfloor$$
$$f_{138}(\mathbf{x}) = \sum_{i=1}^{D} \left( \lfloor x_i + 0.5 \rfloor \right)^2$$
$$f_{139}(\mathbf{x}) = \sum_{i=1}^{D} \lfloor x_i^2 \rfloor$$
$$f_{140}(\mathbf{x}) = 25 + \sum_{i=1}^{D} \lfloor x_i \rfloor$$
$$f_{141}(\mathbf{x}) = \sum_{i=1}^{D-1} \left( x_{i+1}^2 + x_i^2 \right)^{0.25} \left[ \sin^2\left\{ 50\left( x_{i+1}^2 + x_i^2 \right)^{0.1} \right\} + 0.1 \right]$$
$$f_{142}(\mathbf{x}) = \sum_{i=1}^{D} i x_i^2$$
$$f_{143}(\mathbf{x}) = \frac{1}{2} \sum_{i=1}^{n} \left( x_i^4 - 16x_i^2 + 5x_i \right)$$
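A sketch of f143 (Styblinski–Tang); the per-dimension minimiser xi ≈ −2.9035 and the check value are our own, not quoted in the survey:

```python
import numpy as np

def styblinski_tang(x):
    """Styblinski-Tang: 0.5 * sum(x_i^4 - 16 * x_i^2 + 5 * x_i)."""
    x = np.asarray(x, dtype=float)
    return 0.5 * np.sum(x ** 4 - 16.0 * x ** 2 + 5.0 * x)

print(styblinski_tang(np.full(5, -2.903534)))  # about -195.83 for n = 5
```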
$$f_{149}(\mathbf{x}) = \sum_{i=1}^{D} (x_i - 1)^2 - \sum_{i=2}^{D} x_i x_{i-1}$$
$$f_{150}(\mathbf{x}) = \sum_{i=1}^{D} (x_i - 1)^2 - \sum_{i=2}^{D} x_i x_{i-1}$$
$$f_{152}(\mathbf{x}) = \sum_{i=1}^{D} \left[ D - \sum_{j=1}^{D} \cos x_j + i\left( 1 - \cos x_i - \sin x_i \right) \right]^2$$
$$f_{153}(\mathbf{x}) = 1 + \sum_{i=1}^{D} \left\{ 8\sin^2\left[ 7(x_i - 0.9)^2 \right] + 6\sin^2\left[ 14(x_1 - 0.9)^2 \right] + (x_i - 0.9)^2 \right\}$$
$$f_{156}(\mathbf{x}) = -\sin(2.2\pi x_1 + 0.5\pi) \cdot \frac{2 - |x_2|}{2} \cdot \frac{3 - |x_1|}{2} - \sin(0.5\pi x_2^2 + 0.5\pi) \cdot \frac{2 - |x_2|}{2} \cdot \frac{2 - |x_1|}{2}$$
subject to −2 ≤ x1 ≤ 2 and −1.5 ≤ x2 ≤ 1.5. The function has a single global minimum
and four regularly spaced local minima positioned in a straight line, such that the global
minimum lies in the middle.
157 Ursem Function 4 (Rónkkónen, 2009) (continuous, differentiable, non-separable,
non-scalable, multimodal)
$$f_{157}(\mathbf{x}) = -3\sin(0.5\pi x_1 + 0.5\pi) \cdot \frac{2 - \sqrt{x_1^2 + x_2^2}}{4}$$
subject to −2 ≤ xi ≤ 2. The function has a single global minimum positioned at the middle
of the search space and four local minima at its corners.
158 Ursem Waves Function (Rónkkónen, 2009) (continuous, differentiable,
non-separable, non-scalable, multimodal)
subject to −0.9 ≤ x1 ≤ 1.2 and −1.2 ≤ x2 ≤ 1.2. The function has a single global minimum
and nine irregularly spaced local minima in the search space.
159 Venter Sobieszczanski-Sobieski Function (Begambre and Laier, 2009) (continuous,
differentiable, separable, non-scalable)
subject to |xi| ≤ 10, where the coefficients are ai = i/29.0. The global minimum is
located at x∗ = (−0.0158, 1.012, −0.2329, 1.260, −1.513, 0.9928),
f(x∗) = 0.002288.
$$f_{162}(\mathbf{x}) = \left[ 1.613 - 4(x_1 - 0.3125)^2 - 4(x_2 - 1.625)^2 \right]^2 + (x_2 - 1)^2$$
$$f_{163}(\mathbf{x}) = \frac{2x_1^3}{3} - 8x_1^2 + 33x_1 - x_1 x_2 + 5 + \left[ (x_1 - 4)^2 + (x_2 - 5)^2 - 4 \right]^2$$
$$f_{164}(\mathbf{x}) = 1 - \frac{1}{D} \sum_{i=1}^{D} \cos(k x_i)\, e^{-\frac{x_i^2}{2}}$$
$$f_{165}(\mathbf{x}) = \sum_{i=1}^{n} \left[ \sum_{k=0}^{k_{\max}} a^k \cos\left( 2\pi b^k (x_i + 0.5) \right) - n \sum_{k=0}^{k_{\max}} a^k \cos\left( \pi b^k \right) \right]$$
$$f_{166}(\mathbf{x}) = \sum_{i=1}^{D} \sum_{j=1}^{D} \left[ \frac{\left( 100(x_i^2 - x_j)^2 + (1 - x_j)^2 \right)^2}{4000} - \cos\left( 100(x_i^2 - x_j)^2 + (1 - x_j)^2 \right) + 1 \right]$$
It combines a very steep overall slope with a highly multimodal area around the
global minimum, located at xi = 1 for i = 1, ..., D.
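A sketch of the double sum in f166 (ours); the Rosenbrock-style term t is passed through a Griewank-style composition, which is what produces the behaviour described above:

```python
import numpy as np

def whitley(x):
    """Whitley function: double sum of a Griewank-composed Rosenbrock term."""
    x = np.asarray(x, dtype=float)
    xi = x[:, None]  # shape (D, 1)
    xj = x[None, :]  # shape (1, D)
    t = 100.0 * (xi ** 2 - xj) ** 2 + (1.0 - xj) ** 2
    return float(np.sum(t ** 2 / 4000.0 - np.cos(t) + 1.0))

print(whitley(np.ones(4)))  # 0.0 at the global minimum x_i = 1
```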
167 Wolfe Function (Schwefel, 1981) (continuous, differentiable, separable, scalable,
multimodal)
$$f_{167}(\mathbf{x}) = \frac{4}{3}\left( x_1^2 + x_2^2 - x_1 x_2 \right)^{0.75} + x_3$$
subject to 0 ≤ xi ≤ 2. The global minimum is located at x∗ = (0, 0, 0),
f(x∗) = 0.
168 Xin-She Yang Function 1 (discontinuous, differentiable, separable, scalable,
multimodal)
$$f_{168}(\mathbf{x}) = \sum_{i=1}^{D} \epsilon_i |x_i|^i$$
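f168 is stochastic: each εi is a random number, so the landscape changes from one evaluation to the next while the global minimum stays at the origin. In the sketch below (ours) εi is drawn uniformly from [0, 1], following Yang (2010b); the survey text above does not restate the distribution, so treat that as an assumption:

```python
import numpy as np

rng = np.random.default_rng()

def xin_she_yang_1(x):
    """Xin-She Yang function 1: sum_i eps_i * |x_i|**i with random eps_i.

    eps_i ~ U[0, 1] is assumed here; the minimum f = 0 at x = 0 holds regardless."""
    x = np.asarray(x, dtype=float)
    i = np.arange(1, x.size + 1)
    eps = rng.uniform(0.0, 1.0, size=x.size)
    return float(np.sum(eps * np.abs(x) ** i))

print(xin_she_yang_1(np.zeros(5)))  # 0.0, independent of the random draw
```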
4 Conclusions
Test functions are important for validating and comparing optimisation algorithms, and this is
especially true for newly developed algorithms. Here, we have attempted to provide the most
comprehensive and concise list of known benchmark or test functions. Any function
that has been left out is omitted unintentionally. The list has been compiled from all the resources and
literature known to us at the time of writing. It can be expected that the majority of these
functions can be used for testing new optimisation algorithms, so as to provide a more
complete view of the performance of any algorithm of interest.
References
Ackley, D.H. (1987) A Connectionist Machine for Genetic Hill-Climbing, Kluwer, Boston.
Adjiman, C.S., Dallwig, S., Floudas, C.A. and Neumaier, A. (1998) ‘A global optimization
method, αBB, for general twice-differentiable NLPs-1, theoretical advances’, Computers
and Chemical Engineering, Vol. 22, No. 9, pp.1137–1158.
Adorio, E.P. and Dilman, U.P. (2005) ‘MVF – multivariate test function
library in c for unconstrained global optimization methods’ [online]
http://www.geocities.ws/eadorio/mvf.pdf (accessed 20 January 2013).
Ali, M.M., Khompatraporn, C. and Zabinsky, Z.B. (2005) ‘A numerical evaluation of several
stochastic algorithms on selected continuous global optimization test problems’, Journal of
Global Optimization, Vol. 31, No. 4, pp.635–672.
Andrei, N. (2008) ‘An unconstrained optimization test functions collection’, Advanced Modeling
and Optimization, Vol. 10, No. 1, pp.147–161.
Auger, A., Hansen, N., Mauny, N., Ros, R. and Schoenauer, M. (2007) Bio-Inspired Continuous
Optimization: The Coming of Age, Invited Lecture, IEEE Congress on Evolutionary
Computation, NJ, USA.
Averick, B.M., Carter, R.G. and Moré, J.J. (1991) The MINPACK-2 Test Problem Collection,
Mathematics and Computer Science Division, Argonne National Laboratory, Technical
Memorandum No. 150.
Averick, B.M., Carter, R.G., Moré, J.J. and Xue, G.L. (1992) The MINPACK-2 Test Problem
Collection, Mathematics and Computer Science Division, Argonne National Laboratory,
Preprint MCS-P153-0692.
Bäck, T. and Schwefel, H.P. (1993) ‘An overview of evolutionary algorithms for parameter
optimization’, Evolutionary Computation, Vol. 1, No. 1, pp.1–23.
Begambre, O. and Laier, J.E. (2009) ‘A hybrid particle swarm optimization – simplex algorithm
(PSOS) for structural damage identification’, Journal of Advances in Engineering Software,
Vol. 40, No. 9, pp.883–891.
Bekey, G.A. and Ung, M.T. (1974) ‘A comparative evaluation of two global search algorithms’,
IEEE Transaction on Systems, Man and Cybernetics, Vol. 4, No. 1, pp.112–116.
Bersini, H., Dorigo, M. and Langerman, S. (1996) ‘Results of the first international contest on
evolutionary optimization’, IEEE International Conf. on Evolutionary Computation, Nagoya,
Japan, pp.611–615.
Biggs, M.C. (1971) ‘A new variable metric technique taking account of non-quadratic behaviour
of the objective function’, IMA Journal of Applied Mathematics, Vol. 8, No. 3, pp.315–327.
Bohachevsky, I.O., Johnson, M.E. and Stein, M.L. (1986) ‘General simulated annealing for
function optimization’, Technometrics, Vol. 28, No. 3, pp.209–217.
Boyer, D.O., Martínez, C.H. and Pedrajas, N.G. (2005) ‘Crossover operator for evolutionary
algorithms based on population features’, Journal of Artificial Intelligence Research, Vol. 24,
No. 1, pp.1–48.
Bard, Y. (1970) ‘Comparison of gradient methods for the solution of nonlinear parameter
estimation problems’, SIAM Journal on Numerical Analysis, Vol. 7, No. 1, pp.157–186.
Branin Jr., F.H. (1972) ‘Widely convergent method of finding multiple solutions of simultaneous
nonlinear equations’, IBM Journal of Research and Development, Vol. 16, No. 5,
pp.504-522.
Chen, Y. (2003) Computer Simulation of Electron Positron Annihilation Processes, Technical
Report SLAC-Report-646, Stanford Linear Accelerator Center, Stanford University [online]
http://www.slac.stanford.edu/pubs/slacreports/slac-r-646.html.
Chung, C.J. and Reynolds, R.G. (1998) ‘CAEP: an evolution-based tool for real-valued function
optimization using cultural algorithms’, International Journal on Artificial Intelligence Tool,
Vol. 7, No. 3, pp.239–291.
Clerc, M. (1999) The Swarm and the Queen, Towards a Deterministic and Adaptive Particle
Swarm Optimization, IEEE Congress on Evolutionary Computation, Washington DC, USA,
pp.1951–1957.
Corana, A., Marchesi, M., Martini, C. and Ridella, S. (1987) ‘Minimizing multimodal functions
of continuous variables with simulated annealing algorithms’, ACM Transactions on
Mathematical Software, Vol. 13, No. 3, pp.262–280.
Courrieu, P. (1997) ‘The hyperbell algorithm for global optimization: a random walk using
Cauchy densities’, Journal of Global Optimization, Vol. 10, No. 1, pp.111–133.
Cragg, E.E. and Levy, A.V. (1969) ‘Study on supermemory gradient method for the minimization
of functions’, Journal of Optimization Theory and Applications, Vol. 4, No. 3, pp.191–205.
Csendes, T. and Ratz, D. (1997) ‘Subdivision-direction selection in interval methods for global
optimization’, SIAM Journal on Numerical Analysis, Vol. 34, No. 3, pp.922–938.
Damavandi, N. and Safavi-Naeini, S. (2005) ‘A hybrid evolutionary programming method
for circuit optimization’, IEEE Transaction on Circuit and Systems I, Vol. 52, No. 5,
pp.902–910.
deVillers, N. and Glasser, D. (1981) ‘A continuation method for nonlinear regression’, SIAM
Journal on Numerical Analysis, Vol. 18, No. 6, pp.1139–1154.
Dixon, L.C.W. and Price, R.C. (1989) ‘The truncated Newton method for sparse unconstrained
optimisation using automatic differentiation’, Journal of Optimization Theory and
Applications, Vol. 60, No. 2, pp.261–275.
Dixon, L.C.W. and Szegó, G.P. (Eds.) (1978) Towards Global Optimization 2, Elsevier,
Boston/Dordrecht/London.
El-Attar, R.A., Vidyasagar, M. and Dutta, S.R.K. (1979) ‘An algorithm for II-norm minimization
with application to nonlinear II-approximation’, SIAM Journal on Numerical Analysis,
Vol. 16, No. 1, pp.70–86.
Fletcher, R. and Powell, M.J.D. (1963) ‘A rapidly convergent descent method
for minimization’, Computer Journal, Vol. 62, No. 2, pp.163–168 [online]
http://galton.uchicago.edu/∼lekheng/courses/302/classics/fletcher-powell.pdf.
Floudas, C.A., Pardalos, P.M., Adjiman, C.S., Esposito, W.R., Gümüş, Z.H., Harding, S.T.,
Klepeis, J.L., Meyer, C.A. and Schweiger, C.A. (1999) Handbook of Test Problems in Local
and Global Optimization, Kluwer, Boston.
Fraley, C. (1989) Software Performances on Nonlinear Least-Squares Problems, Technical Report
No. STAN-CS-89-1244, Department of Computer Science, Stanford University [online]
http://www.dtic.mil/dtic/tr/fulltext/u2/a204526.pdf.
Fu, M.C., Hu, J. and Marcus, S.I. (2006) ‘Model-based randomized methods for global
optimization’, Proc. 17th International Symp. Mathematical Theory Networks Systems,
Kyoto, Japan, pp.355–365.
GAMS World (2000) GLOBAL Library [online] http://www.gamsworld.org/global/globallib.html.
GEATbx – The Genetic and Evolutionary Algorithm Toolbox for Matlab [online]
http://www.geatbx.com/.
Goldstein, A.A. and Price, J.F. (1971) ‘On descent from local minima’, Mathematics and
Computation, Vol. 25, No. 115, pp.569–574.
Gordon, V.S. and Whitley, D. (1993) ‘Serial and parallel genetic algorithms as function
optimizers’, in S. Forrest (Ed.): 5th Intl. Conf. on Genetic Algorithms, pp.177–183, Morgan
Kaufmann.
Gould, N.I.M., Orban, D. and Toint, P.L. (2001) CUTEr, A Constrained and Un-constrained
Testing Environment, Revisited [online] http://cuter.rl.ac.uk/cuter-www/problems.html
(accessed 14 July 2012).
Griewank, A.O. (1981) ‘Generalized descent for global optimization’, Journal of Optimization
Theory and Applications, Vol. 34, No. 1, pp.11–39.
Hartman, J.K. (1972) Some Experiments in Global Optimization [online]
http://ia701505.us.archive.org/9/items/someexperimentsi00hart/someexperimentsi00hart.pdf
(accessed 15 August 2012).
Hedar, A-R. (n.d.) Global Optimization Test Problems [online]
http://www-optima.amp.i.kyoto-u.ac.jp/member/student/hedar/Hedar files/TestGO.htm
(accessed 17 August 2012).
Hennart, J.P. (Ed.) (1982) ‘Numerical analysis’, Proc. 3rd AS Workshop, Lecture Notes in
Mathematics, Vol. 90, Springer.
Himmelblau, D.M. (1972) Applied Nonlinear Programming, McGraw-Hill, New York.
Jennrich, R.I. and Sampson, P.F. (1968) ‘Application of stepwise regression to non-linear
estimation’, Technometrics, Vol. 10, No. 1, pp.63–72
http://www.jstor.org/discover/10.2307/
1266224?uid=3737864&uid=2129&uid=2&uid=70&uid=4&sid=21101664491701.
Junior, A.D., Silva, R.S., Mundim, K.C. and Dardenne, L.E. (2004) ‘Performance
and parameterization of the algorithm simplified generalized simulated
annealing’, Genet. Mol. Biol., Vol. 27, No. 4, pp.616–622 [online]
http://www.scielo.br/scielo.php?script=sci arttext&pid=S1415-47572004000400024
&lng=en&nrm=iso; ISSN 1415-4757 [online]
http://dx.doi.org/10.1590/S1415-47572004000400024.
Lavi, A. and Vogel, T.P. (Eds.) (1966) Recent Advances in Optimization Techniques, John Wiley
& Sons, New York.
Lootsma, F.A. (Ed.) (1972) Numerical Methods for Non-Linear Optimization, Academic Press,
London, New York.
Mishra, S.K. (2006a) Performance of Differential Evolution and Particle Swarm
Methods on Some Relatively Harder Multi-modal Benchmark Functions [online]
http://mpra.ub.uni-muenchen.de/449/ (accessed 14 August 2012).
Mishra, S.K. (2006b) Performance of the Barter, the Differential Evolution and the Simulated
Annealing Methods of Global Optimization on Some New and Some Old Test Functions
[online] http://www.ssrn.com/abstract=941630 (accessed 14 August 2012).
Mishra, S.K. (2006c) Repulsive Particle Swarm Method on Some Difficult Test Problems of
Global Optimization [online] http://mpra.ub.uni-muenchen.de/1742/ (accessed 14 August
2012).
Mishra, S.K. (2006d) Performance of Repulsive Particle Swarm Method in Global
Optimization of Some Important Test Functions: A Fortran Program [online]
http://www.ssrn.com/abstract=924339 (accessed 14 August 2012).
Mishra, S.K. (2006e) Global Optimization by Particle Swarm Method: A Fortran Program,
Munich Research Papers in Economics [online] http://mpra.ub.uni-muenchen.de/874/
(accessed 14 August 2012).
Mishra, S.K. (2006f) Global Optimization By Differential Evolution and Particle Swarm
Methods: Evaluation on Some Benchmark Functions, Munich Research Papers in Economics
[online] http://mpra.ub.uni-muenchen.de/1005/ (accessed 14 August 2012).
Mishra, S.K. (2006g) Some New Test Functions For Global Optimization and Performance of
Repulsive Particle Swarm Method [online] http://mpra.ub.uni-muenchen.de/2718/ (accessed
14 August 2012).
Moore, R.E. (1988) Reliability in Computing, Academic Press, San Diego, CA, USA.
Moré, J.J., Garbow, B.S. and Hillstrom, K.E. (1981) ‘Testing unconstrained optimization
software’, ACM Trans. on Mathematical Software, Vol. 7, No. 1, pp.17–41.
Munteanu, C. and Lazarescu, V. (1998) ‘Global search using a new evolutionary framework:
the adaptive reservoir genetic algorithm’, Complexity International, Vol. 5 [online]
http://www.complexity.org.au/ci/vol05/munteanu/munteanu.html (accessed 14 August 2012).
Neumaier, A. (2003) COCONUT Benchmark [online] http://www.mat.univie.ac.at/∼neum/
glopt/coconut/benchmark.html (accessed 14 August 2012).
Opačić, J. (1973) ‘A heuristic method for finding most extrema of a nonlinear functional’, IEEE
Transactions on Systems, Man and Cybernetics, Vol. 3, No. 1, pp.102–107.
Pintér, J.D. (1996) Global Optimization in Action: Continuous and Lipschitz Optimization
Algorithms, Implementations and Applications, Kluwer, Hingham, MA, USA.
Powell, M.J.D. (1962) ‘An iterative method for finding stationary values of a function
of several variables’, Computer Journal, Vol. 5, No. 2, pp.147–151 [online]
http://comjnl.oxfordjournals.org/content/5/2/147.full.pdf.
Powell, M.J.D. (1964) ‘An efficient method for finding the minimum of a function for several
variables without calculating derivatives’, Computer Journal, Vol. 7, No. 2, pp.155–162.
Price, K.V., Storn, R.M. and Lampinen, J.A. (2005) Differential Evolution: A Practical Approach
to Global Optimization, Springer-Verlag New York, Inc. Secaucus, NJ, USA.
Price, W.L. (1977) ‘A controlled random search procedure for global
optimisation’, Computer Journal, Vol. 20, No. 4, pp.367–370 [online]
http://comjnl.oxfordjournals.org/content/20/4/367.full.pdf.
Qing, A. (2006) ‘Dynamic differential evolution strategy and applications in electromagnetic
inverse scattering problems’, IEEE Transactions on Geoscience and Remote Sensing,
Vol. 44, No. 1, pp.116–125.
Rónkkónen, J. (2009) Continuous Multimodal Global Optimization With Differential
Evolution-Based Methods, PhD thesis, Lappeenranta University of Technology.
Rahnamyan, S., Tizhoosh, H.R. and Salama, N.M.M. (2007a) ‘A novel population initialization
method for accelerating evolutionary algorithms’, Computers and Mathematics with
Applications, Vol. 53, No. 10, pp.1605–1614.
Rahnamyan, S., Tizhoosh, H.R. and Salama, N.M.M. (2007b) ‘Opposition-based differential
evolution (ODE) with variable jumping rate’, IEEE Symposium Foundations Computation
Intelligence, Honolulu, HI, pp.81–88.
Rao, S.S. (2009) Engineering Optimization: Theory and Practice, John Wiley & Sons, Hoboken,
New Jersey, USA.
Rosenbrock, H.H. (1960) ‘An automatic method for finding the greatest or least
value of a function’, Computer Journal, Vol. 3, No. 3, pp.175–184 [online]
http://comjnl.oxfordjournals.org/content/3/3/175.full.pdf.
Salomon, R. (1996) ‘Re-evaluating genetic algorithm performance under coordinate rotation
of benchmark functions: a survey of some theoretical and practical aspects of genetic
algorithms’, BioSystems, Vol. 39, No. 3, pp.263–278.
Schaffer, J.D., Caruana, R.A., Eshelman, L.J. and Das, R. (1989) ‘A study of control parameters
affecting online performance of genetic algorithms for function optimization’, Proc. 3rd
International Conf. on Genetic Algorithms, George Mason Uni., pp.51–60.
Schumer, M.A. and Steiglitz, K. (1968) ‘Adaptive step size random search’, IEEE Transactions
on Automatic Control, Vol. 13, No. 3, pp.270–276.
Schwefel, H.P. (1981) Numerical Optimization for Computer Models, John Wiley & Sons, New
York, NY, USA.
Schwefel, H.P. (1995) Evolution and Optimum Seeking, John Wiley & Sons, New York, NY,
USA.
Shanno, D.F. (1970) ‘Conditioning of Quasi-Newton methods for function minimization’,
Mathematics of Computation, Vol. 24, No. 111, pp.647–656.
Silagadze, Z.K. (2007) ‘Finding two-dimensional peaks’, Physics of Particles and Nuclei Letters,
Vol. 4, No. 1, pp.73–80.
Storn, R. and Price, K. (1996) Differential Evolution – A Simple and Efficient
Adaptive Scheme for Global Optimization over Continuous Spaces, Technical Report
No. TR-95-012, International Computer Science Institute, Berkeley, CA [online]
http://www1.icsi.berkeley.edu/∼storn/TR-95-012.pdf.
Suganthan, P.N., Hansen, N., Liang, J.J., Deb, K., Chen, Y-P., Auger, A. and Tiwari, S.
(2005) Problem Definitions and Evaluation Criteria for CEC 2005, Special Session on
Real-Parameter Optimization, Nanyang Technological University (NTU), Singapore, Tech.
Rep. [online] http://www.lri.fr/∼hansen/Tech-Report-May-30-05.pdf.
Tang, K., Yao, X., Suganthan, P.N., MacNish, C., Chen, Y-P., Chen, C-M. and Yang, Z. (2008)
Benchmark Functions for the CEC 2008 Special Session and Competition on Large Scale
Global Optimization, Tech. Rep. [online] http://nical.ustc.edu.cn/cec08ss.php.
Tang, K., Li, X., Suganthan, P.N., Yang, Z. and Weise, T. (2010) Benchmark Functions for the
CEC 2010 Special Session and Competition on Large-Scale Global Optimization, Tech. Rep.
[online] http://sci2s.ugr.es/eamhco/cec2010 functions.pdf.
Test Problems for Global Optimization [online]
http://www2.imm.dtu.dk/∼kajm/Test ex forms/test ex.html.
The Cross-Entropy Toolbox [online] http://www.maths.uq.edu.au/CEToolBox/.
Wayburn, T.L. and Seader, J.D. (1987) ‘Homotopy continuation methods for computer-aided
process design’, Computers and Chemical Engineering, Vol. 11, No. 1, pp.7–25.
Whitley, D., Mathias, K., Rana, S. and Dzubera, J. (1996) ‘Evaluating evolutionary algorithms’,
Artificial Intelligence, Vol. 85, Nos. 1–2, pp.245–276.
Winston, P.H. (1992) Artificial Intelligence, 3rd ed., Addison-Wesley, Boston, MA, USA.
Yang, X.S. (2010a) ‘Test problems in optimization’, Engineering Optimization: An Introduction
with Metaheuristic Applications, John Wiley & Sons [online] http://arxiv.org/abs/1008.0549.
Yang, X.S. (2010b) ‘Firefly algorithm, stochastic test functions and design optimisation’, Intl. J.
Bio-Inspired Computation, Vol. 2, No. 2, pp.78–84 [online] http://arxiv.org/abs/1008.0549.
Yao, X. and Liu, Y. (1996) ‘Fast evolutionary programming’, Proc. 5th Conf. on Evolutionary
Programming.