Artificial Bee Colony Optimization: A New Selection Scheme and Its Performance
Abstract. The artificial bee colony optimization (ABC) is a population-based algorithm
for function optimization that is inspired by the foraging behaviour of bees.
The population consists of two types of artificial bees: employed bees (EBs), which
scout for new good solutions in the search space, and onlooker bees (OBs), which search
in the neighbourhood of solutions found by the EBs. In this paper we study the influence
of the population size on the optimization behaviour of ABC. Moreover,
we investigate when it is advantageous to use OBs. We also propose two variants of
ABC which use new methods for the position update of the artificial bees. Empirical
tests were performed on a set of benchmark functions. Our findings show that
the ideal population size, and whether it is advantageous to use OBs, depends on the
hardness of the optimization goal. Additionally, the newly proposed variants of
ABC outperform the standard ABC significantly on all test functions. In comparison
to several other optimization algorithms the best ABC variant performs better than or at
least as well as all reference algorithms in most cases.
1 Introduction
Swarm intelligence [6] is a subfield of biologically inspired computation that applies
concepts found in the collective behaviour of swarms, such as social insects, to problems
in various domains such as robotics or optimization [5]. In recent years a number
of bee-inspired optimization methods have been proposed (the interested reader
can refer to Baykasoglu et al.'s overview of bee-inspired optimization methods [3]).
One behaviour of honey bees that has inspired optimization methods is foraging.
Although it is a decentralized process that works on the basis of decisions of
Andrej Aderhold · Konrad Diwold · Alexander Scheidler · Martin Middendorf
Department of Computer Science, University of Leipzig, Johannisgasse 26,
04103 Leipzig, Germany
e-mail: [email protected],
{kdiwold,scheidler,middendorf}@informatik.uni-leipzig.de
J.R. González et al. (Eds.): NICSO 2010, SCI 284, pp. 283–294, 2010.
springerlink.com © Springer-Verlag Berlin Heidelberg 2010
individual bees, a colony is still able to maintain a good ratio of exploitation and
exploration of food sources and can adapt to changing needs for food if necessary
[4]. The waggle dance has been identified as a communication mechanism that
allows scout bees that have found a food site to promote this site to other foragers [19].
Besides distance and direction to a site, a bee can also encode its quality. Utilizing
this mechanism, foragers can distribute themselves over the available resources according
to their profitability. A recent study has shown [7] that recruitment strategies as they are
used in honeybees are especially beneficial if resources are of poor quality, few in
number, and of variable quality.
The artificial bee colony optimization algorithm (ABC) is an algorithm that is
inspired by principles of the foraging behaviour of honeybees and was introduced
by Karaboga [9] in 2005. The ABC algorithm has been applied to various problem
domains including the training of artificial neural networks [11, 16], the design of
digital infinite impulse response (IIR) filters [10], solving constrained optimization
problems [13], and the prediction of the tertiary structures of proteins [2]. Its
optimization performance has been tested and compared to other optimization meth-
ods such as Genetic Algorithms (GA), Particle Swarm Optimization (PSO), Particle
Swarm Inspired Evolutionary Algorithm (PS-EA), Differential Evolution (DE), and
different evolutionary strategies [1, 12, 14, 15].
The ABC algorithm works with a population of artificial bees. The bees are divided
into two groups: one group of bees, called employed bees (EBs), is responsible for
finding new promising solutions, and the other group, called onlooker bees (OBs),
performs local search at these solutions. It should be mentioned that the EBs are
sometimes further divided into two subgroups: EBs that stay at a location and EBs
that search for a new location. The latter are called scouts.
In this paper we study a central aspect of ABC which has not been studied before:
the influence of the size of the bee population and of the ratio between
the number of employed bees and onlooker bees on the performance of the algorithm.
Moreover, we propose two variants of the standard ABC algorithm that use
new methods for the selection of new locations. The performance of the new variants
of ABC and of the standard ABC is tested against several other population-based
optimization heuristics.
This article is structured as follows. In Section 2 the ABC is described. The new
variants of ABC are introduced in Section 3. The experimental setup is given in
Section 4 and the experimental results are described in Section 5. Conclusions are
given in Section 6.
2 The ABC Algorithm
In ABC each position in the search space corresponds to a food source (solution) that the artificial bees can exploit. The
quality of a food source is given by the value of the function to be optimized at
the corresponding location. Initially the EBs scout and each EB decides to exploit
a food source it has found. The number of EBs thus corresponds to the number of
food sources that are currently exploited in the system. EBs communicate their food
sources to the OBs. Based on the quality of a food source the OBs decide whether
or not to visit it. Good food sources will attract more OBs. Once an OB has chosen
a food source it flies there and tries to find a better location in its neighborhood by
using a local search strategy. If the quality of a new location found by the OB is
better than the quality of the location originally communicated by the corresponding
EB, the EB will change its location and promote the new food source. Otherwise,
the EB remains on its current food source. If the solution of an EB has not been
improved for a certain number of steps the EB will abandon the food source and
scout for a new one (i.e., it decides for a new food source in search space).
More formally: Given a dim dimensional function F and a population of n virtual
bees consisting of neb employed bees and nob onlooker bees (i.e., n = neb + nob ).
Initially and when scouting an EB i (i ∈ [1 . . . neb ]) is placed on a randomly chosen
position pi = (xi1 , . . . , xidim ) in the search space. At the beginning of an iteration each
EB i tries to improve its current position by creating a new candidate position p∗i
using the following local search rule

    x∗ij = xij + rand(−1, 1) · (xij − xkj),    (1)

where k ∈ {1, . . . , neb}, k ≠ i, is a randomly chosen reference EB, j ≤ dim is a
randomly chosen dimension, and all other dimensions of p∗i equal those of pi. If
F(p∗i) is better than F(pi) the EB moves to the new position; otherwise it keeps its
old position. Each OB then chooses the location of an EB i with a probability
proportional to its quality

    Pi = F(pi) / ∑_{k=1}^{neb} F(pk).    (3)
After an OB has chosen the location of an EB i it tries to find a better location using
Equation 1. In response, the corresponding EB updates its position as described be-
fore in case the OB has found a better location. The algorithm monitors the number
of steps an EB remains on the same position. When the number of steps an EB has
spent at the same location reaches a limit l ≥ 1 the EB abandons its position and
scouts for a new one. In [15] the impact of l was investigated and l = ne · dim was
proposed as a good value. The algorithm stops when a certain stopping criterion (e.g.,
a maximum number of iterations is reached, or a sufficiently good function value has been found) is met. An
outline of ABC is given in Algorithm 6.
For the standard ABC algorithm it was defined that the number of employed bees
equals the number of onlooker bees, i.e., neb = nob = n/2. Thus the algorithm ABC
depends only on the parameters n and l. In [15] experiments with different population
sizes n were performed, with the conclusion that a population size of 50-100 bees
can provide reasonable convergence behaviour. The parameter l determines how fast
solutions are abandoned. In [15] it is argued that l = ne · dim shows better perfor-
mance than very high or low values of l. In a very recent study on ABC parameter
tuning [1] Akay and Karaboga concluded that for small colony sizes l = ne · dim
might not be sufficient, as the algorithm is not able to explore EB solutions enough
before they are abandoned. Hence, it is suggested to use higher values of l for small
colonies.
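To make the procedure above concrete, the following Python sketch implements the described loop: the employed-bee phase with the local search rule of Equation 1, the onlooker phase with quality-proportional selection as in Equation 3, and scouting after l unsuccessful steps. It is an illustration of the description above, not the authors' code; the fitness transform 1/(1 + F) used for the onlooker probabilities (to handle minimization) and the clipping to the search range are our assumptions.

```python
import random

def abc_minimize(f, dim, bounds, n_eb=15, n_ob=15, limit=None, iters=200):
    """Sketch of the ABC loop described above for minimizing f.
    The fitness transform 1/(1+f) and the bound clipping are
    assumptions of this sketch, not taken from the paper."""
    lo, hi = bounds
    limit = limit if limit is not None else n_eb * dim  # abandon limit l
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_eb)]
    val = [f(p) for p in pos]
    trials = [0] * n_eb
    best_v = min(val)
    best_p = pos[val.index(best_v)][:]

    def candidate(i):
        # Equation 1: perturb one random dimension relative to a
        # randomly chosen reference EB k != i.
        k = random.choice([j for j in range(n_eb) if j != i])
        j = random.randrange(dim)
        cand = pos[i][:]
        cand[j] += random.uniform(-1, 1) * (pos[i][j] - pos[k][j])
        cand[j] = min(max(cand[j], lo), hi)
        return cand

    def greedy_update(i, cand):
        nonlocal best_p, best_v
        v = f(cand)
        if v < val[i]:                      # keep the better position
            pos[i], val[i], trials[i] = cand, v, 0
            if v < best_v:
                best_p, best_v = cand[:], v
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_eb):               # employed-bee phase
            greedy_update(i, candidate(i))
        weights = [1.0 / (1.0 + v) for v in val]  # assumed fitness transform
        for _ in range(n_ob):               # onlooker phase (Equation 3)
            i = random.choices(range(n_eb), weights=weights)[0]
            greedy_update(i, candidate(i))
        for i in range(n_eb):               # scout phase: abandon after limit
            if trials[i] >= limit:
                pos[i] = [random.uniform(lo, hi) for _ in range(dim)]
                val[i], trials[i] = f(pos[i]), 0
                if val[i] < best_v:
                    best_p, best_v = pos[i][:], val[i]
    return best_p, best_v
```

For example, `abc_minimize(lambda x: sum(t * t for t in x), 2, (-100, 100))` minimizes the two-dimensional Sphere function.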
3 ABC Variants
The variants of ABC that are proposed in this section concern the selection of reference
EBs when OBs and EBs generate candidate solutions according to Equation
1. In the standard ABC algorithm the reference EBs are selected randomly with
uniform distribution. A potential disadvantage is that the location of the chosen ref-
erence EB might not fit well to the current location of the bee. The two modifications
of the reference selection rule that are proposed in the following aim to overcome
this problem.
Including global best solution as reference (ABCgBest ). In the proposed ABCgBest
the global best solution found so far is used in addition to the randomly chosen
reference EB in order to generate new candidate solutions. Note that this has some
similarity to Particle Swarm Optimization (PSO), where the globally best particle
influences the position updates of the particles [17].
To incorporate the global best solution, Equation 1 is altered as follows

    x∗ij = xij + rand(−1, 1) · (xij − xkj) + rand(0, 1) · (xbest, j − xij),    (4)

where pi denotes the bee's current position, k refers to the randomly chosen reference
EB pk, best refers to the best position pbest found so far, and j ≤ dim denotes a random
dimension. To make sure that the global best term in Equation 4 always points
towards the global best reference, rand(0, 1) was used (instead of rand(−1, 1)).
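As an illustration, the altered candidate generation of Equation 4 can be written as follows. This is a sketch under the assumptions that, as in Equation 1, only one randomly chosen dimension is modified, and that candidates are clipped to the search range.

```python
import random

def gbest_candidate(p_i, p_k, p_best, lo, hi):
    """ABCgBest candidate generation (Equation 4): the usual rand(-1,1)
    step relative to the reference EB plus a rand(0,1)-weighted pull
    towards the global best position. Clipping to [lo, hi] is an
    assumption of this sketch."""
    j = random.randrange(len(p_i))  # random dimension j <= dim
    cand = list(p_i)
    cand[j] = (p_i[j]
               + random.uniform(-1, 1) * (p_i[j] - p_k[j])
               + random.uniform(0, 1) * (p_best[j] - p_i[j]))
    cand[j] = min(max(cand[j], lo), hi)
    return cand
```

Because the global-best term uses rand(0, 1), its contribution always has the sign of (p_best[j] − p_i[j]), i.e., it points towards the best solution found so far.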
ABCgBest with additional distance based reference selection (ABCgBestDist ). Be-
sides including the global best reference in the generation of candidate solutions, in
this modification the distance between the current location and a potential reference
EB influences the selection probability. Therefore, instead of using the same probability
for all reference EBs, an EB (or OB) at position pi chooses the reference EB
k ∈ {1, . . . , neb} with k ≠ i according to the following probability

    Pk = (1/dist(pi, pk)) / ∑_{j=1, j≠i}^{neb} (1/dist(pi, pj)).    (5)
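A possible implementation of this distance-based selection is sketched below; the use of the Euclidean distance and the small epsilon that guards against coincident positions are assumptions of the sketch.

```python
import math
import random

def pick_reference(positions, i):
    """Distance-based reference selection of ABCgBestDist (Equation 5):
    EB k != i is chosen with probability proportional to
    1 / dist(p_i, p_k), so nearby EBs are preferred. The Euclidean
    distance and the epsilon guard are assumptions of this sketch."""
    eps = 1e-12
    candidates = [k for k in range(len(positions)) if k != i]
    weights = [1.0 / (math.dist(positions[i], positions[k]) + eps)
               for k in candidates]
    return random.choices(candidates, weights=weights)[0]
```

With this rule a bee close to the choosing bee is selected far more often than a distant one, so the reference tends to fit the bee's current region of the search space.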
4 Experimental Setup
The performance of ABC, the proposed ABC modifications, and other reference
algorithms was tested on several standard benchmark problems (see Table 1 for
details). The following five algorithms were used as reference algorithms: the
Particle Swarm Optimization (PSO) algorithm from [22], two forms of the hierarchical
PSO (H-PSO and VH-PSO) from [8], the differential evolution (DE) algorithm
from [18, 21], and the Ant Colony Optimization algorithm for continuous functions
(ACOR ) from [20]. The parameter values that were used for these algorithms have
been adopted from the given references (see Table 2).
Table 1 Test function names and equations (F), domain space range (R), a standard optimization
goal (Gstd) that is often used in the literature, and a harder optimization goal (Ghrd).
The hard goals were chosen in such a way that a standard ABC (with n = 100) needs
approximately 10^5 function evaluations to reach them. The dimension of the test functions
was dim = 30, with the exception of Schaffer's F6, where dimension dim = 2 was used

Function | F | R | Gstd | Ghrd
Schaffer's F6 | fsc(x) = 0.5 + (sin^2(sqrt(x1^2 + x2^2)) − 0.5) / (1 + 0.001(x1^2 + x2^2))^2 | [−100; 100]^2 | 10^−5 | 10^−25
Sphere | fsp(x) = ∑_{i=1}^{n} xi^2 | [−100; 100]^n | 0.01 | 10^−10
Griewank | fgr(x) = (1/4000) ∑_{i=1}^{n} xi^2 − ∏_{i=1}^{n} cos(xi / sqrt(i)) + 1 | [−600; 600]^n | 0.1 | 10^−9
Rastrigin | frg(x) = ∑_{i=1}^{n} (xi^2 − 10 cos(2π xi) + 10) | [−5.12; 5.12]^n | 100 | 10^−7
Rosenbrock | frn(x) = ∑_{i=1}^{n−1} (100(x_{i+1} − xi^2)^2 + (xi − 1)^2) | [−30; 30]^n | 100 | 1
Ackley | fac(x) = −20 exp(−0.2 sqrt((1/n) ∑_{i=1}^{n} xi^2)) − exp((1/n) ∑_{i=1}^{n} cos(2π xi)) + 20 + e | [−32; 32]^n | 0.1 | 10^−7
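For reference, the benchmark functions of Table 1 can be written in Python as follows (Schaffer's F6 takes a two-dimensional input, the others an input of arbitrary dimension); the definitions follow the standard forms of these benchmarks.

```python
import math

def sphere(x):
    """Sphere: sum of squares, global minimum 0 at the origin."""
    return sum(v * v for v in x)

def rastrigin(x):
    """Rastrigin: highly multimodal, global minimum 0 at the origin."""
    return sum(v * v - 10 * math.cos(2 * math.pi * v) + 10 for v in x)

def rosenbrock(x):
    """Rosenbrock: narrow curved valley, global minimum 0 at (1, ..., 1)."""
    return sum(100 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1) ** 2
               for i in range(len(x) - 1))

def griewank(x):
    """Griewank: many widespread local minima, global minimum 0 at origin."""
    s = sum(v * v for v in x) / 4000
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1

def ackley(x):
    """Ackley: nearly flat outer region, global minimum 0 at the origin."""
    n = len(x)
    a = -20 * math.exp(-0.2 * math.sqrt(sum(v * v for v in x) / n))
    b = -math.exp(sum(math.cos(2 * math.pi * v) for v in x) / n)
    return a + b + 20 + math.e

def schaffer_f6(x):
    """Schaffer's F6 (2D): global minimum 0 at the origin."""
    s = x[0] ** 2 + x[1] ** 2
    return 0.5 + (math.sin(math.sqrt(s)) ** 2 - 0.5) / (1 + 0.001 * s) ** 2
```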
Table 2 Setting of control parameters used in the final experiment: n is the population size,
swarm size, or colony size, respectively; neb is the number of employed bees; nob is the number
of onlooker bees; l is the abandon limit; dim is the dimension of the problem function; ω is
the inertia weight; c1, c2 are the constriction factors; CR is the crossover rate; F is the scaling
factor; k is the archive size; q is the locality of search; ε is the convergence speed

ABC: n = 30, neb = 15, nob = 15, l = dim · ne
PSO: n = 40, ω = 0.6, c1 = c2 = 1.7
H-PSO: n = 31, ω = 0.6, c1 = c2 = 1.7
VH-PSO: n = 31, ω = [0.729; 0.4], c1 = c2 = 1.7
DE: n = 50, CR = 0.8, F = 0.5
ACOR: n = 2, k = 50, q = 0.1, ε = 0.85
All test runs were repeated 100 times. The number of function evaluations that
each algorithm required to reach the specified goal — the standard optimization goal
(Gstd ) and the hard optimization goal (Ghrd ) as given in Table 1 — was recorded for
each run. To evaluate the significance of the observed performance differences the
algorithms were tested pairwise against each other using a one-sided Wilcoxon
Rank Sum Test with a significance level of α = 0.05.
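For illustration, the one-sided test can be sketched as follows using the normal approximation of the Wilcoxon rank-sum statistic (without tie correction); this is a sketch of the kind of test used, not the authors' exact implementation.

```python
import math
from statistics import NormalDist

def ranksum_p_less(x, y):
    """One-sided Wilcoxon rank-sum test (normal approximation, no tie
    correction): p-value for the alternative 'values in x tend to be
    smaller than values in y'."""
    n1, n2 = len(x), len(y)
    ranked = sorted(x + y)
    # assign average ranks to tied values
    rank_of = {}
    i = 0
    while i < len(ranked):
        j = i
        while j < len(ranked) and ranked[j] == ranked[i]:
            j += 1
        rank_of[ranked[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    w = sum(rank_of[v] for v in x)            # rank sum of sample x
    mu = n1 * (n1 + n2 + 1) / 2               # mean of W under H0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return NormalDist().cdf((w - mu) / sigma)
```

A p-value below α = 0.05 would indicate that the first algorithm needs significantly fewer function evaluations than the second.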
5 Results
[Figure 1: six panels (Schaffer, Sphere, Griewank, Rastrigin, Rosenbrock, Ackley), each plotting the mean best value (y-axis) against the number of function evaluations (x-axis) for population sizes n = 10, 30, 60, 100, 120, 140.]
Fig. 1 ABC population size test: Comparing improvement of the mean best solution (y-axis) per
iteration (x-axis) for different population sizes n over 300000 function evaluations; standard
ABC settings are used except for l: to avoid very small limit values with small population
sizes, l = 100 if ne · dim < 100
Small populations (i.e., n = 10) show a fast convergence at the beginning of the optimization process
(i.e., in the first 20 000 evaluation steps). However, larger populations perform
better in later stages of the optimization process. Only for the Sphere function do very
small populations perform best throughout the whole optimization process; but this
is a very simple optimization function. For the more complex functions such as Schaffer,
Griewank, Rastrigin, and Ackley, population sizes 30-60 perform best for more
than 700000 evaluations. Only for the Ackley function is the population size 100
best for a higher number of function evaluations. Thus, our results suggest that a
population size of 30-60 seems good for many test functions. This is a slightly
smaller population size than recommended in [12, 15].
Table 3 ABC with different numbers of employed bees neb, with or without onlooker
bees nob, for the standard optimization goal Gstd and the hard optimization goal Ghrd. Mean
number of function evaluations (mean) to reach the goal for the six test functions and significance
(sig) comparing the ABC with and without onlooker bees and with the same number
of neb; 'X' denotes significantly better; '-' denotes not significantly better
As can be seen, the performance regarding the number of OBs differs. Using OBs
increases the performance of the algorithm significantly for the standard optimization
goal Gstd in 5 of 6 test functions for both numbers of EBs. But this is not the case
for the hard optimization goal Ghrd. For the case of 15 EBs the algorithm containing
no OBs performs significantly better for three of the six test functions. Only for the
Rastrigin function is the algorithm with OBs able to perform significantly better.
For two test functions no significant difference can be established. When 50 EBs
are used the algorithm with no OBs performs significantly better in 3 of the 6 test
cases, whereas the algorithm with OBs performs significantly better for only two
test functions (Rastrigin and Rosenbrock). For one function no statistically significant
difference can be established.
These results suggest that the advantage of using OBs in the ABC algorithm is
not so clear for the hard optimization goal Ghrd, while OBs are advantageous in
most cases when only the standard optimization goal Gstd is given. This questions
the standard rule of setting half of the population to be OBs. A
more detailed analysis is necessary to fully understand the impact of OBs on the
algorithm's performance, and under what conditions they are useful and when not.
Table 4 Mean number of function evaluations to reach the standard goal Gstd for ABC,
ABCgBest , and ABCgBestDist ; Population size n = 30, number of EBs neb = 15, number of
OBs nob = 15; For each test function the significance between each pair of algorithms is
shown, ’X’ denotes that the algorithm in the corresponding line is significantly better than
the algorithm in the corresponding row, ’-’ denotes no significance
Function | Method | Mean | Significance vs ABC / ABCgBest / ABCgBestDist (entry for the row's own method omitted)
Schaffer ABC 19038 – –
ABCgBest 6680 X –
ABCgBestDist 6377 X –
Sphere ABC 7773 – –
ABCgBest 6509 X –
ABCgBestDist 6245 X X
Griewank ABC 10160 – –
ABCgBest 9020 X –
ABCgBestDist 8680 X X
Rastrigin ABC 3218 – –
ABCgBest 2506 X –
ABCgBestDist 2466 X –
Rosenbrock ABC 9800 – –
ABCgBest 6682 X –
ABCgBestDist 7049 X –
Ackley ABC 15759 – –
ABCgBest 10118 X –
ABCgBestDist 10038 X –
[Figure 2: six boxplot panels (Schaffer, Sphere, Griewank, Rastrigin, Rosenbrock, Ackley) showing the mean function evaluations (y-axis) of ABC, ABCgBestDist, ACOR, DE, H-PSO, PSO, and VH-PSO; ACOR is absent from the Rosenbrock and Ackley panels.]
Fig. 2 Boxplots of the number of function evaluations needed to reach the standard optimization
goal Gstd for ABC, ABCgBestDist, PSO, H-PSO, VH-PSO, DE and ACOR. Results for
ACOR are omitted when it was not able to reach the optimization goal in 500000 function
evaluations
In the comparisons that follow, OBs have been used in the tests
for ABC and the proposed variants. Table 4 shows the mean number of function
evaluations and pairwise significance tests for the standard ABC and the proposed
modifications ABCgBest and ABCgBestDist on the test functions.
The results show that the proposed variants of ABC — ABCgBest and ABCgBestDist
— improve the performance of ABC on all test functions significantly. ABCgBestDist
is able to enhance the performance of ABCgBest on two test functions (i.e., Sphere
and Griewank). In the other cases no significant difference between the two ABC
variants could be observed.
The standard ABC algorithm and ABCgBestDist , the best performing ABC variant,
were tested against 5 reference algorithms for the standard optimization goal Gstd .
Figure 2 depicts boxplots of the number of function evaluation for each algorithm
on each test function.
In terms of the necessary number of function evaluations the proposed ABC variant
ABCgBestDist performs significantly better than all the reference algorithms for
two test functions (i.e., Ackley, Rosenbrock). For two test functions (i.e., Sphere and
Schaffer) its performance is on par with the performance of PSO and H-PSO,
respectively. For the remaining two test functions (i.e., Griewank and
Rastrigin) the hierarchical PSO variant H-PSO outperforms all other algorithms
significantly; ABCgBestDist is the second best algorithm for these test functions.
6 Conclusion
In this paper we have proposed two variants, called ABCgBest and ABCgBestDist,
of the artificial bee colony optimization (ABC) algorithm. Both variants concern
the selection of the reference locations that influence the position update of
the artificial bees. Moreover, we investigated the influence of the colony size and
of the relative number of so-called onlooker bees in the artificial bee population on
the optimization performance. Experimental results for six standard benchmark test
functions suggest that ABC performs better with a smaller population size than used
in a standard ABC setup. However, it was also shown that the ideal population size
depends on the optimization goal: for harder optimization goals larger populations
seem to be advantageous. Whether it is advantageous to use onlooker bees also depends
on the optimization goal. For weaker optimization goals using OBs was advantageous
for all test functions, but for the harder optimization goals it was in
most cases better not to use OBs. This questions the standard division of the population
of ABC into an equal number of EBs and OBs. The proposed ABC variants
ABCgBest and ABCgBestDist performed better than the standard ABC on all test functions,
and ABCgBestDist performed slightly better than ABCgBest. In comparison to other
optimization algorithms ABCgBestDist was better than or at least as good as all tested
algorithms on all test functions except two, for which H-PSO performed better.
Acknowledgments
This work was supported by the Human Frontier Science Program Research Grant "Optimization in natural systems: ants, bees and slime moulds".
References
[1] Akay, B., Karaboga, D.: Parameter tuning for the artificial bee colony algorithm. In:
Nguyen, N.T., Kowalczyk, R., Chen, S.-M. (eds.) ICCCI 2009. LNCS, vol. 5796, pp.
608–619. Springer, Heidelberg (2009)
[2] Bahamish, H.A.A., Abdullah, R., Salam, R.A.: Protein tertiary structure prediction us-
ing artificial bee colony algorithm. In: Asia International Conference on Modelling &
Simulation, pp. 258–263 (2009)
[3] Baykasoglu, A., Oezbakir, L., Tapkan, P.: Artificial Bee Colony Algorithm and Its Ap-
plication to Generalized Assignment Problem. In: Swarm Intelligence: Focus on Ant
and Particle Swarm Optimization, pp. 113–144. Itech Education and Publishing (2007)
[4] Biesmeijer, J.C., de Vries, H.: Exploration and exploitation of food sources by social
insect colonies: a revision of the scout-recruit concept. Behavioral Ecology and Socio-
biology 49, 89–99 (2001)
[5] Blum, C., Merkle, D. (eds.): Swarm Intelligence: Introduction and Applications.
Springer, Heidelberg (2008)
[6] Bonabeau, E., Dorigo, M., Theraulaz, G.: Swarm intelligence: from natural to artificial
systems. Oxford University Press, Oxford (1999)
[7] Dornhaus, A., Kluegl, F., Oechslein, C., Puppe, F., Chittka, L.: Benefits of recruitment in
honey bees: effects of ecology and colony size in an individual-based model. Behavioral
Ecology (2006)
[8] Janson, S., Middendorf, M.: A hierarchical particle swarm optimizer and its adaptive
variant. IEEE Transactions on Systems, Man, and Cybernetics – Part B: Cybernetics 35,
1272–1283 (2005)
[9] Karaboga, D.: An idea based on honey bee swarm for numerical optimization. Tech.
rep., Erciyes University, Engineering Faculty (2005)
[10] Karaboga, D.: A new design method based on artificial bee colony algorithm for digital
IIR filters. Journal of the Franklin Institute 346(4), 328–348 (2009)
[11] Karaboga, D., Akay, B.: Artificial bee colony (abc) algorithm on training artificial neu-
ral networks. In: IEEE 15th Signal Processing and Communications Applications, pp.
1–4 (2007)
[12] Karaboga, D., Akay, B.: A comparative study of artificial bee colony algorithm. Applied
Mathematics and Computation 214(1), 108–132 (2009)
[13] Karaboga, D., Basturk, B.: Artificial bee colony (ABC) optimization algorithm for
solving constrained optimization problems. In: Melin, P., Castillo, O., Aguilar, L.T.,
Kacprzyk, J., Pedrycz, W. (eds.) IFSA 2007. LNCS (LNAI), vol. 4529, p. 789. Springer,
Heidelberg (2007)
[14] Karaboga, D., Basturk, B.: A powerful and efficient algorithm for numerical func-
tion optimization: artificial bee colony (ABC) algorithm. Journal of Global Optimiza-
tion 39(3), 459–471 (2007)
[15] Karaboga, D., Basturk, B.: On the performance of artificial bee colony (ABC) algo-
rithm. Applied Soft Computing 8(1), 687–697 (2008)
[16] Karaboga, D., Akay, B., Ozturk, C.: Artificial bee colony (ABC) optimization algorithm
for training feed-forward neural networks. In: Torra, V., Narukawa, Y., Yoshida, Y. (eds.)
MDAI 2007. LNCS (LNAI), vol. 4617, p. 318. Springer, Heidelberg (2007)
[17] Kennedy, J., Eberhart, R.: Particle swarm optimization. In: Proc. IEEE International
Conference on Neural Networks, vol. 4, pp. 1942–1948 (1995)
[18] Krink, T., Filipic, B., Fogel, G., Thomsen, R.: Noisy optimization problems - a partic-
ular challenge for differential evolution? In: Proc. Congress on Evolutionary Computa-
tion. IEEE Press, Los Alamitos (2004)
[19] Seeley, T.D.: The wisdom of the hive. Harvard University Press, Cambridge (1995)
[20] Socha, K., Dorigo, M.: Ant colony optimization for continuous domains. European
Journal of Operational Research 185(3), 1155–1173 (2008)
[21] Storn, R., Price, K.: Differential evolution - a simple and efficient heuristic for global op-
timization over continuous spaces. Journal of Global Optimization 11, 341–359 (1997)
[22] Trelea, I.C.: The particle swarm optimization algorithm: convergence analysis and pa-
rameter selection. IPL: Information Processing Letters 85, 317–325 (2003)