
Applied Soft Computing 26 (2015) 454–462

Contents lists available at ScienceDirect

Applied Soft Computing


journal homepage: www.elsevier.com/locate/asoc

A directed artificial bee colony algorithm


Mustafa Servet Kıran a,∗ , Oğuz Fındık b
a Department of Computer Engineering, Faculty of Engineering, University of Selcuk, 42075 Konya, Turkey
b Department of Computer Engineering, Faculty of Engineering and Architecture, Abant İzzet Baysal University, 14280 Bolu, Turkey

Article info

Article history:
Received 23 May 2014
Received in revised form 11 September 2014
Accepted 9 October 2014
Available online 24 October 2014

Keywords:
Swarm intelligence
Artificial bee colony
Direction information
Numerical optimization

Abstract

The artificial bee colony (ABC) algorithm, inspired by the collective behavior of honey bee colonies, was introduced for solving numerical optimization problems. The ABC algorithm has three phases, named the employed bee, onlooker bee and scout bee phases. In the ABC model, the artificial bees update only one design parameter of the optimization problem per phase, using interaction among the bees; this updating scheme causes slow convergence to the global or a near-global optimum. In order to accelerate the convergence of the method, a control parameter (modification rate, MR) has been proposed for ABC, but that approach is based on updating more than one design parameter. In this study, we instead add directional information to the ABC algorithm. The performance of the proposed approach is examined on nine well-known numerical benchmark functions, and the obtained results are compared with the basic ABC and ABC with MR. The experimental results show that the proposed approach is an effective method for solving numerical benchmark functions and is successful in terms of solution quality, robustness and convergence to the global optimum.

© 2014 Elsevier B.V. All rights reserved.

1. Introduction

Swarm intelligence is a subfield of artificial intelligence, and swarm intelligence algorithms have been developed by taking inspiration from the natural behavior of real ants [1], bees, birds, fish [2], etc. The artificial bee colony algorithm is one of the swarm intelligence algorithms and was developed from the waggle-dance and foraging behaviors of real honey bee colonies [3]. In nature, honey bees search for and forage food sources around the hive and share position information about those food sources. The honey bees engaged in foraging are divided into three groups. The first group is the employed bees, which carry nectar from a food source to the hive along with position information about the food source. The second group consists of onlooker bees, which forage food sources by considering the information shared by the employed bees. The last group is the scout bees; 5–10% of a bee population consists of scout bees [3,4], which search for new food sources around the hive and share the positions of newly found sources with the other bees. To share information, the bees perform a waggle dance in the dance area of the hive, and the duration and vigor of the dance depend on the amount of food at the source and its distance from the hive.

Karaboga [3] used the aforementioned natural behaviors of real honey bees to develop the artificial bee colony algorithm and to solve numerical optimization problems. In the ABC algorithm, half of the population initially consists of scout bees. For each scout bee, a new food source, which is a candidate solution of the optimization problem, is generated. After the new food source positions are generated, all the scout bees become employed bees, and all employed bees try to improve their food sources through interaction with each other. If a food source cannot be improved within a certain number of trials, given by the control parameter limit, the employed bee of this food source becomes a scout bee. For this scout bee a new food source is produced, and the scout bee becomes an employed bee again. The onlooker bees wait in the hive for the employed bees to share food source positions. After the employed bees share position information about the food sources, each onlooker bee selects one of the food source positions and tries to improve it.

ABC is an iterative algorithm, and at each iteration each employed or onlooker bee updates only one design parameter of the optimization problem; updating only one design parameter causes slow convergence of the algorithm. To overcome this issue, Akay and Karaboga [5] proposed a control parameter called the modification rate (MR). In this work, we use directional information for each design parameter to cope with the slow convergence of the algorithm, and the performance of the proposed approach is investigated on well-known numerical benchmark optimization problems.

The paper is organized as follows: Section 1 introduces the study and gives a literature review of the artificial bee colony algorithm.

∗ Corresponding author. Tel.: +90 332 223 1992; fax: +90 332 223 2106.
E-mail address: [email protected] (M.S. Kıran).

http://dx.doi.org/10.1016/j.asoc.2014.10.020
1568-4946/© 2014 Elsevier B.V. All rights reserved.

The basic ABC algorithm and its modifications are explained in Section 2, the experiments and experimental results are presented in Section 3, the study is discussed in Section 4, and finally the conclusions and future works are given in Section 5.

1.1. Literature review

The ABC algorithm was first introduced in 2005, and its performance was analyzed on three numerical problems [3]. The ABC algorithm was developed for solving numerical optimization problems [6], and proposed modifications and improvements of the method have also been tested on numerical problems. We give a literature review based on modifications and improvements of the method; studies on applications and hybridizations that use the basic ABC algorithm can be found in a comprehensive literature review on ABC in [6]. The performance of ABC has been investigated on numerical benchmark functions in [7–10]. Akay and Karaboga [5] introduced a modified version of ABC and used it for real-parameter optimization. In the modified ABC, they added two new control parameters: the modification rate (MR), used to increase the convergence rate of ABC, and the scaling factor (SF), used to control the magnitude of the perturbation. Dongli et al. [11] proposed three modified versions of ABC in order to obtain better-quality results for optimization problems: in the first modification, the neighborhood structure in the solution-updating equation of ABC is changed; in the second, a new selection equation is proposed for the onlooker bees to choose an employed bee; and the last modified version combines modifications #1 and #2. Tsai et al. [12] proposed a model based on ABC that employs the Newtonian law of universal gravitation in the onlooker bee phase. Alatas [13] proposed an ABC model that uses chaotic maps for parameter adaptation in order to prevent ABC from getting stuck in local minima. Zhu and Kwong [14] modified the ABC algorithm (named GABC) by appending the global best information of the population to the exploitation equation of ABC in order to increase its exploitation ability. Gao and Liu [15] modified the search equation of the basic ABC using chaotic systems and opposition-based learning methods and applied the modified ABC (called MABC) to 28 benchmark functions. Banharnsakun et al. [16] improved the convergence of ABC toward the global optimum by using best-so-far selection for the onlooker bees, and they tested the performance of their method on numerical benchmark functions and image registration. Rosenbrock's rotational direction method, which was designed to cope with specific features of "Rosenbrock's banana function", was applied to ABC in order to increase the exploitation and local search abilities of the basic ABC [17]. Karaboga and Akay [18] adapted the basic ABC for constrained optimization problems by using Deb's rules and evaluated the performance of the adapted model on 13 constrained optimization problems from the literature. Kıran and Gündüz [19] proposed a crossover-based improvement for neighbor bee selection in the onlooker bee phase of the basic ABC algorithm. Horng [20] proposed maximum entropy thresholding based on ABC for image segmentation, and Omkar et al. [21] presented vector evaluated ABC (VEABC) for multi-objective design optimization of laminated composite components and compared the performance of VEABC with other population-based methods. Liu et al. [22] published a variant of the ABC algorithm improved by mutual learning, which tunes the produced candidate food source using the higher-fitness individual of two individuals selected by a mutual learning factor. Gao et al. [23] proposed two ABC-based algorithms that use two update rules of differential evolution (DE), called ABC/best/1 and ABC/best/2. These global-best-based ABC methods also use chaotic initialization in order to distribute the agents properly over the search space, and their performance and accuracy were examined on 26 numerical benchmark functions [23]. Because of the premature convergence and trapping in local minima of ABC/best/1, Gao and Liu [24] proposed to use the update rule of the ABC/best/1 algorithm for the employed bees and the update rule of the basic ABC for the onlooker bees in order to reinforce the exploration ability of the method, and they tested this ABC variant on 28 numerical benchmark functions. In another study, Gao et al. [25] defined a new update rule for the ABC algorithm that uses random solutions to obtain the candidate solution. The new update rule resembles the crossover operator of genetic algorithms, and the method is named CABC. In that study, an orthogonal learning strategy is also proposed for several ABC methods, namely basic ABC (OABC), GABC (OGABC) and CABC (OCABC), and their accuracy and performance are examined on numerical benchmark functions and compared with other nature-inspired optimization algorithms.

When the literature is analyzed, it is seen that most papers on improving the ABC algorithm modify its update rule and try to improve the local search capability of the method. In contrast, we propose the same update rule with a small modification in order to improve the convergence characteristics of the basic ABC algorithm rather than its local search capability. It should nevertheless be mentioned that this modification provides local search capability in addition to improving the convergence characteristics of the basic ABC algorithm.

2. ABC algorithm

By simulating the intelligent behavior of real honey bee colonies, the ABC algorithm tries to find a global optimum or near-optimum solution for optimization problems. In the ABC algorithm, the number of food sources equals the number of employed bees, and the number of employed bees equals the number of onlooker bees. All the employed bees are scout bees at the start of the algorithm, and a food source position is produced for each scout bee using Eq. (1):

P_ij = X_j^min + r × (X_j^max − X_j^min),  i = 1, 2, ..., NE and j = 1, 2, ..., D   (1)

where P_ij is the jth dimension of the ith food source, which will be assigned to the ith employed bee; X_j^min and X_j^max are the lower and upper bounds of the jth dimension, respectively; r is a random number in [0,1]; NE is the number of employed bees; and D is the dimensionality (the number of decision variables) of the problem or function being optimized.

After a food source position is produced for each scout bee, all the scout bees become employed bees. The qualities of the food sources of the employed bees are measured using Eq. (2):

fit_i = 1/(1 + f_i)    if f_i ≥ 0
fit_i = 1 + abs(f_i)   if f_i < 0   (2)

where fit_i is the fitness of the ith food source and f_i is the objective function value specific to the optimization problem. In addition, a trial counter is defined and reset for each food source, and a limit value for the population is set during the initialization of the algorithm.

The employed bees search around their own food sources for new food sources. A new food source position around the food source of an employed bee is obtained as follows:

V_ij = S_ij + ϕ × (S_ij − N_kj),  i = 1, 2, ..., NE,  k ∈ {1, 2, ..., NE} and j ∈ {1, 2, ..., D}   (3)

where V is the candidate food source position produced for food source position S, N_k is the randomly selected neighbor food source of food source S, and ϕ is a random number in the range [−1,1]. Note that only one dimension of the food source position

Initialization Phase
  Determine the number of food sources.
  Define the limit for the population.
  Produce the food sources using Eq. (1).
  Define trial counters for the food sources.
  Assign the food sources to the employed bees.
  Calculate the fitness of the food sources using Eq. (2).
Employed Bee Phase
  For each employed bee:
    Produce a new food source position using Eq. (3).
    Calculate the fitness of the candidate food source.
    If the fitness of the candidate food source is better than the old one, memorize the new position and reset the trial counter; otherwise increase the trial counter by 1.
Onlooker Bee Phase
  Calculate the selection probabilities of the employed bees using Eq. (4).
  For each onlooker bee:
    Produce a new food source position using Eq. (3).
    Calculate the fitness of the candidate food source.
    If the fitness of the candidate food source is better than the old one, memorize the new position and reset the trial counter; otherwise increase the trial counter by 1.
  Save the best solution obtained so far.
Scout Bee Phase
  If a scout bee occurs:
    Produce a new food source position using Eq. (1).
    Calculate the fitness of the produced food source position using Eq. (2).
    Reset its trial counter.
IF a termination condition is met THEN report the best solution
ELSE go to Employed Bee Phase

Fig. 1. The ABC algorithm.
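The phases of Fig. 1 can be condensed into a few dozen lines. The following is a minimal, illustrative Python sketch of the loop (function and variable names are our own, not from the paper), assuming a minimization problem with a single shared bound for all dimensions:

```python
import random

def fitness(f):
    # Eq. (2): map an objective value f to a fitness that is maximized
    return 1.0 / (1.0 + f) if f >= 0 else 1.0 + abs(f)

def abc_minimize(obj, dim, lo, hi, n_employed=20, limit=None, mcn=500, seed=0):
    """One possible reading of Fig. 1 for minimizing obj over [lo, hi]^dim."""
    rng = random.Random(seed)
    if limit is None:
        limit = n_employed * dim  # Eq. (7)
    # Initialization phase: scatter food sources with Eq. (1)
    foods = [[lo + rng.random() * (hi - lo) for _ in range(dim)]
             for _ in range(n_employed)]
    fits = [fitness(obj(s)) for s in foods]
    trials = [0] * n_employed
    best = min(foods, key=obj)[:]

    def try_neighbor(i):
        # Eq. (3): perturb one randomly chosen dimension toward/away from
        # a randomly chosen neighbor, then apply greedy selection
        j = rng.randrange(dim)
        k = rng.randrange(n_employed)
        while k == i:
            k = rng.randrange(n_employed)
        v = foods[i][:]
        v[j] = foods[i][j] + rng.uniform(-1.0, 1.0) * (foods[i][j] - foods[k][j])
        v[j] = min(max(v[j], lo), hi)
        fv = fitness(obj(v))
        if fv > fits[i]:
            foods[i], fits[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(mcn):
        for i in range(n_employed):              # employed bee phase
            try_neighbor(i)
        total = sum(fits)                        # onlooker bee phase, Eq. (4)
        for _ in range(n_employed):
            r, acc, i = rng.random() * total, 0.0, 0
            while i < n_employed - 1:            # roulette-wheel selection
                acc += fits[i]
                if acc >= r:
                    break
                i += 1
            try_neighbor(i)
        worst = max(range(n_employed), key=lambda i: trials[i])
        if trials[worst] > limit:                # scout bee phase (one per cycle)
            foods[worst] = [lo + rng.random() * (hi - lo) for _ in range(dim)]
            fits[worst] = fitness(obj(foods[worst]))
            trials[worst] = 0
        cand = min(foods, key=obj)               # memorize the best-so-far solution
        if obj(cand) < obj(best):
            best = cand[:]
    return best
```

With, e.g., the Sphere function, `abc_minimize(lambda x: sum(v * v for v in x), 5, -100.0, 100.0)` drives the objective toward zero over the cycles.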

S is updated at each iteration, and that dimension is randomly selected. The fitness of the candidate food source is obtained using Eq. (2); if the fitness of the candidate food source is better than that of the old one, the employed bee memorizes the new food source position and the trial counter of the food source is reset; otherwise the trial counter of the food source is increased by 1.

After the employed bees return to the hive, they share their food source positions with the onlooker bees. An onlooker bee selects an employed bee, whose food source position it memorizes in order to improve it, using the roulette-wheel selection mechanism given as follows:

p_i = fit_i / Σ_{j=1}^{NE} fit_j   (4)

where p_i is the probability of the ith employed bee being selected by an onlooker bee. Thereafter, the onlooker bee searches around the food source position of the employed bee using Eq. (3). If the fitness of the food source found by the onlooker bee is better than the fitness of the food source of the employed bee, the employed bee memorizes the food source position found by the onlooker bee and the trial counter of this food source is reset; otherwise the trial counter of the food source is increased by 1.

The occurrence of a scout bee in ABC depends on the limit and the trial counters of the food sources. After the onlookers' search, the maximum trial counter (H) is determined, and if H is higher than the limit, a new food source position is produced for the corresponding bee using Eq. (1) and its trial counter is reset. Note that only one scout bee can occur at each ABC iteration.

The ABC algorithm is iterative and consists of four sequentially executed phases: initialization, employed bee, onlooker bee and scout bee. To terminate the algorithm, a maximum iteration number, meeting an error tolerance, etc., can be used. The detailed ABC algorithm is shown in Fig. 1.

2.1. ABC algorithm with the MR control parameter

In order to increase the convergence rate of the method, Akay and Karaboga [5] proposed a control parameter named the modification rate (MR). In the basic ABC, only one dimension of the food source position is updated by an employed or onlooker bee, but in ABC with MR (called ABC_MR), whether a dimension will be updated is decided using the MR value, which is a number in the range [0,1]. Using the MR parameter, Eq. (3) is changed as follows:

V_ij = S_ij + ϕ × (S_ij − N_kj)   if R_ij < MR
V_ij = S_ij                       otherwise   (5)

where R_ij is a random number produced in the range [0,1]. If the random number is less than MR, dimension j is modified; at least one dimension is always updated using Eq. (3). A lower value of MR may cause the solutions to improve slowly, while a higher value of MR can cause too much diversification in the population [5]. Therefore, following [5], we used the values 0.3 and 0.7 for MR in the experiments.

2.2. Directed ABC (dABC)

The search around a food source in the basic ABC is fully random in terms of direction, because ϕ is a random number in [−1,1]. This undirected search causes slow convergence of the algorithm to the optimum or a near optimum. Therefore, we add direction information for each dimension of each food source position. Using this direction information for the dimensions, Eq. (3) is modified as follows:

V_ij = S_ij + ϕ × (S_ij − N_kj)       if d_ij = 0
V_ij = S_ij + r × abs(S_ij − N_kj)    if d_ij = 1
V_ij = S_ij − r × abs(S_ij − N_kj)    if d_ij = −1   (6)

where abs is the absolute-value function and d_ij is the direction information for the jth dimension of the ith food source position; while ϕ is a random number in the range [−1,+1], r is a random number produced in the range [0,1].
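The per-dimension gating of Eq. (5) can be sketched as follows. This is an illustrative reading of the ABC_MR candidate construction (names are ours, not from the paper), with the at-least-one-dimension guarantee implemented explicitly:

```python
import random

def abc_mr_candidate(s, neighbor, mr, rng=random):
    """Build a candidate from food source s (Eq. (5)): each dimension j is
    perturbed toward/away from a randomly chosen neighbor when R_ij < MR.
    To mimic the ABC_MR behavior, at least one dimension is always updated."""
    v = list(s)
    touched = False
    for j in range(len(s)):
        if rng.random() < mr:               # R_ij < MR: modify dimension j
            phi = rng.uniform(-1.0, 1.0)
            v[j] = s[j] + phi * (s[j] - neighbor[j])
            touched = True
    if not touched:                         # fall back to the basic ABC move
        j = rng.randrange(len(s))
        v[j] = s[j] + rng.uniform(-1.0, 1.0) * (s[j] - neighbor[j])
    return v
```

With MR = 0 this reduces to the single-dimension move of the basic ABC; with MR = 1 every dimension is perturbed, which illustrates the diversification effect mentioned above.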

Eq. (6) identifies the direction of the search. At the initialization of the algorithm, the direction information of all dimensions equals 0. If the new solution obtained by Eq. (6) is better than the old one (the better solution is determined by using the fitness values of the old and new solutions via Eq. (2)), the direction information is updated: if the previous value of the dimension is less than the current value, the direction information of this dimension is set to −1; otherwise it is set to 1. If the new solution obtained by Eq. (6) is worse than the old one, the direction information of the dimension is set to 0. In this way, the direction information of each dimension of each food source position is used, and the local search capability and convergence rate of the algorithm are improved. This situation is also shown by an illustrative example in Fig. 2. As Fig. 2 shows, without direction information a worse value can be obtained for Fx, because an undirected search is performed over the whole search field for Fx; if we use direction information for Fx, the search tends toward the optimum.

Fig. 2. An illustrative example of using direction information. (The figure places the optimum, the neighbor food source position Nx of bee N and the food source position Fx of bee X on a line; without direction information the search field for Fx extends to both sides of Fx, whereas with direction information it is restricted to the side toward the optimum.)

3. Experiments

The performance of the method is investigated on nine well-known benchmark functions taken from [5] and collected from the literature. The experiments were conducted on an IBM-compatible PC with a 3.01 GHz CPU and 4 GB RAM, on the Matlab® 7.04 platform. Each experiment is repeated 30 times with random seeds, and the best, worst, mean and standard deviations are reported in the comparisons.

3.1. Benchmark functions

The benchmark functions are given in Table 1. D, C, Range and f(x*) in Table 1 are the dimension, characteristics, lower and upper bounds of the search space, and global minimum value of each function, respectively. The numerical functions used in the experiments have certain characteristics. If a function has more than one local minimum, it is called multimodal (M); multimodal functions such as Rastrigin and Griewank test the search ability of the algorithms. Unimodal functions (U) such as Sphere have only one local optimum, which is the global optimum; the exploitation ability of the algorithms is examined on this kind of function. If a function of n variables can be written as a sum of n functions of one variable, it is called separable (S) (e.g. Sphere, Rastrigin). Non-separable (N) functions such as Rosenbrock cannot be written in this form, because there is interrelation among their variables; therefore, optimizing non-separable functions is more difficult than optimizing separable ones. The dimensionality of the search space is also an important issue for the algorithms [9,26]. If the global optimum of the function lies in a narrow curving valley, as in Rosenbrock's banana function, the methods must follow the direction changes of the function. In the experiments, we investigated and compared the performance of the methods on the numerical functions with 10, 30 and 50 dimensions.

3.2. Setting control parameters for the methods

In order to make a clear and consistent comparison, the control parameter values of the methods are set equal to each other. Akay and Karaboga [27] showed that there is no need for a huge colony size in the basic ABC algorithm; therefore, the population size is taken as 40 in the experiments. The limit value, a control parameter specific to ABC algorithms, is calculated for the population as follows [9]:

limit = NE × D   (7)

where limit is used for controlling the occurrence of scout bees, NE is the number of food sources (equivalently, employed bees), and D is the dimensionality of the optimization problem. By using Eq. (7), the occurrence of scout bees is properly controlled, depending on the population size and the dimensionality of the problem.
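The directed update rule of Eq. (6), together with the direction bookkeeping described in Section 2.2, can be sketched as follows. This is an illustrative reading (function names are ours), and `update_direction` follows the sign convention exactly as stated in the text:

```python
import random

def dabc_move(s_ij, n_kj, d_ij, rng=random):
    """Directed candidate for one dimension (Eq. (6))."""
    if d_ij == 0:
        # no direction learned yet: undirected move, as in the basic ABC
        return s_ij + rng.uniform(-1.0, 1.0) * (s_ij - n_kj)
    step = rng.random() * abs(s_ij - n_kj)   # r in [0,1]
    return s_ij + step if d_ij == 1 else s_ij - step

def update_direction(old_val, new_val, improved):
    """Direction bookkeeping described in the text: reset to 0 (undirected)
    when the move did not improve the solution; otherwise set the direction
    from the comparison of the previous and current values of the dimension
    (previous < current -> -1, else 1, per the paper's stated rule)."""
    if not improved:
        return 0
    return -1 if old_val < new_val else 1
```

In a full dABC loop, `dabc_move` would replace the Eq. (3) perturbation, and `update_direction` would be called for the modified dimension after the greedy comparison of the old and candidate solutions.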

Table 1
Benchmark functions used in the experiments. (C: characteristics; f(x*): global minimum.)

F1-Sphere (C: US, f(x*) = 0, Range: [−100,100]^D): f(x) = Σ_{i=1}^{D} x_i^2
F2-Rosenbrock (C: UN, f(x*) = 0, Range: [−2.048,2.048]^D): f(x) = Σ_{i=1}^{D−1} [100(x_{i+1} − x_i^2)^2 + (x_i − 1)^2]
F3-Ackley (C: MN, f(x*) = 0, Range: [−32.768,32.768]^D): f(x) = −20 exp(−0.2 √((1/D) Σ_{i=1}^{D} x_i^2)) − exp((1/D) Σ_{i=1}^{D} cos(2πx_i)) + 20 + e
F4-Griewank (C: MN, f(x*) = 0, Range: [−600,600]^D): f(x) = (1/4000) Σ_{i=1}^{D} x_i^2 − Π_{i=1}^{D} cos(x_i/√i) + 1
F5-Weierstrass (C: MN, f(x*) = 0, Range: [−0.5,0.5]^D): f(x) = Σ_{i=1}^{D} Σ_{k=0}^{kmax} [a^k cos(2πb^k (x_i + 0.5))] − D Σ_{k=0}^{kmax} [a^k cos(2πb^k · 0.5)], with a = 0.5, b = 3, kmax = 20
F6-Rastrigin (C: MS, f(x*) = 0, Range: [−5.12,5.12]^D): f(x) = Σ_{i=1}^{D} [x_i^2 − 10 cos(2πx_i) + 10]
F7-Non-Cont. Rastrigin (C: MS, f(x*) = 0, Range: [−5.12,5.12]^D): f(x) = Σ_{i=1}^{D} [y_i^2 − 10 cos(2πy_i) + 10], where y_i = x_i if |x_i| < 1/2 and y_i = round(2x_i)/2 if |x_i| ≥ 1/2
F8-Schwefel (C: MN, f(x*) = 0, Range: [−500,500]^D): f(x) = 418.9829 × D − Σ_{i=1}^{D} x_i sin(√|x_i|)
F9-Sumsquares (C: US, f(x*) = 0, Range: [−10,10]^D): f(x) = Σ_{i=1}^{D} i x_i^2
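For reference, a few of the functions in Table 1 written out in Python, assuming the conventional definitions with the 2π factors that the PDF extraction dropped:

```python
import math

def sphere(x):       # F1, unimodal separable, global minimum f(0) = 0
    return sum(v * v for v in x)

def rosenbrock(x):   # F2, unimodal non-separable, minimum at (1, ..., 1)
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))

def rastrigin(x):    # F6, multimodal separable, minimum at the origin
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)

def griewank(x):     # F4, multimodal non-separable, minimum at the origin
    s = sum(v * v for v in x) / 4000.0
    p = math.prod(math.cos(v / math.sqrt(i + 1)) for i, v in enumerate(x))
    return s - p + 1.0
```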
The ABC_MR algorithm with a lower MR value behaves like the basic ABC algorithm, while the ABC_MR algorithm with a higher MR value causes more diversification in the population [5]. Therefore, we used the values 0.3 and 0.7 for the MR parameter in the experiments.

The termination condition used for the methods is a maximum cycle number (MCN); MCN is taken as 500, 1000 and 1500 for the 10, 30 and 50-dimensional numerical functions, respectively.

3.3. Comparison of the methods

Comparisons of the basic ABC, ABC_MR and dABC algorithms are given in Tables 2–4. Based on these comparisons, the dABC algorithm is very effective for solving numerical functions.

While the performance of all the methods decreases as the dimensionality of the functions increases, the comparison shows that the dABC algorithm is better than the other methods in terms of solution quality and robustness, considering the mean results and standard deviations. The Wilcoxon non-parametric signed-rank statistical test with a 0.05 p-value is applied to the results of the 30 independent runs, and the statistical test results are shown in Tables 5–7 for the 10, 30 and 50-dimensional functions, respectively. According to the statistical test results, the proposed method is significantly different from the basic ABC or the other ABC variants in most cases. The convergence graphs of the methods (Figs. 3–10) are also plotted for the 30-D functions, and the convergence rate of the dABC algorithm is better than that of the other methods except on the Weierstrass function. In addition, the experimental results and convergence graphs of ABC_MR show that the value 0.3 for the MR parameter is more appropriate than 0.7.

Fig. 3. The convergence graph of the methods on the Sphere function with 30-D.

Table 2
The best, worst, mean and standard deviations of results obtained by 30 independent runs on numeric functions (F1–F9, columns Best/Worst/Mean/Std. Dev. for basic ABC, ABC_MR (MR = 0.3), ABC_MR (MR = 0.7) and dABC). D: 10 and MCN: 500. [The individual cell values could not be recovered from the extraction.]

4. Discussion

Swarm intelligence-based optimization methods start with random initial solutions to search the solution space. To obtain an optimum or near-optimum solution, the interactions between the agents in the population are used. In the ABC algorithm, the dance behavior is performed for sharing position information about the food sources. Here, direction information is an important factor for finding a good solution, although the basic ABC algorithm is undirected and this information is not shared in the artificial hive of ABC. This issue causes slow convergence and decreases the local search ability of the basic ABC algorithm. In this work, direction information is defined for each dimension of each food source position, improving the method in terms of solution quality,
Table 3
The best, worst, mean and standard deviations of results obtained by 30 independent runs on numeric functions. D:30 and MCN: 1000.

Function Basic ABC ABCMR (MR = 0.3) ABCMR (MR = 0.7) dABC

Best Worst Mean Std. Dev. Best Worst Mean Std. Dev. Best Worst Mean Std. Dev. Best Worst Mean Std. Dev.

F1 1.88E−10 2.53E−08 3.14E−09 5.38E−09 1.85E−13 3.02E−12 8.93E−13 5.42E−13 5.06E−09 8.37E−08 3.20E−08 2.21E−08 9.53E−16 5.98E−15 1.92E−15 1.08E−15
F2 1.77E−01 2.79E+01 1.79E+01 7.44E+00 2.16E+01 2.65E+01 2.45E+01 1.57E+00 2.07E+01 2.88E+01 2.59E+01 1.42E+00 1.32E−01 2.40E+01 1.02E+01 7.35E+00
F3 4.14E−06 6.58E−05 2.36E−05 1.48E−05 1.37E−07 5.94E−07 2.69E−07 9.20E−08 2.44E−05 1.24E−04 5.46E−05 2.61E−05 1.35E−08 1.32E−07 6.76E−08 3.26E−08
F4 2.23E−10 2.33E−02 3.03E−03 7.13E−03 1.51E−10 4.65E−06 4.29E−07 1.10E−06 3.88E−07 8.67E−02 4.67E−03 1.59E−02 1.89E−15 7.53E−03 2.59E−04 1.35E−03
F5 4.16E−04 1.27E−03 7.82E−04 2.20E−04 8.88E−07 7.55E−06 2.90E−06 1.75E−06 7.39E−04 2.04E−03 1.33E−03 3.67E−04 2.13E−05 6.19E−05 3.87E−05 1.17E−05
F6 5.97E−09 1.99E+00 2.98E−01 5.03E−01 4.60E+01 7.62E+01 6.28E+01 7.13E+00 1.40E+02 1.77E+02 1.58E+02 8.85E+00 1.57E−10 1.25E+00 2.42E−01 4.39E−01
F7 1.10E−08 2.37E+00 1.14E+00 8.46E−01 3.24E+01 5.29E+01 4.32E+01 4.94E+00 1.04E+02 1.65E+02 1.38E+02 1.25E+01 3.57E−08 3.03E+00 1.01E+00 9.66E−01

F8 1.19E+02 5.94E+02 3.99E+02 1.29E+02 1.88E+03 3.43E+03 2.78E+03 4.09E+02 4.51E+03 6.79E+03 5.89E+03 5.06E+02 1.18E−01 5.97E+02 3.68E+02 1.42E+02
F9 1.75E−11 2.97E−10 9.11E−11 7.67E−11 1.51E−14 2.62E−13 1.02E−13 7.65E−14 5.93E−10 7.05E−09 2.71E−09 1.50E−09 5.54E−16 1.20E−15 8.62E−16 1.53E−16

Table 4
The best, worst, mean and standard deviations of results obtained by 30 independent runs on numeric functions. D:50 and MCN: 1500.

Function Basic ABC ABCMR (MR = 0.3) ABCMR (MR = 0.7) dABC

Best Worst Mean Std. Dev. Best Worst Mean Std. Dev. Best Worst Mean Std. Dev. Best Worst Mean Std. Dev.

F1 4.08E−09 3.46E−07 4.75E−08 6.48E−08 3.22E−09 1.83E−08 7.76E−09 3.67E−09 5.09E−03 2.37E−02 1.33E−02 4.60E−03 1.76E−14 2.91E−12 5.32E−13 6.20E−13
F2 2.45E+01 9.05E+01 4.55E+01 1.48E+01 4.29E+01 5.13E+01 4.53E+01 1.63E+00 4.72E+01 1.40E+02 6.74E+01 2.47E+01 9.25E+00 8.78E+01 4.02E+01 2.20E+01
F3 4.62E−05 5.71E−04 2.24E−04 1.06E−04 1.23E−05 4.16E−05 2.46E−05 6.48E−06 1.34E−02 4.94E−02 2.97E−02 8.50E−03 3.53E−07 3.47E−06 1.60E−06 7.61E−07
F4 5.14E−09 1.68E−02 9.00E−04 3.46E−03 2.78E−08 3.75E−05 5.46E−06 7.92E−06 1.10E−02 2.59E−01 7.40E−02 5.80E−02 5.99E−13 1.20E−06 4.56E−08 2.15E−07
F5 2.07E−03 6.57E−03 4.19E−03 9.68E−04 4.00E−04 1.09E−03 7.55E−04 1.76E−04 1.61E−01 3.36E−01 2.11E−01 3.98E−02 1.82E−04 6.93E−04 4.53E−04 1.10E−04
F6 1.51E−07 4.12E+00 1.86E+00 1.30E+00 1.82E+02 2.26E+02 2.09E+02 1.26E+01 2.90E+02 4.08E+02 3.65E+02 2.70E+01 1.73E−07 4.98E+00 1.62E+00 1.36E+00
F7 2.72E−01 7.30E+00 4.19E+00 1.76E+00 1.23E+02 1.83E+02 1.60E+02 1.74E+01 2.52E+02 3.86E+02 3.43E+02 3.05E+01 1.00E+00 7.00E+00 3.56E+00 1.76E+00
F8 5.50E+02 1.30E+03 1.02E+03 1.80E+02 6.96E+03 8.87E+03 8.00E+03 4.26E+02 9.54E+03 1.29E+04 1.16E+04 1.07E+03 5.43E+02 1.25E+03 9.66E+02 2.07E+02
F9 1.06E−09 1.17E−07 1.34E−08 2.10E−08 6.19E−10 3.01E−09 1.42E−09 5.37E−10 8.00E−04 4.62E−03 2.02E−03 9.85E−04 5.47E−15 2.79E−13 3.80E−14 5.09E−14
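The "+/−" entries in Tables 5–7 come from a Wilcoxon signed-rank test at the 0.05 level over the 30 paired runs. A pure-Python sketch using the normal approximation is given below (the paper does not state which implementation was used; the function names are ours):

```python
import math

def wilcoxon_signed_rank_z(a, b):
    """Normal-approximation z statistic of the paired Wilcoxon signed-rank
    test: zero differences are dropped and tied |differences| receive
    their average rank."""
    diffs = [x - y for x, y in zip(a, b) if x != y]
    n = len(diffs)
    if n == 0:
        return 0.0
    ranked = sorted(range(n), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and abs(diffs[ranked[j + 1]]) == abs(diffs[ranked[i]]):
            j += 1
        avg = (i + j) / 2.0 + 1.0           # average rank of the tied group
        for k in range(i, j + 1):
            ranks[ranked[k]] = avg
        i = j + 1
    w_plus = sum(r for r, d in zip(ranks, diffs) if d > 0)
    mean = n * (n + 1) / 4.0
    sd = math.sqrt(n * (n + 1) * (2 * n + 1) / 24.0)
    return (w_plus - mean) / sd

def significantly_different(a, b, z_crit=1.96):
    # two-sided test at alpha = 0.05 via the normal approximation
    return abs(wilcoxon_signed_rank_z(a, b)) > z_crit
```

For the 30-run samples used here, the normal approximation is adequate; for small run counts an exact-distribution implementation from a statistics package would be preferable.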


Fig. 4. The convergence graph of the methods on the Rosenbrock function with 30-D.

Fig. 5. The convergence graph of the methods on the Ackley function with 30-D.

Fig. 6. The convergence graph of the methods on the Griewank function with 30-D.

Fig. 7. The convergence graph of the methods on the Weierstrass function with 30-D.

Fig. 8. The convergence graph of the methods on the Rastrigin function with 30-D.

Fig. 9. The convergence graph of the methods on the Non-continuous Rastrigin function with 30-D.

robustness and convergence characteristics. The ABC_MR method uses a technique based on updating more than one decision variable (dimension) at each iteration of ABC in order to improve the convergence and local search characteristics of the basic ABC algorithm. Instead of updating more than one decision variable, the dABC method uses direction information to improve the convergence characteristics and local search capability of the basic ABC algorithm, and according to the experiments this mechanism is better than the other technique in terms of solution quality and convergence to the optima.

Table 5
Statistical significance test results among the ABC variants for the 10-dimensional functions.

Function  Basic ABC  ABC_MR (MR = 0.3)  ABC_MR (MR = 0.7)
F1        −          −                  −
F2        +          +                  +
F3        +          −                  −
F4        −          +                  +
F5        +          +                  NA
F6        +          +                  +
F7        +          +                  +
F8        +          +                  +
F9        −          +                  +

Table 6
Statistical significance test results among the ABC variants for the 30-dimensional functions.

Function  Basic ABC  ABC_MR (MR = 0.3)  ABC_MR (MR = 0.7)
F1        +          +                  +
F2        −          −                  −
F3        −          −                  −
F4        +          +                  +
F5        +          +                  +
F6        −          +                  +
F7        −          +                  +
F8        +          +                  +
F9        +          +                  +

5. Conclusion and future works

In this study, a new version of the basic ABC algorithm named dABC is described, and the performance of the proposed approach is investigated on numerical benchmark functions. The obtained results are compared with the basic ABC and ABC_MR algorithms, and the experimental results show that the dABC algorithm is better than the other methods in terms of solution quality and convergence characteristics. This originates from giving direction information to the bee population. The direction information is used to constrict the search space in order to obtain better solutions; the newly obtained solutions therefore guide the search process, and solution information about the search space is shared in the bee population. This version of the dABC algorithm uses the update rule of the basic ABC algorithm, and the other update rules proposed in the literature could replace the basic update rule. We will apply other update rules within the dABC algorithm in our future works.
Table 7
Statistical significant test results among the ABC variants for 50-dimensional
functions. References
Function Basic ABC ABCMR (MR = 0.3) ABCMR (MR = 0.7)
[1] M. Dorigo, V. Maniezzo, A. Colorni, Positive Feedback as a Search Strategy,
F1 + + + Technical Report No. 91-016, Politecnico di Milano, Italy, 1991.
F2 − − − [2] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proc. of IEEE Inter-
F3 + + + national Conference on Neural Networks, Piscataway, NJ, 1995, pp. 1942–1948.
F4 + + + [3] D. Karaboga, An Idea Based on Honey Bee Swarm for Numerical Optimization,
Technical Report – TR06, Erciyes University, Turkey, 2005.
F5 + + +
[4] T.D. Seeley, The Wisdom of the Hive, Harvard University Press, Cambridge, MA,
F6 − + +
1995.
F7 − + + [5] B. Akay, D. Karaboga, A modified artificial bee colony algorithm for real-
F8 + + + parameter optimization, Inf. Sci. 192 (2012) 120–142.
F9 + + + [6] D. Karaboga, B. Gorkemli, C. Ozturk, D. Karaboga, A comprehensive survey:
artificial bee colony (ABC) algorithm and applications, Artif. Intell. Rev. (2012),
http://dx.doi.org/10.1007/s10462-012-9328-0.
[7] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical func-
tion optimization: artificial bee colony (abc) algorithm, J. Glob. Optim. 39 (2007)
459–471.
[8] D. Karaboga, B. Basturk, On the performance of artificial bee colony (abc) algo-
rithm, Appl. Soft Comput. 8 (2008) 687–697.
[9] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm,
Appl. Math. Comput. 214 (2009) 108–132.
[10] D. Karaboga, B. Akay, Artificial bee colony (abc), harmony search and
bees algorithms on numerical optimization, in: 2009 Innovative Production
Machines and Systems Virtual Conference, 2009, http://conference.iproms.
org/conference/download/4153/76 (accessed 01.08.12).
[11] Z. Dongli, G. Xinping, T. Yinggan, T. Yong, Modified artificial bee colony algo-
rithms for numerical optimization, in: Proc. of 3rd International Workshop on
Intelligent Systems and Applications, 2011, pp. 1–4.
[12] P.-W. Tsai, J.-S. Pan, B.-Y. Liao, S.-C. Chu, Enhanced artificial bee colony opti-
mization, Int. J. Innov. Comput. Inf. Control 5 (2009) 5081–5092.
[13] B. Alatas, Chaotic bee colony algorithms for global numerical optimization,
Expert Syst. Appl. 37 (2010) 5682–5687.
[14] G. Zhu, S. Kwong, Gbest-guided artificial bee colony algorithm for numerical
function optimization, Appl. Math. Comput. 217 (2010) 3166–3173.
[15] W.-F. Gao, S.-Y. Liu, A modified artificial bee colony algorithm, Comput. Oper.
Res. 39 (2012) 687–697.
[16] A. Barnharnsakun, T. Achalakul, B. Sirinaovakul, The best-so-far selection in
artificial bee colony algorithm, Appl. Soft Comput. 11 (2011) 2888–2901.
[17] F. Kang, J. Lie, Z. Ma, Rosenbrock artificial bee colony algorithm for accurate
global optimization of numerical functions, Inf. Sci. 181 (2011) 3508–3531.
Fig. 10. The convergence graph of the methods on the Schwefel function with 30-D. [18] D. Karaboga, B. Akay, A modified artificial bee colony (ABC) algorithm for con-
strained optimization problems, Appl. Soft Comput. 11 (2011) 3021–3031.
[19] M.S. Kıran, M. Gündüz, A novel artificial bee colony-based algorithm for solving
dimension of each food source is described a field for direction the numerical optimization problems, Int. J. Innov. Comput. Inf. Control 9 (2012)
information and this is used for updating position of food sources. 6107–6121.
[20] M.-H. Horng, Multilevel thresholding selection based on the artificial bee
Obtained results show that the proposed approach is better than colony algorithm for image segmentation, Expert Syst. Appl. 38 (2011)
the other versions of ABC algorithm in terms of solution quality, 13785–13791.
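The three update mechanisms compared above can be sketched in Python. This is our illustrative reading, not the authors' published pseudocode: the function names (`abc_update`, `abcmr_update`, `dabc_update`) and the exact handling of the per-dimension direction field are assumptions, chosen only to make the contrast concrete.

```python
import random

def abc_update(x, x_k):
    """Basic ABC rule: perturb a single randomly chosen dimension
    of food source x toward/away from a neighbour x_k."""
    v = x[:]
    j = random.randrange(len(x))
    phi = random.uniform(-1.0, 1.0)
    v[j] = x[j] + phi * (x[j] - x_k[j])
    return v

def abcmr_update(x, x_k, mr=0.3):
    """ABCMR rule: perturb each dimension independently with
    probability MR, so several dimensions may change at once."""
    v = x[:]
    changed = False
    for j in range(len(x)):
        if random.random() < mr:
            phi = random.uniform(-1.0, 1.0)
            v[j] = x[j] + phi * (x[j] - x_k[j])
            changed = True
    if not changed:
        # fall back to the basic rule so at least one dimension moves
        return abc_update(x, x_k)
    return v

def dabc_update(x, x_k, direction):
    """Direction-guided rule (our reading of dABC): a stored
    direction field per dimension restricts the sign of phi."""
    v = x[:]
    j = random.randrange(len(x))
    if direction[j] > 0:        # last improvement moved this dimension up
        phi = random.uniform(0.0, 1.0)
    elif direction[j] < 0:      # last improvement moved it down
        phi = random.uniform(-1.0, 0.0)
    else:                       # no information yet: full range
        phi = random.uniform(-1.0, 1.0)
    v[j] = x[j] + phi * (x[j] - x_k[j])
    return v
```

The point of the contrast: ABCMR widens the move by touching more dimensions per iteration, while the direction-guided variant keeps the single-dimension move of basic ABC but halves the sampling interval of phi once an improving direction is known, which is one way to constrict the search space as described above.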

