Artificial Bee Colony Algorithm With Variable Search Strategy For Continuous Optimization

Information Sciences 300 (2015) 140–157
journal homepage: www.elsevier.com/locate/ins

Article history:
Received 23 June 2014
Received in revised form 17 December 2014
Accepted 28 December 2014
Available online 3 January 2015

Keywords:
Artificial bee colony
Continuous optimization
Search strategy
Integration

Abstract
The artificial bee colony (ABC) algorithm is a swarm-based optimization technique proposed for solving continuous optimization problems. The artificial agents of the ABC algorithm use one solution update rule during the search process. To efficiently solve optimization problems with different characteristics, we propose the integration of multiple solution update rules with ABC in this study. The proposed method uses five search strategies and counters to update the solutions. During initialization, each update rule has a constant counter content. During the search process performed by the artificial agents, these counters are used to determine the rule that is selected by the bees. Because the optimization problems and functions have different characteristics, one or more search strategies are selected and used during the iterations according to the characteristics of the numeric functions in the proposed approach. By using the search strategies and mechanisms proposed in the present study, the artificial agents learn which update rule is more appropriate, based on the characteristics of the problem, to find better solutions. The performance and accuracy of the proposed method are examined on 28 numerical benchmark functions, and the obtained results are compared with various classical versions of ABC and other nature-inspired optimization algorithms. The experimental results show that the proposed algorithm, integrated and improved with search strategies, outperforms the basic variants and other variants of the ABC algorithm and other methods in terms of solution quality and robustness for most of the experiments.
© 2015 Elsevier Inc. All rights reserved.
1. Introduction
In recent years, many swarm intelligence-based heuristic optimization techniques such as the ant colony optimization
(ACO) [24,25], particle swarm optimization (PSO) [23,33], artificial bee colony algorithm (ABC) [8], cuckoo search (CS)
[45], firefly algorithm (FA) [46], and artificial fish swarm algorithm (AFSA) [44] have been proposed in the literature. These
algorithms are generally biologically inspired by the social behaviors of insects, fish or birds. ABC, which is another biolog-
ically inspired optimization algorithm and the subject of this study, is based on the foraging and waggle dance behaviors of
the honey bee colonies. In the ABC algorithm, there are two types of bees in the hive: employed and unemployed. The unem-
ployed bees are further classified as onlooker and scout. The employed bees collect nectar from the food sources and share
the positions of the food sources with the unemployed bees. The onlooker bees search for new food sources based on the
information provided by the employed bees. An employed bee becomes a scout bee if a food source cannot be improved
⇑ Corresponding author. Tel.: +90 332 223 1992; fax: +90 332 241 0635.
E-mail address: [email protected] (M.S. Kiran).
http://dx.doi.org/10.1016/j.ins.2014.12.043
0020-0255/© 2015 Elsevier Inc. All rights reserved.
by the employed or onlooker bees in a predefined and reasonable time. In the ABC algorithm, the positions of the food
sources represent a possible solution for the optimization problem, and new food sources are searched by the bees. This
information is exchanged among the bees until a termination condition is met.
In ABC, the onlooker and employed bees use the same solution updating equation. The solution updating equation of the basic ABC has several issues, such as slow achievement of the optimal or near-optimal solution (slow convergence) and inefficiency during a local search on the solution space. To overcome these issues, we propose the integration of different search strategies within the setting of the ABC algorithm. To this aim, we present five solution update strategies, and a counter is defined for each strategy in this work. These counters are initialized with a constant integer, and each strategy has an equal chance of being selected by the employed or onlooker bees. When a strategy is selected by an employed or onlooker bee and the new solution obtained by that strategy is better than the previous one, its selection opportunity is improved by increasing its counter by one. Therefore, the appropriate solution search equation is maintained according to the characteristics of the numerical functions.
The rest of the paper is organized as follows. Section 1.1 presents a literature review of ABC algorithms, and the main contribution of this study is given in Section 1.2. Section 2 (Material and methods) describes the basic ABC concept and the proposed search strategies and mechanism for the ABC algorithm. The benchmark functions used in the experiments and an analysis of the update rules are given in Section 3. The performance of the proposed method is also investigated and compared with ABC variants on the benchmark functions in Section 3. The obtained results are discussed in Section 4. Finally, the study is concluded, and future directions for our work are given in Section 5.
1.1. Literature review

ABC was introduced by Karaboga and was inspired by the foraging and waggle dance behaviors of honey bee colonies [8]. Since its introduction in 2005, many ABC variants have been proposed in the literature; more than half of the studies published between 2005 and 2012 concern the bee colony optimization, honey bee mating optimization, bee algorithm or artificial bee colony algorithm, and they provide an improvement or application of the ABC algorithm [14]. The performance and accuracy of the ABC algorithm were analyzed by optimizing numerical benchmark functions [4,9,11,12]. Alatas [6] proposed a version of ABC to improve the convergence characteristics and to prevent the ABC from becoming stuck on local
solutions. To improve the exploitation ability of the ABC algorithm, Zhu and Kwong [18] added the global best information
in the bee population to the solution updating equation in the method called gbest-guided ABC (GABC). Karaboga and Akay
[5] defined a new parameter for the ABC algorithm called the modification rate (MR), which controls whether a dimension of the problem is updated, to increase the convergence rate of the algorithm. Banharnsakun et al. [1] improved the
capability of convergence of the ABC to a global optimum by using the best-so-far selection for onlooker bees, and they
tested the performance of their method on numerical benchmark functions and image registration. Liu et al. [47] presented
a version of the ABC algorithm that was improved by using mutual learning, which tunes the produced candidate food source
with the higher fitness between two individuals selected by a mutual learning factor. Gao et al. [38] proposed two ABC-based
algorithms that use two update rules of differential evolution (DE) called ABC/Best/1 and ABC/Best/2. The global best-based
ABC methods also use chaotic initialization to properly distribute the agents to the search space, and the performance and
accuracy of the methods are examined on 26 numerical benchmark functions [38]. To overcome premature convergence and
local minima issues, Gao and Liu [39] proposed the modified ABC (MABC), which uses the update rule of the ABC/Best/1 algorithm for the employed bees and the update rule of the basic ABC for the onlooker bees to reinforce the exploration ability of the method, and they tested MABC on 28 numerical benchmark functions. In another study, Gao et al. [40] defined a new update rule based on random solutions while the candidate solution is obtained; the new rule resembles the crossover operator of the genetic algorithm (GA), and the resulting method is named CABC. In the same study, the orthogonal learning strategy is proposed for
ABC methods such as the basic ABC (OABC), GABC (OGABC), and CABC (OCABC), and their accuracy and performance are examined
on numerical benchmark functions and are compared with evolutionary algorithms (EAs), differential evolution variants
(DEs) and particle swarm optimization variants (PSOs). Karaboga and Gorkemli [13] proposed a new update rule for onlooker
bees in the hive to improve the local search and convergence characteristics of the standard ABC algorithm. Inspired by PSO,
Imanian et al. [30] changed the update rule of the basic ABC algorithm to increase the convergence speed of the basic ABC
algorithm for solving high dimensional, continuous optimization problems. Wang et al. [20] proposed the MEABC algorithm
to improve the local and global search capability of the basic ABC algorithm, and they tested the performance of MEABC on
basic, shifted and rotated benchmark functions. In another study, the ABC and bee algorithm are integrated to solve six con-
strained optimization benchmark problems [21]. Kiran and Findik [28] developed a simple version of the basic ABC algo-
rithm by using direction information regarding the solutions to improve the convergence characteristics of the basic
algorithm. Mansouri et al. combined the bisection method with ABC for finding the fixed point of a nonlinear function
[31]. Gao et al. proposed two new search equations for the basic ABC algorithm to balance exploration and exploitation
on the search space [41].
In addition to modifications and improvements of ABC, ABC has been applied to solve many optimization problems such
as the image segmentation [29], synthetic aperture radar image segmentation [26], multi-objective design optimization of
laminated composite components [35], in-core fuel management optimization [22], parametric optimization of non-tradi-
tional machining processes [34], wireless sensor network routing [16], leaf-constrained minimum spanning tree problem
[3], reliability redundancy allocation problems [42], optimum design of geometrically non-linear steel frames [36], training
neural networks [10], clustering problems [2,15], minimization of weight in truss structures [27], optimal control of auto-
matic voltage regulator (AVR) systems [19], design of multiplier-less nonuniform filter bank transmultiplexers [37], optimal
design of electromagnetic devices [43], and optimal filter design [7].
1.2. Main contribution of the study

In this study, different search strategies are integrated within the concept of the ABC algorithm, and the performance and accuracy of the proposed method are analyzed on several numerical benchmark functions. The experimental results show that integrating multiple search strategies is a better option than any individual search strategy in the ABC concept: each search strategy contributes either to the local search ability or to the global search ability, so using different search equations balances the global and local search. In addition, the study offers researchers and practitioners a convenient means of method selection for continuous optimization within the framework of ABC.
2. Material and methods

This section explains the original ABC algorithm and the proposed search strategies and mechanisms for the ABC algorithm.
Inspired by the waggle dance and foraging behaviors of honey bee colonies, the ABC algorithm was developed, and its accuracy and performance were investigated on three numeric functions (the Sphere, Rosenbrock and Rastrigin functions). The basic ABC algorithm consists of four sequentially realized phases called the initialization, employed bee, onlooker bee and scout bee phases. In the initialization phase, the number of food sources, the termination condition, the limit parameter value that controls the occurrence of scout bees, and a counter for each food source are defined. The food sources, each of which is a possible solution for the optimization problem, are randomly produced on the solution space by using (1), and each food source is assigned to a different employed bee. Briefly, each employed bee has a food source, and the number of food sources is equal to the number of employed bees.
$$X_i^j = X_{\min}^j + r_i^j \left(X_{\max}^j - X_{\min}^j\right) \qquad (1)$$

where $X_i^j$ is the $j$th dimension of the $i$th solution, $X_{\max}^j$ and $X_{\min}^j$ are the upper and lower bounds for the $j$th dimension of the problem, and $r_i^j$ is a random number in the range of [0, 1]. After the food sources/initial solutions are assigned to the employed bees, the fitness of the solutions is calculated as follows:
$$fit_i(t) = \begin{cases} \dfrac{1}{1+f_i(t)} & \text{if } f_i(t) \ge 0 \\ 1 + \lvert f_i(t) \rvert & \text{otherwise} \end{cases} \qquad (2)$$

where $fit_i(t)$ is the fitness of the $i$th solution and $f_i(t)$ is the objective function value, which is specific to the optimization problem.
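As a concrete illustration, the random initialization of Eq. (1) and the fitness mapping of Eq. (2) can be sketched in Python as follows (a minimal sketch, not the authors' implementation; function and variable names are our own, and scalar bounds are used for simplicity):

```python
import random

def initialize_food_sources(sn, dim, lower, upper, rng=random):
    """Eq. (1): place SN food sources uniformly at random in the box
    [lower, upper]^dim (scalar bounds assumed for simplicity)."""
    return [[lower + rng.random() * (upper - lower) for _ in range(dim)]
            for _ in range(sn)]

def fitness(f_val):
    """Eq. (2): map an objective value to a fitness value so that
    smaller objective values receive larger fitness."""
    if f_val >= 0:
        return 1.0 / (1.0 + f_val)
    return 1.0 + abs(f_val)
```

Note that the fitness mapping is always positive, which is what makes the roulette-wheel selection of the onlooker bees well defined.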
In the employed bee phase, each employed bee attempts to improve its own solution as follows:

$$V_i^j(t) = X_i^j(t) + \phi_i^j \left(X_i^j(t) - X_k^j(t)\right) \qquad (3)$$

where $V_i$ is the candidate solution, $k$ is the index of a randomly selected neighbor solution ($k \neq i$), and $\phi_i^j$ is a random number in the range of [-1, 1]. If the candidate solution is better than the current one, the employed bee memorizes it; otherwise, the counter of the food source is increased by one. In the onlooker bee phase, each onlooker bee selects a solution of an employed bee with the probability

$$q_i(t) = \frac{fit_i(t)}{\sum_{j=1}^{N} fit_j(t)} \qquad (4)$$

where $q_i(t)$ is the chance of the $i$th solution's selection by an onlooker bee and $N$ is the number of food sources. After an onlooker bee selects a solution of an employed bee, the onlooker bee attempts to improve the solution by using (3). The mechanism of the employed bee phase is followed here as well: if the solution found by the onlooker bee is better than the solution of the employed bee, the new solution is memorized by the employed bee; otherwise, the counter of the food source is increased by one.
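The candidate generation shared by the employed and onlooker bees, and the onlooker selection probability $q_i(t)$, can be sketched as follows (our own minimal Python sketch; the helper names are hypothetical):

```python
import random

def produce_candidate(foods, i, rng=random):
    """Basic ABC update rule: perturb one randomly chosen dimension of
    solution i toward or away from a random neighbor k != i."""
    dim = len(foods[i])
    j = rng.randrange(dim)                                  # random dimension
    k = rng.choice([s for s in range(len(foods)) if s != i])  # neighbor index
    phi = rng.uniform(-1.0, 1.0)
    candidate = list(foods[i])
    candidate[j] = foods[i][j] + phi * (foods[i][j] - foods[k][j])
    return candidate

def onlooker_probabilities(fitness_values):
    """q_i(t): each onlooker bee picks solution i with a probability
    proportional to its fitness."""
    total = sum(fitness_values)
    return [f / total for f in fitness_values]
```

Only one dimension of the solution is perturbed per attempt, which is what makes the basic update rule a local move around the current food source.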
In the scout bee phase of ABC, the highest counter value among the food sources is determined and compared with the limit. If this value is higher than the limit, the employed bee of this food source becomes a scout bee. A new solution is randomly produced for this scout bee by using (1), the counter of the food source is reset, and the scout bee becomes an employed bee again. In the basic ABC algorithm, at most one scout bee can occur at each iteration of ABC.
Briefly, the employed and onlooker bees provide the intensification, and the scout bees provide the diversification for the
population of the ABC algorithm. The working diagram of the ABC algorithm is given in Fig. 1.
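The four phases described above can be sketched end-to-end as follows (a minimal, self-contained Python sketch of the basic ABC loop under our own naming, not the authors' code; greedy replacement is done directly on objective values, which for minimization orders solutions the same way as the fitness of Eq. (2) when objective values are non-negative):

```python
import random

def abc_minimize(objective, dim, lower, upper, sn=20, limit=100, max_fes=10000):
    """Minimal sketch of the basic ABC loop: initialization, employed bee,
    onlooker bee and scout bee phases with a per-source trial counter."""
    rand = random.Random(42)                     # seeded for reproducibility
    def new_source():
        return [lower + rand.random() * (upper - lower) for _ in range(dim)]
    def fit(v):                                  # Eq. (2) fitness mapping
        return 1.0 / (1.0 + v) if v >= 0 else 1.0 + abs(v)
    foods = [new_source() for _ in range(sn)]
    vals = [objective(x) for x in foods]
    trials = [0] * sn
    fes = sn
    def try_improve(i):
        nonlocal fes
        j = rand.randrange(dim)
        k = rand.choice([s for s in range(sn) if s != i])
        cand = list(foods[i])
        cand[j] += rand.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        v = objective(cand); fes += 1
        if v < vals[i]:                          # greedy replacement
            foods[i], vals[i], trials[i] = cand, v, 0
        else:
            trials[i] += 1
    while fes < max_fes:
        for i in range(sn):                      # employed bee phase
            try_improve(i)
        probs = [fit(v) for v in vals]           # onlooker bee phase
        total = sum(probs)
        for _ in range(sn):
            r, acc = rand.random() * total, 0.0
            for i, p in enumerate(probs):
                acc += p
                if acc >= r:
                    try_improve(i); break
        worst = max(range(sn), key=lambda i: trials[i])
        if trials[worst] > limit:                # scout bee phase (one scout)
            foods[worst] = new_source()
            vals[worst] = objective(foods[worst]); fes += 1
            trials[worst] = 0
    return min(vals)
```

For example, calling `abc_minimize` on the 5-dimensional Sphere function with a budget of 20,000 evaluations drives the best objective value close to zero.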
Optimization problems have different characteristics, such as multimodality and unimodality. While multimodal functions require effective global and local search abilities from the methods, unimodal functions mainly require an effective local search capability. Accordingly, various versions of ABC dwell on the balance of the local and global search abilities of ABC. As seen from the basic concept of ABC, both the employed and onlooker bees use the same equation for updating the solutions. To overcome issues in the balancing, searching and convergence behavior of the algorithm, we propose five solution search equations, Eqs. (5)–(9), for the basic ABC algorithm.
Similar to (9), global search is provided for the method by using (7): the new food source obtained by (7) is located on the search space by adding it to randomly selected neighbors, and Eq. (7) therefore provides sufficient diversification in the population. It should be noted that (5) is the same as the search equation of the basic ABC algorithm.
The usage of these equations is governed by a counter assigned to each equation. The initial values of the counters are set to a constant value. If the candidate solution obtained by using an equation is better than the old one, the counter of that equation is increased by one. The equation selection is based on the roulette-wheel selection mechanism given as follows:
$$P(e_i) = \frac{C(c_i)}{\sum_{j=1}^{NE} C(c_j)}, \quad i = 1, 2, \ldots, NE \qquad (10)$$
where $P(e_i)$ is the probability of the $i$th equation being selected by an employed or onlooker bee, $C(c_i)$ is the value of the $i$th counter, assigned to equation $e_i$, and $NE$ is the number of equations. Given the equations and the selection mechanism above, the procedure operates as described next.
Employing multiple update rules allows the proposed algorithm to adapt to the properties of the search space of the optimization problem, such as its multimodality or unimodality. In the initialization step of the proposed method, a constant value is set for the equation counters, and the selection is then performed by using the equation selector given in Fig. 2. After each objective function evaluation in the proposed ABC algorithm, if the newly obtained solution is better than the old solution, the counter of the selected equation is increased by one. This incremental behavior adapts the method to the search space of the optimization problem, and the selection chances of one or more update rules are therefore reinforced during the iterations. The working diagram of the proposed approach is presented in Fig. 3. In addition to the update rules, note that the proposed method uses the fitness values of the food sources only for the selection of food sources, and uses the objective function values obtained from the food sources for comparing the candidate solutions with the current solutions. In other words, while the basic ABC algorithm uses the fitness values of the food sources for the greedy selection process, the proposed method uses the objective function values for the same purpose.
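The counter-based roulette-wheel selection of Eq. (10), together with the counter update, can be sketched as follows (our own minimal Python sketch; the initial counter value of 10 is an arbitrary illustrative constant, not the value used in the paper):

```python
import random

def pick_strategy(counters, rng=random):
    """Eq. (10): choose strategy i with probability
    C(c_i) / sum_j C(c_j) via roulette-wheel selection."""
    total = sum(counters)
    threshold = rng.random() * total
    acc = 0.0
    for i, c in enumerate(counters):
        acc += c
        if acc > threshold:
            return i
    return len(counters) - 1  # guard against floating-point round-off

# Adaptation: all five update rules (Eqs. (5)-(9)) start with equal counters,
# so each has an equal selection chance; a rule that produces a better
# candidate earns +1, which raises its selection probability later on.
counters = [10, 10, 10, 10, 10]
chosen = pick_strategy(counters)
candidate_is_better = True  # placeholder for the greedy comparison result
if candidate_is_better:
    counters[chosen] += 1
```

Because counters are only ever incremented, every strategy keeps a nonzero selection chance, so no update rule is ever permanently discarded.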
To investigate the performance and accuracy of the proposed ABC with variable search strategy (ABCVSS), the ABCVSS algorithm is applied to optimize 28 benchmark functions. The optimization results obtained by ABCVSS are compared with those of the original ABC, gbest-guided ABC (GABC), ABC/Best/1, ABC/Best/2, and modified ABC (MABC) in the first experiment. We selected these basic ABC variants
for comparison because the search equation of the basic ABC algorithm is improved in these methods. The global best solution
in the population is included for the search equation of the basic ABC in the GABC method. The ABC/Best/1 and ABC/Best/2
methods use different search equations instead of the basic search equation of the ABC algorithm to improve the performance
of the ABC algorithm. We selected these basic variants of the ABC algorithm for comparison to demonstrate that the integra-
tion of search equations in a framework is better than using or improving only one search equation in the basic ABC algorithm
in terms of solution quality. In the second experiment, the results of ABCVSS are compared with CABC and ABC variants, which
use the orthogonal learning strategy on the 20 benchmark functions. CABC and its variants are powerful versions of the ABC
algorithm. The performance of the proposed method is also compared with these variants. The third experiment covers the
comparison of ABCVSS, evolutionary algorithms (EAs), differential evolution (DE), PSO and their variants. EAs, DEs and PSOs
are used for comparing our method because these methods are popular in numeric optimization. For a clear and fair comparison among the ABC variants, the control parameters of the methods are set according to their original papers, and the termination condition for all methods is the maximum number of function evaluations (Max_FEs). In all experiments and comparisons, the population size of the ABCVSS algorithm is 40, and the limit value is set to limit = D × N [9], where N is the number of employed bees and D is the dimensionality of the numeric function. In the comparison tables, results written in boldface indicate that the corresponding method is better than the other methods in terms of solution quality.
To analyze and compare the performance and accuracy of the proposed method, a large set of benchmark functions col-
lected from literature [32,39,40] are used in the experiments and are listed in Table 1. These functions have different prop-
erties such as unimodality, multimodality, separable and non-separable. These properties of the functions are given in
column C of Table 1. If a function has only one local minimum, it is called a unimodal function, and that local minimum is also the global minimum of the function. The exploitation ability of the methods is often tested on these types of functions. If a function has more than one local minimum, it is called a multimodal function, and multimodal functions
have one or more global minima [9]. In addition to the exploitation ability, the exploration ability of the methods is often tested on the multimodal functions. Functions that can be reformulated as a sum of n functions of one variable are called separable functions. Non-separable functions cannot be reformulated in this way because there is an interrelation among their variables [9]. If a method attempts to change one variable of a non-separable function, the other variables are affected by this change, and the rest of the variables should be arranged accordingly. Therefore, finding the optimum of a non-separable function is more difficult than
finding the optimum of the separable functions. Another important issue for the methods is the dimensionality of the search
space [17]. Generally, solving high dimensional optimization problems is more difficult than solving low dimensional opti-
mization problems. In column C of Table 1, M shows that the function is multimodal, U shows that the function is unimodal, S
shows that the function is separable and N shows the characteristics of non-separable functions.
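To make the separability distinction concrete, here is an illustrative pair of standard benchmark functions in Python (shown for illustration only): the Sphere function is separable, whereas the Rosenbrock function is non-separable because consecutive variables interact.

```python
def sphere(x):
    """Separable and unimodal: a sum of independent one-variable terms,
    so each coordinate can be optimized in isolation."""
    return sum(v * v for v in x)

def rosenbrock(x):
    """Non-separable: consecutive variables interact through the
    (x_{i+1} - x_i^2) term, so changing one coordinate shifts the
    optimal setting of its neighbor."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (1.0 - x[i]) ** 2
               for i in range(len(x) - 1))
```

The global minimum of Sphere is at the origin; the global minimum of Rosenbrock is at (1, ..., 1), and a per-coordinate search struggles there precisely because of the coupling term.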
The benchmark functions used in the experiments are divided into three groups. The first group consists of low-dimen-
sional (D = 30) functions, the second group consists of middle-dimensional functions (D = 60) and the last group consists of
high-dimensional (D = 100) functions.
Before the experimental results and comparisons of the methods are given, we investigate the efficiency of the update
rules used in the proposed algorithm.
Table 1
Benchmark functions used in experiments.

To show the results of the proposed search strategies and mechanisms, the proposed algorithm was run 30 times with random seeds for the first twelve benchmark functions with D = 30, and the maximum number of function evaluations, which is used as the termination condition of the method, was set to 1.5E+5. The means of the counters' contents for each function are shown in Fig. 4. As seen from Fig. 4, while the benchmark functions are being optimized by the proposed approach, the appropriate update rule is selected according to the characteristics of the problem, and there is cooperation among the update rules.
In the ABC algorithm, if a solution obtained by an employed or onlooker bee is better than the previous solution, the previous solution is replaced with the new solution. When an approach improves the exploitation ability, the number of better solutions obtained can be used as an indicator of the exploitation ability of the ABC algorithm. To demonstrate the search ability of the methods, the improvement rate is used in this study and is given as follows:

$$iR = \frac{SS}{Max\_FEs} \times 100 \qquad (11)$$

where $iR$ is the improvement rate of the update rule, $Max\_FEs$ is the maximum number of function evaluations, and $SS$ is the number of successful updates, which is increased by one each time a new solution is better than the previous solution. For the benchmark functions with 30, 60 and 100 dimensions, Max_FEs is considered to be 1.5E+5, 3E+5 and 5E+5,
Fig. 4. The selection ratios of the update rules (Eqs. (5)–(9)) in the proposed ABCVSS algorithm on functions F1–F9.
respectively, and the improvement rates of the methods are given in Fig. 5. For this analysis, the basic ABC and the proposed approach (ABCVSS for short) were repeated 30 times with random seeds on the benchmark functions to obtain the improvement rates reported in the figures. Based on the iR values, we can say that ABCVSS applies more effective update rules than the basic ABC in most cases.
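Eq. (11) amounts to the percentage of objective function evaluations that improved a solution; as a quick sketch (the helper name is our own):

```python
def improvement_rate(successful_updates, max_fes):
    """Eq. (11): iR = SS / Max_FEs * 100, i.e. the percentage of
    objective function evaluations that produced a better solution."""
    return successful_updates / max_fes * 100.0
```

For example, 30,000 successful updates out of 150,000 evaluations give an improvement rate of 20%.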
This section contains a comparison of the original ABC, GABC, MABC, ABC/Best/1, ABC/Best/2 and ABCVSS for the 28
benchmark functions listed in Table 1. The termination condition for the methods is Max_FEs, and it is set to 1.5E+5 for
30 and 100 (F22, F23) dimensional functions, 3E+5 for 60 and 200 (F22, F23) dimensional functions and 5E+5 for 100 and
200 (F22, F23) dimensional functions. For ABC, GABC and ABCVSS, the limit value for the populations is set to
limit = D × N [9], where N is the number of employed bees or food sources and D is the dimensionality of the function.
For ABC/Best/1 and ABC/Best/2, the limit value is set to limit = 0.6 × D × N, as given in [38].
The results obtained by 30 independent runs under the conditions given above are reported in Table 2 for 30-dimensional
functions, Table 3 for 60-dimensional functions and Table 4 for 100-dimensional functions. In the comparison tables, because
the results of the MABC algorithm are taken directly from [39], the statistically significant test between the proposed method
and MABC algorithm could not be applied. In addition to the solution quality, accuracy and robustness based on standard
deviations, the convergence graphs for the methods are important metrics for comparing the useful approximation ability
of the methods. The convergence graphs of the implemented methods in this comparison are given in Fig. 6 on some
functions.
Fig. 5. Improvement rates of the update rules of ABC and ABCVSS on the benchmark functions with 30 dimensions (a), 60 dimensions (b) and 100 dimensions (c).
As seen from the comparison tables, the ABCVSS algorithm outperforms the other ABC variants in most cases. Although
the methods show the same performance on some functions, the convergence rate of the ABCVSS algorithm is better than the
other algorithms. Based on the standard deviation values, the proposed method is more robust than the original ABC algo-
rithm and other variants because the appropriate equation or equations are selected according to the characteristics of the
test functions in the proposed method. In addition, as seen from Tables 2–4, the successful results obtained by the proposed approach do not depend on the dimensionalities of the numeric functions.
The Wilcoxon signed-rank test is also applied to the results of the original ABC and the proposed method; according to the last column of Tables 2–4, the proposed method is significantly different from the original ABC algorithm at the 0.05 significance level.
In the comparison of CABC, OABC, OGABC, OCABC and ABCVSS, Max_FEs is set to 5E+4 for the low-dimensional functions,
1E+5 for the middle-dimensional functions and 2E+5 for the high-dimensional functions. For CABC and ABC variants that use
the orthogonal learning strategy, the low, middle and high dimensionalities are defined as 15-dimensional, 30-dimensional
and 60-dimensional for F1 to F20 functions and 50-dimensional, 100-dimensional and 200-dimensional for F22 to F23 func-
tions [40]. Because the results of CABC, OABC, OGABC and OCABC are taken directly from the study of Gao et al. [40], the
ABCVSS is run 30 times with random seeds, and the results obtained by ABCVSS are compared with the results of CABC,
OABC, OGABC and OCABC. The comparison tables designed for the different dimensions are given in Tables 5–7.
There are no results for the F8 function in the comparison tables because the F8 function in [40] is the exponential function and its optimum is reported as zero in [38], although its optimum is 1. Based on the comparison tables, the ABCVSS algorithm is, like OCABC, a remarkable method for solving numerical optimization problems. Similar to OCABC and its variants, ABCVSS learns the appropriate update rule according to the characteristics of the numeric functions, and more successful results are obtained by the proposed approach.
The ABCVSS algorithm is also compared with DE, PSO, EA and their variants. The results of these methods are taken
directly from [40]. The ABCVSS algorithm is compared with DE and its variants in Table 8, PSO and its variants in Table 9
and EAs in Table 10.
In the result tables, NA refers to a result that is not available in the corresponding reference. As seen from Table 8, the ABCVSS algorithm outperforms the DE variants in every case, with the exception of the Quartic function, where jDE beats ABCVSS. In Table 10, OGA/Q is better than ABCVSS on the Sphere and Schwefel 2.22 functions; for these functions, ABCVSS is better than the other EAs, and OGA/Q obtains the optimum. For the rest of the cases, ABCVSS is better than or equal to the EAs. When ABCVSS is compared with the PSOs in Table 9, FIPS is better than ABCVSS on the Quartic function, and OLPSO-G is better than ABCVSS on the Ackley function. For the rest of the functions, ABCVSS outperforms the PSOs in terms of solution quality.
Table 2
Comparison of the basic ABC variants on the 30 and 100 (F22, F23) dimensional functions.
Table 3
Comparison of the basic ABC variants on the 60 and 200 (F22, F23) dimensional functions.
Fig. 6. Convergence performance of ABCs on the 15 test functions with different dimensions.
Table 5
Comparison of the CABC, OABC, OGABC, OCABC and ABCVSS on 15-dimensional functions F1–F20 and 50-dimensional functions F22–F23.
Table 6
Comparison of the CABC, OABC, OGABC, OCABC and ABCVSS on 30-dimensional functions F1–F20 and 100-dimensional functions F22–F23.
In the present study, the proposed method is compared with the basic ABC and its variants, CABC and its variants, DEs, PSOs and EAs. The results of these comparisons are summarized below.
The proposed ABCVSS algorithm is compared with the variants of the ABC algorithm in Tables 2–4. Based on Table 2, the
compared methods have equal performance on the F7, F11, F12, F20 and F25 functions. The basic ABC algorithm is slightly better
than the other methods on the F10 function, the MABC algorithm is slightly better on the F14 function, and the ABCBest1
algorithm is slightly better on the F21 function. On the F1, F2, F3, F4, F5, F6, F8, F9, F15, F18, F23, F27 and F28 functions,
the proposed method is better than the other methods. On the rest of the functions, the ABCVSS algorithm performs the same
as at least one other algorithm. According to Table 3, the methods show the same performance on the F7, F11, F12 and F25
functions. On the rest of the functions, the ABCVSS algorithm performs better than the other methods in most cases and
equals at least one algorithm on the remainder. The basic ABC algorithm is slightly better than the other methods on the
F10 function, the MABC algorithm is slightly better on the F14 function, and the ABCBest1 algorithm is slightly better on the
F21 and F28 functions. In Table 4, the ABCVSS algorithm has a slightly lower performance on the F10 and F21 functions. On
the rest of the benchmark functions, the ABCVSS algorithm shows higher or equal performance in solving the numeric problems.
As seen from Tables 2–4, the proposed method preserves its search capability as the dimensions of the benchmark functions
increase.
Table 7
Comparison of the CABC, OABC, OGABC, OCABC and ABCVSS on 60-dimensional functions F1–F20 and 200-dimensional functions F22–F23.
Table 8
Comparison of the ABCVSS and DEs on some 30-dimensional functions.
Table 9
Comparison of the ABCVSS and PSOs on some 30-dimensional functions.
Table 10
Comparison of the ABCVSS and EAs on some 30-dimensional functions.
The ABCVSS algorithm is compared with CABC, OABC, OGABC, and OCABC in Tables 5–7. In Table 5, the proposed algorithm
has a lower performance than the other algorithms on the F9, F10, F13, F14 and F15 functions, and it has higher or equal
performance on the other benchmark functions. The same holds in Table 6, except for the F13 function, for which the proposed
method obtains the optimal solution and thus shows the same performance as the other methods. In Table 7, while the ABCVSS
algorithm has a lower performance than the other methods on the F6, F9, F10, F14, F15 and F20 functions, it has a higher
performance than the other methods on F1, F2, F3, F4, F5, F13, F18 and F23. On the rest of the functions, the methods show
similar performance. Tables 5–7 show that the proposed method is highly competitive in solving the numeric functions
considered in the experiments.
In Table 8, the performance of the ABCVSS algorithm is compared with the DEs, and this comparison shows that the proposed
method is better than the DEs, except on the Quartic function. On this function, jDE shows the highest performance
among the compared methods.
In Table 9, the proposed method is compared with the basic PSO algorithm and its variants. As seen from the
table, all of the methods have the same performance on the Step function, the FIPS algorithm is better than the other
methods on the Quartic function, and the OLPSO-G algorithm is better than the other methods on the Ackley function. On
the rest of the benchmark functions, the ABCVSS algorithm is better than the PSO models.
In Table 10, the proposed algorithm is compared with the EAs. The OGA/Q algorithm is better than the other methods on the
Sphere and Schwefel 2.22 functions. The ABCVSS algorithm has higher performance than the EAs on the Penalized 1, Penalized
2, Himmelblau and Michalewicz functions. On the rest of the benchmark functions in Table 10, the ABCVSS and OGA/Q
algorithms have the same performance.
The conceptual structure of ABC is advantageous because the algorithm contains individual searchers (employed bees),
cooperative searchers (onlooker bees) and random searchers (scout bees), which perform different activities to obtain
optimal or near-optimal solutions for optimization problems. In addition to a sound conceptual structure, such methods
require well-designed update rules to obtain high-quality results. The update rule of the ABC algorithm is strong in
exploring the search space but poor in exploiting the region around a found solution. The proposed approach improves
not only the exploitation ability but also the exploration ability of the method, because different update rules can be
used to find new solutions within the same iteration. While some rules provide diversification in the population and
others provide intensification, combining the rules balances exploration and exploitation in the algorithm. The
integration of update rules for the ABC algorithm provides robust
local and global search abilities for the basic algorithm while searching the solution space of the optimization problem. For
problems with different characteristics, determining in advance which update rule outperforms the others is costly in
terms of time and effort, so the integration of update rules has an edge in solving such problems. Therefore, the proposed
method is an alternative and comprehensive tool for solving optimization problems. In addition, as seen from the
comparisons, the experimental results show that the proposed approach is better than the other methods in most cases.
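The counter-based coupling of diversification and intensification rules discussed above can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the paper's exact procedure: the initial counter value, the reward-on-improvement step, and the roulette-wheel selection shown here are chosen for the example.

```python
import random

def select_strategy(counters):
    """Pick a strategy index with probability proportional to its counter
    (roulette-wheel selection), so rules that produced improvements
    before are chosen more often later."""
    total = sum(counters)
    threshold = random.uniform(0.0, total)
    cumulative = 0.0
    for index, count in enumerate(counters):
        cumulative += count
        if threshold <= cumulative:
            return index
    return len(counters) - 1  # guard against floating-point round-off

# One counter per update rule, initialized to the same constant.
counters = [10.0] * 5

# During the search, a bee selects a rule, builds a candidate with it,
# and rewards the rule's counter when the candidate improves the solution.
chosen = select_strategy(counters)
improved = True  # stands in for a greedy-selection comparison
if improved:
    counters[chosen] += 1.0
```

Because the counters only grow when a rule succeeds, the selection distribution gradually concentrates on the rules that suit the problem at hand, which is the learning behavior the discussion attributes to the artificial agents.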
It is widely accepted that problems with different characteristics require methods with well-balanced local and global
search abilities. In ABC, an effective global search is provided by the scout bees and the update rule of the algorithm. A
local search is performed by the employed and onlooker bees, which introduces issues because both use the same update
rule. In the proposed method, each employed or onlooker bee can use a different update rule to obtain a candidate
solution, which improves the local and global search abilities of the method. This modification results in a more robust
and effective ABC-based optimizer. Consequently, the performance of the proposed method on 28 numerical benchmark
functions shows that using different equations as update rules in the ABC algorithm provides sufficient diversification
and intensification of the population on the search space of the problem. Future work includes investigating more capable
update rules for ABC and other swarm intelligence methods.
Acknowledgements
The authors thank The Scientific Research Project Coordinatorship of Selcuk University and The Scientific and
Technological Research Council of Turkey for their institutional support.
References
[1] A. Banharnsakun, T. Achalakul, B. Sirinaovakul, The best-so-far selection in artificial bee colony algorithm, Appl. Soft Comput. 11 (2011) 2888–2901.
[2] A. Banharnsakun, B. Sirinaovakul, T. Achalakul, The best-so-far ABC with multiple patrilines for clustering problems, Neurocomputing 116 (2013) 355–
366.
[3] A. Singh, An artificial bee colony algorithm for the leaf-constrained minimum spanning tree problem, Appl. Soft Comput. 9 (2009) 625–631.
[4] B. Akay, Performance Analysis of Artificial Bee Colony Algorithm on numerical optimization problems, PhD. Dissertation, Erciyes Univ., Grad. Sch. of
Nat. and Appl. Sci., Kayseri, TR, 2009.
[5] B. Akay, D. Karaboga, A modified artificial bee colony algorithm for real parameter optimization, Inf. Sci. 192 (2012) 120–142.
[6] B. Alatas, Chaotic bee colony algorithms for global numerical optimization, Expert Syst. Appl. 37 (2010) 5682–5687.
[7] D. Bose, S. Biswas, A.V. Vasilakos, S. Laha, Optimal filter design using an improved artificial bee colony algorithm, Inf. Sci. 281 (2014) 443–461.
[8] D. Karaboga, An idea based on honey bee swarm for numerical optimization, Erciyes University, Kayseri, Turkey, Tech. Rep., TR06, 2005.
[9] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Appl. Math. Comput. 214 (2009) 108–132.
[10] D. Karaboga, B. Akay, C. Ozturk, Artificial bee colony (ABC) optimization algorithm for training feed-forward neural networks, in: 4th International
Conference MDAI, Kitakyushu, Japan, August 16–18, 2007, pp. 318–329.
[11] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, J. Global Optim.
39 (2007) 459–471.
[12] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Appl. Soft Comput. 8 (2008) 687–697.
[13] D. Karaboga, B. Gorkemli, A quick artificial bee colony (qABC) algorithm and its performance on optimization problems, Appl. Soft Comput. 23 (2014)
227–238.
[14] D. Karaboga, B. Gorkemli, C. Ozturk, N. Karaboga, A comprehensive survey: artificial bee colony (ABC) algorithm and applications, Artif. Intell. Rev.
(2012), http://dx.doi.org/10.1007/s10462-012-9328-0.
[15] D. Karaboga, C. Ozturk, A novel clustering approach: artificial bee colony (ABC) algorithm, Appl. Soft Comput. 11 (2011) 652–657.
[16] D. Karaboga, S. Okdem, C. Ozturk, Cluster based wireless sensor network routing using artificial bee colony algorithm, Wirel. Netw. 18 (2012) 847–860.
[17] D.O. Boyer, C.H. Martínez, N.G. Pedrajas, Crossover operator for evolutionary algorithms based on population features, J. Artif. Intell. Res. 24 (2005) 1–
48.
[18] G. Zhu, S. Kwong, Gbest-guided artificial bee colony algorithm for numerical function optimization, Appl. Math. Comput. 217 (2010) 3166–3173.
[19] H. Gozde, M.C. Taplamacioglu, Comparative performance analysis of artificial bee colony algorithm for automatic voltage regulator (AVR) system, J.
Frankl. Inst. 348 (2011) 1927–1946.
[20] H. Wang, Z. Wu, S. Rahnamayan, H. Sun, Y. Liu, J. Pan, Multi-strategy ensemble artificial bee colony algorithm, Inf. Sci. 279 (2014) 587–603.
[21] H.-C. Tsai, Integrating the artificial bee colony and bees algorithm to face constrained optimization problems, Inf. Sci. 258 (2014) 80–93.
[22] I.M.S. De Oliveira, R. Schirru, Swarm intelligence of artificial bees applied to in-core fuel management optimization, Ann. Nucl. Energy 38 (2011) 1039–
1045.
[23] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proc. of IEEE International Conference on Neural Networks, Piscataway, USA, 1995, pp.
1942–1948.
[24] M. Dorigo, V. Maniezzo, A. Colorni, Positive feedback as a search strategy, Politecnico di Milano, Milano, Italy, Tech. Rep. 91–016, 1991.
[25] M. Dorigo, V. Maniezzo, A. Colorni, The ant system: optimization by a colony of cooperating agents, IEEE T. Syst. Man Cy. B 26 (1996) 29–41.
[26] M. Ma, J. Liang, M. Guo, Y. Fan, Y. Yin, SAR image segmentation based on artificial bee colony algorithm, Appl. Soft Comput. 11 (2011) 5205–5214.
[27] M. Sonmez, Artificial bee colony algorithm for optimization of truss structures, Appl. Soft Comput. 11 (2011) 2406–2418.
[28] M.S. Kiran, O. Findik, A directed artificial bee colony algorithm, Appl. Soft Comput. 26 (2015) 454–462.
[29] M.-H. Horng, Multilevel thresholding selection based on the artificial bee colony algorithm for image segmentation, Expert Syst. Appl. 38 (2011)
13785–13791.
[30] N. Imanian, M.E. Shiri, P. Moradi, Velocity based artificial bee colony algorithm for high dimensional continuous optimization problems, Eng. Appl.
Artif. Intell. 36 (2014) 148–163.
[31] P. Mansouri, B. Asady, N. Gupta, The bisection-artificial bee colony algorithm to solve fixed point problems, Appl. Soft Comput. 26 (2015) 143–148.
[32] P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y.-P. Chen, A. Auger, S. Tiwari, Problem Definitions and Evaluation Criteria for the CEC 2005 Special
Session on Real-Parameter Optimization, Nanyang Technological University, Singapore, Tech. Rep., May 2005.
[33] R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proc. MHS’95, Nagoya, Japan, 1995, pp. 39–43.
[34] S. Samanta, S. Chakraborty, Parametric optimization of some non-traditional machining processes using artificial bee colony algorithm, Eng. Appl. Artif.
Intell. 24 (2011) 946–957.
[35] S.N. Omkar, J. Senthilnath, R. Khandelwal, G.N. Naik, S. Gopalakrishnan, Artificial bee colony (ABC) for multi-objective design optimization of
composite structures, Appl. Soft Comput. 11 (2011) 489–499.
[36] S.O. Degertekin, Optimum design of geometrically non-linear steel frames using artificial bee colony algorithm, Steel Compos. Struct. 12 (2012) 505–
522.
[37] V.J. Manoj, E. Elias, Artificial bee colony algorithm for the design of multiplier-less nonuniform filter bank transmultiplexer, Inf. Sci. 192 (2012) 193–
203.
[38] W. Gao, S. Liu, L. Huang, A global best artificial bee colony algorithm for global optimization, J. Comput. Appl. Math. 236 (2012) 2741–2753.
[39] W. Gao, S. Liu, A modified artificial bee colony algorithm, Comput. Oper. Res. 39 (2012) 687–697.
[40] W. Gao, S. Liu, L. Huang, A novel artificial bee colony algorithm based on modified search equation and orthogonal learning, IEEE T. Syst. Man Cy. B
(2012), http://dx.doi.org/10.1109/TSMCB.2012.2222373.
[41] W. Gao, S. Liu, L. Huang, Enhancing artificial bee colony algorithm using more information-based search equations, Inf. Sci. 270 (2014) 112–133.
[42] W.-C. Yeh, T.-J. Hsieh, Solving reliability redundancy allocation problems using an artificial bee colony algorithm, Comput. Oper. Res. 38 (2011) 1465–
1473.
[43] X. Zhang, X. Zhang, S. Yuen, S. Ho, W. Fu, An improved artificial bee colony algorithm for optimal design of electromagnetic devices, IEEE T. Magn.
(2013), http://dx.doi.org/10.1109/TMAG.2013.2241447.
[44] X.-L. Li, Z.J. Shao, J.-X. Qian, An optimizing method based on autonomous animates: fish-swarm algorithm, Syst. Eng. – Theory Pract. 22 (2002) 32–38.
[45] X.-S. Yang, S. Deb, Engineering optimization by cuckoo search, Int. J. Math. Model. Numer. Opt. 1 (2010) 330–343.
[46] X.-S. Yang, Firefly algorithms for multimodal optimization, in: 5th International Symposium SAGA, Sapporo, Japan, 2009, pp. 169–178.
[47] Y. Liu, X. Ling, G. Liu, Improved artificial bee colony algorithm with mutual learning, J. Syst. Eng. Electron. 23 (2012) 265–275.