
Information Sciences 300 (2015) 140–157

Contents lists available at ScienceDirect

Information Sciences
journal homepage: www.elsevier.com/locate/ins

Artificial bee colony algorithm with variable search strategy


for continuous optimization
Mustafa Servet Kiran *, Huseyin Hakli, Mesut Gunduz, Harun Uguz
Selcuk University, Faculty of Engineering, Department of Computer Engineering, 42075 Konya, Turkey

Article info

Article history:
Received 23 June 2014
Received in revised form 17 December 2014
Accepted 28 December 2014
Available online 3 January 2015

Keywords:
Artificial bee colony
Continuous optimization
Search strategy
Integration

Abstract

The artificial bee colony (ABC) algorithm is a swarm-based optimization technique proposed for solving continuous optimization problems. The artificial agents of the ABC algorithm use one solution update rule during the search process. To efficiently solve optimization problems with different characteristics, we propose the integration of multiple solution update rules with ABC in this study. The proposed method uses five search strategies and counters to update the solutions. During initialization, each update rule has a constant counter content. During the search process performed by the artificial agents, these counters are used to determine the rule that is selected by the bees. Because the optimization problems and functions have different characteristics, one or more search strategies are selected and used during the iterations according to the characteristics of the numeric functions in the proposed approach. By using the search strategies and mechanisms proposed in the present study, the artificial agents learn which update rule is more appropriate based on the characteristics of the problem to find better solutions. The performance and accuracy of the proposed method are examined on 28 numerical benchmark functions, and the obtained results are compared with various classical versions of ABC and other nature-inspired optimization algorithms. The experimental results show that the proposed algorithm, integrated and improved with search strategies, outperforms the basic variants and other variants of the ABC algorithm and other methods in terms of solution quality and robustness for most of the experiments.

© 2015 Elsevier Inc. All rights reserved.

1. Introduction

In recent years, many swarm intelligence-based heuristic optimization techniques such as the ant colony optimization
(ACO) [24,25], particle swarm optimization (PSO) [23,33], artificial bee colony algorithm (ABC) [8], cuckoo search (CS)
[45], firefly algorithm (FA) [46], and artificial fish swarm algorithm (AFSA) [44] have been proposed in the literature. These
algorithms are generally biologically inspired by the social behaviors of insects, fish or birds. ABC, which is another biolog-
ically inspired optimization algorithm and the subject of this study, is based on the foraging and waggle dance behaviors of
the honey bee colonies. In the ABC algorithm, there are two types of bees in the hive: employed and unemployed. The unem-
ployed bees are further classified as onlooker and scout. The employed bees collect nectar from the food sources and share
the positions of the food sources with the unemployed bees. The onlooker bees search for new food sources based on the
information provided by the employed bees. An employed bee becomes a scout bee if a food source cannot be improved

* Corresponding author. Tel.: +90 332 223 1992; fax: +90 332 241 0635.
E-mail address: [email protected] (M.S. Kiran).

http://dx.doi.org/10.1016/j.ins.2014.12.043
0020-0255/© 2015 Elsevier Inc. All rights reserved.

by the employed or onlooker bees in a predefined and reasonable time. In the ABC algorithm, the positions of the food
sources represent a possible solution for the optimization problem, and new food sources are searched by the bees. This
information is exchanged among the bees until a termination condition is met.
In ABC, the onlooker and employed bees utilize the same solution updating equation. The solution updating equation of the basic ABC has several issues, such as slow achievement of the optimal or near-optimal solution (slow convergence) and inefficiency during a local search on the solution space. To overcome these issues, we propose the integration of different search strategies within the setting of the ABC algorithm. To this end, we present five solution update strategies, and a counter is defined for each strategy in this work. These counters are initialized with a constant integer number, so each strategy has an equal chance of being selected by the employed or onlooker bees. When a strategy selected by an employed or onlooker bee produces a new solution that is better than the previous one, the strategy's selection opportunity is improved by increasing its counter by one. Therefore, the appropriate solution search equation is maintained according to the characteristics of the numerical functions.
The rest of the paper is organized as follows. Section 1.1 presents a literature review on ABC algorithms, and the main
contribution of this study is given in Section 1.2. Material and methods (Section 2) gives the basic ABC concept and the pro-
posed search strategies and mechanism for the ABC algorithm. The benchmark functions used in the experiments and an
analysis of the update rules is given in Section 3. The performance of the proposed method is also investigated and compared
with ABC variants on the benchmark functions in Section 3. The obtained results are discussed in Section 4. Finally, the study
is concluded, and future directions for our work are given in Section 5.

1.1. A brief literature review on improvements of ABC

ABC was introduced by Karaboga and was inspired by the foraging and waggle dance behaviors of honey bee colonies [8]. Since its introduction in 2005, many ABC variants have been proposed in the literature; more than half of the studies published in the period of 2005 to 2012 concern the bee colony optimization, honey bee mating optimization, bee algorithm or artificial bee colony algorithm, and they provide an improvement or application of the ABC algorithm [14]. The performance and accuracy of the ABC algorithm were analyzed by optimizing numerical benchmark functions [4,9,11,12]. Alatas [6] proposed a version of ABC to improve the convergence characteristics and to prevent the ABC from becoming stuck on local solutions. To improve the exploitation ability of the ABC algorithm, Zhu and Kwong [18] added the global best information in the bee population to the solution updating equation in the method called gbest-guided ABC (GABC). Karaboga and Akay [5] defined a new parameter called modification rate (MR), which controls whether a dimension of the problem is updated, to increase the convergence rate of the ABC algorithm. Banharnsakun et al. [1] improved the
capability of convergence of the ABC to a global optimum by using the best-so-far selection for onlooker bees, and they
tested the performance of their method on numerical benchmark functions and image registration. Liu et al. [47] presented
a version of the ABC algorithm that was improved by using mutual learning, which tunes the produced candidate food source
with the higher fitness between two individuals selected by a mutual learning factor. Gao et al. [38] proposed two ABC-based
algorithms that use two update rules of differential evolution (DE) called ABC/Best/1 and ABC/Best/2. The global best-based
ABC methods also use chaotic initialization to properly distribute the agents to the search space, and the performance and
accuracy of the methods are examined on 26 numerical benchmark functions [38]. To overcome premature convergence and local minima issues, Gao and Liu [39] proposed MABC, which uses the update rule of the ABC/Best/1 algorithm for employed bees and the update rule of basic ABC for onlooker bees to reinforce the exploration ability of the method; they tested MABC on 28 numerical benchmark functions. In another study, Gao et al. [40] defined a new update rule, based on random solutions, for obtaining the candidate solution; the new rule resembles the crossover operator of the GA, and the method is named CABC. In the same study, the orthogonal learning strategy is proposed for ABC methods such as basic ABC (OABC), GABC (OGABC) and CABC (OCABC), and their accuracy and performance are examined on numerical benchmark functions and are compared with evolutionary algorithms (EAs), differential evolution variants
(DEs) and particle swarm optimization variants (PSOs). Karaboga and Gorkemli [13] proposed a new update rule for onlooker
bees in the hive to improve the local search and convergence characteristics of the standard ABC algorithm. Inspired by PSO,
Imanian et al. [30] changed the update rule of the basic ABC algorithm to increase the convergence speed of the basic ABC
algorithm for solving high dimensional, continuous optimization problems. Wang et al. [20] proposed the MEABC algorithm
to improve the local and global search capability of the basic ABC algorithm, and they tested the performance of MEABC on
basic, shifted and rotated benchmark functions. In another study, the ABC and bee algorithm are integrated to solve six con-
strained optimization benchmark problems [21]. Kiran and Findik [28] developed a simple version of the basic ABC algo-
rithm by using direction information regarding the solutions to improve the convergence characteristics of the basic
algorithm. Mansouri et al. combined the bisection method with ABC for finding the fixed point of a nonlinear function
[31]. Gao et al. proposed two new search equations for the basic ABC algorithm to balance exploration and exploitation
on the search space [41].
In addition to modifications and improvements of ABC, ABC has been applied to solve many optimization problems such
as the image segmentation [29], synthetic aperture radar image segmentation [26], multi-objective design optimization of
laminated composite components [35], in-core fuel management optimization [22], parametric optimization of non-tradi-
tional machining processes [34], wireless sensor network routing [16], leaf-constrained minimum spanning tree problem
[3], reliability redundancy allocation problems [42], optimum design of geometrically non-linear steel frames [36], training

neural networks [10], clustering problems [2,15], minimization of weight in truss structures [27], optimal control of auto-
matic voltage regulator (AVR) systems [19], design of multiplier-less nonuniform filter bank transmultiplexers [37], optimal
design of electromagnetic devices [43], and optimal filter design [7].

1.2. Main contribution of the study

In this study, different search strategies are integrated within the concept of the ABC algorithm. We analyzed the performance and accuracy of the proposed method on several numerical benchmark function optimizations. The experimental results show that integrating multiple search strategies is a better option than using an individual search strategy in the ABC concept, because each search strategy contributes either local search ability or global search ability, and the global and local search abilities are therefore balanced by using different search equations. In addition, the study offers researchers and practitioners guidance on method selection for continuous optimization within the framework of ABC.

2. Material and methods

This section contains an explanation of the original ABC algorithm and the proposed search strategies and mechanisms for
the ABC algorithm.

2.1. Basic ABC algorithm

Inspired by the waggle dance and foraging behaviors of honey bee colonies, the ABC algorithm was developed, and its accuracy and performance were first investigated on three numeric functions (Sphere, Rosenbrock and Rastrigin). The basic ABC algorithm consists of four sequentially realized phases called the initialization, employed bee, onlooker
bee and scout bee phases. In the initialization phase, the number of food sources, the termination condition and the limit
parameter value that controls the occurrence of the scout bee and counter for each food source are defined. The food sources,
which are a possible solution for the optimization problem, are randomly produced on the solution space by using (1), and
each food source is assigned to a different employed bee. Briefly, each employed bee has a food source, and the number of
food sources is equal to the number of employed bees.

X_i^j = X_min^j + r_i^j · (X_max^j − X_min^j),  i = 1, 2, …, N and j = 1, 2, …, D   (1)

where X_i^j is the jth dimension of the ith solution, X_max^j and X_min^j are the upper and lower bounds for the jth dimension of the problem, and r_i^j is a random number in the range [0, 1]. After the food sources/initial solutions are assigned to the employed bees, the fitness of the solutions is calculated as follows:
fit_i(t) = 1 / (1 + f_i(t))  if f_i(t) ≥ 0;  fit_i(t) = 1 + |f_i(t)|  otherwise   (2)

where fit_i(t) is the fitness of the ith solution and f_i(t) is the objective function value, which is specific to the optimization problem.
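As a concrete illustration, the initialization of Eq. (1) and the fitness mapping of Eq. (2) can be sketched as follows (a minimal sketch in Python; the function and variable names are our own illustrative choices, not from the paper):

```python
import random

def init_food_sources(n, d, lower, upper):
    # Eq. (1): X_i^j = X_min^j + r_i^j * (X_max^j - X_min^j)
    return [[lower + random.random() * (upper - lower) for _ in range(d)]
            for _ in range(n)]

def fitness(f_val):
    # Eq. (2): map an objective function value to a fitness value
    return 1.0 / (1.0 + f_val) if f_val >= 0 else 1.0 + abs(f_val)
```

For example, a solution whose objective value is 0 (the global minimum of the Sphere function) receives the maximum fitness of 1.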
In the employed bee phase, each employed bee attempts to improve its own solution as follows:

V_i^j(t+1) = X_i^j(t) + Φ · (X_i^j(t) − X_k^j(t)),  i = 1, 2, …, N, i ≠ k and j ∈ {1, 2, …, D}   (3)

where X_i^j(t) is the jth dimension of the ith solution and V_i^j(t+1) is the candidate solution produced in the neighborhood of X_i^j(t) at time step t. X_k^j(t) is the randomly selected neighbor solution for the jth dimension of the ith solution, and Φ is a random number produced in the range [−1, 1]. It should be noted that only one dimension of the problem is updated at each iteration time t. If the newly found solution is better than the old solution, the new solution is memorized; otherwise, the counter of the food source is increased by one.
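A sketch of this single-dimension update and greedy replacement follows (assuming minimization; names are illustrative, and the reset of the counter on success follows the standard ABC convention rather than a statement in this paper):

```python
import random

def employed_bee_step(sources, objective, counters, i):
    """Apply Eq. (3) to solution i and keep the better of old/new (greedy selection)."""
    d = len(sources[i])
    j = random.randrange(d)  # only one dimension is updated per attempt
    k = random.choice([n for n in range(len(sources)) if n != i])  # neighbor, k != i
    phi = random.uniform(-1.0, 1.0)
    candidate = sources[i][:]
    candidate[j] = sources[i][j] + phi * (sources[i][j] - sources[k][j])
    if objective(candidate) < objective(sources[i]):
        sources[i] = candidate      # memorize the better solution
        counters[i] = 0             # assumed: reset the abandonment counter
    else:
        counters[i] += 1            # failed improvement attempt
```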
Before the onlooker bee phase of ABC, the chances of the food sources being selected are calculated as follows:

q_i(t) = fit_i(t) / Σ_{n=1..N} fit_n(t)   (4)

where q_i(t) is the chance of the ith solution's selection by an onlooker bee. After an onlooker bee selects a solution of an employed bee, the onlooker bee attempts to improve the solution by using (3). The mechanism of the employed bee phase is followed here as well. If the solution found by the onlooker bee is better than the solution of the employed bee, the new solution is memorized by the employed bee. Otherwise, the counter of the food source is increased by one.
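The fitness-proportional selection of Eq. (4) can be sketched as a roulette wheel (a minimal sketch; the function name is our own):

```python
import random

def select_food_source(fitnesses):
    # Eq. (4): q_i = fit_i / sum(fit_n), realized as roulette-wheel selection
    total = sum(fitnesses)
    r = random.random() * total
    acc = 0.0
    for i, fit in enumerate(fitnesses):
        acc += fit
        if r <= acc:
            return i
    return len(fitnesses) - 1  # guard against floating-point round-off
```

Sources with higher fitness occupy a larger slice of the wheel and are therefore chosen more often by the onlooker bees.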
In the scout bee phase of ABC, the highest counter value among the food sources is determined and compared with the limit. If this value is higher than the limit, the employed bee of that food source becomes a scout bee. A new solution is randomly produced for this scout bee by using (1), the counter of the food source is reset, and the scout bee becomes an employed bee again. In the basic ABC algorithm, only one scout bee can become an employed bee at each iteration of ABC.
Briefly, the employed and onlooker bees provide the intensification, and the scout bees provide the diversification for the
population of the ABC algorithm. The working diagram of the ABC algorithm is given in Fig. 1.
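The scout bee check described above can be sketched as follows (a minimal sketch under the paper's single-scout-per-iteration convention; names are illustrative):

```python
import random

def scout_bee_phase(sources, counters, limit, lower, upper):
    """If the largest abandonment counter exceeds the limit,
    re-initialize that food source by Eq. (1) and reset its counter."""
    i = max(range(len(counters)), key=lambda n: counters[n])  # at most one scout
    if counters[i] > limit:
        d = len(sources[i])
        sources[i] = [lower + random.random() * (upper - lower) for _ in range(d)]
        counters[i] = 0
```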

2.2. Proposed search strategies and selection mechanism for ABC

The optimization problems have different characteristics such as multimodality and unimodality. While multimodal functions require effective global and local search abilities from the methods, unimodal functions mainly require an effective local search capability. Generally, various versions of ABC dwell on the balance of the local and global search abilities of ABC. As seen from the basic concept of ABC, both the employed and onlooker bees use the same equation for updating the solutions. To overcome issues in the balancing, searching and convergence of the algorithm, we propose five solution search equations for the basic ABC algorithm. The equations used in the solution updating mechanism of ABC are given as follows:

V_i^j(t+1) = X_i^j(t) + Φ · (X_i^j(t) − X_k^j(t)),  i = 1, 2, …, N, i ≠ k and j ∈ {1, 2, …, D}   (5)

V_i^j(t+1) = X_r^j(t) + Φ · (X_r^j(t) − X_k^j(t)),  i = 1, 2, …, N, j ∈ {1, 2, …, D}   (6)

V_i^j(t+1) = X_r^j(t) + Φ · (X_i^j(t) − X_k^j(t)),  i = 1, 2, …, N, i ≠ k ≠ r, j ∈ {1, 2, …, D}   (7)

V_i^j(t+1) = X_best^j(t) + Φ · (X_k^j(t) − X_r^j(t)),  i = 1, 2, …, N, i ≠ k ≠ r, j ∈ {1, 2, …, D}   (8)

V_i^j(t+1) = X_i^j(t) + Φ · (X_i^j(t) − X_mean^j(t)),  i = 1, 2, …, N, j ∈ {1, 2, …, D}   (9)


Eq. (5) is the same as the update rule of the basic ABC algorithm [8]. In (6) [40], (7) [40] and (8) [38], X_k^j(t) and X_r^j(t) are the jth dimensions of solutions randomly selected from the population at time step t, and k and r are not equal to each other or to i. In (8) [38], X_best^j(t) is the jth dimension of the best solution obtained by the population so far. In (9), which is derived from the update rule of the ABC algorithm, X_mean^j(t) is the average of the jth dimensions of all solutions in the population. In addition, Φ is a random number and scaling factor in the range [−1, 1] and is produced for each dimension that is updated at time step t.
Because the optimum point is not known and we do not know the search direction toward it on the search space, the random number is produced in the range [−1, 1]. To increase diversity in the population, random neighbors are used in (6) and (7). To support the local search around the global best solution of the population and to provide fast convergence to optimum or near-optimum solutions, Eq. (8) is used. The new food source is obtained using (9) by considering the mean of the population. Although (9) causes slow convergence, it provides an effective global search with respect to the other
Fig. 1. The working diagram of the basic ABC algorithm.

equations. Similar to (9), a global search is provided for the method by using (7). The new food source obtained by (7) is located on the search space relative to randomly selected neighbors. Therefore, Eq. (7) provides sufficient diversification in the population. It should be noted that (5) is the same as the search equation of the basic ABC algorithm.
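The five update rules (5)-(9) can be written as small interchangeable functions with a common signature (a minimal sketch; `eq5`..`eq9`, `phi` and `STRATEGIES` are our own names, and Φ is drawn fresh for each updated dimension as the paper specifies):

```python
import random

def phi():
    # scaling factor in [-1, 1], produced for each updated dimension
    return random.uniform(-1.0, 1.0)

# Each rule returns the new value of dimension j. x is the population
# (a list of solutions), i the current index, k and r random neighbor
# indices (i != k != r), and best the best solution found so far.
def eq5(x, i, k, r, best, j):  # basic ABC rule
    return x[i][j] + phi() * (x[i][j] - x[k][j])

def eq6(x, i, k, r, best, j):  # random-base rule
    return x[r][j] + phi() * (x[r][j] - x[k][j])

def eq7(x, i, k, r, best, j):  # random base, own difference
    return x[r][j] + phi() * (x[i][j] - x[k][j])

def eq8(x, i, k, r, best, j):  # global-best-guided rule
    return best[j] + phi() * (x[k][j] - x[r][j])

def eq9(x, i, k, r, best, j):  # population-mean rule
    mean_j = sum(s[j] for s in x) / len(x)
    return x[i][j] + phi() * (x[i][j] - mean_j)

STRATEGIES = [eq5, eq6, eq7, eq8, eq9]
```

Keeping the rules behind one signature lets the selector (Eq. (10)) swap them freely during the search.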
The usage of these equations is governed by a counter designed for each equation. The initial values of the counters are set to a constant value. If the candidate solution obtained by using an equation is better than the old one, the counter of that equation is increased by one. The equation selection is based on the roulette-wheel selection mechanism given as follows:
P(e_i) = C(c_i) / Σ_{j=1..NE} C(c_j),  i = 1, 2, …, NE   (10)

where P(e_i) is the chance of the ith equation being selected by an employed or onlooker bee, C(c_i) is the value of the ith counter assigned to equation e_i, and NE is the number of equations. Given the equations and selection mechanism above, the procedure operates as follows.
Employing multiple update rules provides the proposed algorithm coherence with the properties of the search space of the optimization problem, such as its multimodality or unimodality. In the initialization step of the proposed method, a constant value for the equation counters is set, and then the selection is performed by using the equation selector given in Fig. 2. After each objective function evaluation in the proposed ABC algorithm, if the newly obtained solution is better than the old solution, the counter is increased by one. This incremental behavior adapts the method to the search space of the optimization problem, and the selection chances of one or more update rules are therefore reinforced during the iterations. The working diagram of the proposed approach is presented in Fig. 3. In addition to the update rules, it is noted that in our proposed method the fitness values of the food sources are used for the selection of food sources, while the objective function values obtained by using the food sources are used for comparing the candidate with the actual solutions. In other words, while the basic ABC algorithm uses the fitness values of the food sources for the greedy selection process, objective function values are used for the same purpose in the proposed method.
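The counter-based selector of Eq. (10) and its reinforcement step can be sketched as follows (a minimal sketch; `select_equation` and `reward` are our own names):

```python
import random

def select_equation(counters):
    # Eq. (10): P(e_i) = C(c_i) / sum_j C(c_j), realized as a roulette wheel
    total = sum(counters)
    r = random.random() * total
    acc = 0.0
    for i, c in enumerate(counters):
        acc += c
        if r <= acc:
            return i
    return len(counters) - 1  # guard against floating-point round-off

def reward(counters, i):
    # a successful update reinforces strategy i for future selections
    counters[i] += 1
```

With all counters initialized to the same constant, every strategy starts with an equal selection chance; successful strategies then accumulate counter content and dominate the wheel over the iterations.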

3. Experiments and comparisons

To investigate the performance and accuracy of the proposed artificial bee colony algorithm with variable search strategy (ABCVSS), the algorithm is applied to optimize 28
benchmark functions. The optimization results obtained by ABCVSS are compared with the original ABC, gbest-guided ABC
(GABC), ABC/Best/1, ABC/Best/2, and modified ABC (MABC) in the first experiment. We selected these basic ABC variants
for comparison because the search equation of the basic ABC algorithm is improved in these methods. The global best solution
in the population is included for the search equation of the basic ABC in the GABC method. The ABC/Best/1 and ABC/Best/2
methods use different search equations instead of the basic search equation of the ABC algorithm to improve the performance
of the ABC algorithm. We selected these basic variants of the ABC algorithm for comparison to demonstrate that the integra-
tion of search equations in a framework is better than using or improving only one search equation in the basic ABC algorithm
in terms of solution quality. In the second experiment, the results of ABCVSS are compared with CABC and ABC variants, which
use the orthogonal learning strategy on the 20 benchmark functions. CABC and its variants are powerful versions of the ABC
algorithm. The performance of the proposed method is also compared with these variants. The third experiment covers the
comparison of ABCVSS, evolutionary algorithms (EAs), differential evolution (DE), PSO and their variants. EAs, DEs and PSOs
are used for comparing our method because these methods are popular in numeric optimization. For a clear and fair comparison among ABC variants, the control parameters of the methods are set according to their papers, and the termination condition for all methods is the maximum number of function evaluations (Max_FEs). In all experiments and comparisons, the population size of the ABCVSS algorithm is 40, and the limit value is set to limit = D × N [9], where N is the number of employed bees and D is the dimensionality of the numeric function. In the comparison tables, results written in boldface indicate that the corresponding method's result is better than those of the other methods in terms of solution quality.

3.1. Benchmark functions used in experiments

To analyze and compare the performance and accuracy of the proposed method, a large set of benchmark functions col-
lected from literature [32,39,40] are used in the experiments and are listed in Table 1. These functions have different prop-
erties such as unimodality, multimodality, separable and non-separable. These properties of the functions are given in
column C of Table 1. If a function has only one local minimum, it is called a unimodal function, and the local minimum is
also the global minimum for the function. The exploitation ability of the methods is often tested on these types of functions.
If a function has more than one local minimum, this function is called the multimodal function, and multimodal functions

1. Calculate P(e_i) values by using (10).
2. Choose an equation by using P(e_i) and roulette-wheel selection.
3. Obtain a candidate solution by using the selected equation.
4. If the fitness of the new solution is better than the old one, increase the counter content by 1.

Fig. 2. The framework of the equation selector.



Fig. 3. The working diagram of the proposed approach.

have one or more global minimums [9]. In addition to the exploitation ability, the exploration ability of the methods is often tested on the multimodal functions. Because some functions can be reformulated as the sum of n functions of one variable, these functions are called separable functions. Non-separable functions cannot be reformulated in this way because there is an interrelation among their variables [9]. If a method attempts to change one variable of a non-separable function, the other variables are affected by this change, and the rest of the variables should be rearranged as required. Therefore, finding the optimum of a non-separable function is more difficult than finding the optimum of a separable function. Another important issue for the methods is the dimensionality of the search space [17]. Generally, solving high dimensional optimization problems is more difficult than solving low dimensional optimization problems. In column C of Table 1, M shows that the function is multimodal, U that it is unimodal, S that it is separable, and N that it is non-separable.
The benchmark functions used in the experiments are divided into three groups. The first group consists of low-dimen-
sional (D = 30) functions, the second group consists of middle-dimensional functions (D = 60) and the last group consists of
high-dimensional (D = 100) functions.
Before the experimental results and comparisons of the methods are given, we investigate the efficiency of the update
rules used in the proposed algorithm.

3.2. The analysis of update rules in the proposed approach

To show the results of the proposed search strategies and mechanisms, the proposed algorithm was run 30 times with
random seeds for the first twelve benchmark functions with D = 30, and the number of maximum function evaluations is

Table 1
Benchmark functions used in experiments (column C: U = unimodal, M = multimodal, S = separable, N = non-separable).

F1  Sphere; range [−100, 100]; US; f1(x) = Σ_{i=1..n} x_i²
F2  Elliptic; range [−100, 100]; UN; f2(x) = Σ_{i=1..n} (10⁶)^((i−1)/(n−1)) · x_i²
F3  SumSquares; range [−10, 10]; US; f3(x) = Σ_{i=1..n} i·x_i²
F4  SumPower; range [−10, 10]; MS; f4(x) = Σ_{i=1..n} |x_i|^(i+1)
F5  Schwefel 2.22; range [−10, 10]; UN; f5(x) = Σ_{i=1..n} |x_i| + Π_{i=1..n} |x_i|
F6  Schwefel 2.21; range [−100, 100]; UN; f6(x) = max_i {|x_i|, 1 ≤ i ≤ n}
F7  Step; range [−100, 100]; US; f7(x) = Σ_{i=1..n} (⌊x_i + 0.5⌋)²
F8  Quartic; range [−1.28, 1.28]; US; f8(x) = Σ_{i=1..n} i·x_i⁴
F9  QuarticWN; range [−1.28, 1.28]; US; f9(x) = Σ_{i=1..n} i·x_i⁴ + random[0, 1)
F10 Rosenbrock; range [−10, 10]; UN; f10(x) = Σ_{i=1..n−1} [100(x_{i+1} − x_i²)² + (x_i − 1)²]
F11 Rastrigin; range [−5.12, 5.12]; MS; f11(x) = Σ_{i=1..n} [x_i² − 10cos(2πx_i) + 10]
F12 Non-continuous Rastrigin; range [−5.12, 5.12]; MS; f12(x) = Σ_{i=1..n} [y_i² − 10cos(2πy_i) + 10], where y_i = x_i if |x_i| < 1/2 and y_i = round(2x_i)/2 if |x_i| ≥ 1/2
F13 Griewank; range [−600, 600]; MN; f13(x) = (1/4000)·Σ_{i=1..n} x_i² − Π_{i=1..n} cos(x_i/√i) + 1
F14 Schwefel 2.26; range [−500, 500]; UN; f14(x) = 418.98·n − Σ_{i=1..n} x_i·sin(√|x_i|)
F15 Ackley; range [−32, 32]; MN; f15(x) = −20·exp(−0.2·√((1/n)·Σx_i²)) − exp((1/n)·Σcos(2πx_i)) + 20 + e
F16 Penalized1; range [−50, 50]; MN; f16(x) = (π/n)·{10sin²(πy_1) + Σ_{i=1..n−1} (y_i − 1)²[1 + 10sin²(πy_{i+1})] + (y_n − 1)²} + Σ_{i=1..n} u(x_i, 10, 100, 4), where y_i = 1 + (1/4)(x_i + 1) and u(x_i, a, k, m) = k(x_i − a)^m if x_i > a; 0 if −a ≤ x_i ≤ a; k(−x_i − a)^m if x_i < −a
F17 Penalized2; range [−50, 50]; MN; f17(x) = 0.1·{sin²(3πx_1) + Σ_{i=1..n−1} (x_i − 1)²[1 + sin²(3πx_{i+1})] + (x_n − 1)²[1 + sin²(2πx_n)]} + Σ_{i=1..n} u(x_i, 5, 100, 4)
F18 Alpine; range [−10, 10]; MS; f18(x) = Σ_{i=1..n} |x_i·sin(x_i) + 0.1·x_i|
F19 Levy; range [−10, 10]; MN; f19(x) = Σ_{i=1..n−1} (x_i − 1)²[1 + sin²(3πx_{i+1})] + sin²(3πx_1) + |x_n − 1|·[1 + sin²(3πx_n)]
F20 Weierstrass; range [−0.5, 0.5]; MN; f20(x) = Σ_{i=1..D} Σ_{k=0..kmax} [a^k·cos(2πb^k(x_i + 0.5))] − D·Σ_{k=0..kmax} [a^k·cos(2πb^k·0.5)], with a = 0.5, b = 3, kmax = 20
F21 Schaffer; range [−100, 100]; MN; f21(x) = 0.5 + (sin²(√(Σx_i²)) − 0.5) / (1 + 0.001·Σx_i²)²
F22 Himmelblau; range [−5, 5]; MS; f22(x) = (1/n)·Σ_{i=1..n} (x_i⁴ − 16x_i² + 5x_i)
F23 Michalewicz; range [0, π]; MS; f23(x) = −Σ_{i=1..n} sin(x_i)·sin²⁰(i·x_i²/π)
F24 Shifted Sphere; range [−100, 100]; US; f24(x) = Σ_{i=1..n} z_i², z = x − o
F25 Shifted Rastrigin; range [−5.12, 5.12]; MS; f25(x) = Σ_{i=1..n} [z_i² − 10cos(2πz_i) + 10], z = x − o
F26 Shifted Griewank; range [−600, 600]; MN; f26(x) = (1/4000)·Σ_{i=1..n} z_i² − Π_{i=1..n} cos(z_i/√i) + 1, z = x − o
F27 Shifted Ackley; range [−32, 32]; MN; f27(x) = −20·exp(−0.2·√((1/n)·Σz_i²)) − exp((1/n)·Σcos(2πz_i)) + 20 + e, z = x − o
F28 Shifted Alpine; range [−10, 10]; MN; f28(x) = Σ_{i=1..n} |z_i·sin(z_i) + 0.1·z_i|, z = x − o

used for the termination of the method and is set to 1.5E+5. The means of the counters’ contents for each function are shown
in Fig. 4. As seen from Fig. 4, while the benchmark functions are being optimized by the proposed approach, the appropriate
update rule is selected according to the characteristics of the problem, and there is cooperation among the update rules.
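The counter-driven choice among the update rules can be sketched as follows. This is a minimal illustration, not the authors' exact selection scheme: it assumes a roulette-wheel choice proportional to each rule's success counter, with add-one smoothing (an assumed detail) so that no rule is ever starved of trials.

```python
import random

def pick_rule(counters):
    """Roulette-wheel selection over success counters (one per update rule).

    Each rule's weight is its success count plus one, so every rule keeps a
    nonzero chance of being tried (illustrative smoothing choice).
    """
    weights = [c + 1 for c in counters]
    total = sum(weights)
    r = random.uniform(0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

# Hypothetical success counters for the five update rules (Eqs. (5)-(9));
# a rule's counter is incremented whenever a candidate it produced
# replaced its parent solution.
counters = [120, 30, 45, 10, 5]
choice = pick_rule(counters)
assert 0 <= choice < 5
```

Under this scheme, rules that keep producing better solutions on the current problem are chosen more often, which is the cooperative behavior visible in Fig. 4.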
In the ABC algorithm, if a solution obtained by an employed or onlooker bee is better than the previous solution, the previous solution is replaced with the new one. The number of better solutions obtained in this way can therefore serve as an indicator of the exploitation ability of the ABC algorithm. To quantify the search ability of the methods, this study uses the improvement rate, defined as follows:

iR = (SS / Max_FEs) × 100    (11)

where iR is the improvement rate of the update rule, Max_FEs is the maximum number of function evaluations, and SS is the number of successful updates, i.e., a counter that is incremented by one whenever the new solution is better than the previous solution. For the benchmark functions with 30, 60 and 100 dimensions, Max_FEs is set to 1.5E+5, 3E+5 and 5E+5,
M.S. Kiran et al. / Information Sciences 300 (2015) 140–157

Fig. 4. The ratio of being selected of the update rules (Eqs. (5)–(9)) in the proposed ABCVSS algorithm on functions F1–F12.
respectively, and the improvement rates of the methods are given in Fig. 5. For this analysis, the basic ABC and the proposed approach (ABCVSS, for short) were each run 30 times with random seeds on the benchmark functions to obtain the improvement rates reported in the figures. Based on the iR values, ABCVSS applies more effective update rules than the basic ABC in most cases.
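Eq. (11) can be computed directly; the counts below are hypothetical and only illustrate the arithmetic:

```python
def improvement_rate(successful_updates, max_fes):
    """Eq. (11): percentage of function evaluations that produced a better solution."""
    return successful_updates / max_fes * 100.0

# Hypothetical example: 45,000 successful greedy replacements out of
# 150,000 evaluations (the 30-dimensional budget used in this study).
iR = improvement_rate(45_000, 150_000)
assert abs(iR - 30.0) < 1e-12  # iR = 30%
```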

3.3. The comparison of ABCVSS and ABC variants

This section compares the original ABC, GABC, MABC, ABC/Best/1, ABC/Best/2 and ABCVSS on the 28 benchmark functions listed in Table 1. The termination condition for the methods is Max_FEs, which is set to 1.5E+5 for the 30 and 100 (F22, F23) dimensional functions, 3E+5 for the 60 and 200 (F22, F23) dimensional functions and 5E+5 for the 100 and 300 (F22, F23) dimensional functions. For ABC, GABC and ABCVSS, the limit value for the populations is set to limit = D × N [9], where N is the number of employed bees or food sources and D is the dimensionality of the function. For ABC/Best/1 and ABC/Best/2, the limit value is set to limit = 0.6 × D × N, as given in [38].
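The two limit settings are straightforward to compute; the food-source count below (N = 25) is an assumed illustrative value, since N is not fixed by this excerpt:

```python
def abc_limit(D, N, variant="abc"):
    """Scout-trigger limit: D*N for ABC, GABC and ABCVSS [9];
    0.6*D*N for ABC/Best/1 and ABC/Best/2 [38]."""
    return D * N if variant == "abc" else 0.6 * D * N

# 30-dimensional function with an assumed N = 25 food sources:
assert abc_limit(30, 25) == 750
assert abc_limit(30, 25, variant="best") == 450.0
```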
The results obtained from 30 independent runs under the conditions given above are reported in Table 2 for the 30-dimensional functions, Table 3 for the 60-dimensional functions and Table 4 for the 100-dimensional functions. In the comparison tables, because the results of the MABC algorithm are taken directly from [39], the statistical significance test between the proposed method and the MABC algorithm could not be applied. In addition to solution quality, accuracy and robustness based on standard deviations, the convergence graphs of the methods are important metrics for comparing their approximation ability. The convergence graphs of the implemented methods are given in Fig. 6 for some functions.
Fig. 5. Improvement rates of the update rules of ABC and ABCVSS on the benchmark functions with 30 dimensions (a), 60 dimensions (b) and 100 dimensions (c).
As seen from the comparison tables, the ABCVSS algorithm outperforms the other ABC variants in most cases. Although the methods show the same performance on some functions, the convergence rate of the ABCVSS algorithm is better than that of the other algorithms. Based on the standard deviation values, the proposed method is more robust than the original ABC algorithm and the other variants, because the appropriate equation or equations are selected according to the characteristics of the test functions. In addition, as seen from Tables 2–4, the successful results obtained by the proposed approach do not depend on the dimensionalities of the numeric functions.
The Wilcoxon signed-rank test is also applied to the results of the original ABC and the proposed method; according to the last column of Tables 2–4, the proposed method is significantly different from the original ABC algorithm at the 0.05 significance level.
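The signed-rank test applied to paired per-run results can be sketched in a few lines; the error values below are hypothetical, and the critical value 3 is the standard tabulated two-sided 0.05 threshold for n = 8 pairs:

```python
def wilcoxon_T(x, y):
    """Two-sided Wilcoxon signed-rank statistic T for paired samples.

    Zero differences are dropped, tied |d| receive average ranks, and T is
    the smaller of the positive- and negative-rank sums.
    """
    d = [b - a for a, b in zip(x, y) if b != a]
    order = sorted(range(len(d)), key=lambda i: abs(d[i]))
    ranks = [0.0] * len(d)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg = (i + j) / 2.0 + 1.0          # average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    w_pos = sum(r for r, v in zip(ranks, d) if v > 0)
    w_neg = sum(r for r, v in zip(ranks, d) if v < 0)
    return min(w_pos, w_neg), len(d)

# Hypothetical per-run errors on one function: ABCVSS below ABC in every run.
abc    = [5.1e-16, 4.8e-16, 5.3e-16, 4.9e-16, 5.0e-16, 5.2e-16, 4.7e-16, 5.4e-16]
abcvss = [1.5e-81, 2.1e-81, 1.1e-81, 1.8e-81, 1.3e-81, 2.4e-81, 1.0e-81, 1.9e-81]
T, n = wilcoxon_T(abc, abcvss)
# For n = 8, the two-sided 0.05 critical value from standard tables is 3:
assert T <= 3  # the difference is significant at the 0.05 level
```

In practice a library routine such as SciPy's `scipy.stats.wilcoxon` would normally be used; the hand-rolled version above only makes the rank bookkeeping explicit.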

3.4. Comparison of ABCVSS and ABCs with orthogonal learning strategy

In the comparison of CABC, OABC, OGABC, OCABC and ABCVSS, Max_FEs is set to 5E+4 for the low-dimensional functions, 1E+5 for the middle-dimensional functions and 2E+5 for the high-dimensional functions. For CABC and the ABC variants that use the orthogonal learning strategy, the low, middle and high dimensionalities are defined as 15, 30 and 60 dimensions for the F1–F20 functions and 50, 100 and 200 dimensions for the F22–F23 functions [40]. Because the results of CABC, OABC, OGABC and OCABC are taken directly from the study of Gao et al. [40], ABCVSS is run 30 times with random seeds, and the results obtained by ABCVSS are compared with those of CABC, OABC, OGABC and OCABC. The comparison tables for the different dimensions are given in Tables 5–7.
There are no results for the F8 function in the comparison tables, because the F8 function in [40] is the exponential function and its optimum is reported as zero in [38], although its optimum is −1. Based on the comparison tables, the ABCVSS algorithm is, like OCABC, a remarkable method for solving numerical optimization problems. Similar to OCABC and its variants, ABCVSS learns the appropriate update rule according to the characteristics of the numeric functions, and more successful results are obtained by the proposed approach.

3.5. Comparison of proposed method and nature-inspired methods

The ABCVSS algorithm is also compared with DE, PSO, EA and their variants. The results of these methods are taken directly from [40]. The ABCVSS algorithm is compared with DE and its variants in Table 8, with PSO and its variants in Table 9 and with EAs in Table 10.
In the result tables, NA refers to a result that is not available in the corresponding reference. As seen from Table 8, the ABCVSS algorithm outperforms the DE variants in every case, with the exception of the Quartic function, where JADE beats ABCVSS. In Table 10, OGA/Q is better than ABCVSS on the Sphere and Schwefel 2.22 functions; for these functions, ABCVSS is better than the other EAs, and OGA/Q obtains the optimum. For the rest of the cases, ABCVSS is better than or equal to the EAs. When ABCVSS is compared with the PSOs in Table 9, FIPS is better than ABCVSS on the Quartic function, and OLPSO-G is better than ABCVSS on the Ackley function. For the rest of the functions, ABCVSS outperforms the PSOs in terms of solution quality.
Table 2
Comparison of the basic ABC variants on the 30 and 100 (F22, F23) dimensional functions.

Function | Original ABC: Mean, Std.Dev., Sign. | GABC: Mean, Std.Dev., Sign. | ABCBest1: Mean, Std.Dev., Sign. | ABCBest2: Mean, Std.Dev., Sign. | MABC: Mean, Std.Dev. | ABCVSS: Mean, Std.Dev.
F1 | 5.10E-16, 8.40E-17, + | 4.62E-16, 7.12E-17, + | 3.11E-47, 3.44E-47, + | 5.96E-35, 3.61E-35, + | 9.43E-32, 6.67E-32 | 1.53E-81, 8.37E-81
F2 | 4.79E-16, 9.88E-17, + | 3.62E-16, 7.88E-17, + | 5.35E-44, 4.91E-44, + | 1.70E-28, 2.35E-28, + | 3.66E-28, 5.96E-28 | 4.82E-82, 2.63E-81
F3 | 5.06E-16, 9.20E-17, + | 4.55E-16, 7.00E-17, + | 6.50E-48, 6.04E-48, + | 5.55E-36, 3.36E-36, + | 2.10E-32, 1.56E-32 | 3.19E-89, 1.48E-88
F4 | 2.85E-17, 9.69E-18, + | 1.64E-17, 8.07E-18, + | 1.77E-86, 7.02E-86, + | 3.00E-46, 1.07E-45, + | 2.70E-69, 5.38E-69 | 5.55E-115, 3.04E-114
F5 | 1.28E-15, 1.44E-16, + | 1.35E-15, 1.36E-16, + | 2.10E-25, 9.08E-26, + | 1.36E-18, 4.27E-19, + | 2.40E-17, 9.02E-18 | 7.89E-43, 4.32E-42
F6 | 7.27E-01, 3.25E-01, + | 2.18E-01, 4.01E-02, + | 2.18E+00, 3.27E-01, + | 3.55E+00, 4.79E-01, + | 1.02E+01, 1.49E+00 | 4.08E-02, 2.20E-02
F7 | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0 | 0, 0
F8 | 2.01E-16, 4.74E-17, + | 1.21E-16, 3.99E-17, + | 2.63E-97, 3.75E-97, + | 3.10E-76, 2.89E-76, + | 1.45E-67, 2.28E-67 | 3.25E-154, 1.78E-153
F9 | 4.86E-02, 1.49E-02, + | 2.03E-02, 5.74E-03, - | 2.06E-02, 4.75E-03, - | 2.53E-02, 4.67E-03, + | 3.71E-02, 8.53E-03 | 1.81E-02, 5.27E-03
F10 | 4.32E-02, 4.71E-02, - | 3.21E-01, 8.21E-01, - | 1.49E+01, 2.87E+01, + | 5.45E+00, 8.40E+00, + | 6.11E-01, 4.55E-01 | 3.87E-01, 1.54E+00
F11 | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0 | 0, 0
F12 | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0 | 0, 0
F13 | 7.62E-11, 4.18E-10, + | 3.70E-17, 5.32E-17, + | 0, 0, NA | 1.81E-08, 6.29E-08, + | 0, 0 | 0, 0
F14 | 1.09E-12, 9.06E-13, + | 9.42E-02, 5.16E-01, + | 1.33E-12, 8.18E-13, + | 1.76E-12, 3.32E-13, + | 1.21E-13, 4.53E-13 | 4.85E-13, 8.18E-13
F15 | 3.79E-14, 3.99E-15, + | 3.20E-14, 3.36E-15, + | 3.01E-14, 2.91E-15, + | 3.07E-14, 3.43E-15, + | 4.13E-14, 2.17E-15 | 2.45E-14, 4.00E-15
F16 | 5.08E-16, 5.15E-17, + | 4.12E-16, 8.36E-17, + | 1.57E-32, 5.57E-48, NA | 1.57E-32, 5.57E-48, NA | 1.90E-32, 3.70E-33 | 1.57E-32, 5.57E-48
F17 | 4.88E-16, 7.45E-17, + | 4.01E-16, 8.19E-17, + | 1.35E-32, 5.57E-48, NA | 1.35E-32, 5.57E-48, NA | 2.23E-31, 1.46E-31 | 1.35E-32, 5.57E-48
F18 | 8.82E-10, 2.19E-09, + | 3.41E-09, 1.13E-08, + | 3.00E-16, 8.99E-16, + | 3.23E-14, 9.14E-14, + | 1.58E-16, 2.48E-16 | 3.66E-44, 1.93E-43
F19 | 4.21E-16, 8.31E-17, + | 3.28E-16, 5.03E-17, + | 1.35E-31, 6.68E-47, NA | 1.35E-31, 6.68E-47, NA | 1.48E-31, 2.30E-32 | 1.35E-31, 6.68E-47
F20 | 0, 0, NA | 0, 0, NA | 4.74E-16, 1.80E-15, - | 9.47E-16, 2.46E-15, + | 0, 0 | 0, 0
F21 | 3.27E-01, 4.44E-02, + | 2.66E-01, 4.39E-02, - | 2.39E-01, 6.13E-02, + | 2.81E-01, 3.92E-02, - | 2.95E-01, 3.17E-02 | 2.84E-01, 5.69E-02
F22 | -7.83E+01, 1.02E-10, NA | -7.83E+01, 2.94E-14, NA | -7.83E+01, 6.68E-12, NA | -7.83E+01, 4.86E-09, + | -7.83E+01, 2.06E-07 | -7.83E+01, 3.02E-10
F23 | -9.63E+01, 4.42E-01, + | -9.94E+01, 4.18E-02, + | -9.57E+01, 3.89E-01, + | -8.94E+01, 4.72E-01, + | -9.07E+01, 5.03E-01 | -9.94E+01, 8.84E-02
F24 | 4.91E-16, 7.25E-17, + | 4.38E-16, 8.43E-17, + | 0, 0, NA | 0, 0, NA | 0, 0 | 0, 0
F25 | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0 | 0, 0
F26 | 8.17E-10, 4.47E-09, + | 3.33E-17, 5.17E-17, + | 8.81E-16, 3.38E-15, - | 1.46E-07, 7.78E-07, + | 0, 0 | 0, 0
F27 | 3.73E-14, 4.45E-15, + | 3.20E-14, 2.80E-15, + | 2.89E-14, 2.59E-15, + | 3.01E-14, 3.70E-15, + | 4.92E-14, 5.31E-15 | 2.53E-14, 4.55E-15
F28 | 1.35E-09, 3.48E-09, + | 6.65E-08, 2.39E-07, + | 1.50E-16, 2.48E-16, + | 1.33E-13, 4.89E-13, + | 1.38E-16, 8.11E-17 | 7.49E-17, 1.48E-17
Table 3
Comparison of the basic ABC variants on the 60 and 200 (F22, F23) dimensional functions.

Function | Original ABC: Mean, Std.Dev., Sign. | GABC: Mean, Std.Dev., Sign. | ABCBest1: Mean, Std.Dev., Sign. | ABCBest2: Mean, Std.Dev., Sign. | MABC: Mean, Std.Dev. | ABCVSS: Mean, Std.Dev.
F1 | 1.24E-15, 1.16E-16, + | 1.06E-15, 1.21E-16, + | 3.92E-44, 2.64E-44, + | 4.82E-33, 2.59E-33, + | 6.03E-29, 4.31E-29 | 1.09E-83, 5.01E-83
F2 | 1.15E-15, 1.50E-16, + | 8.97E-16, 9.29E-17, + | 1.70E-41, 9.16E-42, + | 5.86E-27, 1.13E-26, + | 3.51E-25, 2.72E-25 | 1.01E-82, 5.54E-82
F3 | 1.21E-15, 1.59E-16, + | 1.04E-15, 1.27E-16, + | 2.06E-44, 1.83E-44, + | 9.10E-34, 3.87E-34, + | 1.39E-29, 8.84E-30 | 8.17E-86, 4.47E-85
F4 | 4.31E-17, 1.47E-17, + | 2.85E-17, 1.01E-17, + | 8.74E-74, 4.63E-73, + | 7.53E-39, 3.95E-38, + | 3.00E-62, 3.87E-62 | 1.59E-118, 8.70E-118
F5 | 2.80E-15, 2.40E-16, + | 2.96E-15, 1.85E-16, + | 8.48E-24, 2.31E-24, + | 1.58E-17, 3.32E-18, + | 6.96E-16, 1.20E-16 | 1.47E-45, 7.00E-45
F6 | 9.56E+00, 2.32E+00, + | 4.47E+00, 6.09E-01, + | 2.10E+01, 1.68E+00, + | 2.40E+01, 2.16E+00, + | 3.77E+01, 3.14E+00 | 1.68E+00, 4.05E-01
F7 | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0 | 0, 0
F8 | 4.89E-16, 6.20E-17, + | 3.73E-16, 6.67E-17, + | 4.65E-91, 7.81E-91, + | 6.76E-72, 6.26E-72, + | 5.00E-62, 9.38E-62 | 7.09E-171, 0
F9 | 9.66E-02, 1.84E-02, + | 5.43E-02, 7.03E-03, + | 6.11E-02, 8.89E-03, + | 6.79E-02, 9.38E-03, + | 1.14E-01, 1.16E-02 | 4.35E-02, 7.69E-03
F10 | 9.28E-02, 1.37E-01, - | 3.30E+00, 1.28E+01, - | 5.04E+01, 5.46E+01, + | 5.10E+01, 3.77E+01, + | 1.51E+00, 1.34E+00 | 5.27E-01, 1.18E+00
F11 | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0 | 0, 0
F12 | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0 | 0, 0
F13 | 3.74E-16, 2.92E-16, + | 2.47E-04, 1.35E-03, + | 0, 0, NA | 3.96E-09, 2.04E-08, + | 0, 0 | 0, 0
F14 | 9.55E-01, 5.23E+00, + | 3.97E+01, 6.47E+01, + | 3.99E-11, 3.64E-12, + | 3.95E+00, 2.16E+01, + | 3.56E-11, 2.18E-12 | 3.64E-11, 0
F15 | 8.68E-14, 8.48E-15, + | 7.31E-14, 5.57E-15, + | 6.93E-14, 5.00E-15, + | 7.47E-14, 4.12E-15, + | 1.37E-13, 1.24E-14 | 5.93E-14, 7.65E-15
F16 | 1.18E-15, 1.13E-16, + | 1.05E-15, 1.21E-16, + | 7.85E-33, 2.78E-48, NA | 7.85E-33, 2.78E-48, NA | 6.19E-31, 3.62E-31 | 7.85E-33, 2.78E-48
F17 | 1.14E-15, 1.33E-16, + | 1.01E-15, 1.28E-16, + | 1.35E-32, 5.57E-48, NA | 1.35E-32, 5.57E-48, NA | 3.80E-29, 1.87E-29 | 1.35E-32, 5.57E-48
F18 | 2.31E-07, 7.68E-07, + | 7.34E-07, 1.70E-06, + | 5.29E-16, 1.25E-15, + | 2.23E-11, 3.77E-11, + | 8.20E-16, 4.69E-16 | 5.42E-47, 2.02E-46
F19 | 1.07E-15, 1.26E-16, + | 8.89E-16, 8.73E-17, + | 1.35E-31, 6.68E-47, NA | 1.41E-31, 1.47E-32, + | 4.08E-30, 2.58E-30 | 1.35E-31, 6.68E-47
F20 | 1.75E-14, 1.03E-14, + | 9.00E-15, 7.90E-15, + | 2.42E-14, 8.47E-15, + | 2.65E-14, 8.94E-15, + | 9.94E-15, 5.68E-15 | 0, 0
F21 | 4.76E-01, 7.84E-03, - | 4.62E-01, 1.79E-02, + | 4.61E-01, 1.15E-02, + | 4.68E-01, 9.17E-03, - | 4.84E-01, 3.62E-03 | 4.72E-01, 1.42E-02
F22 | -7.83E+01, 5.77E-09, + | -7.83E+01, 4.89E-14, + | -7.83E+01, 3.71E-11, + | -7.83E+01, 1.76E-08, + | -7.83E+01, 2.40E-07 | -7.83E+01, 6.84E-06
F23 | -1.93E+02, 6.18E-01, + | -1.96E+02, 2.84E-01, + | -1.86E+02, 6.01E-01, + | -1.76E+02, 6.92E-01, + | -1.74E+02, 9.91E-01 | -1.99E+02, 6.38E-01
F24 | 1.16E-15, 1.63E-16, + | 1.01E-15, 1.23E-16, + | 0, 0, NA | 0, 0, NA | 5.61E-29, 4.18E-29 | 0, 0
F25 | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0 | 0, 0
F26 | 2.63E-16, 2.71E-16, + | 6.66E-17, 1.08E-16, + | 0, 0, NA | 1.44E-08, 7.17E-08, + | 0, 0 | 0, 0
F27 | 8.38E-14, 7.20E-15, + | 7.54E-14, 5.00E-15, + | 6.90E-14, 4.82E-15, + | 7.39E-14, 3.54E-15, + | 2.00E-13, 3.07E-14 | 5.97E-14, 8.52E-15
F28 | 1.04E-07, 2.24E-07, + | 1.24E-05, 5.65E-05, + | 1.80E-16, 1.17E-16, - | 2.53E-10, 1.17E-09, + | 9.71E-16, 5.70E-16 | 2.86E-16, 3.41E-16
Table 4
Comparison of the ABC variants on the 100 and 300 (F22, F23) dimensional functions.

Function | Original ABC: Mean, Std.Dev., Sign. | GABC: Mean, Std.Dev., Sign. | ABCBest1: Mean, Std.Dev., Sign. | ABCBest2: Mean, Std.Dev., Sign. | MABC: Mean, Std.Dev. | ABCVSS: Mean, Std.Dev.
F1 | 2.18E-15, 2.35E-16, + | 1.84E-15, 1.72E-16, + | 1.54E-42, 8.93E-43, + | 5.09E-32, 2.03E-32, + | 1.43E-27, 8.12E-28 | 1.01E-83, 5.52E-83
F2 | 2.03E-15, 1.98E-16, + | 1.66E-15, 1.72E-16, + | 5.08E-40, 3.02E-40, + | 2.81E-26, 1.61E-26, + | 3.52E-24, 2.47E-25 | 5.29E-75, 2.80E-74
F3 | 2.22E-15, 2.08E-16, + | 1.85E-15, 2.00E-16, + | 8.94E-43, 7.34E-43, + | 2.15E-32, 1.24E-32, + | 4.46E-28, 2.08E-28 | 5.31E-85, 2.68E-84
F4 | 5.25E-17, 1.38E-17, + | 3.66E-17, 9.63E-18, + | 8.92E-60, 4.00E-59, + | 1.92E-29, 6.05E-29, + | 1.92E-48, 3.42E-48 | 4.86E-109, 2.66E-108
F5 | 5.01E-15, 2.95E-16, + | 5.17E-15, 2.24E-16, + | 6.27E-23, 1.09E-23, + | 7.21E-17, 1.36E-17, + | 4.41E-15, 1.50E-15 | 3.94E-40, 2.12E-39
F6 | 2.39E+01, 3.32E+00, + | 1.59E+01, 1.55E+00, + | 4.72E+01, 2.29E+00, + | 5.06E+01, 2.67E+00, + | 5.98E+01, 1.60E+00 | 7.91E+00, 1.32E+00
F7 | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0 | 0, 0
F8 | 9.81E-16, 1.02E-16, + | 7.50E-16, 9.05E-17, + | 2.45E-88, 2.81E-88, + | 1.61E-69, 1.73E-69, + | 5.72E-60, 5.32E-60 | 6.79E-166, 0
F9 | 1.65E-01, 2.60E-02, + | 9.47E-02, 1.26E-02, + | 1.30E-01, 1.12E-02, + | 1.42E-01, 1.58E-02, + | 2.31E-01, 2.79E-02 | 7.87E-02, 1.16E-02
F10 | 1.97E-01, 3.53E-01, - | 2.40E+01, 3.39E+01, + | 5.81E+01, 6.80E+01, + | 1.18E+02, 5.76E+01, + | 1.98E+00, 1.30E+00 | 5.14E-01, 1.05E+00
F11 | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0 | 0, 0
F12 | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0 | 0, 0
F13 | 4.66E-16, 4.19E-16, + | 1.26E-16, 1.56E-16, + | 0, 0, NA | 2.94E-10, 1.16E-09, + | 0, 0 | 0, 0
F14 | 4.55E-10, 1.79E-09, - | 1.20E+02, 1.22E+02, + | 1.22E-10, 4.14E-12, + | 1.32E+00, 7.21E+00, + | 1.19E-10, 4.06E-12 | 1.12E-10, 3.67E-12
F15 | 1.51E-13, 6.97E-15, + | 1.32E-13, 9.74E-15, + | 1.27E-13, 7.15E-15, + | 1.39E-13, 5.75E-15, + | 3.56E-13, 2.29E-14 | 1.11E-13, 9.93E-15
F16 | 2.16E-15, 2.62E-16, + | 1.90E-15, 1.95E-16, + | 4.71E-33, 1.39E-48, NA | 4.71E-33, 1.39E-48, NA | 1.89E-30, 8.42E-31 | 4.71E-33, 1.39E-48
F17 | 2.11E-15, 2.07E-16, + | 1.85E-15, 1.68E-16, + | 1.35E-32, 5.57E-48, NA | 2.18E-32, 6.62E-33, + | 1.81E-28, 6.44E-29 | 1.35E-32, 5.57E-48
F18 | 1.18E-05, 2.84E-05, + | 1.92E-05, 3.56E-05, + | 1.83E-16, 4.50E-16, + | 8.44E-10, 1.74E-09, + | 5.83E-15, 1.97E-15 | 2.78E-17, 1.17E-16
F19 | 2.02E-15, 1.57E-16, + | 1.70E-15, 1.76E-16, + | 1.35E-31, 6.68E-47, NA | 2.42E-31, 1.85E-31, + | 8.49E-29, 3.57E-29 | 1.35E-31, 6.68E-47
F20 | 8.15E-14, 2.33E-14, + | 6.06E-14, 1.79E-14, + | 1.18E-13, 2.70E-14, + | 1.29E-13, 2.33E-14, + | 5.21E-14, 6.69E-15 | 1.89E-15, 7.21E-15
F21 | 4.97E-01, 7.90E-04, + | 4.95E-01, 1.87E-03, + | 4.96E-01, 8.28E-04, + | 4.97E-01, 6.58E-04, + | 4.99E-01, 1.75E-04 | 4.98E-01, 8.15E-04
F22 | -7.83E+01, 8.57E-11, + | -7.83E+01, 1.72E-02, + | -7.83E+01, 1.97E-12, + | -7.83E+01, 1.26E-09, + | -7.83E+01, 1.84E-08 | -7.83E+01, 6.85E-06
F23 | -2.89E+02, 1.14E+00, + | -2.88E+02, 6.19E-01, + | -2.79E+02, 7.32E-01, + | -2.67E+02, 7.75E-01, + | -2.63E+02, 7.14E-01 | -2.98E+02, 1.20E+00
F24 | 2.16E-15, 1.81E-16, + | 1.90E-15, 1.96E-16, + | 0, 0, NA | 0, 0, NA | 1.44E-27, 9.16E-28 | 0, 0
F25 | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0, NA | 0, 0 | 0, 0
F26 | 4.48E-16, 4.71E-16, + | 9.25E-17, 1.09E-16, + | 0, 0, NA | 1.01E-12, 5.23E-12, + | 0, 0 | 0, 0
F27 | 1.52E-13, 9.00E-15, + | 1.31E-13, 7.35E-15, + | 1.26E-13, 8.48E-15, + | 1.38E-13, 6.90E-15, + | 5.33E-13, 7.58E-14 | 1.15E-13, 1.08E-14
F28 | 4.29E-06, 1.47E-05, + | 1.03E-05, 2.50E-05, + | 3.76E-16, 3.77E-16, - | 1.01E-09, 2.12E-09, + | 3.01E-15, 1.39E-15 | 3.48E-16, 2.08E-16
Fig. 6. Convergence performance of the ABCs on the 15 test functions with different dimensions.

Table 5
Comparison of the CABC, OABC, OGABC, OCABC and ABCVSS on 15-dimensional functions F1–F20 and 50-dimensional functions F22–F23.

Function | CABC: Mean, Std.Dev. | OABC: Mean, Std.Dev. | OGABC: Mean, Std.Dev. | OCABC: Mean, Std.Dev. | ABCVSS: Mean, Std.Dev.
F1 | 4.26E-36, 4.76E-36 | 8.73E-24, 1.46E-23 | 3.12E-33, 4.85E-33 | 4.05E-41, 6.97E-41 | 3.01E-61, 1.49E-60
F2 | 2.32E-32, 3.03E-32 | 1.18E-19, 1.31E-19 | 2.57E-29, 5.56E-29 | 1.33E-37, 1.74E-37 | 1.21E-53, 6.64E-53
F3 | 3.51E-37, 6.22E-37 | 8.12E-26, 8.07E-26 | 2.02E-34, 3.04E-34 | 1.65E-42, 1.97E-42 | 4.91E-60, 1.90E-59
F4 | 7.98E-50, 2.35E-49 | 9.66E-34, 4.55E-33 | 1.40E-46, 4.72E-46 | 3.15E-58, 8.91E-58 | 2.67E-63, 1.46E-62
F5 | 3.11E-19, 3.27E-19 | 4.85E-13, 2.64E-13 | 1.09E-17, 8.70E-18 | 9.13E-22, 7.98E-22 | 6.14E-29, 1.93E-28
F6 | 3.91E-01, 1.06E-01 | 3.65E-01, 1.32E-01 | 5.41E-02, 1.54E-02 | 2.85E-02, 8.23E-02 | 3.13E-03, 3.68E-03
F7 | 0, 0 | 0, 0 | 0, 0 | 0, 0 | 0, 0
F9 | 1.62E-02, 7.26E-03 | 6.23E-03, 2.64E-03 | 3.64E-03, 1.33E-03 | 3.23E-03, 1.20E-03 | 9.28E-03, 3.13E-03
F10 | 1.58E-01, 1.37E-01 | 2.04E-01, 2.70E-01 | 5.32E-01, 9.43E-01 | 4.39E-01, 7.26E-01 | 2.57E-01, 7.91E-01
F11 | 0, 0 | 0, 0 | 1.26E-11, 6.32E-11 | 0, 0 | 0, 0
F12 | 0, 0 | 0, 0 | 0, 0 | 0, 0 | 0, 0
F13 | 9.39E-10, 4.69E-09 | 1.77E-17, 8.88E-17 | 9.85E-04, 2.75E-03 | 0, 0 | 9.04E-04, 2.78E-03
F14 | 2.18E-13, 3.96E-13 | 3.63E-14, 1.81E-13 | 1.09E-13, 3.01E-13 | 0, 0 | 1.21E-13, 3.14E-13
F15 | 1.03E-14, 3.06E-15 | 5.84E-10, 2.53E-10 | 3.49E-14, 1.21E-14 | 3.83E-15, 9.83E-16 | 8.88E-15, 2.91E-15
F16 | 3.14E-32, 1.11E-47 | 4.50E-25, 5.77E-25 | 3.14E-32, 1.11E-47 | 3.14E-32, 1.11E-47 | 3.14E-32, 1.11E-47
F17 | 1.34E-32, 5.58E-48 | 8.30E-24, 1.11E-23 | 1.46E-32, 2.42E-33 | 1.34E-32, 5.58E-48 | 1.34E-32, 5.57E-48
F18 | 4.57E-20, 4.71E-20 | 6.37E-12, 1.82E-11 | 1.26E-16, 3.36E-16 | 1.15E-22, 9.62E-23 | 3.30E-32, 1.63E-31
F19 | 1.34E-31, 2.23E-47 | 2.84E-16, 6.67E-16 | 3.08E-30, 8.34E-30 | 1.34E-31, 2.23E-47 | 1.34E-31, 6.68E-47
F20 | 0, 0 | 4.59E-14, 1.38E-13 | 0, 0 | 0, 0 | 0, 0
F22 | -7.83E+01, 9.59E-10 | -7.83E+01, 6.09E-09 | -7.83E+01, 2.81E-11 | -7.83E+01, 2.52E-12 | -7.83E+01, 1.51E-12
F23 | -4.87E+01, 2.06E-01 | -4.79E+01, 4.84E-01 | -4.79E+01, 2.72E-01 | -4.88E+01, 1.77E-01 | -4.94E+01, 2.32E-01

Table 6
Comparison of the CABC, OABC, OGABC, OCABC and ABCVSS on 30-dimensional functions F1–F20 and 100-dimensional functions F22–F23.

Function | CABC: Mean, Std.Dev. | OABC: Mean, Std.Dev. | OGABC: Mean, Std.Dev. | OCABC: Mean, Std.Dev. | ABCVSS: Mean, Std.Dev.
F1 | 5.41E-35, 7.02E-35 | 1.83E-29, 2.06E-29 | 4.69E-38, 5.27E-38 | 4.32E-43, 8.16E-43 | 1.20E-57, 5.01E-57
F2 | 4.67E-31, 6.68E-31 | 1.91E-25, 2.04E-25 | 5.38E-34, 9.25E-34 | 3.59E-39, 6.23E-39 | 2.44E-52, 1.34E-51
F3 | 7.20E-36, 7.79E-36 | 4.09E-30, 6.99E-30 | 3.42E-38, 1.02E-37 | 4.00E-44, 8.10E-44 | 9.53E-61, 4.52E-60
F4 | 3.24E-48, 7.87E-48 | 9.25E-43, 4.07E-42 | 7.36E-57, 2.60E-56 | 1.03E-64, 1.94E-64 | 1.29E-78, 5.77E-78
F5 | 1.43E-18, 5.68E-19 | 1.59E-15, 1.24E-15 | 5.33E-20, 3.13E-20 | 1.17E-22, 7.13E-23 | 1.47E-27, 7.90E-27
F6 | 6.01E+00, 8.80E-01 | 2.35E+00, 7.12E-01 | 5.84E-01, 1.83E-01 | 5.67E-01, 2.73E-01 | 5.09E-01, 3.55E-01
F7 | 0, 0 | 0, 0 | 0, 0 | 0, 0 | 0, 0
F9 | 4.98E-02, 1.78E-02 | 5.46E-03, 2.01E-03 | 5.09E-03, 2.07E-03 | 4.39E-03, 2.03E-03 | 2.46E-02, 9.49E-03
F10 | 1.96E-01, 1.36E-01 | 6.70E-01, 6.16E-01 | 1.10E+00, 1.74E+00 | 7.89E-01, 6.27E-01 | 4.92E-01, 1.15E+00
F11 | 0, 0 | 0, 0 | 0, 0 | 0, 0 | 0, 0
F12 | 0, 0 | 0, 0 | 1.18E-09, 5.30E-09 | 0, 0 | 0, 0
F13 | 4.93E-04, 2.20E-03 | 0, 0 | 0, 0 | 0, 0 | 0, 0
F14 | 1.00E-12, 9.28E-13 | 2.72E-13, 6.66E-13 | 0, 0 | 0, 0 | 9.09E-13, 9.25E-13
F15 | 3.07E-14, 2.88E-15 | 2.09E-11, 7.68E-12 | 1.93E-14, 4.07E-15 | 5.32E-15, 1.82E-15 | 2.84E-14, 5.02E-15
F16 | 1.57E-32, 2.80E-48 | 6.57E-31, 8.36E-31 | 1.57E-32, 2.80E-48 | 1.57E-32, 2.80E-48 | 1.57E-32, 5.57E-48
F17 | 1.34E-32, 2.80E-48 | 1.24E-29, 1.59E-29 | 1.34E-32, 2.80E-48 | 1.34E-32, 2.80E-48 | 1.34E-32, 5.57E-48
F18 | 3.53E-19, 2.66E-19 | 1.79E-15, 3.24E-15 | 7.22E-17, 1.90E-16 | 2.19E-23, 3.07E-23 | 1.30E-29, 4.89E-29
F19 | 1.34E-31, 0 | 9.23E-28, 2.21E-27 | 1.34E-31, 0 | 1.34E-31, 0 | 1.34E-31, 6.68E-47
F20 | 0, 0 | 0, 0 | 0, 0 | 0, 0 | 0, 0
F22 | -7.83E+01, 1.63E-09 | -7.83E+01, 3.59E-10 | -7.83E+01, 8.60E-11 | -7.83E+01, 1.12E-11 | -7.83E+01, 3.80E-13
F23 | -9.56E+01, 4.24E-01 | -9.50E+01, 4.84E-01 | -9.52E+01, 6.15E-01 | -9.68E+01, 3.82E-01 | -9.85E+01, 5.68E-01

4. Results and discussion

In the present study, the proposed method is compared with the basic ABC and its variants, CABC and its variants, DEs, PSOs and EAs. The results of these comparisons are summarized below.
The proposed ABCVSS algorithm is compared with the variants of the ABC algorithm in Tables 2–4. Based on Table 2, the
compared methods have equal performance for F7, F11, F12, F20 and F25 functions. The basic ABC algorithm is slightly better
than the other methods for the F10 function, the MABC algorithm is slightly better than the other methods for the F14 func-
tion and the ABCBest1 algorithm is slightly better than the other methods for the F21 function. For the F1, F2, F3, F4, F5, F6,
F8, F9, F15, F18, F23, F27 and F28 functions, the proposed method is better than the other methods. For the rest of the functions, the ABCVSS algorithm shows the same performance as at least one algorithm. According to Table 3, the methods show the same performance on the F7, F11, F12 and F25 functions. For the rest of the functions, although the ABCVSS algorithm performs better than the other methods on most of the functions, it has equal performance with at least one
Table 7
Comparison of the CABC, OABC, OGABC, OCABC and ABCVSS on 60-dimensional functions F1–F20 and 200-dimensional functions F22–F23.

Function | CABC: Mean, Std.Dev. | OABC: Mean, Std.Dev. | OGABC: Mean, Std.Dev. | OCABC: Mean, Std.Dev. | ABCVSS: Mean, Std.Dev.
F1 | 2.63E-34, 2.68E-34 | 2.37E-36, 2.14E-36 | 7.99E-44, 9.34E-44 | 3.16E-46, 2.33E-46 | 5.24E-53, 2.86E-52
F2 | 3.43E-30, 3.60E-30 | 3.91E-32, 6.69E-32 | 7.09E-40, 6.85E-40 | 1.74E-43, 1.23E-43 | 1.68E-51, 7.53E-51
F3 | 1.64E-34, 1.45E-34 | 7.51E-37, 3.57E-37 | 6.02E-45, 5.84E-45 | 1.86E-47, 1.54E-47 | 4.06E-53, 1.58E-52
F4 | 2.65E-47, 5.54E-47 | 3.61E-59, 4.56E-59 | 3.05E-70, 3.43E-70 | 2.30E-74, 4.72E-73 | 8.18E-74, 3.55E-73
F5 | 5.19E-18, 2.10E-18 | 7.80E-19, 4.64E-19 | 7.64E-23, 9.40E-24 | 3.23E-24, 1.13E-24 | 1.76E-27, 7.06E-27
F6 | 3.99E+01, 7.62E+00 | 7.09E+00, 9.02E-01 | 5.19E+00, 1.37E+00 | 4.76E+00, 1.08E+00 | 6.01E+00, 1.04E+00
F7 | 0.00E+00, 0.00E+00 | 0.00E+00, 0.00E+00 | 0.00E+00, 0.00E+00 | 0.00E+00, 0.00E+00 | 0.00E+00, 0.00E+00
F9 | 1.99E-01, 6.07E-02 | 8.11E-03, 1.23E-03 | 7.64E-03, 1.95E-03 | 4.83E-03, 1.90E-03 | 5.76E-02, 1.16E-02
F10 | 3.27E-01, 6.37E-01 | 1.11E+00, 1.06E+00 | 1.49E+00, 1.27E+00 | 6.24E-01, 7.87E-01 | 3.68E-01, 8.48E-01
F11 | 0.00E+00, 0.00E+00 | 4.29E-05, 1.66E-04 | 2.63E-13, 1.31E-12 | 0.00E+00, 0.00E+00 | 0.00E+00, 0.00E+00
F12 | 0.00E+00, 0.00E+00 | 2.67E-08, 5.99E-08 | 9.78E-14, 4.26E-13 | 0.00E+00, 0.00E+00 | 0.00E+00, 0.00E+00
F13 | 1.49E-08, 3.31E-08 | 4.92E-04, 2.46E-03 | 2.95E-04, 1.47E-03 | 0.00E+00, 0.00E+00 | 0.00E+00, 0.00E+00
F14 | 3.85E-11, 3.25E-12 | 8.70E-08, 4.34E-07 | 3.71E-11, 1.62E-12 | 3.63E-11, 0.00E+00 | 3.69E-11, 1.85E-12
F15 | 8.31E-14, 6.92E-15 | 1.57E-13, 3.12E-14 | 7.81E-15, 1.58E-15 | 7.10E-15, 9.17E-16 | 7.02E-14, 6.44E-15
F16 | 7.85E-33, 2.79E-48 | 7.85E-33, 2.79E-48 | 7.85E-33, 2.79E-48 | 7.85E-33, 1.41E-48 | 7.85E-33, 2.78E-48
F17 | 1.34E-32, 5.58E-48 | 1.34E-32, 5.58E-48 | 1.34E-32, 5.58E-48 | 1.34E-32, 2.83E-48 | 1.34E-32, 5.57E-48
F18 | 4.51E-18, 8.82E-18 | 1.95E-16, 3.11E-16 | 3.95E-16, 5.76E-16 | 2.22E-17, 2.73E-17 | 3.49E-27, 1.89E-26
F19 | 1.34E-32, 2.23E-47 | 1.34E-32, 2.23E-47 | 1.34E-32, 2.23E-47 | 1.34E-32, 0.00E+00 | 1.34E-31, 6.68E-47
F20 | 8.52E-15, 7.78E-15 | 0.00E+00, 0.00E+00 | 6.87E-05, 1.53E-04 | 0.00E+00, 0.00E+00 | 9.47E-16, 3.61E-15
F22 | -7.83E+01, 1.70E-09 | -7.83E+01, 9.28E-09 | -7.83E+01, 1.35E-09 | -7.83E+01, 2.26E-10 | -7.83E+01, 5.12E-14
F23 | -1.87E+02, 8.33E-01 | -1.89E+02, 7.57E-01 | -1.90E+02, 8.73E-01 | -1.91E+02, 5.29E-01 | -1.96E+02, 1.42E+00

Table 8
Comparison of the ABCVSS and DEs on some 30-dimensional functions.

Function | Max_FEs | DE: Mean, Std.Dev. | JDE: Mean, Std.Dev. | JADE: Mean, Std.Dev. | SADE: Mean, Std.Dev. | ABCVSS: Mean, Std.Dev.
Sphere | 150,000 | 9.80E-14, 8.40E-14 | 1.46E-28, 1.78E-28 | 2.69E-56, 1.41E-55 | 3.28E-20, 3.63E-20 | 1.08E-88, 5.93E-88
Schwefel 2.22 | 200,000 | 1.60E-09, 1.10E-09 | 9.02E-24, 6.01E-24 | 3.18E-25, 2.05E-24 | 3.51E-25, 2.74E-25 | 9.058E-56, 4.32E-55
Rosenbrock | 300,000 | 2.10E+00, 1.5E+00 | 1.30E+01, 1.40E+01 | 3.20E-01, 1.10E+00 | 2.10E+01, 7.80E+00 | 0.0560714, 0.13225
Step | 10,000 | 4.70E+03, 1.10E+03 | 6.13E+02, 1.72E+02 | 5.62E+00, 1.87E+00 | 5.07E+01, 1.34E+01 | 0, 0
Quartic | 300,000 | 4.70E-03, 1.20E-03 | 3.35E-03, 8.68E-04 | 6.14E-04, 2.55E-04 | 4.86E-03, 5.21E-04 | 0.0121398, 0.002998
Schwefel 2.26 | 100,000 | 5.90E-04, 1.10E+03 | 1.70E-10, 1.71E-10 | 2.62E-04, 3.59E-04 | 1.13E-08, 1.08E-08 | 4.851E-13, 8.18E-13
Rastrigin | 100,000 | 1.80E+02, 1.30E+01 | 3.32E-04, 6.39E-04 | 1.33E-01, 9.74E-02 | 2.43E+00, 1.60E+00 | 0, 0
Ackley | 50,000 | 1.10E-01, 3.90E-02 | 2.37E-04, 7.10E-05 | 3.35E-09, 2.84E-09 | 3.81E-06, 8.26E-07 | 7.322E-13, 3.41E-12
Griewank | 50,000 | 2.00E-01, 1.10E-01 | 7.29E-06, 1.05E-05 | 1.57E-08, 1.09E-07 | 2.52E-09, 1.24E-09 | 3.178E-08, 1.66E-07
Penalized 1 | 50,000 | 1.20E+00, 1.00E-02 | 7.03E-08, 5.74E-08 | 1.67E-15, 1.02E-14 | 8.25E-12, 5.12E-12 | 8.271E-28, 4.38E-27
Penalized 2 | 50,000 | 7.50E-02, 3.80E-02 | 1.80E-05, 1.42E-05 | 1.87E-10, 1.09E-09 | 1.93E-09, 1.53E-09 | 2.318E-27, 1.18E-26
Alpine | 300,000 | 2.30E-04, 1.70E-04 | 6.08E-10, 8.36E-10 | 2.78E-05, 8.43E-06 | 2.94E-06, 3.47E-06 | 2.323E-93, 1.27E-92

Table 9
Comparison of the ABCVSS and PSOs on some 30-dimensional functions.

Function | Max_FEs | PSO: Mean, Std.Dev. | FIPS: Mean, Std.Dev. | HPSO-TVAC: Mean, Std.Dev. | CLPSO: Mean, Std.Dev. | OLPSO-G: Mean, Std.Dev. | ABCVSS: Mean, Std.Dev.
Sphere | 200,000 | 3.34E-14, 5.39E-14 | 2.42E-13, 1.73E-13 | 2.83E-33, 3.19E-33 | 1.58E-12, 7.70E-13 | 4.12E-54, 6.34E-54 | 1.73E-112, 9.5E-112
Schwefel 2.22 | 200,000 | 1.70E-10, 1.39E-10 | 2.76E-08, 9.04E-09 | 9.03E-20, 9.58E-20 | 2.51E-08, 5.84E-09 | 9.85E-30, 1.01E-29 | 2.486E-65, 1.33E-64
Rosenbrock | 200,000 | 2.80E+01, 2.17E+01 | 2.51E+01, 5.10E-01 | 2.39E+01, 2.65E+01 | 1.13E+01, 9.85E+00 | 2.15E+01, 2.99E+01 | 0.0725126, 0.110764
Step | 200,000 | 0.00E+00, 0.00E+00 | 0.00E+00, 0.00E+00 | 0.00E+00, 0.00E+00 | 0.00E+00, 0.00E+00 | 0.00E+00, 0.00E+00 | 0.00E+00, 0.00E+00
Quartic | 200,000 | 2.29E-02, 5.60E-03 | 4.24E-03, 1.28E-04 | 9.82E-02, 3.26E-02 | 5.85E-03, 1.11E-03 | 1.16E-02, 4.10E-03 | 0.0163089, 0.004049
Schwefel 2.26 | 200,000 | 3.16E+03, 4.06E+02 | 9.93E+02, 5.09E+02 | 1.59E+03, 3.26E+02 | 3.82E-04, 1.28E-05 | 3.84E+02, 2.17E+02 | 1.213E-13, 4.61E-13
Rastrigin | 200,000 | 3.57E+01, 6.89E+00 | 6.51E+01, 1.33E+01 | 9.43E+00, 3.48E+00 | 9.05E-05, 1.25E-04 | 1.07E+00, 9.92E-01 | 0.00E+00, 0.00E+00
NCRastrigin | 200,000 | 4.36E+01, 1.12E+01 | 7.01E+01, 1.47E+01 | 1.03E+01, 8.24E+00 | 1.54E+00, 2.75E+00 | 2.18E+00, 6.31E-01 | 0.00E+00, 0.00E+00
Ackley | 200,000 | 8.20E-08, 6.73E-08 | 2.33E-07, 7.19E-08 | 7.29E-14, 3.00E-14 | 3.66E-07, 7.57E-08 | 7.98E-15, 2.03E-15 | 2.274E-14, 5.41E-15
Griewank | 200,000 | 1.53E-03, 4.32E-03 | 9.01E-12, 1.84E-11 | 9.75E-03, 8.33E-03 | 9.02E-09, 8.57E-09 | 4.83E-03, 8.63E-03 | 0.00E+00, 0.00E+00
Penalized 1 | 200,000 | 8.10E-16, 1.07E-15 | 1.96E-15, 1.11E-15 | 2.71E-29, 1.88E-28 | 6.45E-14, 3.70E-14 | 1.59E-32, 1.03E-33 | 1.571E-32, 5.57E-48
Penalized 2 | 200,000 | 3.26E-13, 3.70E-13 | 2.70E-14, 1.57E-14 | 2.79E-28, 2.18E-28 | 1.25E-12, 9.45E-12 | 4.39E-04, 2.20E-03 | 1.35E-32, 5.57E-48

Table 10
Comparison of the ABCVSS and EAs on some 30-dimensional functions.

Sphere; Schwefel 2.22; Schwefel 2.26 (Algorithm | Max_FEs: Mean, Std.Dev. per function):
ALEP | 150,000: 6.30E-04, 7.60E-05 | 150,000: NA, NA | 150,000: 1.00E+03, 5.80E+01
FEP | 150,000: 5.70E-04, 1.30E-04 | 200,000: 8.10E-03, 7.70E-04 | 900,000: 1.40E+01, 5.20E+01
CEP/best | 250,000: 3.90E-07, NA | 250,000: 1.90E-03, NA | NA: NA, NA
OGA/Q | 112,559: 0, 0 | 112,612: 0, 0 | 302,116: 3.00E-02, 6.40E-04
LEA | 110,654: 4.70E-16, 6.20E-17 | 110,031: 4.20E-09, 4.20E-19 | 302,116: 3.00E-02, 6.40E-04
ABCVSS | 100,000: 1.20E-57, 5.01E-57 | 100,000: 1.47E-27, 7.90E-27 | 100,000: 9.09E-13, 9.25E-13

Rastrigin; Griewank; Penalized 1:
ALEP | 150,000: 5.80E+00, 2.10E+00 | 150,000: 2.40E-02, 2.80E-02 | 150,000: 6.00E-06, 1.00E-06
FEP | 500,000: 4.60E-02, 1.20E-02 | 200,000: 1.60E-02, 2.20E-02 | 150,000: 9.20E-06, 3.60E-06
CEP/best | 250,000: 4.70E+00, NA | 250,000: 2.70E-07, NA | NA: NA, NA
OGA/Q | 224,710: 0, 0 | 134,000: 0, 0 | 134,556: 6.00E-06, 1.10E-06
LEA | 223,803: 2.10E-18, 3.30E-18 | 140,498: 6.10E-16, 2.50E-17 | 132,642: 2.40E-06, 2.20E-06
ABCVSS | 100,000: 0, 0 | 100,000: 0, 0 | 100,000: 1.57E-32, 5.57E-48

Penalized 2; Himmelblau; Michalewicz:
ALEP | 150,000: 9.80E-05, 1.20E-05 | NA: NA, NA | NA: NA, NA
FEP | 150,000: 1.60E-04, 7.30E-05 | NA: NA, NA | NA: NA, NA
EDA/L | 114,570: 3.40E-21, NA | 153,116: -78.3107, NA | 168,885: -94.3757, NA
OGA/Q | 134,143: 1.80E-04, 2.60E-05 | 245,930: -78.3, 6.20E-03 | 302,773: -9.28E+01, 2.60E-02
LEA | 132,213: 1.70E-04, 1.20E-04 | 243,895: -78.31, 6.10E-03 | 289,863: -9.30E+01, 2.30E-02
ABCVSS | 100,000: 1.35E-32, 5.57E-48 | 100,000: -78.3323, 3.80E-13 | 100,000: -98.5434, 5.68E-01

algorithm on some functions. The basic ABC algorithm is slightly better than the other methods for the F10 function, the MABC algorithm is slightly better than the other methods for the F14 function, and the ABCBest1 algorithm is slightly better than the other methods for the F21 and F28 functions. In Table 4, the ABCVSS algorithm has slightly lower performance for the F10 and F21 functions. For the rest of the benchmark functions, the ABCVSS algorithm has higher or equal performance in solving the numeric problems. As seen from Tables 2–4, as the dimensions of the benchmark functions increase, the proposed method preserves its search capability.
The ABCVSS algorithm is compared with CABC, OABC, OGABC, and OCABC in Tables 5–7. The proposed algorithm has a lower performance than the other algorithms for the F9, F10, F13, F14 and F15 functions, and it has higher or equal performance on the other benchmark functions in Table 5. The same situation is valid in Table 6, except for the F13 function. For the F13 function, the optimal solution is obtained by the proposed method, and the ABCVSS algorithm shows the same performance as the other methods on this function. In Table 7, while the ABCVSS algorithm has a lower performance than the other methods on the F6, F9, F10, F14, F15 and F20 functions, the proposed method has a higher performance than the other methods on F1, F2, F3, F4, F5, F13, F18 and F23. For the rest of the functions, the methods show similar performance. Tables 5–7 show that the proposed method is highly competitive in solving the numeric functions considered in the experiments.
In Table 8, the performance of the ABCVSS algorithm is compared with the DEs; this comparison shows that the proposed method is better than the DEs, except for the Quartic function, for which JADE shows the highest performance among the compared methods.
In Table 9, the proposed method is compared with the basic PSO algorithm and its variants. As seen from the table, all of the methods have the same performance on the Step function, the FIPS algorithm is better than the other methods for the Quartic function, and the OLPSO-G algorithm is better than the other methods for the Ackley function. For the rest of the benchmark functions, the ABCVSS algorithm is better than the PSO models.
In Table 10, the proposed algorithm is compared with EAs. The OGA/Q algorithm is better than the other methods for the Sphere and Schwefel 2.22 functions. The ABCVSS algorithm has higher performance than the EAs for the Penalized 1, Penalized 2, Himmelblau and Michalewicz functions. For the rest of the benchmark functions in Table 10, the ABCVSS and OGA/Q algorithms have the same performance.
156 M.S. Kiran et al. / Information Sciences 300 (2015) 140–157

The conceptual structure of ABC is advantageous because the algorithm combines individual searchers (employed bees), cooperative searchers (onlooker bees) and random searchers (scout bees), which perform different activities to obtain optimal or near-optimal solutions to optimization problems. Beyond this conceptual structure, however, a method also requires well-designed update rules to obtain high-quality results. The update rule of the basic ABC algorithm is strong at exploring the search space but poor at exploiting the neighborhood of the solutions found. The proposed approach improves not only the exploitation ability but also the exploration ability of the method, because different update rules can be used to generate new solutions within the same iteration. While some rules diversify the population, others intensify the search around it, and combining the rules balances exploration and exploitation in the algorithm. This integration of update rules gives the basic algorithm robust local and global search abilities while it searches the solution space of the optimization problem. For problems with different characteristics, the integration also has a practical edge, because determining which single update rule suits a given problem is itself an issue in terms of time and cost. Therefore, the proposed method is an alternative and comprehensive tool for solving optimization problems. In addition, the experimental results show that the proposed approach is better than the other methods in most cases, as seen from the comparisons.
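The variable-strategy mechanism described above can be sketched in a few lines. The two update rules below are representative ABC-style equations (the basic ABC rule and a best-guided rule in the spirit of GABC), not necessarily the exact five strategies of ABCVSS; the strategy here is picked uniformly at random rather than adaptively, and the sphere function, colony size and iteration count are illustrative assumptions only.

```python
import random

# Two representative ABC-style update rules (illustrative simplifications,
# not the exact five strategies of ABCVSS).
def rule_basic(x, partner, best, j):
    # Basic ABC rule: perturb dimension j toward/away from a random partner
    # (diversifies the population).
    phi = random.uniform(-1.0, 1.0)
    return x[j] + phi * (x[j] - partner[j])

def rule_best_guided(x, partner, best, j):
    # Best-guided rule: add a pull toward the best solution found so far
    # (intensifies the search around good regions).
    phi = random.uniform(-1.0, 1.0)
    psi = random.uniform(0.0, 1.5)
    return x[j] + phi * (x[j] - partner[j]) + psi * (best[j] - x[j])

RULES = [rule_basic, rule_best_guided]

def generate_candidate(pop, i, best):
    """Each bee picks one update rule per candidate, so different rules
    act within the same iteration."""
    x = pop[i]
    k = random.choice([n for n in range(len(pop)) if n != i])  # random partner
    j = random.randrange(len(x))            # perturb a single dimension
    rule = random.choice(RULES)             # variable search strategy
    cand = list(x)
    cand[j] = rule(x, pop[k], best, j)
    return cand

def sphere(x):                              # illustrative test function
    return sum(v * v for v in x)

random.seed(1)
pop = [[random.uniform(-5.0, 5.0) for _ in range(4)] for _ in range(10)]
for _ in range(2000):
    best = min(pop, key=sphere)
    for i in range(len(pop)):
        cand = generate_candidate(pop, i, best)
        if sphere(cand) < sphere(pop[i]):   # greedy selection, as in ABC
            pop[i] = cand
best_val = sphere(min(pop, key=sphere))
```

Note that the scout-bee phase and the probability-based onlooker selection of the full algorithm are omitted; only the per-bee choice among multiple update rules is illustrated.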

5. Conclusion and future works

It is widely accepted that problems with different characteristics require methods with well-balanced local and global search abilities. In ABC, an effective global search is provided by the scout bees and the update rule of the algorithm, while the local search is performed by the employed and onlooker bees; this introduces issues, however, because the employed and onlooker bees use the same update rule. In the proposed method, each employed or onlooker bee can use a different update rule to obtain a candidate solution, which improves both the local and global search abilities and results in a more robust and effective ABC-based optimizer. Consequently, the performance of the proposed method on 28 numerical benchmark functions shows that using different equations as update rules for the ABC algorithm provides sufficient diversification and intensification of the population on the search space of the problem. Future works include investigating more capable update rules for the ABC and other swarm intelligence methods.

Acknowledgements

The authors thank The Scientific Research Project Coordinatorship of Selcuk University and The Scientific and Technological Research Council of Turkey for their institutional support.

References

[1] A. Banharnsakun, T. Achalakul, B. Sirinaovakul, The best-so-far selection in artificial bee colony algorithm, Appl. Math. Comput. 11 (2011) 2888–2901.
[2] A. Banharnsakun, B. Sirinaovakul, T. Achalakul, The best-so-far ABC with multiple patrilines for clustering problems, Neurocomputing 116 (2013) 355–
366.
[3] A. Singh, An artificial bee colony algorithm for the leaf-constrained minimum spanning tree problem, Appl. Soft Comput. 9 (2009) 625–631.
[4] B. Akay, Performance Analysis of Artificial Bee Colony Algorithm on numerical optimization problems, PhD. Dissertation, Erciyes Univ., Grad. Sch. of
Nat. and Appl. Sci., Kayseri, TR, 2009.
[5] B. Akay, D. Karaboga, A modified artificial bee colony algorithm for real parameter optimization, Inf. Sci. 192 (2012) 120–142.
[6] B. Alatas, Chaotic bee colony algorithms for global numerical optimization, Expert Syst. Appl. 37 (2010) 5682–5687.
[7] D. Bose, S. Biswas, A.V. Vasilakos, S. Laha, Optimal filter design using an improved artificial bee colony algorithm, Inf. Sci. 281 (2014) 443–461.
[8] D. Karaboga, An idea based on honey bee swarm for numerical optimization, Erciyes University, Kayseri, Turkey, Tech. Rep., TR06, 2005.
[9] D. Karaboga, B. Akay, A comparative study of artificial bee colony algorithm, Appl. Math. Comput. 214 (2009) 108–132.
[10] D. Karaboga, B. Akay, C. Ozturk, Artificial bee colony (ABC) optimization algorithm for training feed-forward neural networks, in: 4th International
Conference MDAI, Kitakyushu, Japan, August 16–18, 2007, pp. 318–329.
[11] D. Karaboga, B. Basturk, A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm, J. Global Optim.
39 (2007) 459–471.
[12] D. Karaboga, B. Basturk, On the performance of artificial bee colony (ABC) algorithm, Appl. Soft Comput. 8 (2008) 687–697.
[13] D. Karaboga, B. Gorkemli, A quick artificial bee colony (qABC) algorithm and its performance on optimization problems, Appl. Soft Comput. 23 (2014)
227–238.
[14] D. Karaboga, B. Gorkemli, C. Ozturk, N. Karaboga, A comprehensive survey: artificial bee colony (ABC) algorithm and applications, Artif. Intell. Rev.
(2012), http://dx.doi.org/10.1007/s10462-012-9328-0.
[15] D. Karaboga, C. Ozturk, A novel clustering approach: artificial bee colony (ABC) algorithm, Appl. Soft Comput. 11 (2011) 652–657.
[16] D. Karaboga, S. Okdem, C. Ozturk, Cluster based wireless sensor network routing using artificial bee colony algorithm, Wirel. Netw. 18 (2012) 847–860.
[17] D.O. Boyer, C.H. Martínez, N.G. Pedrajas, Crossover operator for evolutionary algorithms based on population features, J. Artif. Intell. Res. 24 (2005) 1–48.
[18] G. Zhu, S. Kwong, Gbest-guided artificial bee colony algorithm for numerical function optimization, Appl. Math. Comput. 217 (2010) 3166–3173.
[19] H. Gozde, M.C. Taplamacioglu, Comparative performance analysis of artificial bee colony algorithm for automatic voltage regulator (AVR) system, J.
Frankl. Inst. 348 (2011) 1927–1946.
[20] H. Wang, Z. Wu, S. Rahnamayan, H. Sun, Y. Liu, J. Pan, Multi-strategy ensemble artificial bee colony algorithm, Inf. Sci. 279 (2014) 587–603.
[21] H.-C. Tsai, Integrating the artificial bee colony and bees algorithm to face constrained optimization problems, Inf. Sci. 258 (2014) 80–93.
[22] I.M.S. De Oliveira, R. Schirru, Swarm intelligence of artificial bees applied to in-core fuel management optimization, Ann. Nucl. Energy 38 (2011) 1039–
1045.
[23] J. Kennedy, R.C. Eberhart, Particle swarm optimization, in: Proc. of IEEE International Conference on Neural Networks, Piscataway, USA, 1995, pp.
1942–1948.
[24] M. Dorigo, V. Maniezzo, A. Colorni, Positive feedback as a search strategy, Politecnico di Milano, Milano, Italy, Tech. Rep. 91–016, 1991.
[25] M. Dorigo, V. Maniezzo, A. Colorni, The ant system: optimization by a colony of cooperating agents, IEEE T. Syst. Man Cy. B 26 (1996) 29–41.
[26] M. Ma, J. Liang, M. Guo, Y. Fan, Y. Yin, SAR image segmentation based on artificial bee colony algorithm, Appl. Soft Comput. 11 (2011) 5205–5214.
[27] M. Sonmez, Artificial bee colony algorithm for optimization of truss structures, Appl. Soft Comput. 11 (2011) 2406–2418.
[28] M.S. Kiran, O. Findik, A directed artificial bee colony algorithm, Appl. Soft Comput. 26 (2015) 454–462.
[29] M.-H. Horng, Multilevel thresholding selection based on the artificial bee colony algorithm for image segmentation, Expert Syst. Appl. 38 (2011)
13785–13791.
[30] N. Imanian, M.E. Shiri, P. Moradi, Velocity based artificial bee colony algorithm for high dimensional continuous optimization problems, Eng. Appl.
Artif. Intell. 36 (2014) 148–163.
[31] P. Mansouri, B. Asady, N. Gupta, The bisection-artificial bee colony algorithm to solve fixed point problems, Appl. Soft Comput. 26 (2015) 143–148.
[32] P. N. Suganthan, N. Hansen, J. J. Liang, K. Deb, Y.-P. Chen, A. Auger, S. Tiwari, Problem Definitions and Evaluation Criteria for the CEC 2005 Special
Session on Real-Parameter Optimization, Nanyang Technological University, Singapore, Tech. Rep., May 2005.
[33] R.C. Eberhart, J. Kennedy, A new optimizer using particle swarm theory, in: Proc. MHS’95, Nagoya, Japan, 1995, pp. 39–43.
[34] S. Samanta, S. Chakraborty, Parametric optimization of some non-traditional machining processes using artificial bee colony algorithm, Eng. Appl. Artif.
Intell. 24 (2011) 946–957.

[35] S.N. Omkar, J. Senthilnath, R. Khandelwal, G.N. Naik, S. Gopalakrishnan, Artificial bee colony (ABC) for multi-objective design optimization of composite structures, Appl. Soft Comput. 11 (2011) 489–499.
[36] S.O. Degertekin, Optimum design of geometrically non-linear steel frames using artificial bee colony algorithm, Steel Compos. Struct. 12 (2012) 505–
522.
[37] V.J. Manoj, E. Elias, Artificial bee colony algorithm for the design of multiplier-less nonuniform filter bank transmultiplexer, Inf. Sci. 192 (2012) 193–
203.
[38] W. Gao, S. Liu, L. Huang, A global best artificial bee colony algorithm for global optimization, J. Comput. Appl. Math. 236 (2012) 2741–2753.
[39] W. Gao, S. Liu, A modified artificial bee colony algorithm, Comput. Oper. Res. 39 (2012) 687–697.
[40] W. Gao, S. Liu, L. Huang, A novel artificial bee colony algorithm based on modified search equation and orthogonal learning, IEEE T. Syst. Man Cy. B
(2012), http://dx.doi.org/10.1109/TSMCB.2012.2222373.
[41] W. Gao, S. Liu, L. Huang, Enhancing artificial bee colony algorithm using more information-based search equations, Inf. Sci. 270 (2014) 112–133.
[42] W.-C. Yeh, T.-J. Hsieh, Solving reliability redundancy allocation problems using an artificial bee colony algorithm, Comput. Oper. Res. 38 (2011) 1465–
1473.
[43] X. Zhang, X. Zhang, S. Yuen, S. Ho, W. Fu, An improved artificial bee colony algorithm for optimal design of electromagnetic devices, IEEE T. Magn.
(2013), http://dx.doi.org/10.1109/TMAG.2013.2241447.
[44] X.-L. Li, Z.J. Shao, J.-X. Qian, An optimizing method based on autonomous animates: fish-swarm algorithm, Syst. Eng. – Theory Pract. 22 (2002) 32–38.
[45] X.-S. Yang, S. Deb, Engineering optimization by cuckoo search, Int. J. Math. Model. Numer. Opt. 1 (2010) 330–343.
[46] X.-S. Yang, Firefly algorithms for multimodal optimization, in: 5th International Symposium SAGA, Sapporo, Japan, 2009, pp. 169–178.
[47] Y. Liu, X. Ling, G. Liu, Improved artificial bee colony algorithm with mutual learning, J. Syst. Eng. Electron. 23 (2012) 265–275.
