Chun-Feng et al., 2014
Research Article
Hybrid Artificial Bee Colony Algorithm and Particle Swarm
Search for Global Optimization
Copyright © 2014 Wang Chun-Feng et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.
The artificial bee colony (ABC) algorithm is one of the most recent swarm-intelligence-based algorithms and has been shown to be competitive with other population-based algorithms. However, ABC still suffers from a weakness in its solution search equation, which is good at exploration but poor at exploitation. To overcome this problem, we propose a novel artificial bee colony algorithm based on a particle swarm search mechanism. In this algorithm, to improve the convergence speed, the initial population is first generated using good point set theory rather than random selection. Second, to enhance the exploitation ability, the employed bees, onlookers, and scouts use the PSO mechanism to search for new candidate solutions. Finally, to further improve the search ability, a chaotic search operator is applied to the best solution of the current iteration. Our algorithm is tested on several well-known benchmark functions and compared with other algorithms. The results show that our algorithm performs well.
the particles after the main loop of the PSO is finished. Unlike the IABAP and ABC-SPSO, a hybrid method named HPA has been proposed, in which the global best solution is created using a recombination procedure between the global best solutions of the PSO and the ABC.

In this paper, we present a hybrid artificial bee colony algorithm based on particle swarm search for global optimization, named "ABC-PS." To further improve performance, several additional strategies are applied. The experimental results show that the algorithm improves the performance of the ABC algorithm in most respects.

The rest of the paper is organized as follows. The original ABC algorithm is introduced in Section 2. The PSO is explained in Section 3. The proposed ABC-PS approach is described in Section 4. The performance of ABC-PS is compared with that of the original ABC algorithm and state-of-the-art algorithms in Section 5. Finally, conclusions are given in Section 6.

2. The Original ABC Algorithm

The ABC algorithm contains three groups of bees: employed bees, onlookers, and scouts. The numbers of employed bees and onlookers are set equal. Employed bees are responsible for searching available food sources and gathering the required information; they also pass their food information to the onlookers. Onlookers select good food sources from those found by the employed bees and search them further. When the quality of a food source is not improved within a predetermined number of cycles, the food source is abandoned by its employed bee. At the same time, the employed bee becomes a scout and starts to search for a new food source. In the ABC algorithm, each food source represents a feasible solution of the optimization problem, and the nectar amount of a food source is evaluated by the fitness value (quality) of the associated solution. The number of employed bees is set equal to the number of food sources.

Assume that the search space is d-dimensional. The position of the i-th food source (solution) can then be expressed as a d-dimensional vector x_i = (x_{i,1}, x_{i,2}, ..., x_{i,d}), i = 1, 2, ..., Sn, where Sn is the number of food sources. The details of the original ABC algorithm are as follows.

At the initialization stage, a set of food source positions is randomly selected by the bees as in (1) and their nectar amounts are determined:

x_{ij} = \underline{x}_j + \text{rand}(0,1) \cdot (\bar{x}_j - \underline{x}_j),   (1)

where \underline{x}_j and \bar{x}_j are the lower and upper bounds of the j-th dimension. After all employed bees complete their searches, they share the nectar information of the food sources with the onlookers, and each food source is then chosen by onlookers. The probability could be obtained from the following equation:

P_i = \frac{\text{fit}(x_i)}{\sum_{i=1}^{Sn} \text{fit}(x_i)},   (2)

where fit(x_i) is the nectar amount of the i-th food source, associated with the objective function value f(x_i) of the i-th food source. Once a food source x_i is selected, the onlooker uses (3) to produce a modification of the position (solution) in her memory and checks the nectar amount of the candidate source (solution):

x'_{ij} = x_{ij} + \psi \cdot (x_{ij} - x_{kj}),   (3)

where i, k \in \{1, 2, ..., Sn\}, k \neq i; x'_{ij} is a new feasible solution produced from the previous solution x_{ij} and the randomly selected neighboring solution x_{kj}; \psi is a random number in [-1, 1], which controls the production of a neighbor food source position around x_{ij}; and j and k are randomly chosen indexes. In each iteration, only one dimension of each position is changed. Provided that the nectar of the new position is higher than that of the previous one, the bee memorizes the new position and forgets the old one.

The ABC algorithm also contains a control parameter called limit. If a food source is not improved after limit trials, it is assumed to be abandoned by its employed bee, and the employed bee associated with that food source becomes a scout that searches for a new food source randomly, which helps to avoid local optima.

3. Particle Swarm Optimization (PSO)

As a swarm-based stochastic optimization method, the PSO algorithm was developed by Kennedy and Eberhart [19] and is based on the social behavior of bird flocking and fish schooling. The original PSO maintains a population of particles x_i = (x_{i,1}, x_{i,2}, ..., x_{i,d}), i = 1, 2, ..., Sn, which are at first distributed uniformly over the search space. Each particle represents a potential solution to the optimization problem. After randomly produced solutions are assigned to the particles, the velocity of each particle is updated at each iteration using the best solution the particle itself has obtained in previous iterations and the global best solution obtained by all particles so far. This is formulated as follows:

v_i^j(k+1) = v_i^j(k) + c_1 r_1 [p_{i,j}^{best}(k) - x_i^j(k)] + c_2 r_2 [g_j^{best}(k) - x_i^j(k)],   (4)
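To make the mechanics above concrete, here is a minimal Python sketch of the initialization (1), the onlooker selection probability (2), and the neighbor search (3). The fitness mapping fit(x) = 1/(1 + f(x)) for f(x) ≥ 0 is a common ABC convention and is our assumption here, as are the sphere objective and the helper names:

```python
import numpy as np

def init_sources(sn, d, lo, hi, rng):
    # Eq. (1): random initialization within the bounds [lo, hi].
    return lo + rng.random((sn, d)) * (hi - lo)

def fitness(fvals):
    # Common ABC fitness mapping (an assumption, not fixed by the text):
    # smaller objective values get larger fitness.
    fvals = np.asarray(fvals, dtype=float)
    return np.where(fvals >= 0, 1.0 / (1.0 + fvals), 1.0 + np.abs(fvals))

def onlooker_probs(fvals):
    # Eq. (2): selection probability proportional to the nectar amount.
    fit = fitness(fvals)
    return fit / fit.sum()

def neighbor(sources, i, rng):
    # Eq. (3): perturb one randomly chosen dimension j of source i using
    # a randomly chosen neighbor k != i and psi ~ U[-1, 1].
    sn, d = sources.shape
    j = rng.integers(d)
    k = rng.choice([s for s in range(sn) if s != i])
    psi = rng.uniform(-1.0, 1.0)
    cand = sources[i].copy()
    cand[j] = sources[i, j] + psi * (sources[i, j] - sources[k, j])
    return cand

# Tiny demonstration on the sphere function f(x) = sum(x_j^2).
rng = np.random.default_rng(0)
sources = init_sources(sn=10, d=2, lo=-10.0, hi=10.0, rng=rng)
fvals = np.sum(sources**2, axis=1)
i = rng.choice(len(sources), p=onlooker_probs(fvals))  # onlooker pick
cand = neighbor(sources, i, rng)
if np.sum(cand**2) < fvals[i]:        # greedy replacement, as in the text
    sources[i] = cand
```

Note that, exactly as the text states, only one dimension of the chosen source changes per candidate, and the greedy comparison decides whether the new position replaces the old one.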
where c_1 and c_2 are coefficients used to scale the contribution of the cognitive and social components, respectively, and r_1 and r_2, which are the stochastic elements of the algorithm, are random numbers in the range [0, 1]. For each particle i, Kennedy and Eberhart [19] proposed that the position x_i can be updated in the following manner:

x_i^j(k+1) = x_i^j(k) + v_i^j(k+1).   (5)

Considering a minimization problem, the personal best solution of the particle at the next step k + 1 is calculated as

p_i^{best}(k+1) = \begin{cases} p_i^{best}(k), & \text{if } f(x_i(k+1)) \ge f(p_i^{best}(k)), \\ x_i(k+1), & \text{if } f(x_i(k+1)) < f(p_i^{best}(k)). \end{cases}   (6)

The global best position g^{best} is determined by (7), where Sn is the number of particles:

g^{best} = \min \{ f(p_i^{best}) \}, \quad i = 1, 2, ..., Sn,   (7)

where f is the objective function.

In (4), to control the exploration and exploitation abilities of the swarm, Shi and Eberhart proposed a new parameter called the "inertia weight \omega" [20]. The inertia weight controls the momentum of the particle by weighing the contribution of the previous velocity. With the inertia weight \omega added, (4) becomes

v_i^j(k+1) = \omega v_i^j(k) + c_1 r_1 [p_{i,j}^{best}(k) - x_i^j(k)] + c_2 r_2 [g_j^{best}(k) - x_i^j(k)].   (8)

Based on this description of PSO, we can see that the particles tend to fly toward better and better search areas over the course of the search process, so the PSO algorithm can enforce a steady improvement in solution quality.

4. Hybrid Approach (ABC-PS)

From the above discussion of ABC and PSO, it is clear that the global best solution of the population is not directly used in the ABC algorithm; at the same time, when the particles in PSO get stuck in a local minimum, they may be unable to escape it. To overcome these disadvantages of the two algorithms, we propose a hybrid global optimization approach that combines the ABC algorithm with the PSO search mechanism. In this algorithm, the initial population is generated using good point set theory.

4.1. Background on Good Point Set. Let G_d be the d-dimensional unit cube in Euclidean space; that is, if x = (x_1, x_2, ..., x_d) \in G_d, then 0 \le x_i \le 1 (i = 1, 2, ..., d). Let p_n(k) be a set of n points in G_d; then p_n(k) = \{ (x_1^{(n)}(k), ..., x_d^{(n)}(k)) \mid 0 \le x_i^{(n)}(k) \le 1, 1 \le k \le n, 1 \le i \le d \}. Let r = (r_1, r_2, ..., r_d) be a point in G_d, and let N_n(r) = N_n(r_1, r_2, ..., r_d) denote the number of points in p_n(k) that satisfy 0 \le x_i^{(n)}(k) \le r_i, i = 1, ..., d.

Definition 1. Let \varphi(n) = \sup_{r \in G_d} | N_n(r)/n - |r| |, where |r| = r_1 r_2 \cdots r_d; then \varphi(n) is called the discrepancy of the point set p_n(k).

Definition 2. Let r \in G_d. If p_n(k) = (\{r_1^{(n)} k\}, ..., \{r_d^{(n)} k\}) (k = 1, ..., n) has discrepancy \varphi(n) = c(r, \epsilon) n^{-1+\epsilon}, where c(r, \epsilon) is a constant that depends only on r and \epsilon (\epsilon > 0), then p_n(k) is called a good point set and r is called a good point.

Remark 3. If we let r_i = 2\cos(2\pi i / p), 1 \le i \le d, where p is the minimum prime number satisfying (p-3)/2 \ge d, then r is a good point. If we let r_i = e^i, 1 \le i \le d, then r is also a good point.

Theorem 4. If p_n(k) (1 \le k \le n) has discrepancy \varphi(n) and f \in B_d, then

\left| \int_{x \in G_d} f(x) \, dx - \sum_{i=1}^{n} \frac{f(p_n(i))}{n} \right| \le V(f) \varphi(n),   (9)

where B_d is a d-dimensional Banach space of functions f with norm \| \cdot \|, and V(f) = \|f - u\| measures the variability of the function f.

Theorem 5. For arbitrary f \in B_d, if (9) holds, then there exists a point set p_n(k) (1 \le k \le n) whose discrepancy is not more than \varphi(n).

Theorem 6. Suppose f(x) satisfies |\partial f / \partial x_i| \le L (1 \le i \le d), |\partial^2 f / \partial x_i \partial x_j| \le L (1 \le i \le j \le d), ..., |\partial^d f / \partial x_1 \cdots \partial x_d| \le L, where L is an absolute constant. When we estimate the integral of the function f over the d-dimensional unit hypercube G_d, namely u = \int_{x \in G_d} f(x) \, dx, by the average value of f over any point set p_n(k) (1 \le k \le n), Q_n = \sum_i f(p_n(i))/n, the integration error E_n = u - Q_n is not smaller than o(n^{-1}).

By Theorems 4–6, it can be seen that if we estimate the integral based on a good point set, the discrepancy \varphi(n) = c(r, \epsilon) n^{-1+\epsilon} depends only on n. This makes good point sets well suited to high-dimensional approximate computation. In other words, the idea of the good point set is to distribute the points more evenly than random points.

For the d-dimensional local search space H, the so-called good point set containing n points can be found as follows:

p_n(k) = \{ (\{r_1 k\}, \{r_2 k\}, ..., \{r_d k\}), \; k = 1, ..., n \},   (10)

where r_i = 2\cos(2\pi i / \rho), 1 \le i \le d, \rho is the minimum prime number satisfying \rho \ge 2d + 3, and \{r_i k\} denotes the fractional part of r_i k (alternatively, one may take r_i = e^i, 1 \le i \le d).

Since the good point set principle is based on the unit hypercube (or hypersphere), in order to map the n good points from the space H: [0, 1]^d to the search space T: [\underline{x}, \bar{x}]^d, we define the following transformation:

x_i = \underline{x}_i + \{r_i k\} \cdot (\bar{x}_i - \underline{x}_i).   (11)

In the following, for the two-dimensional space [-1, 1], we generate 100 points by using the good point set method and
4 Mathematical Problems in Engineering
where K is the length of the chaotic sequence. Each ch_i is then mapped to a chaotic vector in the interval [\underline{x}, \bar{x}]:

CH_i = \underline{x} + ch_i \cdot (\bar{x} - \underline{x}), \quad i = 1, ..., K,   (13)

(18) Perform the chaotic search K times around g^{best}, and redetermine the g^{best} of the population.
(19) iter = iter + 1.
(20) End while.
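Assuming the chaotic sequence ch_i is produced by the common logistic map ch_{i+1} = 4 ch_i (1 − ch_i) (its defining equation falls outside this excerpt, so that choice is our assumption), the mapping (13) and the greedy chaotic search of step (18) can be sketched as:

```python
def chaotic_candidates(lo, hi, K, seed=0.31):
    # Eq. (13): CH_i = lo + ch_i * (hi - lo), i = 1..K, applied per
    # dimension. The sequence ch_i here uses the logistic map
    # ch_{i+1} = 4 * ch_i * (1 - ch_i) -- an assumed choice, since the
    # defining equation lies outside this excerpt.
    ch, out = seed, []
    for _ in range(K):
        ch = 4.0 * ch * (1.0 - ch)
        out.append([lo[j] + ch * (hi[j] - lo[j]) for j in range(len(lo))])
    return out

def chaotic_search(f, g_best, lo, hi, K):
    # Step (18): try K chaotic points and keep the best solution found,
    # falling back to g_best if no candidate improves on it.
    best_x, best_f = list(g_best), f(g_best)
    for cand in chaotic_candidates(lo, hi, K):
        fc = f(cand)
        if fc < best_f:
            best_x, best_f = cand, fc
    return best_x, best_f

sphere = lambda x: sum(v * v for v in x)
x_new, f_new = chaotic_search(sphere, [0.5, -0.5],
                              lo=[-1.0, -1.0], hi=[1.0, 1.0], K=50)
```

The greedy comparison guarantees that the chaotic search never makes g^{best} worse.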
[Figure 3: The relation of the best value and each iteration (the function Matyas); curves for ABC and ABC-PS, best value on a logarithmic scale from 10^0 down to about 10^-300 over 1200 iterations.]

[Figure 4: The relation of the best value and each iteration (the function Booth); curves for ABC and ABC-PS, best value on a logarithmic scale from 10^0 down to about 10^-35 over 1200 iterations.]
[Figure 5: The relation of the best value and each iteration (the function Camelback); curves for ABC and ABC-PS over 1200 iterations.]

5. Comparison of the ABC-PS with the Other Hybrid Methods Based on the ABC

In this section, ABC-PS is applied to minimize a set of benchmark functions. In all simulations, the inertia weight in (4) is defined as follows:

\omega = \frac{\omega_{max} - \omega_{min}}{\text{maxcycle}} \cdot \text{iter},   (16)

where \omega_{max} and \omega_{min} are the maximum and minimum inertia weights, respectively, and iter denotes the current iteration number. From the above formula, it can be seen that, as the iterations increase, the velocity v_i of the particle x_i becomes more and more important. \omega_{max} and \omega_{min} are set to 0.9 and 0.4, respectively; v_{min} = -1, v_{max} = 1, and c_1 = c_2 = 1.3.

Experiment 1. In order to evaluate the performance of the ABC-PS algorithm, we used a test bed of four traditional numerical benchmarks, illustrated in Table 1: the Matyas, Booth, 6-Hump Camelback, and Goldstein–Price functions. The characteristics, dimensions, initial ranges, and formulations of these functions are given in Table 1. Empirical results of the proposed hybrid method were compared with those of the basic ABC algorithm and a recent algorithm, COABC [15].

The common parameters of the three algorithms, such as population size and total number of evaluations, were chosen to be the same. The population size is 50 for all functions, and the limit is 10. For each function, all methods were run 30 times independently. To make the comparison clear, the global minimums, maximum numbers of iterations, mean best values, and standard deviations are given in Table 2. For ABC and ABC-PS, Figures 3, 4, 5, and 6 illustrate the change of the best value at each iteration. The experiment shows that the ABC-PS method is much better than the original ABC algorithm and COABC.

Experiment 2. To further verify the performance of ABC-PS, 12 numerical benchmark functions were selected from the literature [13–15]. This set consists of many different kinds of problems: unimodal, multimodal, regular, irregular, separable, nonseparable, and multidimensional. The characteristics, dimensions, initial ranges, and formulations of these functions are listed in Table 3.

In order to fairly compare the performance of ABC-PS with COABC, GABC [13], and PABC [14], the experiments were conducted in the same way as described in [13–15]. The minimums, maximum iterations, mean best values, and standard deviations found after 30 runs are given in Table 4. The bold font in Table 4 marks the optimum value among the different methods. From Table 4, it can be seen that ABC-PS is superior to the other algorithms in most cases, except for f_5 and f_6.
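The parameter settings of this section are easy to express in code. The sketch below (ours) reproduces the inertia schedule (16) with ω_max = 0.9, ω_min = 0.4, the velocity clamp [−1, 1], and c_1 = c_2 = 1.3 inside a generic velocity step in the style of (8); note that, as printed, (16) makes ω grow from 0 up to ω_max − ω_min over the run:

```python
import random

W_MAX, W_MIN = 0.9, 0.4      # omega_max, omega_min from the text
V_MIN, V_MAX = -1.0, 1.0     # velocity clamp from the text
C1 = C2 = 1.3

def inertia(iteration, maxcycle):
    # Eq. (16) as printed: omega grows linearly with the iteration
    # counter, from 0 toward W_MAX - W_MIN at the final cycle.
    return (W_MAX - W_MIN) / maxcycle * iteration

def velocity_step(v, x, p_best, g_best, iteration, maxcycle,
                  rand=random.random):
    # A generic velocity update in the style of eq. (8), with the
    # schedule of eq. (16) and clamping to [V_MIN, V_MAX] (ours).
    w = inertia(iteration, maxcycle)
    new_v = []
    for j in range(len(x)):
        r1, r2 = rand(), rand()
        vj = (w * v[j]
              + C1 * r1 * (p_best[j] - x[j])
              + C2 * r2 * (g_best[j] - x[j]))
        new_v.append(min(max(vj, V_MIN), V_MAX))
    return new_v

v_next = velocity_step([0.2, -0.1], [1.0, 1.0], [0.5, 0.5], [0.0, 0.0],
                       iteration=600, maxcycle=1200)
```

With ω small early in the run, the update is dominated by the attraction terms; the previous-velocity momentum then gains weight as the search proceeds, matching the text's remark.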
Table 1: The benchmark functions used in Experiment 1.

Functions | C | Range
f_1 = 0.26(x_1^2 + x_2^2) - 0.48 x_1 x_2 | UN | [-10, 10]
f_2 = (x_1 + 2x_2 - 7)^2 + (2x_1 + x_2 - 5)^2 | MS | [-10, 10]
f_3 = 4x_1^2 - 2.1 x_1^4 + x_1^6/3 + x_1 x_2 - 4x_2^2 + 4x_2^4 | MN | [-5, 5]
f_4 = [1 + (x_1 + x_2 + 1)^2 (19 - 14x_1 + 3x_1^2 - 14x_2 + 6x_1 x_2 + 3x_2^2)] \times [30 + (2x_1 - 3x_2)^2 (18 - 32x_1 + 12x_1^2 + 48x_2 - 36x_1 x_2 + 27x_2^2)] | MN | [-2, 2]

C: characteristic; U: unimodal; M: multimodal; N: nonseparable; S: separable.
Table 3: The 12 benchmark functions used in Experiment 2.

Functions | C | D | Range
f_1 = 0.26(x_1^2 + x_2^2) - 0.48 x_1 x_2 | UN | 2 | [-10, 10]
f_2 = \sum_{i=1}^{n-1} [100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2] | UN | 30 | [-30, 30]
f_3 = 4x_1^2 - 2.1 x_1^4 + x_1^6/3 + x_1 x_2 - 4x_2^2 + 4x_2^4 | MN | 2 | [-5, 5]
f_4 = -20 \exp(-0.2 \sqrt{(1/n) \sum_{i=1}^{n} x_i^2}) - \exp((1/n) \sum_{i=1}^{n} \cos(2\pi x_i)) + 20 + e | MN | 60 | [-32, 32]
f_5 = (1/4000) \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos(x_i / \sqrt{i}) + 1 | MN | 60 | [-600, 600]
f_6 = 418.982887 n - \sum_{i=1}^{n} x_i \sin(\sqrt{|x_i|}) | MN | 30 | [-500, 500]
f_7 = 100(x_1^2 - x_2)^2 + (x_1 - 1)^2 + (x_3 - 1)^2 + 90(x_3^2 - x_4)^2 + 10.1((x_2 - 1)^2 + (x_4 - 1)^2) + 19.8(x_2 - 1)(x_4 - 1) | UN | 4 | [-10, 10]
f_8 = \sum_{i=1}^{n} |x_i|^{i+1} | US | 30 | [-1, 1]
f_9 = \sum_{i=1}^{n} |x_i| + \prod_{i=1}^{n} |x_i| | US | 60 | [-10, 10]
f_10 = \max \{ |x_i| : 1 \le i \le n \} | US | 60 | [-100, 100]
f_11 = (1/n) \sum_{i=1}^{n} (x_i^4 - 16x_i^2 + 5x_i) | MS | 60 | [-5, 5]
f_12 = \sum_{i=1}^{n} (x_i^2 - 10 \cos(2\pi x_i) + 10) | MS | 60 | [-5.12, 5.12]

C: characteristic; U: unimodal; M: multimodal; N: nonseparable; S: separable.
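Several entries in Table 3 are classical test functions whose global minima are known; a few can be written out directly for checking an implementation (a sketch using the formulations listed above):

```python
import math

def matyas(x):
    # f_1: unimodal, nonseparable; global minimum 0 at (0, 0).
    return 0.26 * (x[0]**2 + x[1]**2) - 0.48 * x[0] * x[1]

def rosenbrock(x):
    # f_2: unimodal, nonseparable; global minimum 0 at (1, ..., 1).
    return sum(100.0 * (x[i + 1] - x[i]**2)**2 + (x[i] - 1.0)**2
               for i in range(len(x) - 1))

def rastrigin(x):
    # f_12: multimodal, separable; global minimum 0 at the origin.
    return sum(v * v - 10.0 * math.cos(2.0 * math.pi * v) + 10.0 for v in x)
```

Evaluating these at their known optima gives exactly 0, a quick sanity check before running any of the compared algorithms on them.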
Table 4: ABC-PS performance comparison of ABC and the state-of-art algorithms in [13–15].