
Hindawi Publishing Corporation

Mathematical Problems in Engineering


Volume 2014, Article ID 832949, 8 pages
http://dx.doi.org/10.1155/2014/832949

Research Article
Hybrid Artificial Bee Colony Algorithm and Particle Swarm
Search for Global Optimization

Wang Chun-Feng, Liu Kui, and Shen Pei-Ping


College of Mathematics and Information, Henan Normal University, Xinxiang 453007, China

Correspondence should be addressed to Wang Chun-Feng; [email protected]

Received 17 July 2014; Accepted 1 October 2014; Published 28 October 2014

Academic Editor: Guangming Xie

Copyright © 2014 Wang Chun-Feng et al. This is an open access article distributed under the Creative Commons Attribution
License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly
cited.

The artificial bee colony (ABC) algorithm is one of the most recent swarm intelligence based algorithms and has been shown to be competitive with other population-based algorithms. However, ABC still has an insufficiency regarding its solution search equation, which is good at exploration but poor at exploitation. To overcome this problem, we propose a novel artificial bee colony algorithm based on a particle swarm search mechanism. In this algorithm, to improve the convergence speed, the initial population is first generated by using good point set theory rather than random selection. Second, in order to enhance the exploitation ability, the employed bees, onlookers, and scouts utilize the mechanism of PSO to search new candidate solutions. Finally, to further improve the search ability, a chaotic search operator is applied to the best solution of the current iteration. Our algorithm is tested on some well-known benchmark functions and compared with other algorithms. The results show that our algorithm has good performance.

1. Introduction

Optimization problems play a very important role in many scientific and engineering fields. In the last two decades, several swarm intelligence algorithms, such as ant colony optimization (ACO) [1, 2], particle swarm optimization (PSO) [3, 4], and artificial bee colony (ABC) [5, 6], have been developed for solving difficult optimization problems. Researchers have shown that algorithms based on swarm intelligence have great potential [7–9], and they have attracted much attention.

The ABC algorithm was first proposed by Karaboga in 2005, inspired by the intelligent foraging behavior of honey bees [5]. Since its invention, the ABC algorithm has been used to solve both numerical and nonnumerical optimization problems. Its performance has been compared with that of other intelligent algorithms, such as GA [10] and differential evolution (DE) [11]; the results show that ABC is better than, or at least comparable to, the other methods. Recently, many variant ABC algorithms have been developed to improve the performance of the basic ABC. Alatas proposed an ABC algorithm that uses chaotic maps as efficient alternatives for generating pseudorandom sequences [12]. To improve the exploitation ability, Zhu and Kwong presented a global-best-solution-guided ABC (GABC) algorithm that incorporates the information of the global best solution into the solution search equation [13]. By combining Powell's method, Gao et al. proposed an improved ABC algorithm, Powell ABC (PABC) [14]. With the same goal of improving exploitation, a converge-onlookers ABC (COABC) was developed by applying the best solution of the previous iteration in the search equation at the onlooker stage [15]. A more extensive review of ABC can be found in [16].

In addition, considering that PSO has good exploitation ability, a few hybrid ABC algorithms based on PSO have been presented. For example, a hybrid approach referred to as IABAP, based on PSO and ABC, is presented in [17]; in this algorithm, the flow of information from the bee colony to the particle swarm is exchanged based on scout bees. Another hybrid approach is the ABC-SPSO algorithm based on ABC and PSO [18]. In ABC-SPSO, the update rule (solution updating equation) of the ABC algorithm is executed among the personal best solutions of

the particles after the main loop of the PSO is finished. Unlike IABAP and ABC-SPSO, a hybrid method named HPA has been proposed; the global best solution of HPA is created using a recombination procedure between the global best solutions of the PSO and the ABC.

In this paper, we present a hybrid artificial bee colony algorithm based on particle swarm search for global optimization, named "ABC-PS." To further improve performance, several strategies are applied. The experimental results show that the algorithm improves the performance of the ABC algorithm in most respects.

The rest of the paper is organized as follows. The original ABC algorithm is introduced in Section 2. PSO is explained in Section 3. The proposed ABC-PS approach is described in Section 4. The performance of ABC-PS is compared with that of the original ABC algorithm and state-of-the-art algorithms in Section 5. Finally, conclusions are given in Section 6.

2. The Original ABC Algorithm

The ABC algorithm contains three groups of bees: employed bees, onlookers, and scouts. The numbers of employed bees and onlookers are set equal. Employed bees are responsible for searching available food sources and gathering the required information; they also pass their food information to onlookers. Onlookers select good food sources from those found by employed bees and search them further. When the quality of a food source is not improved through a predetermined number of cycles, the food source is abandoned by its employed bee; at the same time, the employed bee becomes a scout and starts to search for a new food source. In the ABC algorithm, each food source represents a feasible solution of the optimization problem, and the nectar amount of a food source is evaluated by the fitness value (quality) of the associated solution. The number of employed bees is set to the number of food sources.

Assume that the search space is 𝑑-dimensional; the position of the 𝑖th food source (solution) can be expressed as a 𝑑-dimensional vector 𝑥𝑖 = (𝑥𝑖,1, 𝑥𝑖,2, . . . , 𝑥𝑖,𝑑), 𝑖 = 1, 2, . . . , 𝑆𝑛, where 𝑆𝑛 is the number of food sources. The details of the original ABC algorithm are given as follows.

At the initialization stage, a set of food source positions is randomly selected by the bees as in (1) and their nectar amounts are determined:

𝑥𝑖𝑗 = 𝑥𝑗^min + rand(0, 1) ∗ (𝑥𝑗^max − 𝑥𝑗^min), (1)

where 𝑖 ∈ {1, 2, . . . , 𝑆𝑛}, 𝑗 ∈ {1, 2, . . . , 𝑑}, and 𝑥𝑗^min and 𝑥𝑗^max are the lower and upper bounds of the 𝑗th dimension, respectively.

An onlooker bee evaluates the nectar information taken from all employed bees and chooses a food source with a probability related to its nectar amount. A food source with higher quality has a larger opportunity to be selected by onlookers. The probability is obtained from the following equation:

𝑃𝑖 = fit(𝑥𝑖) / ∑_{𝑖=1}^{𝑆𝑛} fit(𝑥𝑖), (2)

where fit(𝑥𝑖) is the nectar amount of the 𝑖th food source, associated with the objective function value 𝑓(𝑥𝑖) of the 𝑖th food source. Once a food source 𝑥𝑖 is selected, the bee uses (3) to produce a modification of the position (solution) in her memory and checks the nectar amount of the candidate source (solution):

𝑥𝑖𝑗′ = 𝑥𝑖𝑗 + 𝜓 ∗ (𝑥𝑖𝑗 − 𝑥𝑘𝑗), (3)

where 𝑖, 𝑘 ∈ {1, 2, . . . , 𝑆𝑛}, 𝑘 ≠ 𝑖; 𝑥𝑖𝑗′ is a new feasible solution produced from the previous solution 𝑥𝑖𝑗 and the randomly selected neighboring solution 𝑥𝑘𝑗; 𝜓 is a random number in [−1, 1], which controls the production of a neighboring food source position around 𝑥𝑖𝑗; and 𝑗 and 𝑘 are randomly chosen indexes. In each iteration, only one dimension of each position is changed. Provided that the nectar of the new position is higher than that of the previous one, the bee memorizes the new position and forgets the old one.

In the ABC algorithm, there is a control parameter called limit. If a food source is not improved once limit is exceeded, it is assumed to be abandoned by its employed bee, and the employed bee associated with that food source becomes a scout that searches for a new food source randomly, which helps avoid local optima.

3. Particle Swarm Optimization (PSO)

As a swarm-based stochastic optimization method, the PSO algorithm was developed by Kennedy and Eberhart [19], based on the social behavior of bird flocking or fish schooling. The original PSO maintains a population of particles 𝑥𝑖 = (𝑥𝑖,1, 𝑥𝑖,2, . . . , 𝑥𝑖,𝑑), 𝑖 = 1, 2, . . . , 𝑆𝑛, which are at first distributed uniformly over the search space. Each particle represents a potential solution to an optimization problem. After the randomly produced solutions are assigned to the particles, the velocities of the particles are updated at each iteration using the best solution found by the particle in previous iterations and the global best solution found by the particles so far. This is formulated as follows:

V𝑖^𝑗(𝑘 + 1) = V𝑖^𝑗(𝑘) + 𝑐1 ∗ 𝑟1 ∗ [𝑝𝑖,𝑗^best(𝑘) − 𝑥𝑖^𝑗(𝑘)] + 𝑐2 ∗ 𝑟2 ∗ [𝑔𝑗^best(𝑘) − 𝑥𝑖^𝑗(𝑘)], (4)

where V𝑖^𝑗(𝑘 + 1) (with −Vmax ≤ V𝑖^𝑗(𝑘 + 1) ≤ Vmax) represents the rate of position change of the particle; 𝑥𝑖^𝑗(𝑘) is the 𝑖th particle in the 𝑗th dimension at step 𝑘; V𝑖^𝑗(𝑘) is the velocity of the 𝑖th particle in the 𝑗th dimension at step 𝑘; 𝑝𝑖,𝑗^best is the personal best position of the 𝑖th particle in the 𝑗th dimension at time step 𝑘; 𝑔best is the global best position obtained by the population at step 𝑘; 𝑐1 and 𝑐2 are the positive acceleration constants used

to scale the contributions of the cognitive and social components, respectively; 𝑟1 and 𝑟2, the stochastic elements of the algorithm, are random numbers in the range [0, 1]. For each particle 𝑖, Kennedy and Eberhart [19] proposed that the position 𝑥𝑖 can be updated in the following manner:

𝑥𝑖^𝑗(𝑘 + 1) = 𝑥𝑖^𝑗(𝑘) + V𝑖^𝑗(𝑘 + 1). (5)

Considering a minimization problem, the personal best solution of the particle at the next step 𝑘 + 1 is calculated as

𝑝𝑖^best(𝑘 + 1) = { 𝑝𝑖^best(𝑘), if 𝑓(𝑥𝑖(𝑘 + 1)) ≥ 𝑓(𝑝𝑖^best(𝑘)); 𝑥𝑖(𝑘 + 1), if 𝑓(𝑥𝑖(𝑘 + 1)) < 𝑓(𝑝𝑖^best(𝑘)). (6)

The global best position 𝑔best is determined by using (7), where 𝑆𝑛 is the number of particles:

𝑔best = min {𝑓(𝑝𝑖^best)}, 𝑖 = 1, 2, . . . , 𝑆𝑛, (7)

where 𝑓 is the objective function.

In (4), to control the exploration and exploitation abilities of the swarm, Shi and Eberhart proposed a new parameter called the "inertia weight 𝜔" [20]. The inertia weight controls the momentum of the particle by weighing the contribution of the previous velocity. With the inertia weight 𝜔, (4) becomes

V𝑖^𝑗(𝑘 + 1) = 𝜔 ∗ V𝑖^𝑗(𝑘) + 𝑐1 ∗ 𝑟1 ∗ [𝑝𝑖,𝑗^best(𝑘) − 𝑥𝑖^𝑗(𝑘)] + 𝑐2 ∗ 𝑟2 ∗ [𝑔𝑗^best(𝑘) − 𝑥𝑖^𝑗(𝑘)]. (8)

Based on this description of PSO, we can see that the particles have a tendency to fly towards better and better search areas over the course of the search process; so, the PSO algorithm can enforce a steady improvement in solution quality.

4. Hybrid Approach (ABC-PS)

From the above discussion of ABC and PSO, it is clear that the global best solution of the population is not directly used in the ABC algorithm; at the same time, it can be concluded that when the particles in PSO get stuck in a local minimum, they may be unable to escape it. To overcome these disadvantages of the two algorithms, we propose a hybrid global optimization approach by combining the ABC algorithm with the PSO search mechanism. In this algorithm, the initial population is generated by using good point set theory.

4.1. Background on Good Point Set. Let 𝐺𝑑 be the 𝑑-dimensional unit cube in Euclidean space. If 𝑥 = (𝑥1, 𝑥2, . . . , 𝑥𝑑) ∈ 𝐺𝑑, then 0 ≤ 𝑥𝑖 ≤ 1 (𝑖 = 1, 2, . . . , 𝑑). Let 𝑝𝑛(𝑘) be a set of 𝑛 points in 𝐺𝑑; then 𝑝𝑛(𝑘) = {(𝑥1^(𝑛)(𝑘), . . . , 𝑥𝑑^(𝑛)(𝑘)) | 0 ≤ 𝑥𝑖^(𝑛)(𝑘) ≤ 1, 1 ≤ 𝑘 ≤ 𝑛, 1 ≤ 𝑖 ≤ 𝑑}. Let 𝑟 = (𝑟1, 𝑟2, . . . , 𝑟𝑑) be a point in 𝐺𝑑, and let 𝑁𝑛(𝑟) = 𝑁𝑛(𝑟1, 𝑟2, . . . , 𝑟𝑑) denote the number of points in 𝑝𝑛(𝑘) satisfying 0 ≤ 𝑥𝑖^(𝑛)(𝑘) ≤ 𝑟𝑖, 𝑖 = 1, . . . , 𝑑.

Definition 1. Let 𝜑(𝑛) = sup_{𝑟∈𝐺𝑑} |𝑁𝑛(𝑟)/𝑛 − |𝑟||, where |𝑟| = 𝑟1 𝑟2 ⋅ ⋅ ⋅ 𝑟𝑑; then 𝜑(𝑛) is called the discrepancy of the point set 𝑝𝑛(𝑘).

Definition 2. Let 𝑟 ∈ 𝐺𝑑. If 𝑝𝑛(𝑘) = ({𝑟1 ∗ 𝑘}, . . . , {𝑟𝑑 ∗ 𝑘}), 𝑘 = 1, . . . , 𝑛, has discrepancy 𝜑(𝑛) = 𝑐(𝑟, 𝜖) 𝑛^(−1+𝜖), where 𝑐(𝑟, 𝜖) is a constant depending only on 𝑟 and 𝜖 (𝜖 > 0), then 𝑝𝑛(𝑘) is called a good point set and 𝑟 is called a good point.

Remark 3. If we let 𝑟𝑖 = 2 cos(2𝜋𝑖/𝑝), 1 ≤ 𝑖 ≤ 𝑑, where 𝑝 is the minimum prime number satisfying (𝑝 − 3)/2 ≥ 𝑑, then 𝑟 is a good point. If we let 𝑟𝑖 = 𝑒^𝑖, 1 ≤ 𝑖 ≤ 𝑑, then 𝑟 is also a good point.

Theorem 4. If 𝑝𝑛(𝑘) (1 ≤ 𝑘 ≤ 𝑛) has discrepancy 𝜑(𝑛) and 𝑓 ∈ 𝐵𝑑, then

|∫_{𝑥∈𝐺𝑑} 𝑓(𝑥) d𝑥 − ∑_{𝑖=1}^{𝑛} 𝑓(𝑝𝑛(𝑖))/𝑛| ≤ 𝑉(𝑓) 𝜑(𝑛), (9)

where 𝐵𝑑 is a 𝑑-dimensional Banach space of functions 𝑓 with norm ‖∙‖, and 𝑉(𝑓) = ‖𝑓 − 𝑢‖ measures the variability of the function 𝑓.

Theorem 5. For arbitrary 𝑓 ∈ 𝐵𝑑, if (9) holds, then there is a point set 𝑝𝑛(𝑘) (1 ≤ 𝑘 ≤ 𝑛) whose discrepancy is not more than 𝜑(𝑛).

Theorem 6. Suppose 𝑓(𝑥) satisfies |𝜕𝑓/𝜕𝑥𝑖| ≤ 𝐿, 1 ≤ 𝑖 ≤ 𝑑; |𝜕²𝑓/𝜕𝑥𝑖𝜕𝑥𝑗| ≤ 𝐿, 1 ≤ 𝑖 ≤ 𝑗 ≤ 𝑑; . . . ; |𝜕^𝑑𝑓/𝜕𝑥1 ⋅ ⋅ ⋅ 𝜕𝑥𝑑| ≤ 𝐿, where 𝐿 is an absolute constant. When we estimate the integral of a function 𝑓 over the 𝑑-dimensional unit hypercube 𝐺𝑑, namely 𝑢 = ∫_{𝑥∈𝐺𝑑} 𝑓(𝑥) d𝑥, by the average value of 𝑓 over any point set 𝑝𝑛(𝑘) (1 ≤ 𝑘 ≤ 𝑛), 𝑄𝑛 = ∑_{𝑖=1}^{𝑛} 𝑓(𝑝𝑛(𝑖))/𝑛, then the integration error 𝐸𝑛 = 𝑢 − 𝑄𝑛 is not smaller than 𝑜(𝑛^(−1)).

By Theorems 4–6, it can be seen that if we estimate the integral based on a good point set, the discrepancy 𝜑(𝑛) = 𝑐(𝑟, 𝜖) 𝑛^(−1+𝜖) is related only to 𝑛. This is a good approach for high-dimensional approximation computation. In other words, the idea of the good point set is to spread the points more evenly than random points.

For the 𝑑-dimensional local search space 𝐻, the so-called good point set containing 𝑛 points can be found as follows:

𝑝𝑛(𝑘) = {({𝑟1 ∗ 𝑘}, {𝑟2 ∗ 𝑘}, . . . , {𝑟𝑑 ∗ 𝑘}), 𝑘 = 1, . . . , 𝑛}, (10)

where 𝑟𝑖 = 2 cos(2𝜋𝑖/𝜌), 1 ≤ 𝑖 ≤ 𝑑, 𝜌 is the minimum prime number satisfying 𝜌 ≥ 2𝑑 + 3, and {𝑟𝑖 ∗ 𝑘} is the fractional part of 𝑟𝑖 ∗ 𝑘 (alternatively, with 𝑟𝑖 = 𝑒^𝑖, 1 ≤ 𝑖 ≤ 𝑑).

Since the good point set principle is defined on the unit hypercube or hypersphere, in order to map the 𝑛 good points from the space 𝐻 = [0, 1]^𝑑 to the search space 𝑇 = [𝑥^min, 𝑥^max]^𝑑, we define the following transformation:

𝑥𝑖 = 𝑥𝑖^min + {𝑟𝑖 ∗ 𝑘} ∗ (𝑥𝑖^max − 𝑥𝑖^min). (11)

In the following, for the two-dimensional space [−1, 1]², we generate 100 points using the good point set method and the random method, respectively, and show their distributions (see Figures 1 and 2). It can be seen that the good point set is uniform; moreover, for a fixed sampling number, the resulting distribution is the same every time, so the good point set method has good stability.

Figure 1: 100 points generated by using the good point set method.

Figure 2: 100 points generated by using the random method.

4.2. Chaotic Search Operation. In our algorithm, assume that 𝑔best is the best solution of the current iteration. To enrich the search behavior around 𝑔best and to avoid being trapped in a local optimum, chaotic dynamics is incorporated into our algorithm; the details are given as follows. Firstly, the well-known logistic equation is employed to generate a chaotic sequence, defined as follows:

ch_{𝑖+1} = 4 ∗ ch_𝑖 ∗ (1 − ch_𝑖), 1 ≤ 𝑖 ≤ 𝐾, (12)

where 𝐾 is the length of the chaotic sequence. Then ch_𝑖 is mapped to a chaotic vector in the interval [𝑥^min, 𝑥^max]:

CH_𝑖 = 𝑥^min + ch_𝑖 ∗ (𝑥^max − 𝑥^min), 𝑖 = 1, . . . , 𝐾, (13)

where 𝑥^min and 𝑥^max are the lower and upper bounds of the variable 𝑥, respectively. Finally, the following equation is adopted to generate the new candidate solution 𝑥̂:

𝑥̂ = (1 − 𝜆) ∗ 𝑔best + 𝜆 ∗ CH_𝑖, 𝑖 = 1, . . . , 𝐾, (14)

where 𝜆 is the shrinking factor, defined as follows:

𝜆 = (maxcycle − iter + 1) / maxcycle, (15)

where maxcycle is the maximum number of iterations and iter is the index of the current iteration.

By (15), it can be seen that 𝜆 becomes small as the evolution generations increase. Furthermore, combining (13) and (14), it is easy to see that the smaller 𝜆 is, the narrower the chaotic search becomes. Thus, from the above discussion, we know that the local search range becomes smaller as the evolution proceeds.

4.3. The Statement of the ABC-PS Algorithm. Based on the above, the ABC-PS algorithm is given in this subsection.

Algorithm 7.

(1) Set the population size 𝑆𝑛; give the maximum number of iterations maxcycle, 𝜔max, 𝜔min, Vmax, and Vmin.
(2) Use (11) to create an initial population {𝑥𝑖 | 𝑖 = 1, . . . , 𝑆𝑛}. Calculate the function values of the population {𝑓𝑖 | 𝑖 = 1, . . . , 𝑆𝑛}; find the best solution 𝑔best and the personal bests 𝑝𝑖^best of the population.
(3) While the stopping criterion is not met do
(4) For 𝑖 = 1 to 𝑆𝑛 do % the employed bee phase
(5) Update the velocities and the positions of the particles by using (8) and (5), respectively.
(6) Determine the personal bests of the particles by using (6), and update trail.
(7) End if
(8) Determine the 𝑔best of the population.
(9) End for
(10) For 𝑖 = 1 to 𝑆𝑛 do % the onlooker phase
(11) If rand < Prob(𝑖)
(12) Update the velocity of food source 𝑖 and its position by using (8) and (5).
(13) Determine the personal bests of the particles by using (6), and update trail.
(14) End if
(15) If trail𝑖 = max(trail) > limit, then % the scout phase: replace 𝑥𝑖 with a new solution produced by (1).
(16) End if
(17) Determine the 𝑔best of the population.
(18) Perform the chaotic search 𝐾 times on 𝑔best, and redetermine the 𝑔best of the population.
(19) iter = iter + 1.
(20) End while
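The chaotic refinement of 𝑔best in step (18) can be sketched as follows. This is a minimal Python illustration under our own assumptions, not the paper's reference implementation: the objective `sphere`, the greedy acceptance of improved candidates, and the use of one logistic value per trial across all dimensions are our choices.

```python
import random

def chaotic_search(g_best, f, low, high, K, lam):
    """Refine the current best solution via K chaotic trials, after (12)-(15):
    a logistic map drives candidates blended with g_best; the blend narrows
    as the shrinking factor lam decreases."""
    best, best_val = list(g_best), f(g_best)
    ch = random.uniform(0.01, 0.99)            # logistic-map seed, away from fixed points
    for _ in range(K):
        ch = 4.0 * ch * (1.0 - ch)             # logistic equation, (12)
        CH = low + ch * (high - low)           # map into [x_min, x_max], (13)
        cand = [(1.0 - lam) * b + lam * CH for b in best]   # blend with g_best, (14)
        val = f(cand)
        if val < best_val:                     # keep only improving candidates
            best, best_val = cand, val
    return best, best_val

lam = (100 - 10 + 1) / 100                     # shrinking factor per (15), at iteration 10 of 100
sphere = lambda x: sum(v * v for v in x)       # illustrative objective (ours)
gb, gv = chaotic_search([0.3, -0.2], sphere, -5.0, 5.0, K=50, lam=lam)
```

Since candidates are accepted only when they improve on 𝑔best, the refined value can never be worse than the input, mirroring the "redetermine the 𝑔best" semantics of step (18).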

Figure 3: The relation of the best value and each iteration (the function Matyas).

Figure 4: The relation of the best value and each iteration (the function Booth).

Figure 5: The relation of the best value and each iteration (the function Camelback).

5. Comparison of the ABC-PS with the Other Hybrid Methods Based on the ABC

In this section, ABC-PS is applied to minimize a set of benchmark functions. In all simulations, the inertia weight in (4) is defined as follows:

𝜔 = ((𝜔max − 𝜔min) / maxcycle) ∗ iter, (16)

where 𝜔max and 𝜔min are the maximum and minimum inertia weights, respectively, and iter denotes the current iteration count. From this formula it can be seen that, as the iterations increase, the velocity V𝑖 of the particle 𝑥𝑖 becomes more and more important. 𝜔max and 𝜔min are set to 0.9 and 0.4, respectively; Vmin = −1, Vmax = 1, and 𝑐1 = 𝑐2 = 1.3.

Experiment 1. In order to evaluate the performance of the ABC-PS algorithm, we use a test bed of four traditional numerical benchmarks, as illustrated in Table 1: the Matyas, Booth, 6-Hump Camelback, and Goldstein-Price functions. The characteristics, dimensions, initial ranges, and formulations of these functions are given in Table 1. Empirical results of the proposed hybrid method are compared with those of the basic ABC algorithm and a recent algorithm, COABC [15].

The values of the common parameters used in the three algorithms, such as the population size and the total evaluation number, are chosen to be the same. The population size is 50 for all functions, and the limit is 10. For each function, all the methods were run 30 times independently. To make the comparison clear, the global minimums, maximum numbers of iterations, mean best values, and standard deviations are given in Table 2. For ABC and ABC-PS, Figures 3, 4, 5, and 6 illustrate the change of the best value over the iterations. The experiment shows that the ABC-PS method is much better than the original ABC algorithm and COABC.

Experiment 2. To further verify the performance of ABC-PS, 12 numerical benchmark functions are selected from the literature [13–15]. This set consists of many different kinds of problems: unimodal, multimodal, regular, irregular, separable, nonseparable, and multidimensional. The characteristics, dimensions, initial ranges, and formulations of these functions are listed in Table 3.

In order to compare the performance of ABC-PS, COABC [15], GABC [13], and PABC [14] fairly, the experiments are conducted in the same way as described in [13–15]. The minimums, maximum iterations, mean best values, and standard deviations found after 30 runs are given in Table 4. The bold font in Table 4 indicates the optimum value among the different methods. From Table 4, it can be seen that ABC-PS is superior to the other algorithms in most cases, except on 𝑓5 and 𝑓6.

Table 1: Benchmark functions used in Experiment 1.

Functions   C   Range
𝑓1 = 0.26(𝑥1² + 𝑥2²) − 0.48𝑥1𝑥2   UN   [−10, 10]
𝑓2 = (𝑥1 + 2𝑥2 − 7)² + (2𝑥1 + 𝑥2 − 5)²   MS   [−10, 10]
𝑓3 = 4𝑥1² − 2.1𝑥1⁴ + 𝑥1⁶/3 + 𝑥1𝑥2 − 4𝑥2² + 4𝑥2⁴   MN   [−5, 5]
𝑓4 = [1 + (𝑥1 + 𝑥2 + 1)²(19 − 14𝑥1 + 3𝑥1² − 14𝑥2 + 6𝑥1𝑥2 + 3𝑥2²)] ∗ [30 + (2𝑥1 − 3𝑥2)²(18 − 32𝑥1 + 12𝑥1² + 48𝑥2 − 36𝑥1𝑥2 + 27𝑥2²)]   MN   [−2, 2]
C: characteristic; U: unimodal; M: multimodal; N: nonseparable; S: separable.

Table 2: Results obtained by ABC, COABC, and ABC-PS algorithms.

Function Min Max iteration Algorithm Mean SD


ABC 6.03𝑒 − 07 3.64𝑒 − 07
𝑓1 0 1000 COABC 4.45𝑒 − 07 4.63𝑒 − 07
ABC-PS 0 0
ABC 1.68𝑒 − 17 1.38𝑒 − 17
𝑓2 0 1000 COABC 6.19𝑒 − 23 2.06𝑒 − 22
ABC-PS 0 0
ABC −1.03 7.20𝑒 − 17
𝑓3 −1.03 1000 COABC −1.03 1.76𝑒 − 16
ABC-PS −1.03162845348988 0
ABC 3 1.47𝑒 − 3
𝑓4 3 1000 COABC 3 3.21𝑒 − 06
ABC-PS 2.99999999999992 6.28e − 16

Table 3: Benchmark functions used in Experiment 2.

Functions   C   D   Range
𝑓1 = 0.26(𝑥1² + 𝑥2²) − 0.48𝑥1𝑥2   UN   2   [−10, 10]
𝑓2 = ∑_{𝑖=1}^{𝑛−1} [100(𝑥_{𝑖+1} − 𝑥𝑖²)² + (𝑥𝑖 − 1)²]   UN   30   [−30, 30]
𝑓3 = 4𝑥1² − 2.1𝑥1⁴ + 𝑥1⁶/3 + 𝑥1𝑥2 − 4𝑥2² + 4𝑥2⁴   MN   2   [−5, 5]
𝑓4 = −20 exp(−0.2 ∗ √((1/𝑛) ∑_{𝑖=1}^{𝑛} 𝑥𝑖²)) − exp((1/𝑛) ∑_{𝑖=1}^{𝑛} cos(2𝜋𝑥𝑖)) + 20 + 𝑒   MN   60   [−32, 32]
𝑓5 = (1/4000) ∑_{𝑖=1}^{𝑛} 𝑥𝑖² − ∏_{𝑖=1}^{𝑛} cos(𝑥𝑖/√𝑖) + 1   MN   60   [−600, 600]
𝑓6 = 418.982887 ∗ 𝑛 − ∑_{𝑖=1}^{𝑛} 𝑥𝑖 sin(√|𝑥𝑖|)   MN   30   [−500, 500]
𝑓7 = 100(𝑥1² − 𝑥2)² + (𝑥1 − 1)² + (𝑥3 − 1)² + 90(𝑥3² − 𝑥4)² + 10.1((𝑥2 − 1)² + (𝑥4 − 1)²) + 19.8(𝑥2 − 1)(𝑥4 − 1)   UN   4   [−10, 10]
𝑓8 = ∑_{𝑖=1}^{𝑛} |𝑥𝑖|^(𝑖+1)   US   30   [−1, 1]
𝑓9 = ∑_{𝑖=1}^{𝑛} |𝑥𝑖| + ∏_{𝑖=1}^{𝑛} |𝑥𝑖|   US   60   [−10, 10]
𝑓10 = max {|𝑥𝑖| | 1 ≤ 𝑖 ≤ 𝑛}   US   60   [−100, 100]
𝑓11 = (1/𝑛) ∑_{𝑖=1}^{𝑛} (𝑥𝑖⁴ − 16𝑥𝑖² + 5𝑥𝑖)   MS   60   [−5, 5]
𝑓12 = ∑_{𝑖=1}^{𝑛} (𝑥𝑖² − 10 cos(2𝜋𝑥𝑖) + 10)   MS   60   [−5.12, 5.12]
C: characteristic; U: unimodal; M: multimodal; N: nonseparable; S: separable.
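The entries of Table 3 transcribe directly into code. As a sketch (function names are ours), 𝑓1 (Matyas) and 𝑓2 (Rosenbrock) read:

```python
def matyas(x):
    """f1 of Table 3: 0.26*(x1^2 + x2^2) - 0.48*x1*x2 (unimodal, nonseparable)."""
    return 0.26 * (x[0] ** 2 + x[1] ** 2) - 0.48 * x[0] * x[1]

def rosenbrock(x):
    """f2 of Table 3: sum of 100*(x_{i+1} - x_i^2)^2 + (x_i - 1)^2
    (unimodal, nonseparable, n = 30 in the experiments)."""
    return sum(100.0 * (x[i + 1] - x[i] ** 2) ** 2 + (x[i] - 1.0) ** 2
               for i in range(len(x) - 1))
```

Both have their global minimum value of 0: at the origin for Matyas and at the all-ones vector for Rosenbrock, matching the "Min" column of Tables 2 and 4.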

Table 4: Performance comparison of ABC-PS with ABC and the state-of-the-art algorithms in [13–15].

Function Min Max iteration Algorithm Mean SD


ABC 6.03𝑒 − 07 3.64𝑒 − 07
𝑓1 0 1000 COABC (with UTEB = 3) 1.42𝑒 − 152 4.44𝑒 − 152
ABC-PS 0 0
ABC 0.24 0.46
𝑓2 0 2000 COABC (with UTEB = 3) 0.08 0.10
ABC-PS 5.494e − 18 0
ABC −1.03 7.20𝑒 − 17
𝑓3 −1.03 1000 COABC (with UTEB = 3) −1.03 2.10𝑒 − 16
ABC-PS −1.03162845348988 0
ABC 3 1.47𝑒 − 3
𝑓4 0 1000 COABC (with UTEB = 3) 3.40𝑒 − 13 6.35𝑒 − 14
ABC-PS 8.881e − 016 0
ABC 4.46𝑒 − 09 6.68𝑒 − 09
𝑓5 0 2000 COABC (with UTEB = 3) 0 0
ABC-PS 1.551𝑒 − 012 3.468𝑒 − 012
ABC 2.05𝑒 + 02 1.63𝑒 + 02
𝑓6 0 1000 COABC (with UTEB = 3) 0 0
ABC-PS −3.637𝑒 − 012 0
ABC 1.71𝑒 − 01 6.94𝑒 − 02
𝑓7 0 2000 COABC (with UTEB = 3) 9.35𝑒 − 03 4.64𝑒 − 03
ABC-PS 8.042e − 07 3.680e − 07
ABC 7.69𝑒 − 22 2.18𝑒 − 21
𝑓8 0 1000 PABC 4.46𝑒 − 61 1.09𝑒 − 60
ABC-PS 8.344e − 067 1.865e − 066
GABC 4.73𝑒 − 13 1.56𝑒 − 13
𝑓9 0 1000 PABC 1.65𝑒 − 18 1.63𝑒 − 18
ABC-PS 2.494e − 018 3.734e − 018
GABC 6.38𝑒 − 01 1.75𝑒 − 01
𝑓10 0 1000 PABC 8.82𝑒 − 01 1.26𝑒 − 01
ABC-PS 2.50e − 006 3.01e − 006
GABC −78.3322 1.32𝑒 − 05
𝑓11 −78.33236 1000 PABC −78.3323 2.62𝑒 − 14
ABC-PS −78.33233 1.004e − 014
GABC 9.35𝑒 − 01 1.87𝑒 − 00
𝑓12 0 1000 PABC 9.58𝑒 − 03 6.27𝑒 − 03
ABC-PS 0 0

6. Conclusion

In this paper, a hybrid ABC algorithm based on a particle swarm search mechanism (ABC-PS) was presented. To overcome the disadvantages of the ABC algorithm, we adopted good point set theory to generate the initial food sources; then, the mechanism of PSO was utilized to search new candidate solutions, improving the exploitation ability of the bee swarm; finally, the chaotic search operator was applied to the best solution of the current iteration to increase the search ability. The experimental results show that ABC-PS exhibits excellent performance and outperforms other algorithms such as ABC, GABC, COABC, and PABC in most cases.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

The authors thank the anonymous reviewers for their suggestions and contributions and the corresponding editor for his/her valuable efforts. The research was supported by NSFC (U1404105, 11171094); the Key Scientific and Technological Project of Henan Province (142102210058); the Doctoral Scientific Research Foundation of Henan Normal University (qd12103); the Youth Science Foundation of Henan Normal University

(2013qk02); Henan Normal University National Research Project to Cultivate the Funded Projects (01016400105); and the Henan Normal University Youth Backbone Teacher Training.

Figure 6: The relation of the best value and each iteration (the function Goldstein).

References

[1] M. Dorigo and T. Stutzle, Ant Colony Optimization, MIT Press, Cambridge, Mass, USA, 2004.
[2] T. Liao, T. Stützle, M. M. de Oca, and M. Dorigo, "A unified ant colony optimization algorithm for continuous optimization," European Journal of Operational Research, vol. 234, no. 3, pp. 597–609, 2014.
[3] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, vol. 4, pp. 1942–1948, Perth, Australia, November-December 1995.
[4] D. Chen and C. Zhao, "Particle swarm optimization with adaptive population size and its application," Applied Soft Computing Journal, vol. 9, no. 1, pp. 39–48, 2009.
[5] D. Karaboga, An Idea Based on Honey Bee Swarm for Numerical Optimization, Erciyes University Press, Erciyes, Turkey, 2005.
[6] D. Karaboga and B. Basturk, "On the performance of artificial bee colony (ABC) algorithm," Applied Soft Computing Journal, vol. 8, no. 1, pp. 687–697, 2008.
[7] M. H. Aghdam, N. Ghasem-Aghaee, and M. E. Basiri, "Text feature selection using ant colony optimization," Expert Systems with Applications, vol. 36, no. 3, pp. 6843–6853, 2009.
[8] B. Yagmahan and M. M. Yenisey, "Ant colony optimization for multi-objective flow shop scheduling problem," Computers and Industrial Engineering, vol. 54, no. 3, pp. 411–420, 2008.
[9] R. E. Perez and K. Behdinan, "Particle swarm approach for structural design optimization," Computers and Structures, vol. 85, no. 19-20, pp. 1579–1588, 2007.
[10] Y.-T. Kao and E. Zahara, "A hybrid genetic algorithm and particle swarm optimization for multimodal functions," Applied Soft Computing Journal, vol. 8, no. 2, pp. 849–857, 2008.
[11] C. Zhang, J. Ning, S. Lu, D. Ouyang, and T. Ding, "A novel hybrid differential evolution and particle swarm optimization algorithm for unconstrained optimization," Operations Research Letters, vol. 37, no. 2, pp. 117–122, 2009.
[12] B. Alatas, "Chaotic bee colony algorithms for global numerical optimization," Expert Systems with Applications, vol. 37, no. 8, pp. 5682–5687, 2010.
[13] G. Zhu and S. Kwong, "Gbest-guided artificial bee colony algorithm for numerical function optimization," Applied Mathematics and Computation, vol. 217, no. 7, pp. 3166–3173, 2010.
[14] W.-F. Gao, S.-Y. Liu, and L.-L. Huang, "A novel artificial bee colony algorithm with Powell's method," Applied Soft Computing Journal, vol. 13, no. 9, pp. 3763–3775, 2013.
[15] J. Luo, Q. Wang, and X. Xiao, "A modified artificial bee colony algorithm based on converge-onlookers approach for global optimization," Applied Mathematics and Computation, vol. 219, no. 20, pp. 10253–10262, 2013.
[16] D. Karaboga, B. Gorkemli, C. Ozturk, and N. Karaboga, "A comprehensive survey: artificial bee colony (ABC) algorithm and applications," Artificial Intelligence Review, vol. 42, no. 1, pp. 21–57, 2014.
[17] X. Shi, Y. Li, H. Li, R. Guan, L. Wang, and Y. Liang, "An integrated algorithm based on artificial bee colony and particle swarm optimization," in Proceedings of the 6th International Conference on Natural Computation (ICNC '10), pp. 2586–2590, August 2010.
[18] M. El-Abd, "A hybrid ABC-SPSO algorithm for continuous function optimization," in Proceedings of the IEEE Symposium on Swarm Intelligence (SIS '11), pp. 1–6, Paris, France, April 2011.
[19] J. Kennedy and R. C. Eberhart, "A new optimizer using particle swarm theory," in Proceedings of the 6th International Symposium on Micro Machine and Human Science, pp. 39–43, Nagoya, Japan, 1995.
[20] Y. Shi and R. C. Eberhart, "A modified particle swarm optimizer," in Proceedings of the IEEE International Conference on Evolutionary Computation (ICEC '98), pp. 69–73, Anchorage, Alaska, USA, May 1998.