Fitness Dependent Optimizer: Inspired by The Bee Swarming Reproductive Process
ABSTRACT In this paper, a novel swarm intelligent algorithm is proposed, known as the fitness dependent
optimizer (FDO). The reproductive swarming process of bees and their collective decision-making have
inspired this algorithm; it has no algorithmic connection with the honey bee algorithm or the artificial
bee colony algorithm. It is worth mentioning that the FDO is considered a particle swarm optimization
(PSO)-based algorithm that updates the search agent position by adding velocity (pace). However, the FDO
calculates velocity differently; it uses the problem fitness function value to produce weights, and these
weights guide the search agents during both the exploration and exploitation phases. Throughout this paper,
the FDO algorithm is presented, and the motivation behind the idea is explained. Moreover, the FDO
is tested on a group of 19 classical benchmark test functions, and the results are compared with three
well-known algorithms: PSO, the genetic algorithm (GA), and the dragonfly algorithm (DA); in addition,
the FDO is tested on the IEEE Congress on Evolutionary Computation Benchmark Test Functions (CEC-C06, 2019 Competition) [1]. The results are compared with three modern algorithms: DA, the whale
optimization algorithm (WOA), and the salp swarm algorithm (SSA). The FDO results show better per-
formance in most cases and comparative results in other cases. Furthermore, the results are statistically
tested with the Wilcoxon rank-sum test to show the significance of the results. Likewise, the stability of the FDO
in both the exploration and exploitation phases is verified, and its performance is validated using different standard
measurements. Finally, the FDO is applied to real-world applications as evidence of its feasibility.
Heuristic algorithms search for a solution by trial and error; they hope that a quality solution will be found in a reasonable amount of time. Similarly, they tend to use specific randomization mechanisms and local searches in various ways. More studies and developments have been conducted on heuristic algorithms to make what is known as metaheuristic algorithms. Metaheuristic algorithms have better performance than heuristic algorithms, which is why the "meta" prefix, meaning "higher" or "beyond", was added. However, researchers currently use these two terms (heuristic and metaheuristic) interchangeably, as there is little difference in their definitions [3]–[5].

The complexity of real-world problems that exist around us makes it impossible to search every possible solution simply because of time, space, and cost considerations. As a result, low-cost, fast, and more intelligent mechanisms are required. Therefore, researchers have studied the behaviors of animals and natural phenomena to understand how they solve their problems; for example, how ants find their path, how a group of fish, birds, or flies avoids the enemy or hunts its prey, and how gravity works. Thus, these algorithms, which are inspired by nature, are known as nature-inspired algorithms.

Development in nature-inspired metaheuristic algorithms began in the 1960s at the University of Michigan. John Holland and his colleagues published their genetic algorithm (GA) book in 1960 and republished it in 1970 and 1983 [6]. An algorithm that is inspired by the annealing process of metal, known as simulated annealing (SA), was developed by Kirkpatrick et al. [7]. Nevertheless, in the past two decades, this field has witnessed many major signs of progress. For instance, particle swarm optimization (PSO), which was proposed by James Kennedy and Russell C. Eberhart, has been used for many real-world applications [8]. PSO was inspired by the swarm intelligence of fish and birds while the authors were studying a flock of birds. They found that they could apply these behaviors to optimization problems; later, PSO became a base algorithm for other algorithms, including our algorithm. R. Storn and K. Price developed differential evolution (DE) in 1997. It is a vector-based algorithm that outperforms the GA in many applications [9]. After that, in 2001, Zong Woo Geem et al. developed the harmony search (HS), which was applied to many optimization problems such as transport modeling and water distribution [10]. Then, in 2004, C. Tovey and S. Nakrani developed the honey bee algorithm. They used it for Internet hosting center optimization [11]. This was followed by the development of a novel bee algorithm proposed by Pham et al. [12], and one year later, D. Karaboga et al. created the artificial bee colony (ABC) algorithm in 2005. In 2009, Xin-She Yang developed the firefly algorithm (FA) [13]; then, the cuckoo search (CS) algorithm was proposed by the same author [14]. Additionally, Xin-She Yang proposed a bat-inspired algorithm in 2010 [15]. Then, in 2015, S. Mirjalili proposed the dragonfly algorithm (DA) [16], which is a PSO-based algorithm inspired by the dragonfly swarm behavior of attraction to food and distraction by the enemy; the whale optimization algorithm (WOA) in 2016 [17] and the salp swarm algorithm (SSA) in 2017 were proposed by the same author [18]. Two new variants of the ABC were proposed by Laizhong Cui et al.; the authors showed that they managed to enhance the exploitation of the original ABC algorithm, as it is well known that the original ABC has good exploration ability but suffers from slow exploitation. In their first work, they employed an adaptive method for the population size (AMPS) [19]. In the second paper, they proposed a ranking-based adaptive ABC algorithm (ARABC) [20]; the focus of both works was to improve the exploitation ability of the original ABC. Nonetheless, two more improvements to the original ABC were suggested in 2018: firstly, by proposing the distance-fitness-based neighbor search mechanism (DFnABC), which is a new variant of the ABC [21], and secondly, by proposing the dual-population framework (DPF), again to enhance the ABC convergence speed [22]. Additionally, in 2018, a new algorithm inspired by vapour-liquid equilibrium (VLE) was proposed by Enrique M. Cortés-Toro and his colleagues; the authors claim that their algorithm can solve highly nonlinear optimization problems in continuous domains [23].

Various research has been conducted in the field of nature-inspired metaheuristic algorithms; additionally, many efficient algorithms have been proposed in the literature. Nevertheless, there is always room for new algorithms, as long as the proposed algorithm provides better or comparative performance, as explained by David H. Wolpert and William G. Macready in their work titled "No Free Lunch Theorems for Optimization" in 1997. Thus, there is no single global algorithm that can provide the optimum solution for every optimization problem. For example, if algorithm "A" works better than algorithm "B" on optimization problem X, then there is a high chance that there is an optimization problem Y on which algorithm "B" works better than algorithm "A" [24]. For these reasons, a new algorithm (FDO) is proposed in this paper. This algorithm is inspired by the swarming behavior of bees during the reproductive process when they search for new hives. This algorithm has nothing in common with the ABC algorithm (except that both algorithms are inspired by bee behavior, and both are nature-inspired metaheuristic algorithms).

The major contributions of this paper are summarized as follows:
1- A novel swarm intelligent algorithm is proposed that uses certain characteristics of bee swarms. For example, it uses the fitness function to generate suitable weights that help the algorithm in both the exploration and exploitation phases, which provides fast convergence towards global optimality together with fair coverage of the search landscape.
2- One more unique feature of FDO is that it stores the past search agent pace (velocity) for potential reuse in future steps (more on this is discussed in Section IV).
3- FDO can be considered a PSO-based algorithm since it uses a similar mechanism for updating agents' positions.
FIGURE 1. Honey bee anatomy [27].
FIGURE 2. Bee swarming process cycle [28].
TABLE 1. FDO-related bee biological entities.

The pace of the artificial scout bees is expressed as follows:

pace = x_{i,t} * r,                      if fw = 1 or fw = 0 or x_{i,t} fitness = 0    (3)
pace = (x_{i,t} − x*_{i,t}) * fw * (−1),  if fw > 0 and fw < 1 and r < 0                (4)
pace = (x_{i,t} − x*_{i,t}) * fw,         if fw > 0 and fw < 1 and r ≥ 0                (5)
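For concreteness, a minimal Python sketch of this update rule is given below. It assumes that r is a random number drawn uniformly from [−1, 1], that x*_{i,t} is the best solution found so far, and that fw is the fitness-derived weight produced from the fitness function; the defining formula of fw (Equation (2)) is not shown in this excerpt, so the form used here, together with the greedy acceptance of improving moves, is an assumption for illustration only.

```python
import numpy as np

def scout_pace(x, x_best, fw, r):
    """Pace of one artificial scout, following Equations (3)-(5).

    x      : current position (1-D array); x_best : best position found so far
    fw     : fitness weight, assumed to lie in [0, 1]
    r      : random number, assumed to be drawn uniformly from [-1, 1]
    """
    if fw == 1.0 or fw == 0.0:
        return x * r                      # Equation (3): randomly scaled pace
    if r < 0:
        return (x - x_best) * fw * -1.0   # Equation (4)
    return (x - x_best) * fw              # Equation (5)

def fdo_step(positions, objective, wf, rng):
    """One illustrative FDO iteration: x_{i,t+1} = x_{i,t} + pace (minimization).

    The fitness-weight formula below, |f(x_best)/f(x)| - wf clipped to [0, 1],
    is an assumed stand-in for Equation (2), which is not shown in this excerpt.
    Accepting only improving moves is likewise an assumption.
    """
    best = min(positions, key=objective)
    best_fit = objective(best)
    updated = []
    for x in positions:
        fit = objective(x)
        # fw = 0 also covers the "fitness of x equals 0" case of Equation (3).
        fw = 0.0 if fit == 0 else min(max(abs(best_fit / fit) - wf, 0.0), 1.0)
        candidate = x + scout_pace(x, best, fw, rng.uniform(-1.0, 1.0))
        updated.append(candidate if objective(candidate) < fit else x)
    return updated
```

Calling fdo_step repeatedly with, for example, objective = lambda x: float(np.sum(x**2)) and a list of random NumPy vectors illustrates how the random pace of Equation (3) and the fitness-weighted pace of Equations (4)-(5) interact.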
The classical benchmark test functions are divided into three groups: unimodal test functions, multimodal test functions, and composite test functions. Each set of these test functions is used to benchmark certain perspectives of the algorithm. Unimodal benchmark functions, for example, are used for testing the exploitation level and convergence of the algorithm, as their name might imply that they have a single optimum. However, multimodal benchmark functions have multiple optimal solutions, and they are used for testing the local optima avoidance and exploration levels. As multimodal functions have many optimum solutions, one of them is the global optimum solution and the rest are local optimum solutions; an algorithm must avoid the local optimum solutions and converge to the global optimum solution. Furthermore, the composite benchmark functions are mostly combined, shifted, rotated, and biased versions of other test functions. Composite benchmark functions provide diverse shapes for different regions of the search landscape; they also have a very large number of local optima. This type of benchmark function demonstrates the complications that exist in real-world search spaces (see Tables 6, 7, and 8 in the appendix) [16].

Each algorithm in Table (2) has been tested 30 times using 30 search agents, each with 10 dimensions; in each test, the algorithm was allowed to look for the best optimum solution in 500 iterations, and then the average and standard deviation were calculated. Regarding parameter sets, the GA, PSO, and DA parameter sets are as described in [16], but for the FDO parameters, there is only wf to be tuned. In Table (2), for all test functions wf was equal to 0, except test functions (2 and 6), where wf was equal to 1. Every test function was minimized towards 0.0 except TF8, which was minimized towards -418.9829 (see Appendix Tables 6, 7 and 8 for more details about the test function conditions). For example, some test functions were shifted by some degrees from the origin point to prove that the algorithms were not biased towards the origin.

In Table (2), the results of FDO, DA, PSO, and GA are presented. The TF1 to TF6 results showed that FDO generally provided better results than the other algorithms; however, the TF7 results showed the other algorithms were better. FDO showed poor performance in TF8, even though it had better results than PSO. In contrast, in TF9, FDO provided a better result than both GA and DA, and comparative results were produced by PSO. In TF10 to TF13 and TF18, FDO provided relatively comparative results to the other algorithms. However, the results of TF14 to TF17 and TF19 confirm that the FDO algorithm outperformed DA, PSO, and GA in all cases.

B. CEC-C06 2019 BENCHMARK TEST FUNCTIONS
A group of 10 modern CEC benchmark test functions is used as an extra evaluation of FDO. These test functions were developed by Professor Suganthan and his colleagues for single-objective optimization problems [1]. The test functions are known as "The 100-Digit Challenge" and are intended to be used in an annual optimization competition; see Table (9) in the appendix.
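The reporting protocol used for Tables (2) and (3) — repeated independent runs with a fixed budget, then the mean and standard deviation of the best fitness — can be sketched as follows. The optimizer interface, the random-search stand-in, and the sphere objective are illustrative placeholders; only the 30-run, 30-agent, 500-iteration budget is taken from the text.

```python
import numpy as np

def benchmark(optimizer, objective, dim, runs=30, agents=30, iterations=500, seed=0):
    """Repeat an optimizer on one test function; report mean/std of the best fitness.

    'optimizer' is assumed to be a callable
        optimizer(objective, dim, agents, iterations, rng) -> best_fitness
    which is a placeholder interface, not the paper's actual implementation.
    """
    rng = np.random.default_rng(seed)
    best_fitness = [optimizer(objective, dim, agents, iterations, rng) for _ in range(runs)]
    return float(np.mean(best_fitness)), float(np.std(best_fitness))

# Example with a trivial stand-in "optimizer" (pure random search) on the sphere function.
def random_search(objective, dim, agents, iterations, rng):
    samples = rng.uniform(-100, 100, size=(agents * iterations, dim))
    return float(min(objective(x) for x in samples))

sphere = lambda x: float(np.sum(x ** 2))
mean, std = benchmark(random_search, sphere, dim=10)
print(f"average = {mean:.4e}, std dev = {std:.4e}")
```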
Functions CEC04 to CEC10 are shifted and rotated, whereas functions CEC01 to CEC03 are not; however, all test functions are scalable. The parameter set was defined by the CEC benchmark developers: functions CEC04 to CEC10 were set as 10-dimensional minimization problems in the [−100, 100] boundary range, whereas CEC01 to CEC03 have different dimensions, as shown in Table 9 in the Appendix. For convenience, all CEC global optima were unified toward the point 1. FDO is compared with three modern optimization algorithms: DA, WOA, and SSA. The reasons behind selecting these algorithms are: 1) they are all PSO-based algorithms, the same as FDO; 2) all of them are well cited in the literature; 3) they are proven to have outstanding performance both on benchmark test functions and on real-world problems; and 4) these algorithms' implementations are publicly provided by their authors. Regarding the algorithms' parameter settings, their default parameter settings were not modified during the tests; all competitors are set the same as the settings used in their original papers [16]–[18]. Interested readers can find these algorithms' MATLAB implementations and their parameter setting specifications here [30]. Additionally, the FDO default parameter set wf = 0 is used for all test functions.

Each algorithm was allowed to search the landscape for 500 iterations using 30 agents. As shown in Table (3), FDO outperforms the other algorithms except in CEC06, even though the other algorithms have comparative results in the CEC03, CEC05, and CEC09 benchmarks. For example, WOA has the same result as FDO in CEC03, but the WOA standard deviation is equal to 0; this shows that WOA produces the same result every time it is used, with no chance for further improvement.

C. STATISTICAL TESTS
To show that the results presented in Table (2) and Table (3) are statistically significant, the p values of the Wilcoxon rank-sum test are found for all test functions, and the results of the statistical comparison are shown in Table (4) and Table (5). In Table (4), the comparison is conducted only between the FDO and DA algorithms because the DA algorithm was already tested against both PSO and GA in [16]. According to the mentioned work, it has been proven that the DA results are statistically significant compared with PSO and GA.

TABLE 4. The Wilcoxon rank-sum test for classical benchmarks.

Again, as shown in Table (4), the FDO results are considered significant in all statistical tests (unimodal, multimodal and composite test functions), except in TF2; that is because its p value is more than 0.05. There are two unusual results in the composite test functions, in both TF16 and TF18, because the DA algorithm provided the same fitness function value for each of the 30 different individual tests.
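As a concrete illustration of the statistical procedure described above, the snippet below compares two sets of 30 best-fitness values with the Wilcoxon rank-sum test and flags significance at the 0.05 level; the example arrays are synthetic and do not reproduce the paper's actual Table (4) or Table (5) numbers.

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(1)

# Synthetic stand-ins for 30 independent best-fitness values per algorithm on one test function.
fdo_results = rng.normal(loc=0.10, scale=0.02, size=30)
da_results = rng.normal(loc=0.15, scale=0.03, size=30)

statistic, p_value = ranksums(fdo_results, da_results)
print(f"p value = {p_value:.4g}")
print("significant at 0.05" if p_value < 0.05 else "not significant at 0.05")
```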
FIGURE 4. Search history of the FDO algorithm on unimodal, multimodal, and composite test functions.
FIGURE 5. The trajectory of FDO’s search agents on unimodal, multimodal, and composite test functions.
FIGURE 6. Average fitness of FDO’s search agents on unimodal, multimodal, and composite test functions.
Table (5) shows the Wilcoxon rank-sum test of FDO against DA, WOA, and SSA for the 10 CEC benchmark test functions. The results show that the FDO performances are statistically significant in all cases, except in test function CEC03 for the DA and WOA algorithms and in test functions CEC04 and CEC08 for the WOA algorithm. The results of Table (4) and Table (5) prove that the FDO results are statistically significant; consequently, the existence of the FDO algorithm is statistically justified.

FIGURE 7. Convergence curve of the FDO algorithm on unimodal, multimodal, and composite test functions.
FIGURE 8. Nonuniform antenna array and a thinned antenna array [22].
FIGURE 10. Global best with average fitness results for 200 iterations with 20 artificial scout bees on aperiodic antenna array designs.

D. QUANTITATIVE MEASUREMENT METRICS
For more detailed analyses and in-depth observation of the FDO algorithm, four more quantitative metrics were used, as shown in Figures (4, 5, 6 and 7). In each experiment, the first test function is selected from the unimodal benchmark functions (TF1 to TF7), the second test function is selected from the multimodal test functions (TF8 to TF13), and the last test function is selected from the composite benchmark functions (TF14 to TF19). The experiment was conducted using 10 search agents, each allowed to search the two-dimensional landscape through 150 iterations.

The first metric measures the convergence and illustrates how well the artificial scouts cover the search landscape. This is merely a search history of artificial scout movements because the positions of the artificial scouts are recorded from the beginning to the end of the test. As presented in Figure (4), the scouts quickly explore the overall area first and then gradually move towards optimality.

The second metric measures the value of the search agents (fitness function value), as shown in Figure (5). The values start large and then steadily decrease. This behavior guarantees that FDO will eventually reach optimality [31].

The third metric is shown in Figure (6); it shows that the average fitness value of all FDO agents decreased dramatically over the course of the iterations, which verifies that the algorithm not only improves the global best agent (x*_i) but also improves the overall agent fitness values.

The fourth metric measures the convergence of the global best agent over the course of the iterations. This proves that x*_i becomes more accurate as the number of iterations increases. Again, clear abrupt changes can be seen due to the emphasis on local search and exploitation; see Figure (7).
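All four metrics can be obtained simply by logging a run; the sketch below shows one hedged way to collect them around a generic single-iteration update. The 10-agent, 2-dimensional, 150-iteration setup comes from the text, but step_fn and the [−100, 100] initialization range are placeholders, not the paper's implementation.

```python
import numpy as np

def run_with_metrics(step_fn, objective, agents=10, dim=2, iterations=150, seed=0):
    """Run a population-based optimizer while logging the four metrics of Section D.

    step_fn(positions, objective, rng) -> new positions is a placeholder for one
    FDO iteration (or any other swarm update rule).
    """
    rng = np.random.default_rng(seed)
    positions = rng.uniform(-100.0, 100.0, size=(agents, dim))

    search_history = []   # metric 1: every visited position (Figure 4)
    trajectories = []     # metric 2: per-agent fitness values per iteration (Figure 5)
    average_fitness = []  # metric 3: population average per iteration (Figure 6)
    convergence = []      # metric 4: best-so-far (global best) per iteration (Figure 7)
    best_so_far = np.inf

    for _ in range(iterations):
        positions = np.asarray(step_fn(positions, objective, rng))
        fits = np.array([objective(x) for x in positions])
        search_history.append(positions.copy())
        trajectories.append(fits)
        average_fitness.append(float(fits.mean()))
        best_so_far = min(best_so_far, float(fits.min()))
        convergence.append(best_so_far)

    return search_history, trajectories, average_fitness, convergence
```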
of 2.25λ0 is a fixed element, and two adjacent elements cannot get closer than 0.25λ0. The fitness function of the problem is described as:

f = max {20 log |AF(θ)|}    (8)

where

AF(θ) = Σ_{i=1}^{4} cos[2π x_i (cos θ − cos θ_s)] + cos[2.25 × 2π (cos θ − cos θ_s)]    (9)

Consider that θ_s = 90° in this work, as defined in Figure (9) [32].

The FDO algorithm is used to optimize this problem, considering the constraints mentioned in Equation (7). Twenty artificial scout search agents are used for 200 iterations, and the result presented in Figure (10) includes the global best fitness in each iteration and the average fitness value according to Equation (8). The result shows that the global best solution reached its optimum in iteration 78 with element positions = {0.713, 1.595, 0.433, 0.130}.
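A minimal Python sketch of the objective in Equations (8)–(9) is given below. The θ grid, the exclusion of the main-beam region, and the use of the base-10 logarithm are assumptions made for illustration; the precise sidelobe region and the constraint handling of Equation (7) are defined in [32] and are not reproduced here.

```python
import numpy as np

def array_factor(theta, x, theta_s=np.pi / 2):
    """AF(theta) per Equation (9): four movable elements plus a fixed element at 2.25*lambda0."""
    movable = np.sum(np.cos(2.0 * np.pi * np.asarray(x)[:, None]
                            * (np.cos(theta) - np.cos(theta_s))), axis=0)
    fixed = np.cos(2.25 * 2.0 * np.pi * (np.cos(theta) - np.cos(theta_s)))
    return movable + fixed

def sidelobe_fitness(x, exclude_halfwidth=np.deg2rad(10)):
    """Equation (8): maximum of 20*log10|AF| over a theta grid.

    Assumption: the maximum is taken outside a small region around the main
    beam at theta_s = 90 degrees; the true sidelobe region is specified in [32].
    """
    theta = np.linspace(0.0, np.pi, 1801)
    mask = np.abs(theta - np.pi / 2) > exclude_halfwidth
    af = np.abs(array_factor(theta[mask], x)) + 1e-12  # avoid log of zero
    return float(np.max(20.0 * np.log10(af)))

print(sidelobe_fitness([0.713, 1.595, 0.433, 0.130]))  # element positions reported in the text
```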
2) FDO ON FREQUENCY MODULATED SOUND WAVES
FDO is also applied to frequency-modulated (FM) sound waves to optimize the parameters of an FM synthesizer, which has an essential role in several modern music systems. This problem has six parameters to be optimized, as indicated in Equation (10):

X = {a1, w1, a2, w2, a3, w3}    (10)

The objective of this problem is to generate a sound, as in Equation (11), that is similar to the target sound, as in Equation (12):

y(t) = a1 · sin(w1 · t · θ + a2 · sin(w2 · t · θ + a3 · sin(w3 · t · θ)))    (11)

y0(t) = (1.0) · sin((5.0) · t · θ + (1.5) · sin((4.8) · t · θ + (2.0) · sin((4.9) · t · θ)))    (12)

where the parameters should be in the range [−6.4, 6.35] and θ = 2π/100. The fitness function can be calculated using Equation (13), which is simply the summation of the squared differences between the results of Equation (11) and Equation (12).
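A hedged sketch of that objective follows. The number of samples (t = 0, …, 100) and the exact form of Equation (13) are assumptions based on the description above; only the parameter range, θ = 2π/100, and the target-wave coefficients are taken from the text.

```python
import numpy as np

THETA = 2.0 * np.pi / 100.0
T = np.arange(0, 101)  # assumption: 101 samples, t = 0..100

def fm_wave(a1, w1, a2, w2, a3, w3, t=T, theta=THETA):
    """Equation (11): nested frequency-modulated sound wave."""
    return a1 * np.sin(w1 * t * theta + a2 * np.sin(w2 * t * theta + a3 * np.sin(w3 * t * theta)))

TARGET = fm_wave(1.0, 5.0, 1.5, 4.8, 2.0, 4.9)  # Equation (12)

def fm_fitness(params):
    """Assumed Equation (13): sum of squared differences between y(t) and y0(t)."""
    y = fm_wave(*params)
    return float(np.sum((y - TARGET) ** 2))

# The target parameters themselves give a fitness of 0.
print(fm_fitness([1.0, 5.0, 1.5, 4.8, 2.0, 4.9]))
```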
Interested readers can find more details on this problem in [33]. FDO is applied to the problem with 30 agents for 200 iterations, and records of the global best solutions and average fitness values can be seen in Figure (11). Parameter-set X = {a1 = 0.974, w1 = −0.241, a2 = −4.3160,
VI. CONCLUSION
A new swarm intelligent algorithm called the fitness dependent optimizer was proposed; it is inspired by the bee reproductive swarming process, in which scout bees search for a new nest site. Additionally, the algorithm is inspired by their collective decision-making. It has no algorithmic connection with the ABC algorithm. FDO employs fitness function values to generate weights that drive the search agents towards optimality. Additionally, FDO depends on a randomization mechanism in the initialization, exploration and exploitation phases. A group of 19 single-objective benchmark test functions was used to test the performance of the FDO. The benchmark test functions were divided into three subgroups (unimodal, multimodal and composite test functions). Additionally, FDO was tested on 10 modern CEC-C06 benchmarks. The FDO results were compared with two well-known algorithms (PSO and GA) and three modern algorithms (DA, WOA, and SSA); FDO outperformed the competing algorithms in the majority of cases and produced comparative results on the others. The test results were compared using the Wilcoxon rank-sum test to prove their statistical significance. Four additional experiments were conducted on the FDO algorithm to measure, prove and verify its performance and credibility. Furthermore, FDO was practically applied to two real-world examples as evidence that the algorithm can address real-life applications.

Generally, after testing on many standard test functions and real-world applications, we found that the number of search agents was somewhat related to FDO performance. Thus, using a small number of agents (below five) would notably decrease the accuracy of the algorithm, and a large number of search agents would improve the accuracy but cost more time and space, partially because the algorithm depends on the fitness weight for a significant part of its searching mechanism; in view of this, it is known as the fitness dependent optimizer.

Future works will adapt, implement and test both multi-objective and binary optimization problems on FDO. Finally, integrating evolutionary operators into FDO and hybridizing it with other algorithms can be considered as potential future research.

VII. APPENDIX
See Tables 6–8.

REFERENCES
[1] K. V. Price, N. H. Awad, M. Z. Ali, and P. N. Suganthan, "The 100-digit challenge: Problem definitions and evaluation criteria for the 100-digit challenge special session and competition on single objective numerical optimization," School Elect. Electron. Eng., Nanyang Technol. Univ., Singapore, Tech. Rep., Nov. 2018.
[2] B. J. Copeland, Alan Turing's Automatic Computing Engine. Oxford, U.K.: Oxford Univ. Press, 2005.
[3] X.-S. Yang, Nature-Inspired Metaheuristic Algorithms. London, U.K.: Luniver Press, 2010.
[4] I. Fister, Jr., X.-S. Yang, I. Fister, J. Brest, and D. Fister, "A brief review of nature-inspired algorithms for optimization," Elektrotehniški Vestnik, vol. 80, no. 3, pp. 116–122, 2013.
[5] L. Bianchi, M. Dorigo, L. M. Gambardella, and W. J. Gutjahr, A Survey on Metaheuristics for Stochastic Combinatorial Optimization, vol. 8. Amsterdam, The Netherlands: Springer, 2008, pp. 239–287.
[6] M. Melanie, An Introduction to Genetic Algorithms. Cambridge, MA, USA: MIT Press, 1999.
[7] S. Kirkpatrick, C. D. Gelatt, Jr., and M. P. Vecchi, "Optimization by simulated annealing," Science, vol. 220, no. 4598, pp. 671–680, May 1983.
[8] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proc. IEEE Int. Conf. Neural Netw., Nov. 1995, pp. 1942–1948.
[9] R. Storn and K. Price, "Differential evolution—A simple and efficient heuristic for global optimization over continuous spaces," J. Global Optim., vol. 11, pp. 341–359, Dec. 1997.
[10] Z. W. Geem, J. H. Kim, and G. V. Loganathan, "A new heuristic optimization algorithm: Harmony search," Simulation, vol. 76, no. 2, pp. 60–68, Feb. 2001.
[11] S. Nakrani and C. Tovey, "On honey bees and dynamic server allocation in internet hosting centers," Adapt. Behav., vol. 12, nos. 3–4, pp. 223–240, Dec. 2004.
[12] D. T. Pham, M. Castellani, and M. Sholedolu, "The bees algorithm," Manuf. Eng. Centre, Cardiff Univ., Cardiff, U.K., Tech. Rep., 2005.
[13] X.-S. Yang and X. He, "Firefly algorithm: Recent advances and applications," Int. J. Swarm Intell., vol. 1, no. 1, pp. 36–50, 2013.
[14] X.-S. Yang and S. Deb, "Cuckoo search via Lévy flights," in Proc. World Congr. Nature Biologically Inspired Comput. (NaBIC), Dec. 2009, pp. 210–214.
[15] X.-S. Yang, "A new metaheuristic bat-inspired algorithm," in Nature Inspired Cooperative Strategies for Optimization, vol. 284. Berlin, Germany: Springer, 2010, pp. 65–74.
[16] S. Mirjalili, "Dragonfly algorithm: A new meta-heuristic optimization technique for solving single-objective, discrete, and multi-objective problems," Neural Comput. Appl., vol. 27, no. 4, pp. 1053–1073, May 2015.
[17] S. Mirjalili and A. Lewis, "The whale optimization algorithm," Adv. Eng. Softw., vol. 95, pp. 51–67, May 2016.
[18] S. Mirjalili, A. H. Gandomi, S. Z. Mirjalili, S. Saremi, H. Faris, and S. M. Mirjalili, "Salp swarm algorithm: A bio-inspired optimizer for engineering design problems," Adv. Eng. Softw., vol. 114, pp. 163–191, Dec. 2017.
[19] L. Cui et al., "A novel artificial bee colony algorithm with an adaptive population size for numerical function optimization," Inf. Sci., vol. 414, pp. 53–67, Nov. 2017.
[20] L. Cui, G. Li, X. Wang, Q. Lin, J. Chen, N. Lu, and J. Lu, "A ranking-based adaptive artificial bee colony algorithm for global numerical optimization," Inf. Sci., vol. 417, pp. 169–185, Nov. 2017.
[21] L. Cui et al., "A smart artificial bee colony algorithm with distance-fitness-based neighbor search and its application," Future Gener. Comput. Syst., vol. 89, pp. 478–493, Dec. 2018.
[22] L. Cui et al., "An enhanced artificial bee colony algorithm with dual-population framework," Swarm Evol. Comput., vol. 43, pp. 184–206, Dec. 2018.
[23] E. M. Cortés-Toro, B. Crawford, J. A. Gómez-Pulido, R. Soto, and J. M. Lanza-Gutiérrez, "A new metaheuristic inspired by the vapour-liquid equilibrium for continuous optimization," Appl. Sci., vol. 8, no. 11, p. 2080, Oct. 2018.
[24] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Trans. Evol. Comput., vol. 1, no. 1, pp. 67–82, Apr. 1997.
[25] J. D. Villa, "Swarming behavior of honey bees (Hymenoptera: Apidae) in southeastern Louisiana," Ann. Entomological Soc. Amer., vol. 97, no. 1, pp. 111–116, Jan. 2004.
[26] A. Avitabile, R. A. Morse, and R. Boch, "Swarming honey bees guided by pheromones," Ann. Entomological Soc. Amer., vol. 68, no. 6, pp. 1079–1082, Nov. 1975.
[27] H. Blackiston, Beekeeping For Dummies, 2nd ed. Medina, OH, USA: A.I. Root Company, 2009.
[28] T. D. Seeley, Honeybee Democracy. Princeton, NJ, USA: Princeton Univ. Press, 2010.
[29] K. M. Schultz, K. M. Passino, and T. D. Seeley, "The mechanism of flight guidance in honeybee swarms: Subtle guides or streaker bees?" J. Exp. Biol., vol. 211, pp. 3287–3295, Oct. 2008.
[30] A. Mirjalili and S. Mirjalili. (2015). Seyedali Mirjalili. Accessed: Jan. 1, 2019. [Online]. Available: http://www.alimirjalili.com/Projects.html
[31] F. van den Bergh and A. P. Engelbrecht, "A study of particle swarm optimization particle trajectories," Inf. Sci., vol. 176, no. 8, pp. 937–971, Apr. 2006.
[32] N. Jin and Y. Rahmat-Samii, "Advances in particle swarm optimization for antenna designs: Real-number, binary, single-objective and multiobjective implementations," IEEE Trans. Antennas Propag., vol. 55, no. 3, pp. 560–562, Mar. 2007.
[33] S. Das and P. N. Suganthan, "Problem definitions and evaluation criteria for CEC 2011 competition on testing evolutionary algorithms on real world optimization problems," Jadavpur Univ., Kolkata, India, and Nanyang Technol. Univ., Singapore, Tech. Rep., Dec. 2010.

JAZA MAHMOOD ABDULLAH was born in Sulaimaniyah, Iraq, in 1985. He received the B.Sc. degree in the field of computer science from the University of Sulaimani, Iraq, in 2008, and the master's degree (Hons.) in the field of software systems and internet technology from the University of Sheffield, U.K., in 2012. He is currently pursuing the Ph.D. degree with the University of Sulaimani, working in the field of artificial intelligence on the subject of swarm-based algorithms. He worked as an Assistant Programmer for two years and then continued his study at the University of Sheffield. After that, he joined the Department of Information Technology, University of Sulaimani, with which he is currently affiliated.

TARIK AHMED RASHID received the Ph.D. degree in computer science and informatics from the College of Engineering, Mathematical and Physical Sciences, University College Dublin (UCD), in 2006, where he was a Postdoctoral Fellow of the Computer Science and Informatics School, from 2006 to 2007. He joined the University of Kurdistan Hewlêr, in 2017. His research interests include three fields. The first field is the expansion of machine learning and data mining to deal with time series applications. The second field is the development of DNA computing, optimization, swarm intelligence, and nature-inspired algorithms and their applications. The third field is networking, telecommunication, and telemedicine applications.