
Int. J. Swarm Intelligence, Vol. 1, No. 1, 2013

Firefly algorithm: recent advances and applications

Xin-She Yang*
School of Science and Technology,
Middlesex University,
The Burroughs, London NW4 4BT, UK
E-mail: [email protected]
*Corresponding author

Xingshi He
School of Science,
Xi’an Polytechnic University,
No. 19 Jinhua South Road, Xi’an 710048, China
E-mail: [email protected]

Abstract: Nature-inspired metaheuristic algorithms, especially those based on swarm intelligence, have attracted much attention in the last ten years. The firefly algorithm appeared about five years ago and its literature has expanded dramatically, with diverse applications. In this paper, we will briefly review the fundamentals of the firefly algorithm together with a selection of recent publications. Then, we discuss the optimality associated with balancing exploration and exploitation, which is essential for all metaheuristic algorithms. By comparing with the intermittent search strategy, we conclude that metaheuristics such as the firefly algorithm are better than the optimal intermittent search strategy. We also analyse algorithms and their implications for higher-dimensional optimisation problems.

Keywords: algorithms; bat algorithm; cuckoo search; firefly algorithm; metaheuristic; nature-inspired algorithms.

Reference to this paper should be made as follows: Yang, X-S. and He, X.
(2013) ‘Firefly algorithm: recent advances and applications’, Int. J. Swarm
Intelligence, Vol. 1, No. 1, pp.36–50.

Biographical notes: Xin-She Yang is a Reader in Modelling and Simulation at the School of Science and Technology, Middlesex University. He is an Adjunct Professor of Reykjavik University, Iceland, and a Distinguished Professor at Xi'an Polytechnic University, China. He was a Senior Research Scientist at the UK's National Physical Laboratory. He has authored/edited 14 books and published more than 150 papers. He is the Editor-in-Chief of Int. J. Mathematical Modelling and Numerical Optimisation (IJMMNO).

Xingshi He is a Professor and Deputy Dean at the School of Science, Xi'an Polytechnic University, China. He has many years of research experience in mathematical modelling, statistics and optimisation algorithms. He has published five books and more than 80 papers in journals and conference proceedings. He has won many awards, including a Provincial Distinguished Professorship in 2005 and a Silver Medal for Research Excellence in 2004.

Copyright © 2013 Inderscience Enterprises Ltd.

1 Introduction

Metaheuristic algorithms form an important part of contemporary global optimisation algorithms, computational intelligence and soft computing. These algorithms are usually nature-inspired with multiple interacting agents. A subset of metaheuristics is often referred to as swarm intelligence (SI)-based algorithms, and these SI-based algorithms have been developed by mimicking the so-called SI characteristics of biological agents such as birds, fish, humans and others. For example, particle swarm optimisation was based on the swarming behaviour of birds and fish (Kennedy and Eberhart, 1995), while the firefly algorithm (FA) was based on the flashing pattern of tropical fireflies (Yang, 2008, 2009) and the cuckoo search algorithm was inspired by the brood parasitism of some cuckoo species (Yang and Deb, 2009).
In the last two decades, more than a dozen new algorithms such as particle swarm
optimisation, differential evolution, bat algorithm, FA and cuckoo search have appeared
and they have shown great potential in solving tough engineering optimisation problems
(Yang, 2008, 2010a, 2010b; Bansal and Deep, 2008; Floudas and Pardolos, 2009;
Parpinelli and Lopes, 2011; Gandomi et al., 2011). Among these new algorithms, it has
been shown that FA is very efficient in dealing with multimodal, global optimisation
problems.
In this paper, we will first outline the fundamentals of FA, and then review the latest
developments concerning FA and its variants. We also highlight the reasons why FA is so
efficient. Furthermore, as the balance of exploration and exploitation is important to all
metaheuristic algorithms, we will then discuss the optimality related to search landscape
and algorithms. Using the intermittent search strategy and numerical experiments, we
show that FA is significantly more efficient than the intermittent search strategy.

2 FA and complexity

2.1 Firefly algorithm


FA was first developed by Xin-She Yang in late 2007 and 2008 at Cambridge University (Yang, 2008, 2009), and it is based on the flashing patterns and behaviour of fireflies. In essence, FA uses the following three idealised rules:

• Fireflies are unisex, so one firefly will be attracted to other fireflies regardless of their sex.

• The attractiveness is proportional to the brightness, and they both decrease as the distance increases. Thus, for any two flashing fireflies, the less bright one will move towards the brighter one. If there is no firefly brighter than a particular firefly, it will move randomly.

• The brightness of a firefly is determined by the landscape of the objective function.

As a firefly's attractiveness is proportional to the light intensity seen by adjacent fireflies, we can now define the variation of attractiveness β with the distance r by

\beta = \beta_0 e^{-\gamma r^2},    (1)

where β0 is the attractiveness at r = 0.


The movement of a firefly i that is attracted to another, more attractive (brighter) firefly j is determined by

x_i^{t+1} = x_i^t + \beta_0 e^{-\gamma r_{ij}^2}\left(x_j^t - x_i^t\right) + \alpha_t \epsilon_i^t,    (2)

where the second term is due to the attraction. The third term is randomisation, with α_t being the randomisation parameter and ε_i^t a vector of random numbers drawn from a Gaussian or uniform distribution at time t. If β0 = 0, it becomes a simple random walk. On the other hand, if γ = 0, it reduces to a variant of particle swarm optimisation (Yang, 2008). Furthermore, the randomisation ε_i^t can easily be extended to other distributions such as Lévy flights (Yang, 2008). A demo version of the FA implementation by Xin-She Yang, without Lévy flights for simplicity, can be found at the Mathworks file exchange website (http://www.mathworks.com/matlabcentral/fileexchange/29693-firefly-algorithm).
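For concreteness, the update of equations (1) and (2) can be sketched in a few lines of Python/NumPy. This is only a minimal illustration under our own assumptions (it is not the authors' MATLAB demo); the function name firefly_step, the test objective and all numerical settings below are ours.

```python
import numpy as np

def firefly_step(X, brightness, beta0=1.0, gamma=1.0, alpha=0.01, rng=None):
    """One iteration of the basic FA update of equations (1)-(2).
    X: (n, d) positions; brightness: (n,) objective values (higher is brighter)."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    X_new = X.copy()
    for i in range(n):
        attracted = False
        for j in range(n):
            if brightness[j] > brightness[i]:          # rule 2: move towards brighter fireflies
                r2 = np.sum((X[i] - X[j]) ** 2)         # squared distance r_ij^2
                beta = beta0 * np.exp(-gamma * r2)      # attractiveness, equation (1)
                X_new[i] += beta * (X[j] - X[i]) + alpha * rng.standard_normal(d)
                attracted = True
        if not attracted:                               # brightest firefly performs a random walk
            X_new[i] += alpha * rng.standard_normal(d)
    return X_new

# Usage sketch: maximise the brightness of a simple 2D objective
rng = np.random.default_rng(0)
X = rng.uniform(-5.0, 5.0, size=(25, 2))
for t in range(100):
    I = -np.sum(X ** 2, axis=1)                         # brightness of each firefly
    X = firefly_step(X, I, alpha=0.01 * 10.0 * 0.97 ** t, rng=rng)  # alpha_0 = 0.01 L with L = 10
print("best firefly:", X[np.argmax(-np.sum(X ** 2, axis=1))])
```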

2.2 Parameter settings

As α_t essentially controls the randomness (or, to some extent, the diversity of solutions), we can tune this parameter during iterations so that it varies with the iteration counter t. A good way to express α_t is to use

\alpha_t = \alpha_0 \delta^t, \quad 0 < \delta < 1,    (3)

where α0 is the initial randomness scaling factor, and δ is essentially a cooling factor. For most applications, we can use δ = 0.95 to 0.97 (Yang, 2008).
Regarding the initial α0, simulations show that FA will be more efficient if α0 is associated with the scalings of the design variables. Let L be the average scale of the problem of interest; then we can set α0 = 0.01L initially. The factor 0.01 comes from the fact that random walks require a number of steps to reach the target while balancing local exploitation without jumping too far in a few steps (Yang, 2009, 2010a).
The parameter β0 controls the attractiveness, and parametric studies suggest that β0 = 1 can be used for most applications. However, γ should also be related to the scaling L. In general, we can set γ = 1/√L. If the scaling variations are not significant, then we can set γ = O(1).
For most applications, we can use the population size n = 15 to 100, though the
best range is n = 25 to 40 (Yang, 2008, 2009).
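These rules of thumb are easy to collect into a small helper. The snippet below is a sketch; the function names and the way the average scale L is estimated from the variable bounds are our own assumptions, not part of the original paper.

```python
import math

def fa_parameters(lower, upper, delta=0.97):
    """Suggested FA settings from Section 2.2, given per-variable bounds."""
    L = sum(u - l for l, u in zip(lower, upper)) / len(lower)  # average scale of the problem
    alpha0 = 0.01 * L                                          # initial randomness, alpha_0 = 0.01 L
    gamma = 1.0 / math.sqrt(L)                                 # light absorption, gamma = 1/sqrt(L)
    beta0 = 1.0                                                # attractiveness at r = 0
    return alpha0, beta0, gamma, delta

def alpha_schedule(alpha0, t, delta=0.97):
    """Equation (3): alpha_t = alpha_0 * delta^t with 0 < delta < 1."""
    return alpha0 * delta ** t

# Example: a 2D problem with -10 <= x_i <= 10
alpha0, beta0, gamma, delta = fa_parameters([-10, -10], [10, 10])
print(alpha0, beta0, gamma, [round(alpha_schedule(alpha0, t), 4) for t in (0, 10, 50)])
```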

2.3 Algorithm complexity

Almost all metaheuristic algorithms are simple in terms of complexity, and thus they are easy to implement. FA has two inner loops when going through the population n, and one outer loop for the iteration t. So the complexity in the extreme case is O(n²t). As n is small (typically, n = 40) and t is large (say, t = 5,000), the computation cost is relatively inexpensive because the algorithm complexity is linear in terms of t. The main computational cost will be in the evaluations of the objective functions, especially for external black-box type objectives. This latter case is also true for all metaheuristic algorithms. After all, for all optimisation problems, the most computationally expensive part is the objective evaluations.
If n is relatively large, it is possible to use one inner loop by ranking the
attractiveness or brightness of all fireflies using sorting algorithms. In this case, the
algorithm complexity of FA will be O(nt log(n)).
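One possible way to realise this single-inner-loop idea is sketched below under our own assumptions (the paper does not prescribe a specific implementation): rank the fireflies by brightness once per iteration and attract every firefly only towards the current best, so the per-iteration cost is dominated by the O(n log n) sort.

```python
import numpy as np

def fa_step_ranked(X, brightness, beta0=1.0, gamma=1.0, alpha=0.01, rng=None):
    """Reduced-cost FA iteration: sort by brightness (O(n log n)) and
    attract every firefly towards the current best instead of all brighter ones."""
    rng = np.random.default_rng() if rng is None else rng
    order = np.argsort(brightness)[::-1]           # ranking by brightness (descending)
    best = X[order[0]]
    r2 = np.sum((X - best) ** 2, axis=1)           # squared distances to the best firefly
    beta = beta0 * np.exp(-gamma * r2)[:, None]    # attractiveness, equation (1)
    X_new = X + beta * (best - X) + alpha * rng.standard_normal(X.shape)
    X_new[order[0]] = best + alpha * rng.standard_normal(X.shape[1])  # best one walks randomly
    return X_new
```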

2.4 FA in applications

FA has attracted much attention and has been applied to many applications
(Apostolopoulos and Vlachos, 2011; Chatterjee et al., 2012; Hassanzadeh et al., 2011;
Sayadi et al., 2010; Yang, 2010c; Horng et al., 2012; Horng, 2012). Horng et al. (2012)
and Horng (2012) demonstrated that the firefly-based algorithm used the least computation time for digital image compression, while Banati and Bajaj (2011) used FA for feature
selection and showed that FA produced consistent and better performance in terms of
time and optimality than other algorithms.
In engineering design problems, Gandomi et al. (2011) and Azad and Azad
(2011) confirmed that FA can efficiently solve highly non-linear, multimodal design
problems. Basu and Mahanti (2011) as well as Chatterjee et al. (2012) have applied FA
in antenna design optimisation and showed that FA can outperform artificial bee colony
(ABC) algorithm. In addition, Zaman and Matin (2012) also found that FA can outperform PSO and obtain the global best results.
Sayadi et al. (2010) developed a discrete version of FA which can efficiently solve
NP-hard scheduling problems, while a detailed analysis has demonstrated the efficiency
of FA over a wide range of test problems, including multiobjective load dispatch
problems (Apostolopoulos and Vlachos, 2011; Yang, 2009, 2012a). Furthermore, FA can
also solve scheduling and travelling salesman problems in a promising way (Palit et al.,
2011; Jati and Suyanto, 2011; Yousif et al., 2011).
Classification and clustering are another important area of application of FA, with excellent performance (Senthilnath et al., 2011; Rajini and David, 2012). For example, Senthilnath et al. (2011) provided an extensive performance study by comparing FA with 11 different algorithms and concluded that FA can be used efficiently for clustering. In most cases, FA outperformed all the other 11 algorithms. In addition, FA has also been applied
to train neural networks (Nandy et al., 2012).
For optimisation in dynamic environments, FA can also be very efficient as shown
by Farahani et al. (2011a, 2011b) and Abshouri et al. (2011).

2.5 Variants of FA

For discrete problems and combinatorial optimisation, discrete versions of FA have been
developed with superior performance (Sayadi et al., 2010; Hassanzadeh et al., 2011;
Jati and Suyanto, 2011; Fister et al., 2012; Durkota, 2011), which can be used for
travelling-salesman problems, graph colouring and other applications.
In addition, extension of FA to multiobjective optimisation has also been investigated
(Apostolopoulos and Vlachos, 2011; Yang, 2012b).
A few studies show that chaos can enhance the performance of FA (Coelho et al.,
2011; Yang, 2011), while other studies have attempted to hybridise FA with other
algorithms to enhance their performance (Giannakouris et al., 2010; Horng, 2012; Horng
and Liou, 2011; Rampriya et al., 2010).

3 Why is FA so efficient?

As the literature about FA expands and new variants emerge, many studies have pointed out that FA can outperform many other algorithms. We may now naturally ask: 'Why is it so efficient?' To answer this question, let us briefly analyse FA itself.
FA is swarm-intelligence-based, so it has similar advantages to other swarm-intelligence-based algorithms. In fact, a simple analysis of parameters suggests that some PSO variants such as accelerated PSO (Yang et al., 2011) are a special case of FA when γ = 0 (Yang, 2008).
However, FA has two major advantages over other algorithms: automatic subdivision and the ability to deal with multimodality. First, FA is based on attraction, and attractiveness decreases with distance. This leads to the fact that the whole population can automatically subdivide into subgroups, and each group can swarm around each mode or local optimum. Among all these modes, the best global solution can be found. Second, this subdivision allows the fireflies to find all optima simultaneously if the population size is sufficiently higher than the number of modes.
Mathematically, 1/√γ controls the average distance of a group of fireflies that can be seen by adjacent groups. Therefore, a whole population can subdivide into subgroups with a given average distance. In the extreme case when γ = 0, the whole population will not subdivide. This automatic subdivision ability makes FA particularly suitable for highly non-linear, multimodal optimisation problems.
In addition, the parameters in FA can be tuned to control the randomness as iterations proceed, so that convergence can also be sped up by tuning these parameters. The above advantages make FA flexible enough to deal with continuous problems, clustering and classification, and combinatorial optimisation as well.
As an example, let us use two functions to demonstrate the computational cost saved by FA. For details, please see the more extensive studies by Yang (2009). For De Jong's function with d = 256 dimensions,


f(x) = \sum_{i=1}^{256} x_i^2.    (4)

Genetic algorithms required 25,412 ± 1,237 evaluations to reach an accuracy of 10^{-5} of the optimal solution, while PSO needed 17,040 ± 1,123 evaluations. For FA, we achieved the same accuracy with 5,657 ± 730 evaluations. This saves about 78% and 67% of the computational cost, compared to GA and PSO, respectively.
For Yang's forest function

f(x) = \left(\sum_{i=1}^{d} |x_i|\right) \exp\left[-\sum_{i=1}^{d} \sin\left(x_i^2\right)\right], \quad -2\pi \le x_i \le 2\pi,    (5)

GA required 37,079 ± 8,920 evaluations with a success rate of 88% for d = 16, and PSO required 19,725 ± 3,204 evaluations with a success rate of 98%. FA obtained a 100% success rate with just 5,152 ± 2,493 evaluations. Compared with GA and PSO, FA saved about 86% and 74% of the overall computational effort, respectively.
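For reference, the two benchmarks of equations (4) and (5) can be coded directly. The snippet below (our own, purely illustrative) defines them and checks their known minima at the origin.

```python
import numpy as np

def de_jong(x):
    """Equation (4): sphere function, global minimum 0 at x = 0."""
    x = np.asarray(x, dtype=float)
    return np.sum(x ** 2)

def yang_forest(x):
    """Equation (5): f(x) = (sum |x_i|) * exp(-sum sin(x_i^2)), -2*pi <= x_i <= 2*pi.
    The global minimum is 0 at x = 0, since sum |x_i| >= 0 and the exponential is positive."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.abs(x)) * np.exp(-np.sum(np.sin(x ** 2)))

print(de_jong(np.zeros(256)), yang_forest(np.zeros(16)))   # both print 0.0
```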
As an example of automatic subdivision, we now use FA to find the global maxima of the following function

f(x) = \left(\sum_{i=1}^{d} |x_i|\right) \exp\left(-\sum_{i=1}^{d} x_i^2\right),    (6)

with the domain −10 ≤ x_i ≤ 10 for all i = 1, 2, ..., d, where d is the number of dimensions. This function has multiple global optima (Yang, 2010c). In the case of d = 2, we have four equal maxima f_* = 1/\sqrt{e} ≈ 0.6065 at (1/2, 1/2), (1/2, −1/2), (−1/2, 1/2) and (−1/2, −1/2), and a unique global minimum at (0, 0).
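As a quick, purely illustrative check of these values:

```python
import numpy as np

def multi_peak(x):
    """Equation (6): f(x) = (sum |x_i|) * exp(-sum x_i^2)."""
    x = np.asarray(x, dtype=float)
    return np.sum(np.abs(x)) * np.exp(-np.sum(x ** 2))

for peak in [(0.5, 0.5), (0.5, -0.5), (-0.5, 0.5), (-0.5, -0.5)]:
    print(peak, round(multi_peak(peak), 4))    # each prints 0.6065 = 1/sqrt(e)
```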

Figure 1 Four global maxima at (±1/2, ±1/2)

In the 2D case, the four-peak function is shown in Figure 1, and these global maxima can be found using the implemented FA after about 500 function evaluations. This corresponds to 25 fireflies evolving for 20 generations or iterations. The initial locations of the 25 fireflies and their final locations after 20 iterations are shown in Figure 2.
We can see that FA can indeed automatically subdivide its population into subgroups.

Figure 2 (a) Initial locations of 25 fireflies and (b) their final locations after 20 iterations

4 Search optimality

4.1 Intensification versus diversification

The main components of any metaheuristic algorithm are intensification and diversification, or exploitation and exploration (Blum and Roli, 2003; Yang, 2012a). Diversification means generating diverse solutions so as to explore the search space on a global scale, while intensification means focusing the search on a local region by exploiting the information that a current good solution has been found in this region. This is in combination with the selection of the best solutions.
Exploration in metaheuristics can often be achieved by the use of randomisation (Blum and Roli, 2003; Yang, 2008, 2009), which enables an algorithm to jump out of any local optimum so as to explore the search space globally. Randomisation can also be used for local search around the current best if the steps are limited to a local region. When the steps are large, randomisation can explore the search space on a global scale. Fine-tuning the right amount of randomness and balancing local search and global search are crucially important in controlling the performance of any metaheuristic algorithm.
Exploitation is the use of local knowledge of the search and of the solutions found so far, so that new search moves can concentrate on the local regions or neighbourhoods where an optimum may be nearby; however, such a local optimum may not be the global optimum. Exploitation tends to use strong local information such as gradients, the shape of the mode such as convexity, and the history of the search process. A classic technique is the so-called hill-climbing, which uses local gradients or derivatives intensively.
Empirical knowledge from observations and simulations of the convergence behaviour of common optimisation algorithms suggests that exploitation tends to increase the speed of convergence, while exploration tends to decrease the convergence rate of the algorithm. On the other hand, too much exploration increases the probability of finding the global optimum, while strong exploitation tends to make the algorithm become trapped in a local optimum. Therefore, there is a fine balance between the right amount of exploration and the right degree of exploitation. Despite its importance, there is no practical guideline for this balance.

4.2 Landscape-based optimality or algorithm optimality?

It is worth pointing out that the balance between exploration and exploitation is often discussed in the context of optimisation algorithms; however, it is more often related to the type of problem of interest, and thus such a balance is problem-specific, depending on the actual landscape of the objective function. Consequently, there may be no universality at all. Therefore, we may have to distinguish between landscape-dependent optimality for exploration and exploitation, and algorithm-based optimality. The former focuses on the landscape, while the latter focuses on algorithms.
Landscape-based optimality focuses on the information in the search landscape, and it is hoped that a better (or even the best) approach/algorithm may find the optimal solutions with the minimum effort (time, number of evaluations), while the algorithm-based approach treats objective functions as black boxes and tries to use part of the information available during iterations to work out the best ways to find optimal solutions. As to which approach is better, the answer may depend on one's viewpoint and focus. In any case, a good combination of both approaches may be needed to reach certain optimality.

4.3 Intermittent search strategy

Even though there is no practical guideline for landscape-based optimality, some preliminary work on very limited cases exists in the literature and may provide some insight into the possible choice of parameters so as to balance these components. In fact, the intermittent search strategy is a landscape-based search strategy (Bénichou et al., 2006).
Intermittent search strategies concern an iterative strategy consisting of a slow phase and a fast phase (Bénichou et al., 2006, 2011). Here the slow phase is the detection phase, carried out by slowing down and using intensive, static local search techniques, while the fast phase is search without detection and can be considered as an exploration technique. For example, the detection of a static target within a small region of radius a in a much larger region b, where a ≪ b, can be modelled as a slow diffusive process in terms of random walks with a diffusion coefficient D.
Let τ_a and τ_b be the mean times spent in the intensive detection stage and in the exploration stage, respectively, in the 2D case. The diffusive search process is governed by the mean first-passage times satisfying the following equations (Bénichou et al., 2011)

D \nabla_r^2 t_1 + \frac{1}{2\pi\tau_a} \int_0^{2\pi} \left[t_2(\mathbf{r}) - t_1(\mathbf{r})\right] d\theta + 1 = 0,    (7)

\mathbf{u} \cdot \nabla_r t_2(\mathbf{r}) - \frac{1}{\tau_b} \left[t_2(\mathbf{r}) - t_1(\mathbf{r})\right] + 1 = 0,    (8)

where t_1 and t_2 are the mean first-passage times during the search process, starting from the slow and fast stages, respectively, and u is the mean search speed.
After some lengthy mathematical analysis, the optimal balance of these two stages can be estimated as

r_{\text{optimal}} = \frac{\tau_a}{\tau_b^2} \approx \frac{D}{a^2 \left[2 - \frac{1}{\ln(b/a)}\right]^2}.    (9)

Assuming that the search steps have a uniform velocity u at each step on average, the minimum times required for each phase can be estimated as

\tau_a^{\min} \approx \frac{D \ln^2(b/a)}{2u^2 \left[2\ln(b/a) - 1\right]},    (10)

and

\tau_b^{\min} \approx \frac{a}{u} \sqrt{\ln(b/a) - \frac{1}{2}}.    (11)
When u → ∞, these relationships lead to the above optimal ratio of the two stages. It is worth pointing out that the above result is only valid for 2D cases, and there are no general results for higher dimensions, except for some special 3D cases (Bénichou et al., 2006). Now let us use these limited results to help choose the possible values of the algorithm-dependent parameters in FA (Yang, 2008, 2009), as an example.
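Equations (9)-(11) are easy to evaluate numerically. The helper below is only a sketch; the specific values of D, a, b and u in the example call are ours, chosen for illustration.

```python
import math

def intermittent_2d(D, a, b, u):
    """Optimal balance of the 2D intermittent search, equations (9)-(11)."""
    log_ba = math.log(b / a)
    r_opt = (D / a ** 2) / (2.0 - 1.0 / log_ba) ** 2                   # equation (9): tau_a / tau_b^2
    tau_a = D * log_ba ** 2 / (2.0 * u ** 2 * (2.0 * log_ba - 1.0))    # equation (10)
    tau_b = (a / u) * math.sqrt(log_ba - 0.5)                          # equation (11)
    return r_opt, tau_a, tau_b

print(intermittent_2d(D=0.5, a=1.0, b=10.0, u=1.0))   # r_opt ~ 0.20 for b = 10a and D = a^2/2
```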
For higher-dimensional problems, no result exists. One possible extension is to use extrapolation to get an estimate. Based on the results for the 2D and 3D cases (Bénichou et al., 2011), we can estimate that, for any d-dimensional case with d ≥ 3,

\frac{\tau_1}{\tau_2^2} \sim O\left(\frac{D}{a^2}\right), \qquad \tau_m \sim O\left(\frac{b}{u}\left(\frac{b}{a}\right)^{d-1}\right),    (12)

where τ_m is the mean search time or average number of iterations.
This extension may not be good news for higher-dimensional problems, as the mean number of function evaluations needed to find optimal solutions can increase exponentially as the dimensionality increases. However, in practice, we do not need to find the guaranteed global optimum; we may be satisfied with suboptimality, and sometimes we may be 'lucky' enough to find the global optimum even within a limited/fixed number of iterations. This may indicate that there is a huge gap between theoretical understanding and the observations as well as the run-time behaviour in practice. More studies are highly needed to address these important issues.

5 Numerical experiments

5.1 Landscape-based optimality: a 2D example

If we use simple, isotropic random walks in 2D for local exploration to demonstrate landscape-based optimality, then we have

D \approx \frac{s^2}{2},    (13)

where s is the step length of a jump during a unit time interval or each iteration step.
From equation (9), the optimal ratio of exploitation and exploration in the special case of b ≈ 10a becomes

\frac{\tau_a}{\tau_b^2} \approx 0.2.    (14)

In the case of b/a → ∞, we have τ_a/τ_b^2 ≈ 1/8, which implies that more time should be spent on the exploration stage. It is worth pointing out that the naive guess of a 50-50 probability for each stage is not the best choice. More effort should focus on exploration so that the best solutions found by the algorithm can be globally optimal with possibly the least computing effort. However, this conclusion may be implicitly linked to the assumption that the optimal solutions or search targets are multimodal. Obviously, for a unimodal problem, once we know its modality, we should focus more on exploitation to obtain quick convergence.
In the case studies to be described below, we have used FA to find the optimal solutions to the benchmarks. If we set τ_b = 1 as the reference timescale, then we find that the optimal ratio is between 0.15 and 0.24, which is roughly close to the above theoretical result.
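The numbers quoted above follow from equation (9) with D ≈ s²/2; reproducing them requires assuming that the step length s is comparable to the target scale a (our own assumption), which gives D/a² ≈ 1/2:

```python
import math

def optimal_ratio(D_over_a2, b_over_a):
    """tau_a / tau_b^2 from equation (9), written in dimensionless form."""
    return D_over_a2 / (2.0 - 1.0 / math.log(b_over_a)) ** 2

# D = s^2/2 with s ~ a gives D/a^2 ~ 1/2 (our assumption to reproduce the quoted values)
print(round(optimal_ratio(0.5, 10.0), 3))   # ~0.204, i.e. equation (14)
print(round(optimal_ratio(0.5, 1e9), 3))    # 0.131, slowly approaching the 1/8 limit as b/a -> infinity
```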

5.2 Standing-wave function

Let us first use a multimodal test function to see how to find the fine balance between exploration and exploitation in an algorithm for a given task. A standing-wave test function can be a good example (Yang, 2009, 2010a):

f(x) = \left\{1 + \exp\left[-\sum_{i=1}^{d}\left(\frac{x_i}{\beta}\right)^{10}\right] - 2\exp\left[-\sum_{i=1}^{d}(x_i - \pi)^2\right]\right\} \cdot \prod_{i=1}^{d} \cos^2 x_i,    (15)

which is multimodal with many local peaks and valleys. It has a unique global minimum f_min = 0 at (π, π, ..., π) in the domain −20 ≤ x_i ≤ 20, where i = 1, 2, ..., d and β = 15. In this case, we can estimate that R = 20 and a ≈ π/2; this means that R/a ≈ 12.7, and in the case of d = 2 we have
p_e \approx \tau_{\text{optimal}} \approx \frac{1}{2\left[2 - 1/\ln(R/a)\right]^2} \approx 0.19.    (16)

This indicates that the algorithm should spend 80% of its computational effort on global explorative search, and 20% of its effort on local intensive search.
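The standing-wave function of equation (15) and the estimate of equation (16) can be reproduced as follows (an illustrative sketch; the variable names are ours):

```python
import math
import numpy as np

def standing_wave(x, beta=15.0):
    """Equation (15); global minimum ~0 at x = (pi, ..., pi), with -20 <= x_i <= 20."""
    x = np.asarray(x, dtype=float)
    return (1.0 + np.exp(-np.sum((x / beta) ** 10))
            - 2.0 * np.exp(-np.sum((x - np.pi) ** 2))) * np.prod(np.cos(x) ** 2)

R, a = 20.0, math.pi / 2.0
p_e = 1.0 / (2.0 * (2.0 - 1.0 / math.log(R / a)) ** 2)   # equation (16)
print(standing_wave(np.full(2, np.pi)), round(p_e, 2))    # ~ -3e-07 (essentially the minimum 0) and 0.19
```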
For FA, we have used n = 15 and 1,000 iterations. We have calculated the fraction of iterations/function evaluations devoted to exploitation versus exploration, that is, Q = exploitation/exploration; thus Q may affect the quality of the solutions. A set of 25 numerical experiments has been carried out for each value of Q, and the results are summarised in Table 1.

Table 1 Variations of Q and its effect on the solution quality

Q        0.4        0.3        0.2        0.1        0.05
f_min    9.4e-11    1.2e-12    2.9e-14    8.1e-12    9.2e-11

This table clearly shows that Q ≈ 0.2 provides the optimal balance of local exploitation and global exploration, which is consistent with the theoretical estimate.
Though there are no direct analytical results for higher dimensions, we can expect that the emphasis on global exploration also holds for higher-dimensional optimisation problems. Let us study this test function in various higher dimensions.

5.3 Comparison for higher dimensions

As the dimensionality increases, we usually expect the number of iterations needed to find the global optimum to increase. In terms of the mean search time/iterations, the intermittent search theory of Bénichou et al. (2006, 2011) suggests that

\tau_m\big|_{d=1} = \frac{2b}{u}\sqrt{\frac{b}{3a}},    (17)

\tau_m\big|_{d=2} = \frac{2b^2}{au}\sqrt{\ln(b/a)},    (18)

\tau_m\big|_{d=3} = \frac{2.2\,b}{u}\left(\frac{b}{a}\right)^2.    (19)

For higher dimensions, we can only estimate the main trend based on the intermittent search strategy. That is,

\frac{\tau_1}{\tau_2^2} \sim O\left(\frac{D}{a^2}\right), \qquad \tau_m \sim O\left(\frac{b}{u}\left(\frac{b}{a}\right)^{d-1}\right),    (20)

which means that the number of iterations may increase exponentially with the dimension d. It is worth pointing out that the optimal ratio between the two stages should be independent of the dimensionality. In other words, once we find the optimal balance between exploration and exploitation, we can use the algorithm for any high dimension.
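To see how quickly the theoretical iteration counts grow, equations (18)-(20) can be tabulated for the setting used for Figure 3 (b = 20, a = π/2, u = 1); the d > 3 values use the extrapolated scaling of equation (20) and are therefore only order-of-magnitude estimates (our own arithmetic, for illustration):

```python
import math

b, a, u = 20.0, math.pi / 2.0, 1.0           # setting used for Figure 3

tau_2d = (2.0 * b ** 2 / (a * u)) * math.sqrt(math.log(b / a))   # equation (18)
tau_3d = (2.2 * b / u) * (b / a) ** 2                            # equation (19)
print(round(tau_2d), round(tau_3d))          # roughly 8.1e2 and 7.1e3 iterations

# Extrapolated trend of equation (20): tau_m grows like (b/u) * (b/a)^(d-1)
for d in range(4, 11):
    print(d, f"{(b / u) * (b / a) ** (d - 1):.1e}")
```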

Figure 3 Comparison of the actual number of iterations with the theoretical results from the intermittent search strategy (see online version for colours)

Note: This clearly shows that FA is better than the intermittent search strategy.

Now let us use FA to carry out the search in higher dimensions for the above standing-wave function and compare its performance with the implications of the intermittent search strategy. For the case of b = 20, a = π/2 and u = 1, Figure 3 shows the comparison of the numbers of iterations suggested by the intermittent search strategy and the actual numbers of iterations used by FA to obtain the globally optimal solution with a tolerance or accuracy of five decimal places.
It can be seen clearly that the number of iterations needed by the intermittent search strategy increases exponentially with the number of dimensions, while the actual number of iterations used by the algorithm increases only slightly, seemingly like a low-order polynomial. This suggests that FA is very efficient and requires a far smaller (often by many orders of magnitude) number of function evaluations.

6 Conclusions

Nature-inspired metaheuristic algorithms have gained popularity, which is partly due to their ability to deal with non-linear global optimisation problems. We have reviewed the fundamentals of FA and the latest developments with diverse applications. At the time of writing, a quick Google search suggests that there are about 323 papers on FA since 2008. This review can only cover a fraction of the literature. There is no doubt that FA will be applied to more challenging problems in the near future, and its literature will continue to expand.
On the other hand, we have also highlighted the importance of exploitation and exploration and their effect on the efficiency of an algorithm. We then used the intermittent search strategy theory as a preliminary basis for analysing these key components and for finding possibly optimal settings of the algorithm-dependent parameters.
With such insight, we have used FA to find this optimal balance, and confirmed that FA can indeed provide a good balance of exploitation and exploration. We have also shown that FA requires far fewer function evaluations. However, the huge differences between intermittent search theory and the behaviour of metaheuristics in practice also suggest that there is still a large gap between our understanding of algorithms and the actual behaviour of metaheuristics. More studies on metaheuristics are highly needed.
It is worth pointing out that there are two types of optimality here. One concerns, for a given algorithm, the types of problems it can best solve. This is relatively easy to answer because, in principle, we can test an algorithm on a wide range of problems and then select the types of problems the algorithm of interest solves best. On the other hand, the other optimality concerns, for a given problem, which algorithm can find the solutions most efficiently. In principle, we can compare a set of algorithms on the same optimisation problem and hope to find the best algorithm(s). In reality, there may be no such algorithm at all, and all the tested algorithms may not perform well. The search for new algorithms may take substantial research effort.
The theoretical understanding of metaheuristics is still lagging behind. In fact, there is a huge gap between theory and applications. Though theory lags behind, applications in contrast are very diverse and active, with thousands of papers appearing each year. Furthermore, there is another huge gap between small-scale problems and large-scale problems. As most published studies have focused on small, toy problems, there is no guarantee that the methodology that works well for such toy problems will work for large-scale problems. All these issues remain unresolved both in theory and in practice.
As further research topics, most metaheuristic algorithms require good modifications so as to solve combinatorial optimisation problems properly. Though there is great interest and many extensive studies, more work is highly needed in the area of combinatorial optimisation using metaheuristic algorithms. In addition, most current metaheuristic research has focused on small-scale problems; it will be extremely useful if further research can focus on large-scale real-world applications.

References
Abshouri, A.A., Meybodi, M.R. and Bakhtiary, A. (2011) ‘New firefly algorithm based on
multiswarm and learning automata in dynamic environments’, Third Int. Conference on
Signal Processing Systems (ICSPS2011), August 27–28, Yantai, China, pp.73–77.
Apostolopoulos, T. and Vlachos, A. (2011) ‘Application of the firefly algorithm for solving
the economic emissions load dispatch problem’, International Journal of Combinatorics,
Article ID 523806 [online] http://www.hindawi.com/journals/ijct/2011/523806.html.
Azad, S.K. and Azad, S.K. (2011) ‘Optimum design of structures using an improved firefly
algorithm’, International Journal of Optimisation in Civil Engineering, Vol. 1, No. 2,
pp.327–340.
Banati, H. and Bajaj, M. (2011) ‘Firefly based feature selection approach’, Int. J. Computer
Science Issues, Vol. 8, No. 2, pp.473–480.
Bansal, J.C. and Deep, K. (2008) ‘Optimisation of directional overcurrent relay times by particle
swarm optimisation’, in Swarm Intelligence Symposium (SIS 2008), IEEE Publication,
pp.1–7.
Basu, B. and Mahanti, G.K. (2011) ‘Firefly and artificial bees colony algorithm for synthesis
of scanned and broadside linear array antenna’, Progress in Electromagnetic Research B,
Vol. 32, No. 2, pp.169–190.
Bénichou, O., Loverdo, C., Moreau, M. and Voituriez, R. (2006) ‘Two-dimensional intermittent
search processes: an alternative to Lévy flight strategies’, Phys. Rev., Vol. E74, 020102(R).
Bénichou, O., Loverdo, C., Moreau, M. and Voituriez, R. (2011) ‘Intermittent search strategies’,
Review of Modern Physics, Vol. 83, No. 1, pp.81–129.
Blum, C. and Roli, A. (2003) ‘Metaheuristics in combinatorial optimisation: overview and
conceptual comparison’, ACM Comput. Surv., Vol. 35, No. 3, pp.268–308.
Chatterjee, A., Mahanti, G.K. and Chatterjee, A. (2012) ‘Design of a fully digital controlled
reconfigurable switched beam concentric ring array antenna using firefly and particle
swarm optimisation algorithm’, Progress in Electromagnetic Research B, Vol. 36, No. 1,
pp.113–131.
Coelho, L.d.S., Bernert, D.L.d.A. and Mariani, V.C. (2011) ‘A chaotic firefly algorithm applied to
reliability-redundancy optimisation’, in 2011 IEEE Congress on Evolutionary Computation
(CEC ‘11), pp.517–521.
Durkota, K. (2011) Implementation of a Discrete Firefly Algorithm for the QAP Problem within
the Sage Framework, BSc thesis, Czech Technical University.
Farahani, S.M., Abshouri, A.A., Nasiri, B. and Meybodi, M.R. (2011a) ‘A Gaussian firefly
algorithm’, Int. J. Machine Learning and Computing, Vol. 1, No. 5, pp.448–453.
Farahani, S.M., Nasiri, B. and Meybodi, M.R. (2011b) ‘A multiswarm based firefly algorithm in
dynamic environments’, in Third Int. Conference on Signal Processing Systems (ICSPS2011),
August 27–28, Yantai, China, pp.68–72.
Fister Jr., I., Fister, I., Brest, J. and Yang, X.S. (2012) ‘Memetic firefly algorithm for
combinatorial optimisation’, in B. Filipič and J. Šilc (Eds.): Bioinspired Optimisation
Methods and Their Applications (BIOMA2012), pp.75–86, 24–25 May 2012, Bohinj,
Slovenia.

Floudas, C.A. and Pardolos, P.M. (2009) Encyclopedia of Optimisation, 2nd ed., Springer, Berlin,
Germany.
Gandomi, A.H., Yang, X.S. and Alavi, A.H. (2011) ‘Cuckoo search algorithm: a metaheuristic
approach to solve structural optimisation problems’, Engineering with Computers, Vol. 27,
DOI: 10.1007/s00366-011-0241-y.
Giannakouris, G., Vassiliadis, V. and Dounias, G. (2010) ‘Experimental study on a hybrid
nature-inspired algorithm for financial portfolio optimisation’, SETN 2010, Lecture Notes in
Artificial Intelligence (LNAI 6040), pp.101–111.
Hassanzadeh, T., Vojodi, H. and Moghadam, A.M.E. (2011) ‘An image segmentation approach
based on maximum variance intra-cluster method and firefly algorithm’, in Proc. of 7th Int.
Conf. on Natural Computation (ICNC2011), pp.1817–1821.
Horng, M-H., Lee, Y-X., Lee, M-C. and Liou, R-J. (2012) ‘Firefly metaheuristic algorithm for
training the radial basis function network for data classification and disease diagnosis’,
in R. Parpinelli and H.S. Lopes (Eds.): Theory and New Applications of Swarm Intelligence,
pp.115–132.
Horng, M-H. (2012) ‘Vector quantization using the firefly algorithm for image compression’,
Expert Systems with Applications, Vol. 39, No. 1, pp.1078–1091.
Horng, M-H. and Liou, R-J. (2011) ‘Multilevel minimum cross entropy threshold selection
based on the firefly algorithm’, Expert Systems with Applications, Vol. 38, No. 12,
pp.14805–14811.
Jati, G.K. and Suyanto, S. (2011) ‘Evolutionary discrete firefly algorithm for travelling salesman
problem’, ICAIS2011, Lecture Notes in Artificial Intelligence (LNAI 6943), pp.393–403.
Kennedy, J. and Eberhart, R. (1995) ‘Particle swarm optimisation’, in Proc. of the IEEE Int.
Conf. on Neural Networks, Piscataway, NJ, pp.1942–1948.
Nandy, S., Sarkar, P.P. and Das, A. (2012) ‘Analysis of nature-inspired firefly algorithm based
back-propagation neural network training’, Int. J. Computer Applications, Vol. 43, No. 22,
pp.8–16.
Palit, S., Sinha, S., Molla, M., Khanra, A. and Kule, M. (2011) ‘A cryptanalytic attack on the
knapsack cryptosystem using binary firefly algorithm’, in 2nd Int. Conference on Computer
and Communication Technology (ICCCT), 15–17 September, India, pp.428–432.
Parpinelli, R.S. and Lopes, H.S. (2011) ‘New inspirations in swarm intelligence: a survey’,
Int. J. Bio-Inspired Computation, Vol. 3, No. 1, pp.1–16.
Rajini, A. and David, V.K. (2012) ‘A hybrid metaheuristic algorithm for classification using
micro array data’, Int. J. Scientific & Engineering Research, Vol. 3, No. 2, pp.1–9.
Rampriya, B., Mahadevan, K. and Kannan, S. (2010) ‘Unit commitment in deregulated power
system using Lagrangian firefly algorithm’, Proc. of IEEE Int. Conf. on Communication
Control and Computing Technologies (ICCCCT2010), pp.389–393.
Sayadi, M.K., Ramezanian, R. and Ghaffari-Nasab, N. (2010) ‘A discrete firefly meta-heuristic
with local search for makespan minimization in permutation flow shop scheduling
problems’, Int. J. of Industrial Engineering Computations, Vol. 1, No. 1, pp.1–10.
Senthilnath, J., Omkar, S.N. and Mani, V. (2011) ‘Clustering using firefly algorithm: performance
study’, Swarm and Evolutionary Computation, Vol. 1, No. 3, pp.164–171.
Yang, X.S. (2008) Nature-Inspired Metaheuristic Algorithms, Luniver Press, UK.
Yang, X.S. (2009) ‘Firefly algorithms for multimodal optimisation’, in O. Watanabe and
T. Zeugman (Eds.): Proc. 5th Symposium on Stochastic Algorithms, Foundations and
Applications, Lecture Notes in Computer Science, Vol. 5792, pp.169–178.
Yang, X.S. (2010a) Engineering Optimisation: An Introduction with Metaheuristic Applications,
John Wiley and Sons, USA.

Yang, X.S. (2010b) ‘A new metaheuristic bat-inspired algorithm’, in J.R. Gonzalez et al. (Eds.):
Nature Inspired Cooperative Strategies for Optimisation (NICSO 2010), Vol. 284, pp.65–74,
Springer, SCI.
Yang, X.S. (2010c) ‘Firefly algorithm, stochastic test functions and design optimisation’,
Int. J. Bio-Inspired Computation, Vol. 2, No. 2, pp.78–84.
Yang, X.S. and Deb, S. (2009) ‘Cuckoo search via Lévy flights’, Proceedings of World Congress
on Nature & Biologically Inspired Computing (NaBIC 2009, India), IEEE Publications,
USA, pp.210–214.
Yang, X.S. (2011) ‘Chaos-enhanced firefly algorithm with automatic parameter tuning’,
Int. J. Swarm Intelligence Research, Vol. 2, No. 4, pp.1–11.
Yang, X.S. (2012a) ‘Swarm-based metaheuristic algorithms and no-free-lunch theorems’,
in R. Parpinelli and H.S. Lopes (Eds.): Theory and New Applications of Swarm Intelligence,
pp.1–16, Intech Open Science.
Yang, X.S. (2012b) ‘Multiobjective firefly algorithm for continuous optimization’, Engineering
with Computers, Online First, DOI: 10.1007/s00366-012-0254-1.
Yang, X.S., Deb, S. and Fong, S. (2011) ‘Accelerated particle swarm optimization and support
vector machine for business optimization and applications’, Networked Digital Technologies
(NDT 2011), Communications in Computer and Information Science, Vol. 136, Part I,
pp.53–66.
Yousif, A., Abdullah, A.H., Nor, S.M. and Abdelaziz, A.A. (2011) ‘Scheduling jobs on grid
computing using firefly algorithm’, J. Theoretical and Applied Information Technology,
Vol. 33, No. 2, pp.155–164.
Zaman, M.A. and Matin, M.A. (2012) ‘Nonuniformly spaced linear antenna array design using
firefly algorithm’, Int. J. Microwave Science and Technology, Vol. 2012, Article ID: 256759,
8 pp, doi:10.1155/2012/256759.
