Nature inspired meta heuristic algorithms for optimization
https://doi.org/10.1007/s00607-021-00955-5
SURVEY ARTICLE
Abstract
Optimization and decision-making problems in various fields of engineering have a
major impact in the current era. For currently available data, processing time and memory
utilization are very high because of the data size and the need to scale from zettabytes
to yottabytes. Some problems need a solution to be found from scratch, while other problems
require improving the current best solution. Modelling and implementing a new
heuristic algorithm may be time consuming, but there is strong motivation to do so:
even a minimal improvement in the solution can reduce the computational cost, and
the solution thus obtained is better. In both situations, designing heuristic
and meta-heuristic algorithms has proved its worth. Hyper heuristic approaches are
needed to compute solutions with better time and space complexity. A hyper heuristic
creates a solution by combining heuristics to generate an automated search space from
which generalized solutions can be tuned. This paper provides in-depth knowledge
of nature-inspired computing models, meta-heuristic models, hybrid meta heuristic
models, and hyper heuristic models. This work's major contribution is on building a
hyper heuristic approach from a meta-heuristic algorithm for any general problem
domain. Various traditional algorithms and new generation meta heuristic algorithms
have also been explained to give readers a better understanding.
B Vinod Chandra S. S.
[email protected]
Anand H. S.
[email protected]
1 Introduction
Nature has mystified man since time immemorial, and it offers solutions to all kinds of
problems. Several natural phenomena have been adapted in computational science to create
optimization algorithms, and nature inspired design models have been adopted for
engineering designs. This is an intelligent approach to design science, its models, and its
mathematics.
The evolutionary process of natural selection has created many emergent behavioural
patterns in organisms that have optimizing overtones. These have been separately iden-
tified and used in nature-inspired algorithms. Evolutionary algorithms and swarm
algorithms are developed based on naturally occurring mechanisms. The primary
nature-inspired algorithms draw on the behaviour of species that live in a
nest or a shared environment. The intelligent behaviour of these species can be turned into
algorithms that help solve complex problems. The field of nature-inspired comput-
ing has been endowed with many optimizing algorithms. These algorithms are distinct
in their source of inspiration and in the way the methodology is mapped into the computational
domain. Most of the algorithms designed so far principally deal with one agent or
many agents that exhibit collective intelligent behaviour.
Nature exhibits intelligent ways to decipher complex optimization problems.
Nature-inspired computing is a relatively stable paradigm that has profound appli-
cations in the optimization arena. Swarm Intelligence, Evolutionary Computation,
Cellular Automata, and Neural Computation are among the sought-after sub-domains
of nature-inspired computing in the engineering framework.
Novel meta heuristic approaches inspired by a natural or human-made metaphor
include imperial colonization, league championships, gravity, electromagnetism, river
formation, and bacterial foraging [1]; another subclass is inspired by animal behaviour,
such as ants [2], bees [3], bats [4], wolves, cats, termites [5], frogs [6], eagles, dolphins, and
many more. However, except for a few, many of the above-mentioned methods lack
any strong mathematical background.
Research has not yet analysed natural swarm behaviour that optimizes multi-
ple objectives simultaneously, or optimization at different levels, so that it can be applied to
multi-objective or multi-level optimization problems with maximum spread and diver-
sity at minimum computational cost. A popular way of handling multi-objective
optimization is the use of evolutionary techniques. However, this is a black-box opti-
mization with parameter fine-tuning done by trial and error, and it does not
account for self-adaptation. Meta heuristic approaches work well compared to tra-
ditional methods when there is no clear understanding of the problem. There does
not exist a single method that optimizes a problem both effectively and efficiently, as these
objectives are contradictory. These methods also suffer from the curse of dimensionality,
as the running time grows quadratically, cubically, or polynomially with an increase in the
number of parameters.
This paper gives an overview of the various nature inspired algorithms and their
classification. The process by which heuristics can be combined to handle multiple objectives
is also described. The conversion of meta heuristics to hyper heuristics is also within the scope
of this study.
While designing nature inspired algorithms, the following must be noted: the first step is
identifying analogies between nature and the mathematical problem; the next step is
computer modelling of the realistic behaviour; and the final step is engineering
the model in a simplified manner in tune with the application. Figure 1 gives a brief
idea of how to build a nature inspired model.
Nature inspired models can be classified into three groups: bio inspired models, swarm
based models, and evolutionary based models. These models give computational
techniques to the scientist in the form of algorithms.
Nature inspired computing is based on the emergence, self organisation, and decen-
tralised behaviour of different species found in nature as they accomplish the different activities
needed for their livelihood. Nature inspired computing aims to model algorithms that address
complex problems based on natural behaviours. The autonomous agents in nature
inspired computing systems are termed effectors and detectors. Detector agents of
similar types receive information and stimuli from the environment and from other
agents. Effectors contribute to change in the environment by changing internal
states and the surroundings.
Nature inspired systems are based on the principle of emergence. The prominent
feature of systems in which the property of emergence arises from individual agents'
interactions is uncertainty. Three inherent properties of nature inspired computing
make it desirable for large scale computing problems: parallelism, simplicity,
and localised interaction. Any nature inspired computing model follows a general
method in which a group of autonomous agents follows simple rules and coordinates in
a decentralised manner to achieve a creative act.
Bio inspired computing, evolutionary computing, and swarm intelligence are
widespread nature inspired methods that have been applied successfully in scientific applications.
Real life models are derived from biological species. The human being is one such
biological model with a diversified character sequence. However, many species in
nature show peculiar behaviour. Swarm based algorithms work on the group behaviour
of these species, to which each individual contributes. In addition
to the expected group behaviour of these species, individual performance also yields
solutions. For example, ants work as a group, while canines (dogs)
perform individually. Both are intelligent agents, but their behaviours are different.
Their movements can be mapped onto solutions of complex optimization problems.
Biological concepts inspired by our surroundings, like the natural immune system, the
brain, life, natural agents like ants, birds, and bees, molecular activity, evolution, etc., are
modelled mathematically to solve complex real time tasks. Features of practical
combinatorial optimization problems, like uncertainty and nonlinear parameter
spaces, limit deterministic methods for deriving the perfect solution. Adaptable and
self organizing biological systems are therefore suitable for handling real time problems
with complex parameter spaces. Generally, bio inspired computing takes a bottom up
approach in a decentralised manner while being tuned to meet the problem specification.
These methods have been successful in parallel computing, networking, data analytics,
power systems, and optimization. As the complexity of the problem increases, the
number of possible solutions also increases. The potential solutions can be categorised
as worst, passable, and best.
Biologically inspired methods follow a heuristic approach in generating random
solutions to problems. They are found to be superior to deterministic methods
when the uncertainty and randomness of the problem space increase. Meta heuristics
form an alternative approach to tackling sophisticated search and optimization problems
when mathematical methods fail to produce optimal solutions. Heuristic and meta
heuristic approaches propose smart ways to scan the possible solutions and arrive at an
attractive choice for the specific problem in reasonable computational time.
Population based algorithms maintain a collection of complete solutions, or a collection
of partial solutions and solution attributes. There are multiple approaches to
selecting the individuals that constitute a new population, typically by combining newly gener-
ated individuals with 'elite' individuals. At initialisation, a diverse population
is selected through randomisation or according to an explicit criterion. The population is
evolved in 'generations'; a new offspring is created by combining features of current
parents.
Evolutionary techniques are inspired by the natural evolutionary process compris-
ing selection, reproduction, and variation. They are heuristic methods for solving
complex optimization problems that develop approximate or near optimal
solutions with reasonable computational complexity. They are based on the natural phe-
nomenon of the survival of the fittest: the best members survive and contribute to
further generations, while unfit members are eliminated from the competition during
each generation.
Evolutionary computation makes use of a population of potential solutions with
probabilistic rules to evaluate the fitness functions defining the optimization problem.
Optimization problems with real time parameters and constraints provide little under-
standing of the problem at hand, and traditional mathematical approaches fail to provide
optimal solutions to such problems. Evolutionary approaches work well with
such problems: a black box optimization approach with parameter fine tuning done
by trial and error.
Various computational paradigms that come under evolutionary techniques are
the genetic algorithm, genetic programming, evolutionary strategies, and evolutionary
programming. The genetic algorithm imitates natural evolution and has found
success in classification and optimization problems. An initial population is randomly
selected in the solution hyperspace, and the individuals propagate in successive
iterations to move towards the optimal goal. The individual members are denoted using
encoded strings, where the dimensionality and type of each member vary with the
specific problem.
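As a concrete illustration of this workflow, the following is a minimal genetic algorithm sketch in Python. The bit string encoding, the one-max style fitness function, and the parameter values are illustrative assumptions made here for clarity, not the specific design of any algorithm surveyed in this paper.

```python
import random

# Minimal genetic algorithm sketch: individuals are bit strings and the
# stand-in fitness is simply the number of ones (replace with a real objective).
GENES, POP, GENERATIONS, MUT_RATE = 20, 30, 50, 0.02

def fitness(ind):
    return sum(ind)

def tournament(pop):
    a, b = random.sample(pop, 2)          # binary tournament selection
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    cut = random.randint(1, GENES - 1)    # single-point crossover
    return p1[:cut] + p2[cut:]

def mutate(ind):
    return [1 - g if random.random() < MUT_RATE else g for g in ind]

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    elite = max(population, key=fitness)  # keep the best member ('elitism')
    children = [mutate(crossover(tournament(population), tournament(population)))
                for _ in range(POP - 1)]
    population = [elite] + children

print(max(fitness(ind) for ind in population))
```

The sketch keeps the three ingredients mentioned above, selection, reproduction (crossover), and variation (mutation), and preserves one elite member per generation.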
Swarm intelligence emerged from the collective behaviour of natural agents in real-
ising a complicated task. A swarm is a collection of agents that exhibits an emergent
behaviour to elucidate a complex problem without any centralised coordination. The
intelligent behaviour of different types of swarms, such as ants, birds, bees, bacteria,
spiders, and fish, has already been used to solve many combinatorial optimization prob-
lems. Natural agents accomplish collective intelligence by following simple rules.
The agents interact locally in a decentralised manner, leading to the collective emergence of
an intelligent action that is unknown to the individual members. The primary
capabilities of swarm intelligence are focused on load balancing, clustering, optimization,
and routing. Complex tasks, from truck routing to military robots, have later been realised
using swarm intelligence based algorithms.
Swarm intelligence rests on two fundamental concepts: self organization and
division of labor. The self organization of swarms includes positive feedback, negative
feedback, fluctuations, and multiple interactions of agents. Swarms apply division of labor
through the simultaneous performance of tasks by cooperating specialised
individuals, enabling the swarm to respond to changed search space conditions.
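Particle swarm optimization [10] is one of the most widely used realisations of these ideas. The sketch below shows the basic position and velocity update on a hypothetical sphere objective; the swarm size, inertia, and acceleration coefficients are common textbook defaults assumed here, not values prescribed in this paper.

```python
import random

# Minimal particle swarm optimization sketch minimising the sphere function.
DIM, SWARM, ITERS = 5, 20, 200
W, C1, C2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients (assumed defaults)

def sphere(x):                        # stand-in objective to minimise
    return sum(v * v for v in x)

pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(SWARM)]
vel = [[0.0] * DIM for _ in range(SWARM)]
pbest = [p[:] for p in pos]                           # personal best positions
gbest = min(pbest, key=sphere)                        # global best position

for _ in range(ITERS):
    for i in range(SWARM):
        for d in range(DIM):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (pbest[i][d] - pos[i][d])
                         + C2 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sphere(pos[i]) < sphere(pbest[i]):         # update personal best
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=sphere)                    # update global best

print(sphere(gbest))
```

The personal best term acts as the positive feedback of self organization, while the purely local updates illustrate the decentralised coordination described above.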
Many of the recent developments in nature inspired algorithms are hyper heuristics based. A combination of nature
inspired techniques can be developed to solve complex problems.
Meta heuristics provide a shortcut to solving difficult problems when there is limited time
and/or information to make a decision, and a meta heuristic leads to a good decision most
of the time. The term meta means 'at an upper level', and heuristic means 'to find'.
A meta heuristic is formally defined as an iterative generation process that guides a
subordinate heuristic by combining different intelligent concepts for exploring the search space,
with learning strategies used to structure information so as to find near optimal solutions
efficiently. Mathematically, meta heuristics are guiding strategies for optimization: a strategy
that guides the search process through a search space while avoiding getting trapped in
confined areas is called a meta heuristic. The goal of a meta heuristic is to find an optimal
or near optimal solution to a problem. The techniques range from local search procedures to
complex learning processes. These algorithms are usually non deterministic and not problem specific.
Two main approaches used to handle optimization problems are deterministic meth-
ods and non deterministic methods. Mathematical methods are deterministic, where
the exact solution to the problem is obtained. Non deterministic techniques are applied
to problems that cannot be solved exactly; they exhibit some randomness and arrive at
approximate solutions. Heuristic and meta heuristic methods are non deterministic methods
that perform reasonably well in obtaining compromise solutions. The heuristic approach uses
trial and error to reach an optimal solution.
The meta heuristic approach uses a group of search agents to explore feasible regions
of the search space following specific rules. Nature inspired computing is a paradigm that falls
into the meta heuristic category. It uses analogies from nature to generate approxi-
mate solutions for practical optimization problems. There are population based and
trajectory based optimization algorithms inspired by nature. Generally, optimization
problems are classified into linear, quadratic, and nonlinear functions. Any optimiza-
tion method is said to be successful only if the problem is formulated effectively. Linear
and quadratic optimization problems are easier to solve, as they have a glob-
ally optimal solution, which is either a single point or multiple points along a line.
However, nonlinear optimization problems are relatively difficult to compute.
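A trajectory based method maintains a single current solution and perturbs it step by step. The following is a minimal simulated annealing sketch in that spirit [7]; the one dimensional nonlinear objective, the Gaussian neighbourhood step, and the cooling rate are illustrative assumptions chosen only for the example.

```python
import math, random

# Minimal simulated annealing sketch (a trajectory based method [7])
# minimising a hypothetical one dimensional nonlinear function.
def f(x):
    return x * x + 10 * math.sin(x)

x = random.uniform(-10, 10)           # single current solution (the trajectory)
temp = 10.0
while temp > 1e-3:
    candidate = x + random.gauss(0, 1)                   # perturb the current point
    delta = f(candidate) - f(x)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = candidate                                    # accept improving or, sometimes, worse moves
    temp *= 0.99                                         # geometric cooling schedule

print(x, f(x))
```

Accepting occasional worse moves at high temperature is what lets such trajectory based methods escape the local optima that make nonlinear problems difficult.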
We have already shown the various nature inspired algorithms in Table 1. We
can categorize them into six inspirational categories: animal herd based, animal
swarm based, animal behavior based, natural process based, astronomy based meta-
heuristics, and metaheuristics based on other inspirations. There are
many modified algorithms available that have been derived from the tradi-
tional nature inspired algorithms, and all of them fall into one of these classes.
Animal herd behaviour has been used to propose the migrating birds optimisation algo-
rithm, which imitates the V shaped flying pattern and demonstrates the energy saving
behaviour of birds. The crow search algorithm is a population based algorithm that makes
use of the crows' behaviour of hiding excess food and retrieving it when needed.
Similar algorithms, like the lion algorithm and the elephant herding algorithm, which imitates
elephants leaving the group when they become adults, can be seen as
modified herd behaviour algorithms.
The salp swarm algorithm, inspired by the swarming behavior of salps during their
navigation and food search in oceans, is a modified algorithm that exploits swarming
capabilities. The ant lion optimiser, the grasshopper optimisation algorithm, and chicken swarm
optimisation all make use of the concept of capturing local maxima for optimization.
Many processes in nature are inherently procedural and may produce complex forms and
results even without the interference of an outside intelligence; thus, many such processes
have become the inspiration for new metaheuristics. Chemical reaction optimization, which
simulates the interactions of molecules seeking low energy stability, the stochastic fractal
search algorithm, inspired by the natural phenomenon of growth, water wave optimization,
which makes use of water wave propagation, and the mine blast algorithm, which uses bomb
blasting concepts, can be seen as the major algorithms in this category.
Even though many meta heuristic algorithms exist, experts in the optimization domain mostly
choose hybrid meta heuristic algorithms. The motivation for implementing hybrid
algorithms is usually to obtain better performing approaches that take advantage of
the properties of each individual strategy. Hybrid meta heuristics have been success-
fully used for many real world optimization problems such as flight scheduling and load balancing.
Multi objective optimization problems have several objective functions that need to be
optimized concurrently. With multiple objectives, due to the lack of a common measure
and the conflict among objective functions, there is no single solution that is best for all objec-
tives. Instead, there is a set of solutions for the multi objective problem that cannot normally
be compared with one another. Such solutions are called non dominated solutions (or Pareto
optimal solutions): no improvement is possible in any objective function without
sacrificing at least one of the other objective functions. The objective space is very
informative in multi objective optimization. Figure 3 shows a feasible set Y in the
objective space of a problem with two objectives. The minimum value of f1 is 0, attained at
a solution x with f2(x) = 8, but the minimum value of f2 is 1. There are several min-
imisers for f1 between 7 and 8. As Fig. 3 illustrates, the sets of minimisers of the different
objectives are usually disjoint.
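The non dominance test can be stated compactly in code. The sketch below filters a set of two objective points (both objectives minimised) down to its Pareto optimal subset; the sample points are hypothetical and are not the points plotted in Fig. 3.

```python
# Minimal sketch of extracting non dominated (Pareto optimal) points for
# two objectives that are both minimised; points are (f1, f2) pairs.
def dominates(a, b):
    # a dominates b if it is no worse in every objective and better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical objective vectors, only for illustration
points = [(0.0, 8.0), (2.0, 5.0), (4.0, 2.0), (7.5, 1.0), (5.0, 6.0)]
print(pareto_front(points))   # (5.0, 6.0) is dominated by (4.0, 2.0) and is removed
```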
Multi objective optimization leads to the estimation of a set of possible solutions
that represents the trade off between contradicting objectives. The scope of multi objective
optimization includes, but is not limited to, optimal design and manufacturing, inverse
problems, system modelling, planning, optimal control, forecasting, and prediction.
Fig. 3 Feasible set in the objective space of a multi objective optimization problem with two objectives
The basic idea of hyper heuristics is to develop new algorithms for problem solving
by combining known heuristics so that the selected methods compensate for
each other's weaknesses. A more refined definition is a search method
or learning mechanism for selecting or generating heuristics to solve computational
search problems. The main feature that makes hyper heuristics unique is that they
operate on a search space of heuristics rather than directly on the search space of
solutions to the underlying problem being addressed. Two main hyper heuristic
categories can be considered: heuristic selection and heuristic generation.
Researchers design algorithms to serve general purpose problems, whereas practi-
tioners need specific solutions. This gap between custom made solutions or custom made
computational architectures triggered the need for hyper heuristics. For example, in a
hill climbing problem, a hyper heuristic will start searching from a subset of solu-
tions, leading to an optimized global solution. In meta heuristics, a probability factor
is also considered, which may or may not lead to the global solution.
An initial classification of hyper heuristic algorithms was put forward by Burke et
al. in early 2010 [49]. The figure has been recreated here and is given in Fig. 5.
In this classification, only two attributes are considered.
This classification considers two aspects: (i) the type of heuristic search space and (ii)
the various sources of feedback information. Concerning the search space, there can be either
(i) heuristic selection or (ii) heuristic generation. The second level in this dimension
corresponds to the difference between the constructive and perturbative search paradigms [50,
51]. Perturbative methods select a full candidate solution and then change it component
by component, checking the effect of each change. Constructive methods
deal with partial candidate solutions, in which one or more solution components are
missing, and iteratively extend them. The hyper heuristic learning process obtains
feedback from the search process. According to the source of input during learning,
it is divided into online and offline learning [52]. In online learning hyper heuristics,
the learning takes place while the algorithm is solving an instance of a problem. In
contrast, in offline learning hyper heuristics, the idea is to gather knowledge in the
form of rules or programs from a set of training instances that will hopefully generalize
to solving unseen instances.
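The following sketch shows one way an online learning selection hyper heuristic can be organised: a score is kept for each low level heuristic and updated while a single problem instance is being solved. The three perturbative low level heuristics, the sum of squares objective, and the reward scheme are illustrative assumptions, not a method taken from the works cited above.

```python
import random

# Minimal sketch of an online learning selection hyper heuristic.
# Three perturbative low level heuristics modify a list of numbers;
# the (hypothetical) objective is to minimise the sum of squares.
def objective(sol):
    return sum(v * v for v in sol)

def small_step(sol):                      # low level heuristic 1: small perturbation
    s = sol[:]
    s[random.randrange(len(s))] += random.uniform(-0.1, 0.1)
    return s

def big_step(sol):                        # low level heuristic 2: large perturbation
    s = sol[:]
    s[random.randrange(len(s))] += random.uniform(-1.0, 1.0)
    return s

def reset_one(sol):                       # low level heuristic 3: zero one component
    s = sol[:]
    s[random.randrange(len(s))] = 0.0
    return s

heuristics = [small_step, big_step, reset_one]
scores = [1.0] * len(heuristics)          # online learning: scores change during the run

solution = [random.uniform(-5, 5) for _ in range(10)]
for _ in range(2000):
    k = random.choices(range(len(heuristics)), weights=scores)[0]   # heuristic selection
    candidate = heuristics[k](solution)
    if objective(candidate) <= objective(solution):                 # accept improving or equal moves
        solution = candidate
        scores[k] += 1.0                                            # reward a useful heuristic
    else:
        scores[k] = max(0.1, scores[k] - 0.1)                       # mild penalty otherwise

print(round(objective(solution), 4), [round(s, 1) for s in scores])
```

Note that the search operates over which heuristic to apply next, not directly over solution components, which is the defining feature of hyper heuristics mentioned above.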
This is a well accepted approach, but the nature of problems has become more complicated.
When heuristic accuracy is a major concern, various other factors are needed for
apt selection. The nature of the heuristics and the feedback still form the basic structure, but
there are other components that partially contribute to the selection of heuristics.
An extended classification scheme for hyper heuristics is proposed in Fig. 6.
Within selection hyper heuristics, the approach can be single objective or multi objec-
tive. For hyper heuristics, the major focus is given to multi objective problems,
which can exploit all the features of hyper heuristics. Similarly, the solution classification
is also based on multi point heuristics, where multiple current solutions are considered
[53–55]. It can also be a single point approach, where only the current solution is selected, or a mixed
approach that combines single and multiple solutions in a phased manner.
Parameter setting can be at four different levels. A static setting fixes the parameters
before the search process. A dynamic setting allows values to change
in a predefined manner. If the parameters change reactively during the search
process, the setting is adaptive, and when the algorithm adjusts the parameters itself and self learns,
it is treated as self adaptive [56].
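One way the adaptive level can be realised is the classical 1/5 success rule from evolution strategies, where the mutation step size reacts to the recent success rate observed during the search. The sketch below applies it to a simple (1+1) search on a hypothetical quadratic objective; the window length and adjustment factors are illustrative values, not parameters recommended by this paper.

```python
import random

# Sketch of an adaptive parameter setting: the mutation step size sigma is
# adjusted reactively with a 1/5 success rule while a simple (1+1) search
# minimises a hypothetical quadratic objective.
def f(x):
    return sum(v * v for v in x)

x = [random.uniform(-5, 5) for _ in range(5)]
sigma, successes, window = 1.0, 0, 20
for t in range(1, 2001):
    child = [v + random.gauss(0, sigma) for v in x]
    if f(child) < f(x):
        x, successes = child, successes + 1
    if t % window == 0:                      # reactive adjustment during the search
        rate = successes / window
        sigma *= 1.2 if rate > 0.2 else 0.8  # enlarge steps when succeeding often
        successes = 0

print(f(x), sigma)
```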
Availability is a significant issue. The solution search space can be either private,
public, or hybrid. In the case of a public space, solutions can be shared among
problems, so the heuristic can easily search for a global solution. A private
solution space for each problem is used for dynamic parameter settings, making predictions
stable [57]. A hybrid model that makes use of different public solution spaces
gives better heuristic results. Domain specific or recursive heuristic selection is a
major attribute in solution recognition. Stochastic or non stochastic models are
considered while making the accept/reject decision. Non stochastic move acceptance
methods can be further classified into basic methods, such as accepting all moves,
accepting improving or equal moves, and accepting only improving moves, and threshold
acceptance methods.
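For reference, these acceptance rules can be written as predicates over the current and candidate costs (for a minimisation problem). The sketch below is a direct transcription of the list above; the threshold value is an arbitrary placeholder.

```python
# Non stochastic move acceptance rules, each returning True if the candidate
# move is accepted (costs are minimised).
def accept_all(current_cost, candidate_cost):
    return True

def improving_or_equal(current_cost, candidate_cost):
    return candidate_cost <= current_cost

def only_improving(current_cost, candidate_cost):
    return candidate_cost < current_cost

def threshold_acceptance(current_cost, candidate_cost, threshold=0.5):
    # accept moves that are at most 'threshold' worse than the current cost
    return candidate_cost <= current_cost + threshold
```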
Having discussed heuristics, meta heuristics, and hyper heuristics, it is natural to ask
about the basic logic of the transformation between them. To summarise in a sen-
tence, hyper heuristics use multiple heuristic methods to automate the
solution of a generalized problem. How can heuristics be combined? Is any automa-
tion a hyper heuristic? Do generalized optimizers lead to a hyper heuristic space? All
these are basic questions that can come to the mind of any researcher. Hence, an example
is given that details the step by step transformation of a problem to hyper heuristics
[58,59].
Let us discuss the hill climbing problem. Hill climbing is a heuristic search used for mathematical
optimization problems. Given a large set of inputs and a good heuristic function, it
tries to find a sufficiently good solution to the problem; this solution may not be the
global optimum. Hill climbing solves problems where the objective is to
find the maximum or minimum of a given real function. Heuristics do not promise an
optimal solution, but they give a good solution in a reasonable time.
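A minimal hill climbing sketch is given below for a hypothetical multimodal one dimensional function; the objective, step size, and iteration budget are illustrative assumptions. Depending on the starting point, the run can return a local rather than the global maximum, which is exactly the limitation discussed below.

```python
import math, random

# Minimal hill climbing sketch maximising a hypothetical multimodal
# one dimensional function; it can easily stop at a local maximum.
def f(x):
    return math.sin(x) + 0.3 * math.sin(5 * x)

def hill_climb(start, step=0.05, iters=1000):
    x = start
    for _ in range(iters):
        best = max((x - step, x + step), key=f)   # greedy move to the better neighbour
        if f(best) <= f(x):                       # no improving neighbour: stop
            return x
        x = best
    return x

print(hill_climb(random.uniform(-3, 3)))          # result depends on the starting point
```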
Incorporating one heuristic method to tune another heuristic approach so that it provides the
best solution, at least in an automated solution domain, can be termed hyper
heuristics. At each iteration, a hyper heuristic chooses and applies one of the heuristics
to a candidate solution at a higher level of abstraction. Now let us see how the
normal hill climbing algorithm can be transformed into a hyper heuristic approach.
The hill climbing algorithm may show various types of results. It can stop iterating
at a local maximum and may not search further for the global one [60]. It may also report plateaus as
solutions even when a global solution is available. The available state space estimate
of the objective function is shown in Fig. 7.
Having a heuristic selection method to fine tune the selected heuristics is the baseline
of this approach; Fig. 8 shows a basic schematic. Initially, a set of input
space vectors is fed to the system. A heuristic selection process is carried out, which
applies a randomly chosen function in the background. The result is examined, and if an
acceptable case of a global maximum is obtained, it is moved to
an elite list and to a set of intermediate results. These intermediate results are also
shared with the current domain space, which is in an incremental learning phase. The
iteration is carried out until the elite list remains unchanged, and the elite result is then
returned as the global maximum.
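One reading of this schematic is sketched below: the low level heuristics are hill climbers with different step sizes, results that improve on the elite list are stored, and random jumps from elite points restart the search until the elite list stops changing for a while. The step sizes, jump width, and the patience used as a stopping test are illustrative assumptions, not details prescribed in the paper.

```python
import math, random

# Simplified sketch of the scheme in Fig. 8: heuristic selection picks a hill
# climber step size, improved results are kept in an elite list, and random
# jumps from elite points restart the search.
def f(x):
    return math.sin(x) + 0.3 * math.sin(5 * x)

def hill_climb(x, step, iters=10000):
    for _ in range(iters):
        best = max((x - step, x + step), key=f)
        if f(best) <= f(x):
            break
        x = best
    return x

step_sizes = [0.01, 0.05, 0.2]               # the pool of low level heuristics
elite = [random.uniform(-3, 3)]              # elite list of intermediate results
unchanged = 0

while unchanged < 10:                        # stop once the elite list stays unchanged
    start = random.choice(elite) + random.uniform(-2, 2)   # random jump from an elite point
    result = hill_climb(start, random.choice(step_sizes))  # heuristic selection + application
    if f(result) > max(f(e) for e in elite) + 1e-9:        # keep only genuinely better points
        elite.append(result)
        unchanged = 0
    else:
        unchanged += 1

best = max(elite, key=f)
print(best, f(best))
```

Here one heuristic (the restart and selection loop) is being used to tune another (the hill climber), which is the double heuristic idea the text describes.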
This can be tested with any random test input. Even if the system reaches a local
optimum, the result is stored as an intermediate one and is available in the elite list.
Any random jump from an elite point will again lead the system to another maximum;
if not, the elite point can be considered the global maximum. The process
carried out here is the training of one heuristic using another. Any meta heuristic
optimization can be changed to a hyper heuristic in this way, thereby providing the best
solution in optimal time.
7 Conclusion
Solving real life problems has always been a challenge. This paper gives a wide review
of various heuristic optimization algorithms. Various nature inspired computing
models for optimization, like evolutionary and swarm based models, have been well defined, and
the basic optimization method used in each is also explained. Meta heuristics, the
introduction of each heuristic algorithm, and its evolution have been covered in this
work. The work also points out how a metaheuristic optimization can be transformed
into hyper heuristics. A simple hill climbing algorithm, fine tuned using a double heuristic,
shows how performance can be improved with hyper heuristics. Various drawbacks of
the natural heuristic algorithms made this double optimisation possible, thus making
hyper heuristics a reality. The review of the various hybrid meta heuristic algorithms and
the new generation algorithms also throws light on the scope of multi level
heuristics. The paper ends with a remark on the possibility of developing any meta
heuristic algorithm into a multi level hyper heuristic algorithm.
Supplementary Information The online version contains supplementary material available at https://doi.
org/10.1007/s00607-021-00955-5.
Acknowledgements The authors would like to thank the Government of India for providing Copyrights
(IPR) for the Nature Inspired Algorithms developed by the authors (Registration Nos.: L-74114/2018, L-
65846/2017, L-62609/2015, and L-60823/2014). The authors also extend their thanks to all the members of the Machine
Intelligent Research group, who have been a constant support during the various phases of analysis and
implementation of different nature inspired algorithms.
References
1. Niu B, Wang H (2012) Bacterial colony optimization. Discrete Dyn Nat Soc. https://doi.org/10.1155/
2012/698057
2. Maniezzo V, Gambardella LM, de Luigi F (2004) Ant Colony Optimization. In: New optimization
techniques in engineering. Studies in fuzziness and soft computing, Springer, vol 141, Germany.
https://doi.org/10.1007/978-3-540-39930-8_5
3. Karaboga D (2005) An idea based on honey bee swarm for numerical optimization. Technical Report-
TR06. Erciyes University, Engineering Faculty, Computer Engineering Department
4. Yang XS (2010) A new metaheuristic bat-inspired algorithm, nature inspired cooperative strategies
for optimization, studies in computational intelligence, vol 284. Springer, Germany. https://doi.org/
10.1007/978-3-642-12538-6_6
5. Hedayatzadeh R, Akhavan Salmassi F, Keshtgari M, Akbari R, Ziarati K (2010) Termite colony opti-
mization: a novel approach for optimizing continuous problems. In: 2010 18th Iranian conference on
electrical engineering, Isfahan, pp 553–558, https://doi.org/10.1109/IRANIANCEE.2010.5507009
6. Eusuff M, Lansey K, Pasha F (2006) Shuffled frog-leaping algorithm: a memetic meta-heuristic for
discrete optimization. Eng Optim 38(2):129–154. https://doi.org/10.1080/03052150500384759
7. Kirkpatrick S, Gelatt CD, Vecchi MP (1983) Optimization by simulated annealing. Science
220(4598):671–680
8. Saju Sankar S, Vinod Chandra SS (2020) A multi-agent ACO algorithm for effective vehicular traffic
management system. Lect Notes Comput Sci 12145:640–647
9. Saju Sankar S, Vinod Chandra SS (2020) An ant colony optimization algorithm based automated
generation of software test cases. Lect Notes Comput Sci 12145:231–239
10. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: Proceedings of ICNN’95 - international
conference on neural networks, pp 1942–1948, vol 4, Australia. https://doi.org/10.1109/ICNN.1995.
488968
11. Vinod Chandra SS, Saju Sankar S, Anand HS (2020) Multi-objective particle swarm optimization for
cargo packaging. Lect Notes Comput Sci 12145:415–422
12. Saritha R, Vinod Chandra SS (2016) An approach using particle swarm optimization and rational
kernel for variable length data sequence optimization. Lect Notes Comput Sci 9712:401–409
13. Reynolds RG (1994) An introduction to cultural algorithms. In: Sebald AV, Fogel LJ (eds), Proceedings
of the third annual conference on evolutionary programming, pp 131–139. World Scientific, River Edge
14. Woo Z, Hoon J, Loganathan GV (2001) A new heuristic optimization algorithm: harmony search.
Simulation 76(2):60–68. https://doi.org/10.1177/003754970107600201
15. Pham D, Ghanbarzadeh A, Koç E, Otri S, Rahim S, Zaidi MB (2005) The Bees Algorithm Technical
Note, Manufacturing Engineering Centre, Cardiff University, pp 1-57, UK
16. Saritha R, Vinod Chandra SS (2018) Multi modal foraging by honey bees toward optimizing profits at
multiple colonies. IEEE Intell Syst 34:14–22
17. Saritha R, Vinod Chandra SS (2017) Multi dimensional honey bee foraging algorithm based on optimal
energy consumption. J Inst Eng Ser B 98:517–525
18. Krishnanand KN, Ghose D (2009) Glowworm swarm optimization for simultaneous capture of multiple
local optima of multimodal functions. Swarm Intell 3:87–124. https://doi.org/10.1007/s11721-008-
0021-5
19. Atashpaz-Gargari E, Lucas C (2007) Imperialist competitive algorithm: an algorithm for optimization
inspired by imperialistic competition. In: IEEE congress on evolutionary computation, pp 4661–4667,
Singapore. https://doi.org/10.1109/CEC.2007.4425083
20. Rabanal P, Rodríguez I, Rubio F (2007) Using river formation dynamics to design heuristic algo-
rithms. In: Unconventional computation, lecture notes in computer science, vol 4618. Springer,
Germany.https://doi.org/10.1007/978-3-540-73554-0_16
21. Shah-Hosseini H (2008) Intelligent water drops algorithm: a new optimization method for solv-
ing the multiple knapsack problem. Int J Intell Comput Cybern 1:193–212. https://doi.org/10.1108/
17563780810874717
22. Yang XS (2009) Firefly algorithms for multimodal optimization. In: Lecture notes in computer science,
vol 5792. Springer, Germany. https://doi.org/10.1007/978-3-642-04944-6_14
23. Rashedi E, Nezamabadi-pour H, Saryazdi S (2010) BGSA: binary gravitational search algorithm. Nat
Comput 9:727–745. https://doi.org/10.1007/s11047-009-9175-3
24. Yang XS, Deb S (2009) Cuckoo search via levy flights. In: World congress on nature and biologically
inspired computing (NaBIC), pp 210–214, India. https://doi.org/10.1109/NABIC.2009.5393690
25. Benasla L, Belmadani A, Mostefa R (2014) Spiral optimization algorithm for solving combined eco-
nomic and emission dispatch. Int J Electr Power Energy Syst 62:163–174. https://doi.org/10.1016/j.
ijepes.2014.04.03
26. Ananthalakshmi Ammal R, Sajimon PC, Vinod Chandra SS (2020) Termite inspired algorithm for
traffic engineering in hybrid software defined networks. PeerJ Comput Sci 6:283
27. Yang XS (2012) Flower pollination algorithm for global optimization. In: Lecture notes in computer
science, vol 7445. Springer, Germany. https://doi.org/10.1007/978-3-642-32894-7_27
28. Cuevas E, Cienfuegos M (2014) A new algorithm inspired in the behavior of the social-spider for
constrained optimization. Exp Syst Appl 41:412–425. https://doi.org/10.1016/j.eswa.2013.07.067
29. Eesa AS, Brifcani AMA, Orman Z (2013) Cuttlefish algorithm: a novel bio-inspired optimization
algorithm. Int J Sci Eng Res 4(9):1978–1986
30. Vinod Chandra SS (2016) Smell detection agent based optimization algorithm. J Inst Eng India Ser B
97:431–436. https://doi.org/10.1007/s40031-014-0182-0
31. Saju Sankar S, Vinod Chandra SS (2020) A structural testing model using SDA algorithm. Lect Notes
Comput Sci 12145:405–412
32. Ananthalakshmi Ammal R, Sajimon PC, Vinod Chandra SS (2017) Application of smell detection
agent based algorithm for optimal path identification by SDN controllers. Lect Notes Comput Sci
10386:502–510
33. Odili J, Kahar M, Nizam M, Shahid A (2015) African buffalo optimization a swarm-intelligence
technique. Proc Comput Sci. https://doi.org/10.1016/j.procs.2015.12.291
34. Biyanto TR, Fibrianto HY, Nugroho G, Hatta AM, Listijorini E, Budiati T, Huda H (2016) Duelist
algorithm: an algorithm inspired by how duelist improve their capabilities in a duel. In: International
conference in swarm intelligence, Springer, pp 39–47
35. Biyanto TRM, Irawan S, Febrianto HY, Afdanny N, Rahman AH, Gunawan KS, Pratama Januar AD,
Bethiana Titania N (2017) Killer whale algorithm: an algorithm inspired by the life of killer whale.
Proc Comput Sci 124:151–157
36. Wedyan A, Whalley J, Narayanan A (2017) Hydrological cycle algorithm for continuous optimization
problems. J Optim. https://doi.org/10.1155/2017/3828420
37. Jain M, Maurya S, Rani A, Singh V, Thampi SM, El-Alfy E-SM, Mitra S, Trajkovic L (2018) Owl
search algorithm: a novel nature-inspired heuristic paradigm for global optimization. J Intell Fuzzy
Syst 34:1573–1582
38. Jain M, Singh V, Rani A (2019) A novel nature-inspired algorithm for optimization: squirrel search
algorithm. Swarm Evol Comput 44:148–175
39. Fathollahi-Fard AM, Hajiaghaei-Keshteli M, Tavakkoli-Moghaddam R (2018) The social engineering
optimizer (SEO). Eng Appl Artif Intell 72:267–293
40. Elsisi M (2019) Future search algorithm for optimization. Evol Intell 12(1):21–31
41. Harifi S, Khalilian M, Mohammadzadeh J, Ebrahimnejad S (2019) Emperor penguins colony: a new
metaheuristic algorithm for optimization. Evol Intell 12:211–226
42. Kaveh A, Dadras AA (2017) A novel meta-heuristic optimization algorithm: thermal exchange opti-
mization. Adv Eng Softw 110:69–84
43. Heidari AA, Mirjalili S, Faris H, Aljarah I, Mafarja M, Chen H (2019) Harris hawks optimization:
algorithm and applications. Future Gen Comput Syst 97:849–872
44. Askari Q, Younas I, Saeed M (2020) Political optimizer: a novel socio-inspired meta-heuristic for
global optimization. Knowl Based Syst 195:105709
45. Askari Q, Saeed M, Younas I (2020) Heap-based optimizer inspired by corporate rank hierarchy for
global optimization. Exp Syst Appl 161:113
46. Zaeimi M, Ghoddosian A (2020) Color harmony algorithm: an art-inspired metaheuristic for mathe-
matical function optimization. Soft Comput 24(16):12027–12066
47. Harifi S, Mohammadzadeh J, Khalilian M, Ebrahimnejad S (2020) Giza pyramids construction: an
ancient-inspired metaheuristic algorithm for optimization. Evol Intell 2020:1–19
48. Vinod Chandra SS, Anand HS, Saju Sankar S (2020) Optimal reservoir optimization using multiob-
jective genetic algorithm. Lect Notes Comput Sci 12145:445–454
49. Burke EK, Gendreau M, Hyde M, Kendall G, Ochoa G, Özcan E, Qu R (2013) Hyper heuristics: a
survey of the state of the art. J Oper Res Soc 64(12):1695–1724
50. Gómez RH, Coello CAC (2017) A hyper heuristic of scalarizing functions. In: Proceedings of the
genetic and evolutionary computation conference, pp 577–584
51. Hansen P, Mladenovic N, Pérez JAM (2010) Variable neighbourhood search: methods and applications.
Ann Oper Res 175(1):367–407
52. Uludag G, Kiraz B, Etaner-Uyar AS, Ozcan E (2013) A hybrid multi population framework for dynamic
environments combining online and offline learning. Soft Comput 17(12):2327–2348
53. Hsiao P-C, Chiang T-C, Fu L-C (2012) A vns based hyper heuristic with adaptive computational budget
of local search. In: Proceedings of the IEEE congress on evolutionary computation, pp 1–8
54. Meignan D (2011) An evolutionary programming hyper heuristic with co-evolution. In: Proceedings
of the 53rd annual conference of the UK operational research society
55. Lehrbaum A, Musliu N (2012) A new hyper heuristic algorithm for cross do main search problems.
In: Proceedings of the learning and intelligent optimization, LNCS, pp 437–442
56. Salcedo-Sanz S, Matías-Román J, Jiménez-Fernández S, Portilla-Figueras A, Cuadra L (2014) An
evolutionary based hyper heuristic approach for the jaw breaker puzzle. Appl Intell 40(3):404–414
57. Salhi A, Rodríguez JAV (2014) Tailoring hyper heuristics to specific instances of a scheduling problem
using affinity and competence functions. Memetic Comput 6(2):77–84
58. Strickler A, Lima JAP, Vergilio SR, Pozo AT (2016) Deriving products for variability test of feature
models with a hyper heuristic approach. Appl Soft Comput 49:1232–1242
59. Parejo JA, Ruiz-Cortés A, Lozano S, Fernandez P (2012) Metaheuristic optimization frameworks: a
survey and benchmarking. Soft Comput 16(3):527–561
60. Tyasnurita R, Özcan E, John R (2017) Learning heuristic selection using a time delay neural network
for open vehicle routing. In: Proceedings of the IEEE congress on evolutionary computation, pp 1474–
1481
61. Vinod Chandra SS, Anand HS (2021) Phototropic algorithm for global optimisation problems. Appl
Intell
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.