Nature-Inspired Optimization Algorithms: Research Direction and Survey


ROHIT KUMAR SACHAN, IIT Kanpur, India


DHARMENDER SINGH KUSHWAHA, MNNIT Allahabad, India

Nature-inspired algorithms are commonly used for solving various optimization problems. In the past few decades, researchers have proposed a large number of nature-inspired algorithms, some of which have proved to be very efficient compared to classical optimization methods. A young researcher attempting to solve a problem using nature-inspired algorithms is bogged down by the plethora of proposals that exist today. Not every algorithm is suited to every kind of problem; some score over others. In this paper, an attempt has been made to summarize the leading research proposals, so that any new entrant can easily understand the journey so far. We classify nature-inspired algorithms as natural evolution based, swarm intelligence based, biological based, science based and others. In this survey, widely acknowledged nature-inspired algorithms, namely ACO, ABC, EAM, FA, FPA, GA, GSA, JAYA, PSO, SFLA, TLBO and WCA, have been studied. The purpose of this review is to present an exhaustive analysis of these algorithms based on their source of inspiration, basic operators, control parameters, features, variants and the areas where they have been successfully applied. It shall also assist in identifying and shortlisting the methodologies best suited for a given problem.
CCS Concepts: • Computing methodologies → Bio-inspired approaches; Genetic algorithms; •
Mathematics of computing → Evolutionary Algorithm;
Additional Key Words and Phrases: Ant Colony, Artificial Bee Colony, Environmental Adaption, Jaya Algorithm, Flower Pollination, Shuffled Frog Leaping, Swarm intelligence

Reference format:
Rohit Kumar Sachan and Dharmender Singh Kushwaha. Feb 2021.
Nature-Inspired Optimization Algorithms: Research Direction and Survey. 35 pages.

1 INTRODUCTION
The recent past has witnessed wide adoption of nature-inspired algorithms for diverse real-world optimization problems, including engineering experiments, scientific experiments and business decision making. These algorithms are based on the concept of randomization and draw inspiration from natural phenomena. A few of the many nature-inspired algorithms proposed so far have proved to be very efficient. Many algorithms give adequate results, but no algorithm performs admirably on all optimization problems; in other words, an algorithm may show good performance on some problems while performing poorly on others [219]. However, compared to classical optimization techniques, nature-inspired algorithms obtain optimal solutions for a wider range of problem domains in reasonably practical time.
In the real world, optimization problems fall into two categories: single-objective and multi-objective. A single-objective problem optimizes only one objective, while a multi-objective problem focuses on more than one. Consequently, there are two types of optimization algorithms: single-objective and multi-objective optimization algorithms. The term objective refers to an objective function, which is a

XXX XXX XXX XXX, Vol. XX, No. XX, Article XX. Publication date: Feb 2021.
XX:2 R.K.Sachan and D.S.Kushwaha

mathematical formulation of the optimization criteria of a problem. The objective function produces a numerical result based on its input variables. The result of the objective function is known as the fitness (or cost), and the number of variables given as input to the objective function is referred to as the dimension of the objective function.
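To make these terms concrete, here is a small illustrative sketch (in Python, chosen purely for illustration) of an objective function, its fitness value and its dimension; the Sphere function used here is a standard benchmark, not one prescribed by the survey:

```python
# the Sphere function: a standard benchmark objective (illustrative)
def sphere(x):
    # the numerical result is the fitness (or cost) of the input variables
    return sum(v * v for v in x)

# two input variables, so the dimension of this objective function is 2
fitness = sphere([3.0, 4.0])   # fitness = 25.0
```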
The main characterizing features of a good nature-inspired algorithm are a high convergence rate, low processing time, unbiased exploration and exploitation, and few algorithm-specific control parameters. The convergence rate is the speed, in terms of the number of iterations, beyond which the algorithm produces a repeated sequence of solutions; this convergent sequence lies close to the desired solution. The time required to execute an algorithm is its processing time. Exploration and exploitation are elementary behaviours of any nature-inspired algorithm: in exploration, exclusively new regions of the search space are visited, while in exploitation only the neighbourhood of previously visited points is visited [36]. A good algorithm requires an equilibrium between exploration and exploitation.
Generally, all nature-inspired algorithms require two types of control parameters: common control parameters (or regular parameters) and algorithm-specific control parameters (or dependent parameters) [159]. Common control parameters are problem-independent, such as population size, number of dimensions and number of iterations. Algorithm-specific control parameters, on the other hand, are problem-dependent: their values may differ from problem to problem. For example, GA requires a mutation and a crossover probability, while PSO requires an inertia weight and learning factors. These dependent parameters can influence the performance of the algorithm, so a good nature-inspired algorithm should use a minimum number of algorithm-specific control parameters.
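The distinction between the two parameter types can be sketched as follows; the concrete values are illustrative only, not recommendations from the surveyed papers:

```python
# common (problem-independent) control parameters, shared by most NIAs;
# the values below are illustrative only
common_params = {"population_size": 50, "dimensions": 10, "iterations": 200}

# algorithm-specific (problem-dependent) control parameters differ
# from algorithm to algorithm, e.g.:
ga_specific = {"crossover_probability": 0.9, "mutation_probability": 0.01}
pso_specific = {"inertia_weight": 0.7, "learning_factors": (2.0, 2.0)}
```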
This paper presents an extensive overview and exhaustive analysis of various nature-inspired
algorithms. The organization of the paper is as follows: Section 2 highlights the past related work.
Classification of the nature-inspired algorithms is discussed in Section 3. Section 4 provides a
broad review of various nature-inspired algorithms with their variants and applications. Section 5
presents a comparative study of the discussed algorithms, in chronological order, based on their source of inspiration, basic operators, control parameters and features. Section 6 outlines the general conclusion.

2 RELATED WORK
Recently, many researchers have made an attempt to compare various existing evolutionary and
nature-inspired algorithms. In paper [55], Elbeltagi et al. presented a comparative study of five
evolutionary algorithms for continuous and discrete optimization. These algorithms are ant-
colony, genetic, memetic, particle swarm and shuffled frog leaping. In another work, Parpinelli et
al. [147] have reviewed the recently proposed swarm intelligence algorithms which also include a
comparative analysis of ten algorithms based on source of inspiration, exploitation, exploration
and communication model. Newly introduced algorithms like bat, cuckoo search and firefly are
discussed by Sureja [206], along with a comparative result analysis of bat, cuckoo search, firefly, genetic and particle swarm algorithms on ten continuous and discrete optimization problems. In another paper [27], Binitha et al. presented a detailed literature survey and a comparison of various bio-inspired algorithms based on their representation, operators, control parameters and areas of application. In [223], Yang discussed various search strategies and new challenges of nature-inspired meta-heuristic algorithms. In a related work, Agarwal et al. [8] carried out a comprehensive review of twelve nature-inspired algorithms based on input parameters, evolutionary mechanism and applied application areas, while Kaur et al. [102] presented a comparative study of bat, cuckoo search, firefly and krill herd on the basis of their corresponding behaviour, objective function, features and areas of application. A detailed insight into ant colony,


artificial bee, evolutionary strategies, particle swarm, genetic algorithms and genetic programming
has been outlined in the work of Dixit et al. [45].

3 CLASSIFICATION OF NATURE-INSPIRED ALGORITHMS


We categorize various Nature-Inspired Algorithms (NIAs) into five major categories based on the
source of inspiration: natural evolution based, swarm intelligence based, biological based, science based and others. Natural evolution based algorithms build on the basic principles of the theory of natural evolution, known as "Darwinism". Swarm intelligence based algorithms are inspired by the collective behaviour of creatures such as ants, bats, bees, cuckoos and fireflies. Biological based algorithms are motivated by the social behavioural patterns of biological systems. Science based algorithms are based on scientific concepts. Algorithms inspired by any other natural phenomena fall into the category of others. Fig. 1
shows the classification of NIAs.

Fig. 1. Classification of nature-inspired algorithms

4 REVIEW OF NATURE-INSPIRED ALGORITHMS


The most prominent NIAs are Genetic Algorithms (GA), Ant Colony Optimization (ACO),
Particle Swarm Optimization (PSO), Shuffled Frog Leaping Algorithm (SFLA), Artificial Bee
Colony (ABC), Firefly Algorithm (FA), Gravitational Search Algorithm (GSA), Cuckoo Search
(CS), Bat Algorithm (BA), Environmental Adaption Method (EAM), Teacher Learning based
Algorithm (TLBO), Flower Pollination Algorithm (FPA), Water Cycle Algorithm (WCA) and
Jaya Algorithm. These algorithms are used for finding optimal solutions. They begin the search from a randomly initialized population within the search space [147], and in each iteration the current population is replaced by a newly generated one. Fig. 2 shows the
evolution timeline of various nature-inspired algorithms. Some of the widely recognized NIAs are
examined in the subsequent sections.
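The population-replacement scheme shared by these algorithms can be sketched as follows; the Gaussian perturbation is a deliberately generic placeholder, since each NIA defines its own way of generating new candidates:

```python
import random

def optimize(objective, dim, pop_size=20, iterations=100,
             lower=-5.0, upper=5.0):
    # a randomly initialized population inside the search space
    pop = [[random.uniform(lower, upper) for _ in range(dim)]
           for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(iterations):
        # each NIA differs in HOW new candidates are generated; a plain
        # Gaussian perturbation stands in for that step here
        candidates = [[x + random.gauss(0.0, 0.1) for x in sol] for sol in pop]
        # the current population is replaced by the newly generated one,
        # keeping the better of each old/new pair
        pop = [min(old, new, key=objective)
               for old, new in zip(pop, candidates)]
        best = min(pop + [best], key=objective)
    return best

# minimize the 2-dimensional Sphere function
best = optimize(lambda x: sum(v * v for v in x), dim=2)
```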


Fig. 2. Evolution timeline of nature-inspired algorithms

4.1 Natural Evolution based Algorithms


Natural evolution is one of the oldest and best-known concepts in the field of nature-inspired algorithms. These algorithms are inspired by the theory of natural evolution, in which only the fittest individuals of the current population are selected for the next generation. The well-known natural evolution based algorithms are elaborated in the following sections.

4.1.1 Genetic Algorithms (GA)


Holland et al. [80] proposed genetic algorithms at the University of Michigan. The concept of GA is based on Darwin's theory of biological evolution, "survival of the fittest", which states that only the fittest individuals survive into the next generation while the unfit individuals are eliminated.
Biological evolution is defined as genetic change in a population over time (or generations). Often these changes are very small and not noticeable. They occur at the gene level, and only the changes that are fit for survival are passed on to the next generation. During biological evolution, various activities take place, such as crossover and mutation of genes and selection of the fittest genes for the next generation.


In the simulation of GA, individual solutions are expressed in string format, called "chromosomes". GA uses three basic operators: crossover, mutation and selection [133]. In the evolution process, the current population is replaced by a new population that has better average fitness than the previous generation; the mean fitness of each generation is thus fitter than that of its predecessor.
Various researchers have proposed different types of crossover, mutation and selection operators [193]. Crossover and mutation share the same purpose: to modify the old chromosome and produce a new chromosome (or offspring). The main difference is that the crossover operator operates on two or more chromosomes, while the mutation operator operates on a single chromosome. The fittest individuals are selected through the selection operator.
The main parameters of GA are population size, number of generations, crossover probability, mutation probability and the selection operator. Population size and number of generations are common control parameters; crossover probability, mutation probability and the selection operator are algorithm-specific control parameters. The length of a chromosome and the chromosome encoding method are also considered algorithm-specific parameters. Owing to crossover and mutation, GA is capable of exploration and exploitation simultaneously.
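A minimal binary-encoded GA along these lines might look as follows; the binary tournament selection, single-point crossover and bit-flip mutation shown here are common textbook choices, not the specific operators of any one surveyed variant:

```python
import random

def genetic_algorithm(fitness, dim, pop_size=30, generations=100,
                      crossover_prob=0.9, mutation_prob=0.01):
    # random binary chromosomes
    pop = [[random.randint(0, 1) for _ in range(dim)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            # selection: a binary tournament picks each parent
            p1 = max(random.sample(pop, 2), key=fitness)
            p2 = max(random.sample(pop, 2), key=fitness)
            # crossover: a single cut point combines two chromosomes
            if random.random() < crossover_prob:
                cut = random.randint(1, dim - 1)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # mutation: flip each gene with a small probability
            child = [1 - g if random.random() < mutation_prob else g
                     for g in child]
            new_pop.append(child)
        pop = new_pop                       # new population replaces the old one
        best = max(pop + [best], key=fitness)
    return best

# maximize the number of 1-bits (the OneMax toy problem)
best = genetic_algorithm(sum, dim=20)
```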
The basic GA is sufficiently efficient; however, variants of GA have been proposed to improve its effectiveness, efficiency and robustness. A comparative study of the existing variants is shown in Table 1. Numerous optimization problems have been successfully solved by GA. The
recent applications are: software effort estimation [197], carpool service problem in cloud
computing [84], VM placement in data centers [207], image enhancement and segmentation [151],
water distribution system design [26], production ordering problem in an assembly process [188],
medical image protection [145], wireless networks [142], vendor-managed inventory routing
problem [146], parameter selection of photovoltaic panel [22] and QoS-aware service selection
[44].

Table 1. Comparative study of GA variants

Reference | Algorithm | Algorithm/s compared with | Application
[40] | Non-dominated Sorting GA II (NSGA-II) | Pareto-Archived ES (PAES) and Strength-Pareto EA (SPEA) | Multi-objective test problems
[152] | Adaptive GA (AGA) | Project Scheduling Problem Library (PSPLIB) | Resource Levelling Problem (RLP) in project management
[121] | Multi-objective GA (MOGA) | Maximum applications scheduling algorithm and random scheduling algorithm | Scheduling in cloud computing
[124] | Real-coded GA | Nelder–Mead simplex, PSO and Bacterial Foraging (BF) | Brain image segmentation
[65] | Binary-Real Coded GA | Lagrangian Relaxation GA (GA-LR), Integer-Coded GA (ICGA), Matrix Real-Coded GA (MRCGA), Enhanced Simulated Annealing (ESA) and SFLA | Unit Commitment (UC) problem
[236] | Hybrid GA/PSO | Existing IPPS methods | Process planning and scheduling
[178] | Simplified GA | COCOMO model | Software effort estimation
[13] | Elitism GA (EGA) | GA | Extreme Learning Machine (ELM)
[143] | GA with a decomposition strategy | GA | Unequal-Area Facility-Layout Problem (UA-FLP)
[120] | Hybrid Cuckoo Search-GA (CSGA) | ACO, PSO, Immune Based Algorithm (IBA) and Cuckoo Search (CS) | Benchmark problems and hole-making sequence optimization

4.1.2 Environmental Adaption Method (EAM)


The environmental adaption method was proposed by Mishra et al. [129] for solving different benchmark functions. EAM is based on an improved theory of Darwinism. The original theory does not consider the impact of current environmental conditions on individuals, due to which fitness improvement is very slow. The improved theory considers the impact of environmental conditions in improving the fitness of individuals: in EAM, an individual adapts to environmental conditions in order to survive in a changing environment. Observations show that during evolution, the average fitness of the current generation always improves upon the average fitness of the previous generation. EAM improves on the time and storage requirements of GA and PSO [129].
EAM simulates the improved Darwinian theory using three basic operators: adaption, mutation (or alteration) and selection [129]. The adaption operator updates the genome structure of an individual; this modification depends on the individual's current fitness and the environmental fitness. An exponential operator is used to damp the effect of change in the solution: as the solution moves towards the optimum, the fitness ratio tends towards 1 as successive generations pass. The mutation operator adds the effect of environmental noise by inverting bits of an individual's genes with a given mutation probability. After this, the best individuals are selected through a selection operator. During adaption, the new solution is calculated by equation 1.
new_sol = (α × current_sol^(fitness(current_sol) / avg_fitness) + β) / (2^L − 1)     (1)

where α and β are random numbers and L is the number of bits representing an individual.
EAM requires three main control parameters: population size, number of generations and mutation probability. Population size and number of generations are common control parameters; mutation probability and the size of an individual are algorithm-specific control parameters. Owing to the adaption operator, the fitness of individuals improves in a short duration of time. EAM has unbiased exploration and exploitation [130]: the adaption operator exploits the neighbourhoods of current solutions while the mutation operator simultaneously explores new solutions.
Some modified and updated versions of EAM have been suggested by researchers; a comparative study of the existing variants of EAM is shown in Table 2. Only a limited number of applications of EAM have been reported to date, such as test case generation [130].

Table 2. Comparative study of EAM variants

Reference | Algorithm | Algorithm/s compared with | Application
[129] | EAM | GA and PSO | Rastrigin and Schwefel functions
[131] | Modified EAM (MEAM) | EAM and ePSO | Rosenbrock, Rastrigin, Griewank and Schwefel functions
[130] | Improved EAM (IEAM) | GA and EAM | Several benchmark functions and test case generation
[140] | Non-dominated Sorting EAM (NS-EAM) | NSGA-II | Two multi-objective functions (Vanneta and Schaffer)
[210] | EAM for Dynamic environment (EAMD) | EAM | BBOB-2009 benchmark functions
[211] | Hybrid GA-EAM | PSO with Time Variant Acceleration Coefficients (PSO-TVAC), Self-Adaptive DE (SADE) and EAM | Rosenbrock and Rastrigin functions
[212] | EAM with Real parameter encoding for Dynamic environment (EAMD-R) | Several state-of-the-art algorithms | BBOB-2009 benchmark functions

4.2 Swarm Intelligence based Algorithms


Swarm intelligence is a novel idea that inspires researchers to address optimization problems efficiently and effectively. These algorithms are based on the intelligent behaviour of creatures such as ants, bees, birds, fishes, fireflies, bats and cuckoos. A large number of algorithms fall under this category. The four most widely used swarm intelligence based algorithms are described next.

4.2.1 Ant Colony Optimization (ACO)


ACO is an extended form of the traditional construction heuristic [48]. Construction algorithms solve a problem incrementally: they start with an initial solution and iteratively add a solution component, randomly or greedily, without backtracking. The greedy approach gives a better solution than the random approach, but it generates a limited number of solutions; its greediness is based on the profitability of the solution. Construction algorithms are the fastest approximation methods, but they often generate solutions that are neither optimal nor of high quality. Such a solution can be improved with the help of local search, which explores the neighbours of the current solution and moves to a better neighbour if one exists.
In real life, ants follow a stochastic construction approach and a pheromone model in their search for food. A new solution is generated by adding a stochastic solution component to a particular solution; the stochastic component is a small random value that increases the randomness of the search for the optimal solution. Due to this stochastic component, real ants discover a large number of solutions [48].
Dorigo [46] proposed a new algorithm based on the cooperative behaviour of ants, known as ACO. Ants move randomly in search of food. On discovering a food source, an ant leaves a pheromone trail on its way back to the colony; this trail serves as a communication channel for the other ants. If an ant finds a pheromone path during its search, it stops wandering and follows the path, since the path signifies the presence of a food source. As multiple ants follow the same path and lay down pheromone, the pheromone strength of the path increases. Over time the pheromone trail evaporates, which reduces its attractive strength. A longer path has lower pheromone density than a shorter one because it allows more time for evaporation, implying that shorter paths have distinctly higher pheromone density [47].


The basic ant colony metaheuristic follows three steps [46, 47]:
1. In the first step, initial solutions are constructed by all ants.
2. In the second step, solutions are improved by a local search algorithm.
3. In the third step, pheromones are updated.
In the simulation of ACO, ants memorize the traversed path and the deposited pheromone. Through random movements, ants try to find a path of minimum distance between the colony and the food source. Each movement depends on the probability of the solution components, which in turn depends on the pheromone values and heuristic information associated with each path. The probability of a path being selected is inversely proportional to its distance. For each successive movement, an ant always chooses the edge whose solution component has the higher probability. When all ants have completed their search, the pheromone amounts along the complete paths are updated; the purpose is to increase the pheromone value of good solutions and reduce that of bad ones. The worth of a solution is reflected by the pheromone level on its associated path. Different ACO variants may update pheromone in different ways.
The parameters required in ACO are the number of ants, number of iterations, pheromone evaporation rate and amount of reinforcement. The number of ants and number of iterations are common control parameters; the pheromone evaporation rate, heuristic information and amount of reinforcement are algorithm-specific control parameters. In ACO, balanced exploration and exploitation can be achieved through the management of pheromone trails [49]. ACO is well suited to finding approximate solutions to hard optimization problems expressed over graphs or trees.
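As a sketch of the metaheuristic's construction and pheromone-update phases (the local-search step is omitted for brevity), the following toy example finds a shortest path on a small directed graph; the graph and parameter values are invented for illustration:

```python
import random

def aco_shortest_path(graph, source, dest, num_ants=30, iterations=30,
                      evaporation=0.3, reinforcement=1.0):
    # one pheromone value per directed edge
    pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path, best_len = None, float("inf")
    for _ in range(iterations):
        walks = []
        # step 1: every ant constructs a solution edge by edge,
        # choosing edges with probability proportional to pheromone
        for _ in range(num_ants):
            node, path, visited = source, [source], {source}
            while node != dest:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:            # dead end: discard this walk
                    path = None
                    break
                weights = [pheromone[(node, v)] for v in choices]
                node = random.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if path is not None:
                length = sum(graph[path[i]][path[i + 1]]
                             for i in range(len(path) - 1))
                walks.append((path, length))
                if length < best_len:
                    best_path, best_len = path, length
        # step 3: evaporation, then reinforcement inversely proportional
        # to path length (shorter paths receive more pheromone)
        for edge in pheromone:
            pheromone[edge] *= (1.0 - evaporation)
        for path, length in walks:
            for i in range(len(path) - 1):
                pheromone[(path[i], path[i + 1])] += reinforcement / length
    return best_path, best_len

# a small invented graph: edge weights are distances
graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5},
         "C": {"D": 1}, "D": {}}
random.seed(7)   # for a reproducible run
path, length = aco_shortest_path(graph, "A", "D")
```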
Dorigo et al. present recent advancements and applications of ACO in [48] and further advances in a successive paper [49]. Many customized and modified variants of ACO have been proposed by various researchers. A comparative study of the existing variants of ACO is
shown in Table 3. The recent applications of ACO are: software effort estimation [205], routing
for mobile Ad-hoc network [31], spatial clustering algorithm [1], web service compositions in
cloud computing [237], train routing selection problem [187], requirement selection in software
development [41], energy-efficient networks [39], biometrics fusion [107], unsupervised
probabilistic feature selection in pattern recognition [37] and blood vessel segmentation in retina
diagnosis system [15].

Table 3. Comparative study of ACO variants

Reference | Algorithm | Algorithm/s compared with | Application
[17] | Elitist Continuous ACO (ECACO) | ACO | Coastal aquifer management problem
[106] | Binary Ant System (BAS) | Continuous ACO (CACO), Continuous Interacting Ant Colony (CIAC), ACO and API algorithm | Unconstrained optimization problems
[235] | Continuous ACO (CnACO) | Benchmark functions | Structural Health Monitoring (SHM)
[221] | Multi-Objective ACO (MOACO) | Ranked Positional Weight Method (RPWM), GA and Artificial Immune Algorithm (AIS) | Mixed-model Assembly Line Balancing Problem (MALBP)
[116] | Hybrid GA-ACO | GA, ClustalW, Central-star algorithm and Horng's GA | Multiple Sequence Alignment (MSA) problem
[149] | Hybrid ACO-PSO (HACOPSO) | Tree-based PSO (PSOTREE) and Tree Growth based ACO Algorithm (TGBACA) | QoS constrained multicast routing problem
[201] | Hybrid ACO-ANNs | ACO | Feature subset selection problem in the field of medical diagnosis
[208] | Parallel ACO | ACO and Simulated Annealing (SA) | Resource job scheduling problem

4.2.2 Particle Swarm Optimization (PSO)


In particle swarm optimization, the word swarm signifies a flock of birds or a school of fish, and the word particle denotes a single bird in the flock or fish in the school. PSO is based on the social behaviour of particles in swarms, which includes synchronous movement, unpredictable and frequent direction changes, scattering and regrouping. In a swarm, every particle learns from its own current experience and the shared experience of the other particles. Kennedy and Eberhart [103] proposed PSO based on this hypothesis.
The social behaviour of particles synchronizes a collision-free movement through the search space towards the roost: each particle matches its velocity with its nearest neighbours and maintains inter-individual distance in the swarm. To avoid a unanimous, unchanging direction, a random "craziness" variable is added to the velocity. A roost is a location in the search space that attracts the particles until they reach it; it may be a food source, a wire or a tree. To optimize the movement towards the roost, each particle remembers its own best value and shares the global best value with the other particles of the swarm, synchronizing its flying movement based on these remembered values.
In the computer simulation of PSO, every particle has its own velocity and position. Each particle remembers its personal best position (pbest) and shares the global best position (gbest) among all particles in order to find quality solutions. A particle's new position depends on the distance of its current position from pbest and gbest. The new velocity and new position of a particle are calculated using equations 2 and 3, which evolved through various development stages.
new_velocity = current_velocity + c1 r1 × (pbest − current_position)
               + c2 r2 × (gbest − current_position)     (2)

new_position = current_position + new_velocity     (3)


where r1, r2 are two random variables with range [0, 1] and c1, c2 are the learning factors.
The main control parameters in PSO are the number of particles, number of iterations and the learning factors. The number of particles and number of iterations are common control parameters; the learning factors and maximum velocity are algorithm-specific control parameters.
PSO is conceptually simple, computationally economical in terms of speed and memory, and can be programmed in a few lines of code using only a few basic arithmetic operators. PSO does not use crossover or mutation operators as GA does, although the adjustment towards pbest and gbest is conceptually similar to them.
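A compact sketch of the update rules in equations 2 and 3 follows; the velocity clamping and inertia weight w are common refinements (they appear among the variant parameters discussed below) rather than part of the basic equations, and the parameter values are illustrative:

```python
import random

def pso(objective, dim, num_particles=30, iterations=200,
        w=0.729, c1=1.49445, c2=1.49445,
        lower=-5.0, upper=5.0, v_max=1.0):
    # random initial positions, zero initial velocities
    pos = [[random.uniform(lower, upper) for _ in range(dim)]
           for _ in range(num_particles)]
    vel = [[0.0] * dim for _ in range(num_particles)]
    pbest = [p[:] for p in pos]           # personal best positions
    gbest = min(pbest, key=objective)     # global best position
    for _ in range(iterations):
        for i in range(num_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # equation 2 (with an inertia weight w on the old velocity)
                v = (w * vel[i][d]
                     + c1 * r1 * (pbest[i][d] - pos[i][d])
                     + c2 * r2 * (gbest[d] - pos[i][d]))
                # maximum velocity is an algorithm-specific parameter
                vel[i][d] = max(-v_max, min(v_max, v))
                # equation 3: position update
                pos[i][d] += vel[i][d]
            if objective(pos[i]) < objective(pbest[i]):
                pbest[i] = pos[i][:]
        gbest = min(pbest, key=objective)
    return gbest

# minimize the 2-dimensional Sphere function
best = pso(lambda x: sum(v * v for v in x), dim=2)
```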
There are more than two dozen PSO variants, and hybrid approaches with other nature-inspired algorithms such as GA and ABC have also been investigated [13, 227]. Different variants of PSO use different parameters, such as inertia weight, learning factors, velocity clamping, acceleration constants and mutation operators [86]. A comparative study of the existing variants of PSO is shown in Table 4. For many problems, PSO gives better results than traditional optimization methods and even better than the genetic algorithm. Some of the latest applications of PSO are:


software cost estimation [198], human motion tracking [186], data clustering [60], resource
allocation in the cloud [135], online dictionary learning [217], capacitor placement problem in
distribution system planning [115], vehicle routing problem [231], optimal power management
based on driving condition for electric vehicles [32], robotic [137], inventory and location control
in supply chain network [136], assembly line balancing [42] and brain MR image segmentation
[113].

Table 4. Comparative study of PSO variants

Reference | Algorithm | Algorithm/s compared with | Application
[104] | Binary PSO | - | De Jong's (1975) test bed
[63] | Regrouping PSO (RegPSO) | PSO | Ackley, Griewangk, Quadric, Rastrigin, Rosenbrock, Spherical and Weighted Sphere functions
[240] | Cooperative Quantum-behaved PSO (CQPSO) | PSO and Quantum-behaved PSO (QPSO) | Distribution algorithms
[196] | Discrete PSO | Decimal Codification based GA (DCGA) | Transmission network expansion planning
[134] | Modified Binary PSO | PSO and BPSO | Cancer diagnoses
[199] | Personal Best Position PSO (PBPPSO) | PSO | 15 scalable problems and 13 non-scalable problems
[232] | Niche PSO | PSO | Target tracking
[233] | Neighborhood Search Barebones (NSBPSO) | Barebones PSO (BPSO) | Ship design
[93] | Immunity-Enhanced PSO (IEPSO) | DE and PSO | Structural damage detection
[220] | MOPSO | Well-known evolutionary multi-objective algorithms | Feature selection and classification
[109] | Guaranteed Convergence PSO (GCPSO) | Earlier reported approaches | Optimal power flow problem
[202] | Hybrid PSO-GA | GA and PSO | Closed-Loop Supply Chain (CLSC) network design
[218] | Hybrid GA-PSO | Basic GA and PSO | Welding robot path planning

4.2.3 Artificial Bee Colony (ABC)


The intelligent foraging behaviour of honey bees inspired Karaboga [94] to propose a new swarm based algorithm, known as ABC. ABC obtains swarm-intelligent behaviour through the concepts of self-organization and division of labour. Self-organization includes positive feedback, negative feedback, fluctuation and multiple interactions; division of labour means that each task is performed by specialized individuals. A swarm of honey bees has three basic components: food sources, employed foragers and unemployed foragers. Honey bees search for food sources in an intelligent and well-organized manner: when a bee finds a good food source, judged by its nectar value, it shares the collected information with the other bees, and the rest of the bees exploit the same food source.
In ABC [95], the hive has three types of bees: employed bees, onlooker bees and scout bees, each with a distinct role. Scout bees search randomly for food locations in the vicinity of the hive; a scout becomes employed once it finds the location of a food source. Employed bees visit their food source, evaluate its amount of nectar and return to the hive, where they perform the so-called waggle dance. The information exchange between bees is an important process in ABC: the communication conveys the quality or richness of a food source, and the quality of the current sources is presented on the dance floor, measured by the duration of a bee's dance. Based on the probability (or profitability) of the food sources, an onlooker bee chooses the best one.
During every search cycle, the bees follow three basic steps [94, 97]:
1. In the first step, employed bees locate the food sources and measure the value of their nectar.
2. In the second step, the employed bees share the nectar values with the onlookers, who then select good food sources.
3. In the third step, scout bees explore the search area for new food sources.
In the simulation of ABC, a possible solution of the problem is represented by a food source. One
employed bee is associated with every food source. The fitness of the objective function is expressed
by the amount of nectar, or quality, of a food source. The onlooker bees search the food sources
based on their selection probability, while the scouts explore for other food sources without any guidance.
In ABC, employed bees find new food sources in the neighbourhood of existing sources, and
the probability of the new sources is calculated. Based on these probabilities, the onlooker bees
refine the newly discovered food source positions. The selection between a new and the current food
source follows a greedy approach, i.e. the one that gives the better solution survives. Finally, scout bees
identify abandoned food sources and replace them with random food sources. A food source is
considered abandoned if successive iterations do not improve it. The new
solution position and the selection probability are determined by equations 4 and 5 respectively.

new_position = current_position + rand × (current_position − current_position_k)    (4)

where rand is a random number within the range [−1, 1] and the random dimension index k is selected
between 1 and n.

Probability = fitness / total_fitness    (5)
The control parameters of ABC are the number of food sources (which equals the number of
onlooker or employed bees), the maximum number of cycles and the value of limit [97]. The number
of food sources and the value of limit are algorithm-specific parameters. Limit controls the
selection of food sources: if the profitability of a food source is not improved within a prefixed
number of trials, the food source is converted into an abandoned source. ABC is
a simple, flexible and robust method. In ABC, the exploration and exploitation processes work in
parallel: onlooker and employed bees exploit the search space, while the scout bees explore it [94].
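The neighbourhood search of equation 4, the onlooker selection probability of equation 5 and the greedy selection between sources can be sketched in a few lines. This is a minimal illustration assuming a minimization problem; the function and variable names are ours, not from [94]:

```python
import random

def abc_candidate(current, neighbor, k):
    """Candidate food source: perturb dimension k of the current source
    relative to a neighbouring source (equation 4)."""
    new = list(current)
    phi = random.uniform(-1.0, 1.0)          # rand in [-1, 1]
    new[k] = current[k] + phi * (current[k] - neighbor[k])
    return new

def selection_probability(fitness, all_fitness):
    """Probability of an onlooker choosing this source (equation 5)."""
    return fitness / sum(all_fitness)

def greedy_select(current, candidate, f):
    """Keep whichever source gives the better (lower) objective value."""
    return candidate if f(candidate) < f(current) else current
```

A full ABC loop would apply `abc_candidate` for each employed bee, then draw onlooker choices from `selection_probability`, and finally replace sources whose trial counter exceeds the limit.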
Many researchers have proposed variants and modifications of ABC. A comparative study of
various existing variants of ABC is shown in Table 5. Some of the recent applications of ABC are:
travelling salesman problem [98], independent path and software test suite optimization [114],
software effort estimation [70], economic dispatch problem [191], mobile robot path planning
[34], load balancing in cloud [144], optimal placement problem in wireless sensor network [77],
optimal power flow [2], image segmentation [29], energy aware routing in WSN [110], crack
identification in beam [43] and FIR filter design [52].


Table 5. Comparative study of ABC variants

Reference Algorithm Algorithm/s compared with Application


[12] Chaotic ABC ABC algorithm Rosenbrock, Griewangk and
(CABC) Rastrigin function
[241] Cooperative ABC ABC, PSO and Cooperative PSO Sphere, Rosenbrock, Griewank,
(CABC) (CPSO) Rastrigin, Ackley and Schwefel
function
[89] Hybrid GA-ABC GA, ABC and Conventional Proportional Integral (PI) speed
Gradient Descent Method controller of Permanent Magnet
Synchronous Motor (PMSM)
[76] Discrete ABC GA and ABC Blocking Flow Shop (BFS)
(DABC) scheduling problem
[96] Constrained ABC GA, PSO, ABC, Homomorphous 13 linear, nonlinear and
Mapping (HM) and Adaptive quadratic test functions
Segregational Constraint Handling
EA (ASCHEA)
[53] Hybrid ABC- ABC and PSO CEC05 benchmark functions
SPSO
[9] Multi-objective Multi-Objective EA based on Unconstrained and constrained
ABC (MOABC) Decomposition (MOEAD), test problems
Dynamical Multi-Objective EA
(DMOEADD) and Multiple
Trajectory Search (MTS)
[100] Binary ABC Binary DE (BinDE) and PSO Uncapacitated Facility Location
(DisABC) Problem (UFLP)
[18] ABC with Levy ACO, Standard ABC and Dynamic Steel space frame design
Flight distribution Harmony Search problem
(LFABC)
[108] Co-variance MOABC Portfolio optimization
guided ABC (M-
CABC)

4.2.4 Firefly Algorithm (FA)


The intelligent flashing behaviour of fireflies has served as a source of inspiration in the development
of new algorithms. Each firefly has a unique flashing pattern. It works as a signalling and
communication mechanism between fireflies and as a way to attract prey. It also acts as a protective
warning system. Generally, male and female fireflies attract each other with a unique pattern
of flashing for mating. The intensity of the flashing light depends on the attractiveness of the fireflies, the
distance between them and the degree of absorption of the medium.
Yang [227] associated the flashing behaviour of fireflies with the objective function and
proposed a new algorithm based on it, known as FA. FA has conceptual similarity
with the Bacterial Foraging Algorithm (BFA). In BFA, attraction between bacteria depends partly on
fitness value and partly on the distance between them; in FA, attraction is based on the objective
function and a monotonic decay of attractiveness with distance. FA explores the search space more
efficiently than BFA.


The computer simulation of the firefly algorithm depends on the following flashing rules of fireflies
[224, 225]:
1. The first rule states that all fireflies are unisexual; for sexual activity, they are attracted
to other fireflies.
2. According to the second rule, attractiveness is proportional to brightness, and both
decrease with the mutual distance between fireflies. A less bright
firefly moves towards a brighter firefly. If no brighter firefly is found, the firefly
moves randomly. Attraction also depends on the degree of absorption.
3. The third rule states that the brightness of a firefly equals the fitness of the objective
function.
In FA, attractiveness is calculated from brightness, which in turn is associated with the
encoded objective function. The light intensity fluctuates with distance and with absorption by the
medium, so the calculation of attractiveness and of the fluctuation in light intensity is very important. In
general, the brightness (I) and attractiveness (β) of a firefly are calculated using equations 6 and 7
respectively. The movement of a firefly i towards a brighter firefly j is calculated using equation
8.
I(r) = I_0 e^(−γr^2)   or   I(r) = I_0 / (1 + γr^2)    (6)

β(r) = β_0 e^(−γr^2)   or   β(r) = β_0 / (1 + γr^2)    (7)

where γ is the light absorption coefficient, and I_0 and β_0 are the brightness and attractiveness at
distance r = 0 respectively.

X_i = X_i + β_0 e^(−γ r_ij^2) (X_j − X_i) + α ε_i    (8)

where the second term represents the attraction and the third term the randomization, with
randomization parameter α and random vector ε_i. For most implementations β_0 = 1, α ∈ [0, 1] and γ ∈ [0, ∞),
though γ normally lies between 0.01 and 100.
In FA, the control parameters are the number of fireflies, number of iterations, light absorption
coefficient and attractiveness. The light absorption coefficient and attractiveness are the algorithm-
specific control parameters; the number of fireflies and number of iterations are the common control
parameters. Due to the variation in attractiveness, FA explores the search space efficiently. FA is
capable of finding local and global optima simultaneously, and is thus appropriate for parallel
execution. FA exploits the search space better than GA and PSO [227].
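The movement rule of equation 8 reduces to a short update. The sketch below assumes illustrative parameter defaults (β_0 = 1, γ = 1, α = 0.2) and names of our own choosing:

```python
import math
import random

def firefly_move(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2):
    """Move firefly i towards a brighter firefly j (equation 8): the
    attraction term decays exponentially with the squared distance,
    and a small random step of scale alpha is added."""
    r2 = sum((a - b) ** 2 for a, b in zip(xi, xj))   # squared distance r_ij^2
    beta = beta0 * math.exp(-gamma * r2)             # attractiveness, equation 7
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(xi, xj)]
```

With α = 0 the move is purely the attraction term, which makes the decay with distance easy to check in isolation.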
A comparative study of various existing variants of FA is shown in Table 6. Some of the
popular applications of FA are: load dispatch problem in economic emissions [14], travelling
salesman problem [88], clustering algorithm [192], feature selection [21], image compression [81],
image registration [238], manufacturing cell formation [190], image watermarking [128] and
software effort estimation [71].

Table 6. Comparative study of FA variants

Reference Algorithm Algorithm/s compared with Application


[228] Levy Flights FA GA and PSO Michalewicz, Rosenbrock, De
(LFA) Jong, Schwefel, Ackley, Rastrigin,
Easom, Griewank, Yang and
Shubert function
[66] Gaussian FA (GD-FF) PSO, FA and Time-varying inertia weight PSO (PSO-TVIW) Sphere, Rosenbrock, Rastrigin, Griewank and Ackley function


[209] Elitist FA FA Seven benchmark problems
[30] Binary Real Coded GA, PSO and Simulating Unit Commitment Problem (UCP)
FA (BRCFF) Annealing (SA)
[85] Parallel FA GA, PSO and FA Graphical Processing Unit (GPU)
implementation
[6] Hybrid Evolutionary GA, PSO, Evolutionary Parameters optimization in a
Firefly Algorithm Programming (EP) and FA complex and nonlinear problem
(HEFA)
[173] Hybrid ACO–FA Harmony Search (HS), PSO, Several benchmark problems
GA, Particle Swarm Ant Colony of unconstrained optimization
Optimization (PSACO)
[67] Chaos FA (CFA) FA Sphere and Griewank function
[229] Multi-objective FA Vector Evaluated GA (VEGA), Design optimization benchmarks
(MOFA) NSGA-II, MODE and DEMO in industrial engineering
[158] Discrete Firefly CPLEX and PSO Capacitated Facility Location
Algorithm with the Problem (CFLP)
Standard GA
[99] Hybrid Discrete PSO+SA, PSO+TS, Flexible Job Shop Scheduling
Firefly Algorithm MOPSO+LS and FL+EA Problem (FJSP)
(HDFA)

4.3 Biological based Algorithms


The biological based algorithms are inspired by the social behavioural patterns studied in the biological
sciences, such as botany and zoology. Botany covers plant systems, while zoology focuses on animal
systems. Biological systems have many characteristics, such as robustness, adaptability and optimal
decision making, which serve as motivating factors. The biologically inspired algorithms
are described in detail next.

4.3.1 Shuffled Frog Leaping Algorithm (SFLA)


SFLA combines the principles of Shuffled Complex Evolution (SCE) and Particle Swarm
Optimization (PSO) [62]. SFLA inherits its deterministic search approach from PSO and the
random search approach from SCE. SFLA is inspired by the information sharing among the
memetics of a group.
Consider a swamp with frogs, in which stones are laid at different positions, onto which the
frogs leap in order to find the stone containing the maximum amount of food. During this process, frogs share
food information with other frogs so that their memes can be improved. A meme represents the
traits of a frog, analogous to a gene in a GA [61]. The improvement of memes implies improvement in the
individual positions of the frogs. SFLA is based on this choreographed situation.
During the simulation of SFLA, the frogs are divided into several memeplexes (subgroups).
Within a memeplex, frogs share their experiences with the other members; this process is
known as memetic evolution. It improves the quality of the memes and moves the individual
positions of the frogs towards the goal. Memetic evolution is repeated a specific number of times.
Later, the frogs of the memeplexes are reshuffled. This shuffling improves the quality of the memes
by sharing the experience of frogs from different memeplexes.
SFLA was introduced by Eusuff and Lansey [61]. In SFLA, the population of virtual frogs
represents the solutions and the fitness value represents the performance. To begin with, all frogs are
arranged according to their performance in decreasing order. After this, all frogs are partitioned
into a number of memeplexes in such a way that each memeplex has an equal number of frogs.


Within each memeplex a local evolution is performed to find a locally optimal solution. After a
definite number of memetic evolution steps, all frogs are reshuffled for a global evolution to find a
globally optimal solution. This process is repeated until a stopping criterion is met, such as a
fixed number of iterations or a desired solution quality. During the memetic evolution, the new position of
the frogs is calculated by equations 9 and 10.
change_frog_position = rand × (frog_best_fitness − frog_worst_fitness)    (9)

new_frog_position = current_frog_position + change_frog_position    (10)

where −D_max ≤ change_frog_position ≤ D_max, rand is a random number in the interval [0, 1], and
D_max is the maximum permissible change in a frog's position.
If the new position of a frog is better than its previous position, the worst frog is replaced with the
new frog (solution). Otherwise, the same calculation is repeated with the global best frog (i.e.
frog_best_fitness is replaced by frog_global_best_fitness). If there is still no improvement in the solution,
the worst frog is replaced with a new randomly generated solution. The memetic evolution step is
conceptually similar to PSO, while the creation of memeplexes and the shuffling step are based on the SCE
algorithm.
In SFLA, the control parameters are the number of frogs, number of iterations, number of
memeplexes, size of each memeplex and number of evolution steps. The algorithm-specific control
parameters are the number of memeplexes, size of each memeplex and number of evolution steps; the
number of frogs and number of iterations are the common control parameters. SFLA is robust, fast
in finding a solution and suitable for parallelization [62].
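A single memetic-evolution update (equations 9 and 10) can be sketched as follows. This is a minimal illustration under a minimization convention; the clamping bound and all names are ours:

```python
import random

def sfla_leap(worst, best, d_max, fitness):
    """One leap of the worst frog towards the best frog of its memeplex
    (equations 9 and 10); each component of the change is clamped to
    [-d_max, d_max]. Returns the new position if it improves on the worst
    frog, otherwise None (the caller then retries with the global best
    frog, or finally generates a random frog)."""
    new = []
    for w, b in zip(worst, best):
        change = random.random() * (b - w)        # rand in [0, 1]
        change = max(-d_max, min(d_max, change))  # bound the move
        new.append(w + change)
    return new if fitness(new) < fitness(worst) else None
```

The shuffling step then simply re-sorts all frogs by fitness and redeals them round-robin into fresh memeplexes.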
The comparative study of different existing variants of SFLA is shown in Table 7. The SFLA is
applied in numerous applications. Some of the recent applications of SFLA include grid task
scheduling [222], UAV flight controller problem [155], set covering problem [35], brain MR
image segmentation [112], clustering in WSN [64], vehicle routing problem [122], resource
scheduling in a cloud [126], manufacturing cell design problem [203], economic dispatch problem
[185], job shop scheduling problem [117] and ground water calibration problem [61].

Table 7. Comparative study of SFLA variants

Reference Algorithm Algorithm/s compared with Application


[54] SFLA with search SFLA and GA Construction project
acceleration factor management (time-cost trade-
(MSFL) off)
[153] Modified SFL SFLA Shubert, Hartmann-3, Shekel,
(MSFL) with adaptive Hartmann-6, Rosenbrock and
coefficient Zakharov functions
[19] Discrete SFL (DSFL) SFLA, Discrete PSO (DPSO) and Nonlinear and multimodal
Binary GA (BGA) functions
[170] Chaotic SFLA SFLA, Variants of GA and PSO Sphere, Schwefel, Rosenbrock,
(CSFLA) Quadric, Rastrigin and
Griewank function
[112] MSFLA 3D-Otsu thresholding with SFLA MR brain image segmentation
and GA


4.3.2 Flower Pollination Algorithm (FPA)


Pollination is a natural process in the biological evolution of plants, based on the fertilization of seeds.
Inspired by the pollination process, Yang [226] proposed FPA. The main purpose of a flower in a
plant is reproduction, which takes place via the pollination process.
During pollination, pollens are transferred from one flower to another with the help
of pollinators such as birds, bees, insects and humans; without pollinators, biotic pollination is not
possible. Normally two forms of pollination take place: abiotic and biotic. During biotic
pollination, pollens are transferred by pollinators such as birds, bees, insects and other animals.
Abiotic pollination needs no pollinator; wind and water help in pollination.
Pollination can also take place as self-pollination or cross-pollination. Self-pollination occurs in
the same flower or a different flower of the same plant, while cross-pollination occurs between the
flowers of different plants.
For simulation, FPA follows these basic considerations [226]:
1. Self-pollination and abiotic pollination comprise local pollination.
2. Cross-pollination and biotic pollination comprise global pollination.
3. The transition between the two kinds of pollination is controlled by a switching probability P.
4. The reproduction probability of a flower is measured as flower constancy, which depends on
the similarity of two flowers.
In local pollination, flower pollens move within a very short range. This movement happens due to
physical closeness and other factors such as wind and water. In global pollination, flower pollens
can travel a long distance, carried by animals, birds or humans. The
pollination process ensures reproduction of the fittest. Normally, local pollination has the higher
fraction in the overall pollination process. Global and local pollination are calculated by equations
11 and 12 respectively.

new_pollen = current_pollen + L × (current_pollen − g*)    (11)

where g* is the current fittest solution and the parameter L is the strength of the pollination.

new_pollen = current_pollen + ϵ × (pollen_j − pollen_k)    (12)

where pollen_j and pollen_k are pollens from different flowers of the same plant (population), and ϵ is
drawn from a uniform distribution in [0, 1]. Pollen mimics the flower constancy.
In FPA, the control parameters are the number of pollens, number of iterations, switching
probability and strength of the pollination. The switching probability and strength of the pollination
are algorithm-specific control parameters; the number of pollens and number of iterations are the
common control parameters. Most applications work well at a switching probability of 0.8.
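One pollination step, switching between the global rule of equation 11 and the local rule of equation 12, can be sketched as below. This is illustrative only: the names are ours, and a plain Gaussian draw stands in for the step strength L (Yang's original formulation draws L from a Levy distribution):

```python
import random

def fpa_step(current, g_star, population, p_switch=0.8, step_scale=0.1):
    """One pollination step. With probability p_switch the pollen follows
    global pollination (equation 11) towards the current best g_star;
    otherwise local pollination (equation 12) mixes two random flowers."""
    if random.random() < p_switch:
        L = step_scale * random.gauss(0.0, 1.0)  # stand-in for Levy step L
        return [c + L * (c - g) for c, g in zip(current, g_star)]
    j, k = random.sample(range(len(population)), 2)
    eps = random.random()                        # uniform in [0, 1]
    return [c + eps * (pj - pk)
            for c, pj, pk in zip(current, population[j], population[k])]
```

Note that a pollen already at the global best is left unchanged by the global rule, since current − g* vanishes.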
The comparative study of various existing variants of FPA is shown in Table 8. Papers [20] and
[33] present reviews of FPA and its applications. FPA has numerous applications such as
image compression [101], economic load dispatch problem [154], retinal vessel segmentation in
medical [57], life time of a node optimization in WSNs [194], feature selection [174], solving
Sudoku puzzles [5], photovoltaic parameter selection in renewable energy [11], optimal placement
in distributed system [172], economic and emission dispatch problem [3] and capacitor placement
problem in electric power distribution system [4].

Table 8. Comparative study of FPA variants

Reference Algorithm Algorithm/s compared with Application


[230] Multi-objective FPA (MOFPA) Non-dominated Sorting GA (NSGA-II), Vector Evaluated GA (VEGA), Multi-objective DE (MODE), DE for Multi-objective optimization (DEMO), Strength Pareto EA (SPEA) and Multi-objective Bees algorithm (Bees) Design of a disc brake
[5] Hybrid FPA with Harmony Search (HS) Sudoku Puzzles
Chaotic Harmony
Search (FPCHS)
[90] Hybrid FPA with K-Means and FPA Data Clustering
K-Means (FPAKM)
[174] Binary FPA PSO, FA and Harmony Search (HS) Feature selection
[138] Modified FPA FPA, Bat Algorithm, FA, GA and 23 well-known
(MFPA) Simulated Annealing (SA) optimization benchmark
functions

4.4 Science based Algorithms


Science based algorithms are based on scientifically proven concepts of physics, chemistry or
mathematics. These concepts are among the basic principles of the universe. The algorithms inspired by
science are described next.

4.4.1 Gravitational Search Algorithm (GSA)


GSA is inspired by Newton's laws of gravity and motion. Three types of mass are explained in physics:
active mass, passive mass and inertial mass. Based on this concept of mass, the law of
gravity and the law of motion are rewritten as: "The gravitational force (Fij), acting on mass i by mass
j, is proportional to the product of the active gravitational mass of j (Maj) and the passive gravitational
mass of i (Mpi), and inversely proportional to the square of the distance (R) between them. The
acceleration (ai) is proportional to the gravitational force (Fij) and inversely proportional to the inertial
mass of i (Mii)."
Based on this gravitational force theory, Rashedi et al. [168] proposed GSA. For
simulation, each mass (agent) has four properties: position, inertial mass, active gravitational mass and
passive gravitational mass. The position defines a solution of the problem, and the masses are calculated
via the objective function. The heaviest mass is considered the optimal solution [168]. Due to gravitational
force, agents attract each other, and all agents move towards the heavier agents (those exerting a higher
gravitational force). This movement produces acceleration in the agents. The force between the agents
depends on the product of their masses and on the distance between them rather than the squared
distance, because this gives better results. The force is calculated by equation 13.
The acceleration is the ratio of the total force acting on an agent to the agent's inertial
mass, and is calculated by equation 14. Due to acceleration, every agent gains some
velocity, and due to the velocity the agents update their positions. The velocity and position of the
agents are computed by equations 15 and 16.
force = gravitational_const × (active_mass × passive_mass) / distance    (13)

acceleration = total_acting_force / inertia_mass    (14)

new_velocity = rand × current_velocity + acceleration    (15)

new_position = current_position + new_velocity    (16)

where rand is a uniform random variable in the interval [0, 1].


GSA has improved performance capability in terms of exploration and exploitation [177]. In
GSA, the number of masses and the positions of the agents are common control parameters, while the
gravitational constant is an algorithm-specific control parameter.
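Equations 13 to 16 translate directly into a per-agent update. The sketch below is a minimal illustration with names of our own choosing; it takes the already-summed force on one agent as input rather than computing the full pairwise sums:

```python
import random

def gravity_force(g_const, active_mass, passive_mass, distance):
    """Pairwise attractive force (equation 13); note that GSA divides by
    the distance itself, not by its square."""
    return g_const * active_mass * passive_mass / distance

def gsa_move(position, velocity, total_force, inertia_mass):
    """Acceleration, velocity and position updates (equations 14-16)."""
    acceleration = [f / inertia_mass for f in total_force]            # eq. 14
    new_velocity = [random.random() * v + a
                    for v, a in zip(velocity, acceleration)]          # eq. 15
    new_position = [p + nv for p, nv in zip(position, new_velocity)]  # eq. 16
    return new_position, new_velocity
```

In a full GSA run, `total_force` for each agent is a randomly weighted sum of `gravity_force` contributions from the heavier agents, and the gravitational constant decays over the iterations.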
Since the development of GSA, many modified variants have been proposed; these
modifications incrementally enhance the performance of GSA. The comparative study of different
variants of GSA is shown in Table 9. In paper [177], the authors discussed variants and applications of
GSA. The GSA has a number of good applications in various areas. Some of them are: IIR and
rational filter modelling [169], parameter optimization of sensor monitoring to minimize the
energy [175], routing and wavelength assignment problem in optical networks [176], PID
controller problem [51], optimal power flow problem [50], economic and emission dispatch
problem of power systems [195], finding the near-optimal base station in WSNs [157], energy
efficient WSNs [148], optimal IIR filter designing [183], gas synthesis production problem [68],
data clustering and classification [111] and heat and power economic dispatch problem [24].

Table 9. Comparative study of GSA variants

Reference Algorithm Algorithm/s compared with Application


[167] Binary GSA (BGSA) GA, Binary PSO (BPSO) Seven unimodal, five
multimodal, ten
multimodal test functions
with fix dimension
[78] Multi-objective GSA MOPSO, MOGSA and Several Multi- Three multi-objective
with uniform mutation objective EA (MOEAs) functions (MOP5, MOP6
and elitist policy and MOPCI)
[72] Multi-Objective GSA NSGA-II and Multi-objective GA for Motif Discovery Problem
(MO-GSA) motif discovery (MOGAMOD) (MDP) (DNA patterns)
[127] Hybrid PSO and GSA PSO and GSA 23 benchmark functions
(PSOGSA)
[87] Hybrid GSA and Fuzzy logic control and Harmonic Controlling if active
Fuzzy Logic (GSA- distortion power filter
FL)
[199] Hybrid genetic GSA Gait for the hexapod robot
gravitational algorithm
[141] Non-dominated NSGA-II, MOGSA and MOPSO Multi-objective
Sorting GSA benchmark problems
(NSGSA) (SCH, FON, POL, KUR,
and ZDT)
[16] Intelligent GSA based Swarm intelligence based and Pattern recognition
classifier (IGSA- evolutionary classifiers problem and different
classifier) benchmarks
[118] Chaotic GSA (CGSA) GA, PSO and GSA Identifying the parameters
of Lorenz chaotic system
[74] Hybrid GSA and ABC ABC and GSA Five benchmark functions
(GSA-ABC)
[214] Gravitational Particle PSO and GSA Seven unimodal, five
Swarm (GPS) multimodal, ten
multimodal test functions
with fix dimension
[234] Niche GSA (NGSA) State-of-the-art Niching algorithms Unconstrained and constrained standard benchmark functions
[91] Hybrid PSO and GSA PSO and GSA Economic Emission Load
(HPSO-GSA) Dispatch (EELD)
problems
[38] Hybrid of Improved IPSO and IGSA Multi-robot path planning
PSO and Improved
GSA (Hybrid IPSO–
IGSA)

4.4.2 Water Cycle Algorithm (WCA)


The water cycle process and the flow of streams in the real world are the sources of inspiration for
WCA. This natural phenomenon includes various activities such as evaporation,
condensation, precipitation, transpiration, percolation and surface run-off [59].
In the real world, the sources of water are rain, snow and groundwater. Open water sources are
created by rain. Streams and rivers originate at mountain tops where icy masses liquefy. The
water absorbed by aquifers is known as groundwater. Generally, streams and rivers move downhill
on the surface: streams combine into rivers, rivers merge into the sea, and finally all
streams and rivers become one in the sea. Water evaporates from open water bodies such as
rivers, ponds and lakes. The evaporation and transpiration of water produce clouds, and the water then
comes back in the form of rain and snow. This observation inspired Eskandar et al. [59] to propose
a meta-heuristic algorithm named WCA.
In the simulation of WCA [59], the population members are known as raindrops and the fitness value is
known as the cost of a raindrop. The best raindrop is selected as the sea, some good raindrops are selected as
rivers and the remaining raindrops are treated as streams. The number of rivers is a user
parameter. The river or sea absorbs water from the streams based on the magnitude (intensity)
of the flow. The intensity is calculated by equation 17 and represents the number of
streams that flow into a river or into the sea. Normally, a stream flows into a river or directly into the sea,
and a river flows into the sea. The new positions of a stream and a river are calculated by equations 18
and 19. The same calculation is repeated with the roles of stream and river, and of river and sea,
exchanged; the better of the two calculated positions is selected as the next position of the stream or
river. Water evaporation is an important process in the water cycle algorithm, which prevents
premature convergence. In the evaporation condition, the distance between the sea and a river controls the
search intensity near the sea: if the distance is less than a fixed value, the evaporation process
starts. Once the evaporation condition is satisfied, raining starts. During the raining process, new
raindrops and streams originate at different places. The positions of these new streams are
calculated by equation 20, which is similar to the mutation operator of GA. The same process is then
repeated with the new raindrops.
intensity_of_flow = round((cost / total_cost) × no_raindrops)    (17)

new_pos_stream = current_pos_stream + rand × C × (current_pos_river − current_pos_stream)    (18)

new_pos_river = current_pos_river + rand × C × (current_pos_sea − current_pos_river)    (19)

new_stream = LB + rand × (UB − LB)    (20)


where C is a random variable between 1 and 2, and LB and UB are the lower and upper bounds of
the problem's variables.
WCA always aims to find a globally optimal solution via effective exploration and exploitation
[181]. WCA uses a small number of insensitive user parameters, which means that WCA is capable of
solving numerous optimization problems with fixed user-defined parameters [181]. The number of
rivers and the evaporation condition are the algorithm-specific control parameters.
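The core updates of equations 17 to 20 can be sketched as follows. This is a minimal illustration with names of our own choosing; C is a fixed illustrative value here, whereas the algorithm draws it between 1 and 2:

```python
import random

def intensity_of_flow(cost, total_cost, n_raindrops):
    """Number of streams assigned to a river or to the sea (equation 17)."""
    return round((cost / total_cost) * n_raindrops)

def flow_towards(follower, leader, c=2.0):
    """Move a stream towards its river (equation 18), or a river towards
    the sea (equation 19)."""
    return [f + random.random() * c * (l - f) for f, l in zip(follower, leader)]

def rain(lb, ub):
    """New randomly generated stream after evaporation (equation 20)."""
    return [l + random.random() * (u - l) for l, u in zip(lb, ub)]
```

With C up to 2, `flow_towards` can overshoot its leader, which is what lets streams explore beyond the line joining them to their river.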
Over the last few years, various improved, modified and hybrid versions of the WCA have been
proposed by various authors. A comparative study of various existing variants of WCA is shown
in Table 10. WCA has numerous applications in several varieties of optimization problems such as
weight optimization problem of truss structure [58], optimal operation of reservoir system [75],
water distribution system [182], power system stabilizer [69] and load frequency controller for
power system [56].

Table 10. Comparative study of WCA variants

Reference Algorithm Algorithm/s compared with Application


[59] WCA GA, DE, HS, Hybrid PSO and Three-bar truss problem, Speed
TLBO reducer problem, Pressure vessel
design problem, Tension/
compression spring design problem,
Welded beam design problem,
Rolling element bearing design
problem, Multiple disk clutch brake
design problem
[180] Multi-objective NSGA-II, MOPSO, Micro-GA, Four-bar truss design problem,
WCA (MOWCA) Elitist-mutation Multi-objective Speed reducer problem, Disk brake
PSO (EM-MOPSO), Hybrid design problem, Welded beam
Quantum Immune Algorithm design problem, Spring design
(HQIA) problem, Gear train design problem
[179] WCA with WCA, PSO, DE and BFO Constrained and unconstrained
Evaporation Rate optimization problems
(ER-WCA)
[79] Chaotic WCA WCA, PSO and Variants of PSO Several benchmark problems and
training of NNs

4.5 Other Algorithms


Sometimes it is hard to place an algorithm in the classification discussed above, because it is
inspired by some other natural phenomenon, such as a teaching methodology or a winning
tendency. Such algorithms are put under this category. The next section describes
teaching learning based optimization and the Jaya algorithm.

4.5.1 Teaching Learning Based Optimization (TLBO)


The concept of TLBO is based on the teaching-learning methodology of a class of learners. Various
types of learning are possible: learning from teachers, self-learning, group
learning, learning from another learner who has higher knowledge, learning from
assignments and examinations, and so on. Rao [166] considers learning from teachers and learning
from other learners for teaching learning based optimization. During the teacher's teaching, the
teacher teaches the learners and each learner increases his or her knowledge from the teacher. In learning
from other learners, a learner exchanges knowledge with a classmate who has higher
knowledge, with the aim of increasing his or her own knowledge. The aim of this teaching-learning
methodology is that learners should increase their knowledge and score higher grades.
For computer simulation, the TLBO process is performed in two phases [160, 164]: the "teacher
phase" and the "learner phase". In the teacher phase, the best learner of the class is selected as the teacher,
who strives to increase the mean result of the class. In the learner phase, every learner
increases his or her knowledge by interacting with other learners of the class; this interaction among the
learners happens randomly and enhances the learners' knowledge.
In TLBO, the class of learners is considered the population, the number of subjects is considered the
number of design variables and the result is considered the fitness value. In the teacher phase, first the mean
result of the class is computed and then the difference mean of the class is calculated by equation
21. The difference mean is the difference between the best learner's result and the mean result of the
class. The new solution depends on the current solution and the difference mean of the class, and
is calculated using equation 22. During the learner phase, two learners are selected
randomly and the new solution is calculated using equation 23 or 24.
difference_mean = r × (best_learner − TF × mean_result)    (21)
where TF is the teaching factor (either 1 or 2) and r is a random number in the range [0, 1].
new_sol = current_sol + difference_mean    (22)

new_sol = current_sol + rand × (current_sol_P − current_sol_Q), if P > Q    (23)

new_sol = current_sol + rand × (current_sol_Q − current_sol_P), if P < Q    (24)


where P and Q are randomly selected learners such that P ≠ Q.
At the end of both phases, the new solution is accepted only if it gives a better result. TLBO does not require any algorithm-specific parameters; it requires only common parameters such as the number of learners, the number of subjects and the number of iterations.
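The two phases described above can be sketched in a few lines of code. The following is an illustrative minimal implementation of TLBO for minimization; the function names and interface are our own assumptions, not Rao's reference code. It follows equations 21 to 24 with greedy acceptance at the end of each phase:

```python
import random

def tlbo(objective, bounds, pop_size=20, iterations=100, seed=0):
    """Minimal TLBO sketch for minimization (equations 21-24, greedy acceptance)."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda v, lo, hi: max(lo, min(hi, v))
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [objective(x) for x in pop]

    for _ in range(iterations):
        # Teacher phase: the best learner pulls the class mean towards itself.
        teacher = pop[min(range(pop_size), key=fit.__getitem__)]
        mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
        tf = rng.choice((1, 2))  # teaching factor TF, either 1 or 2
        for i in range(pop_size):
            cand = [clip(pop[i][d] + rng.random() * (teacher[d] - tf * mean[d]),
                         *bounds[d]) for d in range(dim)]  # equations (21)-(22)
            f = objective(cand)
            if f < fit[i]:  # accept the new solution only if it is better
                pop[i], fit[i] = cand, f

        # Learner phase: each learner moves towards a randomly chosen better learner.
        for i in range(pop_size):
            j = rng.choice([k for k in range(pop_size) if k != i])
            better, worse = (pop[i], pop[j]) if fit[i] < fit[j] else (pop[j], pop[i])
            cand = [clip(pop[i][d] + rng.random() * (better[d] - worse[d]),
                         *bounds[d]) for d in range(dim)]  # equation (23) or (24)
            f = objective(cand)
            if f < fit[i]:
                pop[i], fit[i] = cand, f

    i = min(range(pop_size), key=fit.__getitem__)
    return pop[i], fit[i]
```

For example, minimizing the sphere function (the sum of squared design variables) over [-5, 5] in each dimension with 20 learners steadily drives the best fitness towards zero, since both phases only ever accept improving solutions.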
A comparative study of the various existing variants of TLBO is shown in Table 11. TLBO has been successfully applied in various areas of engineering and science, and a large number of applications have been reported by researchers in the recent past. Some well-known applications of TLBO are: the data clustering problem [189], the electric power dispatch problem [123], machine process parameter optimization [113], IIR filter design [200], the optimal capacitor placement problem [204], flow shop and job shop scheduling problems [23], PID controller tuning [184], software effort estimation [105] and optimizing the higher order neural network [139].

Table 11. Comparative study of TLBO variants

Reference | Algorithm | Algorithm/s compared with | Application
[28] | Cooperative Co-evolutionary TLBO (CC-TLBO) | Modified TLBO (m-TLBO) | High dimensional problems
[165] | Elitist TLBO | TLBO, DE and EP | Constrained benchmark functions and optimization problems of the industrial environment
[73] | Multi-objective TLBO (MO-TLBO) | Multi-objective EA | Motif Discovery Problem (MDP) in Bioinformatics
[125] | Multi-objective TLBO based on Decomposition (MOTLA/D) | Multi-objective EA based on decomposition (MOEA/D) | Reactive power handling problem
[92] | Hybrid model of DE and TLBO (hDE-TLBO) | Non-dominated scheduling schemes for the MOSOHTS | Optimal hydro-thermal scheduling (MOSOHTS)
[215] | Harmony Search Based Teaching Learning (HSTL) | Harmony Search (HS) and complex benchmark functions | Unconstrained optimization problems
[171] | Binary TLBO (BTLBO) | PMU placement methods | Optimal placement of Phasor Measurement Units (PMU)
[10] | Hybrid of TLBO and Harmony Search | Harmony Search (HS) | Design of steel space frames
[119] | Discrete TLBO (DTLBO) | Different versions of DTLBO | Flow shop rescheduling
[82] | Teaching-Learning based Cuckoo Search (TLCS) | Well-known constrained engineering design problems | Constrained optimization problems
[83] | TLCS with Lévy flight | TLCS | Structure designing and machining
[216] | Hybridized TLBO with DE (TLBO–DE) | BAT, CS, Artificial Cooperative Search (ACS), Backtracking Search (BS), Melody Search (MS), Quantum behaved PSO (QPSO), and Intelligent Tuned HS (ITHS) | Proton exchange membrane fuel cell (PEMFC) problem
[150] | Multi-objective Improved TLBO (MO-ITLBO) | MO-TLBO | CEC 2009 standard test problems

Following the wide acceptance and popularity of TLBO, Rao [159] proposed a comparatively simpler algorithm, which has fewer computational steps yet is as powerful as other algorithms.

4.5.2 Jaya Algorithm


The Jaya algorithm [159] is based on the concept that a solution should move towards the best solution and away from the worst solution. Because it always strives to reach the optimal solution, i.e., to be victorious, it is called the Jaya algorithm: 'Jaya' is a Sanskrit word meaning 'victory'. It is a simple yet powerful algorithm.
During the simulation of Jaya, first the best and worst solutions are identified; the current solution is then modified using equation 25 to produce the new solution.
new_sol = current_sol + r1 × (best_sol − |current_sol|) − r2 × (worst_sol − |current_sol|)    (25)
where r1 and r2 are random numbers in the range [0, 1].
Like TLBO, Jaya does not have any algorithm-specific parameters. It has only common parameters such as the number of candidates (population size), the number of design variables and the number of iterations.
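Equation 25 translates directly into code. Below is an illustrative minimal sketch of Jaya for minimization under our own naming assumptions (it is not Rao's reference implementation); each candidate is attracted towards the best solution and repelled from the worst, and a new solution is kept only when it improves:

```python
import random

def jaya(objective, bounds, pop_size=20, iterations=100, seed=0):
    """Minimal Jaya sketch for minimization (equation 25, greedy acceptance)."""
    rng = random.Random(seed)
    dim = len(bounds)
    clip = lambda v, lo, hi: max(lo, min(hi, v))
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    fit = [objective(x) for x in pop]

    for _ in range(iterations):
        best = pop[min(range(pop_size), key=fit.__getitem__)]
        worst = pop[max(range(pop_size), key=fit.__getitem__)]
        for i in range(pop_size):
            # Equation (25): move towards the best solution, away from the worst.
            cand = [clip(pop[i][d]
                         + rng.random() * (best[d] - abs(pop[i][d]))
                         - rng.random() * (worst[d] - abs(pop[i][d])),
                         *bounds[d]) for d in range(dim)]
            f = objective(cand)
            if f < fit[i]:  # keep the new solution only when it is better
                pop[i], fit[i] = cand, f

    i = min(range(pop_size), key=fit.__getitem__)
    return pop[i], fit[i]
```

Note that no algorithm-specific parameter appears anywhere in the loop, which is exactly the property that distinguishes Jaya (and TLBO) from algorithms like GA or PSO.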
Jaya is a recently introduced algorithm, so only a limited number of variants exist to date. A comparative study of existing Jaya variants is shown in Table 12. Jaya has been fruitfully applied in various areas of engineering and science, such as the micro-channel heat sink problem [161], surface grinding process optimization [163], nano-finishing process optimization [162], the Tea Category Identification (TCI) problem [239], the load dispatch problem [25], the distributed energy resource distribution problem [213], machining of Carbon Fibre-Reinforced Polymer (CFRP) composites [7], production scheduling problems [156] and photovoltaic parameter selection [132].

Table 12. Comparative study of Jaya variants

Reference | Algorithm | Algorithm/s compared with | Application
[159] | Jaya algorithm | Homomorphous Mapping (HM), Simple Multi-member Evolution Strategy (SMES), GA, DE, ABC, PSO, TLBO, Biogeography based Optimization (BBO) and Heat Transfer Search (HTS) | Constrained and unconstrained functions
[239] | Jaya algorithm and fractional Fourier entropy | Various neural network based methods | Tea-Category Identification (TCI) problem
[156] | Multi-objective Jaya | GA and DE | Master Production Scheduling (MPS)

5 COMPARATIVE STUDY OF NATURE-INSPIRED ALGORITHMS


Table 13 presents a tabular study of all the discussed nature-inspired algorithms in terms of their source of inspiration, objective function or basic operators, common control parameters, algorithm-specific parameters and main features. The algorithms are arranged in chronological order.

Table 13: Summary of various nature-inspired algorithms

1. Genetic Algorithms (GA), 1975, proposed by John H. Holland et al.
   Source of inspiration: Darwin's theory of biological evolution
   Basic operators: Crossover, Mutation and Selection
   Common control parameters: Population size, Number of generations
   Algorithm-specific parameters: Crossover probability, Mutation probability, Chromosome length and encoding technique
   Features: Ability to explore and exploit simultaneously; powerful; robust

2. Ant Colony Optimization (ACO), 1992, proposed by Marco Dorigo
   Source of inspiration: Cooperative behaviour of real ants
   Basic operators: Pheromone amount, Trail evaporation
   Common control parameters: Number of ants, Number of iterations
   Algorithm-specific parameters: Pheromone evaporation rate, Heuristic information, Amount of reinforcement
   Features: Finds good paths through graphs or trees

3. Particle Swarm Optimization (PSO), 1995, proposed by James Kennedy and Russell Eberhart
   Source of inspiration: Social behaviour of creatures such as bird flocking or fish schooling
   Basic operators: Velocity and position of particles:
     new_velocity = current_velocity + c1×r1×(pbest − current_position) + c2×r2×(gbest − current_position)
     new_position = current_position + new_velocity
   Common control parameters: Number of particles, Number of iterations
   Algorithm-specific parameters: Inertia weight, Learning factors, Maximum velocity
   Features: Very simple concept; computationally inexpensive

4. Shuffled Frog Leaping Algorithm (SFLA), 2003, proposed by Muzaffar Eusuff and Kevin Lansey
   Source of inspiration: Leaping and shuffling behaviour of frogs
   Basic operators: Replacement, Shuffling and change of frog position:
     change_frog_position = rand×(frog_best_fitness − frog_worst_fitness)
     new_frog_position = old_frog_position + change_frog_position
   Common control parameters: Number of frogs, Number of iterations
   Algorithm-specific parameters: Number of memeplexes, Size of memeplexes, Number of evolutionary steps
   Features: Simple method with less computation; stable convergence and high quality solutions

5. Artificial Bee Colony (ABC), 2005, proposed by Dervis Karaboga
   Source of inspiration: Foraging behaviour of honey bees
   Basic operators: Reproduction, Replacement of bee, Selection:
     new_position = current_position + ϕ×(current_position − current_position_k)
   Common control parameters: Maximum cycle number (MCN)
   Algorithm-specific parameters: Number of food sources, Value of limit
   Features: Relatively fast; robust search process; simple and flexible

6. Firefly Algorithm (FA), 2007, proposed by Xin-She Yang
   Source of inspiration: Intelligent flashing behaviour of fireflies
   Basic operators: Brightness (light intensity), Attractiveness and Movement of fireflies:
     new_ff_i = old_ff_i + β0×e^(−γ×r²_ij)×(old_ff_j − old_ff_i) + α×ϵ_i
   Common control parameters: Number of fireflies, Number of iterations
   Algorithm-specific parameters: Attractiveness, Light absorption coefficient
   Features: High convergence rate; robust; finds good optimum solutions in fewer iterations

7. Gravitational Search Algorithm (GSA), 2009, proposed by Esmat Rashedi et al.
   Source of inspiration: Newton's law of gravity
   Basic operators: Mass, Force, Acceleration, Velocity and agent position:
     new_velocity = rand×current_velocity + acceleration
     new_position = old_position + new_velocity
   Common control parameters: Number of masses, Position of agents, Number of iterations
   Algorithm-specific parameters: Gravitational constant
   Features: High performance

8. Environmental Adaption Method (EAM), 2011, proposed by K. K. Mishra
   Source of inspiration: Improved version of Darwin's principle
   Basic operators: Adaption, Mutation and Selection:
     new_sol = (α×(current_sol)^(fitness(old_sol)/avg_fitness) + β) / (2^L − 1)
   Common control parameters: Population size, Number of generations
   Algorithm-specific parameters: Mutation probability, Number of bits per individual
   Features: Adapts to environmental conditions

9. Teaching Learning Based Optimization (TLBO), 2011, proposed by R. Venkata Rao et al.
   Source of inspiration: Teaching-learning methodology
   Basic operators:
     Teacher phase: difference_mean = r×(best_learner − TF×mean_result); new_sol = current_sol + difference_mean
     Learner phase: new_sol = current_sol + r×(current_sol_P − current_sol_Q)
   Common control parameters: Population size, Number of iterations, Teaching factor
   Algorithm-specific parameters: None
   Features: No algorithm-specific parameters required; efficient in terms of fewer function evaluations

10. Flower Pollination Algorithm (FPA), 2012, proposed by Xin-She Yang
    Source of inspiration: Flower pollination process of flowering plants
    Basic operators:
      Global pollination: new_pollen = old_pollen + L×(old_pollen − current_best_pollen)
      Local pollination: new_pollen = old_pollen + ϵ×(old_random1_pollen − old_random2_pollen)
    Common control parameters: Number of pollens, Number of iterations
    Algorithm-specific parameters: Switch probability, Strength of pollination
    Features: Simple, flexible, efficient; higher convergence rate

11. Water Cycle Algorithm (WCA), 2012, proposed by Hadi Eskandar et al.
    Source of inspiration: Water cycle process and movement of water streams
    Basic operators:
      new_pos_stream = current_pos_stream + rand×C×(current_pos_river − current_pos_stream)
      new_pos_river = current_pos_river + rand×C×(current_pos_sea − current_pos_river)
    Common control parameters: Number of raindrops, Number of iterations
    Algorithm-specific parameters: Number of rivers, Evaporation condition, Number of design variables
    Features: Not trapped in local solutions

12. Jaya Algorithm, 2016, proposed by R. Venkata Rao
    Source of inspiration: Victory (win)
    Basic operators: Move in the direction of the best solution and away from the worst solution:
      new_sol = current_sol + r1×(best_sol − |current_sol|) − r2×(worst_sol − |current_sol|)
    Common control parameters: Population size, Number of iterations
    Algorithm-specific parameters: None
    Features: Simple; no algorithm-specific parameters required; yet powerful
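To make the operator column of Table 13 concrete, the PSO velocity and position updates can be sketched as below. This is an illustrative minimal version under our own naming assumptions (not the canonical Kennedy-Eberhart code); the inertia weight w, listed in the table among PSO's algorithm-specific parameters, is included in the velocity update:

```python
import random

def pso(objective, bounds, n_particles=30, iterations=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal PSO sketch of the velocity/position updates for minimization."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # personal best positions
    pbest_f = [objective(p) for p in pos]
    g = min(range(n_particles), key=pbest_f.__getitem__)
    gbest, gbest_f = pbest[g][:], pbest_f[g]         # global best position

    for _ in range(iterations):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # new_velocity = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]               # new_position = x + new_velocity
            f = objective(pos[i])
            if f < pbest_f[i]:                       # update personal and global bests
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f
```

The other population-based updates in the table (SFLA, ABC, FA, GSA, WCA) follow the same shape: generate a candidate from the current position plus a randomly weighted difference of positions, then accept or reject it.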

6 CONCLUSION
Nature-inspired algorithms are a recent evolution beyond GA. These are highly efficient algorithms that produce near optimal solutions for real-world optimization problems. The monumental impact of these algorithms is attributed to their widespread use for solving a vast variety of problems. We have presented a systematic review of various nature-inspired algorithms. No single algorithm excels at solving all optimization problems: an algorithm may provide superior performance on some problems while performing poorly on others.


Often, the characteristics of the problem affect the performance of an algorithm. Among the multitude of known optimization techniques, GA and PSO are the most widely used. PSO is much simpler than GA because it does not use crossover or mutation operators. ACO is efficient for graph-based and tree-based optimization problems. EAM is a technique that adapts to current environmental changes, thus giving better performance than other techniques for solving constrained and unconstrained optimization problems. Both FA and FPA show a high convergence rate and improve both running time and result quality in various fields. A small number of algorithm-specific parameters is the characterizing feature of TLBO; with this technique, an optimal solution can be obtained in a comparatively smaller number of iterations, and TLBO requires less computational effort for large scale problems. WCA offers more efficient solutions than other techniques in terms of computational cost. Jaya is a straightforward technique that is as capable as the others; moreover, it is independent of algorithm-specific parameters. The algorithms that have been proposed recently are yet to be fully explored in their application areas. This comprehensive review of well-known optimization algorithms can be used as a source of information for further research. Our aim is to encourage upcoming researchers and pave the way for them to identify or develop novel and efficient optimization algorithms for tackling large scale real-world problems.

REFERENCES
[1] Tülin Inkaya, Sinan Kayaligil, and Nur Evin Özdemirel. 2015. Ant colony optimization based clustering
methodology. Appl. Soft Comput. 28 (2015), 301–311.
[2] Kadir Abaci, Volkan Yamacli, and Ali Akdagli. 2016. Optimal power flow with SVC devices by using the
artificial bee colony algorithm. Turkish J. Electr. Eng. Comput. Sci. 24, 1 (2016), 341–353.
[3] A.Y. Abdelaziz, E.S. Ali, and S.M. Abd Elazim. 2016. Combined economic and emission dispatch solution using
Flower Pollination Algorithm. Int. J. Electr. Power Energy Syst. 80 (2016), 264–274.
[4] Almoataz Y. Abdelaziz, Ehab S. Ali, and Sahar M. Abd Elazim. 2016. Flower pollination algorithm for optimal
capacitor placement and sizing in distribution systems. Electr. Power Components Syst. 44, 5 (2016), 544–555.
[5] Osama Abdel-Raouf, Ibrahim El-Henawy, and Mohamed Abdel-Baset. 2014. A novel hybrid flower pollination
algorithm with chaotic harmony search for solving sudoku puzzles. Int. J. Mod. Educ. Comput. Sci. 6, 3 (2014), 38.
[6] Afnizanfaizal Abdullah, Safaai Deris, Mohd Saberi Mohamad, and Siti Zaiton Mohd Hashim. 2012. A new hybrid
firefly algorithm for complex and nonlinear problem. In Distributed Computing and Artificial Intelligence. Springer,
673–680.
[7] Kumar Abhishek, V. Rakesh Kumar, Saurav Datta, and Siba Sankar Mahapatra. 2016. Application of JAYA
algorithm for the optimization of machining performance characteristics during the turning of CFRP (epoxy)
composites: comparison with TLBO, GA, and ICA. Eng. Comput. (2016), 1–19.
[8] Parul Agarwal and Shikha Mehta. 2014. Nature-Inspired Algorithms: State-of-Art, Problems and Prospects. Nature
100, 14 (2014).
[9] Reza Akbari, Ramin Hedayatzadeh, Koorush Ziarati, and Bahareh Hassanizadeh. 2012. A multi-objective artificial
bee colony algorithm. Swarm Evol. Comput. 2 (2012), 39–52.
[10] Alper Akin and Ibrahim Aydogdu. 2015. Optimum design of steel space frames by hybrid teaching-learning based
optimization and harmony search algorithms. World Acad. Sci. Eng. Technol. Civ. Environ. Eng. 2, 7 (2015), 739–
745.
[11] D.F. Alam, D.A. Yousri, and M.B. Eteiba. 2015. Flower pollination algorithm based solar PV parameter estimation.
Energy Convers. Manag. 101 (2015), 410–422.
[12] Bilal Alatas. 2010. Chaotic bee colony algorithms for global numerical optimization. Expert Syst. Appl. 37, 8 (2010),
5682–5687.
[13] Vimala Alexander and Pethalakshmi Annamalai. 2016. An Elitist Genetic Algorithm Based Extreme Learning
Machine. In Computational Intelligence, Cyber Security and Computational Models. Springer, 301–309.
[14] Theofanis Apostolopoulos and Aristidis Vlachos. 2010. Application of the firefly algorithm for solving the
economic emissions load dispatch problem. Int. J. Comb. 2011 (2010).
[15] Ahmed Hamza Asad, Ahmad Taher Azar, and Aboul Ella Hassanien. 2017. A new heuristic function of ant colony
system for retinal vessel segmentation. In Medical Imaging: Concepts, Methodologies, Tools, and Applications. IGI
Global, 2063–2081.
[16] Hossein Askari and Seyed-Hamid Zahiri. 2012. Decision function estimation using intelligent gravitational search
algorithm. Int. J. Mach. Learn. Cybern. 3, 2 (2012), 163–172.
[17] Behzad Ataie-Ashtiani and Hamed Ketabchi. 2011. Elitist continuous ant colony optimization algorithm for optimal
management of coastal aquifers. Water Resour. Manag. 25, 1 (2011), 165–190.


[18] I Aydogdu, A. Akin, and Mehmet Polat Saka. 2016. Design optimization of real world steel space frames using
artificial bee colony algorithm with Levy flight distribution. Adv. Eng. Softw. 92 (2016), 1–14.
[19] M.T. Vakil Baghmisheh, Katayoun Madani, and Alireza Navarbaf. 2011. A discrete shuffled frog optimization
algorithm. Artif. Intell. Rev. 36, 4 (2011), 267.
[20] Kamalam Balasubramani and Karnan Marcus. 2014. A study on flower pollination algorithm and its applications.
Int. J. Appl. or Innov. Eng. Manag. 3, 11 (2014), 230–235.
[21] Hema Banati and Monika Bajaj. 2011. Fire fly based feature selection approach. IJCSI Int. J. Comput. Sci. Issues 8,
4 (2011).
[22] J.D. Bastidas-Rodriguez, G. Petrone, C.A. Ramos-Paja, and G. Spagnuolo. 2017. A genetic algorithm for identifying
the single diode model parameters of a photovoltaic panel. Math. Comput. Simul. 131 (2017), 38–54.
[23] Adil Baykasoglu, Alper Hamzadayi, and Simge Yelkenci Köse. 2014. Testing the performance of teaching--learning
based optimization (TLBO) algorithm on combinatorial problems: Flow shop and job shop scheduling cases. Inf.
Sci. (Ny). 276 (2014), 204–218.
[24] Soheil Derafshi Beigvand, Hamdi Abdi, and Massimo La Scala. 2016. Combined heat and power economic dispatch
problem using gravitational search algorithm. Electr. Power Syst. Res. 133 (2016), 160–172.
[25] Motilal Bhoye, M.H. Pandya, Sagar Valvi, Indrajit N. Trivedi, Pradeep Jangir, and Siddharth A. Parmar. 2016. An
emission constraint economic load dispatch problem solution with microgrid using JAYA algorithm. In Energy
Efficient Technologies for Sustainability (ICEETS), 2016 International Conference on. 497–502.
[26] W. Bi, Graeme C. Dandy, and Holger R. Maier. 2015. Improved genetic algorithm optimization of water distribution
system design by incorporating domain knowledge. Environ. Model. Softw. 69 (2015), 370–381.
[27] S. Binitha, S. Siva Sathya, and others. 2012. A survey of bio inspired optimization algorithms. Int. J. Soft Comput.
Eng. 2, 2 (2012), 137–151.
[28] Subhodip Biswas, Souvik Kundu, Digbalay Bose, and Swagatam Das. 2012. Cooperative co-evolutionary teaching-
learning based algorithm with a modified exploration strategy for large scale global optimization. In International
Conference on Swarm, Evolutionary, and Memetic Computing. 467–475.
[29] Ankita Bose and Kalyani Mali. 2016. Fuzzy-based artificial bee colony optimization for gray image segmentation.
Signal, Image Video Process. 10, 6 (2016), 1089–1096.
[30] K. Chandrasekaran and Sishaj P. Simon. 2012. Network and reliability constrained unit commitment problem using
binary real coded firefly algorithm. Int. J. Electr. Power Energy Syst. 43, 1 (2012), 921–932.
[31] Shubhajeet Chatterjee and Swagatam Das. 2015. Ant colony optimization based enhanced dynamic source routing
algorithm for mobile Ad-hoc network. Inf. Sci. (Ny). 295 (2015), 67–90.
[32] Zeyu Chen, Rui Xiong, and Jiayi Cao. 2016. Particle swarm optimization-based optimal power management of plug-
in hybrid electric vehicles considering uncertain driving conditions. Energy 96 (2016), 197–208.
[33] Haruna Chiroma, Nor Liyana Mohd Shuib, Sanah Abdullahi Muaz, Adamu I. Abubakar, Lubabatu Baballe Ila, and
Jaafar Zubairu Maitama. 2015. A review of the applications of bio-inspired flower pollination algorithm. Procedia
Comput. Sci. 62 (2015), 435–441.
[34] Marco A. Contreras-Cruz, Victor Ayala-Ramirez, and Uriel H. Hernandez-Belmonte. 2015. Mobile robot path
planning using artificial bee colony and evolutionary programming. Appl. Soft Comput. 30 (2015), 319–328.
[35] Broderick Crawford, Ricardo Soto, Cristian Peña, Wenceslao Palma, Franklin Johnson, and Fernando Paredes. 2015.
Solving the set covering problem with a shuffled frog leaping algorithm. In Asian Conference on Intelligent
Information and Database Systems. 41–50.
[36] Matej Črepinšek, Shih-Hsi Liu, and Marjan Mernik. 2013. Exploration and exploitation in evolutionary algorithms:
A survey. ACM Comput. Surv. 45, 3 (2013), 35.
[37] Behrouz Zamani Dadaneh, Hossein Yeganeh Markid, and Ali Zakerolhosseini. 2016. Unsupervised probabilistic
feature selection using ant colony optimization. Expert Syst. Appl. 53 (2016), 27–42.
[38] P.K. Das, Himansu Sekhar Behera, and Bijaya K. Panigrahi. 2016. A hybridization of an improved particle swarm
optimization and gravitational search algorithm for multi-robot path planning. Swarm Evol. Comput. 28 (2016), 14–
28.
[39] Mateus de Paula Marques, Fábio Renan Durand, and Taufik Abrão. 2016. WDM/OCDM energy-efficient networks
based on heuristic ant colony optimization. IEEE Syst. J. 10, 4 (2016), 1482–1493.
[40] Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, and TAMT Meyarivan. 2002. A fast and elitist multiobjective
genetic algorithm: NSGA-II. IEEE Trans. Evol. Comput. 6, 2 (2002), 182–197.
[41] José Del Sagrado and Isabel Maria del Águila Cano. 2016. Ant colony optimization for requirement selection in
incremental software development. (2016).
[42] Yilmaz Delice, Emel Kizilkaya Aydogan, Ugur Özcan, and Mehmet Sitki Ilkay. 2017. A modified particle swarm
optimization algorithm to mixed-model two-sided assembly line balancing. J. Intell. Manuf. 28, 1 (2017), 23–36.
[43] Zhenghao Ding, Zhongrong Lu, Min Huang, and Jike Liu. 2017. Improved artificial bee colony algorithm for crack
identification in beam using natural frequencies only. Inverse Probl. Sci. Eng. 25, 2 (2017), 218–238.
[44] Zhijun Ding, Youqing Sun, Junjun Liu, Meiqin Pan, and Jiafen Liu. 2017. A genetic algorithm based approach to
transactional and QoS-aware service selection. Enterp. Inf. Syst. 11, 3 (2017), 339–358.
[45] Manish Dixit, Nikita Upadhyay, and Sanjay Silakari. 2015. An Exhaustive Survey on Nature Inspired Optimization
Algorithms. Int. J. Softw. Eng. Its Appl. 9, 4 (2015), 91–104.


[46] Marco Dorigo. 1992. Optimization, learning and natural algorithms. Ph. D. Thesis, Politec. di Milano, Italy (1992).
[47] Marco Dorigo and Christian Blum. 2005. Ant colony optimization theory: A survey. Theor. Comput. Sci. 344, 2
(2005), 243–278.
[48] Marco Dorigo and Thomas Stützle. 2003. The ant colony optimization metaheuristic: Algorithms, applications, and
advances. In Handbook of metaheuristics. Springer, 250–285.
[49] Marco Dorigo and Thomas Stützle. 2010. Ant colony optimization: overview and recent advances. In Handbook of
metaheuristics. Springer, 227–263.
[50] Serhat Duman, Ugur Güvenç, Yusuf Sönmez, and Nuran Yörükeren. 2012. Optimal power flow using gravitational
search algorithm. Energy Convers. Manag. 59 (2012), 86–95.
[51] Serhat Duman, Dinçer Maden, and Ugur Güvenç. 2011. Determination of the PID controller parameters for speed
and position control of DC motor using gravitational search algorithm. In Electrical and Electronics Engineering
(ELECO), 2011 7th International Conference on. I--225.
[52] Atul Kumar Dwivedi, Subhojit Ghosh, and Narendra D. Londhe. 2017. Low-Power FIR Filter Design Using Hybrid
Artificial Bee Colony Algorithm with Experimental Validation Over FPGA. Circuits, Syst. Signal Process. 36, 1
(2017), 156–180.
[53] Mohammed El-Abd. 2011. A hybrid ABC-SPSO algorithm for continuous function optimization. In Swarm
Intelligence (SIS), 2011 IEEE Symposium on. 1–6.
[54] Emad Elbeltagi, Tarek Hegazy, and Donald Grierson. 2007. A modified shuffled frog-leaping optimization
algorithm: applications to project management. Struct. Infrastruct. Eng. 3, 1 (2007), 53–60.
[55] Emad Elbeltagi, Tarek Hegazy, and Donald Grierson. 2005. Comparison among five evolutionary-based
optimization algorithms. Adv. Eng. informatics 19, 1 (2005), 43–53.
[56] Mohammed A. El-Hameed and Attia A. El-Fergany. 2016. Water cycle algorithm-based load frequency controller
for interconnected power systems comprising non-linearity. IET Gener. Transm. Distrib. 10, 15 (2016), 3950–3961.
[57] Eid Emary, Hossam M. Zawbaa, Aboul Ella Hassanien, Mohamed F. Tolba, and Václav Snášel. 2014. Retinal vessel
segmentation based on flower pollination search algorithm. In Proceedings of the Fifth International Conference on
Innovations in Bio-Inspired Computing and Applications IBICA 2014. 93–100.
[58] Hadi Eskandar, A. Sadollah, and A. Bahreininejad. 2013. Weight optimization of truss structures using water cycle
algorithm. Iran Univ. Sci. & Technol. 3, 1 (2013), 115–129.
[59] Hadi Eskandar, Ali Sadollah, Ardeshir Bahreininejad, and Mohd Hamdi. 2012. Water cycle algorithm--A novel
metaheuristic optimization method for solving constrained engineering optimization problems. Comput. Struct. 110
(2012), 151–166.
[60] Ahmed A.A. Esmin, Rodrigo A. Coelho, and Stan Matwin. 2015. A review on particle swarm optimization
algorithm and its variants to clustering high-dimensional data. Artif. Intell. Rev. 44, 1 (2015), 23–45.
[61] Muzaffar M. Eusuff and Kevin E. Lansey. 2003. Optimization of water distribution network design using the
shuffled frog leaping algorithm. J. Water Resour. Plan. Manag. 129, 3 (2003), 210–225.
[62] Muzaffar Eusuff, Kevin Lansey, and Fayzul Pasha. 2006. Shuffled frog-leaping algorithm: a memetic meta-heuristic
for discrete optimization. Eng. Optim. 38, 2 (2006), 129–154.
[63] George I. Evers and Mounir Ben Ghalia. 2009. Regrouping particle swarm optimization: a new global optimization
algorithm with improved performance consistency across benchmarks. In Systems, Man and Cybernetics, 2009.
SMC 2009. IEEE International Conference on. 3901–3908.
[64] Xunli Fan and Feiefi Du. 2015. Shuffled frog leaping algorithm based unequal clustering strategy for wireless sensor
networks. Appl. Math. Inf. Sci. 9, 3 (2015), 1415.
[65] Mai A. Farag, M.A. El-Shorbagy, I.M. El-Desoky, A.A. El-Sawy, A.A. Mousa, and others. 2015. Binary-Real
Coded Genetic Algorithm Based k-Means Clustering for Unit Commitment Problem. Appl. Math. 6, 11 (2015),
1873.
[66] Sh M. Farahani, A.A. Abshouri, B. Nasiri, and M.R. Meybodi. 2011. A Gaussian firefly algorithm. Int. J. Mach.
Learn. Comput. 1, 5 (2011), 448.
[67] A.H. Gandomi, X.S. Yang, S. Talatahari, and A.H. Alavi. 2013. Firefly algorithm with chaos. Commun. Nonlinear
Sci. Numer. Simul. 18, 1 (2013), 89–98.
[68] T. Ganesan, I. Elamvazuthi, Ku Zilati Ku Shaari, and P. Vasant. 2013. Swarm intelligence and gravitational search
algorithm for multi-objective optimization of synthesis gas production. Appl. Energy 103 (2013), 368–374.
[69] Navid Ghaffarzadeh. 2015. Water Cycle Algorithm Based Power System Stabilizer Robust Design for Power
Systems. J. Electr. Eng. 66, 2 (2015), 91–96.
[70] Farhad Soleimanian Gharehchopogh, Isa Maleki, Amin Kamalinia, and Habibeh Mohammad Zadeh. 2014. Artificial
bee colony based constructive cost model for software cost estimation. J. Sci. Res. Dev. 1, 2 (2014), 44–51.
[71] Nazeeh Ghatasheh, Hossam Faris, Ibrahim Aljarah, Rizik M.H. Al-Sayyed, and others. 2015. Optimizing Software
Effort Estimation Models Using Firefly Algorithm. J. Softw. Eng. Appl. 8, 3 (2015), 133.
[72] David L. González-Álvarez, Miguel A. Vega-Rodriguez, Juan A. Gómez-Pulido, and Juan M. Sánchez-Pérez. 2011.
Applying a multiobjective gravitational search algorithm (MO-GSA) to discover motifs. In International Work-
Conference on Artificial Neural Networks. 372–379.


[73] David L. González-Álvarez, Miguel A. Vega-Rodriguez, Juan A. Gómez-Pulido, and Juan M. Sánchez-Pérez. 2012.
Multiobjective Teaching-Learning-Based Optimization (MO-TLBO) for Motif Finding. In Computational
Intelligence and Informatics (CINTI), 2012 IEEE 13th International Symposium on. 141–146.
[74] Zhifeng Guo. 2012. A hybrid optimization algorithm based on artificial bee colony and gravitational search
algorithm. Int. J. Digit. Content Technol. its Appl. 6, 17 (2012), 620–626.
[75] Omid Bozorg Haddad, Mojtaba Moravej, and Hugo A. Loáiciga. 2014. Application of the water cycle algorithm to
the optimal operation of reservoir systems. J. Irrig. Drain. Eng. 141, 5 (2014), 4014064.
[76] Yu-Yan Han, Jun-Hua Duan, and Min Zhang. 2011. Apply the discrete artificial bee colony algorithm to the
blocking flow shop problem with makespan criterion. In Control and Decision Conference (CCDC), 2011 Chinese.
2131–2135.
[77] Hashim A. Hashim, Babajide Odunitan Ayinde, and Mohamed A. Abido. 2016. Optimal placement of relay nodes in
wireless sensor network using artificial bee colony algorithm. J. Netw. Comput. Appl. 64 (2016), 239–248.
[78] Hamid Reza Hassanzadeh and Modjtaba Rouhani. 2010. A multi-objective gravitational search algorithm. In
Computational Intelligence, Communication Systems and Networks (CICSyN), 2010 Second International
Conference on. 7–12.
[79] Ali Asghar Heidari, Rahim Ali Abbaspour, and Ahmad Rezaee Jordehi. 2017. An efficient chaotic water cycle
algorithm for optimization tasks. Neural Comput. Appl. 28, 1 (2017), 57–85.
[80] John H. Holland. 1992. Adaptation in natural and artificial systems: an introductory analysis with applications to
biology, control, and artificial intelligence. MIT Press.
[81] Ming-Huwi Horng. 2012. Vector quantization using the firefly algorithm for image compression. Expert Syst. Appl.
39, 1 (2012), 1078–1091.
[82] Jida Huang, Liang Gao, and Xinyu Li. 2015. A teaching--learning-based cuckoo search for constrained engineering
design problems. In Advances in Global Optimization. Springer, 375–386.
[83] Jida Huang, Liang Gao, and Xinyu Li. 2015. An effective teaching-learning-based cuckoo search algorithm for
parameter optimization problems in structure designing and machining processes. Appl. Soft Comput. 36 (2015),
349–356.
[84] Shih-Chia Huang, Ming-Kai Jiau, and Chih-Hsiang Lin. 2015. A genetic-algorithm-based approach to solve carpool
service problems in cloud computing. IEEE Trans. Intell. Transp. Syst. 16, 1 (2015), 352–364.
[85] Alwyn V Husselmann and K.A. Hawick. 2012. Parallel parametric optimisation with firefly algorithms on graphical
processing units. In Proceedings of the International Conference on Genetic and Evolutionary Methods (GEM). 1.
[86] Muhammad Imran, Rathiah Hashim, and Noor Elaiza Abd Khalid. 2013. An overview of particle swarm
optimization variants. Procedia Eng. 53 (2013), 491–496.
[87] H.R. Imani Jajarmi, Azah Mohamed, and H. Shareef. 2011. GSA-FL controller for three phase active power filter to
improve power quality. In Control, Instrumentation and Automation (ICCIA), 2011 2nd International Conference
on. 417–422.
[88] Gilang Kusuma Jati and others. 2011. Evolutionary discrete firefly algorithm for travelling salesman problem.
Springer.
[89] Ravi Kumar Jatoth and A. Rajasekhar. 2010. Speed control of pmsm by hybrid genetic artificial bee colony
algorithm. In Communication Control and Computing Technologies (ICCCCT), 2010 IEEE International
Conference on. 241–246.
[90] R. Jensi and G. Wiselin Jiji. 2015. Hybrid data clustering approach using K-Means and Flower Pollination
Algorithm. arXiv Prepr. arXiv1505.03236 (2015).
[91] Shanhe Jiang, Zhicheng Ji, and Yanxia Shen. 2014. A novel hybrid particle swarm optimization and gravitational
search algorithm for solving economic emission load dispatch problems with various practical constraints. Int. J.
Electr. Power Energy Syst. 55 (2014), 628–644.
[92] Xingwen Jiang and Jianzhong Zhou. 2013. Hybrid DE-TLBO algorithm for solving short term hydro-thermal
optimal scheduling with incommensurable Objectives. In Control Conference (CCC), 2013 32nd Chinese. 2474–
2479.
[93] Fei Kang, Junjie Li, and Sheng Liu. 2013. Combined data with particle swarm optimization for structural damage
detection. Math. Probl. Eng. 2013 (2013).
[94] Dervis Karaboga. 2005. An idea based on honey bee swarm for numerical optimization. Technical Report TR06,
Erciyes University, Engineering Faculty, Computer Engineering Department.
[95] Dervis Karaboga and Bahriye Akay. 2009. A comparative study of artificial bee colony algorithm. Appl. Math.
Comput. 214, 1 (2009), 108–132.
[96] Dervis Karaboga and Bahriye Akay. 2011. A modified artificial bee colony (ABC) algorithm for constrained
optimization problems. Appl. Soft Comput. 11, 3 (2011), 3021–3031.
[97] Dervis Karaboga and Bahriye Basturk. 2007. A powerful and efficient algorithm for numerical function
optimization: artificial bee colony (ABC) algorithm. J. Glob. Optim. 39, 3 (2007), 459–471.
[98] Dervis Karaboga and Beyza Gorkemli. 2011. A combinatorial artificial bee colony algorithm for traveling salesman
problem. In Innovations in Intelligent Systems and Applications (INISTA), 2011 International Symposium on. 50–53.
[99] S. Karthikeyan, P. Asokan, S. Nickolas, and Tom Page. 2015. A hybrid discrete firefly algorithm for solving multi-
objective flexible job shop scheduling problems. Int. J. Bio-Inspired Comput. 7, 6 (2015), 386–401.

XXX XXX XXX XXX, Vol. XX, No. XX, Article XX. Publication date: Feb 2021.
XX:30 R.K.Sachan and D.S.Kushwaha

[100] Mina Husseinzadeh Kashan, Nasim Nahavandi, and Ali Husseinzadeh Kashan. 2012. DisABC: a new artificial bee
colony algorithm for binary optimization. Appl. Soft Comput. 12, 1 (2012), 342–352.
[101] Gaganpreet Kaur, Dheerendra Singh, and Manjinder Kaur. 2013. Robust and Efficient 'RGB' based Fractal Image
Compression: Flower Pollination based Optimization. Int. J. Comput. Appl. 78, 10 (2013).
[102] Prabhneet Kaur and Taranjot Kaur. 2014. A Comparative Study of Various Metaheuristic Algorithms. Int. J.
Comput. Sci. Inf. Technol. 5, 5 (2014), 6701.
[103] James Kennedy and Russell Eberhart. 1995. Particle Swarm Optimization. In International Conference on Neural
Networks. 1942–1948.
[104] James Kennedy and Russell C. Eberhart. 1997. A discrete binary version of the particle swarm algorithm. In
Systems, Man, and Cybernetics, 1997. Computational Cybernetics and Simulation., 1997 IEEE International
Conference on. 4104–4108.
[105] Thanh Tung Khuat and My Hanh Le. 2016. A Novel Technique of Optimization for the COCOMO II Model
Parameters using Teaching-Learning-Based Optimization Algorithm. J. Telecommun. Inf. Technol. 1 (2016), 84.
[106] Min Kong and Peng Tian. 2005. A binary ant colony optimization for the unconstrained function optimization
problem. In International Conference on Computational and Information Science. 682–687.
[107] Amioy Kumar and Ajay Kumar. 2016. Adaptive management of multimodal biometrics fusion using ant colony
optimization. Inf. Fusion 32 (2016), 49–63.
[108] Divya Kumar and K.K. Mishra. 2016. Portfolio optimization using novel co-variance guided Artificial Bee Colony
algorithm. Swarm Evol. Comput. (2016).
[109] K. Ravi Kumar and M. Sydulu. 2014. Guaranteed Convergence based PSO approach for Optimal Power Flows with
Security Constraints. Majlesi J. Energy Manag. 3, 3 (2014).
[110] Rajeev Kumar and Dilip Kumar. 2016. Multi-objective fractional artificial bee colony algorithm to energy aware
routing protocol in wireless sensor network. Wirel. Networks 22, 5 (2016), 1461–1474.
[111] Yugal Kumar and G. Sahoo. 2014. A review on gravitational search algorithm and its applications to data clustering
& classification. Int. J. Intell. Syst. Appl. 6, 6 (2014), 79.
[112] Anis Ladgham, Fayçal Hamdaoui, Anis Sakly, and Abdellatif Mtibaa. 2015. Fast MR brain image segmentation
based on modified Shuffled Frog Leaping Algorithm. Signal, Image Video Process. 9, 5 (2015), 1113–1120.
[113] Salim Lahmiri. 2017. Glioma detection based on multi-fractal features of segmented brain MRI by particle swarm
optimization techniques. Biomed. Signal Process. Control 31 (2017), 148–155.
[114] Soma Sekhara Babu Lam, M.L. Hari Prasad Raju, Swaraj Ch, Praveen Ranjan Srivastav, and others. 2012.
Automated generation of independent paths and test suite optimization using artificial bee colony. Procedia Eng. 30
(2012), 191–200.
[115] Chu-Sheng Lee, Helon Vicente Hultmann Ayala, and Leandro dos Santos Coelho. 2015. Capacitor placement of
distribution systems using particle swarm optimization approaches. Int. J. Electr. Power Energy Syst. 64 (2015),
839–851.
[116] Zne-Jung Lee, Shun-Feng Su, Chen-Chia Chuang, and Kuan-Hung Liu. 2008. Genetic algorithm with ant colony
optimization (GA-ACO) for multiple sequence alignment. Appl. Soft Comput. 8, 1 (2008), 55–78.
[117] Deming Lei and Xiuping Guo. 2016. A shuffled frog-leaping algorithm for job shop scheduling with outsourcing
options. Int. J. Prod. Res. 54, 16 (2016), 4793–4804.
[118] Chaoshun Li, Jianzhong Zhou, Jian Xiao, and Han Xiao. 2012. Parameters identification of chaotic system by
chaotic gravitational search algorithm. Chaos, Solitons & Fractals 45, 4 (2012), 539–547.
[119] Jun-qing Li, Quan-ke Pan, and Kun Mao. 2015. A discrete teaching-learning-based optimisation algorithm for
realistic flowshop rescheduling problems. Eng. Appl. Artif. Intell. 37 (2015), 279–292.
[120] W.C.E. Lim, Ganesan Kanagaraj, and S.G. Ponnambalam. 2016. A hybrid cuckoo search-genetic algorithm for hole-
making sequence optimization. J. Intell. Manuf. 27, 2 (2016), 417–429.
[121] Jing Liu, Xing-Guo Luo, Xing-Ming Zhang, Fan Zhang, and Bai-Nan Li. 2013. Job scheduling model for cloud
computing based on multi-objective genetic algorithm. IJCSI Int. J. Comput. Sci. Issues 10, 1 (2013), 134–139.
[122] Jianping Luo, Xia Li, Min-Rong Chen, and Hongwei Liu. 2015. A novel hybrid shuffled frog leaping algorithm for
vehicle routing problem with time windows. Inf. Sci. (Ny). 316 (2015), 266–292.
[123] Barun Mandal and Provas Kumar Roy. 2013. Optimal reactive power dispatch using quasi-oppositional teaching
learning based optimization. Int. J. Electr. Power Energy Syst. 53 (2013), 123–134.
[124] S. Manikandan, K. Ramar, M. Willjuice Iruthayarajan, and K.G. Srinivasagan. 2014. Multilevel thresholding for
segmentation of medical brain images using real coded genetic algorithm. Measurement 47 (2014), 558–568.
[125] Miguel A. Medina, Carlos A. Coello Coello, and Juan M. Ramirez. 2013. Reactive power handling by a multi-
objective teaching learning optimizer based on decomposition. IEEE Trans. power Syst. 28, 4 (2013), 3629–3637.
[126] Yue Miao, Fu Rao, and Luo Yu. 2015. Research on the Resource Scheduling of the Improved SFLA in Cloud
Computing. Int. J. Grid Distrib. Comput. 8, 1 (2015), 101–108.
[127] Seyedali Mirjalili and Siti Zaiton Mohd Hashim. 2010. A new hybrid PSOGSA algorithm for function optimization.
In Computer and information application (ICCIA), 2010 international conference on. 374–377.
[128] Anurag Mishra, Charu Agarwal, Arpita Sharma, and Punam Bedi. 2014. Optimized gray-scale image watermarking
using DWT-SVD and Firefly Algorithm. Expert Syst. Appl. 41, 17 (2014), 7858–7867.

[129] K.K. Mishra, Shailesh Tiwari, and A.K. Misra. 2011. A bio inspired algorithm for solving optimization problems. In
Computer and Communication Technology (ICCCT), 2011 2nd International Conference on. 653–659.
[130] K.K. Mishra, Shailesh Tiwari, and Arun Kumar Misra. 2014. Improved environmental adaption method and its
application in test case generation. J. Intell. Fuzzy Syst. 27, 5 (2014), 2305–2317.
[131] K.K. Mishra, Shailesh Tiwari, and Arun Kumar Misra. 2012. Improved environmental adaption method for solving
optimization problems. In Computational Intelligence and Intelligent Systems. Springer, 300–313.
[132] Soumya Mishra and Pravat Kumar Ray. 2016. Power quality improvement using photovoltaic fed DSTATCOM
based on JAYA optimization. IEEE Trans. Sustain. Energy 7, 4 (2016), 1672–1680.
[133] Melanie Mitchell, Stephanie Forrest, and John H. Holland. 1992. The royal road for genetic algorithms: Fitness
landscapes and GA performance. In Proceedings of the first european conference on artificial life. 245–254.
[134] Mohd Saberi Mohamad, Sigeru Omatu, Safaai Deris, and Michifumi Yoshioka. 2011. A modified binary particle
swarm optimization for selecting the small subset of informative genes from gene expression data. IEEE Trans. Inf.
Technol. Biomed. 15, 6 (2011), 813–822.
[135] R.S. Mohana. 2015. A position balanced parallel particle swarm optimization method for resource allocation in
cloud. Indian J. Sci. Technol. 8, S3 (2015), 182–188.
[136] Seyed Mohsen Mousavi, Ardeshir Bahreininejad, S. Nurmaya Musa, and Farazila Yusof. 2017. A modified particle
swarm optimization for solving the integrated location and inventory control problems in a two-echelon supply
chain network. J. Intell. Manuf. 28, 1 (2017), 191–206.
[137] J. Mukund Nilakantan and S.G. Ponnambalam. 2016. Robotic U-shaped assembly line balancing using particle
swarm optimization. Eng. Optim. 48, 2 (2016), 231–252.
[138] Emad Nabil. 2016. A modified flower pollination algorithm for global optimization. Expert Syst. Appl. 57 (2016),
192–203.
[139] Janmenjoy Nayak, Bighnaraj Naik, and H.S. Behera. 2016. Optimizing a higher order neural network through
teaching learning based optimization algorithm. In Computational Intelligence in Data Mining - Volume 1. Springer,
57–71.
[140] Ritu Nigam, Arjun Choudhary, and K.K. Mishra. 2014. Non-dominated sorting environmental adaptation method
(NS-EAM). In Signal Processing and Integrated Networks (SPIN), 2014 International Conference on. 595–600.
[141] Hadi Nobahari, Mahdi Nikusokhan, and Patrick Siarry. 2011. Non-dominated sorting gravitational search algorithm.
In Proc. of the 2011 International Conference on Swarm Intelligence, ICSI. 1–10.
[142] Tetsuya Oda, Donald Elmazi, Admir Barolli, Shinji Sakamoto, Leonard Barolli, and Fatos Xhafa. 2016. A genetic
algorithm-based system for wireless mesh networks: analysis of system data considering different routing protocols
and architectures. Soft Comput. 20, 7 (2016), 2627–2640.
[143] Frederico Galaxe Paes, Artur Alves Pessoa, and Thibaut Vidal. 2017. A hybrid genetic algorithm with
decomposition phases for the Unequal Area Facility Layout Problem. Eur. J. Oper. Res. 256, 3 (2017), 742–756.
[144] Jeng-Shyang Pan, Haibin Wang, Hongnan Zhao, and Linlin Tang. 2015. Interaction artificial bee colony based load
balance method in cloud computing. In Genetic and Evolutionary Computing. Springer, 49–57.
[145] Narendra K. Pareek and Vinod Patidar. 2016. Medical image protection using genetic algorithm operations. Soft
Comput. 20, 2 (2016), 763–772.
[146] Yang-Byung Park, Jun-Su Yoo, and Hae-Soo Park. 2016. A genetic algorithm for the vendor-managed inventory
routing problem with lost sales. Expert Syst. Appl. 53 (2016), 149–159.
[147] Rafael S. Parpinelli and Heitor S. Lopes. 2011. New inspirations in swarm intelligence: a survey. Int. J. Bio-Inspired
Comput. 3, 1 (2011), 1–16.
[148] J. Rejina Parvin and C. Vasanthanayaki. 2013. Gravitational search algorithm based mobile aggregator sink nodes
for energy efficient wireless sensor networks. In Circuits, Power and Computing Technologies (ICCPCT), 2013
International Conference on. 1052–1058.
[149] Manoj Kumar Patel, Manas Ranjan Kabat, and Chita Ranjan Tripathy. 2014. A hybrid ACO/PSO based algorithm
for QoS multicast routing problem. Ain Shams Eng. J. 5, 1 (2014), 113–120.
[150] Vivek K. Patel and Vimal J. Savsani. 2016. A multi-objective improved teaching-learning based optimization
algorithm (MO-ITLBO). Inf. Sci. (Ny). 357 (2016), 182–200.
[151] Mantas Paulinas and Andrius Ušinskas. 2015. A survey of genetic algorithms applications for image enhancement
and segmentation. Inf. Technol. Control 36, 3 (2015).
[152] Jose Luis Ponz-Tienda, Victor Yepes, Eugenio Pellicer, and Joaquin Moreno-Flores. 2013. The resource leveling
problem with multiple resources using an adaptive genetic algorithm. Autom. Constr. 29 (2013), 161–172.
[153] Mohammad Pourmahmood, Mohammd Esmaeel Akbari, and Amin Mohammadpour. 2011. An efficient modified
shuffled frog leaping optimization algorithm. Int. J. Comput. Appl 32, 1 (2011), 975–8887.
[154] R. Prathiba, M. Balasingh Moses, and S. Sakthivel. 2014. Flower pollination algorithm applied for different
economic load dispatch problems. Int. J. Eng. Technol. 6, 2 (2014), 1009–1016.
[155] Huangzhong Pu, Ziyang Zhen, and Daobo Wang. 2011. Modified shuffled frog leaping algorithm for optimization
of UAV flight controller. Int. J. Intell. Comput. Cybern. 4, 1 (2011), 25–39.
[156] S. Radhika, Srinivasa Rao Ch, Neha Krishna, and K. Karteeka Pavan. 2016. Multi-Objective Optimization of Master
Production Scheduling Problems using Jaya Algorithm. (2016).

[157] Marjan Kuchaki Rafsanjani and Mohammad Bagher Dowlatshahi. 2012. Using gravitational search algorithm for
finding near-optimal base station location in two-tiered WSNs. Int. J. Mach. Learn. Comput. 2, 4 (2012), 377.
[158] A. Rahmani and S.A. MirHassani. 2014. A hybrid firefly-genetic algorithm for the capacitated facility location
problem. Inf. Sci. (Ny). 283 (2014), 70–78.
[159] R. Rao. 2016. Jaya: A simple and new optimization algorithm for solving constrained and unconstrained
optimization problems. Int. J. Ind. Eng. Comput. 7, 1 (2016), 19–34.
[160] R. Rao. 2016. Review of applications of TLBO algorithm and a tutorial for beginners to solve the unconstrained and
constrained optimization problems. Decis. Sci. Lett. 5, 1 (2016), 1–30.
[161] R. V Rao, K.C. More, J. Taler, and P. Ocłoń. 2016. Dimensional optimization of a micro-channel heat sink using
Jaya algorithm. Appl. Therm. Eng. 103 (2016), 572–582.
[162] R. Venkata Rao, Dhiraj P. Rai, and J. Balic. 2015. A new optimization algorithm for parameter optimization of
nano-finishing processes. Int. J. Sci. Technol. (2015).
[163] R. Venkata Rao, Dhiraj P. Rai, and Joze Balic. 2016. Surface Grinding Process Optimization Using Jaya Algorithm.
In Computational Intelligence in Data Mining - Volume 2. Springer, 487–495.
[164] R. Venkata Rao, Vimal J. Savsani, and D.P. Vakharia. 2012. Teaching-learning-based optimization: an
optimization method for continuous non-linear large scale problems. Inf. Sci. (Ny). 183, 1 (2012), 1–15.
[165] R. Rao and Vivek Patel. 2012. An elitist teaching-learning-based optimization algorithm for solving complex
constrained optimization problems. Int. J. Ind. Eng. Comput. 3, 4 (2012), 535–560.
[166] Ravipudi V. Rao, Vimal J. Savsani, and D.P. Vakharia. 2011. Teaching-learning-based optimization: a novel
method for constrained mechanical design optimization problems. Comput. Des. 43, 3 (2011), 303–315.
[167] Esmat Rashedi, Hossein Nezamabadi-Pour, and Saeid Saryazdi. 2010. BGSA: binary gravitational search algorithm.
Nat. Comput. 9, 3 (2010), 727–745.
[168] Esmat Rashedi, Hossein Nezamabadi-Pour, and Saeid Saryazdi. 2009. GSA: a gravitational search algorithm. Inf.
Sci. (Ny). 179, 13 (2009), 2232–2248.
[169] Esmat Rashedi, Hossien Nezamabadi-Pour, and Saeid Saryazdi. 2011. Filter modeling using gravitational search
algorithm. Eng. Appl. Artif. Intell. 24, 1 (2011), 117–122.
[170] Vahid Rashtchi, Meisam Hatami, and Mahdi Sabouri. 2012. Chaotic Shuffled Frog Leaping Optimization
Algorithm. (2012).
[171] Abdelmadjid Recioui, Hamid Bentarzi, and Abderrahmane Ouadi. 2015. Application of a binary teaching learning-
based algorithm to the optimal placement of phasor measurement units. In Progress in Clean Energy, Volume 1.
Springer, 817–830.
[172] P. Dinakara Prasad Reddy, V.C. Veera Reddy, and T. Gowri Manohar. 2016. Application of flower pollination
algorithm for optimal placement and sizing of distributed generation in Distribution systems. J. Electr. Syst. Inf.
Technol. (2016).
[173] R.M. Rizk-Allah, Elsayed M. Zaki, and Ahmed Ahmed El-Sawy. 2013. Hybridizing ant colony optimization with
firefly algorithm for unconstrained optimization problems. Appl. Math. Comput. 224 (2013), 473–483.
[174] Douglas Rodrigues, Xin-She Yang, André Nunes De Souza, and João Paulo Papa. 2015. Binary flower pollination
algorithm and its application to feature selection. In Recent Advances in Swarm Intelligence and Evolutionary
Computation. Springer, 85–100.
[175] A. Sh Rostami, H.M. Bernety, and A.R. Hosseinabadi. 2011. A novel and optimized Algorithm to select monitoring
sensors by GSA. In Control, Instrumentation and Automation (ICCIA), 2011 2nd International Conference on. 829–
834.
[176] Álvaro Rubio-Largo, Miguel A. Vega-Rodriguez, Juan A. Gómez-Pulido, and Juan M. Sánchez-Pérez. 2011. A
multiobjective gravitational search algorithm applied to the static routing and wavelength assignment problem. In
European Conference on the Applications of Evolutionary Computation. 41–50.
[177] Norlina Mohd Sabri, Mazidah Puteh, and Mohamad Rusop Mahmood. 2013. A review of gravitational search
algorithm. Int. J. Adv. Soft Comput. Appl 5, 3 (2013), 1–39.
[178] Rohit Kumar Sachan et al. 2016. Optimizing Basic COCOMO Model Using Simplified Genetic Algorithm.
Procedia Comput. Sci. 89 (2016), 492–498.
[179] Ali Sadollah, Hadi Eskandar, Ardeshir Bahreininejad, and Joong Hoon Kim. 2015. Water cycle algorithm with
evaporation rate for solving constrained and unconstrained optimization problems. Appl. Soft Comput. 30 (2015),
58–71.
[180] Ali Sadollah, Hadi Eskandar, and Joong Hoon Kim. 2015. Water cycle algorithm for solving constrained multi-
objective optimization problems. Appl. Soft Comput. 27 (2015), 279–298.
[181] Ali Sadollah, Hadi Eskandar, Ho Min Lee, Do Guen Yoo, and Joong Hoon Kim. 2016. Water cycle algorithm: A
detailed standard code. SoftwareX (2016).
[182] Ali Sadollah, Do Guen Yoo, Jafar Yazdi, Joong Hoon Kim, and Younghwan Choi. 2014. Application Of Water
Cycle Algorithm For Optimal Cost Design Of Water Distribution Systems. (2014).
[183] S.K. Saha, R. Kar, D. Mandal, and S.P. Ghoshal. 2013. Gravitational search algorithm with wavelet mutation
applied for optimal IIR band pass filter design. In Communications and Signal Processing (ICCSP), 2013
International Conference on. 14–18.

[184] Binod Kumar Sahu, Swagat Pati, Pradeep Kumar Mohanty, and Sidhartha Panda. 2015. Teaching-learning based
optimization algorithm based fuzzy-PID controller for automatic generation control of multi-area power system.
Appl. Soft Comput. 27 (2015), 240–249.
[185] Dina M. Said, Nabil M. Hamed, and Almoataz Y. Abdelaziz. 2016. Shuffled Frog Leaping Algorithm for Economic
Dispatch with Valve Loading Effect. Int. Electr. Eng. J. 7, 5 (2016), 2240–2248.
[186] Sanjay Saini, Dayang Rohaya Bt Awang Rambli, M. Nordin B. Zakaria, and Suziah Bt Sulaiman. 2014. A review on
particle swarm optimization algorithm and its variants to human motion tracking. Math. Probl. Eng. 2014 (2014).
[187] Marcella Samà, Paola Pellegrini, Andrea D'Ariano, Joaquin Rodriguez, and Dario Pacciarelli. 2016. Ant colony
optimization for the real-time train routing selection problem. Transp. Res. Part B Methodol. 85 (2016), 89–108.
[188] Nobuo Sannomiya and Hitoshi Iima. 2016. Genetic algorithm approach to a production ordering problem in an
assembly process with buffers. Sel. Pap. from 7 (2016), 403–408.
[189] Suresh Chandra Satapathy and Anima Naik. 2011. Data clustering based on teaching-learning-based optimization. In
International Conference on Swarm, Evolutionary, and Memetic Computing. 148–156.
[190] Mohammad Kazem Sayadi, Ashkan Hafezalkotob, and Seyed Gholamreza Jalali Naini. 2013. Firefly-inspired
algorithm for discrete optimization problems: an application to manufacturing cell formation. J. Manuf. Syst. 32, 1
(2013), 78–84.
[191] Dinu Calin Secui. 2015. A new modified artificial bee colony algorithm for the economic dispatch problem. Energy
Convers. Manag. 89 (2015), 43–62.
[192] J. Senthilnath, S.N. Omkar, and V. Mani. 2011. Clustering using firefly algorithm: performance study. Swarm Evol.
Comput. 1, 3 (2011), 164–171.
[193] R.R. Sharapov. 2007. Genetic algorithms: basic ideas, variants and analysis. INTECH Open Access Publisher.
[194] Marwa Sharawi, E. Emary, Imane Aly Saroit, and Hesham El-Mahdy. 2014. Flower pollination optimization
algorithm for wireless sensor network lifetime global optimization. Int. J. Soft Comput. Eng. 4, 3 (2014), 54–59.
[195] Binod Shaw, V. Mukherjee, and S.P. Ghoshal. 2012. A novel opposition-based gravitational search algorithm for
combined economic and emission dispatch problems of power systems. Int. J. Electr. Power Energy Syst. 35, 1
(2012), 21–33.
[196] H. Shayeghi, M. Mahdavi, and A. Bagheri. 2010. Discrete PSO algorithm based optimization of transmission lines
loading in TNEP problem. Energy Convers. Manag. 51, 1 (2010), 112–121.
[197] Alaa F. Sheta. 2006. Estimation of the COCOMO model parameters using genetic algorithms for NASA software
projects. J. Comput. Sci. 2, 2 (2006), 118–123.
[198] Alaa F. Sheta, Aladdin Ayesh, and David Rine. 2010. Evaluating software cost estimation models using particle
swarm optimisation and fuzzy logic for NASA projects: a comparative study. Int. J. Bio-Inspired Comput. 2, 6
(2010), 365–373.
[199] Narinder Singh and S.B. Singh. 2012. Personal best position particle swarm optimization. J. Appl. Comput. Sci.
Math. 12, 6 (2012), 69–76.
[200] R. Singh and H.K. Verma. 2013. Teaching-learning-based Optimization Algorithm for Parameter Identification in
the Design of IIR Filters. J. Inst. Eng. Ser. B 94, 4 (2013), 285–294.
[201] Rahul Karthik Sivagaminathan and Sreeram Ramakrishnan. 2007. A hybrid approach for feature subset selection
using neural networks and ant colony optimization. Expert Syst. Appl. 33, 1 (2007), 49–60.
[202] Hamed Soleimani and Kannan Govindan. 2015. A hybrid particle swarm optimization and genetic algorithm for
closed-loop supply chain network design in large-scale networks. Appl. Math. Model. 39, 14 (2015), 3990–4012.
[203] Ricardo Soto, Broderick Crawford, Emanuel Vega, Franklin Johnson, and Fernando Paredes. 2015. Solving
Manufacturing Cell Design Problems Using a Shuffled Frog Leaping Algorithm. In The 1st International
Conference on Advanced Intelligent System and Informatics (AISI2015), November 28-30, 2015, Beni Suef, Egypt.
253–261.
[204] Sneha Sultana and Provas Kumar Roy. 2014. Optimal capacitor placement in radial distribution systems using
teaching learning based optimization. Int. J. Electr. Power Energy Syst. 54 (2014), 387–398.
[205] Sehra Sumeet Kaur and Nishu Dewan. 2014. Ant Colony Optimization Based Software Effort Estimation. J.
Comput. Sci. Technol. 5.3, 1 (2014), 67–90.
[206] Nitesh Sureja. 2012. New inspirations in nature: A survey. Int. J. Comput. Appl. Inf. Technol. 1, 3 (2012), 21–24.
[207] Maolin Tang and Shenchen Pan. 2015. A hybrid genetic algorithm for the energy-efficient virtual machine
placement problem in data centers. Neural Process. Lett. 41, 2 (2015), 211–221.
[208] Dhananjay Thiruvady, Andreas T. Ernst, and Gaurav Singh. 2016. Parallel ant colony optimization for resource
constrained job scheduling. Ann. Oper. Res. 242, 2 (2016), 355–372.
[209] Surafel Luleseged Tilahun and Hong Choon Ong. 2012. Modified firefly algorithm. J. Appl. Math. 2012 (2012).
[210] Ashish Tripathi, Prateek Garbyal, K.K. Mishra, and Arun Kumar Misra. 2014. Environmental adaption method for
dynamic environment. In Systems, Man and Cybernetics (SMC), 2014 IEEE International Conference on. 216–221.
[211] Ashish Tripathi, Divya Kumar, Krishn Kumar Mishra, and Arun Kumar Misra. 2014. GA-EAM based hybrid
algorithm. In International Conference on Intelligent Computing. 13–20.
[212] Ashish Tripathi, Nitin Saxena, Krishn Kumar Mishra, and Arun Kumar Misra. 2015. An Environmental Adaption
Method with real parameter encoding for dynamic environment. J. Intell. Fuzzy Syst. 29, 5 (2015), 2003–2015.

[213] Indrajit N. Trivedi, Swati N. Purohit, Pradeep Jangir, and Motilal T. Bhoye. 2016. Environment dispatch of
distributed energy resources in a microgrid using JAYA algorithm. In Advances in Electrical, Electronics,
Information, Communication and Bio-Informatics (AEEICB), 2016 2nd International Conference on. 224–228.
[214] Hsing-Chih Tsai, Yaw-Yauan Tyan, Yun-Wu Wu, and Yong-Huang Lin. 2013. Gravitational particle swarm. Appl.
Math. Comput. 219, 17 (2013), 9106–9117.
[215] Shouheng Tuo, Longquan Yong, and Tao Zhou. 2013. An improved harmony search based on teaching-learning
strategy for unconstrained optimization problems. Math. Probl. Eng. 2013 (2013).
[216] Oguz Emrah Turgut and Mustafa Turhan Coban. 2016. Optimal proton exchange membrane fuel cell modelling
based on hybrid Teaching Learning Based Optimization-Differential Evolution algorithm. Ain Shams Eng. J. 7, 1
(2016), 347–360.
[217] Lizhe Wang et al. 2015. Particle swarm optimization based dictionary learning for remote sensing big data.
Knowledge-Based Syst. 79 (2015), 43–50.
[218] Xuewu Wang, Yingpan Shi, Dongyan Ding, and Xingsheng Gu. 2016. Double global optimum genetic
algorithm-particle swarm optimization-based welding robot path planning. Eng. Optim. 48, 2 (2016), 299–316.
[219] David H. Wolpert and William G. Macready. 1997. No free lunch theorems for optimization. IEEE Trans. Evol.
Comput. 1, 1 (1997), 67–82.
[220] Bing Xue, Mengjie Zhang, and Will N. Browne. 2013. Particle swarm optimization for feature selection in
classification: A multi-objective approach. IEEE Trans. Cybern. 43, 6 (2013), 1656–1671.
[221] Betul Yagmahan. 2011. Mixed-model assembly line balancing using a multi-objective ant colony optimization
approach. Expert Syst. Appl. 38, 10 (2011), 12453–12461.
[222] Wu Yang and Yuanshu Sun. 2011. An Improved Shuffled Frog Leaping Algorithm for Grid Task Scheduling. In
Network Computing and Information Security (NCIS), 2011 International Conference on. 342–346.
[223] Xin-She Yang. 2012. Nature-inspired metaheuristic algorithms: Success and new challenges. arXiv Prepr.
arXiv1211.6658 (2012).
[224] Xin-She Yang. 2010. Firefly algorithm. Eng. Optim. (2010), 221–230.
[225] Xin-She Yang. 2010. Firefly algorithm, stochastic test functions and design optimisation. Int. J. Bio-Inspired
Comput. 2, 2 (2010), 78–84.
[226] Xin-She Yang. 2012. Flower pollination algorithm for global optimization. In International Conference on
Unconventional Computing and Natural Computation. 240–249.
[227] Xin-She Yang. 2009. Firefly algorithms for multimodal optimization. In International Symposium on Stochastic
Algorithms. 169–178.
[228] Xin-She Yang. 2010. Firefly algorithm, Levy flights and global optimization. In Research and development in
intelligent systems XXVI. Springer, 209–218.
[229] Xin-She Yang. 2013. Multiobjective firefly algorithm for continuous optimization. Eng. Comput. 29, 2 (2013), 175–
184.
[230] Xin-She Yang, Mehmet Karamanoglu, and Xingshi He. 2013. Multi-objective flower algorithm for optimization.
Procedia Comput. Sci. 18 (2013), 861–868.
[231] Baozhen Yao, Bin Yu, Ping Hu, Junjie Gao, and Mingheng Zhang. 2016. An improved particle swarm optimization
for carton heterogeneous vehicle routing problem with a collection depot. Ann. Oper. Res. 242, 2 (2016), 303–320.
[232] Hai Tao Yao, Hai Qiang Chen, and Tuan Fa Qin. 2013. Niche PSO particle filter with particles fusion for target
tracking. In Applied Mechanics and Materials. 1368–1372.
[233] Jingzheng Yao and Duanfeng Han. 2013. Improved barebones particle swarm optimization with neighborhood
search and its application on ship design. Math. Probl. Eng. 2013 (2013).
[234] Sajjad Yazdani, Hossein Nezamabadi-pour, and Shima Kamyab. 2014. A gravitational search algorithm for
multimodal optimization. Swarm Evol. Comput. 14 (2014), 1–14.
[235] Ling Yu and Peng Xu. 2011. Structural health monitoring based on continuous ACO method. Microelectron. Reliab.
51, 2 (2011), 270–278.
[236] Mingrang Yu, Yingjie Zhang, Kun Chen, and Ding Zhang. 2015. Integration of process planning and scheduling
using a hybrid GA/PSO algorithm. Int. J. Adv. Manuf. Technol. 78, 1–4 (2015), 583–592.
[237] Qiang Yu, Ling Chen, and Bin Li. 2015. Ant colony optimization applied to web service compositions in cloud
computing. Comput. Electr. Eng. 41 (2015), 18–27.
[238] Yudong Zhang and Lenan Wu. 2012. A novel method for rigid image registration based on firefly algorithm. Int. J.
Res. Rev. Soft Intell. Comput. 2, 2 (2012).
[239] Yudong Zhang, Xiaojun Yang, Carlo Cattani, Ravipudi Venkata Rao, Shuihua Wang, and Preetha Phillips. 2016.
Tea category identification using a novel fractional Fourier entropy and Jaya algorithm. Entropy 18, 3 (2016), 77.
[240] Di Zhou, Jun Sun, and Wenbo Xu. 2010. An advanced quantum-behaved particle swarm optimization algorithm
utilizing cooperative strategy. In Advanced Computational Intelligence (IWACI), 2010 Third International
Workshop on. 344–349.
[241] Wenping Zou, Yunlong Zhu, Hanning Chen, and Zhu Zhu. 2010. Cooperative approaches to artificial bee colony
algorithm. In Computer Application and System Modeling (ICCASM), 2010 International Conference on. V9-44.

Rohit Kumar Sachan received the B.Tech in Computer Science and Engineering (2008)
from Galgotias College of Engineering and Technology, Greater Noida, India, and the
M.Tech in Computer Science and Engineering (2012) from Amity University, Noida,
India. He received the Ph.D. degree (2021) from Motilal Nehru National Institute of
Technology Allahabad, Allahabad, India. Currently, he is working as a Sr. Project
Engineer in the C3i Center, Department of Computer Science and Engineering, Indian
Institute of Technology Kanpur, Kanpur, India.

Dharmender Singh Kushwaha received the B.E. (Bachelor of Engineering) degree in
Computer Science and Engineering from the University of Pune, Maharashtra, India, in
1990. He was awarded the Gold Medal in M.Tech. (Computer Science and Engineering)
from Motilal Nehru National Institute of Technology Allahabad, Allahabad, India. He
received the Ph.D. degree from Motilal Nehru National Institute of Technology Allahabad,
Allahabad, India. Currently, he is working as a Professor in the Department of Computer
Science and Engineering, Motilal Nehru National Institute of Technology Allahabad,
Allahabad, India.
