Nature Inspired Optimization Algorithms and Soft Computing Methods Technology and Applications For IoTs Smart Cities
Nature-Inspired
Optimization Algorithms
and Soft Computing
Other volumes in this series:
Volume 1 Knowledge Discovery and Data Mining M.A. Bramer (Editor)
Volume 3 Troubled IT Projects: Prevention and turnaround J.M. Smith
Volume 4 UML for Systems Engineering: Watching the wheels, 2nd Edition J. Holt
Volume 5 Intelligent Distributed Video Surveillance Systems S.A. Velastin and P.
Remagnino (Editors)
Volume 6 Trusted Computing C. Mitchell (Editor)
Volume 7 SysML for Systems Engineering J. Holt and S. Perry
Volume 8 Modelling Enterprise Architectures J. Holt and S. Perry
Volume 9 Model-Based Requirements Engineering J. Holt, S. Perry, and M. Bownsword
Volume 13 Trusted Platform Modules: Why, when and how to use them A. Segall
Volume 14 Foundations for Model-Based Systems Engineering: From patterns to
models J. Holt, S. Perry, and M. Bownsword
Volume 15 Big Data and Software Defined Networks J. Taheri (Editor)
Volume 18 Modeling and Simulation of Complex Communication M.A. Niazi (Editor)
Volume 20 SysML for Systems Engineering: A model-based approach, 3rd Edition J.
Holt and S. Perry
Volume 22 Virtual Reality and Light Field Immersive Video Technologies for Real-
World Applications G. Lafruit and M. Tehrani
Volume 23 Data as Infrastructure for Smart Cities L. Suzuki and A. Finkelstein
Volume 24 Ultrascale Computing Systems J. Carretero, E. Jeannot, and A. Zomaya
Volume 25 Big Data-Enabled Internet of Things M. Khan, S. Khan, and A. Zomaya
(Editors)
Volume 26 Handbook of Mathematical Models for Languages and Computation A.
Meduna, P. Horáček, and M. Tomko
Volume 29 Blockchains for Network Security: Principles, technologies and
applications H. Huang, L. Wang, Y. Wu, and K.R. Choo (Editors)
Volume 30 Trustworthy Autonomic Computing T. Eze
Volume 32 Network Classification for Traffic Management: Anomaly detection,
feature selection, clustering and classification Z. Tari, A. Fahad, A. Almalawi,
and X. Yi
Volume 33 Edge Computing: Models, technologies and applications J. Taheri and S.
Deng (Editors)
Volume 34 AI for Emerging Verticals: Human-robot computing, sensing and
networking M.Z. Shakir and N. Ramzan (Editors)
Volume 35 Big Data Recommender Systems Vols 1 & 2 O. Khalid, S.U. Khan and A.Y.
Zomaya (Editors)
Volume 37 Handbook of Big Data Analytics Vols 1 & 2 V. Ravi and A.K. Cherukuri (Editors)
Volume 39 ReRAM-Based Machine Learning H. Yu, L. Ni and S.M.P. Dinakarrao
Volume 40 E-learning Methodologies: Fundamentals, technologies and applications
M. Goyal, R. Krishnamurthi, and D. Yadav (Editors)
Volume 44 Streaming Analytics: Concepts, architectures, platforms, use cases and
applications P. Raj, C. Surianarayanan, K. Seerangan, and G. Ghinea (Editors)
Volume 44 Streaming Analytics: Concepts, architectures, platforms, use cases and
applications P. Raj, A. Kumar, V. Garcı́a Dı́az, and N. Muthuraman (Editors)
Volume 46 Graphical Programming Using LabVIEW™: Fundamentals and advanced
techniques J.C. Rodríguez-Quiñonez and O. Real-Moreno
Volume 54 Intelligent Network Design Driven by Big Data Analytics, IoT, AI and Cloud
Computing S. Kumar, G. Mapp, and K. Cergiz (Editors)
Volume 56 Earth Observation Data Analytics Using Machine and Deep Learning:
Modern tools, applications and challenges S. Garg, S. Jain, N. Dube and N.
Varghese (Editors)
Volume 57 AIoT Technologies and Applications for Smart Environments. M. Alazab, M.
Gupta, and S. Ahmed (Editors)
Volume 60 Intelligent Multimedia Technologies for Financial Risk Management:
Trends, tools and applications K. Sood, S. Grima, B. Rawal, B. Balusamy,
E. Özen, and G.G.G. Gan (Editors)
Volume 63 Personal Knowledge Graphs (PKGs): Methodology, tools and applications
S. Tiwari, F. Scharffe, F. Ortiz-Rodrı́guez, and M. Gaur (Editors)
Volume 115 Ground Penetrating Radar: Improving sensing and imaging through
numerical modelling X.L. Travassos, M.F. Pantoja, and N. Ida
Nature-Inspired
Optimization Algorithms
and Soft Computing
Methods, technology and applications
for IoTs, smart cities, healthcare and
industrial automation
Edited by
Rajeev Arya, Sangeeta Singh, Maheshwari P. Singh,
Brijesh R. Iyer and Venkat N. Gudivada
This publication is copyright under the Berne Convention and the Universal Copyright
Convention. All rights reserved. Apart from any fair dealing for the purposes of research
or private study, or criticism or review, as permitted under the Copyright, Designs and
Patents Act 1988, this publication may be reproduced, stored or transmitted, in any
form or by any means, only with the prior permission in writing of the publishers, or in
the case of reprographic reproduction in accordance with the terms of licences issued
by the Copyright Licensing Agency. Enquiries concerning reproduction outside those
terms should be sent to the publisher at the undermentioned address:
While the authors and publisher believe that the information and guidance given in this
work are correct, all parties must rely upon their own skill and judgement when making
use of them. Neither the author nor publisher assumes any liability to anyone for any
loss or damage caused by any error or omission in the work, whether such an error or
omission is the result of negligence or any other cause. Any and all such liability is
disclaimed.
The moral rights of the author to be identified as author of this work have been
asserted by him in accordance with the Copyright, Designs and Patents Act 1988.
10 Conclusion 251
Aakansha Garg, Varuna Gupta, Vandana Mehndiratta, Yash Thakur
and Dolly Sharma
10.1 Concluding remarks 251
10.2 Challenges and potentials of bio-inspired optimization
algorithms for IoT applications 255
10.2.1 Challenges 255
10.2.2 Potentials 256
10.3 Challenges and opportunities of bio-inspired optimization
algorithms for biomedical applications 256
10.4 Recent trends in smart cities planning based on nature-inspired
computing 257
10.5 Future perspectives of nature-inspired computing 258
10.6 Bio-inspired heuristic algorithms 259
10.7 Probable future directions 260
References 261
Index 267
About the editors
The objective of this book series is to enlighten researchers with the current trends
in the field of blockchain technology and its integration with distributed computing
and systems. Blockchains show riveting features such as decentralization, verifia-
bility, fault tolerance, transparency, and accountability. The technology enables
more secure, transparent, and traceable scenarios to assist in distributed applica-
tions. We plan to bring forward the technological aspects of blockchain technology
for Internet of Things (IoTs), cloud computing, edge computing, fog computing,
wireless sensor networks (WSNs), peer-to-peer (P2P) networks, mobile edge
computing, and other distributed network paradigms for real-world enhanced and
evolving future applications of distributed computing and systems.
We are honoured to write the foreword of this first book in our book series.
This valuable contribution to the field of blockchain technology and social media
computing will serve as a useful reference for those seeking to deepen their
understanding of this rapidly evolving field and inspire further research and inno-
vation. The authors have provided insightful and thought-provoking perspectives
on the current state and future potential of blockchain technology in social media
computing, and we are confident that this book will be a valuable resource for years
to come.
As the world becomes more interconnected and reliant on technology, social
media has emerged as a ubiquitous tool for communication, entertainment, and
commerce. However, the centralized nature of many social media platforms has led
to concerns about data privacy, security, and content moderation. With the rise of
fake news, privacy breaches, and centralized control, the need for a more secure
and decentralized social media platform has never been greater. In recent years,
blockchain technology has emerged as a potential solution to many of these chal-
lenges, offering a decentralized and secure platform for social media computing.
The book “Blockchain Technology for Secure Social Media Computing”
provides a comprehensive overview of blockchain technology’s use in social media
computing. With contributions from experts in the field, the book covers a wide
range of topics, from the basics of blockchain technology to its applications in
social media computing and emerging trends in the field. In addition, this book
provides a comprehensive overview of the latest technologies, and applications of
blockchain in online social media. From blockchain-based security for social media
computing to emerging trends in social networking design, models, and handling,
the authors cover a broad range of topics to address the challenges and opportu-
nities of blockchain technology in the social media sphere.
The chapters are written by leading researchers and academics who share their
experience and knowledge to provide readers with a practical understanding of how
blockchain technology can be leveraged to build a more secure, transparent, and
decentralized social media platform. It is an essential guide for researchers, prac-
titioners, and students who are interested in learning more about blockchain tech-
nology and its potential to transform social media computing.
Dr. Brij B. Gupta, Director, CCRI & Professor, CSIE, Asia University, Taiwan
1.1 Introduction
Today in every field, there is a desire to get maximum profit with the least
investment. Efficiency and utilization with a minimum investment are key
requirements. And here comes the idea of optimization. Optimization is a process
of choosing the best available options from given alternatives which, as a result,
gives us the best solution. For example, in the design of a bridge, civil engineers
must make many decisions in various stages of construction. So, optimization is
nothing but making the best feasible decision. The goal of such decisions is either to maximize profit or to minimize effort, which makes optimization a crucial tool for analyzing systems. Maximization and minimization are the two categories of optimization problems. Optimization methods are applied to practical problems in virtually every field, and with the advancement of computing techniques, optimization has become an important part of problem solving. Many optimization techniques have been proposed in recent years; however, no single method is suitable for all optimization problems, so the most appropriate method is selected based on the specific problem.
1.2 Optimization
Optimization is finding the condition that maximizes or minimizes the value of a
function. In mathematical terms, an optimization problem is a problem of finding
the best solution from a given set of candidate solutions. Optimization problems can be classified according to several factors, such as the existence of constraints, the nature of the variables, and the structure of the problem. Operations research is a field of mathematics that deals with the
1 Machine Vision and Intelligence Lab, National Institute of Technology Jamshedpur, India
2 Ryazan State Radio Engineering University, Russia
3 Leeds Beckett University, UK
parameters and constants of the system. And the second method is by optimizing all
variables or selected variables of the system.
There are several methods for solving optimization problems. For example, the hill climbing method starts from an initial solution and then repeatedly moves to an improving neighbouring solution. Because each move depends only on information about nearby points, the method converges to a local optimum, which is not necessarily the global one.
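The steps above can be sketched in a few lines of Python. This is a minimal, illustrative implementation; the objective function, step size, and iteration budget are assumptions chosen for the example, not taken from the text.

```python
import random

def hill_climb(f, x0, step=0.1, max_iters=2000):
    """Maximize f by repeatedly moving to a randomly chosen better neighbour."""
    x, fx = x0, f(x0)
    for _ in range(max_iters):
        neighbour = x + random.uniform(-step, step)  # nearby candidate point
        fn = f(neighbour)
        if fn > fx:                  # accept only improving moves
            x, fx = neighbour, fn
    return x, fx

# Illustrative objective with a single peak at x = 3.
random.seed(1)
f = lambda x: 9 - (x - 3) ** 2
best_x, best_f = hill_climb(f, x0=0.0)
```

On a function with several peaks, the same loop stalls on whichever local optimum is nearest to the starting point, which is exactly the limitation the text attributes to methods that rely only on nearby points.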
Every day, countless activities take place around the world, and each can be viewed as a system (whether a theoretical or a physical entity). The performance of such a system depends on various indices of the system: the system operates efficiently only when those indices do, which in turn requires optimizing them. When a system is represented as a mathematical model, algebraic variables are used to represent these indices; the variables depend on many factors [2]. Optimization techniques are then applied to the model to find the values of those variables that maximize or minimize the system's objective: in the case of maximization, the profit or gain of the system; in the case of minimization, its loss or waste.
Optimization is needed in many fields: in electrical engineering, to minimize total harmonic distortion in multilevel inverters; in civil engineering, optimization techniques are used at every step of a project life cycle; in mechanical engineering, optimization is used for mechanical design; and in computer science, optimization is applied in many areas [3].
Let us take an example to better understand the importance of optimization. Cloud computing has become the dominant computing model for a variety of applications: it allows access to shared resources immediately upon a client's request and releases them with little administration. In cloud computing, all applications run on a virtual platform and all resources are distributed among virtual machines [4]. Because of such user-friendly applications, the number of clients using cloud services has increased drastically. The main goal of a cloud service provider is to create the illusion that it has an infinite pool of resources, which in reality it does not. Sustaining this illusion is possible only with efficient scheduling: a scheduling algorithm that utilizes the provider's resources optimally leads directly to better performance by the service provider. A good task scheduler should adapt its scheduling strategy to the changing environment and the type of tasks. Therefore, dynamic task scheduling algorithms such as ant colony optimization and particle swarm optimization are appropriate for cloud environments.
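To make the scheduling discussion concrete, here is a small greedy baseline of the kind that metaheuristics such as ACO or PSO aim to beat: each task goes to the currently least-loaded machine. The task times and machine count are made-up illustrative values, not from the chapter.

```python
import heapq

def greedy_schedule(task_times, n_machines):
    """Assign each task to the least-loaded machine; return assignments and makespan."""
    # Min-heap of (current_load, machine_id).
    loads = [(0.0, m) for m in range(n_machines)]
    heapq.heapify(loads)
    assignments = []
    for t in task_times:
        load, m = heapq.heappop(loads)    # machine with the smallest load
        assignments.append(m)
        heapq.heappush(loads, (load + t, m))
    makespan = max(load for load, _ in loads)
    return assignments, makespan

tasks = [4, 2, 7, 1, 5, 3]
assign, makespan = greedy_schedule(tasks, n_machines=2)
```

For these tasks the greedy rule yields a makespan of 12, while the optimal split ({7, 4} versus {5, 3, 2, 1}) achieves 11, which illustrates why a smarter search over schedules can pay off.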
bacteria foraging algorithm, and, in 2004, the honey bee algorithm was developed by S. Nakrani and C. Tovey. Building on this line of work, the bees algorithm was proposed by D.T. Pham et al. in 2005, and the artificial bee colony was developed by D. Karaboga. An efficient cuckoo search algorithm was proposed by Yang and Deb during 2009–2010 [6]. Since then, many more metaheuristic algorithms have been developed for optimization.
Some problems are very hard to solve exactly; these are the NP-hard problems. For such problems, conventional methods either fail outright or become too time consuming, so researchers use approximate methods to find approximate solutions. Both heuristics and metaheuristics are approximate approaches used to solve optimization problems [3]. Simply put, a heuristic is a method that uses local, problem-specific information to find a solution to a given problem, whereas a metaheuristic uses global, problem-independent strategies. Beyond this, the two kinds of optimization techniques differ from each other in several other ways.
Heuristic methods are typically deterministic, whereas metaheuristic methods extend the heuristic approach with randomization. Most heuristics are algorithmic or iterative; most metaheuristics are nature-inspired and also iterative. In both cases the solution is inexact: a near-optimal solution is returned. Metaheuristics are also called guided random search techniques, as they perform a randomized search to explore the entire search space. This is an advantage over heuristic methods, because it helps metaheuristics avoid getting trapped in local optima and drives the search toward the global optimum. A metaheuristic needs a specified starting point for the search process, as well as a stopping condition to terminate it. Both kinds of methods are generally easy to implement and can usually be integrated efficiently.
results than the original methods [8]. Many researchers have used these metaheuristic optimization algorithms, with modifications, to find more efficient solutions in their work. For example, the ant colony optimization algorithm is a metaheuristic that can handle linear, non-linear, and mixed-integer optimization problems [9]. Every metaheuristic algorithm must strike a balance between two competing factors: identifying high-quality regions of the search space where better results may be found, and not wasting time in regions that have already been explored or that show no promise.
Metaheuristic optimization methods can mainly be classified into two parts:
one is single-solution-based metaheuristics and the other is population-based
metaheuristics. Single-solution-based algorithms are also called trajectory-based
algorithms. Those algorithms work on a single solution at any point in time, while
population-based metaheuristic algorithms work on the whole population or a
whole set of solutions at a time. Metaheuristic algorithms are mainly inspired by nature: evolutionary algorithms are inspired by biology; swarm intelligence methods such as particle swarm optimization and ant colony optimization are inspired by bird flocks and ant or bee colonies; and the simulated annealing algorithm is inspired by physics. Metaheuristic optimization algorithms can also be classified as deterministic or stochastic. A deterministic optimization algorithm follows a deterministic decision approach to solve a given optimization problem (e.g., tabu search and local search), while a stochastic optimization algorithm follows randomized rules (e.g., simulated annealing and evolutionary algorithms). There are further criteria by which metaheuristic optimization algorithms can be classified; here we have discussed only a few.
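Simulated annealing, mentioned above as the classic stochastic metaheuristic, can be sketched compactly. The objective function, step distribution, and cooling schedule below are illustrative assumptions; the defining feature is that worse moves are sometimes accepted, with a probability that shrinks as the temperature cools.

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.995, iters=5000):
    """Minimize f; uphill moves are accepted with probability exp(-delta/T)."""
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    T = t0
    for _ in range(iters):
        cand = x + random.gauss(0, 0.5)        # random neighbour
        delta = f(cand) - fx
        if delta < 0 or random.random() < math.exp(-delta / T):
            x, fx = cand, fx + delta           # accept the move
            if fx < best_f:
                best_x, best_f = x, fx
        T = max(T * cooling, 1e-12)            # geometric cooling
    return best_x, best_f

# Multimodal test function: global minimum 0 at x = 0, local minima at integers.
random.seed(0)
f = lambda x: x * x + 10 - 10 * math.cos(2 * math.pi * x)
best_x, best_f = simulated_annealing(f, x0=4.0)
```

The early high-temperature phase lets the search hop out of the local basin around the starting point, which is precisely the escape-from-local-optima behaviour that a purely deterministic local search lacks.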
While heuristic methods may offer a quick fix to specific planning or scheduling challenges, they do not always deliver the best possible results. Moreover, heuristic approaches often lack the flexibility to create lasting, optimal solutions that enhance productivity and profitability.
This section provides an overview of various heuristic optimization techni-
ques, including fundamental constructive algorithms such as evolutionary pro-
gramming and greedy strategies and local search algorithms like hill climbing.
to find, generate, tune, or select an algorithm that can provide a satisfactory solu-
tion to optimization problems or machine learning problems. Metaheuristics
explore a subset of solutions that would otherwise be too vast to be completely
enumerated or explored. In the realm of computer science and mathematics, a
metaheuristic is a heuristic or procedure that operates on a higher level than a
partial search algorithm. These methods do not rely on many assumptions about the
problem at hand, which makes them applicable to various problems. However,
unlike iterative methods and optimization algorithms, metaheuristics cannot guar-
antee that a globally optimal solution can be found for some problems. Although
most literature on metaheuristics is experimental in nature, providing empirical
results from computer experiments with the algorithms, some formal theoretical
results are also available, typically concerning convergence and the potential to
find the global optimum.
The field of metaheuristics has produced many algorithms that claim novelty
and practical efficacy. While high-quality research can be found in the field, many
publications suffer from flaws such as vagueness, lack of conceptual elaboration,
poor experiments, and disregard for previous literature. Metaheuristic algorithms
belong to the category of computational intelligence paradigms, mainly used for
solving complex optimization problems. This section aims to provide an overview
of metaheuristic optimization algorithms. There are two main categories of meta-
heuristic algorithms: trajectory-based and population-based.
The above process continues until convergence, i.e., until the population becomes very similar to its previous state. The converged population is used as the solution to the given problem. Despite criticisms of evolutionary theory, GAs can provide good enough solutions for complex problems, including NP-complete problems such as knapsack, bin packing, and graph coloring.
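The knapsack case mentioned above can be sketched with a toy GA. The population size, mutation rate, tournament size, and zero-fitness penalty for infeasible chromosomes are illustrative choices, not prescriptions from the text.

```python
import random

def ga_knapsack(values, weights, capacity, pop_size=40, gens=100):
    """Toy genetic algorithm for 0/1 knapsack: bitstring chromosomes,
    tournament selection, one-point crossover, bit-flip mutation."""
    n = len(values)

    def fitness(chrom):
        total_w = sum(w for w, bit in zip(weights, chrom) if bit)
        if total_w > capacity:               # infeasible: zero fitness
            return 0
        return sum(v for v, bit in zip(values, chrom) if bit)

    pop = [[random.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    best, best_f = max(((c, fitness(c)) for c in pop), key=lambda p: p[1])
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = max(random.sample(pop, 3), key=fitness)   # tournament selection
            p2 = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, n)                    # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n):                              # bit-flip mutation
                if random.random() < 1.0 / n:
                    child[i] ^= 1
            f = fitness(child)
            if f > best_f:                                  # track best-ever
                best, best_f = child, f
            nxt.append(child)
        pop = nxt                                           # replace population
    return best, best_f

random.seed(0)
values, weights = [60, 100, 120], [10, 20, 30]
best, best_value = ga_knapsack(values, weights, capacity=50)
```

For this tiny instance the best feasible selection is the second and third items (value 220 at weight 50), which the GA finds easily; the point of the sketch is the selection–crossover–mutation loop, not the instance itself.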
modify the position of their food source and discover new positions. They keep the
new position if their nectar amount is higher than the previous one. Onlooker bees
evaluate the nectar information from all employed bees and choose a food source to
modify. If the new nectar amount is higher, the onlooker keeps the new position.
The abandoned sources are replaced with new sources by artificial scouts.
The ABC algorithm is a powerful optimization technique that has been suc-
cessfully used in various domains, such as image processing, classification, and
feature selection.
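The employed/onlooker/scout cycle described above can be sketched as follows. This is a simplified one-dimensional version; the objective function, colony size, trial limit, and onlooker rule (improving the better half of sources) are illustrative assumptions rather than the canonical ABC formulation.

```python
import random

def abc_minimize(f, bounds, n_sources=10, limit=20, iters=200):
    """Simplified artificial bee colony for 1-D minimization."""
    lo, hi = bounds
    sources = [random.uniform(lo, hi) for _ in range(n_sources)]
    trials = [0] * n_sources

    def try_improve(i):
        # Perturb source i relative to a random partner source k.
        k = random.choice([j for j in range(n_sources) if j != i])
        phi = random.uniform(-1, 1)
        cand = min(hi, max(lo, sources[i] + phi * (sources[i] - sources[k])))
        if f(cand) < f(sources[i]):          # greedy replacement
            sources[i], trials[i] = cand, 0
        else:
            trials[i] += 1                   # count failed attempts

    for _ in range(iters):
        for i in range(n_sources):           # employed bee phase
            try_improve(i)
        costs = [f(s) for s in sources]      # onlooker phase: favour good sources
        for i in sorted(range(n_sources), key=lambda j: costs[j])[:n_sources // 2]:
            try_improve(i)
        for i in range(n_sources):           # scout phase
            if trials[i] > limit:            # abandon an exhausted source
                sources[i], trials[i] = random.uniform(lo, hi), 0
    return min(sources, key=f)

random.seed(0)
best = abc_minimize(lambda x: (x - 2) ** 2, bounds=(-10, 10))
```

The scout phase is what replaces abandoned food sources with fresh random ones, mirroring the role the text assigns to artificial scouts.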
References
[1] https://en.wikipedia.org/wiki/Mathematical_optimization#Classification_of_critical_points_and_extrema
[2] Foulds, L.R. Optimization Techniques: An Introduction, New York, Heidelberg, Berlin: Springer-Verlag, 1981.
[3] Chopard, B. and M. Tomassini. An Introduction to Metaheuristics for Optimization, Springer, 2018.
[4] Arunarani, A.R., D. Manjula, and V. Sugumaran. Task scheduling techniques in cloud computing: a literature survey. Future Generation Computer Systems, 91, 407–415, 2019.
[5] Sörensen, K., M. Sevaux, and F. Glover. A history of metaheuristics. arXiv:1704.00853 [cs.AI], 2017.
[6] Yang, X.-S. Metaheuristic optimization. Scholarpedia, 6(8), 11472, 2011.
[7] Eden, M.R., M.G. Ierapetritou, and G.P. Towler (eds.). Proceedings of the 13th International Symposium on Process Systems Engineering – PSE 2018, https://doi.org/10.1016/B978-0-444-64241-7.50129-4
[8] Valdez, F., P. Melin, and O. Castillo. A survey on nature-inspired optimization algorithms with fuzzy logic for dynamic parameter adaptation. Expert Systems with Applications, 41, 6459–6466, 2014, https://doi.org/10.1016/j.eswa.2014.04.015
[9] Bozorg-Haddad, O., M. Solgi, and H.A. Loáiciga. Meta-Heuristic and Evolutionary Algorithms for Engineering Optimization, Wiley, 2017.
[10] Zanakis, S.H. and J.R. Evans. Heuristic "optimization": why, when, and how to use it. Interfaces, 11(5), 84–91, 1981.
[11] Lee, K.Y. and M.A. El-Sharkawi (eds.). Modern Heuristic Optimization Techniques: Theory and Applications to Power Systems, vol. 39, New York, NY: John Wiley & Sons, 2008.
[12] Framinan, J.M., J.N.D. Gupta, and R. Leisten. A review and classification of heuristics for permutation flow-shop scheduling with makespan objective. Journal of the Operational Research Society, 55(12), 1243–1255, 2004.
Introduction to various optimization techniques 13
1 Delhi Technical Campus, India
2 Western Norway University of Applied Sciences, Norway
3 Guru Gobind Singh Indraprastha University, India
4 National Institute of Technology – Patna, India
The optimization algorithms that imitate the behavior of natural and biological
systems are referred to as nature-inspired algorithms. These algorithms are also known
as intelligent optimization algorithms or nature-inspired metaheuristic algorithms.
Heuristic algorithms generate innovative solutions through trial and error, but meta-
heuristic algorithms employ memory and other forms of knowledge and strategy. These
algorithms are well-known, efficient methods for handling a variety of challenging
optimization issues [2]. Consequently, the vast majority of natural algorithms are bio-inspired; over a hundred distinct algorithms and variants exist today. Algorithms can be classified at many levels to highlight the source of inspiration; for convenience, we group them by the most prominent sources, such as biology, physics, and chemistry [3].
Some of the most significant nature-inspired algorithms are: genetic algorithms (GA) [7,8] and differential evolution [9], based on evolution; particle swarm optimization (PSO) [10,11], based on flocks of birds moving in search of food; ant colony optimization (ACO) [12], based on the intelligent behavior of ants; the firefly algorithm (FFA) [13], based on the intelligent behavior of fireflies; the cuckoo search algorithm (CSA) [14], inspired by the search intelligence of the cuckoo; harmony search [15], based on a musician's pursuit of the ideal harmony; the artificial bee colony (ABC) [16], based on the foraging intelligence of bees; chicken swarm optimization [17], based on the foraging actions of a chicken swarm partitioned into several subgroups; the glowworm algorithm (GWA) [18], which mimics the behavior of glow worms; the bat algorithm (BA) [19], based on the behavior of bats; the flower pollination algorithm (FPA) [20], based on the pollination process of flowering plants; the water cycle algorithm (WCA) [21], inspired by nature's processes of evaporation, condensation, and precipitation; the mine blast algorithm [22], based on the mine bomb explosion concept; and water wave optimization [23], inspired by shallow water wave theory. These algorithms are widely used to find optimal solutions. Initially, the search space is explored through a randomly initialized population; in every round, the present population is replaced by a newly created one. Table 2.1 reflects the chronological timeline of various nature-inspired algorithms.
S. no.  Component                                      Role
1       Population                                     Contains candidate samples; performs both local and global searches
2       Randomness                                     Adjusts the balance between global and local search; also helps escape local minima
3       Selection                                      A vital constituent for convergence
4       Mutation, crossover, and algorithmic formulas  Create new solutions and assess them iteratively
[Figure: classification of nature-inspired algorithms into swarm intelligence (SI)-based, bio-inspired but not SI-based, and other algorithms]
multipurpose; (b) they are very competent in solving nonlinear design problems. This section discusses the mathematical context of the algorithms, without covering their mathematical proofs, only to the extent needed to help the reader comprehend each optimization algorithm. Techniques that have been explored to improve these algorithms for better results are also noted.
Swarm intelligence refers to the collective, emergent behavior of many cooperating agents that follow simple rules. Inspired by swarm behavior in nature, various swarm algorithms have been developed over the last decades. Swarm agents are the core concept of a swarm algorithm. Although every individual agent may be regarded as unintelligent, the overall interaction of many agents can display self-organizing patterns and act as a kind of collective intelligence. Self-organization and distributed control are striking features of swarm systems in nature, leading to emergent behavior: behavior that arises from local communication among the agents of the system and is unlikely to be achieved by any single agent acting alone.
The collective behavior of natural agents, such as particles, flower pollen, bees, and fish, has been used to design multi-agent systems. Since 1991, swarm-based algorithms have been proposed by adopting the distinctive swarm behaviors of ants, particles, flies, bees, cats, and so on. These algorithms are formulated by mimicking the intelligent behavior of organisms such as fish, fireflies, ants, bees, birds, and bats, and a large number of algorithms fall into this group. Such algorithms are mostly population based, stimulated by the cooperative behavior of social insects and animal societies; examples include PSO, FFA, and CSA [10–19]. This section explores these multi-agent metaheuristics, focusing on swarm-based algorithms only. The standard PSO [11] adopts the collective behavior of birds, the FFA [13] focuses on the flashing behavior of fireflies, and BA [19] uses the echolocation of foraging bats. Swarm algorithms are the most popular and widely used algorithms for various optimization problems, for several reasons: first, swarm algorithms share information between multiple agents; second, multiple agents can be parallelized naturally, so complex optimization becomes more practical from an implementation point of view. Several well-known
swarm algorithms are explained in this subsection, with a brief mathematical background and a list of well-known variants of each. These algorithms have produced better results for most problems and hence attracted the attention of investigators, while some other algorithms have had their standing questioned due to insignificant results or low visibility to the majority of authors. A few of them are elaborated in the next section. Most of the researched swarm algorithms are listed below regardless of their recognition. Twenty-five algorithms are listed in Table 2.3 and shown in Figure 2.4, giving the name of the founder, the year of development, the behavior pattern of the swarm, and the number of citations from Google Scholar. Note, however, that a higher number of citations does not directly translate to a better capability of the algorithm.
[Figure 2.4: number of Google Scholar citations for well-known swarm algorithms]
2.8.1.1 PSO
This nature-inspired algorithm is based on swarm intelligence: observing the behavior of a flock of birds seeking food led to the formulation of PSO [11]. The primary objective of the method is to determine the particle locations that yield the optimal value of the specified cost function. The PSO algorithm is based on the observation of a group of birds, in which a bird that has located food guides the others to the food source [24].
The initialization is random, and multiple iterations are then performed, with each particle's position and velocity updated at the end of every iteration [25].
Algorithm for PSO
Vid(t+1) = W·Vid(t) + L1·r1·(Pbestid − Xid(t)) + L2·r2·(Gbest − Xid(t))
Xid(t+1) = Xid(t) + Vid(t+1)
where V and X represent the velocity and position of the particle, respectively; t
represents time; L1 and L2 stand for learning factors or acceleration coefficients;
r1 and r2 are arbitrary values between 0 and 1; W is the inertia weight, such that
0 < W < 1; and Pbestid and Gbest are the particle's best and the global best positions.
The procedure of updating V and X is repeated until a satisfactory value of the
global best position (Gbest) is reached. The particle then uses the equations below
to recalculate Pbestid and Gbest based on the cost function:
Pbestid = Pid, if the cost function of Pi < the cost function of Pbesti; otherwise Pbestid is unchanged
Gbest = Pid, if the cost function of Pi < the cost function of Gbest; otherwise Gbest is unchanged
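The update rules above can be sketched as a minimal Python implementation (the search bounds, parameter values, and the sphere test function are illustrative assumptions, not taken from the text):

```python
import random

def pso(cost, dim, n_particles=20, iters=100, w=0.7, l1=1.5, l2=1.5):
    """Minimal PSO minimizing `cost` over the box [-5, 5]^dim."""
    X = [[random.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # personal best positions
    gbest = min(pbest, key=cost)[:]           # global best position
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # V(t+1) = W*V(t) + L1*r1*(Pbest - X) + L2*r2*(Gbest - X)
                V[i][d] = (w * V[i][d]
                           + l1 * r1 * (pbest[i][d] - X[i][d])
                           + l2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]            # X(t+1) = X(t) + V(t+1)
            if cost(X[i]) < cost(pbest[i]):   # update the personal best
                pbest[i] = X[i][:]
                if cost(pbest[i]) < cost(gbest):  # and the global best
                    gbest = pbest[i][:]
    return gbest
```

For example, `pso(lambda x: sum(v * v for v in x), dim=3)` minimizes the sphere function.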
5. The closest (x,y) coordinates are used to re-map the new locations.
Repeat steps 2–5 until the maximum number of iterations has been reached.
Xm,n(t+1) = Xm,n(t) · (1 + randn(0, σ²))
where k ∈ [1, number of roosters], k ≠ m, and Xm,n denotes the position of
rooster m in the nth dimension at iterations t and t + 1; randn(0, σ²) is a Gaussian
random variable with mean 0 and variance σ², where σ² = 1 if fm ≤ fk and
σ² = exp((fk − fm)/(|fm| + ε)) otherwise; fm is the fitness value of the corresponding
rooster m; and ε is a low-value constant.
Hens' position update:
Xm,n(t+1) = Xm,n(t) + K1·rand·(Xp1,n(t) − Xm,n(t)) + K2·rand·(Xp2,n(t) − Xm,n(t))
where
K1 = exp((fm − fp1)/(|fm| + ε))
K2 = exp(fp2 − fm)
p1 is the index of a rooster, p2 is the index of a chicken from the group (which can
be a hen or a rooster), and rand is a uniform random number between 0 and 1.
Chicks' position update:
Xm,n(t+1) = Xm,n(t) + AB·(Xq,n(t) − Xm,n(t))
where Xq,n denotes the position of the chick's mother and AB is an arbitrary number
between 0 and 2, used to obtain the optimum position.
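The rooster and chick moves can be sketched in Python (a minimal illustration of the update equations above; the function names and default values are assumptions):

```python
import math
import random

def rooster_update(x_m, f_m, f_k, eps=1e-9):
    """Move rooster m: X(t+1) = X(t) * (1 + randn(0, sigma^2)), where
    sigma^2 = 1 if f_m <= f_k, else exp((f_k - f_m) / (|f_m| + eps))."""
    sigma2 = 1.0 if f_m <= f_k else math.exp((f_k - f_m) / (abs(f_m) + eps))
    return [v * (1.0 + random.gauss(0.0, math.sqrt(sigma2))) for v in x_m]

def chick_update(x_m, x_mother, ab=None):
    """Move chick m toward its mother: X(t+1) = X(t) + AB * (Xq(t) - X(t)),
    with AB drawn from (0, 2) when not supplied."""
    if ab is None:
        ab = random.uniform(0.0, 2.0)
    return [v + ab * (q - v) for v, q in zip(x_m, x_mother)]
```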
2.8.1.4 ACO
Dorigo first suggested ACO in his PhD thesis in 1992 [12]. The swarming behavior
modeled by ACO is that of ant colonies, and it provides a sufficient solution
to engineering optimization problems. Ants typically leave behind pheromone trails
that lead one another to food sources as they scout the area. The simulated,
computer-generated "ants" likewise keep track of their locations and the qualities of
their solutions, enabling additional ants in later simulation cycles to find better
solutions. Artificial "ants" navigate within a constrained space representing
all potential solutions in order to find the best possible one [30]. The ACO
pseudo-code calls for two calculation steps: constructing ant solutions and
updating pheromones.
Ant solutions are created as shown in the equation below.
To understand the mathematics of ACO, consider the probability p_ab^k of ant k
moving from state a to state b:
p_ab^k = (Mab^α · Nab^β) / Σ_{c∈Z} (Mac^α · Nac^β)
where α and β are control parameters, Nab is the desirability of the transition,
and Z is the set of allowed states.
Update pheromones:
Mab(new) = E·Mab(old) + Σ_{k=1}^{j} ΔMab^k
where Mab is the amount of pheromone deposited for a state transition ab, E is the
pheromone evaporation coefficient, and ΔMab^k is the amount of pheromone
deposited by the kth ant.
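The two calculation steps can be sketched in Python (an illustrative fragment; the dictionary-based state representation and the default alpha and beta are assumptions):

```python
def transition_probs(pher, heur, allowed, alpha=1.0, beta=2.0):
    """p_ab = (M_ab^alpha * N_ab^beta) / sum over allowed states c.
    `pher` maps each state to its pheromone level M, `heur` to its desirability N."""
    weights = {b: (pher[b] ** alpha) * (heur[b] ** beta) for b in allowed}
    total = sum(weights.values())
    return {b: w / total for b, w in weights.items()}

def update_pheromone(m_old, deposits, evap=0.5):
    """M_ab(new) = E * M_ab(old) + sum_k dM_ab^k, with E the evaporation coefficient."""
    return evap * m_old + sum(deposits)
```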
Nature-inspired optimization algorithm 27
2.8.1.5 FFA
The flashing pattern of tropical fireflies serves as the basis for the FFA. It was created
in 2007 by Xin-She Yang [13]. The algorithm applies three fundamental rules. First,
because fireflies have no sexual preference, they are all attracted to each other.
Second, as the distance between fireflies grows, the attraction, which is
proportional to the luminosity, decreases; the less bright firefly will therefore move
toward the more brightly lit one for any pair of flashing fireflies. Last but not
least, a firefly's brightness is determined by the landscape of the objective function
and is correlated with the cost of the objective function [31–33].
for i = 1 to k
{
    for j = 1 to k
    {
        if (Ij > Ii)
        {
            Move firefly i towards j using the position-update equation
        }
    }
}
Attractiveness changes as a function of the distance w via exp(−γw²)
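The movement rule can be sketched in Python (an illustrative fragment; beta0, gamma, and the random-walk coefficient alpha are assumed default values):

```python
import math
import random

def move_firefly(xi, xj, beta0=1.0, gamma=1.0, alpha=0.2):
    """Move firefly i toward a brighter firefly j. Attractiveness decays with
    the distance w as beta = beta0 * exp(-gamma * w^2); a small random walk is added."""
    w2 = sum((a - b) ** 2 for a, b in zip(xi, xj))   # squared distance w^2
    beta = beta0 * math.exp(-gamma * w2)
    return [a + beta * (b - a) + alpha * (random.random() - 0.5)
            for a, b in zip(xi, xj)]
```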
2.8.1.6 CSA
This method is based on the brood-parasitic breeding behavior of cuckoos; these
natural processes are ingrained in the algorithm [14].
In this algorithm, the following presumptions are made:
Assumption 1: One egg is laid by a cuckoo at a time, and it is dropped into a
different nest each time.
Assumption 2: The nests that produce the highest-caliber eggs are passed
down to succeeding generations.
Assumption 3: The number of host nests is fixed, and each nest holds only one egg.
Assumption 4: There is a probability pa that the host bird will discover a foreign egg.
A cuckoo egg stands for a fresh solution, and each nest represents a solution.
The algorithm's goal is to replace poorer solutions with novel, possibly superior ones [34,35].
Pseudo code of CSA:
Get a cuckoo randomly generated via Lévy flights as a solution, and evaluate its
fitness function (i.e., Fi).
Choose a nest j at random; if Fi > Fj, replace j with the new solution.
Abandon the worst nests with probability pa and generate new ones.
Maintain the best nests (g*).
Output the best results g* and x(t+1).
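These steps can be sketched as a compact Python routine (illustrative only: the Lévy-flight scale 0.01, Mantegna's method for the step length, and the search bounds are assumptions, and the new egg is generated from the current best nest, a common variant):

```python
import math
import random

def levy_step(lam=1.5):
    """Levy-distributed step length via Mantegna's algorithm."""
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    return random.gauss(0.0, sigma) / abs(random.gauss(0.0, 1.0)) ** (1 / lam)

def cuckoo_search(cost, dim, n_nests=15, iters=300, pa=0.25, lo=-5.0, hi=5.0):
    """Minimize `cost`; each nest is a candidate solution, each egg a new one."""
    nests = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_nests)]
    for _ in range(iters):
        base = min(nests, key=cost)              # lay a new egg via a Levy flight
        new = [x + 0.01 * levy_step() for x in base]
        j = random.randrange(n_nests)            # compare against a random nest j
        if cost(new) < cost(nests[j]):           # i.e., Fi better than Fj
            nests[j] = new
        nests.sort(key=cost)                     # abandon a fraction pa of the worst
        for k in range(int(n_nests * (1 - pa)), n_nests):
            nests[k] = [random.uniform(lo, hi) for _ in range(dim)]
    return min(nests, key=cost)
```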
No. | Algorithm | Author(s) | Year | Swarm behavior
8 | Chicken swarm algorithm | Xianbing Meng et al. [17] | 2014 | Divides the hens into several subgroups and observes the foraging behavior of the swarm
9 | Glow worm swarm optimization (GWSO) | Krishnanand and Ghose [18] | 2005 | Mimics the behavior of glow worms
10 | Wasp swarm algorithm (WSA) | Pinto [68] | 2007 | Wasp colonies establish a hierarchy among themselves through communication among the individuals
11 | Lion optimization algorithm (LOA) | Wang et al. [69] | 2012 | Lion prides promote the principle of group living and evolution
12 | Spider monkey algorithm | Bansal et al. [70] | 2014 | Simulates spider monkeys' foraging behavior
13 | Whale optimization algorithm (WOA) | Mirjalili and Lewis [71] | 2016 | Mimics humpback whales' social behavior
14 | Wolf pack algorithm | Hu-Sheng et al. [72] | 2014 | To find prey on the Tibetan Plateau, the wolf pack joins forces and collaborates closely
Figure: Application areas of nature-inspired optimization algorithms, including clustering, optimal coverage, and routing
Tillet et al. [44] have provided the first study based on the application of
evolutionary algorithms on cluster head selection. PSO has been used for cluster
head (CH) detection.
It is an evolutionary technique in which candidate solutions interact and cooperate
to find the optimal answer to the problem at hand, much like a natural swarm of birds.
The wireless sensor network (WSN) algorithm can be broken
down into three logical parts. The clusters are produced in the first section. Each
cluster’s cluster heads are chosen in the second section. The third section assigns
cluster heads to each of the network’s nodes. Each phase in this situation uses PSO. A
succinct overview of the use of PSO in WSN to handle a variety of difficulties,
including optimal deployment, node localization, clustering, and data aggregation, was
presented by Kulkarni et al. [45]. An energy-efficient CH selection approach based on
PSO has been presented by Srinivasa Rao et al. [46], employing an effective particle
model and fitness function. They have taken into account a number of factors,
including residual energy, intra-cluster distance, and sink distance, for the energy
efficiency of the suggested approach. The technique was tested using several WSN
scenarios, sensor node counts, and CHs. By utilizing the PSO-based methodology
within the cluster as opposed to the base station, Buddha Singh et al. [47] had trans-
formed it into a semi-distributed method. Their suggested method’s main goal is to
locate the head nodes close to the cluster density’s center. On the basis of the optimal
CH position, they have also calculated the estimated distance travelled by packet
transmission from node to BS. Additionally, they have computed the anticipated
number of retransmissions along the path to CH and examined the impact of link
failure. Last but not least, power computation is performed to assess the energy
savings, and the results are compared with those of LEACH-C and related procedures.
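A cost function of the kind described (rewarding residual energy and penalizing intra-cluster and sink distances) can be sketched as follows; the weights and the exact functional form are illustrative assumptions, not taken from the cited works:

```python
def ch_fitness(residual_energy, intra_cluster_dist, sink_dist,
               w1=0.5, w2=0.3, w3=0.2):
    """Lower is better: reward residual energy, penalize the mean distance
    from cluster members and the distance to the sink/base station."""
    return w1 / residual_energy + w2 * intra_cluster_dist + w3 * sink_dist
```

A PSO particle would encode a candidate set of cluster heads and be scored by a sum of such terms.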
Latiff et al. [48] defined a new cost function in their research titled “Energy-
aware clustering for wireless sensor networks using particle swarm optimization.”
The proposed protocol creates clusters evenly distributed over the whole WSN field
by choosing a high-energy node as the CH.
To determine the optimal thresholding probability for cluster generation in
WSNs, Jenn-Long Liu et al. [49] developed a genetic algorithm (GA)-based fully
adaptive clustering procedure. The simulation results show that the most likely
distribution closely reflects the analytical conclusions drawn from the authors' updated
formulas. Given that it uses the optimal election probability in a power-efficient
clustering algorithm, the suggested LEACH-GA method exceeds LEACH in terms of network lifetime.
Complex combinatorial problems can be solved using swarm intelligence techniques
like ACO [50], a cleverly devised approach for resolving challenging
combinatorial issues. To resolve the routing issue in sensor networks, Tiago Camilo,
Carlos Carreto, and others [51] demonstrated the usefulness of the ant colony
optimization metaheuristic. Their energy-efficient ant-based routing (EEABR) protocol
finds paths between the sensor nodes and the sink using "lightweight" ants, and these
paths are optimized in terms of distance and energy levels. The experimental findings
demonstrated that the set of rules produces excellent outcomes in various WSNs.
A novel WSN routing protocol was introduced by Selcuk Okdem et al.
[52]. The protocol offers a potent multi-path data transfer
technique and uses the ACO algorithm to enhance route paths and obtain
dependable connections in the event of node faults. The suggested approach is
contrasted with EEABR, an established ant-based routing protocol.
The results show that their approach significantly reduces the amount of energy
consumed, which is utilized as a performance indicator for different-sized WSNs.
The artificial bee colony (ABC) algorithm was suggested by Karaboga et al.
[53]. Bees are divided into three groups: employed, onlooker, and scout. In the
beginning, employed bees make up half of the colony and onlooker bees the
other half. Employed bees scout out potential food sources in the area and inform
the onlooker bees of new food sources. Based on this knowledge, onlooker bees
search for a fresh source in the vicinity of the known food supplies. A scout bee
is a bee that searches at random. Employed bees switch to a new food source if
the new source's nectar content is greater than that of the previous one. After
conducting all possible searches, onlooker bees select the food source with the
highest probability and move to its location. Until the maximum number of cycles
is reached, the cost function of the new source is compared with the old one, and
the best solution is stored.
In their article, “A comprehensive survey: ABC algorithm and applications,”
Dervis Karaboga et al. [54] gave a thorough assessment of the application of ABC
in numerous disciplines.
PSO with time-varying acceleration constants, hierarchical PSO with time-varying
acceleration constants, PSO with time-varying inertia weight, and PSO
with supervisor-student mode are the four PSO variations that Guru et al. [55]
presented for energy-aware clustering. The four PSO techniques were used to
group sensors into clusters in wireless sensor networks. In the time-varying inertia
weight variant, the inertia weight w decreases linearly from 0.9 in the first
iteration to 0.4 in the last iteration. In the time-varying acceleration variants, the
acceleration constants c1 and c2 change gradually over the iterations, causing the
particles to travel in huge steps at first but then smaller steps in each repetition.
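The schedules described above can be sketched as simple interpolation helpers (the 0.9 to 0.4 inertia endpoints come from the text; the acceleration endpoints are common choices and an assumption here):

```python
def inertia_weight(t, t_max, w_start=0.9, w_end=0.4):
    """Inertia weight decreasing linearly from w_start at iteration 0
    to w_end at the last iteration t_max - 1."""
    return w_start - (w_start - w_end) * t / (t_max - 1)

def acceleration(t, t_max, c_start=2.5, c_end=0.5):
    """Time-varying acceleration coefficient interpolated from c_start to c_end."""
    return c_start + (c_end - c_start) * t / (t_max - 1)
```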
Various authors have applied PSO in various areas of WSN.
● PSO in node deployment
● PSO in node localization
● PSO in energy aware clustering
● PSO in data aggregation
Brief summaries of various well-known nature-inspired algorithms used in clus-
tering routing protocols are shown in Table 2.5.
Table 2.5 Summary of some of the prominent NIOA applied on clustering routing
protocols
individual energies [56]. During the startup phase, the LEACH Protocol is used to
elect CH, and the node that has the highest energy is known as the sea or CH. More
energy-efficient nodes are clustered at the cluster’s epicenter (where the rivers
empty into the sea), while less efficient nodes are located further from the center
(streams farthest from the sea). After some iterations, the energy of each node is
computed, and new positions are decided. Rivers are selected from the nodes with
the highest energy, while less energetic nodes become streams. Just as the
locations of rivers and streams are revised at each energy assessment, a new CH
is selected at regular intervals. The same procedure is carried out repeatedly
till the maximum number of rounds has been reached [57]. In the end, the
WCA reduces the expense of selecting the best node that can serve as the cluster
head in the WSN's hierarchical routing protocol. The acquired findings showed
that the suggested technique outperforms traditional LEACH.
One such procedure that uses water to control the temperature at the joint is
underwater friction stir welding (UFSW) [63]. The welding is done underwater,
where the water may be still or may flow continuously across the surface of the
samples to be welded. Due to the extensive circulation and high heat-capturing
capabilities of the water, the coarsening and dissolving of precipitates are controlled
[60]. Over FSW and FW, UFSW has the capacity to offer better mechanical qualities
and fine-grain structural features. Heat circulation, material movement, and intermix-
ing during UFSW are significantly influenced by the various welding settings as well
as varied joint designs and tool features. This alters the joint’s mechanical character-
istics and produces change in the macro- and microstructural aspects [59–63].
Therefore, choosing the best parameter combination and optimizing it are crucial for
enhancing the mechanical properties of the joints. Sharma et al. [64] investigated
multi-response optimization using the TOPSIS technique for different FSW joints.
They discovered that the rotating speed has a significant impact on micro hardness and
UTS. Palanivel et al. [65] used central composite face-centered factorial during dis-
similar joining of AAs utilizing FSW to establish empirical modeling between the
FSW parameters and the UTS. The literature that is now accessible also shows a dearth
of research into the simulation of the UFSW process for using various evolutionary
optimization techniques. Motivated by these knowledge gaps, the following goals were
systematically designed and accomplished by various authors: (i) to investigate the
impact of three UFSW input parameters on the UTS, E%, and IS of marine grade AA
6082-T6 joints using the Taguchi’s L18 standard orthogonal array and (ii) to compare
the simulation abilities of these optimization algorithms.
In order to achieve better results from the chosen prediction model, the para-
meter values were normalized to a range of (0, 1).
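Min-max scaling is one common way to perform such normalization; the sketch below assumes that scheme, since the exact scheme used is not specified:

```python
def normalize(values):
    """Min-max scale a list of parameter values into the [0, 1] range."""
    vmin, vmax = min(values), max(values)
    return [(v - vmin) / (vmax - vmin) for v in values]
```

For instance, the values 710, 900, and 1,120 map to 0, about 0.46, and 1.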
Five independent experiments were carried out for each set of input parameters to
analyze the data using each optimization procedure. The following sections include the
findings of top three runs (experiments) for selected optimization algorithm.
The values of numerous parameters or constraints that must be established in
each method, such as the number of iterations, population or swarm size, and
number of elements or particles taken into account, are described in Table 2.8. This
table’s values were determined through a series of exploratory trials. Table 2.9 also
S. no. A B C
1 17 710 50
2 17 710 63
3 17 710 80
4 17 900 50
5 17 900 63
6 17 900 80
7 17 1,120 50
8 17 1,120 63
9 17 1,120 80
10 20 710 50
11 20 710 63
12 20 710 80
13 20 900 50
14 20 900 63
15 20 900 80
16 20 1,120 50
17 20 1,120 63
18 20 1,120 80
Table 2.8 Additional parameters that are stated for each algorithm
shows that different parameters were required to simulate UTS, E%, and IS. To
simulate the E% parameter, the model required more parameters than the other
response parameters, particularly for the PSO and FFA algorithms [66,67].
Table 2.10 Experimental and predicted values from PSO algorithm for the UTS
parameter
Table 2.11 Experimental and predicted values from PSO algorithm for the %
elongation parameter
Table 2.12 Experimental and predicted results from PSO algorithm for the IS
parameter
Table 2.13 Experimental and predicted values from FFA for the UTS parameter
Table 2.14 Experimental and predicted values from FFA for the % elongation
parameter
Table 2.15 Experimental and predicted results from FFA for the IS parameter
Table 2.14 shows that the predicted values obtained were quite close to the
experimental results. The entire error range was discovered to be between
0.004% and 11.645%.
The results show that the projected values are very close to the experimental data;
in some cases, the variation is as low as 0.002%. The complete error range was
discovered to be between 0.017% and 4.839%.
The evaluation of prediction error for the various UFSW process parameters
and the different optimization strategies is shown in Tables 2.16–2.18.
The three runs of the PSO- and FFA-based simulations are shown along with the
maximum, minimum, and mean values of the prediction errors for the UTS.
Table 2.16 Maximum and minimum prediction errors for the UTS
Table 2.17 Maximum and minimum prediction errors for the elongation %
Table 2.18 Maximum and minimum prediction errors for the impact strength
Table 2.16 shows that the FFA-based simulation forecasted the data the most
accurately, with an average maximum and minimum error of 0.541% and 0.026%,
respectively. The overall mean prediction error remained as low as 0.236% in this
simulation. Experiment 1 demonstrated the FFA algorithm's best mean error values,
with a mean error percentage of 0.141%. The top-performing PSO experiment had
average errors of 0.377% and 1.142%. The FFA-based simulation achieved an error
value as small as 0.009% over the three trials mentioned.
In this case, the PSO and FFA-based models both performed reasonably well;
however, the FFA-based simulation yielded the lowest mean average prediction
error of 2.816%. Experiment 1 of the FFA technique had the lowest mean error
values, with an average error of 2.35%. The FFA-based prediction model achieved
an error value of 0.004%.
Table 2.18 shows the mean, maximum, and minimum predicted impact strength
errors for tests performed with PSO and FFA. The FFA-based simulation performed
admirably in this case as well, with the prediction error ranging from 0.1% to 3.537%
and the lowest mean average prediction error of 1.469%. Experiment 2 gave the best
results, with an average error percentage of 1.246%. The FFA-based prediction
model achieved an error value of 0.017% across the three experiments shown in
Table 2.12, while PSO produced a minimum error value of 0.002% for impact strength.
2.11 Conclusion
For the last few decades, traditional search algorithms have been used to solve
engineering problems, and many promising algorithms have produced potential results.
This chapter presented an in-depth view of nature-inspired optimization algorithms and
their application to engineering constraint problems. Various nature-inspired optimization
algorithms were considered for extensive review, and different classes of such
algorithms were presented. Some of the popular and widely used NIOA were elaborated,
and their applications in different domains were also discussed. Their application to
clustering routing protocols and solid-state welding was elucidated in detail. The WCA-
based clustering routing protocol eventually lessens the cost of determining the best
node as the cluster head; this novel energy-efficient clustering mechanism sustains
the network lifetime. Similarly, the PSO and FFA evolutionary optimization algorithms
are used to simulate the UFSW process, and the results strongly suggest that the FFA
technique can be successfully used to build a highly efficient model for predicting
the parameters of the UFSW process. The findings and results drawn from this study
of NIOA are of great relevance to researchers of optimization algorithms, help in
optimizing engineering problems and practical applications, and have therefore been
detailed adequately. A variety of research directions could be thought of as helpful
expansions of this analysis.
References
[1] A. Singh, S. Sharma, and J. Singh, “Nature-inspired algorithms for wireless
sensor networks: a comprehensive survey,” Comput. Sci. Rev., vol. 39, no.
100342, p. 100342, 2021.
[77] X. Cao, H. Zhang, J. Shi, and G. Cui, “Cluster heads election analysis for
multi-hop wireless sensor networks based on weighted graph and particle
swarm optimization,” in 2008 Fourth International Conference on Natural
Computation, 2008.
[78] S. Jin, M. Zhou, and A. S. Wu, “Sensor network optimization using a genetic
algorithm,” in Proceedings of the 7th World Multiconference on Systemics,
Cybernetics and Informatics, 2003, pp. 109–116.
[79] D. S. Hussain and O. Islam, “Genetic algorithm for energy-efficient trees in
wireless sensor networks,” in Advanced Intelligent Environments, Boston,
MA: Springer US, 2009, pp. 139–173.
[80] A. Norouzi, F. S. Babamir, and A. H. Zaim, “A new clustering protocol for
wireless sensor networks using genetic algorithm approach,” Wirel. Sens.
Netw., vol. 03, no. 11, pp. 362–370, 2011.
[81] W. Luo, “A quantum genetic algorithm based QoS routing protocol for
wireless sensor networks,” in 2010 IEEE International Conference on
Software Engineering and Service Sciences, 2010.
[82] H.-S. Seo, S.-J. Oh, and C.-W. Lee, “Evolutionary genetic algorithm for
efficient clustering of wireless sensor networks,” in 2009 6th IEEE
Consumer Communications and Networking Conference, 2009.
[83] A.-A. Salehpour, B. Mirmobin, A. Afzali-Kusha, and S. Mohammadi, “An
energy efficient routing protocol for cluster-based wireless sensor networks
using ant colony optimization,” in 2008 International Conference on
Innovations in Information Technology, 2008.
[84] S. Mao, C. Zhao, Z. Zhou, and Y. Ye, “An improved fuzzy unequal clus-
tering algorithm for wireless sensor network,” Mob. Netw. Appl., vol. 18, no.
2, pp. 206–214, 2013.
[85] A. M. S. Almshreqi, B. M. Ali, M. F. A. Rasid, A. Ismail, and P. Varahram,
“An improved routing mechanism using bio-inspired for energy balancing in
wireless sensor networks,” in The International Conference on Information
Network 2012, 2012.
[86] D. Karaboga, S. Okdem, and C. Ozturk, “Cluster based wireless sensor
network routing using artificial bee colony algorithm,” Wirel. Netw., vol. 18,
no. 7, pp. 847–860, 2012.
[87] Y. Kumar and G. Sahoo, “A two-step artificial bee colony algorithm for
clustering,” Neural Comput. Appl., vol. 28, no. 3, pp. 537–551, 2017.
[88] P. P, M. Garg, and N. Jain, “An energy efficient routing protocol using ABC
to increase survivability of WSN,” Int. J. Comput. Appl., vol. 143, no. 2,
pp. 37–42, 2016.
[89] S. Potthuri, T. Shankar, and A. Rajesh, “Lifetime improvement in wireless
sensor networks using hybrid differential evolution and simulated annealing
(DESA),” Ain Shams Eng. J., vol. 9, no. 4, pp. 655–663, 2018.
[90] S. Kaur and R. Mahajan, “Hybrid metaheuristic optimization based energy
efficient protocol for wireless sensor networks,” Egypt. Inform. J., vol. 19,
no. 3, pp. 145–150, 2018.
Chapter 3
Application aspects of nature-inspired
optimization algorithms
Abhinav Kumar1, Subodh Srivastava1, Vinay Kumar1 and
Niharika Kulshrestha2
3.1 Introduction
NIOAs are sets of nature-based computational algorithms [1]. Basically, these are a
type of metaheuristic algorithms that are adapted or governed by natural phenom-
ena. It is categorized as bio-inspired (BIOAs), evolution based, and natural science
inspired (physics and chemistry) optimization algorithms (NSIOAs) [2] as shown in
Figure 3.1.
BIOAs are divided into evolution and swarm intelligence-based optimization
algorithm (OA). Genetic algorithm (GA) [3] and differential evolution (DE) [4] fall
under evolution-based OA. Swarm intelligence-based OAs are particle swarm
optimization (PSO) algorithm [5], fire fly (FF) [6], artificial bee colony (ABC) [7],
bacterial foraging (BF) [8], ant colony optimization (ACO) [9], bat algorithm (BA)
[10], cuckoo search (CS) [11], and so on. NSIOAs include simulated annealing
(SA) [12], gravitational search (GS) [13], and big bang big crunch (BBBC) [14]. As
shown in Figure 3.2, NIOAs [15] can be significantly used for engineering, finance
1 Department of Electronics and Communication Engineering, National Institute of Technology – Patna, India
2 Department of Physics, GLA University, India
healthcare, environmental, and industrial design problems.
Figure 3.1 Categorization of NIOAs (examples: bacterial foraging, cuckoo search, firefly algorithm)
Figure 3.2 Application areas of NIOAs: engineering, finance, healthcare, environmental, and industrial designs
Figure 3.3 Basic building blocks of image processing: pre-processing (image denoising and image enhancement), segmentation, feature selection, and classification
Pre-processing,
segmentation, feature extraction, feature selection, and classification are the basic
building blocks of image processing [31]. Pre-processing refers to the task of image
resizing, normalization, cropping, noise removal, image quality improvement, and
image restoration [32]. Pre-processing steps enhance the features of image.
Segmentation helps to find the region of interests (ROIs) by dividing the images
into a group of segments [33]. ROIs indicate the abnormal region in the case of
medical images. Feature extraction is the procedure of mining the features of an
image that assists in differentiation between two objects [34]. It is derived from the
internal attributes and the characteristics of the image. Feature selection is a sta-
tistical approach which finds the redundant and non-redundant features related to
the image [35]. It is usually used for dimension deduction. Classification [36]
describes the class or label of the image. It is estimated on the basis of extracted and
selected features. For example, in the case of image cancer classification, the
classification process categorized the images as cancerous or non-cancerous.
Nowadays, the steps involved in image processing are clubbed with ML and
DL to accelerate the accuracy and computation rate. Yet, it cannot be done without
the aid of an optimization technique. OAs are employed in various image proces-
sing application like to increase image quality, lessen noise, improve segmentation,
to optimize the feature set and classifications parameters. For instance, DE [37],
GA [38], PSO [39], ACO [40,41], and ABC [42] algorithms have been used for
image enhancement, segmentation [37,43], feature selection, classification [40],
and clustering, respectively.
y = x·mh1 + ah2
Here, y is the observed image; x is the noiseless image; ah2 is additive noise,
which arises due to machine calibration or sensors; and mh1 is an inherent
characteristic of the image known as multiplicative noise.
Based on the evolution of imaging modalities [27] like mammography (X-ray)
[32], ultrasound [44], magnetic resonance imaging (MRI) [45], computed tomo-
graphy (CT) [42], positron emission tomography (PET), and micro biopsy [33], the
noises can be categorized as Quantum (Poisson) [32], Speckle (Rayleigh) [44],
Rician [46], and Gaussian, respectively. The most commonly used denoising filters
[31] are median [47], wiener [48], wavelet [49], bilateral [50], non-local means
[51], total variation (TV) [52], anisotropic diffusion [53], fourth-order partial dif-
ferential equation [54], and complex diffusion [55]. All these filters are constituted
with a number of image parameters. The parameters are selected manually, which
increases the time complexity and affects the accuracy of the denoising filters. So,
there is a need for optimization that can improve the efficiency of denoising filters
by selecting the optimal filter parameters and denoising rules, and by minimizing
the error between the noisy and denoised images.
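As a toy illustration of selecting a filter parameter by minimizing the error against a reference, the sketch below uses a simple 1-D moving-average filter as a stand-in for the denoising filters named above; the candidate windows and the MSE criterion are assumptions:

```python
import random

def moving_average(signal, k):
    """Smooth a 1-D signal with window half-width k (a stand-in for the
    denoising filters above, which have analogous tunable parameters)."""
    n = len(signal)
    out = []
    for i in range(n):
        win = signal[max(0, i - k):min(n, i + k + 1)]
        out.append(sum(win) / len(win))
    return out

def mse(a, b):
    """Mean squared error between two equal-length signals."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def best_window(noisy, reference, candidates=(1, 2, 3, 5, 8)):
    """Pick the filter parameter that minimizes the error against a reference."""
    return min(candidates, key=lambda k: mse(moving_average(noisy, k), reference))
```

Real denoising optimization replaces the exhaustive candidate scan with an NIOA search over the filter's parameter space.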
detection [38] from crowd management. The traditional approach of image classi-
fication is to draw out the features from the image and then use a ML algorithm to
learn a mapping between the features and the labels. However, this approach
requires domain expertise in feature engineering and may not generalize well to
new data. Support vector machine (SVM) [35], K-nearest neighbor [68], decision
tree, random forest, convolutional neural network (CNN) [69], mask regional
convolutional neural network [70], and so on are classification techniques.
Classification techniques have some limitations in handling complex and non-
linear relationships between the features and the labels. So, the OAs like GA, PSO,
CS, and ACO can be used to optimize the classification parameters where the
fitness function can be defined as the classification accuracy of a classifier trained
on the extracted features.
3.3 Implementation
NIOAs have several parameters that need to be set correctly to ensure good per-
formance. These parameters [71] include the population size, crossover with
mutation rates (for GA), inertia weight with learning factors (for PSO), and pher-
omone evaporation and deposition rates (for ACO). Implementation of NIOAs
involves parameter tuning, representation of the solution, fitness function, and
stopping criteria [72]. Parameter tuning helps to find the best values for these
above-mentioned parameters. It relies on the specific issues being solved, and
different values can lead to different solutions. The solution representation is
another crucial aspect of implementing these NIOAs. The solution needs to be
encoded in a way that is suitable for the specific algorithm being used. For example,
in GA, solutions are typically represented as binary strings, while in PSO, solutions
are represented as vectors. The solution representation can greatly affect the
performance of the algorithm, and different representations can lead to different
solutions. The fitness function is the objective function that the OAs are trying to
optimize. It evaluates how good a particular solution is. It needs to be expressed in
a way that exactly captures the problem being solved. In some cases, it can be
difficult to design, and it may require domain-specific knowledge. Furthermore,
stopping criteria determine when the OAs should stop searching for a solution. It
may be the number of assigned iterations, the maximum time allowed, or when a
specific solution is found. Choosing the right stopping criteria is crucial to prevent
overfitting or underfitting the problem being solved.
Figure 3.4 demonstrates the generalized implementation steps to perform an
optimization in image denoising. It starts from input noisy image followed by the
definition objective function; choosing of an OAs; initialization of the OAs; itera-
tion of the OAs; determination of stopping criteria; and it ends with output denoised
image. Table 3.1 illustrates the implementation steps of PSO-based TV denoising
filter. The optimization begins with defining an objective function. The OF is
defined in terms of noisy and denoised image. After that, the PSO particles are
initialized and its fitness is evaluated. Based on the fitness evaluation, PSO
particles are updated. These steps are repeated until the matching of stopping cri-
teria. At the end, the optimized denoised image is obtained as output. PSO particles
are categorized as position and velocity vector as stated in the definition.
The implementation of NIOAs needs to be carefully designed and optimized
to ensure good performance. Furthermore, the algorithm needs to be designed to
take advantage of parallel processing when available, as well as other optimizations
such as local search and hybridization with other algorithms. It also needs to be
scalable and adaptable to different problem sizes and types.
Typical convergence criteria include the maximum number of iterations and the
target fitness value. The convergence criteria should be selected based on the
problem's complexity and the algorithm's performance.
The parameters of NIOAs are of two kinds: global and local. Global
parameters affect the behavior of the entire algorithm, such as the population size
(PS) in GA or the number of iterations in ACO. Local parameters affect the
behavior of individual components of the algorithm, such as the crossover
probability in GA or the pheromone evaporation rate in ACO. There are various
methods for parameter tuning [67] in NIOAs, including manual tuning, grid search,
random search, and metaheuristic optimization. Each of these methods has its
advantages and disadvantages, and the selection of a technique depends on the
specific problem at hand. A well-tuned NIOA can significantly improve the
efficiency and accuracy of the optimization process.
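The two simplest tuning methods can be sketched as follows, using a hypothetical stand-in score in place of a real benchmark of the algorithm's performance; the parameter names (inertia weight w, coefficient c) and ranges are illustrative:

```python
import itertools
import random

# Stand-in tuning objective: lower is better, optimum at (0.7, 1.5).
def score(w, c):
    return (w - 0.7) ** 2 + (c - 1.5) ** 2

# Grid search: exhaustively evaluate a Cartesian product of values.
w_grid = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
c_grid = [1.0, 1.25, 1.5, 1.75, 2.0]
grid_best = min(itertools.product(w_grid, c_grid), key=lambda p: score(*p))

# Random search: sample the same ranges a fixed number of times.
random.seed(0)
samples = [(random.uniform(0.4, 0.9), random.uniform(1.0, 2.0))
           for _ in range(30)]
rand_best = min(samples, key=lambda p: score(*p))
```

Grid search evaluates every combination (here 30 evaluations for a 6 x 5 grid), whereas random search spends the same budget on scattered samples, which often covers continuous ranges more evenly.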
Table 3.2 shows the time complexity chart of manual tuning, grid search,
random search, and metaheuristic optimization-based parameters tuning.
Optimization problems can be categorized into two types: constrained [73] and
unconstrained [74]. Constraints are a set of conditions that must be satisfied in
order to achieve a valid solution. In both cases, NIOAs have become popular due to
their effectiveness in solving such problems. Figure 3.5 exhibits the categorization
of NIOAs problems according to constrained- and unconstrained-based
optimization.
[Figure 3.5: categorization of NIOAs problems — constrained optimization (penalty function methods, constraint handling methods) and unconstrained optimization (population-based methods, single-point methods)]
For example, if a constraint requires that a variable x must be greater than or equal
to some value b, then a linear penalty function adds a violation term. It may be
mathematically noted as:
P(x) = max(0, b - x)   (3.2)
This penalty function is zero when x is greater than or equal to b, and is equal
to (b - x) when x is less than b. The linear penalty function increases proportionally
with the amount of constraint violation.
A quadratic penalty function instead squares the violation, P(x) = [max(0, b - x)]^2.
This penalty function is zero when x is greater than or equal to b and is equal to
(b - x)^2 when x is less than b. The quadratic penalty function increases much faster
than the linear penalty function with the amount of constraint violation.
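Both penalty functions can be written directly from the definitions above; the example objective and the penalty weight rho are illustrative choices:

```python
# Penalty functions for the constraint x >= b: both are zero when the
# constraint holds and grow with the amount of violation.

def linear_penalty(x, b):
    return max(0.0, b - x)          # P(x) = max(0, b - x), equation (3.2)

def quadratic_penalty(x, b):
    return max(0.0, b - x) ** 2     # grows faster than the linear penalty

def penalized_objective(x, b, rho=10.0):
    """Unconstrained surrogate: original objective plus weighted penalty."""
    f = x ** 2                      # example objective to minimize
    return f + rho * quadratic_penalty(x, b)
```

An OA can then minimize `penalized_objective` as an ordinary unconstrained problem; increasing `rho` pushes solutions toward feasibility.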
Other constraint handling [73] techniques, in addition to the penalty function
method, are listed below:
(a) Lagrange Multiplier Methods: Lagrange multiplier methods add a Lagrange
multiplier term to the OF that enforces the constraints as equality constraints
[76]. The Lagrange multiplier is a scalar parameter that is adjusted during
optimization to ensure that the constraints are satisfied.
(b) Sequential Quadratic Programming (SQP): SQP methods solve a sequence
of sub-problems that approximate the original problem while satisfying the
constraints [91]. The sub-problems are solved using quadratic programming,
which can handle nonlinear constraints.
(c) Interior Point Methods: Interior point methods solve the optimization pro-
blem by moving towards the interior of the feasible region [92]. The constraints
are handled by adding barrier functions to the OF, which ensures that the
optimization variables remain within the feasible region.
(d) Active Set Methods: Active set methods involve solving a sequence of sub-
problems in which some of the constraints are active (i.e., they are satisfied
with equality) and others are inactive (i.e., they are satisfied with inequality)
[93]. The active set is updated iteratively until a feasible solution is found.
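As a toy illustration of the interior-point idea in (c), the sketch below minimizes x^2 subject to x >= 1 (true optimum x = 1) with a logarithmic barrier. The closed-form subproblem solution is specific to this contrived example; real interior-point solvers use Newton steps:

```python
import math

# Each barrier subproblem minimizes x^2 - mu*ln(x - 1). Setting the
# derivative 2x - mu/(x - 1) to zero gives x(x - 1) = mu/2, solved here
# in closed form. Shrinking mu drives the iterate toward the constrained
# optimum from strictly inside the feasible region x > 1.

def barrier_minimizer(mu):
    # positive root of x^2 - x - mu/2 = 0, strictly inside x > 1
    return (1.0 + math.sqrt(1.0 + 2.0 * mu)) / 2.0

mu, path = 1.0, []
for _ in range(10):
    path.append(barrier_minimizer(mu))
    mu *= 0.1                       # tighten the barrier
```

The sequence `path` stays strictly feasible and approaches the constrained optimum x = 1 from above as the barrier weight shrinks.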
Table 3.4 Overview of feature selection steps with the aid of ACO
Initialization: Initialize the population of ants, where each ant signifies a potential solution
(i.e., a subset of features).
Pheromone trails: Each ant constructs a pheromone trail that exhibits the quality of its
solution, where the pheromone level increases with the fitness value of the solution.
Probabilistic transition: Each ant probabilistically selects the next feature to include in its
solution based on the pheromone level and the importance of the feature.
Local search: Each ant performs a local search to improve its solution by adding or
removing features.
Global update: After each iteration, the pheromone levels are updated based on the best
solution found so far.
Termination: The algorithm stops when a stopping criterion, such as a maximum number
of iterations, is satisfied.
Once the optimization process is completed, the subset of features with the highest fitness
value is selected as the optimal subset.
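The probabilistic-transition and global-update steps above can be sketched as follows; the feature names, pheromone levels, and importance scores are illustrative values:

```python
import random

# An ant picks the next feature with probability proportional to
# pheromone * importance (roulette-wheel selection).

random.seed(3)
pheromone = {"f1": 1.0, "f2": 4.0, "f3": 1.0}    # trail strength per feature
importance = {"f1": 0.5, "f2": 2.0, "f3": 0.5}   # heuristic desirability

def pick_feature(candidates):
    weights = [pheromone[f] * importance[f] for f in candidates]
    r, acc = random.random() * sum(weights), 0.0
    for f, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return f
    return candidates[-1]

def global_update(best_subset, deposit=0.5, evaporation=0.1):
    """Evaporate all trails, then reinforce features in the best subset."""
    for f in pheromone:
        pheromone[f] *= (1.0 - evaporation)
    for f in best_subset:
        pheromone[f] += deposit

picks = [pick_feature(["f1", "f2", "f3"]) for _ in range(1000)]
```

With these values, "f2" carries most of the combined weight, so ants select it far more often than the other features; the global update then reinforces whichever features appear in the best subset found so far.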
Initialization: Initialize the population of bees, where each bee characterizes a potential
solution (i.e., a subset of features).
Employed bees: Each employed bee evaluates a fitness function for its solution and then
searches for a neighboring solution to improve its fitness. The neighboring solution is
generated by randomly selecting a feature to add or remove from the current solution.
Onlooker bees: Each onlooker bee selects a solution based on its fitness value and then
searches for a neighboring solution to improve its fitness. The selection probability is
proportional to the fitness value of the solution.
Scout bees: If an employed or onlooker bee has not found a better solution after a certain
number of iterations, it becomes a scout bee and generates a new solution randomly.
Termination: The algorithm stops when a stopping criterion is matched.
Once the optimization process is completed, the subset of features with the highest fitness
value is selected as the optimal subset.
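The employed-bee neighborhood move above (randomly adding or removing one feature) can be sketched as follows; the feature pool and the greedy acceptance rule are illustrative assumptions:

```python
import random

ALL_FEATURES = list(range(10))        # hypothetical pool of 10 features

def neighbor(subset):
    """Flip one randomly chosen feature in or out of the subset."""
    new = set(subset)
    f = random.choice(ALL_FEATURES)
    if f in new and len(new) > 1:     # keep at least one feature
        new.remove(f)
    else:
        new.add(f)
    return new

def greedy_select(subset, fitness):
    """Keep the neighbor only if it improves fitness (higher is better)."""
    cand = neighbor(subset)
    return cand if fitness(cand) > fitness(subset) else subset
```

Each employed or onlooker bee would repeat `greedy_select` on its current subset; a scout bee would instead draw a fresh random subset.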
Initialization: Initialize the population of cuckoos, where each cuckoo signifies a potential
solution (i.e., a subset of features).
Levy Flight: Each cuckoo generates a new solution by performing a random walk in the
feature space, which is modelled by a Levy Flight distribution.
Evaluate fitness: Each cuckoo evaluates a fitness function for its solution and compares it to
the fitness values of the other cuckoos in the population.
Replace eggs: If a cuckoo’s solution is better than another cuckoo’s solution, it replaces the
other cuckoo’s solution with a new solution generated by a Levy Flight.
Abandon eggs: Some cuckoos may lay eggs in the nests of other cuckoos. If a cuckoo finds
a better solution by laying an egg in another cuckoo’s nest, it abandons its own nest and
adopts the other cuckoo’s nest.
Termination: The algorithm iterates until the stopping condition is met.
Once the optimization process is completed, the subset of features with the highest fitness
value is selected as the optimal subset.
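The Levy-flight random walk can be sketched with Mantegna's algorithm, a common way of drawing Levy-distributed step lengths in cuckoo search; the exponent beta and the step scale are illustrative choices:

```python
import math
import random

def levy_step(beta=1.5):
    """Heavy-tailed step length via Mantegna's algorithm."""
    num = math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
    den = math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2)
    sigma_u = (num / den) ** (1 / beta)
    u = random.gauss(0.0, sigma_u)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def new_solution(x, scale=0.01):
    """Random walk in a (here 1-D) search space via a Levy step."""
    return x + scale * levy_step()

random.seed(7)
steps = [levy_step() for _ in range(2000)]
```

Most draws are small local moves, but the heavy tail occasionally produces a very large jump; this mix of local refinement and long-range exploration is what makes Levy flights attractive for cuckoo search.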
Application aspects of nature-inspired optimization algorithms 73
Initialization: Initialize the population of fireflies, where each firefly represents a potential
solution (i.e., a subset of features).
Attraction: Each firefly moves towards other fireflies that have a higher fitness value,
which is modelled by an attractiveness function that depends on the distance between the
fireflies and their fitness values.
Randomization: Each firefly also moves randomly to explore the search space and escape
from local optima.
Intensity: The intensity of each firefly’s light is relative to its fitness value, and the intensity
decreases with distance from the firefly.
Absorption: If a firefly’s light intensity is greater than another firefly’s light intensity, it
absorbs the other firefly’s position and updates its solution.
Termination: The algorithm terminates when the convergence criterion is met.
Once the optimization process is completed, the subset of features with the highest fitness
value is selected as the optimal subset.
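The attraction step can be sketched in one dimension as follows; beta0, gamma, and alpha are illustrative settings:

```python
import math
import random

def attractiveness(r, beta0=1.0, gamma=1.0):
    """Attraction (perceived light intensity) decays with distance r."""
    return beta0 * math.exp(-gamma * r * r)

def move_towards(x_i, x_j, alpha=0.05):
    """Dimmer firefly i moves toward brighter firefly j, plus noise."""
    r = abs(x_j - x_i)
    beta = attractiveness(r)
    return x_i + beta * (x_j - x_i) + alpha * (random.random() - 0.5)

random.seed(11)
dim_firefly, bright_firefly = 0.0, 1.0   # brighter = higher fitness
moved = move_towards(dim_firefly, bright_firefly)
```

The exponential decay means nearby bright fireflies pull much harder than distant ones, while the alpha term supplies the random exploration step described above.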
Table 3.9 Overview on the practical engineering applications with the aid of
NIOAs
(c) GA: It is inspired by the process of natural selection and genetics. Its working
areas in engineering applications include structural design, material selection,
and manufacturing optimization. The practical example of GA is in the opti-
mization of aircraft wing design. GA can be used to optimize the wing shape
and size, reducing drag, and improving the lift-to-drag ratio of the aircraft.
(d) ABC: It is inspired by the foraging behavior of honey bees. It has been used in a
variety of engineering areas like image processing, power system optimization,
and parameter tuning. One practical real application of ABC is in the optimization
of water distribution networks. ABC can be used to optimize the placement and
sizing of pipes, reducing leaks, and improving the efficiency of the network.
(e) FA: It is inspired by the flashing behavior of fireflies. It can be used in robotics,
power system optimization, and signal processing. One of the most common prac-
tical applications of FA is in the optimization of building energy systems. FA can be
used to optimize the control of heating, ventilation, and air conditioning systems,
reducing energy consumption, and improving the comfort of the occupants.
(f) BF: It mimics the behavior of bacteria foraging for food. Some examples of
practical engineering applications are signal-image processing, sensor net-
works, control systems, electrical power systems, bioinformatics, environ-
mental, chemical, and mechanical engineering. For instance, it can be used to
optimize the load dispatch problem, which involves scheduling the generation
and distribution of electrical power in a system. In transportation engineering, it
optimizes the traffic signal timings and routing in transportation networks.
3.8 Conclusion
NIOAs have been shown to be effective in solving a wide range of engineering pro-
blems. These algorithms take inspiration from natural phenomena and use them to
create optimization algorithms that can be used to find the best solution to a problem.
List of Abbreviations
ABC Artificial Bee Colony
ACO Ant Colony Optimization
AI Artificial Intelligence
BA Bat Algorithm
BBBC Big Bang Big Crunch
BIOAs Bio-Inspired Optimization Algorithms
CNN Convolutional Neural Network
CP Constraint Programming
CS Cuckoo Search
CSPs Constraint Satisfaction Problems
DE Differential Evolution
DL Deep Learning
FF Firefly
GA Genetic Algorithm
GS Gravitational Search
HCHTs Hybrid Constraint Handling Techniques
IoT Internet of Things
ML Machine Learning
MR Mutation Rate
MI Mutual Information
NIOAs Nature-Inspired Optimization Algorithms
References
[1] W. R. Ashby, Principles of Self-Organizations: Transaction. Pergamon
Press, 1962.
[2] I. Fister Jr, X.-S. Yang, I. Fister, J. Brest, and D. Fister, “A brief review of nature-
inspired algorithms for optimization,” arXiv Prepr. arXiv1307.4186, 2013.
[3] S. Forrest, “Genetic algorithms,” ACM Comput. Surv., vol. 28, no. 1,
pp. 77–80, 1996.
[4] S. Das and P. N. Suganthan, “Differential evolution: a survey of the state-
of-the-art,” IEEE Trans. Evol. Comput., vol. 15, no. 1, pp. 4–31, 2010.
[5] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proceedings
of ICNN’95-International Conference on Neural Networks, 1995, vol. 4,
pp. 1942–1948.
[6] X.-S. Yang, “Firefly algorithms for multimodal optimization,” in
Stochastic Algorithms: Foundations and Applications: 5th International
Symposium, SAGA 2009, Sapporo, Japan, October 26–28, 2009.
Proceedings 5, 2009, pp. 169–178.
[7] D. Karaboga and B. Basturk, “A powerful and efficient algorithm for
numerical function optimization: artificial bee colony (ABC) algorithm,” J.
Glob. Optim., vol. 39, pp. 459–471, 2007.
[8] S. Das, A. Biswas, S. Dasgupta, and A. Abraham, “Bacterial foraging
optimization algorithm: theoretical foundations, analysis, and applica-
tions,” Found. Comput. Intell. Glob. Optim., vol. 3, pp. 23–55, 2009.
[9] M. Dorigo, M. Birattari, and T. Stutzle, “Ant colony optimization,” IEEE
Comput. Intell. Mag., vol. 1, no. 4, pp. 28–39, 2006.
[10] X.-S. Yang and A. Hossein Gandomi, “Bat algorithm: a novel approach for
global engineering optimization,” Eng. Comput., vol. 29, no. 5, pp. 464–
483, 2012.
[11] A. H. Gandomi, X.-S. Yang, and A. H. Alavi, “Cuckoo search algorithm: a
metaheuristic approach to solve structural optimization problems,” Eng.
Comput., vol. 29, pp. 17–35, 2013.
[58] S. Winkler, “Vision models and quality metrics for image processing
applications,” Verlag nicht ermittelbar, 2001.
[59] S. Khalid, T. Khalil, and S. Nasreen, “A survey of feature selection and
feature extraction techniques in machine learning,” in 2014 Science and
Information Conference, 2014, pp. 372–378.
[60] Z. Daixian, “SIFT algorithm analysis and optimization,” in 2010
International Conference on Image Analysis and Signal Processing, 2010,
pp. 415–419.
[61] N. Dalal and B. Triggs, “Histograms of oriented gradients for human
detection,” in 2005 IEEE Computer Society Conference on Computer
Vision and Pattern Recognition (CVPR’05), 2005, vol. 1, pp. 886–893.
[62] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-up robust features
(SURF),” Comput. Vis. image Underst., vol. 110, no. 3, pp. 346–359, 2008.
[63] H. Abdi and L. J. Williams, “Principal component analysis,” Wiley
Interdiscip. Rev. Comput. Stat., vol. 2, no. 4, pp. 433–459, 2010.
[64] B. F. Darst, K. C. Malecki, and C. D. Engelman, “Using recursive feature
elimination in random forest to account for correlated variables in high
dimensional data,” BMC Genet., vol. 19, no. 1, pp. 1–6, 2018.
[65] S. Balakrishnama and A. Ganapathiraju, “Linear discriminant analysis—a
brief tutorial,” Inst. Signal Inf. Process., vol. 18, no. 1998, pp. 1–8, 1998.
[66] A. Kraskov, H. Stögbauer, and P. Grassberger, “Estimating mutual infor-
mation,” Phys. Rev. E, vol. 69, no. 6, pp. 66138, 2004.
[67] P. Liashchynskyi and P. Liashchynskyi, “Grid search, random search, genetic
algorithm: a big comparison for NAS,” arXiv Prepr. arXiv1912.06059, 2019.
[68] R. Kumar, R. Srivastava, and S. Srivastava, “Detection and classification of
cancer from microscopic biopsy images using clinically significant and biolo-
gically interpretable features,” J. Med. Eng., vol. 2015, Article no. 457906, 2015.
[69] P. Kumar, S. Srivastava, R. K. Mishra, and Y. P. Sai, “End-to-end improved
convolutional neural network model for breast cancer detection using mam-
mographic data,” J. Def. Model. Simul., vol. 19, no. 3, pp. 375–384, 2022.
[70] P. Kumar, A. Kumar, S. Srivastava, and Y. Padma Sai, “A novel bi-modal
extended Huber loss function based refined mask RCNN approach for
automatic multi instance detection and localization of breast cancer,” Proc.
Inst. Mech. Eng. Part H J. Eng. Med., vol. 236, no. 7, pp. 1036–1053, 2022.
[71] X.-S. Yang and X. He, “Nature-inspired optimization algorithms in engineer-
ing: overview and applications,” Nat.-Inspired Comput. Eng., pp. 1–20, 2016.
[72] A. M. Hemeida, S. Alkhalaf, A. Mady, E. A. Mahmoud, M. E. Hussein, and
A. M. B. Eldin, “Implementation of nature-inspired optimization algo-
rithms in some data mining tasks,” Ain Shams Eng. J., vol. 11, no. 2,
pp. 309–318, 2020.
[73] X.-S. Yang (Ed.), “Chapter 14 — How to deal with constraints,” in Nature-
Inspired Optimization Algorithms (Second Edition), Academic Press, 2021,
pp. 207–220. https://doi.org/10.1016/B978-0-12-821986-7.00021-4.
[74] J. Nocedal, “Theory of algorithms for unconstrained optimization,” Acta
Numer., vol. 1, pp. 199–242, 1992.
1 Impledge Technologies, India
2 ABES Institute of Technology – Ghaziabad, India
For optimization problems, the original version of PSO is not very efficient.
Therefore, it is modified by introducing an inertia weight w in the velocity-update
equation. This updated algorithm is known as the canonical PSO algorithm.
So now, (4.2) can be written as follows:
v_i^(t+1) = w v_i^t + c1 r1 (p_i - P_i^t) + c2 r2 (g - P_i^t)
where p_i is the particle's best-known position and g is the swarm's best-known
position. There is only a very slight difference between the two equations; both
would be identical if suitable parameters were chosen. The position is then
updated as:
P_i^(t+1) = P_i^t + v_i^(t+1)   (4.6)
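The two update rules map directly to code. The settings w = 0.7, c1 = c2 = 1.2, the single 1-D particle, and the fixed attractor at zero are illustrative, not prescribed values:

```python
import random

def update_particle(p, v, pbest, gbest, w=0.7, c1=1.2, c2=1.2):
    r1, r2 = random.random(), random.random()
    # inertia-weight velocity update
    v_new = w * v + c1 * r1 * (pbest - p) + c2 * r2 * (gbest - p)
    p_new = p + v_new          # position update, equation (4.6)
    return p_new, v_new

random.seed(2)
p, v = 5.0, 0.0
for _ in range(200):           # both bests held at 0 for brevity
    p, v = update_particle(p, v, pbest=0.0, gbest=0.0)
```

With these coefficients the particle oscillates with shrinking amplitude and settles near the attractor; too large a w or too large coefficients would instead make the trajectory diverge.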
algorithm, which makes it simple [1,5]. It is well suited to discrete, continuous,
nonlinear, and non-convex problems. Figure 4.2 illustrates various advantages
and disadvantages of PSO.
[Figure: swarm intelligence capabilities — scheduling, routing, clustering, and optimization]
[Figure: general swarm principles — proximity principle, quality principle, diverse response principle, stability principle, and adaptability principle]
● Quality Principle: The swarm should have the capability to react to quality
factors, such as deciding on a safe location.
● Diverse Response Principle: Resources must not be accumulated in a narrow
area. Distribution must be planned so that each agent is well protected against
environmental fluctuations.
In 1989, the concept of SI was presented by Gerardo Beni and Jing Wang. It is
a rising field of biologically inspired artificial intelligence, based on the behavior
of social insects such as bees, ants, and wasps, as well as other animals such as
bats. Advantages and disadvantages of swarm intelligence are listed in Figure 4.5.
Multiple SI-based algorithms have been developed and applied effectively to
multi-domain problems. There are many ways to classify swarm-based algorithms,
but one significant categorization is based on the inspiring organism: insects,
bacteria, birds, wild animals, and amphibians. Figure 4.6 shows some well-known
and proven optimization algorithms.
● Ant colony optimization (ACO) is truly inspired by the searching behavior of
actual ants. It is a population-based metaheuristic approach, and the idea was
proposed by Marco Dorigo in his PhD thesis in 1992 [51]. Several real-life
examples of ACO are job scheduling, travelling salesman problem, path opti-
mization, timetable scheduling, distribution planning, etc. [52].
● Cuckoo search (CS) algorithm was proposed in 2009 by Xin-She Yang and Suash
Deb. It is a nature-inspired metaheuristic algorithm inspired by the brood parasitism
of certain cuckoo species, which lay their eggs in the nests of host birds of other
species [53].
● Teaching learning-based optimization (TLBO) algorithm was developed in
2011 by Rao et al. This algorithm mimics the classroom environment of
teacher and learners, for optimizing any given objective function. There are
two phases: teacher phase and learner phase [54,55].
● Particle swarm optimization (PSO) algorithm has many similarities with
genetic algorithms and evolutionary computation. It is an iterative approach.
Particle swarm optimization applications and implications 97
Disadvantages:
● Behavior: it is very difficult to forecast the behavior from the distinct rules.
● Information: the colony functionality cannot be predicted from the functioning
of an individual agent.
● Sensitivity: a minor change in the simple rules may impact behavior at
different group levels.
● Act: agent behavior may be seen as noise if the action of choice is stochastic.
● Non-optimal: swarm systems are highly redundant and there is no central control.
● Uncontrollable: it is difficult to exercise control over the swarm.
● Unpredictable: swarm system complexity leads to unforeseeable results.
● Non-immediate: complex swarm systems with rich hierarchies take time.
[Figure 4.6: swarm intelligence algorithms — Ant Colony Optimization (ACO), Hunting Search (HS), Cuckoo Search (CS), Artificial Immune System (AIS), Intelligent Water Drop (IWD), Gravitational Search Algorithm (GSA), Particle Swarm Optimization (PSO), Orcas Algorithm (OA), Fish Swarm Algorithm (FSA), Tunicate Swarm Algorithm (TSA), Krill Herd Algorithm (KH), Firefly Algorithm (FA), Bat Algorithm (BA)]
PSO is a metaheuristic technique and, at present, one of the best-defined and most
extensively used SI algorithms. Because of its simplicity, PSO has a wide range of
applications in single-objective optimization problems [2,4]. PSO is a problem-
independent algorithm: only a single piece of information is needed to run it,
namely the fitness evaluation of a candidate solution.
The standard form of PSO has been altered and modified, and many variants
have been proposed by researchers to deal with distinct types of problems [5].
Some examples are the travelling salesman problem, task planning, logistics
planning, UAV mission planning, flood control and routing, and optimizing
water harvesting.
where p_g^[q] is the best position of the qth swarm, calculated with the qth objective
function.
A modified version of VEPSO is employed, where one processor is dedicated
to each swarm. The total number of swarms need not be the same as the number of
objective functions. In this system, swarms can communicate with other swarms
through the island migration approach. This algorithm has been effectively applied
for regulating generator contributions in transmission systems and for optimizing a
radiometer array antenna [73].
One more approach comparable to VEPSO has been suggested, known as multi-
species PSO. This algorithm was established and applied to robotics for a self-
sufficient agent response knowledge framework. Subswarms are utilized, forming
groups, with one subswarm for every objective function. Each subswarm is
evaluated with its specific objective function, and the best-particle information is
conveyed to nearby subswarms for the particle velocity update. Therefore, the ith
particle velocity for the sth swarm is modified as per the equation:
v_ij^[s](t+1) = v_ij^[s](t) + a1 (p_ij^[s](t) - x_ij^[s](t)) + a2 (p_gj^[s](t) - x_ij^[s](t)) + A   (4.9)
where
A = sum over l = 1, . . ., Hs of (p_gj^[l](t) - x_ij^[s](t))
Hs is the total number of swarms that communicate with the sth swarm, and p_g^[l]
is the lth swarm's best position.
the ith particle velocity update, in terms of the particle's best position (pi) and the
repository leader (Rh), takes the form
v_i(t+1) = w v_i(t) + r1 (p_i - x_i(t)) + r2 (R_h - x_i(t))
Since the repository has a restricted size, when it is full new solutions can
be stored based on retention criteria.
2. The repository's limitation of storing only a restricted number of records is
overcome in another approach. It uses a relatively complicated tree-type
structure, known as a dominant tree, for addressing unconstrained repository
upkeep. Except for the repository upkeep, this algorithm acts like MOPSO.
The inclusion of one more feature, mutation (also known as craziness), makes
the algorithm more efficient. Mutation acts on particle velocity, which
preserves diversity [75].
3. Another algorithm is based on the selection criteria of leaders. More precisely,
every particle is evaluated for each objective function independently. The
swarm's global best position is updated as the mean of the best particle for
each function. Particle diversity is preserved through distance calculation. The
choice of leader is inclined toward nondominated solutions, which mitigates
particles gathering in groups. This algorithm has been examined on a number
of problems to estimate its efficiency [76].
4. Maximin PSO is one more approach, which utilizes the maximin fitness function.
The fitness function for a particular decision vector x, with swarm size N and
number of objective functions k, is given as follows:
f_maximin(x) = max over the other N - 1 decision vectors y of { min over i = 1, . . ., k of (f_i(x) - f_i(y)) }
Those decision vectors whose maximin function value is smaller than zero are
considered nondominated solutions. Swarm diversity is encouraged by the
maximin function because it penalizes clusters of particles [77]. One debatable
point is that it favors intermediate solutions on convex fronts and extreme
solutions on concave fronts. This can be handled by using a sufficiently large
swarm.
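The maximin fitness can be computed directly from the definition; the small bi-objective example below uses illustrative objective values:

```python
# objs[j] holds the k objective values (minimization) of decision vector j.
# A vector with maximin value below zero is nondominated within the swarm.

def maximin(idx, objs):
    others = [objs[j] for j in range(len(objs)) if j != idx]
    return max(min(fi_x - fi_y for fi_x, fi_y in zip(objs[idx], y))
               for y in others)

# Three candidates, two objectives: the first two trade off against each
# other; the third is dominated by the first.
objs = [[1.0, 4.0], [4.0, 1.0], [2.0, 5.0]]
values = [maximin(i, objs) for i in range(len(objs))]
nondominated = [i for i, v in enumerate(values) if v < 0]
```

Here the first two candidates obtain negative maximin values (nondominated), while the third obtains a positive value because the first candidate beats it on every objective.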
5. Another MOPSO technique with a crowding-distance mechanism has been
developed. It is beneficial for selecting the global best particle and for
removing nondominated solutions from the outer archive. To maintain
diversity, mutation is taken into consideration. For every nondominated
solution, the crowding distance is calculated individually [78].
Let us assume:
● R is the outer archive;
● f1, f2, . . ., fk are the objective functions;
● p is the point in R whose crowding distance is calculated;
● q is the point in R that immediately follows p in the sorting.
Swarm leaders are selected from the fraction of nondominated points of R having
the maximum crowding distance. Mutation can also be applied to particles to
encourage swarm diversity.
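The crowding-distance computation can be sketched as follows; the archive values are illustrative, and boundary points receive infinite distance so they are always retained:

```python
def crowding_distances(points):
    """points: list of (f1, ..., fk) tuples; returns one distance per point."""
    n, k = len(points), len(points[0])
    dist = [0.0] * n
    for m in range(k):                          # per objective fm
        order = sorted(range(n), key=lambda i: points[i][m])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = points[order[-1]][m] - points[order[0]][m] or 1.0
        for pos in range(1, n - 1):
            i = order[pos]
            # neighbors that immediately precede/follow p in the sorting
            gap = points[order[pos + 1]][m] - points[order[pos - 1]][m]
            dist[i] += gap / span               # normalized neighbor gap
    return dist

archive = [(0.0, 1.0), (0.2, 0.8), (0.5, 0.5), (0.9, 0.1), (1.0, 0.0)]
d = crowding_distances(archive)
```

Points in sparsely populated stretches of the front receive larger distances, so selecting leaders by maximum crowding distance steers the swarm toward under-explored regions.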
where gbest is the best position found so far in the neighborhood of the ith particle.
(G) PSO with Hybrid Whale (HW) Algorithm: The whale optimization algorithm is
popular for its excellent exploration capability. When combined with PSO, it
overcomes PSO's phase limitations [86,87]: the forced-whale mechanism serves
the exploration phase and the capping phenomenon the exploitation phase, so
the hybrid converges to the global optimum quickly.
(H) PSO with Many Other Algorithms: It has been observed that the PSO algorithm
can be integrated with ant colony optimization (ACO), cuckoo search (CS),
artificial bee colony (ABC), gray wolf optimization (GWO), cat swarm
optimization (CSO), salp swarm algorithm (SSA), etc. for enhanced
performance [88–91].
4.9 Convergence
The word convergence can be defined in two distinct ways in relation to PSO
[9,10]:
● All particles converge to a point in the search space; this is convergence of the
series of solutions. The point of convergence may or may not be an optimum.
● Irrespective of swarm behavior, all particles' personal bests (p) and the
swarm's best position (g) approach a local optimum of the problem. Here,
convergence is to a local optimum.
The convergence of the series of solutions has been analyzed for PSO. These
analyses guide the selection of the PSO parameters that are required for
convergence and that avoid particle divergence. Early analysis was criticized as
too simplified: it assumed a single particle without stochastic variables, attracted
toward its best-known position (p) and the swarm's best-known position (g), both
of which remain unchanged throughout the entire optimization process [97].
However, for swarm convergence, it has been observed that such simplifications
do not affect the parameter boundaries defined by this analysis. In recent years,
significant efforts have been made to relax the modelling assumptions in the
stability analysis of PSO [98], and more generalized results have been applied to
many PSO variants, clarifying the minimum necessary modelling assumptions.
Convergence to a local optimum has been investigated for PSO [99,100];
evidence shows that some modifications are required in PSO to guarantee a local
optimum. Hence, it can be concluded that the choice of PSO parameters and the
algorithm's convergence potential depend upon observational and experimental
results. An orthogonal learning strategy can also be employed to exploit already
existing information to achieve quicker global convergence, higher-quality
solutions, and more powerful robustness [101].
If a trade-off between divergence and convergence is required, some adaptive
mechanism needs to be introduced. In comparison to regular PSO, adaptive PSO
(APSO) offers improved search efficiency: it can execute a global search over the
whole search space with an improved convergence rate. Search efficiency and
effectiveness can be improved by controlling the algorithm parameters and
acceleration coefficients in real time [25]. Although APSO introduces new
parameters, it does not require additional design effort and hence adds no
implementation complexity.
selection subfields where features need to be extracted are text recognition, image
recognition, and pattern recognition. Features include contrast, object, size, color
change, texture change, cluster importance, etc. The standard techniques used so
far work either for global or for local selection; therefore, feature selection is an
NP-hard problem. PSO's capability to resolve such problems is proven [5,104].
For clustering, PSO is used to search for the centroids of clusters, which offers
better results.
● Edge Detection: Edges play an important role in images. In the literature,
numerous traditional approaches have been defined, which rely on first-order
or second-order derivatives, the Laplacian operator, the Canny edge detector,
etc. PSO-based approaches have proven their potential to overcome many
challenges faced by traditional methods. PSO finds the best-fitness curve of an
image, which represents object boundaries. PSO-integrated algorithms also
give more precise results for highly noisy images [105,106].
● Edge Linking: Many edge detection methods suffer from certain limitations,
such as false edge detection, missing actual edges, detecting thin lines, or
producing inappropriate thickness [89]. Connecting broken edges is also a
tedious job. PSO-based techniques are efficient at detecting continuous and
clear object edges, even in the presence of noise.
● Image Compression: The purpose of image compression is to remove
redundancies in the image to save storage space and transmission bandwidth.
PSO-integrated algorithms help fulfil this task, and the compressed images
reduce the required CPU execution time [107,108].
References
[1] J. Kennedy and R. C. Eberhart, “Particle swarm optimization”, In:
Proceedings of the IEEE International Conference on Neural Networks,
Perth, Australia, pp. 1942–1948, 1995.
[2] T. M. Blackwell, “Particle swarms and population diversity”, Soft
Computing, vol 9(11), pp. 793–802, 2005.
[3] A. G. Gad, “Particle swarm optimization algorithm and its applications: a
systematic review”, Archives of Computational Methods in Engineering,
vol 29, pp. 2531–2561, 2022.
[4] V. P. Kour and S. Arora, “Particle swarm optimization-based support
vector machine (P-SVM) for the segmentation and classification of plants”,
IEEE Access, vol 7, pp. 29374–29385, 2019.
[5] S. W. Lin, K. C. Ying, S. C. Chen, and Z. J. Lee, “Particle swarm optimi-
zation for parameter determination and feature selection of support vector
machines”, Expert Systems with Applications, vol 35, no. 4, pp. 1817–1824,
2008.
5.1 Introduction
The human race is currently witnessing an era in which the world is changing at a
very fast pace technologically. Innovations and inventions take place on a daily
basis. It is not only the engineering field that experiences this fast cycle of incoming
technologies; fields such as medicine and social engineering are also adopting
emerging techniques.
One of the most influential reasons these changes are visible is the advent of
Artificial Intelligence (AI). AI has been applied to, and has affected, all aspects of
life. Some examples are presented below:
● AI used to write research papers [1]
● AI’s advent into legal services [2]
● AI in medical field [3]
● AI applications in literature [4]
● AI in robots [5]
● AI used in drone swarms [6,7]
The above list is not even the tip of the iceberg, considering the contributions
of AI in today's world. Note that this list also omits the detailed engineering
explorations brought about by AI.
Evolutionary optimization algorithms are an important component of AI.
These optimization techniques are used to find optimal solutions for complex
search problems. Another application of optimization techniques is obtaining
feasible solutions for problems where finding a solution itself is the objective.
Such problems have very few acceptable solutions, and thus solving them is
similar to finding a needle in a haystack.
Evolutionary optimization algorithms are usually based on methodologies that
are derived from natural processes. These methods are very robust and easily adapt
1 National Institute of Technology and Science – Warangal, India
to different multidimensional complex problems. Each algorithm has its own merits
and demerits.
The initial optimization method, the genetic algorithm, was derived from Darwin's theory of “survival of the fittest.” In this theory, the individuals with a better set of genes are able to survive and reproduce. Hence, the new generations carry the genes of those individuals from the previous generation that were best able to adapt to the environment. The process repeats, and the genes required for survival are passed on, possibly modified through further adaptation.
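The loop described above (selection of the fittest, reproduction, and occasional modification) can be sketched in a few lines of Python. This is a generic illustration, not code from the text: the population size, mutation rate, and the all-ones bit-string target are arbitrary choices.

```python
import random

random.seed(42)

def fitness(ind):
    # Number of 1-bits; the all-ones string is the fittest individual.
    return sum(ind)

def tournament(pop, k=3):
    # Fitter individuals are more likely to survive and reproduce.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Offspring combine genes from two surviving parents.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.02):
    # Occasional gene modification, analogous to adaptation.
    return [bit ^ 1 if random.random() < rate else bit for bit in ind]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for _ in range(60):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(len(pop))]

best = max(pop, key=fitness)
print(fitness(best))  # typically close to 20
```

Each generation keeps the genes of fitter parents, which is exactly the survival-of-the-fittest mechanism described above.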
Evolutionary optimization algorithms were introduced because the conventional methods applied to optimization problems were often not capable of obtaining optimal solutions. These algorithms are fast, robust, adaptable, and less likely to become trapped in local minima. A number of evolutionary optimization algorithms have been introduced over time. Some of these
methods have been modified from the base version while some have been combined
with other optimization methods to form a hybrid method. Different applications of
optimization techniques have been discussed in detail in [8]. Also, step-by-step
evaluation of some of these optimization techniques has been presented in [9].
In this chapter, an exhaustive list of evolutionary optimization algorithms is
presented first. In the second part of the chapter, some of the advanced evolutionary
optimization techniques are discussed.
adventurous, and (iii) unstable. This kind of behavior can lead to irrational movements that take a member to an inferior position. Such anarchic movement of members increases as the differences between them increase or as the situation worsens.
The members of the society have knowledge of the best position attained by any member of the society up to the current iteration. The members are also aware of which member currently occupies the best position in the society. There are three policies by which a member may move.
1. The first policy for movement is based on the current position of a member. This movement depends on the Fickleness Index (FI), which represents a member's dissatisfaction with its current position in comparison with other members. The FI can be obtained through one of the following equations:
$$FI_i(k) = 1 - \alpha_i\,\frac{f(X_{i_k^*}(k))}{f(X_i(k))} - (1-\alpha_i)\,\frac{f(P_i(k))}{f(X_i(k))}$$

$$FI_i(k) = 1 - \alpha_i\,\frac{f(G(k))}{f(X_i(k))} - (1-\alpha_i)\,\frac{f(P_i(k))}{f(X_i(k))}$$
where $\alpha_i$ is a number between 0 and 1; “k” is the current iteration; “i” indexes the member of the society; $X_i(k)$ is the position of the ith member in the kth iteration; $P_i(k)$ is the personal best position achieved by the ith member; $G(k)$ is the global best position; $i_k^*$ is the member holding the best position in the society in the kth iteration; and $f(\cdot)$ represents the objective function value.
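As a numerical illustration of the first FI variant, the following snippet evaluates the index for one member (the objective values are invented for the example and assume a minimization problem, where smaller f is better):

```python
def fickleness_index(alpha, f_best_member, f_personal_best, f_current):
    # FI_i(k) = 1 - alpha * f(best member)/f(X_i)
    #             - (1 - alpha) * f(personal best)/f(X_i)
    return 1.0 - alpha * f_best_member / f_current \
               - (1.0 - alpha) * f_personal_best / f_current

# Illustrative values: a member at objective value 10, whose personal
# best is 6, while the society's best member sits at 4.
fi = fickleness_index(alpha=0.5, f_best_member=4.0,
                      f_personal_best=6.0, f_current=10.0)
print(fi)  # 0.5
```

A larger FI signals greater dissatisfaction with the current position, which drives the member's next movement.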
2. In the second policy, the movement of the member is dependent on other
members of the society. The movement under this policy is controlled by
external irregularity index (EI). EI is proposed to be evaluated as:
the pit. The pit size is observed to be dependent on the amount of hunger and the
moon size.
Ants, i.e., the prey, move in a stochastic way in search of food. Their movement is modeled as:

$$X(t) = [0,\ \mathrm{cs}(2r(t_1)-1),\ \mathrm{cs}(2r(t_2)-1),\ \ldots,\ \mathrm{cs}(2r(t_n)-1)]$$
where cs is the cumulative sum; r(t) is evaluated as:

$$r(t) = \begin{cases} 1 & \text{if } rand > 0.5 \\ 0 & \text{if } rand \le 0.5 \end{cases}$$

“n” is the maximum iteration count and “t” is the step of the random walk.
A pattern of such a random walk is represented through a graph in the reference publication.
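The cumulative-sum walk above is straightforward to reproduce; a minimal sketch in Python:

```python
import random

random.seed(0)

def random_walk(n_steps):
    # X(t) = [0, cs(2 r(t1) - 1), ..., cs(2 r(tn) - 1)]
    # where r(t) is 1 if rand > 0.5 and 0 otherwise, so each
    # step 2 r(t) - 1 is either +1 or -1.
    walk = [0]
    for _ in range(n_steps):
        step = 1 if random.random() > 0.5 else -1
        walk.append(walk[-1] + step)
    return walk

walk = random_walk(100)
print(walk[:5])
```

Each ant performs one such walk per dimension at every iteration.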
The positions of the ants are saved in a matrix called $M_{ant}$, which is given as:

$$M_{ant} = \begin{bmatrix} A_{1,1} & A_{1,2} & \cdots & A_{1,d} \\ A_{2,1} & A_{2,2} & \cdots & A_{2,d} \\ \vdots & \vdots & \ddots & \vdots \\ A_{n,1} & A_{n,2} & \cdots & A_{n,d} \end{bmatrix}$$

where “d” is the number of dimensions; “n” is the number of ants; and $A_{i,j}$ represents the value of the jth dimension for the ith ant.
The ant in ALO is analogous to a wolf in the Grey Wolf Optimizer or a particle in PSO.
There is another matrix, $M_{OA}$, which stores the objective function value of each ant. This matrix is a column matrix of size “n”.
Similar to $M_{ant}$, another matrix termed $M_{antlion}$ is used to store the position of each antlion. Correspondingly, the matrix $M_{OAL}$ stores the objective function value of each antlion.
The movement of the ants must remain within the search space. Also, if an ant becomes fitter than an antlion, this means that it has been caught by that antlion.
The random walk is represented as:
ðXit ai Þ ðdi cti Þ
Xit ¼ þ ci
ðdit ai Þ
where “i” represents the variable/dimension; “t” is the iteration; “a” is the mini-
mum random walk for the variable “i”; “c” and “d” are the minimum and maximum
values of ith variable in the tth iteration.
The effect of antlion traps on the random walk of ants is given as:

$$c_i^t = Antlion_j^t + c^t$$
$$d_i^t = Antlion_j^t + d^t$$

where $Antlion_j^t$ gives the position of the jth antlion in the tth iteration.
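The two steps, shifting the walk bounds around the trapping antlion and then rescaling the raw walk into those bounds, can be sketched as follows. For numerical clarity the rescaling below uses the standard min–max form (dividing by the max minus min of the raw walk, as in the original ALO paper); all numeric values are illustrative:

```python
def trap_bounds(antlion_pos, c_t, d_t):
    # Traps shift the walk bounds around the chosen antlion:
    # c_i^t = Antlion_j^t + c^t,  d_i^t = Antlion_j^t + d^t
    return antlion_pos + c_t, antlion_pos + d_t

def normalize_walk(walk, c_t, d_t):
    # Min-max rescale a raw random walk into the interval [c_t, d_t].
    a, b = min(walk), max(walk)  # bounds of the raw walk
    return [(x - a) * (d_t - c_t) / (b - a) + c_t for x in walk]

walk = [0, 1, 2, 1, 0, -1, -2, -1]           # a raw +1/-1 random walk
c, d = trap_bounds(antlion_pos=5.0, c_t=-1.0, d_t=1.0)
scaled = normalize_walk(walk, c, d)
print(min(scaled), max(scaled))  # 4.0 6.0
```

After scaling, the ant's walk stays inside the shrinking region around the antlion, which models sliding into the pit.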
where “i” is a particular crow in the flock; “it” is the iteration; “x” represents the position of a crow in the search space; “r” is a random number between 0 and 1; “fl” signifies the flight length and is, indirectly, a factor that helps control the search process and lead it into an exploration or an exploitation phase; “m” represents the individual best position; and “AP” gives the awareness probability of a crow.
In CSA, it is required to store the individual best positions of each crow and
update them in every iteration.
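The CSA position-update equation itself is not reproduced in this excerpt; for reference, the standard update from Askarzadeh's crow search paper can be sketched as below. Bounds and parameter values here are illustrative choices, not values prescribed by the text:

```python
import random

random.seed(1)

def crow_step(x, memory_j, fl=2.0, ap=0.1, lo=-10.0, hi=10.0):
    # Standard CSA update (reconstruction): if the followed crow j does
    # not notice it is being followed (rand >= AP), crow i moves toward
    # crow j's memorized best position m; otherwise crow i is fooled
    # and jumps to a random position in the search space.
    if random.random() >= ap:
        return x + random.random() * fl * (memory_j - x)
    return random.uniform(lo, hi)

x_new = crow_step(x=3.0, memory_j=7.0)
print(x_new)
```

A small AP favors exploitation (crows usually chase memories), while a large AP injects random exploration.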
where “t” is the iteration; “i” is the cuckoo number; α is a constant with α > 0, preferably taken as α = 1; and ⊕ represents entry-wise multiplication.
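The Lévy-flight step driving this update is usually generated with Mantegna's algorithm; a sketch follows (β = 1.5 is the stability index commonly used with cuckoo search, an assumption rather than a value stated in this excerpt):

```python
import math
import random

random.seed(2)

def levy_step(beta=1.5):
    # Mantegna's algorithm for a Levy-distributed step length.
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta
                * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def cuckoo_move(x, alpha=1.0):
    # X_i^{t+1} = X_i^t + alpha (entry-wise) Levy step, with alpha = 1
    # as the text suggests. A sketch of the update only.
    return [xi + alpha * levy_step() for xi in x]

new = cuckoo_move([0.0, 0.0])
print(new)
```

Heavy-tailed Lévy steps mix many small moves with occasional long jumps, which is what gives cuckoo search its exploration power.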
where “X” represents a solution; “f” denotes the first shot points; “e” denotes the explosion point; “m” represents the direction of the shrapnel; and “d” represents the distance that this shrapnel will cover.
The position of the exploding mine is defined as:

$$X_{e(n+1)}^{f} = d_{n+1}^{f} \cdot rand \cdot \cos(\theta), \qquad n = 0, 1, 2, \ldots$$

where rand is a random number; θ is the angle at which the shrapnel will travel, given as 360/N_s, where N_s is the number of shrapnel pieces.
The distance and the direction of the shrapnel are given as:

$$d_{n+1}^{f} = \sqrt{(X_{n+1}^{f} - X_n^{f})^2 + (F_{n+1}^{f} - F_n^{f})^2}, \qquad n = 0, 1, 2, 3, \ldots$$

$$m_{n+1}^{f} = \frac{F_{n+1}^{f} - F_n^{f}}{X_{n+1}^{f} - X_n^{f}}, \qquad n = 0, 1, 2, 3, \ldots$$
later part is directed toward exploitation. Thus, the distance of the shrapnel is controlled as:

$$d_n^{f} = \frac{d_{n-1}^{f}}{\exp(k/\alpha)}, \qquad n = 1, 2, 3, \ldots$$

where α is the reduction constant. The value of α should be chosen such that the shrapnel distance reduces with every iteration, with essentially no distance traveled by the shrapnel in the last iteration.
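The exponential reduction of the shrapnel distance can be seen numerically; with an illustrative α = 5 (not a value from the text), the distance collapses rapidly over five iterations:

```python
import math

def shrapnel_distance(d_prev, k, alpha):
    # d_n^f = d_{n-1}^f / exp(k / alpha): the travelled distance shrinks
    # at every iteration k, steering later iterations toward exploitation.
    return d_prev / math.exp(k / alpha)

d = 100.0
for k in range(1, 6):  # alpha = 5 is an illustrative choice
    d = shrapnel_distance(d, k, alpha=5.0)
print(round(d, 6))
```

After five iterations the distance equals 100·exp(−(1+2+3+4+5)/5), i.e., roughly 5% of its initial value.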
where $Cost_n$ is the objective function value of the nth raindrop; $N_{Raindrops}$ is given as $N_{pop} - N_{sr}$, where $N_{pop}$ is the population size.
Streams can combine to form a river or can directly flow into the sea. The
distance of travel by a stream can be randomly chosen.
The new positions of the rivers and streams are represented as:

$$X_{Stream}^{i+1} = X_{Stream}^{i} + rand \cdot C \cdot (X_{River}^{i} - X_{Stream}^{i})$$

$$X_{River}^{i+1} = X_{River}^{i} + rand \cdot C \cdot (X_{Sea}^{i} - X_{River}^{i})$$
where rand is a random number between 0 and 1; “X” is the position.
In case the objective function value of a stream is better than that of its connecting river, their positions are interchanged. A similar exchange can happen between a river and the sea.
In case the distance between a river and the sea is smaller than $d_{max}$, the process of evaporation happens, after which the raining process starts; this is similar to the generation of random solutions within the search space. The value of $d_{max}$ is decreased in order to intensify the search process near the sea. The value of $d_{max}$ can also be made linearly dependent on the iteration count as:

$$d_{max}^{i+1} = d_{max}^{i} - \frac{d_{max}^{i}}{max\_iter}$$
The raining process can be used to generate random solutions near the sea for
better exploitation in that region.
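The stream-to-river and river-to-sea updates above can be sketched directly. C = 2 is the value commonly used in the water cycle algorithm literature, and the positions below are illustrative:

```python
import random

random.seed(7)

C = 2.0  # commonly taken as 2 in the water cycle algorithm

def flow(position, target):
    # X_Stream^{i+1} = X_Stream^i + rand * C * (X_River^i - X_Stream^i)
    # (identically for rivers flowing toward the sea)
    return [p + random.random() * C * (t - p)
            for p, t in zip(position, target)]

stream = [4.0, -2.0]
river = [1.0, 1.0]
sea = [0.0, 0.0]

stream = flow(stream, river)  # streams move toward their river
river = flow(river, sea)      # rivers move toward the sea
print(stream, river)
```

Because C > 1, a stream can overshoot its river, which lets the population explore beyond the straight line between the two positions.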
5.4 Conclusion
This chapter presented a vast collection of nature-inspired algorithms along with their references. This information should be very useful for researchers, who can access the details of approximately 200 algorithms in a single place.
The chapter also introduced seven recently proposed evolutionary algorithms along with the necessary equations of these methods.
References
[29] Drias H, Sadeg S, and Yahi S. Cooperative bees swarm for solving the
maximum weighted satisfiability problem. In: International Work-
Conference on Artificial Neural Networks. Springer; 2005. p. 318–325.
[30] Pham DT, Ghanbarzadeh A, Koç E, et al. The bees algorithm—a novel tool
for complex optimisation problems. In: Intelligent Production Machines
and Systems. Elsevier; 2006. p. 454–459.
[31] Wang T, Yang L. Beetle swarm optimization algorithm: theory and appli-
cation; 2018. arXiv preprint arXiv:180800206.
[32] Zhong C, Li G, and Meng Z. Beluga whale optimization: a novel nature-
inspired metaheuristic algorithm. Knowledge-Based Systems. 2022;251:
109215.
[33] Erol OK and Eksin I. A new optimization method: big bang–big crunch.
Advances in Engineering Software. 2006;37(2):106–111.
[34] Simon D. Biogeography-based optimization. IEEE Transactions on
Evolutionary Computation. 2008;12(6):702–713.
[35] Askarzadeh A. Bird mating optimizer: an optimization algorithm inspired
by bird mating strategies. Communications in Nonlinear Science and
Numerical Simulation. 2014;19(4):1213–1228.
[36] Hatamlou A. Black hole: a new heuristic optimization approach for data
clustering. Information Sciences. 2013; 222:175–184.
[37] Hayyolalam V and Kazem AAP. Black widow optimization algorithm: a
novel meta-heuristic approach for solving engineering optimization pro-
blems. Engineering Applications of Artificial Intelligence. 2020;87:
103249.
[38] Taherdangkoo M, Shirzadi MH, Yazdi M, et al. A robust clustering method
based on blind, naked mole-rats (BNMR) algorithm. Swarm and
Evolutionary Computation. 2013;10:1–11.
[39] Das AK and Pratihar DK. A new bonobo optimizer (BO) for real-parameter
optimization. In: 2019 IEEE Region 10 Symposium (TENSYMP). IEEE;
2019. p. 108–113.
[40] Shi Y. Brain storm optimization algorithm. In: International Conference in
Swarm Intelligence. Springer; 2011. p. 303–309.
[41] Marinakis Y, Marinaki M, and Matsatsinis N. A bumble bees mating
optimization algorithm for global unconstrained optimization problems. In:
Nature Inspired Cooperative Strategies for Optimization (NICSO2010).
Springer; 2010. p. 305–318.
[42] Jada C, Vadathya AK, Shaik A, et al. Butterfly mating optimization. In:
Intelligent Systems Technologies and Applications. Springer; 2016. p. 3–15.
[43] Arora S and Singh S. Butterfly optimization algorithm: a novel approach
for global optimization. Soft Computing. 2019;23(3):715–734.
[44] Chu SC, Tsai PW, and Pan JS. Cat swarm optimization. In: Pacific
Rim International Conference on Artificial Intelligence. Springer; 2006.
p. 854–858.
[45] Formato RA. Central force optimization: a new nature inspired computa-
tional framework for multidimensional search and optimization. In: Nature
Advanced optimization by nature-inspired algorithm 135
[62] Pierezan J and Coelho LDS. Coyote optimization algorithm: a new meta-
heuristic for global optimization problems. In: 2018 IEEE Congress on
Evolutionary Computation (CEC). IEEE; 2018. p. 1–8.
[63] Meng A, Ge J, Yin H, et al. Wind speed forecasting based on wavelet
packet decomposition and artificial neural networks trained by crisscross
optimization algorithm. Energy Conversion and Management. 2016;
114:75–88.
[64] Askarzadeh A. A novel metaheuristic method for solving constrained
engineering optimization problems: crow search algorithm. Computers &
Structures. 2016;169:1–12.
[65] Yang XS and Deb S. Cuckoo search via Lévy flights. In: 2009 World
Congress on Nature & Biologically Inspired Computing (NaBIC). IEEE;
2009. p. 210–214.
[66] Eesa AS, Brifcani AMA, and Orman Z. Cuttlefish algorithm – a novel bio-
inspired optimization algorithm. International Journal of Scientific &
Engineering Research. 2013;4(9):1978–1986.
[67] Brammya G, Praveena S, Ninu Preetha N, et al. Deer hunting optimization
algorithm: a new nature-inspired meta-heuristic paradigm. The Computer
Journal. 2019. https://doi.org/10.1093/comjnl/bxy133.
[68] Storn R and Price K. Differential evolution – a simple and efficient heur-
istic for global optimization over continuous spaces. Journal of Global
Optimization. 1997;11(4):341–359.
[69] Kaveh A and Farhoudi N. A new optimization method: Dolphin echolo-
cation. Advances in Engineering Software. 2013;59:53–70.
[70] Mirjalili S. Dragonfly algorithm: a new meta-heuristic optimization tech-
nique for solving single-objective, discrete, and multi-objective problems.
Neural Computing and Applications. 2016;27(4):1053–1073.
[71] Agushaka JO, Ezugwu AE, and Abualigah L. Dwarf mongoose optimiza-
tion algorithm. Computer Methods in Applied Mechanics and Engineering.
2022;391:114570.
[72] Oyelade ON, Ezugwu AES, Mohamed TI, et al. Ebola optimization search
algorithm: a new nature-inspired metaheuristic optimization algorithm.
IEEE Access. 2022;10:16150–16177.
[73] Chen Z, Francis A, Li S, et al. Egret swarm optimization algorithm:
an evolutionary computation approach for model free optimization.
Biomimetics. 2022;7(4):144.
[74] Abedinpourshotorban H, Shamsuddin SM, Beheshti Z, et al.
Electromagnetic field optimization: a physics-inspired metaheuristic opti-
mization algorithm. Swarm and Evolutionary Computation. 2016;26:8–22.
[75] Wang GG, Deb S, and Coelho LdS. Elephant herding optimization. In:
2015 3rd International Symposium on Computational and Business
Intelligence (ISCBI). IEEE; 2015. p. 1–5.
[76] Harifi S, Khalilian M, Mohammadzadeh J, et al. Emperor Penguins
Colony: a new metaheuristic algorithm for optimization. Evolutionary
Intelligence. 2019;12(2):211–226.
[111] Mozaffari MH, Abdy H, and Zahiri SH. IPO: an inclined planes system
optimization algorithm. Computing and Informatics. 2016;35(1):222–240.
[112] Li C, Chen G, Liang G, et al. Integrated optimization algorithm: a meta-
heuristic approach for complicated optimization. Information Sciences.
2022;586:424–449.
[113] Jahangiri M, Hadianfard MA, Najafgholipour MA, et al. Interactive auto-
didactic school: a new metaheuristic optimization algorithm for solving
mathematical and structural design optimization problems. Computers &
Structures. 2020;235:106268.
[114] Tang D, Dong S, Jiang Y, et al. ITGO: invasive tumor growth optimization
algorithm. Applied Soft Computing. 2015;36:670–698.
[115] Rad HS and Lucas C. A recommender system based on invasive weed
optimization algorithm. In: 2007 IEEE Congress on Evolutionary
Computation. IEEE; 2007. p. 4297–4304.
[116] Javidy B, Hatamlou A, and Mirjalili S. Ions motion algorithm for solving
optimization problems. Applied Soft Computing. 2015;32:72–79.
[117] Katayama K and Narihisa H. Iterated local search approach using genetic
transformation to the traveling salesman problem. In: Proceedings of the
1st Annual Conference on Genetic and Evolutionary Computation, vol. 1;
1999. p. 321–328.
[118] Rao R. Jaya: a simple and new optimization algorithm for solving con-
strained and unconstrained optimization problems. International Journal of
Industrial Engineering Computations. 2016;7(1):19–34.
[119] Moein S and Logeswaran R. KGMO: a swarm optimization algorithm based on
the kinetic energy of gas molecules. Information Sciences. 2014;275:127–144.
[120] Gandomi AH and Alavi AH. Krill herd: a new bio-inspired optimization
algorithm. Communications in Nonlinear Science and Numerical
Simulation. 2012;17(12):4831–4845.
[121] Pereira JLJ, Francisco MB, da Cunha Jr SS, et al. A powerful Lichtenberg
Optimization Algorithm: a damage identification case study. Engineering
Applications of Artificial Intelligence. 2021;97:104055.
[122] Nematollahi AF, Rahiminejad A, and Vahidi B. A novel multi-objective
optimization algorithm based on Lightning Attachment Procedure
Optimization algorithm. Applied Soft Computing. 2019;75:404–427.
[123] Yazdani M and Jolai F. Lion optimization algorithm (LOA): a nature-
inspired metaheuristic algorithm. Journal of Computational Design and
Engineering. 2016;3(1):24–36.
[124] Cuevas E, Fausto F, and González A. The locust swarm optimization
algorithm. In: New Advancements in Swarm Algorithms: Operators and
Applications. Springer; 2020. p. 139–159.
[125] Kushwaha N, Pant M, Kant S, et al. Magnetic optimization algorithm for
data clustering. Pattern Recognition Letters. 2018;115:59–65.
[126] Tayarani-N MH and Akbarzadeh-T MR. Magnetic-inspired optimization
algorithms: operators and structures. Swarm and Evolutionary
Computation. 2014;19:82–101.
[159] Jia H, Peng X, and Lang C. Remora optimization algorithm. Expert Systems
with Applications. 2021;185:115665.
[160] Abualigah L, Abd Elaziz M, Sumari P, et al. Reptile Search Algorithm
(RSA): a nature-inspired meta-heuristic optimizer. Expert Systems with
Applications. 2022;191:116158.
[161] Labbi Y, Attous DB, Gabbar HA, et al. A new rooted tree optimization
algorithm for economic dispatch with valve-point effect. International
Journal of Electrical Power & Energy Systems. 2016;79:298–311.
[162] Ahmadianfar I, Heidari AA, Gandomi AH, et al. RUN beyond the meta-
phor: an efficient optimization algorithm based on Runge Kutta method.
Expert Systems with Applications. 2021;181:115079.
[163] Mirjalili S, Gandomi AH, Mirjalili SZ, et al. Salp Swarm Algorithm: a bio-
inspired optimizer for engineering design problems. Advances in
Engineering Software. 2017;114:163–191.
[164] Kaur A, Jain S, and Goel S. Sandpiper optimization algorithm: a novel
approach for solving real-life engineering problems. Applied Intelligence.
2020;50(2):582–619.
[165] Moosavi SHS and Bardsiri VK. Satin bowerbird optimizer: a new optimi-
zation algorithm to optimize ANFIS for software development effort esti-
mation. Engineering Applications of Artificial Intelligence. 2017;60:1–15.
[166] Glover F. Heuristics for integer programming using surrogate constraints.
Decision Sciences. 1977;8(1):156–166.
[167] Dhiman G and Kumar V. Seagull optimization algorithm: theory and its
applications for large-scale industrial engineering problems. Knowledge-
Based Systems. 2019;165:169–196.
[168] Masadeh R, Mahafzah BA, and Sharieh A. Sea lion optimization algorithm.
International Journal of Advanced Computer Science and Applications.
2019;10(5).
[169] Shabani A, Asgarian B, Salido M, et al. Search and rescue optimization
algorithm: a new optimization method for solving constrained engineering
optimization problems. Expert Systems with Applications. 2020;161:
113698.
[170] Emami H. Seasons optimization algorithm. Engineering with Computers.
2020;38:1–21.
[171] Dai C, Zhu Y, Chen W. Seeker optimization algorithm. In: International
Conference on Computational and Information Science. Springer; 2006.
p. 167–176.
[172] Fausto F, Cuevas E, Valdivia A, et al. A global optimization algorithm
inspired in the behavior of selfish herds. Biosystems. 2017;160:
39–55.
[173] Abedinia O, Amjady N, and Ghasemi A. A new metaheuristic algorithm
based on shark smell optimization. Complexity. 2016;21(5):97–116.
[174] Duan Q, Gupta VK, and Sorooshian S. Shuffled complex evolution
approach for effective and efficient global minimization. Journal of
Optimization Theory and Applications. 1993;76(3):501–521.
[209] Asgari HR, Bozorg Haddad O, Pazoki M, et al. Weed optimization algo-
rithm for optimal reservoir operation. Journal of Irrigation and Drainage
Engineering. 2016;142(2):04015055.
[210] Ahmadianfar I, Heidari AA, Noshadian S, et al. INFO: an efficient opti-
mization algorithm based on weighted mean of vectors. Expert Systems
with Applications. 2022;195:116516.
[211] Mirjalili S and Lewis A. The whale optimization algorithm. Advances in
Engineering Software. 2016;95:51–67.
[212] Bayraktar Z, Komurcu M, and Werner DH. Wind Driven Optimization
(WDO): a novel nature-inspired optimization algorithm and its application
to electromagnetics. In: 2010 IEEE Antennas and Propagation Society
International Symposium. IEEE; 2010. p. 1–4.
[213] Covic N and Lacevic B. Wingsuit flying search—a novel global optimi-
zation algorithm. IEEE Access. 2020;8:53883–53900.
[214] Shahrezaee M. Image segmentation based on world cup optimization
algorithm. Majlesi Journal of Electrical Engineering. 2017;11(2).
[215] Punnathanam V and Kotecha P. Yin-Yang-pair optimization: a novel
lightweight optimization algorithm. Engineering Applications of Artificial
Intelligence. 2016;54:62–79.
Chapter 6
Application and challenges of optimization in
Internet of Things (IoT)
Hemlata Sharma1, Shilpa Srivastava2 and Varuna Gupta2
6.1 Introduction
Optimization is crucial in the Internet of Things (IoT) to maximize the efficiency of
various IoT systems and devices. Some applications of optimization in the IoT
include resource optimization, energy optimization, network optimization, and
routing optimization. Despite the potential benefits of optimization in IoT, there are
also several challenges that need to be addressed like scalability, real-time pro-
cessing, security, and interoperability.
Overall, optimization is a critical component of IoT systems, and overcoming these challenges will be key to realizing the full potential of this emerging technology.
1 Sheffield Hallam University, UK
2 Christ (Deemed to be University) – Delhi, India
each other and with the cloud, and optimizing the network can improve efficiency,
reduce latency, and increase reliability [11]. Here are some common algorithms
used for network optimization in IoT:
● Routing algorithms: These algorithms determine the optimal path for data to
travel between devices in the network. This can involve selecting the shortest
or most reliable path, or balancing the load across different paths [12].
Examples of routing algorithms include Dijkstra’s algorithm, A* search, and
genetic algorithms.
● Resource allocation algorithms: These algorithms manage the allocation of
network resources such as bandwidth, power, and memory. They can be used
to balance the resources available to different devices in the network and
prevent overload or congestion. Examples of resource allocation algorithms
include time-division multiple access (TDMA), frequency-division multiple
access (FDMA), and code-division multiple access (CDMA) [13].
● Data aggregation algorithms: These algorithms reduce the amount of data
transmitted across the network by aggregating data from multiple devices
before sending it to the cloud. This can reduce network traffic and improve
efficiency. Examples of data aggregation algorithms include hierarchical
clustering, centroid-based clustering, and k-means clustering.
● Predictive analytics algorithms: These algorithms analyze data from the
network to predict future trends and patterns. This can be used to optimize the
network by anticipating changes in usage and adjusting network resources
accordingly. Examples of predictive analytics algorithms include decision
trees, random forests, and neural networks.
● Security algorithms: These algorithms protect the network from unauthorized
access, data breaches, and other security threats [14]. They can be used to
encrypt data, authenticate devices, and monitor network activity for anomalies.
Examples of security algorithms include AES, RSA, and SHA-256.
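As a concrete instance of the routing algorithms listed above, Dijkstra's algorithm computes least-cost paths over a weighted device graph. The node names and link costs below are invented for illustration; a cost could model per-hop latency or energy:

```python
import heapq

def dijkstra(graph, source):
    # Least-cost distances from `source` over a weighted digraph given
    # as {node: [(neighbour, link_cost), ...]}.
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

network = {
    "sensor": [("gateway", 2), ("relay", 5)],
    "relay": [("gateway", 1)],
    "gateway": [("cloud", 3)],
    "cloud": [],
}
print(dijkstra(network, "sensor"))
# {'sensor': 0, 'gateway': 2, 'relay': 5, 'cloud': 5}
```

The same graph abstraction also underpins the load-balancing variants mentioned above: replacing the cost function changes which path is “optimal.”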
Overall, network optimization is a critical component of IoT systems, and the
selection of appropriate algorithms depends on the specific requirements of the
network and the devices involved.
introduced. This can result in downtime, lost data, and other issues that can
impact the overall performance of the network.
● Privacy concerns: IoT networks often collect and transmit large amounts of
data, which can raise privacy concerns for users. This can result in decreased
trust and increased regulatory scrutiny.
Overall, while network optimization in IoT has many benefits, it is important
to carefully consider the potential disadvantages before implementing optimization
strategies. This will help to ensure that the benefits outweigh the risks and that the
network operates effectively and securely.
other particles in the swarm. The position and velocity of each particle are
updated at each iteration based on its own best solution found so far (pbest) and
the best solution found by any particle in the swarm (gbest). PSO has been
applied to a wide range of optimization problems and is known for its sim-
plicity and effective convergence characteristics.
● Bee Algorithm: It is a metaheuristic optimization algorithm that is inspired by
the foraging behavior of honeybees. It is used to find the global optimum of a
function in a search space. In BA, the swarm of bees is divided into employed
bees, onlooker bees, and scout bees. The employed bees search for nectar
sources in their assigned search space and share the information with the
onlooker bees. The onlooker bees then choose the best solution based on the
information shared by the employed bees. The scout bees, on the other hand,
explore new search spaces if the solution quality degrades. The best solution
found so far by the swarm is updated at each iteration. BA has been applied to
various optimization problems and is known for its fast convergence, robust-
ness, and ability to handle multimodal and complex functions.
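The pbest/gbest mechanics described above for PSO can be condensed into a short sketch. The inertia and acceleration coefficients below are common textbook defaults, not values given in this chapter, and the sphere function is an arbitrary test objective:

```python
import random

random.seed(3)

def pso(f, dim=2, n=20, iters=100, w=0.72, c1=1.49, c2=1.49,
        lo=-5.0, hi=5.0):
    # Classic PSO: each particle is pulled toward its own best position
    # found so far (pbest) and the best found by any particle (gbest).
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere)
print(sphere(best))  # a value very close to 0
```

The velocity update is the whole algorithm: inertia keeps exploration going, while the two attraction terms implement the pbest/gbest pulls described above.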
Figure: swarm intelligence techniques, including Ant Colony Optimization, Particle Swarm Optimization, and the Genetic Algorithm.
selection and evolution, EAs can help improve the performance, scalability, and
robustness of IoT systems.
from success and efficiency of the distributed, coordinated, and collective behavior
of swarms in the real world, researchers have tried to develop sophisticated meth-
ods and systems that make use of the techniques of the swarms to find solutions to
complex optimization problems.
Bio-inspired algorithms are a type of heuristic algorithms that are inspired by
nature and biological systems. In the context of the IoT, bio-inspired algorithms can
be used to solve various optimization problems and improve the efficiency of IoT
systems.
For example, genetic algorithms (GAs) are popular bio-inspired algorithms
that can be used for optimizing complex problems in IoT. In a GA, a population
of candidate solutions is evolved over multiple generations using genetic opera-
tions such as selection, crossover, and mutation. The solutions are evaluated
based on their fitness and the best solutions are selected for the next generation.
This process is repeated until a satisfactory solution is found or a stopping
criterion is met. Another example is the use of ACO algorithms in IoT. ACO
algorithms are inspired by the behavior of ant colonies, where ants work together
to find the shortest path between their colony and a food source. In IoT, ACO
algorithms can be used to find the shortest path for data transmission or to solve
routing problems.
In addition to GAs and ACO, there are other bio-inspired algorithms such as
PSO and ABC that can be applied to various optimization problems in IoT.
Overall, the use of bio-inspired algorithms in IoT can help improve the effi-
ciency and performance of IoT systems by providing better solutions to complex
problems.
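The shortest-path idea behind ACO can be demonstrated on a toy two-route choice between an IoT device and a gateway. The evaporation rate, ant count, and route lengths below are illustrative; ants deposit pheromone inversely proportional to route length, so the shorter route is gradually reinforced:

```python
import random

random.seed(0)

lengths = [1.0, 2.0]     # two candidate routes; route 0 is shorter
pheromone = [1.0, 1.0]   # equal initial pheromone on both routes
rho = 0.1                # evaporation rate

for _ in range(200):
    total = sum(pheromone)
    deposits = [0.0, 0.0]
    for _ant in range(10):
        # Each ant picks a route with probability proportional to pheromone.
        route = 0 if random.random() < pheromone[0] / total else 1
        deposits[route] += 1.0 / lengths[route]
    # Evaporate old pheromone, then add this iteration's deposits.
    pheromone = [(1 - rho) * p + d for p, d in zip(pheromone, deposits)]

print(pheromone[0] > pheromone[1])  # True: the shorter route dominates
```

This positive-feedback loop (more pheromone attracts more ants, which deposit more pheromone) is exactly how ACO converges on short data-transmission paths.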
● Increased reliability: Load optimization helps to ensure that the cognitive IoT
system is operating within its capacity and avoiding overloading, which can
increase the reliability and stability of the system.
● Cost savings: By optimizing the use of resources and reducing the demand on the
system, load optimization can help to reduce the overall cost of operating the cognitive
IoT system.
In conclusion, load optimization plays a crucial role in ensuring the efficient
and effective operation of the cognitive IoT system. By optimizing the use of
resources and reducing the demand on the system, load optimization can bring
several advantages, including improved resource utilization, better performance,
increased efficiency, increased reliability, and cost savings [42].
that require intelligence, learning, and adaptation [49,50]. They can be highly
effective at finding solutions to complex problems, but may require large
amounts of data to train the model and can be computationally intensive. In the
context of the IoT, bio-inspired optimization algorithms can be used to opti-
mize various aspects of the system, including resource allocation, routing, and
energy management.
In general, nature-inspired algorithms are well-suited to solving problems that
require the exploration of large solution spaces, while evolutionary algorithms are
well-suited to solving problems with multiple objectives. Bio-inspired algorithms
are often used in IoT applications that require learning and adaptation over time.
Ultimately, the choice of algorithm will depend on the specific needs and
requirements of the IoT system being designed.
References
[1] D. Yadav, “Blood coagulation algorithm: a novel bio-inspired meta-heuristic
algorithm for global optimization.” Mathematics, vol. 9, no. 23, pp. 3011,
2021, https://doi.org/10.3390/math9233011.
[2] E. Kanniga and P. S. Jadhav, “A study paper on forest fire detection using
wireless sensor network.” International Journal of Psychosocial Rehabilitation,
vol. 23, no. 4, pp. 397–407, 2019, https://doi.org/10.37200/ijpr/v23i4/pr190199.
[3] X. Cui and G. Chen, “Application of improved ant colony optimization in
vehicular ad-hoc network routing.” In: 2021 IEEE 3rd Eurasia Conference
on IOT, Communication and Engineering (ECICE), 2021, doi: 10.1109/
ecice52819.2021.9645678.
[4] S. J. Wagh, M. S. Bhende, and A. D. Thakare, “Blockchain and IoT opti-
mization.” In: Energy Optimization Protocol Design for Sensor Networks in
IoT Domains, pp. 205–224, 2022, doi: 10.1201/9781003310549-9.
[5] A. Bhavya, K. Harleen, and C. Ritu, Transforming the Internet of Things for
Next-Generation Smart Systems. IGI Global, 2021.
[6] A. Kumar, P. S. Rathore, V. G. Diaz, and R. Agrawal, Swarm Intelligence
Optimization. John Wiley & Sons, 2021.
[7] U. Kaur and Shalu, “Blockchain- and deep learning-empowered resource
optimization in future cellular networks, edge computing, and IoT: open
challenges and current solutions.” In: Blockchain for 5G-Enabled IoT,
pp. 441–474, 2021, doi: 10.1007/978-3-030-67490-8_17.
[8] Introduction to Internet of Things (Basic Concept, Challenges, Security
Issues, Applications & Architecture). Nitya Publications, 2020.
[9] M. A. R. Khan, S. N. Shavkatovich, B. Nagpal, et al., “Optimizing hybrid
metaheuristic algorithm with cluster head to improve performance metrics
on the IoT.” Theoretical Computer Science, vol. 927, pp. 87–97, 2022,
https://doi.org/10.1016/j.tcs.2022.05.031.
[10] K. Gulati, R. S. Kumar Boddu, D. Kapila, S. L. Bangare, N. Chandnani, and
G. Saravanan, “A review paper on wireless sensor network techniques in
[23] X. Wang and Y. Li, “Solving Shubert function optimization problem by using
evolutionary algorithm.” Journal of Computer Applications, vol. 29, no. 4,
pp. 1040–1042, 2009, doi: https://doi.org/10.3724/sp.j.1087.2009.01040.
[24] A. Ghaedi, A. K. Bardsiri, and M. J. Shahbazzadeh, “Cat hunting optimi-
zation algorithm: a novel optimization algorithm.” Evolutionary
Intelligence, vol. 16, pp. 417–438, 2021, doi: 10.1007/s12065-021-00668-w.
[25] M. Emami, A. Amini, and A. Emami, “Stock portfolio optimization with
using a new hybrid evolutionary algorithm based on ICA and GA: recursive-
ICA-GA (Case Study of Tehran Stock Exchange).” SSRN Electronic
Journal, 2012, doi: 10.2139/ssrn.2067126.
[26] Y. Gao and K. Zhu, “Hybrid PSO-Solver algorithm for solving optimization
problems.” Journal of Computer Applications, vol. 31, no. 6, pp. 1648–1651,
2012, doi: 10.3724/sp.j.1087.2011.01648.
[27] A. Majumder, “Termite alate optimization algorithm: a swarm-based nature
inspired algorithm for optimization problems.” Evolutionary Intelligence,
vol. 16, pp. 1–21, 2022, doi: 10.1007/s12065-022-00714-1.
[28] S. Yi and S. Yue, “Study of logistics distribution route based on improved
genetic algorithm and ant colony optimization algorithm.” Internet of Things
(IoT) and Engineering Applications, vol. 1, pp. 11–17, 2016, doi: 10.23977/iotea.2016.11003.
[29] G. Yu and L. Kang, “New evolutionary algorithm based on gridding for
dynamic optimization problems.” Journal of Computer Applications, vol. 28,
no. 2, pp. 319–321, 2008, doi: 10.3724/sp.j.1087.2008.00319.
[30] G. G. Wang, S. Deb, and L. D. S. Coelho, “Earthworm optimization algo-
rithm: a bio-inspired metaheuristic algorithm for global optimization pro-
blems.” International Journal of Bio-Inspired Computation, vol. 1, no. 1,
pp. 1, 2015, doi: 10.1504/ijbic.2015.10004283.
[31] M. Kumar, S. Kumar, P. K. Kashyap, et al., “Green communication in the
Internet of Things: a hybrid bio-inspired intelligent approach.” Sensors,
vol. 22, no. 10, p. 3910, 2022, doi: 10.3390/s22103910.
[32] S. Arslan and S. Aslan, “A modified artificial bee colony algorithm for clas-
sification optimization.” International Journal of Bio-Inspired Computation,
vol. 1, no. 1, pp. 1, 2022, doi: 10.1504/ijbic.2022.10049021.
[33] X. S. Yang, “Firefly algorithm, stochastic test functions and design optimi-
sation.” International Journal of Bio-Inspired Computation, vol. 2, no. 2,
pp. 78, 2010, doi: 10.1504/ijbic.2010.032124.
[34] D. Devassy, J. Immanuel Johnraja, and G. J. L. Paulraj, “NBA: novel bio-
inspired algorithm for energy optimization in WSN for IoT applications.”
The Journal of Supercomputing, vol. 78, no. 14, pp. 16118–16135, 2022,
doi: 10.1007/s11227-022-04505-4.
[35] S. Hamrioui and P. Lorenz, “Bio inspired routing algorithm and efficient
communications within IoT.” IEEE Network, vol. 31, no. 5, pp. 74–79, 2017,
doi: 10.1109/mnet.2017.1600282.
Optimization is a set of methods that provides the best possible solution for a
given problem. In this chapter, the role of optimization in healthcare systems,
medical diagnosis, biomedical informatics, biomedical image processing, ECG
classification, feature extraction and classification, and intelligent detection of
disordered systems is discussed in detail. Approaches for predictive analytics in
healthcare, together with innovations and technologies for smart healthcare, are
discussed using various methods. Some of the issues and challenges in using
optimization algorithms for smart healthcare and wearables are also demonstrated
in this chapter.
7.1 Introduction
1 Department of Physics, GLA University, India
2 Department of Electronics and Communication Engineering, National Institute of Technology – Patna, India
174 Nature-inspired optimization algorithms and soft computing
Figure 7.1 Branches that are correlated with digital image processing (input image to output description; image classification, feature selection, and feature extraction)
feature of the image, and then these features are classified, a step called feature
classification [5]. These features are then selected (feature selection), and the
selected features are extracted (feature extraction) to give the resultant output used
to find the abnormality in the image [6]. This image processing is widely used in
the healthcare system for the early detection of chronic and harmful diseases such
as heart disorders [7], lung cancer, breast cancer [8], and brain cancer [9].
These branches of digital image processing can be implemented using machine
learning (ML) [10] or deep learning (DL) [11] methods. ML is an important
advanced technology in health informatics. ML is a growing technology used to
mine knowledge from data, i.e., automatic learning from volumetric data. ML
implies proposing algorithms which can learn and progress over time and can be
used for predictions [12]. Various applications of ML [13] are machine vision,
biometric recognition, handwriting recognition, medical diagnosis, alignment of
biological sequences, drug design, speech recognition, text mining, natural lan-
guage processing (NLP), fault diagnostics, load forecasting, control and automa-
tion, and business intelligence. In this chapter, ML is discussed for the early
detection of disorders or diseases in the human body; it can also be used in smart
healthcare systems and technologies. The main objective of an ML system is to
achieve high accuracy, which supports patient safety and healthcare quality [14]. DL
[15] techniques are the advanced version of ML techniques; they reduce the need to solve the problem manually.
Optimization applications and implications in biomedicines and healthcare 175
Deep learning allows the computer to
build complex concepts out of simpler concepts. One of the examples of a DL
model is the feed-forward deep network or multilayer perceptron (MLP) [16]. An
MLP is just a mathematical function mapping some set of input values to output
values. The function is formed by composing many simpler functions [17].
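As a concrete illustration of this composition of simpler functions, the following is a minimal sketch in Python with NumPy; the layer sizes, ReLU activation, and random weights are illustrative assumptions, not taken from the chapter:

```python
import numpy as np

def relu(z):
    # Elementwise nonlinearity applied between layers.
    return np.maximum(0.0, z)

def mlp_forward(x, layers):
    """Forward pass of an MLP: compose one affine map plus a
    nonlinearity per hidden layer; the final layer is left linear."""
    for W, b in layers[:-1]:
        x = relu(W @ x + b)
    W, b = layers[-1]
    return W @ x + b

# Tiny 2-3-1 network with fixed random weights, for illustration only.
rng = np.random.default_rng(0)
layers = [(rng.normal(size=(3, 2)), np.zeros(3)),
          (rng.normal(size=(1, 3)), np.zeros(1))]
y = mlp_forward(np.array([0.5, -1.0]), layers)
print(y.shape)  # (1,)
```

The network is literally a function of functions: each `(W, b)` pair is a simple affine map, and the composition of these maps with the nonlinearity yields the overall input-to-output mapping.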
An optimization algorithm (OA) [18] seeks to obtain the best result using
minimum resources. Although optimization may not seem like a task for
machine learning, optimization techniques are frequently used in machine learning
algorithms. Evolutionary computation is a field inspired by evolutionary biology that
creates search and optimization techniques to aid in the completion of challenging
tasks and problems. Genetic algorithms (GA) [19], evolution strategies, evolutionary
programming, and genetic programming are the main branches of evolutionary
computation; all build on fundamental characteristics of the evolution process.
The objective of optimization is to minimize the different types of constraints so as to
improve the performance, efficacy, and accuracy of the proposed system [20]. OAs
can be applied almost anywhere, and there are various kinds of optimization
algorithms such as cuckoo search [21], ant colony [22], honey bee [23], fruit fly [24],
particle swarm [25], random search [26], and interval halving [27].
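Among the algorithms listed, random search [26] is the simplest to sketch: sample candidate points uniformly in a box and keep the best one. The following minimal example uses the sphere objective and search bounds as illustrative assumptions:

```python
import numpy as np

def random_search(f, bounds, iters=2000, seed=0):
    """Random search: sample uniformly inside the box and keep the
    best point seen so far. bounds is one (low, high) pair per dimension."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    best_x, best_f = None, np.inf
    for _ in range(iters):
        x = rng.uniform(lo, hi)
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x, fx
    return best_x, best_f

# Minimize the sphere function f(x) = sum(x_i^2) over [-5, 5]^2.
x_star, f_star = random_search(lambda x: float(np.sum(x ** 2)),
                               [(-5, 5), (-5, 5)])
print(f_star)  # close to 0
```

Population-based methods such as particle swarm or cuckoo search replace the uniform sampling step with guided moves, but the "evaluate and keep the best" skeleton is the same.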
been proposed for nurse scheduling programs [43]. A mixed integer programming
(MIP) model can also be used for workforce scheduling [44].
The primary function of an optimization algorithm is to support the location of
medical facilities, organ transplantation, vaccine development, and disease
screening. GA has been used for human resource (HR) planning, with fuzzy Delphi
used to identify the factors influencing HR planning. This strategy promotes 36%
of the talents and includes recognizing the competencies that could be in demand in
emergencies [45].
selection of the disease, any of the OAs can be used. One of the most common cancers,
affecting roughly one million people worldwide, is skin cancer; its early detection
can initiate early treatment and a cure for patients. The thermal exchange OA is used
to detect skin cancer at its early stages [62]. A hybrid system is proposed to identify
various diseases with an optimizing classifier parameter for the support vector
machine (SVM) [63] and MLP. A hybrid intelligent system has been proposed
considering objectives such as prediction accuracy [64], sensitivity, and specificity
[65]. Global optimization-based techniques have been used to improve the diag-
nosis of breast cancer in patients [66]. Computer-aided diagnosis (CAD) [67] is
designed with the aid of SVM, where parameters are altered by optimization algorithms [14]. Over the past decade, computer-aided diagnostics have proven to be effective
and accurate for diagnosis. It has been suggested to use deep learning techniques
along with supervised machine learning and meta-heuristic algorithms to conduct
an accurate diagnosis [68]. For feature extraction, a convolutional neural
network (CNN) [69] is mainly used, and for feature selection, ant colony optimization is
used [68]. Accuracy of more than 99% is obtained for the diagnosis of brain tumors
and COVID-19. Cancer [70], diabetes, and heart disease can all be detected using
an artificial neural network (ANN) that has been tuned using the directed bee
colony (DBC) algorithm [71]. For the diagnosis of diseases including Alzheimer’s
disease, brain disorders, diabetes, liver diseases, COVID-19, etc., metaheuristic
optimization algorithms have been utilized [60]. Cancer classification and automatic
detection have been proposed for 1,000 microscopic biopsy images [6]. The human
body comprises four essential tissue types: connective, epithelial, muscular, and nervous.
solution for the complex problems of bioinformatics [51]. This algorithm provides
fast and reasonably accurate solutions to such problems [25]. ACO and cuckoo
algorithms [76] are computationally intensive and complex, whereas the metaheuristic
algorithm is quite simple [21]. Gradient descent optimization [77] is a
traditional minimization technique that determines how the weights
are changed. The least mean square (LMS) algorithm [78] computes the weights
that minimize the mean square error (MSE) between the desired and actual outputs.
Probabilistically, the LMS algorithm’s solution and the MSE optimization’s
solution are equivalent.
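The LMS weight update described above can be sketched as follows; the step size, epoch count, and toy linear target below are illustrative assumptions:

```python
import numpy as np

def lms(X, d, mu=0.01, epochs=50):
    """Least mean square: per-sample gradient steps w += mu * e * x,
    where e = d - w.x is the instantaneous output error."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, target in zip(X, d):
            e = target - w @ x          # instantaneous error
            w += mu * e * x             # step down the squared-error gradient
    return w

# Recover a known linear map d = X @ [2, -1] from sampled inputs.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
d = X @ np.array([2.0, -1.0])
w = lms(X, d)
print(w)  # approaches [2, -1]
```

Because each update only uses the current sample's error, LMS is the stochastic counterpart of the exact MSE minimization mentioned in the text.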
x = [x1, x2, x3, ..., xn]   (7.1)
The classifier [83] takes the feature vector as input and performs the classifi-
cation. Any pattern recognition task such as classification or clustering can be done
to detect the object. The essential characteristics of good features are as follows:
(i) Robustness – the property of a feature’s invariance to operations such as
translation, rotation, noise, and illumination is called robustness.
(ii) Shift invariance – the capability to maintain its state when shift
operations are performed.
(iii) Rotation invariance – the capability to maintain its original state
when it is rotated.
(iv) Size invariance – this is the ability to hold steady as its size changes.
(v) Mirror, shear, and affine invariance – features hold true when mirroring,
shearing, and affine transformations are applied.
(vi) Occlusion invariance – the attribute of the features that do not change when
all or a portion of the item is obscured is referred to as occlusion invariance.
(vii) Discrimination – there should not be any overlapping features and the
properties should be able to tell one object apart from another.
(viii) Reliability – similar objects should have comparable values; hence, the
values should be reliable.
(ix) Independence – if two traits are statistically uncorrelated from one another,
they are said to be independent. In other words, changing the value of one
attribute does not change the value of the others.
(x) Resistance to noise – an effective feature should be impervious to noise,
artifacts, etc.
(xi) Compactness – the features must be few in number in order to be displayed
in a short space.
Not all features exhibit these properties. Hence, suitable parameters that
distinguish the object uniquely should be identified and extracted. The feature can
be classified as shown in Figure 7.3.
After image segmentation, the next step is feature extraction. The technique
of extracting and creating features to help in object classification is known as
feature extraction [80]. This stage is crucial since the features’ quality affects how
well they perform the categorization task.
Figure 7.3 Classification of features: by nature (general features, including pixel values and global features, vs. application-specific features), by nature of object (edge-based vs. region-based features, each with structural features), by domain (space domain-based vs. frequency domain-based features), and by information (preserving vs. non-preserving features)
The feature vector is a vector that stores
the expanded set of features. The categories of feature extraction include
histogram-based features such as mean, median, variance, skewness, kurtosis,
and first- and second-order moments, as well as texture features, also known as
Haralick features, which can be smooth or rough, etc. [5]. Area, perimeter, orientation,
equivalent diameter, circularity, eccentricity, image curvature, and wavelet
features [84] are some examples of geometric features, also referred to as shape
features, that are used for the detection of transient changes in abnormalities like
microcalcification and spiculated masses, among others.
mean, or z-score, is the raw score x minus the population mean, divided by the
population’s standard deviation. The ratio of the variance to an image’s mean is
known as the normalized gray level variance (Nvar). The gradient of an image
gives information about fine edges and structures in an image. The mean energy of
the gradient measures the energy of the gradient of the image. Texture characteristics
are crucial in the classification of mammograms. Haralick’s texture features have
been obtained from the gray level co-occurrence matrix (GLCM) probabilities as
follows: homogeneity or angular second moment, contrast, correlation, variance1,
variance2, standard deviation1, standard deviation2, dissimilarity measure, local
homogeneity or inverse difference moment, energy, entropy, Haralick’s correlation,
cluster shade, cluster prominence, sum average, sum entropy, sum variance,
difference variance, difference entropy, information measure1, and information
measure2. Geometric features, or geometric shape features, describe the geometric
properties of the masses. Geometric elements are crucial to medical diagnostics
for breast cancer detection.
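As a rough sketch of how a few of the listed descriptors are derived from GLCM probabilities, the following example builds a co-occurrence matrix for a single horizontal neighbor offset and computes contrast, energy (angular second moment), and local homogeneity; the toy image, the offset, and the symmetric normalization are illustrative assumptions:

```python
import numpy as np

def glcm(img, levels):
    """Gray level co-occurrence matrix for the horizontal neighbor
    offset (0, 1), made symmetric and normalized to probabilities."""
    P = np.zeros((levels, levels))
    for i, j in zip(img[:, :-1].ravel(), img[:, 1:].ravel()):
        P[i, j] += 1
    P = P + P.T                      # symmetric pair counts
    return P / P.sum()

def haralick_subset(P):
    """Three of Haralick's GLCM features: contrast, energy (angular
    second moment), and local homogeneity (inverse difference moment)."""
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)
    energy = np.sum(P ** 2)
    homogeneity = np.sum(P / (1.0 + (i - j) ** 2))
    return contrast, energy, homogeneity

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]], dtype=int)
c, e, h = haralick_subset(glcm(img, levels=4))
print(c, e, h)
```

A perfectly uniform image concentrates all GLCM mass on the diagonal, giving contrast 0 and energy and homogeneity 1, which matches the intuition behind these texture measures.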
The features that are frequently utilized in automatic speech and pattern
recognition are Mel frequency cepstral coefficients (MFCC) [90]. MFCC feature
extraction processing steps are shown in Figure 7.4.
The Mel frequency cepstral coefficients are given as:

C[r] = Σ_{k=0}^{K−1} 2 y[k] cos(π r (2k + 1) / (2K))   (7.2)

where C[r] is the MFCC coefficient and y[k] is the input signal.
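Equation (7.2) can be sketched directly in code. In the toy example below, y[k] is a short illustrative vector standing in for the input signal (in a full MFCC pipeline these would be the log mel filter-bank outputs, an assumption noted here):

```python
import numpy as np

def mfcc_dct(y):
    """DCT step of MFCC extraction, following Eq. (7.2):
    C[r] = sum_{k=0}^{K-1} 2*y[k]*cos(pi*r*(2k+1)/(2K))."""
    K = len(y)
    k = np.arange(K)
    return np.array([np.sum(2.0 * y * np.cos(np.pi * r * (2 * k + 1) / (2 * K)))
                     for r in range(K)])

y = np.array([1.0, 0.5, 0.25, 0.125])   # toy stand-in for y[k]
C = mfcc_dct(y)
print(C[0])  # 3.75, i.e. 2 * sum(y), since cos(0) = 1 for r = 0
```

The r = 0 coefficient collapses to twice the sum of the inputs, which is a quick sanity check on the implementation.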
[Figure: flowchart — input → feature extraction → statistical test → apply performance test and check if OK (if not, repeat) → end]
7.7.6 Classification
A naive Bayes (NB) [83] classifier is based on classical Bayesian statistics; it uses
all the attributes and permits them to contribute to the decision, treating the features
as equally important and independent of each other given the class. The naïve
Bayes rule is as follows:

y_NB = argmax_q P(y_q) Π_j P(x_j | y_q)   (7.3)

where y_NB signifies the class output by the naïve Bayes classifier, P(x_j | y_q) is the
class-conditional probability, and P(y_q) is the prior probability of class y_q.
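Equation (7.3) can be sketched with frequency-count estimates of the prior and class-conditional probabilities; the tiny binary dataset below is an illustrative assumption:

```python
import numpy as np

def nb_train(X, y):
    """Estimate priors P(y_q) and per-feature class-conditional
    probabilities P(x_j | y_q) by frequency counts."""
    classes = np.unique(y)
    priors = {q: np.mean(y == q) for q in classes}
    cond = {}
    for q in classes:
        Xq = X[y == q]
        cond[q] = [{v: np.mean(col == v) for v in np.unique(X[:, j])}
                   for j, col in enumerate(Xq.T)]
    return priors, cond

def nb_predict(x, priors, cond):
    """y_NB = argmax_q P(y_q) * prod_j P(x_j | y_q), as in Eq. (7.3)."""
    def score(q):
        return priors[q] * np.prod([cond[q][j].get(v, 0.0)
                                    for j, v in enumerate(x)])
    return max(priors, key=score)

# Toy data: two binary features; the class equals the first feature.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 1, 1])
priors, cond = nb_train(X, y)
print(nb_predict(np.array([1, 0]), priors, cond))  # 1
```

Note how the feature-independence assumption turns the joint likelihood into a simple product over per-feature tables.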
K-nearest neighbor, also known as KNN, is a data-driven classifier that is
straightforward and intuitive like the NB classifier [94]. The K-NN algorithm is a
non-parametric technique that makes no assumptions regarding the relationship
between the characteristics [x1, x2, x3, ..., xn] and class membership; the
class-conditional densities P(x_j | y_q) of the feature-vector distributions are not
required for classification.
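A minimal KNN sketch in the same spirit, with toy data and Euclidean distance as illustrative assumptions:

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """k-nearest neighbors: vote among the k closest training points,
    making no assumptions about the feature/class relationship."""
    dists = np.linalg.norm(X_train - x, axis=1)     # Euclidean distances
    nearest = y_train[np.argsort(dists)[:k]]        # labels of k closest
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]                # majority vote

X_train = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([0.95, 1.0]), k=3))  # 1
```

The classifier stores the training data itself rather than fitted densities, which is what "non-parametric" means here.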
Quantities that can have a wide range of values are ideally suited for estimation
using logistic regression models. Because classification issues are so prevalent,
statisticians have developed a method to use regression models for this problem.
The method or model that is produced is known as the logistic regression method or
model. For classification, Fisher’s linear discriminant [95] can also be utilized; it
reduces this n-dimensional classification challenge to a potentially simpler
one-dimensional problem. Rosenblatt [96] proposed the perceptron machine,
whose architecture encodes the structure of a linear discriminant function. Support
vector regression is a logical progression from classification-based techniques. All
the characteristics of SVM classifiers are retained in SVM regression analysis.
SVM uses an optimization process that is effective for both linearly separable and
inseparable samples in order to obtain the weight vector that maximizes the
margin. SVM hinges on these two mathematical operations:
(i) The input nonlinear pattern is mapped into a high-dimensional space.
(ii) The finest hyperplane for linearly separating the feature vectors obtained in
step (i) is built.
Using previously unseen data, the SVM’s largest-margin criterion function offers a
novel approach and ensures accurate classification. The most obvious criterion
function for classification is the number of samples misclassified by the weight
vector; since this function is piecewise constant, it is naturally a weak candidate for
gradient search. The perceptron criterion function is an alternative, and neural
networks (NN) can be used to overcome the perceptron’s limitations. Gradient
approaches for minimization are not directly applicable to the perceptron criterion
function, since it takes into account only incorrectly categorized data. NNs
generally use gradient algorithms for minimization while solving regression
problems, taking into account all samples and the minimal squared-error criterion.
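The perceptron rule mentioned above can be sketched as follows; the toy separable data, zero initialization, and epoch count are illustrative assumptions:

```python
import numpy as np

def perceptron_train(X, y, epochs=20):
    """Rosenblatt's perceptron rule: on each misclassified sample,
    move the weight vector toward (or away from) that sample."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # absorb the bias into w
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):               # labels in {-1, +1}
            if yi * (w @ xi) <= 0:              # misclassified or on boundary
                w += yi * xi
    return w

# Linearly separable toy problem: class is the sign of the first coordinate.
X = np.array([[2.0, 1.0], [1.5, -1.0], [-2.0, 0.5], [-1.0, -0.5]])
y = np.array([1, 1, -1, -1])
w = perceptron_train(X, y)
print(np.sign(np.hstack([X, np.ones((4, 1))]) @ w))  # matches the labels
```

Only misclassified samples trigger an update, which is exactly the property of the perceptron criterion discussed in the text.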
the Taylor series and the bird swarm algorithm (BSA) are combined to create the
Taylor-BSA [103].
List of abbreviations
ACO Ant colony optimization
AI Artificial Intelligence
ANN Artificial neural network
BSA Bird swarm algorithm
CAD Computer-aided diagnosis
CE Capsule endoscopy
CHOA Chimp optimization algorithm
CoT Cloud of Things
CT Computer tomography
CV Computer vision
DBC Directed bee colony
DBN Deep belief network
DL Deep learning
DWT Discrete wavelet transforms
FCM Fuzzy-c-means
GDP Gross domestic product
GLCM Gray level co-occurrence matrix
HR Human resource
IoMT Internet of Medical Things
IoT Internet of Things
LPC Linear predictive coding
MAD Median absolute deviation
MFCC Mel frequency cepstral coefficients
MIP Mixed integer programming
ML Machine learning
MLP Multilayer perceptron
MRI Magnetic resonance imaging
NB Naïve Bayes
NLP Natural language processing
NN Neural networks
Nvar Normalized gray level variance
OA Optimization algorithm
OR Operating room
PSO Particle swarm optimization
RFNN Recurrent fuzzy neural networks
SSA Salp swarm algorithm
SVM Support vector machine
SVNN Support vector neural network
References
[1] G. Stockman and L. G. Shapiro, Computer Vision. Prentice Hall PTR, 2001.
[2] R. Szeliski, Computer Vision: Algorithms and Applications. Springer Nature,
2022.
[3] P. Kumar, S. Srivastava, and R. Srivastava, “Basic understanding of medical
imaging modalities,” in High-Performance Medical Image Processing,
Apple Academic Press, pp. 1–17.
[4] S. Srivastava, N. Sharma, R. Srivastava, and S. K. Singh, “Restoration of
digital mammographic images corrupted with quantum noise using an
adaptive total variation (TV) based nonlinear filter,” in 2012 International
Conference on Communications, Devices and Intelligent Systems (CODIS),
2012, pp. 125–128.
[5] V. P. Singh, S. Srivastava, and R. Srivastava, “Effective mammogram
classification based on center symmetric-LBP features in wavelet domain
using random forests,” Technol. Heal. Care, vol. 25, no. 4, pp. 709–727,
2017, doi: 10.3233/THC-170851.
[6] R. Kumar, R. Srivastava, and S. Srivastava, “Detection and classification of
cancer from microscopic biopsy images using clinically significant and
biologically interpretable features,” J. Med. Eng., vol. 2015, 457906, 2015.
[7] M. Nath, S. Srivastava, N. Kulshrestha, and D. Singh, “Detection and loca-
lization of S1 and S2 heart sounds by 3rd order normalized average Shannon
energy envelope algorithm,” Proc. Inst. Mech. Eng. Part H J. Eng. Med.,
vol. 235, no. 6, pp. 615–624, 2021.
[8] A. Kumar, P. Kumar, and S. Srivastava, “A skewness reformed complex
diffusion based unsharp masking for the restoration and enhancement of
Poisson noise corrupted mammograms,” Biomed. Signal Process. Control,
vol. 73, no. August 2021, p. 103421, 2022, doi: 10.1016/j.bspc.2021.103421.
[9] R. R. Kumar, A. Kumar, and S. Srivastava, “Anisotropic diffusion based
unsharp masking and crispening for denoising and enhancement of MRI
images,” in: 2020 International Conference on Emerging Frontiers in
Electrical and Electronic Technologies (ICEFEET 2020), pp. 0–5, 2020,
doi: 10.1109/ICEFEET49149.2020.9186966.
[10] T. M. Mitchell, Machine Learning, vol. 1. McGraw-Hill New York, 2007.
[11] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, no.
7553, pp. 436–444, 2015.
[12] B. Nithya and V. Ilango, “Predictive analytics in health care using machine
learning tools and techniques,” in 2017 International Conference on
Intelligent Computing and Control Systems (ICICCS), 2017, pp. 492–499.
[13] J. G. Carbonell, R. S. Michalski, and T. M. Mitchell, “An overview of
machine learning,” Mach. Learn., pp. 3–23, 1983.
[14] E. V. Bernstam, J. W. Smith, and T. R. Johnson, “What is biomedical
informatics?,” J. Biomed. Inform., vol. 43, no. 1, pp. 104–110, 2010.
[105] I. Goyal, A. Singh, and J. K. Saini, “Big Data in healthcare: a review,” in:
2022 1st International Conference on Informatics (ICI), 2022, pp. 232–234.
[106] S. K. Reddy, T. Krishnaveni, G. Nikitha, and E. Vijaykanth, “Diabetes
prediction using different machine learning algorithms,” in: 2021 Third
International Conference on Inventive Research in Computing Applications
(ICIRCA), 2021, pp. 1261–1265.
[107] V. Dhar, “Big data and predictive analytics in health care,” Big Data,
vol. 2, no. 3. New Rochelle, NY, pp. 113–116, 2014.
[108] R. Amarasingham, R. E. Patzer, M. Huesch, N. Q. Nguyen, and B. Xie,
“Implementing electronic health care predictive analytics: considerations
and challenges,” Health Aff., vol. 33, no. 7, pp. 1148–1154, 2014.
[109] B. Boukenze, H. Mousannif, A. Haqiq, et al., “Predictive analytics in
healthcare system using data mining techniques,” Comput. Sci. Inf. Technol.,
vol. 1, pp. 1–9, 2016.
[110] A. Ray and A. K. Chaudhuri, “Smart healthcare disease diagnosis and
patient management: innovation, improvement and skill development,”
Mach. Learn. with Appl., vol. 3, p. 100011, 2021.
[111] G. Muhammad, F. Alshehri, F. Karray, A. El Saddik, M. Alsulaiman, and
T. H. Falk, “A comprehensive survey on multimodal medical signals fusion
for smart healthcare systems,” Inf. Fusion, vol. 76, pp. 355–375, 2021.
[112] E. Freeman, I. E. Agbehadji, and R. C. Millham, “Nature-inspired search
method for location optimization of smart health care system,” in 2019
International Conference on Mechatronics, Remote Sensing, Information
Systems and Industrial Information Technologies (ICMRSISIIT), 2020,
vol. 1, pp. 1–9.
[113] G. L. Tortorella, F. S. Fogliatto, A. Mac Cawley Vergara, R. Vassolo, and
R. Sawhney, “Healthcare 4.0: trends, challenges and research directions,”
Prod. Plan. Control, vol. 31, no. 15, pp. 1245–1260, 2020.
[114] K. Ashok and S. Gopikrishnan, “Statistical analysis of remote health
monitoring based iot security models and deployments from a pragmatic
perspective,” IEEE Access, vol. 11, pp. 2621–2651, 2023.
[115] A. Kumar, R. Krishnamurthi, A. Nayyar, K. Sharma, V. Grover, and E.
Hossain, “A novel smart healthcare design, simulation, and implementation
using healthcare 4.0 processes,” IEEE Access, vol. 8, pp. 118433–118471,
2020.
[116] R. M. K. Mohamed, O. R. Shahin, N. O. Hamed, H. Y. Zahran, and M. H.
Abdellattif, “Analyzing the patient behavior for improving the medical
treatment using smart healthcare and IoT-based deep belief network,” J.
Healthc. Eng., vol. 2022, 2051642, 2022.
[117] M. M. E. Mahmoud, J. J. P. C. Rodrigues, K. Saleem, J. Al-Muhtadi, N.
Kumar, and V. Korotaev, “Towards energy-aware fog-enabled cloud of
things for healthcare,” Comput. & Electr. Eng., vol. 67, pp. 58–69, 2018.
[118] M. M. E. Mahmoud, J. J. P. C. Rodrigues, S. H. Ahmed, et al., “Enabling
technologies on cloud of things for smart healthcare,” IEEE Access, vol. 6,
pp. 31950–31967, 2018.
1 National Institute of Technology Patna (NITP), India
cover the whole process of making something, turning factories into smart places.
Intelligent equipment, warehousing systems, and production facilities that provide
end-to-end integration constitute the links. Its integration includes inbound logis-
tics, production, marketing, outbound logistics, and service. Factory automation is
projected to enhance tighter collaboration between business partners such as sup-
pliers and customers and among employees, hence creating new opportunities for
all parties involved to profit from one another. Figure 8.1 depicts the evolution of
factory digitalization.
As a result of industrial digitization, we can expect to see new business models
and big opportunities, and we can already see the beginnings of this trend. Direct
metal laser sintering (DMLS), an additive method like 3-D printing in which a laser
fuses metal powder layer by layer, is used by some businesses to fabricate low-volume
metal products from digital 3-D models. These components are made of fully
dense metal with superior mechanical qualities. The DMLS process is
intriguing because it can create intricate geometries that cannot be made using
conventional machining techniques.
Implementing a digitization plan that considers how the organization’s opera-
tions and structure will change is very important. On a cultural level, it can be hard to
accept change, and leaders and other important people may be resistant. Because of
this, it is important to set digitalization goals, create a digitalization strategy, choose
the right technological enablers, build technology leadership, train your staff, and
create a digital culture. All of these tasks should be done at the same time.
The term “production monitoring” refers to the process of inspecting your products
on-site every working day of production. An inspector will examine your
manufacturing facility to make sure it meets your requirements, pick random units
for inspection, and look for and fix any flaws found [7].
Monitoring production gives you information about the whole manufacturing
process, makes you aware of any mistakes that might have happened at any point, and
helps you avoid delivery delays. With production monitoring, we can keep an eye on
several parameters at the spindle, machine, and factory levels at the same time.
Automated and real-time data collection makes it possible to look at the data and find
bottlenecks and other useful information. In the context of production monitoring,
tracking entails keeping tabs on information in order to analyze the health of machinery.
By analyzing production data, we can monitor and improve our processes more
effectively. Enterprise resource planning (ERP) systems have the potential to
incorporate monitoring technologies that expand their scope and improve their
precision across the organization.
Nowadays, there is strong motivation to automate all monitoring operations. When
we refer to automated monitoring, we mean a system of continuous monitoring.
This sort of system is intended to notify a certified operator through an alarm,
dialer, or pager in the event that a water treatment facility or water distribution
system fails during regular operation. Automation reduces expenses and time spent on
(iii) Specialty products: Specialty products are often status or prestige purchases, so it
is important to use messaging to build brand loyalty and keep customers through
new product launches, feature updates, future directions, and brand innovations.
(iv) Unsought products: Promoting brand recognition can help customers get to
know a company’s product and image, which can build trust and keep cus-
tomers for longer. For example, a pest control company may use a catchy
musical jingle on local radio and television stations to increase the likelihood
that customers think of the company before another.
Product classification can help you understand how different products fit into
the greater context of making purchases in a certain market, which is why product
category research is so crucial. There are numerous types of products and services
currently available on the market. Because the conditions surrounding the purchase
of each type of product are distinct, it is feasible to build marketing strategies
adapted to the product, the consumer, and the purchasing style in order to
increase sales.
When used on products that are already in high demand, aggressive marketing
methods that are meant for products that are not selling well may be unnecessary or
even harmful. Similarly, you may not want to invest money from a marketing
budget on a widespread advertising campaign for a highly specialized product that
is only likely to appeal to a portion of the market. Spending some time considering
product categories can help you make more efficient use of advertising resources
by providing a more focused, precise message and rooting it in the customer
behavior insights that are related to product classification. This might help you
make better use of your advertising budget.
commodity prices, and difficulty transferring across industries due to the unpre-
cedented spike in client demand [13]. Depending on how production processes
work, digital transformation and the Internet of Things (IoT) could be both helpful
and harmful. Because of this, some technological advances, like AI, robotics and
drones, electric cars, and on-demand delivery, could change the way we look at the
traditional supply chain. There are a number of different ways this could be
accomplished. Even if the long-term goal is to increase the efficiency and cost-effectiveness
of e-commerce processes, one of the most difficult aspects of
achieving it is integrating these systems and services across all of a
company’s existing supply chain activities [14].
Inventory management is important for almost all business owners, and even more
so when products are sold through more than one channel (or a physical store as
well). Attempting to manually balance all of these variables raises the probability
of making mistakes, not to mention the time required. What if a team could use
that time to focus on more important tasks while the inventory is still managed?
This is where automated inventory management comes into play. By using the
features of a retail operating system, we may reduce the amount of time spent on
inventory management each year while also improving the accuracy of those
procedures.
Automated inventory management systems let both retailers and wholesalers
monitor their stock in real time and make workflows simpler and more effective.
By configuring retail automation with pre-built conditions, we can focus on other
crucial responsibilities with the assurance that the inventory is managed
automatically, which frees up extra time to complete tasks. Most modern
businesses, whether or not they specialize in e-commerce, employ automated
inventory management to track and organize their stock, suppliers, and sales. An
automated system enables merchants to monitor stock in real time and make
business-critical decisions promptly. The main features of automated inventory
management are listed in Figure 8.2.
For instance, if one of our products reaches its minimum stock level and must be
reordered, the inventory management software will tell us automatically (or even
reorder for us). Inventory automation should also integrate with other retail
management systems, such as the point-of-sale (POS) software and the order
management system. To achieve real-time accuracy in inventory management, we
must track sales across all channels; only then can we hope to achieve that
objective. The main benefit of automated inventory management is knowing
exactly which items sell better than others, which sell poorly, how each product is
performing, and how much profit is being made. Combined with demand
projections, risk management, cash-flow projections, and expected profit margins,
this information lets us make plans that are more accurate and solid [15].
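The reorder-point rule described above can be sketched in a few lines of Python; this is a minimal illustration, and the item names, stock levels, and order-up-to targets are hypothetical, not taken from the text.

```python
# Minimal sketch of an automated reorder-point check (illustrative only;
# item names, stock levels, and reorder points are hypothetical).

def reorder_actions(stock, reorder_points, order_up_to):
    """Return a purchase quantity for every item at or below its reorder point."""
    orders = {}
    for item, on_hand in stock.items():
        if on_hand <= reorder_points[item]:
            # Order enough to restore the target stock level.
            orders[item] = order_up_to[item] - on_hand
    return orders

stock          = {"widget": 4, "gadget": 25, "gizmo": 0}
reorder_points = {"widget": 5, "gadget": 10, "gizmo": 3}
order_up_to    = {"widget": 20, "gadget": 40, "gizmo": 15}

# widget and gizmo are at/below their reorder points; gadget is not.
print(reorder_actions(stock, reorder_points, order_up_to))
```

In a real system the same check would run on every sale or receipt event, and the resulting orders would feed the purchase-order workflow rather than being printed.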
The primary advantages of automated inventory management, discussed in the
following subsections, are highlighted in Figure 8.3.
Applications and challenges of optimization in industrial automation 207
automatically updated across all channels. The automation engine automatically
registers and updates the system whenever an item is sold, returned, or received,
freeing up time to focus on what truly matters.
8.3.2 Scalability
Using cutting-edge software makes it much easier for a company to expand its
operations and improve its products. Automated inventory management systems
let businesses open new warehouses profitably without hiring more people or
spending more money on tracking and sending purchase orders by hand. Real-time
inventory data, connected with other management systems, ensures that decisions
about expanding the firm are accurate.
In the end, automation can open up substantial scalability and growth
opportunities. Software that automates inventory management saves time,
improves accuracy, and reduces the likelihood of mistakes while processing
thousands of transactions per day.
8.3.3 Accuracy
Automated inventory management software provides retailers and producers with
accurate, real-time stock data, enabling the system to replenish supplies
automatically when they run out. In addition, by analyzing stock data patterns,
organizations can make more precise and thorough predictions and suggestions
about recruiting, optimal reordering points, arranging shifts to meet anticipated
demand, changing targets, and ultimately increasing sales.
When data is entered manually, the chance of human error is much higher.
The answer? Eliminate every requirement for manually entering inventory data.
When the operations involved in inventory management are automated, the
software handles data entry on its own, including adding, deleting, forecasting,
and restocking stock in real time.
8.3.4 Synchronization
According to a study of 600 stores in 29 countries, 72% of stock-outs were caused
by mistakes in the way stores ordered and restocked items. All of these problems
can hurt a company’s reputation: demand forecasts made by hand are often wrong,
incorrect orders can lead to lost sales, and not having enough goods in the
warehouse makes customers unhappy. With automated inventory control software,
stores, warehouses, and manufacturers always know what needs to be reordered,
which helps them fulfill their duties without delay and reduces the number of
errors made.
When talking to manufacturers that use several distribution networks, the most
common problem they want to fix is managing inventory availability. Manually
typing in inventory availability as purchases are made, all while trying to avoid
out-of-stocks and cancellations, is not only stressful but also expensive and
resource intensive. Errors are likely to occur when orders are entered manually
from one system into another. To understand the full picture and synchronize all
data, sales must be monitored regularly across every channel. All of this data can
be synchronized in one central location so that reports can be generated to prevent
underselling and overselling, which ultimately results in cost savings and
improved customer satisfaction.
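The central-ledger idea above can be sketched as a single stock record that applies sale and return events from every channel in order; the channel names, SKUs, and quantities here are hypothetical examples.

```python
# Illustrative sketch of keeping one central stock ledger in sync with
# events from several sales channels (channels and SKUs are hypothetical).

def apply_events(ledger, events):
    """Apply (channel, sku, qty_change) events to a central ledger in order."""
    for channel, sku, delta in events:
        ledger[sku] = ledger.get(sku, 0) + delta
        if ledger[sku] < 0:
            # A channel sold stock the ledger says we do not have.
            raise ValueError(f"oversold {sku} via {channel}")
    return ledger

ledger = {"sku-1": 10, "sku-2": 3}
events = [
    ("web",   "sku-1", -2),   # online sale
    ("store", "sku-1", -1),   # in-store sale
    ("web",   "sku-2", +1),   # customer return
]
print(apply_events(ledger, events))
```

Routing every channel's events through one ledger is what makes the oversell check possible; with per-channel stock counts the same sale could be accepted twice.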
Figure 8.4 Steps to be taken to ensure safety and security in optimizing the
automation industry
● Keeping to a regular schedule: Because long-term use causes wear and tear, it
is important to perform regular maintenance checks on automated systems;
doing so may also improve the system’s return on investment.
● Risk analysis: Before doing maintenance on a system, you need to know
everything there is to know about its risk profile. Knowing about safeguards
and proper procedures can help you save a significant amount of money. To
make progress, all involved parties must have a shared understanding of the
situation.
● Lockout/tagout: When doing maintenance, it is best to follow lockout/tagout
procedures for controlling hazardous energy. Locking and tagging involve
isolating energy sources that could be dangerous and could lead to the sudden
release of stored mechanical energy. It is advised that this method be used as
frequently as feasible; however, because a power source is needed for
programming and software upgrades, this is not always practicable.
● Complying with obligatory legal standards: Keep in mind that there are
regulations governing the standards an organization must meet in terms of
worker safety and maintenance practices, and stay current with any
modifications that may occur.
accurate error detection [20]. Quality control guarantees that consistency and
high standards of business practice can be maintained across the many production
processes carried out daily at your organization. Moreover, automatic quality
control at multiple revision points can help you find deviations as soon as they
happen, so you can fix them before they accumulate. Your business can make the
most of its resources and its employees’ output by exploiting the advantages
offered by quality control, which include the following:
● saves energy spent searching for faults in content;
● inspects files in seconds, compared to hours spent reviewing them manually;
● easily detects all types of faults at any point in the process;
● achieves faster turnaround times by expediting approvals and decreasing
quality delays;
● compares two files side by side to check whether there are any differences
between them;
● identifies which of your company’s touchpoints are prone to errors so they can
be eliminated;
● frees the time you would otherwise have spent editing, bringing jobs to market
more quickly.
For optimizing manufacturing automation while retaining the quality of the
data, we should focus on the following factors as shown in Figure 8.5. Moreover,
here are five tips that will help you improve your quality assurance process and use
the best methods and procedures in your company.
(i) Establish a work environment in which employees are encouraged to look for
methods to enhance the service quality of the organization. A way to ensure
that quality management remains at the forefront of employees’ minds is to
cultivate an environment that promotes continuous improvement in this area.
Giving individuals extra money when they exhibit a commitment to quality
is one approach to reinforcing this attitude in the workplace.
(ii) Keep the machinery and automated software in good working order by
scheduling frequent servicing. Preventative maintenance of machinery is
essential for reliable production and worker security. When preventive
maintenance is neglected, faulty plastic parts are produced, which leads to
the breakdown of the device.
(iii) Develop a comprehensive training program. The foundation of great quality
management is a well-organized training program. During staff orientation,
quality management and its significance should be introduced. Continuous
training and the introduction of new processes and procedures are additional
chances to emphasize adherence to quality standards.
(iv) Establish a comprehensive overview for verifying product quality. Accurate
product design relies heavily on the results of the quality inspection phase of
production. A comprehensive inspection procedure is essential since it is
frequently the last chance for a manufacturer to identify any design problems
before a product is shipped.
(v) Schedule regular audits of the business’s processes. Internal audits of the
supply chain can be used to assess the efficiency of the operation and whe-
ther or not all regulations are being followed. Component manufacturers can
improve customer satisfaction and get ready for external safety and com-
pliance evaluations by performing internal audits.
space as feasible, whereas secondary and tertiary packaging are the extra layers of
protection that make it possible to ship a product anywhere. Several aspects must
be considered while optimizing the packaging, including:
(i) Choosing the right materials for packaging: The product must be tested to see
how fragile it is and whether the materials will protect it adequately during
shipping. The primary package should give consumers an appealing opening
experience, while appropriate secondary and tertiary packing ensures
protection from potential shipping-related handling and environmental
problems.
(ii) Appropriate package design: The correct package design, employing the correct
quantity and type of packing materials, is a crucial component that can have a
variety of effects on your supply chain. Non-standard package forms that attract
consumers’ attention on store shelves may not be the most space-efficient way
to package your product in order to maximize space in trucks and warehouses.
(iii) Choosing the right amount of packaging has a big impact on your supply chain
and the environment. Underpackaging results in damaged products, which
increase your costs through product returns and waste. In addition to introducing
an excessive amount of garbage into the environment, excessive packaging also
results in greater packing costs and waste. In addition to causing stress on the
supply chain, overpackaging also necessitates extra room for storage and transit.
(iv) Product count optimization: If many items are bundled for shipping, costs
can be reduced and productivity increased by reevaluating how the products
are combined into packs and pallets.
(a) The number of packages from the packaging bridge can be significantly
boosted by integrating automated procedures to supplement human labor. To
meet the demands of single-line and multi-line orders, third-party logistics
providers might use techniques that enable them to manufacture up to 1,000
unique packages every hour. These configurations can alleviate the workload
of staff and help a corporation react to demand variations. This can be a major
source of competitive advantage as the e-commerce business evolves and
client expectations increase.
(b) Automation is a cost-effective way for any merchant or logistics provider to
improve operations and make them better fit the current market. Typically,
businesses benefit most from solutions that help them avoid bottlenecks and
Figure 8.6 Supply chain optimization issues: logistics network, procurement and
sourcing, and inventory
want this [23–26]. So, every industry will have to make improving its supply chain
a top priority to reach operational excellence. Corporations must make the best use
of their industrial and logistics capabilities, optimize and streamline flows
throughout the supply chain, and get the most out of their resources if they want to
keep stock low while also being more responsive and reducing delivery times.
Figure 8.6 depicts the issues faced during supply chain optimization.
While we are trying to optimize the supply chain, the primary challenges that
we focus on are as follows:
(a) Logistics network design
(b) Procurement and sourcing optimization
(c) Inventory optimization
(d) Transportation planning and optimization
(e) Sales forecasting
References
[1] Nitzan, D. and Rosen, C. A. Programmable industrial automation. IEEE
Transactions on Computers, C-25(12), 1259–1270, 1976.
[2] Atack, J., Margo, R., and Rhode, P. Automation of manufacturing in the late
nineteenth century: the hand and machine labor study. Journal of Economic
Perspectives, 33, 51–70, 2019. doi:10.1257/jep.33.2.51.
[3] Wollschlaeger, M., Sauter, T., and Jasperneite, J. The future of industrial
communication: automation networks in the era of the internet of things and
industry 4.0. IEEE Industrial Electronics Magazine, 11(1), 17–27, 2017.
[4] Lu, Y., Xu, X., and Wang, L. Smart manufacturing process and system
automation – a critical review of the standards and envisioned scenarios.
Journal of Manufacturing Systems, 56, 312, 2020.
[5] Kagermann, H., Wahlster, W., and Helbig, J. Recommendations for imple-
menting the strategic initiative INDUSTRIE 4.0. Final Report of the
Industrie 4.0, 2013, p. 82.
[6] Mabkhot, M. M., Al-Ahmari, A. M., Salah, B., and Alkhalefah, H.
Requirements of the smart factory system: a survey and perspective.
Machines, 6(2), 23, 2018.
[7] Edrington, B., Zhao B., Hansel A., Mori M., and Fujishima M. Machine
monitoring system based on MTConnect technology. Procedia CIRP, 22,
92–97, 2014.
[8] Wang, L., Xu, S., Qiu, J., et al. Automatic monitoring system in underground
engineering construction: review and prospect. Advances in Civil Engineering,
5, 1–16, 2020.
[9] Doukas, C., Chantzis, D., Stavropoulos, P., Papacharalampopoulos, A., and
Chryssolouris, G. Monitoring and control of manufacturing processes: a
review. Procedia CIRP, 8, 421–425, 2013. doi:10.1016/j.procir.2013.06.127.
1Department of Electronics & Communication Engineering, Kakatiya Institute of Technology and
Science, India
2Department of Electrical & Electronics Engineering, Kakatiya Institute of Technology and Science,
India
220 Nature-inspired optimization algorithms and soft computing
development, NIOAs are likely to become even more powerful and versatile,
paving the way for new applications and solutions in various fields.
Types of nature-inspired optimization algorithms
The nature-inspired optimization algorithms, along with their descriptions and
acronyms, are listed below:
nests to find the optimal solution. The cuckoos adjust the quality of their eggs,
and the host bird discards eggs of lower quality.
7. Differential evolution (DE): DE is a population-based optimization algorithm
that is inspired by the process of natural selection. In DE, a population of
solutions is evolved through differential mutation and crossover operations.
The fittest individuals are selected to create the next generation of solutions.
8. Genetic algorithm (GA): GA is an optimization algorithm inspired by the
process of natural selection. It uses a population of candidate solutions to
evolve better solutions through selection, crossover, and mutation.
9. Ant colony optimization (ACO): ACO is a metaheuristic algorithm
based on ant behavior that uses pheromone trails to determine the shortest
path between a food source and the nest. It is frequently employed to solve
optimization problems such as the traveling salesman problem.
10. Harmony search algorithm (HSA): HSA [1] is a metaheuristic optimization
algorithm inspired by the process of musical improvisation: it generates new
solutions by improvising new melodies, with the aim of finding the optimal
one. The algorithm was developed by Geem et al. in 2001 and has since been
applied to various optimization problems, including engineering design,
water resource management, and image processing. In HSA, a set of decision
variables is represented as a musical improvisation, and the optimization
process is modeled as a process of generating better improvisations.
The improvisations are evaluated using an objective function, and the better
ones are used to create new improvisations through a process of improvisation
memory, pitch adjustment, and harmony memory. Improvisation memory
involves storing the best improvisations encountered so far, while pitch
adjustment involves changing the decision variables of an improvisation.
Harmony memory involves selecting the best improvisations from the
improvisation memory and using them to generate new improvisations. HSA
has been shown to be effective in solving a wide range of optimization pro-
blems, including function optimization, parameter tuning, and feature selec-
tion. It has several advantages over other optimization algorithms, such as its
ability to handle discrete, continuous, and mixed-variable problems, and its
fast convergence rate.
11. Differential evolution (DE): DE is an optimization algorithm that uses the
difference between two solutions in the population to generate a new candi-
date solution. It is based on the principles of evolution and natural selection.
12. Cuckoo search (CS): CS is an optimization algorithm inspired by the beha-
vior of cuckoo birds. It uses a population of cuckoos to lay eggs in host nests,
with the aim of finding the optimal solution.
13. Glowworm swarm optimization (GSO): GSO is an optimization algorithm
inspired by the behavior of glowworms. It uses the bioluminescent behavior
of glowworms to search for the optimal solution.
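The HSA improvisation step described in item 10 (recall a value from harmony memory with probability HMCR, pitch-adjust it with probability PAR and bandwidth BW, otherwise draw a fresh random value) can be sketched as follows. The parameter values, bounds, and toy memory are illustrative assumptions, not prescribed by the algorithm.

```python
import random

# Sketch of one HSA "improvisation": each decision variable is either
# recalled from harmony memory (probability HMCR), optionally pitch-adjusted
# (probability PAR, step size BW), or drawn at random from its range.

def improvise(memory, lower, upper, hmcr=0.9, par=0.3, bw=0.05):
    """Generate one new candidate solution from the harmony memory."""
    dims = len(memory[0])
    new = []
    for k in range(dims):
        if random.random() < hmcr:
            value = random.choice(memory)[k]            # memory consideration
            if random.random() < par:                   # pitch adjustment
                value += random.uniform(-1, 1) * bw
        else:
            value = random.uniform(lower[k], upper[k])  # random consideration
        new.append(min(max(value, lower[k]), upper[k])) # keep within bounds
    return new

random.seed(0)
memory = [[0.2, 0.8], [0.4, 0.6], [0.9, 0.1]]
print(improvise(memory, lower=[0.0, 0.0], upper=[1.0, 1.0]))
```

A full HSA run wraps this step in a loop that evaluates each candidate and replaces the worst member of the memory whenever the candidate is fitter.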
Data mining and machine learning: Algorithms are used to optimize machine
learning algorithms and data mining techniques for improved accuracy, efficiency,
and scalability.
Financial optimization: Bio-inspired algorithms are applied to optimize financial
models, portfolio selection, and risk management strategies for improved financial
performance.
Biomedical engineering: Bio-inspired algorithms are used to optimize various
biomedical engineering applications, such as prosthetics, implants, and drug
delivery systems, for improved efficiency and effectiveness.
Agriculture and environmental optimization: NIOAs are applied to optimize
agricultural and environmental systems for improved crop yields, water manage-
ment, and environmental sustainability.
Game theory and optimization: NIOAs are used to optimize game strategies and
outcomes in various fields such as economics, psychology, and computer science.
Disaster management and emergency response: NIOAs are applied to optimize
disaster management and emergency response strategies for improved disaster
preparedness, response time, and resource allocation.
Social network analysis and optimization: Bio-inspired algorithms are used to
optimize social network analysis and management for improved communication,
collaboration, and information sharing.
Bioinformatics and genomics: NIOAs are applied to optimize bioinformatics and
genomic data analysis for improved accuracy, efficiency, and scalability.
Smart grid optimization: Algorithms are used to optimize the operation and
control of smart grids for improved energy efficiency, reliability, and stability.
Multi-objective optimization: Algorithms are applied to optimize multiple
objectives simultaneously, such as cost, performance, and environmental impact,
for improved decision-making.
Quality control and assurance: NIOAs are used to optimize quality control and
assurance techniques for improved product quality, defect detection, and
efficiency.
Supply chain optimization: Optimization algorithms are applied to optimize
supply chain management for improved efficiency, responsiveness, and cost-
effectiveness.
Chemical engineering: Bio-inspired algorithms are used to optimize chemical
processes and systems for improved efficiency, cost-effectiveness, and environ-
mental sustainability.
Construction and project management: NIOAs are applied to optimize con-
struction and project management processes for improved cost-effectiveness, effi-
ciency, and quality.
Energy management and optimization: NIOAs are used to optimize energy
consumption, production, and management for improved energy efficiency and
sustainability.
9.1.1 Implementation
To discuss how a nature-inspired algorithm is implemented, one optimization
technique is selected and its features and implementation procedure are described.
The harmony search optimization algorithm is chosen to study how to implement
a nature-inspired optimization algorithm that finds an optimized solution by
maximizing or minimizing a given objective function.
An image segmentation method based on multilevel thresholding is considered to
explain the implementation of optimization techniques. The thresholds are
computed based on criteria such as Otsu’s inter-class variance and Kapur’s
entropy functions [5–9]. Thresholding methods are simple, accurate, and robust,
and can be classified into two major categories: bi-level and multilevel [1,7]. The
pixels of an image are classified into different classes based on the threshold
levels selected on the histogram. In bi-level thresholding, all pixels are classified
into two groups based on a single threshold level; in multilevel thresholding,
pixels are classified into more than two classes. Nonetheless, the primary
constraints in multilevel thresholding are accuracy, stability, and execution time,
among other things.
In the case of color images [10], each pixel consists of three components (red,
green, and blue); because of this heavier load, segmentation of color images can
be more exacting and intricate. Accordingly, it is essential to find the optimal
thresholds using optimization algorithms, by maximizing the inter-class variance
in Otsu’s method or the histogram entropy in Kapur’s method on the histogram of
an image. As per the no-free-lunch (NFL) principle [11], no algorithm can solve
all types of optimization problems: one optimization algorithm may perform well
on one type of application and fail on others; thus, it is indispensable to devise
and adapt new algorithms.
Techniques based on histogram plots cannot capture the spatial contextual
information needed to compute optimized thresholds. To overcome this
drawback, a novel methodology is presented in this chapter: a curve with
characteristics similar to those of the histogram, which also considers the spatial
contextual information of image pixels, named the “Energy Curve” [3], is used in
place of the histogram, and HSA [1] is used to select optimized gray levels. For
each value in the grayscale range of an image, an energy is computed; the
threshold levels can then be computed from the values and peak points of the
energy curve. Numerous optimization techniques, along with their efficiencies
and applications in particular fields, are available in the literature; a few of them
are PSO [12], ACO [13], BFO [14], ABC [15], GWO [16], MFO [17], SSA [18],
FA [19], WOA [20], SCA [21], KHO [22], BA [23], FPA [24], and MVO [25].
Moreover, several customized algorithms are used in multilevel thresholding; for
example, Chen et al. [26] proposed an improved algorithm (IFA) for
segmentation and compared it with PSO [27] and other methods [10,27–29].
where $C_1$ and $C_2$ are two classes, $p$ indicates the pixel value over the gray levels
$\{1, 2, 3, \dots, L-1\}$ in an image, and $L-1$ is the maximum gray level.
Expectations from modern evolutionary approaches for image processing 227
If the gray level is below the threshold $th$, the pixel is grouped into class $C_1$;
otherwise it is grouped into class $C_2$. The set of rules for MT is

$$C_1 \leftarrow p \text{ if } 0 \le p < th_1, \quad C_2 \leftarrow p \text{ if } th_1 \le p < th_2, \quad \dots, \quad C_i \leftarrow p \text{ if } th_i \le p < th_{i+1}, \quad \dots, \quad C_n \leftarrow p \text{ if } th_n \le p < th_{n+1} \qquad (9.2)$$

From (9.2), $C_1, C_2, \dots, C_n$ denote the different classes, and the threshold levels used
to find objects are represented by $\{th_1, th_2, \dots, th_i, th_{i+1}, \dots, th_n\}$; these thresholds can
be computed based on either a histogram or an energy curve. By use of these
threshold levels, the entire set of pixels is classified into different regions
($C_1, C_2, \dots, C_n$). The significant threshold-based methods of image segmentation
are Otsu’s and Kapur’s; in both cases, the threshold levels are computed by
maximizing a cost function (the inter-class variance for Otsu’s method and the
entropy for Kapur’s). In this chapter, optimized threshold levels $th$ are computed
by Otsu’s method [17]. In this method, the inter-class variance is considered the
objective function, also called a cost function. For experimentation, grayscale
images are considered. The expression below gives the probability distribution for
each gray level:

$$Ph_i^c = \frac{h_i^c}{NP}, \qquad \sum_{i=1}^{NP} Ph_i^c = 1 \qquad (9.3)$$
where $w_0^c(th)$ and $w_1^c(th)$ are the probability distributions for $C_1$ and $C_2$, as
shown below:

$$w_0^c(th) = \sum_{i=1}^{th} Ph_i^c, \qquad w_1^c(th) = \sum_{i=th+1}^{L} Ph_i^c \qquad (9.5)$$

The means $m_0^c$ and $m_1^c$ of the two classes are computed, and the inter-class
variance $\sigma^{2^c}$ is given by (9.6) and (9.7):

$$m_0^c = \sum_{i=1}^{th} \frac{i\,Ph_i^c}{w_0^c(th)}, \qquad m_1^c = \sum_{i=th+1}^{L} \frac{i\,Ph_i^c}{w_1^c(th)} \qquad (9.6)$$

$$\sigma^{2^c} = \sigma_1^c + \sigma_2^c \qquad (9.7)$$
Notice that for both (9.6) and (9.7), $c$ is determined by the type of image, and
$\sigma_1^c$ and $\sigma_2^c$ in (9.8) are the variances of classes $C_1$ and $C_2$:

$$\sigma_1^c = w_0^c \left( m_0^c - m_T^c \right)^2, \qquad \sigma_2^c = w_1^c \left( m_1^c - m_T^c \right)^2 \qquad (9.8)$$

where $m_T^c = w_0^c m_0^c + w_1^c m_1^c$ and $w_0^c + w_1^c = 1$. Based on the values of $\sigma_1^c$ and $\sigma_2^c$, (9.8)
gives the objective function:

$$J(th) = \max\left( \sigma^{2^c}(th) \right), \qquad 0 \le th \le L-1 \qquad (9.9)$$

From (9.9), $\sigma^{2^c}(th)$ is the total variance between the two regions after
segmentation by Otsu’s scheme for a given $th$; optimization techniques are
required to compute the threshold level $th$ by maximizing the fitness function
shown in (9.9). Similarly, for MT, the objective (or fitness) function $J(TH)$ (as
shown in (9.10)) to segment an image into $k$ classes requires $k$ variances:

$$J(TH) = \max\left( \sigma^{2^c}(th_i) \right), \qquad 0 \le th_i \le L-1, \quad i = 1, 2, \dots, k \qquad (9.10)$$
where $TH$ is a vector, $TH = [th_1, th_2, th_3, \dots, th_{k-1}]$, for multi-level thresholding;
the variances between classes can be computed from (9.11):

$$\sigma^{2^c} = \sum_{i=1}^{k} \sigma_i^c = \sum_{i=1}^{k} w_i^c \left( m_i^c - m_T^c \right)^2 \qquad (9.11)$$
where $i$ represents the $i$th class, $w_i^c$ indicates the probability of the $i$th class, and
$m_i^c$ is the mean of the $i$th class. For MT segmentation, these parameters are
computed as

$$w_0^c(th) = \sum_{i=1}^{th_1} Ph_i^c, \qquad w_1^c(th) = \sum_{i=th_1+1}^{th_2} Ph_i^c, \qquad \dots, \qquad w_{k-1}^c(th) = \sum_{i=th_k+1}^{L} Ph_i^c \qquad (9.12)$$

$$m_0^c = \sum_{i=1}^{th_1} \frac{i\,Ph_i^c}{w_0^c(th_1)}, \qquad m_1^c = \sum_{i=th_1+1}^{th_2} \frac{i\,Ph_i^c}{w_1^c(th_2)}, \qquad \dots, \qquad m_{k-1}^c = \sum_{i=th_k+1}^{L} \frac{i\,Ph_i^c}{w_{k-1}^c(th_k)} \qquad (9.13)$$
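The multilevel Otsu objective in (9.10)–(9.13) can be sketched as a small function that scores a candidate threshold vector against a normalized histogram; the 5-bin toy histogram below is a hypothetical example, not data from the chapter.

```python
# Sketch of the multilevel Otsu objective (9.10)-(9.13): between-class
# variance of a normalized histogram for a candidate threshold vector.

def otsu_variance(prob, thresholds):
    """Sum of w_i * (m_i - m_T)^2 over the classes induced by `thresholds`."""
    L = len(prob)
    m_T = sum(i * p for i, p in enumerate(prob))       # global mean
    bounds = [0] + list(thresholds) + [L]              # class boundaries
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(prob[lo:hi])                           # class probability w_i
        if w > 0:
            m = sum(i * prob[i] for i in range(lo, hi)) / w  # class mean m_i
            total += w * (m - m_T) ** 2
    return total

# Toy histogram with two well-separated modes; an exhaustive scan over the
# single-threshold case shows which threshold maximizes the variance.
prob = [0.25, 0.25, 0.0, 0.25, 0.25]
scores = {th: otsu_variance(prob, [th]) for th in range(1, 5)}
best = max(scores, key=scores.get)
print(best, scores[best])
```

For one or two thresholds an exhaustive scan like this is feasible; the point of HSA later in the chapter is to avoid the combinatorial blow-up when $k$ grows.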
where $TH$ is a vector, $[th_1, th_2, th_3, \dots, th_{k-1}]$. Each entropy is calculated
separately with its own $th$ value; for $k$ entropies,

$$H_1^c = \sum_{i=1}^{th_1} \frac{Ph_i^c}{w_0^c} \ln\!\left( \frac{Ph_i^c}{w_0^c} \right), \qquad H_2^c = \sum_{i=th_1+1}^{th_2} \frac{Ph_i^c}{w_1^c} \ln\!\left( \frac{Ph_i^c}{w_1^c} \right), \qquad \dots, \qquad H_k^c = \sum_{i=th_{k-1}+1}^{L} \frac{Ph_i^c}{w_{k-1}^c} \ln\!\left( \frac{Ph_i^c}{w_{k-1}^c} \right) \qquad (9.15)$$
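Kapur's criterion from (9.15) can be sketched the same way: total up the entropy of each class induced by the threshold vector. The usual sign convention (so each class entropy is non-negative) and the toy histogram are assumptions of this sketch.

```python
import math

# Sketch of Kapur's entropy criterion (9.15): the sum of the entropies of the
# classes induced by a threshold vector on a normalized histogram.

def kapur_entropy(prob, thresholds):
    """Sum of class entropies H_i = -sum (p/w) ln(p/w) over induced classes."""
    L = len(prob)
    bounds = [0] + list(thresholds) + [L]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(prob[lo:hi])                   # class probability w_i
        if w > 0:
            total -= sum((p / w) * math.log(p / w)
                         for p in prob[lo:hi] if p > 0)
    return total

# On the same two-mode toy histogram, the threshold between the modes also
# maximizes the total entropy.
prob = [0.25, 0.25, 0.0, 0.25, 0.25]
print(max(range(1, 5), key=lambda th: kapur_entropy(prob, [th])))
```

Maximizing this quantity rewards thresholds whose classes have near-uniform internal distributions, which is the intuition behind Kapur's method.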
From (9.16), its second term should be a constant; consequently, the energy
associated with each pixel satisfies $E_x \ge 0$. From the above equation, it is clear
that the energy for a particular gray level $x$ is zero if every element of $B_x$ is either
$1$ or $-1$; put another way, if all the pixels of the image $I(i,j)$ have gray levels
either greater than $x$ or less than $x$. Otherwise, the energy at a particular gray
value $x$ is positive, as shown in Figure 9.2.
Problem statement
The optimal threshold levels can be computed by (i) maximizing the inter-class
variance for Otsu’s method or (ii) maximizing the total entropy for Kapur’s
method.
Figure 9.1 Flow chart of the proposed scheme: take an image x(i,j), find the
histogram (or energy curve) of the image, and iterate until the
termination criterion is satisfied
Figure 9.2 First row: input images; second row: histograms; third row: energy curves
After generating each new decision value, $x_{new}$, the harmony search algorithm
checks whether pitch adjustment is required. To do this, a pitch adjustment rate
(PAR) is defined, which combines a frequency and a bandwidth factor (BW).
These two factors are adjusted to obtain a new $x_{new}$ value from a local search
around the HM. The pitch-adjusted solution $x_{new}(k)$ is calculated as
$x_{new}(k) \pm \mathrm{rand}(0,1)\cdot BW$ with probability PAR. This pitch adjustment process is
similar to the mutation process in other bio-inspired algorithms. The range of
pitch adjustment is restricted by the limits $[X_l(k), X_u(k)]$.
In the final stage, the new harmony vector, $x_{new}$, is evaluated against the worst
harmony vector, $x_w$, in the harmony memory. If the fitness of $x_{new}$ is better than
that of $x_w$, then $x_w$ is replaced by $x_{new}$ and becomes part of the harmony memory.
This process is repeated to maximize the objective function.
The optimization algorithm used in this study considers $k$ variables, represented
by threshold values $th_k$, to form a harmony (solution). For multilevel
segmentation, the algorithm’s population is represented by

$$HM = [x_1, x_2, \dots, x_{HMS}]^T, \qquad x_i = [th_1, th_2, \dots, th_k] \qquad (9.20)$$

where each element $x_i$ in the harmony memory represents a set of threshold values
for the $k$ variables, and HMS is the maximum size of the HM. In this study, an 8-bit
image with intensity values ranging from 0 to 255 is used, and the algorithm’s
parameters are set to HMS = 50, HMCR = 0.5, PAR = 0.2, BW = 0.1, and NI = 3,500.
The number of regions considered for segmentation ranges from 2 to 5.
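Putting the pieces together, the loop described above can be sketched end to end: HSA improvising integer thresholds and scoring them with Otsu's between-class variance. The histogram is a hypothetical 8-bin toy, NI is reduced from the chapter's 3,500 so the run stays fast, and an integer pitch step of ±1 stands in for the continuous BW adjustment.

```python
import random

# Scaled-down sketch of HSA searching for Otsu-optimal thresholds with the
# chapter's settings (HMS=50, HMCR=0.5, PAR=0.2; NI reduced from 3,500).

def otsu_variance(prob, thresholds):
    """Between-class variance of `prob` for a sorted threshold vector."""
    m_T = sum(i * p for i, p in enumerate(prob))
    bounds = [0] + sorted(thresholds) + [len(prob)]
    total = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(prob[lo:hi])
        if w > 0:
            m = sum(i * prob[i] for i in range(lo, hi)) / w
            total += w * (m - m_T) ** 2
    return total

def hsa_thresholds(prob, k, hms=50, hmcr=0.5, par=0.2, ni=500):
    L = len(prob)
    hm = [[random.randint(1, L - 1) for _ in range(k)] for _ in range(hms)]
    fit = [otsu_variance(prob, x) for x in hm]
    for _ in range(ni):
        new = []
        for j in range(k):
            if random.random() < hmcr:
                v = random.choice(hm)[j]              # memory consideration
                if random.random() < par:
                    v += random.choice((-1, 1))       # integer pitch step
            else:
                v = random.randint(1, L - 1)          # random consideration
            new.append(min(max(v, 1), L - 1))
        f = otsu_variance(prob, new)
        worst = fit.index(min(fit))
        if f > fit[worst]:                            # replace worst harmony
            hm[worst], fit[worst] = new, f
    best = fit.index(max(fit))
    return sorted(hm[best]), fit[best]

random.seed(0)
prob = [0.2, 0.2, 0.0, 0.0, 0.2, 0.0, 0.2, 0.2]
print(hsa_thresholds(prob, k=1))
```

On a real 8-bit image the histogram would have 256 bins and k would range from 2 to 5 as in the chapter; the structure of the loop is unchanged.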
Computing optimized threshold levels
The introduction section of this chapter outlines various multilevel thresholding
techniques for image segmentation and highlights the limitations of the histogram-
based methods. In the proposed method, an Energy Curve is utilized in place of the
histogram, and the optimization of threshold levels is achieved using HSA. This is
done by maximizing the inter-class variance for Otsu’s method and entropy for
Kapur’s method, as shown in (9.10) and (9.14). The new approach is visually
represented in Figure 9.1.
From the flow chart: take an image x(i, j) for multilevel thresholding-based segmentation and plot the energy curve of the considered color image using (3.1). Next, assign the design parameters of HSA and fill the solution matrix with arbitrary numbers, initially denoted as xi (a set of threshold levels) as per (3.18). Then divide all pixels in the image into different classes or regions according to the selected threshold levels, following Otsu's technique and Kapur's method, and find the inter-class variance and entropy of the segmented image, as given in (9.14). Afterward, find a new set of threshold levels with (9.2), compute the fitness, and compare it with the previous fitness value. Repeat this procedure until there is no improvement in the objective function or the specified number of iterations is reached. Finally, take the optimized threshold values (xnew) and classify the gray levels as per (9.1) and (9.2) for the final segmentation of the R, G, and B components separately for color images. The results of this scheme are compared with histogram-based techniques for evaluation.
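As a concrete instance of the fitness evaluation above, Otsu's inter-class variance can be computed over any non-negative curve, so the same routine serves both a histogram and an energy curve. The following simplified sketch is ours, not the chapter's code:

```python
def between_class_variance(curve, thresholds):
    """Otsu-style between-class variance for multilevel thresholding.
    `curve` is a list of 256 non-negative values (a histogram or an
    energy curve); `thresholds` is a list of cut points in [0, 255]."""
    total = sum(curve)
    mean_total = sum(i * c for i, c in enumerate(curve)) / total
    bounds = [0] + sorted(thresholds) + [len(curve)]
    variance = 0.0
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        w = sum(curve[lo:hi]) / total           # class probability
        if w == 0:
            continue
        mu = sum(i * curve[i] for i in range(lo, hi)) / (w * total)
        variance += w * (mu - mean_total) ** 2  # weighted class deviation
    return variance
```

HSA then searches for the threshold set that maximizes this value (or, for Kapur's method, the entropy criterion instead).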
9.2 Results
This chapter introduces a novel technique for image segmentation that overcomes
the limitations of methods that rely solely on image histograms. Such methods lack
spatial contextual information and therefore may not yield optimal threshold levels.
To address this issue, the proposed method utilizes an energy curve, which captures the spatial relationship between pixels and their neighbors. The optimization of threshold levels for multilevel segmentation is achieved using the harmony search algorithm (HSA) in combination with the energy curve (Figures 9.3 and 9.4).
The primary objective of this chapter is to highlight the benefits of using the energy curve in comparison to the histogram. The research adopts a primarily
quantitative approach for evaluating the proposed method, which involves ana-
lyzing nine images. To implement the proposed method, a MATLAB code is
developed, and the experiment is conducted on an eighth-generation Intel processor
CORE i5-8250u with a clock speed of 1.60 GHz and 8 GB of internal memory. The
MATLAB code is designed to generate energy curves and validation measures,
which are then presented in tables. Some of the results presented in this chapter are taken from a paper [30] authored by the authors of this chapter.
The resulting images are displayed in Figures 9.2–9.4, while comparative
metrics are provided in Tables 9.1 and 9.2. The optimized threshold levels for N =
2–5 are indicated for five standard images, and the mean fitness of the proposed
system is compared to various algorithms. The histograms and energy curves are
displayed with their corresponding threshold levels denoted by red spots. In the
tables, histogram is referred to as “H,” the energy curve as “E,” the number of
regions as “N,” and the threshold levels as “Th Levels.” The segmentation efficiency
improves with an increase in the mean value of the fitness function. Table 9.1 presents the mean fitness function values for the proposed technique and the histogram-based algorithms, namely Otsu's method with HSA (HSA_O) [1], Kapur's method based on HSA (HSA_K) [1], PSO with Otsu's method [31], Kapur's method with EMO (EMO_K) [32], and Otsu's method with EMO (EMO_O), for optimized threshold levels of N = 2–5 on five benchmark images. As shown in Table 9.1 for the bridge and butterfly images, the proposed method has a higher mean fitness function value than the other algorithms. Table 9.1 also lists the proposed method's threshold levels and mean fitness function values for the baboon and man images, highlighting that the proposed method provides higher fitness values. In comparison to the proposed method, the HSA_K and EMO_K methods produce lower fitness function values. When N = 2, the mean objective function value for the bridge image is 2.06E+03 with the histogram and 1.58521E+11 with the energy curve.
The time taken for threshold level optimization and segmentation through the histogram and energy curve approaches is presented in Table 9.2. The findings demonstrate that the proposed technique requires less time than the other optimized algorithms. On average, the proposed method takes 13.7813 seconds, which is lower than the time required by the EMO technique employing Otsu's method and Kapur's method. In summary, the threshold levels optimized using the HSA based on the energy curve outperform those obtained via
Figure 9.3 First row: segmentation results of “bridge” image with histogram for N = 2, 3, 4, and 5. Thresholds given in the second row are indicated by red dots on the histogram.
Figure 9.4 First row: segmentation results of “bridge” image with energy curve for N = 2, 3, 4, and 5. Thresholds given in the second row are indicated by red dots on the energy curve.
Table 9.1 Thresholding levels for different levels (N), and mean of fitness function with different segmentation methods
Table 9.2 Time required for segmentation based on the proposed method and
other methods
histogram methods. Consequently, the segmentation process with the energy curve
approach is superior to histogram techniques. This study predominantly utilized
quantitative methods to scrutinize the results.
4. Selection strategy: This parameter determines how the candidate solutions are
selected for reproduction in each iteration of the algorithm. Common selection
strategies include roulette wheel selection, tournament selection, and rank-
based selection.
5. Termination criteria: This parameter specifies when the algorithm should be stopped. Reaching a maximum number of iterations, reaching a minimum acceptable fitness value, and exhausting a specified amount of computing time are all common termination criteria.
6. Initialization method: This parameter controls how the first set of candidate
solutions is generated. Random initialization, initialization based on domain-
specific knowledge, and initialization based on previous successful solutions
are all common initialization methods.
7. Fitness function: This parameter determines how the quality of candidate
solutions is evaluated. The fitness function should be chosen carefully to
accurately reflect the optimization problem being solved.
These parameters can be adjusted based on the distinctiveness of the optimi-
zation problem being solved to achieve better performance.
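As one illustration of a selection strategy, roulette wheel selection can be implemented in a few lines (a sketch with our own function name, assuming non-negative fitness values that are not all zero):

```python
import random

def roulette_wheel_select(population, fitnesses):
    """Pick one individual with probability proportional to its
    fitness (assumes non-negative fitnesses, not all zero)."""
    total = sum(fitnesses)
    r = random.random() * total      # spin the wheel
    cumulative = 0.0
    for individual, fit in zip(population, fitnesses):
        cumulative += fit
        if r < cumulative:
            return individual
    return population[-1]            # guard against round-off
```

Tournament and rank-based selection differ only in how this sampling step is carried out.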
The GA and HSA are both population-based optimization algorithms that are
devised to locate the optimal solution for the problem under test. While both
algorithms have their own unique set of parameters, the most common parameters
that need to be tuned for both algorithms are:
Parameters for genetic algorithm
1. Population size: The number of individuals in each generation of the
algorithm.
2. Selection method: The method used to select individuals for the next gen-
eration (e.g., roulette wheel selection, tournament selection, etc.).
3. Crossover rate: The probability that crossover will occur between two indi-
viduals in the population.
4. Mutation rate: The probability that mutation will occur in an individual in the
population.
5. Fitness function: The function used to evaluate each individual’s fitness in the
population.
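These five parameters appear together in even a toy GA. The following bit-string sketch is our illustration (it uses tournament selection and a fitness that simply counts 1-bits) and shows where each parameter enters:

```python
import random

def genetic_algorithm(fitness, length=8, pop_size=20,
                      crossover_rate=0.8, mutation_rate=0.05, generations=50):
    """Tiny bit-string GA: tournament selection, one-point crossover,
    bit-flip mutation. Returns the fittest individual of the final
    population."""
    pop = [[random.randint(0, 1) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def select():                      # tournament of two
            a, b = random.sample(pop, 2)
            return max(a, b, key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select()[:], select()[:]
            if random.random() < crossover_rate:   # one-point crossover
                cut = random.randrange(1, length)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(length):            # bit-flip mutation
                    if random.random() < mutation_rate:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = genetic_algorithm(fitness=sum)  # maximize the number of 1-bits
```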
Parameters for harmony search algorithm
1. Harmony memory size: The size of the harmony memory used in the
algorithm.
2. Pitch adjusting rate: The probability that a pitch adjustment will occur in the
algorithm.
3. Harmony memory consideration rate: The rate at which the algorithm
considers the harmony memory when generating new solutions.
4. Maximum improvisation iterations: The maximum number of iterations the
algorithm will perform before converging to a solution.
5. Objective function: The function used to evaluate the fitness of each gener-
ated solution.
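The HMCR, PAR, and bandwidth parameters interact in every improvisation step. One such step can be sketched as follows (our illustration for continuous decision variables, not a specific published implementation):

```python
import random

def improvise(hm, hmcr, par, bw, lo, hi):
    """Build one new harmony: each variable is drawn from the harmony
    memory with probability HMCR (and then pitch-adjusted with
    probability PAR), otherwise sampled at random from [lo, hi]."""
    new = []
    for k in range(len(hm[0])):
        if random.random() < hmcr:                  # memory consideration
            value = random.choice(hm)[k]
            if random.random() < par:               # pitch adjustment
                value += random.choice((-1, 1)) * random.random() * bw
        else:                                       # random selection
            value = random.uniform(lo, hi)
        new.append(min(hi, max(lo, value)))         # keep within bounds
    return new
```

Repeating this step, replacing the worst stored harmony whenever the new one is fitter, up to the maximum number of iterations is the whole algorithm.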
Expectations from modern evolutionary approaches for image processing 241
It is important to note that the optimal values for these parameters may vary
depending on the problem being solved. Therefore, it is recommended to experi-
ment with different parameter values and analyze the results to determine the
optimal set of parameters for a specific problem.
Constrained optimization and unconstrained optimization are the two types of techniques used to find the maximum or minimum value of a function. Unconstrained optimization is the process of finding the minimum or maximum value of a function without any constraints. In other words, there are no restrictions on the values the variables may take. The goal of unconstrained optimization is to find the variable values that result in the function's minimum or maximum value.
Constrained optimization involves finding the minimum or maximum value of
a function subject to one or more constraints. In other words, there are limitations
on the values that the variables can take on. The goal of constrained optimization is
to find the value of the variables that satisfy the constraints and result in the
minimum or maximum value of the function.
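The contrast can be made concrete with a toy quadratic minimized by plain gradient descent, where the constraint is enforced by projecting each iterate back onto the feasible set (our illustrative sketch, not from the text):

```python
def minimize(grad, x0, step=0.1, iters=1000, project=None):
    """Gradient descent on one variable; `project` maps each iterate
    back onto the feasible set (None means unconstrained)."""
    x = x0
    for _ in range(iters):
        x -= step * grad(x)
        if project is not None:
            x = project(x)      # enforce the constraint
    return x

grad = lambda x: 2 * (x - 3)    # gradient of f(x) = (x - 3)**2

x_free = minimize(grad, 0.0)                                # unconstrained: x -> 3
x_con = minimize(grad, 0.0, project=lambda x: min(x, 2.0))  # subject to x <= 2: x -> 2
```

The unconstrained run reaches the true minimizer x = 3, while the constrained run settles on the boundary x = 2, the best value the constraint allows.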
There are several differences between constrained and unconstrained optimi-
zation techniques:
in the harmony search optimization algorithm. These methods ensure that the
optimal solution is found while satisfying the constraints of the problem.
9.8 Conclusion
References
With the advancement of technology, day-to-day life has become much easier. Resource requirements and task complexity are also increasing day by day. Meeting these requirements with the least effort and minimum resources is the most adaptable solution. Optimization is performed to satisfy all these demands while reducing overall cost and complexity and enhancing efficiency. Some problems are deterministic in nature, like a refrigerator's cooling cut-off or an iron's heating cut-off. In contrast, most real-world problems are stochastic processes: for example, controlling traffic lights based on the number of vehicles on the road, early diagnosis of heart disease based on various parameters, identifying the number of cellular users who may demand network resources at the same instant, handling image scenarios like background and posture in image processing, and assuming the working speed of humans in an automated factory or the speed of other vehicles for an automated vehicle on the road. Most real-world problems are non-deterministic in nature. Traditional methods like Newton's method, the secant method, Steffensen's method, principal component analysis [1], and Brent's method [2] are computational methods that find only exact solutions to real-world problems [3]. These methods always follow a fixed procedure and cannot accommodate spontaneous changes.
Over the last few decades, researchers have focused increasingly on optimization. It may be pursued in different ways. To analyze random data in a more structured way, machine learning algorithms are used; some examples are K-means [4], K-nearest neighbor [5], reinforcement learning [6], Gaussian mixture models [7], artificial neural networks [8], hidden Markov models [9], logistic regression [10], and support vector machines [11]. To reduce the cost and latency of transmission, compression algorithms like the discrete cosine transform, the discrete wavelet transform, and Lempel-Ziv-Welch (LZW) [12] are used. To ensure security, encryption algorithms
like the advanced encryption standard, the data encryption standard, and the triple data encryption standard are used [13]. To optimize parameters like energy consumption and other real-time requirements, numerous other algorithms are in use nowadays.
There are various problems that are hard to solve within a specified number of steps or a specified time. For example, an autonomous car's path is difficult to design, as the condition of the road and the movements of other vehicles are non-deterministic. It is also difficult to predict when the movement of vehicles will become predictable. Such problems are called NP-hard problems. To solve such problems, nature-inspired algorithms provide
satisfactory solutions [14]. Based on the inspiration source, nature-inspired algo-
rithms are categorized as follows. Stochastic algorithms are based on random
search, for example, stochastic hill climbing and Tabu search. Evolutionary algo-
rithms replicate the behavior of living organisms, for example, genetic algorithm
and strength Pareto evolutionary algorithm. Physical algorithms are based on
physical phenomena such as simulated annealing and harmony search. Bio-inspired
algorithms are inspired by Darwin’s theory of survival of the fittest. Swarm-based algorithms
are based on behavior of social creatures, for example, ant colony optimization
and particle swarm optimization. Chemistry-based algorithms are based on opti-
mization of chemical reactions, for example, grenade explosion method and river
formation dynamics.
Nature-inspired algorithms have been adapted from different fields of science, i.e., physics, chemistry, biology, mathematics, social science, etc. These algorithms can be used to optimize single-objective and multi-objective problems [15]. Owing to their precision, the application area of nature-inspired algorithms has grown considerably. The genetic algorithm, together with heuristic and machine learning algorithms, has been used to optimize parameters like cost, energy consumption, and latency using D2D communication in IoT, 5G, and 6G systems [16]. A neoteric gravitational search algorithm for multi-constraint optimization problems has been used to optimize the throughput of served users in UAV communication [17]. To automate a 3D packaging system, particle swarm optimization combined with IoT has been designed [18]. The hill climbing algorithm has been used for maximum power point tracking in a photovoltaic power harvesting system for smart nodes of the Internet of Things (IoT) [19]. Ensemble learning methods are used as base classifiers to identify the cross-semantic and structural features of clouds to avoid massive data transmission in cloud computing systems [20]. Improved simulated annealing has been applied to the physiological detection and positioning of visitors in a park [21]. The variety of bio-inspired algorithms is far beyond the scope of our explanation.
Particle swarm optimization (PSO) is a swarm-based random search algorithm that is simpler than other nature-inspired algorithms in terms of complexity and converges quickly [22]. PSO has been implemented to solve many real-time problems; for example, PSO with the gradient descent method and a support vector machine has been used to identify malicious data [23]. PSO has been used to allocate resources automatically for distributed computation over large data sets, ensuring that subgroups receive resources directly proportional to their computations [24]. The
Conclusion 253
meta-heuristic PSO has been used to make node deployment decisions fulfilling multiple objectives, such as QoS for energy consumption, delay, and throughput, while ensuring failure tolerance [25]. For the past two decades, PSO has also been used in the biomedical field and in image processing. A single-particle optimizer-based local search has been used for DNA sequence compression [26]. It is evident that in some applications, the performance of PSO has been further improved by integrating it with other algorithms. A hybridization of PSO with dragonfly optimization and gray wolf optimization has been used to find a feature set for the early detection of Alzheimer's disease [27]. A multi-swarm technique further improves the overall outcome for a problem: initially, swarms are used for unconstrained local optimization, and the results are then optimized globally using constrained PSO [28]. For optimum utilization of resources, PSO has been proposed to optimize routes based on the arrival time and waiting tolerance of passengers [29]. Thus, PSO is applied to solve a large set of engineering problems. To further improve performance, it can be hybridized with other algorithms. PSO and its allies can be used for linear, non-linear, unconstrained, and constrained problem optimization, and these algorithms can be applied to solving single- and multi-objective problems.
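The canonical PSO velocity and position updates can be sketched as follows (our one-dimensional illustration with commonly used parameter values, not a specific cited implementation):

```python
import random

def pso(f, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        lo=-10.0, hi=10.0):
    """Canonical PSO on one dimension: each particle is pulled toward
    its personal best and the swarm's global best while minimizing f."""
    xs = [random.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = xs[:]                       # personal bests
    gbest = min(xs, key=f)              # global best
    for _ in range(iters):
        for i in range(n_particles):
            vs[i] = (w * vs[i]
                     + c1 * random.random() * (pbest[i] - xs[i])
                     + c2 * random.random() * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
                if f(xs[i]) < f(gbest):
                    gbest = xs[i]
    return gbest

best = pso(lambda x: (x - 2.0) ** 2)   # quadratic with minimum at x = 2
```

The inertia weight w and the cognitive/social coefficients c1 and c2 are the usual tuning knobs; the hybrid variants cited above replace or augment parts of this loop.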
Conventional nature-inspired algorithms and swarm optimization can be used for optimization, but IoT, 5G, and 6G systems are becoming more and more complex. Conventional nature-inspired optimization algorithms like genetic algorithms, PSO, and simulated annealing improve the overall performance of such systems. To enhance it further, other optimization algorithms have been adapted. The cat swarm optimization algorithm and its allied versions can be applied in various fields like wireless sensor networks, electrical engineering, and communication engineering. These algorithms are based on a cat's lazy behavior in contrast to its alert mode, modeled as seeking (local optimization) and tracing (global optimization) modes [30]. The cuckoo optimization algorithm is generally used for continuous problems that are non-linear in nature. It is based on the cuckoo's habit of laying eggs in other birds' nests and selecting the best nest, where its eggs can be nurtured without fail. This algorithm is used for complex, NP-hard problems like data fusion in wireless networks, neural network training, manufacturing scheduling, and nurse scheduling [31]. The mine blast optimization algorithm (MBA) is based on clearing a minefield while targeting the most explosive (optimum) mine; the pieces of shrapnel created collide with other mines, and this is used as the direction for the next step [32]. This algorithm is well suited to constrained optimization problems. MBA and its hybrid and advanced versions have been used to find solutions to
various real-time problems. Water cycle algorithm (WCA) is based on the flow of
the river, i.e., how the river is formed at high mountains due to snowfall, flows
down to the valley, and ends up in the sea. This algorithm is widely used in various
mechanical engineering design problems [33]. WCA and its various hybrid and
advanced versions have been applied to solve a large number of engineering pro-
blems like spam e-mail detection, PI controller, and energy optimization [34].
The algorithms discussed so far can perform well, but there is a limitation: if an algorithm gets stuck prematurely in a local optimum, the global optimal value may not be obtained. This problem can be solved by using anarchic
Using other neighboring devices for computation may speed up the process, but it may interfere with the user's privacy. Various security encryption algorithms are available to ensure the user's privacy, and combining user privacy with a cloud-based power system gives the best results. Identity-based encryption with an equality test ensures no leakage of data, protecting the user's security [47].
Apart from the above, various nature-inspired algorithms also find applications in the fields of biomedicine, healthcare, and industrial automation. For the early detection of diseases, various features have been incorporated into wearable devices like smartwatches and smartphones. Algorithms like GA, swarm intelligence, neural networks, and their elite variants are used for predictive analysis and to optimize resources [48]. These algorithms are also widely used in industrial automation, for example in warehouse replenishment, where each new position allocation of goods is a permutation, which mimics the behavior of chromosomes well. Fault diagnosis of rotating machinery has been optimized by a quantum genetic algorithm [49]. Robotic assembly planning has been implemented using GA, ant colony optimization, the memetic algorithm, the flower pollination algorithm, and teaching-learning algorithms [50]. The scope extends beyond the areas mentioned, i.e., nature-inspired algorithms can be applied to optimize a large variety of real-time problems.
10.2.1 Challenges
Scalability: As IoT systems continue to grow in size and complexity, the scalability of bio-inspired optimization algorithms becomes a significant challenge. These algorithms may take a long time to converge, or a wrong convergence point during local optimization may prevent convergence during global optimization. For smart applications, time and resource optimization is desirable.
Real-time constraints: Real-time applications require a flexible system that can adapt to real-time changes, which is a limitation, as many bio-inspired optimization algorithms are computationally complex.
Robustness: IoT applications often operate in dynamic and unpredictable envir-
onments. Therefore, the robustness of bio-inspired optimization algorithms to
changing conditions and perturbations is crucial.
Interpretability: Bio-inspired optimization algorithms are often considered as
“black-box” approaches, which may limit their interpretability and applicability in
certain IoT domains.
10.2.2 Potentials
Adaptability: Bio-inspired optimization algorithms are inherently adaptable and
can dynamically adjust to changes in the environment, making them suitable for
IoT applications that operate in dynamic and unpredictable environments.
Efficiency: Bio-inspired optimization algorithms are often highly efficient, making
them suitable for IoT applications with limited computational resources.
Multimodal optimization: Many IoT applications involve the optimization of
multiple objectives or criteria. Bio-inspired optimization algorithms can handle
such problems efficiently, making them suitable for IoT applications with complex
optimization objectives.
Diversity: Bio-inspired optimization algorithms can generate diverse solutions,
making them suitable for IoT applications that require diverse solutions to be
explored.
In summary, bio-inspired optimization algorithms have great potential for
IoT applications, but they also face challenges that need to be addressed to fully
exploit their potential. Researchers need to modify existing algorithms or propose
new algorithms and techniques that can overcome these challenges while lever-
aging the potentials of these algorithms to address the optimization problems in
IoT applications.
Other algorithms: Cat optimization, flower pollination optimization, lion and ant optimization, etc., are nature-inspired algorithms based on particular virtues of animals. For example, cat optimization uses the cat's virtue of remaining attentive while spending much of its time in sleep mode. These algorithms are used in smart city planning to optimize power consumption, network management resources, etc.
Overall, nature-inspired computing is an exciting and rapidly evolving field
that is helping to drive innovation in smart city planning. By leveraging the prin-
ciples of natural systems, researchers and planners are developing new algorithms
and techniques that can help cities become more sustainable, efficient, and livable.
A number of algorithms are used in the IoT. The IoT is a network of interconnected devices that collect and exchange data. IoT applications can range from smart homes and wearables to industrial monitoring and smart cities. To process and analyze the vast amount of data generated by these devices, efficient and effective algorithms are essential. Here are some of the most commonly used algorithms in IoT applications:
Machine learning algorithms: Machine learning algorithms are widely used in
IoT applications for predicting, classifying, and clustering data. These algorithms
use statistical models to analyze the data generated by IoT devices, enabling them
to make accurate predictions and decisions.
Data compression algorithms: As IoT devices generate vast amounts of data, it is
often necessary to compress this data to reduce storage and transmission costs. Data
compression algorithms such as Huffman coding and Lempel-Ziv-Welch (LZW)
compression are commonly used in IoT applications.
Encryption algorithms: With the increasing amount of sensitive data generated by
IoT devices, encryption algorithms such as advanced encryption standard (AES)
and RSA are crucial for securing data and protecting privacy.
Optimization algorithms: Optimization algorithms are used to improve the per-
formance of IoT devices and systems. These algorithms aim to maximize efficiency
while minimizing energy consumption and other resources.
Sensor data fusion algorithms: IoT applications often involve data from multiple
sensors. Sensor data fusion algorithms are used to combine the data from these
sensors and provide a more accurate representation of the physical environment.
Time series analysis algorithms: IoT applications often generate time series data,
which can be analyzed using time series analysis algorithms. These algorithms can
detect patterns, anomalies, and trends in the data, enabling more accurate predic-
tions and decisions.
Overall, the algorithms used in IoT applications vary depending on the specific
use case, but the ones mentioned above are some of the most commonly used.
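As a small example of the last category, a z-score detector flags outliers in a sensor time series (an illustrative sketch of our own, not a specific IoT library):

```python
def zscore_anomalies(series, threshold=2.5):
    """Return the indices whose values deviate from the series mean by
    more than `threshold` population standard deviations."""
    n = len(series)
    mean = sum(series) / n
    std = (sum((x - mean) ** 2 for x in series) / n) ** 0.5
    if std == 0:
        return []                       # a constant series has no outliers
    return [i for i, x in enumerate(series)
            if abs(x - mean) / std > threshold]

# A temperature trace with one spurious spike at index 4.
readings = [20.1, 20.3, 19.9, 20.2, 35.0, 20.0, 20.1, 19.8]
spikes = zscore_anomalies(readings)     # -> [4]
```

In practice a sliding window would replace the whole-series statistics so the detector can run on a streaming device, but the idea is the same.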
References
[8] R. Dastres and M. Soori, “Artificial neural network systems,” Int. J. Imaging
Robot., vol. 2021, no. 2, pp. 13–25, 2021, www.ceserp.com/cp-jour
[9] L. Rabiner and B. Juang, “An introduction to hidden Markov models,”
in IEEE ASSP Magazine, vol. 3, no. 1, pp. 4–16, 1986. doi: 10.1109/
MASSP.1986.1165342.
[10] C. J. Peng, K. L. Lee, and G. M. Ingersoll, “An introduction to logistic
regression analysis and reporting,” The Journal of Educational Research, vol.
96, no. 1, pp. 3–14, 2002. doi: 10.1080/00220670209598786.
[11] T. Evgeniou and M. Pontil, “Support vector machines: theory and applica-
tions.” In: G. Paliouras, V. Karkaletsis, C. D. Spyropoulos (eds), Machine
Learning and Its Applications, ACAI 1999. Lecture Notes in Computer
Science, vol. 2049, Springer, Berlin, Heidelberg, 2001. doi: 10.1007/3-540-44673-7_12.
[12] G. Signoretti, M. Silva, P. Andrade, I. Silva, E. Sisinni, and P. Ferrari, “An
evolving TinyML compression algorithm for IoT environments based on
data eccentricity,” Sensors, vol. 21, no. 12, art. 4153, pp. 1–25, 2021. doi: 10.3390/
s21124153.
[13] I. Kuzminykh, M. Yevdokymenko, and V. Sokolov, “Encryption algorithms in IoT: security vs lifetime (how long the device will live),” preprint, 2021. https://www.researchgate.net/publication/353237519_Encryption_Algorithms_in_IoT_Security_vs_Lifetime_How_long_the_device_will_live.
[14] Z. Wang, C. Qin, B. Wan, and W. W. Song, “A comparative study of
common nature-inspired algorithms for continuous function optimization,”
Entropy, vol. 23, no. 7, art. 874, pp. 1–40, 2021. doi: 10.3390/e23070874.
[15] F. G. Mohammadi, F. Shenavarmasouleh, K. Rasheed, T. Taha, M. H.
Amini, and H. R. Arabnia, “The application of evolutionary and nature
inspired algorithms in data science and data analytics,” in 2021 International
Conference on Computational Science and Computational Intelligence
(CSCI), 2021.
[16] A. Garg, R. Arya, and M. P. Singh, “An integrated approach for dual
resource optimization of relay-based mobile edge computing system,”
Concurr. Comput. Pract. Exp., no. February, pp. 1–16, March 2023, doi:
10.1002/cpe.7682.
[17] H. Li, J. Li, M. Liu, Z. Ding, and F. Gong, “Energy harvesting and resource
allocation for cache-enabled UAV based IoT NOMA networks,” IEEE
Trans. Veh. Technol., vol. 70, no. 9, pp. 9625–9630, 2021, doi: 10.1109/
TVT.2021.3098351.
[18] T. H. S. Li, C. Y. Liu, P. H. Kuo, et al., “A three-dimensional adaptive PSO-
based packing algorithm for an IoT-based automated e-fulfillment packaging
system,” IEEE Access, vol. 5, pp. 9188–9205, 2017, doi: 10.1109/
ACCESS.2017.2702715.
[19] X. Liu and E. Sanchez-Sinencio, “A highly efficient ultralow photovoltaic
power harvesting system with MPPT for Internet of Things smart nodes,”
IEEE Trans. Very Large Scale Integr. Syst., vol. 23, no. 12, pp. 3065–3075,
2015, doi: 10.1109/TVLSI.2014.2387167.
[20] J. Zhang, P. Liu, F. Zhang, H. Iwabuchi, A. A. D. H. E. A. De Moura, and V.
H. C. De Albuquerque, “Ensemble meteorological cloud classification meets
internet of dependable and controllable things,” IEEE Internet Things J.,
vol. 8, no. 5, pp. 3323–3330, 2021, doi: 10.1109/JIOT.2020.3043289.
[21] C. C. Lin, W. Y. Liu, and Y. W. Lu, “Three-dimensional Internet-of-Things
deployment with optimal management service benefits for smart tourism
services in forest recreation parks,” IEEE Access, vol. 7, pp. 182366–
182380, 2019, doi: 10.1109/ACCESS.2019.2960212.
[22] A. Erturk, M. K. Gullu, D. Cesmeci, D. Gercek, and S. Erturk, “Spatial
resolution enhancement of hyperspectral images using unmixing and binary
particle swarm optimization,” IEEE Geosci. Remote Sens. Lett., vol. 11, no.
12, pp. 2100–2104, 2014, doi: 10.1109/LGRS.2014.2320135.
[23] J. Liu, D. Yang, M. Lian, and M. Li, “Research on intrusion detection based
on particle swarm optimization in IoT,” IEEE Access, vol. 9, pp. 38254–
38268, 2021, doi: 10.1109/ACCESS.2021.3063671.
[24] Q. Chen, J. Sun, and V. Palade, “Distributed contribution-based quantum-
behaved particle swarm optimization with controlled diversity for large-
scale global optimization problems,” IEEE Access, vol. 7, pp. 150093–
150104, 2019, doi: 10.1109/ACCESS.2019.2944196.
[25] M. Z. Hasan and H. Al-Rizzo, “Optimization of sensor deployment for
industrial internet of things using a multiswarm algorithm,” IEEE Internet
Things J., vol. 6, no. 6, pp. 10344–10362, 2019, doi: 10.1109/JIOT.2019.
2938486.
[26] Z. Zhu, J. Zhou, Z. Ji, and Y. H. Shi, “DNA sequence compression using
adaptive particle swarm optimization-based memetic algorithm,” IEEE
Trans. Evol. Comput., vol. 15, no. 5, pp. 643–658, 2011, doi: 10.1109/TEVC.
2011.2160399.
[27] Y. F. Khan, B. Kaushik, M. Khalid Imam Rahmani, and M. E. Ahmed, “HSI-
LFS-BERT: novel hybrid swarm intelligence based linguistics feature
selection and computational intelligent model for Alzheimer’s prediction
using audio transcript,” IEEE Access, vol. 10, no. November, pp. 126990–
127004, 2022, doi: 10.1109/ACCESS.2022.3223681.
[28] Q. Zhao and C. Li, “Two-stage multi-swarm particle swarm optimizer for
unconstrained and constrained global optimization,” IEEE Access, vol. 8, no.
1, pp. 124905–124927, 2020, doi: 10.1109/ACCESS.2020.3007743.
[29] F. Chen, Y. Lu, L. Liu, and Q. Zhu, “Route optimization of customized
buses based on optimistic and pessimistic values,” IEEE Access, vol. 11, no.
January, pp. 11016–11023, 2023, doi: 10.1109/ACCESS.2023.3241235.
[30] A. M. Ahmed, T. A. Rashid, and S. A. M. Saeed, “Cat swarm optimization
algorithm: a survey and performance evaluation,” Comput. Intell. Neurosci.,
vol. 2020, Article ID 4854895, 2020, doi: 10.1155/2020/4854895.