
Anuj Kumar, Sangeeta Pant, Mangey Ram, Om Prakash Yadav (Eds.)
Meta-heuristic Optimization Techniques
De Gruyter Series on the Applications of Mathematics in Engineering and Information Sciences

Edited by
Mangey Ram

Volume 10
Meta-heuristic Optimization Techniques
Applications in Engineering

Edited by
Anuj Kumar, Sangeeta Pant, Mangey Ram
and Om Prakash Yadav
Editors
Assoc. Prof. Dr. Anuj Kumar
Department of Mathematics
College of Engineering Studies
University of Petroleum & Energy Studies
PO Bidholi (Via Prem Nagar)
Dehradun 248007
India
[email protected]

Prof. Dr. Mangey Ram
Department of Mathematics, Computer Sciences and Engineering
Graphic Era University
566/6 Bell Road
Clement Town, Dehradun 248002
India
[email protected]

Asst. Prof. Dr. Sangeeta Pant
Department of Mathematics
College of Engineering Studies
University of Petroleum & Energy Studies
PO Bidholi (Via Prem Nagar)
Dehradun 248007
India
[email protected]

Prof. Dr. Om Prakash Yadav
Industrial and Manufacturing Engineering Department
North Dakota State University
1410 14th Avenue North
Civil & Ind. Engr. Room 202
Fargo, ND 58102
USA
[email protected]

ISBN 978-3-11-071617-7
e-ISBN (PDF) 978-3-11-071621-4
e-ISBN (EPUB) 978-3-11-071625-2
ISSN 2626-5427

Library of Congress Control Number: 2021951120

Bibliographic information published by the Deutsche Nationalbibliothek


The Deutsche Nationalbibliothek lists this publication in the Deutsche Nationalbibliografie;
detailed bibliographic data are available on the Internet at http://dnb.dnb.de.

© 2022 Walter de Gruyter GmbH, Berlin/Boston


Cover image: MF3d/E+/Getty Images
Typesetting: Integra Software Services Pvt. Ltd.
Printing and binding: CPI books GmbH, Leck

www.degruyter.com
Preface
This book is motivated by the fact that meta-heuristic optimization techniques
have become very popular among researchers and engineers over the last two
decades. The widespread applicability of various optimization methods makes them
a focal point for researchers. A few years ago, one could hardly imagine that schools
of fish, genes, or the behavior of bats and ants could be used to design optimization
algorithms, yet nature has a solution for every problem. Nature-inspired
optimization algorithms attempt to find good approximations to the solutions of
complex optimization problems in various fields of science, engineering, and
industry, and they are applicable in almost all spheres of human life where
parameters must be optimized.
The objective of this book is to cover recent developments and the engineering
applications of meta-heuristic optimization techniques.

Dr. Anuj Kumar


Dr. Sangeeta Pant
Prof. Dr. Mangey Ram
Prof. Dr. Om Prakash Yadav

https://doi.org/10.1515/9783110716214-202
Acknowledgments
The editors are thankful to De Gruyter for providing this opportunity. We acknowledge
the professional and technical support provided by De Gruyter and the reviewers. We
are also thankful to all the chapter contributors for their contribution.

https://doi.org/10.1515/9783110716214-203
Contents
Preface V

Acknowledgments VII

Nitin Uniyal, Sangeeta Pant, Anuj Kumar, Prashant Pant


Nature-inspired metaheuristic algorithms for optimization 1

Sina Asherlou, Aref Yelghi, Erhan Burak Pancar, Şeref Oruç


An optimization approach for highway alignment using metaheuristic
algorithms 11

Sukhveer Singh, Sandeep Singh


A method for solving bi-objective transportation problem under fuzzy
environment 37

Subhendu Ruidas, Mijanur Rahaman Seikh, Prasun Kumar Nayak


Application of particle swarm optimization technique in an interval-valued
EPQ model 51

Nagendra Singh, Yogendra Kumar


Optimization techniques used for designing economic electrical power
distribution 79

Bidyut B. Gogoi, Anita Kumari, S. Nirmala, A. Kartik


Meta-heuristic optimization techniques in navigation constellation
design 93

Ch. Swetha Devi, V. Ranga Rao, Pushpalatha Sarla


Correlation and heuristic analysis of polymer-modified concrete subjected
to alternate wetting and drying 107

Kamal Kumar, Amit Sharma


q-Rung orthopair fuzzy entropy measure and its application in multi-attribute
decision-making 117

Soumendra Goala, Palash Dutta, Bornali Saikia


A fuzzy multi-criteria decision-making approach for crime linkage utilizing
resemblance function under hesitant fuzzy environment 129

Kumar Anupam, Pankaj Kumar Goley, Anil Yadav


Integrating novel-modified TOPSIS with central composite design to model
and optimize O2 delignification process in pulp and paper industry 145

T. K. Priyanka, Manoj K. Singh, Anuj Kumar


Deep learning for satellite-based data analysis 173

Editors’ Biography 189

Index 191
Nitin Uniyal, Sangeeta Pant, Anuj Kumar, Prashant Pant
Nature-inspired metaheuristic algorithms
for optimization
Abstract: With the advancement of technologies in engineering science, there has
been a constant demand for optimization and further refinement. Gradually but with
considerable consistency, multi-objective metaheuristic optimization methods have
extended their roots into almost all complex engineering problems. This chapter
gives an overview of a most interesting class, namely nature-inspired optimization
algorithms, which evolved over time through inspiration from nature. Mathematical
analysis is kept prominent in all sections so that young researchers can better
understand the methods.

Keywords: metaheuristics, ant colony optimization (ACO), particle swarm optimization (PSO), cuckoo search algorithm (CSA), ant-lion optimization (ALO)

1 Introduction
Optimization, an art of approaching the maximum, can be precisely defined as a
pursuit of the best value of an objective with the aid of some well-defined scheme
called an optimization algorithm. Mathematically, it involves seeking the extremum
of a multivariable function $f(x_1, x_2, \ldots, x_n)$ with due regard to a few
restrictions imposed on the input variables $x_i$, $1 \le i \le n$. The function is
called an objective function and the restrictions are called constraints. The
evolution of the civilized modern human from the ancient species of Homo sapiens is
enough to sense the inevitability of optimization in every natural phenomenon.
Modern engineering relies heavily on optimization techniques. For instance, an
architect designs an optimized cost model of a building, keeping in mind the
constraints of the budget and the building construction laws of the locality.
Likewise, gadget companies always aim to maximize a gadget's features under
restrictions on the chip size and the price bracket of the product.

A description of the most general optimization problem is as follows:

Given $f : D^n \to \mathbb{R}^k$ $(k \ge 1)$ and $D^n = D_1 \times D_2 \times \cdots \times D_n \ni x = (x_1, x_2, \ldots, x_n)$,

find $x^* = (x_1^*, x_2^*, \ldots, x_n^*) \in D^n$ which satisfies

Nitin Uniyal, Anuj Kumar, Department of Mathematics, University of Petroleum and Energy
Studies, Dehradun, India
Sangeeta Pant, Department of Mathematics, University of Petroleum and Energy Studies,
Dehradun, India, e-mail: [email protected]
Prashant Pant, Munich School of Engineering, Technical University of Munich, Germany

https://doi.org/10.1515/9783110716214-001

$$h_j(x) = 0, \quad 1 \le j \le J \tag{1}$$

$$g_m(x) \ge 0, \quad 1 \le m \le M \tag{2}$$

$$p_l(x) \le 0, \quad 1 \le l \le L \tag{3}$$

and minimizes

$$f(x) = \left[f_1(x), f_2(x), \ldots, f_k(x)\right]^{T} \in \mathbb{R}^k \tag{4}$$

where $n$ is the number of parameters to be optimized. The function $f(x)$ is called
an objective function and its domain $D^n = D_1 \times D_2 \times \cdots \times D_n$
is called the decision variable space. $D_i$, either continuous or discrete, is the
search space of $x_i$, the $i$th optimization variable. A feasible solution $x^*$
that minimizes (maximizes) the objective function is called an optimal solution. The
equalities for $h_j$ and the inequalities for $g_m$ and $p_l$ are the constraints.
An $n$-dimensional vector $x = (x_1, x_2, \ldots, x_n)$ satisfying all constraints
is called a feasible solution, and the subset $X \subseteq D^n$ containing all
feasible solutions is the feasible decision space (also called the feasible region
or search space). The range of $X$ under $f$, that is, $f(X) \subseteq \mathbb{R}^k$,
is the feasible criterion space.
It is evident that a single solution is rarely the best with respect to all
objectives simultaneously when $k \ge 2$, that is, in the multi-objective case.
Attention therefore turns to a refined set $X_p \subseteq X$ of solutions, each of
which may still be improved for some $f_i(x)$, but only at the cost of degrading at
least one $f_{j \ne i}(x)$. It is worth mentioning that each member $x^* \in X_p$
dominates (and is not dominated by) each member $x \in X \setminus X_p$ in the sense
of functional values.

Mathematically, $\forall x^* \in X_p$, $\forall x \in X \setminus X_p$:
$f_i(x^*) \le f_i(x)$ holds for each $i \in \{1, 2, \ldots, k\}$, and
$f_i(x^*) < f_i(x)$ for at least one $i \in \{1, 2, \ldots, k\}$. This refined set
$X_p$ is called the Pareto optimal solution set and its image $f(X_p)$ is called the
Pareto front or Pareto boundary (Figure 1). For any two solutions
$x_1^*, x_2^* \in X_p$, neither dominates the other in all objectives; hence they
are incomparable.

Figure 1: Example of Pareto front (in red); point C is dominated by both points A and B.

An optimal solution $x^*$ is called the global optimum if $f(x^*) - f(x) \ge 0$
(or $\le 0$) for all $x \in D^n$. Since nonlinear programming does not, in general,
yield the global optimum straightaway, settling for a local best $x^{**}$ in some
open set $L \subseteq D^n$ is justifiable [13]. The criterion for such a local
optimum $x^{**}$ is $f(x^{**}) - f(x) \ge 0$ (or $\le 0$) for all $x \in L$. The
initial point $x_0 \in L$ plays a vital role in the convergence rate of any local
optimization algorithm and must be chosen carefully. If an algorithm
is independent of this choice, we say that the algorithm is globally convergent [1].
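To make the dominance relation concrete, here is a minimal illustrative sketch (not from the chapter; all names are hypothetical) that tests dominance between two objective vectors and extracts the nondominated subset of a finite candidate set, assuming minimization of every objective:

```python
from typing import List, Sequence

def dominates(fx: Sequence[float], fy: Sequence[float]) -> bool:
    # fx dominates fy if it is no worse in every objective
    # and strictly better in at least one (minimization).
    return all(a <= b for a, b in zip(fx, fy)) and any(a < b for a, b in zip(fx, fy))

def pareto_front(points: List[Sequence[float]]) -> List[Sequence[float]]:
    # Keep exactly those points not dominated by any other point.
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# Mirroring Figure 1: C = (3, 3) is dominated by A = (1, 2) and B = (2, 1).
print(pareto_front([(1, 2), (2, 1), (3, 3)]))  # -> [(1, 2), (2, 1)]
```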

2 Nature-inspired metaheuristics
Whereas a heuristic algorithm discovers the optimal solution in the search space of
an optimization problem by "trial and error" with a weak guarantee of success, a
metaheuristic algorithm performs better. The latter uses a trade-off between
randomization and local search and can also be used for seeking global optima. In
metaheuristic algorithms, an exhaustive global exploration of the search space is
combined with an intense search in the local region around some current good
solution to obtain the best solutions [18]. This chapter presents an overview of
some nature-inspired metaheuristic techniques, which have gained popularity due to
their robustness and flexibility of application in different branches of
engineering.

2.1 Ant colony optimization (ACO)

Proposed by Dorigo [2], the ant colony optimization (ACO) algorithm draws its
inspiration from the foraging behavior of some ant species. The natural community
behavior of these species in discovering the shortest path from their nest to a
food source helps them survive. Though this behavior is purely collective, a single
ant cannot succeed in finding the shortest path [3]. An ant first randomly explores
the area surrounding its nest, locates a food source, and carries an amount of food
back to the nest. During the return journey, it deposits a chemical pheromone on the
ground, which indirectly guides other members to the food source. Apart from marking
the trail to the food source, the deposited pheromone also reflects the quantity and
quality of the food. After some time, the ants tend to follow the trail of
high-quality pheromone, which eventually becomes the trail with the highest
accumulation of pheromone as well. Since ants prefer to follow trails with larger
amounts of pheromone, eventually all ants converge to the shortest path between
their nest and the food source (Figure 2).

Figure 2: The behavior of real ant movements.

Among all the variants of ACO algorithms proposed, the ant system [4] is the first,
bearing a close resemblance to the Travelling Salesman Problem, where the 2-tuple
$(i, j)$ represents the edge joining city $i$ to city $j$. Denote by $\tau_{ij}$ the
amount of pheromone accumulated on edge $(i, j)$, which is updated by all $m$ ants
of the group as per the equation

$$\tau_{ij} \leftarrow (1 - \rho)\,\tau_{ij} + \sum_{k=1}^{m} \Delta\tau_{ij}^{k} \tag{5}$$

where $\rho$ is the evaporation rate and $\Delta\tau_{ij}^{k}$ is the amount of
pheromone laid by ant $k$ on edge $(i, j)$, defined as

$$\Delta\tau_{ij}^{k} = \begin{cases} \dfrac{Q}{L_k} & \text{if ant } k \text{ uses edge } (i, j) \text{ in its tour} \\[4pt] 0 & \text{otherwise} \end{cases} \tag{6}$$

where $L_k$ is the tour length of ant $k$ and $Q$ is a constant.

Denoting by $d_{ij}$ the distance between nodes $i$ and $j$, the desirability
$\eta_{ij}$ of the movement from $i$ to $j$ must be inversely proportional to it,
that is, $\eta_{ij} = 1/d_{ij}$. If $A_i$ denotes the set of all vertices adjacent
to $i$ and not yet visited, then for an ant $k$ currently at node $i$, the
probability of its movement to node $j$ is given stochastically by

$$p_{ij}^{k} = \frac{\tau_{ij}\,\eta_{ij}^{\beta}}{\sum_{z \in A_i} \tau_{iz}\,\eta_{iz}^{\beta}} \tag{7}$$

The parameter $\beta \ge 1$ controls the influence of $\eta_{ij}$.
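As a hedged illustration of equations (5)–(7) (a sketch, not the authors' implementation; the data structures and parameter values are assumptions), the following shows one pheromone update and the stochastic choice of the next city:

```python
import random

def update_pheromone(tau, tours, lengths, rho=0.5, Q=1.0):
    # Equation (5): evaporate on every edge, then let each ant deposit
    # Q / L_k (equation (6)) on the edges of its tour.
    for edge in tau:
        tau[edge] *= (1.0 - rho)
    for tour, L in zip(tours, lengths):
        for i, j in zip(tour, tour[1:]):
            tau[(i, j)] += Q / L

def next_city(i, unvisited, tau, dist, beta=2.0):
    # Equation (7): choose j with probability proportional to
    # tau_ij * eta_ij^beta, where eta_ij = 1 / d_ij.
    cities = list(unvisited)
    weights = [tau[(i, j)] * (1.0 / dist[(i, j)]) ** beta for j in cities]
    return random.choices(cities, weights=weights, k=1)[0]
```

Here `tau` and `dist` are dictionaries keyed by edges $(i, j)$, and `tours` holds each ant's visited city sequence.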



2.2 Particle swarm optimization (PSO)

Proposed by R. Eberhart and J. Kennedy [5, 8], the population-based search algorithm
particle swarm optimization (PSO) mimics the behavior of species in which there is
no permanent leader of the group. Temporary leaders lead the group for small
intervals of time during the search for food; the temporary leader is the member
lying in the closest vicinity of the food source. Schools of fish and flocks of
birds, for instance, belong to this category. This behavior optimizes the quest for
food after finitely many shifts of the temporary leader position.

Let $x^i$ and $v^i$, respectively, be the position and velocity of the $i$th
particle in a population of $N$ particles (or individuals) of the search space,
where each particle represents a probable solution. The coordinates of the position
and velocity of each particle are updated iteratively. The iterative scheme depends
upon the personal best position and other parameters of the $i$th particle in the
group. This induces flocking around the space of best solutions in the search
space. Let $p_t^i$ denote the personal best position of particle $i$ at time $t$;
then the next update is given by

$$p_{t+1}^{i} = \begin{cases} p_t^{i} & \text{if } f\left(x_{t+1}^{i}\right) \ge f\left(p_t^{i}\right) \\ x_{t+1}^{i} & \text{if } f\left(x_{t+1}^{i}\right) < f\left(p_t^{i}\right) \end{cases} \tag{8}$$

where $f$ denotes the fitness function (or objective function).

The personal best position of the $i$th particle is the position at which the
fitness function has the least value. The algorithm is referred to as lbest or gbest
depending upon the choice of a local or global neighborhood of the particle. The
gbest model determines the best particle of the whole swarm by choosing the best of
all personal best positions. Denoting it by $\hat{p}$, the global best particle is
mathematically represented as follows:

$$\hat{p}_t \in \left\{p_t^{0}, p_t^{1}, \ldots, p_t^{N}\right\} \quad \text{s.t.} \quad f(\hat{p}_t) = \min\left\{f\left(p_t^{0}\right), f\left(p_t^{1}\right), \ldots, f\left(p_t^{N}\right)\right\} \tag{9}$$

The velocity and position of the $i$th particle are updated by the following
equations:

$$v_{t+1}^{i} = v_t^{i} + c_1 r_1 \left(p_t^{i} - x_t^{i}\right) + c_2 r_2 \left(\hat{p}_t - x_t^{i}\right) \tag{10}$$

$$x_{t+1}^{i} = x_t^{i} + v_{t+1}^{i} \tag{11}$$

Here, the numbers $r_1$ and $r_2$ are members of two independent bounded sequences
$\langle r_1 \rangle$ and $\langle r_2 \rangle$ in the open interval $(0, 1)$. To
scale these numbers, the constants $c_1$ and $c_2$ (called acceleration
coefficients) are used for stochastic purposes; they help keep each particle amid
the pbest and gbest positions. The process is iterative in nature and stops after a
specified number of steps or when the velocity update is zero [14].
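A minimal sketch of one PSO iteration, implementing equations (8), (10), and (11) directly (an illustration assuming pure-Python lists and a minimization objective `f`; inertia weights and bound handling are omitted):

```python
import random

def pso_step(xs, vs, pbest, gbest, f, c1=2.0, c2=2.0):
    # xs, vs, pbest: per-particle position, velocity, personal-best vectors.
    for i in range(len(xs)):
        r1, r2 = random.random(), random.random()
        vs[i] = [v + c1 * r1 * (p - x) + c2 * r2 * (g - x)   # equation (10)
                 for v, p, g, x in zip(vs[i], pbest[i], gbest, xs[i])]
        xs[i] = [x + v for x, v in zip(xs[i], vs[i])]        # equation (11)
        if f(xs[i]) < f(pbest[i]):                           # equation (8)
            pbest[i] = list(xs[i])
    return min(pbest, key=f)  # equation (9): gbest among personal bests
```

One call advances the whole swarm by a single time step and returns the updated global best position, which the caller feeds back in as `gbest` for the next step.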

2.3 Cuckoo search algorithm (CSA)

Developed by Xin-She Yang and Suash Deb, the cuckoo search algorithm (CSA) is
inspired by the obligate brood parasitism of certain species of birds [19].
Particular species such as the ani and guira cuckoos are very aggressive in
reproduction and follow a parasitic strategy of laying their eggs in the nests of
other host birds. If the host bird realizes that the eggs do not belong to it, it
either removes the eggs or abandons the nest permanently. Studies show that a cuckoo
mimics the eggs of the host bird in color and shape, which reduces the chances of
the eggs being abandoned by the host. Moreover, newly born cuckoo chicks mimic the
call of the host chicks to deceive the host mother into feeding them [7, 10].

Let $n$ be the total number of host nests, and let $p_a \in [0, 1]$ be the
probability that an egg laid by a cuckoo is identified by the host bird, which in
turn leads to taking new random nests (or solutions). Each egg represents a
solution, and a cuckoo egg represents a new solution which is potentially better
than the ordinary solution (i.e., a host egg). For simplicity, if each nest holds a
single egg, then the new solution for cuckoo $i$ under the assumption of Lévy
flights is determined by the following equation:

$$x_{t+1}^{i} = x_t^{i} + \alpha \oplus \text{Lévy}(\lambda) \tag{12}$$

where $\alpha > 0$ is a step size depending upon the problem and the symbol $\oplus$
denotes the entry-wise product. Since the current position depends solely on the
previous position, (12) is a stochastic equation. Further, the Lévy flight speeds up
the search in the local neighborhood of the best solutions obtained so far. The Lévy
flight thus entails a random walk in which the step length is randomized using a
Lévy distribution with infinite mean and infinite variance:

$$\text{Lévy} \sim u = t^{-\lambda}, \quad 1 < \lambda \le 3 \tag{13}$$

The pseudo-code for cuckoo search is given in Figure 3.

Figure 3: The pseudo-code for cuckoo search.
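One common way to realize the Lévy-distributed steps of equations (12) and (13) is Mantegna's algorithm; the chapter does not prescribe a particular sampler, so the following is an illustrative sketch under that assumption:

```python
import math
import random

def levy_step(lam=1.5):
    # Mantegna's algorithm: a heavy-tailed step length whose distribution
    # approximates Levy(lam), 1 < lam <= 3 (cf. equation (13)).
    sigma = (math.gamma(1 + lam) * math.sin(math.pi * lam / 2)
             / (math.gamma((1 + lam) / 2) * lam * 2 ** ((lam - 1) / 2))) ** (1 / lam)
    u = random.gauss(0.0, sigma)
    v = random.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / lam)

def new_cuckoo(x, alpha=0.01):
    # Equation (12): entry-wise Levy perturbation of the current solution x.
    return [xi + alpha * levy_step() for xi in x]
```

In a full run, a fraction $p_a$ of the worst nests would then be abandoned and replaced by new random solutions in each generation.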

2.4 Ant-lion optimization (ALO)

The ant-lion optimization (ALO) algorithm is inspired by the foraging behavior of
antlion larvae, which belong to the family Myrmeleontidae in the order Neuroptera
(net-winged insects) [16]. An antlion hunts by digging a conical pit in sand, in
which an insect (preferably an ant) gets trapped by the hunter's cunning (Figure 4).
The size of the pit made by an antlion is directly proportional to its level of
hunger and to the size of the moon as estimated by an internal lunar clock [6, 7].
Inside the pit, the antlion hides underneath the base as a sit-and-wait predator
[17]. Once prey falls inside the trap, it cannot escape easily: as soon as the
antlion realizes that prey has fallen in, it starts digging out more sand and
throwing it toward the edge of the pit.

Figure 4: Movement of prey inside the antlion's trap.

The mathematical model for ALO describes the random walk of ants as follows:

$$X_i = \left[\,0,\; r(1),\; r(1) + r(2),\; \ldots,\; \sum_{j=1}^{T} r(j)\,\right] \tag{14}$$

where $T$ is the maximum number of iterations and $r(j)$ is defined stochastically,
using a random number rand distributed uniformly in the interval $[0, 1]$, as

$$r(t) = \begin{cases} 1, & \text{if rand} > 0.5 \\ -1, & \text{if rand} \le 0.5 \end{cases} \tag{15}$$

Equation (14) cannot be used directly and must be scaled to a position in the actual
search space according to the bounds:

$$X_i^t = \frac{\left(X_i^t - a_i\right)\left(d_i^t - c_i^t\right)}{\left(b_i - a_i\right)} + c_i^t \tag{16}$$

where $a_i$ and $b_i$ are the minimum and maximum of the random walk of the $i$th
variable, and $c_i^t$ and $d_i^t$ are the minimum and maximum of the $i$th variable
at the $t$th iteration.

The ants' movements are affected by the antlions' traps through the equations

$$c_i^t = \mathrm{Antlion}_j^t + c^t, \qquad d_i^t = \mathrm{Antlion}_j^t + d^t$$

where $\mathrm{Antlion}_j^t$ is the position of the $j$th antlion at the $t$th
iteration, selected by the roulette wheel depending on the fitness. An antlion
updates its position to catch new prey as per the proposed equation:

$$\mathrm{Antlion}_j^t = \mathrm{Ant}_i^t \quad \text{if} \quad f\!\left(\mathrm{Ant}_i^t\right) > f\!\left(\mathrm{Antlion}_j^t\right) \tag{17}$$
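A hedged sketch of the random walk of equations (14)–(16) for a single variable (an illustration; the guard against a degenerate walk is an added assumption, not part of the chapter's model):

```python
import random

def random_walk(T):
    # Equations (14) and (15): cumulative sum of +/-1 steps, starting at 0.
    walk, x = [0.0], 0.0
    for _ in range(T):
        x += 1.0 if random.random() > 0.5 else -1.0
        walk.append(x)
    return walk

def scale_walk(walk, c_t, d_t):
    # Equation (16): map the walk from its own range [a, b] onto the
    # current variable bounds [c_t, d_t] around the selected antlion.
    a, b = min(walk), max(walk)
    span = (b - a) or 1.0  # guard: avoid division by zero for a flat walk
    return [(x - a) * (d_t - c_t) / span + c_t for x in walk]
```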

3 Conclusion
Nature-inspired optimization algorithms are a class of stochastic search and
optimization procedures of metaheuristic character, inspired by the principles of
Darwin's theory of biological evolution. They maintain a population of structures
that evolve according to rules of selection and other operators (mostly inspired by
biological phenomena). They are expected to provide good-quality, if not optimal,
solutions to problems generally unsolved by exact methods. Self-adaptation and
robustness are key features of nature-inspired optimization algorithms that make
them prevail over other methods. The literature makes it reasonably evident that
they have engrossed many researchers and practitioners since their inception.
Applications of nature-inspired optimization algorithms include very complex
problems in various fields of engineering and science [9, 11, 12, 15]. In this
chapter, the basic components and working of some of the most prominent
nature-inspired optimization algorithms, namely ACO, PSO, CSA, and ALO, were
explained in detail, keeping the mathematical analysis prominent.

References
[1] Bergh F. 2001. An analysis of particle swarm optimizers, Ph.D. Thesis, University of Pretoria.
[2] Dorigo M. 2006. Artificial ants as a computational intelligence technique, IEEE Computational
Intelligence Magazine, 28–49.
[3] Dorigo M. 1994. Learning by probabilistic Boolean networks, in IEEE International Conference
on Neural Networks, 887–891.
[4] Dorigo M. 1992. Optimization, learning and natural algorithms, Ph.D. thesis, Politecnico di
Milano.
[5] Eberhart R., Kennedy J. (1995). A new optimizer using particle swarm theory, in International
Symposium on Micro Machine and Human Science, 39–43.
[6] Goodenough J., McGuire B., Jacob E. Perspectives on animal behavior. John Wiley & Sons,
Hoboken, New Jersey, USA 2009.
[7] Grzimek B., Schlager N., Olendrof D., Mcdade M.C. Grzimek’s animal life encyclopedia.
Michigan, Gale Farmington Hills, 2004.
[8] Kennedy J., Eberhart R. Particle swarm optimization. IEEE International Conference on Neural
Networks 1995, 4, 1942–1948.
[9] Kumar A., Pant S., Ram M. (2019a) Grey wolf optimizer approach to the reliability-cost
optimization of residual heat removal system of a nuclear power plant safety system. Quality
and Reliability Engineering International. Wiley, 1–12. https://doi.org/10.1002/qre.2499.
[10] Kumar A., Pant S., Singh S.B. 2016 Reliability Optimization of Complex System by Using
Cuckoos Search algorithm, Mathematical Concepts and Applications in Mechanical
Engineering and Mechatronics, IGI Global, 95–112
[11] Kumar A., Pant S., Ram M. Complex system reliability analysis and optimization. In: Ram M.,
Davim J.P. eds Advanced Mathematical Techniques in Science and Engineering. River
Publisher, Denmark, 185–199, 2018.
[12] Negi G., Kumar A., Pant S., Ram M. GWO: a review and applications. International Journal of
System Assurance Engineering and Management 2021, 12, 1–8.
https://doi.org/10.1007/s13198-020-00995-8.
[13] Omran M.G.H., Particle swarm optimization methods for pattern recognition and image
processing, Ph.D. Thesis, University of Pretoria, 2004.
[14] Pant S., Kumar A., Ram M. Reliability Optimization: A Particle Swarm Approach. Advances in
Reliability and System Engineering. Springer International Publishing, Midtown Manhattan,
New York City, USA, 163–187, 2017.
[15] Pant S., Kumar A., Ram M. Solution of nonlinear systems of equations via metaheuristics.
International Journal of Mathematical, Engineering and Management Sciences 2020, 4(5),
1108–1126.
[16] Mirjalili S. The ant lion optimizer. Advances in Engineering Software 2015, 83, 80–98.
[17] Scharf I., Ovadia O. Factors influencing site abandonment and site selection in a sit-and-wait
predator: A review of pit-building antlion larvae. Journal of Insect Behavior 2006, 19, 197–218.
[18] Uniyal N., Pant S., Kumar A. An overview of few nature inspired optimization techniques and
its reliability applications. International Journal of Mathematical, Engineering and
Management Sciences 2020, 5(4), 732–743.
[19] Yang X.S., Deb S. Cuckoo search via Lévy flights. Proc. of World Congress on Nature &
Biologically Inspired Computing (NaBIC 2009), December 2009, India. IEEE Publications, USA,
210–214, 2009.
Sina Asherlou, Aref Yelghi, Erhan Burak Pancar, Şeref Oruç
An optimization approach for highway
alignment using metaheuristic algorithms
Abstract: One important problem in highway vertical profile design is that grade
lines inspection is not carried out accurately and in a short time. This leads to
loss of time and an inability to perform a complete inspection at all points,
causing out-of-control formations. Grade lines optimization consists of meeting the
design criteria while minimizing the total amount of earthwork. Today, grade lines
design is done by trial and error; yet, since it involves many mathematical
calculations, this can lead to a significant loss of time. In recent years, research
has been done on grade lines optimization using metaheuristic algorithms. In a
highway project, determining the optimum grade lines requires an earthwork
optimization. The optimum grade lines with the minimum cost should be determined by
considering various design restrictions. However, it is undeniable that such a grade
lines design can only be done using a multistep optimization technique. Here,
earthwork balancing, that is, an optimum grade lines design that equalizes cutting
and filling, was compared between three different optimization methods (PSO, FA,
ABC), and an analysis was done to show that the algorithm presented here can be
successfully used for grade lines optimization. In this study, we implemented
software for grade lines optimization and found that the results were quite
satisfactory.

Keywords: highway, grade lines, optimization, earthwork, metaheuristic algorithm

1 Introduction
It is no exaggeration to state that optimization is everywhere, from engineering
design to business planning and from the course of the Internet to vacation
planning. To make the best use of valuable resources, there is always a need for
solutions under various restrictions. Mathematical optimization or programming is
used in design planning problems. Thus, the most cost-effective alternative with the
highest
Sina Asherlou, Department of Civil Engineering, Avrasya University, Trabzon, Turkey, e-mail:
[email protected]
Aref Yelghi, Department of Computer Engineering, İstanbul Ayvansaray University, İstanbul,
Turkey, e-mail: [email protected]
Erhan Burak Pancar, Department of Civil Engineering, Ondokuz Mayıs University, Samsun,
Turkey, e-mail: [email protected]
Şeref Oruç, Department of Civil Engineering, Karadeniz Technical University, Trabzon, Turkey,
e-mail: [email protected]

https://doi.org/10.1515/9783110716214-002

accessible performance is reached by maximizing the desirable factors under these
restrictions and minimizing the undesirable ones. In optimization design, the
purpose is to maximize production efficiency [1].
The body of a road consists of two parts: the infrastructure and the
superstructure. The infrastructure is the part under the surface that is formed
according to the project after earthworks (cutting and filling), that is, the
subsoil. The performance of the superstructure is directly associated with the
physical properties and condition of the subsoil. In this regard, the infrastructure
must always meet the requirements. The superstructure is a stratified structure that
distributes the traffic loads over the subsoil; it is the part of the road designed
to carry the traffic on the road throughout its economic life without being subject
to any major deformations or cracks while withstanding any environmental or climatic
conditions [2, 3].
In a highway study, computer-based solutions are produced for the rapid control and
inspection of certain critical points. With these solutions, problems during
application can be eliminated. In a highway, the tangent design has a significant
effect on road safety, construction costs, and operating costs. With the increased
use of computers, optimum design has been studied since the 1960s [4].

Theoretically, there are an infinite number of alternatives to evaluate in the
tangent optimization problem for highways. Some previous applications have
formulated the optimization problems and their cost functions ambiguously. Thus,
using fast and efficient search algorithms to solve these problems has become
inevitable [5].
Chew et al. [6] conducted numerical studies for calculating the amount of earthwork
needed on vertical profiles. Kim et al. [5, 7, 8] and Jha and Schonfeld [9] used
genetic algorithms (GAs) to calculate the areas of land volumes in the vertical
profile view, without using the vertical profile. Easa [10] conducted studies on the
vertical profile and created a vertical profile template to calculate land volumes
using a linear program.
There are other studies on grade lines optimization to minimize earthworks. For
example, Goh et al. [11] conducted research on a dynamic programming and state
parameterization model using linear programming, Moreb [12] on linear programming,
Fwa et al. [13] on GAs, and Göktepe et al. [14] on grade lines optimization using
dynamic programming.
Earthwork volumes calculated over a profile view may lead to very rough and
sometimes severely incorrect analyses. To obtain more efficient results, Göktepe
et al. [15–18] developed their grade lines optimization studies into the "weighted
ground line method." The authors found the grade lines passing through the volume
centers of the sections, enabling a very sensitive calculation of the earthwork
volume among the design templates.
In his research in 2013, Özkan created several alternative horizontal curves in the
early stages of the geometric design of a highway and completed the geometric design
outline for each alternative horizontal curve by creating a vertical curve of the
highway. Since there can be an infinite number of vertical curves for each
horizontal curve, he created an optimization method to achieve the most economical
design and focused on optimizing the vertical curve [4].
Bosurgi et al. reconstructed the topography of a study area using particle swarm
optimization (PSO), considering geometric and environmental restrictions [19].

Al-sayed et al. conducted research on determining the location of a vertical curve
on the vertical profile section based on the amount of earthwork [20].

Ghanizadeh and Heidarabadizadeh [21] conducted an optimization analysis of earthwork
costs over the height of a point of intersection and the vertical curve length using
cost-based optimization, PSO, and GA.

Ghanizadeh et al. [22] carried out a study to minimize the sum of the absolute
variance between existing grounds, widely used in grade lines design.
As seen in the literature, optimization is crucial for earthwork costs in highway
designs. In this regard, the current study aimed to minimize the amount of earthwork
for grade lines design and to obtain an equal amount of cutting and filling.

The findings for the grade lines optimization algorithm designed here were very
satisfactory in terms of highway earthwork costs. Software was designed to make
instant calculations using this algorithm. The algorithm obtained here paved the
way, as a first step, toward implementing grade lines optimization software.

2 Materials and methods


In this study, an analysis was done for optimum grade lines design using three
different metaheuristic methods. The areas between the grade lines and the ground
line, above and below it, were calculated. In this context, we used PSO [23],
developed by Kennedy and Eberhart in 1995, implemented in MATLAB; the artificial bee
colony (ABC) method proposed by Karaboğa in 2005 [24]; and the firefly algorithm
(FA) [25] proposed by Xin-She Yang. With these methods, we drew the grade lines over
the ground line and calculated, one by one, the areas above the line (the cutting
zones) and the areas below the line (the filling parts). The cutting–filling
difference was calculated, and these calculations were repeated over a number of
iterations according to the optimization methods. The repetitions continued until
this difference reached a value close to zero, and the line found as a result was
accepted as the optimum grade lines. In this way, the metaheuristic methods drive
the area difference toward a value close to zero, and the line found can be
determined by optimization according to the area. In the software, the area is
calculated using the cross method. The algorithm prepared here was integrated into
the metaheuristic algorithms. All three methods were assessed and compared over 30
runs of 2,000 iterations each, on the Y and X–Y axes.
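A minimal sketch of the cost evaluation described above, assuming the cross method refers to the shoelace (cross-multiplication) polygon-area formula and that both lines are sampled at common stations (both are reasonable readings, not statements from the chapter):

```python
def shoelace_area(pts):
    # Signed polygon area by the cross (shoelace) method: |sum| / 2.
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2.0

def cut_fill_difference(stations, ground, grade):
    # Objective used by the metaheuristics: |cut area - fill area| between
    # the ground line and the grade line, accumulated strip by strip.
    cut = fill = 0.0
    for k in range(len(stations) - 1):
        dx = stations[k + 1] - stations[k]
        # Mean height of ground above grade over the strip
        # (positive -> cutting, negative -> filling); a strip where the
        # two lines cross is approximated by its mean height.
        h = ((ground[k] - grade[k]) + (ground[k + 1] - grade[k + 1])) / 2.0
        if h > 0:
            cut += h * dx
        else:
            fill -= h * dx
    return abs(cut - fill)
```

Driving `cut_fill_difference` toward zero is exactly the stopping behavior described above: each metaheuristic proposes candidate grade line elevations and keeps those that shrink this value.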

2.1 Particle swarm optimization

The initial ideas of Kennedy (a social psychologist) and Eberhart (an electrical
engineer) about particle swarms aimed at using analogues of simple social
interaction, rather than individual cognitive abilities, in producing computational
intelligence [23]. The word particle here refers to a bee in a colony or to a bird
in a flock. Each individual or particle in the swarm behaves in a distributed
fashion using its own intelligence and the group's intelligence. Thus, if one
particle finds a better path to the food, the rest of the swarm (even those far from
it) can immediately follow that path. Optimization methods based on swarm
intelligence are behaviorally inspired, whereas GA, by contrast, is a method based
on evolutionary algorithms. PSO performs better on continuous problems. In the
context of multivariate optimization problems, each particle begins at a random
position. Each particle is assumed to have two properties: one is position and the
other is velocity. Each particle navigates the design area and remembers the best
position that it has discovered (for finding the food source or the objective
function value). The particles communicate with each other regarding good positions
and adjust their own position and velocity based on this information [26]. The
flowchart for this algorithm is shown in Figure 1.

2.2 Firefly algorithm

The FA models the behavior of fireflies, which react to each other's brightness. The
algorithm has three basic rules: (1) all fireflies are of one gender; therefore,
they move toward each other regardless of gender. (2) Attraction is directly
proportional to brightness: the less bright ones move toward the brighter ones, and
the greater the distance between two fireflies, the less the attraction. Fireflies
that cannot find a brighter one move randomly. (3) The attractiveness of a firefly
is determined by its objective function value [28]. The flowchart for this algorithm
is shown in Figure 2.

2.3 Artificial bee colony

The minimal model of forage selection that leads to the emergence of the collective
intelligence of honeybee swarms consists of three basic components. In other words,
there are three groups of bees in a colony: employed bees, onlookers, and scouts
[29]. In the proposed model, half of the colony consists of employed bees and the
other half consists of onlookers. An employed bee works on each source of nectar, so
the number of nectar sources is equal to the number of employed bees [30]. The
overall algorithm structure of ABC optimization is shown in Figure 3.

Figure 1: Flowchart of particle swarm optimization [27]. (Steps: set the parameters; evaluate the particle solutions and costs; determine the local best for each particle and the global best over all solutions; define new solutions for each particle and re-evaluate; repeat until the stopping criteria are met; report the best cost and its solution.)

2.4 Vertical profile design

Since it is the part where the grade lines are determined, vertical profile design
is a very important stage of highway projects. Essentially, this stage affects road
construction costs, road safety, and performance. Vertical profile design includes
the ground line, ground elevations, points of intersection, and the straight
segments and vertical curves of the grade lines. Costs and the amount of earthwork
can be minimized by the accurate and fine design of each of the above-mentioned
criteria. In other words, individual optimization of each criterion in vertical
profile design can lead to a significant effect on cost reduction. The optimum
design of the vertical profile elements will undeniably affect the amount of
earthwork.

Figure 2: Flowchart of the firefly algorithm [23]. (Steps: set the parameters; evaluate the firefly solutions and costs; determine the best per dimension; define new solutions for each firefly; repeat until the stopping criteria are met; report the best cost and its solution.)

2.5 Ground line

In a vertical profile design, first, the ground elevations and cross-section kilometers
of the axis line of the highway project are determined. The vertical profile section
should be designed to show the distances on the X-axis, that is, the horizontal axis,
and the elevations on the Y-axis. In this context, the sample ground line design dis-
cussed here is shown in Figure 4.

2.6 Grade lines

Today, in vertical profile design, grade lines are drawn by determining intersection
points. Yet, the most important factor in grade lines design is where the grade
lines will pass. The most important issue in grade lines design is to minimize the
amount of cutting and filling between the ground line and the grade lines, that is,
to try to keep the cutting–filling difference close to zero. Grade lines design on
the sample ground line is shown in Figure 5.

Figure 3: Flowchart of artificial bee colony. (Steps: set the parameters; evaluate the solutions and costs of the employed, onlooker, and scout bees; repeat until the stopping criteria are met; report the best cost and its solution.)

Figure 4: The profile of the ground line (X-axis: distance; Y-axis: elevation).

Figure 5: The profile of the grade line (ground line and grade line plotted as elevation versus distance).

2.7 Cutting–filling areas

Elements of the grade lines consist of straight lines and vertical curves at
intersection points. As explained above, optimizing the design of each element has a
key role in minimizing the amount of earthwork. Within this study, a method was
developed to determine the optimum linear grade lines that keep cutting and filling
equal while minimizing the amount of earthwork.

Care is taken to route the straight parts of the grade lines so as to equalize the
areas below and above the ground line. However, a straight line routed like this can
still fail to equalize cutting and filling and to minimize the amount of earthwork,
even when adjusted by rule of thumb. Thus, here, we calculated and equalized the
areas of the regions above (cutting) and below (filling) the grade lines, taking
into account the points where the two lines intersect. We also kept these areas to a
minimum while minimizing the amount of earthwork. If the areas shown in Figure 6 are
equal, the optimum grade lines design has been achieved.

When optimization is done over the above-mentioned areas, the difference between the
cutting and filling areas becomes zero. This means that the amount of cut earth can
be completely used for the filling. The grade lines designed here were examined on
the Y-axis of the coordinate system, as well as on the X–Y axis.

Figure 6: An example of calculated area by using filling and cutting (regions marked Cut and Fill between the ground line and the grade line).

2.8 Optimization on the Y-axis

The sample ground elevations used here are assumed to lie in the middle of a highway
project. The points where the straight lines intersect, which are the intersection
points on the vertical profile section, were first determined randomly and
approximately. However, as in Figure 7, these points were varied by forming a matrix
at a certain interval on the Y-axis. This way, the optimum grade lines design is
achieved by trial and error over thousands of candidate positions of the
intersection points.

Figure 7: Controlling on the Y-axis (the peak intersection point of the grade line is varied in the +Y and −Y directions).

For the method, we need to define a certain distance over which the grade lines can
vary up and down at the starting, intersection, and ending points on the Y-axis. The
current variables are shown in Table 1. How much the variables can change in the
positive and negative directions is given in equation (1).

Table 1: Decision variables based on the Y-axis.

Y_Start | Y_Peak | Y_End

$$\begin{aligned}
Y_{S,\min} &= Y_S - 20, & Y_{S,\max} &= Y_S + 20 \\
Y_{P,\min} &= Y_P - 20, & Y_{P,\max} &= Y_P + 20 \\
Y_{E,\min} &= Y_E - 20, & Y_{E,\max} &= Y_E + 20
\end{aligned} \tag{1}$$

2.9 Optimization on the X–Y axis

Grade lines design and optimization are carried out by examining both axes of the
coordinate system at the same time, that is, the X–Y axis. At this stage, the design is
done by minimizing the difference between cutting–filling. Our analysis on the X–Y
axis is shown in Figure 8.

Figure 8: Controlling on the X and Y axes (the intersection point is varied in the ±X and ±Y directions).

The variables for inspections on both axes are given in Table 2. In this regard,
certain changes are made in both directions.

Table 2: Decision variables based on the Y and X axes.

Y_Start | Y_Peak | Y_End | X_Start | X_Peak | X_End

The equations used on the Y-axis are explained in equation (1). The equations
required for the X-axis are shown in the following equation:

$$\begin{aligned}
X_{S,\min} &= X_S - 5, & X_{S,\max} &= X_S + 5 \\
X_{P,\min} &= X_P - 5, & X_{P,\max} &= X_P + 5 \\
X_{E,\min} &= X_E - 5, & X_{E,\max} &= X_E + 5
\end{aligned} \tag{2}$$

The grade lines designed using the methods described above should ensure the least
amount of earthwork and equal cutting–filling.
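An illustrative encoding of these decision variables and their box bounds (a sketch; the function name and data layout are assumptions, while the ±20 and ±5 margins follow equations (1) and (2)):

```python
def make_bounds(points, dy=20.0, dx=5.0, move_x=False):
    # points = [(x_start, y_start), (x_peak, y_peak), (x_end, y_end)].
    # Y mode optimizes [y_s, y_p, y_e]; X-Y mode appends [x_s, x_p, x_e].
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    lower = [y - dy for y in ys]        # equation (1): Y - 20
    upper = [y + dy for y in ys]        # equation (1): Y + 20
    if move_x:
        lower += [x - dx for x in xs]   # equation (2): X - 5
        upper += [x + dx for x in xs]   # equation (2): X + 5
    return lower, upper
```

Any of the three metaheuristics then searches inside these bounds for the vector minimizing the cutting–filling area difference.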

3 Results
In the context of this research, a grade lines algorithm was designed that minimizes
the amount of earthwork while equalizing the amounts of cutting and filling, in
order to minimize costs. The flowchart of the straight-section part of this grade
lines calculation algorithm is given in Figure 9. Using this algorithm, one of the
most important steps of highway vertical profile design, finding an intersection
point, was realized, and the infrastructure (flowchart) that allows for the optimum
grade lines was built.

The grade lines can be designed in a way that ensures a near-zero excess area in
cutting–filling. The algorithm prepared here was integrated into the metaheuristic
algorithms. Then, all three methods were assessed and compared over 30 runs of 2,000
iterations each on the Y and X–Y axes. As a result of this comparison, the diagrams
obtained from the methods for the Y-axis are shown in Figure 10 and the diagrams
from the methods for the X–Y axis are shown in Figure 11.
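A compact sketch of this experimental protocol (illustrative only; the solver interface is a hypothetical assumption, not the authors' code):

```python
import statistics

def compare(methods, objective, bounds, runs=30, iters=2000, pop=5):
    # methods: name -> solve(objective, bounds, iters, pop) -> final |cut - fill|.
    # Mirrors the protocol above: 30 independent runs of 2,000 iterations,
    # population size 5, repeated for each of PSO, FA, and ABC.
    for name, solve in methods.items():
        finals = [solve(objective, bounds, iters, pop) for _ in range(runs)]
        print(f"{name}: mean |cut-fill| = {statistics.mean(finals):.3e}, "
              f"best = {min(finals):.3e}")
```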

4 Discussion
Despite being quite professional for highway design, the software used today does
not offer a method for optimum grade lines design. Here, we took the preliminary
steps toward optimum grade lines design using the designed algorithm.

Within this study, all the details obtained with the optimization software, which
was designed by considering a part of a highway project as a sample, are explained
step by step.

Figure 9: The flowchart of the project. (Steps: draw the ground line; run one of the metaheuristics PSO, FA, or ABC (Figures 1–3); for up to 2,000 iterations, propose a line for each individual, calculate the intersections, and calculate the fill and cut areas; finally, compute the optimal line found by the metaheuristic and display it.)

Figure 10: The project applied on the Y-axis (comparison chart over 30 runs; Y-axis: the difference of area on a logarithmic scale; series: ABC, FA, PSO).



Figure 11: The project applied on the Y and X axes (comparison chart over 30 runs; Y-axis: the difference of area on a logarithmic scale; series: ABC, FA, PSO).

This sample highway project consists of completely random distances and elevations.
The project includes only two line segments, with a starting point, an ending point,
and a vertical curve intersection point. The optimization of the line segments alone
is given in Figure 12. Here, the X-axis shows the distance and the Y-axis shows the
elevation levels.

Figure 12: Optimization of straight ground line.



The peak of the ground elevations was taken as the intersection point of the grade
lines in the sample project. Since the software makes thousands of changes to the
intersection point in the X and Y directions and can thereby relocate it, the
initial position of the intersection point is not important. Figure 13 shows where
the intersection point was selected.

Figure 13: Determining the peak point (the peak of the ground line is taken as the intersection point of the grade lines).

The designed algorithm coding was tested with three different metaheuristic methods
and the findings were recorded. The methods used here (PSO, FA, ABC) were shown
along with their flowcharts. The application on the Y-axis, integrating the software
with the above methods, is shown in Figures 14 and 16, while the application on the
X–Y axis is shown separately in Figures 15 and 17.

4.1 Optimum application on the Y-axis

Findings for the sample project that were controlled by all three metaheuristic
methods on the Y-axis are given in Table 3.
Accordingly, all data were obtained by 30 trials of all three methods with 2,000
iterations and a population of 5. The diagrams obtained here are shown in Figure 18
for PSO, Figure 19 for FA, and Figure 20 for ABC.

4.2 Optimum application on the X–Y axis

Findings for the sample project that were controlled by all three metaheuristic
methods on the X–Y axis are given in Table 4.
Accordingly, all data were obtained by 30 trials of all three methods with 2,000
iterations and a population of 5. The diagrams obtained here are shown in Figure 21
for PSO, Figure 22 for FA, and Figure 23 for ABC.

Figure 14: Project applied based on Y-axis.

Figure 15: Project applied based on the Y and X axes.



Figure 16: Project applied based on the Y-axis, by magnifying.

Figure 17: Project applied based on Y and X axes by magnifying.



Table 3: The difference of area using ABC, FA, and PSO over 30 runs for the Y-axis. (For each of the 30 runs, the table lists the final cutting–filling area difference achieved by ABC, FA, and PSO; the numerical entries are not recoverable from the source and are therefore omitted.)


Figure 18: Fitness function (area) of the PSO algorithm based on the Y-axis over one run.

Based on these diagrams, the optimum grade lines design finally drawn by all three
methods, along with the distances and elevations, was determined. The optimum grade
lines are shown in Figure 24 for PSO, Figure 25 for FA, and Figure 26 for ABC.

5 Conclusion
According to the data obtained here using the three methods (ABC, FA, PSO), the
findings were quite satisfactory, although, among the methods used, ABC gave much
less consistent findings than the others, while PSO gave very stable findings with
almost the optimum value.

As shown in the diagrams, PSO can draw the grade lines on the sample with almost
zero difference in cutting–filling. For FA, this difference varies between zero and
one; for ABC, the variation in the difference is much more unstable. According to
these numerical data, the PSO values appear more reliable and stable. Also, for PSO,
the findings of the analyses on the Y and X–Y axes are very close to each other.
Future research can make it possible to find the grade lines by performing optimum
volume calculations, automatically determining intersection points, and adding the
vertical curves at those points. In this regard, the algorithm designed here is an
important step toward optimum soil volume balancing in the future.

Figure 19: Fitness function (area) of the FA algorithm based on the Y-axis over one run.

Figure 20: Fitness function (area) of the ABC algorithm based on the Y-axis over one run.

Table 4: The difference of area using ABC, FA, and PSO over 30 runs for the Y and X axes. (For each of the 30 runs, the table lists the final cutting–filling area difference achieved by ABC, FA, and PSO; the numerical entries are not recoverable from the source and are therefore omitted.)

Figure 21: Fitness function (area) of the PSO algorithm based on the X and Y axes over one run.

Figure 22: Fitness function (area) of the FA algorithm based on the X and Y axes over one run.

Figure 23: Fitness function (area) of the ABC algorithm based on the X and Y axes over one run.

Figure 24: The best result of PSO.



Figure 25: The best result of FA.

Figure 26: The best result of ABC.



References
[1] Yang X.S. Nature-inspired Metaheuristic Algorithms, Second edition. Luniver press, United
Kingdom, 2010.
[2] Karayolları Esnek Üstyapılar Projelendirme Rehberi. Ankara: Karayolları Genel Müdürlüğü
Teknik Araştırma Dairesi Başkanlığı Üstyapı Şubesi Müdürlüğü, 1995.
[3] Sütaş İ., Güven Ö. Karayolu Inşaatında Uygulama Ve Projelendirme. Teknik Kitaplar Yayınevi,
İstanbul, 1986.
[4] Özkan E. Optimization of Highway Vertical Alignment by Direct Search Technique. M.Sc.
Thesis, Middle East Technical University, Graduate School of Natural and Applied Sciences,
Ankara, Turkey, 2013.
[5] Kim E., Jha K.M., Son B. Improving the computational efficiency of highway alignment
optimization models through a stepwise genetic algorithms approach. Transportation
Research 2005, Part B39, 339–360.
[6] Chew E.P., Goh C.J., Fwa T.F. Simultaneous optimization of horizontal and vertical alignments
for highways. Transportation Research 1989, Part B, 23(5), 315–329.
[7] Kim E., Jha K.M., Son B. A stepwise highway alignment optimization using genetic algorithms.
Transportation Research Board 2002; Paper No 03: 4150–4158.
[8] Kim E., Jha K.M. Highway alignment optimization incorporating bridges and tunnels. Journal
Of Transportation Engineering 2007, ASCE, 71–81.
[9] Jha K.M., Schonfeld P. A highway alignment optimization model using geographic information
systems. Transportation Research 2004, Part A, 38, 455–481.
[10] Easa S.M. Selection of roadway grades that minimize earthwork cost using linear
programming. Transportation Research 1988, Part A, 22(2), 121–136.
[11] Goh C.J., Chew E.P., Fwa T.F. Discrete and continuous model for computation of optimal
vertical highway alignment. Transportation Research 1988, Part B, 22(9), 399–409.
[12] Moreb A.A. Linear programming model for finding optimal roadway grades that minimize
earthwork cost. European Journal of Operational Research 1995, Part 93, 148–154.
[13] Fwa T.F. Optimal vertical alignment analysis for highway design. Journal Of Transportation
2002, Engineering/September/October, 395–402.
[14] Goktepe A.B., Lav A.H., Altun S. Dynamic optimization algorithm for vertical alignment of
highways. Mathematical and Computational Applications 2005, 10(3), 341–350.
[15] Goktepe A.B., Lav A.H. Method for balancing cut-fill and minimizing the amount of earthwork
in the geometric design of highways. Journal of Transportation Engineering 2003, © ASCE /
September/October, 564–571.
[16] Goktepe A.B., Lav A.H. Method for optimizing earthwork considering soil properties in the
geometric design of highways. Journal Of Surveying Engineering 2004, © ASCE / November,
183–190.
[17] Goktepe A.B., Lav A.H., Altun S. Dynamic optimization algorithm for vertical alignment of
highways. Mathematical and Computational Applications 2005, 10(3), 341–350.
[18] Goktepe A.B., Altun S., Ahmedzade P. Optimization of vertical alignment of highways utilizing
discrete dynamic programming and weighted ground line. Turkish Journal of Engineering and
Environmental Science 2009, 33, 105–116.
[19] Bosurgi G., Pellegrino O., Sollazzo G. A PSO highway alignment optimization algorithm
considering environmental constraints. Advances in Transportation Studies 2013, 31.
[20] Al-Sobky S. An optimization approach for highway vertical alignment using the earthwork
balance condition. World Applied Sciences Journal 2014, 29(7), 884–891.
[21] Ghanizadeh A.R., Heidarabadizadeh N. Optimization of vertical alignment of highways in
terms of earthwork cost using colliding bodies optimization algorithm. Iran University of
Science and Teknology 2018, 8(4), 657–674.
An optimization approach for highway alignment using metaheuristic algorithms 35

[22] Ghanizadeh A.R., Heidarabadizadeh N., Mahmoodabadi M.J. Effect of objective function on
the optimization of highway vertical alignment by means of Metaheuristic algorithms. Civil
Engineering Infrastructures Journal 2020, 53(1), 115–136.
[23] Kennedy J., Ve Eberhart R. Particle swarm optimization, Proceedings of the. 1995 IEEE
International Conference on Neural Networks, Kasım 1995, New Jersey, Bildiriler Kitabı,
1942–1948.
[24] Karaboga D. An idea based on honey bee swarm for numerıcal optımızatıon, 2005.
[25] Yang X.S. Nature-inspired Optimization Algorithms. Academic Press, 2020.
[26] Reynolds C.W. Flocks, herds and schools: A distributed behavioral model. SIGGRAPH
Computer Graphics 1987, 21(4), 25–34.
[27] Hassan R., Cohanim B., De Weck O., Ve Venter G. Structural Dynamics and Materials
Conference, A Comparison of Particle Swarm Optimization and the Genetic Algorithm.
American Institute of Aeronautics and Astronautics, Texas, 2005.
[28] Yang X. Nature-inspired Metaheuristic Algorithms, Second edition. Luniver Press, United
Kingdom, 2010.
[29] Karaboga D., Gorkemli B., Ozturk C., Ve Karaboga N. A comprehensive survey: Artificial bee
colony (ABC) algorithm and applications. Artificial Intelligence Review 2014, 42(1), 21–57.
[30] Karaboga D., Akay B., Ve Ozturk C. Modeling Decisions for Artificial Intelligence. Torra V.,
Narukawa Y., Yoshida Y. (editors), Springer Berlin Heidelberg, Berlin, 2007.
Sukhveer Singh, Sandeep Singh
A method for solving bi-objective
transportation problem under fuzzy
environment
Abstract: In this chapter, a bi-objective fuzzy transportation problem (BoFTP) is
considered in which the total fuzzy cost (FC) and fuzzy time (FT) of transportation
are minimized without taking their priorities into account. In the literature, no
technique is available for finding the efficient solutions of a BoFTP under
uncertainty. So, we aim at formulating a BoFTP in an uncertain environment along
with a novel algorithm to find its efficient solutions. The proposed algorithm
gives the optimal solution faster than other techniques available in the literature
for the given uncertain transportation problem (TP). Moreover, it avoids
degeneracy and requires little computational effort.

Keywords: ranking function, efficient or optimal solution, transportation problem (TP), fuzzy transportation problem (FTP)

1 Introduction
Practical applications frequently give rise to a large class of mathematical programming
problems. For instance, a product may be transported from factories
(sources) to retail stores (destinations). One must know the amount of the product
available as well as the demand for the product. The difficulty is that the different
routes of the network joining the sources to the destinations have different
costs linked with them. Therefore, we aim at calculating the minimum-cost routing
of products from points of supply to points of destination, and this problem is
named the cost-minimizing transportation problem (TP). Many researchers have discussed
in detail the single-objective TP [19–22], which minimizes the duration of transportation,
as well as the multiple-objective TP [15–17]. Practically, a decision-maker does
not know the exact transportation cost and time, which creates uncertainty in
cost and time. Hence, it is interesting to deal with the TP in an uncertain environment.
Zadeh [25] gave the concept of fuzzy set (FS) for handling uncertainty that occurs
due to imprecision instead of randomness. The concept of decision-making in fuzzy

Sukhveer Singh, Department of Mathematics, Indian Institute of Technology, Roorkee, Uttarak-


hand, e-mail: [email protected]
Sandeep Singh, Department of Mathematics, Akal University, Talwanndi sabo, Bhatinda, Punjab,
e-mail: [email protected]

https://doi.org/10.1515/9783110716214-003

environment was given by Bellmann and Zadeh [2]. After this innovative work, many
authors, namely S. C. Fang [7], H. Rommelfanger [18], and H. Tanaka et al. [24], have
studied fuzzy linear programming (LP) techniques.
In the literature, there are many transportation models to which fuzzy LP
has been applied, as well as approaches to solve the multi-objective fuzzy TP. Along this line,
Chanas [5] developed fuzzy programming for multi-objective LP by using a parametric
approach. Tanaka and Asai [23] introduced the fuzzy LP problem in a fuzzy environment.
Further, Zimmermann [26] made use of the intersection of all fuzzy constraints
and goals by proposing a fuzzy multi-criteria decision-making (MCDM) approach.
Lai and Hwang [10] discussed multi-objective LP problems taking all parameters
to have a triangular fuzzy possibility distribution. Bit [3] considered a fuzzy programming
approach to the MCDM TP where the constraints are of equality type. Later on, Bit
et al. [4] studied a fuzzy programming approach to the multi-objective solid TP. Also,
various authors [1, 6, 8, 14] worked on developing different models for solving fuzzy
multi-objective TPs.
In this chapter, a novel algorithm has been developed to find the fuzzy optimal
value of a bi-objective fuzzy TP. The proposed algorithm gives the optimal solution faster
than other techniques available in the literature for the given uncertain TP.
Moreover, it avoids degeneracy and demands little computational effort.

2 Basic preliminaries
In this section, we discuss the basic structures [9], operations [9], and ranking
formulas (RF) [11].

2.1 Definitions

Definition 1. A crisp set $A \subseteq X$ with characteristic function $\mu_A$ assigns a value of either zero or one to every member of $X$. This function can be generalized to a function $\mu_{\tilde{A}}$ in such a way that the values assigned to the elements of the universal set $X$ fall within a specified range $[0, 1]$, that is, $\mu_{\tilde{A}}: X \to [0, 1]$. The assigned value depicts the grade of membership of the element in the set $\tilde{A}$.
Here $\mu_{\tilde{A}}$ is known as the membership function (MF), and $\tilde{A} = \{(x, \mu_{\tilde{A}}(x));\ x \in X\}$ is known as a fuzzy set (FS).

Definition 2. A FS $\tilde{A}$, defined on the set of real numbers, is known to be a fuzzy number (FN) if its MF satisfies the following conditions:
A. Continuity: $\mu_{\tilde{A}}: X \to [0, 1]$ is continuous.
B. $\mu_{\tilde{A}}(x) = 0$ for all $x \in (-\infty, c] \cup [d, \infty)$.
C. $\mu_{\tilde{A}}$ is strictly increasing on $[c, a]$ and strictly decreasing on $[b, d]$.
D. $\mu_{\tilde{A}}(x) = 1$ for all $x \in [a, b]$.

Definition 3. A FN $\tilde{A} = (a, b, c, d)$ is called a trapezoidal FN if its MF is
$$\mu_{\tilde{A}}(x) = \begin{cases} \dfrac{x - a}{b - a}, & a \le x < b \\[4pt] 1, & b \le x \le c \\[4pt] \dfrac{d - x}{d - c}, & c < x \le d \\[4pt] 0, & \text{otherwise} \end{cases}$$
where $a, b, c, d \in \mathbb{R}$.

2.2 Arithmetic operations

Let $\tilde{A}_1 = (a_1, b_1, c_1, d_1)$ and $\tilde{A}_2 = (a_2, b_2, c_2, d_2)$ be two trapezoidal FNs. Then
(i) $\tilde{A}_1 \oplus \tilde{A}_2 = (a_1 + a_2,\; b_1 + b_2,\; c_1 + c_2,\; d_1 + d_2)$;
(ii) $\tilde{A}_1 \ominus \tilde{A}_2 = (a_1 - d_2,\; b_1 - c_2,\; c_1 - b_2,\; d_1 - a_2)$;
(iii) $\lambda \odot \tilde{A}_1 = \begin{cases} (\lambda a_1, \lambda b_1, \lambda c_1, \lambda d_1), & \lambda > 0 \\ (\lambda d_1, \lambda c_1, \lambda b_1, \lambda a_1), & \lambda < 0. \end{cases}$
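To make these operations concrete, here is a minimal Python sketch; the TrapFN class and its method names are illustrative, not part of the chapter.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TrapFN:
    """Trapezoidal fuzzy number (a, b, c, d) as used in Section 2."""
    a: float
    b: float
    c: float
    d: float

    def __add__(self, other):
        # Fuzzy addition (i): component-wise sum
        return TrapFN(self.a + other.a, self.b + other.b,
                      self.c + other.c, self.d + other.d)

    def __sub__(self, other):
        # Fuzzy subtraction (ii): note the reversed pairing of components
        return TrapFN(self.a - other.d, self.b - other.c,
                      self.c - other.b, self.d - other.a)

    def scale(self, lam: float):
        # Scalar multiplication (iii); component order flips for lam < 0
        if lam >= 0:
            return TrapFN(lam * self.a, lam * self.b, lam * self.c, lam * self.d)
        return TrapFN(lam * self.d, lam * self.c, lam * self.b, lam * self.a)

# Example: adding two fuzzy costs from Table 1 of Section 5 and halving the sum
t14, t25 = TrapFN(2, 5, 7, 14), TrapFN(3, 5, 8, 16)
print(t14 + t25)               # TrapFN(a=5, b=10, c=15, d=30)
print((t14 + t25).scale(0.5))  # TrapFN(a=2.5, b=5.0, c=7.5, d=15.0)
```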

2.3 Ranking function

For comparing FNs, we use the concept of a RF [12, 13]. A RF is a map $\Re: F(\mathbb{R}) \to \mathbb{R}$, where $F(\mathbb{R})$ is the set of FNs defined on the set of real numbers; it maps each FN into a real number. Let $\tilde{A}$ and $\tilde{B}$ be two FNs; then
(i) $\tilde{A} \ge_{\Re} \tilde{B}$ if $\Re(\tilde{A}) \ge \Re(\tilde{B})$;
(ii) $\tilde{A} >_{\Re} \tilde{B}$ if $\Re(\tilde{A}) > \Re(\tilde{B})$;
(iii) $\tilde{A} =_{\Re} \tilde{B}$ if $\Re(\tilde{A}) = \Re(\tilde{B})$.
3 Formulation of BoFTP
Suppose there are $m$ sources and $n$ destinations. Let $a_i\ (i = 1, 2, \ldots, m)$ be the number of units available at source $i$, $b_j\ (j = 1, 2, \ldots, n)$ be the number of units demanded at destination $j$, $\tilde{c}_{ij}$ and $\tilde{t}_{ij}$ $(i = 1, 2, \ldots, m;\ j = 1, 2, \ldots, n)$, respectively, denote the FC and FT of transporting a unit of the homogeneous product from $i$ to $j$, and $x_{ij}$ be the variable taking the value one or zero according as the entire requirement of destination $j$ is met from source $i$ or not.
Suppose $\tilde{C}$ is the total FC and $\tilde{T}$ is the FT of transportation. Mathematically, we seek the $x_{ij}$'s that minimize the two objective functions
$$\tilde{C} = \sum_{i=1}^{m} \sum_{j=1}^{n} \tilde{c}_{ij} \odot x_{ij} \tag{1}$$
$$\tilde{T} = \max\{\tilde{t}_{ij} \odot x_{ij};\ i = 1, 2, \ldots, m;\ j = 1, 2, \ldots, n\} \tag{2}$$
subject to the constraints
$$\sum_{j=1}^{n} b_j x_{ij} \le a_i;\quad (i = 1, 2, \ldots, m) \tag{3}$$
$$\sum_{i=1}^{m} x_{ij} = 1;\quad (j = 1, 2, \ldots, n) \tag{4}$$
$$x_{ij} = 0 \text{ or } 1\quad (i = 1, 2, \ldots, m;\ j = 1, 2, \ldots, n) \tag{5}$$

4 Solution procedure
The proposed algorithm has three subparts, as given below, to determine the fuzzy
efficient optimal solution of the BoFTP.

4.1 Conversion of two objectives into single objective

Here, we use the following process to convert the BoFTP into a single-objective fuzzy TP:

Step 1: The set of $\tilde{t}_{ij}$'s $(i = 1, 2, \ldots, m;\ j = 1, 2, \ldots, n)$ is partitioned into subsets $L_k\ (k = 1, 2, \ldots, q)$ in the following way. Each subset $L_k$ consists of the $\tilde{t}_{ij}$'s having the same fuzzy value: $L_1$ consists of the $\tilde{t}_{ij}$'s having the largest FT, $L_2$ consists of the $\tilde{t}_{ij}$'s having the next largest FT, and so on; $L_q$ consists of the $\tilde{t}_{ij}$'s having the smallest FT.

Step 2: Next, preemptive priority factors $M_0, M_1, \ldots, M_q$ are allocated to $\tilde{C}, \sum_{L_1} x_{ij}, \ldots, \sum_{L_q} x_{ij}$, respectively. Here, $\sum_{L_k} x_{ij}$ is the sum of the $x_{ij}$'s corresponding to the $\tilde{t}_{ij}$'s belonging to $L_k$. All the $M_k$'s are positive fixed reals such that any combination $\sum_{k=0}^{q} \alpha_k M_k$, where the $\alpha_k$'s are reals that may be positive, zero, or negative, has the same sign as the nonzero $\alpha_k$ with the smallest subscript. In other words, $M_0, M_1, \ldots, M_q$ satisfy $M_0 \gg M_1 \gg \cdots \gg M_q$.

Step 3: The cost–time trade-off fuzzy TP, with $\tilde{T}$ as the first-priority and $\tilde{C}$ as the second-priority objective, is reduced to a single-objective problem seeking the $x_{ij}$'s that minimize
$$Z = M_0 \sum_{i=1}^{m} \sum_{j=1}^{n} \tilde{c}_{ij} \odot x_{ij} + \sum_{k=1}^{q} M_k \sum_{L_k} x_{ij} \tag{6}$$
subject to constraints (3)–(5).

4.2 Proposed algorithm to obtain a fuzzy efficient solution

Step 1: The single-objective FTP obtained in Section 4.1 is transformed into tabular form.
Step 2: Consider the set $S$ of cells $(i, j)$ whose FC has the minimum rank among all entries of its corresponding row and column in the obtained table.
Step 3: Calculate $\tilde{P}_{ij}$ for each cell $(i, j) \in S$, where
$$\tilde{P}_{ij} = \frac{\text{sum of the fuzzy costs of the cells nearest adjacent to cell } (i, j)}{\text{number of fuzzy costs added}}.$$
Step 4: Find the maximum of $\Re(\tilde{P}_{ij})$ and allocate that cell $(i, j)$. If two or more $\Re(\tilde{P}_{ij})$'s have the same value, then allocate the cell which has the least cost among all cells for which the $\Re(\tilde{P}_{ij})$'s are equal. Again, if the costs of these cells are also equal, then randomly allocate one of the cells for which $a_i \ne b_j$.
Step 5: Check whether the requirement of each destination is fulfilled. If not, repeat steps 2–5; otherwise, the obtained fuzzy solution is the fuzzy optimal solution of the fuzzy TP. A sketch of one allocation pass is given below.
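As referenced in Step 5, the following minimal sketch performs one allocation pass; it reuses the TrapFN class and rank function sketched in Section 2, represents the cost table as a dictionary from cells (i, j) to fuzzy costs, and omits the tie-breaking rules of Step 4 for brevity.

```python
def neighbours(cell, table):
    # Nearest adjacent cells (up, down, left, right) still present in the table
    i, j = cell
    return [c for c in [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
            if c in table]

def p_tilde(cell, table):
    # P_ij of Step 3: average of the fuzzy costs of the adjacent cells
    # (assumes the cell has at least one neighbour left in the table)
    adj = [table[c] for c in neighbours(cell, table)]
    total = adj[0]
    for fn in adj[1:]:
        total = total + fn          # fuzzy addition of Section 2.2
    return total.scale(1.0 / len(adj))

def next_allocation(table, S):
    # Step 4 (simplified): allocate the candidate with the largest rank of P_ij
    return max(S, key=lambda cell: rank(p_tilde(cell, table)))
```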

4.3 Procedure to obtain the second and subsequent efficient fuzzy solutions

After the first efficient solution $x_{ij}^{(1)}$ has been obtained for a given fuzzy TP, the second efficient solution $x_{ij}^{(2)}$ is obtained by deleting all cells $(i, j)$ for which $\Re(\tilde{t}_{ij}) \ge \Re\bigl(\tilde{T}(x_{ij}^{(1)})\bigr)$ and solving the resulting problem. Further, the third efficient solution is achieved by removing, in the second cost–time trade-off fuzzy TP, those cells $(i, j)$ for which $\Re(\tilde{t}_{ij}) \ge \Re\bigl(\tilde{T}(x_{ij}^{(2)})\bigr)$. The subsequent efficient solutions of the problem are acquired by working in the same manner.



5 Numerical example
In this section, a numerical problem having four origins and five destinations is
considered, and the algorithm explained in Section 4 is applied. Table 1 displays
the tabular representation of the problem: the upper entries denote the FC of transporting
a unit product from origin $i$ to destination $j$, and the lower entries denote the FT of
transportation from origin $i$ to destination $j$.

Table 1: BoFTP (upper entry of each cell: unit FC; lower entry: FT).

      D1             D2             D3             D4             D5             ai
O1    (0,1,2,5)      (1,2,3,6)      (1,2,3,6)      (2,5,7,14)     (0,0.5,1.5,2)
      (1,3,4,8)      (1,3,4,8)      (3,7,10,20)    (3,5,8,16)     (2,5,7,14)
O2    (1,3,4,8)      (0,0.5,1.5,2)  (0,0.5,1.5,2)  (0,1,2,5)      (3,5,8,16)
      (1,3,4,8)      (2,5,7,14)     (5,7,12,24)    (5,9,14,28)    (3,5,8,16)
O3    (0,0.5,1.5,2)  (2,5,7,14)     (5,6,11,22)    (0,0.5,1.5,2)  (1,4,5,10)
      (3,5,8,16)     (0,1,2,5)      (1,3,4,8)      (1,3,4,8)      (1,3,4,8)
O4    (9,11,20,40)   (13,17,30,60)  (3,7,10,20)    (0,1,2,5)      (1,4,5,10)
      (1,3,4,8)      (2,4,6,12)     (2,5,7,14)     (0,0.5,1.5,2)  (0,0.5,1.5,2)
bj

In Table 1, the upper entry of cell $(i, j)$ depicts the unit FC and the lower entry depicts the FT from origin $O_i$ to destination $D_j$. In the last row and column, the units of the commodity needed at destination $D_j$ are given by $b_j$, whereas the units of the commodity available at origin $O_i$ are given by $a_i$. The aim of the numerical problem is to calculate the $x_{ij}$'s such that the two objective functions
$$\tilde{C} = (0,1,2,5)x_{11} \oplus (1,2,3,6)x_{12} \oplus (1,2,3,6)x_{13} \oplus (2,5,7,14)x_{14} \oplus (0,0.5,1.5,2)x_{15} \oplus (1,3,4,8)x_{21} \oplus (0,0.5,1.5,2)x_{22} \oplus (0,0.5,1.5,2)x_{23} \oplus (0,1,2,5)x_{24} \oplus (3,5,8,16)x_{25} \oplus (0,0.5,1.5,2)x_{31} \oplus (2,5,7,14)x_{32} \oplus (5,6,11,22)x_{33} \oplus (0,0.5,1.5,2)x_{34} \oplus (1,4,5,10)x_{35} \oplus (9,11,20,40)x_{41} \oplus (13,17,30,60)x_{42} \oplus (3,7,10,20)x_{43} \oplus (0,1,2,5)x_{44} \oplus (1,4,5,10)x_{45}$$
and
$$\tilde{T} = \max\{\tilde{t}_{ij} \odot x_{ij}:\ i = 1, 2, 3, 4;\ j = 1, 2, 3, 4, 5\}$$
are minimized.

Solution procedure

Following the procedure given in Section 4.1, the two objective functions are combined into a single-objective FTP whose aim is to calculate the $x_{ij}$'s that minimize
$$\tilde{Z} = M_0\,\tilde{C} \oplus (0, 0.5, 1.5, 2) \odot \bigl\{M_1(x_{24}) + M_2(x_{23}) + M_3(x_{13}) + M_4(x_{14} + x_{25} + x_{31}) + M_5(x_{15} + x_{22} + x_{43}) + M_6(x_{42}) + M_7(x_{11} + x_{12} + x_{21} + x_{33} + x_{34} + x_{35} + x_{41}) + M_8(x_{32} + x_{44} + x_{45})\bigr\}$$
with $\tilde{C}$ as given above, subject to constraints (3)–(5), after assigning values to all the parameters.
The tabular presentation of the single-objective FTP is shown in Table 2, where the entry of cell $(i, j)$ $(i = 1, 2, 3, 4;\ j = 1, 2, 3, 4, 5)$ depicts its FC; it is assumed that $M_0 \gg M_1 \gg \cdots \gg M_8$ while minimizing $\tilde{Z}$.
In the fuzzy matrix given in Table 2, the rank of the FC of the cells (1,5), (2,2), (2,3), (3,1), (3,4), and (4,4) is minimum within the corresponding row and column. The values of $\tilde{P}_{ij}$ for these cells are:

$$\tilde{P}_{15} = \frac{(5,10,15,30)M_0 \oplus (0,1,3,4)M_4}{2} = (2.5,5,7.5,15)M_0 \oplus (0,0.5,1.5,2)M_4$$
$$\tilde{P}_{22} = \frac{(4,10.5,15.5,30)M_0 \oplus (0,0.5,1.5,2)M_2 \oplus (0,1,3,4)M_7 \oplus (0,0.5,1.5,2)M_8}{4} = (1,2.625,3.875,7.5)M_0 \oplus (0,0.125,0.375,0.5)M_2 \oplus (0,0.25,0.75,1)M_7 \oplus (0,0.125,0.375,0.5)M_8$$
$$\tilde{P}_{23} = \frac{(6,9.5,17.5,35)M_0 \oplus (0,0.5,1.5,2)M_1 \oplus (0,0.5,1.5,2)M_3 \oplus (0,0.5,1.5,2)M_5 \oplus (0,0.5,1.5,2)M_7}{4} = (1.5,2.375,4.375,8.75)M_0 \oplus (0,0.125,0.375,0.5)M_1 \oplus (0,0.125,0.375,0.5)M_3 \oplus (0,0.125,0.375,0.5)M_5 \oplus (0,0.125,0.375,0.5)M_7$$
$$\tilde{P}_{31} = \frac{(12,19,31,62)M_0 \oplus (0,1,3,4)M_7 \oplus (0,0.5,1.5,2)M_8}{3} = (4,6.33,10.33,20.67)M_0 \oplus (0,0.33,1,1.33)M_7 \oplus (0,0.17,0.5,0.67)M_8$$
Table 2: Single-objective FTP.

O1:  D1: (0,1,2,5)M0 ⊕ (0,0.5,1.5,2)M7;  D2: (1,2,3,6)M0 ⊕ (0,0.5,1.5,2)M7;  D3: (1,2,3,6)M0 ⊕ (0,0.5,1.5,2)M3;  D4: (2,5,7,14)M0 ⊕ (0,0.5,1.5,2)M4;  D5: (0,0.5,1.5,2)M0 ⊕ (0,0.5,1.5,2)M5
O2:  D1: (1,3,4,8)M0 ⊕ (0,0.5,1.5,2)M7;  D2: (0,0.5,1.5,2)M0 ⊕ (0,0.5,1.5,2)M5;  D3: (0,0.5,1.5,2)M0 ⊕ (0,0.5,1.5,2)M2;  D4: (0,1,2,5)M0 ⊕ (0,0.5,1.5,2)M1;  D5: (3,5,8,16)M0 ⊕ (0,0.5,1.5,2)M4
O3:  D1: (0,0.5,1.5,2)M0 ⊕ (0,0.5,1.5,2)M4;  D2: (2,5,7,14)M0 ⊕ (0,0.5,1.5,2)M8;  D3: (5,6,11,22)M0 ⊕ (0,0.5,1.5,2)M7;  D4: (0,0.5,1.5,2)M0 ⊕ (0,0.5,1.5,2)M7;  D5: (1,4,5,10)M0 ⊕ (0,0.5,1.5,2)M7
O4:  D1: (9,11,20,40)M0 ⊕ (0,0.5,1.5,2)M7;  D2: (13,17,30,60)M0 ⊕ (0,0.5,1.5,2)M6;  D3: (3,7,10,20)M0 ⊕ (0,0.5,1.5,2)M5;  D4: (0,1,2,5)M0 ⊕ (0,0.5,1.5,2)M8;  D5: (1,4,5,10)M0 ⊕ (0,0.5,1.5,2)M8

The ai and bj values are as in Table 1.

$$\tilde{P}_{34} = \frac{(6,12,20,42)M_0 \oplus (0,0.5,1.5,2)M_1 \oplus (0,1,3,4)M_7 \oplus (0,0.5,1.5,2)M_8}{4} = (1.5,3,5,10.5)M_0 \oplus (0,0.125,0.375,0.5)M_1 \oplus (0,0.25,0.75,1)M_7 \oplus (0,0.125,0.375,0.5)M_8$$
$$\tilde{P}_{44} = \frac{(4,11.5,16.5,32)M_0 \oplus (0,0.5,1.5,2)M_5 \oplus (0,0.5,1.5,2)M_7 \oplus (0,0.5,1.5,2)M_8}{3} = (1.33,3.83,5.5,10.67)M_0 \oplus (0,0.17,0.5,0.67)M_5 \oplus (0,0.17,0.5,0.67)M_7 \oplus (0,0.17,0.5,0.67)M_8$$
By using the ranking function, we see that $\max\{\tilde{P}_{15}, \tilde{P}_{22}, \tilde{P}_{23}, \tilde{P}_{31}, \tilde{P}_{34}, \tilde{P}_{44}\} = \tilde{P}_{31}$; therefore, cell (3,1) is allocated, and row $O_3$ and column $D_1$ are blocked, as shown in Table 3.
The steps of the algorithm (Section 4.2) are repeated until the requirement of each destination is fulfilled. The first fuzzy efficient or optimal solution of the problem shown in Table 1 is $x_{13}^{(1)} = 2,\ x_{15}^{(1)} = 1,\ x_{21}^{(1)} = 1,\ x_{22}^{(1)} = 3,\ x_{31}^{(1)} = 2,\ x_{44}^{(1)} = 2$, and the first efficient optimal values of FC and FT are $\tilde{C}(x_{ij}^{(1)}) = (3, 12, 23, 42)$ and $\tilde{T}(x_{ij}^{(1)}) = (3, 7, 10, 20)$, respectively.
To obtain the next efficient fuzzy optimal solution, block all the cells $(i, j)$ in the previous cost–time trade-off fuzzy TP for which $\Re(\tilde{t}_{ij}) \ge \Re\bigl(\tilde{T}(x_{ij}^{(1)})\bigr) = 10$ units. Following this procedure, four fuzzy efficient or optimal solutions are obtained, as shown in Table 4.

6 Conclusion
In this chapter, a BoFTP has been formulated in an uncertain environment, and a
novel algorithm has been proposed to find its efficient solutions. The values obtained
by the proposed algorithm show that the decision-maker has more flexibility,
because the decision-maker does not know the exact transportation cost and
time, so there exists uncertainty about the cost and time. The proposed algorithm
avoids degeneracy and gives the efficient solutions faster than other existing
algorithms for the given FTP. It also reduces the computational work.
Table 3: After the 1st allocation (cell (3,1) allocated; row O3 and column D1 blocked).

O1:  D1: (0,1,2,5)M0 ⊕ (0,0.5,1.5,2)M7;  D2: (1,2,3,6)M0 ⊕ (0,0.5,1.5,2)M7;  D3: (1,2,3,6)M0 ⊕ (0,0.5,1.5,2)M3;  D4: (2,5,7,14)M0 ⊕ (0,0.5,1.5,2)M4;  D5: (0,0.5,1.5,2)M0 ⊕ (0,0.5,1.5,2)M5
O2:  D1: (1,3,4,8)M0 ⊕ (0,0.5,1.5,2)M7;  D2: (0,0.5,1.5,2)M0 ⊕ (0,0.5,1.5,2)M5;  D3: (0,0.5,1.5,2)M0 ⊕ (0,0.5,1.5,2)M2;  D4: (0,1,2,5)M0 ⊕ (0,0.5,1.5,2)M1;  D5: (3,5,8,16)M0 ⊕ (0,0.5,1.5,2)M4
O3:  D1: (0,0.5,1.5,2)M0 ⊕ (0,0.5,1.5,2)M4 — allocation (3);  D2: (2,5,7,14)M0 ⊕ (0,0.5,1.5,2)M8;  D3: (5,6,11,22)M0 ⊕ (0,0.5,1.5,2)M7;  D4: (0,0.5,1.5,2)M0 ⊕ (0,0.5,1.5,2)M7;  D5: (1,4,5,10)M0 ⊕ (0,0.5,1.5,2)M7
O4:  D1: (9,11,20,40)M0 ⊕ (0,0.5,1.5,2)M7;  D2: (13,17,30,60)M0 ⊕ (0,0.5,1.5,2)M6;  D3: (3,7,10,20)M0 ⊕ (0,0.5,1.5,2)M5;  D4: (0,1,2,5)M0 ⊕ (0,0.5,1.5,2)M8;  D5: (1,4,5,10)M0 ⊕ (0,0.5,1.5,2)M8

The ai and bj values are as in Table 1.
Table 4: Set of fuzzy efficient solutions.

Solution $x_{ij}^{(1)}$: $x_{13} = 2, x_{15} = 1, x_{21} = 1, x_{22} = 3, x_{31} = 2, x_{44} = 2$; total FC $\tilde{C}(x_{ij}^{(1)}) = (3, 12, 23, 42)$; total FT $\tilde{T}(x_{ij}^{(1)}) = (3, 7, 10, 20)$.
Solution $x_{ij}^{(2)}$: $x_{11} = 1, x_{12} = 1, x_{15} = 1, x_{22} = 3, x_{31} = 1, x_{34} = 2, x_{43} = 2$; total FC $\tilde{C}(x_{ij}^{(2)}) = (7, 20.5, 35.5, 65)$; total FT $\tilde{T}(x_{ij}^{(2)}) = (3, 5, 8, 16)$.
Solution $x_{ij}^{(3)}$: $x_{11} = 3, x_{15} = 1, x_{22} = 3, x_{33} = 1, x_{34} = 2, x_{43} = 1$; total FC $\tilde{C}(x_{ij}^{(3)}) = (8, 19, 36, 69)$; total FT $\tilde{T}(x_{ij}^{(3)}) = (2, 5, 7, 14)$.
Solution $x_{ij}^{(4)}$: $x_{12} = 3, x_{21} = 3, x_{33} = 2, x_{34} = 1, x_{44} = 1, x_{45} = 1$; total FC $\tilde{C}(x_{ij}^{(4)}) = (17, 32.5, 51.5, 103)$; total FT $\tilde{T}(x_{ij}^{(4)}) = (1, 3, 4, 8)$.

References
[1] Ammar E.E., Youness E.A. Study on multiobjective transportation problem with fuzzy
numbers. Applied Mathematics and Computation 2005, 166, 241–253.
[2] Bellmann R.E., Zadeh L.A. Decision making in fuzzy environment. Management Sciences
1970, 17, 141–164.
[3] Bit A.K. Fuzzy programming with Hyperbolic membership functions for Multi-objective
capacitated transportation problem. OPSEARCH 2004, 41, 106–120.
[4] Bit A.K., Biswal M.P., Alam S.S. Fuzzy programming approach to multicriteria decision making
transportation problem. Fuzzy Sets and Systems 1992, 50, 35–41.
[5] Chanas S., Kolodziejckzy W., Machaj A. A fuzzy approach to the transportation problem. Fuzzy
Sets and Systems 1984, 13, 211–221.
[6] Das S.K., Goswami A., Alam S.S. Multiobjective transportation problem with interval cost,
source and destination parameters. European Journal of Operational Research 1999, 117,
100–112.
[7] Fang S.C., Hu. C.F., Wang H.F., Wu S.Y. Linear programming with fuzzy coefficients in
constraints. Computers and Mathematics with Applications 1999, 37, 63–76.
[8] Gupta P., Mehlawat M.K. An algorithm for a fuzzy transportation problem to select a new type
of coal for a steel manufacturing unit. Top 2007, 15, 114–137.
[9] Kaufmann A., Gupta M.M. Introduction to fuzzy arithmetics, theory and applications. In: Van
Nostrand Reinhold. New York, 1991.
[10] Lai Y.J., Hawng C.L. Fuzzy Mathematical Programming. In: Lecture notes in Economics and
Mathematical systems. Springer-Verlag, 1992.
[11] Liou T.S., Wang M.J. Ranking fuzzy numbers with integral values. Fuzzy Sets and Systems
1992, 50, 247–255.
[12] Nehi H.M., Maleki H.R., Mashinchi M. Solving fuzzy number linear programming problem by
lexicographic ranking function. Italian Journal of Pure and Applied Mathematics 2004, 16,
9–20.
[13] Noora A.A., Karami P. Ranking functions and its applications to fuzzy DEA. International
Mathematical Forum 2008, 3, 1469–1480.
[14] Pramanik S., Roy T.K. Multiobjective transportation model with fuzzy parameters: Priority
based fuzzy goal programming approach. Journal of Transportation Systems Engineering
Information Technology 2008, 8, 40–48.
[15] Prakash S. Transportation problem with objectives to minimizes the total cost and duration of
transportation. OPSEARCH 1981, 18, 235–238.
[16] Prakash S., Agarwal A.K., Shah S. Non-dominated solutions of cost–time trade-off
transportation and assignment problems. OPSEARCH 1988, 25, 126–131.
[17] Purushotam S., Prakash S., Dhyani P. A transportation problem with minimization of duration
and total cost of transportation as high and low priority objectives respectively. Bulletin of
the Technical University of Istanbul 1984, 37, 1–11.
[18] Rommelfanger H., Wolf J., Hanuscheck R. Linear programming with fuzzy coefficients. Fuzzy
Sets and Systems 1989, 29, 195–206.
[19] Seshan C.R., Tikekar V.G. On Sharma-Sawrup algorithm for time minimizing transportation
problems. Proceedings of the Indian Academy of Sciences, Mathematical Sciences 1980, 89,
101–102.

[20] Sharma J.K., Sawrup K. Bi-level time minimizing transportation problem. Discrete
Optimization 1977, 5, 714–723.
[21] Sonia, Puri M.C. Two level hierarchical time minimizing transportation problem. TOP 2004,
12, 301–330.
[22] Sonia, Khandelwal A., Puri M.C. Bi-level time minimizing transportation problem. Discrete
Optimization 2008, 5, 714–723.
[23] Tanaka H., Asai K. Fuzzy linear programming problems with fuzzy numbers. Fuzzy Sets and
Systems 1984, 13, 1–10.
[24] Tanaka H., Ichihashi, Asai K. A formulation of fuzzy linear programming based on comparison
of fuzzy numbers. Control and Cybernetics 1984, 13, 185–194.
[25] Zadeh L.A. Fuzzy sets. Information and Control 1965, 8, 338–353.
[26] Zimmermann H.J. Fuzzy programming and linear programming with several objective
functions. Fuzzy Sets and System 1978, 1, 45–55.
Subhendu Ruidas, Mijanur Rahaman Seikh, Prasun Kumar Nayak
Application of particle swarm optimization
technique in an interval-valued EPQ model
Abstract: There are certain practical scenarios in the fields of engineering, science,
and management where the derivative-based or gradient-based iterative methods
for optimization fail because of the highly complex nature of the problem.
The application of heuristic or meta-heuristic search algorithms becomes fruitful
in those scenarios to obtain the optimum solution. The particle swarm optimization
(PSO) technique is one such algorithm, based on the hunting technique of a flock of
beasts in the jungle. In this chapter, an economic production quantity (EPQ) model is
developed under an imperfect production system allowing shortages. In reality, the
screening procedure for finding out the defective items may not be 100% perfect. As a
result, some faulty products may be counted as good products and may reach the
customer, resulting in a sales return. On the other hand, most of the existing EPQ
models consider that the various inventory cost parameters, like setup cost, holding
cost, production cost, and rework cost, are fixed numbers. However, in practice, it has
been observed that these are imprecise in nature. So, to depict the real-life situation,
the different inventory cost coefficients are assumed to be intervals. Consequently,
the resulting mathematical optimization problem is interval valued, and it cannot
be solved using classical optimization techniques because of its highly nonlinear
nature. The quantum-behaved PSO (QPSO) technique, a variant of classical
PSO, is applied to solve the problem. Finally, a numerical example is provided to
validate the model, and a sensitivity analysis of the key parameters is performed to draw
some managerial implications.

Keywords: QPSO technique, EPQ model, interval number, ranking of interval numbers

1 Introduction
Most of the optimization problems that arise in representing a real-life problem in
the field of engineering, science, management, and so on are very complex in

Subhendu Ruidas, Department of Mathematics, Sonamukhi College, Bankura 722 207, India,
e-mail: [email protected]
Mijanur Rahaman Seikh, Department of Mathematics, Kazi Nazrul University, Asansol 713 340,
India, e-mail: [email protected]
Prasun Kumar Nayak, Department of Mathematics, Midnapore College (Autonomous),
Midnapore 721 101, India, e-mail: [email protected]

https://doi.org/10.1515/9783110716214-004

nature. They may not be differentiable functions, may not be concave or convex
functions, and as a result, it becomes very difficult to solve the corresponding prob-
lem to achieve the optimal solution. In today’s competitive world, every production
manager must take managerial decisions optimally, accounting for all
real-life scenarios. Due to the complexity of the objective function, researchers have
preferred heuristic and meta-heuristic search algorithms to derive the optimum
solution. The PSO technique is one such algorithm, inspired by the natural
behavior of a flock of birds or a school of fish.
For the last few decades, researchers have been trying to develop inventory models for
real-life environments. In the classical EPQ models, one of the major drawbacks is that
it does not consider the case of the imperfectness of the production system. However,
every real-life production system must produce some defective items due to some un-
avoidable facts like machine malfunction, aging, and power disruption. Porteus [20] as-
similated the concept of defectiveness in the manufacturing system for inventory
management. Rosenblatt and Lee [21] formulated an EPQ model assuming that the
faulty items might be repaired instantaneously with an additional cost. In the work of
Salameh and Jaber [28], the defective items were sold after the screening procedure at a
reduced price. Cardenas-Barron [3] corrected Salameh and Jaber’s [28] equation and nu-
merical results. Sarker et al. [29] formulated a production inventory model accounting
for a multistage production system considering rework for the defective items. Sarkar
and Moon [17] analyzed the effect of inflation in a defective manufacturing system. In
the work of Ruidas et al. [22], imperfect products were classified into two classes: scrap
products and repairable products. Scrap products were vended at a reduced selling
price. Repairable products were sold as good quality products after repairing. Pal and
Adhikari [18] developed an imperfect economic order quantity model with exponential
partial backlogging where the reworking process for defective items was considered.
A screening process is performed to find out the defective items from the regular
production run, and the sorted defective items are set for a reworking process. Now,
it is quite practical that the screening process may not be error-free. As a result, some
defective items may reach the customer as perfect items and thus result in a sales re-
turn. The researcher in recent years has also taken this particular issue in their
model. Considering the possibility of sales return, Krishnamoorthy and Panayap-
pan [12] developed an imperfect EPQ model. Manna et al. [15] examined the effect
of screening errors in an imperfect EPQ model where the demand was dependent
on warranty and price discount. They considered two types of screening errors: a
faulty product might be picked as a flawless product and a flawless product might
be picked as a faulty product due to error in screening. Dey and Giri [6] developed
a new perspective by taking into account the effect of learning in the inspection
process in an imperfectly integrated EPQ model. They also considered the inspec-
tion process to be erroneous with two types of misclassification errors.
Most of the abovementioned research works are based upon the assumption
that the different inventory cost components such as setting up cost, storing cost,

manufacturing cost, and shortage cost are fixed in nature. But there exist various
complex cases in the field of business, management, engineering, and so on where it
is not possible to determine these data precisely. We can handle those imprecise sit-
uations in light of probability theory, fuzzy set theory, intuitionistic fuzzy set theory,
and so on. There are various works in this field addressing this issue. Wang and Tang
[33] proposed an EPQ model where the cost for setting up process, inventory holding
process, and the cost for backordering were expressed as fuzzy variables. Maity et al.
[14] developed a production inventory model with shortages considering the effect of
inflation in a fuzzy stochastic environment. Kundu and Chakrabarty [13] also pro-
posed a production inventory model assuming the demand rate and the deterioration
rate of the product as fuzzy numbers. Patro et al. [19] recently proposed an inventory
model considering all costs, defective rate, and annual demand to be fuzzy numbers.
They also incorporated inspection errors in the model.
However, one major drawback of expressing these imprecise elements as
probabilistic or fuzzy numbers is that it is extremely tough to select the pertinent
probability distribution function (in the case of a probabilistic number) or the appropriate
membership and nonmembership functions (in the case of a fuzzy number) because of
the unavailability of past records. For instance, we may take the event of introducing a
brand-new item in the market. We prefer interval number theory [16] to those fuzzy or
probabilistic approaches to express the imprecise quantity.
research works have been carried out so far using interval number theory in in-
ventory management. In the works of Chakrabortty et al. [4] and Ruidas et al. [24],
the various inventory cost coefficients were expressed as interval numbers. In an
EPQ model assuming carbon emissions proposed by Ruidas et al. [25], the various
carbon emission parameters were assumed to be interval numbers. Ruidas et al. [26]
also proposed an imperfect EPQ model, where the demand and the defective produc-
tion rate of the product were considered as rough interval numbers.
Because of the nature of the inventory cost coefficients, the resulting mathemati-
cal optimization problem is also interval valued. While encountering such a kind of
optimization problem, the well-known classical techniques or other derivative-based
iterative techniques are not quite helpful. In such scenarios, it is required to apply
some heuristic or meta-heuristic search algorithms incorporating the ideas of interval
mathematics and ordering relations between intervals. The PSO technique, introduced
by Kennedy and Eberhart [8, 11], is one such technique. Bhunia et al. [1, 2] have
developed an inventory model considering interval-valued inventory cost compo-
nents. They applied the PSO technique to solve the relevant optimization problem.
Applying the QPSO algorithm, Ruidas et al. [23] derived the interval-valued opti-
mal profit in a single period EPQ model, introducing a selling price revision in an
interval environment. We can also find some applications of PSO in the supply
chain management problem [7, 32].
In this chapter, we have formulated an EPQ model assuming that not all of the
products produced are of perfect quality. Immediately after the production process,

a screening procedure is performed to detect the imperfect items and are classified
into two sections: repairable product and scrap product. Repairable products are
then sent to the reworking process with some additional cost, and scrap products
are vended at a subsidiary market at a salvage price. Due to an error in the screen-
ing process, a defective product may come out as a good one and may reach the
customer, ultimately causing a sales return. Also, to reflect the practical situation,
we have allowed shortages in the model. The imprecise cost coefficients are as-
sumed to be interval valued. The expression for the total profit of the manufacturer
has been derived in an interval environment and the corresponding mathematical
optimization problem has been solved applying the QPSO algorithm.
The remainder of the chapter is structured as follows: some well-known basic def-
initions and preliminaries are recalled in Section 2. In the next section, the details
about the proposed EPQ model in an interval environment are discussed. The solu-
tion procedure, that is, the algorithm of QPSO is described in Section 4. The proposed
model is validated with numerical illustration in Section 5. In Section 6, sensitivity
analysis of some key parameters is carried out. Conclusions have been drawn in
Section 7.

2 Preliminaries
Definition 1 (Interval number [16]). An interval number is written as $\bar{M} = [m_L, m_R] = \{m : m_L \le m \le m_R;\ m_L, m_R \in \mathbb{R}\}$, where $m_L$ and $m_R$, respectively, are termed the lower and upper limits of the interval $\bar{M}$. An interval $\bar{M}$ can also be written using its center and width as $\bar{M} = \langle m_c, m_w \rangle$, where $m_c = \frac{1}{2}(m_L + m_R)$ and $m_w = \frac{1}{2}(m_R - m_L)$ are the center and the width of the interval, respectively.

2.1 Basic interval arithmetic

This subsection provides some primary arithmetic operations between interval numbers
as defined by R. E. Moore [16]. Let $\bar{M} = [m_L, m_R]$ and $\bar{N} = [n_L, n_R]$ be two finite interval
numbers. Then the basic arithmetic operations between them are defined as
$$\bar{M} + \bar{N} = [m_L + n_L,\ m_R + n_R]$$
$$\bar{M} - \bar{N} = [m_L - n_R,\ m_R - n_L]$$
$$c\bar{M} = \begin{cases}[c\,m_L,\ c\,m_R] & \text{if } c > 0\\ [c\,m_R,\ c\,m_L] & \text{if } c < 0\end{cases}$$
$$\bar{M} \cdot \bar{N} = \bigl[\min\{m_L n_L, m_L n_R, m_R n_L, m_R n_R\},\ \max\{m_L n_L, m_L n_R, m_R n_L, m_R n_R\}\bigr]$$
$$\frac{\bar{M}}{\bar{N}} = \left[\min\left\{\frac{m_L}{n_L}, \frac{m_L}{n_R}, \frac{m_R}{n_L}, \frac{m_R}{n_R}\right\},\ \max\left\{\frac{m_L}{n_L}, \frac{m_L}{n_R}, \frac{m_R}{n_L}, \frac{m_R}{n_R}\right\}\right], \quad 0 \notin \bar{N}.$$
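As an illustration of these rules, here is a minimal Python sketch; the Interval class and its method names are ours, not part of the chapter, and the division operator assumes $0 \notin \bar{N}$.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """Finite interval [lo, hi] in the sense of Moore [16]."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def scale(self, c: float):
        # Scalar multiplication; endpoints swap for negative c
        if c > 0:
            return Interval(c * self.lo, c * self.hi)
        return Interval(c * self.hi, c * self.lo)

    def __mul__(self, other):
        prods = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
        return Interval(min(prods), max(prods))

    def __truediv__(self, other):
        assert not (other.lo <= 0 <= other.hi), "division requires 0 not in divisor"
        quots = [self.lo / other.lo, self.lo / other.hi,
                 self.hi / other.lo, self.hi / other.hi]
        return Interval(min(quots), max(quots))

    @property
    def center(self):   # m_c of Definition 1
        return 0.5 * (self.lo + self.hi)

    @property
    def width(self):    # m_w of Definition 1
        return 0.5 * (self.hi - self.lo)
```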

2.2 Order relations of interval numbers

Since our basic aim is to solve an optimization problem, it is very important to
study the order relations between two intervals in order to compare them. So far, a
number of definitions of order relations between intervals have been proposed by
various authors [10, 11, 18, 33]. In this chapter, we use the approach of Sahoo et al. [27].
Any two interval numbers $\bar{M} = [m_L, m_R]$ and $\bar{N} = [n_L, n_R]$ will fall into one of the
three categories mentioned below:
– non-overlying intervals (class I), e.g., [2, 5] and [6, 9];
– partially overlying intervals (class II), e.g., [2, 8] and [6, 10];
– fully overlying intervals (class III), e.g., [2, 10] and [6, 9].

Definition 2 ([27]). For a maximization problem, the order relation $>_{\max}$ between two intervals $\bar{M} = [m_L, m_R] = \langle m_c, m_w \rangle$ and $\bar{N} = [n_L, n_R] = \langle n_c, n_w \rangle$ is defined by $\bar{M} >_{\max} \bar{N} \Leftrightarrow m_c > n_c$ for class I and class II intervals, and $\bar{M} >_{\max} \bar{N} \Leftrightarrow$ either $m_c \ge n_c \wedge m_w < n_w$ or $m_c \ge n_c \wedge m_R > n_R$ for class III intervals.
In agreement with this definition, $\bar{M}$ is selected for a maximization problem.

Definition 3 ([27]). For a minimization problem, the order relation $<_{\min}$ between two intervals $\bar{M} = [m_L, m_R] = \langle m_c, m_w \rangle$ and $\bar{N} = [n_L, n_R] = \langle n_c, n_w \rangle$ is defined by $\bar{M} <_{\min} \bar{N} \Leftrightarrow m_c < n_c$ for class I and class II intervals, and $\bar{M} <_{\min} \bar{N} \Leftrightarrow$ either $m_c \le n_c \wedge m_w < n_w$ or $m_c \le n_c \wedge m_L < n_L$ for class III intervals.
In agreement with this definition, $\bar{M}$ is selected for a minimization problem.
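A minimal sketch of Definition 2 in code, reusing the hypothetical Interval class above; the contains test identifies the fully overlying (class III) case.

```python
def contains(m: Interval, n: Interval) -> bool:
    # Fully overlying (class III): one interval lies inside the other
    return (m.lo <= n.lo and n.hi <= m.hi) or (n.lo <= m.lo and m.hi <= n.hi)

def greater_max(m: Interval, n: Interval) -> bool:
    """True when M >max N for a maximization problem (Definition 2)."""
    if not contains(m, n):                 # class I or class II
        return m.center > n.center
    return (m.center >= n.center and m.width < n.width) or \
           (m.center >= n.center and m.hi > n.hi)

# Examples with the class I and class III pairs quoted above
print(greater_max(Interval(6, 9), Interval(2, 5)))   # True (larger center)
print(greater_max(Interval(6, 9), Interval(2, 10)))  # True (same-or-larger center, smaller width)
```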

3 Model development
The following assumptions and notations have been used throughout the chapter to
develop the model:
Notations:

Z                 Demand rate of the product (units/time unit)
R                 Production rate of the product (units/time unit)
y(t)              On-hand inventory level at time t (units)
B                 Maximum shortage level (units) (decision variable)
Y                 Volume of the production batch in a single cycle (units) (decision variable)
x                 Defective-product production rate from regular production (units/time unit)
w                 Rate of faulty items from the customers' end (units/time unit)
α                 Ratio of faulty products from regular production (x = Rα), 0.01 ≤ α ≤ 0.09
β                 Ratio of faulty products from the customers' end (w = Zβ), 0.001 ≤ β ≤ 0.005
θ                 Proportion of faulty products that cannot be reworked
T                 Cycle length (time unit)
Y1                On-hand inventory level at time (t1 + t2) (units)
Y2                On-hand inventory level at time (t1 + t2 + t3) (units)
p1                Selling price per unit of perfect product ($/unit)
p2                Selling price per unit of scrap product ($/unit)
c̄0 = [c0L, c0R]   Setup cost (interval valued) per cycle ($/cycle)
c̄1 = [c1L, c1R]   Production cost (interval valued) per unit product ($/unit)
c̄2 = [c2L, c2R]   Holding cost (interval valued) per unit product per unit time ($/unit/time unit)
c̄3 = [c3L, c3R]   Shortage cost (interval valued) per unit product per unit time ($/unit/time unit)
c̄4 = [c4L, c4R]   Repairing cost (interval valued) per unit product ($/unit)
c̄5 = [c5L, c5R]   Rejecting cost (interval valued) per unit product ($/unit)
ti                Time intervals (i = 1, 2, 3, 4, 5) (time units)

Assumptions:
1. Here, a single item production inventory model is considered over an infinite
planning horizon.
2. Rate of production of perfect product is greater than the demand rate.
3. Inspection process is not error-free.
4. Not all the defective items (found by the screening process) are reworkable; a
proportion θ of them are scrap items.
5. x is the rate of defective items from the regular production process found by the
inspection process, whereas w is the rate of defective items returned from cus-
tomers due to error in the screening process. So the total defective rate of the
machine is x + w unit per unit time.
6. Returned defective items are not repaired.
7. Shortages are allowed and are backlogged fully.

8. The setting-up time for the repairing procedure is neglected.
9. The cost incurred due to the inspection procedure is neglected, being much
smaller than the other cost components.

3.1 Mathematical model

The inventory situation is described graphically in Figure 1. The inventory cycle starts
at time t = 0 with inventory level −B. During the time intervals t1 and t2, items are
produced at the rate R. At the same time, the inventory level depletes at the rate
(Z + x) due to demand and defective items and at the rate w due to sales returns. So the net
accumulation rate of the inventory level is (R − Z − x − w). Let the total amount of
product produced during the period (t1 + t2) be Y.

Figure 1: Behavior of the inventory with respect to time.

A screening process is then applied to sort out the defective products from the total
lot. The total number of defective items found by the screening process is Yα units, so
the amount of scrap items is Yαθ units. The reworkable items are reworked, at the
same rate as the production rate, in the time interval t3. The period t4 is the consumption
period, and t5 is the shortage period.
The nature of the inventory level is governed by the following differential equation:
$$\frac{dy(t)}{dt} = \begin{cases} R - Z - x - w, & 0 \le t \le t_1; \\ R - Z - x - w, & t_1 \le t \le t_1 + t_2; \\ R - Z - w, & t_1 + t_2 \le t \le t_1 + t_2 + t_3; \\ -Z - w, & t_1 + t_2 + t_3 \le t \le t_1 + t_2 + t_3 + t_4; \\ -Z - w, & t_1 + t_2 + t_3 + t_4 \le t \le t_1 + t_2 + t_3 + t_4 + t_5; \end{cases} \tag{1}$$

with the conditions

$$y(0) = -B,\quad y(t_1) = 0,\quad y(t_1 + t_2) = Y_1,\quad y(t_1 + t_2 + t_3) = Y_2,$$
$$y(t_1 + t_2 + t_3 + t_4) = 0,\quad y(t_1 + t_2 + t_3 + t_4 + t_5) = -B.$$

Now solving these differential equations with those boundary conditions, we get
$$y(t) = \begin{cases} (R - Z - x - w)t - B, & 0 \le t \le t_1; \\ (R - Z - x - w)t - B, & t_1 \le t \le t_1 + t_2; \\ Y_1 + (R - Z - w)(t - t_1 - t_2), & t_1 + t_2 \le t \le t_1 + t_2 + t_3; \\ Y_2 - (Z + w)(t - t_1 - t_2 - t_3), & t_1 + t_2 + t_3 \le t \le t_1 + t_2 + t_3 + t_4; \\ -(Z + w)(t - t_1 - t_2 - t_3 - t_4), & t_1 + t_2 + t_3 + t_4 \le t \le t_1 + t_2 + t_3 + t_4 + t_5. \end{cases} \tag{2}$$

Initially, the inventory cycle starts with inventory level $-B$ and reaches $0$ after time $t_1$. Therefore, we have
$$\int_{-B}^{0} dy = \int_{0}^{t_1} (R - Z - x - w)\,dt, \quad \text{which gives} \quad t_1 = \frac{B}{R - Z - x - w}.$$
After time $t_1 + t_2$ the inventory level reaches $Y_1$ from $0$. Thus
$$\int_{0}^{Y_1} dy = \int_{t_1}^{t_1 + t_2} (R - Z - x - w)\,dt, \quad \text{that is,} \quad t_2 = \frac{Y_1}{R - Z - x - w}.$$
Since $Y$ denotes the lot size in a cycle, $Y = R(t_1 + t_2)$ and therefore
$$\frac{Y}{R} = \frac{B + Y_1}{R - Z - x - w}, \quad \text{thus} \quad Y_1 = \frac{Y(R - Z - x - w)}{R} - B. \tag{3}$$
Consequently,
$$t_2 = \frac{Y}{R} - \frac{B}{R - Z - x - w}. \tag{4}$$
Now the amount of reworkable items is $Y\alpha(1 - \theta)$, so $Y\alpha(1 - \theta) = R t_3$, and hence
$$t_3 = \frac{Y\alpha(1 - \theta)}{R}.$$
The inventory level reaches $Y_2$ from $Y_1$ after adding the reworked items, in the time interval $[t_1 + t_2,\ t_1 + t_2 + t_3]$. Therefore,
$$\int_{Y_1}^{Y_2} dy = \int_{t_1 + t_2}^{t_1 + t_2 + t_3} (R - Z - w)\,dt$$
$$\Rightarrow\ Y_2 - Y_1 = (R - Z - w)t_3$$
$$\Rightarrow\ Y_2 = \frac{Y(R - Z - x - w)}{R} - B + \frac{Y(R - Z - w)\alpha(1 - \theta)}{R}. \tag{5}$$
In the time interval $[t_1 + t_2 + t_3,\ t_1 + t_2 + t_3 + t_4]$ the inventory level decreases at the constant rate $Z + w$ and reaches $0$ from $Y_2$. Hence
$$\int_{Y_2}^{0} dy = \int_{t_1 + t_2 + t_3}^{t_1 + t_2 + t_3 + t_4} (-Z - w)\,dt, \quad \text{or} \quad Y_2 = (Z + w)t_4.$$
Consequently, $t_4 = \frac{Y_2}{Z + w}$.
The inventory level goes down to $-B$ during the time period $t_5$. Therefore, we have
$$\int_{0}^{-B} dy = \int_{t_1 + t_2 + t_3 + t_4}^{t_1 + t_2 + t_3 + t_4 + t_5} (-Z - w)\,dt, \quad \text{that is,} \quad B = (Z + w)t_5.$$
Consequently, $t_5 = \frac{B}{Z + w}$.
Hence the total inventory cycle length is given by
$$T = t_1 + t_2 + t_3 + t_4 + t_5 = \frac{Y(1 - \alpha\theta)}{Z + w} = \frac{Y(1 - \alpha\theta)}{Z(1 + \beta)}. \tag{6}$$
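As a quick sanity check of the time components derived above, the following sketch (with illustrative variable names) computes $t_1, \ldots, t_5$ and verifies that their total agrees with equation (6).

```python
import math

def cycle_times(Y, B, R, Z, alpha, beta, theta):
    """t1..t5 and the cycle length T from the relations above (x = R*alpha, w = Z*beta)."""
    x, w = R * alpha, Z * beta
    t1 = B / (R - Z - x - w)
    t2 = Y / R - B / (R - Z - x - w)                                          # eq. (4)
    t3 = Y * alpha * (1 - theta) / R
    Y2 = Y * (R - Z - x - w) / R - B + Y * (R - Z - w) * alpha * (1 - theta) / R  # eq. (5)
    t4 = Y2 / (Z + w)
    t5 = B / (Z + w)
    T = t1 + t2 + t3 + t4 + t5
    assert math.isclose(T, Y * (1 - alpha * theta) / (Z * (1 + beta)))        # eq. (6)
    return t1, t2, t3, t4, t5, T
```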

The different cost components involving different inventory processes in a produc-


tion cycle are evaluated below.

3.1.1 Production setup cost

For every production system, a setup cost is expended that includes the transportation cost
and the cost of making the system ready to start the production process. Here, the setup
cost is assumed to be fixed for every cycle. Also, this cost is taken as interval valued,
with value $\bar{c}_0 = [c_{0L}, c_{0R}]$. Hence, the setup cost per unit time is given as
$$\left[\frac{c_{0L}}{T}, \frac{c_{0R}}{T}\right] = \left[\frac{Z(1 + \beta)c_{0L}}{Y(1 - \alpha\theta)}, \frac{Z(1 + \beta)c_{0R}}{Y(1 - \alpha\theta)}\right]. \tag{7}$$

3.1.2 Production cost

Here the cost of producing a single item is taken as an interval number $\bar{c}_1 = [c_{1L}, c_{1R}]$.
So the total production cost per unit time is also interval valued and is expressed as
$[PC_L, PC_R]$, where
$$PC_L = \frac{Y c_{1L}}{T} = \frac{Z(1 + \beta)c_{1L}}{1 - \alpha\theta} \quad \text{and} \quad PC_R = \frac{Y c_{1R}}{T} = \frac{Z(1 + \beta)c_{1R}}{1 - \alpha\theta}.$$

3.1.3 Holding cost

The holding cost per unit product per unit time $\bar{c}_2 = [c_{2L}, c_{2R}]$ being an interval
number, the corresponding inventory holding cost per unit time is also interval valued
and is expressed as $[HC_L, HC_R]$, where
$$HC_L = \frac{c_{2L}}{T}\left[\int_{t_1}^{t_1+t_2} y(t)\,dt + \int_{t_1+t_2}^{t_1+t_2+t_3} y(t)\,dt + \int_{t_1+t_2+t_3}^{t_1+t_2+t_3+t_4} y(t)\,dt\right]$$
$$= \frac{c_{2L}}{T}\left[\int_{t_1}^{t_1+t_2}\{(R-Z-x-w)t - B\}\,dt + \int_{t_1+t_2}^{t_1+t_2+t_3}\{Y_1 + (R-Z-w)(t-t_1-t_2)\}\,dt + \int_{t_1+t_2+t_3}^{t_1+t_2+t_3+t_4}\{Y_2 - (Z+w)(t-t_1-t_2-t_3)\}\,dt\right]$$
$$= c_{2L}\left[\frac{Y}{2R(1-\alpha\theta)}\Bigl\{R(1-\alpha\theta)^2 - Z(1+\beta)\bigl(1+\alpha-2\alpha\theta+\alpha^2(1-\theta)^2\bigr)\Bigr\} + \frac{RB^2(1-\alpha)}{2Y(1-\alpha\theta)(R-Z-x-w)} - B\right] \tag{8}$$
and similarly
$$HC_R = c_{2R}\left[\frac{Y}{2R(1-\alpha\theta)}\Bigl\{R(1-\alpha\theta)^2 - Z(1+\beta)\bigl(1+\alpha-2\alpha\theta+\alpha^2(1-\theta)^2\bigr)\Bigr\} + \frac{RB^2(1-\alpha)}{2Y(1-\alpha\theta)(R-Z-x-w)} - B\right]. \tag{9}$$

3.1.4 Shortage cost

The shortage cost component $\bar{c}_3 = [c_{3L}, c_{3R}]$ being an interval number, the total
shortage cost per unit time will also be interval valued and is expressed as
$[SC_L, SC_R]$, where
$$SC_L = \frac{c_{3L}}{T}\left[\int_{0}^{t_1}\{-y(t)\}\,dt + \int_{t_1+t_2+t_3+t_4}^{t_1+t_2+t_3+t_4+t_5}\{-y(t)\}\,dt\right] = \frac{RB^2(1-\alpha)c_{3L}}{2Y(1-\alpha\theta)(R-Z-x-w)} \tag{10}$$
and
$$SC_R = \frac{c_{3R}}{T}\left[\int_{0}^{t_1}\{-y(t)\}\,dt + \int_{t_1+t_2+t_3+t_4}^{t_1+t_2+t_3+t_4+t_5}\{-y(t)\}\,dt\right] = \frac{RB^2(1-\alpha)c_{3R}}{2Y(1-\alpha\theta)(R-Z-x-w)}. \tag{11}$$

3.1.5 Reworking cost

The $Y\alpha(1 - \theta)$ units of product are reworked at an additional cost in order to sell
them as perfect items. The cost of repairing a single defective product $\bar{c}_4 = [c_{4L}, c_{4R}]$
being interval valued, the total repairing cost per unit time is also interval valued and is
expressed as $[RC_L, RC_R]$, where
$$RC_L = \frac{Y\alpha(1 - \theta)c_{4L}}{T} = \frac{Z(1 + \beta)\alpha(1 - \theta)c_{4L}}{1 - \alpha\theta} \tag{12}$$
and
$$RC_R = \frac{Y\alpha(1 - \theta)c_{4R}}{T} = \frac{Z(1 + \beta)\alpha(1 - \theta)c_{4R}}{1 - \alpha\theta}. \tag{13}$$

3.1.6 Rejecting cost

The customers may receive $Y\alpha\theta$ units of defective products because of an error in
the screening procedure, and these may cause sales returns. An extra cost is incurred
for this reason, which we term the rejecting cost. The per-unit rejecting cost being an
interval number $\bar{c}_5 = [c_{5L}, c_{5R}]$, the total rejecting cost per unit time is also interval
valued and is expressed as $[JC_L, JC_R]$, where
$$JC_L = \frac{c_{5L}}{T}\int_{0}^{T} w\,dt = \frac{c_{5L}}{T}\int_{0}^{T} \beta Z\,dt = c_{5L}Z\beta \tag{14}$$
and
$$JC_R = \frac{c_{5R}}{T}\int_{0}^{T} w\,dt = \frac{c_{5R}}{T}\int_{0}^{T} \beta Z\,dt = c_{5R}Z\beta. \tag{15}$$

3.1.7 Total cost

Now the total inventory cost per unit time comprises the setup cost per unit time, the
production cost per unit time, the repairing cost per unit time, the holding cost per unit
time, the shortage cost per unit time, and the rejecting cost per unit time. Since all those
costs are interval valued, the total cost per unit time will also be interval valued. Let
it be $\overline{TC} = [TC_L, TC_R]$. Therefore,
$$TC_L = \frac{c_{0L}}{T} + PC_L + HC_L + SC_L + RC_L + JC_L$$
$$= \frac{Z(1+\beta)c_{0L}}{Y(1-\alpha\theta)} + \frac{Z(1+\beta)c_{1L}}{1-\alpha\theta} + c_{2L}\left[\frac{Y}{2R(1-\alpha\theta)}\Bigl\{R(1-\alpha\theta)^2 - Z(1+\beta)\bigl(1+\alpha-2\alpha\theta+\alpha^2(1-\theta)^2\bigr)\Bigr\} + \frac{RB^2(1-\alpha)}{2Y(1-\alpha\theta)(R-Z-x-w)} - B\right] + \frac{RB^2(1-\alpha)c_{3L}}{2Y(1-\alpha\theta)(R-Z-x-w)} + \frac{Z(1+\beta)\alpha(1-\theta)c_{4L}}{1-\alpha\theta} + c_{5L}Z\beta \tag{16}$$
and
$$TC_R = \frac{c_{0R}}{T} + PC_R + HC_R + SC_R + RC_R + JC_R$$
$$= \frac{Z(1+\beta)c_{0R}}{Y(1-\alpha\theta)} + \frac{Z(1+\beta)c_{1R}}{1-\alpha\theta} + c_{2R}\left[\frac{Y}{2R(1-\alpha\theta)}\Bigl\{R(1-\alpha\theta)^2 - Z(1+\beta)\bigl(1+\alpha-2\alpha\theta+\alpha^2(1-\theta)^2\bigr)\Bigr\} + \frac{RB^2(1-\alpha)}{2Y(1-\alpha\theta)(R-Z-x-w)} - B\right] + \frac{RB^2(1-\alpha)c_{3R}}{2Y(1-\alpha\theta)(R-Z-x-w)} + \frac{Z(1+\beta)\alpha(1-\theta)c_{4R}}{1-\alpha\theta} + c_{5R}Z\beta. \tag{17}$$

3.1.8 Manufacturer’s sales revenue

The perfect-quality products and the reworked products are sold at a selling price $p_1$
per unit item. The scrap items are sold in a secondary market at a selling price
$p_2\ (< p_1)$ per unit item. So the total sales revenue of the manufacturer in a cycle is
given by $Y(1 - \alpha\theta)p_1 + Y\alpha\theta p_2$. Hence the total revenue per unit time is given by
$$SR = \frac{Y(1 - \alpha\theta)p_1 + Y\alpha\theta p_2}{T}.$$
Now $SR$ is a crisp number. We may express it as an interval number in the form
$\overline{SR} = [SR_L, SR_R]$, where $SR_L = SR_R = SR$.

3.1.9 Manufacturer’s profit

Since the sales revenue and the total inventory cost are interval valued, the net profit
of the manufacturer will also be interval valued. Let it be $\overline{PR} = [PR_L, PR_R]$. Then
$$PR_L = SR_L - TC_R \quad \text{and} \quad PR_R = SR_R - TC_L.$$
So
$$PR_L = \frac{\{Y(1-\alpha\theta)p_1 + Y\alpha\theta p_2\}\,Z(1+\beta)}{Y(1-\alpha\theta)} - \frac{Z(1+\beta)c_{0R}}{Y(1-\alpha\theta)} - \frac{Z(1+\beta)c_{1R}}{1-\alpha\theta} - c_{2R}\left[\frac{Y}{2R(1-\alpha\theta)}\Bigl\{R(1-\alpha\theta)^2 - Z(1+\beta)\bigl(1+\alpha-2\alpha\theta+\alpha^2(1-\theta)^2\bigr)\Bigr\} + \frac{RB^2(1-\alpha)}{2Y(1-\alpha\theta)(R-Z-x-w)} - B\right] - \frac{RB^2(1-\alpha)c_{3R}}{2Y(1-\alpha\theta)(R-Z-x-w)} - \frac{Z(1+\beta)\alpha(1-\theta)c_{4R}}{1-\alpha\theta} - c_{5R}Z\beta \tag{18}$$
and
$$PR_R = \frac{\{Y(1-\alpha\theta)p_1 + Y\alpha\theta p_2\}\,Z(1+\beta)}{Y(1-\alpha\theta)} - \frac{Z(1+\beta)c_{0L}}{Y(1-\alpha\theta)} - \frac{Z(1+\beta)c_{1L}}{1-\alpha\theta} - c_{2L}\left[\frac{Y}{2R(1-\alpha\theta)}\Bigl\{R(1-\alpha\theta)^2 - Z(1+\beta)\bigl(1+\alpha-2\alpha\theta+\alpha^2(1-\theta)^2\bigr)\Bigr\} + \frac{RB^2(1-\alpha)}{2Y(1-\alpha\theta)(R-Z-x-w)} - B\right] - \frac{RB^2(1-\alpha)c_{3L}}{2Y(1-\alpha\theta)(R-Z-x-w)} - \frac{Z(1+\beta)\alpha(1-\theta)c_{4L}}{1-\alpha\theta} - c_{5L}Z\beta. \tag{19}$$
Obviously, $PR_L$ and $PR_R$ become functions of $Y$ and $B$.
Now, we aim to maximize the profit of the manufacturer per unit time. Hence, our problem is to
$$\text{maximize } \overline{PR}(Y, B) = [PR_L, PR_R] \tag{20}$$
where $PR_L$ and $PR_R$ are given in (18) and (19).
Since the above optimization problem is interval valued, the traditional classical
optimization techniques or derivative-based approaches will not work in this case.
So, we apply the PSO technique developed by Kennedy and Eberhart [8, 11].

4 Solution procedure
Kennedy, an American social psychologist, and Eberhart, an American electrical engineer,
first developed the PSO technique in the year 1995 [8, 11]. The concept of PSO
mimics the hunting technique of a large group of beasts in the jungle. The beasts move
around the jungle randomly in search of prey. At any instant of time during hunting,
the animal member nearest to the prey gives a signal to the other members of the
group and influences them to move in that direction. While deciding the next move
following that signal, the remaining members also use their individual experiences gathered so
far about the prey. This searching process is repeated until the prey is captured. Mimicking
this concept in PSO, the search domain and the actual solution point are considered
to be the jungle and the prey, respectively. All the members of the flock are
potential solution points in the search region.

4.1 Implementation of QPSO

The objective function arrived at in (20) is a function of the two variables $Y$ and $B$ and is interval valued, that is, $\overline{PR}(Y, B) = [PR_L(Y, B), PR_R(Y, B)]$. Here we describe the QPSO technique for a two-variable optimization problem.
Let us consider a two-dimensional feasible region in which a swarm of particles of size $N$ moves. In every iteration, each particle moves from one position to another. Conforming to the PSO mechanism, a general particle, say the $i$th $(1 \le i \le N)$ particle in the swarm, possesses the attributes mentioned below at the $k$th iteration:
(i) the present position vector $x_i^k = (Y_i^k, B_i^k)$;
(ii) the present velocity vector $v_i^k = (v_{iY}^k, v_{iB}^k)$;
(iii) the best position achieved so far by the $i$th particle, denoted as the pbest position $p_i^k = (p_{iY}^k, p_{iB}^k)$; this pbest position is determined by the best value of $\overline{PR}(Y, B)$ attained up to the present iteration;
(iv) the global best position, denoted as the gbest position $G^k = (G_Y^k, G_B^k)$, which is the finest among all the individual pbest positions.
According to the mechanism of PSO, during the movement from the present iteration to the next, the particle velocities are updated by the rules
$$v_{iY}^{k+1} = w v_{iY}^k + c_1 r_1^k (p_{iY}^k - Y_i^k) + c_2 r_2^k (G_Y^k - Y_i^k);\quad k = 1, 2, \ldots, mg \tag{21}$$
$$v_{iB}^{k+1} = w v_{iB}^k + c_1 r_1^k (p_{iB}^k - B_i^k) + c_2 r_2^k (G_B^k - B_i^k);\quad k = 1, 2, \ldots, mg. \tag{22}$$

The parameter $w$ employed in the formula is called the inertia weight. The purpose
of including $w$ is to help quicker convergence of the mechanism. It is adjusted
with the help of random numbers in such a way that $w$ fluctuates
between 0.9 and 0.4. The acceleration coefficients $c_1$ and $c_2$ are employed to
administer the particle movements in every iteration. It is customary to assume both
$c_1$ and $c_2$ to be 2.0. $r_1^k$ and $r_2^k$ are two uniformly distributed random numbers in the
interval (0, 1), and $mg$ denotes the maximum iteration number.
The $i$th particle updates its location following the rules
$$Y_i^{k+1} = Y_i^k + v_{iY}^{k+1} \tag{23}$$
and
$$B_i^{k+1} = B_i^k + v_{iB}^{k+1}. \tag{24}$$
So the updated position of the $i$th particle is $x_i^{k+1} = (Y_i^{k+1}, B_i^{k+1})$.


The updating rule for the pbest position of every particle is
$$p_i^0 = x_i^0, \qquad p_i^{k+1} = \begin{cases} p_i^k & \text{if } \overline{PR}(Y_i^{k+1}, B_i^{k+1}) \le \overline{PR}(p_i^k) \\ x_i^{k+1} & \text{if } \overline{PR}(Y_i^{k+1}, B_i^{k+1}) > \overline{PR}(p_i^k) \end{cases} \tag{25}$$
where the function $\overline{PR}$ is to be maximized.
During the computation procedure, the objective functions $PR_L$ and $PR_R$ are computed at the point $x_i^{k+1} = (Y_i^{k+1}, B_i^{k+1})$; their values at the location $x_i^k$ have already been computed or initialized. The comparison of values stated in the above formula is made by applying the interval order relations narrated in Section 2.2.

The gbest position $G^k$ is updated by applying the rule
$$G^{k+1} = \arg\max_{1 \le i \le N} \overline{PR}(p_i^{k+1}).$$

After the inception of the primary PSO technique, different researchers have modified it in many ways to obtain quicker convergence and smooth functioning of the algorithm. As a result, many modified and improved editions of the PSO algorithm are now available in the literature. Clerc and Kennedy [5] proposed a modified velocity-updating formula by inserting a constriction factor $\chi$. Conforming to their rule, the updated values of $v_{iY}^{k+1}$ and $v_{iB}^{k+1}$ are
$$v_{iY}^{k+1} = \chi\bigl[v_{iY}^k + c_1 r_1^k (p_{iY}^k - Y_i^k) + c_2 r_2^k (G_Y^k - Y_i^k)\bigr];\quad k = 1, 2, \ldots, mg \tag{26}$$
$$v_{iB}^{k+1} = \chi\bigl[v_{iB}^k + c_1 r_1^k (p_{iB}^k - B_i^k) + c_2 r_2^k (G_B^k - B_i^k)\bigr];\quad k = 1, 2, \ldots, mg. \tag{27}$$
The parameter $\chi$ is expressed as
$$\chi = \frac{2}{\bigl|2 - \phi - \sqrt{\phi^2 - 4\phi}\bigr|}, \quad \text{where } \phi = c_1 + c_2,$$
and $\phi$ is assumed to be greater than four. Clearly, the value of $\chi$ depends on the values of $c_1$ and $c_2$. It is good practice to take the values of $c_1$ and $c_2$ each equal to 2.05, so that $\phi$ turns out to be bigger than four. As a consequence, $\chi$ takes the value 0.729. Experience reveals that the insertion of $\chi$ in this particular form leads to confirmed convergence. This constriction-factor-induced edition of PSO is referred to as PSO-CO.
According to the trajectory analysis in [5], every particle must converge to its local attractor $P_i = (P_{iY}, P_{iB})$ to confirm the convergence of the method. The components $P_{iY}$ and $P_{iB}$ are defined as
$$P_{iY}^k = \frac{c_1 r_{1Y}^k p_{iY}^k + c_2 r_{2Y}^k G_Y^k}{c_1 r_{1Y}^k + c_2 r_{2Y}^k} \;\Rightarrow\; P_{iY}^k = \phi_Y p_{iY}^k + (1 - \phi_Y) G_Y^k \tag{28}$$
and
$$P_{iB}^k = \frac{c_1 r_{1B}^k p_{iB}^k + c_2 r_{2B}^k G_B^k}{c_1 r_{1B}^k + c_2 r_{2B}^k} \;\Rightarrow\; P_{iB}^k = \phi_B p_{iB}^k + (1 - \phi_B) G_B^k \tag{29}$$
where $\phi_Y = \frac{c_1 r_{1Y}^k}{c_1 r_{1Y}^k + c_2 r_{2Y}^k}$, that is, $\phi_Y \sim U(0, 1)$, and $\phi_B = \frac{c_1 r_{1B}^k}{c_1 r_{1B}^k + c_2 r_{2B}^k}$, that is, $\phi_B \sim U(0, 1)$.
All the particles present in the swarm fly in the search domain tracking $p_i^k$ and $G^k$.
Recently, an extension of the PSO algorithm has also become feasible by taking into account the quantum nature of the particles. The modified edition of PSO developed in the quantum world is known as the QPSO technique. The term “trajectory” is meaningless in the quantum field, as the location and speed of a quantum-behaved particle cannot be determined concurrently. Sun, Feng, and Xu [31] proposed the use of wave functions in place of location and speed to represent the state of a quantum-behaved particle. Following them, at the $(k+1)$th iteration the locations of the particles are updated by the rules
$$Y_i^{k+1} = P_{iY}^k \pm \beta\bigl|m_Y^k - Y_i^k\bigr|\ln\!\left(\frac{1}{u_Y^{k+1}}\right) \tag{30}$$
and
$$B_i^{k+1} = P_{iB}^k \pm \beta\bigl|m_B^k - B_i^k\bigr|\ln\!\left(\frac{1}{u_B^{k+1}}\right) \tag{31}$$
where $u_Y^{k+1}$ and $u_B^{k+1}$ are uniformly distributed random numbers in (0, 1). The parameter $\beta$ is varied between 1.0 and 0.5 to administer the convergence speed. $m^k$ is referred to as the mean best (mbest) position; the mbest position is simply the average of the pbest positions of the particles in the swarm. So
$$m^k = (m_Y^k, m_B^k) = \left(\frac{1}{N}\sum_{i=1}^{N} p_{iY}^k,\ \frac{1}{N}\sum_{i=1}^{N} p_{iB}^k\right). \tag{32}$$

4.2 Algorithm of QPSO

The algorithm of the QPSO technique may be stated as follows:

Set the population size (N), the dimension of the search region, the maximum number of generations (maxgen), and the value of the parameter β. Initialize the particle positions in the search space $x_i^0 = (Y_i^0, B_i^0)$ and the pbest position of every particle $p_i^0 = (p_{iY}^0, p_{iB}^0)$;
Compute the fitness $PR(Y_i^0, B_i^0)$ of every particle at $x_i^0$ and determine the gbest position by applying $G = \arg\max\{PR(p_i^0)\}, \; 1 \le i \le N$;

For gen = 1 to maxgen
    For i = 1 to N
        Determine the objective value $PR(Y_i^k, B_i^k)$ at the point $x_i^k = (Y_i^k, B_i^k)$;
        if $PR(Y_i^{k+1}, B_i^{k+1}) > PR(p_i^k)$
            $p_i^{k+1} = x_i^{k+1}$
        else
            $p_i^{k+1} = p_i^k$
    Endfor //end for loop i
    Store the maximum of all $p_i^{k+1}$ as the gbest value $G^{k+1}$
    Determine the mbest position m using equation (32)
    Choose an appropriate value for β
    Update the positions of the particles using equations (30) and (31)
Endfor //end for loop gen
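To make the steps above concrete, here is a minimal Python sketch of the QPSO loop for a two-variable maximization problem such as $(Y, B)$. It is an illustration only: the lambda at the bottom is a hypothetical concave stand-in for the interval-valued profit PR (scalarized, e.g., by its central value), and the parameter choices are assumptions rather than the chapter's settings.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

def qpso(profit, lower, upper, n_particles=30, maxgen=200):
    """Quantum-behaved PSO (after Sun, Feng, and Xu [31]) maximizing `profit` on a box."""
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    dim = lower.size
    x = rng.uniform(lower, upper, size=(n_particles, dim))  # positions x_i = (Y_i, B_i)
    pbest = x.copy()
    pbest_val = np.array([profit(p) for p in pbest])
    gbest = pbest[pbest_val.argmax()].copy()

    for gen in range(maxgen):
        beta = 1.0 - 0.5 * gen / maxgen          # beta contracted from 1.0 towards 0.5
        mbest = pbest.mean(axis=0)               # mean best position, equation (32)
        for i in range(n_particles):
            phi = rng.random(dim)                # phi ~ U(0,1), equations (28)-(29)
            attractor = phi * pbest[i] + (1.0 - phi) * gbest
            u = rng.uniform(1e-12, 1.0, dim)     # avoid log(1/0)
            sign = np.where(rng.random(dim) < 0.5, -1.0, 1.0)
            # position update, equations (30)-(31)
            x[i] = attractor + sign * beta * np.abs(mbest - x[i]) * np.log(1.0 / u)
            x[i] = np.clip(x[i], lower, upper)
            val = profit(x[i])
            if val > pbest_val[i]:               # pbest update rule, equation (25)
                pbest[i], pbest_val[i] = x[i].copy(), val
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest, pbest_val.max()

# Toy usage with a hypothetical concave stand-in for the (scalarized) profit PR.
best_x, best_val = qpso(lambda z: -(z[0] - 380.0) ** 2 - (z[1] - 27.0) ** 2,
                        lower=[0.0, 0.0], upper=[600.0, 100.0])
print(best_x, best_val)
```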

5 Numerical illustration

Let us consider a television manufacturing company that produces a particular model of television throughout a certain period of time. The values of the various inventory parameters are given as follows.

Production rate R = 300 units per year; demand for the television Z = 250 units per year; the setup cost remains between $110 and $115 per cycle, that is, the interval-valued setup cost is $c_0 = \$[110, 115]$ per setup; the other cost parameters are $c_1 = \$[32, 33]$ per unit, $c_2 = \$[10, 11]$ per unit per year, $c_3 = \$[5, 6]$ per unit per year, $c_4 = \$[12, 13]$ per unit, and $c_5 = \$[45, 46]$ per unit. The proportion of produced defective items is α = 0.06; the proportion of scrap items is θ = 0.2; the proportion of sales returns is β = 0.007; the selling price of perfect items is $p_1 = \$40$ per unit; and the selling price of scrap items is $p_2 = \$25$ per unit.

The best-found/optimal solution of (20) with the above data using the QPSO technique is given below:

Optimal lot size $Y^* = 381.99089$ units; optimal value of the maximum shortage level $B^* = 26.85822$ units; optimal profit of the company $[PR_L^*, PR_R^*] = \$[1330.91, 1614.10]$; central value of the optimal profit = \$1472.51; optimal length of a single production cycle $T^* = 1.51552$ years; optimal production run time $t_1^* + t_2^* = 1.27326$ years; total setup cost = \$[72.58, 75.88]; total production cost = \$[8065.68, 8317.73]; total holding cost = \$[24.53, 26.99]; total shortage cost = \$[44.06, 52.88]; total rework cost = \$[177.85, 192.67]; total rejection cost = \$[78.75, 80.50].

6 Sensitivity analysis and managerial insights

Here we discuss the effect of changes in the key inventory parameters α, β, θ, R, Z, $p_1$, $c_0$, $c_1$, $c_2$, $c_3$, and $c_5$ on the optimal inventory decisions derived in the numerical example section. These analyses are performed by changing a single parameter at a time while keeping the other parameters fixed. The results obtained in this process are depicted in Table 1.

Table 1: Sensitivity analysis for key parameters (optimal values of $Y^*$, $B^*$, the interval-valued profit, the cycle length T, the production run time, and the interval-valued rejecting cost under changes in α, β, θ, R, Z, $p_1$, $c_0$, $c_1$, $c_2$, $c_3$, and $c_5$).

From the results portrayed in Table 1, we can draw the following insights:

1. When α increases, the optimal solution is attained at a higher lot size ($Y^*$) and a lower value of the maximum shortage level ($B^*$). More importantly, the optimal profit of the company decreases as α increases, whereas the optimal cycle length and the optimal production run time of the manufacturing process increase. The optimal profit is only mildly sensitive to the parameter α. More defective items result in more repair costs for the manufacturer and, as a result, the profit decreases. The nature of the optimal profit with respect to α is depicted in Figure 2.

Figure 2: Impact of α on optimal profit (centre value of profit versus α).

2. When β increases, the optimal lot size, optimal cycle length, optimal production run time, and rejecting cost increase, whereas the optimal profit and ($B^*$) decrease. The optimal profit is not very sensitive to the parameter β, whereas the total rejecting cost is highly sensitive to β, so the company may incorporate a two-level inspection process to deal with it. Figure 3 represents the relationship between β and the total rejecting cost.
Figure 3: Impact of β on rejecting cost (centre value of rejecting cost versus β).

3. When the proportion of scrap items θ increases, we see that the optimal lot size, $B^*$, the manufacturer's optimal profit, and the optimal cycle length of the inventory process all increase. Although a higher rate of scrap items yields lower revenue for the manufacturer, due to the accompanying change in the cycle length, the optimal profit per unit time increases. All of these quantities show very low sensitivity to the parameter θ.

4. When the production rate R increases, the maximum shortage level ($B^*$) increases, but the optimal lot size, optimal profit, optimal cycle length, and optimal production run time decrease. The optimal solutions are highly sensitive to the parameter R, which indicates that the production rate must not be increased arbitrarily. The behavior of the optimal profit with respect to the production rate is represented in Figure 4.

Figure 4: Impact of R on optimal profit (centre value of profit versus production rate).

5. When the demand for the product Z increases, the optimal lot size, optimal profit, optimal production run time, optimal cycle length, and rejecting cost increase, but the optimal value of the maximum shortage level decreases. The optimal solutions are exceedingly sensitive to Z; indeed, Z is among the most sensitive of all the parameters. So the company should first form a proper estimate of the demand for the product, for instance by inviting expert opinion. Figure 5 shows the relationship between the optimal profit of the company and the demand rate of the product.

Figure 5: Impact of Z on optimal profit (centre value of profit versus demand rate).

6. We see that changes in the per-unit selling price of a perfect item $p_1$ do not affect the optimal values of the decision variables $Y^*$ and $B^*$. This happens because the demand here is considered independent of the selling price. However, the optimal profit of the manufacturer increases sharply with $p_1$. This suggests that manufacturers should pay careful attention to determining the selling price of their products, keeping in mind that in reality a higher selling price may affect the demand for the product.

7. When the production setup cost $c_0$ increases, the optimal lot size, $B^*$, and the optimal production run time increase, but the manufacturer's optimal profit decreases. Although the optimal profit is not very sensitive to $c_0$, the results indicate that if the setup cost of the manufacturing process is higher, the manufacturer should plan for a bigger batch size.

8. It is observed that changes in the per-unit production cost $c_1$ do not affect the optimal solution. However, it is one of the most significant parameters for the profit of the company, and it has an inverse relationship with the optimal profit. So the manufacturer must take appropriate care to select the best raw materials for the company.

9. When the holding cost per unit item per unit time $c_2$ increases, the optimal solution is obtained at a lower batch size and a higher maximum shortage level. The manufacturer's optimal profit, optimal manufacturing run time, and optimal cycle length decrease as $c_2$ increases. This recommends a smaller production volume to reduce the holding cost when the holding cost is high. Also, the optimal profit of the company is not highly sensitive to $c_2$.

10. When the shortage cost per unit item per unit time $c_3$ increases, the optimal lot size, $B^*$, optimal production cycle length, optimal manufacturing run time, and optimal profit of the company decrease. As the shortage cost rises, the manufacturer attempts to reduce the shortage amount, and as a result the optimal value of the maximum shortage level $B^*$ decreases. The company's optimal profit is only mildly sensitive to $c_3$.

11. When the rejecting cost per unit item $c_5$ changes, the optimal solution is attained at the same values of $Y^*$ and $B^*$. However, when $c_5$ increases, the manufacturer's optimal profit decreases as the total rejecting cost increases, though the amount of change is very small.

7 Conclusion
This chapter develops a production inventory model with shortages, assuming the possibility of defective production. After the regular manufacturing process, all the products are screened to sort out the defective ones; moreover, the screening process itself is not error-free. The scenario is quite a realistic one. The main contribution of the chapter is to express the various inventory cost coefficients, such as the production cost, setup cost, holding cost, reworking cost, shortage cost, and rejecting cost, as imprecise quantities. In reality, these quantities are not fixed, and they can be expressed as interval numbers; in this sense the proposed model is a realistic one. The QPSO technique has been employed to maximize the interval-valued profit function of the producer. Another significant insight is that, since the proposed model contains interval-valued cost parameters, the optimal profit is also obtained as an interval, which is more practical. The classical fuzzy or probabilistic optimization techniques for handling imprecision fail to derive a similar type of solution, as in those cases one actually optimizes the expected value of the profit function or the defuzzified profit function. Also, the optimal lot size of the produced quantity has been obtained as a fixed number even while dealing with imprecise data. This result will also be appreciated by management, because management is concerned about knowing the lot size to be produced in advance for the smooth arrangement of the required raw materials, labor, etc.
Concerning the relevance of the proposed model in the real world, it can be said that all manufacturing companies that produce the same product for a long period over multiple production cycles may adopt this particular model. In those cases, the different inventory cost components do not remain the same throughout the whole planning horizon and must be expressed as interval numbers. The screening process and the scenario of sales returns as presented in the chapter are applicable to those kinds of manufacturing processes. A large number of industries manufacture a similar kind of product throughout the year. The production managers of such industries will benefit from the results of this model, because the concept of interval-valued inventory cost parameters is well suited to this type of industry. Depending upon the interval-valued total profit per unit time obtained from this model, the managers can take optimistic or pessimistic decisions considering their circumstances.
However, in this proposed model we have asserted the demand rate and the proportion of defective items as fixed numbers; these can also be considered as imprecise numbers in the future to formulate more realistic EPQ models. Also, a double inspection process can be incorporated to enrich the model.

References
[1] Bhunia A.K., Mahato S.K., Shaikh A.A., Jaggi C.K. A deteriorating inventory model with
displayed stocklevel-dependent demand and partially backlogged shortages with all unit
discount facilities via particle swarm optimisation. International Journal of Systems Science:
Operations & Logistics 2015, 1(3), 164–180.

[2] Bhunia A.K., Shaikh A.A. Investigation of two-warehouse inventory problems in interval
environment under inflation via particle swarm optimization. Mathematical and Computer
Modelling of Dynamical Systems 2016, 22(2), 160–179.
[3] Cardenas-Barron L.E. Observation on: Economic production quantity model for items with
imperfect quality. International Journal of Production Economics 2000, 64, 59–64.
[4] Chakrabortty S., Pal M., Nayak P.K. Solution of interval-valued manufacturing inventory
models with shortages. International Journal of Engineering and Physical Sciences 2010, 4(2),
96–101.
[5] Clerc M., Kennedy J. The particle swarm-explosion, stability, and convergence in a multi-
dimensional complex space. IEEE Transactions on Evolutionary Computations 2002, 6(1),
58–73.
[6] Dey O., Giri B.C. A new approach to deal with learning in inspection in an integrated vendor-
buyer model with imperfect production process. Computers & Industrial Engineering 2019,
131, 515–523.
[7] Domoto E., Okuhara K., Ueno N., Ishii H. Target inventory strategy in multistage supply chain
by particle swarm optimization. Asia Pacific Management Review 2007, 12(2), 117–122.
[8] Eberhart R.C., Kennedy J. (1995). A new optimizer using particle swarm theory. In Proceedings
of the Sixth International Symposium on Micro Machine and Human Science, Nagoya, Japan,
39–43.
[9] Hu B.Q., Wang S. A novel approach in uncertain programming part I: New arithmetic and order
relation for interval numbers. Journal of Industrial and Management Optimization 2006, 2(4),
351–371.
[10] Ishibuchi H., Tanaka H. Multiobjective programming in optimization of the interval objective
function. European Journal of Operational Research 1990, 48(2), 219–225.
[11] Kennedy J., Eberhart R. (1995). Particle swarm optimization. Proceedings of
ICNN’95- International Conference On Neural Network, Perth, WA, Australia, 4, 1942–1948,
doi: 10.1109/ICNN.1995.488968.
[12] Krishnamoorthi C., Panayappan S. An EPQ model with imperfect production systems with
rework of regular production and sales return. American Journal of Operations Research 2012,
2, 225–234.
[13] Kundu A., Chakrabarty T. A production lot size model with Fuzzy-Ramp type demand and fuzzy
deterioration rate under permissible delay in payments. International Journal of Mathematics
in Operational Research 2011, 3(5), 524–540.
[14] Maity A.K., Maity K., Maity M. Optimal remanufacturing control policy with defective items.
International Journal of Operation Research 2009, 6(4), 500–518.
[15] Manna A.K., Dey J.K., Mondal S.K. Effect of inspection errors on imperfect production
inventory model with warranty and price discount dependent demand rate. RAIRO-Operation
Research 2020, 54(4), 1189–1213.
[16] Moore R.E. Method and application of interval analysis. SIAM 1979, Philadelphia.
[17] Sarkar B., Moon I. An EPQ model with inflation in an imperfect production system. Applied
Mathematics and Computation 2011, 217, 6159–6167.
[18] Pal B., Adhikari S. Price-sensitive imperfect production inventory model with exponential
partial backlogging. International Journal of Systems Science: Operations & Logistics 2019, 6
(1), 27–41.
[19] Patro R., Nayak M.M., Acharya M. An EOQ model for fuzzy defective rate with allowable
proportionate discount. OPSEARCH 2019, 56(1), 191–215.
[20] Porteus E.L. Optimal lot sizing, process quality improvement and setup cost reduction.
Operations Research 1986, 34(1), 137–144.

[21] Rosenblatt M.J., Lee H.L. Economic production cycles with imperfect production process. IIE
Transactions 1986, 18(1), 48–55.
[22] Ruidas S., Seikh M.R., Nayak P.K., Pal M. Interval valued EOQ model with two types of
defective items. Journal of Statistics and Management Systems 2018, 21(6), 1059–1082.
[23] Ruidas S., Seikh M.R., Nayak P.K., Sarkar B. A single period production inventory model in
interval environment with price revision. International Journal of Applied and Computational
Mathematics 2019, 5(7). https://doi.org/10.1007/s40819-018-0591-x.
[24] Ruidas S., Seikh M.R., Nayak P.K. An EPQ model with stock and selling price dependent
demand and variable production rate in interval environment. International Journal of System
Assurance Engineering and Management 2020, 11(2), 385–399.
[25] Ruidas S., Seikh M.R., Nayak P.K. A production inventory model with interval-valued carbon
emission parameters under price-sensitive demand. Computers & Industrial Engineering
2021, 150(Article), 107154. https://doi.org/10.1016/j.cie.2021.107154.
[26] Ruidas S., Seikh M.R., Nayak P.K. A production-repairing inventory model considering
demand and the proportion of defective items as rough intervals. Operational Research
International Journal 2021. https://doi.org/10.1007/s12351-021-00634-5.
[27] Sahoo L., Bhunia A.K., Kapur P.K. Genetic algorithm based multi-objective reliability
optimization in interval environment. Computers & Industrial Engineering 2012, 62(1), 152–160.
[28] Salameh M., Jaber M. Economic production quantity model for items with imperfect quality.
International Journal of Production Economics 2000, 64(1–3), 59–64.
[29] Sarkar B.R., Jamal A.A.M., Mondal S. Optimal batch sizing in a multi-stage production system
with rework consideration. European Journal of Operations Research 2008, 184(3), 915–929.
[30] Sengupta A., Pal T.K. On comparing interval numbers. European Journal of Operational
Research 2000, 127(1), 28–43.
[31] Sun J., Feng B., Xu W.B. (2004). Particle swarm optimization with particles having quantum
behavior. IEEE Proceedings of the 2004 Congress on Evolutionary Computation, Portland, OR,
USA, 1, 325–331, doi: 10.1109/CEC.2004.1330875.
[32] Taleizadeh A.A., Akhavan Niaki S., Shafii T.N., Meibodi R.G., Jabbarzadeh A. A particle swarm
optimization approach for constraint joint single buyer-single vendor inventory problem with
changeable lead time and (r, Q) policy in supply chain. International Journal of Advanced
Manufacturing Technology 2010, 51, 1209–1223.
[33] Wang X., Tang W. Fuzzy EPQ inventory models with Backorder. Journal of System Sciences
and Complexity 2009, 22, 313–323.
Nagendra Singh, Yogendra Kumar
Optimization techniques used for designing
economic electrical power distribution
Abstract: The design of electrical machinery systems is very important because such systems are large and difficult to handle. It is therefore necessary to design electrical engineering systems, including electrical machines and power systems, to be very compact without reducing their efficiency. Optimization techniques have long been used to develop compact and efficient electrical systems. They can optimize the mathematical model of any linear or nonlinear system in real time and provide the minimum or maximum optimum values. Many optimization techniques are used for obtaining the best value of a mathematical model, but because the technology is updated day by day, nobody can say which is the most suitable design of an electrical system. This chapter presents the latest optimization technology used to distribute electrical power in an economic and efficient way. Due to the high power loss in transmission lines, the per-unit distribution cost increases. The chapter also provides a comparative analysis of different optimization techniques with a suitable case study, and discusses the various challenges that arise when optimizing data using optimization techniques.

Keywords: optimization techniques, design of electrical distribution system, economic load dispatch, transmission line and load demand

1 Introduction
Electricity is very important in our daily life; we cannot imagine the world without it. The requirement for electricity is very high, whereas its generation is limited. Due to this high demand, it is necessary to optimize the distribution schedule so that electricity can be supplied to all consumers without overloading [1].

Since the electrical transmission and distribution network is very big and complicated, many factors affect the power transmitted on the network, and these factors give rise to huge losses on the transmission network. Such losses create a high voltage drop on the electrical network.

Nagendra Singh, Department of Electrical Engineering, Trinity College of Engineering and Tech-
nology, Karimnagar, Telangana, India, e-mail: [email protected]
Yogendra Kumar, Department of Electrical Engineering, MANIT, Bhopal, Madhya Pradesh, India,
e-mail: [email protected]

https://doi.org/10.1515/9783110716214-005

Losses and voltage drop also depend on the types of consumers and the loads they operate. In general, many consumers operate highly inductive loads, which increase the reactive power and decrease the power factor, thereby increasing the line losses and voltage drop [2].

In India, thermal power plants generate the largest share of power, followed by hydropower plants. The contribution of solar and wind power generation is very small compared to the load demand. Figure 1 shows the current percentage of power generation by different power plants [3].

Figure 1: Power generated by different power plants. [16]

We know that the efficiency of a thermal generation plant is about 37–40%, so the system already has a rather poor efficiency. Moreover, when power is transmitted over a long distance using a long transmission line, transmission losses arise, and hence a low voltage is received at the receiving end. The generation cost of the thermal power plant is shown in Figure 2.

Figure 2: Operating cost of the electrical power plant. [1]



So, the demand for electricity is very high while the generation of power is much less than the demand, and when electrical power is transmitted and distributed, the losses result in a low voltage at the receiving end. Some special arrangement therefore has to be made to fulfill the load demand, save generation cost, and decrease the line losses.

To find the best solution to this problem, the following steps are taken into consideration, as shown in Figure 3:

Figure 3: Process of optimization for the problems (physical system → mathematical model → objectives formulated for minimization, such as cost, losses, and equipment size, or for maximization, such as efficiency, profit, and machine life → identification of whether one or several parameters affect the goal, giving a single-objective or multi-objective problem → application of a suitable optimization technique to find the optimum parameter values between their maximum and minimum limits → calculation of the desired value and comparison with traditional methods).

– First, identify the consumers' demand.
– Prepare a load–duration curve, which helps in understanding the hourly demand of consumers over a day.
– Identify the parameters that affect the generation cost and transmission losses.
– Formulate a mathematical model using these parameters.

– Select a metaheuristic optimization technique and find the best values of these parameters.
– Calculate the cost of generation using the optimum data, and compare the results with traditional methods.

To distribute power among different consumers over the 24 hours of a day, distribution companies first prepare the load–duration curve. This curve helps in understanding how much demand is required at every hour of the whole day; it also shows when the peak demand occurs and what the base load for the whole day is [4].

For systematic power distribution and management of the load demand, the economic load dispatch (ELD) system is applied. Economic load dispatch is a simple concept: it is used to match the load demand to the available generated power at any instant. ELD also maintains all constraints during the power supply to the various consumers [5, 6].
ELD requires optimization techniques that help in finding the most suitable generating limits to fulfill the demand at any instant of time. Evolutionary optimization techniques use an evolutionary process to find the solution of the problem; these algorithms provide global solutions of discontinuous, nonconvex, and highly nonlinear problems [7].

The ELD problem has been solved using many classical methods and new evolutionary techniques, as shown in Table 1.

Table 1: Optimization techniques used for the solution of the ELD problem.

S. no.  Classical methods          New optimization techniques
1       Nonlinear programming []   Fuzzy logic []
2       Pattern search []          Genetic algorithm []
3       Harmony search []          Simulated annealing []
4       Lagrangian relaxation []   ANN []
5       Dynamic programming []     Tabu search []
6       Quadratic programming []   Particle swarm optimization [, ]
7       Linear programming []      Ant colony []
8       Interior point method []   Differential algorithm []
                                   JAYA optimization algorithm []
                                   Dragonfly algorithm []
                                   Ant lion optimization (ALO) []


1.1 Objective function

To obtain the optimum solution for any quantity, a mathematical objective function is required. The ELD problem can be formulated as a single-objective or a multi-objective problem with constraints. A single-objective formulation targets the fulfillment of one objective under constraints, whereas a multi-objective formulation targets more than one objective, again including constraints. Multi-objective problems are more complex than single-objective ones [8]:

$$\text{Objective function of ELD} = \text{minimize}(\text{Generation cost}) \quad (1)$$

$$F_i(P_i) = \text{Minimum}\left( a_i P_i^2 + b_i P_i + c_i \right) \quad (2)$$

subject to the constraints

$$\text{Load balance:} \quad \sum_{i=1}^{n} P_i = P_D + P_L \quad (3)$$

$$\text{Generation limits:} \quad P_i^{\min} \le P_i \le P_i^{\max} \quad (4)$$

The line loss of the transmission line is given by

$$LL = \sum_{i=1}^{n} \sum_{j=1}^{n} P_i B_{ij} P_j + \sum_{i=1}^{n} B_{i0} P_i + B_{00} \quad (5)$$

where $P_i$ is the generated power of the ith generating unit; $a_i$, $b_i$, and $c_i$ are the fuel cost coefficients; LL is the power loss in transmission; $B_{ij}$ is the loss coefficient of the ijth element; $B_{i0}$ is a loss coefficient vector; and $B_{00}$ is the constant loss.

The line loss of a medium transmission line is very small, but its length is only about 150 km, so it can cover only the very few consumers who reside near the generating stations. For a long transmission line, the line loss depends on the transmitted power, as shown in equation (5). So if optimal power is transmitted on the transmission line, it is possible to control the line loss.
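As an illustration of equations (2) and (5), the following Python sketch evaluates the total fuel cost and the transmission loss for a candidate dispatch. It is a minimal sketch under assumed data shapes (cost coefficients as arrays, the B-coefficients as a matrix); the three-unit numbers at the bottom are placeholders, not the case-study data:

```python
import numpy as np

def fuel_cost(P, a, b, c):
    """Total fuel cost, equation (2): sum_i (a_i P_i^2 + b_i P_i + c_i)."""
    P, a, b, c = map(np.asarray, (P, a, b, c))
    return float(np.sum(a * P**2 + b * P + c))

def line_loss(P, B, B0=None, B00=0.0):
    """Kron's loss formula, equation (5): P^T B P + B0^T P + B00."""
    P, B = np.asarray(P), np.asarray(B)
    loss = float(P @ B @ P) + B00
    if B0 is not None:
        loss += float(np.asarray(B0) @ P)
    return loss

# Hypothetical three-unit example (coefficients are illustrative placeholders).
P = np.array([200.0, 150.0, 120.0])                 # MW output of each unit
a, b, c = [0.004, 0.006, 0.009], [5.3, 5.5, 5.8], [500.0, 400.0, 200.0]
B = np.full((3, 3), 0.00003); np.fill_diagonal(B, 0.0001)
print("cost ($/h):", fuel_cost(P, a, b, c))
print("loss (MW):", line_loss(P, B))
```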

2 Optimization techniques

Evolutionary techniques are robust optimization techniques suitable for finding the optimum solution of discontinuous, nonlinear, and multi-objective problems. Optimization is a process of finding the best global solution for a problem. For any given problem, the search area is first decided using the minimum and maximum limits of the parameters that directly affect the process, and the process is then started randomly in this defined search area to obtain the global solution of the defined objective function with constraints [9]. Figure 4 shows various optimization techniques used for the solution of the ELD problem.
Figure 4: Various optimization techniques [16] (traditional techniques: linear, nonlinear, integer, geometric, and quadratic programming; non-traditional techniques: evolutionary methods such as the genetic and memetic algorithms, non-gradient methods such as tabu search and simulated annealing, probabilistic methods, and social-behavior methods such as particle swarm optimization, artificial bee colony, and ant colony).

Evolutionary optimization techniques are able to optimize real-world problems. The implementation procedure is also very easy for complex problems, and various constraints can be handled without any large modification of the program. In evolutionary techniques a parallel search process is applied, so there is a high probability of obtaining the optimal solution of the problem [10].

Iterative computation is another important aspect of evolutionary techniques, where an approximate mathematical model is used for evaluation of the optimum solution. In iterative computation techniques, the fitness function is not planned; it is directly assumed by the expert [11, 12].

Evolutionary techniques are near-global, stochastic optimization methods. They search for the optimal solution using a population over a number of iterations [13]. With the help of a mutation operator, a new population is created from the existing population, and the quality of the solution of each individual is measured by the fitness function of the problem [14].

The following procedure is used for obtaining the optimum solution of a nonlinear practical problem.

2.1 Optimal design

For the best and most compact design of a system, it is necessary to obtain the optimal values of the design parameters. With the help of optimization techniques, one can evaluate the optimal values of the variables used in the design of the system. The design parameters may be associated with dimensions, shapes, and numbers, or may contain other information; such parameters are called decision parameters. The objective function may be defined as the cost of production, the consumption of energy in a particular system such as an office, a lift, or a firm, reliability, or the stability margin. In the proposed work, our main objective is to use the generating units in such a way that the demand is fulfilled and the total generation cost is minimized [15].

2.2 Optimal control

Optimal solutions of nonlinear problems depend on some parameters, and hence it is necessary to control these parameters. Proper adjustment of the variables can give an efficient, optimum solution of the problem with less effort and in minimum time.

2.3 Modeling

For optimization of any physical system, it is necessary to obtain the relationship between input and output using a mathematical model. Modeling is thus the process of identifying the parameters of the system and the relationships between them. Optimization methods optimize such mathematical models and minimize the error between the predicted output and the actual output.

2.4 Scheduling

When optimizing the data of a system, a proper sequence of operations is required. If the proper sequence is not followed, a wrong output may be obtained, and hence the final result may deviate from the actual values. That is why the schedule of operations on the data has to be set in sequence [16].

2.5 Prediction and forecasting

When developing a mathematical model of a physical system, it is necessary to predict and forecast the correct parameters that directly or indirectly affect the final results. For more accurate prediction of the parameters, experts use past data of the problem.

2.6 Data mining

In any sample of data that we want to optimize, it is necessary to identify the hidden information in the data. This is a very difficult task because, for large data sets, nobody knows in advance what information they contain and which of it is useful. Data mining is thus the process of extracting the useful information from a large data sample effectively, in a short time and with little effort [17].

2.7 Machine learning

In the age of automation, many real-world systems are equipped with automated and intelligent subsystems that can make decisions and adjust the system optimally. In the design of such intelligent subsystems, machine learning techniques involving different soft computing and artificial intelligence techniques are often combined. To make such a system operate with minimum change or with minimum energy requirement, an optimization procedure is needed. Since such subsystems are used online, and online optimization is a difficult proposition due to time restrictions, such problems are often posed as online optimization problems and solved by trying them on a number of synthetic scenarios.

3 Application of optimization techniques


in electrical engineering
Optimization techniques are used in various applications in electrical engineer-
ing; Table 2 shows some of them.

Table 2: Application of optimization techniques.

Name of technique: application in electrical engineering

Fuzzy logic []: control of electrical machines; electric transformer protection systems; transmission line protection
Genetic algorithm []: stability analysis of power systems; design of the structure of electrical networks
Simulated annealing []: planning of generation and transmission; maintenance scheduling of generators
ANN []: multi-objective optimization problems; load forecasting
Tabu search []: unit commitment; planning of reactive power compensation
Particle swarm optimization [, ]: economic load dispatch; optimization of distribution and operation
Ant colony []: designing power system damping controllers
Differential algorithm []: analog electronic circuit sizing; synthesis of time-modulated antenna arrays; color quantization
JAYA optimization algorithm []: load forecasting problems; optimal coordination of overcurrent relays
Dragonfly algorithm []: optimization of solar cells
Ant lion optimization []: optimal reactive power dispatch

4 Case study for the ELD problem using optimization techniques

This section presents a case study to show the effectiveness of optimization techniques for optimizing the ELD problem. Data of a six-generating-unit (IEEE 30-bus) system are considered, as shown in Table 3, and the line loss parameter values are given in Table 4. The results obtained by different techniques without loss are given in Table 5, whereas the results of optimization with line loss are given in Table 6.

Table 3: Capacity and cost coefficients for the six generating unit system for the demand of 700 MW [11] (columns: generating unit, ai, bi, ci, Pimin, Pimax).

Table 4: Line loss parameter values (the 6 × 6 matrix of loss coefficients Bij).
Table 5: Results of different optimization techniques without line loss for the demand of 700 MW (rows: unit power outputs P1–P6 in MW, total generated power in MW, total generation cost in $/h, and computation time in s; columns: GA, ant colony, DE, PSO, NPSO).

Table 6: Results of the six generating unit system with line loss for the demand of 700 MW (rows: optimal output powers P1–P6 in MW, line losses in MW, total generated power in MW, total generation and transmission cost in $/h, and computation time in s; columns: GA, ant colony, DE, PSO, NPSO).

Table 5 shows the results of the ELD problem using different optimization techniques. In this case, line loss is not considered; hence, a total generation cost of 810.3483 $/h was obtained using the new particle swarm optimization (NPSO). The evaluation time is less than 2 s for all the optimization methods, so the evaluation time is very short for small as well as large data sets, whereas classical methods take a very long evaluation time. Table 6 shows the response of the different optimization techniques with line losses: a total generation cost of 829.37 $/h was obtained for a connected load of 700 MW using the new PSO, with an evaluation time of again nearly 2 s. All programs were run in Matlab, and the optimal generation point for the load of 700 MW was obtained; using equation (2), the total generation cost of the plant was then calculated. All the methods are very effective and able to give the global solution of the nonlinear ELD problem, and the time taken in optimization is very short for small as well as large data sets.
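The following Python sketch outlines how a PSO of the kind compared above can be wired to the ELD objective of Section 1.1. It is a simplified illustration, not the chapter's actual implementation: the penalty handling and parameter settings are assumptions, and it reuses the `fuel_cost` and `line_loss` helpers sketched earlier.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def pso_eld(cost_fn, pmin, pmax, demand, loss_fn=None,
            n_particles=50, iters=300, w=0.729, c1=1.49445, c2=1.49445):
    """Minimize total fuel cost subject to the load balance (3) via a penalty term."""
    pmin, pmax = np.asarray(pmin, float), np.asarray(pmax, float)
    dim = pmin.size

    def fitness(P):
        loss = loss_fn(P) if loss_fn else 0.0
        imbalance = abs(P.sum() - demand - loss)        # violation of equation (3)
        return cost_fn(P) + 1e4 * imbalance             # penalty-weighted objective

    X = rng.uniform(pmin, pmax, (n_particles, dim))
    V = np.zeros_like(X)
    pbest, pbest_val = X.copy(), np.array([fitness(x) for x in X])
    gbest = pbest[pbest_val.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(X.shape), rng.random(X.shape)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
        X = np.clip(X + V, pmin, pmax)                  # enforce limits, equation (4)
        vals = np.array([fitness(x) for x in X])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = X[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Hypothetical call mirroring the 700 MW experiment (a, b, c, B as in the earlier sketch):
# gbest, cost = pso_eld(lambda P: fuel_cost(P, a, b, c), pmin, pmax, 700.0,
#                       loss_fn=lambda P: line_loss(P, B))
```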

5 Conclusions

The electrical power sector is passing through a crisis phase in terms of the unremitting gap between demand and supply. Looking at the present strength of India, and to achieve the targets in the field of electrical power, it is necessary to adopt an efficient operation methodology for thermal power plant generation. Optimization techniques have recently been used for the optimal operation of power plants. Classical methods suffer from many issues: they are unable to give global solutions for large data sets and also take a very long computation time. The effectiveness of the new optimization techniques has been shown in the case study section; these methods are very effective for the evaluation of the optimal value of linear and nonlinear data, and the time taken for optimization is also very short. So, all in all, we can say that optimization techniques can help restructure the electrical system as per the current scenario.

References
[1] Jabr R.A., Coonick A.H., Cory B.J. A homogeneous linear programming algorithm for the
security constrained economic dispatch problem. IEEE transactions on power systems 2000,
15(3), 930–936.
[2] Al-Sumait J.S., Al-Othman A.K.A., Sykulski J.K. Application of pattern search method to power
system valve point economic load dispatch. Electrical power energy system 2007, 29(10),
720–730.
[3] Arul R., Ravi G., Velusami S. Non-convex economic dispatch with heuristic load patterns using
harmony search algorithm. International Journal of computer applications 2011, 16(1), 26–34.
[4] Tkayuki S., Kamu W. Lagrangian relaxation method for price based unit commitment problem.
Engineering optimization taylor Francis 2004, 5(3), 36–41.
[5] Travers Dean L., John K.R. Dynamic dispatch by constructive dynamic programming. IEEE
Transactions on power systems 1998, 13(1), 72–78.
[6] Farag A., Al-Baiyat S., Cheng T.C. Economic load dispatch multiobjective optimization
procedures using linear programming techniques. IEEE transactions on power systems. 1995
10(2), 731–739.
[7] Momoh J.A., Zhu J.Z. Improved interior point method for OPF problems. IEEE transactions on
power systems 1999, 14(3), 1114–1120.
[8] Nima A., Nasiri-Rad H. Nonconvex Economic dispatch with ac constraints by a new real coded
genetic algorithm. IEEE transactions on power systems 2009, 24(3), 1489–1502.
[9] Mantawy A.H., Abdel-Magid Youssef L., Shokri S. A simulated annealing algorithm for unit
commitment. IEEE transactions on power systems. 1998 13(1), 197–204.

[10] Mohatram M. Hybridization of artificial neural network and Lagrange multiplier method to
solve economic load dispatch problem. IEEE International Conference on Infocom
Technologies and Unmanned Systems (Trends and Future Directions) (ICTUS). 2017.
[11] Senthil K., Manikandan K. Improved Tabu search algorithm to economic emission dispatch
with transmission line constraint. International Journal of Computer Science &
Communication 2010, l(2), 145–149.
[12] Arul R., Ravi G., Velusami S. Non-convex economic dispatch with heuristic load patterns using
harmony search algorithm. International Journal of computer applications 2011, 16(1), 26–34.
[13] Jabr R.A., Coonick A.H., Cory B.J. A homogeneous linear programming algorithm for the
security constrained economic dispatch problem. IEEE transactions on power systems 2000,
15(3), 930–936.
[14] Maharana H.S., Dash S.K. Comparative Optimization Analysis of Ramp Rate Constriction
Factor Based PSO and Electro Magnetism Based PSO for Economic Load Dispatch in Electric
Power System. IEEE International Conference on Applied Machine Learning (ICAML). 2019.
[15] Balamurugan K., Umamaheswari K., Prabhu Raj M., Nivetha S., Giriprasad K., Ravirahul M.,
Guhaneeswaran V., Arun R.M. Solving Economic Load Dispatch Using JAYA Optimization
Algorithm. Research Journal of Chemistry and Environment 2020, 24(1), 145–151.
[16] Das D., Bhattacharya A., Ray R.N. Dragonfly Algorithm for solving probabilistic Economic Load
Dispatch problems. Neural Computing and Applications 2020, 32, 3029–3045.
[17] Nikitha M., Pratheksha J.L., Aishwarya T., Karthikaikannan D. Solution of Economic Load
Dispatch problem using Conventional methods and Particle Swarm Optimization. IJITEE 2020,
9(10), 243–247.
[18] Leena Daniel K.T.C. Economic Load Dispatch Using Ant Lion Optimization. International
Journal of Engineering Trends and Technology 2019, 67(4), 81–84.
Bidyut B. Gogoi, Anita Kumari, S. Nirmala, A. Kartik
Meta-heuristic optimization techniques
in navigation constellation design
Abstract: In this chapter, a new approach has been adopted for the design of a navigational satellite constellation, wherein we have employed a well-established genetic algorithm known as the multi-objective genetic algorithm. The design of an optimal constellation for a satellite system is of prime importance. The design of an evenly placed constellation like the Walker constellation is quite easy and straightforward, but for an unevenly placed constellation the design of an optimal configuration is quite challenging and time-consuming. Therefore, a new strategy has been followed for the design of unevenly placed constellations, and in this process we have tried to find an optimal constellation for the Navigation with Indian Constellation (NavIC). This method would further prove vital in the design of future uneven constellations for other regional navigation systems as well.

Keywords: meta-heuristic, multi-objective genetic algorithm (MOGA), NavIC constellation, navigation systems, constellation parameters

1 Introduction
Meta-heuristic optimization techniques for constellation design have been among the most promising topics for satellite engineers and scientists over the last three decades. A careful examination of the existing literature reveals much research in the recent past on deriving optimal satellite constellation geometry using genetic algorithms (GA). This dates back to the 1990s, when Soccorsi and Palmerini proposed a design of a constellation of satellites meant for regional coverage [1]. This was followed by the work of Frayssinhes in 1996 [2], where he employed a GA for the design of constellations of satellites in circular orbits. Later on, Ely and his coworkers considered satellites in elliptic orbits and worked on their constellation design through other such approaches [3, 4]. Since then, numerous

Bidyut B. Gogoi, Space Navigation Group, U. R. Rao Satellite Centre, Indian Space Research
Organization, Bangalore 560017, e-mail: [email protected]
Anita Kumari, Space Navigation Group, U. R. Rao Satellite Centre, Indian Space Research
Organization, Bangalore 560017, e-mail: [email protected]
S. Nirmala, Space Navigation Group, U. R. Rao Satellite Centre, Indian Space Research
Organization, Bangalore 560017, e-mail: [email protected]
A. Kartik, Space Navigation Group, U. R. Rao Satellite Centre, Indian Space Research
Organization, Bangalore 560017, e-mail: [email protected]

https://doi.org/10.1515/9783110716214-006

GA [14, 15] approaches have been employed for varied constellation designs of satellites at almost all altitudes [5–7]. Of late, such approaches have been seen to produce promising results in LEO-based satellite navigation systems as well [8, 9]. The work in this chapter intends to explore a new methodology to determine near-optimal constellations for navigational satellite systems using a meta-heuristic GA approach with the multi-objective GA (MOGA) [10, 11, 14, 15].
The Walker constellation approach [12] offers superior performance, but it works well only for constellations that are evenly placed and cannot be followed for unevenly placed constellations. The design of regional constellations for satellite navigation entails finding the best constellation parameters, and those parameters must be optimal so as to attain the preferred performance with minimum resources. Therefore, for an optimized design of regional navigation constellations, the emphasis should be on determining the minimum requisite number of planes with the fewest satellites on each plane. To cater to this, a very efficient methodology needs to be followed, one that yields the best constellation parameters from a minimum search space of possibilities. Moreover, constructing and upholding a navigation constellation is extremely expensive, so to maintain such a system with minimum cost and maximum benefit, the total minimum essential coverage area of the navigation constellation must be planned in advance and the system must accordingly be designed for the best performance at an optimal cost. Researchers working on the design of such uneven constellations have therefore gone with the brute-force method [13], but it is a tedious and time-consuming process, relying on exhaustive search. Recent works show that MOGA has proved to be vital in the design of regional navigation satellite constellations [14, 15]. Therefore, in this work, we have adopted the MOGA approach for the design of unevenly placed satellite navigation systems and, in this context, we have considered the unevenly placed Regional Navigation Satellite System of India, known as IRNSS or NavIC.
NavIC currently provides a navigation solution round the clock, irrespective of weather conditions, with coverage throughout India extending to an area of around 1,500 km surrounding India. The IRNSS or NavIC constellation has been successfully operational with the least number of satellites. In this work, the constellation design problem of NavIC with the MOGA has been studied in comparison with the current existing design of NavIC, wherein our prime objective has been to find the best constellation parameters that minimize the maximum dilution of precision (DOP) values. The results are summarized side by side with the existing ones in order to give readers a clear insight into the optimized design. They demonstrate that the proposed methodology provides a slightly better position accuracy and is also computationally less expensive than the existing one. This work therefore proposes a new and proficient technique for determining optimal constellations for unevenly placed satellite navigation systems, which would in turn serve as a major baseline for the design of future navigational satellite systems with uneven geometries.

The chapter is organized as follows: Section 1 gives the introduction, Section 2 describes meta-heuristic optimization techniques and MOGA, Section 3 gives an overview of the multi-objective approach used, Section 4 details the proposed methodology, Section 5 presents the results and analyses of this work, and finally Section 6 concludes the whole work.

2 Meta-heuristic optimization techniques


and MOGA
Meta-heuristic optimization techniques follow a realistic approach that may not necessarily be ideal but is ample enough to accomplish the objectives. Such techniques are generally based on an iterative generation process in which a heuristic unites potentially varied notions for discovering and exploiting the domain of all prospects. Literally, "meta" indicates "in an upper level" and "heuristic" implies "to find." A GA is generally an iterative process that is accomplished in three different steps, namely selection, mutation, and crossover, which at each iteration derive the successive population from the current population.

MOGA is a meta-heuristic optimization technique [10, 11, 14, 15] that may be exploited to solve search and optimization problems. MOGA follows a natural selection procedure that repeatedly transforms a population of individuals: it selects some random individuals as parental candidates from the present population and uses them to produce offspring that continue to the next generation. This process is repeated over many generations, and ultimately, through plentiful selection, mutation, and crossover, we arrive at a population that provides the most appropriate solution. In this chapter, we have taken the constellation input parameters as the initial population for MOGA, which selects individuals from this population and then combines those inputs with certain random changes to generate a set of inputs for finding the geometric DOP (GDOP) at every iterative step.

3 Overview of the multi-objective approach used


Our multi-objective approach has the following parts:
1) Generating various sets of constellation parameters
2) Selecting the best constellation parameters with the aid of a fitness function

3.1 Generating various sets of constellation parameters

Various sets of constellation parameters can be generated by posing the design as a multi-objective optimization problem. In this work, we have chosen a few fundamental constellation design parameters as decision variables and formulated the problem as follows:

Minimize

$$\begin{aligned}
y_1 &= \Omega(1) + M(1) + \omega(1) - \theta_g(1) - \lambda(1) \\
y_2 &= \Omega(2) + M(2) + \omega(2) - \theta_g(2) - \lambda(2) \\
y_3 &= \Omega(3) + M(3) + \omega(3) - \theta_g(3) - \lambda(3) \\
&\;\;\vdots \\
y_n &= \Omega(n) + M(n) + \omega(n) - \theta_g(n) - \lambda(n)
\end{aligned} \quad (1)$$

where n is the number of satellites under consideration and used for the constellation design, Ω is the RAAN or right ascension of the ascending node of the satellites, M denotes the in-plane phase angle of the satellites, ω denotes the argument of perigee, $\theta_g$ represents the epoch sidereal angle, λ indicates the longitudinal crossings of the satellites, and $y_i$ denotes the fitness function to be optimized. If the problem is properly known, we can add the constraints accordingly. For instance, if at a particular instant of time, for the first satellite, we have M(1) as 15° and $\omega(1) = \theta_g(1) = 0$, then we can reformulate the first constraint as $\Omega(1) - \lambda(1) = -15$. Analogously, we can put the constraints for all equations so as to get a system of linearly independent equations. In almost all of our computations, we have taken ω = 0 and $\theta_g = 200$. For NavIC (IRNSS), we have analyzed the work for varying values of the phase angle and RAAN, with all the other parameters fixed at certain values. Therefore, for the 7-satellite constellation design problem we have 14 different variables (7 RAAN variables and 7 phase angle variables). In the next section, the chosen fitness function is shown, which is applicable to any number of satellites and planes. From the study by Nirmala et al. [13], we have employed the idea of the inclination and semimajor axis.

3.2 Selecting best constellation parameters with the aid of fitness function

For selecting the best constellation parameters, we have considered the following fitness function [14]:

$$C_f = W_{GDOP}\sum_{i=1}^{totrec} W_i\,GDOP_i + W_\alpha\sum_{i=1}^{totrec} W_i\,(1-\alpha_i) \tag{2}$$

where $C_f$ denotes the fitness function, $W_{GDOP}$ indicates the weight of GDOP, $totrec$ represents the total receiver points, $GDOP_i$ denotes the mean GDOP of the $i$th receiver, $W_\alpha$ stands for the availability weight, and $\alpha_i$ represents the presence of the $i$th receiving point. The fitness function was utilized to appraise the candidate solutions in such a way that the candidates having the least fitness value were considered the best, as demonstrated in Figure 1(a). In the plot, the encircled entry corresponds to the value with the minimum cost. In a similar manner, we have generated all the other sets of candidate solutions (by changing the constraints).
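A minimal sketch of the cost evaluation in equation (2) is given below; it is an illustrative reading of the formula, with placeholder names for the per-receiver mean GDOP values, availabilities, and weights.

import numpy as np

def constellation_fitness(mean_gdop, alpha, W, w_gdop, w_alpha):
    """Fitness C_f of equation (2): a weighted mean-GDOP term plus a weighted
    (1 - availability) term, summed over all totrec receiver points; candidates
    with a lower C_f are better."""
    mean_gdop, alpha, W = map(np.asarray, (mean_gdop, alpha, W))
    return w_gdop * np.sum(W * mean_gdop) + w_alpha * np.sum(W * (1.0 - alpha))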

Figure 1: Solution of candidates versus their cost (a) and Pareto-front (b).

The multi-objective problem (1) comprises seven objective functions that form a minimization problem. For the Pareto-front plot shown in Figure 1(b), we have taken the first two of those functions. From the plot, it is evident that the variations in decision values obtained during all iterations satisfy all the objectives.

4 The methodology
In this context, we have employed the MOGA [10, 11, 14, 15] methodology for the
design of optimized navigation constellations. At each iterative step, the outputs
generated by the MOGA optimization technique will serve as an input for computing
the DOP values.
Firstly, we have used the MOGA optimization tool as available in MATLAB to
perform numerical simulations for all the parameters used in the constellation de-
sign. Then using these results, we compute the horizontal DOP (HDOP) and vertical
DOP (VDOP).
As GA has not been exclusively designed for all kinds of optimization problems, efforts were made with a new methodology wherein the MOGA approach is employed on a multi-objective optimization problem. At the initial level, we generate a random population and compute the cost of each individual of the population. This is followed by random pairing of individuals from that population to construct the parent generation, based on a selection process known as tournament selection. This parent generation then takes part in an arithmetic crossover operation to produce an intermediate generation, followed by an operation called mutation. Nonuniform mutation is applied to this intermediate generation to finally create the child generation. Next, we compute the cost of each child, and based on the costs of the candidates in the initial generation and the child generation, a second tournament selection is performed; the whole process is continued until the preset stopping criteria of the procedure are satisfied.
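A minimal sketch of one MOGA generation as just described (tournament selection, arithmetic crossover, nonuniform mutation, and a second tournament over parents and children) follows, assuming a real-coded population and a user-supplied cost function; it illustrates the flow only and is not the MATLAB implementation used in this study.

import numpy as np

rng = np.random.default_rng(1)

def one_generation(pop, cost_fn, lo, hi, gen, max_gen, b=2.0):
    """Evolve a real-coded population (rows = individuals) by one MOGA step."""
    cost = np.array([cost_fn(ind) for ind in pop])

    def tournament(p, c):
        i, j = rng.choice(len(p), size=2, replace=False)
        return p[i] if c[i] < c[j] else p[j]

    # Tournament selection followed by arithmetic crossover -> intermediate generation.
    children = []
    for _ in range(len(pop)):
        p1, p2 = tournament(pop, cost), tournament(pop, cost)
        a = rng.random()
        children.append(a * p1 + (1.0 - a) * p2)
    children = np.array(children)

    # Nonuniform mutation: the perturbation shrinks as gen approaches max_gen.
    shrink = (1.0 - gen / max_gen) ** b
    step = rng.uniform(-1.0, 1.0, size=children.shape)
    children = np.clip(children + step * (hi - lo) * shrink, lo, hi)

    # Second tournament over the combined parent and child pools.
    pool = np.vstack([pop, children])
    pool_cost = np.concatenate([cost, np.array([cost_fn(c) for c in children])])
    return np.array([tournament(pool, pool_cost) for _ in range(len(pop))])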
For the computation of DOP, the orbit of the satellites was simulated by the use
of the full force model. The frequency of sampling has been taken as 1 per minute, the
mask angle was chosen as 5° and the grid for sampling has been chosen as 5° × 5°. The
initial population size for the MOGA technique has been taken as 300 and the maxi-
mum iteration has been chosen as 200.
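For reference, the simulation settings quoted above can be collected in one place; the snippet below is merely illustrative, and the key names are invented.

DOP_SIMULATION = {
    "sampling_rate_per_min": 1,   # one sample per minute
    "mask_angle_deg": 5,          # elevation mask angle
    "grid_deg": (5, 5),           # 5 deg x 5 deg sampling grid
    "moga_population_size": 300,
    "moga_max_iterations": 200,
}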

5 Results and analysis


MOGA is a stochastic technique and hence generates a different set of results on every successful run. Based on the value of the fitness function, the best constellation parameter sets have been selected, and the results are shown in this section. The obtained solution has been utilized to produce a global optimal
solution. In the results, we have demonstrated the HDOP and VDOP values obtained
from the best constellation parameter sets via MOGA and made a comparison with
the present HDOP and VDOP of the present NavIC constellation. The present constel-
lation for NavIC consists of seven satellites, the longitudinal crossings (λ), and incli-
nations (i) for each of them are given in Table 1.

Table 1: NavIC (IRNSS) satellites with longitudinal crossings (λ) and inclinations (i).

IRNSS satellite    λ      i
A                  °      °
B                  °      °
C                  °      °
D                  .°     °
E                  .°     °
F                  .°     °
G                  .°     °

To demonstrate the design of the NavIC constellation carried out with the proposed methodology, Table 2 tabulates a few of the results obtained.

Table 2: Values of constellation parameters obtained with the proposed method.

Case study   Constellation parameters   A    B    C    D    E    F    G
1            Ω (deg)                    .    .    .    .    .    .    .
             M (deg)                    .    .    .    .    .    .    .
2            Ω (deg)                    .    .    .    .    .    .    .
             M (deg)                    .    .    .    .    .    .    .
In both cases, we have plotted the DOP as shown in Figures 2–5. For a comparative
analysis, we have put our obtained results for HDOP and VDOP side-by-side with
the HDOP and VDOP obtained using the present operational constellation.

Case 1: In Figure 2, we have shown a comparison of our HDOP values obtained to the
HDOP values obtained using the present operational NavIC constellation, whereas in

Figure 3, we have shown a comparison of VDOP values. Figure 2(b) demonstrates that
this obtained design (test case 1) with the proposed methodology has an improvement
in the coverage area both in and around India. This is evident as we now have an im-
proved HDOP (Figure 2(b)) (increase in green- and blue-colored regions) in comparison
to the HDOP (Figure 2(a)) obtained from the existing satellite constellation design.
Thus, the accuracy of the horizontal positions can be improved with this design.

Figure 2: Horizontal DOP for case 1: (a) existing design and (b) proposed design.

Analogously, Figure 3(b) shows that there is an improvement in the regions with maximum VDOP with the proposed design. This is evident from the green-colored regions, which show drastic improvement in Figure 3(b) (proposed design) over Figure 3(a) (existing design): the map turns almost completely green, indicating lower VDOP values. This exemplifies that the proposed technique may be a better alternative for navigation constellation design.

Figure 3: Vertical DOP for case 1: (a) existing design and (b) proposed design.

Case 2: Similar to the previous case study, in Figure 4 we have shown a comparison
of our HDOP values obtained to the HDOP values obtained using the current opera-
tional constellation, whereas Figure 5 shows a similar comparison but for VDOP val-
ues. Figure 4(b) demonstrates that this obtained design (test case 2) with the
proposed methodology has an improvement in the area of coverage with lesser HDOP in a bigger area as compared to the existing design.

Figure 4: Horizontal DOP for case 2: (a) existing design and (b) proposed design.

Further, Figure 5(b) shows that
lesser VDOP regions are more prominent in the proposed design when compared to
the existing NavIC constellation design (Figure 5(a)).

Figure 5: Vertical DOP for case 2: (a) existing design and (b) proposed design.

6 Conclusion
This chapter proposes a new technique wherein a meta-heuristic optimization technique is utilized for designing navigational satellite constellations. In the process, we have implemented the methodology for finding an optimized design of the NavIC constellation. The proposed approach has proved to be vital not only in finding the constellation parameters but also in finding the optimal parameters with minimal cost. Results are demonstrated for two different sets of parameters that are generated with this approach. A comparative analysis of these two generated
are generated with this approach. A comparative analysis of these two generated
sets with the existing operational constellation has been made with the DOP values.
The test cases exemplify that the proposed methodology can be implemented to de-
sign an optimal constellation. Promising improvements in the DOP values from the
existing constellation justify the efficacy of the presented technique and hence may
be an effectual strategy in designing future constellations of navigational satellites.

References
[1] Soccorsi F.M., Palmerini G.B. 1996 Design of Satellites Constellations for Regional Coverage.
AAS 96-208, AAS/AIAA Space Flight Mechanics Meeting.
[2] Frayssinhes E. 1996 Investigating New Satellites Constellations Geometries with Genetic
Algorithms. AIAA Paper 96-3636, Proceedings of the AAIA/AAS Specialist Conference, San
Diego, CA, Jul. 1996, pp. 582–588.
[3] Ely T.A., Crossley W.A., Williams E.A. 1998 Satellite Constellation Design for Zonal Coverage
Using Genetic Algorithms. AAS Paper 98-128, AAS/AIAA Space Flight Mechanics Meeting,
Monterey, CA February.
[4] Ely T.A., Anderson R.L., Bar-Sever Y.E., Bell D.J., Guinn R., Jah M.K., Kallemeyn P.H., Levene E.D., Romans L.J., Wu S.C. (1999). Mars Network Constellation Design Drivers and Strategies. AAS/AIAA Astrodynamics Specialist Conference Paper AAS 99-301, Girdwood, Alaska.
[5] George E. 1997 Optimization of Satellite Constellations for Discontinuous Global Coverage via
Genetic Algorithms. AAS Paper 97-621, AAS/AIAA Astrodynamics Specialist Conference, Sun
Valley, ID, Aug. 4-7.
[6] Goldberg D.E. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, United States, 1989.
[7] Savitri T., Kim Y., Jo S., Bang H. 2017. Satellite Constellation Orbit Design Optimization with Combined Genetic Algorithm and Semianalytical Approach. Hindawi International Journal of Aerospace Engineering 2017, 1235692.
[8] Kohani S., Zong P. A genetic algorithm for designing triplet LEO satellite constellation with
three adjacent satellites. International Journal of Aeronautical and Space Sciences 2019, 20,
537–552.
[9] Ma F., Zhang X., Lin P. Hybrid constellation design using a genetic algorithm for a LEO-based
navigation augmentation system. GPS Solutions 2020, 24, Article number: 62, 1–14.
[10] Murata T., Ishibuchi H., “MOGA: Multi-objective genetic Algorithms” IEEE – International
Conference on Evolutionary Computation, December 1995, pp. 289–294.

[11] Konak A., Smith A.E. Multi-objective optimization using genetic algorithms: A tutorial.
Reliability Engineering & System Safety 2006, 91(9), 992–1007.
[12] Walker J.G. (1977): Continuous Whole Earth Coverage by Circular Orbit Satellites, Royal
Aircraft Establishment, Farnborough (UK), TR 77044, March 1977.
[13] Nirmala S., Rathanakara S.C., Ganeshan A.S. Global Indian Navigation Satellites: Constellation Studies. Space Navigation Group, NSD, ISRO-ISAC-TR-0887, Aug 2009, ISRO.
[14] Ozdemir H.I., Roquet J.F., Lamont G.B., "Design of a Regional Navigation Satellite System Constellation Using Genetic Algorithms", ION GNSS 21st International Technical Meeting of the Satellite Division, 16-19 September 2008, Savannah, GA.
[15] Lu H., Liu X. Compass Augmented Regional Constellation Optimization by a Multi-Objective Algorithm Based on Decomposition and PSO. Chinese Journal of Electronics 2012, 21(2), 374–378.
Ch. Swetha Devi, V. Ranga Rao, Pushpalatha Sarla
Correlation and heuristic analysis
of polymer-modified concrete subjected
to alternate wetting and drying
Abstract: Durability of any material in an aggressive environment is important for its universal use, and it is a major issue for structures that are made of concrete and placed in highly aggressive environmental conditions. Sulfate media may severely affect the performance of concrete structures. To understand this, the primary aim of this research work is to study the correlation of the strength characteristics of fiber-reinforced polymer concrete under the effect of sulfate attack. In this chapter, the correlation between the compressive strength before and after alternate wetting and drying, for varying percentages of added polymer, was studied for iron fiber, steel fiber, and high-density polyethylene (HDPE) fiber reinforced concrete. It also describes the autocorrelation function of compressive strength after 150 cycles of alternate wetting and drying (MPa). The statistical investigation clearly revealed that there is a high correlation between the strength before and after alternate wetting and drying when a percentage of polymer is added. The fibers assumed for the study are iron fiber, steel fiber (SF), HDPE fiber, and polypropylene fiber (PPF). Styrene-butadiene rubber latex (SBR latex) is used as the polymer. Polymer is employed at varying percentages (0, 0.5, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, and 2.4%) by weight of cement, and the strength parameters were evaluated.

Keywords: autocorrelation function, wetting-drying cycle, polypropylene fiber (PPF), styrene-butadiene rubber latex (SBR latex), steel fiber (SF)

1 Introduction
Modern-day engineering structures require materials that are durable and have high endurance during the life cycle due to the strength requirement. Among all the materials used for construction, concrete plays a key role in civil engineering structures, owing to its versatility and high fatigue strength compared with other materials

Ch. Swetha Devi, Research scholar, K. L. University, Vijayawada, Andhra Pradesh,


e-mail: [email protected]
V. Ranga Rao, Professor, K. L. University, Vijayawada, Andhra Pradesh
Pushpalatha Sarla, S. R. Engineering College, Warangal, Telangana; Department of Mathematics,
Sumathi Reddy Institute of Technology for Women, Warangal, India

https://doi.org/10.1515/9783110716214-007

used for structural erection. Structures that are constructed with concrete are sub-
jected not only to high amount of stress but also to severe environmental condi-
tions. Many concrete structures are also subjected to erosion owing to salts present
in the environment they service [1–3]. Erosion not only causes structural deteriora-
tion but also affects the material present inside reinforced concrete, including steel,
that in turn corrodes and weakens the structure. Among all types of erosion, sulfate erosion is majorly prevalent, and it is termed the major deteriorating factor concerning concrete. Several countries face this issue; specifically, countries like China have a large amount of soil in the coastal region with high SO₄²⁻ content, which impregnates the concrete and also undergoes several reactions (both physical and chemical) that reduce the wall strength of concrete and cause structural damage [4, 5].
To avoid this, many materials have been added to concrete including fly ash,
which reduces the sulfate attack and chloride penetration, but the strength of con-
crete seems to be reducing compared with original strength [6, 7]. The expansion of
concrete in early age is found to be decreasing the shrinkage strain created in the
cement mortar, which uses Portland blast furnace slag-based cementitious materi-
als [7]. When the concrete is subjected to alternate wetting-drying cycle, it causes
severe deterioration in the concrete owing to the chemical reaction occurring inside
the cement–aggregate matrix combined with the salts prevailing in the intruding
water. This occurs in areas where the water level changes frequently, including sub-water structures and tidal areas, where the ions transported into the concrete during the wetting process cause severe damage [8]. It is also found that, due to the interior ion concentration left behind by the drying process, the volume increases up to 4 to 5 times in certain cases [8]; hence, an increase in ion concentration intensity is found. Wetting-drying cycle environment research is gaining impetus, and it is found to be the best method to analyze sulfate and chloride attack in immersed concrete [9, 11, 12].
Considering this, it is very evident that analyzing the wetting-drying cycle is of paramount importance if the structure is going to be under severe environmental conditions, and it is key to define various attributes for civil engineering projects that are subjected to severe environmental stress [10–15]. Bountiful
studies were undertaken by several researchers to analyze the behavior of concrete
under severe environmental stress conditions and many of them used different
methodology, tools, techniques, and protocols. Different studies were performed to understand the wetting-drying cycle, and the methods they employed also seem to be different; yet, three common methods were employed, such as simulating the engineering wetting-drying environment, enhancing the erosion rate, and facilitating the operational protocols [16–18]. Owing to this, confusion exists since the protocols and methods employed are different and hence cannot be used to generalize the outcomes of those research works; yet, all of them confirm that the
deterioration is majorly caused due to the chloride ion diffusion into the concrete

and the subsequent impact on chemical reactions. Numerous research works have been carried out at the macroscopic level of concrete to understand its mechanical nature under sulfate and chloride attack after being subjected to alternate wetting-drying cycles. In all those works, the strength that is used as an indicator or main
parameter is either compressive strength or split tensile strength to indicate the de-
terioration that may occur in the concrete during this cycle [18, 19]. Gao et al. stud-
ied the nature of concrete under severe conditions using alternate wetting-drying
cycle and indicated the loss of strength in terms of percentage loss in compressive
and tensile strength for various mix proportions of concrete. Their results indicated
that the split tensile strength has more severity in terms of strength loss when the
concrete is subjected to alternate wetting-drying cycles which was also confirmed
in several other research works [19, 20]. Hence, in this work we take the compressive strength as the indicator for the analysis and perform a correlation analysis on the strength loss to evaluate the impact of the alternate wetting-drying cycles that occur under severe conditions.

2 Materials used
In this study the following materials were used:
– OPC 43 grade cement with specific gravity of 3.15
– Locally available sand with specific gravity of 2.60 and fineness modulus of 2.11
– Coarse aggregate of size 12 mm and down, having specific gravity of 2.65
– CONPLAST 430, a commercially available superplasticizer, at 1% by weight of cement
– Styrene-butadiene rubber latex as polymer; the percentage of polymer added was 0, 0.5, 1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, and 2.4% by weight of cement
– GI fiber, steel fiber, polypropylene fiber (PPF), and HDPE fiber, used at the ratio of 48, 40, 25, and 2,534, respectively
– Magnesium sulfate (MgSO4), to prepare the sulfate media

2.1 Experimental procedure

In this study, experimentation is done based on Indian standard code provisions, and the concrete compressive strength was evaluated using cubical specimens of size 150 mm. The strength data used for the correlation study is the average value of three cubes per data point.

3 Test results and discussions


3.1 Compressive strength test

Figure 1 gives the loss of weight and compressive strength of steel, GI, HDPE, and PPF fiber reinforced polymer concrete when subjected to sulfate attack with 150 cycles of alternate wetting and drying in sulfate solution.

3.1.1 Autocorrelation

The autocorrelation function (ACF) (or correlogram) is used to analyze time series data under the premise that the time series has constant auto-covariance; the expected correlation between data values is specified as the auto-covariance function. In this approach, the ACF of the given data series is first determined until it becomes negative, and only the leading positive values are retained; a regression line is then fitted to the logarithm of the ACF values against the lags. We utilized SPSS to compute the autocorrelation coefficients of compressive strength after 150 cycles of alternate wetting and drying, in order to find how the compressive strength features depend on the type of drying utilized. The log-log correlogram plot is given here, with the lags (k = 1, 2, 3, . . .) along the X-axis and the log ACF on the Y-axis. Figure 1 shows that the series exhibits a substantial trend, with the fitted regression line Y = −0.0862X + 0.775. The R² value of 0.9126 indicates the total variation in the explanatory variable that can be explained by the predictor variable.
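The log-log correlogram procedure described above can be reproduced in a few lines. The sketch below is one plausible reading of it (sample ACF, truncation at the first non-positive value, least-squares fit of log ACF against lag) offered in place of the authors' SPSS workflow; the function name and the lag limit are arbitrary choices.

import numpy as np

def loglog_acf_fit(series, max_lag=16):
    """Fit a straight line to log10(ACF) versus lag; returns slope,
    intercept, and R^2, keeping only the leading positive ACF values."""
    x = np.asarray(series, dtype=float)
    x = x - x.mean()
    denom = np.dot(x, x)
    acf = np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, max_lag + 1)])
    if (acf <= 0).any():                     # truncate at first non-positive value
        acf = acf[: np.argmax(acf <= 0)]
    lags = np.arange(1.0, len(acf) + 1.0)
    log_acf = np.log10(acf)
    slope, intercept = np.polyfit(lags, log_acf, 1)
    resid = log_acf - (slope * lags + intercept)
    r2 = 1.0 - resid.var() / log_acf.var()
    return slope, intercept, r2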
The autocorrelation of the compressive strength values of GI fiber reinforced polymer-based concrete when subjected to sulfate attack utilizing alternate wetting and drying is shown in Figure 2. A log-log correlogram plot was produced for accessibility by graphing lag k (k = 1, 2, 3, . . .) along the X-axis and log ACF on the Y-axis. The data series shows a substantial trend, with the fitted regression line Y = −0.0849X + 0.7994.

Figure 1: Autocorrelation plot for the steel fiber reinforced polymer concrete after 150 cycles (log ACF versus lag; fitted line y = −0.0862x + 0.775, R² = 0.9126).

The total variation in the dependent variable that can be explained by the indepen-
dent variable is represented by the R2 value of 0.9418.

Figure 2: Autocorrelation plot for the GI fiber reinforced polymer concrete after 150 cycles of alternate wetting and drying (log ACF versus lag; fitted line y = −0.0849x + 0.7994, R² = 0.9418).

The autocorrelation of compressive strength data of HDPE fiber reinforced polymer concrete when subjected to sulfate attack utilizing different wetting and drying procedures is shown in Figure 3. A log-log correlogram plot was produced for accessibility by graphing lag k (k = 1, 2, 3, . . .) along the X-axis and log ACF on the Y-axis. The data series shows a substantial trend, with the fitted regression line Y = −0.0867X + 0.8044. The complete variance in the dependent variable that can be explained by the independent variable is denoted by the R² value of 0.933.

Figure 3: Autocorrelation plot for the HDPE fiber reinforced polymer concrete after 150 cycles of alternate wetting and drying (log ACF versus lag; fitted line y = −0.0867x + 0.8044, R² = 0.933).

The autocorrelation of the compressive strength values of PPF reinforced polymer concrete subjected to sulfate attack utilizing alternate wetting and drying is depicted in Figure 4. A log-log correlogram plot was produced for accessibility by graphing lag k (k = 1, 2, 3, . . .) along the X-axis and log ACF on the Y-axis. The data series shows a substantial trend, with the fitted regression line Y = −0.0917X + 0.8087. The R² value of 0.8858 represents the total variance in the dependent variable explained by the independent variable.

Figure 4: Autocorrelation plot for the polypropylene fiber reinforced polymer concrete after 150 cycles of alternate wetting and drying (log ACF versus lag; fitted line y = −0.0917x + 0.8087, R² = 0.8858).

3.1.2 Correlation

The relationship between the compressive strength findings of steel fiber reinforced polymer concrete during the sulfate attack with wetting and drying, before and after the addition of a percentage of polymer, is depicted in Figure 5. Taking the compressive strength before wetting and drying on the X-axis and the compressive strength after wetting and drying on the Y-axis, for varying percentages of added polymer, the scatter points in the xy-plane are positive; the trend line along the data series follows the linear equation Y = 0.9528X − 0.9163 with an R² value of 0.4585, showing a positive correlation. As a result, the two quantities are associated positively.
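The slope, intercept, and R² statistics quoted for these scatter plots can be recomputed from the raw strengths with two NumPy calls; the helper below is a generic sketch with hypothetical inputs, not the authors' exact worksheet.

import numpy as np

def strength_correlation(before, after):
    """Least-squares trend line and R^2 between compressive strengths
    measured before and after the wetting-drying cycles (cf. Figures 5-8)."""
    before = np.asarray(before, dtype=float)
    after = np.asarray(after, dtype=float)
    slope, intercept = np.polyfit(before, after, 1)
    r = np.corrcoef(before, after)[0, 1]
    return slope, intercept, r ** 2

# Hypothetical usage with made-up strengths in MPa:
# slope, intercept, r2 = strength_correlation([78, 80, 82, 84], [74, 77, 79, 81])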

Figure 5: Correlation between the results of steel fiber weight before and after alternate wetting and drying when percentage of polymer added (fitted line y = 0.9528x − 0.9163, R² = 0.4585).

Figure 6 depicts the relationship between the compressive strength results of GI fiber reinforced polymer concrete when subjected to sulfate attack with alternate wetting and drying, before and after the addition of a percentage of polymer. Taking the compressive strength before wetting and drying on the X-axis and the compressive strength after wetting and drying on the Y-axis, for varying percentages of added polymer, the scatter points in the xy-plane are positive, and the trend line along the data series represents the linear equation Y = 1.239X − 24.853 with a correlation R² value of 0.7126, showing a positive correlation between the two variables. This demonstrates that, for the same percentage of added polymer, an increase in compressive strength before wetting and drying is accompanied by a corresponding increase after wetting and drying.
The relationship between the compressive strength findings of HDPE fiber reinforced polymer concrete during sulfate attack with alternate wetting and drying, before and after the addition of a percentage of polymer, is depicted in Figure 7. Taking the compressive strength before wetting and drying on the X-axis and the compressive strength after wetting and drying on the Y-axis, for varying percentages of added polymer, the scatter points in the xy-plane are positive, and the trend line along the data series represents the linear equation Y = 1.333X − 31.884 with a correlation R² value of 0.6595, showing a positive correlation.

Figure 6: Correlation between compressive strength results of GI fiber weight before and after alternate wetting and drying when percentage of polymer added (fitted line y = 1.2391x − 24.853, R² = 0.7126).

Figure 7: Correlation between compressive strength results of HDPE fiber weight before and after alternate wetting and drying when percentage of polymer added (fitted line y = 1.333x − 31.884, R² = 0.6595).

The relationship between the compressive strength findings of PPF reinforced polymer concrete when subjected to sulfate attack with alternate wetting and drying, before and after the addition of a percentage of polymer, is depicted in Figure 8. Taking the compressive strength before wetting and drying on the X-axis and the compressive strength after wetting and drying on the Y-axis, for varying percentages of added polymer, the scatter points in the xy-plane are positive, and the trend line along the data series represents the linear equation Y = 0.8672X + 5.2893 with a correlation R² value of 0.7753, showing a positive correlation.

Figure 8: Correlation between compressive strength results of polypropylene fiber weight before and after alternate wetting and drying when percentage of polymer added (fitted line y = 0.8672x + 5.2893, R² = 0.7753).

4 Conclusion
In this work, an attempt is made to perform a correlation study for concrete subjected to alternate cycles of wetting and drying in terms of compressive strength ratios. Experiments were conducted to obtain the compressive strength by subjecting the concrete to alternate wetting and drying conditions and then testing it. A regression model is used to understand the behavior of the variables in the experimental process. The R² values represent the total variance in the explanatory variable (i.e., compressive strength) explained by the predictor variable (i.e., cycles of alternate wetting and drying). It was also revealed that there is a correlation between the strength values of the various types of fibers prior to and after wetting and drying when a proportion of polymer is added. According to the results, there is a positive association between the wetting and drying cycles and the data series; hence, this method is suggested to compute the effects of alternate wetting and drying cycles on concrete subjected to harsh environmental conditions.

References
[1] Fu C.Q. et al, Chloride penetration into concrete damaged by uniaxial tensile fatigue loading.
Construction Building Materials 2016, 125, 714–723.
[2] Ye H.L. et al, Influence of cracking on chloride diffusivity and moisture influential depth in
concrete subjected to simulated environmental conditions. Constr. Building Materials 2013,
47, 66–79.

[3] Ye H.L. et al, Model of chloride penetration into cracked concrete subject to drying–wetting
cycles. Construction Building Materials 2012, 36, 259–269.
[4] Tixier R., Mobasher B. Modeling of damage in cement-based materials subjected to external sulphate attack. I: Formulation. Journal of Materials in Civil Engineering 2003, 15, 305–313.
[5] Lee S.T. et al, Effect of limestone filler on the deterioration of mortars and pastes exposed to
sulphate solutions at ambient temperature. Cement and Concrete Research 2008, 38, 68–76.
[6] Nie Q. et al, Chemical mechanical and durability properties of concrete with local mineral
admixtures under sulphate environment in. Northwest China Materials 2014, 7, 3772–3785.
[7] Lee B. et al, Influence of α-Calcium sulphate hemihydrate on setting, compressive strength,
and shrinkage strain of cement mortar. Materials 2019, 12, 1–10.
[8] Szweda Z., Ponikiewski T., Katzer J. A study on replacement of sand by granulated ISP slag in
SCC as a factor formatting its durability against chloride ions. Journal of Clean Production
2017, 156, 569–576.
[9] Zhang P. et al, The effect of air entrainment on the mechanical properties’ chloride migration
and microstructure of ordinary concrete and fly ash concrete. Journal of Materials in Civil
2018, 30, 04018265.
[10] Zhang P. et al, Influence of freeze-thaw cycles on capillary absorption and chloride
penetration into concrete. Cement and Concrete Research 2017, 100, 60–67.
[11] Ganjian E., Pouya H.S. The effect of Persian Gulf tidal zone exposure on durability of mixes
containing silica fume and blast furnace slag. Construction and Building Materials 2009, 23,
644–652.
[12] He R. et al, Sulphate corrosion resistance of hybrid fibre reinforced concrete. Bulletin of the
Chinese Ceramic Society 2017, 36, 1457–1463.
[13] Gao R. et al, Experimental study of the deterioration mechanism of concrete under sulphate
attack in wet-dry cycles. China Civil Engineering Journal 2010, 43, 48–54.
[14] Wang Q., Yang D.Y. Influence of the dry–wet circulation on the concrete sulphate attack.
Concrete 2008, 30, 22–24.
[15] Gao J.M. Durability of concrete exposed to sulphate attack under flexural loading and
drying–wetting cycles. Construction and Building Materials 2013, 39, 33–38.
[16] Matsumoto K., Takanezawa T., Ooe M. Ocean tide models developed by assimilating TOPEX/POSEIDON altimeter data into hydrodynamical model: A global model and a regional model around Japan. Journal of Oceanography 2000, 56, 567–581.
[17] Fan Z., Ma Y., Ma Y. Salinized soils and their improvement and utilization in west China. Arid
Zone Research 2001, 18, 181–186.
[18] Rajamma R. Characterisation and use of biomass fly ash in cement-based materials. Journal
of Hard Material 2009, 172, 1049–1060.
[19] ASTM C1012. Standard Test Method for Length Change of Hydraulic-Cement Mortars Exposed to a Sulfate Solution. ASTM International, West Conshohocken, PA, USA, 2010.
[20] Amudhavalli N.K., Poovizhiselvi M. Relationship between Compressive Strength and Flexural
Strength of Polyester Fibres Reinforced Concrete. International Journal of Engineering Trends
and Technology 2017, 45, 158–160.
Kamal Kumar, Amit Sharma
q-Rung orthopair fuzzy entropy measure
and its application in multi-attribute
decision-making
Abstract: Fuzzy set and its extension q-rung orthopair fuzzy set (q-ROFS) are more
effective and attractive tools to express the quantitative complexity during the deci-
sion-making procedure, which are receiving more attention by the researchers for
new research direction in recent years. Keeping the advantages of q-ROFS, this
chapter proposes an entropy measure (EM) to determine the uncertainty of q-ROFS.
The various notable features of the proposed EM are also proved. Then, by using the
proposed EM, a multi-attribute decision-making (MADM) approach has been devel-
oped. A real-life MADM illustration has been considered to illustrate the proposed
MADM process. Comparative studies are also given to show the advantages of the
proposed MADM approach. The proposed approach can also overcome the short-
comings of the existing MADM approaches given in [19–21].

Keywords: fuzzy set, q-rung orthopair fuzzy set, MADM, score function, entropy
measure, aggregating operators

1 Introduction
Growth and development of any individual, society, business, country, and so on depend on suitable decisions. There are lots of decision-making problems in daily real life. Multi-attribute decision-making (MADM) issues are important and common activities of daily life. The most important task for decision-maker(s) (DMks) in handling MADM issues is to choose the appropriate environment for the
given assessments of alternatives for performance toward attributes. Because of
this, the issue becomes more complicated and ambiguous for DMks, who are unable
to provide their evaluations in the form of precise numbers. To take off such types
of strains of DMks, the “fuzzy set” (FS) is introduced by Zadeh [1], and after which
extensions such as “intuitionistic FS” (IFS) [2], “interval-valued IFS” [3], and Py-
thagorean FS (PFS) [4] have been suggested as a useful environment to communi-
cate with complexity. Under these environments, the researchers [5–9] paid more

Kamal Kumar, Department of Mathematics, Amity school of Applied Science, Amity University
Haryana, Gurugram, India, e-mail: [email protected]
Amit Sharma, Department of Mathematics, Amity school of Applied Science, Amity University
Haryana, Gurugram, India, e-mail: [email protected]

https://doi.org/10.1515/9783110716214-008

attention to tackle the real-life MADM issues. Garg and Kumar [10] introduced the
possibility degree method for MADM issues under the IVIFN environment.
During the MADM process, ranking orders (ROs) of alternatives depend on attrib-
ute’s weight. The weights of attributes are important during the aggregation process
because changes in the weights will affect the RO of alternatives. Due to the com-
plexity, in some cases, DMks cannot assign the weights for attributes. To manage
this, entropy measure (EM) is an effective tool which measures the uncertainty of
any data set. In the last decade, EMs for IFSs are introduced by many researchers
[11–17]. Szmidt and Kacprzyk [11] introduced EM to quantify the uncertainty of
IFSs. The EM by using the cosine and cotangent functions for IFSs was introduced
by Wei et al. [12] and Wang and Wang [13], respectively. Exponential function-
based EM for IFSs was presented by Verma and Sharma [14]. Furthermore, none of
the existing EMs of IFSs contain the degree of hesitancy. Other than these, a few
shortcomings of the above-existing EM of IFS were investigated by Liu and Ren [15]
and developed a new EM by containing the IFS hesitance degree. Garg et al. [16]
proposed the generalized EM of degree β and order α. Garg and Kaur [17] proposed
(R, S)-norm novel EM for IFS.
Recently, Yager [18] introduced a new extension of FS known as q-rung orthopair FS (q-ROFS) $\langle\tau, \theta\rangle$ with the constraint conditions $0 \le \tau \le 1$, $0 \le \theta \le 1$, $0 \le \tau^q + \theta^q \le 1$, and $q \ge 1$. IFS and PFS can be generated by setting q = 1 and q = 2 in the constraints of q-ROFS, respectively. It is easily observed that when q increases, then the acceptable
space of the membership grade (MG) and non-membership grade (NMG) will be in-
creased. The q-ROFS provides more flexibility to assess the information about the alternatives. Recently, many researchers have delved deeply into handling DM problems under the q-rung orthopair fuzzy (q-ROF) environment. For instance, Liu et al. [19] defined
the q-ROF weighted extended Bonferroni mean (q-ROFWEBM) aggregating operators
(AOs) and knowledge measure-based EM for the q-ROF value (q-ROFV). Liu and
Wang [20] introduced the q-ROF-weighted averaging (q-ROFWA) and q-ROF weighted
geometric AOs for q-ROFV. Riaz et al. [21] proposed the q-ROF Einstein weighted aver-
aging (q-ROFEWA) AOs for the q-ROF environment. Riaz et al. [22] introduced the Ein-
stein-prioritized weighted averaging AOs for q-ROFV. Liu and Liu [23] defined the BM
AO for q-ROF environment. Garg [24] introduced the possibility degree measure for q-
ROFVs. Khan et al. [25] defined the knowledge measure for q-ROFVs.
In this chapter, we propose a new EM to depict the uncertainty of q-ROFS. The
legality and validity of certain features of the proposed EM have been verified. The
primary objective of defining EM is to assess the weight of the parameters when
they are unknown. We establish a new MADM framework in the q-ROF environment
based on the proposed EM. Real-life illustrative examples are used to evaluate the
developed MADM framework, and ROs are compared to current strategies to show
the proposed strategy’s benefits. The proposed MADM framework can address the
shortcomings of the MADM methods discussed in [19–21], which include the inabil-
ity to classify the ROs of alternatives in certain scenarios.

This chapter is organized as follows: a literature review associated with the q-ROFS is given in Section 2. In Section 3, an EM is constructed for q-ROFS, along with its properties and axiomatic definitions. Section 4 describes the proposed MADM method. In Section 5, an example is presented to demonstrate the proposed MADM technique. Section 6 lists the benefits of the proposed MADM approach. Finally, Section 7 concludes the chapter.

2 Preliminaries
Definition 2.1 (Yager [18]). A q-ROFS $A$ in the universe of discourse $X$ is represented by

$$A = \{\langle x_t, \tau_A(x_t), \theta_A(x_t)\rangle \mid x_t \in X\} \tag{1}$$

where $\tau_A(x_t)$ and $\theta_A(x_t)$ represent the MG and NMG of the element $x_t$ belonging to the q-ROFS $A$, respectively, with $x_t \in X$, $0 \le \tau_A(x_t) \le 1$, $0 \le \theta_A(x_t) \le 1$, and $0 \le \tau_A^q(x_t) + \theta_A^q(x_t) \le 1$. The hesitant degree $\pi_A(x_t)$ of the element $x_t$ belonging to the q-ROFS $A$ is defined as $\pi_A(x_t) = \left(1 - \tau_A^q(x_t) - \theta_A^q(x_t)\right)^{1/q}$, where $q \ge 1$ and $x_t \in X$.

Liu et al. [19] called the pair $\langle\tau_A(x_t), \theta_A(x_t)\rangle$ in the q-ROFS $A = \{\langle x_t, \tau_A(x_t), \theta_A(x_t)\rangle \mid x_t \in X\}$ a q-ROFV.

Definition 2.2 (Liu and Wang [20]). Let $\gamma = \langle\tau, \theta\rangle$ be a q-ROFV, where $0 \le \tau \le 1$, $0 \le \theta \le 1$, and $0 \le \tau^q + \theta^q \le 1$. The score value $S(\gamma)$ of the q-ROFV $\gamma = \langle\tau, \theta\rangle$ is defined as

$$S(\gamma) = \tau^q - \theta^q \tag{2}$$

where $S(\gamma) \in [-1, 1]$.

Definition 2.3 (Liu and Wang [20]). The accuracy value $H(\gamma)$ for the q-ROFV $\gamma = \langle\tau, \theta\rangle$ is defined as

$$H(\gamma) = \tau^q + \theta^q \tag{3}$$

where $H(\gamma) \in [0, 1]$.

Definition 2.4 (Liu and Wang [20]). For any two q-ROFVs $\gamma_1$ and $\gamma_2$:
(i) If $S(\gamma_1) > S(\gamma_2)$, then $\gamma_1 \succ \gamma_2$.
(ii) If $S(\gamma_1) < S(\gamma_2)$, then $\gamma_1 \prec \gamma_2$.
(iii) If $S(\gamma_1) = S(\gamma_2)$, then
    (a) If $H(\gamma_1) > H(\gamma_2)$, then $\gamma_1 \succ \gamma_2$.
    (b) If $H(\gamma_1) < H(\gamma_2)$, then $\gamma_1 \prec \gamma_2$.
    (c) If $H(\gamma_1) = H(\gamma_2)$, then $\gamma_1 = \gamma_2$.
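As a small executable summary of Definitions 2.2-2.4, the following Python sketch computes the score and accuracy values and compares two q-ROFVs; the function names are illustrative only.

def score(tau, theta, q=3):
    """Score value S = tau^q - theta^q (Definition 2.2)."""
    return tau ** q - theta ** q

def accuracy(tau, theta, q=3):
    """Accuracy value H = tau^q + theta^q (Definition 2.3)."""
    return tau ** q + theta ** q

def compare(g1, g2, q=3):
    """Order two q-ROFVs given as (tau, theta) pairs by score, then by
    accuracy (Definition 2.4); returns 1 if g1 > g2, -1 if g1 < g2, else 0."""
    s1, s2 = score(*g1, q), score(*g2, q)
    if s1 != s2:
        return 1 if s1 > s2 else -1
    h1, h2 = accuracy(*g1, q), accuracy(*g2, q)
    return 0 if h1 == h2 else (1 if h1 > h2 else -1)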

3 Entropy measure for q-ROFS


This segment introduces the EM for q-ROFS. Let $\varphi(X)$ be the collection of all q-ROFSs.

Definition 3.1 (Liu et al. [19]). If $A \in \varphi(X)$, then an EM $E: \varphi(X) \to [0, 1]$ fulfills the following properties:
(P1) $E(A) = 0 \Leftrightarrow \tau_A(x_t) = 1$ or $\theta_A(x_t) = 1$, $\forall x_t \in X$;
(P2) $E(A) = 1 \Leftrightarrow \pi_A(x_t) = 1$, $\forall x_t \in X$;
(P3) $E(A) = E(A^c)$;
(P4) If $A_1 \le A_2$, then $E(A_1) \le E(A_2)$.

Definition 3.2. For a q-ROFS $A = \{\langle x_t, \tau_A(x_t), \theta_A(x_t)\rangle \mid x_t \in X\}$, we define the following EM:

$$E(A) = \frac{1}{3n}\sum_{t=1}^{n}\left[\sqrt{\tau_A^q(x_t)\,\theta_A^q(x_t)} + 2\pi_A^q(x_t) + \sqrt{\left(1-\tau_A^q(x_t)\right)\left(1-\theta_A^q(x_t)\right)}\right] \tag{4}$$

where $q \ge 1$.
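A direct transcription of equation (4) into Python is given below as a sketch; the two assertions check the boundary behavior required by properties (P1) and (P2) of Definition 3.1.

import math

def qrofs_entropy(pairs, q=3):
    """Entropy E(A) of equation (4) for a q-ROFS given as (tau, theta) pairs."""
    total = 0.0
    for tau, theta in pairs:
        tq, oq = tau ** q, theta ** q
        piq = 1.0 - tq - oq                  # pi^q = 1 - tau^q - theta^q
        total += math.sqrt(tq * oq) + 2.0 * piq + math.sqrt((1.0 - tq) * (1.0 - oq))
    return total / (3.0 * len(pairs))

# Boundary checks: a crisp element gives E = 0; full hesitancy gives E = 1.
assert qrofs_entropy([(1.0, 0.0)]) == 0.0
assert qrofs_entropy([(0.0, 0.0)]) == 1.0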

Theorem 3.1. The EM $E(A)$ for a q-ROFS $A$ defined in equation (4) satisfies the properties (P1)-(P4) given in Definition 3.1.

Proof. Let $A = \{\langle x_t, \tau_A(x_t), \theta_A(x_t)\rangle \mid x_t \in X\}$ be a q-ROFS.

(P1) We have

$$E(A) = 0 \Leftrightarrow \frac{1}{3n}\sum_{t=1}^{n}\left[\sqrt{\tau_A^q(x_t)\,\theta_A^q(x_t)} + 2\pi_A^q(x_t) + \sqrt{\left(1-\tau_A^q(x_t)\right)\left(1-\theta_A^q(x_t)\right)}\right] = 0$$

$$\Leftrightarrow \sqrt{\tau_A^q(x_t)\,\theta_A^q(x_t)} = 0,\ \pi_A^q(x_t) = 0,\ \text{and}\ \sqrt{\left(1-\tau_A^q(x_t)\right)\left(1-\theta_A^q(x_t)\right)} = 0$$

$$\Leftrightarrow \pi_A^q(x_t) = 0,\ \tau_A^q(x_t) = 0\ \text{or}\ \pi_A^q(x_t) = 0,\ \theta_A^q(x_t) = 0$$

that is, $\theta_A(x_t) = 1$ or $\tau_A(x_t) = 1$ for every $x_t \in X$, since the last radical vanishes only when $\tau_A^q(x_t) = 1$ or $\theta_A^q(x_t) = 1$.

(P2) Since $\sqrt{xy}$ achieves its maximum value $(x + y)/2$ when $x = y$, we have

$$E(A) = 1 \Leftrightarrow \sum_{t=1}^{n}\left[\sqrt{\tau_A^q(x_t)\,\theta_A^q(x_t)} + 2\pi_A^q(x_t) + \sqrt{\left(1-\tau_A^q(x_t)\right)\left(1-\theta_A^q(x_t)\right)}\right] = 3n$$

$$\Leftrightarrow n\theta_A^q(x_t) + 2n\pi_A^q(x_t) + n\left(1 - \tau_A^q(x_t)\right) = 3n \quad \left(\text{with } \tau_A^q(x_t) = \theta_A^q(x_t)\right)$$

$$\Leftrightarrow 2n\pi_A^q(x_t) = 2n \Leftrightarrow \pi_A^q(x_t) = 1$$

(P3) Since $A^c = \{\langle x_t, \theta_A(x_t), \tau_A(x_t)\rangle \mid x_t \in X\}$ and $E$ is symmetric in $\tau_A$ and $\theta_A$, we have

$$E(A) = \frac{1}{3n}\sum_{t=1}^{n}\left[\sqrt{\tau_A^q(x_t)\,\theta_A^q(x_t)} + 2\pi_A^q(x_t) + \sqrt{\left(1-\tau_A^q(x_t)\right)\left(1-\theta_A^q(x_t)\right)}\right]$$

$$= \frac{1}{3n}\sum_{t=1}^{n}\left[\sqrt{\theta_A^q(x_t)\,\tau_A^q(x_t)} + 2\pi_A^q(x_t) + \sqrt{\left(1-\theta_A^q(x_t)\right)\left(1-\tau_A^q(x_t)\right)}\right] = E(A^c)$$

(P4) Construct the function $f(x, y) = \sqrt{xy} + 2(1 - x - y) + \sqrt{(1-x)(1-y)}$, where $x, y \in [0, 1]$ and $x + y \le 1$. We show that when $x \le y$ the function $f(x, y)$ increases with $x$ and decreases with $y$. The partial derivatives of $f(x, y)$ with respect to $x$ and $y$ are

$$\frac{\partial f(x, y)}{\partial x} = \frac{1}{2}\sqrt{\frac{y}{x}} - \frac{1}{2}\sqrt{\frac{1-y}{1-x}} - 2$$

$$\frac{\partial f(x, y)}{\partial y} = \frac{1}{2}\sqrt{\frac{x}{y}} - \frac{1}{2}\sqrt{\frac{1-x}{1-y}} - 2$$

Since $\partial f(x, y)/\partial x \ge 0$ and $\partial f(x, y)/\partial y \le 0$ when $x \le y$, $f(x, y)$ is increasing with $x$ and decreasing with $y$. Thus $f\left(\tau_1^q(x_t), \theta_1^q(x_t)\right) \le f\left(\tau_2^q(x_t), \theta_2^q(x_t)\right)$ when $\tau_2^q(x_t) \le \theta_2^q(x_t)$ and $\tau_1^q(x_t) \le \tau_2^q(x_t)$, $\theta_1^q(x_t) \ge \theta_2^q(x_t)$.

Similarly, $\partial f(x, y)/\partial x \le 0$ and $\partial f(x, y)/\partial y \ge 0$ when $x \ge y$, so $f(x, y)$ is decreasing with $x$ and increasing with $y$. Thus $f\left(\tau_1^q(x_t), \theta_1^q(x_t)\right) \le f\left(\tau_2^q(x_t), \theta_2^q(x_t)\right)$ when $\tau_2^q(x_t) \ge \theta_2^q(x_t)$ and $\tau_1^q(x_t) \ge \tau_2^q(x_t)$, $\theta_1^q(x_t) \le \theta_2^q(x_t)$.

Therefore, if $A_1 \le A_2$, then $\frac{1}{3n}\sum_{t=1}^{n} f\left(\tau_1^q(x_t), \theta_1^q(x_t)\right) \le \frac{1}{3n}\sum_{t=1}^{n} f\left(\tau_2^q(x_t), \theta_2^q(x_t)\right)$. Hence, $E(A_1) \le E(A_2)$.

4 Proposed MADM approach


Adopt $m$ alternatives $A = \{A_1, A_2, \ldots, A_m\}$ and $n$ different attributes $G = \{G_1, G_2, \ldots, G_n\}$. The DMk evaluates the alternative $A_k$ toward the attribute $G_t$ by using the q-ROFV $\tilde\gamma_{kt} = \langle\tilde\tau_{kt}, \tilde\theta_{kt}\rangle$ to construct the DMx $\tilde R = (\tilde\gamma_{kt})_{m\times n}$, where $k = 1, 2, \ldots, m$ and $t = 1, 2, \ldots, n$. The constructed DMx is shown as follows:

$$\tilde R = \begin{array}{c|cccc}
 & G_1 & G_2 & \cdots & G_n \\ \hline
A_1 & \tilde\gamma_{11} & \tilde\gamma_{12} & \cdots & \tilde\gamma_{1n} \\
A_2 & \tilde\gamma_{21} & \tilde\gamma_{22} & \cdots & \tilde\gamma_{2n} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
A_m & \tilde\gamma_{m1} & \tilde\gamma_{m2} & \cdots & \tilde\gamma_{mn}
\end{array}$$

The proposed MADM technique consists of the following steps:

Step 1: By applying equation (5), convert the DMx $\tilde R = (\tilde\gamma_{kt})_{m\times n}$ into the normalized DMx $R = (\gamma_{kt})_{m\times n} = (\langle\tau_{kt}, \theta_{kt}\rangle)_{m\times n}$ as follows:

$$\gamma_{kt} = \begin{cases} \langle\tilde\tau_{kt}, \tilde\theta_{kt}\rangle, & \text{if } G_t \text{ is a benefit-type attribute} \\ \langle\tilde\theta_{kt}, \tilde\tau_{kt}\rangle, & \text{if } G_t \text{ is a cost-type attribute} \end{cases} \tag{5}$$

Step 2: Obtain the EM matrix $D_E = (E(\gamma_{kt}))_{m\times n}$ by calculating the EM for each q-ROFV given in the normalized DMx $R = (\gamma_{kt})_{m\times n}$, where $E(\gamma_{kt}) = \frac{1}{3}\left[\sqrt{\tau_{kt}^q\,\theta_{kt}^q} + 2\pi_{kt}^q + \sqrt{(1-\tau_{kt}^q)(1-\theta_{kt}^q)}\right]$.

Step 3: Compute the weight $\omega_t$ of the attribute $G_t$, $t = 1, 2, \ldots, n$, as follows:

$$\omega_t = \frac{1 - e_t}{n - \sum_{t=1}^{n} e_t} \tag{6}$$

where $e_t = \frac{1}{m}\sum_{k=1}^{m} E(\gamma_{kt})$ and $E(\gamma_{kt})$ is the EM for $\gamma_{kt} = \langle\tau_{kt}, \theta_{kt}\rangle$ given in Step 2.

Step 4: Calculate each alternative $A_k$'s overall performance $Q(A_k)$, where $Q(A_k)$ is defined as follows:

$$Q(A_k) = \sum_{t=1}^{n}\omega_t\left(\tau_{kt}^q - \theta_{kt}^q\right) \tag{7}$$

Step 5: Determine the RO of the alternatives $A_1, A_2, \ldots, A_m$ according to the descending values of $Q(A_k)$, where $k = 1, 2, \ldots, m$.
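Steps 1-5 condense into a short routine; the Python sketch below is one possible implementation of the procedure above, with illustrative names (entropy, madm_rank) that are not part of the original method description.

import math

def entropy(tau, theta, q=3):
    """Per-element EM of equation (4), as used in Step 2."""
    tq, oq = tau ** q, theta ** q
    piq = 1.0 - tq - oq
    return (math.sqrt(tq * oq) + 2.0 * piq
            + math.sqrt((1.0 - tq) * (1.0 - oq))) / 3.0

def madm_rank(matrix, cost_attrs=(), q=3):
    """matrix[k][t] = (tau, theta) for alternative A_k under attribute G_t;
    cost_attrs holds the indices of cost-type attributes.
    Returns the attribute weights (Step 3) and the scores Q(A_k) (Step 4)."""
    # Step 1: normalize cost-type attributes by swapping tau and theta.
    R = [[(th, ta) if t in cost_attrs else (ta, th)
          for t, (ta, th) in enumerate(row)] for row in matrix]
    m, n = len(R), len(R[0])
    # Steps 2-3: column-mean entropies e_t and weights w_t = (1 - e_t)/(n - sum e).
    e = [sum(entropy(*R[k][t], q) for k in range(m)) / m for t in range(n)]
    w = [(1.0 - et) / (n - sum(e)) for et in e]
    # Step 4: Q(A_k) = sum_t w_t (tau_kt^q - theta_kt^q).
    Q = [sum(w[t] * (R[k][t][0] ** q - R[k][t][1] ** q) for t in range(n))
         for k in range(m)]
    return w, Q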

5 Illustrative example
In this section, we adopt an illustrative example from [19] to illustrate the proposed
MADM method.
A US bicycle manufacturer wants to set up business in the Asian market. For this, four prospective locations in different countries $A_1$, $A_2$, $A_3$, and $A_4$ are selected as alternatives. To choose the best location, the company management sets the five attributes $G_1$ (market), $G_2$ (investment cost), $G_3$ (labor characteristics), $G_4$ (infrastructure), and $G_5$ (possibility for further expansion). Assume that DMks use q-ROFVs to evaluate the alternatives and get a DMx $\tilde R = (\tilde\gamma_{kt})_{4\times 5}$ as follows:

$$\tilde R = \begin{array}{c|ccccc}
 & G_1 & G_2 & G_3 & G_4 & G_5 \\ \hline
A_1 & \langle 0.8, 0.1\rangle & \langle 0.6, 0.2\rangle & \langle 0.7, 0.4\rangle & \langle 0.7, 0.6\rangle & \langle 0.6, 0.3\rangle \\
A_2 & \langle 0.7, 0.5\rangle & \langle 0.7, 0.2\rangle & \langle 0.6, 0.4\rangle & \langle 0.6, 0.3\rangle & \langle 0.9, 0.5\rangle \\
A_3 & \langle 0.7, 0.2\rangle & \langle 0.5, 0.4\rangle & \langle 0.6, 0.2\rangle & \langle 0.5, 0.4\rangle & \langle 0.5, 0.5\rangle \\
A_4 & \langle 0.6, 0.3\rangle & \langle 0.7, 0.5\rangle & \langle 0.8, 0.3\rangle & \langle 0.8, 0.2\rangle & \langle 0.9, 0.2\rangle
\end{array}$$

To tackle this example, we use the proposed MADM procedure (setting the value of q = 3) as follows:

Step 1: Since $G_2$ is a cost-type attribute, the normalized DMx $R = (\gamma_{kt})_{4\times 5}$ is obtained by using equation (5) as follows:
$$R = \begin{array}{c|ccccc}
 & G_1 & G_2 & G_3 & G_4 & G_5 \\ \hline
A_1 & \langle 0.8, 0.1\rangle & \langle 0.2, 0.6\rangle & \langle 0.7, 0.4\rangle & \langle 0.7, 0.6\rangle & \langle 0.6, 0.3\rangle \\
A_2 & \langle 0.7, 0.5\rangle & \langle 0.2, 0.7\rangle & \langle 0.6, 0.4\rangle & \langle 0.6, 0.3\rangle & \langle 0.9, 0.5\rangle \\
A_3 & \langle 0.7, 0.2\rangle & \langle 0.4, 0.5\rangle & \langle 0.6, 0.2\rangle & \langle 0.5, 0.4\rangle & \langle 0.5, 0.5\rangle \\
A_4 & \langle 0.6, 0.3\rangle & \langle 0.5, 0.7\rangle & \langle 0.8, 0.3\rangle & \langle 0.8, 0.2\rangle & \langle 0.9, 0.2\rangle
\end{array}$$
  
Step 2: The calculated values of the EM matrix $D_E = (E(\gamma_{kt}))_{4\times 5}$, obtained by using the proposed EM given in equation (4), are as follows:

$$D_E = \begin{pmatrix}
0.5649 & 0.8252 & 0.7061 & 0.6240 & 0.8213 \\
0.6764 & 0.7192 & 0.8047 & 0.8213 & 0.3603 \\
0.7192 & 0.8721 & 0.8252 & 0.8721 & 0.8333 \\
0.8213 & 0.6764 & 0.5762 & 0.5733 & 0.3736
\end{pmatrix}$$

Step 3: By using equation (6), calculate the weights $\omega_t$ of the attributes $G_t$, $t = 1, 2, 3, 4, 5$: $\omega_1 = 0.2053$, $\omega_2 = 0.1529$, $\omega_3 = 0.1833$, $\omega_4 = 0.1870$, and $\omega_5 = 0.2716$.

Step 4: Calculate the overall performance $Q(A_k) = \sum_{t=1}^{5}\omega_t\left(\tau_{kt}^q - \theta_{kt}^q\right)$ for the alternatives $A_k$ $(k = 1, 2, 3, 4)$ and obtain the results as follows:

$Q(A_1) = 0.1993$, $Q(A_2) = 0.2208$, $Q(A_3) = 0.1090$, $Q(A_4) = 0.3844$

Step 5: Since $Q(A_4) > Q(A_2) > Q(A_1) > Q(A_3)$, we get $A_4 \succ A_2 \succ A_1 \succ A_3$, and hence, $A_4$ is the best alternative.
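The numbers above can be checked against the madm_rank sketch from Section 4: feeding it the Example 1 matrix reproduces the weights and scores, with small deviations possible only from the rounding of the printed intermediate values.

matrix = [
    [(0.8, 0.1), (0.6, 0.2), (0.7, 0.4), (0.7, 0.6), (0.6, 0.3)],
    [(0.7, 0.5), (0.7, 0.2), (0.6, 0.4), (0.6, 0.3), (0.9, 0.5)],
    [(0.7, 0.2), (0.5, 0.4), (0.6, 0.2), (0.5, 0.4), (0.5, 0.5)],
    [(0.6, 0.3), (0.7, 0.5), (0.8, 0.3), (0.8, 0.2), (0.9, 0.2)],
]
w, Q = madm_rank(matrix, cost_attrs={1}, q=3)      # G2 (index 1) is cost-type
order = sorted(range(4), key=lambda k: -Q[k])      # expected: A4, A2, A1, A3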

The experimental results for the various MADM methods are compared in Table 1. We can see that Liu et al.'s MADM method [19] using the q-ROFWEBM AO gets the RO "$A_4 \succ A_1 \succ A_3 \succ A_2$," Liu and Wang's MADM method [20] using the q-ROFWA AO gets the RO "$A_4 \succ A_1 \succ A_2 \succ A_3$," Riaz et al.'s MADM method [21] using the q-ROFEWA AO gets the RO "$A_4 \succ A_1 \succ A_2 \succ A_3$," and the MADM method given in this chapter gets the RO "$A_4 \succ A_2 \succ A_1 \succ A_3$." Each method selects $A_4$ as the best location for the task.

Table 1: ROs obtained by the different existing methods for Example 1.

MADM methods                        ROs
Liu et al.'s MADM method [19]       $A_4 \succ A_1 \succ A_3 \succ A_2$
Liu and Wang's MADM method [20]     $A_4 \succ A_1 \succ A_2 \succ A_3$
Riaz et al.'s MADM method [21]      $A_4 \succ A_1 \succ A_2 \succ A_3$
The proposed MADM method            $A_4 \succ A_2 \succ A_1 \succ A_3$

6 Advantages of the proposed MADM approach


In this segment, we consider an MADM situation to show how the proposed MADM framework can overcome the disadvantages of current MADM methods.

Example 2: Let $A_1$, $A_2$, and $A_3$ represent the three alternatives and let $G_1$, $G_2$, and $G_3$ represent the three attributes. Assume that the DMks use q-ROFVs to evaluate the alternatives corresponding to the attributes and get a DMx $\tilde R = (\tilde\gamma_{kt})_{3\times 3}$ as follows:

$$\tilde R = \begin{array}{c|ccc}
 & G_1 & G_2 & G_3 \\ \hline
A_1 & \langle 0.8, 0.1\rangle & \langle 1, 0\rangle & \langle 0.7, 0.4\rangle \\
A_2 & \langle 1, 0\rangle & \langle 0.7, 0.2\rangle & \langle 0.6, 0.4\rangle \\
A_3 & \langle 1, 0\rangle & \langle 0.5, 0.4\rangle & \langle 0.6, 0.2\rangle
\end{array}$$
A3 h1, 0i h0.5, 0.4i h0.6, 0.2i

Step 1: There is no need to normalize since all attributes are of the same kind.
  
Step 2: The calculated values of the EM matrix DE = E γkt 3 × 3 by using the proposed EM given in
equation (4) are given as follows:
0 1
0.5649 0 0.7061
B C
DE = @ 0 0.7192 0.8047 A
0 0.8721 0.8252

Step 3: By using equation (6), calculate the weights $\omega_t$ of the attributes $G_t$, $t = 1, 2, 3$: $\omega_1 = 0.5402$, $\omega_2 = 0.3125$, $\omega_3 = 0.1473$.
Step 4: Calculate the overall performance $Q(A_k) = \sum_{t=1}^{3}\omega_t\left(\tau_{kt}^q - \theta_{kt}^q\right)$ for the alternatives $A_k$ $(k = 1, 2, 3)$ and obtain the results as follows:

$Q(A_1) = 0.6296$, $Q(A_2) = 0.6673$, $Q(A_3) = 0.5899$

Step 5: Since $Q(A_2) > Q(A_1) > Q(A_3)$, we get $A_2 \succ A_1 \succ A_3$, and hence, $A_2$ is the best alternative.
The experimental results of the various MADM methods are compared in Table 2. We can see that Liu et al.'s MADM method [19] using the q-ROFWEBM AO gets the RO "$A_2 = A_1 = A_3$," Liu and Wang's MADM method [20] using the q-ROFWA AO gets the RO "$A_2 = A_1 = A_3$," and Riaz et al.'s MADM method [21] using the q-ROFEWA AO gets the RO "$A_2 = A_1 = A_3$"; these methods have the downside that they cannot distinguish the ROs among the alternatives $A_1$, $A_2$, and $A_3$ in this situation. The proposed MADM method obtains the RO $A_2 \succ A_1 \succ A_3$ of the alternatives, which overcomes the downsides of the existing MADM methods given in [19–21].

Table 2: ROs obtained by the different existing methods for Example 2.

MADM methods                        ROs
Liu et al.'s MADM method [19]       $A_2 = A_1 = A_3$
Liu and Wang's MADM method [20]     $A_2 = A_1 = A_3$
Riaz et al.'s MADM method [21]      $A_2 = A_1 = A_3$
The proposed MADM method            $A_2 \succ A_1 \succ A_3$

7 Conclusion
q-ROFS is the generalization of IFS and PFS, which provides a larger acceptable space for DMks to express their preferences. Therefore, the whole of this chapter works under the q-ROFS environment for handling MADM issues. For this, we have constructed an EM for measuring the fuzziness of a q-ROFS, which helps the DMk measure uncertainty and obtain the attribute weights of MADM issues. Attribute weights play a fundamental role throughout the whole MADM process and directly affect the ROs of alternatives. Therefore, based on the proposed EM, an MADM method under the q-ROF environment is utilized to handle realistic cases and exhibit the efficiency of the developed MADM approach. Based on the computed results, it can be concluded that the proposed MADM framework can resolve the shortcomings of current MADM techniques. In the context of q-ROFVs, the proposed MADM approach offers a practical way to deal with MADM issues. In future, we will develop new applications in other fields like reliability analysis, engineering, military, medical diagnosis, and pattern recognition.

References
[1] Zadeh L.A. Fuzzy sets. Information and Control 1965, 8(3), 338–353.
[2] Atanassov K.T. Intuitionistic fuzzy sets. Fuzzy Sets and Systems 1986, 20(1), 87–96.
[3] Atanassov K., Gargov G. Interval valued intuitionistic fuzzy sets. Fuzzy Sets and Systems
1989, 31(3), 343–349.
[4] Yager R.R. Pythagorean membership grades in multicriteria decision making. IEEE
Transactions on Fuzzy Systems 2013, 22(4), 958–965.
[5] Garg H. Novel intuitionistic fuzzy decision making method based on an improved operation
laws and its application. Engineering Applications of Artificial Intelligence 2017, 60, 164–174.
[6] Garg H., Kumar K. A novel correlation coefficient of intuitionistic fuzzy sets based on the
connection number of set pair analysis and its application. Scientia Iranica 2017, 25(4),
2373–2388.
[7] Garg H., Kumar K. Linguistic interval-valued Atanassov intuitionistic fuzzy sets and their
applications to group decision-making problems. IEEE Transactions on Fuzzy Systems 2019,
27(12), 2302–2311.
[8] Gupta P., Mehlawat M.K., Grover N., Pedrycz W. Multi-attribute group decision making based
on extended TOPSIS method under interval-valued intuitionistic fuzzy environment. Applied
Soft Computing 2018, 69, 554–567.
[9] Liu J.C., Li D.F. Corrections to “TOPSIS-based nonlinear-programming method- ology for multi-
attribute decision making with interval-valued intuitionistic fuzzy sets”[apr 10 299-311]. IEEE
Transactions on Fuzzy Systems 2018, 26(1), 391–391.
[10] Garg H., Kumar K. A novel possibility measure to interval-valued intuitionistic fuzzy set using
connection number of set pair analysis and its applications. Neural Computing and
Applications 2020, 32, 3337–3348.
[11] Szmidt E., Kacprzyk J. Entropy for intuitionistic fuzzy sets. Fuzzy Sets and Systems 2001, 118
(3), 467–477.
[12] Wei C.P., Gao Z.H., Guo T.T. An intuitionistic fuzzy entropy measure based on trigonometric
function. Control and Decision 2012, 27(4), 571–574.
[13] Wang J.Q., Wang P. Intuitionistic linguistic fuzzy multi-criteria decision-making method based
on intuitionistic fuzzy entropy. Control and Decision 2012, 27(11), 1694–1698.
[14] Verma R., Sharma B.D. Exponential entropy on intuitionistic fuzzy sets. Kybernetika 2013, 49
(1), 114–127.
[15] Liu M., Ren H. A new intuitionistic fuzzy entropy and application in multi- attribute decision
making. Information 2014, 5(4), 587–601.
[16] Garg H., Agarwal N., Tripathi A. Generalized intuitionistic fuzzy entropy measure of order α
and degree β and its applications to multi-criteria decision making problem. International
Journal of Fuzzy System Applications (IJFSA) 2017, 6(1), 86–107.
[17] Garg H., Kaur J. A novel (r, s)-norm entropy measure of intuitionistic fuzzy sets and its
applications in multi-attribute decision-making. Mathematics 2018, 6(6), 92.
[18] Yager R.R. Generalized orthopair fuzzy sets. IEEE Transactions on Fuzzy Systems 2017, 25(5),
1222–1230.
[19] Liu Z., Liu P., Liang X. Multiple attribute decision-making method for dealing with
heterogeneous relationship among attributes and unknown attribute weight information
under q-rung orthopair fuzzy environment. International Journal of Intelligent Systems
2018, 33(9), 1900–1928.
[20] Liu P., Wang P. Some q-rung orthopair fuzzy aggregation operators and their applications to
multiple-attribute decision making. International Journal of Intelligent Systems 2018, 33(2),
259–280.

[21] Riaz M., Sałabun W., Farid H.M.A., Ali N., Wątróbski J. A robust q-rung orthopair fuzzy
information aggregation using Einstein operations with application to sustainable energy
planning decision management. Energies 2020, 13(9), 2155.
[22] Riaz M., Athar Farid H.M., Kalsoom H., Pamučar D., Chu Y.M. A robust q-rung orthopair fuzzy
Einstein prioritized aggregation operators with application towards MCGDM. Symmetry 2020,
12(6), 1058.
[23] Liu P., Liu J. Some q-rung orthopair fuzzy Bonferroni mean operators and their application to
multi-attribute group decision making. International Journal of Intelligent Systems 2018, 33
(2), 315–347.
[24] Garg H. A new possibility degree measure for interval-valued q-rung orthopair fuzzy sets in
decision-making. International Journal of Intelligent Systems 2021, 36(1), 526–557.
[25] Khan M.J., Kumam P., Shutaywi M. Knowledge measure for the q-rung orthopair fuzzy sets.
International Journal of Intelligent Systems 2021, 36(2), 628–655.
Soumendra Goala, Palash Dutta, Bornali Saikia
A fuzzy multi-criteria decision-making
approach for crime linkage utilizing
resemblance function under hesitant fuzzy
environment
Abstract: Government crime statistics show that the crime rate rises each year over
the previous one. Serial crimes bring panic among the people, cause disharmony,
and leave psychological scars within the society. Investigating and tracking such
crimes is very often troublesome due to the lack of valid evidence. Another
challenging task is to determine, from a great number of similar crimes, which ones
were committed by the same suspects. Investigators study a bunch of criminal cases,
analyze the linkage of the crimes step by step, and reveal the crimes that were
committed by common suspects. Here, we generalize the resemblance function to
hesitant fuzzy sets. Moreover, we perform the linkage analysis with the help of
multi-criteria decision-making under a hesitant fuzzy environment. With the help of
this method, an analyst will be able to find the degree to which crimes share common
suspects. In addition, a numerical case example on credit card fraud detection is
carried out to show the applicability of the methodology in our daily life.

Keywords: crime linkage analysis, hesitant fuzzy set, resemblance function, multi-
criteria decision-making

1 Introduction
Crime linkage is the practice of recognizing different crimes that were committed
by the same suspects. Crime analysts can directly identify the suspects if the victims
know the identity of their attackers. However, in most cases, the suspects are
completely unknown to the victims. In this situation, proper evidence such as
DNA, fingerprints, and other forensic evidence is important to make the investiga-
tion process trustworthy. Forensic evidence recovered from crime scenes

Soumendra Goala, Department of Mathematics, Dibrugarh University, Dibrugarh 786004, India,


e-mail: [email protected]
Palash Dutta, Department of Mathematics, Dibrugarh University, Dibrugarh 786004, India,
e-mail: [email protected]
Bornali Saikia, Department of Mathematics, Dibrugarh University, Dibrugarh 786004, India,
e-mail: [email protected]

https://doi.org/10.1515/9783110716214-009

is used to target the suspects successfully. But due to a lack of forensic evidence,
the investigation process becomes a complicated task.
It is a fact of basic psychology that each person differs from others in psycho-
logical makeup. A person’s mental state, way of understanding, behavioral
pattern, and subconscious and conscious mind influence his/her daily activities.
For the same reason, a criminal during the crime is influenced by his/her mental
state, way of understanding, behavioral pattern, and subconscious and conscious
mind. It can therefore be concisely stated that the behavioral pattern or behavioral
characteristics of any offender are revealed by his/her actions; as a result,
one always finds some basic similarities in the crimes that the same offenders
commit [1–4].
In crime linkage analysis, an analyst recovers data/information from crimes,
which is then coded, organized properly, and interpreted logically and mathemati-
cally. After that, the offenders are compared based on the evidence collected
from the crimes [4]. However, improper and limited information/data make the in-
vestigation process difficult for the crime analyst, which means that the col-
lected data/information may be uncertain in nature. Moreover, in reality, the crimes
that are committed by the same suspects are not always found to be alike. So, when
evaluating similarities among crimes, uncertainties may be found.
In the current work, crime linkage analysis has been performed on the basis of
hesitant fuzzy multi-criteria decision-making (MCDM) techniques. Here, we also
generalize to hesitant fuzzy sets (HFSs) the idea of the resemblance function [5]
originally developed for intuitionistic fuzzy sets.
In 1965, Zadeh developed fuzzy set theory (FST) [6] as an extension of tradi-
tional set theory to deal with uncertain human behavior. However, it sometimes
shows limitations in handling vague information because uncertainty varies in
character. To overcome those limitations, several extensions have been introduced.
In the literature, the intuitionistic fuzzy set (IFS) [7] is formed by adding both a
membership function (MF) and a non-membership function (NMF), with the sum
of the MF and NMF lying in [0, 1]. These extensions of the fuzzy set manage such
sources of incompleteness or indistinguishable information in different ways.
However, in many real-world problems, it is hard to define a single MF grade for
an element because several values are possible. The HFS [8] was introduced to
handle such situations and is widely applicable in decision-making, medical
diagnosis, pattern recognition, and fuzzy optimization problems.
Similarity measure is an effective tool for describing similarities and dissimilar-
ities between objects under uncertainty through HFSs. Some significant similarity
measures have been developed in literature [9–12].

2 Related works of fuzzy crime analysis

Researchers have used fuzzy mathematics in works on preventing offences, crime
prediction, and crime linkage. In the fuzzy field, a pseudo-outer-product-based
fuzzy neural network was presented [13] that could relate the similarities between
two fingerprints and decide whether they belong to the same individual. Fuzzy
clustering analysis was applied [14] to find crime-prone spots in a city. An intelli-
gent decision support system was introduced [15] to detect crime patterns and the
connection between crime patterns and police duty deployment; it used fuzzy time
series analysis to create a self-organizing map network. A method combining the
AHP approach and a geographical information system was developed to identify
areas with high crime potential [16]. A fuzzy clustering analysis was developed [17]
to find crime patterns from original forensic data. The idea of using fuzzy time
series for the prediction of crimes was developed in [18]. A multi-attribute decision-
making approach was developed [19] to analyze serial high-volume crimes. A fuzzy
clustering technique was developed [20] for criminal profiling to detect and stop
crimes. In [21], properties of crimes in a city were mapped by area using fuzzifica-
tion, and the values obtained after defuzzification were used to detect serial crimes.
In [22], the distinctiveness between two crimes was determined using a hesitant
fuzzy distance measure with pairwise comparison, and a fuzzy MCDM method for
crime linkage was discussed. Recently, the idea of a resemblance function on IFSs
was introduced to analyze crime linkage [5].

3 Problem statement and motivation of the study

Due to the lack of proper evidence, it is sometimes very difficult to identify the
crimes committed by the same offenders from piles of collected crime data. In this
case, an investigator looks for evidence in the crime data that reflects the offender’s
actions during the crime. Every person has a psychological profile that is unique
and different from others. Owing to an offender’s uniqueness in behavioral pattern,
personality, and subconscious mind, he/she tends to leave similar sets of evidence
across a set of crimes. Hence, we can build a decision-making approach that detects
serial crimes by examining the psychological aspect of an offender.
Besides this, the reports taken and descriptions written by different investigators
at different crime scenes vary from one another. We also observe that the descrip-
tions of a crime scene given by eyewitnesses, police, and investigators in initial
reports differ from descriptions collected from other sources, and these are uncertain
in nature. In such cases, the HFS is used as a tool to express crimes in terms of
evidence.

There are many studies in crime linkage analysis with a statistical background,
but few with an FST background. One study compares serial crimes pairwise using
HFSs [22]. Later, a method based on a new resemblance function on IFSs was
developed to eliminate the drawbacks of pairwise comparison [5]. Therefore, we
are motivated to generalize the concept of the resemblance function to the hesitant
fuzzy environment, and to use this concept in crime linkage analysis.

4 Mathematical preliminaries
Definition 1 (fuzzy set [6]).
A fuzzy set A on a universal set X is defined by its MF

μ_A : X → [0, 1]

Definition 2 (hesitant fuzzy set [8]).

An HFS A on a universal set X is defined in terms of a function that returns
a subset of [0, 1] when applied to X. The expression for an HFS was given by
Torra [8] as:

A = {⟨x, h_A(x)⟩ | x ∈ X}

Here, h_A(x) represents the set of different possible membership grades of the
element x ∈ X to the set A, with h_A(x) ⊆ [0, 1]; h_A(x) is termed a hesitant fuzzy
element (HFE). It can quite obviously be noted that the number of values in dif-
ferent HFEs may be different.
For making comparisons, the concept of score functions was introduced.

Definition 3 (score function [23]).

The score function of an HFE h is denoted by s(h) and is defined as

s(h) = (1/l_h) Σ_{γ∈h} γ

where l_h is the cardinality of h.


With the help of score functions, a comparison can be made between two HFEs
h1 and h2:
a) If s(h1) > s(h2), then h1 is superior to h2, denoted by h1 > h2
b) If s(h1) = s(h2), then h1 is indifferent to h2, denoted by h1 = h2

However, some problems arise in the traditional comparison carried out by score
functions alone. To eliminate them, the idea of the deviation function was introduced.

Definition 4 (deviation degree [24]).

The deviation degree σ(h) of an HFE h is defined as

σ(h) = ( (1/l_h) Σ_{γ∈h} (γ − s(h))² )^(1/2)

Using the deviation degree together with the score function, a new idea to compare
HFEs was introduced.

Definition 5 (comparison of HFEs [24]).

Let h1 and h2 be two HFEs, s(h1) and s(h2) be their score functions, and σ(h1) and
σ(h2) be their deviation degrees; then
a) If s(h1) > s(h2), then h1 is superior to h2, denoted by h1 > h2
b) If s(h1) = s(h2):
   i) If σ(h1) = σ(h2), then h1 = h2
   ii) If σ(h1) > σ(h2), then h2 is superior to h1, denoted by h1 < h2
   iii) If σ(h1) < σ(h2), then h1 is superior to h2, denoted by h1 > h2
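
A minimal Python sketch of Definitions 3–5 (our illustration, not part of the cited
works; an HFE is assumed to be a plain list of membership grades in [0, 1]) is:

from math import sqrt

def score(h):
    # Definition 3: s(h) is the arithmetic mean of the grades in the HFE h
    return sum(h) / len(h)

def deviation(h):
    # Definition 4: sigma(h) is the root-mean-square spread around s(h)
    s = score(h)
    return sqrt(sum((g - s) ** 2 for g in h) / len(h))

def compare(h1, h2):
    # Definition 5: compare by score first, then by deviation degree
    s1, s2 = score(h1), score(h2)
    if s1 != s2:
        return ">" if s1 > s2 else "<"
    d1, d2 = deviation(h1), deviation(h2)
    if d1 == d2:
        return "="
    return ">" if d1 < d2 else "<"   # smaller spread wins at equal score

print(compare([0.7, 0.7, 0.9], [0.5, 0.3, 0.3]))   # prints ">"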

5 Resemblance function
Goala and Dutta [5] first introduced the concept of the resemblance function on
IFSs to evaluate the similarity among several IFSs and showed its advantages over
traditional similarity measures. Inspired by this idea, the resemblance function is
here generalized to evaluate similarities among several HFSs.

Definition 6 (resemblance function on HFSs).

Consider

H = { {⟨x, h_A1(x)⟩}, {⟨x, h_A2(x)⟩}, ..., {⟨x, h_An(x)⟩} }

to be a collection of HFSs. A resemblance measure on H can be defined as
R : H → [0, 1] such that

R(H) = (1/n) Σ_{i=1}^{n} ( 1 − | s(h_Ai) − s̄(h_Ai) | )

where s̄(h_Ai) = (1/n) Σ_{i=1}^{n} s(h_Ai) and s(h_Ai) is the score function of the
HFE h_Ai(x).

Example 1: Let us consider three HFSs H1 = {⟨x, {0.1, 0.3}⟩},
H2 = {⟨x, {0.7, 0.7, 0.9}⟩}, and H3 = {⟨x, {0.5, 0.3, 0.3}⟩}.
Then, their resemblance degree is given by

R({H1, H2, H3})
= (1/3) Σ_{i=1}^{3} ( 1 − | s(h_Ai) − s̄(h_Ai) | )
= (1/3) { (1 − |0.44444444 − 0.2|) + (1 − |0.44444444 − 0.76666667|) + (1 − |0.44444444 − 0.36666667|) }
= 0.78518519
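
A short Python sketch of Definition 6 (our illustration; each HFS is assumed to
consist of a single HFE, as in Example 1) reproduces this value:

def score(h):
    return sum(h) / len(h)

def resemblance(hfes):
    # Definition 6: R = (1/n) * sum over i of (1 - |s(h_Ai) - mean score|)
    scores = [score(h) for h in hfes]
    mean = sum(scores) / len(scores)
    return sum(1 - abs(s - mean) for s in scores) / len(scores)

H1, H2, H3 = [0.1, 0.3], [0.7, 0.7, 0.9], [0.5, 0.3, 0.3]
print(round(resemblance([H1, H2, H3]), 8))   # 0.78518519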

Definition 7 (generalization of the resemblance function on HFSs).

Let us consider a set of HFSs H = {H̃1, H̃2, ..., H̃n}, each defined on the universal set
X = {x1, x2, ..., xm}, where H̃j = {⟨xi, h_H̃j(xi)⟩ : xi ∈ X}, j = 1, 2, ..., n, and
i = 1, 2, ..., m.
The resemblance function on H = {H̃1, H̃2, ..., H̃n} can then be defined as:

R({H̃1, H̃2, ..., H̃n}) = Σ_{i=1}^{m} w_i [ (1/n) Σ_{j=1}^{n} ( 1 − | s(h_H̃j(xi)) − s̄(h(xi)) | ) ],  for n ≠ 1
R({H̃1, H̃2, ..., H̃n}) = 1,  for n = 1

where

s̄(h(xi)) = (1/n) Σ_{j=1}^{n} s(h_H̃j(xi)),

s(h_H̃j(xi)) is the score function of the HFE h_H̃j(xi), and w_i is the weight of the
criterion xi, with w1 + w2 + ⋯ + wm = 1.
R({H̃1, H̃2, ..., H̃n}) = 1 for n = 1, since an HFS can be considered similar, or a
resemblance, to itself.

Example 2: For example, let us consider three HFSs:

H1 = {⟨x1, {0.1, 0.3}⟩, ⟨x2, {0.5, 0.3, 0.3}⟩, ⟨x3, {0.7, 0.7, 0.9}⟩, ⟨x4, {0.3, 0.3}⟩}
H2 = {⟨x1, {0.7, 0.7, 0.9}⟩, ⟨x2, {0.7, 0.7, 0.9}⟩, ⟨x3, {0.3, 0.3, 0.1}⟩, ⟨x4, {0.7, 0.5, 0.9}⟩}
H3 = {⟨x1, {0.5, 0.5, 0.3}⟩, ⟨x2, {0.5, 0.3, 0.3}⟩, ⟨x3, {0.7, 0.7, 0.7}⟩, ⟨x4, {0.3, 0.5, 0.3}⟩}

Consider that each criterion is of equal importance, that is, wi = 1/4; i = 1, 2, 3, 4.

Therefore, the resemblance value among H1, H2, and H3 is given by



R({H1, H2, H3}) = Σ_{i=1}^{4} w_i [ (1/3) Σ_{j=1}^{3} ( 1 − | s(h_Hj(xi)) − s̄(h(xi)) | ) ]
= 0.77654321

This value of resemblance gives an overall degree of similarity among H1, H2,
and H3.

5.1 Some theorems on the resemblance function

The resemblance function obeys the following theorems [5].

For a collection of HFSs {H̃1, H̃2, ..., H̃n}:
a) Theorem 1. 0 ≤ R({H̃1, H̃2, ..., H̃n}) ≤ 1
b) Theorem 2. If R({H̃1, H̃2, ..., H̃n}) = 1 for any finite n, then H̃1 = H̃2 = ⋯ = H̃n.
c) Theorem 3. If {H̃1, H̃2, ..., H̃n} = {J̃1, J̃2, ..., J̃n}, then

R({H̃1, H̃2, ..., H̃n}) = R({J̃1, J̃2, ..., J̃n}),

but the converse is not true.


 
Proof of Theorem 1. Consider a set of HFSs H = {H̃1, H̃2, ..., H̃n}, each defined on the
universal set X = {x1, x2, ..., xm}, where H̃j = {⟨xi, h_H̃j(xi)⟩ : xi ∈ X}, j = 1, 2, ..., n,
and i = 1, 2, ..., m, with the resemblance function on H as defined above.
For n = 1, the case is obvious.
We know that

0 ≤ γ ≤ 1 for every γ ∈ h_H̃j(xi)
⇒ 0 ≤ s(h_H̃j(xi)) ≤ 1
⇒ 0 ≤ (1/n) Σ_{j=1}^{n} s(h_H̃j(xi)) ≤ 1
⇒ 0 ≤ s̄(h(xi)) ≤ 1
⇒ 0 ≤ 1 − | s(h_H̃j(xi)) − s̄(h(xi)) | ≤ 1
⇒ 0 ≤ (1/n) Σ_{j=1}^{n} ( 1 − | s(h_H̃j(xi)) − s̄(h(xi)) | ) ≤ 1
⇒ 0 ≤ Σ_{i=1}^{m} w_i (1/n) Σ_{j=1}^{n} ( 1 − | s(h_H̃j(xi)) − s̄(h(xi)) | ) ≤ 1, where Σ_{i=1}^{m} w_i = 1
⇒ 0 ≤ R({H̃1, H̃2, ..., H̃n}) ≤ 1

 
Proof of Theorem 2. Given R({H̃1, H̃2, ..., H̃n}) = 1,

⇒ Σ_{i=1}^{m} w_i (1/n) Σ_{j=1}^{n} ( 1 − | s(h_H̃j(xi)) − s̄(h(xi)) | ) = 1
⇒ (1/n) Σ_{j=1}^{n} ( 1 − | s(h_H̃j(xi)) − s̄(h(xi)) | ) = 1 for every i
⇒ Σ_{j=1}^{n} | s(h_H̃j(xi)) − s̄(h(xi)) | = 0
⇒ s(h_H̃j(xi)) = s(h_H̃k(xi)) for all j and k
⇒ h_H̃j(xi) = h_H̃k(xi) for all j and k

That is, H̃1 = H̃2 = ⋯ = H̃n

   
Proof of Theorem 3. If {H̃1, H̃2, ..., H̃n} = {J̃1, J̃2, ..., J̃n}, then

R({H̃1, H̃2, ..., H̃n})
= Σ_{i=1}^{m} w_i (1/n) Σ_{j=1}^{n} ( 1 − | s(h_H̃j(xi)) − s̄(h(xi)) | )
= Σ_{i=1}^{m} w_i (1/n) Σ_{j=1}^{n} ( 1 − | s(h_J̃j(xi)) − s̄(h(xi)) | )   (∵ H̃j = J̃j ⇒ s(h_H̃j(xi)) = s(h_J̃j(xi)))
= R({J̃1, J̃2, ..., J̃n})

For the converse part, we take a counterexample.
Consider two HFSs H1 = {⟨x, {0.7, 0.9, 0.7}⟩} and H2 = {⟨x, {0.7, 0.7, 0.7}⟩}:

R({H1, H2}) = 0.96666667

Again, consider two HFSs J1 = {⟨x, {0.9, 0.9, 0.9}⟩} and J2 = {⟨x, {0.7, 0.9, 0.9}⟩}:

R({J1, J2}) = 0.96666667

Thus, although R({H1, H2}) = R({J1, J2}), {H1, H2} ≠ {J1, J2}.



5.2 Advantages of the resemblance function over traditional
similarity measures

In decision-making situations, similarities are usually evaluated between two
objects or individuals; for more than two objects, similarity measures are evaluated
pairwise. Human intuition does not work like that: people often compare several
things at once. Suppose some cyber crimes have been executed in a week. Each
crime can be characterized with the help of its properties and can be expressed
using HFSs. With similarity measures, a decision maker can determine the
commonality only between two crimes, so in the whole situation similarity can be
measured only pairwise. But by using the resemblance function, an expert can
determine the degree of similarity among more than two crimes. These issues have
already been addressed in the work of Goala and Dutta [5] on IFSs.
Take the three HFSs from Example 2:

H1 = {⟨x1, {0.1, 0.3}⟩, ⟨x2, {0.5, 0.3, 0.3}⟩, ⟨x3, {0.7, 0.7, 0.9}⟩, ⟨x4, {0.3, 0.3}⟩}
H2 = {⟨x1, {0.7, 0.7, 0.9}⟩, ⟨x2, {0.7, 0.7, 0.9}⟩, ⟨x3, {0.3, 0.3, 0.1}⟩, ⟨x4, {0.7, 0.5, 0.9}⟩}
H3 = {⟨x1, {0.5, 0.5, 0.3}⟩, ⟨x2, {0.5, 0.3, 0.3}⟩, ⟨x3, {0.7, 0.7, 0.7}⟩, ⟨x4, {0.3, 0.5, 0.3}⟩}

By traditional similarity measures, one can evaluate the similarity degree between
H1 and H2, H1 and H3, and H2 and H3, that is, in a pairwise way.
By utilizing the resemblance measure, one can evaluate the similarity degree
among H1, H2, and H3 at once as

R({H1, H2, H3}) = Σ_{i=1}^{4} w_i [ (1/3) Σ_{j=1}^{3} ( 1 − | s(h_Hj(xi)) − s̄(h(xi)) | ) ]
= 0.77654321

Obviously, this is one of the major advantages of the resemblance measure over
traditional similarity measures on HFSs.

6 Methodology
Consider a set of crimes {CR1, CR2, ..., CRn}. Here, we need to identify the crimes
committed by the same suspects. Let X = {X1, X2, ..., Xm} be the actions or behaviors
observed in the crimes; in this case, it is the set of criteria on which the linkage
analysis will be carried out.
The crimes CRj are described with the help of the actions or behaviors of the
suspects observed at the crime scene, and these are represented using hesitant
fuzzy membership grades as

CRj = {⟨Xi, hij⟩ | Xi ∈ X}, where i = 1, 2, ..., m and j = 1, 2, ..., n

and hij is the HFE denoting the degree of presence of the action or behavior Xi of
the offender in the crime CRj.
Now, the resemblance degree is evaluated for every subset of
{CR1, CR2, ..., CRn}. Obviously, a higher resemblance degree for a subset of crimes
implies a higher possibility of similarity among the corresponding crimes. There-
fore, depending upon the resemblance values, an easy ranking of the subsets can
be found.
Then, a threshold value [5, 22] is set for the resemblance degree; any set of
crimes whose resemblance degree is above or equal to this threshold is considered
to be related by the same offenders.
In this approach, one can find crimes that can be considered to be related by,
or shortly said to be committed by, the same offenders.
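
As an illustrative outline only (our sketch with hypothetical names, not the
authors’ implementation), this workflow can be coded in Python, with each crime
stored as a list of HFEs, one per criterion:

from itertools import combinations

def score(h):
    return sum(h) / len(h)

def resemblance(crimes, weights):
    # Weighted resemblance of Definition 7 over the m criteria
    n = len(crimes)
    if n == 1:
        return 1.0
    total = 0.0
    for i, w in enumerate(weights):
        scores = [score(crime[i]) for crime in crimes]
        mean = sum(scores) / n
        total += w * sum(1 - abs(s - mean) for s in scores) / n
    return total

def linked_groups(crimes, weights, delta):
    # All crime subsets (size >= 2) whose resemblance reaches the threshold
    hits = []
    for r in range(2, len(crimes) + 1):
        for combo in combinations(range(len(crimes)), r):
            if resemblance([crimes[k] for k in combo], weights) >= delta:
                hits.append(combo)
    return hits

For the case study below, `crimes` would hold the six HFSs CR1–CR6, `weights`
would be equal criterion weights as in Example 2, and `delta` would be 0.9.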

7 A case study
In this section, a practical example is carried out using the resemblance function
on six credit card frauds expressed in hesitant fuzzy information [22]. The
original study was carried out on 10 credit card frauds where the cards were not
stolen physically but money was stolen by some tricks.
For the frauds, the following three types of evidence are taken:
(a) e1 : Perception of card holder on how the fraud was executed.
(b) e2 : Perception of card provider or issuer on how the fraud was executed.
(c) e3 : Digital evidence.

For the practical example, four methods of credit card fraud are taken as the crite-
ria of the fuzzy MCDM approach:
(1) X1: Extracting information related to the credit card through a phone call,
email, or text message.
(2) X2: Stealing information/data of the credit card and its holder by sending a
computer virus, malware, or Trojan-type application to the victim through
email or text message.
(3) X3: Clickjacking, that is, a click by the card owner on a fraudulent website,
advertisement, or window is redirected to another shopping website or online
payment gateway without the knowledge of the card holder.
(4) X4: Cloning of the credit card while it is swiped for shopping or payment.

For simplicity, the information on the crimes is represented in Table 1:



Table 1: The frauds and their different evidences.

Criteria  Evidence  CR1        CR2        CR3     CR4        CR5        CR6

X1        e1        Very low   High       Medium  Low        High       High
          e2        Low        High       Low     Absent     Very high  High
          e3        Absent     Very high  Low     Low        High       High
X2        e1        Medium     High       Medium  Medium     Very high  High
          e2        Low        Very high  Low     Low        Very high  Very high
          e3        Low        Very high  Low     Very low   Very high  Very high
X3        e1        High       Low        High    High       Medium     Low
          e2        High       Very low   High    Very high  Low        Very low
          e3        Very high  Very low   High    Very high  Low        Very low
X4        e1        Low        High       Low     Low        High       Absent
          e2        Absent     Medium     Medium  Medium     Medium     Medium
          e3        Low        Very high  Low     Low        Very high  High

The membership grades taken for the linguistic variables used in Table 1 are
given in Table 2.

Table 2: Membership grades corresponding to
linguistic variables.

Linguistic variable    Membership grade

No                     0
Very low               0.1
Low                    0.3
Medium                 0.5
High                   0.7
Very high              0.9
Sure                   1

From the information in Table 1 and the membership grades corresponding to the
linguistic variables, the six credit card frauds are represented as HFSs as
follows:

CR1 = {⟨X1, {0.1, 0.3}⟩, ⟨X2, {0.5, 0.3, 0.3}⟩, ⟨X3, {0.7, 0.7, 0.9}⟩, ⟨X4, {0.3, 0.3}⟩}
CR2 = {⟨X1, {0.7, 0.7, 0.9}⟩, ⟨X2, {0.7, 0.7, 0.9}⟩, ⟨X3, {0.3, 0.3, 0.1}⟩, ⟨X4, {0.7, 0.5, 0.9}⟩}
CR3 = {⟨X1, {0.5, 0.5, 0.3}⟩, ⟨X2, {0.5, 0.3, 0.3}⟩, ⟨X3, {0.7, 0.7, 0.7}⟩, ⟨X4, {0.3, 0.5, 0.3}⟩}
CR4 = {⟨X1, {0.3, 0.3}⟩, ⟨X2, {0.5, 0.3, 0.1}⟩, ⟨X3, {0.7, 0.9, 0.9}⟩, ⟨X4, {0.3, 0.5, 0.3}⟩}
CR5 = {⟨X1, {0.7, 0.9, 0.7}⟩, ⟨X2, {0.9, 0.9, 0.9}⟩, ⟨X3, {0.5, 0.5, 0.3}⟩, ⟨X4, {0.7, 0.5, 0.9}⟩}
CR6 = {⟨X1, {0.7, 0.7, 0.7}⟩, ⟨X2, {0.7, 0.9, 0.9}⟩, ⟨X3, {0.3, 0.1, 0.1}⟩, ⟨X4, {0.5, 0.9}⟩}

The resemblance function is now applied to each combination of crimes; the
resemblance values are shown in Tables 3–7:
values are shown in Tables 3–7:

Table 3: Resemblance among all combinations pairwise.

R({CR1, CR2})   .
R({CR1, CR3})   .
R({CR1, CR4})   .
R({CR1, CR5})   .
R({CR1, CR6})   .
R({CR2, CR3})   .
R({CR2, CR4})   .
R({CR2, CR5})   .
R({CR2, CR6})   .
R({CR3, CR4})   .
R({CR3, CR5})   .
R({CR3, CR6})   .
R({CR4, CR5})   .
R({CR4, CR6})   .
R({CR5, CR6})   .



Table 4: Resemblance among all combinations taking three crimes.

R({CR1, CR2, CR3})   .
R({CR1, CR2, CR4})   .
R({CR1, CR2, CR5})   .
R({CR2, CR3, CR4})   .
R({CR2, CR3, CR5})   .
R({CR2, CR3, CR6})   .
R({CR3, CR4, CR5})   .
R({CR3, CR4, CR6})   .
R({CR4, CR5, CR6})   .
R({CR1, CR3, CR4})   .
R({CR1, CR3, CR5})   .
R({CR1, CR3, CR6})   .
R({CR2, CR4, CR5})   .
R({CR2, CR4, CR6})   .
R({CR3, CR5, CR6})   .
R({CR1, CR4, CR5})   .
R({CR1, CR4, CR6})   .
R({CR1, CR5, CR6})   .
R({CR2, CR5, CR6})   .
R({CR1, CR2, CR6})   .

Table 5: Resemblance among all combinations taking four crimes.

R({CR1, CR2, CR3, CR4})   .
R({CR1, CR2, CR3, CR5})   .
R({CR1, CR2, CR3, CR6})   .
R({CR2, CR3, CR4, CR5})   .
R({CR2, CR3, CR4, CR6})   .
R({CR1, CR3, CR4, CR5})   .
R({CR3, CR4, CR5, CR6})   .
R({CR1, CR3, CR4, CR6})   .
R({CR1, CR4, CR5, CR6})   .
R({CR1, CR3, CR5, CR6})   .
R({CR2, CR4, CR5, CR6})   .
R({CR2, CR3, CR5, CR6})   .

Table 6: Resemblance among all combinations taking five crimes.

R({CR1, CR2, CR3, CR4, CR5})   .
R({CR1, CR2, CR3, CR4, CR6})   .
R({CR1, CR3, CR4, CR5, CR6})   .
R({CR2, CR3, CR4, CR5, CR6})   .
R({CR1, CR2, CR4, CR5, CR6})   .
R({CR1, CR2, CR3, CR5, CR6})   .

Table 7: Resemblance among all six crimes.

R({CR1, CR2, CR3, CR4, CR5, CR6})   .

Now, we set a threshold value δ = 0.9 for the resemblance grades. From
the above tables (Tables 3–7), we find the sets of frauds whose corresponding
resemblance values are greater than or equal to the threshold value δ = 0.9:

{R({CR1, CR3}), R({CR1, CR4}), R({CR2, CR5}), R({CR5, CR6}),
R({CR1, CR3, CR4}), R({CR2, CR5, CR6})} ≥ δ

Thus, it can be concluded that the sets of crimes {CR1, CR3, CR4} and
{CR2, CR5, CR6} may each be connected by the same offender. This is similar to the
result of the original study [22].

8 Discussion and conclusion

In this chapter, a study has been carried out on crime linkage (detecting serial
crimes) utilizing a resemblance measure on a set of HFSs. To carry out the linkage
analysis, a fuzzy MCDM approach has been used that depends upon the evidence
of the crimes. In addition, a case study has been done on a collection of six credit
card frauds; the results were found to be similar to those of existing studies, which
shows the reliability of the resemblance measure.
The contribution can be summarized as follows:
(a) A resemblance measure on HFSs has been introduced that can obtain the
similarity degree among any finite collection of HFSs, whereas traditional
similarity measures fail to obtain similarity between more than two fuzzy sets.
(b) Using the resemblance measure on HFSs, an MCDM method has been studied
for crime linkage, and a numerical example has been depicted to show the
applicability of the methodology.

In future, we intend to work on finding new resemblance measures to determine
similar crimes committed by the same offenders.

References
[1] Bennell C., Jones N. Between a ROC and a hard place; A method for linking serial burglaries by
modus operandi. Journal of Investigative Psychology and Offender Profiling 2005, 2, 23–41.
[2] Goodwill A. The development of a filter model for prioritizing suspects in burglary offences.
Psychology Crime & Law 2006, 12, 395–416.
[3] Grubin D., Kelly P., Brunsdon C. Linking serious sexual assaults through behaviour. Home
Office Research Study 215. Home Office, Research, Development and Statistics Directorate,
2001.
[4] Woodhams J., Bull R., Hollin C.R., Kocsis R.N. Case linkage; Identifying crimes committed by
the same offender in Criminal profiling. Humana Press Inc., 2007, 117–133.
[5] Goala S., Dutta P. Intuitionistic fuzzy multi criteria decision making approach to crime linkage
using resemblance function. International Journal of Applied and Computational Mathematics
2019, 5, 112.
[6] Zadeh L.A. Fuzzy sets. Information and Control 1965, 8, 338–353.
[7] Atanassov K. Intuitionistic fuzzy sets. Fuzzy Sets and Systems 1986, 20, 87–96.
[8] Torra V. Hesitant fuzzy sets. International Journal of Intelligent System 2010, 25(6), 529–539.
[9] Xu Z.S., Xia M.M. Distance and similarity measures for hesitant fuzzy sets. Information
Sciences 2011, 52, 2128–2138.
[10] Zhang X., Xu Z. Novel distance and similarity measures on hesitant fuzzy sets with
applications to clustering analysis. Journal of Intelligent & Fuzzy Systems 2015, 28(5),
2279–2296.

[11] Zhou X., Li Q. Some new similarity measures for hesitant fuzzy sets and their applications in
multiple attribute decision making. arXiv preprint arXiv 2012, 1211, 4125.
[12] Yang M.S., Hussain Z. Distance and similarity measures of hesitant fuzzy sets based on
Hausdorff metric with applications to multi-criteria decision making and clustering. Soft
Computing 2019, 23(14), 5835–5848.
[13] Queck C., Tan K.B., Sagar V.K. Pseudo-outer product based fuzzy neural network fingerprint
verification system. Neural Networks 2001, 14(3), 305–323.
[14] Grubesic T.H. On the application of fuzzy clustering from crime hot-spot detection. Journal of
Quantitative Criminology 2006, 22(1), 77–105.
[15] Sheng T.L., Shu C.K., Fu C.T. An intelligent decision support model using FSOM and rule
extraction for crime prevention. Expert System with Application 2010, 37(10), 7108–7119.
[16] Nurul H., Mohd S., Hafiz S. Identification of potential crime area using analytical hierarchy
process (AHP) and geographical information system (GIS). International Journal of Innovative
Computing 2012, 01(1), 15–22.
[17] Stofel K., Cotofrei P., Han D. Fuzzy clustering based methodology for multidimensional data
analysis in computational forensic domain. International Journal of Computer Information
Systems and Industrial Management Applications 2012, 4, 400–410.
[18] Shrivastav A.K., Ekta Applicability of soft computing technique for crime forecasting; a
preliminary investigation. International Journal of Computer Science & Engineering
Technology (IJCSET) 2012, 3(9), 415–421.
[19] Albertetti F., Cotofrei P., Grossrieder L., Ribaux O., Stoffel K. The CriLiM Methodology; crime
linkage with a Fuzzy MCDM approach. European Intelligence and Security Informatics
Conference; 2013.
[20] Adeyiga J.A., Bello A. A review of different clustering techniques in criminal profiling.
International Journal of Advanced Research in Computer Science and Software Engineering
2016, 6(4), 659–666.
[21] Gupta S., Kumar S. Crime detection and prevention using social network analysis.
International Journal of Computer Applications 2015, 126(6), 14–19.
[22] Goala S., Dutta P. A fuzzy multicriteria decision-making approach to crime linkage.
International Journal of Information Technologies and Systems Approach 2018, 11(2), 31–50.
[23] Xia M., Xu Z., Zhu B. Geometric Bonferroni means with their application in multi-criteria
decision making. Knowledge-Based Systems 2013, 40, 88–100.
[24] Chen N., Xu Z., Xia M. Correlation coefficients of hesitant fuzzy sets and their applications to
clustering analysis. Applied Mathematical Modelling 2013, 37(4), 2197–2211.
Kumar Anupam, Pankaj Kumar Goley, Anil Yadav
Integrating novel-modified TOPSIS with
central composite design to model and
optimize O2 delignification process in pulp
and paper industry
Abstract: Oxygen (O2) delignification of brown stock pulp is an overly complex pro-
cess in the paper industry. Several approaches have been adopted in the past to
model and optimize this process. The present study undertakes the modeling and
optimization of O2 delignification (O2D) using a hybrid approach developed by inte-
grating entropy weight-coupled novel-modified technique for order preference by
similarity to ideal solution method, a multi-criteria decision-making method, with
central composite design, a response surface method. A case study of O2D of Melia
dubia kraft pulp (MDKP), with temperature, time, and NaOH dose as input factors
and O2D-PY (pulp yield), O2D-KN (kappa number), O2D-IV (intrinsic viscosity), and
O2D-BR (brightness) as output factors, has been taken as a sample optimization
problem in this work. Analysis of variance and relevant diagnostics were performed
to validate the model. The empirical relation so established was subsequently em-
ployed for optimization by the desirability approach. This hybrid modeling and op-
timization modus operandi furnished 90 °C, 90 min, and NaOH dose of 1.31% as
optimum values of input factors with overall process desirability 0.669. Comparing
this overall process desirability and optimum values of input factors for M. dubia
O2D with those reported in the literature, it was found that the present method pre-
dicted ~8% higher desirability and ~6% lower NaOH dose requirement. Both the
above beneficial observations demonstrate that the proposed optimization method
can prove to be an efficient optimizing scheme for O2D process.

Keywords: multi-criteria decision-making, entropy weights, response surface meth-


odology, oxygen delignification, pulp and paper, Melia dubia

Kumar Anupam, Department of Chemical Engineering, Deenbandhu Chhotu Ram University


of Science and Technology, Murthal, Sonipat 131039, Haryana, India; Chemical Recovery
and Biorefinery Division, Central Pulp and Paper Research Institute, Himmat Nagar, Saharanpur
247001, Uttar Pradesh, India
Pankaj Kumar Goley, Engineering and Maintenance Division, Central Pulp and Paper Research
Institute, Himmat Nagar, Saharanpur 247001, Uttar Pradesh, India
Anil Yadav, Department of Chemical Engineering, Deenbandhu Chhotu Ram University of Science
and Technology, Murthal, Sonipat 131039, Haryana, India

https://doi.org/10.1515/9783110716214-010

1 Introduction
Oxygen (O2) delignification is an intermediary process between pulping and bleach-
ing operations in the pulp and paper industry. It is implemented to remove the re-
siduary lignin existing in pulp post-cooking with the use of oxygen under alkaline
conditions. With its ability to furnish superior yield compared to prolonged pulping,
to cut gross bleaching chemical expenditures and to reduce wastewater treatment ex-
penses by mitigating color, adsorbable organic halides, biological oxygen demand,
and chemical oxygen demand of the effluent; O2 delignification (O2D) has evolved as
one of the most significant eco-friendly processes in the present pulp and paper
manufacturing environment. However, it is a rather complicated chemical process
due to the involvement of complex delignification chemistry, wherein phenolic radical
formation takes place and numerous types of oxidative species are present [1]. Thus,
elucidating, modeling, and optimizing the O2D process in order to determine its oper-
ating conditions is not a trivial task.
There are different approaches that have been used to model and optimize O2D
process. Leh et al. [2] used response surface methodology (RSM) based central com-
posite design (CCD) with 16 factorial design points, 8 axial points, and 4 center
points to optimize O2D of soda pulp of palm oil empty fruit bunch. Anupam et al. [3]
exploited five-level RSM-CCD and desirability approach to model and optimize the
O2D of Melia dubia kraft pulp (MDKP). Vianna et al. [4] coupled chemical kinetic
equations with some classical optimization methods to model, simulate and de-
velop an optimization tool in a commercial simulator for O2D of eucalyptus pulp in
an industrial setup. Euler et al. [5] implemented a machine learning technique
based on Kringing algorithm to obtain a prototypical model for optimization of a
nonlinear industrial O2D process. Many other researchers have reported different
chemical kinetics and mass transfer based sophisticated mathematical models for
optimization and control of O2D processes [6].
This chapter illustrates a framework based on combination of multi-criteria de-
cision-making approach (MCDM) and RSM to model and optimize O2D process. Such
approaches have been implemented in modeling and optimization of various other
processes. Chavan and Patil [7] applied the amalgamation of technique for order
preference by similarity to ideal solution (TOPSIS) and analytic hierarchy process
methods for performing Taguchi L16 based optimization of the machining parame-
ters in the drilling process of spheroidal graphite. Bhat et al. [8] employed GRA
(grey relation analysis) to RSM based CCD experimental matrix with three input and
three output factors to optimize the drilling process of glass fiber reinforced polyes-
ter composite. Bhat et al. [9] integrated TOPSIS with RSM-CCD to formulate an em-
pirical model to analyze the parametric effect on a composite performance grade
and optimize the marine-grade glass fiber reinforced polyester drilling process pa-
rameters. Sharma et al. [10] used an MCDM framework based on RSM-GRA to develop
an empirical equation for electric discharge drilling process in terms of three process

inputs and one gray performance grade as output for further optimization using
teaching and learning-based optimization algorithm. Chakraborty et al. [11] combined
design of experiments (DOE) and TOPSIS to develop metamodels for optimization of
two nontraditional machining processes, namely electrical discharge machining
and wire electrical discharge machining. These authors applied this approach
under different weight criteria to establish regression equations for ranking and
selecting the best alternatives. Numerous other cases have been reported
in the literature for modeling and optimization using hybrid DOE–MCDM techni-
ques [12–14]. These hybrid optimization approaches have been found to be excellent
and robust, to tolerate the inclusion of additional factors, to involve fewer computational
stages, and to supply superior results as compared to traditional statistical and artificial
intelligence techniques such as genetic algorithm and artificial neural network.
MCDM methods belong to the field of operations research. They act as decision
tools for a decision maker to perform the comprehensive evaluation of a set of avail-
able alternatives under a provided set of performance criteria [15]. MCDM methods
have been implemented to solve a variety of decision-making issues encountered in
sundry streams of science and technology [16–18]. There are numerous MCDM meth-
ods that are reported in literature; nevertheless, TOPSIS is extensively accepted due
to its several advantages and satisfactory performance in solving miscellaneous real-
world decision problems [15, 19, 20]. The procedure of MCDM analysis requires as-
signment of proper weights either subjective or objective to the criteria. To abstain
from any particular perception of decision maker, the objective weights are em-
ployed. Entropy method is widely practiced for determining objective weights.
TOPSIS integrated with entropy method has been found to be effective in ranking
the alternatives [21]. TOPSIS was introduced by Hwang and Yoon [22]. With further
research and development, several versions of this method have been proposed by
various researchers [23–26]. The novel-modified TOPSIS (NMT) is one such version
that has been developed based on the concept of minimum distance between the
alternatives and optimized ideal reference point in a plane [27].
Also, the combination of RSM and the desirability approach is extensively used for
optimization of multi-response systems in diverse physicochemical processes whose
parameters can lead to antagonistic responses [28]. RSM is a com-
pendium of statistical and mathematical methods for planning experiments, con-
structing empirical polynomial models in terms of input and output process
parameters, evaluating the parametric effects and predicting the performance of a
process in an economical and authoritative fashion under multivariate system [29,
30]. It includes design types such as CCD, Box–Behnken design, Doehlert design,
three-level factorial design, and optimal design. Although each design has certain
advantages and disadvantages, CCD is one of the most widely used RSM design for
building a second-order polynomial regression model because of its flexibility, effi-
ciency, and ability to run sequentially [31, 32]. The desirability approach of optimiza-
tion is heavily implemented in a multi-response system because of its easy algorithm,

implementation, and flexibility to comprehend the priorities of the decision makers


as compared to other techniques used in optimizing the multiple responses [33]. The
details of this approach are provided elsewhere in the chapter.
With this background, entropy-coupled NMT and CCD have been taken as the MCDM
and RSM tools, respectively, in this study. To the best of our knowledge, NMT has not
been integrated with any DOE protocol for modeling and optimizing any pulp and
papermaking process. Hence, the aim of this chapter is to explore the applicability of
combined CCD-NMT methodology to construct an empirical model of O2D process
for further finding the optimal process conditions using desirability approach. For
this purpose, O2D data of MDKP from our previous study has been used [3]. The rest
of the chapter is structured in different sections as follows: Section 2 presents the
data collection strategy, procedure of implementing NMT, calculation of entropy
weights, RSM-based CCD protocol, and the proposed framework; Section 3 discusses
O2D beneficial and nonbeneficial criteria along with their weights, conversion of
CCD responses to NMT score, execution of CCD with NMT score as output factor,
and optimization using desirability approach; and Section 4 draws relevant con-
clusion from the study and highlights the limitation as well as further extension of
this study.

2 Research methodology
2.1 Proposed optimization framework

The proposed optimization framework comprises of four phases namely opting the
experimental design strategy, implementing MCDM, building model, and perform-
ing optimization. In the first phase, a CCD experimental plan with three indepen-
dent factors (time, temperature, and NaOH charge) and four dependent responses
(pulp brightness, kappa number, intrinsic viscosity, and brightness) associated
with O2D of MDKP was opted in this study. In the second phase, the novel TOPSIS
method and entropy weight method was used to develop a composite score (called
NMT score) by integrating all the abovementioned four responses at each experi-
mental point generated by CCD. In the third phase, the CCD was again used to build
an empirical model by taking the above-said three independent factors as input
while the NMT score as the new output. Finally, in the fourth phase, optimization of
the process was done using the desirability function approach. All the steps in-
volved in the proposed optimization scheme have been represented in Figure 1.

Figure 1: Flowsheet of the proposed optimization scheme.



2.2 Data acquisition

For the first phase of proposed optimization scheme, data related to O2D of MDKP
were acquired from our previous study [3]. Data consisted of results of four responses,
namely, O2D-PY, O2D-KN, O2D-IV, and O2D-BR. Here, O2D-PY denotes the pulp yield,
O2D-KN the kappa number, O2D-IV the intrinsic viscosity, and O2D-BR the brightness
of MDKP obtained after its O2D experiments performed using a five level and three
input factor based RSM-CCD experimental design. Table 1 shows all the values of O2D-
PY, O2D-KN, O2D-IV, and O2D-BR corresponding to each experimental run. This table
served as decision matrix for operating NMT and calculating entropy weights. Here,
O2D-PY, O2D-KN, O2D-IV, and O2D-BR were treated as criteria while the experimental
runs were taken as alternatives. Thus, there were 4 criteria and 20 alternatives.

Table 1: Decision matrix for O2 delignification of MDKP.

Run OD-PY OD-KN OD-IV OD-BR


number (%) (cm/g) (% ISO)

 . . . .


 . . . .
 . . . .
 . . . .
 . . . .
 . . . .
 . . . .
 . . . .
 . . . .
 . . . .
 . . . .
 . . . .
 . . . .
 . . . .
 . . . .
 . . . .
 . . . .
 . . . .
 . . . .
 . . . .

2.3 Execution of novel-modified TOPSIS

The classic TOPSIS concept is that the ideal choice should be far from the negative
ideal solution while, at the same time, close to the positive ideal solution. Applying
classical TOPSIS to a decision-making problem using vector normalization requires
seven steps [34]. The NMT has the same procedure up to the fifth step as involved in the

classical TOPSIS. However, in the sixth and the seventh steps, there is a difference. All
the steps vis-à-vis involved in classical TOPSIS and NMT are mentioned below:

Step 1: The first step involves establishing a decision matrix (DMat). A DMat comprises
alternatives and criteria accompanied by the respective weights as well as the respective
performance values associated with each alternative. A selection problem having
Ai (i = 1, 2, 3, ..., p) alternatives, Cj (j = 1, 2, 3, ..., q) criteria, wtj (j = 1, 2, 3, ..., q)
weights, and xij (i = 1, 2, 3, ..., p; j = 1, 2, 3, ..., q) performance values associated with
the alternatives can be represented as a DMat in the following form:

DMat(xij)p×q, with criteria C1, C2, ..., Cq carrying weights wt1, wt2, ..., wtq:

        C1    C2    C3    ⋯    Cq
  A1  [ x11   x12   x13   ⋯   x1q ]
  A2  [ x21   x22   x23   ⋯   x2q ]
  A3  [ x31   x32   x33   ⋯   x3q ]
  ⋮   [  ⋮     ⋮     ⋮    ⋱    ⋮  ]
  Ap  [ xp1   xp2   xp3   ⋯   xpq ],   i = 1, 2, 3, ..., p; j = 1, 2, 3, ..., q     (1)

Here, the order of the DMat is p × q.

Step 2: The second step is applied for vector normalization of the decision matrix to
produce the normalized decision matrix. The vector normalized decision matrix com-
prises the normalized values (vnvij) of the criteria. These normalized values are
calculated using the following equation:

vnvij = xij / ( Σ_{i=1}^{p} xij² )^(1/2)     (2)

Step 3: In the third step, the weighted normalized decision matrix is constructed.
It encompasses the weighted normalized values (wtnvij) of the criteria, which are
determined using the relation shown below:

wtnvij = wtj × vnvij,  i = 1, 2, 3, ..., p; j = 1, 2, 3, ..., q     (3)

Step 4: The fourth step is intended to identify the positive and negative ideal solu-
tions from the weighted normalized values of the criteria as calculated above. The
positive ideal solution ( + IDS) and the negative ideal solution ( − IDS) of a criterion
depend upon the beneficial and nonbeneficial nature of that criteria. For a criterion
that is considered as beneficial, its maximum weighted normalized value is + IDS while
its minimum weighted normalized value is − IDS. On the other hand, this situation
gets reversed for a nonbeneficial criterion. The beneficial and nonbeneficial criteria

contemplated in this study are described in Section 3.2 of this chapter. The + IDS and
the − IDS can be represented mathematically as:

+IDSj = { (max_i wtnvij | j ∈ J), (min_i wtnvij | j ∈ J*) },  i = 1, 2, 3, ..., p
       = { +IDS1, +IDS2, +IDS3, ..., +IDSq }     (4)

−IDSj = { (min_i wtnvij | j ∈ J), (max_i wtnvij | j ∈ J*) },  i = 1, 2, 3, ..., p
       = { −IDS1, −IDS2, −IDS3, ..., −IDSq }     (5)

Here, J and J* represent the sets of wtnvij for beneficial and nonbeneficial criteria,
respectively.

Step 5: The fifth step calculates the positive separation measures (+SPMi) and the
negative separation measures (−SPMi) of the alternatives. These measures express
the divergence of every alternative from the +IDS and −IDS, respectively. +SPMi and
−SPMi are Euclidean distances and can be formulated by the following mathematical
relations:

+SPMi = ( Σ_{j=1}^{q} (wtnvij − +IDSj)² )^(1/2),  i = 1, 2, 3, ..., p     (6)

−SPMi = ( Σ_{j=1}^{q} (wtnvij − −IDSj)² )^(1/2),  i = 1, 2, 3, ..., p     (7)

Step 6: The sixth step in classical TOPSIS evaluates the relative proximity (rpi) of an
alternative with respect to its best supposed solution. The value of rpi is treated as
the net performance score of an alternative. It can be found using the following
equation:

rpi = −SPMi / (+SPMi + −SPMi),  i = 1, 2, 3, ..., p     (8)

However, the sixth step in NMT involves establishing the (+SPMi, −SPMi) plane, in
which +SPMi represents the x-axis, −SPMi the y-axis, and the coordinate
(+SPMi, −SPMi) an alternative. In this plane, the coordinate (min(+SPMi), max(−SPMi))
is set as the optimized ideal reference point, and the distance between this coordinate
and each alternative is calculated as per the following equation:

rpi = ( [+SPMi − min(+SPMi)]² + [−SPMi − max(−SPMi)]² )^(1/2),  i = 1, 2, 3, ..., p     (9)

Step 7: Finally, in the seventh step of classical TOPSIS, the rpi values obtained in
the sixth step are arranged in decreasing order to give the first rank to the most
apt alternative and the last rank to the least apt alternative. However, in the
seventh step of NMT, the rpi values obtained in the above step are arranged in
ascending order so that the first position goes to the least value and the last
position to the highest value. Thus, the best alternative gets the least value while
the worst alternative gets the highest value.
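
The seven steps can be condensed into a short NumPy sketch (our illustration, not
the authors' code; the decision matrix, weights, and criteria flags below are
hypothetical):

import numpy as np

def nmt_scores(X, weights, benefit):
    X = np.asarray(X, dtype=float)
    V = X / np.sqrt((X ** 2).sum(axis=0))                  # step 2, eq. (2)
    W = V * np.asarray(weights)                            # step 3, eq. (3)
    pos = np.where(benefit, W.max(axis=0), W.min(axis=0))  # step 4, eq. (4)
    neg = np.where(benefit, W.min(axis=0), W.max(axis=0))  # step 4, eq. (5)
    sp = np.sqrt(((W - pos) ** 2).sum(axis=1))             # step 5, eq. (6)
    sn = np.sqrt(((W - neg) ** 2).sum(axis=1))             # step 5, eq. (7)
    # step 6 (NMT), eq. (9): distance from the point (min +SPM, max -SPM)
    return np.sqrt((sp - sp.min()) ** 2 + (sn - sn.max()) ** 2)

# step 7: the smaller the NMT score, the better the alternative
X = [[52.1, 14.2, 880.0, 38.5],      # hypothetical O2D responses
     [50.3, 10.1, 840.0, 41.2],
     [48.7, 12.5, 860.0, 40.0]]
benefit = np.array([True, False, True, True])   # kappa number: nonbeneficial
print(nmt_scores(X, [0.25, 0.25, 0.25, 0.25], benefit))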

2.4 Calculation of entropy weight

The entropy concept of evaluating objective weights was developed by Shannon


[35]. Determination of criteria weights using entropy involves five steps [36]. The
first step requires formation of a decision matrix. This decision matrix is the same
as that deemed in the TOPSIS procedure. The decision matrix comprising of p alter-
natives and q criteria can be expressed by equation (1). The second step incorpo-
rates normalization of the decision matrix for computing the projection values
(projij ). The third step aims at calculation of the entropy of each criterion (entrj ). The
fourth step is implemented to determine the degree of dispersion of each criterion
(dispj ) while the final step, that is, the fifth step evaluates the objective weight of
each criterion (owtj ). The projection values, the entropies, the degrees of dispersion,
and the objective weights are estimated using the following equations:
projij = xij / Σ_{i=1}^{p} xij,  i = 1, 2, 3, ..., p; j = 1, 2, 3, ..., q     (10)

entrj = −(1/ln p) Σ_{i=1}^{p} projij ln(projij),  j = 1, 2, 3, ..., q     (11)

dispj = 1 − entrj,  j = 1, 2, 3, ..., q     (12)

owtj = dispj / Σ_{j=1}^{q} dispj     (13)

It is important to mention here that the values of entropy lie in the range 0 to 1,
and the summation of the objective weights of all the criteria equals 1.
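
A minimal NumPy sketch of equations (10)–(13) (ours; it assumes all performance
values are positive so that the logarithms are defined) is:

import numpy as np

def entropy_weights(X):
    X = np.asarray(X, dtype=float)
    p = X.shape[0]                                          # number of alternatives
    proj = X / X.sum(axis=0)                                # eq. (10)
    entr = -(proj * np.log(proj)).sum(axis=0) / np.log(p)   # eq. (11)
    disp = 1.0 - entr                                       # eq. (12)
    return disp / disp.sum()                                # eq. (13)

X = [[52.1, 14.2, 880.0, 38.5],      # hypothetical decision matrix
     [50.3, 10.1, 840.0, 41.2],
     [48.7, 12.5, 860.0, 40.0]]
print(entropy_weights(X))            # objective weights, summing to 1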

2.5 Central composite design

The CCD plan mentioned in our previous work has been adopted in this study
too [3], with the difference that here the NMT score has been considered as a
single response instead of the previous four responses, namely O2D-PY, O2D-KN,
O2D-IV, and O2D-BR, that were measured after O2D of MDKP. The conversion of the
O2D-PY, O2D-KN, O2D-IV, and O2D-BR data into the NMT score is discussed in
Section 3.2 of the chapter. Table 2 shows the new CCD

experimental matrix consisting of 20 experimental runs with different coded lev-
els of time, min (t), temperature, °C (T), and NaOH charge, %w/w (N) as input
factors and the NMT score as a single output response. The coded levels of t, T, and N
els of time, min (t), temperature, °C (T), and NaOH charge, %w/w (N) as input
factor while NMT score as a single output response. The coded levels of t, T, and N
varied from −1.682 to 1.682 while the actual levels varied from 10–110 min, 83–117 °C,
and 0.32–3.68% w/w, respectively. Thus, a second-order empirical polynomial model
of O2D of MDKP was developed by taking NMT score as a function of time, min (t),
temperature, °C (T), and NaOH charge, %w/w (N). This second-order polynomial ex-
pression can be represented as the following equation:

NMTS = η0 + η1·t + η2·T + η3·N + η11·t² + η22·T² + η33·N² + η12·tT + η13·tN + η23·TN     (14)

Here, NMTS represents the NMT score (the output response); η0 the intercept; η1, η2,
η3, η11, η22, η33, η12, η13, η23 the regression coefficients; t, T, N the linear terms; t²,
T², N² the quadratic terms; and tT, tN, TN the two-factor interaction terms. The
analysis of variance (ANOVA), fit statistics, estimation of coefficients, diagnos-
tics (normal plot of residual, residuals versus predicted, residuals versus run, re-
sidual versus factor), and graphs (2D contours and 3D surfaces) were plotted
using Design Expert 12 software.
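
Although the chapter relies on Design Expert 12, the least-squares core of fitting
equation (14) can be sketched with NumPy as follows (synthetic data only, for
illustration):

import numpy as np

def quadratic_design_matrix(t, T, N):
    # Columns of eq. (14): 1, t, T, N, t^2, T^2, N^2, tT, tN, TN
    return np.column_stack([np.ones_like(t), t, T, N,
                            t**2, T**2, N**2, t*T, t*N, T*N])

rng = np.random.default_rng(0)
t, T, N = rng.uniform(-1.682, 1.682, size=(3, 20))            # synthetic coded runs
y = 0.4 + 0.05 * t - 0.03 * T**2 + rng.normal(0.0, 0.01, 20)  # synthetic NMT score

A = quadratic_design_matrix(t, T, N)
eta, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares estimates of the etas
print(eta.round(3))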

Table 2: CCD plan incorporating NMT score as response.

Run   Time (min)   Temperature (°C)   NaOH (%w/w)   NMT score

   . .


   . .
   . .
   . .
   . .
   . .
   . .
   . .
   . .
   . .
   . .
   . .
   . .
   . .
   . .
   . .
   . .
   . .
   . .
   . .

2.6 Desirability approach of optimization

The numerical optimization program of the Design-Expert 12 software is based on the desirability function approach formulated by Harington [37] and Derringer and Suich [38]. Derringer and Suich recommended three types of piecewise functions, depending on the category of the optimization objective. For every response, one desirability function is created, which is high when the response is at its maximum, minimum, or target level and low otherwise. The functions used for maximization and minimization, respectively, are given by the following equations [39]:
\[
d_{r}^{\max} = \begin{cases} 0, & f_r(x) < A \\[4pt] \left(\dfrac{f_r(x) - A}{B - A}\right)^{\!s}, & A \le f_r(x) \le B \\[4pt] 1, & f_r(x) > B \end{cases} \tag{15}
\]

\[
d_{r}^{\min} = \begin{cases} 0, & f_r(x) > B \\[4pt] \left(\dfrac{f_r(x) - B}{A - B}\right)^{\!s}, & A \le f_r(x) \le B \\[4pt] 1, & f_r(x) < A \end{cases} \tag{16}
\]

Here, A, B, r, and s denote the lower limit, the upper limit, the index of the response being optimized, and the weight value, respectively. Since this study involves only maximization and minimization cases, only the functions associated with them are shown above. It can be seen from the above equations that the desirability of every response lies in the range 0 to 1. The numerical optimizer combines the individual desirabilities into one numeric value and finds the greatest overall desirability. If there are R desirability functions, say d_1, d_2, ..., d_R, they can be combined into an overall desirability function (D) using the geometric mean as follows:
\[
D = \left(\prod_{r=1}^{R} d_r\right)^{1/R} \tag{17}
\]
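A minimal Python sketch of equations (15)–(17) is given below; the numeric arguments in the final line are illustrative only.

```python
import numpy as np

def d_max(f, A, B, s=1.0):
    """Desirability for a response to be maximized, equation (15)."""
    return np.clip((f - A) / (B - A), 0.0, 1.0) ** s

def d_min(f, A, B, s=1.0):
    """Desirability for a response to be minimized, equation (16)."""
    return np.clip((f - B) / (A - B), 0.0, 1.0) ** s

def overall_D(ds):
    """Overall desirability: the geometric mean of equation (17)."""
    ds = np.asarray(ds, dtype=float)
    return ds.prod() ** (1.0 / ds.size)

# Illustrative values only:
print(overall_D([d_max(75.0, 50.0, 100.0), d_min(0.05, 0.01, 0.12)]))
```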

3 Results and discussion


3.1 O2 delignification criteria weights

The weight of a criterion plays an authoritative role in a decision-making process: it conveys the comparative significance of that criterion among the set of criteria considered within the decision matrix under investigation. This inherent primary information is transmitted to the decision maker for making an effective decision and thus influences the final ranking sequence of the alternatives [40]. Since each criterion has its own effect on
an alternative, specific information is associated with it, and hence determining its proper weight becomes crucial in an MCDM framework. This study considers objective weights of the criteria calculated according to the entropy method; objective weights are independent of decision-maker preference or bias and consider only the criteria as sources of information [41].
The entropy method measures the randomness confined in the decision-making unit and promptly produces a group of weights for the criteria on the basis of the mutual disparity of the distinct criteria levels of the alternatives, first for the individual criteria and afterward for all criteria simultaneously [41]. The larger the entropy associated with a criterion, the smaller its weight and the lesser its influence in the decision-making process [42]. In other words, the larger the dispersion associated with a criterion, the higher its weight [43]. The dispersion exemplifies the intrinsic contrast intensity of the criterion [23].

Table 1 served as the decision matrix for the calculation of the entropy, dispersion, and weights of the criteria considered for O2D of MDKP. The criteria considered were O2D-PY, O2D-KN, O2D-IV, and O2D-BR; these are the properties of O2-delignified MDKP. The projection and ln values required in the calculation of the entropy weights are shown in Table 3, and the entropy, degree of dispersion, and objective weight of each criterion are given in Table 4. The degree of dispersion is the highest for kappa number; hence, this criterion is the most significant among all the criteria considered for O2D, as is also proclaimed by its highest objective weight. The degrees of dispersion and the objective weights of the criteria follow the order kappa number > intrinsic viscosity > brightness > yield, while the entropy follows the opposite trend: the higher the degree of dispersion and the objective weight, the lower the entropy.

Table 3: Projection and ln values during calculation of entropy weights.

Run | Projection values (O2D-PY, O2D-KN, O2D-IV, O2D-BR) | ln values (O2D-PY, O2D-KN, O2D-IV, O2D-BR)
[Body of 20 runs; the numeric entries are not recoverable from this copy of the source.]

Table 4: Entropy, dispersion, and weights of various O2 delignification criteria.

Criteria   | O2D-PY | O2D-KN | O2D-IV | O2D-BR
Entropy    | –      | –      | –      | –
Dispersion | –      | –      | –      | –
Weight     | 0.0056 | 0.5621 | 0.3365 | 0.0958

(The weight row restates the objective weights reported in Section 4; the entropy and dispersion entries are not recoverable from this copy of the source.)

3.2 Converting CCD responses to NMT score

The decision matrix consisting of the O2D-PY, O2D-KN, O2D-IV, and O2D-BR data is illustrated in Table 1. The normalized and the weighted normalized values of the O2D criteria, obtained using equations (2) and (3), respectively, are given in Table 5. MCDM analysis requires identification of the criteria as beneficial or nonbeneficial: a beneficial criterion is one whose higher value is advantageous, while a nonbeneficial criterion is one whose lower value is favorable. In TOPSIS, these designations come into play when the positive and negative ideal solutions are determined. Among all the pulping criteria considered in this study, kappa number is a nonbeneficial criterion while the remaining are beneficial criteria: from the papermaking point of view, higher values of O2D-PY, O2D-IV, and O2D-BR are desired, while lower values of O2D-KN are sought.
The positive and the negative ideal solutions of each criterion were identified from the weighted normalized values, depending upon their beneficial or nonbeneficial nature. The positive and negative ideal solutions identified from Table 5 are: IS^+_YI = 0.0015, IS^−_YI = 0.0013; IS^+_KN = 0.0288, IS^−_KN = 0.1790; IS^+_IV = 0.1012, IS^−_IV = 0.0304; and IS^+_BR = 0.0289, IS^−_BR = 0.0163. The positive separation measures, the negative separation measures, and the NMT score for each experimental run, evaluated using equations (6), (7), and (9), respectively, are also listed in Table 5. It can be observed that run 2 has the lowest NMT score and hence ranks first. Thus, the process conditions corresponding to run 2, that is, time = 60 min, temperature = 117 °C, and NaOH dose = 2% w/w, can be considered the most suitable operating conditions for O2D of MDKP. However, these optimal conditions are not close to the previously reported optimum values; hence, CCD was further applied to the NMT score, as explained in Section 3.3.
Table 5: Calculation of NMT score from CCD data.

Run | NDM (O2D-PY, O2D-KN, O2D-IV, O2D-BR) | WNDM (O2D-PY, O2D-KN, O2D-IV, O2D-BR) | PSM | NSM | NMTS
[Body of 20 runs; the numeric entries are not recoverable from this copy of the source.]

NDM is normalized decision matrix, WNDM is weighted normalized decision matrix, PSM is positive separation measure, NSM is negative separation measure, NMTS is NMT score.

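The chapter's score can be sketched in Python as follows. This is a minimal sketch, not the authors' implementation: it assumes vector normalization for equations (2) and (3), and it assumes that the NMT score of equation (9) is the MTOPSIS ranking index of Ren et al. [27], that is, the distance of each run's (PSM, NSM) pair from the best attainable pair (smallest PSM, largest NSM), so that a lower score is better. The weights restate the entropy weights reported in Section 4.

```python
import numpy as np

def nmt_scores(X, w, beneficial):
    """Sketch of the NM-TOPSIS score for each row of the decision matrix X.
    Assumes vector normalization and the MTOPSIS ranking index of Ren
    et al. [27]: distance from the best attainable (PSM, NSM) pair."""
    R = X / np.sqrt((X ** 2).sum(axis=0))        # normalized decision matrix
    V = R * w                                    # weighted normalized matrix
    pos = np.where(beneficial, V.max(axis=0), V.min(axis=0))  # positive ideal
    neg = np.where(beneficial, V.min(axis=0), V.max(axis=0))  # negative ideal
    psm = np.sqrt(((V - pos) ** 2).sum(axis=1))  # positive separation measure
    nsm = np.sqrt(((V - neg) ** 2).sum(axis=1))  # negative separation measure
    return np.sqrt((psm - psm.min()) ** 2 + (nsm - nsm.max()) ** 2)

# Criteria order: O2D-PY, O2D-KN, O2D-IV, O2D-BR (only kappa number is
# nonbeneficial); the weights restate the entropy weights of Section 4.
w = np.array([0.0056, 0.5621, 0.3365, 0.0958])
beneficial = np.array([True, False, True, True])
# scores = nmt_scores(X, w, beneficial)  # X: the 20 x 4 matrix of Table 1
```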

3.3 CCD applied to NMT score

Table 2 illustrates the new CCD matrix consisting of time, temperature, and NaOH dose as input variables and the NMT score as the single output response. As can be seen in Table 5, the NMT score ranges from a minimum of 0.0123 to a maximum of 0.1221, a max-to-min ratio of 9.9587. This ratio needs to be checked to decide whether a transformation of the response data is necessary: a ratio >10 usually indicates that a transformation is required. Since the ratio is ≤10 in the present study, no transformation was applied to the NMT score, and the equations were developed with the transformation tab of the Design-Expert 12 software set to “None.”
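The ratio check itself is a one-liner; the snippet below uses only the reported minimum and maximum (the exact ratio of 9.9587 differs slightly because these displayed values are rounded).

```python
nmts_min, nmts_max = 0.0123, 0.1221   # reported extremes of the NMT score
ratio = nmts_max / nmts_min           # about 9.93 with these rounded values
print(ratio > 10)                     # False: no response transformation needed
```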

3.3.1 Model equation

The software suggested a quadratic model for the NMT score in terms of the t, T, and N set during O2D of MDKP. Two types of quadratic models were developed: one in coded factors and the other in actual factors. The final equations for the coded and the actual factors, respectively, are:

NMTS = 0.0962 − 0.0093t − 0.0267T − 0.0172N + 0.0084tT

− 0.0078tN − 0.0095TN − 0.0073t2 − 0.0125T 2 (18)

− 1.3042 × 10 − 5 N 2

NMTS = − 0.9184 − 0.0016t + 0.0226T + 0.0934N + 2.7884 × 10 − 5 tT

− 0.0003tN − 0.0009TN − 8.1273 × 10 − 6 t2 − 0.0001T 2 (19)

− 1.3043 × 10 − 5 N 2
Both equations (18) and (19) can be used to predict the NMT score at given levels of time, temperature, and NaOH charge. However, for equation (18) the levels must be specified in coded terms, while for equation (19) they must be given in the original units of each independent variable; the high and low levels of each independent variable are coded as +1 and −1, respectively. Note also that equation (18) can be used to compare the relative effects of the factors, but equation (19) should not be used for this purpose: its coefficients are scaled to correspond to the units of each factor, and its intercept is not centered in the design space.
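As a quick illustration, equation (18) can be evaluated directly in Python for any coded factor setting; the function below simply transcribes the reported coefficients.

```python
def nmts_coded(t, T, N):
    """NMT score from the coded-factor model of equation (18);
    each coded factor lies in [-1.682, 1.682]."""
    return (0.0962 - 0.0093 * t - 0.0267 * T - 0.0172 * N
            + 0.0084 * t * T - 0.0078 * t * N - 0.0095 * T * N
            - 0.0073 * t**2 - 0.0125 * T**2 - 1.3042e-5 * N**2)

print(nmts_coded(0, 0, 0))   # 0.0962, the model value at the centre point
```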
Table 6 shows the coefficient estimates of equation (18). A coefficient estimate represents the expected change in response per unit change in factor value when all remaining factors are held constant. The intercept of an orthogonal design is the overall mean response of all runs, and each coefficient is an adjustment around that mean according to the factor setting. If the factors are orthogonal, the VIF is 1; a VIF > 1 indicates multicollinearity, and the larger the VIF, the stronger the correlation among the factors. As an approximate rule, a VIF of less than 10 is acceptable.

Table 6: Coefficient in terms of coded factors.

Factor | Coefficient estimate | df | Standard error | 95% CI low | 95% CI high | VIF
Rows: Intercept, t-Time, T-Temp, N-NaOH, tT, tN, TN, t², T², N²
[Numeric entries are not recoverable from this copy of the source.]

df is degree of freedom, CI is confidence interval, VIF is variance inflation factor.

Table 7 shows the ANOVA for the quadratic model in coded factors. A model F value of 26.66 indicates that the model is significant, and a model p-value of <0.0001 means that there is only a 0.01% chance that an F value this large could occur due to noise. A p-value <0.0500 indicates that a model term is significant; here, t, T, N, tT, tN, TN, t², and T² are significant model terms while N² is not. If a model involves many insignificant terms (excluding those required to support hierarchy), eliminating them may improve the model. The lack-of-fit F value of 1.00 shows that the lack of fit is not significant relative to the pure error; there is a 49.90% chance that a lack-of-fit F value this large could occur due to noise. A nonsignificant lack of fit is desirable for an adequate model, and this nonsignificant lack of fit confirms the satisfactory relation between the NMT score and time, temperature, and NaOH charge.
The fit statistics reveal that the standard deviation, mean, and coefficient of variation of the model are 0.0090, 0.0827, and 10.92%, respectively; the coefficient of variation is the standard deviation expressed as a percentage of the mean. The model R², adjusted R², and predicted R² are 0.9600, 0.9240, and 0.8175, respectively, and the adequate precision of the model is 17.3562. The predicted R² is in sufficient accord with the adjusted R², that is, the difference between the two is <0.2. Adequate precision measures the signal-to-noise (S/N) ratio, for which a value >4 is appropriate; the S/N ratio of 17.356 in the present case indicates an ample signal. Hence, this model can be used to navigate the design space. The positive and negative signs of the regression coefficients in equation (18) indicate, respectively, direct and inverse proportionality of a term with the NMT score. All the coefficients are negative except that of the interaction term tT; that is, the NMT score for O2D decreases as t, T, N, t², T², N², tN, and TN increase, and increases as tT increases.
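The reported adjusted R² can be verified from the reported R² with the usual formula, taking n = 20 runs and k = 9 model terms (a worked check, not an output of the software).

```python
R2, n, k = 0.9600, 20, 9                       # reported R², runs, model terms
R2_adj = 1 - (1 - R2) * (n - 1) / (n - k - 1)  # standard adjusted-R² formula
print(round(R2_adj, 4))                        # 0.9240, matching the report
print(R2_adj - 0.8175 < 0.2)                   # True: within 0.2 of predicted R²
```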

Table 7: ANOVA for quadratic model.

Source | Sum of squares | df | Mean square | F-value | p-value | Remarks
Rows: Model (F = 26.66, p < 0.0001, significant), t-Time, T-Temp, N-NaOH, tT, tN, TN, t², T², N², Residual, Lack of fit (F = 1.00, p = 0.4990, not significant), Pure error, Cor total
[Remaining numeric entries are not recoverable from this copy of the source; the Model and lack-of-fit statistics are restated from the text.]

df is degree of freedom.

3.3.2 Model diagnostics

The model diagnostics in the present study involved plotting the normal % probability versus the externally studentized residuals, and the externally studentized residuals versus the run number, the predicted response, time, temperature, and NaOH charge. These plots are shown in Figure 2(a–f), respectively. The normal plot of residuals exhibits whether or not the residuals follow a normal distribution and thus a straight line; it can be seen in Figure 2(a) that the residuals follow a straight line except for some scatter. The plot of externally studentized residuals against the experimental run number examines the lurking variables that might have affected the output while the experiments were being performed. The graph of the residuals against the ascending predicted response values checks the hypothesis of constant variance, while the plot of the residuals against any input parameter examines whether the variance not described by the model differs for different levels of that parameter. Further, in Figure 2(b–f), no ordered pattern of residuals can be observed; the residuals are randomly scattered, evenly distributed across the x-axis, and lie between ±3.00 [44]. All these diagnostics confirm that the formulated regression model for the O2D of MDKP in terms of the NMT score, time, temperature, and NaOH charge is well fit, adequate, and reliable for illuminating the performance of this process.

Figure 2: Diagnostics: (a) normal plot of residuals, (b) residuals versus run numbers, (c) residuals versus predicted, (d) residuals versus time, (e) residuals versus temperature, and (f) residuals versus NaOH dose.

3.3.3 Model graphs

3D surface and 2D contour plots were developed to interpret the effects of time, temperature, and NaOH charge on the NMT score; Figure 3(a–c) shows the surface plots while Figure 3(d–f) illustrates the contour plots. The surface plots indicate that the NMT score decreases with increasing time, temperature, and NaOH dose during O2D; thus, a low NMT score signifies a better degree of O2D. This is consistent with the O2D chemistry described in the literature. It can also be noted that temperature shows the steepest slope, followed by the NaOH charge and then time. The steeper the slope, the greater the magnitude of a factor's effect on the process, implying that the contribution of temperature toward O2D is the greatest, that of time the least, and that of NaOH charge in between. The geometry of the contour plots indicates the relevance of the joint interaction between the process input variables: high ellipticity of the contours indicates better interaction among the input variables. From the shape of the contours displayed in Figure 3(d–f), it can be said that there are significant two-way interactions between time, temperature, and NaOH charge, as is also evident from the p-values of the tT, tN, and TN terms in Table 7.

Figure 3: (a, b, and c) Three-dimensional response surfaces; (d, e, and f) 2D contour plots.

3.4 Optimization using desirability approach

The numerical optimization features of the Design-Expert 12 software were used for finding the optimum values of the temperature, time, NaOH charge, and NMT score. This facility is based on the concept of desirability explained in Section 2.6. This method of optimization was adopted in the present work because of "its simplicity, availability in the design expert software, relatively low computational cost, fast convergence, flexibility in weighting and ability to assign importance to the individual responses" [45].
Under this concept, a goal is fixed for each of the independent and dependent parameters. The goals provided in this facility are maximize, minimize, target, in range, equal to, none, and Cpk. The first four goals are available for both the independent and the dependent parameters; the goal "equal to" is available only for the independent parameters, while the goals "none" and "Cpk" are exclusive to the dependent parameters. The goal for the temperature, the time, and the NaOH
charge was set as minimum, maximum, and minimum, respectively; these goals were borrowed from Anupam et al. [3]. The goal for the NMT score was fixed as minimum since, in the NM-TOPSIS method, the lowest score is considered the best [27].
The numerical optimization facility also provides options to regulate the weight, importance, and minimum and maximum values of each factor involved in the process. The goal, lower limit, upper limit, lower weight, upper weight, and importance considered in this work are shown in Table 8. All the goals have equal weights and equal importance; by default, the equal weight and the equal importance are set at 1 and 3, respectively. The characteristics of a goal can be changed by changing its weight or importance. This optimization approach combines the individual desirabilities (d_i) into an overall desirability function (D(x)) with the purpose of maximizing this function. Implementing the desirability method, the objective function of the present optimization problem can be framed as
\[
\left\{
\begin{aligned}
& \text{Maximize } D = d_{\mathrm{NMTS}} \text{ such that} \\
& 30\ \text{min} \le t \le 90\ \text{min} \\
& 90\ ^{\circ}\mathrm{C} \le T \le 110\ ^{\circ}\mathrm{C} \\
& 1\% \le N \le 3\%
\end{aligned}
\right. \tag{20}
\]

Here,

\[
d_{\mathrm{NMTS}} = 1 - \left(\frac{\mathrm{NMTS}(t, T, N) - \mathrm{NMTS}_L}{\mathrm{NMTS}_U - \mathrm{NMTS}_L}\right)^{\!s} \tag{21}
\]

NMTS(t, T, N) is the response surface model represented in equation (18); s is the weight factor; NMTS_L and NMTS_U stand for the lower and upper limits, respectively.
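A hedged Python sketch of this optimization is given below. It maximizes d_NMTS of equation (21) over the bounds of equation (20) using SciPy's differential evolution, taking the actual-unit model of equation (19) so that the bounds apply directly, s = 1, and the response limits of Table 8 (an assumption). Design-Expert's optimizer is more elaborate, so the numbers need not match its output exactly.

```python
import numpy as np
from scipy.optimize import differential_evolution

def nmts_actual(x):
    """Response surface in actual units, equation (19):
    x = (t in min, T in deg C, N in % w/w)."""
    t, T, N = x
    return (-0.9184 - 0.0016 * t + 0.0226 * T + 0.0934 * N
            + 2.7884e-5 * t * T - 0.0003 * t * N - 0.0009 * T * N
            - 8.1273e-6 * t**2 - 0.0001 * T**2 - 1.3043e-5 * N**2)

NMTS_L, NMTS_U, s = 0.0123, 0.1221, 1.0   # limits per Table 8 (assumed)

def neg_desirability(x):
    """Negative of d_NMTS, equation (21), for a minimizer."""
    ratio = np.clip((nmts_actual(x) - NMTS_L) / (NMTS_U - NMTS_L), 0.0, 1.0)
    return -(1.0 - ratio ** s)

res = differential_evolution(neg_desirability,
                             bounds=[(30, 90), (90, 110), (1, 3)],
                             seed=0)
print(res.x, -res.fun)   # candidate optimum (t, T, N) and its desirability
```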

Table 8: Constraints for optimization.

Name      | Goal     | Lower limit | Upper limit | Lower weight | Upper weight | Importance
Time      | Maximize | 30          | 90          | 1            | 1            | 3
Temp      | Minimize | 90          | 110         | 1            | 1            | 3
NaOH      | Minimize | 1           | 3           | 1            | 1            | 3
NMT score | Minimize | 0.0123      | 0.1221      | 1            | 1            | 3

(Limits for time, temperature, and NaOH follow equation (20); the NMT score limits are its observed minimum and maximum; weights and importance are the defaults stated above.)

With the above goals, the optimum values of the temperature, time, NaOH dose, and NMT score were found to be 90 °C, 90 min, 1.31%, and 0.0960, respectively. The individual desirabilities of temperature, time, NaOH dose, and NM-TOPSIS score were estimated to be 1, 1, 0.843, and 0.238, respectively, while the overall desirability of the process was 0.669. These values depend on the closeness of the lower and upper limits to the actual optimum. The influence of time, temperature, and NaOH dose on the desirability function is displayed in Figure 4(a–c).

Figure 4: (a, b, and c) Effect of time, temperature, and NaOH charge on desirability.

Here, the overall desirability decreases with increasing temperature and NaOH dose, while it increases with increasing time. Figure 5 displays the ramps of the numerical optimization, which present a pictorial view of the optimal solution for the independent and dependent factors: the optimal values of the independent parameters are shown with red spheres, while those of the dependent parameters are shown with blue spheres.
Figure 5: Ramp for the numerical optimization of O2 delignification process.

3.5 Comparison with literature

The O2D optimization problem considered in this study has also been attempted previously through an RSM-CCD and desirability approach [3]. Hence, the results obtained through the novel optimization framework presented in this study were compared with those reported in the literature to judge its adequacy. With the earlier approach, four separate empirical models for O2D of MDKP, namely for O2D-PY, O2D-KN, O2D-IV, and O2D-BR, were developed; through the present MCDM-based hybrid approach, an overall empirical model of O2D performance for MDKP was formulated by combining O2D-PY, O2D-KN, O2D-IV, and O2D-BR into one NMT score.

Comparing the significant and nonsignificant terms, it was found that the quadratic terms of time and NaOH dose in the previous pulp yield model and the quadratic term of NaOH dose in the previous kappa number model are nonsignificant, while in the present NMT score model only the quadratic term of NaOH dose is nonsignificant. Further, in the present NMT score model, only the interaction term of time and temperature exhibited a synergistic effect, while all the other terms revealed antagonistic effects. On the other hand, the interaction terms of time and temperature
along with the quadratic term of NaOH dose demonstrated synergistic effects in the previous models of O2D-PY, O2D-KN, and O2D-IV, while the remaining terms in these models displayed antagonistic effects. Nevertheless, this scenario was completely reversed in the former brightness model, in which the interaction term of time and temperature along with the quadratic term of NaOH dose showed antagonistic effects while the remaining terms indicated synergistic effects.
The sum of squares (SS) values of the previous O2D models indicated that the contribution of time toward O2D-PY, O2D-KN, O2D-IV, and O2D-BR is 9%, 7.75%, 12.09%, and 5.26%, respectively; that of temperature is 56%, 50.59%, 51.19%, and 54.68%; and that of NaOH charge is 19.71%, 20.50%, 19.01%, and 20.43%. On the other hand, the SS values of the present model show that the contributions of time, temperature, and NaOH charge toward the O2D performance score are 5.88%, 47.55%, and 19.61%, respectively. Looking into the contribution of the two-factor interaction terms (i.e., time × temperature, time × NaOH charge, and temperature × NaOH charge), it was found that their total contribution in the O2D-PY, O2D-KN, O2D-IV, O2D-BR, and NMT score models is 5.81%, 8.43%, 9.29%, 7.45%, and 8.82%, respectively. Thus, the present combined model accounts for the interaction effects at par with or even better than the previous models, though in all cases the total is less than 10%. Similarly, the influence of the quadratic terms (i.e., time², temperature², and NaOH dose²) in the earlier reported O2D-PY, O2D-KN, O2D-IV, and O2D-BR models is 6.71%, 10.78%, 7.07%, and 10.56%, respectively, which is lower than their 15.19% contribution in the NMT score model.
The optimum temperature (90 °C) and optimum time (90 min) predicted by both approaches are the same. However, the optimum NaOH charge projected by the present framework is ~6% lower than that calculated by the earlier approach. This is consistent with the diverse modeling and optimization algorithms practiced for O2D, which have also aimed at reducing NaOH consumption [5], because a reduction in NaOH consumption can make O2D more profitable from an economic point of view. Considering the overall process desirability, the present optimization methodology predicted ~8% higher desirability than the previous one. In the desirability approach of optimization, all desirability values lie in the range 0–1 and the maximum desirability value is near unity [46]; a higher desirability value is preferred because the higher the desirability, the closer the optimum values are to unity. Also, at the new optimum, that is, a 1.31% NaOH dose, O2D-PY = 96.85%, O2D-KN = 10.69, O2D-IV = 638.49 cm³/g, and O2D-BR = 42.16% ISO. These values demonstrate that the quality parameters of O2-delignified MDKP obtained through the modeling approach applied in the present work are commensurate with those obtained previously.

4 Conclusion

A hybrid scheme based on CCD, NMT, and the desirability function has been proposed for optimization of the O2D process, with the results obtained from O2D of MDKP taken as a representative data set. The NMT was used to develop a performance grade for the O2D process in terms of the NMT score, CCD was used to analyze the effects of the process variables (time, temperature, and NaOH charge) on the NMT score with the help of an empirical equation, and the desirability function was used to find the optimal process conditions. The entropy method was used to determine the objective weights of the O2D criteria. It was found that O2D-KN was the most significant criterion with 56.21% weightage, followed by O2D-IV (33.65%), O2D-BR (9.58%), and O2D-PY (0.56%). The empirical model displayed a high degree of fit, with R², adjusted R², and predicted R² of 0.9600, 0.9240, and 0.8175, respectively. The normal plot of residuals and the other diagnostics also advocated the adequacy and reliability of the regression model, which accounted for more interactive and quadratic contributions than the previous models. The predicted optimal process conditions of time (90 min) and temperature (90 °C) were identical to those of the previous models, while the NaOH charge (1.31% w/w) represented a significant and beneficial reduction. This work illustrates that researchers active in this field can explore such methods for several other pulp and paper optimization problems for better results.

References
[1] Esteves C.S.V.G., Brännvall E., Östlund S., Sevastyanova O. Evaluating the potential to modify pulp and paper properties through oxygen delignification. ACS Omega 2020, 5, 13703–13711, https://doi.org/10.1021/acsomega.0c00869.
[2] Leh C.P., Wan Rosli W.D., Zainuddin Z., Tanaka R. Optimisation of oxygen delignification in production of totally chlorine-free cellulose pulps from oil palm empty fruit bunch fibre. Industrial Crops and Products 2008, 28(3), 260–267, https://doi.org/10.1016/j.indcrop.2008.02.016.
[3] Anupam K., Deepika S.V., Lal P.S. Antagonistic, synergistic and interaction effects of process
parameters during oxygen delignification of Melia dubia kraft pulp. Journal of Cleaner
Production 2018, 199, 420–430, https://doi.org/10.1016/j.jclepro.2018.07.125.
[4] Vianna V., Yamamoto C.I., Vieira O. Modeling and simulation of an oxygen delignification
industrial process of cellulosic pulp using kinetic expressions and the software CADSIM Plus.
International Journal of Advanced Engineering Research and Science (IJAERS) 2018, 5(5),
261–271, https://dx.doi.org/10.22161/ijaers.5.5.35.
[5] Euler G., Nayef G., Fialho D., Brito R., Brito K. Modeling of oxygen delignification process
using a Kriging based algorithm. Cellulose 2020, 27, 2485–2496, https://doi.org/10.1007/
s10570-020-02991-4.
[6] Susilo J., Bennington C.P.J. Modeling kappa number and pulp viscosity in industrial oxygen delignification systems. Chemical Engineering Research & Design 2007, 85(A6), 872–881.

[7] Chavan P., Patil A. Taguchi-based optimization of machining parameter in drilling spheroidal
graphite using combined TOPSIS and AHP method. In: Venkata Rao R., Taler J. (eds) Advanced
Engineering Optimization Through Intelligent Techniques. Advances in Intelligent Systems
and Computing. Springer, Singapore, Vol. 949, 2020, https://doi.org/10.1007/978-981-13-
8196-6_70.
[8] Bhat R., Mohan N., Sharma S., Agarwal R.A., Rathi A., Subudhi K.A. Multi-response
optimization of the thrust force, torque and surface roughness in drilling of glass fiber
reinforced polyester composite using GRA-RSM. Materials Today: Proceedings 2019, 19,
Part 2, 333–338, https://doi.org/10.1016/j.matpr.2019.07.608.
[9] Bhat R., Mohan N., Sharma S., Shandilya M., Jayachandran K. An integrated approach of CCD-
TOPSIS-RSM for optimizing the marine grade GFRP drilling process parameters. Materials
Today: Proceedings 2019, 19, Part 2, 307–311, https://doi.org/10.1016/j.matpr.2019.07.214.
[10] Sharma N., Ahuja N., Goyal R., Rohilla V. Parametric optimization of EDD using RSM-Grey-
TLBO-based MCDM approach for commercially pure titanium. Grey Systems: Theory and
Application 2020, 10(2), 231–245, https://doi.org/10.1108/GS-01-2020-0008.
[11] Chakraborty S., Chatterjee P., Das P.P. A DoE–TOPSIS method-based meta-model for
parametric optimization of non-traditional machining processes. Journal of Modelling in
Management 2019, 14(2), 430–455, https://doi.org/10.1108/JM2-08-2018-0110.
[12] Wang P., Zhu Z., Wang Y. A novel hybrid MCDM model combining the SAW, TOPSIS and GRA
methods based on experimental design. Information Sciences 1 June 2016, 345, 27–45,
https://doi.org/10.1016/j.ins.2016.01.076.
[13] Wang P., Meng P., Zhai J.Y., Zhu Z.Q. A hybrid method using experiment design and grey relational analysis for multiple criteria decision making problems. Knowledge-Based Systems 2013, 53, 100–107.
[14] Şimşek B., Yiç T., Şimşek E.H. A TOPSIS-based Taguchi optimization to determine optimal
mixture proportions of the high strength self-compacting concrete. Chemometrics and
Intelligent Laboratory Systems 2013, 125, 18–32.
[15] Behzadian M., Otaghsara S.K., Yazdani M., Ignatius J. A state-of-the-art survey of TOPSIS applications. Expert Systems with Applications 2012, 39, 13051–13069, https://doi.org/10.1016/j.eswa.2012.05.056.
[16] Stojčić M., Zavadskas E.K., Pamučar D., Stević Ž., Mardani A. Application of MCDM methods in sustainability engineering: A literature review 2008–2018. Symmetry (Basel) 2019, 11, 350, https://doi.org/10.3390/sym11030350.
[17] Saghafi S., Ebrahimi A., Mehrdadi N., Bidhendy G.N. Evaluation of aerobic/anaerobic
industrial wastewater treatment processes: The application of multi‐criteria decision
analysis. Environmental Progress & Sustainable Energy 2019, 38(5), 1–7, https://doi.org/
10.1002/ep.13166.
[18] Ganji S.M.S.A., Hayati M. Selecting an appropriate method to remove cyanide from the wastewater of Moteh gold mine using a mathematical approach. Environmental Science and Pollution Research 2018, 25, 23357–23369, https://doi.org/10.1007/s11356-018-2424-1.
[19] Mardani A., Jusoh A., Nor K., Khalifah Z., Valipour A. Multiple criteria decision-making
techniques and their applications – a review of the literature from 2000 to 2014. Economics
Research 2015 Istraživanja 28, 516–571, https://doi.org/10.1080/1331677X.2015.1075139.
[20] Wang T.C., Chang T.H. Application of TOPSIS in evaluating initial training aircraft under a
fuzzy environment. Expert Systems With Applications 2007, 33, 870–880, https://doi.org/
10.1016/j.eswa.2006.07.003.
[21] Srdjevic B., Medeiros Y.D.P., Faria A.S. An objective multi-criteria evaluation of water management scenarios. Water Resources Management 2004, 18, 35–54.

[22] Hwang C.L., Yoon K. Methods for multiple attribute decision making. In: Multiple Attribute
Decision Making. Lecture Notes in Economics and Mathematical Systems. Springer, Berlin,
Heidelberg, Vol. 186, 1981, https://doi.org/10.1007/978-3-642-48318-9_3.
[23] Deng H., Yeh C.H., Willis R.J. Inter-company comparison using modified TOPSIS with objective weights. Computers & Operations Research 2000, 27, 963–973, https://doi.org/10.1016/S0305-0548(99)00069-6.
[24] Chen C. Extensions of the TOPSIS for group decision-making under fuzzy environment. Fuzzy
Sets and Systems 2000, 114(1), 1–9.
[25] Krohling R.A., Pacheco A.G.C. A-TOPSIS – An approach Based on TOPSIS for Ranking
Evolutionary Algorithms Procedia Computer Science 2015, 55, 308–317.
[26] Giove S. Interval TOPSIS for multicriteria decision making. In: Marinaro M., Tagliaferri R. (eds) Neural Nets. WIRN 2002. Lecture Notes in Computer Science, Vol. 2486. Springer, Berlin, Heidelberg, 2002, https://doi.org/10.1007/3-540-45808-5_5.
[27] Ren L., Zhang Y., Wang Y., Sun Z. Comparative analysis of a novel MTOPSIS method and
TOPSIS. Applied Mathematics Research Express 2007, Article ID abm005.
[28] Amdoun R., Khelifi L., Khelifi-Slaoui M., Amroune S., Asch M., Assaf-Ducrocq C., Gontier E. The desirability optimization methodology: A tool to predict two antagonist responses in biotechnological systems: Case of biomass growth and hyoscyamine content in elicited Datura stramonium hairy roots. Iranian Journal of Biotechnology 2018, 16(1), e1339, 11–19.
[29] Montgomery D.C. Design and Analysis of Experiments, fourth. John Wiley and Sons, 1997.
[30] Ani J.U., Okoro U.C., Aneke L.E., et al. Application of response surface methodology for
optimization of dissolved solids adsorption by activated coal. Applied Water Science 2019, 9,
60, 1–11, https://doi.org/10.1007/s13201-019-0943-7.
[31] Marget W.M., Morris M.D. Central composite experimental designs for multiple responses with different models. Technometrics 2019, 61(4), 524–532.
[32] Verseput R. Digging into DOE: Selecting the right central composite design for response surface methodology applications. Quality Digest 2000, https://www.qualitydigest.com/june01/html/doe.html.
[33] Costa N.R., Lourenço J., Pereira Z.L. Desirability function approach: A review and performance evaluation in adverse conditions. Chemometrics and Intelligent Laboratory Systems 2011, 107, 234–244.
[34] Anupam K., Lal P.S., Bist V., Sharma A.K., Swaroop V. Raw material selection for pulping and
papermaking using TOPSIS multiple criteria decision making design. Environmental Progress
& Sustainable Energy 2014, 33, 1034–1041, https://doi.org/10.1002/ep.11851.
[35] Shannon C.E. A mathematical theory of communication. Bell System Technical Journal 1948,
27, 379–423, https://doi.org/10.1002/j.1538-7305.1948.tb01338.x.
[36] Anupam K., Swaroop V., Sharma A.K., Lal P.S., Bist V. Sustainable raw material selection for
pulp and paper using SAW multiple criteria decision making design. IPPTA Journal 2015, 27,
67–76.
[37] Harington J. The desirability function. Industrial Quality Control 1965, 21, 494–498.
[38] Derringer G., Suich R. Simultaneous optimization of several response variables. Journal of Quality Technology 1980, 12, 214–219.
[39] Kuhn M. The desirability Package, 2016.
[40] Garg H., Agarwal N., Choubey A. Entropy based multi-criteria decision making method under
fuzzy environment and unknown attribute weights. Global Journal of Technology and
Optimization 2015, 6, 1000182, https://doi.org/10.4172/2229-8711.1000182.

[41] Vujičić M., Papić M.Z., Blagojević M.D. Comparative analysis of objective techniques for
criteria weighing in two MCDM methods on example of an air conditioner selection. Tehnika –
Menadžment 2017, 67, 422–429, https://doi.org/10.5937/tehnika1703422V.
[42] Lotfi F.H., Fallahnejad R. Imprecise Shannon’s entropy and multi attribute decision making.
entropy 2010, 12, 53–62, https://doi.org/10.3390/e12010053.
[43] Chauhan A., Vaish R. Magnetic material selection using multiple attribute decision making approach. Materials and Design 2012, 36, 1–5, https://doi.org/10.1016/j.matdes.2011.11.021.
[44] Yi X.S., Shi W.X., Yu S.L., Ma C., Sun N., Wang S., Jin L.M., Sun L.P. Optimization of complex
conditions by response surface methodology for APAMe oil/water emulsion removal from
aqua solutions using nano-sized TiO2/Al2O3 PVDF ultrafiltration membrane. Journal of
Hazardous Materials 2011, 193, 37–44.
[45] Baroutaji A., Gilchrist M.D., Smyth D., Olabi A.G. Crush analysis and multi-objective
optimization design for circular tube under quasi-static lateral loading. Thin-Walled
Structures 2015, 86, 121–131, https://doi.org/10.1016/j.tws.2014.08.018.
[46] Naik D.K., Maity K. Application of desirability function based response surface methodology
(DRSM) for investigating the plasma arc cutting process of sailhard steel. World Journal of
Engineering 2018, 15(4), 505–512, https://doi.org/10.1108/WJE-06-2017-0125.
T. K. Priyanka, Manoj K. Singh, Anuj Kumar
Deep learning for satellite-based data
analysis
Abstract: The application of a neural network is not new in data analysis. However, the availability and cost of modern hardware have made it possible to configure networks with more layers and many more neurons per layer. The advent of graphical processing units (GPUs) has also accelerated the use of deep learning. It is used successfully in many active areas of research, including face recognition, fingerprint recognition, image analysis, and the agricultural and health sciences. Deep learning has also found application in a wide range of datasets and imageries obtained from satellites; in the last ten years, many articles presenting studies using satellite-derived data and imageries were published. This chapter demonstrates the application of deep learning to satellite-derived datasets and presents a survey of the uses of deep learning in satellite-based data analysis.

Keywords: deep learning, satellite, remote sensing, big data, machine learning

T. K. Priyanka, Manoj K. Singh, Anuj Kumar, Department of Mathematics, University of Petroleum and Energy Studies, Dehradun, India

https://doi.org/10.1515/9783110716214-011

1 Introduction

Neural networks are often used in artificial intelligence and machine learning. Multiple layers are used in neural networks to extract hidden information and patterns in input data [1, 2]. In many instances, a small number of layers may not be sufficient for pattern recognition, although in some cases shallow networks with a large number of nodes can extract a large amount of variation and information from the data. With the addition of a large number of nodes alone, however, the performance of a neural network cannot be guaranteed for many problems. Increasing the number of layers has been shown to significantly improve the performance of the networks, but it also increases the computational complexity. The exponential increase in computing power and the dedicated hardware for such computing have opened the door for deep learning and its applications in multiple fields; here, "deep" refers to the large number of layers in the neural networks. Deep learning is applied in varying fields, which implies that it can be applied to varying types of datasets, including financial datasets, agricultural datasets, social media datasets, survey datasets, climate datasets, and varying types of images. In addition,
deep learning is also applied to satellite imageries and to the datasets derived from satellite sensors. Every day, a huge amount of data is generated by the sensors on-board satellites. Examples of such sensor and satellite systems are (1) the moderate resolution imaging spectroradiometer (MODIS) on-board Terra and Aqua, (2) the multiangle imaging radiometer, and (3) the clouds and Earth's radiant energy system, as well as LANDSAT, the advanced very-high-resolution radiometer aboard NOAA-15, and the gravity recovery and climate experiment. Useful information is extracted from the imageries and data retrieved from the satellites [3]; applications of satellite-derived data and imageries include data fusion [4] and agricultural fire analysis [5, 6]. Among the many techniques applied to satellite-derived datasets, deep learning is one of the best for extracting patterns and objects. In addition to object and pattern recognition, deep learning is also applied for classification of satellite imageries. The technique is applied in many fields related to satellite-retrieved data, including land-use land-cover and the urban environment, and in detecting precipitation, clouds, solar radiation, sea surface temperature, and soil moisture from the imageries. Deep learning is also involved in the forecasting of weather events, which requires processing large amounts of data from stations, satellites, and model simulations along with large computational resources, and it has proven very useful in weather forecasting. The technique is also useful in the estimation of populations, including humans, whales, and trees, and agriculture is one of the many fields where deep learning is playing an important role.

With the growing applications of deep learning, there is a need for computational frameworks and architectures. Some of the popular deep neural network architectures are deep belief networks (DBN), recurrent neural networks, and convolutional neural networks (CNN). Many useful and important frameworks have been developed and are in use, including TensorFlow, PyTorch, Keras, Caffe, Sonnet, MXNet, and U-Net. For satellite imageries, DeepSat is a learning framework.

2 Deep learning framework

In this section, neural networks are discussed from the discriminant function point of view. A linear discriminant function can be implemented as one of the simplest neural networks. The weights of the function given in equation (1) can be obtained by using neural network frameworks [2]:

\[
y(x) = w^{T} x + w_0 \tag{1}
\]

where w is the weight vector, w_0 is known as the bias, x is the vector of inputs, and y is the output; both w and w_0 require computation. The weight vector w and the bias w_0 are learned by using the backpropagation technique of neural networks. This simple model is shown in Figure 1, where the input vector consists of three elements; one additional element in the input vector represents the node corresponding to the bias, which is kept as one.

Figure 1: A neural network model with one input and one output layer.

In the same vein, equation (1) can be applied k times for a k-class discrimination problem. The k discriminant functions defined in equation (2) are used for the k-class problem:

\[
y_k(x) = w_k^{T} x + w_{k0} \tag{2}
\]

where the w_k are weight vectors, the w_{k0} are biases, x is the input vector, and the y_k are the outputs. Figure 2 shows a very simple neural network for a four-dimensional feature vector and a three-class classification problem.
Figure 2: A neural network with one input and one output layer for a three-class classification problem.

The linear discriminant function of equation (1) can be generalized by using nonlinear functions. For example, if f is a nonlinear function, then the discriminant function of the form y = f(wᵀx + w₀) is a discriminant function of nonlinear nature. Functions such as f are known as activation functions. Some of the popular activation functions used in neural networks include the logistic sigmoid, tanh, binary step, ReLU, leaky ReLU, parametric ReLU, and softmax activation functions. The functions discussed above are used in the deep learning frameworks discussed in the following section.
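A few of these activation functions, together with the nonlinear discriminant y = f(wᵀx + w₀) and its k-class softmax analogue, can be sketched in a few lines of Python; all weights below are illustrative.

```python
import numpy as np

def relu(a):
    """Rectified linear unit."""
    return np.maximum(0.0, a)

def sigmoid(a):
    """Logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-a))

def softmax(a):
    """Softmax over a vector of class scores."""
    e = np.exp(a - a.max())
    return e / e.sum()

x = np.array([0.2, -1.0, 0.5])             # three-element input vector
w, w0 = np.array([0.4, 0.1, -0.3]), 0.05   # illustrative weights and bias
y = sigmoid(w @ x + w0)                    # y = f(w^T x + w0)

W = np.full((3, 3), 0.1)                   # illustrative 3-class weight matrix
print(y, softmax(W @ x + np.full(3, 0.05)))  # class scores sum to 1
```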

2.1 Computational framework: Keras, CAFFE

CAFFE, a deep learning framework, was developed at the University of California, Berkeley. It is a convolutional-architecture-based system, written in C++ and Python, that can process over 60 million images per day and is known for its speed and modularity. Keras works as a Python interface for artificial neural networks. It was developed as part of the ONEIROS project; it contains standard implementations of the most used neural networks, supports convolutional and recurrent neural networks, and can also be used on smartphones [7]. A framework of Keras is shown in Figure 3.

Figure 3: Algorithm of Keras.
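As a hedged illustration of the Keras workflow (TensorFlow backend assumed), the snippet below builds a small dense network for the four-feature, three-class problem of Figure 2; the commented training arrays are hypothetical.

```python
from tensorflow import keras

model = keras.Sequential([
    keras.layers.Input(shape=(4,)),               # four-dimensional features
    keras.layers.Dense(16, activation="relu"),    # hidden layer
    keras.layers.Dense(3, activation="softmax"),  # three-class output
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(X_train, y_train, epochs=10)  # hypothetical (n, 4) data, class ids
```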

Deep learning is computationally intensive, and dedicated hardware has been developed for it in addition to software. The graphical processing unit (GPU) is one such hardware system: an electronic circuit used to manipulate and alter memory rapidly, efficiently handling computer graphics and images. For such workloads, GPUs are more efficient than CPUs; they are either present on the motherboard or can be inserted as graphics cards. Designed for parallel processing, the GPU is used in various applications, including graphics and video rendering. Though best known for their gaming capabilities, GPUs have become popular for artificial intelligence; they deliver acceleration by taking advantage of their parallel nature [8].
Deep learning often involves large amounts of data, the core of so-called big data: a collection of data so huge that traditional techniques cannot analyze it [9]. These unstructured data are collected from social media, the Internet, sensors, and other devices. Analysts, researchers, and businesses make better and faster decisions by analyzing big data that was previously inaccessible or unusable. Data are collected at tremendous speed [10] and written directly to disk. Big data helps in gaining more information accurately, as there is a massive amount of data, and it can be used for product development, predictive maintenance, customer experience, and in many other ways. One of big data's main challenges is to keep pace with the data and store it efficiently [11]. Recently, metaheuristic optimization algorithms have also been used for big data analysis [12]; the last two decades have seen remarkable progress in the development and application of metaheuristics [13–16]. The training of deep learning models and the optimization of their associated parameters have also benefited from this development [17, 18].

2.2 Satellite data

Satellites provide data at a synoptic scale, and their resolution and revisit frequency generate big amounts of meaningful data; the data and imageries derived from satellites are used in multiple applications. Telemetry and anomaly forecasting are performed using LSTM networks; two such methods have been implemented for detecting various types of anomalies, and they also reduce false-positive flags. Here, features extracted with unsupervised techniques are automatic features that are otherwise not interpretable by human eyes. Deep learning is replacing traditional machine learning in many places [19]. A high-spatial-resolution sensor is needed to observe ice-wedge degradation in pan-Arctic applications [20], and a convolutional neural network (CNN) based model, a deep learning tool, was used for ice-wedge polygon mapping across the Arctic. According to Keshk and Xu [21], fusion algorithms are scene dependent, so a suitable algorithm must be selected from those available; the selected algorithm should not limit the ice-wedge polygon mapping. In addition, VHSR imageries have enabled applications in the feature extraction of permafrost. The Polar Geospatial Center provides a product based on the Brovey algorithm [22, 23] that is suitable for general image analysis.
A super-resolution image is obtained by upscaling low-resolution images to a high-resolution image. To obtain super-resolution images, various technologies utilizing deep layers in neural networks have been studied, and they have performed better than earlier frameworks [24]. Sets of experiments have been constructed to measure the behavior and effectiveness of deep learning frameworks; the deep network cascade performs better than other methods in quantitative evaluations. Deep learning has taken a significant step forward in super-resolution [25].
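A minimal Keras sketch of a super-resolution network in the spirit of the classic three-layer SRCNN is shown below; the layer sizes follow the usual 9-1-5 pattern, and the training arrays named in the comment are hypothetical.

```python
from tensorflow import keras

sr = keras.Sequential([
    keras.layers.Input(shape=(None, None, 1)),    # single-band image patches
    keras.layers.Conv2D(64, 9, padding="same", activation="relu"),
    keras.layers.Conv2D(32, 1, padding="same", activation="relu"),
    keras.layers.Conv2D(1, 5, padding="same"),    # reconstructed patch
])
sr.compile(optimizer="adam", loss="mse")
# sr.fit(lowres_upscaled, highres, epochs=50)     # hypothetical training arrays
```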

3 Deep learning for satellite imageries

Deep neural networks are the deep learning development of artificial neural networks: the approach uses deep network architectures and effective weight-sharing convolutional layers [26, 27].

Site-independent and site-specific neural networks are used to model sea surface temperature. This data-driven model is based on deep learning technology [28] and can automatically learn the rules of sea surface temperature patterns, avoiding the need to explicitly model the associated spatiotemporal variations and the various complex processes with physical equations [29].
Satellite image time series algorithms are also widely used [30]. A satellite image time series should be sufficiently long and must have a suitable temporal resolution to correctly capture normal behavior and detect seasonal trends. In this approach, changes are extracted at the two temporal resolutions present in the satellite image time series, analyzed in the context of several temporal resolutions, and then regrouped. The technique depends on unsupervised feature learning with neural networks [31]. The method works for real-life datasets, but for unbalanced datasets the smaller classes could not be separated from the larger classes [32, 33].
Deep learning algorithms are used in radio communications for interference management, with interference detection and interference classification as the main areas. Earlier techniques only decided whether an interference was present or not; the classifier is now trained with various strategies to cope with multiple interferences [34], and the results are used for future communications.

3.1 Classification: land use, land cover, and urban environment

Satellite imageries with high spatial resolution provide information for a great diversity of remote sensing applications, including demographic land uses. Owing to the sheer amount of data from satellites, they cannot be analyzed manually; therefore, different machine learning techniques are used to automate the analysis of satellite data. Deep learning combined with unsupervised learning has been developed to study high-resolution images [35]. These techniques were used to distinguish socioeconomic regions in cities in India and to locate schools in Africa. The deep learning portion has been ported from Caffe to Keras, and the unsupervised learning portion uses scikit-learn; pipelining deep learning with unsupervised learning is thus used to identify features from satellite images [36].
Remote sensing principles are also used in detecting building damage [37]. CNNs are trained for binary classification, and the samples derived are very limited: the sample size from satellite imageries is relatively small, whereas a wider range of images is obtained from airborne systems. With an increase in the number of sensor platforms, the fitted networks would hold the key to assessing damage at the regional level; at this point, satellite images are the only source of information utilized in this kind of study [38]. Further refinement of the model would lead to better results, and location-specific samples used with online learning methodologies would definitely improve detection quality. To use the samples obtained from multi-resolution remote sensing at an optimal level, the number of classes can be increased. The number of images with multiple resolution levels is increasing, and a method like this, which can take advantage of the resolution and multiple scales, will be essentially useful when large volumes of pixel data are processed [39].
Automatic detection in satellite imagery can be used for the detection of oil tanks, aircraft, and so on in military applications. CNNs are used in a large range of image recognition problems, as they are invariant to small rotations and shifts, while EdgeBoxes is robust to varying object sizes; a collective use of EdgeBoxes and CNNs is therefore useful for automatic detection in satellite images.
Deep CNNs can be used to reduce the cost of estimating the global variations present in time series of long-term average PM2.5 concentrations. The new global IMAGE-PM2.5 model, which depends on a single type of input, can be utilized to estimate PM2.5 concentrations. The model is computationally fast, and its performance can be compared with that of the Bayesian hierarchical models used frequently by the Global Burden of Disease Project [40]. The IMAGE-PM2.5 model has the potential to be used as an independent method for estimating global exposure and also to be embedded into more complex hierarchical models [41].
CNNs can be used for automatic change extraction in disaster management using imagery obtained from remote sensors. This method can detect the region affected by a disaster automatically with a high degree of accuracy, and CNNs can also improve the precision of the classification. Earlier techniques used only two grayscale channels, one pre-disaster and one post-disaster, and an ordinary subtraction provided the information on the disaster. The new method instead uses six channels in total.
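The following sketch contrasts the two approaches (assuming TensorFlow/Keras; tile sizes and layer widths are illustrative): a grayscale difference of the two epochs versus a CNN over a stacked six-channel input.

import numpy as np
import tensorflow as tf

pre = np.random.rand(1, 128, 128, 3).astype("float32")   # pre-disaster RGB tile
post = np.random.rand(1, 128, 128, 3).astype("float32")  # post-disaster RGB tile

# Earlier approach: subtraction of the two grayscale epochs.
gray = lambda x: x.mean(axis=-1, keepdims=True)
difference = np.abs(gray(post) - gray(pre))  # crude change map

# Six-channel approach: stack both RGB epochs and learn the change features.
stacked = np.concatenate([pre, post], axis=-1)  # shape (1, 128, 128, 6)
change_net = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 6)),
    tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu"),
    tf.keras.layers.Conv2D(1, 1, activation="sigmoid"),  # per-pixel change probability
])
change_map = change_net.predict(stacked, verbose=0)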
Deep learning techniques are applied in the built environment as well. Features determined by remote sensing are utilized to predict mortality [42]; the estimated features are primarily associated with demographic data. Upcoming modeling principles should identify features from remotely sensed images that are related to health outcomes, and this modeling approach has potential for specific public health problems [43].

3.2 Object detection: precipitation, cloud, solar radiation, sea surface temperature, and soil moisture

Currently, all cloud detection is based on classifying individual pixels from their spectral signatures [44]. Clouds can also be detected using deep learning in combination with images in the visible bands; a U-Net architecture has been used for this purpose. Knowledge of the optical and physical properties of clouds, which is now well developed, can be very useful in pre-processing the cloud data and will lead to better cloud classification. Cloud detection algorithms can also incorporate spatial features and textures, an improvement over the Fmask-based algorithm. The method shows a high degree of performance using only the visible channels, with the possible inclusion of infrared channels [45].
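A minimal one-level U-Net sketch for per-pixel cloud masking is shown below (assuming TensorFlow/Keras; the tile size, layer widths, and the training data named in the final comment are illustrative):

import tensorflow as tf
from tensorflow.keras import layers

def tiny_unet(size=128, channels=3):
    # One-level U-Net: encoder, bottleneck, and decoder with a skip connection.
    inp = tf.keras.Input(shape=(size, size, channels))
    e1 = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
    p1 = layers.MaxPooling2D()(e1)                        # downsample
    b = layers.Conv2D(32, 3, padding="same", activation="relu")(p1)
    u1 = layers.UpSampling2D()(b)                         # upsample
    c1 = layers.Concatenate()([u1, e1])                   # skip connection
    d1 = layers.Conv2D(16, 3, padding="same", activation="relu")(c1)
    out = layers.Conv2D(1, 1, activation="sigmoid")(d1)   # per-pixel cloud probability
    return tf.keras.Model(inp, out)

model = tiny_unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(visible_tiles, cloud_masks, ...)  # hypothetical labeled training data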
Remote sensing imagery has been successfully applied in fields such as classification and change detection, and remote sensing image processing involves a few pre-processing procedures [46]. Neural networks have been used in the remote sensing community for many years, and deep learning has been adapted for nonstandard image processing tasks.
Snow and cloud detection algorithms are time consuming and often require manual labeling of cloud and snow areas. In addition, snow and cloud have a similar appearance in satellite imagery, which adds to the difficulty of detection. Previous detection methods applied semi-manual threshold tests, but the results were not accurate. An end-to-end fully convolutional network has been configured, and a multiscale prediction method explored, to take advantage of low-level spatial information and high-level semantic information simultaneously. This method also reduces the error rate [47].
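The following sketch illustrates the multiscale idea (assuming TensorFlow/Keras; the sizes, class count, and layer widths are illustrative): a low-level, full-resolution feature map is fused with an upsampled high-level map before per-pixel classification.

import tensorflow as tf
from tensorflow.keras import layers

inp = tf.keras.Input(shape=(128, 128, 3))
low = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)  # low-level spatial detail
x = layers.MaxPooling2D(4)(low)
high = layers.Conv2D(32, 3, padding="same", activation="relu")(x)   # high-level semantics
high_up = layers.UpSampling2D(4)(high)                              # back to full resolution
fused = layers.Concatenate()([low, high_up])                        # multiscale fusion
out = layers.Conv2D(3, 1, activation="softmax")(fused)              # snow / cloud / background
model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")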
The threshold method for cloud detection exploits the high reflectivity of clouds in the RGB and infrared channels, but threshold factors are not accurate because of confounding influences [48]. Artificial neural networks are used because they can quickly learn the feature vector of the clouds from the input samples and detect them. One shortcoming of the deep learning model is the inadequacy of learning samples [49].
Snow depth (SD) can be measured directly from station data, but station networks cannot capture the spatiotemporal variations in SD when stations are sparsely distributed [50]. Passive microwave remote sensing has acquired long-term, large-scale snow depth datasets, but it rests on the assumption that the electromagnetic radiation emitted by snow depends on the snow depth. A combination of remote sensing and station observations is therefore used to measure snow depth: a linear relation between snow depth and brightness temperature is established, and a deep belief network (DBN) yields better snow depth predictions than traditional linear regression techniques [51].
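A minimal sketch of this comparison follows (assuming scikit-learn; the synthetic brightness temperatures are illustrative, and the MLP regressor is used only as a simple nonlinear stand-in for a DBN):

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in: brightness temperatures (K) at four channels vs. station snow depth.
rng = np.random.default_rng(0)
tb = rng.uniform(180, 280, size=(500, 4))
depth = 80 - 0.2 * tb[:, 0] + 0.05 * tb[:, 1] ** 1.1 + rng.normal(0, 2, 500)

linear = LinearRegression().fit(tb, depth)  # traditional linear retrieval
network = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                       random_state=0).fit(tb, depth)  # nonlinear stand-in for a DBN

print("linear R^2: ", linear.score(tb, depth))
print("network R^2:", network.score(tb, depth))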
Large parts of the Amazon are inaccessible but are monitored by satellite imaging, which also helps in observing climate change. Various CNN models are used to learn the climate feature vector of the Amazon Basin: feature classes are generated from selected images, and the models are used to predict image labels [52].
Many techniques have been developed over the last decades for the detection of precipitating clouds. Machine learning is considered superior to parametric approaches [53], which are primarily restricted to a small number of variables, whereas machine learning can consider the whole set of available data. Different algorithms have been tried for the detection, but the differences in performance are very small because only the spectral properties of clouds are considered. More studies are required, as rainfall also depends on the textural properties of clouds.
3.3 Weather forecasting

In many studies, including weather forecasting, management of water resources, and climate studies, the foremost requirement is accurate precipitation data at various scales. Satellite-based data can provide precipitation data at the synoptic scale. Data available at various resolutions, together with stochastic and deterministic downscaling methods, also help models use satellite-derived data at multiple scales. Infrared data are used to estimate precipitation, with passive microwave data used for post-processing. Deep learning models based on infrared channels demonstrate that features extracted from those channels, even when sparsely sampled, are better at detecting precipitation. This is the only model that can be used for warm cloud precipitation using water vapor data.
Forecasting energy availability with good accuracy is also the need of the hour. Solar energy is a function of daytime, so modeling requires frameworks that can inter-compare forecasting methods; a new contribution has emerged in the form of enhanced solar resource assessment [54]. Long short-term memory (LSTM) networks are one of the methods that can be used for predictive modeling and can be integrated within the solar energy domain to increase economic feasibility [55]. Weather forecasting is driven by data collected by satellites, whose images underpin predictions of clouds, precipitation, and lightning [56].
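The following minimal sketch (assuming TensorFlow/Keras; the synthetic irradiance series, window length, and network size are illustrative) trains an LSTM to predict the next hour of irradiance from the preceding 24 hours:

import numpy as np
import tensorflow as tf

# Synthetic hourly irradiance series; 24-hour windows predict the next hour.
series = np.clip(np.sin(np.linspace(0, 60 * np.pi, 2000)), 0, None).astype("float32")
window = 24
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),   # memory over the daily cycle
    tf.keras.layers.Dense(1),   # next-hour irradiance
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)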
Deep learning algorithms can be enhanced by using interpolation methods. An FO-based approach can outperform Brox's approach in optical flow utilization, and when high-frequency datasets are lacking, RetinaNet is the better model to use. For datasets where interpolation or data augmentation is used, deep learning algorithms perform better, and cyclone prediction using deep learning is accurate [57].

3.4 Population estimation: human

Deep learning techniques have been used to study depression rates in adults in relation to street view green and blue spaces. Green spaces such as trees and plants, and blue spaces such as rivers and lakes, have a massive impact on mental health. Streetscape measurements from street view imagery were compared with satellite-derived outputs [58], and a cross-sectional analysis was performed on questionnaire data from people aged 60 or above. The study quantifies the extent to which exposure to street view green and blue spaces is related to geriatric depression in Beijing, China. Deep learning in combination with street view data has been used to extract measurements of blue and green spaces instead of in situ field observation and remote sensing [59].
Governmental and nongovernmental organizations measure human well-being, but existing data are sparse and not straightforward to scale [60].
Other data relevant to measuring well-being are steadily increasing. Satellite-based deep learning approaches to measuring wealth are flexible and precise, and their application across many countries suggests that they could be extrapolated to generate wealth estimates in regions where data are unavailable [61].
In many countries around the world, the census is conducted once every 5 to 10 years, and most censuses contain errors because many people are not registered. Censuses are essential so that the supply of food, water, and other essential services meets demand. CNNs can be used to predict the population directly from satellite images, and known historical data can be used to predict population growth.
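A minimal sketch of such a CNN regressor follows (assuming TensorFlow/Keras; the random tiles and lognormal counts are hypothetical stand-ins for imagery paired with census counts):

import numpy as np
import tensorflow as tf

# Hypothetical training pairs: satellite tiles and census counts for those tiles.
tiles = np.random.rand(128, 64, 64, 3).astype("float32")
counts = np.random.lognormal(mean=6.0, sigma=1.0, size=128).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(16, 3, strides=2, activation="relu"),
    tf.keras.layers.Conv2D(32, 3, strides=2, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1),   # predicts the log-population of a tile
])
model.compile(optimizer="adam", loss="mse")
model.fit(tiles, np.log(counts), epochs=2, verbose=0)  # regress on log counts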

3.5 Economic condition

Human settlements of different types can be analyzed using satellite imagery. Unsupervised machine learning with CNNs on high-resolution imagery is used to distinguish neighborhoods. These methods show promising results from developed to underdeveloped communities, and communities with little socioeconomic and demographic data show particularly promising results. To perform the same study for the rest of the world, high-resolution satellites are required.
Deep learning also has significant remote sensing applications in disaster management. The cost of repairing living environments is challenging to quantify in real time, so satellite images from before and after major world crises are analyzed. A residual network (ResNet) and a pyramid scene parsing network (PSPNet) are used to quantify the magnitude of the damage: ResNet offers robustness to low image quality, whereas PSPNet offers scene parsing for contextualized analysis of the damage. The approach shows 90% accuracy. As multiple factors contribute to the damage, all of them are fitted in a multilinear regression model to quantify the real impact [62].
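A minimal sketch of the regression step follows (assuming scikit-learn; the predictors and coefficients are synthetic placeholders for network-derived damage fractions and contextual factors):

import numpy as np
from sklearn.linear_model import LinearRegression

# Synthetic per-region predictors: a segmentation-derived damaged-area fraction
# plus contextual factors; the coefficients below are placeholders.
rng = np.random.default_rng(1)
X = np.column_stack([
    rng.uniform(0, 1, 200),  # damaged-area fraction from the networks
    rng.uniform(0, 1, 200),  # building density
    rng.uniform(0, 1, 200),  # hazard intensity proxy
])
cost = 5.0 * X[:, 0] + 2.0 * X[:, 1] + 1.5 * X[:, 2] + rng.normal(0, 0.2, 200)

model = LinearRegression().fit(X, cost)
print(model.coef_)  # contribution of each factor to the estimated damage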

3.6 Agricultural application

Three kinds of methods can be used to extract precise farmland boundaries from satellite images: edge based, region based, and hybrid. Filters are used to identify discontinuities in the edge-based method, while region-based and trial-and-error methods are also applied [63]. These methods require manual work. Therefore, deep CNNs are used for conditioned inference: a model estimates, at a given pixel, the probability of being inside a field and predicts the distance from the closest boundary, and these outputs are processed further to obtain the instance segmentation [64].
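The following sketch shows a two-head network of this kind (assuming TensorFlow/Keras; the sizes and layer widths are illustrative): one head outputs the per-pixel probability of lying inside a field, the other the regressed distance to the closest boundary.

import tensorflow as tf
from tensorflow.keras import layers

inp = tf.keras.Input(shape=(128, 128, 3))
x = layers.Conv2D(16, 3, padding="same", activation="relu")(inp)
x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)

# Head 1: probability that each pixel lies inside a field.
field_prob = layers.Conv2D(1, 1, activation="sigmoid", name="field")(x)
# Head 2: regressed distance from each pixel to the closest field boundary.
boundary_dist = layers.Conv2D(1, 1, activation="relu", name="distance")(x)

model = tf.keras.Model(inp, [field_prob, boundary_dist])
model.compile(optimizer="adam",
              loss={"field": "binary_crossentropy", "distance": "mse"})
# The two predicted maps are post-processed (e.g. thresholding plus a
# watershed on the distance map) into individual field instances.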
Mapping and dating burned regions is an application of fire management, and doing so from satellite imagery using deep learning is a recent advance.
The good results obtained with these deep learning models, together with the expected steady increase in computational capabilities and data availability, make this direction worth following [65, 66].

4 Summary
Overall, satellite images in deep learning are widely used for prediction, classification, and estimation. The huge expansion of platforms that collect satellite images has made it easy to improve existing deep learning techniques and to create new ones. Remote sensing has frequently been applied in classification and change detection, whereas CNNs are used for smaller binary classification tasks. In detection, early approaches were based on the spectrum of individual pixels; accuracy has since improved through the comparison of similar objects using CNNs, which reduced the error. With the availability of real-time precipitation data at adequate resolution, climate analysis has become more reliable. With advances in hardware and software, pressing global problems such as climate change, human well-being, and migration are analyzed by creating new techniques from the available data. The lack of publicly available training datasets is one of the limitations of deep learning, and the application of deep learning in remote sensing needs further research.

References
[1] Duda R.O., Hart P.E. Pattern Classification and Scene Analysis. Wiley, 1973.
[2] Bishop C.M. Neural Networks for Pattern Recognition. Oxford university press, 1995.
[3] Gautam R., Singh M.K. Urban heat island over Delhi punches holes in widespread fog in the
Indo‐Gangetic Plains. Geophysical Research Letters 2018, 45(2), 1114–1121.
[4] Singh M.K., Gautam R., Venkatachalam P. Bayesian merging of MISR and MODIS aerosol
optical depth products using error distributions from AERONET. IEEE Journal of Selected
Topics in Applied Earth Observations and Remote Sensing 2017, 10(12), 5186–5200.
[5] Liu T., Marlier M.E., Karambelas A., Jain M., Singh S., Singh M.K., Gautam R., DeFries
R.S. Missing emissions from post-monsoon agricultural fires in northwestern India: Regional
limitations of MODIS burned area and active fire products. Environmental Research
Communications 2019, 1(1), 011007.
[6] Liu T., Mickley L.J., Gautam R., Singh M.K., DeFries R.S., Marlier M.E. Detection of delay in
post-monsoon agricultural burning across Punjab, India: Potential drivers and consequences
for air quality. Environmental Research Letters 2021, 16(1), 014014.
[7] Iglovikov V., Sergey M., Vladimir O. Satellite imagery feature detection using deep
convolutional neural network: A kaggle competition. ArXiv. 2017.
[8] Eberly D.H. CPU Computing. GPGPU Programming for Games and Science. 2020; 33–120.
[9] Davenport T.H., Paul B., Randy B. How ‘big Data’ Is Different. MIT Sloan Management Review
2012, 54(1).
[10] Marx V. The big challenges of big data. Nature 2013, 498(7453), 255–260.
[11] Fan J., Fang H., Han L. Challenges of big data analysis. National Science Review 2014, 1(2),
293–314.
[12] Bansal P., Kaur R. Twitter sentiment analysis using machine learning and optimization
techniques. International Journal of Computer Applications 2018, 179.
[13] Kumar A., Pant S., Ram M. Grey Wolf Optimizer Approach to the Reliability‐Cost Optimization
of Residual Heat Removal System of a Nuclear Power Plant Safety System. Quality and
Reliability Engineering International. Wiley, 2019, 1–12.
[14] Uniyal N., Pant S., Kumar A. An overview of few nature inspired optimization techniques and
its reliability applications. International Journal of Mathematical, Engineering and
Management Sciences 2020, 5(4), 732–743.
[15] Negi G., Kumar A., Pant S., Ram M. GWO: A review and applications. International Journal of
System Assurance Engineering and Management 2020.
[16] Kumar A., Pant S., Ram M. Complex system reliability analysis and optimization. In: Ram M.,
Davim J.P. (eds) Advanced Mathematical Techniques in Science and Engineering. River
Publisher, 185–199, ISBN: 9788793609341, e-ISBN: 9788793609334. 2018.
[17] Leung F.H., et al. Tuning of the structure and parameters of a neural network using an
improved genetic algorithm. IEEE Transactions on Neural Networks 2003, 14(1), 79–88.
[18] Meissner M., Schmuker M., Schneider G. Optimized particle swarm optimization (OPSO) and
its application to artificial neural network training. BMC Bioinformatics 2006, 7(1), 125.
[19] Asokan A., Anitha J. Machine learning based image processing techniques for satellite image
analysis -a survey. Proceedings of the International Conference on Machine Learning, Big
Data, Cloud and Parallel Computing: Trends, Perspectives and Prospects, COMITCon 2019
119–124. 2019.
[20] Divesh S. Proceedings of the VLDB Endowment 2017, 10, 2032–2033.
[21] Keshk H.M., Xu C.Y. Satellite super-resolution images depending on deep learning methods: A comparative study. 2017 IEEE International Conference on Signal Processing, Communications and Computing (ICSPCC 2017), 1–7, 2017.
[22] DeLancey E.R., Jahan K., Jason T.B., Jennifer N.H. Large-scale probabilistic identification of
boreal peatlands using google earth engine, open-access satellite data, and machine
learning. PLoS One 2019, 14(6), 1–23.
[23] Shikhar S., Stefan L. A comparative study of LSTM neural networks in forecasting day-ahead
global horizontal irradiance with satellite data. Solar Energy 2018, 162, 232–247.
[24] Hemanth J., Bhatia M., Geman O. (eds). Data Visualization and Knowledge Engineering. Lecture Notes on Data Engineering and Communications Technologies, Vol. 32. Springer.
[25] Montavon G. Introduction to neural networks. Lecture Notes in Physics 2020, 968, 37–62.
[26] Henarejos P., Miguel A.V., Ana I.P. Deep learning for experimental hybrid terrestrial and
satellite interference management. ArXiv. 2019: 1–5.
[27] O’meara C., Leonard S., Martin W. Applications of deep learning neural networks to satellite
telemetry monitoring. 15th International Conference on Space Operations, 2018 (June):1–16.
[28] Anderson P.G., Klein G., Oja E., Steele N.C., Antoniou G., Mladenov V., Paprzycki M. Neural
networks and their applications: Introduction. Informatica (Ljubljana) 2001, 25(1), 1.
[29] Guo Y., Yu L., Ard O., Songyang L., Song W., Michael S.L. Deep learning for visual
understanding: A review. Neurocomputing 2016, 187, 27–48.
[30] Kalinicheva E., Dino I., Jeremie S., Maria T. Unsupervised change detection analysis in
satellite image time series using deep learning combined with graph-based approaches. IEEE
Journal of Selected Topics in Applied Earth Observations and Remote Sensing 2020, 13,
1450–1466.
[31] Ma L., Yu L., Xueliang Z., Yuanxin Y., Gaofei Y., Brian A.J. Deep learning in remote sensing
applications: A meta-analysis and review. ISPRS Journal of Photogrammetry and Remote
Sensing 2019, 152, 166–177.
[32] Block J., Mehrdad Y., Mai N., Daniel C., Marta J., John G., Tom D., Ilkay A. An unsupervised
deep learning approach for satellite image analysis with applications in demographic
analysis. Proceedings – 13th IEEE International Conference on EScience, EScience 2017: 9–18.
[33] Oh K.S., Keechul J. GPU implementation of neural networks. Pattern Recognition 2004, 37(6),
1311–1314.
[34] Han W., Ruyi F., Lizhe W., Yafan C. A semi-supervised generative framework with deep
learning features for high-resolution remote sensing image scene classification. ISPRS
Journal of Photogrammetry and Remote Sensing 2018, 145, 23–43.
[35] Pan B., Zhenwei S., Xia X. MugNet: Deep learning for hyperspectral image classification using
limited samples. ISPRS Journal of Photogrammetry and Remote Sensing 2018, 145, 108–119.
[36] Gebru T., Jonathan K., Yilun W., Duyun C., Jia D., Erez L.A., Li F. Using deep learning and
google street view to estimate the demographic makeup of neighborhoods across the United
States. Proceedings of the National Academy of Sciences of the United States of America
2017, 114(50), 13108–13113.
[37] Duarte D., Nex F., Kerle N., Vosselman G. Satellite image classification of building damages
using airborne and satellite image samples in a deep learning approach. ISPRS Annals of the
Photogrammetry, Remote Sensing and Spatial Information Sciences 2018, 4(2), 89–96.
[38] Helber P., Benjamin B., Andreas D., Damian B. EuroSAT: A novel dataset and deep learning
benchmark for land use and land cover classification. ArXiv 2017, 1, 1–10.
[39] Aung H.T., Sao H.P., Wataru T. Building footprint extraction in yangon city from monocular
optical satellite image using deep learning. Geocarto International 2020, 0(0), 1–21.
[40] Song A., Yongil K., Youkyung H. Uncertainty analysis for object-based change detection in
very high-resolution satellite images using deep learning network. Remote Sensing 2020,
12(15).
[41] Hong K.Y., Pedro O.P., Scott W. Predicting global variations in outdoor PM2.5 concentrations using satellite images and deep convolutional neural networks. ArXiv. 2019.
[42] Hasanain B., Andrew D.B., Matthew L.B. Using model checking to detect simultaneous
masking in medical alarms. IEEE Transactions on Human-Machine Systems 2016, 46(2),
174–185.
[43] Bruzelius E., Matthew L., Avi K., Jordan D., Matteo D., Aaron B., Patrick D., Bruno S., Philip
J.L., Prabhjot S. Satellite images and machine learning can identify remote communities to
facilitate access to health services. Journal of the American Medical Informatics Association
2019, 26(8–9), 806–812.
[44] Sun L., Xu Y., Shangfeng J., Chen J., Quan W., Xinyan L., Jing W., Xueying Z. Satellite data
cloud detection using deep learning supported by hyperspectral data. International Journal of
Remote Sensing 2020, 41(4), 1349–1371.
[45] Chen H., Chandrasekar V., Robert C., Pingping X. A machine learning system for precipitation
estimation using satellite and ground radar network observations. IEEE Transactions on
Geoscience and Remote Sensing 2020, 58(2), 982–994.
[46] Tao Y., Xiaogang G., Alexander I., Soroosh S., Kuolin H. Precipitation identification with
bispectral satellite information using deep learning approaches. Journal of Hydrometeorology
2017, 18(5), 1271–1283.
[47] Zhan Y., Jian W., Jianping S., Guangliang C., Lele Y., Weidong S. Distinguishing cloud and
snow in satellite images via deep convolutional network. IEEE Geoscience and Remote
Sensing Letters 2017, 14(10), 1785–1789.
[48] Jeppesen J.H., Rune H.J., Fadil I., Thomas S.T. A cloud detection algorithm for satellite imagery
based on deep learning. Remote Sensing of Environment 2019, 229, 247–259.
[49] Prathap G., Ilya A. Deep learning approach for building detection in satellite multispectral
imagery. ArXiv 2018, 1, 461–465.
[50] Kim J., Kwangjin K., Jaeil C., Yong Q.K., Hong J.Y., Yang W.L. Satellite-based prediction of
arctic sea ice concentration using a deep neural network with multi-model ensemble. Remote
Sensing 2019, 11(1).
[51] Borowicz A., Hieu L., Grant H., Georg N., Caroline H., Vladislav K., Heather J.L. Aerial-trained
deep learning networks for surveying cetaceans from satellite imagery. PLoS ONE 2019, 14
(10), 1–15.
[52] Reichstein M., Gustau C.V., Bjorn S., Martin J., Joachim D., Nuno C., Prabhat. Deep learning
and process understanding for data-driven earth system science. Nature 2019, 566(7743),
195–204.
[53] Khan M.J., Adeel Y., Nizwa J., Shifa N., Khurram K. Automatic target detection in satellite
images using deep learning. Journal of Space Technology 2017, 7(1), 44–49.
[54] Meyer H., Meike K., Tim A., Thomas N. Comparison of four machine learning algorithms for
their applicability in satellite-based optical rainfall retrievals. Atmospheric Research 2016,
169, 424–433.
[55] Shakya S., Sanjeev K., Mayank G. Deep learning algorithm for satellite imaging based cyclone
detection. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
2020, 13, 827–839.
[56] Shen H., Tongwen L., Qiangqiang Y., Liangpei Z. Estimating regional ground-level PM2.5
directly from satellite top-of-atmosphere reflectance using deep belief networks. Journal of
Geophysical Research: Atmospheres 2018, 123(24), 13875–13886.
[57] Lee M.S., Kyung A.P., Jinho C., Ji E.P., Joon S.L., Ji H.L. Red tide detection using deep learning
and high-spatial resolution optical satellite imagery. International Journal of Remote Sensing
2020, 41(15), 5838–5860.
[58] Berriel R.F., André T.L., Alberto F.S., Thiago O.S. Deep learning based large-scale automatic
satellite crosswalk classification. ArXiv. 2017: 1–5.
[59] Helbich M., Yao Y., Ye L., Jinbao Z., Penghua L., Ruoyu W. Using deep learning to examine
street view green and blue spaces and their associations with geriatric depression in Beijing,
China. Environment International 2019, 126, 107–117.
[60] Yeh C., Anthony P., Anne D., George A., Zhongyi T., David L., Stefano E., Marshall B. Using
publicly available satellite imagery and deep learning to understand economic well-being in
Africa. Nature Communications 2020, 11(1), 1–11.
[61] Rohith G., Lakshmi S.K. Super-resolution based deep learning techniques for panchromatic
satellite images in application to pansharpening. IEEE Access 2020, 8, 162099–162121.
[62] Lu Lili W.G. Automatic quantification of settlement damage using deep learning of satellite
images 2020, 3, 2–7.
[63] Nguyen T.T., Thanh D.H., Minh T.P., Tuyet T.V., Thanh H.N., Quyet T.H., Jun J. Monitoring
agriculture areas with satellite images and deep learning. Applied Soft Computing Journal
2020, 95, 106565.
[64] Kussul N., Andrii K., Andrii S., Bohdan Y., Mykola L. Land degradation estimation from global
and national satellite based datasets within un program. Proceedings of the 2017 IEEE 9th
International Conference on Intelligent Data Acquisition and Advanced Computing Systems:
Technology and Applications, IDAACS 2017 1:383–386, 2017.
[65] Pinto M.M., Renata L., Ricardo M.T., Isabel F.T., Carlos C.C. A deep learning approach for
mapping and dating burned areas using temporal sequences of satellite images. ISPRS
Journal of Photogrammetry and Remote Sensing 2020, 160, 260–274.
[66] Waldner F., Foivos I.D. Deep learning on edge: Extracting field boundaries from satellite
images with a convolutional neural network. Remote Sensing of Environment 2020, 245
(February), 111741.
Editors’ Biography
Dr. Anuj Kumar is an Associate Professor of Mathematics at the University of Petroleum and Energy
Studies (UPES), Dehradun, India. Before joining UPES, he worked as an Assistant Professor
(Mathematics) in The ICFAI University, Dehradun, India. He has obtained his Master’s and doctorate
degrees in Mathematics from G. B. Pant University of Agriculture and Technology, Pantnagar, India.
His area of interest is reliability analysis and optimization. He has published many research
articles in journals of national and international repute. He is an Associate Editor of International
Journal of Mathematical, Engineering and Management Sciences. He is also a regular reviewer of
various reputed journals of Elsevier, IEEE, Springer, Taylor & Francis, and Emerald.

Dr. Sangeeta Pant received her doctorate from G. B. Pant University of Agriculture and Technology, Pantnagar, India. Currently, she is an Assistant Professor at the Department of Mathematics of the University of Petroleum and Energy Studies, Dehradun. She has published around 23 research articles in journals of national and international repute in her areas of interest and has been instrumental in various other research-related activities, such as editing and reviewing for various reputed journals and organizing and participating in conferences. Her areas of interest are numerical optimization, evolutionary algorithms, and nature-inspired algorithms.

Dr. Mangey Ram received the Ph.D. degree major in Mathematics and minor in Computer Science
from G. B. Pant University of Agriculture and Technology, Pantnagar, India. He has been a faculty
member for around 13 years and has taught several core courses in pure and applied mathematics
at undergraduate, postgraduate, and doctorate levels. He is currently Research Professor at
Graphic Era (Deemed to be University), Dehradun, India, and Visiting Professor at Peter the Great
St. Petersburg Polytechnic University, Saint Petersburg, Russia. Before joining the Graphic Era, he
was a Deputy Manager (Probationary Officer) with Syndicate Bank for a short period. He is Editor-in-Chief of International Journal of Mathematical, Engineering and Management Sciences; Journal of
Reliability and Statistical Studies; Journal of Graphic Era University; Series Editor of six book series
with Elsevier, CRC Press – A Taylor and Francis Group, Walter De Gruyter Publisher Germany, River
Publisher and a guest editor and an associate editor with various journals. He has published more
than 250 research publications (journal articles/books/book chapters/conference articles) in IEEE,
Taylor & Francis, Springer Nature, Elsevier, Emerald, World Scientific, and many other national and
international journals and conferences. Also, he has published more than 50 books (authored/
edited) with international publishers like Elsevier, Springer Nature, CRC Press – A Taylor and
Francis Group, Walter De Gruyter Publisher Germany, River Publisher. His fields of research are
reliability theory and applied mathematics. Dr. Ram is a Senior Member of the IEEE, Senior Life
Member of Operational Research Society of India, Society for Reliability Engineering, Quality and
Operations Management in India, and Indian Society of Industrial and Applied Mathematics. He has
been a member of the organizing committee of a number of international and national conferences,
seminars, and workshops. He has been conferred with Young Scientist Award by the Uttarakhand
State Council for Science and Technology, Dehradun, in 2009. He has been awarded the Best Faculty
Award in 2011; Research Excellence Award in 2015; and recently Outstanding Researcher Award in
2018 for his significant contribution in academics and research at Graphic Era, Dehradun, India.

Dr. Om Prakash Yadav is a Professor and Duin Endowed Fellow in the Department of Industrial and
Manufacturing Engineering (IME) at North Dakota State University (NDSU), Fargo. He received his
Ph.D. degree in Industrial Engineering from Wayne State University, MS in Industrial Engineering
from the National Institute of Industrial Engineering Mumbai (India), and BS in Mechanical
Engineering from Malaviya National Institute of Technology, Jaipur (India). His research interests

include reliability modeling and analysis, risk assessment, design optimization and robust design,
and manufacturing systems analysis. The research work of his group has been published in
Reliability Engineering and Systems Safety, Journal of Risk and Reliability, Quality and Reliability
Engineering International, International Journal of Production Research, Engineering Management
Journal, and IEEE Transaction of Systems, Man, and Cybernetics: Systems. He is currently serving as
Editor-in-Chief of International Journal of Reliability and Safety and on the editorial board of several
international journals. Dr. Yadav is a recipient of 2015 and 2018 IISE William A. J. Golomski Best
Paper Award and 2021 SRE Best Paper Award. He has published over 150 research papers in the
area of reliability, risk assessment, design optimization, and operations management. He is
currently a member of IISE, ASQ, SRE, and INFORMS. Dr. Yadav has also served as the Interim Chair
of IME department at NDSU for three years. He is the founding Director of the Center for Quality,
Reliability, and Maintainability Engineering (CQRME), which has been supported by 10 member companies
since 2013. Prior to his tenure at NDSU, Dr. Yadav spent almost three years at Ford Motor Company,
Dearborn, working as a Reliability Engineer in Ford Product Development Center (FPDC).
Index
ABC 11, 13–14, 24, 26–30, 32–33
algorithm 11, 13–14, 21, 24, 28–29, 31–32
ANOVA 145, 154, 160–161
applications 12
arrangement 21
autocorrelation function 107, 110
balancing 11, 28
big data 173, 177–178
brightness 145, 148, 150, 156, 168
CCD 146–148, 150, 153–154, 157–159, 167, 169
central composite design 145–146
comparison 21
costs 12–13, 15, 21
crime 129–132, 137–138, 140–143
cutting 18–19
cutting–filling 13, 18, 20–21, 28
defective product 52, 54
design 11–16, 18–21, 26
desirability 145–148, 155, 163, 165–169
diagrams 21, 24, 26, 28
distance 14, 19, 23, 26
earthwork 11–13, 15, 18, 21
economic production quantity 51, 53
elevations 16, 19, 23–24, 26
employed bees 14
entropy 120, 145, 147–148, 150, 153, 156, 169
FA 11, 13–14, 24, 26–31, 33
filling 18–19
findings 13, 24, 28
fitness function 28–29, 31–32
flock 14
fully overlying intervals 55
fuzzy set 117, 132
fuzzy sets 129–131, 139, 143
grade lines 11–13, 15–16, 18–21, 24, 26, 28
ground line 16
hesitant fuzzy elements 132–133
hesitant fuzzy set 129, 131, 132, 134
HFSs 130, 133–137, 143
highway 11–13, 15–16, 19, 21
holding cost 60
horizontal curves 12
infrastructure 12, 21
intelligence 14
intersection point 21, 23–24
interval arithmetic 54
interval number 53–54
interval order relation 55
interval-valued optimization problem 64
intrinsic viscosity 145, 148, 150, 156
intuitionistic fuzzy set 130
inventory cost components 52
iterations 13, 21, 24
kappa number 145, 148, 150, 156–157, 167
linkage 129–132, 137, 143
machine learning 173, 178–179, 181, 183
MADM 117–119, 121–125
manufacturer’s profit 63
MCDM 146–148, 156–157, 167
Melia dubia 145–146
metaheuristic 11, 13, 21, 24, 93–95, 104
meta-heuristic search algorithms 53, 93–94, 98
methods 11, 13–14, 21, 24, 26, 28
model 12, 14
modeling 145–146, 148, 168
multi-criteria decision making 129–130, 145
non-overlying intervals 55
onlookers 14
optimization 11–14, 15, 18, 20–21, 145–150, 155, 163, 165, 167–169
optimum 11–13, 16, 18–19, 21, 26, 28
oxygen delignification 145–146
paper industry 145–146
partially overlying intervals 55
particle swarm optimization technique 51, 64
production cost 59
production inventory model 52
profile section 13, 16, 19
PSO 11, 13–14, 24, 26–28, 30–32
QPSO technique 66
q-ROFS 117–120, 125
rejecting cost 61
remote sensing 173, 179–184
resemblance 129, 131, 134–135, 137–138, 140, 142–143
resemblance function 129, 133–135, 138
RetinaNet 182
RSM 146–148, 150, 167
sales revenue 63
satellite imageries 174, 178–179
score value 119
scrap items 72
screening process 52
sensitivity analysis 68
serial crimes 129
shortage cost 61
similarity measures 130, 133, 137, 143
soil 28
swarm 14–15
TOPSIS 145–148, 150, 152–153, 157, 165
total cost 62
trials 24
vertical curve 12–13, 23
De Gruyter Series on the Applications of Mathematics in Engineering and Information Sciences

Already published in the series


Volume 9: Linear Integer Programming. Theory, Applications, Recent Developments
Elias Munapo, Santosh Kumar
ISBN 978-3-11-070292-7, e-ISBN (PDF) 978-3-11-070302-3, e-ISBN (EPUB) 978-3-11-070311-5

Volume 8: Mathematics for Reliability Engineering. Modern Concepts and Applications
Mangey Ram, Liudong Xing (Eds.)
ISBN 978-3-11-072556-8, e-ISBN (PDF) 978-3-11-072563-6, e-ISBN (EPUB) 978-3-11-072559-9

Volume 7: Mathematical Fluid Mechanics. Advances on Convection Instabilities and Incompressible Fluid Flow
B. Mahanthesh (Ed.)
ISBN 978-3-11-069603-5, e-ISBN (PDF) 978-3-11-069608-0, e-ISBN (EPUB) 978-3-11-069612-7

Volume 6: Distributed Denial of Service Attacks. Concepts, Mathematical and Cryptographic Solutions
Rajeev Singh, Mangey Ram (Eds.)
ISBN 978-3-11-061675-0, e-ISBN (PDF) 978-3-11-061975-1, e-ISBN (EPUB) 978-3-11-061985-0

Volume 5: Systems Reliability Engineering. Modeling and Performance Improvement
Amit Kumar, Mangey Ram (Eds.)
ISBN 978-3-11-060454-2, e-ISBN (PDF) 978-3-11-061737-5, e-ISBN (EPUB) 978-3-11-061754-2

Volume 4: Systems Performance Modeling
Adarsh Anand, Mangey Ram (Eds.)
ISBN 978-3-11-060450-4, e-ISBN (PDF) 978-3-11-061905-8, e-ISBN (EPUB) 978-3-11-060763-5

Volume 3: Computational Intelligence. Theoretical Advances and Advanced Applications
Dinesh C. S. Bisht, Mangey Ram (Eds.)
ISBN 978-3-11-065524-7, e-ISBN (PDF) 978-3-11-067135-3, e-ISBN (EPUB) 978-3-11-066833-9

Volume 2: Supply Chain Sustainability. Modeling and Innovative Research Frameworks
Sachin Kumar Mangla, Mangey Ram (Eds.)
ISBN 978-3-11-062556-1, e-ISBN (PDF) 978-3-11-062859-3, e-ISBN (EPUB) 978-3-11-062568-4

Volume 1: Soft Computing. Techniques in Engineering Sciences
Mangey Ram, Suraj B. Singh (Eds.)
ISBN 978-3-11-062560-8, e-ISBN (PDF) 978-3-11-062861-6, e-ISBN (EPUB) 978-3-11-062571-4
