(Quantum Machine Intelligence) Siddhartha Bhattacharyya, Mario Köppen, Elizabeth Behrman, Ivan Cruz-Aceves - Hybrid Quantum Metaheuristics: Theory and Applications - CRC Press (2022)
Edited by
Siddhartha Bhattacharyya
Mario Köppen
Elizabeth Behrman
Ivan Cruz-Aceves
MATLAB® is a trademark of The MathWorks, Inc. and is used with permission. The MathWorks
does not warrant the accuracy of the text or exercises in this book. This book’s use or discussion of
MATLAB® software or related products does not constitute endorsement or sponsorship by The
MathWorks of a particular pedagogical approach or particular use of the MATLAB® software.
© 2022 selection and editorial matter, Siddhartha Bhattacharyya, Mario Köppen, Elizabeth Behrman, and Ivan Cruz-Aceves; individual chapters, the contributors
Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use. The authors and publishers have attempted to trace the copyright holders of all material reproduced in this publication and apologize to copyright holders if permission to publish in this form has not been obtained. If any copyright material has not been acknowledged please write and let us know so we may rectify in any future reprint.

Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, access www.copyright.com or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. For works that are not available on CCC please contact [email protected].
Trademark notice: Product or corporate names may be trademarks or registered trademarks and are
used only for identification and explanation without intent to infringe.
DOI: 10.1201/9781003283294
Elizabeth would like to dedicate this volume to JPK, EJB, and JFB;
and to the memory of CFB.
Ivan would like to dedicate this volume to his lovely family, his wife
Mary, his children Ivan and Yusef, and his mother Bety.
Contents
Editors
Preface
Contributors
Index
Editors
Dr. Siddhartha Bhattacharyya received his Bachelors in Physics and in Optics and Optoelectronics, and his Masters in Optics and Optoelectronics, from the University of Calcutta, India, in 1995, 1998, and 2000, respectively. He completed his PhD in Computer Science and Engineering from Jadavpur University, India, in 2008. He is the recipient of the University Gold Medal in Masters from the University of Calcutta. He is the recipient of several coveted awards, including the Distinguished HoD Award and Distinguished Professor Award conferred by the Computer Society of India, Mumbai Chapter, India, in 2017, the Honorary Doctorate Award (D. Litt.) from The University of South America, and the South East Asian Regional Computing Confederation (SEARCC) International Digital Award ICT Educator of the Year in 2017. He was appointed an ACM Distinguished Speaker for the tenure 2018–2020 and was inducted into the People of ACM Hall of Fame by ACM, USA, in 2020. He has been appointed an IEEE Computer Society Distinguished Visitor for the tenure 2021–2023. He has been elected a full foreign member of the Russian Academy of Natural Sciences and a full fellow of the Royal Society for Arts, Manufactures and Commerce (RSA), London, UK.
He is currently serving as the Principal of Rajnagar Mahavidyalaya, Rajnagar, Birbhum. He served as a Professor in the Department of Computer Science and Engineering of Christ University, Bangalore. He served as the Principal of RCC Institute of Information Technology, Kolkata, India, during 2017–2019 and as a Senior Research Scientist in the Faculty of Electrical Engineering and Computer Science of VSB Technical University of Ostrava, Czech Republic (2018–2019). Prior to this, he was Professor of Information Technology at RCC Institute of Information Technology, Kolkata, India, where he served as Head of the Department from March 2014 to December 2016, and, from 2011 to 2014, as Associate Professor of Information Technology. Before that, he served as an Assistant Professor in the Department of Computer Science and Information Technology of the University Institute of Technology, The University of Burdwan, India, from 2005 to 2011, and as a Lecturer in Information Technology at Kalyani Government Engineering College, India, during 2001–2005. He is a co-author of 6 books and the co-editor of 80 books, and has more than 300 research publications in international journals and conference proceedings to his credit. He holds 19 patents. He has been a member of the organizing and technical program committees of several national and international conferences. He is the founding Chair of ICCICN 2014, ICRCICN (2015, 2016, 2017, 2018), and ISSIP (2017, 2018)
(Kolkata, India). He was the General Chair of several international conferences, including WCNSSP 2016 (Chiang Mai, Thailand), ICACCP (2017, 2019) (Sikkim, India), ICICC 2018 (New Delhi, India), and ICICC 2019 (Ostrava, Czech Republic).
He is the Associate Editor of several reputed journals including Applied Soft Computing, IEEE Access, Evolutionary Intelligence, and IET Quantum Communications. He is the editor of the International Journal of Pattern Recognition Research and the founding Editor in Chief of the International Journal of Hybrid Intelligence, Inderscience. He has guest-edited several issues of several international journals. He is serving as the Series Editor of the IGI Global Book Series Advances in Information Quality and Management (AIQM), the De Gruyter Book Series Frontiers in Computational Intelligence (FCI), the CRC Press Book Series Computational Intelligence and Applications and Quantum Machine Intelligence, the Wiley Book Series Intelligent Signal and Data Processing, the Elsevier Book Series Hybrid Computational Intelligence for Pattern Analysis and Understanding, and the Springer Tracts on Human Centered Computing.
His research interests include hybrid intelligence, pattern recognition, multimedia
data processing, social networks, and quantum computing.
He is a life fellow of the Optical Society of India (OSI), India, a life fellow of the International Society of Research and Development (ISRD), UK, a fellow of the Institution of Engineering and Technology (IET), UK, a fellow of the Institute of Electronics and Telecommunication Engineers (IETE), India, and a fellow of the Institution of Engineers (IEI), India. He is also a senior member of the Institute of Electrical and Electronics Engineers (IEEE), USA, the International Institute of Engineering and Technology (IETI), Hong Kong, and the Association for Computing Machinery (ACM), USA.
He is a life member of the Cryptology Research Society of India (CRSI), the Computer Society of India (CSI), the Indian Society for Technical Education (ISTE), the Indian Unit for Pattern Recognition and Artificial Intelligence (IUPRAI), the Center for Education Growth and Research (CEGR), the Integrated Chambers of Commerce and Industry (ICCI), and the Association of Leaders and Industries (ALI). He is a member of the Institution of Engineering and Technology (IET), UK, the International Rough Set Society, the International Association for Engineers (IAENG), Hong Kong, the Computer Science Teachers Association (CSTA), USA, the International Association of Academicians, Scholars, Scientists and Engineers (IAASSE), USA, the Institute of Doctors Engineers and Scientists (IDES), India, The International Society of Service Innovation Professionals (ISSIP), and The Society of Digital Information and Wireless Communications (SDIWC). He is also a certified Chartered Engineer of the Institution of Engineers (IEI), India, and is on the Board of Directors of the International Institute of Engineering and Technology (IETI), Hong Kong.
Preface
Chapter 8 presents quantum-inspired algorithms that improve the exploration and exploitation capability of the elephant herd optimization. These algorithms are compared with their classical counterparts and are implemented on the Salinas dataset. The proposed qutrit-based algorithm is found to converge faster and produce more robust results. The Xie-Beni Index is used as the fitness function. Statistical measures such as the mean and standard deviation, together with the Kruskal-Wallis test, are used to establish the efficiency of the proposed algorithm. The F score is used to compare the segmented images obtained with the optimal cluster numbers. The proposed algorithms are found to perform better in most of the cases.
The Salp Swarm Algorithm (SSA) is a recently introduced metaheuristic algorithm applied to solving benchmark and real-world optimization problems. SSA has good exploitation ability during the search process; however, its exploration ability is limited. Chapter 9 attempts to enhance the balance between exploration and exploitation in SSA by hybridizing it with a quantum-inspired framework. The Delta potential-well model (DPWM) from quantum mechanics is known to improve convergence and population diversity, thereby enhancing the exploration ability of SSA. The proposed hybrid method is tested on well-known complex convex and nonconvex multiobjective benchmark problems. A comparative study is conducted between the proposed MQSSA and the well-regarded algorithms MSSA and NSGA-II. The experimental results show that the overall performance of MQSSA is competitive with these approaches.
Chapter 10 proposes a quantum-inspired multiobjective algorithm for automatic clustering of gray scale images. The well-known Non-dominated Sorting Genetic Algorithm II (NSGA-II), inspired by the intrinsic principles of quantum mechanical phenomena, is presented here for solving the aforesaid problem by optimizing two objective functions simultaneously. The proposed algorithm has been compared with its classical counterpart, and the experimental results over six Berkeley gray scale images certify the efficiency and robustness of the proposed algorithm with reference to the optimal computational time, mean fitness value, standard deviation, and standard error. Finally, a statistical superiority t-test has been conducted between these two algorithms to ascertain the supremacy of the results of the proposed algorithm.
Chapter 11 summarizes the findings reported in the volume with future directions
of research.
The book is intended for researchers, academicians, and practitioners. It will serve as ready reference material for researchers and academicians, as it covers a wide range of subject areas falling under the umbrella of hybrid quantum metaheuristics. Additionally, it may be treated as a handbook for practitioners and policymakers. The editors would feel rewarded if the concepts presented in the book serve a social cause.
August, 2021
Siddhartha Bhattacharyya, Birbhum, India
Elizabeth Behrman, Kansas, USA
Mario Köppen, Fukuoka, Japan
Ivan Cruz-Aceves, Guanajuato, Mexico
Contributors
1 An Introductory Illustration to Quantum-Inspired Metaheuristics
Quantum-inspired algorithms exploit features inspired by the theory and principles of quantum mechanical systems, such as qubits (quantum bits) and the superposition of states. These ideas underpin a computing paradigm far speedier than the conventional computing framework. The principal reason for this enhanced computing speed is the inherent parallelism of qubits, the basic units of a quantum computer. This makes quantum computers far more effective than their classical counterparts at factorizing large numbers [15] and searching elements in databases [14].
Quantum-inspired metaheuristics are a comparatively new area of research within the metaheuristics class. They draw fundamentally from two dissimilar fields, viz., metaheuristics and quantum computing. Quantum computing harnesses the astounding laws of quantum mechanics for processing information. A conventional (traditional) computer employs long streams of “bits,” each of which encodes either 0 or 1. A quantum computer, on the contrary, employs quantum bits, or qubits. A qubit is a quantum system that encodes 0 and 1 into two distinguishable quantum states. Because qubits behave quantum mechanically, “superposition” and “entanglement” can be capitalized on for better efficiency. A quantum computer is able to process a huge number of calculations at once: unlike a classical computer, which works with 0s and 1s, it can use 0s, 1s, and their superposed forms. Hence, using these amazing features of quantum computing, a quantum computer can perform certain otherwise impenetrable tasks more efficiently and more quickly than its classical counterpart.
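As a concrete illustration of the state representation that quantum-inspired algorithms borrow, the following minimal Python sketch (all names are illustrative, not from any particular library) models a qubit as a pair of probability amplitudes (α, β) with |α|² + |β|² = 1 and samples a classical bit from it:

    import numpy as np

    def make_qubit(alpha, beta):
        # Normalize the amplitude pair so that |alpha|^2 + |beta|^2 = 1.
        v = np.array([alpha, beta], dtype=complex)
        return v / np.linalg.norm(v)

    def observe(qubit, rng):
        # Collapse to 0 or 1 with probabilities given by squared amplitudes.
        return 0 if rng.random() < abs(qubit[0]) ** 2 else 1

    rng = np.random.default_rng(42)
    q = make_qubit(1, 1)                         # equal superposition of 0 and 1
    print([observe(q, rng) for _ in range(10)])  # roughly half 0s, half 1s

A register of n such qubits encodes a distribution over all 2^n bit strings at once, which is the parallelism alluded to above.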
In the steepest ascent/descent strategy, the best solution is found in the neighborhood of the present solution and used in the next iteration; metaheuristics built on this strategy are often called hill-climbers. The other popular strategy is the mildest ascent/descent strategy, which chooses the particular solution that improves the present solution by the smallest amount. Another important move strategy is the first-improving strategy, where, instead of examining all moves, the first move that improves the present solution is chosen.
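A minimal sketch of a first-improving hill-climber, assuming the caller supplies the objective and a neighborhood generator (both placeholders, not prescribed by this chapter):

    def hill_climb_first_improvement(x, objective, neighbors, max_iters=1000):
        # Move to the first improving neighbor (minimization); stop at a
        # local optimum or after max_iters steps.
        for _ in range(max_iters):
            improved = False
            for y in neighbors(x):
                if objective(y) < objective(x):
                    x, improved = y, True
                    break          # first improving move, not the best one
            if not improved:
                break              # no improving neighbor: local optimum
        return x

Swapping the inner loop for min(neighbors(x), key=objective) turns this into the steepest-descent variant.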
Simulated Annealing (SA) is a very popular metaheuristic, which employs a move strategy that imitates the annealing procedure of a crystalline solid. It was first introduced by Kirkpatrick et al. [5] in 1983 and was designed on the basis of the method presented by Metropolis et al. [16] in 1953. The concept of a “local optimum” denotes a solution that is better than any other solution in its neighborhood, while the “global optimum” is the best solution to be found in an optimization problem. When the current solution falls into a local optimum, the metaheuristic must adopt a strategy to “escape” from this situation. These kinds of metaheuristics are basically structures that depend mainly on iterative improvement for finding good solutions.
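The Metropolis acceptance rule at the heart of SA can be sketched as follows (a minimal illustration; the geometric cooling schedule and the neighbor move are simple placeholder choices, not the chapter's prescription):

    import math, random

    def simulated_annealing(x, objective, neighbor, t0=1.0, cooling=0.995,
                            steps=10000):
        best, t = x, t0
        for _ in range(steps):
            y = neighbor(x)
            delta = objective(y) - objective(x)
            # Always accept improvements; accept worsening moves with
            # probability exp(-delta / t), which shrinks as t cools.
            if delta < 0 or random.random() < math.exp(-delta / t):
                x = y
                if objective(x) < objective(best):
                    best = x
            t *= cooling
        return best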
Other popular and commonly used strategies apply a random change, called a perturbation, to the present solution. Two of them are popularly known as Iterated Local Search (ILS) and Multi-start Local Search (MLS), respectively [17].
One more important strategy of this category uses memory structures to record information about the previous progress of the search procedure, aiming to acquire good solutions. The commonly used metaheuristics that adopt this strategy are termed Tabu Search (TS) [6][7][8] algorithms. Different categories of memory structures can be used to recollect specific features of the trajectory that the algorithm has taken through its search space. A tabu list stores the last solutions that have been encountered during operation and prohibits revisiting these solutions as long as they remain on the list. Another popular kind of metaheuristic, known as Guided Local Search (GLS) [18], presents a different type of memory, known as an augmented objective function, which encompasses a penalty factor for each of the potential elements. When the search reaches a local optimum, the penalty factor is increased for each element of the present solution. This allows the search process to escape from the local optimum.
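A minimal sketch of the tabu-list mechanism (solutions are assumed hashable; the tenure and neighborhood function are illustrative placeholders):

    from collections import deque

    def tabu_search(x, objective, neighbors, tabu_size=20, max_iters=1000):
        best = x
        tabu = deque([x], maxlen=tabu_size)     # fixed-tenure memory
        for _ in range(max_iters):
            candidates = [y for y in neighbors(x) if y not in tabu]
            if not candidates:
                break
            x = min(candidates, key=objective)  # best non-tabu move,
            tabu.append(x)                      # even if it worsens things
            if objective(x) < objective(best):
                best = x
        return best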
In the literature, there exist several variations of the Greedy Randomized Adaptive Search Procedure (GRASP), such as Reactive GRASP, in which the parameter that defines the size of the restricted candidate list (RCL) in the construction phase is self-adjusted based on the quality of the solutions found earlier [27]. In addition, several other techniques of this kind exist; some are employed to speed up the search in various settings, such as cost perturbations, memorization and learning, and many others [28].
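A minimal sketch of GRASP's greedy-randomized construction phase, where the RCL keeps the candidates whose greedy cost is within a fraction alpha of the best (alpha and the incremental cost function are assumed placeholders):

    import random

    def grasp_construct(candidates, cost, alpha=0.3):
        # Build a solution greedily, but pick uniformly at random from the
        # restricted candidate list (RCL) instead of always taking the best.
        solution, remaining = [], list(candidates)
        while remaining:
            costs = {c: cost(c, solution) for c in remaining}
            c_min, c_max = min(costs.values()), max(costs.values())
            rcl = [c for c in remaining
                   if costs[c] <= c_min + alpha * (c_max - c_min)]
            choice = random.choice(rcl)
            solution.append(choice)
            remaining.remove(choice)
        return solution

Reactive GRASP, as described above, would additionally adapt alpha between restarts according to the quality of the solutions each value has produced.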
While most quantum-inspired algorithms exploit various features of quantum computing in the development process, entanglement-induced optimization algorithms specifically use the quantum entanglement feature. When dealing with high-dependence problems, conventional optimization techniques have proved inefficient, especially at locating the global optimum. This problem has been addressed by introducing a novel metaheuristic, called the entanglement-enhanced quantum-inspired tabu search algorithm (Entanglement-QTS) [49].
The quantum-inspired tabu search (QTS) is a popular, effective, simple, and robust metaheuristic. Unlike other QIEAs, this algorithm interestingly uses both the best and the worst solutions: the former guides the individual toward better solutions, while the latter steers it away from worse ones. QTS can speedily reach the global optimum. Several applications of the QTS algorithm, with encouraging and unmatched searching capability, can be found in the literature [50][51][52]. Entanglement-QTS uses the backbone of the quantum-inspired tabu search together with the encouraging feature of quantum computing called quantum entanglement.
In comparison with the other QIEAs, Entanglement-QTS uses qubits that are in entangled states. These entangled states exhibit a heightened degree of correlation that ties the variables together. This represents a state-of-the-art idea that can notably improve the handling of high-dependence and multimodal problems. The algorithm is able to find optimal solutions and to balance diversification and intensification. Entanglement-QTS uses the quantum NOT gate to escape local optima, strengthens intensification through an entanglement local search, and speeds up the optimization procedure by applying entangled states.
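The chapter gives no pseudocode, but the core loop shared by most QIEAs (including the QTS family) is to sample classical bit strings from a population of qubit angles and then rotate each angle toward the best solution found so far. The sketch below, with an assumed fixed rotation step delta in place of the usual lookup table of rotation angles [77], illustrates one such update for a minimization problem:

    import numpy as np

    def qiea_step(theta, best, objective, rng, delta=0.05):
        # theta holds one rotation angle per bit, with P(bit = 1) = sin(theta)^2.
        probs = np.sin(theta) ** 2
        sample = (rng.random(theta.size) < probs).astype(int)
        if objective(sample) < objective(best):
            best = sample
        # Rotate each qubit angle toward the corresponding bit of `best`.
        theta = theta + delta * np.where(best == 1, 1.0, -1.0)
        return np.clip(theta, 0.0, np.pi / 2), best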
1.7 CONCLUSION
This chapter presents an outline of the basic theory and concepts pertaining to quantum-inspired metaheuristics and throws light on several types of quantum-inspired metaheuristics in detail. It also offers a bird's-eye view of different bi-level/multi-level quantum system-based optimization techniques. In addition, several entanglement-induced optimization techniques and the W-state encoding of optimization methods have been discussed. Applications related to the theme of the topic have also been provided, which will certainly bring the readers up to date.
REFERENCES
1. C. Blum and A. Roli. Metaheuristics in combinatorial optimization: Overview and conceptual comparison. Technical Report TR/IRIDIA/2001-13, IRIDIA, 2001.
2. F. Glover and G. A. Kochenberger. Handbook of Metaheuristics. Kluwer Academic Publishers, 2003.
3. J. Kennedy and R. Eberhart. Particle swarm optimization. In: Proceedings of the IEEE
International Conference on Neural Networks (ICNN95), Perth, Australia, 4:1942–1948,
1995.
4. M. Dorigo, V. Maniezzo, and A. Colorni. The ant system: Optimization by a colony of co-
operating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B, 26(1):29–
41, 1996.
5. S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. Science, 220:671–680, 1983.
6. F. Glover. Tabu search – Part I. ORSA Journal on Computing, 1(3):190–206, 1989.
7. F. Glover. Tabu search – Part II. ORSA Journal on Computing, 2(1):4–32, 1990.
8. F. Glover. Tabu search and adaptive memory programming: Advances, applications and
challenges. In: Barr, Helgason, and Kennington, editors, Interfaces in Computer Science
and Operations Research. Kluwer Academic Publishers, 1996.
9. R. Storn and K. Price. Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. Technical Report TR-95-012, ICSI, 1995.
10. J. L. Cohon. Multiobjective Programming and Planning. Academic Press, New York, 1978.
11. D. E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning.
Addison-Wesley Longman Publishing Co., Inc., Boston, MA, 1989.
12. D. A. V. Veldhuizen and G. B. Lamont. Multiobjective evolutionary algorithms: Analyzing the state-of-the-art. Evolutionary Computation, 8(2):125–147, 2000.
13. K. Deb. Multi-objective Optimization Using Evolutionary Algorithms. Wiley, Chichester, UK, 2001.
14. L. K. Grover. Quantum computers can search rapidly by using almost any transformation.
Physical Review Letters, 80(19):4329–4332, 1998.
15. P. W. Shor. Polynomial-time algorithms for prime factorization and discrete logarithms on
a quantum computer. SIAM Journal of Computing, 26(5):1484–1509, 1997.
16. N. Metropolis, A. W. Rosenbluth, M.N. Rosenbluth, A. H. Teller, and E. Teller. Equa-
tion of state calculations by fast computing machines. The Journal of Chemical Physics,
21(6):1087–1092, 1953.
17. H. Lourenço, O. Martin, and T. Stützle. Iterated local search. In: Handbook of Metaheuristics, 2003.
18. C. Voudouris and E. Tsang. Guided local search and its application to the traveling sales-
man problem. European Journal of Operational Research, 113(2):469–499, 1999.
19. M. Dorigo, V. Maniezzo, and A. Colorni. The ant system: An autocatalytic optimizing process. Technical Report TR91-016, Politecnico di Milano, Italy, 1991.
20. A. Colorni, M. Dorigo, and V. Maniezzo. Distributed optimization by ant colonies. In:
Proceedings of ECAL91 European Conference on Artificial Life, pages 131–142, Elsevier,
Amsterdam, The Netherlands, 1991.
21. L. Wang, Q. Niu, and M. R Fei. A novel quantum ant colony optimization algorithm and
its application to fault diagnosis. Transactions of the Institute of Measurement and Control,
30(3–4):313–329, 2008.
22. S. Dey, S. Bhattacharyya, and U. Maulik. Quantum inspired meta-heuristic algorithms for
multi-level thresholding for true colour images. In: Proceeding of 2013 Annual IEEE India
Conference (INDICON), Mumbai, India, 2013.
23. S. Dey, S. Bhattacharyya, and U. Maulik. Efficient quantum inspired metaheuristics for
multi-level true colour image thresholding. Applied Soft Computing, 56:472–513, 2017.
24. S. Dey, S. Bhattacharyya, and U. Maulik. New quantum inspired meta-heuristic techniques
for multi-level colour image thresholding. Applied Soft Computing, 46:677–702, 2016.
25. T. A. Feo and M. G. C. Resende. Greedy randomized adaptive search procedures. Journal
of Global Optimization, 6(2):109–133, 1995.
26. J. P. Hart and A. W. Shogan. Semi-greedy heuristics: An empirical study. Operations Re-
search Letters, 6(3):107–114, 1987.
27. M. Prais and C. C. Ribeiro. Reactive grasp: An application to a matrix decomposition
problem in TDMA traffic assignment. INFORMS Journal on Computing, 12(3):164–176,
2000.
28. M. G. C. Resende and C. C. Ribeiro. Greedy randomized adaptive search procedures. In: Handbook of Metaheuristics, Springer, 2003.
29. J. Holland. Adaptation in Natural and Artificial Systems. University of Michigan Press, Ann Arbor, MI, 1975.
30. H.-G. Beyer and H.-P. Schwefel. Evolution strategies: A comprehensive introduction. Jour-
nal Natural Computing, 1(1):3–52, 2002.
31. D. B. Fogel. Evolutionary Computation. IEEE Press, Piscataway, NJ, 1995.
32. F. Glover, M. Laguna, and R. Martí. Fundamentals of scatter search and path relinking. Control and Cybernetics, 29(3):653–684, 2000.
33. F. Glover, M. Laguna, and R. Marti. Scatter search and path relinking: Advances and ap-
plications. Handbook of metaheuristics, 2003.
34. I. Salman, O. Ucan, O. Bayat, and K. Shaker. Impact of metaheuristic iteration on artificial
neural network structure in medical data. Processes, 6(57), 2018.
35. P. Bertolazzi, G. Felici, P. Festa, G. Fiscon, and E. Weitschek. Integer programming mod-
els for feature selection: New extensions and a randomized solution algorithm. European
Journal of Operational Research, 250:389–399, 2016.
36. P. Festa and M. G. C. Resende. Basic components and enhancements. Telecommunication
Systems, 46:253–271, 2011.
37. C. Blum, J. Puchinger, G.R. Raidl, and A. Roli. Hybrid metaheuristics in combinatorial
optimization: A survey. Applied Soft Computing, 11:4135–4151, 2011.
38. G. Souza, E. Goldbarg, M. Goldbarg, and A. Canuto. A multiagent approach for meta-
heuristics hybridization applied to the traveling salesman problem. In: Proceedings of the
2012 Brazilian Symposium on Neural Networks, Curitiba, Parana, Brazil, 20–25 October
2012, pages 208–213, 2012.
39. Z. Tiejun, T. Yihong, and X. Lining. A multi-agent approach for solving traveling salesman
problem. Wuhan University Journal of Natural Sciences, 11:1104–1108, 2006.
40. X.-F. Xie and J. Liu. Multiagent optimization system for solving the traveling salesman
problem (TSP). IEEE Transactions on Systems, Man, and Cybernetics, Part B, 39:489–
502, 2009.
41. F. Fernandes, S. Souza, M. Silva, H. Borges, and F. Ribeiro. A multiagent architecture for
solving combinatorial optimization problems through metaheuristics. In: Proceedings of
the IEEE International Conference on Systems, Man and Cybernetics, San Antonio, TX,
USA, 11–14 October 2009, pages 3071–3076, 2009.
42. R. Malek. An agent-based hyper-heuristic approach to combinatorial optimization prob-
lems. In: Proceedings of the IEEE International Conference on Intelligent Computing and
Intelligent Systems (ICIS), Xiamen, China, 29–31 October 2010, pages 428–434, 2010.
43. M. Milano and A. Role. MAGMA: A multiagent architecture for metaheuristics. IEEE
Transactions on Systems, Man, and Cybernetics, Part B, 34:925–941, 2004.
44. G. X. Zhang. Quantum-inspired evolutionary algorithms: A survey and empirical study.
Journal of Heuristics, 17(3):303–351, 2011.
45. G. X. Zhang. Time-frequency atom decomposition with quantum-inspired evolutionary
algorithms. Circuits, Systems, and Signal Processing, 29(2):209–233, 2010.
46. M. D. Platel, S. Schliebs, and N. Kasabov. Quantum-inspired evolutionary algorithm: A
multimodel EDA. IEEE Transactions on Evolutionary Computation, 13(6):1218–1232,
2009.
47. K. H. Han and J.-H. Kim. Quantum-inspired evolutionary algorithm for a class of com-
binatorial optimization. IEEE Transactions on Evolutionary Computation, 6(6):580–593,
2002.
48. K. H. Han and J.-H. Kim. Quantum-inspired evolutionary algorithms with a new termina-
tion criterion, h gate, and two-phase scheme. IEEE Transactions on Evolutionary Compu-
tation, 8(2):156– 169, 2004.
49. S. Y. Kuo and Y. H. Chou. Entanglement-enhanced quantum-inspired Tabu search algo-
rithm for function optimization. IEEE Access, 5, 2017.
50. H.-P. Chiang, Y.-H. Chou, C.-H. Chiu, S.-Y. Kuo, and Y.- M. Huang. A quantum-inspired
Tabu search algorithm for solving combinatorial optimization problems. Soft Computing,
18(9):1771–1781, 2014.
51. Y.-H. Chou, S.-Y. Kuo, C.-Y. Chen, and H.-C. Chao. A rule-based dynamic decision-
making stock trading system based on quantum-inspired Tabu search algorithm. IEEE
Access, 2:883–896, 2014.
52. Y.-H. Chou, S.-Y. Kuo, C. Kuo, and Y.-C. Tsai. Intelligent stock trading system based on
QTS algorithm in Japan’s stock market. In: Proceedings of the IEEE International Confer-
ence on Systems, Man and Cybernetics, San Antonio, TX, pages 997–982, 2013.
53. W. Dür, G. Vidal, and J. I. Cirac. Three qubits can be entangled in two inequivalent ways.
Physical Review A, 62(6), 2000.
54. E. D'Hondt and P. Panangaden. The computational power of the W and GHZ states. Quantum Information & Computation, 6(2), 2006.
55. X. B. Chen, Q. Y. Wen, F. Z. Guo, Y. Sun, G. Xu, and F. C. Zhu. Controlled quantum
secure direct communication with W state. International Journal of Quantum Information,
6(4):899–906, 2008.
56. M. M. Cunha, A. Fonseca, and E. O. Silva. Tripartite entanglement: Foundations and ap-
plications. Universe, 5(209), 2019.
57. D. Cruz, R. Fournier, F. Gremion, A. Jeannerot, K Komagata, T. Tosic, J. Thiesbrummel,
C. L. Chan, N. Macris, M. A. Dupertuis, and J. G. Clement. Efficient quantum algorithms
for GHZ and W states, and implementation on the IBM quantum computer. Advanced
Quantum Technologies, 2(5–6), 2019.
58. S. A. MirHassani, S. Raeisi, and A. Rahmani. Quantum binary particle swarm
optimization-based algorithm for solving a class of bi-level competitive facility location
problems. Optimization Methods and Software, 30(4):756–768, 2015.
59. G. Zhang, G. Zhang, Y. Gao, and J. Lu. Competitive strategic bidding optimization in
electricity markets using bi-level programming and swarm technique. IEEE Transactions
on Industrial Electronics, 58(6):2138–2146, 2011.
60. S. Dey, S. Bhattacharyya, and U. Maulik. Quantum inspired genetic algorithm and parti-
cle swarm optimization using chaotic map model based interference for gray level image
thresholding. Swarm and Evolutionary Computation, 15:38–57, 2014.
61. T. Zhang, T. Hu, J. W. Chen, Z. Wan, and X. Guo. Solving bi-level multiobjective pro-
gramming problem by elite quantum behaved particle swarm optimization. Abstract and
Applied Analysis, 2012, 2012.
62. S. Kumar, P. Kumar, T. K. Sharma, and M. Pant. Bi-level thresholding using PSO, artificial
bee colony and MRLDE embedded with Otsu method. Memetic Computing, 5:323–334,
2013.
63. X. Chang, Z. Ma, Y. Yang, Z. Zeng, and A. G. Hauptmann. Bi-level semantic representation
analysis for multimedia event detection. Memetic Computing, 27(5):1180–1197, 2017.
64. X. Yan, N. Lv, Z. Liu, and K. Xu. Quantum-inspired evolutionary algorithm for transporta-
tion network design optimization. In: Proceeding of 2008 Second International Conference
on Genetic and Evolutionary Computing, Hubei, China, 2008.
65. S. Dey, S. Bhattacharyya, and U. Maulik. Quantum-inspired multi-objective simulated an-
nealing for bi-level image thresholding. Quantum Inspired Computational Intelligence, Re-
search and Applications, 2017.
66. S. Dey, I. Saha, U. Maulik, and S. Bhattacharyya. New quantum inspired meta-heuristic
methods for multi-level thresholding. In: Proceeding of 2013 International Conference
on Advances in Computing, Communications and Informatics (ICACCI), Mysore, India,
2013.
67. V. Tkachuk. Quantum genetic algorithm on multilevel quantum systems. Mathematical
Problems in Engineering, 2018:1–12, 2018.
68. F. A. Cardenas-Lopez, L. Lamata, J. C. Retamal, and E. Solano. Multiqubit and multi-
level quantum reinforcement learning with quantum technologies. PLoS ONE, 13(7):1–12,
2018.
69. P. Niemann, R. Wille, and R. Drechsler. Equivalence checking in multi-level quantum
systems. International Conference on Reversible Computation, LNCS, 8507:201–215,
2014.
70. S. Carrasco, J. Rogan, and J. A. Valdivia. Speeding up maximum population transfer in
periodically driven multi-level quantum systems. Scientific Reports, 9, 2019.
71. M. Grace, C. Brif, H. Rabitz, I. Walmsley, R. Kosut, and D. Lidar. Encoding a qubit into
multilevel subspaces. New Journal of Physics, 8, 2006.
72. B. C. Roy and P. K. Das. Optimal control of multi-level quantum system with energy cost
functional. International Journal of Control, 80(8):1299–1306, 2007.
73. J. Cao, H. Gao, and M. Diao. A simple quantum-inspired particle swarm optimization and
its application. Information Technology Journal, 10(12):2315–2321, 2011.
74. H.-P. Chiang, Y.-H. Chou, C.-H. Chiu, S.-Y. Kuo, and Y.-M. Huang. A quantum-inspired
Tabu search algorithm for solving combinatorial optimization problems. Soft Computing,
18(9):1771–1781, September 2014.
75. S. Kuo and Y. Chou. Entanglement-enhanced quantum-inspired Tabu search algorithm for
function optimization. IEEE Access, 5:13236–13252, 2017.
76. H. Wang, J. Liu, J. Zhi, and C. Fu. The improvement of quantum genetic algorithm
and its application on function optimization. Mathematical Problems in Engineering,
2013(730749):1–10, 2013.
77. H. Xiong, Z. Wu, H. Fan, G. Li, and G. Jiang. Quantum rotation gate in quantum-inspired
evolutionary algorithm: A review, analysis and comparison study. Swarm and Evolutionary
Computation, 42:43–57, 2018.
78. D. Zouache, F. Nouioua, and A. Moussaoui. Quantum-inspired firefly algorithm with parti-
cle swarm optimization for discrete optimization problems. Soft Computing, 20(7):2781–
2799, July 2016.
79. H. N. Abdull Hamed, N. Kasabov, Z. Michlovsky, and S. M. Shamsuddin. String pattern
recognition using evolving spiking neural networks and quantum inspired particle swarm
optimization. In: Neural Information Processing. Lecture Notes in Computer Science, vol.
5864, pages 611–619. Springer, Berlin, Heidelberg, 2009.
80. J. Zhang, H. Li, Z. Tang, Q. Lu, X. Zheng, and J. Zhou. An improved quantum-inspired
genetic algorithm for image multilevel thresholding segmentation. Mathematical Problems
in Engineering, 2014(295402):1–12, 2014.
81. S. Dey, S. De, D. Ghosh, D. Konar, S. Bhattacharyya, and J. Platos. A novel quantum
inspired sperm whale metaheuristic for image thresholding. In: 2019 Second International
Conference on Advanced Computational and Communication Paradigms (ICACCP), pages
1–7, Feb 2019.
82. S. Das, S. De, and S. Bhattacharyya. True color image segmentation using quantum-
induced modified-genetic-algorithm-based FCM algorithm. In: Quantum-Inspired Intelli-
gent Systems for Multimedia Data Analysis, pages 55–94. Research Essentials Collection.
IGI Global, 2018.
83. S. Das, S. De, S. Bhattacharyya, and A. E. Hassanien. Color MRI image segmentation us-
ing quantum-inspired modified genetic algorithm-based FCM. In: Recent Trends in Signal
and Image Processing. Advances in Intelligent Systems and Computing, vol. 727, pages
151–164. Springer, Singapore, 2019.
84. F. Liu, H. Duan, and Y. Deng. A chaotic quantum-behaved particle swarm optimization
based on lateral inhibition for image matching. Optik, 123(12):1955–1960, 2012.
85. Y. Feng, H. Yin, H. Lu, L. Cao, and J. Bai. FCM-based quantum artificial bee colony
algorithm for image segmentation. In: Proceedings of the 10th International Conference
on Internet Multimedia Computing and Service, pages 6:1–6:7, 2018.
86. E. Osaba, J. Del Ser, A. Iglesias, and X.-S. Yang. Soft computing for swarm robotics: New
trends and applications. Journal of Computational Science, 39, 2020.
87. Y. Tan and Z. Zheng. Research advance in swarm robotics. Defence Technology 9:18–39,
2013.
88. M. Contreras-Cruz, V. Ayala, and U. Hernandez-Belmonte. Mobile robot path planning us-
ing artificial bee colony and evolutionary programming. Applied Soft Computing, 30:319–
328, 2015.
89. J. Gu, X. Gu, and B. Jiao. A quantum genetic based scheduling algorithm for stochas-
tic flow shop scheduling problem with random breakdown. IFAC Proceedings Volumes,
41(2):63–68, 2008.
90. J. Gu, X. Gu, and M. Gub. A novel parallel quantum genetic algorithm for stochastic
job shop scheduling. Journal of Mathematical Analysis and Applications, 355(1):63–81,
2009.
2 A Quantum-Inspired Approach to Collective Combine Basic Classifiers
Figure 2.1: Bagging with final ensemble combiner (Adapted from Gorodetsky, V.
& Serebryakov, S. (2006). Methods and algorithms of the collective recognition: a
survey. SPIIRAS Proceedings, 3(1), 139-171)
Since the selection is done with replacement, some examples from the training sample may be left out; this type of selection is called bootstrap sampling. The L samples obtained are used to train L models, which in turn are combined into an ensemble. In regression problems, the outputs of the models are averaged; in classification problems, the voting method is usually used. Bagging leads to improvements for unstable algorithms such as artificial neural networks and classification and regression trees (CARTs), and an improvement in pattern recognition due to bagging has been noted in several works [3][4]. Thus, bagging is useful in the case of diverse classifiers and instability, when small changes in the initial sample lead to significant changes in the classification [5].
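A minimal sketch of bagging, assuming train_model is a user-supplied function returning a model that maps a sample to a class label (names are illustrative):

    import numpy as np

    def bagging_fit(X, y, train_model, n_models=10, rng=None):
        # Train each model on a bootstrap sample: drawn with replacement,
        # same size as the original training set.
        rng = rng or np.random.default_rng()
        models = []
        for _ in range(n_models):
            idx = rng.integers(0, len(X), size=len(X))
            models.append(train_model(X[idx], y[idx]))
        return models

    def bagging_predict(models, x):
        # Majority vote over the ensemble's class predictions.
        votes = [m(x) for m in models]
        return max(set(votes), key=votes.count)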
As seen in Figure 2.1, the final ensemble combiner can use different ways to combine the classifier decisions, including voting, stacking, and ensemble selection. Existing classification algorithms can be grouped according to the principle of their operation:
1. Euclidean distance:

d_E(x, y) = \sqrt{\sum_{i=1}^{n} (x_i - y_i)^2} \qquad (2.1)

2. Manhattan distance:

d_{Man}(x, y) = \sum_{i=1}^{n} |x_i - y_i| \qquad (2.2)

3. Chebyshev distance:

d_{Ch}(x, y) = \max_i |x_i - y_i| \qquad (2.3)

4. Minkowski distance of the pth order:

d_{Mink}(x, y) = \left( \sum_{i=1}^{n} |x_i - y_i|^p \right)^{1/p} \qquad (2.4)
where E_k is the standard of the kth class; n_k is the number of objects in the sample of the kth class; m is the dimension of the feature space; and K is the number of classes. According to the chosen metric, the distance from the classified object to each of the standards is calculated, and the object is assigned to the class whose standard is nearest.
2. Method of k nearest neighbors [6]: For this algorithm, the data must be represented as a matrix of distances between the sampled objects, calculated according to a certain metric. The classes of the k objects closest to the classified object are considered, and the object is assigned to the class that occurs most often among its neighbors. When solving practical problems with this method, it is very important to choose the correct metric for calculating the distance between objects, as well as the value of the parameter k. If the value of the parameter is too small (for example, k = 1), the algorithm becomes susceptible to the negative influence of outliers; if the value of k is too high, too many neighboring objects are included in the calculations, which can also degrade the quality of the classification, since among the neighbors of the classified object there may be many objects of another class. (A minimal code sketch of these distance-based rules is given after this list.)
3. The method of potential functions [7]: This method is based on the physical principle of the potential of the electric field of a charged particle. The distance from the classified object to each object of the training sample is calculated. The decision rule is constructed as in the method of nearest neighbors; the difference is that each sample object carries some measure of importance (“charge”) relative to the classified object.
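As noted above, here is a minimal Python sketch of the metrics (2.1)-(2.4) and of the two distance-based rules (nearest standard and k nearest neighbors); the function names and the default metric are illustrative assumptions, not code from the chapter:

    import numpy as np
    from collections import Counter

    def minkowski(x, y, p=2):
        # Eq. (2.4); p = 1 gives Manhattan (2.2), p = 2 Euclidean (2.1),
        # and the limit p -> infinity gives Chebyshev (2.3).
        return np.sum(np.abs(x - y) ** p) ** (1.0 / p)

    def nearest_standard(x, standards):
        # Assign x to the class whose standard (reference vector) is nearest;
        # `standards` maps class label -> vector.
        return min(standards, key=lambda c: minkowski(x, standards[c]))

    def knn_classify(x, X_train, y_train, k=5):
        # Majority vote among the k training objects nearest to x
        # (X_train and y_train are NumPy arrays).
        d = np.array([minkowski(x, xi) for xi in X_train])
        nearest = np.argsort(d)[:k]
        return Counter(y_train[nearest]).most_common(1)[0][0]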
1. Naive Bayesian classifier [8]: This method is based on the assumption that the features that describe the objects of the sample are statistically independent. This assumption greatly simplifies the problem of estimating the distribution density, since instead of one n-dimensional density it suffices to estimate n one-dimensional densities. The densities can be estimated both parametrically and nonparametrically. According to the Bayes rule, the posterior probability of each of the K classes is found, given that the attribute x of the classified object has been measured:

P(i|x) = \frac{f(x|i) \, P(i)}{\sum_{j=1}^{K} f(x|j) \, P(j)}; \quad i = 1, 2, \ldots, K \qquad (2.6)

where f(x|i) is the estimate of the conditional distribution density of the attribute x for the ith class and P(i) is an estimate of the prior probability of the class.
The decision rule for the classified object x is to choose the class with the maximum posterior probability:

i^{*} = \arg\max_{i} P(i|x); \quad i = 1, 2, \ldots, K \qquad (2.7)
2. Parzen window method: This method uses a nonparametric estimate of the density [9] of the class distributions for the available sample and therefore puts forward no hypotheses about the structure of the distribution density function. The decision rule for classifying an object x is as follows:

i^{*} = \arg\max_{i} \lambda_i \sum_{j=1}^{n} [y_j = i] \, K\!\left(\frac{d(x, x_j)}{h}\right); \quad i = 1, 2, \ldots, K \qquad (2.8)

where λ_i is the price of the correct answer for class i; n is the sample size; y_j is the class of the jth object; K(t) is the kernel function; d(x, x_j) is the distance between the classified object x and the object x_j; and h is the window width. (A code sketch of the Bayes and Parzen rules follows this list.)
3. EM algorithm (expectation-maximization): The EM algorithm [10] estimates the density as a mixture of parametric distributions. In this algorithm, two stages are performed iteratively: the expectation stage, in which the expected value of the likelihood function is calculated, and the maximization stage, in which the parameters of the likelihood function that maximize it are calculated.
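A minimal sketch of the two density-based rules above: a Gaussian (parametric) naive Bayes classifier for (2.6)-(2.7) and the kernel-weighted vote (2.8) with a Gaussian kernel and unit prices; all names and these particular density choices are assumptions for illustration:

    import numpy as np

    def gaussian_nb_fit(X, y):
        # Per-class prior plus per-feature means/stds (independence assumption).
        return {c: (np.mean(y == c), X[y == c].mean(0), X[y == c].std(0) + 1e-9)
                for c in np.unique(y)}

    def gaussian_nb_predict(params, x):
        def log_post(prior, mu, sd):
            # Log of the numerator of eq. (2.6); the denominator is common
            # to all classes and can be ignored when maximizing.
            return np.log(prior) - 0.5 * np.sum(
                np.log(2 * np.pi * sd ** 2) + ((x - mu) / sd) ** 2)
        return max(params, key=lambda c: log_post(*params[c]))

    def parzen_classify(x, X_train, y_train, h=1.0):
        # Eq. (2.8) with lambda_i = 1 and kernel K(t) = exp(-t^2 / 2).
        d = np.linalg.norm(X_train - x, axis=1)
        w = np.exp(-0.5 * (d / h) ** 2)
        return max(np.unique(y_train), key=lambda c: w[y_train == c].sum())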
1. Fisher's linear discriminant [11], also known as linear discriminant analysis, is applicable if the sample satisfies the following hypotheses: the classes are normally distributed and the class covariance matrices are equal. Fisher's linear discriminant is a simplification of the quadratic discriminant. In the case of two classes in two-dimensional space, the dividing surface constructed using this method will be a straight line. In the case of a larger number of classes, the dividing surface will be piecewise linear.
2. Logistic regression [12]: For the case of two classes, a linear classification algorithm is constructed with a decision rule of the form:

\mathrm{logreg}(x, w) = \mathrm{sign}\!\left(\sum_{j=1}^{m} w_j x_j - w_0\right) = \mathrm{sign}\,\langle x, w \rangle \qquad (2.9)

where w_j is the weight of the jth feature, w_0 is the decision threshold, w is the vector of weights, and ⟨x, w⟩ is the scalar product of the weight vector and the features of the object.

The problem of training the logistic regression algorithm is to find the optimal vector of weights w that minimizes the loss function of the form (a code sketch of this loss appears after the list):

L(w) = \sum_{i=1}^{n} \ln\!\left(1 + e^{-y_i \langle x_i, w \rangle}\right) \qquad (2.10)
3. The support vector machine (SVM) [13] is one of the most popular supervised learning methods, for several reasons:
(i) It is a fast method for finding the decision rule.
(ii) It reduces to solving a quadratic programming problem in a convex domain, which always has a unique solution.
(iii) It finds the dividing surface of the classes with a separating margin of maximum width, which contributes to a more confident classification.
In the case of two classes and a linearly separable sample, the constraints defining the decision rule of the SVM algorithm take the form:

y_i \left( \langle w, x_i \rangle + b \right) \geq 1; \quad i = 1, 2, \ldots, n \qquad (2.11)

where y_i is the class label of the ith sample object, w is the vector of weight coefficients, and n is the sample size.
Only those objects of the sample that lie on the dividing surface enter this sum with nonzero coefficients λ_i; these objects are called support vectors. In the case of a linearly inseparable sample, the feature space R^n is mapped into a space H of higher dimension using a function ϕ(x), in which the sample becomes linearly separable, and the decision rule is sought in the corresponding form in the transformed space. There is also a multiclass support vector machine, which reduces the problem to several binary classification problems according to the “one against all” scheme.
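As noted in the logistic-regression item above, training minimizes the loss (2.10). A minimal sketch, assuming labels y_i in {-1, +1} and using np.logaddexp for numerical stability; the gradient makes it usable with any first-order optimizer:

    import numpy as np

    def logistic_loss(w, X, y):
        # Eq. (2.10): sum_i ln(1 + exp(-y_i <x_i, w>)), computed stably.
        return np.sum(np.logaddexp(0.0, -y * (X @ w)))

    def logistic_loss_grad(w, X, y):
        # Gradient of (2.10) with respect to w.
        return X.T @ (-y / (1.0 + np.exp(y * (X @ w))))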
Depending on the principle by which the next feature for splitting is selected, and how it is split, there are several variants of this algorithm:
1. Algorithm ID3, in which the next feature is selected according to the information gain criterion.
2. Algorithm C4.5 [15], an improved version of ID3, in which the feature is chosen according to the normalized information gain criterion.
In practice, in order to avoid overfitting after building a decision tree, some of its branches are truncated to maintain better generalizing ability; this procedure is called pruning.
2.8.1 VOTING
Let a classifier ensemble consist of L base classifiers in the set D = {D_1, D_2, ..., D_L}, and let any object x ∈ R^n be assigned to one of the c possible classes Ω = {ω_1, ω_2, ..., ω_c}. For x to be classified, the L classifiers output a matrix M = [m_{i,j}]; i = 1, 2, ..., L; j = 1, 2, ..., c.

1. Majority voting rule: Suppose m_{i,j} ∈ {0, 1}, where m_{i,j} = 1 if D_i predicts x in class ω_j and m_{i,j} = 0 otherwise. x is assigned to ω_k if

\sum_{i=1}^{L} m_{i,k} = \max_{j=1}^{c} \sum_{i=1}^{L} m_{i,j} \qquad (2.15)

2. Average of probabilities rule: Suppose m_{i,j} ∈ [0, 1], where m_{i,j} is the degree of support that classifier D_i gives to the hypothesis that x comes from class ω_j, denoted m_{i,j} = P_{D_i}(ω_j|x). x is assigned to ω_k if

\frac{1}{L} \sum_{i=1}^{L} m_{i,k} = \max_{j=1}^{c} \frac{1}{L} \sum_{i=1}^{L} m_{i,j} \qquad (2.16)
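A minimal sketch of the two combination rules, assuming the L x c support matrix M of this section is given as a NumPy array (0/1 entries for majority voting, probabilities for averaging; names are illustrative):

    import numpy as np

    def combine(M, rule="majority"):
        if rule == "majority":
            scores = M.sum(axis=0)     # votes per class, eq. (2.15)
        else:
            scores = M.mean(axis=0)    # mean support per class, eq. (2.16)
        return int(np.argmax(scores))  # index k of the winning class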
2.8.2 STACKING
Stacking constructs a set of heterogeneous or homogeneous base classifiers, and the outputs of the base classifiers are used to train a metaclassifier, which produces a single output as the final classification result. The task of the metaclassifier is to correct any mistakes made by the base classifiers and minimize the generalization error. Any classification algorithm can be used to train the base classifiers or the metaclassifier. The procedure of the stacking algorithm is as follows (a minimal sketch in code follows the steps):
1. Step 1: split the dataset into three disjoint subsets: the training set, the validation set, and the testing set;
2. Step 2: train a set of base classifiers on the training set;
3. Step 3: apply those base classifiers to classify the validation set;
4. Step 4: use the outputs of the base classifiers from Step 3 as features, along with the true class labels, to train the metaclassifier;
5. Step 5: test the metaclassifier on the testing set to evaluate the performance of the stacking ensemble.
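The promised sketch of Steps 1-5, assuming base_fits and meta_fit are user-supplied training functions (X, y) -> model, where a model maps one sample to a predicted label (all names are illustrative):

    import numpy as np

    def stacking_fit(X_tr, y_tr, X_val, y_val, base_fits, meta_fit):
        bases = [fit(X_tr, y_tr) for fit in base_fits]             # Step 2
        meta_X = np.array([[m(x) for m in bases] for x in X_val])  # Step 3
        meta = meta_fit(meta_X, y_val)                             # Step 4
        return bases, meta

    def stacking_predict(bases, meta, x):
        # Step 5 usage: feed the base predictions to the metaclassifier.
        return meta(np.array([m(x) for m in bases]))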
1. Step 1: A set of heterogeneous or homogeneous base classifiers is trained for the same task.
2. Step 2: The chosen algorithm is employed to compute the weights of the base classifiers, and a subset of base classifiers whose weight is bigger than a preset threshold is combined to construct an ensemble.
3. Step 3: The output layer calculates the final degree of membership Y of the jth example x_j to each class, based on the majority voting rule (soft vote) or the average of probabilities rule.
Figure 2.2: Ensemble selection (Modified from Gorbachev, S., Arkhipov, A., Gorbacheva, N., Bhattacharyya, S., Cao, J. & Kale, S. (2021). Study and Developing of Diversity Generation Methods in Heterogeneous Ensemble Models. International Journal of Distributed Computing and Technology, 7(1), 816)
The outputs Y_j^{(k)} of the penultimate layer (Figure 2.2) for each kth class express the degree of membership of the jth example x_j of the training sample in that class as a weighted linear combination of the normalized outputs of each classifier (neuroexpert):

Y_j^{(1)} = \frac{1}{L} \sum_{i=1}^{L} \nu_i^{(1)} y_{ij}^{(1)}, \quad \ldots, \quad Y_j^{(c)} = \frac{1}{L} \sum_{i=1}^{L} \nu_i^{(c)} y_{ij}^{(c)} \qquad (2.17)
The weights of each classifier (neuroexpert) are calculated based on the number of errors they make, using one of the following methods:
(i) Fisher linear discriminant;
(ii) logistic regression;
(iii) single-layer perceptron;
(iv) support vector machine (SVM), which is the most significant in terms of maximizing the separating ability between classes and in terms of reliability;
(v) the “naive” Bayesian classifier (the most popular of the simple ones);
(vi) heuristic algorithms.
where p_i is the probability of error of the ith classifier on the training sample: the fewer errors it makes, the greater its weight. As the probability estimate we can take the error rate γ_i, or γ_i + 1/N so that the denominator does not vanish, i.e.:

\nu_i = \ln \frac{1 - \gamma_i}{\gamma_i + \frac{1}{N}}; \quad i = 1, 2, \ldots, L \qquad (2.19)

If any neuroexpert makes more than half of the mistakes, its weight becomes ν_i < 0. Such an unreliable classifier is not taken into account in the meta-network, which is achieved by setting ν_i = 0.
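A minimal sketch of the weight computation (2.19), with the zeroing of unreliable neuroexperts applied at the end (names are illustrative):

    import numpy as np

    def neuroexpert_weights(error_rates, n_samples):
        # nu_i = ln((1 - gamma_i) / (gamma_i + 1/N)), eq. (2.19); experts
        # that are wrong more than half the time get weight 0.
        gamma = np.asarray(error_rates, dtype=float)
        nu = np.log((1.0 - gamma) / (gamma + 1.0 / n_samples))
        return np.where(nu < 0.0, 0.0, nu)

    # e.g. neuroexpert_weights([0.1, 0.3, 0.6], n_samples=100)
    # -> positive weights for the first two experts, 0 for the third.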
Graphics processing units (GPUs), which can accommodate plenty of processing units, and CUDA (a specialized software platform) are also very useful for the same purpose [20]. In recent times, a number of renowned companies have successfully designed powerful quantum computers that are more efficient than classical computers in many respects. These quantum computers can be used efficiently in different fields such as machine learning, simulation, and optimization, to name a few. Some of the popular companies that have successfully developed quantum computers are Google [21], IBM [22], Intel [23], and D-Wave Systems [24]. In the literature, some popular metaheuristics are simulated annealing (SA) [25], particle swarm optimization (PSO) [26], differential evolution (DE) [27], and ant colony optimization (ACO) [28], to name a few.
In 1996, Narayanan and Moore used the ideas and features of quantum mechanics to develop efficient metaheuristics [22]. The authors used a quantum-inspired crossover to find optimal solutions for the traveling salesman problem (TSP), and a set of guidelines was introduced as an effort to characterize a method for designing and developing quantum algorithms. The theory and features of quantum computing have since been used by different researchers to design a number of quantum-inspired algorithms. One of the pioneering algorithms of this category is popularly known as the Genetic Quantum Algorithm (GQA) [30]. In 2019, Montiel et al. presented a popular quantum-inspired algorithm, called the quantum-inspired Acromyrmex evolutionary algorithm (QIAEA) [31]; the authors were motivated by careful observation of the daily colony habits of the Atta and Acromyrmex ants. Han et al. [32] developed a parallel version of QGA, called parallel QGA (PQGA), applied it to the knapsack problem, and compared it with QGA to judge its efficacy, with PQGA outperforming the latter. In 2012, Li et al. [33] presented a watershed-based quantum evolutionary algorithm for texture image clustering and SAR segmentation. Wang et al. [34] introduced a quantum swarm evolutionary algorithm (QSwE) based on QEA and the particle swarm optimization (PSO) method; a variety of Q-bit expression forms and an improved version of PSO were introduced in QSwE for updating the quantum angle. Zouache et al. [35] introduced a PSO-based quantum-inspired firefly algorithm (QIFAPSO), which combines the communal habits of fireflies, swarm intelligence, and the concepts of quantum computing in a single framework.
Apart from that, several quantum-based metaheuristics are available in the literature. Some of them are the quantum-inspired evolutionary algorithm based on P-systems (QEPS) [36] developed by Zhang et al. and the quantum-inspired DE and PSO algorithm (QDEPSO) [37] proposed by Zouache and Moussaoui. Chaos Quantum-Inspired Particle Swarm Optimization (CQPSO) was developed in [38] to handle the economic load dispatch (ELD) problem, and Hassan et al. [39] proposed a quantum-inspired bat algorithm to deal with various economic load dispatch problems. In wireless sensor networks, Ullah and Wahid [40] designed a quantum-inspired genetic algorithm framework for topology control. Several quantum-inspired algorithms have been introduced in the literature for handling bi-level optimization problems. Zhang et al. [41] used bi-level programming in collaboration with swarm intelligence to
develop a strategic bidding optimization algorithm. Dey et al. [42] presented quantum-inspired bi-level optimization algorithms using GA and PSO for gray-level image thresholding, and a quantum-inspired multi-objective simulated annealing was designed by Dey et al. [3] for bi-level image thresholding. The computational capability of the bi-level system has been extended to the multi-level frame by altering its basic structure: Dey et al. [2] introduced quantum-inspired particle swarm optimization and quantum-inspired differential evolution for multi-level colour image thresholding. Tkachuk [45] used a quantum technological approach to develop a quantum-inspired evolutionary algorithm. Later, Dey et al. [2][4][47] designed a number of quantum-inspired metaheuristics in the multi-level and colour domains. Cardenas et al. [48] designed a protocol to perform quantum reinforcement learning (QRL) with quantum technologies (QT).
Quantum-inspired metaheuristics have been widely used in pattern recognition. Dey et al. [49] introduced a quantum-inspired sperm whale algorithm for multi-level thresholding, in which the basic operators of quantum computing are fused with the sperm whale algorithm. Dutta et al. [50] introduced a novel metaheuristic, called Border Collie Optimization, whose design mimics the sheep-herding behaviour of Border Collie dogs.
Unlike single-objective optimization (SOO), multi-objective optimization (MOO) handles more than one objective function at a time. Kim et al. [51] designed a QEA-based quantum-inspired multi-objective evolutionary algorithm (QMEA) for solving the 0-1 knapsack problem. Moghadam et al. [52] first introduced a quantum version of the gravitational search algorithm (GSA), called the quantum-behaved gravitational search algorithm (QGSA). Later, in 2015, Chakraborti et al. [53] introduced a modified version of QGSA, called the modified binary quantum-behaved gravitational search algorithm with differential mutation (MBQGSA-DM), which uses a differential mutation strategy. Li and Wang [54] proposed a hybrid quantum-inspired genetic algorithm (HQGA), designed to deal efficiently with a multi-objective combinatorial optimization problem, the flow shop scheduling problem (FSSP). A novel algorithm, called Quantum Ant Colony Multi-Objective Routing (QACMOR), has been developed to deal with the WSN routing problem by combining the concepts of quantum computing and multi-objective optimization.
2.9 CONCLUSION
With the development of machine learning theory and the accumulation of practical experience with various algorithms, it has become clear that there is no ideal classification method that is better than all others for all sizes of the training sample, for any percentage of noise in the data, for any complexity of the boundaries dividing objects into classes, etc. Therefore, at present, ensemble classification methods that combine many different classifiers trained on different data samples are widely used. One of the most accurate and easily parallelized methods available today is bagging, which turns out to be useful in the case of heterogeneous classifiers and instability, when small changes in the initial sample lead to significant changes in the classification. To increase the speed of combining the decisions of the basic classifiers, a new quantum-inspired method of collective decision-making based on metaheuristic quantum algorithms is proposed. The development of ensemble methods for high-speed online learning is expected in the future.
REFERENCES
1. Friedman, J. (2001). Greedy function approximation: a gradient boosting machine. Annals of Statistics, 29(5), 1189-1232.
2. Zhou, ZH. & Wu, J. & Tang, W. (2002). Ensembling neural networks: many could be better
than all. Artificial intelligence, 137(1), 239-263.
3. Sahu, A. & Runger, G. & Apley, D. (2011). Image denoising with a multi-phase kernel
principal component approach and an ensemble version. IEEE applied imagery pattern
recognition workshop, 1-7.
4. Shinde, A. & Sahu, A. & Apley, D. & Runger, G. (2014). Preimages for variation patterns
from kernel PCA and bagging. IIE Transactions, 46(5), 429-456.
5. Buhlmann, P. & Hothorn, T. (2007). Boosting algorithms: Regularization, prediction and
model fitting. Statistical Science, 477-505.
6. Arya, S. & Mount, D. & Netanyahu, N. & Silverman, R. & Wu, A. (1998). An optimal
algorithm for approximate nearest neighbor searching fixed dimensions. Journal of the
ACM, 45(6), 891-923.
7. Aizerman, M. & Braverman, E. & Rozonoer, L. (1970). The Potential Function Method in Machine Learning Theory. Moscow: Science.
8. Beletskaya, S. & Asanov, Yu. & Povalyaev, A. & Gaganov, A.V. (2015). Research of the
effectiveness of genetic algorithms for multicriteria optimization. Voronezh State Technical
University Bulletin, 11(1), 1-4.
9. Epanechnikov, V. (1969). Nonparametric estimation of multidimensional probability den-
sity. Probability theory and its applications, 14(1), 156-161.
10. Dempster, A. & Laird, N. & Rubin, D. (1977). Maximum likelihood from incomplete data
via the EM algorithm. Journal of the Royal Statistical Society. Series B (Methodological),
39(1), 1-38.
11. Mika, S. & Rätsch, G. & Weston, J. & Schölkopf, B. & Müller, K.-R. (1999). Fisher discriminant analysis with kernels. Neural Networks for Signal Processing, 41-48.
12. Hosmer Jr, DW. & Lemeshow, S. & Sturdivant, R. (2013). Applied logistic regression. New York: John Wiley & Sons.
13. Cortes, C. & Vapnik, VN. (1995). Support-vector networks. Machine Learning, 20(3), 273-297.
14. Kamiński, B. & Jakubczyk, M. & Szufel, P. (2017). A framework for sensitivity analysis of decision trees. Central European Journal of Operations Research, 26(1), 135-159.
15. Quinlan, JR. (2014). C4.5: programs for machine learning. Amsterdam: Elsevier.
16. Breiman, L. & Friedman, J. & Stone, CJ. & Olshen, RA. (1984). Classification and regres-
sion trees. Monterey: Wadsworth & Brooks.
17. Rutkovskaya, D. & Rutkovsky, L. & Pilinsky, M. (2013). Neural networks, genetic algo-
rithms and fuzzy systems. Moscow: Hotline-Telecom.
18. Glover, F. & Kochenberger. G.A. (2003). Handbook on Metaheuristics. Kluwer Academic
Publishers.
19. Veldhuizen, DAV. & Lamont, GB. (2000). Multiobjective evolutionary algorithms: Analyzing the state-of-the-art. Evolutionary Computation, The MIT Press, 8(2), 125-147.
20. Fabris, F. & Krohling, RA. (2012). A co-evolutionary differential evolution algorithm for solving min-max optimization problems implemented on GPU using C-CUDA. Expert Syst. Appl., 39(6), 10324-10333.
21. Google LLC. (2019). Quantum, Google AI. https://ai.google/research/teams/applied-science/quantum-ai/, 2019. Accessed: May 18, 2021. [Online].
22. IBM. (2019). IBM Q, quantum computing. https://www.research.ibm.com/ibm-q/, 2019. Accessed: May 4, 2021. [Online].
23. Intel Corporation. (2019). 2018 CES: Intel advances quantum and neuromorphic computing research. https://newsroom.intel.com/news/intel-advances-quantumneuromorphic-computing-research/, 2019. Accessed: May 2, 2021. [Online].
24. D-Wave Systems. (2019). D-Wave Systems. https://www.dwavesys.com/home/, 2019. Accessed: May 15, 2021. [Online].
25. Kirkpatrick, S. & Gelatt, CD. & Vecchi, MP. (1983). Optimization by simulated annealing. Science, 220, 671-680.
26. Kennedy, J. & Eberhart, R. (1995). Particle swarm optimization. In: Proceedings of the IEEE International Conference on Neural Networks (ICNN'95), Perth, Australia, 4, 1942-1948.
27. Storn, R. & Price, K. (1995). Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces. Technical Report TR-95-012, ICSI.
28. Dorigo, M. & Maniezzo, V. & Colorni, A. (1996). The ant system: optimization by a colony of cooperating agents. IEEE Trans. Syst. Man & Cybernet. Part B, 26(1), 29-41.
29. Narayanan, A. & Moore, M. (1996). Quantum-inspired genetic algorithms. In: Proceedings of IEEE Int. Conf. Evol. Comput., 61-66.
30. Han, KH. & Kim, JH. (2000). Genetic quantum algorithm and its application to combinatorial optimization problem. In: Proceedings of Congr. Evol. Comput. (CEC), 2, 1354-1360.
31. Montiel, O. & Rubio, Y. & Olvera, C. & Rivera, A. (2019). Quantum inspired acromyrmex evolutionary algorithm. Nat. Sci. Rep., 9(12181), 169-176.
32. Han, KH. & Park, KH. & Lee, CH. & Kim, JH. (2001). Parallel quantum inspired genetic algorithm for combinatorial optimization problem. In: Proceedings of Congr. Evol. Comput., 2, 1422-1429.
33. Li, Y. & Shi, H. & Jiao, L. & Liu, R. (2012). Quantum evolutionary clustering algorithm based on watershed applied to SAR image segmentation. Neurocomputing, 87(10), 90-98.
34. Wang, Y. & Feng, XY. & Huang, YX. & Pu, DB. & Zhou, WG. & Liang, YC. & Zhou, CG. (2007). A novel quantum swarm evolutionary algorithm and its applications. Neurocomputing, 70(4), 633-640.
35. Zouache, D. & Nouioua, F. & Moussaoui, A. (2016). Quantum inspired firefly algorithm with particle swarm optimization for discrete optimization problems. Soft Computing, 20(7), 2781-2799.
36. Zhang, G. & Gheorghe, M. & Wu, C. (2008). A quantum-inspired evolutionary algorithm based on P systems for knapsack problem. Fundam. Inform., 87(1), 93-116.
37. Zouache, D. & Moussaoui, A. (2015). Quantum-inspired differential evolution with particle swarm optimization for knapsack problem. J. Inf. Sci. Eng., 31, 1779-1795.
38. Meng, K. & Wang, HG. & Dong, Z. & Wong, KP. (2010). Quantum inspired particle swarm optimization for valve-point economic load dispatch. IEEE Transactions on Power Systems, 25(1), 215-222.
39. Tehzeeb ul Hassan, H. & Asghar, MU. & Zamir, MZ. & Faiz, HMA. (2017). Economic load dispatch using novel bat algorithm with quantum and mechanical behaviour. In: Proceedings of 2017 International Symposium on Wireless Systems and Networks (ISWSN), 1-6.
40. Ullah, S. & Wahid, M. (2015). Topology control of wireless sensor network using quantum inspired genetic algorithm. International Journal of Swarm Intelligence and Evolutionary Computation, 04, 08.
41. Zhang, G. & Zhang, G. & Gao, Y. & Lu, J. (2011). Competitive strategic bidding optimization in electricity markets using bilevel programming and swarm technique. IEEE Transactions on Industrial Electronics, 58(6), 2138-2146.
42. Dey, S. & Bhattacharyya, S. & Maulik, U. (2014). Quantum inspired genetic algorithm and particle swarm optimization using chaotic map model based interference for gray level image thresholding. Swarm and Evolutionary Computation, 15, 38-57.
43. Dey, S. & Bhattacharyya, S. & Maulik, U. (2017). Quantum-inspired multi-objective simulated annealing for bilevel image thresholding. Quantum Inspired Computational Intelligence, Research and Applications.
44. Dey, S. & Bhattacharyya, S. & Maulik, U. (2013). Quantum inspired meta-heuristic algorithms for multi-level thresholding for true colour images. In: Proceedings of 2013 Annual IEEE India Conference (INDICON), Mumbai, India.
45. Tkachuk, V. (2018). Quantum genetic algorithm on multilevel quantum systems. Mathematical Problems in Engineering, 1-12.
46. Dey, S. & Bhattacharyya, S. & Maulik, U. (2016). New quantum inspired meta-heuristic techniques for multi-level colour image thresholding. Applied Soft Computing, 46, 677-702.
47. Dey, S. & Bhattacharyya, S. & Maulik, U. (2017). Efficient quantum inspired meta-heuristics for multi-level true colour image thresholding. Applied Soft Computing, 56, 472-513.
48. Cárdenas-López, FA. & Lamata, L. & Retamal, JC. & Solano, E. (2018). Multiqubit and multilevel quantum reinforcement learning with quantum technologies. PLoS ONE, 13(7), 1-12.
49. Dey, S. & De, S. & Ghosh, D. & Konar, D. & Bhattacharyya, S. & Platos, J. (2019). A novel quantum inspired sperm whale metaheuristic for image thresholding. In: Proceedings of 2019 Second International Conference on Advanced Computational and Communication Paradigms (ICACCP), 1-7.
50. Dutta, T. & Bhattacharyya, S. & Dey, S. & Platos, J. (2020). Border Collie Optimization. IEEE Access, 8, 109177-109197.
51. Kim, Y. & Kim, JH. & Han, KH. (2006). Quantum-inspired multiobjective evolutionary algorithm for multiobjective 0/1 knapsack problems. In: Proceedings of IEEE Int. Conf. Evol. Comput., 2601-2606.
52. Moghadam, MS. & Nezamabadi-Pour, H. & Farsangi, MM. (2012). A quantum behaved gravitational search algorithm. Intell. Inf. Manage., 2012(4), 390-395.
53. Chakraborti, T. & Chatterjee, A. & Halder, T. & Konar, A. (2015). Automated emotion recognition employing a novel modified binary quantum-behaved gravitational search algorithm with differential mutation. Expert Syst., 32(4), 522-530.
54. Li, B. & Wang, L. (2007). A hybrid quantum-inspired genetic algorithm for multiobjective flow shop scheduling. IEEE Trans. Syst., Man, Cybern. B, Cybern., 37(3), 576-591.
3 Function Optimization Using IBM Q
3.1 INTRODUCTION
An optimization problem involves searching for the best solution among all possible solutions. When a practical optimization problem in any discipline is represented by a mathematical function, that function is called the objective function. Real-world optimization problems typically have several objective functions. Many approaches are available in the literature for resolving practical optimization problems. In this chapter, the extensively used optimization techniques are described briefly. The main focus is on solving optimization problems using IBM Q. After the introduction in the first section of this chapter, a brief overview of single-objective and multi-objective optimization, along with their difficulties, is given.
Section 3.3 discusses modern techniques for resolving optimization problems. Genetic algorithms, simulated annealing, particle swarm optimization, differential evolution, ant colony optimization, bee colony optimization, the harmony search algorithm, the bat algorithm, Cuckoo search, neural-network-based optimization, fuzzy optimization, etc. are available in the literature and are considered modern methods of optimization problem solving. All the aforementioned methods are described briefly along with their implications.
As the complexity of optimization problems and the amount of data involved rise, more efficient ways of solving optimization problems are needed. The power of quantum computing can be used to solve problems that are not practically feasible on classical computers, or to obtain a considerable speed-up with respect to the best known classical algorithms. In Section 3.4, the author elaborates on the basics of quantum computing and the quantum algorithms used for solving optimization problems.
IBM provides an experimental cloud-enabled quantum computing user-interface platform, known as IBM Q, for students, researchers, and general science enthusiasts. It allows users to run established algorithms and experiments, work with quantum bits (qubits), etc. The different features of IBM Q are presented in Section 3.5.
The circuit composer in IBM Q is a tool that allows users to visually learn how to create quantum circuits. In Section 3.6, a step-by-step approach is presented for building a sample quantum application using the circuit composer.
Qiskit (Quantum Information Software Kit) in IBM Q is an open-source quantum computing framework, which enables developers and researchers to conduct quantum explorations using Python scripts. Section 3.7 showcases a sample Qiskit project with its different components, namely Qiskit Terra, Qiskit Aqua, Qiskit Ignis, and Qiskit Aer.
In Section 3.8, the author elaborates on the applications of quantum computing where IBM Q can be utilized seamlessly. Among these applications, objective function optimization using IBM Q is broadly illustrated in this section. Portfolio optimization, risk analysis, and Monte-Carlo-like applications are considered as a few examples of optimization.
The last section of this chapter is dedicated to drawing concluding remarks on the application of IBM Q in objective function optimization.
Several characteristics make practical optimization problems difficult to solve:
1. Existence of mixed types of variables (such as Boolean, discrete, integer, and real)
2. Existence of non-linear constraints
3. Existence of multiple conflicting objectives
4. Existence of multiple optimum (local and global) solutions
5. Existence of stochasticity and uncertainty in describing the optimization problem
Figure 3.1: Decision variable space and the corresponding objective space.
Minimize/Maximize f_m(x),  m = 1, 2, . . ., M;
subject to g_j(x) ≥ 0,  j = 1, 2, . . ., J;
h_k(x) = 0,  k = 1, 2, . . ., K;                        (3.1)
x_i^(L) ≤ x_i ≤ x_i^(U),  i = 1, 2, . . ., n;
1. In a single-objective optimization, there is only one goal, the search for an opti-
mum solution. In multi-objective optimization, there are two goals, progressing
toward the Pareto-optimal front and maintaining diversity among the solutions in
the Pareto-optimal front.
2. In a single-objective optimization, there is only one search space, the decision
space. But in multi-objective optimization, there are two search spaces, decision
space and objective space.
3. Single-objective optimization is the degenerate case of multi-objective optimiza-
tion. In many cases, multi-objective optimization can be converted into single-
objective optimization.
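As a small illustration of these two goals, the Python sketch below (a toy example, not from this chapter) tests Pareto dominance and keeps the non-dominated solutions that approximate the Pareto-optimal front for two minimized objectives:

def dominates(a, b):
    # True if solution a Pareto-dominates b (all objectives minimized)
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # keep only the non-dominated points
    return [p for p in points if not any(dominates(q, p) for q in points)]

# two conflicting objectives, e.g. (cost, error)
print(pareto_front([(1, 9), (2, 7), (3, 8), (4, 4), (6, 3), (7, 5)]))
# -> [(1, 9), (2, 7), (4, 4), (6, 3)]

Maintaining diversity among exactly these non-dominated points is the second goal of multi-objective optimization.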
|Ψ⟩ = α|0⟩ + β|1⟩ (3.2)
where α and β are complex numbers and |α|² + |β|² = 1.
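A quick numerical check of Equation (3.2) can be written in a few lines of Python; the amplitudes below are chosen arbitrarily for illustration:

import numpy as np

alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)      # arbitrary amplitudes
psi = np.array([alpha, beta])                      # |psi> = alpha|0> + beta|1>
assert np.isclose(np.sum(np.abs(psi) ** 2), 1.0)   # normalization constraint holds
print(np.abs(psi) ** 2)                            # P(|0>), P(|1>) -> [0.5 0.5]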
Quantum logic gates, operating on a number of qubits, are the building blocks of quantum circuits. There are many types of quantum gates, like the H (Hadamard) gate, CX (Controlled-X) gate, ID (Identity) gate, U3 gate, U2 gate, U1 gate, Rx gate, Ry gate, Rz gate, X gate, Y gate, Z gate, S gate, Sdg gate, T gate, Tdg gate, cH gate, cY gate, cZ gate, cRz gate, cU1 gate, cU3 gate, ccX gate, SWAP gate, etc. Even custom gates can be created for use in quantum circuits.
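For instance, composing just two of these gates in Qiskit (a short sketch assuming the Qiskit API of this period) already produces an entangled Bell state:

from qiskit import QuantumCircuit

qc = QuantumCircuit(2)
qc.h(0)       # Hadamard gate puts qubit 0 into an equal superposition
qc.cx(0, 1)   # Controlled-X entangles qubit 1 with qubit 0
print(qc.draw())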
Digital quantum computers use quantum logic gates to do computation. According to IBM's quantum computing reference website [9], a quantum computer consists of the following blocks or chambers.
problem has been used to discuss the performance of GQA. Further improvement to this algorithm was made using a parallelism feature.
Quantum annealing, as implemented on a Quantum Processing Unit (QPU), is applicable to solving binary optimization problems [11]. In [12], a quantum annealer has been utilized to optimize traffic flow, as mentioned by [13]. The QPU is designed to solve Quadratic Unconstrained Binary Optimization (QUBO) problems, where each qubit represents a variable and couplers between qubits represent the costs associated with qubit pairs. The quantum annealing algorithm has been used in [14] to resolve the Nurse Scheduling Problem (NSP), which arises when searching for the optimal schedule for a set of available nurses to create a rotating roster.
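The QUBO formulation mentioned above is easy to state in code. The sketch below (hypothetical coefficients; brute-force enumeration stands in for the annealer at this toy size) evaluates the QUBO energy x^T Q x, where diagonal entries are per-variable costs and off-diagonal entries are the coupler costs:

import numpy as np

def qubo_value(Q, x):
    x = np.asarray(x)
    return x @ Q @ x          # energy of the binary assignment x

Q = np.array([[-1.0, 2.0, 0.0],     # diagonal: variable costs
              [0.0, -1.0, 2.0],     # off-diagonal: coupler costs
              [0.0, 0.0, -1.0]])

assignments = [[(i >> k) & 1 for k in range(3)] for i in range(2 ** 3)]
best = min(assignments, key=lambda x: qubo_value(Q, x))
print(best, qubo_value(Q, best))    # -> [1, 0, 1] -2.0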
The Quantum Adiabatic Algorithm (QAA) [15] has been used on a quantum computer for finding the global minima of a classical cost function. Performance has been measured in [16] by generating over 200,000 instances of MAX 2-SAT on 20 qubits. Several real-life optimization examples can be resolved by quantum computing.
The circuit composer of IBM Q is widely used to simulate quantum circuits visually and test them easily. A quantum calculator (addition, subtraction, multiplication, and division) has been demonstrated and simulated using the circuit composer [17]. An algorithm for obtaining the maximum and minimum of any mathematical model has been presented in [18]. The circuit composer has been used as a simulator to find the minimum of the Titanic passengers' ages. In [19], a controlled square root of Z gate has been constructed and tested using the circuit composer. An exhaustive survey
a. Job is validating
b. Job is successfully queued
c. Job has successfully run
Once the job has successfully run, the results can be plotted using the plot_histogram function.
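As an illustration, a run of this kind can be scripted end-to-end with Qiskit (a sketch assuming the Aqua-era Qiskit API used in this chapter; the circuit itself is a placeholder):

from qiskit import QuantumCircuit, Aer, execute
from qiskit.visualization import plot_histogram

qc = QuantumCircuit(5, 5)          # placeholder 5-qubit circuit
qc.h(0)
qc.cx(0, 1)
qc.measure(range(5), range(5))

backend = Aer.get_backend('qasm_simulator')   # swap in an IBM Q backend for hardware
job = execute(qc, backend, shots=1024)
counts = job.result().get_counts()
print(counts)
plot_histogram(counts)             # visualize the probabilities of occurrence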
From the two plots provided in Figure 3.10 and Figure 3.11, the differences are very clear. When the circuit is run on the Jupyter notebook simulator, the probability of occurrence is ideal and concentrated on the two significant states 00000 and 00011; the probability of occurrence for the other states is negligible. When the job is sent to the real quantum computer, that is, IBM Q, for analysis, it yields probabilities of occurrence across many states, reflecting small hardware errors. Scientists are working to diminish these insignificant errors by improving the technology of real quantum computers.
print(result) = {'00011': 513, '00000': 511}
print(result_ibmq) = {'00101': 1, '01011': 5, '00010': 36, '10010': 2, '01010': 1, '10000': 20, '00011': 311, '00111': 2, '00001': 54, '00000': 571, '01000': 4, '10001': 1, '11111': 1, '11011': 1, '00100': 2, '10011': 11, '01001': 1}
Figure 3.11: Creating a job for IBM Q and visualizing the result.
computer using VQE. In [29], VQE in IBM Q has been applied to solve the MaxCut
NP-complete binary optimization problem with 5 qubits.
Grover’s Adaptive Search (GAS) [30] is a fast quantum mechanical algorithm
for combinatorial optimization problems, which can resolve an O(N/2) optimization
problem into O(N) steps. In [31], a modified version of Grover’s search algorithm
with fewer gates, optimized number of iterations and improved performance has been
presented. To establish the upgraded and optimized quantum search, set search and
array search algorithms have been implemented using IBM Q. Grover optimizer can
be easily applied to solve the QUBO problem [32].
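The "optimized number of iterations" mentioned above follows directly from Grover's analysis: roughly (π/4)·√(N/M) applications of the Grover operator for M marked items among N. A short illustrative check:

import math

def grover_iterations(num_items, num_marked=1):
    # optimal iteration count ~ floor((pi / 4) * sqrt(N / M))
    return math.floor((math.pi / 4) * math.sqrt(num_items / num_marked))

print(grover_iterations(2 ** 5))   # 5-qubit search space: 4 iterations suffice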
In July 2020, IBM came up with the Qiskit Optimization Module [33]. It is at an initial stage, with the goal of providing a highly optimized solution to users within a few milliseconds for any input problem. The module acts as a black box combining quantum and classical resources, so users do not need knowledge of quantum theory and mechanics. According to the documentation, it enables easy and efficient modeling of optimization problems for developers and optimization experts without quantum expertise. IBM's DOcplex (Decision Optimization CPLEX) modeling for Python [34] is used to develop the Qiskit Optimization Module.
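As a sketch of that workflow (a hypothetical knapsack model; the commented conversion lines assume the Aqua-era qiskit.optimization API), a problem can be written with DOcplex and then handed to the optimization module:

from docplex.mp.model import Model

mdl = Model(name='knapsack')                   # hypothetical 3-item knapsack
x = mdl.binary_var_list(3, name='x')
values, weights = [4, 5, 3], [2, 3, 1]
mdl.maximize(mdl.sum(v * xi for v, xi in zip(values, x)))
mdl.add_constraint(mdl.sum(w * xi for w, xi in zip(weights, x)) <= 4)

# assumed Aqua-era conversion into a Qiskit optimization problem:
# from qiskit.optimization import QuadraticProgram
# qp = QuadraticProgram()
# qp.from_docplex(mdl)   # the module's solvers can then be applied to qp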
3.9 CONCLUSION
By definition, optimization is the technique of finding an alternative with the most cost-effective or highest achievable performance under the given constraints, by maximizing desired factors and minimizing undesired ones. Over many years, a lot of effort has gone into resolving optimization problems, specifically NP-hard and multi-objective problems. The task has become slightly easier after the invention of the quantum computer, which is very costly and sensitive. IBM makes it freely available to the public through the IBM Q Experience online platform.
In this chapter, efforts have been made to discuss two features available in IBM Q: the circuit composer and Qiskit. A 5-qubit circuit has been created on a Jupyter notebook using the Qiskit software development kit, and the same has been tested using a real IBM quantum computer. The chapter concludes by discussing techniques to resolve optimization problems using IBM Q.
ACKNOWLEDGMENTS
This work was accomplished on IBM Cloud to leverage its IBM Quantum Experience platform. The author is grateful to IBM for providing access to its IBM Q environment.
REFERENCES
1. Kalyanmoy Deb. Optimization for engineering design: Algorithms and examples. PHI
Learning Pvt. Ltd., 2012.
2. David E. Goldberg and John Henry Holland. Genetic algorithms and machine learning.
Machine Learning, 3: 95–99, 1988.
3. Scott Kirkpatrick, C Daniel Gelatt, and Mario P Vecchi. Optimization by simulated an-
nealing. Science, 220(4598):671–680, 1983.
4. James Kennedy and Russell Eberhart. Particle swarm optimization. In Proceedings of
ICNN’95-International Conference on Neural Networks, volume 4, pages 1942–1948.
IEEE, 1995.
5. Xin-She Yang. Bat algorithm for multi-objective optimisation. International Journal of
Bio-Inspired Computation, 3(5):267–274, 2011.
6. Xin-She Yang and Suash Deb. Cuckoo search via Levy flights. In 2009 World Congress on
Nature & Biologically Inspired Computing (NaBIC), pages 210–214. IEEE, 2009.
7. Guanrong Chen, Trung Tat Pham, and N.M. Boustany. Introduction to fuzzy sets, fuzzy
logic, and fuzzy control systems. Applied Mechanics Reviews, 54(6):B102–B103, 2001.
8. Detlef Nauck, Frank Klawonn, and Rudolf Kruse. Foundations of neuro-fuzzy systems.
John Wiley & Sons, Inc., 1997.
9. https://www.ibm.com/quantum-computing/.
10. Kuk-Hyun Han and Jong-Hwan Kim. Genetic quantum algorithm and its application to
combinatorial optimization problem. In Proceedings of the 2000 Congress on Evolutionary
Computation. CEC00 (Cat. No. 00TH8512), volume 2, pages 1354–1360. IEEE, 2000.
11. Kuk-Hyun Han, Kui-Hong Park, Ci-Ho Lee, and Jong-Hwan Kim. Parallel quantum-
inspired genetic algorithm for combinatorial optimization problem. In Proceedings of the
2001 Congress on Evolutionary Computation (IEEE Cat. No. 01TH8546), volume 2, pages
1422–1429. IEEE, 2001.
12. Florian Neukart, Gabriele Compostella, Christian Seidel, David Von Dollen, Sheir Yarkoni,
and B. Parney. Traffic flow optimization using a quantum annealer. Frontiers in ICT, 4:29,
2017.
13. Mark W Johnson, Mohammad HS Amin, Suzanne Gildert, Trevor Lanting, Firas Hamze,
Neil Dickson, Richard Harris, Andrew J Berkley, Jan Johansson, Paul Bunyk, et al. Quan-
tum annealing with manufactured spins. Nature, 473(7346):194–198, 2011.
14. Kazuki Ikeda, Yuma Nakamura, and Travis S Humble. Application of quantum annealing
to nurse scheduling problem. Scientific Reports, 9(1):1–10, 2019.
15. Edward Farhi, Jeffrey Goldstone, Sam Gutmann, and Michael Sipser. Quantum computation by adiabatic evolution. arXiv preprint quant-ph/0001106, 2000.
16. Elizabeth Crosson, Edward Farhi, Cedric Yen-Yu Lin, Han-Hsuan Lin, and Peter Shor.
Different strategies for optimization using the quantum adiabatic algorithm. arXiv preprint
arXiv:1401.7320, 2014.
17. Prathamesh P Ratnaparkhi and Bikash K. Behera. Demonstration of a quantum calculator on IBM quantum experience platform, DOI: 10.13140/RG.2.2.12661.63209 (2018)
18. Yanhu Chen, Shijie Wei, Xiong Gao, Cen Wang, Jian Wu, and Hongxiang Guo. An opti-
mized quantum maximum or minimum searching algorithm and its circuits. arXiv preprint
arXiv:1908.07943, 2019.
19. Petar Nikolov and Vassil Galabov. Experimental realization of controlled square root of
z gate using ibm’s cloud quantum experience platform. arXiv preprint arXiv:1806.02575,
2018.
20. J. Abhijith, Adetokunbo Adedoyin, John Ambrosiano, Petr Anisimov, Andreas Bärtschi, William Casper, Gopinath Chennupati, Carleton Coffrin, Hristo Djidjev, David Gunter, et al. Quantum algorithm implementations for beginners. arXiv preprint arXiv:1804.03719, 2018.
21. https://qiskit.org/.
22. Robert Wille, Rod Van Meter, and Yehuda Naveh. Ibm’s qiskit tool chain: Working with
and developing for real quantum computers. In 2019 Design, Automation & Test in Europe
Conference & Exhibition (DATE), pages 1234–1240. IEEE, 2019.
23. https://github.com/Qiskit/qiskit-aqua.
24. Karthik Srinivasan, Saipriya Satyajit, Bikash K Behera, and Prasanta K Panigrahi. Efficient
quantum algorithm for solving travelling salesman problem: An IBM quantum experience.
arXiv preprint arXiv:1805.10928, 2018.
25. Abhimanyu Nowbagh and Bikash K. Behera. A quantum approach for solving vehicle routing problem: An IBM quantum experience.
26. Carla Silva, Ines Dutra, and Marcus S Dahlem. Driven tabu search: A quantum inherent
optimisation. arXiv preprint arXiv:1808.08429, 2018.
27. Mitali Sisodia, Abhishek Shukla, Alexandre AA de Almeida, Gerhard W. Dueck, and Anir-
ban Pathak. Circuit optimization for IBM processors: A way to get higher fidelity and
higher values of nonclassicality witnesses. arXiv preprint arXiv:1812.11602, 2018.
28. Alberto Peruzzo, Jarrod McClean, Peter Shadbolt, Man-Hong Yung, Xiao-Qi Zhou, Peter J Love, Alán Aspuru-Guzik, and Jeremy L O'Brien. A variational eigenvalue solver on a photonic quantum processor. Nature Communications, 5(1):1-7, 2014.
29. Nikolaj Moll, Panagiotis Barkoutsos, Lev S Bishop, Jerry M Chow, Andrew Cross, Daniel
J Egger, Stefan Filipp, Andreas Fuhrer, Jay M Gambetta, Marc Ganzhorn, et al. Quantum
optimization using variational algorithms on near-term quantum devices. Quantum Science
and Technology, 3(3):030503, 2018.
30. Lov K Grover. A fast quantum mechanical algorithm for database search. In Proceedings
of the Twenty-Eighth Annual ACM Symposium on Theory of Computing, pages 212–219,
1996.
31. Austin Gilliam, Marco Pistoia, and Constantin Gonciulea. Optimizing quantum search us-
ing a generalized version of Grover’s algorithm. arXiv preprint arXiv:2005.06468, 2020.
32. https://qiskit.org/documentation/tutorials/optimization/index.html.
33. https://www.ibm.com/blogs/research/2020/07/quantum-optim-module/.
34. https://developer.ibm.com/docloud/documentation/optimization-modeling/modeling-for-
python/.
4 Multipartite Adaptive Quantum-Inspired Evolutionary Algorithm to Reduce Power Losses of a Radial Distribution Network
4.1 INTRODUCTION
Over the last few decades, some new approximation algorithms, commonly known as metaheuristics, have emerged with the aim of exploring the search space. Generally, a metaheuristic is defined as an iterative process which guides a heuristic method by combining intelligent concepts for exploiting and exploring the search space. A heuristic optimization algorithm expressed in a metaheuristic framework with different intelligent concepts to explore the search space is also referred to as a metaheuristic. Glover introduced the term metaheuristic in 1986 by combining the Greek prefix meta with heuristic. Heuristic means to find or discover, whereas the prefix meta means beyond or at a higher level [1]. A metaheuristic method allows the local search operators to escape from local optima by generating new initial solutions or by allowing worsening moves for the local search in an intelligent way. High-quality solutions are produced in metaheuristics by introducing a bias in various forms [2].
Metaheuristic methods have demonstrated to the scientific community that they are often feasible alternatives, superior to more traditional methods such as dynamic programming and branch and bound. In comparison with traditional methods, metaheuristics often provide a better trade-off between computing time and solution quality for large and complicated problems. Metaheuristic methods are also more flexible than traditional methods in two different ways. Firstly, they can be adapted to fit most real-life optimization problems in terms of computational time and solution quality, which can vary greatly across different situations. Secondly, they do not impose any demands on the formulation of the optimization problem. Metaheuristic methods are implemented by several commercial vendors in their software as the primary optimization engine.
Metaheuristic algorithms attempt to find the best feasible solution of an optimization problem out of all possible solutions. A series of operations is performed on the optimization problem to search for a better solution. Local search methods and population-based methods are normally employed by metaheuristics to obtain feasible solutions. Local search methods use an iterative process to find the optimal solutions [3]. Population-based methods find the optimal solution by iteratively selecting and then combining existing solutions from a set, usually called a population. The most important member of this class, which mimics the principle of natural evolution, is the evolutionary algorithm (EA). In an EA, the selection operator generally gives direction to the search process by using the Darwinian principle. The solutions for the next iteration are generated through variation operators, like crossover, by recombining the solutions from the current iteration. Local heuristics and the mutation operator are used to improve exploration and exploitation, that is, escaping from local minima and increasing the convergence rate. EAs are popular due to their ease of implementation and are employed for solving difficult and complex optimization problems. However, EAs often suffer from some limitations.
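The mechanics just described fit in a few lines. The sketch below (a deliberately minimal, illustrative EA, not the algorithm proposed in this chapter) shows Darwinian selection, crossover, mutation, and elitist survival for a one-dimensional minimization problem:

import random

def evolve(fitness, n_pop=30, n_gen=100, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(n_pop)]
    for _ in range(n_gen):
        children = []
        for _ in range(n_pop):
            # Darwinian selection: the fitter of two random individuals is a parent
            a = min(random.sample(pop, 2), key=fitness)
            b = min(random.sample(pop, 2), key=fitness)
            w = random.random()
            child = w * a + (1 - w) * b            # crossover recombines parents
            child += random.gauss(0.0, 0.1)        # mutation helps escape local minima
            children.append(min(max(child, lo), hi))
        # elitist survivor selection keeps the fittest individuals
        pop = sorted(pop + children, key=fitness)[:n_pop]
    return pop[0]

print(evolve(lambda x: (x - 1.3) ** 2))   # converges near x = 1.3

In practice such a loop can stagnate or converge prematurely, which motivates the quantum-inspired variants discussed next.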
The Quantum-inspired Evolutionary Algorithm (QiEA) is used to overcome these limitations; it uses a probabilistic representation along with some concepts and operations of quantum computing [4]. It uses a single qubit set with a small population size and is governed by the principles of quantum mechanics [5]. Q-gates are used in QiEA as a variation operator to drive the individuals in the population towards better solutions. In recent times, the Adaptive Quantum-inspired Evolutionary Algorithm (AQiEA) [6], a modified version of QiEA with a measurement operator, has been applied to various engineering optimization problems. AQiEA uses two sets of qubits, whereas QiEA uses a single set of qubits. Recently, AQiEA has been applied to the optimization problems of Distributed Generators (DG) [7]-[8], network reconfiguration [9]-[11], ceramic grinding [12], cost analysis of DG and capacitors [13], siting and sizing of capacitors [14], and simultaneous implementation of both DG and capacitors [15]. In this chapter, we propose the Multipartite Adaptive Quantum-inspired Evolutionary Algorithm (MAQiEA), which is an updated version of AQiEA. MAQiEA improves on both the exploration and exploitation ability of AQiEA by introducing changes in the Rotation towards Better Strategy and the Rotation away from Worse Strategy. In MAQiEA, the Rotation towards Better Strategy of AQiEA is converted into a Rotation Around Better Strategy, as it offers more unrestricted exploration compared to the previous strategy, which allowed exploration only in the improving direction. Similarly, the Rotation away from Worse Strategy in AQiEA involved only two individuals, the best individual and a sequentially selected individual, and was primarily used for exploitation, i.e., searching around the best individual with the help of other individuals in the population; it was therefore bipartite. However, recent algorithms like the Grey Wolf Optimizer (GWO) [16], Symbiotic Organism Search [17], and the Salp Swarm Algorithm (SSA) [18] are shown to be good exploiters, and they tend to use variation operators based on multiple individuals rather than bipartite operators. Therefore, it was decided to augment the exploitation strategy, i.e., Rotation away from Worse, by converting it from the bipartite version in AQiEA into a Multipartite Adaptive Variation operator.
voltage dependent load model (VDLM) to find the power losses incurred in the system. In this study, constant current (CC) and constant impedance (CZ) loads, which vary linearly and with the square of the voltage respectively, are considered. The industrial load (IL), commercial load (CL), and residential load (RL) power requirements vary exponentially with the terminal voltage. In addition to the VDLM, the CP load model is also used in the study. A class of mixed load (ML), i.e., a combination of all loads including both the VDLM and the CP load model, is also investigated to find the power losses incurred in the system.
Vijay Babu et al. [22] used an analytical approach with a real power loss expression to find the optimal location and capacity of DG. An investigation has been performed with four different scenarios to minimize the losses with different types of DGs. All classes of mixed DGs, viz., Type-I (which injects only active power), Type-II (which injects both active and reactive power), and Type-III (only reactive power), are also used in this study as the fourth scenario. Prakash and Kathod [23] presented an analytical technique to reduce losses with the implementation of a single DG. In this study, optimization of DG reduces the magnitude of the active and reactive power components. A hybrid technique, i.e., a combination of both analytical and metaheuristic techniques (genetic algorithm), is used in ref [24] to determine the location and capacity of DG. An investigation has been performed on the DG's mode of operation; two different scenarios are used with different power factors. Ahmed et al. [25] used a linearized model to estimate the optimal capacity of DG with graph flow and a Kalman filter. The optimal size of DG is obtained with a two-stage method, where graph flow is used to create a linear model and the Kalman filter is used to find the optimal size of DG. Mahmoud et al. [26] presented an analytical approach with integration of DG in the distribution system to reduce losses. Multiple DGs are installed in the system with optimal power factors, and four different scenarios are used to maximize the percentage power loss reduction with different types of DGs.
Khoa et al. [27] proposed an optimization technique known as the one rank cuckoo search algorithm (ORCSA). ORCSA is used to solve the combinatorial optimization problem of DG by finding its optimal location and capacity with different power factors; multiple objective optimizations are used in the study. Similar to the above, Sultana et al. [16] studied the effect of DG allocation on the distribution system with the grey wolf optimization algorithm with the objective of reducing the losses. Yifei et al. [28] studied the impact of DG in the DN with a loss sensitivity factor (LSF), which is used to determine the optimal allocation of DG. Mistry [29] used two different optimization techniques to reduce the losses in a DN with the implementation of multiple DGs. Carvalho and Niraldo [30] used an optimization technique known as Ant Colony Optimization for optimizing DG with the same objective as mentioned above. Vizhiy and Santhi [31] presented a multiobjective optimization problem with DG to reduce the losses. Biogeography-based optimization (BBO) is used to find the optimal placement and capacity of DG. Ali et al. [32] used an evolutionary algorithm technique called Ant Lion Optimization (ALO) to find the optimal allocation of DG. The algorithm is based on the hunting behaviour of antlions; a multiobjective approach is used to reduce the losses while improving the voltage profile.
Mahajan and Vadhera [33] used the particle swarm optimization technique with the objective of minimizing the losses in the DN by finding the ideal location of DG with optimal size. An investigation has been performed to find the improvement in the voltage profile with multiple weight factors. Snigdha and Panigrahi [34] studied the effect of DG in the distribution system with a multiobjective differential evolution algorithm to maximize the benefits for DG owners and utilities by minimizing the power losses under different scenarios. A chaotic symbiotic organism search algorithm with a multi-objective optimization problem is used in ref [35] to minimize the losses in a DN by allocating the DG at the optimal location with optimal capacity. Benchmark test bus systems are also used to show the effectiveness of CSOS. The power loss reduction index is considered the primary objective to reduce the losses with implementation of DG; Selective Particle Swarm Optimization (SPSO) determines the optimal placement and capacity of DG [36]. Sarfaraz et al. [37] used the same algorithm to find the optimal location and capacity of DG with the same objective. Multiple DGs with small ratings are used in [36], whereas ref [37] uses a single DG with high operating power (size). Power loss reduction with multiple DGs is high in comparison with a single DG.
Devang and Ritesh [38] studied the effect of DG with different power factors, i.e., lagging power factor, 0.8 power factor, and unity power factor, on the distribution system at various locations. In this study, two DGs of small size are employed to minimize the losses. Jamian et al. [39] used the gravitational search algorithm to reduce the losses in the DN by placing the DG at its optimal location with optimal capacity. Zhang and Bo [40] studied the impact of DG in a radial DN on power loss minimization. Simulation results conclude that DG should be placed at the end of the line when it is operating with a low rating and at the middle of the line when it is operating with a high power rating. DG should be placed nearer to the substation when it has a high operating power greater than the load demand. An optimization technique known as the Genetic Algorithm is used to find the placement and capacity of DG with the objective of minimizing the power losses [41]. Ang et al. [42] used a new metaheuristic technique, viz., the sine cosine algorithm, to find the optimal placement and capacity of single and multiple DGs in the DN with the objective of improving the voltage profile in the network and maximizing the percentage power loss reduction in the system. Simultaneous implementation of DG and capacitors has also been used by some authors to reduce the power losses. In such studies, DG injects only active power with unity power factor into the system, whereas the capacitor injects only reactive power with zero power factor into the system; injection of both powers into the system results in a higher reduction in power losses as compared with independent implementation. However, independent implementation of DG gives a higher reduction in power losses in comparison with independent implementation of capacitors. Simultaneous implementation of both DG and capacitors incurs high investment, maintenance, and operation costs. In our study, only independent implementation of DG is considered.
It has been observed from the above literature that only the constant power load, which does not vary with voltage, is used. In a distribution network, consumers use different load models with different ratings. The majority of the load used in a distribution network is dependent on voltage; however, the CP load model is independent of voltage. If the optimal placement and capacity of DG obtained with the CP load model are used in a practical distribution network, more power losses are induced into the system due to the improper location and sizing of DG. Some authors have used load models other than the CP load. Roy et al. [43] studied the impact of DG in a DN on different load models with the voltage profile. An initial investigation has been performed by analyzing the impact of static load (CP load) on the DN. Dynamic analysis shows that the composite load model has high voltage dips and the CZ load model has low voltage dips. Oscar et al. [44] used a novel approach to minimize the power losses by varying the load model, i.e., a twenty-four-hour load model is used for optimization of DG while keeping the voltages within the limits. Divya and Srinivasan [45] studied the effect of DG under fault conditions with a simple radial distribution system. The Voltage Stability Index (VSI) is used to find the optimal allocation of DG, whereas its optimal capacity is obtained with Particle Swarm Optimization. Aashish et al. [46] studied the effect of practical load models with integration of DG in the DN. Several performance indices are developed as a multi-objective function. Genetic algorithm and particle swarm optimization are also used to determine the optimal location and size of DG. Das et al. [47] investigated the effect of DG on a VDLM, i.e., a residential time-varying load, to reduce the power losses. A sensitivity-index-based method is used to find the optimal placement of DG with variation in load, whereas the Genetic Algorithm determines the optimal capacity of DG.
In this study, an investigation has been performed to study the effect of DG with variation in load. Different types of load models, which depend on exponential characteristics of the node voltage, are shown in Figure 4.3. Optimal placement and sizing of DG is a non-differentiable, combinatorial, complex optimization problem. The Multipartite Adaptive Quantum-inspired Evolutionary Algorithm (MAQiEA) is used to find the optimal location and capacity of DG for the VDLM. MAQiEA is the updated version of AQiEA. In AQiEA, three rotation strategies are used to converge the population towards the global optimum, whereas MAQiEA uses probabilistic rotation around better and multipartite rotation away from worse rotation strategies, which provide relatively better exploration and exploitation. MAQiEA has high robustness and better exploitation and exploration of the search space in comparison with AQiEA, as shown by the test results.
Voltage Stability Index: A distribution network has a complex structure; poor voltage regulation and high power losses are observed in the system at the end nodes. In order to maintain the system voltage within acceptable limits, the voltage stability index is considered.

Table 4.1
Exponent Values of Different Voltage Dependent Loads

Load Type                  µ      γ
Constant Power Load        0      0
Industrial Load            0.18   6.0
Residential Load           0.92   4.04
Commercial Load            1.51   3.40
Constant Impedance Load    2      2
Constant Current Load      1      1
Power loss minimization in a distribution network with optimal placement and sizing of DG is an interesting and challenging area of research. The optimal location and capacity of DG minimize power losses and improve the voltage profile in the system. The power injected at a particular bus m is given as follows.
c) Power injection: The total power injected by the substation and the Distributed Generators has to meet the power demand at the load centers, including losses.

P_Substation + Σ_{i=1}^{n} P_DG,i < P_Demand + P_loss    (4.8)
d) Voltage limit: The optimal location and capacity of DG not only reduce the power losses but also improve the voltage profile in the system. After installing DG at the optimal location with optimal capacity, the voltage has to remain within permissible limits.

Vmin < V < Vmax    (4.9)
Sk = Pk + jQk (4.10)
For the sample network, the branch currents are related to the equivalent bus current injections as

[B1]   [1 1 1 1 1 1 1] [I2]
[B2]   [0 1 1 0 0 0 0] [I3]
[B3]   [0 0 1 0 0 0 0] [I4]
[B4] = [0 0 0 1 1 1 1] [I5]
[B5]   [0 0 0 0 1 1 1] [I6]
[B6]   [0 0 0 0 0 1 0] [I7]
[B7]   [0 0 0 0 0 0 1] [I8]

The relation between the equivalent current injections and branch currents is expressed as

B = BIBC · I    (4.13)
Algorithm for formulation of BIBC Matrix:
Step I: If a distribution network has u branch sections and v bus sections, a null matrix of dimension u × (v−1) is created.
Step II: If a line section (B_l) is located between bus m and bus n, copy the column for bus m of the BIBC matrix to the column for bus n, and add a value of +1 at the position (row l, column for bus n). For example, if line section (B5) is located between bus 6 and bus 8, copy the 6th column of the BIBC matrix to the 8th column and add +1 at row 5, column 8 of the BIBC matrix.
Step III: The above process is repeated for all branch line sections in the distribution network.
The above process of copying columns from one column to another and adding the value +1 is shown in Figure 4.5.
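Steps I-III translate directly into code. The sketch below (illustrative; the branch list is a hypothetical radial feeder chosen to reproduce the 7 x 7 matrix printed above, with bus 1 as the substation) builds the BIBC matrix with NumPy:

import numpy as np

def build_bibc(num_buses, branches):
    # branches: ordered list of (sending bus m, receiving bus n), 1-indexed
    u = len(branches)
    bibc = np.zeros((u, num_buses - 1), dtype=int)   # Step I: u x (v-1) null matrix
    for l, (m, n) in enumerate(branches):
        if m > 1:                          # Step II: copy column of bus m to bus n
            bibc[:, n - 2] = bibc[:, m - 2]
        bibc[l, n - 2] = 1                 # add +1 at (row l, column of bus n)
    return bibc                            # Step III: the loop covered every branch

branches = [(1, 2), (2, 3), (3, 4), (2, 5), (5, 6), (6, 7), (6, 8)]
print(build_bibc(8, branches))             # reproduces the matrix shown above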
Formation of BCBV Matrix:
The branch-current to bus-voltage (BCBV) relationship is obtained by applying KVL to the simple radial distribution network shown in Figure 4.6.
From the above equation, it has been observed that the bus voltage can be expressed as a function of the substation voltage, the branch currents, and the line parameters. The direct distribution load flow is used to find the total power losses induced in the system, the bus voltages, and the individual power losses for each branch. The step-by-step process of the direct load flow is given as follows.
Step I: Read the initial line data and load data of the distribution network.
4.5 ALGORITHM
In recent times, metaheuristics are mainly used to solve several optimization problems. Metaheuristic methods have demonstrated to the scientific community that they are often feasible alternatives, superior to more traditional methods such as dynamic programming and branch and bound. In comparison with traditional methods, metaheuristics often provide a better trade-off between computing time and solution quality for large and complicated problems. In an EA, individuals compete with one another. The fittest individual in the population moves forward to the next generation. It acts as a parent and again competes with its children in the next generation; the fittest among them moves forward to further generations. This process repeats until convergence is reached. EAs often suffer from some major limitations, i.e., stagnation, sensitivity to the choice of parameters, and premature or slow convergence. QiEA overcomes the above limitations by creating a good balance between exploration and exploitation. QiEA is designed by integrating principles of quantum mechanics, viz., measurement, entanglement, superposition, and interference, into the current framework of the EA. It is proposed to solve difficult combinatorial and non-differentiable optimization problems. In this study, the Multipartite Adaptive Quantum-inspired Evolutionary Algorithm (MAQiEA) is used to solve a non-linear large-scale optimization problem. AQiEA [9] is different from QiEA: AQiEA uses two qubits per solution vector, whereas the Quantum-inspired Evolutionary Algorithm uses a single qubit. In the Adaptive Quantum-inspired Evolutionary Algorithm, the smallest information element in a quantum computer is the quantum bit (qubit), analogous to the classical bit. The basis states are represented in Hilbert space by the vectors |0⟩ and |1⟩. The qubit can be represented by a vector |C⟩, defined as

|C⟩ = A|1⟩ + B|0⟩

where A and B are complex numbers which specify the probability amplitudes associated with the states |1⟩ and |0⟩, respectively, and should satisfy the condition

|A|² + |B|² = 1    (4.18)

where |A|² and |B|² specify the probabilities of the qubit being in states |1⟩ and |0⟩, respectively.
The proposed Multipartite Adaptive Quantum-inspired Evolutionary Algorithm employs two qubits: the first qubit is used to store the solution vector of design variables, and the second qubit is used to store the scaled and ranked objective function value [8]. The classical implementation of the entanglement principle is mathematically represented as follows.

|C2i(t)⟩ = f1(|C1i(t)⟩)    (4.19)
|C1i(t + 1)⟩ = f2(|C2i(t)⟩, |C1i(t)⟩, |C1j(t)⟩)    (4.20)
where |C2i⟩ is the ith vector of the second qubit, |C1i⟩ is the ith solution vector of the first qubit, |C1j⟩ is the jth solution vector of the first qubit, t is the iteration number, and f1 and f2 are the functions through which both qubits are classically entangled. The second qubit is used as feedback in the parameter-/tuning-free adaptive quantum-inspired rotation crossover operator. A1,i is the probability amplitude of the scaled value of the ith variable in the ith qubit. The variables are scaled between upper and lower limits; the limits are taken as zero and one. The qubits are stored in quantum registers. The number of variables is equal to the number of qubits per quantum register Qi. The structure of Qi is shown below:

Q1,1 = [A1,1,1, A1,1,2, ..., A1,1,n]
..................
Q1,m = [A1,m,1, A1,m,2, ..., A1,m,n]
The second set of qubits in quantum register Qi+1 is used to store the scaled and ranked objective function value of the corresponding solution vector in Qi. The fittest vector's objective function value is assigned 1, whereas the worst vector's objective function value is assigned 0 in the second qubit set. The objective function values of the remaining solution vectors in the second qubit set are also ranked in the range of zero and one.
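A small sketch of this ranking step (illustrative Python; the sample values are arbitrary) maps the objective values of a minimization problem onto [0, 1], with the fittest vector assigned 1 and the worst assigned 0:

import numpy as np

def ranked_amplitudes(objective_values):
    # rank 0 = smallest (fittest) value for a minimization objective
    f = np.asarray(objective_values, dtype=float)
    ranks = f.argsort().argsort()
    return 1.0 - ranks / (len(f) - 1)     # fittest -> 1, worst -> 0

print(ranked_amplitudes([224.9, 71.9, 158.9, 188.7]))  # -> [0. 1. 0.667 0.333]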
Let nv represent the number of variables used to solve the optimization problem of DG for optimal placement and capacity of DG, with total population np.
[ Q1  ]   [ X1,1    ...   Y1,nv  ]
[ ...  ] = [ ...     ...   ...    ]    (4.21)
[ Qnp ]   [ Xnp,1   ...   Ynp,nv ]
Minimization of power losses with optimal location and sizing of DG is considered the main objective. The solution vector for solving the above-mentioned objective is represented in Figure 4.8.
Figure 4.8: Solution vector representation for DG with optimal locations and sizes.
Three rotation strategies are applied to converge the population adaptively towards the global optimum.
Rotation towards the Best Strategy (R-I): All the solution vectors in the population are rotated towards the best solution vector. It is expected that better candidate solutions will be found for all other vectors by rotating the remaining solution vectors towards the best solution vector.
Rotation Around the Better Strategy (R-IIA): This strategy is primarily used for exploration: two individuals are randomly selected and the search takes place around the better individual. The direction and the magnitude of the search region are determined by the relative fitness represented in the second set of qubits and the relative position of the two individuals stored in the first set of qubits.
Multi-Parent Rotation away from Worse (R-IIIM): This is inspired by the multi-parent strategy, which has previously been used by some metaheuristics such as Grey Wolf Optimization [16] and Symbiotic Organism Search [17]. These metaheuristics are known for their exploitation, so R-IIIM now employs the best individual, a sequentially selected individual, a randomly selected individual, and the worst individual in the population.
The flow chart of the proposed algorithm with direct load flow is shown in Figure 4.9. The pseudo code of the Multipartite Adaptive Quantum-inspired Evolutionary Algorithm is shown below:
Pseudo Code:
Initialization
Np = number of quantum registers, i.e., quantum-inspired registers Q1
for i = 1 : Np
    Q1(i) = rand(0, 1);
Do
    Measurement Operator
    for i = 1 : Np
        if rand(0, 1) < (Q1(i))^2
            Qm(i) = (Q1(i))^2
        else
            Qm(i) = 1 - (Q1(i))^2
    Fitness calculation
    for i = 1 : Np
        var(i) = BackTransform(Qm(i))
        fitness_function(i) = DLF_PF(var(i))
    Assign Q2 using the fitness level of the solution vectors of Q1
    Apply the adaptive quantum-based crossover operator using Q1 and Q2 to generate Q1c
    Elitist selection between Q1c and Q1
While (!termination criteria)
Description:
1. The population size, number of variables, and maximum number of iterations are initially assigned for the quantum register, i.e., the quantum-inspired register Q1 is initialized randomly.
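As an illustration, the measurement operator from the pseudo code above can be rendered as runnable Python (a sketch with hypothetical names; NumPy stands in for the register arithmetic):

import numpy as np

rng = np.random.default_rng(1)

def measure(q1):
    # Qm(i) = Q1(i)^2 if rand(0, 1) < Q1(i)^2, else 1 - Q1(i)^2
    q1 = np.asarray(q1)
    p = q1 ** 2
    return np.where(rng.random(q1.shape) < p, p, 1.0 - p)

q1 = rng.random(5)       # randomly initialised quantum-inspired register Q1
print(measure(q1))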
Figure 4.9: Flow chart for the proposed algorithm with Load Flow.
Table 4.2
Initial Load Data of the Test Bus System

Particular                              Value (69 Bus System)
Total Active Power Demand (MW)          3.81
Total Reactive Power Demand (MVAr)      2.694
Buses                                   1 - 69
Sectionalizing switches                 1 - 68
Tie line switches                       69 - 73
Maximum power rating of DG (MW)         1
bus system are considered as candidate nodes for the optimal location and capacity of multiple DGs, except the substation node. All the parameters in the system, i.e., the line data and load data of the benchmark test bus system, are converted into per units (p.u.) for calculation purposes. Experimental results on the test bus systems are carried out in a MATLAB environment hosted on an Intel Core i3 CPU computing machine with 4 GB RAM @ 1.80 GHz. MAQiEA has shown better performance compared with other algorithms available in the literature. The initial data of the medium voltage test bus system, viz., line data and load data, are shown in Table 4.2; the total real and reactive power demand of the system, i.e., the active and reactive loads on the network, is 3801 kW and 2694 kVAr. The 69 bus system consists of sixty-nine buses, with 68 sectionalizing switches and 5 tie-line switches, numbered 1 to 68 and 69 to 73, respectively. The active and reactive power losses of the constant power load model with normally open switches 69, 70, 71, 72 and 73, i.e., without opening any tie-line switches and without DG implementation, are 224.94 kW and 102.12 kVAr, with a minimum voltage of 0.9092 p.u. The parameters used for testing MAQiEA are given in Table 4.3. First of all, testing has been performed to validate the changes made in the original AQiEA [8] to arrive at MAQiEA. Thereby, testing was done to validate the design decision of changing the Rotation Towards Better Strategy in AQiEA [7] to the Rotation Around Better Strategy (R-IIA). Further, testing was also performed to validate the design decision of changing the Bipartite Rotation away from Worse Strategy in AQiEA [8] to the Multipartite Rotation away from Worse Strategy in MAQiEA. The constant power load model is considered to validate the results.
Tests for validating the design decision for the Rotation Around Better Strategy (R-IIA): The R-II in AQiEA generated a random variable in [0, 1], as is done in the majority of EAs, and so it was termed a Rotation towards Better strategy. However, in R-IIA, a random variable is generated in [-1, 1], and it is termed the Rotation Around Better Strategy in MAQiEA. We arrived at [-1, 1] after thorough investigations, which were performed to find the power losses incurred in the system with different search intervals other than [0, 1]. Seven different search intervals have been investigated to arrive at the best performing one. In Case I, the
Table 4.3
Parameters of Different State-of-the-Art Techniques

GA       Population size = 50; Number of generations = 100; Mutation probability = 0.02; Crossover probability = 0.8
PSO      Population size = 50; Acceleration factors C1 = C2 = 2; Inertia weights Wmax = 0.9, Wmin = 0.4
ALO      Number of agents N = 50; Itermax = 200
GSA      Number of agents N = 50; Itermax = 200
MAQiEA   Population size = 50; Itermax = 200
Table 4.4
Wilcoxon Signed Rank Test on MAQiEA with Different Cases
Comparison R+ R− PValue Hypothesis
Case-I Vs Case-II 0 465 0 H0 : Case-I ≥ Case-II
H1 : Case-I ≤ Case-II
Case-I Vs Case-III 26 439 0 H0 : Case-I ≥ Case-III
H1 : Case-I ≤ Case-III
Case-I Vs Case-IV 26 439 0 H0 : Case-I ≥ Case-IV
H1 : Case-I ≤ Case-IV
Case-I Vs Case-V 0 465 0 H0 : Case-I ≥ Case-V
H1 : Case-I ≤ Case-V
Case-I Vs Case-VI 71 394 0.0004 H0 : Case-I ≥ Case-VI
H1 : Case-I ≤ Case-VI
Case-I Vs Case-VII 0 465 0 H0 : Case-I ≥ Case-VII
H1 : Case-I ≤ Case-VII
search interval limits are [-1, 1]; for Case II the limits are [0, 1]; similarly, the remaining search intervals for Case III to Case VII are [-0.5, 1], [-1, 0.5], [-0.25, 1], [-0.25, 0.25] and [0.25, 0.5], respectively. The proposed algorithm is tested with the different search intervals to minimize the fitness function. It was observed that [-1, 1] gives the minimum fitness value in comparison with the others. The Wilcoxon signed rank test is performed between Case I and the other cases to test the statistical significance of Case I. In the Wilcoxon signed rank test, two hypotheses are created, the Null Hypothesis and the Alternate Hypothesis, i.e., H0 and H1, respectively, and the significance level α is 0.05, which is compared with the P-value to arrive at a conclusion [53]-[54], as shown in Table 4.4. Table 4.4 shows the pairwise comparison of Case-I with the other cases. Case-I shows significant improvement over the other cases at the level of significance α = 0.05. It has been observed from the tabulated results that the null hypothesis is rejected based on the P-value, i.e., the level of significance; hence Case I is the best performing search interval.
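The same test is available off the shelf; the sketch below (with made-up per-run losses, for illustration only) applies SciPy's Wilcoxon signed rank test to two paired samples at α = 0.05:

from scipy.stats import wilcoxon

case_i  = [71.88, 71.92, 71.85, 71.90, 71.87, 71.95]   # hypothetical losses, [-1, 1]
case_ii = [73.41, 73.52, 73.39, 73.60, 73.45, 73.48]   # hypothetical losses, [0, 1]

# H0: Case-I >= Case-II versus H1: Case-I < Case-II
stat, p_value = wilcoxon(case_i, case_ii, alternative='less')
print(stat, p_value, p_value < 0.05)   # reject H0 when p_value < 0.05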
Table 4.5
Performance Analysis of MAQiEA with Other Alternatives
AQiEA MAQiEA1 MAQiEA2 MAQiEA
St. Dev 0.6573 0.0908 0.0984 0.0821
Average 73.4776 71.9906 71.9624 71.8834
Tests for validating the design decision for the Multipartite Rotation away from Worse Strategy: In MAQiEA, a novel rotation strategy, i.e., Multi-Parent Rotation away from Worse (R-IIIM), is used, which now employs the best individual, a sequentially selected individual, a randomly selected individual, and the worst individual in the population, whereas R-III in AQiEA had used only the best individual and a sequentially selected individual. In order to show the effectiveness of MAQiEA, two algorithms other than AQiEA are incorporated in the study, viz., MAQiEA1 and MAQiEA2. MAQiEA1 uses a tripartite multi-parent R-III strategy with the best individual, a sequentially selected individual, and a randomly selected individual from the population, whereas MAQiEA2 uses a tripartite multi-parent strategy with the best individual, a sequentially selected individual, and the worst individual in the population. Each algorithm is analysed over thirty independent runs. Based on the performance of these runs, the minimum power loss, maximum power loss, average power loss, and standard deviation are calculated. In addition, the Wilcoxon signed rank test is also used to validate the results statistically. The null hypothesis and alternate hypothesis are represented as H0 and H1 for each test case. Table 4.5 shows the power losses, i.e., the average power losses and standard deviations, after thirty independent runs. Table 4.6 shows the pairwise comparison of MAQiEA and the other algorithms. It has been observed from the tabulated results that the null hypothesis is rejected based on the P-value, i.e., the level of significance. That is, MAQiEA is the best performing version on the basis of the average and standard deviation as well as on the basis of the Wilcoxon signed rank test. For the constant impedance load model, the total active and reactive power losses induced in the system before installing DG are 188.6958 kW and 86.5806 kVAr, with a minimum voltage of 0.9174 p.u. Similarly, for the constant current load model, the real and reactive power losses obtained for the base case, i.e., without implementation of DG, are 158.878 kW and 73.767 kVAr, with a minimum VSI of 0.7305 p.u. For the industrial load model, the load used in the test bus system is totally industrial. Similarly, for the residential and commercial load models, the load used in the system is purely residential and commercial, respectively. In the case of the constant impedance and constant current load models, the total load varies with the square of the voltage and linearly with the voltage, respectively. The power losses obtained for the practical load models without implementing DG for the industrial, residential and commercial loads are 171.4316 kW, 164.9382 kW and 157.0083 kW, respectively. A class of mixed load is also considered in the study, which incorporates all voltage-dependent load models including the constant power load, which is independent of voltage. The total active and reactive power losses obtained before implementing DG are 208.8178 kW and 95.0525 kVAr, respectively.
Table 4.6
Wilcoxon Signed Rank Test on MAQiEA with Other Algorithms
Comparison            R+    R−    p-value   Hypothesis
AQiEA vs MAQiEA1      464   1     0         H0: AQiEA ≤ MAQiEA1; H1: AQiEA > MAQiEA1
AQiEA vs MAQiEA2      464   1     0         H0: AQiEA ≤ MAQiEA2; H1: AQiEA > MAQiEA2
AQiEA vs MAQiEA       464   1     0         H0: AQiEA ≤ MAQiEA; H1: AQiEA > MAQiEA
MAQiEA1 vs MAQiEA     438   27    0         H0: MAQiEA1 ≤ MAQiEA; H1: MAQiEA1 > MAQiEA
MAQiEA2 vs MAQiEA     366   99    0.03      H0: MAQiEA2 ≤ MAQiEA; H1: MAQiEA2 > MAQiEA
The parameters used in the algorithms are given in Table 4.3. For the constant power load model, the tabulated results in Tables 4.7, 4.8 and 4.9 demonstrate that MAQiEA achieves the highest reduction, bringing the power losses down to 71.7497 kW and 35.9981 kVAr with a minimum voltage of 0.979 p.u., by implementing DG at the optimal locations 60, 61 and 17 with optimal sizes of 1 MW, 774 kW and 509 kW, respectively. GA, in contrast, achieves the smallest reduction in power losses among all the algorithms, with losses of 89.737 kW and 43.2099 kVAr and a minimum voltage of 0.9093 p.u. at the optimal locations 61, 12 and 23 with capacities of 1 MW, 520 kW and 500 kW, respectively. ALO reduces the power losses to 78.1456 kW and 38.5394 kVAr with the optimal locations 68, 62 and 63 and capacities of 361 kW, 933 kW and 935 kW. Placing DG at the optimal location with the optimal capacity not only reduces the power loss but also improves the voltage profile of the system; MAQiEA achieves a better improvement in voltage profile than GA, PSO, GSA and ALO.
For the constant impedance load, the proposed algorithm achieves the highest reduction, bringing the power losses down to 61.3645 kW and 31.4816 kVAr by implementing DG at buses 61, 16 and 60 with capacities of 639 kW, 486 kW and 1 MW, respectively. After the proposed algorithm, ALO achieves the next-best reduction, with losses of 65.3053 kW and 32.8232 kVAr at locations 61, 68 and 62 and sizes of 1 MW, 410 kW and 750 kW. MAQiEA achieves the highest improvement in VSI in comparison with GA, GSA, PSO and ALO. GA achieves a better improvement in voltage profile and VSI than PSO and GSA; however, in this case too, GA achieves the smallest loss reduction, with losses of 77.8474 kW and 38.6822 kVAr at DG locations 65, 55 and 61 with capacities of 976 kW, 940 kW and 467 kW, respectively.
Similarly, for the constant current load, ALO achieves the highest reduction in power loss after MAQiEA, which attains the maximum power loss reduction among all the algorithms. The overall active and reactive power losses obtained after implementing DG with ALO are 59.1839 kW and 30.0164 kVAr at locations 64, 60 and 68 with optimal capacities of 736 kW, 691 kW and 900 kW, respectively.
Table 4.7
Comparative Analysis of MAQiEA with Other Algorithms
             Base Case   GA             PSO             GSA                      ALO                      MAQiEA
Location     ...         61, 12, 23     68, 57, 61      60, 63, 56               68, 62, 63               60, 61, 17
Size (MW)    ...         1, 0.52, 0.5   0.87, 1, 0.88   0.7964, 0.9312, 0.9657   0.3607, 0.9326, 0.9351   1, 0.7733, 0.5084

Table 4.9
Comparative Analysis of MAQiEA with Other Algorithms (Contd...)
                              Base Case   GA             PSO             GSA                      ALO                      MAQiEA
Location                      ...         61, 12, 23     68, 57, 61      60, 63, 56               68, 62, 63               60, 61, 17
Size (MW)                     ...         1, 0.52, 0.5   0.87, 1, 0.88   0.7964, 0.9312, 0.9657   0.3607, 0.9326, 0.9351   1, 0.7733, 0.5084
Residential Load Ploss (kW)   164.9382    56.1564        48.5161         54.8719                  45.0331                  39.6854
PSO and GSA produce active power losses of 63.5 kW and 66.99 kW, respectively. GA achieves the smallest reduction, with losses of 71.1685 kW and 31.9649 kVAr at locations 27, 64 and 49 with optimal capacities of 717 kW, 972 kW and 800 kW. The maximum percentage power loss reduction is obtained with MAQiEA, whose losses are 52.6348 kW and 27.6494 kVAr with a voltage profile improvement to 0.9832 p.u. MAQiEA also achieves the highest improvement in VSI among all the algorithms: the minimum VSI of MAQiEA is 0.9264 p.u., ALO has a minimum VSI of 0.924 p.u., followed by GSA with 0.8748 p.u., GA with 0.8694 p.u. and PSO with a minimum VSI of 0.8655 p.u. An improvement in voltage profile is also observed after installing DG: the proposed algorithm achieves the highest improvement in voltage profile, to 0.9832 p.u., while ALO improves the voltage profile to 0.9826 p.u.
For the industrial load model, the performance of MAQiEA is better in all respects, i.e., real power loss, reactive power loss, voltage profile and VSI. The tabulated results demonstrate that MAQiEA achieves the highest reduction in power loss in comparison with the other algorithms: the overall real and reactive power losses incurred in the system after implementing DG are 30.7966 kW and 18.2978 kVAr at locations 16, 61 and 60 with optimal capacities of 514 kW, 714 kW and 1 MW, and with a minimum VSI and voltage profile of 0.943 p.u. and 0.9864 p.u., respectively. ALO, GSA, PSO and GA attain minimum power losses of 39.7631 kW, 44.62 kW, 43.83 kW and 49.24 kW, respectively. The smallest improvement in voltage profile, 0.9692 p.u., is observed with GA, whereas GSA has the lowest minimum VSI of 0.862 p.u.
Similarly, for the residential and commercial loads, the power losses obtained after installing DG with MAQiEA are 39.6854 kW and 43.1541 kW, respectively, while GA attains losses of 56.1564 kW and 58.0609 kW. For the residential load, the optimal locations and capacities of DG with MAQiEA are 60, 16 and 61 and 1 MW, 491 kW and 583 kW; for the commercial load, the optimal placement and sizing of DG with the proposed algorithm are 61, 16 and 60 and 578 kW, 485 kW and 967 kW, respectively. MAQiEA achieves the maximum reduction in power loss compared with the other algorithms for the residential and commercial load models. For the residential load model, ALO reduces the power losses to 45.033 kW and 24.113 kVAr with locations 68, 63 and 62 and sizes of 508 kW, 789 kW and 1 MW, respectively. Similarly, for the commercial load model, ALO attains active and reactive power losses of 47.7983 kW and 25.1554 kVAr with optimal placement and sizing of DG at 62, 68 and 63 with 602 kW, 816 kW and 846 kW, and with a minimum voltage profile of 0.984 p.u. and a VSI of 0.9312 p.u. For the mixed load model, the integration of all voltage-dependent load models is considered; the load at every bus is given in the appendix. MAQiEA achieves improvements in voltage profile and VSI to 0.9808 p.u. and 0.9242 p.u., whereas the other algorithms, ALO, GSA, PSO and GA, attain minimum voltage profiles of 0.9823 p.u., 0.9758 p.u., 0.9727 p.u. and 0.9736 p.u., and minimum VSIs of 0.9281 p.u., 0.9054 p.u., 0.8898 p.u. and 0.8986 p.u., respectively. The minimum power losses are obtained with MAQiEA, at 62.3229 kW and 31.7996 kVAr, with locations 61, 16 and 60 and capacities of 748 kW, 532 kW and 1 MW, respectively.
The optimal location and capacity of DG not only reduce the power losses but also improve the voltage profile. The improvement in voltage profile for the constant power, constant impedance, constant current, industrial, residential, commercial and mixed load models is shown in Figure 4.11 for all algorithms, including the base case. Similarly, the improvement in VSI for the constant power, constant impedance, constant current, industrial, residential, commercial and mixed load models is shown in Figure 4.12 for all algorithms, including the base case.
Figure 4.13 shows the overall power loss comparison of MAQiEA with ALO, GSA, PSO, GA and the base case. The proposed algorithm achieves a high reduction in power loss for all load models, including the constant power load. Figure 4.14 shows the overall improvement in voltage profile of MAQiEA for all the different load models, and Figure 4.15 shows the overall improvement in VSI of MAQiEA for all the different load models.
Figure 4.13: Comparison of power loss of MAQiEA with other algorithms for all load models.
Figure 4.14: Voltage profile improvement of MAQiEA for all load models.
Discussions:
The distribution network accounts for the largest share of power losses in a power system, compared with the generation and transmission systems. Distributed generators are normally employed in the DN to reduce the power losses. The majority of researchers have implemented DG in the DN with a CP load model; however, it is well known that the load at the DN varies from time to time. In this study, an investigation has been performed with DG on different load models to reduce the losses, using voltage-dependent loads. The CP load model is independent of voltage, i.e., it does not vary with the voltage, whereas most of the loads used by consumers at load centers are dependent on voltage.
Figure 4.15: Voltage Stability Index of MAQiEA for all Load Models.
AQiEA was an improvement over QiEA. MAQiEA does not require any additional operator to
avoid premature convergence. QiEA uses a single Q-bit, whereas AQiEA used two Q-bits. Q-gates are used in QiEA to move the system towards convergence, whereas MAQiEA uses a Multipartite Adaptive Crossover operator for better convergence. AQiEA used three rotation strategies to move the search towards better solutions: bipartite Rotation towards Best, Rotation towards Better and Rotation away from Worse. MAQiEA instead uses the improved probabilistic Rotation Around Better and Multipartite Rotation away from Worse strategies, which provide relatively better exploration and exploitation, in addition to the Rotation towards Best strategy of AQiEA. As shown by the test results, MAQiEA is highly robust and achieves better exploitation and exploration of the search space than AQiEA. The Wilcoxon signed rank test has been used to arrive at the best design of MAQiEA amongst the various alternatives. It has been observed from the tabulated results that, for the voltage-dependent load models, the location of DG is fixed for all load models except the constant power and constant current loads. In the case of IL, CL and RL, the load on the system varies exponentially with the voltage level at the node. The optimal location of DG for the practical loads is fixed for MAQiEA, whereas the other algorithms yield different optimal locations for different load models; the robustness of the proposed algorithm is therefore very high. The results of the simulated experiments in the tables demonstrate that MAQiEA performs better than the other algorithms (GA, GSA, PSO and ALO).
4.7 CONCLUSIONS
Minimization of power loss in a DN is one of the challenging areas of research for the distribution utilities. In recent times, power losses have been reduced by integrating DGs into the distribution network; however, the majority of research on this important optimization problem has been done with the CP load model. The majority of consumers at load centers use VDLMs such as CZ, CC, IL, RL and CL, whereas the CP load model is independent of voltage. If the optimal placement and capacity of DG obtained with the CP load model are used on a practical distribution system, they induce high power losses and poor voltage regulation. In this study, an investigation has been performed to reduce the losses in the distribution system with DG for different VDLMs. Finding the optimal location and capacity of DG is a difficult non-differentiable, non-linear, complex combinatorial optimization problem. A Multipartite Adaptive Quantum-inspired Evolutionary Algorithm is proposed for the optimal location and sizing of DG. MAQiEA uses a probabilistic approach with Q-bits and is an updated version of AQiEA, which had introduced two Q-bits per solution vector and an entanglement-inspired adaptive crossover operator. MAQiEA introduces a Multipartite Adaptive Crossover operator as a variation operator for better convergence. The effectiveness of MAQiEA is tested on a standard IEEE benchmark test bus system; the tabulated results show the effectiveness of the proposed algorithm as compared with the other algorithms.
REFERENCES
1. Sörensen, K., Sevaux, M., and Glover, F. (2017). A history of metaheuristics. arXiv preprint arXiv:1704.00853.
2. Blum, C., and Roli, A. (2008). Hybrid metaheuristics: An introduction. In Hybrid Meta-
heuristics (pp. 1–30). Springer, Berlin, Heidelberg.
3. Singh, B., and Sharma, J. (2017). A review on distributed generation planning. Renew-
able and Sustainable Energy Reviews, 76, 529–544.
4. Han, K. H., and Kim, J. H. (2002). Quantum-inspired evolutionary algorithm for a class
of combinatorial optimization. IEEE Transactions on Evolutionary Computation, 6(6),
580–593.
5. Zhang, G. (2011). Quantum-inspired evolutionary algorithms: A survey and empirical
study. Journal of Heuristics, 17(3), 303–351.
6. Mani, A., and Patvardhan, C. (2009, May). A novel hybrid constraint handling technique
for evolutionary optimization. In 2009 IEEE Congress on Evolutionary Computation
(pp. 2577–2583). IEEE.
7. Manikanta, G., Mani, A., Singh, H. P., and Chaturvedi, D. K. (2016, September).
Placing distributed generators in distribution system using adaptive quantum inspired
evolutionary algorithm. In 2016 Second International Conference on Research in
Computational Intelligence and Communication Networks (ICRCICN) (pp. 157–162).
IEEE.
8. Manikanta, G., Mani, A., Singh, H. P., and Chaturvedi, D. K. (2019). Adaptive quantum-
inspired evolutionary algorithm for optimizing power losses by dynamic load allocation
on distributed generators. SJEE, 16(3), 325–357.
9. Manikanta, G., Mani, A., Singh, H. P., and Chaturvedi, D. K. (2019). Distribution Net-
work Reconfiguration using Adaptive quantum-inspired evolutionary algorithm. Inter-
national Conference on Recent innovation in Electrical Electronics and Communication
Engineering (ICRIEECE-2018) at School of Electrical Engineering, Kalinga Institute of
Industrial Technology (KIIT), Bhubaneswar, India
10. Manikanta, G., Mani, A., Singh, H. P., and Chaturvedi, D. K. (2018, December). Mini-
mization of Power Losses in Distribution System with Variation in Loads Using Adap-
tive Quantum inspired Evolutionary Algorithm. In 2018 4th International Conference on
Computing Communication and Automation (ICCCA) (pp. 1–6). IEEE.
11. Manikanta, G., Mani, A., Singh, H. P., and Chaturvedi, D. K. (2018, October). Distri-
bution Network Reconfiguration with Different Load Models using Adaptive Quantum
inspired Evolutionary Algorithm. In 2018 International Conference on Sustainable En-
ergy, Electronics, and Computing Systems (SEEMS) (pp. 1–7). IEEE.
12. Mani, A., and Patvardhan, C. (2012). An improved model of ceramic grinding process
and its optimization by adaptive Quantum inspired evolutionary algorithm. International
Journal of Simulations: Systems Science and Technology, 11(6), 76–85.
13. Manikanta, G., Mani, A., Singh, H. P., and Chaturvedi, D. K. (2017). DG and Capacitor
Placement in Distribution system considering Cost and Benefits using AQiEA, National
System Conference, DEI, Agra.
14. Manikanta, G., Mani, A., Singh, H. P., and Chaturvedi, D. K. (2016, November). Sitting
and sizing of capacitors in distribution system using adaptive quantum inspired evo-
lutionary algorithm. In 2016 7th India International Conference on Power Electronics
(IICPE) (pp. 1–6). IEEE.
15. Manikanta, G., Mani, A., Singh, H. P., and Chaturvedi, D. K. (2019). Simultaneous
placement and sizing of DG and capacitor to minimize the power losses in radial
of distributed generation. In 2015 IEEE PES Innovative Smart Grid Technologies Latin
America (ISGT LATAM) (pp. 214–218). IEEE.
45. Divya, K., and Srinivasan, S. (2016, January). Optimal siting and sizing of DG in radial
distribution system and identifying fault location in distribution system integrated with
distributed generation. In 2016 3rd International Conference on Advanced Computing
and Communication Systems (ICACCS) (Vol. 1, pp. 1–7). IEEE.
46. Bohre A. K., Agnihotri G. (2016). Optimal sizing and sitting of DG with load mod-
els using soft computing techniques in practical distribution system. IET Generation,
Transmission & Distribution, 10(11), 2606–2621.
47. Das, S., Das, D., and Patra, A. (2016, July). Distribution network reconfiguration using
distributed generation unit considering variations of load. In 2016 IEEE 1st International
Conference on Power Electronics, Intelligent Control and Energy Systems (ICPEICES)
(pp. 1–5). IEEE.
48. Price, W. W., Casper, S. G., Nwankpa, C. O., Bradish, R. W., Chiang, H. D., Concordia,
C., ... and Wu, G. (1995). Bibliography on load models for power flow and dynamic
performance simulation. IEEE Power Engineering Review, 15(2), 70.
49. Price, W. W., Taylor, C. W., and Rogers, G. J. (1995). Standard load models for power flow and dynamic performance simulation. IEEE Transactions on Power Systems, 10(3), 1302–1313.
50. Concordia, C., and Ihara, S. (1982). Load representation in power system stability stud-
ies. IEEE Transactions on Power Apparatus and Systems, (4), 969–977.
51. Price, W. W., Chiang, H. D., Clark, H. K., Concordia, C., Lee, D. C., Hsu, J. C., ... and Vaahedi, E. (1993). Load representation for dynamic performance analysis. IEEE Transactions on Power Systems, 8(2).
52. Teng, J. H. (2003). A direct approach for distribution system load flow solutions. IEEE
Transactions on Power Delivery, 18(3), 882–887.
53. García, S., Molina, D., Lozano, M., and Herrera, F. (2009). A study on the use of nonparametric tests for analyzing the evolutionary algorithms' behaviour: A case study on the CEC2005 special session on real parameter optimization. Journal of Heuristics, 15(6), 617–644.
54. Derrac, J., García, S., Molina, D., and Herrera, F. (2011). A practical tutorial on the use of nonparametric statistical tests as a methodology for comparing evolutionary and swarm intelligence algorithms. Swarm and Evolutionary Computation, 1(1), 3–18.
Table 4.10
Line Parameters for IEEE 69 Bus Radial Distribution System
Branch No From Bus To Bus R(ohms) X(ohms)
1 1 2 0.0005 0.00112
2 2 3 0.0005 0.00112
3 3 4 0.0015 0.0036
4 4 5 0.0251 0.0294
5 5 6 0.366 0.1864
6 6 7 0.381 0.1941
7 7 8 0.0922 0.047
8 8 9 0.0493 0.0251
9 9 10 0.819 0.2707
10 10 11 0.1872 0.0619
11 11 12 0.7114 0.2351
12 12 13 1.03 0.34
13 13 14 1.044 0.345
14 14 15 1.058 0.3496
15 15 16 0.1966 0.065
16 16 17 0.3744 0.1238
17 17 18 0.0047 0.0016
18 18 19 0.3276 0.1083
19 19 20 0.2106 0.069
20 20 21 0.3416 0.1129
21 21 22 0.014 0.0046
22 22 23 0.1591 0.0526
23 23 24 0.3463 0.1145
24 24 25 0.7488 0.2475
25 25 26 0.3089 0.1021
26 26 27 0.1732 0.0572
27 3 28 0.0044 0.0108
28 28 29 0.064 0.1565
29 29 30 0.3978 0.1315
30 30 31 0.0702 0.0232
31 31 32 0.351 0.116
32 32 33 0.839 0.2816
33 33 34 1.708 0.5646
34 34 35 1.474 0.4873
35 3 36 0.0044 0.0108
36 36 37 0.064 0.1565
37 37 38 0.1053 0.123
38 38 39 0.0304 0.0355
39 39 40 0.0018 0.0021
40 40 41 0.7283 0.8509
41 41 42 0.31 0.3623
42 42 43 0.041 0.0478
43 43 44 0.0092 0.0116
44 44 45 0.1089 0.1373
45 45 46 0.0009 0.0012
46 4 47 0.0034 0.0084
47 47 48 0.0851 0.2083
48 48 49 0.2898 0.7091
49 49 50 0.0822 0.2011
50 8 51 0.0928 0.0473
51 51 52 0.3319 0.1114
52 9 53 0.174 0.0886
53 53 54 0.203 0.1034
54 54 55 0.2842 0.1447
55 55 56 0.2813 0.1433
56 56 57 1.59 0.5337
57 57 58 0.7837 0.263
58 58 59 0.3042 0.1006
59 59 60 0.3861 0.1172
60 60 61 0.5075 0.2585
61 61 62 0.0974 0.0496
62 62 63 0.145 0.0738
63 63 64 0.7105 0.3619
64 64 65 1.041 0.5302
65 11 66 0.2012 0.0611
66 66 67 0.0047 0.0014
67 12 68 0.7394 0.2444
68 68 69 0.0047 0.0016
Table 4.11
Load Parameters for IEEE 69 Bus Radial Distribution System
Bus No Active Load (kW) Reactive Load (kVAr) Load Model
1 0 0 Substation
2 0 0 Constant Impedance Load
3 0 0 Constant Current Load
4 0 0 Constant Current Load
5 2.6 2.2 Constant power Load
6 40.4 30 Residential Load
7 75 54 Industrial Load
8 30 22 Constant power Load
9 28 19 Industrial Load
10 145 104 Constant Current Load
11 145 104 Commercial Load
12 8 5 Commercial Load
13 8 5.5 Commercial Load
14 0 0 Residential Load
15 45.5 30 Constant Current Load
16 60 35 Constant Impedance Load
17 60 35 Constant Impedance Load
18 0 0 Commercial Load
19 1 0.6 Constant power Load
20 114 81 Commercial Load
21 5 3.5 Constant Impedance Load
22 0 0 Constant Current Load
23 28 20 Industrial Load
24 0 0 Constant Impedance Load
25 14 10 Constant power Load
26 14 10 Commercial Load
27 26 18.6 Constant power Load
28 26 18.6 Residential Load
29 0 0 Residential Load
30 0 0 Commercial Load
31 0 0 Residential Load
32 14 10 Commercial Load
33 19.5 14 Commercial Load
34 6 4 Constant power Load
35 26 18.55 Constant Impedance Load
36 26 18.55 Constant Current Load
37 0 0 Commercial Load
38 24 17 Constant power Load
39 24 17 Constant Current Load
40 1.2 1 Commercial Load
41 0 0 Constant power Load
42 6 4.3 Constant Current Load
43 0 0 Commercial Load
44 39.22 26.3 Industrial Load
45 39.22 26.3 Commercial Load
46 0 0 Industrial Load
47 79 56.4 Constant power Load
48 384.7 274.5 Residential Load
49 384.7 274.5 Constant power Load
50 40.5 28.3 Residential Load
51 3.6 2.7 Constant Impedance Load
52 4.35 3.5 Constant Impedance Load
53 26.4 19 Constant Impedance Load
54 24 17.2 Industrial Load
55 0 0 Commercial Load
56 0 0 Industrial Load
57 0 0 Residential Load
58 100 72 Constant Current Load
59 0 0 Constant Current Load
60 1244 888 Constant power Load
61 32 23 Constant power Load
62 0 0 Constant Impedance Load
63 227 162 Constant Current Load
64 59 42 Industrial Load
65 18 13 Constant power Load
66 18 13 Industrial Load
67 28 20 Constant Impedance Load
68 28 20 Constant Impedance Load
5 Quantum-Inspired Manta Ray Foraging Optimization Algorithm for Automatic Clustering of Color Images
5.1 INTRODUCTION
Clustering or cluster analysis can be defined as a process of discovering the underly-
ing structure of a data set by partitioning the entire data into two or more groups. In
this process, similar data points are kept together in the same group and dissimilar
data points are kept separate.
Clustering is used extensively in fields such as data mining, engineering, economics, sociology, biology, and physics [1][2]. In order to deal with clustering problems, several approaches have been proposed, which include hierarchical clustering, non-hierarchical clustering, fuzzy clustering, artificial neural network-based clustering and evolutionary clustering [1, 2], to name a few. A variety of clustering methods has thus been developed in the literature over the past few years. The prerequisite for most of the existing methods is that the appropriate number of clusters must be known beforehand. On most occasions, however, there is insufficient or inappropriate knowledge about the data, which makes the functioning of a clustering algorithm a challenging and tedious task. To cope with this limitation, a few automatic clustering techniques have already been developed by several researchers [8][9][10][11].
In recent years, metaheuristic algorithms have been considered a good choice for solving several kinds of optimization problems. They are able to provide an appropriate solution for different simple and complex optimization problems within a short time frame. Some popular metaheuristic algorithms include the Genetic Algorithm [7], Particle Swarm Optimization [8] and Differential Evolution [9]. Though the nature-inspired metaheuristic algorithms are capable of solving a problem very quickly, they may still suffer from premature convergence. In order to handle this situation, several new approaches can be adopted efficiently and effectively, such as introducing new parameters into an existing algorithm, hybridizing more than one algorithm, or incorporating the features of quantum computing into an existing algorithm. In this regard, quantum-inspired metaheuristic algorithms have achieved a remarkable efficiency with reference to
may include the Genetic Algorithm [7], Differential Evolution [9], Particle Swarm Optimization [8], Ant Colony Optimization [28], Bat Optimization [29], the Bacterial Foraging algorithm [30], the Firefly algorithm [31], Cuckoo Search [32], and the Crow Search algorithm [33][34], to name a few. Recognizing the problem of automatic clustering, several nature-inspired metaheuristic algorithms have been developed so far and are available in the literature [8][9][10][11]. The nature-inspired metaheuristic algorithms are capable of solving complex optimization problems within a short time frame and of providing an optimal or near-optimal solution; in spite of these capabilities, they may suffer from premature convergence. In order to overcome this situation, the quantum-inspired framework has been combined with several metaheuristic approaches. These quantum-inspired metaheuristic algorithms can be efficiently and effectively applied to solve various types of optimization problems, viz., task scheduling on distributed systems [23], combinatorial optimization problems [24][17], multicast routing problems in wireless mesh networks [37], multi-level thresholding problems [38][39], image analysis [15], mathematical function optimization [40][41], and automatic clustering [10][11][28][13][14][34], to name a few.
where α and β represent the probability amplitudes of the corresponding states, respectively. They should satisfy the following normalization condition:
\[ |\alpha|^2 + |\beta|^2 = 1 \tag{5.2} \]
Here, for the purpose of quantum measurement, the superposition state \( |\Psi\rangle \) is collapsed either to \( |0\rangle \) or to \( |1\rangle \) according to the following equation:
\[ |\Psi\rangle = \begin{cases} |0\rangle & \text{if } |\alpha|^2 > |\beta|^2, \\ |1\rangle & \text{otherwise.} \end{cases} \tag{5.3} \]
gate, etc. [39]. In this chapter, two very useful gates, viz., the Rotation gate and the Pauli-X gate [28][13], have been incorporated with the classical MRFO to develop the proposed QIMRFO. Mathematically, one qubit state is converted to the other by using the Pauli-X gate as follows:
\[ X|\Psi\rangle = X \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \alpha \\ \beta \end{pmatrix} = \begin{pmatrix} \beta \\ \alpha \end{pmatrix} \tag{5.8} \]
The computational details of the Quantum Rotation gate and the Pauli-X gate have
been elaborately described in [28][13].
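As an illustration of Equations (5.3) and (5.8), the following minimal MATLAB sketch collapses a qubit stored as the amplitude vector [α; β] and then flips it with the Pauli-X gate; the state used is an illustrative equal superposition, not a value from the algorithm itself.

% A minimal sketch of the measurement rule of Equation (5.3) and the
% Pauli-X flip of Equation (5.8), assuming a qubit stored as the column
% vector [alpha; beta] with |alpha|^2 + |beta|^2 = 1.
psi = [1/sqrt(2); 1/sqrt(2)];        % equal superposition of |0> and |1>

% Collapse of Equation (5.3): read |0> when |alpha|^2 > |beta|^2, else |1>.
if abs(psi(1))^2 > abs(psi(2))^2
    bit = 0;
else
    bit = 1;
end

% Pauli-X gate of Equation (5.8): swaps the amplitudes, flipping the state.
X = [0 1; 1 0];
psiFlipped = X * psi;
fprintf('measured |%d>, flipped state = [%g; %g]\n', bit, psiFlipped(1), psiFlipped(2));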
image. This index was proposed by Pakhira et al. in 2004; the maximum value of the PBM index indicates the optimal result. Mathematically, the PBM index can be defined as follows:
\[ PBM = \left( \frac{1}{K} \times \frac{E_0}{E_K} \times D_K \right)^2 \tag{5.9} \]
where \(K\) represents the number of clusters. The parameter \(E_0\) is a constant defined as \( E_0 = \sum_{P \in DS} \| P - V \| \), where \(V\) represents the center of the patterns \(P \in DS\). The parameter \(E_K\) is defined as \( E_K = \sum_{i=1}^{K} \sum_{j=1}^{N} U_{ij} \, \| P_j - V_i \| \), where \(N\) is the number of data points in the data set and \( U(DS) = [U_{ij}]_{K \times N} \) is the partition matrix of the data points; \(V_i\) represents the center of the \(i\)th cluster. \(D_K\) signifies the cluster separation measure, defined as \( D_K = \max_{i,j=1}^{K} \| V_i - V_j \| \). The details of this cluster validity index are available in [42].
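A minimal MATLAB sketch of Equation (5.9) follows, assuming the data set, cluster centers and crisp partition matrix are available as plain matrices; the function and variable names are illustrative, not the authors' implementation.

% A minimal sketch of the PBM index of Equation (5.9), assuming DS is an
% M-by-dim data matrix, V a K-by-dim matrix of cluster centers and U the
% K-by-M crisp partition matrix (U(i,j) = 1 if point j belongs to cluster i).
function pbm = pbmIndex(DS, V, U)
    K  = size(V, 1);
    E0 = sum(vecnorm(DS - mean(DS, 1), 2, 2));   % scatter about the grand mean
    EK = 0;
    for i = 1:K                                  % within-cluster scatter E_K
        EK = EK + sum(U(i, :)' .* vecnorm(DS - V(i, :), 2, 2));
    end
    DK = 0;                                      % largest center separation D_K
    for i = 1:K
        for j = i+1:K
            DK = max(DK, norm(V(i, :) - V(j, :)));
        end
    end
    pbm = ((1/K) * (E0/EK) * DK)^2;              % Equation (5.9)
end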
where \( \beta = 2\, e^{r_1 \frac{T - t + 1}{T}} \sin(2\pi r_1) \), \(r_1\) is a random number selected from [0, 1] and \(T\) represents the maximum number of iterations.
The mathematical model used for this strategy can also be represented as follows:
\[ P_i^d(t+1) = \begin{cases} P_{rand}^d + r\,\big(P_{rand}^d(t) - P_i^d(t)\big) + \beta\,\big(P_{rand}^d(t) - P_i^d(t)\big), & i = 1 \\ P_{rand}^d + r\,\big(P_{i-1}^d(t) - P_i^d(t)\big) + \beta\,\big(P_{rand}^d(t) - P_i^d(t)\big), & i = 2, 3, \ldots, N \end{cases} \tag{5.12} \]
It can be noted that, based on a predefined condition, either Equation (5.11) or Equation (5.12) is used in the cyclone foraging strategy.
In the somersault foraging strategy, each manta ray tries to update its position towards the best position found so far by swimming around the food source; it then somersaults to a new position. The mathematical model used for somersault foraging is represented as follows:
\[ P_i^d(t+1) = P_i^d(t) + S \cdot \big( r_2 \, P_{best}^d - r_3 \, P_i^d(t) \big), \quad i = 1, 2, \ldots, N \tag{5.13} \]
where the somersault range of the manta rays is decided by the somersault factor \(S = 2\), and \(r_2\) and \(r_3\) are two random numbers selected from [0, 1].
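The two foraging updates can be sketched in a few lines of MATLAB. In the following sketch the population size, dimension, bounds and best position are illustrative placeholders rather than the parameter values used in the chapter.

% A minimal sketch of the cyclone (Equation 5.12) and somersault
% (Equation 5.13) updates; all sizes and data below are placeholders.
N = 30; D = 10; T = 1000; t = 1;
lb = zeros(1, D); ub = ones(1, D);
P = rand(N, D);                      % current positions of the manta rays
Pbest = P(1, :);                     % placeholder best position so far

r = rand; r1 = rand; r2 = rand; r3 = rand;
S = 2;                               % somersault factor
beta = 2 * exp(r1 * (T - t + 1) / T) * sin(2*pi*r1);
Prand = lb + rand(1, D) .* (ub - lb);% random reference position

% Cyclone foraging around the random position (exploration branch, Eq. 5.12).
Pnew = zeros(N, D);
Pnew(1, :) = Prand + r*(Prand - P(1, :)) + beta*(Prand - P(1, :));
for i = 2:N
    Pnew(i, :) = Prand + r*(P(i-1, :) - P(i, :)) + beta*(Prand - P(i, :));
end

% Somersault foraging around the best position found so far (Eq. 5.13).
for i = 1:N
    Pnew(i, :) = Pnew(i, :) + S * (r2*Pbest - r3*Pnew(i, :));
end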
3. The cluster centroids are identified from the original population POP[N×L] with the help of the quantum state population QSP[N×L]; in QSP[N×L], the values of |Ψ⟩ that are measured as |1⟩ indicate active cluster centroids (see the sketch after this list). Then the fitness FTN of each individual manta ray is computed by using Equation (5.9).
4. Quantum rotation gate is used to generate the new quantum state population
QSNP[N×L] by using Equation (5.6). The same set of operations as specified in
Step 3 are performed again to compute new fitness values NFTN of each individ-
ual manta ray by identifying the cluster centroids which belong to POP[N×L] with
the help of QSNP[N×L] .
5. Thereafter, the quantum state population QSP[N×L] and the fitness FTN of the
manta rays are updated based on the fitness values of FTN and NFTN .
6. The basic steps of MRFO are performed on the original population POP[N×L] to
perturb it.
a. The position of the ith manta ray at a time stamp t is updated by using the
following criteria.
If rand < 0.5, then
i. if t/Tmax < rand, Equation (5.12) is used;
ii. otherwise, Equation (5.11) is used.
Else, Equation (5.10) is used.
b. The fitness FTN of all manta rays is computed by executing Step 3 and
thereafter, the best individual among them is identified.
c. The positions of all manta rays at time stamp t are updated by Equation
(5.13). Thereafter, Step 7 is performed.
7. If the fitness values are not improved, then the quantum Pauli-X gate is applied (by using Equation (5.8)) based on a predefined mutation probability to achieve diversity in POP[N×L], and afterward Step 3 is performed; otherwise Step 8 is performed.
8. Steps 5 to 9 are repeated for a predefined number of times or until the stopping criterion is met.
9. Finally, the best fitness value and the corresponding number of cluster centroids
are reported as the optimal results.
In the proposed QIMRFO, a single manta ray may have more than one fitness value due to the use of the quantum rotation gate operation: as the quantum states of an individual are changed by the rotation gate, the selection of active cluster centroids may also change. The execution is always carried out with the best fitness values of the manta rays, which produces exploration of the search space. Similarly, exploitation of the search space is accomplished by the use of the Pauli-X gate, which acts like a mutation operator that enables the quantum states to flip their values from 0 to 1 and vice versa, depending upon a predefined mutation probability. Thus, by incorporating the features of quantum gates, the search space can be diversified, which yields better solutions. The flowchart of the proposed algorithm is presented in Figure 5.1.
Figure 5.1: Flowchart of QIMRFO algorithm for automatic clustering of color im-
ages.
structure. The experimental process and the analysis of the results are presented in
the following subsections.
Table 5.1
Experimental Results of Sensitivity Analysis [19][20][21] for QIMRFO
Parameters Range 1st Order Effect Total Effect
Population 10 -1.0714 0.0999
20 -0.9827 0.1678
30 0.7925 0.5478
40 0.3803 0.5320
50 0.2999 0.8432
Maximum Iteration 100 -0.4746 1.0764
500 0.0632 1.1200
1000 0.8305 1.3829
1500 0.8211 0.9851
2000 0.5319 0.8733
Somersault Factor 1.5 0.0461 1.7926
1.6 0.1297 1.1091
1.7 0.4793 0.8737
1.8 0.4305 1.2862
1.9 0.3922 1.7701
2.0 0.3774 1.6529
Mutation Probability 0.1 -0.6052 0.0542
0.05 -0.4899 0.0833
0.03 -0.1701 0.0798
0.01 0.6915 0.0916
0.005 -0.0762 1.2940
0.003 0.2824 1.9963
0.001 0.5350 0.0374
Table 5.2
Settings of Parameters for QIMRFO, MRFO and GA
Parameters QIMRFO MRFO GA
Maximum Iteration : MAX I 1000 1000 1000
Population Size: N 30 50 50
Somersault Factor : SF 1.7 2 -
Crossover Probability : CP - - 0.85
Mutation Probability : MP 0.01 - 0.001
Small Rotation Angle : δ [-1.0, 1.0] - -
works [8][18]. Each algorithm has been executed 40 times using these different settings of parameters, and the average values over the different runs have been considered for reporting purposes.
Table 5.3
Number of Cluster (ηc ), Mean (µ ), Standard Deviation (σ ), Standard Error
(ε ), Computational Time (τ ) in Second of QIMRFO, MRFO, and GA
Data Sets Algorithms ηc µ σ ε τ
#22093 QIMRFO 5 3.369005 0.150678 0.008699 65.97
MRFO 5 2.378611 0.294815 0.037547 126.06
GA 5 2.049843 0.349721 0.060214 162.19
#163014 QIMRFO 4 0.089985 0.001435 0.001015 73.24
MRFO 3 0.077392 0.039875 0.028196 152.54
GA 4 0.058110 0.047322 0.065918 177.45
#102061 QIMRFO 6 1.836728 0.173652 0.000462 200.11
MRFO 6 1.969722 0.329838 0.007926 134.26
GA 4 1.373659 0.759327 0.009992 276.08
#159045 QIMRFO 5 2.060309 0.000439 0.000273 89.29
MRFO 5 1.687840 0.011284 0.000206 111.13
GA 5 1.405063 0.069531 0.009842 151.24
#pool QIMRFO 4 22.10002 0.059366 0.000117 112.36
MRFO 4 15.36671 0.624183 0.002393 147.10
GA 4 10.73810 0.547716 0.007555 163.44
#flower QIMRFO 4 0.015439 0.002204 0.000833 297.28
MRFO 4 0.009497 0.042827 0.006027 297.01
GA 5 0.006394 0.007843 0.005196 312.14
#tulips QIMRFO 5 0.418231 0.000438 0.002458 356.29
MRFO 5 0.292812 0.000126 0.000972 382.15
GA 5 0.248261 0.004701 0.067391 430.38
#fruits QIMRFO 4 2.575446 0.000278 0.000197 86.45
MRFO 4 1.528733 0.023288 0.016467 191.62
GA 4 1.793581 0.019843 0.008652 222.47
respectively. The test values show that QIMRFO is the best-performing algorithm among all of the competing algorithms. The convergence curves of QIMRFO, MRFO, and GA are shown in Figure 5.4; each curve shows that QIMRFO converges faster than the others. Hence, the superiority of the proposed algorithm has been established both visually and quantitatively using different measures. The population diversity curves using the quantum rotation gate and the Pauli-X gate are presented in Figures 5.5 and 5.6, respectively.
Table 5.4
Number of Cluster (ηc ) with its Corresponding Threshold Values of the Clus-
tered Images
Data Sets ηc Color Component Threshold Value
#22093 5 R [30, 86, 100, 135, 232]
G [51, 92, 143, 175, 247]
B [36, 75, 140, 169, 188]
#163014 4 R [40, 103, 140, 189]
G [42, 125, 150, 176]
B [20, 75, 100, 153]
#102061 6 R [47, 80, 91, 132, 145, 240]
G [62, 100, 155, 210, 229, 241]
B [51, 74, 135, 212, 240, 250]
#159045 5 R [39, 75, 112, 147, 180]
G [30, 61, 100, 143, 175]
B [11, 25, 63, 95, 140]
#pool 4 R [15, 25, 140, 200]
G [43, 61, 90, 192]
B [10, 53, 142, 191]
#flower 4 R [50, 85, 158, 200]
G [25, 60, 106, 153]
B [10, 42, 147, 196]
#tulips 5 R [25, 61, 83, 212, 229]
G [30, 100, 112, 146, 200]
B [31, 62, 80, 150, 191]
#fruits 4 R [37, 77, 148, 162]
G [10, 62, 100, 146]
B [25, 36, 70, 98]
exploitation in the search space for identifying the optimal results. Several tests have been conducted among the competing algorithms to judge the effectiveness of the proposed algorithm, and the experimental results show that QIMRFO outperforms the others in all aspects.
In the future, the functionality of the proposed algorithm can be extended so that it can efficiently handle high-dimensional data sets. Various quantum gate strategies can also be used to develop new algorithms for solving different types of optimization problems.
Table 5.5
Results of t-Test Between QIMRFO vs. MRFO, and QIMRFO vs. GA
Data Sets p - value Significance
#22093 QIMRFO & MRFO <0.0001 Extremely Significant
QIMRFO & GA <0.0001 Extremely Significant
#163014 QIMRFO & MRFO 0.04571 Significant
QIMRFO & GA <0.0001 Extremely Significant
#102061 QIMRFO & MRFO 0.0268 Significant
QIMRFO & GA 0.0003 Extremely Significant
#159045 QIMRFO & MRFO <0.0001 Extremely Significant
QIMRFO & GA <0.0001 Extremely Significant
#pool QIMRFO & MRFO <0.0001 Extremely Significant
QIMRFO & GA <0.0001 Extremely Significant
#flower QIMRFO & MRFO 0.3835 Not Significant
QIMRFO & GA <0.0001 Extremely Significant
#tulips QIMRFO & MRFO <0.0001 Extremely Significant
QIMRFO & GA <0.0001 Extremely Significant
#fruits QIMRFO & MRFO <0.0001 Extremely Significant
QIMRFO & GA <0.0001 Extremely Significant
Table 5.6
Results of Friedman Test [47][48] for QIMRFO, MRFO and GA
Data Sets QIMRFO MRFO GA
#22093 3.9218 (1) 2.7263 (2) 2.4497 (3)
#163014 0.0921 (1.5) 0.0921 (1.5) 0.0765 (3)
#102061 1.9728 (2) 2.1097 (1) 1.7383 (3)
#159045 2.0999 (1) 1.6982 (2.5) 1.6982 (2.5)
#pool 22.8993 (1) 15.9609 (2) 14.9761 (3)
#flower 0.0171 (1.5) 0.0171 (1.5) 0.0162 (3)
#tulips 0.4314 (1) 0.2983 (2.5) 0.2983 (2.5)
#fruits 2.5833 (1) 1.5934 (3) 1.8217 (2)
Average Rank : 1.25 2.0 2.75
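The average ranks in the last row of Table 5.6 can be reproduced from the tabulated fitness values with a short MATLAB sketch; tiedrank belongs to the Statistics and Machine Learning Toolbox, and the best (largest) fitness in each row receives rank 1.

% A minimal sketch of the ranking behind Table 5.6: each row holds the
% fitness of QIMRFO, MRFO and GA on one data set; ranks are averaged
% column-wise. The matrix below copies the values of Table 5.6.
F = [ 3.9218  2.7263  2.4497;
      0.0921  0.0921  0.0765;
      1.9728  2.1097  1.7383;
      2.0999  1.6982  1.6982;
     22.8993 15.9609 14.9761;
      0.0171  0.0171  0.0162;
      0.4314  0.2983  0.2983;
      2.5833  1.5934  1.8217];

ranks = zeros(size(F));
for i = 1:size(F, 1)
    ranks(i, :) = tiedrank(-F(i, :)); % negate so larger fitness ranks first
end
avgRank = mean(ranks, 1)              % reproduces 1.25, 2.0 and 2.75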
ACKNOWLEDGMENT
This work was supported by the AICTE sponsored RPS project on Automatic Clustering of Satellite Imagery using Quantum-Inspired Metaheuristics vide F.No 8-42/RIFD/RPS/Policy-1/2017-18.
Figure 5.4: Convergence curves of QIMRFO, MRFO, and GA for test images.
Figure 5.5: Population Diversity using Quantum Rotation Gate of test images
[31][44][45].
Figure 5.6: Population Diversity Increased by using Pauli-X Gate of test images
[31][44][45].
REFERENCES
1. A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice-Hall, Inc., USA,
1988.
2. A. K. Jain, M. N. Murty, and P. J. Flynn. Data clustering: A review. ACM Computing
Surveys, 31(3):264–323, 1999.
3. S. Bandyopadhyay and U. Maulik. Genetic clustering for automatic evolution of clusters
and application to image classification. Pattern Recognition, 35(6):1197–1208, 2002.
4. A. E. Ezugwu. Nature-inspired metaheuristic techniques for automatic clustering: A survey
and performance study. SN Applied Sciences, 2, 2020.
5. A. José-García and W. Gómez-Flores. Automatic clustering using nature-inspired metaheuristics: A survey. Applied Soft Computing, 41:192–213, 2016.
6. S. Das, A. Abraham, and A. Konar. Automatic clustering using an improved differential
evolution algorithm. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Sys-
tems and Humans, 38(1):218–237, 2008.
7. John H. Holland. Adaptation in Natural and Artificial Systems: An Introductory Analysis
with Applications to Biology, Control and Artificial Intelligence. MIT Press, Cambridge,
MA, 1992.
8. J. Kennedy and R. Eberhart. Particle swarm optimization. In Proc. IEEE International Con-
ference on Neural Networks, Perth, Australia, pp. 1942–1948, 1995.
9. R. Storn and K. Price. Differential evolution – a simple and efficient heuristic for global
optimization over continuous spaces. Journal of Global Optimization, 11:341–359, 1997.
10. S. Bhattacharyya, V. Snasel, A. Dey, S. Dey, and D. Konar. Quantum spider monkey opti-
mization (QSMO) algorithm for automatic gray-scale image clustering. International Con-
ference on Advances in Computing, Communications and Informatics (ICACCI 2018), pp.
1869–1874, 2018.
11. A. Dey, S. Bhattacharyya, S. Dey, J. Platos, and V. Snasel. Quantum-inspired bat optimiza-
tion algorithm for automatic clustering of grayscale images, vol. 922, pp. 89–101. Springer,
Singapore, 2019.
12. A. Dey, S. Dey, S. Bhattacharyya, J. Platos, and V. Snasel. Novel quantum-inspired ap-
proaches for automatic clustering of gray level images using particle swarm optimization,
spider monkey optimization and ageist spider monkey optimization algorithms. Applied
Soft Computing, 88(106040), 2020.
13. A. Dey, S. Dey, S. Bhattacharyya, J. Platos, and V.S. Snasel. Quantum-Inspired Automatic
Clustering Algorithms: A comparative study of Genetic Algorithm and Bat Algorithm, pp.
89–114. De Gruyter, 2020.
14. A. Dey, S. Dey, S. Bhattacharyya, V. Snasel, and A.E. Hassanien. Simulated Annealing
Based Quantum-Inspired Automatic Clustering Technique, pp. 73–81. Cairo, 2018.
15. S. Dey, S. Bhattacharyya, and U. Maulik. Quantum Behaved Swarm Intelligent Techniques for Image Analysis: A Detailed Survey, pp. 1–39. IGI Global, Hershey, USA, 2015.
16. S. Dey, S. Bhattacharyya, and U. Maulik. Quantum-Inspired Automatic Clustering Technique Using Ant Colony Optimization Algorithm, 2018.
17. K.H. Han and J.H. Kim. Quantum-inspired evolutionary algorithm for a class of combina-
torial optimization. IEEE Transactions on Evolutionary Computation, 6(6):580–593, 2002.
18. W. Zhao, Z. Zhang, and L. Wang. Manta ray foraging optimization: An effective bio-
inspired optimizer for engineering applications. Engineering Applications of Artificial In-
telligence, 87:103300, 2020.
19. A. Saltelli, P. Annoni, I. Azzini, F. Campolongo, M. Ratto, and S. Tarantola. Variance
based sensitivity analysis of model output. design and estimator for the total sensitivity
index. Computer Physics Communications, 181(2):259–270, 2010.
20. A. Saltelli and I.M. Sobol. Sensitivity analysis for nonlinear mathematical models: Numer-
ical experience. Matematicheskoe Modelirovanie, 7(11):16–28, 1995.
21. I.M. Sobol. Global sensitivity indices for nonlinear mathematical models and their Monte
Carlo estimates. Mathematics and Computers in Simulation, 55(1–3):271–280, 2001.
22. H. Frigui and R. Krishnapuram. A robust competitive clustering algorithm with applica-
tions in computer vision. IEEE Transactions on Pattern Analysis and Machine Intelligence,
21(5):450–465, 1999.
23. P.S. Bradley and U.M. Fayyad. Refining initial points for k-means clustering. In Proceed-
ings of the Fifteenth International Conference on Machine Learning, pp. 91–99. Morgan
Kaufmann Publishers Inc., 1998.
24. D. Pelleg and A. Moore. X-means: Extending k-means with efficient estimation of the
number of clusters. In Proceedings of the 17th International Conference on Machine Learn-
ing, pp. 727–734. Morgan Kaufmann, 2000.
25. X.L. Meng and D.V. Dyk. The em algorithm an old folksong sung to a fast new tune.
Journal of the Royal Statistical Society. Series B (Methodological), 59(3):511–567, 1997.
26. F. Murtagh. A survey of recent advances in hierarchical clustering algorithms. The Com-
puter Journal, 26(4):354–359, 1983.
27. F.J. Rohlf. 12 single-link clustering algorithms. In Classification Pattern Recognition and
Reduction of Dimensionality, vol. 2 of Handbook of Statistics, pp. 267–284. Elsevier,
1982.
28. M. Dorigo, M. Birattari, and T. Stutzle. Ant colony optimization, vol. 1. IEEE, 2006.
29. X.S. Yang. A new metaheuristic bat-inspired algorithm. In Nature Inspired Cooperative
Strategies for Optimization (NICSO 2010), pp. 65–74. Springer, 2010.
30. K.M. Passino. Biomimicry of bacterial foraging for distributed optimization and control.
IEEE Control Systems Magazine, 22(3):52–67, 2002.
31. X.-S. Yang. Firefly algorithm, stochastic test functions and design optimisation. Interna-
tional Journal of Bio-inspired Computation, 2(2):78–84, 2010.
32. X.-S. Yang and S. Deb. Cuckoo search via levy flights. pp. 210–214, 2010.
33. A Askarzadeh. A novel metaheuristic method for solving constrained engineering opti-
mization problems: Crow search algorithm. Computers & Structures, 169:1–12, 2016.
34. Z.A. Babak, B.H. Omid, and X. Chu. Crow Search Algorithm (CSA), pp. 143–149.
Springer Singapore, Singapore, 2018.
35. T. Gandhi, Nitin, and T. Alam. Quantum genetic algorithm with rotation angle refinement
for dependent task scheduling on distributed systems. In 2017 Tenth International Confer-
ence on Contemporary Computing (IC3), pp. 1–5. IEEE, Aug 2017.
36. H.P. Chiang, Y.H. Chou, C.H. Chiu, S.Y. Kuo, and Y.M. Huang. A quantum-inspired
tabu search algorithm for solving combinatorial optimization problems. Soft Computing,
18:1771–1781, 2013.
37. M. Mahseur, A. Ramdane-Cherif, D. Acheli, and Y. Meraihi. A quantum-inspired bi-
nary firefly algorithm for qos multicast routing. International Journal of Metaheuristics,
6(4):309, 2017.
38. S. Dey, S. Bhattacharyya, and U. Maulik. Efficient quantum inspired metaheuristics for
multi-level true color image thresholding. Applied Soft Computing, 56:472–513, 2017.
39. S. Dey, I. Saha, S. Bhattacharyya, and U. Maulik. Multilevel thresholding using quantum-
inspired metaheuristics. Knowledge-Based System, 67:373–400, 2014.
40. S.S. Tirumala. A quantum-inspired evolutionary algorithm using Gaussian distribution-
based quantization. Arabian Journal for Science and Engineering, 43:471–482, 2018.
41. Y.-J. Yang, S.Y. Kuo, F.-J. Lin, I.-I. Liu, and Y.-H. Chou. Improved quantum-inspired tabu
search algorithm for solving function optimization problem. 2013 IEEE International Con-
ference on Systems, Man, and Cybernetics, pp. 823–828, 2013.
42. M. K. Pakhira, S. Bandyopadhyay, and U. Maulik. Validity index for crisp and fuzzy clusters. Pattern Recognition, 37:487–501, 2004.
43. Berkeley images. www2.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/BSDS300/html/dataset/images.html. Accessed on 15/01/2020.
44. Real life images. www.hlevkin.com/06testimages.htm. Accessed on 15/01/2020.
45. Real life images. https://homepages.cae.wisc.edu/~ece533/images/. Accessed on 23/05/2020.
46. B. Flury. A First Course in Multivariate Statistics. Springer Texts in Statistics. Springer, 1997.
47. M. Friedman. The use of ranks to avoid the assumption of normality implicit in the analysis
of variance. Journal of the American Statistical Association, 32(200):675–701, 1937.
48. M. Friedman. A comparison of alternative tests of significance for the problem of m rankings. Annals of Mathematical Statistics, 11(1):86–92, 1940.
6 Automatic Feature Selection for Coronary Stenosis Detection in X-Ray Angiograms Using Quantum Genetic Algorithm
6.1 INTRODUCTION
The automatic coronary stenosis detection problem is a challenging task, since it involves detailed analysis of X-ray Coronary Angiograms (XCA) in the form of gray-level digital images. XCA remains the gold-standard imaging technique for the medical diagnosis of arterial diseases, including stenosis and other related conditions. In this procedure, a liquid dye, such as fluorescein, is injected through a thin catheter inserted into an access point to the bloodstream (usually in the arm or groin). The dye reveals the arterial structure, which can be easily seen on X-ray images and allows cardiologists to detect narrowed or blocked areas through the coronary arteries.
In clinical practice, the detection of stenosis cases is performed by cardiologists. To detect possible stenosis cases, the specialist performs a visual scan over an X-ray coronary angiogram (XCA) image, which can be printed on a physical medium or viewed as a gray-level digital image. During the process, the specialist labels the different regions of the angiogram where a stenosis case is present, according to his or her expertise and knowledge. Figure 6.1 illustrates an angiogram and its respective stenosis regions labeled by a specialist. However, given the limited access to such delicate clinical expertise and the variability of diagnoses among specialists, automatic Computer-Aided Diagnosis (CAD) systems have come to play a vital role in cardiology in assisting the detection of coronary artery stenosis.
The problem of automated stenosis detection in XCA has been addressed from different approaches in the literature. For example, Kishore and Jayanthi [1] make use of a fixed-size window (patch) that is manually selected from a previously enhanced image, after which an adaptive thresholding algorithm is applied to keep only the vessel pixels. With the segmented image, the vessel width is calculated by adding the intensity values from the left to the right edge. In this approach there is no need for a skeletonization process of the arteries, since only the vessel width is used to determine the grade of a stenosis case within the selected window.
Figure 6.1: X-ray coronary angiogram. From left to right: original angiogram, stenosis diseases labeled by a cardiologist and a zoom of the stenosis labels.
Saad [2] detects stenosis cases by using vessel skeletons. This approach requires a previous image segmentation process in order to extract the vessel pixels, after which a skeletonization procedure is performed to extract only the center lines corresponding to the vessels. With the vessel center lines (skeleton), the length of the orthogonal line is computed using a fixed-size window that moves over the image in order to obtain a vessel-width measure, which is compared with a fixed value to determine whether a stenosis case exists in that region of the image. Sameed et al. [3] make use of the Hessian matrix to enhance vessel pixels and determine candidate stenosis regions by identifying narrowed vessel areas. Wan et al. [4] carried out the vessel diameter estimation using a smoothed vessel centerline curve from the candidate stenosis regions detected by the Hessian approach; both steps, starting from the artery lumen (vessel diameter), allow determining the stenosis measurement and the final classification. Cervantes-Sanchez et al. [5] proposed a method for computing the vessel width along the arteries by applying second-order derivatives directly to the enhanced images, where the cases of stenosis were labeled as local minima of the vessel width; in this approach, no additional skeletonization or vessel diameter estimation was needed. Subsequently, Cruz-Aceves et al. [6] used a Bayes classifier over a handcrafted 3D feature vector obtained from the potential cases of coronary stenosis identified previously by a second-order derivative operator. The major disadvantage of these methods lies in the need for a predefined threshold or fixed value, in the form of a vessel width, narrowest measure, etc., against which a currently computed value must be compared to determine whether it is greater than, equal to or lower than that fixed threshold; the determination of that fixed value becomes a problem in itself.
Convolutional Neural Networks (CNNs) have emerged to overcome the disadvantages of methods where an a-priori fixed value (or a set of them) must be established to perform a classification. They have been successfully used to solve diverse kinds of problems, and they have been applied successfully to the coronary stenosis detection problem. Antczak et al. [8] tested several CNN architectures using a natural image dataset plus 10000 instances of synthetically generated vessel patches, which include positive and negative cases. An additional strength of CNNs is the possibility of passing knowledge from a pre-trained CNN to a new one; this process is called Transfer Learning [9]. However, the success of knowledge transfer depends on the dissimilarity between the source domain (where the CNN has been trained) and the target domain (where the knowledge is transferred) [10].
The major disadvantages of CNNs are related to choosing the right CNN architecture and the large number of instances required to achieve correct training and classification, along with the risk of falling into an over-fitting state, due mainly to an unbalanced dataset (where the numbers of positive and negative cases are significantly different). For instance, in the automated coronary stenosis detection problem, the number of positive cases is significantly smaller than the number of negative cases. To overcome this
disadvantage, strategies such as data augmentation [11] and synthetic data generation [12] are applied. However, those strategies entail the inconvenience of manually selecting a representative and significant dataset from which the new augmented data will be generated, or of finding a correct methodology or model to generate synthetic cases close to the real ones.
In this research, a novel method based on automated feature selection is proposed for the detection of coronary stenosis cases in X-ray coronary angiograms. The method was tested using an image database with ≈ 2800 instances. In addition, the proposed method was compared with five other strategies taken from the state of the art, including machine learning and deep learning techniques. To measure the effectiveness of the proposed and compared methods, the Accuracy and Jaccard Index metrics were used, achieving rates of 0.92 and 0.85, respectively. The obtained results proved the method's effectiveness for the automated stenosis detection problem in X-ray coronary angiograms, selecting only a subset of features while keeping an optimal classification rate at the same time.
6.2 BACKGROUND
6.2.1 FEATURE EXTRACTION
In image processing, the term feature extraction refers to the multiple metrics that can be computed or extracted from an image, a region, or a single pixel of it. Since single-pixel metrics do not offer valuable information by themselves, the most common approach to extract significant information from an image is to use windows (patches) or the entire image. The feature extraction process can be performed in an automated way, for instance by using a CNN. However, almost all CNN-based procedures obfuscate the extracted features because of the complexity inherent in the process or in the features themselves; such strategies produce only a classification result. On the other hand, specific or manually extracted features can be analyzed in detail, although this approach has the disadvantage that, most of the time, the features must be collected by the researcher in a non-fully-automated way.
From an overall point of view, the features extracted from an image can be classified into three categories: intensity, texture, and morphology. Each of these groups, and how it relates to the studied problem, is described in the next sections. It is important to mention that, since gray-level pixel intensity is the most common approach for X-ray coronary angiogram digital imaging, all features are assumed to work with gray-level images.
Figure 6.3: Intensity-based features for two 16×16 images in the first row. The second row contains the corresponding pixel intensities for each of them, and the third row the corresponding basic intensity features.
where P(i, j, d, θ) is the frequency with which two pixels with intensities i and j occur at a distance d and an angle θ, and G is the number of gray levels used.
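As an illustration, a minimal MATLAB sketch of this co-occurrence counting for a single offset (d = 1, θ = 0°) follows; the function name is illustrative, and the image is assumed to be pre-quantized to integer levels 1..G.

% A minimal sketch of a gray-level co-occurrence matrix for one offset
% (d = 1, theta = 0, i.e., horizontal neighbors), assuming I holds
% integer gray levels in 1..G.
function P = glcm(I, G)
    P = zeros(G, G);
    [rows, cols] = size(I);
    for r = 1:rows
        for c = 1:cols-1                      % neighbor at distance 1, angle 0
            P(I(r, c), I(r, c+1)) = P(I(r, c), I(r, c+1)) + 1;
        end
    end
    P = P / sum(P(:));                        % normalize counts to frequencies
end

For an 8-bit image, a quantization such as Iq = min(floor(double(img)/256*G) + 1, G) maps the 0-255 range onto the required 1..G levels before calling the sketch above.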
Figure 6.4: Sample of a raw X-ray coronary angiogram image and the corresponding vessel segmentation and skeleton extraction, from left to right.
Figure 6.5: The same X-ray angiogram with its respective vessel segmentation and skeleton overlapped, in order to illustrate a combined feature extraction.
For this study, the Frangi [14] method was used to segment vessels from the angiograms because it yields optimal results. The Frangi method uses the eigenvalues obtained from a Hessian matrix. The Hessian matrix is computed from the second-order derivatives of the original image. It is calculated by convolving a Gaussian kernel at different orientations with the original image as follows:
$$G(x, y) = -\exp\left(-\frac{x^2 + y^2}{2\sigma^2}\right), \quad \|y\| < L/2, \qquad (6.2)$$
where σ is the spread of the Gaussian profile and L is the length of the vessel seg-
ment. The resultant Hessian matrix is expressed as follows:
$$H = \begin{bmatrix} H_{xx} & H_{xy} \\ H_{yx} & H_{yy} \end{bmatrix}, \qquad (6.3)$$
where Hxx , Hxy , Hyx , and Hyy are the directional second-order partial derivatives of
the image.
The segmentation function defined by Frangi for 2-D vessel detection is as follows:
$$f(x) = \begin{cases} 0 & \text{if } \lambda_2 > 0, \\ \exp\left(-\dfrac{R_b^2}{2\alpha^2}\right)\left(1 - \exp\left(-\dfrac{S^2}{2\beta^2}\right)\right) & \text{elsewhere.} \end{cases} \qquad (6.4)$$
The α parameter is used with R_b to control the shape discrimination. The β parameter is used with S for noise elimination. R_b and S are calculated as follows:
$$R_b = \frac{|\lambda_1|}{|\lambda_2|}, \qquad (6.5)$$

$$S = \sqrt{\lambda_1^2 + \lambda_2^2}, \qquad (6.6)$$

where λ1 and λ2 are the eigenvalues of the Hessian matrix.
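The following MATLAB sketch illustrates how Eqs. (6.3)-(6.6) fit together: it builds Gaussian second-derivative kernels at one scale, obtains the Hessian entries by convolution, computes the eigenvalues per pixel, and evaluates the vesselness of Eq. (6.4). The scale sigma and the weights alpha and beta are illustrative choices; the chapter does not report its exact parameter values.

function V = vesselness_sketch(I, sigma, alpha, beta)
    I = double(I);
    % Gaussian second-derivative kernels at scale sigma
    r = ceil(3 * sigma); [x, y] = meshgrid(-r:r, -r:r);
    g   = exp(-(x.^2 + y.^2) / (2 * sigma^2)) / (2 * pi * sigma^2);
    Gxx = g .* (x.^2 - sigma^2) / sigma^4;
    Gyy = g .* (y.^2 - sigma^2) / sigma^4;
    Gxy = g .* (x .* y)         / sigma^4;
    % Directional second derivatives of the image (entries of Eq. 6.3)
    Hxx = conv2(I, Gxx, 'same');
    Hyy = conv2(I, Gyy, 'same');
    Hxy = conv2(I, Gxy, 'same');
    % Eigenvalues of the 2x2 Hessian at every pixel, ordered |l1| <= |l2|
    tmp = sqrt((Hxx - Hyy).^2 + 4 * Hxy.^2);
    l1  = (Hxx + Hyy - tmp) / 2;
    l2  = (Hxx + Hyy + tmp) / 2;
    swap = abs(l1) > abs(l2);
    [l1(swap), l2(swap)] = deal(l2(swap), l1(swap));
    Rb = abs(l1) ./ max(abs(l2), eps);          % shape measure, Eq. (6.5)
    S  = sqrt(l1.^2 + l2.^2);                   % structure measure, Eq. (6.6)
    V  = exp(-Rb.^2 / (2 * alpha^2)) .* (1 - exp(-S.^2 / (2 * beta^2)));
    V(l2 > 0) = 0;                              % first case of Eq. (6.4)
end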
The Frangi method response is a gray-scale image with the vessel pixels enhanced. In order to fully eliminate non-vessel pixels (background and noise), a binarization of the Frangi response must be applied. In this research, the Otsu [15] method was used because the threshold value is calculated automatically from the image pixels by minimizing the weighted sum of the variances of the two classes, expressed as follows:

$$\sigma_w^2(t) = \omega_0(t)\,\sigma_0^2(t) + \omega_1(t)\,\sigma_1^2(t), \qquad (6.7)$$

where the weights ω0 and ω1 are the probabilities of the two classes separated by a threshold t, and σ0² and σ1² are the statistical variances of the two classes, respectively.
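As an illustration of the criterion above, this minimal MATLAB sketch scans all candidate thresholds of an 8-bit image and keeps the one minimizing the weighted within-class variance of Eq. (6.7); it assumes integer gray levels in 0-255.

function t_best = otsu_sketch(I)
    h = histcounts(double(I(:)), 0:256);   % 256-bin histogram of gray levels
    p = h / sum(h);                        % probabilities per gray level
    best = inf; t_best = 0;
    for t = 1:255
        w0 = sum(p(1:t)); w1 = 1 - w0;     % class probabilities for threshold t
        if w0 == 0 || w1 == 0, continue; end
        mu0 = sum((0:t-1)  .* p(1:t))     / w0;   % class means
        mu1 = sum((t:255)  .* p(t+1:256)) / w1;
        v0  = sum(((0:t-1) - mu0).^2 .* p(1:t))     / w0;   % class variances
        v1  = sum(((t:255) - mu1).^2 .* p(t+1:256)) / w1;
        within = w0 * v0 + w1 * v1;        % weighted sum of variances, Eq. (6.7)
        if within < best, best = within; t_best = t; end
    end
end

The binary vessel mask is then obtained as I >= t_best applied to the Frangi response.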
After the vessel segmentation procedure was applied, the Medial Axis Transform technique was used to extract the vessel skeletons. This technique makes use of the Voronoi method, expressed as follows:

$$R_k = \{x \in X \mid d(x, P_k) \le d(x, P_j) \ \text{for all} \ j \neq k\}, \qquad (6.8)$$

where R_k is the Voronoi region associated with the site P_k (a tuple of nonempty subsets in the space X), which contains the set of all points in X whose distance to P_k is not greater than their distance to the other sites P_j, with j any index different from k, and d(x, P_k) is a closeness measure from the point x to the site P_k. The Euclidean distance is commonly used as a closeness measure and is defined as follows:
$$D(p_1, p_2) = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}, \qquad (6.9)$$
where D(p1 , p2 ) is the distance between two points p1 and p2 defined by coordinates
(x1 , y1 ) and (x2 , y2 ), respectively, in a 2-D plane.
Figure 6.6 illustrates five coronary angiograms with their respective segmentation responses after applying the Frangi method and the corresponding results after the binarization and skeletonization tasks were performed. A MATLAB implementation to extract vessel segments is presented in Section A.1. This function makes use of other auxiliary functions presented in subsequent sections of the appendix. For instance, Section A.2 presents the code that finds and returns the positions of the pixels of a segment in order to calculate its morphology in a later step. The code in Section A.3 is useful to extract a window from a 2-D matrix. Finally, the code presented in Section A.4 finds and returns the first row position of a matrix where a specific row vector is located. Those methods are useful to extract distinct features related to vessel morphology.
Figure 6.6: Image vessel detection. In the first row, the original angiograms are presented. The second row shows the Frangi segmentation response. In the third row, the Otsu method response is illustrated. The last row contains the vessel skeletons obtained by applying the Medial Axis Transform method.
Once the features are extracted, the challenge is to find the best feature subset that is able to classify coronary stenosis and non-stenosis cases correctly. The complexity of the posed problem lies in the large number of combinations of features that must be explored to achieve the best classification rate: there are 2^n possible feature combinations, where n is the number of features. Figure 6.7 illustrates two examples of feature selection.
Support Vector Machines (SVMs) are supervised learning models. This means that a training dataset and its corresponding label set are required to perform the SVM training. In order to project the data represented in a space χ to a higher-dimensional space F, the SVM makes use of the Mercer kernel operator. For given training data x1, ..., xn, which are vectors in some space χ ⊆ R^d, the support vectors can be considered as a set of classifiers expressed as follows [19]:
$$f(x) = \sum_{i=1}^{n} \alpha_i K(x_i, x). \qquad (6.10)$$
When K satisfies the Mercer condition [20], it can be expressed as follows:

$$K(u, v) = \Phi(u) \cdot \Phi(v), \qquad (6.11)$$

where Φ : χ → F and "·" denotes an inner product. With this assumption, f can be rewritten as follows:

$$f(x) = w \cdot \Phi(x), \qquad w = \sum_{i=1}^{n} \alpha_i \Phi(x_i). \qquad (6.12)$$
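A minimal MATLAB sketch of the kernel classifier of Eq. (6.10) follows; the training vectors, the coefficients alpha_i, and the order-3 polynomial kernel are hypothetical placeholders (the chapter's experiments use an order-6 polynomial kernel with its own scale and offset).

% Evaluate f(x) = sum_i alpha_i * K(x_i, x) for a hypothetical trained SVM
X = randn(50, 5);                 % 50 hypothetical training vectors in R^5
alpha = randn(50, 1);             % hypothetical learned coefficients
K = @(U, v) (U * v' + 1).^3;      % illustrative polynomial kernel of order 3
x = randn(1, 5);                  % a query point
f = sum(alpha .* K(X, x));        % Eq. (6.10); sign(f) gives the predicted class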
Therefore, the product of bra and ket vectors can be expressed using Dirac notation as follows:

$$\langle M | V \rangle = \begin{bmatrix} w_1 & w_2 & \dots & w_i \end{bmatrix} \begin{bmatrix} v_1 \\ v_2 \\ \vdots \\ v_i \end{bmatrix} = w_1 v_1 + w_2 v_2 + \dots + w_i v_i. \qquad (6.16)$$
When translating the previous physics concepts to the computing field, the term qubit is used to represent the minimal unit of information, which stores the |0⟩ and |1⟩ states. Applying this principle to the QGA, an initial quantum population Q(0) is composed of a set of quantum individuals. Each individual is composed of a quantum chromosome.
One common quantum mutation applies the quantum NOT gate to a randomly chosen qubit (α_j, β_j)⊺, which results in

$$\begin{pmatrix} \alpha_j^{t+1} \\ \beta_j^{t+1} \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} \alpha_j^{t} \\ \beta_j^{t} \end{pmatrix} = \begin{pmatrix} \beta_j^{t} \\ \alpha_j^{t} \end{pmatrix}.$$
The quantum mutation using the insertion gate is performed by the permutation, or swapping, of two randomly chosen qubits. For instance, given the following quantum chromosome with the first and third qubits chosen randomly:

$$\begin{pmatrix} \alpha_1 & \alpha_2 & \alpha_3 & \dots & \alpha_j \\ \beta_1 & \beta_2 & \beta_3 & \dots & \beta_j \end{pmatrix},$$

the new mutated chromosome applying the insertion strategy is as follows:

$$\begin{pmatrix} \alpha_3 & \alpha_2 & \alpha_1 & \dots & \alpha_j \\ \beta_3 & \beta_2 & \beta_1 & \dots & \beta_j \end{pmatrix}.$$
Suppose that two quantum chromosomes, m and n, were selected for a crossover operation with a randomly selected point between the first and second positions. The resultant recombined chromosomes, m* and n*, are expressed as follows:

$$m^* = \begin{pmatrix} \alpha_1 & \alpha_2' & \alpha_3' & \dots & \alpha_j' \\ \beta_1 & \beta_2' & \beta_3' & \dots & \beta_j' \end{pmatrix}, \qquad n^* = \begin{pmatrix} \alpha_1' & \alpha_2 & \alpha_3 & \dots & \alpha_j \\ \beta_1' & \beta_2 & \beta_3 & \dots & \beta_j \end{pmatrix}.$$
Using the previous concepts, the QGA steps are defined and compared with the corresponding classical GA in Table 6.1.
Table 6.1
QGA Steps
Step   Quantum GA                                Classic GA
1      Initialize quantum population Q(0)        Generate initial population P(0)
2      Make P(0) by measuring each individual,
       Q(0) → P(0)                               Evaluate P(0)
3      while (not termination condition) do      while (not termination condition) do
4      begin                                     begin
5      t ← t + 1                                 t ← t + 1
6      Perform Quantum Crossover                 Perform Crossover
7      Perform Quantum Mutation                  Perform Mutation
8      Measure Q(t) → P(t)                       Evaluate population P(t)
9      end                                       end
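The "Measure Q(t) → P(t)" step of Table 6.1 can be sketched in a few lines of MATLAB: each qubit collapses to 1 with probability |β_j|², producing a binary mask over the 31 features. The equal-superposition initialization and the NOT-gate mutation shown below are standard QGA choices, used here as assumptions rather than the chapter's exact settings.

% Measure a quantum chromosome into a binary feature-selection mask
n = 31;                              % number of features (as in this chapter)
alpha = ones(1, n) / sqrt(2);        % equal-superposition initialization
beta  = ones(1, n) / sqrt(2);        % |alpha|^2 + |beta|^2 = 1 per qubit
mask  = rand(1, n) < abs(beta).^2;   % measured chromosome: 1 = feature kept
% Quantum mutation with the NOT gate swaps the amplitudes of a random qubit
j = randi(n);
[alpha(j), beta(j)] = deal(beta(j), alpha(j));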
The remaining features are related to the vessel morphology. As mentioned before, vessel enhancement and skeletonization tasks are required beforehand to keep only the information related to vessel pixels. The Frangi method was applied in order to perform a vessel enhancement over the original images. Later, a binarization of each Frangi response was performed by applying the Otsu method in order to discriminate non-vessel pixels. Finally, the Medial Axis Transform procedure was applied in order to extract the vessel skeletons. The shape-based features are described as follows:
3. Feature Selection
Perform a Feature Selection using QGA with SVM.
After the feature selection task is finished, the SVM can be trained using only the feature subset previously found by the QGA. In order to prove the effectiveness, a testing set is used. In addition, the classification results are measured using two different metrics in order to assess the achieved results. As a first instance, the True-Positive (TP), True-Negative (TN), False-Negative (FN), and False-Positive (FP) fractions are used to obtain the Accuracy (Acc) metric. These metrics are computed as follows:
$$TPR = \frac{TP}{TP + FN}, \qquad (6.26)$$

$$TNR = \frac{TN}{TN + FP}, \qquad (6.27)$$

$$FPR = \frac{FP}{FP + TN}, \qquad (6.28)$$

$$Acc = \frac{TP + TN}{TP + TN + FP + FN}. \qquad (6.29)$$
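The following MATLAB lines evaluate Eqs. (6.26)-(6.29) from hypothetical confusion counts; since the chapter does not print the Jaccard formula, the standard positive-class form TP/(TP+FP+FN) is assumed here.

TP = 90; TN = 85; FP = 10; FN = 15;      % hypothetical confusion counts
TPR = TP / (TP + FN);                    % Eq. (6.26)
TNR = TN / (TN + FP);                    % Eq. (6.27)
FPR = FP / (FP + TN);                    % Eq. (6.28)
Acc = (TP + TN) / (TP + TN + FP + FN);   % Eq. (6.29)
Jac = TP / (TP + FP + FN);               % assumed Jaccard definition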
Figure 6.9: 40 instances of coronary X-ray angiogram patches taken from the
Antczak [8] image dataset. (a) Positive stenosis cases. (b) Negative stenosis cases.
For the training and the feature selection process, 1670 instances were taken randomly from the image set, in a 50-50% proportion of positive and negative cases, respectively. The remaining instances were used for testing purposes.
As mentioned before, an SVM was used to perform the classification of positive and negative cases. It was configured to use a polynomial kernel of order 6, a kernel scale of 6.5, and a kernel offset of 0.1. All SVM parameters were established
6.5 RESULTS
After conducting the experiment, significant results were obtained related with the
feature selection process and the classification accuracy achieved. In order to assess
the feature selection results, a statistical analysis was performed considering the best
result achieved by the QGA on each trial from a set of 30 trials. In its best global
solution, the QGA was able to find an optimal feature subset of 20 instead of the ini-
tial set containing 31 features. Additionally, it is interesting to notice the frequency
which each feature was present in the best solution achieved by QGA over all trials,
since it provides information about the influence of that particular feature on the clas-
sification result. Table 6.2 describes the frequency which each feature was present in
the best solution achieved by the QGA in each trial.
Table 6.2
Feature Selection Frequency over All QGA Trials
Feature Frequency
Contrast 0.55
Correlation 0.64
Sum Variance 0.73
Sum Entropy 0.45
Entropy 0.45
Difference Entropy 0.45
Min 1.00
Max 1.00
Mean 0.64
Std.Dev. 0.91
Number of Vessel Pixels 0.91
Number of Vessel Segments 0.73
Vessel Density 0.91
Vessel Length 0.91
Number of Bifurcation Points 0.64
Grey Level Coefficient of Variation 1.00
Gradient Mean 0.73
Gradient Coefficient of Variation 0.91
Mean Vessel Width 0.91
Vessel. Std. Dev. Max 0.64
Figure 6.10 illustrates the frequencies for the full set of 31 features.

Figure 6.10: Frequency with which each feature was present in the best solution achieved by the QGA over all trials. The x-axis lists each feature name; the y-axis represents the frequency of each feature, where 1 means that the feature appears in the best solution achieved by the QGA in all trials (100%).

Based on the results described in Table 6.2 and the chart presented in Figure 6.10, the importance of each individual feature in the classification task, and how the combination of features can lead to an optimal classification rate, can be observed. It is remarkable that some features appear in all solutions achieved by the QGA, meaning that those features are particularly important in the classification process. In addition, contrasting the results described in Table 6.2 with Figure 6.10 shows that almost half of the selected features are above the mean frequency, which was 0.72.
The performance results are presented in Table 6.3. The proposed method is compared with five other classification methods from the literature: Feed-Forward (FFNN) and Back-Propagation (BPNN) neural networks, an SVM using the full feature set, and the UNET and CNN-16C convolutional neural networks.
Based on the results presented in Table 6.3, the highest rates in terms of the Accuracy and Jaccard Index metrics were achieved by the SVM-based classification method. However, the proposed method achieved performance closest to the SVM's by using only a subset of 20 features instead of the total feature set of 31 features. It is important to notice that the performance of non-linear classification methods such as FFNN and BPNN decreases when a feature selection task is applied to them. This can be attributed to the loss of information associated with the elimination of features that are relevant for a non-linear classification task. On the other hand, deep learning methods such as the UNET and CNN-16C convolutional neural networks achieved the results closest to the best rate after the proposed method.
Table 6.3
Accuracy Rate and Jaccard Index Comparison for the Proposed Method and
Five Additional Classification Methods from the State of the Art

Method            Number of Features   Accuracy   Jaccard Index
FFNN              20                   0.70       0.54
FFNN              31                   0.72       0.61
BPNN              20                   0.69       0.57
BPNN              31                   0.71       0.00
SVM               31                   0.92       0.86
UNET [7]          –                    0.76       0.72
CNN-16C [8]       –                    0.86       0.74
Proposed Method   20                   0.92       0.85
In this context, it is important to mention that the CNNs were applied using the same training and testing sets as the rest of the techniques, and that no data-augmentation or transfer-learning processes were applied, in order to measure the effectiveness of all techniques under the same conditions. In Figure 6.11, a subset of instances corresponding to the True-Positive, True-Negative, False-Positive, and False-Negative fractions is illustrated.
6.6 CONCLUSION
In this research, a method is proposed for the stenosis classification problem, which makes use of a Quantum Genetic Algorithm (QGA) to perform an automatic feature selection, keeping only those features that have a strong influence on a Support Vector Machine (SVM) classifier. The QGA performs a search over the space formed by the feature set in order to find an optimal combination of features while, at the same time, keeping or decreasing the loss rate in the training stage. Initially, a set of 31 features was extracted from an image database containing ≈ 2780 instances of X-ray coronary angiogram patches. The image database is balanced in terms of positive and negative stenosis cases. After the feature selection process ended, a subset of 20 features was able to keep the classification rate in terms of the Accuracy metric and the Jaccard index, compared with the original set of 31 features. By using only 20 features, the accuracy rate and Jaccard index were 0.92 and 0.85, respectively, which are very similar to those obtained using the full set of 31 features. In addition, the reduction of features has an effect on the time required to perform an exhaustive feature extraction on new angiograms: extracting the 31 features required 0.94 seconds, versus 0.62 seconds for the 20 selected features, considering a window that scans an angiogram to detect regions with possible stenosis cases. Based on the results obtained in this study, it can be concluded that the proposed method can be applied in clinical practice to assist cardiologists in evaluating and finding possible stenosis cases in X-ray coronary angiograms.
ACKNOWLEDGMENT
The present research has been supported by the Universidad Tecnológica de León.
REFERENCES
1. A. Kishore and V. Jayanthi. Automatic stenosis grading system for diagnosing coronary
artery disease using coronary angiogram. International Journal of Biomedical Engineering
and Technology, 31(3):260–277, 2018.
2. I. Saad. Segmentation of coronary artery images and detection of atherosclerosis. Journal
of Engineering and Applied Sciences, 13:7381–7387, 2018.
3. S. Sameh, M. Azim, and A. AbdelRaouf. Narrowed coronary artery detection and clas-
sification using angiographic scans. In 2017 12th International Conference on Computer
Engineering and Systems (ICCES), pages 73–79, 2017.
4. T. Wan, H. Feng, C. Tong, D. Li, and Z. Qin. Automated identification and grading of
coronary artery stenoses with x-ray angiography. Computer Methods and Programs in
Biomedicine, 167:13–22, 2018.
5. F. Cervantes-Sanchez, I. Cruz-Aceves, and A. Hernandez-Aguirre. Automatic detection of
coronary artery stenosis in x-ray angiograms using Gaussian filters and genetic algorithms.
AIP Conference Proceedings, 1747, 2016.
6. I. Cruz-Aceves, F. Cervantes-Sanchez, and A. Hernandez-Aguirre. Automatic detection
of coronary artery stenosis using Bayesian classification and Gaussian filters based on
differential evolution. Hybrid Intelligence for Image Analysis and Understanding, pages
369–390, 2017.
7. A. Harouni, A. Karargyris, M. Negahdar, D. Beymer, and T. Syeda-Mahmood. Universal
multi-modal deep network for classification and segmentation of medical images. In 2018
138 Hybrid Quantum Metaheuristics: Theory and Applications
IEEE 15th International Symposium on Biomedical Imaging (ISBI 2018), pages 872–876,
2018.
8. Karol Antczak and Łukasz Liberadzki. Stenosis detection with deep convolutional neural
networks. MATEC Web of Conferences, 210:04001, 2018.
9. Shin Hoo-Chang, Holger Roth, Mingchen Gao, Le Lu, Ziyue Xu, Isabella Nogues, Jianhua
Yao, Daniel Mollura, and Ronald-M. Summers. Deep convolutional neural networks for
computer-aided detection: CNN architectures, dataset characteristics and transfer learning.
IEEE Transactions on Medical Imaging, 35(5):1285–1298, 2016.
10. Hossein Azizpour, Ali Razavian, Josephine Sullivan, Atsuto Maki, and Stefan Carlsson.
From generic to specific deep representations for visual recognition. In 2015 IEEE Con-
ference on Computer Vision and Pattern Recognition Workshops (CVPRW), pp. 36–45,
2015.
11. David-A. van Dyk and Xiao-Li Meng. The art of data augmentation. Journal of Computa-
tional and Graphical Statistics, 10(1):1–50, 2001.
12. Jessamyn Dahmen and Diane Cook. Synsys: A synthetic data generation system for health-
care applications. Sensors, 19(5), 2019.
13. R. Haralick, K. Shanmugam, and I. Dinstein. Textural features for image classification.
IEEE Transactions on Systems, Man, and Cybernetics, 3(6):610–621, 1973.
14. A. Frangi, W. Niessen, K. Vincken, and M. Viergever. Multiscale vessel enhancement filter-
ing. Medical Image Computing and Computer-Assisted Intervention (MICCAI98), pages
130–137, 1998.
15. Otsu Nobuyuki. A threshold selection method from gray-level histograms. IEEE Transac-
tions on Systems, Man and Cybernetics, 9(1):62–66, 1979.
16. Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning,
20:273–297, 1995.
17. Nello Cristianini, John Shawe-Taylor, et al. An introduction to support vector machines
and other kernel-based learning methods. Cambridge University Press, 2000.
18. William-S. Noble. What is a support vector machine? Nature Biotechnology, 24:1565–
1567, 2006.
19. Simon Tong and Daphne Koller. Support vector machine active learning with applications
to text classification. Journal of Machine Learning Research, pages 45–66, 2001.
20. Christopher-J.C. Burges. A tutorial on support vector machines for pattern recognition.
Data Mining and Knowledge Discovery, 2:121–167, 1998.
21. F. Shi, H. Wang, L. Yu, and F. Hu. Analyze of 30 Cases of MATLAB Intelligent Algo-
rithms. Beihang University Press, 2010.
22. A. Narayanan and M. Moore. Quantum-inspired genetic algorithms. In Proceedings of
IEEE International Conference on Evolutionary Computation, pages 61–66, 1996.
23. Davendra Donald. Traveling salesman problem, theory and applications. Intech, Rijeka,
2011.
24. Kuk-Hyun Han and Jong-Hwan Kim. Quantum-inspired evolutionary algorithms with a
new termination criterion, Hε gate, and two-phase scheme. IEEE Transactions
on Evolutionary Computation, 8(2):156–169, 2004.
25. Zhifeng Zhang and Hongjian Qu. A new real-coded quantum evolutionary algorithm. In
Proceedings of the 8th WSEAS International Conference on Applied Computer and Ap-
plied Computational Science, pages 426–429, 2009.
26. Gexiang Zhang. Quantum-inspired evolutionary algorithms: A survey and empirical study.
Journal of Heuristics, 17:303–351, 2011.
Automatic Feature Selection for Coronary Stenosis Detection in X-Ray Angiograms 139
27. Utpal Roy, Sudarshan Roy, and Susmita Nayek. Optimization with quantum genetic algo-
rithm. International Journal of Computer Applications, 102(16):1–7, 2014.
28. Ying Sun and Xiong Hegen. Function optimization based on quantum genetic algorithm.
Research Journal of Applied Sciences, Engineering and Technology, 7(1):144–149, 2014.
29. Wang Huaixiao, Li Ling, Liu Jianyong, Wang Yong, and Fu Chengqun. Improved quan-
tum genetic algorithm in application of scheduling engineering personnel. Abstract and
Applied Analysis, 2014:1–10, 2014.
30. Lee Jia-Chu, Lin Whei-Min, Liao Gwo-Ching, and Tsao Ta-Peng. Quantum genetic algo-
rithm for dynamic economic dispatch with valve-point effects and including wind power
system. International Journal of Electrical Power & Energy Systems, 33(2):189–197, 2011.
31. Ying Sun and Xiong Hegen. A novel quantum-inspired evolutionary algorithm for mul-
tisensor image registration. The International Arab Journal of Information Technology,
3(1):9–15, 2006.
32. Hu Wei. Cryptanalysis of tea using quantum-inspired genetic algorithms. Journal of Soft-
ware Engineering and Applications, 3:50–57, 2010.
33. Rafael Lahoz-Beltra. Quantum genetic algorithms for computer scientists. Computers,
5(4):243–249, 2012.
34. Leonard Susskind and Art Friedman. Quantum Mechanics: The Theoretical Minimum.
Penguin Books: London, UK, 2015.
7 Quantum Preprocessing for Deep Convolutional Neural Networks in Atherosclerosis Detection
7.1 INTRODUCTION
Atherosclerosis is a specific type of stenosis, i.e., a narrowing or occlusion of the artery lumen, that occurs due to the accumulation of substances such as cholesterol on the inner walls of the coronary arteries. Timely diagnosis and treatment of atherosclerosis are essential since it represents the leading cause of Coronary Artery Disease. According to the World Health Organization (WHO), this heart condition has a high mortality rate worldwide, with an estimated 17.9 million deaths every year [1]. Atherosclerosis detection consists of visual inspection of the arteries through a screening test, either non-invasively by computed tomography (Coronary Computed Tomography Angiography, CCTA) or with the regular procedure consisting of inserting a catheter through the groin or arm into the coronary arteries (X-ray Coronary Angiography, XCA). In both cases, a contrast medium is injected to locate the arteries. Nonetheless, XCA remains the gold standard used by specialists since it offers enough resolution for diagnosis. Besides, patients can receive treatment
during the same session. For instance, during the visual examination of the XCA
images, the physician may identify regions with stenosis as shown in Figure 7.1.
Figure 7.1: X-ray coronary angiography image. Stenosis and non-stenosis samples
regions marked in green and red, respectively.
However, the limited availability of specialists and the time consumed by diagnosis have allowed Computer-Aided Diagnosis (CAD) systems to play a vital role in cardiology. CAD systems have been a significant field of research during the last few decades, developed to improve and support the medical diagnosis process. CAD uses Machine Learning (ML) methods to analyze imaging or non-imaging (i.e., clinical profile) patient data to assess patients' conditions.
In computer vision, one of the breakthroughs occurred when Krizhevsky et al. [2] presented AlexNet, a Deep Convolutional Neural Network (DCNN) that won the 2012 ImageNet Large Scale Visual Recognition Challenge (ILSVRC) [3]. New Deep Learning (DL) algorithms have been proposed to adapt CNN architectures to challenging medical imaging problems. Despite significant research on the medical imaging domain using DL algorithms, practice often suffers from two significant difficulties: 1) a limited amount of labeled data; and 2) mislabeled data. New hybrid models have emerged to build more efficient ML algorithms. The most relevant is Quantum Machine Learning (QML) [4], which relies on quantum computing, either emulated classically from its physical properties or executed on quantum mechanical systems.
In this work, a Hybrid Quantum-Convolutional Neural Network for atherosclerosis detection in XCA images is presented. This approach considers an automatic preprocessing step, implemented through a Quantum Convolutional Layer (QCL), whose behavior corresponds to applying a quantum circuit as an image filter. The preprocessing quantum layer receives a single-channel XCA patch as input and generates a multichannel image, which feeds a classical CNN to perform the atherosclerosis detection. The presented hybrid method showed that employing a QCL to preprocess the XCA images improves the network performance compared with the raw XCA images usually fed to a CNN. Five different evaluation metrics were used to measure the performance of the proposed method. Besides, two different optimization techniques were compared: Stochastic Gradient Descent and Stochastic Gradient Descent with Momentum. Additionally, two different CNN architectures previously introduced for atherosclerosis detection were studied. The employed dataset consists of 250 real XCA images, where 125 images are used for training and the remaining 125 for testing.
The remainder of this document is organized as follows: Firstly, in Section 7.2,
the related work is addressed. Section 7.3 describes briefly the concepts related to
Quantum computation and CNNs. Section 7.4 presents the introduced methodology
for atherosclerosis detection. In Section 7.5, the experimental and numerical results
are carefully detailed and discussed. Finally, the conclusions are stated in Section 7.6.
to compute the arterial diameter of each skeleton point. Finally, a label of general or severe obstruction was assigned based on the ratio of each segment's minimum diameter to its average diameter.
Wu et al. [6] used a U-Net architecture to segment the vascular structure of the
XCA sequence taking advantage of the binary output to calculate a contrast-filling
degree for each frame. Next, a percentage of sequences passing to the detection step
was selected, where a Deconvolutional Single-Shot Multibox Detector may generate
the location information. Jevitha et al. [7] combined a contour activation algorithm
with a Frangi filter to extract the left main coronary artery. Next, bifurcation points
were automatically detected using a template kernel. Later, measurements were taken
on the artery bifurcation angles. The result was used to discriminate between stenotic
and healthy arteries. These methods, which build atherosclerosis detection and classification around blood vessel segmentation, try to emulate physicians' procedures when diagnosing patients. This approach has the advantage that the workflow often has a direct interpretation. Regardless, the main task results (atherosclerosis detection) are conditioned by the segmentation algorithm's performance, which, in many cases, struggles to detect narrowed regions.
Recently, proposals have emerged that do not consider vessels’ extraction as a
necessary step. Instead, end-to-end systems have been built, taking advantage of the
rapid increase in computational power and the superior performance that algorithms
based on deep learning have shown.
Au et al. [8] introduced a patch-based CNN that automatically characterizes
and analyzes coronary stenosis in such a context. The network was based on a
DenseNet [9], where skip connections between convolutional layers were added.
Likewise, Antczak and Liberadzki [10] developed a patch-based CNN based on a VGG architecture [11], adding dropout layers after the convolutional layers and removing the pooling layers. This approach employed an artificial dataset to overcome the problem of a limited amount of training data. Both networks were trained from scratch using a Sigmoid activation function in the last dense layer to compute the probability of a patch belonging to the stenosis class.
Cong et al. [12] presented another interesting solution. The InceptionV3 [13] net-
work was trained from scratch to select a subset of candidate frames (like what was
done in [6]), considering the contrast filling degree and other image quality measures,
such as well-defined vessel borders. A second step used Transfer Learning for steno-
sis detection with an InceptionV3 network pre-trained with the ImageNet database.
Fine-tuning was performed using a strategy called redundancy training that included
pre-classified redundant frames in the training dataset.
Ovalle et al. [14] presented a method for successfully detecting coronary artery
stenosis in XCA images, evaluating three pre-trained state-of-the-art architectures
(VGG16 [11], ResNet50 [15], and InceptionV3 [13]) via Transfer Learning. Such
a method incorporates a network-cut approach where only a sub-set of layers was
considered. Layers between the cut layer and custom classifier are discarded. During
the fine-tuning step, an artificial dataset was exploited. Moreover, the fine-tuning
was substantially improved using a sub-set of real XCA images. This approach
144 Hybrid Quantum Metaheuristics: Theory and Applications
outperformed the results obtained using only the fine-tuning process during Transfer
Learning.
Figure 7.2 shows an overview of the previously published methods evaluated using XCA images. End-to-end methods have shown worthy potential for detecting

Figure 7.2: Typical workflow of the previously published algorithms for coronary artery stenosis detection in XCA images.

Classifier previously introduced by Bergholm et al. in [19] for the binary classification of pigmented skin lesions. The images' dimensionality is transformed from 128 × 128 × 3 to a 4-dimensional vector via an autoencoder implemented with a CNN. Next, each vector is encoded in quantum amplitudes and sent to the 2-qubit classifier to predict whether the image is a melanoma or a melanocytic nevus.
On the other hand, Bisarya et al. [20] proposed a quantum convolutional neural network (QCNN) for cancer detection in breast cell data by exploiting the Wisconsin dataset. Two cases were theoretically considered: the former uses numerical data associated with the size of the cells and their texture, processed using an architecture of two QCLs with only ten parameters, each one assigned to a qubit; the latter uses gray-scale magnetic resonance images, processed in sets of 4 × 4-pixel arrays, encoded in a 256-qubit system. Regardless, only the first approach was implemented due to the limited availability of resources.
It is noteworthy that, to the best of the authors' knowledge, there are currently no works that use quantum convolutional networks to detect atherosclerosis or other types of stenosis. Therefore, in this study, a Hybrid Quantum-Convolutional Neural Network for atherosclerosis detection in XCA images, employing a QCL, is presented. The QCL successfully performs a preprocessing step over the XCA images to generate a multichannel image that feeds a classical CNN, improving the detection performance against the raw XCA images.
Figure 7.3: Illustration of the two basic states on the Bloch Sphere. The graph was
plotted using the Johansson et al. library [23].
Next, a couple of examples are included to clarify some basic properties of quan-
tum computing.
Example 1. Consider a 2-qubit system whose state is described by the state vector
$$\frac{1}{\sqrt{2}}[1, 0, 0, 1]^\intercal = \frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|3\rangle = \frac{1}{\sqrt{2}}|(00)_2\rangle + \frac{1}{\sqrt{2}}|(11)_2\rangle.$$
This representation is a valid quantum state since $\left|\frac{1}{\sqrt{2}}\right|^2 + \left|\frac{1}{\sqrt{2}}\right|^2 = 1$, where $(X)_2$ is a binary representation, reading from left to right, of the state-vector positions.
A 2-qubit state is separable if it can be written as a tensor product,
$$|\psi\rangle_{AB} = |\varphi\rangle_A \otimes |\phi\rangle_B. \qquad (7.5)$$
A state that cannot be factored in this way, such as
$$\frac{1}{\sqrt{2}}\left(|(00)_2\rangle + |(11)_2\rangle\right) \neq |\varphi\rangle_A \otimes |\phi\rangle_B, \qquad (7.6)$$
is entangled. Such a 2-qubit system is also known as an EPR pair [24].
$$U^* U = U U^* = I. \qquad (7.7)$$

The Hadamard gate allows taking a qubit from a definite computational basis state into a superposition of two states. The Hadamard matrix H is defined by
$$H = \frac{1}{\sqrt{2}}\begin{pmatrix} 1 & 1 \\ 1 & -1 \end{pmatrix}. \qquad (7.8)$$
Meanwhile, the T-gate performs a rotation of π/4 around the z-axis and is defined as follows:
$$T = \begin{pmatrix} 1 & 0 \\ 0 & e^{i\pi/4} \end{pmatrix}. \qquad (7.9)$$
These two operations can be composed to approximate unitary transformations
on a single qubit, such as the Pauli-X, Y, and Z gates (σx , σy , and σz ) used to rotate
the superposition along x-, y-, or z-axis. The Pauli gates are defined by
$$\sigma_x = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = H T^4 H, \qquad (7.10)$$

$$\sigma_y = \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix} = T^2 H T^4 H T^6, \qquad (7.11)$$

$$\sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} = T^4. \qquad (7.12)$$

The Pauli matrices are involutory, that is, each is its own inverse, such that
$$\sigma_x^2 = \sigma_y^2 = \sigma_z^2 = -i\,\sigma_x \sigma_y \sigma_z = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I. \qquad (7.13)$$
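These decompositions can be checked numerically; the short MATLAB sketch below constructs H and T and verifies Eqs. (7.10)-(7.12) by printing the norms of the differences, which are zero up to rounding.

% Verify the H/T decompositions of the Pauli gates
H = [1 1; 1 -1] / sqrt(2);            % Hadamard, Eq. (7.8)
T = [1 0; 0 exp(1i * pi / 4)];        % T-gate, Eq. (7.9)
sx = [0 1; 1 0]; sy = [0 -1i; 1i 0]; sz = [1 0; 0 -1];
disp(norm(H * T^4 * H - sx));                 % ~0: Eq. (7.10)
disp(norm(T^2 * H * T^4 * H * T^6 - sy));     % ~0: Eq. (7.11)
disp(norm(T^4 - sz));                         % ~0: Eq. (7.12)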
It can be shown that, for a given angle γ and a Pauli matrix σ_a, a ∈ {x, y, z},
$$\exp(i\gamma\sigma_a) = \cos(\gamma)\,I + i\sin(\gamma)\,\sigma_a. \qquad (7.14)$$

Therefore, the rotation operators, which rotate the unit Bloch vector by an angle γ around a specific axis, are given by
$$R_x(\gamma) = e^{-i\frac{\gamma}{2}\sigma_x} = \cos\frac{\gamma}{2}\,I - i\sin\frac{\gamma}{2}\,\sigma_x, \qquad (7.15)$$

$$R_y(\gamma) = e^{-i\frac{\gamma}{2}\sigma_y} = \cos\frac{\gamma}{2}\,I - i\sin\frac{\gamma}{2}\,\sigma_y, \qquad (7.16)$$

$$R_z(\gamma) = e^{-i\frac{\gamma}{2}\sigma_z} = \cos\frac{\gamma}{2}\,I - i\sin\frac{\gamma}{2}\,\sigma_z. \qquad (7.17)$$
Moreover, the rotation operators can be expanded as
$$R_x(\gamma) = \begin{pmatrix} \cos(\gamma/2) & -i\sin(\gamma/2) \\ -i\sin(\gamma/2) & \cos(\gamma/2) \end{pmatrix}, \qquad (7.18)$$

$$R_y(\gamma) = \begin{pmatrix} \cos(\gamma/2) & -\sin(\gamma/2) \\ \sin(\gamma/2) & \cos(\gamma/2) \end{pmatrix}, \qquad (7.19)$$

$$R_z(\gamma) = \begin{pmatrix} e^{-i\gamma/2} & 0 \\ 0 & e^{i\gamma/2} \end{pmatrix}. \qquad (7.20)$$
Example 3. For instance, let us consider the following quantum circuit that rotates a qubit around the x-axis and afterward around the y-axis. First, a qubit in the ground state |0⟩ = [1, 0]⊺ is rotated through the angle γ1 around the x-axis by applying the gate Rx(γ1), and subsequently around the y-axis through the angle γ2 via the gate Ry(γ2). After these operations, the qubit is in the state
$$|\psi\rangle = R_y(\gamma_2)\, R_x(\gamma_1)\,|0\rangle.$$
Example 4. Let us consider the state |ψ⟩ = Ry(γ2)Rx(γ1)|0⟩. Applying a measurement of the Pauli-Z observable, the expectation value is given by ⟨σz⟩ψ = ⟨ψ|σz|ψ⟩. Depending on the circuit parameters γ1 and γ2, the output lies in the range [−1, 1]. For instance, if γ1 = 0.5 and γ2 = 1.0, then ⟨σz⟩ψ = 0.47416.
Figure 7.4: Typical architecture of a Convolutional Neural Network. The design comprises one convolutional layer with a kernel size of 3 × 3, followed by a pooling layer with a window size of 2 × 2. Finally, a fully connected layer is included in the model to optimize the objective function.
The output feature map of the k-th kernel can be written as
$$\mathrm{conv}_k = f\left(b_k + \sum_{c} I[c] * w_k[c]\right),$$
where f is an activation function (e.g., Sigmoid), b_k is the k-th bias, I[c] is the image at the c-th channel, w_k[c] is the k-th kernel for that channel, and ∗ denotes the convolution operator.
Figure 7.5: Numerical example of the 2-D convolution operation: an input image convolved with a small kernel produces the output feature map.
Several pooling operations can be applied; max- and average-pooling are the most used. In max-pooling, the maximum value is taken from a certain spatial neighborhood (for example, a 2 × 2 window). Instead of taking the largest element, average-pooling takes the average value of the neighborhood.
Figure 7.6: Numerical example of max-pooling and average-pooling applied to the same input feature map with a 2 × 2 window.
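A minimal MATLAB sketch of both pooling operations with a 2 × 2 window and stride 2 follows; the feature map F is a hypothetical example and its size is assumed divisible by 2.

% Max- and average-pooling with a 2x2 window and stride 2
F = [0 2 3 2; 0 -1 0 1; 0 0 0 1; 0 2 3 1];   % hypothetical feature map
[h, w] = size(F);
maxP = zeros(h/2, w/2); avgP = zeros(h/2, w/2);
for r = 1:h/2
    for c = 1:w/2
        block = F(2*r-1:2*r, 2*c-1:2*c);     % 2x2 spatial neighborhood
        maxP(r, c) = max(block(:));          % max-pooling keeps the largest value
        avgP(r, c) = mean(block(:));         % average-pooling keeps the mean
    end
end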
In a fully connected layer, each neuron in one layer is connected to each neuron on the next layer. In summary, the fully connected step consists of three layers:
Fully connected input layer: It flattens the last feature maps into a single one-dimensional vector, commonly used in the transition from the last convolutional/pooling layer to the fully connected layers.
First fully connected layer: A set of weights is learned to predict the correct label from the one-dimensional feature representation.
Fully connected output layer: It retrieves the final probabilities for each class.
Figure 7.7: The basic process of an activation function. The sum of all weighted
inputs (w⊺ x) and the bias b is passed through a non-linear activation function f to
generate the neuron output.
The choice of activation function depends on the nature of the problem to solve. The most widely used activation functions are the Sigmoid, Hyperbolic Tangent, Rectified Linear Unit (ReLU), Leaky ReLU, and SoftMax. The Sigmoid function returns a value close to zero for small argument values and close to 1 for large argument values,
$$\mathrm{Sigmoid}(x) = \frac{1}{1 + e^{-x}}. \qquad (7.23)$$

As an alternative to the Sigmoid function, the Hyperbolic Tangent function can be used as an activation function producing outputs in the range [−1, 1]; it is formally defined as
$$\tanh(x) = \frac{e^x - e^{-x}}{e^x + e^{-x}}. \qquad (7.24)$$

The Rectified Linear Unit (ReLU) activation function returns the element-wise maximum between 0 and the input value:
$$\mathrm{ReLU}(x) = \max(0, x). \qquad (7.25)$$

Leaky ReLU is a variation of the ReLU function, which outputs a small value proportional to the input when the input is not active:
$$\mathrm{LeakyReLU}(x) = \begin{cases} x & \text{if } x \ge 0, \\ \epsilon x & \text{otherwise,} \end{cases} \qquad (7.26)$$
where ε is a small positive slope.

Lastly, the SoftMax function converts a real input vector into a vector of probabilities; therefore, the elements of the output vector sum to 1. The SoftMax function applied to the j-th component of the vector x is computed as
$$\mathrm{SoftMax}(x)_j = \frac{e^{x_j}}{\sum_{i=1}^{N} e^{x_i}}. \qquad (7.27)$$
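The five activation functions of Eqs. (7.23)-(7.27) translate directly into MATLAB anonymous functions; the Leaky ReLU slope of 0.01 is an illustrative value since Eq. (7.26) leaves the slope as a parameter.

% Activation functions of Eqs. (7.23)-(7.27) as anonymous functions
sigmoid = @(x) 1 ./ (1 + exp(-x));                          % Eq. (7.23)
tanhf   = @(x) (exp(x) - exp(-x)) ./ (exp(x) + exp(-x));    % Eq. (7.24)
relu    = @(x) max(0, x);                                   % Eq. (7.25)
lrelu   = @(x) max(0.01 * x, x);                            % Eq. (7.26), slope 0.01
softmax = @(x) exp(x) ./ sum(exp(x));                       % Eq. (7.27)
x = [-2 -1 0 1 2];
disp(softmax(x));    % the components sum to 1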
Figure 7.8: Outline of the proposed methodology. The input XCA patches pass
through a QCL to generate a 9-channel image to feed the traditional CNN to de-
tect atherosclerosis.
Figure 7.9: 9-qubit Random Quantum Circuit employed in the Quantum Convolutional Layer. The circuit has Pauli-X, Pauli-Y, and Pauli-Z rotations randomly distributed, and CNOT gates as imprimitives (2-qubit entangling gates).
Figure 7.10: Design of the Quantum Convolutional Layer consisting of three main
stages: ground-state initialization (encoding), Random Quantum Circuit (RQC) defi-
nition and output state computation, and the measurement (decoding). From a sliding
window of 3 × 3, the QCL generates nine expectation values mapped into nine dif-
ferent channels from data.
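To convey the encode-process-decode flow of Figure 7.10, the following simplified MATLAB sketch encodes each pixel of a 3 × 3 sliding window as an Ry rotation on its own qubit and decodes the Pauli-Z expectation into one of nine channels. It deliberately omits the entangling CNOT layer of the actual Random Quantum Circuit, so it is only a product-state approximation of the QCL, and the π-scaled angle encoding is an assumption.

% Product-state sketch of the QCL: 3x3 window -> 9 expectation values
I = rand(32, 32);                         % hypothetical normalized XCA patch
sz = [1 0; 0 -1];
out = zeros(30, 30, 9);                   % nine output channels
for r = 1:30
    for c = 1:30
        w = I(r:r+2, c:c+2);              % 3x3 sliding window
        for k = 1:9
            g = pi * w(k);                % encoding: angle proportional to pixel
            psi = [cos(g/2); sin(g/2)];   % Ry(g)|0>, Eq. (7.19)
            out(r, c, k) = real(psi' * sz * psi);   % decoded <sigma_z>
        end
    end
end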
Table 7.1
CNN Architectures for Atherosclerosis Detection
Network Architecture Layer Description
64-Conv(3 × 3) Convolutional layer with 64 kernels, each with a size of 3 × 3
pixels
64-Conv(1 × 1) Convolutional layer with 64 kernels, each with a size of 1 × 1
pixels
64-Conv(3 × 3) Convolutional layer with 64 kernels, each with a size of 3 × 3
pixels
I) Au et al. [8] Conc(1, 3) Concatenation of the outputs of the first and third convolu-
tional layer
64-Conv(1 × 1) Convolutional layer with 64 kernels, each with a size of 1 × 1
pixels
64-Conv(3 × 3) Convolutional layer with 64 kernels, each with a size of 3 × 3
pixels
Conc(1, 6) Concatenation of the outputs of the first and sixth convolu-
tional layer
GMP Global Max Pooling layer
1-Dense Dense layer with one neuron and a Sigmoid activation func-
tion
8-Conv(7 × 7) Convolutional layer with eight kernels, each with a size of
7 × 7 pixels
Dropout(0.5) Dropout layer with a rate of 0.5
8-Conv(7 × 7) Convolutional layer with eight kernels, each with a size of
7 × 7 pixels
II) Antczak and 8-Conv(7 × 7) Convolutional layer with eight kernels, each with a size of
Liberadzki [10] 7 × 7 pixels
Dropout(0.5) Dropout layer with a rate of 0.5
8-Conv(7 × 7) Convolutional layer with eight kernels, each with a size of
7 × 7 pixels
Dropout(0.5) Dropout layer with a rate of 0.5
16-Dense Dense layer with 16 neurons and a Sigmoid activation func-
tion
1-Dense Dense layer with one neuron and a Sigmoid activation func-
tion
Let y_n be the class-indicator variable for the n-th input patch, defined as
$$y_n = \begin{cases} 0, & \text{if non-atherosclerosis,} \\ 1, & \text{if atherosclerosis.} \end{cases} \qquad (7.29)$$
Furthermore, ŷ corresponds to the probability, estimated by the CNN, that the input patch has atherosclerosis. Lastly, the optimization process is achieved by minimizing
$$\min_{w}\, J(y, \hat{y}) = -\frac{1}{N}\sum_{n=1}^{N}\left[y_n \log(\hat{y}_n) + (1 - y_n)\log(1 - \hat{y}_n)\right], \qquad (7.30)$$
where N represents the batch size used during the minimization. For atherosclerosis detection, the loss function can be rewritten as
$$J(y_n, \hat{y}_n) = -\begin{cases} \log(\hat{y}_n), & \text{if } y_n = 1, \\ \log(1 - \hat{y}_n), & \text{if } y_n = 0, \end{cases} \qquad (7.31)$$
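As a worked example of Eq. (7.30), the following MATLAB lines compute the batch loss for hypothetical labels and predicted probabilities:

% Binary cross-entropy over a batch, Eq. (7.30)
y    = [1 0 1 1 0];              % hypothetical class indicators, Eq. (7.29)
yhat = [0.9 0.2 0.7 0.6 0.1];    % hypothetical CNN output probabilities
N = numel(y);
J = -(1/N) * sum(y .* log(yhat) + (1 - y) .* log(1 - yhat));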
In SGD, the parameters are updated as w ← w − α∇_w J(w), where α is the learning rate. SGD can lead to slow convergence, particularly after the initial steep gains. Some methods have been incorporated to overcome this inconvenience. Momentum is a method that integrates the past gradients in each dimension. In SGD with momentum [28] (SGDM), the gradient at every dimension is incorporated to gain velocity where the parameters have a consistent gradient. The momentum update is given by
$$v \leftarrow \mu\, v - \alpha \nabla_w J(w), \qquad w \leftarrow w + v,$$
where μ is the momentum coefficient and v is the velocity term.
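The contrast between the two optimizers can be sketched on a toy quadratic J(w) = w², whose gradient is 2w; the learning rate and momentum coefficient below are illustrative values:

% Plain SGD versus SGD with momentum on J(w) = w^2
alpha = 0.1; mu = 0.9;
w_sgd = 5; w_sgdm = 5; v = 0;
for t = 1:50
    g = 2 * w_sgd;                 % gradient of J at the current SGD iterate
    w_sgd = w_sgd - alpha * g;     % plain SGD update
    g = 2 * w_sgdm;
    v = mu * v - alpha * g;        % momentum accumulates past gradients
    w_sgdm = w_sgdm + v;           % SGDM update
end
fprintf('SGD: %.4f  SGDM: %.4f\n', w_sgd, w_sgdm);   % both approach 0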
The F-Measure represents the harmonic mean between the recall (sensitivity) and precision values. The general F-Measure is described as follows:
$$F_\beta = (1 + \beta^2) \times \frac{\mathrm{Precision} \times \mathrm{Recall}}{\beta^2 \times \mathrm{Precision} + \mathrm{Recall}}. \qquad (7.38)$$
The employed patches have a size of 32 × 32 pixels and were obtained from XCA images. In the numerical experiments, the pixel values were normalized into [0, 1].
The patches were obtained, as seen in Figure 7.11, following these steps:
1. Input images are downsampled from 256 × 256 to 128 × 128 pixels.
2. A sliding window of size 32 × 32 is moved to obtain a set of patches from each image.
3. Each patch is labeled as stenosis or non-stenosis.
The sliding window produces multiple overlapping patches; thus, multiple patches may be classified as positive cases, even if there is only one stenosis in the image.
Figure 7.11: General outline of XCA patches generation. First, the XCA image is
sub-sampled; next, a sliding window generates the output patches.
Figure 7.12: Feature maps generated by the Quantum Convolutional Layer (QCL).
First row: raw XCA images. The following nine rows show the generated images by
the QCL.
Figure 7.13: k-Fold cross-validation procedure. The dataset was split into training
and testing sets. Next, the training set is divided into folds to train CNN. The fold
marked as gray was taken as validation, while the folds in light green as training data.
The learning curves measure the model's performance during the training and validation steps in terms of accuracy and loss. In Figure 7.14, loss and accuracy curves are plotted against the number of epochs used to train the model.
Figure 7.15 shows that the accuracy (using SGDM) increases while the loss decreases. The Q-CNN-A (the CNN proposed by Au et al. [8] with the quantum convolutional layer) achieves the best accuracy in training and validation. Those results are followed by the Q-CNN-B (the CNN proposed by Antczak and Liberadzki [10] using the quantum convolutional layer). In terms of validation loss, the models start to overfit during the training process.
On the other hand, the accuracy curves in Figure 7.16 also show that the CNNs trained with the outcome of the QCL reach the highest accuracy (in particular, Q-CNN-A) when using SGD as the optimizer. The same network architecture, but trained using only the raw (normalized) XCA patches, followed in accuracy performance.
In contrast to the loss curves obtained with SGDM, the curves in Figure 7.17 suggest that the validation loss decreases and maintains a small gap with the training loss during the training process.
Figure 7.14: Training and validation accuracy curves of the CNN models employing
SGD with Momentum concerning the number of epochs (1000) in the training phase.
Top: Training accuracy curves, Bottom: Validation accuracy curves.
The detection results were compared using SGD and its variant with momentum (SGDM). Table 7.2 presents the atherosclerosis detection performance rates for each CNN studied. It shows that the use of momentum provides improved performance for both CNN architectures. The best performance for each metric is highlighted.
Furthermore, when the network is trained using the QCL preprocessing, its performance surpasses the raw-XCA-patch version, reaching an accuracy of 83.36% against 79.84% for the architecture employed by Au et al. [8]. For the second architecture (Antczak and Liberadzki [10]), the accuracy improves from 69.44% to 80.80% when the network is fed with the QCL output images. In general, an improvement in precision, sensitivity, specificity, and F1-score was achieved using the QCL.
Figure 7.15: Training and validation loss curves of the CNN models employing SGD
with Momentum concerning the number of epochs (1000) in the training phase. Top:
Training loss curves, Bottom: Validation loss curves.
Figure 7.16: Training and validation accuracy curves of the CNN models employ-
ing SGD, and concerning as 1000 the number of epochs in the training phase. Top:
Training accuracy curves, Bottom: Validation accuracy curves.
Figure 7.17: Training and validation loss curves of the CNN models employing SGD
concerning the number of epochs (1000) in the training phase. Top: Training loss
curves, Bottom: Validation loss curves.
The QCL-based models achieved improvements in the five evaluation metrics with respect to the classical CNN architectures trained with the normalized raw XCA images.
Additionally, the numerical results permit asserting that using momentum during the optimization step improves the detection performance. The atherosclerosis detection with the DenseNet-based CNN employing the Quantum Convolutional Layer reached the best accuracy, 83.36%, while in precision, sensitivity, specificity, and F1-score it achieved 81.52%, 86.67%, 83.99%, and 80.00%, respectively.
A future direction of this work is to incorporate the quantum layer into the optimization procedure. Finally, in agreement with the numerical results, the proposed method has shown the potential of a preprocessing Quantum Convolutional Layer to generate discriminative feature maps (a multichannel image) that feed a classical CNN to detect atherosclerosis in XCA images.
Table 7.2
Network Detection Results

Model        Accuracy          Precision         Sensitivity        F1-Score          Specificity
QCL-SGDM-A   83.36% (±2.11%)   81.52% (±2.13%)   86.67% (±2.77%)    83.99% (±2.07%)   80.00% (±2.62%)
QCL-SGDM-B   80.80% (±3.16%)   78.55% (±2.16%)   85.08% (±5.37%)    81.64% (±3.41%)   76.45% (±2.19%)
QCL-SGD-A    82.88% (±1.65%)   78.46% (±1.94%)   91.11% (±1.90%)    84.29% (±1.42%)   74.52% (±2.96%)
QCL-SGD-B    60.64% (±1.55%)   59.09% (±1.16%)   71.75% (±9.81%)    64.47% (±3.72%)   49.35% (±8.20%)
SGDM-A       79.84% (±4.28%)   75.98% (±3.15%)   87.62% (±6.06%)    81.35% (±4.26%)   71.94% (±3.32%)
SGDM-B       69.44% (±1.99%)   67.80% (±2.28%)   75.87% (±10.06%)   71.13% (±4.17%)   62.90% (±7.36%)
SGD-A        67.36% (±2.55%)   64.96% (±0.86%)   76.83% (±12.20%)   69.90% (±5.12%)   57.74% (±7.59%)
SGD-B        60.80% (±2.09%)   58.55% (±1.42%)   78.10% (±19.40%)   65.67% (±7.36%)   43.23% (±16.11%)

The detection performance for each neural network (A: Au et al. [8], B: Antczak and Liberadzki [10]) was measured employing the Quantum Convolutional Layer (QCL) preprocessing and the original XCA. The networks were also optimized using Stochastic Gradient Descent (SGD) and SGD with Momentum (SGDM).
Figure 7.18: Detection result samples. Each row shows one case of True-Positive (TP), True-Negative (TN), False-Positive (FP), and False-Negative (FN). From left to right: one raw XCA image and the nine channels of the QCL outcome image.
CONFLICTS OF INTEREST
The authors declare that there is no conflict of interest.
ACKNOWLEDGEMENTS
This research was supported by the Engineering Division of the Campus Irapuato-
Guanajuato, grant NUA 147347; and the Mexican Council of Science and Technol-
ogy CONACyT, Doctoral Studies Grants no. 626154 and 626155.
ETHICAL APPROVAL
All procedures performed in studies involving human participants were in accor-
dance with the ethical standards of the institutional and/or national research commit-
tee and with the 1964 Helsinki Declaration and its later amendments or comparable
ethical standards. For this type of study, formal consent is not required.
REFERENCES
1. World Health Organization. Cardiovascular Diseases (CVDs). https://www.who.int/news-
room/fact-sheets/detail/cardiovascular-diseases-(cvds), Aug 2020.
2. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. Imagenet classification with deep
convolutional neural networks. In Advances in Neural Information Processing Systems,
pages 1097–1105, 2012.
3. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhi-
heng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, et al. Imagenet large
scale visual recognition challenge. International Journal of Computer Vision, 115(3):211–
252, 2015.
4. Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, and
Seth Lloyd. Quantum machine learning. Nature, 549(7671):195–202, 2017.
5. Chenxin Sui, Zhuang Fu, Zeyu Fu, Yao Wang, Yu Zhuang, Rongli Xie, Yanna Zhao, Jun
Zhang, and Jian Fei. A Novel method for vessel segmentation and automatic diagnosis of
vascular stenosis. In 2019 IEEE International Conference on Robotics and Biomimetics
(ROBIO), pages 918–923. IEEE, 2019.
6. Wei Wu, Jingyang Zhang, Hongzhi Xie, Yu Zhao, Shuyang Zhang, and Lixu Gu. Auto-
matic detection of coronary artery stenosis by convolutional neural network with temporal
constraint. Computers in Biology and Medicine, 118:103657, 2020.
7. S. Jevitha, M. Dhanalakshmi, and Pradeep G Nayar. Analysis of left main coronary bifur-
cation angle to detect stenosis. In International Conference on Intelligent Systems Design
and Applications, pages 627–639. Springer, 2018.
8. Benjamin Au, Uri Shaham, Sanket Dhruva, Georgios Bouras, Ecaterina Cristea, Andreas
Coppi, Fred Warner, Shu-Xia Li, and Harlan Krumholz. Automated characterization of
stenosis in invasive coronary angiography images with convolutional neural networks.
arXiv preprint arXiv:1807.10597, 2018.
9. Gao Huang, Zhuang Liu, Laurens Van Der Maaten, and Kilian Q Weinberger. Densely
connected convolutional networks. In Proceedings of the IEEE Conference on Computer
Vision and Pattern Recognition, pages 4700–4708, 2017.
10. Karol Antczak and Lukasz Liberadzki. Stenosis detection with deep convolutional neural
networks. In MATEC Web of Conferences, volume 210, page 04001. EDP Sciences, 2018.
11. Karen Simonyan and Andrew Zisserman. Very deep convolutional networks for large-scale
image recognition. arXiv preprint arXiv:1409.1556, 2014.
168 Hybrid Quantum Metaheuristics: Theory and Applications
12. Chao Cong, Yoko Kato, Henrique Doria Vasconcellos, Joao Lima, and Bharath Venkatesh.
Automated stenosis detection and classification in X-ray angiography using deep neu-
ral network. In 2019 IEEE International Conference on Bioinformatics and Biomedicine
(BIBM), pages 1301–1308. IEEE, 2019.
13. Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir
Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper
with convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pat-
tern Recognition, pages 1–9, 2015.
14. Emmanuel Ovalle-Magallanes, Juan Gabriel Avina-Cervantes, Ivan Cruz–Aceves, and Jose
Ruiz-Pinales. Transfer learning for stenosis detection in X-ray coronary angiography.
Mathematics, 8(9):1510, 2020.
15. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for
image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern
Recognition, pages 770–778, 2016.
16. Maxwell Henderson, Samriddhi Shakya, Shashindra Pradhan, and Tristan Cook. Quan-
volutional neural networks: Powering image recognition with quantum circuits. Quantum
Machine Intelligence, 2(1):1–9, 2020.
17. Jennifer Sleeman, John Dorband, and Milton Halem. A hybrid quantum enabled RBM ad-
vantage: convolutional autoencoders for quantum image compression and generative learn-
ing. In Quantum Information Science, Sensing, and Computation XII, volume 11391, page
113910B. International Society for Optics and Photonics, 2020.
18. Vijayasri Iyer, Bhargava Ganti, A.M. Hima Vyshnavi, P.K. Krishnan Namboori, and Sriram
Iyer. Hybrid quantum computing based early detection of skin cancer. Journal of Interdis-
ciplinary Mathematics, 23(2):347–355, 2020.
19. Ville Bergholm, Josh Izaac, Maria Schuld, Christian Gogolin, Carsten Blank, Keri McK-
iernan, and Nathan Killoran. Pennylane: Automatic differentiation of hybrid quantum-
classical computations. arXiv preprint arXiv:1811.04968, 2018.
20. Aradh Bisarya, Shubham Kumar, Walid El Maouaki, Sabyasachi Mukhopadhyay, Bikash
K Behera, Prasanta K Panigrahi, et al. Breast Cancer Detection Using Quantum Convolu-
tional Neural Networks: A Demonstration on a Quantum Computer. medRxiv, 2020.
21. Michael A. Nielsen and Isaac L. Chuang. Quantum Computation and Quantum Informa-
tion: 10th Anniversary Edition. Cambridge University Press, 2010.
22. Ronald De Wolf. Quantum computing: Lecture notes. arXiv preprint arXiv:1907.09415,
2019.
23. J. Robert Johansson, Paul D Nation, and Franco Nori. QuTiP: An open-source Python
framework for the dynamics of open quantum systems. Computer Physics Communica-
tions, 183(8):1760–1772, 2012.
24. Albert Einstein, Boris Podolsky, and Nathan Rosen. Can quantum-mechanical description
of physical reality be considered complete? Physical Review, 47(10):777, 1935.
25. Dominik Scherer, Andreas Muller, and Sven Behnke. Evaluation of pooling operations in
convolutional architectures for object recognition. In International Conference on Artificial
Neural Networks, pages 92–101. Springer, 2010.
26. Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedfor-
ward neural networks. In Proceedings of the thirteenth international conference on artificial
intelligence and statistics, pages 249–256, 2010.
27. Herbert Robbins and Sutton Monro. A stochastic approximation method. The Annals of
Mathematical Statistics, pages 400–407, 1951.
Quantum Preprocessing for DCNN in Atherosclerosis Detection 169
28. Ning Qian. On the momentum term in gradient descent learning algorithms. Neural Net-
works, 12(1):145–151, 1999.
29. Karol Antczak and Lukasz Liberadzki. Deep Stenosis Detection Dataset.
https://github.com/KarolAntczak/DeepStenosisDetection, Aug 2020.
30. Timo Ojala, Matti Pietikainen, and David Harwood. A comparative study of texture mea-
sures with classification based on featured distributions. Pattern Recognition, 29(1):51–59,
1996.
8 Multilevel Quantum Elephant Herd Algorithm for Automatic Clustering of Hyperspectral Images
Automatic clustering of hyperspectral images is a very strenuous task due to the presence of a huge number of redundant bands and the complexity of processing them. In this work, two quantum versions of the Elephant Herd Optimization algorithm are proposed for this purpose. The binary and ternary quantum logics used enhance the exploration and exploitation capabilities of elephant herd optimization. These algorithms are compared to their classical counterpart. They are implemented on the Salinas dataset. The proposed qutrit-based algorithm is found to converge faster and produce more robust results. The Xie-Beni Index is used as the fitness function. Statistical measures such as the mean and standard deviation, together with the Kruskal-Wallis test, are used to establish the efficiency of the proposed algorithms. The F-score is used to compare the segmented images obtained using the optimal cluster numbers. The proposed algorithms are found to perform better in most of the cases.
8.1 INTRODUCTION
Hyperspectral image (HSI) processing has caught the attention of many researchers in the past decade. The development of powerful spectral cameras has provided researchers with the tools to easily acquire such images. HSI can provide extensive and meticulous information about the object or area captured by means of the spectral cameras [5]. The reflection and absorption capabilities of different materials present on the Earth's surface are unique; this spectral information is used in HSI to recognize them individually [41]. HSI is extensively used in various fields like environmental studies [45], military applications [25], and the medical field [36]. The number of bands in HSI varies from 10 to around 400. The main problem with hyperspectral images is the presence of redundant and correlated information, leading to the Hughes phenomenon [21]. The rich spectral information increases the computing time in processing HSI. Hence, various dimensionality reduction techniques are widely researched [11].
To extract useful information from HSI, different methods like classification, clustering, and unmixing are used [5]. Clustering is a very beneficial method in HSI analysis when the ground truth image is not available. According to Zhang et al. [51], HSI clustering algorithms [32] can be categorized into four groups. The centre-based method is one of the most widely used clustering approaches. In this type of clustering, the data points are grouped based on their distance from the cluster centers. K-means [32] is one of the most popular clustering algorithms based on Euclidean distance. It is a hard clustering algorithm, which is sensitive to the initial cluster centers and membership values [28]. Fuzzy C-Means (FCM) [7] is a soft clustering algorithm and produces better results than K-means, at the cost of increased iterations. The main disadvantage of both algorithms is that the number of clusters must be specified beforehand. In HSI, knowing the number of segments may not always be possible.
Determining the number of clusters in HSI automatically is a challenging task. Recently, researchers have started exploring methods to automatically detect cluster numbers in problems like image segmentation, data segmentation, and others. Very little work has been done on determining the number of clusters in HSI.
Clustering is considered a nondeterministic hard (NP-hard) optimization problem [4]. Metaheuristics are found to be useful for solving NP-hard problems efficiently: they take reasonable time and produce near-optimal solutions. Hence, many metaheuristic algorithms have been introduced in the literature for solving clustering problems. Metaheuristic algorithms are stochastic in nature and easy to implement [14]. They are mostly inspired by natural phenomena like the swarming of birds [42], the foraging behavior of ant colonies [12], and others. Genetic Algorithm (GA) [20], Particle Swarm Optimization (PSO) [42], Ant Colony Optimization [12], and Differential Evolution (DE) [44] are a few well-known metaheuristic algorithms.
In recent years, quantum computing has drawn the attention of many researchers. It was originally conceptualized by Richard Feynman [16]. Quantum phenomena like superposition, entanglement, and interference can enhance the computing capability of an algorithm exponentially [34]. Researchers have explored these ideas to embed the basic principles of quantum computing into metaheuristic algorithms [40], giving rise to a new category of metaheuristics called quantum-inspired metaheuristics.
The main motivation of this work is to develop a fast and robust automatic clustering algorithm for HSI. Elephant Herd Optimization (EHO) [47] is a comparatively new metaheuristic algorithm based on the clan formation habit of elephants. The simplicity of the algorithm and its good exploration ability have inspired the development of its qubit and qutrit versions, called the Qubit Elephant Herd Optimization (QubEHO) and the Qutrit Elephant Herd Optimization (QutEHO), respectively. The exploitation capability of a metaheuristic refers to its ability to refine the solution space. The EHO algorithm [47] is weak in this respect, but the quantum versions can easily achieve it: the parallel computing capability of a qubit or qutrit enhances the exploitation property of the EHO algorithm [47]. Moreover, QubEHO and QutEHO are found to exhibit higher convergence speeds.
In this work, the Band Selection Convolutional Neural Network (BS-NET-Conv) [8] is used in the pre-processing stage to reduce the number of bands in the HSI. The Xie-Beni Index (XB-Index) [49] is used as the fitness function to detect the optimal number of clusters, and the Fuzzy C-means [7] algorithm is used to determine the clusters.
The main contributions of this work are as follows:
• Two algorithms, viz. the qubit and qutrit versions of Elephant Herd Optimization, are devised for detecting the optimal number of clusters in HSI.
• An algorithm for a qubit-based quantum rotation gate implementation that brings diversity into the population.
• An algorithm for a qutrit-based quantum rotation gate implementation that brings diversity into the population.
The chapter is organized as follows: Section 8.2 contains a brief literature survey of the methods used. The important background concepts are discussed in Section 8.3. Section 8.4 contains the details of the proposed methodology. The experimental results and their analysis are provided in Section 8.5. A brief conclusion is drawn in Section 8.6.
Swarm intelligence has proved to be a powerful tool for optimization. Particle Swarm Optimization [42], Ant Colony Optimization [12], Cuckoo Search [50], Harris Hawks Optimization [19], Border Collie Optimization [14], and Elephant Herd Optimization [47] are a few well-known swarm intelligence algorithms. EHO [47], owing to its easy implementation and good performance, has drawn the attention of many researchers. The algorithm has good exploration capability, but its exploitation of the search space is less efficient, which also leads to slower convergence. Hence, many enhanced and hybrid versions of the EHO algorithm have been researched [23] to overcome these disadvantages. In [23], three different enhanced versions of the EHO algorithm were proposed to address these deficiencies. Li et al. [29] presented a detailed study of the different variants of the EHO algorithm published so far, along with their features.
Researchers working on metaheuristic algorithms have been captivated by the idea of designing algorithms with quantum advantage. These are also known as quantum-inspired algorithms, as they are inspired by the principles of quantum computing but are simulated on classical computers. Narayanan and Moore were the first to conceptualize a quantum-inspired evolutionary algorithm, for solving the Travelling Salesman Problem [35]. In [18], a Quantum-Inspired Evolutionary Algorithm was proposed with better population diversity and the concept of a look-up table for the application of rotation gates. Recently, many researchers have developed quantum-inspired versions of metaheuristic algorithms, such as the Improved Bloch Quantum Artificial Bee Colony algorithm proposed in [22], which involves a complicated Bloch sphere representation. Very little work has been done on multilevel quantum systems. A qutrit-based Genetic Algorithm was proposed in [46]. However, hardly any work has been done yet on developing a multilevel quantum-based EHO algorithm.
In Eqn. (8.1), $cl_i$ is the $i$th clan, $E_{best_{cl_i}}$ represents the updated position of the matriarch in the $cl_i$th clan, and $\beta$ is a factor that dictates the influence of $E_{c,cl_i}$ on the new $E_{best_{new},cl_i,j}$. The value of $\beta$ lies in $[0, 1]$. $E_{c,cl_i}$ is the center of clan $cl_i$, calculated with the help of the following equation.
$$E_{c,cl_i,d} = \frac{1}{n_{cl_i}} \times \sum_{j=1}^{n_{cl_i}} E_{cl_i,j,d} \qquad (8.2)$$
In Eqn. (8.2), $d$ is the $d$th dimension and $D$ is the total number of dimensions; $n_{cl_i}$ is the number of elephants in the $cl_i$th clan. Eqn. (8.2) is used to calculate the center of a clan.
The next positions of all the other elephants ($j$) are calculated using the following equation:
$$E_{new,cl_i,j} = E_{cl_i,j} + \alpha \times \left(E_{best_{cl_i}} - E_{cl_i,j}\right) \times r \qquad (8.3)$$
In Eqn. (8.3), $\alpha$ is the influence that the matriarch has on the other elephants of the clan; its value lies in $[0, 1]$. $r$ is a random number in $[0, 1]$.
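As a minimal illustration of these update rules, the following Python sketch applies the clan-center computation of Eqn. (8.2), the follower update of Eqn. (8.3), and a matriarch update in the spirit of Eqn. (8.1) to a toy clan; the array shapes, parameter values, and quadratic objective are illustrative assumptions, not the chapter's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def clan_update(clan, best_idx, alpha=0.5, beta=0.1):
    """One EHO-style clan update (a sketch of Eqns. (8.1)-(8.3)).

    clan     : (n, D) array of elephant positions in one clan
    best_idx : index of the matriarch (best elephant) in the clan
    """
    n, D = clan.shape
    center = clan.mean(axis=0)                 # Eqn. (8.2): center of the clan
    best = clan[best_idx].copy()
    # Eqn. (8.3): every elephant moves toward the matriarch with a random step r
    new = clan + alpha * (best - clan) * rng.random((n, D))
    # Matriarch update in the spirit of Eqn. (8.1): influenced by the clan center
    new[best_idx] = beta * center
    return new

# Toy usage: one clan of 5 elephants in 3 dimensions with a sphere objective
clan = rng.random((5, 3))
fitness = lambda x: (x ** 2).sum(axis=1)       # illustrative objective (minimized)
best_idx = int(np.argmin(fitness(clan)))
print(clan_update(clan, best_idx))
```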
Quantum computation, unlike classical computation, is not restricted to two-level states: a quantum system can be $D$-dimensional. This generalized quantum bit is called a qudit [3]. Hence, a register of $n$ qudits can exist in the following state:
$$|\psi\rangle = \sum_{i=0}^{D^n - 1} q_i |i\rangle \qquad (8.8)$$
A qutrit is the three-valued quantum state. It has three basis states, viz., |0⟩, |1⟩ and
|2⟩. The superposition of a qutrit state can be expressed as follows:
|ψ ⟩ = q0 |0⟩ + q1 |1⟩ + q2 |2⟩ (8.9)
The normalization of a qutrit state is expressed as follows:
|q0 |2 + |q1 |2 + |q2 |2 = 1 (8.10)
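For concreteness, a quantum-inspired qutrit can be simulated classically by a normalized triple of amplitudes. The sketch below (using real, non-negative amplitudes, a common simplification in quantum-inspired metaheuristics) enforces the normalization of Eqn. (8.10) and samples a basis state according to the squared amplitudes.

```python
import numpy as np

rng = np.random.default_rng(1)

# A qutrit |psi> = q0|0> + q1|1> + q2|2> with real, non-negative amplitudes
q = rng.random(3)
q = q / np.linalg.norm(q)          # enforce Eqn. (8.10): the |q_i|^2 sum to 1

probs = q ** 2                     # measurement probabilities of |0>, |1>, |2>
outcome = rng.choice(3, p=probs)   # observing collapses the qutrit to a basis state
print("amplitudes:", q, "-> observed basis state:", outcome)
```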
The process is run for a number of iterations until a minimum value for Eqn. (8.11)
is obtained.
$$M_m = \frac{1}{d} \sum_{i=1}^{k} \sum_{j=1}^{d} u_{ij}^m \, d^2(P_j, k_i) \qquad (8.15)$$
and the minimum distance between two cluster centers is given by the following equation:
$$d_{min} = \min_{i,j} d^2(k_i, k_j) \qquad (8.16)$$
The XB-Index [49] is a minimization function, and the minimum value of Eqn. (8.14) represents the optimal solution.
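Since Eqn. (8.14) itself is not reproduced in this excerpt, the sketch below follows the standard Xie-Beni form, i.e., the compactness of Eqn. (8.15) divided by the minimum center separation of Eqn. (8.16); the data, centers, and fuzzy memberships are random placeholders.

```python
import numpy as np

def xb_index(X, centers, U, m=2.0):
    """Xie-Beni index sketch: fuzzy compactness (cf. Eqn. (8.15)) divided by
    the minimum squared center separation (cf. Eqn. (8.16)); lower is better.

    X       : (n, f) data points
    centers : (k, f) cluster centers
    U       : (k, n) fuzzy membership matrix (columns sum to 1)
    """
    n = X.shape[0]
    d2 = ((X[None, :, :] - centers[:, None, :]) ** 2).sum(-1)   # (k, n) squared dists
    compact = (U ** m * d2).sum() / n
    k = centers.shape[0]
    sep = min(((centers[i] - centers[j]) ** 2).sum()
              for i in range(k) for j in range(k) if i != j)
    return compact / sep

# Toy usage with random data, centers, and memberships
rng = np.random.default_rng(2)
X = rng.random((100, 4)); C = rng.random((3, 4))
U = rng.random((3, 100)); U /= U.sum(axis=0)
print(xb_index(X, C, U))
```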
Each qubit and qutrit of the quantum population $QP$ is initialized in an equal superposition state:
$$|QP\rangle = \frac{1}{\sqrt{2}}|0\rangle + \frac{1}{\sqrt{2}}|1\rangle \qquad (8.17)$$
$$|QP\rangle = \frac{1}{\sqrt{3}}|0\rangle + \frac{1}{\sqrt{3}}|1\rangle + \frac{1}{\sqrt{3}}|2\rangle \qquad (8.18)$$
Their corresponding basis-state predictions (CR) are obtained using Algorithm 2 for qubits or Algorithm 3 for qutrits. Algorithm 4 for qubits and Algorithm 5 for qutrits are proposed for rotating the quantum states without the help of look-up tables. The implementation of rotation gates brings diversity into the population. Each qubit or qutrit represents an individual elephant. A fixed number of clans is considered, and the population is scattered randomly among the clans. Then, a random number of zeros ($z$) is introduced in CR; the number of zeros represents the number of clusters to be considered. The QP value for which CR is true is taken as a cluster center. The $z$ clusters are calculated using the FCM algorithm [7]. The XB-Index [49] is used as the fitness function to check the optimality of the number of clusters obtained.
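Algorithms 2-5 are not reproduced in this excerpt; the following schematic only illustrates the observation step as described above, where each angle-encoded qubit is observed to produce an entry of CR, and the QP values marked true become candidate cluster centers. The variable names and the convention that CR = 1 marks a selected center are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

L = 10                                  # number of candidate cluster centers (toy)
theta = rng.uniform(0, np.pi / 2, L)    # each qubit stored as an angle (cos t, sin t)
QP = rng.random(L)                      # candidate center values (illustrative)

# Basis-state prediction: CR[j] = 1 with probability sin(theta_j)^2
CR = (rng.random(L) < np.sin(theta) ** 2).astype(int)

z = int(CR.sum())                       # number of selected centers this observation
centers = QP[CR == 1]                   # QP values where CR is true become centers
print("z =", z, "initial centers:", centers)
```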
Subsequently, the clan updating operation is performed. The matriarch of each clan is identified, and the QP value for which CR is true is updated using Eqn. (8.1). The other elephants of the clan are updated similarly using Eqn. (8.3). The separation operation is executed on the least fit elephant using Eqn. (8.4). The whole process is run for $G_e$ generations. In Figure 8.2, the basic operations, viz. formation of the clan, separation of a male elephant from the clan, and how the other elephants follow the matriarch, can be visualized. The use of quantum bits helps to search the solution space faster and more efficiently, as visualized in Figure 8.1: for a single qubit, two different solution points are considered, and similarly, for a single qutrit, three different solution points are taken. Hence, it enhances the speed of the algorithm, producing more robust results.
Figure 8.2: Elephant Herd Optimization [47] - clan formation, clan members following the matriarch, and separation from the clan (elephant picture from [1]).
8.5.3 ANALYSIS
The proposed QubEHO and QutEHO algorithms are compared with the classical EHO [47] algorithm. To obtain an impartial analysis, all the algorithms need to be evaluated under the same settings. Hence, all the algorithms were run 50 times for 100 iterations each. The algorithms were executed in MATLAB R2019a on an Intel(R) Core(TM) i7-8700 processor in a Windows 10 environment.
The mean, standard deviation, and best convergence time of all three algorithms are presented in Table 8.1. It is observed that the proposed QutEHO algorithm arrives at optimal results in negligible time compared to the other two algorithms; the classical EHO [47] algorithm takes the longest to converge.
In Table 8.2, the optimal cluster numbers (CL) obtained for EHO [47], QubEHO, and QutEHO are reported, along with their corresponding fitness values (FV). The Salinas dataset [2] has 16 classes.
Table 8.1
Mean, Standard Deviation (Std) and the Best Reported Time for EHO [47],
QubEHO, and QutEHO
Parameters EHO [47] QubEHO QutEHO
Mean 0.2032 0.0725 0.0389
Std 0.1394 0.0124 0.0219
Time 181.9412 24.3839 3.3035
Table 8.2
Some Cluster Numbers and Best Fitness Values Obtained for EHO [47],
QubEHO and QutEHO.
Methods EHO [47] QubEHO QutEHO
Sr No CL FV CL FV CL FV
1. 4 0.0685 6 0.0733 10 0.0129
2. 4 0.0828 5 0.0693 9 0.0143
3. 4 0.0835 5 0.0787 9 0.0144
4. 5 0.0835 5 0.0660 7 0.0182
5. 4 0.0860 5 0.0614 7 0.0182
Table 8.3
Normalized F-Score [31] for EHO [47], QubEHO and QutEHO
Process EHO [47] QubEHO QutEHO
0.1998 0.0038 0.0016
The most optimal results are produced by the QutEHO algorithm, followed by the QubEHO algorithm. The classical EHO [47] yields a comparatively smaller number of classes. In a real-life scenario, where the ground truth cannot be obtained and the number of classes cannot be estimated, the proposed algorithms, especially the qutrit version, can be both beneficial and time efficient.
To judge the quality of the segmented images, the normalized F score is used [31].
The F score is evaluated with the help of the following equation.
$$F(SI) = \frac{1}{1000(v \times h)} \sqrt{r} \sum_{i=1}^{r} \frac{e_i^2}{\sqrt{A_i}} \qquad (8.19)$$
Here, $SI$ is the final segmented image, and $v \times h$ is the dimension of the image. The number of regions is designated by $r$; $A_i$ and $e_i$ stand for the area and the average color error of the $i$th region, respectively. The results are normalized for easy representation. The results of all three methods, viz. EHO [47], QubEHO, and QutEHO, are presented in Table 8.3; the proposed algorithms produce better results than the classical algorithm. A few segmented images, along with the ground truth image and the pre-processed image, are presented in Figure 8.3; the segmented images using EHO [47], QubEHO, and QutEHO have four, five, and nine clusters, respectively.
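A possible computation of Eqn. (8.19) is sketched below, taking $e_i^2$ as the summed squared deviation of the region's gray values from the region mean; the random image and two-region labeling are placeholders, not the chapter's data.

```python
import numpy as np

def evaluation_f(image, labels):
    """Evaluation function in the style of Eqn. (8.19) [31].

    image  : (v, h) gray-scale image
    labels : (v, h) integer region labels of the segmentation SI
    """
    v, h = image.shape
    regions = np.unique(labels)
    r = len(regions)
    total = 0.0
    for reg in regions:
        mask = labels == reg
        A_i = mask.sum()                                        # area of region i
        e2_i = ((image[mask] - image[mask].mean()) ** 2).sum()  # squared color error
        total += e2_i / np.sqrt(A_i)
    return np.sqrt(r) * total / (1000.0 * v * h)

# Toy usage: a crude 2-region segmentation of a random image
rng = np.random.default_rng(4)
img = rng.integers(0, 256, (32, 32)).astype(float)
lab = (img > 128).astype(int)
print(evaluation_f(img, lab))
```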
Another statistical test, the Kruskal-Wallis test [27], is applied to check the null hypothesis at the 1% significance level. The p-value obtained is less than 0.001, indicating that the results are highly significant. Hence, the null hypothesis that the results of all three methods belong to the same distribution stands rejected.
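With SciPy available, the test can be reproduced along the following lines; the three samples below are synthetic stand-ins, since the chapter's 50-run fitness samples are not reproduced here.

```python
import numpy as np
from scipy import stats

# Fitness samples from repeated runs of the three methods (illustrative values,
# roughly matching the means/stds of Table 8.1; not the chapter's actual data)
rng = np.random.default_rng(5)
eho    = 0.20 + 0.14 * rng.random(50)
qubeho = 0.07 + 0.01 * rng.random(50)
quteho = 0.04 + 0.02 * rng.random(50)

H, p = stats.kruskal(eho, qubeho, quteho)   # Kruskal-Wallis H-test [27]
print(f"H = {H:.3f}, p = {p:.3e}, reject H0 at 1%: {p < 0.01}")
```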
Figure 8.3: (a) Ground Truth Image of Salinas Dataset [2], (b) Pre-processed Im-
age using BSCNN [8], (c)-(e) Clustered Images using EHO [47], QubEHO, and
QutEHO.
The results are presented in Table 8.4. The box plot of the test is given in Figure 8.4. The convergence curves of all the participating methods are presented in Figure 8.5, from which it can be visually observed that the proposed algorithm converges faster and to better values than EHO [47] and QubEHO.
Table 8.4
Kruskal-Wallis Test [27]
Test p-value Significance
Kruskal-Wallis Test 2.8166e-25 Highly Significant
Figure 8.4: Box-Plot of Kruskal-Wallis Test [27] for EHO [47], QubEHO, and
QutEHO.
Figure 8.5: Convergence curves (fitness value, log scale, vs. iterations) for EHO [47], QubEHO, and QutEHO.
8.6 CONCLUSION
In this chapter, qubit- and qutrit-based Elephant Herd Optimization algorithms are proposed for the automatic clustering of hyperspectral images. The modified rotation gate operation enhances the diversity of the population. The exploration and exploitation capabilities of the classical EHO algorithm are enhanced, with faster convergence. As automatic cluster detection is a tedious task in HSI processing, these algorithms can be highly beneficial in real-life scenarios. The results indicate that the proposed QubEHO and QutEHO produce better results than the classical version of EHO, and they produce better clustering outcomes when their comparative F scores are considered. The statistical tests also establish the efficiency of the proposed algorithms. As a future direction, a qudit version of the EHO algorithm can be developed.
REFERENCES
1. Elephant clipart. Free download transparent .PNG | Creazilla, Jul 2021. [Online; accessed
17. Jul. 2021].
2. Hyperspectral Remote Sensing Scenes - Grupo de Inteligencia Computacional (GIC), Jul
2021. [Online; accessed 17. Jul. 2021].
3. Qudits | Cirq | Google Quantum AI, Jul 2021. [Online; accessed 16. Jul. 2021].
4. Laith Abualigah, Amir H. Gandomi, Mohamed Abd Elaziz, Husam Al Hamad, Mahmoud
Omari, Mohammad Alshinwan, and Ahmad M. Khasawneh. Advances in meta-heuristic
optimization algorithms in big data text clustering. Electronics, 10(2), 2021.
5. P. Azimpour, R. Shad, M. Ghaemi, and H. Etemadfard. Hyperspectral image clus-
tering with albedo recovery fuzzy c-means. International Journal of Remote Sensing,
41(16):6117–6134, 2020.
6. Frank B. Baker and Lawrence J. Hubert. Measuring the power of hierarchical cluster
analysis. Journal of the American Statistical Association, 70(349):31–38, 1975.
7. J. C. Bezdek, R. Ehrlich, and W. Full. FCM: The fuzzy c-means clustering algorithm. Computers & Geosciences, 10(2):191–203, 1984.
8. Y. Cai, X. Liu, and Z. Cai. Bs-nets: An end-to-end framework for band selection of hyper-
spectral image. IEEE Transactions on Geoscience and Remote Sensing, 58(3):1969–1984,
2020.
9. T. Caliński and J. Harabasz. A dendrite method for cluster analysis. Communications in Statistics – Theory and Methods, 3(1):1–27, 1974.
10. Mulin Chen, Qi Wang, and Xuelong Li. Discriminant analysis with graph learning for
hyperspectral image classification. Remote Sensing, 10(6), 2018.
11. Samiran Das, Shubhobrata Bhattacharya, Aurobinda Routray, and Alok Kani Deb. Band
selection of hyperspectral image by sparse manifold clustering. IET Image Processing,
13(10):1625–1635, 2019.
12. M. Dorigo, V. Maniezzo, and A. Colorni. Ant system: optimization by a colony of cooper-
ating agents. IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics),
26(1):29–41, 1996.
13. J. C. Dunn. A fuzzy relative of the isodata process and its use in detecting compact well-
separated clusters. Journal of Cybernetics, 3(3):32–57, 1973.
14. Tulika Dutta, Siddhartha Bhattacharyya, Sandip Dey, and Jan Platos. Border collie opti-
mization. IEEE Access, 8:109177–109197, 2020.
51. Hongyan Zhang, Han Zhai, Liangpei Zhang, and Pingxiang Li. Spectral–spatial sparse subspace clustering for hyperspectral remote sensing images. IEEE Transactions on Geoscience and Remote Sensing, 54(6):3672–3684, 2016.
52. Yang Zhao, Yuan Yuan, Feiping Nie, and Qi Wang. Spectral clustering based on iterative
optimization for large-scale and high-dimensional data. Neurocomputing, 318:227–235,
2018.
53. Yang Zhao, Yuan Yuan, and Qi Wang. Fast spectral clustering for unsupervised hyperspec-
tral image classification. Remote Sensing, 11(4), 2019.
54. Yanfei Zhong, Liangpei Zhang, and Wei Gong. Unsupervised remote sensing image clas-
sification using an artificial immune network. International Journal of Remote Sensing,
32(19):5461–5483, 2011.
9 Toward Quantum-Inspired
SSA for Solving
Multiobjective
Optimization Problems
9.1 INTRODUCTION
$$\text{minimize } f(x) = \left(f_1(x), f_2(x), \dots, f_n(x)\right)$$
$$\text{subject to } x \in X, \quad n \ge 2,$$
where the set $X$ includes all the feasible solutions
is called the estimated Pareto-optimal set (PE) [1]. The set of vectors in the search space that corresponds to the PE is referred to as the actual Pareto-optimal set (PA) [2][3]. The essence of solving a multiobjective optimization problem is identifying a set of well-distributed optimal solutions along the Pareto-optimal front in the search space.
Today, metaheuristic algorithms are a popular approach for solving complex optimization problems in science and engineering disciplines. Optimization is an intelligent method of exploring for the best solution among all obtainable ones for a specific problem [4][5]. The conventional approach to a multiobjective optimization problem is to scalarize the vector of objectives into a single objective by averaging the values returned by the objective functions with a weight vector. Converting an MOP into a single-objective optimization problem allows a single-objective algorithm to be used straightforwardly. Still, the obtained optimal solution depends primarily on the weight vector used during the scalarization process. Moreover, a decision-maker has to know the problem in advance and provide a weight for each objective.
Furthermore, the decision-maker would be more interested in learning of alternate solutions if obtainable. Algorithms tied to such predefined assumptions are ineffective in solving today's compound optimization problems. Solving a multiobjective optimization problem is one such case, where a set of uniformly distributed solutions is required along the Pareto-optimal front in the search space.
A swarm-based optimization algorithm is a metaheuristic that works with a group of solutions and attempts to approach the optimal solution in each iteration. It is often inspired by the sociality of biological groups and has been widely applied to many real-world optimization problems to overcome the restrictions of traditional optimization approaches. Such an intelligent system has the advantage of being workable and straightforward for different optimization problems. Additionally, it has a natural tendency to work with a problem represented as a population of candidate solutions. Moreover, it comes with the benefits of inherent movement, exploration, and exploitation, which lessen the possibility of entrapment in local optima. A number of Pareto-optimal sets can be obtained using a swarm-based optimization method for a multiobjective optimization problem.
The Salp Swarm Algorithm (SSA) is a new swarm-based metaheuristic method that imitates the flocking behavior of salps in the ocean, where they form a chain. The SSA is similar to other evolutionary algorithms in many respects, and it works proficiently on numerous real-world optimization problems. The flocking behavior of SSA can, to some extent, prevent each solution from being trapped in local optima, owing to the salp-chain mechanism. However, there are optimization problems where SSA cannot obtain a solution and is easily trapped in a local or deceptive optimum. The multiobjective optimization problem is one such case. The difficulty for SSA lies mainly in the lack of a good search strategy for the multiobjective optimization problem. The original design of SSA saves only one solution and updates the positions based on the food source to obtain the best solution. However, there is no single best solution for MOPs; a set of well-distributed optimal solutions along the Pareto-optimal front in the search space must be obtained. Thus, the original design of the SSA algorithm needs to be modified so that the overall search process balances exploration and exploitation to achieve the expected results.
The quantum-inspired algorithm is a new branch of study in the area of evolutionary computation. It is characterized by particular principles of quantum physics such as uncertainty, superposition, and interference. Quantum-inspired algorithms designed for classical computers encode the solutions in a quantum representation. The principles of quantum computing offer better diversity during the optimization process, and a quantum search strategy intelligently guides the individuals toward the global optima, significantly improving convergence speed and solution efficiency. The diversity derives from the representation model of the population: a probabilistic model of a linear superposition of states has better characteristics for generating diversity in the population. Maintaining good diversity in the population increases the search ability of the algorithm and resolves search stagnation. Sun et al. presented Quantum-behaved Particle Swarm Optimization (QPSO) to improve the performance of PSO, including good convergence and global search ability [6]. The approach was shown to discover reasonably optimal solutions in the search space, and the experimental results indicate that QPSO works better than PSO and is a promising approach. Hence, in this study, we propose to bring quantum inspiration to the standard SSA, for the same reasons as QPSO, for multiobjective optimization problems, and to compare it with MSSA and NSGA-II.
This article introduces the Multiobjective Quantum-inspired Salp Swarm Algorithm (MQSSA) to improve the overall performance of SSA. The approach is a hybrid of two paradigms: SSA and quantum computing. Besides many other essential properties, this model can find a suitable solution faster using fewer individuals. The approach dramatically reduces the required number of evaluations, which is a predominant factor affecting the solution of optimization problems. The Delta potential-well model (DPWM) representation of SSA used in this paper improves the convergence speed over the traditional SSA. It maintains the population's diversity, preventing the population from stagnating in deceptive optima and increasing the algorithm's search ability. According to the DPWM, if an individual salp in the SSA algorithm has quantum behavior, the algorithm is bound to work differently due to Heisenberg's uncertainty principle of quantum physics. To the best of our knowledge, a quantum-inspired approach to improving the performance of SSA for multiobjective optimization problems is introduced here for the first time. In this new approach, a simplified representation with DPWM integration is introduced for MOPs to improve the original SSA's performance, making the algorithm easier to understand and implement. The proposed algorithm's performance is evaluated on complex benchmark problems from the multiobjective domain, and a comparative study is performed against the well-regarded algorithms MSSA and NSGA-II.
The rest of this paper is structured as follows: the standard Salp Swarm Algorithm (SSA) is presented in the next section, followed by the proposed algorithm with the Delta potential-well model.
Figure 9.1 shows a salp chain created by a group of salps in the deep ocean. According to the research, this chain follows a leader-and-follower pattern, where the leader's responsibility is to direct the chain toward the food source and the followers align with the leader in turn. For more details, refer to [7]. The mathematical representation and implementation of this chain divide the salp population into the front salp as the leader, whose position is updated based on the target, and the rear salps as followers, which update their positions to align with adjacent individuals. The complete procedure is divided into three parts: initialization, defining leaders, and updating salp positions.
9.2.1 INITIALIZATION
In equation (9.1), F represents the objective function of the minimization problem:
$$x_j^1 = \begin{cases} F_j + c_1\left((ub_j - lb_j)\,c_2 + lb_j\right), & c_3 \ge 0 \\ F_j - c_1\left((ub_j - lb_j)\,c_2 + lb_j\right), & c_3 < 0 \end{cases} \qquad (9.3)$$
$$x_j^i = \frac{1}{2}\left(x_j^i + x_j^{i-1}\right) \qquad (9.4)$$
$$c_1 = 2e^{-\left(\frac{4t}{T}\right)^2} \qquad (9.5)$$
It is important to note that the leader-position update of equation (9.3), whose update direction is determined by $c_3$, relates only to moving the individual toward the global optimum; it has nothing to do with the historical positions of the salps. Equation (9.4), however, updates the followers' positions through a mechanism of adjacent salps, where $x_j^i$ is the $j$th variable of the $i$th individual and $x_j^{i-1}$ is the $j$th variable of the adjacent $(i-1)$th individual.
$$B_l = (\beta + (1 - \beta)) \qquad (9.8)$$
$$A_d = \frac{r_{1d} \cdot X_{jk} + r_{2d} \cdot Leader_j}{r_{1d} + r_{2d}} \qquad (9.9)$$
where $r_{1d}$ and $r_{2d}$ are random numbers in the range $[0, 1]$, and $Leader_j$ is the leader position, representing the best location found.
$$BestMean_l = \frac{1}{N} \sum_{j=1}^{d} lead_j(l) \qquad (9.10)$$
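A minimal sketch of the resulting quantum-behaved position update, in the spirit of QPSO's Delta potential-well sampling [6] and of Eqns. (9.9)-(9.10), is given below; the value of $\beta$, the personal-best bookkeeping, and the exact combination of attractor and mean-best position are assumptions, not the chapter's exact MQSSA equations.

```python
import numpy as np

rng = np.random.default_rng(6)

def dpwm_update(X, pbest, gbest, beta=0.75):
    """Quantum-behaved (Delta potential-well) position update, QPSO-style [6].

    X, pbest : (N, D) current positions and personal-best positions
    gbest    : (D,) best location found so far (the leader / food source)
    """
    N, D = X.shape
    r1, r2 = rng.random((N, D)), rng.random((N, D))
    attract = (r1 * pbest + r2 * gbest) / (r1 + r2)  # local attractor, cf. Eqn. (9.9)
    mbest = pbest.mean(axis=0)                       # mean best position, cf. Eqn. (9.10)
    u = 1.0 - rng.random((N, D))                     # uniform in (0, 1]
    sign = np.where(rng.random((N, D)) < 0.5, 1.0, -1.0)
    # Delta potential-well sampling around the attractor
    return attract + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)

# Toy usage: 5 salps in 2 dimensions, leader taken as the first individual
X = rng.random((5, 2))
print(dpwm_update(X, X.copy(), X[0]))
```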
The non-dominated solutions are compared with those available in the repository, so that only non-dominated solutions are kept. After that, solutions are also deleted from the crowded region to maintain the number of solutions; for this, the solutions are first ranked and then selected using the roulette wheel technique.
Step 10 executes only once, at the first iteration, and generates a salp chain in which the population is divided into leaders and followers. After that, this mechanism is used in step 8, followed by evaluating the solutions for coverage and updating the search boundary. The process repeats for MAX generations, and each generation produces Pareto-optimal solutions, i.e., a set of uniformly distributed solutions in the search space along the estimated Pareto-optimal front. At step 8, the interference process and the proposed quantum-based equations for SSA are executed to generate new solutions. This process consists of updating the individual positions using equation (9.7), calculating the contraction-expansion coefficient, and evaluating the converging points and the best mean along with the fitness of the salps. At step 7, the non-dominated solutions are put into the archive if the archive has empty space, i.e., it is not full. At this stage, the non-dominated solutions are compared with the solutions available in the repository so that only non-dominated solutions are kept. After that, at step 11, the CoverageSelection() function is used to delete solutions from the crowded region to maintain the number of solutions; for this, all the solutions are first ranked and then selected using an intelligent roulette wheel technique. At step 14, the algorithm returns the obtained Pareto-optimal set as the best solution, along with the first approximation.
Table 9.1
Benchmark Problems Used in This Study
Table 9.2
Statistical Results of Pareto Sets Proximity (PSP)
1. Population size: 60.
2. Maximum number of generations: 1000 for all the test functions.
3. The same number of function evaluations is used for all the test problems.
4. For NSGA-II, crossover probability = 0.7, mutation percentage = 0.4, and mutation rate mu = 0.02.
The average results of 30 independent runs, i.e., the mean and standard deviation of the PSP and IGD metrics, are summarized in Tables 9.2 and 9.3. The mean indicates how MQSSA performs on average, and the standard deviation shows how stable it is across all the runs. The experimental results demonstrate that the overall performance of MQSSA is competitive with the other approaches.
Table 9.3
Statistical Results of Inverted Generational Distance (IGD)
optimization algorithms in this domain are selected for result endorsement and for the comparative study: MSSA and NSGA-II.
Tables 9.2 and 9.3 indicate that the MQSSA algorithm performs remarkably better than MSSA and NSGA-II on most of the ZDT functions. Further, the Pareto fronts obtained in Figures 9.4–9.6 indicate that MQSSA shows superior convergence to MSSA and NSGA-II on the ZDT benchmark functions. The solutions obtained by the MQSSA algorithm are uniformly well distributed, which means its coverage is high. A gap midway along the Pareto-optimal sets obtained by MSSA and NSGA-II on some of the ZDT problems shows how the coverage of those algorithms is negatively impacted. The SCH1 and ZDT1 functions have convex-shaped Pareto-optimal fronts, so the coverage and convergence propensities of the algorithms can be benchmarked on them. The Pareto-optimal sets obtained in Figures 9.3 and 9.4 indicate that MQSSA again performs better than the MSSA and NSGA-II algorithms. The coverage of NSGA-II is deficient on the ZDT2 function.
The convergence and coverage of MQSSA also appear better than those of MSSA. Further, it can be observed that the MSSA algorithm leaves a gap in the obtained Pareto-optimal set, although the obtained solutions are well distributed over the rest of the true Pareto-optimal set. Additionally, the problems ZDT2 and ZDT4 have concave-shaped Pareto-optimal fronts, which are invariably difficult for algorithms designed on an aggregation approach. However, the obtained results indicate that MQSSA can efficiently approximate the true fronts of these functions with exceedingly good coverage and convergence. Comparing MQSSA and NSGA-II in Figures 9.4–9.6, the results indicate that the coverage and convergence of MQSSA are better.
As the analysis of Tables 9.2 and 9.3 and Figure 9.6 shows, the ZDT3 function has a Pareto-optimal set with isolated regions. This kind of Pareto-optimal set is common in real-world optimization problems, and it is very difficult for algorithms to obtain the Pareto-optimal set of such problems: there is a high possibility that an algorithm fails to reach the Pareto-optimal set in all the separated areas and gets pinned down in one of the regions. The comparison on ZDT3 and the previously discussed ZDT problems indicates that the achievement of NSGA-II is comparatively low; its coverage is good, but its convergence is inferior, and the obtained Pareto-optimal set in several isolated regions is far away from the true Pareto-optimal front. In view of the Pareto-optimal fronts obtained by MQSSA and MSSA, it is evident that MQSSA performs comparatively better than MSSA with regard to coverage and convergence. These outcomes exemplify that MQSSA can successfully discover all the isolated regions of the Pareto-optimal front with well-distributed solutions in all the areas.
Figure 9.3: SCH1 Pareto front obtained by MQSSA, NSGA-II, and MSSA.
Figure 9.4: ZDT1 Pareto front obtained by MQSSA, NSGA-II, and MSSA.
The experimental results and the above discussion show that MQSSA can estimate concave- and convex-shaped Pareto-optimal fronts with reasonable coverage and convergence on four ZDT-series benchmark problems: ZDT1, ZDT2, ZDT3, and ZDT4.
Figure 9.5: ZDT2 Pareto front obtained by MQSSA, NSGA-II, and MSSA.
Figure 9.9: KUR Pareto front obtained by MQSSA, NSGA-II, and MSSA.
Further study on FON and KUR, which have nonconvex-shaped fronts, can benchmark the convergence and coverage of the algorithms. The analysis of Figures 9.8–9.9 shows that all three algorithms, MQSSA, MSSA, and NSGA-II, are unable to converge to KUR's Pareto-optimal front: most of the obtained solutions are far from it. However, the results in Tables 9.2–9.3 show that MQSSA performs slightly better. Inspecting the fronts of MQSSA, NSGA-II, and MSSA for FON shows that MQSSA and NSGA-II provide better convergence, although the results in Tables 9.2 and 9.3 show that NSGA-II performs slightly better.
These results exemplify that MQSSA successfully guides the salp chain toward separate regions of the true Pareto-optimal front and is competitive with the other approaches.
9.6 CONCLUSION
This chapter presents a novel Multiobjective Quantum-inspired Salp Swarm Algorithm (MQSSA) with a Delta potential-well model presentation, which is a better alternative than the binary presentation for multiobjective optimization problems. The proposed approach is evaluated on several multiobjective optimization benchmark problems having convex-shaped and concave-shaped Pareto-optimal fronts. It shows a favorable outcome, with better performance than other well-regarded algorithms in the multiobjective domain. Further investigations would be required to evaluate the robustness of MQSSA in solving different kinds of optimization problems.
Besides, the experimental study showed that the proposed algorithm, MQSSA:
1. has the ability to achieve the Pareto-optimal front for large-dimensional multiobjective optimization problems;
2. has excellent speed as compared to the traditional SSA;
3. maintains an appropriate balance between exploitation and exploration propensities;
4. exhibits an appropriate convergence rate and good coverage.
Future studies would include applying the proposed algorithm to more benchmark problems and to real-world optimization problems from different domains. It would also be interesting to study the relationships and differences between this algorithm and other optimization approaches, such as improved PSO variants and other swarm intelligence techniques.
REFERENCES
1. H. Li, K. Deb, and Q. Zhang, “Variable-length Pareto optimization via decomposition-
based evolutionary multiobjective algorithm,” IEEE Trans. Evol. Comput., vol. 23, no.
6, pp. 987–999, Dec. 2019.
2. K. Li, R. Chen, G. Fu, and X. Yao, “Two-archive evolutionary algorithm for constrained
multiobjective optimization,” IEEE Trans. Evol. Comput., vol. 23, no. 2, pp. 303–315,
Apr. 2019.
3. Y. Sun, B. Xue, M. Zhang, and G.G. Yen, “A new two-stage evolutionary algorithm for
many-objective optimization,” IEEE Trans. Evol. Comput., vol. 23, no. 5, pp. 748–761,
Oct. 2019.
4. X. Yang, “Review of meta-heuristics and generalized evolutionary walk algorithm”, Int.
J Bio-Inspir. Com., vol. 3, no. 2, pp. 77–84, 2011.
5. Z. Michalewicz, Genetic algorithms + data structures = evolution programs (2nd, ex-
tended ed.). New York, NY: Springer-Verlag New York, Inc., 1994.
6. J. Sun, B. Feng, and W. Xu, “Particle swarm optimization with particles having quantum
behavior,” in Proc. Congr. Evol. Comput., vol. 1, pp. 325–331, 2004.
7. S. Mirjalili, A.H. Gandomi, S.Z. Mirjalili, S. Saremi, H. Faris, S.M. Mirjalili, “Salp
swarm algorithm: A bio-inspired optimizer for engineering design problems,” Adv. Eng.
Softw., vol. 114, pp. 163–191, 2017.
8. B. Xiao, R. Wang, Y. Xu, J. Wang, W. Song, and Y. Deng, “Simplified Salp
Swarm Algorithm,” 2019 IEEE International Conference on Artificial Intelligence
and Computer Applications (ICAICA), Dalian, China, 2019, pp. 226–230, doi:
10.1109/ICAICA.2019.8873515.
9. B. Xiao, R. Wang, Y. Xu, et al. “Salp Swarm Algorithm based on Particle-best”, In: IEEE 3rd Information Technology, Networking, Electronic and Automation Control Conference (ITNEC). Chengdu, pp. 1383–1387, 2019.
10. S. Li, Y. Yu, D. Sugiyama, et al. “A Hybrid Salp Swarm Algorithm With Gravitational Search Mechanism,” In: 5th IEEE International Conference on Cloud Computing and Intelligence Systems (CCIS), Nanjing, pp. 257–261, 2018.
11. S. Ekinci and B. Hekimoglu, “Parameter optimization of power system stabilizer via
Salp Swarm algorithm,” In: 5th International Conference on Electrical and Electronic
Engineering (ICEEE), Istanbul, pp. 143–147, 2018.
12. Z. Xing and H. Jia. “Multilevel Color Image Segmentation Based on GLCM and Im-
proved Salp Swarm Algorithm,” IEEE Access, vol. 7, pp. 37672–37690, 2019.
13. H.M. Ridha, C. Gomes, H. Hizam, and S. Mirjalili. “Multiple scenarios multiobjective
salp swarm optimization for sizing of standalone photovoltaic system,” Renewable En-
ergy, Elsevier, vol. 153(C), pp. 1330–1345, 2020.
14. M. Clerc and J. Kennedy, “The Particle Swarm: Explosion, stability and convergence in
a multi-dimensional complex space”, IEEE Trans. Evol. Comput., 6: 58–73, 2002.
15. J. Liu, W. Xu, and J. Sun, “Quantum behaved particle swarm optimization with mu-
tation operator”, In: Proc. of IEEE International Conference on Tools with Artificial
Intelligence, pp. 240–244, 2005.
16. J. Sun et al, “A global search strategy of quantum-behaved particle swarm optimization”,
IEEE Conference on Cybernetics and Intelligent Systems, pp 111–116, 2004.
17. G. Li, L. Yan, and B. Qu, “Multiobjective particle swarm optimization based on Gaus-
sian sampling,” IEEE Access, vol. 8, pp. 209717–209737, 2020, doi: 10.1109/AC-
CESS.2020.3038497.
18. K. Deb, S. Agrawal, A. Pratap, and T. Meyarivan. A Fast Elitist Non-dominated
Sorting Genetic Algorithm for Multiobjective Optimization: NSGA-II. In: Schoe-
nauer M. et al. (eds) Parallel Problem Solving from Nature PPSN VI. PPSN 2000.
Lecture Notes in Computer Science, vol. 1917. Springer, Berlin, Heidelberg, 2000.
https://doi.org/10.1007/3-540-45356-3_83.
10 Quantum-Inspired
Multi-Objective NSGA-II
Algorithm for Automatic
Clustering of Gray Scale
Images
10.1 INTRODUCTION
Clustering [1][2][3] is a process of partitioning a heterogeneous dataset into groups of homogeneous data points or elements. Generating the appropriate number of clusters from a given dataset is a challenging task due to the lack of proper knowledge of the dataset. Several research works have addressed the problem of automatic clustering; some of them are presented in [4][5][6][7]. Clustering algorithms sometimes provide good results on one type of dataset but fail to do so on other types, which poses further challenges in the field of automatic clustering [12]. Although many automatic clustering algorithms exist [8][9][10][11], they have focused only on optimizing a single objective, whereas in real-world scenarios many problems have more than one objective that needs to be taken into consideration [13]. In this respect, the need for multi-objective optimization has been recognized, by which multiple, possibly conflicting, objectives can be tackled properly.
Nowadays, multi-objective optimization algorithms are becoming popular for
their capability of searching a highly complex search space. In the last decade,
researchers have developed many nature-inspired multi-objective optimization al-
gorithms, which include non-dominated sorting GA (NSGA-II) [14][15], Pareto
envelope-based selection algorithm (PESAII) [16], Strength Pareto Evolutionary Al-
gorithm (SPEA) [17], and its improved version, SPEA2 [18]. The overview and ap-
plicability of the multi-objective algorithms for clustering are presented by Maulik
et al. in [19]. Zhou et al. addressed the basic principles, advancements, and applica-
tions of multi-objective algorithms to solve several real-world optimization problems
in [20].
In recent years, the concepts of quantum computing [21] have been incorporated into evolutionary algorithms for effectively exploring the search space of multi-objective optimization problems. Quantum-inspired evolutionary algorithms have been developed to perform quasi-quantum operations on multiple states at the same time [21]. The state of a qubit can be represented as
$$|\Psi\rangle = \alpha|0\rangle + \beta|1\rangle \qquad (10.1)$$
where $\alpha$ and $\beta$ are complex numbers. The probability amplitudes $|\alpha|^2$ and $|\beta|^2$ specify the probabilities of the state being in $|0\rangle$ or $|1\rangle$, respectively. The superposition state $|\Psi\rangle$ can be realized as follows:
$$|\Psi\rangle = \begin{cases} |0\rangle, & \text{if } |\alpha|^2 > |\beta|^2 \\ |1\rangle, & \text{otherwise} \end{cases} \qquad (10.2)$$
where $|\alpha|^2$ and $|\beta|^2$ should always satisfy the following equation:
$$|\alpha|^2 + |\beta|^2 = 1 \qquad (10.3)$$
where the total number of data points ($D_i$) in the $i$th cluster of the data set $DS$ is represented by $N_i$. Now, the CSM index can be defined by the following equation, in which the data set $DS$ contains $N_C$ clusters:
" #
1
NC
1
NC ∑ |Ni | ∑ max Di f f (Di , Dmx )
i=1 Di ∈ DSi Dmx ∈DSi
CSM = (10.5)
1
NC
NC ∑ min Di f f (Ci ,C j )
i=1 j∈NC , j̸=i
" #
NC
1
∑ |Ni | ∑ max Di f f (Di , Dmx )
i=1 Di ∈ DSi Dmx ∈DSi
∴ CSM = (10.6)
NC
∑ min Di f f (Ci ,C j )
i=1 j∈NC , j̸=i
where the difference between any two data points $D_i$ and $D_j$ is defined as $Diff(D_i, D_j)$. The optimal result is achieved for the minimum value of the CSM index [7].
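The following Python sketch evaluates Eqn. (10.6) on labeled data, taking the Euclidean distance as $Diff(\cdot,\cdot)$; the data and labels below are random placeholders.

```python
import numpy as np

def csm_index(X, labels, centers):
    """CS measure sketch following Eqn. (10.6): mean maximal intra-cluster
    distance over summed minimal inter-center distance; lower is better.

    X       : (n, f) data points
    labels  : (n,) cluster assignments in {0..NC-1}
    centers : (NC, f) cluster centers
    """
    NC = centers.shape[0]
    num = 0.0
    for i in range(NC):
        pts = X[labels == i]
        # distance from each point to every point of the same cluster
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        num += d.max(axis=1).mean()          # (1/|N_i|) sum of max Diff values
    den = sum(min(np.linalg.norm(centers[i] - centers[j]) for j in range(NC) if j != i)
              for i in range(NC))
    return num / den

# Toy usage: 3 clusters of 20 points each
rng = np.random.default_rng(7)
X = rng.random((60, 2))
lab = np.repeat(np.arange(3), 20)
C = np.array([X[lab == i].mean(axis=0) for i in range(3)])
print(csm_index(X, lab, C))
```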
where the $i$th cluster center of a cluster $C_i$ is represented by $Z_i$, and the total number of objects is represented by $N_i$, with all $X_l^{(i)}$ belonging to $C_i$. The distance between clusters $C_i$ and $C_j$ is measured in terms of the cluster dissimilarity measure $D_{ij}$, which can be defined as follows:
$$D_{ij} = \left\| Z_i - Z_j \right\| \qquad (10.9)$$
Finally, the Davies-Bouldin (DB) index [30] is defined by the following equation:
$$DB = \frac{1}{N_c}\sum_{i=1}^{N_c} R_i \qquad (10.10)$$
where $R_i$ is defined in terms of a similarity measure $R_{ij}$ that satisfies the following conditions:
1. $R_{ij} \ge 0$
2. $R_{ij} = R_{ji}$
3. if $S_i = S_j = 0$ then $R_{ij} = 0$
4. if $S_j = S_k$ and $D_{ij} < D_{ik}$ then $R_{ij} > R_{ik}$
5. if $S_j > S_k$ and $D_{ij} = D_{ik}$ then $R_{ij} > R_{ik}$
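Since the definitions of $S_i$ and $R_i$ are not reproduced in this excerpt, the sketch below uses the standard Davies-Bouldin choices, $S_i$ as the mean distance of cluster members to their center and $R_i = \max_{j \neq i}(S_i + S_j)/D_{ij}$, and then applies Eqn. (10.10).

```python
import numpy as np

def db_index(X, labels, centers):
    """Davies-Bouldin index sketch (Eqn. (10.10)); lower is better.
    Assumes the standard choices S_i (mean member-to-center distance)
    and R_i = max_{j != i} (S_i + S_j) / D_ij with D_ij as in Eqn. (10.9)."""
    Nc = centers.shape[0]
    S = np.array([np.linalg.norm(X[labels == i] - centers[i], axis=1).mean()
                  for i in range(Nc)])
    R = [max((S[i] + S[j]) / np.linalg.norm(centers[i] - centers[j])
             for j in range(Nc) if j != i)
         for i in range(Nc)]
    return float(np.mean(R))

# Toy usage: 3 clusters of 20 points each
rng = np.random.default_rng(8)
X = rng.random((60, 2))
lab = np.repeat(np.arange(3), 20)
C = np.array([X[lab == i].mean(axis=0) for i in range(3)])
print(db_index(X, lab, C))
```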
10.4.1 NSGA-II
In 2002, the fast elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) [15] was proposed by Deb et al. It provides a mechanism for better sorting and incorporates elitism and a diversity-preservation mechanism to improve the performance of the NSGA algorithm [14]. It shows good performance in solving critical problems. The working principle of the NSGA-II algorithm for automatic clustering of gray scale images is as follows:
1. A population Pt is created from the input gray scale image as described in Sec-
tion 10.4.2.
2. The active cluster centroids of all the chromosomes belonging to Pt are identified, as discussed in Section 10.4.3.
3. Both the fitness values FV 1 by Equation (10.6) and FV 2 by Equation (10.10)
are evaluated simultaneously for all the chromosomes as explained in Section
10.3.
4. The tournament selection is performed and the population is updated while maintaining elitism, as discussed in Section 10.4.4.
5. The conventional crossover operation is performed based on a predefined crossover probability to produce new offspring NPt.
6. The conventional mutation is performed based on a predefined mutation probability over some randomly selected chromosomes belonging to Pt ∪ NPt.
7. After that, the fast non-dominated sorting is performed on Pt ∪ NPt to produce the near Pareto-optimal front, as discussed in Section 10.4.5 (see the sketch after this list).
8. The crowding distance of all the elements of the near Pareto-optimal front is computed to identify the first N chromosomes that form the next-generation population Pt+1. Both fitness values, FV1 and FV2, along with the number of cluster centroids corresponding to those solutions, are memorized, as explained in Section 10.4.6.
9. Steps 4 to 8 are repeated until the stopping criterion is met.
10. Finally, the obtained output is reported.
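A compact sketch of the two NSGA-II ingredients used in steps 7 and 8, fast non-dominated sorting and crowding distance, is given below for a minimization problem; it follows the standard formulation of [15] rather than the chapter's exact code.

```python
import numpy as np

def non_dominated_sort(F):
    """Fast non-dominated sorting for a minimization MOP.
    F : (N, M) objective values; returns a list of fronts (index lists)."""
    N = F.shape[0]
    dominates = lambda a, b: np.all(F[a] <= F[b]) and np.any(F[a] < F[b])
    S = [[] for _ in range(N)]          # solutions dominated by p
    n = np.zeros(N, dtype=int)          # domination counts
    fronts = [[]]
    for p in range(N):
        for q in range(N):
            if dominates(p, q):
                S[p].append(q)
            elif dominates(q, p):
                n[p] += 1
        if n[p] == 0:
            fronts[0].append(p)
    i = 0
    while fronts[i]:
        nxt = []
        for p in fronts[i]:
            for q in S[p]:
                n[q] -= 1
                if n[q] == 0:
                    nxt.append(q)
        i += 1
        fronts.append(nxt)
    return fronts[:-1]

def crowding_distance(F, front):
    """Crowding distance of the solutions within one front."""
    d = np.zeros(len(front))
    G = F[front]
    for m in range(G.shape[1]):
        order = np.argsort(G[:, m])
        d[order[0]] = d[order[-1]] = np.inf          # boundary points kept
        span = G[order[-1], m] - G[order[0], m] or 1.0
        d[order[1:-1]] += (G[order[2:], m] - G[order[:-2], m]) / span
    return d

# Toy usage: 10 random solutions with two minimized objectives
rng = np.random.default_rng(9)
F = rng.random((10, 2))
fronts = non_dominated_sort(F)
print(fronts, crowding_distance(F, fronts[0]))
```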
where $i = \{1, 2, \dots, N\}$. Another way of representing Equation (10.18) is as follows:
$$q_{ij}^t = \begin{bmatrix} \cos\theta_{i1}^t & \cos\theta_{i2}^t & \dots & \cos\theta_{iL}^t \\ \sin\theta_{i1}^t & \sin\theta_{i2}^t & \dots & \sin\theta_{iL}^t \end{bmatrix} \qquad (10.19)$$
for each string. The binary strings $B_{iL}^t$ are then generated after observing the values of $Q^t$ by the following equation:
$$B_{i,j}^t = \begin{cases} 1, & \text{if } \left|\beta_{i,j}^t\right|^2 > \left|\alpha_{i,j}^t\right|^2 \\ 0, & \text{otherwise} \end{cases} \qquad (10.20)$$
where $\Delta\theta$ is a very small rotation angle, taken randomly between $[-0.5, 0.5]$, for updating the value of each qubit in $Q^t$ to produce $NQ^t$. It is depicted by
$$R(\Delta\theta)\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} \cos(\Delta\theta) & -\sin(\Delta\theta) \\ \sin(\Delta\theta) & \cos(\Delta\theta) \end{bmatrix}\begin{bmatrix} \cos\theta \\ \sin\theta \end{bmatrix} = \begin{bmatrix} \cos(\theta + \Delta\theta) \\ \sin(\theta + \Delta\theta) \end{bmatrix} = \begin{bmatrix} \alpha' \\ \beta' \end{bmatrix} \qquad (10.22)$$
Figure 10.1 depicts the effect of quantum rotation gate operation, which is responsi-
ble for creating new quantum states.
Now, each individual from $NQ^t$ produces a new binary string $NB_{iL}^t$, where $i = \{1, 2, \dots, N\}$ and $L$ is the length of each chromosome. Then, each string belonging to $NB^t$ is used to evaluate the new fitness from both objective functions for each chromosome of $P^t$. Thereafter, the best solutions between $Q^t$ and $NQ^t$ are identified to update $Q^t$ along with $B^t$. An extensive explanation of the working principle of the quantum rotation gate operation is presented in [28][35].
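Because rotating the amplitude pair $(\cos\theta, \sin\theta)$ by $R(\Delta\theta)$ simply yields $(\cos(\theta+\Delta\theta), \sin(\theta+\Delta\theta))$, per Eqn. (10.22), the gate application can be sketched as a plain angle update; the Pauli-X operation of Eqn. (10.24) then corresponds to the angle map $\theta \mapsto \pi/2 - \theta$. The array shapes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(11)

theta = rng.uniform(0, np.pi / 2, (4, 8))     # current quantum population Q^t
delta = rng.uniform(-0.5, 0.5, theta.shape)   # small random rotation angles

# Eqn. (10.22): applying R(dt) to (cos t, sin t) gives (cos(t+dt), sin(t+dt)),
# so the rotation gate reduces to adding the angle increments
theta_new = theta + delta                     # NQ^t
alpha_new, beta_new = np.cos(theta_new), np.sin(theta_new)

# Pauli-X-style amplitude swap of Eqn. (10.24): theta -> pi/2 - theta
theta_mut = np.pi / 2 - theta_new
print(alpha_new[0], beta_new[0], np.cos(theta_mut[0]))  # cos(theta_mut) == beta_new
```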
The Pauli-X gate operation on a single qubit, responsible for reversing the probability amplitude values of that qubit, is demonstrated as follows:
$$P_X \begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = \begin{bmatrix} \beta \\ \alpha \end{bmatrix} \qquad (10.24)$$
Input Parameters
Maximum Generation:= Max G
Population Size := N
Crossover Probability := C p
Mutation Probability := µ p
Output Parameters
Optimum Cluster Number:= ONC
Optimum Fitness Value1 := FV 1
Optimum Fitness Value2 := FV 2
1. t ← 0
2. Create original population Pt from the input gray scale image as described in
Section 10.4.2
3. Create quantum state population Qt to encode Pt as discussed in Section 10.5.1.
4. Identify active cluster centroids of all the chromosomes belonging to Pt by the
guidance of Qt as described in Section 10.5.2.
5. Evaluate both the fitness values, FV 1 by Equation (10.6) and FV 2 by Equation
(10.10) simultaneously of all the chromosomes belonging to Pt as discussed in
Section 10.3.
6. Perform quantum-behaved rotation gate operation on Qt to create new NQt as
elaborated in Section 10.5.3.1.
7. Again identify active cluster centroids of all the chromosomes belonging to Pt by
the guidance of NQt .
8. Evaluate both the fitness values, FV 1 and FV 2, simultaneously of all the chromo-
somes belonging to Pt .
9. Identify the N best solutions from Qt ∪ NQt by performing fast non-dominated sorting followed by crowding distance calculation, and update Qt with those solutions along with their fitness values and numbers of cluster centroids.
10. Depending upon a predefined crossover probability C p , perform quantum-
behaved crossover operation on Qt to produce offspring NQt as elaborated in
Section 10.5.3.2.
11. Depending upon a predefined mutation probability µ p , perform quantum-behaved
mutation operation on some strings of Qt as discussed in Section 10.5.3.3.
12. Perform fast non-dominated sorting followed by crowding distance calculation to
generate the near Pareto-optimal front and thereafter consider the first N number
of solutions from the front to prepare the next-generation population Pt+1 with
its corresponding quantum state population Qt+1. Memorize the corresponding fitness values along with their numbers of cluster centroids, as described in Sections 10.5.4 and 10.5.5.
13. t ← t + 1
14. a. If t < Max G then
b. Repeat Steps from 6 to 14
15. Finally, report the obtained output.
$$MS = \frac{M_{True} - M_{Comp}}{M_{True}} \qquad (10.25)$$
$$SIL = \frac{1}{K}\sum_{i=1}^{K} S(C_i) \qquad (10.26)$$
Figure 10.5: Original test images [31]: (a)#86000, (b)#92059, (c)#89072, (d)#86016,
(e)#87046, (f)#94079.
where the Silhouette width for the given cluster $C_i$ is represented by $S(C_i)$. The value of $S(C_i)$ is computed as
$$S(C_i) = \frac{1}{N_i}\sum_{x \in C_i} \frac{b(x) - a(x)}{\max(a(x), b(x))} \qquad (10.27)$$
Table 10.1
Input Parameters for QIMONSGA-II and NSGA-II [36]
Parameters QIMONSGA-II NSGA-II
Population Size : 50 50
Maximum Generation : 50 100
Crossover Probability : 0.8 0.8
1 1
Mutation Probability : ChromosomeLength ChromosomeLength
Small Rotation Angle : [-0.1 to 0.1] -
Table 10.2
Results of Mean Fitness Values from CSM [7] and DB [30]
Data Sets CVI QIMONSGA-II NSGA-II
#86000 CSM 0.29673 0.33841
DB 0.274e7 0.355e7
#92059 CSM 0.27115 0.36954
DB 0.335e7 0.248e7
#89072 CSM 5.07643 6.32973
DB 6.92591 7.19382
#86016 CSM 0.13794 0.26844
DB 1.35473 1.90035
#87046 CSM 0.30133 0.31472
DB 0.352e7 0.287e6
#94079 CSM 0.51692 0.39961
DB 8.19665 8.94837
where $N_i$ is the number of patterns belonging to $C_i$, $a(x)$ represents the within-cluster mean distance, obtained by averaging the distance between $x$ and the rest of the patterns of the same cluster, whereas $b(x)$ represents the smallest mean distance of $x$ to the patterns of another cluster. The SIL [37] index value generally lies between $-1$ and $1$, and the optimal result is achieved for the maximum value of the SIL index.
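A direct implementation of Eqns. (10.26)-(10.27) is sketched below on a two-blob toy dataset; a real pipeline would compute it on the clustered image pixels.

```python
import numpy as np

def silhouette(X, labels):
    """SIL sketch following Eqns. (10.26)-(10.27); values lie in [-1, 1],
    with larger values indicating better clustering."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)  # pairwise distances
    ks = np.unique(labels)
    s = np.zeros(len(X))
    for i in range(len(X)):
        same = labels == labels[i]
        same[i] = False
        a = D[i, same].mean() if same.any() else 0.0   # within-cluster mean distance
        b = min(D[i, labels == k].mean() for k in ks if k != labels[i])
        s[i] = (b - a) / max(a, b)                     # Eqn. (10.27), per pattern
    # Eqn. (10.26): average the per-cluster Silhouette widths
    return np.mean([s[labels == k].mean() for k in ks])

# Toy usage: two well-separated Gaussian blobs
rng = np.random.default_rng(12)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(3, 0.3, (30, 2))])
lab = np.array([0] * 30 + [1] * 30)
print(silhouette(X, lab))
```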
Figure 10.6: Non-dominating Pareto optimal front of test images [31] (a)#86000,
(b)#92059, (c)#89072, (d)#86016, (e)#87046, (f)#94079 by QIMONSGA-II.
Figure 10.7: Non-dominating Pareto optimal front of test images [31] (a)#86000,
(b)#92059, (c)#89072, (d)#86016, (e)#87046, (f)#94079 by NSGA-II.
Analyzing the results of Table 10.3, it is established that the proposed algorithm performs better than its classical counterpart, as most of the results are in favor of the proposed algorithm.
When the unpaired t-test [38] is performed between QIMONSGA-II and NSGA-II [15], eight results are found to be “extremely significant,” two “very significant,” and one “significant,” while one result is identified as “not significant.” The results of the unpaired t-test [38] are presented in Table 10.4.
Table 10.3
Results of Standard Deviation (σ ), Standard Error (ε ) and Optimal computa-
tional Time (τ in second) by CSM [7] and DB [30]
Data Sets CVI QIMONSGA-II NSGA-II
σ ε τ σ ε τ
#86000 CSM 0.02682 0.00489 63 0.03783 0.00504 97
DB 0.01775 0.00324 85 0.02764 0.00573 118
#92059 CSM 0.00784 0.00143 54 0.05941 0.01084 72
DB 0.03965 0.00729 77 0.08451 0.01543 65
#89072 CSM 0.07834 0.01431 61 0.01919 0.00350 89
DB 0.23968 0.04375 67 0.38936 0.07108 94
#86016 CSM 0.16543 0.03020 45 0.25828 0.04715 94
DB 0.08164 0.01492 59 0.09739 0.01778 78
#87046 CSM 0.49472 0.09032 56 0.20398 0.03726 56
DB 0.03657 0.00667 83 0.05826 0.01064 126
#94079 CSM 0.21943 0.04006 38 0.22559 0.04186 61
DB 0.05828 0.01064 52 0.04694 0.00857 87
Table 10.4
Results of Unpaired t-test [38] between QIMONSGA-II and NSGA-II for CSM
[7] and DB [30]
Data Sets CVI QIMONSGA-II vs. NSGA-II
P −Value Significance Level
#86000 CSM <0.0001 Extremely Significant
DB <0.0001 Extremely Significant
#92059 CSM <0.0001 Extremely Significant
DB <0.0001 Extremely Significant
#89072 CSM <0.0001 Extremely Significant
DB 0.0022 Very Significant
#86016 CSM 0.0233 Significant
DB <0.0001 Extremely Significant
#87046 CSM 0.8915 Not Significant
DB <0.0001 Extremely Significant
#94079 CSM 0.0457 Significant
DB <0.0001 Extremely Significant
Table 10.5 presents the obtained values of MS [36] and SIL [37], which have
been used to compare the performance of the proposed algorithm with its classical
counterpart. While considering the score value of MS [36] and SIL, it is found that
all the test images excluding #87046 have scored better MS [36] score values and SIL
Quantum-Inspired Multi-Objective NSGA-II for Automatic Clustering 225
Table 10.5
Results of Performance Evaluation by MS [36] and SIL [37]
Data Sets Performance Metrics QIMONSGA-II NSGA-II
#86000 MS 0.50127 0.58982
SIL 0.65294 0.52369
#92059 MS 0.39421 0.43024
SIL 0.58392 0.44993
#89072 MS 0.36375 0.36333
SIL 0.70025 0.65047
#86016 MS 0.53285 0.59346
SIL 0.68327 0.63284
#87046 MS 0.47293 0.49211
SIL 0.45982 0.49328
#94079 MS 0.32691 0.43973
SIL 0.68485 0.47226
[37] values. This proves the supremacy of the proposed algorithm over its classical counterpart.
Finally, the clustered images obtained from the proposed algorithm and its classical counterpart are presented in Figures 10.8 and 10.9, respectively. The corresponding threshold values obtained by QIMONSGA-II, used for creating the clustered images, are presented in Table 10.6.
Table 10.6
Results of Number of Clusters and Threshold Values
Data Sets Number of Clusters Threshold Values
#86000 3 [55, 102, 180]
#92059 4 [46, 89, 119, 223]
#89072 3 [56, 120, 193]
#86016 4 [82, 140, 176, 209]
#87046 4 [75, 112, 127, 239]
#94079 5 [57, 103, 141, 168, 217]
REFERENCES
1. A. K. Jain and R. C. Dubes. Algorithms for Clustering Data. Prentice-Hall, Inc., USA,
1988.
2. A. K. Jain, M. N. Murty, and P. J. Flynn. Data clustering: A review. ACM Computing
Surveys, 31(3):264–323, 1999.
3. Ashwini Gulhane, Prashant Paikrao, and D. Chaudhari. A review of image data clustering
techniques. International Journal of Soft Computing and Engineering (IJSCE), 2(1):212–
215, 2011.
4. J. C. Platt, M. Czerwinski, and B.A. Field. Phototoc: Automatic clustering for browsing
personal photographs. In Fourth International Conference on Information, Communica-
tions and Signal Processing, 2003 and the Fourth Pacific Rim Conference on Multimedia.
Proceedings of the 2003 Joint, volume 1, pages 6–10, 2003.
5. J.H. Chen, Y.C. Chang, and W.L. Hung. A robust automatic clustering algorithm for prob-
ability density functions with application to categorizing color images. Communications
in Statistics – Simulation and Computation, 47(7):2152–2168, 2018.
6. T. Geraud, P. Strub, and J. Darbon. Color image segmentation based on automatic mor-
phological clustering. In Proceedings 2001 International Conference on Image Processing
(Cat. No.01CH37205), volume 3, pages 70–73, 2001.
7. T. Lei, P. Liu, X. Jia, X. Zhang, H. Meng, and A.K. Nandi. Automatic fuzzy clustering
framework for image segmentation. IEEE Transactions on Fuzzy Systems, 28(9):2078–
2092, 2020.
8. S. Bandyopadhyay and U. Maulik. Genetic clustering for automatic evolution of clusters
and application to image classification. Pattern Recognition, 35(6):1197–1208, 2002.
9. S. Das, A. Abraham, and A. Konar. Automatic clustering using an improved differential
evolution algorithm. IEEE Transactions on Systems, Man, and Cybernetics – Part A: Sys-
tems and Humans, 38(1):218–237, 2008.
10. A.E. Ezugwu. Nature-inspired metaheuristic techniques for automatic clustering: A survey
and performance study. SN Applied Sciences, 2, 2020.
11. A. José-García and W. Gómez-Flores. Automatic clustering using nature-inspired metaheuristics: A survey. Applied Soft Computing, 41:192–213, 2016.
12. S. Saha and S. Bandyopadhyay. A generalized automatic clustering algorithm in a multi-
objective framework. Applied Soft Computing, 13:89–108, 2013.
13. K. Suresh, D. Kundu, S. Ghosh, S. Das, and A. Abraham. Data clustering using multiob-
jective differential evolution algorithms. Fundamenta Informaticae, 97:381–403, 2009.
14. N. Srinivas and K. Deb. Multiobjective optimization using nondominated sorting in ge-
netic algorithms. Evolutionary Computation, 2(3):221–248, 1994.
15. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan. A fast and elitist multiobjective genetic
algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation, 6(2):182–197, 2002.
16. D.W. Corne, N.R. Jerram, J.D. Knowles, and M.J. Oates. PESA-II: Region-based selection
in evolutionary multiobjective optimization. In Proceedings of the 3rd Annual Conference
on Genetic and Evolutionary Computation, GECCO'01, pages 283–290, San Francisco,
CA, USA, 2001. Morgan Kaufmann Publishers Inc.
17. E. Zitzler and L. Thiele. Multiobjective evolutionary algorithms: A comparative case study
and the strength Pareto approach. IEEE Transactions on Evolutionary Computation,
3(4):257–271, 1999.
18. M. Kim, T. Hiroyasu, M. Miki, and S. Watanabe. SPEA2+: Improving the performance
of the strength Pareto evolutionary algorithm 2. In Xin Yao, Edmund K. Burke, José A.
Lozano, Jim Smith, Juan Julián Merelo-Guervós, John A. Bullinaria, Jonathan E. Rowe,
Peter Tiňo, Ata Kabán, and Hans-Paul Schwefel, editors, Parallel Problem Solving from
Nature – PPSN VIII, pages 742–751. Springer, Berlin, Heidelberg, 2004.
19. U. Maulik, S. Bandyopadhyay, and A. Mukhopadhyay. Multiobjective genetic algorithms
for clustering: Applications in data mining and bioinformatics. Springer Science & Busi-
ness Media, 2011.
20. Aimin Zhou, Bo-Yang Qu, Hui Li, Shi-Zheng Zhao, Ponnuthurai Nagaratnam Suganthan,
and Qingfu Zhang. Multiobjective evolutionary algorithms: A survey of the state of the art.
Swarm and Evolutionary Computation, 1(1):32–49, 2011.
21. T. Hey. Quantum computing: An introduction. Computing & Control Engineering Journal,
10:105–112, June 1999.
22. K.H. Han and J.H. Kim. Quantum-inspired evolutionary algorithm for a class of combina-
torial optimization. IEEE Transactions on Evolutionary Computation, 6(6):580–593, 2002.
23. T. Gandhi, Nitin, and T. Alam. Quantum genetic algorithm with rotation angle refinement
for dependent task scheduling on distributed systems. In 2017 Tenth International Confer-
ence on Contemporary Computing (IC3), pages 1–5. IEEE, Aug 2017.
24. H.P. Chiang, Y.H. Chou, C.H. Chiu, S.Y. Kuo, and Y.M. Huang. A quantum-inspired
tabu search algorithm for solving combinatorial optimization problems. Soft Computing,
18:1771–1781, 2013.
25. O.H. Montiel Ross. A review of quantum-inspired metaheuristics: Going from classical
computers to real quantum computers. IEEE Access, 8:814–838, 2019.
26. W. Chmiel and J. Kwiecień. Quantum-inspired evolutionary approach for the quadratic
assignment problem. Entropy, 20(10):781, Oct 2018.
27. S. Dey, S. Bhattacharyya, and U. Maulik. Quantum inspired automatic clustering for multi-
level image thresholding. In 2014 International Conference on Computational Intelligence
and Communication Networks, pages 247–251, 2014.
28. A. Dey, S. Dey, S. Bhattacharyya, J. Platoš, and V. Snášel. Novel quantum inspired ap-
proaches for automatic clustering of gray level images using particle swarm optimization,
spider monkey optimization and ageist spider monkey optimization algorithms. Applied
Soft Computing, 88:106040, 2020.
29. C.-H. Chou, M.-C. Su, and E. Lai. A new cluster validity measure and its application to
image compression. Pattern Analysis and Applications, 7(2):205–220, Jul 2004.
30. D.L. Davies and D.W. Bouldin. A cluster separation measure. IEEE Transactions on Pat-
tern Analysis and Machine Intelligence, PAMI-1(2):224–227, 1979.
31. Berkeley segmentation dataset images. Accessed on 15/12/2020.
32. R. Blatt, H. Häffner, C.F. Roos, C. Becher, and F. Schmidt-Kaler. Course 5 – Quantum
information processing in ion traps. In Daniel Estève, Jean-Michel Raimond, and Jean
Dalibard, editors, Quantum Entanglement and Information Processing, volume 79 of Les
Houches, pages 223–260. Elsevier, 2004.
33. S. Bandyopadhyay, S. Saha, U. Maulik, and K. Deb. A simulated annealing-based multi-
objective optimization algorithm: AMOSA. IEEE Transactions on Evolutionary Computa-
tion, 12(3):269–283, 2008.
34. K. Deb. Multiobjective Optimization Using Evolutionary Algorithms. Wiley, New York,
2001.
35. A. Dey, S. Dey, S. Bhattacharyya, J. Platos, and V. Snasel. Quantum inspired meta-heuristic
approaches for automatic clustering of color images. International Journal of Intelligent
Systems, 2021.
36. A. Mukhopadhyay, S. Bandyopadhyay, and U. Maulik. Clustering using multi-objective
genetic algorithm and its application to image segmentation. In 2006 IEEE International
Conference on Systems, Man and Cybernetics, volume 3, pages 2678–2683, 2006.
37. P.J. Rousseeuw. Silhouettes: A graphical aid to the interpretation and validation of cluster
analysis. Journal of Computational and Applied Mathematics, 20:53–65, 1987.
38. B. Flury. A First Course in Multivariate Statistics. Springer Texts in Statistics. Springer,
New York, 1997.
11 Conclusion
A metaheuristic is a higher-level heuristic (partial search) procedure that provides
a reasonably efficient approach to optimizing real-world problems. Hybrid meta-
heuristics refer to a proper and judicious combination of several metaheuristics, often
together with machine learning algorithms. Hybrid metaheuristics have been found
to be more robust and failsafe owing to the complementary character of the individual
metaheuristics in the resultant combination: the vision of hybridization is to combine
different metaheuristics so that each component of the combination supplements the
others in achieving the desired performance.
A quantum computer, as the name suggests, works principally on quantum phys-
ical phenomena. Such machines offer a compelling alternative to today's conven-
tional computers, since they promise faster, in some cases exponentially faster,
processing than classical computers. A number of researchers have coupled the
underlying principles of quantum computing with various metaheuristic structures
to introduce different quantum-inspired algorithmic approaches [1]–[5]. The evolu-
tion of the quantum computing paradigm has thus led to time-efficient and robust
hybrid metaheuristics that conjoin the principles of quantum mechanics with con-
ventional metaheuristics, thereby enhancing their real-time performance.
This volume is a novel effort to bring together recent advances and trends in de-
signing quantum-inspired metaheuristics to solve real-life problems in various
branches of science and engineering. It introduces the principles of quantum me-
chanics into hybrid metaheuristic-based optimization techniques useful for real-
world engineering and scientific problems. The introductory chapter presents an
outline of the basic theory and concepts pertaining to quantum-inspired metaheuris-
tics and discusses the several types of quantum-inspired metaheuristics in detail. It
also offers a bird's-eye view of the different bi-level/multi-level quantum system-
based optimization techniques. In addition, several entanglement-induced optimiza-
tion techniques and W-state encodings of optimization methods are discussed. Ap-
plications related to these themes are provided to bring readers up to date.
With the development of machine learning theory and the accumulation of prac-
tical experience with various algorithms, it has become clear that there is no ideal
classification method that outperforms all others for every size of training sample,
every percentage of noise in the data, or every degree of complexity of the class
boundaries. Ensemble classification methods, which combine many different clas-
sifiers trained on different data samples, are therefore in wide use at present. One of
the most accurate and easily parallelized of these methods is bagging, which proves
useful with heterogeneous classifiers and in the presence of instability, when small
changes in the initial sample lead to significant changes in the classification. A min-
imal sketch of bagging is given below.
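The sketch assumes MATLAB's Statistics and Machine Learning Toolbox and uses a synthetic two-class sample in place of real data; it is illustrative only, not code from any chapter of this volume.

% Bagging sketch (illustrative only): 50 trees, each trained on a
% bootstrap resample of a synthetic two-class sample.
rng(1);                                    % reproducible bootstraps
X = [randn(100, 2); randn(100, 2) + 2];    % two overlapping classes
Y = [zeros(100, 1); ones(100, 1)];         % class labels 0 and 1
mdl = TreeBagger(50, X, Y, 'OOBPrediction', 'on');
err = oobError(mdl);                       % out-of-bag error vs. number of trees
yhat = predict(mdl, X(1:5, :));            % majority-vote predictions

Because each tree sees a different bootstrap resample, the ensemble vote damps exactly the instability described above.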
REFERENCES
1. Wang, L., Niu, Q., & Fei, M. R. (2008). A novel quantum ant colony optimization algo-
rithm and its application to fault diagnosis. Transactions of the Institute of Measurement
and Control, 30(3–4), 313–329.
2. Dey, S., Bhattacharyya, S., & Maulik, U. (2013). Quantum-inspired metaheuristic algo-
rithms for multi-level thresholding for true colour images. In Proceedings of the 2013
Annual IEEE India Conference (INDICON).
3. Dey, S., Bhattacharyya, S., & Maulik, U. (2017). Efficient quantum-inspired metaheuris-
tics for multi-level true colour image thresholding. Applied Soft Computing, 56, 472–513.
%Outputs:
% segments: An Nx4 matrix containing the segment path. The
%           first two columns hold the row and column
%           indices, the third column the corresponding
%           pixel slope, and the fourth column the
%           resultant angle used to search structure
%           borders.
%------------------------------------------------
segments = {};
k = 0;
end
else
%Remove end-point:
endpoints(1, :) = [];
end
end
% / / angle=45 Degrees
% / m=1 / m=1
% [v1]----[v2] [v3]
% m=0
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
% S1 = 0, S2 = Inf
% m=0
% [v3] [v1]----[v2]
% | m=Inf | m=Inf angle=90 Degrees
% | |
%[v1]----[v2] [v3]
% m=0
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
% S1 = 0, S2 = -1
% m=0
% [v3] [v1]----[v2]
% \ m=-1 \ m=-1 angle=135 Degrees
% \ \
%[v1]----[v2] [v3]
% m=0
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
% S1 = Inf, S2 = Inf
% [v1]
% | m=Inf
% | angle=0 Degrees
% [v2]
% | m=Inf
% |
% [v3]
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
% S1 = Inf, S2 = 0
% [v1] [v1]
% | m=Inf |m=Inf
% | | angle=90 Degrees
% [v2]----[v3] [v3]----[v2]
% m=0 m=0
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
% S1 = Inf, S2 = 1
% [v1] [v3] [v1]
% m=Inf | / | m=Inf
% | / m=1 | angle=45 Degrees
% [v2] [v2]
% /
% / m=1
% [v3]
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
% S1 = Inf, S2 = -1
% [v3][v1] [v1]
% m=-1 \ |m=Inf | m=Inf
% \| | angle=135 Degrees
% [v2] [v2]
% \
% \ m=-1
% [v3]
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
% [v1]
% \ m=-1
% \
% [v2]
% \ m=-1
% \
% [v3]
%~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
% [v1]
% / m=1
% /
% [v2]
% / m=1
% /
% [v3]
%
s1 = getSlope(v1, v2);
if ~exist('v3', 'var')
slope = s1;
switch slope
case 0
angle = 90;
case 1
angle = 135;
case -1
angle = 45;
case Inf
angle = 0;
otherwise
angle = 99;
end
else
s2 = getSlope(v2, v3);
if s1 == 0 && s2 == 0
slope = 0;
angle = 90;
elseif (s1 == 0 && s2 == 1) || (s2 == 0 && s1 == 1)
slope = 0;
angle = 90;
elseif (s1 == 0 && s2 == Inf) || (s2 == 0 && s1 == Inf)
slope = 1;
angle = 135;
elseif (s1 == 0 && s2 == -1) || (s2 == 0 && s1 == -1)
slope = 0;
angle = 90;
elseif s1 == Inf && s2 == Inf
slope = Inf;
angle = 0;
elseif (s1 == Inf && s2 == 0) || (s2 == Inf && s1 == 0)
slope = 1;
angle = 45;
elseif (s1 == Inf && s2 == 1) || (s2 == Inf && s1 == 1)
slope = Inf;
angle = 0;
elseif (s1 == -1 && s2 == Inf) || (s2 == -1 && s1 == Inf)
slope = Inf;
angle = 0;
elseif s1 == -1 && s2 == -1
slope = -1;
angle = 45;
elseif s1 == 1 && s2 == 1
slope = 1;
angle = 135;
else
slope = 99;
angle = 99;
end
end
end
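The listings above rely on a helper getSlope that is not reproduced in this appendix. The sketch below shows one plausible implementation for 8-connected pixel positions; the sign convention for diagonal steps is an assumption, not taken from the source.

function m = getSlope(v1, v2)
% Hedged reconstruction (not the authors' code): slope between two
% 8-connected pixel positions v = [row, col]. Returns 0 (horizontal
% step), +/-1 (diagonal step), or Inf (vertical step).
dr = v2(1) - v1(1);
dc = v2(2) - v1(2);
if dc == 0
    m = Inf;              % vertical step
else
    m = dr / dc;          % 0 horizontal, +/-1 diagonal
end
end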
function [points, m_result, end_point] = findsegment(m, row, col, bp)
%Find Segment Function
% Search and detect all possible pixels that are part of a
% segment.
%---------------------------------------------------------------
%Artifact: findsegment.m
%Version: 1.0
%Date: 15/03/2020 12:35:00
%Author: Miguel Angel Gil Rios
%Email: [email protected]
%---------------------------------------------------------------
%Usage:
% [points, m_result, end_point] = findsegment(m, row,
% col, bp)
%
%Inputs:
% m: A Logical matrix.
% row: The row index where the start point pixel is located.
% col: The column index where the start point pixel is located.
% bp: An optional array of Nx2 indicating the current
% found branch points locations (row, col).
%
%Outputs:
% points: A Nx2 matrix containing the positions (row,
%         column) of the pixels that are part of the segment.
% m_result: A copy of the m input matrix with zeros in the
%           positions that were identified as part of the segment.
% end_point: A 1x2 vector containing the position (row,
%            col) of the segment end reached by the function.
%            The segment end can be reached under the following
%            conditions:
%            1. No more ways to explore are available
%               (all neighbors are 0).
%            2. We fall on a branch point from which a non-unique
%               path is possible to follow.
%            3. We reach one of the matrix boundaries and there
%               are no more remaining positions to explore.
%            However, you can make additional considerations:
%            1. If the row and col values are equal to those stored
%               in end_point, there is probably only an isolated
%               pixel rather than a segment.
%            2. If end_point is equal to another end-point kept in
%               your code, you could consider removing it.
%
%---------------------------------------------------------------
points = zeros(0);
m_result = m;
end_point(1, 1:2) = -1;
row_current = row;
col_current = col;
k = 0;
flag = true;
if ~exist('bp', 'var')
bp = zeros(0);
end
%We reach the end of the segment (flag == false) under the
%following conditions:
% 1. No more ways to explore are available (all neighbors
%    are 0).
% 2. We fall on a branch point from which a non-unique path
%    is possible to follow.
% 3. We reach one of the matrix boundaries and there are no
%    more remaining positions to explore.
first_time = true;
while flag == true
k = k + 1;
points(k, 1) = row_current;
points(k, 2) = col_current;
points(k + 1, 2) = col_current;
end
m_result(row_current, col_current) = 0;
end
flag = false;
else
%Find the positions from 1 to 8 where it is possible to
% follow a path:
vpos = findpixdirs(sw);
pos = find(vpos == 1);
if c == true
subwindow = eswm_center(m, ri, ci, h, w);
else
subwindow = eswm_top_left(m, ri, ci, h, w);
end
end
for i = 1 : h
c_ini = ci - floor(w / 2);
for j = 1 : w
if r_ini > 0 && c_ini > 0 && r_ini <= size(m, 1) ...
        && c_ini <= size(m, 2)
subwindow(i, j) = m(r_ini, c_ini);
end
c_ini = c_ini + 1;
end
r_ini = r_ini + 1;
end
end
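A hypothetical call to the sub-window extraction shown above, using the function name that appears in the listing; the toy matrix is illustrative only.

% Extract the 3x3 neighborhood centered at pixel (10, 10).
m = false(20); m(9:11, 9:11) = true;       % toy binary image
sw = eswm_center(m, 10, 10, 3, 3);         % centered 3x3 sub-window
disp(sw);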
row_index = k;
k = size(m, 1) + 1;
end
end
end
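Similarly, a hypothetical call to findsegment as documented in its usage comment; the toy image is illustrative only.

% Trace a horizontal segment from its left end point.
m = false(10); m(5, 2:8) = true;           % a 7-pixel horizontal segment
[points, m_result, end_point] = findsegment(m, 5, 2);
disp(points);                              % positions belonging to the segment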
Index
W state, 8, 9
t-test, 107
F score, 182
69 bus system, 76
Quantum Ant Colony Multi-Objective Routing, 13
crossover, 212, 215
crowding distance, 208
Cuckoo search algorithm, 42
Delta Potential-well Model, 192, 195
Differential Evolution, 1, 2, 6
Distributed Generator, 58
residential load, 60
rotation, 96, 98, 102, 108
rotation gates, 179
Scatter Search, 6
sensitivity analysis, 96, 106
Shor’s factorization algorithm, 2
Simulated annealing (SA), 1, 4, 41
Single objective optimization, 1
Single objective optimization problems (SOOPs), 38
Single point-based search, 2
standard deviation, 181
Stenosis, 141
Stochastic Gradient Descent, 142, 157
Stochasticity, 38
Superposition, 43, 145
Support Vector Machine, 125
SVM, 125, 137
Swarm Optimization, 2
Tabu Search, 1, 2, 4, 52
texture features, 121, 130