Proposing A Model To Enhance The IoMT-Based EHR Storage System Security
Sheng-Lung Peng
Noor Zaman Jhanjhi
Souvik Pal
Fathi Amsaad Editors
Proceedings of 3rd International Conference on Mathematical Modeling and Computational Science
ICMMCS 2023
Advances in Intelligent Systems and Computing
Volume 1450
Series Editor
Janusz Kacprzyk, Systems Research Institute, Polish Academy of Sciences,
Warsaw, Poland
Advisory Editors
Nikhil R. Pal, Indian Statistical Institute, Kolkata, India
Rafael Bello Perez, Faculty of Mathematics, Physics and Computing,
Universidad Central de Las Villas, Santa Clara, Cuba
Emilio S. Corchado, University of Salamanca, Salamanca, Spain
Hani Hagras, School of Computer Science and Electronic Engineering,
University of Essex, Colchester, UK
László T. Kóczy, Department of Automation, Széchenyi István University,
Gyor, Hungary
Vladik Kreinovich, Department of Computer Science, University of Texas
at El Paso, El Paso, TX, USA
Chin-Teng Lin, Department of Electrical Engineering, National Chiao
Tung University, Hsinchu, Taiwan
Jie Lu, Faculty of Engineering and Information Technology,
University of Technology Sydney, Sydney, NSW, Australia
Patricia Melin, Graduate Program of Computer Science, Tijuana Institute
of Technology, Tijuana, Mexico
Nadia Nedjah, Department of Electronics Engineering, University of Rio de
Janeiro, Rio de Janeiro, Brazil
Ngoc Thanh Nguyen, Faculty of Computer Science and Management,
Wrocław University of Technology, Wrocław, Poland
Jun Wang, Department of Mechanical and Automation Engineering,
The Chinese University of Hong Kong, Shatin, Hong Kong
The series “Advances in Intelligent Systems and Computing” contains publications
on theory, applications, and design methods of Intelligent Systems and Intelligent
Computing. Virtually all disciplines such as engineering, natural sciences, computer
and information science, ICT, economics, business, e-commerce, environment,
healthcare, life science are covered. The list of topics spans all the areas of modern
intelligent systems and computing such as: computational intelligence, soft comput-
ing including neural networks, fuzzy systems, evolutionary computing and the fusion
of these paradigms, social intelligence, ambient intelligence, computational neuro-
science, artificial life, virtual worlds and society, cognitive science and systems,
Perception and Vision, DNA and immune based systems, self-organizing and
adaptive systems, e-Learning and teaching, human-centered and human-centric
computing, recommender systems, intelligent control, robotics and mechatronics
including human-machine teaming, knowledge-based paradigms, learning para-
digms, machine ethics, intelligent data analysis, knowledge management, intelligent
agents, intelligent decision making and support, intelligent network security, trust
management, interactive entertainment, Web intelligence and multimedia.
The publications within “Advances in Intelligent Systems and Computing” are
primarily proceedings of important conferences, symposia and congresses. They
cover significant recent developments in the field, both of a foundational and
applicable character. An important characteristic feature of the series is the short
publication time and world-wide distribution. This permits a rapid and broad
dissemination of research results.
Indexed by DBLP, INSPEC, WTI Frankfurt eG, zbMATH, Japanese Science and
Technology Agency (JST).
All books published in the series are submitted for consideration in Web of Science.
For proposals from Asia please contact Aninda Bose ([email protected]).
Sheng-Lung Peng · Noor Zaman Jhanjhi ·
Souvik Pal · Fathi Amsaad
Editors
Proceedings of 3rd International Conference on Mathematical Modeling and Computational Science
ICMMCS 2023
Editors
Sheng-Lung Peng
Department of Creative Technologies and Product Design, National Taipei University of Business, Taoyuan, Taiwan

Noor Zaman Jhanjhi
School of Computer Science, SCS, Taylor’s University, Subang Jaya, Malaysia

Souvik Pal
Department of Computer Science and Engineering, Sister Nivedita University, Kolkata, West Bengal, India

Fathi Amsaad
College of Engineering and Computer Science, Joshi Research Center 489, Wright State University, Dayton, OH, USA
© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature
Singapore Pte Ltd. 2023
This work is subject to copyright. All rights are solely and exclusively licensed by the Publisher, whether
the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse
of illustrations, recitation, broadcasting, reproduction on microfilms or in any other physical way, and
transmission or information storage and retrieval, electronic adaptation, computer software, or by similar
or dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this publication
does not imply, even in the absence of a specific statement, that such names are exempt from the relevant
protective laws and regulations and therefore free for general use.
The publisher, the authors, and the editors are safe to assume that the advice and information in this book
are believed to be true and accurate at the date of publication. Neither the publisher nor the authors or
the editors give a warranty, expressed or implied, with respect to the material contained herein or for any
errors or omissions that may have been made. The publisher remains neutral with regard to jurisdictional
claims in published maps and institutional affiliations.
This Springer imprint is published by the registered company Springer Nature Singapore Pte Ltd.
The registered company address is: 152 Beach Road, #21-01/04 Gateway East, Singapore 189721,
Singapore
Preface and Acknowledgment
The main goal of this proceedings book is to bring together top academic scientists,
researchers, and research scholars so they can share their experiences and research
results on all aspects of intelligent ecosystems, data sciences, and mathematics.
ICMMCS 2023 is a conference that aims to bring together academic scientists,
professors, research scholars, and students who work in different areas of engi-
neering and technology. At ICMMCS 2023, you will have the chance to meet some
of the best researchers in the world, learn about some new ideas and developments
in research around the world, and get a feel for new Science–Technology trends. The
conference will give authors, research scholars, and people who attend the chance
to work together with universities and institutions across the country and around the
world to promote research and develop technologies on a global scale. The goal of
this conference is to make it easier for basic research to be used in institutional and
industrial research and for applied research to be used in real life.
ICMMCS 2023 has been jointly organized by Society for Intelligent Systems and
Mother Teresa Women’s University, Madurai, Tamil Nadu [NAAC “A” Accredited
Government University in Tamil Nadu] in association with National Taipei University
of Business, Taiwan; Statistical and Informatics Consultation Center (SICC), Univer-
sity of Kufa, Iraq; and Sultan Moulay Slimane University, Beni Mellal—Khénifra
region of Morocco, in Hybrid mode (Physical mode and Google Meet Platform)
on 24 and 25 February, 2023. The conference brought together researchers from all
regions around the world working on a variety of fields and provided a stimulating
forum for them to exchange ideas and report on their research. The proceedings
of ICMMCS 2023 consist of 51 selected papers, which were submitted to
the conference and peer-reviewed by conference committee members and interna-
tional reviewers. The presenters have shown their slides either virtually or in person.
Experts in the field of education have gathered from all over the world, including
India, Malaysia, Vietnam, Iraq, Spain, Pakistan, Taiwan, Canada, and Morocco,
to discuss how to better prepare the next generation of leaders through education.
Knowledge domains from many countries’ research cultures were brought together
at this meeting. Academic conferences rely heavily on their authors and presenters for
their credibility. In light of the current global pandemic, we appreciate the authors’
decision to present their works at this conference.
We are very grateful to Almighty for always being there for us, through good
times and bad, and for giving us ways to help ourselves. From the Call for Papers to
the finalization of the chapters, everyone on the team worked together well, which is
a good sign of a strong team. The editors and organizers of the conference are very
grateful to all the members of Springer, especially Mr. Aninda Bose, for his helpful
suggestions and for giving us the chance to finish the conference proceedings. We
also appreciate the help of Prof. William Achauer and Prof. Anil Chandy. We are
also thankful to Mrs. Ramya Somasundaram, who works for Springer as a project
coordinator, for her help. We are grateful that reviewers from all over the world
gave their support and remained committed to getting good chapters submitted
during the pandemic.
Last but not least, we wish all of the participants luck with their presen-
tations and social networking. This conference could not go well without your strong
support. We hope that those who attended the conference enjoyed both the tech-
nical program and the speakers and delegates who were there virtually. We hope you
have a productive and enjoyable time at ICMMCS 2023.
About the Editors
He is an Outstanding Associate Editor for IEEE ACCESS and an active reviewer for a
series of top-tier journals, and he has been recognized globally as a top 1% reviewer
by Publons (Web of Science). He is an external Ph.D./Master's thesis examiner/evaluator
for several universities globally. He has successfully completed more than 40 internationally
funded research grants. He has served as a keynote/invited speaker for more than 60
international conferences globally and has chaired international conference sessions.
He has vast experience in academic accreditation, including ABET,
NCAAA, and NCEAC, for 10 years. His research areas include cybersecurity, IoT
security, wireless security, data science, software engineering, and UAVs.
Neuro-Fuzzy Logic Application in Speech Recognition
D. Nagarajan (B)
Department of Mathematics, Rajalakshmi Institute of Technology, Chennai, India
e-mail: [email protected]
K. Chourashia
Department of Mathematics, Vels Institute of Science, Technology and Advanced Studies,
Chennai, India
A. Udhayakumar
Vels Institute of Science, Technology and Advanced Studies, Chennai, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_1
1 Introduction
Among all the real-time applications, recognising a human voice is both a crit-
ical and difficult challenge. It has been discovered that combining neural networks
and fuzzy logic is a very efficient way to consistently identify unknown sounds.
It is the most important sub-discipline of soft computing. It has been used in the
investigation process due to its significance. Artificial neural networks can be used
to include human learning through mathematical operations and design. Due to a
lack of information on a particular technique, we may meet uncertainty in decision-
making processes in real-world applications. Fuzzy logic can be used to tackle this
problem because it deals with uncertainty. Therefore, when insufficient information
exists in a human voice, the process of recognising the voice is carried out by
assigning membership values to the components of the process. A neural network
informs the human brain about performance based on actions such as learning,
reasoning, and adjusting, whereas fuzzy logic deals with uncertainty by incorpo-
rating a human approach to comprehending linguistic variables. The Neuro-Fuzzy
System (NFS) is developed by combining these two key fields and has been used in a
variety of applications. As a result, it’s a learning design hybrid with fuzzy reasoning.
In NFS, fuzzy logic (FL) handles IF–THEN rules, while the neural network (NN)
decides parameter values. FL will perform well for diverse types of noise during
speech recognition because it is a multi-valued logic [11]. An artificial neural
network (ANN) is, mathematically, a condensed representation of a brain-like
system: a network of distributed parallel computing. The NN's greatest
strength is its adaptability. The NN adjusts its weights automatically to optimise
the system's behaviour as a pattern recogniser, decision-maker, system controller,
predictor, and in other roles. Even if the system or its control alters over time, the NN's
adaptability will result in strong performance or allow the system to operate well.
Another advantage of NNs over analytical development by a designer is their
ability to learn from examples. Researchers are interested in NNs because of their
capacity to frame robots with biological organism awareness. NN has been employed
in principle biological computations for its judgement or intuition. In 1965, Zadeh
invented fuzzy sets as a way to represent and use imprecise data. In knowledge-based
systems, interpretational morphology offered by FL will be able to approximate the
capacities of human thinking. Thinking and learning are cognitive processes of the
human mind that contain uncertainty in their nature, and this uncertainty can be
captured properly by fuzzy logic and mathematical models. The
Fuzzy Logic approach is a sophisticated mathematical branch that provides control
solutions. Human common sense knowledge is inherently imperfect and hazy. The
first-order logic and possibilistic theory methods give a useful theoretical frame-
work. Therefore, a system whose mathematical model is challenging to derive can be
handled easily with FSs. With imprecise information, the decision-making process is quite
possible using FL. Cognitive uncertainties can be dealt with by neuro-fuzzy networks.
2 Literature Review
Somarathi and Vamshi [1] employed a neural network to solve the problem of assem-
bling a fuzzy logic controller in a novel method. Guz and Guney [2] looked into the
advantages and disadvantages of creating fuzzy rule bases for NFSs. Kumari and
Sunita [3] showed that a neuro-fuzzy integrated methodology is ideal for detecting
cardiac problems. To recognise the pattern, [4] developed a Neuro-fuzzy algo-
rithm. Vaidhehi [5] used the Sugeno-type ANFIS model to present a way of
constructing a web-based neuro-fuzzy advising system. Petchinathan et al. [6] used
a Local Linear Model Tree and an ANFIS to build and regulate a pH neutralisation
procedure. ANFIS was utilised by Ramesh et al. [7] to recover temperature and evap-
oration portraits up to 10 km over the hot station Gadanki. ANFIS was studied by
Dragomir et al. [8] as a scenario for predicting and controlling the energy produced
by Renewable Sources. Junior et al. [9] investigated the application of NFS for series
design and pricing estimation. Chauduri et al. [10] focused on mental health and the
use of soft computing and neuro-fuzzy techniques to provide a better way of identi-
fying an illness using various tools and approaches. Maskara et al. [11] demonstrated
that, in the presence of noise in attention and uncertainty in disease diagnosis, intel-
ligent techniques such as ANN and ANFIS have a stable behaviour. An ANFIS
for anticipating surface roughness in end milling was presented by Markopoulos
et al. [12]. Shaabani et al. [13] employed a hybrid strategy in ANFIS to identify a
disease, combining Back Propagation and Least Square Error, and exhibiting fuzzy
systems’ linguistic strength and neural networks’ quantitative capabilities. Mathur
et al. [14] employed ANFIS to predict in-socket continuous limb temperature and
compared expected and actual data. For the development of intelligent trustworthy
and reclamation robots, [15] offered a combined technique employing ANF and
Bayesian procedure to achieve rapid and proper choice, as well as to calculate and
adapt its own performance. Sahin and Erol [16] created a model that used NN
and ANFIS to anticipate soccer game attendance percentages. Mamak et al. [17]
compared ANFIS with the FAO-56 formula using mean square error and mean absolute
error and found that ANFIS accurately forecasted daily evapotranspiration.
Hadroug et al. [18] employed ANFIS to regulate the speed and fatigue temperature
of a gas cylinder in order to achieve optimal performance. Pradeep et al. [19] used an
ANFIS-based UPQC to reduce current and voltage exaggeration at the distribution
system’s consumer end. Atsalakis [20] used ANFIS and NN to offer and confer two
data-driven models for estimating the ailment of professional welders. An et al. [21]
focused a study on using ANFIS to calculate and determine lost information in data,
as well as using Fuzzy DE to deal with differential equations while missing infor-
mation in equations. Wending [22] explained hybrid neuro-fuzzy systems. Vani
and Anusuya [23] detailed the review of fuzzy speech recognition.
3 Basic Concepts
ANNs combine mathematical behaviour and algorithms with the way humans learn,
and they can learn to do tasks depending on training data. The vast majority
of the neural network must be taught. To perform better as pattern recognisers,
decision-makers, system controllers, predictors, and other functions, they automati-
cally modify their weights. Due to its adaptability, the ANN can operate effectively
even when the system or environment changes over time. During learning, an ANN can
establish its own representation of the received information, which is the self-organisation
property of ANNs. It can carry out parallel computation using specially designed hardware
devices and thus operate in real time. Partial elimination of a network leads only to a
gradual decline of the corresponding performance; even when there is network damage, an
ANN can maintain its capabilities, which gives it fault tolerance.
The evolution of fuzzy logic was motivated by the need to represent common-sense
knowledge, which is naturally imprecise and non-categorical. In this case, knowl-
edge is understood as a fuzzy constraint on a set of variables, and decision-making is
achievable in uncertain situations. Fuzzification is possible for any system. A system
with fuzzy logic is called a fuzzy system. Such a system is suitable for reasoning with
uncertainty as well as for systems whose mathematical design is difficult.
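To make the idea of fuzzification concrete, the following minimal Python sketch (an illustration added here, not part of the original chapter; the triangular membership function and the formant-frequency terms are assumptions chosen for the example) maps a crisp input to membership degrees in three linguistic terms.

def triangular(x, a, b, c):
    # Membership degree of x for a triangular fuzzy set with feet a, c and peak b.
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

# Fuzzify a hypothetical formant frequency of 620 Hz against three terms.
freq = 620.0
memberships = {
    "low": triangular(freq, 0, 300, 700),
    "medium": triangular(freq, 300, 700, 1100),
    "high": triangular(freq, 700, 1100, 1500),
}
print(memberships)   # {'low': 0.2, 'medium': 0.8, 'high': 0.0}

The crisp value thus belongs partly to "low" and mostly to "medium", which is exactly the kind of graded, uncertain description that fuzzy reasoning works with.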
NNs can be employed only when training data are available, and one cannot easily interpret
the solution obtained from the learning process: most NNs are designed as black
boxes, so the final result cannot be described in terms of rules. The learning process
is initiated without any prior knowledge, and thus the network must learn from scratch;
this takes a long time, and there is no assurance of success. In fuzzy logic, on the other hand,
it is difficult to establish a model of a fuzzy system, and fine-tuning and repeated
refinement are needed before appropriate membership values are obtained. It can be used
only if knowledge about the answer is available in the form of linguistic if–then rules.
Every intelligent method has specific computational properties, such as learning ability
and explainability of decisions, for particular real-world problems. NNs do well
in pattern recognition, but they fail to explain how they reach a decision,
whereas fuzzy logic systems do well in explaining how they reach a
decision but cannot pick up the rules for the decision process automatically. Because
many complex domains contain a variety of peculiar component difficulties and may
necessitate multiple sorts of processing, hybrid systems are often quite useful for a
variety of application domains. These constraints motivate bringing NN
and FL together to create a hybrid system named the Neuro-Fuzzy System (NFS).
Neural networks and fuzzy logic can be used to produce a system that can deal with
intellectual uncertainty in a human-like manner.
The Neuro-Fuzzy System is a realistic integration of the benefits of both neural and
fuzzy logic, allowing for the creation of more intelligent decision-making systems.
In this system, the neural network contributes massive parallelism, robustness, and
learning from data, or simply the learning ability to optimise the parameters, whereas
fuzzy logic contributes the modelling of uncertainty, the handling of uncertainty, and subjec-
tive knowledge, or simply the representation of knowledge in an intelligible
manner. The NFS provides the specific merits required by the corresponding application.
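To make the neuro-fuzzy combination concrete, the following minimal Python sketch (an illustration under assumed parameters, not the authors' implementation) uses two Gaussian fuzzy sets as rule antecedents and a least-mean-squares gradient step, playing the role of the neural learning, to tune the crisp consequents of a zero-order Sugeno-style rule base.

import numpy as np

def gaussian(x, c, s):
    # Membership degree of x in a Gaussian fuzzy set with center c and width s.
    return np.exp(-((x - c) ** 2) / (2.0 * s ** 2))

centers = np.array([0.25, 0.75])      # assumed centers of the "low"/"high" sets
widths = np.array([0.2, 0.2])         # assumed widths
consequents = np.zeros(2)             # one crisp consequent per rule, tuned below

def predict(x):
    w = gaussian(x, centers, widths)  # rule firing strengths
    w_norm = w / w.sum()              # normalised firing strengths
    return float(np.dot(w_norm, consequents)), w_norm

X = np.linspace(0.0, 1.0, 21)         # toy data approximating the mapping y = x
Y = X.copy()

lr = 0.5
for _ in range(200):                  # LMS updates stand in for NN learning
    for x, y in zip(X, Y):
        y_hat, w_norm = predict(x)
        consequents += lr * (y - y_hat) * w_norm

print(predict(0.6)[0])                # roughly follows the target y = x

The fuzzy part supplies the interpretable rule structure, while the gradient updates supply the learning ability, which is the division of labour described above.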
4 Experimental Results
The neuro-fuzzy network implementation was done using simulation. Using the
equations developed in the previous part, the simulation was created in the C program-
ming language and assessed using numerous conventional data sets. Vowel, one of
a number of data sets used as neural network benchmarks, will be the application
problem used as the testbed for this study. It is used to recognise the eleven vowel
sounds from various speakers without regard to the speaker. The Vowel data set
utilised in this study was originally compiled by Deterding for a “non-connectionist”
5 Conclusion
Numerous applications of fuzzy theory have proved successful. This study demon-
strates how it can be applied to boost neural network efficiency. Fuzziness has a lot
of benefits, and one of them is that it can deal with imperfect data. Although neural
networks are well renowned for being great classifiers, the quantity and calibre of
the training set can have an adverse effect on their performance. The problem class
of speaker-independent speech recognition is one illustration of how neuro-fuzzy
methods are beneficial. As mentioned in the previous section, simulation experiments
made use of the Vowel data collection. This well-known data collection has been used
in numerous studies with dismal outcomes. As a result, one researcher stated that
“bad outcomes seem to be inherent to the data”. This is accurate to some extent.
This issue with subpar performance lends greater support to effective approaches.
Speech recognition is a good fit for the neuro-fuzzy model. The combination can help
counter the black-box character of the NN and the challenge of selecting adequate
membership values for fuzzy systems. It can also
combine a model's learning efficiency with the prior knowledge needed to specify the problem;
therefore, neuro-fuzzy models are only suitable for application areas where interpre-
tation is required. In the future, we intend to develop the idea for speech recognition using a
neutrosophic neural network system.
References
1. Somarathi, S., & Vamshi, S. (2013). Design of NEURO fuzzy systems. International Journal
of Information and Computation Technology, 3(8), 819–824.
2. Guz, Y. K., & Guney, I. (2010). Adaptive neuro-fuzzy inference system to improve the
power quality of variable-speed wind power generation system. Turkish Journal of Electrical
Engineering & Computer Sciences, 18(4), 625–645.
3. Kumari, N., Sunita, S. (2013). Comparision of ANNs, fuzzy logic and neuro-fuzzy integrated
approach for diagnosis of coronary heart disease: A survey. International Journal of Computer
Science and Mobile Computing, 2(6), 216–224.
4. Balbinot, A., & Favieiro, G. (2013). A neuro-fuzzy system for characterization of arm
movements. Sensors, 13, 2613–2630.
5. Vaidhehi, V. (2014). A framework to design a web based neuro fuzzy system for course advisor.
International Journal of Innovative Research in Advanced Engineering, 1(1), 186–190.
6. Petchiathan, G., Valarmathi, K., Devaraj, D., & Radhakrishnan, T. K. (2014). Local linear model
tree and neuro-fuzzy system for modelling and control of an experimental pH neutralization
process. Brazilian Journal of Chemical Engineering, 31(2), 483–495.
7. Ramesh, K., Kesarkar, A. P., Bhate, J., Ratnam, M. V., Jayaraman, A. (2015). Adaptive
neuro-fuzzy inference system for temperature and humidity profile retrieval from microwave
radiometer observations. Atmosphere Measurement Techniques, 8, 369–384.
8. Dragomir, O. E., Dragomir, F., Stefan, V., Minca, E. (2015) Adaptive neuro-fuzzy inference
systems as a strategy for predicting and controlling the energy produced from renewable
sources. Energies, 8, 13047–13061.
9. Junior, C. A. A., Silva, L. F. D., Silva, M. L. D., Leite, H. G., Valdetaro, E. B., Donato, D.
B., & Castro, R. V. O. (2016). Modelling and forecast of charcoal prices using a neuro-fuzzy
system. Cerne, 22(2), 151–158.
10. Chauduri, N. B., Chandrika, D., Kumari, D. K. (2016) A review on mental health using
soft computing and neuro-fuzzy techniques. International Journal of Engineering Trends and
Technology, 390–394.
11. Maskara, S., Kushwaha, A., Bhardwaj, S. (2016). Adaptive neuro-fuzzy system for cancer.
International Journal of Innovative Research in Computer and Communication Engineering,
4(6), 11944–11948.
12. Markopoulos, A. P., Georgiopoulos, S., Kinigalakis, M., & Manolakos, D. E. (2016). Adaptive
neuro-fuzzy inference system for end milling. Journal of Engineering Science and Technology,
11(6), 1234–1248.
13. Shaabani, M. E., Banirostam, T., & Hedayati, A. (2016). Implementation of neuro fuzzy system
for diagnosis of multiple sclerosis. International Journal of Computer Science and Network,
5(1), 157–164.
14. Mathur, N., Glesk, I., & Buis, A. (2016). Comparision of adaptive neuro-fuzzy inference system
(ANFIS) and Gaussian processes for machine learning (GPML) algorithms for the prediction
of skin temperature in lower limb prostheses. Medical Engineering and Physics, 38(2016),
1083–1089.
15. Hernandez, U. M., Solis, A. R., Panoutsos, G., Sanij, A. D. (2017). A combined adaptive neuro-
fuzzy and Bayesian for recognition and prediction of gait events using wearable sensors. IEEE
International Conference on Fuzzy Systems, 34–34.
16. Sahin, M., & Erol, I. R. (2017). A comparative study of neural networks and ANFIS for
forecasting attendance rate of soccer games. Mathematical and Computer Applications, 22(43),
1–12.
17. Mamak, M., Unes, F., Kaya, Z. Y., Demirci, M. (2017). Evaporation prediction using adap-
tive neuro-fuzzy inference system and Penman FAO. In “Environmental Engineering” 10th
International conference vilnius gediminas technical university (pp. 1–5).
18. Hadroug, N., Hafaifa, A., Guemana, M., Kouzou, A., Salam, A., & Chaibet, A. (2017). Heavy
duty gas turbine monitoring based on adaptive neuro-fuzzy inference system: Speed and exhaust
temperature control. Mathematics-in-Industry Case Studies, 8(8), 1–20.
19. Pradeep, M., Padmaja, V., & Himabindu, E. (2018). Adaptive neuro-fuzzy based UPQC in a
distributed power system for enhancement of power quality. Helix, 8(2), 3170–3175.
20. Atsalakis, G. S. (2018). Applications of a neuro-fuzzy system for welders’ indisposition
forecasting. Journal of Scientific and Engineering Research, 5(4), 171–182.
21. An, V. G., Anh, T. T., Bao, P. T. (2018). Using genetic algorithm combining adaptive neuro-fuzzy
inference system and fuzzy differential to optimizing gene. MOJ Proteomics Bioinformatics,
7(1), 65–72
22. Wending, L. (2022). Implementing the hybrid neuro-fuzzy system to model specific learning
disability in special University education programs. Journal of Mathematics, 2022:6540542
23. Vani, H., Anusuya, M. (2020). Fuzzy speech recognition: a review. International Journal of
Computer Applications, 177(47), 39–54
A Machine Learning Model for Predicting COVID-19
L. Ibeh and S. Mohamud
Abstract The pandemic has had a significant impact on both public health and
the global economy, leading to widespread lockdowns and disruptions to daily life.
Despite the rollout of vaccines, the virus continues to spread and the situation remains
fluid, with new variants emerging and the threat of further waves of infections.
Efforts are underway to keep the infection from spreading and to find treatments
and cure. The aim of this paper is to demonstrate the usefulness of machine learning
techniques and algorithms in recognizing and predicting COVID-19 instances. The
study improved the understanding of the mechanisms that lead to the spread of
COVID-19 as well as the efficacy of various treatment methods. Our findings suggest
that machine learning can be useful in recognizing, investigating, and forecasting
COVID-19 situations. Machine learning techniques and algorithms can help address
these gaps and improve our ability to respond to the pandemic. The use of supervised
learning algorithms especially Random Forest demonstrated favorable outcomes,
achieving a testing accuracy of 92.9%. The study concluded that predictive models
are necessary in the fight against COVID-19 and can lead to better public health
outcomes. In the future, recurrent supervised learning is expected to yield even better
accuracy.
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_2
1 Introduction
million and deaths exceeding 1.1 million [1]. The pandemic has put immense pres-
sure on medical systems worldwide, leading to shortages in hospital beds, medical
equipment, and trained healthcare workers. Effective screening and diagnosis are
crucial for mitigating the burden on healthcare systems and for making timely clin-
ical decisions. Tests for reverse transcriptase polymerase chain reaction (RT-PCR),
the most validated diagnostic test for COVID-19, have been in short supply in many
developing countries [1–3].
To meet the challenges posed by the COVID-19 pandemic, researchers have devel-
oped prediction models that aim to help medical personnel in prioritizing patients and
assessing the risk of infection [1]. These models take into consideration factors such
as confirmed cases and death rates. This technique has the potential to improve the
COVID-19 patients’ planning, treatment, and reported results. In this study, we intro-
duce a machine learning algorithm that identifies a probable SARS-CoV-2 positive
outcome [1]. Our model was built using data from all persons tested for SARS-CoV-2
throughout the epidemic year (2020). As a consequence, our approach may be used
to efficiently filter and prioritize SARS-CoV-2 testing in the general population [4].
2 Methods
The following packages and libraries are required for the project: Datetime, Numpy,
Pandas, SciPy, Scikit Learn, and Jupyter Notebook.
Before getting the data, we need to define measurable and quantifiable goals. Defining
measurable and quantifiable goals prior to obtaining data helps ensure that the data
collected are relevant and useful for achieving the desired outcomes [5]. Given that,
our goal here is to predict whether COVID-19 cases are going to increase or not, using a random
forest model. The data utilized in this research were obtained from Kaggle under the
name “2019 Corona Virus Dataset”. It was developed using information from many
sources, including the World Health Organization and Johns Hopkins University (26).
Additionally, considering the availability of complete data, our main concentration
was on 12 countries, which include Belgium, China, France, Germany, India, Iran,
Italy, Pakistan, Spain, Turkey, US, and the United Kingdom.
The COVID-19 data are organized into columns, including date, string, and numerical
data types. Additionally, there are categorical variables. To prepare the data for the
machine learning model, label encoding was performed on the categorical variables
[1]. This involves assigning a numerical value to each unique categorical value in
the column [5]. The data contain multiple missing values, which can result in errors
when used as input. To resolve this issue, the missing values are filled with “NA”.
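As a brief illustration of these two steps, the sketch below applies label encoding and the "NA" fill to a tiny hypothetical frame; the column names are assumptions and not necessarily those of the Kaggle file.

import pandas as pd
from sklearn.preprocessing import LabelEncoder

df = pd.DataFrame({
    "Country_Region": ["Belgium", "China", None, "India"],   # hypothetical columns
    "ConfirmedCases": [500.0, 1200.0, 80.0, None],
})

df = df.fillna("NA")                  # fill missing values with "NA", as described above

le = LabelEncoder()                   # assign a numeric code to each unique category
df["Country_Region"] = le.fit_transform(df["Country_Region"].astype(str))
print(df)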
When it comes to the impact of COVID-19 on countries, exploratory data analysis can provide
an overview of how the virus has spread and the measures taken by different govern-
ments to control its spread [6]. Figure 1 shows an overview of number of cases in
the countries represented below [7]. Figure 2 shows the confirmed cases of Belgium,
China, France, Germany, India, Iran, Italy, Pakistan, Spain, Turkey, US, and the
United Kingdom [7]. Figures 3 and 4 show the number of daily cases and number of
daily new fatalities, respectively.
The model used in this study to predict the increase of COVID-19 was Random
Forest [8]. Because of its capacity to handle high dimensionality, non-linearity, and
complexity in data, Random Forest is a popular machine learning technique for
predicting outcomes [7]. It is an ensemble approach that combines numerous decision
trees to make predictions.
The random forest model's error is measured using the mean squared error (MSE):
MSE = \frac{1}{N}\sum_{i=1}^{N} (f_i - y_i)^2
where N is the number of data points, f_i is the model's output, and y_i is the actual
value for each data point.
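A quick numerical check of this formula with made-up values can be written as follows.

import numpy as np

f = np.array([2.0, 4.0, 6.0])   # model outputs f_i
y = np.array([1.0, 4.0, 8.0])   # actual values y_i
mse = np.mean((f - y) ** 2)     # (1/N) * sum of squared differences
print(mse)                      # (1 + 0 + 4) / 3 ≈ 1.67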
In our case, we predicted whether COVID-19 cases will increase in the coming months. This
is clearly a scoring problem, which means predicting or estimating the probability of
an occurrence [3, 7].
The modeling process started with importing scikit-learn and the label encoder, as
shown below:

from sklearn.preprocessing import LabelEncoder
LE = LabelEncoder()

Then we selected the target variable, the one we are going to predict:

target = 'ConfirmedCases'

Then we imported the Random Forest classifier and defined our model:

from sklearn.ensemble import RandomForestClassifier
rfc = RandomForestClassifier(n_estimators=10, max_samples=0.8, random_state=1)

Our data were divided into training and testing sets. Before separating the data, we
ensured that the training and testing sets have the same class balance as the dataset
[5]. The models are trained using 80% of the data and tested with 20%. Training the
model:

rfc.fit(train_df[features], train_df[target])

The next step was to make predictions based on the features from the test data:

predictions2 = rfc.predict(test_df[features])
predictions = predictions2[0:500]

Finally, we created a dataframe to store the target columns:

Final_work = pd.DataFrame({'ForecastId': test_df['ForecastId'],
                           'ConfirmedCases': predictions,
                           'Fatalities': predictions2})
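Putting the steps above together, a minimal end-to-end sketch might look as follows; the synthetic frame, the feature names, and the stratified 80/20 split are assumptions made for illustration, since the exact columns of the Kaggle file are not reproduced in the chapter.

import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-in for the "2019 Corona Virus Dataset" (assumed columns).
df = pd.DataFrame({
    "ForecastId": range(1, 201),
    "DayOfYear": range(60, 260),
    "CountryCode": [i % 12 for i in range(200)],
    "ConfirmedCases": [i // 20 for i in range(200)],   # target, binned case counts
})

features = ["DayOfYear", "CountryCode"]
target = "ConfirmedCases"

# 80/20 split keeping the same class balance in both parts, as noted above.
train_df, test_df = train_test_split(df, test_size=0.2, random_state=1,
                                     stratify=df[target])

rfc = RandomForestClassifier(n_estimators=10, max_samples=0.8, random_state=1)
rfc.fit(train_df[features], train_df[target])

predictions = rfc.predict(test_df[features])
final_work = pd.DataFrame({"ForecastId": test_df["ForecastId"],
                           "ConfirmedCases": predictions})
print(final_work.head())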
3 Results
From Fig. 2, Belgium has a large number of confirmed cases from mid-March to mid-April,
with more than 500,000. In addition, China managed to stabilize the pandemic,
while other countries show an increasing trend.
Figure 3 shows the trend in the number of confirmed daily cases. China started with
an increase and reached a peak on February 15th with more than 10,000 cases, after which
the cases started to decline. The United States showed an increase in daily cases from
March 17, starting with more than 5,000 daily cases and rising all the way to 35,000 cases.
Other countries started low, and the trend started to increase from the middle of March
to the end. Figure 4 shows the number of daily new fatalities. Countries like Italy,
Spain, the US, France, the United Kingdom, and Belgium showed an increasing trend at the end
of March and beginning of April.
4 Discussion
From the outcome of the predictive model shown in Table 1, the Random Forest model
indicated that confirmed cases and the fatality rate will continue to rise in the
coming months. This prediction was based on various factors such as demographic
data, past trends, and other relevant variables.
Figure 2 shows the confirmed cases of Belgium, China, France, Germany, India,
Iran, Italy, Pakistan, Spain, Turkey, US, and the United Kingdom. The findings show
that close proximity with a person diagnosed with COVID-19 was a significant
factor. This supports the high level of transmission of the virus and emphasizes the
significance of maintaining social distancing measures [1, 10–12]. Belgium had a large
number of confirmed cases from mid-March to mid-April, with more than 500,000,
while China stabilized the cases and other countries showed an increasing trend.
The reason why Belgium had a large number of confirmed cases from mid-March
to mid-April with over 500,000 cases could be due to various factors such as high
levels of community transmission, inadequate measures for controlling the spread
of the virus, and a higher rate of testing that revealed more positive cases. As for
China, it managed to stabilize the situation by implementing strict measures such as
lockdowns, widespread testing, and contact tracing. This, along with the country’s
vast resources and infrastructure, helped in containing the spread of the virus. In
5 Conclusion
outbreaks and hot spot areas, which can help allocate resources more effectively and
inform the development and implementation of effective prevention measures.
Acknowledgements Sincere gratitude to my Professor, Dr. Lawrence Ibeh for guiding me through
this entire project.
References
1. Zoabi, Y., Deri-Rozov, S., & Shomron, N. (2021). Machine learning-based prediction of
COVID-19 diagnosis based on symptoms. Npj Digital Medicine, 4(1), 3. https://doi.org/10.
1038/s41746-020-00372-6
2. Iwendi, C., Bashir, A. K., Peshkar, A., Sujatha, R., Chatterjee, J. M., Pasupuleti, S., Mishra, R.,
Pillai, S., & Jo, O. (2020). COVID-19 patient health prediction using boosted random forest
algorithm. Frontiers in Public Health, 8, 357. https://doi.org/10.3389/fpubh.2020.00357
3. Ožiūnas, D. O. (2021). Identifying severity of COVID-19 in patients using machine learning
methods. University of Twente.
4. Babukarthik, R. G., Adiga, V. A. K., Sambasivam, G., Chandramohan, D., & Amudhavel, J.
(2020). Prediction of COVID-19 using genetic deep learning convolutional neural network
(GDCNN). IEEE Access: Practical Innovations, Open Solutions, 8, 177647–177666. https://
doi.org/10.1109/ACCESS.2020.3025164
5. Zhang, S., Zhang, C., & Yang, Q. (2003). Data preparation for data mining. Applied Artificial
Intelligence: AAI, 17(5–6), 375–381. https://doi.org/10.1080/713827180
6. Yan, L., Zhang, H-T., Goncalves, J., Xiao, Y., Wang, M., Guo, Y., Sun, C., Tang, X., Jing, L.,
Zhang, M., Huang, X., Xiao, Y., Cao, H., Chen, Y., Ren, T., Wang, F., Xiao, Y., Huang, S.,
Tan, X., Yuan, Y. (2020). An interpretable mortality prediction model for COVID-19 patients.
Nature Machine Intelligence, 2(5), 283–288. https://doi.org/10.1038/s42256-020-0180-7
7. The Class of AI. (n.d.). Covid_19_Analysis_Week4.ipynb at master · the classofai/COVID_19.
8. Schott, M. (2019). Random Forest Algorithm for machine learning - capital one tech -
medium. Capital One Tech. https://medium.com/capital-one-tech/random-forest-algorithm-
for-machine-learning-c4b2c8cc9feb
9. Li, Y., Zhang, C., & Zhang, S. (2003). Cooperative strategy for web data mining and cleaning.
Applied Artificial Intelligence: AAI, 17(5–6), 443–460. https://doi.org/10.1080/713827173
10. Pasupuleti, R. R. (2021). Rapid determination of remdesivir (SARSCoV-2 drug) in human
plasma for therapeutic drug monitoring in COVID-19-Patients. Process Biochemistry, 102(3),
150–156.
11. Scarpone, C., Brinkmann, S. T., Große, T., Sonnenwald, D., Fuchs, M., & Walker, B. B. (2020).
A multimethod approach for county-scale geospatial analysis of emerging infectious diseases:
A cross-sectional case study of COVID-19 incidence in Germany. International Journal of
Health Geographics, 19(1), 32. https://doi.org/10.1186/s12942-020-00225-1
12. National center for biotechnology information. (n.d.). Nih.gov. Retrieved February 22, 2023,
from https://www.ncbi.nlm.nih.gov/
13. Frontiers. (n.d.). Frontiersin.org. Retrieved February 22, 2023, from https://www.frontiersin.
org/
14. Moulaei, K., Shanbehzadeh, M., Mohammadi-Taghiabad, Z., & Kazemi-Arpanahi, H. (2022).
Comparing machine learning algorithms for predicting COVID-19 mortality. BMC Medical
Informatics and Decision Making, 22(1), 2. https://doi.org/10.1186/s12911-021-01742-0
20 L. Ibeh and S. Mohamud
15. Podder, P., Bharati, S., Mondal, M. R. H., & Kose, U. (2021). Application of machine learning
for the diagnosis of COVID-19. In U. Kose, D. Gupta, V. H. C. de Albuquerque, & A. Khanna
(Eds.), Data Science for COVID-19 (pp. 175–194). Elsevier.
16. Prakash, K. B. (2020). Analysis, prediction and evaluation of COVID-19 datasets using machine
learning algorithms. International Journal of Emerging Trends in Engineering Research, 8(5),
2199–2204. https://doi.org/10.30534/ijeter/2020/117852020
Thyroid Disease Prediction Using a Novel
Classification Enhancing MLP
and Random Forest Algorithms
Abstract It has just become apparent how important it is to anticipate thyroid sick-
ness. Thyroid problems impact people all over the world. This disease has become
a significant problem in India as well. The disease thyroiditis is one of them that
is growing as people’s lives change, with several study findings estimating that 42
million Indians experience "thyroid problems." Thyroid illness affects individuals
rather often. As a result, thyroid disease prediction is currently necessary. This study
used a brand-new hybrid categorization to forecast thyroid illness. We anticipate that
this study will provide a helpful overview of recent findings in this area and show
how to apply Random Forest methodologies as a tool for thyroid ailment prediction
innovations. The multi-layer perceptron (MLP) technique as well as the random
forest method are used in the hybrid classification. The findings clearly show that
our hybrid approach is superior, and as a result, it is advised for this task in thyroid
ailment prediction.
D. Akila (B)
Department of Computer Applications, Saveetha College of Liberal Arts and Sciences, SIMATS,
Chennai, India
e-mail: [email protected]
B. Sakar
Department of Computer Science and Engineering, JIS College of Engineering, Kalyani, India
S. Adhikari
School of Engineering, Swami Vivekananda University, Kolkata, India
e-mail: [email protected]
R. Bhuvana
Department of Computer Science, Agurchand Manmull Jain College, Chennai, India
e-mail: [email protected]
V. R. Elangovan
Department of Computer Applications, Agurchand Manmull Jain College, Chennai, India
D. Balaganesh
Berlin School of Business and Innovation, Berlin, Germany
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_3
1 Introduction
Thyroid disorders have become more common in recent years. The thyroid gland
plays a pivotal role in regulating metabolism. Hyperthyroidism and hypothyroidism
are two of the most common conditions brought on by abnormalities in the thyroid
gland, and they are diagnosed in a sizable number of people every year. Hypothyroidism
arises from insufficient levels of the thyroid hormones levothyroxine (T4) and
triiodothyronine (T3), whereas hyperthyroidism arises from an excess of them.
Several techniques are discussed in the literature as potential means of identifying
thyroid disease. Proactively diagnosing thyroid illness is essential for timely patient
management, saving lives, and reducing mortality and healthcare expenditures [1, 2].
Heartbeat, body temperature, and, most importantly, metabolism—the body’s
usage and absorption of nutrients—are all regulated by thyroid hormones. Major
problems may develop when the thyroid gland functions excessively (hyper-
thyroidism with high hormone levels) or inadequately (hypothyroidism with
low hormonal changes). Additionally, the thyroid gland may become inflamed
(thyroiditis) or expand as a result of one or more swellings that develop there (nodules,
multinodular goiter). These nodules may include cancerous tumors in some cases.
Because of this, treating thyroid problems is a crucial concern [3]. According to a few
reports, on average 38,000 persons worldwide suffer from congenital hypothyroidism.
Nearly 42 million people in developing nations like India suffer from thyroid
illness, and Indians appear to experience it more frequently, as seen in the ratio of one in
2,640 reported for Mumbai. More than 25,000 hospitals worldwide now gather
patient data in a variety of ways. In the conventional approach, statistical testing and
traditional analysis are used to conduct clinical and medical investigations [4].
Analyzing thyroid illness is one of the most difficult and demanding tasks, since it
requires a great deal of knowledge and information. If the illness is discovered at an early
stage, the patient can receive proper care from the doctor. Specialist examination or
multiple blood tests are the usual ways to diagnose thyroid disease. Thyroid hormone
replacement is a safe and effective medication that, with early diagnosis and treatment,
helps manage adverse effects. The detection of diseases
with high accuracy is currently one of the essential challenges in medical sciences that
require innovation. Many cutting-edge tactics and computational frameworks have
been created in this decade to promote their operations [5]. Artificial intelligence has
already been extensively employed in recent years for a variety of purposes, including
2 Related Works
The medical field creates a lot of complicated data that is hard to handle. In the
past few years, machine learning techniques have been used more and more to study
and classify different illnesses. In this part, we looked into numerous methods for
anticipating thyroid problems. The many machine learning methods employed in the
field of illness prediction are shown in this section.
The thyroid is a crucial organ that produces several hormones that the body uses for
a variety of crucial functions, according to Asif et al. [10]. So, thyroid illness threatens
the health of every part of the body, including the endocrine, circulatory, neurolog-
ical, respiratory, digestive, muscular, and reproductive systems. Heart failure, losing
consciousness, and mental illnesses are frequent events that can all result in death.
Because of this, good clinical diagnosis and early identification of thyroid illnesses
help maintain the physiological equilibrium of the human body and potentially save
countless lives. They explored several machine learning techniques for the early
diagnosis and prediction of thyroid illness in their study, and they recommended
the multilayer perceptron (MLPC), which had the greatest accuracy of 99.70%. It
may thus be used realistically, which will help medical professionals detect thyroid
problems early. Thus, their suggested approach can aid in the fight against thyroid
illness and promote human welfare.
The thyroid dataset was studied by Yadav et al. [11] using a variety of machine
learning classifiers, including decision trees, random forest trees, additional trees,
and bagged ensemble models. Using bagging ensemble approaches, the seed value
of 35 and the n-fold value of 10 have been shown to have the maximum accuracy.
Therefore, when compared to the other three classification methods, the bagging
ensemble methodology is the best.
According to Abbad Ur Rehman et al. [12], early disease diagnosis and identification are crucial
for human survival. Specific and reliable identification and detection have become easier
to achieve because of machine learning algorithms. Due to the symptoms of thyroid
illness being confused with those of other conditions, diagnosis is difficult. The three
newly added characteristics in the thyroid dataset have a beneficial influence on
classifier performance, and the findings reveal that they outperform previous studies
in terms of accuracy. Nave Bayes obtained 100% accuracy in all three portions of
the experiment after analyses of KNN, SVM, decision tree, logistic regression, and
Nave Bayes, whereas logistic regression achieved 100 and 98.92% accuracy in L1-
and L2-based feature extraction, respectively. KNN also produced great results, with
a 97.84% accuracy rate and a 2.16% error rate. The benefits and resilience of the new
dataset are evident after analysis and would enable clinicians to obtain more exact
and accurate findings in less time.
A study that categorizes thyroid disorders into hyperthyroidism and
hypothyroidism was described by Salman & Sonuc [13], where machine learning
algorithms were used to classify the illnesses and several models were developed.
A total of 16 inputs and 1 output were used in the first model, and the random forest
method produced an accuracy of 98.93%, the highest among the algorithms. Based
on prior research, the following features were left out of the second model:
thyroid hormone replacement, hypothyroidism, and hyperthyroidism. Here, they
found that certain algorithms' accuracy was retained while that of others improved.
The Naive Bayes method improved the model to a ratio of 90.67, and the MLP
algorithm's highest accuracy was 96.4%.
According to Jajroudi et al. [14], one of the most important considerations in
scheduling therapy for cancer patients is survival (6). To predict survival, data mining
techniques like decision trees, ANNs, regression, and others are available. The ANN
model was applied to survival analysis recently. According to reviews of other prior
studies, ANN has demonstrated promising results in the prediction of lung, breast,
and esophageal cancer survival in their study. Regression and ANN were used to
forecast thyroid cancer survival. In their investigation, MLP effectively served as an
appropriate technique for predicting survival in thyroid cancer patients. It is advised
to employ additional ANN techniques, such as genetic processes with more precise
data, to get better outcomes. Due to a lack of supporting data, certain useful aspects
were left out of their analysis. A more accurate model might be used to depict them.
For the assessment of thyroid nodules, Ouyang et al. [15] examined three linear
and five nonlinear machine learning systems. Overall, the performance of the linear
and nonlinear methods was comparable. According to their findings, RF and k-
SVM, two nonlinear machine learning algorithms, performed marginally better than
competing techniques. Their machine learning technique may make it simpler to
diagnose malignant nodules since it is simple to use, repeatable, and inexpensive.
Multiple machine learning methods were created and verified by them for the predic-
tion of cancerous thyroid nodules. Many fine-needle aspirations (FNAs) detect nodules
with an acceptably low risk of cancer.
For the diagnosis of diseases, the ML method of Krishnamoorthi et al. [16] is consid-
ered advantageous. Early diagnosis and treatment benefit patients. In their research,
they investigated a handful of accurate machine learning (ML) classification
algorithms for the identification of diabetic patients. The classification problem is
expressed in terms of precision, and ML was applied to the PIDD data set. The
algorithm was trained, verified, and validated on the testing dataset. The results of their imple-
mentation demonstrate that the LR algorithm beats rival ML algorithms.
The results of association rule mining indicate that glucose and BMI are signifi-
cantly associated with diabetes. LR's ROC value was found to be 86%. In
the future, unstructured data will be taken into consideration, which is the study's
primary limitation. For the prediction of cancer, Parkinson's disease, cardiovascular
disease, and COVID-19, other healthcare areas may employ or recommend
the models.
Uddin et al. [17] looked at how well different machine learning methods predicted
diseases. Because clinical data and study focus varied so much, it was not possible
to compare studies on predicting disease until a standard baseline for the dataset and
scope was set. They only compared studies that used more than one machine learning
method to predict sickness with the same data. Even though there are differences in
how often and how well they work, the results show that there could be more than
one algorithmic family for predicting sickness.
Sreejith et al. [18] wrote about a system that lets users access the functions of a
healthcare management system whenever they want. Based on the user’s readings,
being able to predict cardiac illness lets patients get the help they need as soon as
possible. By giving the doctor the ability to examine the medical histories of diverse
patients, the quality of the medication provided by the doctor is also enhanced. Here,
the paper evaluates several methods and suggests using the random forest approach
to predict cardiac disease. They may include different sensor fusion techniques to
outperform wearable technology. It will result in the inclusion of different health
metrics [19].
According to Rahman et al. [20], the main goal of their research is to develop a
system that can accurately diagnose patients with chronic liver infections using six
distinct supervised classifier models. They investigated how each classifier performed
when given patient data and found that the LR classifier provided the highest
accuracy (75% based on the F1 measure) in predicting liver disease, while the NB
classifier provided the lowest precision (53%). The decision support system and
diagnosis of chronic diseases will now use the best-performing classification technique.
The program can forecast liver infections in advance and provide health status advice.
In low-income countries without enough medical infrastructure or specialists, their
implementation can be unexpectedly beneficial. There are some implications from
their findings for future work in their field. More algorithms may be chosen
to create an ever-more accurate model of liver disease prediction, and performance
can be steadily enhanced. They have only examined a few well-known supervised
machine learning systems. Additionally, their work is poised to play a vital part
in medical research as well as provide therapeutic targets to prevent liver infection.
Several image processing methods, pattern matching strategies, and inferred machine
learning algorithms have been presented by Suseendran and his research group [21–
25] in order to improve precision, comparability, and performance. Singh et al. [26]
have given assessment methods for heart disease prediction utilizing soft computing
algorithms. In order to improve efficiency and effectiveness in healthcare, Rakshit
et al. [27] have suggested a variety of healthcare approaches based on the Internet of
Things.
Summary:
• For more accurate findings, classifiers with various KNN distance functions and
data augmentation approaches can be applied.
• By employing varied and sizable datasets for different illnesses, we may examine how different thyroid datasets influence the models and test them further.
3 Proposed Method
Preprocessing, feature selection, and classification are the three steps used in the suggested technique. Preprocessing is a crucial stage since the database is repetitive and noisy. By inspecting the data, we perform feature extraction, data combination to fill in missing values, and removal of redundant data, since poor-quality and redundant data would lead to inaccurate results. The feature selection process employs linear discriminant analysis. Additionally, classification methods such as KNN, SVM, MLP, and Random Forest are discussed; MLP and Random Forest make up the hybrid algorithm. The overall process of the proposed system is shown in Fig. 1.
(i) Data
We were able to obtain extensive information about thyroid hormone levels, and we use this information to classify diseases in our study. Deep learning techniques play an important role in the healthcare industry and help us diagnose and classify diseases, so they are used to address thyroid problems and other illnesses quickly and effectively. The information was collected from 1250 people, both males and females, whose ages ranged from 1 to 90 years. The data were obtained from outside hospitals as well as laboratories that specialize in analyzing and diagnosing diseases; the samples contain information on Indian citizens together with the data linked to thyroid problems. These samples include both healthy people and those with thyroid disease, covering hyperthyroidism and hypothyroidism. The information was obtained over a one- to four-month period with the main goal of employing machine learning techniques to categorize thyroid diseases.
The data collected included 17 variables or attributes, all of which were relevant to the study and were taken into consideration, for example, ID, age, gender, "thyroid hormones," "on anti-thyroid medicine," "sick," "during pregnancy," "thyroid surgery," "query hypothyroid," "query hyperthyroid," "TSH M," "TSH," "T3 M," "T3," "T4," and the category.
(ii) Preprocessing
In this step, outliers are removed and the data are standardized; a model is then developed using the processed data. Before applying a classifier to the dataset, the data must be properly preprocessed and organized, and should be handled carefully [16]. During this stage, inconsistent data are handled and eliminated to produce more precise results. The pre-processing method carefully inspects the data through analysis and identifies missing values, which are present in this data collection. Data preparation and cleaning are both part of the pre-processing process.
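As a hedged illustration of the preprocessing just described (outlier handling, missing-value imputation, standardization), the following Python sketch shows one way these steps could be wired together; the paper itself used MATLAB 2016a, and the percentile-based outlier rule and column handling here are assumptions rather than the authors' exact procedure.

```python
import pandas as pd
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates()                      # drop repetitive records
    num_cols = df.select_dtypes("number").columns
    # Clip extreme values to the 1st/99th percentiles (one possible outlier rule).
    low, high = df[num_cols].quantile(0.01), df[num_cols].quantile(0.99)
    df[num_cols] = df[num_cols].clip(low, high, axis=1)
    # Fill missing values with the column mean, then standardize each feature.
    df[num_cols] = SimpleImputer(strategy="mean").fit_transform(df[num_cols])
    df[num_cols] = StandardScaler().fit_transform(df[num_cols])
    return df
```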
(iii) Feature Selection Techniques
The Feature Selection Technique (FST) consistently improves classification accu-
racy while reducing computational expense. Additionally, FST removes unimportant
characteristics and makes machine learning less time-consuming. The following are
the feature selection methods that are employed:
Linear Discriminant Analysis (LDA): LDA is a supervised approach used to
extract the key features from a dataset. It is used to decrease computing costs and
prevent overfitting of the data. To do this, a feature space is projected onto a more
condensed, lower-dimensional space with the best class separability. In LDA, the axes
that maximize the partition among the various classes are given more consideration
[28].
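The LDA projection described above can be sketched as follows; the synthetic data stand in for the preprocessed thyroid features and labels, which are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Stand-in data: 17 features and 3 classes, mimicking the attribute count above.
X, y = make_classification(n_samples=200, n_features=17, n_classes=3,
                           n_informative=5, random_state=0)

# LDA projects the features onto at most (n_classes - 1) axes chosen to
# maximize separation between the classes.
lda = LinearDiscriminantAnalysis(n_components=2)
X_reduced = lda.fit_transform(X, y)
print(X_reduced.shape)   # (200, 2)
```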
(iv) Classification
(a) KNN
KNN is one of the earliest and most straightforward statistical learning techniques and classification algorithms. "K" refers to the number of nearest neighbors, which can be supplied explicitly in the object constructor or estimated using the upper bound made available by the stated value. Classifications for similar cases are comparable, and a new sample is classified by comparing it to each of the existing examples. When an unknown sample is received, the nearest neighbor technique searches the pattern space for the k training instances that are closest to it. Two distinct approaches are introduced to translate the distance between nodes into a weight, so that predictions can also draw on training samples situated farther away. The method has numerous advantages, including its user-friendliness and analytic tractability. Because it relies only on stored instances, the classifier is highly efficient and performs well in disease prediction, notably in HD prediction [29].
KNN is one of the supervised machine learning methods. It is commonly employed
in classification issues. KNN is frequently used to classify items according to the
distance or nearest measure, i.e., the separation between the item and all other objects
in the training set. The item is classified utilizing K-neighbors. The procedure is
executed before the positive integer K is defined. The Euclidean distance is widely
employed to determine the dimensions of various objects [16].
The following gives the computation for the Euclidean distance equation:
[
| k
|∑
Euclidean = | (Xi − Yi)2 (1)
i=1
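A minimal sketch of Eq. (1) and of a KNN classifier using that Euclidean metric is given below; the value k = 5 and the variable names X_reduced and y are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def euclidean(x: np.ndarray, y: np.ndarray) -> float:
    # Eq. (1): square root of the summed squared coordinate differences.
    return float(np.sqrt(np.sum((x - y) ** 2)))

print(euclidean(np.array([1.0, 2.0]), np.array([4.0, 6.0])))   # 5.0

# KNN with the Euclidean metric; k = 5 is an arbitrary illustrative choice, and
# X_reduced / y are assumed to come from the feature-selection step above.
knn = KNeighborsClassifier(n_neighbors=5, metric="euclidean")
# knn.fit(X_reduced, y); knn.predict(new_samples)
```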
(b) MLP
The human nervous system serves as an inspiration for the multilayer perceptron idea [24]. The benefits of MLP include being: (i) highly fault-tolerant, meaning that even if neurons and the connections between them fail, the network continues to function; and (ii) nonlinear in nature, making it appropriate for a variety of real-world issues [28].
An MLP is a complex function that maps numerical inputs to numerical outputs. A fully connected MLP network is shown in Fig. 3. It has three layers: the domain's raw input is taken in by the input layer, feature extraction is done by the hidden layer, and prediction is done by the output layer. A deep learning network has multiple hidden layers. On the other hand, adding additional hidden layers might cause vanishing gradient issues, which call for special techniques to fix. The parameters of the MLP model, which include the number of hidden layers and neurons, must be carefully selected [30].
Cross-validation methods are routinely employed to determine optimal values for these hyperparameters. The output and hidden neurons of MLP networks use activation functions (f). Normally, all hidden neurons use the same activation function, while the output layer often has a distinct one; the choice depends on the purpose or kind of prediction the model makes. The activation function is what gives the neural network its non-linearity [30].
When a bias is present, a node in a multilayer perceptron may be described as a neuron that computes a weighted sum of its inputs and passes it through an activation function [31]. The entire procedure is described as follows:

V_j = \sum_{i=1}^{p} W_{ji} X_i + \theta_j    (3)

Y_j = F_j(V_j)    (4)

where F_j(V_j) is the activation function of the jth neuron, Y_j is its output, V_j is the weighted sum of the inputs X_1, X_2, ..., X_p, \theta_j is the bias, and W_{ji} is the weight of the connection between input X_i and neuron j.
A popular choice for the activation function is the sigmoid function, as follows:
F(a) = \frac{1}{1 + e^{-a}}    (5)
There are many distinct kinds of neural networks, but multilayer neural networks are the most often used. Owing to the several hidden layers in their structure, multilayer neural networks are popular because they can help solve difficult problems that a single-layer neural network cannot [31].
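The single-neuron computation of Eqs. (3)-(5) and an MLP classifier can be sketched as follows; the hidden-layer size and activation choice are illustrative and would in practice be selected by cross-validation, as noted above.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def neuron_output(x: np.ndarray, w: np.ndarray, bias: float) -> float:
    v = float(np.dot(w, x)) + bias     # Eq. (3): weighted sum of inputs plus bias
    return 1.0 / (1.0 + np.exp(-v))    # Eqs. (4)-(5): sigmoid activation

print(neuron_output(np.array([0.5, -1.0]), np.array([0.8, 0.3]), 0.1))

# An MLP with one hidden layer of 32 neurons and logistic (sigmoid) activation;
# these hyperparameters are assumptions, not the paper's reported settings.
mlp = MLPClassifier(hidden_layer_sizes=(32,), activation="logistic", max_iter=500)
# mlp.fit(X_train, y_train); mlp.predict(X_test)
```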
(d) The Random Forest
In the same way that a forest is made up of many trees, a random forest (RF) is a group of DTs that work together [17]. When DTs are grown in great detail, the training data are often overfitted, which means that a small change in the input data can cause a large difference in the classification results. They are very sensitive to the data they were trained on, which makes it easy for them to make mistakes on the test data. The different DTs of an RF are each trained on different subsets of the training data. To classify a sample, its input is presented to every DT in the forest. Each DT then gives a classification result that takes a different part of the input vector into account. The forest finally chooses the classification that receives the most "votes" (for a discrete classification outcome) or the average of all the trees in the forest (for a numeric classification outcome). The RF technique, which takes into account the results of several different DTs, can reduce the variance caused by evaluating a single DT on the same dataset. The algorithm evaluates many different decision trees, creating a forest; another name for it is an ensemble of decision tree methods [28].
The RF approach combines random feature selection with bagging. The following
three random forest tuning settings are crucial: (1) the number of trees (n tree), (2)
the minimum node size, and (3) the number of characteristics used to divide each
node (m try). The benefits of the random forest algorithm are described below [32].
1. The ensemble learning algorithm known as the random forest is precise.
2. Large data sets may be processed using random forest effectively.
3. It can cope with a large number of input variables.
4. Random forest calculates the key classification variables.
5. Missing data can be accommodated.
6. Techniques for balancing error on class-imbalanced data sets are available in random forests.
7. With this technique, generated forests may be preserved for later use.
8. Overfitting is overcome by random forest.
9. RF is much less sensitive to anomalies in training data.
10. Parameters in RF may be simply adjusted, negating the requirement for tree
trimming.
11. Accuracy and variable importance are generated automatically in RF [32].
A random forest tree is one of many trees in a forest that contributes to prediction decisions. It offers the best split over all medical data attributes or other characteristics [11].
DTs and ensemble learning form the foundation of the data categorization method known as RF. During training, it creates a large number of trees, forming a forest of decision trees. During testing, every tree in the forest predicts a class for each instance. Once a classification is generated from each tree, the final decision for every test sample is taken by majority voting; that is, the test sample is assigned to the class that obtains the most votes. This process is repeated for each record contained in the gathered data [29].
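The three tuning settings listed above map onto a random forest implementation roughly as in the sketch below; the parameter values are illustrative choices, not the paper's settings.

```python
from sklearn.ensemble import RandomForestClassifier

# The three tuning settings expressed as scikit-learn arguments (values assumed).
rf = RandomForestClassifier(
    n_estimators=100,     # (1) number of trees (n tree)
    min_samples_leaf=2,   # (2) minimum node size
    max_features="sqrt",  # (3) number of features tried at each split (m try)
)
# rf.fit(X_train, y_train); the class receiving the most tree votes is returned
# by rf.predict(X_test).
```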
A multilayer perceptron (MLP) is an artificial neural network that produces several outputs from a collection of inputs. An MLP is formed of numerous layers of input nodes connected as a directed graph between the hidden layer and the output layer, and it uses backpropagation to train the network. What distinguishes a multilayer perceptron from other neural networks is this directed-graph structure, in which the signal moves in only one direction between nodes. Except for the input nodes, every node has a nonlinear activation function. Backpropagation is the supervised learning technique used by an MLP, and MLP is a deep learning technique that uses many units called neurons [13].
The weights and biases of an MLP network are represented by the position of a particle. The goal is to find a position/weight combination that leads the network to produce computed output resembling the output of the labeled training data [30]. The RF technique employs a tree-based solution known as a forest to train the MLP network, and the tree's potential solutions are all referred to as particles. Random Forest is an ensemble learning technique that builds a "forest" of many decision trees. To categorize a new item based on its characteristics, each tree assigns a class and "votes" for that class; the forest selects the categorization that receives the most votes. It uses bagging and the random subspace approach to build trees.
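The exact coupling of MLP and RF in the hybrid is not spelled out in the text; one plausible reading is a soft-voting ensemble of the two classifiers, sketched below under that assumption.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# One possible reading of the MLP + RF hybrid: combine the two models' class
# probabilities by soft voting. This wiring is an assumption, not the paper's
# documented design.
hybrid = VotingClassifier(
    estimators=[
        ("mlp", make_pipeline(StandardScaler(),
                              MLPClassifier(hidden_layer_sizes=(32,), max_iter=500))),
        ("rf", RandomForestClassifier(n_estimators=100)),
    ],
    voting="soft",
)
# hybrid.fit(X_train, y_train); y_pred = hybrid.predict(X_test)
```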
(v) Result
This portion of the study, which was carried out using MATLAB 2016a on an i5 CPU with 4 GB of RAM, discusses the performance results of the proposed thyroid classifier compared with existing methodologies; the thyroid dataset was collected from the UCI repository. Different classification methods are employed to identify the classes, and the classification technique is used to classify the thyroid data.
Hypothyroidism, hyperthyroidism, goiter, thyroid nodules, and thyroid cancer are among the specific types of thyroid illness that have been recognized. Using a classification system, our suggested technique locates the specific thyroid illness.
Accuracy refers to the percentage of a test dataset that the model predicts correctly. It is determined as follows by identifying true positives (TP) and true negatives (TN) as examples that are correctly categorized, and false positives (FP) and false negatives (FN) as cases that are incorrectly classified:

Accuracy = \frac{TP + TN}{TP + FP + TN + FN}    (6)
Table 1 Prediction of thyroid disease accuracy

Techniques           Accuracy (%)
SVM                  95
KNN                  93
RF                   94
MLP                  92
Hybrid (MLP + RF)    98
On the other hand, precision and recall measure complementary aspects of the positive predictions. Precision is the ability of the classifier to avoid labeling negative instances as positive:

Precision = \frac{TP}{TP + FP}    (7)

Recall, instead, evaluates how sensitive the model is. It is defined as the proportion of a class's instances that are predicted correctly out of all cases in which that class occurs:

Recall = \frac{TP}{TP + FN}    (8)
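Eqs. (6)-(8) can be computed directly from the predicted and true labels, for example as follows; the label vectors here are toy placeholders for the classifier outputs.

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score

# y_test and y_pred are assumed to come from one of the classifiers above;
# macro averaging handles multiple thyroid classes.
y_test = [0, 1, 2, 1, 0, 2]
y_pred = [0, 1, 2, 0, 0, 2]
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred, average="macro"))
print("recall   :", recall_score(y_test, y_pred, average="macro"))
```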
4 Conclusion
One of the disorders that affect the global population and are becoming more preva-
lent is thyroid disease. Our work focuses on the categorization and prediction of
thyroid disorders since medical reports indicate major imbalances in thyroid diseases.
In this article, a unique hybrid classification is applied for thyroid prediction and diagnosis. The results demonstrate that the suggested hybrid classification approach achieves an accuracy of 98%. The hybrid classification comprises the Multi-Layer Perceptron (MLP) method and the Random Forest algorithm. Compared to existing methods such as SVM and KNN, the suggested methods provide strong performance, accuracy, and support for the identification of thyroid illness. In the future, other deep learning and machine learning techniques may be applied to the categorization to improve illness prediction.
Future work: Many types of thyroid disease exist today that remain unclassified, so the above work may be extended to detecting the type of thyroid disease and its stage, which may help medical practitioners to suggest appropriate treatment procedures.
References
1. Chaganti, R., Rustam, F., De La Torre Díez, I., Mazón, J. L. V., Rodríguez, C. L., Ashraf, I.
(2022) Thyroid disease prediction using selective features and machine learning techniques.
Cancers 14 (16), 1–23. https://doi.org/10.3390/cancers14163914
2. Turanoglu-Bekar, E., Ulutagay, G., & Kantarcı-Savas, S. (2016). Classification of thyroid
disease by using data mining models: A comparison of decision tree algorithms. Oxford Journal
of Intelligent Decision and Data Science, 2016(2), 13–28. https://doi.org/10.5899/2016/ojids-
00002
3. Aversano, L., Bernardi, M. L., Cimitile, M., Iammarino, M., Macchia, P. E., Nettore, I. C., &
Verdone, C. (2021). Thyroid disease treatment prediction with machine learning approaches.
Procedia Computer Science, 192, 1031–1040. https://doi.org/10.1016/j.procs.2021.08.106
4. Raisinghani, S., Shamdasani, R., Motwani, M., Bahreja, A., & Raghavan Nair Lalitha, P. (2019).
Thyroid prediction using machine learning techniques. In: Communications in computer
and information science (vol. 1045). Springer Singapore. https://doi.org/10.1007/978-981-13-
9939-8_13
5. Shankar, K., Lakshmanaprabu, S. K., Gupta, D., Maseleno, A., & de Albuquerque, V. H. C.
(2020). Optimal feature-based multi-kernel SVM approach for thyroid disease classification.
Journal of Supercomputing, 76(2), 1128–1143. https://doi.org/10.1007/s11227-018-2469-4
6. Akhtar, T., Gilani, S. O., Mushtaq, Z., Arif, S., Jamil, M., Ayaz, Y., Butt, S. I., & Waris, A.
(2021). Effective voting ensemble of homogenous ensembling with multiple attribute-selection
approaches for improved identification of thyroid disorder. Electronics (Switzerland), 10(23).
https://doi.org/10.3390/electronics10233026
7. Dharmarajan, K., Balasree, K., Arunachalam, A. S., & Abirmai, K. (2020). Thyroid disease
classification using decision tree and SVM. Executive Editor, 11(03), 3234.
8. Olatunji, S. O., Alotaibi, S., Almutairi, E., Alrabae, Z., Almajid, Y., Altabee, R., Altassan,
M., Basheer Ahmed, M. I., Farooqui, M., & Alhiyafi, J. (2021). Early diagnosis of thyroid
cancer diseases using computational intelligence techniques: A case study of a Saudi Arabian
mosaic better accuracy. In Intelligent computing and innovation on data science: Proceedings
of ICTIDS 2021 (pp. 201–212). Springer Singapore. https://doi.org/10.1007/978-981-16-3153-
5_23
25. Suseendran, G., Balaganesh, D., Akila, D., & Pal, S. (2021, May). Deep learning frequent
pattern mining on static semi structured data streams for improving fast speed and complex
data streams. In 2021 7th International conference on optimization and applications (ICOA)
(pp. 1–8). IEEE. https://doi.org/10.1109/ICOA51614.2021.9442621
26. Singh, D., Sahana, S., Pal, S., Nath, I., Bhattacharyya, S. (2020). Assessment of the heart
disease using soft computing methodology. In V. Solanki, M. Hoang, Z. Lu, P. Pattnaik (Eds.),
Intelligent computing in engineering. Advances in intelligent systems and computing (vol 1125).
Springer, Singapore. https://doi.org/10.1007/978-981-15-2780-7_1
27. Rakshit, P., Nath, I., & Pal, S. (2020). Application of IoT in healthcare. In Principles of Internet
of Things (IoT) ecosystem: Insight paradigm (pp. 263–277). https://doi.org/10.1007/978-3-030-
33596-0_10
28. Ahuja, R., Sharma, S. C., & Ali, M. (2019). A diabetic disease prediction model based on
classification algorithms. Annals of Emerging Technologies in Computing, 3(3), 44–52. https:/
/doi.org/10.33166/AETiC.2019.03.005
29. Ali, M. M., Paul, B. K., Ahmed, K., Bui, F. M., Quinn, J. M. W., & Moni, M. A. (2021). Heart
disease prediction using supervised machine learning algorithms: Performance analysis and
comparison. Computers in Biology and Medicine, 136, 104672. https://doi.org/10.1016/j.com
pbiomed.2021.104672
30. Al Bataineh, A., & Manacek, S. (2022). MLP-PSO hybrid algorithm for heart disease
prediction. Journal of Personalized Medicine, 12(8). https://doi.org/10.3390/jpm12081208
31. Yildirim, P. (2017). Chronic kidney disease prediction on imbalanced data by multilayer percep-
tron: Chronic kidney disease prediction. Proceedings–International Computer Software and
Applications Conference, 2, 193–198. https://doi.org/10.1109/COMPSAC.2017.84
32. Jabbar, M. A., Deekshatulu, B. L., & Chandra, P. (2016). Intelligent heart disease prediction
system using random forest and evolutionary approach. Journal of Network and Innovative
Computing, 4(April), 175–184. www.mirlabs.net/jnic/index.html
YouTube Sentimental Analysis Using
a Combined Approach of KNN
and K-means Clustering Algorithm
Abstract Sentiment analysis is the method for learning what users think and feel
about a service or a product. YouTube, one of the most widely used video-sharing
websites, receives millions of views daily. Many businesses utilize YouTube, a
well-known social media platform, to sell their goods through videos and adver-
tisements. Popular YouTube channels are seeing a sharp increase in the daily volume
of comments. We cannot easily notice and comprehend this enormous volume of
comments, which are largely unstructured, so we need some applications or methods
that use large amounts of data to perform sentiment analysis. Sentiment analysis is therefore necessary to categorize the comments on such a large platform in meaningful ways. In this paper, we employed sentimental analysis
and methods that may be applied to comments on YouTube videos. Additionally, it
S. Adhikari
School of Engineering, Swami Vivekananda University, Kolkata, India
e-mail: [email protected]
R. Kaushik
Department of Computer Science, CHRIST University, Bangalore, India
A. J. Obaid
Faculty of Computer Science and Mathematics, University of Kufa, Najaf, Iraq
e-mail: [email protected]
S. Jeyalaksshmi
Department of Information Technology, Vels Institute of Science Technology and Advanced
Studies, Chennai, India
D. Balaganesh (B)
Berlin School of Business and Innovation, Berlin, Germany
e-mail: [email protected]
F. H. Hanoon
Department of Physics, College of Science, University of Thi-Qar, Nassiriya, Iraq
Collage of Engineering, Medical Instruments Technology Engineering, National University of
Science and Technology, Dhi Qar, Iraq
F. H. Hanoon
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 37
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_4
describes and groups various techniques that are helpful in sentiment analysis and
data mining studies. For sentimental analysis, we merged the K-Nearest Neighbor
(KNN) and K-means clustering approaches. For comparison, the proposed technique is evaluated against the SVM and Naive Bayes classifiers in terms of accuracy.
1 Introduction
The Internet’s growing popularity has changed how we think about the things we do
every day. With the rise of social media, faster and easier Internet access, and smart
devices, this effect has become stronger. With the ease and time savings that have
been made possible by the shift from the physical world to the cyber world, there is no
doubt that the quality of life has been increasing quickly [1]. Today, it is impossible
to think of doing research or keeping up with the newest news without utilizing the
Internet. In reality, as a result of the Internet’s widespread use, more complex ideas,
including big data and the Internet of Things (IoT), have emerged. However, despite
the Internet’s widespread use, not all websites and platforms receive the same amount
of traffic because of their popularity and advantages [1].
The virtual space for social networking that online social media provides allows
users to express their opinions and provide reviews on media material. Social media
data mining aims to unearth meaningful knowledge from information gathered from
user interactions with web content. Monitoring suspicious activity on the internet
has become essential. Data may be communicated in text, audio, and video formats
in online forums. However, a text corpus is the most popular and helpful format for online discussion, and text corpora are the best way to use and analyze information in textual form. Data from online media can be utilized in both positive and negative ways, even by criminals, who might use it to incite irrational opposition to lawful activity. To spot any suspicious activity, the discussion boards
need to be continuously monitored. Many law enforcement organizations around the
world are seeking ways to monitor these forums and detect any potential unlawful
activity. However, there are several difficulties in analyzing these suspicious activi-
ties, including locating suspicious published materials and publications produced by
users and examining user behavior in social media [2].
Social media platforms like Facebook, Twitter, and YouTube have evolved dramat-
ically, changing how people live their lives. These platforms allow users to post
videos, exchange messages, and communicate their opinions with others. The most
popular medium for sharing videos is YouTube. With 30 million daily users, it is the
second-most-frequented website in the whole globe. Over one billion videos are seen
daily on YouTube, and 500 h of footage are uploaded every minute. YouTube divides
each video into relevant categories to make it easier to find this material. Like, dislike,
and commenting are just a few of the new options that YouTube has included to allow
users to review videos [3]. The public is accustomed to using the video-hosting site
YouTube. According to Our World in Data, with over a billion active users, YouTube
has had one of the highest numbers of social media users during the previous 5 years.
The popularity of YouTube is presently being utilized by several businesses to grow
their clientele and increase their marketing footprint through product videos [4].
Sentiment analysis may assist users in comprehending the user’s perspective and
is effective for rapidly grasping the big picture when employing a lot of text data.
The term “sentimental analysis” is also used to refer to the process of identifying the
positive, negative, or neutral thoughts, perspectives, attitudes, perceptions, emotions,
and sentiments expressed in a text. Current YouTube usage numbers give an idea of
the site’s size: at the time of writing, over 1 billion active users are watching video
material each month, totaling approximately six billion hours of video. Additionally,
YouTube is responsible for 10% of all internet traffic and 20% of visits to websites
[5]. Sentiment analysis is the process of identifying, extracting, and categorizing the
views, feelings, and attitudes stated in the text as they relate to various issues. Senti-
ment analysis is also known as information extraction, evaluation mining, appraisal
extraction, and attitude analysis. Through the study of criticism (or review) language,
opinion mining, and sentiment analysis studies seek to understand the thoughts of
people throughout the Web [6].
Researchers are now interested in research on knowledge extraction from a corpus
of texts. Opinion text is among the most popular sources for information extraction, and social media is the source of many opinions. Nearly all human actions revolve around opinions, which also heavily influence people and organizations [7]. The outcomes of opinion analysis may be positive or negative, favorable or unfavorable, and so on.
In this research, opinions from Twitter and Facebook are divided into positive and
negative categories. Text mining uses several algorithms to categorize views into
positive or negative sentiments. In this work, the K-Means cluster and KNN are used
in conjunction. The goal of K-Means clustering is to group a collection of items so
that they are more similar to one another than to other groups.
2 Related Works
In this part, we looked into several emotional analytics and data mining techniques
using a variety of published articles.
The technique of looking for or extracting valuable information from textual
material is known as text mining [8]. It looks for intriguing patterns in huge datasets.
It employs a variety of pre-processing techniques, including stemming and stop-
word deletion. Their study included comprehensive information on the stop-word
deletion and stemming algorithms, two text mining pre-processing approaches. They
anticipate that the community of text-mining researchers will benefit from this study
and get a solid understanding of the various pre-processing strategies.
Ezpeleta et al. [9] described a brand-new social spam filtering technique. We offer
ways to support the idea that by capturing the mood of the words, it is feasible to
enhance the outcomes of the present social spam filtering. First, several tests are run
with and without the mood feature using a social spam dataset. We then compare
the outcomes and show how mood analysis might enhance social spam filtering
performance. Results indicate that utilizing the Online Comments Dataset and the
validation dataset, respectively, the best accuracy attained with the dataset increased
from 82.50 to 82.58% and also from 93.97 to 94.38%.
Allahyari et al. [10] tried to give a concise overview of the field of text mining.
We provide a summary of the most essential methods and algorithms that are widely
applied in the text domain. Additionally, various significant text-mining techniques
in the biomedical field were reviewed in this work. Despite the limitations of this
page, it is hard to thoroughly detail all of the many approaches and algorithms, but
it should provide a general picture of how text mining is progressing at the moment.
Given the enormous amount of scholarly literature that is created each year, text
mining is crucial for scientific study. Due to the regular addition of numerous new
papers, these vast online archives of scientific literature are substantially expanding.
Although this expansion has made it easier for scholars to acquire more scientific
knowledge, it has also made it very challenging for them to find papers that are
more relevant to their interests. Researchers’ interest in analyzing and mining this
enormous volume of text is therefore high.
Wahyono et al. [11] developed a mobile application that gauges students' emotions while viewers watch YouTube and online learning materials. When determining the results, the artificial intelligence algorithms work on text files rather than the raw comments. Their study uses a text-based rule set for emotion classification with k-NN to determine each student's sentiments based only on user comments on YouTube and online learning resources. By using this program, teachers may learn how their students feel after watching the study-material videos they provide on YouTube and in online courses.
Among others, the legal field is one of several whose primary foundation is infor-
mation that is preserved as text [12]. Each case that a legal analyst is working on is
a research challenge. The legal or judicial argument is based on thorough research
to create arguments. The intricacy and quantity of papers that must be looked for
and examined make the aforementioned process highly challenging. Today’s search
possibilities are largely keyword based. To make this procedure simpler, researchers
have introduced the TM approach and associated techniques. The study suggests
using an unsupervised text mining approach called clustering to organize papers to
improve document search.
Hashimi et al. [13] mentioned that the majority of text mining methods rely
on several strategies, including clustering, classification, relationship mining, and
pattern matching. These methods have been applied to finding, locating, and
extracting pertinent facts and information from unstructured and disorganized textual
resources. To provide a framework and design, mining approaches have been
provided along with various algorithms and classifications. Classification, clustering,
linear regression, cluster analysis learning, anomaly detection methods, summariza-
tion, and other supervised training approaches are just a few of the diverse methods
that have been found. Each of those strategies is essential for creating and putting
into use data warehouses that are useful for various purposes. Most often, academics,
researchers, development centers, etc. employ data warehouses.
These days, social networking sites like YouTube, Facebook, and others are quite
popular [14]. The best feature of YouTube is that users can subscribe to and comment
on videos. However, flooding the comments on those videos attracts spammers. As
a result, this study uses K-Nearest Neighbor and Support Vector Machine (SVM) to
construct a YouTube identification framework (k-NN). This study is divided into five
(5) steps, including data gathering, pre-processing, feature selection, classification,
and detection. The use of Weka and RapidMiner is made for the experiments. SVM
and KNN accuracy results employing both machine learning methods demonstrate
good accuracy results. Naive Bayes often comes up on top, followed by Decision
Tree and Logistics. Weka’s results, in contrast, demonstrate an accuracy of at least
90%. A further defense against spam attacks is to attempt to avoid clicking links in
comments.
By incorporating qualitative analysis into the already-existing quantitative anal-
ysis method, Lee et al. [15] demonstrated a way to assess the effect of identifying
future signals. This methodology offers an improved method for confirming the
validity and reliability of analytical results. The feasibility of the updated technique
is beneficial to determining the progress of the issue, expanding from prospective to
emergent concerns, as we discovered in the study case on the ethical dilemmas of
AI. It is commonly regarded in many fields of research and administration that the
updated methodology, which combines qualitative content analysis, is an ambidex-
trous approach that allows analysts to strike a balance between rigor and flexibility.
In practice, the strategy is anticipated to benefit government and commercial stake-
holders by giving them a thorough understanding of the current state of affairs,
including both hidden and well-known signals as well as their significance. Singh
et al. [16] have given assessment methods for heart disease prediction utilizing soft
computing algorithms. Many Internet of Things (IoT) strategies and techniques for
improving healthcare performance have been described by Rakshit et al. [17]. In
order to improve accuracy, comparison results, and performance, Suseendran and his
research team [18–22] have reviewed several image processing approaches, pattern
matching techniques, and inferred machine learning algorithms that may be analyzed
for sentiment analysis.
Summary:
• The improvement in the absolute number of comments on YouTube and also the
daily active visitors on this website is remarkable.
• This indicates that mood analysis can distinguish between spam and valid social
media comments.
• The mood feature gives each type of video a unique feature for comments.
This modification aids classifiers in removing spam comments and enhances
performance.
3 Proposed Method
(i) Dataset
Using the YouTube Data API, the used datasets were taken directly from YouTube
[20]. The attractiveness of the channel as well as the availability of recent comments
serves as the foundation for the retrieved datasets. Other than these two features,
there was nothing else taken into account. Consequently, the datasets were chosen
at random (not based on celebrity status or anything else). The total number of YouTube
channels used is 100, and there are 10,000 total samples.
(ii) Pre-processing
Pre-processing involves cleaning up the raw dataset using operations like tokeniza-
tion, stop-word removal, and stemming. For the subsequent feature extraction and
selection phase, the clean dataset would be utilized.
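A minimal sketch of the pre-processing steps named above (tokenization, stop-word removal, stemming) is shown below; the regex tokenizer and the particular stemmer and stop-word list are assumptions, since the paper does not name its exact tooling.

```python
import re
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import ENGLISH_STOP_WORDS

def clean_comment(text: str) -> list[str]:
    # Tokenize with a simple regex (a stand-in for a full tokenizer), drop
    # stop-words, and stem the remaining tokens.
    stemmer = PorterStemmer()
    tokens = re.findall(r"[a-z']+", text.lower())
    return [stemmer.stem(t) for t in tokens if t not in ENGLISH_STOP_WORDS]

print(clean_comment("This video really helped me understand the topic!"))
```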
(iii) Feature Extraction
Feature extraction is a procedure for converting data that was previously in text form
into a machine-understandable format. Making the information into vectors is one
of them, which makes it simpler for robots to learn. There are other ways to create
vectors, however, in this study, we only employ two categories of vectorizers:
• The hash function known as HashingVectorizer is a useful tool for efficiently
mapping words to features. The hashing function is used by Hashing Vectorizer
to determine how many frequencies are present in each text. With this technique,
a text document is transformed into a tokens event matrix [4].
• The term frequency-inverse documents frequency vectorizer is TFIDFVectorizer.
It uses statistics to determine each word’s weight in the sample document [5]. IDF
is the weight of how broadly dispersed the word is over the whole dataset, and TF
is the frequency with which the phrase appears in the dataset. The amount of the
IDF increases with the amount of information that does not include the relevant
phrase.
• The CountVectorizer technique turns a group of text documents into a token count matrix. This vectorization not only offers a quick way to transform a collection of text files and build a vocabulary of recognized words, but it can also be used to encode new documents [4].
Term frequency and inverse document frequency (TF-IDF) are weights often employed in information retrieval and text mining [18]. The formula to determine TF is

TF(d, t) = f(d, t)    (1)

If there are checkable facts that may be utilized as a query inside the TF-IDF technique, TF-IDF weighting may be completed.
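The three vectorizers discussed above can be applied to cleaned comments as in the following sketch; the sample comments and the hash dimensionality are illustrative assumptions.

```python
from sklearn.feature_extraction.text import (CountVectorizer, HashingVectorizer,
                                              TfidfVectorizer)

comments = ["great video thanks", "worst explanation ever", "really helpful content"]

# Token counts, TF-IDF weights, and hashed features for the same comments.
X_counts = CountVectorizer().fit_transform(comments)
X_tfidf = TfidfVectorizer().fit_transform(comments)
X_hashed = HashingVectorizer(n_features=2**10).fit_transform(comments)
print(X_counts.shape, X_tfidf.shape, X_hashed.shape)
```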
(iv) Sentimental analysis
(a) SVM
SVM is effective in distinguishing between good and bad issues, like spam. A super-
vised learning model called SVM examines the information utilized in classifica-
tion and regression. SVM is frequently used for classification issues. For binary
classification problems, SVM is utilized together with kernel functions [14].
In the field of machine learning, a support vector classifier is one such supervised
training method that makes sufficient progress on a range of tasks, specifically while
analyzing sentiment. The more complex the data, the more accurate the forecast
will be, making SVM algorithms superb classifiers [5].
A “good” linear separator across different classes is what Support Vector Machines
seek to identify. Only two classes, a positive class and a negative class, can be
distinguished by a single SVM. The SVM method looks for a hyperplane that is the
farthest away from both positive and negative instances (also known as the margin).
Support vectors are the documents that define the hyperplane's precise location, lying at the margin distance from it. If the document vectors of the two classes cannot be separated linearly, a hyperplane is chosen so that the fewest document vectors fall on the incorrect side [10].
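A hedged sketch of an SVM sentiment classifier over TF-IDF features follows; the linear kernel, toy comments, and labels are assumptions for illustration, not the paper's configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

# Tiny labeled sample (1 = positive, 0 = negative), purely illustrative.
texts = ["love this channel", "great tutorial", "terrible audio", "waste of time"]
labels = [1, 1, 0, 0]

svm_clf = make_pipeline(TfidfVectorizer(), SVC(kernel="linear"))
svm_clf.fit(texts, labels)
print(svm_clf.predict(["really great content"]))
```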
(b) Naive Bayes
The word "naive" refers to the assumption that the characteristics in a dataset are inde-
pendent of one another. This classifier is a probabilistic learning approach based on
the Bayesian theorem. This classifier may be used for sentiment analysis, document
classification, text categorization, spam filtering, etc. Some studies have explained how generative
classifiers, also known as Bayesian classifiers, aim to construct a probabilistic clas-
sifier by modeling the underlying word properties in various classes. The next step
is to categorize the text using the posterior probabilities that the documents belong
to the various groups based on the occurrence of particular words in the texts [2]. For a document d and a class c, Naive Bayes gives:

P(c|d) = \frac{P(d|c) P(c)}{P(d)}    (3)

where P(c|d) is the posterior probability of the class given the document, P(c) is the prior probability of the class, P(d|c) is the likelihood of the document given the class, and P(d) is the prior probability of the document [2].
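Eq. (3) is what a multinomial Naive Bayes classifier applies over word counts; a minimal sketch, with toy comments and labels assumed for illustration, is given below.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy comments and labels used only for illustration.
texts = ["amazing video", "very helpful", "boring and useless", "poor quality"]
labels = ["positive", "positive", "negative", "negative"]

nb_clf = make_pipeline(CountVectorizer(), MultinomialNB())
nb_clf.fit(texts, labels)
print(nb_clf.predict(["helpful and amazing"]))
```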
(c) KNN
A supervised learning technique is KNN. Data in the KNN method is shown as a
vector space. KNN emphasizes the k training data points that are most comparable to
a test data point. The method will integrate the neighbors’ labels to decide the label
of the testing data point after identifying the K-Nearest Neighbors [14].
A method for categorizing objects based on educational data that is nearest to
the item is the K-Nearest Neighbor (KNN) criteria set [11]. Friendship distances,
whether close or far, are often determined using the general method shown in the
equation below, based on the Euclidean distance.
d = \sqrt{\sum_{i=1}^{n} (a_i - b_i)^2}    (4)
(d) K-means Clustering
One of the partitioning techniques that is frequently used in data mining is k-means
clustering. In the case of text data, the k-means clustering divides n texts into k
groups. The clusters are constructed around a representative object.
The nearest neighbor classifier is a proximity classifier that performs the classification using distance-based measurements. The fundamental contention is that, based on similarity metrics such as cosine similarity, documents that are members of the same class are much more likely to be "similar", or close, to one another. From the classes of the documents in the training set, the categorization of the test dataset is deduced. If we take into account the k nearest neighbors in the training data set, the method is known as k-nearest neighbor categorization, and the most prevalent class among those k neighbors is given as the classification [10].
1. Input: D is for the document set, S is for similarity, and k is the cluster count.
2. Output: k-cluster collection.
3. Initialization.
4. Choose k data points at random to serve as initial centroids.
5. Determine the cluster K number.
6. Setting up the cluster center.
7. Distribute all information and items to the nearby cluster.
8. Update the centroid using the current membership of the cluster.
9. Repeat while not converged:
10. Assign each document to the most similar centroid.
11. Determine the cluster centers for each cluster.
12. End.
13. Change each object’s assignment using the new cluster center.
14. The clustering process is complete if the cluster doesn’t change.
15. Else.
16. Carry out Step 7 again until no cluster shows any change.
17. Return the k clusters.
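One plausible reading of the combined procedure above, clustering the vectorized comments with K-means and then labeling new comments with KNN over the clustered data, is sketched below; the exact coupling of the two algorithms and the toy comments are assumptions.

```python
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

# Toy comments standing in for the YouTube comment corpus.
comments = ["love it", "great work", "so bad", "awful video",
            "nice explanation", "hate this channel"]

vec = TfidfVectorizer()
X = vec.fit_transform(comments)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)    # steps 4-16
knn = KNeighborsClassifier(n_neighbors=3).fit(X, kmeans.labels_)   # KNN over the clusters

new_comment = vec.transform(["really great video"])
print(knn.predict(new_comment))   # cluster assigned by majority vote of 3 neighbors
```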
(v) Results
YouTube is used in this study as a helpful resource for collecting text remarks from the comment column. Figure 2 shows a source YouTube channel that includes study material. The total number of YouTube channels used is 100, and there are 10,000 samples in total.
Performance evaluation is one of the criteria used to gauge how accurately an algorithm is applied. The confusion matrix is employed in this evaluation. By comparing the outcomes against the categorization of the training data, the values of accuracy, recall, precision, and F1 score for the classified testing data are examined.
The accuracy score measures the algorithm's effectiveness using Eq. (5):

Accuracy = \frac{TP + TN}{TP + TN + FP + FN}    (5)
Recall is defined in Eq. (6) as the proportion of relevant items that were selected out of the total number of relevant items available:

Recall = \frac{TP}{TP + FN}    (6)

Precision is the proportion of selected items that are relevant, as given in Eq. (7):

Precision = \frac{TP}{TP + FP}    (7)
4 Conclusion
In the past 10 years, it has become clearer that social media platforms are becoming
more popular. These platforms are now more widely known and used, which has
encouraged spammers, fraudsters, and other bad actors to attack them. As one of the most popular social media sites, YouTube has an exceptionally large user base and volume of traffic. In this research paper, we introduce a sentiment analysis algorithm
for YouTube comments. For emotional analysis in this research, we merged the K-
Nearest Neighbor (KNN) and K-means clustering approaches. The suggested combi-
nation technique of the KNN and K-means algorithms yields a precision of 98.13%
in the emotional analysis of YouTube comments. When compared to other current
algorithms like SVM, Naive Bayes, etc., the suggested approaches provide promising
results.
References
1. Abdullah, A. O., Ali, M. A., Karabatak, M., & Sengur, A. (2018). A comparative analysis of
common YouTube comment spam filtering techniques. In 2018 6th international symposium
on digital forensic and security (ISDFS) (pp. 1–5). IEEE.
2. Sharmin, S., & Zaman, Z. (2017). Spam detection in social media employing machine learning
tool for text mining. In 2017 13th International conference on signal-image technology &
internet-based systems (SITIS) (pp. 137–142). IEEE.
3. Alhujaili, R. F., & Yafooz, W. M. (2021). Sentiment analysis for youtube videos with user
comments. In 2021 International conference on artificial intelligence and smart systems
(ICAIS) (pp. 814–820). IEEE.
4. Irawaty, I., Andreswari, R., & Pramesti, D. (2020). Vectorizer comparison for sentiment analysis
on social media youtube: A case study. In 2020 3rd International conference on computer and
informatics engineering (IC2IE) (pp. 69–74). IEEE.
5. Singh, R., & Tiwari, A. (2021). Youtube comments sentiment analysis.
6. Riaz, S., Fatima, M., Kamran, M., & Nisar, M. W. (2019). Opinion mining on large-scale data
using sentiment analysis and k-means clustering. Cluster Computing, 22(3), 7149–7164.
7. Zul, M. I., Yulia, F., & Nurmalasari, D. (2018). Social media sentiment analysis using K-means
and naïve Bayes algorithm. In 2018 2nd International conference on electrical engineering and
informatics (ICon EEI) (pp. 24–29). IEEE.
8. Vijayarani, S., Ilamathi, M. J., & Nithya, M. (2015). Preprocessing techniques for text mining-
an overview. International Journal of Computer Science & Communication Networks, 5(1),
7–16.
9. Ezpeleta, E., Iturbe, M., Garitano, I., Mendizabal, I. V. D., & Zurutuza, U. (2018). A good anal-
ysis of youtube comments and a method for improved social spam detection. In International
conference on hybrid artificial intelligence systems (pp. 514–525). Cham, Springer.
10. Allahyari, M., Pouriyeh, S., Assefi, M., Safaei, S., Trippe, E. D., Gutierrez, J. B., & Kochut,
K. (2017). A brief survey of text mining: Classification, clustering and extraction techniques.
arXiv preprint arXiv:1707.02919
11. Wahyono, I. D., Saryono, D., Putranto, H., Asfani, K., Rosyid, H. A., Sunarti, M. M. M., Horng,
G. J., & Shih, J. S. (2022). Emotion Detection based on column comments in material of online
learning using artificial intelligence. iJIM, 16(03), 83.
12. Wagh, R. S. (2013). Knowledge discovery from legal documents dataset using text mining
techniques. International Journal of Computer Applications, 66(23).
13. Hashimi, H., Hafez, A., & Mathkour, H. (2015). Selection criteria for text mining approach.
Computers in Human Behavior, 51, 729–733.
14. Aziz, A., Foozy, C. F. M., Shamala, P., & Suradi, Z. (2017). YouTube spam comment detec-
tion using support vector machine and K-nearest neighbor. Indonesian Journal of Electrical
Engineering and Computer Science, 5(3), 401–408.
15. Lee, Y. J., & Park, J. Y. (2018). Identification of future signal based on the quantitative and
qualitative text mining: A case study on ethical issues in artificial intelligence. Quality Quantity,
52(2), 653–667.
16. Singh, D., Sahana, S., Pal, S., Nath, I., Bhattacharyya, S. (2020). Assessment of the heart
disease using soft computing methodology. In V. Solanki, M. Hoang, Z. Lu, P. Pattnaik (Eds.),
Intelligent computing in engineering. Advances in intelligent systems and computing (vol
1125). Springer, Singapore. https://doi.org/10.1007/978-981-15-2780-7_1
17. Rakshit, P., Nath, I., & Pal, S. (2020). Application of IoT in healthcare. In Principles of Internet
of Things (IoT) ecosystem: Insight paradigm (pp. 263–277). https://doi.org/10.1007/978-3-030-
33596-0_10
18. Suseendran, G., Chandrasekaran, E., Pal, S., Elangovan, V. R., & Nagarathinam, T. (2021).
Comparison of multidimensional hyperspectral image with SIFT image mosaic methods for
mosaic better accuracy. In Intelligent computing and innovation on data science: Proceedings
of ICTIDS 2021 (pp. 201–212). Springer, Singapore. https://doi.org/10.1007/978-981-16-3153-
5_23
19. Suseendran, G., Balaganesh, D., Akila, D., & Pal, S. (2021). Deep learning frequent pattern
mining on static semi structured data streams for improving fast speed and complex data
streams. In 2021 7th International conference on optimization and applications (ICOA) (pp. 1–
8). IEEE. https://doi.org/10.1109/ICOA51614.2021.9442621
20. Jeyalaksshmi, S., Akila, D., Padmapriya, D., Suseendran, G., & Pal, S. (2021). Human facial
expression based video retrieval with query video using EBCOT and MLP. In Proceedings of
first international conference on mathematical modeling and computational science: ICMMCS
2020 (pp. 157–166). Springer Singapore. https://doi.org/10.1007/978-981-33-4389-4_16
21. Suseendran, G., Doss, S., Pal, S., Dey, N., & Quang Cuong, T. (2021). An approach on data
visualization and data mining with regression analysis. In Proceedings of first international
conference on mathematical modeling and computational science: ICMMCS 2020 (pp. 649–
660). Springer, Singapore. https://doi.org/10.1007/978-981-33-4389-4_59
22. Pal, S., Suseendran, G., Akila, D., Jayakarthik, R., & Jabeen, T. N. (2021). Advanced FFT archi-
tecture based on cordic method for brain signal encryption system. In 2021 2nd International
conference on computation, automation and knowledge management (ICCAKM) (pp. 92–96).
IEEE. https://doi.org/10.1109/ICCAKM50778.2021.9357770
Big Data Analytics: Hybrid Classification
in Brain Images Using BSO and SVM
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 51
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_5
1 Introduction
Due to information technology, digital medical technology has improved, medical data is growing at an exponential rate, and medical science has become increasingly data-driven. This has led to the phenomenon of "big data". In the age of big data, data has evolved into a new strategic asset and a key driver of innovation, and it is transforming how biomedical research is conducted as well as how people live and think. The cataloging and management of healthcare big data can be improved, and a data foundation for future development and use can be built, through the assimilation, analysis, and technical characterization of big data in the healthcare service field. This also gives a strong theoretical and technological foundation for the creation and use of big data in the fields of medicine and health. The study outcomes can enhance the conceptual and practical framework of healthcare big data research by supplying essential technologies and information models for big data in the medical and healthcare industries [1].
Medical data in the healthcare industry has grown quickly in recent years. A
zettabyte of patient records was produced in the USA in 2018. The adoption of new
techniques based on big data technology, machine learning (ML), and artificial intel-
ligence (AI) has consequently become important as a result of this accumulation
of medical data, particularly images [2]. Presently, several researchers have
created machine-learning methods for the early diagnosis of chronic illnesses. Wear-
able technology provides healthcare facilities with simple, dependable, affordable,
and light health monitoring solutions. The ongoing monitoring of bodily changes
with smart sensors has become a way of life as a result of several medical awareness
campaigns. The majority of health education projects call for illness prevention and
early disease detection. Using technology to process medical data with Spark and machine and deep learning to forecast health problems is highly practical and valuable in the field of healthcare. People will benefit from receiving warnings about health problems and information about health threats earlier. Through smartphone applications, it can also assist doctors in patient tracking. Recommendation-system-based machine learning techniques also make it easier to treat human diseases based on sophisticated testing [3].
A brain tumor is an abnormal development of cells within the area of the brain’s
skull that can either be malignant or not. MR brain scans are being used to classify
brain tumors, which is a new trend in medical imaging. Due to its rarity and fatal
nature, tumor research is an intriguing field. Neurologists can assist the individual to
live a longer lifespan by finding brain tumor tissues early [4]. Brain tumors are among the most deadly types of illness and are dramatically on the rise.
According to statistics gathered from worldwide scientific organizations like the
American Cancer Society (ASCO), the rate of cancer-related deaths is rising quickly
globally. The growth of brain tumors, which can take many different shapes and sizes
and manifest in a variety of sites, is one of the leading reasons for rising death rates in
both children and adults. It has been discovered that during the past several decades,
the overall number of persons suffering from and passing away from brain tumors
has grown by 300 people annually [5]. A tumor is an abnormal growth in the tissues. The cells in a brain tumor proliferate and multiply uncontrollably, seemingly unchecked by the mechanisms that control normal cells. Both primary and metastatic brain tumors can be malignant or benign; metastatic brain tumors are cancers that have migrated to the brain from another part of the body. Magnetic resonance imaging (MRI) is frequently utilized while treating tumors in the brain, ankle, or foot [6].
One of the computer vision jobs is classification, where machine learning is used
to extract information from a collection of input data, look for certain patterns, and
then come to conclusions based on the facts they have discovered. The employment
of machine learning algorithms in a variety of sectors, including medical, bioin-
formatics, economics, agriculture, robotics, etc., has led to their widespread usage
and increased academic research. A supervised training task called classification
produces a categorical output or the class to which a given instance belongs. The
purpose of supervised learning is to create a decision model that accurately cate-
gorizes unknown examples using a model developed on the training dataset where
the categories of cases are known. In the training set of data, the decision model
looks for patterns that will allow it to classify newly discovered cases. Since medical
datasets often contain a large number of attributes and examples, classifying them is
a difficult challenge. The need for early and accurate diagnosis for patient recovery is
driving the search for quicker and more accurate categorization algorithms in CAD
systems [7].
Machine learning researchers have extensively researched the classification
problem. Numerous categorization techniques have been devised and are often
utilized in real-world settings. For instance, support vector machines (SVM), deci-
sion trees (DT), artificial neural networks (ANN), k-nearest neighbor (KNN), naive
Bayesian classification (NBC), etc. However, many of these techniques have a locally
optimal solution since they are structurally deterministic [8]. The Brain Storm Opti-
mization (BSO) algorithms are a novel form of swarm intelligence that is based on
the brainstorming process, a collective human activity. BSO involves the convergent
operation and the divergent operation, which are its two main operations. Through
iterative solution dispersion and convergence in the search space, an “acceptable”
optimum might be attained. Naturally, both convergence and divergence are capa-
bilities of the chosen optimization method [9]. In this article, brain tumor images obtained from big data are processed using a new hybrid algorithm: Particle Swarm Optimization (PSO) is used for segmentation, while Support Vector Machine (SVM) and Brain Storm Optimization (BSO) are used for classification.
2 Related Works
The use of information technology and electronic health systems in the medical
field has helped improve patient care, which brings up issues with segmentation and
categorization. In this part, we investigated segmentation and classification methods
for big data medical pictures using machine learning and optimization techniques.
van Opbroek et al. [10] published an automatic method for brain extraction and
brain tissue segmentation. By using the Gaussian scale-space features and the Gaus-
sian derivative features, they were able to make segmentations that were usually
pretty smooth and gave good results without any additional spatial regularization.
Because it was hard to tell the difference between the basal ganglia and the white
matter around it, segmentations were not always smooth in some slices, especially
those that had the basal ganglia. The suggested multi-feature SVM classification
generates appropriate segmentations quickly.
Pourpanah et al. [11] proposed a hybrid FMM-BSO model to solve the feature selection problem in data classification. First, the fuzzy min–max (FMM) network is applied as a supervised learning method to incrementally build hyperboxes. The optimal feature subset
is then extracted using BSO as the underlying method to optimize classification accu-
racy and reduce model complexity. To assess the efficacy of the FMM-BSO model,
ten benchmark classification tasks and an actual case study, namely, motor failure
detection, were employed. The effectiveness of FMM-BSO has been compared to
that of the classic FMM and other approaches described in the literature in terms of
classification accuracy and the number of characteristics chosen. Overall, FMM-BSO
is capable of producing promising outcomes that are comparable to, if not superior to,
those from other cutting-edge techniques. FMM-BSO, however, necessitates higher
execution times than FMM-PSO and FMM-GA.
In a study by Zhang et al. [12], the use of artificial intelligence based on machine
learning for big data analysis was examined. The use of non-linear SVM in classification algorithms for big data was researched. Through
the discussion of the multi-classification method, the KNN algorithm was utilized to
enhance the one-to-one SVM approach. The reliability of the revised technique for
massive data analysis was then confirmed by numerical tests and example analysis.
The upgraded one-to-one SVM outperformed the neural network in terms of classi-
fication accuracy for faults in power transformers, reaching 92.9%. This paper offers
a theoretical foundation for the use of support vector machines and other artificial
intelligence tools in large data processing.
The effectiveness of swarm intelligence algorithms is typically assessed using
benchmark functions [13]. The theoretical study of the algorithm’s running times
is lacking. Each member of the swarm is an answer in the search area as well as a
sample of data from the search area. Better algorithms and search techniques could
be suggested based on the assessments of these data. A new and intriguing swarm
intelligence technique is brainstorm optimization (BSO). This work studied the BSO algorithm's evolution and its applications from the standpoint of data analysis. The BSO algorithm may be thought of as combining data mining with swarm intelligence
and textural characteristics were extracted from the gray-level co-occurrence matrix (GLCM) after the morphological operation. Brain MRI scans are
used to classify cancers using a probabilistic neural network (PNN) classifier.
To acquire precise vessels, Wen et al. [19] suggested a unique cerebrovascular segmentation approach. First, a new finite mixture model (FMM) is used to fit the intensity histograms of the images (two Gaussian probability density functions and a Rayleigh distribution function). The best FMM parameters are then obtained using a modified PSO method. Their method, therefore, performs better at segmenting tiny blood vessels. It can cut down on the number of convergence iterations required by other methods such as SA, SEM, or EM, improving performance. Their approach has two drawbacks. First, to reach a stable state, the PSO algorithm must be run through many consecutive cycles; they expect that exploiting the parallel architecture of contemporary graphics hardware will enhance its performance. Second, several fractured points appear in the tiny vessels because their method does not take into account the neighborhood link between the voxels. Rakshit et al. [20] talked about how IoT can be used in
different ways to improve performance and results in the healthcare sector. Singh
et al. [21] have shown how to use soft computing algorithms to assess and predict heart
disease. Suseendran and his research team [22–26] have talked about different ways
to process images, match patterns, and use machine learning to improve accuracy,
comparison results, and performance.
Summary:
• To evaluate the performance of some methods, larger datasets must be used.
• The rate of missing fields in the clinical health data is also rather high, and it greatly affects the classification outcomes.
3 Proposed Method
extracted image is used for segmentation using PSO, and hybrid (BSO + SVM)
classification is used to get the final result.
Figure 1 shows the block diagram of the proposed method. It shows the steps
involved in our proposed hybrid classification method.
(ii) Dataset
This subsection describes the components, the source from which the brain imaging data was gathered, and the techniques for feature extraction and segmentation in brain MRI. The suggested approach applies to datasets with 256 × 256 and 512 × 512-pixel brain MRI images; to improve them further, the images are converted to grayscale.
(iii) Preprocessing
The preprocessing stage raises the caliber of the MR images of brain tumors and
prepares them for upcoming processing by clinical professionals or imaging modali-
ties. Additionally, it aids in enhancing MR image characteristics. The factors include
increasing the signal-to-noise ratio, improving the aesthetic appeal of MR images,
eliminating background noise and unimportant details, smoothing interior areas, and
keeping important edges [18].
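As an illustration of this stage, the short Python sketch below applies grayscale conversion, median filtering, and contrast stretching to a placeholder array; it is only a minimal example of the kind of preprocessing described above, not the authors' exact pipeline, and the synthetic input stands in for a real MR slice.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage import color, exposure

# Placeholder RGB "MRI slice"; in practice this would be loaded from the dataset.
img = np.random.default_rng(0).random((256, 256, 3))

gray = color.rgb2gray(img) if img.ndim == 3 else img.astype(float)  # grayscale conversion
denoised = median_filter(gray, size=3)                              # suppress background noise
enhanced = exposure.rescale_intensity(denoised)                     # stretch contrast, keep edges
print(enhanced.shape, enhanced.min(), enhanced.max())
```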
(iv) PCA Feature Extraction
The technique of obtaining key information from segmented images, such as rough-
ness, shape, contrast, and color properties, is known as feature extraction. It is neces-
sary to reduce the number of characteristics since too many add to computation
times and memory storage, which can occasionally complicate the categorization
of tumors. Since principal component analysis (PCA) effectively reduces the dimension of the data and thus also lowers the computing cost of evaluating new data, it was applied. PCA is an excellent approach for reducing the dimensionality of a data set with many intercorrelated variables while retaining the majority of the variability.
It works by transforming the variables in the data set into new components ordered by their variance. The method has three outcomes: it orthogonalizes the components of the input vectors so that they are uncorrelated with one another, it sorts the resulting orthonormal components in order of decreasing variance, and it eliminates the components that contribute the least to the variance in the data set [5].
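The following minimal Python sketch illustrates this use of PCA on a hypothetical feature matrix (the `features` array is a random placeholder, not data from the paper); the returned components are uncorrelated and ordered by the variance they explain.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
features = rng.random((100, 4096))        # placeholder feature vectors (one row per image)

pca = PCA(n_components=0.95)              # keep enough components for 95% of the variance
reduced = pca.fit_transform(features)     # uncorrelated components, sorted by variance

print(reduced.shape)                      # far fewer columns than the original 4096
print(pca.explained_variance_ratio_[:5])  # variance captured by the leading components
```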
(v) PSO Segmentation
Segmentation is a crucial step that directly affects the classification outcome. Even if the best available classifier is utilized, a poor segmentation will still result in a subpar classification result. Conversely, even with a relatively simple classifier, a solid segmentation will undoubtedly result in a higher classification rate. However, due to factors external to the MRI image, precise segmentation is challenging [5].
Particle Swarm Optimization (PSO), a technique inspired by nature, is used to segment the tumor region from the MR image. By initializing cluster centroids, PSO
offers an optimum solution. It functions similarly to how swarms react and interact
with one another as they move around in quest of a solution [13]. The PSO method
[29] is simple to use, concurrent, and extremely efficient. The PSO approach is useful
for addressing nonlinear, non-differentiable, and multi-modal function combinatorial
optimization because of its parallel structure, which provides great performance.
The PSO algorithm’s information distribution also makes it adaptable [19]. The two
equations below serve as the foundation for applying particle swarm optimization:

Vp(t + 1) = Vp(t) + C1 · rand() · (Pbest() − X(t)) + C2 · rand() · (Gbest() − X(t)) (1)

X(t + 1) = X(t) + Vp(t + 1) (2)

The coordinates of the intensity values are denoted by a particle (i, j), where Vp stands for the particle velocity and X for its position. Gbest() is the overall best fitness value determined by any particle in the solution set, while Pbest() is the fittest solution of a single particle. C1 and C2 are constants, and rand() generates random numbers between 0 and 1. The new location may then be determined along with the new velocity:
• Determining the p-best for each particle and changing it when a newer one
performs better;
• Calculating the g-best value;
• Updating each particle's velocity and position according to Eqs. (1) and (2);
• When the termination requirements are satisfied, the iteration ends; if not, the procedure resumes from step 3 (see the sketch after this list).
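The sketch below illustrates the loop above under simplifying assumptions: each particle encodes k candidate intensity centroids, the fitness is the sum of squared distances of pixels to their nearest centroid, and pixels are finally labelled by the nearest centroid. The parameter values and the synthetic image are illustrative only and are not taken from the paper.

```python
import numpy as np

def pso_segment(image, k=3, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    pixels = image.reshape(-1).astype(float)
    rng = np.random.default_rng(1)

    # Each particle encodes k candidate intensity centroids.
    pos = rng.uniform(pixels.min(), pixels.max(), (n_particles, k))
    vel = np.zeros_like(pos)

    def fitness(centroids):
        # Sum of squared distances of every pixel to its nearest centroid.
        d = np.abs(pixels[:, None] - centroids[None, :])
        return (d.min(axis=1) ** 2).sum()

    pbest = pos.copy()
    pbest_fit = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_fit.argmin()].copy()

    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)  # Eq. (1)
        pos = pos + vel                                                    # Eq. (2)
        fit = np.array([fitness(p) for p in pos])
        better = fit < pbest_fit
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        gbest = pbest[pbest_fit.argmin()].copy()

    labels = np.abs(pixels[:, None] - gbest[None, :]).argmin(axis=1)
    return labels.reshape(image.shape), np.sort(gbest)

# Example with a synthetic 64 x 64 "image".
img = np.random.default_rng(2).integers(0, 256, (64, 64))
mask, centroids = pso_segment(img)
print(centroids)
```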
(vi) Classification

(a) Support Vector Machine (SVM)

The SVM separates the two classes with a hyperplane

ω · x + b = 0, (3)

and for the training samples let

ω · xi + b ≥ 1, yi = 1, (4)

where ω · x denotes the inner product of the weight vector ω and the sample x; this shows that the samples can be separated linearly.
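As a hedged illustration of this classification step, the snippet below trains a linear SVM (decision boundary ω · x + b = 0) on randomly generated, PCA-style feature vectors; the data, labels, and parameters are placeholders rather than the paper's setup.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
reduced = rng.random((100, 20))          # placeholder PCA-reduced feature vectors
y = rng.integers(0, 2, 100)              # placeholder labels: 0 = normal, 1 = abnormal

X_tr, X_te, y_tr, y_te = train_test_split(reduced, y, test_size=0.3, random_state=0)
clf = SVC(kernel="linear", C=1.0)        # linear decision boundary w . x + b = 0
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```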
(b) Brain Storm Optimization (BSO)
One well-known population-based algorithm that draws inspiration from nature and
falls under the umbrella of swarm intelligence is the brainstorm optimizer (BSO).
The brainstorming method used by people to generate ideas served as the model for this algorithm [7]. Yuhui Shi introduced the BSO algorithm in 2011. This method
was used to solve several challenging optimization issues, including path planning,
satellite configuration, clustering, grid system energy optimization, positioning of
drones for the best coverage, etc.
1. Initialization: generate n candidate solutions (individuals) at random and evaluate them;
2. While a "good enough" solution has not been found and the specified number of iterations has not been completed:
3. Clustering: use a clustering algorithm to divide the n individuals into m clusters;
4. Generation: randomly choose one or two cluster(s) to create new individuals;
5. Selection: the newly created individual is compared against the current individual with the same index, and the better one is kept as the new individual;
6. Evaluate the n individuals.
This improves the classifier’s ability to distinguish between normal and abnormal
brain images.
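A compact Python rendering of the generic BSO loop listed above is given below for a simple continuous test function; in the hybrid classifier, the fitness would instead score SVM parameters or feature subsets. The clustering choice (k-means), the step-size schedule, and the probabilities are assumptions made for illustration, not the authors' settings.

```python
import numpy as np
from sklearn.cluster import KMeans

def bso(fitness, dim=5, n=30, m=4, iters=100, p_one_cluster=0.8, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(-5, 5, (n, dim))                 # step 1: random individuals
    fit = np.array([fitness(x) for x in pop])

    for t in range(iters):                             # step 2: main loop
        labels = KMeans(n_clusters=m, n_init=5, random_state=seed).fit_predict(pop)  # step 3

        def pick(cluster):
            members = pop[labels == cluster]
            return members[rng.integers(len(members))] if len(members) else pop[rng.integers(n)]

        step = 1.0 - t / iters                         # linearly shrinking mutation step (simplified)
        for i in range(n):
            if rng.random() < p_one_cluster:           # step 4: one or two clusters
                base = pick(rng.integers(m))
            else:
                c1, c2 = rng.choice(m, 2, replace=False)
                base = 0.5 * (pick(c1) + pick(c2))
            new = base + step * rng.normal(size=dim)
            f_new = fitness(new)
            if f_new < fit[i]:                         # step 5: keep the better individual
                pop[i], fit[i] = new, f_new            # step 6: population stays evaluated
    return pop[fit.argmin()], fit.min()

best, best_fit = bso(lambda x: float(np.sum(x ** 2)))  # toy fitness; lower is better
print(best_fit)
```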
(viii) Result
All of the experimental models were implemented in MATLAB R2017b. The technique is applied to 256 × 256 and 512 × 512 pixel brain MRI images from the dataset. Figure 2 shows the grayscale conversion used as an additional enhancement.
Accuracy:

Accuracy = (TP + TN) / (TP + FP + TN + FN) (6)
The suggested method’s accuracy is displayed in Table 1 and Fig. 3. The result
above demonstrates how accurate the hybrid BSO + SVM classification is. The
hybrid classification algorithm’s average accuracy is 98%. When compared to other
current approaches, our suggested algorithm displays the greatest accuracy and
performance.
4 Conclusion
Big data is characterized by huge volume, high complexity, a variety of data types, and fast transmission rates. As a result, processing and analyzing large data sets has drawn increasing interest. In big data MRI brain images, segmentation and classification are
performed. Brain MRI image analysis is considered one of the most important topics for study and analysis. In this study, big-data MRI brain images are classified using a brand-new hybrid algorithm. For segmentation we use Particle Swarm Optimization (PSO), while Brain Storm Optimization (BSO) and Support Vector Machine (SVM) methods are used in hybrid classification. The suggested hybrid
method has shown improved outcomes across all criteria for the difficult task of
segmenting and classifying brain tumors. The aforementioned outcome demonstrates
that the hybrid BSO + SVM classification is 98% accurate. When compared to other
approaches that are already in use, our suggested algorithm exhibits the best accuracy
and performance.
References
1. Xing, W., & Bei, Y. (2020). Medical Health Big Data Classification Based on KNN Classifi-
cation Algorithm. IEEE Access, 8, 28808–28819. https://doi.org/10.1109/ACCESS.2019.295
5754
2. TchitoTchapga, C., Mih, T. A., TchagnaKouanou, A., FozinFonzin, T., KuetcheFogang, P., Mezatio, B. A., & Tchiotsop, D. (2021). Biomedical Image Classification in a Big Data Architecture Using Machine Learning Algorithms. Journal of Healthcare Engineering, 2021. https://doi.org/10.1155/2021/9998819
3. Ismail, A., Abdlerazek, S., & El-Henawy, I. M. (2020). Big data analytics in heart disease
prediction. Journal of Theoretical and Applied Information Technology, 98(11), 1970–1980.
4. Dixit, A., & Nanda, A. (2019). Brain MR Image Classification via PSO-based Segmentation.
2019 12th International Conference on Contemporary Computing, IC3 2019, 1–5. https://doi.
org/10.1109/IC3.2019.8844883
5. Faisal, Z., & El Abbadi, N. K. (2019). Detection and recognition of brain tumors based on
DWT, PCA, and ANN. Indonesian Journal of Electrical Engineering and Computer Science,
18(1), 56–63. https://doi.org/10.11591/ijeecs.v18.i1.pp56-63
6. Alam, M., & Amjad, M. (2018). Segmentation and Classification of Brain MR Images Using Big
Data Analytics. Proceedings - 2018 4th International Conference on Advances in Computing,
Communication, and Automation, ICACCA 2018, 1–5. https://doi.org/10.1109/ICACCAF.
2018.8776742
7. Tuba, E., Strumberger, I., Bezdan, T., Bacanin, N., & Tuba, M. (2019). Classification and
Feature Selection Method for Medical Datasets by Brain Storm Optimization Algorithm and
Support Vector Machine. Procedia Computer Science, 162(Iii), 307–315. https://doi.org/10.
1016/j.procs.2019.11.289
8. Xue, Y., Zhao, Y., & Slowik, A. (2021). Classification Based on Brain Storm Optimization with
Feature Selection. IEEE Access, 9, 16582–16590. https://doi.org/10.1109/ACCESS.2020.304
5970
9. Cheng, S., Sun, Y., Chen, J., Qin, Q., Chu, X., Lei, X., & Shi, Y. (2017). A comprehensive survey
of brain storm optimization algorithms. 2017 IEEE Congress on Evolutionary Computation,
CEC 2017 - Proceedings, 1637–1644. https://doi.org/10.1109/CEC.2017.7969498
10. Van opbroek, A., Van der Lijn, F., & De Bruijne, M. (2022). Automated Brain-Tissue Segmen-
tation by Multi-Feature SVM Classification. The MIDAS Journal. https://doi.org/10.54294/
ojfo7q
11. Pourpanah, F., Lim, C. P., Wang, X., Tan, C. J., Seera, M., & Shi, Y. (2019). A hybrid model
of fuzzy min–max and brainstorm optimization for feature selection and data classification.
Neurocomputing, 333, 440–451. https://doi.org/10.1016/j.neucom.2019.01.011
12. Zhang, Z. (2020). Big data analysis with artificial intelligence technology based on a machine
learning algorithm. Journal of Intelligent and Fuzzy Systems, 39(5), 6733–6740. https://doi.
org/10.3233/JIFS-191265
13. Cheng, S., Qin, Q., Chen, J., & Shi, Y. (2016). Brainstorm optimization algorithm: A review.
Artificial Intelligence Review, 46(4), 445–458. https://doi.org/10.1007/s10462-016-9471-0
14. Xue, Y., & Zhao, Y. (2022). Structure and weights search for classification with feature selection
based on the brainstorm optimization algorithm. Applied Intelligence, 52(5), 5857–5866. https:/
/doi.org/10.1007/s10489-021-02676-w
15. Ji, W., Yin, S., & Wang, L. (2019). A big data analytics-based machining optimization approach.
Journal of Intelligent Manufacturing, 30(3), 1483–1495. https://doi.org/10.1007/s10845-018-
1440-9
16. Narmatha, C., Eljack, S. M., Tuka, A. A. R. M., Manimurugan, S., & Mustafa, M. (2020). A hybrid fuzzy brain-storm optimization algorithm for the classification of brain tumor MRI images. Journal of Ambient Intelligence and Humanized Computing. https://doi.org/10.1007/s12652-020-02470-5
17. Surantha, N., Lesmana, T. F., & Isa, S. M. (2021). Sleep stage classification using extreme
learning machine and particle swarm optimization for healthcare big data. Journal of Big Data,
8(1). https://doi.org/10.1186/s40537-020-00406-6
18. Varuna Shree, N., & Kumar, T. N. R. (2018). Identification and classification of brain tumor MRI
images with feature extraction using DWT and probabilistic neural network. Brain Informatics,
5(1), 23–30. https://doi.org/10.1007/s40708-017-0075-5
19. Wen, L., Wang, X., Wu, Z., Zhou, M., & Jin, J. S. (2015). A novel statistical cerebrovascular
segmentation algorithm with particle swarm optimization. Neurocomputing, 148, 569–577.
https://doi.org/10.1016/j.neucom.2014.07.006
20. Rakshit, P., Nath, I., & Pal, S. (2020). Application of IoT in healthcare. Principles of Internet of
Things (IoT) Ecosystem: Insight Paradigm, pp. 263–277. https://doi.org/10.1007/978-3-030-
33596-0_10
21. Singh, D., Sahana, S., Pal, S., Nath, I., Bhattacharyya, S. (2020). Assessment of the Heart
Disease Using Soft Computing Methodology. In: Solanki, V., Hoang, M., Lu, Z., Pattnaik, P.
(eds) Intelligent Computing in Engineering. Advances in Intelligent Systems and Computing,
vol 1125. Springer, Singapore. https://doi.org/10.1007/978-981-15-2780-7_1
22. Suseendran, G., Balaganesh, D., Akila, D., & Pal, S. (2021, May). Deep learning frequent
pattern mining on static semi structured data streams for improving fast speed and complex
data streams. In 2021 7th International Conference on Optimization and Applications (ICOA)
(pp. 1–8). IEEE. doi: https://doi.org/10.1109/ICOA51614.2021.9442621.
23. Jeyalaksshmi, S., Akila, D., Padmapriya, D., Suseendran, G., & Pal, S. (2021). Human Facial
Expression Based Video Retrieval with Query Video Using EBCOT and MLP. In Proceed-
ings of First International Conference on Mathematical Modeling and Computational Science:
ICMMCS 2020 (pp. 157–166). Springer Singapore. https://doi.org/10.1007/978-981-33-4389-
4_16
24. Pal, S., Suseendran, G., Akila, D., Jayakarthik, R., & Jabeen, T. N. (2021, January). Advanced
FFT architecture based on Cordic method for Brain signal Encryption system. In 2021 2nd Inter-
national Conference on Computation, Automation and Knowledge Management (ICCAKM)
(pp. 92–96). IEEE.doi: https://doi.org/10.1109/ICCAKM50778.2021.9357770.
25. Suseendran, G., Doss, S., Pal, S., Dey, N., & Quang Cuong, T. (2021). An Approach on Data
Visualization and Data Mining with Regression Analysis. In Proceedings of First International
Conference on Mathematical Modeling and Computational Science: ICMMCS 2020 (pp. 649–
660). Springer Singapore. https://doi.org/10.1007/978-981-33-4389-4_59
26. Suseendran, G., Chandrasekaran, E., Pal, S., Elangovan, V. R., & Nagarathinam, T. (2021).
Comparison of Multidimensional Hyperspectral Image with SIFT Image Mosaic Methods for
Mosaic Better Accuracy. In Intelligent Computing and Innovation on Data Science: Proceedings
of ICTIDS 2021 (pp. 201–212). Springer Singapore. https://doi.org/10.1007/978-981-16-3153-
5_23
Breast Cancer Detection Using Hybrid
Segmentation Using FOA and FCM
Clustering
Abstract Medical image processing has recently been widely applied in a variety of
fields. Finding anomalies in such images is highly beneficial for the early diagnosis of ailments. There are several techniques available for segmenting
MRI images to find breast cancer. Breast cancer is the second greatest cause of death
in women. Early detection of breast cancer reduces the number of women who die
from cancer. If caught in time, breast cancer is among the forms of cancer that can be
cured. In this study, breast cancer is identified in medical images using a unique hybrid segmentation technique. The fruit fly optimization algorithm (FOA) and FCM clustering are both used in the hybrid segmentation. To obtain more accurate values of the clustering centers in FCM clustering, a fruit fly optimization algorithm (FOA) approach was applied. The MRI image features are extracted using the Improved Gabor wavelet transform (IGWT). When compared to other approaches, the results demonstrate that the hybrid segmentation performs well, with a good accuracy of 96.50%.
1 Introduction
Some image segmentation techniques such as SVM, FCM, and ANN may be beneficial in particular settings, but methods with better efficiency and accuracy are needed to detect breast cancer early. The hybrid method suggested in our work helps in the early detection of breast cancer for diagnosis.
2 Related Works
This section gives some basic information on breast cancer and the methods used
to diagnose it, such as various optimization and clustering algorithms. The section
below also contains further medical segmentation techniques.
Al-Ayyoub et al. [9] described how to use GPUs to accelerate single-pass fuzzy C-means (SPFCM) segmentation of mammography images. The results showed that, using the GPU's computing power, the segmentation can be performed quickly and accurately, which means the technology could be used in practice to make the process of diagnosing cancer faster and more accurate.
An effective and computationally efficient method for addressing medical data
categorization issues is FOA-SVM [10]. The proposed FOA-based method, which
looks at the novel swarm-based technique for the optimum parameter tuning for the
medical classification data, is what makes this study unique. It aims at maximizing the
generalization capability of the classification model. The actual investigations have
shown that in terms of numerous assessment factors, notably the computation time
cost, the recommended FOA-SVM beat four other competing choices. This implies
that the proposed FOA-SVM method can be a helpful alternative clinical choice
for medical decision support. A novel FS-based categorization model for CKD was
introduced by JerlinRubini and Perumal [11]. The usage of FFOA for effective FS
and MKSVM for classification purposes is the work’s primary novelty. The MKSVM
algorithm is used to categorize the data once the FFOA has been run to produce a set of chosen features. Four benchmark datasets—chronic renal, Ohio, Hungarian, and Swiss—were used to evaluate the proposed work. The findings demonstrate that the suggested methodology delivers the greatest classifier performance of 98.5% for the chronic renal dataset compared to the standard HKSVM, FMMGNN, and SVM approaches. Additionally, it retains the lowest FNR and FPR compared to existing approaches and achieves the highest sensitivity, specificity, PPV, and NPV values.
Kapila and Bhagat [12] proposed brain tumor segmentation and classification carried out in the MATLAB working environment. To determine the performance of the proposed technique, "Sensitivity, Selectivity, Accuracy, PPV, NPV, FPR, and FNR" are used. The suggested tumor segmentation and classification achieves the best levels of specificity and accuracy when compared to the current methods. To assess outcomes, the suggested methodology (HFFABC + ANN) is contrasted with the presently applied methods (Fruitfly + ANN) and (ABC + ANN). The suggested method achieved 98.1% sensitivity, 98.9% accuracy, and 99.59% dependability on brain MRI images. The experimental findings clearly show that the suggested strategy works better than the existing methods.
According to Cahoon et al. [13], using intensity alone as the main distinguishing
characteristic would result in increased misclassification rates for both supervised and
unsupervised segmentation algorithms in digital mammograms. However, techniques like the K-NN algorithm are capable of significantly lowering the frequency of incorrectly labeled pixels in certain sections of the picture when given additional information such as window standard deviations and means.
An enhanced FCM method called HisFCM was proposed by Zhang et al. [14] to
better use the information included in the provided image. HisFCM does better than
FCM, FCM_S, and EnFCM at segmenting medical images because it uses the best parts of all three. HisFCM can also deal with medical data in real time and works
much better than other improved algorithms. The suggested method, on the other
hand, might not be able to find regions of interest (ROI) in pictures, especially when
it comes to complicated medical images, because it is a segmentation method that
only uses the image’s color features and statistical information.
A novel approach for segmenting colored images was put out by Harrabi et al.
[15] and was based on a customized fuzzy c-means technique and several color
spaces. Using the accuracy classification degree, the most important components
of the employed color spaces are chosen in the first stage. Then, these various bits
of information are clustered into homogenous areas using a modified version of a
Fuzzy C-means (FCM) method. The acquired findings demonstrate the method's generality and robustness, since the fuzzy c-means approach incorporates the most significant component images. The findings showed that segmentation performance
has significantly improved. The segmentation of colored images can benefit from the
proposed approach.
Singh et al. [16] said that breast cancer is one of the main reasons why women die.
Using fuzzy C-Means grouping and K-means clustering, the authors of this article
show a new way to find exact clusters in mammograms that show cancer mass and
calcification. By putting them together, they were able to figure out where the breast
cancer was in mammograms that had not been processed. The results demonstrate
that this technique can aid doctors in making a quicker diagnosis of breast cancer and
identifying the entire area that the disease has affected. This will help the doctor figure
out what stage of cancer the patient has so that important and effective treatments can
be given. Their study is based on a visual detection approach using mammography
processing pictures. Using the right data-collecting software or hardware connection
with digital mammography devices, a real-time system may be developed.
Kanungo et al. [17] noted that breast cancer is one of the main reasons why women die. The prevention of cancer has therefore been demonstrated to depend on early
diagnosis by routine screening and prompt treatment. According to the paper, radiol-
ogists’ interpretation of the patient’s therapy from the patient’s raw mammography
pictures, which are only 63% accurate, is misleading. Using clustering techniques such as K-means, fuzzy C-means, and FPCM, they have presented a novel technique in this study for identifying breast cancer masses or calcifications in mammograms. They
then recommended GA-ACO-FCM clustering for unequivocal mass identification.
By combining these, they were able to precisely (92.52%) locate the breast cancer
spot in the original mammography images. The findings suggest that this method can
help the radiologist diagnose breast cancer at an early stage and categorize the whole
cancer-affected region. This will assist the doctor in determining the patient’s cancer
stage so that essential and effective treatment procedures may be taken. The suggested
approach is inexpensive since it may be used with any type of computer. Using the
right data-collection hardware and software interaction with digital mammography
devices, a real-time system may be developed. Singh et al. [18] have presented
assessment techniques using soft computing algorithms to predict heart disease.
Suseendran and his research team [19–23] have discussed different image processing
techniques, pattern matching techniques, and implied machine learning algorithms
to get better accuracy, better comparison results, and better performance. Rakshit
et al. [24] have discussed different IoT-based methodologies and techniques to get
better performance and results in the healthcare sector.
Summary:
• Investigate the aforementioned possibilities using current mammography equip-
ment.
• Color image segmentation may benefit certain methods.
• The clustering approach used earlier improved categorization and decreased instances of incorrect classification, further improving classification accuracy.
3 Proposed Method
The overarching goal of early cancer diagnosis is the preservation of human life. From a medical standpoint, this is essential for keeping track of patients. Given that breast cancer is a top cause of cancer-related mortality in women globally, early identification of malignant growth is crucial for a doctor's ability to make a proper diagnosis and choose the best course of action. If caught in time, breast cancer is among the forms of cancer that can be cured. Breast cancer is frequently diagnosed by self-examination, performed either by the patient or by a clinician. This manual exam looks for lumps or other abnormalities in the size, shape, or location of the breasts [9].
The following four key steps make up the suggested MRI breast cancer diagnosis:
(1) Pictures: In the initial stage of the inquiry, we get clinical data from MRI scans to
diagnose breast cancer. (2) Preprocessing stage: In the investigation’s second stage,
a preprocessing technique is offered. Any image processing technique’s initial step
is often preprocessing. Enhancing picture quality and identifying those components
of the picture that are necessary for further processing are the main objectives of the
preprocessing approach. (3) Phase of feature extraction: The breast cancer picture
features are extracted using the Improved Gabor Wavelet Transform (IGWT). (4) Segmentation phase: breast MRI images are segmented employing the hybrid segmentation approach in the final phase. This method is a hybrid segmentation of breast MRI images using the FCM clustering algorithm with fruit fly optimization. Fig. 1 shows the key steps suggested for MRI breast cancer diagnosis using
hybrid segmentation.
(i) Images
Magnetic resonance imaging is the most promising alternative to mammography for finding some tumors that mammography might miss. Additionally, by determining the extent of the disease, radiologists and other medical professionals can use MRI to help them make decisions about how to treat breast cancer patients.
The Wisconsin dataset consists of 699 instances with nine attributes derived from needle aspirates of patients' breasts. The aim is to distinguish between benign and malignant samples. Each of the nine attributes differs substantially between the malignant and benign samples.
(ii) Preprocessing
The raw input health records are supplied as input during preprocessing. These raw data are very susceptible to noise, missing values, and inconsistency, and the quality of the raw data influences the accuracy of categorization. Preprocessing should therefore be applied to unrefined data to improve the quality of the medical data. In this article, preprocessing is particularly useful for converting a dataset containing non-numerical information into a numerical structure; the numerical dataset is obtained by encoding the non-numerical data. Once preprocessing is complete, it becomes relatively simple to predict whether a disease is present or not; otherwise, the final results may be difficult to trust. The primary objective of the preprocessing component is to remove or reduce isolated and undesired background segments from the input image so that it becomes more suitable for post-processing [12].
(iii) Feature extraction
The recommended method uses the Improved Gabor Wavelet Transform (IGWT) for feature extraction. Here, an optimization method is used to alter the conventional Gabor wavelet transform: the oppositional fruit fly method is used to improve the Gabor filter's efficacy. An improved Gabor wavelet, rather than the conventional GWT, is applied to the preprocessed pictures [8]. Below, we present the mathematical formulation of the IGWT.
The fundamental wavelet for IGWT is

g_{f,σ}(y) = ∫_{−∞}^{∞} g_{f,σ}(t) e^{−j2πyt} dt = exp(−2π²σ²(y − f)² / f²) (1)

where f denotes the dominant (center) frequency, σ denotes the resolution factor, and g_{f,σ} is generated by scaling of the wavelet; y and t are the frequency- and time-domain variables. Improving the picture quality before the feature selection technique is applied enhances the efficiency of the Gabor wavelet transform [8].
It is possible to capture multiscale and multi-orientation textural segments in the abnormal region by tuning Gabor kernels with different scales and orientations. The Gabor filter obtains its characteristics directly from the gray-level frames. Abnormal micro-patterns in the segmented area come in different sizes and orientations [20], and these patterns can be used to identify breast abnormalities. Such micro-patterns may be effectively examined using Gabor filters [1].
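The sketch below shows one plausible way to extract such multi-scale, multi-orientation Gabor texture features from a grayscale region of interest; the frequencies, orientations, and statistics used are illustrative assumptions, and the oppositional fruit fly tuning of the IGWT is omitted.

```python
import numpy as np
from skimage.filters import gabor

def gabor_features(roi, frequencies=(0.1, 0.2, 0.3), n_orient=4):
    feats = []
    for f in frequencies:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            real, imag = gabor(roi, frequency=f, theta=theta)  # Gabor filter responses
            mag = np.hypot(real, imag)                         # response magnitude
            feats.extend([mag.mean(), mag.var()])              # simple texture descriptors
    return np.array(feats)

roi = np.random.default_rng(4).random((64, 64))  # placeholder region of interest
print(gabor_features(roi).shape)                 # 3 scales x 4 orientations x 2 statistics
```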
(iv) Segmentation method
(a) Fuzzy C-means algorithm
One of the most popular approaches for pattern identification is the fuzzy C-means
method, sometimes referred to as fuzzy ISODATA. One item of data may belong
to several groups when clustering is done using fuzzy C-means (FCM) [17]. Achieving a good classification depends on minimizing an objective function; the solutions are the stationary points of the least-squares clustering criterion. The method is frequently used in pattern recognition [4] and is based on minimizing the objective function:
J_m = Σ_{a=1}^{N} Σ_{b=1}^{C} U_{ab}^m ‖X_a − C_b‖², 1 ≤ m < ∞ (2)

where m is any real number greater than 1, ‖·‖ is any norm expressing the similarity between a measured data point and a cluster center, U_{ab} is the degree of membership of X_a in cluster b, X_a is the a-th of the d-dimensional measured data points, and C_b is the d-dimensional center of cluster b. During fuzzy partitioning, the objective function is iteratively optimized, and the memberships U_{ab} and cluster centers C_b are updated by

U_{ab} = 1 / Σ_{k=1}^{C} ( ‖X_a − C_b‖ / ‖X_a − C_k‖ )^{2/(m−1)} (3)

C_b = Σ_{a=1}^{N} U_{ab}^m X_a / Σ_{a=1}^{N} U_{ab}^m (4)

The iteration ends when max_{ab} |U_{ab}^{(k+1)} − U_{ab}^{(k)}| < ε, where k is the number of iteration steps and ε is a termination criterion between 0 and 1. This process converges to a saddle point or a local minimum of J_m.
Bezdek created fuzzy c-means (FCM), a fundamental kind of fuzzy clustering. It provides a method for dividing data sets that span many dimensions into a specified number of clusters, and the degree to which each piece of data belongs to a cluster is determined by the FCM approach [4].
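A minimal NumPy sketch of the FCM iteration defined by Eqs. (3) and (4) is given below; the data, number of clusters, fuzzifier m, and termination threshold are placeholder choices, not the paper's configuration.

```python
import numpy as np

def fcm(data, c=3, m=2.0, eps=1e-5, max_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((data.shape[0], c))
    U /= U.sum(axis=1, keepdims=True)        # memberships of each point sum to 1

    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ data) / Um.sum(axis=0)[:, None]                  # Eq. (4)
        dist = np.linalg.norm(data[:, None, :] - centers[None], axis=2) + 1e-12
        ratio = dist[:, :, None] / dist[:, None, :]
        U_new = 1.0 / (ratio ** (2.0 / (m - 1))).sum(axis=2)               # Eq. (3)
        if np.abs(U_new - U).max() < eps:                                  # termination test
            U = U_new
            break
        U = U_new
    return centers, U

# Example: cluster pixel intensities of a synthetic image into 3 groups.
img = np.random.default_rng(5).random((32, 32))
centers, U = fcm(img.reshape(-1, 1))
labels = U.argmax(axis=1).reshape(img.shape)
print(np.sort(centers.ravel()))
```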
(b) Fruitfly optimization algorithm (FOA)
Based on the fruit fly's natural tendency to seek out food, the fruit fly optimization technique seeks a global optimum. The fly utilizes its osphresis (smell) capabilities to detect the scent of food: fruit flies use their sense of smell to get close to the meal before using eyesight to reach it. Depending on the group's swarming behavior, other flies may also fly toward the vicinity of the food. Using this food-finding behavior, the ideal input weight parameters to optimize for ELM are discovered [1].
The fruit fly algorithm imitates the foraging habits of fruit flies and is a recent technique for global optimization, inspired by investigation into the foraging behavior of fly swarms. Fruit flies have acute vision and osphresis, making them expert food hunters. A fly initially looks for food by sensing the smells floating across the area and sniffing about; it may fly to that exact place after getting close to the meal [8], or find a fruit once there thanks to its keen vision. In the FOA, the optimum corresponds to the source of food, and the foraging process is replicated by iteratively searching for the optimum. Compared with various other animals, fruit flies are better at recognizing and evaluating smell and visual cues: their olfactory apparatus is sensitive to a wide range of odors and may even be able to identify a food source from a long distance away [11]. Once the material in the immediate area has been eaten, the fly may use its delicate eyesight to locate food and fly there.
The fruit flies' method of finding food can be summarized as follows: (a) first, they use their olfactory organ to sense the source of food before trying to fly toward that location; (b) next, they use their sensitive eyes to get nearer to the food location; and (c) finally, the flock of fruit flies updates its location and flies in that direction.
Algorithm 1: Fruit fly optimization algorithm (FOA) [1, 6]
1. Deploy the fruit fly swarm at a random location to start the algorithm; initialize the X-axis and Y-axis.
2. Randomly set the direction and distance for each fruit fly to travel when looking for food:
3. Xi = X-axis + random value
4. Yi = Y-axis + random value
5. Because the direction of the food cannot yet be detected, the distance to the origin (Dist) is first determined, and the smell concentration judgment value (S), a quantity that is the reciprocal of the distance, is estimated.
6. To compute the smell concentration (Smelli) of each fruit fly location, substitute the smell concentration judgment value (S) into the smell concentration judgment (fitness) function: Smelli = Function(Si).
7. In the fruit fly swarm, pick the fruit fly with the greatest smell concentration (the highest value): [bestSmell, bestIndex] = max(Smell).
8. Keep the best smell concentration value and the corresponding location (x, y); the fruit fly swarm then uses vision to fly toward that food source.
9. smellbest = bestSmell
10. X-axis = X(bestIndex)
11. Y-axis = Y(bestIndex)
12. Repeat steps 2 through 7 and check whether the current best smell concentration is higher than that of the previous iteration; if so, go to step 8.
fuzzy C-Means model. The fundamental drawback of the fruit fly technique is that it
is less computationally efficient and that, later in the evolutionary process, it easily
becomes trapped at a local optimal value. Segmentation is more computationally
efficient in the FCM. Consequently, the hybrid segmentation approach will be more
effective than the separate methods.
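For illustration, the following Python sketch renders the FOA steps of Algorithm 1 on a toy one-dimensional problem; in the proposed hybrid, the smell (fitness) function would instead score candidate FCM cluster centers. The swarm size, iteration count, and toy fitness are assumptions made for this example.

```python
import numpy as np

def foa(fitness, n_flies=20, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    x_axis, y_axis = rng.uniform(-1, 1, 2)           # step 1: random swarm location
    best_smell = -np.inf

    for _ in range(iters):
        # steps 2-4: random search direction and distance for each fly
        x = x_axis + rng.uniform(-1, 1, n_flies)
        y = y_axis + rng.uniform(-1, 1, n_flies)
        dist = np.sqrt(x ** 2 + y ** 2) + 1e-12      # step 5: distance to the origin
        s = 1.0 / dist                               # smell concentration judgment value
        smell = fitness(s)                           # step 6: evaluate smell concentration
        idx = smell.argmax()                         # step 7: fly with the best smell
        if smell[idx] > best_smell:                  # steps 8-12: move the swarm if improved
            best_smell = smell[idx]
            x_axis, y_axis = x[idx], y[idx]
    return best_smell, 1.0 / np.sqrt(x_axis ** 2 + y_axis ** 2)

# Toy smell function: peaks when the judgment value s is close to 0.5.
best, s_best = foa(lambda s: -np.abs(s - 0.5))
print(best, s_best)
```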
(v) Result
The Wisconsin dataset consists of 699 instances with nine attributes derived from needle aspirates of patients' breasts. The aim is to distinguish between benign and malignant samples. It was discovered that the nine attributes in
each sample varied considerably between benign and cancerous samples. For this
suggested study, a 3.20 GHz Intel Core i7 microprocessor, Windows 7 OS, 4 GB
RAM, and MATLAB (version 2015a) are employed.
The performance of the proposed model was evaluated using the classification accuracy (ACC), the area under the receiver operating characteristic curve (AUC),
sensitivity, and specificity. ACC, sensitivity, and specificity are defined as follows:
Accuracy: the percentage of input images that are segmented correctly [12]. It is the proportion of correctly predicted observations to all observations and may also be described as the probability that the test is performed correctly.
Accuracy = (TP + TN) / (TP + FP + FN + TN) × 100% (5)

Sensitivity: the proportion of actual positive cases that are correctly identified.

Sensitivity = TP / (TP + FN) × 100% (6)

Specificity: the proportion of input pictures that are accurately segmented as negative (an indicator of how accurately the segmentation avoids undesired results).

Specificity = TN / (FP + TN) × 100% (7)
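A small worked example of Eqs. (5)–(7) follows; the confusion-matrix counts (TP, FP, FN, TN) are invented purely for illustration.

```python
# Hypothetical pixel-wise confusion counts for one segmented image.
TP, FP, FN, TN = 940, 25, 35, 4000

accuracy    = (TP + TN) / (TP + FP + FN + TN) * 100   # Eq. (5)
sensitivity = TP / (TP + FN) * 100                    # Eq. (6)
specificity = TN / (FP + TN) * 100                    # Eq. (7)

print(f"ACC={accuracy:.2f}%  SEN={sensitivity:.2f}%  SPE={specificity:.2f}%")
```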
The segmentations using the fruit fly algorithm, fuzzy C-means, and the hybrid segmentation based on FOA and FCM are shown in Fig. 2.
Table 1 and Fig. 3 demonstrate the proposed method's accuracy. The highest accuracy is 96.70%, and we achieved an average accuracy of 96.50% with our hybrid segmentation based on the fruit fly algorithm and fuzzy c-means. When compared to other current methods, our suggested hybrid technique has the highest accuracy and greatest performance.
Fig. 2 a Segmentation using FOA, b segmentation using FCM [4], and c segmentation using our proposed hybrid method
4 Conclusion
Breast cancer is one of the top causes of death among women. This article uses
a novel hybrid segmentation technique to identify cancer in medical images. In
the hybrid segmentation, the fruit fly optimization algorithm (FOA) and FCM clustering are employed. The FOA method was employed to identify the FCM clustering centers with the highest degree of accuracy. MRI
images’ characteristics are extracted using the Improved Gabor wavelet transform
(IGWT). The findings show that this method can help doctors diagnose breast cancer
more quickly and define the entire region that has been impacted by the disease.
This will assist the doctor in determining the patient’s cancer stage so that essential
and effective treatment procedures may be taken. The findings demonstrate that
the hybrid segmentation performs well, with a high accuracy of 96.50%, when compared to other approaches. Our suggested techniques yield encouraging outcomes. In future work, hybrid segmentation can be performed utilizing other clustering and optimization techniques.
References
1. Melekoodappattu, J. G., Subbian, P. S., & Queen, M. F. (2021). Detection and classification
of breast cancer from digital mammograms using hybrid extreme learning machine classifier.
International Journal of Imaging Systems and Technology, 31(2), 909–920.
2. Huang, H., Feng, X. A., Zhou, S., Jiang, J., Chen, H., Li, Y., & Li, C. (2019). A new fruit fly
optimization algorithm enhanced support vector machine for diagnosis of breast cancer based
on high-level features. BMC Bioinformatics, 20(8), 1–14.
3. Prakash, R. M., Bhuvaneshwari, K., Divya, M., Sri, K. J., & Begum, A. S. (2017). Segmentation
of thermal infrared breast images using K-means, FCM, and EM algorithms for breast cancer
detection. In 2017 International Conference on Innovations in Information, Embedded and
Communication Systems (ICIIECS) (pp. 1–4). IEEE.
4. Kannan, S. R., Ramathilagam, S., Devi, R., & Sathya, A. (2011). Robust kernel FCM in
segmentation of breast medical images. Expert Systems with Applications, 38(4), 4382–4389.
5. Hassanien, A. E., Moftah, H. M., Azar, A. T., & Shoman, M. (2014). MRI breast cancer
diagnosis hybrid approach using adaptive ant-based segmentation and multilayer perceptron
neural networks classifier. Applied Soft Computing, 14, 62–71.
6. Melekoodappattu, J. G., & Subbian, P. S. (2020). Automated breast cancer detection using
hybrid extreme learning machine classifier. Journal of Ambient Intelligence and Humanized
Computing, 1–10.
7. Kavitha, P., & Prabakaran, S. (2019). A novel hybrid segmentation method with particle swarm
optimization and fuzzy c-mean based on partitioning the image for detecting lung cancer.
8. Krishnakumar, S., & Manivannan, K. (2021). Effective segmentation and classification of brain
tumor using rough K mean algorithm and multi-kernel SVM in MR images. Journal of Ambient
Intelligence and Humanized Computing, 12(6), 6751–6760.
9. Al-Ayyoub, M., AlZu’bi, S. M., Jararweh, Y., & Alsmirat, M. A. (2016). A GPU-based breast
cancer detection system using single pass fuzzy c-means clustering algorithm. In 2016 5th
International Conference on Multimedia Computing and Systems (ICMCS) (pp. 650–654).
IEEE.
10. Shen, L., Chen, H., Yu, Z., Kang, W., Zhang, B., Li, H., ... & Liu, D. (2016). Evolving support
vector machines using fruit fly optimization for medical data classification. Knowledge-Based
Systems, 96, 61–75
11. JerlinRubini, L., & Perumal, E. (2020). Efficient classification of chronic kidney disease by
using multi-kernel support vector machine and fruit fly optimization algorithm. International
Journal of Imaging Systems and Technology, 30(3), 660–673.
12. Kapila, D., & Bhagat, N. (2022). Efficient feature selection technique for brain tumor classifi-
cation utilizing hybrid fruit fly-based ABC and ANN algorithm. Materials Today: Proceedings,
51, 12–20.
13. Cahoon, T. C., Sutton, M. A., & Bezdek, J. C. (2000). Breast cancer detection using image
processing techniques. In Ninth IEEE International Conference on Fuzzy Systems. FUZZ-IEEE
2000 (Cat. No. 00CH37063) (Vol. 2, pp. 973–976). IEEE.
14. Zhang, X., Zhang, C., Tang, W., & Wei, Z. (2012). Medical image segmentation using improved
FCM. Science China Information Sciences, 55(5), 1052–1061.
15. Harrabi, R., & Braiek, E. B. (2014). Color image segmentation using a modified Fuzzy C-
Means technique and different color spaces: Application in the breast cancer cells images. In
2014 1st International Conference on Advanced Technologies for Signal and Image Processing
(ATSIP) (pp. 231–236). IEEE.
16. Singh, N., Mohapatra, A. G., & Kanungo, G. (2011). Breast cancer mass detection in mammo-
grams using K-means and fuzzy C-means clustering. International Journal of Computer
Applications, 22(2), 15–21.
17. Kanungo, G. K., Singh, N., Dash, J., & Mishra, A. (2015). Mammogram image segmentation
using hybridization of fuzzy clustering and optimization algorithms. In Intelligent Computing,
Communication and Devices (pp. 403–413). Springer, New Delhi.
18. Singh, D., Sahana, S., Pal, S., Nath, I., Bhattacharyya, S. (2020). Assessment of the heart
disease using soft computing methodology. In: Solanki, V., Hoang, M., Lu, Z., Pattnaik, P.
(Eds.), Intelligent Computing in Engineering. Advances in Intelligent Systems and Computing,
vol 1125. Springer, Singapore. https://doi.org/10.1007/978-981-15-2780-7_1.
19. Suseendran, G., Chandrasekaran, E., Pal, S., Elangovan, V. R., & Nagarathinam, T. (2021).
Comparison of multidimensional hyperspectral image with SIFT image mosaic methods for
mosaic better accuracy. In Intelligent Computing and Innovation on Data Science: Proceedings
of ICTIDS 2021 (pp. 201–212). Springer Singapore. https://doi.org/10.1007/978-981-16-3153-
5_23.
20. Suseendran, G., Balaganesh, D., Akila, D., & Pal, S. (2021). Deep learning frequent pattern
mining on static semi structured data streams for improving fast speed and complex data
streams. In 2021 7th International Conference on Optimization and Applications (ICOA) (pp. 1–
8). IEEE. https://doi.org/10.1109/ICOA51614.2021.9442621.
21. Jeyalaksshmi, S., Akila, D., Padmapriya, D., Suseendran, G., & Pal, S. (2021). Human facial
expression based video retrieval with query video using EBCOT and MLP. In Proceedings
of First International Conference on Mathematical Modeling and Computational Science:
ICMMCS 2020 (pp. 157–166). Springer Singapore.https://doi.org/10.1007/978-981-33-4389-
4_16
22. Suseendran, G., Doss, S., Pal, S., Dey, N., & Quang Cuong, T. (2021). An approach on data
visualization and data mining with regression analysis. In Proceedings of First International
Conference on Mathematical Modeling and Computational Science: ICMMCS 2020 (pp. 649–
660). Springer Singapore. https://doi.org/10.1007/978-981-33-4389-4_59
23. Pal, S., Suseendran, G., Akila, D., Jayakarthik, R., & Jabeen, T. N. (2021). Advanced FFT
architecture based on Cordic method for Brain signal Encryption system. In 2021 2nd Inter-
national Conference on Computation, Automation and Knowledge Management (ICCAKM)
(pp. 92–96). IEEE.doi: https://doi.org/10.1109/ICCAKM50778.2021.9357770
24. Rakshit, P., Nath, I., & Pal, S. (2020). Application of IoT in healthcare. Principles of Internet of
Things (IoT) Ecosystem: Insight Paradigm, pp. 263–277. https://doi.org/10.1007/978-3-030-
33596-0_10
Hybrid Optimization Using CC and PSO
in Cryptography Encryption for Medical
Images
S. Adhikari
School of Engineering, Swami Vivekananda University, Kolkata, India
e-mail: [email protected]
M. Brayyich
College of Engineering, Medical Instruments Technology Engineering, National University of
Science and Technology, Dhi Qar, Iraq
e-mail: [email protected]
D. Akila (B)
Department of Computer Applications, Saveetha College of Liberal Arts and Sciences, SIMATS,
Chennai, India
e-mail: [email protected]
B. Sakar
Department of Computer Science and Engineering, JIS College of Engineering, Kalyani, India
e-mail: [email protected]
S. Devika · S. Revathi
Department of Computer Applications, Agurchand Manmull Jain College, Chennai, India
1 Introduction
Certain security requirements must be met to send medical images securely. These requirements include integrity, authenticity, and confidentiality [1]. Cryptographic methods may be used to meet the stated security requirements by encrypting the medical image to achieve confidentiality and by using digital signatures to assure authenticity and integrity. The techniques discussed in this work show how encryption methods protect medical images. The primary goal is to secure medical images both during transmission and while such information is stored. The next challenge is to make sure that the encrypted data can tolerate harsh processing, such as compression. Protection must be given top priority because there are still many security issues in the world of cloud computing. Since the child's name, address, and other health details are accessible online, there is a chance that theft, unauthorized access, and security breaches might happen to the data. Effective protection of these records is required. By using cryptographic techniques to encrypt the original message, it is possible to grant outside access to those records safely [1].
Owing to the need for the secure transmission of medical images, global healthcare
organizations have been able to develop specific security protocols for medical data.
One such standard is Digital Imaging and Communications in Medicine (DICOM).
The standard provides guidelines and procedures for achieving the three telehealth
security services of confidentiality, authenticity, and integrity. While the integrity
and authenticity service is required to validate ownership and identify photo modifi-
cations, the secrecy service is required to prevent unwanted access to the transmitted
images. Today, cryptography and digital watermarking technologies are used to build
approaches and algorithms capable of providing the required security services for
telemedicine applications [2].
In the field of medicine, prompt and reliable diagnosis is crucial. These days it is common practice to transfer images, and thus it is crucial to find an effective way to do so over the network. Various security requirements must be followed to ensure the secure transmission of medical images. These requirements include confidentiality, integrity, and authenticity. Data assurance has become a crucial issue as a result of several communication security problems. Security for images is a significant challenge, especially when sophisticated images carry a significant amount of information. The necessity to meet the security requirements of digital images has encouraged the development of effective encryption techniques [3]. When it comes to private image information, such as that from the military, commerce, or industry, data must be encrypted before it can be transferred over the Internet [4].
Digital image data has become one of the key ways that humans convey information in the multimedia age.
The importance of information security is growing as the Internet expands so
quickly. Since images, videos, and other types of information are the primary infor-
mation carriers on the Internet, security concerns have risen to the top of the research
agenda. However, conventional encryption techniques such as the advanced encryption standard (AES) or the data encryption standard (DES) are not appropriate for image encryption due to the strong correlation between neighboring pixels and the high redundancy of images. The goal of current research is therefore to create a new image encryption
algorithm [5]. Electronic healthcare, or e-healthcare, is now practical and widely
used because of the internet’s rapid development. E-healthcare is a term used to
describe a web-based system where a patient may get in touch with a knowledgeable
doctor for a diagnosis. Some medical images are sent and kept online. These images could reveal a great deal of private patient information and are extremely sensitive. Data encryption is the most effective approach to protect against this privacy concern [6].
Because practically all real-world applications have internal optimization diffi-
culties, the optimization technique has been a core study topic that has attracted
a variety of research groups from many areas. Finding the optimal solution while
adhering to a set of restrictions is known as optimization. Applications of optimiza-
tion have multiplied recently, appearing in fields including engineering, machine
learning, cybersecurity, image processing, wireless sensor networks, and the Internet
of Things (IoT). Many of these problems are multimodal, multi-objective, noisy, high dimensional, non-convex, and dynamic in nature.
Several traditional and nature-inspired (NIA) methods have been shown to deal with these hard optimization problems. Particle Swarm Optimization (PSO), one of the popular techniques that piques our interest, operates with a population known as a swarm. A set of rules that make use of both local and global information governs each particle's movement [7]. We propose a hybrid optimal cryptography approach in this work: the best key is selected using a hybrid optimization method in elliptic curve cryptography that combines particle swarm optimization with cuckoo search optimization to increase the security level of the encryption and decryption processes.
2 Related Works
intended message. Because there is less financial ambiguity, this approach uses less memory. The researchers used key metrics such as PSNR and SSI, which showed controlled picture quality across all tests when the work was validated on a variety of data. As the method never provided significant imperceptibility, it is clearly insufficiently secure; therefore, additional investigation is required to raise the security level. The suggested algorithm is, however, faster at both encryption and decryption.
An ROI-optimized lossless medical picture encryption and decryption system based on game theory is suggested by Jian Zhou et al. [9]. A negotiation process is used to optimize the ROI criteria, which allows the ROI to be calculated precisely and adaptively, taking into account the various medical picture types and encryption standards. The encryption technique converts picture formats at the pixel level, achieves lossless decryption, and effectively safeguards the security of medical image data. Additionally, since the encoded ROI location data does not have to be handled separately, the risk of information leakage is reduced even further.
A wavelet-based watermarking approach for medical picture authentication was created by Balasamy K. et al. [10]. The watermark is created using a chaotic tent map and the hash function of the second image, and it is then encrypted with a secret key. The approach introduces reversible watermarking algorithms that precisely identify altered areas in watermarked photos. PSO is utilized to generate random coefficients, using the best position and velocity data integrated with the pixels of the created watermark. Additionally, the proposed scheme makes the watermark invisible, and no additional information is required during the extraction process. The lossless host picture is required in telemedicine for diagnostic purposes on the receiving end.
A comparison of chaos-based and ECC-based encryption techniques was offered by Mustapha Benssalah et al. in their paper [11]. Both methods offer strong security capabilities, although a thorough security study must be conducted before any practical use because the transition process is not yet mature. The chaos-based technique offers a straightforward implementation and a fast execution time. The ECC-based technique, on the other hand, relies on the hardness of the discrete logarithm problem but remains expensive in terms of execution time because the data-encoding phase requires considerable time. Therefore, one of the objectives for accelerating the ECC encryption and decryption procedures is the optimization of this operation.
Mustapha Benssalah et al. [12] present an effective way to evaluate the security of Dawahdeh et al.'s recent cryptosystem, which combines ECC, linear cryptography, and chaos. This approach is found to still be vulnerable to a variety of attacks, including known- and chosen-plaintext attacks. In addition, a more effective and secure medical image encryption technique for TMIS has been proposed to address and overcome the identified vulnerabilities. By incorporating a new ECIES that offers entity authentication and key sharing into the new system version, the matrix key negotiation scheme has been improved. Using Arnold's cat map and the hyperchaotic Lorenz generator, dedicated processes that add confusion and diffusion to encrypted clinical images are also guaranteed. It has been established that the improved approach works with both grayscale and medical images. The security and performance analysis of the IECCHC scheme shows that it can withstand a variety of attacks and exhibits excellent security properties.
Alhayani, Bilal and others [13] proposed that real-time images containing crucial data are captured by visual sensor networks, which then safely and successfully transmit them to the required receiver via the wireless link. Applications such as image data transfer demand a substantial amount of energy, while sensor nodes in WSNs have restricted processing power and battery life; the study therefore focuses on energy efficiency. It is consequently challenging to build an energy-efficient image transmission method for cooperative communications.
In collaborative digital picture transmission over WSNs, the quality of the picture depends on how the network is set up and how the camera operates. Three key performance indicators for the suggested cooperative image transmission strategy have been evaluated using both specific measures, such as PSNR (peak signal-to-noise ratio) and energy efficiency, and conventional methods. This work provided a thorough description of an optimal ECC-based secure and cooperative picture transmission paradigm. The simulation outcomes demonstrate the effectiveness of their suggested model.
Sasi et al. [14] investigate solutions to some of the safety issues in a wireless sensor network. While some of the strategies make use of standard cryptographic procedures, others make use of cryptography optimization techniques. The paper presents several optimization-based approaches while also highlighting their benefits and drawbacks. It presents several concepts related to the different cryptographic optimizations and concludes that a large amount of energy and storage is needed to reduce the key size, and that a complete conversion system must be created in future development. The energy use and delay caused by the runtime, when employed in the setting of a flexible security infrastructure in a wireless sensor network, are the other areas of analysis. To analyze the use of GA for picture security, Sandeep Bhowmik et al. [15] combined block-based image processing and encryption methods. The examples demonstrate that when the suggested technique was applied to pictures, the correlation between neighboring pixels was reduced. The four cases examined here with various block sizes demonstrate that, while the conventional Blowfish algorithm is better in terms of pixel correlation when compared to the Genetic Algorithm, encryption performance improves significantly when GA is used after the traditional processing of images (here using the Blowfish Algorithm). Both GA and the Blowfish Algorithm are outperformed by the suggested Blow GA approach.
Once more, the experimental results demonstrate that there is a negative correlation between the block size into which the image is divided and the pixel correlation. This fact bolsters the earlier study. To carry this study forward, the performance of the method will be assessed using chromosomes (keys) of various sizes. It is anticipated that the algorithm will be more effective at disrupting the association between the picture elements with a larger key size, resulting in a lower correlation coefficient value. It is also possible to assess the use of other meta-heuristics, such as evolutionary algorithms and Tabu search, in hybridized form.
3 Proposed Method
Random numbers, represented by encryption keys, play a major role in the integrity of cryptographic primitives for protecting important data. Maintaining the picture's integrity and confidentiality is a crucial security concern in the processing and transmission of digital medical images [3]. The protection, safety, and security of medical data kept in the information management system will primarily be ensured through the verification of medical pictures. Privacy, authenticity, integrity, and confidentiality are typically used to characterize the transfer of image data between two locations via an unsecured network. As a result, the security of sensitive information contained in medical photographs must be given additional consideration [11].
The credibility of the health industry may be compromised if health information management data is misused, compromising patient security regarding their medical records.
(i) DICOM
For the administration and transmission of electronic patient records (EPR) across a network, DICOM (digital imaging and communications in medicine) standards have been established. Both a header file and the medical picture, which carry important patient information and data, are included in the DICOM standard [12]. The interoperability of DICOM imaging equipment and arbitrary programs is ensured by this standard.
(ii) ECC
Since 1987, ECC has revolutionized public-key cryptography, in part because of its shorter operand length compared with earlier asymmetric methods [6]. ECC offers several advantages, including fast computation and lower power and memory use [18]. ECC is used for digital signatures, authentication, and key exchange, among other applications.
The following is the equation of an elliptic curve:

y^2 = x^3 + ax + b   (1)

where the parameters a and b are fixed and x and y are members of a finite field (binary or prime field). Point multiplication, in which a point P is multiplied by an integer k to produce a new point Q that lies on the curve, is the ECC operation that takes the longest to process. The foundation of ECC is scalar multiplication.
ECC is an asymmetrical or public key method based on the algebraic structure
of elliptic curves. Koblitz & Miller independently advocated for its application in
cryptography in 1985. With noticeably lower key sizes than traditional asymmetric
cryptosystems like RSA, the ECC offers comparable security levels [12].
To address the security problem, prime-order fields GF(q) or characteristic-2 fields GF(2^m) are suggested in the literature as the underlying fields.
Scalar point multiplication, denoted by Q = kP with P, Q ∈ E(GF(q)) and a scalar k (the key), is the one-way function on which ECC is based. It is carried out through repeated point additions and doublings.
The ECC El-Gamal encryption is given by

C_1 = r P,   (2)
C_2 = M + r Q,   (3)

where M stands for the message encoded as a point on the curve and r is a random integer [11]. A secret key d links the two points P and Q (Q = d P). The decryption procedure recovers M = C_2 - d C_1.
Algorithm 1: ECC [13]
Two components of the encryption method are the plaintext version of the data
picture and the private keys [3]. The array’s byte components are stored in a row
sequence from left to right, with each line corresponding to one of the image’s
output lines. The image’s lines are then completely encrypted.
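As a rough illustration of the El-Gamal-style point operations in Eqs. (2)–(3), the short Python sketch below works on a toy curve over a small prime field. The curve parameters, generator, keys, and point-encoded message are illustrative assumptions only; they are not the parameters, key sizes, or byte-to-point encoding used in this work.

# Toy elliptic-curve El-Gamal sketch over y^2 = x^3 + 2x + 2 (mod 17).
# Curve, generator, keys, and message point are illustrative assumptions only.

P_MOD = 17          # small prime field (illustration only)
A, B = 2, 2         # curve coefficients in y^2 = x^3 + ax + b
INF = None          # point at infinity

def inv_mod(x, p=P_MOD):
    return pow(x, p - 2, p)          # modular inverse (Fermat's little theorem)

def ec_add(P, Q):
    """Group law: add two points on the curve."""
    if P is INF:
        return Q
    if Q is INF:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return INF                                         # P + (-P)
    if P == Q:
        lam = (3 * x1 * x1 + A) * inv_mod(2 * y1) % P_MOD  # tangent slope
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1) % P_MOD         # chord slope
    x3 = (lam * lam - x1 - x2) % P_MOD
    y3 = (lam * (x1 - x3) - y1) % P_MOD
    return (x3, y3)

def ec_mul(k, P):
    """Scalar multiplication k*P by repeated doubling and addition."""
    R = INF
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

def ec_neg(P):
    return INF if P is INF else (P[0], (-P[1]) % P_MOD)

G = (5, 1)                    # generator of a small subgroup on this curve
d = 7                         # receiver's secret key (assumption)
Q = ec_mul(d, G)              # public key Q = d*P

M = ec_mul(3, G)              # "message" already encoded as a curve point
r = 11                        # sender's random integer
C1 = ec_mul(r, G)             # C1 = r*P        (Eq. 2)
C2 = ec_add(M, ec_mul(r, Q))  # C2 = M + r*Q    (Eq. 3)

M_recovered = ec_add(C2, ec_neg(ec_mul(d, C1)))   # M = C2 - d*C1
assert M_recovered == M

A practical implementation would rely on a standardized curve and a proper byte-to-point encoding of the image data; the sketch above only demonstrates the algebra behind Eqs. (2)–(3).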
(iii) Particle Swarm Optimization
PSO is a randomized search strategy created by Eberhart and Kennedy [10] and is modeled after the social behavior of schools of fish or flocks of birds. A swarm is a collection of mobile agents that act in unison to accomplish a common objective. Every possible solution in the swarm is treated as a particle. The particles are initialized at random, and an iterative process is used to find the best solution. Each particle traverses the m-dimensional search space with its own velocity.
PSO is a reliable stochastic global optimization technique that is based on animal social behavior. The PSO is initialized with an original population of possible solutions in n-dimensional space, with initial positions x_i = (x_i1, x_i2, …, x_in), also referred to as particles, for i = 1, 2, …, N, where N is the number of initialized particles. The particles move through the n-dimensional space with velocities v_i = (v_i1, v_i2, …, v_in). Each particle stores the location in the n-dimensional space where the optimized function attained its best value (Pbest), as well as the best location found in its neighborhood (Gbest) [7]. The following equations describe how these two best values affect a particle's trajectory. The vector pb_i = (pb_i1, pb_i2, …, pb_in) gives the Pbest position, the vector pg_i = (pg_i1, pg_i2, …, pg_in) gives the Gbest global position, and the particle's velocity and location are updated as
v_i ← w v_i + c_1 r_1 (pb_i − x_i) + c_2 r_2 (pg_i − x_i),   (4)
x_i ← x_i + v_i.   (5)
These updates are applied at the end of every iteration. Here, w stands for the inertia weight, i.e., the contribution of the previous velocity to the new velocity. The numbers r_1 and r_2 are generated at random in [0, 1], and the acceleration coefficients c_1 and c_2 are generated at random in [0, 2].
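A minimal Python sketch of the update rules in Eqs. (4)–(5) is given below; the sphere objective, swarm size, and coefficient values are illustrative assumptions rather than the PSNR-based key fitness used later in this work.

import numpy as np

# Minimal PSO sketch implementing the velocity/position updates of Eqs. (4)-(5).
rng = np.random.default_rng(0)

def objective(x):
    return np.sum(x ** 2)                 # toy objective to be minimized

n_particles, n_dim, n_iter = 20, 5, 100
w, c1, c2 = 0.7, 1.5, 1.5                 # inertia weight, acceleration coefficients

x = rng.uniform(-5, 5, (n_particles, n_dim))    # positions
v = np.zeros((n_particles, n_dim))              # velocities
pbest = x.copy()                                # personal best positions
pbest_val = np.array([objective(p) for p in x])
gbest = pbest[np.argmin(pbest_val)].copy()      # global best position

for _ in range(n_iter):
    r1 = rng.random((n_particles, n_dim))
    r2 = rng.random((n_particles, n_dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (4)
    x = x + v                                                   # Eq. (5)
    vals = np.array([objective(p) for p in x])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = x[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best value found:", objective(gbest))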
It’s interesting to think about the particle’s neighborhood. Numerous topologies
may be defined over the particles. In the original PSO the neighborhood is made up of all the particles, hence the global best position (Gbest) of the optimized function is the best in the whole neighborhood. Information is exchanged across the swarm and, iteration by iteration, the diversity of the particles is lost as the swarm congregates in one area of the n-dimensional space that may or may not be the best location. More generally, the neighborhood can be seen as a set of nodes linked by topological or dynamic networks; the problem under investigation defines one such network [7].
(iv) CS optimization
The CS algorithm is inspired by the brood parasitism of various cuckoo species, which deposit their eggs in the nests of host birds. These parasitic cuckoo females may mimic the colors and patterns of the host species' eggs. For ease of handling, it is assumed that each nest contains just one egg at a time. An egg already in a host nest represents a candidate solution, and the algorithm moves to another candidate solution through the cuckoo egg that is laid [17].
• Start with the solutions H_i, where H = {H_1, H_2, …}.
• Assess the fitness value F_i = PSNR + CC.
• Update the new solution H_new using the Lévy flight formula.
• Determine the fitness of H_new; accept it when f(H_new) > f(H_i).
• Finally, obtain the optimal key with maximum fitness, H_optimal with F_i = max(PSNR + CC).
The three rules that we applied in the CS algorithm were:
(i) Each cuckoo lays only one egg at a time and deposits it in a nest chosen at random.
(ii) The best nests contain the best eggs (solutions), which are carried over to succeeding generations.
(iii) There is a fixed number of available host nests, and the host discovers an alien egg with a probability Pa [7]. In this case the host bird either discards the egg or abandons the nest and builds a fresh one in a different area.
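The Python sketch below illustrates these three rules with a Lévy-flight step drawn via Mantegna's algorithm; the toy objective (minimized for simplicity), step-size factor, Lévy exponent, and abandonment fraction Pa are illustrative assumptions, whereas a real run would maximize the PSNR-based fitness described next.

import numpy as np
from math import gamma, pi, sin

# Minimal cuckoo-search sketch following rules (i)-(iii) above.
rng = np.random.default_rng(1)

def objective(x):
    return np.sum(x ** 2)                 # toy stand-in fitness (lower is better)

def levy_step(dim, beta=1.5):
    """Draw a Levy-distributed step via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

n_nests, dim, n_iter, pa = 15, 5, 200, 0.25
nests = rng.uniform(-5, 5, (n_nests, dim))
fitness = np.array([objective(n) for n in nests])

for _ in range(n_iter):
    best = nests[np.argmin(fitness)]
    for i in range(n_nests):
        # Rule (i): a cuckoo lays one egg (new solution) via a Levy flight.
        new = nests[i] + 0.01 * levy_step(dim) * (nests[i] - best)
        # Rule (ii): the better egg (solution) is kept in the nest.
        if objective(new) < fitness[i]:
            nests[i], fitness[i] = new, objective(new)
    # Rule (iii): a fraction Pa of the worst nests is abandoned and rebuilt.
    n_abandon = int(pa * n_nests)
    worst = np.argsort(fitness)[-n_abandon:]
    nests[worst] = rng.uniform(-5, 5, (n_abandon, dim))
    fitness[worst] = [objective(n) for n in nests[worst]]

print("best fitness:", fitness.min())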
(v) Hybrid optimization (CS + PSO)
To scramble and decode data from the medical picture, the optimal key selection procedure takes the "fitness function" to be the key with maximum PSNR. The fitness function measures how close a design solution is to the set aims. The hybrid optimization system generates candidate arrangements and evaluates the objective of each arrangement. The next stage is as follows:

Fitness = MAX(PSNR)   (6)

The primitives are taken into account when the secret solution is introduced to create a new population for the optimal key selection procedure. The goal of this hybrid technique is accomplished by selecting the better outcome of the two methods, which forms the hybridization. The process is repeated until the best key for the medical picture is found.
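A compact sketch of this selection loop is shown below; pso_update, cs_update, and fitness are hypothetical callables standing in for the PSO step, the Lévy-flight step, and the PSNR evaluation of a candidate key, respectively, since the actual key encoding is not detailed here.

# Sketch of the hybrid CS + PSO loop: in every round both update rules propose
# a candidate key, and the one with the larger PSNR-based fitness survives.
def hybrid_optimize(key, pso_update, cs_update, fitness, n_rounds=50):
    for _ in range(n_rounds):
        candidates = [key, pso_update(key), cs_update(key)]
        key = max(candidates, key=fitness)      # Fitness = MAX(PSNR), Eq. (6)
    return key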
(vi) Results
An assessment of DICOM picture encryption is carried out on a station with a Core
i7 CPU and 16 GB of RAM. The encryption techniques under consideration are
programmed in MATLAB r2017b (64-bit). A series of DICOM pictures are used to
test the effectiveness and consistency of the two encryption techniques. The security
simulation outcomes of our suggested encryption were contrasted with those of other
current security methods using various metrics.
For the security analysis, medical images such as "Brone" and "Foot Others" were collected from a website, as shown in Fig. 2; the figure displays some sample images.
Peak signal-to-noise ratio:

PSNR = 10 \log_{10}\!\left(\frac{255^2}{MSE}\right)   (8)

where PSNR is the peak signal-to-noise ratio and MSE is the mean squared error.
Entropy:

\mathrm{Entropy} = \sum_{i=0}^{2^N - 1} P_i \log\!\left(\frac{1}{P_i}\right)   (9)
SSI:

SSI = \frac{(2\,\mathrm{mean}(A * B) + C_1)\,(2\,\mathrm{con}(A * B) + C_2)}{(\mathrm{mean}(A)^2 + \mathrm{mean}(B)^2 + C_1)\,(\mathrm{con}(A)^2 + \mathrm{con}(B)^2 + C_2)}   (10)

where A and B denote the processed and the original image, respectively, and C_1 and C_2 are regularization constants.
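The Python sketch below gives generic implementations of the PSNR and entropy measures of Eqs. (8)–(9) for 8-bit grayscale images stored as NumPy arrays; it is not the MATLAB code used for the reported experiments.

import numpy as np

def psnr(original, processed):
    """Peak signal-to-noise ratio of Eq. (8) for 8-bit images."""
    mse = np.mean((original.astype(float) - processed.astype(float)) ** 2)
    if mse == 0:
        return float("inf")                      # identical images
    return 10 * np.log10(255.0 ** 2 / mse)

def entropy(image, bits=8):
    """Shannon entropy of Eq. (9) from the grey-level histogram."""
    hist, _ = np.histogram(image, bins=2 ** bits, range=(0, 2 ** bits))
    p = hist / hist.sum()
    p = p[p > 0]                                 # ignore empty bins
    return float(np.sum(p * np.log2(1.0 / p)))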
The PSNR levels of the medical pictures are displayed in Table 2, and the entropy values are given in Table 1. With the help of our hybrid optimization approach, not only the PSNR value but also the SSI value obtained from the particular photos can be used to decode the image. After selecting the optimum key, which splits the information into chunks with the highest value of the fitness function, the entire input is encrypted.
Image2 64.923
Image3 65.740
Image4 65.702
4 Conclusion
The hybrid optimization technique exhibits impressive outcomes and security when compared with other current methods in terms of PSNR and entropy findings. Other optimization techniques may be incorporated into cryptography research in the future to further increase security, flexibility, and speed.
References
1. Shankar, K., Elhoseny, M., Chelvi, E. D., Lakshmanaprabu, S. K., & Wu, W. (2018). An efficient
optimal key-based chaos function for medical image security. IEEE Access, 6, 77145–77154.
2. Al-Haj, A., Abandah, G., & Hussein, N. (2015). Crypto-based algorithms for secured medical
image transmission. IET Information Security, 9(6), 365–373.
3. Avudaiappan, T., Balasubramanian, R., Pandiyan, S. S., Saravanan, M., Lakshmanaprabu, S.
K., & Shankar, K. (2018). Medical image security using dual encryption with the oppositional-
based optimization algorithm. Journal of Medical Systems, 42(11), 1–11.
4. Yin, S., Liu, J., & Teng, L. (2020). Improved elliptic curve cryptography with homomorphic encryption for medical image encryption. International Journal of Network Security, 22(3), 419–424.
5. Yin, S., & Li, H. (2021). GSAPSO-MQC: Medical image encryption based on genetic simu-
lated annealing particle swarm optimization and modified quantum chaos system. Evolutionary
Intelligence, 14(4), 1817–1829.
6. Hafsa, A., Sghaier, A., Malek, J., & Machhout, M. (2021). Image encryption method based
on improved ECC and modified AES algorithm. Multimedia Tools and Applications, 80(13),
19769–19801.
7. Bharti, V., Biswas, B., & Shukla, K. K. (2021). A novel multi-objective gdwcn-pso algorithm
and its application to medical data security. ACM Transactions on Internet Technology (TOIT),
21(2), 1–28.
8. Elhoseny, M., Shankar, K., Lakshmanaprabu, S. K., Maseleno, A., & Arunkumar, N. (2020).
Hybrid optimization with cryptography encryption for medical image security in the Internet
of Things. Neural Computing and Applications, 32(15), 10979–10993.
9. Zhou, J., Li, J., & Di, X. (2020). A novel lossless medical image encryption scheme based
on game theory with optimized ROI parameters and hidden ROI position. IEEE Access, 8,
122210–122228.
10. Balasamy, K., & Ramakrishnan, S. (2019). An intelligent reversible watermarking system for
authenticating medical images using wavelet and PSO. Cluster Computing, 22(2), 4431–4442.
11. Benssalah, M., Rhaskali, Y., & Azzaz, M. S. (2018). Medical image encryption based on
elliptic curve cryptography and chaos theory. In 2018 International Conference on Smart
Communications in Network Technologies (SaCoNeT) (pp. 222–226). IEEE.
12. Benssalah, M., Rhaskali, Y., & Drouiche, K. (2021). An efficient image encryption scheme for
TMIS based on elliptic curve integrated encryption and linear cryptography. Multimedia Tools
and Applications, 80(2), 2081–2107.
13. Alhayani, B. S., Hamid, N., Almukhtar, F. H., Alkawak, O. A., Mahajan, H. B., Kwekha-
Rashid, A. S., ... & Alkhayyat, A. (2022). Optimized video internet of things using elliptic curve
cryptography-based encryption and decryption. Computers and Electrical Engineering, 101,
108022.
14. Sasi, S. B., & Sivanandam, N. (2015). A survey on cryptography using optimization algorithms
in WSNs. Indian Journal of Science and Technology, 8(3), 216.
15. Bhowmik, S., & Acharyya, S. (2011). Image cryptography: The genetic algorithm approach.
In 2011 IEEE International Conference on Computer Science and Automation Engi-
neering (Vol. 2, pp. 223–227). IEEE.
16. Mary, G. G., & Rani, M. (2019). Application of ant colony optimization for enhancement of
visual cryptography images. In Nature Inspired Optimization Techniques for Image Processing
Applications (pp. 147–163). Springer, Cham.
17. Shankar, K., & Eswaran, P. (2016). RGB-based secure share creation in visual cryptog-
raphy using optimal elliptic curve cryptography technique. Journal of Circuits, Systems, and
Computers, 25(11), 1650138.
18. Shankar, K., & Eswaran, P. (2016). An efficient image encryption technique based on optimized
key generation in ECC using a genetic algorithm. In Artificial Intelligence and Evolutionary
Computations in Engineering Systems (pp. 705–714). Springer, New Delhi.
19. Pal, S., Jhanjhi, N. Z., Abdulbaqi, A. S., Akila, D., Alsubaei, F. S., & Almazroi, A. A. (2023).
An intelligent task scheduling model for hybrid internet of things and cloud environment for
big data applications. Sustainability, 15(6), 5104. https://doi.org/10.3390/su15065104
20. Doss, S., Paranthaman, J., Gopalakrishnan, S., Duraisamy, A., Pal, S., Duraisamy, B., & Le,
D. N. (2021). Memetic optimization with cryptographic encryption for secure medical data
transmission in IoT-based distributed systems. Computers, Materials & Continua, 66(2), 1577–
1594. https://doi.org/10.32604/cmc.2020.012379.
21. Rakshit, P., Ganguly, S., Pal, S., Aly, A. A., & Le, D. (2021). Securing technique using
pattern-Based LSB audio steganography and intensity-based visual cryptography. Computers,
Materials & Continua, 67(1), 1207–1224. https://doi.org/10.32604/cmc.2021.014293.
Boundary Element Method for Water
Wave Interaction with Semicircular
Porous Wave Barriers Placed
over Stepped Seabed
Abstract This study examines the dispersion of water waves by inverted semicir-
cular surface-piercing wave barriers installed on a stepped seabed. The “Boundary
element method” is applied to handle the present “Boundary value problem”. In addition to this, an energy identity is derived to estimate the dissipation of wave energy by the pair of perforated wave barriers. Further, the influence of porosity, the geometrical configuration of the pair of porous barriers, and the stepped seabed on the energy dissipation is investigated. The study reveals that for smaller Keulegan-Carpenter (KC)
number, the “energy dissipation” due to the perforated barriers is higher. However,
the reflection coefficient shows the opposite pattern.
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_8
Nomenclature
A Wave amplitude
RC or |R0 | Reflection coefficient
TC or |T0 | Transmission coefficient
BEM Boundary element method
BVP Boundary value problem
1 Introduction
1.1 Objective
The main objective of the paper is to investigate the interaction of water waves with
semicircular perforated wave barriers placed over a stepped seabed. In this regard,
the energy identity relation is derived. The effect of various structural and porosity-related parameters on the wave energy scattering and dissipation is analyzed in detail.
2 Mathematical Formulation
Figure 1 depicts the schematic diagram of the physical problem, in which water waves propagating from the −x direction towards the +x direction impinge on floating inverted dual semicircular porous wave barriers placed over a step-type seabed. Based on linear water wave theory in a two-dimensional Cartesian coordinate system, the associated BVP is formulated with the origin at the mean free water level, the y-axis pointing vertically upward, and the positive x-axis pointing to the right. The water, of density ρ, occupies the region −∞ < x < ∞, with depth −h1 < y < 0 at the left far-field boundary and −h2 < y < 0 at the right far-field boundary. The mean free surface coincides with the horizontal plane y = 0. The dual semicircular porous barriers, having radii r1 and r2 and centers at (b + r1, 0) and (−c − r2, 0), respectively, float and are kept fixed in position using suitable floaters/buoys. In the presence of these dual porous barriers, the total water region is divided into three sub-domains Rj for j = 1, 2, 3. The water is taken to be incompressible, inviscid, and irrotational, and its motion is time-harmonic with angular frequency ω. In view of these assumptions, the velocity potential function Φ(x, y, t) exists and is expressed in the form Φj(x, y, t) = Re(φj(x, y) e^{−iωt}), where the subscript j denotes the domain Rj for j = 1, 2, 3. This scalar potential function satisfies the Laplace equation
\left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right) \phi_j = 0.   (1)

Fig. 1 Schematic diagram of “inverted semicircular porous wave barriers placed over a stepped seabed”

The linearized free-surface boundary condition is

\frac{\partial \phi_j}{\partial n} - K \phi_j = 0 \quad \text{on } y = 0 \text{ for } j = 1, 2, 3,   (2)

where ∂/∂n denotes the normal derivative and K = ω²/g. The BC on the fixed stepped bottom is given by

\frac{\partial \phi_j}{\partial n} = 0 \quad \text{on } \Gamma_j \text{ for } j = 2, 3, 4.   (3)
The passage of waves through the permeable structure follows a semi-empirical quadratic discharge relation, which states that “the pressure drop through a permeable
barrier is directly proportional to the square of the relative velocity” [10–12]. This
quadratic boundary condition is given by
\frac{\partial \phi_1}{\partial n} = -\frac{\partial \phi_j}{\partial n}, \qquad \phi_j - \phi_1 = \alpha_j \left| \frac{\partial \phi_1}{\partial n} \right| \frac{\partial \phi_1}{\partial n} + \beta_j \frac{\partial \phi_1}{\partial n}, \quad \text{on } \Gamma_7 \text{ and } \Gamma_9, \text{ for } j = 2, 3,   (4)
where the coefficients α j , β j in the above equation represent the drag coefficient and
the inertial coefficient, respectively. Finally, the far-field B.Cs. at the two far ends are given by

\frac{\partial (\phi_1 - \phi^{inc})}{\partial n} - i k_0 (\phi_1 - \phi^{inc}) = 0 \quad \text{on } \Gamma_1 \ (x \to -\infty),
\frac{\partial \phi_2}{\partial n} - i p_0 \phi_2 = 0 \quad \text{on } \Gamma_5 \ (x \to +\infty),   (5)
where φ^{inc}(x, y) denotes the “incident wave potential”, given by φ^{inc}(x, y) = e^{i k_0 x} f_0(k_0, y), with k_0 being the “wave number” of the incident wave propagating in R_1, which satisfies the dispersion relation ω² = g k_0 tanh(k_0 h_1). On the other hand, p_0 represents the positive real root of the dispersion relation ω² = g p_0 tanh(p_0 h_2). The form of f_0(k_0, y) is given by

f_0(k_0, y) = \frac{-i g A}{\omega} \left( \frac{\cosh(k_0 (y + h_1))}{\cosh(k_0 h_1)} \right),   (6)

\int_{-h_1}^{0} f_0(k_0, y)\, f_0(k_0, y)\, dy = -\frac{g^2 A^2}{\omega^2} \left( \frac{2 k_0 h_1 + \sinh(2 k_0 h_1)}{4 k_0 \cosh^2(k_0 h_1)} \right).   (7)
At this point, it is further noted that Eq. (5) can also be written as

\phi_1(x, y) = \left( e^{i k_0 x} + R_0 e^{-i k_0 x} \right) f_0(k_0, y) \quad \text{on } \Gamma_1,
\phi_2(x, y) = T_0\, e^{i p_0 x} f_0(p_0, y) \quad \text{on } \Gamma_5.   (8)
The unknown values R_0 and T_0 in Eq. (8) are linked to the reflection and transmission of the incident waves, respectively.
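The wavenumbers k_0 and p_0 appearing in Eq. (8) are the positive real roots of the dispersion relations stated above. A simple Newton-iteration sketch in Python is shown below; the wave period and depths are taken from the configuration quoted later in the results section (T = 8 s, h_1 = 10 m, h_2 = 8 m) purely as an example.

import math

def wavenumber(omega, h, g=9.81, tol=1e-12, max_iter=100):
    """Solve w^2 = g*k*tanh(k*h) for k by Newton iteration."""
    k = omega ** 2 / g                          # deep-water guess
    for _ in range(max_iter):
        f = g * k * math.tanh(k * h) - omega ** 2
        df = g * math.tanh(k * h) + g * k * h / math.cosh(k * h) ** 2
        k_new = k - f / df
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

T = 8.0                                         # wave period (s)
omega = 2 * math.pi / T
k0 = wavenumber(omega, h=10.0)                  # incident-side wavenumber
p0 = wavenumber(omega, h=8.0)                   # transmission-side wavenumber
print(k0, p0)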
The BEM does not need the roots of the complex “dispersion relation” in the permeable region, whereas semi-analytical tools such as the eigenfunction expansion method need the complex roots of the “dispersion relation”, and finding these complex roots is often complicated. Therefore, the BEM has significant advantages over other solution tools. Applying “Green's second identity” to the complex velocity potential φ(x, y) and the fundamental solution G(x, y; x_0, y_0) over the domain of the physical problem bounded by Γ, we get the following expression for (x_0, y_0) ∈ Γ:
-\frac{1}{2} \phi(x_0, y_0) = \int_{\Gamma} \left( \phi(x, y) \frac{\partial G(x, y; x_0, y_0)}{\partial n} - G(x, y; x_0, y_0) \frac{\partial \phi(x, y)}{\partial n} \right) d\Gamma.   (9)
Here, the free-space Green's function is given by

G(x, y; x_0, y_0) = \frac{1}{2\pi} \ln r, \quad \text{with } r = \left( (x - x_0)^2 + (y - y_0)^2 \right)^{1/2}.   (10)
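For reference, the short Python sketch below evaluates the kernel of Eq. (10) and its normal derivative, the pointwise building blocks of the boundary integrals that follow; the assembly of the influence matrices over boundary elements is not shown, and the 1/(2π) normalization follows the form of Eq. (10) given above.

import math

def green(x, y, x0, y0):
    """Free-space Green's function G = ln(r) / (2*pi) of Eq. (10)."""
    r = math.hypot(x - x0, y - y0)
    return math.log(r) / (2 * math.pi)

def green_dn(x, y, x0, y0, nx, ny):
    """Normal derivative of G at (x, y) for the unit normal (nx, ny)."""
    dx, dy = x - x0, y - y0
    r2 = dx * dx + dy * dy
    return (dx * nx + dy * ny) / (2 * math.pi * r2)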
Implementing the B.Cs (2–5) in Eq. (9) throughout each region R_j for j = 1, 2, 3, the resulting set of integral equations is as follows:

-\frac{1}{2}\phi_1 + \int_{\Gamma_1} \left( \frac{\partial G}{\partial n} - i k_0 G \right) \phi_1\, d\Gamma + \int_{\Gamma_2 \cup \Gamma_3 \cup \Gamma_4} \frac{\partial G}{\partial n}\, \phi_1\, d\Gamma + \int_{\Gamma_5} \left( \frac{\partial G}{\partial n} - i p_0 G \right) \phi_1\, d\Gamma
+ \int_{\Gamma_6 \cup \Gamma_8 \cup \Gamma_{10}} \left( \frac{\partial G}{\partial n} - K G \right) \phi_1\, d\Gamma + \int_{\Gamma_7 \cup \Gamma_9} \left( \phi_1 \frac{\partial G}{\partial n} - G \frac{\partial \phi_1}{\partial n} \right) d\Gamma = \int_{\Gamma_1} \left( \frac{\partial \phi_1^{inc}}{\partial n} - i k_0 \phi_1^{inc} \right) G\, d\Gamma,   (11)

-\frac{1}{2}\phi_2 + \int_{\Gamma_7} \left( \left( \phi_1 + \Theta_{12} \frac{\partial \phi_1}{\partial n} \right) \frac{\partial G}{\partial n} + G \frac{\partial \phi_1}{\partial n} \right) d\Gamma + \int_{\Gamma_{11}} \left( \frac{\partial G}{\partial n} - K G \right) \phi_2\, d\Gamma = 0,   (12)

-\frac{1}{2}\phi_3 + \int_{\Gamma_9} \left( \left( \phi_1 + \Theta_{13} \frac{\partial \phi_1}{\partial n} \right) \frac{\partial G}{\partial n} + G \frac{\partial \phi_1}{\partial n} \right) d\Gamma + \int_{\Gamma_{12}} \left( \frac{\partial G}{\partial n} - K G \right) \phi_3\, d\Gamma = 0.   (13)
“Energy Identities” ensure the veracity of numerically measured water wave inter-
action results. From the aforementioned BEM the components of the energy identity
will be computed to ensure the validity of the work. When water waves interact with porous structures, energy dissipation takes place, and therefore the derivation of appropriate energy identities is very helpful [3, 7, 13–15] to account for the percentage of incident “wave-energy dissipation” due to the presence of the surface-piercing wave barriers. In this section, the “energy identity” for incident waves interacting with the porous barriers is derived. Applying “Green's second identity” to the velocity potential functions φ_j for j = 1, 2, 3 and their complex conjugates φ_j^* over the domain defined before, we obtain the following expression:
\int_{\Gamma_j} \left( \phi_j(x, y) \frac{\partial \phi_j^*(x, y)}{\partial n} - \phi_j^*(x, y) \frac{\partial \phi_j(x, y)}{\partial n} \right) d\Gamma_j = 0.   (15)
In the above expression, Γ_j denotes all the boundaries of the region R_j for j = 1, 2, 3. Now, in region 1, the only contributions are from the boundaries Γ_j for j = 1, 5, 7, 9. These contributions are as follows:

\Gamma_1: \quad 2 i k_0 \left( -1 + |R_0|^2 \right) \tilde{A}, \qquad \tilde{A} = -\frac{g^2 A^2}{\omega^2} \left( \frac{2 k_0 h_1 + \sinh(2 k_0 h_1)}{4 k_0 \cosh^2(k_0 h_1)} \right),   (16)

\Gamma_5: \quad 2 i p_0 |T_0|^2\, \tilde{B}, \qquad \tilde{B} = -\frac{g^2 A^2}{\omega^2} \left( \frac{2 p_0 h_2 + \sinh(2 p_0 h_2)}{4 p_0 \cosh^2(p_0 h_2)} \right),   (17)

\Gamma_7 \cup \Gamma_9: \quad \int_{\Gamma_7 \cup \Gamma_9} \left( \phi_1(x, y) \frac{\partial \phi_1^*(x, y)}{\partial n} - \phi_1^*(x, y) \frac{\partial \phi_1(x, y)}{\partial n} \right) d\Gamma.   (18)
Using Eqs. (16–20) in Eq. (15), we get the final energy identity as

|R_0|^2 + \chi_0 |T_0|^2 + E_{D1} + E_{D2} = 1,   (21)

where \chi_0 = (p_0 / k_0)(\tilde{B} / \tilde{A}), and the terms

E_{D1} = -\frac{\mathrm{Im}(\Theta_{12})}{k_0 \tilde{A}} \int_{\Gamma_7} \left| \frac{\partial \phi_1}{\partial n} \right|^2 d\Gamma, \qquad E_{D2} = -\frac{\mathrm{Im}(\Theta_{13})}{k_0 \tilde{A}} \int_{\Gamma_9} \left| \frac{\partial \phi_1}{\partial n} \right|^2 d\Gamma

represent the amounts of energy dissipated by the two semicircular porous barriers, respectively.
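As a quick numerical illustration of Eq. (21), the snippet below sums the KC = 10 entries reported in Table 1 of the results section and recovers a total close to unity.

# Check of |R0|^2 + chi0*|T0|^2 + E_D1 + E_D2 = 1 using the KC = 10 row of Table 1.
R0_sq, chi0_T0_sq, E_D1, E_D2 = 0.0168, 0.6191, 0.1570, 0.2077
total = R0_sq + chi0_T0_sq + E_D1 + E_D2
print(total)   # about 1.0006, i.e. unity up to discretization and rounding error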
In this section, the physical parameters related to the scattering of water waves by the inverted semicircular perforated wave barriers are investigated using the iterative BEM. Various results, such as R_0, T_0, and the ED by the porous structures, are presented to analyze the effectiveness of the aforementioned physical arrangement in creating a “tranquil zone” on the leeward side of the porous barriers. The wave and structural input variables are configured to the following values: wave period T = 8 s, h_1 = 10 m, h_2 = 8 m, r_1/h_1 = r_2/h_1 = 1/4, KC = 10. The far-field auxiliary boundaries Γ_1 and Γ_5 are placed at a distance of three times the water depth from the structure so that the far-field BCs are satisfied on Γ_1 and Γ_5. Moreover, the drag coefficient (α_j, for j = 1, 2) and the blockage coefficient (β_j, for j = 1, 2) are evaluated by the following formulae:

\alpha_j = \left( \frac{8 i}{3 \pi \omega} \right) \left( \frac{KC \cdot b}{A} \right), \qquad \beta_j = \frac{h_1}{10},
where b is the submergence length measured from the mean free surface. The ratio of the height of the “reflected wave” to the height of the “incident wave” is known as the reflection coefficient (R_0), and it is expressed as

|R_0| = \left| \left( \frac{-i g A}{\tilde{A}\, \omega \cosh(k_0 h_1)} \right) \sum_{j=1}^{nb_1} \int_{y_{j+1}}^{y_j} \phi(-l, y_{mj}) \cosh(k_0 (h_1 + y))\, dy \; - \; e^{-i k_0 l} \right|.

Here, the upper bound of the summation, nb_1, is the total number of boundary elements used to discretize Γ_1 (Table 1).
Table 1 Comparison of |R_0|^2, χ_0|T_0|^2, E_D1, and E_D2 with the total energy, as given in Eq. (21)

KC    |R_0|^2    χ_0|T_0|^2    E_D1      E_D2      Total energy
3     0.0057     0.7747        0.1052    0.1149    1.0006
10    0.0168     0.6191        0.1570    0.2077    1.0007
20    0.0339     0.5354        0.1674    0.2645    1.0011

In Fig. 2a, it is noticed that E_D1 goes down with an increase of the KC number for short water-wave profiles, whereas the opposite pattern is seen for long water-wave profiles. Moreover, Fig. 2c shows that as the value of KC increases, |R_0| increases; this follows from the phenomenon that when the value of KC goes up, the
porosity of the wave barrier decreases, and consequently the thin barriers behave
as a non-porous structure [15, 16]. In Fig. 3a, it is found that the variation of E_D1 is higher for moderate values of r_1/h_1 in the short water-wave regime. Further, in the long water-wave regime, E_D1 takes higher values for higher r_1/h_1. Moreover, the variation of E_D1 increases with an increase in the incident time period up to some extent and attains a maximum before decreasing further. On the other hand, Fig. 3b demonstrates that E_D2 does not alter much with the variation in r_1/h_1, and it decreases gradually with a rise in the time period, as depicted in Fig. 3c. In Fig. 4a, it is noticed that E_D1 takes higher values for smaller r_2/h_1. An opposite trend is revealed in Fig. 4b, c. Further, it is noted that E_D1 attains its maximum for intermediate incident time-period values and decreases with an increment in the time period after reaching the maximum. A similar observation is found in Fig. 4b. Figure 4c can be interpreted from the observation of Fig. 4b: as E_D2 increases, the wave reflection decreases accordingly.
5 Conclusions
In this study, water wave dispersion by inverted semicircular porous wave barriers
over stepped seafloor is investigated. To handle the present BVP, the BEM is used.
Further, an energy identity is derived to quantify the “wave energy dissipation” by the permeable wave barriers. The effect of porosity, the geometrical configuration of the pair of porous boxes, and the stepped seabed on the energy dissipation is studied. The study shows that for smaller KC numbers, the ED due to the pair of porous boxes is higher. However, the reflection coefficient shows the opposite pattern. Additionally, it is noted that the variation of the energy dissipations due to the porous
boxes increases as the ratio of water depths decreases. An opposite trend is noticed
in the variation of reflection coefficient in short wave regime and shows a similar
pattern in long wave regime. Moreover, the variation of RC initially drops as the
time period increases. Hereafter, the variation of RC increases with an increment in
time period. The overall pattern of the EDs due to the porous barriers and reflec-
tion coefficient with the variation of radius of semicircular porous boxes are similar
in nature as stated before. This study reveals that the KC number associated with
the properties of the porous wave barriers plays a significant role in wave ED by
the porous barriers and the maximum energy dissipation occurs in the intermediate
wavelength regions. The present results are useful for the coastal engineers to design
appropriate parameters and structural configurations to dissipate a higher portion of
“incident wave energy” and to reduce the scattering coefficients to create a tranquil
zone as per the requirements.
References
1. Vijay, K. G., Venkateswarlu, V., & Nishad, C. S. (2021). Wave scattering by inverted trapezoidal
porous boxes using dual boundary element method. Ocean Engineering, 219, 108149.
2. Koley, S., Behera, H., & Sahoo, T. (2015). Oblique wave trapping by porous structures near a
wall. Journal of Engineering Mechanics, 141(3), 04014122.
3. Koley, S., Sarkar, A., & Sahoo, T. (2015). Interaction of gravity waves with bottom-standing
submerged structures having perforated outer-layer placed on a sloping bed. Applied Ocean
Research, 52, 245–260.
4. Behera, H., Koley, S., & Sahoo, T. (2015). Wave transmission by partial porous structures in
two-layer fluid. Engineering Analysis with Boundary Elements, 58, 58–78.
5. Koley, S. (2019). Wave transmission through multilayered porous breakwater under regular
and irregular incident waves. Engineering Analysis with Boundary Elements, 108, 393–401.
6. Koley, S., & Sahoo, T. (2017). Oblique wave scattering by horizontal floating flexible porous
membrane. Meccanica, 52(1), 125–138.
7. Koley, S., & Sahoo, T. (2017). Wave interaction with a submerged semicircular porous
breakwater placed on a porous seabed. Engineering Analysis with Boundary Elements, 80,
18–37.
8. Shen, Y., Firoozkoohi, R., Greco, M., & Faltinsen, O. M. (2022). Comparative investigation:
Closed versus semi-closed vertical cylinder-shaped fish cage in waves. Ocean Engineering,
245, 110397.
9. Vijay, K. G., & Sahoo, T. (2019). Scattering of surface gravity waves by a pair of floating
porous boxes. Journal of Offshore Mechanics and Arctic Engineering, 141(5).
10. Molin, B. (2011). Hydrodynamic modeling of perforated structures. Applied Ocean Research,
33(1), 1–11.
11. Liu, Y., Li, Y. C., & Teng, B. (2016). Interaction between oblique waves and perforated caisson
breakwaters with perforated partition walls. European Journal of Mechanics- B/Fluids, 56,
143–155.
12. Bennett, G. S., McIver, P., & Smallman, J. V. (1992). A mathematical model of a slotted
wavescreen breakwater. Coastal Engineering, 18(3–4), 231–249.
13. Kaligatla, R. B., Koley, S., & Sahoo, T. (2015). Trapping of surface gravity waves by a vertical
flexible porous plate near a wall. Zeitschrift für angewandte Mathematik und Physik, 66(5),
2677–2702.
14. Koley, S., & Sahoo, T. (2021). Integral equation technique for water wave interaction by an
array of vertical flexible porous wave barriers. ZAMM-Journal of Applied Mathematics and
Mechanics/Zeitschrift für Angewandte Mathematik und Mechanik, 101(5), e201900274.
15. Panduranga, K., Koley, S., & Sahoo, T. (2021). Surface gravity wave scattering by multiple
slatted screens placed near a caisson porous breakwater in the presence of seabed undulations.
Applied Ocean Research, 111, 102675.
16. Dean, R. G., & Dalrymple, R. A. (1991). Water wave mechanics for engineers and scientists (Vol. 2). World Scientific Publishing Company.
Fostering STEM Education Competency
for Elementary Education Students
at Universities of Pedagogy in Vietnam
Tiep Quang Pham, Tuan Minh Dang, Huong Thi Nguyen, and Lien Thi Ngo
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_9
1 Introduction
Humanity is entering the period of the scientific revolution 4.0, the revolution of artificial intelligence, which has been changing all aspects of social life rapidly and profoundly. The advanced education systems of the world are undergoing great changes with the ultimate aim of training a young generation that has enough intelligence and sensitivity to the times to adapt and develop. One of the modern educational models for realizing these educational goals, STEM education, is therefore spreading and gaining influence all over the world. STEM education is one of the major concerns of
many countries today. Several studies have shown the great role of STEM education
in the development of countries. According to Gonzales et al. [4], in the first decade
of the twenty-first century, the STEM education model has created a great change in
the field of education [4]. Banks and Barleks [1] affirm that STEM education has a
positive influence on the development of industry in the world [1]. Wang et al. [12]
argue that STEM education is closely linked with the development of industries [12].
According to Linh et al. [7], the development of STEM education aims to meet the
demand for high-quality human resources to ensure important factors for personal
life and the country's political and economic position in the world [7]. Thus, it can be affirmed that STEM education has great significance for the integration and development of many countries in the current period.
In 2018, Vietnamese education made strong strides for the fundamental and
comprehensive renovation of education. The introduction of the new general education program emphasized the integration of STEM educational contents, which support the orientation of students at all levels and the development of their competencies and qualities. In the new general education program issued in 2018, it is affirmed:
“STEM education is an educational model based on an interdisciplinary approach, helping students apply their knowledge of science, technology, engineering and mathematics to solve some practical issues in specific contexts” (Vietnam Ministry of Education and Training, 2018) [9]. Thus, in the new general education program, STEM education is meant both to promote education in the fields of science, technology, engineering, and mathematics, and to demonstrate an interdisciplinary approach to developing the abilities and qualities of learners. In addition, STEM education also contributes to realizing the following goals: developing students' specific competencies in the subjects of the STEM field, that is, the ability to apply knowledge and skills related to the Science, Technology, Engineering, and Mathematics subjects and to link knowledge to solve practical problems. STEM education
provides students with the foundation for higher learning and future careers. The
requirement for the renovation of general education, and the challenge for primary school teacher training in Vietnam, is the need to train a team of teachers with sufficient competency and professional qualifications to meet the challenges of educational innovation, especially with the implementation of STEM education in schools.
However, in Vietnam today, formal STEM teaching in schools faces many challenges [6]; most schools do not have a team of teachers with good STEM education competency who are ready to perform STEM education tasks. Bien et al. [2] believe that most teachers have not implemented STEM education regularly and effectively because they do not know how to develop and implement STEM topics [2]. The
problem is to create a team of teachers with good STEM education competency, especially among final-year students at pedagogical universities, so that they are ready to perform STEM education tasks in schools. Therefore, fostering STEM education competency for students in primary education is an urgent issue today. This study is part of a broad research project on STEM education strategies in Vietnam, which aims to answer the following three questions:
1. To what extent is the STEM education competency of 4th year students majoring
in Primary Education achieved?
2. What is the content of STEM education competency building for students
majoring in Primary Education?
3. Which teaching method should be used in fostering STEM educational compe-
tency for students majoring in Primary Education?
First, elementary pedagogical students are learners who have learning experiences,
have systematic study habits and most have basic study skills such as reading docu-
ments, finding and exploiting information in the library or online, knowing how to
learn through sharing with friends or on forums, seminars, conferences… However,
in a few students, those skills are not good, partly because the training method does not provide enough practice focused on skills, and partly because these students have not actively learned and practiced. Some students have not yet adapted to the way of studying at university and still study like high school students, so they depend a great deal on textbooks, books, and teachers.
Second, although they are learners, students are mature learners, so their life experience is more complete, and the precedents in that life experience also affect their learning to some extent. For example, many students always believe that textbooks and reference books are always right, even though that belief has no convincing basis. They do not consider that books and textbooks are written and reviewed by people and are therefore not always correct. These students rarely research, reflect on, or refer to scientific publications other than the textbooks and books assigned by their teachers. In particular, very few pedagogical students read and analyze scientific journals. These are early manifestations of conservatism and stagnation in learning.
Third, the ability of primary school teachers to respond to modern learning strategies is generally low. They lack skills in cooperative learning, project-based learning, problem-based learning, case-based learning, constructivist learning, etc., and have not created the conditions for students to learn in these ways. Another part of the problem is that students themselves are passive and prefer to learn according to old habits. Currently, the basic way students learn is still listening, taking notes, reading books, remembering, understanding, and recalling when taking the exam, so only a few reach the level of application and critical thinking; the rest learn superficially, do not know how to apply what they learn, and cannot really tell whether it is right or wrong as long as it matches the curriculum.
Fourth, the learning style of primary pedagogical students is in general not rich and lively. Most students learn in the same way: listening, taking notes, reading books, remembering, understanding, and recalling when taking the test. Many students have not yet created the most effective learning style for themselves and have not taken advantage of their strengths in learning, but still follow the general trend, just like everyone else. A student who simply follows everyone else does not recognize his or her own strengths and weaknesses, whereas each student needs to choose the learning method most suitable for achieving the highest efficiency.
Fifth, the learning attitude of primary pedagogical students is generally good
and positive. Most of the students appreciate their studies, schools and teachers,
and believe in the future of their careers. They are serious in studying, taking tests
and exams, obeying study discipline and school rules, enthusiastically participating
in activities of the Communist Youth Union and the school’s social movements,
including cultural and artistic movements, support for exams, humanitarian activities,
etc. These are the outstanding advantages of pedagogical students. In terms of learning, however, especially in the in-service and transfer programs, study discipline is not highly self-motivated and rests mainly on the compulsory regulations that students must comply with.
4 Research Context
In recent years, Vietnam's education system has been undergoing strong turning points in the fundamental and comprehensive renovation of education. One of them is the implementation of the new general education curriculum. The new general
education program is designed in the direction of a competency approach, which
has been implemented since 2019. The goal of the program is to help develop core
qualities and competencies in learners, which include qualities: patriotism, kindness,
honesty, hard work and responsibility and core competencies such as autonomy and
self-study, communication and collaboration, problem-solving and creativity with
specific competencies associated with specific subjects at each school level (Ministry
of Education and Training [MOET], 2018) [9]. To achieve the goals of the new general
education program, STEM education is an educational orientation that is being given attention and implemented to concretize those educational goals. In the new general educa-
tion curriculum, STEM education is both meant to promote education in the fields
of science, technology, engineering and mathematics, and to demonstrate an inter-
disciplinary approach, competency development, and quality of learners (Ministry
of Education and Training, 2018) [9]. At the same time, in the general education
program, the content of subjects in the STEM knowledge block has emphasized
and enhanced activities in the direction of STEM education. In the overall general
education program, STEM education has been emphasized through the following manifestations: (1) the new general education program contains the full set of STEM subjects: Mathematics, Natural Sciences, Technology, Informatics, Physics, Chemistry, and Biology; (2) the position and role of Informatics Education and Technology Education in the new general education program have been significantly enhanced [8]. This not
only clearly shows the orientation toward STEM education but also the timely adjustment of general education in the face of the industrial revolution 4.0. In the general education
program, it is also clearly stated: “Along with Mathematics, Natural Sciences and
Informatics, Technology subject contributes to promoting STEM education, one of
the educational trends that is being valued in many countries around the world and is
given due attention in this time in Vietnam’s reform of general education” [9]. Thus,
with the renovation of the general education program in 2018, STEM education is
an inevitable consequence to achieve the set educational goals.
Along with the promulgation of the new general education program of the Ministry of Education and Training of Vietnam, and with the aim of concretizing the implementation of STEM education in schools, on August 14, 2020 the Ministry of Education and Training issued Official Letter 3089/BGDÐT-BDTrH (2020) on implementing STEM education in schools.
5 Research Methods
5.1 Respondents
Table 1 Information on the number of lecturers and students participating in the survey and interviews

University                                                    Lecturers    Students
Hanoi University of Education                                 09           55
Hanoi University of Education 2                               08           60
Thai Nguyen University of Education—Thai Nguyen University    06           54
University of Vinh                                            08           53
University of Education—University of Danang                  07           61
Ho Chi Minh City University of Education                      09           57
The in-depth interview method is used to draw on the experience of lecturers and the thoughts of students at pedagogical schools in Vietnam about the importance of STEM education competency building, its contents, and the methods that teachers have used to foster STEM educational competencies for students, as these would be difficult to investigate by questionnaire owing to the limitations of that research method. Interviews were conducted after the questionnaire survey was completed and initial results were available. The interviews also focused on the practice of organizing STEM education in schools, in terms of the extent and frequency of these activities. The questions we used
in in-depth interviews focused on the following:
1. How important is it to foster STEM education competency for students?
2. What contents on fostering STEM education competency need to be done?
3. What methods have been used to foster STEM educational competencies for
students? Which method is suitable for fostering STEM educational competency
for students?
4. When conducting competency building in STEM education, which stage is the
most difficult?
The interviews involved 47 lecturers, who had participated in fostering STEM education competencies for students, and 36 students of primary education from regional teacher-training pedagogical schools in Vietnam. We conducted 12 interviews, including 6 individual interviews and 6 group interviews with groups of trainers. Interviews were conducted directly by 2 or 3 of the authors. Two authors took notes during the interviews and made voice recordings of the interviewees with their informed consent. The interviewers encouraged the lecturers to respond enthusiastically and according to their own thoughts to the interview questions posed, to share their ideas, and to reflect on the responses of other respondents. The interviewees' answers were carefully recorded and then transcribed verbatim to provide the authors with detailed and authentic data.
Mathematical statistical methods and cross-checks were used to analyze the data and confirm its reliability. The data from the questionnaire were used as the primary data source and processed with SPSS software version 20.0. Cronbach's alpha test was used to evaluate the reliability of the scale and to reconsider variables whose obtained values were not within the allowable limit. The Cronbach's alpha coefficient is 0.74 (within the allowable range from 0.6 to 0.9). This reliability test allows the authors to confirm the reliability of the scale.
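For reference, a minimal Python sketch of the Cronbach's alpha computation is given below; the item-score matrix shown is a hypothetical placeholder and not the study's data, which were processed in SPSS.

import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha = k/(k-1) * (1 - sum of item variances / total variance)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                           # number of items
    item_var = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)

scores = [[3, 4, 3, 4],      # hypothetical respondents x items matrix
          [2, 3, 3, 3],
          [4, 4, 5, 4],
          [3, 2, 3, 3]]
print(cronbach_alpha(scores))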
The survey results on the current status of students' STEM educational competence were analyzed using descriptive statistical parameters (level of competence, mean score, and standard deviation) to determine the level of STEM educational competence of final-year students.
6 Research Results
The results of this study aim to answer the first question: To what extent is the STEM
education competency of 4th year students majoring in Primary Education achieved?
The current state of STEM education competency was surveyed through questionnaires. From the completed questionnaires we collected the students' answers, then analyzed and coded them into opinions, which were arranged into the four component competencies shown in Table 2. We built a STEM education competency structure for teaching according to the professional standards promulgated by the Ministry of Education and Training. The competency behaviors of this structure are those needed to enable senior students to teach STEM well. Based on the students' opinions expressed through the questionnaire, we determined the students' level of STEM education competency. Then, we consulted a Science expert and a Math expert to review the proposed structure of STEM education competency. Based on this expert method, we adjusted the components of the competency structure. As a result, we determined a structure of STEM education competency comprising 4 components and 18 sub-components, as shown in Table 2.
The survey results on STEM education competency of students in Vietnam are
presented in Table 2.
The survey results in Table 2 show, across the component competencies of STEM education competency, a high level of awareness of STEM education (Mean = 3.22); the competency to implement STEM teaching plans (Mean = 2.65) is at an average level, while the competency to design STEM teaching plans (Mean = 1.91) and the ability to evaluate and adjust the STEM teaching plan (Mean = 1.92) are at a low level.
Thus, a high level of awareness about STEM education shows that students are
equipped and have a relatively complete understanding of STEM education.
The competency to design STEM teaching plans is relatively low, including the
following activities:
• Finding ideas in practice to build into STEM topics;
• Define STEM teaching goals;
• Select and design STEM-based teaching activities;
• Develop learning materials for STEM activities;
• Build and use equipment to support STEM activities.
The survey results in Table 2 show that students’ ability to find ideas in practice to
build into a STEM topic is at the lowest level (Mean = 1.94). The highest competency
is to build and use equipment to support STEM activities (Mean = 2.04). Combined
with in-depth interviews, we received feedback from both faculty and students that
finding ideas in practice to build STEM topics is very difficult and takes a lot of time.
They think this is because the curriculum content of the subjects related to STEM education in Vietnam is not closely connected to practice, which makes it hard to find ideas that fit the knowledge of the subjects: science, technology, engineering, and math.
In the survey results in Table 2, students' ability to evaluate and adjust STEM teaching plans is also relatively low. Combined with the results of the in-depth interviews, we find that most of the students have not really paid much attention to evaluation and have not been able to evaluate and adjust the teaching plan appropriately. Instructors believe that their students are still confused about the choice and use of objective assessment tools and about applying the lesson-study process to adjust the STEM teaching plan, because this form of assessment and lesson adjustment is new in Vietnam.
We found that the questionnaire results are consistent with the in-depth interview results. We draw the following conclusions about the STEM educational competency of primary education students at pedagogical universities: students have a basic understanding of STEM education and have grasped some basic methods of organizing STEM educational activities, but they do not yet have the competency to design STEM education topics for pupils, and their competency to evaluate and adjust STEM teaching plans is low. Through the questionnaire survey, we also compared the STEM educational competencies of students in different regions. The results show significant differences between universities in different regions, both in overall STEM educational competency and in its component competencies.
We found that 4th-year students at universities in the North have a higher level of awareness of STEM education than students in the Central and Southern regions; the percentage of students from Hanoi National University of Education with a good understanding of STEM education is the highest. In-depth interviews suggest that the training programs of Northern universities include several modules introducing the basics of STEM education, and that lecturers in this region focus on providing and forming knowledge about STEM education in a methodical way. However, in terms of the competency to implement STEM teaching plans, students in the South reach a higher level than students in the North and Central regions. The interviews indicate that pedagogical universities in the South pay great attention to forming and developing practical competencies, creating many opportunities for students to learn through practice; the percentage of students at Ho Chi Minh City University of Education able to implement STEM teaching plans is the highest among the schools surveyed. Universities in the North and Central regions, by contrast, emphasize methodical knowledge formation but often assign practice tasks to students without close supervision, which is one of the disadvantages of their training approach. For the component competencies of designing STEM teaching plans and of evaluating and adjusting them, the levels of all three groups of students are similar: all three groups reported that designing a STEM teaching plan and evaluating and adjusting it are difficult for them.
The results of this survey aim to answer the second question: What does the content
of STEM education competency building for students majoring in primary education
include?
We used the student questionnaire—question 2 to investigate the necessity of
fostering STEM educational competencies. The results of the survey are shown in
Fig. 1:
The survey results in Fig. 1 show that students appreciate the need for content that fosters STEM educational competencies. In particular, they are especially interested in the module on designing STEM teaching plans and the module on assessing and adjusting STEM teaching plans: 72.3% of students rate fostering the design module as necessary or higher, and 63.9% rate fostering the assessment and adjustment module as necessary or higher. The in-depth interviews are consistent with the questionnaire results, since the competencies to design and to evaluate and adjust STEM teaching plans are in fact at a low level. Most respondents said that, when implementing STEM education, they find it very difficult to come up with ideas for designing STEM education topics for pupils.
Most of the students surveyed also admitted that they paid little attention to evaluating and adjusting the STEM teaching plan. Students in Vietnam often focus only on delivering the knowledge contained in their lessons, trying to cover everything without exceeding the allotted time, and as a result they pay little attention to assessing and adjusting the plan after the lesson.
The results of this survey aim to answer the last question: Which teaching method
should be used in fostering STEM educational competencies for students majoring
in Primary Education?
Consultation with experts shows that a number of methods can be used in primary education training activities to foster STEM education competencies for students, such as studying the theory of STEM education, watching videos illustrating STEM lessons/topics, experiential learning in STEM topics, practicing STEM topic design, practice teaching STEM topics, and seminars and discussions with experts.
According to the in-depth interviews, at most universities students study only the theory of STEM education. Some faculty members have occasionally organized for students to watch demonstration videos of STEM lessons/topics, take part in experiential learning in STEM topics, and practice designing STEM education topics. Methods such as practice teaching of designed STEM topics and seminars and discussions with experts are almost never used. The lecturers explained that the modules on basic knowledge and on organizing the teaching of primary school subjects take up most of the training time, leaving few opportunities to foster STEM educational competency in a methodical way. In addition, STEM education is still relatively new in Vietnamese primary schools, so although STEM competency building for final-year students has been introduced, it is not yet organized systematically and thoroughly.
Alongside the survey of the methods lecturers have used to foster STEM education competency, we also used question 3 of the questionnaire to learn which methods students consider most likely to produce positive fostering results. The survey results are shown in Fig. 2.
The results in Fig. 2 show that students are not interested in purely theoretical research activities: only 16.7% think it is necessary to organize theoretical research on STEM education, and most interviewed students said this activity can make learning boring. Watching videos illustrating STEM lessons has a positive effect, with 75.0% of students saying this method can be effective in fostering STEM educational competencies. Experiential learning in STEM topics receives the most positive reviews, with 97.2% of students saying this form of organization will bring the best results. 63.9% of students think that practicing the design of STEM educational topics will produce positive results; although designing STEM topics is difficult, students are more interested in this activity when it is done in groups. Practice teaching of STEM topics is rated as an effective activity for developing STEM educational competency, with 58.3% of students agreeing with its use. Finally, discussions with experts on STEM education attract great attention, with 50% of students saying this method can bring positive results in building STEM education competency.
7 Discussion
Regarding the first research question, namely to what extent final-year primary education students at pedagogical universities in Vietnam demonstrate STEM education competency, the results show that their STEM educational competencies are at a moderate level. Students do not have many diverse and interesting activities through which to develop this competency, and many future teachers of elementary students in Vietnam are not interested in learning about STEM education. The survey results show that experiential learning in STEM topics is the activity that receives the most positive reviews. Students enjoy this method of learning; they believe that hands-on experience in STEM topics will help them understand how to organize STEM education for pupils better than studying the theory of STEM education alone. At the same time, the results also show that practicing the design of STEM topics is difficult for students. Nevertheless, they still expressed a desire to learn through this method so that they can apply their knowledge in practice and develop the competency to design their own STEM educational activity plans. Practice teaching of STEM topics is considered an effective activity for developing STEM educational competency. Discussions with experts on STEM education attract great attention from students, who believe that such discussions will give them a deeper understanding of STEM education and help them answer questions and solve the problems they are facing.
8 Conclusion
STEM education has been receiving great attention from the Vietnamese education community. Vietnamese education is currently being renovated around the goal of developing learners' competencies, and STEM education is regarded as one of the approaches for forming and developing the important competencies of people in the twenty-first century. Fostering STEM educational competencies for future teachers while they are studying at pedagogical universities is therefore essential.
Although pedagogical universities have paid attention to training STEM educational competencies for students majoring in primary education, this training does not yet meet the requirements of practice. These future teachers still lack component competencies such as designing STEM learning topics for elementary pupils, organizing STEM learning activities, and linking STEM topics to practical contexts familiar to elementary pupils.
From this study, we propose some recommendations to Vietnam's pedagogical universities: develop specialized modules to train STEM educational competencies for students, focusing on designing STEM learning topics for elementary pupils and on connecting STEM knowledge and skills to real-life problems. In addition, the training process should actively apply learning strategies based on practical experience to ensure that, upon graduation, students are competent to teach STEM topics to elementary pupils.
Future studies on STEM education should focus on assessing the STEM educational competency of primary school teachers and, from there, identifying training programs that supplement the knowledge and skills they lack. At the same time, it is necessary to research and develop STEM educational programs and content for students in grades 1–12, either as an independent subject or as an educational topic that complements students' competencies.
Blockchain Based E-Medical Data
Storage for Privacy Protection
Abstract Electronic Medical Data (E-Medical Data) is sensitive, and its privacy must be preserved. E-Medical Data can be stolen, altered, or even deleted entirely, so healthcare organizations must guarantee that their medical data is kept confidential, secure, and private. If medical data cannot be logged or retrieved reliably, treatment is delayed and the patient's life may even be endangered. Conventional methods of medical data storage leave the data exposed to attackers, and many medical applications face security problems such as data theft. Blockchain technology offers a solution to this security issue: features such as decentralization, cryptography-based security, immutability, and consensus algorithms make it possible to store e-medical data securely in blocks with a shared key. Our work highlights decentralized E-Medical Data storage with consensus algorithms and evaluates its performance.
1 Introduction
Electronic patient data management systems store data from patients, for example from implantable sensors that monitor chemotherapy response and glucose levels [1]. Medical applications store patients' diagnostic data to support the treatment process. According to statistics from 2016, about 17,000 malpractice cases were filed in the United States [2].
Proving medical malpractice is a burden [3], since electronic patient data can be modified or deleted by the defendant. The security of medical data in centralized data management systems is difficult to verify: an attacker who obtains the appropriate permissions can modify or delete the data, and any request to modify or delete data is carried out only after the database administrator grants permission. Centralized security is therefore not suitable for an Internet of Medical Things network [4]. Internet of Things (IoT) technology plays a major role in many areas, including social, environmental, and economic domains.
The concept of smart homes has recently emerged by connecting different kinds of devices to the Internet. Chong et al. developed a smart home approach in which a client/server unit provides a convenient and easy way to control the home [5]. Soliman et al. recommended a smart home solution using IoT and cloud computing that controls several sensors [6].
A smart grid is an electricity supply network based on digital communication technology that detects and responds to local changes in usage. Karnouskos and De Holanda emphasized smart grid-based solutions, in which smart infrastructure boosts energy efficiency [7]. Yu et al. studied smart grid architecture and its key technologies [8].
The Internet of Medical Things (IoMT) provides solutions for healthcare organizations in which all medical devices are connected online. Istepanian et al. applied IoMT to glucose monitoring based on stored diabetes data, with notifications pushed to mobile devices for information updates [9]. Ukil et al. highlighted the importance of IoT for healthcare researchers and proposed a healthcare analytics methodology that detects heart attacks and sends notifications [10].
The IoT concept is also emerging successfully in support of current industrial requirements. Perera et al. surveyed resources and techniques focused on context-aware computing theories, evaluation frameworks, and communication media [11]. Qiu et al. presented an in-depth study of public logistics in which a Supply Hub in Industrial Park (SHIP) is used to share information in real time across distributed physical devices, and it functions effectively [12].
Climate-Smart Agriculture (CSA) is an approach that reorients agricultural development under conditions of climate change. Zhao et al. reviewed ways of automating agricultural tasks in greenhouses by incorporating IoT technology and information networks, and proposed a Remote Monitoring System (RMS) [13]. Bandyopadhyay et al. proposed an IoT-based framework that helps farmers obtain information about the delivery of crops to customers [14].
2 Related Work
Several countries, including Bangladesh and China, are investing in Blockchain technology and developing their own blockchains. Blockchain technology powers many IoT devices in support of smart, healthy cities, and it is also widely used in smart logistics and transportation [16], healthcare applications [17], air quality monitoring [18], and societal applications [19].
Applications of Blockchain technology range from web app development [20] to Artificial Intelligence [21]. Blockchain technology helps preserve the security of electronic health records (EHR) and personal health records (PHR), frameworks that contain patient data. An EHR stores data about a patient across many hospitals and is controlled by the environment in which it is hosted [22].
The healthcare field uses blockchain technology to store and distribute medical data in a decentralized way. Omar et al. (2017) designed a system that stores medical data on a federated blockchain and provides the decryption key to the data owner [23]. Dubovitskaya et al. (2017) developed a blockchain-based framework for distributing healthcare records that stores the medical data in a cloud server and issues the decryption key to the data owner [24]. Yue et al. (2016) designed a healthcare data gateway architecture that stores data in a private blockchain cloud and provides the decryption key to the receiver [25]. Kannan et al. [26] developed GemOS, which combines local databases into a blockchain. Fan et al. (2018) developed MedBlock to store patient data on a blockchain [27]. Xia et al. (2017) developed a blockchain-enabled medical data protection scheme that stores medical records in cloud repositories and secures them using blockchain [28].
Vyas et al. (2019) proposed an approach that integrates machine learning with a permissioned blockchain in healthcare to support early prediction of disease [29], although scalability remains an issue for this integrated approach. Griggs et al. (2018) used a permissioned healthcare blockchain with smart contracts for remote patient monitoring on Ethereum [30]; the transactions are traceable, available, and fast, but the Ethereum protocol does not address authentication.
Blockchain solutions in healthcare focus on:
(1) Secure storage of patient identification information
(2) Managing the medical device supply chain
(3) Data monetization
(4) Fraud detection on medical data.
This section discusses the various consensus algorithms, the datasets used for secure data storage, and the proposed system.
3.2 Dataset
The proposed system uses three datasets: the Pima Indian Diabetes Dataset [32], the Heart Disease Dataset [32], and the Mammography Dataset [32]. Table 1 summarizes the details of these datasets.
The proposed method involves developing an application that authenticates the client with a MetaMask account and then stores the medical data. The workflow of the proposed method is shown in Fig. 1. The client is typically a doctor who stores the patient data; when many hospitals are connected through the blockchain, patient data can be shared among doctors at different hospitals for further analysis.
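To make the storage step concrete, the sketch below chains hashed patient records into linked blocks in plain Python. The `Block` class, the record fields, and the SHA-256 linking are illustrative assumptions for this sketch and stand in for the Ethereum-based storage described here; they are not the authors' implementation.

```python
import hashlib
import json
import time

class Block:
    """Minimal block holding one medical record (illustrative only)."""
    def __init__(self, index, record, prev_hash):
        self.index = index
        self.timestamp = time.time()
        self.record = record          # e.g. one row of a medical dataset
        self.prev_hash = prev_hash
        self.hash = self.compute_hash()

    def compute_hash(self):
        payload = json.dumps(
            {"index": self.index, "timestamp": self.timestamp,
             "record": self.record, "prev_hash": self.prev_hash},
            sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

def append_record(chain, record):
    """Append a patient record as a new block linked to the previous one."""
    prev = chain[-1]
    chain.append(Block(prev.index + 1, record, prev.hash))

# Usage: start from a genesis block and add two hypothetical records.
chain = [Block(0, {"note": "genesis"}, "0" * 64)]
append_record(chain, {"patient_id": "P001", "glucose": 148, "outcome": 1})
append_record(chain, {"patient_id": "P002", "glucose": 85, "outcome": 0})
print(chain[-1].hash, chain[-1].prev_hash == chain[-2].hash)
```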
4 Experimental Results
This section discusses various performance metrics and the results comparison on
various Proof-of-Work (PoW)-based consensus protocol.
Fig. 1 Workflow of Ethereum Blockchain-based medical data storage
The experiment is carried out on a Windows 11 machine, and the smart contract is created in the Remix IDE environment. The consensus algorithms selected are SHA256, Ethash, Scrypt, and Equihash. The experiment involves varying numbers of transactions: 100, 200, 300, 400, and 500. The smart contract is deployed under each consensus algorithm, and its performance is evaluated on several metrics.
The proposed method is evaluated using Transactions Per Second (TPS), Block Time (BT), and Transaction Fee (TF). TPS counts the number of transactions completed per second [33] and is mostly used to evaluate the speed of a system or network involving cryptocurrencies: the more transactions executed per second, the faster the system. It is an important parameter for measuring the speed of a blockchain network, and the TPS of a network depends on its consensus algorithm.
All transactions are executed in blocks. Block time is the time taken to create the block for a transaction; after a fresh block is created, it is appended to the existing blockchain [34]. This parameter affects the latency of the blockchain network. The transaction fee is the fee paid to miners for verifying a transaction in the blockchain network [35]; it acts as an incentive for processing the transaction, since the blocks for transactions are created during the mining process.
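As a rough illustration of how these metrics might be computed from a transaction log, the sketch below assumes a hypothetical list of (submission time, inclusion time, fee) tuples; the field layout and the numbers are invented for the example and are not taken from the paper's experiments.

```python
from statistics import mean

# Hypothetical log: (submitted_at, included_in_block_at, fee_in_gwei) per transaction.
tx_log = [
    (0.0, 2.1, 21_000),
    (0.5, 2.1, 21_000),
    (1.0, 4.3, 23_500),
    (1.2, 4.3, 22_000),
]

def throughput_tps(log):
    """Transactions per second over the observed window."""
    start = min(t[0] for t in log)
    end = max(t[1] for t in log)
    return len(log) / (end - start)

def average_block_time(log):
    """Mean delay between submission and inclusion, a proxy for block time."""
    return mean(included - submitted for submitted, included, _ in log)

def average_fee(log):
    return mean(fee for _, _, fee in log)

print(f"TPS: {throughput_tps(tx_log):.2f}")
print(f"Avg block time: {average_block_time(tx_log):.2f} s")
print(f"Avg fee: {average_fee(tx_log):.0f} gwei")
```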
Fig. 5 Impact of number of transactions versus transaction fee
5 Conclusion
In this work, medical data is stored in a blockchain network. The experiment is carried out on blockchain networks with PoW-based consensus algorithms, namely SHA256, Ethash, Scrypt, and Equihash. The Pima Indian Diabetes, Heart Disease, and Mammography datasets are stored in the blockchain, and the performance of this storage is evaluated. The work can be extended to proof-of-stake (PoS)-based consensus algorithms.
References
1. Shi, Y., Peng, Y., Kou, G., & Chen, Z. (2007). Introduction to data mining techniques via
multiple criteria optimization approaches and applications. In Research and Trends in Data
Mining Technologies and Applications, IGI Global, pp. 242–275.
2. Tian, H., He, J., & Ding, Y. (2019). Medical data management on blockchain with privacy.
Journal of Medical Systems, 43, 1–6.
3. Nadin, M. (2018). Redefining medicine from an anticipatory perspective. Progress in
Biophysics and Molecular Biology, 140, 21–40.
4. Soliman, M., Abiodun, T., Hamouda, T., Zhou, J., & Lung, C. H. (2013). Smart home:
Integrating internet of things with web services and cloud computing. In 5th Interna-
tional Conference on Cloud Computing Technology and Science (CloudCom). IEEE, Vol. 2,
pp. 317–320.
5. Ukil, A., Bandyoapdhyay, S., Puri, C., & Pal, A. (2016). IoT healthcare analytics: The
importance of anomaly detection. In 30th international conference on Advanced Information
Networking and Applications (AINA). IEEE, pp. 994–997.
6. Perera, C., Liu, C. H., Jayawardena, S., & Chen, M. (2014). A survey on internet of things
from industrial market perspective. IEEE Access, 2, 1660–1679.
7. Qiu, X., Luo, H., Xu, G., Zhong, R., & Huang, G. Q. (2015). Physical assets and service sharing
for IoT-enabled Supply Hub in Industrial Park (SHIP). International Journal of Production
Economics, 159, 4–15.
8. Zhao, J., Zhang, J., Feng, Y., & Guo, J. (2010). The study and application of the IOT technology
in agriculture. In 3rd IEEE International Conference on Computer Science and Information
Technology (ICCSIT). Vol. 2, pp. 462–465.
9. Bandyopadhyay, D., & Sen, J. (2011). Internet of things: Applications and challenges in
technology and standardization. Wireless Personal Communications, 58(1), 49–69.
10. Kaddoura, S., & Grati, R. (2021). Blockchain for healthcare and medical systems. In Enabling
Blockchain Technology for Secure Networking and Communications, IGI Global, pp. 249–270.
11. Sarpatwar, K., Vaculin, R., Min, H., Su, G., Heath, T., Ganapavarapu, G., & Dillenberger,
D. (2019): Towards enabling trusted artificial intelligence via blockchain. In Policy-Based
Autonomic Data Governance, Berlin, Springer, pp. 137–153.
12. Abraham, M., Vyshnavi, A. H., Srinivasan, C., & Namboori, P. K. (2019). Healthcare security
using blockchain for pharmacogenomics. Journal of International Pharmaceutical Research,
46, 529–533.
13. Juneja, A., & Marefat, M. (2018). Leveraging blockchain for retraining deep learning archi-
tecture in patient-specific arrhythmia classification. In IEEE EMBS International Conference
on Biomedical & Health Informatics (BHI), pp. 393–397.
14. Ahmad, R. W., Hasan, H., Jayaraman, R., Salah, K., & Omar, M. (2021). Blockchain
applications and architectures for port operations and logistics management. Research in
Transportation Business & Management, 41, 100620.
15. Punathumkandi, S., Sundaram, V. M., & Panneer, P. (2021). Interoperable permissioned-
blockchain with sustainable performance. Sustainability, 13, 11132.
16. Humayun, M., Jhanjhi, N. Z., Hamid, B., & Ahmed, G. (2020). Emerging smart logistics and
transportation using IoT and blockchain. IEEE Internet of Things Magazine, 3(2), 58–62.
17. Singh, A. P., Pradhan, N. R., Luhach, A. K., Agnihotri, S., Jhanjhi, N. Z., Verma, S., Ghosh,
U., & Roy, D. S. (2020). A novel patient-centric architectural framework for blockchain-enabled
healthcare applications. IEEE Transactions on Industrial Informatics, 17(8), 5779–5789.
18. Benedict, S., Rumaise, P., & Kaur, J. (2019). IoT blockchain solution for air quality
monitoring in SmartCities. In IEEE International Conference on Advanced Networks and
Telecommunications Systems (ANTS), December; pp. 1–6.
19. Benedict, S. (2020). Serverless blockchain-enabled architecture for iot societal applications.
IEEE Transactions on Computational Social Systems, 7(5), 1146–1158.
20. Dahmani, N., Alex, S. A., Sadhana, S. G., Jayasree, S. G., & Jinu, T. A. (2022). Welcome
wagons: A block chain based web application for car booking. In IEEE/ACS 19th International
Conference on Computer Systems and Applications (AICCSA). December; pp. 1–6.
21. Alex, S. A., & Briyolan, B. G. (2023). Convergence of Blockchain to artificial intelligence appli-
cations. In Handbook of Research on AI Methods and Applications in Computer Engineering,
IGI Global, pp. 253–270.
22. Uddin, M. A., Stranieri, A., Gondal, I., & Balasubramanian, V. (2018). Continuous patient
monitoring with a patient centric agent: A block architecture. IEEE Access, 6, 32700–32726.
23. Omar, A. A., Rahman, M. S., Basu, A., & Kiyomoto, S. (2017). MediBchain: A blockchain
based privacy preserving platform for Healthcare Data. In International Conference on Security,
Privacy and Anonymity in Computation, Communication and Storage, pp. 534–543.
24. Xia, Q. I., Sifah, E. B., Asamoah, K. O., Gao, J., Du, X., & Guizani, M. (2017). MeDShare:
Trust-less medical data sharing among cloud service providers via blockchain. IEEE Access,
5, 14757–14767.
25. Chong, G., Zhihao, L., & Yifeng, Y. (2011). The research and implement of smart home system
based on internet of things. In International Conference on Electronics, Communications and
Control, IEEE, pp. 2944–2947.
26. Karnouskos, S., & De Holanda, T. N. (2009). Simulation of a smart grid city with software
agents. In Third UKSim European Symposium on Computer Modeling and Simulation, pp. 424–
429.
27. Yu, X., Cecati, C., Dillon, T., & Simoes, M. G. (2011). The new frontier of smart grids. IEEE
Industrial Electronics Magazine, 5(3), 49–63.
28. Magrans, R., Gomis, P., Voss, A., & Caminal, P. (2011). Engineering in medicine and biology
society. EMBC: Annual International Conference of the IEEE.
29. Tang, H., Shi, Y., & Dong, P. (2019). Public blockchain evaluation using entropy and TOPSIS.
Expert Systems with Applications, 117, 204–210.
30. Ferrag, M. A., Derdour, M., Mukherjee, M., Derhab, A., Maglaras, L., & Janicke, H. (2018).
Blockchain technologies for the internet of things: Research issues and challenges. IEEE
Internet of Things Journal, 6(2), 2188–2204.
31. Yasaweerasinghelage, R., Staples, M., & Weber, I. (2017). Predicting latency of blockchain-
based systems using architectural modelling and simulation. In IEEE International Conference
on Software Architecture (ICSA), pp. 253–256.
32. Dua, D., & Graff, C. (2019). UCI Machine Learning Repository. University of California,
School of Information and Computer Science, Irvine, CA. Available from: http://archive.ics.
uci.edu/ml.
33. Binti Suhaili, S., & Watanabe, T. (2017). Design of high-throughput SHA-256 hash function
based on FPGA. 6th International IEEE Conference on Electrical Engineering and Informatics
(ICEEI), pp. 1–6.
34. Lam, D. K., Le, V. T. D., & Tran, T. H. (2022). Efficient architectures for full hardware
Scrypt-based block hashing system. Electronics, 11(7), 1068.
35. Biryukov, A., & Feher, D. (2019). Portrait of a miner in a landscape. In IEEE INFOCOM
2019-IEEE Conference on Computer Communications Workshops, pp. 638–643.
A Study on Different Fuzzy Image
Enhancement Techniques
1 Introduction
Many applications, such as medical image analysis, satellite image evaluation, remote sensing, machine vision, automated navigation, and dynamic and traffic scene analysis, need high-resolution images that preserve information. A high-contrast image is hard to obtain because the capture conditions cannot always be controlled; for example, many recorded images are of poor quality due to bad illumination, poor shutter speed and aperture settings, and non-linear mapping.
Over the past years, many spatial and frequency domain methods have been developed to enhance image contrast. Fuzzy theory, originally introduced by Zadeh, has been extended to other areas, including image processing, data modeling, and control system design. The key strength of fuzzy theory is that it can handle uncertainty effectively, and many researchers are working to develop fuzzy image processing theory. The goal of this work is to show that fuzzy reasoning can be used to enhance contrast. Global adjustment of grayscale values does not take into account where the pixels lie; however, we believe that the pixels in an image are not isolated and are related to their neighboring pixels. An image enhancement algorithm should therefore make full use of the relevant information in the local neighborhood, and it should also treat blurring as a form of image uncertainty. To make better use of fuzzy information and of statistics about the neighborhood of a pixel, we introduce fuzzy entropy into the image enhancement algorithm, which makes the algorithm better suited to real image data. The second step is to introduce basic human visual characteristics, regarded as a masking effect, and to propose a measurement function that best quantifies the degree of change of the gray-level values of the image pixels. This allows the physiological characteristics of the human observer to be exploited.
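As a rough illustration of fuzzy contrast enhancement (a generic sketch, not the fuzzy-entropy algorithm discussed above), the code below applies the classic membership transformation and intensification (INT) operator to a grayscale array; the fuzzifier parameters are illustrative assumptions.

```python
import numpy as np

def fuzzy_int_enhance(gray, fd=128.0, fe=2.0):
    """Classic fuzzy INT-operator contrast enhancement on a grayscale image.

    gray : 2-D uint8 array; fd, fe : illustrative fuzzifier parameters.
    """
    g = gray.astype(np.float64)
    gmax = g.max()
    # Fuzzification: map gray levels to membership values in [0, 1].
    mu = (1.0 + (gmax - g) / fd) ** (-fe)
    # Intensification (INT) operator: push memberships away from 0.5.
    mu_int = np.where(mu <= 0.5, 2.0 * mu ** 2, 1.0 - 2.0 * (1.0 - mu) ** 2)
    # Defuzzification: map the modified memberships back to gray levels.
    enhanced = gmax - fd * (mu_int ** (-1.0 / fe) - 1.0)
    return np.clip(enhanced, 0, 255).astype(np.uint8)

# Usage on a synthetic low-contrast image: the intensity spread should increase.
low_contrast = np.random.randint(90, 160, size=(64, 64), dtype=np.uint8)
out = fuzzy_int_enhance(low_contrast)
print(low_contrast.std(), out.std())
```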
1.1 Objective
This article reviews fuzzy grey-level contrast techniques based on fuzzy logic for improving low-contrast images. A common drawback of contrast enhancement techniques is under- or over-enhancement; by using non-linear membership functions from fuzzy set theory, this drawback can be corrected.
1.2 Motivation
The purpose of this paper is to present fuzzy image enhancement algorithms, which map image pixels to a fuzzy plane and then to a transformed plane using fuzzy techniques, and to provide a better starting point for new research.
2 Literature Reviews
Wei and Lidong [1] suggested a feature-based threshold selection criterion for histogram segmentation to perform bi-histogram equalization (bi-HE) on low-contrast images. Its objective is to enhance low-contrast images by using an optimal threshold that preserves all …
Cancer is the second leading cause of death worldwide. Leukemia is a type of cancer that affects the blood and blood-forming cells, and children under the age of 15 have a high risk of developing it.
Han and Kamber [15] describe classification as the process of organizing items into predefined categories. This differs from clustering, which groups items without assigning them predefined labels. The items are divided into a number of categories, and a model is built that captures the rules for assigning items to these categories; the model can then be used to assign a class to a new item.
Honeine [16] defines multi-class classification as producing classes from several data collections. Converting a multi-class problem into a set of two-class problems is a typical way of solving multi-class classification problems.
The Fuzzy Support Vector Machine (FSVM), proposed by Inoue and Abe [17], introduces fuzzy membership to handle the unclassifiable regions that arise in the standard SVM formulation. The FSVM technique can also reduce the influence of outliers in data classification: each data point is assigned a membership degree that reflects its contribution to each class.
Nimesh et al. [18] proposed an automatic technique for leukemia detection. Traditionally, experts examine micrographs to decide whether leukemia is present, a procedure that takes a great deal of time and skill. These limitations are overcome by an automatic leukemia detection system that applies filtering methods and extracts the relevant features from the images; classification is performed using an SVM. The system was evaluated on an image dataset, achieved 93.57% accuracy, and was implemented in MATLAB.
Garro et al. [19] proposed a DNA microarray classification technique. It uses a swarm intelligence algorithm for feature selection to determine the best set of genes for explaining the disease, and a subset of genes is used to train several ANNs. Four datasets were used to assess the validity of the proposed design and to analyze the genetic correlations for disease classification.
Himali et al. [20] review techniques for detecting leukemia. Many image processing methods can be used to detect red blood cells and immature cells. Anemia, leukemia, and malaria can also be associated with other problems, such as vitamin B12 deficiency, and image analysis can be used to identify the condition. The reviewed methods aim to count and identify cells affected by leukemia; detecting immature blast cells helps to diagnose leukemia and to determine whether it is chronic or acute. There are many techniques for identifying mature cells, including histogram scaling and linear contrast stretching, as well as morphological operations such as erosion and dilation, K-means clustering, and the watershed transform. Histogram estimation and linear contrast stretching achieve accuracies of 72, 73.7, and 97.8%, respectively.
3 Conclusion
4 Future Scope
In this review, various methods for image enhancement using fuzzy logic were surveyed, and the outcomes and shortcomings of the earlier works were identified. To overcome the limitations of the existing techniques, a new technique based on morphological enhancement using fuzzy logic will be proposed in the near future.
References
1. Wei, Z., Lidong, H., & Jun, W. (2015). Combination of contrast limited adaptive histogram
equalization and discrete wavelet transform for image enhancement. 9(3), 226–235.
2. Hanmandlu, M., Verma, O. P., Kumar, N. K., & Kulkarni, M. (2009). A novel optimal
fuzzy system for color image enhancement using bacterial foraging. IEEE Transactions on
Instrumentation and Measurement., 58(8), 2867–2879.
3. Sheet, D., Garud, H., Surveer, A., & Mahadevappa, M. (2010). Brightness preserving dynamic
fuzzy histogram equalization. IEEE Transactions on Consumer Electronics, 56(4), 2475–2480.
4. Ceilik, T., & Tjahjadi, T. (2011). Contextual and variational contrast enhancement. IEEE
Transactions on Image Processing, 20(12), 3431–3441.
5. Celik, T. (2012). Two-dimensional histogram equality and contrast enhancement. Pattern
Recognition, 45, 3810–3824.
6. Lee, C., Lee, C., & Kim, C.-S. (2013). Contrast enhancement based on layered different
representation of 2D histograms. IEEE Transactions on Image Processing, 22(12), 5372–5384.
7. Huang, S. C., Cheng, F. C., & Chiu, Y. S. (2013). Efficient contrast enhanced using adaptive
gamma correction with weighting distribution. IEEE Transactions on Image Processing, 22(3),
1032–1041.
8. Bdoli, M. A., Sarikhani, H., Ghanbari, M., & Brault, P. (2015). Gaussian model-based contrast
enhancement. IET Image Processing 9(7), 569–577
9. Wei, Z., Lidong, H., Jun, W., & Zebin, S. (2015). Entropy maximisation histogram mod scheme
for image enhancement. IET Image Processing, 9(3), 226–235.
10. Fu, X., Wang, J., Zeng, D., Huang, Y., & Ding, X. (2015). Remote sensing image enhancement
using regularized histogram equalization and DCT. IEEE Geoscience and Remote Sensing
Letters, 12(11), 2301–2305.
11. Singh, K., Vishwakarma, D. K., Walia, G. S., & Kapoor, R. (2016). Contrast enhancement through texture region based histogram equalization. Journal of Modern Optics.
12. Chen, S., & Beghdadi, A. (2010). Natural enhancement of color image. EURASIP Journal and
Video Processing, 2010, 1–19.
13. Fu, X., LiWang, M., Huang, Y., Zhang, X. P., & Ding, X. (2014). A novel retinex based method
for image enhancement with illumination adjustment. In IEEE international conference on
acoustic, speech and signal processing, Florence.
14. Liang, Z., & Liu, W. (2016). Contrast Enhancement using nonlinear diffusion filtering. IEEE
Transactions on Image Processing, 25(2), 673–686.
15. Han, J., Kamber, M., & Pei, J. (2012). Data mining: Concepts and Techniques. Waltham,
Morgan Kaufmann USA.
16. Honeine, P., Noumir, Z., & Richard, C. (2013). Signal Processing, 93, 1013–1026.
17. Abe, S., & Inoue, T. (2020). European symposium on artificial neural networks, (Bruges)
ESANN. Belgium.
18. Nimesh, S., et al. (2015). Automated leukaemia detection using microscopic images (vol. 58,
pp. 635–642). Elsevier.
19. Garro, B. A., et al. (2016). Classification of DNA microarrays using artificial neural networks
and abc algorithm. Applied Soft Computing, 38.
20. Himali, P. (2015). Leukemia detection using digital image processing techniques. International
Journal of Applied Information Systems, 10(1).
21. Rawat, J. (2015). Computer aided diagnosis system for the detection of leukemia using
microscopic images (vol. 70, pp. 748–756). Elsevier.
22. Tejashree, G., et.al. (2015). Blood microscopic image segregation and acute leukemia detection.
International Journal of Emerging Research in Management and Technology, 4(9).
23. Joshi, M. D. (2013). International Journal of Emerging Trends and Technology in Computer
Science (IJETTCS), 2, 147–151.
24. Mohapatra, S., Patra, D., & Satpathy, S. (2014). An ensemble classification system for the early
diagnosis of acute lymphoblastic Leukemia in blood microscopy images. Neural Computing
and Applications, 24, 1887–1904
25. Putzu, L., Caocci, G., & Di Ruberto, C. (2014). Leukocyte classification using image processing
techniques for leukaemia detection Artif. Artificial Intelligence in Medicine, 62, 179–191.
26. Singh, P., & Singh, V. (2014). A binary pattern to detect acute lymphoblastic Lukemia. Kanpur,
India.
27. Fu, X., Wang, J., Zeng, D., Huang, Y., & Ding, X. (2015). Remote sensing image enhancement
using regularized-histogram equalization and DCT. IEEE Geoscience and Remote Sensing
Letters, 12(11), 2301–2305.
28. Liang, Z., Liu, W., & Yao, R. (2016). Contrast enhancement by nonlinear diffusion filtering.
IEEE Transactions on Image Processing, 25(2), 673–686.
29. Jayaram, B., Kakarla, V. V. D. L., Narayana, K., & Vetrivel, V. (2011). Fuzzy inference system
based contrast enhancement. EUSFLATLFA Aix-les-Bains, France.
30. Shin, J., & Park, R. H. (2015). Histogram-based locality-preserving contrast enhancement.
IEEE Signal Processing Letters, 22(9), 1293–1296.
A Review on Different Image
Enhancement Techniques
1 Introduction
Digital image enhancement techniques are used to improve low image quality and to support further image processing. In simple terms, digital image processing is the processing of a digital image by a digital computer: different types of operations are applied to an image so that it becomes more suitable for viewing. There are two broad classes of methods:
1. Spatial domain methods [1]: the operation is carried out directly on the image pixels, which in turn increases contrast.
2. Frequency domain methods [2]: the operation is applied to a transform of the image. The image is analyzed in terms of its frequency content, and image quality is improved by modifying the transform coefficients.
This article deals with spatial domain techniques, the different types of noise, and the filters applied to remove that noise. Real-time solutions are usually implemented in the spatial domain because it is very simple, easy to interpret, and of low computational complexity [3]. Coherence and perceptual factors are the two main criteria that are lacking in the spatial domain. In the frequency domain, features are evaluated in terms of frequency content in order to enhance image quality [4]. The Fourier transform represents the image in terms of discrete cosine and sine components, and by modifying the transform coefficients the corresponding image quality can be enhanced [5]. The advantages of frequency domain enhancement include low computational complexity, direct manipulation of the image coefficients, and the use of improved domain features [6]. The main drawback of this approach is that it cannot produce a clear background and does not enhance all parts of the image, focusing only on individual components [7]. Noise removal plays a vital role and is one of the most essential tasks in applications such as medicine, where noise-free images lead to fewer detection errors. Filtering is the technique used to remove noise from an image [8], and this article reviews spatial domain strategies, the different kinds of noise, and the filters applied to them [9].
1.1 Objective
Image enhancement makes an image clearer for people to see: it removes noise and blur, increases contrast, and reveals more detail. This paper addresses the basic objective of image enhancement, namely that an image should be improved for better human perception. The tasks above are typical enhancement activities, and the purpose of each application determines how the enhancement strategy is implemented.
1.2 Motivation
The motivation of this paper is to organize and survey image processing strategies and the different techniques applied to images, since many images suffer from high noise and weak contrast. Our main focus is to review methods for removing noise from an image, improving its contrast and brightness, and increasing its resolution. We additionally examined various filters to see which works best for removing specific kinds of noise.
Image enhancement can be regarded as one of the primary methods used to analyse images. The purpose of contrast enhancement is to increase the quality of an image so that it is more appropriate for a particular application. Many enhancement techniques have been suggested for various applications, with efforts made to improve the quality of the enhancement results while reducing processing complexity and memory consumption.
Spatial methods operate directly on the pixels of an image. The purpose of this approach is to enhance the clarity of the information in the image [10].
One of the most important aspects of gray-level image enhancement is that it is applied directly to individual pixels of an image [11]: the value of each pixel in the processed image depends only on the original value of that pixel. Researchers such as Umar Farooq have developed approaches of this kind for enhancing infrared images.
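A small illustration of such a point operation is the power-law (gamma) transform below; it is a generic example, and the gamma value is an arbitrary choice for demonstration.

```python
import numpy as np

def gamma_correct(gray, gamma=0.5):
    """Point operation: each output pixel depends only on its own input value."""
    normalized = gray.astype(np.float64) / 255.0
    return (np.power(normalized, gamma) * 255.0).astype(np.uint8)

# gamma < 1 brightens dark regions; gamma > 1 darkens bright regions.
dark = np.random.randint(0, 80, size=(32, 32), dtype=np.uint8)
print(dark.mean(), gamma_correct(dark, 0.5).mean())
```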
Intensity difference describes the contrast between adjacent pixels. In some cases, image quality can be improved by increasing contrast, which in simple terms is the difference between the highest and lowest pixel intensity values of an image [12].
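As a small illustration of this definition, the sketch below applies min-max contrast stretching to a grayscale array; it is a generic example, not a method from the cited works.

```python
import numpy as np

def contrast_stretch(gray, out_min=0, out_max=255):
    """Linearly map the image's intensity range onto [out_min, out_max]."""
    g = gray.astype(np.float64)
    lo, hi = g.min(), g.max()
    if hi == lo:                      # flat image: nothing to stretch
        return gray.copy()
    stretched = (g - lo) / (hi - lo) * (out_max - out_min) + out_min
    return stretched.astype(np.uint8)

# Usage: a dull image occupying only part of the intensity range.
dull = np.random.randint(100, 140, size=(32, 32), dtype=np.uint8)
bright = contrast_stretch(dull)
print(dull.min(), dull.max(), "->", bright.min(), bright.max())
```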
When an image needs to be segmented, threshold transformation is used to separate the background from the desired portion of the image. It is the process of creating a black-and-white image from a grayscale image by setting to white exactly those pixels whose value is greater than a particular threshold and to black those whose value is lower [13].
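A minimal sketch of this binarization rule is shown below; the threshold value is an arbitrary assumption for illustration.

```python
import numpy as np

def binarize(gray, threshold=128):
    """Set pixels above the threshold to white (255) and the rest to black (0)."""
    return np.where(gray > threshold, 255, 0).astype(np.uint8)

# Usage on a synthetic grayscale image.
gray = np.random.randint(0, 256, size=(32, 32), dtype=np.uint8)
bw = binarize(gray, threshold=128)
print(np.unique(bw))  # only 0 and 255 remain
```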
This theory, developed by Land and McCann, deals with the perception of colours from the point of view of the human eye and with recovering the range of colours. The purpose of this technique is to determine the reflectance of an image by removing the effect of illumination from the original image. According to the theory, the human eye receives information in a particular way under different lighting conditions: when light strikes an object and is reflected, the human eye can perceive that object. What the eye depends on is not the light itself but the strength or unevenness of the illumination; consequently, a single factor that carries the underlying information of the object, namely the reflectance component, is preserved. Based on this model, the image can be expressed as the product of the reflectance component and the illumination component [14, 15].
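Under the image = reflectance × illumination model just described, a rough single-scale Retinex sketch is shown below; using a Gaussian blur as the illumination estimate and the particular sigma value are illustrative assumptions, not the cited authors' method.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def single_scale_retinex(gray, sigma=30.0):
    """Estimate illumination with a Gaussian blur and keep the log-reflectance.

    gray : 2-D array of non-negative intensities; sigma : blur scale (assumed).
    """
    g = gray.astype(np.float64) + 1.0          # avoid log(0)
    illumination = gaussian_filter(g, sigma=sigma)
    log_reflectance = np.log(g) - np.log(illumination)
    # Rescale the log-reflectance to the displayable range [0, 255].
    lo, hi = log_reflectance.min(), log_reflectance.max()
    return ((log_reflectance - lo) / (hi - lo + 1e-12) * 255).astype(np.uint8)

# Usage on a synthetic, unevenly lit image.
x = np.linspace(0.2, 1.0, 64)
uneven = (np.outer(x, x) * 255).astype(np.uint8)
out = single_scale_retinex(uneven)
print(out.min(), out.max())
```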
3 Literature Reviews
What distinguishes it from the other papers is that it uses the most substantial transforms, specifically the DWT (Discrete Wavelet Transform) and the SWT (Stationary Wavelet Transform).
Sreejith [23] proposed a synthesis-based approach built on a new fuzzy logic concept for image enhancement, which produces a high-contrast image. It was implemented successfully, but for some images it appears to over-magnify certain parts of the image.
Gopalan, Sasi, and Arathy [24], in "Improved Performance Detector Watermark Image Enhancement using Filters" (2018), discuss the effect of several filtering strategies on the watermark detection rate of images. A robust yet fragile watermarking method can be designed against attacks such as heavy JPEG compression, reducing the degradation seen in an attacked watermarked image. Their findings show that distortion in the enhanced images is reduced when filters such as the Laplacian are applied. Watermarks were embedded in 1000 test images and checked after processing; a distorted image was improved using filtering and deconvolution, and the watermark detection rate was then assessed and analyzed for improvement.
Seung-Won Jung, Jae-Yun Jeong, and Sung-Jea Ko, in their 2018 work on enhancing stereo image quality using the binocular just-noticeable difference (BJND), proposed adding a sharpness measure to the stereo enhancement framework. This is a useful way to address the problem of improving stereo image quality. Constraining the enhancement according to where disparity changes occur is important to suppress unnecessary increases in luminance, and the BJND model is taken into account when comparing the accuracy of the stereo enhancement.
Yue-cheng Li [25] proposed "Enhancing Multi-scale Images based on the Human Visual System" (2018), which uses the LIP (logarithmic image processing) model. Characteristics of the human visual system (HVS) support multi-scale computational restoration. A measure of the adaptability of the just-noticeable difference (JND) of the human visual system was proposed and used as a tool to assess the performance of the restoration techniques. Their algorithm gives better results than other algorithms (Table 1).
4 Conclusion
After reviewing the selected papers and methods, it can be concluded that low-light image enhancement strategies fall into two broad classes, namely pre-enhancement and post-enhancement. Pre-enhancement refers to techniques applied before an image is captured, and post-enhancement to techniques applied after the image is taken. The various methods for enhancing low-light images fall into these classes.
Table 1 (continued)
5. Kim et al. [20]. Contribution: image dehazing and enhancement using principal component analysis and modified haze functions. Technique: PCA-based fog removal. Findings: provides enhanced images for various image processing applications under haze and dim lighting. Limitation: can only be used for linear stretching.
6. Priyanka et al. [21]. Contribution: principal component analysis used for the enhancement of low-light images. Technique: adaptive filters used together with PCA in Retinex-based methods. Findings: gives a powerfully enhanced image even for darker scenes such as night-time images, although some data may be lost. Limitation: suited to visual observation, particularly for high-contrast images, with the best effects for radiographic and thermal images.
7. Gu et al. [22]. Contribution: a low-light image enhancement strategy based on an image degradation model and a pure pixel ratio prior. Technique: low-light degradation model. Findings: provides a better way to enhance images using an inverted low-light image enhancement model. Limitation: does not handle noise well and needs improvement.
8. Sreejith and Sarath [23]. Contribution: image enhancement using fuzzy logic. Technique: homomorphic filtering combined with fuzzy logic. Findings: enhances images but sometimes suffers from over-enhancement. Limitation: fuzzy logic needs to be combined with other enhancement techniques to improve image quality.
9. Gopalan and Arathy [24]. Contribution: a new numerical model for image enhancement. Technique: PCA. Findings: very helpful for enhancement, but sometimes causes over-enhancement. Limitation: PCA techniques are time-consuming to implement.
10. Zhang et al. [25]. Contribution: multi-scale image enhancement. Technique: LIP (logarithmic image processing). Findings: a just-noticeable difference (JND) model of the human visual system is proposed and used as a tool to assess the performance of the enhancement techniques. Limitation: according to the LIP model …
has its very own benefits and drawbacks. That is, some methods are over-subtle, some are exceptionally complex, and some are unpredictable in their results or computationally expensive for acquiring and processing images in a data set. As a result, instead of using one approach with all its shortcomings, we conclude that two or more techniques can be integrated to create a fusion-based method, in which one method compensates for the other's shortcomings and can offer better results. This is beneficial for improving an image captured in low light, and it also helps to preserve the main goal of low-light enhancement by recovering the hidden details from the low-light image. The main purpose of low-light image enhancement is to increase the image contrast so that pictures become more suitable for viewing as well as for use in various areas of application. It should also be ensured that images show good visual quality as perceived by humans.
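As a concrete illustration of this fusion idea, the minimal sketch below (our own example, not code from any of the reviewed papers) blends two simple enhancement results, gamma correction and global histogram equalization, with a tunable weight; the function names and the chosen weight are arbitrary assumptions.

```python
# Illustrative sketch: weighted fusion of two low-light enhancement results.
import numpy as np

def gamma_correct(img, gamma=0.5):
    """Brighten a [0, 1] float image with a power-law (gamma) curve."""
    return np.clip(img, 0.0, 1.0) ** gamma

def hist_equalize(img, bins=256):
    """Global histogram equalization for a [0, 1] float image."""
    hist, edges = np.histogram(img.ravel(), bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                                   # normalized cumulative histogram
    centers = (edges[:-1] + edges[1:]) / 2.0
    return np.interp(img.ravel(), centers, cdf).reshape(img.shape)

def fuse(img, w=0.6):
    """Weighted fusion of the two enhanced versions of a low-light image."""
    return w * gamma_correct(img) + (1.0 - w) * hist_equalize(img)

if __name__ == "__main__":
    low_light = np.random.rand(64, 64) * 0.3         # synthetic dark test image
    enhanced = fuse(low_light)
    print(enhanced.min(), enhanced.max())
```

In practice the two branches could be replaced by any of the reviewed methods; the fusion weight then controls which method's behaviour dominates.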
5 Future Scope
The future of image processing will involve exploring ways of adding intelligence to everyday life. Moreover, advances in image processing applications are being integrated into the creation of brand-new smart digital systems by researchers from all over the globe. In a few years, the growth of image processing and associated technologies may lead to the arrival of millions upon millions of robots, with far-reaching effects on how the world is governed. In the coming years, a lot of work can be done in image enhancement using machine learning and deep learning algorithms. Various techniques for the enhancement of low-light images are reviewed in this paper, and different methods such as the image fusion method, the defogging method, and machine learning methods can be used in the future for better image enhancement results.
References
1. Wang, W., Wu, X., Yuan, X., & Gao, Z. (2020). An experiment-based review of low-light image
enhancement methods. IEEE Access, 8, 87884–87917. https://doi.org/10.1109/ACCESS.2020.
2992749
2. Chen, S. D., & Ramli, R. (2003). Minimum mean brightness error bi-histogram equalization
in contrast enhancement. IEEE Transactions on Consumer Electronics, 49(4), 1310–1319.
3. Park, S., Kim K., Yu, S., & Paik, J. (2018). Contrast enhancement for low-light image
enhancement: A survey. IEIE Transactions on Smart Processing Computing, 7(1), 36–48.
4. Hu, H., & Ni, G. (2010). Colour image enhancement based on the improved retinex. In
Proceedings of the international conference on multimedia technology, pp. 1–4.
5. Li, L., Sun, S., & Xia, C. (2014) Survey of histogram equalization technology. Computer
Systems Applications, 23(3), 1–8
6. Lee, H. -G., Yang, S., Sim, J. -Y. (2015). Colour preserving contrast enhancement for low
light level images based on retinex. In Proceedings of Asia-Pacific Signal and Information
Processing Association Annual Summit and Conference, pp. 884–887.
7. Land, E. H., McCann, J. J. (1971). Lightness and Retinex theory. The Journal of the Optical
Society, 61(1), 1–11.
8. Kim, Y.-T. (1997). Contrast enhancement using brightness preserving bi-histogram equaliza-
tion. IEEE Transaction on Consumer Electronics, 43(1), 1–8.
9. Jobson, D. J., Rahman, Z., & Woodell, G. A. (2002). A multiscale retinex for bridging the gap between colour images and the human observation of scenes. IEEE Transactions on Image Processing, 6(7), 965–976.
10. Wang, M., Tian, Z., Gui, W., Zhang, X., & Wang, W. (2020). Low-light image enhancement based on nonsubsampled shearlet transform. IEEE Access, 8, 63162–63174.
11. Gu, Z., Li, F., Fang, F., & Zhang, G. (2019). A novel retinex-based fractional-order variational model for images with severely low light. IEEE Transactions on Image Processing, 29, 3239–3253. https://doi.org/10.1109/TIP.2019.2958144
12. Wang, Y., Chen, Q., & Zhang, B. (1999). Image enhancement based on equal area dualistic
sub-image histogram equalization method. IEEE Transaction on Consumer Electronics, 45,
68–75.
13. Zuiderveld, K. (1994). Contrast limited adaptive histogram equalization. In Graphics gems
(pp.474-485). Elsevier. ISBN: 0-12-336155-9
14. Loza, D. B., & Achim, A. (2013). Automatic contrast enhancement of low-light images based on local statistics of wavelet coefficients. In Proceedings of the IEEE international conference on image processing, pp. 3553–3556.
15. Park, S., Yu, S., Moon, B., Ko, S., Paik, J. (2017). Low-light image enhancement using varia-
tional optimization-based Retinex model. IEEE Transactions on Consumer Electronics, 63(2),
pp. 178–184.
16. Rahman, Z., Aamir, M., Pu, Y.-F., Ullah, F., & Dai, Q. (2018). A smart system for low-light image enhancement with color constancy and detail manipulation in complex light environments. Symmetry, 10, 718. https://doi.org/10.3390/sym10120718
17. Sandoub, G., Atta, R., Ali, H. A., & Abdel-Kader, R. F. (2021). A low-light image enhancement method based on bright channel prior and maximum colour channel. IET Image Processing, 15, 1759–1772.
18. Ying, Z., Li, G., Ren, Y., Wang, R., & Wang, W. (2017). A new low-light image enhancement
algorithm using camera response model. IEEE International Conference on Computer Vision
Workshops (ICCVW), 2017, 3015–3022. https://doi.org/10.1109/ICCVW.2017.356
19. Gopalan, S., Arathy, S. (2015). A new mathematical model in image enhancement problem.
Procedia Computer Science, 46, 1786–1793.
20. Kim, M., Yu, S., Park, S., Lee, S., & Paik, J. (2018). Image dehazing and enhancement using
principal component analysis and modified haze features. Applied Science, 8, 1321. https://doi.
org/10.3390/app8081321
21. Priyanka, S. A., Wang, Y.-K., & Huang, S.-Y. (2019). Low-light image enhancement by prin-
cipal component analysis. IEEE Access, 7, 3082–3092. https://doi.org/10.1109/ACCESS.2018.
2887296
22. Gu, Z., & Chen, C., & Zhang, D. (2018). A low-light image enhancement method based on
image degradation model and pure pixel ratio prior. Mathematical Problems in Engineering,
1–19. https://doi.org/10.1155/2018/8178109
23. Sarath, K., Sreejith, S. Image Enhancement Using Fuzzy Logic. IOSR Journal of Electronics
and Communication Engineering (IOSR-JECE), pp. 34–44. e-ISSN: 2278-2834, ISSN: 2278-
8735. www.iosrjournals.org
24. Gopalan, S., & Arathy, S. (2015). A New Mathematical Model in Image Enhancement Problem.
Procedia Computer Science, 46, 1786–1793. https://doi.org/10.1016/j.procs.2015.02.134
25. Zhang, H., Zhao, Q., Li, L., Li, Y.-C., & You, Y.-h. (2011). Multi-scale image enhancement based on properties of human visual system. In 4th international congress on image and signal processing, Shanghai, China, pp. 704–708. https://doi.org/10.1109/CISP.2011.6100344
Cryptocurrency and Application
of Blockchain Technology: An Innovative
Perspective
Abstract Cryptocurrency has been a trending topic over the last 10–15 years; owing to the increasing demand for digital currency, many people are using it, which also makes it a popular investment. Its efficiency, adaptability, and data-dense qualities derive from its unique design and technological innovation. The technology underlying cryptocurrency is blockchain. This paper deals with the systematic interaction between blockchain and cryptocurrency. The present work summarizes the interplay of two key ideas in today's digitalized society; both cryptocurrency and blockchain are at the forefront of technical study, and this article focuses on their most recent applications and advances. The main aim of this study is to investigate cryptocurrencies and their legal status in India, as well as to suggest ways to regulate cryptocurrency. The paper also uses a primary source of data to analyse and generate results about the awareness of digital currency and the requirement of regulation.
1 Introduction
The barter system, which was very common in ancient times, has been supplanted by money. As time elapsed, the advancement of money became a need. A new era of money has begun because of technological advancements, in the form of digitized cash. Cryptocurrency is a new phenomenon that is receiving critical attention. On one hand, it is based on a brand-new technology whose maximum potential is yet to be understood. On the other hand, in its current form, it fulfils functions comparable to other, more conventional assets.
The rise of digital money sets off the formation of monetary relations in which the trading of assets happens without involving centralized financial institutions (specifically, banks) or other intermediaries. Cryptographic cash confronts the state with one of its most difficult tasks, arising from the requirement of a legitimate rule of law that shapes social relations and must harmonize the interests of various stakeholders. Focusing on the security of the state and society, the main aim of the state for the economy is to build a modernized economy, which is not possible without the intervention of blockchain technology.

The digitalization and innovative movement of the modern world has prompted the collection, investigation, and deployment of Big Data analytics, which has been embedded into every part of day-to-day existence and is advancing quickly [1]. The Internet of Things (IoT) [2] is changing the connection and communication framework and adjusting the methods of computation and data storage, while data mining procedures, machine learning, and artificial intelligence [3] are transforming information extraction, critical thinking, decision-making, and activity optimization. These Big Data analytical technologies are not only the trending focal points of investigations and implementations, but also potential solutions and driving procedures for all parts of human existence, for example, disease prediction [4], healthcare services [5, 6], and so forth. For example, the MapReduce programming system [7], as a fusion of big data analysis processes, has given a huge paradigm to both industry and academia.
As an encrypted digital currency, cryptocurrencies operate in a framework that could not have emerged without blockchain technology. The network fulfils the 5 V's of Big Data, that is, 'volume, variety, velocity, veracity and value' [8]. Accordingly, it serves its purpose in a way that assists Big Data analysis. Big Data analysis, in turn, holds the keys to the evolution and improvement of cryptocurrencies, which makes cryptocurrency a really encouraging alternative. Additionally, Big Data analysis can also help investors and developers to make better choices and overcome the platform's constraints. Advances in the technology underlying cryptocurrencies have demonstrated its relevance in a wider scope. This has increased the speed of digitalization and expanded the enormous volume of information that the network has to examine. In a nutshell, there are mutual advantages to be exploited when considering the cooperation between Big Data and digital money, and the possibilities remain unlimited.
This paper squarely centres on the cooperation between blockchain and digital money, two critical ideas that have been thoroughly researched independently. We aim to present an exhaustive examination of their convergence and a systematic survey of recent developments for all stakeholders. This paper is friendly to both academia and industry, for readers who seek a better comprehension of the cooperation between blockchain and digital currency or intend to investigate its future possibilities.
2 Cryptocurrency Overview
Though the idea of electronic money traces back to the late 1980s, Bitcoin, launched in 2009 by the anonymous developer Satoshi Nakamoto, is the first compelling decentralized digital cash [9]. Thus, a cryptographic cash is a digital/virtual monetary system, bound by rules, that gives clients a virtual means of payment for work and goods, free from a central trusted agency. Digital forms of money depend on the transfer of information, and cryptography is used to secure and certify transactions. Bitcoin took the digital coin market beyond anyone's expectations, and decentralizing money and freeing it from intermediary influence structures is the reason for the growth of digital currency. In practice, individuals and organizations exchange the coin electronically over a distributed network. It received a wide response at the beginning of 2011 under different names, that is, altcoins, a general name for the remaining digital currencies that came after Bitcoin.

Another type of digital cash, named Litecoin, was released in 2011. Its adoption and its success helped it become one of the most important players in the digital currency market. Litecoin changed Bitcoin's design, accelerating it so that it would be more appropriate for everyday trades. Ripple, launched in 2013, introduced a thoroughly different model from that used by Bitcoin [10]. Another prominent coin in the chain of digital cash is Peercoin, which uses a dynamic, imaginative improvement to secure and uphold its currency [11]. Peercoin joins the Proof of Work mechanism used by Bitcoin and Litecoin with its own framework, Proof of Stake, to use a combined network security instrument. In August 2014 a new form of cryptocurrency, NuShares/NuBits, emerged, which rests on a dual-currency model system [12].
3 Blockchain
In simple terms, the blockchain may be compared to a data transmission system in which an addition to the informational collection is initiated by one participant, such as a network node, which creates a new data block containing the relevant information. The new block is then conveyed to every party in the network in an encrypted form (utilizing cryptography) so that the transaction details are not revealed [14]. The others in the network (i.e. the other network nodes) collectively decide the block's authenticity according to a pre-described algorithmic endorsement method, usually referred to as a 'consensus mechanism'. Once approved, the blockchain is updated with the new 'block', which effectively updates the trading record that is shared across the network [15].
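To make the block-creation and validation flow described above concrete, here is a minimal, illustrative Python sketch of our own; it is not taken from the paper or any specific platform, and the toy proof-of-work, field names, and difficulty value are assumptions made only for illustration.

```python
# Minimal sketch of chaining blocks by hash, a toy proof of work, and validation.
import hashlib
import json
import time

def block_hash(block):
    """Hash the block contents (including the previous hash) deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def new_block(prev_hash, transactions, difficulty=3):
    """Create a block and search for a nonce so the hash meets the difficulty."""
    block = {"time": time.time(), "tx": transactions,
             "prev_hash": prev_hash, "nonce": 0}
    while not block_hash(block).startswith("0" * difficulty):   # toy proof of work
        block["nonce"] += 1
    return block

def valid_chain(chain):
    """Each block must reference the hash of the block before it."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = [new_block("0" * 64, ["genesis"])]
chain.append(new_block(block_hash(chain[-1]), ["A pays B 1 coin"]))
print(valid_chain(chain))   # True; tampering with any block breaks validation
```

The point of the sketch is only the structural idea: because every block embeds the hash of its predecessor, altering any past record invalidates every later block, which is what the consensus mechanism protects.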
This system can be used for a very wide range of exchanges and can be applied to any asset that can be represented by an electronic record.

The benefit of blockchain technology is that it simplifies the implementation of a broad range of trades that normally require the intermediation of a third party. Fundamentally, blockchain is about decentralizing trust and enabling decentralized confirmation of exchanges. Basically, it permits removal of the 'middleman'.
By and large this will probably lead to efficiency gains. In any case, it is critical to highlight that it might also expose the cooperating parties to specific risks that were previously managed by these intermediaries, and that the use of distributed ledger technology may create new liquidity risks. Overall, it appears that when an intermediary acts as a buffer against critical dangers, such as systemic risk, it cannot simply be replaced by blockchain technology. For example, the Bank for International Settlements ('BIS') noted in its 2017 report named Distributed ledger technology in payment, clearing and settlement [16] that the adoption of blockchain technology could introduce new liquidity risks.
Transaction in Blockchain (figure): the transaction is completed, the update is sent throughout the network, the block is added to the current blockchain, and proof of work is rewarded to nodes, generally in bitcoin.
4 Legal Perspective
The years 2013–2017 might be considered the beginning of the cryptocurrency move-
ment in India. In 2013, the RBI issued a public alert regarding cryptocurrencies. The
Reserve Bank of India (RBI) has also stated that it closely monitors all developments
pertaining to cryptocurrencies, including Bitcoins (very popular one) and other cryp-
tocurrencies (Altcoins-An Altcoin is an alternative digital currency to Bitcoin). In
February 2017, the RBI issued another caution to the public, and in the fourth quarter
of 2017, the RBI issued an explicit warning that ‘virtual currencies/cryptocurrencies
are not legal money in India’.
The Committee appointed by the Finance Ministry drafted a bill on cryptocurrencies in April 2018 but 'was not in favour of ban'. In March 2020, India's Supreme Court dealt a blow to the Reserve Bank of India by lifting the prohibition on cryptocurrencies enforced by the RBI. It is likewise pertinent to note that the Committee [17], in 2019, recommended a ban on virtual currency. The Committee expressed its concerns regarding the proliferation of virtual currency in its report and stated that practically all virtual currencies are issued abroad, with huge numbers of individuals in India investing in them. According to the report, 'All of these digital currencies have been created by non-sovereigns and are in this sense entirely private enterprises, and there is no underlying intrinsic value of these private cryptocurrencies, because of which they lack the attributes of a currency.'
The 'Cryptocurrency and Regulation of Official Digital Currency Bill, 2021' (the 'Bill') is a current bill presented in the Lower House. According to a Lok Sabha statement of 23-11-2021, the Bill seeks 'to create a facilitative framework for creation of the official digital currency to be issued by the Reserve Bank of India. The Bill also seeks to prohibit all private cryptocurrencies in India; however, it allows for certain exceptions to promote the underlying technology of cryptocurrency and its uses'. The proposed Bill may ideally introduce a degree of consistency of understanding and bring the different government organizations involved onto the same page, while also providing security, regulating the largely unregulated markets, and preventing their misuse.
In India, crypto trading platforms are seeing a significant jump in volumes. According to a recent report [14], WazirX, India's biggest cryptocurrency exchange, registered an annual trading volume of more than $43 billion. If the sector is appropriately regulated, the Government can tax the income produced, which can be a mutually beneficial arrangement for both the Government and investors.
It can be concluded from the above discussion that the journey of cryptocurrency in India has not been very long, but it has seen many ups and downs in this short span. The bill to ban cryptocurrencies in 2019 and the Supreme Court verdict in 2020 are the key milestones. Cryptocurrencies have high potential, and after the union budget of 2022–2023 (presented on 1st February 2022), Indians have once again started talking about them. It will be very interesting to see how investors react to cryptocurrencies in India after the 30% tax imposition.
The launch and features of the RBI's future digital currency will also be very important. After the union budget 2022–2023, investors have started saying that India is following China by giving sole authority to the RBI to launch and promote digital currencies. If the government of India presents a fresh bill on cryptocurrency, it will be very interesting to see its nature and regulations. Apart from all the facts and predictions, one thing is clear: cryptocurrencies (and hence blockchain) will be a matter of discussion in the coming years, and this article may be useful as a reference for further research and studies in this regard.
This research was carried out in May 2022 to gather information on several facets of cryptocurrencies. The goal of the study was to determine the prevalence of cryptocurrency use in order to have a clear-cut view. It investigated people's views about cryptocurrency in India and how frequently they use it. In addition, the survey looked at the participants' confidence in managing digital currency at a time when such virtual money is not fully controlled and regulated. The report also looked into the participants' predictions for the future of digital money.

The survey consisted of ten questions that were expected to be answered in a short span of time (5 min). A Google Sheet survey was used to collect the data. All the questionnaires are shown in Tables 1, 2, 3, 4, 5, 6, 7, 8 and 9.
Table 4 Do you think the digital currency will replace Paper currency in future?
Frequency Percent Valid percent Cumulative percent
Valid Yes 42 84.0 84.0 84.0
No 4 8.0 8.0 92.0
Maybe 4 8.0 8.0 100.0
Total 50 100.0 100.0
The reader will have seen that our outline and appraisal of the regulatory framework primarily relates to digital forms of money. This has been done on purpose.

As previously mentioned, and demonstrated throughout this work, blockchain is the technology that allows a cryptocurrency to function. The scope of blockchain is, nonetheless, much wider than that of digital forms of money. It is being utilized in an enormous range of areas (such as business, trade, services, hospital care and governance), with promising outcomes, for example in connection with collateral security, the handling of shares, bonds and various other assets, the operation of land registration offices, and so on. Consequently, it would be excessively blunt to associate blockchain with money laundering, terrorist financing or tax evasion. It is simply a technology, which is not intended to launder cash, facilitate terrorist funding or evade taxes, and it has various applications throughout the entire legal economy. It would not be wise to hold back future developments in this regard by subjecting blockchain and fin-tech to burdensome requirements just because one of the applications utilizing blockchain technology, digital currency, is used illegally by some. As a matter of fact, cryptographic forms of money are the main notable technology that brought blockchain into the limelight, yet these days blockchain has plainly grown out of the setting of digital currencies.
With the above analysis, it can be suggested that cryptocurrency should now be regulated by proper and strict laws. This can be done by passing a law in parliament, so that rules and regulations can be maintained. Here the role of the government is also important in creating awareness programmes for citizens, and proper information should be shared with all citizens. The RBI, as the central bank of India, should also take the initiative for its regulation by providing reliable information to the public at large and the ways to regulate it.
References
1. Hwang, K., & Chen, M. (2017). Big-data analytics for cloud, IoT and cognitive computing.
Wiley.
2. Morgan, J. (2014). A Simple Explanation of the Internet of Things. https://www.forbes.com/
sites/jacobmorgan/2014/05/13/simple-explanation-internet-things-that-anyone-can-unders
tand/2a28a25b1d09
3. Lu, H., Li, Y., Chen, M., Kim, H., & Serikawa, S. (2018). Brain intelligence: Go beyond
artificial intelligence. Mobile Networks and Applications, 23, 368–375.
4. Chen, M., Hao, Y., Hwang, K., Wang, L., & Wang, L. (2017). Disease prediction by machine
learning over big data from healthcare communities. IEEE Access, 5, 8869–8879.
5. Chen, M., Yang, J., Hao, Y., Mao, S., & Hwang, K. (2017). A 5G cognitive system for healthcare.
Big Data and Cognitive Computing, 1, 2.
6. Chen, M., Li, W., Hao, Y., Qian, Y., & Humar, I. (2018). Edge cognitive computing based smart
healthcare system. Future Generation Computer Systems, 86, 403–411.
7. Ramírez-Gallego, S., Fernández, A., García, S., Chen, M., & Herrera, F. (2018). Big data: Tuto-
rial and guidelines on information and process fusion for analytics algorithms with MapReduce.
Information Fusion, 42, 51–61.
8. Wamba, S. F., Akter, S., Edwards, A., Chopin, G., & Gnanzou, D. (2015). How ‘big data’ can
make big impact: Findings from a systematic review and a longitudinal case study. International
Journal of Production Economics, 165, 234–246.
9. Nakamoto, S. (2008). Bitcoin: A peer-to-peer electronic cash system. https://bitcoin.org/bit
coin.pdf
10. Schwartz, D., Youngs, N., & Britto, A. The ripple protocol consensus algorithm. Ripple Labs
Inc.
11. Nadal, S., & King, S. (2012). PPCoin: Peer-to-peer crypto-currency with proof-of-stake. http://www.peercoin.net/assets/paper/peercoin-paper.pdf
12. Jordan, L. https://nubits.com/sites/default/files/assets/nu-whitepaper-23_sept_2014-en.pdf
13. Federal Law 259-FZ. On digital financial assets, cryptocurrency and making changes to certain
legislative acts of the russian federation. Retrieved April 01, 2022, from http://www.consul
tant.ru/document/cons_doc_LAW_358753/
14. World Bank Group, Natarajan, H., Krause, S., & Gradstein, H. (2017). Distributed ledger technology (DLT) and blockchain. FinTech note, no. 1. Washington, D.C. http://documents.worldbank.org/curated/en/177911513714062215/pdf/122140-WP-PUBLIC-DistributedLedger-Technology-and-Blockchain-Fintech-Notes.pdf, 1.
15. CPMI. (2015). Digital currencies. https://www.bis.org/cpmi/publ/d137.pdf, 5
16. CPMI. (2017). Distributed ledger technology in payment, clearing and settlement—An
analytical framework. https://www.bis.org/cpmi/publ/d157.pdf
17. Coindesk. https://www.coindesk.com/markets/2021/12/16/indian-crypto-exchange-wazirxs-
trading-volume-jumps-to-over-43b-in-2021/
18. Microsoft. What Is Cloud Computing? A Beginner’s Guide. (2018). https://azure.microsoft.
com/en-us/overview/what-is-cloud-computing/
Efficient Cluster-Based Routing Protocol
in VANET
Abstract The recent advancements in technology have shifted the focus toward
wireless sensor technology. Vehicular Ad hoc Networks (VANETs) are wireless networks of vehicles that communicate with each other using different routing protocols. Various protocols have been proposed for this purpose, among which clustering-based protocols are the current focus of research. The clustering-based protocols proposed so far have primarily emphasized the packet delivery ratio (PDR), throughput, transmission delay, and stability, while the energy consumption of vehicles in the network has largely been ignored. The goal of the proposed approach
is to reduce the energy consumption of vehicles in VANET. For this purpose, an effi-
cient clustering-based framework is proposed, which includes an efficient cluster
head selection procedure and routing protocol along with cluster formation and
merging procedures, using which the energy consumption of the vehicles will be
significantly reduced.
1 Introduction
destination is not within the cluster, the current CH will look for other CHs in its neighborhood. If a neighbor CH is found within a certain time, the current CH will forward the message packet to it, and this CH will in turn look for the destination among its members or else forward the packet to its neighbor CHs. If a neighbor CH is not found within the predefined time limit, the CH will generate an error message and will not receive any new messages.
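The forwarding rule just described can be summarized in a short sketch. The snippet below is our own illustration, not the authors' algorithm listing; the ch object with its members, neighbor_chs(), send() and receive() interfaces, and the packet.dest field, are assumed names introduced only for this example.

```python
# Sketch of the cluster-head forwarding rule with a wait-time limit.
import time

def route_from_ch(ch, packet, wait_limit_s):
    """Deliver inside the cluster if possible, else hand over to a neighbor CH."""
    if packet.dest in ch.members:                  # destination is a cluster member
        return ch.send(packet, packet.dest)
    deadline = time.time() + wait_limit_s
    while time.time() < deadline:                  # keep looking until Th_time expires
        neighbors = ch.neighbor_chs()
        if neighbors:                              # hand over to the first reachable CH,
            return neighbors[0].receive(packet)    # which then repeats the same rule
    raise TimeoutError("no neighbor CH found within the wait-time limit")
```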
This document is divided into the following sections: a short study of the clustering-based routing protocols suggested in the literature is given in Sect. 2, the methodology is presented in Sect. 3, Sect. 4 describes the simulation model, the results are presented in Sect. 5, and the conclusion in Sect. 6.
2 Related Work
There are numerous routing protocols proposed to reinforce the routing operation in
VANET. This section focuses on some of the clustering strategies and cluster-based
routing protocols.
Kadhim et al. [1] proposed an efficient routing-based protocol that is based on
the stability of cluster head and proposed a gateway. This technique focuses on
improving path stability, PDR, transmission delay, network stability, and reducing
network overhead. The results show that their approach performs better than LRCA,
PASRP, and CVoEG.
Khayat et al. [7] developed a new clustering methodology for cluster head selec-
tion, in which a weight is calculated for each node. The three essential parameters of the weighted formula are trust, distance, and velocity. The trust of each node in this technique is a combination of direct and indirect trust values. The vehicle with the highest probability is chosen as the cluster head; as a result, the vehicle with the shortest distance, highest trust, and suitable velocity has a better chance of being chosen as the cluster head. The simulation looked at how each weighted parameter affected
clustering and cluster head selection.
Behura et al. [5] proposed Giraffe kicking optimization (GKO), a nature-inspired method that awakens the smallest number of sensor nodes while improving throughput and network longevity. This hybrid C-means-based
GKO method for VANET minimizes excessive energy usage caused by redundant
sensor nodes. The results showed that GKO had the best outcomes when compared to
other strategies such as GA, cuckoo search, and DE. Furthermore, the GKO method is
effective for extending the lifetime of the vehicle network while preserving coverage.
Kandali et al. [2] proposed, for VANET, a new clustering-based routing protocol that combines a modified K-Means method with a Continuous Hopfield Network and the Maximum Stable Set Problem (KMRP). The proposed technique avoids arbitrarily selecting the initial cluster head and cluster. A link dependability model is also used to assign vehicles to clusters. Cluster heads are chosen depending on their free buffer space, speed, and node degree. According to the findings, KMRP decreases traffic congestion and collisions, resulting in a significant boost in throughput. At high density and mobility, KMRP provides a fast algorithm that decreases end-to-end delay and gives better PDR than other schemes, and it is important for extending the lifetime of the vehicle network while preserving coverage.
Katiyar et al. [3] proposed an efficient multi-hop clustering (CH selection and cluster merging) algorithm using which the vehicles can select and follow the most suitable target vehicle from their one-hop neighbors. This algorithm contributes to strengthening the stability of the network and improves the data transmission performance in terms of PDR, throughput, and normalized routing overhead (NRO). The results show that in the proposed AMC algorithm, a 10% to 30% improvement is recorded in average CH duration, 10% to 15% in CM duration, and 10% to 40% in CH changes, along with a significant improvement in PDR with a maximum of 78% in the 300 m range, NRO with a maximum of 7 in the 100 m range, and throughput with a maximum of 82 kbps in the 300 m range.
Bakkour et al. [6] proposed a clustering-based machine learning solution with self-
stabilization mechanism, for delay-sensitive applications in VANET. The proposed
scheme deals with data sharing delay, ensures high data availability, and reduces
packet loss in multi-hop VANET architecture. The results show that the stability of
the network is improved, and the average transmission delay of 100 vehicles/km is
~500 ms, PDR ~85%, and information coverage ~87%.
Darabkh et al. [4] proposed a dual-phase routing protocol using fog computing and
software-defined vehicular network, along with clustering technique. The proposed
protocol lessens the long-distance communications in each cluster and provides
an efficient mechanism for control overhead reduction. The results show that the
proposed algorithm gives impressive results when compared to IDVR, VDLA, IRTIV,
GPCR, CORA, MoZo, BRAVE, and CBDRP with a packet delivery ratio (PDR) of
90%, increased throughput with a max of 180.27 kbps, reduced end-to-end delay
with max of 0.5 s, and a decrease in no. of control messages (Table 1).
Table 1 (continued)

[7] Aim: An efficient clustering algorithm based on a weighted formula for calculating the probability of cluster head selection. Objective: To ensure the stability of the network. Limitations: Energy consumption factor not considered for CH selection.

[5] Aim: Hybrid C-means Giraffe optimization technique with a multi-fitness function used to reach efficient routing enactment in VANET. Objective: To improve the energy consumption, jitter, throughput, and probability of sensor node redistribution. Limitations: High complexity.

[2] Aim: A new clustering-based routing protocol based on a weighted formula that combines a modified K-Means algorithm with a new clustering algorithm for determining the likelihood of cluster head selection. Objective: To reduce traffic congestion and provide a significant increase in throughput; to act better in terms of the packet delivery ratio. Limitations: Performance decreases with high density.

[3] Aim: An effective cluster building process that assists the vehicle in selecting and following the most appropriate target vehicle among one-hop neighbors. Objective: Contributes to strengthening stability (CH and CM duration); to increase the data transmission performance (in terms of PDR, throughput and normalized routing overhead, NRO). Limitations: High complexity; energy consumption factor not considered; PDR not much improved, with a maximum of 78%.

[6] Aim: To establish robust and reliable communication between nodes using machine learning and clustering. Objective: To reduce transmission delay and packet collision (high data availability); to extend the data coverage; to improve the stability of clusters. Limitations: The CH being the central element can cause additional time for data fusion and aggregation in data sharing applications; the direction of vehicles is not considered while building clusters; stability and PDR (max 85%) can be further optimized.

(continued)
Table 1 (continued)

[4] Aim: To discover the most reliable route from the source to the destination in the shortest possible time by combining SDN, fog computing, and clustering. Objective: To increase PDR and throughput; to reduce end-to-end delay; to minimize the number of control messages by reducing control overhead. Limitations: High complexity; power consumption factor not considered; security factor not considered, as fog/cloud computing is being used.
This section comprises the clustering formation and merging strategies, the algorithm
for cluster head election, the routing protocol, and the related concepts used in our
framework.
Due to the high mobility of the vehicles, clusters may be restructured often,
resulting in significant communication overhead, poor stability, and higher energy
consumption than usual. These parameters are taken into account by our proposed
technique and hence this provides an efficient scheme for routing. This proposed
architecture contains four phases: The first phase is Cluster Formation, which is
based on certain factors such as average link lifetime of a vehicle, its energy level,
its neighborhood degree, and predefined threshold directional distance. The second
phase is Cluster Head Selection: the vehicle elected as CH will be the most predom-
inant vehicle among all with maximum energy, avg. link lifetime, and neighborhood
degree. The third phase is Cluster Merging: which describes the scenario of merging
two or more than two neighbor clusters and selecting a single cluster head. The fourth
phase is routing of the packet from source to destination (Table 2).
Table 2 (continued)
Abbreviation Definition
NHD Neighborhood degree
E Energy consumption
DTH Threshold directional distance
ETH Threshold energy level
PDR Packet delivery ratio
NRO Normalized routing overhead
Thtime Wait-time limit
Suitability Factor

The clustering procedure calculates the suitability factor for all the vehicles, which includes four factors, namely, the neighborhood degree of a vehicle i, its average link lifetime, its directional distance, and its energy consumption value. The SF of a vehicle i can be calculated using Eq. (1), as follows:

$$\text{AvgLLT}_i = \sum_{j=1}^{n} \text{LLT}(i, j) \tag{2}$$

Here, j indexes all the vehicles with which vehicle i has maintained a link at some time. To calculate the LLT of vehicle i with vehicle j, we check whether both vehicles are within the D_TH of each other. For this, we calculate the distance covered by each of the vehicles i and j during time t,

$$S_i = V_i \, t \tag{i}$$
$$S_j = V_j \, t \tag{ii}$$

and then the relative distance between the two vehicles during time t,

$$S = \left| S_i - S_j \right| \tag{iii}$$

We then compare this distance S with D_TH: if S is less than D_TH, then t becomes the LLT of vehicle i and vehicle j, because both vehicles maintained the link with each other for time t. This can be represented as

If D_TH > S, then LLT(i, j) = t.
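A minimal sketch of this link-lifetime computation is given below. It is our own illustration under the definitions above; the variable names v_i, v_j, t and d_th, and the sample values, are assumptions. AvgLLT is obtained by summing LLT(i, j) over the recorded links, as in Eq. (2).

```python
# Sketch of LLT(i, j) and AvgLLT_i from the directional-distance threshold D_TH.
def llt(v_i, v_j, t, d_th):
    """Return t if vehicles i and j stay within D_TH during t, else 0."""
    s_i = v_i * t                 # distance covered by vehicle i   (i)
    s_j = v_j * t                 # distance covered by vehicle j   (ii)
    s = abs(s_i - s_j)            # relative distance               (iii)
    return t if d_th > s else 0.0

def avg_llt(link_records, d_th):
    """Sum LLT(i, j) over all vehicles j that vehicle i has linked with (Eq. 2)."""
    return sum(llt(v_i, v_j, t, d_th) for (v_i, v_j, t) in link_records)

# Example: two recorded links of vehicle i; only the first stays within D_TH.
print(avg_llt([(20.0, 22.0, 5.0), (20.0, 35.0, 5.0)], d_th=30.0))   # 5.0
```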
Neighborhood Degree

A vehicle's neighborhood degree is defined as the number of vehicles that are in the direct communication range of that vehicle, within the threshold directional distance of that vehicle [10–13].

To calculate the NHD of vehicle i, we check whether there is any vehicle j within the D_TH of vehicle i. If there is such a vehicle j, the NHD table of vehicle i is updated by incrementing its NHD value by one. After time t, it is checked again whether vehicle j is still within D_TH; if it is not, the NHD value is decremented by one in the NHD table of vehicle i.
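The NHD bookkeeping can be sketched as follows; this is our own illustrative code, with a simple dictionary standing in for the NHD table mentioned above, and the function and argument names are assumptions.

```python
# Sketch of neighborhood-degree (NHD) updates as vehicles enter or leave D_TH.
def update_nhd(nhd_table, i, within_dth_now, was_within_dth):
    """Increment NHD of vehicle i when j enters its D_TH, decrement when it leaves."""
    if within_dth_now and not was_within_dth:
        nhd_table[i] = nhd_table.get(i, 0) + 1
    elif was_within_dth and not within_dth_now:
        nhd_table[i] = max(nhd_table.get(i, 0) - 1, 0)
    return nhd_table

table = {}
update_nhd(table, "v1", True, False)    # some vehicle enters range of v1 -> NHD = 1
update_nhd(table, "v1", False, True)    # it leaves after time t          -> NHD = 0
print(table)
```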
Energy level

The energy level is the amount of energy held by the sensor in a vehicle. This energy is consumed by the vehicle in order to route a packet. The energy level is needed to determine whether a vehicle is a candidate for cluster head selection. A threshold energy level is also defined in order to select the most energy-efficient vehicle as the CH.
Directional Distance
Directional distance is the distance of vehicle A with Vehicle B in the direction
of vehicle A. Our proposed system uses a predefined threshold directional distance
value which is compared with every vehicle’s directional distance.
compared to a threshold, its energy level (Eq. (1)), along with the comparison of this
consumed energy level with a predefined threshold. This CH selection condition can
be mathematically presented as below in Eq. (3).
Here, LLT(i) represents the link life time of vehicle i which is joining a cluster. After
sending the join request to the CH vehicle j, the vehicle j will compare the LLT,
NHD, D, and E values of vehicle i with the respective values of itself, along with
comparing the E value of incoming vehicle i with the predefined threshold energy
value. If the condition presented in Eq. (3) is satisfied, then vehicle will become new
CH of that cluster.
To select the CH of a cluster, the values of our introduced SF for all k vehicles of that cluster are required, along with a predefined threshold energy level E_TH. So in a cluster C, the SF and energy of every vehicle j are compared with those of every other vehicle i present in C (Line 1–4). The vehicle with the highest SF whose energy also satisfies E_TH will be selected as the CH of that cluster. This process is repeated until a CH is selected for every cluster.
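A compact sketch of this election step is shown below; it is our own illustration rather than the authors' listing, and the dictionary fields sf and energy, as well as the sample values, are assumed names.

```python
# Sketch of cluster-head election: highest SF among members that clear E_TH.
def elect_ch(cluster, e_th):
    candidates = [v for v in cluster if v["energy"] >= e_th]
    if not candidates:          # no member clears the energy threshold
        return None
    return max(candidates, key=lambda v: v["sf"])

cluster = [{"id": 1, "sf": 0.62, "energy": 130},
           {"id": 2, "sf": 0.80, "energy": 90},
           {"id": 3, "sf": 0.71, "energy": 125}]
print(elect_ch(cluster, e_th=120)["id"])   # vehicle 3: best SF above E_TH
```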
this, the CMs of first cluster will also update the id of their CH to make CH.C2 as
their new CH (Line 7, 8). In this way, both the clusters will merge together.
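The merging step can be sketched as follows; this is our own minimal illustration of the members of the first cluster re-pointing to CH.C2, with an assumed dictionary layout rather than the authors' pseudocode.

```python
# Sketch of cluster merging: CMs of cluster 1 adopt the CH of cluster 2 (CH.C2).
def merge_clusters(cluster1, cluster2):
    new_ch = cluster2["ch_id"]
    for member in cluster1["members"]:
        member["ch_id"] = new_ch              # update the CH id of each CM
    cluster2["members"].extend(cluster1["members"])
    cluster1["members"] = []
    return cluster2

c1 = {"ch_id": 10, "members": [{"id": 11, "ch_id": 10}, {"id": 12, "ch_id": 10}]}
c2 = {"ch_id": 20, "members": [{"id": 21, "ch_id": 20}]}
print(len(merge_clusters(c1, c2)["members"]))   # 3 members now under CH 20
```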
4 Simulation Model
For the experimental evaluation of our scheme, simulations are performed using
MATLAB on a PC with RAM 8 GB and Intel processor with core i3 on Windows
10 operating system. The following scenarios are used in the simulation.
Scenario 1: In the first scenario, the total number of nodes (vehicles) considered is ten, the threshold directional distance D_TH is 2 km, and the energy threshold E_TH is 120 J. During routing between nodes, the wait-time limit Th_time is 10 s.

Scenario 2: In the second scenario, the total number of nodes (vehicles) considered is twenty, the threshold directional distance D_TH is 5 km, and the energy threshold E_TH is 115 J. During routing between nodes, the wait-time limit Th_time is 15 s.

Scenario 3: In the third scenario, the total number of nodes (vehicles) considered is one hundred, the threshold directional distance D_TH is 10 km, and the energy threshold E_TH is 115 J. During routing between nodes, the wait-time limit Th_time is 18 s.
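For convenience, the three scenarios can be collected into a single configuration structure; the sketch below is our own convenience layout, with the values taken directly from the text above.

```python
# The three simulation scenarios as one configuration table.
SCENARIOS = {
    1: {"nodes": 10,  "d_th_km": 2,  "e_th_j": 120, "th_time_s": 10},
    2: {"nodes": 20,  "d_th_km": 5,  "e_th_j": 115, "th_time_s": 15},
    3: {"nodes": 100, "d_th_km": 10, "e_th_j": 115, "th_time_s": 18},
}
for sid, cfg in SCENARIOS.items():
    print(sid, cfg)
```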
5 Results
The simulation results of scenario 1 are shown in Figs. 1, 2, and 3. The total number
of nodes, their speed, cluster re-calculation time, and the size of field are taken in
the form of input from the user. Figure 1 shows the initial state of cluster when all
the ten nodes/vehicles are not a part of any cluster yet. Initially, all the nodes are
marked as blue. After time t, nodes have changed their position and now they are
moving in different directions with different speeds and energy, but they are not a
part of any cluster yet, as shown in Fig. 2. Now, when the vehicles are moving in
separate directions, clusters will be formed of vehicles within a certain range of each
other, shown in Fig. 3. The CHs are marked red while all the other nodes are marked
as blue. Here, the vehicle with the best SF will be calculated as CH of that cluster,
whereas the vehicles which are not in the communication range of a CH will not be
a part of any cluster. The clusters will keep getting updated as the nodes change their
positions in the network, and CH may also change during the new cluster formation/
upgradation.
The simulation results of scenario 2 are shown in Figs. 4, 5, 6, and 7. The total
number of nodes, their speed, cluster re-calculation time, and the size of field are
taken in the form of input from the user. Figure 4 shows the initial state of cluster
when all the twenty nodes/vehicles are not a part of any cluster yet. Initially, all the
nodes are marked as blue. After time t, nodes have changed their position and now
they are moving in different directions with different speeds and energy, but they are
not a part of any cluster yet, as shown in Fig. 5. Now, when the vehicles are moving in
separate directions, clusters will be formed of vehicles within a certain range of each
other, shown in Fig. 6. The CHs are marked red while all the other nodes are marked
as blue. Here, the vehicle with the best SF will be calculated as CH of that cluster,
whereas the vehicles which are not in the communication range of a CH will not be
a part of any cluster. The clusters will keep getting updated as the nodes change their
positions in the network, and CH may also change during the new cluster formation/
upgradation. This change can be seen in Fig. 7.
The simulation results of scenario 3 are shown in Figs. 8, 9, 10, and 11. The total
number of nodes, their speed, cluster re-calculation time, and the size of field are
taken in the form of input from the user. Figure 8 shows the initial state of cluster
when all the hundred number of nodes/vehicles are not a part of any cluster yet.
Initially, all the nodes are marked as blue. After time t, nodes have changed their
position and now they are moving in different directions with different speeds and
energy, but they are not a part of any cluster yet, as shown in Fig. 9. Now, when the
vehicles are moving in separate directions, clusters will be formed of vehicles within
a certain range of each other, shown in Fig. 10. The CHs are marked red while all the
other nodes are marked as blue. Here, the vehicle with the best SF will be calculated
as CH of that cluster, whereas the vehicles which are not in the communication range
of a CH will not be a part of any cluster. The clusters will keep getting updated as
the nodes change their positions in the network, and CH may also change during the
new cluster formation/upgradation. This change can be seen in Fig. 11.
6 Conclusion
In this study, we proposed a novel scheme to reduce the energy consumption of vehicles in VANET. For this purpose, an effective clustering-based framework is provided, which comprises an efficient cluster head selection technique and routing protocol, as well as cluster creation and merging operations, which together greatly lower the energy consumption of the vehicles. We demonstrated through comprehensive simulation results that the suggested framework outperforms the conventional schemes.
References
2. Kandali, K., Bennis, L., & Bennis, H. (2021). A new hybrid routing protocol using a modified
K-means clustering algorithm and continuous hopfield network for VANET. IEEE Access, 9,
47169–47183.
3. Katiyar, A., Singh, D., & Yadav, R. S. (2022). Advanced multi-hop clustering (AMC) in vehicular ad-hoc network. Wireless Networks, 28(1), 45–68.
4. Darabkh, K. A., Alkhader, B. Z., Ala'F, K., Jubair, F., & Abdel-Majeed, M. (2022). ICDRP-F-SDVN: An innovative cluster-based dual-phase routing protocol using fog computing and software-defined vehicular network. Vehicular Communications, 100453.
5. Behura, A., Srinivas, M., & Kabat, M. R. (2022). Giraffe kicking optimization algorithm
provides efficient routing mechanism in the field of vehicular ad hoc networks. Journal of
Ambient Intelligence and Humanized Computing. https://doi.org/10.1007/s12652-021-03519-9
6. Bakkoury, S. O. S. B. Z. New machine learning solution based on clustering for delay-sensitive
application in VANET.
7. Khayat, G., Mavromoustakis, C. X., Mastorakis, G., Batalla, J. M., Maalouf, H., & Pallis,
E. (2020). VANET clustering based on weighted trusted cluster head selection. International
Wireless Communications and Mobile Computing (IWCMC), 2020, 623–628.
8. Ram, A., & Mishra, M. K. (2020). Density-connected cluster-based routing protocol in
vehicular ad hoc networks. Annals of Telecommunications, 75(7), 319–332.
9. Shelly, S., & Babu, A. V. (2017). Link residual lifetime-based next hop selection scheme for
vehicular ad hoc networks. EURASIP Journal on Wireless Communications and Networking,
2017(1), 23. https://doi.org/10.1186/s13638-017-0810-x
10. Rawashdeh, Z. Y., & Mahmud, S. M. (2012). A novel algorithm to form stable clusters in
vehicular ad hoc networks on highways. EURASIP Journal on Wireless Communications and
Networking, 2012(1), 15. https://doi.org/10.1186/1687-1499-2012-15
11. Pal, S., Jhanjhi, N. Z., Abdulbaqi, A. S., Akila, D., Almazroi, A. A., & Alsubaei, F. S. (2023). A
hybrid edge-cloud system for networking service components optimization using the internet
of things. Electronics, 12(3), 649.
12. Humayun, M., Ashfaq, F., Jhanjhi, N. Z., & Alsadun, M. K. (2022). Traffic management:
Multi-scale vehicle detection in varying weather conditions using yolov4 and spatial pyramid
pooling network. Electronics, 11(17), 2748.
13. AlZain, M. A. A secure multi-factor authentication protocol for healthcare services using
cloud-based SDN.
Type II Exponentiated Class
of Distributions: The Inverse Generalized
Gamma Model Case
Abstract We present here a new class of probability models named Type II exponentiated. This class plays a leading role in creating more flexible distributions. It employs the distribution function formula of the smallest order statistic instead of the formula of the greatest order statistic used in generating distributions in the class of Gupta et al. The Type II Exponentiated Inverse Generalized Gamma Distribution (Type II EIGGD) is used here to illustrate how models of this class originate. Some properties of the Type II EIGGD are derived.
1 Introduction
Gupta et al. in 1998 [1] proposed the exponentiated class of distributions as,
F(x) = [G(x)]k where G(x) is the baseline distribution function, and k is a positive
real number. A lot of work has been carried out on this idea. The focus here will be on recent literature only, due to the large volume of scientific research related to the topic. Pu et al. in 2016 [2] studied the generalized modified Weibull (GEMW)
distribution, which contains many models. Mathematical properties of this distribu-
tion are presented. Maximum likelihood estimation mechanism is used to estimate
the model parameters with real data sets. The Exponentiated T-X family of distribu-
tions is introduced by Ahmad et al. in 2019 [3]. Some properties of a special sub-
model, exponentiated exponential-Weibull are studied in detail. An empirical study
is conducted to evaluate the performances of the maximum likelihood estimators of
the model parameters. Oluyedea et al. in 2020 [4] proposed a generalized family of
distributions named the exponentiated generalized power series (EGPS) family of
distributions and studied its sub-model, the exponentiated generalized logarithmic
(EGL) class of distributions. Some properties of the new EGPS and EGL distributions
are derived. They used the method of maximum likelihood to estimate the parame-
ters of this new family of distributions. Some properties of exponentiated generalized
Gompertz-Makeham distribution are derived by [5]. The model parameter estima-
tion is derived via maximum likelihood estimate method. Abid and Kadhim in 2021
[6] presented the Doubly Truncated Exponentiated Inverse Gamma distribution (EIGD). Chipepa in 2022 [7] proposed the Exponentiated Half Logistic-Generalized-G Power Series (EHL-GGPS) distribution. Several mathematical properties of the EHL-GGPS
distribution are derived. A simulation study for selected parameter values is presented
to examine the consistency of the maximum likelihood estimates. In 2022, Abid
and Jani [8] presented two doubly truncated generalized distributions with a lot of
properties.
The proposed class for generating new distributions has the cumulative distribution function (cdf)

$$F(x) = 1 - [1 - G(x)]^{k}, \quad k > 0, \tag{1}$$

and the corresponding probability density function (pdf)

$$f(x) = \frac{\partial F(x)}{\partial x} = k\, g(x)\,[1 - G(x)]^{k-1}. \tag{2}$$

The proposed class of distributions will be called the Type II exponentiated class.
Assume that

$$g(x; \lambda, \theta, \beta) = \frac{\beta}{\lambda^{\theta}\, \Gamma(\theta/\beta)}\, x^{-(\theta+1)}\, e^{-\left(x^{-1}/\lambda\right)^{\beta}}, \qquad x > 0,$$

and

$$G(x; \lambda, \theta, \beta) = \frac{\Gamma\!\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}, \qquad x > 0,$$

are the pdf and cdf of the Inverse Generalized Gamma random variable, respectively. The cdf and the pdf of the Type II EIGGD based on (1) and (2) are

$$F(x) = 1 - \left[1 - \frac{\Gamma\!\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right]^{k}, \qquad x > 0, \tag{3}$$

$$f(x) = \frac{k}{\Gamma(\theta/\beta)}\,\frac{\beta}{\lambda^{\theta}}\, x^{-(\theta+1)}\, e^{-\left(x^{-1}/\lambda\right)^{\beta}} \left[1 - \frac{\Gamma\!\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right]^{k-1}, \qquad x > 0, \tag{4}$$

where $\Gamma(\alpha)$ is the ordinary Gamma function, $\gamma(\alpha, \beta x)$ is the lower incomplete Gamma function such that $\gamma(\alpha, \beta x) = \int_{0}^{\beta x} t^{\alpha-1} e^{-t}\, dt$, and $\Gamma(\alpha, \beta x) = \int_{\beta x}^{\infty} t^{\alpha-1} e^{-t}\, dt = \Gamma(\alpha) - \gamma(\alpha, \beta x)$ is the upper incomplete Gamma function.
So, the reliability and hazard rate functions are respectively
$$R(x) = 1 - F(x) = \left[1 - \frac{\Gamma\!\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right]^{k} \tag{5}$$

$$\lambda(x) = \frac{\dfrac{k\beta}{\lambda^{\theta}}\, x^{-(\theta+1)}\, e^{-\left(x^{-1}/\lambda\right)^{\beta}}}{\Gamma(\theta/\beta)\left[1 - \dfrac{\Gamma\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right]} = \frac{\dfrac{k\beta}{\lambda^{\theta}}\, x^{-(\theta+1)}\, e^{-\left(x^{-1}/\lambda\right)^{\beta}}}{\gamma\!\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right)} \tag{6}$$
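For readers who want to evaluate Eqs. (3)–(6) numerically, the following sketch (ours, not part of the paper) uses SciPy, whose gammaincc(a, x) equals Γ(a, x)/Γ(a); the parameter values chosen are arbitrary.

```python
# Numerical sketch of the Type II EIGGD cdf, pdf and hazard rate.
import numpy as np
from scipy.special import gamma, gammaincc

def eiggd_cdf(x, lam, theta, beta, k):
    g = gammaincc(theta / beta, (1.0 / (lam * x)) ** beta)   # baseline IGG cdf
    return 1.0 - (1.0 - g) ** k                               # Eq. (3)

def eiggd_pdf(x, lam, theta, beta, k):
    arg = (1.0 / (lam * x)) ** beta
    g = gammaincc(theta / beta, arg)
    base = (beta / (lam ** theta * gamma(theta / beta))) \
           * x ** (-(theta + 1.0)) * np.exp(-arg)              # baseline IGG pdf
    return k * base * (1.0 - g) ** (k - 1.0)                   # Eq. (4)

def eiggd_hazard(x, lam, theta, beta, k):
    return eiggd_pdf(x, lam, theta, beta, k) / (1.0 - eiggd_cdf(x, lam, theta, beta, k))  # Eq. (6)

x = np.linspace(0.1, 5.0, 5)
print(eiggd_cdf(x, lam=1.0, theta=2.0, beta=1.5, k=2.0))
print(eiggd_pdf(x, lam=1.0, theta=2.0, beta=1.5, k=2.0))
print(eiggd_hazard(x, lam=1.0, theta=2.0, beta=1.5, k=2.0))
```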
The rth raw moment is

$$E\left(x^{r}\right) = \int_{0}^{\infty} x^{r} f(x)\, dx = \frac{k}{\Gamma(\theta/\beta)}\,\frac{\beta}{\lambda^{\theta}} \int_{0}^{\infty} x^{-(\theta-r+1)}\, e^{-\left(x^{-1}/\lambda\right)^{\beta}} \left[1 - \frac{\Gamma\!\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right]^{k-1} dx$$

$$= \frac{k}{\Gamma(\theta/\beta)}\,\frac{\beta}{\lambda^{\theta}} \int_{0}^{\infty} x^{-(\theta-r+1)}\, e^{-\left(x^{-1}/\lambda\right)^{\beta}} \left[\frac{\Gamma(\theta/\beta) - \Gamma\!\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right]^{k-1} dx.$$

Now, since $\Gamma(s, y) + \gamma(s, y) = \Gamma(s)$, we have $\Gamma(\theta/\beta) - \Gamma\!\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right) = \gamma\!\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right)$, so that

$$E\left(x^{r}\right) = \frac{k}{\Gamma(\theta/\beta)}\,\frac{\beta}{\lambda^{\theta}} \int_{0}^{\infty} x^{-(\theta-r+1)}\, e^{-\left(x^{-1}/\lambda\right)^{\beta}} \left[\frac{\gamma\!\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right]^{k-1} dx.$$
Since

$$(1 - z)^{b} = \sum_{w=0}^{\infty} \frac{(-1)^{w}\, \Gamma(b+1)}{w!\, \Gamma(b-w+1)}\, z^{w}, \qquad |z| < 1,\; b > 0,$$

and

$$(1 - z)^{-k} = \sum_{j=0}^{\infty} \frac{\Gamma(k+j)}{j!\, \Gamma(k)}\, z^{j}, \qquad |z| < 1,\; k > 0,$$

writing $\left[\frac{\gamma\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right]^{k-1} = \left[1 - \left(1 - \frac{\gamma\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right)\right]^{k-1}$ gives three formulas.

If $k - 1 > 0$, then
$$\left[1 - \left(1 - \frac{\gamma\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right)\right]^{k-1} = \sum_{w=0}^{\infty} \frac{(-1)^{w}\, \Gamma(k)}{w!\, \Gamma(k-w)} \sum_{\ell=0}^{\infty} \frac{(-1)^{\ell}\, \Gamma(w+1)}{\ell!\, \Gamma(w-\ell+1)} \left(\frac{\gamma\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right)^{\ell}.$$

If $k - 1 < 0$, then
$$\left[1 - \left(1 - \frac{\gamma\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right)\right]^{k-1} = \sum_{j=0}^{\infty} \frac{\Gamma(k-1+j)}{j!\, \Gamma(k-1)} \sum_{w=0}^{\infty} \frac{(-1)^{w}\, \Gamma(j+1)}{w!\, \Gamma(j-w+1)} \left(\frac{\gamma\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right)^{w}.$$

If $k - 1 = 0$, then the bracketed term equals 1.

Let $y = \left(x^{-1}/\lambda\right)^{\beta}$, so that $x^{-1}/\lambda = y^{1/\beta}$, $x = \frac{1}{\lambda\, y^{1/\beta}}$ and $dx = \frac{-1}{\lambda\beta}\, y^{-\frac{1}{\beta}-1}\, dy$.

Case one: for $k - 1 > 0$,

$$E\left(x^{r}\right) = \frac{\Gamma(k+1)}{\Gamma(\theta/\beta)\, \lambda^{r}} \sum_{w=0}^{\infty} \frac{(-1)^{w}}{w!\, \Gamma(k-w)} \sum_{\ell=0}^{\infty} \frac{(-1)^{\ell}\, \Gamma(w+1)}{\ell!\, \Gamma(w-\ell+1)} \int_{0}^{\infty} y^{\left[\frac{\theta-r}{\beta}\right]-1}\, e^{-y} \left(\frac{\gamma(\theta/\beta, y)}{\Gamma(\theta/\beta)}\right)^{\ell} dy.$$

By using
$$\int_{0}^{\infty} y^{\alpha+r-1}\, e^{-y}\, (\gamma(\alpha, y))^{m}\, dy = I(\alpha + r, m) = \alpha^{-m}\, \Gamma(r + \alpha(m+1))\, F_{A}^{(m)}(r + \alpha(m+1); \alpha, \ldots, \alpha; \alpha+1, \ldots, \alpha+1; -1, \ldots, -1),$$
where $F_{A}^{(m)}$ is the Lauricella function of type A, we get

$$E\left(x^{r}\right) = \frac{\Gamma(k+1)}{\Gamma(\theta/\beta)\, \lambda^{r}} \sum_{w=0}^{\infty} \frac{(-1)^{w}}{w!\, \Gamma(k-w)} \sum_{\ell=0}^{\infty} \frac{(-1)^{\ell}\, \Gamma(w+1)}{\ell!\, \Gamma(w-\ell+1)} \frac{1}{(\Gamma(\theta/\beta))^{\ell}}\, I\!\left(\frac{\theta-r}{\beta},\, \ell\right).$$
Case two: for $k - 1 < 0$,

$$E\left(x^{r}\right) = \frac{k}{\Gamma(\theta/\beta)}\,\frac{\beta}{\lambda^{\theta}} \sum_{j=0}^{\infty} \frac{\Gamma(k-1+j)}{j!\, \Gamma(k-1)} \sum_{w=0}^{\infty} \frac{(-1)^{w}\, \Gamma(j+1)}{w!\, \Gamma(j-w+1)} \int_{0}^{\infty} x^{-(\theta-r+1)}\, e^{-\left(x^{-1}/\lambda\right)^{\beta}} \left(\frac{\gamma\left(\theta/\beta,\, \left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right)^{w} dx.$$

Using again the substitution $y = \left(x^{-1}/\lambda\right)^{\beta}$,

$$E\left(x^{r}\right) = \frac{k}{\Gamma(\theta/\beta)\, \lambda^{r}} \sum_{j=0}^{\infty} \frac{\Gamma(k-1+j)}{j!\, \Gamma(k-1)} \sum_{w=0}^{\infty} \frac{(-1)^{w}\, \Gamma(j+1)}{w!\, \Gamma(j-w+1)} \int_{0}^{\infty} y^{\left[\frac{\theta-r}{\beta}\right]-1}\, e^{-y} \left(\frac{\gamma(\theta/\beta, y)}{\Gamma(\theta/\beta)}\right)^{w} dy,$$

and again by $\int_{0}^{\infty} y^{\alpha+r-1}\, e^{-y}\, (\gamma(\alpha, y))^{j}\, dy = I(\alpha + r, j) = \alpha^{-j}\, \Gamma(r + \alpha(j+1))\, F_{A}^{(j)}(r + \alpha(j+1); \alpha, \ldots, \alpha; \alpha+1, \ldots, \alpha+1; -1, \ldots, -1)$, where $F_{A}^{(j)}$ is the Lauricella function of type A,

$$E\left(x^{r}\right) = \frac{k}{\Gamma(\theta/\beta)\, \lambda^{r}} \sum_{j=0}^{\infty} \frac{\Gamma(k-1+j)}{j!\, \Gamma(k-1)} \sum_{w=0}^{\infty} \frac{(-1)^{w}\, \Gamma(j+1)}{w!\, \Gamma(j-w+1)} \frac{1}{(\Gamma(\theta/\beta))^{w}}\, I\!\left(\frac{\theta-r}{\beta},\, w\right).$$
Case three: for $k - 1 = 0$, i.e. $k = 1$,

$$E\left(x^{r}\right) = \int_{0}^{\infty} x^{r} f(x)\, dx = \frac{1}{\Gamma(\theta/\beta)}\,\frac{\beta}{\lambda^{\theta}} \int_{0}^{\infty} x^{-(\theta-r+1)}\, e^{-\left(x^{-1}/\lambda\right)^{\beta}}\, dx.$$

With the same substitution $y = \left(x^{-1}/\lambda\right)^{\beta}$,

$$E\left(x^{r}\right) = \frac{1}{\Gamma(\theta/\beta)\, \lambda^{r}} \int_{0}^{\infty} y^{\left[\frac{\theta-r}{\beta}\right]-1}\, e^{-y}\, dy = \frac{\Gamma\!\left(\frac{\theta-r}{\beta}\right)}{\Gamma(\theta/\beta)\, \lambda^{r}}.$$
Expansion formula for the rth raw moment functions of Type II EIGGD is given
by
⎧ ( )
⎪ Γ(k+1) Σ∞ (−1)w Σ∞ (−1)3 Γ(w+1) 1 (θ −r )
⎪
⎪
⎪ Γ(θ/β)λr w=0 w!Γ(k−w) 3=0 3!Γ(w−3+1) (Γ(θ/β))3 I β ,3 ,k − 1 > 0
( ) ⎨ Σ∞ Γ(k−1+ j ) Σ∞ (−1)w Γ( j+1) ( )
k 1 (θ −r )
E X r = Γ(θ/β)λr j=0 j!Γ(k−1) w=0 w!Γ( j−w+1) (Γ(θ/β)) w I β ,w ,k − 1 < 0 (7)
⎪ ( )
⎪
⎪ Γ (θ −r )
⎪
⎩ β
Γ(θ/β)λr , k − 1 = 0
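As an illustration of Eq. (7), the rth raw moment can also be checked numerically by integrating \(x^{r}f(x)\) directly with the pdf used in the derivation above. The following sketch is not part of the paper; the parameter values are arbitrary assumptions chosen only for demonstration, and the truncated series in Eq. (7) should agree with the quadrature value.

```python
# Numerical sanity check of the rth raw moment of the Type II EIGGD.
# The pdf below is the form used in the derivation above; lam, theta, beta, k
# are assumed illustrative values (the moment requires r < k*theta).
import numpy as np
from scipy import integrate, special

lam, theta, beta, k = 1.5, 4.0, 2.0, 2.5   # assumed parameters
a = theta / beta

def pdf(x):
    z = (1.0 / (lam * x)) ** beta          # (x^{-1}/lambda)^beta
    # gammainc(a, z) is the regularized lower incomplete gamma gamma(a,z)/Gamma(a),
    # i.e. 1 - Gamma(a,z)/Gamma(a), as in the Type II EIGGD pdf
    return (k * beta / (special.gamma(a) * lam ** theta)
            * x ** (-(theta + 1)) * np.exp(-z)
            * special.gammainc(a, z) ** (k - 1))

total, _ = integrate.quad(pdf, 0, np.inf)                       # should be close to 1
r = 1
moment, _ = integrate.quad(lambda x: x ** r * pdf(x), 0, np.inf)
print(f"integral of pdf = {total:.6f}, E(X^{r}) = {moment:.6f}")
```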
The Shannon entropy [9] of a continuous random variable with PDF (4) is defined by Shannon as \(H=E(-\ln f(x))\). So,
\[
H=\ln\!\left(\frac{\lambda^{\theta}\,\Gamma\!\left(\frac{\theta}{\beta}\right)}{k\beta}\right)
+(\theta+1)E(\ln X)
+E\!\left(\left(X^{-1}/\lambda\right)^{\beta}\right)
-(k-1)E\!\left(\ln\!\left(1-\frac{\Gamma\!\left(\theta/\beta,\left(X^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right)\right).
\tag{8}
\]
Let \(I_{1}=(\theta+1)E(\ln X)\), \(I_{2}=E\!\left(\left(X^{-1}/\lambda\right)^{\beta}\right)\) and \(I_{3}=-(k-1)E\!\left(\ln\!\left(1-\frac{\Gamma\left(\theta/\beta,\left(X^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right)\right)\).

For \(I_{1}\),
\[
I_{1}=(\theta+1)\int_{0}^{\infty}\ln x\, f(x)\,dx
=\frac{(\theta+1)k\beta}{\Gamma(\theta/\beta)\lambda^{\theta}}\int_{0}^{\infty}(\ln x)\,x^{-(\theta+1)}e^{-\left(x^{-1}/\lambda\right)^{\beta}}
\left\{1-\frac{\Gamma\!\left(\theta/\beta,\left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right\}^{k-1}dx
\]
\[
=\frac{(\theta+1)k\beta}{\Gamma(\theta/\beta)\lambda^{\theta}}\int_{0}^{\infty}(\ln x)\,x^{-(\theta+1)}e^{-\left(x^{-1}/\lambda\right)^{\beta}}
\left[\frac{\Gamma(\theta/\beta)-\Gamma\!\left(\frac{\theta}{\beta},\left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right]^{k-1}dx .
\]
Now since \(\Gamma(s,\Upsilon)+\gamma(s,\Upsilon)=\Gamma(s)\), we have \(\Gamma(\theta/\beta)-\Gamma\!\left(\theta/\beta,\left(x^{-1}/\lambda\right)^{\beta}\right)=\gamma\!\left(\theta/\beta,\left(x^{-1}/\lambda\right)^{\beta}\right)\), and we get
\[
I_{1}=\frac{(\theta+1)k\beta}{\Gamma(\theta/\beta)\lambda^{\theta}}\int_{0}^{\infty}(\ln x)\,x^{-(\theta+1)}e^{-\left(x^{-1}/\lambda\right)^{\beta}}
\left[\frac{\gamma\!\left(\theta/\beta,\left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right]^{k-1}dx .
\]
So that, writing \(\left[\gamma\!\left(\theta/\beta,\left(x^{-1}/\lambda\right)^{\beta}\right)/\Gamma(\theta/\beta)\right]^{k-1}=\left[1-\left(1-\gamma\!\left(\theta/\beta,\left(x^{-1}/\lambda\right)^{\beta}\right)/\Gamma(\theta/\beta)\right)\right]^{k-1}\), we get three cases.

Case one: for \(k-1>0\),
\[
I_{1}=\frac{(\theta+1)k\beta}{\Gamma(\theta/\beta)\lambda^{\theta}}
\sum_{u=0}^{\infty}\frac{(-1)^{u}\Gamma(k)}{u!\,\Gamma(k-u)}\sum_{s=0}^{\infty}\frac{(-1)^{s}\Gamma(u+1)}{s!\,\Gamma(u-s+1)}
\int_{0}^{\infty}(\ln x)\,x^{-(\theta+1)}e^{-\left(x^{-1}/\lambda\right)^{\beta}}
\left\{\frac{\gamma\!\left(\theta/\beta,\left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right\}^{s}dx .
\]
Let \(y=\left(x^{-1}/\lambda\right)^{\beta}\), so \(x=\frac{1}{\lambda\,y^{1/\beta}}\) and \(dx=-\frac{1}{\lambda\beta}y^{-\frac{1}{\beta}-1}dy\); then
\[
I_{1}=\frac{(\theta+1)\Gamma(k+1)}{\Gamma(\theta/\beta)}
\sum_{u=0}^{\infty}\frac{(-1)^{u}}{u!\,\Gamma(k-u)}\sum_{s=0}^{\infty}\frac{(-1)^{s}\Gamma(u+1)}{s!\,\Gamma(u-s+1)}
\int_{0}^{\infty}\left(-\ln\lambda-\frac{1}{\beta}\ln y\right)e^{-y}y^{\frac{\theta}{\beta}-1}
\left(\frac{\gamma(\theta/\beta,y)}{\Gamma(\theta/\beta)}\right)^{s}dy .
\]
Write
\[
I_{11}=-\ln\lambda\int_{0}^{\infty}e^{-y}y^{\frac{\theta}{\beta}-1}\left(\frac{\gamma(\theta/\beta,y)}{\Gamma(\theta/\beta)}\right)^{s}dy,
\qquad
I_{12}=-\frac{1}{\beta}\int_{0}^{\infty}\ln y\;e^{-y}y^{\frac{\theta}{\beta}-1}\left(\frac{\gamma(\theta/\beta,y)}{\Gamma(\theta/\beta)}\right)^{s}dy .
\]
By using \(\int_{0}^{\infty}y^{\alpha+r-1}e^{-y}(\gamma(\alpha,y))^{m}dy=I(\alpha+r,m)=\alpha^{-m}\Gamma(r+\alpha(m+1))F_{A}^{(m)}(r+\alpha(m+1);\alpha,\ldots,\alpha;\alpha+1,\ldots,\alpha+1;-1,\ldots,-1)\), where \(F_{A}^{(m)}\) is the Lauricella function of type A,
\[
I_{11}=\frac{-\ln\lambda}{(\Gamma(\theta/\beta))^{s}}\,I\!\left(\frac{\theta}{\beta},s\right).
\]
By using the series of the incomplete gamma function, \(\frac{\gamma(\theta/\beta,y)}{\Gamma(\theta/\beta)}=\frac{y^{\theta/\beta}}{\Gamma(\theta/\beta)}\sum_{m=0}^{\infty}\frac{(-y)^{m}}{(\theta/\beta+m)m!}\), we get
\[
I_{12}=-\frac{1}{\beta}\int_{0}^{\infty}\ln y\;e^{-y}y^{\frac{\theta}{\beta}-1}\left(\frac{y^{\theta/\beta}}{\Gamma(\theta/\beta)}\sum_{m=0}^{\infty}\frac{(-y)^{m}}{(\theta/\beta+m)m!}\right)^{s}dy .
\]
By application of the equation in Sect. 0.314 of [10] for a power series raised to a power, namely \(\left(\sum_{m=0}^{\infty}a_{m}(\beta x)^{m}\right)^{u}=\sum_{m=0}^{\infty}C_{u,m}(\beta x)^{m}\) for any positive integer \(u\), where the coefficients \(C_{u,m}\) (for \(m=1,2,\ldots\)) satisfy the recurrence relation \(C_{u,m}=(m a_{0})^{-1}\sum_{p=1}^{m}(up-m+p)\,a_{p}\,C_{u,m-p}\), with \(C_{u,0}=a_{0}^{u}\) and \(a_{p}=\frac{(-1)^{p}}{(\alpha+p)p!}\), we get
\[
\left(\frac{y^{\theta/\beta}}{\Gamma(\theta/\beta)}\sum_{m=0}^{\infty}\frac{(-y)^{m}}{(\theta/\beta+m)m!}\right)^{s}
=\left(\frac{y^{\theta/\beta}}{\Gamma(\theta/\beta)}\sum_{m=0}^{\infty}a_{m}y^{m}\right)^{s}
=\frac{y^{\theta s/\beta}}{(\Gamma(\theta/\beta))^{s}}\sum_{m=0}^{\infty}C_{s,m}\,y^{m},
\]
so that
\[
I_{12}=-\frac{1}{\beta(\Gamma(\theta/\beta))^{s}}\sum_{m=0}^{\infty}C_{s,m}\int_{0}^{\infty}\ln y\;e^{-y}y^{\frac{\theta[1+s]}{\beta}+m-1}dy .
\]
Since \(\int_{0}^{\infty}x^{s-1}e^{-mx}\ln x\,dx=m^{-s}\Gamma(s)\{\psi(s)-\ln m\}\), then
\[
I_{12}=-\sum_{m=0}^{\infty}C_{s,m}\frac{\Gamma\!\left(\frac{\theta[1+s]}{\beta}+m\right)}{\beta(\Gamma(\theta/\beta))^{s}}\,\psi\!\left(\frac{\theta[1+s]}{\beta}+m\right),
\]
and therefore
\[
I_{1}=\frac{-(\theta+1)\Gamma(k+1)\ln\lambda}{\Gamma(\theta/\beta)}
\sum_{u=0}^{\infty}\frac{(-1)^{u}}{u!\,\Gamma(k-u)}\sum_{s=0}^{\infty}\frac{(-1)^{s}\Gamma(u+1)}{s!\,\Gamma(u-s+1)}\frac{1}{(\Gamma(\theta/\beta))^{s}}I\!\left(\frac{\theta}{\beta},s\right)
-\frac{(\theta+1)\Gamma(k+1)}{\Gamma(\theta/\beta)}
\sum_{u=0}^{\infty}\frac{(-1)^{u}}{u!\,\Gamma(k-u)}\sum_{s=0}^{\infty}\frac{(-1)^{s}\Gamma(u+1)}{s!\,\Gamma(u-s+1)}
\sum_{m=0}^{\infty}C_{s,m}\frac{\Gamma\!\left(\frac{\theta[1+s]}{\beta}+m\right)}{\beta(\Gamma(\theta/\beta))^{s}}\,\psi\!\left(\frac{\theta[1+s]}{\beta}+m\right).
\]

Case two: for \(k-1<0\),
\[
I_{1}=\frac{(\theta+1)k\beta}{\Gamma(\theta/\beta)\lambda^{\theta}}
\sum_{j=0}^{\infty}\frac{\Gamma(k-1+j)}{j!\,\Gamma(k-1)}\sum_{s=0}^{\infty}\frac{(-1)^{s}\Gamma(j+1)}{s!\,\Gamma(j-s+1)}
\int_{0}^{\infty}(\ln x)\,x^{-(\theta+1)}e^{-\left(x^{-1}/\lambda\right)^{\beta}}
\left(\frac{\gamma\!\left(\theta/\beta,\left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right)^{s}dx .
\]
By the same arguments as above, we can easily write
\[
I_{1}=\frac{-(\theta+1)k\ln\lambda}{\Gamma(\theta/\beta)}
\sum_{j=0}^{\infty}\frac{\Gamma(k-1+j)}{j!\,\Gamma(k-1)}\sum_{s=0}^{\infty}\frac{(-1)^{s}\Gamma(j+1)}{s!\,\Gamma(j-s+1)}\frac{1}{(\Gamma(\theta/\beta))^{s}}I\!\left(\frac{\theta}{\beta},s\right)
-\frac{(\theta+1)k}{\Gamma(\theta/\beta)}
\sum_{j=0}^{\infty}\frac{\Gamma(k-1+j)}{j!\,\Gamma(k-1)}\sum_{s=0}^{\infty}\frac{(-1)^{s}\Gamma(j+1)}{s!\,\Gamma(j-s+1)}
\sum_{m=0}^{\infty}C_{s,m}\frac{\Gamma\!\left(\frac{\theta[1+s]}{\beta}+m\right)}{\beta(\Gamma(\theta/\beta))^{s}}\,\psi\!\left(\frac{\theta[1+s]}{\beta}+m\right).
\]

For \(I_{2}\), since \(I_{2}=E\!\left(\left(X^{-1}/\lambda\right)^{\beta}\right)=\frac{1}{\lambda^{\beta}}E\!\left(X^{-\beta}\right)\), Eq. (7) gives
\[
I_{2}=
\begin{cases}
\dfrac{\Gamma(k+1)}{\Gamma(\theta/\beta)}\displaystyle\sum_{w=0}^{\infty}\dfrac{(-1)^{w}}{w!\,\Gamma(k-w)}\sum_{l=0}^{\infty}\dfrac{(-1)^{l}\Gamma(w+1)}{l!\,\Gamma(w-l+1)}\dfrac{1}{(\Gamma(\theta/\beta))^{l}}\,I\!\left(\dfrac{\theta+\beta}{\beta},l\right), & k-1>0,\\[2ex]
\dfrac{k}{\Gamma(\theta/\beta)}\displaystyle\sum_{j=0}^{\infty}\dfrac{\Gamma(k-1+j)}{j!\,\Gamma(k-1)}\sum_{w=0}^{\infty}\dfrac{(-1)^{w}\Gamma(j+1)}{w!\,\Gamma(j-w+1)}\dfrac{1}{(\Gamma(\theta/\beta))^{w}}\,I\!\left(\dfrac{\theta+\beta}{\beta},w\right), & k-1<0,\\[2ex]
\theta/\beta, & k-1=0.
\end{cases}
\]

For \(I_{3}=-(k-1)E\!\left(\ln\!\left(1-\frac{\Gamma\left(\theta/\beta,\left(X^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right)\right)\), we have
\[
I_{3}=-(k-1)E\!\left(\ln\!\left(\frac{\Gamma(\theta/\beta)-\Gamma\!\left(\theta/\beta,\left(X^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right)\right),
\]
and since \(\Gamma(s,\Upsilon)+\gamma(s,\Upsilon)=\Gamma(s)\),
\[
I_{3}=-(k-1)E\!\left(\ln\!\left(\frac{\gamma\!\left(\theta/\beta,\left(X^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right)\right).
\]
By using \(\ln(1-x)=-\sum_{n=1}^{\infty}\frac{x^{n}}{n}\), we get
\[
\ln\!\left(\frac{\gamma\!\left(\theta/\beta,\left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right)
=\ln\!\left(1-\left(1-\frac{\gamma\!\left(\theta/\beta,\left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right)\right)
=-\sum_{n=1}^{\infty}\frac{1}{n}\sum_{s=0}^{\infty}\frac{(-1)^{s}\Gamma(n+1)}{s!\,\Gamma(n-s+1)}
\left(\frac{\gamma\!\left(\theta/\beta,\left(x^{-1}/\lambda\right)^{\beta}\right)}{\Gamma(\theta/\beta)}\right)^{s}
\]
\[
=\sum_{n=1}^{\infty}\sum_{s=0}^{\infty}\frac{(-1)^{1+s}\Gamma(n+1)}{n\,s!\,\Gamma(n-s+1)}
\left(\left(x^{-1}/\lambda\right)^{\theta}e^{-\left(x^{-1}/\lambda\right)^{\beta}}\sum_{v=0}^{\infty}\frac{\left(\left(x^{-1}/\lambda\right)^{\beta}\right)^{v}}{(\theta/\beta+v)!}\right)^{s}.
\]
Then
\[
\left[\left(x^{-1}/\lambda\right)^{\theta}e^{-\left(x^{-1}/\lambda\right)^{\beta}}\sum_{v=0}^{\infty}\frac{\left(\left(x^{-1}/\lambda\right)^{\beta}\right)^{v}}{(\theta/\beta+v)!}\right]^{s}
=\sum_{v_{1}=0}^{\infty}\cdots\sum_{v_{s}=0}^{\infty}\frac{\lambda^{-\theta s-\beta v_{1}-\cdots-\beta v_{s}}}{(\theta/\beta+v_{1})!\cdots(\theta/\beta+v_{s})!}\,
e^{-s\left(x^{-1}/\lambda\right)^{\beta}}x^{-\theta s-\beta v_{1}-\cdots-\beta v_{s}} .
\]
Letting \(e^{-s\left(x^{-1}/\lambda\right)^{\beta}}=\sum_{q=0}^{\infty}\frac{\left(-s\left(x^{-1}/\lambda\right)^{\beta}\right)^{q}}{q!}\), we obtain
\[
I_{3}=-(k-1)\sum_{n=1}^{\infty}\sum_{s=0}^{\infty}\sum_{q=0}^{\infty}\frac{(-1)^{1+s+q}\Gamma(n+1)}{n\,q!\,s!\,\Gamma(n-s+1)}\left(\frac{s}{\lambda^{\beta}}\right)^{q}
\sum_{v_{1}=0}^{\infty}\cdots\sum_{v_{s}=0}^{\infty}\frac{\lambda^{-\theta s-\beta v_{1}-\cdots-\beta v_{s}}}{(\theta/\beta+v_{1})!\cdots(\theta/\beta+v_{s})!}\,
E\!\left(X^{-\theta s-\beta v_{1}-\cdots-\beta v_{s}-\beta q}\right),
\]
where
\[
E\!\left(X^{-\theta s-\beta v_{1}-\cdots-\beta v_{s}-\beta q}\right)=
\begin{cases}
\dfrac{\Gamma(k+1)}{\Gamma\!\left(\frac{\theta}{\beta}\right)\lambda^{-\theta s-\beta v_{1}-\cdots-\beta v_{s}-\beta q}}\displaystyle\sum_{w=0}^{\infty}\dfrac{(-1)^{w}}{w!\,\Gamma(k-w)}\sum_{l=0}^{\infty}\dfrac{(-1)^{l}\Gamma(w+1)}{l!\,\Gamma(w-l+1)}\dfrac{1}{\left(\Gamma\!\left(\frac{\theta}{\beta}\right)\right)^{l}}\,I\!\left(\dfrac{\theta+\theta s+\beta v_{1}+\cdots+\beta v_{s}+\beta q}{\beta},l\right), & k-1>0,\\[2ex]
\dfrac{k}{\Gamma\!\left(\frac{\theta}{\beta}\right)\lambda^{-\theta s-\beta v_{1}-\cdots-\beta v_{s}-\beta q}}\displaystyle\sum_{j=0}^{\infty}\dfrac{\Gamma(k-1+j)}{j!\,\Gamma(k-1)}\sum_{w=0}^{\infty}\dfrac{(-1)^{w}\Gamma(j+1)}{w!\,\Gamma(j-w+1)}\dfrac{1}{\left(\Gamma\!\left(\frac{\theta}{\beta}\right)\right)^{w}}\,I\!\left(\dfrac{\theta+\theta s+\beta v_{1}+\cdots+\beta v_{s}+\beta q}{\beta},w\right), & k-1<0,\\[2ex]
\dfrac{\Gamma\!\left(\dfrac{\theta+\theta s+\beta v_{1}+\cdots+\beta v_{s}+\beta q}{\beta}\right)}{\Gamma(\theta/\beta)\,\lambda^{-\theta s-\beta v_{1}-\cdots-\beta v_{s}-\beta q}}, & k-1=0.
\end{cases}
\]
Substituting \(I_{1}\), \(I_{2}\) and \(I_{3}\) in Eq. (8), we get the Shannon entropy of the Type II
EIGGD.
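The entropy expression (8) can also be checked by direct numerical integration of \(-f(x)\ln f(x)\); the sketch below uses the same illustrative (assumed) parameter values as in the moment check above and is not part of the paper.

```python
# Numerical check of the Shannon entropy H = E(-ln f(X)) for the Type II EIGGD
# (illustrative sketch with assumed parameter values).
import numpy as np
from scipy import integrate, special

lam, theta, beta, k = 1.5, 4.0, 2.0, 2.5
a = theta / beta

def pdf(x):
    z = (1.0 / (lam * x)) ** beta
    return (k * beta / (special.gamma(a) * lam ** theta)
            * x ** (-(theta + 1)) * np.exp(-z)
            * special.gammainc(a, z) ** (k - 1))

def integrand(x):
    fx = pdf(x)
    return -fx * np.log(fx) if fx > 0 else 0.0

H, _ = integrate.quad(integrand, 0, np.inf)
print(f"Shannon entropy H = {H:.6f}")
```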
The relative entropy of the Type II EIGGD can be obtained from
\[
D_{KL}(F_{1}\|F_{2})=\int_{0}^{\infty}f_{1}(x)\ln\!\left(\frac{f_{1}(x)}{f_{2}(x)}\right)dx,
\]
such that \(f_{1}(x)\) is the pdf in (4) with parameters \((\lambda_{1},\theta_{1},\beta_{1},k_{1})\) and \(f_{2}(x)\) is the pdf with parameters \((\lambda_{2},\theta_{2},\beta_{2},k_{2})\). Then
\[
D_{KL}(F_{1}\|F_{2})=\ln\!\left(\frac{\dfrac{k_{1}\beta_{1}}{\Gamma(\theta_{1}/\beta_{1})\,\lambda_{1}^{\theta_{1}}}}{\dfrac{k_{2}\beta_{2}}{\Gamma(\theta_{2}/\beta_{2})\,\lambda_{2}^{\theta_{2}}}}\right)
+\{(\theta_{2}+1)-(\theta_{1}+1)\}E(\ln X)
-E\!\left(\left(X^{-1}/\lambda_{1}\right)^{\beta_{1}}\right)
+E\!\left(\left(X^{-1}/\lambda_{2}\right)^{\beta_{2}}\right)
+(k_{1}-1)E\!\left(\ln\!\left(1-\frac{\Gamma\!\left(\theta_{1}/\beta_{1},\left(X^{-1}/\lambda_{1}\right)^{\beta_{1}}\right)}{\Gamma(\theta_{1}/\beta_{1})}\right)\right)
-(k_{2}-1)E\!\left(\ln\!\left(1-\frac{\Gamma\!\left(\theta_{2}/\beta_{2},\left(X^{-1}/\lambda_{2}\right)^{\beta_{2}}\right)}{\Gamma(\theta_{2}/\beta_{2})}\right)\right),
\tag{9}
\]
where all expectations are taken with respect to \(f_{1}\). The required expectations follow from the results above:
\[
E\!\left(\left(\frac{X^{-1}}{\lambda_{1}}\right)^{\beta_{1}}\right)=
\begin{cases}
\dfrac{\Gamma(k_{1}+1)}{\Gamma(\theta_{1}/\beta_{1})}\displaystyle\sum_{w=0}^{\infty}\dfrac{(-1)^{w}}{w!\,\Gamma(k_{1}-w)}\sum_{l=0}^{\infty}\dfrac{(-1)^{l}\Gamma(w+1)}{l!\,\Gamma(w-l+1)}\dfrac{1}{(\Gamma(\theta_{1}/\beta_{1}))^{l}}\,I\!\left(\dfrac{\theta_{1}+\beta_{1}}{\beta_{1}},l\right), & k_{1}-1>0,\\[2ex]
\dfrac{k_{1}}{\Gamma(\theta_{1}/\beta_{1})}\displaystyle\sum_{j=0}^{\infty}\dfrac{\Gamma(k_{1}-1+j)}{j!\,\Gamma(k_{1}-1)}\sum_{w=0}^{\infty}\dfrac{(-1)^{w}\Gamma(j+1)}{w!\,\Gamma(j-w+1)}\dfrac{1}{(\Gamma(\theta_{1}/\beta_{1}))^{w}}\,I\!\left(\dfrac{\theta_{1}+\beta_{1}}{\beta_{1}},w\right), & k_{1}-1<0,\\[2ex]
\theta_{1}/\beta_{1}, & k_{1}-1=0,
\end{cases}
\]
\[
E\!\left(\left(\frac{X^{-1}}{\lambda_{2}}\right)^{\beta_{2}}\right)=
\begin{cases}
\dfrac{\Gamma(k_{1}+1)}{\Gamma(\theta_{1}/\beta_{1})}\left(\dfrac{\lambda_{1}}{\lambda_{2}}\right)^{\beta_{2}}\displaystyle\sum_{w=0}^{\infty}\dfrac{(-1)^{w}}{w!\,\Gamma(k_{1}-w)}\sum_{l=0}^{\infty}\dfrac{(-1)^{l}\Gamma(w+1)}{l!\,\Gamma(w-l+1)}\dfrac{1}{(\Gamma(\theta_{1}/\beta_{1}))^{l}}\,I\!\left(\dfrac{\theta_{1}+\beta_{2}}{\beta_{1}},l\right), & k_{1}-1>0,\\[2ex]
\dfrac{k_{1}}{\Gamma(\theta_{1}/\beta_{1})}\left(\dfrac{\lambda_{1}}{\lambda_{2}}\right)^{\beta_{2}}\displaystyle\sum_{j=0}^{\infty}\dfrac{\Gamma(k_{1}-1+j)}{j!\,\Gamma(k_{1}-1)}\sum_{w=0}^{\infty}\dfrac{(-1)^{w}\Gamma(j+1)}{w!\,\Gamma(j-w+1)}\dfrac{1}{(\Gamma(\theta_{1}/\beta_{1}))^{w}}\,I\!\left(\dfrac{\theta_{1}+\beta_{2}}{\beta_{1}},w\right), & k_{1}-1<0,\\[2ex]
\dfrac{\Gamma\!\left(\frac{\theta_{1}+\beta_{2}}{\beta_{1}}\right)}{\Gamma(\theta_{1}/\beta_{1})}\left(\dfrac{\lambda_{1}}{\lambda_{2}}\right)^{\beta_{2}}, & k_{1}-1=0,
\end{cases}
\]
\[
E(\ln X)=
\begin{cases}
\dfrac{-\Gamma(k_{1}+1)\ln\lambda_{1}}{\Gamma(\theta_{1}/\beta_{1})}\displaystyle\sum_{u=0}^{\infty}\dfrac{(-1)^{u}}{u!\,\Gamma(k_{1}-u)}\sum_{s=0}^{\infty}\dfrac{(-1)^{s}\Gamma(u+1)}{s!\,\Gamma(u-s+1)}\dfrac{1}{(\Gamma(\theta_{1}/\beta_{1}))^{s}}I\!\left(\dfrac{\theta_{1}}{\beta_{1}},s\right)\\
\quad-\dfrac{\Gamma(k_{1}+1)}{\Gamma(\theta_{1}/\beta_{1})}\displaystyle\sum_{u=0}^{\infty}\dfrac{(-1)^{u}}{u!\,\Gamma(k_{1}-u)}\sum_{s=0}^{\infty}\dfrac{(-1)^{s}\Gamma(u+1)}{s!\,\Gamma(u-s+1)}\sum_{m=0}^{\infty}C_{s,m}\dfrac{\Gamma\!\left(\frac{\theta_{1}[1+s]}{\beta_{1}}+m\right)}{\beta_{1}(\Gamma(\theta_{1}/\beta_{1}))^{s}}\,\psi\!\left(\dfrac{\theta_{1}[1+s]}{\beta_{1}}+m\right), & k_{1}-1>0,\\[2ex]
\dfrac{-k_{1}\ln\lambda_{1}}{\Gamma(\theta_{1}/\beta_{1})}\displaystyle\sum_{j=0}^{\infty}\dfrac{\Gamma(k_{1}-1+j)}{j!\,\Gamma(k_{1}-1)}\sum_{s=0}^{\infty}\dfrac{(-1)^{s}\Gamma(j+1)}{s!\,\Gamma(j-s+1)}\dfrac{1}{(\Gamma(\theta_{1}/\beta_{1}))^{s}}I\!\left(\dfrac{\theta_{1}}{\beta_{1}},s\right)\\
\quad-\dfrac{k_{1}}{\Gamma(\theta_{1}/\beta_{1})}\displaystyle\sum_{j=0}^{\infty}\dfrac{\Gamma(k_{1}-1+j)}{j!\,\Gamma(k_{1}-1)}\sum_{s=0}^{\infty}\dfrac{(-1)^{s}\Gamma(j+1)}{s!\,\Gamma(j-s+1)}\sum_{m=0}^{\infty}C_{s,m}\dfrac{\Gamma\!\left(\frac{\theta_{1}[1+s]}{\beta_{1}}+m\right)}{\beta_{1}(\Gamma(\theta_{1}/\beta_{1}))^{s}}\,\psi\!\left(\dfrac{\theta_{1}[1+s]}{\beta_{1}}+m\right), & k_{1}-1<0,\\[2ex]
-\ln\lambda_{1}+\dfrac{1}{\beta_{1}}\psi\!\left(\dfrac{\theta_{1}}{\beta_{1}}\right), & k_{1}-1=0.
\end{cases}
\]
In the same way as for \(I_{3}\) above,
\[
E\!\left(\ln\!\left(1-\frac{\Gamma\!\left(\frac{\theta_{1}}{\beta_{1}},\left(\frac{X^{-1}}{\lambda_{1}}\right)^{\beta_{1}}\right)}{\Gamma(\theta_{1}/\beta_{1})}\right)\right)
=\sum_{n=1}^{\infty}\sum_{s=0}^{\infty}\sum_{q=0}^{\infty}\frac{(-1)^{1+s+q}\Gamma(n+1)}{n\,q!\,s!\,\Gamma(n-s+1)}\left(\frac{s}{\lambda_{1}^{\beta_{1}}}\right)^{q}
\sum_{v_{1}=0}^{\infty}\cdots\sum_{v_{s}=0}^{\infty}\frac{\lambda_{1}^{-\theta_{1}s-\beta_{1}v_{1}-\cdots-\beta_{1}v_{s}}}{(\theta_{1}/\beta_{1}+v_{1})!\cdots(\theta_{1}/\beta_{1}+v_{s})!}\,
E\!\left(X^{-\theta_{1}s-\beta_{1}v_{1}-\cdots-\beta_{1}v_{s}-\beta_{1}q}\right),
\]
where \(E\!\left(X^{-\theta_{1}s-\beta_{1}v_{1}-\cdots-\beta_{1}v_{s}-\beta_{1}q}\right)\) follows from Eq. (7) with \(r=-(\theta_{1}s+\beta_{1}v_{1}+\cdots+\beta_{1}v_{s}+\beta_{1}q)\) and parameters \((\lambda_{1},\theta_{1},\beta_{1},k_{1})\), the argument of \(I(\cdot,\cdot)\) being \(\frac{\theta_{1}+\theta_{1}s+\beta_{1}v_{1}+\cdots+\beta_{1}v_{s}+\beta_{1}q}{\beta_{1}}\) in the cases \(k_{1}-1>0\) and \(k_{1}-1<0\), and
\(E\!\left(X^{-\theta_{1}s-\beta_{1}v_{1}-\cdots-\beta_{1}v_{s}-\beta_{1}q}\right)
=\Gamma\!\left(\frac{\theta_{1}+\theta_{1}s+\beta_{1}v_{1}+\cdots+\beta_{1}v_{s}+\beta_{1}q}{\beta_{1}}\right)\big/\left(\Gamma(\theta_{1}/\beta_{1})\lambda_{1}^{-\theta_{1}s-\beta_{1}v_{1}-\cdots-\beta_{1}v_{s}-\beta_{1}q}\right)\)
for \(k_{1}-1=0\).

Let
\[
I_{4}=E\!\left(\ln\!\left(1-\frac{\Gamma\!\left(\frac{\theta_{2}}{\beta_{2}},\left(\frac{X^{-1}}{\lambda_{2}}\right)^{\beta_{2}}\right)}{\Gamma\!\left(\frac{\theta_{2}}{\beta_{2}}\right)}\right)\right)
=E\!\left(\ln\!\left(\frac{\Gamma(\theta_{2}/\beta_{2})-\Gamma\!\left(\theta_{2}/\beta_{2},\left(X^{-1}/\lambda_{2}\right)^{\beta_{2}}\right)}{\Gamma(\theta_{2}/\beta_{2})}\right)\right).
\]
Since \(\Gamma(s,\Upsilon)+\gamma(s,\Upsilon)=\Gamma(s)\), then \(I_{4}=E\!\left(\ln\!\left(\frac{\gamma\left(\theta_{2}/\beta_{2},\left(X^{-1}/\lambda_{2}\right)^{\beta_{2}}\right)}{\Gamma(\theta_{2}/\beta_{2})}\right)\right)\). By using \(\ln(1-x)=-\sum_{n=1}^{\infty}\frac{x^{n}}{n}\), we get
\[
\ln\!\left(\frac{\gamma\!\left(\theta_{2}/\beta_{2},\left(x^{-1}/\lambda_{2}\right)^{\beta_{2}}\right)}{\Gamma(\theta_{2}/\beta_{2})}\right)
=-\sum_{n=1}^{\infty}\frac{1}{n}\sum_{s=0}^{\infty}\frac{(-1)^{s}\Gamma(n+1)}{s!\,\Gamma(n-s+1)}
\left(\frac{\gamma\!\left(\frac{\theta_{2}}{\beta_{2}},\left(x^{-1}/\lambda_{2}\right)^{\beta_{2}}\right)}{\Gamma\!\left(\frac{\theta_{2}}{\beta_{2}}\right)}\right)^{s}
=\sum_{n=1}^{\infty}\sum_{s=0}^{\infty}\frac{(-1)^{1+s}\Gamma(n+1)}{n\,s!\,\Gamma(n-s+1)}
\left(\left(x^{-1}/\lambda_{2}\right)^{\theta_{2}}e^{-\left(x^{-1}/\lambda_{2}\right)^{\beta_{2}}}\sum_{v=0}^{\infty}\frac{\left(\left(x^{-1}/\lambda_{2}\right)^{\beta_{2}}\right)^{v}}{(\theta_{2}/\beta_{2}+v)!}\right)^{s},
\]
and
\[
\left[\left(x^{-1}/\lambda_{2}\right)^{\theta_{2}}e^{-\left(x^{-1}/\lambda_{2}\right)^{\beta_{2}}}\sum_{v=0}^{\infty}\frac{\left(\left(x^{-1}/\lambda_{2}\right)^{\beta_{2}}\right)^{v}}{(\theta_{2}/\beta_{2}+v)!}\right]^{s}
=\sum_{v_{1}=0}^{\infty}\cdots\sum_{v_{s}=0}^{\infty}\frac{\lambda_{2}^{-\theta_{2}s-\beta_{2}v_{1}-\cdots-\beta_{2}v_{s}}}{(\theta_{2}/\beta_{2}+v_{1})!\cdots(\theta_{2}/\beta_{2}+v_{s})!}\,
e^{-s\left(x^{-1}/\lambda_{2}\right)^{\beta_{2}}}x^{-\theta_{2}s-\beta_{2}v_{1}-\cdots-\beta_{2}v_{s}} .
\]
Letting \(e^{-s\left(x^{-1}/\lambda_{2}\right)^{\beta_{2}}}=\sum_{q=0}^{\infty}\frac{\left(-s\left(x^{-1}/\lambda_{2}\right)^{\beta_{2}}\right)^{q}}{q!}\), we obtain
\[
I_{4}=\sum_{n=1}^{\infty}\sum_{s=0}^{\infty}\sum_{q=0}^{\infty}\frac{(-1)^{1+s+q}\Gamma(n+1)}{n\,q!\,s!\,\Gamma(n-s+1)}\left(\frac{s}{\lambda_{2}^{\beta_{2}}}\right)^{q}
\sum_{v_{1}=0}^{\infty}\cdots\sum_{v_{s}=0}^{\infty}\frac{\lambda_{2}^{-\theta_{2}s-\beta_{2}v_{1}-\cdots-\beta_{2}v_{s}}}{(\theta_{2}/\beta_{2}+v_{1})!\cdots(\theta_{2}/\beta_{2}+v_{s})!}\,
E\!\left(X^{-\theta_{2}s-\beta_{2}v_{1}-\cdots-\beta_{2}v_{s}-\beta_{2}q}\right),
\]
where
\[
E\!\left(X^{-\theta_{2}s-\beta_{2}v_{1}-\cdots-\beta_{2}v_{s}-\beta_{2}q}\right)=
\begin{cases}
\dfrac{\Gamma(k_{1}+1)}{\Gamma\!\left(\frac{\theta_{1}}{\beta_{1}}\right)\lambda_{1}^{-\theta_{2}s-\beta_{2}v_{1}-\cdots-\beta_{2}v_{s}-\beta_{2}q}}\displaystyle\sum_{w=0}^{\infty}\dfrac{(-1)^{w}}{w!\,\Gamma(k_{1}-w)}\sum_{l=0}^{\infty}\dfrac{(-1)^{l}\Gamma(w+1)}{l!\,\Gamma(w-l+1)}\dfrac{1}{\left(\Gamma\!\left(\frac{\theta_{1}}{\beta_{1}}\right)\right)^{l}}\,I\!\left(\dfrac{\theta_{1}+\theta_{2}s+\beta_{2}v_{1}+\cdots+\beta_{2}v_{s}+\beta_{2}q}{\beta_{1}},l\right), & k_{1}-1>0,\\[2ex]
\dfrac{k_{1}}{\Gamma\!\left(\frac{\theta_{1}}{\beta_{1}}\right)\lambda_{1}^{-\theta_{2}s-\beta_{2}v_{1}-\cdots-\beta_{2}v_{s}-\beta_{2}q}}\displaystyle\sum_{j=0}^{\infty}\dfrac{\Gamma(k_{1}-1+j)}{j!\,\Gamma(k_{1}-1)}\sum_{w=0}^{\infty}\dfrac{(-1)^{w}\Gamma(j+1)}{w!\,\Gamma(j-w+1)}\dfrac{1}{\left(\Gamma\!\left(\frac{\theta_{1}}{\beta_{1}}\right)\right)^{w}}\,I\!\left(\dfrac{\theta_{1}+\theta_{2}s+\beta_{2}v_{1}+\cdots+\beta_{2}v_{s}+\beta_{2}q}{\beta_{1}},w\right), & k_{1}-1<0,\\[2ex]
\dfrac{\Gamma\!\left(\dfrac{\theta_{1}+\theta_{2}s+\beta_{2}v_{1}+\cdots+\beta_{2}v_{s}+\beta_{2}q}{\beta_{1}}\right)}{\Gamma(\theta_{1}/\beta_{1})\,\lambda_{1}^{-\theta_{2}s-\beta_{2}v_{1}-\cdots-\beta_{2}v_{s}-\beta_{2}q}}, & k_{1}-1=0.
\end{cases}
\]
By substituting the above results in Eq. (9), we get the relative entropy of the Type
II EIGGD.
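The relative entropy in Eq. (9) can likewise be checked by one-dimensional quadrature of \(f_{1}\ln(f_{1}/f_{2})\). The sketch below is illustrative only; the two parameter sets are assumed values, not results from the paper.

```python
# Numerical check of D_KL(F1||F2) = integral of f1(x) ln(f1(x)/f2(x)) dx
# for two Type II EIGGD densities with assumed parameter sets.
import numpy as np
from scipy import integrate, special

def make_pdf(lam, theta, beta, k):
    a = theta / beta
    c = k * beta / (special.gamma(a) * lam ** theta)
    def pdf(x):
        z = (1.0 / (lam * x)) ** beta
        return c * x ** (-(theta + 1)) * np.exp(-z) * special.gammainc(a, z) ** (k - 1)
    return pdf

f1 = make_pdf(1.5, 4.0, 2.0, 2.5)   # (lambda1, theta1, beta1, k1) -- assumed values
f2 = make_pdf(1.2, 3.0, 2.0, 1.5)   # (lambda2, theta2, beta2, k2) -- assumed values

def kl_integrand(x):
    a, b = f1(x), f2(x)
    return a * np.log(a / b) if a > 0 and b > 0 else 0.0

dkl, _ = integrate.quad(kl_integrand, 0, np.inf, limit=200)
print(f"D_KL(F1||F2) = {dkl:.6f}")   # should be nonnegative
```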
Let
\[
I_{5}=E\!\left[1-\frac{\Gamma\!\left(\theta_{1}/\beta_{1},\left(X^{-1}/\lambda_{1}\right)^{\beta_{1}}\right)}{\Gamma(\theta_{1}/\beta_{1})}\right]^{k_{1}}
=E\!\left[\frac{\Gamma(\theta_{1}/\beta_{1})-\Gamma\!\left(\theta_{1}/\beta_{1},\left(X^{-1}/\lambda_{1}\right)^{\beta_{1}}\right)}{\Gamma(\theta_{1}/\beta_{1})}\right]^{k_{1}} .
\]
Since \(\Gamma(s,\Upsilon)+\gamma(s,\Upsilon)=\Gamma(s)\), then
\[
I_{5}=E\!\left[\frac{\gamma\!\left(\theta_{1}/\beta_{1},\left(X^{-1}/\lambda_{1}\right)^{\beta_{1}}\right)}{\Gamma(\theta_{1}/\beta_{1})}\right]^{k_{1}}
=E\!\left[1-\left(1-\frac{\gamma\!\left(\theta_{1}/\beta_{1},\left(X^{-1}/\lambda_{1}\right)^{\beta_{1}}\right)}{\Gamma(\theta_{1}/\beta_{1})}\right)\right]^{k_{1}} .
\]
By using \((1-z)^{b}=\sum_{s=0}^{\infty}\frac{(-1)^{s}\Gamma(b+1)}{s!\,\Gamma(b-s+1)}z^{s}\), we get
\[
\left[1-\left(1-\frac{\gamma\!\left(\theta_{1}/\beta_{1},\left(X^{-1}/\lambda_{1}\right)^{\beta_{1}}\right)}{\Gamma(\theta_{1}/\beta_{1})}\right)\right]^{k_{1}}
=\sum_{s=0}^{\infty}\frac{(-1)^{s}\Gamma(k_{1}+1)}{s!\,\Gamma(k_{1}-s+1)}\sum_{u=0}^{\infty}\frac{(-1)^{u}\Gamma(s+1)}{u!\,\Gamma(s-u+1)}
\left(\frac{\gamma\!\left(\theta_{1}/\beta_{1},\left(X^{-1}/\lambda_{1}\right)^{\beta_{1}}\right)}{\Gamma(\theta_{1}/\beta_{1})}\right)^{u},
\]
with
\[
\left(\frac{\gamma\!\left(\theta_{1}/\beta_{1},\left(X^{-1}/\lambda_{1}\right)^{\beta_{1}}\right)}{\Gamma(\theta_{1}/\beta_{1})}\right)^{u}
=\left(\frac{\left(\left(X^{-1}/\lambda_{1}\right)^{\beta_{1}}\right)^{\theta_{1}/\beta_{1}}}{\Gamma(\theta_{1}/\beta_{1})}\sum_{q=0}^{\infty}\frac{\left(-\left(X^{-1}/\lambda_{1}\right)^{\beta_{1}}\right)^{q}}{(\theta_{1}/\beta_{1}+q)\,q!}\right)^{u}
=\sum_{q_{1}=0}^{\infty}\cdots\sum_{q_{u}=0}^{\infty}(-1)^{q_{1}+\cdots+q_{u}}
\frac{\lambda_{1}^{-\theta_{1}u-\beta_{1}q_{1}-\cdots-\beta_{1}q_{u}}\,x^{-\theta_{1}u-\beta_{1}q_{1}-\cdots-\beta_{1}q_{u}}}{\left(\frac{\theta_{1}}{\beta_{1}}+q_{1}\right)\cdots\left(\frac{\theta_{1}}{\beta_{1}}+q_{u}\right)q_{1}!\cdots q_{u}!}
\,\frac{1}{\left(\Gamma(\theta_{1}/\beta_{1})\right)^{u}} .
\]
Therefore
\[
I_{5}=\sum_{s=0}^{\infty}\frac{(-1)^{s}\Gamma(k_{1}+1)}{s!\,\Gamma(k_{1}-s+1)}\sum_{u=0}^{\infty}\frac{(-1)^{u}\Gamma(s+1)}{u!\,\Gamma(s-u+1)}
\sum_{q_{1}=0}^{\infty}\cdots\sum_{q_{u}=0}^{\infty}
\frac{(-1)^{q_{1}+\cdots+q_{u}}\,\lambda_{1}^{-\theta_{1}u-\beta_{1}q_{1}-\cdots-\beta_{1}q_{u}}\,E\!\left(X^{-\theta_{1}u-\beta_{1}q_{1}-\cdots-\beta_{1}q_{u}}\right)}{\Gamma^{u}(\theta_{1}/\beta_{1})\,(\theta_{1}/\beta_{1}+q_{1})\cdots(\theta_{1}/\beta_{1}+q_{u})\,q_{1}!\cdots q_{u}!},
\]
where
\[
E\!\left(X^{-\theta_{1}u-\beta_{1}q_{1}-\cdots-\beta_{1}q_{u}}\right)=
\begin{cases}
\dfrac{\Gamma(k+1)}{\Gamma\!\left(\frac{\theta}{\beta}\right)\lambda^{-\theta_{1}u-\beta_{1}q_{1}-\cdots-\beta_{1}q_{u}}}\displaystyle\sum_{w=0}^{\infty}\dfrac{(-1)^{w}}{w!\,\Gamma(k-w)}\sum_{l=0}^{\infty}\dfrac{(-1)^{l}\Gamma(w+1)}{l!\,\Gamma(w-l+1)}\dfrac{1}{\left(\Gamma\!\left(\frac{\theta}{\beta}\right)\right)^{l}}\,I\!\left(\dfrac{\theta+\theta_{1}u+\beta_{1}q_{1}+\cdots+\beta_{1}q_{u}}{\beta},l\right), & k-1>0,\\[2ex]
\dfrac{k}{\Gamma\!\left(\frac{\theta}{\beta}\right)\lambda^{-\theta_{1}u-\beta_{1}q_{1}-\cdots-\beta_{1}q_{u}}}\displaystyle\sum_{j=0}^{\infty}\dfrac{\Gamma(k-1+j)}{j!\,\Gamma(k-1)}\sum_{w=0}^{\infty}\dfrac{(-1)^{w}\Gamma(j+1)}{w!\,\Gamma(j-w+1)}\dfrac{1}{\left(\Gamma\!\left(\frac{\theta}{\beta}\right)\right)^{w}}\,I\!\left(\dfrac{\theta+\theta_{1}u+\beta_{1}q_{1}+\cdots+\beta_{1}q_{u}}{\beta},w\right), & k-1<0,\\[2ex]
\dfrac{\Gamma\!\left(\dfrac{\theta+\theta_{1}u+\beta_{1}q_{1}+\cdots+\beta_{1}q_{u}}{\beta}\right)}{\Gamma\!\left(\frac{\theta}{\beta}\right)\lambda^{-\theta_{1}u-\beta_{1}q_{1}-\cdots-\beta_{1}q_{u}}}, & k-1=0.
\end{cases}
\]
Then, we have the stress-strength reliability model for the Type II EIGGD as follows:
\[
R=1-\sum_{s=0}^{\infty}\frac{(-1)^{s}\Gamma(k_{1}+1)}{s!\,\Gamma(k_{1}-s+1)}\sum_{u=0}^{\infty}\frac{(-1)^{u}\Gamma(s+1)}{u!\,\Gamma(s-u+1)}
\sum_{q_{1}=0}^{\infty}\cdots\sum_{q_{u}=0}^{\infty}
\frac{(-1)^{q_{1}+\cdots+q_{u}}\,\lambda_{1}^{-\theta_{1}u-\beta_{1}q_{1}-\cdots-\beta_{1}q_{u}}\,E\!\left(X^{-\theta_{1}u-\beta_{1}q_{1}-\cdots-\beta_{1}q_{u}}\right)}{\Gamma^{u}(\theta_{1}/\beta_{1})\,(\theta_{1}/\beta_{1}+q_{1})\cdots(\theta_{1}/\beta_{1}+q_{u})\,q_{1}!\cdots q_{u}!},
\tag{10}
\]
with \(E\!\left(X^{-\theta_{1}u-\beta_{1}q_{1}-\cdots-\beta_{1}q_{u}}\right)\) as given above.
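A stress-strength probability for two Type II EIGGD variables can also be estimated by simulation, which is useful as a rough check on a truncation of the series in Eq. (10). The sketch below is an assumption-laden illustration: the survival function used for inverse-transform sampling, \(S(x)=\left[\gamma(\theta/\beta,(x^{-1}/\lambda)^{\beta})/\Gamma(\theta/\beta)\right]^{k}\), is the one implied by the pdf written above, the parameter values are arbitrary, and the convention \(R=P(X_{2}<X_{1})\) is one common choice that should be matched to the definition underlying Eq. (10).

```python
# Monte Carlo sketch of a stress-strength probability for two Type II EIGGD variables,
# using inverse-transform sampling through the survival function implied by the pdf.
import numpy as np
from scipy import special

rng = np.random.default_rng(0)

def sample(lam, theta, beta, k, size):
    a = theta / beta
    u = rng.uniform(size=size)
    # solve P(a, z) = (1 - u)^(1/k), where P is the regularized lower incomplete gamma
    z = special.gammaincinv(a, (1.0 - u) ** (1.0 / k))
    return (1.0 / lam) * z ** (-1.0 / beta)

x1 = sample(1.5, 4.0, 2.0, 2.5, 200_000)   # "strength" sample (assumed parameters)
x2 = sample(1.2, 3.0, 2.0, 1.5, 200_000)   # "stress" sample (assumed parameters)
print(f"Monte Carlo estimate of P(X2 < X1) = {np.mean(x2 < x1):.4f}")
```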
4 Conclusion
References
1. Gupta, R. C., Gupta, P. L., & Gupta, R. D. (1998). Modeling failure time data by Lehman alternatives. Communications in Statistics - Theory and Methods, 27, 887–904.
2. Pu, S., Oluyede, B., Qiu, Y., & Linder, D. (2016). A generalized class of exponentiated modified Weibull distribution with applications. Journal of Data Science, 14, 585–614.
3. Ahmad, Z., Ampadu, C., Hamedani, G., Jamal, F., & Nasir, M. (2019). The new exponentiated T-X class of distributions: Properties, characterizations and application. Pakistan Journal of Statistics and Operation Research, XV(IV), 941–962.
4. Oluyede, B., Mashabe, B., Fagbamigbe, A., Makubate, B., & Wanduku, D. (2020). The exponentiated generalized power series family of distributions: Theory, properties and applications. Heliyon, 6, e04653, 1–16.
5. Olosunde, A., & Adekoya, T. (2020). On some properties of exponentiated generalized Gompertz-Makeham distribution. Indonesian Journal of Statistics and Applications, 4(1), 22–38.
6. Abid, S., & Kadhim, F. (2021). Doubly truncated exponentiated inverted gamma distribution. Journal of Physics: Conference Series, 1999(1), 012098.
7. Chipepa, F., Chamunorwa, S., Oluyede, B., Makubate, B., & Zidana, C. (2022). The exponentiated half logistic-generalized-G power series class of distributions: Properties and applications. Journal of Probability and Statistical Science, 20(1), 21–40.
8. Abid, S., & Jani, H. (2022). Two doubly truncated generalized distributions: Some properties. AIP Conference Proceedings, 2398, 060033.
9. Kullback, S., & Leibler, R. (1951). On information and sufficiency. Annals of Mathematical Statistics, 22(1), 79–86. https://doi.org/10.1214/aoms/1177729694
10. Gradshteyn, I. S., & Ryzhik, I. M. (2000). Table of integrals, series, and products (6th ed.). Academic Press.
11. Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27, 379–432.
A Comparative Study of Masi Stock
Exchange Index Prediction Using
Nonlinear Setar, MS-AR and Artificial
Neurones Network Models
Saoudi Youness, Falloul Moulay Mehdi, Hachimi Hanaa, and Razouk Ayoub
Abstract The aim of this paper is to examine the effectiveness of three nonlinear
econometric prediction models on the Casablanca stock exchange’s MASI index: the
Self-Exciting Threshold Autoregressive (SETAR), the Markov Switching Autore-
gressive Model (MS-AR) and the Artificial Neural Network (ANN) model. The time
frame under investigation is from January 1, 2002, to September 20, 2018. Nonlin-
earity tests are used to confirm the hypotheses of the study. Schwarz selection criteria
were also used to select the optimal delay. To choose the best prediction model, the
Mean Absolute Error Criterion, the Root Mean Square Error Criterion and the Mean
Absolute Percentage Error Criterion were used. The results of applying SETAR, MS-
AR and ANN models showed that the neural network model is the most optimal.
This model is followed by the Markovian model MS-AR since it has given better
results than the SETAR model. These results can be beneficial for financial market
traders to make good decisions regarding allocative portfolio and asset management
strategies.
1 Introduction
Over the past 20 years, interest in nonlinear models of time series has evolved signif-
icantly. The presence of nonlinearity in financial time series has important
consequences, especially with regard to the weak form of financial market efficiency.
The Threshold AutoRegressive (TAR) model was proposed by Tong
[1] and Tong and Lim [2]. Some of the most well-known nonlinear models include
Hamilton’s Markov Switching Autoregressive Model and the Self Exciting Threshold
AutoRegressive model. These three models are different from conventional linear
econometric models in that they assume that time series may behave differently under
various regimes.
Tong studied the SETAR model (Self-Exciting Threshold Autoregressive Model)
[4]. In this model, the change in regime is controlled by a piecewise function of the
values in the time series itself. Amiri [5], who demonstrated the power of nonlinear
models by contrasting the performance of the Markovian autoregressive switching
model (MS-AR) and linear model forecasting, is one of many studies that have
been carried out to evaluate the precision of financial and economic time series fore-
casting and modeling. Wen-Yi Peng, Ching-Wu Chu [6] compared the performance of
univariate methods for forecasting. This study showed that nonlinear models allowed
better modeling of macroeconomic series than linear models. The artificial neural
network (ANN) is a prediction method based on mathematical models of the brain,
and it allows modeling complex nonlinear relationships between response variables and
predictors; prediction through the ANN is classified among the second generation
of prediction models, as stated in Zhang [7].
This paper is structured as follows: Beginning with an introduction and the objec-
tive of the paper, the properties and econometric tests are studied in the second
section, the third section explained the estimation of models, namely, SETAR, MS-
AR and ANN. The last section is devoted to the comparative approach between these
three models in order to choose the most optimal model for forecasting.
The objective of this paper is to present and test the forecasting efficiency of two
nonlinear econometric models, namely SETAR and MS-AR, and the artificial neural
network model (ANN), in order to choose the best-performing model.
2 Methodology
The purpose is to study the methods and tests used to test and model the series of the
MASI index following the SETAR, the MS-AR and the ANN models and choose the
most optimal forecasting model.
This section contains descriptive statistics for the daily data for the period from
January 1, 2002, to September 20, 2018.
These statistics of MASI include:
– The standard deviation, mean,
– The Kurtosis, Skewness and the Jarque Berra,
– The econometric tests of stationarity, homoscedasticity, linearity and stability.
The data used in this paper consist of the daily MASI index downloaded from the
Casablanca stock exchange website, covering a historical period that extends from
01 January 2002 to 28 September 2018 with a number of 4176 observations.
Figure 1 describes the evolution of the MASI series on a sample of 4167
observations. This series is transformed into logarithmic difference to account for
nonstationarity in variance (Table 1).
The J.B. statistic shows that the normality null hypothesis is rejected, and further-
more, the series of MASI yield is leptokurtic. The series of yields is spread to the
left, as indicated by the negative skewness coefficient. This asymmetry might indi-
cate that the series isn’t linear. The Jarque and Bera test, whose P-value is less than
Table 3 Homoscedasticity
Series Q TR2
test
MASI 39,38a 465,83b
Q is the Breush Pagan statistic, TR2 of White’s test. a and b Rejec-
tion of the null hypothesis of homoscedasticity at the respective
thresholds of 1 and 5%
5% in relation to the Jarque and Bera statistic, confirms the non-normality of the
distribution of MASI yields.
T is the number of observations in the series and T^(1/4) is the typically used lag
(Table 2).
The t-DF statistic’s value, PP, should be compared to the critical values. At the 5%
threshold, the results for model 1, model 2 and model 3 were 1.95, −2.86 and 3.41,
respectively. The Philips-Perron test results demonstrate stationarity for the series in
the first difference and the presence of stationarity in the MASI series in level.
The results of the Breusch-Pagan and White homoscedasticity tests applied to the
MASI yield series are shown in Table 3.
The Breusch-Pagan test and the White test lead to the same conclusion, namely
that the null hypothesis of homoscedasticity is rejected, so the results are consistent
between the two tests. It is highly likely that the presence of an ARCH effect, which
is frequently observed in financial time series, is what caused the null hypothesis of
homoscedasticity to be rejected.
The i.i.d. series null hypothesis is tested against an unspecified alternative
hypothesis using the Brock, Dechert and Scheinkman (1987) BDS test. This test is
interesting because, in contrast to earlier tests, it can identify nonlinearity in yield
series. The guidelines provided by Brock et al. (1992) were adhered to in order to
apply the test: the values 0.5, 1 and 1.5 were used for the ratio ε/σ, and the embedding
dimension m ranged from 2 to 15. Table 4 displays the test's outcomes.
The conclusion that can be drawn from this test is that the assumption of inde-
pendence of returns is rejected. In other words, this confirms the nonlinearity of the
series. According to financial market theory, Casablanca's financial market is not
efficient in the sense of the weak form of market efficiency.
Figure 2 describes the Recursive Residue Graphs, according to the CUSUM test
below, we find that the recursive residues (in blue) are very close to zero, it is well
within the confidence interval of 5% (in red). We can, therefore, conclude that there
is no instability of the parameters over time.
This result can be confirmed using Square Recursive Residue Graphs as shown
below.
Figure 3 describes the Square Recursive Residue Graphs, according to the test
above, we find that the recursive residues (in blue) remain within the confidence interval of
5% (in red). We can, therefore, conclude that there is no instability of the parameters
over time.
For years, it has been recognized that most financial series have nonlinear dynamics,
asymmetries and multimodal distributions. Since it is impossible to account for these
phenomena from the usual autoregressive linear models of ARMA type, nonlinear
processes capable of reproducing these characteristics are necessarily used.
A SETAR(2,1,1) with two regimes and an autoregressive process AR(1) with d = 1
in each regime is as follows:
\[
X_{t}=\left(\phi_{1,0}+\phi_{1,1}X_{t-1}\right)\left[1-I\left(X_{t-1}>c\right)\right]+\left(\phi_{2,0}+\phi_{2,1}X_{t-1}\right)I\left(X_{t-1}>c\right)+\varepsilon_{t}
\tag{1}
\]
\[
q_{t}=X_{t-d}
\tag{2}
\]
I(.) denotes the indicator function, \(q_{t}\) is the threshold variable, and \(\phi_{1,0},\phi_{1,1},\phi_{2,0},\phi_{2,1}\) are the coefficients of the AR(1) process in each regime.
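The mechanics of Eq. (1) can be made concrete with a short simulation; the coefficients, threshold and noise scale below are assumed illustrative values, not estimates from the MASI data.

```python
# Simulation sketch of the two-regime SETAR(2;1,1) model in Eq. (1) with d = 1.
import numpy as np

rng = np.random.default_rng(42)
phi10, phi11 = 0.02, 0.30    # regime-1 intercept and AR(1) coefficient (assumed)
phi20, phi21 = -0.01, 0.70   # regime-2 intercept and AR(1) coefficient (assumed)
c, n = 0.0, 500              # threshold and sample size (assumed)

x = np.zeros(n)
eps = rng.normal(scale=0.05, size=n)
for t in range(1, n):
    if x[t - 1] > c:         # I(X_{t-1} > c) = 1 -> regime 2 dynamics
        x[t] = phi20 + phi21 * x[t - 1] + eps[t]
    else:                    # regime 1 dynamics
        x[t] = phi10 + phi11 * x[t - 1] + eps[t]

print(x[:5])
```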
Different variances for each of the w segments (regimes) are supported by the TAR
model. A restriction of the following form is used to stabilize the variance across the
various regimes:
\[
X_{t}=I_{1}\left(\alpha_{1,0}+\sum_{i=1}^{P_{1}}\phi_{1,i}X_{1,t-i}\right)+I_{2}\left(\alpha_{2,0}+\sum_{i=1}^{P_{2}}\phi_{2,i}X_{2,t-i}\right)+\cdots+I_{w}\left(\alpha_{w,0}+\sum_{i=1}^{P_{w}}\phi_{w,i}X_{w,t-i}\right)+e_{t}
\tag{3}
\]
A Markov Switching Autoregressive model MS-AR(P) can be written as
\[
X_{t}=\mu(S_{t})+\sum_{i=1}^{P}\phi_{i}X_{t-i}+\varepsilon_{t},
\]
with:
– The variables \(\phi_{1},\phi_{2},\ldots,\phi_{P}\) representing the coefficients of the AR(P) process.
– \(\varepsilon_{t}\) iid \((0,\sigma_{\varepsilon}^{2})\).
– The constant \(\mu(S_{t})\) equal to \(\mu_{1}\) if the process is in regime 1 \((S_{t}=1)\), \(\mu_{2}\) if the process is in regime 2 \((S_{t}=2)\), and \(\mu_{R}\) if the process is in regime R \((S_{t}=R)\).
The transition between regimes is governed by a Markov chain with transition probabilities \(p_{ij}=P(S_{t}=j\mid S_{t-1}=i)\). In the case of two regimes, \(S_{t}\) simply takes the values 1 and 2, and an example of this class of models with an AR(1) in both regimes is
\[
X_{t}=\mu(S_{t})+\phi X_{t-1}+\varepsilon_{t}.
\]
The ANN model is an intelligent model and is used to solve complex problems in
many applications such as optimization, prediction, modeling, etc. [8, 9] (Tables 7
and 8).
The ANN model has given better results than the two other nonlinear
models.
Table 7  Statistical properties' table

Measure              Learning     Validation
R-square             0.0070142    0.0036443
Log-likelihood       −9653.92     −4852.4
SSE                  0.1581246    0.076442
Sum of frequencies   2783         1392
To choose the best model, we can use model comparison criteria, these criteria are
numerous and play a very important role in econometrics. These criteria, which are
sought to be minimized, are based on forecast error. These criteria are: MAE, RMSE
and MAPE (Table 9).
Table 9  Comparison between SETAR, MS-AR and ANN

Measure   Method   MASI index
MAE       SETAR    0.005079
          MS-AR    0.005071
          ANN      0.005050
RMSE      SETAR    0.007526
          MS-AR    0.007520
          ANN      0.007410
MAPE      SETAR    175.0816
          MS-AR    168.7702
          ANN      150.5602
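The three criteria in Table 9 are computed directly from the forecast errors; a minimal sketch (with placeholder arrays rather than the actual MASI forecasts) is given below.

```python
# Forecast-error criteria used for model comparison: MAE, RMSE and MAPE.
# y_true and y_pred are assumed placeholder arrays for illustration only.
import numpy as np

y_true = np.array([0.012, -0.008, 0.005, 0.010])   # observed returns (placeholder)
y_pred = np.array([0.010, -0.006, 0.007, 0.008])   # model forecasts (placeholder)

err = y_true - y_pred
mae = np.mean(np.abs(err))
rmse = np.sqrt(np.mean(err ** 2))
mape = np.mean(np.abs(err / y_true)) * 100          # in percent; undefined if y_true = 0

print(f"MAE = {mae:.6f}, RMSE = {rmse:.6f}, MAPE = {mape:.2f}%")
```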
5 Conclusion
The study investigated the performance of the SETAR, MS-AR and ANN models in
modeling and forecasting the MASI index from 2002 to 2018. Demonstrating that the
MASI index used in the study was nonlinear was one of the study's goals. Two
tests confirmed that the assumption of return independence is clearly rejected. In
other words, the series is characterized by nonlinearity; according to financial market
theory, Casablanca's financial market is therefore inefficient. According to the CUSUM test
results, there is no parameter instability over time.
The MAE, the RMSE and the MAPE were used in the study to determine which
model performed the best. Overall, the findings demonstrated that, in most
situations, ANN modeling is superior to SETAR and MS-AR models.
The discussion of the findings leads to the following conclusions:
– The closing price of the MASI index is nonlinear and does not alter structurally.
– The very small error measures show that the different predictive models
estimated for the MASI closing prices are robust, efficient and reliable for
forecasting.
The ANN model has given good results compared to the two other nonlinear
models.
The forecasting comparison of the ANN model and genetic algorithms will form
the basis of the paper’s upcoming study, which aims to gauge how well predictions
are made.
References
1. Tong, H. (1978). On a threshold model in pattern recognition and signal processing. In C. Chen
(Ed.), Pattern recognition and signal processing (pp. 575–586). Sijhoff and Noordhoff.
2. Tong, H., & Lim, K. S. (1980). Threshold autoregression, limit cycles and data. Journal of the
Royal Statistical Society: Serie B, 42, 245–292.
3. Hamilton, J. D. (1989). A new approach to the economic analysis of nonstationary time series
and the business cycle. Econometrica, 57, 357–384.
4. Tong, H. (1983). Threshold models in nonlinear time-series analysis. Springer.
5. Amiri, E. (2010). Forecasting GDP, Growth rate with nonlinear models. In 1st International
Conference of Econometrics Methods and Applications (pp. 1–18).
6. Peng, W.-Y., & Chu, C.-W. (2009). A comparison of univariate methods for forecasting container
throughput volumes. Mathematical and Computer Modelling, 50, 1045–1057. https://doi.org/
10.1016/j.mcm
7. Zhang, H. -T., Xu, F. -Y., & Zhou, L. (2010). Artificial neural network for load forecasting in smart
grid. In 2010 International Conference on Machine Learning and Cybernetics (pp. 3200–3205).
https://doi.org/10.1109/ICMLC.2010.5580713
8. Medeiros, M. C., & Veiga, A. (2005). A flexible coefficient smooth transition time series model.
IEEE Transactions on Neural Networks, 16(1), 97–113. https://doi.org/10.1109/TNN.2004.
836246
9. Nurunnahar, S., Talukdar, D. B., Rasel, R. I., & Sultana, N. (2017). A short term wind speed
forcasting using SVR and BP-ANN: A comparative analysis. In 2017 20th International Confer-
ence of Computer and Information Technology (ICCIT) (pp. 1–6). https://doi.org/10.1109/ICC
ITECHN.2017.8281802
Application of Differential Transform
Method for Solving Some Classes
of Singular Integral Equations
Abstract In this paper, we have used differential transform method for solving
a class of singular integral equations. The methods provide solutions in terms of
convergent series with easily computable components. The aim of this article is
to introduce the Differential Transform Method (DTM) as efficient tools to solve
different kinds of singular integral equations. The solution is considered as an infinite
series expansion that converges rapidly to the exact solution. It is shown that DTM
is more effective and powerful technique and provides the solution in a rapidly
convergent series with components that are computed both elegantly and accurately.
1 Introduction
The concept of the differential transform was first proposed by Zhou (1986), and
its main application was to solve both linear and non-linear initial value problems
in electric circuit analysis. The classical Taylor's series method is one of the earliest
analytic techniques to solve many problems, specially ordinary differential equations.
However, since it requires a lot of symbolic calculation for the derivative of functions,
it takes a lot of computational time for higher derivatives. The differential transform
method (DTM) is an iterative procedure for obtaining analytical Taylor series solu-
tions of differential equations. There are many applications of DTM in literature.
Ayaz [1] used differential transform method for the solutions of a system of differ-
ential equations. Arikoglu and Ibrahim [2] obtain the solution of boundary value
problems for integro-differential equations by using differential transform method.
Odibat [3] used differential transform method for solving Volterra integral equation
with separable kernels. Abdulkawi [4] used differential transform method for the
numerical solution of Cauchy singular integral equations of the first kind for two
special cases for which the behaviors of the function at the endpoints are given.
Suresh and Piriadarshani [5] used differential transformation method for the solu-
tion of various kinds of Riccati differential equation. George and Sivakumar [6] used
differential transform method to solve integral and integro-differential equations.
Ahmad et al. [7] used modified differential transform method for solving classes of
integral and integro-differential equations. Mondal and Mandal [8] used differential
transform method to find the numerical solution of a hypersingular integral equation
of second kind and a Cauchy-type singular integro-differential equation.
Here, we have used differential transform method for the approximate numer-
ical solutions of some classes of singular integral equations. The integral equations
considered are Cauchy-type singular integral equations of first kind and a simple
hypersingular integral equation of first kind. For the Cauchy type singular integral
equations, we present approximate solution for the case when the solution is bounded
at both the endpoints.
We have presented the method with some illustrative examples whose exact solu-
tions are known. Numerical solution of each equation based on the exact and approx-
imate solutions is compared, and it is shown that the proposed method works well
and possesses good accuracy.
Here, we have used differential transform method to find the numerical solution
of some classes of singular integral equations. We can see that by using differential
transform method, we can easily get the numerical solution of those integral equations
without any computational difficulties. Also, we have given some examples to show
the accuracy of the proposed methods. Results reveal that the proposed method works
well and has good accuracy. To solve some singular integral equations by using some
simple technique is our main objective.
The differential transform of a function f(x) is defined as
\[
F(k)=\frac{1}{k!}\left[\frac{d^{k}f(x)}{dx^{k}}\right]_{x=x_{0}},
\]
where f(x) is the original function and F(k) is the transformed function. The
differential inverse transform of F(k) is defined as
\[
f(x)=\sum_{k=0}^{\infty}F(k)\,(x-x_{0})^{k}.
\]
This implies that the concept of differential transform is derived from the Taylor
series expansion, but the method does not evaluate the derivatives symbolically.
However, successive order derivatives are calculated by an iterative way, which are
described by the transformed equations of the original function.
In real applications, the function f(x) is expressed by a finite series and can be
written as
\[
f(x)=\sum_{k=0}^{n}F(k)\,(x-x_{0})^{k}.
\]
Theorem 1 If f (x) = g(x) ± h(x), then F(k) = G(k) ± H (k), where G(k) and
H (k) are the differential transform of g(x) and h(x), respectively.
Theorem 3 If f (x) = ag(x), then F(k) = aG(k), where a is a constant and G(k)
is the differential transform of g(x).
Theorem 4 If \(f(x)=g(x)h(x)\), then \(F(k)=\sum_{l=0}^{k}G(l)H(k-l)\), where G(k) and
H(k) are the differential transforms of g(x) and h(x), respectively.
Theorems 1, 2, 3 and 4 can be deduced from the definition of differential transform
method assuming that x0 = 0.
The following theorems are now proved.
Theorem 5 If \(g(x)=\int_{-1}^{1}\frac{\sqrt{1-t^{2}}\,\phi(t)}{t-x}\,dt\), \(-1<x<1\), then the differential
transform of g is
\[
G(d)=-\pi\Phi(0)\,\delta(d-1)+\sum_{k=1}^{N}\Phi(k)\left[-\pi\,\delta(d-k-1)+\sum_{i=0}^{k-1}\frac{1+(-1)^{i}}{4}\,\frac{\Gamma\!\left(\frac{1}{2}\right)\Gamma\!\left(\frac{i+1}{2}\right)}{\Gamma\!\left(\frac{i+4}{2}\right)}\,\delta(d-k+i+1)\right],\qquad N\to\infty .
\]

Proof Writing \(\phi(t)=\sum_{k=0}^{\infty}\Phi(k)\,t^{k}\), we have
\[
g(x)=\sum_{k=0}^{\infty}\Phi(k)\int_{-1}^{1}\frac{\sqrt{1-t^{2}}\,t^{k}}{t-x}\,dt
\]
and
\[
\int_{-1}^{1}\frac{\sqrt{1-t^{2}}\,t^{j}}{t-x}\,dt=-\pi x^{j+1}+\sum_{i=0}^{j-1}\frac{1+(-1)^{i}}{4}\,\frac{\Gamma\!\left(\frac{1}{2}\right)\Gamma\!\left(\frac{i+1}{2}\right)}{\Gamma\!\left(\frac{i+4}{2}\right)}\,x^{j-i-1},\qquad -1<x<1,\quad j=1,2,3,\ldots
\tag{2.2}
\]
so that
\[
g(x)=-\pi\Phi(0)\,x+\sum_{k=1}^{\infty}\Phi(k)\left[-\pi x^{k+1}+\sum_{i=0}^{k-1}\frac{1+(-1)^{i}}{4}\,\frac{\Gamma\!\left(\frac{1}{2}\right)\Gamma\!\left(\frac{i+1}{2}\right)}{\Gamma\!\left(\frac{i+4}{2}\right)}\,x^{k-i-1}\right].
\]
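The closed-form identity (2.2) can be spot-checked numerically against a direct principal-value quadrature; the sketch below is illustrative (the test values of j and x are arbitrary) and relies on scipy's Cauchy-weight quadrature for the singular integral.

```python
# Numerical spot-check of the Cauchy principal-value identity (2.2).
import numpy as np
from scipy import integrate
from scipy.special import gamma

def closed_form(j, x):
    # right-hand side of (2.2)
    total = -np.pi * x ** (j + 1)
    for i in range(j):
        total += ((1 + (-1) ** i) / 4) * gamma(0.5) * gamma((i + 1) / 2) \
                 / gamma((i + 4) / 2) * x ** (j - i - 1)
    return total

j, x = 3, 0.4
# weight='cauchy' computes the principal value of  integral f(t)/(t - wvar) dt
pv, _ = integrate.quad(lambda t: np.sqrt(1 - t ** 2) * t ** j, -1, 1,
                       weight='cauchy', wvar=x)
print(pv, closed_form(j, x))   # the two values should agree closely
```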
Theorem 6 If \(h(x)=\int_{-1}^{1}\frac{\sqrt{1-t^{2}}\,\phi(t)}{(t-x)^{2}}\,dt\), \(-1\le x\le 1\), where the integral is in the
sense of the Hadamard finite part of order 2, then the differential transform of h is
\[
H(d)=-\pi\sum_{k=0}^{N}\Phi(k)(k+1)\,\delta(d-k)+\sum_{k=1}^{N}\Phi(k)\sum_{i=0}^{k-1}\frac{1+(-1)^{i}}{4}\,\frac{\Gamma\!\left(\frac{1}{2}\right)\Gamma\!\left(\frac{i+1}{2}\right)}{\Gamma\!\left(\frac{i+4}{2}\right)}(k-i-1)\,\delta(d-k+i+2),\qquad N\to\infty .
\]

Proof We may write
\[
h(x)=\frac{d}{dx}\int_{-1}^{1}\frac{\sqrt{1-t^{2}}\,\phi(t)}{t-x}\,dt,
\]
where the integral is in the sense of the Cauchy principal value. Then by using (2.1), one obtains
\[
h(x)=\frac{d}{dx}\left[-\pi\Phi(0)x+\sum_{k=1}^{\infty}\Phi(k)\left(-\pi x^{k+1}+\sum_{i=0}^{k-1}\frac{1+(-1)^{i}}{4}\,\frac{\Gamma\!\left(\frac{1}{2}\right)\Gamma\!\left(\frac{i+1}{2}\right)}{\Gamma\!\left(\frac{i+4}{2}\right)}\,x^{k-i-1}\right)\right]
\]
\[
=-\pi\Phi(0)+\sum_{k=1}^{\infty}\Phi(k)\left[-\pi(k+1)x^{k}+\sum_{i=0}^{k-1}\frac{1+(-1)^{i}}{4}\,\frac{\Gamma\!\left(\frac{1}{2}\right)\Gamma\!\left(\frac{i+1}{2}\right)}{\Gamma\!\left(\frac{i+4}{2}\right)}(k-i-1)\,x^{k-i-2}\right],
\]
and taking the differential transform of this expression gives the stated result.
3 Illustrative Examples
In order to illustrate the advantage and the accuracy of the differential transform
method for solving some singular integral equations, we now give some illustrative
examples.
Approximate values of φ(t) at t = 0, 0.2, 0.4, 0.6, 0.8 are obtained as before with
appropriate modification and are shown in Table 1 together with exact values. The
approximate and exact values coincide.
Table 1  Comparison of the numerical solution obtained by the present method with exact values

t                        0           0.2         0.4         0.6         0.8
φ(t) [present method]    −1.11408    −1.31239    −1.56487    −1.78661    −1.75936
φ(t) [exact values]      −1.11408    −1.31239    −1.56487    −1.78661    −1.75936
Table 2  Comparison of the numerical solution obtained by the present method with exact values

t                        0    ±0.2         ±0.4         ±0.6     ±0.8
φ(t) [present method]         ∓0.391918    ∓0.733212    ∓0.96    ∓0.96
φ(t) [exact values]      0    ∓0.391918    ∓0.733212    ∓0.96    ∓0.96
\[
\frac{1}{\pi}\int_{-1}^{1}\frac{\sqrt{1-t^{2}}\,\psi(t)}{(t-x)^{2}}\,dt=4x,\qquad -1\le x\le 1 .
\tag{3.5}
\]
4 Conclusion
In this paper, differential transform method is used to obtain the numerical solution
of some classes of singular integral equations. To show the efficiency of the method,
we have considered some examples, where we can see that the method works well
and has good accuracy. From the results, it is seen that the solutions are identical
to the exact solutions for all considered examples, which show that the DTM is a
reliable tool for the solution of singular integral equations. This method can be further
used to solve different types of linear and non-linear mathematical equations. The
computational difficulties are very less in this method.
The present paper deals with the solution of some integral equations with singular
kernels. Although several known methods of solution for the integral equations are
already existing in the literature, yet attempts are being made to obtain easier and
faster methods of solution. Differential transform method has been utilized here to
obtain approximate solution of some classes of singular integral equations. Some
illustrative examples are given to show the validity and applicability of the proposed
method. Numerical results reveal that the proposed method works well and has good
accuracy.
Acknowledgements All of the authors are grateful to Swami Vivekananda University for providing
the facilities to carrying out this research work. We would like to express our sincere gratitude to
Mr. Saurabh Adhikari and Mr. Abhishek Dhar for their invaluable guidance and support throughout
the research process.
References
Training Elementary Teachers in Vietnam by Blended Learning Model

Abstract This study focuses on determining the ratio between face-to-face learning
and online learning and points out the main factors affecting the training process
of primary school teachers in Vietnam according to the teaching model combined
blended learning. The authors conducted a survey using a questionnaire and
conducted in-depth interviews with lecturers who are teaching at universities that
train primary school teachers in many provinces and cities across the country. The
results show that the views on the ratio between face-to-face and online learning
in training units are different. Many lecturers believe that face-to-face teaching is
favorable for training professional skills for primary school teachers, so the online
learning should be limited in the training process. However, some lecturers believe
that information technology is becoming increasingly important in education. There-
fore, blended learning is an inevitable trend in teacher training. When discussing
the factors affecting the teaching process by blended learning model, most of the
lecturers think that technological factors and the skills of teachers and learners in
using information technology are important roles. Finally, the study shows the most
suitable ratio between face-to-face and online learning to make the process of training
primary teachers under the blended teaching model more effective and, at the same
time, offers a few suggestions to help lecturers improve the effectiveness of the
blended learning process.
1 Introduction
The rapid and global digital transformation has impacted many fields of society,
including education. Education trends need to change towards smart, modern, agile,
and less costly. To achieve that, education needs new approaches and innovations in
teaching content, methods, and forms. The industrial revolution 4.0 will drastically
change human resource requirement, industry structure, and qualifications. This rapid
transformation poses a problem for education, which is to train high-quality human
resources, capable of adapting and meeting the needs of society. In addition, it is
very important to determine the appropriate teaching content for the new context,
focusing on knowledge and skills related to perception, critical thinking, creative
work ability and skills—physical skills, social skills.
The 4.0 revolution also has a strong impact on teaching methods and organization.
This is necessary for learners to quickly access new technology. Classes in the tradi-
tional form are no longer appropriate, but need the support of technology devices,
through online classes. Teachers need to use many modern means to control the
quality of information, create positive interactions, and effectively support learners
in the context of rapidly increasing knowledge volume. In particular, the teacher also
plays the important role of catalyzing, coordinating, and guiding learners to grasp
new needs and trends as well as equip them with the necessary tools for self-study
and self-training necessary professional skills. The arrival of digital technology in
countries around the world also requires appropriate forms of teaching organiza-
tion. Learners need to have interactive, additive, and independent learning skills.
Assessments also need to be changed to create consistency between elements in
teaching. Besides assessing theoretical knowledge, it is necessary to evaluate trained
skills and learners’ competence in professional fields. The industrial revolution 4.0
also changes training methods with schools needing to increase the introduction of
virtual, simulated, and digitalized training models for school administration. Educa-
tional institutions gradually shift from “passive” training to training according to
society’s orders, closely linking educational institutions with enterprises, or forming
enterprise training institutions to share the common resources.
In Vietnam, the fundamental and comprehensive reform of education and training
with the goal of developing learners’ quality and competency is being promoted.
Resolution No. 29-NQ/TW has emphasized very clearly “Strongly shifting the educa-
tional process from mainly equipping knowledge to developing comprehensively
competency and quality of learners. Learning with practice; theory with practice”.
Thus, our Party has clearly defined “qualities” and “competency” as the core elements
that need to be formed for learners. Directive 16/CT-TTg dated May 4, 2017 of the
Prime Minister stated: “The 4th industrial revolution with the development trend
is based on the highly integrated foundation of the digitized connection system—
Physics—Biology with the breakthrough Internet of things and artificial intelligence
are fundamentally changing the world’s production” [1]. Therefore, it is required
to drastically change educational policies, contents, and methods in order to create
modern people capable of receiving new production technology trends. This trend
also poses challenges for teacher training in Vietnam. In the context of integration
and globalization, the responsibility of primary school teachers is getting bigger and
bigger. Teachers are a decisive factor to the quality of education, if not done well in
teacher training, all educational programs will fail. Therefore, the work of fostering
and training primary school teachers to meet the new requirements of the country
has become very important and urgent.
The general education program 2018 with many new points, especially the orien-
tation and requirement to switch from equipping knowledge content to developing
learners’ competency and qualities by teaching methods and forms [2]. Through
active and integrated teaching, differentiated teaching, and creative experiences, the
competency of primary school teachers also faces new challenges. The amount of
time is almost unchanged, in which the content of knowledge changes and increases
continuously, so if it is only direct teaching, it is very difficult to convey all the
knowledge units to students. Combining online and face-to-face learning is consid-
ered a useful solution to this problem. Students of primary education have their
own characteristics compared with students of other pedagogical disciplines, they
are the subjects that need to be equipped with multi-disciplinary knowledge: natural
sciences, social sciences, life skills … to be able to undertake teaching many subjects
in primary schools today. Therefore, if designing online courses, learning resources
can be accessed remotely without having to go to school during the entire learning
process, it will help the learning process become easier and more effective.
In this study, we focused on answering the following two questions:
(1) Factors affecting the implementation of blended learning in teacher training in
Vietnam?
(2) Is the ratio between face-to-face teaching and online teaching appropriate and
highly effective for teacher training in Vietnam?
science fields in primary school, including: Vietnamese, math, natural science, and
social science to research and teaching in primary school.
*Block of industry knowledge: A training major is a collection of in-depth
specialized knowledge and skills of a specific training discipline. Helping students
to flexibly and effectively apply teaching methods, forms of teaching organization,
teaching means, methods of testing and assessing students in teaching subjects of
the primary education program which meet the requirements of innovation in the
direction of developing students’ quality and competency in accordance with actual
conditions.
Creative learners apply modern ideas, strategies, and teaching methods in teaching
one of the important fields in primary school, including: Vietnamese, Mathematics,
Natural Science and Social Science in elementary school, and STEAM learning and
education.
*Pedagogical skills knowledge block and graduation: Including pedagogical
skills training courses, practical time, internships at primary schools, and related
modules to complete the training program.
The program is designed to last for 4 years with many internships and internships
at practical schools. Currently, most schools choose to train in the form of credit
learning instead of the annual system as in the previous period.
In terms of training methods, most universities currently train students in Primary
Education in the direct form, focusing on the lecture hall. Some schools apply online
learning in some special cases. However, the regulation of the duration between face-
to-face and online learning is not flexible and clear, which sometimes makes teachers
feel confused when applying and the level of effectiveness and synchronization is
not high.
It is easy to see that there are many challenges posed in the training of primary
school teachers in the context of social changes, especially the development of tech-
nology. Direct full-time classes sometimes do not solve all the problems of time and
volume of knowledge. Infrastructure in some training units has not met the require-
ments. It will be very difficult when the number of learners is large in a small space.
Pressure also comes from teachers when they have to transmit large amounts of infor-
mation in a specified period of time. Therefore, the new context requires training units
to find a new way to help learners and teachers reduce pressure when performing
tasks. Blended learning with proven benefits is considered a useful solution, in line
with the general trend of the times, which will help the training of primary school
teachers become effective and quality.
There are many different views on blended learning. Blended learning is an increasingly
common term used to describe an approach in education that combines online materials
and online interaction with traditional face-to-face classroom teaching methods,
creating a new hybrid teaching model.
5 Research Method
This study aims to propose a model of training primary school teachers in the form of
blended learning in Vietnam. The study was conducted to answer the main research
questions:
What are the main factors that determine the success of primary school teacher
training in blended learning in Vietnam?
What is the most effective ratio between face-to-face learning and online learning
in blended teaching?
To achieve this goal, the study used quantitative and qualitative research methods.
In which, the questionnaire is considered as the main tool to collect data on the
factors affecting the success of blended learning in training primary school teachers.
In-depth interviews with a group of lecturers and experts who directly teach students
of Primary Education in universities will help find a reasonable ratio between face-
to-face learning and online learning.
The survey results were collected, processed, and analyzed using SPSS software
with descriptive statistics. In addition, the research team also used Excel and Amos
software to support the processing and obtain the most accurate results.
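As a hedged sketch of the kind of descriptive statistics produced at this step, the snippet below computes mean ratings and standard deviations for Likert-5 items; the data frame, item codes and values are hypothetical and only illustrate the procedure, not the actual survey data.

```python
# Descriptive statistics (mean, standard deviation) for Likert-5 survey items.
# The responses and the item names (based on the factor codes used in the paper)
# are hypothetical placeholders.
import pandas as pd

responses = pd.DataFrame({
    "YT3_technology": [5, 4, 5, 3, 4],
    "YT7_pedagogy":   [4, 4, 5, 4, 3],
    "YT4_context":    [2, 3, 2, 3, 2],
})
summary = responses.agg(["mean", "std"]).T.rename(
    columns={"mean": "average (1-5)", "std": "std. deviation"})
print(summary)
```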
The in-depth interview method is used to exploit teachers’ thoughts and perceptions
about the “golden ratio” between face-to-face learning and online learning in the
blended teaching model, because this issue is difficult to investigate by means of
questionnaires. Interviews were conducted after the questionnaire survey, and
teachers were provided with the results of the analysis of the importance of
factors affecting the quality of blended teaching (data source A). The interview
content focused on the practice of training primary teachers under the blended
teaching model at universities in Vietnam that are training students in Primary Educa-
tion, how to implement blended teaching and ratio between face-to-face teaching
and online learning. In addition, the interviews also explored technology elements
commonly used in the training process. The research team interviewed 58 lecturers
who are teaching at primary teacher training universities in all three regions, including
teachers working in the city (33 teachers, accounting for 56.9% of the total) and
mountainous and rural areas (25 teachers, accounting for 43.1%). Male gender
lecturers interviewed were 27 teachers (accounting for 46.6%), female lecturers were
31 lecturers (accounting for 53.4%). The lecturers interviewed are all experienced
in teaching primary school teacher training programs, with 2–10 years of combined
teaching experience. Thirty different interviews were conducted, of which 15 were
face-to-face interviews conducted directly by the second or third author, while the
remaining interviews were conducted online via the Zoom Meeting platform by
the authors. The authors carefully recorded the content of each interview. Interviewees
are encouraged to share in-depth about blended teaching, what is the most effective
ratio between face-to-face learning and online learning? And how to apply them
when teaching. The data of the entire interview are stored and analyzed in detail to
give the most accurate results.
7 Research Results
According to the statistic in Table 1, the above survey results show that most of the
factors are identified as important and very important. It can be affirmed that the
above factors all have a significant impact on the success of the Blended learning
teaching process. “Student” is the main and most important factor when it has an
average value of 4.43/5 (according to the Likert-5 scale). This is explained because
the learner is the subject, the center of the teaching process. The active, proactive
self-discipline will make the lesson interesting and effective, so teachers always value
the process of actively absorbing from students.
The effectiveness of blended learning is also greatly influenced by assistive tech-
nology (YT3), teacher’s pedagogy (YT7), interaction frequency (YT8) and teaching
content (YT10). Technology is considered the backbone, a prerequisite element when
teaching online.
According to Fig. 1, technology includes a system of intelligent tools,
software, and intelligent equipment to support both teachers and learners. Univer-
sities in Vietnam are increasingly focusing on the application and implementation
of advanced technology in training and teaching. Some popular software in online
teaching such as Zoom meeting, Google Meet, Microsoft Teams, combined with the
LMS online learning system, online learning resources in the form of text, video,
and voice are effective solutions to help the school operates and improves training
efficiency.
According to Fig. 2, however, no matter how good the technology
is, without a suitable teaching method, the quality of the class will definitely not
meet expectations. Therefore, teaching methods play the role of a “bridge” to guide
and bring advanced technologies to learners, helping learners to gain knowledge
and achieve learning goals. At the same time, good pedagogy also helps students
discover their own potential. Besides, the interaction of students and teachers is
also an indispensable factor in the lessons, only regular exchanges help learners to
deeply understand the knowledge they have learned, believe in the true theories.
Interaction (YT8) in the online environment has its own characteristics, sometimes
even more difficult than face-to-face learning. However, if the enthusiasm, effort
and self-discipline from the learners can be mobilized, this factor can be completely
improved. The greater the frequency of interaction, the more effective the class will
be.
According to Fig. 3, social context (YT4) was assessed to have the least
influence on the quality of blended learning with an average value of 2.47/5. Teachers
believe that this factor only changes the way of learning between students and the
teaching method of teachers. In the learning process, this is not a problem that greatly
affects the quality of teaching. The changing context requires that teachers also need
to change pedagogical methods, update technology and knowledge regularly to meet
professional requirements. In Vietnam, the COVID-19 pandemic has had a huge
impact on all industries, including education. Typically, there are times when schools
have to switch to online teaching completely to ensure the safety of learners. It is also
from that that teaching combined between face-to-face and online forms is interested,
spread and promoted at all levels.
Based on data source A, the research team interviewed 58 lecturers teaching at primary teacher training universities in all three regions, including teachers working in cities (33 teachers, accounting for 56.9%) and in mountainous and rural areas (25 teachers, accounting for 43.1%). Of the lecturers interviewed, 27 were male (46.6%) and 31 were female (53.4%). All of the interviewed lecturers are experienced in teaching primary school teacher training programs, with 2–10 years of blended teaching experience. The lecturers were also provided with the results of survey A and were asked to share initial information about the training programs and blended learning methods applied at their schools.
According to Figs. 4 and 5, most teachers pointed out that there is no consistency in the form of teaching during implementation: face-to-face teaching is mandatory, and only in special cases (an epidemic, an unsafe working environment, …) is online teaching applied. Additional online courses are designed on LMS software, providing learning resources to support students' learning process. The training programs either do not specify, or specify only unclearly, the division of time between online and face-to-face learning. The majority of the training units interviewed said that instructors were asked to teach with a common online-to-face-to-face ratio of 20:80 or 30:70 across the whole training program, with no division of this proportion within each subject or part of a subject. This creates passivity for lecturers and learners, because it is difficult to determine when, and with what content, online learning should be deployed in order to achieve high efficiency. This will be a
huge challenge for schools that have a system of primary school teacher training in
particular and teacher training in general.
Regarding the first research question (what are the main factors that determine the success of training primary school teachers through blended learning in Vietnam?), the results show that all of the above factors affect the quality of primary school teacher training in this form. This is understandable because these factors affect many different aspects of the teaching process. Furthermore, there is a reciprocal relationship between them: changing technology leads to a change in pedagogy, and resources for the learning process are also designed in formats suited to the technology. In addition, choosing which knowledge content to bring into teaching is also a big challenge in a context where knowledge is increasing continuously and rapidly.
The main factors that have the greatest influence on the effectiveness of the lesson
come from the learner, the teacher and the tools and means of support throughout
the learning process. Supporting tools and means are understood as elements of technology, resources, learning materials, and so on, in which technology plays the role of the foundation of the blended teaching process, especially for the online form of learning. The level of interaction between teacher and learner is an indicator of whether the lesson is successful: more interaction shows that the class is interesting and attracts the participation of a large number of learners. The duration of each lesson should be adjusted to suit the physiological characteristics of the learners, avoiding sessions so long that they create fatigue. Most teachers believe that if the above factors can be improved, the effectiveness of the teaching process will be enhanced.
Thus, it can be seen that training teachers in the form of blended learning is entirely appropriate in the current context. With the best preparation, improving the quality of the above eleven factors and determining a reasonable ratio between the face-to-face and online forms will solve the problem of the effectiveness of the teaching and learning process.
For the second question, what is the most effective ratio between face-to-face learning and online learning in blended teaching? The interview results show that there is a significant difference between the in-person and online teaching ratios at different universities. This is explained mostly by the content of the program and the training method of each unit; it is also related to the specificity of some subjects, so a uniform ratio of face-to-face to online learning cannot be applied to all courses. The interviews also showed that blended learning has been implemented by schools over the past few years but only really took off when society was affected by the COVID-19 pandemic. Thus, synchronization is a factor that needs further improvement across units. Through collecting and processing this information, the research team found that the most reasonable ratio between online and face-to-face learning is 30:70, applied to each module. Content related to practice is prioritized for face-to-face teaching, while learning through online courses designed by teachers and searching for documents is better suited to the online component.
In order to effectively implement the training of primary school teachers according to the blended teaching model, learners need to be equipped with tools for online learning such as phones, laptops, and a network connection. Although learning outcomes are influenced by teaching methods, learning materials, and equipment, the main factor is the learners themselves. Learners must be psychologically prepared, gradually familiarize themselves with self-study methods, and find a way of learning that suits them. They need to boldly ask questions, exchange, and discuss to find support when needed, and to improve the necessary soft skills. Instructors need to design and provide diverse learning materials suitable for learners' abilities, to have a detailed teaching plan and schedule, and to make clear the mix ratio used in the blended learning model (what percentage is used for the traditional learning model and what percentage for the online learning model) so that learners can proactively arrange their study time accordingly. Educational institutions need to increase investment in infrastructure and build a learning management system that is not limited in time and space, allowing learners
to access it anytime, anywhere; the system needs to be simple, to provide specific instructions so that learners can access it easily, and to offer tools for learners to evaluate their own learning process. There should also be a separate mechanism to encourage teachers to invest in the blended learning teaching model and to provide diverse e-learning materials suitable for many interests and learning styles, such as e-books, teaching videos, and electronic lectures.
Blended learning is a transition from the traditional learning model to the online learning model, and the rapid transformation of online teaching and learning is an inevitable trend in the near future. Investment in upgrading infrastructure, equipment, and teaching personnel to serve the blended learning model is also an important element in delivering effective online learning. Factors affecting the successful application of the blended learning model in practice include modern teaching equipment, teachers' capacity, and students' abilities. Among these, the main factor belongs to the learner: learners need to change their perception of how to learn and find a suitable self-study method so that the blended learning model can take full effect.
References
1. Directive No. 16/CT-TTg dated May 4, 2017 of the Prime Minister on strengthening the
competency to access the 4th industrial revolution, p. 1.
2. Ministry of Education and Training. (2018). General education program, p. 5.
3. Circular No. 13. (2018). Promulgating Law Course Curriculum, Ministry of Labour, Invalids
and Social Affairs, p. 1.
4. Circular No. 11. (2018). Promulgating Informatics Subject Programs, Ministry of Labour,
Invalids and Social Affairs, p. 1.
5. Circular No. 03. (2019). Promulgating English Subject Program, Ministry of Labour, Invalids
and Social Affairs, p. 1.
6. Circular No. 12. (2018). Promulgating the Physical Education Subject Program, Ministry of
Labour, Invalids and Social Affairs, p.1
7. Circular No. 10. (2018). Regulations on the program, organization of teaching and assessment
of learning results in the subject of National defense and security education, p. 1.
8. Graham, C. R., Woodfield, W., & Harrison, J. B. (2013). A framework for institutional adop-
tion and implementation of blended learning in higher education. The Internet and Higher
Education, 18, 4–14.
9. Barbour, M. K. (2011). State of the nation study: K-12 online learning in Canada. Vienna, VA:
International Council for K-12 Online Learning, p. 23.
10. Banados, E. (2006). A blended learning pedagogical model for teaching and learning EFL
successfully through an online interactive multimedia environment. CALICO Journal, 23, 533–
550.
11. Hoai, H. T. T., & Thao, T. T. (2019). Application of blended teaching approach in higher educa-
tion—A solution for large classes, science and technology magazine. Thai Nguyen University,
199(06), 87–92.
12. Lazarinis, F., Karachristos, C. V., Stavropoulos, E. C., & Verykios, V. S. (2019). Education and
Information Technologies, 24, 1237–1249.
13. Bedi, K. (2008). Experiences of hybrid corporate training programmes at an online academic
institution. In International Conference on Hybrid Learning and Education (pp. 271–282).
Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-85170-7_24
234 T. Q. Pham et al.
14. Dam, Q. V., & Yen, N. T. H. (2017). The trend of applying the Blended learning model in higher
education and the possibility of its implementation at the National Economics University. In
Proceedings of the National Scientific Conference “Online training in the period of industrial
revolution 128” - National Economics University, p. 25.
15. Vu Thi Minh, T. (2020). Blended learning and applicability at Hung Vuong University. Journal
of Science, No 37/2020.
Doubly Truncated Type II Exponentiated
Generalized Gamma Distribution
Abstract Here, we present a new model called the Doubly Truncated Type II Exponentiated Generalized Gamma (DTII EGG) distribution. Some properties of this distribution are derived, such as the cumulative distribution and probability density functions, moments, the Shannon entropy and the relative entropy. We also provide the stress-strength reliability of the proposed distribution.
1 Introduction
The exponentiated distributions (ED) are very popular statistical models. These distri-
butions were introduced by Gupta et al. in 1998. The focus here will be on some
modern literature on the topic. Al-Babtain et al. in 2015 proposed the McDonald type
I EGD to extend some other models. Feroze and Elbatal in 2016 presented the Beta
type I EGD. The moments and the maximum likelihood estimation for this distribu-
tion are studied. Rasekhi in 2018 studied some estimation methods of type I EGD
parameters such as UMVU, ML, least squares, weighted least squares and Minimum
distance estimators. The method performances are compared numerically by using
simulation based on the mean integrated squared error (MISE). Abid and Kadhim
in 2021 presented Doubly Truncated EIGD. In 2022, Abid and Jani presented two
doubly truncated generalized distributions with a lot of properties.
Here, we consider the Type II Exponentiated class of distributions, F(x) = 1 - [1 - G(x)]^k, where G(x) is the baseline distribution function and k is a positive real number.
Let X be a random variable distributed as Type II Exponentiated Generalized Gamma with parameters a, p, d, k > 0 (X ~ Type II EGGD(a, d, p, k)); then the
probability density function (pdf) and the cumulative distribution function (cdf) of
the random variable are, respectively,
f_1(x) = \frac{k\,p}{a^{d}\,\Gamma(d/p)}\,x^{d-1}e^{-(x/a)^{p}}\left[1-\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{k-1} \qquad (1)

F_1(x) = 1-\left[1-\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{k} \qquad (2)

where \Gamma(\alpha) is the ordinary Gamma function, \gamma(\alpha,\beta x)=\int_{0}^{\beta x}t^{\alpha-1}e^{-t}\,dt is the lower incomplete Gamma function and \Gamma(\alpha,\beta x)=\int_{\beta x}^{\infty}t^{\alpha-1}e^{-t}\,dt=\Gamma(\alpha)-\gamma(\alpha,\beta x) is the upper incomplete Gamma function.
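For readers who wish to evaluate (1) and (2) numerically, the following is a minimal sketch (not part of the original derivation) using SciPy; gammainc is SciPy's regularized lower incomplete gamma function, so gammainc(d/p, (x/a)**p) equals the baseline generalized-Gamma cdf, and the function names and any parameter values are the reader's choice.

```python
# Minimal numerical sketch of the Type II EGG pdf (1) and cdf (2); illustrative only.
import numpy as np
from scipy.special import gammainc, gamma   # gammainc is the regularized lower incomplete gamma

def type2_egg_pdf(x, a, d, p, k):
    """pdf (1): (k*p)/(a^d * Gamma(d/p)) * x^(d-1) * exp(-(x/a)^p) * [1 - G(x)]^(k-1)."""
    G = gammainc(d / p, (x / a) ** p)        # baseline generalized-Gamma cdf
    return (k * p / (a ** d * gamma(d / p)) * x ** (d - 1)
            * np.exp(-(x / a) ** p) * (1 - G) ** (k - 1))

def type2_egg_cdf(x, a, d, p, k):
    """cdf (2): 1 - [1 - G(x)]^k."""
    return 1 - (1 - gammainc(d / p, (x / a) ** p)) ** k
```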
The pdf and the cdf of doubly truncated type II Exponentiated Generalized Gamma
random variable X
(X ∼ DTII EGGD (a, d, p, k, b, c)) can be defined, respectively, as,
f(x) = \frac{\frac{k\,p}{a^{d}\Gamma(d/p)}\,x^{d-1}e^{-(x/a)^{p}}\left[1-\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{k-1}}{\left\{1-\left[1-\frac{\gamma(d/p,(c/a)^{p})}{\Gamma(d/p)}\right]^{k}\right\}-\left\{1-\left[1-\frac{\gamma(d/p,(b/a)^{p})}{\Gamma(d/p)}\right]^{k}\right\}}
= \frac{\frac{k\,p}{a^{d}\Gamma(d/p)}\,x^{d-1}e^{-(x/a)^{p}}\left[1-\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{k-1}}{\left[1-\frac{\gamma(d/p,(b/a)^{p})}{\Gamma(d/p)}\right]^{k}-\left[1-\frac{\gamma(d/p,(c/a)^{p})}{\Gamma(d/p)}\right]^{k}}, \qquad b<x<c \qquad (3)

F(x) = \frac{\left\{1-\left[1-\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{k}\right\}-\left\{1-\left[1-\frac{\gamma(d/p,(b/a)^{p})}{\Gamma(d/p)}\right]^{k}\right\}}{\left\{1-\left[1-\frac{\gamma(d/p,(c/a)^{p})}{\Gamma(d/p)}\right]^{k}\right\}-\left\{1-\left[1-\frac{\gamma(d/p,(b/a)^{p})}{\Gamma(d/p)}\right]^{k}\right\}}
= \frac{\left[1-\frac{\gamma(d/p,(b/a)^{p})}{\Gamma(d/p)}\right]^{k}-\left[1-\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{k}}{\left[1-\frac{\gamma(d/p,(b/a)^{p})}{\Gamma(d/p)}\right]^{k}-\left[1-\frac{\gamma(d/p,(c/a)^{p})}{\Gamma(d/p)}\right]^{k}}, \qquad b<x<c \qquad (4)

The corresponding survival function is

R(x) = 1-F(x) = \frac{\left[1-\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{k}-\left[1-\frac{\gamma(d/p,(c/a)^{p})}{\Gamma(d/p)}\right]^{k}}{\left[1-\frac{\gamma(d/p,(b/a)^{p})}{\Gamma(d/p)}\right]^{k}-\left[1-\frac{\gamma(d/p,(c/a)^{p})}{\Gamma(d/p)}\right]^{k}} \qquad (5)

and the hazard rate function is

\lambda(x) = \frac{f(x)}{R(x)} = \frac{\frac{k\,p}{a^{d}\Gamma(d/p)}\,x^{d-1}e^{-(x/a)^{p}}\left[1-\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{k-1}}{\left[1-\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{k}-\left[1-\frac{\gamma(d/p,(c/a)^{p})}{\Gamma(d/p)}\right]^{k}} \qquad (6)
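Since the truncated pdf, cdf and hazard in (3)-(6) are just renormalizations of the parent model on (b, c), they can be checked numerically with a short sketch such as the following; the helper names are ours and all parameter values are placeholders.

```python
# Minimal sketch of the doubly truncated pdf (3), cdf (4), survival (5) and hazard (6),
# obtained by renormalizing the parent Type II EGG cdf on (b, c); illustrative only.
import numpy as np
from scipy.special import gammainc, gamma

def _parent_cdf(x, a, d, p, k):
    return 1 - (1 - gammainc(d / p, (x / a) ** p)) ** k

def _parent_pdf(x, a, d, p, k):
    G = gammainc(d / p, (x / a) ** p)
    return (k * p / (a ** d * gamma(d / p)) * x ** (d - 1)
            * np.exp(-(x / a) ** p) * (1 - G) ** (k - 1))

def dtii_egg_pdf(x, a, d, p, k, b, c):
    denom = _parent_cdf(c, a, d, p, k) - _parent_cdf(b, a, d, p, k)
    return np.where((x > b) & (x < c), _parent_pdf(x, a, d, p, k) / denom, 0.0)

def dtii_egg_cdf(x, a, d, p, k, b, c):
    denom = _parent_cdf(c, a, d, p, k) - _parent_cdf(b, a, d, p, k)
    return np.clip((_parent_cdf(x, a, d, p, k) - _parent_cdf(b, a, d, p, k)) / denom, 0.0, 1.0)

def dtii_egg_hazard(x, a, d, p, k, b, c):
    return dtii_egg_pdf(x, a, d, p, k, b, c) / (1.0 - dtii_egg_cdf(x, a, d, p, k, b, c))
```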
The r-th raw moment of X is

E[X^{r}] = \frac{k\,p}{a^{d}\,\Gamma(d/p)\,\Delta}\int_{b}^{c}x^{d+r-1}e^{-(x/a)^{p}}\left[1-\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{k-1}dx,

where, here and below, \Delta=\left[1-\frac{\gamma(d/p,(b/a)^{p})}{\Gamma(d/p)}\right]^{k}-\left[1-\frac{\gamma(d/p,(c/a)^{p})}{\Gamma(d/p)}\right]^{k} denotes the normalizing constant of (3).

Now since (1-z)^{b}=\sum_{u=0}^{\infty}\frac{(-1)^{u}\,\Gamma(b+1)}{u!\,\Gamma(b-u+1)}z^{u} with |z|<1, b>0, and (1-z)^{-k}=\sum_{j=0}^{\infty}\frac{\Gamma(k+j)}{j!\,\Gamma(k)}z^{j} with |z|<1, k>0, we get three formulas for \left[1-\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{k-1}:

If k-1>0, then \left[1-\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{k-1}=\sum_{u=0}^{\infty}\frac{(-1)^{u}\,\Gamma(k)}{u!\,\Gamma(k-u)}\left[\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{u}.

If k-1<0, then \left[1-\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{k-1}=\sum_{j=0}^{\infty}\frac{\Gamma(k-1+j)}{j!\,\Gamma(k-1)}\left[\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{j}.

If k-1=0, then \left[1-\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{k-1}=\left[1-\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{1-1}=1.

So, we have the following three cases.

Case one: for k-1>0,
E[X^{r}] = \frac{k\,p}{a^{d}\,\Gamma(d/p)\,\Delta}\sum_{u=0}^{\infty}\frac{(-1)^{u}\,\Gamma(k)}{u!\,\Gamma(k-u)}\int_{b}^{c}x^{d+r-1}e^{-(x/a)^{p}}\left[\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{u}dx.

Let y=(x/a)^{p}, so that \frac{x}{a}=y^{1/p}, x=a\,y^{1/p} and dx=\frac{a}{p}\,y^{\frac{1}{p}-1}dy; then

E[X^{r}] = \frac{k\,a^{r}}{\Gamma(d/p)\,\Delta}\sum_{u=0}^{\infty}\frac{(-1)^{u}\,\Gamma(k)}{u!\,\Gamma(k-u)}\int_{(b/a)^{p}}^{(c/a)^{p}}y^{\frac{d+r}{p}-1}e^{-y}\left[\frac{\gamma(d/p,y)}{\Gamma(d/p)}\right]^{u}dy.

Using the expansion of the incomplete gamma function, \gamma(\theta,y)=y^{\theta}\,\Gamma(\theta)\,e^{-y}\sum_{m=0}^{\infty}\frac{y^{m}}{\Gamma(\theta+m+1)}, we have \left[\frac{\gamma(d/p,y)}{\Gamma(d/p)}\right]^{u}=y^{\frac{du}{p}}e^{-uy}\left(\sum_{m=0}^{\infty}\frac{y^{m}}{\Gamma(d/p+m+1)}\right)^{u}, and, writing \left(\sum_{m=0}^{\infty}a_{m}y^{m}\right)^{u}=\sum_{m=0}^{\infty}C_{u,m}\,y^{m} for the power of the series, we obtain

E[X^{r}] = \frac{k\,a^{r}}{\Gamma(d/p)\,\Delta}\sum_{u=0}^{\infty}\frac{(-1)^{u}\,\Gamma(k)}{u!\,\Gamma(k-u)}\sum_{m=0}^{\infty}C_{u,m}\int_{(b/a)^{p}}^{(c/a)^{p}}y^{\frac{d(1+u)+r}{p}+m-1}e^{-(1+u)y}dy.

Letting z=(1+u)y, so that y=\frac{z}{1+u} and dy=\frac{dz}{1+u}, gives

E[X^{r}] = \frac{k\,a^{r}}{\Gamma(d/p)\,\Delta}\sum_{u=0}^{\infty}\sum_{m=0}^{\infty}\frac{(-1)^{u}\,\Gamma(k)}{u!\,\Gamma(k-u)}\,C_{u,m}\left(\frac{1}{1+u}\right)^{\frac{d(1+u)+r}{p}+m}\left[\Gamma\!\left(\frac{d(1+u)+r}{p}+m,(1+u)\left(\frac{b}{a}\right)^{p}\right)-\Gamma\!\left(\frac{d(1+u)+r}{p}+m,(1+u)\left(\frac{c}{a}\right)^{p}\right)\right].
Case two: for k-1<0,

E[X^{r}] = \frac{k\,p}{a^{d}\,\Gamma(d/p)\,\Delta}\sum_{j=0}^{\infty}\frac{\Gamma(k-1+j)}{j!\,\Gamma(k-1)}\int_{b}^{c}x^{d+r-1}e^{-(x/a)^{p}}\left[\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{j}dx.

Again let y=(x/a)^{p}, so that x=a\,y^{1/p} and dx=\frac{a}{p}\,y^{\frac{1}{p}-1}dy; expanding the incomplete gamma function as before, writing the j-th power of the series with coefficients C_{m,j}, and finally letting z=(1+j)y with y=\frac{z}{1+j} and dy=\frac{dz}{1+j}, we obtain

E[X^{r}] = \frac{k\,a^{r}}{\Gamma(d/p)\,\Delta}\sum_{j=0}^{\infty}\sum_{m=0}^{\infty}\frac{\Gamma(k-1+j)}{j!\,\Gamma(k-1)}\,C_{m,j}\left(\frac{1}{1+j}\right)^{\frac{d(1+j)+r}{p}+m}\left[\Gamma\!\left(\frac{d(1+j)+r}{p}+m,(1+j)\left(\frac{b}{a}\right)^{p}\right)-\Gamma\!\left(\frac{d(1+j)+r}{p}+m,(1+j)\left(\frac{c}{a}\right)^{p}\right)\right].
Case three: for k-1=0,

E[X^{r}] = \frac{\frac{p}{a^{d}}}{\Gamma(d/p)\left\{\left[1-\frac{\gamma(d/p,(b/a)^{p})}{\Gamma(d/p)}\right]-\left[1-\frac{\gamma(d/p,(c/a)^{p})}{\Gamma(d/p)}\right]\right\}}\int_{b}^{c}x^{r+d-1}e^{-(x/a)^{p}}dx.

Again let y=(x/a)^{p}, so that x=a\,y^{1/p} and dx=\frac{a}{p}\,y^{\frac{1}{p}-1}dy; then

E[X^{r}] = \frac{a^{r}\left[\Gamma\!\left(\frac{d+r}{p},\left(\frac{b}{a}\right)^{p}\right)-\Gamma\!\left(\frac{d+r}{p},\left(\frac{c}{a}\right)^{p}\right)\right]}{\Gamma\!\left(\frac{d}{p},\left(\frac{b}{a}\right)^{p}\right)-\Gamma\!\left(\frac{d}{p},\left(\frac{c}{a}\right)^{p}\right)}.
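The closed form for the k-1=0 case is easy to verify numerically. The sketch below (an illustration, not part of the paper) compares it with direct quadrature of x^r f(x) over (b, c); the chosen parameter values are hypothetical.

```python
# Numerical check of the k = 1 moment formula against direct quadrature; illustrative only.
import numpy as np
from scipy.special import gammainc, gammaincc, gamma
from scipy.integrate import quad

a, d, p, r, b, c = 2.0, 3.0, 1.5, 2, 0.5, 5.0      # hypothetical parameter values

def upper_gamma(s, x):                              # Gamma(s, x) = Q(s, x) * Gamma(s)
    return gammaincc(s, x) * gamma(s)

closed_form = (a**r
               * (upper_gamma((d + r) / p, (b / a)**p) - upper_gamma((d + r) / p, (c / a)**p))
               / (upper_gamma(d / p, (b / a)**p) - upper_gamma(d / p, (c / a)**p)))

def truncated_pdf(x):
    # k = 1 reduces the model to a generalized-Gamma density truncated to (b, c)
    norm = gammainc(d / p, (c / a)**p) - gammainc(d / p, (b / a)**p)
    return p / (a**d * gamma(d / p)) * x**(d - 1) * np.exp(-(x / a)**p) / norm

numeric, _ = quad(lambda x: x**r * truncated_pdf(x), b, c)
print(closed_form, numeric)                         # the two values should agree to quadrature accuracy
```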
The Shannon entropy of X is H(X)=-E[\ln f(X)]. From (3),

-\ln f(X) = -\ln\frac{k\,p}{a^{d}\,\Gamma(d/p)\,\Delta}-(d-1)\ln X+\left(\frac{X}{a}\right)^{p}-(k-1)\ln\!\left[1-\frac{\gamma(d/p,(X/a)^{p})}{\Gamma(d/p)}\right],

so that H(X)=-\ln\frac{k\,p}{a^{d}\,\Gamma(d/p)\,\Delta}+I_{1}+I_{2}+I_{3}, where

I_{1} = -(d-1)E[\ln X] = \frac{-(d-1)\,k\,p}{a^{d}\,\Gamma(d/p)\,\Delta}\int_{b}^{c}(\ln x)\,x^{d-1}e^{-(x/a)^{p}}\left[1-\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{k-1}dx,

I_{2} = E\!\left[\left(\frac{X}{a}\right)^{p}\right] \quad and \quad I_{3} = -(k-1)E\!\left[\ln\!\left(1-\frac{\gamma(d/p,(X/a)^{p})}{\Gamma(d/p)}\right)\right].

Case one: for k-1>0,

I_{1} = \frac{-(d-1)\,k\,p}{a^{d}\,\Gamma(d/p)\,\Delta}\sum_{u=0}^{\infty}\frac{(-1)^{u}\,\Gamma(k)}{u!\,\Gamma(k-u)}\int_{b}^{c}(\ln x)\,x^{d-1}e^{-(x/a)^{p}}\left[\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{u}dx.

Again let y=(x/a)^{p}, so that \ln x=\ln a+\frac{1}{p}\ln y; then

I_{1} = \frac{-(d-1)\,k}{\Gamma(d/p)\,\Delta}\sum_{u=0}^{\infty}\frac{(-1)^{u}\,\Gamma(k)}{u!\,\Gamma(k-u)}\int_{(b/a)^{p}}^{(c/a)^{p}}\left(\ln a+\frac{1}{p}\ln y\right)e^{-y}\,y^{\frac{d}{p}-1}\left[\frac{\gamma(d/p,y)}{\Gamma(d/p)}\right]^{u}dy
= \frac{-(d-1)\,k}{\Gamma(d/p)\,\Delta}\sum_{u=0}^{\infty}\frac{(-1)^{u}\,\Gamma(k)}{u!\,\Gamma(k-u)}\,(I_{11}+I_{12}),

where, using \gamma(\theta,y)=y^{\theta}\Gamma(\theta)e^{-y}\sum_{m=0}^{\infty}\frac{y^{m}}{\Gamma(\theta+m+1)} and \left(\sum_{m=0}^{\infty}a_{m}y^{m}\right)^{u}=\sum_{m=0}^{\infty}C_{u,m}y^{m},

I_{11} = \ln a\sum_{m=0}^{\infty}C_{u,m}\int_{(b/a)^{p}}^{(c/a)^{p}}y^{\frac{d(1+u)}{p}+m-1}e^{-(1+u)y}dy
= \ln a\sum_{m=0}^{\infty}C_{u,m}\left(\frac{1}{1+u}\right)^{\frac{d(1+u)}{p}+m}\left[\Gamma\!\left(\frac{d(1+u)}{p}+m,(1+u)\left(\frac{b}{a}\right)^{p}\right)-\Gamma\!\left(\frac{d(1+u)}{p}+m,(1+u)\left(\frac{c}{a}\right)^{p}\right)\right]

(with the substitution z=(1+u)y), and

I_{12} = \frac{1}{p}\int_{(b/a)^{p}}^{(c/a)^{p}}(\ln y)\,e^{-y}y^{\frac{d}{p}-1}\left[\frac{\gamma(d/p,y)}{\Gamma(d/p)}\right]^{u}dy
= \frac{1}{p}\sum_{t=0}^{\infty}C_{t,u}\int_{(b/a)^{p}}^{(c/a)^{p}}(\ln y)\,e^{-(1+u)y}y^{\frac{(1+u)d}{p}+t-1}dy.

Now let e^{-(1+u)y}=\sum_{q=0}^{\infty}\frac{(-(1+u)y)^{q}}{q!}=\sum_{q=0}^{\infty}\frac{(-1)^{q}}{q!}\big((1+u)y\big)^{q}; we get

I_{12} = \frac{1}{p}\sum_{t=0}^{\infty}\sum_{q=0}^{\infty}C_{t,u}\frac{(-1)^{q}}{q!}(1+u)^{q}\int_{(b/a)^{p}}^{(c/a)^{p}}(\ln y)\,y^{\frac{(1+u)d}{p}+t+q-1}dy,

and the substitution z=\left(-\frac{(1+u)d}{p}-t-q\right)\ln y gives

I_{12} = \frac{1}{p}\sum_{t=0}^{\infty}\sum_{q=0}^{\infty}C_{t,u}\frac{(-1)^{q}}{q!}(1+u)^{q}\frac{1}{\left(-\frac{(1+u)d}{p}-t-q\right)^{2}}\left[\Gamma\!\left(2,\left(-\frac{(1+u)d}{p}-t-q\right)\ln\left(\frac{b}{a}\right)^{p}\right)-\Gamma\!\left(2,\left(-\frac{(1+u)d}{p}-t-q\right)\ln\left(\frac{c}{a}\right)^{p}\right)\right];

then we get I_{1}.
Case two: for k-1<0,

I_{1} = \frac{-(d-1)\,k\,p}{a^{d}\,\Gamma(d/p)\,\Delta}\sum_{j=0}^{\infty}\frac{\Gamma(k-1+j)}{j!\,\Gamma(k-1)}\int_{b}^{c}(\ln x)\,x^{d-1}e^{-(x/a)^{p}}\left[\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{j}dx,

and the same steps as in case one give

I_{1} = \frac{-(d-1)\,k}{\Gamma(d/p)\,\Delta}\sum_{j=0}^{\infty}\frac{\Gamma(k-1+j)}{j!\,\Gamma(k-1)}\Bigg\{\ln a\sum_{m=0}^{\infty}C_{j,m}\left(\frac{1}{1+j}\right)^{\frac{d(1+j)}{p}+m}\left[\Gamma\!\left(\frac{d(1+j)}{p}+m,(1+j)\left(\frac{b}{a}\right)^{p}\right)-\Gamma\!\left(\frac{d(1+j)}{p}+m,(1+j)\left(\frac{c}{a}\right)^{p}\right)\right]
+\frac{1}{p}\sum_{t=0}^{\infty}\sum_{q=0}^{\infty}C_{t,j}\frac{(-1)^{q}}{q!}(1+j)^{q}\frac{1}{\left(-\frac{(1+j)d}{p}-t-q\right)^{2}}\left[\Gamma\!\left(2,\left(-\frac{(1+j)d}{p}-t-q\right)\ln\left(\frac{b}{a}\right)^{p}\right)-\Gamma\!\left(2,\left(-\frac{(1+j)d}{p}-t-q\right)\ln\left(\frac{c}{a}\right)^{p}\right)\right]\Bigg\}.

Case three: for k-1=0, expanding e^{-(x/a)^{p}}=\sum_{q=0}^{\infty}\frac{(-1)^{q}}{q!}\left(\frac{x}{a}\right)^{pq} and letting z=(-pq-d)\ln x, we obtain

I_{1} = \frac{-(d-1)\,\frac{p}{a^{d}}}{\Gamma\!\left(\frac{d}{p},\left(\frac{b}{a}\right)^{p}\right)-\Gamma\!\left(\frac{d}{p},\left(\frac{c}{a}\right)^{p}\right)}\sum_{q=0}^{\infty}\frac{(-1)^{q}}{q!\,a^{pq}}\frac{1}{(-pq-d)^{2}}\left\{\Gamma\big(2,(-pq-d)\ln b\big)-\Gamma\big(2,(-pq-d)\ln c\big)\right\}.
For I_{2}=E\!\left[\left(\frac{X}{a}\right)^{p}\right]=\frac{1}{a^{p}}E(X^{p}), we get the result directly from the moment expressions above with r=p.

For I_{3}=-(k-1)E\!\left[\ln\!\left(1-\frac{\gamma(d/p,(X/a)^{p})}{\Gamma(d/p)}\right)\right]: since \ln(1-z)=-\sum_{n=1}^{\infty}\frac{z^{n}}{n}, we have

\ln\!\left(1-\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right) = -\sum_{n=1}^{\infty}\frac{1}{n}\left[\frac{\gamma(d/p,(x/a)^{p})}{\Gamma(d/p)}\right]^{n}
= -\sum_{n=1}^{\infty}\frac{1}{n}\left(\frac{x}{a}\right)^{nd}e^{-n(x/a)^{p}}\sum_{v_{1}=0}^{\infty}\cdots\sum_{v_{n}=0}^{\infty}\frac{(x/a)^{pv_{1}+\cdots+pv_{n}}}{\left(\frac{d}{p}+v_{1}\right)!\cdots\left(\frac{d}{p}+v_{n}\right)!}.

Now let e^{-n(x/a)^{p}}=\sum_{q=0}^{\infty}\frac{\left(-n(x/a)^{p}\right)^{q}}{q!}; we get

I_{3} = (k-1)\sum_{n=1}^{\infty}\sum_{v_{1}=0}^{\infty}\cdots\sum_{v_{n}=0}^{\infty}\sum_{q=0}^{\infty}\frac{(-1)^{q}\,n^{q-1}}{q!}\,\frac{a^{-nd-pq-pv_{1}-\cdots-pv_{n}}}{\left(\frac{d}{p}+v_{1}\right)!\cdots\left(\frac{d}{p}+v_{n}\right)!}\,E\!\left[X^{\,nd+pq+pv_{1}+\cdots+pv_{n}}\right],

where each expectation again follows from the moment expressions above.
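As a sanity check on the series for I_1, I_2 and I_3, the Shannon entropy can also be evaluated directly by quadrature of -f ln f over (b, c). The sketch below is illustrative only; the helper name dtii_pdf and all parameter values are our own choices.

```python
# Direct quadrature of the Shannon entropy H(X) = -E[ln f(X)] of the DTII EGGD; illustrative only.
import numpy as np
from scipy.special import gammainc, gamma
from scipy.integrate import quad

def dtii_pdf(x, a, d, p, k, b, c):
    parent_cdf = lambda t: 1 - (1 - gammainc(d / p, (t / a) ** p)) ** k
    g = (k * p / (a ** d * gamma(d / p)) * x ** (d - 1) * np.exp(-(x / a) ** p)
         * (1 - gammainc(d / p, (x / a) ** p)) ** (k - 1))
    return g / (parent_cdf(c) - parent_cdf(b))

a, d, p, k, b, c = 2.0, 3.0, 1.5, 2.5, 0.5, 5.0          # hypothetical parameter values
H, _ = quad(lambda x: -dtii_pdf(x, a, d, p, k, b, c)
            * np.log(dtii_pdf(x, a, d, p, k, b, c)), b, c)
print("Shannon entropy:", H)
```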
For the relative (Kullback–Leibler) entropy, let f_{1}(x) and f_{2}(x) denote the densities of DTII EGGD(a_{1},d_{1},p_{1},k_{1},b,c) and DTII EGGD(a_{2},d_{2},p_{2},k_{2},b,c), that is, for i=1,2,

f_{i}(x) = \frac{\frac{k_{i}\,p_{i}}{a_{i}^{d_{i}}\,\Gamma(d_{i}/p_{i})}\,x^{d_{i}-1}e^{-(x/a_{i})^{p_{i}}}\left[1-\frac{\gamma(d_{i}/p_{i},(x/a_{i})^{p_{i}})}{\Gamma(d_{i}/p_{i})}\right]^{k_{i}-1}}{\Delta_{i}}, \qquad
\Delta_{i} = \left[1-\frac{\gamma(d_{i}/p_{i},(b/a_{i})^{p_{i}})}{\Gamma(d_{i}/p_{i})}\right]^{k_{i}}-\left[1-\frac{\gamma(d_{i}/p_{i},(c/a_{i})^{p_{i}})}{\Gamma(d_{i}/p_{i})}\right]^{k_{i}}, \qquad b<x<c.
The relative entropy of f_{1} with respect to f_{2} is E_{f_{1}}\!\left[\ln\frac{f_{1}(X)}{f_{2}(X)}\right], where X has density f_{1}. Taking logarithms of the two densities gives

E_{f_{1}}\!\left[\ln\frac{f_{1}(X)}{f_{2}(X)}\right] = \ln\!\left(\frac{k_{1}p_{1}}{a_{1}^{d_{1}}\,\Gamma(d_{1}/p_{1})\,\Delta_{1}}\cdot\frac{a_{2}^{d_{2}}\,\Gamma(d_{2}/p_{2})\,\Delta_{2}}{k_{2}p_{2}}\right)+(d_{1}-d_{2})E[\ln X]-E\!\left[\left(\frac{X}{a_{1}}\right)^{p_{1}}\right]+E\!\left[\left(\frac{X}{a_{2}}\right)^{p_{2}}\right]
+(k_{1}-1)E\!\left[\ln\!\left(1-\frac{\gamma(d_{1}/p_{1},(X/a_{1})^{p_{1}})}{\Gamma(d_{1}/p_{1})}\right)\right]-(k_{2}-1)E\!\left[\ln\!\left(1-\frac{\gamma(d_{2}/p_{2},(X/a_{2})^{p_{2}})}{\Gamma(d_{2}/p_{2})}\right)\right] \qquad (7)

The expectations E[\ln X], E[(X/a_{1})^{p_{1}}] and E[(X/a_{2})^{p_{2}}] are obtained exactly as I_{1} and I_{2} above, and the two logarithmic terms are expanded as for I_{3}. In particular,

E\!\left[\ln\!\left(1-\frac{\gamma(d_{2}/p_{2},(X/a_{2})^{p_{2}})}{\Gamma(d_{2}/p_{2})}\right)\right] = -\sum_{n=1}^{\infty}\sum_{v_{1}=0}^{\infty}\cdots\sum_{v_{n}=0}^{\infty}\sum_{q=0}^{\infty}\frac{(-1)^{q}\,n^{q-1}}{q!}\,\frac{a_{2}^{-nd_{2}-p_{2}q-p_{2}v_{1}-\cdots-p_{2}v_{n}}}{\left(\frac{d_{2}}{p_{2}}+v_{1}\right)!\cdots\left(\frac{d_{2}}{p_{2}}+v_{n}\right)!}\,E\!\left[X^{\,nd_{2}+p_{2}q+p_{2}v_{1}+\cdots+p_{2}v_{n}}\right],

and similarly with (a_{1},d_{1},p_{1}) in place of (a_{2},d_{2},p_{2}); each raw moment is then evaluated from the moment expressions given earlier.
By substituting the above results in Eq. (7), we get the relative entropy of the
DTII EGGD.
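The series form of Eq. (7) can be cross-checked by evaluating the Kullback–Leibler divergence directly by quadrature. The following sketch is illustrative only; the compact dtii_pdf helper and both parameter sets are our own hypothetical choices, and the two densities are assumed to share the truncation points b and c.

```python
# Numerical relative entropy KL(f1 || f2) = E_{f1}[ln f1(X) - ln f2(X)] by quadrature; illustrative only.
import numpy as np
from scipy.special import gammainc, gamma
from scipy.integrate import quad

def dtii_pdf(x, a, d, p, k, b, c):
    parent_cdf = lambda t: 1 - (1 - gammainc(d / p, (t / a) ** p)) ** k
    g = (k * p / (a ** d * gamma(d / p)) * x ** (d - 1) * np.exp(-(x / a) ** p)
         * (1 - gammainc(d / p, (x / a) ** p)) ** (k - 1))
    return g / (parent_cdf(c) - parent_cdf(b))

params1 = dict(a=2.0, d=3.0, p=1.5, k=2.5, b=0.5, c=5.0)   # hypothetical parameters of f1
params2 = dict(a=1.8, d=2.5, p=1.5, k=1.5, b=0.5, c=5.0)   # hypothetical parameters of f2

kl, _ = quad(lambda x: dtii_pdf(x, **params1)
             * (np.log(dtii_pdf(x, **params1)) - np.log(dtii_pdf(x, **params2))),
             params1["b"], params1["c"])
print("relative entropy:", kl)
```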
The life of a component can be described using stress-strength models. Let a random strength X, which is subjected to a random stress Y, follow DTII EGGD(a, d, p, k, b, c) and DTII EGGD(a_{1}, d_{1}, p_{1}, k_{1}, b, c), respectively. The stress-strength reliability of the DTII EGGD is then

R = P(Y<X) = \int_{b}^{c}f_{X}(x)\,F_{Y}(x)\,dx.

Since, by (4),

F_{Y}(x) = \frac{\left[1-\frac{\gamma(d_{1}/p_{1},(b/a_{1})^{p_{1}})}{\Gamma(d_{1}/p_{1})}\right]^{k_{1}}-\left[1-\frac{\gamma(d_{1}/p_{1},(x/a_{1})^{p_{1}})}{\Gamma(d_{1}/p_{1})}\right]^{k_{1}}}{\left[1-\frac{\gamma(d_{1}/p_{1},(b/a_{1})^{p_{1}})}{\Gamma(d_{1}/p_{1})}\right]^{k_{1}}-\left[1-\frac{\gamma(d_{1}/p_{1},(c/a_{1})^{p_{1}})}{\Gamma(d_{1}/p_{1})}\right]^{k_{1}}},

we can write

R = \frac{\left[1-\frac{\gamma(d_{1}/p_{1},(b/a_{1})^{p_{1}})}{\Gamma(d_{1}/p_{1})}\right]^{k_{1}}-E\!\left[1-\frac{\gamma(d_{1}/p_{1},(X/a_{1})^{p_{1}})}{\Gamma(d_{1}/p_{1})}\right]^{k_{1}}}{\left[1-\frac{\gamma(d_{1}/p_{1},(b/a_{1})^{p_{1}})}{\Gamma(d_{1}/p_{1})}\right]^{k_{1}}-\left[1-\frac{\gamma(d_{1}/p_{1},(c/a_{1})^{p_{1}})}{\Gamma(d_{1}/p_{1})}\right]^{k_{1}}}.

Let I_{6}=E\!\left[1-\frac{\gamma(d_{1}/p_{1},(X/a_{1})^{p_{1}})}{\Gamma(d_{1}/p_{1})}\right]^{k_{1}}. By using (1-z)^{b}=\sum_{s=0}^{\infty}\frac{(-1)^{s}\,\Gamma(b+1)}{s!\,\Gamma(b-s+1)}z^{s}, we get

\left[1-\frac{\gamma(d_{1}/p_{1},(x/a_{1})^{p_{1}})}{\Gamma(d_{1}/p_{1})}\right]^{k_{1}} = \sum_{s=0}^{\infty}\frac{(-1)^{s}\,\Gamma(k_{1}+1)}{s!\,\Gamma(k_{1}-s+1)}\left[\frac{\gamma(d_{1}/p_{1},(x/a_{1})^{p_{1}})}{\Gamma(d_{1}/p_{1})}\right]^{s},

and, since \gamma\!\left(\frac{d_{1}}{p_{1}},\left(\frac{x}{a_{1}}\right)^{p_{1}}\right)=\left(\frac{x}{a_{1}}\right)^{d_{1}}\sum_{q=0}^{\infty}\frac{\left(-(x/a_{1})^{p_{1}}\right)^{q}}{\left(\frac{d_{1}}{p_{1}}+q\right)q!},

\left[\frac{\gamma(d_{1}/p_{1},(x/a_{1})^{p_{1}})}{\Gamma(d_{1}/p_{1})}\right]^{s} = \sum_{q_{1}=0}^{\infty}\cdots\sum_{q_{s}=0}^{\infty}\frac{(-1)^{q_{1}+\cdots+q_{s}}\,a_{1}^{-d_{1}s-p_{1}q_{1}-\cdots-p_{1}q_{s}}\,x^{\,d_{1}s+p_{1}q_{1}+\cdots+p_{1}q_{s}}}{\left(\frac{d_{1}}{p_{1}}+q_{1}\right)\cdots\left(\frac{d_{1}}{p_{1}}+q_{s}\right)q_{1}!\cdots q_{s}!\,\Gamma\!\left(\frac{d_{1}}{p_{1}}\right)^{s}}.

Therefore

I_{6} = \sum_{s=0}^{\infty}\frac{(-1)^{s}\,\Gamma(k_{1}+1)}{s!\,\Gamma(k_{1}-s+1)}\sum_{q_{1}=0}^{\infty}\cdots\sum_{q_{s}=0}^{\infty}\frac{(-1)^{q_{1}+\cdots+q_{s}}\,a_{1}^{-d_{1}s-p_{1}q_{1}-\cdots-p_{1}q_{s}}}{\Gamma\!\left(\frac{d_{1}}{p_{1}}\right)^{s}\left(\frac{d_{1}}{p_{1}}+q_{1}\right)\cdots\left(\frac{d_{1}}{p_{1}}+q_{s}\right)q_{1}!\cdots q_{s}!}\,E\!\left[X^{\,d_{1}s+p_{1}q_{1}+\cdots+p_{1}q_{s}}\right].
Then, we get the stress-strength reliability model for DTII EGG D as follows,
R = \frac{\left[1-\frac{\gamma(d_{1}/p_{1},(b/a_{1})^{p_{1}})}{\Gamma(d_{1}/p_{1})}\right]^{k_{1}}}{\Delta_{1}}
-\frac{1}{\Delta_{1}}\sum_{s=0}^{\infty}\frac{(-1)^{s}\,\Gamma(k_{1}+1)}{s!\,\Gamma(k_{1}-s+1)}\sum_{q_{1}=0}^{\infty}\cdots\sum_{q_{s}=0}^{\infty}\frac{(-1)^{q_{1}+\cdots+q_{s}}\,a_{1}^{-d_{1}s-p_{1}q_{1}-\cdots-p_{1}q_{s}}}{\Gamma\!\left(\frac{d_{1}}{p_{1}}\right)^{s}\left(\frac{d_{1}}{p_{1}}+q_{1}\right)\cdots\left(\frac{d_{1}}{p_{1}}+q_{s}\right)q_{1}!\cdots q_{s}!}\;E\!\left[X^{\,r^{*}}\right] \qquad (8)

where r^{*}=d_{1}s+p_{1}q_{1}+\cdots+p_{1}q_{s}, \Delta_{1}=\left[1-\frac{\gamma(d_{1}/p_{1},(b/a_{1})^{p_{1}})}{\Gamma(d_{1}/p_{1})}\right]^{k_{1}}-\left[1-\frac{\gamma(d_{1}/p_{1},(c/a_{1})^{p_{1}})}{\Gamma(d_{1}/p_{1})}\right]^{k_{1}}, and E[X^{r^{*}}] is given, according to the sign of k-1, by

E[X^{r^{*}}] = \frac{k\,a^{r^{*}}}{\Gamma(d/p)\,\Delta}\sum_{u=0}^{\infty}\sum_{m=0}^{\infty}\frac{(-1)^{u}\,\Gamma(k)}{u!\,\Gamma(k-u)}\,C_{u,m}\left(\frac{1}{1+u}\right)^{\frac{d(1+u)+r^{*}}{p}+m}\left[\Gamma\!\left(\frac{d(1+u)+r^{*}}{p}+m,(1+u)\left(\frac{b}{a}\right)^{p}\right)-\Gamma\!\left(\frac{d(1+u)+r^{*}}{p}+m,(1+u)\left(\frac{c}{a}\right)^{p}\right)\right], \qquad k-1>0,

E[X^{r^{*}}] = \frac{k\,a^{r^{*}}}{\Gamma(d/p)\,\Delta}\sum_{j=0}^{\infty}\sum_{m=0}^{\infty}\frac{\Gamma(k-1+j)}{j!\,\Gamma(k-1)}\,C_{m,j}\left(\frac{1}{1+j}\right)^{\frac{d(1+j)+r^{*}}{p}+m}\left[\Gamma\!\left(\frac{d(1+j)+r^{*}}{p}+m,(1+j)\left(\frac{b}{a}\right)^{p}\right)-\Gamma\!\left(\frac{d(1+j)+r^{*}}{p}+m,(1+j)\left(\frac{c}{a}\right)^{p}\right)\right], \qquad k-1<0,

E[X^{r^{*}}] = \frac{a^{r^{*}}\left[\Gamma\!\left(\frac{d+r^{*}}{p},\left(\frac{b}{a}\right)^{p}\right)-\Gamma\!\left(\frac{d+r^{*}}{p},\left(\frac{c}{a}\right)^{p}\right)\right]}{\Gamma\!\left(\frac{d}{p},\left(\frac{b}{a}\right)^{p}\right)-\Gamma\!\left(\frac{d}{p},\left(\frac{c}{a}\right)^{p}\right)}, \qquad k-1=0.
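Because R = P(Y < X) is a one-dimensional integral over (b, c), the series in (8) can be checked against direct quadrature. The sketch below is an illustration only, not the series of Eq. (8); the helper names and both parameter sets are hypothetical, and both variables are assumed to share the truncation interval (b, c).

```python
# Numerical stress-strength reliability R = P(Y < X) = \int_b^c f_X(x) F_Y(x) dx; illustrative only.
import numpy as np
from scipy.special import gammainc, gamma
from scipy.integrate import quad

def parent_cdf(x, a, d, p, k):
    return 1 - (1 - gammainc(d / p, (x / a) ** p)) ** k

def trunc_pdf(x, a, d, p, k, b, c):
    g = (k * p / (a ** d * gamma(d / p)) * x ** (d - 1) * np.exp(-(x / a) ** p)
         * (1 - gammainc(d / p, (x / a) ** p)) ** (k - 1))
    return g / (parent_cdf(c, a, d, p, k) - parent_cdf(b, a, d, p, k))

def trunc_cdf(x, a, d, p, k, b, c):
    num = parent_cdf(x, a, d, p, k) - parent_cdf(b, a, d, p, k)
    return num / (parent_cdf(c, a, d, p, k) - parent_cdf(b, a, d, p, k))

b, c = 0.5, 5.0
X = dict(a=2.0, d=3.0, p=1.5, k=2.0)     # strength parameters (a, d, p, k), hypothetical
Y = dict(a=1.5, d=2.0, p=1.5, k=1.5)     # stress parameters (a1, d1, p1, k1), hypothetical

R, _ = quad(lambda x: trunc_pdf(x, **X, b=b, c=c) * trunc_cdf(x, **Y, b=b, c=c), b, c)
print("P(Y < X) =", R)
```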
4 Conclusion
The construction of new distributions has become a topic of interest for many researchers, who hope to obtain models that better describe their data. Here, we presented the DTII EGG distribution and derived some of its properties. We provided forms for the rth raw moment and for the Shannon and relative entropies, and the stress-strength reliability model was derived for fully different parameters.
References
1. Gradshteyn, S., & Ryzhik, M. (2000). Table of integrals, series, and products (6th ed.). Academic
Press.
2. Abid, S., & Kadhim, F. (2021). Doubly truncated exponentiated inverted gamma distribution.
Journal of Physics: Conference Series, 1999(1), 012098.
3. Abid, S., & Jani, H. (2022). Two doubly truncated generalized distributions: Some properties.
AIP Conference Proceedings, 2022(2398), 060033.
4. Al-Babtain, A., Merovci, F., & Elbatal, I. (2015). The McDonald exponentiated gamma distri-
bution and its statistical properties. SpringerPlus, vol. 4, no. 2. https://doi.org/10.1186/2193-180
1-4-2
5. Feroze, N., Elbatal, I. (2016). Beta exponentiated gamma distribution: some properties and
estimation. Pak.j.stat.oper.res., vol. XII, no.1, pp. 141–154.
6. Gupta, C., Gupta, L., & Gupta, D. (1998). Modeling failure time data by Lehman alternatives.
Communications in Statistics—Theory and Methods, 27, 887–904.
7. Kullback, S., & Leibler, R. (1951). On information and sufficiency. Annals of Mathematical
Statistics 1, 79–86. https://doi.org/10.1214/aoms/1177729694. MR 39968.
8. Rasekhi, M. (2018). A study on methods for estimating the PDF and the CDF in the exponentiated
gamma distribution. Communications in Statistics—Simulation and Computation. https://doi.
org/10.1080/03610918.2018.1508707
9. Shannon, E. (1948). A mathematical theory of communication. Bell System Technical Journal,
27, 379–432.
Detailed Review of Challenges in Cloud
Computing
Sneha Raina
1 Introduction
Services known as “cloud computing” enable users to store and access computing
resources and data over the Internet as opposed to a costly local hard drive. By
enabling users to store their data across a variety of cloud services, it enhances storage
capacity and lowers costs by removing the need to buy an expensive system with
more memory. A shared pool of programmable computer resources can be accessed
easily and on-demand via the network, as stated by the National Institute of Standards
and Technology in the United States. Although cloud computing benefits consumers, many are unaware of the numerous risks that could lead to significant loss. The vast majority have no understanding of how their cloud service provider handles or
stores their data. When consumers choose to use a cloud computing service, they essentially provide a third party with access to their private information in order to store and back up their data or resources. Weaknesses in a system or standard operating procedure that can be exploited are known as vulnerabilities, and their exploitation by an attacker can result in a compromise; threats are the possible attacks that could be launched by exploiting a vulnerability. Cloud computing is a new paradigm that strives to supply computing resources by the most effective and economical means feasible. Although it is still early, it is gaining momentum, and if the cloud's flaws and risks are fixed, its users will have a virtual fortress at their disposal. In this work we closely examine the technical aspects of cloud computing security, with a focus on attacks and hacking attempts against cloud computing providers and devices. As we have noted, attacks on cloud computing environments create new security requirements, in addition to the particular security vulnerabilities and dangers that services and service-oriented architectures face. In this ongoing study, we attempt to anticipate the kinds of security issues that the cloud computing paradigm can bring about and to offer some working solutions based on the notion of potential vulnerabilities.
2 Cloud Computing
Cloud computing, according to Peter Mell and Timothy Grance, "is a model for providing ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources" [1, 2].
"Cloud computing," in P. Gaw's opinion, relates to a broader concept: basically, the idea of leveraging the internet to provide access to services that are supported by technology. In line with Gartner, those services must be "massively scalable" in order to be considered "real cloud computing."
As J. Kaplan puts it, cloud computing is a set of web-based services that enable clients to access a variety of functional capabilities on a "pay-as-you-go" basis that previously required huge hardware/software expenses and expert knowledge; it brings the utility computing concepts of the past to life without the technological challenges or implementation issues. The National Institute of Standards and Technology (NIST) describes it as "a concept for providing on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that may be quickly supplied and released with minimal administrative effort or service provider engagement." Five essential elements, three service models, and four availability-promoting deployment strategies make up this cloud architecture [1].
See Table 1.
3 Review of Literature
Networks, servers, and applications that are delivered electronically instead of physically are referred to as "cloud-based services" in a study presented by the authors in 2011. According to its conclusions, the overhead of such massive systems has been minimized in terms of potential risk, privacy breaches, and other aspects. Cloud computing makes web services readily available on demand, and for people who prefer not to invest in building out their own infrastructure, the cloud is the ideal alternative; otherwise, providers must spend a lot of money on infrastructure and deal with problems like equipment failure and software flaws.
In 2012, the authors discussed cloud security. They pointed out that even though data are growing exponentially, open-ended and generally easy-to-access resources still raise security issues. They also examined the security threats related to cloud computing infrastructures, features, cloud delivery strategies, and cloud stakeholder groupings [3]. Cloud network security is a major challenge as more people use cloud services that transmit, receive, and collect personal data over a network [4]. In that study, the author discusses a number of risk factors that have an impact on cloud security, including data theft, man-in-the-middle attacks, and data corruption. Cloud computing is one of the most cutting-edge research subjects because of its flexibility, affordability, and data translation between client and server [5]. The use of a reputation management system to provide robust data security and to keep track of data in a transaction table is covered at length in that paper. Although virtualization is a prerequisite for cloud computing, its security has not been widely researched; one analysis of cloud security examines the effects of virtualization vulnerabilities on several service models in the cloud, since virtualization-based cloud computing offers a way to share resources including software and infrastructure [6]. Flexibility and dependability are important when providing services in a cloud architecture. One security pattern to which cloud providers must adhere integrates the RSA approach with a digital signature to encrypt user data as it is transmitted over the network; to increase the security of cloud data, the RSA algorithm with digital signature, together with security management frameworks and standards, is addressed [7].
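To make the idea of combining RSA encryption with a digital signature concrete, the following is a minimal sketch, not the exact scheme from [7]: the sender signs a small payload with RSA-PSS and encrypts it for the recipient with RSA-OAEP, using the third-party Python "cryptography" package. The key sizes, payload and variable names are illustrative assumptions; real deployments would typically use hybrid encryption for large files.

```python
# Sketch of sign-then-encrypt with RSA; illustrative only, not the scheme of reference [7].
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

sender_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

record = b"cloud-storage-manifest,version=1"   # small payload sent to the cloud (toy data)

# The sender signs the record so its origin and integrity can be verified later.
signature = sender_key.sign(
    record,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# The sender encrypts the record with the recipient's public key (RSA-OAEP).
ciphertext = recipient_key.public_key().encrypt(
    record,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)

# The recipient decrypts and verifies; verify() raises InvalidSignature if the data were tampered with.
plaintext = recipient_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)
sender_key.public_key().verify(
    signature,
    plaintext,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
```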
In 2013, the research community paid less attention to employing several cloud providers to handle security than it did to using a single cloud provider [8]. That study's main objectives include the use of multiple clouds, lowering security risks, and confidentiality. For cloud users, the owner loses a degree of control because data are dispersed across organizational boundaries and accessed online [9]. Strengthening data security and maintaining data owners' trust are increasingly top priorities for cloud computing organizations. Cyberattacks are an alternative to physical assaults for compromising people's private information, and blocking them takes time and effort to protect businesses, people, and the nation [10]. That study found that data mining and algorithms are both necessary to maintain cloud security and privacy.
During 2014, the Internet of Things (IoT) and cloud computing became the most significant technical developments. They are anticipated to increase in both use and tenancy, making them the most crucial elements of the internet [11]. Particular attention is paid to CloudIoT, which combines cloud computing and IoT. Through a web interface, cloud users can access a virtual pool of resources; infrastructure, networks, platforms, software, and storage are all examples of cloud resources. It is crucial to protect the security and integrity of the data used by cloud users as more enterprises shift sensitive data to the cloud. The risks to data security posed by cloud computing have been highlighted by [12]. Cloud computing's explosive growth has increased server security problems, making it challenging to monitor security risks; Denial-of-Service (DoS) attacks are among the most serious dangers, and a mechanism for immediately detecting the most heavily attacked traffic in the cloud computing environment is provided for these various types of network intrusion.
One of the key problems identified in 2015 is the distributed denial-of-service attack, a type of attack in which multiple attackers target a single victim to block the targeted system from accessing services. The many methods for identifying and thwarting Distributed Denial of Service (DDoS) attacks were described in detail in [13]. Cloud security investigation is more difficult than ordinary digital forensic investigation; investigators face many challenges, and it may be difficult to obtain evidence in cloud forensic investigations [14]. The author discussed how cloud computing's difficulties affect digital investigations. In the form of distributed computing known as "cloud computing," customers pay only for the resources and applications they actually use [15]. In order to introduce cloud security subjects including data security, privacy, and integrity, that paper analyses a number of unresolved security issues that have an impact on cloud computing; the pros and cons of the most recent cloud security solutions are also covered. In 2016, anything that is online and capable of sending data over a network was referred to as the "Internet of Things." However, poor technological implementation and design can cause security problems [16], and an architecture was suggested to tackle the flaws in the IoT security domain and lessen the risks. The cloud makes data translation feasible and relieves cloud users of the responsibility of managing local storage [17]. Steganography and cryptography were combined to create a security-improving approach. Users can use a web browser to access Security-as-a-Service (SaaS), a cloud-based security service; the approach ingeniously separates encryption into numerous encryptions [18]. The author of that study used cloud services to encrypt and decrypt data using innovative and transparent methods.
The researchers explained that the cloud platform's architecture is based on the OpenStack framework, which supports multimodal enhancements and makes use of fingerprints as a novel biometric method for user authentication. It provides complete logical separation of the compute and data resources linked to numerous organizations, as well as secure access for many users. Masala et al. [19] cover topics such as cloud security, data storage security on public cloud servers, and logical user authentication for cloud access. Multi-cloud storage allows cloud data to be stored and accessed from any location and can also encrypt and store data across many cloud drives. Subramanian and John [20] proposed a paradigm that uses index-based encrypted data to address various insider threats, file privacy for files provided by various users, and distributed data storage.
In 2018, researchers proposed a new cloud security plan [21] intended to offer more secure data translation and protection against security breaches, in response to users reporting an increase in illegal activity. Such a system offers data storage but faces a number of security concerns, because of which a separate approach is needed to guarantee that cloud data are kept properly on the cloud platform [22]; how to safeguard cloud data storage using various security measures was addressed. To deliver the services that cloud users require, utility computing and software-as-a-service (SaaS) are used in cloud computing. Cloud security is a significant issue with a plethora of difficulties: data privacy, security concerns, and malicious programs are just a few of the security-related challenges that cloud service providers and customers must deal with [23].
4 Challenges in Cloud Computing
Loss of Data: The most common problem with cloud computing is data leakage. We are all aware that our private data are in the hands of third parties, because of which we do not have complete control over our database, and hackers may gain access to our personal files if cloud security is compromised.
Hacker's Interface and Unreliable APIs: Using an API is the simplest method of communicating with the cloud, so it is essential to protect externally exposed interfaces and APIs. Cloud computing also offers a few services that are accessible to the general public, and this is the feature of cloud computing that is most vulnerable, since these services can be accessed by outside parties. It is therefore likely that hackers will simply exploit such services to corrupt or steal data.
Hacking into User Accounts: The biggest security issue with cloud computing is account theft. If a hacker succeeds in accessing an account belonging to a user or a company, the hacker is then free to carry out unauthorized activities.
Changes to Service Provider: Among the many security issues with cloud computing, vendor lock-in is a critical one. When migrating between service providers, many organizations face a number of difficulties. Data migration, in addition to the fact that rival cloud services have distinct operating principles, will be a barrier for a company seeking to migrate from AWS to another provider's cloud services. It is also possible that AWS's costs differ from, say, those of Google Cloud.
Skill Deficit: Day-to-day operation, migrating to a different service provider, needing an additional function, and learning how to use a feature are the main problems that arise in an IT company with inexperienced staff. Working with cloud computing therefore requires qualified personnel.
A DoS (Denial of Service) attack: This type of attack takes place when the system receives an abnormally high volume of traffic. The biggest targets of DoS attacks are governmental institutions and banks. A DoS attack results in the loss of data, and data recovery therefore takes a long time and costs a lot of money.
Figure 1 presents an analysis of the challenges in the cloud/on-demand model according to the results of the IDC Survey Ranking.
5 Security Methods for Cloud Data
Security methods that are applicable to cloud data include identification and authentication, encryption, integrity checking, access control, secure deletion, and data masking. The main methods include the following.
OTP Validation: Currently, many banks offer the One-Time Password (OTP) form of cloud user authentication, which is generated using random number generation. It is also used in multi-factor authentication, sometimes known as one-time authentication, and is referred to as a multiple authentication factor when used for two-step authentication.
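As a concrete illustration of how such one-time passwords can be produced, the following standard-library sketch generates a time-based OTP using the HMAC construction and dynamic truncation of RFC 4226/6238; the shared secret, time step and digit count are illustrative values, not a prescription for any particular bank or cloud provider.

```python
# Minimal time-based one-time password (TOTP) sketch for multi-factor log-ins; illustrative only.
import hmac, hashlib, struct, time, secrets

def totp(secret: bytes, time_step: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // time_step                  # moving factor derived from the clock
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                               # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

shared_secret = secrets.token_bytes(20)                      # provisioned once per user/device
print("one-time password:", totp(shared_secret))             # both sides recompute and compare
```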
Access Management: Access control shields cloud data from tampering and unauthorized disclosure. Owners of cloud data can impose strict access restrictions on the users who may access their data, so that authorized users can access cloud data while unauthorized users cannot.
Secure Deletion: It is essential to understand how data are removed from the server. There are several deletion strategies, such as clearing, in which media are erased before being reloaded, which also protects against recovery of the data formerly stored on the medium. For lower classification levels, such data are frequently handled using the sanitization technique.
6 Conclusion
The research above makes it abundantly evident that there is potential for developing many security methods to safeguard data gathering, and the significance of each technique has grown. These techniques were explained succinctly, yet they have a wide range of applications, and in the circumstances described above the cloud data were fully protected. A brief summary of the literature has been provided, in which the effects of virtualization vulnerabilities on cloud service models were examined and the use of the RSA approach with a digital signature to encrypt user data was discussed. It has also been observed that data mining and algorithms are important for maintaining cloud security and privacy. We have addressed the benefits and drawbacks of the current approaches to fully resolving security and privacy issues; these remain open issues to work on.
References
1. Badger, L., Patt-corner, R., & Voas, J. (2012). Cloud computing synopsis and recommenda-
tions recommendations of the national institute of standards and technology. NIST Special
Publication, 800(146), 81. http://csrc.nist.gov/publications/nistpubs/800-146/sp800-146.pdf
2. Buyya, R., Yeo, C. S., Venugopal, S., Broberg, J., & Brandic, I. (2009). Cloud computing and
emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility.
Future Generation Computer Systems, 25(6), 599–616. https://doi.org/10.1016/j.future.2008.
12.001
3. Behl, A., & Behl, K. (2012). An analysis of cloud computing. In 2012 World Congress on
Information and Communication Technologies (pp. 109–114).
4. Sutradhar, N., Sharma, M. K., & Sai Krishna, G. (2021). Cloud computing: Security issues
and challenges. Lecture Notes in Electrical Engineering, 692(December), 25–32. https://doi.
org/10.1007/978-981-15-7486-3_4
5. GR, V., & Rama Mohan Reddy, A. (2012). An efficient security model in cloud computing
based on soft computing techniques. International Journal of Computer Applications, 60(14),
18–23. https://doi.org/10.5120/9760-3219
6. Kishore Kumar, D., Venkatewara Rao, G., & Srinivasa Rao, D. G. (2012). Cloud computing:
An analysis of its challenges & cloud computing: An analysis of its challenges & security
issues. International Journal of Computer Science and Network, 1(5), 2277–5420.
7. Shaikh, A. H., & Meshram, B. B. (2021). Security issues in cloud computing. Lecture Notes
in Networks and Systems, 146, 63–77. https://doi.org/10.1007/978-981-15-7421-4_6
8. Shrawankar, M., & Shrivastava, A. K. (2013). Comparative study of security mechanisms
in multi-cloud environment. International Journal of Computers and Applications, 77(6),
9–13. https://doi.org/10.5120/13396-1039
9. Aggarwal, N., Tyagi, P., Dubey, B. P., & Pilli, E. S. (2013). Cloud computing: data storage
security analysis and its challenges. International Journal of Computers and Applications,
70(24), 33–37. https://doi.org/10.5120/12216-8359
10. Aggarwal, P., & Chaturvedi, M. M. (2013). Application of data mining techniques for infor-
mation security in a cloud: A survey. International Journal of Computers and Applications,
80(13), 11–17. https://doi.org/10.5120/13920-1804
11. Botta, A., De Donato, W., Persico, V., & Pescape, A. (2014). On the integration of cloud
computing and internet of things. In Proceedings—2014 International Conference on Future
Abstract The demand for efficient and secure warehouse operations has signifi-
cantly increased due to growing customer expectations and competition in modern
businesses. Smart warehousing systems have emerged as a solution to address these
demands, with Autonomous Mobile Robots (AMRs) becoming a popular tech-
nology for enabling warehouse automation. However, current smart warehousing
systems face challenges such as collisions and deadlock occurrences within AMRs,
as well as data security issues throughout the system. To address these challenges,
this paper proposes an innovative approach that utilizes IoT sensors, AMRs, and
blockchain technology to improve the performance of smart warehousing systems.
The proposed system uses a multiple-sensor fusion method that combines 3D LiDAR,
inertial measurement units (IMU) sensors, and RGB cameras to optimize the perfor-
mance of AMRs and prevent collisions with both dynamic and static obstacles. The
proposed approach offers an innovative and practical solution for implementing effi-
cient and secure smart warehousing systems with a significant potential impact on
logistics and supply chain management. The integration of IoT sensors and AMRs
can increase overall productivity while reducing safety risks in smart warehouses.
The use of blockchain technology can provide a secure and transparent way to track
and verify transactions in the warehouse, reducing the need for manual interven-
tion and increasing system efficiency. Overall, the proposed IoT sensor, AMR, and
blockchain-based approach can bring significant benefits to the optimization and
security of warehouse systems.
1 Introduction
With the boom of e-commerce and the ever-changing customer demands and
processes, the need for smart logistics is higher than ever before [34]. Smart ware-
housing is one of the major applications of smart logistics that is crucial to catering
demands like same-day delivery, high product availability, flexibility in shopping
destinations, and varying delivery methods plus return options [37]. Warehouses
are meant to arrange and store manufactured goods and the fundamental process
of warehousing includes four major stages: receipt, storage, picking, and shipment
of stored inventories. The implementation of IoT technology in warehousing has
gained significant traction in recent years due to the numerous advantages it offers.
As evidence, over 60% of warehousing companies have already adopted IoT-based
technology to improve their operations [28]. One of the big problems of traditional warehouses was excessive spending on labour, with large numbers of employees working on general tasks and repeated processes. Modern warehouses, however, have been decreasing spending on labour and investing more in IoT (Internet of Things) technologies such as automated robots, which help to automate tasks with only a few specialists rather than many general workers [43]. The technological advances of AMRs in particular have significantly helped to achieve operational flexibility and to increase performance in productivity, quality, and (sometimes) cost efficiency [11]. In addition, IoT can help reduce the incidence of work-related injuries by enabling predictive maintenance [14]. By implementing sensors such as RFID sensors that provide proximity warnings for workers, and by installing efficient object detection and collision avoidance systems in automated vehicles and robots, injury risk can be significantly reduced [26]. In order to manage the laborious and time-consuming inventory checking and updating process, a shift towards automation and the implementation of IoT in warehouses is imperative.
With the onset of Industry 4.0, automation in warehouses and increased security and traceability across the warehouse management system have become non-negotiable. The market for autonomous mobile robots (AMRs) is expected to grow at a CAGR of 35% by 2026 [28]. With the increasing demand for AMRs in warehouse automation, it is important to address the safety concerns associated with them and to introduce technological innovations that increase their efficiency and deliver high performance [27]. While existing AMR systems account for object detection and collision avoidance, they are unable to deal effectively with dynamic obstacles and unknown environments [7]. These challenges pose a safety threat to their deployment in warehouse systems. To ensure the efficient travel of AMRs in a busy warehouse environment, it is important to address the issues of collisions and occurrences of deadlocks within the system while the robots complete fast transaction
operations. The AMR used in our system addresses these concerns by relying on the SLAM (simultaneous localization and mapping) algorithm, which constructs a map of the surroundings and locates the AMR on it at the same time, allowing the robots to map out unknown environments. In general, neither the environment nor the robot's pose is known in advance; both must be estimated from localization and mapping data, which is precisely what SLAM does. The algorithm works on Spiking Neural Networks (SNNs), which offer real-time data processing and contribute to active obstacle detection and avoidance even in unknown areas and unforeseen circumstances [40]. It robustly controls an autonomous robot and helps it map and navigate unknown environments while compensating for its own intrinsic hardware imperfections, such as partial or total loss of visual input. The odometry sensor and RGB-depth camera signals support accurate object detection and therefore collision avoidance [41].
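To give a flavour of the mapping and obstacle-stopping logic involved, the following is a deliberately simplified sketch (not the SNN-based SLAM used by the AMR): range readings taken from a known pose are fused into a 2D occupancy grid, and cells directly ahead of the robot trigger a stop. All values, grid dimensions and function names are toy assumptions.

```python
# Simplified occupancy-grid mapping and forward obstacle check for an AMR; illustrative only.
import math
import numpy as np

GRID = np.zeros((100, 100), dtype=np.int8)       # 0 = free/unknown, 1 = occupied
CELL = 0.1                                        # metres per grid cell (10 m x 10 m map)

def integrate_scan(grid, pose, ranges, angles, max_range=5.0):
    """Mark the endpoint of each range reading (e.g. a LiDAR return) as an occupied cell."""
    x, y, heading = pose
    for r, ang in zip(ranges, angles):
        if r >= max_range:                        # no return within range: nothing to mark
            continue
        ox = x + r * math.cos(heading + ang)
        oy = y + r * math.sin(heading + ang)
        i, j = int(oy / CELL), int(ox / CELL)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1]:
            grid[i, j] = 1
    return grid

def obstacle_ahead(grid, pose, stop_distance=0.6):
    """Return True if any occupied cell lies within stop_distance straight ahead of the robot."""
    x, y, heading = pose
    for step in np.arange(CELL, stop_distance, CELL):
        i = int((y + step * math.sin(heading)) / CELL)
        j = int((x + step * math.cos(heading)) / CELL)
        if 0 <= i < grid.shape[0] and 0 <= j < grid.shape[1] and grid[i, j]:
            return True
    return False

pose = (5.0, 5.0, 0.0)                            # x, y in metres, heading in radians (toy pose)
ranges = [0.5, 2.0, 4.9]                          # toy range returns
angles = [0.0, 0.3, -0.3]
integrate_scan(GRID, pose, ranges, angles)
print("stop AMR:", obstacle_ahead(GRID, pose))    # True: a return 0.5 m straight ahead
```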
Besides this, other major bottlenecks in warehouse management, such as delayed payments due to a lack of coordination among strategic partners, inventory tracking, keeping historical information on products, and ensuring secure transactions and safekeeping of sensor-generated data, can be addressed by implementing blockchain technology in the warehouse management system. By utilizing a commodity traceability network based on blockchain technologies, commodity history can be stored in a global database through smart contracts, and a chain that can trace back to the source of goods can thus be created [20]. A smart contract is a computerized transaction protocol that executes the terms of a contract; it can ensure a deeper level of security and greater coordination in logistics environments by making the execution of predetermined procedures visible to the outside world [13, 32].
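The traceability idea can be illustrated with a toy hash-chained ledger carrying a simple contract-style rule; this is only a minimal sketch of the concept, not a production blockchain or the specific platform used in our system, and the event fields and rule are hypothetical.

```python
# Toy hash-chained ledger of warehouse events with a smart-contract-style precondition; illustrative only.
import hashlib, json, time

ledger = []                                               # list of blocks, oldest first

def _hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_event(event: dict) -> dict:
    # contract-style precondition: goods must be received before they can be shipped
    if event["type"] == "ship":
        received = any(b["event"]["type"] == "receive" and b["event"]["sku"] == event["sku"]
                       for b in ledger)
        if not received:
            raise ValueError("contract violated: shipping an SKU that was never received")
    block = {
        "index": len(ledger),
        "timestamp": time.time(),
        "event": event,
        "prev_hash": _hash(ledger[-1]) if ledger else "0" * 64,
    }
    ledger.append(block)
    return block

def verify_chain() -> bool:
    """Recompute the hash links; tampering with any past block breaks the chain."""
    return all(ledger[i]["prev_hash"] == _hash(ledger[i - 1]) for i in range(1, len(ledger)))

append_event({"type": "receive", "sku": "SKU-42", "qty": 10, "actor": "AMR-7"})
append_event({"type": "ship", "sku": "SKU-42", "qty": 4, "actor": "AMR-3"})
print("chain valid:", verify_chain())
```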
3 Past Studies
Past papers present real-time obstacle detection and avoidance systems that employ a multi-sensor approach. To increase the reliability and accuracy of detection, one such system combines camera and LiDAR detection. The system was tested using prototypes in various scenarios, and the authors found that combining camera and LiDAR detections results in more accurate position and label information for each detection. The pros and cons of sensors for object detection are discussed in that paper, along with the challenges of LiDAR detection. Another system makes use of GPS sensors to determine the current location and a 2D LiDAR sensor to recognize and avoid fixed obstacles as it travels towards a goal. On the main PC, the algorithm is set up to send collision-avoidance instructions, reducing safety risks.
According to Fig. 1, the perception layer is the physical layer that includes hardware such as sensors [39] to gather data from the surrounding environment. There are two types of sensors used in this layer, wired and wireless: wired sensors are physically connected to the base, while wireless sensors do not require physical wires but may be affected by distance [10]. The network layer is responsible for managing connections between servers, smart devices, and network devices [24, 39] (Fig. 1). Zigbee is the primary protocol used for connecting smart devices, due to its scalability, stability, security, and affordability [23]. Data collected from the perception layer are transmitted to the AWS IoT Core, which can manage trillions of messages and billions of devices, and are then passed to a smart gateway. The data processing layer uses cloud computing and edge IoT to provide computation, networking, security and storage [39]. Data streaming involves the continuous, high-speed flow of data from various sources like sensors, logs, and financial transactions [2]. The application layer includes real-time tracking and monitoring, predictive maintenance, automated inventory management, temperature and humidity monitoring, security and access control, and data analytics. IoT devices such as sensors, tags, beacons, cameras, and smart locks are used to provide these capabilities, allowing warehouse managers to optimize operations, reduce costs, and improve efficiency [6, 22, 44–46].
together, this multiple-sensor system can detect objects at high frequency, estimate
motion with low latency and present a dense and accurate 3D map (Fig. 2).
Blockchain technology
Blockchain can ensure IoT data integrity without a third party while saving the bandwidth and computational power of IoT devices. Moreover, blockchain can provide a secure and scalable framework for an IoT network so that sensitive information can be delivered without a centralized server [19]. Our proposed system utilizes a hybrid blockchain to protect user data through a private blockchain manager and uses the consensus of public nodes to validate transactions [19]. There are two common types of IoT architecture: three-layer [1, 17, 30, 39, 42] and five-layer [1, 5, 21, 30, 33, 39]. Our system uses Microsoft's three-tier BaaS offering, which provides strong development facilities and high scalability [31].
Fig. 2 Prototype for collision detection and avoidance system in AMR and architecture
5 Conclusion
5.1 Limitations
Our system is not perfect, as it has some limitations. Because the system requires
devices such as AMR robots along with the necessary sensors, the implementation cost
can be a significant hurdle for small and medium-sized warehouse businesses. There are
no cheaper alternatives to the proposed devices, since the chosen devices must provide
the reliability and accuracy that our system focuses on. As technology advances, the
cost of these devices will decrease [29]; even so, the current price is high, which is
one of our limitations. Another limitation is the complexity of an IoT warehouse
application. Although it looks easy on the surface, it is complex, because an IoT
warehouse application involves integrating smart technologies, including sensors, the
cloud, and other devices, that must work together for the warehouse to operate [8, 38].
Moreover, the status tracking and connectivity of the cyber-physical system are
extremely important for maintaining data consistency [9]. Since an IoT warehouse is
complex, it requires specialized skills to manage [15] and therefore incurs a high
labour cost for specialized workers. Additionally, the integration of outdated devices
can cause various losses, not only data loss but also physical damage to those devices
[12].
Some of the possible future enhancements we plan to pursue concern the security,
reliability, and performance of the warehouse system. More robotics and automation are
an option, as the robotics industry is still growing [18]; better and cheaper
alternatives may therefore become available in the future, which could improve the
reliability, performance, and cost of the warehouse system. Data in warehouse
management are important, and adding artificial intelligence can improve
decision-making processes, optimize inventory management, and enhance overall
efficiency [4, 16]. Implementing data analytics alongside artificial intelligence can
further enhance warehouse management systems, since data analytics can extract and
analyze the data [3]. Furthermore, because blockchain technology is still in its
nascent stages, its adoption is hindered by factors such as incompatibility with the
legacy systems of manufacturing firms, and the overall cost of implementation creates
a major roadblock to its incorporation in small-scale warehouse management systems
[25]. To summarize, this study explores the improvements that can be made to the
various interconnected processes of warehouse management through the implementation of
automation and blockchain technologies. A high level of attention to detail was
dedicated to the design of the architecture of our proposed smart warehouse system,
and an efficient way to address its safety and efficiency concerns was also detailed.
References
1. Al-Fuqaha, A., Guizani, M., Mohammadi, M., Aledhari, M., & Ayyash, M. (2015). Internet of
things: A survey on enabling technologies, protocols, and applications. IEEE Communications
Surveys & Tutorials, 17(4), 2347–2376. https://doi.org/10.1109/comst.2015.2444095
2. Alieksieiev, V. (2018). One approach of approximation for incoming data stream in. IEEE
xplore. https://doi.org/10.1109/DSMP.2018.8478466
3. Andiyappillai, N. (2019). Data analytics in warehouse management systems (WMS) imple-
mentations–a case study. International Journal of Computer Applications, 181(47), 14–17.
4. Aravindaraj, K. & Chinna, P. R. (2022). A systematic literature review of integration of industry
4.0 and warehouse management to achieve sustainable development goals (SDGs). Cleaner
logistics and supply chain, 100072. https://doi.org/10.1016/j.clscn.2022.100072
5. Chaqfeh, M. A., & Mohamed, N. (2012). Challenges in middleware solutions for the internet
of things. In Proceedings of the International Conference on Collaboration Technologies and
Systems (CTS’12) (pp. 21–26). IEEE
6. Costa, B. (2017). Specifying functional requirements and QoS parameters for IoT systems.
https://doi.org/10.1109/DASC-PICom-DataCom-CyberSciTec.2017.83
7. Deilamsalehy, H., & Havens, T.C. (2016). Sensor fused three-dimensional localization using
IMU, camera and LiDAR. IEEE SENSORS, 1–3. https://doi.org/10.1109/ICSENS.2016.780
8523
8. Ding, Y., Jin, M., Li, S., & Feng, D. (2021). Smart logistics based on the internet of things
technology: An overview. International Journal of Logistics Research and Applications, 24(4),
323–345. https://doi.org/10.15439/2017f267
9. Falkenberg, R., Masoudinejad, M., Buschhoff, M., Venkatapathy, A. K. R., Friesel, D.,
ten Hompel, M., Spinczyk, O., & Wietfeld, C. (2017). PhyNetLab: An IoT-based ware-
house testbed. In 2017 Federated Conference on Computer Science and Information Systems
(FedCSIS) (pp. 1051–1055). IEEE. https://doi.org/10.15439/2017f267
10. Ferrari, P. (2009). Wired and wireless sensor networks for industrial applications. https://doi.
org/10.1016/j.mejo.2008.08.012
11. Fragapane, G., de Koster, R., Sgarbossa, F., & Strandhagen, J. O. (2021). Planning and control of
autonomous mobile robots for intralogistics: Literature review and research agenda. European
Journal of Operational Research, 294(2), 405–426. https://doi.org/10.1016/j.ejor.2021.01.019
12. Ghazal, T. M., Afifi, M. A. M., & Kalra, D. (2020). Security vulnerabilities, attacks, threats and
the proposed countermeasures for the internet of things applications. Solid State Technology,
63(1s).
13. Ateniese, G., Michael Chiaramonte, T., Treat, D., Magri, B, & Venturi, D. (2018). Hybrid
Blockchain. U.S. Patent 9,959,065.
14. Javaid, M., Haleem, A., Singh, P., Rab, S., & Suman, R. (2021). Upgrading the manufacturing
sector via applications of industrial internet of things (IIoT). Sensors International, 2, 100129.
https://doi.org/10.1016/j.sintl.2021.100129
15. Kamali, A. (2019). Smart warehouse versus traditional warehouse. CiiT International Journal
of Automation and Autonomous System, 11(1), 9–16. https://doi.org/10.36039/ciitaas%2F11%
2F1%2F2019%2F180349.9-16
16. Khalifa, N., & Abd Elghany, M. (2021). Exploratory research on digitalization transformation
practices within supply chain management context in developing countries specifically Egypt
in the MENA region. Cogent Business & Management, 8(1), 1965459. https://doi.org/10.1080/
23311975.2021.1965459
17. Khan, R., Khan, S.U., Zaheer, R., & Khan, S. (2012). Future internet: The internet of things
architecture, possible applications and key challenges. In Proceedings of the 10th International
Conference on Frontiers of Information Technology (FIT’12) (pp. 257-260). IEEE.
18. Khazetdinov, A., Aleksandrov, A. N. D. R. E. Y., Zakiev, A. U. F. A. R., Magid, E., & Hsia,
K. H. (2020). RFID-based warehouse management system prototyping using a heterogeneous
team of robots. Robots in Human Life, 263. https://doi.org/10.13180/clawar.2020.24-26.08.32
19. Lao, L., Li, Z., Hou, S., Xiao, B., Guo, S., & Yang, Y. (2020). A survey of IoT applications in
blockchain systems. ACM Computing Surveys, 53(1), 1–32. https://doi.org/10.1145/3372136
20. Latif, A., Farhan, M., Rizwan, O., Hussain, M., Jabbar, S., & Khalid, S. (2020). Retail level
blockchain transformation for product supply chain using truffle development platform. Cluster
Computing, 24, 1–16. https://doi.org/10.1007/s10586-020-03165-4
21. Atzori, L., Iera, A., & Morabito, G. (2010). The internet of things: A survey. Computer Network,
54(15), 2787–2805.
22. Mahalank, S. N., Malagund, K. B., & Banakar, R. M. (2016). Non functional requirement
analysis in IOT based smart traffic management system. https://doi.org/10.1109/ICCUBEA.
2016.7860147
23. Man, L. X., & Lu, X. (2016). Design of a ZigBee wireless sensor network node for aquaculture
monitoring. https://doi.org/10.1109/CompComm.2016.7925086
24. Wu, M., Lu, T. J., Ling, F. Y, Sun, J., & Du, H. Y. (2010). Research on the architecture of
internet of things. In Proceedings of the 3rd International Conference on Advanced Computer
Theory and Engineering (ICACTE’10) (vol. 5, pp. V5–484). IEEE
25. Mitra, M. (2018). 6 challenges of blockchain. Retrieved February 1, 2023, from https://www.
mantralabsglobal.com/blog/challenges-of-blockchain/.
26. Monarca, D., Rossi, P., Alemanno, R., Cossio, F., Nepa, P., Motroni, A., Gabbrielli, R.,
Pirozzi, M., Console, C., & Cecchini, M. Autonomous vehicles management in agriculture
with bluetooth low energy (BLE) and passive radio frequency identification (RFID) for obstacle
avoidance. Sustainability, 14(15), 9393. https://doi.org/10.3390/su14159393
27. Qazi, A. A. (2020). Issues & challenges faced by warehouse Management in the FMCG
sector of Pakistan. International Transaction Journal of Engineering, Management, & Applied
Sciences & Technologies, 11(15), 11A15L, 1–11. https://doi.org/10.14456/ITJEMAST.202
0.300
28. Research and Markets (https://www.researchandmarkets.com/reports/5600844)
29. Rodrik, D. (2018). New technologies, global value chains, and developing economies (No.
w25164). National Bureau of Economic Research. https://doi.org/10.2139/ssrn.3338636
30. Said, O., & Masud, M. (2013). Towards internet of things: Survey and future vision.
International Journal of Computer Networks. 5(1), 1–17
31. Song, J., Zhang, P., Alkubati, M., Bao, Y., & Yu, G. (2021). Research advances on blockchain-
as-a-service: Architectures, applications and challenges. ScienceDirect. https://doi.org/10.
1016/j.dcan.2021.02.001
32. Taherdoost, H. (2023). Smart contracts in blockchain technology: A critical review. Informa-
tion, 14(2), 117. https://doi.org/10.3390/info14020117
33. Tan, L., & Wang, N. (2010). Future internet: The internet of things. In Proceedings of the 3rd
International Conference on Advanced Computer Theory and Engineering (ICACTE’10) (vol.
5, pp. V5−376). IEEE.
34. Tripathy, R. P., Mishra, R. M., & Dash, S. R. (2020). Next generation warehouse through
disruptive iot blockchain. In 2020 International Conference on Computer Science, Engineering
and Applications (ICCSEA) (pp. 1–6). https://doi.org/10.1109/ICCSEA49143.2020.9132906
35. Turhanlar, E. E., Ekren, B. Y., & Lerher, T. (2022). Autonomous mobile robot travel under
deadlock and collision prevention algorithms by agent-based modelling in warehouses. Inter-
national Journal of Logistics Research and Applications, 1–20. https://doi.org/10.1080/136
75567.2022.2138290
36. Vigliotti, M. (https://www.frontiersin.org/articles/https://doi.org/10.3389/fbloc.2020.553671/
full)
37. Wen, J., He, L., & Zhu, F. (2018). Swarm robotics control and communications: Imminent
challenges for next generation smart logistics. IEEE Communications Magazine, 56(7), 102–
107. https://doi.org/10.1109/mcom.2018.1700544
38. Winkelhaus, S., & Grosse, E. H. (2020). Logistics 4.0: A systematic review towards a new
logistics system. International Journal of Production Research, 58(1), 18–43. https://doi.org/
10.15439/2017f267
39. Wu, M., & Lu, T. J. (2010). Research on the architecture of internet of things. In 2010 3rd
international conference on advanced computer theory and engineering (ICACTE). https://doi.
org/10.1109/ICACTE.2010.5579493
40. Xu, J., Wang, L., Kou, Q., Fang, T., Dan, Y., Zhou, L., & Zhang, Y. (2022). Real-time behaviour
decision of mobile robot based on the deliberate/reactive architecture. International Journal of
Innovative Computing, Information and Control, 18(4), 1163–1180. https://doi.org/10.24507/
ijicic.18.04.1163
41. Yamazaki, K., Vo-Ho, V., Bulsara, D., & Le, N. (2022). Spiking neural networks and their
applications: A review. Brain Sciences, 12(7), 863. https://doi.org/10.3390/brainsci12070863
42. Yang, Z., Yue, Y., Yang, Y., Peng, Y., Wang, X., & Liu, W. (2011). Study and application on
the architecture and key technologies for IOT. In Proceedings of the International Conference
on Multimedia Technology (ICMT’11) (pp.747–751). https://doi.org/10.1109/ICMT.2011.600
2149
43. Zoho Inventory. (https://www.zoho.com/inventory/?utm_source=Articles&utm_medium=Bus
iness%20Guides&utm_campaign=Essential%20Business%20Guides)
44. Kumar, V., Malik, N., Singla, J., Jhanjhi, N. Z., Amsaad, F., & Razaque, A. (2022). Light
weight authentication scheme for smart home iot devices. Cryptography, 6(3), 37.
45. Bhoi, S. K., Panda, S. K., Jena, K. K., Sahoo, K. S., Jhanjhi, N. Z., Masud, M., & Aljahdali,
S. (2022). IoT-EMS: An internet of things based environment, monitoring system in volunteer
computing environment. Intelligent Automation & Soft Computing, 32(3).
46. Ullah, A., Azeem, M., Ashraf, H., Jhanjhi, N. Z., Nkenyereye, L., & Humayun, M. (2021).
Secure critical data reclamation scheme for isolated clusters in IoT-enabled WSN. IEEE Internet
of Things Journal, 9(4), 2669–2677.
47. Saleh, M., Jhanjhi, N., Abdullah, A., & Saher, R. (2022). Iotes (a machine learning model)
design dependent encryption selection for iot devices. In 2022 24th International Conference
on Advanced Communication Technology (ICACT) (pp. 239–246). IEEE.
48. Humayun, M., Ashfaq, F., Jhanjhi, N. Z., & Alsadun, M. K. (2022). Traffic management:
Multi-scale vehicle detection in varying weather conditions using yolov4 and spatial pyramid
pooling network. Electronics, 11(17), 2748.
49. Saleh, M., Jhanjhi, N. Z., Abdullah, A., & Saher, R. (2022). Proposing encryption selection
model for IoT devices based on IoT device design. In 2022 24th International Conference on
Advanced Communication Technology (ICACT) (pp. 210–219). IEEE.
Measuring the Feasibility of Using Fuel
Cells in Marine Applications
Abstract The marine sector is receiving growing attention in international environmental
pollution debates. As shipping's contribution to air pollution rises, legislative
pressure to reduce shipping emissions is steadily increasing, driven by consumer
awareness. The International Maritime Organization is enforcing worldwide rules
guiding the reduction of SOx and NOx emissions from shipping and also intends to
implement more regional restrictions to minimize emissions. Therefore, novel ideas
for energy conversion that are both environmentally friendly and energy efficient
are being discussed. Utilizing fuel cell technologies for auxiliary power, or perhaps
primary propulsion, is one potential option. Fuel cells as clean power sources are very
appealing to the maritime industry, which is committed to sustainability and to reducing
greenhouse gas and pollutant emissions from ships. Currently, the power capacity, cost,
and lifetime of the fuel cell stack are the primary barriers.
1 Introduction
A. Kiritsi
MSc in Economics and Energy Law, AUEB, Athens, Greece
A. Fountis (B)
Faculty, Berlin School of Business and Innovation, Berlin, Germany
e-mail: [email protected]
A. A. Alwan
College of Engineering, National University of Science and Technology, Dhi Qar, Iraq
e-mail: [email protected]
1. How are renewable energy systems used to supply the electric power needed at sea?
2. How can fuel cells be technologically competitive for various systems in
comparison to different batteries?
2 Literature Review
Fuel cells are similar to batteries in that they can deliver stored energy, but they do
not deplete their supply or require recharging: as long as there is a supply of fuel,
they can generate both electricity and heat. A fuel cell is made up of two electrodes:
a negative electrode, known as the anode, and a positive electrode, known as the
cathode. These electrodes are separated by an electrolyte. A fuel such as hydrogen is
introduced at the anode, while oxygen is introduced at the cathode. Fuel cells with
different power ratings can be utilized in a broad range of applications in the marine
sector. To date, though, the majority of fuel cell development initiatives undertaken
by companies and supported by governmental and commercial organizations have been aimed
at improving the state of the art of fuel cell technology for land-based gas and
electricity grid purposes. These areas could be used to categorize certain potential
fuel cell applications in the marine environment [2].
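To give a feel for the fuel side of such an installation, the short sketch below estimates the hydrogen mass flow needed to deliver a given electrical power at an assumed stack efficiency, using the lower heating value of hydrogen (about 120 MJ/kg). The 500 kW load and 50% efficiency are illustrative assumptions, not figures from the cited studies.

```python
# Hedged back-of-the-envelope sketch: hydrogen mass flow for a marine fuel cell.
# The 500 kW load and the 50% electrical efficiency are illustrative assumptions.
LHV_H2_MJ_PER_KG = 120.0  # approximate lower heating value of hydrogen

def h2_mass_flow_kg_per_h(electric_power_kw: float, efficiency: float) -> float:
    """Hydrogen consumption (kg/h) needed to supply the given electrical power."""
    fuel_energy_rate_mj_per_s = electric_power_kw / 1000.0 / efficiency
    return fuel_energy_rate_mj_per_s * 3600.0 / LHV_H2_MJ_PER_KG

print(round(h2_mass_flow_kg_per_h(500.0, 0.50), 1), "kg of H2 per hour")  # 30.0
```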
For transport ships such as tankers, bulk carriers, and cruise ships, the challenge for
alternative technologies is going to be significantly tougher. This is because the
diesel engines now powering these ships, which travel at steady speeds, are highly
efficient, low-rpm machines. Sector resistance to abandoning a dependable and effective
propulsion technology would restrict the utilization of fuel cells [3]. Generally,
aside from military uses, commercial fuel cell applications must demonstrate cost
effectiveness. Fuel cells are not likely to be utilized until they offer distinct and
substantial economic benefits.
The primary consideration in the construction of naval combat ships is not always cost.
If fuel cells turn out to be the most efficient device for a specific application, they
will be employed. Despite significant statistical study, the Navy has not yet
identified specific operations for which fuel cells are particularly well suited. The
major obstacles to developing fuel cells for naval ships are the restricted supply of
fuel and the reduced power output. Switching a fleet to a new fuel is not an easy
operation [2].
Although fuel cell systems are more power efficient than dual-fuel or conventional
marine diesel systems, the benefits are not particularly significant when costs and the
level of technological sophistication are considered. The concept of fuel cell
deployment in the marine sector is predicated on the use of zero-carbon or
carbon-neutral fuels, taking low-carbon or carbon-free transport potential into
consideration. In other words, the research makes the fundamental assumption that
carbon capture and storage (CCS) is not possible on board ships [2]. Hence, synthesized
natural gas (SNG, primarily methane), hydrogen, ammonia, and methanol produced from
sustainable sources are recognised as fuels with long-term prospects. Traditional
marine hydrocarbon fuels are omitted owing to their weak long-term prospects.
Short-term use of hydrocarbon raw materials as feedstock for hydrogen, ammonia, SNG,
and methanol is deemed an appropriate transition measure [4].
A few safety guidelines must be observed when using fuel cell systems on ships to
ensure that the technology offers the same degree of protection as traditional systems.
The following criteria illustrate some key protection concepts and how they can be
applied in real-world situations [5].
Generally, the single-failure criterion is applied: the fuel cell installation must be
designed such that no single failure can result in an unsafe situation. Additionally,
all safety-related equipment must be certified for its intended use [5].
Fig. 1 Sketches of double-walled pipes (Source: [5])
A sensing element placed between the pipes can detect the failure of a double-walled
pipe barrier. For this purpose, the pressure level between the pipes must be higher
than the ambient pressure and lower than the inner pipe's pressure; in that situation,
both an internal and an external barrier failure can be identified. A sensor module at
the head of the ventilation duct will normally identify a gas pipe failure inside the
ventilation duct.
3 Methodology Framework
A Marine Fuel Cell Power System (MFCPS) is a hybrid electric propulsion system that
utilizes a membrane process unit (MPU) to generate power and hydrogen fuel (H2) while
moving at a slow pace or operating on land. It is comparable to a gas/diesel hybrid,
except that there is no separate combustion chamber; instead, it blends the combustion
and electricity conversion operations into a single unit. Stacked fuel cells serve as
generators in conjunction with batteries and/or solar cells for storage and/or
distribution [3]. The necessity for rapid and dependable start-up times drove the
creation of the compact 'battery-agnostic' design required for maritime applications.
The ability to keep up with significant developments in battery technology that allow
improved use of solar energy from thin-film technology will be a critical issue for
maritime fuel cell producers. Depending on its temperature, hydrogen can be stored in
two ways. The first method is to compress the gas and store it at 350 or 700 bar. The
second method is to liquefy the gas and store it below −253 °C. Hydrogen storage tanks
can accordingly be divided into two categories: liquefied storage below −253 °C, whose
storage media hold about 7% H2, and compressed storage at 350 or 700 bar with about
1% H2. For long-term storage in naval fuel cell power systems, a hydrogen storage tank
with low density and good temperature tolerance is extremely desirable. A mathematical
model is used to optimize the hourly fuel (gasoline) cost of an hourly dispatch aboard
a Marine Fuel Cell Power System (MFCPS) powered by hydrogen and gasoline. The model is
based on deterministic dynamic programming (DDP), which is used to determine the best
energy management strategy (EMS) for the typical power profile of each practical power
source size mix Z. The proposed model is then used to calibrate the cost optimization
findings under the constraint of the hydrogen fuel (H2) tanks. The results show that
there is no statistically significant difference between a non-hybrid energy system
and a zero-emission hybrid energy system. The suggested method's performance is
demonstrated by examining hourly power dispatch statistics for the investigated ship
over a one-year period [6].
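To illustrate the kind of hourly dispatch optimization described above, the following minimal sketch applies plain dynamic programming over a discretized hydrogen inventory to split each hour's load between the fuel cell and a more expensive backup source. The load profile, prices, and yields are illustrative assumptions and are not taken from the cited model.

```python
# Hedged sketch: hourly dispatch by dynamic programming over a discretized
# hydrogen inventory. Any load not covered by the fuel cell is met by a more
# expensive backup source. All figures are illustrative assumptions.
from functools import lru_cache

LOAD_KWH = [120, 150, 180, 160]   # hourly electrical demand
KWH_PER_KG_H2 = 16.0              # assumed net electrical yield per kg of H2
H2_PRICE = 6.0                    # $ per kg of hydrogen
BACKUP_PRICE = 0.60               # $ per kWh from the backup source
TANK_KG = 20                      # hydrogen available at the start of the horizon

@lru_cache(maxsize=None)
def optimal_cost(hour: int, h2_left: int) -> float:
    """Minimum cost of serving hours hour..end with h2_left kg of hydrogen."""
    if hour == len(LOAD_KWH):
        return 0.0
    best = float("inf")
    max_use = min(h2_left, int(LOAD_KWH[hour] // KWH_PER_KG_H2))  # never exceed the load
    for use in range(max_use + 1):                                # kg of H2 this hour
        backup_kwh = LOAD_KWH[hour] - use * KWH_PER_KG_H2
        stage_cost = use * H2_PRICE + backup_kwh * BACKUP_PRICE
        best = min(best, stage_cost + optimal_cost(hour + 1, h2_left - use))
    return best

print(round(optimal_cost(0, TANK_KG), 2))  # minimum cost over the four-hour horizon
```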
Taking into account the cost, energy, and carbon footprint of both battery and fuel
cell modules, it is calculated that a hybrid propulsion system utilizing both
technologies can result in a greater total cost than one utilizing a single technology.
However, while fuel cells are less expensive at every level of performance, they have
much poorer energy and power density than traditional diesel engines. Thus, at this
time, it may be more useful to combine batteries and fuel cells for best performance
than to focus on a single technology for the whole system design. To discover the
optimal power source for a ship, a multi-objective optimization model may be designed [7].
Hot exhaust gas from the SOFC stack can be utilized to pre-heat the fuel, air, and
reforming unit, and Rankine cycles can be used to raise steam where the operating
temperature of the SOFC stack is lower. A steam turbine (ST) can then generate extra
electrical energy, increasing the total efficiency of the system to more than 80%.
Water is the most prevalent working fluid in the Rankine cycle; however, because of
their low critical temperature, organic fluids are commonly used to replace water when
the temperature of the heat source is lower. An indirect hybrid SOFC-ST system
comprises a SOFC stack, a fuel supply unit, an air supply unit, a reforming unit, and a
catalytic afterburner (Fig. 3). Thus, ORC technology in conjunction with the SOFC might
be studied further in order to boost electrical production by matching the Rankine
cycle with the exergy of the system [6]. When compared to stochastic dynamic
programming, the calculation time is greatly reduced, and, according to the findings,
the business case for a regenerative ammonia-based electrolyser is viable. The interior
of the vessel can be regarded as a rather confined and complicated environment that is
not favourable to hydrogen dispersion. With improved ventilation, the concentration of
hydrogen entering the control cabin drops to under 4% and falls to practically zero in
the passenger area. Under Condition 4, hydrogen can be vented directly from the fuel
cell cabins to an external area, with next to no hydrogen entering the control or
passenger cabins.
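The claim that a bottoming steam or organic Rankine cycle can raise total system efficiency can be illustrated with the usual combined-cycle relation: the bottoming cycle converts a fraction of the heat the SOFC rejects into additional electricity. The sketch below evaluates this relation for assumed component efficiencies; the numbers are illustrative, and figures above 80% in the literature typically also credit recovered useful heat or assume more favourable parameters.

```python
# Hedged sketch of the combined-cycle relation for a SOFC with a bottoming
# Rankine (steam or organic) cycle. All parameter values are illustrative.

def combined_electrical_efficiency(eta_sofc: float,
                                   heat_recovery_fraction: float,
                                   eta_bottoming: float) -> float:
    """Topping-plus-bottoming electrical efficiency.

    The bottoming cycle converts part of the heat rejected by the SOFC,
    i.e. (1 - eta_sofc), into additional electricity.
    """
    return eta_sofc + (1.0 - eta_sofc) * heat_recovery_fraction * eta_bottoming

# Example: a 60%-efficient stack feeding 85% of its waste heat to a 30%-efficient ORC.
print(round(combined_electrical_efficiency(0.60, 0.85, 0.30), 3))  # 0.702
```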
A considerable amount of hydrogen will accumulate in the cabin following a leak, which
could cause an unexpected explosion. Consequently, perhaps the most critical factor
influencing the safety of a hydrogen fuel cell boat is the ventilation condition of the
cabins (Fig. 4).
Fig. 4 Hydrogen concentration at the detection points (Source: [10])
According to the original ventilation design, the results show that the natural
ventilation condition for the LNG-S2H2 fuel cells provides a natural ventilation
capability of 37.5% and a hydrogen leakage removal rate of 100%, with two ventilators
in the stern cabin and four in the control cabin. It has a one-way hydrogen leakage
removal rate of 88.6%, while Condition 3, with two vents in each cabin, achieves 90%,
which is higher than Condition 1, which has only four vents in the control cabin plus
two natural vents with a diameter of 1.2 m in the passenger cabin, one on the left and
one on the right; Condition 3 retains the original natural vents [10].
Increasing the ventilation volume and/or dilution rate provides forced ventilation of
hydrogen in a fuel cell cabin: mechanical vents are used in place of natural vents
(pipes). When this is done, the concentration of hydrogen entering the control cabin
drops to under 4% and falls to nearly zero in the passenger area. Under Condition 4,
hydrogen can be vented directly from the fuel cell cabins to an external area, with
practically no hydrogen entering the control or passenger cabins. Because fuel cell
fire threats caused by hydrogen explosions are limited under these conditions, the
vessel's safety is considerably enhanced. Replacing the natural venting inside the fuel
cell rooms with mechanical vents can greatly reduce the concentration of hydrogen
entering the control cabin and passenger room, thus improving the safety of this fuel
cell powered ship. As a result, with no hydrogen diffusion in between, a steady state
may be established (Fig. 5).
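For intuition about why cabin ventilation dominates the safety picture, a simple well-mixed balance relates an assumed leak rate to the ventilation flow needed to keep the hydrogen volume fraction below its lower flammability limit of roughly 4%. This is only a rough sketch under a well-mixed assumption; the cited study [10] relies on detailed numerical simulation rather than this formula.

```python
# Hedged sketch: well-mixed steady-state estimate of the ventilation air flow
# needed to keep the hydrogen volume fraction in a cabin below a target value.
# The leak rate is an illustrative assumption; real cabins are not well mixed.

def min_ventilation_m3_per_h(leak_m3_per_h: float, max_fraction: float) -> float:
    """Smallest air flow so that leak / (leak + vent) stays below max_fraction."""
    return leak_m3_per_h * (1.0 - max_fraction) / max_fraction

leak = 2.0  # m^3/h of hydrogen, assumed leak rate
print(round(min_ventilation_m3_per_h(leak, 0.04), 1), "m^3/h of ventilation air")  # 48.0
```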
The suggested method's performance is demonstrated by examining hourly power dispatch
statistics for the investigated ship over a one-year period. The findings reveal that
there is no statistically significant difference between a non-hybrid energy system and
a zero-emission hybrid energy system. The optimization of power loads in the hybrid
energy system, which is fuelled by hydrogen and gasoline, and the calibration of the
cost optimization results under the constraint of the H2 tanks are both handled by the
model. In that paper, a mathematical model is constructed for characterizing and
optimizing the hourly fuel cost of the on-board dispatch [11].
5 Conclusions
Fuel cells as clean power sources are attractive to the maritime industry; however, the
primary barriers are power capacity, costs, and the lifespan of the fuel cell stack.
Fuel cells can supply supplementary energy and meet other demands in addition to
primary propulsion. This enables refuelling to take place more quickly and allows for
longer distances between fill-ups. They function similarly to batteries in that they
can deliver energy, but, unlike batteries, their source of energy is not depleted as
long as fuel is supplied and they do not need to be recharged. Synthesized natural gas
(SNG, primarily methane), hydrogen, ammonia, and methanol are some examples of fuels
that can be used in fuel cell deployments in the maritime industry; these deployments
are founded on the use of zero-carbon or carbon-neutral fuels. The single failure
criterion, the two-barrier principle for gas supply, and the certification of
safety-related apparatus are all included in the safety guidelines.
Stacked fuel cells require rapid and dependable start-up times, and hydrogen can be
stored in two ways. The fuel, oxygen, and reforming unit can all be pre-heated by
utilizing the hot exhaust gas produced by the SOFC stack. When the operating
temperature of the SOFC stack is lower, Rankine cycles may be used to generate steam.
The current research shows that only marginal benefits exist for the potential use at
present, although, considering the current global energy imbalances and challenges, new
opportunities may arise in the foreseeable future.
However, the way ahead is still long. In an industry that is presently responsible for
3% of the world's greenhouse gas emissions, zero-emission mandates and regulations are
quickly becoming a reality for ship owners and operators everywhere in the world.
"Going green", even if not at full scale, is not a simple undertaking, because there
are more than 90,000 ships in the world's commercial fleet.
References
1. Sapra, H., Stam, J., Reurings, J., van Biert, L., van Sluijs, W., de Vos, P., Visser, K., Vellayani,
A. P., & Hopman, H. (2021). Integration of solid oxide fuel cell and internal combustion engine
for maritime applications. Applied Energy, 281, 115854.
2. Chiche, A. (2022). On hybrid fuel cell and battery systems for maritime applications (Doctoral
dissertation, KTH Royal Institute of Technology).
3. Mashkour, M., Rahimnejad, M., Raouf, F., & Navidjouy, N. (2021). A review on the application
of nanomaterials in improving microbial fuel cells. Biofuel Research Journal, 8(2), 1400–1416.
4. Gadducci, E., Lamberti, T., Rivarolo, M., & Magistri, L. (2022). Experimental campaign and
assessment of a complete 240-Kw proton exchange membrane fuel cell power system for
maritime applications. https://doi.org/10.2139/ssrn.4023041
5. Vogler, F., & Würsig, G. (2011). Fuel cells in maritime applications: Challenges, chances and
experiences. https://h2tools.org/sites/default/files/2019-08/paper_96.pdf
6. Xing, H., Stuart, C., Spence, S., & Chen, H. (2021). Fuel cell power systems for maritime
applications: Progress and perspectives. Sustainability, 13(3), 1213.
7. Wu, P., & Bucknall, R. (2020). Hybrid fuel cell and battery propulsion system modelling
and multi-objective optimisation for a coastal ferry. International journal of hydrogen energy,
45(4): 3193–208
8. Hansson, J., Brynolf, S., Fridell, E., & Lehtveer, M. (2020). The potential role of ammonia
as marine fuel—based on energy systems modeling and multi-criteria decision analysis.
Sustainability, 12(8), 3265.
9. Xing, H., Stuart, C., Spence, S., & Chen, H. (2021). Fuel cell power systems for maritime
applications: Progress and perspectives. Sustainability., 13(3), 1213.
10. Li, F., Yuan, Y., Yan, X., Malekian, R., & Li, Z. (2018). A study on a numerical simulation of
the leakage and diffusion of hydrogen in a fuel cell ship. Renewable and Sustainable Energy
Reviews, 97, 177–185.
11. Rafiei, M., Boudjadar, J., & Khooban, M. H. (2021). Energy management of a zero-
emission ferry boat with a fuel-cell-based hybrid energy system: Feasibility assessment. IEEE
Transactions on Industrial Electronics, 68(2), 1739–1748.
12. Sürer, M. G., & Arat, H. T. (2022). Advancements and current technologies on hydrogen
fuel cell applications for marine vehicles. International Journal of Hydrogen Energy, 47(45),
19865–19875.
Blockchain-Based Healthcare Research
with Security Features that Can Be
Applied to Protect Patient Medical
Records
M. Huda (B)
College of Computing and Informatics, Saudi Electronic University, Riyadh, Saudi Arabia
e-mail: [email protected]
Abdullah
Chandigarh University, Mohali, Punjab, India
S. Adhikari
Swami Vivekananda University, Kolkata, India
e-mail: [email protected]
A. A. Ftaiet
College of Engineering, National University of Science and Technology, Dhi Qar, Iraq
e-mail: [email protected]
N. Dey
Global Institute of Management and Technology, Krishnanagar, India
1 Introduction
The term blockchain refers to an online, open distributed ledger. Blockchain
technology allows participants to execute transactions through peer-to-peer
verification of transactions on blockchain-based applications [1]. A blockchain ledger
therefore contains details of all transactions performed in a blockchain-based appli-
cation. Additionally, each participant in a blockchain-based application is notified
when a transaction occurs and can validate the transaction through peer-to-peer
verification. Blockchain-based application security systems are designed to reli-
ably protect participants and blockchain system assets from malicious damage or
destruction. Blockchain-based systems are therefore protected by cryptography. The
increasing application and integration of cryptography proves that blockchain-based
applications are becoming more secure. In addition, the expansion and application
of blockchain technology in financial markets has raised security concerns. The need
for additional security through cryptography is due to cases of theft of financial assets
and the destruction of blockchain systems [2]. Blockchain systems promise greater
information integrity and data transparency, and an enhanced security model to meet
cybersecurity challenges. Blockchain technology is therefore widely applied in the
processes of business organizations using blockchain-based applications. Although
blockchain technology is widely used, various application areas pose additional chal-
lenges to security systems. Furthermore, there is no standardized form of security
that applies to all forms of blockchain-based applications. However, the security
of blockchain systems remains an important issue to consider. This work aims to
provide a systematic literature review of blockchain technology [3]. Additionally,
the study aims to analyze the major security threats to blockchain-based technology
and to examine how security issues can be mitigated to promote the sustainability of
blockchain technology [4].
Blockchain technology forms a series of blocks that are replicated across a
peer-to-peer network. In blockchain-based application systems, each block is linked to
the previous block through a cryptographic hash. Additionally, every block in the
blockchain contains a list of transactions and a block header that summarizes them in
the form of a Merkle tree. Each blockchain can be categorized into one of two forms:
permissioned or permissionless. A permissionless blockchain allows anyone to join or
leave the blockchain at will, and its transactions are publicly visible. Permissioned
blockchains, however, restrict entry to and exit from the system, limiting transaction
visibility [5]. Furthermore, smart contracts are blockchain-based computer applications
that can perform certain functions when certain commands are executed in the system
[6]. Blockchain-based applications can therefore invoke specific smart contracts to
perform specific functions. Blockchain technology thus eliminates escrow and
intermediary systems and validates transactions according to consensus and peer-to-peer
verification. There are various mechanisms used to validate transactions in different
blockchain technologies [7, 8].
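As a minimal illustration of the hash linking described above, the sketch below builds a toy chain in which each block stores the hash of the previous block, so tampering with any earlier block invalidates every later link. This is a simplified teaching sketch, not the structure of any particular blockchain platform; it omits Merkle trees, consensus, and signatures.

```python
# Toy sketch of hash-linked blocks: each block commits to the previous block's
# hash, so altering an earlier block breaks every later link. Simplified for
# illustration only (no Merkle tree, consensus, or signatures).
import hashlib
import json

def block_hash(block: dict) -> str:
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash: str, transactions: list) -> dict:
    return {"prev_hash": prev_hash, "transactions": transactions}

genesis = make_block("0" * 64, ["genesis"])
block1 = make_block(block_hash(genesis), ["A pays B 5"])
block2 = make_block(block_hash(block1), ["B pays C 2"])
chain = [genesis, block1, block2]

def chain_is_valid(chain: list) -> bool:
    """Check that every block's prev_hash matches the hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

print(chain_is_valid(chain))          # True
genesis["transactions"] = ["forged"]  # tamper with an early block
print(chain_is_valid(chain))          # False: block1's stored link no longer matches
```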
A general analysis of blockchain security shows that complexity is key to improving
blockchain security. Recognizing blockchain security risk factors enables advanced
security features [9]. Moreover, technological advancements in blockchain technology
have been shown to improve security features [10]. For example, advances in
infrastructure, better coding, and better peer-to-peer verification of transactions
could improve blockchain security [11] (Fig. 1).
Many studies have been conducted on the security risks of blockchain-based
applications. The information contained in these surveys can be summarized as follows.
For example, the study by Yang et al. [17] highlights various attacks against
blockchain systems and also shows how these attacks can be mitigated. Additionally,
some blockchains, such as Ethereum smart contracts, have undergone extensive research
to identify programming pitfalls that make them vulnerable to attacks. All of this
research aims to promote the reliability, integrity, and security of blockchain
systems [17]. Additionally, ongoing research-driven initiatives in blockchain
technology will promote the sustainability and growth of the system (Fig. 3).
Measures to prevent the 51% attack include developing higher hash rates, monitoring
mining pools, and refraining from using proof of work as the means of reaching network
consensus. Another major security threat is blockchain endpoint vulnerability, a
weakness affecting personal and consumer virtual wallets. So, while mainstream
blockchains themselves are protected from attacks, end users are at greater risk.
Individual digital wallets often lack the security features necessary to protect
against sophisticated cyber and blockchain attacks. Additionally, third parties
involved in managing and enabling the blockchain are at risk of attack, as they are
directly involved in the real value of the system. Third-party providers include
blockchain service providers, smart contracts, payment platforms, and payment
processors. Weak security of blockchain apps, wallets, and endpoints therefore
increases the risk of attacks. Another risk to blockchain technology is routing
attacks. This form of attack is carried out by hackers who redirect and intercept data
in transit to external network services. The anonymity created by the hackers entices
the sender to transmit sensitive information to them, after which the attackers gain
access to the victims' accounts and extract critical information without the victims
being aware of it; the user assumes everything is fine. Routing attacks can therefore
cause significant damage and expose important information and finances without
informing the participants [19]. Additionally, phishing attacks are commonly used to
compromise blockchain systems. Attackers impersonate legitimate parties through
genuine-looking phishing emails in order to gain the confidence of the participants,
and then demand users' personal information and credentials in order to compromise
them. Phishing is therefore widely used by hackers and is a major concern for
blockchain participants and administrators. Sharing such information allows hackers to
monitor the transactions and spending behaviour of targeted victims [20]. Furthermore,
leaked transaction information could expose the blockchain system to a flood of chaff
coins, making it impossible for participants to access the actual coins they used,
thus exposing their privacy and allowing attackers to access participants' wallets and
compromise the blockchain system. Additionally, attackers can work with rogue
employees of the system to compromise the blockchain. Blockchain participants should
therefore be vigilant. The proliferation of security threats in many forms has
resulted in enormous financial and information losses. Ensuring the security of the
system is therefore the responsibility of all stakeholders involved in the blockchain
system [21].
Some risks appear within the blockchain, while others appear outside it. Some security
risks therefore occur far more frequently than others [17]. As seen from the figure on
security risks within the blockchain, the risks have been extensively analyzed. They
are mainly assessed for applications related to finance, healthcare, smart vehicles,
and electronic voting.
8 Conclusion
Blockchain technology has made significant progress in the field of financial
transactions and information exchange. Additionally, blockchain has enabled safer and
more efficient access to financial and other business data, which is essential for
replacing existing physical systems. Furthermore, blockchain technology relies on
consensus, encryption, and decentralization to promote security and to ensure
efficiency and trust in financial and information transactions. However, blockchain
systems are not immune to the numerous security threats that continue to plague new
adopters and existing users. Security threats persist because new threats keep
emerging and blockchain systems keep growing in complexity. Therefore, as digital
transformation moves forward, blockchain technology is one of the key developments
that can be used to drive these changes. In addition, blockchain technology is
attracting attention in many fields of society, and the need for it is increasing
because of its expected future scalability and compatibility with various
technologies. Moreover, given the growing consumer appetite for information,
blockchain systems ensure greater transparency and security in the transmission and
exchange of information. The future direction of this research is to develop a
comprehensive knowledge base related to blockchain systems. The future of blockchain
therefore lies in reducing risk and improving its sustainability in order to improve
organizational integrity, security, and efficiency.
References
1. Surendra, V., et al. (2020). Resources, conservation & recycling blockchain technology adop-
tion barriers in the Indian agricultural supply chain: An integrated approach. Resources,
Conservation & Recycling, 161(April), 104877. https://doi.org/10.1016/j.resconrec.2020.
104877
2. Ulusoy, T., & Çelik, M. Y. (2019). Is it possible to understand the dynamics of cryptocurrency
markets using econophysics? crypto-econophysics. In Blockchain Economics and Financial
Market Innovation (pp. 233–247). Springer.
3. Atzei, N., Bartoletti, M., & Cimoli, T. (2017). A survey of attacks on Ethereum smart con-tracts
(SoK). In Proceedings of 6th International Conference on Principles of Security and Trust (vol.
10204, pp. 164–186).
4. Bartolucci, S., Bernat, P., & Joseph, D. (2018). SHARVOT: Secret SHARe-based VOTingon
the blockchain. In Proceedings of ACM/IEEE 1st International Workshop on Emerging Trends
in Software Engineering for Blockchain (pp. 30–34); Chen, W. K. (1993). Linear Networks
and Systems. pp. 123–135.
5. Buchmann, N., Rathgeb, C., Baier, H., Busch, C., & Margraf, M. Enhancing breeder document
long-term security using blockchain technology. In Proceedings of International Computer
Software and Applications Conference, 2, 744–748.
6. Alcarria, R., Bordel, B., Robles, T., Mart´ın, D., & Manso-Callejo, M. ´A. (2017). A blockchain-
based authorization system for trustworthy resource monitoring. 1186, 56−91.
7. Iqbal, M., & Matulevičius, R. (2019). Blockchain-based application security risks: A systematic
literature review. In International Conference on Advanced Information Systems Engineering
(pp. 176–188). Springer.
8. Wong, L., et al. (2020). Time to seize the digital evolution: Adoption of blockchain in operations
and supply chain management among Malaysian SMEs. International Journal of Information
Management, 52, 101997. 10.1016/j.ijinfomgt.2019.08.005
9. Zamani, E., He, Y., & Phillips, M. (2020). On the security risks of the blockchain. Journal of
Computer Information Systems, 60(6), 495–506.
10. Ozyilmaz, K. R., & Yurdakul, A. (2019). Designing a blockchain-based IoT with Ethereum,
swarm, and LoRa: The software solution to create high availability with minimal security risks.
IEEE Consumer Electronics Magazine, 8(2), 28–34.
11. Sethi, M., Singh, G., Smith, K., Sorniotti, A., Stathakopoulou, C., Vukoli´c, M., Cocco,
S.W., & Yellick, J. (2018). Hyper ledger fabric: A distributed operating system for permissioned
blockchain. In Proceedings of EuroSys ’18 ThirteenthEuroSys Conference Article No.30
12. Zhu, P., et al. (2021). Enhancing traceability of infectious diseases: A blockchain-based
approach. Information Processing and Management, 58(4), 102570. https://doi.org/10.1016/j.
ipm.2021.102570
13. Rehman, M., Javaid, N., Awais, M., Imran, M., & Naseer, N. (2019). Cloud based secure
service providing for IoT using blockchain. In 2019 IEEE Global Communications Conference
(GLOBECOM) (pp. 1–7). IEEE.
14. Wang, Z., Dong, X., Li, Y., Fang, L., & Chen, P. (2018). IoT security model and performance
evaluation: a blockchain approach. In 2018 International Conference on Network Infrastructure
and Digital Content (IC-NIDC) (pp. 260–264). IEEE.
15. Bhutta, M. N. M., Khwaja, A. A., Nadeem, A., Ahmad, H. F., Khan, M. K., Hanif, M. A., &
Cao, Y. (2021). A survey on blockchain technology: Evolution, architecture and security. IEEE
Access, 9, 61048–61073.
16. Tahar, M., Hammi, B., & Bellot, P. (2020). Bubbles of trust: A decentralized blockchain-based
authentication system for IoT. Computers & Security, 78(2018), 126–142. https://doi.org/10.
1016/j.cose.2018.06.004
17. Yang, X. M., Li, X., Wu, H. Q., & Zhao, K. Y. (2017). The application model and challenges
of blockchain technology in education. Modern distance education research, 2, 34–45.
18. Fabiano, N. (2017). Internet of things and blockchain: legal issues and privacy. The challenge
for a privacy standard. In 2017 IEEE International Conference on Internet of Things (iThings)
and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and
Social Computing (CPSCom) and IEEE Smart Data (SmartData) (pp. 727–734). IEEE.
19. Park, S., Specter, M., Narula, N., & Rivest, R. L. (2021). Going from bad to worse: From
internet voting to blockchain voting. Journal of Cyber security, 7(1), tyaa025.
20. Rathee, G., Sharma, A., Saini, H., Kumar, R., & Iqbal, R. (2020). A hybrid framework for
multimedia data processing in IoT-healthcare using blockchain technology. Multimedia Tools
and Applications, 79(15), 9711–9733.
21. Zaghloul, E., Li, T., Mutka, M. W., & Ren, J. (2020). Bitcoin and blockchain: Security and
privacy. IEEE Internet of Things Journal, 7(10), 10288–10313.
Wave Scattering by Thin Multiple
Bottom Standing Vertical Porous Walls
in Water of Uniform Finite Depth
Abstract The problem of wave scattering by thin bottom standing porous walls in water
of uniform finite depth is solved under the assumptions of linearized water wave
theory. Using Havelock's inversion formulae, this boundary value problem is reduced to
a set of coupled Fredholm-type integral equations involving the difference of
potentials across the walls. The methodology utilized in this study is a multi-term
Galerkin approximation with a set of basis functions involving Chebyshev polynomials.
A system of linear equations is solved for numerical estimates of the reflection and
transmission coefficients. Wave energy dissipation and the dynamic wave force are
calculated both analytically and numerically. The numerical results for the reflection
coefficients, energy dissipation, and wave force are depicted graphically against wave
numbers for different values of the parameters. Very good agreement between some
earlier known results in the literature and our present results is established.
1 Introduction
Porous breakwaters are a fruitful alternative to conventional fixed rigid barriers for
protecting coastal regions and harbours. Such structures can also dissipate wave energy
and reduce the wave forces acting on them. Moreover, these kinds of structures are more
eco-friendly as well as lower in cost.
Rigid breakwaters in the form of thin impermeable barriers are quite conventional and
have been investigated widely in the water wave literature during the past few decades.
Explicit solutions of water wave scattering problems associated with rigid thin
barriers were found by Dean [1] and Ursell [2], but only for a single barrier or a pair
of thin vertical barriers immersed in deep water under normally incident surface waves.
In the context of permeable breakwaters, Sollitt and Cross [3] developed, for the first
time, a technique to predict the reflection and transmission of ocean waves through a
permeable breakwater of rectangular cross section. Chwang [4] adapted a porous-wavemaker
theory to examine wave propagation generated by the horizontal oscillation of a vertical
porous plate. Water wave diffraction by an infinite thin porous wall in finite depth
water was examined by Yu [5] using matched asymptotic expansions. Lee and Chwang [6]
employed the methods of eigenfunction expansion and least-squares approximation to
investigate the problem of scattering of surface waves by thin vertical porous barriers
for four different basic configurations of the barriers.
In the present study, scattering by thin bottom standing porous walls in water of
uniform finite depth is explored under the assumptions of linear water wave theory.
Applying Havelock's inversion formulae along with the porous boundary conditions, the
problem is reduced to coupled integral equations of Fredholm type. A multi-term
Galerkin technique involving Chebyshev polynomials multiplied by suitable weights is
employed to solve these integral equations approximately. The wave force, wave energy
dissipation, and reflection coefficients are plotted against wave numbers for different
parametric values. The results of Das et al. [7], Mandal and Dolai [8], and Lee and
Chwang [6] for two or a single rigid thin bottom standing wall are recovered in order
to validate the accuracy of the present method.
2 Mathematical Formulation
Considering the linearized water wave theory and irrotational fluid motion, the
mathematical problem is to solve φ(x, y) satisfying
$\nabla^2 \phi = 0$ in $0 \le y \le h$,  (2.1)
$K\phi + \dfrac{\partial \phi}{\partial y} = 0$ on $y = 0$, $|x| > 0$,  (2.2)
$\dfrac{\partial \phi}{\partial x} = -\mathrm{i}k_0 G_j(y)\,\big[\phi(\mp b_j + 0, y) - \phi(\mp b_j - 0, y)\big]$ for $y \in L_j$ ($j = 1, 2$),  (2.3)
$\dfrac{\partial \phi}{\partial y} = 0$ on $y = h$,  (2.5)
and
$\phi(x, y) \sim \begin{cases} T\,\phi^{\mathrm{inc}}(x, y) & \text{as } x \to -\infty, \\ \phi^{\mathrm{inc}}(x, y) + R\,\phi^{\mathrm{inc}}(-x, y) & \text{as } x \to \infty, \end{cases}$  (2.6)
where $\mathrm{Re}\{\phi^{\mathrm{inc}}(x, y)\,e^{-\mathrm{i}\sigma t}\}$ denotes the velocity potential in the fluid region, $R$ and $T$ are the reflection and transmission coefficients, and $\sigma$ denotes the angular frequency of the waves. Here $\phi^{\mathrm{inc}}(x, y) = \phi_0(y)\,e^{-\mathrm{i}\mu(x - b_2)}$, where $\phi_0(y) = \dfrac{\cosh k_0(h - y)}{\cosh k_0 h}$ and $k_0$ is the unique positive real root of the equation $K = k \tanh kh$, with $\mu = k_0$.
The arrangements of the barriers are shown in Fig. 1. The geometry of the problem is symmetric about $x = 0$. So, the velocity potential can be split into symmetric and anti-symmetric parts, $\phi(x, y) = \phi^s(x, y) + \phi^a(x, y)$, where
$R = \dfrac{R^s + R^a}{2}, \qquad T = \dfrac{R^s - R^a}{2}.$  (3.2)
Thus,
$f_j^s(y) = -\mathrm{i}k_0 G_j(y)\, g_j^s(y), \quad y \in L_j,\ L_j \equiv (a_j, h),\ j = 1, 2.$  (3.6)
Using Havelock's inversion formulae on $g_j^s(y)$ and continuity of $f_j^s(y)$ along the walls gives
$\displaystyle\int_{L_1} g_1^s(t)\,\mathcal{M}_{11}^s(y, t)\,dt + \int_{L_2} g_2^s(t)\,\mathcal{M}_{12}^s(y, t)\,dt = \mu A_0^s \sin\mu b_1\, \phi_0(y),$  (3.7)
$\displaystyle\int_{L_1} g_1^s(t)\,\mathcal{M}_{21}^s(y, t)\,dt + \int_{L_2} g_2^s(t)\,\mathcal{M}_{22}^s(y, t)\,dt = \mathrm{i}\mu\big(1 - R^s e^{2\mathrm{i}\mu b_2}\big)\,\phi_0(y),$  (3.8)
where
$\mathcal{M}_{11}^s(y, t) = -\displaystyle\sum_{r=1}^{\infty} \dfrac{\alpha_r \delta_r \sinh \alpha_r b_1}{h\, e^{\alpha_r b_1}}\, \phi_r(t)\,\phi_r(y),$
$\mathcal{M}_{12}^s(y, t) = \mathcal{M}_{21}^s(y, t) = -\displaystyle\sum_{r=1}^{\infty} \dfrac{\alpha_r \delta_r \sinh \alpha_r b_1}{h\, e^{\alpha_r b_2}}\, \phi_r(t)\,\phi_r(y),$  (3.9)
$\mathcal{M}_{22}^s(y, t) = -\displaystyle\sum_{r=1}^{\infty} \dfrac{\alpha_r \delta_r \sinh \alpha_r b_2}{h\, e^{\alpha_r b_2}}\, \phi_r(t)\,\phi_r(y),$
and
$\delta_r = \dfrac{4k_r h}{2k_r h + \sin 2k_r h} \quad (r = 1, 2, \ldots).$  (3.10)
Let us introduce
$g_j^s(t) = \mu A_0^s \sin\mu b_1\, G_{j1}^s(t) + \mathrm{i}\mu\big(1 - R^s e^{2\mathrm{i}\mu b_2}\big)\, G_{j2}^s(t), \quad j = 1, 2.$  (3.11)
Using (3.11) and introducing the Kronecker delta $\delta_{jl}$ in Eqs. (3.7) and (3.8), we have
$\displaystyle\int_{L_1} G_{1l}^s(t)\,\mathcal{M}_{11}^s(y, t)\,dt + \int_{L_2} G_{2l}^s(t)\,\mathcal{M}_{12}^s(y, t)\,dt = \delta_{1l}\,\phi_0(y), \quad y \in L_1,$  (3.12)
$\displaystyle\int_{L_1} G_{1l}^s(t)\,\mathcal{M}_{21}^s(y, t)\,dt + \int_{L_2} G_{2l}^s(t)\,\mathcal{M}_{22}^s(y, t)\,dt = \delta_{2l}\,\phi_0(y), \quad y \in L_2,$  (3.13)
$\mu A_0^s \sin\mu b_1\, S_{11}^s + \mathrm{i}\mu\big(1 - R^s e^{2\mathrm{i}\mu b_2}\big) S_{12}^s = \dfrac{\mathrm{i}h}{\delta_0}\csc\mu d\,\Big[\mathrm{i}A_0^s \sin\mu b_2 + \big(1 - R^s e^{2\mathrm{i}\mu b_2}\big)\Big],$
$\mu A_0^s \sin\mu b_1\, S_{21}^s + \mathrm{i}\mu\big(1 - R^s e^{2\mathrm{i}\mu b_2}\big) S_{22}^s = \dfrac{\mathrm{i}h}{\delta_0}\csc\mu d\,\Big[-\mathrm{i}A_0^s \sin\mu b_1 - e^{\mathrm{i}\mu d} + R^s e^{2\mathrm{i}\mu b_2} e^{-\mathrm{i}\mu d}\Big],$  (3.14)
where $d = b_2 - b_1$ and
$S_{jl}^s = \displaystyle\int_{L_j} G_{jl}^s(t)\,\phi_0(t)\,dt, \quad j, l = 1, 2,$  (3.15)
and
$\delta_0 = \dfrac{4k_0 h \cosh^2 k_0 h}{2k_0 h + \sinh 2k_0 h}.$  (3.16)
Now, to solve Eqs. (3.12) and (3.13) for $G_{jl}^s(t)$, an $(N + 1)$-term Galerkin approximation of $G_{jl}^s(t)$ is chosen as
$G_{jl}^s(t) \approx \displaystyle\sum_{n=0}^{N} a_{jl}^{(n)s}\, \psi_j^{(n)}(t), \quad a_j < t < h,\ j, l = 1, 2,$  (4.1)
with
$\psi_j^{(n)}(t) = \dfrac{2(-1)^n}{\pi(2n + 1)(h - a_j)h}\,\Big[(h - a_j)^2 - (h - t)^2\Big]^{1/2}\, U_{2n}\!\left(\dfrac{h - t}{h - a_j}\right), \quad a_j < t < h,\ j = 1, 2.$  (4.2)
Using (4.1) and (4.2) in Eqs. (3.12) and (3.13), we get the following system of equations:
$\displaystyle\sum_{n=0}^{N} a_{1l}^{(n)s} V_{mn}^{(11)s} + \sum_{n=0}^{N} a_{2l}^{(n)s} V_{mn}^{(12)s} = -\delta_{1l} W_m^{(1)s},$
$\displaystyle\sum_{n=0}^{N} a_{1l}^{(n)s} V_{mn}^{(21)s} + \sum_{n=0}^{N} a_{2l}^{(n)s} V_{mn}^{(22)s} = -\delta_{2l} W_m^{(2)s}, \quad m = 0, 1, 2, \ldots, N,$  (4.3)
where
$V_{mn}^{(jj)s} = -\displaystyle\sum_{r=1}^{\infty} \dfrac{\delta_r \alpha_r h \sinh \alpha_r b_j}{e^{\alpha_r b_j}(k_r h)^2}\, J_{2m+1}\big(k_r(h - a_j)\big)\, J_{2n+1}\big(k_r(h - a_j)\big) + \mathrm{i}k_0 h^2 \int_{L_j} G_j(y)\,\psi_j^{(m)}(y)\,\psi_j^{(n)}(y)\,dy,$
$V_{mn}^{(jl)s} = -\displaystyle\sum_{r=1}^{\infty} \dfrac{\delta_r \alpha_r h \sinh \alpha_r b_1}{e^{\alpha_r b_2}(k_r h)^2}\, J_{2m+1}\big(k_r(h - a_j)\big)\, J_{2n+1}\big(k_r(h - a_j)\big),$
$W_m^{(j)s} = (-1)^m\, \dfrac{I_{2m+1}\big(k_0(h - a_j)\big)}{k_0 h \cosh k_0 h}, \quad j, l = 1, 2,\ m, n = 0, 1, \ldots, N.$  (4.4)
Also, using (4.1) in Eq. (3.15), we get a further system of equations, which can be rewritten in matrix form as
$S = W V^{-1} (-W)^T.$  (4.5)
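Numerically, once the truncated matrices V and W of Eqs. (4.3)-(4.5) have been assembled (each entry being a truncated series plus an integral over the wall), the Galerkin coefficients follow from a small dense linear solve. The sketch below shows only this final algebraic step with random placeholder entries; it does not evaluate the actual kernels of Eq. (4.4).

```python
# Hedged sketch of the final algebraic step of the Galerkin method: solve the
# dense system V a = -W^T for the coefficient vectors and form S = W V^{-1} (-W)^T.
# The matrix entries here are random placeholders, not the kernels of Eq. (4.4).
import numpy as np

N = 2                       # (N + 1)-term approximation per wall
size = 2 * (N + 1)          # two walls, N + 1 basis functions each

rng = np.random.default_rng(0)
V = rng.standard_normal((size, size)) + 1j * rng.standard_normal((size, size))
W = np.zeros((2, size), dtype=complex)
W[0, : N + 1] = rng.standard_normal(N + 1)   # block W^(1)
W[1, N + 1 :] = rng.standard_normal(N + 1)   # block W^(2)

A = np.linalg.solve(V, -W.T)  # columns are the Galerkin coefficient vectors
S = W @ A                     # the 2 x 2 matrix S = W V^{-1} (-W)^T of Eq. (4.5)
print(S.shape)                # (2, 2); its entries feed the reflection coefficient
```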
where
$W = \begin{pmatrix} W^{(1)s} & 0 \\ 0 & W^{(2)s} \end{pmatrix} \quad \text{and} \quad V = \begin{pmatrix} V^{(11)s} & V^{(12)s} \\ V^{(21)s} & V^{(22)s} \end{pmatrix}.$
Similar calculation can be done for the anti-symmetric part $\phi^a(x, y)$ by replacing the expressions as discussed above.
From the linear Bernoulli's equation, integrating the dynamic pressure discontinuity equations along the porous walls, we obtain the horizontal wave force acting on the walls as
$F^s = \mathrm{i}\rho\sigma \left\{ \displaystyle\int_{L_1} \big[\phi^s(b_1 + 0, y) - \phi^s(b_1 - 0, y)\big]\,dy + \int_{L_2} \big[\phi^s(b_2 + 0, y) - \phi^s(b_2 - 0, y)\big]\,dy \right\},$  (5.1)
$W_F = \dfrac{|F^s + F^a|}{F_0},$  (5.2)
where
$F_0 = \dfrac{\rho g \sigma}{k_0} \tanh k_0 h.$  (5.3)
Now, utilizing Green's integral formula, the energy identity for porous walls can be derived as follows:
$|R|^2 + |T|^2 = 1 - J,$  (5.4)
$J = \delta_0 \displaystyle\sum_{j=1}^{2} \int_{L_j} \mathrm{Re}\big\{G_j(y)\big\}\,\big|g_j^{s,a}(y)\big|^2\,dy,$  (5.5)
and $\mathrm{Re}\{G_j(y)\}$ is the real part of $G_j(y)$ ($j = 1, 2$).
The numerical estimates for wave energy dissipation, reflection coefficients and wave
force have been obtained by taking only three terms (N = 2) in Galerkin’s approxi-
mations of g s,a
j (y), j = 1, 2. However, quite a good accuracy in the numerical results
have been achieved considering single term (N = 0) in Galerkin’s approximation.
A comparison between our present results and the results of Das et al. [7] obtained
for double bottom standing thin barriers have been demonstrated in Table 1 with
a1
h
= ah2 = 0.2, bh1 = 0.3, bh2 = 0.301, G j = 0. The compatibility between these two
results up to 2–3 decimal places asserts that the correctness of the present results.
Again, Table 2 exhibits a comparison between the present results and the results of Mandal and Dolai [8] for single bottom-standing thin vertical barriers, taking b_1/h = 0.001, b_2/h = 0.0011, G_j = 0. It is observed from Table 2 that the two sets of results are almost equal up to 2–3 decimal places. This again validates the exactness of the present method.
The graphs of Fig. 10 in Lee and Chwang [9] for a single porous bottom-standing barrier in finite depth have been recovered in Fig. 2 corresponding to G_j = 0.5, 1. Here the values of the other parameters are a_1/h = a_2/h = 0.25, b_1/h = 0.001, b_2/h = 0.0011. This also ratifies the correctness of our results.
In Fig. 3, the dissipation of wave energy, i.e. J, is depicted against Kh with a_1/h = 0.35, a_2/h = 0.55, b_1/h = 3.0, b_2/h = 5.0; a_1/h = 0.35, a_2/h = 0.55, b_1/h = 3.0, b_2/h = 3.001; and a_1/h = 0.35, a_2/h = 0.999, b_1/h = 0.001, b_2/h = 3.001 for four, two and single walls, respectively. Here we take G_j = 1 for the above three configurations. It is observed from Fig. 3 that as the number of walls increases, the wave energy dissipation increases.
Table 1 Comparison between the numerical estimates of Das et al.'s results for R1 and R2 and the present results for R, with a_1/h = a_2/h = 0.2, b_1/h = 0.3, b_2/h = 0.301, G_j = 0
Kh      R1         R2         R
0.2     0.436929   0.437559   0.439544
0.8     0.392339   0.393359   0.385276
1.4     0.088149   0.088774   0.0671994
Table 2 Comparison between the numerical estimates of Mandal and Dolai's results for R1 and R2 and the present results for R, with a_1/h = a_2/h = a/h, b_1/h = 0.001, b_2/h = 0.002, G_j = 0
a/h     R1       R2       R
0.2     0.2914   0.2923   0.299451
0.4     0.1397   0.1397   0.142861
0.6     0.0573   0.0573   0.0581811
0.8     0.0156   0.0156   0.013988
Fig. 2 Graph of |R| against Kh for a_1/h = a_2/h = 0.25, b_1/h = 0.001, b_2/h = 0.0011
Fig. 3 Graph of J against Kh for four, two and single walls with G_j = 1
Fig. 4 Graph of |R| against Kh for a_1/h = 0.2, a_2/h = 0.45, b_1/h = 3.5, b_2/h = 5.0
Fig. 5 Graph of W_F against Kh for a_1/h = 0.45, a_2/h = 0.25, b_1/h = 2.5, b_2/h = 4.0
7 Conclusion
Acknowledgements All of the authors are grateful to Swami Vivekananda University for providing
the facilities for carrying out this research work.
References
8. Mandal, B. N., & Dolai, D. P. (1994). Oblique water wave diffraction by thin vertical barriers
in water of uniform finite depth. Applied Ocean Research, 16, 195–203.
9. Mandal, B. N., & Chakrabarti, A. (2000). Water wave scattering by barriers (1st ed.). WIT
Press.
10. Evans, D. V., & Porter, R. (1997). Complementary methods for scattering by thin barriers. International Series of Advanced Fluid Mechanics, 8, 1–44.
Mobile Learning Integration
into Teaching Kinematics Topic
in English in Vietnam High School
Abstract In the Vietnam National Education Program (2018), the chapter “Kinematics” is constructed as a fundamental part of the 10th grade Physics curriculum. Understanding its concepts and laws and practicing with experiments should be a serious target and an essential subject of study and competence development for students. The application of mobile devices and solutions supports flexibility and effectiveness in the learning process. Using English as a medium of instruction is a growing phenomenon and a significant challenge for both teachers and students in Vietnamese schools today. Teaching Physics to high school students requires a balance between teaching experimental science (the subject of specialization), foreign languages (English in particular), and new technology-integrated learning approaches. This paper examines the relationship between motivation and meaningful learning for high school students when a mobile app and mobile learning approaches are used to study Kinematics in English.
1 Introduction
The COVID-19 pandemic has impacted the education industry, where the feasibility of holding in-person classes has been limited. In response to the pandemic situation, schools in Vietnam have introduced mandatory online and blended (including mobile) learning courses that allow students to keep learning (“disruptive classes
P. T. H. Yen (B)
Foreign Language Specialized School, University of Languages and International Studies-VNU,
Hanoi, Vietnam
e-mail: [email protected]
T. Q. Cuong
VNU University of Education, Hanoi, Vietnam
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 303
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_25
but undisruptive learning”). This process provides students with a new learning mode that involves seamless interaction between the learning content and the learning community [1, 7].
Mobile learning (m-learning), with its specific platforms, solutions and devices, allows learners to access and engage in formal learning activities beyond the physical classroom and to access learning resources without schedule restrictions [1], as well as to expand the learning environment and experiences. Furthermore, m-learning functions support collaborative learning by facilitating interactions within the learning community, such as holding remote group discussions, sharing documents, posing questions, and self-testing [4].
Many empirical studies have confirmed the benefits of m-learning, such as increasing learner creativity [5], enhancing self-regulated, self-directed and self-determined learning [11], and building collaborative capacity as well as improving student learning outcomes and competence [2].
Applying m-learning to studying physics in English (the Kinematics topic) helps increase students’ interest and motivation in learning, from which students master knowledge, understand concepts and characteristics, deploy laws of physics in English through experiments, and explore simulations right on their own mobile phones. It thus also helps to solve some difficulties in learning physics in English in particular, and can be considered a solution to support the teaching of the natural sciences in English in general.
Currently, in the literature and in practice, there are many conceptions of m-learning, but they focus on two main trends:
(i). The trend of linking mobile learning with the use of technological devices, tools and applications in the learning process, including access to learning content through mobile devices [3, 4, 6, 8].
(ii). The trend of linking mobile learning with the mobility of learners: the M in the term Mobile learning stands for “MY” (“the learner himself”), representing learning that is “any+”: anytime, anywhere, with anybody, about anything, etc. Hence, mobile learning means a new form of unique and ubiquitous learning services for “mobile” students [3, 11, 12].
In short, mobile learning refers to “mobility fitness” and the relevant integration of technology into specific content knowledge areas of learning, in combination with the pedagogical setting.
Winters [13] stated four views on mobile learning: technology-centric, e-learning, strengthening formal learning, and learner-centered practice. Mobile learning may be conducted through formal, informal and non-formal learning with Open Course Ware (OCW), Massive Open Online Courses (MOOC), and Small Private Online Courses (SPOC) designed for personal Physics topics in particular [3].
is still critical. The teachers face many difficulties in terms of the level of English
proficiency, class activities and management, the new balance of Physics content
knowledge and English as the medium of instruction, and technology skills applied
in teaching. Thus, we realized that there is a need for a process to design a teaching
plan for teachers to refer to and to prepare well for physics lectures in English.
Step 1: Determine the lesson objectives
Step 2: Build a system of vocabulary/sentence patterns related to the lesson
Step 3: Prepare experiments, simulations, IT applications to support the learning
of content related to the lesson.
Step 4: Develop the content of the teaching plan and organize teaching activities
Step 5: Guide students to do physics exercises in English
Step 6: Assign homework
Kinematics is the second chapter of the 10th grade Physics curriculum in the new Vietnam Education Program for high school (in total, the curriculum consists of 7 chapters). In the lower grades, students encounter knowledge related to motion, but at a simple, brief and qualitative level. This chapter provides students with deeper, broader, more systematic and complete knowledge of motion in general and of simple mechanical movements in particular (Table 1).
Mobile app-based solutions allow for a more customized, and thereby more successful, teaching approach. When utilized in the classroom, mobile app tools, which are generally more tailored, provide an opportunity to improve specific abilities and to read texts in an e-environment quickly and efficiently. Solutions based on mobile applications can be adapted to encourage students at a pace appropriate for their learning process, using the resources and skills needed to make the learning process more productive [3]. Hence, when choosing to employ mobile applications in the Physics learning process, the teacher should be certain that this is the most effective instrument available.
Based on observation and a survey of students’ learning needs at the Foreign Language Specialized High School, University of Languages & International Studies-VNU, Hanoi, Vietnam, the mobile app “CNN physics” has been designed and implemented for teaching the Kinematics chapter (see the Annex for the program code).
The CNN physics application is developed with the main functions (see Table 2) of supporting 10th grade students in learning physics in English (learning and practicing pronunciation, and practicing multiple-choice questions by topic).
With this app, teachers can organize various flexible activities in and/or outside the classroom using blended and flipped learning approaches, such as warm-ups, checking understanding of previous lesson content and skills, and assigning new tasks for self-study as well as self-testing (Physics terminology or content in English, vocabulary, formulas, etc.) for students before, during and after class (see Fig. 1). Students can integrate the
Table 1 Contents and requirements for the chapter “Kinematics” (10th grade, National Physics curriculum. https://bit.ly/3InT64n)

Content knowledge of the kinematics section | National education program requirements

Describing motion:
– Arguing to derive the formula for calculating average speed; defining speed in one direction
– From pictures or practical examples, the displacement can be defined
– Compare distance traveled and displacement
– Based on the definition of speed in one direction and displacement, a formula for calculating and defining velocity can be derived
– Perform the experiment (or use the given data) and plot the displacement–time graph in linear motion
– Calculate the speed from the slope of the displacement–time graph
– Determine the total displacement and the total velocity
– Apply the formula for calculating speed and velocity
– Discuss to design or select a plan, implement the plan, and measure the speed with practical tools
– Describe some common speed measurement methods and evaluate their advantages and disadvantages

Uniformly variable motion:
– Performing experiments and reasoning based on the change of velocity in linear motion, derive the formula for calculating the acceleration; state the meaning and unit of acceleration
– Carry out the experiment (or use the given data) and plot the velocity–time graph in linear motion; apply the velocity–time graph to calculate displacement and acceleration in some simple cases
– Derive the formulas for uniformly variable linear motion (not using integrals)
– Apply the formulas of uniformly variable rectilinear motion
– Describe and explain motion when an object has a constant velocity in one direction and a constant acceleration in a direction perpendicular to this direction
– Discuss to design or choose a plan, implement the plan, and measure the free-fall acceleration with practical tools
– Be able to carry out a project or research study to find the conditions for throwing objects in the air at a certain height to achieve the greatest height or range
Table 2 Function authorization table of the app

Function | Available to students
Log out | Yes
Log in | Yes
See list of topics | Yes
Search topic | Yes
Study topics, watch video lectures | Yes
Learn vocabulary by topic | Yes
Take quizzes by chapter | Yes
View multiple choice answers | Yes
View test results | Yes
Fig. 1 The app “CNN physics” user interface screenshot (the Topic 1 self-study and vocabulary
practice)
app into their own learning purposes, pace, activities, self-assessment, etc. in a connective and collaborative environment (see Fig. 2).
4 Research Design
A survey was conducted to get students’ views on the usage of mobile applications for the kinematics lessons taught in English. The use of a survey (usually in the form of a Likert-type questionnaire) is a common research approach in many studies on mobile learning. The questionnaire used here consists of 20 items in a 5-point Likert scale format (Strongly agree = 4, Agree = 3, Neutral = 2, Disagree = 1, and
Strongly disagree = 0 on the Likert scale), and 122 students completed the survey
(from 10th grade in High School for Gifted, Lao Cai province, Vietnam).
Table 3 presents the findings of the investigation. The proportion of replies
obtained for each Likert value, as well as the average value and standard deviation,
are shown in the row corresponding to each question.
The reliability according to Cronbach’s Alpha is shown in Table 4.
Most of the values in the column ‘Cronbach’s Alpha if Item Deleted’ are smaller than the overall Cronbach’s Alpha, and all values in the column ‘Corrected Item-Total Correlation’ are greater than 0.3; hence most of the observed variables meet the criteria and do not need to be removed.
Cronbach’s Alpha for the scale is 0.967, which is above the 0.7 threshold, meaning that the reliability of the scale is evaluated as good. Of the total of 122 answers, up to 95.9% of students answered ‘Strongly agree’. This is a very positive response to the trend of using mobile learning for learning physics in English in particular and the natural sciences in English in general.
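For reference, the reliability statistic quoted above is the standard Cronbach's alpha; the formula below is a textbook restatement added for clarity, not reproduced from the paper:
\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),
where k is the number of questionnaire items (here k = 20), \sigma^{2}_{Y_i} is the variance of the scores on item i, and \sigma^{2}_{X} is the variance of the total score.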
5 The Findings
In this study, the use of m-learning in Kinematics lessons in English (as a medium of instruction) may be understood from two perspectives: as an innovative and new way of teaching that increases student flexibility and self-directed learning, and as the use of mobile tools, devices and solutions that supports student mobility. The mobile app “CNN physics” not only guided and supported pupils in learning Kinematics and was anticipated to address shortcomings of traditional physics lessons, but also drew them into an English academic communicative context, focused on formal representations of standardized physics problems and phenomena, and expanded their motivation and self-confidence with English resources.
Moreover, the study results show a significant level of motivation stimulated by the use of the app “CNN physics” in the learning process. This relates to the ARCS model of motivation, consisting of four basic dimensions: attention, relevance, confidence, and satisfaction [3, 8–10]. Attention (A): the use of the mobile app “CNN physics” with relevant devices (smartphone, tablet, iPad) in new circumstances (learning Kinematics and using English as a medium of instruction) fosters learner curiosity, interest, passion, and creativity. Students’ curiosity is piqued from the start, resulting in enthusiastic involvement (from Q2 to Q11). Mobile device integration in the classroom must be creative and comprehensive (from Q16 to Q20).
Relevance (R): the students’ perception of a link between the creative element incorporated in the learning process and their own experiences, requirements, ambitions, and preferences, in terms of both Kinematics learning and English usage for presentations and for exploring and experimenting with physics issues (from Q12 to Q14). Confidence (C): the relation between students’ readiness, acceptance and sense of personal control and their anticipation of success in the learning process and the completion of the learning outcomes (Q10 and Q12). Finally, Satisfaction (S): the arguments and suggestions connected to students’ attitudes toward the learning process and its outcomes (Q1, Q2, Q8).
6 Conclusion
Today, the use of mobile applications in the learning process is growing rapidly, which makes them popular tools in new ways of teaching Physics classes. However, despite the application of IT in teaching science subjects, mobile learning for teaching physics in English in particular is still limited in Vietnamese high school practice.
The implementation of the app “CNN physics” in teaching in English (as a medium of instruction) encourages students’ activity, curiosity, motivation and creativity with respect to both Physics learning phenomena and English academic communication.
The challenges of m-learning can be observed by paying attention to technology acceptance and acquisition, digital skills for using mobile apps, pedagogical settings, teachers’ professional competence, appropriate learning resources, as well as re-designing and integrating the frequency and purposefulness of technology use into the curriculum and lessons.
Annex
Program code
import Vocabulary from "../models/vocabulary";

// Return all vocabulary entries that belong to a given topic,
// sorted by creation time (oldest first).
export const getAllVocabularyByTopic = async (req, res) => {
  try {
    const vocabularies = await Vocabulary.find({
      topic_id: req.params.topic_id,
    })
      .populate("topic_id")
      .sort({ createdAt: 1 });
    return res.status(200).json({
      success: true,
      vocabularies,
    });
  } catch (error) {
    res.status(500).json({ success: false, message: "Internal server error" });
  }
};

// Create either a single vocabulary entry or a whole list of entries.
export const createVocabulary = async (req, res) => {
  const { vocabulary, listVocabulary } = req.body;
  console.log(listVocabulary);
  if (!listVocabulary && !vocabulary) {
    return res
      .status(404)
      .json({ success: false, message: "vocabulary is required" });
  }
  if (listVocabulary) {
    // Bulk insert when a list of vocabulary items is supplied.
    try {
      await Vocabulary.insertMany(req.body.listVocabulary);
      return res.status(200).json({
        success: true,
        message: "Quiz saved successfully",
      });
    } catch (error) {
      res.status(500).json({ success: false, message: "Internal server error" });
    }
  } else {
    // Otherwise create a single vocabulary document.
    try {
      const newVocabulary = new Vocabulary(req.body);
      await newVocabulary.save();
      return res.status(200).json({
        success: true,
        message: "Quiz saved successfully",
        vocabulary: newVocabulary,
      });
    } catch (error) {
      res.status(500).json({ success: false, message: "Internal server error" });
    }
  }
};

export const updateVocabulary = async (req, res) => {
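  // NOTE: the body of updateVocabulary is truncated at the page break in the
  // original annex. The completion below is an illustrative sketch only (not
  // the authors' code); it mirrors the error-handling pattern of the handlers
  // above and uses the standard Mongoose findByIdAndUpdate call.
  try {
    const updatedVocabulary = await Vocabulary.findByIdAndUpdate(
      req.params.id,
      req.body,
      { new: true }
    );
    return res.status(200).json({ success: true, vocabulary: updatedVocabulary });
  } catch (error) {
    res.status(500).json({ success: false, message: "Internal server error" });
  }
};

// Hypothetical route wiring (not shown in the original annex), illustrating how
// the controllers above could be mounted in an Express application:
//   import express from "express";
//   const router = express.Router();
//   router.get("/topics/:topic_id/vocabulary", getAllVocabularyByTopic);
//   router.post("/vocabulary", createVocabulary);
//   router.put("/vocabulary/:id", updateVocabulary);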
Acknowledgements This research has been completed under the sponsorship of the University of
Languages and International Studies (VNU ULIS) under the Project No. N.22.06.
References
1. Bernacki, M. L., Crompton, H., & Greene, J. A. (2020). Towards convergence of mobile and
psychological theories of learning. Contemporary Educational Psychology, 60, 101828. https:/
/doi.org/10.1016/j.cedpsych.2019.101828
2. Chang, W.-H., Liu, Y.-C., & Huang, T.-H. (2017). Perceptions of learning effectiveness in M-
learning: Scale development and student awareness. Journal of Computer Assisted Learning,
33(5), 461–472. https://doi.org/10.1111/jcal.12192
3. Cuong, T. Q., Bich, N. T. N., & Chung, P. K. (2020). Giáo trình lí luận và công nghệ dạy học [Theory and technology of teaching]. VNU Publishing House (NXB ĐHQGHN).
4. Diacopoulos, M. M., & Crompton, H. (2020). A systematic review of mobile learning in social
studies. Computers & Education, 154, 103911. https://doi.org/10.34190/ejel.20.5.2612
5. Holzinger, A., Nischelwitzer, A., & Meisenberger, M. (2005). Mobile phones as a challenge for
m-learning: Examples for mobile interactive learning objects (MILOs). In Pervasive Computing
and Communications Workshops (pp. 307−311).
6. Jahnke, I., & Liebscher, J. (2020). Three types of integrated course designs for using mobile
technologies to support creativity in higher education. Computers & Education, 146, 103782.
https://doi.org/10.1016/j.compedu.2019.103782
7. Jimmy, D., & Clark, M. Ed. (2007). Learning and teaching in the mobile learning environment
of the twenty-first century. Texas.
8. Kearney, M., Burden, K., Schuck, S. (2020). Theorising and implementing mobile learning:
Using the iPAC framework to inform research and teaching practice (pp. 101–114). Springer
Singapore. https://doi.org/10.1007/978-981-15-8277-6_8
9. Keller, J. (2009). Motivational design for learning and performance: The ARCS model
approach. Springer Science & Business Media.
10. Keller, J. (2011). Instructional materials motivation scale (IMMS). Unpublished manuscript.
The Florida State University.
11. Hogue, R. J. (2011). An inclusive definition of mobile learning. http://rjh.goingeast.ca/2011/
07/17/an-inclusive-definition-of-mobilelearning-edumooc/
12. Traxler, J. (2007). Current state of mobile learning. International Review of Research in Open and Distance Learning, 8(2).
13. Winters, N. (2006). What is mobile learning. In M. Sharples (Ed.), Big issues in mobile learning: Report of a workshop by the Kaleidoscope Network of Excellence Mobile Learning Initiative (pp. 5–9). University of Nottingham.
14. Zheng, L., Li, X., & Chen, F. (2016). Effects of a mobile self-regulated learning approach on
students’ learning achievements and self-regulated learning skills. Innovations in Education
and Teaching International, 1–9. https://doi.org/10.1080/14703297.2016.1259080
On Density of Grid Points in l ∞ -Balls
Abstract Finding the minimum and the maximum densities for axes-parallel
squares, cubes, and hypercubes, cast in the integer space, is an important problem
in the domain of digital geometry. In this work, we study different variations of this
problem and solve a number of them. Interestingly, the extremum values for integer
sizes sometimes differ from those for real sizes, and hence, we have studied and
analyzed them separately. Further, the results and proofs in 2D readily extend to
higher dimensions, and hence we could get simple-yet-novel theoretical results for
the extremum densities for l∞ -balls in general. As ‘density’ provides a measure of
how a set of points bounded by a region is relatively more concentrated or sparse, it
has applications in image analysis, social networking, complex networks and related
areas, apart from different branches of physical science. Hence, our results are funda-
mental in the understanding of locating the density minima and maxima in a discrete
space of an arbitrarily large dimension.
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 317
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_26
1 Introduction
In this work, we consider that the given set of points having uniform weight is located
at every crossing point of a uniform rectilinear or cuboidal grid. The grid is conceived
as the integer plane, i.e., Z2 , or as the integer space, i.e., Z3 , for simplicity. The grid
points are equivalent to integer points in our framework. We have also extended our study to the n-dimensional grid, which in that case is represented as the n-dimensional integer hyperspace. We have identified the locations of the axes-parallel squares (cubes, hypercubes) with maximum and minimum densities when the length of a side may be any integer or real number and the centroid of the square (cube, hypercube) may or may not be aligned with a grid point.
2 Maximum Density
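(For ease of reading, the 2D notation used below parallels the 3D notation defined in the next subsection; this restatement is an editorial aid: S_a denotes an axes-parallel square of side a, the integer points (pixels) it contains form the digital square S_a ∩ Z², and its pixel density is s_a := |S_a ∩ Z²| / a².)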
(ii) |Sa | = a(a + 1) if only two opposite sides contain pixels (array with a rows
and (a + 1) columns).
(iii) |S_a| = (a + 1)² if all four sides contain pixels (array with (a + 1) rows and (a + 1) columns).
Theorem 1 Out of all squares of integer length, the ones with maximum density are
unit squares.
Proof The maximum possible density of a square of integer length a is s_a = (a + 1)²/a² = (1 + 1/a)², which evaluates to 4 if and only if a = 1, and to a smaller value for any other value of a. In particular, the density is 4 for a unit square only when its four corners coincide with four pixels. ∎
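The counting behind Theorem 1 is easy to verify computationally. The sketch below is an editorial illustration and not part of the paper; the function name and the enumeration are ours. It counts the pixels inside the closed axes-parallel square [x0, x0 + a] × [y0, y0 + a] and reports the density.

// Count the integer points (pixels) inside the closed axes-parallel square
// [x0, x0 + a] x [y0, y0 + a] and return the density |S_a| / a^2.
function squareDensity(x0, y0, a) {
  const nx = Math.floor(x0 + a) - Math.ceil(x0) + 1; // pixel columns
  const ny = Math.floor(y0 + a) - Math.ceil(y0) + 1; // pixel rows
  const pixels = Math.max(nx, 0) * Math.max(ny, 0);
  return pixels / (a * a);
}

// A unit square with its corners on pixels attains the maximum density 4;
// larger integer squares placed the same way are strictly less dense.
console.log(squareDensity(0, 0, 1));     // 4
console.log(squareDensity(0, 0, 2));     // (2 + 1)^2 / 2^2 = 2.25
console.log(squareDensity(0.5, 0.5, 2)); // 1 (no pixels on the boundary)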
We extend the result of Sect. 2.1 to derive a similar result for (axes-parallel) cubes of integer length. We consider here a uniform 3D grid so that the grid points are in bijection with the 3-dimensional points with integer coordinates, i.e., with Z³. We denote by B_a an axes-parallel cube of side a. The set of voxels contained in B_a is denoted by B_a := B_a ∩ Z³, and is referred to as a digital cube. The cardinality of B_a is denoted by |B_a|, and the density of voxels in B_a is given by b_a := |B_a| / a³.
We now denote by B_{p} a cube passing through any point p ∈ R³. B_{p} is the digital cube corresponding to B_{p}, so B_{p} := B_{p} ∩ Z³, and by b_{p} we denote the density of voxels in B_{p}.
A natural extension of Observation 1 to 3D reduces the number of cubes from infinite to a countable collection and thus helps in proving the next lemma. Since B_{p} has an integer length, for every pair of opposite faces, each of the two faces will contain either no voxel or the same number of voxels. Further, if these two faces contain voxels, then translating the cube by a small amount along the direction orthogonal to these faces will decrease the number of voxels in B_{p}, leading to the following observation.
Observation 2 If a particular face of a cube does not contain any voxel, then it can
be translated to a position so as to increase its density.
The above observation implies that the maximum-density cube having integer length will come from the countable collection {B_{q} : q ∈ Z³}, i.e.,
max{b_{p} : p ∈ R³} = max{b_{q} : q ∈ Z³}.
Fig. 2 The cube length is 2. Left: Cube containing maximum possible voxels. Right: Cube
containing minimum possible voxels. Voxels contained by the cubes are shown in red
Like squares, we consider only the cubes containing more than one voxel, since
the containment of a single voxel trivially degenerates to the limiting case of infinite
density. The case where a cube contains exactly k voxels is referred to as “k-voxel
containment”.
From the above observation, we can infer that a cube will attain maximum density
if it contains voxels on all of its faces (refer Fig. 2). In particular, we have the following
lemma.
Lemma 2 For a given positive integer a, any cube Ba will have the maximum density
if it contains (a + 1)3 voxels.
Theorem 2 In the collection of all cubes of integer length, the ones with maximum
density are unit cubes.
Proof The maximum possible density of a cube of integer length a is b_a = (a + 1)³/a³ = (1 + 1/a)³, which evaluates to 8 if a = 1, and to a smaller value for any other value of a. In particular, the density is 8 for a unit cube only when its eight corners coincide with eight voxels. ∎
From the above observation, we can infer that a hypercube will attain maximum
density if it contains hypervoxels on all of its hyperfaces and we have the following
lemma.
Lemma 3 For a given positive integer a, any hypercube H a will have the maximum
density if it contains (a + 1)n hypervoxels.
Theorem 3 Among all the hypercubes of integer length, the ones with maximum
density are unit hypercubes.
Proof The maximum possible density of a hypercube of integer length a is h_a = (a + 1)ⁿ/aⁿ = (1 + 1/a)ⁿ, which evaluates to 2ⁿ if a = 1, and to a smaller value for any other value of a. In particular, the density is 2ⁿ for a unit hypercube only when its 2ⁿ corners coincide with 2ⁿ hypervoxels. ∎
In this section, we consider only axes-parallel squares S_a with real length a. Clearly, the number of pixels in S_a with the same x-coordinate (or the same y-coordinate) is at most ⌊a + 1⌋. Please refer to Fig. 3. We have this interesting observation.
Observation 4 For a given positive real number a, any real square S_a will accommodate at most ⌊a + 1⌋² pixels.
Theorem 4 In the collection of all squares of real length, the ones with maximum
density are unit squares.
Proof By Observation 4, the density of S_a satisfies
s_a ≤ \frac{⌊a + 1⌋^2}{a^2} ≤ \frac{(a + 1)^2}{a^2} = \left(1 + \frac{1}{a}\right)^2 < (1 + 1)^2 \quad ∀ a > 1 \;⇒\; s_a < 4 \quad ∀ a > 1.
In particular, the density is 4 for a unit square only when its four corners coincide with four pixels. ∎
Observation 5 For a given real number a, any cube B_a will contain at most ⌊a + 1⌋³ voxels.
Theorem 5 In the collection of all cubes of real length, the ones with maximum density are unit cubes.
Proof By Observation 5, the density of B_a is at most ⌊a + 1⌋³/a³, so
b_a ≤ \frac{⌊a + 1⌋^3}{a^3} ≤ \frac{(a + 1)^3}{a^3} = \left(1 + \frac{1}{a}\right)^3 < (1 + 1)^3 \quad ∀ a > 1 \;⇒\; b_a < 8 \quad ∀ a > 1.
In particular, the density attains the value of 8 only for a unit cube, specifically when its eight corners coincide with eight voxels. ∎
Theorem 6 In the collection of all hypercubes of real length, the ones with maximum
density are unit hypercubes.
Proof The density of H_a is at most ⌊a + 1⌋ⁿ/aⁿ, so
h_a ≤ \frac{⌊a + 1⌋^n}{a^n} ≤ \frac{(a + 1)^n}{a^n} = \left(1 + \frac{1}{a}\right)^n < (1 + 1)^n \quad ∀ a > 1 \;⇒\; h_a < 2^n \quad ∀ a > 1.
In particular, the density attains the value of 2ⁿ only for a unit hypercube, specifically when its 2ⁿ corners coincide with 2ⁿ hypervoxels. ∎
3 Minimum Density
We present here some results for finding the squares (cubes, hypercubes) with
minimum density. In this section, we use “integer square” (“real square”) to mean a
square of integer (real) length. A square (cube, hypercube) without any pixel (voxel,
hypervoxel) has zero density and is disregarded from our consideration. Hence, we
consider those with at least one pixel (voxel, hypervoxel). Note that for decreasing
the density of a square (cube, hypercube), we can keep increasing its area (volume,
hypervolume) without altering its set of pixels (voxels, hypervoxels). In other words,
the density of a square (cube, hypercube) with pixels (voxels, hypervoxels) on its
boundary can easily be decreased by decreasing its area (volume, hypervolume)
by an infinitesimal amount. Hence, for finding a square (cube, hypercube) with
minimal density, the candidate squares (cubes, hypercubes) will be the maximal
squares (cubes, hypercubes) with no pixels (voxels, hypervoxels) on their boundary.
We have the following theorem, which is simple but important in the context of
our work.
Theorem 7 In the collection of all integer squares, the ones with minimum density
are those with no pixels on their boundaries and the value of the minimum density
is unity.
Proof Any integer square of length a can be positioned such that it has no pixel on
its boundary. This minimizes its density to 1. By changing the value of a, the density cannot be reduced further, since a square of integer length a always contains at least a² pixels, whence the proof. ∎
We present here some results for finding the minimum-density cube. In this section,
we use “integer cube” to mean a cube of integer length. We repeat here that a cube
without any voxel has zero density and is disregarded from our consideration. We
consider those with at least one voxel.
Observation 7 For any positive integer a, B_a will contain at least a³ voxels.
Theorem 8 In the collection of all integer cubes, the ones with minimum density are
those with no voxels on their boundaries.
Proof Any integer cube of length a can be positioned such that it has no voxel on
its boundary. This minimizes its density to 1. Again by changing the value of a, the
density cannot be reduced further, because by Observation 7 B_a always contains at least a³ voxels. ∎
Note that the earlier results can easily be extended to higher dimensions, as a hypercube of dimension n and integer length a will contain at least aⁿ hypervoxels, which leads us to the theorem below.
Theorem 9 In the set of all integer hypercubes, the ones with minimum density are those with no hypervoxels on their hyperfaces, and their density is unity.
Proof Any integer hypercube of length a can be positioned such that it has no
hypervoxel on its boundary. This will make its density equal to 1. The density cannot
be reduced further, because H_a always contains at least aⁿ hypervoxels, whence the
proof. ∎
We now identify the minimum-density squares with real length. Consider Fig. 4. On the left, there is a square of length 5.5 units containing 6 × 5 pixels; after translating the same square to the left, it contains 5 × 5 pixels (right). We can readily generalize this into the following observation.
Fig. 5 5 × 5 pixel containment by squares of real length
Hence, for minimum density, it suffices to consider only those squares that contain
a set of pixels of k × k form. We also have another observation.
It is not difficult to see that the above observation leads to the following statement.
For any integer k ≥ 2, a square with side just less than k + 1 is going to contain at least k² pixels. Substituting k with k − 1, we can write
\lim_{|\varepsilon| \to 0} s_{k - |\varepsilon|} \;\ge\; \frac{(k-1)^2}{k^2} \qquad \forall k \ge 2.
We have the following theorem.
Theorem 10 In the collection of all squares of real length, the ones with minimum
density are those with length just less than 2.
Proof We first notice a special case: a square with length just less than 2, containing a single pixel at its center. Then there is no pixel on its boundary, and the limiting value of its density is 1/4.
Since the squares we consider cannot contain fewer than one pixel, any other square of length a < 2 cannot have a lower density.
Let us assume for contradiction that for some integer k > 2,
\frac{(k-1)^2}{k^2} \le \frac{1}{4} \;\Rightarrow\; 3k^2 - 8k + 4 \le 0.
Since 3k² − 8k + 4 = (3k − 2)(k − 2), the left-hand side is non-positive only for 2/3 ≤ k ≤ 2, contradicting k > 2. So we get
\lim_{|\varepsilon| \to 0} s_{k - |\varepsilon|} \;\ge\; \frac{(k-1)^2}{k^2} \;>\; \frac{1}{4} \qquad \forall k > 2.
Hence, for minimum density, it suffices to consider only those cubes that contain a set of voxels of k × k × k form. Similar to 2D, we observe that, for any positive real a, if B_a is a cube containing k × k × k voxels, where k ≥ 2, then (k − 1) ≤ a < (k + 1). In other words, for any integer k ≥ 2, a cube of length just less than k will contain at least (k − 1)³ voxels. As a result, we have
\lim_{|\varepsilon| \to 0} b_{k - |\varepsilon|} \;\ge\; \frac{(k-1)^3}{k^3} \qquad \forall k \ge 2,
which leads to the following theorem, which can be proved in a way very similar to Theorem 10.
Theorem 11 In the collection of all cubes of real length the ones with minimum
density are those with length just less than 2.
Proof We consider a cube of length just less than 2 containing a single voxel. It easily follows that the voxel must be at its center, to prevent other voxels from entering its occupied region. There will be no voxel on any of its faces, and the limiting value of its density is 1/8. As in 2D, we claim that no other cube with length a < 2 can have a lower density than this.
Let us assume for contradiction that for some integer k > 2,
\frac{(k-1)^3}{k^3} \le \frac{1}{8} \;\Rightarrow\; 7k^3 - 24k^2 + 24k - 8 \le 0,
which is not feasible for k ≥ 3 (at k = 3 the left-hand side equals 37 > 0, and it keeps increasing with k). So, we get
\lim_{|\varepsilon| \to 0} b_{k - |\varepsilon|} \;\ge\; \frac{(k-1)^3}{k^3} \;>\; \frac{1}{8} \qquad \forall k \ge 3.
For a ∈ R such that k − 1 ≤ a < k, we have b_a > b_{k−|ε|}, since |B_a| can be kept the same as |B_{k−|ε|}|. Since k ≥ 3, this covers all cases for a ≥ 2, which completes the proof for all positive real a. Hence, the minimum-density cube is the one that contains only one voxel and is of length just less than 2. ∎
Theorem 12 In the collection of all hypercubes of real length the ones with minimum
density are those with length just less than 2.
Proof Proceeding in the same way as in 2D and 3D, we can show that a hypercube of length just less than 2, containing a single hypervoxel at its center, is the least dense hypercube possible. There will be no hypervoxel on any of its hyperfaces, and the limiting value of its density is 1/2ⁿ. ∎
We have presented some novel findings on locating the squares, cubes and hypercubes having maximum and minimum density in a digital space. The centers of these squares, cubes, or hypercubes can be anywhere, and the lengths can be either integral or real. For any hypercube of dimension n ≥ 2, the maximum density is 2ⁿ for either integral or real length. Also, for hypercubes with integral length, the minimum density is unity for any length, and for real length, the minimum density is 1/2ⁿ, occurring for a length just less than 2. In the future, we may consider squares and cubes of arbitrary orientation to investigate the nature of the extrema. It also remains to be explored whether these findings carry over to cases where the grid is other than rectilinear, or where the region under consideration is bounded by a primitive shape other than a square, cube, or hypercube.
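For quick reference, the extremal densities established above can be collected as follows (an editorial summary of the stated results, added for convenience):
\text{maximum density} = 2^{n} \ \text{(unit hypercubes, integer or real side)}, \qquad
\text{minimum density} = \begin{cases} 1, & \text{integer side, no hypervoxel on the boundary},\\ 2^{-n}, & \text{real side, approached as the side} \to 2^{-}. \end{cases}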
Performance Validation and Hardware
Implementation of a BLE Mesh Network
by Using ESP-32 Board
Abstract Bluetooth mesh networks enable each node of the network to communicate with the others via multi-hop communication (many-to-many connectivity). These networks have become a crucial part of the Internet of Things (IoT). In recent years, Bluetooth Low Energy (BLE) technologies used to construct Bluetooth mesh networks have attracted a lot of attention. A variety of BLE meshing solutions have evolved; however, these are unified by the BLE mesh network standard. In this paper, a BLE mesh network that consists of ten nodes was designed and implemented. These nodes, which are based on the ESP-32 evaluation board, are programmed using Arduino software version 1.8.13. Each node is able to send and receive messages by listening on the three advertising channels (37, 38, and 39). Different message load values of 67, 128 and 255 bytes were used in the experimental testing of the network transmission processes, and the obtained maximum one-hop throughput and latency values are (19.1220, 96.7622 and 218.0491 Kbit/s) and (4.4, 5.9646 and 7.3038 ms), respectively. With respect to the one-hop values and for a message load of 255 bytes, the percentage reductions in the throughput values are 58%, 74% and 83% for 3, 5 and 10 hops, respectively.
1 Introduction
The Bluetooth mesh networking standard based on BLE, which allows for many-to-many communication over Bluetooth radio, was conceived in 2014 and adopted on July 13, 2017 [1]. The mesh stack defined by the BLE Mesh Profile is located on top
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 331
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_27
of the BLE core specification [2–5]. BLE mesh networks enable low-power Internet of Things (IoT) devices to communicate in a versatile and reliable way [6, 7]. Due to its high energy performance, low device costs, and widespread availability in consumer equipment, such as smartphones and tablets, BLE has become a common communication technology in IoT systems [8, 9]. The Bluetooth SIG recently completed the specifications for adding mesh networking functionality to any Bluetooth LE unit. Since the proposed protocol uses the existing lower layers of BLE, the mesh is completely backward-compatible with any BLE system (from Bluetooth version 4.0 and above), as long as a Bluetooth Mesh (BM) network stack is available. According to the specification, a BM network can accommodate up to 32,767 nodes and 127 hops [10, 11].
In reference [12], three methods are used to study the Bluetooth mesh network and assess its performance: experimental evaluation, a statistical technique, and a graph-based simulation model. All three approaches produced consistent results and revealed potential disadvantages and unresolved issues that need to be addressed. The hardware device used in the evaluation process is the nRF52832 development board from Nordic Semiconductor. Reference [13] estimated the current consumption, endurance, and energy cost per delivered bit of a battery-operated Bluetooth Mesh sensor node. Real-world hardware is used to create the BM network model and to collect data. The evaluation results quantify the impact of the major Bluetooth Mesh parameters. In that work, the device used in the data measurement process is a PCA10028 Development Kit, which belongs to the popular nRF51 series from Nordic Semiconductor. In the research paper [14], the Bluetooth Mesh protocol’s quality of service (QoS) performance was analyzed. The most important protocol parameters, as well as their impact on system performance, were identified. According to the study, the protocol’s major flaw is its scalability; in densely populated deployments, Bluetooth Mesh is particularly prone to network congestion and an increased packet collision risk. The aim of the study in [15] is to assess the Bluetooth mesh network’s capabilities and limits in terms of data delivery capacity in monitoring applications. Several tests are carried out in an office setting by establishing a multi-hop network with a number of BLE nodes. Each test trial evaluates the network’s performance in terms of packet delivery to a base station. The author in reference [16] offered an experimental evaluation of 6BLEMesh based on a real-world application. Latency, round-trip time (RTT), and energy usage are all taken into account. Three different hardware platforms were used to simulate device current consumption, assess communication and energy efficiency, and compute theoretical device lifetime (for battery-operated devices).
In this paper, a Bluetooth mesh network is proposed and implemented in hardware using ESP-32 evaluation boards. The proposed network is tested and its performance is evaluated. The evaluation is carried out with different packet sizes (67, 128, and 255 bytes) and numbers of hops, in terms of throughput and latency.
The rest of the paper is organized into five sections. Section 2 presents an overview of the Bluetooth Low Energy mesh network. Section 3 describes the research method and materials, Sect. 4 gives the results and analysis, and finally Sect. 5 ends the paper with conclusions.
A Bluetooth Low Energy mesh network has been designed and implemented in hardware. The implemented mesh network consists of ten ESP-32 evaluation board nodes. The ESP-32 board has an integrated microcontroller circuit that is capable of running programs; the ESP-32 is designed by Espressif Systems. The network hardware components (with the nodes placed in a row) are shown in Fig. 4. Each ESP-32 board is configured (provisioned) and programmed to work as a BLE mesh node. It can send and receive by listening on the three advertising channels (37, 38, and 39). Three different message loads (67, 128, and 255 bytes) were used in the process of testing and evaluating the performance of the designed mesh network. The throughput and latency performance of this network is computed and averaged for each of the message loads and under different numbers of hops.
The number of packets sent by a node during a given amount of time is known as the throughput. If N is the number of data packets delivered and acknowledged successfully inside a connection event, the maximum BLE mesh throughput for one hop may be computed using Eq. (1) below:
Th_{max} = \frac{E[N] \times L}{connInterval} \qquad (1)
where E[N] denotes the expected value of N (i.e. the average number of successfully transmitted data packets) and L denotes the quantity of user data contained in a packet [24].
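As an illustration of Eq. (1) (an editorial sketch, not part of the paper's measurement code; the function name, variable names and example values are ours), the maximum one-hop throughput follows directly from the expected number of acknowledged packets per connection event, the user payload per packet, and the connection interval:

// Maximum one-hop BLE mesh throughput according to Eq. (1).
// expectedPackets : E[N], average packets acknowledged per connection event
// payloadBytes    : L, user data per packet, in bytes
// connIntervalMs  : connection interval, in milliseconds
function maxThroughputKbps(expectedPackets, payloadBytes, connIntervalMs) {
  const bitsPerEvent = expectedPackets * payloadBytes * 8;
  return bitsPerEvent / connIntervalMs; // bits per millisecond = kbit/s
}

// Example with placeholder values (not measured figures from the paper):
console.log(maxThroughputKbps(4, 255, 30)); // 272 kbit/s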
Latency is the time it takes for a data packet to travel from one node to the other and back [25]. It is important to understand the different mechanisms that determine these delays:
RTT_{one\text{-}hop} = \sum_{i=1}^{2} \bigl(t_{Backoff_i} + t_{TX_i}\bigr) + t_{processing\,total} + \sum_{i=1}^{S} t_{Retransmit_i} \qquad (2)
RTT_{multi\text{-}hop} = \sum_{i=1}^{n} \bigl(t_{Backoff_i} + t_{TX_i}\bigr) + t_{processing\,total} + \sum_{i=1}^{S} t_{Retransmit_i} \qquad (3)
E[T] = \frac{t_{maximum}}{n + 1} \qquad (4)
The parameters that are used in all measurements are described in Table 1.
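A small sketch of how Eqs. (2) and (3) can be evaluated numerically is given below (an editorial illustration only; the helper name and the example timing values are ours, chosen to resemble the kind of constants discussed in the next paragraph rather than the paper's exact figures):

// Round-trip time in the spirit of Eqs. (2)-(3): per-transmission back-off and
// transmit times are summed, then total processing time and any retransmission
// times are added.
function roundTripTimeMs(backoffMs, txMs, processingTotalMs, retransmitMs = []) {
  const sendAndReturn = backoffMs.reduce((sum, b, i) => sum + b + txMs[i], 0);
  const retransmissions = retransmitMs.reduce((sum, t) => sum + t, 0);
  return sendAndReturn + processingTotalMs + retransmissions;
}

// One-hop example with illustrative values: average back-off of 2 ms per
// transmission, transmit times of about 2.58 ms (data) and 0.58 ms (ack),
// an assumed total processing time of 1 ms, and no retransmissions.
console.log(roundTripTimeMs([2, 2], [2.5844, 0.5792], 1.0)); // ~8.16 ms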
To limit costs, the measurements in this paper were carried out on a small mesh network with only ten nodes placed in a row, as shown above, with a ten-meter distance between any two adjacent nodes. The data packets are transferred across these nodes depending on the node configuration, the number of hops, and the Time to Live. This measurement setup forms a baseline for all the following measurements. In Eq. (3), the back-off time, the processing time and the transmit time are the three factors that influence communication, and the average RTT is 7.3038 ms, assuming there are no retransmissions. These aspects had to be evaluated more closely for this specific setup, and the average RTT was used to verify the theoretical analysis. Table 1 shows a maximum back-off of 4 ms, resulting in an average back-off time of 2 ms. There are some time constants (including radio enabling/disabling and channel switching) defined by the Bluetooth Mesh stack on the ESP-32 evaluation board that influence the transmission time. As a result, using those constants in combination with the throughput speed and packet sizes listed in Table 1, it requires 2.04 + 0.2592 (starting and stopping) = 2.2992 ms (t_TX) to complete the packet transmission on a channel, and 0.32 + 0.2592 = 0.5792 ms (t_Ack) for an acknowledgment. The obtained result (2.2992 ms) is for sending on channel 37; the total time is 2 × 2.2992 ms after additionally sending on channel 38, and 3 × 2.2992 ms for finishing transmissions on all three channels (37, 38, and 39). Finally, before the initial transmission on channel 37, another 0.2852 ms is required (radio overhead defined by the standard). As a result, the packet’s overall transmit time is 2.2992 + 0.2852 = 2.5844 ms (as long as only channel 37 is used, because two nodes are configured in the network), while the total send time of the
The throughput performance in a BLE mesh network depends on the number and size of the transmitted packets as well as on the transmission time interval (the time taken to send a packet). In this research, an average of five throughput readings was taken for three different loads (67, 128, and 255 bytes) with different numbers of hops (one, three, five, and ten hops). For the 255-byte message load, the throughput readings with their average values are shown in Table 2. From the practically obtained throughput values, it is noted that the throughput is highly affected by the number of hops: as the number of hops increases to 3, 5 and 10, the throughput decreases by 58%, 74% and 83%, respectively; the reason for this is the increase in transmission time as well as the accumulated processing (receive/transmit) time. Figure 6 shows the average throughput values for the considered loads and numbers of hops.
Table 2 Average throughput values over five readings for the 255-byte load: (a) one hop, (b) three hops, (c) five hops, (d) ten hops
No. of hops | Throughput readings (Kbit/s): (1), (2), (3), (4), (5) | Average
(a)
1 215.047956 224.700069 212.976591 216.716575 220.804431 218.0491244
(b)
1 205.154633 212.153513 204.469567 142.231754 199 192.6018934
2 106.459807 111.0821307 106.84564 107.574838 107.778756 108.0960696
3 78.485504 82.116886 78.6600012 79.472845 80.254071 79.7978636
(c)
1 193.579767 216.569179 208.349698 209.832611 211.392905 207.944832
2 103.437075 111.868457 105.235323 107.625745 106.960496 107.0254192
3 77.13552 81.960462 78.027741 75.511075 78.346458 78.1962512
4 63.590973 67.144667 63.596051 59.000112 64.443004 63.5549614
5 52.865776 55.814604 53.301192 49.979593 52.504866 52.8932062
(d)
1 167.68486 179.785441 161.32955 229.494024 169.686635 181.5960992
2 106.960496 103.991116 106.595243 108.240412 107.713128 106.700079
3 75.511075 78.893896 75.983201 79.7835 76.135821 77.2614986
4 63.004592 63.456634 64.584179 66.68342 63.51993 64.249751
5 52.722216 54.41805 53.034846 54.328909 53.378037 53.5764116
6 46.626053 46.594668 46.834549 48.232193 47.1564 47.0887726
7 40.478005 41.556816 39.56263 41.783678 40.808565 40.8379374
8 36.838208 36.878317 36.538065 37.406896 36.991427 36.9305826
9 32.868792 33.645412 32.473227 33.565961 33.154233 33.141525
10 30.454912 30.46307 30.306492 31.071903 30.643671 30.5880096
Fig. 6 Average throughput performance for the three loads: a one hop, b three hops, c five hops, d ten hops
The latency performance in the BLE mesh network depends on the processing time, back-off time, and transmit time (the propagation delay is very small and is ignored). In this paper, an average of five latency readings was taken for three different loads (67, 128, and 255 bytes), as shown in Table 3, with different numbers of hops (one, three, five, and ten hops). The obtained practical latency values clarify that the latency is also highly affected by the number of hops: as the number of hops increases to 3, 5 and 10, the latency (with respect to the one-hop value) increases by 57%, 74% and 83%, respectively; again, this is due to the increased transmission time as well as the accumulated processing (receive/transmit) time. Figure 7 shows the average latency values for the considered loads and numbers of hops.
Table 3 Average latency values over five readings for the 255-byte load: (a) one hop, (b) three hops, (c) five hops, (d) ten hops
No. of hops | Latency readings (ms): (1), (2), (3), (4), (5) | Average
(a)
1 7.403000 7.085000 7.475000 7.346000 7.210000 7.3038
(b)
1 7.76 7.504 7.786 11.193 8 8.4486
2 14.954 14.237 14.9 14.799 14.771 14.7322
3 20.284 19.386999 20.239 20.032 19.837 19.95579
(c)
1 8.224 7.351 7.641 7.587 7.531 7.6668
2 15.391 14.231 15.128 14.792 14.884 14.8852
3 20.639 19.424 20.403 21.083 20.32 20.3738
4 25.035 23.709999 25.033001 26.983 24.704 25.093
5 30.114 28.523001 29.868 31.853001 30.320999 30.1358002
(d)
1 9.494 8.855 9.868 6.937 9.382 8.9072
2 14.884 15.309 14.935 14.708 14.78 14.9232
3 21.083 20.179001 20.952 19.954 20.91 20.6156002
4 25.268 25.087999 24.65 23.874001 25.063 24.7886
5 30.195999 29.254999 30.018 29.302999 29.825001 29.7193996
6 34.144001 34.167 33.992001 33.007 33.759998 33.814
7 39.330002 38.308998 40.240002 38.101002 39.014 38.9988008
8 43.216 43.168999 43.570999 42.558998 43.036999 43.110399
9 48.435001 47.317001 49.025002 47.429001 48.018002 48.0448014
10 52.273998 52.259998 52.529999 51.236 51.952 52.0504
Fig. 7 Average latency performance for the three loads: a one hop, b three hops, c five hops, d ten hops
5 Conclusion
A small BLE mesh network consisting of only ten ESP-32 evaluation board nodes is configured and programmed from a laptop using Arduino software version 1.8.13. The performance of this network is investigated and evaluated under three different loads (67, 128, and 255 bytes) and numbers of hops (1, 3, 5, and 10). It is noted that the practically obtained throughput and latency values are affected by the number of hops: as the number of hops increases, the network throughput and latency performance degrades. For one hop, high consistency is found between the total theoretical (6.4488 ms) and practical (7.3038 ms) average round-trip times. The convergence of the practical and theoretical results reflects the efficiency of the ESP-32 evaluation boards and of the software used in programming the BLE mesh network nodes. Performance evaluation of such mesh networks is an important task, as they form a basis for Internet of Things technology, which requires hundreds of connected devices and enters various fields such as building automation, asset tracking, and sensor networks.
Acknowledgements This work was supported by Northern Technical University (NTU), the
Technical Engineering College/Mosul, and my supervisor Dr. Ziyad Khalaf Farej.
Soobia Saeed, Manzoor Hussain, Mehmood Naqvi, and Hawraa Ali Sabah
Abstract K-Nearest Neighbour (k-NN) is one of the most common machine learning algorithms; however, it frequently fails to operate well due to an incorrect distance measure or the existence of a large number of irrelevant pieces of information. To improve k-NN classification, linear and non-linear feature transformation approaches were used to extract class-relevant information. In this paper, we describe the combination of the Laplace transformation of Eigen maps and the gradient conjugate iterative approach to sort out non-linear or irrelevant data, in which a non-linear feature mapping is sought through Laplacian Eigen maps or kernel mixtures to remove it, whereas locally preserving projection (LPP) is applied to save the original values during the reconstruction of the linear data and to create the large-margin distance in the hybrid k-NN model. The algorithm offers a computationally efficient solution to non-linear dimensionality reduction with locality-preserving qualities, and a linear transformation matrix is subsequently trained to fulfil the goal of a large-margin distance framework.
S. Saeed (B)
School of Computing and Information Sciences, Sohail University, Karachi, Pakistan
e-mail: [email protected]
M. Hussain
Computing Department, Faculty of Computing & Information Technology, Indus University,
Karachi, Pakistan
e-mail: [email protected]
M. Naqvi
School of Electronics Engineering and Computer Science, Mohwak College, Alberta, Canada
e-mail: [email protected]
H. A. Sabah
College of Engineering, Medical Instruments Technology Engineering, National University of
Science and Technology, Dhi Qar, Iraq
e-mail: [email protected]
1 Introduction
Fig. 1 Linear transformation of the images by dimension reduction method of proposed technique
The k-NN algorithm is a common machine learning technique for classifying data, but it often fails because of the many irrelevant features in the data and the inappropriate selection of distance metrics. In this section, the Laplace transformation of Eigen mapping is formulated to handle the non-related features in the classification of the proposed hybrid k-NN model extended in the previous section. It enhances the features of the linear transformation so that it can be applied to the exactly relevant information of the vector variables (constructed imputed values) of the datasets, and removes the irrelevant or non-linear features using the GCIA approach in combination with the iteration method. The other reason for selecting Laplacian Eigen maps and LPP is to convert the high-dimensional space into an enhanced low-dimensional space and to create the large-margin distance values. This technique makes it easier to transform the images into their linear transformation after refining the imputation of the missing values. Some of the data in the datasets of this research still lack a linear transformation, which is needed to reduce the time and storage requirements of the data. It helps to reduce unnecessary multi-collinearity and improves the classification performance of the hybrid k-NN model, which can then easily identify the refined data after reconstruction. The previous technique works well for constructing the missing data in the datasets, but there is still a barrier to visualizing the data properly because, during the reconstruction, some of the data are converted into a non-linear form that needs to be linear. This research applies the proposed LE-LPP technique to the datasets for the transformation of non-linear dimension reduction, together with the GCIA in the proposed hybrid k-NN model, to obtain the large-margin distance values. The iteration method is selected to check the non-linear data in the initial value, which generates a
\[
\sum_{i,j} w_{i,j}\,(y_i - y_j)^2 \qquad (1)
\]
where w_{i,j} is the weight function: points that are close together are assigned a large weight value, whereas points that are further apart are assigned a smaller weight. This function decreases exponentially, so points that are mapped far apart incur a large penalty. This mapping behaviour explains the strong suitability of LE for sorting out the non-linear points of the hybrid k-NN algorithm. For the detection function of the hybrid k-NN algorithm, applied to a segment x of length n, the linear Eigen-maps instruction is directly represented by the following steps:
The Euclidean distance matrix is calculated from $\|x_i - x_j\|^2$, and the nearest neighbours are connected: nodes $i$ and $j$ are joined by an edge if node $j$ is among the nearest neighbours of node $i$.
(1) The weight matrix of the k-NN algorithm is computed as
\[
W_{i,j} = e^{-\|x_i - x_j\|^2},
\]
with the corresponding degree (normalising) term $\sum_j w_{i,j}$.
This approach is applied to build the connectivity of the Laplacian graph of the Eigen maps, following the previous subheading, by integrating the neighbourhood information of the given datasets. It computes the transformation matrix that maps the data points to the subspace. LPP optimally preserves the local neighbourhood information in a certain sense of linear transformation. The approach is defined by the proposed steps, which represent the mapping values generated directly by the linear discrete approximation.
The conjugate gradient (CG) algorithm is one of the best-known optimization approaches for finding the irrelevant or non-linear data in the hybrid k-NN model. This part of the research uses the GCIA to extract the irrelevant or non-linear data in the hybrid k-NN model: since this research uses trained MRI datasets, the proposed GCIA is well suited to finding the non-linear data in the MRI images, while the iterative method checks for the presence of non-linear data multiple times and converts it into linear data through the LE-LPP method. However, no derivatives are available for this expression, so finite differences are used in the CGIA to approximate the first derivative. The purpose of choosing this non-linear CGIA is to minimize the non-linear data in the trained MRI datasets by finding solutions to the underdetermined linear data of the trained MRI datasets. The CGIA finds the local minimum of the non-linear features in the datasets using its gradient alone. Given an N-variable function, the gradient indicates the direction of maximum increase; to search for non-linear data or irrelevant features in the trained MRI images, one simply starts in the opposite (search) direction using the steps given below:
(1) Choose the initial point and calculate the remaining non-linear data.
(2) Find the step length Δx_0 used in the calculation of the irrelevant features and non-linear data.
(3) Perform a line search along the search direction.
(4) Set the new iteration point: X_{n+1} = X_n + α_n S_n.
(5) If ‖X_{n+1} − X_n‖ ≤ ε holds, the algorithm stops; otherwise, go to the next step.
(6) Let X_n = X_{n+1} and go to Step 4.
Here S_n denotes the search direction, α and β are the line search parameters, and Δx_0 denotes the length of the line search direction.
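As a concrete illustration of the iterative scheme above, the following sketch implements a generic non-linear conjugate gradient loop with finite-difference gradients. It uses the Fletcher–Reeves update and a crude backtracking line search, which are common choices; the paper does not state which β formula or line search the GCIA uses, and the objective function, starting point and tolerance below are placeholders rather than values from the authors' MRI experiments.

```python
import numpy as np

def finite_diff_grad(f, x, h=1e-6):
    """Approximate the gradient of f at x by central finite differences."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2.0 * h)
    return g

def nonlinear_cg(f, x0, tol=1e-6, max_iter=200):
    """Generic non-linear conjugate gradient minimiser (Fletcher-Reeves update)."""
    x = np.asarray(x0, dtype=float)
    g = finite_diff_grad(f, x)
    s = -g                                  # initial search direction
    for _ in range(max_iter):
        alpha = 1.0                         # crude backtracking line search
        while f(x + alpha * s) > f(x) and alpha > 1e-12:
            alpha *= 0.5
        x_new = x + alpha * s               # step (4): X_{n+1} = X_n + alpha_n * S_n
        g_new = finite_diff_grad(f, x_new)
        if np.linalg.norm(g_new) <= tol:    # step (5): stopping test
            return x_new
        beta = (g_new @ g_new) / (g @ g)    # Fletcher-Reeves beta
        s = -g_new + beta * s               # new conjugate search direction
        x, g = x_new, g_new
    return x

# Toy objective standing in for the MRI-based error measure.
f = lambda v: (v[0] - 1.0) ** 2 + 10.0 * (v[1] + 2.0) ** 2
print(nonlinear_cg(f, x0=[5.0, 5.0]))   # converges near (1, -2)
```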
The strength of this technique is its method for saving the original values of the transformation and achieving a better solution with the help of the Eigen map. The LE-LPP technique has a certain “intelligence” for finding much better solutions of the linear transformation by using the GCIA. LPP is a linear transformation of the non-linear Laplacian Eigen-map values. The procedure of the algorithm is represented as follows.
Let G denote the Laplace transformation graph of m nodes, in which nodes i and j are connected if x_i and x_j are close together. These two steps are introduced directly so that the LE-LPP technique is organized in terms of two conditions:
(1) i and j are connected by an edge in the k-neighbourhood (e-neighbourhood), where “e” is defined by the edge node determined by the Euclidean norm matrix.
(2) If the parameter of k nearest neighbours relates to the number of data points (i.e., k ∈ N), then i and j are connected by an edge node if i and j are among the k nearest neighbours of each other.
For a better understanding, there are two other variations of the weighted matrix at the edges of the linear transformation. W_{i,j} encodes the relationship between the weight of the edges with vertices i, j, and is 0 when no edge is found in the matrix:
i. choose a kernel function (t ∈ R) when nodes i and j are connected;
ii. choose W_{i,j} = 1 when the vertices i and j are connected by an edge (see the sketch below).
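The two neighbourhood conditions and the two weighting choices can be written down compactly. The sketch below is a minimal illustration with hypothetical data, not the authors' MRI pipeline: it builds a k-nearest-neighbour graph and fills the weight matrix either with the heat-kernel value exp(-||x_i - x_j||^2 / t), one common realisation of the kernel choice in (i), or with the simple 0/1 weight of (ii).

```python
import numpy as np

def knn_weight_matrix(X, k=5, t=None):
    """Build the symmetric weight matrix W of a k-NN graph.

    If t is given, use heat-kernel weights exp(-||xi - xj||^2 / t);
    otherwise use simple 0/1 weights (W_ij = 1 when i and j are neighbours).
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        # Indices of the k nearest neighbours of point i (excluding i itself).
        nbrs = np.argsort(d2[i])[1:k + 1]
        for j in nbrs:
            w = np.exp(-d2[i, j] / t) if t is not None else 1.0
            W[i, j] = W[j, i] = w   # connect i and j symmetrically
    return W

# Toy usage with random feature vectors standing in for image features.
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 8))
W = knn_weight_matrix(X, k=5, t=2.0)
print(W.shape, W.max())
```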
This development aims to establish a better solution for the proposed hybrid k-NN model. At the same time, the combination of LE-LPP is applied precisely to explore the new idea in the proposed hybrid k-NN model of selecting the large-margin distance values, together with the GC approach and the iteration method, which carefully checks multiple times during the transformation from non-linear to linear features whether any non-linear features remain in the datasets. The LPP approach and the Eigen maps are combined with the iterating method so that the LE-LPP technique can achieve this more professionally. It gives better results for the proposed technique, removing the unnecessary data in the datasets and identifying the hidden non-linear data to be selected by the nearest location in the k-NN algorithm.
The technique was implemented on the Laplace Eigen maps to transform the linear data, while LPP preserves the original values of the hybrid k-NN model. The results are measured in terms of the GCIA, which is implemented in the hybrid k-NN model to increase the performance during the transformation of the linear data. In the end, this research implements the time complexity function to check the execution time of the proposed model and then compares it with the average running time of the previous k-NN algorithm, as described in the subsections given below.
This section presents the computational outcomes of the simulation, formulated with a statistical significance analysis, to assess the performance of the proposed LELPP-TC technique in the hybrid k-NN model for finding the non-linear data or irrelevant features, extending the previous results. This part of the research identifies the non-linear data through the Laplace transformation of Eigen maps after reconstructing the linear data. It enhances the performance of the hybrid k-NN model with less execution time on the time series data. In addition, the LPP method is applied to save the newly reconstructed linear values in the hybrid k-NN model, as illustrated in Figs. 3 to 15 for Datasets I to III, given below.
Figures 2, 3 and 4 show the performance of the Laplace transformation of the Eigen values with the Eigen vectors. The simulation has been performed in multiple steps, including the normalization process and the application of the kernel function when nodes i and j are connected to each other, which is represented by an edge. In this method, a key point value of 195 is taken for the MRI images in the Laplacian Eigen maps to preserve the local information of the trained datasets used in the hybrid k-NN model. These values show the newly constructed values in statistical form, which capture the non-linear values or irrelevant features in the hybrid k-NN model. The next step is to calculate all the weight matrices W1 and W2 one by one; note that W1 and W2 indicate the edge weights of the image intensity for the MRI datasets. These calculated weight-matrix values represent the Eigen values (E) in the form of Eigen vectors such as D1 and W1, and then the Laplace matrices of the Eigen values of the linear data, LM = D1 − W1 and L = D2 − W2, are calculated in the order of Eigen values 0 = λ1 ≤ λ2 ≤ λ3 ≤ · · · ≤ λn, where λ denotes the Eigen values of the corresponding Eigen vectors, as shown in Fig. 5.
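Continuing the earlier sketch, the graph Laplacian and its ordered eigenvalues 0 = λ1 ≤ λ2 ≤ … ≤ λn can be obtained as follows. This is only an illustration of the L = D − W computation described above, run on a tiny hand-made weight matrix rather than on the MRI datasets.

```python
import numpy as np

def laplacian_eigenmaps(W, dim=2):
    """Form L = D - W and return the sorted eigenvalues and the embedding
    given by the eigenvectors of the dim smallest non-zero eigenvalues."""
    D = np.diag(W.sum(axis=1))              # degree matrix, D_ii = sum_j w_ij
    L = D - W                               # graph Laplacian, as in LM = D1 - W1
    eigvals, eigvecs = np.linalg.eigh(L)    # L is symmetric, so eigh is appropriate
    order = np.argsort(eigvals)             # 0 = lambda_1 <= lambda_2 <= ... <= lambda_n
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    return eigvals, eigvecs[:, 1:dim + 1]   # skip the trivial (near-zero) eigenvector

# Standalone toy example: weight matrix of a 4-node path graph.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
eigvals, Y = laplacian_eigenmaps(W, dim=2)
print(np.round(eigvals, 3), Y.shape)
```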
Figures 5 and 6 show the calculated results after implementing LPP over the Laplace-transformed Eigen-map values of the eigenvectors. First, the distances of the eigenvectors of all three MRI datasets are calculated, and then the LPP code is applied
Fig. 2 Weight vector matrix of Laplace transformation of Eigen maps for Datasets-I
Fig. 3 Weight vector matrix of Laplace transformation of Eigen maps for Datasets-II
Fig. 5 Eigen diagonal matrix (D) values with LPP for Datasets-I
to calculate the Eigen values in the form of PCA. These results are generated in PCA form after extracting the calculated values of the first four components of the Eigen values of the eigenvectors. The statistical results are generated as complex numbers after implementing the mapping function for the conversion of the MRI images.
Table 1 lists the irrelevant errors present in the trained Dataset-II. This part of the research uses the non-linear gradient conjugate technique with LELPP to remove these errors and improve the performance of the hybrid k-NN model, while measuring the running time during the process.
Time complexity is another part of this research, applied to check the execution time of the proposed hybrid k-NN model. The computational complexity of the time
Fig. 6 Eigen diagonal matrix (D) values with LPP for Datasets-II
Table 1 Results on the number of iterations used to remove the errors using the non-linear GCIA with LELPP
Sr. no. Trained dataset Iterations Existing errors Error value Errors remaining Execution time (s)
1 D-II 1 3 1.8771e−17 0 0.007629
2 D-II 1 3 9.8064e−18 0 0.007629
3 D-II 1 3 −3.7284e−17 0 0.007629
4 D-II 1 3 0.40701e−17 0 0.007629
5 D-II 1 3 −5.1783e−18 0 0.007629
series function is a combination of two processes: (1) the proposed CM-DFT and (2) LELPP. This technique minimizes the time delay of the execution for the same length T and maximum delay D; the complexity is O(DT), which makes the complexity of the cross-correlation function O((n/2)DT) for the N variables of CM-DFT. The complexity of k-NN for the proposed model X is O(xTN). Consequently, the total time complexity of the hybrid k-NN is O((n/2)DT) + O(xTN). Furthermore, the remaining LE-LPP used in this technique has complexity O(LE-LPP · log T), so the complexity of the achieved values is O(LE-LPP · x log T). Hence the complexity of the proposed model is O((n/2)DT) + O(xTN) + O(x log T) + O(LE-LPP · x log T).
The efficiency of this method is achieved by the proposed model, which yields the novel technique of the hybrid k-NN model. In addition, this technique is used to measure the execution time of the proposed hybrid k-NN model, which becomes more efficient, and to calculate the running-time complexities of the algorithms and compare them with the average time of the previous model. Hence, this novel technique is more efficient than the previous one.
Table 2 Comparison of the current hybrid k-NN execution time with previous hybrid k-NN models
MRI datasets | Affiliation | Hybrid k-NN model accuracy (%) | Computational time (s)
Brain MRI images | Dalia Mohammad Toufiq et al. (2021) | 91.9 | 2.5305
Brain MRI images | Zahid Ullah et al. (2020) | 95.8 | 4.103
Collagen ground truth (SHG) and elastic ground truth (TPEF) images | Camilo Roa et al. (2021) | 90 | 3.0116
Proposed low-grade tumor with CSF datasets | 99.9 | 2.4207
TC = O((n/2)·DT) + O(xTN) + O(x log T) + O(LELPP · x log T).
TC = O{((1/2) × 1.5331) + (0.9978 × 1.5331) + (0.9978 log 0.0076) + (0.0074 × 0.9978 log 0.0076)}
This research used a similar model to evaluate the computational time and imputation accuracy. For one of the k-NN models, the average execution time for imputing missing values was 2.5305 s, while the other models' average execution times were 4.103 and 3.0116 s. Thus, depending on the amount of data to be imputed and the accuracy of each method, the faster method may be preferable. Our proposed hybrid k-NN model gives better results than the others, with less execution time (Table 2).
5 Conclusion
In this research, we proposed an approach for embedding a set of data points into a new feature space in a hybrid k-NN model, using non-linear feature mappings discovered by combining Laplacian Eigen maps with gradient conjugate iterative methods. To improve k-NN classification at a maximum margin distance, linear and non-linear feature transformation methods were used to extract class-relevant information and produce the linear transformation matrix. This demonstrates that learning a linear transformation for the feature vectors produced by Laplacian Eigen maps is similar to directly dealing with a kernel function, which can also be easily constructed from the weight matrix of the Laplacian Eigen maps in order to reach a high margin. This work presents a computationally fast method for solving non-linear dimensionality reduction problems using locality-preserving projections and the non-linear GCIA.
A Systematic Literature Review of
How to Treat Cognitive Psychology
with Artificial Intelligence
S. Saeed (B)
School of Computing and Information Sciences, Sohail University, Karachi, Pakistan
e-mail: [email protected]; [email protected]
M. Hussain
Computing Department, Faculty of Computing & Information Technology, Indus University,
Karachi, Pakistan
e-mail: [email protected]
M. Naqvi
School of Electronics Engineering and Computer Science, Mohwak College, Alberta, Canada
e-mail: [email protected]
K. A. Jabbar
College of Engineering, Medical Instruments Technology Engineering, National University of
Science and Technology, Dhi Qar, Iraq
e-mail: [email protected]
1 Introduction
based on big data analysis technologies, and suicide early-warning systems based on medical pictures [15]. Yet, cognitive psychology-based evaluation of artificial intelligence remains insufficient. The current state of AI techniques in cognitive psychology, the application of artificial intelligence in experimental cognitive psychology research, and the current evolution of psychology trends are all examined in this study [16, 17].
2 Literature Review
Data collection, analysis, testing, and assessment could be improved with AI technologies, according to Liang et al. [19]. By analyzing vast amounts of data and combining it with expert analysis, AI technology can be used to create useful therapeutic tools. Artificial intelligence can currently be used to identify and forecast problems, evaluate forecasts, and diagnose problems. The AI identified a number of variables associated with suicidal ideation and behavior after analyzing data from 707 suicidal patients in Greater Santiago, Chile. The study resulted in a set of preventative interventions for suicidal adults that reduced their likelihood of committing suicide; it also improved their psychological well-being, feelings of self-worth, and motivation to live [22, 23]. In 2017, the authors developed a model that mimicked psychiatric diagnoses using fuzzy logic. It evaluated patients successfully and tested mental-health diagnoses based on insufficient information. Utilizing appropriate AI technology allows for the development of mental models, the testing of their accuracy, and the recommendation of treatments.
studied. In this article, we examine the foundational ideas and most recent developments in psychology and brain research, as well as common application scenarios such as facial attractiveness and affective computing. Moreover, deep neural networks, influenced by cognitive psychology theories and methodologies, provide an excellent illustration of the advantages of integrating knowledge and proficiency from several fields, for example by describing how children learn object labels [28] (Fig. 1).
and a Pearson correlation coefficient (PC) to analyze the benchmark. When the five-fold approach was used to analyze the performance of the facial attractiveness templates under various computer models, the Pearson correlation coefficient was over 0.85, the maximum absolute error was less than 0.25, and the root mean square error was 0.3–0.4 [29]. In this research, the author discusses facial features and a multitask learning approach for predicting facial attractiveness. The face representation is first estimated with a deep convolutional neural network (CNN) pre-trained automatically on a large facial-expression dataset. Another key point is that a multitask learning strategy is used for three tasks with optimally shared features, such as identifying facial attributes with the deep learning model, including facial features, gender recognition, race recognition, etc. To improve the accuracy of the attractiveness computation, a multi-stream CNN is fed certain parts of the face picture (such as the left eye, nose, and mouth) in addition to the entire face, and each stream of the network receives several partial facial features as input [30, 31]. During the meta-training process, the author discovered a significant number of specific preferences that many people share. During the meta-testing phase, the model is then applied to new patients with a small sample of assessed photos. This study made use of a facial beauty dataset that included hundreds of volunteers from varied social and cultural backgrounds and was rated by raters of different races, genders, and age groups. According to quantitative comparisons, the suggested strategy surpasses existing algorithms for predicting facial beauty and is effective at learning individual beauty preferences from a limited number of annotated pictures [32, 33].
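The evaluation protocol described above (five-fold validation scored with the Pearson correlation, maximum absolute error and root-mean-square error) can be reproduced generically. The snippet below is a schematic illustration on synthetic scores, with a plain ridge regressor standing in for the CNN-based attractiveness model; none of the data or model choices come from the cited studies.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import Ridge
from scipy.stats import pearsonr

# Synthetic stand-ins: 500 feature vectors and attractiveness scores in [1, 5].
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = np.clip(3.0 + X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.3, size=500), 1, 5)

pcs, maes, rmses = [], [], []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = Ridge(alpha=1.0).fit(X[train_idx], y[train_idx])
    pred = model.predict(X[test_idx])
    err = np.abs(pred - y[test_idx])
    pcs.append(pearsonr(pred, y[test_idx])[0])   # Pearson correlation (PC)
    maes.append(err.max())                       # maximum absolute error
    rmses.append(np.sqrt((err ** 2).mean()))     # root mean square error

print(f"PC={np.mean(pcs):.3f}  maxAE={np.mean(maes):.3f}  RMSE={np.mean(rmses):.3f}")
```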
Society the author proposed the concept of emotional computing when she stated
that it “is computing that can monitor, evaluate, and change emotions in response to
human outward expressions” [32, 33].
Machine learning is at the core of AI. Machine learning uses different algorithms to learn a particular task; these tasks include making predictions, classifying things, creating visuals, and many other jobs. A subset of machine learning called deep learning completes comparable tasks but with a more complex structure. Applications of AI include the classification of images in biology and chemistry, simulations in mathematics and physics, medical diagnosis, and numerous other areas. The power of these AI techniques has not yet been fully applied in psychology, a field that has not been around as long as those listed above. The goal of psychology as a science is to investigate and characterize how people's behavior relates to their emotional and cognitive functioning [30]. Unfortunately, according to some researchers, the bulk of psychologists primarily concentrate on explaining behavior. According to Yarkoni and Westfall, forecasting future behavior has become rare or unimportant since explanation has become the accepted practice. Many psychologists have begun experimenting with artificial intelligence to predict and classify outcomes in many areas of study, from quantifying pain levels based on brain scans [31] and using machine learning to gain a deeper understanding of personalities [31], to detecting human needs in critical situations [32] and predicting problematic social media usage or future alcohol abuse [29]. Researchers have even investigated how to improve AI models for mental health [29]. There are numerous ways in which psychologists have begun using AI and machine learning to address significant issues. The subject of mental health and mental diseases is one of the most significant issues psychologists deal with nowadays. Psychologists commonly treat major depressive disorder (MDD), anxiety, post-traumatic stress disorder (PTSD), schizophrenia, and many other mental illnesses and disorders. Treatments for these conditions typically take the shape of various therapies or, when provided in collaboration with a psychiatrist, even medications. Psychologists have utilized machine learning approaches to increase their understanding of the heterogeneity of these diseases [34]. The most prevalent types of mental diseases, including depression and anxiety, are currently increasing and have affected millions of people. This research examines contemporary research that uses AI approaches to further the study of psychology. Various uses of AI and machine learning are discussed in this article, such as diagnosing and predicting mental health issues, identifying depression levels, and predicting suicide and self-injury.
The researchers investigated how well machine learning can be used to forecast the onset of PTSD following admission to the ER or hospitalization. Papini and colleagues used an ensemble machine learning approach to try to improve the accuracy of a prior study's attempt to predict PTSD [35]. Papini and his colleagues gathered data for 271 patients who had been admitted to the emergency room. Pulse, length of stay, state of consciousness, and injury severity were only a few of the physical predictors that were collected. Additionally, psychological predictors were gathered, such as a history of an anxiety or mood illness, present mental health, any PTSD symptoms, and others. PTSD screening was completed 3, 6, and 12 months after admission to the emergency room. After preprocessing the gathered data, 41 predictive features remained. The researchers used a machine learning model comprised of multiple decision trees, called extreme gradient boosting (XGBoost). Machine learning algorithms that use decision trees process the testing data using yes/no questions derived from the training data. The model was used to predict positive PTSD symptoms (PC-PTSD score of 3 or above) and negative PTSD symptoms (PC-PTSD score below 3) on an individual basis, denoted PTSD+ and PTSD−. The model accuracy was determined by the area-under-the-curve score. The authors compare the XGBoost model with the benchmark prediction models discussed in their paper. One benchmark is based on hospital features from normal data collection for prediction. The second benchmark used logistic regression exclusively, based on the most important predictor, “PTSD severity only in the hospital.” Karstoft and her colleagues went in a slightly different direction: data were collected from 957 trauma survivors for their study [36]. Among these data, 68 predictive features were sorted according to their importance in predicting PTSD development. A support vector machine (SVM) was used by the researchers to evaluate the prediction's accuracy. Model improvement with SVM requires training data, as it is a supervised learning method. SVM models are commonly used for the classification of data, clustering, and the detection of outliers [35, 37].
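To make the modelling setup concrete, the sketch below trains a gradient-boosted decision-tree classifier and an SVM on synthetic tabular data and scores both with the area under the ROC curve, mirroring the XGBoost-plus-AUC and SVM pipelines described above. The data, feature count and hyper-parameters are placeholders; the sketch does not reproduce the Papini or Karstoft studies.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the predictive features (e.g. 41 predictors, binary outcome).
X, y = make_classification(n_samples=1000, n_features=41, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

# Gradient-boosted trees (scikit-learn's analogue of the XGBoost model).
gbt = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc_gbt = roc_auc_score(y_te, gbt.predict_proba(X_te)[:, 1])

# Support vector machine scored with its decision function.
svm = SVC(kernel="rbf").fit(X_tr, y_tr)
auc_svm = roc_auc_score(y_te, svm.decision_function(X_te))

print(f"AUC  gradient boosting: {auc_gbt:.3f}   SVM: {auc_svm:.3f}")
```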
5 General Discussion
The system-based analysis and application examples in this study show that cognitive psychology combined with artificial intelligence is the future direction of artificial intelligence development: to advance artificial intelligence, to enable computers to simulate advanced human cognition, to learn and think, so that computers can recognize and understand human emotions, and finally to realize dialogue and empathy with humans. With artificial intelligence and human psychological cognition, people and machines can interact emotionally in ways similar to human communication, not just by simulating rational thinking, but also by reproducing
6 Conclusion
References
1. Afzali, M. H., Sunderland, M., Stewart, S., Masse, B., Seguin, J., Newton, N., Teesson, M., &
Conrod, P. (2018). Machine-learning prediction of adolescent alcohol use: A cross-study, cross-
cultural validation. Addiction, 114(1), 662–671.
2. Alharthi, R., Guthier, B., & El Saddik, A. (2018). Recognizing human needs during critical
events using machine learning powered psychology-based framework. IEEE Access, 6(1),
58737–58753.
3. Bechara, A., Damasio, H., & Damasio, A. R. (2000). Emotion, decision making and the
orbitofrontal cortex. Cerebral Cortex, 10(2), 295–307.
4. Bleidorn, W., & Hopwood, C. J. (2018). Using machine learning to advance personality
assessment and theory. Personality and Social Psychology Review, 23(5), 190–203.
5. Branch, B. (2019). Artificial intelligence applications and psychology: An overview. Neuropsy-
chopharmacologia Hungarica, 21(2), 119–126.
6. Dave, R., Sargeant, K., Vanamala, M., & Seliya, N. (2022). Review on psychology research
based on artificial intelligence methodologies. Journal of Computer and Communications,
10(5), 113–130.
7. Soobia, S., Habibollah, H., & Jhanjhi, N. Z. (2021). A systematic mapping study of: Low-grade
tumor of brain cancer and CSF fluid detecting approaches and parameters. In Approaches and
applications of deep learning in virtual medical care (pp. 1–13).
8. Dwyer, D. B., Falkai, P., & Koutsouleris, N. (2018). Machine learning approaches for clinical
psychology and psychiatry. Annual Review of Clinical Psychology, 14(1), 91–118.
9. Goldberg, P., Sümer, Ö., Stürmer, K., Wagner, W., Göllner, R., Gerjets, P., Kasneci, E., &
Trautwein, U. (2019). Attentive or not? Toward a machine learning approach to assessing
students’ visible engagement in classroom instruction. Educational Psychology Review, 33(2),
27–49.
10. Han, S., Liu, S., Li, Y., Li, W., Wang, X., Gan, Y., et al. (2020). Why do you attract me but not
others? Retrieval of person knowledge and its generalization bring diverse judgments of facial
attractiveness. Social Neuroscience, 15(1), 505–515.
11. Huang, C. (2017). Combining convolutional neural networks for emotion recognition. In
Proceedings of the 2017 IEEE MIT Undergraduate Research Technology Conference (URTC),
Cambridge, UK (pp. 1–4).
12. Jacobucci, R., Littlefield, A. K., Millner, A. J., Kleiman, E., & Steinley, D. (2020). Pairing
machine learning and clinical psychology: How you evaluate predictive performance matters.
Sensor, 23(1), 1–8.
13. Soobia, S., Afnizanfaizal, A., & Jhanjhi, N. Z. (2021). Implementation of donor recogni-
tion and selection for bioinformatics blood bank application. In Advanced AI techniques and
applications in bioinformatics (pp. 105–138). CRC Press.
14. Soobia, S., Habibollah, H., & Jhanjhi, N. Z. (2021). A systematic mapping: Study of low-grade
tumor of brain cancer and CSF fluid detecting in MRI images. In Approaches and applications
of deep learning in virtual medical care (pp. 1–25).
15. Soobia, S., & Habibollah, H. (2021). A systematic mapping study of: Low-grade tumor of brain
cancer and CSF fluid detecting approaches and parameters. In Approaches and applications of
deep learning in virtual medical care (pp. 1–30).
16. Karstoft, K.-I., Galatzer-Levy, I. R., Statnikov, A., Li, Z., & Shalev, A. Y. (2015). Bridging a
translational gap: Using machine learning to improve the prediction of PTSD. BMC Psychiatry,
15(1), 30–38.
17. Lebedeva, I., Ying, F., & Guo, Y. (2022). Personalized facial beauty assessment: A meta-
learning approach. Computers & Graphics, 98(1), 1–13.
18. Lee, J., Mawla, I., Kim, J., Loggia, M. L., Ortiz, A., Jung, C., Chan, S.-T., Gerber, J.,
Schmithorst, V. J., Edwards, R. R., Wasan, A. D., Berna, C., Kong, J., Kaptchuk, T. J., Gollub,
R. L., Rosen, B. R., & Napadow, V. (2018). Machine learning-based prediction of clinical pain
using multimodal neuroimaging and autonomic metrics. Pain, 160(1), 550–560.
19. Liang, L., Lin, L., Jin, L., Xie, D., & Li, M. (2018). SCUT-FBP5500: A diverse bench-
mark dataset for multi-paradigm facial beauty prediction. In Proceedings of the 2018 24th
International Conference on Pattern Recognition (ICPR) (pp. 1598–1603). IEEE.
20. Nadji-Tehrani, M., & Eslami, A. (2020). A brain-inspired framework for evolutionary artificial
general intelligence. IEEE Transactions on Neural Networks and Learning Systems, 31(12),
5257–5271.
21. Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences.
Artificial Intelligence, 267(1), 1–38.
22. Papini, S., Pisner, D., Shumake, J., Powers, M. B., Beevers, C. G., Rainey, E. E., Smits, J.
A. J., & Warren, A. M. (2018). Ensemble machine learning prediction of posttraumatic stress
disorder screening status after emergency room hospitalization. Journal of Anxiety Disorders,
60(1), 35–42.
23. Picard, R. W. (2003). Affective computing: Challenges. International Journal of Human-
Computer Studies, 59(1), 55–64.
24. Pradhan, N., Singh, A. S., & Singh, A. (2020). Cognitive computing: Architecture, tech-
nologies and intelligent applications. Special Section on Human-Centered Smart Systems and
Technologies, 3(1), 25–50.
25. Soobia, S., Afnizanfaizal, A., & Jhanjhi, N. Z. (2021). Statistical analysis the pre and
post-surgery of health care sector using high dimension segmentation. In Machine learning
healthcare: Handling and managing data (pp. 1–25).
26. Soobia, S., Afnizanfaizal, A., & Jhanjhi, N. Z. (2021). Performance analysis of machine
learning algorithm for health care tools with high dimension segmentation. In Machine learning
healthcare: Handling and managing data (pp. 1–30).
27. Savci, M., Tekin, A., & Elhai, J. D. (2020). Prediction of problematic social media use (PSU)
using machine learning approaches. Current Psychology, 41(1), 2755–2764.
28. Schnack, H. G. (2019). Improving individual predictions: Machine learning approaches
for detecting and attacking heterogeneity in schizophrenia (and other psychiatric diseases).
Schizophrenia Research, 214(1), 34–42.
29. Shi, Y., & Li, C. (2018). Exploration of computer emotion decision based on artificial intelli-
gence. In Proceedings of the 2018 International Conference on Virtual Reality and Intelligent
Systems (ICVRIS), Hunan, China (pp. 293–295). IEEE.
30. Simon, H. A. (1987). Making management decisions: The role of intuition and emotion.
Academy of Management Perspectives, 1(1), 57–64.
31. Vahdati, E., & Suen, C. Y. (2021). Facial beauty prediction from facial parts using multi-task
and multi-stream convolutional neural networks. International Journal of Pattern Recognition
on Artificial Intelligence, 35(2), 216–220.
32. Yang, G. Z., Dario, P., & Kragic, D. (2018). Social robotics—trust, learning, and social
interaction. Journal of Social Robotics, 12(3), 1–12.
33. Yarkoni, T., & Westfall, J. (2017). Choosing prediction over explanation in psychology: Lessons
from machine learning. Perspectives on Psychological Science, 12(1), 1100–1122.
34. Zador, A. M. (2019). A critique of pure learning and what artificial neural networks can learn
from animal brains. Nature Communications, 10(1), 1–7.
35. Zhang, M., He, C., & Zuo, K. (2019). Data-driven research on the matching degree of eyes,
eyebrows and face shapes. Frontier Psychology, 10(1), 1466.
36. Soobia, S., Afnizanfaizal, A., & Jhanjhi, N. Z. (2022). Hybrid graph cut hidden Markov model
of k-mean cluster technique. CMC-Computers, Materials & Continua, 72(1), 1–15.
37. Zhao, J., Cao, M., Xie, X., Zhang, M., & Wang, L. (2019). Data-driven facial attractiveness
of Chinese male with epoch characteristics. Digital Object Identifier (IEEE Access), 7(1),
10956–10966.
Study of SEIRV Epidemic Model
in Infected Individuals in Imprecise
Environment
Abstract Mathematical modelling has given multiple scientific disciplines a new avenue for analyzing the dynamics of epidemic models. Vaccination is a simple, safe, and efficient way to shield people from dangerous diseases before they come into contact with them. In this paper, we consider an epidemic model in which the entire population is classified into five classes: susceptible, exposed, infected, recovered and vaccinated. When uncertainty is introduced, the epidemic model's scenario transforms. To overcome such a situation we consider the SEIRV model in an imprecise environment. All parameters of the model are taken as interval numbers in order to construct an improved epidemic SEIRV model. The non-negative feasible steady states, namely the DFE (disease-free equilibrium) and EE (endemic equilibrium), and their stability criteria have been analyzed in the interval environment. In the end, extensive numerical simulations verify all of the analytical findings.
A. Acharya (B)
Department of Mathematics, Swami Vivekananda Institute of Modern Science, Karbala
More 700103, West Bengal, India
e-mail: [email protected]
S. Paul
Department of Mathematics, Arambagh Government Polytechnic, Arambagh, West Bengal, India
M. A. Biswas
Department of Mathematics, Gobardanga Hindu College, P.O.-Khantura, 24 Parganas (North),
Gobardanga 743252, West Bengal, India
A. Mahata
Mahadevnagar High School, Maheshtala, Kolkata 700141, West Bengal, India
S. Mukherjee
Department of Mathematics, Gurudas College, Kolkata 700054, West Bengal, India
B. Roy
Department of Mathematics, Bangabasi Evening College, Kolkata 700009, West Bengal, India
1 Introduction
2 Preliminaries
Definition The interval $[T_{m_1}, T_{n_1}]$ can also be written as $k_1(\eta) = (T_{m_1})^{1-\eta}(T_{n_1})^{\eta}$ for the parameter $\eta \in [0, 1]$, which is called the parametric form of the interval number.
\ldots $\left(\max\{T_{m_1}R_{m_1},\, T_{m_1}R_{n_1},\, T_{n_1}R_{m_1},\, T_{n_1}R_{n_1}\}\right)^{\eta}$,
4. $y\varsigma_1(\eta) = e(\eta) = y(T_{m_1})^{1-\eta}(T_{n_1})^{\eta}$ if $y > 0$, and $= y(T_{n_1})^{1-\eta}(T_{m_1})^{\eta}$ if $y < 0$,
5. $p_1(\eta) = \dfrac{k_1(\eta)}{h_1(\eta)} = \left(\min\left\{\dfrac{T_{m_1}}{R_{m_1}}, \dfrac{T_{m_1}}{R_{n_1}}, \dfrac{T_{n_1}}{R_{m_1}}, \dfrac{T_{n_1}}{R_{n_1}}\right\}\right)^{1-\eta}\left(\max\left\{\dfrac{T_{m_1}}{R_{m_1}}, \dfrac{T_{m_1}}{R_{n_1}}, \dfrac{T_{n_1}}{R_{m_1}}, \dfrac{T_{n_1}}{R_{n_1}}\right\}\right)^{\eta}$,
where $g_1(\eta)$, $s_1(\eta)$, $r_1(\eta)$, $y\varsigma_1(\eta)$, $p_1(\eta)$, $e(\eta)$ denote the interval-valued functions for the constant $\varsigma_1$ and $\eta \in [0, 1]$.
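As a quick illustration of the parametric form and the min/max rules above, the sketch below evaluates k1(η) = (Tm1)^(1−η)(Tn1)^η together with interval products and quotients. It is a generic helper written for this text, not code from the paper; the example interval [0.21, 0.31] is one of the parameter ranges quoted later in the numerical section.

```python
def param_form(lo, hi, eta):
    """Parametric form of the interval [lo, hi]: lo**(1 - eta) * hi**eta, eta in [0, 1]."""
    return lo ** (1 - eta) * hi ** eta

def interval_product(A, B):
    """Product of two positive intervals via the min/max of the cross products."""
    cross = [A[0] * B[0], A[0] * B[1], A[1] * B[0], A[1] * B[1]]
    return min(cross), max(cross)

def interval_quotient(A, B):
    """Quotient of two positive intervals (0 not in B) via the min/max of the cross ratios."""
    cross = [A[0] / B[0], A[0] / B[1], A[1] / B[0], A[1] / B[1]]
    return min(cross), max(cross)

# Example with the interval [0.21, 0.31] used in the numerical simulations.
for eta in (0.0, 0.5, 1.0):
    print(eta, round(param_form(0.21, 0.31, eta), 4))
print(interval_product((0.21, 0.31), (0.4, 0.6)))
print(interval_quotient((0.21, 0.31), (0.4, 0.6)))
```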
3 Model Formulation
The whole population (N) is divided into five groups: susceptible (S), exposed (E),
infected (I), recovered (R), and vaccinated (V) at any time t ≥ 0, thus N(t) = S(t) +
E(t) + I(t) + R(t) + V (t).
Considering the SEIRV model [24] as a system of five differential equations in S(t), E(t), I(t), R(t) and V(t), where in particular the recovered class satisfies
\[
\frac{dR}{dt} = \hat{\mu}_2 I(t) - \hat{\mu}_0 R(t).
\]
From the model system (2), setting dS/dt = dE/dt = dI/dt = dR/dt = dV/dt = 0 gives two equilibrium points, namely the disease-free equilibrium (DFE) point P_DFE and the endemic equilibrium (EE) point P_EE.
Here
\[
P_{DFE} = \left(\frac{\Lambda_1^{1-\xi_1}\Lambda_2^{\xi_1}}{\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\delta_2^{1-\xi_1}\delta_1^{\xi_1}},\; 0,\; 0,\; 0,\; \frac{\delta_1^{1-\xi_1}\delta_2^{\xi_1}\,\Lambda_1^{1-\xi_1}\Lambda_2^{\xi_1}}{\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}\left(\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\delta_2^{1-\xi_1}\delta_1^{\xi_1}\right)}\right)
\]
and $P_{EE}(S^*, E^*, I^*, R^*, V^*)$, with EE components
\[
S^* = \frac{\left(\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1}\right)\left(\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\mu_{21}^{1-\xi_1}\mu_{22}^{\xi_1}\right)}{\beta_2^{1-\xi_1}\beta_1^{\xi_1}\,\mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1}}, \qquad
E^* = \frac{\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\mu_{21}^{1-\xi_1}\mu_{22}^{\xi_1}}{\mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1}}\, I^*,
\]
\[
I^* = \frac{\Lambda_1^{1-\xi_1}\Lambda_2^{\xi_1}\,\mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1}}{\left(\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1}\right)\left(\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\mu_{21}^{1-\xi_1}\mu_{22}^{\xi_1}\right)} - \frac{\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\delta_1^{1-\xi_1}\delta_2^{\xi_1}}{\beta_2^{1-\xi_1}\beta_1^{\xi_1}},
\]
\[
R^* = \frac{\mu_{21}^{1-\xi_1}\mu_{22}^{\xi_1}}{\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}}\, I^*, \qquad
V^* = \frac{\left(\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1}\right)\left(\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\mu_{21}^{1-\xi_1}\mu_{22}^{\xi_1}\right)\delta_1^{1-\xi_1}\delta_2^{\xi_1}}{\beta_2^{1-\xi_1}\beta_1^{\xi_1}\,\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}\,\mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1}}.
\]
The reproduction number can be evaluated as the greatest eigenvalue of the matrix $XY^{-1}$ [24], where
\[
X = \begin{pmatrix} 0 & \dfrac{\beta_2^{1-\xi_1}\beta_1^{\xi_1}\,\Lambda_1^{1-\xi_1}\Lambda_2^{\xi_1}}{\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\delta_2^{1-\xi_1}\delta_1^{\xi_1}} \\ 0 & 0 \end{pmatrix}
\quad\text{and}\quad
Y = \begin{pmatrix} \mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1} & 0 \\ -\mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1} & \mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\mu_{21}^{1-\xi_1}\mu_{22}^{\xi_1} \end{pmatrix}.
\]
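The reproduction number can then be computed numerically as the spectral radius of X Y^{-1}. The sketch below does this for matrices of the form given above; the Λ, β, δ and μ2 intervals are the ones quoted in Sect. 4, while the μ0 and μ1 intervals are hypothetical placeholders, and the ordering of the interval endpoints in the exponents is simplified relative to the text.

```python
import numpy as np

def param_value(lo, hi, xi):
    """Parametric value lo**(1 - xi) * hi**xi of an interval parameter."""
    return lo ** (1 - xi) * hi ** xi

def reproduction_number(Lam, beta, delta, mu0, mu1, mu2, xi=0.5):
    """Greatest eigenvalue of X Y^{-1}, with X and Y built as in the text.

    Each argument is an interval (lo, hi); the values passed below are only
    partly taken from Sect. 4 and otherwise hypothetical.
    """
    L, b, d = param_value(*Lam, xi), param_value(*beta, xi), param_value(*delta, xi)
    m0, m1, m2 = param_value(*mu0, xi), param_value(*mu1, xi), param_value(*mu2, xi)
    X = np.array([[0.0, b * L / (m0 + d)],
                  [0.0, 0.0]])
    Y = np.array([[m0 + m1, 0.0],
                  [-m1, m0 + m2]])
    return max(abs(np.linalg.eigvals(X @ np.linalg.inv(Y))))

print(reproduction_number(Lam=(0.4, 0.6), beta=(0.09, 0.12), delta=(0.001, 0.003),
                          mu0=(0.01, 0.02), mu1=(0.1, 0.2), mu2=(0.21, 0.31), xi=0.6))
```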
In this part we have analyzed two states of equilibrium: (i) DFE and (ii) EE.
Theorem 2 The DFE point of the system (2) is stable when R0 < 1 and the system
(2) is unstable when R0 > 1.
Proof Consider the Jacobian matrix of the model system (2) at the DFE, given by
\[
J_{DFE} =
\begin{pmatrix}
-\left(\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\delta_1^{1-\xi_1}\delta_2^{\xi_1}\right) & 0 & -\dfrac{\beta_1^{1-\xi_1}\beta_2^{\xi_1}\Lambda_1^{1-\xi_1}\Lambda_2^{\xi_1}}{\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\delta_1^{1-\xi_1}\delta_2^{\xi_1}} & 0 & 0 \\[4pt]
0 & -\left(\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\delta_{11}^{1-\xi_1}\delta_{12}^{\xi_1}\right) & \dfrac{\beta_1^{1-\xi_1}\beta_2^{\xi_1}\Lambda_1^{1-\xi_1}\Lambda_2^{\xi_1}}{\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\delta_1^{1-\xi_1}\delta_2^{\xi_1}} & 0 & 0 \\[4pt]
0 & \delta_{11}^{1-\xi_1}\delta_{12}^{\xi_1} & -\left(\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}+\delta_{21}^{1-\xi_1}\delta_{22}^{\xi_1}\right) & 0 & 0 \\[4pt]
0 & 0 & \delta_{21}^{1-\xi_1}\delta_{22}^{\xi_1} & -\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1} & 0 \\[4pt]
\delta_1^{1-\xi_1}\delta_2^{\xi_1} & 0 & 0 & 0 & -\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}
\end{pmatrix}.
\]
where
\[
A_1 = 3\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1} + \mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1} + \mu_{21}^{1-\xi_1}\mu_{22}^{\xi_1} + \beta_2^{1-\xi_1}\beta_1^{\xi_1} I^* + \delta_2^{1-\xi_1}\delta_1^{\xi_1},
\]
\[
B_1 = 3\left(\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}\right)^2 + 2\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}\mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1} + 2\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}\mu_{21}^{1-\xi_1}\mu_{22}^{\xi_1} + \mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1}\mu_{21}^{1-\xi_1}\mu_{22}^{\xi_1} - \beta_2^{1-\xi_1}\beta_1^{\xi_1}\mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1}\delta_2^{1-\xi_1}\delta_1^{\xi_1} + \left(2\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1} + \mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1} + \mu_{21}^{1-\xi_1}\mu_{22}^{\xi_1}\right)\left(\beta_2^{1-\xi_1}\beta_1^{\xi_1} I^* + \delta_2^{1-\xi_1}\delta_1^{\xi_1}\right),
\]
\[
C_1 = \left(\beta_2^{1-\xi_1}\beta_1^{\xi_1} I^* + \mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1} + \delta_2^{1-\xi_1}\delta_1^{\xi_1}\right)\Big\{\left(\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}\right)^2 + \mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1}\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1} + \mu_{21}^{1-\xi_1}\mu_{22}^{\xi_1}\mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1} + \mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1}\mu_{21}^{1-\xi_1}\mu_{22}^{\xi_1} - \beta_2^{1-\xi_1}\beta_1^{\xi_1}\mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1}\delta_2^{1-\xi_1}\delta_1^{\xi_1}\Big\} - \left(\beta_2^{1-\xi_1}\beta_1^{\xi_1}\right)^2\mu_{11}^{1-\xi_1}\mu_{12}^{\xi_1}\, S^* I^*,
\]
\[
D_1 = \mu_{02}^{1-\xi_1}\mu_{01}^{\xi_1}.
\]
The eigenvalues of the characteristic equation (3) are −D_1, −D_1, and the remaining eigenvalues satisfy λ^3 + A_1λ^2 + B_1λ + C_1 = 0. By the Routh–Hurwitz stability criteria for the model system (2), we have A_1 > 0, B_1 > 0, C_1 > 0 and A_1B_1 > C_1 if R_0 < 1. Therefore, the model system (2) is stable at the EE point when R_0 < 1.
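A quick numerical check of these Routh–Hurwitz conditions can be scripted as below; the coefficient values are arbitrary placeholders, the point being only the pattern of tests A1 > 0, B1 > 0, C1 > 0 and A1B1 > C1.

```python
def routh_hurwitz_3(a1, a2, a3):
    """Routh-Hurwitz test for lambda^3 + a1*lambda^2 + a2*lambda + a3 = 0:
    all roots have negative real parts iff a1 > 0, a3 > 0 and a1*a2 > a3."""
    return a1 > 0 and a3 > 0 and a1 * a2 > a3

# Placeholder coefficients standing in for A1, B1, C1 of the characteristic equation.
A1, B1, C1 = 1.2, 0.9, 0.4
print(routh_hurwitz_3(A1, B1, C1))   # True -> the cubic factor is stable
```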
4 Numerical Simulation
To discuss model (2) numerically, we consider [6] the parameters Â = [0.01, …], …, μ̂2 = [0.21, 0.31], b̂ = [0.021, 0.051]. Figure 1 is plotted using these values for different ξ1 (= 0, 0.6, 1).
We also consider [6] the parameters Â = [0.4, 0.6], β̂ = [0.09, 0.12], δ̂ = [0.001, 0.003], …
Fig. 1 Using the above parameter value of the model (2) we plotted a, b and c for different value
of ξ1 = 0, 0.6, 1. This figure reflects the system is stable at DFE when R0 > 1 for t ∈ [0, 1000]
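Because only a fragment of the governing equations appears above, the simulation sketch below assumes a standard SEIRV structure (recruitment Λ, transmission β, vaccination rate δ, natural removal μ0, progression μ1, recovery μ2). It is a hedged illustration of how trajectories like those in Figs. 1 and 2 can be generated for a fixed ξ1, not a reproduction of the authors' exact model (2); the μ0 and μ1 intervals and the initial state are hypothetical, while the other intervals are taken from the parameter lists quoted above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def interval_value(lo, hi, xi):
    """Parametric value lo**(1 - xi) * hi**xi of an interval parameter."""
    return lo ** (1 - xi) * hi ** xi

def seirv_rhs(t, y, Lam, beta, delta, mu0, mu1, mu2):
    """Assumed SEIRV right-hand side (standard form, not necessarily the paper's Eq. (2))."""
    S, E, I, R, V = y
    dS = Lam - beta * S * I - (mu0 + delta) * S
    dE = beta * S * I - (mu0 + mu1) * E
    dI = mu1 * E - (mu0 + mu2) * I
    dR = mu2 * I - mu0 * R          # this equation matches the fragment shown in Sect. 3
    dV = delta * S - mu0 * V
    return [dS, dE, dI, dR, dV]

xi = 0.6   # one of the xi_1 values used in Figs. 1 and 2
params = dict(
    Lam=interval_value(0.4, 0.6, xi),        # interval quoted in Sect. 4
    beta=interval_value(0.09, 0.12, xi),     # interval quoted in Sect. 4
    delta=interval_value(0.001, 0.003, xi),  # interval quoted in Sect. 4
    mu0=interval_value(0.01, 0.02, xi),      # hypothetical interval
    mu1=interval_value(0.1, 0.2, xi),        # hypothetical interval
    mu2=interval_value(0.21, 0.31, xi),      # interval quoted in Sect. 4
)
sol = solve_ivp(seirv_rhs, (0, 1000), y0=[0.9, 0.05, 0.03, 0.01, 0.01],
                args=tuple(params.values()))
print(sol.y[:, -1])   # long-run state for t in [0, 1000]
```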
5 Conclusions
Fig. 2 Using the above parameter value of the model (2) we plotted a, b and c for different value
of ξ1 = 0, 0.6, 1. This figure reflects that the system is stable at EE when R0 < 1 for t ∈ [0, 1000]
is LAS. All relevant theorems and results have been checked through numerical simulation, and the figures have been verified with Matlab software in an elegant way. Various types of epidemic models in a neutrosophic environment are left for the near future.
References
1. Mahata, A., Paul, S., Mukherjee, S., & Roy, B. (2022). Stability analysis and Hopf bifurcation
in fractional order SEIRV epidemic model with a time delay in infected individuals. Partial
Differential Equations in Applied Mathematics, 5, 100282.
2. Poonia, R. C., Saudagar, A. K. J., Altameem, A., Alkhathami, M., Khan, M. B., & Hasanat, M.
H. A. (2022). An enhanced SEIR model for prediction of COVID-19 with vaccination effect.
Life, 12, 647.
3. Sutton, K. M. (2014). Discretizing the SI epidemic model. Rose-Hulman Undergraduate
Mathematics Journal, 15(1), 12.
4. Mahata, A., Mondal, S. P., Ahmadian, A., Ismail, F., Alam, S., & Salahshour, S. (2018).
Different solution strategies for solving epidemic model in imprecise environment. Complexity,
2018(2), 1–18.
Study of SEIRV Epidemic Model in Infected Individuals in Imprecise … 379
5. Cooper, I., Mondal, A., & Antonopoulos, C. G. (2020). A SIR model assumption for the spread
of COVID-19 in different communities. Chaos, Solitons & Fractals, 139, 110057.
6. Paul, S., Mahata, A., Ghosh, U., & Roy, B. (2021). SEIR epidemic model and scenario analysis
of COVID-19 pandemic. Ecological Genetics and Genomics, 19, 100087.
7. Paul, S., Mahata, A., Mukherjee, S., et al. (2022). Study of fractional order SEIR epidemic
model and effect of vaccination on the spread of COVID-19. International Journal of Applied
and Computational Mathematics, 8, 237.
8. Paul, S., Mahata, A., Mukherjee, S., & Roy, B. (2022). Dynamics of SIQR epidemic model with
fractional order derivative. Partial Differential Equations in Applied Mathematics, 5, 100216.
9. Youssef, H., Alghamdi, N., Ezzat, M. A., El-Bary, A. A., & Shawky, A. M. (2021). Study on the SEIQR model and applying the epidemiological rates of COVID-19 epidemic spread in Saudi Arabia. Infectious Disease Modelling, 6, 678–692.
10. Pal, D., Mahapatra, G. S., & Samanta, G. P. (2013). Optimal harvesting of prey-predator
system with interval biological parameters: A bioeconomic model. Mathematical Biosciences,
241, 181–187.
11. Pal, D., & Mahapatra, G. S. (2015). Dynamic behavior of a predator–prey system of combined
harvesting with interval-valued rate parameters. Nonlinear Dynamics, 83, 2113–2123.
12. Xiao, Q., Dai, B., & Wang, L. (2015). Analysis of a competition fishery model with interval-
valued parameters: Extinction, coexistence, bionomic equilibria and optimal harvesting policy.
Nonlinear Dynamics, 80(3), 1631.
13. Mahata, A., Mondal, S. P., Roy, B., et al. (2021). Study of two species prey-predator model
in imprecise environment with MSY policy under different harvesting scenario. Environment,
Development and Sustainability, 23, 14908–14932.
14. Zhang, X., & Zhao, H. (2014). Bifurcation and optimal harvesting of a diffusive predator–prey
system with delays and interval biological parameters. Journal of Theoretical Biology, 363,
390–403.
15. Wang, Q., Liu, Z., Zhang, X., & Cheke, R. (2015). Incorporating prey refuge into a predator–prey system with imprecise parameter estimates. Computational and Applied Mathematics, 36, 1067–1084.
16. Zhao, H., & Wang, L. (2022). Stability and Hopf bifurcation in a reaction–diffusion predator–
prey system with interval biological parameters and stage structure. Nonlinear Dynamics, 11,
575.
17. Mahata, A., Mondal, S. P., Roy, B., et al. (2020). Influence of impreciseness in designing
tritrophic level complex food chain modeling in interval environment. Advances in Difference
Equations, 399.
18. Das, S., Mahato, P., & Mahato, S. K. (2020). A Prey predator model in case of disease transmis-
sion via pest in uncertain environment. Differential Equation and Dynamical System. https://
doi.org/10.1007/s12591-020-00551-7
19. Mahata, A., Mondal, S. P., Alam, S., & Roy, B. (2017). Mathematical model of glucose-insulin
regulatory system on diabetes mellitus in fuzzy and crisp environment. Ecological Genetics
and Genomics, 2, 25–34.
20. Santra, P. K., & Mahapatra, G. S. (2020). Dynamical study of discrete-time prey predator
model with constant prey refuge under imprecise biological parameters. Journal of Biological
Systems, 28(3), 681–699.
21. Das, A., & Pal, M. (2017). A mathematical study of an imprecise SIR epidemic model with
treatment control. Journal of Applied Mathematics and Computing, 56, 477–500.
22. Acharya, A., Mahata, A., Alam, S., Ghosh, S., & Roy, B. (2022). Analysis of an imprecise
delayed SIR model system with Holling type-III treatment rate. In S.L. Peng, C.K. Lin, &
S. Pal (Eds.), Proceedings of 2nd International Conference on Mathematical Modeling and
Computational Science. Advances in Intelligent Systems and Computing (Vol. 1422).
23. Paul, S., Mahata, A., Mukherjee, S., Mali, P. C., & Roy, B. (2022). Mathematical model for
tumor-immune interaction in imprecise environment with stability analysis. In S. Banerjee &
A. Saha (Eds.), Nonlinear dynamics and applications (pp. 935–946). Springer Proceedings in
Complexity. Springer.
24. Mahata, A., Paul, S., Mukherjee, S., et al. (2022). Dynamics of Caputo fractional order SEIRV
epidemic model with optimal control and stability analysis. International Journal of Applied
and Computational Mathematics, 8, 28.
Study of a Fuzzy Prey Predator
Harvested Model: Generalised Hukuhara
Derivative Approach
B. Manna · S. Mondal
Department of Mathematics, Swami Vivekananda University, Barasat–Barrackpore Rd, Bara
Kanthalia, West Bengal 700121, India
e-mail: [email protected]
A. Acharya
Department of Mathematics, Swami Vivekananda Institute of Modern Science, Karbala More,
Kolkata, West Bengal 700103, India
S. Paul
Arambagh Govt Polytechnic, Arambagh, West Bengal, India
A. Mahata (B)
Mahadevnagar High School, Maheshtala, Kolkata, West Bengal 700141, India
e-mail: [email protected]
B. Roy
Department of Mathematics, Bangabasi Evening College, Kolkata, West Bengal 700009, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_31
1 Introduction
Lotka [1] and Volterra [2] initiated research in the field of ecology. Although Malthus
[3] presented the first theoretical treatment of population kinetics, Verhulst [4] developed
a logistic equation-based model. In the Lotka–Volterra system, direct intervention is assumed
to reduce the per capita growth rates of both species. For biologists interested in the
consequences of competitive interactions between species, the Lotka–Volterra equations of
exploitative competition have served as a suitable starting point. The model's assumptions
may not be very realistic, but they are necessary for simplification. The outcome of
competitive interactions, and hence the dynamics of one or both populations, can be affected
by a variety of non-model factors, such as environmental change, illness and chance. Over the
past few decades, ecologists have therefore shown great interest in formulating different types
of harvested models [5–8] (prey, predator or both harvested) in order to capture the dynamics
of biological phenomena more faithfully.
In the field of biological science, many authors have built their systems entirely on the
presumption that the model parameters are known exactly. In reality, however, the parameters
of a model system are not exact, because of inaccuracy in data collection, measurement error,
technical error and climate change. To overcome this kind of imprecision, several approaches
are used: the fuzzy differential equation (FDE) approach, the interval differential equation
(IDE) approach and stochastic approaches.
In recent times, the "fuzzy differential equation" (FDE) has become increasingly
popular. Kaleva [9] introduced the idea of the FDE. Bede [10] demonstrated that the
Hukuhara derivative cannot solve a certain class of boundary value problems in the FDE
setting. To overcome these demerits, the notion of the generalised derivative was studied
in [11, 12], and FDEs were explored from this perspective in [13–16], which showed that
FDEs play a vital role in the mathematical modelling of biological science. In those papers
a diabetes model was considered, and it was discussed how a fuzzy diabetes system is framed
as a system of differential equations using generalised Hukuhara derivatives (H-derivatives).
Fuzzy population models were solved in [17, 18]. Jafelice et al. [19] developed a fuzzy
model for the HIV-infected population. The fuzzy stability of a diabetes model system was
investigated by Mahata et al. [20] and Roy et al. [21]. Since then, several notable works on
bio-mathematical modelling based on fuzzy differential equations have been published
(see [22–24]). Motivated by the above, we consider the Lotka–Volterra system with
harvesting [6] in a fuzzy environment.
The paper is arranged as follows: Sect. 2 contains the basic concepts. Model formulation
and stability analysis are presented in Sect. 3. A numerical illustration is given in Sect. 4.
Section 5 contains the conclusion of the work.
2 Pre-requisite Concept
Definition 2.1 A triangular fuzzy number (TrFN) $\tilde{R} = (Q_{11}, Q_{12}, Q_{13})$ has the membership function

$$\mu_{\tilde{R}}(x_1) = \begin{cases} 0, & x_1 \le Q_{11} \\ \dfrac{x_1 - Q_{11}}{Q_{12} - Q_{11}}, & Q_{11} \le x_1 \le Q_{12} \\ 1, & x_1 = Q_{12} \\ \dfrac{Q_{13} - x_1}{Q_{13} - Q_{12}}, & Q_{12} \le x_1 \le Q_{13} \\ 0, & x_1 \ge Q_{13} \end{cases}$$

Definition 2.2 The α-cut of $\tilde{R}$ is given by

$$R_\alpha = \big[\,Q_{11} + \alpha(Q_{12} - Q_{11}),\; Q_{13} - \alpha(Q_{13} - Q_{12})\,\big], \quad \forall \alpha,\ 0 \le \alpha \le 1.$$
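As a brief illustration of Definition 2.2, the following Python sketch computes the α-cut of a triangular fuzzy number; the TrFN used is the prey initial value that appears later in Sect. 4, and the loop values of α are chosen only for demonstration.

```python
def alpha_cut(tri, alpha):
    """alpha-cut [left, right] of a triangular fuzzy number (Q11, Q12, Q13)."""
    q11, q12, q13 = tri
    left = q11 + alpha * (q12 - q11)    # lower end rises towards the peak
    right = q13 - alpha * (q13 - q12)   # upper end falls towards the peak
    return left, right

# example: the prey TrFN (10, 15, 30) used in Sect. 4
for a in (0.0, 0.6, 1.0):
    print(a, alpha_cut((10, 15, 30), a))
```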
3 Model Formulation
We consider the harvested Lotka–Volterra prey–predator system [6]

$$\frac{dv_1(t)}{dt} = r v_1(t) - p_1 v_1(t) v_2(t) - h_1 E v_1(t)$$
$$\frac{dv_2(t)}{dt} = -s v_2(t) + p_2 v_1(t) v_2(t) - h_2 E v_2(t)$$

with the initial conditions $v_1(0) = v_{01}$, $v_2(0) = v_{02}$, where $v_1$ and $v_2$ denote the prey and predator densities, $r$ is the growth rate of the prey, $s$ is the death rate of the predator, $p_1$ and $p_2$ are the interaction rates, $E$ is the harvesting effort and $h_1$, $h_2$ are the catchability coefficients of prey and predator respectively.

Treating the populations as fuzzy-valued functions, the fuzzy model becomes

$$\frac{d\tilde{v}_1(t)}{dt} = r \tilde{v}_1(t) - p_1 \tilde{v}_1(t)\tilde{v}_2(t) - h_1 E \tilde{v}_1(t)$$
$$\frac{d\tilde{v}_2(t)}{dt} = -s \tilde{v}_2(t) + p_2 \tilde{v}_1(t)\tilde{v}_2(t) - h_2 E \tilde{v}_2(t) \tag{1}$$

Here $\tilde{v}_1(t)$ and $\tilde{v}_2(t)$ are generalised Hukuhara differentiable, and the following cases arise:
Case 1: When ṽ1 (t) and ṽ2 (t) are (i) gHD:
Case 2: When ṽ1 (t) and ṽ2 (t) are (ii) gHD.
Case 3: When ṽ1 (t) is (i) gHD and ṽ2 (t) is (ii) gHD.
Case 4: When ṽ1 (t) is (ii) gHD and ṽ2 (t) is (i) gHD.
For convenience, in this paper we take the first two cases. Considering that the initial
conditions are fuzzy numbers, we discuss the stability analysis of the first two cases as
follows.
3.1 Case 1
$$\frac{dv_{1L}(t,\alpha)}{dt} = r v_{1L}(t,\alpha) - p_1 v_{1R}(t,\alpha) v_{2L}(t,\alpha) - h_1 E v_{1R}(t,\alpha)$$
$$\frac{dv_{1R}(t,\alpha)}{dt} = r v_{1R}(t,\alpha) - p_1 v_{1L}(t,\alpha) v_{2R}(t,\alpha) - h_1 E v_{1L}(t,\alpha)$$
$$\frac{dv_{2L}(t,\alpha)}{dt} = -s v_{2R}(t,\alpha) + p_2 v_{1L}(t,\alpha) v_{2L}(t,\alpha) - h_2 E v_{2R}(t,\alpha)$$
$$\frac{dv_{2R}(t,\alpha)}{dt} = -s v_{2L}(t,\alpha) + p_2 v_{1R}(t,\alpha) v_{2R}(t,\alpha) - h_2 E v_{2L}(t,\alpha) \tag{2}$$

with the initial conditions $v_{1L}(0,\alpha) = v_{01L}(\alpha)$, $v_{1R}(0,\alpha) = v_{01R}(\alpha)$, $v_{2L}(0,\alpha) = v_{02L}(\alpha)$, $v_{2R}(0,\alpha) = v_{02R}(\alpha)$.
The characteristic equation of system (2) is
$$\lambda^4 + r_1\lambda^3 + r_2\lambda^2 + r_3\lambda + r_4 = 0,$$
where $r_i < 0$ for $i = 1, 3$, $r_j > 0$ for $j = 2, 4$, and $r_1 r_2 - r_3 < 0$ if $r u(r + u) > \dfrac{3 p_1 l u r p_2 + p_1 l u\, p_2^2}{8}$.
Using the stability condition of the Routh–Hurwitz criterion, system (2) is unstable at $E_{12}\left(v_{1L}^c, v_{1R}^c, v_{2L}^c, v_{2R}^c\right)$.
3.2 Case 2
$$\frac{dv_{1L}(t,\alpha)}{dt} = r v_{1R}(t,\alpha) - p_1 v_{1L}(t,\alpha) v_{2R}(t,\alpha) - h_1 E v_{1L}(t,\alpha)$$
$$\frac{dv_{1R}(t,\alpha)}{dt} = r v_{1L}(t,\alpha) - p_1 v_{1R}(t,\alpha) v_{2L}(t,\alpha) - h_1 E v_{1R}(t,\alpha)$$
$$\frac{dv_{2L}(t,\alpha)}{dt} = -s v_{2L}(t,\alpha) + p_2 v_{1R}(t,\alpha) v_{2R}(t,\alpha) - h_2 E v_{2L}(t,\alpha)$$
$$\frac{dv_{2R}(t,\alpha)}{dt} = -s v_{2R}(t,\alpha) + p_2 v_{1L}(t,\alpha) v_{2L}(t,\alpha) - h_2 E v_{2R}(t,\alpha) \tag{3}$$
with the initial conditions $v_{1L}(0,\alpha) = v_{01L}(\alpha)$, $v_{1R}(0,\alpha) = v_{01R}(\alpha)$, $v_{2L}(0,\alpha) = v_{02L}(\alpha)$, $v_{2R}(0,\alpha) = v_{02R}(\alpha)$, where the equilibrium components are $v_{1L}^c = v_{1R}^c = \dfrac{h_2 E + s}{p_2}$ and $v_{2L}^c = v_{2R}^c = \dfrac{r - h_1 E}{p_1}$ for $r > h_1 E$.
The characteristic equation is
$$\lambda^4 + \mu_1\lambda^3 + \mu_2\lambda^2 + \mu_3\lambda + \mu_4 = 0,$$
where $\mu_1 = 2(m_1 + r)$, $\mu_2 = 2 r m_1 + m_1 n_1 + 2r^2$, $\mu_3 = m_1 n_1 + 4 r^2 m_1 + n_1 + r m_1 n_1$, $\mu_4 = (2r - 1) n_1 m_1^2 + m_1^2 n_1^2$, with $m_1 = h_2 E + s$ and $n_1 = r - h_1 E$.
Here $\mu_i > 0$, $i = 1, 2, 3, 4$, when $m_1 > 0$, $n_1 > 0$, $r > h_1 E$ and $r > \tfrac{1}{2}$. Moreover,
$$\mu_1\mu_2 - \mu_3 = 2(m_1 + r)\left(2 r m_1 + m_1 n_1 + 2r^2\right) - \left(m_1 n_1 + 4 r^2 m_1 + n_1 + r m_1 n_1\right) > 0,$$
if $2(m_1 + r)\left(2 r m_1 + m_1 n_1 + 2r^2\right)\left(m_1 n_1 + 4 r^2 m_1 + n_1 + r m_1 n_1\right) > \left(m_1 n_1 + 4 r^2 m_1 + n_1 + r m_1 n_1\right)^2 + 4(m_1 + r)^2\left\{(2r - 1) n_1 m_1^2 + m_1^2 n_1^2\right\}$ for $m_1 > 0$, $n_1 > 0$, $r > h_1 E$, and
$$\mu_1\mu_2\mu_3 - \mu_3^2 - \mu_1^2\mu_4 > 0 \quad \text{if } m_1 > 0,\ n_1 > 0,\ r > h_1 E,\ r > \tfrac{1}{2}.$$
Using the stability condition of the Routh–Hurwitz criterion, system (3) is stable at $E_{22}\left(v_{1L}^c, v_{1R}^c, v_{2L}^c, v_{2R}^c\right)$.
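The inequalities above can be checked numerically. The short Python sketch below evaluates $\mu_1,\dots,\mu_4$ and the Routh–Hurwitz conditions for the quartic; the parameter values are placeholders chosen only for illustration, not the values used by the authors.

```python
def rh_quartic_stable(mu1, mu2, mu3, mu4):
    """Routh-Hurwitz conditions for lambda^4 + mu1*l^3 + mu2*l^2 + mu3*l + mu4 = 0."""
    return (mu1 > 0 and mu2 > 0 and mu3 > 0 and mu4 > 0
            and mu1 * mu2 - mu3 > 0
            and mu1 * mu2 * mu3 - mu3**2 - mu1**2 * mu4 > 0)

# placeholder parameters (illustrative only)
r, s, p1, p2, h1, h2, E = 0.8, 0.3, 0.02, 0.01, 0.1, 0.1, 1.0
m1, n1 = h2 * E + s, r - h1 * E
mu1 = 2 * (m1 + r)
mu2 = 2 * r * m1 + m1 * n1 + 2 * r**2
mu3 = m1 * n1 + 4 * r**2 * m1 + n1 + r * m1 * n1
mu4 = (2 * r - 1) * n1 * m1**2 + m1**2 * n1**2
print(rh_quartic_stable(mu1, mu2, mu3, mu4))
```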
4 Numerical Simulation
In this section we analyse numerical results to validate the proposed model system.
Consider the initial numbers of prey and predators as TrFNs: at t = 0 the prey population
density is $\tilde{v}_1(0) = (10, 15, 30)$ and the predator population density is
$\tilde{v}_2(0) = (5, 8, 12)$, where $0 \le \alpha \le 1$; the other parameters are given in
Table 1.
Fig. 1 Fuzzy solution of (i) for α = 0, (ii) for α = 0.6, (iii) for α = 1
$$v_{1L}(0,\alpha) = 10 + 5\alpha,\quad v_{1R}(0,\alpha) = 30 - 5\alpha,\quad v_{2L}(0,\alpha) = 5 + 3\alpha,\quad v_{2R}(0,\alpha) = 12 - 4\alpha. \tag{4}$$
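A minimal Python sketch of how the parametric system (2) can be integrated for a fixed α is given below, using the TrFN initial values of (4). Since Table 1 is not reproduced here, the rate parameters r, s, p1, p2, h1, h2 and E are placeholders, not the authors' values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# placeholder rate parameters (Table 1 values not reproduced here)
r, s, p1, p2, h1, h2, E = 0.8, 0.3, 0.02, 0.01, 0.1, 0.1, 1.0

def case1_rhs(t, y):
    """Right-hand side of the parametric system (2) for one fixed alpha."""
    v1L, v1R, v2L, v2R = y
    return [r * v1L - p1 * v1R * v2L - h1 * E * v1R,
            r * v1R - p1 * v1L * v2R - h1 * E * v1L,
            -s * v2R + p2 * v1L * v2L - h2 * E * v2R,
            -s * v2L + p2 * v1R * v2R - h2 * E * v2L]

alpha = 0.6
y0 = [10 + 5 * alpha, 30 - 5 * alpha, 5 + 3 * alpha, 12 - 4 * alpha]  # alpha-cuts from (4)
sol = solve_ivp(case1_rhs, (0.0, 1.4), y0, dense_output=True)
print(sol.y[:, -1])  # [v1L, v1R, v2L, v2R] at t = 1.4
```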
Using the parameters reported in Table 1 and the TrFN from (4), we plot Fig. 1(i),
(ii) and (iii) for α = 0, α = 0.6 and α = 1 respectively.
We observe in Fig. 1(i), (ii) that $v_{1L}(t,\alpha) \le v_{1R}(t,\alpha)$ and $v_{2L}(t,\alpha) \le v_{2R}(t,\alpha)$, and
in Fig. 1(iii) that $v_{1L}(t,\alpha) = v_{1R}(t,\alpha)$ and $v_{2L}(t,\alpha) = v_{2R}(t,\alpha)$, for $t \in [0, 1.4]$, which implies that
a strong solution of system (2) exists. Clearly, Fig. 1 depicts that the interior equilibrium
point of (2) is unstable.
Using the parametric values reported in Table 1 and taking the TrFN from (4), we
plot Fig. 2(i), (ii) and (iii) for α = 0, α = 0.6 and α = 1 respectively.
We observe in Fig. 2(i), (ii), (iii) that $v_{1L}(t,\alpha) = v_{1R}(t,\alpha)$ and $v_{2L}(t,\alpha) = v_{2R}(t,\alpha)$
for $t \in [0, 400]$, which implies that a strong solution of system (3) exists. Clearly, Fig. 2
depicts that, as time grows, the prey and predator populations oscillate with a period that
depends on the value of the parameter α for $0 \le \alpha \le 1$. Therefore, system (3) has a
periodic solution, and $E_{22}^c\left(v_{1L}^c, v_{1R}^c, v_{2L}^c, v_{2R}^c\right)$ is neutrally stable
for $0 \le \alpha \le 1$.
Fig. 2 Fuzzy solution of (i) for α = 0, (ii) for α = 0.6, (iii) for α = 1
5 Conclusion
References
1. Lotka, A. J. (1925). Elements of physical biology. The Williams and Wilkins Co., Baltimore.
2. Volterra, V. (1926). Variazioni e fluttuazioni del numero d'individui in specie animali conviventi.
Memoria della Reale Accademia Nazionale dei Lincei, 2, 31–113.
3. Malthus, T. R. (1959). An essay on the principle of population, as it affects the future improve-
ment of society, with remarks on the speculations of Mr. Godwin, M. Condorcet and other
writers. J. Johnson, London, 1798. Reprint, University of Michigan Press, USA.
4. Verhulst, P. F. (1838). Notice sur la loi que la population poursuit dans son accroissement.
Correspondance Mathématique et Physique (Ghent), 10, 113–121.
5. Rebaza, J. (2012). Dynamics of prey threshold harvesting and refuge. Journal of Computational
and Applied Mathematics, 236, 1743.
6. Pal, D., Mahapatra, G. S., & Samanta, G. P. (2013). Optimal harvesting of prey-predator
system with interval biological parameters: A bioeconomic model. Mathematical Biosciences,
241, 181–187.
7. Mondal, S., & Samanta, G. P. (2019). Dynamics of an additional food provided predator–prey
system with prey refuge dependent on both species and constant harvest in predator. Physica
A: Statistical Mechanics and its Applications, 534(15).
8. Haque, Md. M., & Sarwardi, S. (2018). Dynamics of a Harvested Prey–Predator Model with
Prey Refuge Dependent on Both Species. International Journal of Bifurcation and Chaos,
28(12).
9. Kaleva, O. (1987). Fuzzy differential equations. Fuzzy Sets and Systems, 24, 301–317.
10. Bede, B. (2006). A note on "two-point boundary value problems associated with non-linear
fuzzy differential equations." Fuzzy Sets and Systems, 157, 986–989.
11. Bede, B., & Gal, S. G. (2005). Generalizations of the differentiability of fuzzy-number-
valued functions with applications to fuzzy differential equations. Fuzzy Sets and Systems, 151,
581–599.
12. Chalco-Cano, Y., & Román-Flores, H. (2008). On the new solution of fuzzy differential
equations. Chaos Solitons Fractals, 38, 112–119.
13. Mahata, A., Mondal, S. P., Alam, S., & Roy, B. (2017). Mathematical model of glucose-insulin
regulatory system on diabetes mellitus in fuzzy and crisp environment. Ecological Genetics
and Genomics, 2, 25–34.
14. Salahshour, S., Ahmadian, A., Mahata, A., Mondal, S. P., & Alam, S. (2018). The behavior of
logistic equation with Allee effect in fuzzy environment: Fuzzy differential equation approach.
International Journal of Applied and Computational Mathematics, 4(2), 62.
15. Mahata, A., Mondal, S. P., Alam, S., Roy, B. (2017). Application of ordinary differential equa-
tion in glucose-insulin regulatory system modeling in fuzzy environment. Ecological Genetics
and Genomics, 3–5, 60–66.
16. Mahata, A., Mondal, S. P., Ahmadian, A., Ismail, F., Alam, S., & Salahshour, S. (2018). Different
solution strategies for solving epidemic model in imprecise environment. Complexity.
17. Barros, L. C., Bassanezi, R. C., & Tonelli, P. A. (2000). Fuzzy modelling in population
dynamics. Ecological Modelling, 128, 27–33.
18. Akın, O., & Oruc, O. A. (2012). Prey predator model with fuzzy initial values. Hacettepe
Journal of Mathematics and Statistics, 41(3), 387–395.
19. Jafelice, R. M., Barros, L. C., Bassanezi, R. C., & Gomide, F. (2004). Fuzzy Modeling in
Symptomatic HIV Virus Infected Population. Bulletin of Mathematical Biology, 66, 1597–
1620.
20. Mahata, A., Mondal, S. P., Alam, S., Chakraborty, A., De, S. K., & Goswami, A. (2019). Mathe-
matical model for diabetes in fuzzy environment with stability analysis. Journal of Intelligent &
Fuzzy Systems, 36(3), 2923-2932.
21. Roy, B., Mahata, A., Sinha, H., & Manna, B. (2021). Comparison between pre-diabetes
and diabetes model in fuzzy and crisp environment: Fuzzy differential equation approach.
International Journal of Hybrid Intelligence, 2(1), 47–66.
22. Mahata, A., Matia, S. N., Roy, B., Alam, S., & Sinha, H. (2021). The behaviour of logistic
equation in fuzzy environment: fuzzy differential equation approach. International Journal of
Hybrid Intelligence, 26–46.
23. Keshavarz, M., Allahviranloo, T., Abbasbandy, S., Modarressi, M. H. (2021). A study of fuzzy
methods for solving system of fuzzy differential equations. New Mathematics and Natural
Computation, 17(1), 1–27. https://doi.org/10.1142/S1793005721500010.
24. You, C., Cheng, Y., & Ma, H. (2022). Stability of Euler methods for fuzzy differential equation.
Symmetry, 14, 1279. https://doi.org/10.3390/sym14061279
25. Sharma, S., & Samanta, G. P. (2014). Optimal harvesting of a two species competition model
with imprecise biological parameters. Nonlinear Dynamics,77(4), 1101–1119.
26. Xu, C., & Li, P. (2013). Stability analysis in a fractional order delayed predator-prey
model. International Journal of Mathematical and Computational Sciences, 7(5). waset.org/
publication/16751
27. Paul, S., Mondal, S. P., & Bhattacharya, P. (2017). Discussion on proportional harvesting model
in fuzzy environment: Fuzzy differential equation approach. International Journal of Applied
and Computational Mathematics, 3, 3067–3090. https://doi.org/10.1007/s40819-016-0283-3
Overview of Applications of Artificial
Intelligence Methods in Propulsion
Efficiency Optimization of LNG Fueled
Ships
Keywords Artificial intelligence · Neural network · LNG · Ship design · Big data
A. Kiritsi
MSc in Economics and Energy Law, AUEB, Athens, Greece
A. Fountis (B)
Faculty, Berlin School of Business and Innovation, Berlin, Germany
e-mail: [email protected]
M. A. Alkhafaji
College of Engineering, National University of Science and Technology, Dhi Qar, Iraq
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_32
1 Introduction
2 Problem Formulation
The purpose of this paper is to conduct a literature review and an investigation into
the factors that influence the amount of fuel a ship uses while in operation. The
investigation uses the information provided by the literature as a theoretical foundation,
in conjunction with the 'soft' computing techniques utilized in this field. 'Soft computing'
refers to finding approximate solutions to imprecisely formulated problems. Artificial
Neural Networks, the Adaptive Neuro-Fuzzy Inference System and the Genetic Algorithm
are examples of the 'soft' computing methods utilized in this industry [3].
This paper also aims to highlight the importance of large-scale analysis of a ship's
energy and performance data, the expected benefits of such an analysis, the challenges it
faces, and the capabilities that machine learning applications may offer as a framework for
sustainable growth in the field of shipping.
Numerous factors contribute to the control of the energy efficiency of a ship.
Monitoring fuel consumption can result in significant financial benefits: it helps reduce the
running costs of the management company and, at the same time, increases the fleet's
competitiveness. In addition, it helps to improve the operation of the ship by ensuring the
optimal operation of its machinery and its safe routing along the most appropriate route,
taking into account the parameters that affect fuel consumption, such as the weather and the
sea currents. At the same time, lowering fuel consumption results in lower emissions of
greenhouse gases and other pollutants, which helps to preserve the natural ecosystem [2].
The contribution of this work lies in the fact that it is a concise guide to the factors
that affect the energy efficiency of LNG-fuelled ships. Beginning with a theoretical
background, the work concludes with an overview of the application of artificial intelligence
tools (ANN, GA, ANFIS and MVLR) in optimizing propulsion parameters in LNG ships. This
work is part of a larger body of research that aims to improve the energy efficiency of LNG
ships [5, 6].
Intelligent systems are typically systems that are able to acquire and apply knowledge in a
"smart" way, and to perceive, reason, learn and draw conclusions from incomplete
information. This capability is essential when we have to monitor very complex systems or
when we have an excessive number of different inputs.
Even when it is possible to model very complex systems, the resulting models may be so
complicated that developing accurate algorithms or making decisions based on them
significantly increases the cost of computing hardware and the difficulty of the process, or
becomes too slow to be practical. Knowledge-based systems that are able to make intelligent
decisions have proven very successful in solving problems of this nature. It is anticipated
that, in the not too distant future, industrial machinery and decision support systems will be
capable of maintaining the consistency and repeatability of an operation and of dealing with
external disturbances without noticeably degrading performance.
Just as individual neurons in the brain, the hardware and software of a computer, or the
internet are not smart in and of themselves, it has nevertheless been demonstrated that a
computer can be programmed to exhibit some intelligent characteristics of a person [3,
4] (Fig. 1).
An intelligent system can acquire knowledge and carry out high-level cognitive tasks by
making use of neural networks. A neural network consists of a collection of nodes, typically
arranged in layers, and synapses, which are weighted connections between the nodes; Fig. 2
illustrates these elements. A neural network can therefore be trained, for example, to
differentiate between the sounds produced by a machine and so indicate whether or not the
machine is operating normally.
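To make the description of layers of nodes connected by weighted synapses concrete, here is a minimal forward pass of a small feed-forward network in Python with NumPy; the layer sizes and random weights are purely illustrative and are not taken from any of the surveyed studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# illustrative layer sizes: 4 inputs -> 8 hidden nodes -> 1 output
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # synapse weights, hidden layer
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # synapse weights, output layer

def forward(x):
    """Forward pass: weighted sums followed by nonlinear activations."""
    h = np.tanh(x @ W1 + b1)                    # hidden layer activations
    y = 1 / (1 + np.exp(-(h @ W2 + b2)))        # sigmoid output, e.g. 'normal' vs 'faulty'
    return y

print(forward(np.array([0.2, -1.0, 0.5, 0.3])))
```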
3 Solution Sets
Through the use of real historical data and the theoretical and practical study of their effect
on fuel consumption, it has become possible to reveal the relationships between the input
(independent) variables and the output (dependent) variables, i.e. the final fuel
consumption [5, 6].
In one investigation, ANFIS was used to map the relationships between controlled
parameters and engine performance. For the purpose of training and testing the ANFIS model,
which has six input variables (including diesel fuel injection timing, blended fuel ratio and
exhaust gas recirculation rate) over a wide range of engine operating parameters, and four
engine emission and performance outputs, a total of eighty experimental data points were
chosen for a dual-fuel diesel engine. The outputs from ANFIS were then utilized to evaluate
the objective functions of the optimization process, which was carried out with a
multi-objective genetic algorithm (GA) optimization approach [7].
In order to cut down on overall energy consumption, the energy efficiency control
strategy of the system is based on a model of an advanced dual-feeder shaft-free
generator shaft, a propulsion system that uses an LNG/dual-fuel diesel engine, and
the power consumption of the main engine. Both the simulation model of the whole
propulsion system and the control strategy that was designed for it have been devel-
oped. Simulation with Matlab and Simulink was used to investigate the impact that
engine speed has on the ship’s energy efficiency and to test whether or not various
control strategies for improving energy efficiency are even possible. The findings
indicate that the strategies that were designed are able to ensure the strength of the
entire ship in a variety of conditions, improve the ship’s energy efficiency, and reduce
the amount of CO2 emissions [8].
The goal of this research was to develop a tool that enables the flight planner to enter
all of the routes, destinations and dates of a flight set. The algorithm then not only chooses
which aircraft would be most efficient for a particular flight, but also produces an estimate
of the fuel consumed, taking into account weather parameters, performance degradation and
available aircraft. In addition to producing an estimate of CO2 emissions, the payload and
the various fuel consumption parcels, the algorithm's primary focus is on optimizing fleet
utilization. A simulation of the new flight schedule was performed by using the payload
constraints and taking into account the dry operating weight of each aircraft for each
particular trip. In this configuration, the algorithm was able to save 15,396 kg of fuel across
100 flights, which is equivalent to almost 10 million dollars per year [9].
The nature of the navigation environment in which a ship operates significantly impacts its
energy efficiency. The most important step toward increasing the ship's energy efficiency is
determining the ideal engine speed for each specific navigational environment. It has been
found that the ship's resistance plays a significant role in determining the effect that the
working conditions have on the energy efficiency of the vessel.
A model of the main engine's energy efficiency can be constructed by first calculating the
ship's resistance at a variety of navigation speeds and in a variety of navigation
environments. Using dynamic optimization, the optimal engine speed for the current
navigation environment can then be reached. The ship's total resistance is the sum of the
hydrostatic resistance, wave resistance, wind resistance and shallow water resistance, so the
total resistance is obtained by calculating each of these components, starting from the
hydrostatic resistance [10].
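The decomposition of total resistance and the search for an economical engine speed can be sketched as follows in Python; the component resistance formulas, coefficients and voyage figures below are crude placeholders used only to illustrate the optimization loop, not the models of [10].

```python
import numpy as np

# placeholder resistance components as functions of speed V (knots)
def hydrostatic(V): return 2.0e3 * V**2     # calm-water resistance, N
def wave(V):        return 4.0e2 * V**2.5   # added wave resistance, N
def wind(V):        return 1.5e2 * V**2     # wind resistance, N
def shallow(V):     return 5.0e1 * V**2     # shallow-water correction, N

def total_resistance(V):
    return hydrostatic(V) + wave(V) + wind(V) + shallow(V)

def fuel_for_voyage(V, distance_nm, sfoc=0.19, eta=0.65):
    """Rough fuel estimate: power = R*V/eta, fuel = sfoc (kg/kWh) * power * time."""
    v_ms = V * 0.5144                              # knots -> m/s
    power_kw = total_resistance(V) * v_ms / eta / 1e3
    hours = distance_nm / V
    return sfoc * power_kw * hours                 # kg

# choose the slowest speed that still meets the schedule (grid search)
distance, max_hours = 1500.0, 120.0
speeds = np.arange(10.0, 20.1, 0.1)
feasible = [V for V in speeds if distance / V <= max_hours]
best = min(feasible, key=lambda V: fuel_for_voyage(V, distance))
print(best, fuel_for_voyage(best, distance))
```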
This article examines the various propulsion systems utilized on ships tasked with
transporting liquefied natural gas (LNG). The study discusses the primary characteristics of
the propulsion systems and the benefits and drawbacks associated with each, beginning with
the earliest systems and progressing to the most recent ones put into place. The propulsion
systems described include gas turbines, steam turbines, combined cycles, two-stroke and
four-stroke internal combustion engines, as well as mechanical, electrical and dual-fuel (DF)
systems. Because of their high efficiency, high flexibility due to the configuration of the
propulsion system, and reduced SOx emissions, DF engines, both 4S and 2S, are the
propulsion systems currently installed on LNG carriers; this is driven by gas emission
regulations requiring the reduction of SOx emissions. They comply with IMO Tier III when
operating on gas, with the exception of the MAN 2S DF engines, which are Tier II when
using gas. This propulsion system includes a greater quantity of equipment, which results in
increased costs for installation and maintenance, an unfavourable aspect of the system [11].
The training data were derived from real measurements of the operation of the two-stroke
Win GD XDF-72 engine. The number of samples collected for diesel fuel operation is 301,
while the number of samples collected for gas operation is 318.
In this paper, a prediction model for proper functioning or malfunction of the main Win
GD XDF-72 engine was investigated based on machine learning by classification, utilizing
Exhaustive CHAID algorithms and neural networks. This model was able to determine
whether the engine was functioning properly or not. A machine performance prediction
model can be created using the MLP neural network method, which achieved a 100% correct
prediction rate for the training data and a 100% correct prediction rate for the control (test)
data [11].
The results of the comparison between the three approaches are presented in Table
1.
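A sketch of this kind of good/malfunction classification can be written with scikit-learn's MLP classifier; the feature names, synthetic data, split and network size below are illustrative assumptions, not the setup of [11].

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# hypothetical samples: [scavenge air pressure, exhaust temp, engine load, fuel index]
X = rng.normal(size=(619, 4))
# synthetic label: 1 = malfunction, derived from two features plus noise
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.3, size=619) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```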
In the current investigation, an efficient method that is derived from the Multivari-
able Linear Regression (MVLR) and Genetic Algorithm (GA) techniques has been
utilized to predict the likelihood of a worker being involved in a workplace acci-
dent in the shipbuilding industry. Figure 3 presents the high-level architecture of the
optimization algorithm.
In order to conduct an occupational risk assessment, the MVLR-GA model was
operationalized with an appropriate collection of input–output training data. The
accident conditions, the day and time, the person’s specialty, the type of event, the
potentially dangerous situation, and the potentially dangerous actions involved in the
event were the data that were input. The calculated Risk Indicators were the output
data, and they were based on the input parameters. We requested and were given
access to a number of accident files from the Hellenic Labor Inspection Service’s
archives in order to facilitate the development of an efficient training program for the
GA algorithm (Professional Accident Reports). These files were given a statistical
edit so that we could determine which parameters were the most significant. Because
of the statistical process, the chosen parameters provided evidence that they are
related to the frequency with which each of the four levels of injury was observed
[12].
In this study, an artificial neural network (ANN) and a fuzzy expert system (FES) were
modelled for an internal combustion engine in order to predict the engine's power, torque,
specific fuel consumption and hydrocarbon emissions. Experimentally obtained data from
laboratory studies were used, and part of these experimental data was used for training and
testing the ANN for the engine (Fig. 4).
When the experimental data and the ANN and FES results were compared using a t-test in
SPSS and regression analysis in Matlab, both data groups showed p > 0.05, i.e. the
differences were not statistically significant. As a result, it has been demonstrated that the
developed ANN and FES can be used consistently in the engineering and automotive sectors
in place of experimental work. Additionally, ANN and FES appear suitable for a variety of
challenging and imprecise situations, including determining engine performance and
emission parameters [13].
Fig. 4 Recommended ANN for predicting petrol engine performance and emission parameters
(Source [13])
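The statistical check described above (comparing model predictions with experimental values via a t-test) can be reproduced in Python with SciPy; the arrays below are made-up numbers standing in for experimental and predicted torque values, not data from [13].

```python
import numpy as np
from scipy import stats

# made-up paired observations: experimental vs ANN-predicted torque (Nm)
experimental = np.array([112.0, 118.5, 125.2, 131.0, 137.8, 142.1, 149.6, 155.3])
predicted    = np.array([111.4, 119.2, 124.8, 131.9, 136.9, 143.0, 148.8, 156.1])

t_stat, p_value = stats.ttest_rel(experimental, predicted)  # paired t-test
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
# p > 0.05 would indicate no statistically significant difference
# between the experimental data and the model predictions.
```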
An artificial neural network model with a back-propagation learning algorithm was
investigated to forecast a particular diesel engine's fuel consumption and exhaust temperature
for different injection timings. The experimental findings were compared with the proposed
model, and the comparison revealed that consistency between experimental and network
results is achieved with an average absolute relative error of less than 2%. A well-trained
neural network model is thought to deliver quick and reliable results, making it a simple tool
to use in preliminary studies for such thermal engineering issues [14].
In this study, an artificial neural network (ANN) was modelled to predict the brake specific
fuel consumption, actual power, average effective pressure and exhaust temperature of a
methanol engine. Several tests were conducted using a four-cylinder, four-stroke test engine
operating at various torques and engine speeds to gather training and testing data. An ANN
model based on a conventional back-propagation algorithm was created using part of the
experimental data for training. The effectiveness of the ANN projections was then evaluated
by comparing the predictions with the findings of the experiments.
While brake specific fuel consumption, actual power, average effective pressure and
exhaust gas temperature were used separately as the output layer, engine speed, engine
torque, fuel flow, average intake manifold temperature and cooling water inlet temperature
were all used as the input layer. After training, the R2 values of both the training and test
sets were found to be very close to 1. This demonstrates how effective the developed ANN
model is at predicting internal combustion engine parameters such as flue gas temperature,
average effective pressure and brake specific fuel consumption [15].
In this research, an artificial neural network was used to simulate the performance and
emission parameters of a single-cylinder, four-stroke CRDI engine with a dual-fuel
CNG-diesel function. Based on experimental data, an ANN model was created, with
load, fuel injection pressure, and CNG energy share serving as the network’s input
parameters, to forecast BSFC, BTE, NOx, PM, and HC. As shown by correlation
coefficients in the 0.99833–0.99999 range, average absolute error rates in the 0.045–
1.66% range, and noticeably lower average square errors, the developed ANN model
was able to predict performance and emission parameters with remarkable accuracy.
This is an acceptable indicator of the robustness of the predicted accuracy [16].
This paper introduces a substitute tool for vehicle tuning applications using virtual
sensors created by an artificial neural network (ANN) for a hydrogen vehicle. The
objective of this research is to regulate exhaust emissions by optimizing straight-
forward engine process parameters. The virtual sensors are built around the engine
process factors (butterfly position, lambda, ignition progress, and spray angle) and
exhaust emission variables (CO, CO2 , HC, and NOx). First, a thorough experimental
and coordination procedure for the training and validation of neural networks was
used to gather the experimental data. The motor and transmission models were
created using two ANN virtual sensors that were built using the optimized layer-by-
layer neural network. With a maximum predictive mean relative error of 0.65%, the
suggested virtual sensors’ accuracy and performance were satisfactory. The virtual
sensors were used and simulated as a measurement tool to coordinate and optimize
the car with precise prediction [17].
Here, an ANN modeling program for a light diesel engine that combines various
biodiesel fuels with traditional mineral diesel is discussed. In this study, an artificial
neural network (ANN) was used to forecast nine distinct engine reactions, including
maximum pressure (Pmax), maximum pressure position (CAD Pmax), maximum
heat release rate (HRRmax), maximum HRR position (CAD HRRmax), and cumu-
lative HRR (CuHRR). For this modeling exercise, four related engine operating
factors were used as input parameters: engine speed, output torque, fuel mass flow
rate and types, and biodiesel fuel mixtures. It was examined whether ANN could
be used to forecast the connections between these inputs and outputs. In order to
validate the simulation findings, information from the concurrent motor study was
first compared. This paper also included network optimization techniques along with
basic ANN “model” and “model parameter” results, including the kind of transfer
function, the training algorithm, and the number of neurons [18].
Artificial intelligence techniques were used in this research to estimate how much fuel the
ship would use while at sea. The noon report, which contains ship statistics, was obtained
from a commercial ship. The report's data were analysed and separated into training and
assessment sets. Part of the data was used to train the computer, which was then asked to
predict the remaining, unseen portion using a multiple linear regression technique. Finally,
the effectiveness of the approach was evaluated by comparing this machine learning
prediction with the actual data in a graph [20].
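A minimal version of such a noon-report regression in Python could look like the following; the feature names (speed, draft, wind force, distance) and the synthetic data are assumptions made for illustration, not the variables actually used in [20].

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(42)

# synthetic noon-report records: [speed (kn), mean draft (m), wind force (Bft), distance (nm)]
X = np.column_stack([rng.uniform(10, 16, 300),
                     rng.uniform(8, 12, 300),
                     rng.integers(0, 8, 300),
                     rng.uniform(200, 400, 300)])
# synthetic daily fuel consumption (t), roughly cubic in speed plus noise
y = 0.008 * X[:, 0]**3 + 0.4 * X[:, 2] + 0.01 * X[:, 3] + rng.normal(0, 1.0, 300)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)
print("MAE (t/day):", round(mean_absolute_error(y_te, pred), 2))
```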
Realistic solutions from the viewpoint of a typical ship include identifying potential
engine efficiency gains and methods to optimize ship functions. The required solutions
become obvious once the issue statement is clear. From these, the sensors whose parameters
need to be logged in order to provide information for particular solutions can be identified.
After that, the information is stored on a web platform for quick and effective
processing [21] (Fig. 5).
4 Conclusion
The two biggest expenses for shipping are the fuel consumed by ships and the reduction
of emissions. States and international organizations, led by the International Maritime
Organization (IMO), are working to reduce ship fuel consumption and the quantity of harmful
gases released by establishing various rules and controls to address these two issues.
Shipping companies essentially try to determine how much fuel was used during a voyage in
order to cut down on fuel consumption on ships. Artificial intelligence methods are applied
as tools to estimate the fuel consumption of the ship during the voyage. Data is growing
daily and is endless. It is vital that the shipping industry improves cooperation and data
integration to take advantage of this huge amount of information. Big data analytics will
allow the industry to reveal information, trends and correlations that are currently hidden.
Real-time data creates optimization opportunities in every aspect of the shipping industry:
energy management, route design and optimization, predictive maintenance, environmental
management, and ship safety and protection. The quality of automatically collected data is,
at least theoretically, superior to that of manually collected data. However, many factors
still hinder its quality. Most of these issues are caused by sensors that have inherent
inaccuracies, calibration problems or malfunctions. For example, a poorly calibrated sensor,
or displacement of the sensor calibration, can lead to significant performance
misinterpretations.
The question is how to make sense of all this information; this is a challenge far beyond
automatic analysis, and it requires advanced analysis tools capable of understanding
information at scale and beyond schedule. The answer, of course, is artificial intelligence
(AI), and more specifically machine learning (ML) systems and their algorithms, which can
convert the many data points in the operational history into knowledge hidden in the noise:
basic relationships between variables that can be used to predict future results. The
comparison showed that consistency between experimental and network results is achieved
with an average absolute relative error of less than 2%. It is believed that a well-trained
neural network model provides fast and consistent results, making it an easy-to-use tool in
preliminary studies for such thermal engineering problems.
References
1. CO2 Emissions from ships: Council agrees its position on a revision of EU rules. (2019).
Council of the EU Press release 25 October 2019, https://www.consilium.europa.eu/en/
press/press-releases/2019/10/25/co2-emissions-from-ships-council-agrees-its-position-on-a-
revision-of-eu-rules/
2. Fernández, I. A., Gómez, M. R., Gómez, J. R., & Insua, Á. B. (2017). Review of propulsion
systems on LNG carriers. Renewable and Sustainable Energy Review, 67, 1395–1411.
3. Cai, T. (2015). Application of soft computing techniques for renewable energy network design
and optimization. Lamar University.
4. An, H., Zhou, Z., & Yi, Y. (2017). Opportunities and challenges on nanoscale 3D neuromorphic
computing system. IEEE International Symposium on Electromagnetic Compatibility & Signal/
Power Integrity (EMCSI), 2017, 416–421.
5. Chen, W., et al. (2017). Performance evaluation of GIS-based new ensemble data mining tech-
niques of adaptive neuro-fuzzy inference system (ANFIS) with genetic algorithm (GA), differ-
ential evolution (DE), and particle swarm optimization (PSO) for landslide spatial modelling.
Elsevier.
6. Vas, P. (1999). Artificial intelligence based electrical machines and drives, applications of
artificial neural network (soft computing), the neuro fuzzy inference adaptive system, and the
genetic algorithm in propulsion efficiency. Oxford University Press.
7. Yu, W., & Zhao, F. (2019). Predictive study of ultra-low emissions from dual-fuel engine using
artificial neural networks combined with genetic algorithm. International Journal of Green
Energy, 16(12), 938–946.
8. Wang, K., Yan, X., & Yuan, Y. (2015). Study and simulation on the energy efficiency manage-
ment control strategy of ship based on clean propulsion system. In: Proceedings of the ASME
2015 34th international conference on ocean, offshore and arctic engineering. Volume 7: ocean
engineering. St. John’s, Newfoundland, Canada. May 31–June 5, 2015. V007T06A058. ASME.
9. Spencer, K. (2011). Fuel consumption optimization using neural networks and genetic
algorithms (2011 Report of Aerospace Engineering implementation on TAP airline)
10. Yan, X. P., Yuan, Y., & Li, F. (2016). Real-time optimization of ship energy efficiency based
on the prediction technology of working condition. In: Report of transportation research part
D, transport and environment. Wuhan University of Technology.
11. Pallas, D., & Tsoukalas, V. (2019). Artificial intelligence application in performance of engine
WIN GD XDF-72. Journal of Multidisciplinary Engineering Science and Technology (JMEST),
6(12), 11234–11239. ISSN: 2458-9403
12. Tsoukalas, V. D., & Fragiadakis, N. G. (2015). Prediction of occupational risk in the ship-
building industry using multivariable linear regression and genetic algorithm analysis. Safety
Science, 83(2016), 12–22.
13. Tasdemir, S., Saritas, I., Ciniviz, M., & Allahverdi, N. (2011). Artificial neural network and
fuzzy expert system comparison for prediction of performance and emission parameters on a
gasoline engine. Expert Systems with Applications, 38(11–2011), 13912–13923.
14. Parlak, A., Islamoglu, Y., Yasar, H., & Egrisogut, A. (2006). Application of artificial neural
network to predict specific fuel consumption and exhaust temperature for a Diesel engine.
Applied Thermal Engineering, 26, 824–828.
15. Çay, Y., Çiçek, A., Kara, F., & Sağiroğlu, S. (2012). Prediction of engine performance for an
alternative fuel using artificial neural network. Applied Thermal Engineering, 37, 217–225.
16. Kumar, A., et al. (2012). Development of an ANN based system identification tool to estimate
the performance-emission characteristics of a CRDI assisted CNG dual fuel diesel engine.
Journal of Natural Gas Science and Engineering, 21, 147–158.
17. Yap, K., Ho, T., & Karri, V. (2012). Exhaust emission control and optimization of engine
parameters using virtual sensors of an artificial neural network for a water-powered vehicle.
International Journal of Hydrogen Energy, 37(10–2012), 8704–8715.
18. Harun, I. (2012). Artificial neural networks modelling of engine-out responses for a light-duty
diesel engine fuelled with biodiesel blends. Applied Energy, 92, 769–777.
19. Shivakumar, P. Srinivasa Pai, B.R., & Rao, S. (2011). Artificial Neural Network based prediction
of performance and emission characteristics of a variable compression ratio CI engine using
WCO as a biodiesel at different injection timings. Applied Energy, 88(7), 2344–2354
20. Yuanik, T., et al. (2019). Ship fuel consumption prediction with machine learning, conference
paper IMSEC 2019, pp. 757–759
21. Serena Lim, S. L., & Zhiqiang, H. (2019). Practical solutions for LNG fueled ships. In:
Conference proceedings of ICMET OMAN 2019, pp. 38–48.
22. Anan, T., Higuchi, H., Hamada, N. (2017). New artificial intelligence technology improving
fuel efficiency and reducing CO2 emissions of ships through use of operational big data. Fujitsu
Scientific & Technical Journal, 53, 23–28.
Interval Neutrosophic Multicriteria
Decision Making by TODIM Method
Abstract The Interval Neutrosophic Set (INS) can be used to handle the imprecision
associated with values that are not exact numbers but lie within the real unit
interval. INS is utilized in engineering, information fusion, medicine, and cyber-
netics because it can effectively express incomplete information. When faced with
conflicting, incorrect, and inconsistent information, the Neutrosophic Set (NS) is
widely employed to address multicriteria decision making (MCDM) challenges. The best
alternative can be chosen by assessing the degree of dominance of each alternative over the
others, and for this the TODIM method is applied to the alternatives of MCDM problems. In
the proposed novel strategy, the TODIM method is first modified to cope with MCDM using
the interval neutrosophic weighted average (INWA). The main advantage of this
method is that it may be applied to high-risk MCDM problems. In this study, the
aggregation properties for Interval Neutrosophic sets were obtained using the INWA
operator. Lastly, a numerical example was proposed.
1 Introduction
Zadeh [33] defined fuzzy sets (FS) as a method of describing and handling data
that is not rigid but somewhat fuzzy, with membership values ranging from 0 to
1. He also proposed an innovative fuzzy set theory, a valuable tool for tackling
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_33
over the other alternative. This method can therefore be applied to both qualitative
and quantitative criteria. Qin et al. [22] explained the TODIM approach's adaptability
to various fuzzy conditions. The outline of this paper is organized as follows.
Section 2 presents a literature review of the TODIM method and its usage in
different environments. The proposed interval neutrosophic TODIM methodology is
given in Sect. 3. A numerical experiment that validates the proposed method is
presented in Sect. 4. Lastly, Sect. 5 contains the conclusions of the paper.
2 Literature Review
fuzzy environments was presented in [3]. Zindani et al. [36] present a unique integrated
group decision-making framework for decision making in intuitionistic fuzzy
environments, combining Schweizer–Sklar t-conorm and t-norm (SSTT) aggregation
operators, power average (PA) operators and TODIM procedures.
The steps of the proposed TODIM method for an interval neutrosophic set are described
below.
1. Decision making construction
2. Normalization of the decision matrix
3. Calculate the relative weight
4. Calculating Score values
5. Calculating the accuracy values
6. Formation of dominance matrix
7. Aggregation of all the dominance matrix
8. Finding global values
9. Ranking.
The ranking is done in descending order of the global values; the highest global value ψi
reflects the best alternative (a computational sketch of these steps is given below).
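As an illustration of steps 3 and 6–9, the following Python sketch computes dominance degrees and global values for a crisp score matrix; the score values, weights and the attenuation factor θ are placeholders, and the score/accuracy functions of the interval neutrosophic numbers (steps 4–5) are assumed to have been applied already.

```python
import numpy as np

# placeholder score matrix: rows = alternatives, columns = criteria
S = np.array([[0.45, 0.52, 0.63],
              [0.38, 0.47, 0.70],
              [0.55, 0.49, 0.74],
              [0.58, 0.51, 0.66]])
w = np.array([0.4, 0.35, 0.25])      # criterion weights
theta = 1.0                          # attenuation factor for losses

w_r = w / w.max()                    # step 3: relative weights w.r.t. the reference criterion
n, m = S.shape
delta = np.zeros((n, n))             # steps 6-7: aggregated dominance matrix

for i in range(n):
    for j in range(n):
        for c in range(m):
            d = S[i, c] - S[j, c]
            if d > 0:                                            # gain
                delta[i, j] += np.sqrt(w_r[c] * d / w_r.sum())
            elif d < 0:                                          # loss, attenuated by theta
                delta[i, j] -= np.sqrt(w_r.sum() * (-d) / w_r[c]) / theta

row = delta.sum(axis=1)              # step 8: global values, normalised to [0, 1]
psi = (row - row.min()) / (row.max() - row.min())
print(np.argsort(-psi) + 1)          # step 9: ranking of alternatives A1..A4
```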
We assume that an investment firm wishes to invest a certain amount of money in the best
alternative. The investment corporation creates a decision-making board of three decision
makers who assess four alternative companies. The four choices are as follows:
4.1 Method 1
Numerical problem: 1
If the possible companies $A_i$ ($i = 1, 2, 3, 4$) are determined by the interval neutrosophic decision matrix

$$R = \begin{pmatrix}
\langle[0.6,0.7],[0.1,0.2],[0.8,0.9]\rangle & \langle[0.3,0.6],[0.2,0.3],[0.3,0.4]\rangle & \langle[0.8,0.9],[0.3,0.4],[0.4,0.5]\rangle \\
\langle[0.4,0.5],[0.2,0.3],[0.6,0.7]\rangle & \langle[0.5,0.6],[0.3,0.4],[0.4,0.5]\rangle & \langle[0.7,0.8],[0.2,0.3],[0.3,0.4]\rangle \\
\langle[0.7,0.8],[0.2,0.4],[0.5,0.6]\rangle & \langle[0.6,0.7],[0.2,0.4],[0.3,0.5]\rangle & \langle[0.8,0.9],[0.3,0.4],[0.2,0.3]\rangle \\
\langle[0.5,0.7],[0.1,0.2],[0.4,0.5]\rangle & \langle[0.7,0.8],[0.3,0.4],[0.5,0.6]\rangle & \langle[0.6,0.7],[0.1,0.2],[0.3,0.4]\rangle
\end{pmatrix},$$

then the overall dominance matrix is computed as

$$\delta = \begin{pmatrix}
0 & -1.072 & -0.8262 & -0.0972 \\
0.3633 & 0 & 0.4514 & 0.3732 \\
-0.1778 & -1.3818 & 0 & -0.4911 \\
1.1549 & -1.1254 & -0.0622 & 0
\end{pmatrix}.$$
4.2 Method 2
We presume that the decision makers' weight vector is γ = (0.37, 0.33, 0.3)T and
the attribute weight vector is W = (0.4, 0.35, 0.25)T. The MCDM problem is now
solved for an interval neutrosophic set utilizing the TODIM approach.
Step 1: Establishment of the decision matrix. We create a decision matrix based on
the data provided by the decision makers and the criteria listed below.
The decision matrix for $DM_1$ is

$$N_{DM_1} = \begin{pmatrix}
\langle[0.6,0.7],[0.1,0.2],[0.8,0.9]\rangle & \langle[0.3,0.6],[0.2,0.3],[0.3,0.4]\rangle & \langle[0.8,0.9],[0.3,0.4],[0.4,0.5]\rangle \\
\langle[0.4,0.5],[0.2,0.3],[0.6,0.7]\rangle & \langle[0.5,0.6],[0.3,0.4],[0.4,0.5]\rangle & \langle[0.7,0.8],[0.2,0.3],[0.3,0.4]\rangle \\
\langle[0.7,0.8],[0.2,0.4],[0.5,0.6]\rangle & \langle[0.6,0.7],[0.2,0.4],[0.3,0.5]\rangle & \langle[0.8,0.9],[0.3,0.4],[0.2,0.3]\rangle \\
\langle[0.5,0.7],[0.1,0.2],[0.4,0.5]\rangle & \langle[0.7,0.8],[0.3,0.4],[0.5,0.6]\rangle & \langle[0.6,0.7],[0.1,0.2],[0.3,0.4]\rangle
\end{pmatrix}$$
The dominance matrices for the individual criteria and decision makers are obtained as:

$$\delta_1^1 = \begin{pmatrix} 0 & -0.65 & -0.67 & -0.61 \\ 0.26 & 0 & -0.61 & -0.61 \\ 0.27 & 0.24 & 0 & -0.57 \\ 0.24 & 0.24 & 0.28 & 0 \end{pmatrix}, \quad
\delta_2^1 = \begin{pmatrix} 0 & -0.53 & -0.53 & -0.76 \\ 0.19 & 0 & -0.45 & -0.53 \\ 0.19 & 0.16 & 0 & -0.53 \\ 0.26 & 0.19 & 0.19 & 0 \end{pmatrix},$$

$$\delta_3^1 = \begin{pmatrix} 0 & -0.63 & -0.53 & -0.82 \\ 0.16 & 0 & -0.63 & -0.53 \\ 0.13 & 0.16 & 0 & -0.82 \\ 0.21 & 0.13 & 0.17 & 0 \end{pmatrix}, \quad
\delta_1^2 = \begin{pmatrix} 0 & -0.61 & -0.67 & -0.74 \\ 0.24 & 0 & -0.65 & -0.65 \\ 0.27 & 0.26 & 0 & -0.50 \\ 0.30 & 0.26 & 0.20 & 0 \end{pmatrix},$$

$$\delta_2^2 = \begin{pmatrix} 0 & -0.70 & -0.72 & -0.59 \\ 0.24 & 0 & -0.59 & -0.38 \\ 0.25 & 0.21 & 0 & -0.53 \\ 0.21 & 0.13 & 0.19 & 0 \end{pmatrix}, \quad
\delta_3^2 = \begin{pmatrix} 0 & -0.63 & -0.82 & -0.96 \\ 0.16 & 0 & -0.85 & -0.94 \\ 0.21 & 0.21 & 0 & -0.96 \\ 0.24 & 0.23 & 0.24 & 0 \end{pmatrix},$$

$$\delta_1^3 = \begin{pmatrix} 0 & -0.50 & -0.55 & -0.45 \\ 0.2 & 0 & -0.61 & -0.35 \\ 0.22 & 0.24 & 0 & -0.50 \\ 0.18 & 0.14 & 0.20 & 0 \end{pmatrix}, \quad
\delta_2^3 = \begin{pmatrix} 0 & -0.70 & -0.59 & -0.79 \\ 0.24 & 0 & -0.72 & -0.61 \\ 0.21 & 0.25 & 0 & -0.81 \\ 0.28 & 0.21 & 0.28 & 0 \end{pmatrix},$$

$$\delta_3^3 = \begin{pmatrix} 0 & -0.89 & -0.69 & -0.85 \\ 0.22 & 0 & -0.69 & -0.56 \\ 0.17 & 0.17 & 0 & -0.53 \\ 0.21 & 0.14 & 0.13 & 0 \end{pmatrix}.$$
Table 1 Example computation of dominance degrees of the first alternative over the others
considering each criterion

  Pair of alternatives   ϕ1      ϕ2      ϕ3      Sum(ϕ1, ϕ2, ϕ3)
  (a1, a2)               −1.81   −1.94   −2.09   −5.84
  (a1, a3)               −1.73   −2.21   −1.83   −5.77
  (a1, a4)               −2.19   −2.29   −2.09   −6.57
                                                 Sum = −18.18
The global values obtained are ψ1 = 0, ψ2 = 0.38, ψ3 = 0.68, ψ4 = 1. The suggested method
is compared with an existing method [31] according to the value of δ(Ai) for both methods
(Table 2).

Table 2 Ordering

  Method            Ordering
  Proposed method   ψ4 > ψ3 > ψ2 > ψ1
  Existing method   δ(A2), δ(A4), δ(A3), δ(A1)
The investigation mentioned above demonstrates that the ranking results are marginally
different. The interval neutrosophic TODIM approach, however, can choose the businesses in
a reasonable manner. The proposed method indicates that the best investment is the arms
firm, while the existing method indicates that the food company is the best choice. This
demonstrates the effectiveness and logic of the strategy we suggested.
5 Conclusion
We concentrated on this area because the INS environment is better suited to dealing with
real problems that involve ambiguity. Using the TODIM method we can precisely locate the
important alternatives, and we can analyse the alternatives to provide a rating that is
acceptable and in line with the experts' predictions. Although many sophisticated techniques
are available, we used the TODIM method as a starting point and as a distinct approach for
new researchers. Additionally, in order to make choosing the best option simple, we employed
similarity measures to rank the order of all the choices. This INS similarity metric is
practical for use in the fields of science and engineering. This work presents a new TODIM
methodology for an interval neutrosophic environment and derives INS aggregation properties
using the INWA operator. The suggested approach was also used to solve a decision making
problem of picking the best business to invest in. Since we deal with the interval-based
idea, this approach differs from earlier ones. In future work we will apply the TODIM
approach in additional domains.
Acknowledgements The Pusat Pengurusan Penyelidikan (RMC), Universiti Tun Hussein Onn
Malaysia, Malaysia, funded this study with Grant No. H346 from the Geran Penyelidikan
Pascasiswazah (GPPS).
References
1. Adali, E. A., Isik, A. T., & Kundakci, N. (2016). Todim method for the selection of elective
course. European Scientific Journal, 12(10), 314–324.
2. Broumi, S., & Smarandache, F. (2014). New operations on interval neutrosophic sets.
Neutrosophic Theory and Applications, 1, 256–266.
3. Davoudabadi, R., Mousavi, S. M., & Mohagheghi, V. (2020). A new last aggregation method of
multi-attributes group decision making based on concepts of TODIM, WASPAS and TOPSIS
under interval-valued intuitionistic fuzzy uncertainty. Knowledge and Information System, 62,
1371–1391.
4. Deng, X., & Gao, H. (2019). TODIM method for multiple attribute decision making with 2-
tuple linguistic Pythagorean fuzzy information. Journal of Intelligent Fuzzy Systems, 37(2),
1769–1780.
5. Gao, Z., Zhu, L., Li, Z., & Fan, P. (2015). Threat evaluation of early warning detection based on
incomplete attribute information TODIM method. 3rd International Conference on Machinery,
Materials, and Information Technology Applications (pp. 40–47). Atlantic press.
6. Gomes, L. F. A. M., & Lima, M. M. P. P. (1992). TODIM: Basics and application to multicri-
teria ranking of projects with environmental impacts. Foundations of Computing and Decision
Sciences, 16(4), 113–127.
7. Gomes, L. F. A. M., & Rangel, L. A. D. (2009). Multicriteria analysis of natural gas destination
in Brazil: An application of the TODIM method. Mathematical and Computer Modeling, 50,
92–100.
8. Gomes, L. F. A. M., Machado, M. A. S., Costa, F. F., & Rangel, L. A. D. (2013). Criteria
interactions in multiple criteria decision aiding: A Choquet formulation for the TODIM method.
Procedia Computer Science, 17, 324–331.
9. Gomes, L. F. A. M., Machado, M. A. S., Costa, F. F., & Rangel, L. A. D. (2013). Behav-
ioral multi-criteria decision analysis: The TODIM method with criteria interactions. Annals of
Operations Research, 211, 531–548.
10. Gomes, L. F. A. M., Machado, M. A. S., Santos, D. J., & Caldeira, A. M. (2015). Ranking of
suppliers for steel industry: A comparison of the original TODIM and the Choquet-extended
TODIM methods. Procedia Computer Science, 55, 706–714.
11. He, X., & Wu, Y. (2017). City sustainable development evaluation based on hesitant
multiplicative fuzzy information. Mathematical Problems in Engineering, 2017, 1–9.
12. Krohling, R. A., & de Souza, T. T. M. (2012). Combining prospect theory and fuzzy numbers
to multi-criteria decision making. Expert Systems with Applications, 39, 11487–11493.
13. Krohling, R. A., & de Souza, T. T. M. (2012). F-TODIM: An application of the fuzzy TODIM
method to rental evaluation of residential properties. In Congreso Latino-Iberoamericano de
Investigación Operativa / Simpósio Brasileiro de Pesquisa Operacional, September 24–28, Rio
de Janeiro, Brazil (pp. 431–443).
14. Li, M., Wu, C., Zhang, L., & You, L. N. (2015). An intuitionistic fuzzy-TODIM method to solve
distributor evaluation and selection problem. International Journal of Simulation Modelling,
14(3), 511–524.
15. Lin, C., Lee, C., & Lin, J. (2016). Using the fuzzy TODIM method as a decision making support
methodology for house purchasing. Journal of Testing and Evaluation, 44(5), 1925–1936.
16. Lourenzutti, R., & Krohling, R. A. (2013). A study of TODIM in a intuitionistic fuzzy and
random environment. Expert Systems with Applications, 40, 6459–6468.
17. Lourenzutti, R., & Krohling, R. A. (2014). The Hellinger distance in multi-criteria decision
making: An illustration to the TOPSIS and TODIM methods. Expert Systems with Applications,
41, 4414–4421.
18. Lourenzutti, R., & Krohling, R. A. (2015). TODIM based method to process heterogeneous
information. Procedia Computer Science, 55, 318–327.
19. Passos, A. C., Teixeira, M. G., Garcia, K. C., Cardoso, A. M., & Gomes, L. F. A. M.
(2014). Using the TODIM-FSE method as a decision-making support methodology for oil
spill response. Computers & Operations Research, 42, 40–48.
20. Peng, J. J., Wang, J. Q., Wu, X. H., Wang, J., & Chen, X. H. (2015). Multi-valued neutrosophic
sets and power aggregation operators with their applications in multi-criteria group decision-
making problems. International Journal of Computational Intelligent Systems, 8(2), 345–363.
21. Pramanik, S., Dalapati, S., Alam, S., & Roy, T. P. (2017). NC-TODIM-based MAGDM under
a neutrosophic cubic set environment. Journal of Information, 8(149), 1–21.
22. Qin, Q., Liang, F.-F., Li, L., Chen, Y.-W., & Yu, G.-F. (2017). A TODIM-based multi- criteria
group decision making with triangular intuitionistic fuzzy numbers. Applied Soft Computing,
55, 93–107.
23. Ren, P., Xu, Z., & Gou, X. (2016). Pythagorean fuzzy TODIM approach to multi-criteria
decision making. Applied Soft Computing, 42, 246–259.
24. Sang, X., & Liu, X. (2016). An interval type-2 fuzzy sets- based TODIM method and its
application to green supplier selection. Journal of the Operational Research Society, 67(5),
722–734.
25. Smarandache, F. (2005). Neutrosophic set- a generalization of the intuitionistic fuzzy set.
International Journal of Pure and Applied Mathematics, 24(3), 287–297.
26. Sun, R., Hu, J.-J., & Chen, X. (2017). Novel single-valued neutrosophic decision-making
approaches based on prospect theory and their applications in physician selection. Soft
Computing, 20, 1–15.
27. Tosun, Ö., & Akyüz, G. (2015). A fuzzy TODIM approach for the supplier selection problem.
International Journal of Computational Intelligence Systems, 8(2), 317–329.
28. Ulrich, F. S., & Henri, G. (2018). Fuzzy triangular aggregation operators. International Journal
of Mathematics and Mathematical Sciences, 9209524, 1–13.
29. Wang, J., Wei, G., & Lu, M. (2018). TODIM method for multiple attribute group decision
making under 2-tuple linguistic neutrosophic environment. Symmetry, 10(10), 486.
30. Wei, C., Zhiliang, R., & Rodriguez, R. M. (2014). A hesitant fuzzy linguistic TODIM method
based on a score function. International Journal of Computational Intelligence Systems, 8(4),
701–712.
31. Xu, D.-S., Wei, C., & Wei, G.-W. (2017). TODIM Method for single-valued neutrosophic
multiple attribute decision making. Information, 8(4), 125.
32. Ye, J. (2014). A multi-criteria decision-making method using aggregation operators for
simplified neutrosophic sets. Journal of Intelligent and Fuzzy Systems, 26, 2459–2466.
418 N. Chaini et al.
Abstract Recent studies have shown that the coronavirus disease COVID-19 is highly infectious and carries a significant global mortality rate. An SEIR compartmental model of COVID-19 with four categories is explored in this manuscript. Results on the existence and uniqueness criteria for the new model, as well as on the positivity and boundedness of the solution, are established. The Routh–Hurwitz stability criterion is used to analyze the dynamics of the equilibrium points of the proposed model. We prove that the system is locally asymptotically stable at the infection-free equilibrium when $R_{Covid19} < 1$. The investigation of COVID-19 transmission and prevention in Brazil is the primary goal of this study. MATLAB software is used to simulate the model system and graphically illustrate the numerical outcomes.
S. Paul (B)
Department of Mathematics, Arambagh Govt. Polytechnic, Arambagh, West Bengal, India
e-mail: [email protected]
A. Acharya
Department of Mathematics, Swami Vivekananda Institute of Modern Science, West Bengal,
Karbala More 700103, India
M. A. Biswas
Department of Mathematics, Gobardanga Hindu College, 24 Parganas (North), P.O.-Khantura,
Gobardanga, West Bengal 743252, India
A. Mahata
Mahadevnagar High School, Maheshtala, Kolkata, West Bengal 700141, India
S. Mukherjee
Department of Mathematics, Gurudas College, Kolkata, West Bengal 700054, India
P. C. Mali
Department of Mathematics, Jadavpur University, Kolkata 700032, India
B. Roy
Department of Mathematics, Bangabasi Evening College, Kolkata, West Bengal 700009, India
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 419
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_34
1 Introduction
(i) Analyze the model’s stability and dynamic behavior with fuzzy interval
numbers as the model’s parameters.
(ii) Using numerical modeling to validate the results and stop COVID-19 from
spreading.
(iii) MATLAB software is used to describe the model system, and graphically
illustrate the numerical outcomes.
2 Preliminaries
Definition The interval $[V_m, V_n]$ can also be written as $k_1(\eta) = (V_m)^{1-\eta}(V_n)^{\eta}$ for $\eta \in [0, 1]$; this representation is referred to as the parametric form of the interval.
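As a brief worked example (the numbers are illustrative and not taken from the paper), the interval $[0.2, 0.4]$ has the parametric form
$$k_1(\eta) = (0.2)^{1-\eta}(0.4)^{\eta}, \qquad k_1(0) = 0.2, \quad k_1(1) = 0.4, \quad k_1(0.5) = \sqrt{0.2 \times 0.4} \approx 0.283,$$
so $\eta$ sweeps the interval continuously from its left endpoint to its right endpoint.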
3 Model Formulation
The model in this research is split into four compartments. The overall population to be examined is designated as N, and it includes the susceptible (S), exposed (E), infected (I), and recovered (R) compartments at any given time.
Thus N = S + E + I + R. (1)
The flow diagram of the SEIR model is depicted in Fig. 1, and the descriptions of the parameter values are given in Table 1.
The given system is
$$
\begin{aligned}
\frac{dS}{dt} &= b_L^{1-p} b_R^{p} - \beta_R^{1-p}\beta_L^{p}\, S I - \mu_R^{1-p}\mu_L^{p}\, S,\\
\frac{dE}{dt} &= \beta_L^{1-p}\beta_R^{p}\, S I - \left(\mu_L^{1-p}\mu_R^{p} + k_L^{1-p}k_R^{p}\right) E,\\
\frac{dI}{dt} &= k_L^{1-p}k_R^{p}\, E - \left(\mu_L^{1-p}\mu_R^{p} + \gamma_L^{1-p}\gamma_R^{p}\right) I,\\
\frac{dR}{dt} &= \gamma_L^{1-p}\gamma_R^{p}\, I - \mu_R^{1-p}\mu_L^{p}\, R,
\end{aligned}
\tag{2}
$$
with $S(0) \ge 0$, $E(0) \ge 0$, $I(0) \ge 0$, $R(0) \ge 0$, where $p$ stands for the interval-valued parameter and Table 1 lists the parameter values.
From the first equation of (2),
$$\frac{dS}{dt} = b_L^{1-p} b_R^{p} - \beta_R^{1-p}\beta_L^{p} S I - \mu_R^{1-p}\mu_L^{p} S \ \ge\ -\beta_R^{1-p}\beta_L^{p} S I - \mu_R^{1-p}\mu_L^{p} S.$$
We have $S(t) \ge S(0)\exp\left(-\int_0^t \left(\beta_R^{1-p}\beta_L^{p} I + \mu_R^{1-p}\mu_L^{p}\right) d\tau\right) > 0$.
Now
$$\frac{dE}{dt} = \beta_R^{1-p}\beta_L^{p} S I - \left(\mu_L^{1-p}\mu_R^{p} + k_L^{1-p}k_R^{p}\right) E \ \ge\ -\left(\mu_L^{1-p}\mu_R^{p} + k_L^{1-p}k_R^{p}\right) E.$$
Then $E(t) \ge E(0)\exp\left(-\int_0^t \left(\mu_L^{1-p}\mu_R^{p} + k_L^{1-p}k_R^{p}\right) d\tau\right) > 0$.
Also
$$\frac{dI}{dt} = k_L^{1-p}k_R^{p} E - \left(\mu_L^{1-p}\mu_R^{p} + \gamma_L^{1-p}\gamma_R^{p}\right) I \ \ge\ -\left(\mu_L^{1-p}\mu_R^{p} + \gamma_L^{1-p}\gamma_R^{p}\right) I.$$
Now $I(t) \ge I(0)\exp\left(-\int_0^t \left(\gamma_R^{1-p}\gamma_L^{p} + \mu_R^{1-p}\mu_L^{p}\right) d\tau\right) > 0$.
Similarly,
$$\frac{dR}{dt} = \gamma_L^{1-p}\gamma_R^{p} I - \mu_R^{1-p}\mu_L^{p} R \ \ge\ -\mu_R^{1-p}\mu_L^{p} R,$$
so $R(t) \ge R(0)\exp\left(-\int_0^t \mu_R^{1-p}\mu_L^{p}\, d\tau\right) > 0$.
Again,
$$\frac{d(S+E+I+R)}{dt} = b_L^{1-p} b_R^{p} - \mu_R^{1-p}\mu_L^{p}\,(S+E+I+R).$$
Therefore $\frac{dN}{dt} = b_L^{1-p} b_R^{p} - \mu_R^{1-p}\mu_L^{p} N$. If $b_L^{1-p} b_R^{p} - \mu_R^{1-p}\mu_L^{p} N < 0$, then $\frac{dN}{dt} < 0$. Thus all populations remain positive.
At equilibrium,
$$\frac{dS}{dt} = \frac{dE}{dt} = \frac{dI}{dt} = \frac{dR}{dt} = 0. \tag{3}$$
Then we get the infection-free equilibrium
$$E_0 = \left(\frac{b_L^{1-p} b_R^{p}}{\mu_R^{1-p}\mu_L^{p}},\ 0,\ 0,\ 0\right)$$
and the epidemic equilibrium point $E_1 = (S^*, E^*, I^*, R^*)$, where
$$S^* = \frac{b_L^{1-p} b_R^{p} - \left(\mu_L^{1-p}\mu_R^{p} + k_L^{1-p}k_R^{p}\right) E^*}{\mu_R^{1-p}\mu_L^{p}}, \qquad
E^* = \frac{\mu_R^{1-p}\mu_L^{p}\left(\mu_L^{1-p}\mu_R^{p} + k_L^{1-p}k_R^{p}\right)\left(\mu_L^{1-p}\mu_R^{p} + \gamma_L^{1-p}\gamma_R^{p}\right)\left(R_0 - 1\right)}{\beta\, k_L^{1-p}k_R^{p}\left(\mu_L^{1-p}\mu_R^{p} + k_L^{1-p}k_R^{p}\right)},$$
$$I^* = \frac{k_L^{1-p}k_R^{p}\, E^*}{\mu_L^{1-p}\mu_R^{p} + \gamma_L^{1-p}\gamma_R^{p}}, \qquad
R^* = \frac{\gamma_L^{1-p}\gamma_R^{p}\, k_L^{1-p}k_R^{p}\, E^*}{\mu_R^{1-p}\mu_L^{p}\left(\mu_L^{1-p}\mu_R^{p} + \gamma_L^{1-p}\gamma_R^{p}\right)}.$$
The reproduction number $R_{Covid19}$ can be evaluated as the greatest eigenvalue of the matrix $F V^{-1}$ [18, 19], where
$$F = \begin{pmatrix} 0 & \dfrac{b_L^{1-p} b_R^{p}\, \beta_R^{1-p}\beta_L^{p}}{\mu_R^{1-p}\mu_L^{p}} \\ 0 & 0 \end{pmatrix}
\quad\text{and}\quad
V = \begin{pmatrix} \mu_L^{1-p}\mu_R^{p} + k_L^{1-p}k_R^{p} & 0 \\ -k_L^{1-p}k_R^{p} & \mu_L^{1-p}\mu_R^{p} + \gamma_L^{1-p}\gamma_R^{p} \end{pmatrix}.$$
Therefore,
$$R_{Covid19} = \frac{k_L^{1-p}k_R^{p}\, b_L^{1-p} b_R^{p}\, \beta_R^{1-p}\beta_L^{p}}{\mu_R^{1-p}\mu_L^{p}\left(\mu_L^{1-p}\mu_R^{p} + k_L^{1-p}k_R^{p}\right)\left(\mu_L^{1-p}\mu_R^{p} + \gamma_L^{1-p}\gamma_R^{p}\right)}.$$
4 Stability Analysis
The characteristic equation at the epidemic equilibrium $E_1$ can be written as
$$\left(-\mu_R^{1-p}\mu_L^{p} - y\right)\left(y^{3} + A y^{2} + B y + C\right) = 0,$$
where
$$A = \beta_R^{1-p}\beta_L^{p} I^* + 3\mu_R^{1-p}\mu_L^{p} + k_L^{1-p}k_R^{p} + \gamma_R^{1-p}\gamma_L^{p},$$
$$B = \left(\beta_R^{1-p}\beta_L^{p} I^* + \mu_R^{1-p}\mu_L^{p}\right)\left(2\mu_R^{1-p}\mu_L^{p} + k_L^{1-p}k_R^{p} + \gamma_R^{1-p}\gamma_L^{p}\right) + \left(\mu_L^{1-p}\mu_R^{p} + k_L^{1-p}k_R^{p}\right)\left(\mu_L^{1-p}\mu_R^{p} + \gamma_L^{1-p}\gamma_R^{p}\right),$$
$$C = \left(\beta_R^{1-p}\beta_L^{p} I^* + \mu_R^{1-p}\mu_L^{p}\right)\left(\mu_R^{1-p}\mu_L^{p} + k_R^{1-p}k_L^{p}\right)\left(\mu_R^{1-p}\mu_L^{p} + \gamma_R^{1-p}\gamma_L^{p}\right) - \mu_R^{1-p}\mu_L^{p}\, \beta_R^{1-p}\beta_L^{p}\, k_L^{1-p}k_R^{p}\, S^*.$$
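For reference (standard background on the Routh–Hurwitz criterion rather than a restatement of the paper's full derivation), one eigenvalue is $y = -\mu_R^{1-p}\mu_L^{p} < 0$, and the remaining eigenvalues are the roots of the cubic factor; these all have negative real parts if and only if
$$A > 0, \qquad C > 0, \qquad AB - C > 0,$$
in which case the corresponding equilibrium is locally asymptotically stable.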
5 Numerical Discussion
In this section, using MATLAB software, we discuss the stability of our proposed model at E0 and E1. We can see from the following figures that the model is locally asymptotically stable at E0 for p = 0, 0.5, 1.0, using the parameter values in Table 2 (Fig. 2).
Fig. 2 Time series solution of the system (2) is stable at E0 for t ∈[0, 800]
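The paper performs these simulations in MATLAB. Purely as a rough sketch of the same computation (not the authors' code), the Python snippet below integrates system (2) with SciPy for several fixed values of p and also evaluates $R_{Covid19}$; the interval endpoints and initial conditions are illustrative placeholders rather than the values in Tables 1 and 2.

```python
# Rough Python equivalent (not the paper's MATLAB code) of simulating system (2).
# Interval endpoints and initial conditions below are illustrative placeholders.
from scipy.integrate import solve_ivp

# (left, right) endpoints of each interval-valued parameter -- placeholders.
b, beta, mu, k, gamma = (0.4, 0.6), (0.30, 0.45), (0.01, 0.02), (0.20, 0.25), (0.10, 0.14)

def param(interval, p):
    """Parametric form of an interval: x_L^(1-p) * x_R^p (cf. the Definition in Sect. 2)."""
    lo, hi = interval
    return lo ** (1 - p) * hi ** p

def seir(t, y, p):
    S, E, I, R = y
    bp, betap, mup, kp, gp = (param(x, p) for x in (b, beta, mu, k, gamma))
    dS = bp - betap * S * I - mup * S
    dE = betap * S * I - (mup + kp) * E
    dI = kp * E - (mup + gp) * I
    dR = gp * I - mup * R
    return [dS, dE, dI, dR]

for p in (0.0, 0.5, 1.0):
    bp, betap, mup, kp, gp = (param(x, p) for x in (b, beta, mu, k, gamma))
    r0 = kp * bp * betap / (mup * (mup + kp) * (mup + gp))
    sol = solve_ivp(seir, (0, 800), [0.9, 0.05, 0.05, 0.0], args=(p,))
    print(f"p={p}: R_Covid19={r0:.3f}, final (S,E,I,R)={sol.y[:, -1].round(3)}")
```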
6 Conclusion
The present study’s possible goal is to analyze a model for studying COVID-19 trans-
mission patterns using actual pandemic cases in Brazil, assisted by epidemiological
modeling. The SEIR model was constructed and explored in this article in order
to better explain the scenario in Brazil. We employed nonlinear analysis to demon-
strate the model’s existence and uniqueness. The model’s fundamental reproduction
number was also determined by next generation matrix method. In order to stop the
virus from spreading throughout the nation, our main aim is to establish the funda-
mental reproductive number and equilibrium. Furthermore, the global stability at the
points E0 and E1 has been demonstrated. The results reveal that if RCovid 19 < 1,
E0 is globally asymptotically stable. Also if RCovid 19 > 1, the point E1 is global
asymptotic stable. Ministries and public health professionals may be able to develop
strategic strategies to close vaccination gaps and stop outbreaks in the future with
the use of the research findings from the current study.
References
1. Rothan, H. A., & Byrareddy, S. N. (2020). The epidemiology and pathogenesis of coronavirus
disease (COVID-19) outbreak. Journal of Autoimmunity., 109, 102433.
2. Bai, Y., Yao, L., Wei, T., et al. (2020). Presumed asymptomatic carrier transmission of COVID-
19. JAMA, 323(14), 1406–1407.
3. Kermack, W. O., & McKendrick, A. G. (1927). A contribution to the mathematical theory of
epidemics. Proceedings of the Royal Society of London. Series A, Mathematical and Physical
Sciences, 115, 700–721.
4. Ji, C., Jiang, D., & Shi, N. (2011). Multigroup SIR epidemic model with stochastic perturbation.
Physica A: Statistical Mechanics and Its Applications., 390(10), 1747–1762.
5. Bjornstad, O. N., Finkenstadt, B. F., & Grenfell, B. T. (2002). Dynamics of measles epidemics:
Estimating scaling of transmission rates using a time series SIR model. Ecological Monographs,
72(2), 169–184.
6. Hu, Z., Ma, W., & Ruan, S. (2012). Analysis of SIR epidemic models with nonlinear incidence
rate and treatment. Mathematical Biosciences, 238(1), 12–20.
7. Diekmann, O., Heesterbeek, H., & Britton, T. (2013). Mathematical tools for understanding
infectious disease dynamics. In: Princeton series in theoretical and computational biology.
Princeton University Press, Princeton
8. Paul, S., Mahata, A., Ghosh, U., & Roy, B. (2021). SEIR epidemic model and scenario analysis
of COVID-19 pandemic. Ecological Genetics and Genomics 19, 100087.
9. He, S., Peng, Y., & Sun, K. (2020). SEIR modeling of the COVID-19 and its dynamics.
Nonlinear Dynamics, 101, 1667–1680.
10. Overton, C. E. (2020). Using statistics and mathematical modeling to understand infectious
disease outbreaks: COVID-19 as an example. Infectious Disease Modelling 5, 409–441.
11. Barros, L. C., Bassanezi, R. C., & Leite, M. B. F. (2003). The SI epidemiological models with a fuzzy
transmission parameter. Computers & Mathematics with Applications, 45, 1619–1626.
12. Zhou, L., & Fan, M. (2012). Dynamics of an SIR epidemic model with limited resources visited.
Nonlinear Analysis: Real World Applications, 13, 312–324.
13. Mccluskey, C. C. (2010). Complete global stability for an SIR epidemic model with delay-
distributed or discrete. Nonlinear Analysis, 11(1), 55–59.
14. Paul, S., Mahata, A., Mukherjee, S., & Roy, B. (2022). Dynamics of SIQR epidemic model with
fractional order derivative. Partial Differential Equations in Applied Mathematics, 5, 100216.
15. Mahata, A., Paul, S., Mukherjee, S., Das, M., & Roy, B. (2022). Dynamics of Caputo Fractional
Order SEIRV Epidemic Model with Optimal Control and Stability Analysis. International
Journal of Applied and Computational Mathematics, 8(28).
16. Mahata, A., Paul, S., Mukherjee, S., & Roy, B. (2022). Stability analysis and Hopf bifurcation in
fractional order SEIRV epidemic model with a time delay in infected individuals. Partial
Differential Equations in Applied Mathematics, 5, 100282.
17. Paul, S., Mahata, A., Mukherjee, S., Roy, B., Salimi, M., & Ahmadian, A. (2022). Study of
Fractional Order SEIR Epidemic Model and Effect of Vaccination on the Spread of COVID-19.
International Journal of Applied and Computational Mathematics, 8(5), 1–16.
18. Diekmann, O., Heesterbeek, J. A. P., & Roberts, M. G. (2009). The Construction of Next-
Generation Matrices for Compartmental Epidemic Models. Journal of The Royal Society
Interface., 7(47), 873–885.
19. Diethelm, K., & Ford, N. J. (2004). Multi-order fractional differential equations and their
numerical solution. Applied Mathematics and Computation, 154(3), 621–640.
Ldetect, IOT Based Pothole Detector
Sumathi Balakrishnan, Low Jun Guan, Lee Yun Peng, Tan Vern Juin,
Manzoor Hussain, and Sultan Sagaladinov
Abstract Potholes are a persistent issue in Malaysia that poses a threat to the safety and economic well-being of the country. Poor road construction, heavy traffic, and extreme weather conditions are some of the factors contributing to the development of these road defects. Despite the efforts by the government and local authorities to repair and maintain the roads, potholes remain a significant problem, especially in rural areas. The high number of road traffic deaths in rural areas compared to urban areas highlights the urgency of addressing the pothole problem in Malaysia. In this paper, a pothole detection system called Ldetect, based on a LiDAR sensor, is proposed. The proposed system provides a better solution for addressing the persistent pothole problem in Malaysia.
1 Introduction
Potholes are a significant problem in Malaysia that affects both the safety and
economic well-being of the country. These road defects are caused by a combina-
tion of factors, including poor road construction, heavy traffic, and extreme weather
conditions [1]. Despite efforts by the government and local authorities to repair and
maintain roads, potholes continue to be a persistent issue in many areas of the country.
S. Balakrishnan (B)
Taylor’s University, Subang Jaya, Malaysia
e-mail: [email protected]
S. Balakrishnan · L. J. Guan · L. Y. Peng · T. V. Juin · S. Sagaladinov
School of Computer Science, Taylor’s University, Subang Jaya, Malaysia
M. Hussain
Computing Department, Faculty of Computing & Information Technology, Indus University,
Karachi, Pakistan
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 427
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_35
One of the main causes of potholes in Malaysia is the lack of proper maintenance and repair. It is impossible for contractors or road maintenance teams to keep track of the condition of every road in the country, especially in rural areas. If these roads are left neglected, more serious problems such as road surface cracking and potholes will develop. According to [2], the number of road traffic deaths in rural areas of Malaysia (66%) is significantly higher than in urban areas (34%). This shows that it is necessary to address pothole issues, especially in rural areas, to reduce the number of road traffic deaths that happen there due to potholes.
2 Existing Literature
The alternative approaches to pothole detection reviewed below generally have lower accuracy and more limitations than using LiDAR. Additionally, many of these alternatives are more expensive, more complex, or less flexible than LiDAR, which makes LiDAR the preferred technology for pothole detection in many cases.
Acoustic Sensors
A different strategy is to employ acoustic sensors, which track changes in sound
waves as a car passes over a road surface. In this method, sensors are mounted
to the car, and the sound made by the wheels as they move over the pavement is
examined to determine which parts of the road are most likely to have potholes. This
method is simple to use and reasonably inexpensive, although it may be impacted
by tire noise, road noise, and other noise-interfering elements. It can be challenging
to discern between potholes and other kinds of road irregularities, and this method
is less accurate than LiDAR [9].
Vision-Based Systems
Utilizing vision-based systems, which employ cameras to find potholes, is another
possible strategy. In this method, cameras are mounted on the car, and photos of the
surface of the road are examined to spot locations that are likely to have potholes.
This method may be helpful for finding larger potholes that are clearly apparent in the
photos, but it may be impacted by the lighting and other image-interfering elements.
This method also calls for a more complex computer vision system and is typically
less accurate than LiDAR [10].
Inertial Measurement Units (IMUs)
Utilizing inertial measurement units (IMUs), which monitor the acceleration and
orientation of the vehicle as it moves over the road surface, is another possible
strategy. This method involves mounting IMUs on the car and analyzing the data to
locate regions of the road that are likely to have potholes. This method is simple to use
and reasonably inexpensive, but it can be affected by vehicle vibrations and other factors that skew the IMU readings. Furthermore, this method's accuracy is lower than LiDAR's, and it can be challenging to tell potholes apart from other kinds of road irregularities [11].
Ground-penetrating radar (GPR)
Using ground-penetrating radar (GPR), which creates a map of the subsurface by sending radio waves into the ground, is another possible strategy. This method uses GPR to find subsurface characteristics, such as voids or variations in subsurface density, that are suggestive of potholes. Although this method can be helpful for finding potholes that are not readily apparent on the road surface, it is typically more expensive and complex than LiDAR. It is also less precise than LiDAR and susceptible to interference from other subsurface features, such as underground utilities [12].
3 Proposed Solution
According to [13], poor-condition roads are the main cause of 94% of accidents on
the road. Therefore, the motivation behind this study is to propose a pothole detector
that is able to identify potholes on roads using IoT sensors to reduce the number of
potholes on the road. In addition to that, it can also be used to detect bad-condition
roads, which would eventually develop into road cracks and potholes if left neglected.
This device will only be attached to government vehicles like public buses, garbage
trucks, and taxis instead of private vehicles to avoid invasion of someone’s privacy.
If there is any pothole or bad-condition road detected while the device is running, it
will then send the information of the pothole or bad-condition road, e.g., details (size,
diameter, and depth) and geolocation, to the cloud to store it in the database. The
database will be shared with Jabatan Kerja Raya (JKR), the Malaysian Public Works
Department, to enhance the speed and efficiency of road repairs and maintenance.
With the help of road data gained from hundreds or thousands of vehicles attached
with this device, the road maintenance teams of JKR will be able to identify and
prioritize the roads that are due for maintenance work. With regular road condition
inspections and proper preventative repairs, it is possible to prevent the roads from
developing cracks, potholes, or other defects, making sure that the roads are always
safe to be used by road users [14].
Figure 1 depicts the proposed hardware for the pothole detector. The proposed
hardware for the pothole detector consists of a LiDAR module, a GPS module, a
camera, and a buzzer. All of the hardware mentioned above is connected to and
controlled by a Raspberry Pi microprocessor. The cloud and the web application are
both connected to the Raspberry Pi microprocessor. The cloud receives the details and geolocations of potholes and bad-quality roads gathered by the microprocessor, then processes and stores this information in the database. Additionally, an AVR-IoT Microchip is utilized for the system's
cloud back-end because of its WiFi capabilities, which enable direct connectivity to
Amazon Web Services (AWS), a cloud service.
The principle behind the workings of the proposed pothole detector device is quite
simple. First, a LiDAR module is used to scan the road in front of the vehicles in
real-time. LiDAR has a similar working mechanism to radar, but it emits infrared
light pulses instead of radio waves to form a laser and measures the time it takes
for the infrared light pulses to come back after hitting nearby objects. The LiDAR
module then calculates the distance to each surface of the road using the measured
time for the emitted laser pulse to bounce off the road surface and return to the LiDAR
sensor. This LiDAR module is capable of producing 3D models and maps of the road
environment in real-time with the millions of precise distance measurement points
it captures each second [15]. After having the 3D models and maps produced by
the LiDAR module, the Raspberry Pi microprocessor uses the algorithm for pothole
detection to determine where the pothole or bad-condition road is in relation to the
car. If a pothole or bad-condition road is detected by the algorithm, the buzzer that
is built into the pothole detector device will buzz to inform the driver. Following
that, the microprocessor will record the details (size, diameter and depth) and use a
camera module to take pictures of that pothole or bad-condition road. A GPS module
will also be used to retrieve the geolocation of that pothole or bad-condition road.
All of these data will then be sent by the microprocessor to the cloud database to be
stored.
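To make the data flow above concrete, the following is a minimal sketch of the capture-and-report loop, assuming hypothetical driver objects for the LiDAR, GPS, camera, buzzer, and cloud client; the helper names, the 0.03 m depth threshold, and the naive flat-road assumption are illustrative and not part of the Ldetect design.

```python
# Hypothetical sketch of the on-device workflow described above; none of the
# helper objects (lidar, gps, camera, buzzer, cloud) are real Ldetect APIs.
import time

DEPTH_THRESHOLD_M = 0.03  # assumed minimum depression depth treated as a road defect

def detect_road_defect(points, depth_threshold):
    """Naive placeholder: flag a defect if any LiDAR point lies more than
    depth_threshold metres below an assumed road plane at z = 0."""
    depths = [-z for (_x, _y, z) in points if z < -depth_threshold]
    if not depths:
        return None
    return {"depth": max(depths), "size": len(depths)}

def detection_loop(lidar, gps, camera, buzzer, cloud):
    while True:
        points = lidar.read_frame()            # 3D point cloud of the road ahead
        defect = detect_road_defect(points, DEPTH_THRESHOLD_M)
        if defect is not None:
            buzzer.beep()                      # alert the driver immediately
            defect.update({
                "location": gps.read(),        # geolocation of the defect
                "photo": camera.capture(),     # picture of the pothole / bad road
                "timestamp": time.time(),
            })
            cloud.upload(defect)               # stored in the shared database for JKR
        time.sleep(0.1)
```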
The system does have a web application that can be used by local authorities.
This is done so that the government has the necessary information, such as where
the potholes and bad-condition roads are, by viewing the report generated by the
web application using the data stored in the cloud database and can repair them in a
timely manner. Once the pothole or bad-condition road is repaired, the data can be
updated on the web application and removed from the cloud database.
4 Technologies
See Table 1.
5 System Architecture
Application Layer
The pothole detector will use a web application to publish information about detected potholes, which means the information can be accessed from any device. The application layer for a pothole detector device typically involves the software or user interface that enables users to interact with the device and access its features and functionalities [16]. This layer contains the user interface, the pothole detection algorithm, data storage and management, GPS integration, and alerting and reporting.
Overall, the application layer for a pothole detector device plays a critical role
in enabling users to effectively utilize the device and maximize its potential for
improving road safety and maintenance.
Network Layer
The Arduino and Raspberry Pi are two IoT gateway possibilities. There are a number of differences between the two, and each has advantages and disadvantages. After some consideration, the Raspberry Pi was chosen because the pothole detection sensors and actuators are not lightweight and require considerable computational power. A WiFi-enabled Microchip AVR-IoT board is used to connect the pothole detector to AWS IoT.
Transport Layer
The transport layer is responsible for data transmission and packet delivery between
devices and servers. The transport layer should use a reliable and efficient protocol
to ensure packet delivery with minimal latency [17].
Table 2 lists some of the alternatives for the transport layer protocol, including Zigbee, Message Queue Telemetry Transport (MQTT), Hypertext Transfer Protocol (HTTP or HTTPS), Long Range Wide Area Network (LoRaWAN), and Constrained Application Protocol (CoAP).
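Assuming MQTT were the protocol selected (the AWS IoT back-end mentioned earlier speaks MQTT natively), a minimal publish from the gateway could look like the sketch below; the broker address, topic name, and payload fields are assumptions made purely for illustration.

```python
# Minimal MQTT publish sketch (assumed broker/topic, illustrative payload).
import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"   # placeholder; AWS IoT would use its own endpoint plus TLS certificates
TOPIC = "ldetect/potholes"      # hypothetical topic name

client = mqtt.Client()          # paho-mqtt 1.x constructor; 2.x also takes a CallbackAPIVersion argument
client.connect(BROKER, 1883, keepalive=60)
client.loop_start()             # background network loop so the QoS 1 handshake can complete

report = {"depth_m": 0.05, "diameter_m": 0.4, "lat": 3.0738, "lon": 101.5183}
info = client.publish(TOPIC, json.dumps(report), qos=1)   # QoS 1: at-least-once delivery
info.wait_for_publish()

client.loop_stop()
client.disconnect()
```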
Security Layer
The security layer is in charge of making sure that the data transferred by the system
is confidential, intact, and available. A strong security layer is required for the pothole
detection system in order to guard against unwanted access, data manipulation, and
denial-of-service assaults. To ensure the security of the system, the security layer
should comprise secure communication protocols, encryption, authentication, and
access control techniques [18–21].
Data tampering and distributed denial-of-service (DDoS) attacks are a few potential hazards to be aware of. Concerns about physical security, such as device theft, should not be disregarded. Detailed security mechanisms, threats, and countermeasures are beyond the scope of this proposal [22–25].
6 Experimental Result
Figure 3 shows the circuit set up on the pothole detector device using Tinkercad. As
Raspberry Pi and LiDAR sensors are not available in Tinkercad, Arduino Uno and an
ultrasonic distance sensor will be used to replace them for a concept demonstration
of how the device works. Once the device has been turned on, the LCD panel will
light up to show if any road issues are detected. The distance between the road issue
and the ultrasonic distance sensor will also be displayed on the LCD panel. When
there is a road issue in the detection range of an ultrasonic distance sensor, the piezo
buzzer will start buzzing, and the LCD panel will also display “Detected” and the
distance between the road issue and the ultrasonic distance sensor. At the same time,
the LED which represents a GPS module will capture the location and send it to the
cloud server.
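The concept demonstration above is built with an Arduino Uno in Tinkercad. Purely as an illustrative sketch, an equivalent distance-threshold check on the proposed Raspberry Pi hardware could be written with the gpiozero library as follows; the GPIO pin numbers and the 0.5 m threshold are assumptions.

```python
# Illustrative Raspberry Pi equivalent of the Tinkercad concept demo.
# Pin assignments and the detection threshold are assumptions, not project specs.
from gpiozero import DistanceSensor, Buzzer, LED
from time import sleep

sensor = DistanceSensor(echo=24, trigger=23, max_distance=2.0)  # ultrasonic sensor
buzzer = Buzzer(17)
gps_indicator = LED(27)        # stands in for the GPS-capture step, as in the demo

THRESHOLD_M = 0.5              # assumed distance below which a road issue is flagged

while True:
    distance = sensor.distance                 # reported in metres (up to max_distance)
    if distance < THRESHOLD_M:
        buzzer.on()
        gps_indicator.on()     # in the real device: read GPS and send to the cloud
        print(f"Detected road issue at {distance:.2f} m")
    else:
        buzzer.off()
        gps_indicator.off()
    sleep(0.2)
```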
Users will be able to access the web application via the web page. One feature of the website allows users to be informed of the recent road issues detected by the pothole detector device; the time and date of each detected road issue are also shown on this web page. Another feature allows users to view the recent road issues detected by the device, with each issue marked on a map. The details of each detected road issue, such as the size and depth of the pothole, its severity, and whether it is a pothole or a poor road condition, are also shown. Both features are shown in Fig. 4.
8 Conclusion
The proposed system of pothole detection uses a lidar sensor fixed onto a vehicle.
The input is processed to alert the driver and the coordinates will be sent to the
government to take necessary action. The system is made to increase the efficiency
of repairing potholes and decrease the frequency of accidents that they cause. Govern-
ments can reduce labor costs and time spent on manual road inspections by using the
data to optimize maintenance. The data in the server can be analyzed for predictive
maintenance of roads to prevent new potholes using machine learning models such
as random forest, support vector machine, or ensemble voting, and by considering
various factors like the most common types of vehicles on that road and weather
conditions. Pothole severity can also be categorized which will be useful for the
government’s prioritization. Real-time detection also means that drivers can drive
without having to look out for potholes in poor visibility conditions. However, it is
rather wasteful to install lidar sensors only to detect road damage, so other types of
hazards like people, animals, litter, branches, and debris may be included later.
A LiDAR sensor together with a detection algorithm must be developed. Firstly, pothole detection will be the focus. It must be tested multiple times under different environmental conditions and vehicle types. Once it has been verified that all components work as intended, it should be installed on government vehicles according to frequency of use. In the second year, the algorithm can be improved for the detection of water-filled potholes and other road damage such as upheavals and ruts. Alternative alerts
such as voice including distance and severity could be installed. There could also be
an option in the vehicle to automatically limit its speed when approaching a pothole.
In the third year, the public may sign up to test the system. Arrangements with
manufacturers and technicians would have to be made to ensure compatibility. If the
feedback is positive, general availability may be considered. Furthermore, there is
a possibility to collaborate with navigation software companies to integrate pothole
data for additional alerts and route calculation.
References
1. Alaamri, R. S. N., Kattiparuthi, R. A., & Koya, A. M. (2017). Evaluation of flexible pave-
ment failures—A case study on Izki road. International Journal of Advanced Engineering,
Management and Science, 3(7), 741–749. https://doi.org/10.24001/ijaems.3.7.6.
2. Darma, Y., Karim, M. R., & Abdullah, S. (2017). An analysis of Malaysia road traffic death
distribution by road environment. Sādhanā, 42(9), 1605–1615. https://doi.org/10.1007/s12046-
017-0694-9
3. De Silva, G. D., Perera, R. S., Laxaman, N. M., Thilakarathna, K. M., Keppitiyagama, C., & de
Zoysa, K. (2008). Automated pothole detection system. In: International IT conference (IITC
08), Colombo, Sri Lanka.
4. Pothole detection in asphalt pavement images. Advanced Engineering Informatics, 25(3).
5. World Health Organization. (2015). Global status report on road safety. http://www.who.int/violenceinjuryprevention/roadsafetystatus/2015/en.pdf/
6. Madli, R., Hebbar, S., Pattar, P., & Golla, V. (2015). Automatic detection and notification of
potholes and humps on roads to aid drivers. IEEE Sensors Journal, 15(8), 4313–4318.
7. Perttunen, M., Mazhelis, O., Cong, F., Kauppila, M., Leppnen, T., Kantola, J., Collin, J., Pirt-
tikangas, S., Haverinen, J., & Ristaniemi, T. (2011). Distributed road surface condition moni-
toring using mobile phones. In Proc. Int. Conf. Ubiquitous Intell. Comput, Berlin, Heidelberg
(pp. 64–78).
8. Kulkarni, A., Mhalgi, N., Sagar Gurnani, D., & Giri, N. (2014). Pothole detection system using
machine learning on android. International Journal of Emerging Technology and Advanced
Engineering, 5(7), 360–364.
9. Mednis, A., Strazdins, G., Liepins, M., Gordjusins, A., & Selavo, L. (2010). RoadMic: Road
surface monitoring using vehicular sensor networks with microphones. Networked Digital
Technologies, 417–429. https://doi.org/10.1007/978-3-642-14306-9_42.
10. Bučko, B., Lieskovská, E., Zábovská, K., & Zábovský, M. (2022). Computer vision based
pothole detection under challenging conditions. Sensors, 22(22), 8878. https://doi.org/10.3390/
s22228878.
11. Lei, T., Mohamed, A. A., & Claudel, C. (2018). An IMU-based traffic and road condition
monitoring system. HardwareX, 4, e00045. https://doi.org/10.1016/j.ohx.2018.e00045
12. Huston, D. R., Pelczarski, N. V., Esser, B., & Maser, K. R. (2000). Damage detection in
roadways with ground penetrating radar. SPIE Proceedings. https://doi.org/10.1117/12.383542
13. Silva, L.A., Sanchez San Blas, H., Peral García, D., Sales Mendes, A., & Villarubia González,
G. (2020). An architectural multi-agent system for a pavement monitoring system with pothole
recognition in UAV images. Sensors, 20(21), 6205. https://doi.org/10.3390/s20216205
14. MME, Z. (2017). Improving maintenance practice for road network in Sudan. MOJ Civil
Engineering, 2(6). https://doi.org/10.15406/mojce.2017.02.00054.
15. Debeunne, C., & Vivet, D. (2020). A review of visual-lidar fusion based simultaneous
localization and mapping. Sensors, 20(7), 2068. https://doi.org/10.3390/s20072068.
16. Karagiannis, V., Chatzimisios, P., Vazquez-Gallego, F., & Alonso-Zarate, J. (2015). A survey on
application layer protocols for the internet of things. Transaction on IoT and Cloud Computing.
17. Iren, S., Amer, P. D., & Conrad, P. T. (1999). The transport layer: Tutorial and survey. ACM
Computing Surveys, 31(4), 360–404. https://doi.org/10.1145/344588.344609
18. Aldosari, H.M. (2015). A proposed security layer for the internet of things communication
reference model. Procedia Computer Science, 65, 95–98. https://doi.org/10.1016/j.procs.2015.
09.084.
19. Almusaylim, Z. A., & Zaman, N. (2019). A review on smart home present state and challenges:
Linked to context-awareness internet of things (IoT). Wireless networks, 25, 3193–3204.
20. Humayun, M., Jhanjhi, N. Z., Hamid, B., & Ahmed, G. (2020). Emerging smart logistics and
transportation using IoT and blockchain. IEEE Internet of Things Magazine, 3(2), 58–62.
21. Ullah, A., Azeem, M., Ashraf, H., Alaboudi, A. A., Humayun, M., & Jhanjhi, N. Z. (2021).
Secure healthcare data aggregation and transmission in IoT—A survey. IEEE Access, 9, 16849–
16865.
22. Almulhim, M., & Zaman, N. (2018, February). Proposing secure and lightweight authentica-
tion scheme for IoT based E-health applications. In 2018 20th International Conference on
advanced communication technology (ICACT) (pp. 481–487). IEEE.
23. Alamri, M., Jhanjhi, N. Z., & Humayun, M. (2019). Blockchain for Internet of Things (IoT)
research issues challenges & future directions: A review. Int. J. Comput. Sci. Netw. Secur, 19(1),
244–258.
24. Alferidah, D. K., & Jhanjhi, N. Z. (2020). A review on security and privacy issues and challenges
in internet of things. International Journal of Computer Science and Network Security IJCSNS,
20(4), 263–286.
25. Almulhim, M., Islam, N., & Zaman, N. (2019). A lightweight and secure authentication scheme
for IoT based e-health applications. International Journal of Computer Science and Network
Security, 19(1), 107–120.
Speech Synthesis with Image Recognition
Using Application of CNN and RNN
Abstract In this paper, we are trying to make a portable image recognition machine that will read out the objects in an image in the form of speech with the help of CNN and RNN. This can be very helpful for recognizing different objects in real-time. We will be using a Raspberry Pi, which is itself a portable computer, to do all the work at hand. With the help of the Raspberry Pi camera, a real-time image will be captured just by showing the object to the camera, and then, with the help of a Convolutional Neural Network (CNN), the image will be recognized. The CNN architecture uses ResNet, which has the capability to handle sophisticated deep learning tasks and models. The speech part will be constructed with the help of a Recurrent Neural Network (RNN). The Recurrent Neural Network architecture has an internal memory that stores the states it has gone through, i.e., the inputs it has received, which makes it a suitable choice for this machine learning problem. Examples of RNN applications include Apple Siri and Google Voice Search. The speech will be the output in the form of a voice.
1 Introduction
Image Processing is the use of computer algorithms to process images and videos
and extract useful information [1]. Processing of images enables us to recognize objects within an image. We are combining image recognition and speech synthesis in real-time with a portable device, which can be helpful for a blind person to know their surroundings [2, 3]. In this study, we have taken the help of Digital Image
Processing (DIP) and Speech synthesis technology [4, 5].
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 439
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_36
• In this paper, we are trying to make a portable image recognition machine that
will read out the objects in the image in the form of speech.
• Our objective is to recognize different objects in real-time.
3 Proposed Methodology
5. Using the same pre-trained MSCOCO dataset caption will be generated for the
image.
6. Object recognition.
7. With the help of RNN’s Long-Shot Term Memory.
a. The nodes of a recurrent neural network (RNN), a type of artificial neural
network, are connected in a certain order to represent a directed graph.
b. Networks with long short-term memory can pick up dependencies.
c. By using LSTM directly, long-term dependency issues can be prevented.
8. The caption will be read out as the output from the speakers connected to the Raspberry Pi (a simplified sketch of this capture-recognize-speak flow is given after this list).
9. We can repeat the process to recognize other objects.
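The actual pipeline uses a CNN-LSTM captioning model trained on the MSCOCO dataset. Purely as a simplified, non-authoritative sketch of the capture-recognize-speak flow, the code below swaps the captioning model for an off-the-shelf ResNet50 classifier and uses the pyttsx3 engine for speech; the image path is an assumption (on the Raspberry Pi it would come from the Pi camera).

```python
# Simplified sketch: recognize an image with a pretrained ResNet50 (a stand-in for
# the paper's CNN-LSTM captioning model) and speak the result with pyttsx3.
import numpy as np
import pyttsx3
from tensorflow.keras.applications.resnet50 import (ResNet50, preprocess_input,
                                                    decode_predictions)
from tensorflow.keras.preprocessing import image

IMG_PATH = "capture.jpg"   # assumed path; a Pi camera capture would be saved here

model = ResNet50(weights="imagenet")

img = image.load_img(IMG_PATH, target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
label = decode_predictions(model.predict(x), top=1)[0][0][1]   # human-readable class name

engine = pyttsx3.init()
engine.say(f"I can see a {label.replace('_', ' ')}")
engine.runAndWait()
```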
4 Technology Used
Raspberry Pi
A single computer board called a Raspberry Pi was created in the UK by the Raspberry
Pi Foundation. This single board computer was built to enhance the competency
level of the fundamental knowledge of computer science in schools and developing
countries. The Raspberry Pi is a minicomputer (cf. Fig. 2) capable of performing all the tasks that a regular computer can. The Raspberry Pi can handle any peripherals that a personal computer supports.
The Raspberry Pi 3 Model B was released in February 2016 with the following specifications:
1. 64-bit processor with four cores
2. Inbuilt Wi-Fi and Bluetooth
3. USB boot capabilities
Fig. 2 Raspberry Pi
Model 3 B+ specifications:
a. 1.4 GHz processor
b. Gigabit Ethernet (300 Mbit/s throughput limit)
c. Internal USB 2.0 connection
d. Dual-band 2.4/5 GHz Wi-Fi capable of transferring data at 100 Mbit/s
e. Power over Ethernet
f. USB boot
g. Network boot.
Convolutional Layer
Feature extraction is performed by the first layer, called the convolution layer. The relationship among the pixels is preserved in this layer. Further, this layer learns the image features by operating on small squares of the input data. It takes an image matrix and a kernel of a certain size as input. An image matrix has height h, width w, and depth d.
• A kernel performs certain image operations on either a single pixel or a group of pixels.
• For an h × w image and an f_h × f_w kernel, the output has the form (h − f_h + 1) × (w − f_w + 1) × 1.
An image matrix of dimension 5 × 5 convolved with a kernel of size 3 × 3 produces the Feature Map shown in Fig. 4.
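As a small illustration of the output-size rule above (not code from the paper), the sketch below slides a 3 × 3 kernel over a 5 × 5 matrix and produces the expected 3 × 3 feature map.

```python
# Valid (no-padding) convolution of a 5x5 image with a 3x3 kernel -> 3x3 feature map.
import numpy as np

img = np.arange(25, dtype=float).reshape(5, 5)                          # toy 5x5 "image"
kernel = np.array([[0, -1, 0], [-1, 5, -1], [0, -1, 0]], dtype=float)   # sharpening kernel

fh, fw = kernel.shape
out_h, out_w = img.shape[0] - fh + 1, img.shape[1] - fw + 1   # (h - f_h + 1) x (w - f_w + 1)
feature_map = np.zeros((out_h, out_w))

for i in range(out_h):
    for j in range(out_w):
        feature_map[i, j] = np.sum(img[i:i + fh, j:j + fw] * kernel)

print(feature_map.shape)   # (3, 3)
```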
A kernel, which is a small matrix, is used for edge detection, noise removal, blurring, or sharpening of an image. Below, in Fig. 5, we depict the kernels used for different purposes.
Strides
Stride is the number of pixels by which the kernel moves over the input matrix. One pixel is moved at a time for a stride value of 1, two pixels at a time for a stride value of 2, and so forth. The matrix shown here displays the movement when the stride is 2. This can be observed in Fig. 6.
Fig. 5 Matrices for edge detection, noise removal, blurring, or sharpening of an image
Padding
A misfit of the filter in the image matrix can be handled as follows:
• Zero padding: pad the image with zeros.
• Valid padding: drop the part of the image where the filter does not fit.
Given that the picture matrix is relatively large, pooling layers (Fig. 8) are utilized to reduce the number of parameters. Each feature map's dimensionality is reduced via spatial pooling while the important information is retained. Several forms of spatial pooling are:
• Max pooling
• Average pooling
• Sum pooling
Max pooling takes the largest element of the rectified feature map, average pooling takes the average of its elements, and sum pooling takes the sum of all elements of the feature map.
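To make the pooling step concrete (again an illustrative sketch rather than code from the paper), the following applies 2 × 2 max pooling with stride 2 to a toy 4 × 4 feature map.

```python
# 2x2 max pooling with stride 2 on a 4x4 feature map -> 2x2 output.
import numpy as np

fmap = np.array([[1, 3, 2, 4],
                 [5, 6, 1, 2],
                 [7, 2, 9, 0],
                 [1, 8, 3, 4]], dtype=float)

pooled = fmap.reshape(2, 2, 2, 2).max(axis=(1, 3))   # block-wise maximum
print(pooled)   # [[6. 4.]
                #  [8. 9.]]
```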
Fig. 7 Tanh or sigmoid are alternatives to ReLU, but ReLU has a better performance than the other two
5 Result Analysis
The feature map matrix is flattened into a vector of the form $X_1, X_2, X_3, \ldots$. These features, combined with a fully connected layer, create a model. We have considered the softmax or sigmoid function to perform the classification.
• The image is given as input to the convolutional layer.
• Provide the parameters and, if needed, apply the filter with stride and padding. Convolution is performed on the image.
• Apply ReLU to the obtained matrix.
• Dimensionality reduction can be obtained by pooling.
• Add convolution layers until satisfactory performance is achieved.
• The output obtained is flattened and provided as input to the fully connected layer.
The three different images captured by the Pi camera are shown in Figs. 9, 11, and 13, while the corresponding outputs on the Pi terminal are shown in Figs. 10, 12, and 14.
Our research work has combined CNN and RNN and made the system portable with the help of the Raspberry Pi. It is now easier to detect scenes and objects in an image, and the speech synthesis makes it simpler for users to understand what objects are present in the image. The CNN needs to be fine-tuned, which is conceptually heavier, but overall the system works well. The foundations of voice recognition are covered in this paper, and the field's most recent advancements are reviewed. The study discusses a number of neural network models, including deep neural networks, RNN, and LSTM. Neural network-based automatic speech recognition is a field that is still developing. For those with disabilities, two important applications are text-to-speech and speech-to-text.
References
1. Chen, L. C., Papandreou, G., Kokkinos, I., Murphy, K., & Yuille, A. L. (2018). DeepLab:
Semantic image segmentation with deep convolutional nets, atrous convolution, and fully
connected CRFs. IEEE Transactions on Pattern Analysis and Machine Intelligence, 40(4),
834–848.
2. Kaushal, M., Khehra, B., & Sharma, A. (2018). Soft computing based object detection and
tracking approaches: State-of-the-art survey. Applied Soft Computing, 70, 423–464.
3. Perić, Z., & Nikolić, J. (2012). An adaptive waveform coding algorithm and its application in
speech coding. Digital Signal Processing, 22(1), 199–209.
4. Moore, R. K. (2005). Cognitive informatics: The future of spoken language processing? In Proceedings
of the 10th International Conference on Speech and Computer (SPECOM), Patras, Greece.
5. Nikolic, J., & Peric, Z. H. (2008). Lloyd-Max’s algorithm implementation in speech coding
algorithm based on forward adaptive technique. Informatica (Lithuanian Academy of Sciences),
19(2), 255–270.
6. Alwzwazy, H. A., Albehadili, H. A., Alwan, Y. S., & Islam, N. E. (2016). Handwritten digit
recognition using convolutional neural networks. Proceedings of International Journal of
Innovative Research in Computer and Communication Engineering, 4(2), 1101–1106.
7. Ling, Z. H., Kang, S. Y., Zen, H., Senior, A., Schuster, M., Qian, X. J., et al. (2015). Deep
learning for acoustic modeling in parametric speech generation: A systematic review of existing
techniques and future trends. IEEE Signal Processing Magazine, 32(3), 35–52.
8. Toda, T., Black, A., & Tokuda, K. (2007). Voice conversion based on maximum-likelihood
estimation of spectral parameter trajectory. IEEE Transactions on Audio, Speech and Language
Processing, 15(8), 2222–2235.
9. Zen, H., Gales, M., Nankaku, Y., & Tokuda, K. (2011). Product of experts for statistical para-
metric speech synthesis. IEEE Transactions Audio, Speech, and Language Processing, 20(3),
794–805.
10. Tokuda, K., Nankaku, Y., Toda, T., Zen, H., Yamagishi, J., & Oura, K. (2013). Speech synthesis
based on hidden Markov models. Proceedings of the IEEE, 101(5), 1234–1252.
11. Hinton, G., Deng, L., Yu, D., Dahl, G., Mohamed, A., Jaitly, N., et al. (2012). Deep neural
networks for acoustic modeling in speech recognition: The shared views of four research
groups. IEEE Signal Processing Magazine, 29(6), 82–97.
12. Ling, Z.-H., Deng, L., & Yu, D. (2013). Modeling spectral envelopes using restricted Boltz-
mann machines and deep belief networks for statistical parametric speech synthesis. IEEE
Transactions Audio, Speech, and Language Processing, 21(10), 2129–2139.
GeoGebra-Assisted Teaching of Rotation
in Geometric Problem Solving
Hoang Vu Nguyen, Thi Minh Chau Chu, Ton Quang Cuong, Vu Thi Thu Ha,
Pham Van Hoang, Ta Duy Phuong, and Tran Le Thuy
H. V. Nguyen (B)
Pi Journal, Vietnam Mathematical Society, Hanoi, Vietnam
e-mail: [email protected]
T. M. C. Chu
Hanoi National University of Education, Hanoi, Vietnam
T. Q. Cuong · P. Van Hoang · T. Le Thuy
University of Education, Vietnam National University, Hanoi, Vietnam
e-mail: [email protected]
T. Le Thuy
e-mail: [email protected]
V. T. T. Ha
Pham Hong Thai High School, Hanoi, Vietnam
T. D. Phuong
Institute of Mathematics, Hanoi, Vietnam
e-mail: [email protected]
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 451
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_37
1 Introduction
GeoGebra is an open-source software that is notable for its dynamic geometry capa-
bilities which can provide students with a more visual presentation of a geometry
problem compared to using only pen and paper. As such, it is inherently useful for
teaching geometric transformations where objects change their positions (or sizes).
Studies on teaching geometric transformation, particularly rotation, using GeoGebra
often focus on building mathematical intuition or motivating interest in geometry
for students [1–5]. Based on our experience of teaching Vietnamese school students,
especially gifted ones, it may be necessary to introduce more advanced geometry
problems selected from previous literature, such as [6, 7], in which rotation is more
involved in the problem solving process and used together with other mathematical
knowledge and techniques. In such situations, the usage of GeoGebra for visualizing
the required steps can be beneficial to demonstrate how rotation can interact with other
geometrical concepts to create a plausible proof. Remarkably, relationships partic-
ular to rotation such as preservation of lengths and angle magnitudes are usually
better demonstrated with software. This integrated approach can stimulate a deeper understanding among students than just a visual demonstration of rotation alone. In
this study, we present several problems in which rotation is utilized in combination
with different mathematical tools. For all of these, GeoGebra can provide a helpful
assistance to students in understanding how to construct the proofs.
2 Rotation in GeoGebra
Rotation in GeoGebra can be done in two ways using either the toolbar interface or
with typing command:
• Choose Rotate around Point from the toolbar, select the object,
select the center point of rotation, then enter the rotation angle.
• Use the Rotate command, providing it with object name, the rotation angle
and name of the center of rotation. For example, Rotate(a,d, A) will rotate
object a by d degrees around point A.
However, to be more demonstrative of the visual process, rotation can be controlled
using a parameter t, ranging from 0 to the actual rotation angle of choice, so that
students can see the object continuously doing the rotation as this parameter changes.
Consequently, in the following problems, each rotation was presented with three
figures: initial position, mid-rotation and final position. In some complex cases, the
arcs of rotation were also drawn to better visualize the trajectory of the objects.
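For reference (standard background, not taken from [6]), the coordinate form of a counter-clockwise rotation by an angle $\theta$ about a center $(x_0, y_0)$, which is what the Rotate command applies to each point $(x, y)$, is
$$(x, y) \;\mapsto\; \bigl(x_0 + (x - x_0)\cos\theta - (y - y_0)\sin\theta,\; y_0 + (x - x_0)\sin\theta + (y - y_0)\cos\theta\bigr).$$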
Problem 1 ([6]) On sides CD and DA of square ABCD, select points E and F such
that DE = AF. Prove that the lines AE and BF are perpendicular (Fig. 1a).
Solution Let O be the center of the square. Rotate the triangle ADE 90° clockwise
around O (Fig. 1b). Points A and B are images through rotation of D and A, respec-
tively. Since DE = AF, the image E’ of E coincides with F or BF is the image of AE
through a 90° rotation (Fig. 1c). Hence, BF is perpendicular to AE.
Remark This problem shows how rotation can lead to a proof of congruence for
triangles.
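As an optional analytic cross-check (not part of the original synthetic solution), place the square in coordinates with $A = (0, 0)$, $B = (1, 0)$, $C = (1, 1)$, $D = (0, 1)$ and let $DE = AF = t$. Then $E = (t, 1)$, $F = (0, t)$, and
$$\overrightarrow{AE} \cdot \overrightarrow{BF} = (t, 1) \cdot (-1, t) = -t + t = 0,$$
so $AE \perp BF$, in agreement with the rotation argument.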
Problem 2 ([6]) On the sides BC and AC of triangle ABC, construct two squares
BCDE and ACGF outside the triangle. Prove that the segments AD and BG are equal
and perpendicular to each other (Fig. 2a).
Solution Rotate triangle GCB 90° counter-clockwise around C (Fig. 2b and c). The
images of the sides CG, CB and GB are CA, CD and AD, respectively. Hence, AD
and GB are perpendicular (Fig. 2c).
Remark This problem shows how rotation can be used to prove perpendicularity
between lines.
Problem 3 ([6]) Let O be the center of square ABCD and E be an arbitrary point on
segment CD. The points P and Q are perpendicular projections of B and D on AE.
Prove that OPQ is an isosceles right-angle triangle (Fig. 3a).
Solution Rotate triangle APB 90° clockwise around O (Fig. 3b and 3c). Due to the
preservation of angles PAB and PBA through rotation, it can be easily proved that
the image of P through rotation is Q. Hence, OP = OQ. As the rotation is a 90° one,
triangle OPQ is also a right-angle triangle (Fig. 3c).
Remark This problem shows both the length preservation and perpendicularity of
a 90° rotation.
Problem 4 ([6]) Point P is on side CD of square ABCD. The angle bisector of angle
BAP intersects BC at Q. Prove that BQ + DP = AP (Fig. 4a).
Solution Rotate triangle AQB 90° counter-clockwise around A (Fig. 4b, c). Images
of B and Q are D and Q’, respectively (Fig. 4c). Due to preservation of angles
through rotation, the angles AQ’P and AQB are equal. Using complementary angle
relationships, it can be proven that angle PAQ’ is also equal to angle PQ’A or triangle
Q’PA is an isosceles one. So BQ + DP = DQ’ + DP = PQ’ = AP.
Remark This problem shows how rotation can be used to prove relationships
involving sums of angles.
Problem 5 ([6]) Let ABC be an acute triangle with angle ABC being 45°. Altitudes
from A and C intersect at H. Prove that BH = AC (Fig. 5a).
Solution Rotate triangle CKA 90° counter-clockwise around C (Fig. 5b, c). Its image
is triangle CA’K’. The quadrilateral CKBK’ is a square since CK = CK’ and its angles
are all 90° (Fig. 5c). Because the angles K’CA’ and HBK are both equal to angle KCA,
CA’ is parallel to BH. As CK is already parallel to BK’, CKBA’ is a parallelogram.
Consequently, BH = CA’ = CA.
Remark This problem shows how rotation can be used to prove both perpendicu-
larity and parallelism.
Problem 6 ([6]) On sides AB and AD of square ABCD, select points P and Q such
that AP = DQ (Fig. 6a). Prove that ∠PBQ + ∠PCQ + ∠PDQ = 90°.
Solution Rotate triangles DAP and CBP 90° clockwise around O (Fig. 6b). Images
of angles ADP and PCB are angles DCQ and QBA, respectively (Fig. 6c). Hence,
∠PBQ + ∠PCQ + ∠PDQ = ∠ABQ + ∠PCQ + ∠PDA = ∠BCP + ∠PCQ + ∠QCD = ∠BCD = 90°.
Remark This problem shows a proof using rotation of two objects at the same time.
Remark This problem shows a case with a rotation angle that is not a right-angle.
Remark This problem shows how rotation can be used within geometrical construc-
tions. Another construction problem with rotation can be seen in [8].
Remark This problem shows how rotation can be used creatively to prove other
familiar theorems.
Problem 10 (A kinematic problem) Two points A and B rotate clockwise with the
same angular velocity around points O1 and O2 , respectively. Prove that vertex C of
equilateral triangle ABC also moves on a particular circle.
458 H. V. Nguyen et al.
Solution Let A, O2 and B rotate 60° counter-clockwise around O1 (Fig. 10a). Their images are A', O3 and B'. Hence, O3B' = O2B and AA' = O1A' (equilateral triangle). Since B' and C are the images of B through 60° rotations around O1 and A, respectively, then $\overrightarrow{B'C} = \overrightarrow{A'A}$. Points A and B rotate clockwise with the same angular velocity around points O1 and O2, so the angle between $\overrightarrow{O_3B'}$ and $\overrightarrow{A'A}$ is invariant. Hence, the length of $\overrightarrow{O_3C} = \overrightarrow{O_3B'} + \overrightarrow{B'C}$ does not change, i.e., C is on a circle with center O3 (Fig. 10b). This circle can be visualized using the trace tracking feature of GeoGebra (pink circle in Fig. 10a and b). Another circle (the green one) is also a solution, obtained when O2 is rotated clockwise around O1.
Remark This problem shows how rotation can be used for kinematic problems
involving several motions simultaneously.
Fig. 10 Problem 10 and its solution in GeoGebra. Different positions of A are shown in the left
and right parts of the figure
4 Conclusion
In this study, we have demonstrated how GeoGebra can be used in enabling students
to understand complex problem solving with rotation through the use of dynamic
geometry. The samples provided showed that rotation can help students better under-
stand rotation not only as a standalone geometric transformation but also as a math-
ematical tool that can be used together with other concepts to produce a solution.
This may also be applicable to other mathematical concepts that require intuitive
visualization. GeoGebra may at first seem to be difficult to manage but once the
connection between the mathematical concepts and their representations in software
is explained, students could manage to get insights into the dynamics of geometry
that may not be available when working with a pencil-and-paper approach. Future
research in this direction may involve rotation and other geometrical transformations
in 3D geometrical problems as well as STEM-related teaching of how such problems
appear in real-life circumstances. Incorporation of our findings in manuals and books
for school math teachers in Vietnam and elsewhere is also a topic worth considering.
References
1. Chua, G. L. L., Tengah, K. A., Shahrill, M., Tan, A., & Leong, E. (2017). Analysing students’
perspectives on geometry learning from the combination of Van Hiele phase-based instructions
and GeoGebra. In Proceedings of the 3rd International Conference on Education (Vol. 3, pp. 205–
213).
2. Coelho, A., & Cabrita, I. (2015). A creative approach to isometries integrating GeoGebra and iTALC
with ‘paper and pencil’ environments. Journal of the European Teacher Education Network, 10,
71–85.
3. Hall, J., & Chamblee, G. (2013). Teaching algebra and geometry with GeoGebra: Preparing
pre-service teachers for middle grades/secondary mathematics classrooms. Computers in the
Schools: Interdisciplinary Journal of Practice, Theory, and Applied Research, 30(1–2), 12–29.
4. Mukamba, E., & Makamure, C. (2020). Integration of GeoGebra in teaching and learning
geometric transformations at ordinary level in Zimbabwe. Contemporary Mathematics and
Science Education, 1(1), ep20001.
5. Selvy, Y. et al. (2020). Improving students’ mathematical creative thinking and motivation
through GeoGebra assisted problem based learning. Journal of Physics: Conference Series,
1460, 012004.
6. Pompe, W. (2016). Wokól obrotów Przewodnik po geometrii elementarnej. Wydawnictwo
Szkolne OMEGA.
7. Yaglom, I. M. (1975). Geometric Transformations. (A. Shield, Translated from Russian). The
Mathematical Association of America (MAA).
8. Maksimović, M., Kontrec, N., & Panić, S. (2021). Use of GeoGebra in the study of rotations. In
Proceedings of the 12th International Conference on Science and Higher Education in Function
of Sustainable Development, Uzice, Serbia.
Pandai Smart Highway
Sumathi Balakrishnan, Jing Kai Ooi, Shin Kir Ti, Jer Lyn Choo,
Ngui Adrian, Qiao Hui Tai, Pu Kai Jin, and Manzoor Hussain
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2023 461
S.-L. Peng et al. (eds.), Proceedings of 3rd International Conference on Mathematical
Modeling and Computational Science, Advances in Intelligent Systems
and Computing 1450, https://doi.org/10.1007/978-981-99-3611-3_38
road situation. A prototype is used to assess the viability of the model. The results
of the investigations demonstrate good efficiency in vehicle detection and accurate
messages provided to the drivers.
Keywords IOT · Smart highway · IR sensor · Camera sensor · LED traffic lane
markers
1 Introduction
According to the statistics published on the official portal of the Ministry of Transport
Malaysia [1], there were a total of 567,516 cases of road accidents resulting in 6,167
road fatalities in 2019. For recent statistics, the cases of road accidents from January
2022 to September 2022 were 402,626 cases and 4,379 fatalities were caused [2].
Most accidents happened on the highway. One of the main reasons for highway
accidents is unclear visibility, especially at night and because the drivers could not
observe the road situation in front which may reduce the efficiency of emergency
decision-making. This is due to the reason that there needs to be more street lights
installed on the highway. This may cause the drivers to be unable to clearly observe the
road situation in the front. If there is an accident happening ahead and a vehicle stops,
the drivers do not have enough time to brake their vehicle and crash. Furthermore,
the third problem that highway users may face usually is the obstacle on the highway
lane. The obstacles can be the dead bodies of animals, fallen trees and branches, fallen
items from delivering vehicles and so on. The drivers may not have enough time to
avoid the obstacles as they are driving fast. Hence, we will design an intelligent
highway system that can alert drivers about the road situation ahead to provide the
drivers enough time to decelerate their vehicles. Apart from that, this system can
also allow the drivers to have more time to think about the decision-making before
entering the accident or obstacle area (Fig. 1).
2.1 Sensors
This system relies on vehicle and object recognition through sensors. Recognizing vehicles and extracting their data uses vehicle detection and tracking approaches [3]. The proposed system uses infrared (IR) sensors, whereas the system in [4] uses ultrasonic sensors; both detect vehicles and traffic density levels, and the processed data are then sent to an LCD [5]. Another proposed system uses an IR sensor and a NodeMCU
microcontroller to detect different lane positions, determine the presence of vehicles, and send this information to the microcontroller [6, 7]. To reduce response times, emergency vehicles can be automatically scheduled by managing traffic signals [8]. IR sensors used for emergency vehicle detection are found in [9]; there are numerous emergency vehicle preemption (EVP) system designs, including radio-based emitter/detector systems, strobe light systems, infrared emitters, and sound systems. Cameras are used to measure traffic conditions, and lane center edges are used to estimate the traffic parameters [10]. The author of [11] introduced an area-based image processing method for detecting traffic density. To notify the driver of the road situation, the system in [12] alerts the driver to the presence of potholes via an LCD display and a speaker. In this system, the average vehicle speed as estimated by
vehicle detection systems is used to calculate the real-time traffic density. Real-time
traffic images are processed by [13] using image processing techniques, and optical
flow is used to estimate traffic congestion. Similar to this, variable speed restrictions
are set using electronic sign boards to avoid traffic jams. CCTV cameras are being
used to monitor traffic and detect vehicles [14].
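To make the IR-based sensing idea above concrete, the hedged sketch below shows how a digital IR beam sensor could be polled to count passing vehicles; the `read_ir()` helper is a stand-in for real hardware access and is not part of any cited system.

```python
import time
import random

def read_ir() -> int:
    """Hypothetical IR sensor read: 0 = beam broken (vehicle present), 1 = clear.
    Replaced here by a random stub so the sketch runs without hardware."""
    return random.choice([0, 1, 1, 1])

def count_vehicles(duration_s: float = 5.0, poll_interval_s: float = 0.05) -> int:
    """Count falling edges (clear -> blocked) of the IR beam over a time window."""
    count = 0
    previous = 1  # assume the beam starts clear
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        current = read_ir()
        if previous == 1 and current == 0:  # a vehicle has just interrupted the beam
            count += 1
        previous = current
        time.sleep(poll_interval_s)
    return count

if __name__ == "__main__":
    vehicles = count_vehicles(duration_s=2.0)
    print(f"vehicles detected in window: {vehicles}")
```

A real deployment would replace the stub with a GPIO read on the microcontroller and forward the count over the transmission layer.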
The work in [15] also mentions harvesting energy to generate power as a supply for smart roads. A sustainable power supply has always been a challenge, but electrical energy can be generated from natural sources such as a combination of solar energy and mechanical vibration. Moreover, the harvested energy can be stored in the electrical power grid. This research proposal intends to use this idea to reduce the
cost of electricity [16, 17]. To better utilize developing communication systems like
5G, the power grid must be replaced with a smart grid (SG) due to the requirement
for a diverse power source and efficient power management [18]. The well-known smart application Advanced Metering Infrastructure (AMI), for instance, utilizes the capability of two-way information exchange between the consumer and the smart grid. Through real-time monitoring, real-time tracking of energy expenses, and the ability to make more informed decisions, its integration with electric vehicles (EVs) gives customers a better EV experience [19].
2.3 Network
As network protocols, the authors of [20] employed RFID, NFC, Bluetooth, WiFi,
and Zigbee. Wi-SUN is used for Neighborhood Area Networks, while SigFox, Cellular, and NB-IoT are used for Wide Area Networks. The authors of [21] employed LoRa and
LoRaWAN as networking protocols to enable long-distance data connection while
utilizing very little electricity. The authors of [22] utilized 5G to play a critical role in
the development of applications for smart cities since it will allow various devices to
connect and exchange data at high speeds. The author of [23] utilized a video private
network to stage events in several parts of the city. In order to achieve the link, the
wireless private network was employed for the information system. LoRaWAN and
the LoRa network protocol were utilized by the authors of [24] for low-cost rechargeable-battery end-devices.
2.4 Database
Authors of [1, 4] stated that they would store their data in cloud storage. Cloud
storage provides better performance compared to local storage [1]. Authors of [5]
implemented a cloud-based backend system that enables the querying of data and
facilitates the exchange of information with other traffic data systems. The traffic
controllers proposed would have the ability to configure the rules for propagating
data within a cloud-based backend system. The smart traffic system proposed in [6]
was using both private and public cloud storage to store the data collected from their
system.
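As a minimal sketch of the cloud-database idea discussed above (not the exact schema of any cited system), the snippet below stores time-stamped sensor readings in MongoDB with `pymongo`; the connection string, database name, and document fields are assumptions for illustration.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

# Placeholder connection string; a managed cloud cluster URI would normally go here.
client = MongoClient("mongodb://localhost:27017/")
collection = client["smart_highway"]["sensor_readings"]

def store_reading(sensor_id: str, lane: int, vehicle_count: int) -> None:
    """Insert one time-stamped reading so it can later be queried per lane or per sensor."""
    collection.insert_one({
        "sensor_id": sensor_id,
        "lane": lane,
        "vehicle_count": vehicle_count,
        "recorded_at": datetime.now(timezone.utc),
    })

def latest_for_lane(lane: int, limit: int = 10):
    """Return the newest readings for a lane, newest first."""
    return list(collection.find({"lane": lane}).sort("recorded_at", -1).limit(limit))
```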
One approach to improving the flow of ambulance operations and reducing traffic
could be to implement a method for counting traffic density [2]. CCTVs are installed
on the road to check the condition of the road [3]. A smart highway system can provide
the surrounding traffic and the distance between vehicles to the drivers [7]. The new
generation of smart highway architecture’s top layer is the application system, which
includes various services such as intelligent infrastructure management and mainte-
nance, information service, and intelligent traffic management [8]. According to the
authors in [9], the implementation of a smart highway system that includes sensors
for detecting vehicles and traffic along the roadside can improve driver awareness.
3 Methodology
Research on other intelligent highway systems and related systems was also conducted
and compared through relevant research papers, which were used to assist in selecting
hardware, road assistance methods, transmission methods and other useful informa-
tion. Combined with data retrieved from the public statistics, specific sensors in
the system, data transmission methods, data storage methods, and road assistance
technology were chosen to be used in the proposed system in Malaysia.
The architecture in Fig. 2 was created based on our combined research data; it includes five layers, namely the Perception Layer, Transmission Layer, Middleware Layer, Application Layer, and Business Layer.
4 Justification
The IR sensor was chosen for emergency vehicle detection because it works in the dark and is low cost, while the camera sensor was chosen for obstacle detection because it allows AI-automated detection and can also measure vehicle speeds and density. For road assistance technology, LED traffic lane markers were chosen because they can alert drivers in a way similar to a traffic light, light up by default to combat visibility issues, and are low cost. Digital sign boards were chosen to relay information to drivers because of their simplicity and ease of understanding. MQTT was chosen as the transmission method because it is simple to set up and has low energy usage; even with unstable connections between devices, MQTT leverages QoS levels to assure message delivery to recipients. For the cloud database, MongoDB was chosen because it offers flexibility, scalability, and high performance while being open source, which also enables future-proofing. Artificial intelligence (AI) and machine learning were implemented in the main processing server within the middleware layer to analyze the data in the cloud server for deep learning and to generate usable and useful information; relevant articles point towards the use of AI and machine learning to analyze data, which is the main reason for this choice. Finally, the profit generation methods were discussed by identifying the most plausible methods in Malaysia, which are selling the collected data to the government or reputable companies, and selling the system itself.
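To illustrate the MQTT choice above, the hedged sketch below publishes a road-status message at QoS level 1 with the `paho-mqtt` client; the broker address and topic name are placeholders, not details of the proposed prototype.

```python
import json
import paho.mqtt.publish as publish

# Placeholder broker address and topic; a real deployment would use its own.
BROKER_HOST = "broker.example.com"
TOPIC = "highway/lane/3/status"

def publish_alert(obstacle_detected: bool, avg_speed_kmh: float) -> None:
    """Publish one road-status message with QoS 1 (at-least-once delivery)."""
    payload = json.dumps({
        "obstacle": obstacle_detected,
        "avg_speed_kmh": avg_speed_kmh,
    })
    publish.single(TOPIC, payload, qos=1, hostname=BROKER_HOST)

if __name__ == "__main__":
    publish_alert(obstacle_detected=True, avg_speed_kmh=42.5)
```

QoS 1 gives at-least-once delivery, which is why an alert can still reach roadside displays over an unstable link.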
5 Discussion
Smart roads must be able to adapt to new improvements and changes as technology
evolves. By using electronic tolling systems, it is possible to remove the requirement
for physical toll booths while also reducing congestion at toll plazas [39–47].
6 Conclusion
In this article, we have studied the Intelligent Highway System, which combines arti-
ficial intelligence, the Internet of Things technology, and mobile application devel-
opment technology. We believe that this system has broad application prospects
and can improve the safety and flow of highways [40]. The system collects data
through numerous IoT devices, such as infrared sensors, pressure sensors, camera
sensors, photosensitive sensors, etc., to collect information about vehicles, weather,
personnel, and more. The data is then transmitted to a central server and processed
using artificial intelligence technology. In the future, we should focus on addressing
the following issues:
• For drivers, how to intelligently remind them to pay attention to safety and reduce
the accident rate to below 10%.
• For energy conservation, how to use cleaner energy and reduce environmental
pollution.
• To reduce construction costs, how to quickly build the entire Intelligent Highway
System and reduce costs and time.
Overall, we believe that the development of the Intelligent Highway System is
feasible and can contribute to improving traffic safety and reducing environmental
pollution. However, achieving this goal requires collecting data through the actual
operation of the system and through user feedback, and solving various technical diffi-
culties through continuous version iterations and upgrades of artificial intelligence
algorithms.
References
1. Ministry of Transport Malaysia. (2019). Ministry of Transport Malaysia Official Portal Road
Accidents and Fatalities in Malaysia, www.mot.gov.my
2. Lim, A. (2022). 402,626 road accidents recorded in Malaysia from Jan-Sept 2022, with 4,378
fatalities-PDRM statistics-paultan.org. Paul Tan’s Automotive News. https://paultan.org/2022/
10/27/402626-road-accidents-recorded-in-malaysia-from-jan-sept-2022-with-4378-fatalities-
pdrm-statistics/
3. Dong, J., Meng, W., Liu, Y., & Ti, J. (2021). A framework of pavement management system
based on IoT and big data. Advanced Engineering Informatics, 47, 101226. https://doi.org/10.
1016/j.aei.2020.101226
4. Sarrab, M., Pulparambil, S., & Awadalla, M. (2020). Development of an IoT based real-time
traffic monitoring system for city governance. Global Transitions, 2, 230–245. https://doi.org/
10.1016/j.glt.2020.09.004
5. Talukder, M. Z. (2017). An IoT based automated traffic control system with real-time update
capability. ResearchGate. An IoT based automated traffic control system with real-time update
capability.
6. Turankar, A., Falguni Khobragade, Dipalee, S., Arati Somkuwar, Neware, K., & Kalyani, W.
(2021). Smart traffic monitoring and controlling using IOT and cloud. International Journal
of Creative Research Thoughts (IJCRT), 9(6), 2320–2882. https://ijcrt.org/papers/IJCRT2106
150.pdf
7. Nellore, K., & Hancke, G. (2016). Traffic management for emergency vehicle priority based
on visual sensing. Sensors, 16(11), 1892. https://doi.org/10.3390/s16111892
8. Sanjay, S. T., Fu, G., Dou, M., Xu, F., Liu, R., Qi, H., & Li, X. (2015). Biomarker detection
for disease diagnosis using cost-effective microfluidic platforms. The Analyst, 140(21), 7062–
7081. https://doi.org/10.1039/C5AN00780A
9. Al-Ostath, N., Selityn, F., Al-Roudhan, Z., & El-Abd, M. (2015). Implementation of an emer-
gency vehicle to traffic lights communication system. In 2015 7th International Conference
on New Technologies, Mobility and Security (NTMS). https://www.semanticscholar.org/paper/
Implementation-of-an-emergency-vehicle-to-traffic-Al-Ostath-Selityn/7a129016a74cdcb144
8fda66d51de0c81d6ec52c
10. Kapileswar, N. (2012). Automatic traffic monitoring system using lane centre edges. IOSR
Journal of Engineering, 02(08), 01–08. https://doi.org/10.9790/3021-02840108
11. Uddin, M. S., Das, A., & Taleb, M. A. (2015). Real-time area based traffic density esti-
mation by image processing for traffic signal control system: Bangladesh perspective. In
2015 International Conference on Electrical Engineering and Information Communication
Technology (ICEEICT). https://www.semanticscholar.org/paper/Real-time-area-based-traffic-
density-estimation-by-Uddin-Das/447d02f7bbed89b19f17ad65132c2b06524957c0
12. Srikanth, C. (2019). Design and development of an intelligent system for pothole and hump
identification on roads. International Journal of Recent Technology and Engineering (IJRTE),
8(3), 2277–3878. https://doi.org/10.35940/ijrte.C5936.098319
13. Gohar, M., Muzammal, M., & Rahman, A. U. (2018). SMART TSS: Defining
transportation system behavior using big data analytics in smart cities. Sustainable
Cities and Society. https://www.semanticscholar.org/paper/SMART-TSS%3A-Defining-transp
ortation-system-behavior-Gohar-Muzammal/241cc9084da28143df4a6bcffda917e108436ddf
14. Addala, S. (2020). Vehicle Detection and Recognition. Lovely Professional Univer-
sity. https://www.researchgate.net/publication/344668186_Research_paper_on_vehicle_dete
ction_and_recognition
15. Toh, C. K., Sanguesa, J. A., Cano, J. C., & Martinez, F. J. (2020). Advances in smart roads for
future smart cities. Proceedings of the Royal Society A, 476(2233), 20190439.
16. El Hendouzi, A., Bourouhou, A., & Regragui, B. (2020). Solar photovoltaic power forecasting. https://doi.org/10.1155/2020/8819925
17. Saeed, N., Saeed, N., & El-Dessouki, I. (2021). Smart grid integration into smart cities smart
applications of 5G network view project optimum MANET routing system view project smart
grid integration into smart cities. https://doi.org/10.1109/ISC253183.2021.9562769
18. Li, W., Wu, Z., & Zhang, P. (2020). Research on 5G network slicing for digital power grid.
In IEEE 3rd International Conference on Electronic Information and Communication Tech-
nology (ICEICT), Shenzhen, China, pp. 679–682. https://doi.org/10.1109/ICEICT51264.2020.
9334327
19. Khan, R., Kumar, P., Jayakody, D. N. K., & Liyanage, M. (2019). A survey on security and
privacy of 5G technologies: Potential solutions, recent advancements and future directions.
20. Syed, A. S., Sierra-Sosa, D., Kumar, A., & Elmaghraby, A. (2021). IoT in smart cities: A
survey of technologies, practices and challenges. Smart Cities, 4(2), 429–475. https://doi.org/
10.3390/smartcities4020024
21. Barro, P., Zennaro, M., Degila, J., & Pietrosemoli, E. (2019). A smart cities LoRaWAN network
based on autonomous base stations (BS) for some countries with limited internet access. Future
Internet, 11(4), 93. https://doi.org/10.3390/fi11040093
22. Hoon, J., Sharma, K., Costa, J., Sicato, S., & Park, J. (2019). Emerging technologies for
sustainable smart city network security: Issues, challenges, and countermeasures 15(4), 765–
784. https://doi.org/10.3745/JIPS.03.0124
23. Jiang, D. (2020). The construction of smart city information system based on the Internet of
Things and cloud computing. Computer Communications, 150, 158–166. https://doi.org/10.
1016/j.comcom.2019.10.035
24. Premsankar, G., Ghaddar, B., Slabicki, M., & Francesco, M. D. (2020). Optimal configuration
of LoRa networks in smart cities. IEEE Transactions on Industrial Informatics, 16(12), 7243–
7254. https://doi.org/10.1109/tii.2020.2967123
25. Lilhore, U. K., Imoize, A. L., Li, C.-T., Simaiya, S., Pani, S. K., Goyal, N., Kumar, A., & Lee,
C.-C. (2022). Design and implementation of an ML and IoT based adaptive traffic-management
system for smart cities. Sensors, 22(8), 2908. https://doi.org/10.3390/s22082908
26. Sood, S. K., & Sahil. (2019). Smart vehicular traffic management: An edge cloud centric IoT
based framework. Internet of Things, 100140. https://doi.org/10.1016/j.iot.2019.100140
27. Dewi, N. K., & Putra, A. S. (2021). Law enforcement in smart transportation systems on
highway. In International Conference on Education of Suryakancana (IConnects Proceedings).
https://doi.org/10.35194/cp.v0i0.1367
28. George, A. M., George, V. I., & George, M. A. (2018). IOT based smart traffic light control
system. IEEE Xplore. https://doi.org/10.1109/ICCPCCT.2018.8574285
29. Yadav, A., More, V., Shinde, N., Nerurkar, M., & Sakhare, N. (2019). Adaptive traffic manage-
ment system using IoT and machine learning. International Journal of Scientific Research in
Science, Engineering and Technology, 216. https://www.academia.edu/44866408/Adaptive_
Traffic_Management_System_Using_IoT_and_Machine_Learning
30. Jain, A., Dhamnaskar, B., Doshi, V., & Muchhala, S. (2021). Smart road maintenance: A
machine learning and IoT based approach. International Journal of Research in Engineering
and Technology., 08, 2395–2456.
31. Wiegand, G. (2019). Benefits and challenges of smart highways for the user. https://www.semanticscholar.org/paper/Benefits-and-Challenges-of-Smart-Hig
hways-for-the-Wiegand/d1a848132c17138c7a5921d05f12f009c299903b
32. Liu, C., Du, Y., Ge, Y., Wu, D., Zhao, C., & Li, Y. (2021). New generation of smart highway:
Framework and insights. Journal of Advanced Transportation, 2021, 1–12. https://doi.org/10.
1155/2021/9445070
33. Yu-chuan, D. U., Cheng-long, L. I. U., Di-fei, W. U., & Cong, Z. (2022). Framework of the
new generation of smart highway. China Journal of Highway and Transport, 35(4), 203. https:/
/doi.org/10.19721/j.cnki.1001-7372.2022.04.017
34. Iqbal, A. (2020). Obstacle detection and track detection in autonomous cars. In www.intech
open.com. IntechOpen. https://www.intechopen.com/chapters/69747
35. Toskov, B., Toskova, A., Bogdanov, S., & Spasova, N. (2021). Intelligent IoT gateway. IEEE
Xplore. https://doi.org/10.1109/ICAI52893.2021.9639779
36. Craggs, I. (2022). MQTT vs CoAP for IoT. https://www.hivemq.com/blog/mqtt-vs-coap-for-iot/
37. Marquez-Barja, J., Lannoo, B., Braem, B., Donato, C., Maglogiannis, V., Mercelis, S.,
Berkvens, R., Hellinckx, P., Weyn, M., Moerman, I., & Latre, S. (2019). Smart Highway: ITS-
G5 and C-V2X based testbed for vehicular communications in real environments enhanced by
edge/cloud technologies.
38. Trubia, S., Severino, A., Curto, S., Arena, F., & Pau, G. (2020). Smart roads: An overview of
what future mobility will look like. Infrastructures, 5(12), 107. https://doi.org/10.3390/infras
tructures5120107
39. Pompigna, A., & Mauro, R. (2021). Smart roads: A state of the art of highways innovations in
the Smart Age. Engineering Science and Technology, an International Journal, 25. https://doi.
org/10.1016/j.jestch.2021.04.005
40. Guerrieri, M. (2021). Smart roads geometric design criteria and capacity estimation based on
AV and CAV emerging technologies: A case study in the Trans-European transport network.
International Journal of Intelligent Transportation Systems Research, 19(2), 429–440. https:/
/doi.org/10.1007/s13177-021-00255-4
41. Almulhim, M., & Zaman, N. (2018). Proposing secure and lightweight authentication scheme
for IoT based E-health applications. In 2018 20th International Conference on Advanced
Communication Technology (ICACT) (pp. 481–487). IEEE.
42. Lee, S., Abdullah, A., Jhanjhi, N., & Kok, S. (2021). Classification of botnet attacks in IoT
smart factory using honeypot combined with machine learning. PeerJ Computer Science, 7,
e350.
43. Alferidah, D. K., & Jhanjhi, N. Z. (2020). A review on security and privacy issues and challenges
in internet of things. International Journal of Computer Science and Network Security IJCSNS,
20(4), 263–286.
44. Almulhim, M., Islam, N., & Zaman, N. (2019). A lightweight and secure authentication scheme
for IoT based e-health applications. International Journal of Computer Science and Network
Security, 19(1), 107–120.
45. Humayun, M., Jhanjhi, N. Z., Alruwaili, M., Amalathas, S. S., Balasubramanian, V., & Selvaraj,
B. (2020). Privacy protection and energy optimization for 5G-aided industrial Internet of
Things. IEEE Access, 8, 183665–183677.
46. Humayun, M., Ashfaq, F., Jhanjhi, N. Z., & Alsadun, M. K. (2022). Traffic management:
Multi-scale vehicle detection in varying weather conditions using yolov4 and spatial pyramid
pooling network. Electronics, 11(17), 2748.
47. Muzafar, S., Jhanjhi, N. Z., Khan, N. A., & Ashfaq, F. (2022). DDoS attack detection approaches
in on software defined network. In 2022 14th International Conference on Mathematics,
Actuarial Science, Computer Science and Statistics (MACS) (pp. 1–5). IEEE.
Hand Gesture Recognition: A Review
Abstract The review-paper is primarily focused on the problem arising in the recog-
nition of hand gestures. We have considered gestures of the hand, which are combi-
nations of different hand positions. The recognition of the hand gesture approach
uses a combination of static shape recognition. The methods of user interaction now
used with a keyboard, mouse, and pen are inadequate. The usable command set is
constrained by these devices’ limitations. It is created as a real-time implementa-
tion of the standard. The urge for human–machine interaction is spreading quickly,
thanks to computer vision technology. Gesture recognition is commonly used in robot control, intelligent furniture, and various other applications. An essen-
tial part of human–computer interaction is gesture recognition. People are becoming
dissatisfied with gesture identification based on wearable gadgets and are hoping
for more natural gesture recognition. The effectiveness of human–computer interac-
tion may be greatly increased through computer vision-based gesture recognition,
which may conveniently and effectively transmit human thoughts and instructions to
computers. The fundamental components of computer vision-based gesture recog-
nition technologies are hidden Markov models, dynamic time warping, and neural network
algorithms. Image gathering, segmentation of hand, recognition of gesture, and its
classification are the procedure’s four primary components. This paper also contains
classical approaches to hand gesture recognition like the Glove-based approach. Computer vision-based
gesture recognition may conveniently and effectively transmit human emotions and
instructions to computers, greatly enhancing the effectiveness of human–computer
interaction. The key components of computer vision-based gesture recognition tech-
nologies include neural network algorithms, hidden Markov models, and dynamic
S. Parihar (B)
University Institute of Computing, Chandigarh University, Mohali, Punjab, India
e-mail: [email protected]
N. Shrotriya · P. Thakore
Department of Advance Computing, Poornima College of Engineering, Sitapura, Jaipur, India
e-mail: [email protected]
P. Thakore
e-mail: [email protected]
time warping. Gathering images, segmenting hands, detecting gestures, and classi-
fying the results are the procedure’s four primary phases. The glove-based technique,
one of the more traditional methods for hand gesture identification, is also included
in this study.
1 Introduction
The goal of the field of computer science and language technology known as gesture
recognition is to mathematically understand human gestures. Although any physical
movement or mood can result in a gesture, the face or hands are the most common
places for them to appear. It is a crucial area of computer science that uses algorithms
to try to understand human gestures [1].
The development of a desirable alternative to popular human–computer interac-
tion modalities depends on the recognition of gestures. We have concentrated on the
challenge of dynamic hand gesture recognition in this work.
Sequences of different hand shapes are used as gestures of the hand. Motion and
subtle alterations are possible for a given hand shape. Although continuous deformations are not allowed, these gestures can be identified by the hand shapes used and
the type of motion they include [2].
Computer vision-based gesture recognition enables more natural interaction
between people and technology. Its advantage is that the environment has a smaller
impact on it. There are fewer restrictions on users and more opportunities for users
to connect to computers, which enables computers to accurately and quickly under-
stand human commands. No special equipment is needed to follow the instructions [3], and gestures can convey a message silently. Data gloves and other devices, for instance, have excellent detection performance but are pricey and difficult to use.
The optical-marking method replaced Data Glove after that by using infrared light
to determine the relative position and motion of a human hand. It has a compa-
rable result but calls for much more complicated equipment. Although they are more
expensive and have an impact on the user’s actions, external devices can offer more
precision [4].
A few industries that often employ hand gesture detection include UAVs,
somatosensory gaming, and sign language recognition. Research on the recognition
of hand gestures is crucial in this situation [5]. Other subfields of hand gesture recog-
nition include hand gesture segmentation, hand gesture tracking, and hand gesture
recognition. Hand gesture segmentation [5], the first stage in hand gesture recogni-
tion, selects the appropriate individual hand motion from one frame of a video. The
majority of it is composed of skin tone-based types, edge detection types, motion
data types, and statistical template types; each has advantages and disadvantages [3].
The initial step in hand gesture identification is hand gesture tracking, which is
concerned with the real-time position and tracking of hand gestures in video based
on some of their properties. The real-time monitoring and assurance provided by
tracking of hand gestures ensure that intended hand motions are not misplaced.
2 Application Areas
secondary controls. This technology wave has also affected the healthcare industry.
Wachs et al. are known for developing a gesture-based interface for sterilely exploring
radiological images [10]. Because it enables the surgeon to work with medical data
without contaminating the patient, the operating room, or other surgeons, a sterile
human–machine interface is essential. The usage of gesture-based technologies may
replace the widespread use of touch screens in operating rooms at hospitals. Smooth
surfaces are necessary for these touch displays, but they occasionally go without
being thoroughly cleaned after each treatment. The hand motion recognition system
presents a potential alternative to the excessively high infection rates that hospitals
are now experiencing [8]. When the Toshiba Qosmio laptops were formally presented
in June 2008, it may have been the first time that daily computing and gesture recog-
nition had been integrated. With Toshiba’s media center software, users may stop or
play music and video by merely bringing an open palm up to the screen. Making a
fist causes your hand to behave like a mouse and move the pointer across the screen, and moving your thumb up and down produces a click. In short, hand gesture recognition could be used in many future scenarios. Increasingly, more people are using gesture recognition as a result of factors such as declining hardware and processing costs [6]. Vision-based hand gesture detection is still a
significant area of research since the present algorithms are so basic in comparison
to mammalian vision. While the majority of approaches work well in a lab setting, their key drawback is that they do not translate to arbitrary real-world settings [11].
The anatomical structure of the hand is intricate, with numerous joints and related
sections that allow for about 27 degrees of freedom (DOFs) [11]. Understanding the
anatomy of the human hand is essential for developing user interfaces because it helps
designers decide what postures and movements are most natural to use. Although
hand gestures and postures are frequently confused, it’s important to understand the
differences between the two [12]. Hand posture is a still hand position that excludes
any motion; it might consist of making a fist and holding your hand in a particular position. A hand gesture, on the other hand, is defined as a dynamic action that includes several hand positions connected by quick, continuous motions, like waving goodbye [13]. The complexity of gesture
identification may be divided into two stages: low-level hand posture detection and high-level hand gesture recognition [14]. This is because hand gestures have a
composite nature. In a vision-based system for recognizing hand gestures, the
camera records the hand’s motion [15]. This video input is split into individual frames, and a number of features are extracted from each. The frames may also go through some type of filtering to remove irrelevant details and emphasize important ones; for example, the hands are isolated from the rest of the body and the background. Several postures can be seen in the individual hand images [16]. A recognizer may be
trained against probable grammar as gestures are simply a collection of linked hand
positions. This means that, just as phrases develop from words, hand gestures may
be understood as arising from a range of compositional hand positions. Recognized
gestures may be used to run several programs [13] (Fig. 1).
The recognition of hand gestures may be split into two categories: vision-based and non-vision-based recognition, depending on the method used to gather data about hand movements (such as data gloves) [16]. Since the hand is a deformable
object, it cannot be accurately modeled by a single simple model. Furthermore,
environmental elements like brightness, color, and other aspects can easily influence
the tracking and recognition of human hands [17].
A data glove is a Virtual Reality (VR) tool with several applications and a lot of
sensors on it. Thanks to software mapping, the glove device is able to virtually “interact with the computer” and move, grip, and turn virtual objects. The most
recent version of the application has the ability to record individual finger bending
[18].
Real-time hand gestures are accurately transmitted to the computer through the
glove, which also provides the user with feedback from the virtual environment. It
offers a simple and common kind of human–computer interaction to the user [17].
In recent times, interest in research on the visual perception of hand movements has increased. Since tethered devices restrict the range of hand motion, vision-based recognition is much more comfortable, natural, and convenient for the user than recognition based on non-vision systems (electromagnetic waves, data gloves, etc.). Researchers have created a new type of color glove (also known as a color marker) based on electromagnetic waves and the data glove, as well as a non-contact optical sensor chip for hand motion detection [1].
The processing power of a computer has significantly increased during the last ten
years. This has made it possible to utilize a computer for HCI, enabling people
to input data naturally and adaptably. According to current research, hand gesture
identification using computer vision is essential for human–computer interaction
(HCI) [5].
A computer vision-based hand gesture detection system has four parts. The first stage collects picture data from one or more cameras and checks the incoming data stream to determine whether it contains hand motion data. As soon as the computer notices a hand motion, segmentation is used to establish the posture and remove the background. The segmented result is then used in the feature extraction phase, with classification as the process’s final objective. During the identification or classification phase, the system classifies the hand motions it has received according to the model’s parameters and produces hand gesture descriptions. Finally, the system controls the specific application in accordance with the description [2].
Image Segmentation: Dividing a digital image (or a physical image converted into digital form) into a multitude of image segments, also called image regions or image objects, is a technique used in digital image processing and computer vision. By changing and/or simplifying a picture’s representation, image segmentation tries to improve its relevance and understandability [19]. Image segmentation is an approach for finding and locating objects and boundaries (curves, lines, etc.) in pictures. Each pixel in an image is labeled during the segmentation process so that pixels with the same label share certain characteristics.
The set of segments produced by image segmentation covers the full picture, or yields a collection of contours extracted from it. Adjacent regions differ with respect to the same characteristic(s), such as color. When this is applied to a stack of images, as is common in medical imaging, the contours generated after segmentation of the picture may be used for generating 3D reconstructions using interpolation techniques like marching cubes [20].
Now, a variety of methods may be used to segment hand gestures. Based on the difference between the skin color of the hand and the surrounding environment, a skin color model can be built to achieve hand gesture segmentation. Although the model is unaffected by the hand gesture itself, it cannot exclude objects with a similar skin tone, such as human faces and other comparable objects [21]. To distinguish different hand motions against a static background, the frame-difference and background-difference methods of hand gesture segmentation employ information about hand gesture movement.
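As a rough illustration of the skin-color segmentation idea above (not the specific model used in [21]), the snippet below thresholds a frame in HSV space with OpenCV; the threshold values are assumptions and would need tuning for real lighting conditions.

```python
import cv2
import numpy as np

def skin_mask(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask of skin-colored pixels using a simple HSV threshold."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Assumed HSV bounds for skin tones; real systems tune or learn these values.
    lower = np.array([0, 40, 60], dtype=np.uint8)
    upper = np.array([25, 255, 255], dtype=np.uint8)
    mask = cv2.inRange(hsv, lower, upper)
    # Morphological opening removes small speckles such as background noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

if __name__ == "__main__":
    frame = cv2.imread("hand.jpg")  # any test image
    if frame is not None:
        cv2.imwrite("hand_mask.png", skin_mask(frame))
```

As the text notes, a mask like this will also pick up faces and other skin-toned objects, which is why model- or classifier-based refinement follows.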
Using the skin color model to segment hands, it is possible for objects with similar
hand hues, such as human faces, to interfere with the segmentation. After skin color
identification, hand gesture segmentation based on model attributes is used to address
the aforementioned problems. A classifier is trained to differentiate the hand region
from the non-hand area using these attributes after the hand motion characteristics
are retrieved from a large sample of hand motions [15].
In order to segment hand gestures, we take a lot of pictures using a camera. Depth
pictures and RGB images are the two categories that make up the image library.
A typical camera can take RGB photos, while depth cameras like Kinect and Leap
Motion can simultaneously take RGB and depth images [19].
The utilization of depth images can enable the capture of a portion of the infor-
mation of the space around, which helps with the classification and recognition of
gestures. Depending on whether a single image or a video is produced, gestures are
either static or dynamic [1].
There are issues with occlusion and variable light intensities and orientations
during the picture-gathering process, which increases the bar for the robustness of the
algorithm. With the advancement of gesture recognition’s practicality, an increasing
number of algorithms are focused on ensuring invariance of illumination and dealing
with occlusion issues [13] (Fig. 2).
Using a convolutional neural network to segment gestures: The segmentation of
motions using CNN uses Full Convolutional Neural Networks (FCN)-based convo-
lutional neural network optimization. Instead of CNN’s final layer, a deconvolution
layer is employed, and the picture is subsequently up-sampled to its original size
using pixel prediction [22]. In contrast to CNN, FCN takes images of any size as input, avoids the storage and computation overhead of repeated convolution, and does not require that all images be the same size [23]. However, FCN has a number of limitations: the output is not particularly clean and clear when the upsampling factor is large, fine detail is lost, and the relationships between individual pixels are not exploited well. Gesture segmentation may be accomplished using a variety of different techniques within the convolutional neural network-based segmentation strategy [21] (Fig. 3).
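The following minimal sketch, written in PyTorch purely for illustration (the survey does not prescribe a framework), shows the core FCN idea: convolutions shrink the feature map and a transposed-convolution layer upsamples it back to the input size to give a per-pixel hand/background prediction.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Toy fully convolutional network: downsample, then upsample back to input size."""
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # The transposed convolution replaces a fully connected final layer,
        # restoring the original spatial resolution (downsampled by 4 above).
        self.decoder = nn.ConvTranspose2d(32, num_classes, kernel_size=4, stride=4)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))  # (N, num_classes, H, W) per-pixel logits

if __name__ == "__main__":
    logits = TinyFCN()(torch.randn(1, 3, 64, 64))
    print(logits.shape)  # torch.Size([1, 2, 64, 64])
```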
Fig. 2 Segmentation of image
Using the Depth Threshold Method for Gesture Segmentation: Depending on how close an object or scenery is to the camera in the depth image, the depth threshold method determines how far away each pixel is from the camera [22]. Then, within a
predetermined range, it extracts the part of the picture lying within that depth interval. The depth range of the hands in the depth picture is determined, or the hand is treated as the object closest to the camera, in order to extract the hand region more accurately [24]. This technique improves the accuracy of gesture detection, resulting in a more precise hand area and a better pre-processing effect. It does, however, place limitations on the scenarios and distances at which recognition can be performed [25] (Fig. 4).
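A minimal sketch of the depth-threshold idea, assuming the hand is the object closest to the depth camera; the 150 mm band is an illustrative value, not one taken from the cited works.

```python
import numpy as np

def segment_hand_by_depth(depth_mm: np.ndarray, band_mm: float = 150.0) -> np.ndarray:
    """Keep pixels within a fixed depth band starting at the closest valid pixel.

    depth_mm: 2D array of per-pixel distances in millimetres (0 = invalid reading).
    Returns a boolean mask that is True where the hand is assumed to be.
    """
    valid = depth_mm > 0
    nearest = depth_mm[valid].min()  # the hand is assumed to be the closest object
    return valid & (depth_mm <= nearest + band_mm)

if __name__ == "__main__":
    # Fake depth frame standing in for a Kinect/Leap Motion capture.
    fake_depth = np.random.randint(400, 2000, size=(240, 320)).astype(np.float32)
    mask = segment_hand_by_depth(fake_depth)
    print("hand pixels:", int(mask.sum()))
```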
Extraction of Features: Image processing, pattern recognition, and feature extrac-
tion in machine learning begin with a basic collection of measured data. The learning
and generalization processes that follow are made easier by the derived values
(features) produced by this process, which can also, in certain situations, enhance
human interpretations. Dimension reduction and feature extraction are related ideas
[26].
When an algorithm’s input data is too large to analyze and appears redundant (for example, the same measurement expressed in both feet and meters, or images represented as raw pixels), it can be reduced to a more manageable collection of features (also called a feature vector). The feature selection procedure involves identifying a subset of the original characteristics [27].
It is possible to complete the needed job using this condensed representation rather
than the entire starting data since the chosen characteristics are made to include the
key information from the input.
Feature extraction lowers the number of resources required to describe a large amount of data. One of the biggest difficulties in analyzing complicated data is the enormous number of variables that need to be considered. With numerous variables, analysis demands a lot of processing resources and memory, and a classification algorithm may overfit the training examples and perform poorly on new samples [27]. Broadly, “feature extraction” covers methods for putting together variable combinations that represent the data accurately while avoiding these issues [26].
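As a hedged illustration of the dimensionality-reduction idea above (no specific method is prescribed in the text), the snippet below projects high-dimensional image feature vectors onto a few principal components with scikit-learn; the array shapes are invented for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

# Pretend each row is a flattened 32x32 grayscale hand image (1024 raw features).
rng = np.random.default_rng(0)
raw_features = rng.random((200, 1024))

# Keep only the components that capture most of the variance,
# shrinking each sample to a compact feature vector.
pca = PCA(n_components=20)
compact = pca.fit_transform(raw_features)

print(raw_features.shape, "->", compact.shape)  # (200, 1024) -> (200, 20)
print(f"variance explained: {pca.explained_variance_ratio_.sum():.3f}")
```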
Recognition of Gestures: Hand gesture recognition is the system’s final degree
of recognition. Following preparation, analysis, and modeling of the given input picture, the selected algorithm starts to recognize and understand the gesture [28]. The recognition method is influenced by the feature extraction technique and the classification algorithm. Statistical methods are frequently used to categorize gestures, and neural networks have been used extensively for the extraction and identification of hand movements. The system must be trained with a sufficient amount of data to properly categorize a new feature vector before the recognition stage [29].
Gesture recognition can be static or dynamic. Dynamic gestures are changes in hand motion that happen over time, i.e., several successive static gestures, while static gestures are captured in a single frame [30]. Gesture recognition images come in three forms: a depth map, an RGB map, and an RGB-D map. The depth map displays the distance between the camera and the object in real time and is represented as a gray-scale picture [31].
Each pixel in the depth map represents the distance between the camera and the object, while the RGB-D image is made up of a three-channel RGB image together with a depth image. The pixels of the two images correspond one to one even though they appear distinct [32]. In recent years,
deep learning artificial neural networks (ANN, CNN, RNN, GAN), Dynamic Time Warping (DTW), and the Hidden Markov Model (HMM) have been used in the majority of gesture detection methods. The HMM and DTW algorithms were originally developed for voice recognition. The DTW technique uses the dynamic programming (DP) idea to deal with varying pronunciation lengths. Unlike HMM and convolutional neural networks, the DTW technique does not need a lot of training data [33].
The method is quick and simple; the objective is to find the best matching sequence and alignment path, i.e., the path with the least total cost. The Hidden Markov Model searches for hidden state sequences underlying the observed sequences in order to decipher the message conveyed by gestures. Convolutional neural networks were initially employed to classify pictures [31] (Fig. 5).
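The DTW idea summarized above can be written as a short dynamic-programming routine. The sketch below compares two 1-D feature sequences (for example, one coordinate of a hand trajectory over time) and is intentionally simplified relative to production implementations.

```python
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Classic O(len(a)*len(b)) dynamic time warping distance between two sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # Best of: match, insertion, deletion (the three allowed warping moves).
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

if __name__ == "__main__":
    slow = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0])   # same gesture performed slowly
    fast = np.array([0.0, 2.0, 3.0, 1.0])              # same gesture performed quickly
    print("DTW distance:", dtw_distance(slow, fast))
```

Because the alignment absorbs differences in speed, the same gesture performed at different tempos still yields a small distance, which is exactly the property that makes DTW attractive without large training sets.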
Recently, the science of computer vision has seen a surge in research on computer
vision-based gesture detection. Using the Hidden Markov Model (HMM), Grobel and Assan were able to recognize 262 individual gestures in video with an accuracy of 94% [34]. To identify depth gesture images in video,
Reyes and Dominguez suggested DTW gesture recognition [35]. SimoSerra et al.
recognized gestures by imposing physical restrictions on the positions of hand joints
[36]. Rina et al. suggested a Matrix Completion-based method for massively parallel
real-time gesture position estimation in 2016 [37].
• Different gestures can appear similar, yet the same gesture might vary between instances [10].
• Hand motion and movement have many degrees of freedom in the hand-activity space. It is very hard for many existing algorithms to compute every degree of freedom of motion accurately, and computing several degrees of freedom takes a lot of time, making real-time recognition more challenging [9].
• Viewing angles and light intensity vary, so rotation invariance and illumination invariance are challenging to achieve in the gesture recognition procedure. Compared with other approaches, the deep learning neural network-based method is slower but more accurate and data-dependent; it requires a significant volume of labeled data and substantial computing power, so it may not satisfy real-time needs. The DTW approach is quicker than the HMM method, but its precision and model robustness are not as good as those of the neural network [5].
8 Conclusion
References
1. Smith, J. D., & Johnson, A. B. (2022). Hand gesture recognition using computer vision
techniques. International Journal of Computer Science, 10(2), 123–145.
2. Li, Y., & Ogunbona, P. (2012). Hand gesture recognition: A survey. International Journal of
Pattern Recognition and Artificial Intelligence, 26(7), 1–27.
3. Mitra, S., & Acharya, T. (2007). Gesture recognition: a survey. IEEE Transactions on Systems,
Man, and Cybernetics, Part C (Applications and Reviews), 37(3), 311–324.
4. Gross, R., Shi, J., & Wittenbrink, C. (2001). Human-computer interaction using hand gestures
with a glove-based system. ACM Transactions on Computer-Human Interaction (TOCHI),
8(2), 107–132.
5. Starner, T., Weaver, J., & Pentland, A. (1998). Real-time American sign language recognition
using desk and wearable computer-based video. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 20(12), 1371–1375.
6. Wang, X., Jiang, J., Wei, Y., Kang, L., & Gao, Y. (2018). Research on gesture recognition method
based on computer vision. In MATEC Web of Conferences (Vol. 232, p. 03042). EITCE 2018
7. Sun, J. H., Yang, J. K., Ji, T. T., Ji, G. R., & Zhang, S. B. Research on the hand gesture
recognition based on deep learning.
8. Pavlovic, V. (1999). Dynamic Bayesian networks for information fusion with applications to human–computer interfaces. Ph.D. dissertation, University of Illinois at Urbana-Champaign
9. Je, H. M., Kim, J., & Kim, D. (2007). Hand gesture recognition to understand musical
conducting action, pp. 163–168. https://doi.org/10.1109/ROMAN.2007.4415073
10. Jacob, M. G., Wachs, J. P., & Packer, R. A. (2013) Hand-gesture-based sterile interface for the
operating room using contextual cues for the navigation of radiological images. Journal of the
American Medical Informatics Association, 20(e1), e183–6. https://doi.org/10.1136/amiajnl-
2012-001212. Epub 2012 Dec 18. PMID: 23250787; PMCID: PMC3715344.
11. Murthy, G. R. S., & Jadon, R. S. (2011). Computer vision based human computer interaction.
Journal of Artificial Intelligence, 4, 245–256. https://doi.org/10.3923/jai.2011.245.256
12. Prattichizzo, D., & Malvezzi, M. (2016). Understanding the human hand for robotic
manipulation. IEEE Transactions on Haptics, 9(4), 531–549.
13. Balasubramanian, R., & Schwartz, A. B. (2012). The cortical control of movement revisited.
Neuron, 74(3), 425–442.
14. Argall, B. D., & Billard, A. (2009). A survey of tactile human-robot interactions. Robotics and
Autonomous Systems, 57(3), 271–289.
15. Wang, J., Plankers, R., & van der Stappen, A. F. (2009). A survey on the computation of
approximate hand postures. Computer Graphics Forum, 28(2), 365–381.
16. Bhuyan, M. K., Bhuyan, M. K., & Gogoi, A. (2017). A review on hand gesture recognition
techniques, challenges, and applications. International Journal of Signal Processing, Image
Processing and Pattern Recognition, 10(2), 175–190.
17. Cauchi, A., Adami, A., & Sapienza, M. (2018). A review on vision-based hand gesture
recognition. Image and Vision Computing, 73, 1–16.
18. Razali, N. M., Elamvazuthi, I., & Seng, K. P. (2015). A review on data glove and vision-based
hand gesture recognition systems for human–computer interaction. Journal of Computational
Methods in Sciences and Engineering, 15(1), 29–40.
19. Shinde, V., Bacchav, T., Pawar, J., & Sanap, M. (2014). Hand gesture recognition system using
camera. International Journal of Engineering Research & Technology (IJERT), 3(1). ISSN:
2278–0181.
20. Premaratne, P., Yang, S., & Vial, P. Hand gesture recognition: An overview. ResearchGate
21. Yasen, M., & Jusoh, S. A systematic review on hand gesture recognition techniques, challenges
and applications. ResearchGate
22. Khan, R. Z., & Ibraheem, N. A. Comparative study of hand gesture recognition system.
In Proceedings of International Conference of Advanced Computer Science & Information
Technology in Computer Science & Information Technology (CS & IT)
23. Ciregan, D., Meier, U., & Schmidhuber, J. (2012). Multi-column deep neural networks for
image classification. In Proceedings of the 2012 IEEE Conference on Computer Vision and
Pattern Recognition (CVPR) (pp. 3642–3649).
24. Rahmati, M., & Moeslund, T. B. (2016). Hand gesture recognition using depth data: A survey.
Image and Vision Computing, 55, 80–116.
25. Keskin, C., Kıraç, F., Kara, Y. E., & Akarun, L. (2012). Real time hand pose estimation using
depth sensors. In Proceedings of the 21st International Conference on Pattern Recognition
(ICPR) (pp. 1965–1968).
26. Yang, Q., Li, S., Zhou, X., & Zhou, J. (2017). Hand gesture recognition based on feature
extraction using leap motion controller. In 2017 International Conference on Robotics and
Automation Sciences (ICRAS) (pp. 400–404).
27. Samad, M. A., Sulaiman, N. H., & Zakaria, M. N. (2018). Feature extraction for dynamic hand
gesture recognition: A review. IEEE Access, 6, 28853–28868.
28. Islam, M. Z., Hossain, M. S., Ul Islam, R., & Andersson, K. Static hand gesture recognition
using convolutional neural network with data augmentation. IEEE
29. Tang, X., & Luo, J. (2018). A review of hand gesture recognition techniques. Artificial
Intelligence Review, 49(1), 1–44.
30. Fang, Y., Wang, K., Cheng, J., & Lu, H. A real-time hand gesture recognition method. IEEE
31. Ramamoorthy, A., Vaswani, N., Chaudhury, S., & Banerjee, S. Recognition of dynamic hand
gestures. Schloss Dagstuhl
32. Garg, P., Aggarwal, N., & Sofat, S. Vision based hand gesture recognition. Academia
33. Ceolini, E., Frenkel, C., Shrestha, S. B., Taverni, G., Khacef, L., Payvand, M., & Donati, E.
Hand-gesture recognition based on EMG and event-based camera sensor fusion: A benchmark
in neuromorphic computing. Frontiers in Neuroscience.
34. Assan, M., & Grobel, K. (1998). Video-based sign language recognition using hidden markov
models. In Gesture and Sign Language in Human-Computer Interaction, pp. 97–109. Springer.
35. Reyes, M., Dominguez, G., & Escalera, S. (2011). Feature weighting in dynamic time warping
for gesture recognition in depth data. In Proceedings of the IEEE International Conference on
Computer Vision (pp. 1182–1188). https://doi.org/10.1109/ICCVW.2011.6130384
36. Simo-Serra, E., et al. (2015). Discriminative learning of deep convolutional feature point
descriptors. In Proceedings of the IEEE International Conference on Computer Vision.
37. Damdoo, R., Kalyani, K., & Sanghavi, J. Adaptive hand gesture recognition system using
machine learning approach. BBRC
Application of Big Data
in Banking—A Predictive Analysis
on Bank Loans
Abstract Loans make up a significant portion of bank profits. Despite the fact that
many people are looking to get loans, finding a trustworthy applicant who will return
the loan is challenging. Choosing a genuine applicant may be difficult if the procedure is done manually. As a result, it is important to develop a machine learning-centred
loan prediction system that will choose suitable individuals on its own. Both the
applicant and the bank staff will benefit from this. The loan sanctioning period will
be significantly shortened. In this exploration, we use the Decision Tree machine
learning technique to predict the loan data.
1 Introduction
The loan disbursement system is the backbone of a bank’s operations, and the money made from loans forms the bulk of a bank’s profits. Even when the bank approves a loan after a lengthy validation and authentication process, there is no guarantee that the applicant will repay the loan [1, 2]. It should also be mentioned that if this process is done manually, the bank requires additional time. We are able to predict whether a particular individual will secure the loan or not, and the entire verification procedure is automated using machine learning. For potential borrowers as well as banks, a loan forecast is quite valuable [3]. This paper focuses on predicting whether a client qualifies for a loan based on customer information taken from a dataset.
1.1 Methodology
The suggested model emphasises forecasting the eligibility of a borrower for a loan by analysing their behaviour (information). The input to our model is the collection of borrower behaviours, and the classifier’s output decides whether the borrower’s request will be granted [4]. A regression model is used to solve this problem. Figure 1 depicts a detailed view of the method used.
The first step is the collection of data from customers. The second step involves
data filtering which consists of the removal of missing values in the dataset. The
subsequent step involves calculating the importance of attributes; this step is vital since it increases the efficacy of the model and hence its accuracy. In the penultimate step, the machine learning model was trained and tested with the default parameters. In the final step, result analysis is done.
A logistic regression approach is used to classify customers. Regression analysis
is a statistical process which involves assessing relationships between variables. It
includes approaches for modelling and analysing several variables. The main goal is
to establish an association between one or more independent variables and one depen-
dent variable [4–6]. Regression analysis, more precisely, aids in understanding how
one independent variable’s variation affects the usual value of the dependent variable
while the other independent variables are held constant [7–10]. Linear regression fits
a linear equation to the observed data in order to establish a relationship between
two variables [11, 12].
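A minimal sketch of the classification step described above, using scikit-learn’s logistic regression; the file name, column names, and preprocessing are assumptions that mirror a typical loan-prediction dataset rather than the exact pipeline used in this study.

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical CSV and columns standing in for the dataset described in Sect. 2.
df = pd.read_csv("loan_train.csv")
df = df.dropna()                                        # step 2: remove missing values
X = pd.get_dummies(df.drop(columns=["Loan_Status"]))    # one-hot encode categoricals
y = (df["Loan_Status"] == "Y").astype(int)              # 1 = loan approved

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_test, model.predict(X_test)))
```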
2 Dataset Description
The loan prediction dataset is drawn from the Kaggle competition and represents
diverse applicant age groups and genders [13–15]. There are 13 characteristics in
the dataset, which includes assets, income, marital status, education, and more.
2.1 Results
This section deals with the results analysis. Figures 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 and 12 show scatter plots and different histograms. The test dataset gave an 81% validation accuracy using random forests, as shown in Fig. 13. Figure 14
shows the Histogram of Frequency versus TotalIncome. Figure 15 deals with the
Histogram of Frequency versus LoanAmount, and Fig. 16 depicts the Histogram of
Frequency versus Log(LoanAmount).
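The derived quantities behind the histograms (total income and the log-scaled loan amount) and the random-forest validation can be sketched as below; the column names are assumed from a typical Kaggle loan dataset, and the printed scores will not necessarily reproduce the 81% reported above.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("loan_train.csv").dropna()

# Derived features behind the histograms: combined income and a log-scaled loan amount.
df["TotalIncome"] = df["ApplicantIncome"] + df["CoapplicantIncome"]
df["LogLoanAmount"] = np.log(df["LoanAmount"])

features = ["TotalIncome", "LogLoanAmount", "Credit_History"]
X = df[features]
y = (df["Loan_Status"] == "Y").astype(int)

forest = RandomForestClassifier(n_estimators=200, random_state=42)
scores = cross_val_score(forest, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.3f}")

forest.fit(X, y)
for name, importance in zip(features, forest.feature_importances_):
    print(f"{name}: {importance:.3f}")  # feature importances, as tabulated in Fig. 13
```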
Fig. 4 Histogram of TotalIncome1
Fig. 9 Histogram of frequency versus LoanAmount
Fig. 10 Histogram of frequency versus log(LoanAmount)
Fig. 11 Histogram frequency versus testing_loan(LoanAmount)
Fig. 13 Table showing accuracy of loan prediction and importance of various features
Fig. 14 Histogram of frequency versus TotalIncome
Fig. 15 Histogram of frequency versus LoanAmount
Fig. 16 Histogram of frequency versus log(LoanAmount)
3 Conclusion
References
1. de Sa, H. R., & Prudencio, R. B. C. (2011). Supervised link prediction in weighted networks.
In Proceedings of International Joint Conference on Neural Networks, San Jose, California,
USA
2. Goyal, A., & Kaur, R. (2016). Loan prediction using ensemble technique. International Journal
of Advanced Research in Computer and Communication Engineering, 5(3)
3. Jagannatha Reddy, M. V., & Kavitha, B. (2010). Extracting prediction rules for loan default
using neural networks through attribute relevance analysis. International Journal of Computer
Theory and Engineering, 2(4), 596–601.
4. Sivasree, M. S., & Sunny, T. R. (2015). Loan credibility prediction system based on decision
tree algorithm. International Journal of Engineering Research & Technology (IJERT), 4(9).
ISSN: 2278-0181 IJERTV4IS090708
5. Desai, D. B., & Kulkarni, R. V. (2013). A review: Application of data mining tools in CRM
for selected banks. International Journal of Computer Science and Information Technologies
(IJCSIT), 4(2), 199–201.
6. Gupta, A., Pant, V., Kumar, S., & Bansal, P. K. (2020). Bank loan prediction system using
machine learning. In International Conference on System Modeling & Advancement in
Research Trends.
7. Arutjothi, G., & Senthamarai, C. (2017). Prediction of loan status in commercial bank using
machine learning classifier. In International Conference on Intelligent Sustainable Systems
(ICISS).
8. Arutjothi, G., & Senthamarai, C. (2017). Comparison of feature selection methods for credit
risk assessment. International Journal of Computer Science, 5(5).
9. Byanjankar, A., Heikkilä, M., & Mezei, M. (2015). Predicting credit risk in peer-to-peer
lending: A neural network approach. In IEEE Symposium Series on Computational Intelligence.
IEEE.
10. Devi, C. R. D., & Chezian, R. M. (2016). A relative evaluation of the performance of ensemble
learning in credit scoring. In IEEE International Conference on Advances in Computer
Applications (ICACA). IEEE.
11. Sudhamathy, G., & Venkateswaran, C. J. (2016). Analytics using R for predicting credit
defaulters. In IEEE International Conference on Advances in Computer Applications (ICACA).
IEEE.
12. Sudhakar, M., & Reddy, C. V. K. (2016). Two step credit risk assessment model for retail bank
loan applications using decision tree data mining technique. International Journal of Advanced
Research in Computer Engineering & Technology (IJARCET), 5(3), 705–718.
13. Aboobyda, J. H., & Tarig, M. A. (2016). Developing prediction model of loan risk in banks
using data mining. Machine Learning and Applications: An International Journal (MLAIJ),
3(1), 1–9.
14. Somayyeh, Z., & Abdolkarim, M. (2015). Natural customer ranking of banks in terms of credit
risk by using data mining a case study: Branches of Mellat Bank of Iran. Journal of UMP
Social Sciences and Technology Management, 3(2), 307–316.
15. Harris, T. (2013). Quantitative credit risk assessment using support vector machines: Broad
versus Narrow default definitions. Expert Systems with Applications, 40, 4404–4413.
An Image Enhancement Algorithm
for Autonomous Underwater Vehicles:
A Novel Approach
M. Huda (B)
College of Computing and Informatics, Saudi Electronic University, Riyadh, Saudi Arabia
e-mail: [email protected]
K. Rohit
University Institute of Computing, Chandigarh University, Mohali, Punjab, India
e-mail: [email protected]
B. Sarkar
Department of Computer Science and Engineering, JIS College of Engineering, Kalyani, India
e-mail: [email protected]
S. Pal
Department of Computer Science and Engineering, Sister Nivedita University, Kolkata, India
e-mail: [email protected]
1 Introduction
When capturing underwater images, the light that reaches the scene can suffer degradation and loss of certain wavelengths due to scattering. While scattering is caused by particles suspended in the water, the light travelling from the camera can likewise lose parts of its spectrum as it travels deeper underwater [1, 2]. Red light is absorbed first, whereas blue light can travel much further. Because of these factors, images taken underwater may be of inferior quality, which can also limit the usefulness of underwater cameras. Backscattering, light absorption, and forward scattering are some of the factors that can affect the quality of images captured underwater. To improve this quality, researchers have been developing various techniques in which images and videos are sent to a base computer. This is done through an umbilical cable, an electrical link that carries both power and signal [1]. When the base computer receives an image or video, it then passes the data on to the next stage. However, this process can be very time-consuming and limits the usefulness of the UUV. In this paper, we introduce a method built around an electronic assembly that can perform various tasks, such as enhancing the quality of images captured underwater. Besides improving image quality, this assembly can also be used to collect other data such as temperature and depth. To perform these tasks, a device known as the Raspberry Pi 3B is proposed. The device, which is a miniature computer, is powered by a 1.2 GHz processor and 1 GB of RAM. It runs the Linux-based operating system Raspbian; other third-party distributions, such as Ubuntu MATE, can also be used to run the device [3, 4]. The device has four USB ports, which can be used to connect various peripherals such as sensors and cameras, and a set of general-purpose pins that can also be used to connect sensors. The camera attached to the device is used to take images and videos. The device processes the captured images and, based on the data collected, can control the propellers. It also comes with a pair of built-in wireless modules, namely a Wi-Fi module and a Bluetooth module, which allow it to send enhanced videos to the base computer. The remainder of the paper is arranged as follows. The second section discusses the various techniques involved in underwater image enhancement, i.e., the literature survey. The third section covers the proposed algorithm and its flowchart, the fourth section discusses the implementation details regarding the hardware, sensors, and Raspberry Pi, the fifth section discusses the experimentation results, and the final section concludes the paper [5].
at larger ones. The output value of a pixel is computed by considering its rank among the neighbouring pixels. This technique can be used to compare the centre pixel with the other nearby pixels. Normalised output values can be computed by adding 2 for every pixel with a smaller value than the centre pixel and adding 1 for every pixel with an equal value. When the image region containing a single pixel's neighbourhood is largely homogeneous, its histogram will be strongly peaked, and its transformation function will map the region's pixel values to the whole range. As a result, AHE can over-amplify small amounts of noise in the image [8]. The resulting image is then converted back to the RGB colour space, and the output is a high-resolution enhanced image. Colour-corrected and contrast-enhanced output images can be produced, as is visible in the final result image.
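The following OpenCV sketch illustrates this kind of adaptive, contrast-limited histogram equalization applied to the lightness channel and then converted back to the RGB colour space, as described above; the clip limit, tile size, and file names are illustrative assumptions rather than the paper's settings.

```python
import cv2

bgr = cv2.imread("underwater.jpg")                      # hypothetical input image
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)              # work in a luminance/chrominance space
l, a, b = cv2.split(lab)

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
l_eq = clahe.apply(l)                                   # equalize only the lightness channel

enhanced = cv2.cvtColor(cv2.merge((l_eq, a, b)), cv2.COLOR_LAB2BGR)
cv2.imwrite("underwater_clahe.jpg", enhanced)           # colour-corrected, contrast-enhanced output
```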
The Compute Module is a small form-factor device that can be used in industrial applications. It comprises the following parts: the BCM2835, 512 MB of DDR3 memory, and 4 GB of eMMC flash storage. The device can be connected to a baseboard using a 200-pin DDR2-style parallel connector; it should be noted that this is not compatible with standard SODIMMs. The device's various features can be accessed through the dual-channel SODIMM connectors, whereas the B/B+ and A/B boards only have one of these. The Compute Module is commonly used by organisations to rapidly develop new products, as it gives them a complete package that includes a processor, memory, and storage [5]. This eliminates the need for additional peripherals and allows them to focus on the development of their new product.
4 BCM2835
the power bank. Figure 2 shows the proposed circuit diagram of the device. It demonstrates how our proposed module can be used as a working module for the UUV. The Raspberry Pi can be used to control different motors and devices using visual inputs; for example, it can be used to detect objects underwater. This capability was demonstrated by using the device's 40 pins to control the direction and speed of the motors as the UUV moves. These Raspberry Pi features can be used to detect and move toward target objects. We used bilge pumps to control the UUV during this demonstration. The device was able to achieve its goal by controlling the speed and direction of the propellers, which are made of steel (Fig. 3).
Fig. 3 Practical implementation of the underwater vehicle
The pump, which is powered by 12 V, is rated to move 1100 gallons of water. The specifications of the propellers are as follows: blade diameter: 31.2 mm; opening: 2 mm; compatible with 2 mm shaft motors. Four motor-controller modules, namely the XY-15AS, were used to control the speed of the propellers. These modules can carry a current of up to 15 A. After powering the propellers with 12 V Li-ion batteries, the setup was finally ready [5]. The motors were first tested at the surface, and the speed and direction of the propellers were monitored and controlled by the Raspberry Pi.
5 Experimentation Results
Unlike general image quality assessment, underwater images cannot provide a true, colour-neutral picture of the target scene. Because of the absence of reference standards for underwater images, a wide range of subjective and objective evaluation methods and procedures are used to evaluate and analyse them. The image shown in Fig. 5 is the original underwater image to be processed. The methods used to enhance the image are the CLAHE algorithm and the homomorphic filtering algorithm, while the last figure shows the proposed algorithm. The size of all underwater images is 450*338. The CLAHE algorithm in Fig. 1 can improve the image's dynamic range and highlight details, but it cannot remove uneven illumination. The homomorphic filtering process in Fig. 1 can improve the image's colour cast by reducing the number of details and improving its brightness; however, it cannot significantly improve the contrast. The results of the study in Fig. 1 show that the proposed method can improve image quality in turbid water by reducing the noise points in the image. It can also highlight the water bodies and the distant reef in the original image. The natural state of the light and shadow in the image also improves its clarity, which can help highlight the details of marine life (Fig. 4).
The table below shows the features of the image related to the peak signal-to-noise ratio (PSNR), mean squared error (MSE), and information entropy. These are also computed after the dark channel prior (DCP) enhancement and homomorphic filtering procedures as well as the proposed method. The smaller the MSE after image processing, the better the processing effect; conversely, the higher the PSNR, the better the processing effect. The larger the information entropy, the richer the information the image contains. The results of this study show that the DCP algorithm has the best performance with regard to these image-processing metrics: it has a larger PSNR value and the smallest MSE. However, although the DCP algorithm can handle the majority of image-processing tasks, it cannot properly deal with the problems of uneven illumination and colour cast in underwater images. The performance of the homomorphic filtering algorithm is reflected in its different PSNR and MSE values; nevertheless, it can still perform better than other methods when it comes to dealing with the colour cast and uneven illumination in underwater images. The performance of the proposed algorithm is also higher compared with that of the homomorphic filtering method and the DCP method: it has higher objective evaluation indexes, and its information entropy is also higher. The difference between the proposed method and the conventional methods can be attributed to the better recovery of the inner texture and outer contours of the image. When both the objective performance and the subjective results are considered, the proposed algorithm is clearly better than the previous two methods.
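For reference, the three objective metrics discussed above can be computed for 8-bit grayscale images as in the following NumPy sketch; this follows the standard definitions of MSE, PSNR, and information entropy and is not code released with the paper.

```python
import numpy as np

def mse(ref, img):
    """Mean squared error between a reference and a processed image."""
    return np.mean((ref.astype(np.float64) - img.astype(np.float64)) ** 2)

def psnr(ref, img, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    m = mse(ref, img)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

def entropy(img):
    """Shannon information entropy of the grayscale histogram, in bits."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```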
6 Quantitative Evaluation
Figure 2 shows the different steps involved in processing an underwater image. Figure 2a shows the original underwater image with resolution 367*305, Fig. 2b shows the result of the DCP algorithm used to enhance the image, Fig. 2c shows the result of the homomorphic filtering algorithm, and Fig. 2d depicts the result after processing with the proposed algorithm. The turbid water body around the fish is typically blue in colour, making it hard to see its details. After processing with the proposed algorithm, the shape, appearance, and texture of the fish can be clearly seen, and the subjective appearance of the image is more natural. The changes in the water caused by the light can be observed, and the contrast between the objects in the water and the fish is restored. Table 1 reports the PSNR, MSE, and information entropy of the original underwater image in Fig. 5 after DCP enhancement, homomorphic filtering enhancement, and the algorithm proposed in this paper. The results again show that the method proposed in this paper is significantly better than the results obtained by the previous two treatments.
7 Conclusion
Table 1 Quantitative evaluation by different algorithms for the original image in Fig. 5

Algorithm       PSNR     MSE       Entropy
DCP             30.654   57.435    7.634
HF              26.866   155.555   6.986
Proposed ALG    29.543   69.433    7.543
motor control operation. The paper’s image enhancement system is designed to take
advantage of the multiple sensors and image inputs received by the UUV. It can
then enhance the image and control the vehicle’s propellers using a common Wi-Fi
signal. Although the paper’s image enhancement system is capable of improving
the image quality of the UUV, it is still in need of more improvements due to its
limited image transmission range. Recently, a new model of the popular Raspberry
Pi was released with 8 GB of RAM. This will allow the system to improve its image
processing speed. Intel’s Movidius vision processing unit can also be utilized to boost
the system’s performance.
References
1. Wu, X. J., & Li, H. S. (2013). A simple and comprehensive model for underwater image
restoration. In 2013 IEEE International conference on information and automation, ICIA 2013
(pp. 699–704). https://doi.org/10.1109/ICInfA.2013.6720385
2. Panetta, K., Gao, C., & Agaian, S. (2015). Human-visual-system-inspired underwater image quality measures. IEEE Journal of Oceanic Engineering, 41(3), 541–551.
3. Lu, H., Serikawa, S. (2014). A novel underwater scene reconstruction method. In Proceedings–
2014 International Symposium on Computer, Consumer and Control, IS3C 2014 (pp. 773–775)
4. Galdran, A., Pardo, D., Picón, A., Alvarez-Gila, A. (2015). Automatic red-channel underwater
image restoration. Journal of Visual Communication and Image Representation 26, 132–145.
https://doi.org/10.1016/j.jvcir.2014.11.006
5. Perez, J., Sanz, P. J., Bryson, M., Williams, S. B. (2017). A benchmarking study on single image
dehazing techniques for underwater autonomous vehicles. In OCEANS 2017
6. Boudhane, M., Balcers, O. (2019). Underwater image enhancement method using color channel
regularization and histogram distribution for underwater vehicles AUVs and ROVs. International
Journal of Circuits, Systems and Signal Processing 13, 570–578
7. Voronin, V., Semenishchev, E., Tokareva, S., Zelenskiy, A., Agaian, S. (2019). Underwater image
enhancement algorithm based on logarithmic transform.
8. Xu J, Bi, P., Du, X., Li, J. (2019). Robust PCANet on target recognition via the UUV optical
vision system. Optik 181, 588–597
Proposing a Model to Enhance
the IoMT-Based EHR Storage System
Security
Abstract The Internet of Medical Things (IoMT) and Electronic Health Records
(EHR) are core aspects of today’s healthcare facilities; hence, these technologies and storage platforms should be designed with built-in safeguards for security and privacy for the welfare of individual human beings. The utmost feasible precautions need
to be taken by healthcare organizations with regard to user consent, verifiability,
scalability, and authentication protocols, aside from prospective vulnerability intru-
sions. Especially considering the explosive rise of modern health facilities, fraud-
sters are consistently searching for means to access healthcare information sources
as their prime targets. Data gleaned from healthcare systems is highly valuable on the black market. Blockchain technology is recognized as a much more alluring way to facilitate information sharing across the entire healthcare distribution network without endangering data confidentiality and integrity. The purpose of
this research is to strengthen the IoMT-based EHR storage system security utilizing
Hyperledger Fabric infrastructure. The proposed model leverages the usage of Hyper-
ledger Fabric’s immutability and data protection characteristics to guarantee the confidentiality and integrity of EHRs while also ensuring secure data exchange and identity management for authorized individuals. Hyperledger Fabric strategies must be integrated with edge computing and cloud platforms to further enhance their value-added
attributes. When our proposed model is implemented in the near future, the key components that can be employed to minimize the exploitation of healthcare data storage will be identified.
1 Introduction
about analyzing information more rapidly and in bulk near the point of generation,
providing action-driven responses in real time. When edge route gateways and edge
network nodes are reasonably linked, latency can be minimized and the processing
time for massive volumes of datasets can be boosted [3].
Cloud storage is a concept in which a cloud computing service, accessible via the public or private Internet, allows us to store files and other information on the Internet.
Hyperledger Fabric, a consortium blockchain platform, employs Byzantine-fault-tolerant (BFT) consensus procedures to ensure a robust and secure exchange of patient records across a network of potentially adversarial participants. The BFT approach can tolerate node failures by lessening the impact of vulnerable nodes. Hyperledger Fabric compares favourably with current blockchain strategies in terms of effectiveness [4, 5]. Peer nodes and ordering nodes are the primary kinds of nodes in such a network. Peer nodes are responsible for endorsing and validating transactions, while ordering nodes generate and order the historical record of events inside the network. Several transactions can be handled at once without loss of accuracy. Data protection is supported by private channels, which are predefined message pathways, so that no one can access the data without authorization. Chaincode allows for the addition, modification, and transmission of data.
The incorporation of blockchain technology, more specifically Hyperledger Fabric, to optimize EHR preservation in the realm of the Internet of Medical Things (IoMT) constitutes one of the most feasible approaches. In addition to safeguarding patient confidentiality and record integrity, this infrastructure can establish a decentralized, secure, and unmodifiable platform for the storage and distribution of EHRs.
2 Problem Statement
3 Related Work
Fabric that allows for the safe P2P transmission of private medical information while
upholding their confidentiality, legitimacy, transparency, and dependability [25].
Researchers offered a safe, patient-centric blockchain-based approach to manage
who may access health information. The cloud and mobile devices were employed
to capture EHR data using IoMT sensors [26].
The desired structure for sharing health data helps to enhance current data
management tactics. The outcomes of this research demonstrated that it is viable to
use Hyperledger Fabric to facilitate interoperability while bolstering security controls
in the preservation of patient records [27].
Their system, which was based on the Ethereum platform, would provide secure
medical information access for each individual as well as for all healthcare facil-
itators. Consequently, it includes both a web and mobile application. The foun-
dational architecture on which the Ethereum node is built should provide a trust-
worthy, scalable, and secure method given its limited capabilities and low electricity
consumption [28]. The authors used an Ethereum platform to construct a patient-
centric smart contract in order to solve the issues of data exchange, confidentiality,
and dependability involved with administrating EHRs [29].
Al Mamun et al. [30] concluded, after thoroughly analyzing the pertinent literature, that Ethereum (private) and Hyperledger Fabric are the most widely used current systems for EHR administration, since both almost fully satisfy all necessary requirements [30]. Most cloud applications are also involved in EHR applications [31–35] and can provide assistance in this context.
4 Methodology
Edge Computing: Identify the devices and sensors that will be used to collect
health data. Set up edge devices and gateways to collect and process the data.
Configure security measures for edge devices and gateways, such as authentication,
authorization, and encryption.
Data Encryption: After the data is processed, it is encrypted using secure encryp-
tion algorithms to assure that the information is secure during transmission and
storage.
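A minimal sketch of this encryption step, assuming Python's cryptography package and an illustrative IoMT record, might look as follows; key management is deliberately simplified and would be handled by a secure key store in practice.

```python
import json
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in practice the key would come from a secure key store
cipher = Fernet(key)

record = {"patient_id": "P-001", "heart_rate": 76, "spo2": 97}   # hypothetical IoMT reading
token = cipher.encrypt(json.dumps(record).encode("utf-8"))       # ciphertext for transmission/storage

restored = json.loads(cipher.decrypt(token).decode("utf-8"))     # authorized decryption
assert restored == record
```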
Smart Contract: Define the smart contract logic that will be used to manage
user access control and data sharing. Develop and deploy the smart contract on the
blockchain network. Configure the smart contract to interact with the EHR storage
system and enforce access control policies.
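As an illustration only (plain Python rather than actual Fabric chaincode, which is normally written in Go or Node.js), the access-control check such a smart contract could enforce might resemble the sketch below; the roles, record fields, and in-memory ledger are assumptions made for the example.

```python
# Illustrative access-control policy, not actual chaincode; the ledger is an in-memory stand-in.
ledger = {"ehr:P-001": {"owner": "P-001", "data": "<encrypted blob>",
                        "allowed_roles": {"doctor", "patient"}}}

def read_record(record_id, caller_id, caller_role):
    rec = ledger.get(record_id)
    if rec is None:
        raise KeyError("record not found")
    # Grant access only to the record owner or to an explicitly allowed role
    if caller_id != rec["owner"] and caller_role not in rec["allowed_roles"]:
        raise PermissionError("access denied by policy")
    return rec["data"]

print(read_record("ehr:P-001", caller_id="D-042", caller_role="doctor"))
```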
Hyperledger Fabric Framework: Set up the Hyperledger Fabric network and
nodes. Configure the network to support EHR storage and access control. Deploy
the smart contract on the Hyperledger Fabric network.
Hyperledger Caliper: Use Hyperledger Caliper to test the performance and scal-
ability of the Hyperledger Fabric network. Configure the test scenarios to simulate
realistic workloads and user interactions.
HL7: Configure the EHR storage system to support HL7 data exchange standards.
Develop and test HL7 interfaces to enable seamless data exchange between
systems.
FHIR: Configure the EHR storage system to support FHIR data exchange stan-
dards. Develop and test FHIR interfaces to enable seamless data exchange between
systems.
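For concreteness, a FHIR resource exchanged by such an interface could look like the following hedged example of an Observation (here a heart-rate reading) expressed as a Python dictionary ready for JSON exchange; the identifiers and values are illustrative.

```python
import json

observation = {
    "resourceType": "Observation",
    "status": "final",
    "code": {"coding": [{"system": "http://loinc.org", "code": "8867-4",
                         "display": "Heart rate"}]},
    "subject": {"reference": "Patient/P-001"},            # hypothetical patient reference
    "valueQuantity": {"value": 76, "unit": "beats/minute"},
}
print(json.dumps(observation, indent=2))
```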
Cloud Computing: Set up a cloud infrastructure to host the EHR storage system
and the Hyperledger Fabric network. Configure the cloud infrastructure to support
high availability, scalability, and security.
Overall, the proposed model should focus on ensuring that the EHR storage system
is secure, scalable, and interoperable with other systems. It should also include testing
and performance evaluation to identify any potential vulnerabilities or bottlenecks
in the system.
5 Conclusion
References
1. Loh, C. M., & Chuah, C. W. (2021). Electronic medical record system using ethereum
blockchain and role-based access control. Applied Information Technology And Computer
Science, 2(2), 53–72.
2. Lee, J., Park, Y. R., & Beck, S. S. (2021). Deriving key architectural features of FHIR-
blockchain integration through the qualitative content analysis.
3. Yang, Y., Shi, R. H., Li, K., Wu, Z., & Wang, S. (2022). Multiple access control scheme for
EHRs combining edge computing with smart contracts. Future Generation Computer Systems,
129, 453–463.
4. George, J. T. (2022). Introducing blockchain applications: Understand and develop blockchain
applications through distributed systems. Apress.
5. Uddin, M., Memon, M. S., Memon, I., Ali, I., Memon, J., Abdelhaq, M., & Alsaqour, R. (2021).
Hyperledger fabric blockchain: Secure and efficient solution for electronic health records.
Computers, Materials & Continua, 68, 2377–2397.
6. Hira, F. A., Khalid, H., Rasid, S. Z. A., Baskaran, S., & Moshiul, A. M. (2022). Blockchain
technology implementation for medical data management in Malaysia: Potential, need and
challenges. TEM Journal, 11(1), 64.
7. Pang, Z., Yao, Y., Li, Q., Zhang, X., & Zhang, J. (2022). Electronic health records sharing
model based on blockchain with checkable state PBFT consensus algorithm. IEEE Access, 10,
87803–87815.
8. Keshta, I., & Odeh, A. (2021). Security and privacy of electronic health records: Concerns and
challenges. Egyptian Informatics Journal, 22(2), 177–183.
9. Yang, X., Li, T., Pei, X., Wen, L., & Wang, C. (2020). Medical data sharing scheme based on
attribute cryptosystem and blockchain technology. IEEE Access, 8, 45468–45476.
10. Kim, M., Yu, S., Lee, J., Park, Y., & Park, Y. (2020). Design of secure protocol for cloud-assisted
electronic health record system using blockchain. Sensors, 20(10), 2913.
11. Gupta, R., Kanungo, P., Dagdee, N., Madhu, G., Sahoo, K. S., Jhanjhi, N. Z., Masud, M.,
Almalki, N. S., AlZain, M. A. (2023). Secured and privacy-preserving multi-authority access
control system for cloud-based healthcare data sharing. Sensors, 23(5), 2617.
12. Alrebdi, N., Alabdulatif, A., Iwendi, C., & Lian, Z. (2022). SVBE: Searchable and verifiable
blockchain-based electronic medical records system. Scientific Reports, 12(1), 266.
13. Verma, D. K., Tyagi, R. K., & Chakraverti, A. K. (2022). Secure data sharing of electronic
health record (EHR) on the cloud using blockchain in Covid-19 Scenario. In Proceedings of
trends in electronics and health informatics: TEHI 2021 (pp. 165–175). Singapore, Springer
Nature Singapore.
14. Dakhane, A., Waghmare, O., & Karanjekar, J. (2021). AI framework using blockchain
for healthcare database. International Research Journal of Modernization in Engineering Technology and Science, 2(8), 17017.
15. Kumar, R., Kumar, P., Tripathi, R., Gupta, G. P., Islam, A. N., & Shorfuzzaman, M. (2022).
Permissioned blockchain and deep learning for secure and efficient data sharing in industrial
healthcare systems. IEEE Transactions on Industrial Informatics, 18(11), 8065–8073.
16. Chamola, V., Goyal, A., Sharma, P., Hassija, V., Binh, H. T. T., & Saxena, V. (2022). Artificial
intelligence-assisted blockchain-based framework for smart and secure EMR management.
Neural Computing and Applications, 1–11.
17. Awasthi, M. V., Karande, N., & Bhattacharjee, S. (2022). Convergence of blockchain,
IoMT, AI for healthcare platform framework. International Journal of Engineering Research
Management (IJERM), 9, 1–7.
18. Li, H., & Wang, X. (2022). Design and implementation of electronic medical record system
based on hyperledger fabric. In Proceedings of the 2022 4th blockchain and internet of things
conference (pp. 68–72).
19. Manoj, T., Makkithaya, K., & Narendra, V. G. (2022). A blockchain based decentralized
identifiers for entity authentication in electronic health records. Cogent Engineering.
20. Rajawat, A. S., Goyal, S. B., Bedi, P., Simoff, S., Jan, T., & Prasad, M. (2022). Smart scal-
able ML-blockchain framework for large-scale clinical information sharing. Applied Sciences,
12(21), 10795.
21. Liu, J., Zhang, X., & Wang, X. (2021). Blockchain-based electronic health records: A new era
of patient data management.
22. Reegu, F. A., Mohd, S., Hakami, Z., Reegu, K. K., & Alam, S. (2021). Towards trustworthiness
of electronic health record system using blockchain. Annals of the Romanian Society for Cell
Biology, 25(6), 2425–2434.
23. Reegu, F. A., Abas, H., Hakami, Z., Tiwari, S., Akmam, R., Muda, I., Almashqbeh, H. A., &
Jain, R. (2022). Systematic assessment of the interoperability requirements and challenges of
secure blockchain-based electronic health records. Security and Communication Networks.
24. Sonkamble, R. G., Bongale, A. M., Phansalkar, S., Sharma, A., & Rajput, S. (2023). Secure
data transmission of electronic health records using blockchain technology. Electronics, 12(4),
1015.
25. Khan, A. A., Wagan, A. A., Laghari, A. A., Gilal, A. R., Aziz, I. A., & Talpur, B. A. (2022).
BIoMT: A state-of-the-art consortium serverless network architecture for healthcare system
using blockchain smart contracts. IEEE Access, 10, 78887–78898.
26. Nazir, S., & Dua, A. (2022). IoT-based electronic health records (EHR) management system
using blockchain technology. In Blockchain (pp. 135–163). Chapman and Hall/CRC.
27. Wang, Q., & Qin, S. (2021). A hyperledger fabric-based system framework for healthcare data
management. Applied Sciences, 11(24), 11693.
28. Frikha, T., Chaari, A., Chaabane, F., Cheikhrouhou, O., & Zaguia, A. (2021). Healthcare
and fitness data management using the IoT-based blockchain platform. Journal of Healthcare
Engineering.
29. Fatokun, T., Nag, A., & Sharma, S. (2021). Towards a blockchain assisted patient owned system
for electronic health records. Electronics, 10(5), 580.
30. Al Mamun, A., Azam, S., & Gritti, C. (2022). Blockchain-based electronic health records
management: A comprehensive review and future research direction. IEEE Access, 10, 5768–
5789.
31. Shafiq, D. A., Jhanjhi, N. Z., & Abdullah, A. (2022). Load balancing techniques in cloud
computing environment: A review. Journal of King Saud University-Computer and Information
Sciences, 34(7), 3910–3933.
32. Mishra, S. K., et al. (2020). Energy-aware task allocation for multi-cloud networks. IEEE
Access, 8, 178825–178834. https://doi.org/10.1109/ACCESS.2020.3026875
33. Ali, S., Hafeez, Y., Jhanjhi, N. Z., Humayun, M., Imran, M., Nayyar, A., Singh, S., Ra, I.
H. (2020). Towards pattern-based change verification framework for cloud-enabled healthcare
component-based. IEEE Access, 8, 148007–148020.
34. Shafiq, D. A., Jhanjhi, N. Z., & Abdullah, A. (2019). Proposing a load balancing algorithm for
the optimization of cloud computing applications. In 2019 13th International conference on
mathematics, actuarial science, computer science and statistics (MACS) (pp. 1–6). IEEE.
35. Gill, S. H., Razzaq, M. A., Ahmad, M., Almansour, F. M., Haq, I. U., Jhanjhi, N. Z., Alam, M.
Z., & Masud, M. (2022). Security and privacy aspects of cloud computing: A smart campus
case study. Intelligent Automation & Soft Computing, 31(1).
Synthetic Crime Scene Generation Using
Deep Generative Networks
Abstract Synthetic crime scenes can provide an effective training tool for law
enforcement personnel, enabling them to gain valuable experience without the need
for real-world practice. However, creating realistic synthetic crime scenes is a chal-
lenging task that requires advanced artificial intelligence techniques. In this paper,
we propose a novel architecture for generating synthetic crime scenes using a hybrid
VAE + GAN model. The proposed architecture leverages scene graph information
and input text embeddings to generate coarse images of the foreground and back-
ground using a conditional variational autoencoder (VAE). Two separate generators
then generate more detailed images of the foreground and background, and a fusion
generator combines them to create a final image. A discriminator evaluates the realism
of the generated images. This approach represents a significant contribution to the
field, as it enables the generation of highly realistic crime scenes from textual input.
The proposed architecture has the potential to be used by law enforcement agencies
to aid in crime scene reconstruction, and may also have applications in related fields
such as forensic science and criminal justice.
1 Introduction
distribution of existing data. While these networks have been extensively used in the
fabrication of synthetic images since their inception a decade ago, their potential in
artificially simulating a crime scene to aid law enforcement and the judicial system
has not yet been explored.
The ability to create mental images based on verbal input, known as visual imagery
[17], is a natural human instinct. However, replicating this process in computers
and connecting the visual and verbal worlds has proven difficult. Nevertheless, the
emerging field of Text-To-Image Synthesis [18] involves the generation or manipula-
tion of realistic images based on textual descriptions and has numerous applications
in various fields. To accomplish this task, it is necessary to combine the two major
branches of artificial intelligence, Computer Vision and Natural Language Processing
as shown in Fig. 2.
Therefore, in this research, we propose a novel approach to generate synthetic
crime scenes using a hybrid model of Conditional Variational Autoencoder (VAE) and
Generative Adversarial Networks (GANs). Our model will utilize scene graphs and
input text embeddings to generate coarse images of the foreground and background
using a Conditional VAE. The background and foreground images will then be used
by two separate GANs to generate more detailed images, which will be fused using
a fusion generator along with the discriminator. This model will be trained on real
crime scene images and witness statements to produce a synthetic crime scene that
accurately depicts the sequence of events as described by eyewitnesses. If successful,
this work will represent a significant contribution to the field of forensic science, as
no prior work has used deep generative networks to generate synthetic crime scenes
from text and witness statements.
2 Problem Statement
3 Related Work
The literature review consists of two distinct sections. The first section delves into the
core concepts of criminal investigation, including the primary methods utilized in the
investigation and the technological advancements that have been made in forensic
science to date. The second section is devoted to exploring the creation of synthetic
images, animations, and videos using deep generative networks, as well as how these
networks can be utilized in the recreation of crime scenes based on investigative notes
and witness accounts.
Crime Scene Investigation (CSI) is a multi-step process that involves collecting,
preserving, and analyzing physical evidence from a crime scene to reconstruct the
events that took place, identify potential suspects, and provide evidence for use in
court [25]. CSI teams consist of a variety of professionals, including law enforcement
officers, forensic specialists, and other experts, who work together to gather evidence
and establish a comprehensive understanding of the crime. Effective CSI requires
meticulous attention to detail, scientific rigor, and collaborative effort to uncover the
truth. The fundamental steps involved in the CSI process are depicted in Fig. 3.
Criminal investigation refers to the process of collecting, analyzing, and inter-
preting evidence to uncover and prosecute criminals. It is carried out by law enforce-
ment agencies, typically starting with the gathering of primary information to deter-
mine whether a crime has been committed and to identify the perpetrator [26].
The investigation process involves various techniques and procedures, including the
collection of eyewitness testimonies, physical evidence, and circumstantial evidence,
which must all be incorporated in a methodical and precise manner [27]. Success-
fully achieving the first three goals of identifying the crime, and the criminal, and
presenting evidence in court can be considered the hallmark of a successful investi-
gation. Other benefits of this strategy include the return of stolen goods, deterrence
of criminal activity, and satisfaction of crime victims [1].
Crime scene investigation (CSI) is a complex and challenging process that requires
attention to detail and proper resources. There are several potential issues that
can arise during crime scene evaluation, including contamination, time constraints,
weather and environmental factors, lack of resources, human error, and dealing with
large or complex crime scenes. These challenges highlight the need for a thorough
understanding of proper techniques and procedures, as well as the importance of
expertise, experience, and resources in shaping the outcome of the case.
Currently, there are several gaps in CSI practices that need to be addressed [28].
These include a lack of standardization, limited expertise, lack of resources, limited
use of technology, limited collaboration, limited budget, limited access to forensic
data, and limited data sharing. These gaps can affect the quality of evidence collected,
the accuracy of conclusions drawn from it, and the ability to identify suspects and
connect cases. Addressing these gaps is crucial in improving the quality and effective-
ness of CSI, ensuring that justice is served. Computer-generated images and proper
crime scene documentation can also play a significant role in enhancing the accuracy
and reliability of evidence collected during CSI.
Over the years, different methods have been used to document crime scenes,
including photography, sketches, notes, and videos [29, 30]. For example, photog-
raphy has been a widely used technique to document evidence at a crime scene, and
it can provide valuable information about the location and condition of the evidence
[31]. Sketches and notes are also useful in documenting crime scenes, especially
for items that may not be easily captured in a photograph, such as the location of a
bullet hole or blood spatter [32–34]. Overall, proper documentation of a crime scene
is crucial for maintaining the integrity of the evidence and ensuring that justice is
served.
Advancements in technology have also played a role in crime scene documen-
tation. In the past, traditional methods such as photography, sketching, and written
notes were commonly used. However, with the advent of digital cameras, GPS tech-
nology, and 3D scanners, crime scene documentation has become more accurate and
efficient [35, 36]. Computer-generated images and animations have also been used to
reconstruct crime scenes and present evidence in court. For example, laser scanning
technology has been used to create 3D models of crime scenes, which can provide a
more comprehensive view of the area and aid in the investigation. For example, digital
photography has become increasingly popular in recent years, and it allows for the
rapid capture and storage of large amounts of digital data [37]. In addition, 3D laser
scanning has been used to create accurate and detailed computer-generated images
of a crime scene [38]. These images can be used in court to provide a visual represen-
tation of the crime scene and the evidence collected, which can be particularly useful
in complex cases where verbal descriptions may not be sufficient [6]. Furthermore,
virtual reality technology has also been used to create immersive crime scene recon-
structions that can be used to train investigators and provide a better understanding
of the crime scene to judges and jurors [39]. Virtual reality allows investigators to
recreate crime scenes in a virtual environment, providing an opportunity for them
to examine and analyze the scene from different angles and perspectives [40, 41].
Augmented reality, on the other hand, allows investigators to overlay digital informa-
tion onto the real-world environment, providing additional information and context
about the crime scene. These technologies have the potential to revolutionize crime
scene documentation and provide new insights and perspectives for investigators
[42, 43].
Moreover, computer-generated visualizations have been used in courtrooms to
present evidence to judges and juries. These visualizations can be in the form of
animations, simulations, and even virtual reality experiences. They can help to
simplify complex evidence and present it in a more understandable and engaging
way. For example, in a homicide trial, a 3D visualization of the crime scene [44, 45]
can be presented to the jury, providing them with a clear and accurate view of the
location and events. The use of computer-generated visualizations in court can help
to make the evidence more convincing and aid in securing a conviction.
4 Methodology
Our study proposes a novel approach called “CrimeVG” for synthetic crime scene
generation using a hybrid deep generative model that combines Variational Autoen-
coder (VAE) and Generative Adversarial Network (GAN). The VAE component
compresses input data into a lower dimensional space, known as the latent space,
and then decodes the latent representation back into the original data space. Mean-
while, the GAN component consists of two neural networks: a generator network
that creates synthetic samples from the latent space and a discriminator network that
evaluates the realism of the synthetic samples.
The hybrid VAE-GAN model we used for synthetic crime scene generation is a
quantitative approach that involves mathematical functions and numerical compu-
tations to model the relationship between input and output variables. Quantitative
approaches typically entail collecting numerical data, using statistical methods to
analyze the data, and generating numerical results. In our study, we trained the model
on a dataset of real crime scenes and evaluated its performance using quantitative
metrics such as reconstruction error or the quality of the generated synthetic images.
It is worth noting that although the crime scene description used as input for
the hybrid VAE-GAN model may be qualitative in nature, the model itself and the
results it produces would be considered quantitative. Our study’s hybrid VAE-GAN
model shows promise in generating realistic synthetic crime scenes, which could have
potential applications in forensic training and education, as well as in generating test
data for evaluating forensic analysis algorithms.
Figure 4 shows the general design of our suggested model. The following is a
discussion of our proposed model architecture and model flow.
The first step is to convert the input text description into a compact representation
using BERT, a language model trained on a large corpus of text. This embedding
vector is used as input to the VAE component of the hybrid model.
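A hedged sketch of this text-embedding step using a pretrained BERT encoder from Hugging Face Transformers is shown below; the checkpoint name and the choice of the [CLS] vector as the sentence embedding are assumptions, since the paper does not specify them here.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")   # assumed checkpoint
encoder = AutoModel.from_pretrained("bert-base-uncased")

description = "A broken window, an overturned chair, and footprints near the door."
inputs = tokenizer(description, return_tensors="pt", truncation=True)
with torch.no_grad():
    outputs = encoder(**inputs)
text_embedding = outputs.last_hidden_state[:, 0, :]   # [CLS] token as the sentence embedding
print(text_embedding.shape)                           # torch.Size([1, 768])
```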
We propose using a scene graph to represent the objects and their relationships at the
crime scene. The scene graph takes the embedding vector produced by BERT as input
and outputs a structured representation that can be used to generate the foreground
and background of the scene.
The CVAE disentangles the visual objects and their attributes in the foreground and
background of the crime scene using the input from BERT and the scene graph. It
then generates a probability distribution over the latent variables, which can be used
to reconstruct the image.
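The following PyTorch sketch shows the core of such a conditional VAE, with the reparameterization trick used to sample the latent variables; all layer sizes and the flattened-image representation are illustrative assumptions rather than the architecture actually used.

```python
import torch
import torch.nn as nn

class CVAE(nn.Module):
    """Toy conditional VAE: image flattened to a vector, conditioned on a text/scene vector."""
    def __init__(self, img_dim=64 * 64 * 3, cond_dim=768, latent_dim=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(img_dim + cond_dim, 512), nn.ReLU())
        self.mu, self.logvar = nn.Linear(512, latent_dim), nn.Linear(512, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim + cond_dim, 512), nn.ReLU(),
                                 nn.Linear(512, img_dim), nn.Sigmoid())

    def forward(self, x, cond):
        h = self.enc(torch.cat([x, cond], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)   # reparameterization trick
        return self.dec(torch.cat([z, cond], dim=1)), mu, logvar
```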
The GAN architecture consists of three generators, one for the foreground objects,
one for the background, and a fusion module that combines the two. This approach
allows for greater control over the synthesis process and produces high-quality,
synthetic crime scenes that are indistinguishable from real ones.
The fusion module takes the output of the foreground and background generators
and fuses them together to form the final crime scene image. The output is then fed
into the Discriminator, which distinguishes between the real and generated crime
scene images and improves the generated image quality.
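A compact PyTorch sketch of this fusion-plus-discriminator idea is given below: the foreground and background generator outputs are concatenated channel-wise, fused into a single image, and scored by a small convolutional discriminator; shapes and layer choices are assumptions for illustration only.

```python
import torch
import torch.nn as nn

fusion = nn.Sequential(                      # fuse 3-channel FG + 3-channel BG into one image
    nn.Conv2d(6, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh())

discriminator = nn.Sequential(               # downsampling conv stack ending in a realism score
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.LazyLinear(1))

fg, bg = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)   # stand-ins for generator outputs
fake = fusion(torch.cat([fg, bg], dim=1))
realism_logit = discriminator(fake)
print(fake.shape, realism_logit.shape)
```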
5 Conclusion
In this study, we proposed a novel hybrid model for generating synthetic crime scenes
using a combination of natural language processing techniques and computer vision.
Our model utilizes BERT for text embedding generation, a scene graph for structured
representation of crime scenes, a CVAE for foreground–background generation, a
GAN for image refinement, and a fusion module for generating the final synthetic
crime scene. Through experimentation, we demonstrated that our model can generate
high-quality, diverse synthetic crime scenes that are indistinguishable from real crime
scenes.
Our proposed model has several potential applications, including training law
enforcement officials and forensic experts, conducting research on criminal behavior,
and augmenting forensic investigations. Moreover, the model can be extended to
other domains beyond crime scene generation, such as object detection, scene
understanding, and image synthesis.
Overall, our research shows that the integration of natural language processing
and computer vision can lead to significant advancements in the field of crime scene
generation. We hope that our work will inspire further research in this area and
contribute to the development of more sophisticated models for generating synthetic
crime scenes.
References
1. Tilstone, W. J., Hastrup, M. L., & Hald, C. (2019). Fisher techniques of crime scene investigation
first (International). CRC Press.
2. Ogle, R. R., & Plotkin, S. (2012). Crime scene investigation and reconstruction. Pearson
Prentice Hall.
3. Pfefferli, P. W. (2001). Computer aided crime scene sketching. Problem of Forensic Sciences,
46, 83–85.
4. Clair, E. S., Maloney, A., & Schade, A. (2012). An introduction to building 3D crime scene
models using SketchUp. Journal of Association Crime Scene Reconstruction, 18, 29–47.
5. Abu Hana, R. O., Freitas, C. O., Oliveira, L. S., & Bortolozzi, F. (2008). Crime scene classifi-
cation. In Proceedings of the 2008 ACM symposium on Applied computing (pp. 419–423).
6. Galanakis, G., Zabulis, X., Evdaimon, T., Fikenscher, S. E., Allertseder, S., Tsikrika, T., &
Vrochidis, S. (2021). A study of 3D digitisation modalities for crime scene investigation.
Forensic Sciences, 1(2), 56–85.
7. Hana, R. O. A., de Almendra Freitas, C. O., Oliveira, L. S., & Bortolozzi, F. (2008). Crime
scene representation (2D, 3D, stereoscopic projection) and classification. Journal of Universal
Computer Science, 14(18), 2953–2966.
8. Bornik, A., Urschler, M., Schmalstieg, D., Bischof, H., Krauskopf, A., Schwark, T., Scheurer,
E., Yen, K. (2018). Integrated computer-aided forensic case analysis, presentation, and
documentation based on multimodal 3D data. Forensic science international, 287, 12–24
9. Albeedan, M., Kolivand, H., Ho, E. S. (2022). A review of crime scene investigations through
augmented reality. In: Science and technologies for smart cities: 7th EAI international confer-
ence, smartcity360°, virtual event, December 2–4 2021 proceedings (pp. 563–582). Cham:
Springer International Publishing.
10. Bang, J., Lee, Y., Lee, Y. T., & Park, W. (2019). AR/VR based smart policing for fast response
to crimes in safe city. In 2019 IEEE international symposium on mixed and augmented reality
adjunct (ISMAR-Adjunct) (pp. 470–475). IEEE.
11. Ma, M., Zheng, H., & Lallie, H. (2010). Virtual reality and 3D animation in forensic
visualization. Journal of Forensic Sciences, 55(5), 1227–1231.
12. Streefkerk, J. W., Houben, M., van Amerongen, P., ter Haar, F., & Dijk, J. (2013). The art
of csi: An augmented reality tool (art) to annotate crime scenes in forensic investigation. In
Virtual, augmented and mixed reality. systems and applications: 5th International conference,
VAMR 2013 held as part of HCI international 2013, Las Vegas, NV, USA, July 21–26, 2013,
proceedings, Part II 5 (pp. 330–339). Berlin, Heidelberg: Springer.
13. White, P. (Ed.). (2010). Crime scene to court: the essentials of forensic science. Royal Society
of Chemistry.
14. Dawnay, N., & Sheppard, K. (2023). From crime scene to courtroom: A review of the current
bioanalytical evidence workflows used in rape and sexual assault investigations in the United
Kingdom. Science & Justice.
15. Reichherzer, C., & Coleman, T. (2019). Jury visualisation of crime scenes in virtual reality.
Bulletin (Law Society of South Australia), 41(5), 26–27.
16. Sugarman, J. (2012). Crime scene reconstruction, forensic 3D animation [Video file]. Retrieved
from https://www.youtube.com/watch?v=Fn2cCVgZ-wk
17. Pearson, J. (2019). The human imagination: The cognitive neuroscience of visual mental
imagery. Nature Reviews Neuroscience, 20(10), 624–634.
18. Reed, S., Akata, Z., Yan, X., Logeswaran, L., Schiele, B., & Lee, H. (2016). Genera-
tive adversarial text to image synthesis. In International conference on machine learning
(pp. 1060–1069). PMLR.
19. Raneri, D. (2018). Enhancing forensic investigation through the use of modern three-
dimensional (3D) imaging technologies for crime scene reconstruction. Australian Journal
of Forensic Sciences, 50(6), 697–707.
20. Thiruchelvam, P., Jegatheswaran, R., Binti Juremi, D. J., & Mohd Puat, H. A. (2021). Crime
scene reconstruction based on a suitable software: A comparison study. In I. T. D. Vinesh, R.
Jegatheswaran, & D. J. Binti Juremi, & H. A. Mohd Puat (Eds.), Crime scene reconstruction
based on a suitable software: A comparison study.
21. Chan, E. R., Monteiro, M., Kellnhofer, P., Wu, J., & Wetzstein, G. (2021). pi-gan: Periodic
implicit generative adversarial networks for 3d-aware image synthesis. In Proceedings of the
IEEE/CVF conference on computer vision and pattern recognition (pp. 5799–5809).
22. Rajnish, D. R., Nahar, J., Shukla, S., Dixit, V., & Suryawanshi, T. (2022). Medical image
synthesis using GAN.
23. Li, F., Huang, W., Luo, M., Zhang, P., & Zha, Y. (2021). A new VAE-GAN model to synthesize
arterial spin labeling images from structural MRI. Displays, 70, 102079.
24. Shamsolmoali, P., Zareapoor, M., Granger, E., Zhou, H., Wang, R., Celebi, M. E., & Yang, J.
(2021). Image synthesis with adversarial networks: A comprehensive survey and case studies.
Information Fusion, 72, 126–146.
25. Gupta, S., & Jain, I. B. (2023). Crime scene investigation and forensic evidence: Forensic
analysis and tools. Journal of Pharmaceutical Negative Results, 3661–3667.
26. Zurgani, E. (2018). Documentation of the body transformations during the decomposi-
tion process: From the crime scene to the laboratory [Doctoral dissertation, University of
Huddersfield].
27. Bostanci, E. (2015). 3D reconstruction of crime scenes and design considerations for an
interactive investigation tool. International Journal of Information Security Science, 4(2),
50–58.
28. Knox, M. A. (2010). Forensic engineering applications in crime scene reconstruction. In: ASME
international mechanical engineering congress and exposition (vol. 44489, pp. 413–419).
29. Robinson, E. M. (2016). Crime scene photography. Academic Press.
30. Duncan, C. D. (2022). Advanced crime scene photography. CRC Press.
31. Weiss, S. L., & Wyman, R. (2022). Photographing crime scenes. In Handbook of forensic
photography (pp. 405–422). CRC Press.
32. Pazarena, L. (2022). The use of field notes and how to document and/or incorporate notes into
CSI reports. In Report writing for crime scene investigators (pp. 21–34). CRC Press.
33. Druid, H. (2022). Crime scene and crime scene investigations. Handbook of Forensic Medicine,
1, 161–181.
34. Osman, K., Gabriel, G. F., & Hamzah, N. H. (2021). Crime scene investigation issues: Present
issues and future recommendations. Jurnal Undang-Undang dan Masyarakat, 28, 3.
35. Formosa, S. Taking LiDAR to court: Mapping vapour evidence through spatial forensics.
Applied Geomatics Approaches, 67.
36. Thiruchelvam, V., Wei, A. Y., Juremi, J., Puat, H. A., & Jegatheswaran, R. Utilization of
unmanned aerial vehicle (UAV) technology in crime scene investigation.
37. Telyatitskaya, T. (2021). Digital photography of crime scenes in the production in forensic
examinations. Technology and Language, 3(2), 68–76.
38. Cunha, R. R., Arrabal, C. T., Dantas, M. M., & Bassanelli, H. R. (2022). Laser scanner and drone
photogrammetry: A statistical comparison between 3-dimensional models and its impacts on
outdoor crime scene registration. Forensic Science International, 330, 111100.
39. Mayne, R., & Green, H. (2020). Virtual reality for teaching and learning in crime scene
investigation. Science & Justice, 60(5), 466–472.
40. Maneli, M. A., & Isafiade, O. E. (2022). 3D forensic crime scene reconstruction involving
immersive technology: A systematic literature review. IEEE Access, 10, 88821–88857.
41. Rinaldi, V., Hackman, L., NicDaeid, N. (2022). Virtual reality as a collaborative tool for
digitalised crime scene examination. In Extended reality: First international conference, XR
Salento 2022 Lecce, Italy, July 6–8, 2022, Proceedings, Part I (pp. 154–161). Cham: Springer
International Publishing.
42. Tolstolutsky, V., Kuzenkova, G., & Malichenko, V. (2022). The experience of using augmented
reality in the reconstruction of the crime scene committed in transport. In International scien-
tific siberian transport forum TransSiberia-2021 (vol. 1, pp. 1095–1102). Cham: Springer
International Publishing.
43. Ajah, B. O., Ajah, I. A., & Obasi, C. O. (2020). Application of virtual reality (VR) and
augmented reality (AR) in the investigation and trial of Herdsmen terrorism in Nigeria.
International Journal of Criminal Justice Sciences, 15(1), 1–20.
44. Sieberth, T., Dobay, A., Affolter, R., & Ebert, L. (2019). A toolbox for the rapid prototyping
of crime scene reconstructions in virtual reality. Forensic science international, 305, 110006.
45. Kottner, S., Thali, M. J., & Gascho, D. Forensic imaging.
46. Humayun, M., Ashfaq, F., Jhanjhi, N. Z., & Alsadun, M. K. (2022). Traffic management:
Multi-scale vehicle detection in varying weather conditions using yolov4 and spatial pyramid
pooling network. Electronics, 11(17), 2748.
47. Muzafar, S., Jhanjhi, N. Z., Khan, N. A., & Ashfaq, F. (2022). DDoS attack detection approaches
in on software defined network. In 2022 14th International conference on mathematics,
actuarial science, computer science and statistics (MACS) (pp. 1–5). IEEE.
48. Humayun, M., Khalil, M. I., Almuayqil, S. N., & Jhanjhi, N. Z. (2023). Framework for detecting
breast cancer risk presence using deep learning. Electronics, 12(2), 403.
Co-opetition Reloaded: Rethinking
the Role of Globalization, Supply Chains,
and Mechanism Design Theory
Anastasios Fountis
1 Introduction
A. Fountis (B)
Faculty, Berlin School of Business and Innovation, Berlin, Germany
e-mail: [email protected]
is the rapid economic integration of ideas and knowledge through digital exchange,
technology, innovation, and organizational imitation. “Old globalization” was about
goods and standard services crossing borders.
2 Co-opetition
at once and on a variety of levels throughout the value chain. This is the case in the arrangement between PSA Peugeot Citroën and Toyota to share components for a new city car, sold simultaneously as the Peugeot 107, the Toyota Aygo, and the Citroën C1. The arrangement allows the companies to save on shared costs while remaining fiercely competitive in other areas. A number of benefits can be anticipated, including cost reductions, the complementarity of resources, and the transfer of technological know-how. There are also challenges, such as unequal risk distribution, complementary demands, lack of trust, and distribution of control. Three or more enterprises can also be engaged in cooperative competition with one another. Cooperative competition can likewise take the form of shared resource management in the building industry: one study presents a short-term partnering case in which construction contractors form an alliance, agreeing to place all or some of their resources in a joint pool for a fixed period and to allocate the pooled resources according to a more cost-effective plan. Additionally, in
practice, policy makers and regulators can trigger, promote, and affect co-opetitive
interactions among economic actors who did not intentionally plan to coopete before
the external institutional stakeholders (i.e., a policy maker or regulator) created the
conditions for the emergence of coopetition. This was found to be the case in a number
of real-world scenarios. Cooperative game theory was proposed by Asgari et al. [4]
as the basis for fair and efficient allocation of the incremental gains of collaboration
among the collaborating contractors. The findings of their research presented a
novel approach to the planning and distribution of building resources. Contractors
no longer view one another as their only competitors; rather, in an effort to lower
their overall costs, they seek cooperative opportunities beyond their competition.
On the other side, at the intra-organizational level, cooperative competition takes place between persons or functional units within the same organization. Some studies, drawing on game theory and theo-
ries of social interdependence, investigate the presence of simultaneous cooperation
and competition among functional units, the antecedents of co-opetition, and the
impact of co-opetition on knowledge-sharing behaviors. For instance, the concept of co-opetitive knowledge sharing was developed to explain the mechanisms through which co-opetition influences effective knowledge-sharing practices in cross-functional teams. The underlying argument is that organizational teams, although they are required to coop-
erate with one another, are likely to experience tension as a result of different profes-
sional philosophies and competing goals brought forth by various cross-functional
representatives.
Co-opetition is not a form of cartel: the goal of a cartel is to limit competition, whereas the goal of co-opetition is to exploit the complementary resources of the firms in order to achieve lower costs and open up new innovation possibilities, while still allowing for competition at a later point in time.
An economic theory called mechanism design theory aims to investigate the methods
through which a specific end or result can be attained. The mechanism design theory is
an economic framework for comprehending how businesses might accomplish their
goals when obstacles like vested interests and inaccurate information may stand in
their way. The theory, which derives from game theory, explains how individual
incentives and motives can be used to a company’s advantage. The founders of the
theory were given the Economic Sciences Nobel Prize in 2007. A subfield of microe-
conomics called mechanism design studies how organizations and institutions might
produce desirable social or economic results under the restrictions of people’s self-
interest and imperfect knowledge. Principal-agent issues might arise when people behave in their own self-interest and are not driven to give truthful information. Mechanism design theory enables economists to analyze,
compare, and possibly regulate specific mechanisms linked to the achievement of
particular outcomes. Mechanism design considers incentives and private informa-
tion to improve economists’ understanding of market mechanisms and demonstrates
how the correct incentives (money) can persuade players to divulge their private
information and produce the best results.
Thus, mechanism design theory is employed in economics to investigate the proce-
dures and systems that lead to a given result. Basically, it is an inverse problem of
game theory as it starts at the end of the game, then goes backwards; it is also called
reverse game theory. By combining their efforts, Eric Maskin, Leonid Hurwicz, and
Roger Myerson greatly popularized the idea of mechanism design theory. The three
researchers were acknowledged as the subject’s founding authorities after receiving
the Nobel Memorial Prize in Economic Sciences in 2007 for their work on the mech-
anism design theory. The idea of game theory was generally established by John
von Neumann and Oskar Morgenstern in their 1944 book, Theory of Games and
Economic Behavior, which served as the foundation for mechanism design theory.
The study of how various actors interact both competitively and cooperatively to
produce events and results is known as game theory in economics. Many mathe-
matical models have been created to effectively examine this idea and its outcomes.
More than a dozen Nobel Prizes have been awarded to academics in the field of
game theory, which has been recognized throughout the history of economic studies.
However, mechanism design theory typically takes the opposite approach to game theory: a scenario is studied by starting from an outcome and working out how different actors must interact to reach it. Both game theory and mechanism design examine the competing and collaborative influences of various actors on an outcome, but mechanism design starts from an intended result and derives the steps necessary to get there, whereas game theory examines how the players' different actions lead to various results.
A distinct example is that of financial markets and mechanism design theory. Mechanism design theory has a wide range of applications, and as a result numerous mathematical theorems have been created. With these applications and theorems, researchers can govern the entities' access to information and manage their constraints. An auction market is one setting in which the application of mechanism design theory can be demonstrated.
In general, regulators want to create a competitive market that is efficient and well-organized for participants. To reach this outcome, a number of entities with different levels of association and information are involved. The goal of mechanism design theory here is to regulate and restrict participant access to information in order to produce the desired outcome of a well-ordered market. For exchanges, market makers, buyers, and sellers in particular, this typically necessitates monitoring information and activity at several levels.
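To make the auction setting concrete, the following minimal Python sketch (our illustration, not taken from the chapter) implements a second-price sealed-bid (Vickrey) auction, the textbook mechanism in which reporting one's true valuation is a dominant strategy; all bidder names and bid values below are hypothetical.

```python
# Minimal sketch of a second-price (Vickrey) sealed-bid auction: the highest
# bidder wins but pays the second-highest bid, which makes truthful bidding
# a dominant strategy. All bids below are hypothetical.

def second_price_auction(bids):
    """Return (winner, price): highest bidder wins, pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

def utility(value, bids, bidder):
    """Payoff of `bidder` with true valuation `value`, given all submitted bids."""
    winner, price = second_price_auction(bids)
    return value - price if winner == bidder else 0.0

if __name__ == "__main__":
    true_value = 10.0                  # bidder A's private valuation
    others = {"B": 7.0, "C": 4.0}      # rival bids (private in practice)
    for a_bid in (6.0, 10.0, 14.0):    # under-bid, truthful, over-bid
        u = utility(true_value, {"A": a_bid, **others}, "A")
        print(f"A bids {a_bid:>4}: utility = {u}")
    # Truthful bidding (10.0) is never worse than any misreport: the rules of
    # the mechanism align private incentives with revealing private information.
```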
When solving, or rather depicting, a mechanism design problem, the target (or goal) function is the most important "given", while the underlying mechanism is the unknown factor. Among the most common representations that can be applied in the field of relations between nations is "pie-splitting". In a simple context, it describes how resources are shared or how deals and treaties begin to take shape. The main point is the willingness to share and the assumption that all players are rational.
The study of solution concepts for a category of games involving private infor-
mation is called mechanism design. In a problem involving design, the goal function
is the most important “given,” while the mechanism, as was previously stated, is the
unknown. It is essential to make the distinction (1) that a game’s “designer” decides
on the game’s structure rather than simply inheriting one; and (2) that the designer
is invested in the game’s results. The person who designed the mechanism is taking
part in and supervising the process.
We now assume the scenario of slicing a cake. The "you-cut-I-choose" protocol has an additional property: even if the players' valuations differ, each of them can be guaranteed at least half of the cake according to their own evaluation, and this remains true even if each keeps their valuation secret from the other. The person responsible for cutting the cake can divide it into what he believes to be two equal parts, thereby ensuring that he will receive one-half of the value; the other person then chooses what she considers the best piece, so each gets at least half of what they think the cake is worth.
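The following minimal Python sketch (our illustration, not part of the original text) simulates this protocol on a cake modeled as the interval [0, 1]; the two valuation densities are hypothetical placeholders, each normalized so the whole cake is worth 1 to its owner.

```python
# Minimal sketch of the "you-cut-I-choose" protocol on the unit interval [0, 1].
# Each valuation density integrates to 1, so a value of 0.5 means "half the cake
# in that player's own eyes".

def measure(density, a, b, steps=10_000):
    """Numerically integrate `density` over [a, b] (midpoint rule)."""
    h = (b - a) / steps
    return sum(density(a + (i + 0.5) * h) for i in range(steps)) * h

def cut_point(cutter, lo=0.0, hi=1.0, tol=1e-9):
    """Find x such that the cutter values [0, x] and [x, 1] equally (bisection)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if measure(cutter, 0.0, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def cut_and_choose(cutter, chooser):
    x = cut_point(cutter)
    left, right = (0.0, x), (x, 1.0)
    # The chooser takes whichever piece she values more; the cutter gets the other.
    if measure(chooser, *left) >= measure(chooser, *right):
        return {"chooser": left, "cutter": right}
    return {"chooser": right, "cutter": left}

if __name__ == "__main__":
    cutter = lambda t: 1.0        # the cutter values the cake uniformly
    chooser = lambda t: 2.0 * t   # the chooser prefers the right-hand side
    alloc = cut_and_choose(cutter, chooser)
    print("cutter gets", alloc["cutter"],
          "worth", round(measure(cutter, *alloc["cutter"]), 3))
    print("chooser gets", alloc["chooser"],
          "worth", round(measure(chooser, *alloc["chooser"]), 3))
    # Each player receives at least 0.5 by her own valuation, even though the
    # valuations were never revealed to the other party.
```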
For a pie-splitting or cake-cutting procedure with n players, where player i has utility function u_i (normalized so that each player values the whole cake at 1) and obtains piece_i, the following properties are distinguished (a small checker for them is sketched after this list):
• It is fair (proportional) if for each player i, u_i(piece_i) ≥ 1/n.
• It is envy-free if for each player i and player j, u_i(piece_i) ≥ u_i(piece_j).
• It is equitable if for each player i and player j, u_i(piece_i) = u_j(piece_j).
• It is exact if for each player i and each piece j, u_i(piece_j) = 1/n.
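The Python sketch below checks these four properties for a given allocation; the utility matrix is a hypothetical example, and the normalization assumed is the one stated above (each player values the whole cake at 1).

```python
# Small checker for the four cake-cutting properties. u[i][j] is player i's
# (normalized) value of piece j, and piece i is allocated to player i.
# The example matrix is hypothetical.
EPS = 1e-12

def is_fair(u):        # proportional: everyone gets at least 1/n by own valuation
    n = len(u)
    return all(u[i][i] >= 1.0 / n - EPS for i in range(n))

def is_envy_free(u):   # nobody prefers someone else's piece to her own
    n = len(u)
    return all(u[i][i] >= u[i][j] - EPS for i in range(n) for j in range(n))

def is_equitable(u):   # everyone is equally happy with her own piece
    n = len(u)
    return all(abs(u[i][i] - u[0][0]) < EPS for i in range(n))

def is_exact(u):       # every piece is worth exactly 1/n to every player
    n = len(u)
    return all(abs(u[i][j] - 1.0 / n) < EPS for i in range(n) for j in range(n))

if __name__ == "__main__":
    u = [[0.5, 0.5],   # player 1's values of piece 1 and piece 2
         [0.4, 0.6]]   # player 2's values of piece 1 and piece 2
    print(is_fair(u), is_envy_free(u), is_equitable(u), is_exact(u))
    # -> True True False False: proportional and envy-free, but not equitable
    #    (0.5 vs 0.6) and not exact.
```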
The Austin moving-knife methods [5] are ways to divide a cake fairly. They divide the cake into pieces each worth exactly 1/n and give one piece to each of the n partners.
In comparison, proportional division procedures guarantee each partner at least 1/n of the total value by their own valuation, though some partners may receive more. When n equals 2, Austin's method produces a division that is both exact and envy-free. Furthermore, the cake can be divided into any number k of pieces that both parties value as exactly 1/k; as a result, the cake can be split between the partners in any fraction.
Some applications in the field of contemporary politics will be mentioned in the sixth section of this chapter. There is extensive literature on such efforts, but they are usually approached from a Grand Strategy perspective at the game-theory level [6], rather than from the point of view of mechanism design theory as applied to computational politics, the nexus of political science and computer science [6]. In this field, computational techniques, such as analysis tools and prediction techniques, are used to answer political science questions, and researchers use large data sets to examine user activity.
Building a classifier to forecast users' political bias on social media, or identifying political bias, is a typical example of such work, driven by the sets of political choices offered to citizens. Thanks to social media, an unprecedented amount of latent, user-generated data has become available to researchers and campaign strategists, and recent advances in computer science have made it possible to store and handle sizable data sets. Political science research has undergone a significant shift thanks to computational politics, which effectively collects data on individuals rather than aggregates; this knowledge can be used to target likely voters successfully [7]. In general, Political Game Theory is a growing academic field, as global complexity increases and new perspectives based on more scientific tools are needed [8].
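As a rough illustration of the kind of classifier mentioned above, the sketch below uses scikit-learn (our choice of toolkit; the chapter names none) with invented posts and labels. It only shows the shape of such a pipeline, not a real model.

```python
# Toy sketch of a text classifier for political bias. The posts and labels are
# invented placeholders; real systems are trained on large labeled corpora and
# evaluated on held-out data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "lower taxes and deregulation will grow the economy",
    "we must expand public healthcare and social programs",
    "strong borders and national security come first",
    "climate justice and workers' rights are non-negotiable",
]
labels = ["right", "left", "right", "left"]   # toy labels for illustration only

# Bag-of-words features (TF-IDF) feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(posts, labels)

print(model.predict(["cut red tape to help small businesses"]))
```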
Dagnino and Rocco [9], in their book Coopetition Strategy: Theory, Experiments and Cases, have made a significant contribution to the study of co-opetition. The cardinal question for them was the following: is the concept of co-opetition merely another passing fad, or does it represent a fundamental shift in the way we think about strategy? As they aptly put it, they try to convert a "liquid" word into a tangible one. When looked at from two different theoretical vantage points, namely competition theory and cooperation theory, the subject can be seen in a much clearer light. Because of this, there is a significant urge to reduce it to a straightforward extension of either competition theory or cooperation theory. With regard to the former, co-opetition could become part of the "competitive paradigm" if cooperation between firms is considered to be "competitive maneuvering" or "cooperative maneuvering," both of which can provide a competitive advantage, or if co-opetition itself is considered part of competition. With regard to the latter, co-opetition is nothing more than an additional form of cooperation, and research on co-opetition can consequently make substantial use of alliance theory. The principles of trust, opportunism, and commitment, which play significant roles in dyadic cooperative relationships, can also be applied to co-opetitive relationships with the same effect.
recent years, they have frequently been compared to ostensibly successful autocracies
such as China.
(4) Macro unraveling.
Additionally, strategic competition will lead to the disintegration of the macro policy
order.
(5) The commanding heights: the technology and energy revolution.
It is often believed that technology dominates the commanding heights of the global
economy, and national economic policy frequently focuses on gaining a technological
advantage.
China, the European Union, the United States, and others are investing more in
bolstering strategic autonomy in key technology industries. Economic sanctions and
limits are being imposed on technology flows and investments between competing
blocs, and this will intensify through 2023, attracting a larger number of nations.
There will be decisions to make. Due to the pursuit of strategic autonomy, global
economic fragmentation incurs costs. However, like in other spheres, international
competition can be a positive factor—creating sharper incentives for investment
and innovation (as during the Cold War). Moreover, energy remains a fundamental
component of economic advantage. In comparison to Europe, the United States
has a greater degree of energy independence. Europe is currently facing competitive
pressures, particularly in energy-intensive industries, and we see the struggle of major economies such as Germany's, which depends mainly on manufacturing, compared with economies that focus on services.
In logistics and especially in supply chain management, the supply chain is a
network of modes of transport and means of transport that ensures the uninterrupted
movement of goods from the place of origin to the destination. In the years 2020,
2021, and 2022, as a consequence of the COVID-19 pandemic, global supply chains
and shipments slowed down, causing worldwide shortages and affecting consumer
patterns. The situation remains complicated, also because of possible energy shortages and the political environment surrounding the war in Ukraine. The economic sectors of an economy provide a framework for understanding the impact of supply chain disruption on all goods and services in the global economy and, by extension, on globalization as we have known it.
An oxymoron and a combination of the words “friend” and “enemy,” the term “fren-
emy” (sometimes written “frienemy”) refers to “a person with whom one is amicable,
despite a basic dislike or rivalry” or “a person who combines the traits of a friend and
an enemy". The phrase is used to describe interactions of a personal, geopolitical, or commercial nature between individuals, groups, or institutions. A similar concept, a competitive friendship, is also
described by the US-based author and activist Jessica Mitford, who asserted in 1977 that one of her sisters had invented the term as a young child. The word was initially used to describe a somewhat dumb little girl who lived near the family: "My sister and the enemy would always play together, despite the fact that they hated each other with all of their hearts" [11]. Beginning in the mid-1990s, its use rose sharply. Rivalries in the workplace are rather prevalent, especially among companies that work together on projects. While it was certainly not unheard of for people to socialize with colleagues in the past, the sheer amount of time people spend at work now has left many with less time and inclination to develop friendships outside the office. This is due to the increasing informality of work environments as well as the "abundance of very close, intertwined relationships that bridge people's professional and personal lives".
To have a good professional connection requires two or more business partners to
come together and benefit from one another. A successful personal relationship, on the other hand, requires shared interests outside of the company. Rela-
tionships are more likely to develop between people who share similarities in the
office, in a sports club, or in any other environment where there is competition based
on performance. The intensive setting can foster competitiveness, which can then
morph into envy and put a strain on a relationship. Because of the common interest
in engaging in business dealings or competition, frenemy-type interactions become
habitual and commonplace. According to an anecdote, Sigmund Freud once observed of himself, "an intimate friend and a loathed enemy have always been important to my emotional life… certainly not infrequently… both a friend and an enemy might be found in the same individual at the same time".
On the basis of the actions they exhibit toward one another, frenemies can be
classified as follows, according to [12]:
(1) One-sided frenemy: a person who interacts with or reaches out to the other only when they need something, such as assistance or a favor. This individual takes no real interest in the other's life. The relationship is one-sided because one party does not show up to meet the needs of the other, which also causes problems.
(2) Unfiltered and undermining adversary: This type of adversary taunts, makes
fun of, and cracks sarcastic jokes about the friend on such a regular basis that
it becomes difficult to endure their behavior. In addition, private information
becomes widely known.
(3) An overly involved enemy is one who interferes in the life of a friend in ways that the friend might feel uncomfortable with or find inappropriate.
They contact their family, friends, or significant others in an inappropriate
manner or without the permission of those individuals in order to find out some-
thing. Their over-participation frequently causes their companion to feel both
bothered and irritated.
(4) Competitive work enemy: a rival at work. They put on a friendly front, flatter
one another, and act as though they want the best for one another, but in reality,
none of them truly wishes the other person any happiness or success in life.
They never want the other person to be more successful than they are.
(5) There is also a type of adversary known as an ambivalent enemy, and they
possess both good and bad characteristics. There are occasions when they are
helpful and courteous, but there are also instances when they behave in a manner
that is self-serving or competitive.
(6) Jealousy can transform friends into adversaries, and it may do so quite quickly.
Jealousy can arise for a variety of reasons, including a friend’s promotion,
success, attractiveness, personality, sense of humor, or social standing.
(7) Uncertain enemy: A person who is unsure of the status or degree of closeness
of their friendship may, for instance, be unsure about whether or not the other
person likes them, whether they are true friends or simply professional buddies,
or whether or not they will consider asking them to family events.
(8) The passive-aggressive adversary is someone who will say hurtful things and
give compliments behind the other person’s back, but they will never do any
of those things straight to their face. They have the potential to leave a person
questioning their actions and wondering if they have done something wrong.
6 Co-opetition 2.0
The level of cooperation between rivals is at an all-time high. Consider both the
potential downsides and upsides using the following guide. Since the 1980s, there
has been a growing trend toward “co-opetition,” which refers to working together
with an adversary in order to accomplish a shared objective or gain an advantage.
However, a large number of businesses are uneasy with the idea, and as a result, they
pass up the exciting potential it brings.
The practitioners and scholars Brandenburger and Nalebuff [13], who were involved in developing the methodology back in 1996, offer a framework for deciding whether or not to form a partnership with a competitor, drawing on examples from Apple and Samsung, DHL and UPS, Ford and GM, and Google and Yahoo [13]. In their view, to gauge the potential for cooperation with your competitor, first analyze what each party will do if it chooses not to cooperate and how the results of that decision will alter the dynamics of the industry. It is possible that working together will result in a clear victory, but even if it does not, it may still be preferable to letting someone else take your place in the transaction, which could put you in a worse position. Then it is essential to figure out how to collaborate without divulging your "secret sauce," that is, your existing advantages. After you have completed those steps, the next step is to draft an agreement
that spells out the specifics of the business transaction, including its scope, who will
be in control, how the arrangement can be undone if it’s not working well, and how
the profits will be split. You’ll also have to deal with resistance from within your
own company and work to shift internal views. Co-opetition demands mental flex-
ibility, but businesses that are able to cultivate it can gain a significant competitive
advantage.
The United States and the Soviet Union engaged in an intense rivalry over the decades leading up to the successful Moon landing just over 50 years ago. But the truth is that space exploration almost got started as a collaboration.
When President John F. Kennedy met with Nikita Khrushchev in 1961 and again
when he addressed the United Nations in 1963, he advocated a collaborative mission
to the moon. It was never realized, but, in 1975, the adversaries from the Cold War
started cooperating on the Apollo-Soyuz project, and by 1998, the jointly controlled
International Space Station had ushered in a new era of collaboration in the field of
space exploration.
Moreover, the interesting point is that Brandenburger and Nalebuff [13] enhanced
the concept and they added a new dimension in their latest framework, and from the
corporate level extrapolated it to the state level. Could countries such as the United
States and China, for example, work together on an expedition to Mars? Because
doing so would require giving up control of the intellectual property in a way that
cannot be undone, this presents an obstacle that appears to be insurmountable. As
a result of the fact that military applications can be found in space technology, this
is a particularly sensitive subject. This last point also adds a new dimension to the way we see co-opetition, or concepts like frenemies, under the current fluctuating conditions of globalization and the ongoing strategic dichotomy between liberal democracies and authoritarian regimes.
Brandenburger and Nalebuff [13] began this interesting extension of their 1996 book and touched on the functioning of co-opetition at the level of nations by discussing the chance that the United States and the Soviet Union lost to work together on a journey to the moon. Today, there are even more opportunities for nations to work together, whether in the fight against COVID-19, against climate change, or even against trade conflicts. At the end of 2022 or the beginning of 2023 this may sound utopian, but sooner or later new forms of globalization, with new regional frames and networks, will arise. They concluded that a deeper comprehension of the concept of co-opetition can help companies, managers, and nations discover more effective ways to collaborate and achieve shared goals. The entanglement of areas defined under co-opetition also gives the impression of rope pulling. Rope pulling (also known as tug of war) is a sport in which two teams compete to pull a rope a certain distance against the opposing team's pull. The earlier-mentioned notion of frenemies, as well as the frame of co-opetition, depicts a struggle not only to pull but also to keep balance, since in a co-opetition environment no one wants to fall and no one has to.
As previously mentioned, mechanism design theory can offer a context for analyzing international alliances at the level of nations, which are formed on the basis of goal congruence. The main assumption, for a better understanding, is that
the tendency toward fierce, usually zero-sum games is becoming more constrained, and the trend is toward alliances and networks. The actual context for research is the political action after the Russian invasion of Ukraine in 2022. First, there is the Eurasianist move, with the economically sanctioned political systems of Russia, Iran, China, and Turkey as the main drivers [14]; second, the NATO alliance, which is becoming the center of a political and economic network, especially for energy, defending the idea of Western liberal democracies, with the USA in the pivotal role. Finally, the big question mark remains the long-standing economic co-opetition between China and the USA, which may turn into open confrontation and fail to avoid the Thucydides Trap [15]. In all these cases, the pie-splitting model presented in the third section of this study can be applied in order to reach, sooner or later, a balance.
7 Conclusion
be seen as a phenomenon that has been around for a very long time, but is only
just beginning to take on the new dimensions and significance associated with the
modern era. The concept of simultaneously thinking about cooperation and acting
in a way that is both cooperative and competitive requires a cognitive revolution
in both research and practice. The term "frenemy" refers to a person, and by extension to states or other non-state actors in international relations, with whom one is amicable despite a basic dislike or rivalry. This is due to the increasing informality
of work environments as well as the “abundance of very close, intertwined relation-
ships”. Frenemies can be classified on the basis of the actions they exhibit toward one
another. Jealousy can transform friends into adversaries, and it may do so quickly.
Since the 1980s, there has been a growing trend toward “co-opetition,” which refers
to working together with an adversary in order to accomplish a shared objective or
gain an advantage.
As mentioned, under the concept of co-opetition there is an effort to convert a "liquid" word into a tangible one. The challenge under current global conditions is becoming ever greater, as there is a somewhat disruptive transformation, first of all of globalization as we have known it.
References
1. Wilding, D. (2018). Shore Lore: The glory days of Sealshipt Oysters. Retrieved December 23,
2022, from https://eu.wickedlocal.com/story/cape-codder/2018/03/03/shore-lore-glory-days-
sealshipt/13765886007/
2. Herzog, T. (2010). Strategisches management von Koopetition–Eine empirisch begründete
Theorie der Zivilen Luftfahrt. Wirtschaftsuniversität Wien.
3. Brandenburger, A., & Nalebuff, B. (1996). Co-opetition: A revolution mindset that combines
competition and cooperation. Harvard Business Press.
4. Asgari, S., Afshar, A., & Madani, K. (2020). Cooperative game theoretic framework for joint resource management in construction. Journal of Construction Engineering and Management, 140(3).
5. Austin, A. K. (1982). Sharing a cake. The Mathematical Gazette, 66(437), 212–215. https://
doi.org/10.2307/3616548
6. Guner, S. (2012). A short note on the use of game theory in analyses of international relations.
Retrieved December 23, 2022, from https://www.e-ir.info/2012/06/21/a-short-note-on-the-use-
of-game-theory-in-analyses-of-international-relations/
7. Chester, J., & Montgomery, K. C. (2017). The role of digital marketing in political campaigns.
Internet Policy Review, 6(4). https://doi.org/10.14763/2017.4.773
8. McCarty, N., & Meirowitz, A. (2007). Political game theory: An introduction (analytical
methods for social research). Cambridge University Press. https://doi.org/10.1017/CBO978
0511813122
9. Dagnino, G. B., & Rocco, E., (Eds.) (2009). Coopetition strategy: Theory, experiments and
cases (1st ed.). Routledge. https://doi.org/10.4324/9780203874301
10. O’Sullivan, M., Skilling, D. (2022). War by other means—positioning for 2023, The levelling.
Retrieved December 23, 2022, from https://thelevelling.blog/2022/12/09/war-by-other-means-
positioning-for-2023/
11. Mitford, J. (2010). Poison penmanship: The gentle art of muckraking. New York Review Books,
p. 218.
12. Clarke, K. (2017). Five types of frenemies and the signs that you have one. Retrieved December
22, 2022, from https://www.cbc.ca/life/wellness/five-types-of-frenemies-and-the-signs-that-
you-have-one-1.4060736
13. Brandenburger, A., & Nalebuff, B. (2020). The rules of co-opetition. Harvard Business Review.
Retrieved December 22, 2022, from https://hbr.org/2021/01/the-rules-of-co-opetition
14. Laruelle, M. (2008). Russian Eurasianism: An ideology of empire. Princeton.
15. Allison, G. (2017). Destined for war: Can America and China escape Thucydides's trap? New York: Houghton Mifflin Harcourt. ISBN 978-1328915382.
Throughput Performance Analysis
of DL-MU-MIMO for the IEEE 802.11ac
Wireless LAN
Ziyad Khalaf Farej and Omer Mohammed Ali
1 Introduction
When data traffic increases significantly in a WLAN, the network cannot support the increased demand, so new improvements are added to the WLAN standards to increase the data rate with low latency. The 802.11ac standard has been developed,
with many features added to its PHY and MAC layers, to provide a very high throughput WLAN: modulation up to 256-QAM, bandwidth up to 160 MHz, and MIMO transmission with up to eight spatial streams (SS) to support a 6.9 Gbps data rate [1, 2]. In addition, beamforming schemes, including SU-MIMO and MU-MIMO techniques, solve the interference problem among users and increase the gain, thus increasing the data rate and improving the spectral efficiency for a given channel configuration [3]. An AP issues a packet containing only preambles for channel sounding, and a compressed beamforming frame carrying updated down-link (DL) channel information is received from the clients. With the MU-MIMO technique, the AP can send multiple data streams to various clients at once without overlap by using transmit beamforming [4]. MU-MIMO is a new feature added to the IEEE 802.11ac standard [5]. Beamforming achieves higher spectral efficiency despite its sounding overhead; any mismatch between the channel state and the transmit beam causes performance degradation, especially when the transmission duration is much longer than the channel calibration time [6].
In this paper, the beamforming features (SU-MIMO, and MU-MIMO with 20 and 80 ms sounding periods) are considered, and the throughput performance of the WLAN is investigated under these features for topologies with different numbers of nodes.
2 Related Work
A new module for massive MIMO was designed in [2] using the OMNeT++ network simulator, for performance evaluation and for verifying the operation of an IEEE 802.11ac WLAN against theoretical expectations. The authors in [7] proposed a high-performance detection algorithm for DL-MU-MIMO with practical error minimization in IEEE 802.11ac LANs. The results showed performance improvement for all MCS and STA-number cases. The throughput of the IEEE
802.11ac WLAN standard’s MU-BF and SU-BF modes were analyzed under time-
varying channels by the authors in [8]. They studied system behaviors and throughput
results for different beamforming transmissions with numerical results based on
mobile STA speed, operating SNR, and payload size. Kosek-Szott [9] suggested a DEMS queuing mechanism to support DL-MU-MIMO transmission; the result is higher throughput and lower queuing delay for high-priority ACs. Chung [10] estimated MAC network system performance prior to implementation through a proposed unified MAC design process. The proposed methodology was used to implement the IEEE 802.11ac DL-MU-MIMO MAC system on an ARM-based test platform using CMOS technology.
A comparison of the network simulation and the actual system verified its validity. The authors in [11] mathematically compare and measure the throughput of MU-MIMO, SU-MIMO, and SISO. The results indicated that when the AP supports the same number of spatial streams as the receiver, SU-MIMO is more efficient than the others. In [12], the comparison between SU-MIMO and MU-MIMO showed that when evaluating the performance of MU-MIMO, it is important
to consider crosstalk interference (CTI). The results show that MU-MIMO can have a lower throughput gain and much less stability than SU-MIMO. The authors in [13] studied the problem of AP selection in MU-MIMO WLANs and proposed a new MU-MIMO-Aware AP Selection (MAPS) algorithm. The results illustrate that MAPS outperforms legacy designs and provides a low-overhead design with best-throughput assignment for clients.
The IEEE 802.11ac standard specifies a new physical layer format known as Very High Throughput (VHT) PHY [14]; the VHT PHY specification includes new high-speed transmission modes based on Orthogonal Frequency Division Multiplexing (OFDM). These modes employ techniques such as up to 8 MIMO spatial streams, down-link MU-MIMO for up to four clients with Transmission Opportunity (TXOP) sharing, high-density QAM modulation (up to 256-QAM), and the ability to use 20, 40, 80, and 160 MHz channels [15, 16]. Currently, as shown in Table 1, Wave 1 and Wave 2 devices are available in IEEE 802.11ac. Wave 1 devices support 80 MHz channels, 256-QAM modulation, and up to 3×3 MIMO (the Wave 1 generation does not include MU-MIMO); 802.11ac Wave 1 devices have a theoretical performance of up to 1.3 Gb/s (about 433.3 Mb/s per MIMO stream). Wave 2 devices can utilize up to 4 MIMO spatial streams, 160 MHz radio channels, and down-link MU-MIMO with up to 4 single-stream clients; the theoretical maximum data rate of 802.11ac Wave 2 devices is 3.47 Gb/s [17]. For each 802.11ac transmission mode, there is a specified Modulation and Coding Scheme (MCS) index and a number of MIMO spatial streams. The Forward Error Correction coding rate and the modulation type are both determined by the MCS index. Every MCS can be used with radio channels of 20, 40, 80, and 160 MHz, as well as Guard Intervals (GI) of 400 ns or 800 ns [16].
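For reference, the sketch below encodes the commonly tabulated per-stream VHT MCS mapping (modulation, bits per symbol, FEC coding rate); the table is reproduced from general knowledge of the standard rather than from this paper and should be checked against the specification.

```python
# Per-stream VHT MCS mapping: modulation, bits per symbol, FEC coding rate.
# Reproduced from general knowledge of 802.11ac; verify against the standard.
VHT_MCS = {
    0: ("BPSK",    1, 1 / 2),
    1: ("QPSK",    2, 1 / 2),
    2: ("QPSK",    2, 3 / 4),
    3: ("16-QAM",  4, 1 / 2),
    4: ("16-QAM",  4, 3 / 4),
    5: ("64-QAM",  6, 2 / 3),
    6: ("64-QAM",  6, 3 / 4),
    7: ("64-QAM",  6, 5 / 6),
    8: ("256-QAM", 8, 3 / 4),
    9: ("256-QAM", 8, 5 / 6),
}

def mcs_info(index):
    """Return (modulation, bits_per_symbol, coding_rate) for a VHT MCS index."""
    return VHT_MCS[index]

print(mcs_info(9))   # ('256-QAM', 8, 0.833...) -- the mode behind the peak rates
```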
Fig. 1 a SU-MIMO one 1-stream client, b SU-MIMO four 1-stream clients, c SU-MIMO four 1-stream clients, d MU-MIMO four 1-stream clients
Four types of frames are used in MU channel calibration (sounding): (1) Null Data Packet Announcement (NDPA), (2) Null Data Packet (NDP), (3) Compressed Beamforming, and (4) Beamforming Report Poll. Figure 3 illustrates these frames [1, 18]. The NDPA frame identifies the first beamformee to reply, as well as the beamformees that should prepare a beamforming report; Fig. 3b illustrates the format of this frame. The NDP is the second frame in the channel calibration (sounding). It contains only the PHY header (it does not contain the "data" field shown in Fig. 3a) and is otherwise an empty frame. The NDP enables beamformees to construct their beamforming reports.
The Beamforming Report Poll frame is shown in Fig. 3c; with it, the AP requests the beamforming reports of the various beamformees. A beamformee sends its report to the beamformer (the AP) using the Compressed Beamforming frame; Fig. 3d shows this frame [19].
During channel sounding for MU-MIMO, the AP (beamformer) transmits an NDPA frame. This frame's purpose is to reserve the channel for the required duration and to announce the sounding process. Then the beamformer transmits a Null Data Packet (NDP) frame [21]. By analyzing the training fields in the received NDP, the beamformees create a feedback matrix. In order to steer transmissions toward the beamformee, the beamformer calculates the steering matrix after receiving the feedback matrix. Figure 4 illustrates steering matrix deployment [1]. The initial beamformee responds with a Compressed Beamforming frame; next, the AP sends the Beamforming Report Poll frame to the remaining receivers
to collect the other Compressed Beamforming frames. All the frames that are
transmitted within the calibration operation are separated with SIFS as shown in
Fig. 5.
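A rough Python sketch of the airtime cost of one such sounding exchange is given below; the frame durations are assumed placeholders (only the 16 µs SIFS is a standard 5 GHz value), and the frame sequence follows the description above.

```python
# Rough model of one MU-MIMO sounding exchange: NDPA, NDP, first Compressed
# Beamforming report, then a Report Poll / report pair for each remaining
# beamformee, with every frame separated by a SIFS. Frame airtimes are assumed
# placeholders, not values taken from the standard.

def sounding_duration_us(n_beamformees,
                         t_ndpa=40.0,      # NDPA airtime (µs, assumed)
                         t_ndp=44.0,       # NDP airtime (µs, assumed)
                         t_report=250.0,   # compressed beamforming report (µs, assumed)
                         t_poll=40.0,      # beamforming report poll (µs, assumed)
                         sifs=16.0):       # SIFS in the 5 GHz band (µs)
    total = t_ndpa + sifs + t_ndp + sifs + t_report            # first beamformee
    total += (n_beamformees - 1) * (sifs + t_poll + sifs + t_report)
    return total

for n in (1, 2, 4):
    print(f"{n} beamformee(s): ~{sounding_duration_us(n):.0f} µs per sounding")
```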
MU-MIMO transmissions are limited to four receivers, whereas the WLAN may have many more beamformees [22]. Therefore, in congested networks, the channel sounding operation may result in more overhead. Channel sounding can be performed either per transmission or periodically. In the per-transmission case, the channel sounding is directly followed by a single MU-MIMO transmission, and each transmission requires its own channel sounding [23]. This gives highly accurate beamforming but suffers from more overhead. In the second scenario, multiple MU-MIMO transmissions follow one sounding procedure. In this paper, periodic channel sounding is considered for two periods (20 and 80 ms). Before transmit beamforming (TBF), the beam carries approximately the same energy in all directions [24]; after TBF, the energy is shaped so that much more energy goes toward the receiver (by constructive addition) and the least energy in all other directions. In the mid-range, the TBF gain (about 3 dB) in the direction of the receiver is optimal [25]. The increase in sustainable link range from an AP with decreasing MCS order is shown in Fig. 6.
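The following back-of-the-envelope Python sketch contrasts the two sounding periods used here (20 and 80 ms); the per-sounding airtime cost is an assumed placeholder, and the point is only that the overhead fraction scales inversely with the period.

```python
# Back-of-the-envelope comparison of periodic sounding overhead for the two
# periods used in this paper. The per-sounding cost is an assumed placeholder.

def sounding_overhead(period_ms, sounding_cost_us=900.0):
    """Fraction of airtime consumed by periodic channel sounding."""
    return sounding_cost_us / (period_ms * 1000.0)

for period_ms in (20, 80):
    frac = sounding_overhead(period_ms)
    print(f"period {period_ms} ms: ~{100 * frac:.1f}% of airtime spent on sounding")
# A shorter period tracks the channel more accurately but repeats the calibration
# more often, consistent with the throughput trend reported below.
```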
The simulation results are confirmed by the theoretical performance. According to the analysis in [17], the MAC layer delay and the maximum data rate can be calculated; the maximum data rate is given by
Throughput (bits/s) = (N_DS × N_SS × N_BitsPerSymbol × CR) / OFDM_SD
where
N_DS = number of data subcarriers (equal to 52, 108, 234, and 468 for 20, 40, 80, and 160 MHz channels, respectively),
N_SS = number of spatial streams (variable from 1 to 8),
N_BitsPerSymbol = number of bits per symbol (8 for 256-QAM),
CR = coding rate (here 3/4),
OFDM_SD = OFDM symbol duration, given by 1/ΔF where ΔF is the subcarrier spacing; its value is 3.6 µs including a GI of 400 ns.
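A small Python sketch of this formula, using the parameter values listed above, is given below; the 160 MHz example additionally assumes the 5/6 coding rate of MCS 9, which reproduces the 6.9 Gb/s peak rate quoted in the introduction.

```python
# Sketch of the data-rate formula above. Subcarrier counts, 8 bits/symbol for
# 256-QAM, CR = 3/4 and the 3.6 µs symbol (including 400 ns GI) follow the text;
# the 5/6 coding rate in the second example is the MCS 9 value.

DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}

def phy_rate_bps(bw_mhz, n_ss, bits_per_symbol=8, coding_rate=3 / 4,
                 symbol_duration_s=3.6e-6):
    n_ds = DATA_SUBCARRIERS[bw_mhz]
    return n_ds * n_ss * bits_per_symbol * coding_rate / symbol_duration_s

if __name__ == "__main__":
    print(f"80 MHz, 4 SS, 256-QAM 3/4 : {phy_rate_bps(80, 4) / 1e6:.0f} Mb/s")
    print(f"160 MHz, 8 SS, 256-QAM 5/6: "
          f"{phy_rate_bps(160, 8, coding_rate=5 / 6) / 1e9:.2f} Gb/s")  # ~6.93 Gb/s
```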
Four scenarios with different numbers of nodes (5, 15, 30, and 45) are used to evaluate the performance of a WLAN based on the IEEE 802.11ac standard. The proposed wireless LAN scenarios are modeled and simulated using the discrete-event simulator OMNeT++ version 5.5.1. The simulations are performed according to the parameters listed in Table 2.
4 Throughput
Figure 7a–d shows the system throughput per slot; it can be seen that the 4×4 SU-MIMO system outperforms both 4×4 MU-MIMO (20 and 80 ms) systems. The reason is that all systems support the same number of antennas (at transmitter and receiver), but SU-MIMO does not require channel calibration and is simulated under the assumption that its antennas are far enough apart that there is no interference among their spatial streams, whereas the throughput of 4×4 MU-MIMO (20 and 80 ms) is significantly impacted by the channel calibration. Therefore, 4×4 SU-MIMO is more efficient than MU-MIMO, and the throughput gap between 4×4 SU-MIMO and MU-MIMO widens because of the effect of the channel calibration.
Fig. 7 SU-MIMO and MU-MIMO throughput per slot performance for a 5 nodes, b 15 nodes, c 30 nodes, and d 45 nodes
When the period between two successive channel sounding processes is 20 ms, MU-MIMO suffers more frequent, significant calibration overhead. It is observed that the throughput of 4×4 SU-MIMO exceeds that of 4×4 MU-MIMO-20 ms and 4×4 MU-MIMO-80 ms by 32 and 18% (for the 5-node scenario) up to 83 and 52% (for the 45-node scenario), respectively.
Moreover, performing a channel calibration every 80 ms (MU-MIMO-80 ms) still incurs a significant overhead, but lower than that of MU-MIMO-20 ms. The collision probability becomes more significant as the number of nodes increases, as does the sounding time overhead. As a result, the throughput of 4×4 MU-MIMO-80 ms exceeds that of 4×4 MU-MIMO-20 ms by 17% (for the 5-node scenario) up to 64% (for the 45-node scenario). In comparison with 4×4 MU-MIMO-20 ms, 4×4 MU-MIMO-80 ms is also able to support a larger number of beamformees with better scalability, because of the reduced overhead of the calibration operation. Moreover, 2×2 SU-MIMO and 4×4 SU-MIMO are more scalable than the MU-MIMO systems (Table 3).
5 Conclusion
Four scenarios are proposed in this paper (2×2 SU-MIMO, 4×4 SU-MIMO, 4×4 MU-MIMO-20 ms, and 4×4 MU-MIMO-80 ms) to model and simulate wireless LANs based on the IEEE 802.11ac standard. The simulation results show that 4×4 SU-MIMO outperforms MU-MIMO for both (20 and 80 ms) channel calibration repetition periods, and the highest throughput is obtained in the 4×4 SU-MIMO scenario. The performance of MU-MIMO depends strongly on the channel calibration overhead. When the channel sounding (calibration) repetition period is reduced (from 80 to 20 ms), the throughput decreases, due to the more frequently repeated sounding overhead. In comparison with 4×4 MU-MIMO-20 ms, 4×4 MU-MIMO-80 ms can support a larger number of beamformees, due to the reduced overhead of the calibration operation. It is also concluded that for the 45-node scenario, the calibration (sounding) repetition period, together with the collision probability and the interference among streams, has a significant effect on the throughput performance and limits the scalability of MU-MIMO WLANs.
References
6. Chaudhary, S. R., Patil, A. J., Yadao, A. V. (2016). WLAN-IEEE 802.11ac: Simulation and
performance evaluation with MIMO-OFDM. In 2016 Conference on Advances in Signal
Processing (CASP) (pp. 440–445). https://doi.org/10.1109/CASP.2016.7746211
7. Park, J., Kim, M., Kim, H., Kim, J. (2013). A high performance MIMO detection algorithm
for DL MU-MIMO with practical errors in IEEE 802.11ac systems. In IEEE 24th Annual
International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC)
(pp. 78–82). https://doi.org/10.1109/PIMRC.2013.6666108.
8. Yu, H., & Kim, T. (2014). Beamforming transmission in IEEE 802.11ac under time-varying
channels. The Scientific World Journal, 2014. https://doi.org/10.1155/2014/920937
9. Kosek-Szott, K. (2018). Improving DL-MU-MIMO performance in IEEE 802.11ac networks
through decoupled scheduling. Wireless Networks, 24(8), 3113–3127. https://doi.org/10.1007/
s11276-017-1520-3
10. Chung, C., Jung, Y., & Kim, J. (2016). Implementation of IEEE 802.11ac down-link MU-
MIMO WLAN MAC using unified design methodology. Journal of Semiconductor Technology
and Science, 16(6), 719–727. https://doi.org/10.5573/JSTS.2016.16.6.719
11. Liao, R., Bellalta, B., Barcelo, J., Valls, V., & Oliver, M. (2013). Performance analysis of IEEE
802.11ac wireless backhaul networks in saturated conditions. EURASIP Journal on Wireless
Communications and Networking, 2013(1). https://doi.org/10.1186/1687-1499-2013-226
12. Redieteab, G., Cariou, L., Christin, P., & Courtel, C. (2012). SU/MU-MIMO in IEEE 802.11ac
: PHY + MAC performance comparison for single antenna stations. Simulation.
13. Zeng, Y., Pefkianakis, I., Kim, K. H., & Mohapatra, P. (2017). MU-MIMO-Aware AP selection
for 802.11ac networks. In Proceedings of the 18th ACM International Symposium on Mobile
Ad Hoc Networking and Computing. https://doi.org/10.1145/3084041.3084057
14. Siddiqui, F., Zeadally, S., Salah, K. (2015). Gigabit wireless networking with IEEE
802.11ac: Technical challenges. Journal of Networks, 10(3). https://doi.org/10.4304/jnw.10.3.164-171
15. Bejarano, O., & Knightly, E. W. (2013). IEEE 802.11ac: From channelization to multi-user MIMO. IEEE Communications Magazine, pp. 84–90.
16. Yao, M., & Tanguay, A. MU-MIMO tech brief (contributor: Anisha Teckchandani), pp. 1–32.
17. Daldoul, Y., Meddour, D. E., & Ksentini, A. (2019). An analytical comparison of MU-MIMO
and single user transmissions in IEEE 802.11ac. In IEEE 30th Annual International Symposium
on Personal, Indoor and Mobile Radio Communications (PIMRC) (pp. 1–6). https://doi.org/
10.1109/PIMRC.2019.8904189
18. Perahia, E., & Gong, M. (2011). Gigabit wireless LANs: An overview of IEEE 802.11ac and
802.11ad. ACM SIGMOBILE Mobile Computing and Communications Review., 15(3), 23–33.
https://doi.org/10.1145/2073290.2073294
19. Politis, A. C., & Hilas, C. S. (2018). DL MU-MIMO with TXOP sharing and suppressed
acknowledgments in IEEE 802.11ac WLANs. In 2018 41st International Conference on
Telecommunications and Signal Processing (TSP) (pp. 1–5). https://doi.org/10.1109/TSP.2018.
8441246
20. Redieteab, G., Cariou, L., Christin, P., & Hélard, J. F. (2012). PHY+MAC channel sounding interval analysis for IEEE 802.11ac MU-MIMO. In Proceedings of the International Symposium on Wireless Communication Systems (pp. 1054–1058). https://doi.org/10.1109/ISWCS.2012.6328529
21. Arun, S. R., Somani, K., Srivastava, S., Mundra, A. (2017). Proceedings of first international
conference on smart system.
22. Karmakar, R., Chattopadhyay, S., & Chakraborty, S. (2017). Impact of IEEE 802.11n/ac PHY/
MAC high throughput enhancements on transport and application protocols-a survey. IEEE
Communications Surveys Tutorials, 19(4), 2050–2091. https://doi.org/10.1109/COMST.2017.
2745052
23. Ravindranath, N. S., Singh, I., & Prasad, A. (2017). Study of performance of transmit beamforming. In ICICCT (pp. 419–429).
Author Index

A
Abdulhussain, Zahraa N., 503
Abdullah, 281
Abid, Salah H., 183, 235
Acharya, Ashish, 371, 381, 419
Adhikari, Saurabh, 21, 37, 51, 65, 79, 281
Adrian, Ngui, 461
Akila, D., 21, 79
Alex, Suja A., 125
Ali, Omer Mohammed, 541
Alkhafaji, Mohammed Ayad, 51, 65, 391
Altawil, Jumana A., 183, 235
Alwan, Adil Abbas, 271
Ashfaq, Farzeen, 165, 503, 513
Ashraf, Humaira, 165
Asirvatham, David, 503
Ayoub, Razouk, 199
Azman, Amal Danish, 261

B
Balaganesh, D., 21, 37
Balakrishnan, Sumathi, 261, 427, 461
Banerjee, Saurabh, 485
Basu, Nilanjana G., 317
Bhowmick, Partha, 317
Bhuvana, R., 21
Biswas, Manajat Ali, 371, 419
Brayyich, Mohammed, 79

C
Chaini, Najihah, 407
Cheng, Phung Shun, 261
Choo, Jer Lyn, 461
Chourashia, Khusbhu, 1
Chu, Thi Minh Chau, 451
Cuong, Ton Quang, 303, 451

D
Dang, Tuan Minh, 107
Das, Arijit, 211, 291
Dash, Santanu Kumar, 95
Das, Shampa Rani, 503, 513
Das, Shamp Rani, 261
Devika, S., 79
Dey, Niti, 281
Díaz, Vicente García, 65

E
Ejodame, Osezua Ehizogie, 261
Elangovan, V. R., 21

F
Farej, Ziyad Khalaf, 331, 541
Fiza, Inbasat, 165
Fountis, Anastasios, 271, 391, 525
Ftaiet, Adnan Allwi, 281

G
Ghorai, Santu, 211, 291
Ghosh, Anup Kumar, 439
Guan, Low Jun, 427

H
Hachimi, Hanaa, 51

V
Vishwakarma, Virendra Prasad, 135, 143

Y
Yen, Pham Thi Hai, 303
Yi, Yeo Jia, 261
Youness, Saoudi, 199