AI-Driven IoT Systems For Industry 4.0 - Deepa Jose
Industry 4.0
The purpose of this book is to discuss the trends and key drivers of the Internet of
Things (IoT) and artificial intelligence (AI) for automation in Industry 4.0. IoT and
AI are transforming industry, accelerating efficiency and forging a more reliable
automated enterprise. AI-Driven IoT Systems for Industry 4.0 explores current
research in the cutting-edge areas of AI for advanced analytics, integration of
industrial IoT (IIoT) solutions and edge components, automation in cyber-physical
systems, world-leading Industry 4.0 frameworks, adaptive supply chains, and more.
A thorough exploration of Industry 4.0 is provided, focusing on the challenges
of digital transformation and automation. The book covers digital connectivity,
sensors, and the integration of intelligent thinking and data science. Emphasizing
the significance of AI, it delves into optimal decision-making in Industry 4.0 and
extensively examines automation and hybrid edge computing architecture, highlighting
their applications. The narrative then shifts to IIoT and edge AI, exploring their
convergence and the use of edge AI for visual insights in smart factories. The book
concludes by discussing the role of AI in constructing digital twins, speeding up
product development lifecycles, and offering insights for decision-making in smart
factories. Throughout, the emphasis remains on the transformative impact of deep
learning and AI in automating and accelerating manufacturing processes within the
context of Industry 4.0.
Edited by
Deepa Jose, Preethi Nanjundan, Sanchita Paul,
and Sachi Nandan Mohanty
Cover image: © Shutterstock
© 2025 selection and editorial matter, Deepa Jose, Preethi Nanjundan, Sanchita Paul, and Sachi Nandan
Mohanty, individual chapters, the contributors
Reasonable efforts have been made to publish reliable data and information, but the author and publisher cannot assume responsibility for the validity of all materials or the consequences of their use.
The authors and publishers have attempted to trace the copyright holders of all material reproduced in
this publication and apologize to copyright holders if permission to publish in this form has not been
obtained. If any copyright material has not been acknowledged please write and let us know so we may
rectify in any future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, access www.copyright.com
or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-
750-8400. For works that are not available on CCC please contact [email protected]
Trademark notice: Product or corporate names may be trademarks or registered trademarks and are used
only for identification and explanation without intent to infringe.
DOI: 10.1201/9781003432319
Chapter 18 Deep Learning for Real-Time Data Analysis from Sensors............. 315
Sagar C V, Harshit Bhardwaj, and Anupama Bhan
Dr. Sanchita Paul is presently working as an Associate Professor at BIT Mesra, Ranchi,
Jharkhand. She received her Ph.D. degree from BIT Mesra, Ranchi, Jharkhand, in
January 2012, her M.Tech degree from BIT Mesra in 2006, and her BE degree from
Burdwan University, West Bengal, in 2004. Her research areas include artificial
intelligence, cloud computing, Internet of Things, machine learning, and deep
learning. She has guided five Ph.D. scholars and has published 60 papers in
international journals of repute. She also holds six patents in the areas of health
informatics, IoT, and cloud computing. She has acted as session chair and editorial
member for many international journals and conferences. She has published one book
on cloud computing with Scholar’s Press, Germany, and five book chapters. She has
completed two projects, and one ISRO-funded project is ongoing. She is a life member
of CSI and the principal investigator in setting up the cloud computing lab at BIT
Mesra, Ranchi.
Dr. Sachi Nandan Mohanty received his postdoc from IIT Kanpur in 2019 and his
Ph.D. from IIT Kharagpur, India, in 2015, with an MHRD scholarship from the
Government of India. He has authored/edited 28 books, published by IEEE-Wiley,
Springer, Wiley, CRC Press, NOVA, and DeGruyter. His research areas include data
mining, big data analysis, cognitive science, fuzzy decision making, brain-computer
interfaces, cognition, and computational intelligence. Prof. S. N. Mohanty received
four Best Paper Awards during his Ph.D. at IIT Kharagpur, one at an international
conference in Beijing, China, and another at the International Conference on Soft
Computing Applications organized by IIT Roorkee in 2013. He was awarded first prize
for best thesis by the Computer Society of India in 2015. He has guided nine Ph.D.
scholars and published 120 papers in international journals of repute. He has been
elected a Fellow of the Institute of Engineers and the European Alliance for
Innovation (EAI), and is a Senior Member of the IEEE Computer Society, Hyderabad
chapter. He is also a reviewer for the Journal of Robotics and Autonomous
Systems (Elsevier), Computational and Structural Biotechnology Journal (Elsevier),
Artificial Intelligence Review (Springer), and Spatial Information Research (Springer).
List of Contributors
Biswa Mohan Acharya
Siksha ‘O’ Anusandhan University
Bhubaneswar, Odisha, India

Sanika Desai
Vishwakarma Institute of Technology
Pune, Maharashtra, India
1 A Novel Hybrid
Approach Based
on Attribute-Based
Encryption for Secured
Message Transmittal
for Sustainably
Smart Networks
Sucheta Panda, Sushree Bibhuprada B.
Priyadarshini, Biswa Mohan Acharya,
Tripti Swarnkar, and Sachi Nandan Mohanty
1.1 INTRODUCTION
Nowadays, data security plays a crucial role in every domain of life. In this
context, cryptography is a strategy for securing information and its communication
with the help of certain codes, so that only the destined user can read it and
carry out further processing at their own end. The term cryptography means hidden
writing. Cryptography provides a strong base for keeping data confidential while
verifying data integrity. Asymmetric key cryptography and symmetric key cryptography
are the two basic types of algorithms used in the cryptographic process. In this
research, an asymmetric algorithm has been used, which is more secure and authentic
than a symmetric algorithm [1–7]. In this context, the Rivest, Shamir, Adleman (RSA)
algorithm has been employed, as it comes under the umbrella of asymmetric algorithms
and satisfies the integrity, confidentiality, and authenticity of data, along with
non-repudiation, in electronic communication. In RSA cryptography, both the public
and the private keys can encrypt a message; the counterpart of the encrypting key
is used to decrypt it [7–12]. Moreover, the ABE tool is used in cryptography where
the shared file is encrypted only once under a given policy, and it can then be
decrypted by any recipient who meets the requirement.
DOI: 10.1201/9781003432319-1
ABE is used in cloud data security solutions as a novel approach to creating a
secure cryptosystem. This method was discussed in [1] as a novel strategy
for controlling encrypted access. Key-policy attribute-based encryption (KPAE)
and cipher text-policy attribute-based encryption (CPAE) are the common forms of
ABE. In KPAE, users’ secret keys are generated according to an access tree, while
data is encrypted over a set of attributes. To provide flexibility in building
a cryptosystem, four strategies are used in this encryption technique: (i) the setup
phase, in which the public (PK) and master (MK) keys are generated; (ii) the key
generation phase, in which the session key (SK, also known as a private key) is
generated; (iii) the encryption phase, in which the access structure (policy), message,
and public key (PK) are used to construct the cipher text (CT); and (iv) the decryption
phase, in which the cipher text, private key, and public key are used to decrypt the
encrypted text (i.e., CT). To complete the policy update, Sahai and Waters [13] used
the cipher text authorisation approach.
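The dataflow between the four phases above can be sketched as follows; the code is a deliberately insecure stand-in for the pairing-based mathematics (all function names are illustrative assumptions, and the "policy" is reduced to a plain attribute set rather than an access tree):

```python
import hashlib
import secrets

def setup():
    """Setup phase: derive a public key (PK) and master key (MK)."""
    MK = secrets.token_hex(16)
    PK = hashlib.sha256(MK.encode()).hexdigest()
    return PK, MK

def keygen(MK, attributes):
    """Key-generation phase: bind a user's attributes into a secret key (SK)."""
    tag = hashlib.sha256((MK + ",".join(sorted(attributes))).encode()).hexdigest()
    return {"attrs": frozenset(attributes), "tag": tag}

def encrypt(PK, policy, message):
    """Encryption phase: produce cipher text (CT) from policy, message, and PK."""
    pad = hashlib.sha256((PK + ",".join(sorted(policy))).encode()).digest()
    body = bytes(m ^ p for m, p in zip(message, pad))
    return {"policy": frozenset(policy), "body": body}

def decrypt(PK, SK, CT):
    """Decryption phase: recover the message only if SK satisfies the policy."""
    if not CT["policy"] <= SK["attrs"]:
        raise PermissionError("attributes do not satisfy the access policy")
    pad = hashlib.sha256((PK + ",".join(sorted(CT["policy"]))).encode()).digest()
    return bytes(c ^ p for c, p in zip(CT["body"], pad))

PK, MK = setup()
sk = keygen(MK, {"Mumbai", "Clerk"})
ct = encrypt(PK, {"Mumbai"}, b"report")
print(decrypt(PK, sk, ct))  # b'report'
```

The point of the sketch is only the interface: which keys each phase consumes and produces, and that decryption is gated on the attribute/policy check.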
However, because the key update and cipher text update are constrained by the
previous access policy, such approaches cannot meet the integrity and security
requirements. The CPAE encryption method comprises four basic algorithms: Setup,
KeyGen, Encrypt, and Decrypt. The most important benefit of RSA [3] is that the
private key remains secret, since it is never communicated or revealed to another
user. By combining the ABE method with the RSA algorithm, we propose a model
known as the Hybrid Asymmetric Approach based on Attribute-Based Encryption
(HAA-ABE), which achieves a higher level of data security in the public cloud.
The first scheme with constant-size cipher texts was suggested by J. Herrnaz [4],
in which the sender fixes a threshold “T” and any user holding at least T of a
designated set of attributes can decrypt; our proposed approach follows this
decryption model. A weighted threshold decryption policy can be added as an
extension to our proposed strategy.
massive numbers that are mathematically related but not identical (in the case of
asymmetric key cryptography). A public key is shared with anyone, while the second
key, the private key, is kept hidden. A message can be encrypted with either of the
keys; decryption uses the counterpart of the key used for encryption. PKC is widely
used to secure electronic communication over open networked environments like the
internet, even for key exchange, without requiring a hidden or secret channel.
According to Zhou et al. [8], the CPAE technique encrypts data using a portion of
the access tree and afterwards finishes the encryption using the other part of the
access policy tree. According to the authors, an access policy’s right subtree is
ordinarily smaller than its left subtree.
The owner of the data can then connect their CT to the proxy server’s cipher text
after encrypting their data with the relevant component. This method presupposes
that the access policy’s root node is an “AND” gate. Alternative solutions to this
constraint were later presented, such as encrypting data at first using only one
property (called a dummy attribute). Borgh et al. [9] developed a strategy that
can be used in two cases. In the first case, when the devices do not have enough
resources, the method symmetrically encrypts the data and then encrypts the
symmetric key under the user access policy. Figure 1.1 describes the PKC concept,
where the plain text is converted into cipher text at the sender’s site, and the
encrypted text is transformed back into plain text at the receiver’s site using
the receiver’s private key.
1.2.2 RSA Algorithm
The most widely used asymmetric-key algorithm is RSA [10], first announced in
1977, with two common applications: key distribution and digital signatures. The
technique relies on modular exponentiation. In [7], Shireen Nisha evaluates RSA,
examines its merits and loopholes, and proposes fresh strategies to address its
weaknesses. RSA is a cryptographic method that assures secure network
communication, as illustrated in Figure 1.2; its detailed working is depicted in
Figure 1.3.
Some of the advantages of RSA are outlined as follows:
i. Only the recipient’s public key is necessary for RSA encryption; no secret
keys have to be revealed in order to receive messages from other people.
ii. Encryption is faster than with the Digital Signature Algorithm (DSA).
iii. Data in transit is tamper-evident: if the data is tampered with, it can no
longer be decrypted correctly with the private key, so the recipient is made
aware of the change.
iv. The RSA algorithm uses complicated mathematics to keep its users safe and
secure.
v. Its security rests on the difficulty of factorising a large composite number
into its prime factors.
vi. This algorithm is tough to crack.
vii. The RSA algorithm encrypts data using a public key that is known by everyone,
so exchanging the public key is simple.
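The textbook RSA flow can be illustrated with deliberately tiny primes; this is a teaching sketch only, since real deployments use moduli of 2048 bits or more together with a padding scheme such as OAEP:

```python
# Textbook RSA with toy primes -- for illustration only, not secure.
p, q = 61, 53
n = p * q                 # modulus, n = 3233
phi = (p - 1) * (q - 1)   # Euler's totient, phi = 3120
e = 17                    # public exponent, coprime with phi
d = pow(e, -1, phi)       # private exponent: d*e = 1 (mod phi), giving d = 2753

m = 65                    # plaintext, encoded as an integer < n
c = pow(m, e, n)          # encryption: c = m^e mod n
assert pow(c, d, n) == m  # decryption: m = c^d mod n
print((n, e), d, c)
```

Note how only `(n, e)` is published, matching advantage (i) above; `d` (and the factors `p`, `q`) stay with the key owner.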
offers stronger security and more effective performance than other first-generation
public key cryptosystems, since its mathematical foundation is more complicated.
One of the main benefits of elliptic curves is the difficulty of factoring or of
calculating discrete logarithms over the underlying group compared with other
methods. ECC [15] is a newer public key encryption technology that is more
effective, quicker, and more compact than earlier systems. ECC exploits the
properties of the elliptic curve equation to obtain keys, instead of the
conventional method of multiplying very large prime integers. The following
equation describes an elliptic curve over a finite field:
E : y² = x³ + ax + b mod p (1.1)
The ECC points are produced from Equation (1.1) by varying the values of x and y.
Ep(a, b) stands for the set of all elliptic curve points, which is defined as follows:

Ep(a, b) = { (x, y) : y² = x³ + ax + b mod p } (1.2)
The security of ECC is determined by the difficulty of the elliptic curve discrete
logarithm problem (ECDLP). On an elliptic curve, suppose P and Q represent two
points where Q = αP and α is a scalar. If α is large enough, obtaining α from P
and Q is computationally infeasible; α represents the discrete logarithm of Q to
base P. Thus, point multiplication is the primary
operation in ECC. In other words, multiplying a scalar α by any point P on the curve
will result in another point Q on the curve.
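Point multiplication over Equation (1.2) can be demonstrated on a small curve. The parameters below (a = 2, b = 2, p = 17, base point P = (5, 1)) are a standard toy example, not parameters from this chapter; the code implements the usual affine addition and double-and-add rules:

```python
# Toy elliptic curve y^2 = x^3 + 2x + 2 (mod 17); illustration only.
a, b, p = 2, 2, 17

def add(P, Q):
    """Affine point addition; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None  # P + (-P) = point at infinity
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p  # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p         # chord slope
    x3 = (lam * lam - x1 - x2) % p
    return (x3, (lam * (x1 - x3) - y1) % p)

def mul(alpha, P):
    """Scalar multiplication Q = alpha*P by double-and-add."""
    Q = None
    while alpha:
        if alpha & 1:
            Q = add(Q, P)
        P = add(P, P)
        alpha >>= 1
    return Q

P = (5, 1)          # on the curve: 1^2 = 5^3 + 2*5 + 2 = 137 = 1 (mod 17)
print(mul(5, P))    # (9, 16)
```

Recovering the scalar from P and αP alone is the ECDLP; on this 17-element field it is trivial by brute force, which is exactly why real curves use fields of 256 bits or more.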
is depicted in Figure 1.6, where each piece of data is labelled with attributes
(such as name, location, or service type) and each user holds a key embedding an
access tree that identifies what they may decrypt. Only when the data’s attributes
satisfy the access tree’s requirement can the data be decrypted. An access tree
might look like this: 〈Mumbai OR 〈Rajesh AND Clerk〉〉.
Each of the three data sets has three characteristics that correspond to the name,
location, and service type. As shown in Figure 1.7, the user can only decrypt data 3,
based on its location (i.e., Mumbai). On the other hand, as the access tree sits in
the user’s key, KPAE is unable to monitor data access [21]. Users bear accountability,
according to Yu et al. [22], and senders conceal some properties. On the basis of the
DBDH and D-Linear assumptions, an anti-key-abuse KPAE (AFKPAE) was presented.
The user’s private key contains a property associated with their unique identity,
and the tracking algorithm links the suspicious identifier’s relevant properties
to the cipher text [21–26].
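Whether a user's attribute set satisfies an access tree such as 〈Mumbai OR 〈Rajesh AND Clerk〉〉 can be checked by a simple recursive evaluation. The tuple-based tree encoding below is an illustrative assumption of this sketch, not part of any cited scheme:

```python
def satisfies(tree, attrs):
    """Recursively test whether an attribute set satisfies an access tree.
    Leaves are attribute strings; internal nodes are ('AND'|'OR', child, ...)."""
    if isinstance(tree, str):
        return tree in attrs
    op, *children = tree
    results = (satisfies(c, attrs) for c in children)
    return all(results) if op == "AND" else any(results)

# The example policy from the text: <Mumbai OR <Rajesh AND Clerk>>
policy = ("OR", "Mumbai", ("AND", "Rajesh", "Clerk"))

print(satisfies(policy, {"Mumbai"}))           # True: location alone suffices
print(satisfies(policy, {"Rajesh"}))           # False: AND branch incomplete
print(satisfies(policy, {"Rajesh", "Clerk"}))  # True
```

In a real ABE scheme this boolean check is not performed in the clear; it falls out of the algebra, since decryption only reconstructs the secret when the satisfied subtree can interpolate the root polynomial.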
The CPAE method conceptually resembles standard role-based access control more
closely. The original CPAE technique was introduced by Bethencourt et al. [16].
Although policies are limited to a single AND gate, Cheung and Newport [27]
developed a CPAE structure based on the Bilinear Diffie-Hellman assumption. Goyal
et al. later suggested a generic transformational technique for turning a KPAE
system into a CPAE system utilising a universal access tree. Their security
argument depends on the Decisional Bilinear Diffie-Hellman assumption, and their
construction provides for the representation of an access structure with threshold
gates acting as nodes in a restricted-size access tree. In [28], Waters proposed
the most efficient CPAE algorithm in terms of cipher text length and expressivity.
In contrast to KPAE, the cipher text carries the access tree, and the user key
contains information about the user such as name, position, and location [14,
28–30]. The encrypted information can be unlocked if the user’s attributes fulfil
the requirements of the cipher text’s access tree.
A lightweight CPAE method was presented by Touati et al. [30]. They aim to
lower CPAE’s computing cost in constrained systems by delegating some
cryptographic tasks to assisting devices with more resources. In the proposed
method, a symmetric key is used to encrypt the data first; after that, only this
symmetric key is encrypted using CPAE, as proposed in [9, 30]. Borgh et al.
[9] developed a strategy that might be applied to two different scenarios. Figure 1.9
shows an illustration of CPAE. Each user is endowed with a unique set of
characteristics. Because the features of User 3 match the criteria of the cipher
text access tree 〈Mumbai OR 〈Rajesh AND Clerk〉〉, only User 3 has the ability to
decode the data. The ability to limit data access is the major difference between
CPAE and KPAE: data in KPAE simply comprises user attributes, and anyone can access it.
On the other side, CPAE can regulate data access because the access tree is part
of the encryption. To increase security and privacy, Susilo et al. [31] offered an
ABE technique as a decentralised cipher text strategy that protects privacy. In
cloud computing, Feng et al. [32] suggested a privacy-aware system for sharing
health records. It was the first paper to use commitment and zero-knowledge-proof
techniques to protect the user’s attributes. To accomplish user accountability,
it employs multi-authority CPAE, which uses disguised access policies to
safeguard users’ privacy. The paper [32] encrypts PHR files with priority
level-based encryption (PLBE) technology in order to offer flexible data control
and fine-grained access control.
i. Complex access rules such as threshold, “AND,” “OR,” and “NOT” gates
can be handled by both KPAE and CPAE.
ii. For precise access management in cloud storage systems, Liu et al.
[33] developed hierarchical attribute-based encryption. By combining the
HIBE system with the CPAE system, performance is traded for expressivity,
and the suggested system is then subjected to proxy and lazy re-encryption.
As a result, they were able to ensure the secure transmission and
consolidation of medical data [33–37].
1.2.5.2 Variation
KPAE is better suited to scenarios where the access policy is embedded in the
user’s key, like social sites and Electronic Medical Systems (EMS) [13], whereas
CPAE is better suited to scenarios where the sender specifies the access policy in
the cipher text, such as pay-television systems and database access.
FIGURE 1.10 A visual illustration of the setup and key generation phase.
algorithm.
CT = ( T, C̃ = M·e(g, g)^{αs}, C = h^s, ∀y ∈ Y : C_y = g^{q_y(0)}, C′_y = H(att(y))^{q_y(0)} )

and, for a leaf node x with attribute i = att(x), decryption pairs the key components D_i = g^r · H(i)^{r_i} and D′_i = g^{r_i} with the cipher text components:

e( g^r · H(i)^{r_i}, g^{q_x(0)} ) / e( g^{r_i}, H(i)^{q_x(0)} ) = e(g, g)^{r·q_x(0)}
TABLE 1.1
Factors in the Proposed Model (columns: Phases, Input Parameters, Output Parameters)
the performance of the proposed approach is a little harder to characterise,
because decryption time can vary significantly depending on the access trees and
the set of attributes involved, as shown in Figure 1.19. We decrypted a set of
cipher texts that had been encrypted under a variety of randomly generated policy
trees. Both the key generation and encryption programmes take a predictable amount
of time. Decryption performance is determined by the encrypted text’s access tree
as well as the attributes given in the private key.
1.4.2 Result Analysis
Figure 1.17 demonstrates that as the count of attributes rises, the time to
generate a key also rises accordingly, but remains lower than in the conventional
model. Similarly, in the encryption graphs portrayed in Figure 1.18, encryption
time rises as the number of leaf nodes increases, but remains comparatively lower
than in the previously existing model. In the decryption graph, the running time
does not vary linearly: the specific access trees and collection of attributes
used can have a big impact on how long it takes to decrypt data. The time required
to decode a message also varies with the specific attributes that are available;
for each run of the decryption time graph, we uniformly selected at random a key
fulfilling the policy. To achieve this, random subsets of the attributes included
in tree leaves were examined, and attributes that did not satisfy the policy were
periodically eliminated. The running times shown in Figure 1.19 were obtained by a
series of decryption runs carried out in this manner. The efficiency shown in the
decryption graph depends on the particular cipher text access tree and the
attributes provided by the private key.
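Timing curves like those in Figures 1.17-1.19 can be scripted with a small harness. The workload below is a stand-in assumption (it times a recursive policy check over randomly generated trees, not a real ABE implementation), so only the measurement pattern carries over, not the numbers:

```python
import random
import time

def random_tree(depth, attrs):
    """Build a random AND/OR access tree with attribute-name leaves."""
    if depth == 0:
        return random.choice(attrs)
    op = random.choice(["AND", "OR"])
    return (op, random_tree(depth - 1, attrs), random_tree(depth - 1, attrs))

def satisfies(tree, held):
    """Recursive policy check used here as the timed stand-in workload."""
    if isinstance(tree, str):
        return tree in held
    op, *children = tree
    results = (satisfies(c, held) for c in children)
    return all(results) if op == "AND" else any(results)

attrs = [f"attr{i}" for i in range(32)]
for depth in (4, 8, 12):
    trees = [random_tree(depth, attrs) for _ in range(100)]
    held = set(random.sample(attrs, 16))      # the key's attribute set
    t0 = time.perf_counter()
    for t in trees:
        satisfies(t, held)
    elapsed = time.perf_counter() - t0
    print(f"depth {depth}: {elapsed * 1e6 / len(trees):.1f} us per check")
```

Averaging over many randomly drawn trees per depth, as above, is what smooths out the non-linear per-tree variation the text describes.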
REFERENCES
1. V. Goyal, O. Pandey, A. Sahai, and B. Waters, “Attribute-based encryption for fine-
grained access control of encrypted data”, In Proc. of CCS’06, Alexandria, Virginia,
USA, pp. 1–28, 2006.
2. Z. Wan, J. Liu, and R. H. Deng, “HASBE: A Hierarchical Attribute-Based Solution
for Flexible and Scalable Access Control in Cloud Computing”, IEEE Transaction on
Information Forensic and Security, Vol. 7, Iss. 2, pp. 743–754, 2012.
3. K. Yang and X. Jia, “Data Storage Auditing Service in Cloud Computing: Challenges,
Methods and Opportunities”, World Wide Web, Vol. 15, Iss. 4, pp. 409–428, 2012.
4. J. Herrnaz, F. Laguillaumie, and C. Rafols, “Constant size cipher texts in threshold
attribute-based encryption”, In PKC, LNCS 6056, Springer-Verlag, pp. 19–34, 2010.
5. V. Kamliya and A. Rajnikanth, “A Survey on Hierarchical Attribute Set-Based
Encryption (HASBE) Access Control Model for Cloud Computing”, International
Journal of Computer Applications (0975–8887), Vol. 112, Iss. 7, pp. 4–7, 2015.
6. M. Pirretti, P. Traynor, P. McDaniel, and B. Waters, “Secure Attribute-Based Systems”,
Journal of Computer Security, Vol. 18, Iss. 5, pp. 799–837, 2010.
7. S. Nisha and M. Farik, “RSA Public Key Cryptography Algorithm – A Review”,
International Journal of Scientific & Technology Research, Vol. 6, Iss. 7, pp. 187–191,
2017.
8. Z. Zhou and D. Huang, “Efficient and secure data storage operations for mobile
cloud computing”, In Network and Service Management (CNSM), 8th International
Conference and 2012 Workshop on Systems Virtualization Management (SVM), IEEE,
pp. 37–45, 2012.
9. J. Borgh, E. Ngai, B. Ohlman and A. M. Malik. “Employing Attribute-Based
Encryption in Systems with Resource Constrained Devices in an Information Centric
Networking Context”, Global Internet of Things Summit (GIoTS), pp. 1–6, 2016. doi:
10.1109/GIOTS.2017.8016277
10. R. Rivest, A. Shamir, and L. Adleman, “A Method for Obtaining Digital Signatures
and Public-Key Cryptosystems”, ACM Transaction on Communications, Vol. 21,
pp. 120–126, 1978.
11. N. Koblitz, “Elliptic Curve Cryptosystems”, Mathematics of Computation, Vol. 48,
Iss. 177, pp. 203–209, 1987.
12. K. Rabah, “Security of the Cryptographic Protocols Based on Discrete Logarithm
Problem”, Journal of Applied Sciences, Vol. 5, pp. 1692–1712, 2005.
13. A. Sahai and B. Waters, “Fuzzy identity based encryption”, In Advances in Cryptology –
Eurocrypt, Springer, Vol. 3494, pp. 457–473, 2005.
14. W. Teng, G. Yang, Y. Xiang, and D. Wang, “Attribute-Based Access Control with
Constant-Size Cipher Text in Cloud Computing”, IEEE Transactions on Cloud
Computing, Vol. 5, Iss. 4, pp. 617–627, 2017.
15. A. Miyaji, A. Nomura, K. Emura, K. Omote, and M. Soshi, “A cipher text-policy attri-
bute-based encryption scheme with constant cipher text length”, In ISPEC, LNCS 5451,
Springer-Verlag, Vol. 5451, pp. 13–23, 2009.
16. J. Bethencourt, A. Sahai, and B. Waters, “Cipher text-policy attribute-based encryp-
tion”, In IEEE Symposium on Security and Privacy, SP 2007, IEEE, pp. 321–334, 2007.
17. J. Katz, A. Sahai, and B. Waters, “Predicate encryption supporting disjunctions,
polynomial equations, and inner products”, In Smart, N.P. LNCS, Springer, Vol. 4965,
pp. 146–162, 2008.
18. T. Nishide, K. Yoneyama, and K. Ohta, “Attribute-based encryption with partially hidden
encryptor-specified access structures”, In Proceedings of the 6th International Conference
on Applied Cryptography and Network Security, New York, pp. 111–129, 2008.
19. C.-J. Wang and J.-F. Luo, “A key-policy Attribute-based encryption scheme with
constant size cipher text”, In Eighth International Conference on Computational
Intelligence and Security, 2012.
20. Y. Yan, M. B. M. Kamel, and P. Ligeti, “Attribute-based encryption in cloud com-
puting environment”, In 2020 International Conference on Computing, Electronics &
Communications Engineering (ICCECE), IEEE, pp. 63–68, 2020.
21. R. Ostrovsky, A. Sahai, and B. Waters, “Attribute-based encryption with non-mono-
tonic access structures”, In Proceedings of the 14th ACM Conference on Computer and
Communications Security, pp. 195–203, 2007.
22. S. Yu, K. Ren, J. Li, and W. Lou, “Defending against key abuse attacks in KPAE
enabled broadcast systems”, In Proc. of the Security and Privacy in Communication
Networks, pp. 311–329, 2009.
23. A. R. Nimje, V. T. Gaikwad, and H. N. Datir, “Attribute-Based Encryption Techniques
in Cloud Computing Security: An Overview”, International Journal of Computer
Trends and Technology, Vol. 4, Iss. 3, pp. 419–423, 2013.
24. J. Leea, S. Oha, and J. Janga, “A work in progress: Context based encryption scheme
for internet of things”, In The 10th International Conference on Future Networks and
Communications (FNC 2015) Procedia, pp. 271–275, 2015.
25. K. Emura, A. Miyaji, A. Nomura, K. Omote, and M. Soshi, “A cipher text-policy attri-
bute-based encryption scheme with constant cipher text length”, In ISPEC, Springer-
Verlag, pp. 13–23, 2009.
26. P. P. Tsang, S. W. Smith, and A. Kapadia, “Attribute-based publishing with hidden
credentials and hidden policies”, In Proc. Network & Distributed System Security
Symposium (NDSS), pp. 179–192, 2007.
27. L. Cheung and C. Newport, “Provably secure cipher text policy ABE”, In Proceedings
of the 14th ACM Conference on Computer and Communications Security, pp. 456–465,
2007.
28. V. Goyal, A. Jain, O. Pandey, and A. Sahai, “Bounded cipher text policy attribute
based encryption”, In Proceedings of the 35th International Colloquium on Automata,
Languages and Programming (ICALP ‘08), Springer, Vol. 5, Iss. 125, pp. 579–591,
2008.
29. B. Waters, “Cipher text-policy attribute-based encryption: An expressive, efficient,
and provably secure realization”, In Proceedings of the International Conference
on Practice and Theory in Public Key Cryptography, Springer, Vol. 65, pp. 53–70,
2011.
30. L. Touati, Y. Challal, and A. Bouabdallah, “C-CPAE: Cooperative cipher text policy
attribute-based encryption for the internet of things”, In International conference on
Advanced Networking Distributed Systems and Applications (INDS), IEEE, pp. 64–69,
2014.
31. W. Susilo, J. Han, and Y. Mu, “Improving Privacy & Security in Decentralized
Cipher Text-Policy Attribute-Based Encryption”, IEEE, Vol. 10, Iss. 3, pp. 665–678,
2015.
32. F. Feng, Y. Xhafal, and Zhang, “Privacy-Aware Attribute-Based PHR Sharing With
User Accountability in Cloud Computing”, Journal of Super Computing, Vol. 71, Iss. 5,
pp. 1607–1619, 2015.
33. Q. Liu, G. Wang, and J. Wu, “Hierarchical attribute-based encryption for fine-grained
access control in cloud storage services”, In Proc. of the 17th ACM Conference on
Computer and Communications Security, pp. 735–737, 2010.
34. D. Sangeetha and V. Vijayakumar, “Enhanced security of PHR system in cloud using
prioritized level based encryption”, In Proc. of International Conference on Security in
Computer Networks and Distributed Systems, pp. 57–69, 2014.
35. S. B. Priyadarshini, A. B. Bagjadab, and B. K. Mishra, “Digital Signature and Its Pivotal
Role in Affording Security Services”, In eBusiness Security, Auerbach Publications,
New York, pp. 422–442, 2018, ISBN 9780429468254.
36. A. Sahani, J. Arya, A. Patro, and S. B. B. Priyadarshini, “Blockchain: Applications and
Challenges in the Industry”, Intelligent and Cloud Computing, Proceedings of ICICC,
Vol. 1, pp. 813–818, 2019.
37. M. P. Nath, S. B. B. Priyadarshini, and D. Mishra, “A Comprehensive Study on Security
in IoT and Resolving Security Threats Using Machine Learning (ML)”, Advances in
Intelligent Computing and Communication, Vol. 5, pp. 545–553, 2021.
2 Object Detection Using
Deep Learning (DL) and
OpenCV Approach
Ajit Kumar Mahapatra, Sushree Bibhuprada B.
Priyadarshini, Lokesh Kumar Nahata, Smita Rath,
Nikhil Singh, Shatabdi Chakraborty,
Jyotirmayee Pradhan, Sachi Nandan Mohanty,
and Prabhat Sahu
2.1 INTRODUCTION
2.1.1 Background Study
The hand is a human organ used to manipulate physical objects; for this reason, it
is what humans most commonly use to communicate and to operate machines. The
process of comprehending and classifying significant hand movements made by
humans is known as hand gesture recognition. The mouse and keyboard are the basic
input devices of a computer, and both require the use of the hands. The most
important and immediate exchange of information between humans and machines takes
place through visual and audio aids, but this communication is one-way. Hand
gestures support everyday communication and help us clearly convey our message [1–5].
Hand gestures are essential for sign language communication: the hand is of
paramount importance to speech- and hearing-impaired people, who communicate
using hand gestures. If a computer had the ability to translate and understand
hand gestures, it would be an advancement in human-computer interaction. The
dilemma here is that modern images are information-rich and require extensive
processing to perform this task. Each gesture has characteristics that distinguish
it from other gestures. The Hu invariant moments are used to extract these
features from the gesture, which is then classified using the K-Nearest Neighbour
(KNN) algorithm. Practical applications of gesture-based human-computer
interaction include interaction with virtual objects, control of robots,
translation of body and sign language, and control of machines by gestures [6, 7].
Hand detection and recognition is one of the technologies that can recognize hand
motions from real-time video. Hand gestures fall within a certain category of
interest. The design of hand gesture recognition in this study is a challenging
task that combines two key problems: the first is hand detection itself, and the
second is recognizing a gesture made with one hand at a time. This research
focuses on how the system detects, recognizes, and interprets hand gestures in
the presence of challenging factors including stance, orientation, position, and
scale variation.
DOI: 10.1201/9781003432319-2
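The Hu-moment-plus-KNN pipeline described above can be sketched with NumPy alone (in practice OpenCV's moment functions would be used); the tiny synthetic "gestures" and all names below are illustrative assumptions:

```python
import numpy as np

def hu_features(img):
    """First two Hu invariant moments of a binary image (NumPy-only sketch)."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    m00 = img.sum()
    cx, cy = (x * img).sum() / m00, (y * img).sum() / m00
    def mu(p, q):   # central moment
        return ((x - cx) ** p * (y - cy) ** q * img).sum()
    def eta(p, q):  # scale-normalised central moment
        return mu(p, q) / m00 ** ((p + q) / 2 + 1)
    h1 = eta(2, 0) + eta(0, 2)
    h2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([h1, h2])

def knn_classify(feature, train, k=3):
    """K-nearest-neighbour vote over (feature_vector, label) training pairs."""
    ranked = sorted(train, key=lambda fl: np.linalg.norm(fl[0] - feature))
    labels = [lab for _, lab in ranked[:k]]
    return max(set(labels), key=labels.count)

# Illustrative synthetic "gestures": a filled square vs. a thin vertical bar.
def square(n=16):
    img = np.zeros((32, 32)); img[8:8 + n, 8:8 + n] = 1; return img
def bar(w=4):
    img = np.zeros((32, 32)); img[4:28, 14:14 + w] = 1; return img

train = [(hu_features(square(12)), "fist"), (hu_features(square(16)), "fist"),
         (hu_features(bar(3)), "point"), (hu_features(bar(5)), "point")]
print(knn_classify(hu_features(bar(4)), train, k=3))  # point
```

Because the Hu moments are invariant to translation and scale, squares of different sizes map to nearly identical feature vectors, which is exactly the property that makes them useful under the scale variation mentioned above.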
To get good results in the development of this project, various kinds of gestures,
such as numbers and sign language, need to be created in this system. The images
used to detect the gestures are taken of our own hands [8, 9]. In this project,
hand detection is done using Python programming and TensorFlow libraries.
Applying the ideas of hand classification and the hand detection system allows
hand gesture recognition to be developed using Python and OpenCV [7, 10–16].
2.1.2.1 Motivation
Humans can easily determine which objects are present in an image: with minimal
conscious effort, the human visual system completes complex tasks such as
identifying many objects, and it does so quickly and accurately. We can now train
computers to capture and classify, with high accuracy, the hand gestures used in
daily life from an image. Object detection using TensorFlow is a computer vision
technique: as the name implies, it helps us detect, locate, and identify objects
in an image or video. In video-calling systems, subtitles based on recognized
hand gestures can be added, making communication easier.
TABLE 2.1
Assumptions Taken
Sl. No. Assumption
1. The photos taken from the camera are assumed to be of high quality, so that
hand gestures can be detected with high accuracy.
TABLE 2.2
Constraints Taken
Sl. No. Constraint
1. The hand gesture must be detected precisely and understood by the system.
2. The hand gesture must be present in the existing database; otherwise, the
conversion to the corresponding text will not take place.
3. Each gesture must be imaged from various angles to make the system more
accurate.
TABLE 2.3
A Comparative Analysis of Different Gesture Recognition Models
Primary Methods of Recognition | Background to Gesture Images | Additional Markers Required (Like Wrist Band) | Number of Training Images | Frame Rate
Hidden Markov models | General | Multicoloured gloves | 7-hour signing | –
Hidden Markov models | General | No | 400 training sentences | 10
Linear approximation to linear point distribution on models | Blue screen | No | 7441 images | –
Finite state machine/model machine | Static | Markers on glove | 10 sequences of 200 frames each | 10
Hand gesture recognition, (a) pre-trained and (b) fine-tuned using a dataset of images extracted from video footage of two soccer matches | General | No | – | –
Today, people use Google Assistant and Siri to get answers to their questions,
but this is difficult for people with disabilities, especially those who are
hearing- or speech-impaired. With the help of object recognition, they can do
the same with hand movements, which would be of great benefit to them. Our
project is useful for people with disabilities, for doctors, and for educational
institutions working with people with disabilities, who would no longer need to
carry a card or rely on a human interpreter. The project aims to create a medium
of communication for people who are deaf and mute. It would also assist doctors
and help increase the education rate of our country [1, 7, 8].
2.3.1 Methods/Technologies Observed
The following models were encountered during the literature survey:
Figure 2.2 shows the main stages of our proposed work, and Figure 2.3 shows the
testing of images, where an image can either match or mismatch. Figure 2.4
illustrates how the object detection model works. Similarly, Figure 2.5 shows
the working steps involved in our proposed approach [20–32].
Using an image-annotation tool, we calculate the object's xmin, ymin, xmax, and
ymax values and send them, along with the image, to the model for training. The
following steps are involved:
• The folder containing the images we need to label can be found by clicking
“Open Dir” and choosing the folder.
• Then select the location where we want to save the label file by clicking
on “Change Save Dir”. The image directory should not be the same as this
directory.
• We can now draw boxes over the photos using the “Create RectBox” command.
• To save, click the Save button. A file containing the box coordinates will
be created.
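The labelling tool described above saves each image's boxes as a Pascal VOC XML
annotation. A minimal reader for that format can be sketched as follows; the
sample annotation, file name, and gesture name are invented for illustration,
but the <object>/<bndbox> layout is the standard VOC structure.

```python
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_text):
    """Return (label, xmin, ymin, xmax, ymax) tuples from a Pascal VOC annotation."""
    root = ET.fromstring(xml_text)
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        boxes.append((
            obj.findtext("name"),
            int(bb.findtext("xmin")), int(bb.findtext("ymin")),
            int(bb.findtext("xmax")), int(bb.findtext("ymax")),
        ))
    return boxes

# Illustrative annotation, as the labelling tool might save it for one image.
sample = """<annotation>
  <filename>ok_01.jpg</filename>
  <object>
    <name>ok</name>
    <bndbox><xmin>48</xmin><ymin>60</ymin><xmax>210</xmax><ymax>240</ymax></bndbox>
  </object>
</annotation>"""
```

Each parsed tuple corresponds to one drawn rectangle and feeds the training
pipeline together with its image.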
Finally, we will have a folder of label files, each with the same name as its
corresponding image. We can now train the object detector with this data. After
adding annotations to the images, we construct a label map with each item's
name, ID, and display name; one label entry is created for each object class.
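The label map mentioned above is a small protobuf-text file. A sketch of
generating one is shown below; the gesture names come from this chapter, while
the helper and output path are illustrative. The item { id / name / display_name }
layout is the TensorFlow Object Detection API's label-map format, with IDs
starting at 1 (ID 0 is reserved for the background).

```python
def build_label_map(names):
    """Render a TF Object Detection API label map: one item per gesture class."""
    entries = []
    for idx, name in enumerate(names, start=1):  # IDs start at 1, not 0
        entries.append(
            "item {\n"
            f"  id: {idx}\n"
            f"  name: '{name}'\n"
            f"  display_name: '{name.replace('_', ' ').title()}'\n"
            "}\n"
        )
    return "\n".join(entries)

gestures = ["namaste", "ok", "thumbs_down", "thumbs_up", "victory"]
pbtxt = build_label_map(gestures)
with open("label_map.pbtxt", "w") as fh:  # output path is illustrative
    fh.write(pbtxt)
```

The resulting file is referenced from the training configuration so that numeric
class IDs predicted by the model can be mapped back to gesture names.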
The steps involved in TensorFlow object detection are shown in Figure 2.13.
FIGURE 2.13 Sequence of steps followed during object detection using TensorFlow.
The hand gesture detection system has been tested with images of hands taken
under various circumstances. This section details the system's detection outputs
for the Namaste, OK, thumbs-down, thumbs-up, and victory gestures, respectively.
We are also aiming to port the system to Android, since we want our product to
be distributed for the benefit of people with disabilities worldwide.
Researching cutting-edge mathematical techniques for image processing and
exploring hardware options that enable more precise hand detection would be
ideal. This study illustrated the potential for streamlining user interaction
with desktop computers and system software, in addition to demonstrating the
various gesture operations that users may perform.
REFERENCES
1. https://github.com/itseez/opencv (Accessed: 13 Oct 2017).
2. https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/
tf2_detection_zoo.md.
3. https://iopscience.iop.org/article/10.1088/1757-899X/1045/1/012043.
4. https://scholar.google.com/scholar?hl=en&as_sdt=0%2C5&q=object+detecting+with+
tensorFlow&btnG=.
5. S. M. M. Roomi, R. J. Priya, and H. Jayalakshmi, “Hand gesture recognition for human-
computer interaction”, Journal of Computer Science, Vol. 6, Iss. 9, pp. 1002–1007,
2010.
6. S. Sharma and S. Singh, “Vision-based hand gesture recognition using deep learning
for the interpretation of sign language”, Expert Systems with Applications, Vol. 182,
Art. no. 115657, 2021.
7. P. S. Neethu, R. Suguna, and D. Sathish, “An efficient method for human hand gesture
detection and recognition using deep learning convolutional neural networks”, Soft
Computing, Vol. 24, pp. 15239–15248, 2020.
8. S. Hussain, R. Saxena, X. Han, J. Khan, and H. Shin, “Hand gesture recognition using
deep learning”, 2017 International SoC Design Conference (ISOCC), IEEE, pp. 48–49,
2017.
9. P. Parvathy, K. Subramaniam, P. Venkatesan, et al., “Development of hand gesture
recognition system using machine learning”, Journal of Ambient Intelligence and
Humanized Computing, Vol. 12, pp. 6793–6800, 2021.
10. M. Al-Hammadi, G. Muhammad, W. Abdul, M. Alsulaiman, M. A. Bencherif, T.
S. Alrayes, and M. A. Mekhtiche, “Deep learning-based approach for sign language
gesture recognition with efficient hand gesture representation”, IEEE Access, Vol. 8,
pp. 192527–192542, 2020.
11. P. Trigueiros, F. Ribeiro, and L. P. Reis, “A comparison of machine learning algorithms
applied to hand gesture recognition”, 7th Iberian Conference on Information Systems
and Technologies (CISTI 2012), IEEE, pp. 1–6, 2012.
12. A. Mujahid, M. J. Awan, A. Yasin, M. A. Mohammed, R. Damaševičius, R. Maskeliūnas,
and K. H. Abdulkareem, “Real-time hand gesture recognition based on deep learning
YOLOv3 model”, Applied Sciences, Vol. 11, Iss. 9, Art. no. 4164, 2021.
13. G. Devineau, F. Moutarde, W. Xi, and J. Yang, “Deep learning for hand gesture rec-
ognition on skeletal data”, 13th IEEE International Conference on Automatic Face,
Gesture Recognition (FG 2018), pp. 106–113, 2018.
14. A. A. Q. Mohammed, J. Lv, and M. S. Islam, “A deep learning-based end-to-end
composite system for hand detection and gesture recognition”, Sensors, Vol. 19,
Iss. 23, Art. no. 5282, 2019.
15. R. M. Gurav and P. K. Kadbe, “Real time finger tracking and contour detection for
gesture recognition using OpenCV”, 2015 International Conference on Industrial
Instrumentation and Control (ICIC), IEEE, pp. 974–977, 2015.
16. V. Harini, V. Prahelika, I. Sneka, and P. Adlene Ebenezer, “Hand gesture recogni-
tion using OpenCV and Python”, New Trends in Computational Vision and Bio-
inspired Computing: Selected Works Presented at ICCVBIC, Springer, Coimbatore,
pp. 1711–1719, 2020.
17. R. Shrivastava, “A hidden Markov model based dynamic hand gesture recognition sys-
tem using OpenCV”, 3rd IEEE International Advance Computing Conference (IACC),
IEEE, pp. 947–950, 2013.
18. D. H. Pal and S. M. Kakade, “Dynamic hand gesture recognition using Kinect sensor”,
2016 International Conference on Global Trends in Signal Processing, Information
Computing and Communication (ICGTSPICC), IEEE, pp. 448–453, 2016.
19. M. Suresh, A. Sinha, and R. P. Aneesh, “Real-time hand gesture recognition using
deep learning”, IJIIE – International Journal of Innovations and Implementations in
Engineering (ISSN 2454-3489), Vol. 1, December 2019.
20. T. Sharma, S. Kumar, N. Yadav, K. Sharma, and P. Bhardwaj, “Air-swipe gesture rec-
ognition using OpenCV in Android devices”, International Conference on Algorithms,
Methodology, Models and Applications in Emerging Technologies (ICAMMAET),
IEEE, pp. 1–6, 2017.
21. J. Shukla and A. Dwivedi, “A method for hand gesture recognition”, Fourth
International Conference on Communication Systems and Network Technologies,
IEEE, pp. 919–923, 2014.
22. A. S. Ghotkar and G. K. Kharate, “Study of vision based hand gesture recognition
using Indian sign language”, International Journal on Smart Sensing and Intelligent
Systems, Vol. 7, Iss. 1, pp. 96–115, 2014.
23. V. V. Krishna Reddy, et al., “Hand gesture recognition using convolutional neural net-
works and computer vision”, Cognitive Informatics and Soft Computing: Proceeding
of CISC 2021. Springer Nature Singapore, Singapore, pp. 583–593, 2021.
24. N. H. Dardas and N. D. Georganas, “Real-time hand gesture detection and recognition
using bag-of-features and support vector machine techniques”, IEEE Transactions on
Instrumentation and Measurement, Vol. 60, Iss. 11, pp. 3592–3607, 2011.
25. A. Chaudhary, J. L. Raheja, K. Das, and S. Raheja, “Intelligent approaches to interact
with machines using hand gesture recognition in natural way: A survey”, International
Journal of Computer Science and Engineering Survey, Vol. 1303, pp. 22–92, 2013.
26. A. Mujahid, M. J. Awan, A. Yasin, M. A. Mohammed, R. Damaševičius, R. Maskeliūnas,
and K. H. Abdulkareem, “Real-time hand gesture recognition based on deep learning
YOLOv3 model”, Applied Sciences, Vol. 11, Iss. 9, Art. no. 4164, 2021.
27. S. S. Kakkoth and S. Gharge, “Survey on real time hand gesture recognition”,
International Conference on Current Trends in Computer, Electrical, Electronics and
Communication (CTCEEC), IEEE, pp. 948–954, 2017.
28. S. Sharma and S. Jain, “A static hand gesture and face recognition system for blind
people”, 6th International Conference on Signal Processing and Integrated Networks
(SPIN), IEEE, pp. 534–539, 2019.
29. L. Brethes, P. Menezes, F. Lerasle, and J. Hayet, “Face tracking and hand gesture
recognition for human-robot interaction”, International Conference on Robotics and
Automation, IEEE, Vol. 2, pp. 1901–1906, 2004.
30. M. P. Nath, S. B. B. Priyadarshini, D. Mishra, and S. Borah, “A comprehensive study of
contemporary IoT technologies and varied machine learning (ML) schemes, soft com-
puting techniques and applications”, Advances in Intelligent Systems and Computing,
Vol. 1248. Springer, Singapore, 2021. https://doi.org/10.1007/978-981-15-7394-1_56
31. S. B. B. Priyadarshini, A. Mahapatra, S. N. Mohanty, et al., “myCHIP-8 emulator:
An innovative software testing strategy for playing online games in many platforms”,
EAI/Springer Innovations in Communication and Computing. Springer, Cham, 2022.
https://doi.org/10.1007/978-3-031-07297-0_9
Object Detection Using Deep Learning (DL) and OpenCV Approach 41
32. D. Xu, X. Wu, Y. L. Chen, and Y. Xu, “Online dynamic gesture recognition for human
robot interaction”, Journal of Intelligent Robotic Systems, Vol. 77, Iss. 3–4, pp. 583–
596, 2015.
33. S. Rath, S. B. B. Priyadarshini, et al., “A real-time hybrid YOLOV4 approach for multi-
classification and detection of objects”, Journal of Theoretical and Applied Information
Technology, Vol. 101, Iss. 11, pp. 4314–4325, 2023.
34. J. Zhang, et al., “Multi-class object detection using faster R-CNN and estimation of
shaking locations for automated shake-and-catch apple harvesting”, Computers and
Electronics in Agriculture, Vol. 173, Art. no. 105384, 2020.
35. S. H. Lee, C. H. Yeh, T. W. Hou, and C. S. Yang, “A lightweight neural network based
on AlexNet-SSD model for garbage detection”, Proceedings of the 2019 3rd High
Performance Computing and Cluster Technologies Conference, pp. 274–278, 2019.
36. K. Chaitanya and G. Maragatham, “Object and obstacle detection for self-driving cars
using GoogLeNet and deep learning”, Artificial Intelligence Techniques for Advanced
Computing Applications: Proceedings of ICACT. Springer, Singapore, pp. 315–322,
2021.
37. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep con-
volutional neural networks”, Communications of the ACM, Vol. 60, Iss. 6, pp. 84–90,
2017.
38. S. M. Abbas and S. N. Singh, “Region-based object detection and classification using
faster R-CNN”, 4th International Conference on Computational Intelligence &
Communication Technology (CICT), IEEE, pp. 1–6, 2018.
3 Enhancing Industrial
Operations through
AI-Driven Decision-
Making in the Era
of Industry 4.0
Prakruthi R Rai, Preethi Nanjundan,
and Jossy Paul George
3.1 INTRODUCTION
The dawn of the Fourth Industrial Revolution, commonly termed Industry 4.0,
marked a pivotal juncture in the annals of industrial development. The simulation
of human intelligence in machines, called artificial intelligence (AI), has risen
to play a vital role in this new era. AI refers to a broad field of science
encompassing computer science, psychology, philosophy, linguistics, and other
areas [1]. In today's rapidly evolving technological landscape, Industry 4.0 has
emerged as an innovative concept integrating modern technologies such as AI, big
data analytics, the Internet of Things (IoT), and cloud computing. This fourth
industrial revolution aims to transform traditional manufacturing processes by
leveraging smart systems to enhance productivity, efficiency, and profitability
[2]. Rooted in a synergy of modern technologies, Industry 4.0 promulgates a
vision of interconnected and intelligent production systems. At the nexus of this
revolution lies AI, orchestrating a paradigm shift in how industries function,
evolve, and decide [3].
Historically, industrial revolutions have been catalyzed by technological
breakthroughs, from the mechanization wrought by steam engines during the First
Industrial Revolution to the automation delivered by electronics and IT during
the Third. Industry 4.0, however, is distinct, blurring the lines between the
physical and digital spheres. This amalgamation is embodied by cyber-physical
systems, decentralized decision-making frameworks, and the ever-present IoT.
With their torrent of continuous data streams, such systems necessitate advanced
computational mechanisms for interpretation, processing, and response [3].
Enter AI. With its large arsenal of machine learning (ML) algorithms, neural
networks, and data analytics tools, AI has emerged as the linchpin of Industry
4.0's aspirations. It paves the way for systems that do not simply respond but
predict,
adapt, and optimize [4]. Whether forecasting machine wear and tear so it can be
addressed preemptively, optimizing supply chains using real-time global data, or
customizing production strategies to meet shifting demands, AI-driven decisions
are becoming the lifeblood of modern industrial ecosystems [2]. This chapter
seeks to clarify the intricate interplay of AI technology within the overarching
framework of Industry 4.0. We aim to spotlight AI's transformative role in
steering industrial decision-making processes toward remarkable performance and
precision through a mix of technical insights, case studies, and forward-looking
analyses.
At the heart of modern industry's rapid transformation is AI, a technological
juggernaut reshaping the contours of production, operation, and management [5].
The infusion of AI into industrial practices has revolutionized decision-making,
transitioning from traditional, often reactive strategies to predictive,
proactive, and highly optimized techniques. AI has a profound effect on modern
industrial decision-making [3].
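The predictive shift described above can be illustrated with a deliberately tiny
sketch: fitting a trend to synthetic vibration readings and estimating when a
wear threshold will be crossed. The data, drift rate, and threshold are all
invented for illustration; production systems use far richer models, such as
random forests for tool-wear prediction [6].

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic sensor history: vibration amplitude drifts upward as the tool wears.
hours = np.arange(200.0)
vibration = 1.0 + 0.02 * hours + rng.normal(0.0, 0.05, hours.size)

# Fit a linear wear trend and extrapolate to an (illustrative) failure threshold,
# turning raw readings into a maintenance date instead of an unplanned breakdown.
slope, intercept = np.polyfit(hours, vibration, 1)
FAILURE_THRESHOLD = 6.0
predicted_failure_hour = (FAILURE_THRESHOLD - intercept) / slope
```

Maintenance can then be scheduled shortly before predicted_failure_hour rather
than after a stoppage, which is the essence of predictive, AI-driven
decision-making.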
3.2 BACKGROUND
Within the paradigm of Industry 4.0, AI stands as the computational nucleus,
analogous to the brain's role in complex organisms [1]. This industrial landscape
is inundated with large datasets from an intricate mesh of sensors,
interconnected IoT devices, and cyber-physical systems. AI's prowess in real-time
data processing, pattern recognition, and predictive analytics becomes integral,
transforming this raw data deluge into actionable intelligence [3]. Moreover,
AI-driven decentralized decision-making algorithms empower individual components
within these environments to autonomously derive and execute decisions, ensuring
an orchestrated and cohesive operational flow. Cognitive automation, underpinned
by AI, seamlessly melds machine performance with human-like reasoning, enabling
nuanced judgments beyond conventional automation paradigms [6].
Furthermore, intuitive human-machine interfaces, bolstered by advanced natural
language processing (NLP) frameworks, strengthen the symbiotic interplay between
human expertise and automated systems [3]. Concurrently, AI-driven security
protocols protect the industrial network, ensuring data integrity and thwarting
potential breaches in this hyper-connected environment. As Industry 4.0 evolves,
machine learning mechanisms enable continuous adaptive learning, optimizing
operations and refining decision matrices. AI both supports and governs the
elaborate interaction of components within Industry 4.0, orchestrating a future
characterized by unprecedented performance and adaptability [7].
fluidity in supply chain operations [8]. Moreover, with their inherent
adaptability, AI-driven systems pivot toward energy-efficient methodologies,
calibrating consumption based on real-time needs and consistently pinpointing
wastage vectors, aligning industrial processes with sustainability imperatives
[2]. One of the more nuanced advantages lies in AI's capacity for bespoke
production variations, wherein algorithms, attuned to market oscillations and
customer feedback, tailor production paradigms to ensure alignment with dynamic
market demands. This AI-driven foresight further encompasses risk mitigation,
offering preemptive strategies based on historical data trajectories, current
operational tendencies, and external geopolitical or market stimuli [8]. From a
safety perspective, deploying AI-empowered robotic entities in high-risk
industrial domains minimizes human exposure to hazards, while real-time
monitoring algorithms stand sentinel, supplying immediate alerts for potential
safety breaches. Beyond the tangible operational facets, AI integration offers a
cerebral gain, distilling vast data repositories into actionable strategic
insights that inform and refine high-level organizational decisions [6].
Ultimately, these advantages culminate in tangible cost efficiencies, bolstering
the financial robustness of industries navigating the complexities of Industry
4.0. The infusion of AI-driven decision-making within the industrial sector
signifies a paradigmatic shift, intertwining intelligence with operational
substrates and setting the trajectory for an adaptive, green, and growth-centric
industrial future [14].
and instantaneous decision-making stand at the forefront, setting the gold standard
for operational efficiency, precision, and adaptability [13].
models. Trained rigorously on vast datasets comprising both flawless and
defective units, the AI system evolved to recognize an expansive spectrum of
anomalies, from minute cosmetic blemishes to intricate functional discrepancies.
The electronics firm's strategic pivot to AI-empowered visual inspections
epitomizes the confluence of technology with quality assurance, delineating a
future where product perfection is not an aspirational ideal but a consistently
realized standard [17].
and tear, and optimize performance without physical interventions [2]. Moreover,
the adoption of edge computing is expected to grow, decentralizing data
processing and placing it closer to the source, ensuring reduced latency and
more real-time decision-making [8]. As quantum computing moves from the
theoretical realm to the practical, its profound computational capabilities
could revolutionize areas from materials science to optimization problems,
heralding breakthroughs previously deemed impossible [11]. Yet, as these
technological marvels unfold, an equally significant trend will be the
heightened recognition of sustainability and ethical concerns. Circular economy
concepts, emphasizing resource efficiency, waste reduction, and sustainable
manufacturing, will become central tenets of Industry 4.0. Concurrently, with AI
playing a pivotal role, ethical AI frameworks addressing bias, transparency, and
decision accountability will gain paramount importance [9].
In the epoch of Industry 4.0, where the convergence of technology and industrial
operations is rewriting traditional paradigms, the integration of AR and VR
stands as a transformative force [19]. The profound fusion of these immersive
realities is further amplified by the driving force of AI, shaping a new
paradigm that redefines training, design, maintenance, and collaboration within
industrial ecosystems. Augmented reality, which overlays digital information
onto the real-world environment, and VR, which creates completely virtual
immersive environments, have traditionally been perceived as distinct realms
[7]. However, AI serves as the cohesive element that harmonizes these realities,
rendering them not just complementary but synergistic. AI-driven computer vision
algorithms enable AR to recognize and interact with real-world objects,
facilitating context-aware information overlays. Moreover, AI-powered NLP
empowers AR and VR interfaces to interpret and respond to spoken instructions,
fostering intuitive human-computer interaction [2]. In the realm of training and
education, this amalgamation shines. AI-enhanced VR simulations permit trainees
to immerse themselves in realistic scenarios, learning how to operate complex
machinery, troubleshoot problems, or navigate complicated manufacturing lines,
all within a safe, controlled environment. The adaptive capabilities of
AI-driven VR ensure that the training evolves alongside the trainee's progress,
optimizing the learning curve [9]. For maintenance and design, the integration
is equally profound. AR interfaces equipped with AI-driven object recognition
can guide technicians through complex repair procedures, superimposing
step-by-step instructions onto real equipment. In design processes, AI
algorithms can interpret user gestures and sketches, transforming rough concepts
into detailed 3D models within a VR environment, catalyzing innovation and
faster releases. Collaboration, too, is redefined [13]. Geographically
distributed teams can converge in AI-enhanced VR spaces, interacting with
dynamic 3D models, prototypes, and simulations, fostering collaborative ideation
and problem-solving. However, challenges persist [16]. Achieving seamless
synchronization between AI-driven real-time data, AR/VR interfaces, and
industrial systems calls for meticulous engineering. Privacy issues, ethical
considerations, and data security in AI-mediated immersive environments also
necessitate stringent protocols [15]. In essence, the integration of AI-driven
AR and VR within Industry 4.0 is a harmonious symphony of technologies,
redefining human-machine interaction, knowledge dissemination, design ideation,
and collaborative innovation, shaping a future where reality and virtuality are
indistinguishably woven [12].
to the fore a complex tapestry of ethical issues [6]. As algorithms begin to
hold sway over areas traditionally reserved for human discretion, from medical
diagnoses and financial lending to recruitment and criminal justice, the
imperatives of ethical, transparent, and responsible AI have never been more
pronounced. Central to these concerns is the challenge of bias and fairness [2].
ML models, the heart of many AI systems, are trained on large datasets that
frequently reflect historical or societal biases. Without meticulous scrutiny
and correction, those biases can be unwittingly perpetuated and amplified by AI,
leading to skewed, unjust, or discriminatory decisions [10]. For instance, a
recruitment AI trained on historically biased data may inadvertently favor
certain demographics over others, exacerbating existing disparities.
Transparency and interpretability form another important axis. The rise of
complex deep learning models, often termed "black boxes" due to their
opaqueness, raises concerns about the comprehensibility of AI decisions [7].
Stakeholders, from affected individuals to policymakers, require comprehensible
explanations for AI-driven decisions, especially in high-stakes scenarios like
healthcare or legal settings. The burgeoning field of explainable AI (XAI) aims
to address this, striving to make AI models more interpretable and their
decisions better elucidated [2]. Furthermore, the autonomy granted to AI in
decision-making amplifies concerns about accountability. In scenarios where
AI-driven decisions cause unfavorable or dangerous results, assigning
responsibility becomes difficult. Is the onus on the developers, the operators,
the data curators, or the AI itself? Regulatory frameworks and robust governance
models are needed to delineate clear lines of responsibility. Lastly, privacy
and data protection are paramount [7]. AI's voracious appetite for data, often
personal or sensitive, necessitates rigorous data protection measures. Moreover,
as AI systems begin to make predictions or inferences about people, ethical
quandaries about consent, data ownership, and the right to explanation arise
[10]. As AI's role in decision-making burgeons, a harmonized dance of
technological innovation and ethical diligence becomes vital. Balancing the
transformative potential of AI with the imperatives of justice, transparency,
accountability, and human dignity will shape not simply the future of AI but
the very fabric of our data-driven society.
3.7 CONCLUSION
The ascent of AI within the digital revolution brings huge potential and
significant challenges, particularly in the context of Industry 4.0. This phase
of industrial evolution, marked by interconnected cyber-physical systems and the
IoT, sees AI as the driving force behind unprecedented transformations. AI's
cognitive abilities, fueled by elaborate algorithms and data-driven methods,
bridge the gap between raw data and actionable insights [16]. This is
exemplified by neural networks and deep learning, permitting real-time
evaluation of sensor data to optimize processes and predict equipment failures.
Additionally, AI optimizes supply chains in logistics, while generative AI
models personalize product design [11].
However, realizing this potential requires addressing inherent complexities.
The need for comprehensible AI decisions necessitates XAI methods, especially in
critical scenarios [2]. Moreover, data safety and privacy must be upheld through
robust encryption and ethical frameworks. This transformative trajectory
embodies not simply technical evolution but a new era of industrial performance,
adaptability, and innovation underpinned by both computational intelligence and
ethical considerations [7]. In this complicated landscape, stakeholders hold the
key to shaping a harmonious future. Industry leaders, policymakers, workers, and
consumers all play pivotal roles. Embracing AI's benefits requires openness to
innovation, upskilling, and adapting business models. Yet challenges like
ethics, job displacement, and data security demand equal attention. Industry
leaders can set up ethical AI committees and retraining programs [6].
Policymakers must create agile policies that defend rights and privacy while
promoting innovation. Frontline workers must embrace upskilling for the evolving
job market. Ultimately, Industry 4.0's AI-driven landscape requires proactive
collaboration [2]. By capitalizing on AI's benefits while actively addressing
its challenges, stakeholders can ensure balanced growth, inclusivity, and
ongoing innovation in this new industrial era.
REFERENCES
1. Noreen, U., Shafique, A., Ahmed, Z., & Ashfaq, M. A. (2023). Banking 4.0: Artificial
Intelligence (AI) in Banking Industry & Consumer’s Perspective. Sustainability, 15,
3682. 10.3390/su15043682
2. Kagermann, H., Wahlster, W., & Helbig, J. (2013). Recommendations for implement-
ing the strategic initiative INDUSTRIE 4.0: Final report of the Industrie 4.0 Working
Group. Forschungsunion, 1–44.
3. Crowston, K., Allen, E. E., & Heckman, R. (2012). Using Natural Language Processing
Technology for Qualitative Data Analysis. International Journal of Social Research
Methodology, 15(6), 523–543.
4. Arinez, J. F., Chang, Q., Gao, R. X., Xu, C., & Zhang, J. (Nov. 2020). Artificial
Intelligence in Advanced Manufacturing: Current Status and Future Outlook.
ASME Journal of Manufacturing Science and Engineering, 142(11), Art. no. 110804.
10.1115/1.4047855
5. Lai, Z.-H., Tao, W., Leu, M. C., & Yin, Z. (Apr. 2020). Smart Augmented Reality
Instructional System for Mechanical Assembly Towards Worker-Centered Intelligent
Manufacturing. The Journal of Manufacturing Systems, 55, 69–81.
6. Wu, D., Jennings, C., Terpenny, J., Gao, R. X., & Kumara, S. (2017). A Comparative
Study on Machine Learning Algorithms for Smart Manufacturing: Tool Wear Prediction
Using Random Forests. The Journal of Manufacturing Science and Engineering,
139(7), 1–10.
7. Brynjolfsson, E., & McAfee, A. (2017). The Business of Artificial Intelligence. Harvard
Business Review, 95(1), 62–72.
8. Brynjolfsson, E., & McAfee, A. (2011). Race against the machine: How the digital
revolution is accelerating innovation, driving productivity, and irreversibly transform-
ing employment and the economy. Digital Frontier Press.
9. Li, X., Tao, F., & Zhang, L. (2018). A Survey of Artificial Intelligence for Industrial Big
Data Analytics. Journal of Manufacturing Systems, 48, 144–156.
10. Chen, Y., & Xie, J. (2018). Industry 4.0 and the Industrial Internet of Things: A Survey.
International Journal of Academic Research in Business and Social Sciences, 8(11),
2222–6990.
11. Chen, Y., Li, X., & Zhang, X. (2018). A Survey on Artificial Intelligence for Decision-
Making in Industry 4.0. Journal of Intelligent Manufacturing, 29(2), 411–422.
12. Resnik, D. B. (2011). What Is Ethics in Research and Why Is It Important. National
Institute of Environmental Health Sciences, 1(10), 49–70.
13. Wang, L., & Wang, X. (2018). A Survey on Industrial Artificial Intelligence for Industry
4.0. Journal of Manufacturing Systems, 48, 144–156.
14. Rojek, I., Jasiulewicz-Kaczmarek, M., Piechowski, M., & Mikołajewski, D. (2023). An
Artificial Intelligence Approach for Improving Maintenance to Supervise Machine
Failures and Support Their Repair. Applied Sciences, 13, 4971. 10.3390/app13084971
15. Libby, K. (2019). This Bill Hader Deepfake video is amazing. It’s also terrifying for our
future. Popular Mechanics.
16. Siau, K., & Wang, W. (2020). Artificial Intelligence (AI) Ethics: Ethics of AI and
Ethical AI. Journal of Database Management, 31, 74–87. 10.4018/JDM.2020040105
17. Almetwally, A. A., Bin-Jumah, M., & Allam, A. A. (2020). Ambient Air Pollution and
Its Influence on Human Health and Welfare: An Overview. Environmental Science and
Pollution Research, 27, 24815–24830. 10.1007/s11356-020-09042-2
18. Ghorani-Azam, A., Riahi-Zanjani, B., & Balali-Mood, M. (2016). Effects of Air
Pollution on Human Health and Practical Measures for Prevention in Iran. Journal of
Research in Medical Sciences, 21, 189646. 10.4103/1735-1995.189646
19. Wuest, T., Weimer, D., Irgens, C., & Klaus, D. T. (2016). Machine Learning in
Manufacturing: Advantages, Challenges, and Applications. Production & Manufacturing
Research, 4(1), 23–45.
20. Wang, W., & Siau, K. (2019). Artificial Intelligence, Machine Learning, Automation,
Robotics, Future of Work and Future of Humanity: A Review and Research Agenda.
Journal of Database Management, 30(1), 61–79.
4 Acne Detection Using
Convolutional Neural
Networks and Image-
Processing Technique
Premanand Ghadekar, Aniket Joshi,
Atharv Vanjari, Mohammad Raza,
Shubhankar Gupta, and Anagha Gajaralwar
4.1 INTRODUCTION
Acne is a common skin condition that affects many people, especially during
adolescence. Identifying and analysing acne lesions can consume a great deal of a
dermatologist's time, which has sparked growing interest in automated approaches
to acne identification. Convolutional neural networks (CNNs) have become an effec-
tive method for image processing in the medical field and can be applied to acne
detection by accurately recognizing and categorizing acne lesions in images. CNNs
are a class of deep learning algorithms inspired by the structure of the human visual
system. They extract features from the input images through convolutional layers
composed of multiple layers of connected neurons; fully connected layers that
perform classification on the extracted features follow these layers. Acne detection
using CNNs generally involves the following steps:
Data collection: A dataset of images containing acne lesions is gathered. These
images may come from various sources or be drawn from dermatological
research.
Data pre-processing: The collected images are pre-processed before being fed
into the CNN to make sure they are suitable. Typical operations include
resizing the images, normalizing the pixel values, and augmenting the
dataset with transformations such as rotation, scaling, or flipping.
Training the model: A CNN model is trained on the pre-processed dataset.
Images are fed through the network, the loss (error) is computed, and the
network's weights are adjusted using optimization methods such as gradi-
ent descent; the process is repeated until the CNN learns the features and
patterns that separate acne lesions from healthy skin.
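The feature-extraction stage described in these steps can be sketched in a few lines of NumPy. The kernel values, image size, and layer sizes below are illustrative assumptions, not the chapter's actual model; the sketch only shows how a convolutional layer followed by ReLU and max pooling turns an image into a smaller feature map.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) of a greyscale image with one kernel."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Elementwise rectified linear activation."""
    return np.maximum(x, 0)

def max_pool(fmap, size=2):
    """Non-overlapping max pooling; trims edges that do not fill a window."""
    h = fmap.shape[0] // size
    w = fmap.shape[1] // size
    return fmap[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# A hypothetical 8x8 "image" and a 3x3 vertical-edge kernel (illustrative values).
image = np.arange(64, dtype=float).reshape(8, 8)
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
features = max_pool(relu(conv2d(image, kernel)))
print(features.shape)  # (3, 3): the 6x6 convolution output pooled by 2
```

In a full model, many such kernels are learned by gradient descent rather than fixed by hand, and the pooled feature maps feed the fully connected classification layers.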
DOI: 10.1201/9781003432319-4
Using CNNs for acne detection rather than manually examining large numbers of
photographs may save dermatologists time and labour. Automated acne-detection
methods can also benefit healthcare practitioners by enabling faster diagnosis and
treatment planning.
It is important to note that building an accurate CNN-based acne identifi-
cation system calls for rigorous model building, model optimization and a varied,
well-annotated dataset. Ongoing updates and enhancements may also be required
for the model to perform more accurately and respond to changing trends in acne
presentation.
Acne vulgaris, commonly called acne, is a skin disorder that occurs when the
pores of the skin become blocked and filled with dead skin cells and oil. It is most
commonly seen in teenagers but also affects some older people. Acne develops
when sebum, an oily substance that lubricates human skin and hair, together with
dead skin cells, plugs hair follicles. Bacterial growth can multiply the number of
pimples on the face and may also increase inflammation and infection. Acne results
in inflammation and produces larger, dark-red pimples. Medically, the examination
and assessment of acne is performed by a dermatologist and requires a clinical
environment. The patient then follows the doctor's prescriptions over a long period
of time to heal the spots, which requires considerable expense. This area of medicine
is growing tremendously because the problem is so common. Acne patients who
need day-to-day or long-term treatment have to follow up with a dermatologist and
visit clinics frequently just for basic check-ups and further prescriptions. According
to one survey, the average worldwide waiting time for an appointment with a
dermatologist is 32 days [1]. This causes great frustration for acne patients, as it
disrupts their daily routine, food intake, schedule and so on. To fill this gap, a model
has been created that detects acne from images. The basic aims of this model are to
(1) detect acne through images and estimate the amount of acne using worldwide
data; (2) accurately assess acne of the face and other dermal issues; (3) give a
predicted image of the possible outcome if plastic surgery or medical treatment is
undertaken in future; (4) give a possible range for the cost of treatment; (5) classify
images as acne or non-acne; and (6) suggest the required medical treatments
according to the type of dermal issue.
diverse items like cars, bikes, buildings and fruits. The Haar cascade makes use of
cascading windows, attempting to compute the features in each window and classify
whether or not it contains the object. The Haar cascade works as a classifier: it
classifies positive data points, which are part of the object the model detects, and
negative data points, which do not contain the object. The algorithm may be
described in four steps: Haar feature calculation, integral-image creation, AdaBoost
implementation and cascading of classifiers. Training the classifier requires
numerous positive photographs of faces and negative photographs of non-facial
things, much as other object detectors are trained [5].
For the model, the first goal is to compute the Haar feature landmarks. A Haar
feature is a calculation performed on adjacent rectangular regions at a specific
location within the detection window. The computation involves summing the pixel
intensities of each region and then calculating the difference between the sums.
Rather than doing this arithmetic for every pixel, integral images significantly speed
up the calculation of Haar features: sub-rectangles are created as proxies, and array
references to each sub-rectangle are used to compute the Haar properties. AdaBoost
then selects the most important features and trains a classifier to use them. A set of
"weak classifiers" is combined to design a "strong classifier" that the algorithm can
use to detect the features. A weak learner is created by passing a window over the
input image and computing the Haar feature for each subregion of the image. The
resulting value is compared to a threshold that separates non-objects from
objects [6].
By identifying neighbouring regions in the detection window, the calculation sums
the pixel intensities of each region and computes the difference between the sums.
Because individual Haar features are "weak classifiers", a large number of them is
required to form a reliably accurate classifier. A cascade classifier therefore consists
of several stages, each of which is a group of weak learners trained using boosting,
so that combining the predictions of the weak learners produces a highly accurate
classifier. Based on this prediction, the classifier decides whether it has found an
object (positive) or should move directly to the next region (negative) [7].
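The integral-image trick mentioned above can be illustrated directly. In the sketch below (the image values are arbitrary), each rectangle sum is recovered from four lookups in a zero-padded cumulative-sum table, which is what makes Haar feature evaluation fast regardless of rectangle size.

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over both axes, padded with a leading row/column of zeros."""
    ii = img.cumsum(axis=0).cumsum(axis=1)
    return np.pad(ii, ((1, 0), (1, 0)))

def rect_sum(ii, top, left, height, width):
    """Sum of img[top:top+height, left:left+width] via four table lookups."""
    return (ii[top + height, left + width] - ii[top, left + width]
            - ii[top + height, left] + ii[top, left])

# Arbitrary 6x6 image; a two-rectangle Haar feature is the left-half sum
# minus the adjacent right-half sum.
img = np.arange(36, dtype=float).reshape(6, 6)
ii = integral_image(img)
left = rect_sum(ii, 1, 1, 3, 2)    # 3x2 rectangle at row 1, column 1
right = rect_sum(ii, 1, 3, 3, 2)   # adjacent 3x2 rectangle at row 1, column 3
haar_value = left - right
assert left == img[1:4, 1:3].sum()  # four lookups match the brute-force sum
print(haar_value)
```

Once the table is built, every rectangle sum costs four lookups, so evaluating thousands of Haar features per window stays cheap.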
In this paper, Tingting Zhao, Hang Zhang and Jacob Spoelstra worked with Nestle
Skin Health SHIELD. Their main aim was to develop a deep learning model to help
detect acne from self-taken images. To deal with the geographical sensitivity of the
images, they used a CNN for training and testing, built on a pre-trained ResNet-152
model, and their system outperformed a human dermatologist on test images [8].
A general-purpose face recognition library with mobile applications is described
in the paper on OpenFace. Brandon Amos and his team researched the combined
field of IoT and deep learning and introduced the OpenFace library to bridge the
gap between public and private face recognition systems. The paper is aimed at
non-experts in the field of face and pattern recognition and helps readers become
familiar with the various deep learning techniques that have been used [9].
In this paper, the authors focused on the analysis of retinal diseases. They used
CNNs and MatConvNet for automated recognition of various retinal diseases from
fundus photographs, building a dataset with ten categories: one for the normal retina
and nine for different retinal diseases. The results were based on the VGG-19
architecture; they achieved an overall accuracy of 52% and, with the multi-
categorical classifier, a precision of 72.8%. This could be further improved with the
help of better algorithms [10].
In this paper, the authors give an idea of the cost of treatment for removing acne
from the face. In the Chinese market, a treatment called chemical peel (CP) is widely
accepted, as it is used to remove hyperpigmentation and scarring. The factors
affecting willingness to pay were identified using general linear models, and an
approximate value was derived from them. They obtained a response rate of almost
96% among 476 patients and, in conclusion, reported positive results after three
sessions of CP treatment at US$383.4 [11].
In this paper, the authors developed a system that detects acne on the face,
generally detecting the shapes and amounts of acne present. They used the image-
processing module of the MATLAB program. The system first converts RGB colour
images to greyscale, then the absolute maximum value is calculated, and the
greyscale images are normalized by reducing the data to its simplest form.
Brightness extraction is performed; after that, image subtraction isolates the region
of interest, and unwanted features, like spots and noise, are eliminated. The
sensitivity and precision values calculated by this system are typically high, with
the exception of accuracy, which requires improvement [12].
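The pre-processing chain reported in that system (RGB to greyscale, normalization by the maximum value, then subtraction to isolate a region of interest) can be approximated in NumPy. The luminance weights, the toy image, and the mean-based background estimate below are assumptions for illustration; the original work used MATLAB's image-processing module.

```python
import numpy as np

def to_greyscale(rgb):
    """Weighted luminance (ITU-R BT.601 weights); rgb is an (H, W, 3) array."""
    return rgb @ np.array([0.299, 0.587, 0.114])

def normalize(grey):
    """Scale by the absolute maximum so values fall in [0, 1]."""
    return grey / np.abs(grey).max()

# Toy 2x2 RGB image (illustrative 0..255 values).
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=float)
grey = to_greyscale(rgb)
norm = normalize(grey)

# "Image subtraction" for the region of interest: difference from a background
# estimate (here simply the image mean, as a stand-in for a real background model).
roi = norm - norm.mean()
print(np.round(norm, 3))
```

A real pipeline would also apply the morphological clean-up step the paper mentions to remove residual spots and noise.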
They provide an in-depth description of acne vulgaris (AV) in this work. AV is a
disorder of the pilosebaceous unit that causes both inflammatory and noninflamma-
tory lesions on the skin, including papules, pustules and nodules, as well as scarring
of varying degrees. AV is a relatively common condition that mostly affects teens,
with a lifetime incidence of roughly 85%. Acne frequency among women aged
20–29 was 50.9%, whereas it was 26.3% among those aged 40–49, indicating that
AV can persist into adulthood. Women account for two-thirds of all dermatological
office visits for acne, with women over the age of 25 accounting for one-third of
these visits [13].
In this paper, Viola and Jones introduced a system to accurately and rapidly detect
faces within an image. The system can be adapted to accurately detect facial fea-
tures; however, the area of the image being analysed for a facial feature needs to be
localized to the position with the highest probability of containing the feature. By
localizing the detection area, false positives are eliminated and the speed of detection
is increased due to the reduction of the area examined [14].
Skin acne is a persistent inflammatory condition caused by the pilosebaceous
gland producing a greater amount of sebum than expected as a result of stimulation
by androgenic hormones. Uncertainty still exists regarding the causes of acne and
how therapy influences the course of the condition. The treatment that works best is
oral isotretinoin, which is administered in the early stages of severe cases [15].
In this paper, the authors describe the problems related to acne in detail. Accord-
ing to their research, acne is believed to affect the global population with an
estimated 9.4% prevalence, making it the eighth most common disease in the
world. Adolescents post puberty are the most commonly affected group, with teen-
age boys being particularly prone to more severe forms of acne. The review provides
an updated understanding of the prevalence of acne worldwide, where general and
institutional studies show a consistent prevalence globally, except in certain popu-
lations that are discussed. However, there is a need for a standard, credible
assessment scale for acne, as the studies use a range of disparate measures. The
review also delves into special populations, such as those who do not have acne,
and the effect of potential determinants of acne on disease epidemiology [16].
In this paper, the authors note that precise grading of the severity of skin con-
ditions is essential for effective patient treatment, and this is particularly true for AV,
the most prevalent skin disease in adolescence. Medical professionals typically grade
acne using a combination of lesion counting and experience-based global estimation.
However, this can be challenging due to the similarity in appearance between
different severity levels of acne. To address this issue, the study explores the use of
label distribution learning (LDL) to accurately grade and count acne. The authors
propose a framework that takes into account the relationship between the number
of lesions and the severity of acne, optimized using a multi-task learning loss. A
new dataset, ACNE04, has been created and made publicly available, along with the
code, to evaluate the proposed framework [17].
A growing number of people are turning to computer-assisted diagnosis because it
is both efficient and effective. Although deep learning has made great strides in the iden-
tification of acne, a number of issues still need to be resolved, including colour shifts
caused by uneven illumination, variations in lesion size and densely packed breakouts.
The authors propose an acne detection network that combines composite feature refine-
ment, dynamic context enhancement and Mask-Aware Multi-Attention to address these
issues. To enhance the feature representation and lessen the negative effects of
unbalanced illumination, the composite feature refinement fuses high-level semantic
information with fine details. To adapt to changes in size and enrich contextual
information, the dynamic context enhancement makes use of multi-scale features. By
suppressing insignificant regions and emphasizing probable acne spots, the Mask-Aware
Multi-Attention improves the detection of densely concentrated and small acne lesions.
On the ACNE04 acne image dataset and the PASCAL VOC 2007 natural image dataset,
their technique achieves state-of-the-art results, and on PASCAL VOC 2007, it performs
comparably to earlier state-of-the-art methods [18, 19].
4.3 APPROACH
This section discusses a practical approach to acne detection from images using a
CNN-based transfer-learning regression model. The performance of this model is not
inferior to that of a specially trained dermatologist, and it helps produce a better
prediction of the total cost required for the treatment of such acne, and for plastic
surgery if undertaken [8]. The framework suggested here combines the facial
landmark model with the One Eye OpenCV system to retrieve skin zones from
multiple facial locations and eliminate unwanted noise. Addressing the
4.3.1 Haar Cascade
A Haar cascade was used in the pre-processing of the image data to crop faces out
of the images. It was observed that faces were not detected in some of the images;
hence, the model was trained with both the faces and the background as noise. Using
another deep learning model for face detection in the pre-processing stage is
recommended to improve the quality of the model (Figure 4.1). As for the dataset,
there is an imbalance in skin tones, with fewer dark-skin-tone images in the dataset.
A study from the University of California, San Francisco suggests that dark skin
has a stronger skin barrier, which may explain why fewer photographs of acne on
dark skin were found; this is a limitation of the model. Alternatively, it is proposed
to generate synthetic data for dark skin in order to make up for the imbalanced
dataset. The model helps the company identify consumers with acne-prone skin. It
can be deployed on the website to classify the consumer's skin and recommend a
more appropriate skincare product online without a salesperson. Moving forward,
the shopping experience can be improved with live predictions via video instead
of images. The electronic device can be placed in stores and can be helpful to
consumers, especially when the salesperson is away during store operations
(Figure 4.2).
FIGURE 4.1 Working of Haar cascade. (a) Edge features. (b) Line features. (c) Four-
rectangle features [20].
4.4 WORKFLOW
Here a dataset is filled with a total of 2156 images, of which 1010 contain acne
and 1150 do not. The dataset is divided in a 60-20-20 ratio for training, validation
and testing, respectively (see Table 4.1).
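The 60-20-20 split described here can be reproduced with a short, self-contained sketch; the image identifiers below are hypothetical stand-ins for the actual acne and non-acne image files.

```python
import random

def split_dataset(items, ratios=(0.6, 0.2, 0.2), seed=42):
    """Shuffle and split items into train/validation/test by the given ratios."""
    items = list(items)
    random.Random(seed).shuffle(items)  # seeded for reproducibility
    n = len(items)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    return (items[:n_train],
            items[n_train:n_train + n_val],
            items[n_train + n_val:])

# Hypothetical identifiers standing in for the 2156 dataset images.
images = [f"img_{i:04d}.jpg" for i in range(2156)]
train, val, test = split_dataset(images)
print(len(train), len(val), len(test))  # 1293 431 432
```

In a real pipeline the split would usually be stratified per class (acne/non-acne) so each subset preserves the class balance shown in Table 4.1.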
The general flow of the project is as follows:
As Figure 4.3 shows, raw images will be taken as input and pre-processed; after
that, a series of deep learning algorithms, such as the CNN and the Haar cascade,
will come into action, through which the unnecessary parts of the images are
removed. The images can then go directly to training and testing, and at the end the
final evaluation is done.
TABLE 4.1
Distribution of Images in Dataset
Folder Variable Type No. of Images
Train Non-acne 690
Train Acne 610
Validation Non-acne 230
Validation Acne 200
Test Non-acne 230
Test Acne 200
4.5 RESULTS
In the model, the Haar cascade classifier is used: an algorithm for object detection
that works no matter what size the object has (see Table 4.2).
The low complexity of the algorithm makes it friendly for real-time use. The
algorithm can be trained to recognize a wide range of items, including automobiles,
faces, animals and humans. Here it is used to remove the unnecessary parts of the
face image; in essence, face cropping is done (Figures 4.4 and 4.5).
Once the training and testing of the dataset of 2156 images are done, the model
first obtains the classification of acne (Figure 4.6) and non-acne images (Figure 4.7);
it then pre-processes the data and, to remove the unnecessary background, crops the
images using the Haar cascade algorithm.
TABLE 4.2
Accuracy Table
Precision Recall F1 Score Support
Non-acne 0.95 0.97 0.96 230
Acne 0.97 0.94 0.95 200
Accuracy – – 0.96 430
Macro avg 0.96 0.96 0.96 430
Weighted avg 0.96 0.96 0.96 430
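The precision, recall and F1 figures in Table 4.2 follow the standard definitions, which can be checked with a small helper; the confusion counts below are hypothetical, chosen only to illustrate the arithmetic.

```python
def prf(tp, fp, fn):
    """Precision, recall and F1 from confusion-matrix counts."""
    precision = tp / (tp + fp)          # fraction of predicted positives that are correct
    recall = tp / (tp + fn)             # fraction of actual positives that are found
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return precision, recall, f1

# Hypothetical counts for a binary acne / non-acne classifier.
tp, fp, fn = 90, 10, 6  # true positives, false positives, false negatives
p, r, f1 = prf(tp, fp, fn)
print(round(p, 2), round(r, 2), round(f1, 2))  # 0.9 0.94 0.92
```

The "macro avg" row is the unweighted mean of the per-class scores, while "weighted avg" weights each class by its support (230 non-acne, 200 acne).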
FIGURE 4.4 (a) Input dataset image (I). (b) Input dataset image (II).
FIGURE 4.5 (a) Output results (I). (b) Output results (II).
4.6 CONCLUSION
The model has an accuracy of 97% on validation data and 98% on test data, while
the F1 scores for both validation and test data are 98%. Compared to previous
models, these are the most precise results obtained. The model has a training score
of 99% for both accuracy and F1 score. To prevent overfitting, a dropout ratio of
50% was introduced in the CNN model.
A Haar cascade was used in the pre-processing of the image data to crop faces
from the images. It was observed that faces were not detected in some of the images;
hence, the model was trained with both the faces and the background as noise. Using
another deep learning model for face detection in the pre-processing stage is
recommended to improve the quality of the model. The model helps the company
identify consumers with acne-prone skin (Table 4.3). It can be deployed on the
website to classify the consumer's skin and recommend a more appropriate skincare
product online without a salesperson.
TABLE 4.3
Accuracy per Dataset
In conclusion, the model has high accuracy and F1 score; however, it does not
work well with darker skin tones, which can be further improved.
REFERENCES
1. Joshi, A., Khosravy, M., and Gupta, N. (eds). Machine Learning for Predictive
Analysis. Lecture Notes in Networks and Systems, vol. 141. Springer, Singapore. doi:
10.1007/978-981-15-7106-0_52.
2. Singh, V., Shokeen, V., and Singh, B. (2013). Face detection by Haar Cascade clas-
sifier with simple and complex backgrounds images using openCV implementation.
International Journal Advanced Technology Engineering Science 1, no. 12:33–38.
3. Padilla, R., Filho, C., and Costa, M. (2012 April). Evaluation of Haar cascade
classifiers for face detection. International Conference on Digital Image Processing
(ICDIP), Venice, Italy.
4. Suva, M. (2015). A brief review on acne vulgaris: Pathogenesis, diagnosis and treat-
ment. Research & Reviews Journal of Pharmacology 4:1–12.
5. Chovatiya, R. (2021). Acne treatment. JAMA 326, no. 20:2087. doi: 10.1001/jama.
2021.16599
6. Pathak, A. R., Pandey, M., and Rautaray, S. (2018). Application of deep learning
for object detection. Procedia Computer Science 132, no. 6:1706–1717. doi: 10.1016/
j.procs.2018.05.144
7. Chandan, G., Jain, A., and Mohana, M. (2018). Real Time Object Detection and Tracking
Using Deep Learning and OpenCV. 1305–1308. doi: 10.1109/ICIRCA.2018.8597266.
8. Zhao, T., Zhang, H., and Spoelstra, J. (2019). A computer vision application for assess-
ing facial acne severity from selfie images. arXiv preprint arXiv:1907.07901
9. Amos, B., Ludwiczuk, B., and Satyanarayanan, M. (2016). Openface: A general-
purpose face recognition library with mobile applications. CMU School of Computer
Science.
10. Choi, J. Y., Yoo, T. K., Seo, J. G., Kwak, J., Um, T. T., and Rim, T. H. (2017). Multi-
categorical deep learning neural network to classify retinal images: A pilot study
employing small database. PLoS One 12, no. 11:e0187336.
11. Xiao, Y., Chen, L., Jing, D., Deng, Y., Chen, X., Su, J., and Shen, M. (2019 Feb 22).
Willingness-to-pay and benefit-cost analysis of chemical peels for acne treatment
in China. Patient Prefer Adherence 13:363–370. doi: 10.2147/PPA.S194615. PMID:
30863024; PMCID: PMC6391120.
12. Chantharaphaichi, T., Uyyanonvara, B., Sinthanayothin, C., and Nishihara, A. (2015).
Automatic Acne Detection for Medical Treatment, 2015 6th International Conference
of Information and Communication Technology for Embedded Systems (IC-ICTES).
IEEE, 1–6.
13. Tan, A. U., Schlosser, B. J., and Paller, A. S. (2017 Dec 23). A review of diagnosis
and treatment of acne in adult female patients. The International Journal of Women's
Dermatology 4, no. 2:56–71. doi: 10.1016/j.ijwd.2017.10.006. PMID: 29872679;
PMCID: PMC5986265.
14. Wilson, P. I. and Fernandez, J. (2006). Facial feature detection using Haar classifiers.
Journal of Computing Sciences in Colleges 21, no. 4:127–133.
15. Williams, H. C., Dellavalle, R. P., and Garner, S. (2012). Acne vulgaris. The Lancet
379, no. 9813:361–372.
16. Tan, J. K. and Bhate, K. (2015 Jul). A global perspective on the epidemiology of acne.
British Journal of Dermatology 172, no. Suppl 1:3–12. doi: 10.1111/bjd.13462. PMID:
25597339.
17. Wu, X. et al. (2019). Joint Acne Image Grading and Counting via Label Distribution
Learning, 2019 IEEE/CVF International Conference on Computer Vision (ICCV),
Seoul, Korea (South). pp. 10641–10650, doi: 10.1109/ICCV.2019.01074
18. Min, K., Lee, G.-H., and Lee, S.-W. (2021). ACNet: Mask-Aware Attention with
Dynamic Context Enhancement for Robust Acne Detection. 2021 IEEE International
Conference on Systems, Man, and Cybernetics (SMC).
19. Ghadekar, P., Bongulwar, A., Jadhav, A., Ahire, R., Dumbre, A., and Ali, S. (2023).
Ensemble Approach to Solve Multiple Skin Disease Classification Using Deep
Learning, International IEEE Conference on Device Intelligence, Computing and
Communication Technologies, March 17–18, 2023, Dehradun India.
20. Haar Cascade Classifier Image. https://www.google.com/search?q=haar+cascade+
classifier+image&source=lnms&tbm=isch&sa=X&v.
5 Key Driving Technologies
for Industry 4.0
Anesh D Sundar ArchVictor and
C. Emilin Shyni
5.1 INTRODUCTION
The Fourth Industrial Revolution, commonly referred to as Industry 4.0, is a revo-
lutionary period in manufacturing and production where cutting-edge technologies
converge to produce intelligent, linked, and extremely efficient systems. Industry
4.0’s core technologies, known as Key Driving Technologies, give organizations the
tools they need to increase productivity, streamline operations, achieve previously
unheard-of levels of automation, and make data-driven decisions.
The driving force behind Industry 4.0 as shown in Figure 5.1 is the seamless
integration of physical and digital systems, blurring the lines between the physical
and virtual worlds. A wide array of cutting-edge technologies converges to enable
this integration, allowing industries to usher in a new era of manufacturing and
production.
Internet of Things (IoT) enables smart manufacturing processes, predictive mainte-
nance, and real-time monitoring, leading to increased efficiency and reduced down-
time. IoT forms the backbone of Industry 4.0, interconnecting devices and machines
to collect and exchange data. Recent research in the Journal of Manufacturing
Systems by Li et al. [1] discusses the application of IoT in industrial environments,
highlighting its role in enabling predictive maintenance, real-time monitoring, and
process optimization. The massive amounts of data generated by IoT and other
digital sources necessitate advanced analytics, in the form of big data analytics,
to extract meaningful insights. In the book "Big Data Analytics for Smart
Manufacturing," published
in 2021, Goh et al. [2] explore the application of big data analytics in Industry
4.0, covering topics such as data management, machine learning (ML), and predic-
tive maintenance. Advanced data analysis tools and algorithms assist businesses
in extracting useful insights, trends, and correlations from data, allowing them to
make data-driven decisions and optimize operations for improved performance and
cost-effectiveness. Artificial intelligence (AI) and ML are critical components of
Industry 4.0 because they enable machines and systems to learn from data and
improve their performance over time. AI-powered systems can automate complex
tasks, support predictive maintenance, optimize production schedules, and enhance
DOI: 10.1201/9781003432319-5
Cybersecurity and data privacy: As Industry 4.0 relies heavily on IoT devices
and data exchange, cybersecurity and data privacy are critical concerns.
Ensuring the security of IoT devices, networks, and data transmission is
essential to preventing cyber threats and protecting sensitive information.
According to statistics [8], as shown in Figure 5.4, IoT device utilization will be
very high in 2025 when compared to non-IoT devices.
According to the survey [12], global data volume usage in 2015 was 15.5
zettabytes, and this figure is expected to rise to 185 zettabytes by 2025, as shown
in Figure 5.6.
Overall, big data analytics enables firms in Industry 4.0 to make more informed
decisions, improve operational efficiency, enhance customer experiences, and drive
innovation, resulting in greater competitiveness and growth.
Physical feedback: CPS provides feedback from the physical world to digital
systems, allowing them to respond to real-world events and conditions.
Predictive capabilities: CPS uses data analytics and AI to predict and prevent
potential issues, optimizing processes and improving efficiency.
Safety and security: CPS employs security measures to safeguard the integrity,
confidentiality, and availability of data and control systems, hence reducing
cybersecurity threats.
Based on survey data [15], Figure 5.9 depicts the origin of the greatest number
of cyberattacks. The values of the highest number of cyber assaults per country are
shown in Table 5.1. Cyber-physical systems are a key component of Industry 4.0,
allowing for the seamless integration of physical processes with digital information.
They enable industries to develop more responsive, efficient, and innovative systems,
resulting in higher productivity, sustainability, and competitiveness.
TABLE 5.1
Origin of the Highest Number of Cyber Threats
Country        Number of Cyberattacks
Netherlands    2.2
Indonesia      2.41
Russia         2.46
Thailand       2.5
Vietnam        4.23
Germany        5.1
India          5.33
Brazil         5.63
USA            17.05
focus on higher value activities. Here are some examples of how RPA contributes to
Industry 4.0:
Lee et al.’s survey [16] also highlighted the pitfalls in Table 5.2.
Addressing these risks and challenges requires careful planning, effective change
management, a thorough process assessment, and collaboration between IT,
operations, and business units. Figure 5.10 shows the approximate percentage of risk
in RPA adoption.
By taking a holistic approach and addressing these challenges proactively,
organizations can maximize the benefits of RPA adoption in Industry 4.0 while
minimizing potential drawbacks.
TABLE 5.2
Approximate Percentage of Risk in the Adoption of RPA

Challenges             Approx. Percentage of Risk in Adoption
Process ability        26
Technical complexity   23
Capacity               20
Ownership              16
Operational risk       14
Other                  1
5.2.7 Cloud Computing
Cloud computing is a fundamental enabler of Industry 4.0, providing the necessary
infrastructure and capabilities to support the digital transformation of industries.
Cloud computing provides scalable and adaptable resources for data storage, pro-
cessing, and analytics, allowing businesses to use the power of new technologies
such as IoT, AI, and big data.
Cloud computing is a key technology for Industry 4.0, allowing firms to leverage
sophisticated technology capabilities, analyze data, and drive innovation in manu-
facturing, supply chain management, and other industrial processes.
1. Set clear objectives: Define specific goals and outcomes you want to
achieve through Industry 4.0 implementation. Align these objectives with
your organization’s overall vision and strategy.
2. Leadership commitment: Gain buy-in and commitment from top leader-
ship. Ensure they understand the importance of Industry 4.0 and are actively
involved in driving the transformation.
3. Assess the existing situation: Examine your company's current technological infrastructure, processes, and workforce skills. Determine your strengths, shortcomings, and areas for improvement.
4. Create a roadmap: Make a thorough implementation roadmap, including
the steps, deadlines, and resources needed for the Industry 4.0 journey.
Divide the blueprint into doable stages.
5. Technology selection: Choose the right technologies (IoT, AI, big data, etc.)
that align with your objectives. Ensure they address your specific chal-
lenges and provide meaningful solutions.
6. Pilot projects: Begin with pilot projects to test and validate the technology
of choice. Choose areas where the influence can be quantified and concrete
advantages may be demonstrated.
7. Data strategy: Develop a comprehensive data strategy that includes data
collection, storage, analysis, and visualization. Ensure data quality, secu-
rity, and compliance.
8. Integration and connectivity: Establish a robust connectivity infrastructure
to link various devices, systems, and processes. Implement IoT devices and
sensors for data collection.
The challenge of upskilling the workforce to operate and manage these advanced
technologies looms large, demanding a commitment to continuous learning and
development. Legacy systems and regulatory hurdles often act as bottlenecks,
impeding the seamless integration of new technologies. Addressing these challenges
necessitates a holistic approach, where visionary leadership, strategic planning, and
adept change management converge to pave the way forward.
The symphony of Industry 4.0 is not orchestrated by a single maestro; rather, it is a col-
laborative endeavor that encompasses governments, industries, academia, and individuals.
Collaborative ecosystems and partnerships foster innovation and knowledge exchange,
accelerating the pace of technological advancement. Governments play a pivotal role in
shaping policies, regulations, and standards that nurture the growth of Industry 4.0.
Industries and businesses, driven by a spirit of innovation and adaptability, fuel
the engine of transformation, leveraging cutting-edge technologies to gain a com-
petitive edge. Academia becomes the crucible of knowledge creation, where research
and development pave the way for novel applications and breakthroughs.
As Industry 4.0 unfolds its transformative potential, ethical considerations and
sustainability come to the forefront. The sheer scale of data collection and process-
ing raises questions about data privacy, security, and ownership. The responsible
and ethical use of AI, particularly in decision-making processes, demands careful
consideration to ensure fairness, transparency, and accountability.
Moreover, the sustainable deployment of Industry 4.0 technologies is a moral
imperative. Organizations must embrace environmentally friendly practices, mini-
mize energy consumption, and harness technology to address pressing global chal-
lenges, such as climate change and resource scarcity.
5.6 CONCLUSION
Industry 4.0 technologies are transforming sectors and delivering unprecedented
levels of efficiency, productivity, and competitiveness. The adoption of Industry 4.0
technologies opens up a plethora of options across multiple industries. IoT integrates
the physical and digital worlds, allowing for real-time data collection, remote moni-
toring, and predictive analytics. AI enables machines to learn, reason, and make
intelligent decisions, transforming processes such as automation, data processing,
and decision-making. Data analytics unveil hidden patterns, trends, and correlations,
guiding strategic choices and unleashing innovation. CPS bridge the gap between
physical processes and digital systems, creating smart, interconnected environments
that optimize operations and enhance safety.
However, the path to embracing Industry 4.0 is not without challenges.
Organizations must navigate barriers like technological complexity, high initial
costs, data security concerns, and resistance to change. Skill shortages, integration
issues, and regulatory compliance complexities add further layers of complexity to
the adoption process. Yet, these challenges are opportunities for growth and innova-
tion. By addressing these hurdles head-on and implementing a well-crafted strategy,
organizations can harness the full potential of Industry 4.0. Success requires vision-
ary leadership, a commitment to continuous learning, and a culture that embraces
change and collaboration.
As we stand at the cusp of a new era defined by Industry 4.0, the fusion of
technology, data, and human ingenuity holds the promise of a brighter and more
interconnected future. The organizations that embrace these technologies, adapt
their operations, and empower their workforce will be the trailblazers in this trans-
formative journey, shaping industries and redefining what is possible in the digital
age.
LIST OF ABBREVIATIONS
AI artificial intelligence
AR augmented reality
CPS cyber-physical systems
IoT Internet of Things
ML machine learning
RPA robotic process automation
VR virtual reality
REFERENCES
1. Li, S., Li, X., & Ma, X. The application of IoT in industrial environments: A systematic
review. Journal of Manufacturing Systems, 64, 261–271, 2022.
2. Goh, M., Wu, D., & Kim, T. H. (Eds.). Big Data Analytics for Smart Manufacturing.
Springer, 2021.
3. Rauschecker, U., Shen, W., & Wang, L. AI-driven predictive maintenance for industrial
equipment. Journal of Manufacturing Systems, 72, 307–319, 2023.
4. Azevedo, A., Barata, J., & Tovar, E. (Eds.). Cyber-Physical Systems for Industry 4.0.
CRC Press, 2022.
5. Gao, W., Zhang, Y., & Zhu, W. Additive manufacturing: Technology, applications, and
opportunities in industry 4.0. Additive Manufacturing, 39, 101922, 2021.
6. Ivezic, N., Howe, A., & Fussell, D. Augmented Reality and Virtual Reality in Industry
4.0. CRC Press, 2023.
7. Guo, Q., Li, X., & Zhu, Z. Cloud and edge computing for smart manufacturing: A
review. Robotics and Computer-Integrated Manufacturing, 79, 101998, 2022.
8. Vailshery, L. S. Internet of Things (IoT) and Non-IoT Active Device Connections
Worldwide from 2010 to 2025. Technology & Telecommunications, 2022.
9. Taylor, P. Forecast Revenue Big Data Market Worldwide 2011–2027, www.statista.com,
2022.
10. Archenaa, J., & Mary Anita, E. A. A survey of big data analytics in healthcare and
government. 2nd International Symposium on Big Data and Cloud Computing, 50,
408–413, 2015
11. Di Gesualdo, D. Artificial Intelligence for Industry 4.0, www.eeweb.com, 2022.
12. Berisha, B., & Meziu, E. Big Data Analytics in Cloud Computing: An Overview,
www.researchgate.net/publication/348937287_Big_Data_Analytics_in_Cloud_
Computing_An_overview, http://dx.doi.org/10.13140/RG.2.2.26606.95048, 2021.
13. Mosterman, P. J., & Zander, J. Industry 4.0 as a cyber-physical system study. Software and Systems Modeling, 15, 17–29, 2016.
14. Ervural, B. C., & Ervural, B. Overview of Cyber Security in the Industry 4.0 Era,
Managing The Digital Transformation, Springer Series in Advanced Manufacturing,
http://dx.doi.org/10.1007/978-3-319-57870-5_16, 2018.
15. DavidPur, N. Which Countries Are Most Dangerous? Cyber Attack Origin – By
Country, www.cyberproof.com, 2022.
16. Lee, J., Lee, B., & Lee, D. The impact of robotic process automation (RPA) on busi-
ness process outsourcing (BPO). Technology Analysis & Strategic Management, 31(11),
1367–1381, 2019.
17. Schleicher, D., & Schöbel, A. Immersive Technologies in Industry 4.0—Current Status
and Future Prospects. Augmented and Virtual Reality in Operations Management,
15–35, 2020.
18. Sosa, R., and Oliveira, L. Industry 4.0 and Augmented Reality: Opportunities and Challenges. Production Engineering Research and Development, 10, 23–32, 2020.
6 Opportunities and
Challenges of Digital
Connectivity for
Industrial Internet
of Things
Mahesh Visveshwarappa
6.1 INTRODUCTION
The adoption of the Internet of Things (IoT) in industry (manufacturing) is referred to as the Industrial IoT (IIoT). IoT refers to the network of smart systems and administration platforms that work together for the benefit of society. Because industrial equipment must cooperate and operate synchronously, interconnection of equipment is not a novel idea. The systems exchange valuable information and interconnect with each other, but traditionally this interaction has been permitted only within a specific area of a factory or other industrial setting [1]. An explosion of smart devices and technologies has made it possible for the physical world to be represented as digital entities. Thanks to the IIoT, the drivers of this technological advancement are machine learning, robotics, and artificial intelligence (AI), combined with higher-speed internet. The IIoT, commonly referred to as Industry 4.0 or the Industrial Internet, is one such technology driving transformation in the manufacturing sector. The IIoT can achieve previously unheard-of levels of efficiency, production, and performance by fusing machine-to-machine interaction with big data analysis. According to Accenture's analytical reports, the IIoT will revolutionize numerous industries that together produce almost two-thirds of global economic output, resulting in 14.2 trillion dollars of economic growth by 2030. However, as it has developed, new opportunities have emerged alongside new problems and challenges for corporate executives.
As a result, businesses in a variety of sectors are under increasing pressure to
navigate and get over the challenges associated with the development and adop-
tion of the IIoT [2]. In order to deliver interoperability, competence, and scalability,
industrial IoT uses sensor-driven computing, intelligent machines, and data analyt-
ics applications. This directly encourages automation in crucial infrastructure and
boosts business productivity.
The most significant difficulty is protecting the Industrial Infrastructure and all
of its components while working toward the objectives of increased productivity.
Concern is raised by cyberattacks on critical infrastructure and industries because
DOI: 10.1201/9781003432319-6
of the enormous losses they cause. Therefore, it is important to take away from these
occurrences the fact that attackers are now targeting industries and that this problem
requires immediate attention. IIoT is frequently referred to as the integration of two
important technologies called operational technology (OT) and information technol-
ogy (IT), where OT is the plant network where manufacturing is done and IT is the
enterprise network. To avoid the compromise of IIoT infrastructure, these two men-
tioned technologies have diverse security requirements to be taken into account [1].
Therefore, both the business and academic worlds have been urged to create new wireless communication technologies through which industrial contexts can achieve all of the aforementioned IIoT needs. For instance, the authors of [3] suggested a 5G-enabled IIoT architecture, which demonstrates how 5G, with the help of massive machine-to-machine communication, enhanced mobile broadband (eMBB), and ultrareliable and low-latency communication (URLLC), can greatly help in the automation of various industries [4].
1. Industrial IoT methods for minimizing time: Saving time at work is among the most significant advantages of IIoT for the industrial sector. IoT can assist with tasks involving the configuration, registration, upkeep, updating, and monitoring of machines, and some of these procedures can be managed remotely. Industrial control systems are an essential component of IoT development. According to the most recent Deloitte report, up to 30% of control systems will have AI-powered analytical capabilities within the next three years [5].
2. Operational accuracy and reducing the likelihood of errors: In the produc-
tion sector, even a minor error can cost a lot of money. Intelligent, auto-
mated methods have made it feasible to prevent human errors that could be
the consequence of tiredness or lack of concentration. The productivity of
the entire firm can be increased by a well-programmed system, and since
it operates steadily, it is immune to being overworked, and it is more exact
and precise than human beings.
3. Enhanced operational efficiency: German production companies anticipate
a 12% performance gain after deploying intelligent solutions, according to
a PwC (PricewaterhouseCoopers) poll. The adoption of IoT solutions in
the sector allows staff to focus on tasks that advance development while
machines do repetitive duties. Employee retention is increased, and their
overall dedication and satisfaction are influenced when they are relieved of
tedious and unsatisfying jobs [7].
6.5 CONCLUSION
The security challenges must be taken into account from the very beginning given
the rise in potential threats to the IIoT. When introducing IIoT architecture or after
making changes to an existing architecture, a thorough penetration test is required
because the changes may open up new attack vulnerabilities that weren’t there before.
The various security risks, difficulties, and solutions mentioned in this chapter will
aid in preventing assaults on industries and provide chances for new business owners
to contribute to the development of the IIoT.
REFERENCES
1. A. C. Panchal, V. M. Khadse and P. N. Mahalle, “Security Issues in IIoT: A
Comprehensive Survey of Attacks on IIoT and Its Countermeasures,” 2018 IEEE
Global Conference on Wireless Computing and Networking (GCWCN), Lonavala,
India, 2018, pp. 124–130, doi: 10.1109/GCWCN.2018.8668630
2. C. J. Turner, J. Oyekan, L. Stergioulas and D. Griffin, “Utilizing Industry 4.0 on the
Construction Site: Challenges and Opportunities,” in IEEE Transactions on Industrial
Informatics, vol. 17, no. 2, pp. 746–756, Feb. 2021, doi: 10.1109/TII.2020.3002197
3. E. T. Nakamura and S. L. Ribeiro, “A Privacy, Security, Safety, Resilience and
Reliability Focused Risk Assessment Methodology for IIoT Systems Steps to Build and
Use Secure IIoT Systems,” 2018 Global Internet of Things Summit (GIoTS), Bilbao,
Spain, 2018, pp. 1–6, doi: 10.1109/GIOTS.2018.8534521
4. S. S. A. Abbas and K. L. Priya, “Self Configurations, Optimization and Protection
Scenarios with Wireless Sensor Networks in IIoT,” 2019 International Conference on
Communication and Signal Processing (ICCSP), Chennai, India, 2019, pp. 0679–0684,
doi: 10.1109/ICCSP.2019.8697973
5. https://solwit.com/en/posts/industrial-iot-opportunities-and-threats/ website accessed
on Aug. 2023.
6. A. Artemenko, “Keynote: Advances and Challenges of Industrial IoT,” 2021 IEEE
International Conference on Pervasive Computing and Communications Workshops
and other Affiliated Events (PerCom Workshops), Kassel, Germany, 2021, pp. 526–526,
doi: 10.1109/PerComWorkshops51409.2021.9431146
7. A. Chowdhury and S. A. Raut, “Benefits, Challenges, and Opportunities in Adoption of
Industrial IoT (March 28, 2019),” in International Journal of Computational Intelligence
& IoT, vol. 2, no. 4, 2019, Available at SSRN: https://ssrn.com/abstract=3361586
7 Malicious QR Code
Detection and Prevention
Premanand Ghadekar, Faijan Momin,
Tushar Nagre, Sanika Desai,
Prathamesh Patil, and Vinay Aher
7.1 INTRODUCTION
Nowadays, everyone has a smartphone, and UPI has transformed the banking system; many links are now shared in the form of a quick response (QR) code. After 2020, due to the pandemic, demand increased for a standard that could carry large amounts of information without humans touching physical items. The QR code provides that solution.
A QR code consists of black and white modules structured in a square shape on a plain-colored background. The information to be encoded must be in one of the four standard modes.
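The four standard encoding modes are numeric, alphanumeric, byte, and kanji. As a rough sketch of how a generator might pick the most compact mode a payload allows (kanji detection is omitted here, so such input falls back to byte mode):

```python
# Character set permitted by the QR alphanumeric mode.
ALPHANUMERIC = set("0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ $%*+-./:")

def pick_mode(payload: str) -> str:
    """Choose the most compact standard QR encoding mode for `payload`.
    Kanji detection is omitted in this sketch."""
    if payload.isdigit():
        return "numeric"
    if all(ch in ALPHANUMERIC for ch in payload):
        return "alphanumeric"
    return "byte"

print(pick_mode("1234567890"))            # numeric
print(pick_mode("HELLO WORLD/2024"))      # alphanumeric
print(pick_mode("https://example.com"))   # byte (lowercase letters)
```

More compact modes pack more characters into the same number of modules, which is why a generator prefers numeric over alphanumeric over byte whenever the payload permits.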
However, fraudsters also distribute malicious QR codes, and one cannot tell whether a QR code is malicious simply by looking at it.
The authors of [2] proposed a machine learning-based detection system that uses a convolutional neural network (CNN): to distinguish between benign and malicious QR code/JPEG images, they extracted 10 discriminative characteristics and fed them to a machine learning classifier.
In this paper, authors [3] have designed a new two-level QR code in order to increase
the security of QR codes. They developed two types of QR codes: the public level, which
can be read by any QR code reader, and the private level, which employs textured pat-
terns as black modules and is encoded for private message transfer. Also private methods
do not affect or interfere with the process of the public level; thus, smooth functioning of
this two-level QR code takes place. This study [4] demonstrates how to identify perspective distortion in QR codes using edge direction and edge projection analysis.
The authors [5] of this work used a watermark technique for threat detection to
develop a blind digital image based on a QR code. The suggested method provides a
framework through which a modified binary form of data can be embedded into the
cover image’s DWT domain and used to identify images or QR codes.
This paper [6] shows how QR codes can be used to attack systems. Also authors
have discussed different phishing techniques from the point of view of attackers and
suggested possible solutions to it.
The authors [7] of this study have suggested a method for reading QR codes based
on the correlation of histograms between the reference image for the QR code and
the input picture. The paper also proposes a new algorithm which provides good
accuracy for the trained model.
This paper [8] describes the different worm attacks on the QR code and also provides countermeasures for them.
This paper [9] discusses different malicious QR codes and also demonstrates how to identify perspective distortion in QR codes using edge direction.
The authors [10] of this study have suggested the specification of the data matrix
of the barcode symbol.
This study [11] demonstrates the QR code libraries and provides a case study on
their application in the NITK central library.
In this paper [12], the authors identify QR code perspective distortion based on an analysis of edge projections and edge directions.
The proposed paper [13] presents different QR patterns, their invariant moments, and their localization.
In this proposed paper [14], the authors have analyzed the BRISK (Binary Robust Invariant Scalable Keypoints) algorithm.
This paper [15] presents quick QR code detection and recognition in high-resolution photos.
1. Finder pattern: This pattern allows a reader to detect and identify the QR code so that it can be read at any angle.
7.5.1 Quishing Attack
The first type of attack is quishing [16], which is similar to phishing attacks.
In this attack, a phishing page that hackers have designed to steal information like
user credentials, personal data, or some sensitive information is sent to a victim [17].
This attack can be mitigated by making the QR code dynamic, because dynamic codes offer more security than static ones. An aging time can also be added: if the user does not use or activate the QR code within a specified time, it becomes invalid. The attack can also be prevented by encrypting the data [18].
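The "aging time" idea above can be sketched as a signed payload carrying an embedded expiry timestamp (the secret key, field layout, and TTL below are illustrative assumptions, not part of any QR or UPI standard):

```python
import base64
import hashlib
import hmac

SECRET = b"demo-issuer-key"  # hypothetical issuer secret

def make_token(data: str, ttl: int, now: int) -> str:
    """Pack data + expiry into the string a dynamic QR code would carry."""
    body = f"{data}|{now + ttl}".encode()
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return base64.urlsafe_b64encode(body).decode() + "." + sig

def verify_token(token: str, now: int) -> bool:
    """Reject tampered payloads and codes scanned after their expiry."""
    try:
        b64, sig = token.rsplit(".", 1)
        body = base64.urlsafe_b64decode(b64.encode())
    except ValueError:
        return False
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False
    expires = int(body.rsplit(b"|", 1)[1])
    return now < expires

token = make_token("upi://pay?pa=merchant@bank", ttl=300, now=1_700_000_000)
print(verify_token(token, now=1_700_000_100))  # True: within the 5-minute window
print(verify_token(token, now=1_700_000_400))  # False: the code has aged out
```

The HMAC signature catches tampering with the payload, while the timestamp enforces the aging time, so a copied or replayed code stops working after the window closes.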
7.5.2 QRLjacking
The second type of attack is QRLjacking [19], short for QR code login jacking [20]. In this attack, hackers spread malware to the victim's device: the victim scans a QR code on a fake website provided by the hackers, and the hackers then gain full access to the victim's account. This attack can be mitigated by service providers sending email/SMS notifications to users [21].
Additional authentication methods can be added to user login. For example, sound-
based authentication can be used.
7.6.2 Decision Tree
Decision tree algorithm is used for classification and regression problem analysis
based on the nodes of the tree (Figure 7.4).
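As a toy sketch of the node-splitting idea (the single feature and labels below are invented stand-ins for real QR descriptors), one tree node picks the threshold that minimizes the weighted Gini impurity of its two children:

```python
def gini(labels):
    """Gini impurity of a list of 0/1 labels."""
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def best_split(xs, ys):
    """Find the threshold on one feature that yields the lowest
    weighted Gini impurity across the two child nodes."""
    best = (None, float("inf"))
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[1]:
            best = (t, score)
    return best

# Hypothetical feature: number of redirect hops found behind the QR URL.
hops = [0, 1, 1, 4, 5, 6]
malicious = [0, 0, 0, 1, 1, 1]
print(best_split(hops, malicious))  # (1, 0.0): a perfectly pure split
```

A full decision tree applies this split search recursively at every node and over every feature; a regression tree swaps Gini impurity for a variance criterion.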
7.7 DIAGRAMS
In the proposed system (Figure 7.6) [24], the camera of the system first scans a QR code image. For data preprocessing, the model uses techniques such as scaling and translation.
Training and testing the model [25]: For training and testing the proposed model, the dataset is split into a 70% training set and a 30% testing set across two classes.
After preprocessing, the QR code images are used for training and testing, and features are extracted from each QR code image [26]. For feature extraction, selected patterns are used as features; there are three main ones: the finder pattern, the alignment pattern, and the timing pattern. The BRISK algorithm is applied here [19]: it selects these patterns, finds relations between them, and picks the best features using scale invariance and rotational invariance [27]. Once the features are selected, separate files of the extracted features are created and converted [28] into CSV files.
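The extracted-features-to-CSV step can be sketched as follows (the three pattern counts and the field names are placeholders for the real BRISK descriptors, which the chapter does not enumerate):

```python
import csv
import io

# Hypothetical per-image feature rows: pattern counts plus a class label.
rows = [
    {"finder": 3, "alignment": 1, "timing": 2, "label": "benign"},
    {"finder": 3, "alignment": 0, "timing": 1, "label": "malicious"},
]

def to_csv(feature_rows):
    """Serialize feature dictionaries into the CSV text the classifier consumes."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=["finder", "alignment", "timing", "label"])
    writer.writeheader()
    writer.writerows(feature_rows)
    return buf.getvalue()

text = to_csv(rows)
print(text.splitlines()[0])  # header row: finder,alignment,timing,label
```

Writing one row per image with a trailing label column is the conventional layout that downstream training code expects.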
Once feature extraction is complete [29], the system checks, based on the QR code type, whether it uses SSL (Secure Sockets Layer), redirects the user to a malicious website, or uses the UPI protocol, and classifies the code accordingly.
After detecting the QR code type, a random forest algorithm classifies the codes into two classes, malicious and normal (benign), which was our main objective [16]. The random forest takes the features extracted by the BRISK algorithm as input parameters, and its decision tree [30] classifiers label each QR code as malicious or normal based on the majority of the classifiers' votes. The proposed model selected random forest as the classifying algorithm because its accuracy was the highest. Finally, the result is reported as a malicious or normal QR code.
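The majority-vote step can be sketched with a few deliberately simple threshold classifiers standing in for the trained decision trees (the feature indices and thresholds here are illustrative):

```python
from collections import Counter

def make_stump(feature_idx, threshold):
    """A one-rule 'tree': vote malicious when the feature exceeds the threshold."""
    return lambda x: "malicious" if x[feature_idx] > threshold else "benign"

forest = [
    make_stump(0, 2),    # e.g. redirect hops behind the QR URL
    make_stump(1, 0.5),  # e.g. fraction of obfuscated URL characters
    make_stump(0, 4),
]

def predict(forest, x):
    """Random-forest-style prediction: each tree votes, majority wins."""
    votes = Counter(tree(x) for tree in forest)
    return votes.most_common(1)[0][0]

print(predict(forest, (5, 0.9)))  # all three stumps vote malicious
print(predict(forest, (3, 0.1)))  # one vote vs. two: benign wins
```

A real random forest trains each tree on a bootstrap sample with random feature subsets, but the final decision is exactly this vote tally over the ensemble.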
7.8 IMPLEMENTATION
Our solution addresses two problems. First, the efficiency of existing QR code scanners is currently inadequate, so the proposed model uses an improved random forest algorithm to increase accuracy and efficiency. Second, malicious QR codes must be detected [6]: because QR code phishing is a major issue nowadays, the proposed model secures the user's side of QR scanning by showing a proper security warning so that users can make better decisions.
TABLE 7.1
Experimental Results
Sr. No  Algorithm      Existing Accuracy  Proposed System Accuracy
1 KNN 69.61% 71.61%
2 Decision tree 65.83% 76.76%
3 Random forest 70.45% 81.46%
7.12 CONCLUSION
With the help of the model, the detection of malicious QR codes becomes easy. Suspicious QR codes will be detected, which is helpful for authentication, digital information, and especially payment transactions. The model will provide strong security, and, most importantly, people will adapt to it: every time they scan a QR code, they can check it through the model, and the number of frauds in digital transactions will drop substantially.
REFERENCES
1. Kharraz, A., Kirda, E., Robertson, W., Balzarotti, D., and Francillon, A. 2014. Optical
delusions: A study of malicious QR codes in the wild. In 2014 44th Annual IEEE/IFIP
International Conference on Dependable Systems and Networks, Atlanta, GA, USA,
2014, pp. 192–203, doi: 10.1109/DSN.2014.103.
2. Shaikh, A., Kotavadekar, R., Sawant, S., and Landge, S. 2020. Machine learning-based solution for the detection of malicious JPEG images. IEEE Access, doi: 10.1109/ACCESS.2020.2969022.
3. Tkachenko, I., Guichard, C., Strauss, O., Gaudin, J.-M., Puech, W., Senior Member,
IEEE, and Destruel, C. 2016. Two-level QR code for private message sharing and docu-
ment authentication, IEEE transactions on information forensics and security, vol. 11(2).
4. Singh, P. K., Zhu, J., Huo, L., and Pavlovich, P. A. 2021. Research on QR image code recognition system based on artificial intelligence algorithm, vol. 30.
5. Thulasidharan, P. P., and Nair, M. S. 2015. QR code based blind digital picture water-
marking with attack detection code. AEU – International Journal of Electronics and
Communications, 69(7):1074–1084.
6. Kieseberg, P., Leithner, M., Mulazzani, M., Munroe, L., Schrittwieser, S., Sinha, M., and Weippl, E. 2010. Security of the QR code. In MoMM'2010 – The Eighth International Conference on Advances in Mobile Computing and Multimedia, 8–10 November 2010, Paris, France.
7. ISO/IEC 24778:2008. Specification for an Aztec Code barcode symbol. Geneva, Switzerland: ISO.
8. Sharma, V. November 2011. An analytical survey of recent worm attacks. International
Journal of Computer Science and Network Security, 11.
9. Sharma, V., 2011, A study of malicious QR codes. International Journal of Computer
Science and Network Security. 9(5).
10. ISO 16022:2006. Specification for the DataMatrix barcode symbol.
11. Shettar, I. M. 2016. Quick response (QR) codes libraries: Case study on applications in
the NITK Library, National Conference on Future Librarianship, TIFR BOSLA; pages
129, vol 34.
12. Karrach, L., Pivarčiová, E., and Božek, P. 2020. Identification of QR code perspective distortion based on edge directions and edge projections analysis. Journal of Imaging, 6(7):67.
13. Rajan, R., and Vibhor, S. 2017. Hu invariant moments-based localisation of QR code
patterns by tribal and ZAZ. International Journal of Advanced Computer Science and
Applications(IJACSA), 8(9):162–72.
14. Leutenegger, S., Chli, M., and Siegwart, R. Y. 2011. BRISK: Binary Robust Invariant Scalable Keypoints. Autonomous Systems Lab, ETH Zurich.
15. Szentandrási, I., Herout, A., and Dubská, M. May 2012. Quick QR code detection
and recognition in high-resolution photos. In The 28th Spring Conference on Computer
Graphics Proceedings. Association for Computing Machinery, New York, US, pp. 129–36.
16. Jain, A., and Chen, Y. 1993. Bar code localization using texture analysis. In Second
International Conference on Document Analysis and Recognition, pp. 41–44.
17. Alfthan, J. 2008. Robust Detection of Two-Dimensional Barcodes in Blurry Images.
Master’s thesis, KTH Computer Science and Communication, Stockholm, SE.
18. Liu, Y., Yang, J., and Liu, M. 2008. Recognition of QR code with mobile phones. In
Chinese Control and Decision Conference, CCDC 2008, 203–206.
19. Mikolajczyk, K., and Schmid, C 2005. A performance evaluation of local descriptors.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(10):1615–1630.
20. Dalal, N., and Triggs, B. 2005. Histograms of oriented gradients for human detection.
Computer Vision and Pattern Recognition, 2005. IEEE Computer Society Conference,
@inrialpes.fr
21. Arnould, S., Awcock, G., and Thomas, R. 1999. Remote bar-code localisation using
mathematical morphology. In Seventh International Conference on Image Processing
and Its Applications, vol. 2, pp. 642–646.
22. JOUR, Chang, Jae. 2014. An introduction to using QR codes in scholarly journals,
Science Editing 1(2):113–117, 10.6087/kcse.2014.1.113, 2014/08/01
23. Muniz, R., Junco, L., and Otero, A. 1999. A robust software barcode reader using
the Hough transform. In International Conference on Information Intelligence and
Systems, 313–319.
24. Duda, R. O., and Hart, P. E. 1972. Use of the Hough transformation to detect lines and
curves in pictures. Communications of the ACM, 15(1):11–15.
25. Dubská, M., Herout, A., and Havel, J. 2011. PClines – Line detection using parallel
coordinates. In Proceedings of CVPR 2011.
26. Haindl, M., and Mikes, S. 2005. Colour texture segmentation using modelling approach.
In: Singh, S., Singh, M., Apte, C., Perner, P. (eds), Pattern Recognition and Image
Analysis. ICAPR 2005. Lecture Notes in Computer Science, vol 3687, pp. 484–491.
Springer, Berlin, Heidelberg. https://doi.org/10.1007/11552499_54
27. Parvu, O., and B, A. 2009. A method for fast detection and decoding of specific 2d
barcodes. In 17th Telecommunications forum TELFOR 2009, 1137–1140.
28. Belussi, L., and Hirata, N. 2011. Fast QR code detection in arbitrarily acquired images.
24th SIBGRAPI Conference on Graphics, Patterns and Images, Sibgrapi 2011, Alagoas,
Maceió, Brazil, August 28–31, 2011.
29. Herout, A., Dubská, M., and Havel, J. 2012. Real-time precise detection of regular grids
and matrix codes. Journal of Real-Time Image Processing, 10.1007/s11554-013-0325-6,
vol 11, 2013/02/14.
30. Hu, H., Xu, W., and Huang, Q. 2009. A 2D barcode extraction method based on texture
direction analysis. In Fifth International Conference on Image and Graphics, ICIG ‘09.,
pp. 759–762.
8 Integration of Advanced
Technologies for
Industry 4.0
Tanmay Paliwal, Aditya Sikdar, and Zidan Kachhi
8.1 INTRODUCTION
Industry 4.0 is the term used to describe the fourth industrial revolution, characterized by the integration of advanced technologies such as the Internet of Things (IoT), artificial intelligence (AI), big data analytics, robotics, and automation in the manufacturing sector. Industry 4.0 aims to create intelligent factories capable of producing high-quality products with minimal human intervention while being flexible, efficient, and responsive to customer needs.
One of the critical drivers of Industry 4.0 is the IoT, which refers to the network of physical devices, sensors, and actuators that are connected to the Internet and can communicate with each other and with cloud-based services. IoT enables data collection and sharing across the entire value chain, from product design and development to production and distribution. IoT also facilitates real-time monitoring and control of the manufacturing process, predictive maintenance, and quality assurance.
Another key driver of Industry 4.0 is AI, which refers to the ability of machines and systems to perform tasks that usually require human intelligence, such as learning, reasoning, and decision-making. AI can enhance the automation and optimization of the manufacturing process by analyzing large amounts of data, recognizing patterns, and generating insights. AI can also enable intelligent decision-making by providing recommendations, suggestions, and solutions based on data-driven models [1–4].
8.2.1 Cyber-Physical Systems
CPS are systems that integrate physical components with computational compo-
nents. This integration allows for real-time communication between the physical
and digital worlds. For example, a CPS can monitor the status of a machine through
sensors, send the data to a cloud service for analysis, and receive instructions from
an AI system to adjust the machine parameters or perform maintenance tasks. CPS
is essential for Industry 4.0, which is the fourth industrial revolution. Industry 4.0
is characterized by using automation, data, and connectivity to create smart facto-
ries. CPS can help manufacturers achieve higher levels of automation, efficiency,
quality, and flexibility in their production processes. They can also enable new
functionalities such as self-configuration, self-optimization, self-diagnosis, self-
healing, and self-learning. CPS is already used in various industries, including
manufacturing, transportation, healthcare, and energy. They have the potential to
revolutionize the way we produce goods and services, and their impact will only
grow in the future [3, 5, 6].
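The sense-analyze-actuate loop just described (read a sensor, analyze the value, adjust a machine parameter) can be sketched as a minimal example. This is an illustration only: the functions read_temperature and set_speed are hypothetical stand-ins for real sensor and actuator interfaces, not an actual CPS API.

```python
# Minimal sketch of a CPS sense-analyze-actuate loop.
# read_temperature and set_speed are hypothetical stand-ins
# for real sensor and actuator interfaces.
import random

def read_temperature(machine_id: int) -> float:
    """Stand-in for a physical temperature sensor read (degrees C)."""
    return 60.0 + random.uniform(-15.0, 15.0)

def set_speed(machine_id: int, rpm: int) -> None:
    """Stand-in for an actuator command sent back to the machine."""
    print(f"machine {machine_id}: speed set to {rpm} rpm")

def control_step(machine_id: int, max_temp: float = 70.0) -> str:
    """One pass of the loop: sense, decide, actuate."""
    temp = read_temperature(machine_id)
    if temp > max_temp:
        set_speed(machine_id, 800)   # throttle to let the machine cool
        return "throttled"
    set_speed(machine_id, 1200)      # normal operating speed
    return "normal"

print(control_step(1))
```

In a real deployment the analysis step would typically run on a cloud or edge service rather than on the device itself, but the control structure is the same.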
1. Benefits of CPS
a. Increased visibility and control: Manufacturers can get real-time
visibility into their production processes using CPS. This enables
them to make better judgments about maximizing their outputs and
identifying and fixing issues more rapidly. For instance, CPS can
keep track of the effectiveness and quality of items and machin-
ery and notify the operators if any flaws or defects are found. This
can lower expenses associated with waste, downtime, and rework.
Additionally, CPS may gather and analyze information from various
sources, including sensors, cameras, radio frequency identification
(RFID) tags, and Global Positioning System (GPS) gadgets. This can
assist manufacturers in streamlining their logistics, energy use, and
inventory levels [7, 8].
b. Improved collaboration: CPS can enable various stakeholders and
departments in a manufacturing organization to work together.
This may result in more effective information flow and better decision-
making. CPS, for instance, can make coordinating and communicating
easier for teams working on design, engineering, production, mainte-
nance, and customer service. This can shorten product development,
reduce mistakes, and increase client happiness. Collaboration between
manufacturers and their partners, suppliers, and clients can also be
made possible via CPS. This may result in a more responsive and trans-
parent supply chain [7, 8].
c. New business models: CPS can enable business models that were not
feasible before. For instance, manufacturers can now provide their
clients with predictive maintenance services, using CPS to check on
the health of their products remotely and perform necessary mainte-
nance or repairs before they malfunction. This can
improve client retention, cut warranty expenses, and open fresh rev-
enue streams. Mass customization is another illustration of a brand-new
business model made possible by CPS. Producers may create products
tailored to each customer’s unique requirements and tastes by using
CPS. This may improve differentiation, competitiveness, and customer
satisfaction [7, 8].
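The defect-alerting benefit described above can be illustrated with a simple statistical check: flag any sensor reading that deviates strongly from the rest of its window. This is a toy sketch; the vibration data and the z-score threshold of 2.0 are invented for the example, and real monitoring systems use far more sophisticated methods.

```python
# Toy defect alerting: flag readings that deviate strongly from the
# mean of a window. Data and threshold are illustrative only.
from statistics import mean, stdev

def anomalies(readings, z_limit=2.0):
    """Return indices of readings more than z_limit standard
    deviations away from the window mean."""
    mu, sigma = mean(readings), stdev(readings)
    if sigma == 0:
        return []
    return [i for i, r in enumerate(readings)
            if abs(r - mu) / sigma > z_limit]

vibration = [1.0, 1.1, 0.9, 1.0, 1.2, 9.5, 1.0, 1.1]  # spike at index 5
print(anomalies(vibration))  # -> [5]
```

An operator alert would then be raised for the flagged machine, in the spirit of the real-time visibility benefit above.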
Despite these challenges, the benefits of CPS outweigh the risks. CPS has the
potential to revolutionize the way we produce goods and services. They are already
significantly impacting several industries, and their impact will only grow.
Another challenge is the integration of IoT data with existing systems. IoT data
can be generated from many different sources, and it can be difficult to integrate
this data with existing systems. Despite these challenges, the benefits of using IoT
in manufacturing outweigh the risks. IoT has the potential to revolutionize the way
we produce goods and services. It has already had a significant impact on several industries,
and its impact will only grow in the future.
The adoption of big data analytics in manufacturing is still in its early stages, but it
is growing. The benefits of big data analytics are clear, and manufacturers who adopt
big data analytics are likely to see significant improvements in their performance.
b. Plan their actions: Robots can use it to organize their actions. They
can be more effective and less prone to errors as a result. Robots can
utilize path planning, for instance, to identify the best way to get from
one place to another with the aid of AI. They may be able to save time,
energy, or resources. Robots can utilize task planning to break down a
problematic activity into smaller, more straightforward tasks with the
aid of AI. This can assist them in carrying out the task in a systematic
and orderly manner. Robots can also benefit from AI by using con-
tingency planning to foresee and deal with unforeseen circumstances.
This can assist them in adapting to setbacks and recovering from mis-
takes [7, 18, 19].
c. Coordinate with other robots or humans: Robots can use AI to help
them coordinate their behaviors with those of humans or other robots.
They can collaborate more successfully as a result. For instance,
AI can help robots work with other robots in a shared environment
using multi-agent systems. This can help them achieve a shared
objective, like assembling a product or cleaning a space. AI can
also assist robots in collaborating socially with
people through human-robot interaction. They can then modify their
behavior due to a better understanding of human emotions, intents,
and preferences. Robots can utilize swarm intelligence to mimic the
group behavior of natural systems like ants, bees, or birds with the
use of AI [7, 18, 19].
d. Learn from their outcomes: AI can help robots learn from their mis-
takes and improve their performance over time. Robotics and automa-
tion are essential for Industry 4.0 because they can help manufacturers
reduce labor costs, increase productivity, improve quality, and meet
customer expectations. Robotics and automation can also enable new
possibilities, such as collaborative robots, autonomous vehicles, and
additive manufacturing. Collaborative robots, also known as cobots,
can work safely alongside humans. This makes them ideal for tasks
that require a combination of human and machine skills. For example,
cobots can assist human workers with assembly tasks or perform dan-
gerous tasks that would be unsafe for humans. Autonomous vehicles
are vehicles that can operate without human input. This technology
is still in its early stages, but it has the potential to revolutionize the
way we transport goods and people. For example, autonomous vehi-
cles could deliver goods to customers or transport workers to and from
work. Additive manufacturing, also known as 3D printing, is a process
that uses a computer-controlled machine to create three-dimensional
objects from a digital file. This technology is becoming increasingly
popular in manufacturing because it allows manufacturers to create
complex parts with less waste and time. The adoption of robotics and
automation in manufacturing is growing. The benefits of these tech-
nologies are clear, and manufacturers who adopt them will likely see
significant performance improvements [20–23].
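The path-planning idea mentioned above, finding the best way from one place to another, can be sketched with a breadth-first search over a grid. The factory-floor grid, coordinates, and obstacle layout are invented for illustration; real robot planners work on richer maps and cost functions.

```python
# Minimal grid path planner (breadth-first search) illustrating the
# path-planning idea; the grid and coordinates are invented.
from collections import deque

def shortest_path(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None.
    grid: 2D list where 1 marks an obstacle and 0 is free space."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}          # visited set + parent pointers
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:          # walk parents back to the start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in prev):
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                   # goal unreachable

factory_floor = [[0, 0, 0],
                 [1, 1, 0],
                 [0, 0, 0]]
route = shortest_path(factory_floor, (0, 0), (2, 0))
print(route)
```

Because breadth-first search explores cells in order of distance, the first route found is a shortest one, which is the "best way" in the simple sense of fewest moves.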
The adoption of IoT in manufacturing is still in its early stages, but it is growing.
The benefits of IoT are clear, and manufacturers
who adopt IoT are likely to see significant improvements in efficiency, productivity,
and quality. They can also achieve higher automation, optimization, and innovation
levels in their production processes [19, 24, 25].
Intelligent factories integrating IoT and AI can dramatically increase perfor-
mance, efficiency, and creativity. Also known as smart factories, they are a hall-
mark of Industry 4.0, the fourth industrial revolution propelled by digital
technologies. IoT devices are used in intelligent factories to gather and send
data from various sources during the manufacturing process, including machines,
equipment, products, employees, and the environment. They also employ AI to
analyze and understand the data to deliver insights, ideas, or actions that can
optimize the production process. There are several uses for intelligent factories,
including:
b. AI algorithms
AI programs can handle challenging tasks, including analysis, pattern
recognition, and prediction. As a result, producers can improve their
products and services and make better operational decisions. For
example, IBM, a pioneer in cognitive computing, offers the AI plat-
form Watson. Able to handle audio, images, and other types of data,
this platform can offer insights and solutions for a variety of fields
and sectors [7, 19, 36].
Manufacturers can use Watson for the following purposes:
i. Analyze data
Watson can use big data analytics to examine historical and cur-
rent data from industrial processes, including output, quality, effi-
ciency, and utilization. Watson may also employ natural language
processing to examine text data, including reviews, complaints, and
consumer feedback. Watson can assist manufacturers in improv-
ing their decision-making by using data analysis to provide insights
into their operations, goods, and customers [7, 19, 36].
ii. Recognize patterns
It helps find patterns in data. This can be used to find issues,
streamline procedures, and develop forecasts. Watson, for
instance, may utilize machine learning to identify patterns of
flaws or faults in goods or procedures and recommend fixes or
enhancements. Additionally, Watson may utilize deep learn-
ing to identify client behavior or preferences trends and pro-
vide tailored goods or services. Using computer vision, Watson
can interact with humans and robots by recognizing patterns in
things, faces, or movements. Watson can assist manufacturers in
problem-solving, process optimization, and prediction by seeing
trends [7, 19, 36].
iii. Make predictions
It can be used to anticipate what will happen in the future. This
can be applied to production planning, inventory optimization, and
customer demand fulfillment. Watson, for instance, may forecast
future product demand using predictive analytics based on market
trends, seasonality, or events. In addition, Watson may employ pre-
scriptive analytics to suggest the best courses of action or fixes in
light of expected outcomes. Watson can also employ reinforcement
learning to learn from its own actions and results and gradually
improve. Watson can assist manufacturers in production planning,
inventory optimization, and meeting consumer demand by making
predictions [7, 19, 36].
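As a much simpler stand-in for the predictive-analytics capability described above, a forecast can be as basic as a moving average over recent demand. This toy example is not Watson's actual API; the demand figures and window size are invented for illustration.

```python
# Toy demand forecast: predict the next period as the mean of the
# last few periods. Figures and window size are illustrative only.

def moving_average_forecast(demand, window=3):
    """Forecast the next period as the mean of the last `window` periods."""
    recent = demand[-window:]
    return sum(recent) / len(recent)

monthly_units = [120, 130, 125, 140, 150, 160]
print(moving_average_forecast(monthly_units))  # mean of 140, 150, 160
```

Production planners would feed such a forecast into inventory optimization; real systems account for trend, seasonality, and external events rather than a flat average.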
Some examples of how IoT solutions and edge components are used for adaptive
supply chains are:
1. RFID tags: RFID tags are devices that can store information, such as prod-
uct specifications, history, location, and status, and transmit it to RFID
readers through radio waves. RFID tags can help manufacturers track their
products throughout the supply chain, from raw materials to finished goods,
and optimize their inventory management, quality control, and customer
service [41, 42].
2. Smart containers: Smart containers are equipped with sensors, actua-
tors, GPS devices, and cameras that can monitor their environment
(such as temperature, humidity, and pressure) as well as their location,
movement, and security, and communicate with cloud services or edge
devices through wireless networks. Smart containers can help manu-
facturers ensure their product’s safety, quality, and efficiency during
transportation and optimize logistics operations [41, 42].
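The RFID tracking idea above can be sketched as reconstructing a tagged product's journey from reader events. The tag IDs, timestamps, and checkpoint names here are invented for the example; real deployments use standardized tag data and middleware.

```python
# Toy supply-chain tracking from RFID reader events.
# Tag IDs and checkpoint names are invented for the example.

def product_journey(events, tag_id):
    """Return the ordered list of checkpoints a tagged product passed.
    events: list of (timestamp, tag_id, checkpoint) tuples from readers."""
    hits = [e for e in events if e[1] == tag_id]
    hits.sort(key=lambda e: e[0])          # order by read time
    return [checkpoint for _, _, checkpoint in hits]

reads = [
    (3, "TAG-42", "distribution_center"),
    (1, "TAG-42", "raw_material_store"),
    (2, "TAG-42", "assembly_line"),
    (1, "TAG-99", "raw_material_store"),
]
print(product_journey(reads, "TAG-42"))
```

A gap or out-of-order checkpoint in such a journey is exactly the kind of signal that feeds the inventory management and quality control uses mentioned above.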
8.5.2 Edge Computing
A distributed computing paradigm known as edge computing moves computation
and data storage closer to the network’s edge or the location where data is generated.
As a result, applications’ efficiency, dependability, and security may all be enhanced.
Additionally, edge computing can enable brand-new applications like augmented
reality, self-driving cars, and smart cities that are not viable or practical with cloud
computing [43, 44].
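One common edge-computing pattern, aggregating raw readings locally and forwarding only a compact summary upstream, can be sketched as follows. The window size and payload fields are assumptions for illustration; the "uplink" is just a print here.

```python
# Sketch of edge-side preprocessing: reduce a window of raw sensor
# samples to a small summary before sending it upstream.
# Window size and payload shape are illustrative assumptions.

def summarize_window(samples):
    """Reduce a window of raw readings to count/min/max/mean."""
    return {
        "count": len(samples),
        "min": min(samples),
        "max": max(samples),
        "mean": sum(samples) / len(samples),
    }

raw_window = [21.0, 21.4, 20.9, 22.1, 21.6]   # e.g., 1 s of temperature data
payload = summarize_window(raw_window)
print(payload)  # one small message instead of many raw samples
```

Doing this reduction at the edge cuts bandwidth and latency, which is the efficiency gain the paragraph above attributes to moving computation closer to where data is generated.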
downtime. For instance, digital twins can be used to track the operation
and state of various pieces of machinery, including turbines, engines,
and pumps. AI can also identify failure or deterioration indicators,
including vibration, temperature, and pressure. Digital twins can deter-
mine the machinery and equipment’s remaining usable life based on
this analysis and suggest the best time for maintenance. Digital twins
can use AI to optimize the maintenance procedure, including choosing
the optimal maintenance approach, planning the maintenance chores,
and allocating the necessary resources [45, 46].
b. Quality control: It can be utilized to model the manufacturing process.
Using this, possible issues can be found before they arise. For instance,
digital twins can be used to model the production of products like cars,
planes, and smartphones. The production data, such as output, qual-
ity, efficiency, or utilization, can also be analyzed using AI. Based on
this study, digital twins can pinpoint flaws or mistakes in the product
or process and provide fixes or enhancements. Digital twins can also
use AI to validate or provide feedback by comparing the simulation
results with the requirements or standards. Manufacturers may guaran-
tee product compliance, client satisfaction, and brand reputation using
digital twins [45, 46].
c. Design optimization: It can be applied to improve the layout of real-
world systems or items. Performance, effectiveness, and safety can all
be enhanced by doing this. For instance, digital twins can be utilized
to improve the design of a bridge, stadium, or skyscraper. Additionally,
they can utilize AI to assess the design choices following several stan-
dards, such as price, robustness, or beauty. Digital twins can provide
the optimal design solution that matches or exceeds the demands and
expectations of the consumer based on this evaluation. AI can also
be used with digital twins to test and validate the design under vari-
ous conditions, such as weather, load, or natural disasters. Architects
and engineers may use digital twins to produce better, more effective
designs [45, 46].
d. Training and simulation: It can be used to mimic various scenarios and
train operators. This could help to increase security and effectiveness.
Digital twins, for instance, can be used to train operators of intricate
systems, such as those in airplanes, ships, or power plants. AI can also
be used to develop realism and immersion in simulations that mirror
the conditions and circumstances of the actual world. Digital twins can
give operators feedback and direction on operating the systems safely
and effectively based on these simulations. Digital twins can also use
AI to evaluate the abilities and performance of the operators and offer
individualized training and suggestions for development. Digital twins
allow operators to learn more quickly and effectively [45, 46].
e. Research and development: It can be utilized for product and process
development and research. This may speed up and lower the price of
launching new items. Digital twins can be employed, for instance, in
The adoption of digital twins in manufacturing is still in its early stages but grow-
ing. The benefits of digital twins are clear, and manufacturers who adopt digital
twins are likely to see significant improvements in their performance.
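The predictive-maintenance use of digital twins described earlier can be illustrated with a toy remaining-useful-life estimate: fit a linear wear trend to sensor history and extrapolate to a failure threshold. All figures are invented, and real digital twins use far richer physical and statistical models.

```python
# Toy remaining-useful-life (RUL) estimate: fit a least-squares line
# through (operating hours, wear) points and extrapolate to a failure
# threshold. All numbers are illustrative only.

def estimate_rul(hours, wear, limit):
    """Hours of operation left until the wear trend reaches `limit`."""
    n = len(hours)
    mx = sum(hours) / n
    my = sum(wear) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(hours, wear))
             / sum((x - mx) ** 2 for x in hours))
    intercept = my - slope * mx
    hours_at_limit = (limit - intercept) / slope
    return hours_at_limit - hours[-1]

hours = [0, 100, 200, 300]
wear = [0.0, 0.1, 0.2, 0.3]        # perfectly linear for the example
print(estimate_rul(hours, wear, limit=1.0))  # about 700.0 hours left
```

The twin would use such an estimate to recommend a maintenance window before the threshold is reached, as described in the predictive-maintenance use case above.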
Digital twins are essential for Industry 4.0 because they can help manufactur-
ers accelerate their product development lifecycles, optimize their production pro-
cesses, and improve their product quality and customer satisfaction by enabling
them to [19]:
Some examples of how AI is used for constructing digital twins are as follows:
1. Technology integration challenges
For instance:
a. Compatibility problems: When combining several technologies or sys-
tems with various standards, protocols, or architectures, manufacturers
may encounter compatibility problems. This could lead to data inter-
change or communication issues, as well as decreased functionality or
performance [47, 48].
b. Interoperability problems: When merging technologies or systems with
various functionalities, capabilities, or interfaces, manufacturers may run
into interoperability problems. Problems with coordination or collabora-
tion and an increase in complexity or redundancy may follow [47, 48].
c. Scalability problems: When combining technologies or systems with
varying capacities, demands, or requirements, manufacturers may run
into scalability problems. Performance decline, resource limitations, or
system instability could result [47, 48].
d. Reliability problems: When merging technologies or systems with vary-
ing degrees of quality, robustness, or resilience, manufacturers may
run into reliability problems. System faults, mistakes, or failures could
result [47, 48].
e. Usability problems: Manufacturers may run into usability problems
when integrating technologies or systems with varying degrees of user-
friendliness, accessibility, or flexibility. Users may become angry, frus-
trated, or perplexed [47, 48].
f. Technical complexity: Manufacturers may run into technical complexity
when integrating technologies or systems with various levels of sophis-
tication, invention, or novelty. This could make choosing, deploying, or
using the technologies or systems more challenging or unpredictable [47, 48].
g. Cost considerations: When integrating technologies or systems that have
varying degrees of affordability, availability, or maintainability, manu-
facturers may have to consider costs. This could lead to higher costs or
investments for purchasing, setting up, or updating the technologies or
systems [47, 48].
h. Risk factors: When integrating technologies or systems that have various
levels of security, safety, or compliance, manufacturers may encounter
risk issues. This may result in increased exposure to cyberattacks,
data breaches, legal obligations, or regulatory fines [47, 48].
2. Cybersecurity and data privacy challenges
The hazards or weaknesses manufacturers confront while working with
massive amounts of data and connected devices in Industry 4.0 are called
“cybersecurity and data privacy challenges.” In addition to legal, ethical,
and societal ramifications, these difficulties include cyberattacks, data
breaches, data theft, data manipulation, data loss, and data misuse problems
[47, 48]. For instance:
a. Cyberattacks: Malicious actors wishing to disrupt, harm, or destroy
manufacturers’ systems, data, or business activities may launch cyber-
attacks. These attacks include ransomware, phishing, and denial-of-service
attacks [47, 48].
8.9 CONCLUSION
Industry 4.0 is a term that describes the fourth industrial revolution, which is charac-
terized by the integration of advanced technologies such as IoT, AI, big data analyt-
ics, robotics, and automation in the manufacturing sector. Industry 4.0 aims to create
intelligent factories capable of producing high-quality products with minimal human
intervention while being flexible, efficient, and responsive to customer needs.
Integrating IoT and AI technologies is essential for achieving the full potential of
Industry 4.0 transformation. By combining the data collection and communication
capabilities of IoT with the data analysis and decision support capabilities of AI,
manufacturers can create CPS that can interact with each other and with humans in
real time. These systems can also adapt to changing conditions and demands, learn
from their own experiences, and improve their performance over time.
Advanced technologies such as IoT, AI, big data analytics, robotics, and automa-
tion can bring significant benefits and advantages to the manufacturing sector in
Industry 4.0. These benefits and advantages include higher operational effective-
ness and productivity, cost reduction through optimized resource allocation, smooth
production procedures, real-time data-driven insights, and adaptability to changing
demands through flexible manufacturing lines.
However, technology integration in Industry 4.0 also poses several challenges
that need to be addressed by manufacturers, researchers, policymakers, and other
stakeholders. These challenges include technology integration, cybersecurity, data
privacy, and upskilling and training challenges.
It also opens up new opportunities for future research directions to help manu-
facturers overcome these challenges and achieve their goals and objectives.
ACKNOWLEDGMENTS
My thanks go to the editor of the volume, Dr. Preethi Nanjundan, for giving me the
chance to write a chapter on this area of AI forecasting and making helpful sugges-
tions and to Zidan Kachhi for guiding me on the right path.
LIST OF ABBREVIATIONS
AI Artificial intelligence
CPS Cyber-physical system
GPS Global Positioning System
Industry 4.0 The Fourth Industrial Revolution
IoT Internet of Things
QR code Quick-response code
RFID Radio frequency identification
SMLC Smart Manufacturing Leadership Coalition
REFERENCES
1. Roblek, V., Meško, M., & Krapež, A. “A Complex View of Industry 4.0”, SAGE Open, 6(2),
2158244016653987, 2016. doi: 10.1177/2158244016653987
2. Kumar, A., & Singh, R. K. “Integrating Industry 4.0 and Circular Economy: a Review”,
Journal of Enterprise Information Management, 2021. doi: 10.1108/JEIM-11-2020-0465
3. Alshamrani, A., Alshamrani, M., & Alshamrani, H. “Industry 4.0 and Its
Implementation: A Review”, Information Systems Frontiers, 1–18, 2021.
4. Wang, Y., Liang, H., & Zhang, Y. “The Current Status and Developing Trends of
Industry 4.0: a Review”, Information Systems Frontiers, 1–16, 2021.
5. Eercan, E. “What is Industry 4.0 and Cyber Physical Systems?”, Retrieved from https://
eercan.com/post/what-is-industry-40-and-cyber-physical-systems/, 2020.
6. Radanliev, P., De Roure, D., Nicolescu, R., Huth, M., & Santos, O. “Digital Twins:
Artificial Intelligence and the IoT Cyber-Physical Systems in Industry 4.0”,
International Journal of Intelligent Robotics and Applications, 171–185, 2021. doi:
10.1007/s41315-021-00180-5
7. Lee, J., Bagheri, B., & Kao, H. A. “A Cyber-Physical Systems Architecture for Industry
4.0-Based Manufacturing Systems”, Manufacturing Letters, 3, 18–23, 2015.
8. Liang, X., Shetty, S., Tosh, D., Kamhoua, C., Kwiat, K., & Njilla, L. “Provchain: A
blockchain-based data provenance architecture in cloud environment with enhanced
privacy and availability”, In 17th IEEE/ACM International Symposium on Cluster,
Cloud and Grid Computing, CCGRID, pp. 468–477, 2017.
9. Sakovich, N. “IoT in Manufacturing: Ultimate Guide and Use Cases”, SaM Solutions,
2021. https://www.sam-solutions.com/blog/iot-in-smart-manufacturing/
10. Chalishazar, T. “IoT in Manufacturing: The Ultimate Guide”, Peerbits, 2022.
11. “IoT Applications in Manufacturing Industry”, TechVidvan, 2021.
12. “How to Use the Internet of Things (IoT) in Manufacturing”, PixelPlex, 2021.
13. Kaur, S., & Singh, S. “Robotic automation in manufacturing”, In Industry 4.0: Trends
in Management of Intelligent Manufacturing System, S. Singh & S. Kaur, Eds.,
pp. 35–54, 2023.
14. Kumar, V. “Big Data Analytics in Manufacturing”, HCL Blogs, Hcltech.com, 2019.
15. Littlefield, M. “What Is Big Data Analytics in Manufacturing?”, Blog.lnsresearch.com,
2015.
16. Srivastava, S. “Big Data in Manufacturing – Importance and Use Cases”, Appinventiv.
com, 2022.
17. Consoli, R. “Using Big Data Analytics to Improve Production”, Manufacturing.net.,
2018.
18. Auschitzky, E., Hammer, M., & Rajagopaul, A. “How Big Data Can Improve
Manufacturing”, McKinsey, 2014.
19. Zhou, J., Liu, X., & Zhou, Z. “Industry 4.0: Towards future industrial opportunities
and challenges”, In Industry 4.0: Trends in Management of Intelligent Manufacturing
Systems, Springer, Singapore, pp. 3–20, 2019.
20. ROBOTNIK. “Mobile Robots in Industry 4.0: Automation & Flexibility”, Robotnik,
2021.
21. Goel, R., & Gupta, P. “Robotics and Industry 4.0”, In A Roadmap to Industry 4.0:
Smart Production, Sharp Business and Sustainable Development, pp. 157–169, 2019.
22. Violino, B. “Robotics and Industry 4.0: Reshaping the Way Things Are Made”, ZDNet.,
2016.
23. POLLY. “The Role of Collaborative Mobile Robots in Industry 4.0”, Robotics &
Automation News, 2021.
24. L2L. “The Role of IoT and Industry 4.0 in Creating Digital Factories of Tomorrow”, IoT
for All, 2022.
25. Gregolinska, E., Khanam, R, Lefort, F., & Parthasarathy, P. “Industry 4.0: Digital
Transformation in Manufacturing”, McKinsey, Www.mckinsey.com, 2022.
26. Eranda, K. “IoT in Industry 4.0 – Deep Explanation – JFrog Connect”, JFrog, 2021.
27. Rejig. “Top Five Benefits of Industry 4.0 for Modern Enterprises”, Rejig Digital, 2021.
28. Saribardak, E. “How IoT Reshapes Industry 4.0 and Benefits of IoT for SMEs”,
ReadWrite, 2020.
29. Mendoza, M. “IoT in Manufacturing: Applications Today and the Future”, Hitachi
Solutions, 2020.
30. Lowe, A., Adiraju, P., Fuzellier, M., Janakiraman, P., & Cummins, T. “Connected
Factory Solution Based on AWS IoT for Industry 4.0 Success”, Amazon Web Services,
2020.
31. Aheleroff, S., Xu, X., Lu, Y., Aristizabal, M., Velásquez, J. P., Joa, B., & Valencia,
Y. “IoT-Enabled Smart Appliances under Industry 4.0: A Case Study”, Advanced
Engineering Informatics, 43, 101043, 2022. doi: 10.1016/j.aei.2020.101043
32. IBM. “What Is a Digital Twin”, Www.ibm.com, 2022.
33. Wikipedia Contributors. “Digital Twin”, Wikipedia, Wikimedia Foundation, 2019.
34. Collie, B., Parker, G., Haimes, Y., and Dubno, D. “Adaptive Manufacturing and
Homeland Security”, White Paper for HSSTAC Quadrennial Homeland Security
Review (QHSR), 2017.
35. Gosselin, S. “Adaptive Manufacturing Technology and the Modern Factory”, Integris,
2021.
36. Gnamm, J., Frost, T., Lesmeister, F., & Ruehl, C, eds. “Factory of the Future: How
Industry 4.0 and AI Can Transform Manufacturing”, Bain, 2023.
37. Keleko, A. T., Kamsu-Foguem, B., Ngouna, R. H., & Tongne, A. “Artificial Intelligence
and Real-Time Predictive Maintenance in Industry 4.0: A Bibliometric Analysis”, AI
and Ethics, 553–577, 2022. doi: https://doi.org/10.1007/s43681-021-00132-6
38. Achouch, M., Dimitrova, M., Ziane, K., Karganroudi, S. S., Dhouib, R., Ibrahim,
H., & Adda, M. “On Predictive Maintenance in Industry 4.0: Overview, Models, and
Challenges”, Applied Sciences, 12(16), 8081, 2022.
39. Hofmann, E., Sternberg, H., Chen, H., Pflaum, A., & Prockl, G. “Supply Chain
Management and Industry 4.0: Conducting Research in the Digital Age”, International
Journal of Physical Distribution & Logistics Management, 49(10), 945–955, 2019.
40. Caiado, R. G. G., Scavarda, L. F., Azevedo, B. D., de Mattos Nascimento, D. L., &
Quelhas, O. L. G. “Challenges and Benefits of Sustainable Industry 4.0 for Operations
and Supply Chain Management—A Framework Headed toward the 2030 Agenda”,
Sustainability 14(2), 830, 2022.
41. Gaur, V. “Bringing Blockchain, IoT, and Analytics to Supply Chains”, Harvard
Business Review, 2021. https://hbsp.harvard.edu/product/H06RVC-PDF-ENG
42. Zhu, X. N., Peko, G., Sundaram, D., & Piramuthu, S. “Blockchain-Based Agile Supply
Chain Framework with IoT”, Information Systems Frontiers, 23(4):1023–1039, 2021.
43. Ouyang, C. “Edge Computing and Hybrid Cloud: Scaling AI within Manufacturing”,
IBM Blog, 2021.
44. Pelizzo, G. “Council Post: How Edge Computing Enables Industry 4.0”, Forbes, 2021.
45. Downey, J. “What Is Digital Twin Technology and How It Benefits Manufacturing
in the Industry 4.0 Era?” SL Controls, 219–241, 2020. ISBN 978-0-12-817630-6, doi:
https://doi.org/10.1016/B978-0-12-817630-6.00011-4
46. Tao, F., Zhang, M., Liu, Y., & Nee, A. Y. C. “Digital Twin Driven Smart Manufacturing:
Connotation, Reference Model, Applications and Research Issues”, Robotics and
Computer-Integrated Manufacturing, 61, 101837, 2019
47. Peraković, D., Periša, M., & Zorić, P. “Challenges and Issues of ICT in Industry 4.0”,
Lecture Notes in Mechanical Engineering, 259–269, 2019. https://doi.org/10.1007/
978-3-030-22365-6_26
48. TXM Lean Solutions. “The Advantages & Challenges: Implementing Industry 4.0”,
TXM Lean Solutions, 2020.
49. Ellingrud, K., Gupta, R., & Salguero, J. 2020. “Reskilling Workers for Industry 4.0”,
McKinsey, Www.mckinsey.com, 2020.
50. Li, L. “Reskilling and Upskilling the Future-Ready Workforce for Industry 4.0 and
Beyond”, Information Systems Frontiers, 24(3), 2022. doi: https://doi.org/10.1007/
s10796-022-10308-y
51. Moraes, E. B., Kipper, L. M., Kellermann, A. C. H., Austria, L., Leivas, P., Moraes,
J. A. R., & Witczak, M. “Integration of Industry 4.0 Technologies with Education
4.0: Advantages for Improvements in Learning”, Interactive Technology and Smart
Education, ISSN 17415659, 2022. doi: 10.1108/ITSE-11-2021-0201
52. Tabim, V. M., Ayala, N. F., & Frank, A. G. “Implementing Vertical Integration in the
Industry 4.0 Journey: Which Factors Influence the Process of Information Systems
Adoption?”, Information Systems Frontiers, 1–18, 2021. doi: 10.1007/s10796-021-10220-x
9 Challenges in Digital Transformation and Automation for Industry 4.0
Manjari Sharma, Tanmay Paliwal,
and Payashwini Baniwal
9.1 INTRODUCTION
Throughout history, industrial revolutions have consistently been more than merely
small changes. They have signified enormous strides that have altered social struc-
tures, production paradigms, and the global economic environment. Each transition has
embodied the pinnacle of human ingenuity and an unyielding drive for advancement,
from the thunderous clangs of the first steam engines in the late 18th century, heralding
the beginning of mechanized production, to the quiet digital murmurs echoing through
contemporary global supply chains. As civilization enters the era of Industry 4.0, it
stands at the intersection of the physical and digital, where the revolutionary nature of
digital technologies and the tangible elements of industrial processes collide [1].
This Fourth Industrial Revolution is defined by Industry 4.0, which represents a fun-
damental change in how industries are understood, produced, and advanced. Industry 4.0
is not merely another stage in the continual development of industry. This digital trans-
formation is supported by a network of connected hardware, sensors, and systems known
as the Internet of Things (IoT), as well as the reasoning abilities of artificial intelligence
(AI). However, what sets Industry 4.0 apart from its predecessors is not only its
technological arsenal but the smooth and well-coordinated integration of these technologies.
Envision a factory where cyber-physical systems communicate in real time,
where an anomaly detected by a sensor on the assembly line instantaneously triggers
a recalibration of processes, ensuring optimal productivity and minimal downtime.
Imagine a scenario where data analytics platforms, fed by a constant stream of data,
not only detect patterns but also predict future trends, offering strategic insights that
drive not just reactive but proactive decision-making. This is the promise of Industry
4.0 – an industrial landscape that is dynamically responsive, inherently intelligent,
and perpetually evolving.
However, as with any profound transformation, the journey toward achieving the
full potential of Industry 4.0 is fraught with challenges. The intricate mesh of con-
nected devices, while offering unprecedented levels of communication and control,
also introduces vulnerabilities. Cybersecurity thus emerges as a prime concern.
9.2.4 Resource Optimization
With a more connected and integrated supply chain, Industry 4.0 brings about an
optimal use of resources, including raw materials, energy, and manpower. The effi-
cient use of resources not only reduces costs but also minimizes waste, fostering
sustainable and eco-friendly manufacturing practices [7].
9.2.5 Human-Machine Collaboration
Contrary to the popular belief that automation would replace human jobs, Industry
4.0 emphasizes collaboration between humans and machines. With smart devices
aiding human decision-making processes and humans guiding machines in more
complex tasks, a synergistic relationship emerges [8].
9.3.1 Internet of Things
IoT is at the forefront of Industry 4.0, redefining how gadgets, sensors, and machin-
ery interact and communicate. The IoT connects physical things into a seamless
digital fabric, allowing for real-time data transmission and cooperation. This inter-
connection serves as Industry 4.0’s nervous system, providing producers with a full
perspective of their activities. From smart sensors that monitor manufacturing pro-
cesses to connected logistics systems that optimize supply chains, the IoT boosts
operational efficiency, improves decision-making, and ushers in previously unthink-
able levels of real-time responsiveness.
9.3.2 Artificial Intelligence
AI is the cognitive engine that will bring Industry 4.0 from concept to reality.
AI gives machines the ability to analyze data, learn from patterns, and make
informed decisions on their own. This mutually beneficial interaction between
AI and IoT powers predictive analytics, allowing manufacturers to anticipate and
minimize possible hazards. Algorithms for machine learning optimize processes,
detect abnormalities, and enhance tactics. Furthermore, AI enhances human
capacities by encouraging creative problem-solving, facilitating human-robot
collaboration, and lifting the entire production ecosystem to new levels of adapt-
ability and efficiency.
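As a minimal illustration of the predictive analytics described here, the following sketch fits a least-squares trend line to a temperature series and estimates how many sampling intervals remain before a safety threshold is crossed. The readings and threshold are invented for the example; production systems would use far richer models.

```python
# Fit y = slope * x + intercept to evenly spaced readings (ordinary least squares).
def fit_trend(values):
    n = len(values)
    xs = range(n)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

def steps_until(values, threshold):
    """Predicted number of future intervals until the trend crosses `threshold`."""
    slope, intercept = fit_trend(values)
    if slope <= 0:
        return None  # no upward trend, no predicted crossing
    t = (threshold - intercept) / slope
    return max(0, round(t) - (len(values) - 1))

temps = [70.0, 71.0, 72.0, 73.0, 74.0]  # one reading per hour (assumed data)
```

With these readings the trend rises one degree per hour, so an 80-degree limit is predicted six intervals ahead — exactly the kind of proactive signal the text describes.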
146 AI-Driven IoT Systems for Industry 4.0
9.3.3 Automation
The cornerstone of Industry 4.0’s promise of accuracy, consistency, and agility
is automation. Smart factories use automation to orchestrate industrial processes
with little human interaction. Robotic systems that are AI-infused and led by real-
time data from IoT devices perform activities ranging from ordinary assembly to
intricate customization. Automation reduces human error, boosts throughput, and
maintains consistent product quality. It speeds up production, shortens lead times,
and frees human workers from repetitive duties, allowing them to focus on higher
value tasks.
9.3.4 Cloud Computing
Cloud computing is the digital nexus that connects the many components of Industry
4.0. This technology provides scalable computing resources, storage capacities, and
powerful analytics tools for processing the massive amounts of data created by IoT
devices. Cloud systems lay the groundwork for real-time data analysis, promot-
ing informed decision-making and collaboration across geographically distributed
teams. The flexibility of the cloud enables manufacturers to extend their operations
and experiment with new technologies without the limits of physical infrastructure.
These technology forces, when combined, enable Industry 4.0 to surpass old pro-
duction paradigms. They establish the framework for intelligent, linked ecosystems
in which robots, data, and humans work in tandem. This convergence boosts opera-
tional efficiency, feeds innovation, and promotes a future in which manufacturing is
defined by agility, customization, and long-term growth. As organizations navigate
this technology terrain, they discover new opportunities while confronting new dif-
ficulties, leading them toward a digitally transformed and automated future.
9.4.2 Quantum Technologies
Quantum technologies leverage the principles of quantum mechanics, especially
superposition and entanglement, to develop advanced systems and methods for com-
puting, communication, and sensing, among other applications [13].
Quantum technologies, particularly quantum computing, promise to revolution-
ize fields that are currently restricted by the capabilities of classical computers.
Problems that are computationally intensive for classical machines, such as factor-
ing large integers or simulating complex molecular structures, can be addressed
more efficiently using quantum computers. This breakthrough has vast implications
for cryptography, drug discovery, and optimization problems, among other areas.
Furthermore, quantum sensors and quantum communication technologies can pro-
vide unprecedented precision in measurements and ultra-secure communication
channels, respectively [14].
In essence, the journey toward the full realization of Industry 4.0 is rife with chal-
lenges. However, these challenges, when viewed through the lens of opportunity, can
pave the way for innovations that not only enhance operational efficiency but also
foster a culture of continuous learning and adaptation [18].
Ethical concerns pertaining to accountability and justice come to the fore. The societal implications, including potential job displacement, underscore the need for a
human-centered approach, focused on workforce reskilling. The chapter under-
scores the harmonious coexistence of human and machine, harnessing their respec-
tive strengths while advocating proactive measures to address unforeseen outcomes
through robust testing and continuous vigilance. Ultimately, responsible automation
and AI integration align with ethical principles and societal well-being, catalyzing
the transformative potential of Industry 4.0 [40].
Reskilling programs, partnerships with institutions, and increasing diversity for improved problem-solving are all part of
developing a skilled workforce. These strategies collectively equip organizations to
manage the complexity of Industry 4.0, assuring they capitalize on its revolutionary
potential while confronting difficulties, supporting innovation, and fostering long-
term growth.
9.10 CONCLUSION
The journey through the complex world of Industry 4.0 is defined by the junction of
technology innovation, labor dynamics, and ethical considerations. The adoption of
Industry 4.0 appears to be a continuous pursuit of operational dominance and lasting
growth based on the conundrums and strategies presented in this chapter.
The necessity for a comprehensive viewpoint on digital metamorphosis is high-
lighted by an assessment of the major roadblocks and their pertinent solutions as
this investigation into Industry 4.0’s difficulties and potential comes to a close. This
requires the intricate task of managing the data influx as well as the delicate blending
of many technologies. The chapter stresses the importance of employee reskilling to
bridge the skills gap and the crucial role that change stewardship plays in ensuring
a smooth transition. Cybersecurity worries have brought attention to the precarious
balance between accessibility and security. A shift toward openness, fairness, and
lessening bias has been prompted by ethical considerations in AI and mechanization.
The solutions put forth here emphasize creating comprehensive digital metamorphosis plans that take difficulties into account and align goals with technological prowess. Realizing the revolutionary potential of Industry 4.0 hinges on immediate connectivity, mastery of mechanization, and the harmonious marriage of IoT, AI, and cloud infrastructures to foster innovation.
As this chapter’s analysis of Industry 4.0’s problems and solutions comes to a
close, it is becoming clear that this revolutionary era is more like an odyssey than a
conclusion. Companies are urged to adopt a pliable mentality in recognition of the
fact that new horizons will open up and that the landscape is constantly changing.
Understanding that transformation is an ongoing process rather than a passing phase
is necessary in order to view Industry 4.0 as an ongoing journey. Such acceptance
inspires the development of a society that values adaptability and creativity and is
supported by ongoing learning. This necessitates close monitoring of developing trends, technologies, and potential disruptions. Businesses may put themselves at the forefront of innovation by recognizing the continual nature of Industry 4.0. This enables them to anticipate problems, seize emerging opportunities, and maintain a competitive edge. Businesses that internalize change as constant, navigate it strategically, and persistently forge new ground
in their pursuit of distinctiveness will prosper as technology advances and societal
needs change.
The promises Industry 4.0 makes – of competency, dependability, and creativity – conclude this chapter's investigation of its problems and tactics. These
commitments are concrete goals, not just abstract aspirations, which organizations
can achieve by steadfastly navigating the terrain. By strategically implementing
technologies, streamlining processes, and making the most of available resources,
proficiency takes shape. As infrastructures become more durable, resilient to distur-
bances, and flexible to changing requirements, trustworthiness emerges. When orga-
nizations encourage creativity, embrace change, and use cutting-edge technology to
develop novel solutions, innovation thrives. To keep these promises, an integrated approach is required.
Challenges in Digital Transformation and Automation for Industry 4.0 161
ACKNOWLEDGMENTS
My thanks go to the editor of the volume, Dr. Preethi Nanjundan, for giving me the
chance to write a chapter on this area of AI forecasting and making helpful sugges-
tions, and to Dr. Manjari and Zidan Kachhi for guiding me on the right path.
LIST OF ABBREVIATIONS
AI Artificial intelligence
DDoS Distributed denial of service
Industry 4.0 The Fourth Industrial Revolution
IoT Internet of Things
SMEs Small and medium enterprises
REFERENCES
1. Schwab, K. “The Fourth Industrial Revolution (Industry 4.0) A Social Innovation
Perspective”, Tạp Chí Nghiên Cứu Dân Tộc, pp. 23, 2016.
2. Lu, Y. “Industry 4.0: A Survey on Technologies, Applications and Open Research
Issues”, Journal of Industrial Information Integration, pp. 1–10, 2017.
3. Papulová, Z., Gažová, A., & Šufliarský, Ľ. “Implementation of Automation Technologies
of Industry 4.0 in Automotive Manufacturing Companies”, Procedia Computer Science, 2022. doi: https://doi.org/10.1016/j.procs.2022.01.350. https://www.sciencedirect.com/science/article/pii/S1877050922003593
4. Zhong, R. Y., Newman, S. T., Huang, G. Q., & Lan, S. “Big Data for Supply Chain
Management in the Service and Manufacturing Sectors: Challenges, Opportunities,
and Future Perspectives”, Computers & Industrial Engineering, pp. 572–591, 2017.
5. Lasi, H., Fettke, P., Kemper, H.-G., Feld, T., & Hoffmann, M. “Industry 4.0”, Business
& Information Systems Engineering, pp. 239–242, 2014.
6. Wang, S., Wan, J., Li, D., & Zhang, C. “Implementing Smart Factory of Industry 4.0: An
Outlook”, International Journal of Distributed Sensor Networks, 2016. doi: https://doi.
org/10.1155/2016/3159805. https://journals.sagepub.com/doi/full/10.1155/2016/3159805
7. Ghobakhloo, M. “The Future of Manufacturing Industry: A Strategic Roadmap Toward
Industry 4.0”, Journal of Manufacturing Technology Management, pp. 910–936, 2018.
8. Kagermann, H., Wahlster, W., Helbig, J., Kagermann, H., Wahlster, W., & Helbig,
J. “Securing the Future of German Manufacturing Industry Recommendations for
Implementing the Strategic Initiative Industrie 4.0. Final Report of the Industrie 4.0
Working Group”, Acatech – National Academy of Science and Engineering, Scientific
Research Publishing, p. 678, 2013.
9. Lampropoulos, G., Siakas, K., & Anastasiadis, T. “Internet of Things in the Context
of Industry 4.0: An Overview”, International Journal of Entrepreneurial Knowledge,
pp. 4–19, 2019.
10. Chen, M., Mao, S., & Liu, Y. “Big Data: A Survey”, Mobile Networks and Applications,
pp. 171–209, 2014.
11. Raghupathi, W., & Raghupathi, V. “Big Data Analytics in Healthcare: Promise and
Potential”, Health Information Science and Systems, 2014. doi: 10.1186/2047-2501-2-3
12. Wang, G., Gunasekaran, A., Ngai, E. W. T., & Papadopoulos, T. “Big Data Analytics
in Logistics and Supply Chain Management: Certain Investigations for Research and
Applications”, International Journal of Production Economics, pp. 98–110, 2016.
13. Biamonte, J., Wittek, P., Pancotti, N., Rebentrost, P., Wiebe, N., & Lloyd, S. “Quantum
Machine Learning”, Nature, pp. 195–202, 2017.
14. Preskill, J. “Quantum Computing in the NISQ Era and Beyond”, Quantum, p. 79, 2018.
15. Kagermann, H. “Change through digitization—Value creation in the age of Industry
4.0”, In Management of Permanent Change, Springer, Wiesbaden, pp. 23–45, 2015.
16. Hermann, M., Pentek, T., & Otto, B. “Design principles for Industrie 4.0 scenarios”,
49th Hawaii International Conference on System Sciences (HICSS), 2016.
17. Rüßmann, M., Lorenz, M., Gerbert, P., Waldner, M., Justus, J., Engel, P., & Harnisch,
M. “Industry 4.0: The Future of Productivity and Growth in Manufacturing Industries”,
BCG Global, 2015.
18. Geissbauer, R., Schrauf, S., Koch, V., & Kuge, S. “Industry 4.0 – Opportunities and
Challenges of the Industrial Internet”, PwC, 2014.
19. Huhns, M. N., & Singh, M. P. “Service-Oriented Computing: Key Concepts and
Principles”, IEEE Internet Computing, pp. 75–81, 2005.
20. Emaminejad, N., & Akhavian, R. “Trustworthy AI and Robotics: Implications for the
AEC Industry”, Automation in Construction, 2022. doi: 10.1016/j.autcon.2022.104298
21. World Economic Forum. “The Future of Jobs Report 2018 Insight Report Centre for the
New Economy and Society”, World Economic Forum, 2018.
22. Acquire Media. Robots Need Not Apply: New ManpowerGroup Research Finds
Human Strengths are the Solution to the Skills Revolution, ManpowerGroup Inc, 2023.
23. Bontas, R. “Human Capital Trends Report 2019”, Deloitte Romania, 2019.
24. Mazor, A. H. “Leading the Social Enterprise: Reinvent With a Human Focus”, Deloitte
Insights, 2019. https://www2.deloitte.com/us/en/insights/focus/human-capital-trends/2019/
leading-social-enterprise.html
25. Bessen, J. E. “AI and Jobs: The Role of Demand”, SSRN Electronic Journal, 2017. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3106676
26. OECD. “Getting Skills Right: Future-Ready Adult Learning Systems”, OECD, 2014.
27. McKinsey. Skill Shift: Automation and the Future of the Workforce, McKinsey &
Company, 2018.
28. Fuller, J., & Raman, M. “Building From the Bottom Up – Managing the Future of Work”, Harvard Business School, 2019.
29. Partner, F. “PwC Report: ‘Upskilling for a Digital World’ Finds Lack of Skills a Global
Concern”, Fair360, 2019.
30. M, A., T, G., & U, Z. “Risk Capital in OECD Countries”, Financial Market Trends,
pp. 113–151, 2006.
31. Biran, O., Cotton, C., & Nohl, C. “Explanation and Justification in Machine Learning:
A Survey”, Foundations and Trends® in Information Retrieval, pp. 105–307, 2021.
32. Kotter, J., & Schlesinger, L. “Choosing Strategies for Change”, Harvard Business
Review, 13, 2018. https://hbr.org/2008/07/choosing-strategies-for-change
33. Armenakis, A. A., & Bedeian, A. G. “Organizational Change: A Review of Theory and
Research in the 1990s”, Journal of Management, 25(3), pp. 293–315, 1999.
34. Lewin, K. “Frontiers in Group Dynamics: Concept, Method and Reality in Social Science; Social Equilibria and Social Change”, Human Relations, pp. 5–41, 1947.
35. Schein, E. H. “Organizational Culture and Leadership”, SSRN, 1985.
36. Kouzes, J. M., & Posner, B. Z. “The Leadership Challenge”, John Wiley & Sons, 1995.
37. Mitnick, K., & Simon, W. “The Art of Deception Controlling the Human Element of
Security”, Hyperion, 2002. https://archive.org/details/artofdeceptionco0000mitn
38. West-Brown, M., Stikvoort, D., Kossakowski, K., Killcrece, G., Ruefle, R., & Zajicek,
M. T. “Handbook for Computer Security Incident Response Teams (CSIRTs)”, Semantic
Scholar, 2003.
39. Schneier, B. “Data and Goliath: The Hidden Battles to Collect Your Data and Control
Your World”, New York, Ny [U.A.] Norton, 2015.
40. Ribeiro, J., Lima, R., Eckhardt, T., & Paiva, S. “Robotic Process Automation and
Artificial Intelligence in Industry 4.0 – A Literature Review”, Procedia Computer
Science, pp. 51–58, 2021.
41. Floridi, L. “The Logic of Design as a Conceptual Logic of Information”, SSRN Electronic
Journal, 2017. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3849209
42. Lee, J. D., & See, K. A. “Trust in Automation: Designing for Appropriate Reliance”,
Human Factors: The Journal of the Human Factors and Ergonomics Society,
pp. 50–80, 2004. doi: https://doi.org/10.1518/hfes.46.1.50_30392. https://journals.sagepub.com/doi/abs/10.1518/hfes.46.1.50_30392
43. Parasuraman, R., & Riley, V. “Humans and Automation: Use, Misuse, Disuse, Abuse”,
Human Factors: The Journal of the Human Factors and Ergonomics Society, pp. 230–
253, 1997.
44. Knight, J. C., & Leveson, N. G. “An Experimental Evaluation of the Assumption of Independence in Multiversion Programming”, IEEE Transactions on Software Engineering, pp. 96–109, 1986.
45. Amodei, D., Olah, C., Steinhardt, J., Christiano, P. F., Schulman, J., & Mane, D.
“Concrete Problems in AI Safety”, ArXiv, 2016.
46. Grieves, M., & Vickers, J. “Digital Twin: Mitigating Unpredictable, Undesirable
Emergent Behavior in Complex Systems”, ResearchGate, 2017.
47. Riazul Islam, S. M., Kwak, D., Humaun Kabir, M., Hossain, M., & Kwak, K.-S. “The
Internet of Things for Health Care: A Comprehensive Survey”, IEEE Access, pp. 678–
708, 2015.
10 Design and Analysis
of Embedded
Sensors for IIoT
A Systematic Review
Kogila Raghu and Macharla Mounika
10.1 INTRODUCTION
In today’s automated environment, embedded systems have grown in significance.
Embedded systems are crucial for accelerating production and managing factory sys-
tems, especially in the area of automation. Embedded systems have become a vital part of many businesses in recent years, revolutionizing the automation of industrial processes. Manufacturing procedures are streamlined, performance is
improved, and efficiency is optimized. This rise is being fueled by an increasing need for automation, the IoT, and smart electronic devices across various industries. This chapter will address embedded systems in industrial automation as well as some exciting possibilities. A maintenance worker receives real-time data from an Industrial Internet of Things (IIoT) sensor that keeps an eye on equipment and systems. IIoT sensors offer 24/7 “eyes” on vital assets as opposed to depending on
sporadic inspections by maintenance specialists. As a result, machinery runs more
consistently, minor issues are found quickly, and major breakdowns are avoided.
What separates IoT and IIoT is the level of rigor in the standards. The term “Internet
of Things” refers to any item that can be connected to the Internet, like a conven-
tional smartphone. The IIoT is a more technical subset of IoT that adheres to tight
standards. The security of data flowing from complex machinery or proprietary sys-
tems is ensured by these standards.
Business applications for embedded systems come with a wide range of benefits
and capabilities. Process optimization, well-informed decision-making, and prompt response to anomalies are all made possible by the real-time control and monitoring capabilities shown in Figure 10.1. Automating routine tasks and streamlining procedures with embedded systems boosts efficiency and productivity. By fusing
cutting-edge technologies like artificial intelligence (AI), machine learning (ML),
and IoT, the embedded system provides proactive monitoring, early danger identifi-
cation, and efficient risk management. Additionally, it opens up new possibilities for
complex analytics, preventative maintenance, and autonomous judgment.
Generally, embedded systems are a crucial part of industrial automation, fostering innovation and allowing companies to prosper in a fast-paced,
cutthroat environment. Machine control and machine monitoring are the two main
subcategories of embedded industrial automation systems.
In energy management, for instance, embedded systems monitor consumption, manage distribution, and optimize energy use. Businesses may track and analyze
energy data using embedded systems, find potential for energy savings, and
put energy conservation measures into action. Various energy consumption
indicators, including equipment efficiency, operational patterns, and power
usages, are continuously monitored by these systems. Embedded systems
can identify patterns and trends in the acquired data that point to potential
opportunities for energy savings.
For instance, they can spot instances of excessive energy use during particular
times of the day or energy-wasting device problems. Businesses can optimize energy
use and cut waste thanks to this information.
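The pattern-spotting described above can be illustrated with a small Python sketch that flags hours of the day whose average consumption is well above the plant-wide mean. The readings and the 1.5x factor are invented for the example; a real system would work from metered data and tuned thresholds.

```python
from collections import defaultdict

def excessive_hours(readings, factor=1.5):
    """Return hours whose mean kWh exceeds `factor` times the overall mean.

    `readings` is a list of (hour_of_day, kWh) tuples.
    """
    by_hour = defaultdict(list)
    for hour, kwh in readings:
        by_hour[hour].append(kwh)
    overall = sum(k for _, k in readings) / len(readings)
    return sorted(h for h, vals in by_hour.items()
                  if sum(vals) / len(vals) > factor * overall)

readings = [(9, 10.0), (10, 11.0), (11, 10.5),
            (14, 30.0), (14, 28.0),  # suspicious afternoon spike
            (22, 9.0)]
```

Running `excessive_hours(readings)` singles out hour 14, pointing an operator at the particular time of day worth investigating for energy-wasting equipment.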
The IoT, which emerged from improvements in wireless technology, has recently undergone major development to improve its usability for both current and upcoming real-life applications. The term IoT was first introduced in 1999 within the radiofrequency identification (RFID) community. Then electronic devices, AI, ML, and data analytics gained acceptance and
started to draw interest for practical applications [1]. IoT nodes are physical objects
with sensors that are utilized for online data communication. In light of this, there is a need to link all sensor-equipped equipment via the Internet to provide progressive activity and visibility over real-time data. IoT environmental monitoring in industry aids in preserving a safe working environment by sensing smoke and other impurities and evaluating the quality of the air and water. Each cluster uses Bluetooth to
send information gathered from the working environment to the main server [4].
Industrial automation, smart grid, and smart power are examples of IIoT applications
that handle massive volumes of data rapidly and effectively to raise production at low cost. Due to IIoT's promise of enhanced customer experiences, data privacy, increased GDP, continuous data flow, and many other benefits, industry sectors
are growing faster than they would on their own. Industrial machines and systems
will be able to detect their surroundings and other factors and react accordingly as
this technology advances. By 2025, it is anticipated that business-to-business (B2B)
solutions will bring in $11 trillion in annual income, or 70% of the total amount [5].
With the help of smart IIoT gadgets, machines can continuously change their modu-
lation strategy in order to operate and address mechanical faults without the need
for manual labor.
IIoT technology is still in its early days; hence, the problems listed above are to be anticipated, despite the recent emergence of several IIoT applications. A number of communication-related issues have been mentioned in prior IIoT assessments,
but they didn’t go into detail about how these issues might be resolved by IIoT-
enabled technology. Furthermore, a lot of studies have only focused on the problems
and applications that now exist, ignoring the chances that this expertise presents for
corporate growth and amplified consumer happiness. In order to achieve its objec-
tive, this research intends to shed light on the infrastructure and technology that
enable the IIoT, with an emphasis on their relevance in fostering worldwide industrial progress, their applications, challenges, and future proposals for fixing present issues and opening up new prospects. The present review's precise goals are as follows:
This analysis will help company leaders understand how to use IIoT technology
to grow their industries, create new business models, and become more competitive
in the marketplace.
Embedded systems could boost production, improve efficiency, and stimulate innovation in industrial processes. Additionally, it is anticipated that the integration of embedded systems with cutting-edge technologies such as AI and IoT would further improve their capabilities. In general, embedded systems are crucial for giving firms the chance to
prosper in the dynamic and cutthroat world of industrial automation.
Modern industrial systems are not complete without feeding sensor information to the controllers, monitors, and other operational technologies powering the industry. Although sensor networking has been used for a while, the advent of Internet connectivity has increased both the opportunities and challenges of employing sensor systems. The IIoT now includes sensors, which has expanded both the design potential and the challenges.
Sensors serve a variety of purposes in modern manufacturing. Along with providing information for process control, they also support quality assurance, asset monitoring, worker safety, and other functions. Additionally, the advent of potent AI- and cloud-based analytical software tools has made it possible to utilize sensor data to reduce production costs through process improvement and preventive maintenance. Once transferred to the Internet, that data can be utilized for a number of purposes, including supply management and international resource coordination.
There are many different sensor types that can be used for these various applica-
tions, and new and better models are always being developed. Among the most popu-
lar sensor kinds are those for light, motion, temperature, position, presence, vision,
etc., which are shown in Figure 10.3.
10.2 METHODOLOGY
The development of embedded sensors for IIoT applications is a crucial undertaking
that requires careful attention to a number of important factors. These sensors are
the primary data collectors in industrial settings; thus, reliability, precision, effi-
ciency, and connectivity should be prioritized in their design. What follows is a step-by-step look at the considerations involved in creating embedded sensors for the IIoT.
When nodes near the network edge perform tasks in business applications without the interference of other parties, this circumstance is referred to as fog computing. Fog computing is the term for web service delivery across local IT infrastructure. The storage, virtual network connectivity, and processing power provided by the fog nodes, which also include other components, are
essential to the computation. System optimization, ML-driven trials, and system fail-
ure prediction can all be done using these nodes. If a link breaks anywhere between
the local fog nodes and the remote cloud, all fog nodes that are located before that
hop are always reachable and can thus keep providing local system support. Fog
computation acts as a bridge between static calculation area and dynamic computa-
tion area for many industrial manufacturing and appliances. A specific study was
directed by Tange et al. [14] to determine how the IIoT’s security requirements might
be satisfied using the newly emerging concept of fog computing. The chapter, which
offers a wealth of research opportunities in the IIoT security field, shows how fog
computing, as an emergent computing paradigm, might become a powerful tool in
protecting a number of connected industrial environments.
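The fallback property described above — local fog nodes keep serving the plant even when the link to the remote cloud breaks — can be sketched as follows. The function names and the averaging workload are illustrative assumptions, not part of any specific fog framework.

```python
def process_locally(batch):
    """Fallback computation performed on the local fog node."""
    return sum(batch) / len(batch)  # e.g., local data aggregation

def process(batch, cloud_up: bool, cloud_fn=None):
    """Prefer the cloud, but degrade gracefully to the fog node."""
    if cloud_up and cloud_fn is not None:
        try:
            return "cloud", cloud_fn(batch)
        except ConnectionError:
            pass  # link dropped mid-call: fall through to the fog node
    return "fog", process_locally(batch)

def flaky_cloud(batch):
    raise ConnectionError("uplink lost")

site_a = process([1.0, 2.0, 3.0], cloud_up=True, cloud_fn=flaky_cloud)
site_b = process([1.0, 2.0, 3.0], cloud_up=False)
```

In both failure modes the batch still gets processed locally, which is precisely why fog nodes located before a broken hop can "keep providing local system support."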
In order to predict maintenance in fog computing, Teoh et al. [15] presented the
management of several resources with the ML and genetic algorithm (GA) approach.
The fog workflow replicates the measurement of GA’s time, cost, and energy along
with the methods FCFS (First Come First Serve), Round Robin, Max-Min, and
Min-Min. Real-time data and two kinds of logistic regression are used to build the
maintenance prediction model. The results showed that the technique outperformed Max-Min and Round Robin in terms of execution time, cost, and energy usage. The
accuracy of the prediction model is 95.1% during training and 94.5% during testing.
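To make the logistic-regression idea concrete, here is a minimal failure-prediction sketch trained with plain stochastic gradient descent. The single vibration feature, labels, and hyperparameters are invented for illustration and are not taken from Teoh et al.'s model.

```python
import math

def train(X, y, lr=0.5, epochs=2000):
    """Fit logistic regression weights by per-sample gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted failure probability
            g = p - yi                        # gradient of the log loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z)) >= 0.5

# feature: normalized vibration level; label: 1 = machine failed soon after
X = [[0.1], [0.2], [0.25], [0.8], [0.9], [0.95]]
y = [0, 0, 0, 1, 1, 1]
w, b = train(X, y)
```

After training, low-vibration machines are classified as healthy and high-vibration ones as at risk — the same binary maintenance signal the paper's model produces from real-time data.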
A schematic planning related to the fog computing idea was created by Benomar
et al. [16] with industrial monitoring needs in mind. A fog layer was formed to provide a singular-value-decomposition-based strategy for data aggregation. Using the
OMNET++ simulator, the advantages of the recommended method for monitoring
performance, including latency and packet delivery rate, were investigated. As an
illustration, a networked monitoring system for a real-world industrial motor was
used. In the future, additional domains that are essential to the industry can be actu-
ated using the recommended approach.
In another study, a Q-learning method was first created to provide a discrete partial offloading choice. The system's performance was then optimized by relaxing the job offloading to a continuous decision built on the deep deterministic policy gradient (DDPG). The simulation findings show that the DDPG approach and the Q-learning scheme reduce latency by 30% and 23%, respectively.
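A toy version of the discrete Q-learning offloading decision can be written in a few lines. The two task states, the reward model (offloading pays off only for heavy tasks), and all constants are invented for illustration; they stand in for the latency-based rewards a real offloading controller would measure.

```python
import random

random.seed(0)
ACTIONS = (0, 1)  # 0 = execute locally, 1 = offload to the edge

def reward(state, action):
    heavy = state == "heavy"
    if action == 1:                      # offloading a heavy task is worth it
        return 1.0 if heavy else -0.5
    return 1.0 if not heavy else -0.5    # light tasks are cheaper locally

Q = {(s, a): 0.0 for s in ("light", "heavy") for a in ACTIONS}
alpha, eps = 0.1, 0.2                    # learning rate, exploration rate
for _ in range(2000):
    s = random.choice(("light", "heavy"))
    if random.random() < eps:            # epsilon-greedy action selection
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda act: Q[(s, act)])
    # single-step episodes: no successor state, so the target is the reward
    Q[(s, a)] += alpha * (reward(s, a) - Q[(s, a)])

policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in ("light", "heavy")}
```

The learned policy offloads heavy tasks and keeps light ones local. DDPG generalizes this by emitting a continuous offloading fraction instead of a binary choice.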
10.3.1 Real-Time Performance
By utilizing AI, blockchain technology, big data analytics, edge computing, and
cloud computing, sensors, and IoT devices can collect data from many sources
and act in response. They are sophisticated enough to automate procedures,
anticipate issues, and recognize security concerns in a present IoT-based sys-
tem. Remote real-time control of heating and cooling equipment is possible
with the aid of a monitoring and controlling system equipped with humidity
and temperature sensors. Applications for the cloud and Android can also show
users the outcomes of the sensed data. A cell-phone-enabled application and
Arduino device can be used to implement a real-time scheduling approach in
an IoT-smart automated system. High performance can be obtained because the
approach is reasonably able to modify the system setting using the switching
features available.
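The switching-based control loop described above can be sketched with a simple rule-based controller driven by temperature and humidity readings. The setpoint, dead band, and action names are illustrative assumptions, not taken from a specific Arduino or cloud product.

```python
def control(temp_c, humidity_pct, setpoint_c=22.0, band=1.0):
    """Decide HVAC switching actions from one pair of sensor readings.

    A dead band of +/- `band` around the setpoint prevents rapid toggling.
    """
    actions = []
    if temp_c > setpoint_c + band:
        actions.append("cooling_on")
    elif temp_c < setpoint_c - band:
        actions.append("heating_on")
    if humidity_pct > 60.0:
        actions.append("dehumidify")
    return actions

hot_humid = control(25.0, 65.0)   # above the band and too humid
in_band = control(21.5, 40.0)     # within the dead band: no action
```

In a deployed system the same decision logic would run on the Arduino-class device, with the cloud or Android application merely displaying the sensed data and allowing remote setpoint changes.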
10.3.2 Energy Efficiency
IIoT will continue to support sustainability activities in two areas: improved resource
management and increased energy efficiency. In order to closely monitor processes and manage operations across various applications, current IoT systems contain wireless radio sensors that collect data. Energy efficiency must be
taken into account when developing IoT devices if they are to maintain functionality
over time. The energy consumption of IoT devices is often optimized in order to
increase system effectiveness. Well-known wireless technologies are chosen by taking energy efficiency, system integrity, communication capabilities, and safety measures into account.
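A back-of-envelope calculation shows why optimizing energy consumption matters so much for battery-powered IoT nodes: duty cycling lets the radio sleep between transmissions. The current draws and battery capacity below are assumed round numbers, not measurements of any specific device.

```python
def avg_current_ma(active_ma, sleep_ma, duty_cycle):
    """Mean current of a node that is active for `duty_cycle` of each period."""
    return active_ma * duty_cycle + sleep_ma * (1 - duty_cycle)

def battery_life_h(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Idealized battery life in hours (ignores self-discharge, aging)."""
    return capacity_mah / avg_current_ma(active_ma, sleep_ma, duty_cycle)

# Assumed figures: 2000 mAh cell, 20 mA active, 10 uA asleep.
always_on = battery_life_h(2000, active_ma=20.0, sleep_ma=0.01, duty_cycle=1.0)
one_pct = battery_life_h(2000, active_ma=20.0, sleep_ma=0.01, duty_cycle=0.01)
```

Under these assumptions an always-on node lasts about 100 hours, while the same node at a 1% duty cycle lasts on the order of a year — the difference between an unusable and a maintainable sensor deployment.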
Additionally, mixed integer linear programming (MILP) [23] used in an embedded IoT context can reduce power dissipation for processing and communication
by 27% and 36%, respectively. However, using fog computing in a large IoT net-
work comes with some energy consumption concerns. Future research should take
these issues into account when building saving energy solutions for such frame-
works. Future IIoT applications should make use of automation and data insights
in real time to boost sustainability overall, minimize waste, and increase energy
efficiency. IoT imagines a time when a smart environment’s sensing, processing,
and networking capabilities will require little to no human interaction. In such
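As a toy illustration of the kind of placement problem that a MILP formulation [23] addresses, the following sketch brute-forces the assignment of tasks to an edge node or the cloud so as to minimise total processing and communication energy. The task names and energy figures are invented for the example, and a real formulation would use an integer-programming solver rather than exhaustive enumeration:

```python
from itertools import product

# Hypothetical per-task energy costs (joules): processing cost at each
# placement, plus a communication cost incurred only when offloading.
tasks = {
    "filter":    {"edge": 2.0, "cloud": 0.5, "comm": 1.8},
    "aggregate": {"edge": 1.5, "cloud": 0.4, "comm": 0.6},
    "classify":  {"edge": 4.0, "cloud": 1.0, "comm": 2.5},
}

def total_energy(assignment):
    """Sum processing energy, adding communication energy for offloaded tasks."""
    cost = 0.0
    for name, place in assignment.items():
        t = tasks[name]
        cost += t[place] + (t["comm"] if place == "cloud" else 0.0)
    return cost

# Enumerate all edge/cloud assignments and keep the cheapest one.
names = list(tasks)
best = min((dict(zip(names, combo))
            for combo in product(("edge", "cloud"), repeat=len(names))),
           key=total_energy)
print(best, round(total_energy(best), 2))
```

Enumeration is exponential in the number of tasks, which is exactly why real deployments formulate the problem as a MILP and hand it to a solver.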
Design and Analysis of Embedded Sensors for IIoT 181
In their work, Dhirani et al. [25] examined these problems with regard to the application of cybersecurity standards and norms. Their research also offered valuable advice on how to minimize the disparities between security standards and controls, and on fusing compatible standards for OT and ICT security architectures. They suggested the essential steps for protecting diverse manufacturing environments: addressing new cybersecurity threats, enforcing different degrees of protection, and establishing unified cybersecurity benchmarks to support OT/ICT convergence. The study also included guidelines for mapping and putting into practice a variety of standards, including those for 5G and machine-to-machine networks, cloud computing platforms, and IoT-based ecosystems, while reducing security gaps and problems.
In conclusion, the IIoT field is fast evolving, with a wide range of potential future paths and developments. The most promising future directions for IIoT are edge computing, AI integration, and enhanced security measures. AI algorithms can examine data gathered by IIoT devices to detect anomalies, identify patterns, and make informed predictions.
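As a minimal sketch of the anomaly-spotting idea, a simple statistical rule over a stream of sensor readings can flag outliers; production systems would use learned models, and the readings and threshold here are illustrative:

```python
import statistics

def zscore_anomalies(readings, threshold=2.0):
    """Flag readings more than `threshold` standard deviations from the mean.

    The threshold is an illustrative choice; real systems tune it (or learn
    a model) from historical data.
    """
    mean = statistics.fmean(readings)
    stdev = statistics.pstdev(readings)
    if stdev == 0:
        return []
    return [i for i, x in enumerate(readings)
            if abs(x - mean) / stdev > threshold]

# Hypothetical vibration-sensor stream with one obvious spike.
stream = [0.51, 0.49, 0.52, 0.50, 0.48, 3.20, 0.51, 0.50]
print(zscore_anomalies(stream))
```

A deep-learning detector would replace the fixed z-score rule, but the interface (a stream in, flagged indices out) stays the same.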
10.4 CONCLUSION
Designing embedded sensors for IIoT involves a multidisciplinary approach,
encompassing electrical engineering, mechanical engineering, software devel-
opment, and data science. Collaboration among experts in these domains is
essential to create sensors that meet the demands of industrial applications while
maintaining high reliability and security. Embedded sensors are the backbone
of IIoT systems, enabling industries to harness the power of data for improved
efficiency and competitiveness. Designing and analyzing these sensors require
careful consideration of factors such as sensor selection, power efficiency, data
analysis, and security. As IIoT continues to evolve, addressing challenges and
embracing emerging technologies will be crucial for its success in various indus-
trial sectors.
REFERENCES
1. Patel KK, Patel SM, Scholar PG. Internet of Things-IOT: Definition, Characteristics,
Architecture, Enabling Technologies, Application & Future Challenges. Int J Eng Sci
Comput. 2016;6:1–10.
2. Khan WZ, Rehman MH, Zangoti HM, Afzal MK, Armi N, Salah K. Industrial Internet
of Things: Recent Advances, Enabling Technologies and Open Challenges. Comput
Electr Eng. 2020;81. https://doi.org/10.1016/j.compeleceng.2019.106522.
3. Kiesel R, van Roessel J, Schmitt RH. Quantification of Economic Potential of 5G for
Latency Critical Applications in Production. Procedia Manuf. 2020;52:113–20. https://
doi.org/10.1016/j.promfg.2020.11.021.
4. Malik PK, Sharma R, Singh R, Gehlot A, Satapathy SC, Alnumay WS, et al. Industrial
Internet of Things and Its Applications in Industry 4.0: State of The Art. Comput
Commun. 2021;166:125–39.
5. Signé L, Heitzig C. Effective engagement with Africa: Capitalizing on shifts in business, technology, and global partnerships; 2022.
6. ur Rehman MH, Yaqoob I, Salah K, Imran M, Jayaraman PP, Perera C. The Role
of Big Data Analytics in Industrial Internet of Things. Futur Gener Comput Syst.
2019;99:247–59. https://doi.org/10.1016/j.future.2019.04.020.
7. Gebremichael T, Ledwaba LPI, Eldefrawy MH, Hancke GP, Pereira N, Gidlund M,
et al. Security and Privacy in the Industrial Internet of Things: Current Standards and
Future Challenges. IEEE Access 2020;8:152351–66. https://doi.org/10.1109/ACCESS.
2020.3016937.
8. Younan M, Houssein EH, Elhoseny M, Ali AA. Challenges and Recommended
Technologies for the Industrial Internet of Things: A Comprehensive Review. Meas J
Int Meas Confed. 2020;151. https://doi.org/10.1016/j.measurement.2019.107198.
9. Abikoye OC, Bajeh AO, Awotunde JB, Ameen AO, Mojeed HA, Abdulraheem M,
et al. Application of Internet of Thing and Cyber Physical System in Industry 4.0 Smart
Manufacturing. Adv Sci Technol Innov. 2021. https://doi.org/10.1007/978-3-030-
66222-6_14.
10. Thakur P, Kumar Sehgal V. Emerging Architecture for Heterogeneous Smart Cyber-
Physical Systems for Industry 5.0. Comput Ind Eng. 2021;162. https://doi.org/10.1016/j.
cie.2021.107750.
11. Latif S, Idrees Z, Ahmad J, Zheng L, Zou Z. A Blockchain-Based Architecture for
Secure and Trustworthy Operations in the Industrial Internet of Things. J Ind Inf
Integr. 2021;21. https://doi.org/10.1016/j.jii.2020.100190.
12. Umran SM, Lu S, Abduljabbar ZA, Zhu J, Wu J. Secure Data of Industrial Internet of
Things in a Cement Factory Based on a Blockchain Technology. Appl Sci. 2021;11.
https://doi.org/10.3390/app11146376.
13. Rathee G, Ahmad F, Sandhu R, Kerrache CA, Azad MA. On the Design and
Implementation of a Secure Blockchain-Based Hybrid Framework for Industrial Internet-
of-Things. Inf Process Manag. 2021;58. https://doi.org/10.1016/j.ipm.2021.102526.
14. Tange K, De Donno M, Fafoutis X, Dragoni N. A Systematic Survey of Industrial
Internet of Things Security: Requirements and Fog Computing Opportunities. IEEE
Commun Surv Tutorials 2020;22. https://doi.org/10.1109/COMST.2020.3011208.
15. Teoh YK, Gill SS, Parlikad AK. IoT and Fog Computing Based Predictive Maintenance
Model for Effective Asset Management in Industry 4.0 Using Machine Learning. IEEE
Internet Things J. 2021. https://doi.org/10.1109/JIOT.2021.3050441.
16. Benomar Z, Campobello G, Segreto A, Battaglia F, Longo F, Merlino G, et al. A Fog-
Based Architecture for Latency-Sensitive Monitoring Applications in Industrial Internet
of Things. IEEE Internet Things J. 2021. https://doi.org/10.1109/JIOT.2021.3138691.
17. Senthilkumar P, Rajesh K. Design of a Model Based Engineering Deep Learning
Scheduler in Cloud Computing Environment Using Industrial Internet of Things (IIOT).
J Ambient Intell Humaniz Comput. 2021. https://doi.org/10.1007/s12652-020-02862-7.
18. Liu J, Qian K, Qin Z, Alshehri MD, Li Q, Tai Y. Cloud Computing-Enabled IIOT
System for Neurosurgical Simulation Using Augmented Reality Data Access. Digit
Commun Networks. 2022;11(2): 347–57. https://doi.org/10.1016/j.dcan.2022.04.019
19. Wu J, Zhang G, Nie J, Peng Y, Zhang Y. Deep Reinforcement Learning for Scheduling
in an Edge Computing-Based Industrial Internet of Things. Wirel Commun Mob
Comput. 2021;2021. https://doi.org/10.1155/2021/8017334.
20. Deng X, Yin J, Guan P, Xiong NN, Zhang L, Mumtaz S. Intelligent Delay-Aware Partial
Computing Task Offloading for Multi-User Industrial Internet of Things through Edge
Computing. IEEE Internet Things J. 2021. https://doi.org/10.1109/JIOT.2021.3123406.
21. Ren S, Kim JS, Cho WS, Soeng S, Kong S, Lee KH. Big Data Platform for Intelligence
Industrial IoT Sensor Monitoring System Based on Edge Computing and AI. 3rd
Int. Conf. Artif. Intell. Inf. Commun. ICAIIC 2021;2021. https://doi.org/10.1109/
ICAIIC51459.2021.9415189.
184 AI-Driven IoT Systems for Industry 4.0
11.1 INTRODUCTION
Artificial intelligence (AI) has emerged as a transformational technology in the
context of Industry 4.0, revolutionising decision-making processes in a variety of
industries [1, 2]. This overview aims to introduce the role of AI in Industry 4.0 and
investigate its potential for optimising decision-making in complicated industrial
situations.
“Industry 4.0” arises from the integration of digital technologies and signifies a paradigm shift in production and technology. Covering major concepts and technologies such as the Internet of Things (IoT), big data analytics, cloud computing, and cyber-physical systems, this overview seeks to provide readers with a thorough grasp of Industry 4.0 [3].
AI is the umbrella term for a variety of tools and methods that let computers
simulate human intellect, learn from experience, and take independent actions [4].
AI is essential to Industry 4.0 because it turns data into insightful knowledge and
helps decision-making in a variety of fields [5]. By transforming decision-making
through data analysis, predictive analytics, process optimisation, decision support
systems (DSSs), and autonomous systems, AI plays a critical role in Industry 4.0
[6, 7]. Organisations can achieve higher levels of productivity, efficiency, and com-
petitiveness in the ever-changing world of Industry 4.0 by utilising AI to its full
potential [8].
Knowledge of the main concepts and technology of Industry 4.0 is necessary.
Industry 4.0 is built on a combination of interconnectedness, information open-
ness, technical support, decentralised decision-making, and technologies, includ-
ing the IoT, big data analytics, cloud computing, and cyber-physical systems [9].
Organisations may promote innovation, streamline operations, and boost competi-
tiveness in the fast-changing environment of Industry 4.0 by utilising these ideas
and technologies [10]. A new wave of opportunities and developments for industries
around the world is brought about by the Industry 4.0 framework [11]. To be success-
ful in this rapidly changing environment, organisations must overcome a number of
problems that are presented by technology [12]. The main difficulties that industries
confront when making decisions within the context of Industry 4.0 are highlighted
in this overview of issues [13].
AI-based decision support seeks to improve decision quality, increase the capacity for human decision-making, and improve organisational performance [21].
Neural networks and deep learning: Deep-learning models are deep neural networks with numerous hidden layers. Many decision-making processes, including speech and image recognition, NLP, and recommendation systems, rely on neural networks.
Bayesian networks: Bayesian networks use directed acyclic graphs to represent probabilistic relationships between variables. They can describe both causal and conditional dependencies between variables. Bayesian networks are used for decision-making under uncertainty, in tasks like risk assessment, diagnosis, and prediction.
Reinforcement learning: Reinforcement-learning models develop decision-making through interaction with the environment. By experimenting with different actions and taking feedback into account, they seek to maximise a reward signal. Applications of reinforcement learning include robotics, gaming, and autonomous systems.
Genetic algorithms: Genetic algorithms are optimisation methods inspired by natural selection. They employ a population-based strategy that mirrors the evolution of biological species. In fields like scheduling, resource allocation, and logistics, genetic algorithms are employed to solve complicated optimisation problems.
Fuzzy logic: Fuzzy logic models represent and reason about ambiguity and imperfect information. To deal with subjective or ambiguous data, they employ linguistic variables and fuzzy rules. Expert systems, control systems, and pattern recognition are a few examples of decision-making processes that make use of fuzzy logic.
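As a concrete illustration of one model family from the list above, the following sketch runs a toy genetic algorithm on the classic “OneMax” problem (evolving a bit string toward all ones), standing in for the scheduling and allocation problems mentioned. The population size, mutation rate, and selection scheme are illustrative choices:

```python
import random

random.seed(0)  # deterministic run for the example

GENOME_LEN, POP_SIZE, GENERATIONS, MUT_RATE = 20, 30, 60, 0.05

def fitness(genome):
    """OneMax fitness: the number of 1-bits in the genome."""
    return sum(genome)

def mutate(genome):
    """Flip each bit independently with probability MUT_RATE."""
    return [1 - g if random.random() < MUT_RATE else g for g in genome]

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]          # truncation selection (elitist)
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))
```

Because the top half of each generation is carried over unchanged, the best fitness never decreases, which is one simple way to stabilise a GA.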
An AI-powered DSS can improve the accuracy, efficiency, and effectiveness of decisions. This section examines how, using advanced analytics, automation, and actionable insights, an AI-powered DSS may improve decision-making.
AI-powered systems can tailor recommendations to users’ unique needs. Additionally, AI systems may adjust and learn from user interactions, progressively enhancing the accuracy and usefulness of recommendations over time.
1. Data privacy and security: Decision-making systems depend on enormous volumes of data, including sensitive and personal data. Privacy concerns arise when people’s data is gathered, stored, and analysed without their knowledge or consent. To safeguard information from unauthorised access or breaches, organisations must prioritise data privacy and have strong security measures in place.
2. Informed consent and data usage: In decision-making processes, open
communication and securing people’s informed consent are crucial. The
collection, usage, and sharing of a person’s data should be under their con-
trol. Organisations must acquire informed consent, make sure data protec-
tion laws are followed, and provide clear explanations of how data will be
used.
11.11 CONCLUSION
AI-driven decision-making in Industry 4.0 has enormous potential to change organ-
isations’ decision-making capacities. The creation and use of decision-making
systems will be influenced by enhanced intelligence, moral AI governance, and
trustworthy AI. As important research directions, federated learning, responsible
AI, and hybrid decision-making systems will develop. In conclusion, this chapter
examined the use of AI in Industry 4.0, the incorporation of AI models into DSS
frameworks, and the ethical issues surrounding AI-driven decision-making. We cov-
ered the significance of data gathering and preprocessing, highlighted several AI
models appropriate for making decisions, and looked at the difficulties and solutions
for resolving ethical issues. In order to make sure that AI-driven decision-making
benefits society in the era of Industry 4.0, it is critical to keep advancing ethical AI
governance frameworks, fostering transparency, and pushing responsible AI prac-
tices. Organisations may embrace the revolutionary power of AI and make wise
decisions that lead to success in the dynamic and data-rich environment of Industry
4.0 by embracing these future directions.
ACKNOWLEDGEMENTS
My thanks go to the editor of the volume, Professor Arun Kumar Sangaiah, for
giving me the chance to write a chapter on the area of AI-driven IoT systems for
Industry 4.0 and making helpful suggestions.
AUTHOR BIOGRAPHIES
R. Bargavi is a research scholar at the Faculty of Management at SRM Institute of
Science and Technology, Kattankulathur Campus, pursuing doctoral studies in edu-
cational technology and sustainability.
REFERENCES
1. Mhlanga, D., “Artificial intelligence in the Industry 4.0, and its impact on poverty, inno-
vation, infrastructure development, and the sustainable development goals: Lessons
from emerging economies?” Sustainability, vol. 13, no. 11, p. 5788, 2021.
2. Jan, Z. et al., “Artificial intelligence for Industry 4.0: Systematic review of applications,
challenges, and opportunities,” Expert Syst. Appl., vol. 32, p. 119456, 2022.
3. Haleem, A., Javaid, M., Singh, R. P. and Suman, R., “Medical 4.0 technologies for
healthcare: Features, capabilities, and applications,” Internet Things Cyber-Physical
Syst., vol. 2, pp. 12–30, 2022.
4. Sodhi, H., “When Industry 4.0 meets Lean Six Sigma: A review,” Ind. Eng. J., vol. 13,
no. 1, pp. 1–12, 2020.
5. Gill, S. S. et al., “AI for next generation computing: Emerging trends and future direc-
tions,” Internet Things, vol. 19, p. 100514, 2022.
6. Ustundag, A. and Cevikcan, E., Industry 4.0: Managing the digital transformation.
Springer, 2017.
7. Terziyan, V., Gryshko, S. and Golovianko, M., “Patented intelligence: Cloning human
decision models for Industry 4.0,” J. Manuf. Syst., vol. 48, pp. 204–217, 2018.
8. Miśkiewicz, R. and Wolniak, R., “Practical application of the Industry 4.0 concept in a
steel company,” Sustainability, vol. 12, no. 14, p. 5776, 2020.
9. Pereira, A. C. and Romero, F., “A review of the meanings and the implications of the
Industry 4.0 concept,” Procedia Manuf., vol. 13, pp. 1206–1214, 2017.
10. Tseng, M.-L., Tran, T. P. T., Ha, H. M., Bui, T.-D. and Lim, M. K., “Sustainable indus-
trial and operation engineering trends and challenges toward Industry 4.0: A data
driven analysis,” J. Ind. Prod. Eng., vol. 38, no. 8, pp. 581–598, 2021.
11. Kagermann, H., “Change through digitization—Value creation in the age of Industry
4.0,” in Management of permanent change, Springer, 2014, pp. 23–45.
12. Bakthavatchalam, K. et al., “IoT framework for measurement and precision agriculture:
Predicting the crop using machine learning algorithms,” Technologies, vol. 10, no. 1,
2022, https://doi.org/10.3390/technologies10010013.
13. Da Xu, L., Xu, E. L. and Li, L., “Industry 4.0: State of the art and future trends,” Int. J.
Prod. Res., vol. 56, no. 8, pp. 2941–2962, 2018.
14. Oztemel, E. and Gursev, S., “Literature review of Industry 4.0 and related technolo-
gies,” J. Intell. Manuf., vol. 31, pp. 127–182, 2020.
15. Zizic, M. C., Mladineo, M., Gjeldum, N. and Celent, L., “From Industry 4.0 towards
industry 5.0: A review and analysis of paradigm shift for the people, organization and
technology,” Energies, vol. 15, no. 14, p. 5221, 2022.
16. Sarker, I. H., “Deep learning: A comprehensive overview on techniques, taxonomy,
applications and research directions,” SN Comput. Sci., vol. 2, no. 6, p. 420, 2021.
17. Jarrahi, M. H., “Artificial intelligence and the future of work: Human-AI symbiosis in
organizational decision making,” Bus. Horiz., vol. 61, no. 4, pp. 577–586, 2018.
18. Janiesch, C., Zschech, P. and Heinrich, K., “Machine learning and deep learning,”
Electron. Mark., vol. 31, no. 3, pp. 685–695, 2021.
19. Zhou, L., Pan, S., Wang, J. and Vasilakos, A. V., “Machine learning on big data:
Opportunities and challenges,” Neurocomputing, vol. 237, pp. 350–361, 2017.
20. Huh, Y. U., Keller, F. R., Redman, T. C. and Watkins, A. R., “Data quality,” Inf. Softw.
Technol., vol. 32, no. 8, pp. 559–565, 1990.
21. Zhong, R. Y., Newman, S. T., Huang, G. Q. and Lan, S., “Big data for supply chain
management in the service and manufacturing sectors: Challenges, opportunities,
and future perspectives,” Comput. Ind. Eng., vol. 101, pp. 572–591, 2016, https://doi.
org/10.1016/j.cie.2016.07.013.
22. Huberman, A. M. and Miles, M. B., “Data management and analysis methods,” 1994.
23. Sun, G.-D., Wu, Y.-C., Liang, R.-H. and Liu, S.-X., “A survey of visual analytics tech-
niques and applications: State-of-the-art research and future challenges,” J. Comput.
Sci. Technol., vol. 28, pp. 852–867, 2013.
24. Hearst, M. A., “User interfaces and visualization,” Mod. Inf. Retr., vol. 15, no. 16, pp.
257–323, 1999.
25. Fang, C. and Marle, F., “A simulation-based risk network model for decision support in
project risk management,” Decis. Support Syst., vol. 52, no. 3, pp. 635–644, 2012.
26. Lilien, G. L., Rangaswamy, A., Van Bruggen, G. H. and Starke, K., “DSS effective-
ness in marketing resource allocation decisions: Reality vs. perception,” Inf. Syst. Res.,
vol. 15, no. 3, pp. 216–235, 2004.
AI for Optimal Decision-Making in Industry 4.0 205
27. Dahl, S. and Derigs, U., “Cooperative planning in express carrier networks—An
empirical study on the effectiveness of a real-time decision support system,” Decis.
Support Syst., vol. 51, no. 3, pp. 620–626, 2011.
28. Deshpande, P. S., Sharma, S. C., Peddoju, S. K., Deshpande, P. S., Sharma, S. C. and
Peddoju, S. K., “Predictive and prescriptive analytics in big-data era,” Secur. Data
Storage Asp. Cloud Comput., vol. 1025, pp. 71–81, 2019.
29. Pawar, V. and Jose, D. V., “Machine learning methods to identify aggressive behavior in social media,” in Dutta, P., Chakrabarti, S., Bhattacharya, A., Dutta, S. and Shahnaz, C. (eds), Emerging Technologies in Data Mining and Information Security, vol. 490, pp. 507–513. Springer, Singapore, 2023. https://doi.org/10.1007/978-981-19-4052-1_50
30. Zhou, Z.-H., Machine learning. Springer Nature, 2021.
31. LeCun, Y., Bengio, Y. and Hinton, G., “Deep learning,” Nature, vol. 521, no. 7553,
pp. 436–444, 2015.
32. Kang, Y., Cai, Z., Tan, C.-W., Huang, Q. and Liu, H., “Natural language processing
(NLP) in management research: A literature review,” J. Manag. Anal., vol. 7, no. 2,
pp. 139–172, 2020.
33. Abooraghavan, R., Perumal, G. A., Rahuman, K. K., Suneel, C. and Jose, D., “Smart
driving license system in RTO using IoT technology,” AIP Conf. Proc., vol. 2444,
no. 1, p. 70004, Mar. 2022, doi: 10.1063/5.0078325.
34. Rahm, E. and Do, H. H., “Data cleaning: Problems and current approaches,” IEEE
Data Eng. Bull., vol. 23, no. 4, pp. 3–13, 2000.
35. Clark, W. A. V. and Avery, K. L., “The effects of data aggregation in statistical analy-
sis,” Geogr. Anal., vol. 8, no. 4, pp. 428–438, 1976.
36. Osborne, J., “Notes on the use of data transformations,” Pract. Assessment, Res. Eval.,
vol. 8, no. 1, p. 6, 2002.
37. Turner, C. R., Fuggetta, A., Lavazza, L. and Wolf, A. L., “A conceptual basis for feature
engineering,” J. Syst. Softw., vol. 49, no. 1, pp. 3–15, 1999.
38. Li, L., Fan, Y., Tse, M. and Lin, K.-Y., “A review of applications in federated learning,”
Comput. Ind. Eng., vol. 149, p. 106854, 2020.
39. Arrieta, A. B. et al., “Explainable artificial intelligence (XAI): Concepts, taxonomies,
opportunities and challenges toward responsible AI,” Inf. Fusion, vol. 58, pp. 82–115,
2020.
40. Akyuz, E. and Celik, M., “A hybrid decision-making approach to measure effective-
ness of safety management system implementations on-board ships,” Saf. Sci., vol. 68,
pp. 169–179, 2014.
12 Challenges in Lunar
Crater Detection for
TMC-2 Obtained DEM
Image Using Ensemble
Learning Techniques
Sanchita Paul, Chinmayee Chaini,
and Sachi Nandan Mohanty
12.1 INTRODUCTION
Deep learning is a branch of machine learning (ML) that focuses on training
multi-layered artificial neural networks (ANNs) to learn from and make sense of
enormous volumes of data. Neurons, the interconnected nodes that make up these
networks, mimic the behavior of real neurons. Each neuron in the network receives
input signals, carries out calculations, and then generates an output signal that
can be transmitted to other neurons. The influence of one neuron on another is
determined by the strength of the connections between neurons, which are rep-
resented by weights. It draws inspiration from the way that the neural networks
(NNs) in the human brain are organized and operate. The input and output layers
of deep-learning models, sometimes referred to as deep NNs (DNNs), are separated
by several hidden layers. These complex designs enable feature extraction from
unstructured data and hierarchical learning. Each layer of the network represents a
different degree of abstraction and is capable of learning. Early layers capture low-level features like edges or corners in images, whereas deeper layers learn more sophisticated and abstract properties, such as shapes or objects. The network’s final layers deliver the required results, like classification or prediction. Looking ahead, the deep-learning field is developing quickly. To enhance performance and address open issues, researchers are investigating novel architectures, optimization methods, and learning algorithms. Research is ongoing in the domain of unsupervised
learning, which involves training models using unlabeled data. Deep learning algo-
rithms have proven to perform exceptionally well in a variety of challenging tasks,
including speech recognition, computer vision, and natural language processing
(NLP). The outstanding successes of deep learning in applications for artificial
intelligence have been considerably aided by this learning of hierarchical represen-
tations straight from data [1, 2].
In a fully connected layer, typically the last layer, each neuron is connected to every neuron in the preceding layer, unlike the convolutional hidden layers of CNNs. CNNs can be modified for semantic segmentation even though they are frequently employed for classification problems. The input image is separated
into equal-sized, tiny patches for semantic segmentation. Each patch’s center pixel is
classified by the CNN, and then the patch is moved to categorize the following center pixel. This method is inefficient: spatial information is lost as features travel into the final fully connected layers, and the overlapping features of the sliding patches are not reused. The fully convolutional NN (FCNN) was developed to address
this problem. Transposed convolutional layers are used in FCNNs in place of the
final fully connected layers of the CNN. These layers conduct semantic segmenta-
tion and up-sampling on the low-resolution feature maps, recovering the original
spatial dimensions at the same time. The backpropagation algorithm is frequently combined with an optimization algorithm such as gradient descent for training DNNs. The first step in the procedure is to compute the gradient of a loss function, which gauges the difference between the desired and predicted outputs. The optimization
technique then uses this gradient data to iteratively update the network weights to
minimize the loss function’s value. The network is optimized iteratively until it per-
forms at a satisfactory level [3, 4].
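The training loop just described (compute the gradient of a loss, then update the weights iteratively) can be sketched with a linear model standing in for a deep network, so that the gradients fit in a few lines; the data and learning rate are illustrative:

```python
# Minimal sketch of gradient-descent training: compute the gradient of a
# loss and update the weights iteratively. A one-parameter linear model
# stands in for a deep network; backpropagation would compute these same
# gradients automatically layer by layer.

data = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]  # targets from y = 2x + 1

w, b, lr = 0.0, 0.0, 0.01

def mse(w, b):
    """Mean-squared-error loss over the dataset."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

initial_loss = mse(w, b)
for _ in range(500):
    # Analytic gradients of the mean-squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w          # step against the gradient
    b -= lr * grad_b

print(round(w, 2), round(b, 2), mse(w, b) < initial_loss)
```

The loop recovers w ≈ 2 and b ≈ 1, and the loss falls monotonically for a sufficiently small learning rate, which is the behaviour the text describes for DNN training.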
Key Concepts:
• Neural networks (NNs): ANNs, which are made up of interconnected nodes
called neurons, are a key component of deep learning. Together, neurons
from several layers process and transform data.
• Deep neural networks (DNNs): DNNs are NNs that include several hidden
layers and are used in deep learning models. These complex designs enable
feature extraction from unstructured data and hierarchical learning.
• Feature learning: It is also known as representation learning, which is the
automatic learning of high-level, abstract representations of data features
by deep learning models.
• Training and optimization: With the use of backpropagation algorithms
and big datasets, deep-learning models are trained. Updates to the network
parameters and the reduction of the discrepancy between expected and
actual outputs are achieved using optimization techniques like stochastic
gradient descent.
• Big data and computing power: Big data is essential for deep learning, since it requires large amounts of labeled training data to learn complicated patterns. Deep learning also benefits from powerful hardware accelerators, such as graphics processing units (GPUs), to handle the intensive computations required.
Machine learning classifiers: In the classical pipeline, features are first extracted based on size, texture, contrast, and color information. Following that, dominant characteristics are identified using statistical analysis or feature-selection methods like principal component analysis (PCA). The chosen characteristics are then fed
into a support vector machine (SVM) or NN-based ML classifier as inputs. The ideal
boundary separating each class is chosen by the ML classifier using the input feature
vector and target class labels. After being trained, the ML classifier can be used
to categorize fresh, unseen data and establish its class. Typical issues include the selection of suitable features, the length of the feature vector, the type of classifier, and the preprocessing required by the raw image attributes.
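A compact sketch of this classical pipeline follows, with synthetic stand-ins for the hand-crafted feature vectors, PCA computed via a singular value decomposition, and a nearest-centroid rule standing in for the SVM to keep the example dependency-light:

```python
import numpy as np

# Sketch of the classical pipeline: hand-crafted features -> PCA ->
# classifier. The 4-D feature vectors are synthetic stand-ins for
# size/texture/contrast/color measurements.

rng = np.random.default_rng(0)
# Two classes of feature vectors, separated along two of the four dimensions.
class0 = rng.normal(loc=[0, 0, 0, 0], scale=0.3, size=(20, 4))
class1 = rng.normal(loc=[2, 2, 0, 0], scale=0.3, size=(20, 4))
X = np.vstack([class0, class1])
y = np.array([0] * 20 + [1] * 20)

# PCA via SVD of the mean-centred data; keep the top 2 components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# Nearest-centroid classification in the reduced feature space
# (an SVM would learn a maximum-margin boundary here instead).
centroids = np.array([Z[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(np.linalg.norm(Z[:, None, :] - centroids, axis=2), axis=1)
accuracy = (pred == y).mean()
print(accuracy)
```

Because PCA keeps the directions of greatest variance, the class-separating direction survives the reduction from four features to two, and the simple classifier separates the classes cleanly.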
Deep learning classifiers: Since deep-learning classifiers (DLCs) can process raw images directly, preprocessing, segmentation, and feature extraction are often unnecessary. The fixed input-size constraint of many deep-learning algorithms, however, necessitates resizing the input images. Using data augmentation techniques during training can help avoid the need for intensity normalization and contrast enhancement in some cases.
DLC provides a greater level of classification accuracy since it may prevent mistakes
brought on by inaccurate feature vectors or sloppy segmentation. The fact that DLC
networks frequently include numerous hidden layers means that, in comparison to
ML-based techniques, more mathematical operations are carried out, making the
models for DLC networks more computationally costly. While the DLC uses the
image as input and outputs the object class, the ML classifier uses the feature vec-
tor as input. Because deep learning has more layers than traditional ANNs, theo-
retically it can be stated to be an improvement over ANN. As each layer translates
the input data from the preceding layer into a new representation at a higher and
slightly greater abstraction level, it is regarded as a type of representational learn-
ing. Multiple layers of NNs are used in deep-learning models to identify both local
and global relationships in data. Early layers extract features such as edges and their positions, using nonlinear functions to transform the data at each layer. Deeper layers ignore small differences and take edge arrangements into account to discover
more complex patterns. These patterns come together to create intricate combina-
tions that depict related objects. Due to their efficiency in processing image data, CNNs have seen a rise in use within deep learning. CNNs are well suited for tasks like image
classification and semantic segmentation because of the hierarchical design of their
layers, which enables the extraction of useful characteristics from images. Due to the
network-wide preservation of spatial information brought about by the development
of FCNNs, semantic segmentation has been further enhanced. Utilizing the back-
propagation algorithm and optimization methods, weights are iteratively adjusted dur-
ing the training of DNNs. Deep learning has significant promise for solving complex
challenges across a variety of disciplines with further research and breakthroughs.
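The claim that early layers extract edge-like features can be illustrated with a single hand-set convolution filter responding to a vertical edge in a toy image; in a trained CNN such filters are learned rather than fixed:

```python
import numpy as np

# A toy 6x6 image with a vertical edge between columns 2 and 3.
image = np.zeros((6, 6))
image[:, 3:] = 1.0

# A simple horizontal-difference filter: responds where intensity jumps
# from left to right, i.e. at vertical edges.
kernel = np.array([[-1.0, 1.0]])

def conv2d_valid(img, k):
    """'Valid' 2-D cross-correlation, as used in CNN layers."""
    kh, kw = k.shape
    h, w = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (img[i:i + kh, j:j + kw] * k).sum()
    return out

response = conv2d_valid(image, kernel)
print(response[0])  # strongest response at the edge location
```

The feature map is zero everywhere except at the edge, which is exactly the kind of low-level, position-aware response the early layers of a CNN learn to produce.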
Let us examine the difficulties encountered when utilizing deep learning to recognize lunar craters. Due to the unique properties of the lunar surface and the dearth of labeled data, deep learning for the detection of lunar craters faces particular difficulties.
To overcome these difficulties in lunar crater detection using deep learning, a mix
of cutting-edge methods, teamwork between researchers and subject-matter special-
ists, and the accessibility of extensive and carefully curated datasets are essential.
Deeper insights into the lunar surface and its geological history will be made pos-
sible by improved crater detection techniques made possible by ongoing research and
developments in deep-learning methodology [5, 6].
3. Deep-learning models: Deep-learning models have advanced lunar crater recognition. These models may capture both local and global correlations in the data through many layers of NNs, improving the ability to recognize craters in various lunar landscapes.
4. Transfer learning: Transfer learning is a method that applies models that
have already been trained on large-scale datasets to new problems with
little data. Deep-learning models can learn general features by being pre-
trained on large datasets, such as general image collections, and then being
fine-tuned on smaller lunar crater datasets. This strategy can enhance per-
formance and lessen the issue of scarce labeled data that is unique to lunar
craters.
5. Ensemble techniques: Multiple deep-learning models are combined using
ensemble learning to increase the reliability and accuracy of predictions.
Ensemble approaches can get over the constraints of a single model and
improve generalization and classification performance by combining the
results of individual models. By capturing various crater traits and boost-
ing overall detection capability, ensemble approaches have demonstrated
promise in the detection of lunar craters.
6. Domain-specific architectures: To detect lunar craters, researchers are
using specialized deep-learning architectures. To maximize performance,
these systems take into account particular aspects of lunar imaging and
include domain expertise. The accuracy and dependability of lunar cra-
ter detection can be further improved by developing models that take into
consideration elements like illumination conditions, surface textures, and
geological fluctuations.
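The ensemble idea in point 5 can be sketched as a simple majority vote over the per-tile predictions of several models; the “detector” outputs below are invented for the example:

```python
from collections import Counter

# Toy illustration of ensemble majority voting: three imperfect detectors
# disagree on individual image tiles, but the combined vote is more
# reliable than any single model.

def majority_vote(predictions_per_model):
    """Combine per-tile labels from several models by majority vote."""
    combined = []
    for tile_votes in zip(*predictions_per_model):
        combined.append(Counter(tile_votes).most_common(1)[0][0])
    return combined

# Labels for five image tiles: 1 = crater, 0 = background.
model_a = [1, 0, 1, 1, 0]
model_b = [1, 1, 1, 0, 0]
model_c = [0, 0, 1, 1, 1]
print(majority_vote([model_a, model_b, model_c]))  # → [1, 0, 1, 1, 0]
```

With an odd number of binary voters there is always a clear majority; real crater-detection ensembles often weight votes by model confidence instead of counting them equally.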
These developments in deep learning for lunar crater detection promise a deeper
understanding of the lunar surface's geological history and surface features. As
researchers continue to push the limits of deep-learning algorithms and refine their
applicability to crater detection, we can anticipate further improvements in precisely
detecting and analyzing lunar craters [7, 8].
Deep-learning methods have also been used in multiple real-time lunar crater
recognition applications, advancing our knowledge of the lunar surface and assist-
ing numerous lunar exploration missions. Here are some significant real-time deep-
learning applications in this area:
f. Lunar habitat planning: Deep-learning methods can help with the real-
time evaluation and selection of viable lunar habitats. Deep-learning models
can help determine the best locations for lunar colonies by examining pho-
tos and data relating to topographical features, radiation levels, and geologi-
cal stability. Real-time habitat planning thus supports future Moon
colonization and human exploration.
The objective of this discussion [9] is to provide insights into various change
detection techniques and their limitations. It emphasizes that a good change detec-
tion technique should provide information about change areas, categorize change
types effectively, and offer an accurate assessment of change detection outcomes.
The advantage lies in the categorization of change detection techniques into those
conveying change information through binary maps and those showing detailed
transitional changes. The disadvantage is the challenge of determining appropriate
threshold values for segmentation, which can affect the accuracy of change detec-
tion results. It is acknowledged that post-classification-based techniques tend to offer
higher accuracy compared to algebra-based approaches, but the selection of the best
technique remains challenging and depends on factors, such as remote sensing data
and the characteristics of the area under study. Researchers continue to explore dif-
ferent techniques and base their decisions on performance evaluations and accuracy
assessments. These real-time deep-learning applications for lunar crater recognition
show how important this technology is to the advancement of lunar exploration,
scientific inquiry, and future missions. Deep learning advances space exploration by
enabling accurate and effective processing of data from the lunar surface, furthering
our understanding of the Moon [6, 10].
c. Rule evaluation: To calculate the overall output fuzzy set based on the input
fuzzy sets and their accompanying rule weights, the fuzzy rules are evalu-
ated using inference techniques. Combining the fuzzy sets and using fuzzy
logic operations like conjunction, disjunction, and implication are part of
this step.
d. Defuzzification: Through a defuzzification procedure, the resulting fuzzy
set is transformed back into a crisp value or set of crisp values. The final
output of the fuzzy system is generated in this stage. To determine whether
or not a region is a crater, the output of the fuzzy inference method is
defuzzified to produce a clear classification determination.
e. Training and adaptation: A learning algorithm is used to modify the fuzzy
system’s parameters, including the membership functions and rule weights.
The learning algorithm reduces the error between the system’s output and
the expected output using the training data.
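Steps c and d above (rule evaluation and defuzzification) can be sketched end to end in a few lines of Python. The features (circularity, depth), their membership ranges, and the two rules below are hypothetical illustrations, and defuzzification is simplified to a weighted average of rule consequents rather than a full centroid computation:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def crater_likelihood(circularity, depth):
    # Fuzzification: map crisp feature values to membership degrees.
    circ_high = tri(circularity, 0.5, 1.0, 1.5)
    circ_low = tri(circularity, -0.5, 0.0, 0.5)
    depth_deep = tri(depth, 0.5, 1.0, 1.5)
    depth_shallow = tri(depth, -0.5, 0.0, 0.5)

    # Rule evaluation: conjunction as min, disjunction as max.
    # R1: IF circularity IS high AND depth IS deep THEN crater
    # R2: IF circularity IS low OR depth IS shallow THEN non-crater
    crater_strength = min(circ_high, depth_deep)
    non_crater_strength = max(circ_low, depth_shallow)

    # Defuzzification (simplified): weighted average of rule consequents
    # (crater = 1.0, non-crater = 0.0) yields a crisp likelihood.
    total = crater_strength + non_crater_strength
    if total == 0.0:
        return 0.5  # no rule fires; undecided
    return crater_strength / total

print(crater_likelihood(0.9, 0.8))   # highly circular, deep region
print(crater_likelihood(0.1, 0.1))   # flat, irregular region
```

Thresholding the returned likelihood at 0.5 gives the crisp crater/non-crater decision described in step d; step e would then adjust the triangle parameters and rule weights against labeled training data.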
The benefits of fuzzy learning are numerous. It is suitable for modeling and
control in a variety of disciplines since it can manage complicated, ambiguous,
and inaccurate data. Fuzzy systems are interpretable and valuable for knowl-
edge representation because they can capture human-like decision-making and
reasoning processes. Fuzzy learning also enables continual optimization by
allowing the fuzzy system to adjust to new information or situations that are
changing.
12.3.1 Fuzzy Learning
“In human cognition, almost all classes have unsharp (fuzzy) boundaries,” claims
Lotfi Zadeh (informally known as the fuzzifier of the crisp domain). We can there-
fore conform to Zadeh’s claim by coupling, embedding, or meshing fuzzy compo-
nents into NNs with bivalent logic. Fuzzy NNs were created through the union of
fuzzy logic’s knowledge representation capabilities and NNs’ learning capabilities.
As a result, the weakness of learning in fuzzy logic as well as the problem of NNs
being a “black box”—the inability to explain decisions (lack of transparency)—
have been overcome. The fundamental idea behind this neuro-fuzzy system (NFS)
is that it blends the learning and connectionist structure of NNs with the human-
like reasoning style of fuzzy systems. NFS offers strong and adaptable universal
approximations with the capacity to investigate comprehensible IF-THEN rules. In
many spheres of our social and technological existence, NFS usage is expanding.
Techniques for fuzzy learning can be used for a variety of feature extraction and
image processing tasks. These methods deal with uncertainty and imprecision in
picture data by using fuzzy logic and fuzzy sets. Here are a few common approaches
for fuzzy learning in this field:
and accounts for picture segmentation errors. Meaningful areas from pho-
tos can be extracted via fuzzy clustering, allowing for later analysis and
feature extraction.
b. Fuzzy rule-based classification: For picture recognition and classification
problems, fuzzy rule-based classification is a well-liked method. The links
between the characteristics and class labels are captured by fuzzy rules that
are constructed based on features retrieved from images. The rules are then
assessed, and classification choices are made using fuzzy inferences. For
tasks like emotion recognition, face recognition, and object classification,
this strategy is helpful.
c. Fuzzy feature selection: To minimize dimensionality and improve classifi-
cation accuracy, feature selection is a crucial stage in image analysis. When
choosing which features to use, fuzzy feature selection approaches take into
account the features’ relevance, redundancy, and uncertainty. The level of
relevance and redundancy is represented by fuzzy sets, and the utility of
features is assessed using fuzzy measures. To choose the most informa-
tive features for upcoming processing and classification tasks, fuzzy feature
selection is helpful.
d. Fuzzy-based noise reduction: Fuzzy logic can be applied to noise reduction
in images. The uncertainty and imprecision of noise-affected visual
data can be modeled by fuzzy systems. Denoising techniques and fuzzy
filters can successfully reduce noise while retaining crucial image features.
These methods make adaptive adjustments to the filter parameters based on
the properties of the image and the noise levels. Applications like medical
image processing and the enhancement of blurry images benefit greatly
from fuzzy-based noise reduction techniques.
e. Fuzzy compression and reconstruction: Image compression and reconstruc-
tion can also make use of fuzzy-learning techniques. Fuzzy systems can
capture an image's perceptual properties and optimize the compression
process accordingly. Fuzzy-based compression techniques use fuzzy sets to
represent image attributes and exploit the redundancy and irrelevance in
the image data for effective compres-
sion. By using fuzzy inference and interpolation methods, fuzzy logic
can also help in the reconstruction of high-quality images from com-
pressed versions.
f. Handling uncertainty: The encoding and manipulation of uncertainty
in picture data are possible with fuzzy logic. The assignment of partial
memberships to various classes or concepts is possible with fuzzy logic,
as opposed to classical binary logic, which only accepts values of 0 or 1.
Uncertainty in image analysis might develop as a result of changing illu-
mination conditions, occlusions, or murky object borders. This uncertainty
can be modeled and reasoned using fuzzy learning approaches, allowing
for more flexible and reliable analysis and recognition.
g. Enhanced analysis and recognition: By capturing the intricate dependen-
cies and interactions between picture attributes and class labels, fuzzy
learning techniques can enhance analysis and recognition tasks.
Lunar Crater Detection 217
This is a method that entails developing a set of fuzzy rules that can dif-
ferentiate between crater and non-crater regions based on particular proper-
ties. These fuzzy learning methods for crater detection on the Moon have
the benefit of handling ambiguous and imprecise data, enabling more reli-
able and exact identification of craters on the lunar surface. Fuzzy logic and
learning algorithms enable these techniques to adapt to changing situations
and enhance performance over time. In general, the adaptability of fuzzy
learning approaches comes from their capacity to deal with ambiguity,
noise, and imprecision in visual data. Compared with conventional
approaches, these strategies offer more adaptable and flexible solutions
because they make use of fuzzy logic and fuzzy sets. This enables enhanced
image processing and feature extraction capabilities, resulting in more precise
and significant outcomes across a range of applications and domains [13, 14].
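Complementing the rule-based approach above, the fuzzy clustering mentioned earlier (extracting meaningful regions with partial memberships) can be sketched with a minimal one-dimensional fuzzy c-means implementation. The intensity values below are toy data, not real lunar measurements:

```python
def fuzzy_c_means(data, c=2, m=2.0, iters=50):
    """Minimal 1-D fuzzy c-means: returns cluster centers and memberships."""
    lo, hi = min(data), max(data)
    # Spread initial centers across the data range.
    centers = [lo + (hi - lo) * j / (c - 1) for j in range(c)]
    u = [[0.0] * c for _ in data]
    for _ in range(iters):
        # Membership update: closer centers receive higher partial membership.
        for i, x in enumerate(data):
            d = [abs(x - centers[j]) + 1e-9 for j in range(c)]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / d[k]) ** (2.0 / (m - 1.0)) for k in range(c))
        # Center update: membership-weighted mean of the data.
        for j in range(c):
            den = sum(u[i][j] ** m for i in range(len(data)))
            centers[j] = sum((u[i][j] ** m) * data[i] for i in range(len(data))) / den
    return centers, u

# Toy pixel-intensity values forming two obvious groups
# (e.g., shadowed crater interiors vs. surrounding terrain).
data = [1.0, 1.2, 0.9, 1.1, 8.0, 8.2, 7.9, 8.1]
centers, u = fuzzy_c_means(data)
print([round(cv, 2) for cv in centers])
```

Unlike hard clustering, each point keeps a graded membership in every cluster, which is what lets downstream steps reason about ambiguous boundary pixels.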
artifacts found in lunar photos. To enhance the quality of the input data,
preprocessing methods like noise reduction and picture enhancement are
frequently required.
c. Handling imbalanced data: Due to the relative rarity of lunar craters in
comparison to the surrounding lunar surface, datasets are skewed such
that positive crater samples are vastly outnumbered by negative samples.
The training and performance of fuzzy models may be impacted by this
class imbalance since these models may be biased in favor of the dominant
class. This problem can be solved by using strategies like oversampling,
undersampling, or class weighting, which increase the detection precision
for minority classes.
d. Generalizations to a new environment: Models for detecting lunar craters
must be able to generalize adequately to various lunar topographies and
lighting situations. However, due to variations in lighting angles, surface
textures, or illumination circumstances, the models developed using par-
ticular datasets may have trouble adjusting to other surroundings. Fuzzy-
model resilience and adaptation in varied lunar conditions thus remain a
challenge.
e. Design complexity: Designing fuzzy systems requires domain knowledge
and skill to specify proper membership functions and rules. The right
variables, the right number and form of fuzzy sets, and the right rule
base must all be decided upon when designing an efficient fuzzy model.
This procedure can be difficult since it necessitates a thorough comprehen-
sion of the domain of the problem and the capacity to faithfully portray the
underlying relationships.
f. Computational intensity: Fuzzy systems learning can be computationally
demanding, particularly when working with large datasets or intricate sys-
tems. Iterative methods are frequently used to optimize fuzzy models, and
these approaches consume a lot of processing power. Fuzzy-model training
and tuning can be time-consuming as a result; thus, it is important to bal-
ance model complexity and computing efficiency.
g. Overfitting and underfitting: Like other ML models, fuzzy models are
prone to both overfitting and underfitting. Overfitting occurs when a model
becomes too complicated and performs well on training data but poorly on
unseen data. Underfitting, on the other hand, occurs when the model is
too simple and fails to identify the underlying trends in the data.
To minimize these issues and get the best performance, it is essential to
strike a balance between the complexity of the fuzzy model and the quan-
tity of training data that is readily available.
but variance stays steady before growing noticeably. This finding emphasizes that
bias and variance both play a vital role and must be carefully balanced when increas-
ing model complexity to boost performance.
Also, when working with complicated data, such as imbalanced, high-dimen-
sional, and noisy data, typical ML techniques have drawbacks that ensemble
learning seeks to address: traditional approaches frequently perform poorly
because they cannot adequately capture the complex properties and underlying
structures of such data. In response, ensemble learning combines data fusion, data
modeling, and data mining into a single framework to build an effective knowledge
discovery and mining model. The extraction of a diverse range of features using
different transformations is the first step in the ensemble learning process. These
learned features are used as inputs for several learning algorithms, each of which
may individually produce only weak predictions. The main principle of ensemble
learning is to combine the outputs of these weak learners and to take advantage
of the valuable information they provide [24, 25]. This combination is fre-
quently accomplished using voting processes, in which each learner contributes their
respective forecast, and the final prediction is produced by adaptively aggregating
the individual predictions. The authors of this study review the common approaches
to ensemble learning and categorize them according to their various traits. They talk
about the developments in each strategy and point out problems that still need to be
solved [18]. The research also considers how ensemble learning might be combined
with other ML technologies like deep learning and reinforcement learning. These
combinations provide opportunities to improve the functionality and performance of
ensemble learning models as well as new directions for future research. In general,
ensemble learning provides a viable framework for knowledge exploration and min-
ing in scenarios involving complicated data. Ensemble learning can enhance predic-
tive performance and capture the complex patterns and structures present in the data
by utilizing the advantages of many learning algorithms and efficiently combining
their outputs. The chapter gives a thorough overview of ensemble learning, empha-
sizing its importance, the state of the art in research, and probable prospects for
developments in this area. The goal of ensemble learning is to seamlessly combine
various ML algorithms into a single framework, making use of the complementary
data from each approach to enhance the performance of the overall model. Its inher-
ent extensibility enables the blending of many ML models for various applications,
such as classification and clustering. It has a long research history and provides flex-
ibility when combining different models for different ML applications.
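The adaptive aggregation of individual predictions described above can be sketched as weighted soft voting; the validation accuracies used as weights are hypothetical numbers for illustration:

```python
def weighted_soft_vote(probs, weights):
    """Aggregate per-model probabilities, weighting each model adaptively."""
    return sum(p * w for p, w in zip(probs, weights)) / sum(weights)

# Hypothetical validation accuracies act as voting weights, so better
# models contribute more to the final decision.
val_acc = [0.90, 0.75, 0.60]
# Each model's predicted crater probability for the same candidate region.
probs = [0.8, 0.6, 0.3]

score = weighted_soft_vote(probs, val_acc)
label = 1 if score >= 0.5 else 0
print(round(score, 3), label)
```

Because the weights come from held-out performance, the aggregation adapts automatically as models are retrained or replaced, which is the "adaptive" element the survey highlights.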
and low-frequency points in image processing; and (3) displaying ambiguity and
imprecision in natural language through linguistic variables. To improve a variety of
applications, including classification, prediction, NLP, and autonomous control, deep
learning and fuzzy systems are combined. Fuzzy approaches come in a variety of
forms and are frequently used in conjunction with deep learning:
The combination of fuzzy systems and deep learning is effective for a variety of
reasons.
Overall, the combination of fuzzy systems and deep learning offers enhanced
performance and robustness in a variety of applications [26]. The flowchart below
illustrates the process of selecting features for the proposed fusion model, which
224 AI-Driven IoT Systems for Industry 4.0
integrates the fuzzy feature extraction and deep-learning architecture using DEM
images. Additionally, the model incorporates an IoT component to display the loca-
tion of the crater on the lunar surface. By combining fuzzy feature extraction, deep
learning architecture, and IoT technology, the proposed fusion model presents a
robust and efficient approach for accurate lunar crater detection and location visual-
ization. For better understanding, refer to Figure 12.1.
FIGURE 12.1 Flowchart of the proposed fusion model utilizing DEM image.
a. Data integration: Different kinds and forms of data are frequently needed
for deep learning and fuzzy learning. It can be difficult to combine these
various data sources and portray them coherently.
Python source code:

    # Calculate the average of a list of numbers
    def calculate_average(numbers):
        total = 0
        count = 0
        for num in numbers:
            total += num
            count += 1
        average = total / count
        return average

    numbers_list = [5, 10, 15, 20, 25]
    result = calculate_average(numbers_list)
    print("The average is:", result)

Pseudocode:

    # Calculate the average of a list of numbers
    Function calculate_average(numbers):
        Total = 0
        Count = 0
        For each number in numbers:
            Add the number to the total
            Increment count by 1
        Average = divide the total by count
        Return average
    Numbers_list = [5, 10, 15, 20, 25]
    Result = calculate_average(numbers_list)
    Display "The average is:", result
FIGURE 12.2 An example of Python source code and pseudocode written in English.
Continuous research and development into the merging of deep learning and
fuzzy learning are necessary to meet these challenges. By addressing them,
researchers can use the advantages of both methodologies to build more reli-
able and effective models for a variety of applications [27].
increase both approaches’ applicability and accuracy while also enhancing their
interpretability. Future developments and advancements in this area are very likely
with further research and development in this area. Following is a brief overview of
the developments in the merging of deep learning and fuzzy learning:
These developments show the possibility of merging deep learning and fuzzy sys-
tems to solve problems, enhance accuracy and interpretability, and broaden the scope
of both approaches’ possible applications. Future improvements and developments in
this fusion technique are probably going to come from further study and development.
For DEM image labeling, annotation tools should support a variety of annotation
forms, be compatible with a range of picture formats and data encoding, execute
consistently well, have an intuitive user interface, allow for customization, and have
time-saving features. Together, these characteristics support precise, effective, and
simplified annotation operations for DEM pictures. Annotating a high spatial resolu-
tion dataset for the preprocessing stage typically involves assigning labels or anno-
tations to specific features or regions of interest within the dataset. When it comes
to annotating DEM images, there are a few different approaches you can consider.
The choice between manual annotation, binary mask creation, or image augmen-
tation depends on the specific requirements of your application, the complexity of
the features you want to extract, and the availability of labeled data. Manual annota-
tion offers the highest level of accuracy but may be time-consuming and labor-inten-
sive. Binary masks can provide a more efficient way to focus on specific features,
while image augmentation can help increase dataset size and model robustness. We
discuss the binary mask types below for better understanding:
can mark and name various aspects and items of interest in the photographs, aiding
in the annotation process. Here are a few illustrations:
The tools for annotating DEM and ORTHO images created from TMC-2 photos
are only a few examples. The tool selected will rely on the precise criteria, compati-
bility with data formats, and degree of capability required for the annotation activity.
order to open a DEM image with an annotation tool. It should handle widely used file
types including GeoTIFF, ASCII Grid, USGS DEM (.dem), and other formats spe-
cialized to DEM data. Users should be able to input the DEM file into the program,
ensuring accurate georeferencing and elevation data visualization. For opening DEM
images, a variety of tools are available, each with a unique set of characteristics
and functionalities. For instance, users can convert the DEM file to a raster format
using a conversion tool provided by ArcMap Desktop 10.8. Options to input data
and choose the DEM file directly are available in ArcGIS Pro 3.0. The “Open Data
File” option in Global Mapper 22 enables users to open the DEM file, and QGIS 3.16
provides capabilities to create a raster layer by selecting the TIFF file. Similar to this,
popular formats like TIFF should be supported by annotation tools, when it comes to
opening ORTHO images. The tools should have capabilities for reliably opening and
displaying ORTHO pictures while preserving georeferencing data and guaranteeing
proper alignment with other spatial data layers. It is important to remember that the
selection of an annotation tool depends on the user's preferences, needs, and the
particular tasks at hand. ArcGIS, QGIS, Global Mapper, and numerous other GIS
software programs are some of the frequently used tools for opening and annotating
DEM and ORTHO pictures. These tools allow users to execute intricate geospatial
analysis, produce visualizations, and produce maps with annotated data, in addition
to just opening and annotating photos.
Table 12.1 briefly explains how to open DEM and ORTHO pictures using various
annotation tools. Each tool offers a different set of features and processes for
handling these spatial datasets. In conclusion, annotating DEM and
ORTHO pictures requires the usage of annotation tools since they allow users to see
and interpret these spatial datasets. The tools should support the file types frequently
used with DEM and ORTHO pictures, give precise georeferencing, and have user-
friendly annotation and analysis options. Depending on personal preferences and
the precise capabilities needed for the task at hand, different annotation tools may
be selected.
TABLE 12.1
Annotation Tool Required to Open DEM and ORTHO Images

ArcMap Desktop 10.8
  DEM image: Go to Conversion Tools > To Raster > DEM to Raster, select the DEM file
  ORTHO image: Use the Copy Raster tool, open the ORTHO image file

ArcGIS Pro 3.0
  DEM image: Use the Add Data option, select the TIFF file
  ORTHO image: Go to the Map ribbon, select Add Data, choose the TIFF file

Global Mapper 22
  DEM image: File > Open Data File, choose the DEM file
  ORTHO image: File > Open Data File, select the ORTHO image file

QGIS 3.16
  DEM image: Layer > Add Layer > Add Raster Layer, select the TIFF file
  ORTHO image: Layer > Add Layer > Add Raster Layer, select the ORTHO image file

CVAT
  DEM image: File > rasterio > Access properties > read the data
  ORTHO image: File > PIL > Access properties > read the data
Once the model is trained, we will evaluate its performance on the testing set,
focusing specifically on classifying craters and background on the lunar surface.
Afterwards, we will calculate relevant evaluation metrics such as accuracy, preci-
sion, and recall to assess the model's effectiveness and identify the best candidate
model for automatic lunar crater detection. In addition to the deep-learning model,
we will incorporate fuzzy extraction rules into the classification process. We identify
fuzzy variables and linguistic terms based on the problem domain and define mem-
bership functions for each term. Using the fuzzy logic framework, we can implement
fuzzy extraction rules to refine the model’s predictions. To test the model with new
data, we will have to preprocess the data similarly to the training set and use the
trained model to make predictions. Then we will be able to apply the fuzzy extrac-
tion rules to the model’s predictions and obtain final outputs that provide classifica-
tion results enhancement. We will evaluate the performance of the fuzzy extraction
model by comparing its metrics with those of the original deep-learning model.
Based on the results and insights gained, we will fine-tune the model, making
adjustments to the fuzzy extraction rules or the deep-learning model architecture
as needed.
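The evaluation metrics mentioned above (accuracy, precision, recall) can be computed directly from predicted and true labels; the label vectors below are illustrative only, not actual TMC-2 results:

```python
def evaluate(y_true, y_pred, positive=1):
    """Accuracy, precision, and recall for binary crater/background labels."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == positive and p == positive)
    fp = sum(1 for t, p in pairs if t != positive and p == positive)
    fn = sum(1 for t, p in pairs if t == positive and p != positive)
    accuracy = sum(1 for t, p in pairs if t == p) / len(pairs)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return accuracy, precision, recall

# Illustrative test-set labels: 1 = crater, 0 = background.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
acc, prec, rec = evaluate(y_true, y_pred)
print(acc, prec, rec)
```

Because craters are rare relative to background, precision and recall are more informative than accuracy alone when comparing the deep-learning model with its fuzzy-refined counterpart.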
Overall, this implementation of the Faster R-CNN-50 model with ImageNet as
the backbone, combined with fuzzy extraction rules, allows for accurate classifica-
tion of craters and background in the TMC-2 orbiter payload data. This approach
enhances the analysis of DEM and ORTHO images, contributing to a better under-
standing of lunar surface features. This approach is currently under implementation
and will play a vital role in categorizing uncataloged lunar crater datasets.
planetary surfaces. Fabricating the Raspberry Pi with LED sensors for real-time
data preprocessing typically requires several hardware and software integration steps.
The purpose of combining Raspberry Pi with LED sensors here is to enable real-
time data preprocessing and analysis. By capturing and processing data from the
LED sensors linked to the Raspberry Pi, it becomes possible to detect lunar craters
accurately. This fusion of technologies allows for a more advanced and automated
approach to studying the Moon’s geological past and gaining insights into planetary
surfaces.
Overall, the research presented in this chapter serves as a thorough reference for
researchers and practitioners interested in utilizing the potential of deep-learning
techniques, sensor technologies, and ensemble-learning strategies for lunar explora-
tion. In summary, by exploring these future directions, we can further advance the
field of lunar crater detection, enabling more accurate mapping of the Moon’s sur-
face, aiding in the selection of landing sites for future missions, and supporting our
understanding of lunar geology.
ACKNOWLEDGMENTS
The authors would like to acknowledge the financial support received by the second
author from ISRO CHANDRAYAAN-2 AO under the sponsored project entitled
“Machine learning and Deep learning Based Automatic Lunar Crater detection
using Terrain Mapping Camera-2 DEM images.” We thank SSPO, Indian Space
Research Organisation (ISRO), Bengaluru, for support and for help in using the data
obtained from TMC-2. We thank the ISRO team for inspiration and for financial
support for conducting this project. This chapter is an outcome of a collaborative
research project of the Principal Investigator and Research Fellow, and we thank the
Department of Computer Science and Remote Sensing, BIT Mesra, Ranchi, for
assistance.
REFERENCES
1. Haque, Intisar Rizwan I., and Jeremiah Neubert, “Deep learning approaches to bio-
medical image segmentation,” Informatics in Medicine Unlocked, 18 (2020): 100297.
2. Silburt, Ari, et al., “Lunar crater identification via deep learning,” Icarus, 317 (2019):
27–38.
3. Lin, Xuxin, et al., “Lunar crater detection on digital elevation model: A complete work-
flow using deep learning and its application,” Remote Sensing, 14.3 (2022): 621.
4. Price, Stanton R., Steven R. Price, and Derek T. Anderson, “Introducing fuzzy layers
for deep learning,” 2019 IEEE International Conference on Fuzzy Systems (FUZZ-
IEEE). IEEE, 2019.
5. Babuška, Robert, and Henk B. Verbruggen, “An overview of fuzzy modeling for con-
trol,” Control Engineering Practice, 4.11 (1996): 1593–1606.
6. Salleh, Mohd Najib Mohd, Noureen Talpur, and Kashif Hussain, “Adaptive neuro-
fuzzy inference system: Overview, strengths, limitations, and solutions,” Data Mining
and Big Data: Second International Conference, DMBD 2017, Fukuoka, Japan, July
27–August 1, 2017, Proceedings 2. Springer International Publishing, 2017.
7. Zhou, Huiyu, Yuan Yuan, and Chunmei Shi, “Object tracking using SIFT features and
mean shift,” Computer Vision and Image Understanding, 113.3 (2009): 345–352.
8. Ke, Yan, and Rahul Sukthankar, “PCA-SIFT: A more distinctive representation for
local image descriptors,” Proceedings of the 2004 IEEE Computer Society Conference
on Computer Vision and Pattern Recognition, 2004. CVPR 2004. Vol. 2. IEEE, 2004.
9. Asokan, A., and J. Anitha, “Change detection techniques for remote sensing applica-
tions: A survey,” Earth Science Informatics, 12 (2019): 143–160.
10. Kar, Samarjit, Sujit Das, and Pijush Kanti Ghosh, “Applications of neuro fuzzy sys-
tems: A brief review and future outline,” Applied Soft Computing, 15 (2014): 243–259.
11. Elmizadeh, Heeva, and Hadi Mahdipour, “Detection and monitoring of geomorphic
landforms in areas with shadow and cloud cover using remote sensing techniques and
fuzzy segmentation,” Advanced Applied Geology, 13.1 (2023): 72–89.
12. Mittal, Mohit, et al., “A neuro-fuzzy approach for intrusion detection in energy effi-
cient sensor routing,” 2019 4th International Conference on Internet of Things: Smart
Innovation and Usages (IoT-SIU). IEEE, 2019.
13. Amade, Benedict, and Cosmas Ifeanyi Nwakanma, “Identifying challenges of inter-
net of things on construction projects using fuzzy approach,” Journal of Engineering,
Project & Production Management 11.3 (2021): 215–227.
14. De Oliveira, José Valente, and Witold Pedrycz, eds., Advances in fuzzy clustering and
its applications. John Wiley & Sons, 2007.
15. Esogbue, Augustine, and John A. Murrell, “Advances in fuzzy adaptive control,”
Computers & Mathematics with Applications, 27.9–10 (1994): 29–35.
16. Serrano, Navid, and Homayoun Seraji, “Landing site selection using fuzzy rule-
based reasoning,” Proceedings 2007 IEEE International Conference on Robotics and
Automation. IEEE, 2007.
17. Dong, Xibin, et al., “A survey on ensemble learning,” Frontiers of Computer Science,
14 (2020): 241–258.
18. Krawczyk, Bartosz, et al., “Ensemble learning for data stream analysis: A survey,”
Information Fusion, 37 (2017): 132–156.
19. Guan, Donghai, et al., “A review of ensemble learning based feature selection,” IETE
Technical Review, 31.3 (2014): 190–198.
20. Zheng, Yuanhang, Zeshui Xu, and Xinxin Wang, “The fusion of deep learning and
fuzzy systems: A state-of-the-art survey,” IEEE Transactions on Fuzzy Systems, 30.8
(2021): 2783–2799.
21. Tian, D., and M. Gong, “A novel edge-weight based fuzzy clustering method for change
detection in SAR images,” Information Sciences, 467 (2018): 415–430.
22. Garcia-Jimenez, S., A. Jurio, M. Pagola, L. De Miguel, E. Barrenechea, and H. Bustince,
“Forest fire detection: A fuzzy system approach based on overlap indices,” Applied Soft
Computing, 52 (2017): 834–842.
23. Huang, Jianxi, et al., “Assimilation of remote sensing into crop growth models: Current
status and perspectives,” Agricultural and Forest Meteorology, 276 (2019): 107609.
24. Lu, Yunfan, et al., “Three-dimensional model of the Moon with semantic information
of craters based on Chang’e data,” Sensors, 21.3 (2021): 959.
25. Prashar, Ajay Kumar, et al., “PDS4 data archive for Chandrayaan-2 mission payloads
(TMC2, OHRC, and IIRS),” LPI Contributions, 2678 (2022): 1016.
26. https://www.isro.gov.in/chandrayaan2-payloads.html
27. Chowdhury, Arup Roy, et al., “Terrain mapping camera-2 onboard Chandrayaan-2
orbiter,” Current Science, 118.4 (2020): 566.
13 A Framework of
Intelligent Manufacturing
Process by Integrating
Various Function
T. Rajasanthosh Kumar, Laxmaiah G.,
and S. Solomon Raj
13.1 INTRODUCTION
Manufacturers in the modern era have to juggle a number of conflicting priorities in
order to keep their operations running smoothly, such as responding quickly to shop
floor faults, maintaining an optimal inventory level while striving for customized pro-
duction, and adapting their work schedules to accommodate fluctuations in product
availability and production needs [1, 2]. Though enterprise resource planning (ERP)
systems are helpful in facilitating communication between internal and external par-
ties, ERP by itself is insufficient since it focuses on higher-level concerns rather than
the shop floor [3–5]. The aforementioned difficulties may be overcome by integrating
data from the shop floor with information from enterprise systems like ERP to make
informed decisions that take into account the needs of both the shop and the company
as a whole [6]. Based on the production plan and the current shop floor scenario, the
manufacturing execution system (MES) was implemented to manage shop floor oper-
ations [7, 8]. When it comes to optimizing the whole manufacturing process, MES is
the system that does the heavy lifting from work order to final product [9]. Events like
quality issues and equipment breakdowns on the shop floor must be addressed immediately. Many modern manufacturing organizations find it difficult to handle manufacturing management operations using MES because there is a lack of infrastructure for data collection, integration of production data, and analysis across each MES capability [10]. Workers who rely on phone, email, and other electronic
forms of communication for cooperation must be well-versed in manufacturing man-
agement difficulties and their solutions. Many businesses still have trouble gathering
data from the factory floor, mostly because of restrictions on accessing the detailed
protocol of control equipment for free. This implies that sensor technologies, such as
Open Platform Communications Unified Architecture (OPC-UA) or MTConnect, are
required if one is to get usable data from the shop floor [11, 12].
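The protocol heterogeneity described above is usually handled by an adapter layer that maps each source protocol onto one common record schema. A minimal sketch in Python, with hypothetical adapter classes and field names (the chapter does not prescribe a schema):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Common record every adapter must produce, regardless of source protocol.
@dataclass
class ShopFloorReading:
    machine_id: str
    metric: str      # e.g. "spindle_temp_c"
    value: float
    timestamp: str   # ISO 8601, UTC

class OpcUaAdapter:
    """Hypothetical adapter: maps an OPC-UA style (node_id, value) pair."""
    def __init__(self, node_map):
        self.node_map = node_map  # node_id -> (machine_id, metric)

    def normalize(self, node_id, value):
        machine_id, metric = self.node_map[node_id]
        ts = datetime.now(timezone.utc).isoformat()
        return ShopFloorReading(machine_id, metric, float(value), ts)

class MtConnectAdapter:
    """Hypothetical adapter: maps an MTConnect-style sample dict."""
    def normalize(self, sample):
        return ShopFloorReading(
            machine_id=sample["deviceName"],
            metric=sample["dataItemId"],
            value=float(sample["value"]),
            timestamp=sample["timestamp"],
        )

opc = OpcUaAdapter({"ns=2;i=1001": ("press-01", "hydraulic_pressure_bar")})
r1 = opc.normalize("ns=2;i=1001", 142.5)

mtc = MtConnectAdapter()
r2 = mtc.normalize({"deviceName": "lathe-03", "dataItemId": "spindle_temp_c",
                    "value": "61.2", "timestamp": "2024-01-01T00:00:00Z"})
```

In a real deployment each adapter would wrap an actual OPC-UA or MTConnect client; here the `normalize` methods take already-received values so that only the mapping logic is shown.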
To find a solution, we first performed a literature review on ways to enhance MES systems. AUTO-ID and sensor network technologies are often used in data-collection studies, and radio-frequency identification (RFID)-based data collection has been examined widely. From the literature and field observations, we identified the following problems (PRs):
• [PR #1] Workers must manually enter machine controller data into the MES or the database (DB). The resulting time lag makes the data less useful.
• [PR #2] Sensors collect shop floor data. However, there are several sen-
sor communication standards and the PLC (programmable logic controller)
supports multiple sensor kinds depending on suppliers, making data inte-
gration difficult. It prevents data optimization.
• [PR #3] Some organizations invest in controller protocols to collect data
from facilities; however, diverse controller protocols make data integration
difficult. It also prevents data optimization.
• [PR #4] Once the production schedule is set, it is hard to adjust for stake-
holder needs or shop floor conditions. If anything occurs, MES operators and
shop floor personnel have to communicate, which takes time to reschedule.
• [PR #5] MES operators have trouble tracking WIP (work in process) in real time. When pieces move the wrong way, a manual search is performed, which takes time.
• [PR #6] Due to resource conditions and manufacturing operating schedule,
scheduling preventive maintenance is difficult.
• [PR #7] Identifying machine failures is difficult. Troubleshooting relies on employees’ tacit knowledge; without expertise and insight, problem-solving takes a long time and varies from worker to worker.
• [PR #8] Many manufacturing businesses process information using facility on/off checks or the x̄-R control chart. In other circumstances, the difficulty is generated by the combination of several variables, making basic data processing insufficient; worker understanding is key.
• [PR #9] It is difficult to identify product quality issues, their causes, and
their onset. Manual inspection relies on employees’ knowledge and competence; therefore, it might vary.
• [PR #10] Estimating production material consumption is scarcely con-
nected to SCM (supply chain management); thus, demand forecast depends
on manager expertise, and feedback work is mostly done by phone, which
increases communication load and reduces work efficiency.
• [SC #3] Uses for the intelligent MES system: Modules of Smart MES
capability analyze data to determine what actions to take. In addition, all
departments may coordinate their efforts to boost output on the factory
floor. Whenever new features are needed in MES, we can easily configure
and reflect them by modeling them as apps, so that the system is always up-
to-date. Overall manufacturing operation management may be improved
with the help of data synchronization and a module to facilitate collabora-
tive application development [12, 25, 26].
Figure 13.1 depicts the overall traceability across PR, DC, and SC, and our
reference architecture is based on the aforementioned system idea and design
considerations.
ERP unifies the many incompatible communication protocols used in manufac-
turing. Figure 13.2 depicts the many shop floor communication options available to
workers. In order to pool information from many channels of communication, some
method of integrating these channels is required. Data from the corporate informa-
tion system and data generated by the Smart MES application are both saved and
converted into the enterprise information database. It serves a dual purpose: if none
of them can immediately process the message, it may act as a temporary storage
space until they can. It has the ability to convert data for use in both systems [26, 27].
These days, most databases also have a data transformation tool. Since most busi-
ness systems are built separately, facilitating communication between them requires
extra effort and a tool called an enterprise service bus. Information gathered on the
factory floor must be stored and merged into a single database for effective analysis.
Transforming often occurs in a separate module from the one responsible for storing
data. However, for the speed of computation, methods have been developed to enable
the processing of data directly in the repository. After data has been stored and
integrated, it will be sent to the data analysis engine, where it will be processed in
order to extract actionable insights. The analytical procedure as a whole is shown in
Figure 13.3. The model builder decides what kinds of information are sent. A model builder constructs an analysis model and defines its output by coupling the data mining and pre-processing approaches most suitable to the analysis problem at hand with the input parameters. The pre-processing module provides pre-processing methods, whereas the data mining module provides data mining techniques. The behavior
of shop floor data might be affected by the use of different quality, manufacturing
processes, and WIP (work in process) monitoring criteria for new products [28]. In
addition, they may be modified as needed in light of the addition of additional sen-
sors or infrastructure. Modelers may fix this by making new or revised models.
Afterwards, we create a visualization and report.
Data analysis results are processed by the module so that the user may interpret
them. Every user may have access to any visual content or report by uploading it to
a web interface.
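The model-builder idea above, coupling a pre-processing method with a data-mining method into one executable analysis model, can be sketched as follows. The scaling and threshold functions are illustrative stand-ins, not the chapter's actual modules:

```python
# Minimal sketch of the model builder: a pre-processing step is coupled
# with a data-mining (analysis) step to form one executable analysis model.
# All function and class names here are illustrative.

def minmax_scale(values):
    """Pre-processing module output: scale readings into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def threshold_classifier(values, cutoff=0.8):
    """Data-mining module output: flag scaled readings above the cutoff."""
    return ["anomaly" if v > cutoff else "normal" for v in values]

class AnalysisModel:
    """Couples one pre-processing method with one mining method."""
    def __init__(self, preprocess, mine):
        self.preprocess = preprocess
        self.mine = mine

    def run(self, raw_values):
        return self.mine(self.preprocess(raw_values))

model = AnalysisModel(minmax_scale, threshold_classifier)
labels = model.run([10.0, 11.0, 10.5, 30.0])  # last reading is an outlier
```

A modeler revising a model, as the chapter describes, would amount to swapping in a different pre-processing or mining function without touching the rest of the pipeline.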
• Event processing: This module receives the results of data analysis. Each
analytical outcome is treated as a discrete occurrence. The event filter mon-
itors every activity and keeps track of whether or not it represents atypical
behavior. When this happens, event abstraction takes control. The filtered
event is then abstracted into a form that Smart MES applications may use.
The application’s collaboration manager receives the abstracted event via
the application interface. Filtering rules and abstraction rules are defined in
the rule repository. Figure 13.4 depicts the format for event data.
• The MES management module and the Smart MES application: Each sub-
system in this module makes decisions about what to do on the shop floor
and what data to send up to the corporate IT based on the results of data
analyses. Enterprise information systems may be used for a variety of pur-
poses, including the thorough scheduling of operations.
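The event filter and abstraction chain described above can be sketched as follows. The rule shapes, metric names, and event fields are hypothetical; Figure 13.4's exact event format is not reproduced here:

```python
# Sketch of the event filter / event abstraction chain. Rules would live in
# the rule repository; here they are plain dictionaries for illustration.

FILTER_RULES = {
    # metric -> (low, high): outside this band the event is atypical
    "weld_current_ka": (7.0, 9.5),
}

ABSTRACTION_RULES = {
    "weld_current_ka": ("QUALITY_ALERT", "Spot-weld current out of range"),
}

def filter_event(event):
    """Return True only for atypical events, which pass on to abstraction."""
    band = FILTER_RULES.get(event["metric"])
    if band is None:
        return False
    low, high = band
    return not (low <= event["value"] <= high)

def abstract_event(event):
    """Abstract a filtered event into a form a Smart MES app can consume."""
    kind, description = ABSTRACTION_RULES[event["metric"]]
    return {"type": kind, "machine": event["machine_id"],
            "detail": description, "raw_value": event["value"]}

raw = {"machine_id": "robot-7", "metric": "weld_current_ka", "value": 10.2}
alerts = [abstract_event(e) for e in [raw] if filter_event(e)]
```

The abstracted dictionary stands in for the event handed to the application collaboration manager via the application interface.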
The user may control and monitor the application from their own device, such as
a smartphone. Only approved users are permitted. Data Sync. Manager is in charge
of keeping everything in sync. Application cooperation manager is responsible for
the initial event reception, application cooperation, and data synchronization. Both
of these will be discussed in further detail in the next section. There are three steps
that must be taken whenever a program is updated or a new one is installed. (1) Any data characteristics used by the new application that were not included in the old master data, or that exist under several names, are registered in the schema master data. We place a higher value on adaptability than on speed and design simplicity, even though it is simple to set up an application so that it adheres to a pre-defined master data model via a unified data model. (2) The collaboration information of the newly installed application should be registered with the application collaboration manager. (3) New or altered data properties should be included in the synchronization process.
Updates are processed after conflict resolution, according to the rules laid down in the rule repository. The sequence diagram in Figure 13.6 shows the flow of data: an update in application 1 propagates to application 2 and so on, and is then sent to the third application after checking for update conflicts. It covers two cases: (1) no conflict exists and (2) a conflict exists. Message
interface, message queue, message content extraction module, message generator,
and rule-based engine for facilitating collaboration are all parts of the application
collaboration manager. When an application determines that it needs to communicate with other applications, it generates a message and transmits it to the application collaboration manager, which chooses where to forward the message. The message interface receives messages sent to an application, processes them, and returns them for dispatch. The message queue serves as intermediate storage.
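The update-conflict check in the synchronization flow (the no-conflict and conflict cases) can be sketched as below. Version-based detection is an assumption for illustration; the chapter does not specify the resolution policy:

```python
# Sketch of the update-conflict check in the data synchronization flow.
# Each key carries a version; an update made against a stale version is
# reported as a conflict instead of being applied.

class SyncManager:
    def __init__(self):
        self.store = {}  # key -> (version, value)

    def apply_update(self, key, base_version, value):
        """Apply an update made against base_version.

        Returns ("ok", new_version) or ("conflict", current_version).
        """
        current_version, _ = self.store.get(key, (0, None))
        if base_version != current_version:
            return ("conflict", current_version)  # another app updated first
        new_version = current_version + 1
        self.store[key] = (new_version, value)
        return ("ok", new_version)

sync = SyncManager()
ok1 = sync.apply_update("wip-42.location", 0, "cell-A")    # app 1 updates
ok2 = sync.apply_update("wip-42.location", 1, "cell-B")    # app 2 updates
clash = sync.apply_update("wip-42.location", 1, "cell-C")  # stale app 3
```

The first two calls correspond to the conflict-free case in Figure 13.6; the third shows a conflict being detected before propagation to the next application.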
vehicle door. Robots often perform spot welding. However, there are situations when a WIP (work in process) part is not welded in the appropriate location. Incorrect placement of the jig or spot-welding robot is occasionally to blame. However, in this operational
context, the issue is likely the result of a facility failure, such as a worn bearing or
a faulty servo driver.
A discussion with an IT engineer revealed that the majority of the shop floor
is wired into the MES using a 1:n architecture. MES also has 1:n connections to
enterprise systems like ERP. This stems from the many protocols, interfaces, and
data formats in use today. In Figure 13.8, the AS-IS process depicts the current state
of the problem. The MES system relies on human labor, with quality inspection and
documentation being highly dependent on the skills of the workers. It takes time for
judgment to be made since there are so many factors to consider, even if employees
receive some insight from gathered shop floor data like the current value. If the spot-
welding robot is broken and requires fixing immediately, for example, because of a
faulty servo driver, a repair technician should be called in right away. Now, we will
go through the system’s benefits and the new, better situation shown in Figure 13.7.
The prototype’s implementation architecture for the operating scenario is shown in
Figure 13.8. After data collection, data mining is used to categorize quality defects.
ECMiner1, a data mining program, was employed. In this case, we used a method
called support vector machine (SVM).
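ECMiner is a commercial tool, so, as an illustration only, the same SVM-based defect classification can be sketched with scikit-learn's SVC. The weld features and labels below are hypothetical, not the case study's data:

```python
# Illustrative stand-in for the case study's ECMiner/SVM step, using
# scikit-learn. Feature values and labels are invented for the sketch.
from sklearn.svm import SVC

# Each row: [weld current (kA), electrode force (kN)]; 0 = good, 1 = defect.
X = [[8.0, 2.5], [8.2, 2.6], [8.1, 2.4], [8.3, 2.5],   # good welds
     [6.0, 1.2], [5.8, 1.1], [6.2, 1.3], [5.9, 1.0]]   # defective welds
y = [0, 0, 0, 0, 1, 1, 1, 1]

clf = SVC(kernel="linear")  # hyperplane in the 2-D feature space
clf.fit(X, y)

pred = clf.predict([[8.1, 2.5], [6.0, 1.1]])
```

A linear kernel builds a separating hyperplane in the two-dimensional feature space; the actual feature set and kernel used in the case study are not specified in the chapter.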
In the context of data analysis and machine learning, the term “classification” refers to the process of categorizing or assigning data points to predefined classes. SVMs can perform classification tasks by constructing hyperplanes in an n-dimensional vector space, where n represents the number of input variables. Figure 13.7
depicts the interface of ECMiner. Following a thorough analysis, event processing is undertaken in order to make informed judgments about whether there is an abnormal condition that requires action.
TABLE 13.1
Smart MES Advantages over Existing MES

Task: Inspection report
  Current MES: The inspector manually writes the result to the MES database.
  Smart MES: The result is written automatically, and the event is recorded.
  Advantages: The Smart MES module automatically records the work, so the MES can detect quality issues more quickly.

Task: Find the issue
  Current MES: The inspector uses product knowledge and graphed data to determine the WIP issue.
  Smart MES: The inspector does not need to rely on skill alone; data analysis assists the diagnosis.
  Advantages: A non-skilled inspector can tackle WIP defects in real time, and detection time is reduced.

Task: Inform maintenance
  Current MES: The MES operator must contact maintenance to fix the machine; maintainers often refuse to alter the schedule unless the facility failure is urgent.
  Smart MES: The quality management service can notify the maintenance management service through an application.
  Advantages: The maintenance management service is notified automatically.
13.5 CONCLUSION
In this study, we highlighted the issue with today’s MES systems by explaining how they lack an environment for analyzing and understanding shop floor data in real time. Although several studies have addressed parts of this problem, they neglected coordination between the different MES capabilities. For these reasons, the Smart MES system is specified, together with the requirements and design considerations for its architecture. An application case for Smart MES in an operational setting is presented to demonstrate the viability and value of intelligent MES over the MES of today. Using a case study of an actual operation, we demonstrated that Smart MES is capable of fully and efficiently handling the current situation on the factory floor. This is only one possibility, and there is a plethora of derived scenarios. The research and methods presented in this chapter may be used as a foundation upon which to build. Further study requires a data model to be established and applied to the Smart MES architecture to guarantee the visibility of information exchange between all of its parts. Developing the Smart MES data model allows its operations to be elaborated in more depth, since the data model reflects both the physical concept and the data. In addition, the Smart MES is a large system and therefore takes time to put into action; a fully functional Smart MES system is now under development.
REFERENCES
1. Tian, G. Y., Yin, G., and Taylor, D. “Internet-based manufacturing: A review and a
new infrastructure for distributed intelligent manufacturing.” Journal of Intelligent
Manufacturing 13, (2002): 323–338.
2. Zhang, Luyao, Feng, Lijie, Wang, Jinfeng, and Lin, Kuo-Yi. “Integration of design,
manufacturing, and service based on digital twin to realize intelligent manufacturing.”
Machines 10, no. 4 (2022): 275.
3. Gu, Ai, Yin, Zhenyu, Fan, Chao, and Xu, Fulong. “Safety framework based on block-
chain for intelligent manufacturing cyber physical system.” In 2019 1st International
Conference on Industrial Artificial Intelligence (IAI), pp. 1–5. IEEE, 2019.
4. Hung, Min-Hsiung, Lin, Yu-Chuan, Hsiao, Hung-Chang, Chen, Chao-Chun, Lai, Kuan-
Chou, Hsieh, Yu-Ming, and Tieng, Hao et al. “A novel implementation framework of
digital twins for intelligent manufacturing based on container technology and cloud
manufacturing services.” IEEE Transactions on Automation Science and Engineering
19, no. 3 (2022): 1614–1630.
5. Oztemel, Ercan. “Intelligent manufacturing systems.” In Artificial intelligence tech-
niques for networked manufacturing enterprises management, pp. 1–41. London:
Springer London, 2010.
6. Wang, Zilin, Cui, Lizhen, Guo, Wei, Zhao, Lei, Yuan, Xin, Gu, Xiaosong, Tang,
Weizhong, Bu, Lingguo, and Huang, Weiming. “A design method for an intelligent
manufacturing and service system for rehabilitation assistive devices and special
groups.” Advanced Engineering Informatics 51, (2022): 101504.
7. Li, Ruiqi, Wei, Sha, and Li, Jia. “Study on the application framework and standardiza-
tion demands of AI in intelligent manufacturing.” In 2019 International Conference
on Artificial Intelligence and Advanced Manufacturing (AIAM), pp. 604–607. IEEE,
2019.
8. Trappey, Amy JC, Trappey, Charles V., Chao, Min-Hua, and Wu, Chun-Ting.
“VR-enabled engineering consultation chatbot for integrated and intelligent manufac-
turing services.” Journal of Industrial Information Integration 26 (2022): 100331.
9. Zhou, Ji, Zhou, Yanhong, Wang, Baicun, and Zang, Jiyuan. “Human–cyber–physical systems (HCPSs) in the context of new-generation intelligent manufacturing.”
Engineering 5, no. 4 (2019): 624–636.
10. Zhou, Dongdong, Xu, Ke, Lv, Zhimin, Yang, Jianhong, Li, Min, He, Fei, and Xu, Gang.
“Intelligent manufacturing technology in the steel industry of China: a review.” Sensors
22, no. 21 (2022): 8194.
11. Devedzic, V., and Radovic, D. “A framework for building intelligent manufacturing
systems.” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications
and Reviews) 29, no. 3 (1999): 422–439.
12. Liu, Zhi-feng, Zhang, Yue-ze, Yang, Cong-bin, Huang, Zu-guang, Zhang, Cai-xia, and
Xie, Fu-gui. “Generalized distributed four-domain digital twin system for intelligent
manufacturing.” Journal of Central South University 29, no. 1 (2022): 209–225.
13. Annanth, V. Kishorre, Abinash, M., and Rao, Lokavarapu Bhaskara. “Intelligent manufacturing in the context of Industry 4.0: A case study of Siemens industry.” Journal of
Physics: Conference Series 1969, no. 1 (2021): 012019. IOP Publishing.
14. Sharma, P., and Bhargava, M. “Designing, implementation, evolution and execution
of an intelligent manufacturing system.” International Journal of Recent Advances in
Mechanical Engineering 3, no. 3 (2014): 159–167.
15. Aljarrah, O., Li, J., Heryudono, A., Huang, W., and Bi, J. “Predicting part distortion
field in additive manufacturing: A data-driven framework.” Journal of Intelligent
Manufacturing 34, no. 4 (2023): 1975–1993.
16. Zhang, Zhongyu, Zhu, Zhenjie, Zhang, Jinsheng, and Wang, Jingkun. “Construction of intelligent integrated model framework for the workshop manufacturing system via digital twin.” The International Journal of Advanced Manufacturing Technology 118, (2022): 3119–3132.
17. Guo, Qing lin, and Zhang, Ming. “Multiagent-based scheduling optimization for intel-
ligent manufacturing system.” The International Journal of Advanced Manufacturing
Technology 44, (2009): 595–605.
18. Kamble, Sachin S., Gunasekaran, Angappa, Parekh, Harsh, Mani, Venkatesh, Belhadi,
Amine, and Sharma, Rohit. “Digital twin for sustainable manufacturing supply chains:
Current trends, future perspectives, and an implementation framework.” Technological
Forecasting and Social Change 176, (2022): 121448.
19. Shi, Miaoyuan. “Knowledge graph question and answer system for mechanical intel-
ligent manufacturing based on deep learning.” Mathematical Problems in Engineering
2021, (2021): 1–8.
20. Marques, Maria, Agostinho, Carlos, Zacharewicz, Gregory, and Jardim-Gonçalves,
Ricardo. “Decentralized decision support for intelligent manufacturing in industry
4.0.” Journal of Ambient Intelligence and Smart Environments 9, no. 3 (2017 Jan 1):
299–313.
21. Guo, W., Wang, Y., Chen, X., and Jiang, P. “Federated transfer learning for auxiliary classifier generative adversarial networks: Framework and industrial application.” Journal of Intelligent Manufacturing 35 (2024): 1439–1454.
22. Liu, Yefeng, Zhao, Yuan, Tao, Lin, Zhao, Kexue, and Li, Kangju. “The application
of digital flexible intelligent manufacturing system in machine manufacturing indus-
try.” In 2018 IEEE 8th Annual International Conference on CYBER Technology in
Automation, Control, and Intelligent Systems (CYBER), pp. 664–668. IEEE, 2018.
23. Lin, Yu-Ju, Wei, Shih-Hsuan, and Huang, Chin-Yin. “Intelligent manufacturing control
systems: The core of smart factory.” Procedia Manufacturing 39, (2019): 389–397.
24. Padovano, Antonio, Longo, Francesco, Nicoletti, Letizia, Gazzaneo, Lucia, Chiurco,
Alessandro, and Talarico, Simone. “A prescriptive maintenance system for intelligent
production planning and control in a smart cyber-physical production line.” Procedia
CIRP 104, (2021): 1819–1824.
25. Romero, David, Jardim-Goncalves, Ricardo, and Grilo, Antonio. “Factories of the
future: Challenges and leading innovations in intelligent manufacturing.” International
Journal of Computer Integrated Manufacturing 30, no. 1 (2017): 1–3.
26. Chen, Gaige, Wang, Pei, Feng, Bo, Li, Yihui, and Liu, D. “The framework design of
smart factory in discrete manufacturing industry based on cyber-physical system.”
International Journal of Computer Integrated Manufacturing 33, no. 1 (2020 Jan 2):
79–101.
27. Anton, Florin, Borangiu, Theodor, Răileanu, Silviu, and Anton, Silvia. “Cloud-based digital twin for robot integration in intelligent manufacturing systems.” In Advances in service and industrial robotics: Results of RAAD, pp. 565–573. Springer International Publishing, 2020.
28. Sulhi, Ahmad. “Data mining technology used in an internet of things-based decision
support system for information processing intelligent manufacturing.” International
Journal of Informatics and Information Systems 4, no. 3 (2021): 168–179.
14 Adaptive Supply
Chain Integration
in Smart Factories
Deepak Mathivathanan and
Sivakumar Kirubanandan
14.1 INTRODUCTION
The advent of Industry 4.0 has facilitated a transformative shift in manufacturing by changing the way products are designed, produced, and delivered, leveraging digital technologies to create intelligent, interconnected, and adaptive manufacturing ecosystems [1]. Advances in the Internet of Things (IoT), big data analytics,
artificial intelligence (AI), and cyber-physical systems (CPS) enable manufacturers
to achieve increased efficiency, flexibility, and productivity and deliver custom-
ized products and services. Smart factories are the heart of this transformation
that drives innovation and efficiency through advanced technologies, data ana-
lytics, and automation [2, 3]. Smart factories are a step towards next-generation
manufacturing and thus represent a paradigm shift from traditional manufacturing
approaches by introducing a higher level of flexibility, agility, and intelligence
in the production process [4]. Smart factories embody integrated cyber-physical
systems, IoT devices, AI, and data analytics creating interconnected and intelli-
gent production systems enabled by real-time monitoring, predictive capabilities,
and autonomous decision-making, leading to increased productivity at reduced
costs, improved quality, and enhanced sustainability [5]. In addition to production,
with real-time data, AI-driven analytics, and automation, smart factories assist in
delivering goods faster to market, improve the operational efficiency, and enhance
customer experiences [6]. Furthermore, smart factories enable real-time monitor-
ing and predictive maintenance, minimizing downtime and optimizing equipment
performance to facilitate data-driven decision-making, enhancing supply chain
visibility and responsiveness, efficient inventory management, and reducing wastage [7]. Furthermore, smart factories drive the implementation of lean and agile manufacturing principles, fostering continuous improvement and increased adaptability through supply chain integration (SCI) [8].
SCI plays a vital role in enabling seamless coordination and collaboration across
the entire supply chain ecosystem, fostering efficient and synchronized operations in
a smart factory scenario [9, 10]. Real-time visibility into various stages of the pro-
duction process, including inventory levels, production status, and customer demand,
enables accurate forecasting, demand planning, and inventory management, leading
1. Role of IoT: IoT forms the backbone of smart factory operations through
connecting devices, sensors, and machines throughout the manufacturing
environment [40]. Here is how IoT enables smart factory operations:
a. Data collection and real-time monitoring: IoT sensors embedded in
machinery and equipment in smart factories constantly collect data
on several control parameters, such as temperature, pressure, humid-
ity, and performance metrics [41]. IoT-enabled real-time data collection
hence enables factories to implement continuous and effective moni-
toring systems for equipment health, energy consumption, and overall
production efficiency [42].
b. Connectivity and communication: IoT facilitates connected devices to
seamlessly communicate between machines to enable the exchange of
information and data across the factory floor [43]. The enhanced con-
nectivity offered by IoT thus enables real-time collaboration, synchro-
nization, and coordination of production processes, leading to better
efficiency and responsiveness [44].
c. Predictive maintenance: IoT-enabled sensors monitor equipment con-
ditions and performance to provide valuable insights into maintenance
needs [45]. Predictive maintenance algorithms hence can effectively
analyse data patterns and generate alerts or predictions regarding poten-
tial failures and other maintenance requirements [46]. This minimizes
unplanned downtime, optimizes maintenance schedules, and improves
overall equipment reliability.
d. SCI: IoT connectivity extends beyond the factory walls and is key in
enabling integration with suppliers, customers, and logistics providers
[47]. Real-time data sharing and collaboration assist in streamlining
supply chain operations to ensure timely delivery and accurately man-
age inventory, and respond to changing demand effectively.
2. AI: Advanced AI technologies, such as ML, deep learning (DL), and cogni-
tive computing along with IoT capabilities, enable effective processing and
analysis of huge volumes of real-time data generated by IoT devices [48].
Here is how AI enables smart factory operations:
a. Data analysis and insights: AI algorithms help in analysing large data-
sets collected by IoT devices to read patterns, anomalies, and correla-
tions providing valuable insights for improving production efficiency,
quality control, and decision-making [49]. AI-powered analytics enable
high-precision predictive and prescriptive capabilities, enhancing effi-
cient production planning, demand forecasting, and inventory manage-
ment [50].
b. Autonomous decision-making: Machines and robots within smart
factories integrated with AI can make autonomous decisions based
on real-time data and predefined rules [10]. AI-powered systems can
optimize production parameters, adjust schedules, and dynamically
respond to changes with minimum human interventions and maximum
efficiency [51].
c. Robotics and automation: AI-driven robots play a crucial role in
automating various tasks within smart factories. Advanced robot-
ics supported by AI algorithms can perform complex operations and
collaborate with human workers in a safe and efficient manner [52].
AI-enabled robots learn from experience (past data), improve their
performance, and enable them to handle future diverse manufacturing
processes [53].
d. Cognitive systems and natural language processing (NLP): AI-powered
cognitive systems improve human-machine interactions within smart
factories [50]. NLP models enable voice- or text-based communication
that allows operators to interact with machines and systems intuitively
[54]. Thus, AI-enabled capability enhances operational efficiency and
simplifies training to facilitate better decision-making.
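As one deliberately simple stand-in for the demand-forecasting capability described above, single exponential smoothing over a weekly demand series can be sketched as follows; the smoothing factor and demand figures are illustrative only:

```python
# Sketch of AI-assisted demand forecasting via single exponential smoothing.
# Production systems would use richer ML models; numbers are invented.

def exp_smooth_forecast(series, alpha=0.5):
    """Return the one-step-ahead forecast for `series`."""
    level = series[0]
    for obs in series[1:]:
        # New level blends the latest observation with the running estimate.
        level = alpha * obs + (1 - alpha) * level
    return level

weekly_demand = [100, 104, 98, 110, 107, 112]
forecast = exp_smooth_forecast(weekly_demand)  # forecast for next week
```

A larger alpha makes the forecast track recent demand swings more closely, which is the trade-off a planner would tune against demand volatility.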
IoT and AI technologies play a pivotal role in addressing these challenges and
facilitating seamless SCI in smart factories. IoT enables real-time data collection,
connectivity, and visibility. Real-time data capture and transmission enhances sup-
ply chain visibility and better tracking of products and services. Customer data on
purchase behaviours, usage patterns, and preferences enable accurate forecasting.
Similarly, AI processes and analyses the gathered data to derive valuable insights,
enable predictive capabilities, and enhance decision-making. Although SCI has
these potential benefits, analysing the barriers and challenges against successful SCI
requires further research in developing holistic approaches and strategic road maps
for successful integration.
Thus, adaptive supply chains (ASCs) are complex multistructural systems that have the capability to adapt to changing markets, operating environments, and internal changes in the supply chain by leveraging information technology capabilities like IoT and AI. The
ASC strategy adopted from [75] is presented in Figure 14.4. The ASCs are driven by
three value chain drivers that consist of the main elements of traditional supply chain
management along with the elements of agility and sustainability.
ASCs are highly relevant in Industry 4.0 and smart factories due to the current rap-
idly changing market landscape characterized by shorter product lifecycles, volatile
customer demands, and increased product customization [76]. Sensing and respond-
ing to the dynamic market conditions promptly involve leveraging real-time data,
predictive analytics, and agile practices to adjust production schedules, optimize
inventory levels, and align supply with demand, which are the core characteristics of
ASCs. The goal tree of adaptive supply chain management (ASCM) presented in Figure 14.5 articulates the hierarchy of goals and objectives within specific domains of the supply chain.
By leveraging the combination of AI and IoT technologies, smart factories can
benefit from ASCM. The AI-driven IoT systems in smart factories provide real-time
data insights, enable accurate demand forecasting, optimize supply chain opera-
tions, enhance risk management, and facilitate autonomous decision-making [77].
The result is a responsive and agile supply chain that can quickly adapt to chang-
ing conditions, optimize resources, and deliver superior operational performance.
trend in ASC integration [57]. Smart factories integrate their systems with
suppliers, customers, logistics providers, and other stakeholders to share
real-time data, coordinate activities, and optimize operations while collab-
orative platforms enable end-to-end visibility, synchronized planning, and
efficient decision-making that enhance SCI.
Sustainability and circular economy: Research on sustainable practices and the circular economy has delved into ASC integration [85]. Smart factories increasingly focus on reducing waste, optimizing energy usage, and adopting eco-friendly processes. Supply chains embrace closed-loop systems, recycling, and product life extension strategies by adopting circular economy concepts. Incorporating sustainability considerations into supply chain planning and decision-making enables greener operations, enhances brand reputation, and meets evolving environmental regulations.
Predictive maintenance and asset optimization: AI-driven predictive maintenance and asset optimization techniques, supported by analytics and IoT sensors, enable real-time monitoring of equipment health, performance, and maintenance needs [49]. Predictive maintenance algorithms that analyse data to predict potential failures, optimize maintenance schedules, and minimize downtime are therefore an emerging trend.
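A minimal sketch of the predictive-maintenance idea above: flag equipment when the latest sensor reading drifts well above the statistics of a recent healthy window. The three-sigma rule, the temperature values, and the function name are assumptions for illustration:

```python
# Sketch of a simple predictive-maintenance check: flag a bearing when its
# temperature exceeds the mean of a recent window by k standard deviations.
from statistics import mean, stdev

def needs_maintenance(window, latest, k=3.0):
    """True if `latest` exceeds the window mean by more than k std devs."""
    mu, sigma = mean(window), stdev(window)
    return latest > mu + k * sigma

# Recent bearing temperatures (deg C) under normal operation.
healthy_window = [60.1, 60.4, 59.8, 60.0, 60.3, 59.9, 60.2, 60.1]

ok = needs_maintenance(healthy_window, 60.5)      # small fluctuation -> False
alarm = needs_maintenance(healthy_window, 72.0)   # sharp rise -> True
```

Real predictive-maintenance systems replace this threshold rule with learned models over vibration, current, and temperature signatures, but the alerting structure is the same.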
These emerging trends in ASC integration for smart factories focus on enabling
organizations to build agile, resilient, and customer-centric supply chains. By lever-
aging the capabilities of blockchains, smart factories can enhance supply chain trans-
parency, traceability, security, and collaboration. Edge computing, which enables
real-time data processing, decision-making, and resilience at the edge, improves
operational efficiency, security, and predictive capabilities. The combination of
these technologies can drive ASC integration, enabling smart factories to build agile,
transparent, and efficient supply chains.
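The tamper-evidence that makes blockchains useful for supply chain traceability, as noted above, can be shown with a toy hash-chained ledger. This is a deliberately minimal sketch — no consensus protocol, no network — and every name in it is hypothetical:

```python
import hashlib
import json

def make_block(record, prev_hash):
    """Link a supply chain record to its predecessor via a hash pointer."""
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    return {"record": record, "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain):
    """Tampering with any earlier record invalidates its stored hash."""
    for i, block in enumerate(chain):
        if block["hash"] != make_block(block["record"], block["prev"])["hash"]:
            return False                      # record no longer matches hash
        if i > 0 and block["prev"] != chain[i - 1]["hash"]:
            return False                      # chain link broken
    return True

genesis = make_block({"event": "part produced", "plant": "A"}, prev_hash="0")
shipped = make_block({"event": "shipped", "carrier": "B"}, genesis["hash"])
chain = [genesis, shipped]
ok_before = verify(chain)             # untampered history verifies
chain[0]["record"]["plant"] = "X"     # tamper with an earlier record
ok_after = verify(chain)              # verification now fails
```

The point is the property, not the implementation: once records are hash-linked, no participant can quietly rewrite provenance history, which is what gives the supply chain its transparency and traceability.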
14.5 CONCLUSION
This chapter delved into the concept of ASCI in the context of smart factories within
the Industry 4.0 framework. Smart factories help the manufacturing industry by
leveraging digital technologies to create intelligent, interconnected, and adaptive
manufacturing ecosystems. Integrating cyber-physical systems, IoT devices, AI, and
data analytics into smart factories enables real-time monitoring, predictive capabili-
ties, and autonomous decision-making. As a result, smart factories can function with
increased productivity, improved quality, and enhanced sustainability. Furthermore,
SCI facilitates real-time visibility, accurate demand forecasting, planning, and effi-
cient inventory management. Hence, smart factories deliver improved efficiency at
lower costs. This integration also enables supply chains to respond rapidly to ever-changing customer demands and market conditions, accommodate disruptions, and ultimately enhance customer satisfaction and loyalty. Additionally, SCI fosters collaboration and innovation among partners focused on optimizing resource utilization and improving risk management capabilities. However, the
implementation of SCI in smart factories is also faced with several challenges such
268 AI-Driven IoT Systems for Industry 4.0
as information silos, traditional systems and infrastructure, data quality and accu-
racy concerns, resistance to change, lack of trust and collaboration, and security and
privacy concerns. To overcome these challenges, IoT and AI technologies play a piv-
otal role by enabling real-time data collection, connectivity, and visibility. In addi-
tion, AI provides advanced capabilities for data analysis, prediction, optimization,
and decision-making. Furthermore, this chapter has also discussed the concept of
ASCM, emphasizing its ability to proactively respond to changing market conditions and disruptions by integrating agile and flexible practices supported by advanced digital technologies and real-time data analytics. This chapter also
sheds light on the emerging trends in ASCI for smart factories which include AI/
ML, advanced analytics and big data, IoT and connectivity, blockchain technology,
collaborative networks and digital platforms, sustainability and circular economy,
and predictive maintenance and asset optimization. These trends mainly focus on
enabling agile, resilient, and customer-centric supply chains by leveraging advanced
technologies, data-driven insights, and collaborative approaches.
REFERENCES
1. Oztemel, E., & Gursev, S. (2020). Literature review of industry 4.0 and related tech-
nologies. Journal of Intelligent Manufacturing, 31, 127–182.
2. Dahmani, N., Benhida, K., Belhadi, A., Kamble, S., Elfezazi, S., & Jauhar, S. K. (2021).
Smart circular product design strategies towards eco-effective production systems: A
lean eco-design industry 4.0 framework. Journal of Cleaner Production, 320, 128847.
3. Rao, S. K., & Prasad, R. (2018). Impact of 5G technologies on industry 4.0. Wireless
Personal Communications, 100, 145–159.
4. Mantravadi, S., Møller, C., Chen, L. I., & Schnyder, R. (2022). Design choices for
next-generation IIoT-connected MES/MOM: An empirical study on smart factories.
Robotics and Computer-Integrated Manufacturing, 73, 102225.
5. Zarte, M., Pechmann, A., & Nunes, I. L. (2022). Knowledge framework for production
planning and controlling considering sustainability aspects in smart factories. Journal
of Cleaner Production, 363, 132283.
6. Morrar, R., Arman, H., & Mousa, S. (2017). The fourth industrial revolution (indus-
try 4.0): A social innovation perspective. Technology Innovation Management Review,
7(11), 12–20.
7. Pech, M., Vrchota, J., & Bednář, J. (2021). Predictive maintenance and intelligent sen-
sors in smart factory. Sensors, 21(4), 1470.
8. Ding, B., Ferras Hernandez, X., & Agell Jane, N. (2023). Combining lean and agile
manufacturing competitive advantages through Industry 4.0 technologies: An integra-
tive approach. Production Planning & Control, 34(5), 442–458.
9. Liu, Z., Sampaio, P., Pishchulov, G., Mehandjiev, N., Cisneros-Cabrera, S., Schirrmann,
A., Jiru, F., & Bnouhanna, N. (2022). The architectural design and implementation of
a digital platform for industry 4.0 SME collaboration. Computers in Industry, 138,
103623.
10. Maddikunta, P. K. R., Pham, Q.-V., Prabadevi, B., Deepa, N., Dev, K., Gadekallu, T. R.,
Ruby, R., & Liyanage, M. (2022). Industry 5.0: A survey on enabling technologies and
potential applications. Journal of Industrial Information Integration, 26, 100257.
11. Zhong, R. Y., Li, Z., Pang, L. Y., Pan, Y., Qu, T., & Huang, G. Q. (2013). RFID-enabled
real-time advanced planning and scheduling shell for production decision making.
International Journal of Computer Integrated Manufacturing, 26(7), 649–662.
Adaptive Supply Chain Integration in Smart Factories 269
12. Omar, I. A., Debe, M., Jayaraman, R., Salah, K., Omar, M., & Arshad, J. (2022).
Blockchain-based supply chain traceability for COVID-19 personal protective equip-
ment. Computers & Industrial Engineering, 167, 107995.
13. Oh, J., & Jeong, B. (2019). Tactical supply planning in smart manufacturing supply
chain. Robotics and Computer-Integrated Manufacturing, 55, 217–233.
14. Büchi, G., Cugno, M., & Castagnoli, R. (2020). Smart factory performance and indus-
try 4.0. Technological Forecasting and Social Change, 150, 119790.
15. Huang, G. Q., Zhang, Y. F., Chen, X., & Newman, S. T. (2008). RFID-enabled real-
time wireless manufacturing for adaptive assembly planning and control. Journal of
Intelligent Manufacturing, 19, 701–713.
16. Wang, T., Zhang, Y. F., & Zang, D. X. (2016). Real-time visibility and traceability frame-
work for discrete manufacturing shopfloor. In Proceedings of the 22nd International
Conference on Industrial Engineering and Engineering Management 2015: Core Theory
and Applications of Industrial Engineering (Volume 1) (pp. 763–772). Atlantis Press.
17. Soosay, C. A., Hyland, P. W., & Ferrer, M. (2008). Supply chain collaboration:
Capabilities for continuous innovation. Supply Chain Management: An International
Journal, 13(2), 160–169.
18. Leuschner, R., Rogers, D. S., & Charvet, F. F. (2013). A meta-analysis of supply chain
integration and firm performance. Journal of Supply Chain Management, 49(2), 34–57.
19. Zhang, Y., Qu, T., Ho, O., & Huang, G. Q. (2011). Real-time work-in-progress man-
agement for smart object-enabled ubiquitous shop-floor environment. International
Journal of Computer Integrated Manufacturing, 24(5), 431–445.
20. Shrouf, F., Ordieres, J., & Miragliotta, G. (2014, December). Smart factories in Industry
4.0: A review of the concept and of energy management approached in production
based on the Internet of Things paradigm. In 2014 IEEE International Conference on
Industrial Engineering and Engineering Management (pp. 697–701). IEEE.
21. Napoleone, A., Macchi, M., & Pozzetti, A. (2020). A review on the characteristics
of cyber-physical systems for the future smart factories. Journal of Manufacturing
Systems, 54, 305–335.
22. Jiang, J. R. (2018). An improved cyber-physical systems architecture for industry 4.0
smart factories. Advances in Mechanical Engineering, 10(6), 1687814018784192.
23. Kure, H. I., Islam, S., & Razzaque, M. A. (2018). An integrated cyber security risk
management approach for a cyber-physical system. Applied Sciences, 8(6), 898.
24. Soori, M., Arezoo, B., & Dastres, R. (2023). Internet of things for smart factories in
industry 4.0, a review. Internet of Things and Cyber-Physical Systems, 3, 192–204.
25. Ahmad, M., Ishtiaq, A., Habib, M. A., & Ahmed, S. H. (2019). A review of internet of
things (IoT) connectivity techniques. Recent Trends and Advances in Wireless and IoT-
Enabled Networks, 25–36.
26. Bana, A.-S., De Carvalho, E., Soret, B., Abrao, T., Marinello, J. C., Larsson, E. G., &
Popovski, P. (2019). Massive MIMO for internet of things (IoT) connectivity. Physical
Communication, 37, 100859.
27. Fan, C., Yan, D., Xiao, F., Li, A., An, J., & Kang, X. (2021). Advanced data analytics
for enhancing building performances: From data-driven to big data-driven approaches.
Building Simulation, 14, 3–24.
28. Wang, J., Ma, Y., Zhang, L., Gao, R. X., & Wu, D. (2018). Deep learning for smart
manufacturing: Methods and applications. Journal of Manufacturing Systems, 48,
144–156.
29. Kragic, D., Gustafson, J., Karaoguz, H., Jensfelt, P., & Krug, R. (2018). Interactive, col-
laborative robots: Challenges and opportunities. IJCAI, 18–25.
30. Evjemo, L. D., Gjerstad, T., Grøtli, E. I., & Sziebig, G. (2020). Trends in smart manu-
facturing: Role of humans and industrial robots in smart factories. Current Robotics
Reports, 1, 35–41.
31. Kousi, N., Koukas, S., Michalos, G., & Makris, S. (2019). Scheduling of smart intra-factory material supply operations using mobile robots. International Journal of
Production Research, 57(3), 801–814.
32. Orellana, F., & Torres, R. (2019). From legacy-based factories to smart factories
level 2 according to the industry 4.0. International Journal of Computer Integrated
Manufacturing, 32(4–5), 441–451.
33. Yin, S., Rodriguez-Andina, J. J., & Jiang, Y. (2019). Real-time monitoring and control
of industrial cyberphysical systems: With integrated plant-wide monitoring and control
framework. IEEE Industrial Electronics Magazine, 13(4), 38–47.
34. Chung, S. H., Byrd, T. A., Lewis, B. R., & Ford, F. N. (2005). An empirical study of the
relationships between IT infrastructure flexibility, mass customization, and business
performance. ACM SIGMIS Database: The DATABASE for Advances in Information
Systems, 36(3), 26–44.
35. Wang, S., Wan, J., Li, D., & Zhang, C. (2016). Implementing smart factory of Industrie
4.0: An outlook. International Journal of Distributed Sensor Networks, 12(1), 3159805.
36. Ceccato, V., & Lukyte, N. (2011). Safety and sustainability in a city in transition: The
case of Vilnius, Lithuania. Cities, 28(1), 83–94.
37. Wang, L. (2015). Collaborative robot monitoring and control for enhanced sustainabil-
ity. The International Journal of Advanced Manufacturing Technology, 81, 1433–1445.
38. Wiktorsson, M., Do Noh, S., Bellgran, M., & Hanson, L. (2018). Smart factories: South
Korean and Swedish examples on manufacturing settings. Procedia Manufacturing,
25, 471–478.
39. Buckler, B. (1996). A learning process model to achieve continuous improvement and
innovation. The Learning Organization, 3(3), 31–39.
40. Aryal, A., Liao, Y., Nattuthurai, P., & Li, B. (2020). The emerging big data analytics
and IoT in supply chain management: A systematic review. Supply Chain Management:
An International Journal, 25(2), 141–156.
41. Plageras, A. P., Psannis, K. E., Stergiou, C., Wang, H., & Gupta, B. B. (2018). Efficient
IoT-based sensor BIG data collection–processing and analysis in smart buildings.
Future Generation Computer Systems, 82, 349–357.
42. Tao, H., Bhuiyan, M. Z. A., Abdalla, A. N., Hassan, M. M., Zain, J. M., & Hayajneh, T.
(2018). Secured data collection with hardware-based ciphers for IoT-based healthcare.
IEEE Internet of Things Journal, 6(1), 410–420.
43. Vijayaraghavan, A., Sobel, W., Fox, A., Dornfeld, D., & Warndorf, P. (2008). Improving
machine tool interoperability using standardized interface protocols: MT connect.
44. Qiu, X., Luo, H., Xu, G., Zhong, R., & Huang, G. Q. (2015). Physical assets and service
sharing for IoT-enabled supply hub in industrial park (SHIP). International Journal of
Production Economics, 159, 4–15.
45. Liu, K., & Shi, J. (2015). IoT-enabled system informatics for service decision making.
IEEE Intelligent Systems, 30(6), 18–21.
46. Shah, S. A., Seker, D. Z., Hameed, S., & Draheim, D. (2019). The rising role of big data
analytics and IoT in disaster management: Recent advances, taxonomy and prospects.
IEEE Access, 7, 54595–54614.
47. Agrawal, M., Eloot, K., Mancini, M., & Patel, A. (2020). Industry 4.0: Reimagining
manufacturing operations after COVID-19. McKinsey & Company, 1–11.
48. Chugh, G., Kumar, S., & Singh, N. (2021). Survey on machine learning and deep learn-
ing applications in breast cancer diagnosis. Cognitive Computation, 1–20.
49. Corallo, A., Crespino, A. M., Lazoi, M., & Lezzi, M. (2022). Model-based big data
analytics-as-a-service framework in smart manufacturing: A case study. Robotics and
Computer-Integrated Manufacturing, 76, 102331.
50. Sahoo, S., & Lo, C. Y. (2022). Smart manufacturing powered by recent technological
advancements: A review. Journal of Manufacturing Systems, 64, 236–250.
51. Javaid, M., Haleem, A., Khan, I. H., & Suman, R. (2023). Understanding the potential
applications of artificial intelligence in agriculture sector. Advanced Agrochem, 2(1),
15–30.
52. Goel, R., & Gupta, P. (2020). Robotics and industry 4.0. In A roadmap to industry 4.0:
Smart production, sharp business and sustainable development. 157–169. Springer,
Cham. https://doi.org/10.1007/978-3-030-14544-6_9
53. Ahmed, I., Jeon, G., & Piccialli, F. (2022). From artificial intelligence to explain-
able artificial intelligence in industry 4.0: A survey on what, how, and where. IEEE
Transactions on Industrial Informatics, 18(8), 5031–5042.
54. Varaprasad, R., & Mahalaxmi, G. (2022). Applications and techniques of natural lan-
guage processing: An overview. IUP Journal of Computer Sciences, 16(3), 7–21.
55. Meneguette, R. I., Bittencourt, L. F., & Madeira, E. R. M. (2013). A seamless flow
mobility management architecture for vehicular communication networks. Journal of
Communications and Networks, 15(2), 207–216.
56. Simatupang, T. M., Wright, A. C., & Sridharan, R. (2002). The knowledge of coor-
dination for supply chain integration. Business Process Management Journal, 8(3),
289–308.
57. Herczeg, G., Akkerman, R., & Hauschild, M. Z. (2018). Supply chain collaboration in
industrial symbiosis networks. Journal of Cleaner Production, 171, 1058–1067.
58. McNamara, M. (2012). Starting to untangle the web of cooperation, coordination,
and collaboration: A framework for public managers. International Journal of Public
Administration, 35(6), 389–401.
59. Peng, D. X., & Lu, G. (2017). Exploring the impact of delivery performance on cus-
tomer transaction volume and unit price: Evidence from an assembly manufacturing
supply chain. Production and Operations Management, 26(5), 880–902.
60. Lu, Y., Morris, K. C., & Frechette, S. (2016). Current standards landscape for smart
manufacturing systems. National Institute of Standards and Technology, NISTIR,
8107(3), 1–35.
61. Zhou, T., Tang, D., Zhu, H., & Wang, L. (2020). Reinforcement learning with compos-
ite rewards for production scheduling in a smart factory. IEEE Access, 9, 752–766.
62. Brusset, X. (2016). Does supply chain visibility enhance agility? International Journal
of Production Economics, 171, 46–59.
63. Nanjappan, M., Natesan, G., & Krishnadoss, P. (2021). An adaptive neuro-fuzzy infer-
ence system and black widow optimization approach for optimal resource utilization
and task scheduling in a cloud environment. Wireless Personal Communications,
121(3), 1891–1916.
64. Zhou, B., Hu, H., Huang, S.-Q., & Chen, H.-H. (2013). Intracluster device-to-device
relay algorithm with optimal resource utilization. IEEE Transactions on Vehicular
Technology, 62(5), 2315–2326.
65. Radziwon, A., Bilberg, A., Bogers, M., & Madsen, E. S. (2014). The smart factory:
Exploring adaptive and flexible manufacturing solutions. Procedia Engineering, 69,
1184–1190.
66. Ashurst, C., Freer, A., Ekdahl, J., & Gibbons, C. (2012). Exploring IT-enabled innovation:
A new paradigm? International Journal of Information Management, 32(4), 326–336.
67. Sjödin, D. R., Parida, V., Leksell, M., & Petrovic, A. (2018). Smart factory implementa-
tion and process innovation: A preliminary maturity model for leveraging digitaliza-
tion in manufacturing moving to smart factories presents specific challenges that can
be addressed through a structured approach focused on people, processes, and tech-
nologies. Research-Technology Management, 61(5), 22–31.
68. O’Donovan, P., Bruton, K., & O’Sullivan, D. T. (2016). Case study: The implementation
of a data-driven industrial analytics methodology and platform for smart manufactur-
ing. International Journal of Prognostics and Health Management, 7(3), 1–22.
69. Illa, P. K., & Padhi, N. (2018). Practical guide to smart factory transition using IoT, big
data and edge analytics. IEEE Access, 6, 55162–55170.
70. Cooke, P. (2021). Image and reality: “Digital twins” in smart factory automotive pro-
cess innovation – Critical issues. Regional Studies, 55(10–11), 1630–1641.
71. Ghobakhloo, M. (2020). Determinants of information and digital technology imple-
mentation for smart manufacturing. International Journal of Production Research,
58(8), 2384–2405.
72. Davis, J., Edgar, T., Porter, J., Bernaden, J., & Sarli, M. (2012). Smart manufactur-
ing, manufacturing intelligence and demand-dynamic performance. Computers &
Chemical Engineering, 47, 145–156.
73. Ivanov, D., Dolgui, A., & Sokolov, B. (2012). Applicability of optimal control theory
to adaptive supply chain planning and scheduling. Annual Reviews in Control, 36(1),
73–84.
74. Urciuoli, L., & Hintsa, J. (2017). Adapting supply chain management strategies to secu-
rity–an analysis of existing gaps and recommendations for improvement. International
Journal of Logistics Research and Applications, 20(3), 276–295.
75. Ivanov, D., Sokolov, B., & Kaeschel, J. (2010). A multi-structural framework for adap-
tive supply chain planning and operations control with structure dynamics consider-
ations. European Journal of Operational Research, 200(2), 409–420.
76. Malhotra, A., Gosain, S., & El Sawy, O. A. (2007). Leveraging standard electronic
business interfaces to enable adaptive supply chain partnerships. Information Systems
Research, 18(3), 260–279.
77. Dolgui, A., & Ivanov, D. (2022). 5G in digital supply chain and operations manage-
ment: Fostering flexibility, end-to-end connectivity and real-time visibility through
internet-of-everything. International Journal of Production Research, 60(2), 442–451.
78. Modgil, S., Singh, R. K., & Hannibal, C. (2022). Artificial intelligence for supply
chain resilience: Learning from Covid-19. The International Journal of Logistics
Management, 33(4), 1246–1268.
79. Ivanov, D. (2018). New drivers for supply chain structural dynamics and resilience:
Sustainability, industry 4.0, self-adaptation. In: Structural dynamics and resilience in
supply chain risk management, 293–313. International Series in Operations Research
& Management Science, vol 265. Springer, Cham. https://doi.org/10.1007/978-3-319-
69305-7_10
80. Sharp, M., Ak, R., & Hedberg, T. Jr (2018). A survey of the advancing use and develop-
ment of machine learning in smart manufacturing. Journal of Manufacturing Systems,
48, 170–179.
81. Tiwari, S., Wee, H. M., & Daryanto, Y. (2018). Big data analytics in supply chain
management between 2010 and 2016: Insights to industries. Computers & Industrial
Engineering, 115, 319–330.
82. Rana, A., Rawat, A. S., Afifi, A., Singh, R., Rashid, M., Gehlot, A., & Alshamrani, S. S.
(2022). A long-range Internet of Things-based advanced vehicle pollution monitoring
system with node authentication and blockchain. Applied Sciences, 12(15), 7547.
83. Hasan, M. A., Raghuveer, K., Pandey, P. S., Kumar, A., Bora, A., Jose, D., … &
Khanapurkar, M. M. (2021). Internet of Things and its application in industry 4.0 for
smart waste management. Journal of Environmental Protection and Ecology, 22(6),
2368–2378.
84. Bhaskar, P., Tiwari, C. K., & Joshi, A. (2020). Blockchain in education management:
Present and future applications. Interactive Technology and Smart Education, 18(1),
1–17. https://doi.org/10.1108/ITSE-07-2020-0102
85. Cioffi, R., Travaglioni, M., Piscitelli, G., Petrillo, A., & Parmentola, A. (2020). Smart
manufacturing systems and applied industrial technologies for a sustainable industry:
A systematic literature review. Applied Sciences, 10(8), 2897.
15 Implementation of Intelligent CPS for Integrating the Industry and Manufacturing Process
T. Rajasanthosh Kumar, Mahesh. M. Kawade,
Gaurav Kumar Bharti, and Laxmaiah G.
15.1 INTRODUCTION
Today’s marketplaces are being changed by revolutionary technology, pushing the sector to become more agile and adaptive. As the global market becomes
more competitive, businesses must significantly adjust their production strategies,
technology, and management to maintain profitability [1–3]. Automation, digitali-
zation, artificial intelligence (AI), powerful computers, advanced sensors and data
acquisition systems, intelligent robots, information and communication technology (ICT), the Internet of Things (IoT), and big data, along with the use of cloud
computing have all helped many different industries increase their performance and
productivity [4, 5]. Industry 4.0, also known as the Fourth Industrial Revolution,
resulted from the shift from traditional production to intelligent manufacturing
through digitalization and other forms of cutting-edge technology. The term originated in Germany and refers to cutting-edge production methods that bring digital
technology to traditional industries through the IoT [6, 7]. The efficient implementa-
tion of key enabling technologies (KETs) is a cornerstone of Industry 4.0’s revolution
and crucial to its success [8]. The most widely accepted tenets of Industry 4.0 are
shown in Figure 15.1.
Cyber-physical systems (CPSs), the IoT, big data, AI, cloud-based computing,
robots, and the smart factory will be discussed in more detail below.
15.1.1 Cyber-Physical Systems
Aiming to bridge the gap between the digital and physical worlds via the integra-
tion of computing, networking, and physical assets, CPSs blur the lines between the
two. It is generally accepted that CPS refers to the combination of hardware (e.g.,
actuators, sensors, and other components in robotics) and cybernetic software (com-
munication, networking, and the Internet), although the precise meaning of this term
DOI: 10.1201/9781003432319-15 273
may vary depending on the reader’s viewpoint and background [9, 10]. The success
of Industry 4.0, of which CPS is a fundamental component, relies on intelligent oversight of linked systems that combines human elements and computational capabilities, modernizing both using cutting-edge technology [11, 12]. Since its inception,
CPSs have been at the forefront of academic study and practical use in industries
around the globe. New developments in computer science and information and com-
munication technologies, on one hand, and manufacturing science and technology,
on the other, are referred to in the manufacturing business as cyber-physical produc-
tion systems (CPPS) [13, 14]. The automobile, healthcare, and energy sectors are just
a few that have adopted CPPSs; further details and examples are provided in the references cited therein.
15.1.2 Internet of Things
According to the International Telecommunication Union (ITU), the IoT is a worldwide framework that facilitates the information society and enables the provision of advanced services through the interconnection of physical and virtual entities. This interconnection is established using both existing and evolving interoperable information and communication technologies.
Implementation of Intelligent CPS 275
15.1.4 Cloud Computing
The cloud can serve as a massive infrastructure that lets a company store and process information and apply optimization schemes that streamline its manufacturing process chain, boost manufacturing efficiency, and reduce costs, because it provides a hub for data transfer and allows resources for the storage, processing, and analysis of data to be shared. Cloud manufacturing (CMfg) is the term for cloud computing’s use in manufacturing. CMfg’s capacity to incorporate cutting-edge
technologies like cloud computing, big data, the CPS, and the IoT, has led to its
rapid rise and widespread use. Consequently, it allows for the simple administration and virtualization of big data, interfaces, and production resources. The IoT has enabled real-time equipment status monitoring in CMfg environments, RFID-enabled shop-floor logistics, and big data visualization. Because of its flexibility, scalability, multi-tenancy, innovative decision-making tools, and automated production on demand, CMfg has found widespread use
in cloud computing for tracking procedures and controlling factory loops, among other applications. Additional information and examples of CMfg’s use may be found in the cited literature.
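The data-hub role that CMfg plays above can be sketched as follows. This is a simplified, hypothetical illustration of the data flow only — an in-memory hub standing in for a real cloud broker — and all machine names are invented:

```python
from collections import defaultdict
import time

class CloudHub:
    """In-memory stand-in for a CMfg data hub: machines push status
    updates; monitoring dashboards pull the latest fleet view."""

    def __init__(self):
        self.latest = {}                  # machine id -> most recent status
        self.history = defaultdict(list)  # machine id -> full update log

    def publish(self, machine_id, status):
        record = {"status": status, "ts": time.time()}
        self.latest[machine_id] = record
        self.history[machine_id].append(record)

    def fleet_view(self):
        """Real-time snapshot used for monitoring and factory-loop control."""
        return {m: r["status"] for m, r in self.latest.items()}

hub = CloudHub()
hub.publish("drill-01", "RUNNING")
hub.publish("press-02", "IDLE")
hub.publish("drill-01", "FAULT")     # later update overwrites the snapshot
print(hub.fleet_view())
```

A production system would of course route these updates through a managed cloud service with authentication and persistence; the sketch only shows why a shared hub gives every stakeholder the same real-time equipment view.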
15.1.5 Artificial Intelligence
Increasing connectivity, nonlinear behaviour, unpredictability, and data volume have all arisen due to Industry 4.0, making today’s industrial settings more complicated than ever. As AI develops and machine learning/deep learning-based approaches become more widely used across industries, they will be in a prime position to take on a central role in bringing Industry 4.0 to fruition. Industry 4.0 relies
heavily on AI, leading to commercial AI via the development, testing, and deploy-
ment of several AI algorithms for industrial applications. Industry 4.0 benefits from
AI-based techniques because of the enhanced agility, efficiency, and sustainability
that come from intelligent equipment performing tasks such as self-monitoring, analysis, and diagnosis without human intervention. Several
recent industrial applications have explored the best ways to integrate AI methods
and ideas within the framework of Industry 4.0. One such application is the use of intelligent systems for data analysis and real-time supervision: it provides a comprehensive framework for applying AI in the sector, illustrated with a specific use case, and outlines the steps needed to implement data analysis and real-time monitoring systems that are adaptable and pluggable in industrial environments.
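An "adaptable and pluggable" monitoring architecture of the kind described above might look like the following sketch, where analysis modules are registered at run time. All names and limit values here are illustrative assumptions, not the framework referenced in the text:

```python
class MonitoringPipeline:
    """Pluggable real-time analysis: each analyser is a function that
    takes one sensor sample and returns a finding (or None)."""

    def __init__(self):
        self.analysers = []

    def plug_in(self, analyser):
        """Register a new analysis module without changing the pipeline."""
        self.analysers.append(analyser)

    def process(self, sample):
        """Run every registered analyser on a sample; collect findings."""
        findings = []
        for analyse in self.analysers:
            result = analyse(sample)
            if result is not None:
                findings.append(result)
        return findings

def overheat_check(sample):
    if sample.get("temp_c", 0) > 90:          # illustrative limit
        return "overheat"

def vibration_check(sample):
    if sample.get("vibration_mm_s", 0) > 7.1:  # illustrative limit
        return "excess vibration"

pipeline = MonitoringPipeline()
pipeline.plug_in(overheat_check)
pipeline.plug_in(vibration_check)  # further analysers can be added at run time
print(pipeline.process({"temp_c": 95, "vibration_mm_s": 3.0}))
```

The design choice worth noting is the narrow analyser interface: because every module sees the same sample shape, new AI-based checks can be plugged in without touching the supervisory loop.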
15.1.6 Robotic Technology
Robotics technology is essential to the success of Industry 4.0. Commercial manipu-
lators, mobile robots, and collaborative robots are a few instances of robotics tech-
nology employed in manufacturing processes and Industry 4.0 applications. Modern
technical advancements in robotics have increased smart factories’ efficiency, flex-
ibility, versatility, reconfigurability, and safety, making industrial robots the central
technology in automation and industry. Industry 4.0 settings now have a cooperatively
automated vehicle for more flexibility and intelligence in manufacturing. This is just
one example of the many industrial uses that use state-of-the-art technology for robot-
ics, like using electronic factory processes to develop intelligent manufacturing cells.
The Internet of Robotic Things (IoRT) is a new field that emerged from the convergence
of robotics and IoT technology. Integrating robots with IoT will boost the capabilities
of both current IoT and robotics infrastructures. Autonomous robots and portable
machines, with a wide range of potential uses, would be made possible by advances in
robotics, paving the way for the introduction of Industry 4.0. More research and case
studies have been provided on robots’ applications in Industry 4.0.
15.1.7 Smart Factory
With an eye on Industry 4.0, the intelligent factory strategically
uses technologies like robots, automation, embedded technology, and information
networks. Many new technologies, including CPS, the IoT, big data, cloud comput-
ing, and AI, are aiding in the shift from traditional to intelligent manufacturing,
which will ultimately result in automated production and digital twin systems. The
intelligent factory uses modern technology and production models to enable fully
autonomous production processes that attain peak performance via ongoing self-
tuning, situational adaptation, and experience learning. The term “smart factory”
has been explored in the academic literature from various perspectives. One line of work discusses the fundamentals of an intelligent factory design, including the enabling technology, the difficulties it faces, and a sample application. An extensive literature review on intelligent factories has summarized current research accomplishments, developments, and possible prospects in this area. Another in-depth literature review based on a multi-stage methodology
and selected criteria has been conducted to establish background information and
assess emerging theories and practices related to Industry 4.0, which includes smart
manufacturing.
Figure 15.2 depicts how the four tiers of the smart factory are interconnected
through a hierarchical pyramid of intelligent automation. Unlike the traditional
pyramid, this one allows for modern computers, cloud storage, networking, and
intelligent devices to provide two-way knowledge interaction between numerous
components of the industrial process’s hierarchy.
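The two-way knowledge interaction across the pyramid's tiers can be sketched as follows; the tier names and message contents are hypothetical, chosen only to show that, unlike the strictly layered classic pyramid, any level may exchange data directly with any other:

```python
class Tier:
    """One level of the automation hierarchy (name is illustrative)."""
    def __init__(self, name):
        self.name = name
        self.inbox = []   # messages received from other tiers

def exchange(tiers, src, dst, message):
    """Deliver a message between any two tiers. In the classic pyramid
    data would only pass between adjacent levels; here a field sensor
    may stream straight to the enterprise level, and vice versa."""
    tiers[dst].inbox.append((tiers[src].name, message))

names = ["field devices", "control (PLC)", "operations (MES)", "enterprise (ERP)"]
plant = {i: Tier(n) for i, n in enumerate(names)}

exchange(plant, 0, 3, "vibration trend")  # sensor data skips straight to ERP
exchange(plant, 3, 1, "reschedule job")   # enterprise addresses control directly
```

The sketch is structural only: real plants realize this flat connectivity with modern networking, cloud storage, and intelligent devices, as the text describes.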
Smart factories have been implemented across several industries according to the Industry 4.0 framework. Under Industry 4.0, for example, the pharmaceutical industry’s
process for medication packaging has been integrated into an intelligent factory.
The agents in this smart factory can self-organize thanks to a dialogue mechanism
powered by AI. Additionally, a CPS-based cutting-edge plant using digital twin
technology is being used to produce circuit breakers. Additive manufacturing (AM) has also started using intelligent factories. In addition, real-time optimization under the Industry 4.0 paradigm has turned a conventional chemical plant into an intelligent plant.
Finally, IIoT has been used to transform the tannery sector from a traditional factory
into an intelligent factory. Design, machining, monitoring, and other processes are
some of the many areas that might benefit from Industry 4.0’s proposed architecture
for intelligent production systems. According to another work that focuses on intelligent factories and specialized machining processes, the main construction and its process elements are used as feedback data, and the system’s outputs support decision-making and machining process planning. Several further uses for smart factories are discussed in the cited literature.
15.1.8 Industry 5.0
For the last decade, “Industry 4.0” has been the most often used phrase when discussing business transformation in the manufacturing sector. It has had a significant
influence on social and economic advancement and is now a widely held belief sys-
tem across the world. Intelligent and self-sufficient manufacturing processes are a
hallmark of this kind of economy. Recently, a concept known as Industry 5.0 has
been proposed in response to these concerns, which centres the worker at the core of
digital transformation rather than automation. The five main topics of Industry 5.0
identified in the published literature are supply chain assessment and optimization,
business innovation and digitalization, intelligent and sustainable manufacturing, the
IoT, AI, big data, and human-machine interaction. Many academics see Industry 5.0 as a doorway to the peaceful cohabitation of humans and machines. The end goal
is a collaborative industrial system with holistic human-centred growth, long-term
dedication to environmental protection, and a dependable source of resilience, all
achieved via reimagining the interaction between people and intelligent components.
This chapter overviews the technologies necessary to create an intelligent factory,
such as those used in manufacturing, computing, and communication. An advanced
CPS conforming to the one-of-a-kind, cutting-edge manufacturing framework for
Industry 4.0 is mapped out. It describes the potential interplay between the intel-
ligent factory’s core components to create an automated, data-driven manufacturing
environment. A simple smart factory concept is demonstrated through a real-life
examination of smart manufacturing involving a drilling operation.
FIGURE 15.3 Physical assets of the part of the digital manufacturing platform.
The pick-and-place robot gets the coordinates for the slot in the drilling unit, grabs
the part, and deposits it there. Before drilling begins, a proximity sensor confirms
that the part has arrived at the table. When drilling is complete, a limit switch is
triggered and the robot moves the piece to a holding location. The process restarts
once a piece has returned to its starting location.
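The cycle described above can be sketched as a small state machine; the state names and transition logic below are illustrative assumptions for clarity, not the authors' actual PLC program.

```python
from enum import Enum, auto

class CellState(Enum):
    PICK = auto()     # robot grabs the part from its slot
    PLACE = auto()    # part deposited in the drilling unit
    DRILL = auto()    # drilling in progress
    UNLOAD = auto()   # piece moved to the holding location
    RESET = auto()    # piece returned to its starting location

def step(state, proximity_ok=True, limit_switch=False):
    """Advance the drilling cell one step, mirroring the narrative:
    drilling starts only once the proximity sensor confirms arrival,
    and the limit switch marks the end of the drilling stroke."""
    if state is CellState.PICK:
        return CellState.PLACE
    if state is CellState.PLACE:
        return CellState.DRILL if proximity_ok else CellState.PLACE
    if state is CellState.DRILL:
        return CellState.UNLOAD if limit_switch else CellState.DRILL
    if state is CellState.UNLOAD:
        return CellState.RESET
    return CellState.PICK  # RESET: the cycle restarts
```

One cycle therefore runs PICK → PLACE → (proximity sensor) DRILL → (limit switch) UNLOAD → RESET → PICK.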
The sensors are linked to the PLC, and information is sent from the robots and the
PLC to the cloud through the LabVIEW application programming interface (API).
The following sections provide a more in-depth explanation of the system's
framework.
15.2.1 Mechanism of Construction
This project included the development of a vertical drill to showcase a miniature
manufacturing process, with the robot being instructed to retrieve the sample from a
predetermined slot and deliver it to the platform for drilling. A drill is a device used
to create holes in various materials. As shown in Figure 15.4, the drilling platform
was modelled in CAD software. Components of the drilling system
include the following:
A grasping mechanism was added to secure the workpiece and keep the drilling
unit steady, as illustrated in Figure 15.4. The workpiece may be secured from both
sides by mounting the hydraulic pistons and steel gripper (clamp) to the table.
Under the Boards menu, users may generate customizable visualization cards to
examine data in real time. Individual API keys may be generated for any user app
that wants to link to the IoT platform to either receive information from devices or
send out commands to devices. Information about the MQTT client connection uti-
lized in this study is as follows:
A subject and an asset describing the information are provided to the platform at
the same time.
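As a rough illustration of how such a topic-plus-payload message might be assembled, the sketch below follows the topic convention used by IBM Watson IoT Platform devices (iot-2/evt/&lt;event_id&gt;/fmt/json) with the conventional top-level "d" data object; the event name and field names are assumptions, not the study's actual schema.

```python
import json

def build_event(event_id: str, readings: dict):
    """Build an MQTT topic and JSON payload for a device event.

    IBM Watson IoT Platform devices publish events on topics of the form
    iot-2/evt/<event_id>/fmt/<format>, conventionally wrapping readings
    in a top-level "d" (data) object."""
    topic = "iot-2/evt/{}/fmt/json".format(event_id)
    payload = json.dumps({"d": readings})
    return topic, payload

# Illustrative sensor snapshot; field names are assumptions.
topic, payload = build_event("status", {"proximity": 1, "limit_switch": 0})
```

An MQTT client (for example Eclipse Paho) would then publish `payload` on `topic` at the chosen QoS level.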
282 AI-Driven IoT Systems for Industry 4.0
TABLE 15.1
PLC-Robot Reaction Time

Variable                            Time
Average KRC4-LabVIEW API            7.955
Average PLC Data Block Read Time    2.600
Producer Loop Iteration Average     103.9
Implementation of Intelligent CPS 285
15.3.2 Integrated Systems
Table 15.2 shows the typical MQTT message delivery time to the IoT platform for a
speed-test JSON payload. The programme used MQTT's QoS0 level, since it sent
information to the cloud fastest; however, because the additional acknowledgement
traffic increases data transmission time over repeated iterations, QoS2 should be
reserved for messages that must be delivered exactly once.
Various measured metrics are visualized in IBM Watson IoT, as shown in
Figure 15.9. When the information reaches the IoT platform, users may make use
of tools like IBM’s Watson Analytics and Data Science Experience. Cloudant is a
NoSQL database (see Figure 15.10) that may also be used to store information. IBM
Watson also supports integrating third-party apps into the IoT infrastructure.
TABLE 15.2
Normal MQTT-IoT Platform Message Delivery Time

Variable           Level                   Time (ms)
Avg. publish time  QoS0: at most once      263.9
                   QoS1: at least once     0.8415
                   QoS2: exactly once      218.1
15.4 CONCLUSIONS
As a result of the proliferation of digitalization and intelligent manufacturing,
"Industry 4.0" has become a focal point in the manufacturing industry. In the era of
Industry 4.0, the intelligent factory is a crucial component of the digital revolution.
It paves the way for an easy transition from traditional (standard) to intelligent
production by leveraging state-of-the-art developments in intelligent machines, CPSs,
intelligent sensor technology and instrumentation, the IoT, big data, the cloud, and
AI. This chapter provides a hierarchical structure for intelligent manufacturing
facilities incorporating recent findings. This design, which includes all of the
cutting-edge technologies associated with the different levels of the intelligent
factory, has been implemented and verified to demonstrate the efficacy of an
intelligent factory in production operations. Using this technique, one can observe
how implementing the high-tech solutions of the intelligent manufacturing paradigm
has improved production, communication, and independence. The results showed
that the framework encouraged collaboration and information sharing across the
various parts of the system, leading to a more efficient manufacturing procedure
(lower reaction time) and higher output. The suggested technique has certain
restrictions, one of which is that the model can only be applied to a single production
line. The intelligent factory aims to offer consumers more adaptable goods through
reconfigurable production.
16 Machine-Learning-Enabled Stress Detection in Indian Housewives Using Wearable Physiological Sensors
Shruti Gedam and Sanchita Paul
16.1 INTRODUCTION
Mental stress has emerged as a major concern affecting people's well-being, particularly
among housewives, who frequently confront numerous tasks and pressures in
their everyday lives. Detecting and monitoring mental stress levels in this population
can aid in the development of effective intervention techniques and the promotion of
mental health [1]. The introduction of wearable physiological sensors in recent years
has provided a promising avenue for real-time monitoring of physiological signals
related to stress.
Among the different physiological signals that can be measured, electrocardio-
gram (ECG), galvanic skin response (GSR), and skin temperature (ST) have been
identified as useful indicators of a person’s stress level [2]. Each of these signals
gives important information on the physiological changes that occur during stress,
shedding light on the underlying mechanisms and improving stress detection and
monitoring. Wearable physiological sensors are devices that continually monitor an
individual’s physiological parameters such as heart rate, skin conductance, ST, and
others. When used to identify mental stress, these sensors act as a non-invasive, real-
time data source, providing significant insights into an individual’s physiological
responses to stressors. The idea is to use these sensors to collect data, analyse it, and
identify trends or deviations related to stress, which can subsequently be used for
early intervention or stress management.
The ECG, which measures electrical activity in the heart, provides critical infor-
mation about heart rate variability (HRV). HRV, which measures the time intervals
between consecutive heartbeats, has been extensively researched as a key indicator
of stress levels. The autonomic nervous system alters with stress, resulting in varia-
tions in HRV rhythms. Increased sympathetic nervous system activity and decreased
parasympathetic nervous system activity are specifically seen, resulting in altera-
tions in heart rate and HRV measurements [3]. Valuable elements such as heart rate,
HRV, and certain frequency domain metrics can be retrieved from ECG data, assist-
ing in the precise assessment of mental stress levels.
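The time-domain HRV features mentioned here can be computed directly from a series of RR intervals. The NumPy sketch below shows four commonly used measures (heart rate, SDNN, RMSSD, pNN50); it is an illustrative selection, not the chapter's exact 15-feature set.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Common time-domain HRV features from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)  # successive differences between heartbeats
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),              # average heart rate
        "sdnn_ms": rr.std(ddof=1),                       # overall variability
        "rmssd_ms": float(np.sqrt(np.mean(diff ** 2))),  # beat-to-beat variability
        "pnn50": float(np.mean(np.abs(diff) > 50.0)),    # share of >50 ms jumps
    }

# Example: RR intervals around 800 ms correspond to roughly 75 bpm.
features = hrv_time_domain([812, 790, 805, 821, 798, 810])
```

Under stress, RMSSD and pNN50 (parasympathetic markers) typically drop while heart rate rises, which is why these features are informative for the classifier.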
GSR, also known as electrodermal activity (EDA), monitors changes in the skin’s
electrical conductivity. It is affected by sympathetic nervous system activity, which
is linked to emotional and psychological responses such as stress. During times of
stress, GSR signals exhibit different patterns, which are characterised by higher skin
conductance levels. Sweat gland activity, which is controlled by sympathetic arousal,
is responsible for the rise in conductance. Skin conductance level, skin conductance
response, and temporal dynamics can be derived from GSR data, providing useful
information for mental stress detection [4].
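A minimal sketch of how the skin conductance level and skin conductance responses might be derived from a raw GSR trace, assuming a simple first-derivative threshold heuristic; the threshold and sampling rate below are illustrative assumptions, not the chapter's values.

```python
import numpy as np

def gsr_features(conductance_us, fs=4.0, scr_threshold=0.05):
    """Basic GSR features from a skin-conductance trace in microsiemens.

    The tonic skin conductance level (SCL) is taken as the trace mean;
    skin conductance responses (SCRs) are counted as upward crossings of
    a first-derivative threshold, a simple onset heuristic."""
    x = np.asarray(conductance_us, dtype=float)
    dx = np.diff(x) * fs                  # derivative in uS per second
    rising = dx > scr_threshold
    # An onset is a sample where the derivative first exceeds the threshold.
    onsets = int(np.count_nonzero(rising[1:] & ~rising[:-1])) + int(rising[0])
    return {"scl_us": float(x.mean()),
            "scr_count": onsets,
            "range_us": float(x.max() - x.min())}
```

A stress episode would show up here as a higher SCL and a burst of SCR onsets driven by sympathetic sweat-gland activity.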
Changes in blood flow and sympathetic nervous system activity influence ST,
another important physiological marker. Vasoconstriction occurs under stress as a
result of heightened sympathetic arousal, resulting in decreased blood flow to the
skin and a corresponding drop in ST [5]. Changes in ST patterns might be sugges-
tive of the body’s physiological reaction to stimuli, making this thermoregulatory
response a potential stress marker. Capturing ST data allows for the extraction of
parameters such as average ST, temperature variability, and thermal response, which
aids in the understanding and detection of mental stress levels.
The combination of ECG, GSR, and ST readings gives a multi-modal tech-
nique for detecting mental stress. By taking into account numerous physiological
signs at the same time, researchers can make use of the complementary nature
of these signals to improve the accuracy and reliability of stress evaluation. The
combination of these signals enables a more thorough investigation of the phys-
iological changes that occur during stress, capturing many components of the
body’s stress response.
The study’s contributions include investigating the applicability of wearable phys-
iological sensors and the efficacy of feature selection approaches in capturing mental
stress changes in Indian housewives. Furthermore, we hope to provide insights into
the most relevant strategies for stress detection in this specific population by
analysing and comparing multiple machine-learning (ML) algorithms.
stress and emotional discomfort. This information can be utilised to create support
systems and programmes that are tailored to their individual needs. Counselling,
peer support groups, or access to mental health services are all examples of initia-
tives that might equip housewives with the resources they need to properly manage
their stress.
Providing insights into housewives’ stress levels might empower them to take
charge of their mental health. Individuals who are aware of their stress triggers and
patterns can practise self-care strategies and seek help when necessary, thereby
enhancing their overall quality of life. This type of empowerment benefits house-
wives’ autonomy and self-efficacy. Additionally, good stress management can boost
a housewife’s productivity and engagement in daily activities. Stress reduction can
lead to better decision-making, enhanced concentration, and increased excitement
for home activities and personal pursuits. This, in turn, can have a favourable impact
on the household’s general well-being.
Another key advantage of stress monitoring is the prevention of burnout.
Housewives frequently work nonstop, handling various obligations with no breaks.
Continuous stress monitoring can aid in the prevention of burnout by detecting early
indicators of excessive stress and allowing for appropriate modifications to daily
routines and lifestyle choices. This helps to improve their long-term well-being and
resilience. Addressing stress among housewives has the potential for long-term posi-
tive consequences in a broader societal context. If individuals choose to re-enter the
workforce or pursue personal hobbies, stress reduction can lead to improved mental
health, better family connections, and enhanced job satisfaction. When housewives
are emotionally healthy and nurtured, they may take a more active role in their com-
munities, benefiting society as a whole.
FIGURE 16.1 IoMT device built from the described low-cost sensors: (a) exterior of the
device; (b) internal structure of the device.
Machine-Learning-Enabled Stress Detection 293
USB power supply for device operation, a real-time clock (RTC) module for precise
timekeeping, a microSD card module for data storage, Arduino Mega and UNO
units for sensor connectivity, and a TFT LCD display for real-time visualisation of
ECG, GSR, and ST measurements.
During the data collection phase, the ECG sensor was strategically placed on the
chest with three electrodes or pads to accurately capture the heart’s electrical sig-
nals. The GSR sensor was placed on the palmar surface of the distal phalanx of the
index and middle fingers, which was chosen due to the increased density of sweat
glands, allowing exact measurements of changes in skin conductance. Because of its
accessibility, stability, and consistent temperature readings, the DS18B20 tempera-
ture sensor was carefully inserted under the armpits of the participants. Special care
was taken to ensure appropriate skin contact and proper positioning of the sensor tip
in the centre of the armpit.
16.2.2 Stressors
A stressor is a stimulus or condition that causes an individual to experience stress.
Stressors can be external or internal, and their nature varies greatly. They can elicit
physiological, psychological, and emotional responses, which constitute the body’s
response to stress [7].
Participants in the study were exposed to a variety of stress-inducing circumstances
in order to investigate their physiological reactions. These stressors
were divided into three categories: “No Stress,” “Acute Stress,” and “Chronic
Stress.”
16.3 METHODOLOGY
16.3.1 Pre-Processing of Dataset and Extraction of Features
A set of pre-processing steps was carried out in order to improve the model's
performance. These steps were critical in preparing the data for ML processing.
To begin, the ECG data was filtered using a notch filter with a cut-off frequency
of 0.05, resulting in the extraction of 15 HRV components. Simultaneously,
15 different features were obtained from the GSR signal. This was accomplished
by passing the signal through a low-pass Butterworth filter and calculating the first
derivative of the filtered data. The discrete wavelet transform (DWT) [8] was used to
address the inherent randomness in the GSR signal. This transformation efficiently
captured various frequency components by separating the signal into approximation
and detailed coefficients. To extract 11 ST features, a Butterworth filter was used,
but this time with a lower frequency cut-off of 5 Hz and a sampling rate of 1000 Hz.
The model’s inputs were the retrieved characteristics from the ECG, GSR, and ST
modalities. The dataset was partitioned for model training and testing, with 75% set
aside for training and 25% set aside for testing. The flowchart of the methodology of
the whole study is illustrated in Figure 16.2.
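The filtering and partitioning steps above can be sketched as follows. The 5 Hz cut-off and 1000 Hz sampling rate follow the text; the filter order and the use of zero-phase filtering are assumptions made for the sketch.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass_derivative(signal, fs=1000.0, cutoff_hz=5.0, order=4):
    """Low-pass Butterworth filter plus first derivative, as applied to the
    GSR/ST channels (cut-off and sampling rate follow the chapter; the
    filter order is an assumption)."""
    b, a = butter(order, cutoff_hz / (fs / 2.0), btype="low")
    filtered = filtfilt(b, a, signal)        # zero-phase, no time shift
    derivative = np.gradient(filtered) * fs  # convert to per-second units
    return filtered, derivative

def split_75_25(X, y, seed=0):
    """Random 75%/25% train/test partition, matching the chapter's split."""
    idx = np.random.default_rng(seed).permutation(len(X))
    cut = int(0.75 * len(X))
    return X[idx[:cut]], X[idx[cut:]], y[idx[:cut]], y[idx[cut:]]
```

The discrete wavelet decomposition mentioned for the GSR channel would be applied analogously (e.g. with a wavelet library) to split the filtered signal into approximation and detail coefficients.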
The StandardScaler [9] technique was used to provide consistent and predict-
able model performance throughout training and testing. The features taken from
the ECG, GSR, and ST data were standardised using this technique, yielding fea-
tures with a mean of 0 and a standard deviation of 1 (Table 16.1). This procedure of
standardisation was critical in optimising the model’s performance throughout the
investigation.
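The standardisation step is equivalent to z-scoring each feature with statistics fitted on the training set, as scikit-learn's StandardScaler does; a NumPy sketch:

```python
import numpy as np

def standardise(train, test):
    """Z-score each feature using training-set statistics only, mirroring
    scikit-learn's StandardScaler (fit on train, transform both sets)."""
    mu = train.mean(axis=0)
    sigma = train.std(axis=0)                 # population std, like sklearn
    sigma = np.where(sigma == 0, 1.0, sigma)  # guard constant features
    return (train - mu) / sigma, (test - mu) / sigma
```

After this transform, each training feature has mean 0 and standard deviation 1; the test set is scaled with the same parameters to avoid information leakage.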
complexity while keeping key traits by identifying principal components that explain
the most variance in the data. PCA assists us in identifying the most relevant physi-
ological variables that lead to stress changes in the context of stress detection. Based
on cumulative explained variance or other criteria, we identify the ideal number of
primary components to retain, ensuring the preservation of crucial stress-related
information.
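Choosing the number of components by cumulative explained variance can be sketched with an SVD-based PCA; the 95% variance target below is an illustrative assumption, not the chapter's criterion.

```python
import numpy as np

def pca_retain(X, var_target=0.95):
    """Project X onto the fewest principal components whose cumulative
    explained variance reaches var_target (SVD-based PCA)."""
    Xc = X - X.mean(axis=0)                  # centre each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = (S ** 2) / np.sum(S ** 2)    # variance ratio per component
    k = int(np.searchsorted(np.cumsum(explained), var_target)) + 1
    return Xc @ Vt[:k].T, explained[:k]
```

This returns the reduced feature matrix together with the variance ratio captured by each retained component.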
TABLE 16.1
All the Extracted Features (15 Each) from the Three Signals – ECG, GSR, and ST
16.3.3 Machine-Learning Algorithms
In this study, a variety of ML methods were applied in pursuit of effective stress
detection among Indian housewives using wearable physiological sensors. These
algorithms formed the framework of our stress-detection model, each with its own
set of benefits and capabilities. A brief overview of the ML algorithms utilised
follows:
16.3.3.8 LightGBM
LightGBM [18] is an ML technique of the gradient boosting family. It sequentially
constructs an ensemble of decision trees, with each tree learning from the mistakes
of the previous ones. LightGBM, as opposed to standard depth-wise tree growth,
employs a leaf-wise growth technique. At each stage, it chooses the leaf node that
produces the greatest reduction in the loss function. This strategy frequently results
in shorter training times. LightGBM can directly accept categorical features, elimi-
nating the need for one-hot encoding and making it useful for datasets with mixed
data types. Because of its speed and efficiency, LightGBM is a popular choice for
competitions and applications where model training time is critical.
TABLE 16.2
Performance of ML Algorithms in Terms of Accuracy, Precision,
Recall, and F1-Score
ML Algorithm Accuracy Precision Recall F1-Score
RF 92.68 92.11 93.57 92.83
SVM 91.81 91.84 92.01 91.92
MLPNN 93.29 92.07 94.92 93.47
RBFNN 85.98 85.89 86.51 86.20
LDA 53.87 54.49 53.91 54.20
QDA 54.54 55.17 54.54 54.85
Gaussian Naïve Bayes 92.81 91.43 94.69 93.03
LightGBM 92.31 91.31 93.73 92.50
the data’s variance. This dimensionality reduction was critical for improving the
performance and interpretability of our stress-detection model. Based on wearable
physiological sensor data, we investigated multiple ML methods to detect mental
stress in Indian housewives. The performance metrics of these algorithms are sum-
marised in Table 16.2.
According to the results, the MLPNN algorithm had the highest accuracy (93.29%)
and F1-score (93.47%), showing its efficiency in detecting mental stress in housewives.
MLPNN’s precision and recall values also show a well-balanced performance. RF
and Gaussian Naive Bayes also performed well, with accuracy values of 92.68% and
92.81%, respectively. These algorithms displayed great precision and recall, implying
that they have the capacity to identify stress reliably. LDA and QDA, on the other
hand, demonstrated lower accuracy and F1-scores, indicating limits in their ability to
properly capture the intricacies of stress-related physiological changes (Figure 16.3).
FIGURE 16.3 The graph shows all performance metrics for all ML algorithms; the
X-axis shows the performance metrics and the Y-axis shows values in percentage.
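The four measures reported in Table 16.2 are the standard classification metrics; a minimal binary-classification sketch (the chapter's three-class setting would compute these per class and average them):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary labels (1 = stress)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": (tp + tn) / len(y_true),
            "precision": precision, "recall": recall, "f1": f1}
```

The F1-score is the harmonic mean of precision and recall, which is why MLPNN's balanced precision/recall pair yields the highest F1 in the table.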
The success of MLPNN, RF, and Gaussian Naive Bayes in our stress-detection
model demonstrates the utility of wearable physiological sensors in detecting men-
tal stress in Indian housewives. These discoveries have important implications for
research as well as practical applications. Accurate stress identification can lead to
prompt interventions and support systems for housewives dealing with a variety
of issues. Personalised treatment and stress management programmes, for exam-
ple, can be adapted to individual stress levels, contributing to greater well-being.
Furthermore, the use of IoMT devices allows for continuous monitoring, increasing
the possibility of real-time interventions and personalised activities. Moreover,
the study highlights the significance of feature selection approaches such as PCA in
improving the performance of stress-detection models. We can improve model effi-
ciency and interpretability by identifying the most relevant physiological indicators.
To generalise the findings, future studies should try to include a larger and more
diverse sample. The model’s applicability can be improved by broadening the range
of stressors and including real-life scenarios.
REFERENCES
1. Singh, V., Kumar, A., & Gupta, S. (2022). Mental Health Prevention and Promotion-A
Narrative Review. Frontiers in Psychiatry, 13, 898009. https://doi.org/10.3389/
fpsyt.2022.898009.
2. Kyriakou, K., Resch, B., Sagl, G., Petutschnig, A., Werner, C., Niederseer, D.,
Liedlgruber, M., Wilhelm, F., Osborne, T., & Pykett, J. (2019). Detecting Moments
of Stress from Measurements of Wearable Physiological Sensors. Sensors (Basel,
Switzerland), 19(17), 3805. https://doi.org/10.3390/s19173805.
3. Shaffer, F., & Ginsberg, J. P. (2017). An Overview of Heart Rate Variability Metrics and
Norms. Frontiers in Public Health, 5, 258. https://doi.org/10.3389/fpubh.2017.00258.
4. Subramanian, S., Barbieri, R., & Brown, E. N. (2020). Point Process Temporal
Structure Characterizes Electrodermal Activity. Proceedings of the National Academy
of Sciences of the United States of America, 117(42), 26422–26428. https://doi.org/
10.1073/pnas.2004403117.
5. Herborn, K. A., Graves, J. L., Jerem, P., Evans, N. P., Nager, R., McCafferty, D. J., &
McKeegan, D. E. (2015). Skin Temperature Reveals the Intensity of Acute Stress. Physiology
& Behavior, 152(Pt A), 225–230. https://doi.org/10.1016/j.physbeh.2015.09.032.
6. Malhotra, S., & Shah, R. (2015). Women and Mental Health in India: An
Overview. Indian Journal of Psychiatry, 57(Suppl 2), S205–S211. https://doi.org/
10.4103/0019-5545.161479.
7. Schneiderman, N., Ironson, G., & Siegel, S. D. (2005). Stress and Health: Psychological,
Behavioral, and Biological Determinants. Annual Review of Clinical Psychology, 1,
607–628. https://doi.org/10.1146/annurev.clinpsy.1.102803.144141.
8. Van Fleet, P. J. (2019). Discrete wavelet transformations: An elementary approach with
applications. Wiley.
9. Abbasi, A., Javed, A. R., Chakraborty, C., Nebhen, J., Zehra, W., & Jalil, Z. (2021).
ElStream: An Ensemble Learning Approach for Concept Drift Detection in Dynamic
Social Big Data Stream Learning. IEEE Access, 9, 66408–66419.
10. Kurita, T. (2019). Principal component analysis (PCA). Computer Vision: A Reference
Guide, 1–4.
11. Fratello, M., & Tagliaferri, R. (2018). Decision trees and random forests. In
Encyclopedia of bioinformatics and computational biology: ABC of bioinformatics
(Vol. 374). Elsevier.
12. Byun, H., & Lee, S. W. (2002, July). Applications of support vector machines for pat-
tern recognition: A survey. In International workshop on support vector machines
(pp. 213–236). Springer Berlin Heidelberg.
13. Jambukia, S. H., Dabhi, V. K., & Prajapati, H. B. (2015, March). Classification of ECG
signals using machine learning techniques: A survey. In 2015 International Conference
on Advances in Computer Engineering and Applications (pp. 714–721). IEEE.
14. Yingwei, L., Sundararajan, N., & Saratchandran, P. (1997). A Sequential Learning
Scheme for Function Approximation Using Minimal Radial Basis Function Neural
Networks. Neural Computation, 9(2), 461–478.
15. Varatharajan, R., Manogaran, G., & Priyan, M. K. (2018). A Big Data Classification
Approach Using LDA with an Enhanced SVM Method for ECG Signals in Cloud
Computing. Multimedia Tools and Applications, 77, 10195–10215.
16. Ghojogh, B., & Crowley, M. (2019). Linear and quadratic discriminant analysis:
Tutorial. arXiv preprint arXiv:1906.02590.
17. Jahromi, A. H., & Taheri, M. (2017, October). A non-parametric mixture of Gaussian
naive Bayes classifiers based on local independent features. In 2017 Artificial
Intelligence and Signal Processing Conference (AISP) (pp. 209–212). IEEE.
18. Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., … & Liu, T. Y. (2017).
Lightgbm: A Highly Efficient Gradient Boosting Decision Tree. Advances in Neural
Information Processing Systems, 3149–3157.
17 Rising of Dark Factories due to Artificial Intelligence
Anjali Mathur
17.1 INTRODUCTION
An industry can be considered an economic activity concerned with the production
of goods, the extraction of materials, or the provision of services. Steel production
is an example of the production of goods, while coal mining belongs to the extraction
of materials; the tourism industry is all about the provision of services. Industries
came into existence in the mid-eighteenth century. Before industrialization, the
economy was fully dependent on agriculture and handicrafts. Industrialization grew
with the Industrial Revolution.
The Industrial Revolution has taken place four times, and the most recent wave is
Industry 4.0. The first industrial revolution was driven by the creation of the steam
engine in the eighteenth century. Thereafter, a series of developments took place in
the fields of electricity, chemicals, oil, steel, aviation, mechanical refrigeration, etc.:
the era of the second industrial revolution. The development of such industries
opened a path to plenty of opportunities in fields of management beyond the
production of goods, marking the beginning of the third industrial revolution.
Industry gradually developed in horizontal and vertical directions on the two pillars
of management and production. The third revolution was well known for its
technological development, especially in the fields of biotechnology, microelectronics,
telematics and computer science. Slowly, it stepped toward the fourth revolution:
Industry 4.0 is an advanced manufacturing model with an extensive set of
technologies. It was the beginning of the socio-technical interaction era. Gradually,
autonomous and self-organized production systems came into existence, developing
a value-chain system between organizations. Industry 4.0 has evolved through
multiple versions, including the development of smart products, individualized
production, autonomous control and data-centric management.
of industrialization, the factory system has grown through the dimensions of smart
plants, smart production, smart logistics and, finally, smart factories.
The smart factory is a new approach, consisting of smart sensors, computer
technologies and predictive analytics. Various types of sensors (Figure 17.1) are
commonly used in different applications. Some of the most common sensors are as
follows:
• Temperature sensor
• Proximity sensor
• Accelerometer
• IR sensor (infrared sensor)
• Pressure sensor
• Light sensor
• Ultrasonic sensor
• Smoke, gas and alcohol sensor
• Touch sensor
• Color sensor
• Humidity sensor
• Position sensor
• Magnetic sensor (Hall effect sensor)
• Microphone (sound sensor)
• Tilt sensor
• Flow and level sensor
• PIR sensor
• Strain and weight sensor
The Internet of Things (IoT) and the Industrial IoT (IIoT) are the basic building blocks of smart factories. Together they maximize control over the manufacturing process and enhance the ability to acquire, transfer, interpret and analyze information. Smart factories are flexible systems that can self-optimize performance over a broader network, self-adapt to new conditions in real time, learn from the
new environment, and execute the whole production process autonomously. Smart factories can be characterized by five major features: connected, optimized, transparent, proactive and agile.
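The self-optimizing behavior described above ultimately rests on a continuous stream of sensor readings. As an illustration only — no specific sensor hardware or API is named here, so the channel names, ranges, and readings below are simulated assumptions — a minimal polling step might look like this:

```python
import random
import time

# Hypothetical sensor channels and value ranges -- illustrative only.
SENSOR_RANGES = {
    "temperature_c": (15.0, 90.0),
    "humidity_pct": (20.0, 80.0),
    "pressure_kpa": (90.0, 110.0),
    "vibration_g": (0.0, 2.0),
}

def read_sensors(ranges=SENSOR_RANGES):
    """Return one time-stamped record combining all sensor channels."""
    record = {"timestamp": time.time()}
    for name, (lo, hi) in ranges.items():
        record[name] = round(random.uniform(lo, hi), 2)
    return record

def out_of_range(record, limits):
    """Names of channels whose current reading violates a configured limit."""
    return [name for name, (lo, hi) in limits.items()
            if not lo <= record[name] <= hi]

reading = read_sensors()
print(reading, out_of_range(reading, {"temperature_c": (15.0, 75.0)}))
```

In a real plant, the simulated `read_sensors` call would be replaced by the vendor's device driver or an IIoT gateway, and the records would be relayed over the network for analysis.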
FIGURE 17.2 Digital twin framework for a holistic view of building [2].
Rising of Dark Factories due to Artificial Intelligence 307
Recently, during the COVID-19 pandemic, there was high demand for fully autonomous industrial systems, which opened a path toward dark factories. Advances in technology, especially in artificial intelligence (AI), are paving the way for true dark factories. Falling robot prices, rising labor costs in manufacturing, labor scarcity, the need to expand production capacity, the demand for more flexible environments, energy-saving opportunities, efficiency with improved
quality products, and sustainability are a few of the factors behind the rise of dark factories. Technological advances are changing the needs and working culture of industries: precise, meticulous and highly accurate work is in demand. Industrial robotics has changed perceptions of work; higher production at lower cost was always the goal, but industrialists now also want to cut labor wages by replacing human workers with automated machines.
Failures on a production line can easily be predicted by ML, and manufacturers use ML-based predictive maintenance to reduce malfunctions and unscheduled failures at very low cost. Traditionally, there are two types of maintenance: corrective (or reactive) and preventive. Corrective maintenance is the process of repairing or restoring equipment to its original working condition; preventive maintenance is the process of maintaining equipment in order to prevent failures. AI introduces a new branch, "predictive maintenance", which uses data analytics to identify and predict equipment failures, helping industrialists reduce the cost of finding and repairing them.
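As a toy illustration of the predictive-maintenance idea above — the chapter names no particular dataset or model, so the synthetic vibration and bearing-temperature readings and the simple nearest-centroid rule below are stand-ins for a production ML pipeline:

```python
import math

def centroid(samples):
    """Mean of a list of equal-length feature vectors."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def predict(x, centroids):
    """Assign x to the class label of the closest centroid."""
    return min(centroids, key=lambda label: math.dist(x, centroids[label]))

# Synthetic training data: [vibration (g), bearing temperature (deg C)].
healthy = [[0.2, 55], [0.3, 60], [0.25, 58]]
failing = [[1.5, 85], [1.8, 90], [1.6, 88]]
centroids = {"healthy": centroid(healthy), "failing": centroid(failing)}

print(predict([1.7, 87], centroids))   # a high-vibration, hot reading: prints "failing"
```

The same structure — historical sensor features labeled by outcome, a model fit on them, new readings scored continuously — underlies real predictive-maintenance deployments, usually with far richer models.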
17.6.3 Industrial Robotics
Mobile, programmable, automated robots used for manufacturing are considered industrial robots. Lower mobility, parallel manipulators and concomitant motion are among their major features. Industrial robotics was first introduced in 1937 with a crane-like device powered by a single electric motor (Figure 17.6), used to carry goods from one place to another. After a few modifications, the robotic manipulator came into existence: a mechanical device for moving goods without direct physical contact by the operator, whose specialty was its reprogrammable and multifunctional nature. Programming its motions and sequences requires connectivity with a computer system. Industrial robotics has gradually evolved to incorporate AI, making industrial robots an alternative to human labor. Robotics increases processing speed, quality and production volume, and has been applied successfully to assembling and disassembling products, welding, painting, packaging and labeling, palletizing, and product inspection and testing.
17.6.4 Computer Vision
One of the major domains of AI is machine vision, or computer vision, which performs visual inspection. The camera is the essential tool of computer vision and, with AI, can do jobs comparable to the human eye. Visual inspection by machine vision provides speed and precision, and for industry, rapid and accurate information matters a great deal. Visual inspection is needed in many areas, including defect detection, product assembly, barcode analysis, packing standards, safety and security standards, and sorting items. Compared with the human eye, the "computer eye" can find defects more precisely, and with AI these defects can be further classified on the basis of chosen parameters. Detecting foreground and background objects; tracking moving or rotating objects; and differentiating images by color, brightness, and so on are a few of the jobs of computer vision (Figure 17.7).
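A minimal sketch of the defect-detection task described above. No camera API is specified in the chapter, so the "image" here is a small synthetic grayscale grid in which a defect is any pixel darker than a threshold on an otherwise bright surface:

```python
def find_defects(image, threshold=100):
    """Return (row, col) coordinates of pixels below the threshold."""
    return [(r, c)
            for r, row in enumerate(image)
            for c, value in enumerate(row)
            if value < threshold]

surface = [[200, 210, 205, 198],
           [202, 40, 35, 207],   # two dark pixels: a simulated scratch
           [199, 204, 208, 201]]

print(find_defects(surface))  # [(1, 1), (1, 2)]
```

Real inspection systems add AI on top of this kind of pixel-level detection to classify each flagged region by defect type, as the text notes.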
17.6.5 Inventory Management
Managing the flow of goods from manufacturers to warehouses and from warehouses to end users is inventory management: a complex process covering the ordering, storing, use and sale of goods. Essentially, it is a series of processes for managing items from raw materials to finished products. Inventory management also carries many risks, such as spoilage, theft, damage and shifts in demand. For every industry it plays a major role and requires meticulous monitoring. AI helps industrialists perform inventory management in a systematic and intelligent way (Figure 17.8): ML can process massive amounts of data and perform classification and pattern recognition on it, which helps with supply chain management.
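One concrete calculation such systems automate is the reorder point. As a hedged sketch — the chapter specifies no particular forecasting model, so a simple moving average stands in for a learned demand forecast, and all figures are invented:

```python
def forecast_daily_demand(history, window=7):
    """Simple moving average over the most recent `window` days."""
    recent = history[-window:]
    return sum(recent) / len(recent)

def reorder_point(history, lead_time_days, safety_stock):
    """Stock level at which a new order should be placed."""
    return forecast_daily_demand(history) * lead_time_days + safety_stock

# Invented daily sales history for one item.
sales = [12, 15, 11, 14, 13, 16, 12, 14, 15, 13, 14, 12, 15, 13]
print(reorder_point(sales, lead_time_days=5, safety_stock=20))
```

An ML-driven system would replace the moving average with a model that also accounts for seasonality and demand shifts, but the surrounding logic is the same.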
17.8 CONCLUSION
In spite of multiple challenges, dark factories are growing rapidly. High-quality products with less human involvement are their major advantage. AI makes things easier and raises quality by providing precise, accurate and in-depth views of products. ML, deep learning, computer vision, NLP and big data analytics are the major wheels driving AI-based systems, and further advances in AI will transform smart factories into dark factories.
REFERENCES
1. https://dataconomy.com/2022/05/05/artificial-intelligence-in-industry-4-0/
2. https://facilityexecutive.com/optimize-with-digital-twin-technology/
3. Muzammal, M., Talat, R., Sodhro, A. H., & Pirbhulal, S. A Multi-Sensor Data Fusion Enabled Ensemble Approach for Medical Data from Body Sensor Networks. Information Fusion. 2020, 53, 155–164, ISSN 1566-2535. doi:10.1016/j.inffus.2019.06.021
4. Noureddine, R., Solvang, W., Johannessen, E., & Yu, H. Proactive Learning for
Intelligent Maintenance in Industry 4.0. 2020. doi:10.1007/978-981-15-2341-0_31
5. https://subscription.packtpub.com/book/data/9781788831307/1/ch01lvl1sec04/crisp-dm
6. https://www.tthk.ee/inlearcs/1-basic-about-industrial-robots/
7. Sundaram, S., & Zeid, A. Artificial Intelligence-Based Smart Quality Inspection for
Manufacturing. Micromachines. 2023, 14, 570. doi:10.3390/mi14030570
8. Sustrova T. An Artificial Neural Network Model for a Wholesale Company’s Order-
Cycle Management. International Journal of Engineering Business Management. 2016,
8. doi:10.5772/63727
FURTHER READINGS
1. Ortiz, J. H. Industry 4.0 – Current Status and Future Trends. doi:10.5772/intechopen.86000
2. Jacić, L. Re: Is Industry 4.0 Always Correspond with Automation? 2022. Retrieved
from: www.researchgate.net/post/Is_Industry_40_always_correspond_with_automation/
62f386cc3061927d1e04d950/citation/download
3. https://issuu.com/techfastly/docs/feb_2021/s/11659740
4. https://www.grainger.com/know-how/industry/manufacturing/kh-what-is-a-dark-factory
5. https://www.linkedin.com/pulse/dark-factories-likely-reality-shirish-kulkarni/
6. https://www.planettogether.com/blog/balancing-the-dark-side-of-automation-exploring-
dark-factories-and-finding-the-optimal-production-line-efficiency
7. https://www.i-scoop.eu/industry-4-0/lights-out-automation-manufacturing/
8. https://www.cnbc.com/2023/04/19/from-generative-ai-to-digital-twins-how-tech-will-
transform-factories.html
9. https://www.threadinmotion.com/blog/smart-factories-all-around-the-globe
10. https://www.electronicshub.org/different-types-sensors/
18 Deep Learning for
Real-Time Data Analysis
from Sensors
Sagar C V, Harshit Bhardwaj,
and Anupama Bhan
18.1 INTRODUCTION
According to the World Health Organization, cancer is the second biggest cause of death from medical conditions, with a substantial global impact: it causes more than 10 million deaths yearly, and an estimated 1,500 people lose their lives to cancer every day. Recurrence is a constant concern, even though treatment and prevention can improve the likelihood of survival; breast cancer treatment, for instance, can impair the immune system, allowing malignant cells to reappear in the brain and other organs. As researchers and data scientists look for ways to apply artificial intelligence (AI) to everything from weather forecasting to judicial procedures, AI has entered many industries, including medicine, and the medical field is a significant driver of the diverse range of AI applications.

Cervical cancer is a rapidly progressing and highly dangerous cancer that significantly affects the lives of women worldwide, and it is the most frequent malignancy among women. The World Health Organization reports that its incidence is particularly high among Indian women, affecting approximately 1 in 53 women compared with 1 in 100 globally. One common symptom is abnormal vaginal discharge or bleeding. The Pap smear test is a widely used method for identifying abnormalities in cervical cells: cells are removed from the cervical region and stained on a microscope slide so that variations in cell size, color, mucus and cell integrity can be examined by skilled cytologists. Nevertheless, precise manual segmentation of Pap smear cells presents a significant challenge for cytologists, primarily because cell overlap, inflammatory cells and the presence of mucus complicate the analysis of cell images. Manual examination also takes time, and many conditions depend heavily on early detection. To address these challenges, machine learning and deep learning algorithms have demonstrated their effectiveness on medical imaging modalities such as MRI scans, CT scans and X-rays. It is essential to develop techniques that can aid human-based diagnostic systems, because their influence is only going to expand as the number of cases increases every year.

The objective of this project is to build a machine-learning-based classification model. However, pre-processing such as filtering and region-of-interest segmentation is required before Pap smear images can be fed into a classification system. Because too many features can lengthen processing time, the features extracted from the pre-processed images are further refined to remove redundant ones; this study employs principal component analysis (PCA) for that purpose. The resulting filtered and decorrelated features are then fed into multiple Support Vector Machine (SVM) models that categorize cells into various stages of cancer cell growth.

To avoid mortality, cervical cancer must be identified early and treated properly. Cervical cytology, also known as a Pap smear, is the standard technique for detecting it: cervical cells are examined under a microscope for abnormal alterations that can point to precancerous or cancerous disease. Manual evaluation of Pap smear slides by human experts takes time and is subject to variation across observers. The primary cause of death in female patients stems from the difficulty of detecting cervical cancer in its early stages, as symptoms do not typically appear until the disease has advanced; the mortality rate among women could be significantly reduced if it could be identified earlier. This chapter therefore proposes a procedure for early detection of cervical cancer. Traditional methods rely heavily on human experts, which is time-consuming, and they are not feasible in densely populated developing countries because of a scarcity of trained specialists and radiologists in this field.
18.3 METHODOLOGY
The diagnosis of cervical cancer from images with AI involves training a supervised machine-learning (ML) algorithm on images labeled by an expert. During this training, the ML systems (intelligent agents) are taught to differentiate between images of different classes (normal or pathologic), and the parameters of the mathematical models describing these methods are optimized. After successful training, the algorithms are evaluated on validation images to measure their identification and generalization capabilities using established classification metrics such as accuracy, sensitivity, specificity, area under the receiver operating characteristic curve (AUC), and F1 score, among others.
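The metrics listed above follow directly from confusion-matrix counts. A sketch for the binary normal-versus-pathologic case (the counts here are invented for illustration):

```python
def metrics(tp, fp, tn, fn):
    """Classification metrics from true/false positive/negative counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # recall on pathologic cases
    specificity = tn / (tn + fp)          # recall on normal cases
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "f1": f1}

print(metrics(tp=90, fp=5, tn=95, fn=10))
```

AUC additionally requires the classifier's continuous scores rather than hard labels, which is why it is usually computed by a library routine rather than from a single confusion matrix.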
At the heart of the IoT are the devices themselves, equipped with sensors and actuators designed to perceive
and interact with their surroundings. These sensors are adept at monitoring diverse
variables such as temperature, humidity, light, motion, and location. Subsequently,
the data collected from these sensors is relayed across networks, employing both
wired and wireless connections, for subsequent processing and analysis. Central to
the IoT landscape is connectivity, which plays a pivotal role in facilitating devices’
communication and data sharing among themselves and with cloud platforms. The
underlying objective of the IoT concept is to establish an interconnected ensemble of
devices that collaborate harmoniously to enhance efficiency, automation, and con-
venience across various facets of our daily existence. This connectivity is achieved
through a spectrum of protocols and technologies, including but not limited to
Wi-Fi, Bluetooth, Zigbee, RFID, and cellular networks. The expansion of wireless
communication technologies and the proliferation of energy-efficient, cost-effective
connectivity solutions have undeniably contributed to the burgeoning realm of IoT
applications.
This module built a convolutional encoder network using the TensorFlow and Keras frameworks. A system call was issued from the .NET framework through APIs to execute the module, allowing the appropriate algorithms to be deployed at any layer or node of the model. The system saves the images in either RGB or grayscale format for pre-processing before analyzing them with trained or pretrained models to produce effective results. The image data is imported and separated into training, testing, and validation subsets before being processed in pretrained models using various data augmentation and flipping approaches, as shown in Figure 18.3. The next stage is to create a model for training and testing purposes using a web-based API framework.
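The train/test/validation split described above can be sketched as follows; the 80:10:10 ratio and fixed seed are illustrative assumptions (the experiments later in the chapter use a 90:10 train/test split):

```python
import random

def split_dataset(items, train=0.8, test=0.1, seed=42):
    """Shuffle and partition items into train/test/validation lists."""
    items = list(items)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * train)
    n_test = int(n * test)
    return (items[:n_train],
            items[n_train:n_train + n_test],
            items[n_train + n_test:])

# 917 indices, matching the size of the Pap smear dataset used later.
train_set, test_set, val_set = split_dataset(range(917))
print(len(train_set), len(test_set), len(val_set))  # 733 91 93
```

A fixed seed makes the partition reproducible, which matters when comparing classifiers trained on the same split.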
Transfer learning means acquiring knowledge while solving one problem and then reusing and applying that knowledge to solve comparable challenges in other fields. A context network is first trained on an appropriate dataset, and the acquired knowledge is then transferred to a specified target, which is trained on a dataset suited to its own needs. The effectiveness of transfer learning depends on several factors, including the choice of a suitable pretrained model and consideration of dataset size and similarity: the pretrained model should be appropriate to the target problem while also taking the context problem into account. Overfitting can occur when the target dataset is smaller than or similar in size to the source dataset; if the dataset is larger, the pretrained model usually requires fine-tuning rather than rebuilding from scratch.
18.8.2 Quadratic SVM
Quadratic SVMs represent an advanced iteration of the conventional SVM algo-
rithm, incorporating quadratic functions into the decision-making process. This
innovation enhances their performance, particularly in scenarios involving intri-
cate non-linear relationships between features and the target variable. Quadratic
SVMs are extensively employed in classification and regression tasks that grapple
with complex data patterns. Similar to other SVM variants, Quadratic SVMs
aim to determine an optimal hyperplane that effectively segregates data points
into distinct classes. Unlike their linear SVM counterparts, which rely solely on
linear decision boundaries, Quadratic SVMs introduce quadratic terms within
the kernel function. This augmentation empowers them to capture intricate non-
linear associations between features, significantly augmenting their capability to
handle intricate data patterns. Training a Quadratic SVM involves the resolution
of an optimization problem focused on identifying the hyperplane that maximizes
the margin while minimizing classification errors. The quadratic kernel func-
tion serves to gauge the similarity between data points, and the SVM endeav-
ors to ascertain the coefficients that underpin the configuration of the decision
boundary.
Gaussian RBF SVMs rely on the Gaussian radial basis function (RBF) kernel. This kernel empowers SVMs to handle intricate feature interactions effectively by transforming data into a higher-dimensional space, enabling more robust modeling of non-linear patterns. During the training phase
of Gaussian RBF SVMs, the primary objective is to identify an optimal hyper-
plane that maximizes the margin between distinct classes while simultaneously
minimizing classification errors. The Gaussian RBF kernel computes the similar-
ity between pairs of data points, while the SVM determines the coefficients gov-
erning the formulation of the decision boundary. This optimization process entails
a delicate balance between the size of the margin and the tolerance for allowable
misclassifications. In order to regulate this trade-off, regularization techniques,
such as the utilization of the C regularization parameter, are brought into play.
A significant advantage of Gaussian RBF SVMs lies in their capability to effec-
tively encapsulate highly non-linear relationships among features. The inherent
capacity of the RBF kernel equips the SVM to define intricate decision boundaries
and adeptly handle datasets that don’t exhibit linear separability. By mapping data
into an elevated-dimensional space, Gaussian RBF SVMs are adept at identifying
complex patterns and generating dependable predictions, even in the presence of
intricate non-linearities.
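The quadratic and Gaussian RBF kernels discussed in the two sections above can be written out directly; the constant `c` and the `gamma` value below are illustrative hyperparameter choices, not values taken from the chapter:

```python
import math

def quadratic_kernel(x, y, c=1.0):
    """(x . y + c)^2 -- implicitly compares all pairwise feature products."""
    return (sum(a * b for a, b in zip(x, y)) + c) ** 2

def rbf_kernel(x, y, gamma=0.5):
    """exp(-gamma * ||x - y||^2) -- similarity decays with distance."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

print(quadratic_kernel([1.0, 2.0], [3.0, 0.5]))  # (4 + 1)^2 = 25.0
print(rbf_kernel([0.0, 0.0], [0.0, 0.0]))        # identical points -> 1.0
```

An SVM never computes the high-dimensional mapping explicitly; it only evaluates these kernel functions between pairs of training points, which is what lets it draw non-linear decision boundaries at linear-model cost.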
18.9 DATASET
The publicly available Pap smear database comprises 917 cervical cell images distributed unevenly across seven categories. The resolution of the images is 0.201 µm per pixel. The dataset was produced from tissue samples of the uterine cervix obtained during smear tests; to ensure accuracy, two cytotechnicians and a doctor diagnosed the premalignant cell changes before progression to malignancy, which can make the classification procedure more challenging. The
dataset was collected from the Herlev University Hospital and contains manually
annotated ground truth segmentation of cervical cells. The images were captured
using digital cameras and a microscope and were obtained from various sources,
including clinical laboratories and research institutes. Expert pathologists exten-
sively documented and categorized each image based on the presence or absence of
abnormalities. The dataset comprises seven distinct categories, each corresponding
to specific cervical cell conditions as mentioned in Table 18.1. Within this dataset,
a prominent image is showcased, featuring a stained cell sample wherein a distinct
nucleus is encircled by cytoplasm (Figure 18.4). Moreover, for reference, a meticu-
lously manually annotated segmented image representing the ground truth accom-
panies this image (Figure 18.5). To expound further, this database encompasses a
diverse set of classes designed to facilitate the classification and recognition of cervi-
cal lesions. Notably, high-grade squamous intraepithelial lesion (HSIL) refers to cells
displaying notable and pronounced abnormalities. In contrast, low-grade squamous
intraepithelial lesion (LSIL) characterizes cells with milder irregularities. Atypical
squamous cells of undetermined significance (ASCUS) pertain to cells manifesting
ambiguous attributes that necessitate more in-depth evaluation. Lastly, the classifica-
tion category of invasive cervical cancer pertains to the advanced stage of malignant
growth in cervical cells.
TABLE 18.1
Seven Distinct Categories Assigned to Individual Pap Smear Cells
[Columns: S. No, Cell Type, Cell Image, Category; the image rows are not reproduced here.]
The Pap smear database includes 917 images of cervical cells grouped into seven different classes. Classes 1–3 are considered normal, while classes 4–7 are considered abnormal. Most cells with larger nuclei belong to the abnormal group, but this is not always true: normal cells may also have nuclei equal in size and chromatin distribution to cancerous or severely dysplastic cells, making classification difficult. To address this, the seven classes were reduced to five by combining classes 1–2 and 6–7 into single classes. This simplifies the classification process, as shown in Table 18.2.
TABLE 18.2
The Classification Process Employed Five Distinct Categories
Class Cell Type Category
1–2 Carcinoma in situ + Severe dysplasia Abnormal cells
3 Mild dysplasia Abnormal cells
4 Moderate dysplasia Abnormal cells
5 Normal columnar Normal cells
6–7 Intermediate squamous + Superficial Squamous Normal cells
Image resizing: The resolution and size of Pap smear images can vary. Resizing the images to a standardized resolution produces consistent data for further analysis and also lowers computational complexity and memory demands.
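Resizing to a standardized resolution can be sketched with nearest-neighbour interpolation on a grayscale image stored as a list of rows; a real pipeline would typically call a library such as OpenCV or PIL instead:

```python
def resize_nearest(image, new_h, new_w):
    """Nearest-neighbour resize of a 2D grayscale grid."""
    old_h, old_w = len(image), len(image[0])
    return [[image[r * old_h // new_h][c * old_w // new_w]
             for c in range(new_w)]
            for r in range(new_h)]

img = [[10, 20],
       [30, 40]]
print(resize_nearest(img, 4, 4))  # each source pixel becomes a 2x2 block
```

Nearest-neighbour is the simplest interpolation scheme; bilinear or bicubic methods give smoother results at slightly higher cost.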
TABLE 18.3 Intermediate Cells and Normal Superficial Cells
[Image table: cropped and segmented cell, binary cell, cropped nucleus and binary nucleus for three example cells.]
TABLE 18.4 Columnar Cells
[Image table: binary cell, cropped nucleus and binary nucleus for two example cells.]
Noise reduction: In Pap smear images, noise or artifacts may be present due to
various factors such as image capture, staining, or digitalization. To miti-
gate these issues and preserve essential visual information, noise reduction
techniques like Gaussian smoothing and median filtering can be employed.
These methods help reduce unwanted noise while retaining critical details
in the images mentioned in Tables 18.4 and 18.5.
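The median filtering named above can be sketched as a 3×3 filter applied to a small grayscale grid containing one salt-noise spike:

```python
def median_filter3(image):
    """3x3 median filter; border pixels are left unchanged."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            window = sorted(image[r + dr][c + dc]
                            for dr in (-1, 0, 1) for dc in (-1, 0, 1))
            out[r][c] = window[4]          # median of the 9 values
    return out

noisy = [[50, 52, 51, 53],
         [49, 255, 50, 52],   # 255 is an isolated noise spike
         [51, 50, 52, 51]]
print(median_filter3(noisy))
```

Unlike Gaussian smoothing, the median simply discards the outlier, which is why median filtering is the standard choice for salt-and-pepper noise.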
Enhancement of contrast: Enhancing the contrast in Pap smear images can
significantly improve the visibility of cellular structures and abnormalities.
Various techniques, including histogram equalization and adaptive contrast
enhancement, can be applied to adjust the dynamic range of pixel intensi-
ties and enhance the visual quality of the image mentioned in Tables 18.6
and 18.7.
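Histogram equalization, as mentioned above, can be sketched for an 8-bit grayscale image held as a flat list of pixel values:

```python
def equalize(pixels, levels=256):
    """Stretch the image's cumulative histogram over the full range."""
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(v for v in cdf if v > 0)
    # Standard equalization mapping from the cumulative distribution.
    return [round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))
            for p in pixels]

low_contrast = [100, 100, 101, 102, 103, 103, 104, 105]
print(equalize(low_contrast))   # intensities spread across 0..255
```

Adaptive variants (such as CLAHE) apply the same mapping per local tile, which handles stain-intensity variation better than a single global histogram.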
TABLE 18.5 Light Dysplastic Cells along with Severe Dysplastic and Carcinoma in Situ Cells
[Image table: cropped and segmented cell, binary cell, cropped nucleus and binary nucleus for light dysplastic, severe dysplastic/carcinoma in situ, and moderate dysplastic cells.]
TABLE 18.6
Values for Various Classifications of Cells

| Feature | Normal Columnar Cell | Intermediate and Normal Superficial | Carcinoma in Situ and Severe Dysplastic Cells | Moderate Dysplastic Cells | Light Dysplastic Cells |
|---|---|---|---|---|---|
| Area of nucleus | 12,000 approx. | 4,000 approx. | 37,000 approx. | 4,300 approx. | 26,000 approx. |
| Area of cell | 14,500 approx. | 18,000 approx. | 680,000 approx. | 61,000 approx. | 690,000 approx. |
| Eccentricity of nucleus | 0.53 approx. | 0.72 approx. | 0.48 approx. | 0.38 approx. | 0.41 approx. |
| Eccentricity of cell | 0.7 approx. | 0.68 approx. | 0.89 approx. | 0.62 approx. | 0.89 approx. |
| Area of nucleus : area of cell | 0.08 approx. | 0.02 approx. | 0.055 approx. | 0.071 approx. | 0.038 approx. |
TABLE 18.7
Texture Features

| Texture Feature | Normal Columnar Cell | Intermediate and Normal Superficial | Carcinoma in Situ and Severe Dysplastic Cells | Moderate Dysplastic Cells | Light Dysplastic Cells |
|---|---|---|---|---|---|
| Contrast | 0.2391–0.2238 | 0.295–0.344 | 0.0684–0.072 | 0.1984–0.1586 | 0.0901–0.0964 |
| Autocorrelation | 48.814–48.840 | 53.906–53.894 | 54.589–54.602 | 35.889–36.92 | 56.483–56.492 |
| Entropy | 1.715–1.693 | 1.200–1.215 | 1.185–1.182 | 1.272–1.2705 | 0.9958–0.9995 |
| Correlation | 0.9432–0.946 | 0.940–0.927 | 0.9816–0.9806 | 0.9334–0.9486 | 0.9662–0.9638 |
| Sum variance | 153.875–154.235 | 185.089–184.875 | 186.39–186.58 | 114.584–115.10 | 198.57–198.61 |
| Sum average | 13.685–13.688 | 14.379–14.381 | 14.527–14.529 | 11.915–11.918 | 14.859–14.860 |
| Variance | 48.720–48.738 | 53.829–53.841 | 54.39–54.42 | 36.799–36.82 | 56.296–56.308 |
| Sum entropy | 1.618–1.604 | 1.115–1.123 | 1.147–1.143 | 1.484–1.463 | 0.9545–0.9544 |
| Difference variance | 0.239–0.223 | 0.294–0.344 | 0.0684–0.072 | 0.1984–0.1586 | 0.0902–0.0965 |
| Difference entropy | 0.303–0.284 | 0.259–0.278 | 0.1638–0.162 | 0.2286–0.1982 | 0.1646–0.1691 |
Image registration: Pap smear images may have minor misalignments or orientation anomalies in some circumstances. Image registration techniques can be used to align and correct misalignments in the dataset, ensuring consistency and boosting the quality of subsequent analysis, as mentioned in Table 18.8.
Color normalization: Color differences in Pap smear images may occur owing to differing staining methods or imaging conditions. Color normalization methods can be used to standardize the appearance of colors across images, ensuring that color-based characteristics are not affected by such variances.
Cell segmentation: Cell segmentation is a critical stage in Pap smear image
processing as it enables the separation of individual cells or regions of inter-
est. By applying various segmentation methods, it becomes possible to iso-
late cells from the background, facilitating more precise feature extraction
and categorization. This step plays a pivotal role in the accurate analysis
and interpretation of Pap smear images for medical diagnosis.
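The chapter's actual pipeline segments cells with active contours (Section 18.15); as a deliberately simple stand-in, global thresholding already illustrates how a binary cell mask is separated from a bright background:

```python
def binary_mask(image, threshold):
    """1 where the pixel is darker than the threshold (stained cell), else 0."""
    return [[1 if v < threshold else 0 for v in row] for row in image]

def region_area(mask):
    """Number of foreground pixels -- one of the geometric features used later."""
    return sum(sum(row) for row in mask)

# Tiny synthetic grid: bright background around a dark stained region.
cell = [[240, 235, 238],
        [236, 80, 75],
        [239, 78, 237]]
mask = binary_mask(cell, threshold=128)
print(mask, region_area(mask))
```

Features such as area, extracted from masks like this, feed the classification stage described below; active contours produce much cleaner boundaries on real, overlapping cells.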
TABLE 18.8
Comparative Analysis of Proposed
Technique with Previous Techniques
References Accuracy (%)
[1] 84.3
[2] 92.19
[3] 93.78
[4] 84.31
[5] 97.18
Proposed technique 95
18.10.1 Image Filters
In digital image processing, there are two main kinds of filters: smoothing filters and sharpening filters. Smoothing filters, such as low-pass filters, allow lower-frequency components of the image to pass while attenuating higher-frequency content, which can include edge information. The purpose of smoothing filters is to reduce noise in the image; however, by smoothing edges, they risk losing boundary information. In contrast, sharpening filters aim to enhance boundary information and function as high-pass filters, allowing the high-frequency edge components of the image to pass. In this particular case, a bilateral filter is used.
18.10.2 Bilateral Filters
Similar to Gaussian filtering, bilateral filtering takes into account the intensity values of adjacent pixels as well as the distances between them. This approach preserves image boundaries while reducing noise in interior regions. The output of the bilateral filter is:
$$\mathrm{BF}[I]_p = \frac{1}{W_p}\sum_{q\in S} G_{\sigma_s}(\lVert p-q\rVert)\, G_{\sigma_r}(\lvert I_p - I_q\rvert)\, I_q \qquad (18.1)$$
The spatial term $G_{\sigma_s}(\lVert p-q\rVert)$ reduces the influence of pixels distant from the target pixel. It is the kernel of the 2D Gaussian filter used in image processing:
$$G_\sigma(i) = \frac{1}{2\pi\sigma^2}\, e^{-\frac{i^2}{2\sigma^2}} \qquad (18.2)$$
The parameter $\sigma$ determines the size of the neighborhood; blurring increases as its value increases. In addition, $W_p$ is a normalization factor that ensures the weights sum to 1. It is given by
$$W_p = \sum_{q\in S} G_{\sigma_s}(\lVert p-q\rVert)\, G_{\sigma_r}(\lvert I_p - I_q\rvert) \qquad (18.3)$$
It is evident that, alongside the spatial term $G_{\sigma_s}(\lVert p-q\rVert)$ that accounts for distance, a range term $G_{\sigma_r}(\lvert I_p - I_q\rvert)$ is present. This additional term diminishes the influence of pixels whose intensity values differ from that of the target pixel, even when they lie close to it. The degree of blurring in the target image is therefore dictated by two distinct parameters, $\sigma_s$ and $\sigma_r$. This choice of parameters and methodology is highlighted in the
work of Sylvain Paris (2009). Given the significance of boundary information, espe-
cially in scenarios involving intensity inhomogeneity and noise, the bilateral filtering
technique was opted for among various filtering strategies.
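Equations (18.1)–(18.3) can be implemented directly. A one-dimensional version — an illustrative simplification of the 2D image case, with invented $\sigma_s$, $\sigma_r$, and radius values — makes the edge-preserving behavior easy to see:

```python
import math

def gaussian(x, sigma):
    """Unnormalized Gaussian weight exp(-x^2 / (2 sigma^2))."""
    return math.exp(-(x * x) / (2 * sigma * sigma))

def bilateral_1d(signal, sigma_s=2.0, sigma_r=30.0, radius=2):
    """Bilateral filter of Eqs. (18.1)-(18.3) applied to a 1D signal."""
    out = []
    for p, Ip in enumerate(signal):
        num = den = 0.0
        for q in range(max(0, p - radius), min(len(signal), p + radius + 1)):
            # Spatial weight times range (intensity-difference) weight.
            w = gaussian(p - q, sigma_s) * gaussian(Ip - signal[q], sigma_r)
            num += w * signal[q]
            den += w                      # W_p of Eq. (18.3)
        out.append(num / den)
    return out

# A noisy step edge: values smooth within each flat region, but the range
# term keeps the edge from being blurred across.
step = [10, 12, 9, 11, 200, 198, 201, 199]
print([round(v, 1) for v in bilateral_1d(step)])
```

Because pixels on the far side of the edge differ in intensity by ~190 while $\sigma_r = 30$, their range weight is essentially zero, which is exactly the boundary-preserving property the text emphasizes.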
$$E^{\mathrm{LGF}}_X = \sum_{i=1}^{N}\int_{\Omega_i\cap \mathcal{O}_X} -\log\bigl(p_{i,X}(I(y))\bigr)\, dy \qquad (18.4)$$
The aim is to minimize this energy expression using an iterative method. Here, $\Omega_i$ denotes the various image regions, and $p_{i,X}(I(y))$ denotes the portions of those regions contained within the circular boundary. The boundary is a circular neighborhood of some radius, where $X$ denotes any location in the image domain, $y$ the location of a neighboring pixel, and $I(y)$ the intensity value at that location.
$$p_{i,X}(I(y)) = \frac{1}{\sqrt{2\pi}\,\sigma_i(x)}\exp\!\left(-\frac{\bigl(u_i(x) - I(y)\bigr)^2}{2\,\sigma_i(x)^2}\right) \qquad (18.5)$$
The local means and variances of intensities are represented by $u_i(x)$ and $\sigma_i(x)^2$. A weighting function $\omega$ is used, in which the weight assigned to $y$ increases as it draws nearer to the center $x$:
$$\omega(d) = \begin{cases} \dfrac{1}{a}\, e^{-\frac{d^2}{2\sigma^2}} & \text{if } d \le \rho \\[4pt] 0 & \text{if } d > \rho \end{cases} \qquad (18.6)$$
The weighting term $\omega(x-y)$ is then added to the energy equation used in the segmentation process: when the distance between $x$ and $y$ exceeds the predetermined radius $\rho$, the term is zero, and its weight grows as $y$ approaches the center $x$:
$$E^{\mathrm{LGF}}_X = \sum_{i=1}^{N}\int_{\Omega_i\cap \mathcal{O}_X} -\,\omega(x-y)\,\log\bigl(p_{i,X}(I(y))\bigr)\, dy \qquad (18.7)$$
18.14 CLASSIFICATION
Numerous advances in data science have been achieved, particularly in classifying data gathered through feature extraction. Processing is required because of the volume of data created, along with specialized algorithms and analytical tools for obtaining significant insights [15]. This data can be categorized using a number of techniques, including KNN, SVM and Decision Trees [16]. Each classification technique has its own advantages and disadvantages. The preceding section showed that properties extracted from segmented cells can be valuable for classification in image processing. An SVM classifier was used in our study; to identify discriminative features for classification, we performed principal component analysis (PCA) before applying the classifier.
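The PCA step can be sketched for the two-feature case, where the eigen-decomposition of the 2×2 covariance matrix has a closed form; the data below is synthetic, not the Herlev features:

```python
import math

def pca_2d(points):
    """Rotate 2D points onto their principal axes; return projections and variances."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    sxx = sum(x * x for x, y in centered) / (n - 1)
    syy = sum(y * y for x, y in centered) / (n - 1)
    sxy = sum(x * y for x, y in centered) / (n - 1)
    # Eigenvalues of [[sxx, sxy], [sxy, syy]] in closed form.
    mean = (sxx + syy) / 2
    d = math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    l1, l2 = mean + d, mean - d
    theta = 0.5 * math.atan2(2 * sxy, sxx - syy)   # rotation to principal axes
    c, s = math.cos(theta), math.sin(theta)
    projected = [(c * x + s * y, -s * x + c * y) for x, y in centered]
    return projected, (l1, l2)

# Two perfectly correlated features: the second is twice the first.
data = [(t, 2 * t) for t in range(-5, 6)]
projected, (l1, l2) = pca_2d(data)
print(round(l1, 3), round(l2, 3))   # nearly all variance on the first axis
```

Dropping the near-zero second component decorrelates and halves the feature set with no loss of information, which is exactly why PCA precedes the SVM here; real pipelines would use a library routine over all extracted features rather than this 2D special case.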
18.15 RESULT
A more effective image pre-processing technique involved the use of bilateral fil-
tering, a non-linear blur method known for preserving boundary information.
Subsequently, image segmentation was performed using an active contour model
incorporating LGFE. The primary objective of this approach was to extract relevant
data while disregarding irrelevant information for accurate analysis. For the segmen-
tation process, the nucleus and cell borders were specifically targeted by employing
a Gaussian distribution with fluctuating means and variances. Starting from a seed
point, active contours were marked within the image. Various parameters, such as
kernel size, time step, inner weight, outer weight, and the number of iterations, were
adjusted to optimize the segmentation procedure. On average, it took approximately
240 seconds to complete 800 cycles of this process. We used a multi-class SVM
classifier to divide the data into various classes after converting the black-and-white
images of the cell and its nucleus into segments and examining their geometric fea-
tures, such as area, eccentricity, perimeter, cell area, and nucleus area. There are four
types of classifiers: Polynomial SVM, Linear SVM, Quadratic SVM, and Gaussian
RBF SVM (Figure 18.8). Because segmentation is the classification model's top priority, we chose the images with the highest Dice coefficient values because they produced the best segmentation results. The top 200 images with the best segmentation among the 917 were chosen for feature extraction. After that, a 90:10 ratio was used to divide these images into a training set and a testing set. A confusion matrix was produced as an output using the SVM model.
Deep Learning for Real-Time Data Analysis from Sensors 335
FIGURE 18.8 (a) Polynomial SVM confusion matrix; (b) Gaussian RBF SVM confusion matrix; and (c) Quadratic SVM confusion matrix.
The accuracy rates produced by various SVM types varied. With a rate of 95%,
the Polynomial SVM was the most accurate, followed by the Quadratic and the
Gaussian RBF SVM.
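The Dice coefficient used above to rank segmentation quality can be computed with a minimal NumPy sketch; the two small example masks are invented for illustration.

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy masks: 3 foreground pixels each, 2 of them overlapping
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(dice_coefficient(a, b))  # 2*2 / (3+3) ≈ 0.667
```

Ranking the 917 segmented images by this score and keeping the top 200 reproduces the selection step described above.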
18.16 CONCLUSION
Cervical cancer is one of the most common cancers among women. The Papanicolaou test, or Pap smear test, is the best strategy for early identification and diagnosis of cervical cancer because of its ability to detect cell abnormalities and precancerous changes. We developed an automated algorithm that accurately identifies the type of Pap smear images in order to bypass the laborious and ineffective manual cervical cancer detection procedure. The algorithm includes effective noise removal, segmentation, and feature extraction. To improve image quality and reduce noise, bilateral filtering was utilized. Local Gaussian fitting energy was used to segment cells and nuclei without re-initialization. The accuracy of the segmentation was assessed using the Dice coefficient, and various parameters were used to describe the shape of each cell.
For the categorization of cervical images as normal or abnormal, the multi-class SVM classifier described above is utilized. The experimental results showcase the efficacy of the
proposed framework in accurately identifying non-cancerous and cancerous regions
within cervical images. Future studies can expand upon this work by examining
segmented cancerous areas in cervical images and classifying them based on sever-
ity levels such as “Early” or “Advanced,” enabling timely surgical interventions to
prevent fatalities. Additionally, the potential influence of different infections on cer-
vical cancer can be explored using cervical and Pap smear images.
To distinguish between normal and malignant cells and nuclei, a set of features
was extracted, encompassing texture attributes like the gray-level co-occurrence
matrix (GLCM), as well as geometric properties, including area, eccentricity, and
perimeter. These features were chosen to capture the distinctive textural and geo-
metric characteristics of the cells, aiding in the differentiation process. To address
the challenge of high-dimensional feature space, PCA was employed. PCA was
utilized to compute and condense basic statistical and texture features, effectively
reducing dimensionality while preserving relevant information. For classification,
three distinct varieties of multi-class SVM classifiers were employed. This com-
bined approach not only enables accurate classification but also identifies the most
relevant features for all categories. The performance of the classifiers was evaluated
by plotting confusion matrices for each of the four SVM classifiers used. A dataset
of approximately 200 photos was used to train and evaluate the classifiers. Among
the different SVM classifiers, the polynomial SVM classifier exhibited the highest
precision, surpassing other methodologies.
REFERENCES
1. S. Kumar Singh, P. Agrawal, V. Madaan, and M. Sharma, “Classification of clinical
dataset of cervical cancer using KNN,” Indian Journal of Science and Technology,
vol. 9, no. 28, pp. 1–5, 2016.
2. R. Srivastava, S. Srivastava, and R. Kumar, “Detection and classification of cancer
from microscopic biopsy images using clinically significant and biologically interpre-
table features,” Journal of Medical Engineering, vol. 2015, pp. 101–115, August 2015.
3. N. Theera Umpon, S. Auephanwiriyakul, and T. Chankong, “Automatic cervical cell
segmentation and classification in Pap smears,” Computer Methods and Programs in
Biomedicine, vol. 113, no. 2, p. 2014, February 2014.
4. N. Mittal, G. Singh, and M. Arya, “Cervical cancer detection using segmentation on
Pap smear images,” in the International Conference, Jaipur, 2016.
5. X. Xu, Y. He, J. Song, and J. Su, “Automatic detection of cervical cancer cells by a
two-level cascade classification system,” Analytical Cellular Pathology, vol. 2016,
pp. 8–16, May 2016.
6. J. Jantzen, and G. Dounias, “Analysis of Pap-smear Image Data,” in Proceedings of the
Nature-Inspired Smart Information System, pp. 11–18, 2006.
7. L. Lu, I. Nogues, R. M. Summers, S. Liu, J. Yao, and L. Zhang, “DeepPap: Deep con-
volutional networks for cervical cell classification,” IEEE Journal of Biomedical and
Health Informatics, pp. 21–31, November 2017.
8. J. Norup, “Classification of Pap-smear Data by Transduction Neuro-Fuzzy Methods,”
Lyngby, Denmark, 2005.
9. J. Norup, G. Dounias, B. Bjerregaard, and J. Jantzen, “Pap-smear Benchmark Data
For Pattern Classification,” NiSIS 2005: Nature Inspired Smart Information Systems
(NiSIS), EU Co-Ordination Action, 2005.
10. P. Kornprobst, J. Tumblin, F. Durand, and S. Paris, “Bilateral filtering: Theory and
applications,” Foundations and Trends® in Computer Graphics and Vision, vol. 4,
pp. 1–73, 2009.
11. L. He, A. Mishra, C. Lee, and L. Wang, “Active contours driven by local Gaussian dis-
tribution fitting energy,” Signal Processing, vol. 89, no. 12, pp. 2435–2447, December
2009.
12. E. Martin and J. Jantzen, “Pap-smear Classification,” Master’s Thesis, Automation,
Technical University of Denmark, Oersted-DTU, September 22, 2003.
13. M. Chowdhury, L. B. Mahanta, M. Kumar Kundu, A. Kumar Das, and K. Bora, “Automated
classification of Pap smear images to detect cervical dysplasia,” Computer Methods
and Programs in Biomedicine, vol. 138, pp. 31–47, January 2017.
14. Y. Hao, K. Hwang, L. Wang, L. Wang, and M. Chen, “Disease prediction by machine
learning over big data from healthcare communities,” IEEE Access, vol. 5, pp. 8869–
8879, April 2017.
15. J. Ding, J. Wang, X. Kang, and X.-H. Hu, “Building an SVM Classifier for Automated
Selection of Big Data,” in IEEE International Congress on Big Data (BigData
Congress), pp. 15–22, 2017.
16. M. Sharma, S. Kumar Singh, P. Agrawal, and V. Madaan, “Classification of clinical
dataset of cervical cancer using KNN,” Indian Journal of Science and Technology,
vol. 9, no. 28, pp. 1–5, July 2016.
17. R. Kumar, R. Srivastava, and S. Srivastava, “Detection and classification of cancer
from microscopic biopsy images using clinically significant and biologically interpre-
table features,” Journal of Medical Engineering, vol. 2015, pp. 1–14, April 2015.
18. J. Talukdar, C. Kr Nath, and P. H. Talukdar, “Fuzzy clustering based image segmenta-
tion of Pap smear images of cervical cancer cell using FCM Algorithm,” International
Journal of Engineering and Innovative Technology, vol. 3, no. 1, pp. 460–462, July 2013.
19. W. William, A. Ware, A. H. Basaza-Ejiri, and J. Obungoloch, “Cervical cancer clas-
sification from Pap-smears using an enhanced fuzzy C-means algorithm,” Informatics
in Medicine Unlocked, vol. 14, pp. 23–33, February 2019.
20. P. Liang, G. Sun, and S. Wei, “Application of deep learning algorithm in cervical cancer
MRI image segmentation based on wireless sensor,” Journal of Medical Systems,
vol. 43, no. 156, pp. 1–7, June 2019.
21. F. H. D. Araújo, R. R. V. Silva, D. M. Ushizima, M. T. Rezende, C. M. Carneiro, A.
G. C. Bianchi, and F. N. S. Medeiros, “Deep learning for cell image segmentation and
ranking,” Computerized Medical Imaging and Graphics, vol. 72, pp. 13–21, March
2019.
22. Y. Song, L. Zhang, S. Chen, D. Ni, B. Li, Y. Zhou, B. Lei, and T. Wang, “A deep learn-
ing based framework for accurate segmentation of cervical cytoplasm and nuclei,” in
Proc. EMBC, pp. 2903–2906, Aug. 2014.
23. Y. Liu, P. Zhang, Q. Song, A. Li, P. Zhang, and Z. Gui, “Automatic segmentation of
cervical nuclei based on deep learning and a conditional random field,” IEEE Access,
vol. 6, pp. 53709–53721, 2018.
24. Z. Hu, J. Tang, Z. Wang, K. Zhang, L. Zhang, and Q. Sun, “Deep learning for image-
based cancer detection and diagnosis − A survey,” Pattern Recognition, vol. 83,
pp. 134–149, 2018, ISSN 0031-3203.
19 Blockchain as a
Controller of Security in
Cyber-Physical Systems
A Watchdog for Industry 4.0
Adri Jovin John Joseph, Marikkannan Mariappan,
and Marikkannu P
19.1 INTRODUCTION
The advancement in computers, communication, and control has led to their advent in
the most conventional mechanisms which involve mechanical and electrical energy.
These mechanical and electrical devices that are controlled by computers, communi-
cation channels and control systems form the basis of a cyber-physical system (CPS).
CPSs connect computing systems with physical systems. Integrating the physical
system with computing systems provides a lot of advantages like handling physical
systems from the remote and having control over most operations seamlessly. These
advantages have made them suitable to be used in applications relevant to defense,
healthcare, household, and industrial infrastructure [1]. Some of the most common
applications that are a part of CPSs are power grids, smart cities, healthcare, smart
transportation, etc. All these create a lot of data, which greatly contributes to big
data analytics. The advantage poses such systems with risk also. The various com-
ponents of a CPS are tightly coupled with one another, which indirectly infers that
each component depends on another component for its functioning [2]. Systems can
be controlled by hackers just as easily as by authorized users when access is provided through a public network. When there is extensive scope for data transfer through communication channels, it inevitably draws the interest of hackers and people with malicious intent. Hence, it is important to secure the data from access by malicious users. A CPS should ensure the important information security services,
namely confidentiality, availability, and integrity when the data gets transferred from
one layer to the other. Further, it should ensure authenticity and a non-repudiation
nature. One of the peculiar features of data generated in CPSs is that it is distributed
over the entire network and is processed at different ends. This poses more problems
in managing the data as well as in securing it. These data travel farther than an end-user expects, and the farther the data reach, the greater the impact on the business. Any compromise of any part of the data in any geographical location may have a drastic impact on the data manager as well as the service
provider. The CPS market was estimated to be around 60.5 billion US dollars in 2018
and is anticipated to grow at a compound annual rate of 9.3% over the following decade, according to Credence Research [3]. If there is a business impact on
such a billion-dollar economy, it would cripple the global economic scenario. The
Internet of Things (IoT) is a well-known use case of CPS that is popular and has
extended its application in various domains from simple farm monitoring systems to
complex nuclear reactor control and monitoring systems. The reason for quoting IoT
in this instance is that the technology, though viewed to be a complex one during its
initial phases, has become a very common one in recent days. It has advanced in a
way that even novice users could learn about it in a short period and could develop
their applications and host them over the cloud. Several cloud-hosting services offer
easier interfaces for users to access their services. But the underlying risk is with
security. Most users do not bother about the security issues or are unaware of the
security implications at the time of deployment paving way for intruders to easily
gain access to the system.
Blockchain is a promising innovation that has been deployed in many applica-
tions used over the internet in recent days. The blockchain contains data, which
are outcomes of transactions. These transactions are done in a distributed envi-
ronment, which is validated by consensus algorithms and is integrated as chains
of blocks. A block is a collection of transactions, which in turn contributes to the
blockchain, which is a collection of blocks [4]. Like most other data stores, blockchains can also be compromised. The consensus algorithm is one key component that can be used to prevent such compromise: data can be neither appended nor removed without consent from the involved parties or through some proof of work. Any attempt to modify or corrupt the data may invalidate the whole blockchain, rendering the hacker's efforts futile. Furthermore, altering data in a blockchain is not as easy as many people think. It requires a lot of computational power and effort, because the
consensus algorithm utilized by each blockchain may vary depending on the frame-
work or application deployed. This property of implementing a non-alterable data
structure is termed immutability. Immutability is the core value of a blockchain
system. According to Gartner, blockchain will add 3.1 trillion US dollars in business value by 2030 [3]. Blockchains have found application in finance, IoT, cybersecurity, smart contracts, identity management, reputation management, etc., and suit any application that requires decentralization and consensus among participating parties.
How does the blockchain get aligned with CPSs? How do they ensure security in
CPSs? Blockchains are distributed ledgers, whereas CPSs produce distributed data.
This data needs to be validated and integrated, which by default ensures integrity.
The blockchains cannot be altered so easily, which further ensures the integrity of
data. Blockchains are replicated throughout the network, which ensures the availability of data. Further, various cryptographic implementations in blockchains help ensure the confidentiality of data. Some of the use cases described in this chapter
are first of their kind, and hence it would be really helpful for researchers to deep
dive into those domains.
19.3.1 Challenges
One of the major challenges of deploying blockchains in CPSs is the volume of data the blockchain must handle, since handling that volume involves a huge cost.
Secondly, the CPSs designed for each application have a specific data structure, and
hence, data interoperability is another major challenge. Thirdly, the public avail-
ability of blockchains should ensure the privacy of users, and hence, the design of a
privacy framework for blockchains is a major challenge [1].
Apart from those listed above, there are certain domain-specific challenges. For
example, in the case of power grids, there may be fluctuation in the recent state, and
signature verification may contribute to additional overhead and slower execution of
processes due to consensus process and data replication [9].
19.3.2 Limitations
Though blockchain implementation has advantageous aspects in CPS, it possesses
certain limitations which are listed as follows [2]:
• The number of connected devices is limited by the block size and the turnaround time of hash calculations.
• A transaction fee or a reward mechanism is mandatory.
• A set of miners connected to the blockchain may gain complete control over
the system if they possess a malicious intent.
• The entire ledger needs to be stored if a party is an endorser or miner, which
adds data storage overhead.
19.3.3 Classification
Blockchain CPSs can be categorized into further subcategories based on the acces-
sibility option, voting rights, and sustainability properties [9]. The classification is
portrayed in Figure 19.1.
Based on the accessibility option available in the blockchain-aided CPS, the sys-
tems are categorized as follows:
Open blockchain-aided CPSs: This CPS provides open access to the various
layers of the architecture. The participants are free to vote, participate, and
get excluded from participation, and all processes are based on the consen-
sus mechanism that is deployed in the system.
Private blockchain-aided CPSs: The access is restricted by the owners of the
CPS. The data about the state and consensus is not exposed to the public.
Based on the voting rights in the blockchain-aided CPS, systems are categorized
as follows:
Based on the sustainability properties prevailing in the blockchain, they are clas-
sified as follows:
1. Protocol layer
2. Infrastructure layer, and
3. Blockchain layer
The functionalities of each layer were well-defined in this system, and the entities
listed under each layer are displayed in Table 19.1.
The client automation which provides the platform for the Protocol Layer is
responsible for verification. It also protects the privacy of the customers. The data
provenance generator is responsible for the monitoring as well as the retrieval of data
on the delivery of power. The data thus obtained is uploaded to the Fabric platform.
The data will be attached to the ledger using the consensus scheme that is being
deployed in the blockchain system.
TABLE 19.1
System Components – Blockchain-Enabled Smart Power Grid
Protocol Layer: Hyperledger Fabric Client; Token Database; Registration Database; Credential Database.
Infrastructure Layer: Powerplant; Transmission infrastructure; Distribution infrastructure; Service Provider; Marketing Agent; Operations Division; Home Area Network; Advanced Metering Infrastructure; Customers.
Blockchain Layer: Nodes; Copies of all blocks of transactions.
In these systems, since the data is stored in a tree data structure, a large volume of data can be handled scalably. Because the data is stored in the form of JSON files, the interoperability issue is also addressed. A token-based access method is designed to preserve the privacy of the end-users of the system.
Another implementation of blockchain-enabled multi-agent-based power grid
systems takes the liberty of deploying permissionless blockchain in a trustless envi-
ronment [9]. A particular subclass of CPS known as an open symbiotic blockchain-
aided CPS is used in this system. This system provides security for the data stored.
Amidst the two choices of storing data, the one within the blockchain and the other
as a cryptographic hash pointer, this system stores the data in the blockchain. This
provides security for the data. One may wonder how on-chain storage secures the data: blockchain storage comes with a cost for each transaction, and a party wishing to tamper with this data would need to spend enormous computational power to do so. This in turn enhances the security of the data.
Some of the secure implementations of power grids deploy blockchain as a security
measure alone [12].
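The two storage choices mentioned above, full on-chain data versus a cryptographic hash pointer to off-chain data, can be contrasted with a small sketch. The record fields below are hypothetical; only the fixed-size digest would go on-chain.

```python
import hashlib
import json

# Off-chain record kept in conventional storage (fields are illustrative)
record = {"meter_id": "M-17", "kwh": 42.5, "timestamp": "2023-01-01T00:00:00Z"}

def record_digest(record):
    """Fixed-size hash pointer suitable for on-chain anchoring."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

digest = record_digest(record)  # this digest is what gets anchored on-chain

def verify(record, digest):
    """Any later change to the record no longer matches the stored digest."""
    return record_digest(record) == digest

print(verify(record, digest))   # True: record matches the anchored digest
record["kwh"] = 99.0            # tamper with the off-chain record
print(verify(record, digest))   # False: tampering is exposed
```

The hash-pointer approach keeps on-chain storage costs constant per record while still letting anyone detect modification of the off-chain data.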
Choi et al. proposed a reactor protection system (RPS) designed specifically to protect the control systems of a nuclear reactor against false data injection [6]. In the case of power grids, the security issue concerns a public blockchain, whereas in the case of a nuclear power plant (NPP), it concerns a private blockchain. This system deploys a novel consensus mechanism, Proof-of-Monitoring (PoM), to ensure the integrity of the RPS.
19.5.3 Healthcare
CPSs, nevertheless, have wide applications in healthcare. One of the most serious data attacks encountered by the healthcare industry was the outbreak of the WannaCry ransomware, which cost the industry millions of dollars. Despite the spending and compensation, many healthcare agencies and institutions were not able to recover the data that they had lost. Medical records are equally critical from an individual's perspective. A public blockchain implementation based on the
Ethereum blockchain is used to preserve medical data [13]. The Ethereum block-
chain is used in this model since the dependency of Ethereum on other third parties
is almost trivial. The sequence of transactions in the system is as follows:
19.5.4 Manufacturing Industry
A blockchain-based CPS (BCPS) in the manufacturing industry, namely the 5C-CPS architecture, ensures the security of CPSs. The tri-layer BCPS is mapped onto the 5Cs based on functionality [16]. Table 19.2 shows the correlation between the BCPS layers and the 5C-CPS architecture. The connection layer is
associated with the physical layer which may be connected to another device or is
operated by a human being. It also contains components like sensors, actuators, and
machines. The conversion layer comprises analytic tools, deep-learning networks,
and computation infrastructure like fog/edge computing infrastructure. The com-
ponent of the cyber layer comprises big data, digital twins, cloud computing infra-
structure, data warehouses, and other inter-cyber interactions. The cognition layer
contains the decision-support system. The configuration layer contains intelligent
decision-making components and management information systems. It also shows
the alignment of the 5Cs with the properties of blockchain.
TABLE 19.2
Alignment of BCPS Layers with 5C-CPS Layers and Blockchain Properties
Blockchain properties considered: Non-intermediary; Decentralization; Transparency; Immutability; Authenticity; Distribution; Anonymity.
Connection Net, Connection layer: ✓ ✓ ✓ ✓ ✓ ✓ ✓
Connection Net, Conversion layer: ✓ ✓ ✓ ✓ ✓ ✓
Cyber Net, Cyber layer: ✓ ✓ ✓ ✓ ✓ ✓ ✓
Cyber Net, Cognition layer: ✓ ✓ ✓ ✓ ✓ ✓
Management Net, Configuration layer: ✓ ✓ ✓ ✓ ✓ ✓
TABLE 19.3
Blockchain Contribution to BCPS
BCPS Layers Blockchain Contribution
Management Net Advanced Cryptography, Smart Contract, Asset Tokenization, P2P interactions
Cyber Net Decentralized AI, Load distribution among nodes, Advanced Cryptography,
Micro-clouds, Smart Contract, P2P interactions
Connection Net Tracking components, P2P interactions, Smart Contracts, Shared resources,
Advanced Cryptography
Within the CPS, the blockchain contributes the functions described in Table 19.3 to enable data security.
The above-mentioned system guarantees a secure and reliable service in the man-
ufacturing sector. Another variant of the above-mentioned model is used for trust
management services in CPSs [17].
19.6 CONCLUSION
The decentralized nature of CPSs makes them suitable to be integrated with decen-
tralized databases like blockchains. The inherent mechanisms in a blockchain namely
consensus algorithms, distributed data, and secure protocols supplement the function-
alities of CPSs. The implementation issues, the security challenges, and other factors
relevant to the implementation of blockchains in CPSs are discussed in this chapter.
As a concluding remark, blockchains should not be deployed wherever possible; rather, they should be deployed where they genuinely fit and enhance the system.
REFERENCES
1. X. Liang, S. Shetty, D. K. Tosh, J. Zhao, D. Li, and J. Liu, “A reliable data provenance
and privacy preservation architecture for business-driven cyber-physical systems using
blockchain,” Int. J. Inf. Secur. Priv., vol. 12, no. 4, pp. 68–81, 2018.
2. H. Rathore, A. Mohamed, and M. Guizani, “A survey of blockchain enabled cyber-
physical systems,” Sensors (Switzerland), vol. 20, no. 1, pp. 1–28, 2020.
3. A. Braeken, M. Liyanage, S. S. Kanhere, and S. Dixit, “Blockchain and cyberphysical
systems,” Computer (Long. Beach. Calif)., vol. 53, no. 9, pp. 31–35, 2020.
4. D. B. Rawat, V. Chaudhary, and R. Doku, “Blockchain technology: Emerging applica-
tions and use cases for secure and trustworthy smart systems,” J. Cybersecurity Priv.,
vol. 1, no. 1, pp. 4–18, 2020.
5. P. J. Taylor, T. Dargahi, A. Dehghantanha, R. M. Parizi, and K. K. R. Choo, “A system-
atic literature review of blockchain cyber security,” Digit. Commun. Netw., vol. 6, no. 2,
pp. 147–156, 2020.
6. M. K. Choi, C. Y. Yeun, and P. H. Seong, “A novel monitoring system for the data
integrity of reactor protection system using blockchain technology,” IEEE Access, vol. 8,
pp. 118732–118740, 2020.
7. N. Etemadi, P. Van Gelder, and F. Strozzi, “An ISM modeling of barriers for blockchain/distributed ledger technology adoption in supply chains towards cybersecurity,” Sustainability, vol. 13, no. 9, pp. 1–28, 2021.
8. J. Zhang, L. Pan, Q. L. Han, C. Chen, S. Wen, and Y. Xiang, “Deep learning based
attack detection for cyber-physical system cybersecurity: A survey,” IEEE/CAA J.
Autom. Sin., vol. 9, no. 3, pp. 377–391, 2022.
9. R. Skowroński, “The open blockchain-aided multi-agent symbiotic cyber–physical
systems,” Futur. Gener. Comput. Syst., vol. 94, pp. 430–443, 2019.
10. Y. Maleh, S. Lakkineni, L. Tawalbeh, and A. A. AbdEl-Latif, Blockchain for Cyber-
Physical Systems: Challenges and Applications, 2022. https://link.springer.com/chapter/
10.1007/978-3-030-93646-4_2
11. E. R. Griffor, C. Greer, D. A. Wollman, and M. J. Burns, “Framework for cyber-physical
systems: Volume 1, overview,” Nist, vol. 1, no. 1, p. 79, 2017.
12. G. Liang, S. R. Weller, F. Luo, J. Zhao, and Z. Y. Dong, “Distributed blockchain-based
data protection framework for modern power systems against cyber attacks,” IEEE
Trans. Smart Grid, vol. 10, no. 3, pp. 3162–3173, 2019.
13. R. Ch, G. Srivastava, Y. L. V. Nagasree, A. Ponugumati, and S. Ramachandran,
“Robust cyber-physical system enabled smart healthcare unit using blockchain technol-
ogy,” Electronics, vol. 11, no. 19, p. 3070, 2022.
14. G. N. Nguyen, N. H. Le Viet, M. Elhoseny, K. Shankar, B. B. Gupta, and A. A. A.
El-Latif, “Secure blockchain enabled cyber–physical systems in healthcare using deep
belief network with ResNet model,” J. Parallel Distrib. Comput., vol. 153, pp. 150–160,
2021.
15. S. Rathore and J. H. Park, “A blockchain-based deep learning approach for cyber secu-
rity in next generation industrial cyber-physical systems,” IEEE Trans. Ind. Inf., vol. 17,
no. 8, pp. 5522–5532, 2021.
16. J. Lee, M. Azamfar, and J. Singh, “A blockchain enabled cyber-physical system archi-
tecture for industry 4.0 manufacturing systems,” Manuf. Lett., vol. 20, pp. 34–39, 2019.
17. B. K. Mohanta, U. Satapathy, M. R. Dey, S. S. Panda, and D. Jena, “Trust Management
in Cyber Physical System using Blockchain,” 2020 11th Int. Conf. Comput. Commun.
Netw. Technol. ICCCNT 2020, 2020.
20 Energy Management in
Industry 4.0 Using AI
Jeevitha D, Deepa Jose, Sachi Nandan Mohanty,
and A. Latha
20.1 INTRODUCTION
Energy Management using AI is an emerging field that combines cutting-edge tech-
nologies with sustainable practices to optimize energy usage in industrial settings.
By utilizing the capabilities of artificial intelligence (AI), businesses can achieve
significant improvements in energy efficiency, cost savings, and environmental
sustainability.
According to a report by McKinsey & Company, AI-powered energy manage-
ment systems can reduce energy consumption by up to 20% in manufacturing indus-
tries [1]. A significant advantage of implementing AI in energy management is its
ability to enable predictive maintenance. By analyzing historical data and machine-
learning algorithms, AI systems can forecast equipment failures and maintenance
requirements, allowing for proactive interventions. This not only reduces downtime
and maintenance costs but also enhances overall operational efficiency [2].
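As one way to make the predictive-maintenance idea concrete, the following hedged sketch frames it as binary classification on sensor features. The features (temperature, vibration, runtime hours) and the failure rule are synthetic stand-ins invented for demonstration; a real deployment would train on historical machine telemetry.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.normal(70, 10, n),     # temperature (°C)
    rng.normal(3, 1, n),       # vibration (mm/s)
    rng.uniform(0, 5000, n),   # runtime hours
])
# Synthetic rule: hot, vibrating machines and long-running machines tend to fail
y = (((X[:, 0] > 75) & (X[:, 1] > 3)) | (X[:, 2] > 4500)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```

In practice the model's failure probabilities, not just its labels, drive the proactive interventions mentioned above, with maintenance scheduled when predicted risk crosses a threshold.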
Furthermore, AI-based energy management systems can facilitate demand man-
agement programs. By participating in such programs, organizations can optimize
their energy usage, contribute to grid stability, and potentially earn financial incen-
tives [3].
A study conducted by the International Journal of Advanced Manufacturing
Technology highlights that AI algorithms, combined with Internet of Things (IoT)
technologies, can significantly improve energy efficiency in industrial processes [4].
IoT devices collect real-time data from physical objects, environments, and assets, enabling organizations to make informed decisions and optimize operations.
IoT devices such as sensors, connected devices, and actuators are capable of capturing and transmitting data over the internet. These devices can be embedded in various
objects and environments, including manufacturing equipment, vehicles, buildings,
and even wearable devices.
Smart sensors play a crucial role in collecting real-time data. The purpose of
these sensors is to detect and measure specific physical properties or environmental
factors. They can provide accurate and precise measurements, ensuring reliable data
collection. Temperature sensors, occupancy sensors, air quality sensors, and vibra-
tion sensors are examples of smart sensors.
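A smart sensor's read-and-package cycle can be sketched as follows; the field names, value ranges, and sensor identifier are illustrative rather than any specific device's API, and the hardware read is simulated.

```python
import json
import random
import time

def read_temperature():
    """Stand-in for a hardware driver call; returns degrees Celsius."""
    return round(20.0 + random.random() * 5.0, 2)

def sensor_message(sensor_id):
    """Package one reading as JSON, ready for transmission over the network."""
    return json.dumps({
        "sensor_id": sensor_id,
        "type": "temperature",
        "celsius": read_temperature(),
        "timestamp": time.time(),
    })

msg = sensor_message("factory-floor-7")
print(msg)
```

A self-describing payload like this is what downstream analytics and energy management platforms typically ingest from smart meters and environmental sensors.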
energy sources more effectively. This intelligent approach to load management has
the potential to transform the energy landscape and bring about a more sustainable
future.
Table 20.1 provides a high-level overview, and the specific AI techniques and
tools used for integration can vary depending on the specific application and techno-
logical advancements.
TABLE 20.1
Sustainable Energy Practice and AI Integration Aspects
Renewable energy generation: Predictive analytics for solar/wind energy generation; optimization of energy output using AI algorithms.
Energy storage: Intelligent battery management and optimization; predictive maintenance of energy storage systems.
Smart energy management: Load forecasting, demand-side management, real-time grid monitoring and stability analysis; automated control for energy routing and balancing.
Energy consumption data and analysis: Data collection from smart meters and IoT devices; energy usage pattern recognition through machine learning; identification of energy-saving opportunities.
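The load-forecasting aspect listed in Table 20.1 can be sketched with lagged features and linear regression; the hourly load series below is synthetic (a daily cycle plus noise), and the lag window is an illustrative choice.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
hours = np.arange(24 * 14)  # two weeks of hourly observations
load = 100 + 30 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 3, hours.size)

# Predict the next hour's load from the previous 24 hours
lags = 24
X = np.array([load[i:i + lags] for i in range(load.size - lags)])
y = load[lags:]

split = len(X) - 24  # hold out the final day for evaluation
model = LinearRegression().fit(X[:split], y[:split])
print(f"test R^2: {model.score(X[split:], y[split:]):.2f}")
```

Real load forecasters add calendar features, weather, and nonlinear models, but the lagged-regression structure shown here is the common starting point.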
• Data breaches
• Privacy laws and regulations
• Data minimization and purpose limitation
• User consent and transparency
• Secure storage and data handling
• Third-party data sharing
• Ethical considerations
The integration of AI with legacy systems presents both challenges and oppor-
tunities for organizations. Legacy systems are typically older technology stacks
that have existed for a long time and may not be designed to work seamlessly
with AI applications. However, with careful planning and implementation, the
integration of AI can unlock new capabilities and enhance the functionality of
legacy systems.
The main challenge is the availability of computing resources. AI applications
typically require significant computational power to process large data sets and
train complex models. Organizations may need to consider cloud-based solutions or
upgrade their hardware to handle the computational demands of AI.
Integration also requires careful consideration of the existing workflows and pro-
cesses within the organization. Legacy systems often have well-established work-
flows that may not align with AI-driven processes. Organizations need to assess
how AI can enhance or streamline existing workflows and make any necessary
356 AI-Driven IoT Systems for Industry 4.0
adjustments to ensure a smooth integration. This may involve changes to data flows,
user interfaces, or backend processes to accommodate AI capabilities.
• Technology advancements
• Digital literacy
• Soft skills
• Industry-specific expertise
• Lifelong learning
• Collaboration with educational institutions
• Employee engagement and retention
Energy storage systems: Scalable energy storage technologies are essential for the widespread adoption of renewable energy sources. The sensory data are forecast through a data prediction model in the cloud [7]. Battery technology advancements such as solid-state batteries, lithium-ion batteries, and flow batteries are enabling reliable energy storage for both grid-scale and residential applications.
Demand response systems: Demand response systems utilize real-time infor-
mation and communication technologies to modify energy consumption in
response to grid conditions and pricing signals. Demand response systems
help to relieve grid strain and promote energy efficiency.
Energy management software: Advanced software solutions are being developed
to optimize energy consumption in buildings and industrial processes. These
platforms analyze information from sensors, meters, and other tools using
machine-learning algorithms to identify energy-saving opportunities, automate
control systems, and provide actionable insights for optimizing energy usage.
IoT in energy management: By the combination of IoT devices with energy
management systems, organizations can observe energy usage, identify
inefficiencies, and implement targeted interventions to reduce energy waste.
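As a minimal illustration of the usage-pattern analysis these platforms perform, the sketch below flags abnormal smart-meter readings with a simple z-score test. The data, threshold, and function names are illustrative assumptions, not from the chapter:

```python
import statistics

def flag_anomalies(readings, z_threshold=2.0):
    """Return readings whose z-score (distance from the mean in standard
    deviations) exceeds the threshold -- candidate energy-saving targets."""
    mean = statistics.mean(readings)
    stdev = statistics.stdev(readings)
    return [r for r in readings if abs(r - mean) / stdev > z_threshold]

# Hypothetical hourly kWh readings with one abnormal spike.
hourly_kwh = [5.1, 5.3, 4.9, 5.0, 5.2, 5.1, 12.8, 5.0]
print(flag_anomalies(hourly_kwh))  # [12.8]
```

Production platforms use far richer models (seasonality, weather, occupancy), but the pattern is the same: learn a baseline, then surface deviations for intervention.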
Figure 20.2 demonstrates the role of IoT in energy management. It includes energy cost reduction, predictive maintenance, carbon footprint, green energy implementation, energy audit and compliance, energy efficiency optimization, and reduced energy spending.
The application of AI-powered energy management in Industry 4.0 has the poten-
tial to revolutionize how industries consume and manage energy resources [10]. By
optimizing energy usage, reducing waste, integrating renewable sources, and pro-
moting sustainability, AI is instrumental in shaping a greener and more sustainable
future for industrial operations.
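The demand-response behavior described earlier, modifying consumption in response to pricing signals, can be reduced to a simple rule. A minimal sketch in which the price threshold, load list, and field names are all hypothetical:

```python
def respond_to_price(price_per_kwh, loads, price_threshold=0.30):
    """Return the loads to switch off when the real-time price signal
    exceeds the threshold; non-curtailable loads are never selected."""
    if price_per_kwh > price_threshold:
        return [load for load in loads if load["curtailable"]]
    return []

loads = [
    {"name": "HVAC pre-cooling", "curtailable": True},
    {"name": "EV charger", "curtailable": True},
    {"name": "production line", "curtailable": False},
]
for load in respond_to_price(0.42, loads):
    print("curtail:", load["name"])
```

Real demand-response controllers add forecasting and contractual constraints, but this threshold rule is the kernel they elaborate.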
20.6 CONCLUSION
The chapter provides a comprehensive overview of the potential of AI in transforming energy management practices within the Industry 4.0 paradigm. It explores
the key AI techniques, practical implementations, and examples to demonstrate the
positive impact of AI adoption on energy efficiency, cost reduction, and environmen-
tal sustainability. Additionally, it addresses the challenges and considerations that
industries must take into account while implementing AI-driven energy manage-
ment solutions and offers insights into the future of this dynamic field.
REFERENCES
1. Lasi, H., Fettke, P., Kemper, H. G., Feld, T., Hoffmann, M. (2014). Industry 4.0. Business
& Information Systems Engineering, 6, 239–242, ISSN: 2363-7005.
2. Wang, S., Wan, J., Zhang, D., Li, D., Zhang, C. (2016). Towards smart factory for
Industry 4.0: A self-organized multi-agent system with big data based feedback and
coordination. Computer Networks, 101, 158–168, ISSN: 1389-1286.
3. Ashton, K. (2009). That “Internet of Things” thing. RFiD Journal, 22(2009), 97–114.
4. Lycett, M. (2013). ‘Datafication’: Making sense of (big) data in a complex world.
European Journal of Information Systems, 22(4), 381–386, ISSN: 1476-9344.
5. Gantz, J., Reinsel, D. (2012). The Digital Universe in 2020: Big Data, Bigger Digital Shadows, and Biggest Growth in the Far East. United States: IDC.
6. McKinsey & Company. (2018). AI in Energy: Transforming Industry and Optimizing
Resource Use.
7. Vinothkumar, T., Sivaraju, S. S., Thangavelu, A., Srithar, S. (2023). An energy-efficient
and reliable data gathering infrastructure using the internet of things and smart grids,
Automatika, 64(4), 720–732. doi: 10.1080/00051144.2023.2205724
8. Dasi, S., Sandiri, R., Anuradha, T., Santhi Sri, T., Majji, S., Murugan, K. “The State-
of-the-art Energy Management Strategy in Hybrid Electric Vehicles for Real-time
Optimization,” 2023 International Conference on Inventive Computation Technologies
(ICICT), Lalitpur, Nepal, 2023, pp. 1637–1643. doi: 10.1109/ICICT57646.2023.10134496
9. Chong, S. W., Wong, Y. S., Goh, H. H. (2018). Applications of artificial intelligence in
energy management systems: A systematic review. Energies, 11(10), 2743.
10. Karim, R. A., et al. (2020). Internet of things and artificial intelligence techniques
for industrial energy efficiency improvement. International Journal of Advanced
Manufacturing Technology, 107(5–6), 2587–2603. doi: 10.1007/s00170-020-05542-0
21 Deployment of IoT with
AI for Automation
Abith Kizhakkemuri Sunil, Preethi Nanjundan,
and Jossy Paul George
21.1 INTRODUCTION
Automation is the use of machines, computers, or other automated approaches to perform tasks that would otherwise be accomplished by people. Automation has been around for hundreds of years; however, it has become more and more sophisticated in recent years with the development of new technologies such as the Internet of Things (IoT) and artificial intelligence (AI). IoT refers to the network of physical devices that are embedded with sensors, software, and network connectivity, which allows them to gather and exchange data. AI refers to the ability of machines to learn and make decisions without human intervention.
The combination of IoT and AI is creating new possibilities for automation in a wide range of industries, including manufacturing, healthcare, transportation, and logistics. For instance, IoT sensors can be used to collect data on the performance of machines, which can then be analyzed by AI algorithms to identify potential issues and take corrective action before they cause an outage or other disruption.
Moreover, the demands of today’s interconnected world have given rise to several
pressing needs that are effectively addressed by the integration of AI and automation
within the realm of IoT. One such critical need is efficiency enhancement: With the
exponential growth in the number of connected devices, the ability to streamline
operations and optimize resource utilization becomes paramount. AI-driven auto-
mation ensures that tasks are executed precisely and promptly, minimizing waste
and maximizing productivity.
Furthermore, data deluge management is a key concern in the IoT landscape. The
sheer volume of data generated by IoT devices can overwhelm traditional data man-
agement approaches. AI’s capability to swiftly process and interpret vast data sets
enables organizations to extract meaningful insights, facilitating informed decision-
making and strategic planning.
In addition, the advent of IoT has fueled the need for proactive insights and action, as opposed to reactive measures. By harnessing AI algorithms, businesses can anticipate trends and potential issues by analyzing data patterns. This foresight empowers them to take preventative actions, averting disruptions and downtime that could lead to substantial losses.
Predictive maintenance: IoT sensors can be used to gather data on the performance of machines, which can then be analyzed by AI algorithms to identify potential problems before they cause an outage or other disruption [1].
Quality control: IoT sensors can be used to monitor the quality of products or services, and AI algorithms can be used to identify defects and take corrective action [2].
Supply chain management: IoT sensors can be used to track the location of goods and materials in a supply chain, and AI algorithms can be used to optimize the routing and scheduling of deliveries [3].
Safety: IoT sensors can be used to monitor the environment for hazards, and AI algorithms can be used to take corrective action to prevent accidents [4].
These are only some examples of the ways IoT and AI are being used in automation today. As these technologies evolve, we can expect even more innovative and sophisticated automation applications in the future.
The integration of IoT and AI has the capacity to revolutionize the way we automate tasks. By combining the real-time data collection capabilities of IoT with the analytical power of AI, we can create intelligent automation systems that can make decisions and take actions without human intervention. This can result in large improvements in efficiency, productivity, and safety in a wide variety of industries.
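The predictive-maintenance pattern above, analyzing a machine's sensor stream to catch problems before they cause an outage, can be sketched with a simple moving average over vibration readings. The data, window size, and alert limit are illustrative assumptions:

```python
def vibration_alerts(readings, window=3, limit=1.5):
    """Alert when the moving average of vibration over `window` samples
    exceeds `limit`, an early sign of component wear."""
    alerts = []
    for i in range(window, len(readings) + 1):
        avg = sum(readings[i - window:i]) / window
        if avg > limit:
            alerts.append((i - 1, round(avg, 2)))  # (sample index, moving average)
    return alerts

# Hypothetical vibration readings (mm/s) trending upward as a bearing wears.
data = [0.8, 0.9, 1.0, 1.2, 1.6, 1.9, 2.1]
print(vibration_alerts(data))
```

A fielded system would replace the fixed limit with a model trained on historical failures, but the monitor-then-alert structure is the same.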
d = √((x2 − x1)² + (y2 − y1)²)
where d is the distance between the two points, x1 and y1 are the coordinates of the
first point, and x2 and y2 are the coordinates of the second point.
The formula for calculating the distance between two points in three dimensions is as follows:

d = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²)

where d is the distance between the two points, x1, y1, and z1 are the coordinates of the first point, and x2, y2, and z2 are the coordinates of the second point.
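These distance formulas translate directly into Python; a minimal standard-library sketch:

```python
import math

def distance_2d(p1, p2):
    """Euclidean distance between two points (x, y) in the plane."""
    (x1, y1), (x2, y2) = p1, p2
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2)

def distance_3d(p1, p2):
    """Euclidean distance between two points (x, y, z) in space."""
    (x1, y1, z1), (x2, y2, z2) = p1, p2
    return math.sqrt((x2 - x1) ** 2 + (y2 - y1) ** 2 + (z2 - z1) ** 2)

print(distance_2d((0, 0), (3, 4)))        # 5.0
print(distance_3d((0, 0, 0), (1, 2, 2)))  # 3.0
```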
algorithms, each possessing distinct capabilities and limitations. Among the widely
recognized approaches are linear regression and logistic regression, tailored for con-
tinuous and binary predictions, respectively [8]. Decision trees, on the other hand,
construct predictive flowcharts, while support vector machines excel at class sepa-
ration optimization. Through the fusion of decision trees, random forests enhance
prediction accuracy [9]. Deep learning emerges as another prominent category,
harnessing artificial neural networks to yield exceptional outcomes, such as convo-
lutional neural networks facilitating image recognition, recurrent neural networks
powering natural language processing, and generative adversarial networks capable
of data synthesis [10].
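As a toy illustration of how several of these model families can be compared, the following sketch fits a few scikit-learn classifiers to a synthetic dataset. The dataset and parameters are illustrative and unrelated to any experiment in the text:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

# Synthetic binary-classification data stands in for a real sensor dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "SVM": SVC(),
}
for name, model in models.items():
    score = model.fit(X_tr, y_tr).score(X_te, y_te)  # test-set accuracy
    print(f"{name}: {score:.2f}")
```

Holding the train/test split fixed across models is what makes the accuracy figures comparable.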
The journey into AI also entails confronting significant challenges. These chal-
lenges encompass the scarcity of data, the potential for algorithmic bias rooted in
biased training data, the complexity of explaining AI’s decision-making process,
and the exposure of vulnerabilities that can be exploited for malicious purposes.
Despite these obstacles, AI’s trajectory is undoubtedly promising, poised to reshape
industries and elevate our daily experiences. However, the responsible and beneficial
integration of AI necessitates a concerted effort to address these challenges compre-
hensively [11].
Delving deeper, deep learning algorithms represent a subset of machine learning
techniques that function hierarchically, learning intricate patterns from data. These
algorithms have demonstrated remarkable achievements across diverse domains,
including remarkable advancements in tasks like image recognition and natural lan-
guage processing [10].
data and identify potential issues before they arise. This can help to prevent costly downtime and enhance the overall performance of the manufacturing unit [4].
IoT and AI can also be used to make real-time decisions about how to allocate resources. For example, IoT sensors can be used to track the stock levels in a warehouse. AI algorithms can then be used to analyze these data and make decisions about when to order new inventory. This can help to improve the efficiency of the warehouse and reduce the risk of stockouts [13].
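The warehouse example reduces to a classic reorder-point rule; a minimal sketch in which all quantities are hypothetical:

```python
def should_reorder(stock_level, daily_demand, lead_time_days, safety_stock):
    """Reorder when stock falls to the demand expected during the resupply
    lead time plus a safety buffer (the classic reorder point)."""
    reorder_point = daily_demand * lead_time_days + safety_stock
    return stock_level <= reorder_point

# Hypothetical sensor reading: 100 units on hand, 30 units/day demand,
# 3-day resupply lead time, 20-unit safety buffer.
print(should_reorder(100, daily_demand=30, lead_time_days=3, safety_stock=20))  # True
```

An AI-driven version would estimate `daily_demand` from the sensor history instead of treating it as a constant, but the decision structure is unchanged.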
Sensors: Sensors gather information from the physical world. They may be used to measure a range of things, such as temperature, humidity, light levels, and motion.
Networks: Networks transmit information between sensors and the cloud. They can be wired or wireless.
Cloud computing: Cloud computing stores and analyzes data. It also provides the computing power needed to run AI algorithms.
In addition to these three fundamental categories, there are other components that can be used to integrate IoT and AI, including gateways, edge computing devices, and data visualization tools.
The choice of hardware and software components will depend on the specific application. For example, a factory that uses IoT and AI to monitor the condition of a machine will need different components than a company that uses IoT and AI to personalize customer experiences.
It is also crucial to consider the ethical implications of the use of IoT and AI. For example, how will the data be used? Will it be shared with third parties? How will privacy be protected?
By carefully considering these factors, companies can design IoT and AI systems that are both successful and ethical.
scalability to handle increasing volumes of data. There are factors to consider when
planning for scalability, such as data volume, data update frequency, and latency
requirements.
Potential strategies for IoT and AI systems include the following:
• Using cloud computing to provide scalable storage and processing resources [21].
• Using edge computing to bring data processing closer to devices [27].
• Implementing a hybrid approach that integrates cloud and edge computing [28].
The specific scaling strategy that is best for a particular system will depend on the specific needs of the application.
Cloud and edge are two types of computing infrastructure that can be used to manage data in IoT and AI systems. The cloud is a centralized computing infrastructure that provides storage and scalability [29]. The edge is a distributed computing model that sits close to the devices that collect the data [30]. Each model has advantages and disadvantages.
The cloud is scalable and can handle large amounts of data but can be expensive and have high latency. The edge is less scalable, but it can be much more reliable and much more secure. The best deployment will depend on the specific needs of the application.
There are a number of best practices that should be followed when deploying IoT
and AI systems.
These best practices include the following:
FIGURE 21.2 Diagram representing the data flow in an IoT system [38].
• Using strong passwords and encryption: IoT devices are often connected to
the internet, which makes them vulnerable to attack. One of the best ways
to protect IoT devices is to use strong passwords and encryption. Strong
passwords should be at least 12 characters long and should include a mix
of upper and lowercase letters, numbers, and symbols. Encryption can be
used to protect data that is transmitted between IoT devices and between
IoT devices and servers [39].
• Keeping software up to date: IoT devices often have software that needs
to be updated regularly. Software updates often include security patches
that can help to protect IoT devices from attack. It is important to keep IoT
devices up to date with the latest software releases [40].
• Implementing security best practices: There are a number of security best
practices that can be implemented to safeguard IoT and AI systems. These
best practices include the following:
1. Segmenting networks: This involves dividing the network into smaller
networks, which can help to isolate devices and data [41].
2. Using firewalls: Firewalls can be used to control traffic between net-
works and to block unauthorized access [42].
3. Using intrusion detection systems (IDS): IDS can be used to detect
malicious activity on the network [43].
4. Using intrusion prevention systems (IPS): IPS can be used to block
malicious activity on the network [44].
• Educating users about security risks: It is important to educate users about
the security risks associated with IoT and AI systems. Users should be
aware of the importance of using strong passwords and encryption, and
they should be aware of the risks of clicking on malicious links or opening
attachments from unknown senders [40].
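The password guidance in the first bullet above can be checked mechanically. A minimal sketch whose rule set simply mirrors the text (at least 12 characters with a mix of upper- and lowercase letters, numbers, and symbols):

```python
import string

def is_strong_password(password):
    """Check the guidance above: length >= 12 plus at least one uppercase
    letter, one lowercase letter, one digit, and one symbol."""
    return (
        len(password) >= 12
        and any(c.isupper() for c in password)
        and any(c.islower() for c in password)
        and any(c.isdigit() for c in password)
        and any(c in string.punctuation for c in password)
    )

print(is_strong_password("Correct4Horse!"))  # True
print(is_strong_password("short1!A"))        # False (too short)
```

Such a check belongs in device provisioning tooling; on its own it does not replace the encryption and patching practices listed alongside it.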
21.8 CONCLUSION
The rapid evolution of IoT and AI holds great potential to revolutionize automation. Their combined force forms a powerful synergy capable of automating tasks, enhancing efficiency, and enabling superior decision-making. Currently in its nascent stages, the integration of IoT and AI finds applications across numerous sectors such as manufacturing, healthcare, and agriculture. The future of automation is bright, with IoT and AI assuming pivotal roles in shaping it. Despite its intricacy, harmonizing IoT and AI remains vital for forward-looking organizations. Challenges such as security, privacy, and ethics must be tackled, yet the potential benefits of IoT- and AI-driven automation are substantial. Organizations that invest in this integration position themselves advantageously for the times ahead.
The horizon of IoT and AI automation holds boundless possibilities, encompassing several noteworthy trends. These include the rise of 5G networks, poised to usher in faster data transfer speeds and heightened connection reliability. Concurrently, the evolution of modern AI algorithms promises greater power and performance. Furthering this panorama is the growth of edge computing, which positions AI closer to the data it needs to process. An additional trend of importance is the surging adoption of blockchain technology, effectively bolstering data and transaction security. Together, these interconnected trends shape a future in which IoT and AI continue to evolve, intertwining technological advancements for transformative outcomes.
Beyond IoT and AI, a host of other emerging technologies are playing a pivotal role in shaping the future of automation. These include robotics, which is becoming increasingly crucial to various industries; virtual reality and augmented reality, which are transforming how we interact with digital environments; quantum computing, which holds immense promise for solving complex problems; and natural language processing, enabling computer systems to understand and communicate in human language. As these technologies continue to advance, they collectively contribute to a transformative landscape in which automation is driven by a diverse array of cutting-edge innovations.
The fusion of IoT and AI automation brings forth a number of ethical concerns, including the danger of bias within AI algorithms and the implications of automation
REFERENCES
1. D. U. Mishra and S. Sharma, Revolutionizing Industrial Automation Through the
Convergence of Artificial Intelligence and the Internet of Things. IGI Global, 2022.
Available: https://play.google.com/store/books/details?id=rw-bEAAAQBAJ
2. R. Anandan, S. Gopalakrishnan, S. Pal, and N. Zaman, Industrial Internet of Things
(IIoT): Intelligent Analytics for Predictive Maintenance. John Wiley & Sons, 2022.
Available: https://books.google.com/books/about/Industrial_Internet_of_Things_IIoT.
html?hl=&id=E8p6EAAAQBAJ
3. L. Zhang, L. Yao, Z. Lu, and H. Yu, “Current Status of Quality Control in Screening
Esophagogastroduodenoscopy and the Emerging Role of Artificial Intelligence,”
Dig. Endosc., Jul. 2023, doi: 10.1111/den.14649. Available: http://dx.doi.org/10.1111/
den.14649
4. P. Chawla, A. Kumar, A. Nayyar, and M. Naved, Blockchain, IoT, and AI Technologies
for Supply Chain Management. CRC Press, 2023. Available: https://play.google.com/
store/books/details?id=fDaxEAAAQBAJ
5. A. Pasumpon Pandian, T. Senjyu, S. M. S. Islam, and H. Wang, Proceeding of the
International Conference on Computer Networks, Big Data and IoT (ICCBI - 2018).
Springer, 2019. Available: https://books.google.com/books/about/Proceeding_of_the_
International_Conferen.html?hl=&id=ZYvrxQEACAAJ
6. A. Grizhnevich, “IoT Architecture: Building Blocks and How They Work,” ScienceSoft,
Apr. 01, 2018. Available: https://www.scnsoft.com/blog/iot-architecture-in-a-nutshell-
and-how-it-works. [Accessed: Aug. 25, 2023]
7. G. Strang, Linear Algebra and Learning from Data. Wellesley-Cambridge Press, 2019.
Available: https://books.google.com/books/about/Linear_Algebra_and_Learning_from_
Data.html?hl=&id=L0Y_wQEACAAJ
8. A. K. Dubey, A. Kumar, S. Rakesh Kumar, N. Gayathri, and P. Das, AI and IoT-
Based Intelligent Automation in Robotics. John Wiley & Sons, 2021. Available:
https://books.google.com/books/about/AI_and_IoT_Based_Intelligent_Automation.
html?hl=&id=NSAlEAAAQBAJ
9. F. Khan, R. Alturki, M. A. Rehman, S. Mastorakis, I. Razzak, and S. T. Shah,
“Trustworthy and Reliable Deep Learning-based Cyberattack Detection in Industrial
IoT,” IEEE Trans. Industr. Inform., vol. 19, no. 1, pp. 1030–1038, Jan. 2023, doi:
10.1109/tii.2022.3190352. Available: http://dx.doi.org/10.1109/tii.2022.3190352
10. I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. MIT Press, 2016. Available:
https://play.google.com/store/books/details?id=omivDQAAQBAJ
11. T. Ahamed, IoT and AI in Agriculture: Self-Sufficiency in Food Production to Achieve
Society 5.0 and SDG’s Globally. Springer Nature, 2023. Available: https://play.google.
com/store/books/details?id=AY-4EAAAQBAJ
12. P. Friess, Digitising the Industry – Internet of Things Connecting the Physical, Digital
and Virtual Worlds. River Publishers, 2016. Available: https://books.google.com/books/
about/Digitising_the_Industry_Internet_of_Thin.html?hl=&id=nYktDwAAQBAJ
13. P. Fraga-Lamas, S. I. Lopes, and T. M. Fernández-Caramés, “Green IoT and Edge AI
as Key Technological Enablers for a Sustainable Digital Transition towards a Smart
Circular Economy: An Industry 5.0 Use Case,” Sensors, vol. 21, no. 17, Aug. 2021, doi:
10.3390/s21175745. Available: http://dx.doi.org/10.3390/s21175745
22.1 INTRODUCTION
The field of artificial intelligence (AI) known as machine learning (ML) focuses on
developing algorithms, constructing models, and implementing decisions based on
datasets fed into the system [1]. Supervised learning and unsupervised learning are
the two main categories of ML algorithms. In supervised learning, algorithms learn from a labeled training dataset in which each input is paired with a known output, and the trained model is then used to predict outputs for new data. In unsupervised learning, the input data is uncategorized: the algorithm requires no prior knowledge about the data and does not need the number of classes to be specified [2]. Many different ML techniques exist, such as decision tree
(DT), support vector machine (SVM), k-nearest neighbor (KNN), Gaussian Naive
Bayes (GNB) classifier, and convolutional neural network (CNN) [3].
COVID-19 is currently the most significant threat that the world faces, surpassing even the most catastrophic natural calamities [4]. Even though more than a year has passed, a resolution to the problem is not yet in sight, and there are still very few strategies to contain the spread in accordance with WHO (World Health Organization) directives [5, 6].
The research aims to determine whether or not people are wearing face masks
(FMs) at a public gathering or event. To achieve the goal, ML algorithms such as DT,
SVM, KNN, GNB classifier, and CNN were used in the research work. As an input
dataset, we use an image that contains both some masked and non-masked individu-
als. Among the steps that have to be undertaken in order to complete this research
work and achieve its goal are the pre-processing, the data augmentation, the training,
the testing, and the image segmentation [7]. After completing the procedures, a seg-
mented image of the input dataset consisting of masked and non-masked individu-
als is obtained using the CNN algorithm [8, 9]. Then, the comparative analysis is
performed using ML parameters, which gives the results of those who conceal their
identity with a mask and those who do not, in percentages. The goal of this research
is to develop a model capable of identifying individuals who do not wear masks in
public. In this research project, which addresses the crisis caused by the COVID-19
virus, the main goal is to follow the recommendations of the WHO website to pre-
vent further transmission of the virus [10, 11]. Many of the existing works deal with
face recognition, but very few articles address the performance parameters of ML
[12–18]. The objective of this work is to study a variety of ML models for the detec-
tion of FMs. According to analysis model comparisons, we concluded that the CNN
model is best per performance comparison considering parameters such as accuracy,
recalls, precision, and F1 score. We have used pickle, cv2, NumPy, perceptron itera-
tive learning (PIL), and sklearn libraries to analyze the performance parameters and
compare the various ML models.
The most important contributions that this research paper has made are outlined
in the following:
This work is organized as follows: The first section describes the existing work. Learning models, including DT, SVM, KNN, GNB classifier, and CNN, are discussed in the second section. In the third section, we present the model of the suggested system. The fourth section discusses the model layout process as well as the comparison of parameters across the different learning algorithms.
in the identification of individuals whose FMs are not detected by AI and ML algorithms. The research study utilized well-known ML methods to detect whether a person is wearing a FM. The code was executed on Google Colab.
Training and testing are the two primary phases that make up the supervised learning process. During the training phase, the supervised algorithm creates the model, and later, during the testing phase, the same model is used to make predictions about the output.
22.4.4 CNN Model
There are three layers in a CNN: the input layer, the hidden layer, and the output layer [6]. CNNs perform convolutions by using layers within the hidden layers. A dataset of photos is used to train the model, and the photos are divided into two classes, those with masks and those without masks, based on the information in the dataset. Using the collected dataset, the model is trained and then deployed. The stages of the CNN, shown in Figure 22.1, are labeled input, convolution, pooling, flattening, and softmax.
The FM research is done with a dataset collected from the popular Kaggle website. The dataset had approximately 10,020 pictures, of which about 2004 were used for testing and about 8016 for training. Supervised ML proceeds through the stages of training and development. Training: after performing pre-processing on the dataset pictures, we trained a model by feeding it into a neural network. Once the model has been trained, we may proceed with loading the FM detector, performing facial detection, and then classifying faces into the categories of masked and unmasked. A FM detection model layout with detailed training and development has been proposed, as shown in Figure 22.2.
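The 80/20 split described above (about 8016 training and 2004 test images out of 10,020) can be sketched with a simple shuffled index split; only the 0.8 ratio and the seed are assumptions:

```python
import random

def train_test_split_indices(n_samples, train_ratio=0.8, seed=42):
    """Shuffle sample indices and split them into training and test sets."""
    indices = list(range(n_samples))
    random.Random(seed).shuffle(indices)  # deterministic shuffle for reproducibility
    n_train = int(n_samples * train_ratio)
    return indices[:n_train], indices[n_train:]

train_idx, test_idx = train_test_split_indices(10020)
print(len(train_idx), len(test_idx))  # 8016 2004
```

In practice one would use a stratified split so that masked and unmasked images appear in both sets in similar proportions.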
In mask detection, we have used the CNN with the 10,020-image dataset from Kaggle. There are a number of methods that can be used to achieve the goal of FM detection, as described in Algorithm 22.1. Python programming is required at each stage of the process. The main steps are importing libraries, importing an image dataset, labeling the images, resizing the images, plotting the results, invoking the model and applying the various CNN layers, fitting the model, predicting with the model, performing an accuracy test, and evaluating performance parameters such as accuracy, precision, F1 score, and recall. The CNN model is run in Google Colab on the datasets presented to it to make predictions for both individuals wearing masks and those not wearing masks, along with the accuracy of the prediction on a percentage scale.
1. Import Libraries.
2. Import dataset of Images.
3. Labeling.
4. Image resize.
5. Plotting.
A Comparison of the Performance of Different Machine Learning Algorithms 387
22.5.1 Design Requirement
Python environments such as Google Colab should be able to read the models and dataset first. The collected dataset, labeled "with mask" or "without mask", must be converted to a 2D scale so that it can be used on any computer system (see Figure 22.3). All models, such as DT, SVM, KNN, GNB classifier, and CNN, are expected to display the ML parameters. The model is expected to predict "with mask" and "without mask" with a high accuracy value. To incorporate the dataset into Google Colab, we use an application programming interface (API) to create the dataset. JupyterLab or Google Colab is used to run all the ML algorithms.
TABLE 22.1
Standard Equation for Parameter Calculation

Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
F1 score = 2 × (PRECISION × RECALL) / (PRECISION + RECALL)
Accuracy = (TP + TN) / (TP + FP + FN + TN)
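The four equations in Table 22.1 can be computed directly from the confusion-matrix counts; a plain-Python sketch (the example counts are hypothetical, not from the chapter's experiments):

```python
def classification_metrics(tp, fp, fn, tn):
    """Precision, recall, F1 score, and accuracy from confusion-matrix counts,
    following the equations in Table 22.1."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * (precision * recall) / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return precision, recall, f1, accuracy

# Hypothetical counts for illustration only.
p, r, f1, acc = classification_metrics(tp=50, fp=10, fn=10, tn=30)
print(round(p, 3), round(r, 3), round(f1, 3), round(acc, 3))  # 0.833 0.833 0.833 0.8
```

sklearn's `precision_score`, `recall_score`, `f1_score`, and `accuracy_score` implement the same definitions and were used for the comparisons in this chapter.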
TABLE 22.2
Performance Parameter Comparison

Performance Parameters                   DT      SVM     KNN     GNB     CNN
Accuracy (%)                             81.63   95.11   75.99   78.24   98.103
Precision (%)                            81.4    96      95.41   77.67   97.25
Recall (%)                               81.72   96      95.41   76.205  99.001
F1 Score (%)                             81.72   96      76      78.20   98.12
% Improvement of accuracy w.r.t. CNN     16.14   36.86   23.24   21.13   —
% Improvement of precision w.r.t. CNN    16.56   3.030   23.43   21.31   —
% Improvement of recall w.r.t. CNN       15.87   2.69    23.17   21.04   —
% Improvement of F1 score w.r.t. CNN     16.27   2.69    23.17   21.04   —

(GNB = Gaussian Naive Bayes classifier.)
It takes supervised ML models and a lot of training data to recognize faces with and without masks. All input photos are converted to 2D images, and the original and augmented data are returned. The input images are fed through the pickle, cv2, and sklearn image data generators to train the supervised ML models. Figure 22.9 shows that the val_loss value is very large when the number of epochs is small.
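The epoch/val_loss relationship shown in the figures is commonly exploited through early stopping: halt training once val_loss stops improving. A minimal framework-free sketch, in which the patience value is an assumption:

```python
def early_stop_epoch(val_losses, patience=2):
    """Return the epoch at which training would stop: when val_loss has not
    improved for `patience` consecutive epochs (or the last epoch otherwise)."""
    best = float("inf")
    since_best = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:
            best, since_best = loss, 0  # new best val_loss: reset the counter
        else:
            since_best += 1
            if since_best >= patience:
                return epoch
    return len(val_losses) - 1

# Hypothetical val_loss curve: improves, then plateaus.
print(early_stop_epoch([0.9, 0.6, 0.4, 0.41, 0.43, 0.45]))  # 4
```

Keras offers the same behavior via its `EarlyStopping` callback monitoring `val_loss`.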
FIGURE 22.7 Visualization of the relationship between epoch value and loss value using
the CNN model.
FIGURE 22.8 Visualization of the relationship between epoch value and accuracy value
using the CNN model.
FIGURE 22.9 Visualization of the relationship between epoch value and val_loss value
using the CNN model.
FIGURE 22.10 Visualization of the relationship between epoch value and val_acc value
using the CNN model.
FIGURE 22.11 Successful recognition of face mask using the KNN model.
FIGURE 22.14 Visualization of confusion matrix: (a) CNN, (b) KNN, (c) GaussianNB,
(d) DT model, and (e) SVM. (Continued)
FIGURE 22.15 Recognition of face mask and CNN model summary report.
396 AI-Driven IoT Systems for Industry 4.0
22.8 CONCLUSIONS
In the last few years, COVID-19 has spread widely across the globe, and controlling it is essential if we are to return to our everyday lives. Used correctly, the results of this research can help identify people who are not wearing masks. To train the model effectively, about 80% of the data was used for training and 20% for testing. Applied across a wide range of public gatherings, such as congested areas, this work makes it much easier to identify those violating mask requirements. In tests performed on the 10,020-image dataset, the proposed CNN model achieved an accuracy of 98.57%. This chapter compares the performance parameters of different ML models, namely DT, SVM, KNN, the GNB classifier, and CNN, with its main contribution being the identification of the best-performing model. A further contribution is the use of libraries such as cv2, pickle, NumPy, and sklearn to plot all confusion matrices and to measure the relationship between epoch value and accuracy, val_loss, and val_accuracy. Future work will extend this approach to cloudy and foggy environments, where face detection is expected to be more difficult.
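The confusion matrices mentioned above can be assembled without any plotting library. A small NumPy sketch for the binary mask/no-mask case, using toy labels rather than the chapter's data:

```python
import numpy as np

def confusion_matrix(y_true, y_pred):
    """2x2 confusion matrix: rows = true class, columns = predicted class."""
    cm = np.zeros((2, 2), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

# Toy labels for illustration (1 = mask, 0 = no mask).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])

cm = confusion_matrix(y_true, y_pred)
print(cm)
```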
REFERENCES
1. Zhou, Z.H., 2021. Machine learning. Springer Nature.
2. Kumar, B.A. and Misra, N.K., 2024. Masked face age and gender identification using
CAFFE-modified MobileNetV2 on photo and real-time video images by transfer learn-
ing and deep learning techniques. Expert Systems with Applications, 246, p. 123179.
3. Loey, M., Manogaran, G., Taha, M.H.N. and Khalifa, N.E.M., 2021. A hybrid deep
transfer learning model with machine learning methods for face mask detection in the
era of the COVID-19 pandemic. Measurement, 167, p. 108288.
4. Vijitkunsawat, W. and Chantngarm, P., 2020, October. Study of the performance
of machine learning algorithms for face mask detection. In 2020-5th International
Conference on Information Technology (InCIT) (pp. 39–43). IEEE.
5. Sethi, S., Kathuria, M. and Kaushik, T., 2021. Face mask detection using deep learning:
An approach to reduce risk of Coronavirus spread. Journal of Biomedical Informatics,
120, p. 103848.
6. Pooja, S. and Preeti, S., 2021. Face mask detection using AI. Predictive and Preventive
Measures for Covid-19 Pandemic, pp. 293–305.
7. Kodali, R.K. and Dhanekula, R., 2021, January. Face mask detection using deep learn-
ing. In 2021 International Conference on Computer Communication and Informatics
(ICCCI) (pp. 1–5). IEEE.
8. Asif, S., Wenhui, Y., Tao, Y., Jinhai, S. and Amjad, K., 2021, May. Real time face mask
detection system using transfer learning with machine learning method in the era of
COVID-19 pandemic. In 2021 4th International Conference on Artificial Intelligence
and Big Data (ICAIBD) (pp. 70–75). IEEE.
9. Boulila, W., Alzahem, A., Almoudi, A., Afifi, M., Alturki, I. and Driss, M., 2021,
December. A deep learning-based approach for real-time facemask detection. In 2021
20th IEEE International Conference on Machine Learning and Applications (ICMLA)
(pp. 1478–1481). IEEE.
10. Sen, S. and Sawant, K., 2021, February. Face mask detection for covid_19 pandemic
using PyTorch in deep learning. In IOP Conference Series: Materials Science and
Engineering (Vol. 1070, No. 1, p. 012061). IOP Publishing.
11. Dondo, D.G., Redolfi, J.A., Araguás, R.G. and Garcia, D., 2021. Application of deep-
learning methods to real time face mask detection. IEEE Latin America Transactions,
19(6), pp. 994–1001.
12. Mbunge, E., Simelane, S., Fashoto, S.G., Akinnuwesi, B. and Metfula, A.S., 2021.
Application of deep learning and machine learning models to detect COVID-19 face
masks-A review. Sustainable Operations and Computers, 2, pp. 235–245.
13. Kaur, G., Sinha, R., Tiwari, P.K., Yadav, S.K., Pandey, P., Raj, R., Vashisth, A. and
Rakhra, M., 2022. Face mask recognition system using CNN model. Neuroscience
Informatics, 2(3), p. 100035.
14. Gupta, P., Saxena, N., Sharma, M. and Tripathi, J., 2018. Deep neural network for
human face recognition. International Journal of Engineering and Manufacturing
(IJEM), 8(1), pp. 63–71.
15. Bhojane, K.J. and Thorat, S.S., 2018. Face recognition based car ignition and secu-
rity system. International Research Journal of Engineering and Technology (IRJET),
5(05), pp. 3565–3668.
16. Rahman, M.M., Manik, M.M.H., Islam, M.M., Mahmud, S. and Kim, J.H., 2020,
September. An automated system to limit COVID-19 using facial mask detection in
smart city network. In 2020 IEEE International IOT, Electronics and Mechatronics
Conference (IEMTRONICS) (pp. 1–5). IEEE.
17. Bhuiyan, M.R., Khushbu, S.A. and Islam, M.S., 2020, July. A deep learning based
assistive system to classify COVID-19 face mask for human safety with YOLOv3. In
2020 11th International Conference on Computing, Communication and Networking
Technologies (ICCCNT) (pp. 1–5). IEEE.
18. Goyal, H., Sidana, K., Singh, C., Jain, A. and Jindal, S., 2022. A real time face
mask detection system using convolutional neural network. Multimedia Tools and
Applications, 81(11), pp. 14999–15015.
Index

A
Accuracy, 24
Adaptive manufacturing, 126
Adaptive supply chain management, 264
Advanced technologies, 94
AI and ML, 71
AI integration, 46
AI optimization, 352
AI-based project, 24
AI-driven energy management, 353
AI-driven IoT systems for Industry 4.0, 26
Algorithmic bias, 199
Algorithms, 39
Anaconda, 28
Android software, 39
Anomaly detection, 128
Application cooperation, 247
Artificial Intelligence (AI), 42
Artificial Neural Networks (ANNs), 206
Asset optimization, 267
Augmented reality (AR), 50
Autonomic nervous system, 289
Autonomous robots, 189
Autonomous systems, 71
Availability, 62

B
Bias in AI algorithms, 24, 376
Big data, 54
Big data analysis, 97
Big data analytics, 70
Blockchain, 80
Blockchain deployment, 344
Blockchain technology, 176
Bosch, 128

C
Cartography, 38
Cervical cell images, 323
Change management, 76
Chronic stress, 291
Cloud and edge computing, 72
Cloud computing, 42
Cloud manufacturing (CMfg), 275
Cognitive automation, 43
Cognitive systems, 260
Collaboration, 52
Collaboration and communication, 89
Comparative analysis, 27
Complex systems, 51
Computer vision, 24
Convolutional Neural Network(s) (CNNs), 51, 382
Cost savings, 83
Covid-19, 99
CPU, 25
Customer demands, 256
Cyber-physical system(s) (CPS), 53, 82
Cybersecurity, 75
Cybersecurity and data privacy challenges, 135

D
Data analysis, 54
Data analytics, 70
Data classification, 303
Data collection, 56
Data deluge management, 364
Data distribution, 226
Data flow, 149
Data fusion, 179
Data governance, 137
Data integration, 85
Data interoperability, 342
Data management, 70
Data mining, 175
Data preprocessing, 77
Data processing, 72
Data protection, 91
Data quality, 92
Data security, 77
Data storage, 77
Data synchronization, 244
Data transformation, 194
Data-driven insights, 81
Decentralized, 174
Decentralized decision-making, 42
Decision boundary, 299
Decision support systems, 185
Decision tree (DT), 382
Decision-making, 51
Deep learning, 56
Deep learning algorithms, 63
Deep learning models, 206
Deep learning techniques, 59
Demand forecasting, 78
Demand management, 261
Demand response, 352

H
Haar cascade, 58
Hand detection, 23
Hand gesture recognition, 23
Health concerns, 291
Healthcare, 53
Healthcare monitoring, 83
Holistic skill development, 151
Human dignity, 53

M
Machine control, 131
Machine interaction, 72
Machine learning, 317
Machine learning algorithms, 367
Machine learning techniques, 368
Machinery failures, 367

U
Uncertainty, 60
Unsupervised learning, 382

W
Wireless communication, 319
Wireless sensor networks, 346
Workflow, 355
Workforce displacement, 313