
sensors

Article
Federated Learning-Oriented Edge Computing Framework for
the IIoT
Xianhui Liu, Xianghu Dong * , Ning Jia and Weidong Zhao

CAD Research Center, Tongji University, Shanghai 201800, China; [email protected] (X.L.);
[email protected] (N.J.); [email protected] (W.Z.)
* Correspondence: [email protected]; Tel.: +86-191-4564-8583

Abstract: With the maturity of artificial intelligence (AI) technology, applications of AI in edge
computing will greatly promote the development of industrial technology. However, the existing
studies on the edge computing framework for the Industrial Internet of Things (IIoT) still face several
challenges, such as deep hardware and software coupling, diverse protocols, difficult deployment of
AI models, insufficient computing capabilities of edge devices, and sensitivity to delay and energy
consumption. To solve the above problems, this paper proposes a software-defined AI-oriented
three-layer IIoT edge computing framework and presents the design and implementation of an
AI-oriented edge computing system, aiming to support device access, enable the acceptance and
deployment of AI models from the cloud, and allow the whole process from data acquisition to model
training to be completed at the edge. In addition, this paper proposes a time series-based method
for device selection and computation offloading in the federated learning process, which selectively
offloads the tasks of inefficient nodes to the edge computing center to reduce the training delay and
energy consumption. Finally, experiments carried out to verify the feasibility and effectiveness of the
proposed method are reported. The model training time with the proposed method is generally 30%
to 50% less than that with the random device selection method, and the training energy consumption
under the proposed method is generally 35% to 55% less.

Keywords: industrial internet of things; edge computing; artificial intelligence; federated learning

1. Introduction

With the advent of the information age and the proposal of the concept of intelligent manufacturing, the Industrial Internet of Things (IIoT) has become a popular focus of current research on both information technology and industrial technology. The IIoT refers to the concept of enhancing and optimizing industrial processes and applications using Internet and IoT technologies. It involves connecting sensors, devices, and other physical objects to the internet to enable data collection, monitoring, analysis, and automated control. This technology aims to improve production efficiency, reduce costs, and enhance the reliability and safety of industrial processes. The IIoT finds wide applications across various sectors, including manufacturing, energy, transportation, agriculture, and more, significantly transforming traditional industrial production methods [1]. By simulating human neural networks, artificial intelligence (AI) technology can play a role in various practical production application scenarios, including but not limited to image recognition, natural language processing, and decision support [2]. In the Internet of Things (IoT) context, various machine learning algorithms can make effective use of the large amounts of data generated by a large number of devices [3], while in the field of industrial production, AI can also make powerful contributions to tasks such as product defect detection, intelligent recognition and sorting, intelligent visual guidance and data analysis [4]. Thus, embedding AI technology into the IIoT is an important research direction in this field. Moreover, federated learning, as a method for training AI models
in distributed systems, can effectively address the challenges that arise when training AI
models in the IIoT [5].
In recent years, cloud-based computing has emerged as an open platform for the
training and deployment of IIoT AI models by virtue of its dynamic expansion capabilities,
flexible deployment capabilities, low cost and high efficiency [6]. However, the cloud computing scheme has the following two problems. First, training and applying inference models in the cloud inevitably introduces data transmission delays, which presents a major challenge for time-sensitive IIoT applications [7]. Second, some
enterprises have high requirements for data privacy and security, and cloud computing
solutions inevitably face problems of data security [8]. Therefore, to overcome these
problems of transmission delay and data security, the edge computing scheme has been
proposed [9]. In the IIoT application scenario, edge computing can effectively alleviate the
above shortcomings of cloud computing. Overall, the relationship among edge computing,
cloud computing, and federated learning can be summarized as follows. Edge computing
handles and preprocesses data, reducing the need to transmit data to the cloud, thus saving
bandwidth and reducing latency. Cloud computing complements edge computing by
providing storage and computational resources, supporting large-scale data processing
and analysis. Federated learning leverages interactions between edge devices and center
servers to enable collaborative learning across distributed data sources, thereby enhancing
global model performance. However, existing AI-oriented edge computing frameworks generally suffer from the problems listed below. Our research motivation is to provide a general solution to these problems and limitations, together with methods and frameworks for the rapid deployment of AI in the IIoT and for the selection of federated hierarchical learning devices in energy- and delay-sensitive scenarios. First, the high coupling
between hardware and software in IIoT edge devices poses a challenge to the existing
frameworks. The existing edge computing frameworks need a software-defined edge
computing architecture that does not rely on specific hardware architectures. Second, the
deployment and delivery methods of AI models, as well as their interactions with the device
side, are complex steps in the context of IIoT applications. Third, the computing power
of IIoT devices is generally insufficient, meaning that the timely and effective completion
of the training of local models cannot be ensured, potentially making it difficult to meet
real-time industrial needs. Fourth, IIoT applications are typically highly sensitive to issues
of delay and energy consumption, and further study is needed on how to reduce delays
and computing resource consumption in the federated learning process.
To address the above challenges, this paper proposes a federated learning-oriented
edge computing framework for the IIoT. First, we propose an edge computing framework
in which edge computing nodes and devices closely cooperate for access and interaction.
By virtue of the software-defined nature of this framework, the hardware and software
of the edge gateway nodes are decoupled, and the access protocols of the devices do not
depend on the specific device type or the device architecture of the edge computing nodes.
This framework can support the rapid delivery and deployment of AI models. On the basis
of this framework, we present the design and implementation of an edge computing system
that supports the realization of the whole process of data collection from the device side to
the edge and AI model deployment from the cloud to the edge. Then, we propose a time
series-based device selection and computation offloading method for use in the federated
learning process. This method enables the purposeful selection of edge computing nodes
and the partial offloading of computing tasks from the device side to the edge to address
the resource allocation problem in the presence of an edge computing center. Through this
method, the delay and energy consumption of the system in the federated learning process
are optimized. Finally, we report experiments conducted to verify the feasibility of the
proposed architecture and the superiority of the proposed method in terms of delay and
energy consumption for the case of training a multilayer perceptron (MLP) network on the
MNIST dataset.
The contributions of this paper are as follows:

(1) A software-defined AI-oriented three-layer IIoT edge computing framework is pro-


posed to overcome the challenges of deep hardware and software coupling, diverse
access protocols, and difficult deployment of AI models in the IIoT.
(2) An AI-oriented edge computing system based on a microservice architecture is de-
signed and implemented. This system supports the realization of the whole pro-
cess from device-to-edge data collection and processing to AI model distribution
and deployment.
(3) A time series-based method of device selection and computation offloading for use in
the federated learning process is proposed for IIoT edge computing to reduce both
training time delay and energy consumption, thereby reducing long-term costs.
(4) Experiments are designed and implemented to evaluate the proposed method. Through
a series of experimental analyses, we verify the feasibility and superiority of the pro-
posed method.
The rest of the paper is organized as follows. Related works are reviewed in Section 2. In Section 3, the details of the proposed scheme and algorithm are provided. The experimental results for the proposed scheme and algorithm are discussed in Section 4. Section 5 discusses the advantages of the proposed scheme in terms of time and energy consumption, as well as its limitations.

2. Related Research
This section summarizes the research on the IIoT applications of edge computing and
AI models.

2.1. Edge Computing Architecture for the IIoT


IIoT edge computing was proposed to solve the problems of data security and trans-
mission delay faced by cloud computing in industrial production. Sha et al. (2020) [10]
proposed a generic edge-centric IoT architecture, explaining how the edge layer interacts
with IoT application users, the cloud, and IoT end devices. Sodhro et al. (2020) [11] pro-
posed an AI-based edge computing platform architecture that consists of adaptable edge
nodes, adaptable network nodes and adaptable application nodes. The edge nodes are
responsible for collecting and analyzing data using AI algorithms, the network nodes are
responsible for obtaining node information and for transmission through the network, and
the application nodes are responsible for running IIoT applications, such as real-time moni-
toring and error diagnosis. Zhao et al. (2022) [12] proposed an edge computing network
using digital twin technology. With the help of digital twins, this edge computing network
can connect IIoT devices more efficiently. Mai et al. (2021) [13] considered the limited
computing resources of mobile edge computing and discussed a method of decomposing
and distributing the execution of critical tasks among network devices.
Nguyen et al. (2021) [14] applied AI to model distributed networks to improve trans-
mission efficiency. Mwase et al. (2022) [15] proposed a new AI strategy based on distributed
machine learning (DML), which involves training AI models completely at the edge, and
discussed optimizations to reduce both the size of the training data to be transmitted
and the transmission scale, satisfying the needs of edge deployment and focusing on the
processing of low-performance devices. Torres-Charles et al. (2022) [16] focused on col-
laborative sharing in the cloud at the edge and proposed a new cloud–edge computing
architecture layout from the perspective of cloud–edge integration. Kok et al. (2022) [17]
proposed an edge computing framework with the help of AI. They used AI algorithms to
solve the communication, computing, caching and control (4C) problems in an IoT network
and defined the network model and related mathematical formulas under their framework.
In accordance with the challenges faced by different edge computing application scenar-
ios, corresponding potential and feasible AI solutions were given. Zhao et al. (2023) [18]
focused on the flexibility, security and real-time performance of the IIoT framework, pro-
posed a new three-layer software-defined IIoT control architecture, and proposed the
use of decentralized control devices (DCDs) as device execution units to control indus-
trial devices and perform IIoT tasks. They also studied an application scheduling algorithm for use in time-sensitive scenarios and proposed a computation-based decentralized network intelligent IIoT application deployment (DNAI2) problem and its solution.
Kumar et al. (2023) [19] studied the problem of multidimensional data processing in the
cloud–edge framework of the IIoT and provided a method for processing IIoT data at the
edge. Zhang et al. (2023) [20] considered the trust problem among complex heterogeneous
devices in the IIoT. The aforementioned studies have examined industrial IoT edge com-
puting architectures from various perspectives, with a common focus on transmission rates
of network models, network nodes at edge computing centers, or optimized deployment
of applications on edge devices.

2.2. Training AI Models in the IIoT and Federated Learning


Studies on the applications of AI models in the IIoT focus on the efficiency and
security of the AI model application process [21] and methods for training and deploying
AI deep learning models in IIoT architectures. These methods include cloud training [22],
edge training [23], cloud–edge collaborative training [24], and smart device-end model
training [25].
Bellavista et al. (2020) [26] proposed a three-layer computing architecture in which
on-site, edge, and cloud resources are used to run AI models in a collaborative manner. In
this architecture, on-site data are input into an AI model, and model training is performed
at the edge. The trained AI model is then transmitted to the cloud, and the cloud distributes
it to each edge server that needs to use the AI inference model. Sun et al. (2020) [27] also
proposed an AI computing framework based on the IIoT. In this architecture, edge servers
and cloud servers work together to provide services for IIoT AI applications. AI models
are trained in the cloud, and the trained AI reasoning models are deployed at the edge
to perform inference on actual data. McClellan et al. (2020) [28] proposed a method of
applying deep learning in mobile edge computing (MEC) using 5G technology. With the
fast data transmission of 5G networks, deep learning models can be run at the mobile edge,
and the data can be stored.
Federated learning was initially proposed by Google in 2016 as a privacy-preserving
distributed machine learning paradigm [29]. When integrated with cloud computing
and edge computing, federated learning efficiently utilizes the computational power of
dispersed terminal devices for parallel computation. It minimizes intermediate results
synchronization to enhance the efficiency of distributed machine learning [30]. Below are
some recent studies on federated learning focusing on model performance and privacy
protection. Jiang et al. (2024) [31] integrated federated learning and split learning into
satellite-terrestrial integrated networks (STINs), introducing advanced frameworks such
as split-then-federated learning and FedSL-LSTM. Their approach addresses privacy and
efficiency concerns in B5G/6G mobile communication, demonstrating superior perfor-
mance in electricity theft detection. Parra-Ullauri et al. (2024) [32] introduced kubeFlower,
a Kubernetes (K8s) operator for federated learning. It ensures privacy through secure
resource isolation and integrates differential privacy via P3-VC. Tested on both cloud and
edge nodes, their approach showcases robust privacy preservation in federated learning
environments. Mhaisen et al. (2022) [33] studied a hierarchical federated learning system
with edge training to optimize the selection problem of edge users. With the help of this
system, AI models can achieve faster convergence and better accuracy during the training
process. Baccour et al. (2022) [34] proposed deploying a federated learning training frame-
work, a decentralized reinforcement learning training framework, and an active learning
training framework in a decentralized network and studied algorithm models based on
the above three frameworks. Moreover, the methods of model inference in a decentralized
network for corresponding models trained based on the above three frameworks were
studied while considering the privacy and security of the data during the transmission
process. Salim et al. (2023) [35] proposed a computational framework based on information
fusion. With the help of this framework, the training of artificial neural network models
based on federated learning will use fewer training rounds, thereby reducing the con-
sumption of computing resources and the training time cost while improving the model
accuracy. Phan et al. (2023) [36] proposed an IIoT edge framework based on blockchain
and federated learning. Within this framework, the federated learning scheme inherits
fully homomorphic encryption and splitting-based privacy, making it more conducive to
protecting data privacy and security when building AI models for IIoT networks.
However, due to constraints such as network communication, local computation costs,
and device uptime, aggregation servers can only select a limited number of clients to
participate in each training round [37]. Therefore, a core process in federated learning
protocols is the “client selection” before each training round begins. Several widely applied
federated optimization algorithms, including FedAvg [29], FedProx [38], and FedYoGi [39],
employ random client selection algorithms. Some methods have improved upon client
selection. For instance, DCS [40] minimizes communication costs by filtering clients based
on expected value thresholds. However, it calculates the local update model value using a
global validation set, potentially leading to information loss for models of lower value not
participating in global aggregation rounds. FedCS [41] addresses the maximization of de-
vice selection by transforming it into a submodular maximization problem under knapsack
constraints using greedy algorithms based on local model updates and transmission times.
However, it does not necessarily minimize latency or energy consumption and lacks fair-
ness considerations in device selection. Similarly, FedMCCS [42] converts the maximization
of device selection into a double-layer maximization optimization problem under knapsack
constraints, encountering similar issues. FedCCPS [43] aims to minimize overall train-
ing time by sorting local update times using K-means clustering and binary partitioning.
However, it does not consider scenarios with constrained energy consumption.

3. Materials and Methods


This section first proposes a software-defined AI-oriented three-layer IIoT edge com-
puting framework in Section 3.1. In the device layer, container virtualization technology is
used to solve the problems of deep coupling between the hardware and software of devices
and protocol diversity. In the data layer, the data and virtual device models are stored, and
this layer is also responsible for the forwarding of device data. In the AI model layer, the
steps of AI model application are described to solve the difficult problem of AI model de-
ployment. Then, on the basis of this edge computing framework, we design and implement
an AI-oriented edge computing system based on the microservice architecture in Section 3.2
and describe the functional modules of the system in the protocol service layer, data layer
and application layer. Finally, we analyze the time and energy consumption costs of the
federated learning system in Section 3.3 and propose a time series-based method of device
selection and computation offloading to solve the problems of the general insufficiency of
the AI computing power at the edge in the IIoT and the sensitivity to system delay and
energy consumption.

3.1. Software-Defined AI-Oriented Three-Layer IIoT Edge Computing Framework


This section proposes a software-defined, AI-oriented, three-layer IIoT edge computing
framework, which is used to realize all necessary functionalities for edge computing, from
device access to AI model training and inference. The overall framework is divided into
three layers, as shown in Figure 1. From the bottom up, these layers are the device layer,
the data layer and the AI function layer.
Figure 1. Software-defined AI-oriented three-layer IIoT edge computing framework.

3.1.1.
3.1.1. Device
Device Layer
Layer
Devices
Devices in the
in the IIoT
IIoT device
device layer
layer have
have the
the characteristics
characteristics ofof deep
deep coupling
coupling between
between
software and hardware, strong heterogeneity, and different access protocols [44]. In ad-
software and hardware, strong heterogeneity, and different access protocols [44]. In addi-
dition, in the traditional operational technology (OT) commonly used in industry, there
tion, in the traditional operational technology (OT) commonly used in industry, there is
is still no method available to support the rapid deployment of information technology
still no method available to support the rapid deployment of information technology ap-
applications [45]. The existing common IIoT device access protocols include RESTful HTTP,
plications [45]. The existing common IIoT device access protocols include RESTful HTTP,
MQTT, ZigBee, ModBus, and OPC-UA. Among them, OPC-UA, as an enabling technology
MQTT, ZigBee, ModBus, and OPC-UA. Among them, OPC-UA, as an enabling technol-
for industrial modeling, can describe device properties and functions in the form of object
ogy for industrial modeling, can describe device properties and functions in the form of
models for various industrial devices and provide standard programming interfaces and
object models for various industrial devices and provide standard programming inter-
standard communication protocols [46]. However, many existing devices do not support
faces and standard communication protocols [46]. However, many existing devices do not
this emerging protocol. Another software-defined device access method is virtualization
support this emerging protocol. Another software-defined device access method is virtu-
technology [47].
alization technology [47].
By means of virtualization technology, the processes of data acquisition, protocol
parsingBy and
means
dataofprocessing
virtualization
can betechnology,
encapsulated theinto
processes
a singleof data acquisition,
independent device protocol
commu-
parsing and data processing can be encapsulated into a single independent
nication protocol image via container virtualization [48]. Data acquisition: through device edge
com-
munication protocol image via container virtualization [48]. Data acquisition:
computing devices (such as sensors, programmable logic controllers, remote terminal units, through
edge computing
industrial devices
computers, (such
etc.), dataasare
sensors, programmable
collected logic site
at the production controllers, remotein
and processed termi-
real
nal units, industrial computers, etc.), data are collected at the production
time, and the collected data are transmitted to the edge for further analysis and processing. site and pro-
cessed inparsing:
Protocol real time,theand
rawthe collected
data collecteddata are the
from transmitted to theare
edge devices edge for further
parsed analysis
and converted
and processing. Protocol parsing: the raw data collected from the edge
into a standardized data format in accordance with the corresponding protocol mirror devices are parsedto
and converted into a standardized data format in accordance with the corresponding
facilitate subsequent storage, analysis and application. Data processing: by using a rule pro-
tocol mirror
engine and atohierarchical
facilitate subsequent storage,the
control method, analysis and application.
data collected and parsedDatainprocessing:
the protocolby
using aare
image rule engine and
processed anda hierarchical
analyzed, and control
localmethod,
tasks arethe data collected
processed belowandthe parsed in the
edge device
protocol image
connection end,are processed
reducing and analyzed,
the dependence onand
thelocal
edgetasks
and theare cloud
processed
and below the edge
improving the
device connection end, reducing the dependence on the edge and the
real-time performance and efficiency of data processing while ensuring the security and cloud and improv-
ing the real-time
integrity performance and efficiency of data processing while ensuring the secu-
of the data.
rity and integrity of the data.
Sensors 2024, 24, 4182 7 of 22

At the software level, the IIoT device layer realizes isolation between device access
programs and general industrial applications. In this way, more flexible deployment and
container migration capabilities can be supported. In addition, to better meet the real-time
performance requirements of device access, time-sensitive software-defined networking
(TSSDN) is also adopted. In this networking paradigm, time-sensitive networking is imple-
mented on general SDN switches, which can theoretically eliminate transmission jitter [49].
By integrating the data acquisition, protocol parsing and data processing flows of various
common protocols through container virtualization, the industrial Internet platform can
realize the efficient collection and processing of production data and provide strong support
for increasing the digitalization, intelligence and efficiency of the production process.
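To make the acquisition, parsing and processing steps concrete, the sketch below shows what a minimal containerized device service for one protocol might look like in Python. It is illustrative only: the broker address, topic names, payload format and the 80 °C forwarding rule are assumptions of this sketch, not part of the framework.

import json
import paho.mqtt.client as mqtt

BROKER_HOST = "edge-gateway.local"     # assumed edge broker address
RAW_TOPIC = "factory/line1/raw"        # assumed raw-device topic
PARSED_TOPIC = "factory/line1/parsed"  # assumed standardized-data topic

def parse_payload(raw: bytes) -> dict:
    """Protocol parsing: convert a raw reading into a standardized record."""
    return {"device": "line1-sensor", "value": float(raw.decode()), "unit": "C"}

def on_message(client, userdata, msg):
    record = parse_payload(msg.payload)
    # Data processing: a trivial local rule; forward only readings above 80 C,
    # reducing dependence on the edge and the cloud.
    if record["value"] > 80.0:
        client.publish(PARSED_TOPIC, json.dumps(record))

client = mqtt.Client()  # paho-mqtt 1.x-style constructor
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.subscribe(RAW_TOPIC)
client.loop_forever()   # data acquisition loop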

3.1.2. Data Layer


The data layer includes digital twins of various devices [50] and data storage. On the
one hand, the data layer performs virtual device mapping to the digital space for the real
physical devices. Each virtual device periodically checks the state of the corresponding
real physical device, describes the functions of the physical device, and collects informa-
tion on the available computing and network resources of the real physical device. The
storage of virtual devices is divided in the protocol dimension, where the corresponding
models include RESTful HTTP device models, MQTT device models, ZigBee device mod-
els, ModBus device models, etc. On the other hand, the data layer is responsible for the
collection, storage and forwarding of device data. The databases required for data storage
include all kinds of relational databases and nonrelational databases as well as distributed
databases. Facing the device side, the southbound interface of the data layer is responsible
for communicating with the device layer to complete all operations related to the device
life cycle, such as device module modeling, discovery, monitoring, destruction, backup,
and migration. Facing the AI server, the northbound interfaces of the data layer provide
abstract data types and network programming interfaces for AI applications to achieve
network awareness and control capabilities. In Section 3.3, we design a time series-based
method of device selection and computation offloading for the forwarding of device data.

3.1.3. AI Function Layer


The AI model application process can be divided into four main steps: training model
deployment, data collection and preprocessing, model training, and inference model de-
ployment. The deployment of AI models is designed in a data-driven manner. For the
selection and deployment of AI models for the data collected from the data layer, includ-
ing structured data, unstructured data and semistructured data, APIs and various other
components are integrated based on industrial application modeling, facilitating the rapid
development of IIoT AI applications. Field engineers can quickly design and deploy indus-
trial AI applications for the training and application of models for reasoning on real data
without worrying about specific implementation details or complicated deployment steps.
The dynamically collected data are obtained from the device layer, are then loaded and
extracted through real-time stream calculations, and are finally stored in the data layer in
either a central or distributed manner. The AI function layer accesses these massive-scale
data through a standard Open Database Interconnection (ODBC) interface. For the AI
model service, Docker technology is used to select basic images such as TensorFlow Serving,
and the compilation environments of C++, Python, Go and other languages are adapted
and integrated with JupyterLab. The model framework and dependencies are installed by
means of rule chains and WebSocket to construct an integrated AI service development
environment supporting multiple types of programming languages. Accordingly, model
loading, training, and reasoning scripts can be configured by field engineers, and AI appli-
cations can be rapidly developed once the environmental variables have been configured.
In addition, the web version provides a graphical development environment based on
TensorBoard, ECharts and other technologies to realize the configuration operation and
atomic interaction of the metamodel and to support the rapid construction of intelligent models in a graphical way.

3.2. AI-Oriented Edge Computing System

Based on the above software-defined AI-oriented three-layer IIoT edge computing framework, we present the design and implementation of an AI-oriented edge computing system in this section. The system architecture is shown in Figure 2. In this edge computing system, a microservice architecture [51] is adopted. There are three layers, namely, the protocol service layer, the data layer, and the application layer, corresponding to the access part of the device service layer, the data layer and the AI function layer, respectively, in the above software-defined AI-oriented three-layer IIoT edge computing framework. The functions of the detailed service modules in each layer are listed hierarchically below.

Figure 2. Architecture diagram of the AI-oriented edge computing system.

3.2.1. Protocol Service Layer

The protocol service layer is the lowest layer of the entire edge computing architecture and is responsible for direct interaction with the underlying devices. The communication protocol between the protocol service layer and the underlying devices is deployed in the form of device services using container technology. The interaction modes include built-in RESTful API, MQTT, ZigBee, Modbus, and OPC-UA modes, among others.

When a specific device needs to be connected to the edge system, the registration and configuration function in the function layer will select the corresponding device service in accordance with the communication protocol used by the device and register the device information in the device service list to complete device access. This protocol service can be rewritten and extended. There are many device access protocols for the IIoT. To expand support for a variety of different device access protocols, a variety of new and different device protocols can be designed in container form and deployed in the protocol service layer.
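As an illustration of this registration flow, a device could be announced to the edge system with a single REST call, as in the following sketch. The endpoint path and payload schema are assumptions; the concrete API is not specified here.

import requests

# Assumed registry endpoint and payload schema (not specified in the paper).
EDGE_REGISTRY = "http://edge-node:8080/api/v1/devices"

device_info = {
    "name": "plc-line1",
    "protocol": "Modbus",            # used to select the matching device service
    "address": "192.168.0.20:502",
    "properties": {"vendor": "acme", "sampling_ms": 100},
}

resp = requests.post(EDGE_REGISTRY, json=device_info, timeout=5)
resp.raise_for_status()              # device is now in the device service list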

3.2.2. Data Layer


The core data service functions reside in the data layer. The Digital Twin Device
Models module corresponds to the digital twins of devices, which store the configuration
data of the corresponding virtual container devices, including the data required for IIoT
device configuration and the data for pairing virtual devices with device services. The
communication between the system and each specific device needs to comply with a
specific communication protocol, and data are transmitted in a specific format based on
a specific configuration. These configuration data are stored in this digital twin module.
When a device service is connected to a device through the device service layer, the device’s
configuration information and function API are registered in this module. When a device
needs to operate or communicate, the relevant configuration information can be obtained
from this module.
The Data Storage module is responsible for collecting device data, storing the data
transmitted by the device service layer, and performing related simple processing and
management of the data. All the data transmitted by the device service layer to the
functional layer are received and stored by the Data Storage module. The databases used
for data storage can include relational databases such as SQL Server, MySQL, and Oracle
databases or nonrelational databases such as Redis and MongoDB databases. In actual
industrial production scenarios, field engineers can decide which key data should be stored
and can simply process the data. The data stored in the Data Storage module can be
accessed by other microservices in the microservice group that have access permissions,
thereby enabling data storage and data interaction between IIoT applications.
The Registry module is responsible for registering and configuring other microservice
applications. It is the registration and configuration server in a microservice group. After
each microservice function is started, the Registry function will register the configuration
properties of the microservice in the console through a RESTful API. All data are stored in
the form of key–value pairs, and when one microservice server communicates with another
microservice server, the relevant configuration information will also be obtained from the
Registry module.
The Log&Notification function handles system notifications and logging. In the Sched-
ule and Rule Engine modules, the scheduling and rule engine policies can be customized by
engineer users. The Rule Engine module provides invocation methods to support different
microservice scheduling strategies and data.

3.2.3. Application Layer


The application layer of the AI-oriented edge computing architecture is designed to
support a generally applicable AI model training cycle at the edge. This cycle includes
deployment, loading and distribution; training services; storage functions for AI models;
and distribution functions for computation offloading and uploading model updates.
Load&Distribution and Dispatch services: The Load&Distribution service is respon-
sible for loading and scheduling AI tasks. The cloud platform deploys AI tasks to the
edge computing platform through the Dispatch service, and once these tasks are received,
they are loaded and scheduled by means of the Load&Distribution service. Specifically,
when the Load&Distribution service receives a request to issue AI tasks, it parses the tasks
and determines whether it needs to use other mutually trusted nodes in accordance with
the current remaining computing resources. If the local edge computing resources are
insufficient to support all the AI tasks, then some tasks need to be decomposed and sent
back to Dispatch. Dispatch transfers the tasks that need computational offloading to other
mutually trusted nodes or sends models that have been trained in the local edge framework
back to the cloud.
Training Service and AI Models: The Training Service is responsible for the real-time
training of AI models running on edge devices in the field. AI tasks, including configuration
information and AI models, are received by the application layer of the edge computing
architecture and transmitted to the Training Service for local fusion and real-time training.
The training results, namely, the AI models, are subsequently sent back for storage and further transmission.

3.3. Time Series-Based Method of Device Selection and Computation Offloading

In this section, an energy queue-based device selection and computation offloading method is proposed for use in federated edge learning. In general, in federated edge learning, the devices used for training local models are generally selected randomly [52]. However, this random device selection approach does not fully consider the problems of delay and energy consumption in the IIoT application scenario. Therefore, an energy queue-based device selection and computation offloading method is proposed in this section. By maintaining a time series consisting of the estimated times needed for one round of local model training on each edge device, the devices for the current round of training can be selected from the near range of the queue, and the part of the calculation whose energy consumption would exceed the maximum value among the devices in each round is offloaded to the edge computing center for completion. In this way, the energy consumption and time cost of the federated edge learning system can be reduced in the long term.

3.3.1. Federated Edge Learning

Figure 3 shows the federated learning model training graph in the system. There are N devices in the system, and k of them are selected for training in each round. The local models at the edge have their own data sets generated from the field in real time; they train on the data in their own scope and then upload the trained model parameters to the edge computing center, which then performs model fusion. Subsequently, the fused model is distributed back to the devices, which fuse it with their own trained models to obtain new local models and perform iterative training again.

Figure 3. Training process under the federated learning model.

The mathematical formulas describing the process of federated learning and the time and energy consumption calculations are given below.

In each round, the edge compute center selects a set of devices, denoted as $K_t$, to participate in model training. $M^t$ represents the global model parameters at the beginning of a round of training, $M_i^t$ represents the parameters of the local model before one round of training, $M_i^{t+1}$ represents the parameters of the local model after one round of training, and $L(M_i^t)$ represents the loss function for local model training. Subsequently, the edge compute center sends the latest global model $M^t$ to all selected devices:

$M_i^t = M^t, \quad i \in K_t$  (1)

The selected devices update their local models using the received global model and
their own datasets, employing the stochastic gradient descent algorithm, where δ denotes
the step size:
$M_i^{t+1} = M_i^t - \delta \nabla L\left(M_i^t\right)$  (2)
The global model parameters at the end of a round of training are calculated through
weighted averaging of the local model parameters in accordance with the relative
data proportions:
$M^{t+1} = \sum_{i=1}^{k} \frac{D_i}{D}\, M_i^{t+1}$  (3)
where $D$ is the total amount of data used in this round of training and $D_i$ is the size of the local dataset of device $i$.
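For illustration, Equations (1)-(3) amount to the following aggregation step. This is a minimal NumPy sketch in which model parameters are flattened into a single vector and local_update stands for the device-side training of Equation (2).

import numpy as np

def federated_round(global_params, local_datasets, local_update):
    """One round of Equations (1)-(3): broadcast, local update, weighted average."""
    sizes = np.array([len(d) for d in local_datasets], dtype=float)
    D = sizes.sum()                                      # total data this round
    new_locals = [local_update(global_params.copy(), d)  # Eqs. (1)-(2)
                  for d in local_datasets]
    # Eq. (3): weighted average by relative data proportions D_i / D.
    return sum((s / D) * m for s, m in zip(sizes, new_locals))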
The time consumption for each local training is

$T_i = \frac{1}{f_i}\, \varepsilon \gamma D_i$  (4)

where ε represents the number of iterations; γ represents the average number of CPU
clock cycles to compute a unit amount of data, which is regarded as a constant for a given
calculation task; and f i represents the CPU frequency of the device for the calculation
task. The calculation frequencies of different devices are different, but the frequency of a
single device remains consistent throughout a calculation task. It must be noted that in
the above equation, we have simplified the calculation of local training time consumption
to a certain extent. Strictly speaking, the number of data points in local computation and
training time do not necessarily exhibit a strictly linear relationship. For example, support
vector machines (SVM) exhibit quadratic complexity, while methods like random forests,
decision trees, convolutional neural networks (CNN), and multilayer perceptrons (MLP)
have quasilinear complexity [53–55]. For certain methods in the IIoT, such as MLP used in
our experimental section, the size of the dataset and training time can be approximated to
be linearly related. The approximate calculation of energy consumption in the following
equations follows a similar logic.
The energy consumption for each local training is approximated as

$E_i = \rho \varepsilon \gamma D_i f_i^2$  (5)

where $\rho$ is a capacitance coefficient constant, which represents the energy consumption of the CPU computing module. The energy consumption is affected by the working voltages
of the electronic components, and it is not easy to calculate the exact value; therefore, we
use an approximate value.
The total time consumption for the calculation is

$T_{comp} = \max\{T_i\}_{i=1}^{k} + T_0$  (6)

where T0 represents the time needed for device selection, model parameter transmission
and model fusion at the edge computing center, which is generally small compared with
the local model training time and thus can be ignored. Therefore, the above expression
simplifies to
$T_{comp} = \max\{T_i\}_{i=1}^{k}$  (7)

$T_{comp} = \max\left\{\frac{1}{f_i}\, \varepsilon \gamma D_i\right\}_{i=1}^{k}$  (8)
The total energy consumption is calculated as

$E_{comp} = \sum_{i=1}^{k} E_i$  (9)

$E_{comp} = \sum_{i=1}^{k} \rho \varepsilon \gamma D_i f_i^2$  (10)

The optimization goal is to minimize $T_{comp}$ while ensuring that $E_{comp}$ does not exceed $E_0$, which represents the threshold for the total energy consumption of the entire device system.
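Under these definitions, the per-round cost model of Equations (4)-(10) can be evaluated directly, as in the following sketch, which takes the dataset sizes and CPU frequencies of the selected devices as NumPy arrays.

import numpy as np

def round_costs(D, f, eps, gamma, rho):
    """Per-round time and energy for the selected devices, Eqs. (4)-(10)."""
    T_i = eps * gamma * D / f            # Eq. (4): local training time per device
    E_i = rho * eps * gamma * D * f**2   # Eq. (5): local training energy per device
    T_comp = T_i.max()                   # Eqs. (7)-(8): round time = slowest device
    E_comp = E_i.sum()                   # Eqs. (9)-(10): total energy of the round
    return T_comp, E_comp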

3.3.2. Time Series-Based Method of Device Selection and Computation Offloading


To reduce the total training time and energy consumption, a time series-based device
selection and computation offloading strategy is now considered for the local computing
tasks. Computational offloading refers to transferring part of the computing task burden of
an edge device, i.e., part of the dataset, to the edge computing center when the computing
capacity of the edge device is insufficient; then, the edge computing center performs the
calculation on behalf of the edge device.
According to Equation (8), the computation time of the system is related to the longest
time consumption among the devices selected in each round. Therefore, we maintain a time
series representing the lengths of time needed for training on each device and try to select
devices with similar time consumption for training during each round of device selection;
in this way, the total training time can be greatly reduced. In other words, low-efficiency
devices tend to be selected for training simultaneously, and high-efficiency devices also
tend to be selected for training simultaneously. Moreover, for each batch of selected devices,
an agreed-upon value of the energy consumption cost is defined as $E_{i,0}$. When the estimated
energy consumption cost of a device in the current round of calculation exceeds this agreed-
upon value, the excess calculation will be offloaded to the edge computing center for
completion. We enhanced the federated averaging algorithm [37] by proposing Federated
Averaging with Device Selection and Computing Offload based on time series.
TimeQueue is a time series sorted in ascending order. During device selection, α is set as a time-weighted inverse coefficient; a random value r is drawn from [0, 1], and k devices are randomly selected from TimeQueue[rk, rk + αk] in each round.
When Ei > Ei,0 , the amount of data reserved for local training is

$D_{i,comp} = \frac{E_{i,0}}{\rho \varepsilon \gamma f_i^2}$  (11)

Otherwise, it is 0.
When Ei > Ei,0 , the amount of data that needs to be offloaded during local training is

$D_{i,trans} = D_i - \frac{E_{i,0}}{\rho \varepsilon \gamma f_i^2}$  (12)

Otherwise, it is 0.
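The offloading rule of Equations (11) and (12) can be sketched as follows: if a device's estimated energy exceeds the agreed-upon value, only the affordable share of its data stays local, and the rest is marked for offloading.

def split_workload(D_i, f_i, E_cap, eps, gamma, rho):
    """Eqs. (11)-(12): local vs. offloaded data under the energy cap E_cap = E_{i,0}."""
    E_i = rho * eps * gamma * D_i * f_i**2          # estimated local energy, Eq. (5)
    if E_i <= E_cap:
        return D_i, 0.0                             # everything stays local
    D_comp = E_cap / (rho * eps * gamma * f_i**2)   # Eq. (11)
    return D_comp, D_i - D_comp                     # Eq. (12)

The complete procedure, combining this offloading rule with time series-based device selection, is given in Algorithm 1 below.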


Accordingly, the time consumed for data transmission is

$T_{trans} = \frac{1}{B} \sum_{i=1}^{k} D_{i,trans}$  (13)

where B is the total bandwidth of the system. It should be noted that the data transmission
model is significantly simplified, omitting any discussion of queue latency, limitations in
MAC protocol performance, the effects of Automatic Repeat request (ARQ) and Forward
Error Correction (FEC), and other critical teletraffic engineering considerations. Addi-
tionally, bandwidth is allocated evenly across the computing devices, and
the formulas give average values. This simplification assumes that devices with lower
computational and transmission capabilities will not exceed protocol limitations. Moreover,
it assumes the system operates under ideal stability, thereby refraining from accounting for
time consumption influenced by factors such as ARQ and FEC, which are challenging to
quantify precisely. The same reasoning applies to the following equation.

Algorithm 1 Federated Averaging with Device Selection and Computing Offload based on
time series

1: Server executes:
2: initialize M
3: for each round r = 1, 2, ... do
4: for each client c in parallel do
5: timeQueue ← UpdateTimeQueue(c, timeQueue)
6: S ← SelectDevices(timeQueue, k, α)
7: for each client c∈S in parallel do
8: M′ ← ClientUpdate(c, M)
9: Dt ← receive Dtrans from clients
10: end for
11: batches ← (data D split into batches of size B)
12: for each batch b in batches do
13: M′′ ← arg min(Loss(M))
14: end for
15: M ← WeightedAvg(M′ , M′′ , Dt , D)
16: end for
17: end for
18: function SelectDevices(timeQueue, k, α)
19: r ← (random value in area [0, 1])
20: S ← (random k devices in area timeQueue[rk, rk + αk])
21: return S
22: end function
23: function UpdateTimeQueue(c, timeQueue) ▷Executed on client c
24: time ← (calculate time with Formula (4))
25: Insert(timeQueue, time)
26: return timeQueue
27: end function
28: function ClientUpdate(c, M) ▷Executed on client c
29: energy ← (calculate energy with Formula (5))
30: if energy > Ei,0 then
31: Dt ← (calculate Dtrans with Formula (12))
32: transfer Dt to Server
33: end if
34: for each local epoch i from 1 to E do
35: batches ← (data D split into batches of size B)
36: for each batch b in batches do
37: M ← arg min(Loss(M))
38: end for
39: end for
40: return M to Server
41: end function
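The SelectDevices step of Algorithm 1 can be written in Python roughly as follows, assuming timeQueue is kept sorted in ascending order of estimated training time; the clipping of the window at the end of the queue is an implementation choice of this sketch.

import random

def select_devices(time_queue, k, alpha):
    """Pick k devices from a window of similar estimated training times."""
    n = len(time_queue)
    r = random.random()                      # random window offset in [0, 1]
    lo = min(int(r * k), max(n - k, 0))
    hi = min(int(r * k + alpha * k), n)      # window timeQueue[rk : rk + alpha*k]
    window = time_queue[lo:hi]
    return random.sample(window, min(k, len(window)))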

The energy consumed for data transmission is

$E_{trans} = \sum_{i=1}^{k} \frac{p k D_{i,trans}}{\sigma B}$  (14)

where p represents the transmission power and σ is a constant coefficient of the transmission
rate over the bandwidth.

By using Di,comp in place of Di in Equations (8) and (10), the total training time
consumption of the system based on device selection and computation offloading in one
round of training can be expressed as

$T_{total} = T_{comp} + T_{trans}$  (15)


$T_{total} = \max\left\{\frac{1}{f_i}\, \varepsilon \gamma D_{i,comp}\right\}_{i=1}^{k} + \frac{1}{B} \sum_{i=1}^{k} D_{i,trans}$  (16)
Similarly, the total energy consumption is

$E_{total} = E_{comp} + E_{trans}$  (17)

$E_{total} = \sum_{i=1}^{k} \left( \rho \varepsilon \gamma D_{i,comp} f_i^2 + \frac{k p D_{i,trans}}{\sigma B} \right)$  (18)

For an edge computing center with strong computing power and a guarantee of
sufficient energy, the training time for the typically small amount of offloaded data will be
much less than Tcomp , so its energy consumption is not considered.
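Putting the pieces together, the round totals of Equations (13)-(18) can be sketched as follows, under the stated even-bandwidth simplification.

import numpy as np

def total_costs(D_comp, D_trans, f, eps, gamma, rho, B, p, sigma):
    """Eqs. (13)-(18): per-round totals with computation offloading."""
    k = len(D_comp)
    T_comp = (eps * gamma * D_comp / f).max()           # Eq. (8) with D_{i,comp}
    T_trans = D_trans.sum() / B                         # Eq. (13)
    E_comp = (rho * eps * gamma * D_comp * f**2).sum()  # Eq. (10) with D_{i,comp}
    E_trans = (p * k * D_trans / (sigma * B)).sum()     # Eq. (14)
    return T_comp + T_trans, E_comp + E_trans           # Eqs. (15)-(18)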

4. Results
As described in this section, we applied the above edge computing framework to
construct a system and conducted comparative experiments using the proposed time series-
based method of device selection and computation offloading to verify its feasibility as
well as its superiority in terms of time and energy costs.

4.1. Experimental Setup


As described in this section, the above edge computing framework was applied to
construct a system for simulation experiments, which included an edge computing center
and 40 edge devices. The number of local training iterations was 10. The average number
of CPU clock cycles per unit amount of data was determined by the dataset and the model
to be trained. The CPU frequency followed a uniform distribution in the range of [1,2]
GHz. The constant capacitance coefficient was set to $10^{-28}$. The total bandwidth was set
to 500 MB/s. The transmission power was set to 0.1 W. The constant coefficient of the
transmission rate over the bandwidth was set to 0.9. The agreed-upon maximum value
of the energy consumption cost was set to 5 J. The number of devices selected in each
training round was 15. In these experiments, the time-weighted inverse coefficient was set
to different values of 2, 1.6 and 1.2.
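For reference, the stated setup can be captured as follows. The value of γ depends on the dataset and model and is therefore left out here, and the random seed is an assumption added for reproducibility.

import numpy as np

rng = np.random.default_rng(0)       # assumed seed for reproducibility

N_DEVICES = 40                       # edge devices in the simulation
K_SELECTED = 15                      # devices selected per training round
EPSILON = 10                         # local training iterations
RHO = 1e-28                          # capacitance coefficient constant
B_TOTAL = 500e6                      # total bandwidth, 500 MB/s
P_TX = 0.1                           # transmission power, W
SIGMA = 0.9                          # rate-over-bandwidth coefficient
E_CAP = 5.0                          # agreed-upon energy cost per device, J
ALPHAS = (2.0, 1.6, 1.2)             # time-weighted inverse coefficients tested

f = rng.uniform(1e9, 2e9, N_DEVICES)        # CPU frequencies in [1, 2] GHz
D = rng.integers(1000, 2001, N_DEVICES)     # local dataset sizes (samples)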
In the IIoT domain, various machine learning methods can be applied, including
Multilayer perceptrons (MLP), convolutional neural networks (CNN), recurrent neural
networks (RNN), support vector machines (SVM), k-nearest neighbor (KNN), Long Short-
Term Memory (LSTM), and other deep reinforcement learning networks [53–55]. Among
these methods, MLP is particularly versatile and adaptable, capable of addressing diverse
IIoT tasks such as image processing, data analysis, anomaly detection, and quality control.
MLP also offers interpretability, allowing insights into how internal weights and neurons
respond to input data. Moreover, its training time scales approximately linearly with the
dataset size, aligning with the assumptions in Section 3’s formulas. Therefore, we selected
the MLP model for validation in this experiment.
We used the MNIST [56] dataset to verify the effectiveness of the proposed scheme. The
MNIST dataset contains 60,000 training data samples and 10,000 test samples. These sam-
ples are all 28 × 28 pixel images of handwritten digits. The training samples were divided
among the 40 edge devices following a uniform distribution in the range of [1000, 2000].
The MNIST dataset was used to train an MLP model. The Flatten layer converted each
28 × 28 image into a flat vector. Dense layers were fully connected layers: the first Dense
layer had 64 neurons with ReLU activation. The output Dense layer had 10 neurons (one for
each digit from 0 to 9) with softmax activation for multi-class classification. The model used
the Adam optimizer, sparse categorical cross-entropy loss function (suitable for integer-
encoded labels like MNIST), and accuracy metric. The model was trained with 5 epochs, a
batch size of 32, and a learning rate of 0.001.
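The data partition and model described above correspond to the following Keras sketch; this is our reconstruction from the description, not the authors' released code.

```python
import numpy as np
import tensorflow as tf

# MNIST: 60,000 training and 10,000 test samples of 28 x 28 handwritten digits.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# Partition the training set across 40 edge devices, each holding a sample
# count drawn uniformly from [1000, 2000]; in expectation the 40 shards consume
# the full training set, and any overshoot leaves trailing shards short.
rng = np.random.default_rng(seed=0)
counts = rng.integers(1000, 2001, size=40)
shards = np.split(np.arange(len(x_train)), np.cumsum(counts)[:-1])

# MLP as described: Flatten -> Dense(64, ReLU) -> Dense(10, softmax).
model = tf.keras.Sequential([
    tf.keras.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss="sparse_categorical_crossentropy",  # integer labels need no one-hot
    metrics=["accuracy"],
)

# Local training settings from the text: 5 epochs, batch size 32.
idx = shards[0]  # e.g., the first device's local shard
model.fit(x_train[idx], y_train[idx], epochs=5, batch_size=32)
```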
In the experiments, we set up a comparative test. In the control group, a random
device selection scheme without optimization was applied as a baseline against which to
compare the time series-based scheme for device selection and computation offloading
proposed in this paper. First, the convergence and accuracy of the proposed scheme were
verified on the training and test datasets and compared with those of the control group.
Then, different time-weighted inverse coefficients were set to evaluate the relationship
between the training time and energy consumption for further comparison of the proposed
scheme and the control scheme.
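The control group's baseline is simple enough to state in code. The sketch below shows only the random selection step; the proposed time series-based selector from Section 3 (with offloading of inefficient nodes) would replace the `sample` call.

```python
import random

rng = random.Random(0)  # fixed seed so the baseline runs are reproducible

def select_devices_random(num_devices=40, per_round=15):
    # Control group: choose 15 of the 40 devices uniformly at random each round.
    return rng.sample(range(num_devices), per_round)

for training_round in range(50):  # 50 rounds, as plotted in Figures 6-9
    participants = select_devices_random()
    # ...local training on `participants`, then aggregation at the edge center...
```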

4.2. Analysis of Results


In this section, the effectiveness of the proposed scheme is evaluated in terms of
model convergence and accuracy, and its superiority in terms of training time and energy
consumption is investigated in comparison with the control scheme.

4.2.1. Model Loss and Accuracy


In this experiment, the training performance and model convergence ability of the
federated learning algorithm based on the proposed time series-based device selection
and computation offloading method are compared with those of the traditional federated
learning algorithm with random device selection. Figures 4 and 5 show plots of the model
accuracy and loss, respectively, during training using the random device selection algorithm
and the proposed algorithm with different time-weighted inverse coefficients. The
comparative analysis shows that when the number of training rounds is less than 20, the
model accuracy and convergence speed achieved with the proposed scheme differ slightly
from those achieved with random device selection, and the smaller the time-weighted
inverse coefficient, the larger the difference. When the number of training rounds exceeds
20, however, there is no large gap in model convergence or accuracy between the schemes.
The small early difference arises because the proposed device selection strategy
preferentially selects training batches consisting of similar devices, which yields greater
sample similarity and consistency than completely random selection in the first training
rounds. As the number of training rounds increases, all devices gradually come to be
treated equally in long-term device selection, so the data homogeneity is eliminated, and
the model accuracy and convergence results are not affected. These results therefore show
that the proposed method may have a small impact on model convergence and accuracy
when little training time is available, with smaller time-weighted inverse coefficients
producing a greater impact; however, it does not affect the final model convergence
and accuracy.

Figure 4. Plot of training accuracy.

Figure 5. Plot of training loss.

4.2.2. Curve of Training Time over Multiple Training Rounds

In this experiment, the training time in each round and the cumulative training time
were compared between the proposed method and the random device selection method.
Figure 6 shows the overall time consumption in each of the 50 training rounds. The model
training time with the proposed method is generally 30% to 50% less than that with the
random device selection method, and the smaller the time-weighted inverse coefficient is,
the shorter the average training time in each round. Moreover, regarding the oscillation
amplitude of the curve, the random device selection method results in the largest
oscillations, while the oscillation amplitude under the proposed method is slightly reduced
with a decrease in the time-weighted inverse coefficient. The relevant experiments in
reference [29] demonstrate that when the client data participating in each training round
cannot cover all data distributions, selecting more clients to participate can accelerate the
convergence speed of the federated model. Therefore, Figure 4 does not show any
superiority in terms of training rounds. However, as the time-weighted inverse coefficient
decreases, the system tends to select a more centralized data scale and offloads data
exceeding the threshold to the edge computing center for computation, thereby reducing
processing time. Figure 7 shows the total time consumption of the system after 50 rounds of
training with each method. It shows that the proposed time series-based method of device
selection and computation offloading can reduce the total training time to a certain extent
compared with the random device selection method over the same number of training
rounds. These experimental results verify the superiority of the proposed method in terms
of time consumption.

Figure 6. Plot of single-round training times.

Figure 7. Plot of cumulative training time.

4.2.3. Curve of Energy Consumption over Multiple Training Rounds

In this experiment, the energy consumption per training round and the cumulative
training energy consumption (estimated according to Equation (18) in Section 3.3 and
excluding the energy consumption of the edge computing center) were compared between
the proposed method and the random device selection method. Figure 8 shows the overall
energy consumption in each of the 50 training rounds. The training energy consumption
under the proposed method is generally 35% to 55% less than that under the random device
selection method, and the smaller the time-weighted inverse coefficient is, the lower the
average training energy consumption in each round. Moreover, regarding the oscillation
amplitude of the energy consumption curve, the random device selection method results
in the largest oscillations, while the oscillation amplitude under the proposed method is
slightly reduced with a decrease in the time-weighted inverse coefficient. These findings
are consistent with the analysis presented in the previous subsection. Figure 9 shows the
total energy consumption of the system after 50 rounds of training with each method. It
shows that the proposed time series-based method of device selection and computation
offloading can somewhat reduce the total training energy consumption on the device side
compared with the random device selection method over the same number of training
rounds. These experimental results verify the superiority of the proposed method in terms
of energy consumption.

Figure 8. Plot of single-round energy consumption.



Figure 9. Plot of cumulative energy consumption.

4.2.4. Comparison with Related Methods

As shown in Table 1, our method is compared with other related methods, including
FedAvg [29], DCS [40], FedCS [41], FedMCCS [42], and FedCCPS [43], in terms of feature
collection, optimization goals, strategies, and time improvements compared to a random
device selection method. It should be noted that due to differences in feature collection
and optimization goals, direct horizontal comparisons of these methods on the same
dimension are relatively challenging. However, compared to the random device selection
method, our approach demonstrates superiority over other methods in scenarios involving
more extensive feature collection, applying the time series-based method of device
selection and computation offloading.

Table 1. Comparison with related methods.

| Methods | Feature Collection | Optimization Goals | Strategies | Time Improvements Compared to Random Device Selection Method |
|---|---|---|---|---|
| FedAvg | - | - | Random device selection method | - |
| DCS | Device communication time | Minimizing overall communication cost | Client selection threshold filtering | 32.67% |
| FedCS | Local model training time; model transmission time | Maximizing the number of device selections | Greedy algorithm under the knapsack constraint problem | - |
| FedMCCS | CPU frequency; memory; energy | Maximizing the number of device selections | Double-layer greedy algorithm under the knapsack constraint problem | - |
| FedCCPS | CPU frequency; size of dataset; transmission power | Minimizing overall training time | Federated client cluster and latency-prediction selection | 21% |
| Ours | CPU frequency; size of dataset; transmission power; bandwidth | Minimizing overall training time within limits of energy consumption | Time series-based method of device selection and computation offloading | 30–50% |

5. Discussion
This paper proposes a three-layer, software-defined AI-oriented edge computing
framework for the IIoT, which can overcome the problem of the high hardware and software
coupling of edge devices in the IIoT as well as the difficulties of deploying and delivering
AI models and interacting with the device side to some extent. Based on this framework, the
design and implementation of an AI-oriented edge computing system are also presented.
In the proposed architecture, the edge receives AI models deployed from the cloud for
training, and model training is performed at the edge based on on-site data. This method of
training at the edge supports a variety of IIoT transmission protocols and facilitates access
to various common IIoT control devices and sensor devices to meet the needs of different
application scenarios. To a large extent, this approach also protects the privacy and security
of data, in accordance with the concepts of big data AI, model sharing and data privacy. In
addition, to reduce the delay of model training and the energy consumption of the devices
while adapting to the needs of less efficient nodes, this paper proposes a time series-based
method of device selection and computation offloading. Experiments verify the feasibility
of this method compared with traditional random device selection in terms of training
accuracy and model convergence as well as its superiority in terms of time consumption
and energy consumption.
However, this study still has some shortcomings and limitations. In the experiments,
we found that the container virtualization-based device access system occasionally exhibits
small, unpredictable jitter in its response times. Such jitter poses a serious challenge for
industrial application scenarios that require high real-time performance.
In future work, we will focus on improving the real-time performance of the system,
including increasing the efficiency of data transmission, increasing the effectiveness of
the time-sharing scheduling strategy, and optimizing the computing resource allocation
method. The overall framework of IIoT edge computing for AI models is currently
undergoing constant development, and we hope that the architecture proposed in this paper can
serve as a reference for researchers in this area.

Author Contributions: Conceptualization, X.L. and X.D.; methodology, X.D.; software, X.D.; vali-
dation, N.J., X.L. and X.D.; formal analysis, X.D.; investigation, X.L. and W.Z.; resources, X.L. and
W.Z.; data curation, N.J.; writing—original draft preparation, X.D.; writing—review and editing,
N.J.; visualization, X.D.; supervision, W.Z.; project administration, X.L.; funding acquisition, X.L. All
authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by The National Key Research and Development Program of
China, grant number 2022YFB3305700. The APC was funded by The National Key Research and
Development Program of China.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Data Availability Statement: The MNIST dataset used in the experimental section is sourced from:
Deng, L. The MNIST database of handwritten digit images for machine learning research. IEEE
Signal Processing Magazine, 29(6), 141–142, 2012. It can be downloaded from https://yann.lecun.com/exdb/mnist/
(accessed on 11 November 2022).
Acknowledgments: We thank all the researchers working in this field, whose previous research
results have helped us to carry out further research.
Conflicts of Interest: The authors declare no conflicts of interest.

References
1. Sisinni, E.; Saifullah, A.; Han, S.; Jennehag, U.; Gidlund, M. Industrial Internet of Things: Challenges, Opportunities, and
Directions. IEEE Trans. Ind. Inform. 2018, 14, 4724–4734. [CrossRef]
2. Mohammadi, M.; Al-Fuqaha, A.; Sorour, S.; Guizani, M. Deep Learning for IoT Big Data and Streaming Analytics: A Survey.
IEEE Commun. Surv. Tutor. 2018, 20, 2923–2960. [CrossRef]
3. Mahdavinejad, M.S.; Rezvan, M.; Barekatain, M.; Adibi, P.; Barnaghi, P.; Sheth, A.P. Machine learning for internet of things data
analysis: A survey. Digit. Commun. Netw. 2018, 4, 161–175. [CrossRef]
4. Zou, Z.; Jin, Y.; Nevalainen, P.; Huan, Y.; Heikkonen, J.; Westerlund, T. Edge and Fog Computing Enabled AI for IoT-An Overview.
In Proceedings of the 2019 IEEE International Conference on Artificial Intelligence Circuits and Systems (AICAS), Hsinchu,
Taiwan, 18–20 March 2019. [CrossRef]
5. Khan, L.U.; Saad, W.; Han, Z.; Hossain, E.; Hong, C.S. Federated Learning for Internet of Things: Recent Advances, Taxonomy,
and Open Challenges. IEEE Commun. Surv. Tutor. 2021, 23, 1759–1799. [CrossRef]
6. Saeik, F.; Avgeris, M.; Spatharakis, D.; Santi, N.; Dechouniotis, D.; Violos, J.; Leivadeas, A.; Athanasopoulos, N.; Mitton, N.;
Papavassiliou, S. Task offloading in Edge and Cloud Computing: A survey on mathematical, artificial intelligence and control
theory solutions. Comput. Netw. 2021, 195, 108177. [CrossRef]
7. Zhang, K.; Zhu, Y.; Maharjan, S.; Zhang, Y. Edge Intelligence and Blockchain Empowered 5G Beyond for the Industrial Internet of
Things. IEEE Netw. 2019, 33, 12–19. [CrossRef]
8. Sha, K.; Wei, W.; Yang, T.A.; Wang, Z.; Shi, W. On security challenges and open issues in Internet of Things. Future Gener. Comput.
Syst. 2018, 83, 326–337. [CrossRef]
9. Vermesan, O.; EisenHauer, M.; Serrano, M.; Guillemin, P.; Sundmaeker, H.; Tragos, E.Z.; Valino, J.; Copigneaux, B.; Presser, M.;
Aagaard, A. The Next Generation Internet of Things—Hyperconnectivity and Embedded Intelligence at the Edge. In Next
Generation Internet of Things Distributed Intelligence at the Edge and Human Machine-to-Machine Cooperation; River Publishers: New
York, NY, USA, 2018.
10. Sha, K.; Yang, T.A.; Wei, W.; Davari, S. A survey of edge computing-based designs for IoT security. Digit. Commun. Netw. 2020, 6,
195–202. [CrossRef]
11. Sodhro, A.H.; Pirbhulal, S.; de Albuquerque, V.H.C. Artificial Intelligence-Driven Mechanism for Edge Computing-Based
Industrial Applications. IEEE Trans. Ind. Inform. 2019, 15, 4235–4243. [CrossRef]
12. Zhao, Y.; Li, L.; Liu, Y.; Fan, Y.; Lin, K.-Y. Communication-Efficient Federated Learning for Digital Twin Systems of Industrial
Internet of Things. IFAC-PapersOnLine 2022, 55, 433–438. [CrossRef]
13. Mai, T.; Yao, H.; Guo, S.; Liu, Y. In-Network Computing Powered Mobile Edge: Toward High Performance Industrial IoT. IEEE
Netw. 2021, 35, 289–295. [CrossRef]
14. Nguyen, D.C.; Ding, M.; Pathirana, P.N.; Seneviratne, A.; Li, J.; Poor, H.V. Federated Learning for Internet of Things: A
Comprehensive Survey. IEEE Commun. Surv. Tutor. 2021, 23, 1622–1658. [CrossRef]
15. Mwase, C.; Jin, Y.; Westerlund, T.; Tenhunen, H.; Zou, Z. Communication-efficient distributed AI strategies for the IoT edge.
Future Gener. Comput. Syst. 2022, 131, 292–308. [CrossRef]
16. Torres-Charles, C.A.; Carrizales-Espinoza, D.E.; Sanchez-Gallegos, D.D.; Gonzalez-Compean, J.L.; Morales-Sandoval, M.;
Carretero, J. SecMesh: An Efficient Information Security Method for Stream Processing in Edge-Fog-Cloud. In Proceedings of
the 2022 7th International Conference on Cloud Computing and Internet of Things, Hanoi, Vietnam, 23–25 September 2022.
[CrossRef]
17. Kök, İ.; Yıldırım Okay, F.; Özdemir, S. FogAI: An AI-supported fog controller for Next Generation IoT. Internet Things 2022,
19, 100572. [CrossRef]
18. Zhao, Y.; Hu, N.; Zhao, Y.; Zhu, Z. A Secure and Flexible Edge Computing Scheme for AI-Driven Industrial IoT. Clust. Comput.
2023, 26, 283–301. [CrossRef]
19. Kumar, R.; Agrawal, N. Analysis of multi-dimensional Industrial IoT (IIoT) data in Edge–Fog–Cloud based architectural
frameworks: A survey on current state and research challenges. J. Ind. Inf. Integr. 2023, 35, 100504. [CrossRef]
20. Zhang, F.; Wang, H.; Zhou, L.; Xu, D.; Liu, L. A blockchain-based security and trust mechanism for AI-enabled IIoT systems.
Future Gener. Comput. Syst. 2023, 146, 78–85. [CrossRef]
21. Hong, Z.; Chen, W.; Huang, H.; Guo, S.; Zheng, Z. Multi-Hop Cooperative Computation Offloading for Industrial
IoT–Edge–Cloud Computing Environments. IEEE Trans. Parallel Distrib. Syst. 2019, 30, 2759–2774. [CrossRef]
22. Abdulrahman, S.; Tout, H.; Ould-Slimane, H.; Mourad, A.; Talhi, C.; Guizani, M. A Survey on Federated Learning: The Journey
From Centralized to Distributed On-Site Learning and Beyond. IEEE Internet Things J. 2021, 8, 5476–5497. [CrossRef]
23. Zhang, C.; Patras, P.; Haddadi, H. Deep Learning in Mobile and Wireless Networking: A Survey. IEEE Commun. Surv. Tutor. 2019,
21, 2224–2287. [CrossRef]
24. Aledhari, M.; Razzak, R.; Parizi, R.M.; Saeed, F. Federated Learning: A Survey on Enabling Technologies, Protocols, and
Applications. IEEE Access 2020, 8, 140699–140725. [CrossRef] [PubMed]
25. Du, Z.; Wu, C.; Yoshinaga, T.; Yau, K.; Ji, Y.; Li, J. Federated Learning for Vehicular Internet of Things: Recent Advances and Open
Issues. IEEE Open J. Comput. Soc. 2020, 1, 45–61. [CrossRef] [PubMed]
26. Bellavista, P.; Penna, R.D.; Foschini, L.; Scotece, D. Machine Learning for Predictive Diagnostics at the Edge: An IIoT Practical
Example. In Proceedings of the ICC 2020—IEEE International Conference on Communications, Dublin, Ireland, 7–11 June 2020.
[CrossRef]
27. Sun, W.; Lei, S.; Wang, L.; Liu, Z.; Zhang, Y. Adaptive Federated Learning and Digital Twin for Industrial Internet of Things. IEEE
Trans. Ind. Inform. 2021, 17, 5605–5614. [CrossRef]
28. McClellan, M.; Cervelló-Pastor, C.; Sallent, S. Deep Learning at the Mobile Edge: Opportunities for 5G Networks. Appl. Sci. 2020,
10, 4735. [CrossRef]
29. McMahan, H.B.; Moore, E.; Ramage, D.; Hampson, S.; Agüera y Arcas, B. Communication-Efficient Learning of Deep Networks
from Decentralized Data. In Proceedings of the International Conference on Artificial Intelligence and Statistics, Cadiz, Spain,
9–11 May 2016. [CrossRef]
30. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated Machine Learning: Concept and Applications. ACM Trans. Intell. Syst. Technol. 2019,
10, 12. [CrossRef]
31. Jiang, W.; Han, H.; Zhang, Y.; Mu, J. Federated split learning for sequential data in satellite–terrestrial integrated networks. Inf.
Fusion 2024, 103, 102141. [CrossRef]
32. Parra-Ullauri, J.M.; Madhukumar, H.; Nicolaescu, A.-C.; Zhang, X.; Bravalheri, A.; Hussain, R.; Vasilakos, X.; Nejabati, R.;
Simeonidou, D. kubeFlower: A privacy-preserving framework for Kubernetes-based federated learning in cloud–edge environ-
ments. Future Gener. Comput. Syst. 2024, 157, 558–572. [CrossRef]
33. Mhaisen, N.; Abdellatif, A.A.; Mohamed, A.; Erbad, A.; Guizani, M. Optimal User-Edge Assignment in Hierarchical Federated
Learning Based on Statistical Properties and Network Topology Constraints. IEEE Trans. Netw. Sci. Eng. 2022, 9, 55–66. [CrossRef]
34. Baccour, E.; Mhaisen, N.; Abdellatif, A.A.; Erbad, A.; Mohamed, A.; Hamdi, M.; Guizani, M. Pervasive AI for IoT Applications: A
Survey on Resource-Efficient Distributed Artificial Intelligence. IEEE Commun. Surv. Tutor. 2022, 24, 2366–2418. [CrossRef]
35. Salim, M.M.; El Azzaoui, A.; Deng, X.; Park, J.H. FL-CTIF: A federated learning based CTI framework based on information
fusion for secure IIoT. Inf. Fusion 2024, 102, 102074. [CrossRef]
36. Duy, P.T.; Quyen, N.H.; Khoa, N.H.; Tran, T.D.; Pham, V.H. FedChain-Hunter: A reliable and privacy-preserving aggregation for
federated threat hunting framework in SDN-based IIoT. Internet Things 2023, 24, 100966. [CrossRef]
37. Li, X.; Huang, K.; Yang, W.; Wang, S.; Zhang, Z. On the Convergence of FedAvg on Non-IID Data. arXiv 2019, arXiv:1907.02189.
[CrossRef]
38. Sahu, A.K.; Li, T.; Sanjabi, M.; Zaheer, M.; Talwalkar, A.; Smith, V. Federated Optimization in Heterogeneous Networks. arXiv
2018, arXiv:1812.06127. [CrossRef]
39. Reddi, S.J.; Charles, Z.B.; Zaheer, M.; Garrett, Z.; Rush, K.; Konečný, J.; Kumar, S.; McMahan, H.B. Adaptive Federated
Optimization. arXiv 2020, arXiv:2003.00295. [CrossRef]
40. Hosseinzadeh, M.; Hudson, N.; Heshmati, S.; Khamfroush, H. Communication-Loss Trade-Off in Federated Learning: A
Distributed Client Selection Algorithm. In Proceedings of the 2022 IEEE 19th Annual Consumer Communications & Networking
Conference (CCNC), Las Vegas, NV, USA, 8–11 January 2022. [CrossRef]
41. Liu, S.; Viotti, P.; Cachin, C.; Quéma, V.; Vukolic, M. XFT: Practical fault tolerance beyond crashes. In Proceedings of the 12th
USENIX Conference on Operating Systems Design and Implementation (OSDI’16), Savannah, GA, USA, 2–4 November 2016.
42. Abdulrahman, S.; Tout, H.; Mourad, A.; Talhi, C. FedMCCS: Multicriteria Client Selection Model for Optimal IoT Federated
Learning. IEEE Internet Things J. 2021, 8, 4723–4735. [CrossRef]
43. Xin, F.; Zhang, J.; Luo, J.; Dong, F. Federated Learning Client Selection Mechanism Under System and Data Heterogeneity. In
Proceedings of the 2022 IEEE 25th International Conference on Computer Supported Cooperative Work in Design (CSCWD),
Hangzhou, China, 4–6 May 2022. [CrossRef]
44. Xu, L.D.; He, W.; Li, S. Internet of Things in Industries: A Survey. IEEE Trans. Ind. Inform. 2014, 10, 2233–2243. [CrossRef]
45. Titu, A.M.; Stanciu, A.M. Merging Operations Technology with Information Technology. In Proceedings of the 2020 12th
International Conference on Electronics, Computers and Artificial Intelligence (ECAI), Bucharest, Romania, 25–27 June 2020.
[CrossRef]
46. Bruckner, D.; Stănică, M.-P.; Blair, R.; Schriegel, S.; Kehrer, S.; Seewald, M.; Sauter, T. An Introduction to OPC UA TSN for
Industrial Communication Systems. Proc. IEEE 2019, 107, 1121–1131. [CrossRef]
47. Cruz, T.; Simões, P.; Monteiro, E. Virtualizing Programmable Logic Controllers: Toward a Convergent Approach. IEEE Embed.
Syst. Lett. 2016, 8, 69–72. [CrossRef]
48. Bernstein, D. Containers and Cloud: From LXC to Docker to Kubernetes. IEEE Cloud Comput. 2014, 1, 81–84. [CrossRef]
49. Nayak, N.G.; Dürr, F.; Rothermel, K. Incremental Flow Scheduling and Routing in Time-Sensitive Software-Defined Networks.
IEEE Trans. Ind. Informatics. 2018, 14, 2066–2075. [CrossRef]
50. Wu, Y.; Zhang, K.; Zhang, Y. Digital Twin Networks: A Survey. IEEE Internet Things J. 2021, 8, 13789–13804. [CrossRef]
51. Microservices: A Definition of This New Architectural Term. Available online: https://martinfowler.com/articles/microservices.
html (accessed on 1 January 2024).
52. Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated Learning: Challenges, Methods, and Future Directions. IEEE Signal Process.
Mag. 2020, 37, 50–60. [CrossRef]
53. Matke, M.; Saurabh, K.; Singh, U. An Empirical Evaluation of Machine Learning Algorithms for Intrusion Detection in IIoT
Networks. In Proceedings of the 2023 IEEE 20th India Council International Conference (INDICON), Hyderabad, India, 14–17
December 2023. [CrossRef]
54. Choudhry, M.D.; Mani, J.S.; Rose, B.; Mol, S.P. Machine Learning Frameworks for Industrial Internet of Things (IIoT): A
Comprehensive Analysis. In Proceedings of the 2022 First International Conference on Electrical, Electronics, Information and
Communication Technologies (ICEEICT), Trichy, India, 16–18 February 2022. [CrossRef]
55. Alqurashi, S.; Shirazi, H.; Ray, I. On the Performance of Isolation Forest and Multi Layer Perceptron for Anomaly Detection in
Industrial Control Systems Networks. In Proceedings of the 2021 8th International Conference on Internet of Things: Systems,
Management and Security (IOTSMS), Gandia, Spain, 6–9 December 2021. [CrossRef]
56. Deng, L. The MNIST database of handwritten digit images for machine learning research. IEEE Signal Process. Mag. 2012, 29,
141–142. [CrossRef]

Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.
