Green AI for IIoT: Energy-Efficient Intelligent Edge Computing for the Industrial Internet of Things

IEEE TRANSACTIONS ON GREEN COMMUNICATIONS AND NETWORKING, VOL. 6, NO. 1, MARCH 2022
Abstract—Artificial intelligence (AI) technology is a huge opportunity for the Industrial Internet of Things (IIoT) in the fourth industrial revolution (Industry 4.0). However, most AI-driven applications need high-end servers to process complex AI tasks, bringing high energy consumption to IIoT environments. In this article, we introduce intelligent edge computing, an emerging technology for reducing the energy consumed in processing AI tasks, to build green AI computing for IIoT applications. We first propose an intelligent edge computing framework with a heterogeneous architecture to offload most AI tasks from servers. To enhance the energy efficiency of various computing resources, we propose a novel algorithm to optimize the scheduling of different AI tasks. In the performance evaluation, we build a small testbed to show the energy efficiency of AI-driven IIoT applications with intelligent edge computing. Meanwhile, extensive simulation results show that the proposed online scheduling strategy consumes less than 80% of the energy of static scheduling and 70% of that of the first-in, first-out (FIFO) strategy in most settings.

Index Terms—Green computing, intelligent edge, Industrial Internet of Things (IIoT), artificial intelligence (AI).

Manuscript received March 16, 2021; revised June 17, 2021; accepted July 16, 2021. Date of publication August 20, 2021; date of current version February 16, 2022. This work was supported in part by JSPS KAKENHI under Grant JP19K20250, Grant JP20F20080, and Grant JP20H04174; and in part by the Leading Initiative for Excellent Young Researchers (LEADER), MEXT, Japan. (Corresponding author: Kaoru Ota.) The authors are with the Department of Sciences and Informatics, Muroran Institute of Technology, Muroran 050-0071, Japan (e-mail: [email protected]; [email protected]; [email protected]). Digital Object Identifier 10.1109/TGCN.2021.3100622

2473-2400 © 2021 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.

I. INTRODUCTION

Artificial intelligence (AI)-driven Industrial Internet of Things (IIoT) will play a prominent role in supporting highly automated manufacturing [1], [2]. For example, different object recognition models have been deployed in automated product assembly, unattended factory monitoring, automated quality inspection, and other core processes of industrial production [3], [4]. Since most AI tasks are compute-bound, it is necessary to deploy high-end infrastructure so that AI-driven IIoT can finish each task within a short time interval [5], [6]. However, high-performance computing hardware, including processors, graphics processing units (GPUs), and tensor processing units (TPUs), usually has high energy consumption when processing AI tasks [7]. As one of the most efficient computing devices for AI tasks, a typical NVIDIA RTX GPU has a peak power consumption of 400 watts [8].

In general AI-driven applications, users rely on the cloud to process AI tasks in order to reduce the additional cost of both infrastructure and energy consumption [9], [10]. Large cloud providers also achieve good energy efficiency due to high resource utilization and mature scheduling strategies on homogeneous architectures [11]. However, in IIoT environments, the cloud-based structure used for general AI applications loses its effectiveness because the long distance to the cloud conflicts with the strict real-time requirements of IIoT applications [12]. By bringing AI computing close to where it is needed, intelligent edge computing becomes an excellent opportunity for AI-driven IIoT because data transfer time is reduced [13]. Some intelligent edge nodes have adequate capability to process complex AI tasks, including computer vision and natural language processing, to support AI-driven IIoT applications [14]. Meanwhile, unlike the homogeneous structure of most cloud platforms, intelligent edge computing can be customized with heterogeneous hardware, which offers good energy efficiency in processing specific AI tasks [15], [16].

This paper introduces intelligent edge computing into the IIoT scenario to support energy-efficient AI-driven IIoT applications. The most challenging problem in IIoT environments is guaranteeing processing latency, which is a matter of life and death for most industrial applications [17], [18]. In our design, a framework organizes all heterogeneous resources and then finds the most suitable hardware to handle different AI tasks within the required processing time. Since existing AI tasks are developed in a few major AI libraries such as PyTorch and TensorFlow, the framework can connect various AI tasks to different AI hardware without modification. Another problem is scheduling heterogeneous hardware for better energy efficiency when processing different AI tasks at the intelligent edge [19]. Because specialized AI hardware has better energy efficiency for specific tasks, the proposed framework's scheduling strategy leverages the task type and latency requirement to minimize energy consumption while guaranteeing processing latency.

We also implement a demonstration system on a small testbed with different edge devices to test the processing latency of several basic AI tasks in the proposed framework. Meanwhile, the scheduling strategy at the intelligent edge is evaluated in large-scale scenarios by extensive simulations. The simulation results show that the online scheduling strategy can reduce energy consumption by more than 20% compared to the static scheduling strategy and by more than 40% compared to the first-in, first-out (FIFO) strategy in most settings. We list the three major contributions of this paper as follows.
• We first design an intelligent edge computing framework to improve the energy efficiency of AI-driven IIoT applications. To the best of our knowledge, the proposed framework is original and creative.
• We then state the scheduling problem in the proposed framework and give two solutions, for the static and online scenarios, respectively.
• A demonstration framework is developed on our testbed, and the intelligent edge devices show an excellent performance-to-power ratio in processing AI tasks. Meanwhile, results from extensive simulations show that the two proposed scheduling methods outperform other solutions.

Section II introduces related work on AI-driven IIoT and intelligent edge computing. Section III introduces the scenario of AI-driven IIoT applications and the design of the proposed framework. Section IV formulates the scheduling problem and gives two solutions. Section V presents the demonstration framework on the testbed and the performance evaluation, followed by the conclusions and future work in Section VI.

… detection, which is an important issue in automated manufacturing. Supervised [24], [25], semi-supervised [26], and unsupervised deep learning methodologies [27] have been introduced to solve the anomaly detection problem in IIoT environments. Meanwhile, some researchers apply new deep learning methods, such as federated learning [28] and explainable AI [29], to detect anomalies in manufacturing.

Radio-frequency identification (RFID) is an efficient technology for sensing objects in manufacturing, and it is more stable than vision-based approaches in many scenarios [30], [31]. Huang et al. [32] present a deep learning-based approach to monitor and predict manufacturing progress in factories. The proposed approach designs a transfer learning-based AI model to improve prediction performance when labeled data are insufficient. However, due to the gap between AI technologies and industrial informatics, the AI-driven IIoT area still needs a stable and scalable solution for more comprehensive applications.
ZHU et al.: GREEN AI FOR IIoT: ENERGY EFFICIENT INTELLIGENT EDGE COMPUTING FOR IIoT
We use a value X_{jk} to denote whether task a_k is processed for device d_j, given by

    X_{jk} = \begin{cases} 1, & \text{task } a_k \text{ is processed for device } d_j, \\ 0, & \text{otherwise.} \end{cases}    (1)

Usually, the power consumption of processing a given AI task depends on the hardware of the edge device. Therefore, the energy consumption of task a_k on edge node e_i is denoted by p_{ik}. We also use a value Y_{ik} to denote whether edge node e_i is scheduled to process task a_k, given by

    Y_{ik} = \begin{cases} 1, & \text{task } a_k \text{ is processed on edge node } e_i, \\ 0, & \text{otherwise.} \end{cases}    (2)

We use P to denote the power consumption of the scheduling strategy, given by

    P = \sum_{i=1}^{|E|} \sum_{k=1}^{|A|} Y_{ik} \cdot p_{ik}.    (3)

The processing latency of task a_k on edge node e_i is denoted by l^p_{ik}, and the network latency from equipment d_j to edge node e_i is denoted by l^n_{ij}. Considering the communication limitation of edge node e_i, let l^d_k denote the time for transferring the data of task a_k to edge node e_i, given by

    l^d_k = \sum_{i=1}^{|E|} Y_{ik} \cdot \frac{\sum_{k=1}^{|A|} Y_{ik} \cdot D}{B_k}    (4)

where D is the size of the transferred data and B_k is the ingress bandwidth of edge node e_i.

The total latency for processing task a_k is denoted by l^t_k, given by

    l^t_k = \sum_{i=1}^{|E|} l^p_{ik} \cdot Y_{ik} + \sum_{i=1}^{|E|} \sum_{j=1}^{|D|} l^n_{ij} \cdot X_{jk} \cdot Y_{ik} + l^d_k.    (5)

We use L^t to denote the required processing latency in the scheduling; for a given task a_k, l^t_k must be no more than L^t. Because edge node e_i has a limited capacity for processing tasks, the resource scheduling strategy should satisfy

    \sum_{k=1}^{|A|} Y_{ik} \cdot b_k \le c_i    (6)

where c_i is the capacity of edge node e_i and b_k is the workload of task a_k.

We consider that the scheduling strategy works on a time series T, which is divided into time slots. Let t denote a time slot in T. We assume the length of each time slot is the minimal scheduling unit. Therefore, the working time series of task a_k, denoted by t_k, is given by

    t_k = z_1, z_2, \ldots, z_{|T|}    (7)

and z_t is a value denoting whether task a_k is working, given by

    z_t = \begin{cases} 1, & \text{task } a_k \text{ is working in time slot } t, \\ 0, & \text{otherwise.} \end{cases}    (8)

Therefore, over the time series T, the total energy consumption, denoted by E, is given by

    E = \sum_{t=1}^{|T|} P \cdot z_t.    (9)

Therefore, the scheduling problem is summarized as

    minimize: E
    s.t.: l^t_k \le L^t,
          \sum_{k=1}^{|A|} Y_{ik} \cdot b_k \le c_i.    (10)

Resource scheduling problem for AI-driven IIoT at the intelligent edge: given a set of intelligent edge nodes and a set of industrial equipment, the edge node scheduler attempts to assign edge nodes to process the AI tasks submitted by the equipment with minimal energy consumption.

B. Scheduling Strategy

We first solve the scheduling problem in a static scenario, in which AI tasks are submitted at the beginning and continuously processed until the end of the scheduling. The objective of the scheduling problem is then simplified to

    minimize: P
    subject to: l^t_k \le L^t,
                \sum_{k=1}^{|A|} Y_{ik} \le c_i.    (11)

Theorem 1: The scheduling problem of assigning intelligent edge nodes to process AI tasks under the static assumption is NP-hard.

Proof: The NP-hardness of the scheduling problem can be proved by a reduction from the bin-packing problem.

Bin-packing problem: given a set of items a_1, a_2, ..., a_{|A|}, each item a_k with size b_k, and |E| bins of size C, the bin-packing problem accommodates all items in the |E| bins. First, suppose there is a solution to the bin-packing problem. This solution accommodates the |A| items in |E| bins, which is also a solution to the edge node scheduling problem. Conversely, suppose the edge node scheduling problem has a solution in which each AI task a_k is assigned to an edge node e_i; this solution also solves the bin-packing problem. Since we can evaluate the objective function associated with a given solution in polynomial time, edge node scheduling is NP-hard.

To solve the edge node scheduling problem, we design a heuristic algorithm, shown in Algorithm 1. We assume that there are fewer edge nodes than equipment devices, i.e., |E| < |D|. There are two loops in the static scheduling algorithm: a double loop and a normal loop. The complexity of the double loop is O(|A||D|), because the first layer of the loop has |A| iterations and the second layer has no more than |D| iterations. The normal loop has |E| iterations. Therefore, the time complexity of the proposed algorithm is O(|A||D| + |E|) = O(|A||D|), because |E| is smaller than |A||D|. Moreover, following the analysis in a previous paper [43], the number of edge nodes employed by the static scheduling algorithm is no more than two times that of an optimal solution.
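Algorithm 1 itself is not reproduced in this excerpt, so the following is only an illustrative sketch of a first-fit-style greedy scheduler consistent with the description above: each task is placed on the feasible edge node (capacity as in Eq. (6), latency bound l^t_k ≤ L^t) that adds the least power p_ik. All data structures, field names, and numbers below are hypothetical.

```python
# Hypothetical reconstruction of a greedy static scheduler, not the authors'
# Algorithm 1. Each task goes to the cheapest feasible node in power terms.

def static_schedule(tasks, nodes):
    """Assign each task to the feasible node with the lowest power draw.

    tasks: dicts with 'load' (b_k), 'deadline' (L^t), 'latency' (l^t per node).
    nodes: dicts with 'capacity' (c_i) and per-task 'power' (p_ik).
    Returns {task_index: node_index}, or None if some task cannot be placed.
    """
    used = [0] * len(nodes)              # consumed capacity per node
    assignment = {}
    for k, task in enumerate(tasks):
        best, best_power = None, float("inf")
        for i, node in enumerate(nodes):
            fits = used[i] + task["load"] <= node["capacity"]   # Eq. (6)
            in_time = task["latency"][i] <= task["deadline"]    # l_k^t <= L^t
            if fits and in_time and node["power"][k] < best_power:
                best, best_power = i, node["power"][k]
        if best is None:
            return None                  # infeasible under the static assumption
        used[best] += task["load"]
        assignment[k] = best
    return assignment

# Toy instance: two edge nodes, three tasks (all values invented).
nodes = [
    {"capacity": 2, "power": [5, 9, 7]},     # p_ik for tasks 0..2
    {"capacity": 2, "power": [8, 6, 25]},
]
tasks = [
    {"load": 1, "deadline": 300, "latency": [120, 250]},
    {"load": 1, "deadline": 300, "latency": [200, 100]},
    {"load": 1, "deadline": 300, "latency": [150, 400]},
]
plan = static_schedule(tasks, nodes)
total_power = sum(nodes[i]["power"][k] for k, i in plan.items())  # Eq. (3)
```

A fuller solver would also handle the infeasible case (here signalled by None) and could sort tasks by load before placement, in the spirit of first-fit decreasing for bin packing, which matches the 2-approximation flavor noted above.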
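The online strategy evaluated in the next section is trained with deep reinforcement learning using an ε-greedy policy with ε = 0.1 (the paper uses a Keras deep Q-network with the RMSProp optimizer). As a minimal illustration only, the sketch below replaces the network with a plain Q-table; the states, rewards, and step sizes are hypothetical.

```python
# Minimal epsilon-greedy Q-learning sketch (NOT the paper's Keras DQN):
# a Q-table stands in for the deep network, and the reward is assumed to be
# the negative power draw of the chosen node, so lower energy is preferred.
import random

EPSILON = 0.1  # exploration rate used in the paper's experiments

def select_node(q_row, rng=random):
    """Pick an edge node: explore with probability EPSILON, else argmax Q."""
    if rng.random() < EPSILON:
        return rng.randrange(len(q_row))                    # explore
    return max(range(len(q_row)), key=lambda a: q_row[a])   # exploit

def td_update(q_row, action, reward, q_row_next, alpha=0.5, gamma=0.9):
    """One Q-learning step toward reward + gamma * max_a' Q(s', a')."""
    target = reward + gamma * max(q_row_next)
    q_row[action] += alpha * (target - q_row[action])

# One hypothetical training step over three candidate edge nodes.
q = {"s0": [0.0, 0.0, 0.0], "s1": [0.0, 0.0, 0.0]}
a = select_node(q["s0"])
td_update(q["s0"], a, reward=-5.0, q_row_next=q["s1"])  # reward = -p_ik
```

In the full system a trained network replaces the table; the paper cites double Q-learning [39], and the scheduler's state would encode the task type and latency requirement that the framework uses for node selection.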
TABLE I
HARDWARE SETTING OF THE TESTBED

TABLE II
PARAMETER SETTINGS IN ALL SIMULATIONS

… all devices. Model conversion can improve the energy efficiency of both high-spec and low-spec edge devices. GPU-accelerated AI processing is still essential for processing AI tasks with less energy consumption.

B. Experimental Result Analysis

For a large environment, we use simulations to evaluate the performance of the proposed scheduling strategy. We develop a simulator with Python 3.9.2. Training and evaluation are executed on a server equipped with an Intel Core i9-10900X CPU, 128 GB of memory, a 1 TB SSD, and two NVIDIA GeForce TITAN RTX graphics cards. The operating system is Ubuntu 18.04 LTS. We adopt Keras 2.4.0 as the deep learning platform. In the experiments, we apply the RMSProp optimizer of the Keras library and set the mini-batch size to 64. For the ε-greedy policy in scheduler training, we set the value of ε to 0.1.

As shown in Table II, we set the parameters of all simulations according to the experimental results. Tasks are randomly generated over the time series, and the power consumption for processing tasks is evenly distributed in [5, 25], following Fig. 4. The required processing latency is evenly distributed in [2, 600] ms, following Fig. 5. According to a general industrial environment [45], the network latency between devices and edge nodes is evenly distributed in [1, 200] ms. The capacity of edge nodes is based on the memory size of each device and is evenly distributed in [1, 16]. According to memory usage, the required resource units of tasks are evenly distributed in [1, 2]. We use a small factory with 500 employees as the simulation environment. Therefore, the number of edge nodes and the number of equipment devices are set from 100 to 400 and from 200 to 500, respectively. Since one device usually sends at most two tasks, the number of tasks is set from 200 to 800. We test the energy consumption over one hour with 3600 time slots of 1 second each. The ratio of time that tasks are active in the total time series is evenly distributed in [0.8, 1].

We compare the deep reinforcement learning-based online scheduling strategy (Online), the static scheduling algorithm (Static), and the FIFO strategy in all simulations. All simulations are executed 20 times, and average values are recorded with a confidence level of 95%.

All results of the performance evaluation are presented in Fig. 6. As shown in Fig. 6(a), we vary the number of tasks from 200 to 500 and test the total energy consumption over one hour. The online scheduling strategy shows the lowest energy consumption for all numbers of tasks. When the number of tasks is less than 300, the FIFO method performs better than the static scheduling algorithm. When the number of tasks increases beyond 300, the static scheduling algorithm brings less energy consumption than the FIFO method.

We also test the energy consumption with different numbers of edge nodes. The number of tasks is set to 400, and the number of edge nodes increases from 100 to 400. From Fig. 6(b), when the number of edge nodes is less than 200, the three methods perform similarly, and the total energy consumption is more than 5200 Wh. After more edge nodes begin to process AI tasks, the total energy consumption decreases. The online scheduling method still has the lowest energy consumption of the three methods, and the other two methods perform very similarly.

The active ratio of tasks in the time series also affects the total energy consumption. In Fig. 6(c), we adjust the active ratio from 0.5 to 1.0 and test the total energy consumption. The number of edge nodes is set to 400, and the total number of tasks is set to 800. The results show that a higher active ratio brings more energy consumption, and the online scheduling strategy performs better than the other two algorithms.

Finally, we test the energy consumption under different required latencies. We set the number of tasks to 400 with 400 edge nodes, and the required latency increases from 100 ms to 600 ms. As shown in Fig. 6(d), the total energy consumption decreases as the required latency increases. When the required latency is less than 300 ms, the online scheduling strategy performs much better than the other two methods. When the latency requirement becomes loose, the three scheduling methods perform similarly, and FIFO consumes less energy than the Static algorithm.

VI. CONCLUSION AND FUTURE WORK

From this article, intelligent edge computing shows its potential in the energy-efficient AI-driven IIoT scenario. Depending on the development of AI frameworks, it is possible to concentrate heterogeneous resources at the edge to support different AI tasks. Meanwhile, novel resource scheduling is
also important to the energy efficiency of IIoT environments. The demonstration system developed on the small testbed shows that a well-designed edge system can outperform traditional cloud-based solutions with short processing latency and very low energy consumption. Finally, the performance evaluation results show that the proposed work is a feasible solution for AI-driven IIoT applications. With the online scheduling strategy, energy consumption in different settings is at least 20% and 30% lower than with the static scheduling strategy and the FIFO algorithm, respectively. In the future, we will enlarge the scale of our testbed and develop more deep learning applications, such as NLP, time series forecasting, and audio recognition. We also plan to build intelligent edge services on robots and UAVs to support autonomous driving and flying.

REFERENCES

[1] S. G. Tzafestas, "Synergy of IoT and AI in modern society: The robotics and automation case," Robot. Autom. Eng. J., vol. 3, no. 5, pp. 1–15, 2018.
[2] W. Sun, J. Liu, and Y. Yue, "AI-enhanced offloading in edge computing: When machine learning meets Industrial IoT," IEEE Netw., vol. 33, no. 5, pp. 68–74, Sep./Oct. 2019.
[3] M. Haslgrübler, P. Fritz, B. Gollan, and A. Ferscha, "Getting through," in Proc. 7th Int. Conf. Internet Things, 2017, pp. 1–8.
[4] Y. Cong, D. Tian, Y. Feng, B. Fan, and H. Yu, "Speedup 3-D texture-less object recognition against self-occlusion for intelligent manufacturing," IEEE Trans. Cybern., vol. 49, no. 11, pp. 3887–3897, Nov. 2019.
[5] J. Hiller, M. Henze, M. Serror, E. Wagner, J. N. Richter, and K. Wehrle, "Secure low latency communication for constrained Industrial IoT scenarios," in Proc. IEEE 43rd Conf. Local Comput. Netw. (LCN), 2018, pp. 614–622.
[6] Y. B. Zikria, M. K. Afzal, S. W. Kim, A. Marin, and M. Guizani, "Deep learning for intelligent IoT: Opportunities, challenges and solutions," Comput. Commun., vol. 164, pp. 50–53, Dec. 2020.
[7] Y. Wang et al., "Benchmarking the performance and energy efficiency of AI accelerators for AI training," in Proc. 20th IEEE/ACM Int. Symp. Clust. Cloud Internet Comput. (CCGRID), 2020, pp. 744–751.
[8] NVIDIA A100 GPUs Power the Modern Data Center. Accessed: Mar. 10, 2021. [Online]. Available: https://www.nvidia.com/en-us/data-center/a100/
[9] H. Li, K. Ota, M. Dong, and M. Guo, "Learning human activities through Wi-Fi channel state information with multiple access points," IEEE Commun. Mag., vol. 56, no. 5, pp. 124–129, May 2018.
[10] M. Mohammadi, A. Al-Fuqaha, S. Sorour, and M. Guizani, "Deep learning for IoT big data and streaming analytics: A survey," IEEE Commun. Surveys Tuts., vol. 20, no. 4, pp. 2923–2960, 4th Quart., 2018.
[11] M. Dong, K. Ota, and A. Liu, "RMER: Reliable and energy-efficient data collection for large-scale wireless sensor networks," IEEE Internet Things J., vol. 3, no. 4, pp. 511–519, Aug. 2016.
[12] J. Xu, K. Ota, and M. Dong, "Saving energy on the edge: In-memory caching for multi-tier heterogeneous networks," IEEE Commun. Mag., vol. 56, no. 5, pp. 102–107, May 2018.
[13] H. Li, K. Ota, and M. Dong, "Deep reinforcement scheduling for mobile crowdsensing in fog computing," ACM Trans. Internet Technol., vol. 19, no. 2, pp. 1–18, 2019.
[14] J. Xu, K. Ota, and M. Dong, "Big data on the fly: UAV-mounted mobile edge computing for disaster management," IEEE Trans. Netw. Sci. Eng., vol. 7, no. 4, pp. 2620–2630, Oct.–Dec. 2020.
[15] S. Pasteris, S. Wang, M. Herbster, and T. He, "Service placement with provable guarantees in heterogeneous edge computing systems," in Proc. IEEE INFOCOM Conf. Comput. Commun., 2019, pp. 514–522.
[16] R. Han, S. Li, X. Wang, C. H. Liu, G. Xin, and L. Y. Chen, "Accelerating gossip-based deep learning in heterogeneous edge computing platforms," IEEE Trans. Parallel Distrib. Syst., vol. 32, no. 7, pp. 1591–1602, Jul. 2021.
[17] H. Yan, Y. Zhang, Z. Pang, and L. D. Xu, "Superframe planning and access latency of slotted MAC for industrial WSN in IoT environment," IEEE Trans. Ind. Informat., vol. 10, no. 2, pp. 1242–1251, May 2014.
[18] H. Li, K. Ota, M. Dong, and H.-H. Chen, "Efficient energy transport in 60 GHz for wireless industrial sensor networks," IEEE Wireless Commun., vol. 24, no. 5, pp. 143–149, Oct. 2017.
[19] K. Li, X. Tang, and K. Li, "Energy-efficient stochastic task scheduling on heterogeneous computing systems," IEEE Trans. Parallel Distrib. Syst., vol. 25, no. 11, pp. 2867–2876, Nov. 2014.
[20] I. Bisio, C. Garibotto, A. Grattarola, F. Lavagetto, and A. Sciarrone, "Exploiting context-aware capabilities over the Internet of Things for industry 4.0 applications," IEEE Netw., vol. 32, no. 3, pp. 101–107, May/Jun. 2018.
[21] T. Han, K. Muhammad, T. Hussain, J. Lloret, and S. W. Baik, "An efficient deep learning framework for intelligent energy management in IoT networks," IEEE Internet Things J., vol. 8, no. 5, pp. 3170–3179, Mar. 2021.
[22] A. Kanawaday and A. Sane, "Machine learning for predictive maintenance of industrial machines using IoT sensor data," in Proc. 8th IEEE Int. Conf. Softw. Eng. Service Sci. (ICSESS), 2017, pp. 87–90.
[23] K. T. P. Nguyen and K. Medjaher, "A new dynamic predictive maintenance framework using deep learning for failure prognostics," Reliabil. Eng. Syst. Safety, vol. 188, pp. 251–262, Aug. 2019.
[24] S. M. Erfani, S. Rajasegarar, S. Karunasekera, and C. Leckie, "High-dimensional and large-scale anomaly detection using a linear one-class SVM with deep learning," Pattern Recognit., vol. 58, pp. 121–134, Oct. 2016.
[25] T. S. Buda, B. Caglayan, and H. Assem, "DeepAD: A generic framework based on deep learning for time series anomaly detection," in Advances in Knowledge Discovery and Data Mining. Cham, Switzerland: Springer, 2018, pp. 577–588.
[26] D. F. Wulsin, J. R. Gupta, R. Mani, J. A. Blanco, and B. Litt, "Modeling electroencephalography waveforms with semi-supervised deep belief nets: Fast classification and anomaly measurement," J. Neural Eng., vol. 8, no. 3, 2011, Art. no. 036015.
[27] T. Schlegl, P. Seeböck, S. M. Waldstein, U. Schmidt-Erfurth, and G. Langs, Unsupervised Anomaly Detection With Generative Adversarial Networks to Guide Marker Discovery (Lecture Notes in Computer Science). Cham, Switzerland: Springer, 2017, pp. 146–157.
[28] Y. Liu et al., "Deep anomaly detection for time-series data in Industrial IoT: A communication-efficient on-device federated learning approach," IEEE Internet Things J., vol. 8, no. 8, pp. 6348–6358, Apr. 2021.
[29] L. Antwarg, R. M. Miller, B. Shapira, and L. Rokach, "Explaining anomalies detected by autoencoders using SHAP," 2020. [Online]. Available: arXiv:1903.02407.
[30] X. Xie, X. Liu, H. Qi, B. Xiao, K. Li, and J. Wu, "Geographical correlation-based data collection for sensor-augmented RFID systems," IEEE Trans. Mobile Comput., vol. 19, no. 10, pp. 2344–2357, Oct. 2020.
[31] X. Xie et al., "Implementation of differential tag sampling for COTS RFID systems," IEEE Trans. Mobile Comput., vol. 19, no. 8, pp. 1848–1861, Aug. 2020.
[32] S. Huang, Y. Guo, D. Liu, S. Zha, and W. Fang, "A two-stage transfer learning-based deep learning approach for production progress prediction in IoT-enabled manufacturing," IEEE Internet Things J., vol. 6, no. 6, pp. 10627–10638, Dec. 2019.
[33] C. Zhang, Q. Cao, H. Jiang, W. Zhang, J. Li, and J. Yao, "FFS-VA: A fast filtering system for large-scale video analytics," in Proc. 47th Int. Conf. Parallel Processing, 2018, pp. 1–10.
[34] S. Bhattacharya and N. D. Lane, "Sparsification and separation of deep learning layers for constrained resource inference on wearables," in Proc. 14th ACM Conf. Embedded Netw. Sens. Syst. CD-ROM, 2016, pp. 176–189.
[35] H. Li, K. Ota, and M. Dong, "Learning IoT in edge: Deep learning for the Internet of Things with edge computing," IEEE Netw., vol. 32, no. 1, pp. 96–101, Jan./Feb. 2018.
[36] S. Teerapittayanon, B. McDanel, and H. T. Kung, "Distributed deep neural networks over the cloud, the edge and end devices," in Proc. IEEE 37th Int. Conf. Distrib. Comput. Syst. (ICDCS), 2017, pp. 328–339.
[37] NVIDIA EGX Platform for AI Computing. Accessed: Mar. 10, 2021. [Online]. Available: https://www.nvidia.com/en-us/data-center/products/egx/
[38] NVIDIA Jetson Xavier NX for Embedded & Edge Systems. Accessed: Mar. 10, 2021. [Online]. Available: https://www.nvidia.com/en-us/autonomous-machines/embedded-systems/jetson-xavier-nx/
[39] H. van Hasselt, A. Guez, and D. Silver, "Deep reinforcement learning with double Q-learning," in Proc. 30th AAAI Conf. Artif. Intell., 2016, pp. 2094–2100.
[40] H. Liu et al., "Thermal-aware and DVFS-enabled big data task scheduling for data centers," IEEE Trans. Big Data, vol. 4, no. 2, pp. 177–190, Jun. 2018.
[41] N. Tziritas et al., "Data replication and virtual machine migrations to mitigate network overhead in edge computing systems," IEEE Trans. Sustain. Comput., vol. 2, no. 4, pp. 320–332, Oct.–Dec. 2017.
[42] H. Liu, L. T. Yang, M. Lin, D. Yin, and Y. Guo, "A tensor-based holistic edge computing optimization framework for Internet of Things," IEEE Netw., vol. 32, no. 1, pp. 88–95, Jan./Feb. 2018.
[43] H. Li, P. Li, S. Guo, and A. Nayak, "Byzantine-resilient secure software-defined networks with multiple controllers in cloud," IEEE Trans. Cloud Comput., vol. 2, no. 4, pp. 436–447, Oct.–Dec. 2014.
[44] R. Luus, "Dynamic programming," in Iterative Dynamic Programming. Boca Raton, FL, USA: Chapman Hall/CRC, 2019, pp. 67–80.
[45] P. Ferrari, A. Flammini, E. Sisinni, S. Rinaldi, D. Brandão, and M. S. Rocha, "Delay estimation of Industrial IoT applications based on messaging protocols," IEEE Trans. Instrum. Meas., vol. 67, no. 9, pp. 2188–2199, Sep. 2018.

Sha Zhu (Student Member, IEEE) received the B.S. degree in electrical engineering from Honam University, South Korea, in 2011, and the M.S. degree in electrical engineering from Chonnam National University, South Korea, in 2013. She is currently pursuing the Ph.D. degree in advanced information and electronic engineering with the Muroran Institute of Technology, Japan. Her research interests include smart grid, IoT, and edge computing.

Kaoru Ota (Member, IEEE) was born in Aizu-Wakamatsu, Japan. She received the B.S. degree in computer science and engineering from The University of Aizu, Japan, in 2006, the M.S. degree in computer science from Oklahoma State University, USA, in 2008, and the Ph.D. degree in computer science and engineering from The University of Aizu in 2012. She is currently an Associate Professor and a Ministry of Education, Culture, Sports, Science and Technology Excellent Young Researcher with the Department of Sciences and Informatics, Muroran Institute of Technology, Japan. From March 2010 to March 2011, she was a Visiting Scholar with the University of Waterloo, Canada. She was also a Japan Society for the Promotion of Science Research Fellow with Tohoku University, Japan, from April 2012 to April 2013. She is a recipient of the IEEE TCSC Early Career Award 2017, the 13th IEEE ComSoc Asia–Pacific Young Researcher Award 2018, and the 2020 N2Women: Rising Stars in Computer Networking and Communications. She is a Clarivate Analytics 2019 Highly Cited Researcher (Web of Science).

Mianxiong Dong (Member, IEEE) received the B.S., M.S., and Ph.D. degrees in computer science and engineering from The University of Aizu, Japan. He is the youngest-ever Vice President and a Professor of the Muroran Institute of Technology, Japan. He was a JSPS Research Fellow with the School of Computer Science and Engineering, The University of Aizu, and was a Visiting Scholar with the BBCR Group, University of Waterloo, Canada, supported by the JSPS Excellent Young Researcher Overseas Visit Program from April 2010 to August 2011. He was selected as a Foreigner Research Fellow (a total of three recipients all over Japan) by the NEC C&C Foundation in 2011. He is a recipient of the IEEE TCSC Early Career Award 2016, the IEEE SCSTC Outstanding Young Researcher Award 2017, the 12th IEEE ComSoc Asia–Pacific Young Researcher Award 2017, the Funai Research Award 2018, and the NISTEP Researcher 2018 (one of only 11 people in Japan) in recognition of significant contributions in science and technology. He is a Clarivate Analytics 2019 Highly Cited Researcher (Web of Science).