Fault Detection and Classification in Industrial IoT in Case of Missing Sensor Data


Merim Dzaferagic, Nicola Marchetti, Irene Macaluso

Abstract—This paper addresses the issue of reliability in the Industrial Internet of Things (IIoT) in case of missing sensor measurements due to network or hardware problems. We propose to support the fault detection and classification modules, which are the two critical components of a monitoring system for IIoT, with a generative model. The latter is responsible for imputing missing sensor measurements so that the monitoring system performance is robust to missing data. In particular, we adopt Generative Adversarial Networks (GANs) to generate missing sensor measurements and we propose to fine-tune the training of the GAN based on the impact that the generated data have on the fault detection and classification modules. We conduct a thorough evaluation of the proposed approach using the extended Tennessee Eastman Process dataset. Results show that the GAN-imputed data mitigate the impact on the fault detection and classification even in the case of persistently missing measurements from sensors that are critical for the correct functioning of the monitoring system.

Index Terms—Generative Adversarial Network, Industrial IoT (IIoT), data imputation, fault detection, fault classification.

M. Dzaferagic, N. Marchetti, and I. Macaluso are with the CONNECT centre, Trinity College Dublin, Ireland.

I. INTRODUCTION

The Internet of Things (IoT) is a computing paradigm that relies on ubiquitous connection to the Internet, where common objects are turned into connected devices. Up to trillions of smart objects are being deployed that are capable of sensing their surroundings, transmitting and processing the acquired data, and then in turn feeding relevant information back to the environment. A subset of IoT, the Industrial Internet of Things (IIoT) encompasses machine-to-machine (M2M) and industrial communication technologies with applications in the automation sector. IIoT paves the way for a better understanding of the manufacturing process, with positive repercussions on the efficiency and sustainability of the production system [1], [2].

In the era of IoT big data, the integration of cloud computing technologies and cyber-physical systems enables the full potential of Industry 4.0 to be harvested in manufacturing processes, with a multitude of sensors being installed around the industrial operating environment and equipment. The networked sensors continuously send monitoring data, allowing for proactive maintenance to take place and leading to a reduction in unplanned downtime via data analysis techniques [3], [4], [5], [6], [7].

The work proposed in this article is based on the FIREMAN project funded by CHIST-ERA, which focuses on modelling and analysing IIoT networks based on specific machine learning algorithms capable of detecting rare events in industrial setups in an ultra-reliable way [8]. An important aspect in assuring such ultra-reliability in the IIoT is how to guarantee a functioning system even when some of the measurements are missing due to network or hardware issues. In fact, values are often missing from the collected sensor data, and the related issue of missing value imputation then becomes very important. For example, high-frequency data collection can result in large gaps in the data and, if the network stops working, all the measurements collected during the network downtime will be missing [3]. Other possible reasons behind missing data are: faulty sensors producing intermittent readings; loss of data during wireless communication owing to packet loss or interference in the communication medium; or data removed purposely by attackers with malicious intentions during sensing, processing, storing or communication. A related research challenge is to impute the missing values, to enable the data to be analyzed while ensuring that the imputed values are as close as possible to the true values. What complicates the imputation of missing data in IoT is that the data collected in such systems is diverse, so the techniques developed must provide a high level of confidence for different types of applications, besides being robust to the increase in the scale of IoT (and IIoT) deployments. Furthermore, techniques must be lightweight to fulfil real-time IoT application requirements [9].

All the approaches reported to date in the literature focus on either data imputation, anomaly detection or fault classification for an industrial process. In this paper we instead propose a framework that unifies all three techniques, allowing us to optimize each of them in a way that results in the best overall performance of the monitoring system. We propose a data-driven decomposition of the process to monitor the various indicators of the health of a machine/component or an entire industrial process. Instead of proposing a new tailored solution to collect, communicate and process data in an industrial environment, we focus on the detection and classification of system failures based on a dataset with missing values, investigating in particular the impact of missing data on the monitoring system in an industrial setting. This is a very important issue to tackle, as small reconstruction errors of missing sensor data could greatly affect the capability of the monitoring system. The data imputation module in our framework relies on a generative adversarial network (GAN) model that learns the correlation between the data from the input layer to replace missing sensor measurements. The GAN was optimized by validating its performance based on the effect of the imputed measurements on the fault identification and detection modules, which ultimately constitute the two

essential tasks performed by the monitoring system. As we will discuss later, the GAN-imputed data mitigate the loss on these two modules even in the case of persistently missing measurements from sensors that are critical for the correct functioning of the monitoring system.

Section II reports an account of the relevant literature, while also positioning our work by highlighting the main differences and advantages of our approach. Section III describes the proposed framework, detailing the adopted fault detection, fault classification and GAN-based missing data imputation techniques. Section IV analyses the performance of our techniques in terms of recall and precision, showing the impact the proposed data imputation has on both metrics. Section V concludes the paper and outlines some promising directions for future research.

II. STATE OF THE ART

IoT is based on the idea of connecting the physical and the digital worlds [10]. Initially, Radio-Frequency IDentification (RFID) was the main technology in this space, allowing microchips to identify themselves to a reader wirelessly [11]. Today, IoT applications have moved beyond simple RFIDs by integrating diverse sensing, computing and communication technologies. An IoT platform provides services and features like: node management, connectivity and network management, data management, analysis and processing, security, access control and interfacing [12]. Domains like transportation, healthcare, industrial automation and education are all affected by the fast development of these platforms. Different domains face different problems related to the deployment of IoT solutions (e.g. low latency communication, high reliability, massive data transfer, energy efficiency). Hence, different IoT platforms are needed to run specific applications.

In an industrial environment, the detection and prediction of anomalies are important for both economic and security reasons. Difficulties related to these tasks originate from the fact that anomalies are rare events within datasets, making it difficult to apply most of the existing algorithms, which result in either false alarms or misdetections [13], [14]. The authors of [8] made an attempt at providing a general framework to model a wide range of cases based on the advances in IIoT networks and Machine Learning (ML) algorithms. Similarly, the authors of [15], [16] describe several deployed solutions of cyber-physical systems in an industrial environment. The most promising solutions include those presented in [17], [18], [19]. Unlike the work in the above-mentioned papers, instead of proposing a new tailored solution to collect, communicate and process data in an industrial environment, we focus on the detection and classification of system failures based on a dataset with missing values.

Fault detection in an industrial environment has always been a challenging task [20], [21], [22], [23], [24]. Due to the issues related to interoperability and communication between different devices, collecting a large dataset in such an environment is not easy. However, even if we assume that the dataset was collected, different problems arise when using such a dataset for anomaly detection (e.g. unbalanced dataset, noise in the measurements, missing data). Similar to the work proposed by the authors of [25], [3], we also investigate the impact of missing data on the detection of rare events in an industrial setting. The authors of [25] propose a sensor data reconstruction scheme that exploits the hidden data dynamics to accurately estimate the missing measurements. In [3], the authors focus on missing data imputation for large gaps in univariate time-series data and propose an iterative framework, using multiple segmented gap iteration to provide the most appropriate values. All the approaches mentioned above focus on either data imputation, anomaly detection or fault classification for an industrial process. We, on the other hand, propose a framework that unifies all three techniques, allowing us to optimize each of them in a way that results in the best overall performance. Indeed, we train the imputation model to minimize the false alarm rate of the anomaly detection model and the classification error of the fault identification model, so that the monitoring system performance is robust to missing data.

III. FRAMEWORK

In this work we propose a data-driven decomposition of the process to monitor the various indicators of the health of a machine/component or an entire industrial process. Since faults account for only a very small fraction of the data collected in an IIoT scenario, we separately address the problem of fault or anomaly detection and the problem of fault classification. The latter is only triggered in case an anomaly is detected, as shown by the data flow in Figure 1. The fault detection, for which we employ an autoencoder as discussed in Section III-A, can be trained using data collected during normal operation. The fault classification, discussed in Section III-B, can be deployed as soon as enough labeled data related to faults become available. Finally, the imputation module, described in Section III-C, requires both faulty and non-faulty data to replace missing sensors' data so that the monitoring process can continue without disruption even if some measurements are not received.

Figure 1 shows the overall system and its data flow. In case all sensor measurements are received, the fault detection procedure is always activated, while the fault classification is performed only if an anomaly is detected (path $a-c$ in Figure 1). In case of missing measurements, the data imputation module is first triggered, followed by the fault detection and eventually by the fault classification if an anomaly is detected.

A. Fault detection

We built the fault detection module using an autoencoder that receives as input the $N$ sensor measurements used to monitor the industrial process. The outputs of the autoencoder are the reconstructed values of the input. By minimizing the Root Mean Square Error (RMSE) of the reconstructed values, the model learns a representation of the input data and filters the noise. By training the model on fault-free data only, the autoencoder will learn the patterns of normal operating conditions. This way, when faulty data is input, the resulting reconstruction error should be larger than

Possible data path scenarios:
a: no missing data; no anomaly detected
a-c: no missing data; anomaly detected; anomaly classification
b-d: missing data; no anomaly detected
b-d-e: missing data; anomaly detected; anomaly classification

Fig. 1: Data flow within the monitoring system. If all measurements are received the fault detection is activated, followed by the fault classification in the event of an anomaly (paths $a$ and $a-c$). If some measurements are not received, the fault detection and identification are preceded by the data imputation module (paths $b-d$ and $b-d-e$).
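The branching in Figure 1 reduces to a few lines of control flow. A minimal sketch, where `imputer`, `detector` and `classifier` are hypothetical stand-ins for the GAIN generator, the autoencoder and the fault-classification DNN described in Section III:

```python
def monitor_sample(x, imputer, detector, classifier):
    """Route one vector of sensor readings through the monitoring system.

    x is a list of N readings where a missing measurement is None.
    Returns (anomaly_detected, fault_label_or_None).
    """
    if any(v is None for v in x):   # paths b-d / b-d-e: impute missing values first
        x = imputer(x)
    if detector(x):                 # anomaly detected: classify it (paths a-c / b-d-e)
        return True, classifier(x)
    return False, None              # paths a / b-d: normal operation
```

The classifier is only invoked when the detector raises an alarm, mirroring the fact that fault classification is triggered solely on detected anomalies.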

the error corresponding to fault-free data. It is worth noting that, since fault-free measurements represent the normal operation of the system, these measurements constitute the majority of the collected data. Hence the fault detection module can be readily deployed as soon as enough measurements are collected. After training the model to minimize the RMSE, we choose an RMSE threshold, which will indicate the presence/absence of a system fault.

Besides its main purpose, which is fault detection, the autoencoder is also used to fine-tune the training of the model for the missing sensor measurement imputation.

B. Fault Classification

For the fault classification module, in this paper we adopt a deep neural network (DNN) that receives as input the time lags of the $N$ sensor measurements used to monitor the industrial process. Another possible solution is to adopt a recurrent neural network. Fault classification is a multinomial (or multi-class) classification problem. While an unequal distribution of the faults might result in an imbalanced classification problem, we have addressed the most severe imbalance by separately training a model for the detection of anomalies that only requires fault-free data. In fact, fault-free sensor measurements, i.e. data collected in normal operating conditions, represent the vast majority of the data collected in IIoT. By separating the fault detection from the fault classification stage, the DNN for fault classification is only used after an unspecified fault has been detected (see Section III-A) to determine which fault has occurred. Therefore, the DNN for fault classification is trained only using sensor measurements collected during faulty operations to classify the different types of faults.

The DNN is not only used to determine the type of fault that has occurred, but it is also the basis to fine-tune the training of the model used to impute missing data.

C. GAN-based missing data imputation

The data imputation module relies on a Generative Adversarial Network (GAN) model that learns the correlation between the data from the input layer to replace missing sensor measurements. Considering that the main purpose of data imputation in our proposed architecture is to replace the missing values so that the fault detection and classification modules can operate correctly, the model requires both faulty and non-faulty data during training. We start from the Generative Adversarial Imputation Network (GAIN) model presented in [26]. In [26], the generator $G$ observes the $N$-dimensional real data vector $x$ with missing components. Let us denote by $M$ the mask that indicates the missing values in the input dataset. The mask $M$, when multiplied with the complete input dataset, produces a dataset with missing values. Then $G$ imputes the missing components conditioned on what is actually observed, and outputs an imputed vector $\hat{x}$. The discriminator $D$ receives the output of the generator as its input. It takes the complete vector generated by $G$ and attempts to determine which components were actually observed and which were imputed. In other words, the discriminator $D$ attempts to predict the mask $M$. In addition to the output of the generator, $D$ receives a hint vector, which reveals partial information about the missing values. In particular, the hint vector reveals to $D$ all the components of the mask $M$ except for one, which is randomly and independently chosen for each sample.

The training of the GAIN model is a two-step process. We first optimize the discriminator $D$ with a fixed generator $G$ using mini-batches of size $k_D$. $D$ is trained according to equation (1), where $\mathcal{L}_D$ is defined in equation (2):

$$\min_D \; -\sum_{j=1}^{k_D} \mathcal{L}_D\big(m(j), \hat{m}(j), b(j)\big) \quad (1)$$

$$\mathcal{L}_D(m, \hat{m}, b) = \sum_{i:b_i=0} \big[ m_i \log(\hat{m}_i) + (1 - m_i) \log(1 - \hat{m}_i) \big] \quad (2)$$

We denote with $m(j)$ the original mask associated with the $j$-th sample in the mini-batch, while $\hat{m}(j)$ is the corresponding predicted mask, i.e. the output of $D$, and $b(j)$ is an

$N$-dimensional vector whose elements are all equal to 1 except for one element that is 0. The position of the 0 element in $b(j)$ is the position of the only element of the mask $m(j)$ that is not provided as input to $D$. In other words, by using (2) we train $D$ only on the element of the mask vector that is unknown to the discriminator, which is randomly chosen for each sample.

After we run the training process for the discriminator $D$, we optimize the generator $G$ according to equation (3) with mini-batches of size $k_G$:

$$\min_G \sum_{j=1}^{k_G} \mathcal{L}_G\big(m(j), \hat{m}(j), b(j)\big) + \alpha \mathcal{L}_M\big(x(j), \hat{x}(j)\big) \quad (3)$$

The cost function for $G$ is the weighted sum, with hyperparameter $\alpha$, of two components: one which applies to the missing sensor measurements, $\mathcal{L}_G$ (equation (4)); and one which applies to the observed measurements, $\mathcal{L}_M$ (equation (5)):

$$\mathcal{L}_G(m, \hat{m}, b) = -\sum_{i:b_i=0} (1 - m_i) \log(\hat{m}_i) \quad (4)$$

$$\mathcal{L}_M(x, \hat{x}) = \sum_{i=1}^{d} m_i (\hat{x}_i - x_i)^2 \quad (5)$$

In [26], the authors use the RMSE as the metric to evaluate how well the model performs in terms of imputing the missing values. However, we noticed that even though the RMSE metric can be very low, in an industrial process certain measurements might have a lower tolerance to variations, while other measurements might have very limited impact on the capability to monitor the process. This is for example the case for the Tennessee Eastman (TE) process, which we used for the validation of our framework (see Section IV). Therefore, instead of relying on the RMSE, we use the feedback from the fault detection and classification modules to tune the hyperparameters of the GAIN model (see Figure 2).

Fig. 2: Hyperparameter tuning for the GAN.

The authors of [26] provide theoretical results for a dataset with values Missing Completely At Random (MCAR). Similarly, we use the MCAR approach for the training of our model. However, in order to understand the impact of physical sensors failing, validation and testing are done on persistent sensor failures. That is where the feedback coming from the fault detection and classification modules plays an important role, because the imputed values have to provide enough information for these two modules to operate correctly.

IV. EVALUATION

To train the models and test the performance of the framework presented in the previous section we use the TE process dataset. The TE is a chemical process that was computationally modelled in 1993 and has since become widely adopted for benchmarking process monitoring techniques. The TE process simulation produces data corresponding to normal operation (fault-free data) and to 21 different simulated process faults (faulty data). Two sets of data are generated: training and testing datasets. The data consists of a multivariate time series of $N = 52$ variables sampled at an interval of 3 minutes. The training data and testing data span 25 hours of operation (i.e. 500 samples) and 48 hours of operation (i.e. 960 samples) respectively. While the original TE process dataset consisted of 22 runs, one normal and the remaining 21 for the faulty conditions, in this work we use the extended version provided by Reith et al. [27]. This extended dataset was generated by running 500 independent simulations for each of the runs, differing from the original ones in the random seeds used. Faults are introduced after 1 and 8 simulation hours in the training and testing files respectively. The analysis presented in the remainder of this section does not include fault 21, since it was not part of the extended dataset. Finally, faults 3, 9, and 15 are not considered because of their unobservability from the data, which results in high missed detection rates [28].

The remainder of this section presents a detailed analysis of the performance of the three components of the monitoring mechanism.

A. Fault detection

An autoencoder is a type of neural network with a symmetric structure. The structure can be divided into two mirrored sub-structures (i.e. the encoder and the decoder). For the purpose of fault detection in this paper, the autoencoder is constituted of: i) an input layer with $N$ inputs, ii) $H_A$ hidden layers with ReLU activation functions, and iii) an output layer with $N$ outputs. The input layer and the first $H_A/2$ hidden layers form the encoder, and the remaining $H_A/2$ hidden layers and the output layer form the decoder. The neural network was trained with the Adam optimiser with a batch size of $10^3$ samples and a constant learning rate of 0.001 for $10^5$ epochs in total. The training set consists of 300 of the 500 training files for the fault-free scenario. We used 100 of the remaining 200 training files as a validation set to optimize the number of hidden layers $H_A$ and neurons of the autoencoder. The resulting autoencoder has $H_A = 12$ hidden layers of size [52, 52, 48, 47, 46, 45, 45, 46, 47, 48, 52, 52]. The resulting false alarm and missed detection rates computed on the testing files are 0.12 and 0.17 respectively. In particular, we evaluated the false alarm rate using all 500 testing fault-free files and the missed detection rate on all the 500 faulty data files for each of the 17 faults under consideration.
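Once the autoencoder is trained, the detection rule of Section III-A is a simple threshold on the per-sample reconstruction RMSE. A minimal sketch, where `reconstruct` stands in for the trained autoencoder (any callable returning the reconstructed batch):

```python
import numpy as np

def detect_faults(X, reconstruct, threshold):
    """Flag samples whose reconstruction RMSE exceeds the threshold.

    X: (num_samples, N) array of sensor measurements.
    Returns a boolean array, True where a fault is declared.
    """
    X_hat = reconstruct(X)
    # Per-sample RMSE over the N reconstructed sensor values.
    rmse = np.sqrt(np.mean((X - X_hat) ** 2, axis=1))
    return rmse > threshold
```

The threshold itself can be chosen, for instance, as a high quantile of the RMSE observed on fault-free validation data, trading off false alarms against missed detections.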

Fig. 3: Recall of the DNN for training and testing data.
Fig. 4: Precision of the DNN for training and testing data.
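The per-fault recall and precision reported in Figures 3 and 4 follow the standard definitions. A small sketch over integer label arrays (function and variable names are illustrative):

```python
import numpy as np

def recall_precision(y_true, y_pred, fault):
    """Recall and precision of one fault class from label arrays."""
    tp = np.sum((y_pred == fault) & (y_true == fault))  # correctly identified
    fn = np.sum((y_pred != fault) & (y_true == fault))  # missed occurrences
    fp = np.sum((y_pred == fault) & (y_true != fault))  # spurious identifications
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    return recall, precision
```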

TABLE I: Dimensions of the discriminator and generator networks.

              Input layer size  Hidden layers size  Output layer size
Generator     624               [1144, 1144]        52
Discriminator 104               [200, 200]          52

B. Fault Classification

The DNN structure for the fault classifier is constituted by: i) an input layer with $N \times (L+1)$ inputs, ii) two hidden layers with ReLU activation functions, and iii) $N$ softmax functions as the output layer. The number of lags $L$ is 20, and the number of neurons in the two hidden layers is 250 and 60 respectively. We train the DNN model with the Adam optimiser with a batch size of 512 samples and a constant learning rate of 0.005 for $10^3$ epochs in total. The training set consists of 300 of the 500 training files for all the considered faults. Training data corresponding to normal operating conditions are not employed to train the DNN, since this classifier is only used after an anomaly has been detected. To tune the network hyperparameters (number of neurons, number of hidden layers, learning rate) we used 100 of the remaining 200 training data files not used during training. Finally, we tested the DNN using all 500 testing data files for each of the faults under consideration.

Figure 3 and Figure 4 show the recall and the precision for each fault for both training and testing data. With a minimum recall of 0.963 and 0.943 on the training set and on the test set, respectively, the DNN performs very well and correctly identifies most occurrences of each fault. The DNN also performs very well in terms of precision for each fault, i.e. the proportion of the samples identified by the model as an instance of a fault that corresponds to an actual occurrence of that fault.

C. GAN-based missing data imputation

The GAN was trained using 400 of the 500 training files for all the considered faults and the normal operating conditions. These are the same 400 files that we used to train and validate the fault detection autoencoder and the fault classification DNN. We used the remaining 100 training files as validation data for the GAN. It is important to highlight that these 100 files were not used in the training or validation of the fault detection autoencoder and the fault classification DNN, since these two models are used in the hyperparameter selection for the GAN. As mentioned earlier, the proposed architecture has to work as one interconnected system, meaning that to understand how the GAN model performs we cannot rely exclusively on validation metrics related to the GAN itself. We have to consider the system as a whole, and look at the impact of the data reconstructed by the GAN on the fault detection and classification modules. In particular, we used the precision and recall of the fault classification module and the missed detection and false alarm rates of the fault detection module to tune the GAN hyperparameters, i.e. to determine the number and size of the layers for the generator $G$ and discriminator $D$, the value of $\alpha$ in (3) and the missing probability, i.e. how many sensors are missing for each sample during training. We considered 4 different values for $\alpha$, 3 different options for the number of hidden neurons for $G$ and $D$, 2 options for the number of hidden layers for $G$ and $D$, and 3 values for the missing probability, resulting in 432 configurations of these hyperparameters. We randomly sampled 50 distinct configurations and trained and validated the GAN for each of them. Table II shows the chosen $\alpha$ parameter and the missing probability $p_m$, while Table I reports the selected size for the generator and the discriminator. As shown in Table I, both the generator $G$ and the discriminator $D$ have one input layer, two hidden layers and one output layer. The input of the generator $G$ consists of: the current sensor measurements (with missing values), the measurements for the last $L_G = 10$ time steps and the mask that indicates which measurements are missing. Considering that our system has $N = 52$ sensors, the size of the input layer is $N + N \times L_G + N = 624$. The input of the discriminator $D$ consists of: the current sensor measurements (with imputed values, i.e. the output of $G$) and the hint vector. Hence, the size of the input layer of $D$ is $N + N = 104$.

In the remainder of this section the performance of the

GAN-based imputation and the performance of the moving average (MA) method that was used as a benchmark are measured with respect to the impact on the fault-classification DNN and the fault detection autoencoder. We first ran $10^3$ simulations in which approximately 10% of the measurements are randomly missing, i.e. 5 of the $N = 52$ sensors are missing in each sample. We repeated the same experiment by considering persistent sensor failures. The results of this preliminary analysis show that in most cases the MA mechanism achieves a very good performance. However, on closer examination of these results we observed a drop in performance of the fault-classification DNN when specific sensors were missing.

TABLE II: GAN hyperparameters.

              α    p_m  Batch size  #epochs
Generator     100  0.1  1000        250,000
Discriminator /    0.1  1000        250,000

In light of this, we conducted a thorough analysis by systematically removing sensors one by one and measuring the impact for each fault when imputing the missing values using the MA. This analysis showed that only 12 sensors have a significant impact on the fault classification mechanism. The set $S_C$ of critical sensors is $S_C = \{0, 8, 12, 17, 18, 20, 21, 43, 44, 49, 50, 51\}$. Some of these sensors impact multiple faults (e.g. sensor 17), others are critical just for the detection of one of the faults (e.g. sensor 0). If any of the remaining 40 sensors is missing and its value is imputed using an MA or even using the average over fault-free data, the recall of the DNN is always greater than 0.80. On the other hand, if even one of the 12 identified critical sensors is missing, the MA-based imputation results in an unacceptable loss: in some cases the recall of the DNN can drop below 0.40. In light of these results, we focus our analysis on these 12 critical sensors to evaluate the performance of the imputation mechanism. The results of this analysis are shown in Figure 5. For each of the 500 testing files of each fault we simulated the unavailability of each of the critical sensors 10 time slots after the fault occurred and for the entire duration of the simulated process. Since the number of lags $L_G$ used by the GAN is equal to 10, for each testing file the first missing sensor measurement imputed by the GAN uses the actual sensor measurements. Henceforth, and until the end of the simulated process, the GAN relies on imputed values for the missing sensors. At each time slot, we performed the imputation using the generator of the trained GAN and the MA. We then fed the imputed data to the fault classifier DNN to determine the resulting performance of the two imputation mechanisms. Let us denote by $P(f)$ and $R(f)$ the precision and recall for fault $f$ of the DNN on the original testing data. We also denote by $R_{MA}(f, s)$ and $P_{MA}(f, s)$ the recall and precision for fault $f$ of the DNN when classifying data imputed using the MA for missing sensor $s$. In a similar way, we define $R_{GAN}(f, s)$ and $P_{GAN}(f, s)$ in the case of GAN-based imputation. By computing the average recall and precision over all the critical sensors for both imputation methods, we can compare them against $R(f)$ and $P(f)$. In particular, we define the difference in recall and precision of the two imputation methods with respect to the original recall and precision values as:

$$\Delta_R(x, f) = R(f) - \frac{\sum_{s \in S_C} R_x(f, s)}{|S_C|} \quad (6)$$

and

$$\Delta_P(x, f) = P(f) - \frac{\sum_{s \in S_C} P_x(f, s)}{|S_C|}, \quad (7)$$

where $x$ denotes MA or GAN. Figures 5a and 5b show the resulting distribution of these differences. As we can observe, the GAN imputation results in a shift of the distributions of the difference in recall and the difference in precision towards smaller values. In other words, the loss due to the missing sensor measurements is significantly mitigated by the adoption of the GAN-based imputation.

Figures 5c and 5d show the resulting distribution of the difference in recall and precision in case the measurements of 2 critical sensors are not available. In this case, for each of the 500 testing files of each fault we simulated the unavailability of each of the possible 2-combinations of the critical sensors (i.e. $\binom{12}{2} = 66$). As before, we consider the worst-case scenario of persistent sensor failures, i.e. the sensor measurements are unavailable for the duration of the simulated process. We then performed the imputation using the generator of the trained GAN and the MA and tested the DNN on the resulting imputed data. The results confirm that the GAN imputation significantly outperforms the MA also in this case.

As for the anomaly detection evaluation, we performed an analysis similar to that conducted for the fault-classification DNN. Table IV shows the comparison between the false alarm and the missed detection rates for the original testing files and for values imputed using the MA and the GAN. It is worth highlighting that to compute the missed detection rate the testing files corresponding to faulty operations have to be used, while the false alarm rate is computed on the testing files corresponding to normal operations. As before, for each of the 500 testing files of each of the 17 faults, used in this case to compute the missed detection rate, we simulated the unavailability of each of the 12 critical sensors 10 time slots after the fault occurred and for the entire duration of the simulated process¹. In the case of the false alarm rate, for each of the 500 testing files corresponding to normal operations, we simulated the unavailability of each of the 12 critical sensors 10 time slots after the beginning of the simulation. Table III summarizes the number of samples used for each case. As we can see from Table IV, both the MA and GAN methods perform as well as the original data with respect to the missed detection rate. However, the false alarm rate is affected by the use of imputed values, with the GAN outperforming the MA method also in this case.

¹ Since 20 measurements per hour are collected and faults are introduced in each testing run after 8 simulation hours, the number of faulty samples in each testing file is $960 - 8 \times 20 = 800$.

V. CONCLUSION

In this work we proposed a data-driven decomposition of the process to monitor the various indicators of the health
7

(a) Distribution of ∆R , 1 critical sensor missing. (b) Distribution of ∆P , 1 critical sensor missing.

(d) Distribution of ∆P , 2 critical sensors missing


(c) Distribution of ∆R , 2 critical sensors missing. .
Fig. 5: Performance comparison on the test data.

TABLE III: Number of samples used for false alarm and miss fine-tune the data imputation so as to minimize the impact of
detection rate computation. missing sensor data on the capability of the system to detect
No missing data MA/GAN and identify faults. We conducted a thorough evaluation of
False alarm rate 500 × 960 12 × 500 × 960 the proposed approach using the extended Tennessee Eastman
Miss detection rate 17 × 500 × 800 12 × 17 × 500 × 800
Process dataset. Results show that the GAN-imputed data
mitigate the impact on the fault detection and identification
TABLE IV: False alarm and miss detection rate on the test
even in the case of persistently missing measurements from
set.
sensors that are critical for the correct functioning of the
No missing data MA GAN monitoring system.
False alarm rate 0.12 0.17 0.14
Miss detection rate 0.17 0.17 0.17
ACKNOWLEDGMENT
This work is supported by CHIST-ERA (call 2017) via
of a machine/component or an entire industrial process. We the FIREMAN consortium, which is funded by the following
included in the monitoring system a module for data impu- national foundations: Academy of Finland (n. 326270, n.
tation that guarantees we have a functioning system in place 326301), Irish Research Council, Spanish and Catalan Gov-
even in case some of the critical sensors’ measurements are ernment under grants TEC2017-87456-P and 2017-SGR-891,
missing, due for example to hardware or network issues. The respectively. This material is also supported by the Air force
data imputation module is based on GAN and was optimized Office of Scientific Research under award number FA9550-18-
by taking into account the feedback from the fault detection 1-0214, and co-funded by Science Foundation Ireland (SFI)
and classification modules, rather than using a metric, e.g. the under the European Regional Development Fund with Grant
RMSE, specific to the GAN model alone. This allowed us to Numbers 13/RC/2077 and 13/RC/2077 P2.
8
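To make the evaluation procedure concrete, the following Python sketch reproduces its mechanics: a sensor becomes unavailable 10 time slots after the fault, the first imputed value is computed from actual measurements only, and every subsequent imputation sees previously imputed values inside its lag window. The GAN generator is stood in for by the MA baseline here, and the function names, array shapes, and the 52-sensor width are illustrative assumptions rather than the paper's exact implementation.

```python
import numpy as np

LG = 10          # number of lags used by the GAN generator (from the paper)
T_FAULT = 160    # fault introduced after 8 h x 20 samples/h (from the footnote)
T_TOTAL = 960    # samples per testing file (from the footnote)

def moving_average_impute(window):
    """MA baseline: impute a value as the mean of the last LG readings."""
    return window.mean(axis=0)

def rolling_impute(data, missing_sensor, t_miss, impute_fn, lags=LG):
    """From time slot t_miss onward, sensor `missing_sensor` is unavailable.

    The first imputed value uses only actual measurements; afterwards the
    imputer sees its own previously imputed values, as described in the text.
    """
    x = data.copy()
    for t in range(t_miss, len(x)):
        window = x[t - lags:t]  # last LG rows, possibly containing imputed values
        x[t, missing_sensor] = impute_fn(window)[missing_sensor]
    return x

def missed_detection_rate(y_pred_faulty):
    # fraction of faulty samples classified as normal (label 0)
    return np.mean(y_pred_faulty == 0)

def false_alarm_rate(y_pred_normal):
    # fraction of normal samples classified as faulty (label != 0)
    return np.mean(y_pred_normal != 0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=(T_TOTAL, 52))  # 52 sensors, as in the TEP dataset
    # sensor 5 becomes unavailable 10 time slots after the fault occurs
    imputed = rolling_impute(data, missing_sensor=5, t_miss=T_FAULT + 10,
                             impute_fn=moving_average_impute)
    print(imputed.shape)      # (960, 52)
    print(T_TOTAL - T_FAULT)  # 800 faulty samples per file, matching the footnote
```

Plugging a trained generator in place of `moving_average_impute` and feeding the imputed matrix to the fault classifier DNN yields the R_MA/R_GAN comparison described above.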

REFERENCES

[1] E. Sisinni, A. Saifullah, S. Han, U. Jennehag, and M. Gidlund, "Industrial internet of things: Challenges, opportunities, and directions," IEEE Internet of Things Journal.
[2] Ericsson, "Cellular networks for massive IoT," [Online]. Available: https://www.ericsson.com/assets/local/publications/whitepapers/wp_iot.pdf, 2020.
[3] Y. Liu, T. Dillon, W. Yu, W. Rahayu, and F. Mostafa, "Missing value imputation for industrial IoT sensor data with large gaps," IEEE Internet of Things Journal, vol. 7, no. 8, pp. 6855–6867, 2020.
[4] F. Civerchia, S. Bocchino, C. Salvadori, E. Rossi, L. Maggiani, and M. Petracca, "Industrial internet of things monitoring solution for advanced predictive maintenance applications," Journal of Industrial Information Integration, vol. 7, pp. 4–12, 2017.
[5] J. Wan, S. Tang, D. Li, S. Wang, C. Liu, H. Abbas, and A. Vasilakos, "A manufacturing big data solution for active preventive maintenance," IEEE Transactions on Industrial Informatics, vol. 13, no. 4, pp. 2039–2047, 2017.
[6] B. Cheng, J. Zhang, G. Hancke, S. Karnouskos, and A. Colombo, "Industrial cyberphysical systems: Realizing cloud-based big data infrastructures," IEEE Industrial Electronics Magazine, vol. 12, no. 1, pp. 25–35, 2018.
[7] W. Yu, T. Dillon, F. Mostafa, W. Rahayu, and Y. Liu, "A global manufacturing big data ecosystem for fault detection in predictive maintenance," IEEE Transactions on Industrial Informatics, vol. 16, no. 1, pp. 183–192, 2020.
[8] P. Nardelli, C. Papadias, C. Kalalas, H. Alves, I. T. Christou, I. Macaluso, N. Marchetti, R. Palacios, and J. Alonso-Zarate, "Framework for the identification of rare events via machine learning and IoT networks," in 2019 16th International Symposium on Wireless Communication Systems (ISWCS). IEEE, 2019, pp. 656–660.
[9] A. González-Vidal, P. Rathore, A. Rao, J. Mendoza-Bernal, M. Palaniswami, and A. Skarmeta-Gómez, "Missing data imputation with Bayesian maximum entropy for internet of things applications," IEEE Internet of Things Journal.
[10] M. Ammar, G. Russello, and B. Crispo, "Internet of Things: A survey on the security of IoT frameworks," Journal of Information Security and Applications, vol. 38, pp. 8–27, 2018. [Online]. Available: https://doi.org/10.1016/j.jisa.2017.11.002
[11] X. Jia, Q. Feng, T. Fan, and Q. Lei, "RFID technology and its applications in Internet of Things (IoT)," 2012 2nd International Conference on Consumer Electronics, Communications and Networks, CECNet 2012 - Proceedings, pp. 1282–1285, 2012.
[12] J. Mineraud, O. Mazhelis, X. Su, and S. Tarkoma, "A gap analysis of Internet-of-Things platforms," Computer Communications, vol. 89–90, pp. 5–16, 2016. [Online]. Available: http://dx.doi.org/10.1016/j.comcom.2016.03.015
[13] H. He and E. A. Garcia, "Learning from imbalanced data," IEEE Transactions on Knowledge and Data Engineering, vol. 21, no. 9, pp. 1263–1284, 2009.
[14] B. Krawczyk, "Learning from imbalanced data: open challenges and future directions," Progress in Artificial Intelligence, vol. 5, no. 4, pp. 221–232, 2016.
[15] P. Leitao, S. Karnouskos, L. Ribeiro, J. Lee, T. Strasser, and A. W. Colombo, "Smart agents in industrial cyber–physical systems," Proceedings of the IEEE, vol. 104, no. 5, pp. 1086–1101, 2016.
[16] S. J. Oks, A. Fritzsche, and K. M. Möslein, "An application map for industrial cyber-physical systems," in Industrial Internet of Things. Springer, 2017, pp. 21–46.
[17] S. Yin, J. J. Rodriguez-Andina, and Y. Jiang, "Real-time monitoring and control of industrial cyberphysical systems: With integrated plant-wide monitoring and control framework," IEEE Industrial Electronics Magazine, vol. 13, no. 4, pp. 38–47, 2019.
[18] W. Dai, H. Nishi, V. Vyatkin, V. Huang, Y. Shi, and X. Guan, "Industrial edge computing: Enabling embedded intelligence," IEEE Industrial Electronics Magazine, vol. 13, no. 4, pp. 48–56, 2019.
[19] H. Hellstrom, M. Luvisotto, R. Jansson, and Z. Pang, "Software-defined wireless communication for industrial control: A realistic approach," IEEE Industrial Electronics Magazine, vol. 13, no. 4, pp. 31–37, 2019.
[20] L. H. Chiang, R. D. Braatz, and E. L. Russel, "Fault detection and diagnosis in industrial systems," 2001.
[21] S. Yin and O. Kaynak, "Big data for modern industry: challenges and trends [point of view]," Proceedings of the IEEE, vol. 103, no. 2, pp. 143–146, 2015.
[22] D. Zurita, M. Delgado, J. A. Carino, and J. A. Ortega, "Multimodal forecasting methodology applied to industrial process monitoring," IEEE Transactions on Industrial Informatics, vol. 14, no. 2, pp. 494–503, 2017.
[23] T. Chen, X. Liu, B. Xia, W. Wang, and Y. Lai, "Unsupervised anomaly detection of industrial robots using sliding-window convolutional variational autoencoder," IEEE Access, vol. 8, pp. 47072–47081, 2020.
[24] P. Hu and J. Zhang, "5G-enabled fault detection and diagnostics: How do we achieve efficiency?" IEEE Internet of Things Journal, vol. 7, no. 4, pp. 3267–3281, 2020.
[25] C. Kalalas and J. Alonso-Zarate, "Sensor data reconstruction in industrial environments with cellular connectivity," in 2020 IEEE 31st Annual International Symposium on Personal, Indoor and Mobile Radio Communications, 2020, pp. 1–6.
[26] J. Yoon, J. Jordon, and M. Schaar, "GAIN: Missing data imputation using generative adversarial nets," in International Conference on Machine Learning. PMLR, 2018, pp. 5689–5698.
[27] C. Rieth, B. Amsel, R. Tran, and M. Cook, "Additional Tennessee Eastman process simulation data for anomaly detection evaluation," Harvard Dataverse, vol. 1, 2017.
[28] Y. Zhang, "Enhanced statistical analysis of nonlinear processes using KPCA, KICA and SVM," Chemical Engineering Science, vol. 64, no. 5, pp. 801–811, 2009.
