
2020 IEEE 9th Data Driven Control and Learning Systems Conference

November 20-22, 2020, Liuzhou, China

A Deep Learning Model with Adaptive Learning Rate for Fault Diagnosis
Xiaodong Zhai1, Fei Qiao1
1. School of Electronics and Information Engineering, Tongji University, Shanghai 201804
E-mail: [email protected], [email protected]

Abstract: With the increasing amount of data in the field of equipment fault diagnosis, deep learning plays an increasingly important role in the diagnosis process, where timeliness requirements are high and diagnosis results must be obtained both accurately and promptly. However, as the number of network layers grows, the training time of a deep learning model becomes longer. The learning rate plays an important role in model training, and a well-designed learning rate adjustment strategy can effectively reduce the training time and satisfy the requirements of fault diagnosis. At present, many deep learning models adopt a globally uniform learning rate, which is unreasonable when applied to parameters of different kinds. This paper designs an adaptive learning rate strategy for the weight and bias parameters of a deep learning model, respectively. Specifically, the strategy combines a learning rate schedule based on stochastic gradient descent for the weights with a power exponential learning rate schedule for the biases. Experiments are carried out to validate the effectiveness of the proposed strategy. The results suggest that it can reduce the training time and reconstruction error rate of the deep learning model and improve the classification accuracy of fault diagnosis.
Key Words: Deep learning, Learning rate, Adaptive, Fault diagnosis

1 Introduction

With the development of modern industrial technology, the safety, stability, reliability and operation efficiency of equipment have become the core competitiveness of manufacturing enterprises [1], and equipment management has become an important field in enterprise management. In the process of production, the performance of equipment deteriorates with the increase of service time, and various faults occur during equipment operation. When equipment fails, production efficiency is reduced; more seriously, the equipment may be shut down, and malignant accidents such as machine damage and human death may occur. Therefore, it is particularly important to find and identify the types and locations of faults in time. With the development of computer technology, many artificial intelligence algorithms have been applied in the field of equipment fault diagnosis. It was predicted that the growing Internet of Things would connect 30 billion devices by 2020 [2], and this huge amount of data will also promote innovation in the monitoring of cyber-physical systems in Industry 4.0. With the increasing amount of data, the advantages of deep learning in dealing with large-scale data are highlighted.

The motivation of deep learning is to build and simulate the neural network of the human brain for analysis and learning. It imitates the mechanism of the human brain to interpret data such as images, sounds and texts [3-5]. Deep learning is essentially a multi-layer neural network model. By combining low-level features, it obtains a higher-level and more abstract feature representation to discover the distributed feature representation of data. At the same time, it weakens the adverse effects of unrelated factors and improves the accuracy of classification and prediction [6]. Meanwhile, the excellent performance of deep learning mainly relies on a large amount of training data and a deep network structure; as a result, the training time of a deep learning model is generally longer than that of other machine learning algorithms [7]. Therefore, how to speed up the training of deep learning models is a problem worth intensive study, especially when they are applied in engineering practice.

* This work is supported by the National Natural Science Foundation of China (No. 71690234, 61873191), the National Science and Technology Major Project (2017-V-0011-0063) and the National Key R&D Program of China (No. 2017YFE0101400).

2 Related Work

Traditional fault diagnosis methods include model driven methods, knowledge driven methods, and data driven methods. However, the first two are often limited by professional technology, expert experience and other knowledge. In addition, with the continuous development of equipment status monitoring technology, more and more equipment status data can be utilized. As a result, data driven methods based on machine learning and artificial intelligence have attracted attention in recent years [8-9]. Data driven methods can discover the intrinsic law of equipment status trends and estimate the fault types of equipment with advanced methods based on equipment status data. With the increasing amount of equipment status data, more and more attention has been paid to deep learning among machine learning methods.

There are two significant kinds of parameters in a deep learning model: weights and biases. However, traditional deep learning models often use a single globally uniform constant learning rate for both, and the setting of this constant requires prior experience. Meanwhile, it should be noted that there are a large number of weight and bias parameters in a deep learning model, and they are two different types of parameters that play different roles. With this in mind, it is unreasonable to provide the same learning rate strategy for different parameters. A globally uniform learning rate is not necessarily suitable for all parameters, and it will reduce the iteration efficiency and increase the training time of the deep learning model.

At present, there have been some studies on adjustment strategies for the learning rate in deep learning models. A downward trend learning rate strategy can significantly improve the convergence speed and reduce the training time of the model [10], with the learning rate adjusted according to the characteristics of the objective function. In many cases, a learning rate strategy with a downward trend is still a relatively simple and effective choice. In 2010, Duchi et al. proposed an adaptive all-parameter learning rate strategy, AdaGrad [11]. This method designs a learning rate for each parameter during deep learning model training, and uses the sum of gradients to ensure the downward trend of the learning rate. It was the first all-parameter learning rate strategy and is an effective way to accelerate the convergence of deep learning models. In 2013, Senior et al. proposed an improved learning rate strategy, AdaDec [12], which is based on AdaGrad. In this strategy, each learning rate is simplified from using the sum of squares of all previous gradients to using the sum of squares of the current gradient and the gradient of the last round. Its convergence speed is further improved compared with AdaGrad, and the strategy has achieved good results in practical applications.

The above literature provides some feasible ways to increase the iteration efficiency and reduce the training time of deep learning models. However, these methods do not distinguish the weight and bias in the deep learning model and use a uniform adaptive learning rate strategy, which has some limitations. Because these existing studies cannot solve the above problems well, this paper proposes a deep learning model with an adaptive learning rate for fault diagnosis, together with specific improvement schemes. In the process of model training, the learning rate is adaptively adjusted according to the current gradient value of the objective loss function in each iteration, based on stochastic gradient descent (SGD), and independent learning rates are designed for the weight and bias separately. This method speeds up the iteration process of the model and weakens the dependence of the model on the initial value of the learning rate. The method is applied to a gear fault diagnosis process, and comparison experiments are carried out to demonstrate the improvement in convergence efficiency and classification accuracy of the proposed methodology.

3 Methodology

3.1 Deep Learning Model

The essence of a deep learning model is a kind of multi-layer neural network. A general neural network has only a few layers, but a deep learning model contains a large number of hidden layers, so it has a strong ability for feature learning. Through multi-layer non-linear transformations, it can learn deep abstract features from complex training data and describe the intrinsic information of the data. In order to avoid falling into local optima, layer-by-layer training is usually adopted to realize parameter training of the multi-layer neural network in deep learning.

At present, there are many mature deep learning models, and this paper focuses on the Deep Belief Network (DBN). The basic unit of a DBN is the Restricted Boltzmann Machine (RBM) [13]. A DBN is formed by layers of unsupervised RBMs which are trained and stacked. In the model proposed in this paper, a Softmax regression structure, which is often used for multi-classification, is added to the top layer. In order to achieve multi-classification, it maps the outputs of multiple neurons into the interval (0, 1), which can be regarded as probabilities.

A basic model structure of the DBN is shown in Fig. 1. Firstly, a layer-by-layer training algorithm is adopted to complete pre-training; then, through supervised reverse fine-tuning training, the whole deep neural network is trained to realize feature learning and classification.

Fig. 1: Basic structure of DBN

3.2 Weight and Bias

The basic unit of the deep learning model is the neuron, and its structure is shown in Fig. 2, where v_i is the input neuron, x_i is the state of the input neuron, ω_ij is the connection weight between the input neuron and the output neuron, b_j is the bias of the output neuron, f(·) is the activation function, and y_j is the state of the output neuron. The mathematical expressions are as follows:

y_j = f(u_j)    (1)

u_j = Σ_{i=1}^{I} ω_ij x_i + b_j    (2)

In the model of deep learning, data are expressed by connection weights, and they are distinguished by sharing weights and biases. Therefore, weights are important for feature extraction and layer-by-layer abstraction of the deep learning model. According to formula (2), the bias b_j can be regarded as a neuron with the state of b_j and a weight of 1. It can be seen as adding a dimension to the original data, which is beneficial to data differentiation, especially when the dimension of the input data is low. However, when the dimension of the input data is high, which is enough to distinguish the data, the role of the bias is relatively weakened. Therefore, for the fault diagnosis model based on deep learning, when the dimension of the input data is high, the amount of calculation devoted to the bias can be reduced appropriately.
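To make formulas (1) and (2) concrete, the following is a minimal NumPy sketch of the forward computation of a single neuron. The sigmoid activation and the example values are assumptions for illustration only; the paper does not specify the exact form of f(·).

import numpy as np

def sigmoid(u):
    # Assumed activation f(.); the paper does not fix its exact form.
    return 1.0 / (1.0 + np.exp(-u))

def neuron_forward(x, w_j, b_j):
    """Formulas (1)-(2): u_j = sum_i w_ij * x_i + b_j, y_j = f(u_j).

    x   : (I,)  states of the input neurons
    w_j : (I,)  connection weights w_ij feeding output neuron j
    b_j : scalar bias of output neuron j
    """
    u_j = np.dot(w_j, x) + b_j   # formula (2)
    return sigmoid(u_j)          # formula (1)

# Illustrative usage: one output neuron with three inputs.
x = np.array([0.2, 0.5, 0.1])
w_j = np.array([0.4, -0.3, 0.8])
y_j = neuron_forward(x, w_j, b_j=0.05)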

Fig. 2: The network graph of an artificial neuron

At present, the adjustment of weight and bias generally adopts a globally uniform way. However, for the weight, designing a learning rate for each weight parameter that adjusts its increment adaptively according to its own state can accelerate the stable expression of the input data and improve the convergence speed of the model. Meanwhile, in the process of dealing with high-dimensional data, although the role of the bias is weakened, it will still slow down the convergence of the model if its learning rate adjustment strategy is improper. With this in mind, we should select a function with less computation for the bias, on the basis of ensuring a downward trend of its learning rate.

3.3 Definition of Learning Rate

For an RBM model with parameters θ = {ω_ij, b1_i, b2_j}, as shown in Fig. 3, the upper layer is the hidden unit layer and the lower layer is the visible unit layer. The connection between visible units and hidden units is bi-directional, but neurons in the same layer are not connected with each other. According to probability theory, the hidden units are mutually independent when the visible unit states are given, and the visible units are mutually independent when the hidden unit states are given. In the process of model training and calculation, the updating criteria of the model parameters are as follows [14]:

Δω_ij = ε (E_data(v_i h_j) − E_model(v_i h_j))    (3)

Δb1_i = η (E_data(v_i) − E_model(v_i))    (4)

Δb2_j = λ (E_data(h_j) − E_model(h_j))    (5)

where ε is the learning rate of the weight between visible unit v_i and hidden unit h_j, Δω_ij is the weight increment, η is the learning rate of the bias in the visible unit layer, Δb1_i is its bias increment, λ is the learning rate of the bias in the hidden unit layer, Δb2_j is its bias increment, E_data is the expectation obtained from the input data, and E_model is the expectation obtained from the model. The set {ε, η, λ} is called the learning rate of the model. The increments Δω_ij, Δb1_i and Δb2_j obtained from the above formulas are used to update the corresponding weights and biases in formula (2). With the constant updating of the parameters, the training process of the model continues until the iteration termination condition is reached.

Fig. 3: The network graph of an RBM

3.4 Reverse Fine-tuning Training

The training of an RBM is unsupervised, and according to the distribution of the training data, the initial values of the DBN model parameters can be obtained. The reverse fine-tuning stage is a process of supervised learning: it fine-tunes the DBN layer parameters from top to bottom according to the known labels. The RBM is a typical energy model, and the loss cost function of the model can be obtained on the basis of the defined energy function. In the process of model training, the aim of reverse fine-tuning training is to minimize the value of the loss cost function by adjusting the model parameters. The gradient descent method is widely used [15-16] to obtain appropriate model parameters and minimize the loss cost function. Its general mathematical expression is:

θ(t+1) = θ(t) − α(t) ∇L(θ(t))    (6)

where L(θ) is defined as the loss cost function on the data set, ∇L(θ) is the gradient of the loss cost function, θ(t+1) is the model parameter at iteration t+1, θ(t) is the model parameter at iteration t, i.e. a weight or bias parameter, and α(t) is the learning rate, i.e. the step size, which generally takes a small positive value. The gradient descent method can solve most optimization problems quickly. However, because deep learning is usually based on a large amount of training data, the calculation of ∇L(θ) is huge, and may even be infeasible when the gradient descent method is used to optimize the model parameters.

Therefore, stochastic gradient descent (SGD) [17] is adopted to optimize the parameters of the deep learning model in this paper. Essentially, SGD is a variant of the gradient descent method. Unlike gradient descent, SGD calculates the gradient of the loss cost function from some randomly selected samples of the training data. Its mathematical expressions are as follows:

θ(t+1) = θ(t) − α(t) ∇L_m(θ(t)),  m ∈ {1, 2, 3, ..., M}    (7)

∇L_m(θ) = (1/N) Σ_{n=1}^{N} ∇l_n(θ)    (8)

where ∇L_m(θ) is the gradient of the loss function calculated from the m-th batch of data, and N is the number of samples in the m-th batch. As can be seen from these two formulas, the computational complexity of SGD is greatly reduced compared with the gradient descent method.

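As an illustration of the mini-batch update in formulas (7) and (8), the sketch below shows one SGD step on a generic parameter vector. The helper grad_fn and all names are hypothetical, since the paper does not specify how the per-sample gradients ∇l_n(θ) are computed.

import numpy as np

def sgd_step(theta, batch, grad_fn, lr):
    """One update of formula (7): theta(t+1) = theta(t) - alpha(t) * grad_Lm(theta(t)).

    theta   : current parameter vector (a flattened weight or bias)
    batch   : the N samples forming the m-th mini-batch
    grad_fn : hypothetical helper returning the gradient of the per-sample
              loss l_n at theta (not specified by the paper)
    lr      : learning rate alpha(t), i.e. the step size
    """
    # Formula (8): average the per-sample gradients over the batch.
    grad = np.mean([grad_fn(theta, sample) for sample in batch], axis=0)
    return theta - lr * grad

# Usage sketch: iterate over the M mini-batches of one epoch.
# for batch in batches:
#     theta = sgd_step(theta, batch, grad_fn, lr=0.001)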
3.5 Learning Rate Scheduling

For the learning rate α(t) in formula (7), traditional DBN models usually set a globally uniform constant based on experience. However, as the number of iterations increases, a more precise iteration step size is needed. A constant learning rate slows down the convergence of the model because it keeps the step size of each iteration unchanged throughout the iteration process. A good learning rate strategy can significantly improve the convergence speed and operation efficiency of a deep learning model. In terms of mechanism, a full-parameter learning rate further reduces both the training time of the model and its final classification error. Based on AdaGrad and AdaDec, and combined with the SGD method, the mathematical expressions of the learning rate strategy are designed according to the different characteristics and functions of weight and bias, and are formulated as follows:

ε_ij(t) = ε_ij(t−1) / (K + g(t)²)    (9)

η_i(t) = η_i(0) (1 + t/R)^(−q)    (10)

λ_j(t) = λ_j(0) (1 + t/R)^(−q)    (11)

where ε_ij(t) is the learning rate of the weight in the next round, ε_ij(t−1) is the learning rate in the current round, g(t)² is the sum of squares of the gradients of the loss function in the current round, and K is a constant term, generally equal to one, which mainly ensures that the learning rate is bounded and in a downward trend. η_i(t) and λ_j(t) are the learning rates of the bias terms for the visible and hidden units respectively, which use power exponential functions with a downward trend, where R is the number of iterations and the value of q is generally 0.75.

According to the above formulas, the main idea of the learning rate adjustment strategy is that a larger learning rate makes the value of the target loss function decrease rapidly in the initial stage of the iteration process. As the iterations proceed, the learning rate decreases gradually, which can accelerate the stable expression of the data samples and help the model find the convergence point of the data samples more quickly and steadily. In this paper, the learning rate of the weight is adaptively adjusted using the current gradient value on the basis of the learning rate of the previous round. As a result, the learning rate describes the current running state of the model more accurately and reduces the amount of computation on historical gradient data compared with other adaptive methods [11]. For a model dealing with fault diagnosis, the original data is sometimes high-dimensional, which relatively weakens the role of the bias. Therefore, a simple power exponential function is chosen as the learning rate strategy for the bias; it simply ensures that the learning rate has a downward trend, so as to further reduce the amount of calculation and improve the final classification accuracy.

3.6 Evaluating Indicator

In this paper, the Reconstruction Error Rate (RER) of the test data in the reverse fine-tuning stage is used as the quantitative evaluation index, which can well describe the convergence state of the model. For a test data set with N samples, the mathematical expressions of the reconstruction error rate are as follows:

RER = (1/N) Σ_{n=1}^{N} MSE(data(n))    (12)

MSE(data) = (1/D) Σ_{d=1}^{D} (In(data(d)) − Out(data(d)))²    (13)

Formula (13) is the calculation formula of the Mean Squared Error (MSE), where In(data) is the input data of the model, Out(data) is the data generated by the model, and D is the dimension of each sample. Under the condition of the same number of iterations, when the reconstruction error rate is high, the convergence of the model is bad; on the contrary, when the reconstruction error rate is low, the convergence of the model is good.

4 Case Study

In order to verify the performance of the learning rate strategy proposed in this paper, a constant learning rate (Cons) is introduced for comparison. The experiment in Section 4.1 was designed to compare the convergence and computational complexity of Cons, AdaGrad, AdaDec and the learning rate strategy proposed in this paper. On this basis, the classification accuracy of each method is compared and analyzed in Section 4.2. Finally, in order to verify the reliability of setting the learning rates of weight and bias separately, the experiment in Section 4.3 was designed to elaborate the relationship between weight and bias.

In this paper, a data set of rolling bearings is used [18], including bearings in good condition, bearings with peeling in the outer ring, bearings with peeling in the inner ring, bearings with peeling on a ball, and bearings with a broken cage. The neural network model used in the experiment has a five-layer structure. The number of neurons in the input layer is 1000, and the numbers of neurons in the three hidden layers are 1000, 500 and 250 respectively. The number of neurons in the output layer is 5. The initial connection weights between layers obey a Gaussian distribution with mean 0 and variance 0.001. The initial biases of the first layer are determined by the training data, and the initial biases of the other layers are set to 0. All the methods mentioned in the experiment adopt the same initial values of the learning rate: the initial value in the pre-training stage is 0.1, and the initial value in the reverse fine-tuning stage is 0.001. The reconstruction error rate of the model is calculated by formulas (12) and (13).
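Before turning to the results, the following is a minimal Python sketch of the learning rate schedules in formulas (9)-(11). The constants K = 1 and q = 0.75 follow the text of Section 3.5; the function names and the placeholder gradient value are illustrative assumptions only.

def weight_lr_update(eps_prev, grad_sq_sum, K=1.0):
    """Formula (9): eps_ij(t) = eps_ij(t-1) / (K + g(t)^2).

    eps_prev    : learning rate of this weight in the current round
    grad_sq_sum : sum of squared gradients of the loss in the current round, g(t)^2
    K           : constant term, generally 1, keeps the rate bounded and decreasing
    """
    return eps_prev / (K + grad_sq_sum)

def bias_lr(lr0, t, R, q=0.75):
    """Formulas (10)-(11): power exponential schedule lr(t) = lr(0) * (1 + t/R)^(-q).

    lr0 : initial bias learning rate (eta_i(0) or lambda_j(0))
    t   : current iteration index
    R   : total number of iterations
    """
    return lr0 * (1.0 + t / R) ** (-q)

# Illustrative loop; 0.1 is the pre-training initial learning rate used in the case study.
eps = 0.1
for t in range(1, 101):
    g_sq = 0.05                      # placeholder for the current round's squared-gradient sum
    eps = weight_lr_update(eps, g_sq)
    eta = bias_lr(0.1, t, R=100)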

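Likewise, a small sketch of the evaluating indicator of Section 3.6 (formulas (12)-(13)), assuming the test inputs and their reconstructions are available as NumPy arrays of shape (N, D); the function name is an assumption for illustration.

import numpy as np

def reconstruction_error_rate(inputs, reconstructions):
    """Formulas (12)-(13): RER is the mean over the N samples of the per-sample MSE,
    where each MSE averages the squared differences between the model input In(data)
    and the model output Out(data) over the D components of a sample.

    inputs, reconstructions : arrays of shape (N, D)
    """
    per_sample_mse = np.mean((inputs - reconstructions) ** 2, axis=1)  # formula (13)
    return np.mean(per_sample_mse)                                     # formula (12)

# Usage sketch: rer = reconstruction_error_rate(test_inputs, dbn_outputs)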
4.1 Reconstruction Error Rate Comparison

In this section, the influence of the four methods on the convergence of the deep learning model is compared. Meanwhile, the running time of the model over 100 iterations is counted. The experimental results are shown in Fig. 4.

Fig. 4: Comparison of the convergence performance of four methods

According to Fig. 4, we can see that Cons, AdaGrad, AdaDec and the learning rate strategy proposed in this paper all make the reconstruction error rate of the model decrease with the increase of iterations and stabilize eventually. Over the whole iteration process, the reconstruction error rate curves of the four strategies are close, but the curves of the method proposed in this paper are obviously lower than those of the other three strategies. When the number of iterations reaches 100, the reconstruction error rate of the constant learning rate is 7.81, that of AdaGrad is 7.61 and that of AdaDec is 7.90. The reconstruction error rate of the model proposed in this paper is 7.26, which shows that the convergence of the model with the proposed learning rate strategy is better.

At the same time, the training time of the four strategies was counted during the experiment; the time of the constant learning rate was the shortest, followed by the learning rate strategy proposed in this paper, AdaGrad and AdaDec. Although the training time of the proposed learning rate strategy is longer than that of the constant learning rate, the disparity is not significant. However, to achieve the same convergence effect, the constant learning rate needs more iterations and more training time. Considering both the reconstruction error rate and the training time, the learning rate strategy proposed in this paper is obviously better than the other three strategies.

4.2 Classification Accuracy Comparison

The classification accuracies of the four strategies as they vary over the iteration process are shown in Fig. 5. From the figure, we can see that with increasing iterations, the classification accuracies of the four learning rate strategies increase as well. When the number of iterations reaches 100, the classification accuracy of the learning rate strategy proposed in this paper reaches 99.2%, while the classification accuracies of the other three learning rates are 98.7%, 98.1% and 98.3%, respectively. In addition, the curve of the learning rate strategy proposed in this paper is higher than those of the other three strategies over the whole iteration process, that is, its comprehensive performance is better.

Fig. 5: Classification accuracy comparison

4.3 Function Contrast of Weight and Bias

In this paper, we set different learning rate strategies for weight and bias, and the learning rate strategy for bias is only set as a power exponential function. In order to verify the effectiveness of this choice, three learning rate strategies are compared in this section: a strategy in which the weight and bias learning rates are both constant (Cons+Cons), a strategy in which the weight learning rate is constant and the bias learning rate is 0 (Cons+Zero), and a strategy in which the weight learning rate is 0 and the bias learning rate is constant (Zero+Cons). The main purpose is to compare and analyze the influence of weight and bias on the convergence of the deep learning model. The experimental results are shown in Fig. 6.

Fig. 6: The influence of weight and bias on the convergence of the deep learning model

According to Fig. 6, it can be seen that the two learning rate strategies in which the weight and bias learning rates are constant, and in which the weight learning rate is constant with the bias learning rate being zero, make the reconstruction error rate of the model decrease gradually over the iterations. When the number of iterations reaches 100, the reconstruction errors of the two models are 7.81 and 7.88 respectively; with more iterations, there is little difference between the two results. However, the learning rate strategy in which the weight learning rate is zero and the bias learning rate is constant does not reduce the reconstruction error rate over the iterations, and it keeps a high reconstruction error rate. Therefore, we can conclude that the weight plays a decisive role in the process of model convergence, while the bias term does not play an important role in the same process.

5 Conclusions

In this paper, a deep learning model with adaptive learning rate for fault diagnosis is proposed. According to the different roles of weight and bias in the deep learning model, the SGD method is used to design a suitable learning rate strategy for the weight parameters, and a power exponential function is chosen as the learning rate strategy for the bias parameters. Experiments show that the strategy proposed in this paper can extract the characteristics of data samples better, reduce the reconstruction error rate of data samples, improve the training efficiency and classification accuracy of the model, and perform better than some existing learning rate strategies.

However, there are still some areas for further improvement. For example, when the dimension of the data set is low, the learning rate strategy proposed in this paper needs to be adjusted appropriately, and further study is needed when the deep learning model with adaptive learning rate is applied to more practical problems.

References

[1] E. Pan, W. Z. Liao, and M. L. Zhuo, Periodic preventive maintenance policy with infinite time and limit of reliability based on health index, Journal of Shanghai Jiaotong University, 15(2): 231-235, 2010.
[2] C. MacGillivary, V. Turner, and D. Lund, Worldwide internet of things (IoT) 2013-2020 forecast: Billions of things, trillions of dollars, IDC Q1 Doc, 243661(3): 1-22, Oct. 2013.
[3] L. Li, and S. S. Yu, Image quality assessment based on deep learning model, Journal of Huazhong University of Science and Technology (Nature Science Edition), 44(12): 70-75, 2016.
[4] S. K. Kim, Y. J. Park, and S. Lee, Voice activity detection based on deep belief networks using likelihood ratio, Journal of Central South University, 23: 145-149, 2016.
[5] K. Liu, and W. Y. Yuan, Short texts feature extraction and clustering based on auto-encoder, Acta Scientiarum Naturalium Universitatis Pekinensis, 51(2): 282-288, 2015.
[6] Y. LeCun, Y. Bengio, and G. E. Hinton, Deep learning, Nature, 521: 436-444, 2015.
[7] K. Liu, L. M. Zhang, and X. L. Fan, New image deep feature extraction based on improved CRBM, Journal of Harbin Institute of Technology, 48(5): 155-159, 2016.
[8] Y. Peng, and M. Dong, A prognosis method using age-dependent hidden semi-Markov model for equipment health prediction, Mechanical Systems & Signal Processing, 25(1): 237-252, 2011.
[9] T. P. Hong, and B. S. Yang, Estimation and forecasting of machine health condition using ARMA/GARCH model, Mechanical Systems & Signal Processing, 24(2): 546-558, 2010.
[10] G. E. Hinton, and R. R. Salakhutdinov, Reducing the dimensionality of data with neural networks, Science, 313(5786): 504-507, 2006.
[11] J. Duchi, E. Hazan, and Y. Singer, Adaptive subgradient methods for online learning and stochastic optimization, The Journal of Machine Learning Research, 12: 2121-2159, 2011.
[12] A. Senior, G. Heigold, M. A. Ranzato, and K. Yang, An empirical study of learning rates in deep neural networks for speech recognition, in Proceedings of the 2013 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2013: 6724-6728.
[13] J. W. Liu, Y. Liu, and X. L. Luo, Research and development on Boltzmann machine, Journal of Computer Research and Development, 51(1): 1-16, 2014.
[14] R. Salakhutdinov, and G. Hinton, An efficient learning procedure for deep Boltzmann machines, Neural Computation, 24(8): 1967-2006, 2012.
[15] H. Robbins, and S. Monro, A stochastic approximation method, The Annals of Mathematical Statistics, 22(3): 400-407, 1951.
[16] Z. You, X. R. Wang, and B. Xu, Exploring one pass learning for deep neural network training with averaged stochastic gradient descent, in Proceedings of the 2014 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2014: 6854-6858.
[17] S. Klein, J. P. W. Pluim, M. Staring, and M. A. Viergever, Adaptive stochastic gradient descent optimisation for image registration, International Journal of Computer Vision, 81(3): 227-239, 2009.
[18] https://download.csdn.net/download/qq_34133884/11017362
