Nafips-Cbsf2018 Paper 60

Paper published in International Conference of Fuzzy Logic (North American Fuzzy Information Processing Society) - 2018

Adaptive Fuzzy Learning Vector Quantization (AFLVQ)

for time series classification

Renan Fonteles Albuquerque1 , Paulo D. L. de Oliveira1 , and Arthur P. de S. Braga1


1 Federal University of Ceará, Brazil

[email protected], [email protected], [email protected]

Abstract. Over the past decade, a variety of research fields have studied
high-dimensional temporal data for pattern recognition, signal processing,
fault detection and other purposes. Time series data mining has been
constantly explored in the literature, and recent research shows that there are
important issues yet to be addressed in the field. Currently, neural-network-based
algorithms are frequently adopted for solving classification problems. However,
these techniques generally do not take advantage of expert knowledge about the
processed data. In contrast, fuzzy-based techniques use expert knowledge to
perform data mining classification, but they lack adaptive behaviour. In this
context, Hybrid Intelligent Systems (HIS) have been designed around the concept
of combining the adaptive characteristics of neural networks with the informative
knowledge of fuzzy logic. Based on HIS, we introduce a novel approach to Learning
Vector Quantization (LVQ) called Adaptive Fuzzy LVQ (AFLVQ), which combines a
Fuzzy-LVQ neural network with adaptive characteristics. In this paper, we
conducted experiments with a time series classification problem known as Human
Activity Recognition (HAR), using signals from a tri-axial accelerometer and
gyroscope. We performed multiple experiments with different LVQ-based algorithms
in order to evaluate the introduced method, comparing three approaches of LVQ
neural networks: Kohonen's LVQ, Adaptive LVQ and the proposed AFLVQ. From the
results, we conclude that the proposed hybrid Adaptive-Fuzzy-LVQ algorithm
outperforms several other methods in terms of classification accuracy and
smoothness of learning convergence.

Keywords: Pattern recognition; Time series classification; Artificial Neural
Networks; Fuzzy Logic; Hybrid Intelligent Systems; Neuro-Fuzzy; Learning
Vector Quantization.

1 Introduction

Machine learning (ML) is a research field from Artificial Intelligence (AI) which
studies techniques for building systems capable of learning automatically by
experience. Within this area, there is a specific field dedicated to working
with temporal data, or time series. Time series data mining techniques have
been constantly explored in the literature over the past decade, and the topic is
still frequently addressed by researchers nowadays. Numerous works have
contributed multiple advances in time series data mining techniques [2, 11,
12, 15, 17, 21]. Classification, clustering, anomaly detection and forecasting are
examples of common data mining tasks applied to time series.
Time series classification has been the subject of various studies that explored
temporal data collected from sensors, such as accelerometers, electrocardiograms
(ECG) and electroencephalograms (EEG), as sources for multi-class identification.
Examples of applications are Human Activity Recognition (HAR) [15], ECG-based
[2, 5, 17, 21] and EEG-based [11] classification.
As time series data may have complex characteristics, it is necessary to apply
sophisticated solutions that can handle nonlinear behaviour. Among data
mining techniques, the Artificial Neural Network (ANN) is an interesting
computational intelligence approach to the problem due to its adaptive
and generalization capabilities. Classical algorithms such as the Support
Vector Machine (SVM) [17], Multi-Layer Perceptron (MLP) [15] and Learning
Vector Quantization (LVQ) [5, 12] have been employed in time series classification
problems. Furthermore, deep-learning-based neural networks such as the Deep
MLP [2] and the Convolutional Neural Network (CNN) [21] have been increasingly
explored in related studies.
Another category of intelligent classifiers is based on Hybrid Neural Systems
(HNS) [19]. HNS are systems which combine artificial neural networks with multiple
intelligent methods in a single model for solving a specific problem. This approach
aims to extract specific advantages from different techniques in order to build a
more robust system. One type of hybrid neural system is the fuzzy neural network,
in which concepts from Fuzzy Sets and Artificial Neural Networks are combined in
a single method.
Hybrid Fuzzy-LVQ neural networks have been widely explored in the literature.
For instance, in [5], the authors presented a new model for data classification using
LVQ-based neural network combined with type-2 fuzzy logic. In this work, a fuzzy
inference system was employed to determine which network’s prototype is the
nearest to an input vector. This new method was implemented and tested with two
data sets for comparing its effectiveness against the original LVQ algorithm and a
type-1 Fuzzy-LVQ. In a different approach, [8] employed an FLVQ-based algorithm
with wavelet transformation for classifying abnormalities in fundus images
(images of the inner surface of the eye). In that work, the authors compared the
FLVQ method with two other methods: Levenberg-Marquardt (LM) and the Adaptive
Neuro-Fuzzy Inference System (ANFIS).
In [4], a study is conducted on the application of the FLVQ model proposed
by [18] for probability distribution identification. In a more practical way, a
classification algorithm based on the Generalized Fuzzy-LVQ method was designed
and implemented on an FPGA [1, 10]. Furthermore, in [20] three different variations
of the LVQ algorithm are introduced and compared: Fuzzy-soft LVQ, batch LVQ and
Fuzzy-LVQ. As a motivation, few works have explored Fuzzy-LVQ strategies for
dealing with high-dimensional temporal data; therefore, we believe hybrid
Fuzzy-LVQ classifiers have the potential to be explored more deeply.

In this paper, we present a hybrid neuro-fuzzy algorithm that combines the
adaptive LVQ-ANN proposed in [3] and the Fuzzy-LVQ introduced in [7]. In our study, we
employ the proposed technique for classifying human activities through tri-axial
accelerometer time series patterns. The next sections of this paper are organized as
follows: in section 2 we introduce the main concepts of Learning Vector Quantization
theory. In section 3 we present the fundamentals of the proposed method (AFLVQ).
In section 4, we describe the methodology of this work, including a brief description
of the data set used and the performed experiment. In section 5 we present the
results of the simulations and in section 6 we conclude the article by summarizing
the results and suggesting future works.

2 Learning Vector Quantization (LVQ)

Learning Vector Quantization (LVQ) is a prototype-based supervised classification
algorithm which adopts a learning strategy based on similarity measures (distance
functions) and a winner-take-all approach. LVQ is a neural-network-based method
proposed by Kohonen [13]. Kohonen's LVQ is a supervised competitive learning
algorithm grounded in the concept of vector distance as a similarity measure.
Its architecture is a layered feedforward network with a competitive layer,
where the neurons compete among themselves based on a distance metric, or
similarity measure, between training instances and prototypes. Generally, the
Euclidean distance is chosen as the distance metric in LVQ implementations. The
method aims to divide the data space into distinct regions and to define a vector
prototype (or neuron) for each region. This process is also known as Vector
Quantization.

2.1 Kohonen’s LVQ1

The learning method in LVQ consists in using the input vectors as guidance for
organizing the prototypes in specific regions that define a class. First, a set of
prototypes is initialized and each prototype is assigned a class. Each class must be
represented by at least one prototype, a class can have multiple prototypes, and one
prototype represents exactly one class. Then, during the learning process, each
instance from the training set is compared with all of the network's prototypes
using a similarity measure. LVQ-based algorithms are classified as competitive
learning due to the selection of the closest prototype within the set of P prototypes:

w = argmin_{i=1,...,P} d(x_j, p_i)    (1)

where w is the index of the winning prototype (the prototype closest to a specific
instance x_j). The Euclidean distance, or L2-norm, is generally used as the distance
function:

d(x_j, p_i) = ||x_j − p_i|| = sqrt( Σ_{k=1}^{n} (x_jk − p_ik)² )    (2)

where n is the dimension of the instance x j , which is the same for p i . If the class
of an instance is equal to the class of the closest prototype (winner prototype), this
prototype is moved towards the instance, otherwise it moves away. Consider t as
the iteration counter of the training algorithm. The learning rule for Kohonen’s LVQ1
algorithm is given by:
p_w(t+1) = p_w(t) + α(t)[x_j − p_w(t)]   if C(p_w) = C(x_j)
p_w(t+1) = p_w(t) − α(t)[x_j − p_w(t)]   if C(p_w) ≠ C(x_j)    (3)

All other prototypes p_i(t), i ≠ w, remain unchanged. In our experiments, we adopted
a linearly decreasing learning rate α(t) = α(0)(1 − t/N), where α(0) is the initial
learning rate and N is the maximum number of training iterations.
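As an illustration, the LVQ1 update above can be sketched in a few lines. This is a minimal sketch under our own naming, not the authors' implementation:

```python
import numpy as np

def lvq1_step(prototypes, proto_labels, x, y, alpha):
    """One LVQ1 update (Equations 1-3): move the winning prototype
    toward instance x if its class matches label y, away otherwise."""
    dists = np.linalg.norm(prototypes - x, axis=1)      # Equation 2
    w = int(np.argmin(dists))                           # Equation 1
    sign = 1.0 if proto_labels[w] == y else -1.0        # Equation 3
    prototypes[w] += sign * alpha * (x - prototypes[w])
    return w

# linearly decreasing learning rate alpha(t) = alpha(0) * (1 - t/N)
alpha0, N = 0.09, 100
alpha_schedule = [alpha0 * (1 - t / N) for t in range(N)]
```

In a full training pass, `lvq1_step` would be called once per training instance, with `alpha` taken from the decaying schedule.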

2.2 Kohonen’s LVQ2

Kohonen introduced the LVQ2 algorithm in 1988, a variation similar to the
original LVQ [14]. However, the learning process is based on two prototypes, p_1st
and p_2nd, the first and second nearest prototypes to an instance x_j, respectively.
One of them must belong to the correct class and the other to an incorrect class.
Furthermore, the instance must fall into a window defined around the mid-plane
between the two prototypes. For an instance x_j and the two nearest prototypes
p_1st and p_2nd, let d_1st and d_2nd be the distances of x_j to p_1st and p_2nd,
respectively. Then x_j falls into a window of width w if Equation 4 is satisfied.
It is recommended to adopt a width w between 0.2 and 0.3 [13]. The LVQ2 learning
rule is given by Equation 5.

min( d_1st / d_2nd , d_2nd / d_1st ) > s,   where s = (1 − w)/(1 + w)    (4)

p_1st(t+1) = p_1st(t) − α(t)[x_j − p_1st(t)]
p_2nd(t+1) = p_2nd(t) + α(t)[x_j − p_2nd(t)]    (5)
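A sketch of this rule follows, again under our own naming. Following the signs of Equation 5, we assume the nearest prototype carries the wrong class and the second nearest the correct one:

```python
import numpy as np

def lvq2_step(prototypes, proto_labels, x, y, alpha, w_width=0.2):
    """One LVQ2 update (Equations 4-5). The two nearest prototypes are
    adjusted only when x falls inside the window and the nearest one
    carries the wrong class while the second nearest is correct."""
    dists = np.linalg.norm(prototypes - x, axis=1)
    i1, i2 = np.argsort(dists)[:2]                      # first and second nearest
    d1, d2 = dists[i1], dists[i2]
    s = (1 - w_width) / (1 + w_width)
    in_window = min(d1 / d2, d2 / d1) > s               # Equation 4
    if in_window and proto_labels[i1] != y and proto_labels[i2] == y:
        prototypes[i1] -= alpha * (x - prototypes[i1])  # push wrong class away
        prototypes[i2] += alpha * (x - prototypes[i2])  # pull correct class closer
        return True
    return False
```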

2.3 Quantization Error (Q E )

In prototype-based algorithms, the prototypes can be considered quantization
vectors, as each represents a specific region of the input data [16]. To evaluate
the vector quantization in a prototype-based algorithm, a Quantization Error (Q_E)
can be used. This error metric is based on the average of the distances between
prototypes and the instances of the data.

Q_E = (1/N) Σ_{j=1}^{N} ||x_j − p_w||²    (6)

where N is the number of instances, x_j is the j-th instance, and p_w is the
prototype that represents the class of x_j. Given a set of prototypes
P = {p_1, p_2, ..., p_k}, w is the index of the prototype closest to the instance
x_j, which can be calculated by Equation 1.
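Equation 6 can be computed directly; a small sketch, with the winner taken as the nearest prototype per Equation 1:

```python
import numpy as np

def quantization_error(X, prototypes):
    """Quantization Error (Equation 6): average squared distance from
    each instance to its nearest prototype."""
    # pairwise distances between the N instances and the P prototypes
    d = np.linalg.norm(X[:, None, :] - prototypes[None, :, :], axis=2)
    return float(np.mean(d.min(axis=1) ** 2))
```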

3 Adaptive Fuzzy Learning Vector Quantization (AFLVQ)

3.1 Fundamentals on AFLVQ

The Adaptive-Fuzzy-LVQ model is inspired by two concepts:

Adaptability
The adaptive LVQ-ANN is a variation of LVQ-based methods which has the
capability of adjusting its architecture to improve network performance during the
training process. In general, the adaptive characteristic implies the ability to make
changes in the network's structure by including or removing prototypes (codebooks
or neurons). In previous work [3], a study was conducted on a proposed adaptive
LVQ algorithm applied to human activity recognition using data collected from a
tri-axial accelerometer. In [3], Kohonen's LVQ algorithm was modified to include
an adaptive step at the end of each epoch during network training. The adaptive
process consists of two stages:

– Prototype inclusion: The inclusion of new prototypes is based on Kohonen's
Self-Organizing Map [13] applied to misclassified samples. The number of
prototypes to be included is calculated from the quantity of misclassified
instances of a specific class. Hence, the greater the number of misclassified
instances of a class C_i, the greater the number of new prototypes (p_new → C_i)
included to represent this class.
– Prototype removal: The removal of prototypes is determined by a score
calculated for each prototype. For a prototype k, the score is score_k = A_k − B_k,
where A_k and B_k count how many times this prototype was the winner and
classified the instance correctly or incorrectly, respectively. A prototype is
removed whenever its score is lower than a removal threshold (ψ). A low score
indicates that a prototype frequently misclassifies instances or does not
contribute significantly to the classification performance.

The growth of the neural network is restricted by a variable called Budget;
therefore, the number of prototypes never exceeds the pre-defined architecture size.
Further implementation details about this method can be found in [3].
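The score-based removal rule can be sketched as follows. This is a simplified illustration under our own naming; the SOM-based inclusion step of [3] is not reproduced here:

```python
import numpy as np

def prune_prototypes(prototypes, proto_labels, wins_correct, wins_wrong, psi=0):
    """Remove every prototype k whose score A_k - B_k falls below the
    removal threshold psi, where A_k / B_k count correct / incorrect
    classifications made while prototype k was the winner."""
    scores = np.asarray(wins_correct) - np.asarray(wins_wrong)
    keep = scores >= psi
    kept_labels = [c for c, k in zip(proto_labels, keep) if k]
    return prototypes[keep], kept_labels
```

The inclusion stage would then add prototypes for the classes with the most misclassified instances, as long as the total count stays within the Budget.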

Fuzzy
In the proposed AFLVQ method, the fuzzy part is based on the Fuzzy-LVQ
introduced by Chung [7]. The algorithm optimizes a fuzzy objective function
by minimizing the network output error, calculated from the difference between
target and actual class membership values, and by minimizing the distances
between training patterns and competing neurons. In their work, Chung and Lee [7]
define the following objective function:

Q_m(U, V) = Σ_{j=1}^{N} Σ_{i=1}^{P} [(t_ji)^m − (µ_ji)^m] d(x_j, p_i)    (7)

subject to the constraints Σ_{i=1}^{P} µ_ji = 1, ∀j, and µ_ji ∈ [0, 1], ∀j, i.

The term d(x_j, p_i) represents the distance between the i-th prototype and the
j-th instance (see Equation 1). The fuzziness parameter m weights the membership
functions of each prototype in such a way that the greater the value of m, the
smoother the learning process. The target class membership value of neuron i for
input pattern j is represented by t_ji ∈ {0, 1}. Hence, the FLVQ learning rule and
the membership updating rule are:

p_i(t+1) = p_i(t) + α(t)[(t_ji)^m − (µ_ji)^m][x_j − p_i(t)], ∀i    (8)

" 1 #−1
P µ d (x , p ) ¶ m−1
j i
µji =
X
(9)
`=1 d (x j , p ` )

Note that the previous equations are only valid when the number of prototypes P
is equal to the number of classes. For LVQ architectures with multiple prototypes
per class, we introduce a competitive step in the training process. The FLVQ
network is described in Figure 1.

Fig. 1. Fuzzy-LVQ Architecture

In the figure, an input layer receives the instances from the dataset. The distance
layer calculates the distance between each prototype and the presented instance.
Then, in the competitive layer (called the MIN layer by Chung [7]), only the
closest prototype from each class is chosen to undergo the fuzzy competition.
Therefore, the membership computations and parametric vector modifications are
applied to at most the number of classes in the problem, i.e., k prototypes.
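One FLVQ presentation, including the MIN-layer competition described above, can be sketched as follows (a simplified illustration of Equations 8-9 under our own naming):

```python
import numpy as np

def flvq_step(prototypes, proto_labels, x, y, alpha, m=1.4, eps=1e-12):
    """One FLVQ update: select the closest prototype of each class
    (the MIN layer), compute fuzzy memberships (Equation 9), then move
    the candidates according to Equation 8."""
    proto_labels = np.asarray(proto_labels)
    d_all = np.linalg.norm(prototypes - x, axis=1)
    # competitive (MIN) step: one candidate prototype per class
    cand = []
    for c in np.unique(proto_labels):
        idx = np.where(proto_labels == c)[0]
        cand.append(idx[np.argmin(d_all[idx])])
    cand = np.array(cand)
    d = d_all[cand] + eps
    # fuzzy memberships over the candidates (Equation 9)
    ratio = (d[:, None] / d[None, :]) ** (1.0 / (m - 1.0))
    mu = 1.0 / ratio.sum(axis=1)
    t = (proto_labels[cand] == y).astype(float)       # target memberships
    coef = t ** m - mu ** m                           # Equation 8
    prototypes[cand] += alpha * coef[:, None] * (x - prototypes[cand])
    return mu
```

Note that the memberships sum to one over the candidates, and the correct-class candidate is pulled toward x while the others are pushed away.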

3.2 AFLVQ Algorithm

The presented Adaptive-Fuzzy-LVQ algorithm can be divided into two stages:
– Training: During the training stage, the instances from the training dataset are
presented to the neural network, and the prototypes are adjusted based on
Chung's Fuzzy-LVQ algorithm [7].
– Adaptation: After completing an epoch, the resulting LVQ network is evaluated
in order to verify the need for adaptation. If adaptation is needed, the
adaptive method from our previous work [3] removes or includes prototypes,
according to criteria that aim to improve the network performance.

Fig. 2. Flow-chart of training an Adaptive-Fuzzy-LVQ model

Figure 2 presents a flowchart describing the process of training an
Adaptive-Fuzzy-LVQ model. First, the prototypes' weights are initialized; we
used the Kohonen Self-Organizing Map to initialize the prototypes. Afterwards,
FLVQ is executed for a whole epoch i. At the end of each epoch, the algorithm
verifies the need for adaptation and, depending on this decision, the network is
adapted (p_i(adapted)) or not (p_i). The cycle then restarts in the next epoch.
When the stop criteria are satisfied, the algorithm returns the trained model.
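The flowchart reduces to a short training loop. In this sketch, `flvq_epoch`, `needs_adaptation` and `adapt` are placeholders of our own for the FLVQ pass, the adaptation check and the prototype inclusion/removal step of [3]:

```python
def train_aflvq(prototypes, labels, data, targets, epochs,
                flvq_epoch, needs_adaptation, adapt):
    """AFLVQ training loop (Figure 2): run an FLVQ epoch, then adapt
    the network if the epoch's statistics call for it."""
    for epoch in range(epochs):
        prototypes, labels, stats = flvq_epoch(prototypes, labels, data, targets)
        if needs_adaptation(stats):
            prototypes, labels = adapt(prototypes, labels, stats)
    return prototypes, labels
```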

4 Methodology
4.1 Dataset
The dataset used in our experiments was selected from the UCI Machine Learning
Repository [9]. It is an activity recognition database, built from the recordings
of 30 volunteers executing multiple activities while carrying a waist-mounted
smartphone. The dataset was introduced by Anguita et al. [6] and contains data
collected from embedded inertial sensors: a tri-axial accelerometer and a
gyroscope. In this time series classification problem, the aim is to recognize
activities or actions performed by humans based on the information retrieved from
body-worn motion sensors. The selected activity classes are: standing, sitting,
laying down, walking, walking downstairs and walking upstairs. The dataset is
composed of 10,299 samples, of which 7,352 compose the training set and 2,947 the
test set, a division of 70% and 30% for training and test data, respectively.

Features
In order to properly represent the input data, multiple features were extracted
from the sensors' raw data. As this is a time series classification problem, the
features of each instance were calculated from multiple observations in an ordered
sequence, usually called a sub-series or data window. Examples of extracted
features are described in Table 1. These features were selected and extracted by
Anguita et al. [6].

Table 1. Example of extracted features from signals

Function Description
mean Mean value
std Standard deviation
max Largest value in array
min Smallest value in array
sma Signal magnitude area
correlation Correlation coefficient
energy Average sum of the squares

In total, 561 features were extracted. Thus, for an instance x ∈ R^D, the
dimension is D = 561.
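The exact 561-feature pipeline of [6] is not reproduced here, but the flavor of Table 1 can be sketched for a single window (rows are time samples, columns are sensor axes; the function name is ours):

```python
import numpy as np

def window_features(window):
    """Compute a few Table 1 style statistics over one data window."""
    per_axis = np.concatenate([window.mean(axis=0), window.std(axis=0),
                               window.max(axis=0), window.min(axis=0)])
    sma = np.abs(window).sum(axis=1).mean()   # signal magnitude area
    energy = (window ** 2).mean(axis=0)       # average sum of squares, per axis
    return np.concatenate([per_axis, [sma], energy])
```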

4.2 Experiments

Experiments were designed to verify the algorithms' performance as a function of
the number of prototypes (P). The occurrence of adaptation is strongly dependent
on the number of prototypes in the network; hence, by analyzing this aspect, we can
demonstrate the influence of adaptation on training performance. The attributes
of the conducted experiments are described in Table 2.

Table 2. Experiment’s attributes and values

Attributes | Epochs | P               | Algorithms                            | α0   | w   | m
Values     | 100    | {Nc, 5Nc, 10Nc} | LVQ1, LVQ2, FLVQ, ALVQ1, ALVQ2, AFLVQ | 0.09 | 0.2 | 1.4

The number of prototypes (P) depends on the number of classes (Nc). Therefore,
since there are Nc = 6 classes in the dataset, the experimented networks have
6, 30 and 60 prototypes, respectively. We executed the experiments with different
variations of LVQ: LVQ1 and LVQ2 from Kohonen [13], FLVQ from Chung [7], ALVQ1
and ALVQ2 from our previous work [3], and AFLVQ, the approach presented in this
paper. Note that w and m are parameters specific to the LVQ2 and FLVQ learning
rules, respectively.

5 Results

The classification performances of LVQ-based algorithms with networks composed
of 6, 30 and 60 prototypes are presented in Table 3. For networks with fewer
prototypes, adaptation is generally nonexistent. Take, for example, the case
where there is only one prototype representing each class, thus P = Nc = 6. In this
case, it is unlikely that a prototype will be removed: as each prototype represents
a class by itself, eventually all prototypes will be chosen as winners during the
training process. Hence, in this case, the non-adaptive LVQ algorithm is equivalent
to its adaptive version. As we can see in Table 3, for P = 6 the algorithms LVQ1,
LVQ2 and FLVQ are equivalent to ALVQ1, ALVQ2 and AFLVQ, respectively.
As the number of prototypes grows, each one is less likely to be chosen, and
after one epoch many prototypes may be removed for not having been chosen at least
once. Adaptation is more effective for a greater number of prototypes: as we can
see in the cases where P = 30 and P = 60, the training accuracy of the adaptive
methods increases. However, in some cases this comes at the cost of reduced
generalization. Hence, it is necessary to properly select the number of prototypes
(P) in order to avoid overfitting. Note also that, in all three scenarios presented
in Table 3, LVQ2 and ALVQ2 had the same results; in other words, ALVQ2 was not
adapted during training. This can also be seen in Figure 3.(a), where the curves of
LVQ2 (orange) and ALVQ2 (green dots) overlap.

Table 3. Experiment results for P = {Nc , 5Nc , 10Nc }

P           Set       LVQ1    LVQ2    FLVQ    ALVQ1   ALVQ2   AFLVQ
Nc (P=6)    Training  83.62%  98.83%  86.48%  83.62%  98.83%  86.48%
            Test      82.76%  95.45%  86.19%  82.76%  95.45%  86.19%
5Nc (P=30)  Training  90.61%  99.67%  90.44%  90.61%  99.67%  91.04%
            Test      84.63%  93.96%  87.61%  86.73%  93.96%  87.58%
10Nc (P=60) Training  94.08%  99.62%  93.27%  94.27%  99.62%  94.07%
            Test      87.75%  91.99%  89.18%  88.12%  91.99%  89.65%

In Figure 3 we present the evolution of two error measures throughout the
epochs: the classification error (C_E) and the quantization error (Q_E). The
classification error is calculated as C_E = N_miss / N, where N_miss is the
number of misclassified instances and N is the total number of instances. The
quantization error is described in Section 2.3. In Figure 3.(a) we can notice
that FLVQ and AFLVQ are methods that converge smoothly, with minor oscillations
compared to LVQ1 and ALVQ1. The proposed AFLVQ outperforms all other algorithms
except LVQ2 and ALVQ2, which demonstrated to be significantly superior to the
other algorithms for this specific dataset. ALVQ2 and LVQ2 also presented the
fastest convergence, reaching low error values in fewer than 20 epochs.

In Figure 3.(b), taking the best models (LVQ2 and ALVQ2) as an example, we can
observe that initially Q_E = 15.78. After training, the error increased to
Q_E = 30.45 instead of decreasing. Strange as it may seem, a low Q_E does not
necessarily mean a well-trained model, nor does a high Q_E mean a poorly trained
model. However, there are limits for acceptable Q_E values that may change
according to the disposition of the data. It is extremely important to evaluate
the relationship between quantization error and classification error in order to
properly choose a classification model.


Fig. 3. Classification Error (C E ) and Quantization Error (Q E ) evolution throughout the epochs

The dataset employed in our experiments has a specific characteristic: pairs of
classes share very similar instances. For example, the classes walking and
walking downstairs present similar patterns. This can be evidenced by examining
the confusion matrix obtained from the best trained model. In Table 4, we observe
that most of the misclassifications were caused by mistaking class 4 for class 5,
or the other way around. This characteristic of the dataset is well suited to the
LVQ2 learning rule, which explains the remarkable results obtained with this
algorithm.

Table 4. Confusion matrix for the best test result (LVQ2 and ALVQ2 with Nc = 6)

C1 C2 C3 C4 C5 C6 Recall
C1 478 7 11 0 0 0 96.37%
C2 18 450 3 0 0 0 95.54%
C3 5 18 397 0 0 0 94.52%
C4 0 2 0 448 40 1 91.24%
C5 0 0 0 29 503 0 94.55%
C6 0 0 0 0 0 537 100.00%
Precision 95.41% 94.34% 96.59% 93.92% 92.63% 99.81% 95.45%

6 Conclusion

In this paper, we presented a novel Adaptive-Fuzzy-LVQ method applied to human
activity classification from high-dimensional motion sensor signals. We conducted
experiments evaluating different variations of the LVQ algorithm for comparison
with the proposed method. From the results, we conclude that AFLVQ provides
considerable improvements in time series classification accuracy compared to
other LVQ-based algorithms. In general, AFLVQ outperformed all variations except
LVQ2 and ALVQ2.
Comparing LVQ1 and FLVQ, we observed that both are very similar in overall
accuracy. However, the learning process of LVQ1 appears more unstable, while
FLVQ presents a smoother evolution over the training epochs and tends to converge
to better results. Weighting the learning rate by the membership value of each
prototype explains FLVQ's smoothness, as the adjustment rate is relative to the
distance between a prototype and a specific instance. From the obtained results,
we can also conclude that LVQ-based algorithms are effective for classifying
high-dimensional time series, since in most experiments they converged to high
training accuracy. Regarding generalization, all algorithms achieved test
accuracies between 82.76% and 95.45%, which is satisfactory considering the
problem's complexity.
Concerning future work, we intend to combine Kohonen's LVQ2 with fuzzy logic
and evaluate its performance on the problem addressed in this paper. Based on the
experimental results, we understand that fuzzy logic can improve LVQ-based
algorithms by smoothing the training convergence, avoiding major oscillations
that may result in poor classification performance. Since LVQ2 and ALVQ2
presented the best classification performances, we can improve their learning
rule by including fuzzy aspects. Furthermore, we intend to explore the changes in
performance obtained by varying the fuzziness parameter m, as well as the learning
rate α; properly tuning these parameters is important, as they can significantly
influence the training results. Finally, we plan to work with Type-2 fuzzy sets to
evaluate their performance in dealing with uncertainties present in the input data.

Acknowledgements

We would like to express our gratitude to the Coordination for the Improvement of
Higher Education Personnel (CAPES) for the financial support.

References
1. Afif, I.N., Wardhana, Y., Jatmiko, W.: Implementation of adaptive fuzzy neuro generalized
learning vector quantization (afnglvq) on field programmable gate array (fpga) for real
world application. In: Advanced Computer Science and Information Systems (ICACSIS),
2015 International Conference on. pp. 65–71. IEEE (2015)
2. Al Rahhal, M.M., Bazi, Y., AlHichri, H., Alajlan, N., Melgani, F., Yager, R.R.: Deep learning
approach for active classification of electrocardiogram signals. Information Sciences 345,
340–354 (2016)

3. Albuquerque, R.F., Braga, A.P.d.S., Torrico, B.C., Reis, L.L.N.d.: Classificação de dinâmicas
de sistemas utilizando redes neurais lvq adaptativas. In: Conferência Brasileira de
Dinâmica, Controle e Aplicações - DINCON (2017)
4. Alfa, G.D., Kurniasari, D., Usman, M., et al.: Neural network fuzzy learning vector
quantization (flvq) to identify probability distributions. International Journal of
Computer Science and Network Security (IJCSNS) 16(10), 16 (2016)
5. Amezcua, J., Melin, P., Castillo, O.: New Classification Method Based on Modular Neural
Networks with the LVQ Algorithm and Type-2 Fuzzy Logic. Springer (2018)
6. Anguita, D., Ghio, A., Oneto, L., Parra, X., Reyes-Ortiz, J.L.: A public domain dataset for
human activity recognition using smartphones. In: ESANN (2013)
7. Chung, F.L., Lee, T.: Fuzzy learning vector quantization. In: Neural Networks, 1993.
IJCNN’93-Nagoya. Proceedings of 1993 International Joint Conference on. vol. 3, pp.
2739–2743. IEEE (1993)
8. Damayanti, A.: Fuzzy learning vector quantization, neural network and fuzzy systems for
classification fundus eye images with wavelet transformation. In: 2017 2nd International
conferences on Information Technology, Information Systems and Electrical Engineering
(ICITISEE). pp. 331–336 (Nov 2017)
9. Dheeru, D., Karra Taniskidou, E.: UCI machine learning repository (2017),
http://archive.ics.uci.edu/ml
10. Fajar, M., Jatmiko, W., Agus, I.M., et al.: Fnglvq fpga design for sleep stages classification
based on electrocardiogram signal. In: Systems, Man, and Cybernetics (SMC), 2012 IEEE
International Conference on. pp. 2711–2716. IEEE (2012)
11. Hajinoroozi, M., Mao, Z., Jung, T.P., Lin, C.T., Huang, Y.: Eeg-based prediction of driver’s
cognitive performance by deep convolutional neural network. Signal Processing: Image
Communication 47, 549–555 (2016)
12. Jain, B.J., Schultz, D.: Asymmetric learning vector quantization for efficient nearest
neighbor classification in dynamic time warping spaces. Pattern Recognition 76, 349–366
(2018)
13. Kohonen, T.: The self-organizing map. Proceedings of the IEEE 78(9), 1464–1480 (1990)
14. Kohonen, T., Barna, G., Chrisley, R.: Statistical pattern recognition with neural networks:
Benchmarking studies. In: IEEE International Conference on Neural Networks. vol. 1, pp.
61–68 (1988)
15. Nakano, K., Chakraborty, B.: Effect of dynamic feature for human activity recognition
using smartphone sensors. In: 2017 IEEE 8th International Conference on Awareness
Science and Technology (iCAST). pp. 539–543 (Nov 2017)
16. Peres, S.M., Rocha, T., Biscaro, H.H., Madeo, R.C.B., Boscarioli, C.: Tutorial sobre
fuzzy-c-means e fuzzy learning vector quantization: Abordagens híbridas para tarefas de
agrupamento e classificação. Revista de Informática Teórica e Aplicada 19(1), 120–163
(2012)
17. Rajesh, K.N., Dhuli, R.: Classification of ecg heartbeats using nonlinear decomposition
methods and support vector machine. Computers in biology and medicine 87, 271–284
(2017)
18. Sakuraba, Y., Nakamoto, T., Moriizumi, T.: New method of learning vector quantization
using fuzzy theory. Systems and computers in Japan 22(13), 93–103 (1991)
19. Wermter, S.: Hybrid neural systems. Springer Science & Business Media (2000)
20. Wu, K.L., Yang, M.S.: A fuzzy-soft learning vector quantization. Neurocomputing 55(3-4),
681–697 (2003)
21. Xia, Y., Wulan, N., Wang, K., Zhang, H.: Detecting atrial fibrillation by deep convolutional
neural networks. Computers in biology and medicine 93, 84–92 (2018)
