Electronics 2023, 12, 1186
Article
Improving Multi-Class Motor Imagery EEG Classification
Using Overlapping Sliding Window and Deep Learning Model
Jeonghee Hwang 1, Soyoung Park 2 and Jeonghee Chi 2,*
Abstract: Motor imagery (MI) electroencephalography (EEG) signals are widely used in BCI systems.
MI tasks are performed by imagining doing a specific task and classifying MI through EEG signal
processing. However, classifying EEG signals accurately is a challenging task. In this study, we
propose an LSTM-based classification framework to enhance the classification accuracy of four-class
MI signals. To obtain time-varying data from EEG signals, a sliding window technique is used, and an
overlapping-band-based FBCSP is applied to extract subject-specific spatial features. Experimental
results on BCI competition IV dataset 2a showed an average accuracy of 97% and a kappa value of 0.95
across all subjects. The proposed method is shown to outperform existing algorithms for classifying
four-class MI EEG and to be robust to the inter-trial and inter-session variability of MI data.
Furthermore, extended experiments on channel selection showed that classification accuracy was best
when the proposed method used all twenty-two channels, while an average kappa value of 0.93 was
still achieved with only seven channels.
Keywords: multi-class motor imagery; EEG classification; FBCSP; overlapping window; overlapping
bandpass filter; LSTM
activities in BCI are referred to as motor-imagery (MI) [17–19], and EEG data for MI tasks
in a BCI system are collected using electrodes attached to the scalp [17,20].
EEG signals are widely used as a major brain signal in the BCI system due to their non-
invasive nature [21]. However, since brain activity can be affected by multiple sources of
environmental, physiological, and activity-specific noise [22,23], it is important to consider
the following properties of EEG signals. EEG signals are naturally non-stationary: diverse
behavioral and mental states continuously change the statistical properties of brain signals,
so signals other than the information of interest are always irregularly present [24]. In
addition, recorded EEG signals often have a
low signal-to-noise ratio (SNR) due to the presence of various types of artifacts, such as
electrical power line interference, electromyogram (EMG), and electrooculogram (EOG)
interference [25]. To improve the signal-to-noise ratio and eliminate artifacts from the EEG
signals, effective preprocessing is necessary before feature extraction [26]. Furthermore,
EEG reveals inherent inter-subject variability in brain dynamics, which can be attributed to
differences in physiological artifacts among individuals. This phenomenon can significantly
impact the performance of learning models [27].
To address these problems, many researchers have implemented various feature
extraction techniques for MI classification. The most important thing in the MI-based BCI
system is to extract discriminative characteristics of the EEG signals that affect system
performance. Common spatial pattern (CSP) is a popular method of extracting different MI
features. The CSP spatial filtering method well represents the spatial characteristics of the
EEG signal for each motion image. However, the CSP algorithm has limitations in that the
frequency band, acquisition time, and the number of source signals must be determined
in advance [28,29]. To solve this problem, an FBCSP method [30] that divides the EEG
signal into several narrow frequency bands and extracts features by applying different CSP
filters to each of the divided signals has been proposed. However, there are limitations in
selecting the signal acquisition time or the number of spatial features to be extracted [31].
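To make the FBCSP idea concrete, the band-splitting and per-band CSP steps can be sketched as follows. This is a minimal two-class illustration with NumPy/SciPy; the Butterworth filters, data shapes, and helper names (bandpass, csp_filters, fbcsp_features) are illustrative assumptions, not the implementation of [30]:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(x, lo, hi, fs=250.0, order=4):
    """Zero-phase bandpass filter applied along the time axis."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Two-class CSP via eigendecomposition in the whitened composite space.
    trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    cov = lambda t: np.mean([x @ x.T / np.trace(x @ x.T) for x in t], axis=0)
    ca, cb = cov(trials_a), cov(trials_b)
    evals, evecs = np.linalg.eigh(ca + cb)            # composite covariance
    white = evecs @ np.diag(evals ** -0.5) @ evecs.T  # whitening transform
    d, u = np.linalg.eigh(white @ ca @ white.T)       # eigenvalues ascending
    w = u.T @ white                                   # full CSP projection
    # keep the filters with the smallest/largest eigenvalues (most discriminative)
    return np.vstack([w[:n_pairs], w[-n_pairs:]])

def fbcsp_features(trial, filters_per_band, bands, fs=250.0):
    """Log-variance CSP features concatenated over all sub-bands."""
    feats = []
    for (lo, hi), w in zip(bands, filters_per_band):
        z = w @ bandpass(trial, lo, hi, fs)
        v = np.var(z, axis=1)
        feats.append(np.log(v / v.sum()))
    return np.concatenate(feats)
```

In a real pipeline one CSP projection would be fitted per sub-band on training trials, then every trial is reduced to a short log-variance feature vector per band.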
Recently, many studies have proposed automatic feature extraction and classification
using deep learning methods such as CNNs, LSTMs, and restricted Boltzmann machines
(RBMs). These approaches have been shown to reduce time-consuming preprocessing and
achieve higher accuracy [32,33]. An RBM with a four-layer neural network was applied to
accomplish better performance for motor imagery classification in [32]. Zhang et al.
proposed a hybrid deep network model based on CNN and LSTM to extract and learn the
spatial and temporal features of the MI task [33]. A CNN combined with the short-time
Fourier transform (STFT) was applied for two-class MI classification in [34]. In another
study [35], an LSTM using the one-dimension-aggregate approximation (1d-AX) and a channel
weighting technique to extract features from EEG was proposed to enhance classification
accuracy. In [36], EEG signals were classified by constructing a convolutional neural
network (CNN) using an image-based approach. Meanwhile, some studies [33,37–41] have
suggested adding time segments based on FBCSP, but the performance improvement in
accuracy was not significant. The authors of [37] showed that multiple time segments by
sliding windows from a continuous stream of EEG can extract more discriminable features.
In [38], regularized CSP algorithms were proposed to promote the learning of good spatial
filters, including extracting features from a fixed time segment of 2s. Moreover, Zhang et al.
developed a hybrid deep learning method to extract discriminative features, combining
the time domain method and the frequency domain method for a four-class MI task [33].
In [41], to address the issue of nonstationary EEG signals, sliding window-based CSP
methods have been proposed to consider session-to-session and trial-to-trial variability.
Experimental results showed that the sliding window-based methods outperformed the
existing models for both healthy individuals and stroke patients. EEG signals are sequential
data, and the recurrent neural network (RNN) is one of the architectures for sequential
processing, demonstrating good performance in time-series signal analysis. The most
popular type of RNN is the long short-term memory (LSTM) network [33,42]. Although
many EEG classification methods based on neural networks have been proposed, few
studies have applied LSTM to multi-class MI tasks.
In this study, a framework is presented to improve the classification accuracy of
four-class MI EEG signals using an LSTM-based classification method for extracting tem-
poral features from time-varying EEG signals. We apply an overlapping sliding window
approach not only to augment training data sets, but also to acquire time series data of
EEG signals. Moreover, considering that the phenomena of ERD and ERS appearing in
the sensorimotor cortex during motion imagination occur in different frequency bands
for each subject, an overlapping band-based FBCSP is used to extract the subject-specific
spatial features. In addition, to explore the effectiveness of channel selection processing, we
investigate whether feature extraction from channels filtered by channel correlation affects
the classification accuracy of MI task.
The rest of this paper is organized as follows. Section 2 provides a review of related
work. Section 3 describes the proposed LSTM-based method with or without channel
selection for four-class MI EEG classification. The experimental results and analysis are
discussed in Section 4, and Section 5 concludes the paper.
2. Related Work
The extraction of discriminative features from EEG signals is an important factor
affecting the performance of BCI systems in classifying MI tasks. Feature extraction is
carried out in the spatial, time, and frequency domains [26].
(SAE) to classify EEG motor imagery signals. They used the short-time Fourier transform
(STFT) to construct 2D images for training their network. The features extracted by the
CNN are classified through the deep network SAE. Lee et al. proposed a classification
approach utilizing the continuous wavelet transform (CWT) and a CNN [51]. The CWT
was utilized to generate an EEG image that incorporates time-frequency and electrode
location information, resulting in a highly informative representation.
Recently, many researchers have utilized neural network techniques as an effective ar-
chitecture for classifying MI tasks. These techniques combine all three phases of extraction,
selection, and classification into a single pipeline [27,35]. Several studies have employed
deep learning frameworks to classify EEG signals, and these have shown improvements
in classification accuracy. Dai et al. used a CNN with hybrid convolution scale and ex-
perimented with different kernel sizes to obtain high classification accuracy [29]. They
demonstrated that using a single convolution scale limits the classification performance.
Sakhavi et al. introduced a temporal representation of the EEG to preserve information
about the signal’s dynamics and used a CNN for classification [52]. Moreover, a hybrid
deep learning scheme that combines CNN and LSTM has been proposed, where CNN
extracts spatial information and LSTM processes temporal information. Zhang et al. devel-
oped a deep learning network based on CNN-LSTM for four-class MI, which was trained
using all subjects’ training data as a single model [33]. This study showed a better result
than an SVM classifier. They also proposed a hybrid deep neural network with transfer
learning (HDNN-TL) in [53], which aimed to improve classification accuracy when dealing
with the individual differences problem and limited training samples. RNN (recurrent
neural network) is a type of ANN whose computing units are connected in a directed graph
along a sequence, making it a popular choice for analyzing time-series data in various ap-
plications, including speech recognition, natural language processing, and more [35,42,47].
The most popular type of RNN is the LSTM network, which is an excellent way to expose
the internal temporal correlation of time series signals [27,53]. Zhou et al. applied wavelet
envelope analysis and LSTM to consider the amplitude modulation characteristics and
time-series information of MI-EEG [54]. In [55], a RNN-based parallel method was applied
to encode spatial and temporal sequential raw data with bidirectional LSTM (bi-LSTM),
and its results showed superior performance compared to other methods.
Figure 1. Feature extraction from a single-trial EEG.
If the number of valid trials after removing rejected trials generated during EEG data
measurement is V, the total number of data Ni acquired from the ith subject is as follows.

Ni = ni ∗ Vi (2)
EEG signals require extraction of many features in a single session. The number of
features extracted from overlapping windows is much greater than the number extracted
from non-overlapping windows, and the shorter the sliding window, the more features can
be extracted. Therefore, in this study, an overlapping window of 1 s starting 1 s after
the cue sign was applied, with an interval of 0.1 s between consecutive sliding windows,
to extract more distinguishable features from EEG signals and improve classification
accuracy. The number of samples included in the 1 s window is R∗C. Furthermore, due to
the inherent non-stationarity of the EEG data, it is essential to extract more features
within a session rather than extracting features from three 1 s windows. Therefore, we
performed feature extraction while moving the 1 s window by ∆ts. As a result, the number
of samples, ni, extracted from a single trial of the ith subject is shown in Equation (3).

ni = (R/(R ∗ ∆ts) − 1) ∗ R ∗ C (3)

Therefore, the total samples, Ni, that can be obtained from the ith subject is shown in
Equation (4).

Ni = (R/(R ∗ ∆ts) − 1) ∗ R ∗ C ∗ Vi (4)
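As a concrete sketch of the overlapping-window scheme above, the following snippet cuts one trial into 1 s windows stepped by 0.1 s; the 3 s imagery segment is an illustrative assumption, while the 22 channels at 250 Hz follow the setup described in this paper:

```python
import numpy as np

def sliding_windows(signal, fs=250, win_s=1.0, step_s=0.1):
    """Cut a (channels, samples) trial into overlapping windows.
    Returns an array of shape (n_windows, channels, win_samples)."""
    win = int(win_s * fs)
    step = int(step_s * fs)
    n = (signal.shape[-1] - win) // step + 1
    return np.stack([signal[..., i * step:i * step + win] for i in range(n)])

# a hypothetical 3 s segment from one trial: 22 channels at 250 Hz
trial = np.zeros((22, 3 * 250))
wins = sliding_windows(trial)
# (3 - 1) / 0.1 + 1 = 21 overlapping 1 s windows vs. only 3 non-overlapping ones
assert wins.shape == (21, 22, 250)
```

Each window then goes through feature extraction independently, which both augments the training set and yields the time-ordered feature sequence consumed later by the LSTM.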
3.2. LSTM-Based FBCSP with Overlapped Band
After determining the window for feature extraction, feature extraction was performed.
We used the overlapping band-based FBCSP algorithm in this study. This algorithm, like the
conventional FBCSP, consists of four steps: bandpass filtering, spatial filtering using CSP,
feature selection, and classification. Figure 2 shows the general framework for the proposed
approach.
Figure 2. Processing steps of the filter bank common spatial pattern.
A filter bank that decomposes the EEG into multiple frequency passbands was employed
in the first step, spanning 4 to 32 Hz with a bandwidth of 4 Hz and an overlap of 2 Hz.
Owing to the overlap between adjacent frequency bands, a total of 13 bandpass filters were
used, namely 4–8, 6–10, 8–12, 10–14, …, 28–32 Hz. The signals were bandpass filtered by a
Chebyshev type II filter. As suggested by many studies [33,52,53], CSP features extracted
from overlapping sub-bands lead to an improved performance of motor imagery EEG-based
BCI systems. Inspired by these research results, we used multiple overlapping filter bands
to achieve higher classification accuracy.
In the second step, filtered signals were transformed to a spatial subspace using the CSP
algorithm for feature extraction, and CSP features in each frequency band were extracted.
In the third step, the discriminative features were selected from the filter bank, which
consisted of overlapping frequency bands, using the ITFE algorithm [65], which optimizes the
approximation of mutual information between class labels and the extracted EEG/MEG com-
ponents for multiclass CSP. The fourth step employed two stacked LSTM layers to classify
the selected CSP features. Thus, the selected features were then fed into LSTM networks.
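The overlapping filter bank used in the first step (4–8, 6–10, …, 28–32 Hz, Chebyshev type II) might be built along these lines; the filter order and stopband attenuation are assumptions, since the paper does not state them:

```python
import numpy as np
from scipy.signal import cheby2, sosfiltfilt

def overlapping_bands(start=4, stop=32, width=4, overlap=2):
    """Band edges 4-8, 6-10, ..., 28-32 Hz (13 bands for the defaults)."""
    step = width - overlap
    return [(lo, lo + width) for lo in range(start, stop - width + step, step)]

def filter_bank(x, fs=250.0, order=4, rs=30):
    """Apply a zero-phase Chebyshev II bandpass per sub-band.
    x: (channels, samples); returns (n_bands, channels, samples)."""
    out = []
    for lo, hi in overlapping_bands():
        sos = cheby2(order, rs, [lo, hi], btype="bandpass", fs=fs, output="sos")
        out.append(sosfiltfilt(sos, x, axis=-1))
    return np.stack(out)

bands = overlapping_bands()
assert len(bands) == 13 and bands[0] == (4, 8) and bands[-1] == (28, 32)
```

With a 2 Hz step between 4 Hz wide bands, every frequency between 8 and 28 Hz is covered by two filters, which is what lets the later feature selection pick subject-specific sub-bands.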
The LSTM network is an advanced RNN that allows information to persist and can handle
the vanishing gradient problem of RNNs [33,42]. The key to LSTM is the cell state, which
is regulated by three gates. The forget gate decides whether to remember or forget the
previous time step’s information, the input gate learns new information, and the output
gate passes the updated information from the current time step to the next. In other words,
LSTM uses these three gates to retain important information and release unrelated
information. Therefore, LSTM is an excellent way to reveal the internal temporal correlation
of time-series signals and can learn one time step at a time from EEG channels, so we adopt
LSTM to extract discriminative features of time-varying EEG signals.
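The gate interactions described above can be written in a few lines; the following single-cell NumPy sketch is purely illustrative (random weights, toy sizes), not the two-layer network trained in this study:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM time step. W: (4*H, D+H) stacked gate weights, b: (4*H,)."""
    z = W @ np.concatenate([x, h_prev]) + b
    H = h_prev.size
    f = sigmoid(z[0:H])          # forget gate: keep/drop previous cell state
    i = sigmoid(z[H:2 * H])      # input gate: admit new information
    g = np.tanh(z[2 * H:3 * H])  # candidate cell update
    o = sigmoid(z[3 * H:])       # output gate: expose cell state to next step
    c = f * c_prev + i * g       # updated cell state
    h = o * np.tanh(c)           # hidden state passed to the next time step
    return h, c

rng = np.random.default_rng(1)
D, H = 8, 4                      # feature and hidden sizes (illustrative)
W = rng.standard_normal((4 * H, D + H)) * 0.1
b = np.zeros(4 * H)
h = c = np.zeros(H)
for t in range(10):              # unroll over a short feature sequence
    h, c = lstm_step(rng.standard_normal(D), h, c, W, b)
assert h.shape == (H,) and np.all(np.abs(h) < 1.0)
```

Stacking a second such layer on the sequence of hidden states h, followed by a softmax over the four MI classes, gives the overall shape of the classifier used here.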
As mentioned above, we employ an overlapping bandpass filter as well as an overlapping
window-based feature extraction method to further improve the classification accuracy of
four-class MI EEG signals. Moreover, LSTM is used for classification, enabling better quality
learning by storing correlation information from EEG signals through time. In this scenario,
we anticipate that our approach will provide more discriminative information for feature
extraction from EEG signals.
3.3. LSTM
3.3. LSTM Based
Based FBCSP with Overlapped
FBCSP with Overlapped Band
Band Applying
Applying Channel
Channel Selection
Selection
EEG is
EEG is measured
measured by by aa BCI
BCI device
device with
with many
many channels.
channels. To explore the
To explore the effectiveness
effectiveness
of channel
of channel selection,
selection, we
we first
first carried
carried out
out the
the channel
channel selection
selection task
task by
by correlation between
correlation between
channels and then performed feature extraction and classification with only selected chan-
channels and then performed feature extraction and classification with only selected chan-
nels among
among multiple
multiple channels.
channels.Figure
Figure3 shows
3 showsthethe
EEGEEGclassification processing
classification stepstep
processing in-
cluding the channel selection step.
including the channel selection step.
Pearson’s correlation coefficient is used to find the channel correlation for a subject’s MI
in the channel selection task. We calculated the correlation coefficient between channels for
each MI and determined that the correlation between two channels was high when the
absolute value of the correlation coefficient was greater than or equal to 0.8. MICoeff j,k,l,
which stores the high-correlation information between channels for each MI, is calculated
by Equation (5) and stores 1 if the correlation between channel j and channel k in the lth
motion is high, or 0 otherwise, where l is the lth MI for each subject and j and k denote the
jth and kth channels.

MICoeff j,k,l = 1 if |coeff(Cj,l, Ck,l)| ≥ 0.8; 0 otherwise (5)
where j and k = 0, …, 21, and l = 1, …, 4.
Figure 4 shows an example of MICoeff showing high correlation by channel during
one trial for several subjects. That is, channels having a high inter-channel correlation for
each MI are shown. For example, for subject 1, channel 0 showed high correlation with
channels 2, 3, and 4 in all motions, but high correlation with channel 7 only in right motions.
Channels with high correlation for all MIs can interfere with unambiguously determining
the class in MI classification, so those channels were removed. That is, channels with high
inter-channel correlation for each MI of all subjects are checked, the high correlations
between all channels are ranked, and channels are eliminated in order of maximum
correlation. Thus, only channels that allow the subject’s MI to be discriminated are
selected.
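The correlation-based selection described above might be prototyped as follows. This is a sketch with NumPy; the data shapes and the greedy removal order are simplified assumptions about the procedure, and the helper names are hypothetical:

```python
import numpy as np

def high_corr_pairs(trial, thresh=0.8):
    """MICoeff for one MI trial: 1 where |Pearson r| between channels >= thresh.
    trial: (n_channels, n_samples)."""
    r = np.corrcoef(trial)
    m = (np.abs(r) >= thresh).astype(int)
    np.fill_diagonal(m, 0)               # ignore self-correlation
    return m

def rank_channels_to_remove(trials, n_remove, thresh=0.8):
    """Accumulate high-correlation counts over trials and drop the channels
    that are most often highly correlated with others."""
    counts = sum(high_corr_pairs(t, thresh).sum(axis=1) for t in trials)
    return list(np.argsort(counts)[::-1][:n_remove])

rng = np.random.default_rng(2)
base = rng.standard_normal((1, 500))
# channels 0-2 share a strong common component; channels 3-6 are independent
data = np.vstack([base + 0.05 * rng.standard_normal((3, 500)),
                  rng.standard_normal((4, 500))])
removed = rank_channels_to_remove([data], n_remove=2)
assert set(removed) <= {0, 1, 2}
```

The redundant channels (those that rarely carry class-discriminative differences) are the ones removed first, which is why the kappa value degrades only gradually as channels are deleted.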
Figure 4. Correlation between channels for each MI. The circle (o) represents a high correlation
between two channels. (a) Subject 1; (b) Subject 3; (c) Subject 8.
4. Experimental Results
4.1. Dataset and Experimental Environment
In this study, we used the BCI competition IV-2a dataset [64]. This dataset was
collected from nine subjects imagining movements of the left hand, right hand, feet, and
tongue, using twenty-two EEG electrodes and three EOG channels with a sampling frequency
of 250 Hz and a bandpass filter between 0.5 Hz and 100 Hz. All experiments were carried
out using Python on an Intel i9-7920X CPU and an Nvidia GTX 1080 Ti GPU. The window
size for feature extraction was set to 750 (750win) and 250 (250win); 750win is a window
without window sliding, and the window movement time ∆ts of 250win with window sliding
was set to 0.1 s. Subject-specific features were extracted using FBCSP in each channel
except for the EOG channels. 250win-OB is a technique that extracts features by applying an
overlapped band in FBCSP to 250win. The overlap size applied to the bandpass filter was set
to 2 Hz. In FBCSP, the classifier used a two-layer stacked LSTM. Of the total trials for
each user, 80% was used as a training dataset
and 20% as a testing dataset, and the epoch was set to 400. We evaluated the proposed
method based on various evaluation metrics such as accuracy, kappa coefficient, precision,
recall, and the results of all the evaluation metrics were represented as the average value of
10 repetitions. The kappa value was mainly used as the evaluation metric for comparison
evaluation with previously proposed algorithms. The kappa value is a scale that reflects the
classification accuracy by correcting the classification results caused by chance, and the BCI
competition committee that provides the data used in this experiment also recommends
this scale [64,66].
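For reference, the chance-corrected kappa used throughout this evaluation can be computed as follows (the standard Cohen's kappa formulation; a sketch, not the paper's evaluation code):

```python
import numpy as np

def cohens_kappa(y_true, y_pred, n_classes=4):
    """kappa = (p_o - p_e) / (1 - p_e), with p_e estimated from the marginals."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                    # confusion matrix: rows true, cols predicted
    n = cm.sum()
    p_o = np.trace(cm) / n               # observed agreement (plain accuracy)
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# perfect agreement gives kappa = 1; for four balanced classes, chance gives ~0
assert cohens_kappa([0, 1, 2, 3], [0, 1, 2, 3]) == 1.0
```

For a balanced four-class problem, random guessing yields p_e = 0.25, so a kappa of 0.95 corresponds to near-ceiling accuracy after the chance correction.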
Figure 5. Kappa value according to window size.
Figure 6. Index of classification result of 250win-OB.
We compared the performance of the existing methods and the 250win-OB method
proposed in this study. Figure 7 shows the results of the comparison by kappa value.
The 250win-OB method shows better performance than the other previously proposed
methods. Existing techniques showed a large difference in the kappa value according to
the subject, whereas 250win-OB maintained a significantly high value for all subjects. In
more detail, FBCSP [30] had an average kappa value of 0.57 and a standard deviation of
0.18 across subjects, SRLDA [67] showed 0.74 and 0.17, the shared network [33] showed
0.81 and 0.12, and HDNN-TL [53] showed 0.81 and 0.10. On the other hand, the method
proposed in this work showed an average of 0.95 and a standard deviation of 0.02.
Therefore, this comparison shows that the proposed method is more suitable for the MI
EEG classification of various subjects than existing algorithms.
Figure 7. Performance comparison between existing methods and the proposed method.
Figure 8. Comparison of accuracy with existing methods and the proposed method.
The correlation coefficients between channels are calculated to conduct channel selec-
tion for classifying EEG. Figure 9 shows a heat map of the correlation between channels
for each user. As shown in the experimental results, the channel correlation coefficient
values for each user showed a lot of variation. Subject3 and subject5 show a much higher
correlation between channels than other subjects, whereas subject6 and subject9 show a
relatively low correlation compared to other subjects. Overall, there are differences in the
value of cumulative correlation, but channels with high inter-channel correlation showed
high correlation in most subjects.
Figure 9. Cumulative channel correlation heatmap by subject. The heatmap is based on the
cumulative value of the number of times the correlation between channels is greater than 0.8.
To analyze the effect of the number of channels on the classification accuracy, the
results using all 22 channels and the results after removing channels with high correlations
were compared. The number of channels to be removed can be specified, and in this
experiment, the results after removing 5, 10, and 15 channels were compared with the
results using all channels; the results are shown in Figure 10 with the comparison of
kappa values. The proposed 250win-OB method was used in the experiments.
When five channels with high correlation were removed, subject4, subject7, and sub-
Figure 10. showed
ject8 Changeslightly
of kappa value
lower according
results to channel with
than 250win-OB removal.
all channels, but with these
exceptions, most subjects showed the same performance as 250win-OB. In the case of
250win-OB-10 with 10 channels deleted, the average performance was lowered by 1.35%
Electronics 2023, 12, 1186 13 of 16
compared to 250win-OB. In the case of 250win-OB-15, using only seven channels after delet-
ing fifteen channels, it showed 2.87% lower performance than 250win-OB. Experimental
results showed that classification was best performed when all twenty-two channels were
used, but an average kappa value of 0.93 was maintained even when only seven channels
were used.
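The kappa values compared above are Cohen's kappa, which measures agreement between predicted and true labels corrected for chance; for the four balanced MI classes, chance-level accuracy is 25%, so kappa 0 corresponds to guessing and kappa 1 to perfect classification. A minimal sketch of the computation:

```python
import numpy as np

def cohens_kappa(y_true, y_pred, n_classes=4):
    """Cohen's kappa from label sequences, as used to score multi-class MI classifiers."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1                                # confusion matrix
    n = cm.sum()
    p_o = np.trace(cm) / n                           # observed agreement (accuracy)
    p_e = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

print(cohens_kappa([0, 0, 1, 1], [0, 0, 1, 0], n_classes=2))  # 0.5
```

In this two-class example, the accuracy is 0.75 and chance agreement is 0.5, so kappa is (0.75 − 0.5)/(1 − 0.5) = 0.5.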
5. Conclusions
EEG signals used in BCI systems are difficult to learn end-to-end because they are strongly affected by noise, and classification performance depends heavily on
the frequency range used. To classify the four-class MI EEG for each subject, we proposed
an overlapped band-based FBCSP with an LSTM classifier. The proposed algorithm applied
a sliding window to each channel, reduced the dependence on a fixed frequency band
by extracting features for each window using an overlapped-band-based FBCSP, and classified the features over time using an LSTM. Through experiments, we showed that the
algorithm proposed in this study can classify the four-class MI EEG of all subjects better
than other existing algorithms. In future work, we plan to conduct a study on selecting the
minimum required number of channels based on the set accuracy and finding the channel
selection threshold for choosing the number of channels. Based on the findings of this
paper, we believe that our research can be extended to EEG-based emotion recognition,
preference recognition in neuromarketing and game control, and so on.
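The overlapping sliding-window segmentation that feeds the per-window FBCSP features into the LSTM can be sketched as follows; the 250-sample window length is our reading of the 250win-OB naming, and the 50% step is an illustrative assumption:

```python
import numpy as np

def sliding_windows(trial, win_len, step):
    """Split one EEG trial (channels x samples) into overlapping windows."""
    n_ch, n_samp = trial.shape
    starts = range(0, n_samp - win_len + 1, step)
    return np.stack([trial[:, s:s + win_len] for s in starts])

trial = np.zeros((22, 1000))                          # 22 channels, 4 s at 250 Hz
wins = sliding_windows(trial, win_len=250, step=125)  # 250-sample windows, 50% overlap
print(wins.shape)  # (7, 22, 250)
```

Each of the seven windows would then be bandpass-filtered with the overlapping filter bank and reduced by CSP, yielding the time-ordered feature sequence consumed by the LSTM.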
Author Contributions: Conceptualization, J.H. and J.C.; methodology, J.C.; software, S.P. and J.C.;
validation, J.H. and J.C.; formal analysis, J.H. and J.C.; investigation, J.H.; writing—original draft
preparation, J.H. and J.C.; writing—review and editing, J.H., S.P. and J.C.; supervision, J.C. All authors
have read and agreed to the published version of the manuscript.
Funding: Funding for this paper was provided by Namseoul University.
Data Availability Statement: https://www.bbci.de/competition/iv/ (accessed on 18 February 2023).
Conflicts of Interest: The authors declare no conflict of interest.
References
1. Katona, J. A Review of Human–Computer Interaction and Virtual Reality Research Fields in Cognitive InfoCommunications.
Appl. Sci. 2021, 11, 2646. [CrossRef]
2. McFarland, D.J.; Wolpaw, J.R. Brain-computer interfaces for communication and control. Commun. ACM 2011, 54, 60–66.
[CrossRef] [PubMed]
3. Izso, L. The significance of cognitive infocommunications in developing assistive technologies for people with non-standard
cognitive characteristics: CogInfoCom for people with non-standard cognitive characteristics. In Proceedings of the 2015 6th
IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Gyor, Hungary, 19–21 October 2015; pp. 77–82.
4. Eisapour, M.; Cao, S.; Domenicucci, L.; Boger, J. Virtual Reality Exergames for People Living with Dementia Based on Exercise
Therapy Best Practices. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2018, 62, 528–532. [CrossRef]
5. Amprimo, G.; Rechichi, I.; Ferraris, C.; Olmo, G. Measuring Brain Activation Patterns from Raw Single-Channel EEG during
Exergaming: A Pilot Study. Electronics 2023, 12, 623. [CrossRef]
6. Katona, J.; Ujbanyi, T.; Sziladi, G.; Kovari, A. Speed control of Festo Robotino mobile robot using NeuroSky MindWave
EEG headset based brain-computer interface. In Proceedings of the 2016 7th IEEE International Conference on Cognitive
Infocommunications (CogInfoCom), Wroclaw, Poland, 16–18 October 2016; pp. 000251–000256. [CrossRef]
7. Stephygraph, L.R.; Arunkumar, N. Brain-Actuated Wireless Mobile Robot Control through an Adaptive Human–Machine
Interface. In Proceedings of the International Conference on Soft Computing Systems: ICSCS 2015; Advances in Intelligent
Systems and Computing. Springer: New Delhi, India, 2015; Volume 1, pp. 537–549. [CrossRef]
8. Markopoulos, E.; Lauronen, J.; Luimula, M.; Lehto, P.; Laukkanen, S. Maritime safety education with VR technology (MarSEVR).
In Proceedings of the 2019 10th IEEE International Conference on Cognitive Infocommunications (CogInfoCom), Naples, Italy,
23–25 October 2019; pp. 283–288.
9. Spaak, E.; Fonken, Y.; Jensen, O.; de Lange, F.P. The Neural Mechanisms of Prediction in Visual Search. Cereb. Cortex 2015,
26, 4327–4336. [CrossRef]
10. de Vries, I.E.; van Driel, J.; Olivers, C.N. Posterior α EEG dynamics dissociate current from future goals in working memory-guided
visual search. J. Neurosci. 2017, 37, 1591–1603. [CrossRef]
11. Qian, L.; Ge, X.; Feng, Z.; Wang, S.; Yuan, J.; Pan, Y.; Shi, H.; Xu, J.; Sun, Y. Brain Network Reorganization During Visual Search
Task Revealed by a Network Analysis of Fixation-Related Potential. IEEE Trans. Neural Syst. Rehabil. Eng. 2023, 31, 1219–1229.
[CrossRef]
12. Liu, Y.; Yu, Y.; Ye, Z.; Li, M.; Zhang, Y.; Zhou, Z.; Hu, D.; Zeng, L.-L. Fusion of Spatial, Temporal, and Spectral EEG Signatures
Improves Multilevel Cognitive Load Prediction. IEEE Trans. Hum.-Mach. Syst. 2023, 1–10. [CrossRef]
13. Yusoff, M.Z.; Kamel, N.; Malik, A.; Meselhy, M. Mental task motor imagery classifications for noninvasive brain computer
interface. In Proceedings of the 2014 5th International Conference on Intelligent and Advanced Systems (ICIAS), Kuala Lumpur,
Malaysia, 3–5 June 2014; pp. 1–5.
14. Djemal, R.; Bazyed, A.G.; Belwafi, K.; Gannouni, S.; Kaaniche, W. Three-Class EEG-Based Motor Imagery Classification Using
Phase-Space Reconstruction Technique. Brain Sci. 2016, 6, 36. [CrossRef]
15. Wolpaw, J.; Birbaumer, N.; Heetderks, W.; McFarland, D.; Peckham, P.; Schalk, G.; Donchin, E.; Quatrano, L.; Robinson, C.;
Vaughan, T. Brain-computer interface technology: A review of the first international meeting. IEEE Trans. Rehabil. Eng. 2000,
8, 164–173. [CrossRef]
16. Kaiser, V.; Bauernfeind, G.; Kreilinger, A.; Kaufmann, T.; Kübler, A.; Neuper, C.; Müller-Putz, G.R. Cortical effects of user training
in a motor imagery based brain–computer interface measured by fNIRS and EEG. Neuroimage 2014, 85, 432–444. [CrossRef]
17. Pfurtscheller, G.; Neuper, C. Motor imagery activates primary sensorimotor area in humans. Neurosci. Lett. 1997, 239, 65–68.
[CrossRef] [PubMed]
18. Pfurtscheller, G.; Neuper, C. Motor imagery and direct brain-computer communication. Proc. IEEE 2001, 89, 1123–1134. [CrossRef]
19. Siuly, S.; Li, Y.; Wen, P. Modified CC-LR algorithm with three diverse feature sets for motor imagery tasks classification in EEG
based brain–computer interface. Comput. Methods Programs Biomed. 2014, 113, 767–780. [CrossRef]
20. Pfurtscheller, G.; Neuper, C.; Flotzinger, D.; Pregenzer, M. EEG-based discrimination between imagination of right and left hand
movement. Electroencephalogr. Clin. Neurophysiol. 1997, 103, 642–651. [CrossRef]
21. He, B.; Baxter, B.; Edelman, B.J.; Cline, C.C.; Ye, W.W. Noninvasive Brain-Computer Interfaces Based on Sensorimotor Rhythms.
Proc. IEEE 2015, 103, 907–925. [CrossRef] [PubMed]
22. Barbati, G.; Porcaro, C.; Zappasodi, F.; Rossini, P.M.; Tecchio, F. Optimization of an independent component analysis approach for
artifact identification and removal in magnetoencephalographic signals. Clin. Neurophysiol. 2004, 115, 1220–1232. [CrossRef]
23. Ferracuti, F.; Casadei, V.; Marcantoni, I.; Iarlori, S.; Burattini, L.; Monteriù, A.; Porcaro, C. A functional source separation algorithm
to enhance error-related potentials monitoring in noninvasive brain-computer interface. Comput. Methods Programs Biomed. 2020,
191, 105419. [CrossRef]
24. Shenoy, P.; Krauledat, M.; Blankertz, B.; Rao, R.P.N.; Müller, K.-R. Towards adaptive classification for BCI. J. Neural Eng. 2006,
3, R13–R23. [CrossRef]
25. Nicolas-Alonso, L.F.; Gomez-Gil, J. Brain computer interfaces, a review. Sensors 2012, 12, 1211–1279. [CrossRef]
26. Dai, G.; Zhou, J.; Huang, J.; Wang, N. HS-CNN: A CNN with hybrid convolution scale for EEG motor imagery classification. J.
Neural Eng. 2019, 17, 016025. [CrossRef]
27. Alzahab, N.A.; Apollonio, L.; Di Iorio, A.; Alshalak, M.; Iarlori, S.; Ferracuti, F.; Porcaro, C. Hybrid deep learning (hDL)-based
brain-computer interface (BCI) systems: A systematic review. Brain Sci. 2021, 11, 75. [CrossRef] [PubMed]
28. Müller-Gerking, J.; Pfurtscheller, G.; Flyvbjerg, H. Designing optimal spatial filters for single-trial EEG classification in a
movement task. Clin. Neurophysiol. 1999, 110, 787–798. [CrossRef]
29. Wang, Y.; Gao, S.; Gao, X. Common spatial pattern method for channel selection in motor imagery based brain-computer interface.
In Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology—Proceedings, Shanghai,
China, 31 August–3 September 2005; Volume 7, pp. 5392–5395. [CrossRef]
30. Ang, K.K.; Chin, Z.Y.; Zhang, H.; Guan, C. Filter bank common spatial pattern (FBCSP) in brain-computer interface. In Proceedings
of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence), Hong
Kong, China, 1–8 June 2008; pp. 2390–2397.
31. Ma, Y.; Ding, X.; She, Q.; Luo, Z.; Potter, T.; Zhang, Y. Classification of Motor Imagery EEG Signals with Support Vector Machines
and Particle Swarm Optimization. Comput. Math. Methods Med. 2016, 2016, 4941235. [CrossRef]
32. Lu, N.; Li, T.; Ren, X.; Miao, H. A Deep Learning Scheme for Motor Imagery Classification based on Restricted Boltzmann
Machines. IEEE Trans. Neural Syst. Rehabil. Eng. 2016, 25, 566–576. [CrossRef] [PubMed]
33. Zhang, R.; Zong, Q.; Dou, L.; Zhao, X. A novel hybrid deep learning scheme for four-class motor imagery classification. J. Neural
Eng. 2019, 16, 066004. [CrossRef]
34. Shovon, T.H.; Al Nazi, Z.; Dash, S.; Hossain, M.F. Classification of motor imagery EEG signals with multi-input convolutional
neural network by augmenting STFT. In Proceedings of the 2019 5th International Conference on Advances in Electrical
Engineering (ICAEE), Dhaka, Bangladesh, 26–28 September 2019; pp. 398–403. [CrossRef]
35. Wang, P.; Jiang, A.; Liu, X.; Shang, J.; Zhang, L. LSTM-based EEG classification in motor imagery tasks. IEEE Trans. Neural Syst.
Rehabil. Eng. 2018, 26, 2086–2095. [CrossRef] [PubMed]
36. Yang, T.; Phua, K.S.; Yu, J.; Selvaratnam, T.; Toh, V.; Ng, W.H.; So, R.Q. Image-based motor imagery EEG classification using
convolutional neural network. In Proceedings of the 2019 IEEE EMBS International Conference on Biomedical & Health
Informatics (BHI), Chicago, IL, USA, 19–22 May 2019; pp. 1–4.
37. Blankertz, B.; Tomioka, R.; Lemm, S.; Kawanabe, M.; Muller, K.-R. Optimizing Spatial filters for Robust EEG Single-Trial Analysis.
IEEE Signal Process. Mag. 2007, 25, 41–56. [CrossRef]
38. Lotte, F.; Guan, C. Regularizing Common Spatial Patterns to Improve BCI Designs: Unified Theory and New Algorithms. IEEE
Trans. Biomed. Eng. 2010, 58, 355–362. [CrossRef]
39. Ang, K.K.; Chin, Z.Y.; Zhang, H.; Guan, C. Mutual information-based selection of optimal spatial–temporal patterns for single-trial
EEG-based BCIs. Pattern Recognit. 2012, 45, 2137–2144. [CrossRef]
40. Yahya-Zoubir, B.; Bentlemsan, M.; Zemouri, E.T.; Ferroudji, K. Adaptive time window for EEG-based motor imagery classification.
In Proceedings of the International Conference on Intelligent Information Processing, Security and Advanced Communication,
Batna, Algeria, 23–25 November 2015; pp. 1–6.
41. Gaur, P.; Gupta, H.; Chowdhury, A.; McCreadie, K.; Pachori, R.B.; Wang, H. A Sliding Window Common Spatial Pattern for
Enhancing Motor Imagery Classification in EEG-BCI. IEEE Trans. Instrum. Meas. 2021, 70, 4002709. [CrossRef]
42. Liu, Y.; Huang, Y.-X.; Zhang, X.; Qi, W.; Guo, J.; Hu, Y.; Zhang, L.; Su, H. Deep C-LSTM Neural Network for Epileptic Seizure and
Tumor Detection Using High-Dimension EEG Signals. IEEE Access 2020, 8, 37495–37504. [CrossRef]
43. Ai, Q.; Chen, A.; Chen, K.; Liu, Q.; Zhou, T.; Xin, S.; Ji, Z. Feature extraction of four-class motor imagery EEG signals based on
functional brain network. J. Neural Eng. 2019, 16, 026032. [CrossRef] [PubMed]
44. Farquhar, J.; Hill, N.J.; Lal, T.N.; Schölkopf, B. Regularised CSP for Sensor Selection in BCI. In Proceedings of the 3rd International
BCI workshop, Graz, Austria, 21–24 September 2006; pp. 1–2.
45. Arvaneh, M.; Guan, C.; Ang, K.K.; Quek, C. Multi-frequency band common spatial pattern with sparse optimization in Brain-
Computer Interface. In Proceedings of the 2012 IEEE International Conference on Acoustics, Speech and Signal Processing
(ICASSP), Kyoto, Japan, 25–30 March 2012; pp. 2541–2544. [CrossRef]
46. Kumar, S.; Sharma, A.; Tsunoda, T. An improved discriminative filter bank selection approach for motor imagery EEG signal
classification using mutual information. BMC Bioinform. 2017, 18, 125–137. [CrossRef]
47. Al-Saegh, A.; Dawwd, S.A.; Abdul-Jabbar, J.M. Deep learning for motor imagery EEG-based classification: A review. Biomed.
Signal Process. Control. 2020, 63, 102172. [CrossRef]
48. Hamedi, M.; Salleh, S.-H.; Noor, A.M.; Mohammad-Rezazadeh, I. Neural network-based three-class motor imagery classification
using time-domain features for BCI applications. In Proceedings of the 2014 IEEE REGION 10 SYMPOSIUM, Kuala Lumpur,
Malaysia, 14–16 April 2014; pp. 204–207. [CrossRef]
49. Park, H.-J.; Kim, J.; Min, B.; Lee, B. Motor imagery EEG classification with optimal subset of wavelet based common spatial
pattern and kernel extreme learning machine. In Proceedings of the 2017 39th Annual International Conference of the IEEE
Engineering in Medicine and Biology Society (EMBC), Jeju, Korea, 11–15 July 2017; pp. 2863–2866. [CrossRef]
50. Tabar, Y.R.; Halici, U. A novel deep learning approach for classification of EEG motor imagery signals. J. Neural Eng. 2016,
14, 016003. [CrossRef]
51. Lee, H.K.; Choi, Y.-S. Application of Continuous Wavelet Transform and Convolutional Neural Network in Decoding Motor
Imagery Brain-Computer Interface. Entropy 2019, 21, 1199. [CrossRef]
52. Sakhavi, S.; Guan, C.; Yan, S. Learning Temporal Information for Brain-Computer Interface Using Convolutional Neural Networks.
IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 5619–5629. [CrossRef]
53. Zhang, R.; Zong, Q.; Dou, L.; Zhao, X.; Tang, Y.; Li, Z. Hybrid deep neural network using transfer learning for EEG motor imagery
decoding. Biomed. Signal Process. Control. 2020, 63, 102144. [CrossRef]
54. Zhou, J.; Meng, M.; Gao, Y.; Ma, Y.; Zhang, Q. Classification of motor imagery eeg using wavelet envelope analysis and LSTM
networks. In Proceedings of the 2018 Chinese Control and Decision Conference (CCDC), Shenyang, China, 9–11 June 2018;
pp. 5600–5605. [CrossRef]
55. Ma, X.; Qiu, S.; Du, C.; Xing, J.; He, H. Improving EEG-based motor imagery classification via spatial and temporal recurrent
neural networks. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and
Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; pp. 1903–1906.
56. Handiru, V.S.; Prasad, V.A. Optimized Bi-Objective EEG Channel Selection and Cross-Subject Generalization with Brain–Computer
Interfaces. IEEE Trans. Hum.-Mach. Syst. 2016, 46, 777–786. [CrossRef]
57. Ghaemi, A.; Rashedi, E.; Pourrahimi, A.M.; Kamandar, M.; Rahdari, F. Automatic channel selection in EEG signals for classification
of left or right hand movement in Brain Computer Interfaces using improved binary gravitation search algorithm. Biomed. Signal
Process. Control. 2017, 33, 109–118. [CrossRef]
58. Baig, M.Z.; Aslam, N.; Shum, H.P.H. Filtering techniques for channel selection in motor imagery EEG applications: A survey.
Artif. Intell. Rev. 2019, 53, 1207–1232. [CrossRef]
59. Yang, H.; Guan, C.; Wang, C.C.; Ang, K.K. Maximum dependency and minimum redundancy-based channel selection for motor
imagery of walking EEG signal detection. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and
Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 1187–1191. [CrossRef]
60. Shenoy, H.V.; Vinod, A.P. An iterative optimization technique for robust channel selection in motor imagery based Brain Computer
Interface. In Proceedings of the 2014 IEEE International Conference on Systems, Man, and Cybernetics (SMC), San Diego, CA,
USA, 5–8 October 2014; pp. 1858–1863. [CrossRef]
61. Li, M.; Ma, J.; Jia, S. Optimal combination of channels selection based on common spatial pattern algorithm. In Proceedings of the
2011 IEEE International Conference on Mechatronics and Automation, Beijing, China, 7–10 August 2011; pp. 295–300. [CrossRef]
62. Ma, X.; Qiu, S.; Wei, W.; Wang, S.; He, H. Deep Channel-Correlation Network for Motor Imagery Decoding from the Same Limb.
IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 28, 297–306. [CrossRef] [PubMed]
63. Li, Y.; Zhang, X.-R.; Zhang, B.; Lei, M.-Y.; Cui, W.-G.; Guo, Y.-Z. A Channel-Projection Mixed-Scale Convolutional Neural Network
for Motor Imagery EEG Decoding. IEEE Trans. Neural Syst. Rehabil. Eng. 2019, 27, 1170–1180. [CrossRef]
64. Brunner, C.; Leeb, R.; Müller-Putz, G.; Schlögl, A.; Pfurtscheller, G. BCI Competition 2008–Graz Data Set A; Laboratory of Brain-
Computer Interfaces, Institute for Knowledge Discovery, Graz University of Technology: Graz, Austria, 2008; Volume 16,
pp. 1–6.
65. Grosse-Wentrup, M.; Buss, M. Multiclass Common Spatial Patterns and Information Theoretic Feature Extraction. IEEE Trans.
Biomed. Eng. 2008, 55, 1991–2000. [CrossRef] [PubMed]
66. Schlögl, A.; Lee, F.; Bischof, H.; Pfurtscheller, G. Characterization of four-class motor imagery EEG data for the BCI-competition
2005. J. Neural Eng. 2005, 2, L14–L22. [CrossRef]
67. Nicolas-Alonso, L.F.; Corralejo, R.; Gomez-Pilar, J.; Alvarez, D.; Hornero, R. Adaptive Stacked Generalization for Multiclass
Motor Imagery-Based Brain Computer Interfaces. IEEE Trans. Neural Syst. Rehabil. Eng. 2015, 23, 702–712. [CrossRef]
68. Antony, M.J.; Sankaralingam, B.P.; Mahendran, R.K.; Gardezi, A.A.; Shafiq, M.; Choi, J.-G.; Hamam, H. Classification of EEG
Using Adaptive SVM Classifier with CSP and Online Recursive Independent Component Analysis. Sensors 2022, 22, 7596.
[CrossRef]
69. Zhu, J.; Zhu, L.; Ding, W.; Ying, N.; Xu, P.; Zhang, J. An improved feature extraction method using low-rank representation for
motor imagery classification. Biomed. Signal Process. Control. 2023, 80, 104389. [CrossRef]
70. Liu, X.; Shi, R.; Hui, Q.; Xu, S.; Wang, S.; Na, R.; Sun, Y.; Ding, W.; Zheng, D.; Chen, X. TCACNet: Temporal and channel attention
convolutional network for motor imagery classification of EEG-based BCI. Inf. Process. Manag. 2022, 59, 103001. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.