EEG Classification of Forearm Movement Imagery Using A Hierarchical Flow Convolutional Neural Network
ABSTRACT Recent advances in brain-computer interface (BCI) techniques have led to increasingly refined
interactions between users and external devices. Accurately decoding kinematic information from brain
signals is one of the main challenges encountered in the control of human-like robots. In particular, although
the forearm of the upper extremity is frequently used in daily life for high-level tasks, only a few studies
have addressed decoding of forearm movements. In this study, we focus on the classification of forearm
movements according to fine-grained rotation angles using electroencephalogram (EEG) signals. To this end,
we propose a hierarchical flow convolutional neural network (HF-CNN) model for robust classification. We
evaluate the proposed model not only with our experimental dataset but also with a public dataset (BNCI
Horizon 2020). The grand-average classification accuracies over the three rotation angles were 0.73 (±0.04) for
the motor execution (ME) task and 0.65 (±0.09) for the motor imagery (MI) task across the ten subjects in our
experimental dataset. Further, on the public dataset, the grand-average classification accuracies were 0.52
(±0.03) for the ME task and 0.51 (±0.04) for the MI task across fifteen subjects. Our experimental results demonstrate
the possibility of decoding complex kinematic information using EEG signals. This study will contribute
to the development of a brain-controlled robotic arm system capable of performing high-level tasks.
INDEX TERMS Brain-computer interface (BCI), electroencephalogram (EEG), convolutional neural net-
work (CNN), forearm motor execution and motor imagery.
FIGURE 3. Overview of the proposed HF-CNN model for classifying the forearm rotation from EEG signals.
[39], [40]. We designed the frame of the CNN and adopted a hierarchical architecture to extract relevant features for the multiple classifications. The HF-CNN was trained separately for each subject because of the uncertainty characteristics of EEG signals.

Initially, we randomly selected 80% of the trials as a training set and used the remaining 20% as a test set for classification [35]. The entire dataset included 150 trials, comprising 50 trials per class. Hence, the data from 120 trials (i.e., 40 trials × 3 classes) were assigned for training, and the data from the remaining 30 trials were designated as test data. Fig. 4 shows the proposed HF-CNN architecture, which is composed of two main steps: CNN I for movement detection and CNN II for forearm rotation detection. CNN I and CNN II comprise three different types of layers: convolution, pooling, and fully-connected layers. In the CNN I step, the dimensions of the EEG input data were 300 × 20 (time × channel). The convolutional layer performed a convolution over the entire input space, linearly transforming it with a learnable kernel of size 1 × 25 to generate a receptive field covering 4 to 40 Hz, given the sampling rate of 100 Hz. Subsequently, the data were processed through a 20 × 1 spatial filter that merged the 20 channels into a single channel. The average pooling layer after the spatial filter downsampled the convolution output with a 1 × 3 kernel, reducing the computational cost of the multiple stacked layers. We assumed that classification between movement and the resting state could be performed easily with low-level features; therefore, CNN I performed only two convolutions and one pooling operation. Subsequently, a fully-connected (FC) layer flattened the features extracted through the multiple layers. We applied an exponential linear unit (ELU) as the activation function. Using a cross-entropy loss function, the CNN architectures were trained to extract relevant features for classifying the input data (i.e., forearm rotation angle at 0°).

The input data of CNN II comprised the feature maps extracted by the last convolution layer of CNN I, and its convolution was conducted with a 1 × 4 kernel. The CNN II obtained its classification output from these shared features.
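To make the layer configuration concrete, the following PyTorch sketch assembles the two branches described above. It is a minimal illustration rather than the authors' released implementation: the number of feature maps (n_filters = 8), the exact placement of the activation, and the class names CNN1 and CNN2 are our assumptions.

```python
import torch
import torch.nn as nn

class CNN1(nn.Module):
    """Movement-detection branch: two convolutions, one average pooling, one FC layer."""
    def __init__(self, n_filters=8):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, n_filters, kernel_size=(25, 1)),          # 1 x 25 temporal kernel
            nn.Conv2d(n_filters, n_filters, kernel_size=(1, 20)),  # 20 x 1 spatial filter
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(3, 1)),                      # 1 x 3 average pooling
        )
        self.classifier = nn.Sequential(nn.Flatten(), nn.LazyLinear(2))  # rest vs. movement

    def forward(self, x):                 # x: (batch, 1, 300, 20), i.e., time x channel
        f = self.features(x)              # feature maps reused as the CNN II input
        return self.classifier(f), f

class CNN2(nn.Module):
    """Rotation-detection branch: consumes CNN I feature maps; 1 x 4 convolution."""
    def __init__(self, n_filters=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_filters, n_filters, kernel_size=(4, 1)),   # 1 x 4 kernel over time
            nn.ELU(),
            nn.Flatten(),
            nn.LazyLinear(2),                                      # 90 deg vs. 180 deg
        )

    def forward(self, f):
        return self.net(f)

logits1, feats = CNN1()(torch.randn(4, 1, 300, 20))   # smoke test on random EEG-shaped input
logits2 = CNN2()(feats)                                # logits1, logits2: (4, 2) each
```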
The total loss of the HF-CNN combines the losses of the two networks with equal weights:

$$\mathrm{Loss}(L_1, L_2) = 0.5 \times L_1 + 0.5 \times L_2 \qquad (1)$$

Each loss term is a generalized cross-entropy loss, defined as:

$$L_1 = -\sum_{c=1}^{2} y_1 \log \hat{y}_1 \qquad (2)$$

$$L_2 = -\sum_{c=1}^{N} y_2 \log \hat{y}_2 \qquad (3)$$

where $y_1$ and $y_2$ are the class labels of CNN I and CNN II, respectively, and $\hat{y}_1$ and $\hat{y}_2$ are the outputs of CNN I and CNN II. The hyperparameter $N$ denotes the number of forearm rotation classes; in this case, we set $N$ to 2.
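The combined objective of Eqs. (1)–(3) maps directly onto the standard cross-entropy of a deep learning framework. The sketch below continues the PyTorch sketch above; the function name hf_cnn_loss is ours.

```python
import torch.nn.functional as F

def hf_cnn_loss(logits1, target1, logits2, target2):
    """Equally weighted sum of the two cross-entropy terms, Eq. (1)."""
    l1 = F.cross_entropy(logits1, target1)   # CNN I: rest vs. movement, Eq. (2)
    l2 = F.cross_entropy(logits2, target2)   # CNN II: 90 deg vs. 180 deg, Eq. (3)
    return 0.5 * l1 + 0.5 * l2
```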
In this manner, the proposed HF-CNN model classifies the forearm rotation angles (0°, 90°, and 180°) from the EEG signals obtained from each subject. We performed 300 iterations (epochs) for the model training process and saved the model weights that produced the lowest loss on the test data. The detailed training stage of the HF-CNN model is depicted in Algorithm 1.

Algorithm 1 Training Stage of HF-CNN
Input: Preprocessed EEG data $\{X, \Omega\}$
• $X = \{x_i\}_{i=1}^{D}$, $x_i \in \mathbb{R}^{N \times T}$: a set of single-trial EEG data, where $D$ is the total number of trials, $N$ is the number of channels, and $T$ is the number of sample points
• $\Omega = \{O_i\}_{i=1}^{D}$: class labels, where $O_i \in \{0, 1, 2\}$ and $D$ is the total number of trials
Output: Trained model
Stage 1: Divide the EEG data into a training set and a test set at a ratio of 80:20
• $X_{tr}$: the training set of EEG data; $\Omega_{tr}$: the label set of the training data
Stage 2: Train CNN I
• Initialize the parameters of CNN I to random values and binarize the class labels (rest and movement), defined as $\Omega_{tr,1} = \{O_{i,1}\}_{i=1}^{D}$, where $O_{i,1} \in \{0, 1\}$
• Store the feature maps extracted in the last convolution layer
• Generate the loss value by calculating the difference between the CNN I output and the class labels $\Omega_{tr,1}$
Stage 3: Train CNN II
• Initialize the parameters of CNN II and define the class labels as rotation at 90° and rotation at 180°: $\Omega_{tr,2} = \{O_{i,2}\}_{i=1}^{D}$, where $O_{i,2} \in \{0, 1\}$
• Use the stored feature maps to train CNN II
• Generate the loss value by calculating the difference between the CNN II output and the class labels $\Omega_{tr,2}$
• Concatenate the CNN I and CNN II outputs to form the final class labels
Stage 4: Fine-tune parameters
• Minimize the loss values by tuning the parameters of both CNN I and CNN II
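How the two binary outputs combine into a three-class decision is not spelled out beyond the "concatenate" step of Algorithm 1; the following sketch gives our reading of that rule, continuing the PyTorch sketch above.

```python
import torch

@torch.no_grad()
def hf_cnn_predict(cnn1, cnn2, x):
    """Hierarchical decision: 0 = rest (0 deg), 1 = 90 deg, 2 = 180 deg."""
    logits1, feats = cnn1(x)             # CNN I also returns the stored feature maps
    moving = logits1.argmax(dim=1)       # 0: rest, 1: movement
    angle = cnn2(feats).argmax(dim=1)    # 0: 90 deg, 1: 180 deg
    # A trial predicted as rest keeps label 0; otherwise CNN II decides 1 or 2.
    # Note the hierarchy: a CNN I error cannot be corrected by CNN II.
    return torch.where(moving == 0, torch.zeros_like(angle), angle + 1)
```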
C. PERFORMANCE EVALUATION
1) DATASET I: OUR EXPERIMENTAL DATASET
To evaluate the proposed method fairly, we applied 5-fold cross-validation and compared the method with existing approaches for EEG decoding via an offline analysis. To this end, we evaluated the classification performance of existing methods (e.g., FBCSP [41], [46], ShallowConvNet [42], DeepConvNet [42], and EEGNet [43]) that have been used for robust EEG classification.

Among the deep learning approaches, the ConvNets [42] are robust CNN models employed to decode multiple classes in MI datasets. The DeepConvNet model comprises four convolution-max-pooling blocks and uses dropout layers with a ratio of 0.5 to avoid overfitting. ShallowConvNet, inspired by the principle of FBCSP, includes the first two layers (a temporal convolution and a spatial filter), thereby extracting band-power features [42]. Further, the EEGNet model, a compact CNN for EEG-based BCIs across various paradigms (e.g., SMR and P300), comprises three different convolution layers to extract representative features; EEGNet has exhibited proficient classification performance compared with other existing methods [43]. In this study, the classification performance for forearm movement decoding was evaluated for each subject using the test data.
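The per-subject 5-fold protocol can be sketched as follows; train_fn and score_fn are our placeholders for the HF-CNN training and test routines, and scikit-learn is assumed only for the fold splitting.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

def evaluate_subject(X, y, train_fn, score_fn, seed=0):
    """5-fold cross-validated accuracy for a single subject."""
    accs = []
    for tr_idx, te_idx in StratifiedKFold(n_splits=5, shuffle=True,
                                          random_state=seed).split(X, y):
        model = train_fn(X[tr_idx], y[tr_idx])                # fit on four folds
        accs.append(score_fn(model, X[te_idx], y[te_idx]))    # score on the held-out fold
    return float(np.mean(accs)), float(np.std(accs))
```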
2) DATASET II: PUBLIC DATASET
Moreover, we validated the proposed HF-CNN model on the public dataset published by the BNCI Horizon 2020 project, which contains various upper-extremity tasks, such as forearm supination/pronation, elbow movement, hand grasping, and rest [47]. The dataset was acquired from fifteen subjects (six males and nine females, 22∼40 years) using 61 EEG channels. We classified the forearm supination class, the forearm pronation class, and rest using the proposed HF-CNN model. These forearm movement classes likewise correspond to forearm rotation angles (−90°, 0°, and 90°), similar to our experimentally obtained dataset. Hence, we evaluated the proposed model on this dataset to demonstrate its applicability and efficiency.
TABLE 1. Classification accuracy of proposed and conventional methods for ME and MI tasks in DATASET I.
TABLE 2. Classification accuracy of proposed and existing methods for ME and MI tasks in DATASET II.
III. EXPERIMENTAL RESULTS
Table 1 shows the classification accuracies of both the proposed model and the existing methods for the ME and MI tasks on DATASET I. The proposed model exhibited the highest grand-average classification accuracies of 0.73 (±0.04) and 0.65 (±0.09) for the ME and MI tasks, respectively. FBCSP with a regularized linear discriminant analysis (RLDA) [41], [46], one of the traditional machine learning methods for BCI decoding, exhibited the lowest classification accuracies, 0.35 (±0.01) for both tasks. This performance was similar to the chance-level accuracy for the three-class problem (approximately 0.33). The deep learning approaches outperformed the general machine learning method. In the ME task, EEGNet [43] and DeepConvNet [42] yielded grand-average accuracies of 0.63 (±0.04) and 0.64 (±0.03), respectively, across all subjects. ShallowConvNet [42] achieved the highest classification accuracy among the existing deep learning methods, 0.71 (±0.04). In the MI task, the deep learning methods showed similar grand-average classification performance.

Table 2 lists the classification accuracies on DATASET II (the public dataset) using the proposed model and the existing methods for the ME and MI tasks. We conducted a three-class classification of forearm movements (i.e., forearm supination, forearm pronation, and rest). As depicted in Table 2, the proposed model outperformed the other methods, with accuracies of 0.52 (±0.03) for the ME task and 0.51 (±0.04) for the MI task.

The training accuracy of the HF-CNN for all subjects is shown in Fig. 5. In the model training phase, the classification performance improved with an increasing number of epochs. The grand-average training accuracy (black line) converged within 100 epochs for both sessions and reached approximately 0.8 for all subjects. However, as depicted by the blue and green lines, the training performance varied among the subjects. After completing the training process, the model was evaluated using the test dataset.

Fig. 6 shows the confusion matrices of the proposed model for multi-class classification on DATASET I and DATASET II. Each column of a confusion matrix represents the target class, whereas each row represents the predicted class (i.e., 0°, 90°, and 180°). On DATASET I, all true-positive values were higher than the true-negative and false-negative values for both the ME and MI tasks. Further, for DATASET II, the multi-class configuration was composed of basic forearm movements, such as forearm pronation and forearm supination. As depicted in the confusion matrices, the true-positive value of the rest class was the highest among the classes for all tasks (i.e., 0.83 for ME and 0.70 for MI). However, in the MI task, the proposed HF-CNN model confused the forearm angles when the target class was −90°, such that it yielded a true-positive value of only 0.50.

To verify the classification performance differences between the proposed model and the existing methods, we conducted a statistical analysis employing the analysis of variance (ANOVA) with the Bonferroni correction, performing multiple comparisons between groups on the classification accuracies for a fair multi-group analysis.
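A SciPy sketch of this analysis is given below. The paper does not specify the exact post-hoc test, so the paired t-tests with Bonferroni correction are our reading of Table 3, and the function name compare_methods is ours.

```python
from itertools import combinations
from scipy import stats

def compare_methods(acc_by_method):
    """One-way ANOVA across methods, then Bonferroni-corrected pairwise paired t-tests.

    acc_by_method: dict mapping method name -> array of per-subject accuracies.
    """
    f_val, p_anova = stats.f_oneway(*acc_by_method.values())
    pairs = list(combinations(acc_by_method, 2))
    corrected = {}
    for a, b in pairs:
        _, p = stats.ttest_rel(acc_by_method[a], acc_by_method[b])
        corrected[(a, b)] = min(1.0, p * len(pairs))    # Bonferroni correction
    return f_val, p_anova, corrected
```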
TABLE 3. Results of significant performance difference using t-test for both datasets.
The obtained p-values are given in Table 3. Most p-values are below 0.01, which implies that the classification performance of the proposed model differs from that of the other methods with statistical significance. In particular, all p-values between the general machine learning method (FBCSP) and the deep learning approaches (ShallowConvNet, DeepConvNet, EEGNet, and HF-CNN) showed statistical significance (below 0.01). On our experimental dataset (DATASET I), there was no significant difference between the proposed model and ShallowConvNet for the ME task; the difference in classification performance was approximately 0.02. On DATASET II, most pairs exhibited a statistically significant difference, except ShallowConvNet vs. DeepConvNet and EEGNet vs. the proposed model for the ME task. In the MI task, the performance difference between the proposed model and ShallowConvNet was the smallest, at approximately 0.03.

IV. DISCUSSION
Because EEG patterns for actual movement are prominent, we anticipated that the methods using CNN architectures, including the existing methods and the HF-CNN, would be trained in a similar pattern for the ME task data.

In this paper, we proposed the hierarchical CNN architecture to reduce the workload of each individual CNN. Generally, changing the parameters of a network is one of the methods that can dramatically improve model performance; however, it has several limitations in constrained environments, such as low-quality data or a lack of data. Therefore, we adopted a hierarchical structure for classification instead of parameter optimization. Furthermore, during model training, CNN I and CNN II become specialized in classifying 'movement or not' and '90° or 180°', respectively. In this manner, the proposed model could serve as a novel approach to complex MI decoding from EEG signals. However, the performance of the HF-CNN depends strongly on CNN I because of the principle of hierarchy: when CNN I classifies improperly, the final result is a wrong prediction regardless of the CNN II prediction. Hence, to overcome this limitation, we will need to modify the architecture so that it can account for such prediction errors.

Recent BCI advances have adopted deep learning techniques, which have already yielded dramatically high performance in other research fields, such as computer vision and natural language processing [33]. In particular, a few studies have applied deep networks to various BCI paradigms using EEG signals, such as mental state detection [48]–[50], emotion recognition [51], [52], and intention decoding using steady-state visual evoked potentials [16], P300 [53], [54], and MI [32], [34]–[36], [42]. Several studies on MI decoding using deep learning approaches focused on enhancing decoding performance for basic multi-class problems (e.g., left hand, right hand, and foot) using a public dataset [35], [36]. In contrast, in this study, we focused on practical MI tasks with real-world applications in mind. Upper-limb movement decoding from EEG signals of a single arm has recently developed into one of the challenging issues in BCI [26], [28]. To the best of our knowledge, this is a novel study of complex forearm rotation classification using only EEG signals. Hence, this study could contribute to advances in decoding complex high-level tasks, including both ME and MI, based on deep learning approaches.

Furthermore, most deep learning architectures take considerable training time to achieve sufficient performance. The long calibration time is one of the critical problems in BCI, arising from the non-stationary characteristics of EEG [55], [56]. Therefore, it is difficult to pre-train a deep learning model, as EEG signals differ from day to day. Additionally, in real-time BCI scenarios, a long model training time could affect the subjects' mental and physical state due to inattention [57]. In particular, when patients use a BCI system, considerable side effects could arise from a long therapy phase. We designed the HF-CNN model so that it can be adopted in offline experiments as well as real-time scenarios. On average, the training time for each subject lasted approximately 20 s on a high-performance computer configured with an Intel i7 CPU, a TITAN XP GPU, 64 GB of RAM, and a 1-TB SSD. In this manner, the HF-CNN could contribute to a real-time BCI system that performs high-level tasks to support daily life and therapy. Therefore, we plan to evaluate the proposed model in real-time BCI scenarios, such as pouring water or opening a door using a robotic arm.

V. CONCLUSION AND FUTURE WORK
We presented forearm rotation decoding from EEG signals based on the proposed deep learning approach. The proposed HF-CNN can classify complex MI tasks robustly owing to its hierarchical flow. We verified the model using both our experimental dataset and the public dataset, and the results showed that the HF-CNN achieves prominent classification accuracy across both. Therefore, the HF-CNN model is considered a promising tool for MI classification and BCI applications. The model has the potential to perform high-level tasks, such as pouring water into a cup and opening a door, using an EEG-based robotic arm or prosthesis.

In future work, we will modify the HF-CNN to enable the adoption of real-time BCI scenarios by improving classification performance. Hence, we plan to apply more advanced machine learning algorithms, such as BCI adaptation methods and improved artifact rejection methods. Moreover, we will develop an EEG-based robotic arm system based on the proposed model and test the developed system with regard to its ability to support daily work for healthy individuals and to provide neuro-therapy for motor-disabled patients.

ACKNOWLEDGMENT
The authors thank Prof. C. Guan for the useful discussion on the data analysis and Mr. B.-W. Yu for the deep learning architecture design.

REFERENCES
[1] T. M. Vaughan, W. Heetderks, L. Trejo, W. Rymer, M. Weinrich, M. Moore, A. Kübler, B. Dobkin, N. Birbaumer, E. Donchin, E. Wolpaw, and J. Wolpaw, "Brain-computer interface technology: A review of the second international meeting," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 11, no. 2, pp. 94–109, 2003.
[2] R. Abiri, S. Borhani, E. W. Sellers, Y. Jiang, and X. Zhao, "A comprehensive review of EEG-based brain–computer interface paradigms," J. Neural Eng., vol. 16, no. 1, Feb. 2019, Art. no. 011001.
[3] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain–computer interfaces for communication and control," Clin. Neurophysiol., vol. 113, no. 6, pp. 767–791, 2002.
[4] Y. He, D. Eguren, J. M. Azorín, R. G. Grossman, T. P. Luu, and J. L. Contreras-Vidal, "Brain–machine interfaces for controlling lower-limb powered robotic systems," J. Neural Eng., vol. 15, no. 2, Apr. 2018, Art. no. 021004.
[5] N.-S. Kwak, K.-R. Müller, and S.-W. Lee, "A lower limb exoskeleton control system based on steady state visual evoked potentials," J. Neural Eng., vol. 12, no. 5, Oct. 2015, Art. no. 056009.
[6] K.-T. Kim, H.-I. Suk, and S.-W. Lee, "Commanding a brain-controlled wheelchair using steady-state somatosensory evoked potentials," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 26, no. 3, pp. 654–665, Mar. 2018.
[7] M.-H. Lee, J. Williamson, D.-O. Won, S. Fazli, and S.-W. Lee, "A high performance spelling system based on EEG-EOG signals with visual feedback," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 26, no. 7, pp. 1443–1459, Jul. 2018.
[8] T. Kaufmann and A. Kübler, "Beyond maximum speed—A novel two-stimulus paradigm for brain–computer interfaces based on event-related potentials (P300-BCI)," J. Neural Eng., vol. 11, no. 5, p. 056004, 2014.
[9] D.-O. Won, H.-J. Hwang, S. Dähne, K.-R. Müller, and S.-W. Lee, "Effect of higher frequency on the classification of steady-state visual evoked potentials," J. Neural Eng., vol. 13, no. 1, Feb. 2016, Art. no. 016014.
[10] C. I. Penaloza and S. Nishio, "BMI control of a third arm for multitasking," Sci. Robot., vol. 3, no. 20, Jul. 2018, Art. no. eaat1228.
[11] J. Meng, S. Zhang, A. Bekyo, J. Olsoe, B. Baxter, and B. He, "Noninvasive electroencephalogram based control of a robotic arm for reach and grasp tasks," Sci. Rep., vol. 6, no. 1, p. 38565, Dec. 2016.
[12] S. Crea, M. Nann, E. Trigili, F. Cordella, A. Baldoni, F. J. Badesa, J. M. Catalán, L. Zollo, N. Vitiello, N. G. Aracil, and S. R. Soekadar, "Feasibility and safety of shared EEG/EOG and vision-guided autonomous whole-arm exoskeleton control to perform activities of daily living," Sci. Rep., vol. 8, no. 1, p. 10823, Dec. 2018.
[13] X. Chen, B. Zhao, Y. Wang, S. Xu, and X. Gao, "Control of a 7-DOF robotic arm system with an SSVEP-based BCI," Int. J. Neural Syst., vol. 28, no. 8, Oct. 2018, Art. no. 1850018.
[14] J.-H. Jeong, N.-S. Kwak, C. Guan, and S.-W. Lee, "Decoding movement-related cortical potentials based on subject-dependent and section-wise spectral filtering," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 28, no. 3, pp. 687–698, Mar. 2020.
[15] D. Kuhner, L. D. J. Fiederer, J. Aldinger, F. Burget, M. Völker, R. T. Schirrmeister, C. Do, J. Boedecker, B. Nebel, T. Ball, and W. Burgard, "A service assistant combining autonomous robotics, flexible goal formulation, and deep-learning-based brain–computer interfacing," Robot. Auto. Syst., vol. 116, pp. 98–113, Jun. 2019.
[16] N.-S. Kwak, K.-R. Müller, and S.-W. Lee, "A convolutional neural network for steady state visual evoked potential classification under ambulatory environment," PLoS ONE, vol. 12, no. 2, 2017, Art. no. e0172578.
[17] M. Lee, C.-H. Park, C.-H. Im, J.-H. Kim, G.-H. Kwon, L. Kim, W. H. Chang, and Y.-H. Kim, "Motor imagery learning across a sequence of trials in stroke patients," Restorative Neurol. Neurosci., vol. 34, no. 4, pp. 635–645, Aug. 2016.
[18] L. F. Nicolas-Alonso and J. Gomez-Gil, "Brain computer interfaces, a review," Sensors, vol. 12, no. 2, pp. 1211–1279, 2012.
[19] R. Leeb, L. Tonin, M. Rohm, L. Desideri, T. Carlson, and J. D. R. Millan, "Towards independence: A BCI telepresence robot for people with severe motor disabilities," Proc. IEEE, vol. 103, no. 6, pp. 969–982, Jun. 2015.
[20] H.-I. Suk and S.-W. Lee, "A novel Bayesian framework for discriminative feature extraction in brain-computer interfaces," IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 2, pp. 286–299, Feb. 2013.
[21] T.-E. Kam, H.-I. Suk, and S.-W. Lee, "Non-homogeneous spatial filter optimization for ElectroEncephaloGram (EEG)-based motor imagery classification," Neurocomputing, vol. 108, pp. 58–68, May 2013.
[22] K. K. Ang and C. Guan, "EEG-based strategies to detect motor imagery for control and rehabilitation," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 25, no. 4, pp. 392–401, Apr. 2017.
[23] L. Yao, N. Mrachacz-Kersting, X. Sheng, X. Zhu, D. Farina, and N. Jiang, "A multi-class BCI based on somatosensory imagery," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 26, no. 8, pp. 1508–1515, Aug. 2018.
[24] X. Yong and C. Menon, "EEG classification of different imaginary movements within the same limb," PLoS ONE, vol. 10, no. 4, 2015, Art. no. e0121896.
[25] J.-H. Kim, F. Biessmann, and S.-W. Lee, "Decoding three-dimensional trajectory of executed and imagined arm movements from electroencephalogram signals," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 23, no. 5, pp. 867–876, Sep. 2015.
[26] A. Schwarz, P. Ofner, J. Pereira, A. I. Sburlea, and G. R. Müller-Putz, "Decoding natural reach-and-grasp actions from human EEG," J. Neural Eng., vol. 15, no. 1, Feb. 2018, Art. no. 016005.
[27] F. Galán, M. R. Baker, K. Alter, and S. N. Baker, "Degraded EEG decoding of wrist movements in absence of kinaesthetic feedback," Hum. Brain Mapping, vol. 36, no. 2, pp. 643–654, Feb. 2015.
[28] F. Shiman, E. López-Larraz, A. Sarasola-Sanz, N. Irastorza-Landa, M. Spüler, N. Birbaumer, and A. Ramos-Murguialday, "Classification of different reaching movements from the same limb using EEG," J. Neural Eng., vol. 14, no. 4, Aug. 2017, Art. no. 046018.
[29] B. J. Edelman, B. Baxter, and B. He, "EEG source imaging enhances the decoding of complex right-hand motor imagery tasks," IEEE Trans. Biomed. Eng., vol. 63, no. 1, pp. 4–14, Jan. 2016.
[30] X. Li, O. W. Samuel, X. Zhang, H. Wang, P. Fang, and G. Li, "A motion-classification strategy based on sEMG-EEG signal combination for upper-limb amputees," J. NeuroEng. Rehabil., vol. 14, no. 1, Dec. 2017.
[31] A. Úbeda, J. M. Azorín, R. Chavarriaga, and J. D. R. Millán, "Classification of upper limb center-out reaching tasks by means of EEG-based continuous decoding techniques," J. NeuroEng. Rehabil., vol. 14, no. 1, pp. 1–14, Dec. 2017.
[32] Y. R. Tabar and U. Halici, "A novel deep learning approach for classification of EEG motor imagery signals," J. Neural Eng., vol. 14, no. 1, Feb. 2017, Art. no. 016003.
[33] A. Craik, Y. He, and J. L. Contreras-Vidal, "Deep learning for electroencephalogram (EEG) classification tasks: A review," J. Neural Eng., vol. 16, no. 3, Jun. 2019, Art. no. 031001.
[34] N. Lu, T. Li, X. Ren, and H. Miao, "A deep learning scheme for motor imagery classification based on restricted Boltzmann machines," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 25, no. 6, pp. 566–576, Jun. 2017.
[35] Z. Zhang, F. Duan, J. Sole-Casals, J. Dinares-Ferran, A. Cichocki, Z. Yang, and Z. Sun, "A novel deep learning approach with data augmentation to classify motor imagery signals," IEEE Access, vol. 7, pp. 15945–15954, 2019.
[36] P. Wang, A. Jiang, X. Liu, J. Shang, and L. Zhang, "LSTM-based EEG classification in motor imagery tasks," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 26, no. 11, pp. 2086–2095, Nov. 2018.
[37] J.-H. Jeong, K.-H. Shim, D.-J. Kim, and S.-W. Lee, "Brain-controlled robotic arm system based on multi-directional CNN-BiLSTM network using EEG signals," IEEE Trans. Neural Syst. Rehabil. Eng., early access, Mar. 18, 2020, doi: 10.1109/TNSRE.2020.2981659.
[38] S. Sakhavi, C. Guan, and S. Yan, "Learning temporal information for brain-computer interface using convolutional neural networks," IEEE Trans. Neural Netw. Learn. Syst., vol. 29, no. 11, pp. 5619–5629, Nov. 2018.
[39] G. Xu, X. Shen, S. Chen, Y. Zong, C. Zhang, H. Yue, M. Liu, F. Chen, and W. Che, "A deep transfer convolutional neural network framework for EEG signal classification," IEEE Access, vol. 7, pp. 112767–112776, 2019.
[40] M. Alhussein, G. Muhammad, and M. S. Hossain, "EEG pathology detection based on deep learning," IEEE Access, vol. 7, pp. 27781–27788, 2019.
[41] K. K. Ang, Z. Y. Chin, H. Zhang, and C. Guan, "Filter bank common spatial pattern (FBCSP) in brain-computer interface," in Proc. IEEE Int. Joint Conf. Neural Netw. (IEEE World Congr. Comput. Intelligence), Jun. 2008, pp. 2390–2397.
[42] R. T. Schirrmeister, J. T. Springenberg, L. D. J. Fiederer, M. Glasstetter, K. Eggensperger, M. Tangermann, F. Hutter, W. Burgard, and T. Ball, "Deep learning with convolutional neural networks for EEG decoding and visualization," Hum. Brain Mapping, vol. 38, no. 11, pp. 5391–5420, Nov. 2017.
[43] V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance, "EEGNet: A compact convolutional neural network for EEG-based brain–computer interfaces," J. Neural Eng., vol. 15, no. 5, Oct. 2018, Art. no. 056013.
[44] S. Panchapagesan, M. Sun, A. Khare, S. Matsoukas, A. Mandal, B. Hoffmeister, and S. Vitaladevuni, "Multi-task learning and weighted cross-entropy for DNN-based keyword spotting," in Proc. Interspeech, vol. 9, 2016, pp. 760–764.
[45] Z. Zhang and M. Sabuncu, "Generalized cross entropy loss for training deep neural networks with noisy labels," in Proc. Adv. Neural Inf. Process. Syst. (NIPS), 2018, pp. 8778–8788.
[46] K. K. Ang, Z. Y. Chin, C. Wang, C. Guan, and H. Zhang, "Filter bank common spatial pattern algorithm on BCI competition IV datasets 2A and 2B," Frontiers Neurosci., vol. 6, p. 39, 2012.
[47] P. Ofner, A. Schwarz, J. Pereira, and G. R. Müller-Putz, "Upper limb movements can be decoded from the time-domain of low-frequency EEG," PLoS ONE, vol. 12, no. 8, 2017, Art. no. e0182578.
[48] J.-H. Jeong, B.-W. Yu, D.-H. Lee, and S.-W. Lee, "Classification of drowsiness levels based on a deep spatio-temporal convolutional bidirectional LSTM network using electroencephalography signals," Brain Sci., vol. 9, no. 12, p. 348, 2019.
[49] Z. Jiao, X. Gao, Y. Wang, J. Li, and H. Xu, "Deep convolutional neural networks for mental load classification based on EEG data," Pattern Recognit., vol. 76, pp. 582–595, Apr. 2018.
[50] P. Zhang, X. Wang, W. Zhang, and J. Chen, "Learning spatial–spectral–temporal EEG features with recurrent 3D convolutional neural networks for cross-task mental workload assessment," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 27, no. 1, pp. 31–42, Jan. 2019.
[51] S. Alhagry, A. A. Fahmy, and R. A. El-Khoribi, "Emotion recognition based on EEG using LSTM recurrent neural network," Emotion, vol. 8, no. 10, pp. 355–358, 2017.
[52] E. S. Salama, R. A. El-Khoribi, M. E. Shoman, and M. A. Wahby, "EEG-based emotion recognition using 3D convolutional neural networks," Int. J. Adv. Comput. Sci. Appl., vol. 9, no. 8, pp. 329–337, 2018.
[53] J. Li, Z. L. Yu, Z. Gu, W. Wu, Y. Li, and L. Jin, "A hybrid network for ERP detection and analysis based on restricted Boltzmann machine," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 26, no. 3, pp. 563–572, Mar. 2018.
[54] A. Ditthapron, N. Banluesombatkul, S. Ketrat, E. Chuangsuwanich, and T. Wilaiprasitporn, "Universal joint feature extraction for P300 EEG classification using multi-task autoencoder," IEEE Access, vol. 7, pp. 68415–68428, 2019.
[55] H.-I. Suk and S.-W. Lee, "Subject and class specific frequency bands selection for multiclass motor imagery classification," Int. J. Imag. Syst. Technol., vol. 21, no. 2, pp. 123–130, Jun. 2011.
[56] O.-Y. Kwon, M.-H. Lee, C. Guan, and S.-W. Lee, "Subject-independent brain-computer interfaces based on deep convolutional neural networks," IEEE Trans. Neural Netw. Learn. Syst., early access, Nov. 13, 2019, doi: 10.1109/TNNLS.2019.2946869.
[57] A. Singh, S. Lal, and H. Guesgen, "Reduce calibration time in motor imagery using spatially regularized symmetric positives-definite matrices based classification," Sensors, vol. 19, no. 2, p. 379, 2019.

DAE-HYEOK LEE received the B.S. degree in bioengineering from UNIST, South Korea, in 2018. He is currently pursuing the master's degree with the Department of Brain and Cognitive Engineering, Korea University, South Korea. His research interests include signal processing, deep learning, and brain–computer interfaces.

YONG-DEOK YUN received the B.S. degree in mechanical and information engineering from the University of Seoul, South Korea, in 2017, and the M.S. degree in brain and cognitive engineering from Korea University, South Korea, in 2019. His research interests include machine learning and brain–machine interfaces.