MRCPs-and-ERS/D-Oscillations-Driven Deep
Learning Models for Decoding Unimanual and
Bimanual Movements
Jiarong Wang, Luzheng Bi, Senior Member, IEEE, Aberham Genetu Feleke, and Weijie Fei
Abstract— Motor brain-computer interfaces (BCIs) intend to restore or compensate for central nervous system functionality. In the motor-BCI, motor execution (ME), which relies on patients' residual or intact movement functions, is a more intuitive and natural paradigm. Based on the ME paradigm, we can decode voluntary hand movement intentions from electroencephalography (EEG) signals. Numerous studies have investigated EEG-based unimanual movement decoding. Moreover, some studies have explored bimanual movement decoding, since bimanual coordination is important in daily-life assistance and bilateral neurorehabilitation therapy. However, the multi-class classification of unimanual and bimanual movements shows weak performance. To address this problem, in this work, we propose a neurophysiological signatures-driven deep learning model utilizing the movement-related cortical potentials (MRCPs) and event-related synchronization/desynchronization (ERS/D) oscillations for the first time, inspired by the finding that brain signals encode motor-related information with both evoked potentials and oscillation components in ME. The proposed model consists of a feature representation module, an attention-based channel-weighting module, and a shallow convolutional neural network module. Results show that our proposed model has superior performance to the baseline methods: the six-class classification accuracy of unimanual and bimanual movements reached 80.3%, and each feature module of our model contributes to the performance. This work is the first to fuse the MRCPs and ERS/D oscillations of ME in deep learning to enhance the multi-class decoding performance of unimanual and bimanual movements, and it can facilitate the neural decoding of unimanual and bimanual movements for neurorehabilitation and assistance.

Index Terms— Brain-computer interface, EEG, movement decoding, motor execution.

Manuscript received 1 January 2023; revised 11 February 2023; accepted 13 February 2023. Date of publication 15 February 2023; date of current version 24 February 2023. This work was supported in part by the Basic Research Plan under Grant JCKY2022602C024 and in part by the National Natural Science Foundation of China under Grant 51975052. (Corresponding author: Weijie Fei.)

This work involved human subjects in its research. Approval of all ethical and experimental procedures and protocols was granted by the ethics committee of Beijing Institute of Technology, and performed in line with the 2013 Declaration of Helsinki.

The authors are with the School of Mechanical Engineering, Beijing Institute of Technology, Beijing 100081, China (e-mail: [email protected]; [email protected]; [email protected]; [email protected]).

Digital Object Identifier 10.1109/TNSRE.2023.3245617

I. INTRODUCTION

BRAIN-COMPUTER interface (BCI) has been attracting attention because it can translate brain signals directly and thereby build a communication pathway between brains and peripheral devices [1]. Brain signals can be recorded in invasive or noninvasive ways, and electroencephalography (EEG) is a primary noninvasive recording method. Due to the advantages of EEG, such as low cost, portability, and minimal trauma, EEG-based BCIs have broad applications and are an important branch of BCIs.

Typical EEG-based BCIs cover visual, auditory, and motor BCIs. Compared with visual and auditory BCIs, which rely on responses passively evoked by external stimuli, motor BCIs can reflect voluntary movement intentions, and thus they are more natural and intuitive [2], [3]. Motor BCIs aim at the restoration or compensation of central nervous system functionality. The applications of motor-BCIs include neurorehabilitation [4], [5] and daily-life assistance [6] for patients with motor function loss. Patients with motor function impairment can only complete unbalanced and distorted movements, or can even only attempt to move without overt behavior. In this case, decoding movement intentions from EEG signals is still feasible, since previous studies show that attempted and executed movements share similar brain activation patterns [7]. For example, combining motor-BCIs with functional electrical stimulation could help patients move their impaired limbs actively according to their movement intentions, further facilitating neural plasticity and helping rebuild the neuromuscular circuit [8], [9].

Commonly used paradigms of motor-BCIs are the motor imagery (MI) and motor execution (ME) paradigms. MI can be seen as the mental rehearsal of a specific action without any overt motor output, relying on the repetitive imagination of motor patterns [10], [11]. It has been shown that the MI of motor actions can produce replicable brain activation patterns over the supplementary motor area (SMA) and primary motor area (M1) [12]. Its corresponding neural modulations are based on the sensorimotor rhythms (SMRs), with event-related, frequency band-specific power decreases and increases known as event-related desynchronization (ERD) and event-related synchronization (ERS) [13]. Despite its value in assisting and restoring impaired motor functions for patients, the MI paradigm is limited by the small number of decodable commands, low decoding accuracy, and high mental load. Besides, MI tasks usually require imagined movements of different body parts, e.g., the right hand, left hand, feet, and tongue, and thus, in some cases, the MI tasks are inconsistent with the actual output commands. Compared with the MI paradigm, the ME paradigm is more natural
because the goal-directed movement tasks closely match the participants' natural behaviors. Besides, the ME paradigm is advantageous for its more pronounced brain activities in both the temporal and spectral domains, leading to better decoding performance, especially for multiple kinds of movements. Different from other BCI paradigms (including MI, P300, and SSVEP), the neural activities in the ME paradigm contain both event-related potentials and oscillatory components [14]. The evoked potentials can be captured from a low-frequency band of EEG signals, known as the movement-related cortical potential (MRCP), which is induced during the planning, preparation, and execution of actual movements [15]. The oscillatory components in ME share similar patterns with MI and show an early ERD in the mu rhythm starting before movement onset and a late ERS at 20-30 Hz after movement execution [14]. Both the MRCPs and ERS/D oscillations reflect sensorimotor cortical processes and carry complementary motor-related information.

Many studies have explored how to use MRCPs or ERDs to decode hand movement intentions based on the ME paradigm. The decoding covers movement onset detection [16], the classification of movement directions [17], torque levels [18], speed [19], and movement types [3], and continuous movement reconstruction [20]. These studies show the feasibility of decoding upper limb or hand movement intentions from EEG signals. However, most existing studies on hand movement decoding based on the ME paradigm are limited to unimanual movement decoding, requiring the non-dominant hand to be kept still during the experiment. The reason for this setting is to reduce the interference of the other hand's movement with the decoding of the primary hand movement.

However, in daily life, coordinating both hands to perform one task is common, and in neurorehabilitation therapy, bilateral training can facilitate rehabilitation after stroke [21]. Thus, decoding bimanual movements is valuable. However, it faces at least two problems. One is that brain activity is more complex during the execution of bimanual movements than during unimanual movements. The other is that there are more bimanual movement patterns than unimanual ones. These two problems pose a severe challenge to the decoding performance of bimanual movements. Thus, bimanual movement decoding needs to be addressed.

Several studies [17], [22], [23] have taken the first step toward decoding unimanual and bimanual movements. In [22], Schwarz et al. discriminated 7-class reach-and-grasp movements using EEG time-domain features, and the averaged classification accuracy reached 38.6% for a combination of six movements and one rest condition. In [17], Wang et al. decoded single-hand and both-hand movement directions using the center-out paradigm, the temporal features of EEG signals, and a support vector machine classifier, and the six-class classification accuracy peaked at 69.02% using the time window of [0.3, 1.3] s relative to movement onset. In [23], Zhang et al. decoded bimanual coordinated movements by using temporal features and a combined deep learning model of a convolutional neural network and a bidirectional long short-term memory network. The classification accuracy of three bimanual coordinated directions peaked at 70.1% using a window of [0, 1] s relative to movement onset.

To sum up, the bimanual movement classification accuracy, especially the multi-class classification accuracy, needs further improvement to meet practical application requirements. Inspired by the finding that executed motor-related information is encoded by both evoked potentials and oscillation components, in this work, we design neurophysiological signatures-driven deep learning models to improve the decoding performance of multi-class unimanual and bimanual movements for the first time. Specifically, our proposed model contains three modules. First, the MRCPs and ERS/D oscillations of unimanual and bimanual movements are represented using the discrete wavelet transform (DWT) and continuous wavelet transform (CWT), respectively; afterwards, the temporal and spectral features are extracted according to the neural signatures of movements and selected by mutual information. Second, a spatial channel-weighting method is proposed based on the attention mechanism. Third, a shallow convolutional neural network (CNN) structure is applied to the extracted temporal-spectral-spatial feature maps and decodes the executed unimanual and bimanual movements.

The contribution of this work is that it is the first to propose neurophysiological signatures-driven deep learning models for decoding executed movements from EEG signals, and also the first to utilize the MRCPs and ERS/D oscillations in deep learning to decode unimanual and bimanual movements.

II. METHOD AND MATERIALS

In this section, we describe our proposed deep learning model for decoding executed unimanual and bimanual movements, named ME-Net. The model includes the feature representation module, as described in Section II-A, the attention mechanism-based channel-weighting module, as described in Section II-B, and a shallow CNN module, which inputs the temporal-spectral-spatial feature maps and outputs the 6-class classification results, as described in Section II-C. Afterwards, the experimental protocol and the model settings are given. Fig. 1 shows the network structure of our proposed model.

Fig. 1. Structure of the proposed ME-Net framework for the executed movements' decoding.
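As a rough illustration of how these modules can be wired together, the following PyTorch sketch pairs a squeeze-and-excitation-style channel-weighting block with a shallow CNN classifier. The layer sizes, kernel shapes, and the assumed input layout (feature maps × electrodes × time) are illustrative assumptions, not the exact ME-Net configuration.

```python
# Minimal sketch of an ME-Net-like pipeline (assumed shapes and layers):
# attention-based electrode weighting followed by a shallow CNN over
# temporal-spectral-spatial feature maps.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation-style weighting over EEG electrodes (assumed form)."""
    def __init__(self, n_channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_channels, n_channels // reduction), nn.ReLU(),
            nn.Linear(n_channels // reduction, n_channels), nn.Sigmoid(),
        )

    def forward(self, x):            # x: (batch, feature_maps, channels, time)
        s = x.mean(dim=(1, 3))       # squeeze over feature maps and time
        w = self.fc(s)               # per-electrode weights in (0, 1)
        return x * w[:, None, :, None]

class MENetSketch(nn.Module):
    def __init__(self, n_feature_maps=2, n_channels=24, n_classes=6):
        super().__init__()
        self.attn = ChannelAttention(n_channels)
        self.cnn = nn.Sequential(    # "shallow" CNN: one temporal, one spatial conv
            nn.Conv2d(n_feature_maps, 16, kernel_size=(1, 25), padding=(0, 12)),
            nn.BatchNorm2d(16), nn.ELU(),
            nn.Conv2d(16, 32, kernel_size=(n_channels, 1)),
            nn.BatchNorm2d(32), nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 8)), nn.Flatten(),
        )
        self.head = nn.Linear(32 * 8, n_classes)

    def forward(self, x):            # x: concatenated MRCP + ERS/D feature maps
        return self.head(self.cnn(self.attn(x)))

model = MENetSketch()
logits = model(torch.randn(16, 2, 24, 1000))   # dummy batch; length T is arbitrary
```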
A. MRCPs and ERS/D Oscillations Representations

Let us denote the raw input EEG signals as $X = \{x^{(i)}\}_{i=1}^{n} \in \mathbb{R}^{C \times T}$, where $i$ indexes the $i$th trial, $n$ is the total number of trials, $C$ is the number of electrodes, and $T$ is the number of time sampling points. The corresponding class labels are denoted as $Y = \{y^{(i)}\}_{i=1}^{n} \in \mathbb{R}^{6}$, representing the six-class unimanual and bimanual movement combinations.

To obtain the MRCP features, we first decomposed the raw EEG signals $X$ into different frequency bands using the discrete wavelet transform (DWT) [24], [25], and then we applied mutual information (M-Info) to select one optimal frequency band. DWT is implemented as a filter bank based on Mallat's algorithm, and it can decompose signals using a cascade of high-pass and low-pass filters. Wavelet-based methods can characterize signals well in both the temporal and spectral domains. In DWT, we first calculated the inner product of the original EEG signals with the basis wavelet function at discrete points, then found the bases in the specific band to be analyzed, and finally reconstructed and sum-weighted a series of signal bases to obtain the filtered signals. The continuous wavelet transform (CWT) is defined as

$$W_{a,b} = \int_{-\infty}^{+\infty} f(t)\,\frac{1}{\sqrt{|a|}}\,\Psi^{*}\!\left(\frac{t-b}{a}\right) dt, \quad (1)$$

where $f(t) = x_c^{(i)}(t)$, corresponding to the EEG signal at time $t$ and electrode $c$ in trial $i$; $a$ and $b$ are the scaling and translation parameters of the wavelet basis function, respectively; $\Psi$ is the wavelet function; and $*$ denotes complex conjugation, with $a, b \in \mathbb{R}$, $a \neq 0$. By taking the scale and translation parameters $a$ and $b$ at discrete values, the DWT is performed. In this study, we used dyadic scales and translations:

$$a_j = 2^{j}, \quad b_{j,k} = k\,2^{j}, \quad k, j \in \mathbb{Z}. \quad (2)$$

In this case, $\Psi_{j,k}(t) = 2^{-j/2}\,\Psi(2^{-j}t - k)$. The set of $\Psi_{j,k}(t)$ lies in the square-integrable space $L^2(\mathbb{R})$. By applying multiresolution decomposition, we can decompose $L^2(\mathbb{R})$ into multiple subspaces $W_j$, each of which can be obtained by the dilation and translation of a single basis function $\Psi_{j,k}(t)$. Then, we can find a sequence of closed subspaces $V_j$ in $L^2(\mathbb{R})$. The subspace $V_j$ contains all the signals included in $V_{j+1}$ plus additional high-resolution signals:

$$V_j = V_{j+1} \oplus W_{j+1}, \quad (3)$$

where $\oplus$ indicates that $V_{j+1}$ and $W_{j+1}$ are both part of the subspace $V_j$ and are orthogonal to each other. By combining all resolution signals in the subspaces $V_j$, we can obtain the original signal $f(t)$ as

$$f(t) = \sum_{k=-\infty}^{+\infty} c_k\,\phi(t-k) + \sum_{k=-\infty}^{+\infty}\sum_{j=0}^{L} d_{j,k}\,\Psi_{j,k}(t), \quad (4)$$

where $\phi(\cdot)$ is the scaling function, $c_k$ is the scaling coefficient, $d_{j,k}$ is the wavelet coefficient, and $L$ is the total number of decomposition levels. The wavelet function $\Psi_{j,k}(t)$ corresponds to the high-pass filter and maintains the signal details, while the scaling function $\phi(\cdot)$ corresponds to the low-pass filter and maintains the signal approximations. By convolving the original signals with the high-pass and low-pass filters, we obtain the detail and approximation coefficients, and the filtered signals in specific frequency bands can be reconstructed from these coefficients. In this study, we adopt the wavelet family "sym5" and set the maximum decomposition level to 6.
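A minimal sketch of this band-wise decomposition and reconstruction, assuming the PyWavelets library; the mapping from coefficient levels to physical frequency bands depends on the sampling rate and is therefore only indicative here.

```python
# Sketch of the DWT band filtering described above ("sym5", 6 levels),
# assuming PyWavelets; the level-to-band mapping depends on the sampling rate.
import numpy as np
import pywt

def dwt_band_reconstruct(signal, keep_levels, wavelet="sym5", level=6):
    """Reconstruct `signal` keeping only the DWT coefficient arrays whose
    index is in `keep_levels` (index 0 = deepest approximation, i.e., the
    lowest band used for the MRCP representation; 1..level = details,
    from coarse to fine).
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    kept = [c if i in keep_levels else np.zeros_like(c)
            for i, c in enumerate(coeffs)]
    return pywt.waverec(kept, wavelet)[: len(signal)]

# Example: low-frequency reconstruction of one single-channel epoch.
epoch = np.random.randn(1000)                  # placeholder trial at one electrode
mrcp_band = dwt_band_reconstruct(epoch, keep_levels={0})

# For the ERS/D time-frequency maps, pywt.cwt(epoch, scales, "morl") provides
# a CWT counterpart to Eq. (1); the choice of mother wavelet is an assumption.
```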
After filtering the EEG signals into different frequency bands, we applied the M-Info method to select one optimal frequency band on the training dataset [26]. From an information-theoretic point of view, M-Info can measure the statistical dependence between the EEG samples $X_{fb}$ and the class labels $Y$, and it can be defined as

$$I(X_{fb}, Y) = \int_{Y}\int_{X_{fb}} p(x_{fb}, y)\,\log\frac{p(x_{fb}, y)}{p(x_{fb})\,p(y)}\,dx\,dy, \quad (5)$$

where $X_{fb}$ denotes the feature vector extracted from the bandpass filtering of $X$ with the $fb$th frequency band, $x_{fb} \subset X_{fb}$, $y \subset Y$, $p(x_{fb}, y)$ is the joint probability density of the continuous random variables $X_{fb}$ and $Y$, and $p(x_{fb})$ and $p(y)$ are the marginal probability densities, respectively, i.e., $p(x_{fb}) = P\{X_{fb} = x_{fb}\}$, $p(y) = P\{Y = y\}$, and $p(x_{fb}, y) = P\{X_{fb} = x_{fb}, Y = y\}$. The Shannon entropies $H(X_{fb})$ and $H(Y)$ measure the information obtained from the variables $X_{fb}$ and $Y$ as follows:

$$H(X_{fb}) = -\sum_{i=1}^{n} p\big(x_{fb}^{(i)}\big)\log p\big(x_{fb}^{(i)}\big), \quad (6)$$

$$H(Y) = -\sum_{y=1}^{6} p(y)\log p(y), \quad (7)$$

where $x_{fb}^{(i)}$ denotes the feature value of the $i$th trial from $X_{fb}$. The joint entropy is defined as

$$H(X_{fb}, Y) = -\sum_{i=1}^{n}\sum_{y=1}^{6} p\big(x_{fb}^{(i)}, y\big)\log p\big(x_{fb}^{(i)}, y\big). \quad (8)$$

The conditional entropy of $X_{fb}$ given $Y$ is defined as

$$H(X_{fb}\mid Y) = -\sum_{i=1}^{n}\sum_{y=1}^{6} p\big(x_{fb}^{(i)}, y\big)\log p\big(x_{fb}^{(i)}\mid y\big). \quad (9)$$

Note that $H(X_{fb}, Y) = H(X_{fb}\mid Y) + H(Y)$. The entropy $H(Y)$ measures the amount of uncertainty about $Y$, while $H(X_{fb}\mid Y)$ measures the amount of uncertainty left in $X_{fb}$ given $Y$. Therefore, the mutual information $I(X_{fb}, Y)$ in Eq. (5) can be written as

$$I(X_{fb}, Y) = H(X_{fb}) - H(X_{fb}\mid Y) = H(Y) - H(Y\mid X_{fb}) = H(X_{fb}) + H(Y) - H(X_{fb}, Y). \quad (10)$$

In our work, we calculated the sum of the mutual information over all variables in each frequency band, and maintained the frequency band with the largest summed mutual information as the optimal band.
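In practice, the band selection of Eqs. (5)-(10) can be approximated with a standard estimator. The sketch below uses scikit-learn's mutual_info_classif; the band-filtering callable is a hypothetical stand-in for the wavelet filtering step above.

```python
# Sketch of M-Info-based frequency band selection (Eqs. (5)-(10)) on the
# training set, approximated with scikit-learn's k-NN-based estimator.
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def select_band(X, y, bands, band_filter):
    """Pick the band whose features carry the largest summed mutual
    information about the labels.

    X: trials x channels x time (training data only); y: labels in {0..5};
    bands: candidate (low, high) Hz tuples; band_filter: callable implementing
    the wavelet filtering for one band (hypothetical helper).
    """
    scores = []
    for band in bands:
        feats = band_filter(X, band).reshape(len(X), -1)  # trials x features
        scores.append(mutual_info_classif(feats, y).sum())
    return bands[int(np.argmax(scores))]
```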
Fig. 3. Illustration of the experimental protocol.

The EEG experiment was conducted in a quiet room. The subjects were seated in a comfortable chair with both hands resting on a table. The distance between the computer screen and the participants was about 0.5 m. There were 5 sessions in the experiment, and each session contained 6 blocks corresponding to the six kinds of movements: the right hand moving in the left or right direction while the left hand moved in the up or down direction or kept still. Thus, there were six movement combinations in total, including two unimanual movements and four bimanual movements. This setup was chosen because the subjects were right-handed and, in practice, moving the right hand while keeping the left hand still, or using the left hand in a coordinating role, is more common. Each block contained 16 trials; thus, 80 trials were collected for each movement task. The sessions were performed in random order for order balance. Each trial consisted of a 4-s preparation period, a 3-s executed movement period, and a 4-s relaxing period. More experimental details can be found in [17].

EEG signals were recorded from 24 scalp electrodes at the positions Fz, F3, F4, FCz, FC3, FC4, Cz, C1, C2, C3, C4, T7, T8, CP3, CP4, Pz, P3, P4, P7, P8, POz, Oz, O1, and O2. The reference and ground electrodes were placed at CPz and AFz, respectively. Electrooculogram (EOG) signals were collected by two additional patch electrodes at the outer canthi of the eyes. Electrode impedances were calibrated to be below 5 kΩ. The EEG and EOG signals were digitized at the

E. Model Setting

In this study, we explored discriminating executed unimanual and bimanual movements, a 6-class classification problem. Our proposed ME-Net adopts the Adaptive Moment Estimation (Adam) optimizer with a learning rate of 0.0007, and uses the cross-entropy loss, which is based on the Softmax function and the negative log-likelihood loss (NLLLoss), as the optimization objective. The exponential decay rates of the Adam optimizer for the 1st- and 2nd-moment estimates were set to 0.5 and 0.999, respectively. We adopted a batch size of 16 for model learning. The CNN layers were initialized using Xavier uniform initialization. The models were implemented with the PyTorch framework in Python and trained on an NVIDIA Quadro P4000 GPU.

To evaluate the performance of our proposed ME-Net model, we adopted an 8-fold cross-validation strategy in a subject-specific manner. The EEG data were split into 8 folds randomly, with six folds as training data, one fold as validation data, and one fold as testing data. The model was trained and validated until each fold was exhausted; it was selected on the validation data and finally tested on the test data. The final decoding accuracy was averaged across all participants.
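The stated settings translate almost directly into PyTorch; a sketch of the training configuration, with the model and data loader as placeholders:

```python
# Training configuration from Section II-E (Adam, lr 0.0007, betas (0.5, 0.999),
# cross-entropy loss, batch size 16, Xavier-uniform CNN initialization);
# the model and loader are placeholders, not the released implementation.
import torch
import torch.nn as nn

model = MENetSketch()                          # stand-in for the actual ME-Net
for m in model.modules():
    if isinstance(m, nn.Conv2d):               # Xavier uniform init of CNN layers
        nn.init.xavier_uniform_(m.weight)

optimizer = torch.optim.Adam(model.parameters(), lr=7e-4, betas=(0.5, 0.999))
criterion = nn.CrossEntropyLoss()              # Softmax + NLLLoss combined

def train_one_epoch(loader):                   # loader yields batches of 16 trials
    model.train()
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```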
III. RESULTS

A. MRCPs Representation

To observe the signal modalities over different rhythms, we decomposed and reconstructed a one-epoch signal using DWT. The raw signal was selected randomly from the Cz electrode. The filtered frequency bands were in the ranges [0, 4], [4, 8], [8, 13], [13, 30], and [30, 40] Hz. The filtered signals can be seen in the left column of Fig. 4. The reconstructed signal in the [0, 4] Hz frequency band showed a waveform similar to the MRCP and displayed an obvious negative shift around the movement onset. This result is in accord with previous studies [3], [6], [17], in which the low-frequency EEG signals encoded more motor-related information. Furthermore, we planned to select one larger frequency band
Fig. 5. The average MRCP plots of unimanual and bimanual movements across frontal-central electrodes.
TABLE II
Decoding Results Comparisons Using Different Combinations of Features
In our model, we adopted the M-Info method to automatically select the frequency bands of the MRCPs and ERS/D oscillations. The candidate band set for the MRCPs was A = {[0, 4], [0, 8], [0, 13], [0, 30], [0, 40]} Hz, according to the feature representations in Fig. 4, and the candidate band set for the ERS/D oscillations was B = {[0, 8], [8, 40], [8, 25], [25, 40], [0, 40]} Hz, according to the feature representations in Fig. 6. To validate the effectiveness of the frequency band selection in our model, we compared with the results obtained without frequency band selection. Specifically, we fixed one frequency band of the MRCPs or ERS/D oscillations feature from the frequency band sets and calculated the corresponding classification accuracies. Fig. 7 shows the comparison results of the different frequency bands; the results were calculated from the time window of [−0.5, 0.5] s. As shown in Fig. 7 (a) and (b), better performance was obtained when applying M-Info to the frequency band selection than when selecting a fixed frequency band, for both the MRCPs and the ERS/D oscillation features.

E. Unimanual vs. Bimanual Movements Decoding

In our work, we decoded the unimanual and bimanual movements with six-class classification, including two-class unimanual movements and four-class bimanual movements.
TABLE III
Decoding Performance Comparisons With Contrastive Models

Fig. 8. (a) Decoding performance comparisons of unimanual, bimanual, and combined unimanual and bimanual movements classification. (b), (c), and (d) are the confusion matrix plots of unimanual and bimanual movements, bimanual movements, and unimanual movements, respectively.
For the feature representation, we represented the neural activities of the MRCPs and ERS/D oscillations during the unimanual and bimanual movements. Besides, we compared the proposed model's performance under different feature combinations, frequency band selections, sliding time windows, and movement conditions, and also compared our model with state-of-the-art methods. The results validated the effectiveness of our proposed model.

For the neural representations, we first plotted the MRCPs of the unimanual and bimanual movements, as shown in Fig. 5. Larger negative offsets were observed for the bimanual movements at the electrodes over the centerline and the right hemisphere. This result can be attributed to the increased neural activity of bimanual movements and is in accordance with previous reports in [22]. For the electrodes over the left hemisphere, similar offsets were observed for both the bimanual and unimanual movements. It should be noted that the unimanual movement in this study was a right-hand movement; due to the contralateral brain activation pattern of hand movement, there was no significant difference between the neural activities of the bimanual movements and the right-hand movement over the left hemisphere. Besides, comparing the MRCPs of the electrodes in the frontal area with those in the central area, a positive offset around 500 ms before movement onset was more obvious in the frontal area. This could be due to the visual stimulation and the cognitive processing of the movement cue, as the frontal area is more related to cognitive brain function [31]. Besides the MRCPs, we represented the time-frequency oscillations, as shown in Fig. 6. Previous studies reported that, for executed movements, there are not only evoked potentials but also an early ERD activity in the 10-12 Hz frequency band before movement onset and a late ERS activity at 20-30 Hz after movement onset [13]. In Fig. 6, similar oscillation activities were observed, with the ERD oscillations around the mu rhythm starting before movement onset and the ERS oscillations around the β and low γ rhythms after movement onset. Comparing the time-frequency results of the unimanual and bimanual movements, a larger power increase in the low-frequency band was observed for the bimanual movements. This phenomenon is due to the more active brain activity related to motor function [22]. For the ERD oscillations, in contrast, more power increase was found during the unimanual movement. Though counterintuitive, this result is in accordance with the findings in [32] and [33], which reported that the ERD oscillations are more related to the attention state, and the single-target task of unimanual movement could be more attention-demanding. The ERS oscillations are related to the neural signal after movement execution, and a larger rebound was observed for the bimanual movements.

Based on the neural signatures of the MRCPs and ERS/D oscillations during ME, we designed a neural signatures-driven deep learning model to fuse the MRCPs and ERS/D oscillation features for unimanual and bimanual movements' decoding. Specifically, we divided the frequency bands of the signals according to the neural signatures and applied the M-Info method to select the optimal frequency bands of the MRCPs and ERS/D oscillation features. Then, the extracted MRCPs and ERS/D oscillation features were concatenated to form the temporal-spectral features. Afterwards, spatial channel-weighting scores were obtained based on the attention mechanism, and thus the temporal-spectral-spatial features were extracted. Finally, a shallow CNN, which is more suitable for mapping the features of ME, was established to capture the features and output the classification results. To evaluate the effectiveness of our proposed model, we first compared the model's performance under different feature combinations. The results in Table II showed that no matter which feature part of the decoding model was removed, the decoding accuracy decreased significantly. This result demonstrates that the temporal, spectral, and spatial features all contribute to the proposed model's performance. Besides, according to the neural representations of the MRCPs and ERS/D oscillations, we divided the neural activities into several different frequency bands and adopted the M-Info method for automatic selection. The results in Fig. 7 showed the effectiveness of the frequency band selection. For the MRCP features, the performance difference between the bands was slight; this could be because all these frequency bands contained the low-frequency band, and the temporal movement-related information is mostly concentrated in the low frequencies. For the ERS/D features, the frequency bands of [8, 40] Hz and [0, 40] Hz showed slightly better performance, and both of these bands covered the rhythms of the ERD and ERS oscillations.

As shown in Fig. 9, for the sliding-window results, the decoding performance increased gradually as the decoding window covered more of the real movement period, with more useful information contributing to the movement decoding. Finally, the classification accuracy of our proposed model for the six-class unimanual and bimanual movements reached 59.4% with the movement window of [−1, 0] s before movement onset, reached 77.2% with the time window of [−0.5, 0.5] s centered on the movement onset, and peaked at 80.3% with the time window of [0, 1] s after movement onset. There are also several studies exploring bimanual movement decoding. In [22], the 7-class reach-and-grasp unimanual and bimanual movements' decoding accuracy peaked at 38.6% with the time window of [0.1, 1.1] s. In [17], the 6-class unimanual and bimanual movement directions' decoding accuracy peaked at 69.02% with the time window of [0.3, 1.3] s. In [23], the 3-class bimanual coordinated directions' decoding accuracy peaked at 70.1% using a window of [0, 1] s. Though using different datasets, our proposed method showed superior performance in unimanual and bimanual movement decoding. To further validate the effectiveness of our proposed method, we also compared it with several state-of-the-art models on our dataset. The results in Table III showed that our model had not only higher decoding accuracy but also stronger robustness.
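The sliding-window evaluation reported above amounts to slicing onset-aligned epochs at different offsets; a small sketch follows, with the sampling rate as an assumed value and the 4-s pre-onset span taken from the protocol.

```python
# Sketch of the sliding-window slicing used in the evaluation: extract a
# [t0, t1] s window relative to movement onset from onset-aligned epochs.
# The 4-s pre-onset span follows the protocol; fs = 256 Hz is assumed.
import numpy as np

def slice_window(epochs, fs, pre, t0, t1):
    """epochs: trials x channels x time, starting `pre` seconds before onset."""
    a = int(round((pre + t0) * fs))
    b = int(round((pre + t1) * fs))
    return epochs[:, :, a:b]

epochs = np.random.randn(80, 24, 11 * 256)            # placeholder 11-s trials
win_pre = slice_window(epochs, 256, 4.0, -1.0, 0.0)   # the [-1, 0] s window
win_post = slice_window(epochs, 256, 4.0, 0.0, 1.0)   # the [0, 1] s window
```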
V. CONCLUSION

To improve the multi-class decoding performance of unimanual and bimanual movements from EEG signals, we proposed a neurophysiological signatures-driven deep learning model utilizing the MRCPs and ERS/D oscillation representations in ME. The proposed model was inspired by the finding that executed motor-related information is encoded by both evoked potentials and oscillation components. The proposed model contains a feature representation module, an attention-based channel-weighting module, and a shallow convolutional neural network module. Feature representations of the MRCPs and time-frequency analyses were plotted, and the feature maps were extracted according to the neural signatures. As a result, the proposed model obtained superior performance compared with existing EEG-based ME decoding methods. The decoding accuracy of the six-class unimanual and bimanual movements peaked at 80.3%. Besides, each feature representation part was validated to be effective.

The proposed model is a promising tool for decoding multi-class unimanual and bimanual movements. The key lies in utilizing the neural signatures of ME to capture both the MRCPs and ERS/D oscillation features. Future work includes collecting data from target users, involving single left-hand movements and more types of unimanual and bimanual movements, and further testing the performance of the proposed model. Furthermore, considering that developing a robust model across subjects and sessions is valuable for the practical applications of BCI, we will focus on this issue in future work, including involving transfer learning and developing an adaptive model.

ACKNOWLEDGMENT

The authors would like to thank all the participants for volunteering to participate in our experiments.

REFERENCES

[1] J. Wolpaw, N. Birbaumer, D. McFarland, G. Pfurtscheller, and T. Vaughan, "Brain–computer interfaces for communication and control," Clin. Neurophysiol., vol. 113, no. 6, pp. 767–791, 2002.
[2] N. Birbaumer and L. G. Cohen, "Brain–computer interfaces: Communication and restoration of movement in paralysis," J. Physiol., vol. 579, no. 3, pp. 621–636, Mar. 2007.
[3] A. Schwarz, P. Ofner, J. Pereira, A. I. Sburlea, and G. R. Müller-Putz, "Decoding natural reach-and-grasp actions from human EEG," J. Neural Eng., vol. 15, no. 1, Feb. 2018, Art. no. 016005.
[4] M. A. Khan, R. Das, H. K. Iversen, and S. Puthusserypady, "Review on motor imagery based BCI systems for upper limb post-stroke neurorehabilitation: From designing to application," Comput. Biol. Med., vol. 123, Aug. 2020, Art. no. 103843.
[5] K. K. Ang and C. Guan, "Brain–computer interface for neurorehabilitation of upper limb after stroke," Proc. IEEE, vol. 103, no. 6, pp. 944–953, Jun. 2015.
[6] A. Schwarz, M. K. Höller, J. Pereira, P. Ofner, and G. R. Müller-Putz, "Decoding hand movements from human EEG to control a robotic arm in a simulation environment," J. Neural Eng., vol. 17, no. 3, Jun. 2020, Art. no. 036010.
[7] P. Ofner, A. Schwarz, J. Pereira, D. Wyss, R. Wildburger, and G. R. Müller-Putz, "Attempted arm and hand movements can be decoded from low-frequency EEG from persons with spinal cord injury," Sci. Rep., vol. 9, no. 1, p. 7134, May 2019.
[8] G. R. Müller-Putz, R. Scherer, G. Pfurtscheller, and R. Rupp, "EEG-based neuroprosthesis control: A step towards clinical practice," Neurosci. Lett., vol. 382, nos. 1–2, pp. 169–174, Jul. 2005.
[9] C. Neuper, G. Müller-Putz, R. Scherer, and G. Pfurtscheller, "Motor imagery and EEG-based control of spelling devices and neuroprostheses," Prog. Brain Res., vol. 159, pp. 393–409, Jan. 2006, doi: 10.1016/S0079-6123(06)59025-9.
[10] G. Pfurtscheller and C. Neuper, "Motor imagery and direct brain–computer communication," Proc. IEEE, vol. 89, no. 7, pp. 1123–1134, Jul. 2001.
[11] A. Guillot, C. Collet, V. A. Nguyen, F. Malouin, C. Richards, and J. Doyon, "Brain activity during visual versus kinesthetic imagery: An fMRI study," Hum. Brain Mapping, vol. 30, no. 7, pp. 2157–2172, Jul. 2009.
[12] G. Pfurtscheller and C. Neuper, "Motor imagery activates primary sensorimotor area in humans," Neurosci. Lett., vol. 239, nos. 2–3, pp. 65–68, 1997.
[13] K. Nakayashiki, M. Saeki, Y. Takata, Y. Hayashi, and T. Kondo, "Modulation of event-related desynchronization during kinematic and kinetic hand movements," J. NeuroEng. Rehabil., vol. 11, no. 1, p. 90, May 2014.
[14] C. Babiloni et al., "Human movement-related potentials vs desynchronization of EEG alpha rhythm: A high-resolution EEG study," NeuroImage, vol. 10, no. 6, pp. 658–665, 1999.
[15] H. Shibasaki and M. Hallett, "What is the Bereitschaftspotential?" Clin. Neurophysiol., vol. 117, no. 11, pp. 2341–2356, Nov. 2006.
[16] M. Jochumsen, I. Khan Niazi, D. Taylor, D. Farina, and K. Dremstrup, "Detecting and classifying movement-related cortical potentials associated with hand movements in healthy subjects and stroke patients from single-electrode, single-trial EEG," J. Neural Eng., vol. 12, no. 5, Oct. 2015, Art. no. 056013.
[17] J. Wang, L. Bi, W. Fei, and C. Guan, "Decoding single-hand and both-hand movement directions from noninvasive neural signals," IEEE Trans. Biomed. Eng., vol. 68, no. 6, pp. 1932–1940, Jun. 2021.
[18] L. Mercado et al., "Decoding the torque of lower limb joints from EEG recordings of pre-gait movements using a machine learning scheme," Neurocomputing, vol. 446, pp. 118–129, Jul. 2021.
[19] L. Yang, H. Leung, M. Plank, J. Snider, and H. Poizner, "EEG activity during movement planning encodes upcoming peak speed and acceleration and improves the accuracy in predicting hand kinematics," IEEE J. Biomed. Health Informat., vol. 19, no. 1, pp. 22–28, Jan. 2015.
[20] Y.-F. Chen et al., "Continuous bimanual trajectory decoding of coordinated movement from EEG signals," IEEE J. Biomed. Health Informat., vol. 26, no. 12, pp. 6012–6023, Dec. 2022.
[21] J. H. Cauraugh, N. Lodha, S. K. Naik, and J. J. Summers, "Bilateral movement training and stroke motor recovery progress: A structured review and meta-analysis," Hum. Movement Sci., vol. 29, no. 5, pp. 853–870, Oct. 2010.
[22] A. Schwarz, J. Pereira, R. Kobler, and G. R. Müller-Putz, "Unimanual and bimanual reach-and-grasp actions can be decoded from human EEG," IEEE Trans. Biomed. Eng., vol. 67, no. 6, pp. 1684–1695, Jun. 2020.
[23] M. Zhang et al., "Decoding coordinated directions of bimanual movements from EEG signals," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 31, pp. 248–259, 2022, doi: 10.1109/TNSRE.2022.3220884.
[24] H. Adeli, Z. Zhou, and N. Dadmehr, "Analysis of EEG records in an epileptic patient using wavelet transform," J. Neurosci. Methods, vol. 123, no. 1, pp. 69–87, Feb. 2003.
[25] A. Samant and H. Adeli, "Feature extraction for traffic incident detection using wavelet transform and linear discriminant analysis," Comput.-Aided Civil Infrastruct. Eng., vol. 15, no. 4, pp. 241–250, Jul. 2000.
[26] J.-S. Bang, M.-H. Lee, S. Fazli, C. Guan, and S.-W. Lee, "Spatio-spectral feature representation for motor imagery classification using convolutional neural networks," IEEE Trans. Neural Netw. Learn. Syst., vol. 33, no. 7, pp. 3038–3049, Jul. 2022.
[27] A. Gramfort, "MEG and EEG data analysis with MNE-Python," Frontiers Neurosci., vol. 7, p. 267, Dec. 2013, doi: 10.3389/fnins.2013.00267.
[28] Y. Song, X. Jia, L. Yang, and L. Xie, "Transformer-based spatial–temporal feature learning for EEG decoding," 2021, arXiv:2106.11170.
[29] V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance, "EEGNet: A compact convolutional neural network for EEG-based brain–computer interfaces," J. Neural Eng., vol. 15, no. 5, Oct. 2018, Art. no. 056013.
[30] G. Bressan, G. Cisotto, G. R. Müller-Putz, and S. C. Wriessnegger, "Deep learning-based classification of fine hand movements from low frequency EEG," Future Internet, vol. 13, no. 5, p. 103, Apr. 2021.
[31] P. Nachev, C. Kennard, and M. Husain, "Functional role of the supplementary and pre-supplementary motor areas," Nature Rev. Neurosci., vol. 9, no. 11, pp. 856–869, Nov. 2008.
[32] X. Liao, D. Yao, D. Wu, and C. Li, "Combining spatial filters for the classification of single-trial EEG in a finger movement task," IEEE Trans. Biomed. Eng., vol. 54, no. 5, pp. 821–831, May 2007.
[33] K. Wang, M. Xu, Y. Wang, S. Zhang, L. Chen, and D. Ming, "Enhance decoding of pre-movement EEG patterns for brain–computer interfaces," J. Neural Eng., vol. 17, no. 1, Jan. 2020, Art. no. 016033.