An Upper-Limb Rehabilitation Exoskeleton System Controlled by MI Recognition Model With Deep Emphasized Informative Features

IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol. 31, 2023

Abstract— The prevalence of stroke continues to increase with global aging. Based on the motor imagery (MI) brain–computer interface (BCI) paradigm and virtual reality (VR) technology, we designed and developed an upper-limb rehabilitation exoskeleton system (VR-ULE) in VR scenes for stroke patients. The VR-ULE system makes use of an MI electroencephalogram (EEG) recognition model with a convolutional neural network and squeeze-and-excitation (SE) blocks to obtain the patient's motion intentions and control the exoskeleton during rehabilitation training movements. Because of individual differences in EEG, the frequency bands with the optimal MI EEG features differ from patient to patient. Therefore, the weights of the different feature channels are learned by the SE blocks to emphasize the informative frequency-band features. The MI cues in the VR-based virtual scenes can improve the interhemispheric balance and the neuroplasticity of patients. This also makes up for the disadvantages of current MI-BCIs, such as single usage scenarios, poor individual adaptability, and many interfering factors. We designed an offline training experiment to evaluate the feasibility of the EEG recognition strategy, and an online control experiment to verify the effectiveness of the VR-ULE system. The results showed that the MI classification method with MI cues in the VR scenes improved the accuracy of MI classification (86.49% ± 3.02%); all subjects performed two types of rehabilitation training tasks under their own models trained in the offline training experiment, with the highest average completion rates of 86.82% ± 4.66% and 88.48% ± 5.84%. The VR-ULE system can efficiently help stroke patients with hemiplegia complete upper-limb rehabilitation training tasks, and provides new methods and strategies for BCI-based rehabilitation devices.

Index Terms— Rehabilitation exoskeleton, brain–computer interface, virtual reality, convolutional neural networks, squeeze-and-excitation block, motor imagery.

Manuscript received 2 July 2023; revised 23 September 2023; accepted 26 October 2023. Date of publication 1 November 2023; date of current version 9 November 2023. This work was supported in part by the Key Research and Development Program of Zhejiang Province under Grant 2022C03148, in part by the Philosophy and Social Science Planning Fund Project of Zhejiang Province under Grant 22NDJC007Z, in part by the National Social Science Fund of China under Grant 22CTQ016, and in part by the Fundamental Research Funds for the Provincial Universities of Zhejiang under Grant GB202003008 and Grant GB202002012. (Corresponding author: Yuxin Peng.)

This work involved human subjects or animals in its research. Approval of all ethical and experimental procedures and protocols was granted by the Ethics Committee of the Zhejiang University of Technology.

Zhichuan Tang is with the Industrial Design Institute, Zhejiang University of Technology, Hangzhou 310023, China, and also with the Modern Industrial Design Institute, Zhejiang University, Hangzhou 310013, China (e-mail: [email protected]).

Hang Wang, Zhixuan Cui, Xiaoneng Jin, Lekai Zhang, and Baixi Xing are with the Industrial Design Institute, Zhejiang University of Technology, Hangzhou 310023, China.

Yuxin Peng is with the College of Education, Zhejiang University, Hangzhou 310058, China (e-mail: [email protected]).

Digital Object Identifier 10.1109/TNSRE.2023.3329059

© 2023 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/

I. INTRODUCTION

The prevalence of stroke continues to rise with global aging. Stroke patients with hemiplegia have neurological damage caused by the massive death of brain cells, resulting in varying degrees of upper-limb motion disorders [1]. Rehabilitation exoskeletons based on brain–computer interface (BCI) technology have become a more common rehabilitation treatment plan for stroke patients in different rehabilitation periods [2].

BCI technology realizes communication between the human brain and external electronic devices by decoding features of the electroencephalogram (EEG) in the cerebral cortex. As a new means of expression and interaction for motor intention, BCI has been widely used in the rehabilitation training of stroke patients at different stages [3]. Barsotti et al. [4] designed a set of upper-limb exoskeletons based on MI-BCI to rehabilitate the grasping ability of poststroke patients. Soekadar et al. [5] designed a noninvasive brain/neural hand exoskeleton to assist stroke patients in daily motions such as eating and drinking. As a bridge for direct communication between the human brain and external devices, BCI has been widely used in stroke rehabilitation treatment [6].

As one of the main paradigms of BCI technology, MI has been widely used in rehabilitation therapy of cerebral motor function in stroke patients [7]. Through MI training, the motor nerve conduction pathways of stroke patients can be repaired
or reconstructed. The MI of different motions is mapped to EEG changes in the corresponding regions of the cerebral cortex, and decoding different EEG features can distinguish different motions [8]. For example, in unilateral hand MI, mu rhythms (8–13 Hz) and beta rhythms (13–30 Hz) in the motor sensory area on the opposite side of the brain decrease in power, while mu and beta rhythms in the ipsilateral motor sensory area increase in power. These phenomena are called event-related desynchronization (ERD) and event-related synchronization (ERS) [7]. BCI technology uses various computer algorithms to classify these different ERD/ERS patterns and convert them into control signals for external devices. Tang et al. [8] proposed a BMI system based on ERD/ERS for upper-limb exoskeleton control, achieving high classification accuracy. Liu et al. [9] proposed an ERD/ERS-based BCI control system and verified its effectiveness by operating a dual-arm multi-finger robot to complete tasks. Based on ERD/ERS, Li et al. [10] proposed a hybrid BCI control strategy that combines EEG and EMG signals to achieve flexible and stable control of a lower-limb dynamic exoskeleton.

Convolutional Neural Networks (CNNs), as a representative deep learning algorithm, have been widely used in computer vision, natural language processing, and other fields [11]. Conventional EEG data processing relies on the experience of researchers for complex data preprocessing and feature extraction. However, human-operated preprocessing and feature extraction reduce the accuracy and reliability of classification results [11], and the correlations between EEGs of different channels are easily ignored during the feature extraction process [12]. The CNN model can automatically extract features from the original input signal and obtain deeper and more distinguishable feature information through local receptive fields, weight sharing, and downsampling, which reduces the subjectivity and incompleteness of feature selection caused by human factors [13], [14], [15]. Amin et al. [16] proposed an attention-based CNN model to learn the importance of different features of MI data and obtained good results when they applied it to the BCI IV 2a dataset. Roy [17] proposed a Multi-Scale (MS) CNN which can extract the distinguishable features of several non-overlapping canonical frequency bands of EEG signals at multiple scales for MI-BCI classification. Zhao et al. [18] proposed a multi-branch 3D-CNN classification strategy in which the 3D representation is generated by transforming EEG signals into a sequence of 2D arrays that preserves the spatial distribution of the sampling electrodes. Li et al. [19] proposed an end-to-end EEG decoding framework, which employs raw multi-channel EEG as input, to boost decoding accuracy with a channel-projection mixed-scale convolutional neural network aided by amplitude-perturbation data augmentation. However, there are significant individual differences between subjects, such as in the optimal time period and frequency band of ERD/ERS changes [20], [21], [22], so it is not sufficient for conventional recognition methods to perform only shallow temporal or spectral feature learning on MI features. Therefore, given the influence of individual differences among stroke patients, the refinement and weighting of deep features is another research interest that could improve the accuracy of MI-EEG deep learning decoding models.

Squeeze-and-Excitation Networks (SENet), a channel-based attention mechanism, treat each feature channel as a whole and use global information to automatically "learn" the importance of different feature channels during training, thereby suppressing relatively unimportant features and boosting the most discriminative and information-rich features to improve the accuracy of the model [23]. Sun et al. [24] proposed a CNN with sparse spectrotemporal decomposition (SSD) for MI-EEG classification, which adopted SE to adaptively recalibrate the channel direction. Zhang et al. [25] proposed an orthogonal CNN fused with SE blocks to perform feature recalibration across different EEG channels. Inspired by SE, we merged SE blocks into the CNN model, enabling the model to automatically obtain the weight of each feature channel (EEG features of different time periods and frequency bands), adaptively weight the feature maps generated by the original feature fusion layer, and increase the proportion of useful features in the current task. This approach addresses the problem that the optimal EEG features of different subjects are located in different frequency bands, and trains an MI recognition and classification model with high recognition accuracy for a specific user.

The current rehabilitation strategies based on MI-BCI mainly improve the MI recognition accuracy of subjects by improving the feature extraction algorithms, while neglecting the impact of MI signal strength on recognition accuracy [26]. Therefore, in order to maximize the activation of the subjects' motor nerves and improve their signal strength, virtual rehabilitation technology combining MI-BCI and virtual reality (VR) technology has been applied in the field of stroke rehabilitation [27]. VR technology solves the problems of poor immersion and multiple external environmental interference factors (sound, light) in conventional rehabilitation training strategies (observing cues on computer screens) [28]. VR technology can provide an immersive training environment, improve the interhemispheric activation balance (IHAB), and enhance the cortical connectivity between the primary sensorimotor cortex (SM1), the primary motor cortex (M1), and the supplementary motor area (SMA) on both sides of the subject during motion induction. The VR scene can provide real-time feedback in each training task, achieve more comprehensive MI training, improve rehabilitation efficiency, shorten the rehabilitation period, and enhance the patient's initiative and adaptability in rehabilitation [28]. Jang et al. [29] demonstrated a shift in cortical organization of the affected limb from the ipsilateral hemisphere to the contralateral hemisphere after a VR intervention. Mekbib et al. [30] revealed that unilateral and bilateral limb mirroring exercises in an immersive virtual environment may stimulate MNs in the damaged brain areas and may facilitate functional recovery of the affected upper extremities post-stroke. However, the current VR approaches use single-scenario rehabilitation, and their inter-individual adaptability is poor [27]. At the same time, the conventional rehabilitation training strategies lack visual feedback based on motor intention, but at the neural
Fig. 5. Framework diagram of the MI recognition model. The input EEG signal samples are divided into multiple frequency bands across multiple channels, and the weights of the different feature channels are learned by the SE blocks to emphasize the informative frequency-band features.
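The squeeze-excitation-scale recalibration of band-wise feature channels described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the trained model: the eight 40 × 75 maps stand in for the band-filtered feature maps, and `w1`/`w2` are random placeholder weights with an assumed reduction ratio of 2.

```python
import numpy as np

def se_recalibrate(feature_maps, w1, w2):
    """Squeeze-and-excitation channel reweighting (after Hu et al. [23]).

    feature_maps: (C, H, W) array, one map per frequency-band channel.
    w1: (C//r, C) dimension-reducing FC weights (illustrative).
    w2: (C, C//r) dimension-restoring FC weights (illustrative).
    """
    # Squeeze: global average pooling -> one descriptor per channel.
    z = feature_maps.mean(axis=(1, 2))                    # (C,)
    # Excitation: FC -> ReLU -> FC -> sigmoid bottleneck.
    s = np.maximum(w1 @ z, 0.0)                           # ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))                   # sigmoid, (C,)
    # Scale: channel-wise multiplication of the original maps.
    return feature_maps * s[:, None, None], s

rng = np.random.default_rng(0)
# Eight band-filtered feature maps of size 40 x 75, as in the model's R4 layer.
maps = rng.standard_normal((8, 40, 75))
w1 = rng.standard_normal((4, 8)) * 0.1   # reduction ratio r = 2 (assumed)
w2 = rng.standard_normal((8, 4)) * 0.1
weighted, weights = se_recalibrate(maps, w1, w2)
print(weighted.shape, weights.shape)  # (8, 40, 75) (8,)
```

Because the sigmoid keeps every channel weight in (0, 1), a subject's less informative frequency bands are attenuated rather than discarded, which is what lets the same architecture adapt to different subjects.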
layers with activation functions and no bias. These two fully-connected layers reduce and then restore the dimension, respectively, forming a bottleneck structure. The process can be expressed as

F_6 = F_{ex}(F_5, W) = \sigma(g(F_5, W)) = \sigma\big(W_2\,\delta(W_1 F_5)\big), (4)

i.e., F_5 is first multiplied by the dimension-reducing weight matrix W_1 in a fully connected layer operation and passed through a rectified linear unit (ReLU) \delta; the result is then multiplied by the dimension-restoring weight matrix W_2 in a second fully connected layer operation and passed through the sigmoid function \sigma, so that a [1 × 1 × 8] weight vector F_6 is output.

Feature weighting layer (R7) - This layer performs channel-wise multiplication, using the weights obtained by the excitation operation to adaptively weight the original eight feature maps channel by channel. That is, it multiplies the eight feature maps input to the SE block by the eight weights in F_6. Finally, eight feature maps of size [40 × 75] are obtained to achieve feature weighting. The process can be expressed as

\widetilde{R7M}_c = F_{scale}(R4M_c, F_{6,c}) = R4M_c \cdot F_{6,c}. (5)

Pooling layer (R8) - This layer performs average pooling on the output of the R7 layer in 5 × 5 regions with a stride of 5, and the output is eight feature maps of size [8 × 15].

Fully connected layer (F9) - The eight feature maps output by the R8 layer are fully connected to obtain eight feature maps of size [120 × 1]. This process can be expressed as

F_m^9(j) = f\Big(\sum\nolimits_c \widetilde{R7M}_c\big((j-1)\times 10+1\big) \times k_m^9 + b_m^9(j)\Big), (6)

where k_m^9 is the [1 × 1] convolution kernel and b_m^9(j) is the bias.

Fully connected layer (F10) - This layer fully connects the eight feature maps output from the F9 layer to form a classification part of size [960 × 1], containing 200 neurons:

F_m^{10}(j) = f\Big(\sum\nolimits_{i=1}^{8}\sum\nolimits_{p=1}^{120} F_i^9(p)\,\omega_i^{10}(p) + b^{10}(j)\Big), (7)

where \omega_i^{10}(p) is the connection weight from the neurons in the F9 layer to the neurons in the F10 layer, and b^{10}(j) is the bias.

Output layer (O11) - This is the output layer, containing two neurons, representing a binary classification problem. The process can be expressed as

O_m^{11}(j) = f\Big(\sum\nolimits_{i=1}^{200} F_m^{10}(i)\,\omega^5(i) + b^5(j)\Big), (8)

where \omega^5(i) is the connection weight from the neurons in the F10 layer to the neurons in the O11 layer, and b^5(j) is the bias.

Online hybrid control model: The online hybrid control module converts the classification signal identified by the MI recognition software subsystem into a VR-scene character motion control signal or an exoskeleton motion control signal. First, the VR scene control module in the MI recognition software subsystem randomly generates left- and right-hand MI motion cues, and the stroke patients then attempt MI within a certain time. The trained CNN+SE model acquires the subject's MI EEG data and performs identification and classification. The classification results are interpreted by the online hybrid control module according to the training task and converted into a continuous control signal output. The control-flow diagram of the VR-ULE system is shown in Fig. 6.

III. EXPERIMENT

For the MI classification strategy based on the combination of VR and SE blocks in our proposed VR-ULE, we designed offline training experiments and online control experiments to test the effectiveness of the strategy. In the offline training experiment, we trained two types of CNN+SE models, cued by the conventional experimental scene and by the VR scene, for each subject in order to perform comparative verification. In the online control experiment, we first selected the highest-accuracy classification model trained in the offline experiment for each subject. Then, analogous to the brain motor neuron rehabilitation stage in the early rehabilitation of stroke patients, the subjects performed MI based on the VR scene to control the virtual characters to complete the corresponding virtual tasks. At the same time, analogous to the upper-limb muscle group strength training stage of stroke patients in the later stage of rehabilitation, the subjects independently performed MI according to the task requirements to achieve different control of the exoskeleton system and complete the corresponding tasks. Finally, the completion results of the two types of analogy experiments were evaluated. We also chose three methods, conventional CNN [11], MRA+LDA [33], and CSP+SVM [34], to train the MI recognition model on the same training set, and then these models were tested using the same test set.

A. Subjects and Dataset Preparation

For the experiment, we recruited 20 healthy subjects (age: 22 ± 1.21 years), all right-handed (as assessed by the Edinburgh Handedness Questionnaire [35]). At the same time, we also recruited one mild stroke patient and one moderate stroke patient to participate in the experiment. All the subjects participated in an EEG experiment for the first time and were not told any experimental hypotheses. Each subject signed an informed consent form before the experiment. The experimental procedure was reviewed and approved by the Human Ethics Review Committee of Zhejiang University of Technology.

EEG signals were acquired with the ActiveTwo 64-channel EEG signal acquisition system (BioSemi, Netherlands). Twenty-three channels of EEG data (Fz, FC3, FC1, FCz, FC2, FC4, FC6, C5, C3, C1, Cz, C2, C4, C6, CP3, CP1, CPz, CP2,
TABLE I
Classification Accuracy of Four Classification Models for Each Subject in the Public Dataset

TABLE II
The Test Classification Accuracy of Each Subject's Non-VR Test and VR Test Data in the Four Classification Models

TABLE III
The Test Classification Precision, Recall, and F-Score of the VR Test Data for Each Subject in the Four Classification Models
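Table III reports per-subject precision, recall, and F-score, which follow directly from the binary (left- vs. right-hand MI) confusion matrix. A small self-contained sketch of those definitions; the label vectors below are made-up illustrative data, not the paper's results:

```python
def binary_metrics(y_true, y_pred):
    """Precision, recall, and F1 for a binary (left/right MI) task."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Illustrative trial labels only (1 = right-hand MI, 0 = left-hand MI).
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 0]
p, r, f = binary_metrics(y_true, y_pred)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.75 0.75 0.75
```

Reporting all three quantities per subject, rather than accuracy alone, exposes cases where a model systematically favors one MI class.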
measured experimental dataset. The reason is that the CNN learns the temporal and spatial features of the subject's MI, and the SE blocks perform feature weighting operations on the EEG data of the subjects in different frequency bands to learn the strong features of each subject's MI EEG frequency bands. The advantage of this method is that, while avoiding individual differences, the final classification result of the model is also output based on the weights of all frequency bands, which prevents a transiently missing signal in one band of the subject during the online test from affecting the classification results. Relevant studies have obtained results consistent with ours. For example, Sun et al. [24] used a deep learning framework called SSDSE-CNN integrating the SE blocks for MI-EEG classification, and the highest classification accuracy obtained was 79.3% ± 6.9%. Li et al. [37] proposed a novel temporal-spectral-based SE feature fusion network for MI-EEG decoding, and the highest classification accuracy was 84.49% on the public dataset.

Our offline experimental results also show that when the subjects attempted MI, compared with the plain screen cues, the use of VR cues was more helpful for training a network model, yielding higher classification accuracy. All four classification models were verified, among which CNN+SE obtained a classification accuracy of 86.49% ± 3.02%. The reason is that the VR scene brings the subjects a more immersive experience, avoids the interference of many external factors, and makes it easier for the subjects to concentrate; the arm movements of the characters in the virtual scene guide the subjects to quickly generate corresponding responses, improving their IHAB while activating connections between more areas of the cerebral cortex, cueing patients to produce more pronounced MI EEG features. In related research on stroke rehabilitation, Sip et al. [38] applied the Virtual Mirror Hand 1.0 procedure to the treatment of hand functional recovery after stroke and compared it with classic mirror therapy, finding that applying VR to the rehabilitation of stroke patients was feasible. Nath et al. [39] developed a VR task library for upper-limb rehabilitation of poststroke patients and concluded that VR therapy can improve the clinical symptoms of chronic stroke patients.

Our online control experiments showed that the average success rate of the exoskeleton control task was 88.48% ± 5.84%, which was higher than that of the virtual character arm movement tasks in the VR scenes. The reason is that the MI command in the exoskeleton control task uses real task actions to improve patients' perception and motion mechanisms [40]. Patients can perform more concrete MI based on the

VI. CONCLUSION

In this paper, based on the MI-BCI paradigm and VR technology, we designed and developed a VR-ULE that can be used for the rehabilitation of stroke patients with hemiplegia. The system obtains the patient's motion intention through the MI EEG identification strategy based on a CNN and SE blocks, and it controls the execution of VR-ULE rehabilitation training actions. The SE module makes up for the shortcoming that different subjects differ in MI frequency-domain characteristics. The MI indication based on the VR scene strengthens the MI EEG of the subjects and makes up for the shortcomings of the current MI-BCI rehabilitation strategies, such as a single rehabilitation scene, poor individual adaptability, and many external environmental interfering factors. Our results show that, compared with the conventional classification strategies, the proposed MI EEG recognition method (CNN+SE) can improve MI classification accuracy. The VR-ULE system can more efficiently help stroke patients complete upper-limb rehabilitation training tasks through a more reasonable MI identification strategy and the immersive experience of VR scenes, all of which improve the patients' autonomous rehabilitation.

REFERENCES

[1] F. Y. Wu and H. H. Asada, "Implicit and intuitive grasp posture control for wearable robotic fingers: A data-driven method using partial least squares," IEEE Trans. Robot., vol. 32, no. 1, pp. 176–186, Feb. 2016, doi: 10.1109/TRO.2015.2506731.
[2] F. Grimm, A. Walter, M. Spüler, G. Naros, W. Rosenstiel, and A. Gharabaghi, "Hybrid neuroprosthesis for the upper limb: Combining brain-controlled neuromuscular stimulation with a multi-joint arm exoskeleton," Frontiers Neurosci., vol. 10, p. 367, Aug. 2016, doi: 10.3389/fnins.2016.00367.
[3] A. Remsik et al., "A review of the progression and future implications of brain-computer interface therapies for restoration of distal upper extremity motor function after stroke," Exp. Rev. Med. Devices, vol. 13, no. 5, pp. 445–454, May 2016, doi: 10.1080/17434440.2016.1174572.
[4] M. Barsotti et al., "A full upper limb robotic exoskeleton for reaching and grasping rehabilitation triggered by MI-BCI," in Proc. IEEE Int. Conf. Rehabil. Robot. (ICORR), Singapore, Aug. 2015, pp. 49–54, doi: 10.1109/ICORR.2015.7281174.
[5] S. R. Soekadar et al., "Hybrid EEG/EOG-based brain/neural hand exoskeleton restores fully independent daily living activities after quadriplegia," Sci. Robot., vol. 1, Dec. 2016, Art. no. eaag3296, doi: 10.1126/scirobotics.aag3296.
[6] P. D. E. Baniqued et al., "Brain-computer interface robotics for hand rehabilitation after stroke: A systematic review," J. NeuroEng. Rehabil., vol. 18, no. 1, p. 15, Jan. 2021, doi: 10.1186/s12984-021-00820-8.
[7] G. Pfurtscheller and C. Neuper, "Motor imagery activates primary sensorimotor area in humans," Neurosci. Lett., vol. 239, pp. 65–68, Dec. 1997, doi: 10.1016/S0304-3940(97)00889-6.
[8] Z. Tang, S. Sun, S. Zhang, Y. Chen, C. Li, and S. Chen, "A brain-machine interface based on ERD/ERS for an upper-limb exoskeleton control," Sensors, vol. 16, no. 12, p. 2050, Dec. 2016, doi: 10.3390/s16122050.
[9] Y. Liu et al., "Motor-imagery-based teleoperation of a dual-arm robot performing manipulation tasks," IEEE Trans. Cognit. Develop. Syst., vol. 11, no. 3, pp. 414–424, Sep. 2019, doi: 10.1109/TCDS.2018.2875052.
[10] Z. Li et al., "Hybrid brain/muscle signals powered wearable walking exoskeleton enhancing motor ability in climbing stairs activity," IEEE Trans. Med. Robot. Bionics, vol. 1, no. 4, pp. 218–227, Nov. 2019, doi: 10.1109/TMRB.2019.2949865.
[11] Z. Tang, C. Li, and S. Sun, "Single-trial EEG classification of motor imagery using deep convolutional neural networks," Optik, vol. 130, pp. 11–18, Feb. 2017, doi: 10.1016/j.ijleo.2016.10.117.
[12] X. Xiao and Y. Fang, "Motor imagery EEG signal recognition using deep convolution neural network," Frontiers Neurosci., vol. 15, Mar. 2021, Art. no. 655599, doi: 10.3389/fnins.2021.655599.
[13] V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance, "EEGNet: A compact convolutional neural network for EEG-based brain-computer interfaces," J. Neural Eng., vol. 15, no. 5, Oct. 2018, Art. no. 056013, doi: 10.1088/1741-2552/aace8c.
[14] G. A. Altuwaijri and G. Muhammad, "A multibranch of convolutional neural network models for electroencephalogram-based motor imagery classification," Biosensors, vol. 12, no. 1, p. 22, Jan. 2022, doi: 10.3390/bios12010022.
[15] S. U. Amin, H. Altaheri, G. Muhammad, W. Abdul, and M. Alsulaiman, "Attention-inception and long short-term memory-based electroencephalography classification for motor imagery tasks in rehabilitation," IEEE Trans. Ind. Informat., vol. 18, no. 8, pp. 5412–5421, Aug. 2022, doi: 10.1109/TII.2021.3132340.
[16] S. U. Amin, H. Altaheri, G. Muhammad, M. Alsulaiman, and W. Abdul, "Attention based inception model for robust EEG motor imagery classification," in Proc. IEEE Int. Instrum. Meas. Technol. Conf. (I2MTC), Glasgow, U.K., May 2021, pp. 1–6, doi: 10.1109/I2MTC50364.2021.9460090.
[17] A. M. Roy, "An efficient multi-scale CNN model with intrinsic feature integration for motor imagery EEG subject classification in brain-machine interfaces," Biomed. Signal Process. Control, vol. 74, Apr. 2022, Art. no. 103496, doi: 10.1016/j.bspc.2022.103496.
[18] X. Zhao, H. Zhang, G. Zhu, F. You, S. Kuang, and L. Sun, "A multi-branch 3D convolutional neural network for EEG-based motor imagery classification," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 27, no. 10, pp. 2164–2177, Oct. 2019, doi: 10.1109/TNSRE.2019.2938295.
[19] Y. Li, X.-R. Zhang, B. Zhang, M.-Y. Lei, W.-G. Cui, and Y.-Z. Guo, "A channel-projection mixed-scale convolutional neural network for motor imagery EEG decoding," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 27, no. 6, pp. 1170–1180, Jun. 2019, doi: 10.1109/TNSRE.2019.2915621.
[20] G. Dai, J. Zhou, J. Huang, and N. Wang, "HS-CNN: A CNN with hybrid convolution scale for EEG motor imagery classification," J. Neural Eng., vol. 17, no. 1, Jan. 2020, Art. no. 016025, doi: 10.1088/1741-2552/ab405f.
[21] O. P. Idowu, A. E. Ilesanmi, X. Li, O. W. Samuel, P. Fang, and G. Li, "An integrated deep learning model for motor intention recognition of multi-class EEG signals in upper limb amputees," Comput. Methods Programs Biomed., vol. 206, Jul. 2021, Art. no. 106121, doi: 10.1016/j.cmpb.2021.106121.
[22] Y. Zhang, C. S. Nam, G. Zhou, J. Jin, X. Wang, and A. Cichocki, "Temporally constrained sparse group spatial patterns for motor imagery BCI," IEEE Trans. Cybern., vol. 49, no. 9, pp. 3322–3332, Sep. 2019, doi: 10.1109/TCYB.2018.2841847.
[23] J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, "Squeeze-and-excitation networks," 2019, arXiv:1709.01507.
[24] B. Sun, X. Zhao, H. Zhang, R. Bai, and T. Li, "EEG motor imagery classification with sparse spectrotemporal decomposition and deep learning," IEEE Trans. Autom. Sci. Eng., vol. 18, no. 2, pp. 541–551, Apr. 2021, doi: 10.1109/TASE.2020.3021456.
[25] J. Zhang, R. Yao, W. Ge, and J. Gao, "Orthogonal convolutional neural networks for automatic sleep stage classification based on single-channel EEG," Comput. Methods Programs Biomed., vol. 183, Jan. 2020, Art. no. 105089, doi: 10.1016/j.cmpb.2019.105089.
[26] T. P. Luu, Y. He, S. Brown, S. Nakagome, and J. L. Contreras-Vidal, "Gait adaptation to visual kinematic perturbations using a real-time closed-loop brain-computer interface to a virtual reality avatar," J. Neural Eng., vol. 13, no. 3, Jun. 2016, Art. no. 036006, doi: 10.1088/1741-2560/13/3/036006.
[27] L. Ferrero, M. Ortiz, V. Quiles, E. Iáñez, and J. M. Azorín, "Improving motor imagery of gait on a brain-computer interface by means of virtual reality: A case of study," IEEE Access, vol. 9, pp. 49121–49130, 2021, doi: 10.1109/ACCESS.2021.3068929.
[28] P. Xie et al., "Research on rehabilitation training strategies using multimodal virtual scene stimulation," Frontiers Aging Neurosci., vol. 14, Jun. 2022, Art. no. 892178, doi: 10.3389/fnagi.2022.892178.
[29] S. H. Jang et al., "Cortical reorganization and associated functional motor recovery after virtual reality in patients with chronic stroke: An experimenter-blind preliminary study," Arch. Phys. Med. Rehabil., vol. 86, no. 11, pp. 2218–2223, Nov. 2005, doi: 10.1016/j.apmr.2005.04.015.
[30] D. B. Mekbib et al., "Proactive motor functional recovery following immersive virtual reality-based limb mirroring therapy in patients with subacute stroke," Neurotherapeutics, vol. 17, no. 4, pp. 1919–1930, Oct. 2020, doi: 10.1007/s13311-020-00882-x.
[31] J. Huang, M. Lin, J. Fu, Y. Sun, and Q. Fang, "An immersive motor imagery training system for post-stroke rehabilitation combining VR and EMG-based real-time feedback," in Proc. 43rd Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), Nov. 2021, pp. 7590–7593, doi: 10.1109/EMBC46164.2021.9629767.
[32] F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi, "A review of classification algorithms for EEG-based brain–computer interfaces," J. Neural Eng., vol. 4, no. 2, pp. 1–13, Jun. 2007, doi: 10.1088/1741-2560/4/2/R01.
[33] P. Martin-Smith, J. Ortega, J. Asensio-Cubero, J. Q. Gan, and A. Ortiz, "A label-aided filter method for multi-objective feature selection in EEG classification for BCI," in Advances in Computational Intelligence, vol. 9094, I. Rojas, G. Joya, and A. Catala, Eds. Cham, Switzerland: Springer, 2015, pp. 133–144, doi: 10.1007/978-3-319-19258-1_12.
[34] H. Sun, Y. Xiang, Y. Sun, H. Zhu, and J. Zeng, "On-line EEG classification for brain-computer interface based on CSP and SVM," in Proc. 3rd Int. Congr. Image Signal Process., Yantai, China, Oct. 2010, pp. 4105–4108, doi: 10.1109/CISP.2010.5648081.
[35] R. C. Oldfield, "The assessment and analysis of handedness: The Edinburgh inventory," Neuropsychologia, vol. 9, no. 1, pp. 97–113, 1971, doi: 10.1016/0028-3932(71)90067-4.
[36] G. Pfurtscheller and F. H. L. da Silva, "Event-related EEG/MEG synchronization and desynchronization: Basic principles," Clin. Neurophysiol., vol. 110, no. 11, pp. 1842–1857, 1999, doi: 10.1016/S1388-2457(99)00141-8.
[37] Y. Li, L. Guo, Y. Liu, J. Liu, and F. Meng, "A temporal-spectral-based squeeze-and-excitation feature fusion network for motor imagery EEG decoding," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 29, pp. 1534–1545, 2021, doi: 10.1109/TNSRE.2021.3099908.
[38] P. Sip, M. Kozlowska, D. Czysz, P. Daroszewski, and P. Lisinski, "Perspectives of motor functional upper extremity recovery with the use of immersive virtual reality in stroke patients," Sensors, vol. 23, no. 2, p. 712, Jan. 2023, doi: 10.3390/s23020712.
[39] D. Nath et al., "Clinical effectiveness of non-immersive virtual reality tasks for post-stroke neuro-rehabilitation of distal upper-extremities: A case report," J. Clin. Med., vol. 12, no. 1, p. 92, Dec. 2022, doi: 10.3390/jcm12010092.
[40] J. S. Tutak, "Virtual reality and exercises for paretic upper limb of stroke survivors," Tehnicki Vjesnik, vol. 24, no. 2, pp. 451–458, Sep. 2017, doi: 10.17559/TV-20161011143721.
[41] K. M. Oostra, A. Van Bladel, A. C. L. Vanhoonacker, and G. Vingerhoets, "Damage to fronto-parietal networks impairs motor imagery ability after stroke: A voxel-based lesion symptom mapping study," Frontiers Behav. Neurosci., vol. 10, p. 5, Feb. 2016, doi: 10.3389/fnbeh.2016.00005.
[42] F. Pichiorri et al., "Brain-computer interface boosts motor imagery practice during stroke recovery," Ann. Neurol., vol. 77, no. 5, pp. 851–865, May 2015, doi: 10.1002/ana.24390.