
IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, VOL. 31, 2023

An Upper-Limb Rehabilitation Exoskeleton System Controlled by MI Recognition Model With Deep Emphasized Informative Features in a VR Scene

Zhichuan Tang, Hang Wang, Zhixuan Cui, Xiaoneng Jin, Lekai Zhang, Yuxin Peng, Member, IEEE, and Baixi Xing

Abstract— The prevalence of stroke continues to increase with the global aging. Based on the motor imagery (MI) brain–computer interface (BCI) paradigm and virtual reality (VR) technology, we designed and developed an upper-limb rehabilitation exoskeleton system (VR-ULE) in VR scenes for stroke patients. The VR-ULE system makes use of an MI electroencephalogram (EEG) recognition model with a convolutional neural network and squeeze-and-excitation (SE) blocks to obtain the patient's motion intentions and control the exoskeleton during rehabilitation training movements. Due to individual differences in EEG, the frequency bands with optimal MI EEG features differ from patient to patient. Therefore, the weight of different feature channels is learned by combining SE blocks to emphasize the informative frequency band features. The MI cues in the VR-based virtual scenes can improve the interhemispheric balance and the neuroplasticity of patients. The system also makes up for the disadvantages of current MI-BCIs, such as single usage scenarios, poor individual adaptability, and many interfering factors. We designed an offline training experiment to evaluate the feasibility of the EEG recognition strategy and an online control experiment to verify the effectiveness of the VR-ULE system. The results showed that the MI classification method with MI cues in the VR scenes improved the accuracy of MI classification (86.49% ± 3.02%); all subjects performed two types of rehabilitation training tasks under their own models trained in the offline training experiment, with the highest average completion rates of 86.82% ± 4.66% and 88.48% ± 5.84%. The VR-ULE system can efficiently help stroke patients with hemiplegia complete upper-limb rehabilitation training tasks, and provides new methods and strategies for BCI-based rehabilitation devices.

Index Terms— Rehabilitation exoskeleton, brain–computer interface, virtual reality, convolutional neural networks, squeeze-and-excitation block, motor imagery.

Manuscript received 2 July 2023; revised 23 September 2023; accepted 26 October 2023. Date of publication 1 November 2023; date of current version 9 November 2023. This work was supported in part by the Key Research and Development Program of Zhejiang Province under Grant 2022C03148, in part by the Philosophy and Social Science Planning Fund Project of Zhejiang Province under Grant 22NDJC007Z, in part by the National Social Science Fund of China under Grant 22CTQ016, and in part by the Fundamental Research Funds for the Provincial Universities of Zhejiang under Grant GB202003008 and Grant GB202002012. (Corresponding author: Yuxin Peng.) This work involved human subjects or animals in its research. Approval of all ethical and experimental procedures and protocols was granted by the Ethics Committee of the Zhejiang University of Technology.

Zhichuan Tang is with the Industrial Design Institute, Zhejiang University of Technology, Hangzhou 310023, China, and also with the Modern Industrial Design Institute, Zhejiang University, Hangzhou 310013, China (e-mail: [email protected]). Hang Wang, Zhixuan Cui, Xiaoneng Jin, Lekai Zhang, and Baixi Xing are with the Industrial Design Institute, Zhejiang University of Technology, Hangzhou 310023, China. Yuxin Peng is with the College of Education, Zhejiang University, Hangzhou 310058, China (e-mail: [email protected]).

Digital Object Identifier 10.1109/TNSRE.2023.3329059

© 2023 The Authors. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/

I. INTRODUCTION

The prevalence of stroke continues to rise with the global aging. Stroke patients with hemiplegia have neurological damage caused by the massive death of brain cells, resulting in varying degrees of upper-limb motion disorders [1]. Rehabilitation exoskeletons based on brain–computer interface (BCI) technology have become a more common rehabilitation treatment plan for stroke patients in different rehabilitation periods [2].

BCI technology realizes communication between the human brain and external electronic devices by decoding the features of the electroencephalogram (EEG) in the cerebral cortex. As a new means of expression and interaction for motor intention, BCI has been widely used in the rehabilitation training of stroke patients at different stages [3]. Barsotti et al. [4] designed a set of upper-limb exoskeletons based on MI-BCI to rehabilitate the grasping ability of poststroke patients. Soekadar et al. [5] designed a noninvasive brain/neural hand exoskeleton to assist stroke patients in daily motions such as eating and drinking. As a bridge for direct communication between the human brain and external devices, BCI has been widely used in stroke rehabilitation treatment [6].

As one of the main paradigms of BCI technology, MI has been widely used in rehabilitation therapy of cerebral motor function in stroke patients [7]. Through MI training, the motor nerve conduction pathways of stroke patients can be repaired or reconstructed.
The MI of different motions is mapped to the EEG changes in the corresponding regions of the cerebral cortex, and decoding different EEG features can distinguish different motions [8]. For example, in unilateral hand MI, mu rhythms (8-13 Hz) and beta rhythms (13-30 Hz) of the motor sensory area on the opposite side of the brain decrease in power, while mu rhythms and beta rhythms in the ipsilateral motor sensory area increase in power. This phenomenon is called event-related desynchronization (ERD) and event-related synchronization (ERS) [7]. BCI technology uses various computer algorithms to classify these different ERD/ERS patterns and convert them into control signals for external devices. Tang et al. [8] proposed a BMI system based on ERD/ERS used for upper-limb exoskeleton control, achieving high classification accuracy. Liu et al. [9] proposed an ERD/ERS-based BCI control system and verified its effectiveness by operating a two-arm multi-finger robot to complete tasks. Based on ERD/ERS, Li et al. [10] proposed a BCI hybrid control strategy that combines EEG and EMG signals to achieve flexible and stable control of a lower-limb dynamic exoskeleton.

Convolutional Neural Networks (CNNs), as a representative algorithm of deep learning, have been widely used in computer vision, natural language processing, and other fields [11]. Conventional EEG data processing relies on the experience of researchers for complex data preprocessing and feature extraction. However, human-operated preprocessing and feature extraction reduce the accuracy and reliability of classification results [11], and the correlations between EEGs of different channels are easily ignored during the feature extraction process [12]. The CNN model can automatically extract features from the original input signal and obtain deeper and more distinguishable feature information through local receptive fields, weight sharing, and down-sampling, which reduces the subjectivity and incompleteness of feature selection caused by human factors [13], [14], [15]. Amin et al. [16] proposed an attention-based CNN model to learn the importance of different features of MI data and obtained good results when they applied it to the BCI IV 2a dataset. Roy [17] proposed a Multi-Scale (MS) CNN which can extract the distinguishable features of several non-overlapping canonical frequency bands of EEG signals at multiple scales for MI-BCI classification. Zhao et al. [18] proposed a multi-branch 3D-CNN classification strategy in which the 3D representation is generated by transforming EEG signals into a sequence of 2D arrays that preserves the spatial distribution of the sampling electrodes. Li et al. [19] proposed an end-to-end EEG decoding framework, which employs raw multi-channel EEG as input, to boost decoding accuracy with a channel-projection mixed-scale convolutional neural network aided by amplitude-perturbation data augmentation.

However, there are significant individual differences between subjects, such as the optimal time period and frequency band of ERD/ERS changes [20], [21], [22], so it is not good enough to use conventional recognition methods to perform shallow temporal or spectral feature learning on MI features. Therefore, due to the influence of individual differences among stroke patients, the refinement and weight assignment of deep features is another research interest that could improve the accuracy of MI-EEG decoding deep learning models.

Squeeze-and-Excitation Networks (SENet), as a channel-based attention mechanism, treat each feature channel as a whole and use global information to automatically "learn" the importance of different feature channels during training, thereby suppressing the relatively unimportant features in the training classification process and boosting the most discriminative and information-rich features to improve the accuracy of the model [23]. Sun et al. [24] proposed a CNN with sparse spectrotemporal decomposition (SSD) for MI-EEG classification, which adopted SE to adaptively recalibrate the channel direction. Zhang et al. [25] proposed an orthogonal CNN fused with SE blocks to perform feature recalibration across different EEG channels. Inspired by SE, we merged SE blocks into the CNN model, enabling the model to automatically obtain the weight of each feature channel (EEG features of different time periods and frequency bands), adaptively weight the feature maps generated by the original feature fusion layer, and improve the proportion of useful features in the current task. This approach can solve the problem that the optimal features of EEG signals from different subjects are located in different frequency bands, and train an MI recognition and classification model with high recognition accuracy suitable for a specific user.

The current rehabilitation strategies based on MI-BCI mainly improve the MI recognition accuracy of subjects by improving the feature extraction algorithms while neglecting the impact of MI signal strength on recognition accuracy [26]. Therefore, in order to maximize the activation of the subjects' motor nerves and improve their signal strength, virtual rehabilitation technology combining MI-BCI technology and virtual reality (VR) technology has been applied in the field of stroke rehabilitation [27]. VR technology solves the problem of poor immersion and multiple external environmental interference factors (sound, light) in conventional rehabilitation training strategies based on observing cues on computer screens [28]. VR technology can provide an immersive training environment, improve the interhemispheric activation balance (IHAB), and enhance the cortical connectivity between the primary sensorimotor cortex (SM1), the primary motor cortex (M1), and the supplementary motor area (SMA) on both sides of the subject during motion induction. The VR scene can provide real-time feedback in each training task, achieve more comprehensive MI training, improve rehabilitation efficiency, shorten the rehabilitation period, and enhance the patient's initiative and adaptability in rehabilitation [28]. Jang et al. [29] demonstrated a shift in cortical organization of the affected limb from the ipsilateral hemisphere to the contralateral hemisphere after a VR intervention. Mekbib et al. [30] revealed that unilateral and bilateral limb mirroring exercises in an immersive virtual environment may stimulate mirror neurons (MNs) in the damaged brain areas and may facilitate functional recovery of the affected upper extremities post-stroke. However, the current VR approaches use single-scenario rehabilitation, and the inter-individual adaptability is poor [27]. At the same time, the conventional rehabilitation training strategies lack visual feedback based on motor intention; at the neural mechanism level, visual feedback based on motor intention can activate the mirror neuron system, promote brain plasticity changes and functional reorganization, and contribute to the recovery of motor function [31].
Therefore, we used virtual character arm motions to cue patients to simulate movements in VR, and the patients' mirror neurons were activated. Then, the patient performed MI, and the computer decoded the EEG and converted it into control commands for the virtual character to achieve visual feedback of the motion intention. Patients continuously adjusted the MI process based on the feedback results.

In this paper, we developed a virtual reality upper-limb rehabilitation exoskeleton system (VR-ULE). The VR-ULE uses an SE block-based CNN model and a VR scene to improve the MI recognition accuracy of stroke patients. The VR-ULE includes a wearable exoskeleton hardware subsystem and an MI recognition software subsystem. The wearable exoskeleton subsystem is used to assist the patient's limb movement. The software subsystem for MI recognition is composed of a VR scene cue control module, a CNN+SE module, and an online hybrid control module. The VR scene cue module is used to provide patients with visual cues and feedback with mirror operation intention. The CNN+SE module is used to automatically analyze the importance of EEG features in different time periods and frequency bands. The SE blocks are used to emphasize important features and suppress unimportant features through adaptive weighting. The online hybrid control module is used to receive the patient's MI signals and provide virtual feedback signals in the early and middle stages of the patient's motor neuron rehabilitation. In the later stages of rehabilitation, the online hybrid control module is used to control the upper-limb exoskeleton robotic arm to assist patients in muscle group strength rehabilitation training. We designed an offline training experiment to obtain the MI EEG data of different subjects and trained the CNN+SE classification model. We also designed an online control experiment to evaluate and validate our proposed rehabilitation strategy and training system. The contributions of this study include:

(1) Based on the theory of neural plasticity, we independently designed an upper-limb rehabilitation exoskeleton system for stroke patients at different rehabilitation training stages to perform active movement and passive movement;

(2) The SE module based on the channel attention mechanism in the CNN model was used to capture the frequency band differences of EEG signals between individuals, and the CNN+SE model effectively improved the accuracy of MI recognition and classification;

(3) Based on the mirror neuron theory, we built three VR rehabilitation training scenes (lifting dumbbells, tasting fruits, and feeding pets) with virtual motion guidance to improve the immersive experience and avoid some environmental interference factors in conventional screen cues.

II. SE-BASED CLASSIFICATION STRATEGY OF MI

A. System Introduction

The VR-ULE consists of a wearable exoskeleton hardware subsystem and an MI recognition software subsystem. The system framework is shown in Fig 1.

Fig. 1. The architecture diagram of the upper-limb rehabilitation exoskeleton system (VR-ULE).

The wearable exoskeleton subsystem consists of a self-designed and self-developed functional backpack and an upper-limb exoskeleton robotic arm. The functional backpack is used to store various hardware control modules, and the upper-limb exoskeleton robotic arm is used to assist the patient's limb movement. The MI recognition software subsystem consists of a VR scene cue module, a CNN+SE module, and an online hybrid control module. The VR scene cue module is used to provide patients with a strongly immersive virtual MI cue during the CNN+SE model training stage. The CNN+SE module is used to amplify the strong-response frequency band of the EEG during the patient's MI and to accurately identify and classify the patient's motor intentions. The online hybrid control module is used to receive the MI signal from patients during rehabilitation, provide feedback signals in different periods of the patient's rehabilitation, and control the motion of the virtual character or the upper-limb exoskeleton robotic arm.

1) Wearable Exoskeleton Subsystem: We independently designed and produced a lightweight wearable exoskeleton hardware subsystem, as shown in Fig 2.

Fig. 2. The lightweight wearable exoskeleton hardware subsystem.

The wearable exoskeleton hardware subsystem consists of a functional backpack, an upper-limb exoskeleton robotic arm skeleton, two power levers, a push lever drive board, a servo motor gear, a single-chip microcomputer, a power module pack, a lithium battery, a four-finger bionic hand, a disc damping shaft, and a universal joint.
Among them, the functional backpack is made of 3D-printed materials and is light, small, and comfortable. The inside of the backpack is used to store the push lever drive board, servo motor gear, single-chip microcomputer, power module pack, and lithium battery. Nylon shoulder straps and waist belts can be adjusted according to the patient's body shape. The upper-limb exoskeleton arm can simulate the motion of a healthy arm. The bionic arm skeleton fits the patient's arm, and the power levers provide motion thrust to assist the patient's limb movement, which realizes a series of rehabilitation training motions with five degrees of freedom, including grasping, wrist flexion, elbow flexion, and shoulder abduction (Fig 3).

Fig. 3. Exoskeleton degrees of freedom display.

The four joints of the exoskeleton arm are connected by detachable binding screws, which can be smoothly rotated by simply inserting them into the holes. The four-finger bionic finger module is located at the end of the upper-extremity exoskeleton, and the knuckles use the short-range rope-driven motion mode of the servo motor to achieve a more natural finger-grasping effect. The upper-limb exoskeleton arm is mounted on the backpack through the universal joint behind the right shoulder. The backpack is equipped with a lithium battery (12 V, 2400 mAh), a lever driver board (L298N), and a microcontroller (ESP-WROOM-32, Shenzhen Yusong Chuangda Electronics Co. Ltd, China). The entire wearable exoskeleton hardware subsystem is sewn onto the inner nylon fabric; it is fixed to the patient's chest and waist by multiple elastic nylon straps, and all of its drive levers are controlled by the lever driver board. The single-chip microcomputer has 4 MB of storage space and can communicate with the computer through WiFi to receive the MI signal identified and classified by the CNN model with SE and convert it into feedback control signals that control the motion of the exoskeleton.

2) MI Recognition Software Subsystem: The MI recognition software subsystem consists of a VR scene cue module, a CNN module with SE, and an exoskeleton control module. It is used for the identification, classification, and control-signal output of the patient's MI signal.

a) VR scene cue module: We designed and built three types of VR training scenes for MI: lifting dumbbells, tasting fruits, and feeding pets. These are used to provide patients with virtual MI cues (Fig 4).

Fig. 4. VR training scene diagram (cue).

While ensuring the sense of immersion, the scenes increase the interest in rehabilitation training, better stimulate the enthusiasm of patients for training, improve rehabilitation efficiency, and shorten the rehabilitation cycle. When a patient wears a VR head-mounted display (VIVE-P130, HTC, Inc.) for offline MI training, the VR scene only provides virtual motion cues; during online MI training, the patient sees the virtual motion cues and then imagines the movement. After the MI signal is recognized and classified by the CNN model, the online hybrid control module outputs it back to the virtual scene to control the motion of the virtual characters. If the MI is wrong or fails, the virtual scene displays the recognition results to provide feedback to the patient, prompting the patient to continuously correct or strengthen the MI.

b) CNN+SE module: The EEG is a signal with spatiotemporal characteristics, and its feature extraction process needs to consider temporal and spatial features [32]. Because there are significant individual differences in the frequency characteristics and spatial characteristics of EEG signals among different subjects during MI, incorporating the improved SE blocks into the CNN model makes it possible to train a CNN model suited to a specific user, thereby improving the recognition accuracy of MI. To this end, we independently designed and built a CNN model with an embedded SE block for the correct identification and classification of the patient's MI signals, as shown in Fig 5.

The entire CNN+SE model consisted of 11 layers: the first layer was the input layer; the second and third layers were convolutional layers, which constituted the feature extraction part; the fourth layer was the feature fusion layer; the fifth layer was the SE blocks layer, which constituted the frequency-band channel-weight learning part; the sixth layer was the feature weighting layer, which was used to weight the output feature map of the fourth layer; the seventh layer was the average pooling layer, which was used for down-sampling; the eighth and ninth layers were fully connected layers; the 10th layer was a fully connected layer that constituted the classification part; and the 11th layer was the output layer, which we used to output the classification result.

c) Description of each network layer of CNN+SE: Input layer (L1) - 200 [22 × 750] matrices of input samples $L_{N,T}$, where N is 22, representing the 22 EEG channels, and T is 750, representing the sampling time points in each channel.
Fig. 5. Framework diagram of the MI recognition model. The input EEG signal samples are divided into multiple different frequency bands in multiple channels, and the weight of different feature channels is learned by combining SE blocks to emphasize the useful information frequency band features.

Fig. 6. Overall control-flow diagram of the VR-ULE system.

Convolutional layer (C2) - This convolutional layer filters the input EEG samples $L_{N,T}$ with eight convolution kernels of size [22 × 1], performing spatial convolution. Eight frequency band channels are extracted to output eight feature maps of size [1 × 750]. This process can be expressed as

$$C_m^2(j) = f\left(\sum_{i=1}^{22} L_{i,j} \times k_m^2 + b_m^2(j)\right), \qquad (1)$$

where $C_m^2(j)$ is the output feature map of the C2 layer, the superscript 2 represents the number of layers, the subscript $m$ represents the $m$th feature map, $j$ represents the $j$th neuron in the feature map, $k_m^2$ is the convolution kernel of [22 × 1], and $b_m^2(j)$ is the bias.

Convolutional layer (C3) - This convolutional layer performs temporal convolution on the input EEG through five convolution kernels of size [1 × 10]. Eight frequency band channels are extracted to output 40 [1 × 75] feature maps. The process can be expressed as

$$C_m^3(j) = f\left(\sum_{i=1}^{10} C_m^2((j-1)\times 10 + i) \times k_m^2 + b_m^2(j)\right), \qquad (2)$$

where $k_m^2$ is the convolution kernel of [1 × 10] and $b_m^2(j)$ is the bias.

Feature fusion layer (R4) - This layer splices the 40 feature maps of size [1 × 75] output by the C3 layer for each frequency band channel to form a [40 × 75] feature map, with a total of eight feature maps of size [40 × 75], $R4M_c$, output from the eight frequency bands.

SE blocks layer (SE) - This layer takes as input the feature maps $R4M_c$ with a size of [40 × 75 × 8]. In the SE layer, the squeeze operation is first performed to compress the input feature map tensor in space; that is, global average pooling is performed on the input feature maps in turn, and the results are fully connected. The output is a [1 × 1 × 8] feature map $F_5$, which compresses and integrates the original eight feature maps and shields spatial distribution information. At the same time, by extracting the overall information of the eight feature channels, the underlying network can also obtain the global receptive field. This process can be expressed as

$$F_5 = F_{sq}(R4M_c) = \frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W} R4M_c(i,j). \qquad (3)$$
Then, the excitation operation is performed to learn the nonlinear interactions between the eight feature channels and limit the complexity of the model by using two fully connected layers with activation functions and no bias. These two fully connected layers are dimensioned down and up, respectively, to form a bottleneck structure. The process can be expressed as

$$(F_6, R4M_c) = \sigma(g(F_6, R4M_c)) = \sigma\left(R4M_2\,\delta(F_6 \cdot R4M_1)\right). \qquad (4)$$

That is, $F_5$ is first multiplied by $R4M_1$ in a fully connected layer operation and passed through a rectified linear unit (ReLU) layer to keep the output dimension unchanged; the result is then multiplied by $R4M_2$ in a fully connected layer operation and passed through another ReLU, and so on up to $R4M_8$; finally, a [1 × 1 × 8] feature map $F_6$ is output through the sigmoid function.

Feature weighting layer (R7) - This layer performs channel-wise multiplication using the weights obtained by the excitation operation to perform channel-by-channel adaptive weighting of the original eight feature maps. That is, it multiplies the eight feature maps in the initial SE input by the eight weights in $F_6$. Finally, eight feature maps of size [40 × 75] are obtained to achieve feature weighting. The process can be expressed as

$$\widetilde{R7M}_c = F_{scale}(R4M_c, F_6) = R4M_c \cdot F_6. \qquad (5)$$

Pooling layer (R8) - This layer performs average pooling of the output of the R7 layer in 5 × 5 regions with a stride of 5, and the output is eight feature maps of size [8 × 15].

Fully connected layer (F9) - This layer is used as a fully connected layer. The eight feature maps output by the R8 layer are fully connected to obtain eight feature maps with a size of [120 × 1]. This process can be expressed as

$$F_m^9(j) = f\left(\sum \widetilde{R7M}_c((j-1)\times 10 + 1)\times k_m^9 + b_m^9(j)\right), \qquad (6)$$

where $k_m^9$ is the convolution kernel of [1 × 1] and $b_m^9(j)$ is the bias.

Fully connected layer (F10) - This layer fully connects the eight feature maps output from the F9 layer to form a classification part of size [960 × 1], containing 200 neurons:

$$F_m^{10}(j) = f\left(\sum_{i=1}^{8}\sum_{p=1}^{120} F_i^9(p)\,\omega_i^{10}(p) + b^{10}(j)\right), \qquad (7)$$

where $\omega_i^{10}(p)$ is the connection weight from the neurons in the F9 layer to the neurons in the F10 layer, and $b^{10}(j)$ is the bias.

Output layer (O11) - This layer is the output layer, containing two neurons, representing a binary classification problem. The process can be expressed as

$$O_m^{11}(j) = f\left(\sum_{i=1}^{200} F_m^{10}(i)\,\omega^5(i) + b^5(j)\right), \qquad (8)$$

where $\omega^5(i)$ is the connection weight of the neurons in the F10 layer to the neurons in the O11 layer, and $b^5(j)$ is the bias.
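To make the layer sizes of Eqs. (1)-(8) concrete, the following is a minimal PyTorch sketch of the CNN+SE architecture described above. It is not the authors' released implementation: the ReLU activations, the sharing of the two convolutional stages across the eight frequency bands, the reduction ratio inside the SE bottleneck, and the merging of the F9/F10 stages into a single 200-neuron classification head are assumptions made only to keep the tensor shapes consistent with the sizes quoted in the text.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation over the eight frequency-band feature maps (Eqs. (3)-(5))."""
    def __init__(self, n_bands=8, reduction=2):
        super().__init__()
        self.fc1 = nn.Linear(n_bands, n_bands // reduction, bias=False)  # bottleneck: dimension down
        self.fc2 = nn.Linear(n_bands // reduction, n_bands, bias=False)  # bottleneck: dimension up
        self.relu = nn.ReLU()
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                      # x: (batch, 8 bands, 40, 75)
        w = x.mean(dim=(2, 3))                 # squeeze: global average pooling -> (batch, 8)
        w = self.sigmoid(self.fc2(self.relu(self.fc1(w))))   # excitation -> per-band weights
        return x * w[:, :, None, None]         # scale: channel-wise feature weighting (R7)

class CNNSE(nn.Module):
    """Sketch of the CNN+SE classifier; one shared conv stack is applied to each band."""
    def __init__(self, n_bands=8, n_ch=22, n_t=750):
        super().__init__()
        self.spatial = nn.Conv2d(1, 8, kernel_size=(n_ch, 1))             # C2: [22 x 1] spatial kernels
        self.temporal = nn.Conv2d(8, 40, kernel_size=(1, 10),
                                  stride=(1, 10), groups=8)               # C3: [1 x 10] temporal kernels
        self.se = SEBlock(n_bands)
        self.pool = nn.AvgPool2d(kernel_size=5, stride=5)                 # R8: 5 x 5 average pooling
        self.fc = nn.Linear(n_bands * 8 * 15, 200)                        # F9/F10 classification part
        self.out = nn.Linear(200, 2)                                      # O11: two-class output

    def forward(self, x):                      # x: (batch, 8 bands, 22 channels, 750 samples)
        bsz, nb, nch, nt = x.shape
        x = x.reshape(bsz * nb, 1, nch, nt)    # run the shared convolutions once per band
        x = torch.relu(self.spatial(x))        # (batch*8, 8, 1, 750)
        x = torch.relu(self.temporal(x))       # (batch*8, 40, 1, 75)
        x = x.reshape(bsz, nb, 40, 75)         # R4: one [40 x 75] fused map per frequency band
        x = self.se(x)                         # SE weighting of the eight band maps
        x = self.pool(x)                       # (batch, 8, 8, 15)
        x = torch.flatten(x, 1)                # (batch, 960)
        x = torch.relu(self.fc(x))
        return self.out(x)

# Example: logits = CNNSE()(torch.randn(4, 8, 22, 750))  # -> shape (4, 2)
```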
Online hybrid control module: The online hybrid control module converts the classification signal identified by the MI recognition software subsystem into a VR-scene character motion control signal or an exoskeleton motion control signal. First, the VR scene control module in the MI recognition software subsystem randomly generates left- and right-hand MI motion cues, and the stroke patient then attempts MI within a certain time. The trained CNN+SE model acquires the subject's MI EEG data and performs identification and classification. The classification results are interpreted by the online hybrid control module according to the training task and converted into a continuous control signal output. The control-flow diagram of the VR-ULE system is shown in Fig 6.
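As a sketch of this mapping step, the function below converts a binary classifier prediction into a command for either the virtual character (early and middle rehabilitation stages) or the exoskeleton (later stage). The command strings, the stage labels, and the hand-to-action assignment are hypothetical placeholders; in the real system the resulting command would be forwarded to the VR engine or, over WiFi, to the exoskeleton microcontroller.

```python
def dispatch_command(prediction: int, stage: str) -> str:
    """Map a binary MI prediction (0 = left hand, 1 = right hand) to a command string."""
    side = "left" if prediction == 0 else "right"
    if stage == "virtual_feedback":            # early and middle rehabilitation stages
        return f"vr_move_{side}_arm"
    # later stage: one MI class triggers the exoskeleton task action, the other holds
    return "exo_perform_task" if side == "right" else "exo_hold"

# Example: dispatch_command(1, "virtual_feedback") -> "vr_move_right_arm"
```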
III. EXPERIMENT

For the MI classification strategy based on the combination of VR and SE blocks in our proposed VR-ULE, we designed offline training experiments and online control experiments to test the effectiveness of the strategy. In the offline training experiment, we trained two types of CNN+SE models, cued by the conventional experimental scene and by the VR scene, for each subject in order to perform comparative verification. In the online control experiment, we first selected the highest-accuracy classification model trained in the offline experiment for each subject. Then, analogous to the rehabilitation stage of brain motor neurons in the pre-rehabilitation stage of stroke patients, the subjects performed MI based on the VR scene to control the virtual characters to complete the corresponding virtual tasks. At the same time, analogous to the upper-limb muscle group strength training stage of stroke patients in the later stage of rehabilitation, the subjects independently performed MI according to the task requirements to achieve different control of the exoskeleton system and complete the corresponding tasks. Finally, the completion results of the two types of analogy experiments were evaluated. We also chose three methods, conventional CNN [11], MRA+LDA [33], and CSP+SVM [34], to train MI recognition models on the same training set, and these models were then tested using the same test set.

A. Subjects and Dataset Preparation

For the experiment, we recruited 20 healthy subjects (age: 22 ± 1.21 years), all right-handed (as assessed by the Edinburgh Handedness Questionnaire) [35]. At the same time, we also recruited one mild stroke patient and one moderate stroke patient to participate in the experiment. All the subjects participated in an EEG experiment for the first time and were not told any experimental hypotheses. Each subject signed an informed consent form before the experiment. The experimental procedure was reviewed and approved by the Human Ethics Review Committee of Zhejiang University of Technology.

EEG signals were acquired with the ActiveTwo 64-channel EEG signal acquisition system (BioSemi, Netherlands). Twenty-three channels of EEG data (Fz, FC3, FC1, FCz, FC2, FC4, FC6, C5, C3, C1, Cz, C2, C4, C6, CP3, CP1, CPz, CP2, CP4, P1, Pz, P2, POZ) were collected.
The reference electrode was placed at the mastoid of the left ear; the ground electrode was replaced by two independent electrodes, common-mode sense (CMS) and driven right leg (DRL). Before placing the electrodes, a conductive gel was used to reduce the impedance between the electrodes and the scalp. The sampling frequency was set to 250 Hz, the cutoff frequency of the high-pass filter was 0.05 Hz, the cutoff frequency of the low-pass filter was 200 Hz, and the power-frequency notch was 50 Hz. After placing all the electrodes, the subjects sat in front of the computer screen and put their hands on the table naturally. The subjects were asked to avoid blinking and unnecessary head or body motions as much as possible. The collected data were divided into a training set, validation set, and test set in a 3:1:1 ratio.

Fig. 7. The offline training experimental scenario is shown in A, where (a) is training without VR and (b) is training with VR; the timing diagrams of one trial for the two types of experiments are shown in B, where (a) is the timing diagram of one trial without VR and (b) is the timing diagram of one trial with VR.

B. Offline Training Experiment

Each subject needed to complete 200 trials in each of the non-VR tests and VR tests throughout the offline training experiment (Fig 7), including 100 imagined left-hand movements and 100 imagined right-hand movements. The sequence diagram of the non-VR test is shown in Fig 7B-a. Each trial lasted for 8 s. The screen was blank for the first 2 s, and then a "+" character appeared in the center of the screen to remind the subjects that the trial was about to start. From 3 s to 6 s, the "+" character on the screen changed to a randomly generated leftward or rightward arrow, and the subjects imagined their left-hand or right-hand movement according to the arrow direction. There was a random interval of 2 s to 5 s between trials, and there was a 3-min rest between every 20-trial set to avoid fatigue.

The sequence diagram of the VR test is shown in Fig 7B-b. Each trial lasted for 8 s. The screen was blank for the first 2 s, and then the word "ready!" appeared in the center of the VR scene to remind the subjects that the trial was about to start. From 3 s to 6 s, a left- or right-hand movement randomly appeared in the VR scene, and the subjects imagined their left-hand or right-hand movement according to the body movements in the VR scene. There was a random interval of 2 s to 5 s between trials, and there was a 3-min rest between every 20-trial set to prevent fatigue. Each subject was tested with one set of data, including 200 non-VR trials and 200 VR trials, with seven subjects totaling seven sets of data and 14 parts.

The data of each subject's non-VR test and VR test were cropped, and after removing the data corresponding to the subjects' rest periods, the EEG data in the frequency band of 7 Hz to 31 Hz were obtained by filtering; the extracted EEG data were further separated into the frequency bands 7–10, 10–13, 13–16, 16–19, 19–22, 22–25, 25–28, and 28–31 Hz. At the same time, the window length of the data segment was defined as 7.5 s. The CNN+SE preprocessing code intercepted the 3 s after the MI cue as the model input, so each input sample was composed of a 22-channel × 750-sampling-point (3 s time period × 250 Hz sampling rate) matrix, and each subject finally had 200 matrices of size [22 × 750] for each of the non-VR test and VR test. Finally, the 200 matrices of size [22 × 750] for each subject's non-VR test and VR test were randomly divided into five copies: three were used to train each subject's MI model, one was used as a validation set during model training, and one was used as a test set for evaluating the trained model. The EEG of the C3 and C4 channels in all trials of each subject was subjected to the superposition average calculation (ERD/ERS), which can be expressed as

$$\frac{ERD}{ERS}\% = \frac{EEG_A - EEG_R}{EEG_R} \times 100\%. \qquad (9)$$

The sequence diagram and brain topography of ERD/ERS were observed, and the time period in which the ERD/ERS pattern appeared and ended in each trial was recorded.
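The following is a minimal sketch of the sub-band separation and of the ERD/ERS computation of Eq. (9). The filter type and order are assumptions (the paper does not state which filter was used); the band edges and epoch size follow the preprocessing described above.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 250                                          # sampling rate (Hz)
BANDS = [(7, 10), (10, 13), (13, 16), (16, 19),
         (19, 22), (22, 25), (25, 28), (28, 31)]  # the eight 3-Hz sub-bands

def band_separate(epoch):
    """epoch: (22, 750) array, the 3 s after the MI cue. Returns an (8, 22, 750) array."""
    out = []
    for lo, hi in BANDS:
        sos = butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")  # assumed 4th-order Butterworth
        out.append(sosfiltfilt(sos, epoch, axis=-1))                      # zero-phase band-pass filtering
    return np.stack(out)

def erd_ers_percent(eeg_a, eeg_r):
    """Eq. (9): relative change of band power in the activity period (EEG_A)
    with respect to the rest/reference period (EEG_R), in percent."""
    return (eeg_a - eeg_r) / eeg_r * 100.0
```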
We also compared this model with three other types of models, CNN, MRA+LDA, and CSP+SVM, which we trained on the same training set and then tested with the same test set.

C. Online Control Experiment

The online control experiment had two parts: the VR scene online cue control test and the exoskeleton online control test. The MI model with the highest accuracy trained by each subject in the offline training experiment was used as the control signal input of the VR scene and the exoskeleton in the online test. The experimental scenario is shown in Fig 8.

In the online control test of the VR scene, the subjects did not wear the exoskeleton equipment, only the VR head-mounted display and the EEG cap, and sat in front of the screen. The subjects rested their hands on the table. The screen was used to display the head-mounted display view in real time.
Fig. 8. The online control experimental scenario, where (a) is the exoskeleton control test and (b) is the VR control test.

In the VR scene, the MI instruction "Please move your left hand" or "Please move your right hand" appeared once every 10 s. At the same time, the virtual characters in the VR scene made the same actions to guide the subjects, and the subjects performed the corresponding MI according to the instructions. The PC saved the EEG data collected from 2 s before to 5.5 s after the MI instruction. The data processing code preprocessed the saved data and input it into the MI model trained in the offline training experiment for classification. The classification result was converted into a control signal and input to the VR scene control module to control the virtual character's arm in the VR scene to move and complete the corresponding task. Each subject performed 20 MI tasks with the left hand and 20 with the right hand for each type of scene, so a total of 120 MI tasks were performed.

In the exoskeleton control test, the subjects wore the exoskeleton on the right hand and sat in front of the table with the EEG cap on. The left- and right-hand MI EEG signals were mapped to control signals that made the exoskeleton perform the task action or not, respectively. The subjects made an MI trial every 8 s according to the task requirements, and there was an 8-s rest between two trials. All EEG data were saved on the PC during the test. Similarly, the data processing code preprocessed the saved data and input them into the MI model trained in the offline training experiment. The classification results were converted into control signals and input to the exoskeleton control module to control the motion of the exoskeleton. A total of 30 MI trials were designed as a full test.

IV. RESULTS

A. Public Dataset Results

To verify the effectiveness of the SE blocks in the SE-VR-based MI classification strategy, we applied it to public dataset 1 of BCI Competition IV for model training and verification, and compared it with CNN, MRA+LDA, and CSP+SVM. For this dataset, seven subjects each selected two types of movements from among the left hand, right hand, and foot and performed 100 MI trials. EEG data from 64 channels were recorded for each subject at a sampling rate of 1000 Hz. More details on the experimental paradigm corresponding to this dataset can be obtained from the following website: https://www.bbci.de/competition/iv/desc_1.html.

The classification results of the four MI classification models for the different subjects in the public dataset are shown in Table I.

TABLE I. Classification accuracy of the four classification models for each subject in the public dataset.

The average classification accuracy of the four MI classification models was 87.53% ± 1.07% for CNN+SE, 83.32% ± 1.04% for CNN, 84.55% ± 1.62% for MRA+LDA, and 83.02% ± 2.07% for CSP+SVM.

B. Results of Offline Training Experiment

ERD/ERS analysis: After the EEG data of the C3 and C4 channels of each subject in all tests were separated, the superimposed average calculation of the ERD/ERS phenomenon was performed [36]. The ERD/ERS sequence diagram from 0 to 5 s and the brain topography from 0 to 6 s for a single trial are shown in Fig 9.

Fig. 9. ERD/ERS time course from 0 s to 5 s and EEG topography from 0 s to 6 s for left- and right-hand MI of all subjects in all trials.

As seen from the figure, the ERD/ERS pattern appeared in each trial in a time period ranging from 3.5 s to 4.5 s.

Analysis of the training results of the non-VR tests and VR tests: The two types of EEG data of each subject were used to train the four classification models, models with good convergence were obtained, and the training loss curve of each subject was recorded. The best-converging model came from Subject 06. The training loss curves of the top three models are shown in Fig 10.

Fig. 10. The model training loss curves obtained by training the top three classification models on the VR test data of Subject 06.

Analysis of the classification results of the non-VR tests and VR tests: After the four classification models were trained on each subject's non-VR test and VR test EEG data, the model with the best convergence was selected, and the classification test was performed. The resulting classification accuracy is shown in Table II. From the data in the table, it can be seen that, compared with a single screen cue, providing MI cues in the VR scene obtained higher classification accuracy with the same classification model, and CNN+SE achieved higher classification accuracy than CNN, MRA+LDA, and CSP+SVM (non-VR: 78.13% ± 2.60% and VR: 86.72% ± 2.99%).
TABLE II. The test classification accuracy of each subject's non-VR test and VR test data in the four classification models.

In addition, for the CNN+SE model with VR, the model classification accuracy of the two patients was lower than the average model classification accuracy of the 20 healthy subjects (86.72% ± 2.99%), and the model recognition accuracy of Patient 1 (86.44%) was higher than that of Patient 2 (81.94%).

TABLE III. The test classification precision, recall, and F score of the VR test data for each subject in the four classification models.

To better evaluate the classification accuracy of the four classification models, three average indicators, precision, recall, and F score, were introduced in Table III. Based on the data in Table II and Table III, we conducted an ANOVA on the two types of MI cues (VR or non-VR), the four types of classification models (CNN+SE, CNN, MRA+LDA, and CSP+SVM), and the two types of MI classes (left or right hand) to evaluate their interactions and their impact on classification accuracy. The results indicate that, at a confidence level of 95%, there is no interaction between the MI class and the classification model, and there is no interaction between the MI cue and the classification model (all p > 0.01). The classification model has a significant effect on classification accuracy (F = 63.984, p < 0.01), and the MI cue has a significant effect on classification accuracy (F = 152.328, p < 0.01), but the MI class has no significant effect on classification accuracy (F = 0.02, p > 0.01).
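The paper does not state which statistical software was used; as a sketch under that caveat, a three-factor ANOVA of this form can be run with statsmodels. The file name and column names below are hypothetical placeholders for the per-subject accuracy table.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical table: one row per subject x cue x model x MI class with the test accuracy.
df = pd.read_csv("offline_accuracy.csv")   # assumed columns: accuracy, cue, model, mi_class

fit = ols("accuracy ~ C(cue) * C(model) * C(mi_class)", data=df).fit()
print(anova_lm(fit, typ=2))                # F statistics and p values per factor and interaction
```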
C. Results of Online Control Experiment

In the online experiment, the task completion rate of each subject is shown in Table IV. The task success rate was defined as the percentage of times a task was completed correctly. It can be seen from the online experimental results that the success rate of virtual scene task 1 (lifting dumbbells) was higher than those of virtual scene tasks 2 (tasting fruits) and 3 (feeding pets), and the success rate of the exoskeleton control task was higher than those of the virtual scene tasks (88.48% ± 5.84%). There was no significant difference between the success rates of the four types of tasks of Patient 1 (87.50%, 85.00%, 80.00%, 86.67%) and the average success rates of the four types of tasks of the 20 healthy subjects (87.00% ± 4.78%, 85.25% ± 4.74%, 80.13% ± 4.71%, 88.83% ± 5.99%). The success rates of the four types of tasks of Patient 2 (87.00% ± 4.78%, 85.25% ± 4.74%, 80.13% ± 4.71%, 88.83% ± 5.99%) were lower than the average success rates of the four types of tasks of the 20 healthy subjects.

TABLE IV. The success rate of each subject's four types of online experimental tasks.

V. DISCUSSION

Our designed offline training experiments (MI recognition and classification model training) and online control experiments (VR-scene character and exoskeleton arm movement control tasks) fully verify the effectiveness of our proposed VR-ULE system.

The offline training experiment results show that adding SE blocks to the CNN can improve the accuracy of MI EEG classification on public datasets, and we obtained a classification accuracy of 77.94% ± 2.65% on our own measured experimental dataset.
The reason is that the CNN learns the temporal and spatial features of the subject's MI, and the SE blocks perform feature weighting operations on the subjects' EEG data in different frequency bands to learn the strong features of each subject's MI EEG frequency bands. The advantage of this method is that, while avoiding individual differences, the final classification result of the model is also output based on the weights of all frequency bands, so that a transiently missing signal in one band of a subject during the online test does not affect the classification results.
Relevant studies have obtained results consistent with ours. For example, Sun et al. [24] used a deep learning framework called SSDSE-CNN integrating the SE blocks for MI-EEG classification, and the highest classification accuracy obtained was 79.3% ± 6.9%. Li et al. [37] proposed a novel temporal-spectral-based SE feature fusion network for MI-EEG decoding, and the highest classification accuracy was 84.49% on the public dataset.

Our offline experimental results also show that when the subjects attempted MI, compared with the monotonous screen cues, the use of VR cues was more helpful for training a network model, with higher classification accuracy. All four classification models were verified, among which CNN+SE obtained a classification accuracy of 86.49% ± 3.02%. The reason is that the VR scene brings the subjects a more immersive experience, avoids the interference of many external factors, and makes it easier for the subjects to concentrate, and the arm movements of the characters in the virtual scene guide the subjects to quickly generate corresponding responses, improving their IHAB while activating connections between more areas of the cerebral cortex, cueing patients to produce more pronounced MI EEG features. In related research on stroke rehabilitation, Sip et al. [38] applied the Virtual Mirror Hand 1.0 procedure to the treatment of hand functional recovery after stroke and compared it with classic mirror therapy, finding that applying VR to the rehabilitation of stroke patients was feasible. Nath et al. [39] developed a VR task library for upper-limb rehabilitation of poststroke patients and concluded that VR therapy can improve the clinical symptoms of chronic stroke patients.

Our online control experiments showed that the average success rate of the exoskeleton control task was 88.48% ± 5.84%, which was higher than that of the virtual character arm movement tasks in the VR scenes. The reason is that the MI command in the exoskeleton control task uses real task actions to improve patients' perception and motion mechanisms [40]. Patients can perform more concrete MI based on the obtained perception experience, which can improve the model recognition accuracy.

In the offline training experiment, the model classification accuracy of the two patients was lower than the average model classification accuracy of the 20 healthy subjects (86.72% ± 2.99%), and the model recognition accuracy of Patient 1 (86.44%) was higher than that of Patient 2 (81.94%). The reason is that different degrees of stroke cause varying degrees of damage to the patient's motor neurons, thereby affecting the patient's ERD/ERS patterns during MI and reducing the classification performance [41], [42].

One limitation of this study is that the subjects in our experiment lack diversity. In future studies, we will conduct more experiments on stroke patients of different ages, genders, and rehabilitation stages to verify the effectiveness of our proposed rehabilitation strategy.

VI. CONCLUSION

In this paper, based on the MI-BCI paradigm and VR technology, we designed and developed a VR-ULE that can be used for the rehabilitation of stroke patients with hemiplegia. The system obtains the patient's motion intention through the MI EEG identification strategy based on a CNN and SE blocks, and it controls the execution of VR-ULE rehabilitation training actions. The SE module makes up for the shortcoming that different subjects have differences in MI frequency-domain characteristics. The MI indication based on the VR scene strengthens the MI EEG of the subjects and makes up for the shortcomings of current MI-BCI rehabilitation strategies, such as a single rehabilitation scene, poor individual adaptability, and many external environmental interfering factors. Our results show that, compared with the conventional classification strategies, the proposed MI EEG recognition method (CNN+SE) can improve the MI classification accuracy. The VR-ULE system can more efficiently help stroke patients complete upper-limb rehabilitation training tasks through a more reasonable MI identification strategy and an immersive experience of VR scenes, all of which improve the patients' autonomous rehabilitation.

REFERENCES

[1] F. Y. Wu and H. H. Asada, "Implicit and intuitive grasp posture control for wearable robotic fingers: A data-driven method using partial least squares," IEEE Trans. Robot., vol. 32, no. 1, pp. 176–186, Feb. 2016, doi: 10.1109/TRO.2015.2506731.
[2] F. Grimm, A. Walter, M. Spüler, G. Naros, W. Rosenstiel, and A. Gharabaghi, "Hybrid neuroprosthesis for the upper limb: Combining brain-controlled neuromuscular stimulation with a multi-joint arm exoskeleton," Frontiers Neurosci., vol. 10, p. 367, Aug. 2016, doi: 10.3389/fnins.2016.00367.
[3] A. Remsik et al., "A review of the progression and future implications of brain-computer interface therapies for restoration of distal upper extremity motor function after stroke," Exp. Rev. Med. Devices, vol. 13, no. 5, pp. 445–454, May 2016, doi: 10.1080/17434440.2016.1174572.
[4] M. Barsotti et al., "A full upper limb robotic exoskeleton for reaching and grasping rehabilitation triggered by MI-BCI," in Proc. IEEE Int. Conf. Rehabil. Robot. (ICORR), Singapore, Aug. 2015, pp. 49–54, doi: 10.1109/ICORR.2015.7281174.
[5] S. R. Soekadar et al., "Hybrid EEG/EOG-based brain/neural hand exoskeleton restores fully independent daily living activities after quadriplegia," Sci. Robot., vol. 1, Dec. 2016, Art. no. eaag3296, doi: 10.1126/scirobotics.aag3296.
[6] P. D. E. Baniqued et al., "Brain-computer interface robotics for hand rehabilitation after stroke: A systematic review," J. NeuroEng. Rehabil., vol. 18, no. 1, p. 15, Jan. 2021, doi: 10.1186/s12984-021-00820-8.
[7] G. Pfurtscheller and C. Neuper, "Motor imagery activates primary sensorimotor area in humans," Neurosci. Lett., vol. 239, pp. 65–68, Dec. 1997, doi: 10.1016/S0304-3940(97)00889-6.
[8] Z. Tang, S. Sun, S. Zhang, Y. Chen, C. Li, and S. Chen, "A brain-machine interface based on ERD/ERS for an upper-limb exoskeleton control," Sensors, vol. 16, no. 12, p. 2050, Dec. 2016, doi: 10.3390/s16122050.
[9] Y. Liu et al., "Motor-imagery-based teleoperation of a dual-arm robot performing manipulation tasks," IEEE Trans. Cognit. Develop. Syst., vol. 11, no. 3, pp. 414–424, Sep. 2019, doi: 10.1109/TCDS.2018.2875052.
[10] Z. Li et al., "Hybrid brain/muscle signals powered wearable walking exoskeleton enhancing motor ability in climbing stairs activity," IEEE Trans. Med. Robot. Bionics, vol. 1, no. 4, pp. 218–227, Nov. 2019, doi: 10.1109/TMRB.2019.2949865.
[11] Z. Tang, C. Li, and S. Sun, "Single-trial EEG classification of motor imagery using deep convolutional neural networks," Optik, vol. 130, pp. 11–18, Feb. 2017, doi: 10.1016/j.ijleo.2016.10.117.
[12] X. Xiao and Y. Fang, "Motor imagery EEG signal recognition using deep convolution neural network," Frontiers Neurosci., vol. 15, Mar. 2021, Art. no. 655599, doi: 10.3389/fnins.2021.655599.
[13] V. J. Lawhern, A. J. Solon, N. R. Waytowich, S. M. Gordon, C. P. Hung, and B. J. Lance, "EEGNet: A compact convolutional neural network for EEG-based brain-computer interfaces," J. Neural Eng., vol. 15, no. 5, Oct. 2018, Art. no. 056013, doi: 10.1088/1741-2552/aace8c.
[14] G. A. Altuwaijri and G. Muhammad, "A multibranch of convolutional neural network models for electroencephalogram-based motor imagery classification," Biosensors, vol. 12, no. 1, p. 22, Jan. 2022, doi: 10.3390/bios12010022.
[15] S. U. Amin, H. Altaheri, G. Muhammad, W. Abdul, and M. Alsulaiman, "Attention-inception and long short-term memory-based electroencephalography classification for motor imagery tasks in rehabilitation," IEEE Trans. Ind. Informat., vol. 18, no. 8, pp. 5412–5421, Aug. 2022, doi: 10.1109/TII.2021.3132340.
[16] S. U. Amin, H. Altaheri, G. Muhammad, M. Alsulaiman, and W. Abdul, "Attention based inception model for robust EEG motor imagery classification," in Proc. IEEE Int. Instrum. Meas. Technol. Conf. (IMTC), Glasgow, U.K., May 2021, pp. 1–6, doi: 10.1109/I2MTC50364.2021.9460090.
[17] A. M. Roy, "An efficient multi-scale CNN model with intrinsic feature integration for motor imagery EEG subject classification in brain-machine interfaces," Biomed. Signal Process. Control, vol. 74, Apr. 2022, Art. no. 103496, doi: 10.1016/j.bspc.2022.103496.
[18] X. Zhao, H. Zhang, G. Zhu, F. You, S. Kuang, and L. Sun, "A multi-branch 3D convolutional neural network for EEG-based motor imagery classification," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 27, no. 10, pp. 2164–2177, Oct. 2019, doi: 10.1109/TNSRE.2019.2938295.
[19] Y. Li, X.-R. Zhang, B. Zhang, M.-Y. Lei, W.-G. Cui, and Y.-Z. Guo, "A channel-projection mixed-scale convolutional neural network for motor imagery EEG decoding," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 27, no. 6, pp. 1170–1180, Jun. 2019, doi: 10.1109/TNSRE.2019.2915621.
[20] G. Dai, J. Zhou, J. Huang, and N. Wang, "HS-CNN: A CNN with hybrid convolution scale for EEG motor imagery classification," J. Neural Eng., vol. 17, no. 1, Jan. 2020, Art. no. 016025, doi: 10.1088/1741-2552/ab405f.
[21] O. P. Idowu, A. E. Ilesanmi, X. Li, O. W. Samuel, P. Fang, and G. Li, "An integrated deep learning model for motor intention recognition of multi-class EEG signals in upper limb amputees," Comput. Methods Programs Biomed., vol. 206, Jul. 2021, Art. no. 106121, doi: 10.1016/j.cmpb.2021.106121.
[22] Y. Zhang, C. S. Nam, G. Zhou, J. Jin, X. Wang, and A. Cichocki, "Temporally constrained sparse group spatial patterns for motor imagery BCI," IEEE Trans. Cybern., vol. 49, no. 9, pp. 3322–3332, Sep. 2019, doi: 10.1109/TCYB.2018.2841847.
[23] J. Hu, L. Shen, S. Albanie, G. Sun, and E. Wu, "Squeeze-and-excitation networks," 2019, arXiv:1709.01507.
[24] B. Sun, X. Zhao, H. Zhang, R. Bai, and T. Li, "EEG motor imagery classification with sparse spectrotemporal decomposition and deep learning," IEEE Trans. Autom. Sci. Eng., vol. 18, no. 2, pp. 541–551, Apr. 2021, doi: 10.1109/TASE.2020.3021456.
[25] J. Zhang, R. Yao, W. Ge, and J. Gao, "Orthogonal convolutional neural networks for automatic sleep stage classification based on single-channel EEG," Comput. Methods Programs Biomed., vol. 183, Jan. 2020, Art. no. 105089, doi: 10.1016/j.cmpb.2019.105089.
[26] T. P. Luu, Y. He, S. Brown, S. Nakagome, and J. L. Contreras-Vidal, "Gait adaptation to visual kinematic perturbations using a real-time closed-loop brain-computer interface to a virtual reality avatar," J. Neural Eng., vol. 13, no. 3, Jun. 2016, Art. no. 036006, doi: 10.1088/1741-2560/13/3/036006.
[27] L. Ferrero, M. Ortiz, V. Quiles, E. Iáñez, and J. M. Azorín, "Improving motor imagery of gait on a brain-computer interface by means of virtual reality: A case of study," IEEE Access, vol. 9, pp. 49121–49130, 2021, doi: 10.1109/ACCESS.2021.3068929.
[28] P. Xie et al., "Research on rehabilitation training strategies using multimodal virtual scene stimulation," Frontiers Aging Neurosci., vol. 14, Jun. 2022, Art. no. 892178, doi: 10.3389/fnagi.2022.892178.
[29] S. H. Jang et al., "Cortical reorganization and associated functional motor recovery after virtual reality in patients with chronic stroke: An experimenter-blind preliminary study," Arch. Phys. Med. Rehabil., vol. 86, no. 11, pp. 2218–2223, Nov. 2005, doi: 10.1016/j.apmr.2005.04.015.
[30] D. B. Mekbib et al., "Proactive motor functional recovery following immersive virtual reality-based limb mirroring therapy in patients with subacute stroke," Neurotherapeutics, vol. 17, no. 4, pp. 1919–1930, Oct. 2020, doi: 10.1007/s13311-020-00882-x.
[31] J. Huang, M. Lin, J. Fu, Y. Sun, and Q. Fang, "An immersive motor imagery training system for post-stroke rehabilitation combining VR and EMG-based real-time feedback," in Proc. 43rd Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (EMBC), Nov. 2021, pp. 7590–7593, doi: 10.1109/EMBC46164.2021.9629767.
[32] F. Lotte, M. Congedo, A. Lécuyer, F. Lamarche, and B. Arnaldi, "A review of classification algorithms for EEG-based brain–computer interfaces," J. Neural Eng., vol. 4, no. 2, pp. 1–13, Jun. 2007, doi: 10.1088/1741-2560/4/2/R01.
[33] P. Martin-Smith, J. Ortega, J. Asensio-Cubero, J. Q. Gan, and A. Ortiz, "A label-aided filter method for multi-objective feature selection in EEG classification for BCI," in Advances in Computational Intelligence, vol. 9094, I. Rojas, G. Joya, and A. Catala, Eds. Cham, Switzerland: Springer, 2015, pp. 133–144, doi: 10.1007/978-3-319-19258-1_12.
[34] H. Sun, Y. Xiang, Y. Sun, H. Zhu, and J. Zeng, "On-line EEG classification for brain-computer interface based on CSP and SVM," in Proc. 3rd Int. Congr. Image Signal Process., Yantai, China, Oct. 2010, pp. 4105–4108, doi: 10.1109/CISP.2010.5648081.
[35] R. C. Oldfield, "The assessment and analysis of handedness: The Edinburgh inventory," Neuropsychologia, vol. 9, no. 1, pp. 97–113, 1971, doi: 10.1016/0028-3932(71)90067-4.
[36] G. Pfurtscheller and F. H. L. da Silva, "Event-related EEG/MEG synchronization and desynchronization: Basic principles," Clin. Neurophysiol., vol. 110, no. 11, pp. 1842–1857, 1999, doi: 10.1016/S1388-2457(99)00141-8.
[37] Y. Li, L. Guo, Y. Liu, J. Liu, and F. Meng, "A temporal-spectral-based squeeze-and-excitation feature fusion network for motor imagery EEG decoding," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 29, pp. 1534–1545, 2021, doi: 10.1109/TNSRE.2021.3099908.
[38] P. Sip, M. Kozlowska, D. Czysz, P. Daroszewski, and P. Lisinski, "Perspectives of motor functional upper extremity recovery with the use of immersive virtual reality in stroke patients," Sensors, vol. 23, no. 2, p. 712, Jan. 2023, doi: 10.3390/s23020712.
[39] D. Nath et al., "Clinical effectiveness of non-immersive virtual reality tasks for post-stroke neuro-rehabilitation of distal upper-extremities: A case report," J. Clin. Med., vol. 12, no. 1, p. 92, Dec. 2022, doi: 10.3390/jcm12010092.
[40] J. S. Tutak, "Virtual reality and exercises for paretic upper limb of stroke survivors," Tehnicki Vjesnik, vol. 24, no. 2, pp. 451–458, Sep. 2017, doi: 10.17559/TV-20161011143721.
[41] K. M. Oostra, A. Van Bladel, A. C. L. Vanhoonacker, and G. Vingerhoets, "Damage to fronto-parietal networks impairs motor imagery ability after stroke: A voxel-based lesion symptom mapping study," Frontiers Behav. Neurosci., vol. 10, p. 5, Feb. 2016, doi: 10.3389/fnbeh.2016.00005.
[42] F. Pichiorri et al., "Brain-computer interface boosts motor imagery practice during stroke recovery," Ann. Neurol., vol. 77, no. 5, pp. 851–865, May 2015, doi: 10.1002/ana.24390.
