Decoding Three-Dimensional Trajectory of Executed and Imagined Arm Movements From Electroencephalogram Signals
Abstract—Decoding motor commands from noninvasively measured neural signals has become important in brain–computer interface (BCI) research. Applications of BCI include neurorehabilitation after stroke and control of limb prostheses. Until now, most studies have tested simple movement trajectories in two dimensions by using constant velocity profiles. However, most real-world scenarios require much more complex movement trajectories and velocity profiles. In this study, we decoded motor commands in three dimensions from electroencephalography (EEG) recordings while the subjects either executed or observed/imagined complex upper limb movement trajectories. We compared the accuracy of simple linear methods and nonlinear methods. In line with previous studies, our results showed that linear decoders are an efficient and robust method for decoding motor commands. However, while we took the same precautions as previous studies to suppress eye-movement related EEG contamination, we found that subtracting residual electro-oculogram activity from the EEG data resulted in substantially lower motor decoding accuracy for linear decoders. This effect severely limits the transfer of previous results to practical applications in which neural activation is targeted. We observed that nonlinear methods showed no such drop in decoding performance. Our results demonstrate that eye-movement related contamination of brain signals constitutes a severe problem for decoding motor signals from EEG data. These results are important for developing accurate decoders of motor signals from neural signals for use with BCI-based neural prostheses and neurorehabilitation in real-world scenarios.

Index Terms—Arm movement trajectory, brain–computer interfaces (BCI), electroencephalography (EEG), kernel ridge regression, upper limb rehabilitation.

I. INTRODUCTION

or for exerting control over limb prostheses. The best accuracy of decoding motor commands can be achieved by using invasive recordings of neural activity. Invasive BCIs have been shown to enable successful decoding of hand movement speed and direction [1]–[3] and to specifically allow the use of a prosthetic arm during fine motor control tasks such as self-feeding in experiments on nonhuman primates [1]. Additionally, invasive BCIs have been utilized to control robotic arms for use in human patients with tetraplegia [2], [3]. Application of invasive BCIs in humans, however, has severe disadvantages. Most importantly, invasive recordings require surgeries on the open brain, which can expose patients to inflammatory risks in the central nervous system. These risks can be avoided by obtaining noninvasive neural measurements.

Noninvasive BCIs are often based on electroencephalogram (EEG) recordings. To extract motor commands from EEG signals, several paradigms have been established. One popular paradigm involves instructing subjects to imagine right and left hand movements. Differential activation of brain regions associated with the motor control of these respective body parts can then be decoded from EEG signals [4], [5]. Motor imagery (MI) can be extended to a vast array of applications. For example, Doud et al. have successfully controlled a virtual helicopter by using sensorimotor rhythms (SMRs) induced by motor imagination [6]. Additionally, Müller-Putz et al. have shown that temporal coding of individual MI patterns can be used to control two independent degrees of an artificial robotic arm [7]. However, MI requires a large amount of calibration data and does not work for all subjects. Moreover, using MI with body parts such as the foot or tongue to control a robotic arm or a prosthetic device is somewhat unnatural behavior. Another method to extract
1534-4320 © 2014 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission.
See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
Authorized licensed use limited to: Institut Teknologi Sepuluh Nopember. Downloaded on February 27,2024 at 07:52:58 UTC from IEEE Xplore. Restrictions apply.
868 IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, VOL. 23, NO. 5, SEPTEMBER 2015
areas, while motor imagery-based BCIs often use motor imagery of arbitrary body parts, and not the ones that correspond to a prosthetic effector. For both applications, to achieve intuitive and accurate control of prosthetic devices as well as to attain neurorehabilitation, it is desirable to directly extract movement kinematics from the associated brain regions.

Multiple studies have shown that the low-frequency component of EEG signals in motor regions carries information about movement onset [9], direction, and velocity [10]–[14]. This has allowed several investigators to decode kinematics of the ankle, knee, and hip joints during human treadmill walking [15]. In another study, the authors were able to reconstruct hand movement velocity during a four-directional drawing task [16]. Additionally, Bradberry et al. have been able to decode 3-D hand trajectories during center-out reaching tasks [10], while Ofner et al. proposed a new paradigm without external targets to successfully decode continuous and self-paced movements [13].

Interestingly, EEG-based studies on upper limb motor control have only focused on hand movement trajectories; however, motor control of other joints such as the elbow is also important for motor rehabilitation. When patients are trained using an effector-based robot-assisted rehabilitation system, they often lack supervision to verify whether the movements are performed in the correct manner. To this end, it is useful to assess the kinematics of the different joints of the arm. In turn, monitoring of the neural correlates of these joints, and eventual discrepancies with the observed behavior, can be useful to better understand and assist motor neurorehabilitation [14]. By using a novel preprocessing method and sparse linear regression, Nakanishi et al. have predicted 3-D arm and elbow trajectories over time from electrocorticography (ECoG) signals in humans [17]. To the best of our knowledge, this method has not been used to investigate whether EEG-based systems can decode upper limb motor signals other than those of hand trajectories. Therefore, in the current study we applied well-established linear decoding methods to extract the kinematics of both hand and elbow movements, when the subjects either performed a trajectory themselves or observed and imagined a trajectory performed by another entity.

To date, most studies have only decoded actual hand movement velocities. In this study, we aimed to decode 3-D trajectories of imagined arm movements from EEG signals during a task that required the subjects to imagine and observe movements performed by a robotic arm or another individual's arm. Previous studies have already successfully demonstrated a similar approach [2], [11]. However, in contrast to EEG studies that focus on simple motion trajectories in 2-D, here we investigated the extent to which linear decoding methods could be used in more realistic settings. Specifically, we aimed to investigate the motor commands that were required for performing complex trajectories in 3-D with varying velocity profiles.

Many previous studies have used linear methods for decoding hand kinematics. To the best of our knowledge, it has not yet been systematically explored whether nonlinear methods can yield better results when decoding arm trajectories from EEG signals. Therefore, in the current study we compared the results of linear methods with decoding accuracies obtained using a nonlinear method.

To enhance the performance of EEG-based BCIs, several studies have explored additional sources of control commands outside neural activation. For example, Onose et al. used eye tracking to provide motion end-point information to a robotic arm by inferring the location of the object to be grasped from the focus point of a gaze that was concurrent with motor imagination [18]. However, the additional measurement device used in these studies can cause discomfort and increase cost. More importantly, for the therapeutic goal of neurorehabilitation to be accomplished, it is essential that only neural motor commands be used. In this study, we aimed to assess how eye movements affect decoding of neural motor signals in complex real-world scenarios. The eye is a much stronger dipole than the neural sources in the brain. Consequently, any eye movement that is correlated with motor commands (eye-stabilizing reflexes during head or body movement, or eye movement when pursuing a target in a motor task) will be reflected in an EEG. Because of volume conduction, eye movement-related signals will not only be reflected in electrodes close to the eye but also in distant electrodes. Therefore, for BCIs and neurorehabilitation, it is critical that eye-related signals are excluded from motor decoding. In most previous studies, subjects were instructed to suppress eye movements. During the session, eye movements were monitored by the experimenter and electro-oculogram (EOG) activity was measured. EOG contamination of EEG activity is then measured by correlating the labels (i.e., movement velocity of the controlled limb) with the EOG activity [11]. Low correlations around 0.1 are then interpreted as the EEG being not contaminated by EOG activity. Note, however, that this procedure does not ensure that the decoder does not use EOG-related signals, whether or not they are volitionally or subconsciously following the target position. Even if the EOG activity is not correlated with the target signal, correlations of EEG activity with this non-neural noise source can be used by the decoder to improve noise subtraction. Other studies used gaze tracking to ensure that there are no eye movements that could contaminate the EEG. However, gaze trackers cannot detect eye movements such as rotations around the rostro-caudal (roll) axis, which will lead to EEG contamination. Here we provide empirical evidence for such undetected EOG contamination of EEG signals in motor decoding tasks (see, e.g., Fig. 6). While we took the same precautions as previous studies to suppress eye movements, we found that removing residual eye-movement related artifacts in neural measurements can substantially decrease motor decoding accuracy when using standard linear methods. Moreover, we found that nonlinear decoding methods can help to counteract this loss in decoding accuracy. We hope that our results can raise awareness for the problem of EEG signal contamination by eye movements in motor signal decoding from neural signals and can help to improve motor signal decoding systems operating under realistic experimental conditions.

II. MATERIALS AND METHODS

A. Experimental Procedure

Ten healthy right-handed male subjects between the ages of 25 and 32 years participated in this experiment. The subjects
KIM et al.: DECODING THREE-DIMENSIONAL TRAJECTORY OF EXECUTED AND IMAGINED ARM MOVEMENTS 869
Fig. 1. Experimental setup. (a) Subject was instructed to move his arm in the
shape of the infinity symbol. (b) Movement guidelines at the x-y axes. (c) Motor
imagery with a volunteer's arm. (d) Motor imagery with a robotic arm.
of the symbol, EEG and motion tracking data were synchronized for analysis by using a computer.

In the MLR, we used a linear model similar to that used in a previous study [10]. The model is described in (2)–(4).
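A time-lagged linear model of this kind (a bias term plus the EEG amplitudes of every channel at the current and several past time steps, as in Bradberry et al. [10]) can be sketched as below. Since (2)–(4) are not reproduced in this excerpt, the function names, the lag count, and the least-squares fitting are illustrative assumptions rather than the authors' exact formulation:

```python
import numpy as np

def build_lagged_features(eeg, n_lags):
    """Stack the current and the n_lags past EEG samples of every channel
    (eeg: channels x time) into one feature vector per time step,
    plus a bias term."""
    n_ch, n_t = eeg.shape
    rows = []
    for t in range(n_lags, n_t):
        window = eeg[:, t - n_lags:t + 1]                      # current + past samples
        rows.append(np.concatenate(([1.0], window.ravel())))  # bias + lagged amplitudes
    return np.asarray(rows)

def fit_mlr(eeg, velocity, n_lags=10):
    """Ordinary least-squares fit of one velocity axis from lagged EEG."""
    X = build_lagged_features(eeg, n_lags)
    w, *_ = np.linalg.lstsq(X, velocity[n_lags:], rcond=None)
    return w

def predict_mlr(eeg, w, n_lags=10):
    """Predict the velocity time series from lagged EEG features."""
    return build_lagged_features(eeg, n_lags) @ w
```

One such model is fitted per movement axis (x, y, z), so the full decoder is three independent regressions sharing the same lagged feature matrix.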
Fig. 3. Decoder example of executed arm movements: The comparison between the measured velocity (red dotted line) and decoded velocity (kernel ridge re-
gression [KRR]: blue dotted line; multiple linear regression [MLR]: black solid line) from subject 1 in the time domain. (a) Hand movements at normal velocity.
(b) Elbow movements at normal velocity.
and $y_t$ denotes the target variable, the velocity at time $t$ in the $x$-, $y$-, and $z$-axes

(11)

(12)

The prediction of KRR for a new data point is then obtained using (8).

For decoding we excluded seven electrodes (Fp1, Fp2, AF7, AF3, AFz, AF4, AF8) from the analysis to further mitigate the influence of any eye movements on reconstruction [10].

a second (inner) cross-validation was performed to determine the out-of-sample performance for a certain parameter configuration. This inner cross-validation was repeated for all parameter configurations, and the best configuration was used to train the algorithm for the outer cross-validation.

To use the decoder in clinical applications, training data are required for calibrating the decoder. Ideally, a decoder should use as little training data as possible to reduce the calibration time. We measured the decoder performance as a function of the amount of training data and reduced the number of training samples by performing cross-validation with only 2, 3, 4, 5, 6, and 7 training blocks.
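The blockwise nested cross-validation described above (an outer loop estimating test performance, an inner loop over the remaining training blocks that selects the best parameter configuration) can be sketched as follows. The ridge decoder, the candidate parameter grid, and the helper names are illustrative assumptions, not the paper's exact implementation:

```python
import numpy as np

def ridge_fit(X, y, lam):
    """Closed-form ridge solution w = (X^T X + lam*I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def blockwise_nested_cv(X, y, n_blocks=8, lambdas=(0.1, 1.0, 10.0)):
    """Outer blockwise CV for test performance; inner blockwise CV on
    each training set to pick the regularization parameter."""
    blocks = np.array_split(np.arange(len(y)), n_blocks)
    scores = []
    for k in range(n_blocks):
        test = blocks[k]
        inner = [b for i, b in enumerate(blocks) if i != k]
        train = np.concatenate(inner)
        best_lam, best_err = None, np.inf
        for lam in lambdas:                      # inner CV over training blocks only
            errs = []
            for j in range(len(inner)):
                val = inner[j]
                tr = np.concatenate([b for i, b in enumerate(inner) if i != j])
                w = ridge_fit(X[tr], y[tr], lam)
                errs.append(np.mean((X[val] @ w - y[val]) ** 2))
            if np.mean(errs) < best_err:
                best_lam, best_err = lam, np.mean(errs)
        w = ridge_fit(X[train], y[train], best_lam)   # retrain with best config
        scores.append(np.corrcoef(X[test] @ w, y[test])[0, 1])
    return np.mean(scores)
```

Keeping whole blocks together, rather than shuffling individual samples across folds, preserves the independence between training and test sets that blockwise cross-validation is meant to provide.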
E. Data Analysis

To assess the accuracy of the velocity decoder, we carried out blockwise 8-fold cross-validation to keep the training set and the test set not only disjoint, but as independent as possible [24].

As an additional control, we shuffled each trial of EEG and motion tracking data in a training set of the same run. We then carried out 8-fold cross-validation and nested cross-validation for optimizing the parameters of the KRR to obtain an estimate of chance-level performance.

Lastly, to graphically assess the relative contributions of scalp regions to the reconstruction of executed and imagined arm velocity, we computed the correlation between the standardized EEG time series after EOG removal and the measured arm movement velocity [26]. The corresponding scalp maps are plotted in Fig. 8.

To evaluate the decoding accuracy on the test set, we calculated two performance measures. The correlation coefficient (r-value) between the measured movement velocity of the subject, the volunteer, or the robotic arm and the predicted movement velocity was used to compare the results of the current study to those of previous studies [10]. The correlation is
computed by

$$ r = \frac{\operatorname{cov}(v, \hat{v})}{\sigma_{v}\,\sigma_{\hat{v}}} $$

III. RESULTS AND DISCUSSION
Fig. 4. Movement trajectory decoding performance on holdout data for linear (multiple linear regression [MLR], blue) and nonlinear (kernel ridge regression [KRR], red) decoders. Each column shows the results for the x-, y-, and z-directions, respectively. Performance, measured as the correlation and NRMSE between measured and predicted trajectories, is plotted for the constant velocity profile and varying velocity profile conditions.
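The two reported performance measures, together with the shuffle-based chance level, can be computed as below. The range normalization used for NRMSE is an assumption on our part, since this excerpt does not state the normalization convention:

```python
import numpy as np

def pearson_r(v_true, v_pred):
    """Correlation coefficient between measured and predicted velocity."""
    return np.corrcoef(v_true, v_pred)[0, 1]

def nrmse(v_true, v_pred):
    """Root-mean-square error normalized by the range of the measured
    signal (normalization convention assumed, not stated in the excerpt)."""
    rmse = np.sqrt(np.mean((v_true - v_pred) ** 2))
    return rmse / (v_true.max() - v_true.min())

def chance_level(v_true, v_pred, n_perm=1000, seed=0):
    """Chance-level |r| estimated by destroying the temporal pairing,
    analogous to shuffling trials of EEG against motion tracking data."""
    rng = np.random.default_rng(seed)
    return np.mean([abs(pearson_r(v_true, rng.permutation(v_pred)))
                    for _ in range(n_perm)])
```

Reporting both measures is informative because r is invariant to scaling of the prediction, whereas NRMSE also penalizes a decoder that gets the shape of the trajectory right but its amplitude wrong.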
methods could decode executed movements with high accuracy [10]. Additionally, velocities of executed hand movements were decoded the most accurately from EEG signals, using both linear and nonlinear methods. Overall, we found that movement imagined through observation of either a volunteer or a robotic arm, rather than a trajectory executed by the individual, was much more difficult to decode. Importantly, we found that both constant and varying velocity profiles were reliably decoded. In the constant velocity profile condition, the decoding accuracy as measured by the correlation coefficient was not significantly different between KRR and MLR; however, KRR showed higher accuracy than MLR as measured by NRMSE. This indicates that in the constant velocity profile condition, both linear and nonlinear methods could capture the movement trajectory, but nonlinear KRR decoding showed better results when modeling the exact scaling of the desired output. In the varying velocity profile condition, on the other hand, KRR showed higher decoding accuracy than MLR in both the r-value and NRMSE evaluations. These results suggest that nonlinear methods can improve upon simple linear models in complex motor tasks. We also observed that chance-level decoding yielded correlation coefficients below 0.1 and 0.15 with MLR and KRR, respectively. This finding shows that decoding results obtained using both linear and nonlinear methods were well above chance level, even in complex motion trajectory tasks.

B. Effect of Reduced Training Data

Fig. 5(a) and (b) shows the decoding accuracy as a function of the number of training samples. We found that KRR could accurately predict arm velocities using less training data than MLR: KRR reached NRMSE values under 0.3 with 1200 samples, whereas MLR needed about 2800 samples to obtain a similar level of precision. Fig. 5(c) shows the computation time needed for decoder training as a function of the number of training samples. Decoding with KRR reduced the error in decoding movement trajectories, but this improvement came at high computational cost. More specifically, the training time of the KRR model increased with the number of data points, which determines the size of the kernel matrix that needs to be inverted [see (9)].
Fig. 5. Decoding performance and computational cost as a function of the number of training samples for different decoders. Decoding performance, as measured
using r-value (a) or NRMSE (b) increased with the amount of training data. While the training time of linear models was small even for a large training data set,
the training time for the nonlinear kernel ridge regression model increased exponentially with the number of data points (c).
Fig. 6. (a), (b) EOG contamination tests used in previous studies were based on the cross-correlogram between the EOG channels and the target variable. Also
in our experiments, these correlations were diminishingly low as shown in the cross-correlogram for subject 1 in panel (a) and (b); panel (c) shows an example
of horizontal EOG channel activity and a correlated independent component (IC), which was removed from the data; (d) shows the same EOG activity and an
example of an uncorrelated IC, which was not removed. (e) Correlation coefficients (absolute values) between horizontal EOG channel activity and each IC of
EEG activity in one trial of the experiment. Panel (f) shows the histogram of correlations between EOG and ICs for all subjects in all experiments. Mean and
standard deviation are about 0.002 and 0.2. Dotted lines indicate the thresholds (0.4 and −0.4) for rejection of an IC from the EEG data. We rejected all ICs with a correlation coefficient more than 2 standard deviations away from the mean.
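The rejection rule from Fig. 6 (discard any independent component whose correlation with the EOG channel deviates from the mean correlation by more than two standard deviations, i.e., beyond roughly ±0.4 for the values reported there) can be sketched as below; the helper name is an assumption for illustration:

```python
import numpy as np

def reject_eog_components(ics, eog, n_std=2.0):
    """Flag independent components (ics: components x time) whose
    correlation with the EOG channel lies more than n_std standard
    deviations from the mean correlation; with a mean near 0.002 and a
    standard deviation near 0.2, this reproduces the +/-0.4 thresholds
    shown in Fig. 6."""
    r = np.array([np.corrcoef(ic, eog)[0, 1] for ic in ics])
    return np.abs(r - r.mean()) > n_std * r.std()   # True = reject the IC
```

The flagged components are then removed before the remaining ICs are projected back to sensor space, yielding the "EOG-removed" EEG used in the comparisons below.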
Fig. 7. Time-lagged scalp maps of the correlation between standardized electroencephalography (EEG; after removal of electro-oculogram [EOG]-related activity) and standardized arm movement velocity from subject 3 during motor execution of the hand. The 11 time lags exhibited similar contributions to decoding the trajectory.
C. Effects of EOG and EMG on Decoding

To investigate the effects of EOG and EMG on decoding performance, we compared the ICA components of the EEG with the EOG and EMG signals. We emphasize that while we cannot ensure that all eye-movement related activity is removed from the EEG, this does not compromise the comparison. Primarily, we wanted to investigate the effect of eye-movement related activity on decoding performance in real-world scenarios when using different decoding models. Note that monitoring eye movements with techniques other than EOG electrodes can also fail to capture all eye movements. For instance, using gaze tracking to exclude recordings does not ensure that no eye-related activity is present in the EEG that can be used by the decoder to improve decoding accuracy. As in previous studies, we found no strong correlations between EOG activity and hand movement velocity (see Fig. 6). Neither did we find any significant correlations with EMG channels in the imagined movement condition.
Fig. 8. Scalp maps of the correlation between standardized EEG (after removal of EOG-related activity) and standardized arm movement velocity. Seven prefrontal electrodes were excluded from the analysis. Scalp maps correspond to a time lag of 50 ms in the past. (a) Scalp maps of all subjects during motor imagery with the volunteer. (b) Scalp maps of subject 3 during all tasks.
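The channel-wise correlations underlying such scalp maps can be computed as below; the helper name and the 100 Hz sampling rate used to convert a 50 ms lag into samples are illustrative assumptions:

```python
import numpy as np

def lagged_channel_correlations(eeg, velocity, lag):
    """Correlate every EEG channel (eeg: channels x time), taken `lag`
    samples in the past, with the arm movement velocity: one value per
    channel, ready to plot as a scalp map. At an assumed 100 Hz sampling
    rate, a 50 ms lag corresponds to lag=5."""
    past = eeg[:, :eeg.shape[1] - lag]   # EEG at time t - lag
    v = velocity[lag:]                   # velocity at time t
    past = (past - past.mean(1, keepdims=True)) / past.std(1, keepdims=True)
    v = (v - v.mean()) / v.std()
    return past @ v / len(v)             # per-channel Pearson correlation
```

Repeating this for several lags yields the sequence of maps in Fig. 7; a single lag of 50 ms yields the maps in Fig. 8.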
However, some ICA components showed strong correlations with the EOG signals. Our results strongly suggest that when EOG-related activity was left in the EEG recordings, these signals were being used by the decoder. Moreover, decoding performance decreased substantially once EOG-related activity was removed (see Fig. 4). When comparing raw EEG data with EEG data from which EOG-related activity had been removed, we found a significant drop in the decoding accuracy for the linear decoding method; however, this large drop in performance was not observed for the nonlinear decoding approach. Therefore, we evaluated the strength of the effect of EOG-related activity in the EEG data on both linear and nonlinear decoding by calculating the ratio of

$$ \text{ratio} = \frac{r_{\text{EOG removed}}}{r_{\text{raw}}} \qquad (15) $$

We then plotted the coefficients averaged across all runs per session in Fig. 10 and found that the drop in performance was substantially lower when using the nonlinear decoding method. Investigating why nonlinear decoding methods yielded better accuracies is an important topic of future research. One possible explanation is that motor control is a difficult problem, and representing all trajectories required in realistic conditions in a linear subspace of EEG activity may not be possible. Kernel methods, such as the KRR used in this study, have proven very useful in previous studies for the task of motor control in robotics [21]. These results suggest that motor control of complex trajectories is better modeled using nonlinear models. Besides, we cannot fully exclude that the nonlinear decoder used non-neural signals in the EEG data that are not accessible to the linear decoder. We emphasize that this could explain some, but not necessarily all, of the decoding accuracy differences.

D. Performance Comparison Between Hand and Elbow

We found that elbow velocity could be decoded with slightly lower accuracy than hand velocity. This could be due to the trajectory the subjects performed. Subjects can move their hand towards the body center (along the x-axis) with smaller elbow movements on the left side of the infinity symbol compared to the right side. Thus, the velocity variation and movement distance of elbow movements differ between the left and right sides of the infinity symbol trajectory, whereas hand movements are almost the same on both sides. This could explain some of the decoding accuracy differences. Our results show that using nonlinear decoding, we could obtain higher correlation values of about 0.4 averaged across all subjects. Moreover, the scalp plots showed that hand and elbow movements elicited similar patterns of activation, which can be explained by the high correlation of direction and velocity profiles between hand and elbow movements in the task. Consequently, the correlation scalp maps of hand and elbow trajectories were similar (Fig. 8). During hand motor execution, the sensors Cz, C3, and CP2-CP3 of the modified 10/20 international system exhibited high correlations along the x- and y-axes. Similarly, C3, Cz, CP2-CP3, and P2-P3 were also strongly correlated with the x- and y-axes during motor execution of the elbow. Along the z-axis, strong correlations were observed not only in the motor cortex but also in the occipital regions. CP3 showed the highest correlation averaged across all movement axes, which is in line with findings in a previous study [10].

Fig. 9. Comparison of correlations plotted in Fig. 8 with chance-level correlations; for calculating the chance-level correlations, we shuffled each trial of EEG and motion tracking data in the same run. Absolute correlation coefficients were averaged across all channels for each subject. This figure shows that scalp map correlations are above chance level.

E. Decoding Trajectory of Imagined Arm Movement

We decoded 3-D imagined hand and elbow movements during a combined task of imagination and observation of a robotic arm or a volunteer's arm. Our results show that when the subjects observed/imagined the robotic arm movement, decoding performance with EOG-related activation was significantly higher than when the subjects observed/imagined a volunteer's arm movement. However, after eliminating EOG-related activity, we found that the decoding performance of the two conditions was similar, indicating that the improved decoding accuracy in the robotic arm condition was due to eye movement-related activity rather than differences in neural activation. The robotic arm was larger and more unnatural than a human arm; therefore, we speculated that the high EOG-related activation was due to the difference in physical appearance between the two arms. Another possible explanation could be that another person's arm performs smoother movements and is more similar in shape to one's own arm than a robotic arm; thus, subjects could imagine the hand movement more easily, and neural circuits such as the mirror neuron system [11] could be enhanced. Fig. 8 shows that movement velocities were highly correlated with EEG activity above the motor cortex, FC1-FC4 and P1-P4.

Fig. 10. Ratio between decoding accuracy after and before removal of electro-oculogram (EOG)-related activity from EEG signals [see (15)] for linear multiple linear regression (MLR, x-axis) and nonlinear kernel ridge regression (KRR, y-axis). A significantly smaller decrease in decoding accuracy was observed for KRR compared to MLR.

IV. CONCLUSION

In the current study, we used noninvasive EEG recordings to extract motor control signals for complex 3-D movement trajectories with varying velocity profiles. The experimental conditions are much closer to real-world scenarios of EEG-based BCI control for neuroprostheses or neurorehabilitation than in previous studies. We found that both executed and imagined movements could be reliably decoded from a low-frequency component of EEG signals, and that this was true for hand movement velocities as well as elbow velocity. Importantly, we found that in all tested conditions, the EEG activity was correlated with the EOG activity; removing the eye-related activity from the EEG data resulted in a substantial drop in decoding accuracy when using the linear model. We did not observe such a strong decrease in
decoding accuracy when using nonlinear decoding methods. Investigating why nonlinear methods perform better is an important topic of future research. To summarize, the present study showed that it is possible to decode executed and imagined complex 3-D hand and elbow movement trajectories from low-frequency EEG signals. Furthermore, the results of this study could form the basis of efforts aimed at developing natural movement control of an upper limb neuroprosthesis.

ACKNOWLEDGMENT

The authors would like to thank Prof. K.-R. Müller for helpful discussions and advice.

REFERENCES

[1] M. Velliste, S. Perel, M. C. Spalding, A. S. Whitford, and A. B. Schwartz, “Cortical control of a prosthetic arm for self-feeding,” Nature, vol. 453, no. 7198, pp. 1098–1101, 2008.
[2] L. R. Hochberg et al., “Reach and grasp by people with tetraplegia using a neurally controlled robotic arm,” Nature, vol. 485, no. 7398, pp. 372–375, 2012.
[3] J. L. Collinger et al., “High-performance neuroprosthetic control by an individual with tetraplegia,” Lancet, vol. 381, no. 9866, pp. 557–564, 2013.
[4] F. Galán et al., “A brain-actuated wheelchair: Asynchronous and non-invasive brain-computer interfaces for continuous control of robots,” Clin. Neurophysiol., vol. 119, pp. 2159–2169, 2008.
[5] T.-E. Kam, H.-I. Suk, and S.-W. Lee, “Non-homogeneous spatial filter optimization for electroencephalogram-based motor imagery classification,” Neurocomputing, vol. 108, no. 2, pp. 58–68, 2013.
[6] A. J. Doud, J. P. Lucas, M. T. Pisansky, and B. He, “Continuous three-dimensional control of a virtual helicopter using a motor imagery based brain-computer interface,” PLoS ONE, vol. 6, no. 10, pp. 1–10, 2011.
[7] G. R. Müller-Putz, R. Scherer, G. Pfurtscheller, and C. Neuper, “Temporal coding of brain patterns for direct limb control in humans,” Front. Neurosci., vol. 4, no. 34, pp. 1–11, 2010.
[8] G. D. Johnson, N. R. Waytowich, D. J. Cox, and D. J. Krusienski, “Extending the discrete selection capabilities of the speller to goal-oriented robotic arm control,” in Proc. 2010 3rd IEEE RAS EMBS Int. Conf. Biomed. Robot. Biomechatron., Tokyo, Japan, 2010, pp. 572–575.
[9] E. Lew, R. Chavarriaga, S. Silvoni, and J. d. R. Millán, “Detection of self-paced reaching movement intention from EEG signals,” Front. Neuroeng., vol. 5, pp. 1–17, 2012.
[10] T. J. Bradberry, R. J. Gentili, and J. L. Contreras-Vidal, “Reconstructing three-dimensional hand movements from noninvasive electroencephalographic signals,” J. Neurosci., vol. 30, pp. 3432–3437, 2010.
[11] T. J. Bradberry, R. J. Gentili, and J. L. Contreras-Vidal, “Fast attainment of computer cursor control with noninvasively acquired brain signals,” J. Neural Eng., vol. 8, pp. 1–9, 2011.
[12] J. Liu, C. Perdoni, and B. He, “Hand movement decoding by phase-locking low frequency EEG signal,” in Proc. 33rd Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., Boston, MA, 2011, pp. 6335–6338.
[13] P. Ofner and G. R. Müller-Putz, “Decoding of velocities and positions of 3D arm movement from EEG,” in Proc. 34th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., San Diego, CA, pp. 6406–6409.
[14] J.-H. Kim, R. Chavarriaga, J. d. R. Millán, and S.-W. Lee, “3D trajectory reconstruction of upper limb based on EEG,” in 5th Int. Brain-Computer Interface Meeting, Pacific Grove, CA, Jun. 2013.
[15] A. Presacco, R. Goodman, L. Forrester, and L. Contreras-Vidal, “Neural decoding of treadmill walking from noninvasive electroencephalographic signals,” J. Neurophysiol., vol. 106, pp. 1875–1887, 2011.
[16] J. Lv, Y. Li, and Z. Gu, “Decoding hand movement velocity from electroencephalogram signals during a drawing task,” BioMed. Eng., vol. 9, no. 64, pp. 1–21, 2010.
[17] Y. Nakanishi et al., “Prediction of three-dimensional arm trajectories based on ECoG signals recorded from human sensorimotor cortex,” PLoS ONE, vol. 8, no. 8, pp. 1–9, 2013.
[18] G. Onose et al., “On the feasibility of using motor imagery EEG-based brain-computer interface in chronic tetraplegics for assistive robotic arm control: A clinical test and long-term post-trial follow-up,” Spinal Cord, vol. 50, pp. 599–608, 2012.
[19] I. Winkler, S. Haufe, and M. Tangermann, “Automatic classification of artifactual ICA components for artifact removal in EEG signals,” Behav. Brain Funct., vol. 7, no. 30, pp. 1–15, 2011.
[20] A. Ziehe and K.-R. Müller, “TDSEP—An efficient algorithm for blind separation using time structure,” in Proc. 8th Int. Conf. Artif. Neural Netw., Skövde, Sweden, 1998, pp. 675–680.
[21] D. Nguyen-Tuong, M. Seeger, and J. Peters, “Model learning with local Gaussian process regression,” Adv. Robot., vol. 23, no. 15, pp. 2015–2034, 2009.
[22] A. Aizerman, E. Braverman, and L. Rozonoer, “Theoretical foundations of the potential function method in pattern recognition learning,” Automat. Remote Control, vol. 25, pp. 821–837, 1964.
[23] K.-R. Müller, C. W. Anderson, and G. E. Birch, “Linear and nonlinear methods for brain-computer interfaces,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 11, no. 2, pp. 165–169, 2003.
[24] S. Lemm, B. Blankertz, T. Dickhaus, and K.-R. Müller, “Introduction to machine learning for brain imaging,” Neuroimage, vol. 56, no. 2, pp. 387–399, Jun. 2011.
[25] J. M. Antelis, L. Montesano, A. Ramos-Murguialday, N. Birbaumer, and J. Minguez, “On the usage of linear regression models to reconstruct limb kinematics from low frequency EEG signals,” PLoS ONE, vol. 8, no. 4, pp. 1–14, 2013.
[26] S. Haufe, “On the interpretation of weight vectors of linear models in multivariate neuroimaging,” Neuroimage, vol. 87, pp. 96–110, 2013.

Jeong-Hun Kim received the B.S. degree in electronic engineering and avionics from Korea Aerospace University, Gyeonggi-do, Korea, in 2009, and the M.S. degree in brain and cognitive engineering from Korea University, Seoul, Korea, in 2014. He is currently working at LIG Nex1. His current research interests include machine learning and brain–computer interfaces.

Felix Bießmann received the B.Sc. degree in cognitive science from the University of Osnabrück, Osnabrück, Germany, in 2005, the M.Sc. degree in neuroscience from the International Max-Planck Research School, Tübingen, Germany, and the Ph.D. degree in machine learning from Berlin Institute of Technology, Berlin, Germany. His research interests include statistical learning methods for multimodal data with a focus on neuroscientific and biomedical data.

Seong-Whan Lee (F'10) is a Hyundai Motor Chair Professor at Korea University, Seoul, Korea, where he is the Head of the Department of Brain and Cognitive Engineering. His research interests include pattern recognition, brain–computer interface, and neuro-rehabilitation. Prof. Lee is a Fellow of the International Association for Pattern Recognition (IAPR) and the Korean Academy of Science and Technology.