Non-Invasive Brain-Machine Interaction
Article in International Journal of Pattern Recognition and Artificial Intelligence, August 2008. DOI: 10.1142/S0218001408006600
JOSÉ DEL R. MILLÁN†, PIERRE W. FERREZ†, FERRAN GALÁN†,‡, EILEEN LEW† and RICARDO CHAVARRIAGA†
†École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
‡University of Barcelona, Barcelona, Spain
[email protected]
1. Introduction
The idea of controlling machines not by manual operation but by mere “thinking” (i.e.,
the brain activity of human subjects) has long fascinated humankind, and
researchers working at the crossroads of computer science, neuroscience, and
biomedical engineering have developed the first prototypes of brain-computer
interfaces (BCI) over the last decade or so1, 2, 3, 4, 5, 6. A BCI monitors the user’s brain
activity and translates their intentions into actions—such as moving a wheelchair7, 8 or
selecting a letter from a virtual keyboard9, 10—without using activity of any muscle or
peripheral nerve. The central tenet of a BCI is its capability to distinguish different
patterns of brain activity, each associated with a particular intention or mental task.
Such a BCI is a natural way to augment human capabilities by providing a
new interaction link with the outside world and is particularly relevant as an aid for
paralyzed humans, although it also opens up new possibilities in natural and direct
interaction for able-bodied people. Figure 1 shows the general architecture of a BCI.
Brain electrical activity is recorded with a portable device. These raw signals are first
processed and transformed to extract relevant features, which are then passed
on to a mathematical model (e.g., a statistical classifier or a neural network). After a
training process, this model computes the appropriate mental commands to
control the device. Finally, visual feedback, possibly complemented by other modalities
such as tactile stimulation, informs the subject about the performance of the
brain-actuated device so that they can learn appropriate mental control strategies and
make rapid adjustments to achieve the task.
Fig. 1. General architecture of a brain-computer interface (BCI) for controlling devices such as a cursor, a
robotic arm, or a motorized wheelchair. In this case the BCI measures electroencephalogram (EEG) signals
recorded non-invasively from electrodes placed on the subject’s scalp.
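The closed loop of Fig. 1 can be sketched in a few lines of illustrative Python. The component names (`extract_features`, `classifier`, `device`) are placeholders of our own, not the paper’s actual modules:

```python
def bci_step(raw_eeg, extract_features, classifier, device):
    """One pass through the BCI loop: raw signals -> features ->
    classifier -> device command. Purely illustrative plumbing."""
    features = extract_features(raw_eeg)      # e.g. band power per electrode
    label, confidence = classifier(features)  # statistical classifier output
    if label is not None:                     # "unknown" responses are ignored
        device.execute(label)                 # feedback then closes the loop
    return label
```

In the real system the classifier is retrained per subject and the device provides the feedback that lets the user refine their mental strategies.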
A BCI may monitor brain activity via a variety of methods, which can be coarsely
classified as invasive and non-invasive. In invasive BCI systems the activity of single
neurons (their spiking rate) is recorded from microelectrodes implanted in the brain. Less
invasive approaches are based on the analysis of electrocorticogram (ECoG) signals from
electrodes implanted under the skull. For humans, however, non-invasive approaches are
preferable, as they avoid the risks and ethical concerns associated with permanently
implanted devices in the brain. Most non-invasive BCI systems
use electroencephalogram (EEG) signals; i.e., the electrical brain activity recorded from
electrodes placed on the scalp. The main source of the EEG is the synchronous activity of
thousands of cortical neurons. Measuring the EEG is a simple noninvasive way to
monitor electrical brain activity, but it does not provide detailed information on the
activity of single neurons (or small brain areas). Moreover, it is characterized by small
signal amplitudes (a few microvolts) and noisy measurements (especially when recording
outside shielded rooms).
Besides electrical activity, neural activity also produces other types of signals, such as
magnetic and metabolic, that could be used in a BCI. Magnetic fields can be recorded
with magnetoencephalography (MEG), while brain metabolic activity—reflected in
changes in blood flow—can be observed with positron emission tomography (PET),
functional magnetic resonance imaging (fMRI), and optical imaging. Unfortunately, such
alternative techniques require sophisticated devices that can be operated only in special
facilities. Moreover, techniques for measuring blood flow have long latencies and thus
are less appropriate for interaction.
From this short review it follows that, because of its low cost, portability and lack of
risk, EEG is the ideal modality if we want to bring BCI technology to a large population.
In the next sections we review the main components of our BCI system, which is based
on the online analysis of spontaneous EEG signals and recognizes 3 mental tasks. Our
approach relies on four principles. The first one is an asynchronous protocol where
subjects decide voluntarily when to switch between mental tasks and perform those
mental tasks at their own pace. The second principle is mutual learning, where the user
and the BCI are coupled together and adapt to each other. In other words, we use machine
learning approaches to discover the individual EEG patterns characterizing the mental
tasks executed by the user while users learn to modulate their brainwaves so as to
improve the recognition of the EEG patterns. The third principle is the combination of the
user’s intelligence with the design of intelligent devices that facilitate interaction and
reduce the user’s cognitive workload. This is particularly useful for mental control of
robots. Finally, the fourth principle is the recognition of high-level cognitive states
related to the user’s awareness of erroneous responses, the so-called error potentials
(ErrP). The user’s commands are thus executed only if no error is detected, which enables
the BCI to interact with the user in a much more meaningful way. We also describe the three brain-
actuated applications we have developed. Finally, we discuss current research directions
we are pursuing in order to improve the performance and robustness of our BCI system,
especially for real-time control of brain-actuated robots.
Our approach is based on the analysis of EEG phenomena associated with various aspects
of brain function related to mental tasks carried out by the subject at will. Such a BCI can
exploit two kinds of spontaneous, or endogenous, brain signals, namely slow potential
shifts11 or variations of rhythmic activity7, 9, 12, 13, 14, 15. We focus on the latter, which are
the most common.
EEG-based BCIs are limited by a low channel capacity*. Most of the current systems
have a channel capacity below 0.5 bits/s3. One of the main reasons for such a low
bandwidth is that they are based on synchronous protocols where EEG is time-locked to
externally paced cues repeated every 4-10 s and the response of the BCI is the overall
decision over this period11, 13, 14. Such synchronous protocols facilitate EEG analysis since
the starting time of each mental state is precisely known and differences with respect to
background EEG activity can be amplified. Unfortunately, they are slow, and BCI
systems that use them normally recognize only two mental states.
In contrast, we utilize more flexible asynchronous protocols where the subject
makes self-paced decisions on when to stop performing one mental task and immediately
start the next7, 9, 16. In such asynchronous protocols the subject can voluntarily change the
mental task being executed at any moment, without waiting for external cues. The
response time of an asynchronous BCI can be below one second; for instance, our
system responds every half second. The rapid responses of our asynchronous BCI,
together with its performance (see Section 3), give a theoretical channel capacity between
1 and 1.5 bits/s.
* Channel capacity is the maximum possible information transfer rate, or bit rate, through a channel.
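Such figures are commonly computed with the standard Wolpaw estimate of the information transfer rate; the formula below is the standard one from the BCI literature, not quoted in this paper, so treat the numbers as our own back-of-the-envelope check:

```python
import math

def wolpaw_bitrate(n_classes, accuracy, decisions_per_sec):
    """Standard Wolpaw information-transfer-rate estimate in bits/s.
    Each decision among n_classes made with probability `accuracy`
    carries log2(N) + p*log2(p) + (1-p)*log2((1-p)/(N-1)) bits."""
    p, n = accuracy, n_classes
    bits = math.log2(n) + (p * math.log2(p) if p > 0 else 0.0)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * decisions_per_sec
```

With three mental tasks, a response every half second (two decisions per second), and roughly 75–80% accuracy, this yields about 1–1.3 bits/s, consistent with the range quoted above.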
that the direction of the eigenvectors A maximizes the quotient between the between-
classes dispersion matrix B and the pooled within-classes dispersion matrix W. Thus,
the CDSP are obtained by projecting X = SA. Once the CDSP are computed, we select
the electrodes with the highest contribution to the CDSP. This contribution is measured
with a discrimination index computed from the structure matrix—the pooled correlation
matrix between the original channels in S and the CDSP X. Given the c × (l − 1)
structure matrix T, where T = Σ_{k=1}^{l} T_k, the discrimination index of electrode
e = 1, …, c is computed from the normalized eigenvalues γ_u as

D_e = ( Σ_{u=1}^{l−1} γ_u t_{eu}² / Σ_{e=1}^{c} Σ_{u=1}^{l−1} γ_u t_{eu}² ) × 100.    (1)
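Equation (1) is a weighted, normalized sum of squared structure-matrix entries. A minimal sketch (function name and array shapes are our assumptions):

```python
import numpy as np

def discrimination_index(T, gamma):
    """Per-electrode discrimination index of Eq. (1), as a sketch.

    T     : (c, l-1) structure matrix (correlations between the original
            channels and the canonical discriminant spatial patterns)
    gamma : (l-1,) normalized eigenvalues of the canonical decomposition
    Returns a length-c vector that sums to 100.
    """
    weighted = gamma * T**2               # gamma_u * t_eu^2, broadcast over rows
    per_electrode = weighted.sum(axis=1)  # sum over u = 1..l-1
    return per_electrode / per_electrode.sum() * 100.0
```

Electrodes are then ranked by D_e and those with the highest contribution are retained.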
3.2. Classifier
We use a statistical Gaussian classifier (see Ref. 7 for more details). The output of
this statistical classifier is an estimation of the posterior class probability distribution for
a sample; i.e., the probability that a given single trial belongs to each mental task (or
class). Each class is represented by a number of Gaussian prototypes, typically less than
four. That is, we assume that the class-conditional probability function of class Ck is a
superposition of Nk Gaussian prototypes. We also assume that all classes have equal prior
probability. All classes have the same number of prototypes Np, and within each class
every prototype has equal weight 1/Np. Then, dropping constant terms, the activity aki of the ith
prototype of class Ck for a given sample x is the value of the Gaussian with centre μki
and covariance matrix Σik . From this we calculate the posterior probability yk of the class
Ck. The posterior probability yk of the class Ck is now the sum of the activities of all the
prototypes of class k divided by the sum of the activities of all the prototypes of all the
classes. The classifier output for input vector x is now the class with the highest
probability, provided that the probability is above a given threshold, otherwise the result
is “unknown”. Usually each prototype of each class would have an individual covariance
matrix Σik , but to reduce the number of parameters the model has a single diagonal
covariance matrix common to all the prototypes of the same class. During offline training
of the classifier, the prototype centers are initialized by any clustering algorithm or
generative approach. This initial estimate is then improved by stochastic gradient descent
to minimize the mean square error E = ½ Σ_k (y_k − t_k)², where t is the target vector in
1-of-C form; that is, if the second of three classes is the desired output, the target
vector is (0, 1, 0). The covariance matrices are computed individually and are then
averaged over the prototypes of each class to give Σk.
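A minimal sketch of this Gaussian-prototype classifier follows. Names, shapes, and the rejection threshold are our illustrative choices; the real system estimates its parameters as described above:

```python
import numpy as np

def classify(x, centers, variances, threshold=0.85):
    """Gaussian-prototype classifier sketch.

    centers[k]   : (Np, d) prototype centres of class k
    variances[k] : (d,) diagonal covariance shared by the prototypes of class k
    Returns the index of the most probable class, or None ("unknown")
    when the highest posterior falls below `threshold`.
    """
    activities = []
    for mu, var in zip(centers, variances):
        diff = x - mu                                       # (Np, d)
        a = np.exp(-0.5 * np.sum(diff**2 / var, axis=1))    # per-prototype Gaussians
        activities.append(a.sum() / np.sqrt(np.prod(var)))  # class activity
    y = np.array(activities)
    y = y / y.sum()                                         # posterior per class
    k = int(np.argmax(y))
    return k if y[k] >= threshold else None
```

The explicit “unknown” response is what allows the asynchronous protocol to simply do nothing when the evidence for every mental task is weak.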
4. Blending of Intelligences
To be fully operative, a BCI system has to facilitate an effective human–device
interaction and reduce the user’s cognitive workload. This means the system must be
comfortable and, in the case of control of external devices such as robots and prostheses,
must provide safe modes of operation. One way to promote this kind of interaction is to
design smart devices that recognize the user’s intent and execute it autonomously,
relieving the user of low-level detailed control; the intelligences of both the user and the
device are thus combined. Section 7 introduces the brain-actuated devices developed
in our lab and describes how we have developed this concept for mental control of robots
and wheelchairs. Despite these initial attempts to facilitate brain interaction, the operation
of brain-actuated devices requires a high degree of concentration and attentional levels.
Recent work in our lab has shown the feasibility of simultaneously classifying mental
commands for BCI control and detecting ErrP to filter out erroneous commands in a
real-time system, all at the single-trial level25.
7. Brain-Actuated Devices
BCI systems are being used to operate a number of brain-actuated applications that
augment people’s communication capabilities, provide new forms of entertainment, and
also enable the operation of physical devices. In this section we briefly describe some of
the brain-actuated devices we have developed over the years. All these systems have been
widely demonstrated in public.
Our asynchronous BCI can be used to select letters from a virtual keyboard on a
computer screen and to write a message9, 10. Initially, the whole keyboard (26 English
letters plus the space to separate words, for a total of 27 symbols organized in a matrix of
3 rows by 9 columns) is divided into three blocks, each associated with one of the mental
tasks. The association between blocks and mental tasks is indicated by the same colors as
during the training phase. Each block contains an equal number of symbols, namely 9 at
this first level (3 rows by 3 columns). Then, once the statistical classifier recognizes the
block on which the subject is concentrating, that block is split into 3 smaller blocks, each
now containing 3 symbols (1 row). Once one of these second-level blocks is selected, it is
again split into 3 parts. At this third and final level, each block contains a single symbol.
Finally, to select the desired symbol, the user concentrates on its associated mental task as
indicated by the color of the symbol. The symbol is appended to the message and the whole
process starts over again. Thus, writing a single letter requires three
decision steps. It goes without saying that the incorporation of statistical language
models, or other techniques for word prediction such as T9 in cellular phones, will
facilitate and speed up writing (cf. principle “blending of intelligences”).
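The three-step selection amounts to walking a ternary tree over the 27 symbols. The sketch below is our illustration, not the system’s actual code:

```python
def split3(symbols):
    """Split a block of symbols into three equal sub-blocks."""
    n = len(symbols) // 3
    return [symbols[:n], symbols[n:2 * n], symbols[2 * n:]]

def select(symbols, choices):
    """Walk the three-level ternary tree with a sequence of recognized
    mental tasks (0, 1 or 2); after three choices one symbol remains."""
    block = symbols
    for c in choices:
        block = split3(block)[c]
    return block

# 26 English letters plus the space: 27 symbols in total
alphabet = list("ABCDEFGHIJKLMNOPQRSTUVWXYZ ")
```

For example, `select(alphabet, [0, 1, 2])` returns `['F']`: the first block A–I, then its middle row D–F, then the third symbol of that row.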
The second brain-actuated device is a simple computer game10, or “brain game”, but
other educational software could have been selected instead. It is the classic Pacman.
For the control of Pacman, two mental tasks suffice to make it turn left or right.
Pacman changes direction of movement whenever one of the mental tasks is recognized
twice in a row. In the absence of further mental commands, Pacman moves forward until
it reaches a wall, where it stops and waits for instructions.
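The two-in-a-row rule is a simple debounce over the classifier’s decision stream, which can be sketched as follows (illustrative code, not the game’s implementation):

```python
def pacman_commands(decisions):
    """Turn a stream of classifier outputs ('left'/'right'/None) into
    turn commands, issued only when the same task is recognized twice
    in a row, as described in the text."""
    commands, prev = [], None
    for d in decisions:
        if d is not None and d == prev:
            commands.append(d)
            prev = None  # require two fresh recognitions for the next turn
        else:
            prev = d
    return commands
```

Requiring two consecutive recognitions trades a little speed for robustness against isolated misclassifications.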
Finally, it is also possible to mentally control robots and prostheses. Until recently,
EEG-based BCIs have been considered too slow for controlling rapid and complex
sequences of movements. But we have shown for the first time7, 9 that asynchronous
analysis of EEG signals is sufficient for humans to continuously control a mobile robot—
emulating a motorized wheelchair—along non-trivial trajectories requiring fast and
frequent switches between mental tasks (see Fig. 2). Two human subjects learned to
mentally drive the robot between rooms in a house-like environment visiting 3 or 4
rooms in the desired order. Furthermore, mental control was only marginally worse than
manual control on the same task. A key element of this brain-actuated robot is
cooperative control between two intelligent agents—the human user and the robot—so
that the user only gives high-level mental commands that the robot performs
autonomously. In particular, the user’s mental states are associated with high-level
commands (e.g., “turn right at the next occasion”) that the robot executes
autonomously using the readings of its on-board sensors. Another critical
feature is that a subject can issue high-level commands at any moment. This is possible
because the operation of the BCI is asynchronous and, unlike synchronous approaches,
does not require waiting for external cues. The robot relies on a behaviour-based
controller to implement the high-level commands to guarantee obstacle avoidance and
smooth turns. In this kind of controller, on-board sensors are read constantly and
determine the next action to take. In particular, if, from the perspective of the robot’s
sensors, a mental command is deemed unsafe, it will not be executed.
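A safety veto of this kind can be sketched as a thin filter between the BCI and the behaviour-based controller. The sensor layout, units, and threshold below are our assumptions for illustration only:

```python
def execute(command, ir_readings, threshold=0.2):
    """Veto a mental command the robot's sensors deem unsafe.

    ir_readings : distances in metres from the 8 infrared sensors;
                  assumed layout: index 0 front, 1-3 left side, 5-7 right side.
    Returns the command to actually execute, or "stop" if it is unsafe.
    """
    front = ir_readings[0]
    left = min(ir_readings[1:4])
    right = min(ir_readings[5:8])
    if command == "forward" and front < threshold:
        return "stop"   # obstacle ahead: refuse to move forward
    if command == "left" and left < threshold:
        return "stop"   # obstacle on the left: refuse the turn
    if command == "right" and right < threshold:
        return "stop"
    return command
```

In the real controller the high-level command is translated into smooth, obstacle-avoiding motion rather than a binary stop, but the veto principle is the same.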
More recently, we have extended this work to the mental control of both a simulated
and a real wheelchair (see Fig. 3). This has been done in the framework of the European
project MAIA (http://www.maia-project.org) in cooperation with the KU Leuven. In this
case, we have incorporated shared control principles into the BCI27, 28. In shared control,
the intelligent controller relieves the human from low level tasks without sacrificing the
cognitive superiority and adaptability of human beings that are capable of acting in
unforeseen situations. In other words, in shared control there are two intelligent agents—
the human user and the robot—so that the user only conveys intents that the robot
performs autonomously. Although our first brain-actuated robot had already some form
of cooperative control, shared control is a more principled and flexible framework.
Fig. 2. One of the users while driving mentally the robot through the different rooms of the environment,
making it turn right, turn left, or move forward. The robot has 3 lights on top to provide feedback to the user
and 8 infrared sensors around its diameter to detect obstacles.
Fig. 3. Subject driving the wheelchair in a natural environment from non-invasive EEG. Note the laser scanner
in front of the wheelchair, in between the subject’s legs.
Online learning can thus be used to adapt the classifier throughout its use and
keep it tuned to drifts in the signals it receives in each session. Preliminary work
shows the feasibility and benefits of this approach. As already mentioned, detection of
error-related potentials (ErrP) prevents the execution of wrong mental commands (Sect.
5). But this is not the only way to benefit from ErrP. Indeed, ErrP—which are
generated in response to errors made by the BCI rather than by the user—can provide
performance feedback that, in combination with online adaptation, allows improving
the BCI while it is being used, in a fully unsupervised way31.
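A deliberately simplified sketch of this idea follows. The update rule and learning rate are our own illustration of unsupervised adaptation gated by ErrP, not the method of Ref. 31 (which uses stochastic gradient descent on the classifier’s error):

```python
import numpy as np

def adapt(center, x, error_detected, lr=0.05):
    """Nudge the winning class prototype toward the new sample after a
    trial, but only when no ErrP was detected (i.e., the command was
    presumably correct). Simplified illustration of ErrP-gated adaptation."""
    if error_detected:
        return center                   # wrong decision: do not reinforce it
    return center + lr * (x - center)   # move the prototype toward the sample
```

Gating the update on the absence of an ErrP is what makes the adaptation unsupervised: no external label is needed, only the brain’s own error signal.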
Another aspect we are currently investigating is the potential benefit of using
neurocognitive knowledge to increase the recognition rate of ErrP and, more generally,
the performance of the BCI. Recent findings32 have uncovered that ErrP are most
probably generated in a deep fronto-central brain area called the anterior cingulate cortex
(ACC). We have verified this hypothesis for our ErrP using a well-known inverse method
called sLORETA33. Furthermore, in a preliminary study based on another inverse model
called Cortical Current Density34 we have found that the most relevant voxels (tiny
cortical patches) for ErrP classification are in agreement with those neurophysiological
findings and, more importantly, their use improves ErrP recognition compared to scalp
EEG features25. We will continue exploring the use of inverse methods for both ErrP
recognition as well as for classification of mental commands.
Finally, the work on ErrP suggests that it could be possible to recognize in real time
high-level cognitive and emotional states from EEG (as opposed, and in addition, to
mental commands) such as alarm, fatigue, frustration, or attention, which are crucial for
effective and purposeful interaction. Indeed, the rapid recognition of these states will lead
to truly adaptive interfaces that adjust dynamically in response to changes in the
cognitive and affective states of the user.
Acknowledgements
This work is supported by the Swiss National Science Foundation through the National
Centre of Competence in Research on “Interactive Multimodal Information Management
(IM2)” and also by the European IST Programme FET Project FP6-003758. This paper
only reflects the authors’ views and funding agencies are not liable for any use that may
be made of the information contained herein.
References
1. Nicolelis, M.A.L.: Actions from Thoughts. Nature 409, 403–407 (2001)
2. Millán, J.d.R.: Brain-Computer Interfaces. In: Arbib, M.A. (ed.) Handbook of Brain Theory
and Neural Networks, pp. 178–181. MIT Press, Cambridge, Massachusetts (2002)
3. Wolpaw, J.R., Birbaumer, N., McFarland, D.J., Pfurtscheller, G., Vaughan, T.M.: Brain-
Computer Interfaces for Communication and Control. Clin. Neurophysiol. 113, 767–791 (2002)
4. Wickelgren, I:. Tapping the Mind. Science 299, 496–499 (2003)
5. Dornhege, G., Millán, J.d.R., Hinterberger, T., McFarland, D., Müller, K.-R. (eds.): Towards
Brain-Computer Interfacing. MIT Press, Cambridge, Massachusetts (2007)
6. Millán, J.d.R., Ferrez, P.W., Galán, F., Lew, E., Chavarriaga, R.: Non-Invasive Brain-Actuated
Interaction. In: Mele, F., Ramella, G., Santillo, S., Ventriglia, F. (eds.) Advances in Brain,
Vision, and Artificial Intelligence, pp. 438–447. LNCS 4729, Springer, Berlin (2007)
7. Millán, J.d.R., Renkens, F., Mouriño, J., Gerstner, W.: Non-Invasive Brain-Actuated Control of
a Mobile Robot by Human EEG. IEEE Trans. Biomed. Eng. 51, 1026–1033 (2004)
8. Galán, F., Nuttin, M., Lew, E., Ferrez, P.W., Vanacker, G., Philips, J., Van Brussel, H., Millán,
J.d.R.: An Asynchronous and Non-Invasive Brain-Actuated Wheelchair. In: 13th International
Symposium on Robotics Research. Hiroshima, Japan (2007)
9. Millán, J.d.R., Renkens, F., Mouriño, J., Gerstner, W.: Brain-Actuated Interaction. Artif. Intell.
159, 241–259 (2004)
10. Millán, J.d.R.: Adaptive Brain Interfaces. Comm. ACM 46, 74–80 (2003)
11. Birbaumer, N., Ghanayim, N., Hinterberger, T., Iversen, I., Kotchoubey, B., Kübler, A.,
Perelmouter, J., Taub, E., Flor, H.: A Spelling Device for the Paralysed. Nature 398, 297–298
(1999)
12. Babiloni, F., Cincotti, F., Lazzarini, L., Millán, J.d.R., Mouriño, J., Varsta, M., Heikkonen, J.,
Bianchi, L., Marciani, M.G.: Linear Classification of Low-Resolution EEG Patterns Produced
by Imagined Hand Movements. IEEE Trans. Rehab. Eng. 8, 186–188 (2000)
13. Pfurtscheller, G., Neuper, C.: Motor Imagery and Direct Brain-Computer Communication.
Proc. IEEE 89, 1123–1134 (2001)
14. Wolpaw, J.R., McFarland, D.J.: Control of a Two-Dimensional Movement Signal by a
Noninvasive Brain-Computer Interface in Humans. PNAS 101, 17849–17854 (2004)
15. Blankertz, B., Dornhege, G., Krauledat, M., Müller, K.-R., Kunzmann, V., Losch, F., Curio, G.:
The Berlin Brain-Computer Interface: EEG-based Communication without Subject Training.
IEEE Trans. Neural Sys. Rehab. Eng. 14, 147–152 (2006)
16. Birch, G.E., Bozorgzadeh, Z., Mason, S.G.: Initial On-Line Evaluation of the LF-ASD Brain-
Computer Interface with Able-Bodied and Spinal-Cord Subjects using Imagined Voluntary
Motor Potentials. IEEE Trans. Neural Sys. Rehab. Eng. 10, 219–224 (2002)
17. Krzanowski, W.J.: Principles of Multivariate Analysis. Oxford University Press, Oxford (1998)
18. Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification. John Wiley & Sons, 2nd ed. (2001)
19. Galán, F., Ferrez, P.W., Oliva, F., Guàrdia, J., Millán, J.d.R.: Feature Extraction for Multi-class
BCI using Canonical Variates Analysis. In: IEEE International Symposium on Intelligent
Signal Processing. Alcalá de Henares, Spain (2007)
20. Schalk, G., Wolpaw, J.R., McFarland, D.J., Pfurtscheller, G.: EEG-based Communication:
Presence of an Error Potential. Clin. Neurophysiol. 111, 2138–2144 (2000)
21. Blankertz, B., Dornhege, G., Schäfer, C., Krepki, R., Kohlmorgen, J., Müller, K.R., Kunzmann,
V., Losch, F., Curio, G.: Boosting Bit Rates and Error Detection for the Classification of Fast-
Paced Motor Commands based on Single-Trial EEG Analysis. IEEE Trans. Neural Sys. Rehab.
Eng. 11, 127–131 (2003)
22. Ferrez, P.W., Millán, J.d.R.: You Are Wrong!—Automatic Detection of Interaction Errors from
Brain Waves. In: Proc. 19th International Joint Conference on Artificial Intelligence,
Edinburgh, UK (2005)
23. Ferrez, P.W., Millán, J.d.R.: Error-Related EEG Potentials in Brain-Computer Interfaces. In:
Dornhege, G., Millán, J.d.R., Hinterberger, T., McFarland, D., Müller, K.-R. (eds.) Towards
Brain-Computer Interfacing. MIT Press, Cambridge, Massachusetts (2007)
24. Ferrez, P.W., Millán, J.d.R.: Error-Related EEG Potentials Generated during Simulated Brain-
Computer Interaction. IEEE Trans. Biomed. Eng. To appear (2007)
25. Ferrez, P.W.: Error-Related EEG Potentials in Brain-Computer Interfaces. Ph.D. Thesis, Ecole
Polytechnique Fédérale de Lausanne, Switzerland (2007)
26. Mouriño, J.: EEG-based Analysis for the Design of Adaptive Brain Interfaces. Ph.D. Thesis,
Centre de Recerca en Enginyeria Biomèdica, Universitat Politècnica de Catalunya, Barcelona,
Spain (2003)
27. Philips, J., Millán, J.d.R., Vanacker, G., Lew, E., Galán, F., Ferrez, P.W., Van Brussel, H.,
Nuttin, M.: Adaptive Shared Control of a Brain-Actuated Simulated Wheelchair. In: 10th
International Conference on Rehabilitation Robotics, Noordwijk, The Netherlands (2007)
28. Vanacker, G., Millán, J.d.R., Lew, E., Ferrez, P.W., Galán, F., Philips, J., Van Brussel, H.,
Nuttin, M.: Context-based Filtering for Assisted Brain-Actuated Wheelchair Driving.
Computational Intelligence and Neuroscience (2007)
29. Buttfield, A., Ferrez, P.W., Millán, J.d.R.: Towards a Robust BCI: Error Recognition and
Online Learning. IEEE Trans. Neural Sys. Rehab. Eng. 14, 164–168 (2006)
30. Millán, J.d.R., Buttfield, A., Vidaurre, C., Krauledat, M., Schlögl, A., Shenoy, P., Blankertz, B.,
Rao, R.P.N., Cabeza, R., Pfurtscheller, G., Müller, K.-R.: Adaptation in Brain-Computer
Interfaces. In: Dornhege, G., Millán, J.d.R., Hinterberger, T., McFarland, D., Müller, K.-R.
(eds.) Towards Brain-Computer Interfacing. MIT Press, Cambridge, Massachusetts (2007)
31. Chavarriaga, R., Ferrez, P.W., and Millán, J.d.R.: To Err is Human: Learning from Error
Potentials in Brain-Computer Interfaces. In: 1st International Conference on Cognitive
Neurodynamics, Shanghai, China (2007)
32. Holroyd, C.B., Coles, M.G.H.: The Neural Basis of Human Error Processing: Reinforcement
Learning, Dopamine and the Error-related Negativity. Psychological Rev. 109, 679–709 (2002)
33. Pascual-Marqui, R.D.: Standardized Low Resolution Brain Electromagnetic Tomography
(sLORETA): Technical Details. Meth. Find. Exp. Clin. Pharmacol. 24D, 5–12 (2002)
34. Babiloni, F., Babiloni, C., Locche, L., Cincotti, F., Rossini, P.M., Carducci, F.: High-
Resolution Electroencephalogram: Source Estimates of Laplacian-Transformed
Somatosensory-Evoked Potentials using Realistic Subject Head Model Constructed from
Magnetic Resonance Imaging. Med. Biol. Eng. Comput. 38: 512–519 (2000)
José del R. Millán is a senior researcher at the IDIAP Research Institute, where he explores the use of brain signals for multimodal interaction and, in particular, non-invasive brain-controlled robots and prostheses. Dr. Millán is also an adjunct professor at the Swiss Federal Institute of Technology in Lausanne (EPFL). He received his Ph.D. in computer science from the Univ. Politècnica de Catalunya (Barcelona, Spain) in 1992. His research on brain-computer interfaces was nominated finalist of the European Descartes Prize 2001 and he has been named “Research Leader 2004” by the journal Scientific American for his work on brain-controlled robots.

Pierre W. Ferrez is a research scientist at the IDIAP Research Institute in Martigny, Switzerland, where he works in the brain-computer interface team. He received his Ph.D. from the Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, in 2007. Prior to joining IDIAP in 2004, he was a research assistant at the Institute for Research in Ophthalmology (IRO) in Sion, Switzerland. His research interests include biological signal acquisition and processing and non-invasive brain-computer interfaces, with a focus on the integration of high-level cognitive states, such as error-related potentials, in EEG-based brain-computer interfaces.

Ferran Galán was born in Barcelona, Spain, in 1976. He received his degree in Psychology from the University of Barcelona in 2000 and is currently finishing his Ph.D. in Biometrics and Statistics at the same university. Since 2005 he has been with the IDIAP Research Institute, where he is a Research Assistant in the BCI group. His research interests span brain-computer interfaces, pattern recognition, neuroscience and human-computer interaction. Recently his research has focused on the asynchronous detection and classification of induced oscillatory brain activity recorded from the EEG for continuous control of mobile devices. He was one of the winners of the BCI Competition III.

Eileen Lew was born in Sarawak, on the island of Borneo, Malaysia, in 1980. She received her Bachelor degree in Computer Engineering in 2002 and her Masters degree in Electrical Engineering, by research in the field of artificial intelligence, pattern recognition and image processing, in 2004. Since 2005 she has been with the IDIAP Research Institute while pursuing her Ph.D. at the Ecole Polytechnique Fédérale de Lausanne (EPFL). Her work focuses on brain-computer interface control of wheelchairs and robotic devices, exploring probabilistic approaches to deal with the uncertainty of mental commands and improve the BCI subject’s level of control.

Ricardo Chavarriaga is a scientific researcher at the IDIAP Research Institute. He received an engineering degree in electronics from the Pontificia Universidad Javeriana in Cali, Colombia, in 1998, and a Ph.D. in Computational Neuroscience from the Ecole Polytechnique Fédérale de Lausanne (EPFL), Switzerland, in 2005. His work is focused on the analysis of brain electrical signals and the design of brain-computer interfaces. In particular, he is interested in the study of the neurophysiological correlates of human cognitive processing and their potential use in human-machine interaction.