Brain Computer Interface - MPRA
Introduction:
Science fiction writing throughout the twentieth century is replete with references to brain-controlled devices and machines with "brains." Whether it is Forbidden Planet, Star Trek, or Star Wars, artificial intelligence and the brain-machine interface have remained attractive illusions. Artificial intelligence entered the culture much more quickly with the advent and rapid development of computers. The brute calculating power of artificial intelligence was demonstrated in such trivial pursuits as the defeat of chess World Champion Garry Kasparov by IBM's Deep Blue in 1997. However, true intelligence and insight are still lacking.
Progress on the brain-machine interface was started by working backward. Throughout the last century, stimulation of the brain revealed interesting aspects of how machines could control brain activities. Classic demonstrations of this were developed by Dr. Jose Delgado, who was able to stop a charging bull with electric current generated at specific sites within the brain. The stimulation would not harm the animal, yet gave at least the illusion of behavioral control. A more practical extension of this eventually developed into the deep brain stimulator, which is capable of controlling a variety of abnormal movements and is beginning to be applied to epilepsy, pain, psychiatric conditions and other diseases.
The development of noninvasive techniques has opened the possibility of brain-computer interface by indirect means.
The direct brain-machine interface seemed doomed for the next century, but in 1998, it suddenly became a reality.[14]
These studies were an outgrowth of a great deal of work on primates over a long period of time, and demonstrated
the ability of single neurons to change their firing pattern over time in a plastic manner.[13,16,18] The ability to record
signals from the same neuron over a long period of time led to speculation that the neuronal activity of an individual
could be used to control machines. The obvious machine to control was a computer. The obvious first approach
would be to restore communication to the patient who has lost that ability.
Brain Computer Interfaces (BCIs) are intended to enable both severely motor-disabled and healthy people to operate electrical devices and applications through conscious mental activity. Our approach is based on an artificial neural network that recognizes and classifies different brain activation patterns associated with carefully selected mental tasks. By this means we aim to develop a robust classifier with a short classification time and, most importantly, a low rate of false positives (i.e., wrong classifications). Figure 1 demonstrates a BCI in use.
Figure 1. The user wears an EEG cap. By thinking about left and right hand movement, the user controls the virtual keyboard with her brain activity. Copyright © 2003 by LCE.
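As a rough illustration of the kind of classifier described above (and not the group's actual implementation), the sketch below trains a small feed-forward neural network on placeholder feature vectors; the feature dimensions, class labels, and data are assumptions chosen purely for illustration.

```python
# Minimal sketch: training a small feed-forward neural network to classify
# EEG feature vectors into mental-task classes (illustrative data shapes only).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder feature matrix: 400 trials x 32 features (e.g. band powers per channel).
X = rng.normal(size=(400, 32))
# Placeholder labels: 0 = "left hand imagery", 1 = "right hand imagery".
y = rng.integers(0, 2, size=400)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

With real, carefully selected mental-task features the held-out accuracy, rather than the classification of random data shown here, is what determines the rate of false positives.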
Our group is especially interested in the neurophysiological basis of BCIs. We believe that before the signals can be classified they need to be fully understood. We are especially interested in the activation of the motor cortex. Like most BCI groups, we measure the electric activity of the brain using electroencephalography (EEG). In addition to EEG, we measure the magnetic activity of the brain with magnetoencephalography (MEG). MEG signals are more localized than EEG signals and thus give us more information about the brain activity related to, e.g., finger movements. We study the signals using, e.g., time-frequency representations (TFRs) and pick out important features from them. Figure 2 shows an example of a TFR.
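To illustrate what a TFR such as the one in Figure 2 contains, the sketch below computes a short-time Fourier spectrogram of a single synthetic channel; the sampling rate and the simulated 10 Hz rhythm are assumptions for illustration, not the actual recording parameters.

```python
# Minimal sketch: a time-frequency representation (TFR) of one MEG/EEG channel
# via a short-time Fourier spectrogram, using a synthetic signal.
import numpy as np
from scipy.signal import spectrogram

fs = 250.0                      # assumed sampling rate in Hz
t = np.arange(0, 6, 1 / fs)     # 6 s of data
# Synthetic 10 Hz rhythm that attenuates after a "movement onset" at t = 3 s,
# mimicking desynchronization over the motor cortex.
amp = np.where(t < 3, 1.0, 0.3)
x = amp * np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)

f, tt, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=96)
# Sxx is the TFR: frequency along one axis, time along the other.
# Rows with 8 Hz <= f <= 12 Hz hold the power of the 10 Hz rhythm over time.
print(Sxx.shape, f[(f >= 8) & (f <= 12)])
```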
Currently we are developing a BCI that measures the signals produced in the brain with magnetoencephalography (MEG). Most BCI groups measure the electric activity of the brain using electroencephalography (EEG). MEG signals are more localized than EEG signals and thus easier to classify. We are extending our research on BCIs to simultaneous EEG and MEG recordings.
Figure 2. TFR of a MEG sensor on top of the motor cortex. The activation of the brain can be plotted with the time information on the x-axis and the frequency information on the y-axis. The colour scale represents the power of the activation. In this TFR the subject began to move his right finger at time point zero. Strong activation in the 10-
Over the last two decades, brain-computer interface (BCI) has emerged as a new frontier in assistive
technology (AT) since it could provide an alternative communication channel between a user’s brain
and the outside world [WOL02]. Other terms that are also used in the literature for referring to a BCI
system include: brain interface (BI), direct brain interface (DBI), and brain machine interface (BMI).
A BCI system allows individuals with motor disabilities to control objects in their environments (such
as a light switch in their room, a television, wheelchairs, neural prostheses and computers) using their
brain signals only. This could be accomplished by measuring specific features of the user’s brain
activity that relate to his/her intent to perform the control. This specific type of brain activity is termed
a “neurological phenomenon”. As an example, when a particular movement such as right index finger
flexion is performed, specific neurological phenomena that correspond to that movement are
generated. The corresponding neurological phenomena are then translated into signals that are
eventually used to control devices [MAS07].
Figure 1 shows a traditional BCI system in which a person controls a device in an operating
environment (e.g., a powered wheelchair in a house) through a series of functional components
(revised from [FAT06]). In this context, the user’s brain activity is used to generate intentional control (IC) commands that
operate the BCI system. The user monitors the state of the device to determine the result of his/her
control efforts.
The building components of a BCI system (shown in Figure 1) have the following tasks: the electrodes
placed on the head of the user record the brain signal (e.g., electroencephalography (EEG) signals
from the scalp, electrocorticography (ECoG) signals from the brain or neuronal activity recorded using
microelectrodes implanted in the brain). The ‘artifact processor’ block deals with artifacts in the EEG
signals after the signals have been amplified. This block can either remove artifacts from the EEG
signals or can simply mark some EEG epochs as artifact-contaminated. The ‘feature generator’ block
transforms the resultant signals into feature values that relate to the underlying neurological
phenomena employed by the user for control. For example, if the user is using the power of his/her Mu
(8-12Hz) rhythm for the purpose of control, the feature generator could continually generate features
relating to the power-spectral estimates of the user’s Mu rhythms. The feature generator generally
consists of three components: the ‘signal enhancement’, the ‘feature extraction’, and the ‘feature
selection’ components, as shown in Figure 1.
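As an illustration of the feature-generator step, the sketch below estimates the power of the Mu (8-12 Hz) rhythm of one EEG epoch with Welch's power-spectral estimate; the sampling rate, epoch length, and synthetic signal are assumptions, not a prescribed implementation.

```python
# Minimal sketch of the 'feature generator': estimating Mu (8-12 Hz) band power
# of a single EEG epoch from Welch's power-spectral density estimate.
import numpy as np
from scipy.signal import welch

def mu_band_power(epoch, fs=250.0, band=(8.0, 12.0)):
    """Return the mean power-spectral density of `epoch` inside `band` (Hz)."""
    f, psd = welch(epoch, fs=fs, nperseg=min(len(epoch), 256))
    mask = (f >= band[0]) & (f <= band[1])
    return psd[mask].mean()

# Illustrative use on a synthetic 1 s epoch (the sampling rate is an assumption).
fs = 250.0
t = np.arange(0, 1, 1 / fs)
epoch = np.sin(2 * np.pi * 10 * t) + 0.2 * np.random.randn(t.size)
print("Mu band power estimate:", mu_band_power(epoch, fs))
```

In a running system this feature would be recomputed continually on a sliding window of the enhanced signal, producing the stream of feature values passed to the feature translator.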
The ‘feature translator’ block translates the features into logical control signals, e.g., 0 and 1, where 0
denotes no control (NC) and 1 denotes intentional control (IC). The translation algorithm uses linear classification methods (e.g., linear
discriminant analysis) or nonlinear ones (e.g., neural networks). As shown in Figure 1 , a feature
translator may consist of two components: ‘feature classification’ and ‘post-processing’. The main aim
of the feature classification component is to classify the features into logical control signals. Post-
processing methods such as a moving average may be used after feature classification to reduce the
number of activations of the system.
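A minimal sketch of such a feature translator, assuming pre-computed feature vectors and using linear discriminant analysis followed by a moving-average post-processing step, is given below; the data, window length, and threshold are illustrative choices rather than recommended settings.

```python
# Minimal sketch of the 'feature translator': classify feature vectors into
# logical control signals (0 = NC, 1 = IC) with linear discriminant analysis,
# then smooth the output with a moving average to reduce spurious activations.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Placeholder features: two slightly separated classes standing in for NC and IC.
X_nc = rng.normal(loc=0.0, size=(200, 4))
X_ic = rng.normal(loc=1.0, size=(200, 4))
X = np.vstack([X_nc, X_ic])
y = np.concatenate([np.zeros(200), np.ones(200)])

lda = LinearDiscriminantAnalysis().fit(X, y)
raw_output = lda.predict(X)            # stream of 0/1 logical control signals

# Post-processing: moving average over the last 5 outputs, thresholded at 0.5,
# so that isolated misclassifications do not activate the device.
window = 5
smoothed = np.convolve(raw_output, np.ones(window) / window, mode="same") > 0.5
print("raw activations:", int(raw_output.sum()),
      "smoothed activations:", int(smoothed.sum()))
```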
The control interface translates the logical control signals from the feature translator into semantic
control signals that are appropriate for the particular type of device used. Finally, the device controller
translates the semantic control signals into physical control signals that are used by the device. For
more detail refer to [MAS07].
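The sketch below shows one hypothetical way a control interface and device controller could map logical control signals to semantic and then physical commands for a powered wheelchair; all names, commands, and values are invented for illustration and are not taken from [MAS07].

```python
# Minimal sketch (hypothetical mapping): a control interface turning logical
# control signals into semantic commands, and a device controller turning those
# commands into physical actions for a powered wheelchair.
LOGICAL_TO_SEMANTIC = {0: "no_action", 1: "move_forward"}

def control_interface(logical_signal):
    """Translate a logical control signal (0 = NC, 1 = IC) into a semantic command."""
    return LOGICAL_TO_SEMANTIC.get(logical_signal, "no_action")

def device_controller(command):
    """Placeholder device controller: map a semantic command to a physical action."""
    if command == "move_forward":
        print("motor controller: set wheel velocity to 0.5 m/s")  # illustrative value
    else:
        print("motor controller: idle")

# Example: feed a short stream of feature-translator outputs through the chain.
for s in [0, 0, 1, 0, 1]:
    device_controller(control_interface(s))
```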
The rest of this article explores some frequently asked questions (FAQs) about BCI systems.
Many people think that BCI systems can be used by anyone. This is simply not true (at least at this
stage). The potential users of BCI systems include [WOL06]:
Currently, the second class is the main target of BCI communication and applications. This is because
BCI systems are designed for individuals with motor disabilities to communicate with the outside
world. The number of control options that BCI systems currently provide is also very limited. Thus,
only an individual who really needs to use a BCI system (and does not have any other useful
communication channel) may be willing to use one in the long run.
It has been stated that the specific cortical patterns associated with the variation of the parameters of
motor control during motor imagery and motor execution are the same [ROM00].
What is a Brain-Computer Interface?
An electroencephalogram-based brain-computer interface (BCI) provides a new communication
channel between the human brain and a computer. Patients who suffer from severe motor
impairments (late-stage amyotrophic lateral sclerosis (ALS), severe cerebral palsy, head trauma
and spinal injuries) may use such a BCI system as an alternative form of communication by mental
activity.
Physiological background
It is a well-known phenomenon that EEG rhythmic activities observed over motor and related areas of
the brain disappear about 1 second prior to movement onset. Hence one can predict from the
spatio-temporal EEG pattern that, for example, a hand movement will be performed. It has also been
shown by various groups of researchers that this so-called desynchronized EEG is also observed during
imagination of a hand movement.
The activation of hand-area neurons either by the preparation for a real hand movement or by
imagination of a hand movement is accompanied by a circumscribed event-related desynchronization
(ERD) over the hand area. Depending on the type of motor imagery, different EEG patterns can be
obtained. Hence, a circumscribed ERD is also found over the foot area in foot movement and foot
imagination experiments.
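A common way to quantify ERD (sign conventions differ between papers) is the relative band-power change with respect to a pre-event baseline. The short sketch below, with purely illustrative numbers, computes it in a convention where a power decrease yields a positive ERD percentage.

```python
# Minimal sketch: quantifying event-related desynchronization (ERD) as the
# percentage band-power decrease relative to a pre-event baseline period.
def erd_percent(baseline_power, event_power):
    """ERD% = (baseline - event) / baseline * 100; in this convention a positive
    value means a power decrease (desynchronization) during preparation/imagery."""
    return (baseline_power - event_power) / baseline_power * 100.0

# Illustrative numbers only: band power drops from 4.0 to 1.5 (arbitrary units)
# about one second before movement onset.
print(f"ERD: {erd_percent(4.0, 1.5):.1f} %")   # -> ERD: 62.5 %
```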
Experimental paradigm and recording setup for the BCI data acquisition
Experimental paradigm
The data set used for classification was acquired during a brain-computer interface experiment with
feedback. The session was divided into 4 experimental runs of 40 trials with randomized directions of
the cues (20 down and 20 right) and lasted about 1 hour (including electrode application, breaks
between runs and experimental preparation). The subject sat in a comfortable armchair 1.5 meters in
front of a computer monitor and was instructed not to move, to keep both arms and hands relaxed, and to
maintain fixation at the center of the monitor throughout the experiment.
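For illustration only, the sketch below generates a randomized cue sequence matching the paradigm described above (four runs of 40 trials, with 20 "down" and 20 "right" cues per run); the seeding scheme is an arbitrary choice, not part of the original experimental software.

```python
# Minimal sketch: generating the randomized cue sequence described above,
# i.e. 4 runs of 40 trials each with 20 "down" and 20 "right" cues per run.
import random

def make_run(n_down=20, n_right=20, seed=None):
    """Return one run's cue order as a shuffled list of 'down'/'right' labels."""
    cues = ["down"] * n_down + ["right"] * n_right
    random.Random(seed).shuffle(cues)
    return cues

# Four runs with different (arbitrary) seeds so each run has its own order.
runs = [make_run(seed=run_index) for run_index in range(4)]
print(len(runs), "runs of", len(runs[0]), "trials; run 1 starts with:", runs[0][:5])
```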