A Seminar Report On Brain Computer Interface
Brain-Computer Interfaces
This chapter gives an overview of the current state of the art in brain-computer
interfaces, first laying out the technology used for brain-computer interfaces
and then reviewing several case studies.
A lot of research has been done on brain-computer interfaces, and many definitions have
been given. Wolpaw et al. (2002) define a brain-computer interface as “a device that
provides the brain with a new, non-muscular, communication and control channel”.
Levine (2000) says “A direct brain-computer interface accepts voluntary commands
directly from the human brain without requiring physical movement and can be used to
operate a computer or other technologies.” Kleber and Birbaumer (2005) say that “A
brain-computer interface provides users with the possibility of sending messages and
commands to the external world without using their muscles”.
The following is the definition for brain-computer interfaces that will be used in this
thesis.
interaction to take place. This should cover any desirable interaction a user could
want with a computer, such as menu navigation, text input, and pointer control.
This interaction should neither strain nor monopolize the mind, and should feel
as natural as moving an arm or a leg.
The study of brain activity has shown that when a person performs a certain task
such as stretching the right arm, a ‘signal’ is created in the brain and is sent
through the nervous system to the muscles. Research has shown that when the
same person moves his arm in the same way ten times, there is clearly a pattern
to the neural activity. Scientists (Carmen et al., 2004) have therefore concluded
that if one is able to read the brain activity and scan for certain specific patterns,
this information can then be used for interaction. The more specific the
registering of the neural activity, the more precise and detailed the possible
interaction.
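As an illustration of the idea, a recorded activity pattern can be matched against averaged templates of the patterns seen for known tasks. The following sketch uses invented channel counts, task names, and random data purely to show the template-correlation approach; it is not any particular published method.

```python
import numpy as np

# Hypothetical illustration: classify a new brain-activity recording by
# correlating it against stored per-task templates (averaged patterns).
# Channels, tasks, and data are invented for this sketch.

def make_template(trials):
    """Average repeated recordings of the same task into one template."""
    return np.mean(trials, axis=0)

def classify(recording, templates):
    """Return the task whose template correlates best with the recording."""
    scores = {
        task: np.corrcoef(recording.ravel(), tmpl.ravel())[0, 1]
        for task, tmpl in templates.items()
    }
    return max(scores, key=scores.get)

rng = np.random.default_rng(0)
# Two invented tasks, each with a distinct underlying pattern plus noise.
base = {"stretch_right_arm": rng.normal(size=(8, 100)),
        "stretch_left_arm": rng.normal(size=(8, 100))}
# Ten noisy repetitions of each task are averaged into a template,
# mirroring the repeated-movement experiments described above.
templates = {t: make_template([p + 0.3 * rng.normal(size=p.shape)
                               for _ in range(10)])
             for t, p in base.items()}

# A fresh noisy recording of the right-arm task matches its template.
new_trial = base["stretch_right_arm"] + 0.3 * rng.normal(size=(8, 100))
print(classify(new_trial, templates))  # → stretch_right_arm
```

The more channels and the cleaner the recording, the more reliably such patterns separate, which is the sense in which more specific registering of neural activity permits more detailed interaction.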
There are several methods of monitoring brain activity. These methods can be
divided into three distinct groups, namely external, surgical, and
nanotechnological [v].
Firstly, there are several external or non-invasive methods of monitoring brain
activity, including Positron Emission Tomography (PET) [ii], functional Magnetic
Resonance Imaging (fMRI) [iii], Magnetoencephalography (MEG) [iv], and
Electroencephalography (EEG) [i] techniques, which all have advantages and
disadvantages.
In practice only EEG yields data that is easily recorded with relatively
inexpensive equipment, is rather well studied, and provides high temporal
resolution (Krepki et al., 2003). EEG is therefore the only external method to be
researched in this thesis.
These nanotechnological wires are far thinner than even the smallest blood
vessels, so they could be guided to any point in the body without blocking blood flow.
This finding has important implications for understanding the adaptability of the
primate brain and promises great possibilities for giving paraplegics physical
control over their environment.
Led by neurobiologist Miguel Nicolelis (2002) of Duke’s Center for
Neuroengineering, the experiments consisted of implanting an array of
microelectrodes, thinner than a human hair, into the frontal and parietal lobes of
the brains of two female rhesus macaque monkeys. A specially developed
computer system analyzed the faint signals from the electrode arrays and
recognized patterns that represented particular movements of a monkey’s arm.
Initially the monkeys were taught to use a joystick to position a cursor over a
target on a video screen and to grasp the joystick with a specified force (Figure
5-1).
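The analysis step described here, recognizing arm movements in multi-electrode signals, is often implemented as a linear decoder fitted to recorded sessions. The following is a minimal sketch on synthetic data; the neuron count, the linear tuning model, and the noise level are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical sketch: learn a linear map from the firing rates of many
# recorded neurons to a 2-D cursor/arm position. All data is synthetic.

rng = np.random.default_rng(1)
n_samples, n_neurons = 500, 100

# Assumed tuning: each neuron's rate is a linear function of position.
true_weights = rng.normal(size=(n_neurons, 2))
positions = rng.uniform(-1, 1, size=(n_samples, 2))      # x, y targets
rates = positions @ true_weights.T + 0.1 * rng.normal(size=(n_samples, n_neurons))

# Fit the decoder by least squares: positions ≈ rates @ W
W, *_ = np.linalg.lstsq(rates, positions, rcond=None)

# Decode a held-out sample and compare with the true position.
test_pos = np.array([[0.5, -0.25]])
test_rates = test_pos @ true_weights.T
decoded = test_rates @ W
print(np.round(decoded, 2))
```

Real decoders must also track the non-stationarity of neural recordings, but the core step, regressing intended movement onto population firing rates, is the one sketched here.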
After this initial training, the researchers made the cursor more than a simple
display by incorporating the dynamics, such as inertia and momentum, of a robot
arm functioning in another room. Performance worsened initially, but the
monkeys soon learned to deal with these new dynamics and became capable of
controlling the cursor, which reflected the movements of the robot arm.
Following this, the researchers removed the joystick and the monkeys continued
moving their arms in the air where the joystick used to be, still controlling the
robot arm in the other room. After a few days, the monkeys realized they did not
need to move their own arms in order to move the cursor. They kept their arms at
their sides and controlled the robot arm with only their brain and visual
feedback.
The extensive data drawn from these experiments showed that a large percentage
of the neurons became more ‘entrained’. In other words, their firing became
more correlated with the operation of the robot arm than with the monkey’s own arm.
According to Nicolelis (2002), this showed that the brain cells originally used for
the movement of their own arm had now shifted to controlling the robot arm. The
monkeys could still move their own arms, but the control for that had been
shifted to other brain cells.
The device, which is implanted underneath the skull in the motor cortex, consists
of a computer chip that is essentially a 2 mm by 2 mm
array of 100 electrodes (Figure 5-3).
Richard Martin (2005) visited Nagle for an article in Wired Magazine. After his
accident in 2001, Nagle begged to be Cyberkinetics’ first patient. With his
young age and his strong will to walk again, he turned out to be an ideal
patient. The chip was surgically implanted (Figure 0-3: array position), and,
after a period of recovery, the tests could start. Nagle had to think ‘left’
and ‘right’, the way he was able to move his hand before being paralysed. After
only several days he succeeded in controlling the cursor on a computer.
When asked what he was thinking, he replied: “For a while I was thinking
about moving the mouse with my hand. Now, I just imagine moving the cursor
from place to place.” His brain has assimilated the system. Nagle is now able to
play Pong (and even win), read e-mail, control a television set, and control a
robot hand.
The three chosen commands were used to steer a robot forwards, right, and
left. The subjects trained over a period of days; after the training, they were
able to steer the robot through a maze. Correct recognition of the commands was
above 60%, whereas errors were below 5%.
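Recognition above 60% with errors below 5% suggests that the classifier abstained on ambiguous trials rather than guessing: a confidence threshold with a reject option produces exactly this behaviour. A minimal sketch follows; the command names, scores, and threshold value are invented for illustration.

```python
# Sketch of a confidence-thresholded command classifier with a reject
# option: ambiguous trials produce "no command" instead of a wrong one,
# keeping the error rate far below (1 - recognition rate).

COMMANDS = ("forward", "left", "right")

def decide(scores, threshold=0.6):
    """Pick the best-scoring command, or abstain if confidence is low."""
    best = max(COMMANDS, key=lambda c: scores[c])
    return best if scores[best] >= threshold else None  # None = no command

# Confident trial: steer the robot.
print(decide({"forward": 0.8, "left": 0.1, "right": 0.1}))   # → forward
# Ambiguous trial: abstain rather than risk a wrong turn.
print(decide({"forward": 0.4, "left": 0.35, "right": 0.25})) # → None
```

For a steering task, an abstention merely delays the robot, whereas a wrong turn in a maze is costly, which is why such designs prefer low error rates over high recognition rates.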
Neural/BIO Feedback
In much the same way as you can directly control machines with your brain, you
can also receive feedback directly into the brain. Nicolelis (2004) is currently
working on an artificial sense of touch for robotic arms. Successful experiments
have also been carried out with robotic eyes connected to the brain. Veraart et al.
(2002) have developed a system whereby a video feed is sent to electrodes
activating the optic nerve. This currently gives blind people rudimentary
vision.
Controlling a system directly with one’s brain raises certain ethical issues. When
you can control external systems in the same way as you control your own body,
the question arises of whether or not this system becomes part of you. Also,
the idea of integrating technology with one’s body can scare people off. These
issues are considered in this chapter.
As seen in the BCI examples of the research done by Nicolelis (2002) and the
Braingate System developed by Cyberkinetics (2005), the subjects using the
brain-computer interfaces assimilate the functionality into their brains. In fact,
the brains have reorganised the functions of specific neurons to improve the
functionality of the interface, at the same time moving the control of the original
appendages to other areas of the brain. The brain adapts as though the body
has an extra limb. “From a philosophical point of view, we are saying that the
sense of self is not limited to our capability for introspection, our sense of our
body limits, and to the experiences we’ve accumulated.” said Nicolelis (2002).
“It really incorporates every external device that we use to deal with the
environment.” One could indeed ask: “Is it right to alter one’s Self?” But, does a
blind man not alter himself by using a cane?
In these cases too, a physical connection is made between the body and
technology, introducing the ‘cyborg’ into reality. The term ‘cyborg’ was
coined by Clynes and Kline in 1960. “A cyborg is a combination of human
and machine in which the interface becomes a ‘natural’ extension that does not
require much conscious attention, such as when a person rides a bicycle.”
(Starner 1999)
Warwick (2003) suggests that, once the technology to directly interface with
the brain exists, technology becomes part of the ‘self’. People could then in fact
be ‘enhanced’, and this raises the issue of whether cyborg morals would be the
same as human morals. Warwick also states a ‘murky’ view, that cyborgs are
likely to be networked, and one could ask if it is morally acceptable for cyborgs
to give up their individuality.
Other questions arise. Should all humans have the right to be ‘upgraded’ to
cyborgs? How will the relationship between humans and cyborgs develop?
These are not current issues, but one should consider them when working with
brain-computer interfaces.
Conclusion
The tactile interfaces, too, do not deliver all desirable characteristics, as the
existing interfaces are fairly restrictive: a device either needs to be held in a
hand, or the hands are covered in interface technology.
The ‘perfect’ brain computer interface would indeed fulfil all the desirable
characteristics for wearable computers. However, in the current state of the art,
the ‘perfect’ brain computer interface does not exist. This is in spite of the
promising advances currently being made. The surgically implanted devices
show the best results for successful brain-computer interfaces, but the risk and
social antipathy are too great for them to be considered for non-medical use. For
paraplegics, however, the life-enhancement value is so great that the risks and
fears associated with an operation do not matter.
The nanotechnological method promises the most precise method of
interaction. Even though it is unknown how the general public would perceive
such technology, it could be a far ‘cleaner’ and safer method than surgical
implants. The development still has a long way to go, however, and cannot
currently be considered ‘the way to go’ for interfacing with wearable computers.
This leaves the non-invasive method of EEG interfaces. The current level of
precision of such interfaces is sufficiently high for simple interfaces, such as
four-way menus. The reaction speed is still very slow, but, with smart software
algorithms, researchers are finding more and more efficient and precise
interaction solutions. The electrodes required for EEG reading also need further
development if people are to feel comfortable walking around
with them.
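One example of such a ‘smart software algorithm’ is to accumulate several noisy per-window classifications and only commit to a menu option once it clearly leads, trading reaction speed for precision. A hypothetical sketch, with invented option names and vote margin:

```python
from collections import Counter

# Hypothetical sketch: accumulate noisy per-window classifier outputs for
# a four-way menu until one option leads by a clear margin. More windows
# means a slower but more precise selection.

def select(stream, lead=3):
    """Return the first option that leads all others by `lead` votes."""
    votes = Counter()
    for option in stream:
        votes[option] += 1
        ranked = votes.most_common(2)
        top, top_n = ranked[0]
        second_n = ranked[1][1] if len(ranked) > 1 else 0
        if top_n - second_n >= lead:
            return top
    return None  # stream ended with no clear winner

# A noisy stream of per-window classifications for a four-way menu.
windows = ["up", "down", "up", "up", "left", "up", "up", "right", "up"]
print(select(windows))  # → up
```

Raising the `lead` margin lowers the error rate at the cost of slower selections, which mirrors the speed/precision trade-off described above.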
Further development could, however, greatly improve the current state of the art.
A great challenge for the development of brain-computer interfaces is the fact
that the field is exceptionally multi-disciplinary, covering fields such as
neurobiology, nanotechnology, engineering, mathematics, computer science, and
interaction design. Moreover, the current technology is a long way from the ‘perfect’
brain-computer interface.
References
www.ieeexplore.ieee.org
www.ask.com
www.wikipedia.org
www.howstuffworks.com
www.cyberkineticsinc.com
[i] “Electroencephalography is the neurophysiologic measurement of the electrical activity of the brain
by recording from electrodes placed on the scalp or, in special cases, on the cortex. The resulting
traces are known as an electroencephalogram (EEG) and represent so-called brainwaves. This
device is used to assess brain damage, epilepsy, and other problems. In some jurisdictions it is used
to assess brain death. EEG can also be used in conjunction with other types of brain imaging.”
[ii] Positron emission tomography (PET) is a nuclear medicine medical imaging technique which
produces a three-dimensional image or map of functional processes in the body. (www.wikipedia.org)
[iii] Functional Magnetic Resonance Imaging (fMRI) describes the use of MRI to measure
hemodynamic signals related to neural activity in the brain or spinal cord of humans or other animals.
(www.wikipedia.org)
[iv] Magnetoencephalography (MEG) is the measurement of the magnetic fields produced by electrical
activity in the brain, usually conducted externally, using extremely sensitive devices such as SQUIDs.
Because the magnetic signals emitted by the brain are on the order of a few femtotesla (1 fT = 10⁻¹⁵ T),
shielding from external magnetic signals, including the Earth’s magnetic field, is necessary. An
appropriate magnetically shielded room can be constructed from Mu-metal, which is effective at
reducing high-frequency noise, while noise cancellation algorithms reduce low-frequency common-mode
signals. (www.wikipedia.org)
[v] “Nanotechnology comprises technological developments on the nanometer scale, usually 0.1 to 100
nm. (One nanometer equals 10⁻⁹ m: one thousandth of a micrometer or one millionth of a millimeter.)
The term has sometimes been applied to microscopic technology.” (www.wikipedia.org)