
SEMINAR REPORT

On

“BRAIN COMPUTER
INTERFACE”
Submitted in partial fulfilment of the requirement for the degree of

Bachelor of Technology

In

Computer Science & Engineering

By:
Ashita Singh Sikriwal
Regd No: 1601287048

Guided By:
Prof. Vikash Singh (Asst. Professor)

GITA, BHUBANESWAR

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

GITA, BHUBANESWAR
(At: Badaraghunathpur, P.O: Janla via Bhubaneswar, Pin-752054, Dist: Khurda, Odisha)

Certificate
This is to certify that the Seminar Report entitled “BRAIN COMPUTER
INTERFACE” by Ashita Singh Sikriwal, bearing Roll No. 161023 and
University Regd. No. 1601287048 of the 2016-2020 batch, was completed in
partial fulfilment of the degree of Bachelor of Technology in Computer Science
& Engineering of Biju Patnaik University of Technology, Odisha. To the best of
our knowledge, no part of this seminar report has been submitted to any other
University or Institution for the award of any degree or otherwise.

Prof. (Dr.) Tarini Prasad Panigrahi
(Head of the Department, CSE)

Asst. Prof. Vikash Singh
(Seminar Guide)

DEPARTMENT OF COMPUTER SCIENCE & ENGINEERING

GITA, Bhubaneswar
(At: Badaraghunathpur, P.O: Janla via Bhubaneswar, Pin-752054, Dist: Khurda, Odisha)

Acknowledgement
I am extremely grateful to Prof. (Dr.) Tarini Prasad Panigrahi, H.O.D. of the C.S.E.
Department, for giving his consent to carry out this seminar.

I would like to thank Asst. Prof. Vikash Singh and the other teaching staff of the C.S.E.
Department for their active involvement in the entire process.

Last but not least, I take this opportunity to thank my parents and friends for
their constant support and co-operation.

Name: ASHITA SINGH SIKRIWAL

Regd No: 1601287048

Email Id: [email protected]

PAGE INDEX
ABSTRACT

1. INTRODUCTION

2. WORKING ARCHITECTURE
2.1 Introduction
2.2 Invasive BCI
2.3 Partially Invasive BCI
2.4 Non-Invasive BCI

3. THE CURRENT BCI TECHNIQUES

3.1 Introduction
3.2 P300 Detection
3.3 EEG mu-Rhythm Conditioning
3.4 VEP Detection
3.5 EEG Pattern Mapping
3.6 Detecting Lateral Hemisphere differences

4. BRAIN GATE

4.1 DARPA

5. CONCLUSION

6. REFERENCES

ABSTRACT

Brain–computer interface (BCI) technology translates voluntary choices into
active commands using brain activity. In particular, brain electrical signals
known as event-related potentials (ERPs), produced a few hundred milliseconds
after cognitive events, are frequently used to activate commands, with the
visual P300-based BCI system being the main interface used for communication
and control purposes. In the P300-speller BCI, the user’s task consists of
viewing a 6 × 6 matrix of rows and columns and focusing attention on a desired
character (the target, a rare event). Since its introduction, several authors have
carried out research on optimizing the P300 visual speller BCI to increase
accuracy or improve usability (e.g., [1], [2]). Until now, no studies have been
conducted on manipulating speller sizes to ensure the best conditions for user
experience.

Brain-computer interfaces are systems that use signals recorded from the brain
to enable communication and control applications for individuals with impaired
function. This technology has developed to the point that it is now being used
by individuals who can actually benefit from it. However, several outstanding
issues prevent widespread use. These include the ease of obtaining high-quality
recordings by home users, the speed and accuracy of current devices, and
adapting applications to the needs of the user. Some of these unsolved issues
are discussed in this report.

Brain-computer interfaces (BCIs) give their users communication and control
channels that do not depend on peripheral nerves and muscles. The user’s intent
is decoded from electrophysiological or other measures of brain activity. This
brain activity is recorded noninvasively by electrodes on the scalp or invasively
by electrodes placed on the brain surface or within the brain. BCIs can enable
people who are severely paralyzed by amyotrophic lateral sclerosis, brain stem
stroke, or other disorders to communicate their wishes, operate word processing
or other computer programs, or even control a neuroprosthesis. With further
development and clinical validation, BCIs should significantly improve the lives
of people with severe disabilities.

Chapter 1
INTRODUCTION
The man-machine interface has been one of the growing fields of research and
development in recent years. Most of the effort has been dedicated to the design
of user-friendly or ergonomic systems by means of innovative interfaces such
as voice recognition and virtual reality. A direct brain-computer interface would
add a new dimension to man-machine interaction. A brain-computer interface,
sometimes called a direct neural interface or a brain-machine interface, is a
direct communication pathway between a human or animal brain (or brain cell
culture) and an external device.

In one-way BCIs, computers either accept commands from the brain or send
signals to it, but not both. Two-way BCIs would allow brains and external
devices to exchange information in both directions, but have yet to be
successfully implanted in animals or humans.
The brain-computer interface is a staple of science fiction writing. In its earliest
incarnations no mechanism was thought necessary, as the technology seemed so
far-fetched that no explanation was likely. As more became known about the
brain, however, the possibility has become more real and the science fiction
more technically sophisticated.

Recently, the cyberpunk movement has adopted the idea of 'jacking in', sliding
'biosoft' chips into slots implanted in the skull (Gibson, 1984). Although such
biosofts are still science fiction, there have been several recent steps toward
interfacing the brain and computers.
In this definition, the word brain means the brain or nervous system of an
organic life form rather than the mind. Computer means any processing or
computational device, from simple circuits to silicon chips (including
hypothetical future technologies like quantum computing).
Research on BCIs has been going on for more than 30 years, but since the mid-
1990s there has been a dramatic increase in working experimental implants. The
common thread throughout the research is the remarkable cortical plasticity of
the brain, which often adapts to BCIs, treating prostheses controlled by implants
like natural limbs.

Chapter 2
Working architecture

2.1. Introduction:

Before moving to the real implications of BCI and its applications, let us first
discuss the three types of BCI. These types are distinguished by the technique
used for the interface.
Each of these techniques has some advantages as well as some disadvantages.
The three types of BCI, with their features, are as follows.

2.2. Invasive BCI:


Invasive BCIs are implanted directly into the grey matter of the brain during
neurosurgery.
They produce the highest-quality signals of all BCI devices. Invasive BCI
research has targeted repairing damaged sight and providing new functionality
to paralyzed people. However, these BCIs are prone to scar-tissue build-up,
which causes the signal to become weaker, or even be lost, as the body reacts
to a foreign object in the brain.
In vision science, direct brain implants have been used to treat non-congenital,
i.e. acquired, blindness. One of the first scientists to come up with a working
brain interface to restore sight was private researcher William Dobelle.
Dobelle’s first prototype was implanted into Jerry, a man blinded in adulthood,
in 1978. A single-array BCI containing 68 electrodes was implanted onto Jerry’s
visual cortex and succeeded in producing phosphenes, the sensation of seeing
light. The system included TV cameras mounted on glasses to send signals to
the implant. Initially the implant allowed Jerry to see shades of grey in a limited
field of vision and at a low frame rate, and it also required him to be hooked up
to a two-ton mainframe. Shrinking electronics and faster computers made his
artificial eye more portable and allowed him to perform simple tasks unassisted.
In 2002, Jens Neumann, also blinded in adulthood, became the first in a series
of 16 paying patients to receive Dobelle’s second-generation implant, marking
one of the earliest commercial uses of BCIs. The second-generation device used
a more sophisticated implant enabling better mapping of phosphenes into
coherent vision. Phosphenes are spread out across the visual field in what
researchers call the starry-night effect. Immediately after his implant, Jens was
able to use his imperfectly restored vision to drive slowly around the parking
area of the research institute. BCIs focusing on motor neuroprosthetics aim
either to restore movement in paralyzed individuals or to provide devices to
assist them, such as interfaces with computers or robot arms.
Researchers at Emory University in Atlanta, led by Philip Kennedy and Roy
Bakay, were the first to install a brain implant in a human that produced signals
of high enough quality to simulate movement. Their patient, Johnny Ray,
suffered from ‘locked-in syndrome’ after a brain-stem stroke. Ray’s implant
was installed in 1998 and he lived long enough to start working with the
implant, eventually learning to control a computer cursor.
Tetraplegic Matt Nagle became the first person to control an artificial hand
using a BCI in 2005, as part of the nine-month human trial of Cyberkinetics
Neurotechnology’s BrainGate chip implant. Implanted in Nagle’s right
precentral gyrus (the area of the motor cortex for arm movement), the
96-electrode BrainGate implant allowed Nagle to control a robotic arm by
thinking about moving his hand, as well as a computer cursor, lights and a TV.

2.3. Partially Invasive BCI:


Partially invasive BCI devices are implanted inside the skull but rest outside the
brain rather than within the grey matter. They produce better-resolution signals
than non-invasive BCIs, where the bone tissue of the cranium deflects and
deforms the signals, and they have a lower risk of forming scar tissue in the
brain than fully invasive BCIs.
Electrocorticography (ECoG) uses the same technology as non-invasive
electroencephalography, but the electrodes are embedded in a thin plastic pad
that is placed above the cortex, beneath the dura mater. ECoG technologies
were first trialled in humans in 2004 by Eric Leuthardt and Daniel Moran from
Washington University in St Louis. In a later trial, the researchers enabled a
teenage boy to play Space Invaders using his ECoG implant. This research
indicates that it is difficult to produce kinematic BCI devices with more than
one dimension of control using ECoG. Light-reactive imaging BCI devices are
still in the realm of theory. These would involve implanting a laser inside the
skull. The laser would be trained on a single neuron and the neuron’s
reflectance measured by a separate sensor. When the neuron fires, the laser-light
pattern and the wavelengths it reflects would change slightly. This would allow
researchers to monitor single neurons while requiring less contact with tissue,
reducing the risk of scar-tissue build-up.

2.4. Non-Invasive BCI:

As well as invasive experiments, there have also been experiments in humans
using non-invasive neuroimaging technologies as interfaces. Signals recorded
in this way have been used to power muscle implants and restore partial
movement in an experimental volunteer. Although they are easy to wear,
non-invasive devices produce poor signal resolution because the skull dampens
signals, dispersing and blurring the electromagnetic waves created by the
neurons. Although the waves can still be detected, it is more difficult to
determine the area of the brain that created them or the actions of individual
neurons.
fig. 2.1: Recordings of brainwaves produced by an electroencephalogram

Electroencephalography (EEG) is the most studied potential non-invasive
interface, mainly due to its fine temporal resolution, ease of use, portability and
low set-up cost. But as well as the technology's susceptibility to noise, another
substantial barrier to using EEG as a brain-computer interface is the extensive
training required before users can work the technology. For example, in
experiments beginning in the mid-1990s, Niels Birbaumer of the University of
Tübingen in Germany used EEG recordings of slow cortical potentials to give
paralysed patients limited control over a computer cursor. (Birbaumer had
earlier trained epileptics to prevent impending fits by controlling this low-
voltage wave.) The experiment saw ten patients trained to move a computer
cursor by controlling their brainwaves. The process was slow, requiring more
than an hour for patients to write 100 characters with the cursor, while training
often took many months.

Another research parameter is the type of waves measured. Birbaumer's later
research with Jonathan Wolpaw at New York State University has focused on
developing technology that would allow users to choose the brain signals they
find easiest to use to operate a BCI, including mu and beta waves.

A further parameter is the method of feedback used, and this is shown in studies
of P300 signals. Patterns of P300 waves are generated involuntarily
(stimulus-feedback) when people see something they recognize, and may allow
BCIs to decode categories of thoughts without training patients first. By
contrast, the biofeedback methods described above require learning to control
brainwaves so the resulting brain activity can be detected. In 2000, for example,
research by Jessica Bayliss at the University of Rochester showed that
volunteers wearing virtual reality helmets could control elements in a virtual
world using their P300 EEG readings, including turning lights on and off and
bringing a mock-up car to a stop.

In 1999, researchers at Case Western Reserve University led by Hunter
Peckham used a 64-electrode EEG skullcap to return limited hand movements
to quadriplegic Jim Jatich. As Jatich concentrated on simple but opposite
concepts like up and down, his beta-rhythm EEG output was analysed using
software to identify patterns in the noise. A basic pattern was identified and
used to control a switch: above-average activity was set to on, below-average to
off. As well as enabling Jatich to control a computer cursor, the signals were
also used to drive the nerve controllers embedded in his hands, restoring some
movement. Electronic neural networks have been deployed which shift the
learning phase from the user to the computer. Experiments by scientists at the
Fraunhofer Society in 2004 using neural networks led to noticeable
improvements within 30 minutes of training.
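The Case Western experiment above reduced control to a simple rule: above-average beta-band activity switches the output on, below-average switches it off. The sketch below is a minimal illustration of that thresholding idea on synthetic single-channel data; the sampling rate, band edges and segment length are assumptions, not parameters from the original study.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed sampling rate, Hz

def beta_power(segment, fs=FS, band=(13.0, 30.0)):
    """Estimate beta-band (13-30 Hz) power of one EEG segment via Welch's method."""
    freqs, psd = welch(segment, fs=fs, nperseg=fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()          # mean PSD in the band as a simple power measure

def switch_states(segments):
    """Map each segment to on/off: above-average beta power -> on, below -> off."""
    powers = np.array([beta_power(s) for s in segments])
    baseline = powers.mean()
    return ["on" if p > baseline else "off" for p in powers]

# Hypothetical data: ten 1-second segments of noise, with extra 20 Hz
# activity mixed into the last five to mimic "concentrating on up".
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
segments = [rng.normal(size=FS) for _ in range(5)] + \
           [rng.normal(size=FS) + 2.0 * np.sin(2 * np.pi * 20 * t) for _ in range(5)]
print(switch_states(segments))   # first five "off", last five "on"
```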
Experiments by Eduardo Miranda aim to use EEG recordings of mental activity
associated with music to allow the disabled to express themselves musically
through an encephalophone. Magnetoencephalography (MEG) and functional
magnetic resonance imaging (fMRI) have both been used successfully as non-
invasive BCIs. In a widely reported experiment, fMRI allowed two users being
scanned to play Pong in real time by altering their haemodynamic response, or
brain blood flow, through biofeedback techniques. fMRI measurements of
haemodynamic responses in real time have also been used to control robot arms
with a seven-second delay between thought and movement.

Chapter 3
3. The Current BCI Techniques

3.1. Introduction:

Various techniques are currently used for the BCI interface, its implementation,
and the interpretation of results. These techniques are driving the development
of BCIs in the coming era.

3.2. P300 Detection:

Farwell and Donchin [Farwell & Donchin 1988] of the Department of
Psychology and Cognitive Psychophysiology Laboratory at the University of
Illinois at Urbana-Champaign, IL, describe a technique for detecting the P300
component of a subject's event-related brain potential (ERP) and using it to
select from an array of 36 screen positions. The P300 component is a
positive-going ERP in the EEG with a latency of about 300 ms following the
onset of a rarely-occurring stimulus the subject has been instructed to detect.
The EEG was recorded using electrodes placed at the Pz (parietal) site
(International 10/20 System), limited with band-pass filters to 0.02-35 Hz and
digitized at 50 Hz. Electrooculogram (EOG) data were also recorded from each
subject via electrodes placed above and below the right eye. The "odd-ball"
paradigm was used to elicit the P300,
where a number of stimuli are presented to the experimental subject, who is
required to pay attention to a particular, rarely-occurring stimulus and respond
to it in some non-motor way, such as by counting occurrences. Detecting the
P300 response reliably requires averaging the EEG response over many
presentations of the stimuli. The purpose of the experiment was to discover the
minimum number of presentations, at two different inter-stimulus intervals
(ISIs), required to detect the P300 response. The experiment presented a
36-position array of letters, plus common typing characters and controls (e.g.
space, backspace), made to flash in a random sequence first by rows and then
by columns. Each trial consisted of a complete set of six column or row flashes.
Trials contaminated with muscular or EOG responses were rejected and
additional trials presented until data were collected from a block of 30 good
trials, during which subjects were to fixate on a particular position and count
the number of times it flashed while a control message was elsewhere on the
screen. After each block the fixated letter (one of B-R-A-I-N) was added to the
screen so that subjects were conscious of slowly spelling out the word "BRAIN"
through a succession of five blocks. A set of five blocks was run at each ISI --
125 ms and 500 ms. The two presentation rates were chosen to bracket a range
of communication rates from a low of 30 averaged trials at 500 ms ISI (93.6
seconds of presentation per character) to a high of one trial at 125 ms ISI (1.245
seconds of presentation per character), an effective communication-rate range
of 0.01 to 0.8 characters per second, respectively. The authors used four
techniques to analyze the data for reliable P300 detection -- stepwise
discriminant analysis (SWDA), peak picking, area, and covariance -- and
identified SWDA as leading to the greatest accuracy at the fastest presentation
rate. Results indicated that a character chosen from among 36 items can be
detected with 95% accuracy within 26 seconds.
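The row-and-column flashing scheme described above lends itself to a compact illustration: average the post-flash epochs for each of the six rows and six columns, score the averaged response in a window around 300 ms, and take the intersection of the best row and best column as the selected item. The sketch below works through that idea on synthetic data; the array contents, scoring window and simple argmax rule are illustrative simplifications, not Farwell and Donchin's actual analysis, which relied on stepwise discriminant analysis.

```python
import numpy as np

FS = 50  # Hz, the digitization rate reported above
# 6 x 6 array of selectable items (letters plus common typing characters).
ITEMS = np.array([list("ABCDEF"), list("GHIJKL"), list("MNOPQR"),
                  list("STUVWX"), list("YZ1234"), list("56789_")])

def p300_score(epochs, fs=FS, window=(0.25, 0.45)):
    """Average epochs (trials x samples) time-locked to the flash, then take the
    mean amplitude in a window around 300 ms. Averaging over many presentations
    attenuates background EEG while the stimulus-locked P300 survives."""
    avg = np.asarray(epochs).mean(axis=0)
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    return avg[lo:hi].mean()

def select_item(row_epochs, col_epochs):
    """row_epochs[i] / col_epochs[j]: epochs following each flash of row i /
    column j. The attended item sits at the intersection of the row and column
    with the largest averaged P300 score."""
    r = int(np.argmax([p300_score(e) for e in row_epochs]))
    c = int(np.argmax([p300_score(e) for e in col_epochs]))
    return ITEMS[r, c]

# Synthetic demo: the subject attends to "P" (row 2, column 3); flashes of that
# row and column carry a small positive deflection near 300 ms.
rng = np.random.default_rng(0)
t = np.arange(FS) / FS
p300_wave = 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
def epochs(is_target, n=30):
    return [rng.normal(size=FS) + (p300_wave if is_target else 0) for _ in range(n)]
row_epochs = [epochs(i == 2) for i in range(6)]
col_epochs = [epochs(j == 3) for j in range(6)]
print(select_item(row_epochs, col_epochs))  # -> P
```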

3.3. EEG mu-rhythm Conditioning:


Three papers using this technique were reviewed: Wolpaw [Wolpaw et al 1991]
and McFarland [McFarland et al 1993] and colleagues at the Wadsworth Center
for Laboratories and Research, Albany, NY, and Pfurtscheller [Pfurtscheller et
al 1993] and colleagues at the Ludwig Boltzmann Institute of Medical
Informatics and Neuroinformatics, Department of Medical Informatics, Institute
of Biomedical Engineering, University of Technology Graz, Austria. All three
papers describe subjects' abilities to move a cursor toward a target on a
computer screen by manipulating their mu-rhythm, a pattern detectable in the
great majority of individuals in the 8-12 Hz EEG frequency range, centred at
about 9.1 Hz. The work is based on earlier research by Kuhlman [Kuhlman
1978b], who described the mu-rhythm in normal and epileptic subjects.

Wolpaw describes detecting subjects' mu-rhythm amplitude, defined as the
square root of the spectral EEG power at 9 Hz, using two scalp-mounted
electrodes located near position C3 in the International 10/20 System and a
digital signal-processing board analyzing continuous EEG in 333 ms segments,
and using it to drive a cursor up or down on a screen toward a target placed
randomly at the top or bottom. An experiment operator preset the size of the
amplitude ranges and the number of cursor-movement steps assigned to each
range for each subject prior to each experimental run. Ranges were set so that
the commonest mu-rhythm amplitudes (<4 microvolts) left the cursor in place
or moved it downwards moderately, while higher amplitudes (>4 microvolts)
moved it upwards in increasing jumps. Weights were adjusted as subjects
exhibited better control of their mu-rhythm amplitudes for up and down targets
in repeated trials. Wolpaw substantiates subjects' learned intentional control
over mu-rhythm amplitude in three ways: by performing frequency analysis up
to 192 Hz on subjects during cursor-movement trials and failing to find any
relationship between mu-rhythm changes and the higher frequencies associated
with muscular (EMG) activity; by subjects' statements about not making
contralateral movements, and observing none; and by failing to find any
relationship between mu-rhythm changes and posterior scalp recordings of the
visual alpha-rhythm. Four out of five subjects acquired impressive control over
their mu-rhythm amplitude during twelve 45-minute sessions over a period of
two months. Accuracies of 80-95% target hits were achieved across
experimental subjects, at rates of 10-29 hits per minute. Off-line analysis of two
subjects' raw EEG data (see below) provided good support for Wolpaw's
experimental results.

McFarland used essentially the same experimental setup and introduced
greater precision constraints on four subjects' attempts to position a cursor by
means of mu-rhythm control. A vertical bar target appeared in one of five
different vertical positions on the left side of the screen and crossed the screen
from left to right in 8 seconds. Subjects had to move the cursor (initially in the
middle of the right edge of the screen) quickly to the correct one of five
different vertical screen positions to intercept the target by controlling their mu-
rhythm amplitude. Analysis of the average distance between the center of the
target and the cursor during succeeding trials indicated that all subjects reduced
the distance and three out of four significantly so. Pfurtscheller used
contralateral blocking of the mu-rhythm during the 1-second period prior to a
motor activity (in this case pressing a microswitch using either the right or the
left index finger) to predict which response was to follow. An array of 30
electrodes spaced evenly across the scalp (two were at locations C3 and C4 in
the International 10/20 System) was used to record EEG activity. An initial
training period for each subject involved using data from all 30 electrodes to
train the classification network. During experimental trials, a feature-vector of
power values (Hilbert Transform) from electrodes at positions C3 and C4 was
constructed at 5 time points and classified using a Learning Vector Quantizer
(LVQ) artificial neural network of the type described by Kohonen [Kohonen
1988]. The experimenter achieved the best balance of reliability and speed of
classification by using the half-second prior to the response and performing a
multiple-classification and voting process. EEG data from two subjects in the
Wolpaw experiment described above were provided to the Graz Institute for
Information Processing for additional analysis, described by Flotzinger
[Flotzinger et al 1993], using the Graz LVQ neural-net scheme (see above) and
a fixed time-segment. Cursor movement was predicted from raw data with 90%
accuracy. Results also implied that frequency bands other than the mu and beta
ranges may contain useful (i.e. target-related) information.
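A minimal sketch of the amplitude measure described above: an estimate proportional to the square root of the spectral power near 9 Hz over a ~333 ms segment, mapped onto a vertical cursor step. The sampling rate, window choice and step thresholds below are assumptions for illustration; in the original work the amplitude ranges and step weights were preset per subject by the operator.

```python
import numpy as np

FS = 200                  # assumed sampling rate (Hz)
SEG = int(0.333 * FS)     # ~333 ms analysis segments, as in Wolpaw's setup

def mu_amplitude(segment, fs=FS, freq=9.0):
    """Amplitude of the ~9 Hz mu component of one EEG segment
    (window-gain corrected, so it is proportional to the square root of the
    spectral power at 9 Hz)."""
    win = np.hanning(len(segment))
    spectrum = np.fft.rfft(segment * win)
    freqs = np.fft.rfftfreq(len(segment), d=1.0 / fs)
    idx = int(np.argmin(np.abs(freqs - freq)))
    return 2.0 * np.abs(spectrum[idx]) / win.sum()

def cursor_step(amplitude_uv):
    """Map mu amplitude (microvolts) to a vertical cursor step. Thresholds are
    illustrative: common low amplitudes hold or nudge the cursor down, larger
    amplitudes move it up in increasing jumps."""
    if amplitude_uv < 4.0:
        return -1
    elif amplitude_uv < 8.0:
        return +1
    return +3

# Hypothetical 333 ms segment containing a ~6 microvolt, 9 Hz oscillation.
rng = np.random.default_rng(1)
t = np.arange(SEG) / FS
segment = 6.0 * np.sin(2 * np.pi * 9.0 * t) + rng.normal(size=SEG)
amp = mu_amplitude(segment)
print(round(amp, 1), cursor_step(amp))
```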

3.4. VEP Detection:


This technique was reviewed by Sutter [Sutter 1992] at the Smith-Kettlewell
Eye Research Institute in San Francisco, CA, and Cilliers
[Cilliers & VanDerKouwe 1993] and a colleague at the Department of Electrical
and Electronic Engineering, University of Pretoria, South Africa. Sutter
describes presenting a 64-position block on a computer screen and detecting
which block the subject looks at, while Cilliers' work uses a series of four
lights. In each case, several simultaneously presented stimuli are made to
change rapidly in some controlled way (intensity, pattern, colour-shift) and the
subject has scalp electrodes placed over the visual cortex (the back of the head)
in a position to detect changes in the evoked potential (VEP) at that location.
Sutter used a lengthy binary sequence to switch 64 screen positions between
red and green, and in other trials to reverse a checkerboard pattern. Each screen
position was shifted 20 ms in the binary control sequence relative to its
neighbours, and the entire sequence was correlated with the VEP in overlapping
increments (the VEP response components last about 80 ms) beginning 20 ms
apart, with the resultant vector stored in a 64-position array of registers.
When a coefficient remains greater than all the others and above a threshold
value for a certain amount of time, the corresponding stimulus is considered to
have been selected. The 64 positions represent the letters of the alphabet and
commonly used words in the English language. The subject can fixate on any
word or letter. Whenever the subject fixates on a letter, the commonly used
words change to words beginning with that letter, for quick selection of an
entire word. Sutter suggests a need to optimize both electrode placement and
stimulation mode for each individual subject for good target discrimination.
Seventy normal subjects evaluating a prototype system achieved adequate
response times ranging from 1 to 3 seconds after an initial tuning process lasting
10-60 minutes. Sutter also tested his techniques on 20 severely disabled persons
and describes an experimental version involving an ALS patient using intra-
cranial electrodes implanted in the space between the Dura and the skull.
Cilliers' technique involves varying the intensity of four LEDs modulated with
a 10 Hz sine wave in phase quadrature and detecting the signal in the subject's
VEP using a pair of EEG surface electrodes placed over the occipital lobe. The
four flashing LEDs are arranged around the edges of a computer screen
containing an image of a standard four-row keyboard, with each row of keys in
a different colour. Each LED is associated with one of the colours. Fixating on
one LED selects a key row, which is then redisplayed in four colours for a more
detailed selection. The subject can select any particular key in an average of
three selections -- about 15 seconds with the current setup. A short initial
training period is required, during which subjects fixate on each LED for 5
seconds. Cilliers' paper describes work with a quadriplegic patient with a
C2-level injury.
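A toy version of the selection logic described above: each screen position is driven by a time-shifted copy of one binary sequence, the occipital recording is correlated against every shifted sequence, and a position is declared selected once its coefficient has stayed the largest and above a threshold for several consecutive analysis steps. Eight positions, one-sample shifts and the chosen threshold stand in for the 64 positions, 20 ms shifts and tuned parameters of the real system.

```python
import numpy as np

def selection_scores(eeg, stimulus_sequences):
    """Correlate the recorded occipital signal with each position's stimulus
    sequence; return one correlation coefficient per position."""
    return np.array([np.corrcoef(eeg, seq)[0, 1] for seq in stimulus_sequences])

def gaze_target(score_history, threshold=0.3, hold=5):
    """Declare a selection once one position's score has been both the maximum
    and above threshold for `hold` consecutive analysis steps."""
    recent = score_history[-hold:]
    if len(recent) < hold:
        return None
    winners = [int(np.argmax(s)) for s in recent]
    if len(set(winners)) == 1 and all(s[winners[0]] > threshold for s in recent):
        return winners[0]
    return None

# Hypothetical setup: 8 positions, each driven by a shifted copy of one
# pseudo-random binary sequence; the subject fixates position 5, so its
# sequence leaks (attenuated, noisy) into the "EEG".
rng = np.random.default_rng(3)
base = rng.integers(0, 2, size=400).astype(float)
sequences = [np.roll(base, k) for k in range(8)]   # 1-sample shift stands in for 20 ms
history = []
for _ in range(6):
    eeg = 0.6 * sequences[5] + rng.normal(scale=0.5, size=400)
    history.append(selection_scores(eeg, sequences))
    print(gaze_target(history))   # None until position 5 has held for 5 steps
```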

3.5. EEG Pattern Mapping:


Several experimenters describe techniques for classifying, detecting and
mapping EEG patterns. Pfurtscheller's technique used a neural net featuring
learning-vector quantization (LVQ) to map EEG patterns during the 1-second
interval before a signal the experimental subject was instructed to wait for.
Hiraiwa [Hiraiwa et al 1993] used a back-propagation artificial neural network
to study readiness potentials (RPs) -- patterns in the EEG immediately prior to
the subject uttering one of five different Japanese syllables or moving a joystick
in one of four different directions. Twelve channels of EEG data taken from
scalp-mounted electrodes at locations Fp1, Fp2, Fz, C3, C4, Pz, F5, F6, F7, F8,
O1 and O2 (International 10/20 System) were used to train and then test two
neural networks, optimized for averaged data and for single-trial, real-time
analysis, respectively. High recognition rates were obtained for the averaged
data. Single-trial RP recognition, though less reliable, showed considerable
promise in the experimenters' view.
Keirn and Aunon [Keirn & Aunon 1990] recorded EEG data from scalp-mounted
electrodes at locations P3, P4, C3, C4, O1 and O2 (International 10/20 System)
during the performance of five different tasks, during which subjects had their
eyes open or closed, giving ten alternative responses. The tasks included:
(1) relaxing and trying to think of nothing,
(2) a non-trivial multiplication problem,
(3) a 30-second study of a drawing of a 3-dimensional object, after which
subjects were to visualize the object being rotated about an axis,
(4) mental composition of a letter to a friend, and
(5) visualizing numbers being written on a blackboard sequentially, with the
previous number being erased before the next was written.
Feature vectors were constructed from the EEG patterns based on the
Wiener-Khinchine method and classified using a Bayes quadratic classifier.
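As a rough illustration of this last pipeline (spectral feature vectors fed to a Bayes quadratic classifier), the sketch below uses Welch power spectra and scikit-learn's QuadraticDiscriminantAnalysis, which implements a Gaussian quadratic classifier, on synthetic two-task data. The electrode count, frequency bands and sampling rate are assumptions rather than Keirn and Aunon's actual parameters.

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

FS = 250  # assumed sampling rate (Hz)

def spectral_features(trial, fs=FS, bands=((8, 13), (13, 30))):
    """Log band power per channel, concatenated into one feature vector -- a
    simple stand-in for the Wiener-Khinchine-based spectra of the original
    study (trial: channels x samples)."""
    feats = []
    for channel in trial:
        freqs, psd = welch(channel, fs=fs, nperseg=fs)
        for lo, hi in bands:
            mask = (freqs >= lo) & (freqs < hi)
            feats.append(np.log(psd[mask].mean()))
    return np.array(feats)

# Hypothetical dataset: two mental tasks, 6 channels, 2-second trials;
# task 0 carries stronger 10 Hz (alpha) activity than task 1.
rng = np.random.default_rng(2)
t = np.arange(2 * FS) / FS
def fake_trial(task):
    alpha = (3.0 if task == 0 else 1.0) * np.sin(2 * np.pi * 10 * t)
    return rng.normal(size=(6, 2 * FS)) + alpha

labels = np.array([0, 1] * 60)
X = np.array([spectral_features(fake_trial(task)) for task in labels])

# Bayes quadratic classifier (Gaussian class-conditional densities),
# lightly regularized because the feature count is non-trivial.
clf = QuadraticDiscriminantAnalysis(reg_param=0.1).fit(X[:80], labels[:80])
print("held-out accuracy:", clf.score(X[80:], labels[80:]))
```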

3.6. Detecting lateral hemisphere differences:


Drake [Drake 1993] studied induced lateral differences in relative brain-
hemisphere activation after subjects heard arguments, through left, right or both
earphones, that they either strongly agreed with or strongly disagreed with, as
determined by prior interviews. Subjects exhibited greater discounting of
arguments they disagreed with during left-hemisphere activation, as measured
by ratings of truth. The results supported previous work indicating asymmetries
in lateral activation during the processing of persuasive arguments; however,
the study did not directly measure either activation levels or potentials in the
cortex.
Chapter 4
BRAIN GATE

fig. 4.1: Dummy unit illustrating the design of a BrainGate interface

BrainGate is a brain implant system developed by the bio-tech company
Cyberkinetics in 2003, in conjunction with the Department of Neuroscience at
Brown University. The device was designed to help those who have lost control
of their limbs or other bodily functions, such as patients with amyotrophic
lateral sclerosis (ALS) or spinal cord injury. The computer chip, which is
implanted into the brain, monitors brain activity in the patient and converts the
intention of the user into computer commands. Currently the chip uses 100
hair-thin electrodes that sense the electromagnetic signature of neurons firing in
specific areas of the brain, for example the area that controls arm movement.
This activity is translated into electrical signals that are then sent and decoded
using a program, which can move either a robotic arm or a computer cursor.
According to the Cyberkinetics website, three patients have been implanted
with the BrainGate system. The company has confirmed that one patient (Matt
Nagle) has a spinal cord injury, whilst another has advanced ALS. In addition to
real-time analysis of neuron patterns to relay movement, the BrainGate array is
also capable of recording electrical data for later analysis. A potential use of
this feature would be for a neurologist to study seizure patterns
in a patient with epilepsy. Cyberkinetics has a vision, CEO Tim Surgenor
explained to Gizmag, but it is not promising "miracle cures", or that
quadriplegic people will be able to walk again - yet. Their primary goal is to
help restore many activities of daily living that are impossible for paralysed
people and to provide a platform for the development of a wide range of other
assistive devices.
"Today quadriplegic people are satisfied if they get a rudimentary connection to
the outside world. What we're trying to give them is a connection that is as good
and fast as using their hands. We're going to teach them to think about moving
the cursor using the part of the brain that usually controls the arms to push keys
and create, if you will, a mental device that can input information into a
computer. That is the first application, a kind of prosthetic, if you will. Then it
is possible to use the computer to control a robot arm or their own arm, but that
would be down the road." Existing technology stimulates muscle groups that
can make an arm move. The problem Surgenor and his team faced was in
creating an input or control signal. With the right control signal they found they
could stimulate the right muscle groups to make arm movement.
"Another application would be for somebody to handle a tricycle or exercise
machine to help patients who have a lot of trouble with their skeletal muscles.
But walking, I have to say, would be very complex. There's a lot of issues with
balance and that's not going to be an easy thing to do, but it is a goal."
Cyberkinetics hopes to refine the BrainGate over the next two years to develop
a wireless device that is completely implantable and doesn't have a plug,
making it safer and less visible. And once the basics of brain mapping are
worked out, there is potential for a wide variety of further applications,
Surgenor explains.
"If you could detect or predict the onset of epilepsy, that would be a huge
therapeutic application for people who have seizures, which leads to the idea of
a 'pacemaker for the brain'. So eventually people may have this technology in
their brains and if something starts to go wrong it will take a therapeutic action.
That could be available by 2007 to 2008."
Surgenor also sees a time not too far off where normal humans are interfacing
with Brain Gate technology to enhance their relationship with the digital world -
if they're willing to be implanted.
"If we can figure out how to make this device cheaper, there might be
applications for people to control machines, write software or perform intensive
actions. But that's a good distance away. Right now the only way to get that
level of detail from these signals is to actually have surgery to place this on the
surface of the brain. It's not possible to do this with a non-invasive approach.
For example, you can have an EEG and if you concentrate really hard you can
think about and move a cursor on a screen, but if someone makes a loud noise
or you get interrupted, you lose that ability."
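The chapter above describes translating the firing of motor-cortex neurons into cursor movement. As a generic illustration of that idea, the sketch below fits a simple linear (ridge-regression) decoder from binned firing rates to two-dimensional cursor velocity on synthetic data; this is not Cyberkinetics' actual decoding algorithm, and the unit count, bin size and tuning model are assumptions.

```python
import numpy as np

def fit_linear_decoder(firing_rates, velocities, ridge=1e-2):
    """Least-squares (ridge) map from binned firing rates (bins x units) to
    2-D cursor velocity (bins x 2) -- a generic sketch of the idea described
    in the text, not Cyberkinetics' actual decoder."""
    X = np.hstack([firing_rates, np.ones((len(firing_rates), 1))])  # add bias column
    W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ velocities)
    return W

def decode(firing_rates, W):
    X = np.hstack([firing_rates, np.ones((len(firing_rates), 1))])
    return X @ W

# Hypothetical data: 96 units whose rates depend linearly on the intended
# velocity (cosine-tuning-like), plus noise; one row per 100 ms bin.
rng = np.random.default_rng(5)
true_map = rng.normal(size=(2, 96))
intended = rng.normal(size=(500, 2))                  # intended (vx, vy) per bin
rates = intended @ true_map + rng.normal(scale=0.5, size=(500, 96))

W = fit_linear_decoder(rates[:400], intended[:400])
pred = decode(rates[400:], W)
print("mean abs velocity error:", np.abs(pred - intended[400:]).mean())
```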

4.1. DARPA

The Brown University group was partially funded by the Defense Advanced
Research Projects Agency (DARPA), the central research and development
organization for the US Department of Defense (DoD). DARPA has been
interested in Brain-Machine Interfaces (BMI) for a number of years, for military
applications like wiring fighter pilots directly to their planes to allow
autonomous flight from the safety of the ground. Future developments are also
envisaged in which humans could 'download' memory implants for skill
enhancement, allowing actions to be performed that have not been learned
directly.

Chapter 5
CONCLUSION

Brain-Computer Interface (BCI) is a method of communication based on
voluntary neural activity generated by the brain and independent of its normal
output pathways of peripheral nerves and muscles.
The neural activity used in BCI can be recorded using invasive or noninvasive
techniques.

As detection techniques and experimental designs improve, BCIs will improve
as well, providing a wealth of alternatives for individuals to interact with their
environment.

Chapter 6
REFERENCES

1. http://www.nicolelislab.net/NLnet_Load.html

2. http://www.youtube.com/watch?v=7-cpcoIJbOU

3. http://www.en.wikipedia.com/braincomputerinterface

