HCI Assignment #2 - Brain Computer Interaction

BCI technology allows disabled individuals to operate computers and communicate online through thought alone. While BCI methods have been successfully demonstrated, the key next step is determining which interaction techniques are most effective and incorporating them into real-world applications. Current BCI systems rely on EEG readings from scalp sensors, but individual differences in cortical folding make signals inconsistent between users. New algorithms aim to "unfold" the cortex, mapping signals closer to their source so the technology can work across a mass population. Low-cost, gel-free wireless EEG headsets now make home use of BCI possible.

Brain Computer Interaction

Personal Review:

BCI is very useful for disabled people: it lets them operate a computer, write
documents, and chat with others on the Internet.

The key to moving BCI technology beyond the demonstration stage is to determine
which methods of interaction are the most effective and to incorporate these into
real-world applications. All of the applications and interaction techniques described
have been tested with brain-signal emulation, which is sufficient for assuring correct
functionality but not for drawing conclusions about the efficacy of the user
interface.
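To make concrete what emulation-based testing can and cannot establish, here is a minimal sketch. The emulator, the injected signal, and the detector are hypothetical stand-ins, not the methods used in any actual study.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

def emulate_eeg_window(n_channels=14, n_samples=128, command=None):
    """Hypothetical emulator: Gaussian noise, with a sinusoidal
    'signature' mixed into channel 0 when a command is being emulated."""
    window = rng.normal(0.0, 1.0, size=(n_channels, n_samples))
    if command == "select":
        t = np.arange(n_samples)
        window[0] += 2.0 * np.sin(2 * np.pi * 10 * t / n_samples)
    return window

def detect_command(window):
    """Stand-in detector: flags 'select' when channel 0 carries excess power."""
    return "select" if np.var(window[0]) > 2.0 else None

# Functional check: the pipeline must recover exactly the emulated command.
assert detect_command(emulate_eeg_window(command="select")) == "select"
assert detect_command(emulate_eeg_window(command=None)) is None
print("functional check passed")
```

A test like this confirms the pipeline is wired correctly end to end; it says nothing about whether a real user, producing far noisier signals, could operate the interface efficiently.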

Video Transcript:
Up until now, our communication with machines has always been limited to
conscious and direct forms. Whether it's something simple like turning on the lights
with a switch, or even as complex as programming robotics, we have always had to
give a command to a machine, or even a series of commands, in order for it to do
something for us. Communication between people, on the other hand, is far more
complex and a lot more interesting because we take into account so much more
than what is explicitly expressed. We observe facial expressions, body
language, and we can intuit feelings and emotions from our dialogue with one
another.

This actually forms a large part of our decision-making process. Our vision is to
introduce this whole new realm of human interaction into human-computer
interaction, so that computers can understand not only what you direct them to do
but can also respond to your facial expressions and emotional experiences. And what
better way to do this than by interpreting the signals naturally produced by our
brain, our center for control and experience.

Well, it sounds like a pretty good idea, but this task, as Bruno mentioned, isn't an
easy one for two main reasons: First, the detection algorithms. Our brain is made up
of billions of active neurons, around 170,000 km of combined axon length. When
these neurons interact, the chemical reaction emits an electrical impulse, which can
be measured. The majority of our functional brain is distributed over the outer
surface layer of the brain, and to increase the area that's available for mental
capacity, the brain surface is highly folded. Now this cortical folding presents a
significant challenge for interpreting surface electrical impulses. Each individual's
cortex is folded differently, very much like a fingerprint. So even though a
signal may come from the same functional part of the brain, by the time the
structure has been folded, its physical location is very different between
individuals, even identical twins. There is no longer any consistency in the surface
signals.

Our breakthrough was to create an algorithm that unfolds the cortex, so that we can
map the signals closer to their source, making the system capable of working
across a mass population. The second challenge is the actual device for observing
brainwaves. EEG measurements typically involve a hairnet with an array of
sensors, like the one that you can see here in the photo. A technician will put the
electrodes onto the scalp using a conductive gel or paste, usually after a
procedure of preparing the scalp by light abrasion. Now this is quite
time-consuming and isn't the most comfortable process. And on top of that, these
systems actually cost in the tens of thousands of dollars.
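The talk does not describe the unfolding algorithm itself, but the underlying problem of mapping scalp signals back toward their cortical source is commonly posed as a linear inverse problem. The sketch below uses a regularized minimum-norm estimate on made-up dimensions purely for illustration; it is not the speaker's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: 14 scalp sensors, 200 candidate cortical source locations.
n_sensors, n_sources = 14, 200

# Lead field: how each cortical source projects to each scalp sensor.
# In practice this comes from a head/cortex model; here it is random.
L = rng.normal(size=(n_sensors, n_sources))

# Simulate one active cortical source plus a little sensor noise.
x_true = np.zeros(n_sources)
x_true[42] = 1.0
y = L @ x_true + 0.05 * rng.normal(size=n_sensors)

# Regularized minimum-norm estimate: x_hat = L^T (L L^T + lam*I)^(-1) y.
lam = 1e-2
x_hat = L.T @ np.linalg.solve(L @ L.T + lam * np.eye(n_sensors), y)

# Minimum-norm estimates are spatially smeared, so the peak is only an
# approximate localization of the simulated source at index 42.
print("estimated peak source:", int(np.argmax(np.abs(x_hat))))
```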

So with that, I'd like to invite onstage Evan Grant, who is one of last year's
speakers, who's kindly agreed to help me demonstrate what we've been able to
develop.

So the device that you see is a 14-channel, high-fidelity EEG acquisition system. It
doesn't require any scalp preparation, no conductive gel or paste. It only takes a
few minutes to put on and for the signals to settle. It's also wireless, so it gives you
the freedom to move around. And compared to the tens of thousands of dollars for a
traditional EEG system, this headset only costs a few hundred dollars. Now on to the
detection algorithms. The facial expression and emotional experience detections, as
I mentioned before, are designed to work out of the box, with some sensitivity
adjustments available for personalization. But with the limited time we have
available, I'd like to show you the cognitive suite, which is the ability for you to
basically move virtual objects with your mind.
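As a rough illustration of how such a headset's stream might be consumed, the sketch below buffers 14-channel samples into fixed-length analysis windows. The sampling rate and the read_sample stand-in are assumptions; the transcript states only the channel count.

```python
from collections import deque

import numpy as np

N_CHANNELS = 14      # stated in the talk
FS = 128             # assumed sampling rate (Hz); not given in the transcript
WINDOW = 2 * FS      # two-second analysis window

buffer = deque(maxlen=WINDOW)

def read_sample():
    """Stand-in for one wireless sample: one value per channel."""
    return np.random.normal(size=N_CHANNELS)

def next_window():
    """Fill the buffer, then return it as a (channels, samples) array."""
    while len(buffer) < WINDOW:
        buffer.append(read_sample())
    return np.stack(buffer, axis=1)

print(next_window().shape)  # (14, 256)
```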

Now, Evan is new to this system, so what we have to do first is create a new profile
for him. He's obviously not Joanne so we'll "add user." Evan. Okay. So the first
thing we need to do with the cognitive suite is to start with training a neutral
signal. With neutral, there's nothing in particular that Evan needs to do. He just
hangs out. He's relaxed. And the idea is to establish a baseline or normal state for
his brain, because every brain is different. It takes eight seconds to do this, and now
that that's done, we can choose a movement-based action. So Evan, choose
something that you can visualize clearly in your mind.
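The calibration flow just described (record about eight seconds of "neutral", then train an action against that per-user baseline) can be sketched as follows. The features, the nearest-centroid classifier, and the simulated recordings are my own stand-ins for whatever the product actually uses.

```python
import numpy as np

FS, SECONDS, N_CH = 128, 8, 14   # assumed rate; 8 s neutral, per the demo

def record(seconds, active=False):
    """Stand-in recorder: noise, with extra power when imagining the action."""
    x = np.random.normal(size=(N_CH, seconds * FS))
    if active:
        x[:4] *= 1.8   # crude stand-in for a motor-imagery power change
    return x

def band_power_features(x, win=FS):
    """Mean log-variance per channel over non-overlapping 1 s windows."""
    wins = x[:, : (x.shape[1] // win) * win].reshape(N_CH, -1, win)
    return np.log(wins.var(axis=2)).T        # shape: (n_windows, N_CH)

# 1) Neutral baseline: establishes this user's "normal" feature distribution.
neutral = band_power_features(record(SECONDS))
mu, sd = neutral.mean(axis=0), neutral.std(axis=0) + 1e-9

# 2) Train the imagined action, z-scored against the user's own baseline.
neutral_z = (neutral - mu) / sd
action_z = (band_power_features(record(SECONDS, active=True)) - mu) / sd

# Nearest-centroid classifier: about the simplest per-user model possible.
centroids = {"neutral": neutral_z.mean(axis=0), "action": action_z.mean(axis=0)}

def classify(feat_z):
    return min(centroids, key=lambda k: np.linalg.norm(feat_z - centroids[k]))

test = (band_power_features(record(2, active=True)) - mu) / sd
print([classify(f) for f in test])
```

Z-scoring every feature against the neutral recording is one simple way to realize the "every brain is different" point: the classifier only ever sees deviations from that user's own baseline.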

So I'd like to show you a few examples, because there are many possible
applications for this new interface. In games and virtual worlds, for example, your
facial expressions can naturally and intuitively be used to control an avatar or
virtual character. Obviously, you can experience the fantasy of magic and control
the world with your mind. And also, colors, lighting, sound and effects can
dynamically respond to your emotional state to heighten the experience that you're
having, in real time. Moving on to some applications developed by developers and
researchers around the world with robots and simple machines: for example, in this
case, flying a toy helicopter simply by thinking "lift" with your mind.

The technology can also be applied to real-world applications, in this example a
smart home: from the user interface of the control system to opening or closing
curtains, and of course to the lighting, turning it on or off. And finally, to real
life-changing applications, such as being able to control an electric wheelchair.
In this example, facial expressions are mapped to the movement commands.
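All of these applications share a final step: turning a detection into a device command. A sketch of that mapping, with a confidence threshold and a dwell time so a wheelchair does not react to every flicker, might look like this; every name and threshold here is illustrative.

```python
# Illustrative mapping from detected facial expression to a wheelchair command.
COMMAND_MAP = {
    "smile": "forward",
    "clench": "stop",
    "wink_left": "turn_left",
    "wink_right": "turn_right",
}

CONFIDENCE_MIN = 0.7   # ignore weak detections
DWELL_S = 0.5          # expression must persist this long before acting

def run(detections):
    """detections: iterable of (timestamp_s, expression, confidence)."""
    current, since = None, None
    for ts, expr, conf in detections:
        if conf < CONFIDENCE_MIN or expr not in COMMAND_MAP:
            current, since = None, None   # flicker or unknown: reset
            continue
        if expr != current:
            current, since = expr, ts     # new candidate: start dwell timer
        elif ts - since >= DWELL_S:
            print(f"[{ts:.1f}s] send: {COMMAND_MAP[expr]}")  # stand-in for the chair's API
            since = ts                    # re-arm so held expressions repeat

# Simulated stream: a brief low-confidence flicker, then a held smile.
stream = [(0.0, "wink_left", 0.4), (0.1, "smile", 0.9),
          (0.4, "smile", 0.9), (0.7, "smile", 0.9), (1.3, "smile", 0.95)]
run(stream)
```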
