
Reading the brain from the outside

Can brain activity be deciphered without opening up the skull?

PATRICK KAIFOSH’S left hand lies flat on the table in front of him.
Occasionally his fingers twitch or his palm rises up slightly from the
surface. There is nothing obvious to connect these movements with what is
happening on the tablet in front of him, where a game of Asteroids is being
played. Yet he is controlling the spaceship on the screen as it spins, thrusts
and fires.

What enables him to do so is a sweatband studded with small gold bars
that sits halfway up his left forearm. Each bar contains a handful of
electrodes designed to pick up the signals of motor units (the combination
of a motor neuron, a cell that projects from the spinal cord, and the muscle
fibres it controls). These data are processed by machine-learning
algorithms and translated into the actions in the game. Dr Kaifosh, a co-
founder of CTRL-Labs, the startup behind the device, has learned to
exercise impressive control over these signals with hardly any obvious
movement.
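
CTRL-Labs has not published its decoding pipeline, but the paragraph above gives its shape: multi-channel electrode data in, a machine-learning model in the middle, game commands out. A minimal sketch of that shape, in which the sampling rate, windowing, features, classifier and command labels are all illustrative assumptions rather than the firm's method:

```python
# Sketch: classifying windows of multi-channel surface-EMG into game
# commands. Everything here (window length, RMS features, logistic
# regression, the command set) is an illustrative assumption, not
# CTRL-Labs' actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

FS = 2000        # assumed sampling rate in Hz
WINDOW = 200     # 100 ms analysis window, in samples

def features(window):
    """Per-channel root-mean-square amplitude of one (channels, samples) window."""
    return np.sqrt(np.mean(window ** 2, axis=1))

def windows(emg):
    """Slice a (channels, samples) recording into non-overlapping windows."""
    n = emg.shape[1] // WINDOW
    return [emg[:, i * WINDOW:(i + 1) * WINDOW] for i in range(n)]

def train(labelled_recordings):
    """Fit a classifier on (recording, label) pairs, e.g. demonstrations
    of hypothetical commands such as "thrust", "fire", "rotate", "idle"."""
    X = [features(w) for emg, _ in labelled_recordings for w in windows(emg)]
    y = [label for emg, label in labelled_recordings for _ in windows(emg)]
    return LogisticRegression(max_iter=1000).fit(X, y)

def decode(model, window):
    """Turn the latest 100 ms window into one command for the game."""
    return model.predict([features(window)])[0]
```

The interesting part of the firm's claim lies in what the electrodes record, not in the learning machinery, which is ordinary supervised classification.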

Some say that the claims of Dr Kaifosh and Thomas Reardon, his co-
founder, that CTRL-Labs has created a brain-machine interface are
nonsense. The sweatband is nowhere near the brain, and the signals it is
picking up are generated not just by the firing of a motor neuron but by the
electrical activity of muscles. “If this is a BCI, then the movement of my
fingers when I type on a keyboard is also a brain output,” sniffs one
researcher. Krishna Shenoy, who directs the neural prosthetics systems lab
at Stanford University and acts as an adviser to the firm, thinks it is on the
right side of the divide. “Measuring the movement of the hand is motion
capture. They are picking up neural activity amplified by the muscles.”

Whatever the semantics, it is instructive to hear the logic behind the firm’s
decision to record the activity of the peripheral nervous system, rather
than looking directly inside the head. The startup wants to create a
consumer product (its potential uses include being an interface for
interactions in virtual reality and augmented reality). It is not reasonable
to expect consumers to undergo brain surgery, say the founders, and
current non-invasive options for reading the brain provide noisy, hard-to-
read signals. “For machine-learning folk, there is no question which data
set—cortical neurons or motor neurons—you would prefer,” says Dr
Reardon.

This trade-off between the degree of invasiveness and the fidelity of brain
signals is a big problem in the search for improved BCIs. But plenty of
people are trying to find a better way to read neural code from outside the
skull.

The simplest way to read electrical activity from outside is to conduct an
electroencephalogram (EEG). And it is not all that simple. Conventionally,
it has involved wearing a cap containing lots of electrodes that are pressed
against the surface of the scalp. To improve the signal quality, a conductive
gel is often applied. That requires a hairwash afterwards. Sometimes the
skin of the scalp is roughened up to get a better connection. As a consumer
experience it beats going to the dentist, but not by much.

Once on, each electrode picks up currents generated by the firing of
thousands of neurons, but only in the area covered by that electrode.
Neurons that fire deep in the brain are not detected either. The signal is
distorted by the layers of skin, bone and membrane that separate the brain
from the electrode. And muscle activity (of the sort that CTRL-Labs looks
for) from eye and neck movements or clenched jaws can overwhelm the
neural data.

Even so, some EEG signals are strong enough to be picked up pretty
reliably. An “event-related potential”, for example, is an electrical signal
that the brain reliably gives off in response to an external stimulus of some
sort. One such, called an error-related potential (Errp), occurs when a user
spots a mistake. Researchers at MIT have connected a human observer
wearing an EEG cap to an industrial robot called Baxter as it carried out a
sorting task. If Baxter made a mistake, an Errp signal in the observer’s
brain alerted the robot to its error; helpfully, if Baxter still did not react,
the human brain generated an even stronger Errp signal.
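
The logical core of such a setup is straightforward: cut an EEG epoch after each robot action and classify it as error-related or not. A minimal sketch under that reading, with invented sampling and timing details rather than the MIT study's actual parameters:

```python
# Sketch of error-related-potential (ErrP) detection: epoch the EEG
# after each robot action and classify the epoch. The sampling rate,
# epoch length, baseline window and classifier are all assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 256                  # assumed EEG sampling rate in Hz
EPOCH = int(0.8 * FS)     # inspect 800 ms following each action

def epoch_after(eeg, onset):
    """One (channels, EPOCH) slice starting at an action onset."""
    return eeg[:, onset:onset + EPOCH]

def feature_vector(epoch):
    """Flatten the epoch after subtracting each channel's early mean."""
    baseline = epoch[:, :int(0.1 * FS)].mean(axis=1, keepdims=True)
    return (epoch - baseline).ravel()

def train_errp(eeg, onsets, labels):
    """Fit a linear classifier on labelled epochs (1 = robot erred)."""
    X = np.array([feature_vector(epoch_after(eeg, t)) for t in onsets])
    return LinearDiscriminantAnalysis().fit(X, labels)

def robot_should_stop(model, eeg, onset):
    """True if the observer's brain appears to have flagged the action."""
    return model.predict([feature_vector(epoch_after(eeg, onset))])[0] == 1
```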

If the cap fits

Neurable, a consumer startup, has developed an EEG headset with just
seven dry electrodes, which uses a signal called the P300 to enable users to
play a virtual-reality (VR) escape game. This signal is a marker of surprise
or recognition. Think of the word “brain” and then watch a series of letters
flash up randomly on a screen; when the letter “b” comes up, you will
almost certainly be giving off a P300 signal. In Neurable’s game, all you
have to do is concentrate on an object (a ball, say) for it to come towards
you or be hurled at an object. Ramses Alcaide, Neurable’s boss, sees the
potential for entertainment companies like Disney (owner of the Star Wars
and Marvel franchises) to license the software in theme parks and arcade
games.
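
In the classic P300 paradigm, working out which object a player is attending to comes down to flashing each candidate in turn and comparing the averaged brain responses. Neurable's actual algorithm is not public, so the timings and channel choice below are assumptions:

```python
# Sketch of P300-based selection: flash each candidate object repeatedly,
# average the EEG following each object's flashes, and pick the object
# whose average shows the largest deflection around 300 ms. The window
# and sampling rate are illustrative assumptions.
import numpy as np

FS = 256                                        # assumed sampling rate in Hz
WIN = slice(int(0.25 * FS), int(0.45 * FS))     # window around 300 ms

def p300_score(eeg, flash_onsets):
    """Mean amplitude near 300 ms, averaged over one object's flashes.

    `eeg` is a single trace from a parietal channel (e.g. Pz, where the
    P300 is strongest); averaging over flashes suppresses activity that
    is not time-locked to the stimulus.
    """
    epochs = np.array([eeg[t + WIN.start:t + WIN.stop] for t in flash_onsets])
    return epochs.mean(axis=0).mean()

def attended_object(eeg, onsets_by_object):
    """Return the object whose flashes evoke the strongest response."""
    return max(onsets_by_object,
               key=lambda obj: p300_score(eeg, onsets_by_object[obj]))
```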

Thorsten Zander of the Technische Universität Berlin thinks that
"passive" EEG signals (those that are not evoked by an external stimulus)
can be put to good use too. Research has shown that brainwave activity
changes depending on how alert, drowsy or focused a person is. If an EEG
can reliably pick this up, perhaps surgeons, pilots or truck drivers who are
becoming dangerously tired can be identified. Studies have shown strong
correlations between people’s mental states as shown by an EEG and their
ability to spot weapons in X-rays of luggage.
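
Such vigilance measures are conventionally read off the EEG's frequency bands: drowsiness tends to show up as rising theta and alpha power relative to beta. A minimal sketch of one such index, with assumed band edges and an arbitrary threshold:

```python
# Sketch of a passive-EEG vigilance index: the ratio of slow (theta,
# alpha) to fast (beta) band power. Band edges, sampling rate and the
# alert threshold are illustrative assumptions.
import numpy as np
from scipy.signal import welch

FS = 256   # assumed sampling rate in Hz

def band_power(freqs, psd, lo, hi):
    """Integrate the power spectral density over one frequency band."""
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

def drowsiness_index(eeg_window):
    """(theta + alpha) / beta power; a higher value suggests lower vigilance."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=2 * FS)
    theta = band_power(freqs, psd, 4, 8)
    alpha = band_power(freqs, psd, 8, 13)
    beta = band_power(freqs, psd, 13, 30)
    return (theta + alpha) / beta

def too_tired(eeg_window, threshold=3.0):
    """Flag an operator whose index crosses a (hypothetical) threshold."""
    return drowsiness_index(eeg_window) > threshold
```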

Yet the uses of EEGs remain limited. In a real-world environment like a
cockpit, a car or an airport, muscle activity and ambient electricity are
likely to confound any neural signals. As for Neurable’s game, it relies not
solely on brain activity but also deploys eye-tracking technology to see
where a player is looking. Dr Alcaide says the system can work with brain
signals alone, but it is hard for an observer to disentangle the two.

Other non-invasive options also have flaws. Magnetoencephalography (MEG)
measures magnetic fields generated by electrical activity in the brain, but it
requires a special room to shield the machinery from Earth’s magnetic
field. Functional magnetic resonance imaging (fMRI) can spot changes in
blood oxygenation, a proxy for neural activity, and can zero in on a small
area of the brain. But it involves a large, expensive machine, and there is a
lag between neural activity and blood flow.

If any area is likely to yield a big breakthrough in non-invasive recording of
the brain, it is a variation on functional near-infrared spectroscopy
(fNIRS), the infrared technique used in the
experiment to allow locked-in patients to communicate. In essence, light
sent through the skull is either absorbed or reflected back to detectors,
providing a picture of what is going on in the brain. This technique does
not require bulky equipment, and unlike EEG it does not measure
electrical activity, so it is not confused by muscle activity. Both Facebook
and Openwater are focusing their efforts on this area.
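
The physics this paragraph describes is usually quantified with the modified Beer-Lambert law: changes in light attenuation at two wavelengths are solved for changes in oxygenated and deoxygenated haemoglobin. A sketch with rough, uncalibrated numbers (the coefficients and path-length values are order-of-magnitude assumptions):

```python
# Sketch of the modified Beer-Lambert law behind fNIRS: attenuation
# changes at two wavelengths are solved for changes in oxygenated (HbO)
# and deoxygenated (HbR) haemoglobin. All numbers are illustrative.
import numpy as np

# Extinction coefficients [HbO, HbR] in 1/(mM*cm), rough values for
# ~760 nm (where HbR absorbs more) and ~850 nm (where HbO absorbs more).
E = np.array([[0.6, 1.5],
              [1.1, 0.7]])
DISTANCE = 3.0   # assumed source-detector separation in cm
DPF = 6.0        # assumed differential path-length factor

def delta_hb(intensity_baseline, intensity_now):
    """Concentration changes (dHbO, dHbR) from intensities at the two
    wavelengths: dOD = -log10(I/I0) = E @ dc * DISTANCE * DPF."""
    d_od = -np.log10(np.asarray(intensity_now) /
                     np.asarray(intensity_baseline))
    return np.linalg.solve(E, d_od) / (DISTANCE * DPF)

# Example: oxygenated blood flowing in absorbs more at ~850 nm, so that
# channel dims more; dHbO comes out positive, dHbR slightly negative.
print(delta_hb([1.0, 1.0], [0.99, 0.98]))
```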

The obstacles to a breakthrough are formidable, however. Current infrared
techniques measure an epiphenomenon, blood oxygenation (which affects how
much light is absorbed), rather than the actual firing of
neurons. The light usually penetrates only a few millimetres into the
cortex. And because light scatters in tissue (think of how your whole
fingertip glows red when you press a pen-torch against it), the precise
source of reflected signals is hard to identify.

Facebook is not saying much about what it is doing. Its efforts are being led
by Mark Chevillet, who joined the social-media giant’s Building 8
consumer-hardware team from Johns Hopkins University. To cope with
the problem of light scattering as it passes through the brain, the team
hopes to be able to pick up on both ballistic photons, which pass through
tissue in a straight line, and what it terms “quasi-ballistic photons”, which
deviate slightly but can still be traced to a specific source. The clock is
ticking. Dr Chevillet has about a year of a two-year programme left to
demonstrate that the firm’s goal of brain-controlled typing at 100 words a
minute is achievable using current invasive cell-recording techniques, and
to produce a road map for replicating that level of performance non-
invasively.

Openwater is much less tight-lipped. Mary Lou Jepsen, its founder, says that
her San Francisco-based startup uses holography to reconstruct how light scatters
in the body, so it can neutralise this effect. Openwater, she suggests, has
already created technology that has a billion times the resolution of an
fMRI machine, can penetrate the cortex to a depth of 10cm, and can
sample data in milliseconds.

Openwater has yet to demonstrate its technology, so these claims are
impossible to verify. Most BCI experts are sceptical. But Ms Jepsen has an
impressive background in consumer electronics and display technologies,
and breakthroughs by their nature upend conventional wisdom. Developer
kits are due out in 2018.

In the meantime, other efforts to decipher the language of the brain are
under way. Some involve heading downstream into the peripheral nervous
system. One example of that approach is CTRL-Labs; another is provided
by Qi Wang, at Columbia University, who researches the role of the locus
coeruleus, a nucleus deep in the brain stem that plays a role in modulating
anxiety and stress. Dr Wang is looking at ways of stimulating, through the
skin, the vagus nerve, which runs from the brain into the abdomen, to see
if he can affect the locus coeruleus.

Others are looking at invasive approaches that do not involve drilling
through the skull. One idea, from a firm called SmartStent, which uses
technology partly developed with the University of Melbourne, is a
stent-like device called a "stentrode" that is studded with electrodes. It is
inserted via a small incision in the neck and then guided up through blood
vessels to overlie the brain. Once the device is in the right location, it
expands from about the size of a matchstick to the size of the vessel and
tissue grows into its scaffolding, keeping it in place. Human trials of the
stentrode are due to start next year.

Another approach is to put electrodes under the scalp but not under the
skull. Maxime Baud, a neurologist attached to the Wyss Centre, wants to
do just that in order to monitor the long-term seizure patterns of
epileptics. He hopes that once these patterns are revealed, they can be used
to provide accurate forecasts of when a seizure is likely to occur.

Yet others think they need to go directly to the source of action potentials.
And that means heading inside the brain itself.

This article appeared in the Technology Quarterly section of the print
edition under the headline "Headache"
