
MPHY2002: Introduction to Biophysics

9. Neural Encoding and Decoding

9.1 Introduction

A neuron is a cell that transmits information through changes in its
membrane potential. A natural question arises: how is this
information encoded? What language does a neuron speak?

These questions arise naturally for neurons that transmit
information about external stimuli sensed by an organism.
One famous example in neuroscience involves visual stimuli consisting of
simple patterns of bars oriented in different directions.

Imagine that we have identified a neuron that is involved with
transmitting information about these visual stimuli. We can record its
membrane potential so that we can observe action potentials that it
generates. Two questions arise:

Neural coding: when we know what stimulus will be presented,
what is our prediction for the action potentials that the neuron
will generate?

Neural decoding: when we know what action potentials were
generated by the neuron, what is our prediction for the stimuli
that were presented?

As an example of an experiment that could be performed to explore
neural coding and decoding, we consider a hypothetical experiment in
which two stimuli are presented in succession – the first stimulus for
the time interval of 0 to 1 seconds, and the second stimulus for the
time interval of 1 to 2 seconds. This experiment is repeated several
times (each repetition is called a “trial”). An example of membrane
potential recordings that could be made from a neuron during this
experiment is shown in Figure 1. In this simulated data, the membrane
potential has a resting potential of -70 mV, which briefly rises to 30 mV
during action potentials.

Figure 1. Action potentials generated by one neuron during
presentation of 2 different stimuli. Stimulus 1 and Stimulus 2
were presented three times, in three different trials. This data
was simulated using Poisson Statistics (described in the next
section).

An important observation that we can make from the data in Figure 1
is that the neuron generates different temporal patterns of action
potentials in different trials. For instance, the pattern of action
potentials generated in Trial 1 when Stimulus 1 is presented is
different from the corresponding pattern generated by the neuron in
Trial 2 when Stimulus 1 is presented again. This observation has
implications for our neural coding question: even if we know the
stimulus that was presented, we cannot say exactly what pattern of
action potentials will be produced!

The recordings in Figure 1 strongly suggest that the neuron behaves
differently for different stimuli. For instance, the neuron appears to
generate more action potentials when Stimulus 2 is presented than
when Stimulus 1 is presented. This observation also has implications
for the neural decoding problem: if we know what action potentials
were generated by the neurons, we may have at least some information
about the stimuli that were presented.

9.2 The Poisson Probability Model for Action Potentials

To appreciate how neurons can generate action potentials in a random
manner (so that the temporal patterns of action potentials are
different for different experimental trials), it is very useful to have a
probability model.

In this section we will construct a probability model that allows us to
calculate the probability that a neuron will generate a particular
number of action potentials in a given time interval. We will denote
this number by the random variable N.

For our probability model, we start with the following assumptions:

1) The probability of generating one action potential in a small
time interval of length Δt is given by:

Pr(N = 1) = λΔt    (0.1)

where λ is a constant called the Poisson parameter. This
equation becomes exact in the limit Δt → 0.

2) The number of action potentials generated in one time interval
is independent of the number generated in another time
interval, as long as those two time intervals are non-
overlapping.

With these two assumptions, we can derive the probability distribution
of the number of action potentials in a time interval of length T. To do
so, we discretise this time interval into m time intervals of length Δt.

Following the first assumption, the probability that the neuron
generates an action potential in one of the m time intervals is λΔt, and
following the second assumption, the probability that the neuron
generates an action potential in n specific time intervals is given by
(λΔt)^n. With similar logic, the probability that the neuron generates
no action potentials in the remaining m − n time intervals is given by
(1 − λΔt)^(m−n).

The number of ways of choosing the n time intervals in which action
potentials were generated from a total of m time intervals is given by
the binomial coefficient m! / [(m − n)! n!]. Putting all of the factors
together, we obtain the approximation:

Pr(N = n) = [m! / ((m − n)! n!)] (λΔt)^n (1 − λΔt)^(m−n)    (0.2)
Equation (0.2) is an approximation because it was obtained by
dividing the time interval into a finite number of intervals of
length Δt; to obtain an exact expression, we need to consider the limit
as Δt → 0 (since time is assumed to flow continuously, not in units of
length Δt). Since Δt = T/m, the limit Δt → 0 is equivalent to m → ∞.
The quotient m! / (m − n)! in Equation (0.2) can be written as
m(m − 1)⋯(m − n + 1) = m^n (1)(1 − 1/m)⋯(1 − (n − 1)/m), so that it
approaches m^n as m → ∞. Additionally, the exponent m − n in
Equation (0.2) approaches m as m → ∞. Taking these limits into
consideration, we can write Equation (0.2) as:

Pr(N = n) = (m^n / n!) (λΔt)^n (1 − λΔt)^m
          = ((T/Δt)^n / n!) (λΔt)^n (1 − λΔt)^(T/Δt)
          = ((λT)^n / n!) (1 − λΔt)^(T/Δt)    (0.3)

If we define ε = λΔt, we can rewrite Equation (0.3) as:

Pr(N = n) = ((λT)^n / n!) (1 − ε)^(λT/ε)
          = ((λT)^n / n!) [(1 − ε)^(−1/ε)]^(−λT)    (0.4)

In the limit Δt → 0, ε → 0, so that:

Pr(N = n) = lim_{ε→0} ((λT)^n / n!) [(1 − ε)^(−1/ε)]^(−λT)
          = ((λT)^n / n!) [lim_{ε→0} (1 − ε)^(−1/ε)]^(−λT)    (0.5)

The term in square brackets is simply Euler’s number e, so that we
obtain:

Pr(N = n) = ((λT)^n / n!) e^(−λT)    (0.6)

This probability model for the number of action potentials is called
Homogeneous Poisson statistics (the homogeneous part refers to the
fact that λ is assumed to be constant over the time interval for
which we calculate the probabilities).


We state without proof that the mean μ and the variance σ² of the
random variable N are as follows:

μ = λT    (0.7)

σ² = λT    (0.8)

Equation (0.7) tells us that the mean of N scales linearly with the
length of the time interval, T. When T = 1, μ = λ, so we can interpret
the Poisson parameter λ as the average number of action potentials
generated in one second.
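As a numerical check on Equations (0.6)–(0.8), the probability mass function can be evaluated directly; a minimal Python sketch (the helper name `poisson_pmf` is my own, not from the text):

```python
import math

def poisson_pmf(n, rate, T):
    """Pr(N = n) for a homogeneous Poisson neuron, Equation (0.6)."""
    mu = rate * T  # expected spike count, lambda * T
    return mu**n / math.factorial(n) * math.exp(-mu)

rate, T = 10.0, 1.0  # lambda = 10 s^-1 over a 1 s window
# Summing over enough n: the pmf sums to 1, and the mean and variance
# both equal lambda*T, consistent with Equations (0.7) and (0.8).
total = sum(poisson_pmf(n, rate, T) for n in range(100))
mean = sum(n * poisson_pmf(n, rate, T) for n in range(100))
var = sum((n - mean) ** 2 * poisson_pmf(n, rate, T) for n in range(100))
print(round(total, 6), round(mean, 6), round(var, 6))  # -> 1.0 10.0 10.0
```

Truncating the sum at n = 100 is harmless here, because the probability mass beyond that point is vanishingly small for λT = 10.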

9.3 Neural Coding with Poisson Statistics

Using Poisson Statistics, we can develop a model for how a neuron
responds when different external stimuli are presented. To do so, we
can imagine that there is a relationship between the stimulus and the
Poisson parameter λ. For instance, in the case where there are two
stimuli, we can write:

Stimulus 1 → λ₁
Stimulus 2 → λ₂

If there are an infinite number of different stimuli, we need a different
way of expressing this relationship. For instance, the stimuli may be
bars that are presented as visual patterns, such that the bars can be
oriented at any angle θ. In this case, the relationship between the
stimulus and λ is described by a function called a tuning curve.
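As an illustration, one common choice is a bell-shaped (Gaussian) tuning curve. Everything in this sketch — the preferred angle, peak rate, and width — is a hypothetical example for illustration, not data from the text:

```python
import math

def tuning_curve(theta_deg, preferred_deg=90.0, peak_rate=20.0, width_deg=30.0):
    """Hypothetical Gaussian tuning curve: Poisson parameter lambda (s^-1)
    as a function of the orientation of a bar stimulus (degrees)."""
    return peak_rate * math.exp(-((theta_deg - preferred_deg) ** 2)
                                / (2 * width_deg**2))

# The neuron fires fastest at its preferred orientation (90 degrees here)
# and progressively more slowly as the bar is rotated away from it.
for theta in (0, 45, 90, 135):
    print(theta, round(tuning_curve(theta), 2))
```

The symmetry about the preferred angle means a bar at 45° and one at 135° would evoke the same average firing rate under this particular model.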

As an example of neural coding calculations, we consider an
experiment in which two stimuli are presented: Horizontal Bars and
Vertical Bars. A neuron generates action potentials according to
Poisson statistics, and the following information is known about its
behaviour:

Horizontal Bars → λ = 10 s⁻¹
Vertical Bars → λ = 20 s⁻¹


What are the probabilities that the neuron generates exactly 10 action
potentials in a time interval of 1 second, when a) horizontal bars are
presented and b) vertical bars are presented? These probabilities are
conditional probabilities since they are dependent on the stimulus that
is presented.

With Equation (0.6), the conditional probability that N = 10 when
Horizontal Bars are presented is:

Pr(N = 10 | S = horizontal) = ((10 × 1)^10 / 10!) e^(−10×1) ≈ 0.125

Likewise, the conditional probability that N = 10 when Vertical Bars
are presented is:

Pr(N = 10 | S = vertical) = ((20 × 1)^10 / 10!) e^(−20×1) ≈ 0.00582

We note that while the two conditional probabilities are different,
there is a non-zero conditional probability that the neuron will
generate the same number of action potentials in both cases. A
fascinating question that emerges is how the brain manages to cope
with these uncertainties, so that we can ultimately be certain (or at
least think that we’re certain!) about which stimulus is presented.
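The two likelihoods above can be checked numerically with Equation (0.6); a minimal sketch (the helper name `poisson_pmf` is my own):

```python
import math

def poisson_pmf(n, rate, T=1.0):
    """Pr(N = n) under Equation (0.6) with Poisson parameter `rate` (s^-1)."""
    mu = rate * T
    return mu**n / math.factorial(n) * math.exp(-mu)

p_h = poisson_pmf(10, 10.0)  # Pr(N = 10 | S = horizontal), lambda = 10 s^-1
p_v = poisson_pmf(10, 20.0)  # Pr(N = 10 | S = vertical),   lambda = 20 s^-1
print(round(p_h, 3), round(p_v, 5))  # -> 0.125 0.00582
```

Both probabilities are non-zero, which is exactly the overlap that makes decoding non-trivial.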

How can we extend our model to encompass recordings from more
neurons? There are several possibilities that depend on the extent to
which the neurons are coupled together. For instance, there can be two
neurons that are located very close together and which have very
similar connections to other neurons. In this case, we might hypothesise
that these two neurons generate very similar temporal patterns of
action potentials for a given stimulus. Conversely, there can be
neurons that are located far from each other or which have very
different connections to other neurons. In this case, we might
hypothesise that the action potentials generated by the first neuron
are independent of the action potentials generated by the second
neuron.


As an example of neural coding with multiple neurons, we consider
two neurons, Neuron 1 and Neuron 2. We assume that the action
potentials generated by Neuron 1 are independent of Neuron 2 and
that they are generated according to Poisson Statistics. One stimulus is
presented and for this stimulus, λ = 10 s⁻¹ for Neuron 1 and λ = 3 s⁻¹ for
Neuron 2. What is the probability that Neuron 1 generates 5 action
potentials in a time interval of one second (N₁ = 5) and Neuron 2
generates 5 action potentials in that same time interval (N₂ = 5)?

Using Equation (0.6), we can calculate the probabilities for each
neuron as follows:

Pr(N₁ = 5) = ((10 × 1)^5 / 5!) e^(−10×1) ≈ 0.0378

Pr(N₂ = 5) = ((3 × 1)^5 / 5!) e^(−3×1) ≈ 0.101

Using the assumption that the action potentials of the two neurons are
independent, the joint probability is a product of the probabilities for
the two neurons:

Pr(N₁ = 5 ∩ N₂ = 5) = Pr(N₁ = 5) Pr(N₂ = 5) = 0.0378 × 0.101 ≈ 0.00381


The probability distributions for the two neurons are shown in Figure
2.

Figure 2. Probability mass functions for the number of action
potentials generated in one second by two different neurons
with different average firing rates (3 s⁻¹ and 10 s⁻¹), which were
calculated using Poisson statistics [Equation (0.6)].

9.4 Neural Decoding

In recent years researchers have found ways of safely recording from
multiple neurons in awake human patients. One of the natural clinical
applications of this research is to develop robotic limbs that paralysed
patients can move by thought (since their neuronal connections to
their limbs do not function properly).¹ In these applications, the
problem that arises is to figure out what the brain probably “wanted to
do” by listening to signals from a few of its neurons. This problem is
the decoding problem that was mentioned briefly in the first section of
this chapter.

¹ Promising research at the Massachusetts General Hospital (Boston, U.S.A.) is described in the following video:
http://www.youtube.com/watch?v=QRt8QCx3BCo


If we examine Figure 2 in the previous section, we see that the
probability mass functions for the number of action potentials
generated by two neurons with different average firing rates are
different, but that for each stimulus, there is a non-zero probability
associated with any particular number of action potentials. This
implies that there is not a unique correspondence between the number
of action potentials generated and the stimulus.

In plain language, the neural decoding problem is essentially the
following: what is the best estimate for the stimulus, given the action
potentials that were observed? When we convert from plain language
to mathematical language, we immediately run into the question: what
do we mean by “best”? If the brain acted more like a digital computer,
so that we had exact relationships like “the neuron will generate
exactly 4 action potentials when stimulus 1 is presented and exactly 6
action potentials when stimulus 2 is presented,” decoding would be
simple. Unfortunately (or fortunately) the brain is not so simple: the
same number of action potentials can be generated for either stimulus,
albeit with different probabilities. For the paralyzed patient who is
trying to move a robotic arm, our solution to the neural decoding
problem is of crucial importance!

9.5 Neural Decoding with the Maximum Likelihood Method

If we have a probability model for the number of action potentials
generated by a neuron, and if we observe that a neuron has generated
a particular number of action potentials, we can solve the neural
decoding problem by finding the stimulus associated with the highest
conditional probability of generating the observed number of action
potentials. In this context, the conditional probability is called a
likelihood, and the method of solving the decoding problem is called
Maximum Likelihood decoding.

To illustrate the Maximum Likelihood decoding method, we consider a
simple experiment in which there is one neuron and only two
stimuli are presented, horizontal bars and vertical bars. From previous
experiments, the following is known.

Horizontal Bars → λ = 5 s⁻¹
Vertical Bars → λ = 10 s⁻¹

In this experiment, we observe that the neuron generates 7 action
potentials in one second, and the stimulus is unknown. We assume
that action potentials are generated according to Poisson Statistics.

To solve this neural decoding problem with the Maximum Likelihood
method, we first imagine that horizontal bars were presented, so that
λ = 5 s⁻¹. Using Equation (0.6), the conditional probability of our
observation is:

Pr(N = 7 | S = horizontal) = ((5 × 1)^7 / 7!) e^(−5×1) ≈ 0.104

Next, we imagine that vertical bars were presented, so that λ = 10 s⁻¹.
Again, using Equation (0.6), the conditional probability of our
observation is:

Pr(N = 7 | S = vertical) = ((10 × 1)^7 / 7!) e^(−10×1) ≈ 0.0901

We see from the calculations above that the conditional probability
corresponding to horizontal bars is higher than that
corresponding to vertical bars, so the Maximum Likelihood estimate is
S̃ = horizontal bars. The tilde (~) is used to express the fact that the
equation refers to an estimate, which may be different from the actual
value.²
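The Maximum Likelihood comparison above amounts to evaluating the likelihood of the observation under each candidate stimulus and picking the larger; a minimal sketch (variable names are mine):

```python
import math

def poisson_pmf(n, rate, T=1.0):
    """Pr(N = n) under Equation (0.6)."""
    mu = rate * T
    return mu**n / math.factorial(n) * math.exp(-mu)

# Likelihood of the observation (7 spikes in 1 s) under each stimulus.
likelihoods = {
    "horizontal": poisson_pmf(7, 5.0),  # lambda = 5 s^-1
    "vertical": poisson_pmf(7, 10.0),   # lambda = 10 s^-1
}
estimate = max(likelihoods, key=likelihoods.get)
print(estimate)  # -> horizontal
```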

The maximum likelihood method can be extended to solve the neural
decoding problem for multiple neurons. In this case, the likelihood to
be maximised is the conditional probability that all of the neurons
generate the observed number of action potentials. To illustrate this
concept, we consider a second experiment similar to the previous one,
in which recordings of the membrane potentials of two neurons are
made simultaneously while either horizontal or vertical bars are
presented. The following information is known.

² When the tilde is placed on top of the random variable, as it is here, it does not refer to the complement, as it
does when it is in front of the random variable (as mentioned in Chapter 2).

                  Neuron 1       Neuron 2
Horizontal Bars   λ = 5 s⁻¹      λ = 4 s⁻¹
Vertical Bars     λ = 10 s⁻¹     λ = 9 s⁻¹

In this experiment, we observe that in one second, Neuron 1 generates
7 action potentials (N₁ = 7) and Neuron 2 generates 8 action
potentials (N₂ = 8). Importantly, we assume that the action potentials
generated by these two neurons are independent and that they are
generated according to Poisson Statistics.

To use the Maximum Likelihood method in this context, we calculate
the conditional probabilities that each neuron generated the observed
number of action potentials given each stimulus type, and then we
calculate the joint conditional probabilities that both neurons
generated the observed number of action potentials. For Neuron 1,
these conditional probabilities were already calculated earlier in this
section; for Neuron 2, they can be calculated in the same way.

Neuron 1

Pr(N₁ = 7 | S = horizontal) ≈ 0.104
Pr(N₁ = 7 | S = vertical) ≈ 0.0901

Neuron 2

Pr(N₂ = 8 | S = horizontal) = ((4 × 1)^8 / 8!) e^(−4×1) ≈ 0.0298
Pr(N₂ = 8 | S = vertical) = ((9 × 1)^8 / 8!) e^(−9×1) ≈ 0.132


With the assumption that the numbers of action potentials generated
by the two neurons are independent, we can calculate the joint
conditional probability as follows:

Pr(N₁ = 7 ∩ N₂ = 8 | S = horizontal)
 = Pr(N₁ = 7 | S = horizontal) Pr(N₂ = 8 | S = horizontal)
 = 0.104 × 0.0298 ≈ 0.00310

Pr(N₁ = 7 ∩ N₂ = 8 | S = vertical)
 = Pr(N₁ = 7 | S = vertical) Pr(N₂ = 8 | S = vertical)
 = 0.0901 × 0.132 ≈ 0.0119

Since the conditional probability corresponding to vertical bars and
two neurons is greater than that corresponding to horizontal bars, the
Maximum Likelihood estimate is S̃ = vertical bars. We see from these
calculations that if we consider only Neuron 1, there is a greater
likelihood that horizontal bars were presented, but if we consider both
neurons, there is a greater likelihood that vertical bars were
presented.
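For two independent neurons the same recipe applies, with the likelihood of each stimulus being a product over neurons; a sketch (names mine):

```python
import math

def poisson_pmf(n, rate, T=1.0):
    """Pr(N = n) under Equation (0.6)."""
    mu = rate * T
    return mu**n / math.factorial(n) * math.exp(-mu)

rates = {"horizontal": (5.0, 4.0), "vertical": (10.0, 9.0)}  # (Neuron 1, Neuron 2)
n1, n2 = 7, 8  # observed spike counts in one second

# Joint likelihood per stimulus: product over the two independent neurons.
likelihoods = {s: poisson_pmf(n1, r1) * poisson_pmf(n2, r2)
               for s, (r1, r2) in rates.items()}
estimate = max(likelihoods, key=likelihoods.get)
print(estimate)  # -> vertical
```

Note how adding Neuron 2 flips the estimate relative to the single-neuron case, exactly as in the worked example.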

9.6 Neural Decoding with the Maximum A Posteriori (MAP) Method

In the previous two examples, we estimated the stimulus without
considering the probabilities of the stimuli themselves, which are
called the prior probabilities. This is potentially a significant
limitation, as the probabilities of different stimuli can differ widely,
and in the extreme case in which the probability of a particular
stimulus is known to be zero, our decoding method should never
provide that stimulus as the answer.

If we know the prior probability of a particular stimulus s, Pr(S = s),
then we can calculate the probability that s was presented given that
exactly n action potentials were generated by the neuron,
Pr(S = s | N = n). This calculation involves Bayes’ theorem (Chapter
2), as follows:

Pr(S = s | N = n) = Pr(N = n | S = s) Pr(S = s) / Pr(N = n)    (0.9)

The probability Pr(S = s | N = n) is called the posterior probability.

The Maximum A Posteriori (MAP) decoding method involves the use of
posterior probabilities to estimate the stimulus values. To illustrate
this method, we consider an experiment similar to the one in Section
9.5, where the following information is known.

Horizontal Bars → λ = 5 s⁻¹
Vertical Bars → λ = 10 s⁻¹

Additionally, the following prior probabilities are known:

Pr(S = horizontal) = 0.25
Pr(S = vertical) = 0.75

To obtain the MAP estimate of the stimulus, we first imagine that
horizontal bars were presented, and calculate the numerator of
Equation (0.9). Subsequently, we imagine that vertical bars were
presented and repeat the calculation. These steps lead to the following
values.

Horizontal Bars

Pr(N = 7 | S = horizontal) = ((5 × 1)^7 / 7!) e^(−5×1) ≈ 0.104
Pr(N = 7 | S = horizontal) Pr(S = horizontal) = 0.104 × 0.25 ≈ 0.0261

Vertical Bars

Pr(N = 7 | S = vertical) = ((10 × 1)^7 / 7!) e^(−10×1) ≈ 0.0901
Pr(N = 7 | S = vertical) Pr(S = vertical) = 0.0901 × 0.75 ≈ 0.0676


How about the denominator of Equation (0.9), Pr(N = 7)? It is just the
sum of two previously calculated values:

Pr(N = 7) = Pr(N = 7 | S = horizontal) Pr(S = horizontal)
          + Pr(N = 7 | S = vertical) Pr(S = vertical)
          = 0.0261 + 0.0676
          = 0.0937

We can now use Equation (0.9) and calculate the posterior probability
of each stimulus:

Pr(S = horizontal | N = 7) = Pr(N = 7 | S = horizontal) Pr(S = horizontal) / Pr(N = 7)
                           = 0.0261 / 0.0937
                           ≈ 0.2785

Pr(S = vertical | N = 7) = Pr(N = 7 | S = vertical) Pr(S = vertical) / Pr(N = 7)
                         = 0.0676 / 0.0937
                         ≈ 0.7215

We note that since the stimulus is either horizontal or vertical, we
must have Pr(S = horizontal | N = 7) + Pr(S = vertical | N = 7) = 1.
Furthermore, we note that it is not necessary to calculate the
denominator, Pr(N = 7), in order to determine which of the posterior
probabilities is larger: we can take the ratio of the two and the
denominator cancels out.

Since the posterior probability corresponding to vertical bars is higher
than that corresponding to horizontal bars, our MAP estimate for the
stimulus is S̃ = vertical bars. We note that we had obtained a
different estimate using the Maximum Likelihood method. Knowledge
about the prior probabilities of the stimuli can be important!
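The full MAP calculation can be sketched as follows (helper names mine). Note how the shared denominator Pr(N = 7) is just the sum of the numerators, so the posteriors are simply the normalised numerators:

```python
import math

def poisson_pmf(n, rate, T=1.0):
    """Pr(N = n) under Equation (0.6)."""
    mu = rate * T
    return mu**n / math.factorial(n) * math.exp(-mu)

rates = {"horizontal": 5.0, "vertical": 10.0}    # Poisson parameters, s^-1
priors = {"horizontal": 0.25, "vertical": 0.75}  # prior probabilities
n_obs = 7                                        # observed spike count in 1 s

# Numerators of Bayes' theorem, Equation (0.9), one per candidate stimulus.
numerators = {s: poisson_pmf(n_obs, rates[s]) * priors[s] for s in rates}
evidence = sum(numerators.values())              # Pr(N = 7)
posteriors = {s: numerators[s] / evidence for s in numerators}
estimate = max(posteriors, key=posteriors.get)
print(estimate, round(posteriors["vertical"], 2))  # -> vertical 0.72
```

Setting both priors to 0.5 in this sketch reproduces the Maximum Likelihood answer (horizontal), which is the sense in which ML is MAP with a flat prior.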
