An Introduction to Modeling Neuronal Dynamics
Christoph Börgers
Department of Mathematics
Tufts University
Medford, MA, USA
Preface
11 Saddle-Node Collisions
    Exercises
13 Hopf Bifurcations
    Exercises
19 Bursting
    19.1 Hysteresis-Loop Bursting
    19.2 A Concrete Example
    19.3 Analysis in an Idealized Setting
    19.4 Comparison of the Idealized Analysis with Biophysical Models
    Exercises
Bibliography
Index
Chapter 1

Vocabulary and Notation

[Figure 1.1: schematic of a neuron, with dendrites and axon labeled.]
Neurons and other cells are filled and surrounded by water in which ions such as sodium (Na⁺), potassium (K⁺), chloride (Cl⁻), and calcium (Ca²⁺) are dissolved. The superscripts indicate electrical charge. For instance, a chloride ion carries the charge −q, and a calcium ion carries the charge 2q, where q denotes the elementary charge, i.e., the charge of a positron:

    q ≈ 1.60 × 10⁻¹⁹ C.

Electronic supplementary material: The online version of this chapter (doi: 10.1007/978-3-319-51171-9_1) contains supplementary material, which is available to authorized users.
Figure 1.2. Hippocampal neurons (green) and glial cells (red). Copyright
Paul De Koninck, Université Laval, www.greenspine.ca, reproduced with permission.
[K⁺]ex ≪ [K⁺]in , [Cl⁻]ex ≫ [Cl⁻]in , and [Ca²⁺]ex ≫ [Ca²⁺]in . (The symbols "≫" and "≪" stand for "is much greater than" and "is much less than," respectively.)
The difference in electrical potential between the interior and the exterior of the
membrane is called the membrane potential, denoted by v. For a nerve cell in
equilibrium, a typical value of v might be −70 mV, i.e., the potential on the interior
side of the membrane is 70 mV below that in the extracellular fluid. Here mV stands
for millivolt, the most commonly used unit of electrical potential in physiology.
It is customary in neuroscience to use the word hyperpolarization for lowering v
(making it more negative than it is in equilibrium), and depolarization for raising v
(making it closer to zero, or even positive). We also associate the word excitatory
with depolarization, and inhibitory with hyperpolarization; thus an excitatory input
raises v, and an inhibitory one lowers v.
Nerve and muscle cells are excitable, i.e., they are capable of generating brief
surges in the membrane potential called action potentials or voltage spikes; see
Fig. 1.3 for a computer simulation, and Fig. 1.4 for an experimental recording. When
a neuron generates an action potential, one also says that it spikes or fires. The
intervals between action potentials are called the inter-spike intervals. Most action
potentials are sodium-based; some are calcium-based. For sodium-based action
potentials to arise, the cell membrane must include sodium channels that open up
as v rises, for instance, as a result of input from other nerve cells (see below). Since
sodium is more plentiful outside the cell than inside, sodium then enters the cell.
Since sodium ions carry positive charge, this results in a further increase in v, further
opening of sodium channels, and so on. The result may be a rapid self-accelerating
rise in v, often to values above 0 mV. However, the sodium channels underlying the
generation of action potentials do not remain open indefinitely at high membrane
potentials; within a millisecond or two, they start closing again. One says that they
inactivate. At the same time, the elevated value of v causes potassium channels to
open. Just like the sodium channels, the potassium channels respond to rising v by
opening, but they do so more sluggishly. Since potassium is more plentiful inside the
cell than in the extracellular fluid, potassium leaves the cell, and since potassium
carries positive charge, v falls. The current carried by the potassium ions is called
the delayed rectifier current — delayed because the potassium channels open up
with a delay, and rectifier because the potassium current results in the return of v
to its equilibrium value.
[Figure 1.3: simulated voltage trace; v (in mV) as a function of t (in ms).]
In Figs. 1.3 and 1.4, it appears that the voltage gradually rises until it reaches a fairly sharply defined threshold, then all of a sudden rises rapidly; the threshold
[Figure 1.4: experimentally recorded voltage trace; voltage (in mV) as a function of time (in ms).]
seems to be a little bit lower than −60 mV in Fig. 1.3. One calls this the firing
threshold or the threshold voltage. The fact that there often appears to be such
a threshold is the basis of the integrate-and-fire models discussed in Chapters 7
and 8. However, in reality there is not usually a sharply defined threshold voltage;
whether or not an action potential will occur depends not only on v, but also on
other variables characterizing the state of the neuronal membrane.
The voltage spike is spatially local: It typically originates near the cell body
on the axonal side, in a location called the axon hillock. It then travels down the
axon until it reaches the tips of the branches of the axon, also called the axon
terminals. The axon terminals come very close to the membranes of other neurons.
The locations of such near-contacts are called synapses. The space between the
pre-synaptic and post-synaptic membranes is called the synaptic cleft. It is on the
order of 20 nm = 20 × 10⁻⁹ m wide. When an action potential arrives at an axon
terminal, it often causes the release of a chemical called a neurotransmitter, which
diffuses across the synaptic cleft to the post-synaptic membrane, where it binds to
specialized receptors and leads to the opening of ion channels, thereby affecting the
membrane potential of the post-synaptic neuron.
Different classes of neurons release different neurotransmitters. An important
example of a neurotransmitter is glutamate. When glutamate binds to a receptor,
the effect is a depolarization of the post-synaptic membrane, i.e., a rise in the
membrane potential of the post-synaptic neuron. Therefore one calls glutamate an
excitatory neurotransmitter. It is the most common excitatory neurotransmitter in
the brain. Neurons that release an excitatory neurotransmitter are called excitatory
neurons. Neurons that release glutamate are called glutamatergic. Many excitatory
neurons in the brain have cell bodies of pyramidal shape, and are therefore called
pyramidal neurons.
Two important glutamate receptor classes are the AMPA receptors and the
NMDA receptors. They derive their names from synthetic substances that can
1 One must say “usually” here because GABA can sometimes have a shunting effect, i.e., hold
the membrane potential in place rather than lower it, or even be excitatory; see, for instance, [35].
2 Hyperpolarization can also have an excitatory effect indirectly, by inducing depolarizing cur-
We write

    f (t) ∼ g(t) as t → ∞

if the limit lim t→∞ f (t)/g(t) exists and is positive and finite.

    αm (v) = ((v + 45)/10) / (1 − exp(−(v + 45)/10)).
The letter v denotes a voltage here, and αm is a reciprocal time. With the proper
physical units, the formula should be written like this:
    αm (v) = [(v + 45 mV)/10 mV] / [1 − exp(−(v + 45 mV)/10 mV)] ms⁻¹ .
This looks ludicrous, and we will therefore make the convention that the unit of
voltage is always the millivolt (mV), and the unit of time is always the millisecond
(ms). Other physical quantities similarly have standard units that are implied when
none are specified; see Table 1.1.
Even though it is assumed that the reader has a rudimentary knowledge of
high school physics, I will review the definitions of these units briefly. To move a
unit charge, i.e., the charge of a positron (which has the same charge as an electron,
but with positive sign), from location A to location B requires 1 joule (J) of work
if the electrical potential in B exceeds that in A by one volt (V). The joule, the
standard unit of work (energy), is a newton (N) times a meter, and the newton is
defined by the equation
    N = kg·m/s² .
The “m” in mV, ms, and mS stands for “milli,” a factor of 10−3 . The ampere (A)
is the standard unit of current, defined by
    A = C/s ,
where the letter C stands for coulomb (unit of charge). The “μ” in μA and μF
stands for “micro,” a factor of 10−6 . The siemens (S) is the standard unit of
conductance, defined by
    S = A/V .
Its reciprocal is the ohm (Ω),
    Ω = V/A ,
the standard unit of resistance. The siemens is therefore sometimes called the
mho (ohm spelled backwards). The farad (F) is the standard unit of capacitance,
defined by
    F = C/V .
Neuroscientists work with current, conductance, and capacitance densities
more often than with currents, conductances, and capacitances per se. This is
why in Table 1.1, we have listed the units of current, conductance, and capacitance
densities.
Notice that time is typically measured in ms in this book, but frequency in
hertz (Hz), i.e., in s−1 , not in ms−1 . This incompatibility will cause factors of 1000
to appear in various formulas throughout the book.
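As a small illustration of that bookkeeping (my own sketch, not taken from the text): converting an inter-spike interval measured in ms to a firing frequency in Hz requires exactly such a factor of 1000.

```python
def frequency_Hz(period_ms):
    """Firing frequency in Hz for an inter-spike interval given in ms."""
    return 1000.0 / period_ms  # 1000 ms per second bridges the two units

print(frequency_Hz(25.0))  # an inter-spike interval of 25 ms means 40 Hz
```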
Part I
Chapter 2

The Nernst Equilibrium

Charged particles, namely ions, diffuse in water in the brain. There is a field of study
called electro-diffusion theory concerned with the diffusion of charged particles. In
this chapter, we study one electro-diffusion problem that is crucial for understanding
nerve cells.
As discussed in Chapter 1, ion concentrations and the electrical potential
are different on the interior and exterior sides of the cell membrane. Following
convention, we call the electrical potential in the extracellular fluid zero. (One is,
in general, allowed to choose freely which electrical potential one calls zero; only
potential differences have physical meaning.) With this convention, the membrane
potential v is the electrical potential on the interior side.
It is instructive to think about what would happen if a membrane potential
v were imposed artificially, for instance, by attaching a battery to the cell. Sup-
pose that the ions of some species X could diffuse through the membrane through
channels, but that there were no ion pumps actively transporting X-ions across the
membrane. If v = 0, one would then expect [X]in and [X]ex to equalize. If v ≠ 0,
electrical forces come into play. Denote by z the number of unit charges carried by
one X-ion: z = 2 for calcium, z = −1 for chloride, etc. If z > 0 and v < 0, for
instance, X-ions are attracted into the cell, causing [X]in to rise. The rise, however,
cannot continue indefinitely; as the discrepancy between [X]in and [X]ex increases,
so does the net rate at which X-ions diffuse from the interior to the exterior.
We denote by We the amount of work done against the electrical field when
moving one ion from the outside to the inside of the cell. (If z > 0 and v < 0,
then We < 0, meaning that the electrical field does work.) Similarly we denote by
Wd the amount of work done against the concentration jump when moving one ion
from the outside to the inside. (If [X]ex > [X]in , then Wd < 0.) The diffusional and
electrical effects are in equilibrium if
We + Wd = 0. (2.1)
We = zqv, (2.2)
where q is the unit charge, i.e., the charge of a positron. (In general, the electrical
potential difference between two points in space is the work that needs to be done
against the electrical field to move a positron from one point to the other.)
To derive a formula for Wd is not quite as straightforward. The diffusional
effects are greater at higher temperature, and it is therefore not surprising that Wd
is proportional to the temperature T . One would also expect that Wd increases as
the ratio [X]in /[X]ex increases. The formula for Wd is
    Wd = kT ln([X]in /[X]ex ) , (2.3)

where k ≈ 1.38 × 10⁻²³ J/K is Boltzmann's constant, J stands for joule (unit of energy or work, J = N·m = kg·m²/s²), and K stands
for kelvin (unit of temperature). We will not justify (2.3) here. However, for
motivation, we will derive the completely analogous and closely related formula for
the work required to compress an ideal gas while keeping its temperature constant
at the end of this chapter.
Using eqs. (2.2) and (2.3), eq. (2.1) becomes v = vX , with
    vX = (kT/(zq)) ln([X]ex /[X]in ) . (2.4)
This quantity is called the Nernst equilibrium potential of ion species X, named after
Walther Nernst (1864–1941), a physical chemist.
Human body temperature is about 37 °C, or 310.15 K. For T = 310.15 K,

    kT/q ≈ 26.7 mV. (2.5)
Typical values of Nernst potentials for sodium, potassium, chloride, and calcium in
mammalian nerve cells are on the order of
vNa = 70 mV, vK = −90 mV, vCl = −70 mV, vCa = 130 mV.
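Equation (2.4) is easy to evaluate directly. A minimal sketch (my own code; the concentration values below are illustrative assumptions, not taken from the text):

```python
import math

def nernst_potential_mV(z, c_ex, c_in, T=310.15):
    """Nernst potential (2.4), v_X = (kT/(zq)) ln([X]_ex/[X]_in), in mV."""
    k = 1.380649e-23   # Boltzmann's constant, J/K
    q = 1.602177e-19   # elementary charge (charge of a positron), C
    return (k * T / (z * q)) * math.log(c_ex / c_in) * 1000.0  # volts -> mV

# Illustrative concentrations in mM (assumed for this sketch):
print(nernst_potential_mV(z=1, c_ex=4.0, c_in=150.0))    # potassium: around -97 mV
print(nernst_potential_mV(z=1, c_ex=145.0, c_in=12.0))   # sodium: around +67 mV
```

Note that kT/q at body temperature comes out to about 26.7 mV, matching (2.5).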
The membrane potential never gets as high as vNa ; therefore sodium will flow into
the cell whenever sodium channels are open. The same holds for calcium. In general,
if v ≠ vX and there are open X-channels, there is a flow of X-ions across the cell
membrane, and therefore an electrical current, IX , carried by the ions. This current
is often assumed to obey Ohm's law:

    IX = gX (vX − v) , (2.6)

where gX ≥ 0 is the conductance associated with the open X-channels.

    W = kT ln([X]′/[X]) . (2.8)
would violate the first law of thermodynamics, the law of conservation of energy.
In exercise 5, you will be asked to verify explicitly that one particular alternative
way of compressing the gas leads to the same equation (2.8).
Exercises
2.1. Assume z < 0 and [X]ex > [X]in . In a sentence or two, explain why vX ,
computed from (2.4), has the sign that one would intuitively expect it to have.
2.2. There are indications [3] that in schizophrenia, the activity of NKCC1 (sodium-
potassium-chloride cotransporter 1) may be increased in the prefrontal cortex,
whereas that of KCC2 (potassium-chloride cotransporter 2) may be decreased.
NKCC1 mediates chloride uptake by cells, whereas KCC2 mediates chloride
extrusion. Thus one would expect there to be an abnormally high chloride
concentration in the cell interior. Would this raise or lower the Nernst potential
of chloride?
2.3. Convince yourself that the right-hand side of (2.7) has the physical dimension
of a pressure.
2.4. Before thinking much about (2.6), you might have guessed that an extra minus
sign ought to appear in the equation if the ions were negatively charged, i.e.,
z < 0. Explain why in fact, the signs in (2.6) are precisely what you should
intuitively expect them to be, regardless of whether z > 0 or z < 0.
2.5. We derived (2.8) by thinking about reducing the length of a cylindrical container. Suppose that instead, you accomplish the increase in the number density from [X] to [X]′ by putting the gas into a spherical container, and shrinking the radius of the container. Show that this results in (2.8) as well.
Chapter 3
The Classical
Hodgkin-Huxley ODEs
Electronic supplementary material: The online version of this chapter (doi: 10.1007/978-3-319-51171-9_3) contains supplementary material, which is available to authorized users.
4 An “ordinary differential equation” (ODE) involves derivatives of functions of one variable
only. By contrast, a “partial differential equation” (PDE) involves partial derivatives of functions
of several variables.
where Q is the separated charge (the charge carried by the two layers is ±Q), and
the constant of proportionality, C, is called the capacitance. If Q and v depend on
time, t, we can differentiate both sides of (3.1) to obtain
    C dv/dt = Itotal , (3.2)
where Itotal = dQ/dt is the total electrical current from one side of the capacitor to
the other; this equation is the starting point for the Hodgkin-Huxley ODEs.
Based on experiments, Hodgkin and Huxley hypothesized that Itotal was made
up of four components: A sodium current INa , a potassium current IK , a small
additional current which they called the leak current and denoted by IL , carried
by chloride and other ions, and the current I that they themselves injected, using
electrodes, in the course of their experiments. Thus
    C dv/dt = INa + IK + IL + I. (3.3)
The currents INa , IK , and IL are assumed to obey Ohm's law (see (2.6)):

    INa = gNa (vNa − v) ,   IK = gK (vK − v) ,   IL = gL (vL − v) .
Here vNa and vK are the Nernst potentials of sodium and potassium, respectively.
If the leak current were exclusively carried by chloride, vL would be the Nernst
potential of chloride, but since it is carried by a mixture of ion species, vL is a
weighted average of the Nernst potentials of those ion species. Since INa , IK , and
IL change sign when v passes vNa , vK , and vL , one also refers to vNa , vK , and vL as
reversal potentials.
Hodgkin and Huxley derived from their experiments that, and how, the con-
ductances gNa and gK track changes in v. Their descriptions of gNa and gK take the
following forms:
    gNa = g̅Na m³h ,   gK = g̅K n⁴ , (3.4)

where g̅Na and g̅K are constant conductances, and m, h, and n are time-dependent
dimensionless quantities varying between 0 and 1. Hodgkin and Huxley proposed
the following physical interpretation of eqs. (3.4). Suppose that each sodium channel
is guarded by four gates in series, that these gates open and close independently
of each other, and that all four gates must be open for the channel to be open.
Suppose further that there are three gates of one type, let us call them the m-gates,
and one gate of another type, let us call it the h-gate. If m and h denote the
fractions of open m- and h-gates, respectively, then the fraction of open sodium channels is m³h. Similarly, if a potassium channel has four identical independent
gates in series, with the channel open only if all four gates are open, and if n denotes
the fraction of open potassium gates, then the fraction of open potassium channels is n⁴. This physical interpretation is not to be taken literally.5 The observation is
simply that if it were true, the sodium and potassium conductances would in fact
5 However, when a potassium channel was imaged in detail for the first time [39], decades after
the work of Hodgkin and Huxley, the channel turned out to have four identical subunits.
3.1. The Hodgkin-Huxley Model 17
be described by eqs. (3.4), with g̅Na and g̅K equal to the largest possible sodium and
potassium conductances, realized when all channels are open. The variables m, h,
and n are therefore called gating variables.
In the Hodgkin-Huxley model, m, h, and n obey simple first-order ODEs of
the form
    dm/dt = (m∞ (v) − m)/τm (v) ,   dh/dt = (h∞ (v) − h)/τh (v) ,   dn/dt = (n∞ (v) − n)/τn (v) , (3.5)
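For any fixed v, each of these equations is linear in the gating variable and can be solved in closed form. A quick sketch (my own code, not the book's), checked against exercise 3.2 below, which is of exactly this form:

```python
import math

def gate_relax(x0, x_inf, tau, t):
    """Solution of dx/dt = (x_inf - x)/tau with x(0) = x0 (v held fixed):
    x(t) = x_inf + (x0 - x_inf) * exp(-t/tau)."""
    return x_inf + (x0 - x_inf) * math.exp(-t / tau)

# Exercise 3.2: dx/dt = (2 - x)/4, x(0) = 1, i.e., x_inf = 2, tau = 4,
# so x(3) = 2 - exp(-3/4):
print(gate_relax(1.0, 2.0, 4.0, 3.0))  # approximately 1.5276
```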
[Figure 3.1: the functions x∞ and the time constants τx (in ms), x = m, h, n, as functions of v (in mV).]
By contrast with the sodium and potassium conductances, the leak conductance gL is constant. For aesthetic reasons, we write gL = g̅L .
This is a system of four ODEs for the four unknown functions v, m, h, and n. Up to now, we have thought of C as a capacitance, g̅Na , g̅K , and g̅L as conductances, and I as a current. However, dividing both sides of (3.8) by the total membrane area, we see that precisely the same equation holds if we think of C, g̅Na , g̅K , g̅L , and I as capacitance, conductance, and current per unit membrane area, i.e., as capacitance, conductance, and current densities. Following Hodgkin and Huxley, this is how we will interpret these quantities from here on.
All that is left to do to specify the model completely is to specify the constants C, vNa , vK , vL , g̅Na , g̅K , and g̅L , and the formulas for x∞ (v) and τx (v), x = m, h, n.
The constants are
Exercises
3.1. Using separation of variables, derive (3.7) from (3.6).
3.2. Suppose that
    dx/dt = (2 − x)/4 ,   x(0) = 1.
What is x(3)?
3.3. Suppose that v = −75 mV is held fixed until the values of m, h, and n
reach equilibrium. What are the sodium, potassium, and leak conductance
densities now?
3.4. Using l’Hospital’s rule, compute lim v→−45 αm (v) and lim v→−60 αn (v).
3.5. (∗) Using a computer, plot αx and βx , x = m, h, and n. You can use the
code that generates Fig. 3.1 as a starting point, if you like.
Chapter 4

Numerical Solution of the Hodgkin-Huxley ODEs
Suppose that in addition to the system of ODEs, (4.1), we are given the initial
condition
y(0) = y0 , (4.2)
Electronic supplementary material: The online version of this chapter (doi: 10.1007/978-3-319-51171-9_4) contains supplementary material, which is available to authorized users.
7 This certainly does not mean that there are no mathematical methods that yield very valuable insight. You will see some of them later in this book. However, for most differential equations it is impossible to write down “explicit” solutions, that is, formulas representing solutions symbolically, and even when it is possible it may not always be useful.
i.e., we are given v, m, h, and n at time t = 0, and that we want to find y(t),
0 ≤ t ≤ T , for some T > 0.
The simplest idea for computing solutions to this problem is as follows. Choose
a large integer M , and define Δt = T /M and tk = kΔt, k = 0, 1, 2, . . . , M . Compute
approximations

    yk ≈ y(tk ) , k = 1, 2, . . . , M,

defined by

    (yk − yk−1 )/Δt = F (yk−1 ) , k = 1, 2, . . . , M. (4.3)
This is called Euler’s method, named after Leonhard Euler (1707–1783). Note the
similarity between eqs. (4.1) and (4.3): One obtains (4.3) by replacing the derivative
in (4.1) by a difference quotient. This is motivated by the fact that
    (y(tk ) − y(tk−1 ))/Δt ≈ (dy/dt)(tk−1 ) (4.4)
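In code, Euler's method is a few lines; a sketch of my own (scalar case for simplicity, assuming F maps a state to its time derivative):

```python
def euler(F, y0, T, M):
    """Euler's method (4.3) for dy/dt = F(y), y(0) = y0, on [0, T] with M steps."""
    dt = T / M
    y = y0
    trajectory = [y0]
    for _ in range(M):
        y = y + dt * F(y)      # y_k = y_{k-1} + dt * F(y_{k-1})
        trajectory.append(y)
    return trajectory

# dy/dt = -y, y(0) = 1 has exact solution exp(-t); with M = 1000 steps on [0, 1],
# the Euler approximation of y(1) is close to exp(-1) ≈ 0.3679:
print(euler(lambda y: -y, 1.0, 1.0, 1000)[-1])
```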
A more accurate alternative computes approximations ŷk ≈ y(tk ) defined by

    (ŷk−1/2 − ŷk−1 )/(Δt/2) = F (ŷk−1 )   and (4.5)

    (ŷk − ŷk−1 )/Δt = F (ŷk−1/2 ) , k = 1, 2, . . . , M. (4.6)
This is called the midpoint method. Equation (4.5) describes a step of Euler’s
method, with Δt replaced by Δt/2. Equation (4.6) is motivated by the fact that
    (y(tk ) − y(tk−1 ))/Δt ≈ (dy/dt)(tk−1/2 ) (4.7)
for small Δt. The central difference approximation in (4.7) is much more accu-
rate than the one-sided approximation in (4.4); see exercise 3. You might be
concerned that whichever advantage is derived from using the central difference
approximation in (4.6) is essentially squandered by using Euler’s method to com-
pute ŷk−1/2 . However, this is not the case; the reason is, loosely speaking, that the
right-hand side of eq. (4.6) is multiplied by Δt in the process of solving the equation
for ŷk .
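A sketch of the midpoint method in code (mine, not the book's), with a crude empirical check: halving Δt should cut the error by about a factor of 4.

```python
import math

def midpoint(F, y0, T, M):
    """Midpoint method, eqs. (4.5)-(4.6), for dy/dt = F(y), y(0) = y0, on [0, T]."""
    dt = T / M
    y = y0
    for _ in range(M):
        y_half = y + 0.5 * dt * F(y)   # half Euler step, eq. (4.5)
        y = y + dt * F(y_half)         # full step using the midpoint slope, eq. (4.6)
    return y

# Accuracy check on dy/dt = -y, y(0) = 1, whose exact solution is exp(-t):
e1 = abs(midpoint(lambda y: -y, 1.0, 1.0, 100) - math.exp(-1.0))
e2 = abs(midpoint(lambda y: -y, 1.0, 1.0, 200) - math.exp(-1.0))
print(e1 / e2)  # close to 4: halving dt divides the error by about 2 squared
```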
The theoretical analysis of Euler’s method and the midpoint method re-
lies on two assumptions. First, F must be sufficiently often differentiable.
(Twice is enough, but that is unimportant here: The right-hand side of the
Hodgkin-Huxley equations is infinitely often differentiable with respect to v, m,
h, and n.) In addition, one must assume that the solution y(t) is defined for
0 ≤ t ≤ T . See exercise 4 for an example illustrating that y(t) is not guaranteed
to be defined for 0 ≤ t ≤ T even if F is infinitely often differentiable. However, for
the Hodgkin-Huxley equations, one can prove that all solutions are defined for all
times; see exercise 5.
To characterize the accuracy of the approximations obtained using Euler’s
method and the midpoint method, we note first that yk and ŷk depend not only
on k, but also on Δt, and we make this dependence clear now by writing yk,Δt and
ŷk,Δt instead of yk and ŷk .
For Euler’s method, there exists a constant C > 0 independent of Δt (but dependent on F , y0 , and T ) so that

    max 1≤k≤M |yk,Δt − y(tk )| ≤ C Δt, (4.8)

and similarly, for the midpoint method,

    max 1≤k≤M |ŷk,Δt − y(tk )| ≤ Ĉ Δt², (4.9)

for a constant Ĉ > 0 independent of Δt. The proofs of these results can be found in most textbooks on numerical analysis, for instance, in [78].
For small Δt, ĈΔt2 is much smaller than CΔt. (If Δt is small, Δt2 is much
smaller than Δt.) Therefore the midpoint method gives much better accuracy than
Euler’s method when Δt is small. One says that Euler’s method is first-order
accurate, and the midpoint method is second-order accurate. This terminology
refers to the powers of Δt in eqs. (4.8) and (4.9).
Suppose that we want to compute the solution y up to some small error ε > 0. If we use Euler’s method, we should make sure that CΔt ≤ ε, so Δt ≤ ε/C. If we use the midpoint method, we need ĈΔt² ≤ ε, so Δt ≤ √(ε/Ĉ). For small ε, the bound √(ε/Ĉ) is much larger than the bound ε/C. This means that at least for stringent accuracy requirements (namely, for small ε), the midpoint method is much more efficient than Euler’s method, since it allows much larger time steps than Euler’s method.
In many places in this book, we will present computed solutions of systems
of ODEs. All of these solutions were obtained using the midpoint method, most
typically with Δt = 0.01 ms.
[Figure 4.1: v (in mV) and the gating variable n of the delayed rectifier current as functions of t (in ms).]
Figure 4.2. Projection into the (v, n)-plane of the solution shown in
Fig. 4.1. The arrows indicate the direction in which the point (v, n) moves.
[HH_LIMIT_CYCLE]
[Figure 4.3: three voltage traces (v in mV versus t in ms), showing the response to a brief charge injection delivered at different times after a spike.]
Figure 4.1 shows that it takes the gating variable, n, of the delayed recti-
fier current a few milliseconds to decay following a spike. This has an important
consequence: Input arriving within a few milliseconds following a spike has very
little effect. Positive charge injected at this time immediately leaks out. This is
illustrated by Fig. 4.3. When the pulse comes soon after a spike (middle panel), it
has almost no effect. When it comes just a few milliseconds later (bottom panel),
it instantly triggers a new spike. One says that the neuron is refractory for a brief
time following an action potential, the refractory period. It is not a sharply defined
time interval: Immediately after a spike, the neuron is nearly insensitive even to
strong input, then its input sensitivity gradually recovers.
Exercises
4.1. Consider an initial-value problem
    dy/dt = −cy for t ≥ 0 ,   y(0) = y0 ,
with c > 0 and y0 given.
(a) Write down a formula for y(t), and show that limt→∞ y(t) = 0.
(b) Denote by yk , k = 1, 2, 3, . . ., the approximations to y(kΔt) obtained
using Euler’s method. Write down an explicit, simple formula for yk .
(c) Prove that

    lim k→∞ yk = 0 if Δt < 2/c,

but

    lim k→∞ |yk | = ∞ if Δt > 2/c.
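Part (c) can be checked numerically; a sketch of my own (with c = 1, so the stability threshold is Δt = 2):

```python
def euler_decay(c, dt, steps=2000, y0=1.0):
    """Iterate Euler's method for dy/dt = -c*y: y_k = (1 - c*dt) * y_{k-1}."""
    y = y0
    for _ in range(steps):
        y *= 1.0 - c * dt
    return y

print(abs(euler_decay(1.0, 1.9)))  # |1 - c*dt| = 0.9 < 1: decays toward 0
print(abs(euler_decay(1.0, 2.1)))  # |1 - c*dt| = 1.1 > 1: grows without bound
```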
4.4. Consider the initial-value problem

    dy/dt = y² ,   y(0) = 1.
Show that the limit of y(t) as t → 1 from the left is ∞. This is called blow-up
in finite time.
4.5. Suppose that v, m, h, and n solve the Hodgkin-Huxley ODEs, eqs. (3.8)
and (3.9). Define
    A = min{vK , vL + I/g̅L } ,   B = max{vNa , vL + I/g̅L }.
Chapter 5

Three Simple Models of Neurons in Rodent Brains

Hodgkin and Huxley modeled the giant axon of the squid. Since then, many similar
models of neurons in mammalian brains have been proposed. In this chapter, we
list three examples, which will be used throughout the book.
In Chapter 3, the current density I reflected current injected by an experi-
menter. For a neuron in the brain, usually there is no experimenter injecting cur-
rent. Nonetheless, the Hodgkin-Huxley-like models described here include a term
I. It might represent input currents originating from other neurons, or currents not
explicitly modeled. We call I the external drive or the external input.
The models discussed in this chapter are called the RTM, WB, and Erisir models. All three are of the form of the classical Hodgkin-Huxley model,
eqs. (3.8) and (3.9), but with different parameter values, different functions αx and
βx (remember that αx and βx determine the functions x∞ and τx ), and with the
assumption that “τm = 0,” that is, m = m∞ (v). Thus m is not a dependent variable
any more, but a direct function of v. This assumption is justified by the fact that
    τm (v) = 1/(αm (v) + βm (v))
is very small for all v; see exercise 1, and compare also Fig. 3.1.
Sections 5.1–5.3 specify details, including formulas for the αx and βx . The
graphs of the functions x∞ and τx for all three models are given in Fig. 5.1, and the
constants in Table 5.1.
Electronic supplementary material: The online version of this chapter (doi: 10.1007/978-3-319-51171-9_5) contains supplementary material, which is available to authorized users.
Table 5.1.

         C   vNa    vK    vL   g̅Na   g̅K   g̅L
RTM      1    50  -100   -67   100    80   0.1
WB       1    55   -90   -65    35     9   0.1
Erisir   1    60   -90   -70   112   224   0.5
Figure 5.1. The functions x∞ and τx for the RTM neuron (red and solid),
WB neuron (blue, dash-dots), and Erisir neuron (black, dashes). We left out τm
because m = m∞ (v) in all three of these models. [THREE_MODELS_GATING_VARIABLES]
The red, solid curves in Fig. 5.1 show the graphs of x∞ and τx , x = m, h, and n.
Figure 5.2 shows a voltage trace with I = 1.5 μA/cm2 .
Figure 5.2. Voltage trace of the RTM neuron with I = 1.5 μA/cm2 .
[RTM_VOLTAGE_TRACE]
Two different classes of inhibitory basket cells are ubiquitous in the brain, the
parvalbumin-positive (PV+) basket cells, which contain the protein parvalbumin,
and the cholecystokinin-positive (CCK+) basket cells [55], which contain the hor-
mone cholecystokinin. The PV+ basket cells are called fast-firing because they
are capable of sustained high-frequency firing, and are known to play a central
role in the generation of gamma frequency (30–80 Hz) oscillations. It is thought
that gamma rhythms are important for sensory processing, attention, and working
memory. The WB model is patterned after the fast-firing PV+ basket cells.
[Figure 5.3: voltage trace of the WB neuron with I = 0.75 μA/cm².]
For the constants, see Table 5.1. The functions αx and βx are
    αm (v) = 0.1(v + 35)/(1 − exp(−(v + 35)/10)) ,   βm (v) = 4 exp(−(v + 60)/18) ,

    αh (v) = 0.35 exp(−(v + 58)/20) ,   βh (v) = 5/(1 + exp(−0.1(v + 28))) ,

    αn (v) = 0.05(v + 34)/(1 − exp(−0.1(v + 34))) ,   βn (v) = 0.625 exp(−(v + 44)/80) .
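These formulas make it easy to check the claim underlying the “τm = 0” simplification: τm = 1/(αm + βm) stays well under a millisecond. A sketch (my own code, transcribing the WB rate functions above):

```python
import math

def alpha_m(v): return 0.1 * (v + 35.0) / (1.0 - math.exp(-(v + 35.0) / 10.0))
def beta_m(v):  return 4.0 * math.exp(-(v + 60.0) / 18.0)

def tau_m(v):   return 1.0 / (alpha_m(v) + beta_m(v))            # in ms
def m_inf(v):   return alpha_m(v) / (alpha_m(v) + beta_m(v))

# tau_m is a fraction of a millisecond at every sampled voltage
# (v = -35 is avoided, since alpha_m must be evaluated there as a limit):
for v in (-80.0, -60.0, -40.0, 0.0):
    print(v, round(m_inf(v), 3), round(tau_m(v), 3))
```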
The blue, dash-dotted curves in Fig. 5.1 show the graphs of x∞ and τx , x = m, h,
and n. Figure 5.3 shows a voltage trace with I = 0.75 μA/cm2.
The most striking difference between Figs. 5.2 and 5.3 is that the spike af-
terhyperpolarization, i.e., the hyperpolarization following an action potential, is far
less deep in the WB model than in the RTM model. The difference between the
lowest value of v and the firing threshold is about 15 mV in Fig. 5.3. This is in
agreement with experimental results for fast-firing inhibitory interneurons; see ref-
erences in [174]. (See, however, also the voltage traces of cortical interneurons and
pyramidal cells in Figs. 1C and 1E of [148]. There the spike afterhyperpolarization
is significantly more pronounced in the interneurons than in the pyramidal cells.)
The spike afterhyperpolarization is less pronounced for the WB model than for the RTM model because the maximal conductance densities g̅Na and g̅K are smaller. Deeper afterhyperpolarization would be obtained if g̅Na and g̅K were raised (exercise 5), or h and n made slower (exercise 6). In fact, the Wang-Buzsáki model as
stated in [174] included a scaling factor φ in front of the formulas for αh , βh , αn ,
and βn . Wang and Buzsáki chose φ = 5. This choice is built into the equations
as stated above. However, they pointed out that reducing φ, which amounts to
reducing αh , βh , αn , and βn , i.e., to slowing down h and n, makes spike afterhy-
perpolarization more pronounced.
The black, dashed curves in Fig. 5.1 show the graphs of x∞ and τx , x = m, h, and n.
Figure 5.4 shows a voltage trace with I = 7 μA/cm2 .
[Figure 5.4: voltage trace of the Erisir neuron with I = 7 μA/cm².]
Note that g̅Na and g̅K are quite large in the Erisir model, even larger than in the RTM model. As a result, the voltage rises almost to vNa during an action potential, and falls almost to vK immediately following an action potential. The leak conductance density g̅L is large as well.
What is the significance of taking the potassium conductance to be g K n2 , not
g K n4 ? The main answer is that it does not appear to matter very much; compare
Fig. 5.5, where we have used gK n4 instead of gK n2 , with Fig. 5.4. In detail, using
gK n2 instead of g K n4 has the following effects, which one can see when comparing
Figs. 5.4 and 5.5.
1. As n rises to values near 1 during a spike, the potassium conductance responds
more rapidly when the exponent is 2, not 4. Therefore the spike termination
mechanism becomes faster, and the spikes become narrower.
2. As n falls to values near 0 following a spike, the potassium conductance follows
less rapidly when the exponent is 2, not 4. This has the effect that the
hyperpolarization following a spike is deeper.
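Both points can be illustrated with a toy calculation not tied to the Erisir parameters: let n relax exponentially with some time constant τ, and ask when the conductance factor n^k crosses half of its maximum.

```python
import math

tau = 1.0  # arbitrary gating time constant for this illustration

def rise_time_to_half(k):
    # n(t) = 1 - exp(-t/tau), rising from 0; solve n(t)**k = 1/2 for t
    n_half = 0.5 ** (1.0 / k)
    return -tau * math.log(1.0 - n_half)

def decay_time_to_half(k):
    # n(t) = exp(-t/tau), falling from 1; n**k halves with time constant tau/k
    return tau * math.log(2.0) / k

rise2, rise4 = rise_time_to_half(2), rise_time_to_half(4)
fall2, fall4 = decay_time_to_half(2), decay_time_to_half(4)
```

One finds rise2 ≈ 1.23τ but rise4 ≈ 1.84τ: during the upstroke, n2 reaches half-maximum sooner (effect 1, faster spike termination). On the way down, fall2 = 2·fall4: n2 lingers longer than n4 (effect 2, deeper afterhyperpolarization).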
Figure 5.5. Voltage trace of modified Erisir neuron with potassium con-
ductance equal to g K n4 instead of g K n2 , I = 7 μA/cm2 . [ERISIR_VOLTAGE_TRACE_2]
Exercises
5.1. (∗) Using Matlab, plot
τm = 1/(αm + βm )
for the RTM neuron as a function of v ∈ [−100, 50]. You can use the code
that generates the red, solid curves in Fig. 5.1 as a starting point.
5.2. (∗) Using Matlab, plot h + n for a solution of the RTM model equations. You
can use the code generating Fig. 5.2 as a starting point. Convince yourself
that h + n varies approximately between 0.8 and 1.1.
5.3. (∗) In the code used to generate Fig. 5.2, make the modification of setting
h = 1−n instead of allowing h to be governed by its own differential equation.
(This is motivated by the previous exercise.) Plot the analogue of Fig. 5.2
with this modification.
5.4. Exercise 3 shows that one can reproduce behavior that looks like neuronal
firing with just two dependent variables: v and n. Explain why an equation
with just the single dependent variable v, i.e., an equation of the form dv/dt =
F (v), could not model the periodic firing of a neuron.
5.5. (∗) In the WB model, raise g Na to 100 mS/cm2 and g K to 50 mS/cm2. Using as
a starting point the code that generates Fig. 5.3, plot a voltage trace obtained
with the increased values of g Na and g K , and compare with Fig. 5.3.
5.6. (∗) In the WB model, multiply αh , βh , αn , and βn by 0.2. Using as a starting
point the code that generates Fig. 5.3, plot the voltage trace obtained, and
compare with Fig. 5.3.
5.7. (∗) Using as a starting point the codes that generate Figs. 5.2, 5.3, and 5.4,
plot the sodium, potassium, and leak current densities as functions of time
for the RTM, WB, and Erisir models. To see how large these current densities
are, relative to each other, between spikes, use the axis command in Matlab
to show only the range between −5 μA/cm2 and +5 μA/cm2 for the RTM
and WB neurons, and the range between −20 μA/cm2 and +20 μA/cm2 for
the Erisir neuron. In comparison with the leak current, are the sodium and
potassium currents significant even between action potentials?
5.8. (∗) (a) Demonstrate numerically that the sodium current inactivates more
deeply when the potassium current in the Erisir neuron is taken to be g K n4
than when it is taken to be g K n2 . (This is an effect of the broader spikes
— the sodium current has more time to inactivate.) (b) Explain why deeper
inactivation of the sodium current could lead to slower firing. (c) Re-compute
Fig. 5.5 with I = 7.4 μA/cm2 to convince yourself that the resulting voltage
trace is very close to that of Fig. 5.4.
Chapter 6
The Classical
Hodgkin-Huxley PDEs
The model proposed by Hodgkin and Huxley in 1952 is not a set of ODEs, but a
set of PDEs — the dependent variables are not only functions of time, but also of
space. This dependence will be neglected everywhere in this book, except in the
present chapter. You can therefore safely skip this chapter, unless you are curious
what the PDE-version of the Hodgkin-Huxley model looks like, and how it arises.
When there is no piece of silver wire threaded through the axon, that is, when
there is no space clamp, the membrane potential v, as well as the gating variables m,
h, and n, become dependent on the position on the neuronal membrane. It turns out
that this adds one ingredient to the mechanism: diffusion of v along the neuronal
membrane. In this chapter we explain what this means, and why it is true, for the
simplest case, a cylindrical axon.
We consider a cylindrical piece of axon, and denote by “z” the coordinate along
the axis of the cylinder. We still make a simplification: We allow the dependent
variables to depend on z, but not on the angular variable. For symmetry reasons,
this is sensible if the axon is a circular cylinder. We will develop a partial differential
equation (PDE) describing the time evolution of v.
Suppose that Δz > 0 is small, and let us focus on the small piece of axon
between z − Δz/2 and z + Δz/2; see Fig. 6.1. The current entering this piece
through the cell membrane is approximately
Im = 2πaΔz [ g Na m(z, t)3 h(z, t) (vNa − v(z, t)) + g K n(z, t)4 (vK − v(z, t)) + g L (vL − v(z, t)) + I(z, t) ] ,   (6.1)
where a denotes the radius of the cylinder, the constants g Na , g K , and g L are
conductance densities, and I denotes the applied current density. The factor 2πaΔz
is the surface area of the small cylindrical piece. The subscript m in Im stands for
“membrane.”
[Figure 6.1: a cylindrical piece of axon of radius a between z − Δz/2 and z + Δz/2, with membrane current Im and longitudinal currents Il (from the left) and Ir (from the right).]
The voltage difference between locations z − Δz and z gives rise to a current into
the piece of axon between z − Δz/2 and z + Δz/2 that is approximately given by Ohm's law:
Il = (v(z − Δz, t) − v(z, t))/(ri Δz) ,   (6.2)
where ri is the longitudinal resistance of the cell interior per unit length. (By con-
vention, a current carried by charge entering the piece of axon between z − Δz/2
and z + Δz/2 is positive.) The subscript l stands for “left.” Similarly, the voltage
difference between locations z and z + Δz gives rise to a current into the piece of
axon between z − Δz/2 and z + Δz/2 that is approximately equal to
Ir = (v(z + Δz, t) − v(z, t))/(ri Δz) .   (6.3)
The equation governing v(z, t) is
2πaΔz C ∂v/∂t (z, t) = Im + Il + Ir ,   (6.4)
where C denotes capacitance density as before. Using eqs. (6.1), (6.2), and (6.3),
and dividing (6.4) by 2πaΔz, we find:
C ∂v/∂t (z, t) = (1/(2πari )) (v(z + Δz, t) − 2v(z, t) + v(z − Δz, t))/Δz2 +
g Na m(z, t)3 h(z, t) (vNa − v(z, t)) + g K n(z, t)4 (vK − v(z, t)) +
g L (vL − v(z, t)) + I(z, t).   (6.5)
We pass to the limit as Δz → 0, and find, omitting the arguments z and t, and
indicating partial derivatives with subscripts (see exercise 1):
Cvt = (1/(2πari )) vzz + g Na m3 h (vNa − v) + g K n4 (vK − v) + g L (vL − v) + I.   (6.6)
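The limit taken here — the centered second difference in (6.5) converging to vzz — is easy to verify numerically (it is also the subject of exercise 1). A small Python sketch, using v(z) = sin z as a test function, for which vzz = − sin z:

```python
import math

def second_difference(f, z, dz):
    # centered second difference, as in (6.5)
    return (f(z + dz) - 2.0 * f(z) + f(z - dz)) / dz ** 2

z0 = 1.0
exact = -math.sin(z0)          # (sin z)'' = -sin z
err_coarse = abs(second_difference(math.sin, z0, 0.10) - exact)
err_fine = abs(second_difference(math.sin, z0, 0.05) - exact)
```

Halving Δz cuts the error by about a factor of 4, the O(Δz2) accuracy that Taylor's theorem predicts.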
The resistance of the cell interior per unit length, ri , is usually assumed to be
inversely proportional to the cross-sectional area of the axon [88, eq. (8.5)]:
ri = Ri /(πa2 ) ,   (6.7)
where Ri is called the resistivity of the cell interior, independent of a. Using this
relation in (6.6), we obtain the equation as written by Hodgkin and Huxley [76]:
Cvt = (a/(2Ri )) vzz + g Na m3 h (vNa − v) + g K n4 (vK − v) + g L (vL − v) + I.   (6.8)
2Ri
Even though the gating variables now depend on z, (3.9) remains unchanged.
Equation (6.8) is related to diffusion. To explain the connection, imagine a
long thin rod filled with a water-ink mixture. The ink diffuses in the water. Let
ρ = ρ(z, t) denote the ink concentration (amount of ink per unit length) at position
z at time t. It can then be shown that
ρt = ε ρzz ,   (6.9)
for some number ε > 0; see exercise 2. Thus (6.8) can be stated as follows. The
membrane potential obeys the Hodgkin-Huxley equations, but diffuses in space at
the same time. The diffusion coefficient, a/(2Ri ), is a conductance (not a conduc-
tance density); see exercise 3. Hodgkin and Huxley measured a = 0.0238 cm and
Ri = 35.4 Ωcm (Ω stands for ohm, the unit of resistance, 1 Ω = 1 V/A = 1/(1 S)), implying
a/(2Ri ) ≈ 0.34 mS.
To solve eqs. (6.8) and (3.9) numerically, one discretizes the z-axis, i.e., one
computes the functions v, m, h, and n at a finite number of points z only, in
effect returning to (6.5). Time can be discretized, for example, using the midpoint method
described in Chapter 4. We omit the details here.
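As an illustration of this approach, here is a minimal Python sketch (the codes accompanying the book are in Matlab) applying the discretization not to the full Hodgkin-Huxley PDE but to the pure diffusion equation (6.9): the second derivative is replaced by the centered second difference, sealed (no-flux) ends are assumed, and time is advanced with the midpoint method of Chapter 4.

```python
eps = 0.34        # diffusion coefficient; cf. a/(2*Ri) ~ 0.34 mS in the text
N, L = 100, 10.0  # number of grid points, length of the rod
dz = L / N
dt = 0.2 * dz * dz / eps   # small enough for stability of this explicit scheme

def rhs(rho):
    # centered second difference (as in (6.5)) with no-flux (sealed) ends,
    # implemented with reflecting ghost points rho[-1] = rho[0], rho[N] = rho[N-1]
    return [eps * (rho[min(j + 1, N - 1)] - 2.0 * rho[j] + rho[max(j - 1, 0)]) / dz ** 2
            for j in range(N)]

# initial ink concentration: a bump in the middle of the rod
rho = [1.0 if N // 3 <= j < 2 * N // 3 else 0.0 for j in range(N)]
total0 = sum(rho) * dz

for _ in range(2000):
    # midpoint method: Euler half-step, then a full step using the midpoint slope
    mid = [r + 0.5 * dt * d for r, d in zip(rho, rhs(rho))]
    rho = [r + dt * d for r, d in zip(rho, rhs(mid))]

total1 = sum(rho) * dz
spread = max(rho) - min(rho)
```

The no-flux ends conserve the total amount of ink, while the profile flattens — exactly the smoothing role that the term (a/(2Ri))vzz plays in (6.8).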
The model predicts the existence of sharp voltage pulses traveling along the
axon. Qualitatively, the mechanism is as follows. When the voltage v is raised
in one location, it diffuses into neighboring locations because of the diffusion term
(a/(2Ri ))vzz on the right-hand side of (6.8). This triggers the spike-generating
mechanism — sodium channels opening up — in those neighboring locations, while
the spike is ended in the original location by the opening of the potassium channels
and the closing of the sodium channels. Thus the pulse travels. However, this
discussion does not explain why the pulse typically travels uni-directionally. Action
potentials typically originate in the axon near the cell body. Because the cell body is
much larger in diameter than the axon, back-propagation into the cell body is more
difficult than propagation away from it. Once uni-directional pulse propagation
begins, it is easy to understand how it can be maintained: The tissue in the wake
of the pulse is refractory; this is why the diffusion of v, which has no directional
preference, causes the pulse to propagate forward, but not backward.
The modeling presented in this chapter, due to Hodgkin and Huxley, does
not address all questions concerning the spatial propagation of action potentials in
nerve cells. Real nerve cells may have approximately cylindrical pieces, but they
are not overall of cylindrical shape. They are very complicated geometric objects.
A careful discussion of how to handle the complications arising from the geometry of
nerve cells would go very far beyond the scope of this book. However, the principle
discovered by Hodgkin and Huxley is correct even for nerve cells with realistic
geometry: When the membrane potential is high at one location, it raises the
membrane potential in neighboring locations via diffusion. This triggers the spike-
generating mechanism based on sodium and potassium currents in the neighboring
locations. Action potentials are traveling pulses generated in this way.
Often neurons of complicated shape are modeled as composed of cylindrical
and spherical pieces, coupled by gap junctions, with each of the pieces satisfy-
ing a system of Hodgkin-Huxley-like ODEs. Models of this kind are called multi-
compartment models. In this book, however, we will use single-compartment models
only. That is, we will pretend that all neurons are space-clamped. This simplifying
assumption is made frequently in mathematical neuroscience.
Axons are leaky cables immersed in salty water, a subject that was of interest
to people even before the days of Hodgkin and Huxley. On August 15, 1858, a
message was sent from Europe to North America through a transatlantic cable
for the first time in history. The cable connected Valentia Harbor in Ireland with
Trinity Bay in Newfoundland. It held up for only three weeks — but many new and
improved transatlantic cables followed during the second half of the 19th century.
In the early 1850s, the transatlantic cable project motivated the great Scottish
physicist William Thomson, nowadays known as Lord Kelvin, to study the physics
of leaky cables immersed in water. He showed that the voltage would diffuse along
the length of the cable; thus he derived the term proportional to vzz that appears
in the Hodgkin–Huxley PDE.
Exercises
6.1. In deriving (6.8), we used that
lim Δz→0 (v(z + Δz, t) − 2v(z, t) + v(z − Δz, t))/Δz2 = vzz (z, t).
Explain why this is true using l'Hospital's rule or, better, Taylor's theorem.
6.2. Here is a sketch of the derivation of (6.9). Fill in the details by answering
the questions. Consider an interval [a, b] along the z-axis, and assume that
ink enters [a, b] through the right boundary (z = b) at a rate proportional to
ρz (b, t). Denote the constant of proportionality by ε. So the rate at which
ink enters [a, b] through the right boundary is ε ρz (b, t).
(a) Explain why you would expect ε to be positive.
Assume also that ink enters [a, b] through the left boundary (z = a) at rate
−ε ρz (a, t).
(b) Explain what motivates the minus sign.
Chapter 7
Linear Integrate-and-Fire
(LIF) Neurons
Nearly half a century before Hodgkin and Huxley, in 1907, Louis Édouard Lapicque
proposed a mathematical model of nerve cells. Lapicque died in 1952, the year when
the famous series of papers by Hodgkin and Huxley appeared in print. Lapicque’s
model is nowadays known as the integrate-and-fire neuron. We will refer to it as the
LIF neuron. Most authors take the L in “LIF” to stand for “leaky,” for reasons that
will become clear shortly. We take it to stand for “linear,” to distinguish it from the
quadratic integrate-and-fire (QIF) neuron discussed in Chapter 8. The LIF neuron
is useful because of its utter mathematical simplicity. It can lead to insight, but as
we will demonstrate with examples in later chapters, reduced models such as the
LIF neuron are also dangerous — they can lead to incorrect conclusions.
The LIF model can be described as follows. We assume that the ionic conduc-
tances are constant as long as the neuron does not fire, and that a spike is triggered
if the membrane potential rises to a certain threshold voltage. We don’t model the
process of spike generation at all. Of course, in a network of LIF neurons, we would
model the effects of neuronal firing on other neurons, but we won’t discuss this
topic here; see part III. As long as no spike is triggered, the equation governing the
membrane potential v is assumed to be
C dv/dt = gNa (vNa − v) + gK (vK − v) + gL (vL − v) + I,   (7.1)
where gNa , gK , and gL are constant (not to be confused with the quantities gNa , g K ,
and g L of earlier sections). Equation (7.1) can be written in the form
dv/dt = (veq − v)/τm + I/C ,   (7.2)
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 7) contains supplementary material, which is available to authorized users.
where
τm = C/(gNa + gK + gL )
and
veq = (gNa vNa + gK vK + gL vL )/(gNa + gK + gL ).
Here the subscript m in τm stands for “membrane” — τm is called the membrane
time constant.
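For concreteness, here is the computation of τm and veq in Python, with made-up subthreshold conductance values (the numbers below are illustrative only, not taken from any model in this book):

```python
# Illustrative (hypothetical) subthreshold constants -- not from any model in the book
C = 1.0                              # capacitance density, uF/cm^2
g_na, g_k, g_l = 0.02, 0.3, 0.1      # constant conductance densities, mS/cm^2
v_na, v_k, v_l = 50.0, -90.0, -65.0  # reversal potentials, mV

g_total = g_na + g_k + g_l
tau_m = C / g_total                  # membrane time constant, ms
v_eq = (g_na * v_na + g_k * v_k + g_l * v_l) / g_total  # equilibrium potential, mV
```

With these numbers, τm ≈ 2.4 ms and veq ≈ −77 mV. Since veq is a conductance-weighted average of the reversal potentials, it always lies between vK and vNa.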
Equation (7.2) is supplemented by a condition of the form
v(t + 0) = vres if v(t − 0) = vthr , (7.3)
where vthr is called the threshold voltage, and vres < vthr the reset voltage. Here
“v(t − 0)” denotes the left-sided limit of v at t, and “v(t + 0)” denotes the right-sided
limit. The assumption underlying (7.3) is that a very rapid voltage spike occurs
when v reaches vthr , and v then “resets” to a low value.
Equations (7.2) and (7.3), taken together, define the LIF model. One calls the
LIF neuron leaky if τm < ∞ (that is, gNa + gK + gL > 0), and non-leaky if τm = ∞
(that is, gNa + gK + gL = 0).8 More generally, not only for the LIF model, the word
leakiness refers to the density of open ion channels, and the capacitance density C
divided by the sum of all the ion channel conductance densities is the membrane
time constant τm .
is C divided by the sum of all terms multiplying v on the right-hand side of the
equation governing v.
Figure 7.1 shows a solution of (7.2) and (7.3) (blue), with a solution of the
classical Hodgkin-Huxley equations superimposed (black). The non-leaky integrate-
[Figure 7.1: v (mV) vs. t (ms); blue: solution of (7.2) and (7.3), black: solution of the classical Hodgkin-Huxley equations.]
8 As discussed earlier, we take the L in LIF to mean “linear,” not “leaky.”
as a function of time: The membrane time constant is in fact very small, below
1 ms, along the entire limit cycle! The classical Hodgkin-Huxley model has nearly linear
voltage traces below threshold not because the sodium, potassium, and leak currents
are small below threshold, but because their sum happens to be close to constant
between spikes; see Fig. 7.3. Note that when the sum of the ion currents is approxi-
mately constant, dv/dt is approximately constant, and therefore v is approximately
a linear function of t.
Figure 7.2. Membrane time constant, as a function of time, for the solu-
tion of the classical Hodgkin-Huxley neuron shown in Fig. 7.1. [TAU_M_FOR_HH]
Figure 7.3. Sodium current (red), potassium current (green), leak current
(blue), and their sum (black) for the solution of the classical Hodgkin-Huxley neuron
shown in Fig. 7.1. [SUBTHR_FOR_HH]
To reduce the number of parameters in the LIF model, we shift and scale the
voltage:
ṽ = (v − vres )/(vthr − vres ).   (7.4)
Equation (7.2) then becomes (after two lines of straightforward calculation, see
exercise 1)
dṽ/dt = −ṽ/τm + Ĩext ,  with  Ĩext = (veq − vres )/(τm (vthr − vres )) + I/(C(vthr − vres )).   (7.5)
The reset potential is now ṽres = 0, and the threshold voltage is ṽthr = 1. In
summary, we have found that the LIF model can be transformed, simply by shifting
and scaling the membrane potential, into
dv/dt = −v/τm + I,   (7.6)
v(t + 0) = 0 if v(t − 0) = 1.   (7.7)
(We have now dropped the tildes in the notation, and the subscript “ext” in I.)
Notice that the time has not been scaled — τm is the same membrane time constant,
measured in milliseconds, as before. The new I (that is, I˜ext as defined in (7.5)) is
a reciprocal time.
When discussing Hodgkin-Huxley-like models, we usually specify units of phys-
ical quantities. In the context of the normalized LIF model (7.6), (7.7), as well as
similar models discussed in future chapters, it often seems overly pedantic to insist
on specifying times in ms, I in ms−1 , and so on. We will therefore usually drop
units when discussing the normalized LIF model and similar future models, but
keep in mind that t and τm are times (measured in ms), and therefore I has to be
reciprocal time (measured in ms−1 ).
If v(t) satisfies (7.6) and v(0) = 0, then (see exercise 2)
v(t) = (1 − e−t/τm ) τm I.   (7.8)
v(t) eventually reaches the threshold voltage 1 if and only if τm I > 1, i.e., I > 1/τm .
We therefore call 1/τm the threshold drive. Figure 7.4 shows an example of a voltage
trace with I > 1/τm .
[Figure 7.4: voltage trace of the normalized LIF neuron, v vs. t, with I > 1/τm .]
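The period formula of exercise 2(c), T = τm ln(τm I/(τm I − 1)), can be checked against a direct simulation of (7.6) with the reset rule (7.7). A Python sketch (the book's codes are in Matlab), with the illustrative values τm = 10 and I = 0.13:

```python
import math

tau_m, I = 10.0, 0.13   # illustrative values with tau_m * I = 1.3 > 1
dt = 1e-4

v, t = 0.0, 0.0
spike_times = []
while len(spike_times) < 3:
    v += dt * (-v / tau_m + I)   # forward Euler step for (7.6)
    t += dt
    if v >= 1.0:                 # threshold reached: spike and reset, as in (7.7)
        spike_times.append(t)
        v = 0.0

T_sim = spike_times[1] - spike_times[0]
T_exact = tau_m * math.log(tau_m * I / (tau_m * I - 1.0))
```

For these values T ≈ 14.7 ms, and the simulated inter-spike interval agrees with the closed-form expression to within the discretization error.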
to the threshold voltage, 1. As a result, the LIF neuron will be highly sensitive
to noisily fluctuating input: While v is nearly at threshold, any small input fluc-
tuation can cause a spike. In exercise 4, you will be asked to show that this is
true whenever T /τm ≫ 1. When using the LIF model, τm should therefore not
be chosen much smaller than the largest period T we are interested in to avoid
excessive noise-sensitivity. Since the most typical firing periods are on the order of
tens of milliseconds, τm should certainly not be much smaller than 10 ms.
One could argue that this reflects a flaw of the LIF model: Hodgkin-Huxley-
like model neurons often do have very small membrane time constants (see Fig. 7.2
and exercise 5), yet they are not extremely sensitive to noisy input.
In the LIF model, the subthreshold dynamics of a neuron are replaced by
one-dimensional linear dynamics. The LIF neuron is therefore not a good model
of neurons with more complex subthreshold dynamics. For example, subthreshold
voltage traces often have inflection points; see, for instance, Figs. 5.2 and 5.4. A one-
dimensional linear equation describing subthreshold dynamics cannot reproduce this
feature. However, inflection points can be obtained by making the subthreshold
dynamics quadratic; this is the main topic of Chapter 8.
To reproduce subthreshold oscillations, or more generally, non-monotonic sub-
threshold voltage traces, a one-dimensional model of subthreshold dynamics is not
sufficient; compare exercise 5.4. The model of [80] has in common with the LIF neu-
ron that the subthreshold dynamics are linear, and there is a discontinuous reset as
soon as v reaches a threshold, but the subthreshold dynamics are two-dimensional.
The dependent variables are the membrane potential v, and an additional “recov-
ery variable” which we will call u, representing, for instance, voltage-gated currents.
When v reaches the threshold, both v and u may in general be reset; in [80, Fig. 5], in
fact only u is reset. With two-dimensional subthreshold dynamics, non-monotonic
voltage traces — oscillations, overshoots, undershoots, etc. — become possible.
In the Izhikevich neuron [81], the subthreshold dynamics are two-dimensional
as well, with the same dependent variables v and u, but the evolution equation for v
is quadratic in v. By this we mean that the term v 2 appears on the right-hand side of
the evolution equation for v; all other terms are linear in u and v. Characteristics of
many different types of neurons can be reproduced with different model parameters;
see [81, Fig. 2]. For a detailed study of the link between Hodgkin-Huxley-like models
and reduced two-dimensional models of subthreshold dynamics that are quadratic
in v, see [131].
Exercises
7.1. Verify (7.5).
7.2. Assume that v(0) = 0 and v(t) satisfies (7.6). Show:
(a) v(t) = (1 − e−t/τm ) τm I.
(b) v(t) eventually reaches 1 if and only if τm I > 1.
(c) If τm I > 1, then the time it takes for v(t) to reach 1 equals
T = τm ln (τm I/(τm I − 1)) .
7.3. Let T be the period of the LIF neuron, as given in exercise 2(c). Show that
T ∼ 1/I
as I → ∞. (See Section 1.2 for the meaning of “∼”.) Thus the membrane
time constant τm becomes irrelevant for very large I.
7.4. Assume that v(0) = 0 and v(t) satisfies (7.6). Let v(T̃ ) = 0.95 and v(T ) = 1.
Compute a formula for (T − T̃ )/T , the fraction of the period of the LIF neuron
during which v exceeds 0.95. Your formula should reveal that (T − T̃ )/T is
a function of T /τm only. Plot it as a function of T /τm , and discuss what the
plot tells you.
7.5. (∗) Using as a starting point the codes generating Figs. 5.2, 5.3, and 5.4, plot
τm = C/(gNa m3 h + g K n4 + g L )
as a function of time for the RTM, WB, and Erisir models.
Chapter 8
Quadratic
Integrate-and-Fire (QIF)
and Theta Neurons
As noted at the end of the preceding chapter, subthreshold voltage traces of neurons
often have an inflection point; see, for instance, Figs. 5.2 and 5.4. Voltage traces of
the LIF neuron (Fig. 7.4) don’t have this feature. However, we can modify the LIF
neuron to introduce an inflection point, as follows:
dv/dt = −v(1 − v)/τm + I for v < 1,   (8.1)
v(t + 0) = 0 if v(t − 0) = 1. (8.2)
[Figure 8.1: voltage trace of the normalized QIF neuron, v vs. t.]
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 8) contains supplementary material, which is available to authorized users.
with
t± = (τm /√(τm I − 1/4)) (±π/2 − arctan((v0 − 1/2)/√(τm I − 1/4))).   (8.5)
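The blow-up times (8.5) can be checked numerically. Starting at v0 = 1/2, the arctan term vanishes, and the explicit solution says that v reaches a level V at time (τm/√(τm I − 1/4)) arctan((V − 1/2)/√(τm I − 1/4)), which tends to t+ as V → ∞. A Python sketch, with the illustrative values τm = 1/2 and I = 0.505:

```python
import math

tau_m, I = 0.5, 0.505            # illustrative values; tau_m*I - 1/4 = 0.0025 > 0
c = math.sqrt(tau_m * I - 0.25)

def f(v):
    # right-hand side of (8.1)
    return -v * (1.0 - v) / tau_m + I

v, t, dt, V = 0.5, 0.0, 1e-4, 50.0
while v < V:                     # classical RK4 until v reaches the level V
    k1 = f(v)
    k2 = f(v + 0.5 * dt * k1)
    k3 = f(v + 0.5 * dt * k2)
    k4 = f(v + dt * k3)
    v += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    t += dt

t_V = (tau_m / c) * math.atan((V - 0.5) / c)  # exact time to reach V
t_plus = (tau_m / c) * (math.pi / 2.0)        # blow-up time t+ for v0 = 1/2
```

The simulated crossing time agrees with the exact expression, and lies just below t+, which it approaches as V is raised.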
Taking advantage of this observation, we modify the reset condition (8.2) by
moving the threshold voltage to +∞, and the reset voltage to −∞:
v(t + 0) = −∞ if v(t − 0) = ∞. (8.6)
There is no biological reason for doing this, but the modification leads to a more
attractive mathematical model, as will be seen shortly. Figure 8.2 shows a voltage
trace with the reset condition (8.6).
2
1
v
−1
0 50 100 150
t
Figure 8.2. Voltage trace of a QIF neuron with threshold at +∞ and reset
at −∞. The portion of the voltage trace with 0 ≤ v ≤ 1 is bold.
[QIF_INFINITE_THRESHOLD]
We call the points (cos θ− , sin θ− ) and (cos θ+ , sin θ+ ) the fixed points of the
flow on the circle. The fixed point (cos θ− , sin θ− ) is stable: If the moving point
starts out near it, it approaches it. The fixed point (cos θ+ , sin θ+ ) is unstable: If
the moving point starts out near it, it moves away from it. In the left-most panel of
Fig. 8.3, the stable fixed point is indicated by a solid circle, and the unstable one by
an open circle. As I approaches 1/(4τm ) from below, the two fixed points approach
each other. They coalesce when I = 1/(4τm ). The flow on the circle then has only
one fixed point, at (1, 0), and this fixed point is semi-stable: The moving point is
attracted to it on one side, but repelled from it on the other side. We indicate
the semi-stable fixed point by a circle that is half-solid and half-open in the middle
panel of Fig. 8.3.
To illustrate from a slightly different point of view that the theta model de-
scribes “periodic firing,” we show in Fig. 8.4 the quantity 1 − cos θ as a function of
time. (This quantity has no biological interpretation, but it has narrow “spikes,”
reaching its maxima when θ = π modulo 2π.)
The firing period is the time that it takes for v, given by (8.4), to move from
−∞ to +∞. Equation (8.5) implies that this time is
T = πτm /√(τm I − 1/4).   (8.11)
Figure 8.4. The “periodic firing” of a theta neuron, with τm = 1/2 and
I = 0.505. [THETA_FIRING]
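Equation (8.11) is easy to confirm numerically using the Ermentrout-Kopell form of exercise 3: with τm = 1/2 and J = 2I − 1, the equation is dθ/dt = 1 − cos θ + J(1 + cos θ), and (8.11) becomes T = π/√J. A Python sketch with the parameters of Fig. 8.4 (I = 0.505, so J = 0.01):

```python
import math

I = 0.505
J = 2.0 * I - 1.0     # Ermentrout-Kopell parameter; here J = 0.01

def g(theta):
    return 1.0 - math.cos(theta) + J * (1.0 + math.cos(theta))

# integrate from one spike (theta = pi) to the next (theta = 3*pi) with RK4
theta, t, dt = math.pi, 0.0, 1e-4
while theta < 3.0 * math.pi:
    k1 = g(theta)
    k2 = g(theta + 0.5 * dt * k1)
    k3 = g(theta + 0.5 * dt * k2)
    k4 = g(theta + dt * k3)
    theta += dt * (k1 + 2.0 * k2 + 2.0 * k3 + k4) / 6.0
    t += dt

T_exact = math.pi / math.sqrt(J)   # equals pi*tau_m/sqrt(tau_m*I - 1/4) for tau_m = 1/2
```

For these parameters T = 10π ≈ 31.4, consistent with the roughly five spikes visible over 150 time units in Fig. 8.4.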
Exercises
8.1. Prove that the right-hand side of (8.1) is positive for 0 ≤ v ≤ 1 if and only if
I > 1/(4τm ).
8.2. Derive (8.4).
8.3. Show that (8.8) with τm = 1/2 is equivalent to
dθ/dt = 1 − cos θ + J(1 + cos θ),
where J = 2I − 1. This is the form in which Ermentrout and Kopell stated
the theta model in [49].
8.4. (∗) Using as a starting point the code that generates Fig. 8.1, plot the voltage
trace of the normalized QIF neuron with τm = 1/2, adjusting I by trial and
error in such a way that the frequency becomes 20 Hz. Based on your plot,
would you think that the noise-sensitivity of the normalized QIF neuron for
small τm is as great as that of the normalized LIF neuron? (See the discussion
at the end of Chapter 7.)
8.5. This exercise continues the theme of exercise 4. For the LIF neuron, the
period is given by the formula
T = τm ln (τm I/(τm I − 1)) .
κ = (eT /τm − 1)/(T /τm ) ,
κ = 1/2 + (T /τm )2 /(8π 2 ) .
Thus the sensitivity of T to perturbations in I becomes infinite as T /τm → ∞
in both cases, but it does so exponentially for the LIF neuron, and only
quadratically for the theta neuron.
Chapter 9
Spike Frequency
Adaptation
Many neurons, in particular the excitatory pyramidal cells, have spike frequency
adaptation currents. These are hyperpolarizing currents, activated when the mem-
brane potential is high, and de-activated, typically slowly, when the membrane
potential is low. As a consequence, many neurons cannot sustain rapid firing over
a long time; spike frequency adaptation currents act as “brakes,” preventing hyper-
activity. In this chapter, we discuss two kinds of adaptation currents, called M-
currents (Section 9.1) and calcium-dependent afterhyperpolarization (AHP) currents
(Section 9.2). Both kinds are found in many neurons in the brain. They have dif-
ferent properties. In particular, M-currents are active even before the neuron fires,
while calcium-dependent AHP currents are firing-activated. As a result, the two
kinds of currents affect the dynamics of neurons and neuronal networks in different
ways; see, for instance, exercises 17.3 and 17.4.
Adaptation currents often make the inter-spike interval increase (at least
approximately) monotonically; see Fig. 9.1 for an example. However, it is not clear
that the increase in the inter-spike interval must be monotonic. One might think
that a longer inter-spike interval would give the adaptation current more time to
decay substantially, and might therefore be followed by a shorter inter-spike inter-
val, which would give the adaptation less time to decay and therefore be followed
by a longer inter-spike interval again, etc. In Section 9.3, we show that in a simple
model problem, this sort of oscillatory behavior of the inter-spike interval is not
possible.
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 9) contains supplementary material, which is available to authorized users.
IM = g M w(vK − v) (9.1)
with
dw/dt = (w∞ (v) − w)/τw (v),   (9.2)
w∞ (v) = 1/(1 + exp(−(v + 35)/10)),   (9.3)
τw (v) = 400/(3.3 exp((v + 35)/20) + exp(−(v + 35)/20)).   (9.4)
(One should think of IM as a current density, not a current.) Figure 9.2 shows the
graphs of w∞ and τw . We add this model M-current to the right-hand side of the
equation for v in the RTM model. Thus this equation now takes the form
C dv/dt = g Na m∞ (v)3 h(vNa − v) + g K n4 (vK − v) + g L (vL − v) + g M w(vK − v) + I,
where everything other than the term gM w(vK − v) is as in Section 5.1. Figure 9.3
illustrates the resulting spike frequency adaptation, i.e., deceleration of firing.
The model M-current of Crook et al. does not only oppose prolonged rapid
firing; it hyperpolarizes even at rest. To demonstrate this, we repeat the simulation
of Fig. 9.3, but set I to zero, which has the effect that the neuron simply rests. The
result is shown in Fig. 9.4. The M-current is not zero at rest. In fact, in the simula-
tion of Fig. 9.4, the steady state value of w is 0.031, and therefore the steady state
value of the conductance density of the M-current is 0.031gM ≈ 0.0078 mS/cm2 .
Thus at rest, the M-current, as modeled here, gives rise to a weak but significant
tonic (constant) potassium current. This is indeed biologically realistic [114].
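The size of this tonic current can be recovered from (9.3) alone. A Python sketch (the resting potential of roughly −69 mV is read off Fig. 9.4, so the numbers are approximate):

```python
import math

g_M = 0.25   # mS/cm^2, the value used in Fig. 9.3

def w_inf(v):
    # eq. (9.3)
    return 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))

def tau_w(v):
    # eq. (9.4)
    return 400.0 / (3.3 * math.exp((v + 35.0) / 20.0) + math.exp(-(v + 35.0) / 20.0))

v_rest = -69.0               # approximate resting potential, read off Fig. 9.4
w_rest = w_inf(v_rest)       # roughly 0.03: w does not vanish at rest
g_rest = g_M * w_rest        # tonic potassium conductance density, mS/cm^2
```

This gives w_rest ≈ 0.032 and g_rest ≈ 0.008 mS/cm2, matching the order of magnitude quoted above; the small discrepancy with 0.031 comes from reading v_rest off the figure.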
[Figure 9.2: graphs of w∞ (left) and τw in ms (right) as functions of v (mV).]
Figure 9.3. Buildup of the M-current decelerates firing of the RTM neuron.
In this simulation, g M = 0.25 mS/cm2 , and I = 1.5 μA/cm2 . [RTM_M]
Figure 9.4. Same as Fig. 9.3, but with I = 0. The M-current gives rise to
a weak but significant constant potassium conductance at rest. [RTM_M_RESTING]
60 Chapter 9. Spike Frequency Adaptation
with
c∞ (v) = (4/25) · (120 − v)/(1 + e−(v+15)/5 ).   (9.7)
Figure 9.5 shows the graph of c∞ as a function of v. The values of c∞ are very close
to zero unless v is above −40 mV.
[Figure 9.5: graph of c∞ as a function of v (mV).]
Figure 9.6 shows a voltage trace for the RTM neuron with IAHP added. Note
that Fig. 9.6 is quite similar to Fig. 9.3. However, when I is set to zero, the model
with IAHP behaves quite differently from the model with IM ; see Fig. 9.7. Unlike
the M-current, the calcium-dependent AHP current is essentially zero at rest.
[Figure 9.6: voltage trace (top) and [Ca2+ ] (bottom) as functions of t (ms) for the RTM neuron with IAHP added.]
Figure 9.7. Same as Fig. 9.6, but with I = 0. At rest, the model calcium-
dependent AHP current density is very close to zero. (Compare with Fig. 9.4, but
notice that the vertical axes are scaled differently.) [RTM_AHP_RESTING]
happen: The convergence of the inter-spike intervals to their limiting value has to
be monotonic, regardless of model parameter values. The caricature model will
also allow an analysis of the number of spikes it takes to reach a nearly constant
inter-spike interval; see exercise 6.
Our caricature model is given by the differential equations
dv/dt = −v/τm + I − wv  and  dw/dt = −w/τw  for v < 1,   (9.8)
where τw > 0 and ε > 0 are parameters. As for the normalized LIF and QIF
models, we will think of v as dimensionless, t as a time measured in ms, but will
usually omit specifying units. The adaptation variable w is analogous to g M w in
the model described by (9.1)–(9.4). It decays exponentially with time constant τw
between spikes, and jumps by ε when the neuron fires. We assume that τm I > 1.
This implies that there is an infinite sequence of spikes: Following a spike, w decays
exponentially, eventually becoming so small that the term −wv cannot prevent v
from reaching the threshold 1. Figure 9.8 shows v and w as functions of time for
one particular choice of parameters.
[Figure 9.8: v (top) and w (bottom) as functions of t for the caricature model.]
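A direct simulation of (9.8) exhibits the monotonic lengthening of inter-spike intervals analyzed below. A Python sketch (the book's codes are in Matlab), with the parameter values τm = 10, I = 0.13, τw = 40 used in Fig. 9.9 and an illustrative jump size of 0.05:

```python
tau_m, I, tau_w, eps = 10.0, 0.13, 40.0, 0.05  # tau_m * I = 1.3 > 1: firing never stops
dt = 1e-3

v, w, t = 0.0, 0.0, 0.0
spike_times = []
while t < 400.0:
    v += dt * (-v / tau_m + I - w * v)   # forward Euler for (9.8)
    w += dt * (-w / tau_w)
    t += dt
    if v >= 1.0:        # spike: reset v and increment the adaptation variable
        spike_times.append(t)
        v = 0.0
        w += eps

isis = [b - a for a, b in zip(spike_times, spike_times[1:])]
```

The first interval equals the unadapted LIF period τm ln(τm I/(τm I − 1)) ≈ 14.7, and the subsequent inter-spike intervals lengthen monotonically toward a limit.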
dv/dt = −v/τm + I − wk e−(t−tk )/τw v,
v(tk ) = 0.
dv/dt = −v/τm + I − wk e−t/τw v,   (9.10)
v(0) = 0,   (9.11)
and then Tk is the time at which the solution v of this problem reaches 1. For slight
notational simplification, this is how we will say it.
Lemma 9.1. For any τm > 0, I > 1/τm , wk ≥ 0, τw > 0, if v solves (9.10), (9.11),
then dv/dt > 0 for all t > 0.
Proof. From eq. (9.10), dv/dt = I when v = 0, so dv/dt > 0 initially. Suppose
that dv/dt ≤ 0 for some t > 0, and let t0 be the smallest such t. Then dv/dt = 0 at
t = t0 . Differentiating (9.10) once, we find
d²v/dt² = −(1/τm) dv/dt − wk e^{−t/τw} dv/dt + (wk/τw) e^{−t/τw} v.

Since dv/dt = 0 at t = t0, this yields, at t = t0,

d²v/dt² = (wk/τw) e^{−t0/τw} v > 0.
This implies that v has a strict local minimum at time t0 , which is impossible since
dv/dt > 0 for t < t0 . This contradiction proves our assertion.
Lemma 9.2. For any τm > 0, I > 1/τm , τw > 0, the solution v of (9.10), (9.11)
depends on wk continuously, and decreases with increasing wk .
Proof. Suppose 0 ≤ wk < w̃k, and let v and ṽ solve

dv/dt = −v/τm + I − wk e^{−t/τw} v,   v(0) = 0,   (9.12)

and

dṽ/dt = −ṽ/τm + I − w̃k e^{−t/τw} ṽ,   ṽ(0) = 0.   (9.13)
64 Chapter 9. Spike Frequency Adaptation
We will show that v(t) > ṽ(t) for t > 0. To do this, we write out the solutions
to (9.12) and (9.13) explicitly, using, for instance, an integrating factor. The calcu-
lation is exercise 5; the result is
v(t) = I ∫₀^t exp( −(t−s)/τm − wk τw (e^{−s/τw} − e^{−t/τw}) ) ds,   (9.14)

and

ṽ(t) = I ∫₀^t exp( −(t−s)/τm − w̃k τw (e^{−s/τw} − e^{−t/τw}) ) ds.   (9.15)
From these formulas, it is clear that wk < w̃k implies v(t) > ṽ(t) for t > 0, and also
that v(t) depends continuously on wk .
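Formula (9.14) is easy to check numerically against a direct solution of the initial value problem (9.10), (9.11). A Python sketch, using the parameter values of Fig. 9.9:

```python
import numpy as np

tau_m, I, w_k, tau_w = 10.0, 0.13, 0.05, 40.0
t_end, dt = 30.0, 1e-3

# Euler solution of (9.10), (9.11).
v = 0.0
for k in range(int(t_end / dt)):
    t = k * dt
    v += dt * (-v / tau_m + I - w_k * np.exp(-t / tau_w) * v)

# Closed-form solution (9.14), evaluated by trapezoidal quadrature.
s = np.linspace(0.0, t_end, 30001)
integrand = np.exp(-(t_end - s) / tau_m
                   - w_k * tau_w * (np.exp(-s / tau_w) - np.exp(-t_end / tau_w)))
v_formula = I * float(((integrand[:-1] + integrand[1:]) / 2).sum() * (s[1] - s[0]))

print(abs(v - v_formula) < 1e-3)   # the two answers agree
```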
Figure 9.9. Solutions of eqs. (9.12) (solid) and (9.13) (dashes), with τm =
10, I = 0.13, wk = 0.05, w̃k = 0.08, τw = 40, and tk = 0. [V_V_TILDE]
Figure 9.9 illustrates the result that we just proved, and makes the next lemma
self-evident.
In spite of the simplicity of our model, it is not possible to find a useful explicit
expression for φ. However, for any given parameters τm, I, τw, and ε, it is easy
to evaluate this function numerically. Figure 9.10 shows the graph of φ for one
particular choice of parameters; the 45°-line is indicated as a dashed line. The
function φ depicted in Fig. 9.10 has exactly one fixed point z∗, which is indicated
as well. The following lemma shows that qualitatively, the graph of φ looks like
Fig. 9.10 for all parameter choices.
[Figure 9.10: the graph of φ(z), with the 45°-line (dashed) and the fixed point z∗ indicated.]
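Evaluating φ numerically amounts to computing T(z), the time it takes the solution of (9.10), (9.11) with wk = z to reach 1, and then setting φ(z) = z e^{−T(z)/τw} + ε, where ε is the jump of w at a spike. A Python sketch; the parameter values are illustrative, not necessarily those of Fig. 9.10:

```python
import numpy as np

tau_m, I, tau_w, eps = 10.0, 0.13, 40.0, 0.05

def T(z, dt=1e-2):
    """Time for v, solving dv/dt = -v/tau_m + I - z*exp(-t/tau_w)*v with
    v(0) = 0, to reach 1 (Euler integration)."""
    v, t = 0.0, 0.0
    while v < 1.0:
        v += dt * (-v / tau_m + I - z * np.exp(-t / tau_w) * v)
        t += dt
    return t

def phi(z):
    return z * np.exp(-T(z) / tau_w) + eps

# Find the fixed point z* of phi by fixed point iteration; convergence is
# guaranteed because 0 <= phi'(z) < 1 (Lemma 9.4 below).
z = 0.0
for _ in range(100):
    z = phi(z)
print(abs(phi(z) - z) < 1e-3)   # z is (nearly) a fixed point of phi
```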
Lemma 9.4. Let τm > 0, I > 1/τm, τw > 0, and ε > 0. (a) 0 ≤ φ′(z) < 1 for all
z ≥ 0. (b) φ(0) = ε. (c) limz→∞ φ(z) < ∞. (d) There exists exactly one z∗ ≥ 0
with φ(z∗) = z∗.
We know that T′(z) ≥ 0, from Lemma 9.3. Therefore (9.21) immediately im-
plies φ′(z) < 1. To see why φ′(z) ≥ 0, note that this inequality is equivalent, by
eq. (9.21), to

z T′(z) ≤ τw.   (9.22)
For z = 0, there is nothing to show. Let z > 0 and Δz > 0. Then T (z + Δz)
is the time it takes for v to reach 1, if v and w satisfy (9.8) with v(0) = 0 and
w(0) = z + Δz. Therefore T(z + Δz) is bounded by the time it takes for w to decay
from z + Δz to z, i.e., the solution t of

(z + Δz) e^{−t/τw} = z,

which is

t = τw ln((z + Δz)/z),

plus T(z). Therefore

T(z + Δz) ≤ τw ln((z + Δz)/z) + T(z).
Subtract T (z) from both sides and divide by Δz:
(T(z + Δz) − T(z))/Δz ≤ (τw/Δz) ln((z + Δz)/z).
Taking the limit as Δz → 0 on both sides, we find (9.22).
(b) This follows immediately from (9.19).
(c) To show that limz→∞ φ(z) exists, we must show that φ is bounded. Let z ≥ 0.
For the solution of (9.17), (9.18), we have v = 1, w = ze−T (z)/τw , and dv/dt ≥ 0 at
time t = T (z), and therefore
−1/τm + I − z e^{−T(z)/τw} ≥ 0.

This implies

φ(z) = z e^{−T(z)/τw} + ε ≤ I − 1/τm + ε < ∞.
(d) The existence of z∗ follows from φ(0) = ε > 0 and limz→∞ φ(z) < ∞, by the
intermediate value theorem: The graph of φ starts out above the 45°-line, and ends
up (for large z) below it, so it must cross the line somewhere. The uniqueness
follows from part (a): The function φ(z) − z can be zero at only one point, since its
derivative is negative.
Lemma 9.5. For any choice of the parameters τm > 0, I > 1/τm, τw > 0, ε > 0,
assuming v(0) = 0 and w(0) = w0 ≥ 0, the sequence {wk} converges monotonically
to a finite positive limit.
Proof. Consider the plot in Fig. 9.10, and recall that the graph of φ qualitatively
looks like the one in Fig. 9.10 for all parameter choices (Lemma 9.4). If w0 ≤
z∗ , then w1 = φ(w0 ) ≤ φ(z∗ ) because φ is an increasing function, and therefore,
since φ(z∗ ) = z∗ , w1 ≤ z∗ . Repeating this reasoning, we find wk ≤ z∗ for all k.
Furthermore, since the portion of the graph of φ to the left of z = z∗ lies above
the 45°-line, wk+1 = φ(wk) ≥ wk. Thus {wk}k=0,1,2,... is an increasing sequence,
bounded from above by z∗. This implies that {wk} has a limit, which we call w∞:

w∞ = lim_{k→∞} wk.
Proposition 9.6. For any choice of the parameters τm > 0, I > 1/τm, τw >
0, ε > 0, assuming v(0) = 0 and w(0) = w0 ≥ 0, the sequence {Tk} converges
monotonically to a finite positive limit.
Proof. This follows immediately from Lemma 9.5, since Tk = T (wk ) and T is
monotonically increasing.
One unrealistic feature of the caricature model (9.8), (9.9) lies in the fact that
the variable w does not saturate. It could, in principle, become arbitrarily large.
This is, of course, not correct for real adaptation currents: Once all channels are
open, the conductance cannot become larger any more. We might therefore modify
the reset condition (9.9) as follows: when v reaches 1, we reset v to 0 and replace w by

w + ε (wmax − w),   (9.23)

where wmax > 0 is a new parameter, and 0 < ε ≤ 1. In this model, w cannot exceed
wmax , assuming it starts below wmax .
Lemma 9.7. The analogue of Lemma 9.5 holds for the modified model (9.8), (9.23).
Proof. The function T = T (z) is, of course, the same as in the analysis of the
original model (9.8), (9.9). However, the function φ(z) is replaced by
φ̃(z) = z e^{−T(z)/τw} + ε ( wmax − z e^{−T(z)/τw} ) = (1 − ε) z e^{−T(z)/τw} + ε wmax.

Lemma 9.4 then implies (a) 0 ≤ φ̃′(z) < 1 − ε for all z ≥ 0, (b) φ̃(0) = ε wmax,
(c) limz→∞ φ̃(z) < ∞. These properties of φ̃ imply that (d) there exists exactly
one z̃∗ ≥ 0 with φ̃(z̃∗) = z̃∗. The proof of Lemma 9.7 is then the same as that of
Lemma 9.5, with φ replaced by φ̃ and z∗ replaced by z̃∗ .
Proposition 9.8. The analogue of Proposition 9.6 holds for the modified
model (9.8), (9.23).
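The saturating reset can be simulated in the same way as the original model. A Python sketch of the modified model, in which w jumps to w + ε(wmax − w) at each spike; parameter values are illustrative:

```python
import numpy as np

tau_m, I, tau_w = 10.0, 0.13, 40.0
eps, w_max = 0.2, 0.1
dt, t_max = 0.01, 400.0

v, w = 0.0, 0.0
w_trace = []
for k in range(int(t_max / dt)):
    v += dt * (-v / tau_m + I - w * v)   # eq. (9.8)
    w += dt * (-w / tau_w)
    if v >= 1.0:
        v = 0.0
        w += eps * (w_max - w)           # saturating reset: w -> w + eps*(w_max - w)
    w_trace.append(w)

# With the saturating reset, w can never exceed w_max
# (assuming it starts below w_max).
print(max(w_trace) <= w_max)
```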
The caricature model (9.8), (9.9) can also be used to analyze how long it takes
for the period to become steady when there is adaptation; see exercise 6. The main
conclusion from exercise 6 is that it takes longer to reach a steady period when the
neuron is driven more strongly, or adaptation is weaker.
Exercises
9.1. In the caricature model (9.8), (9.9), if the neuron fires at a constant frequency,
the adaptation variable w varies from a maximal value wmax to a minimal
value wmin . Express wmax and wmin in terms of parameters of the model.
9.2. (∗) Starting with the code that generates Fig. 9.3, see how the solution
changes if you (a) double gM , or (b) double τw .
9.3. (∗) In Fig. 9.3, the variable w saturates rapidly: After three spikes, it is quite
close to its maximal value already. As a result, the firing frequency stabilizes
within about three spikes. Are there parameter values for which it takes more
spikes for the frequency to stabilize? Try raising or lowering (a) I, (b) gM.
You will discover that spike frequency adaptation is slower, in the sense that
it takes more spikes for the frequency to stabilize, if the neuron is driven more
strongly, and/or the adaptation is weaker. (Compare also with exercise 6.)
9.4. The model given by eqs. (9.5)–(9.7) differs from that proposed by Ermentrout
in [46] in two regards.
(a) Instead of the factor [Ca2+]in in eq. (9.5), Ermentrout used
[Ca2+]in/(30 + [Ca2+]in). Explain why replacing [Ca2+]in by [Ca2+]in/(30 + [Ca2+]in)
in (9.5) amounts to dividing gAHP by 30 to very good approximation.
(b) Ermentrout used 25 in place of 15 in the exponent in eq. (9.7). Discuss
how replacing 25 by 15 changes the model.
9.5. Derive (9.14). (Don’t just verify it. That would be a pointless computation.
Derive it, to remind yourself of the use of integrating factors.)
9.6. (∗) (†) The purpose of this exercise is to think about how long it takes for
the period to become steady in the presence of adaptation. We analyze this
question for the caricature model (9.8), (9.9), which we simplify dramatically
by letting τw tend to ∞; this simplification is motivated by the fact that
adaptation currents typically recover slowly.
Exercises 69
(a) Starting with the code generating Fig. 9.10, reproduce Fig. 9.10 with τw =
1000 and τw = 2000. You will see that φ appears to have a limit, φ∞ , as
τw → ∞, and that φ∞ appears to be of the form
φ∞(z) = z + ε if z ≤ zc,   and   φ∞(z) = zc + ε if z > zc,
where zc is independent of τw .
(b) Guess a formula expressing zc in terms of parameters of the model. Verify
your guess computationally.
(c) Give a theoretical explanation of the formula for zc that you guessed
and verified in part (b). Your explanation will not likely be mathematically
rigorous, but it should be plausible at least.
(d) For large τw , how many times does the neuron have to fire, starting with
v(0) = 0 and w(0) = 0, before w comes close to reaching its limiting value?
(To answer this question, use the formula for zc that you guessed and verified
in part (b).)
(e) Check your answer to part (d) numerically, using as a starting point the
code generating Fig. 9.8. Is it accurate if τw = 100, or does τw have to be
much larger? (Adaptation currents may well have decay time constants of
100 ms, but decay time constants on the order of 1000 ms are probably not
realistic.)
Part II
Electronic supplementary material: The online version of this chapter (doi: 10.1007/978-3-319-51171-9_10) contains supplementary material, which is available to authorized users.
74 Chapter 10. The Slow-Fast Phase Plane
dn/dt = (n∞(v) − n)/τn(v),   (10.2)

with n∞, τn, m∞, and all parameters defined in Chapter 3. This is a two-dimensional
system of ordinary differential equations.
[Figure panel: h + n as a function of t [ms].]
Figure 10.2. Same as Fig. 4.1, but with the reductions m = m∞ (v) and
h = 0.83 − n. [REDUCED_HH]
Figure 10.3. The n-nullcline (black) and the v-nullcline (red) of the re-
duced Hodgkin-Huxley equations for I = 10 μA/cm2, together with a solution (blue).
[HH_NULLCLINES_PLUS_SOLUTION]
doesn’t tell you everything about the motion. We also use the words trajectory or
orbit for the solution of a system of ordinary differential equations, especially when
we depict a solution as a curve in phase space.
One of the first things to do about a two-dimensional system of first-order
ordinary differential equations is to study the nullclines, the curves along which
the derivatives of the dependent variables are zero. The curve where dn/dt = 0,
the n-nullcline, is very easy. It is given by n = n∞ (v). We plotted this curve in
Chapter 3 already; it is also the black curve in Fig. 10.3. Notice that the n-nullcline
is independent of the value of I. The curve where dv/dt = 0, the v-nullcline, is a
little less easy to compute. It is implicitly defined by
gNa m∞(v)^3 (0.83 − n)(vNa − v) + gK n^4 (vK − v) + gL (vL − v) + I = 0.   (10.5)
This equation cannot easily be solved by hand. However, it is easy to show that for
a fixed v between vK and vNa , it has at most one solution n ∈ [0, 1]; see exercise 2.
If there is a solution, it can be found with great accuracy using, for instance, the
bisection method; see Appendix A. The reader can see the details by reading the
code that generates Fig. 10.3. The red curve in that figure is the v-nullcline.
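A Python sketch of the nullcline computation (the book's own code is MATLAB). The constants and the gating function m∞ below are the classical Hodgkin-Huxley values, used here as stand-ins; the values defined in Chapter 3 may differ slightly.

```python
import numpy as np

# Classical Hodgkin-Huxley constants (Chapter 3's values may differ slightly).
g_na, g_k, g_l = 120.0, 36.0, 0.3          # mS/cm^2
v_na, v_k, v_l = 50.0, -77.0, -54.4        # mV
I = 10.0                                   # muA/cm^2

def m_inf(v):
    alpha = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
    beta = 4.0 * np.exp(-(v + 65.0) / 18.0)
    return alpha / (alpha + beta)

def f(v, n):
    """Left-hand side of (10.5): zero exactly on the v-nullcline."""
    return (g_na * m_inf(v) ** 3 * (0.83 - n) * (v_na - v)
            + g_k * n ** 4 * (v_k - v) + g_l * (v_l - v) + I)

def nullcline_n(v, tol=1e-10):
    """Solve (10.5) for n in [0, 1] by bisection; for fixed v between
    v_k and v_na, f is decreasing in n, so there is at most one root."""
    lo, hi = 0.0, 1.0
    if f(v, lo) < 0.0 or f(v, hi) > 0.0:
        return None            # no solution in [0, 1]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(v, mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

n_at_minus60 = nullcline_n(-60.0)
print(n_at_minus60 is not None and abs(f(-60.0, n_at_minus60)) < 1e-6)
```

Sampling `nullcline_n` over a grid of v values yields the red curve of Fig. 10.3.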
The moving point must pass through the black curve horizontally (since
dn/dt = 0), and through the red curve vertically (since dv/dt = 0). In Fig. 10.3, a
solution is shown in blue. The flow along this trajectory is counter-clockwise. You
see here that the trajectory appears to be attracted to a closed loop, a limit cycle,
corresponding to a periodic solution of eqs. (10.1) and (10.2).
Figure 10.3, like any phase plane plot, only shows where the solution moves,
but not how fast it moves. In fact, the speed around the cycle is very far from
constant. We define
s(t) = √( (dv/dt / (120 mV))^2 + (dn/dt / 0.35)^2 ).
(The scaling is motivated by the fact that on the limit cycle, max v−min v ≈ 120mV,
and max n−min n ≈ 0.35.) We call the motion along the cycle slow when s ≤ M/50,
76 Chapter 10. The Slow-Fast Phase Plane
Figure 10.4. Slow portion of cycle in green, fast portion in blue. (See text
for the precise definitions of “slow” and “fast” used here.) [HH_CYCLE_SPEED]
where M is the maximum of s over the entire cycle, and fast otherwise. Figure 10.4
shows the slow portion of the cycle in green and the fast portion in blue. The
motion is slow during the left-most part of the cycle, when v changes little and
n changes substantially, and fast during the remainder of the cycle, when v rises
and falls. One calls n the slow variable, v the fast variable, and the reduction that
we are considering here is also called the slow-fast phase plane.
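The slow/fast classification can be carried out on any sampled trajectory. A Python sketch, using a synthetic trajectory with nonuniform speed as a stand-in for the reduced Hodgkin-Huxley limit cycle (the scalings 120 mV and 0.35 are the ones from the text):

```python
import numpy as np

# Synthetic trajectory with nonuniform speed, standing in for the reduced
# HH limit cycle: the phase advances slowly at first, then fast.
t = np.linspace(0.0, 100.0, 20001)
theta = 2 * np.pi * (t / 100.0) ** 3
v = -32.5 + 60.0 * np.cos(theta)            # mV, swing of about 120 mV
n = 0.45 + 0.175 * np.sin(theta)            # swing of about 0.35

dv_dt = np.gradient(v, t)
dn_dt = np.gradient(n, t)
s = np.sqrt((dv_dt / 120.0) ** 2 + (dn_dt / 0.35) ** 2)

M = s.max()
slow = s <= M / 50.0                        # "slow" as defined in the text
print(f"fraction of time spent in the slow portion: {slow.mean():.2f}")
```

For the reduced Hodgkin-Huxley model one would use arrays v and n produced by the simulation instead of the synthetic trajectory; the bookkeeping is identical.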
The reduced, two-dimensional Hodgkin-Huxley model described in this chap-
ter is an example of a relaxation oscillator. Oscillator because solutions oscillate
(there is a limit cycle). Relaxation because v creeps up gradually first, while n falls,
“building up tension” as it were, then all of a sudden the tension is “released”
when the trajectory shoots over from the left branch of the v-nullcline to its right
branch.
Figure 10.5. Left: The n-nullcline (black) and the v-nullcline (red) of the
FitzHugh-Nagumo equations, with a = 1.25, τn = 25, and I = −0.5, together with
a solution (blue). Right: v as a function of t for the same equations. [FN]
Exercises
10.1. (∗) Using the program that generated Fig. 10.1 as a starting point, make
similar plots for I = 7, 8, . . . , 14, 15 μA/cm2.
10.2. Show: For any v with vK ≤ v ≤ vNa , eq. (10.5) has at most one solution
n ∈ [0, 1].
10.3. Consider the downward-moving portion of the limit cycle in Fig. 10.3, follow-
ing the crossing of the v-nullcline in the left upper “corner.” Explain why the
limit cycle cannot cross the v-nullcline again before crossing the n-nullcline.
10.4. When you look closely at the portion of the spike of a Hodgkin-Huxley neuron
when the voltage v decreases, you see a slight kink, about two-thirds of the
way towards the minimum: From the maximum to the kink, the voltage
decreases slightly less fast than from the kink to the minimum. Guess what
this kink corresponds to in the phase plane.
10.5. In (10.6), assume that n is constant. (This is motivated by the assumption
that τn ≫ 1.) Analyze the dynamics of v under this assumption.
10.6. Assume a ≥ 1. (a) Show that the FitzHugh-Nagumo system has exactly one
fixed point. (b) Show: If I > a − 2/3, the fixed point is stable. To do this,
you have to remember that a fixed point is stable if and only if the Jacobi
matrix at the fixed point has eigenvalues with negative real parts only; see
[149] or, for a very brief review, Chapter 12.
Thus the FitzHugh-Nagumo system can rest at a stable fixed point if I is
large enough; a similar phenomenon occurs in real neurons [9], and is called
depolarization block.
Chapter 11
Saddle-Node Collisions
Model neurons, as well as real neurons, don’t fire when they receive little or no input
current, but fire periodically when they receive strong input current. The transition
from rest to firing, as the input current is raised, is called a bifurcation. In general, a
bifurcation is a sudden qualitative change in the solutions to a differential equation,
or a system of differential equations, occurring as a parameter, called the bifurcation
parameter in this context, is moved past a threshold value, also called the critical
value. (To bifurcate, in general, means to divide or fork into two branches. This
suggests that in a bifurcation, one thing turns into two. This is indeed the case in
some bifurcations, but not in all.) Because the variation of drive to a neuron is the
primary example we have in mind, we will denote the bifurcation parameter by I,
and its threshold value by Ic , in this chapter.
Since we want to study the transition from (stable) rest to firing in a neuron,
we will focus on bifurcations in which stable fixed points become unstable, or dis-
appear altogether. In this chapter, we study the simplest such bifurcations, namely
collisions between saddle points and stable nodes resulting in the disappearance of
both fixed points. For more about this and other kinds of bifurcations, see, for
instance, [74] or [149].
We begin with the differential equation
dx/dt = x^2 + I,   (11.1)
where I is given and x = x(t) is the unknown function. One can find the solutions
of this equation using separation of variables, but it is easier and more instructive
to understand what the solutions look like by drawing a picture. Figure 11.1 shows
the cases I = −1, I = 0, and I = 1. In each case, we plot x^2 + I as a function of x.
Electronic supplementary material: The online version of this chapter (doi: 10.1007/978-3-319-51171-9_11) contains supplementary material, which is available to authorized users.
80 Chapter 11. Saddle-Node Collisions
Figure 11.1. Two fixed points collide and annihilate each other. The
equation is dx/dt = x2 + I.
Notice the similarity between Figs. 11.1 and 8.3. The transition from rest to
periodic firing in the theta neuron is a saddle-node bifurcation. In general, there is
usually a saddle-node bifurcation in an equation of the form
dx
= f (x, I) (11.2)
dt
if f, as a function of x, has a double zero, i.e., f(xc, Ic) = 0 and fx(xc, Ic) = 0 for
some value, Ic, of the parameter and some point xc. We assume that the fixed point
is otherwise non-degenerate, i.e., that fxx(xc, Ic) ≠ 0 and fI(xc, Ic) ≠ 0. We then
replace the right-hand side in (11.2) by its leading-order Taylor approximation near
(xc , Ic ):
dx/dt = (1/2) fxx(xc, Ic)(x − xc)^2 + fI(xc, Ic)(I − Ic).   (11.3)
We scale time as follows:
t̃ = (fxx(xc, Ic)/2) t.
(If fxx (xc , Ic ) < 0, this also reverses the direction of time.) Then
dx/dt̃ = (dx/dt)(dt/dt̃) = (x − xc)^2 + (2 fI(xc, Ic)/fxx(xc, Ic)) (I − Ic).
If we also define
x̃ = x − xc   and   Ĩ = (2 fI(xc, Ic)/fxx(xc, Ic)) (I − Ic),
then
dx̃/dt̃ = x̃^2 + Ĩ.
Thus up to shifting and/or re-scaling x, t, and I, (11.3) is the same as (11.1). One
therefore calls (11.1) the normal form of a saddle-node bifurcation.
[Figure 11.2: phase plane pictures for I = −1, I = 0, and I = 1.]
manifold is the set of all points (x0 , y0 ) in the plane so that the trajectory with
(x(0), y(0)) = (x0 , y0 ) converges to the saddle point. The set of points on the x-axis
to the right of x = −1 is the unstable manifold of the saddle point. In general, the
unstable manifold is the set of all points (x0 , y0 ) in the plane so that the trajectory
with (x(0), y(0)) = (x0 , y0 ) converges to the saddle point in backward time, that is,
as t → −∞. Intuitively, one should think of the unstable manifold as composed of
trajectories that emanate from the saddle at t = −∞. The stable fixed point at
(−1, 0) attracts all trajectories in its vicinity in a non-oscillatory manner; such a
fixed point is called a stable node.
As I rises, the saddle and the node approach each other, colliding when I = 0.
Typical trajectories for I = 0 are shown in the middle panel of Fig. 11.2. The fixed
point looks like a stable node on the left, and like a saddle on the right. For I > 0,
there are no fixed points; the collision has annihilated both fixed points.
Saddle-node collisions are very common in the sciences. Figure 11.3 shows
phase plane pictures for a system from [66], discussed also in [149, Example 8.1.1],
which models a gene and a protein interacting with each other.
Figure 11.3. Phase plane pictures for the system discussed in [149, Ex-
ample 8.1.1], originally from [66], modeling gene-protein interaction. The variables
x and y are proportional to the concentrations of protein and messenger RNA, re-
spectively. From left to right, the rate of degradation of x (denoted by a in [149,
Example 8.1.1]) rises. [SADDLE_NODE_BIFURCATION]
Figures 11.2 and 11.3 motivate the name saddle-node bifurcation: The two
fixed points colliding and annihilating each other are a saddle and a node. In the
next paragraph, I will argue that this is the only “non-degenerate” possibility when
two fixed points collide and annihilate each other. I will only discuss the case of a
two-dimensional system, and use ideas explained in easily accessible ways in [149],
but not in this book; readers unfamiliar with these ideas should simply skip the
following paragraph.
Suppose that a continuous change in a bifurcation parameter makes two fixed
points collide and annihilate each other. Consider a closed curve Γ surrounding the
site of the collision. The index IΓ of Γ [149, Section 6.8] with respect to the vector
field defined by the right-hand side of the system of ordinary differential equations
is zero after the fixed points have disappeared. Since IΓ depends continuously on
the bifurcation parameter and is an integer, it cannot depend on the bifurcation
parameter at all. So IΓ = 0 even prior to the collision of the two fixed points, thus
Exercises 83
the sum of the indices of the two fixed points is zero, and therefore one of the two
fixed points must be a saddle (index −1), while the other must be a non-saddle
(index 1) [149, Section 6.8]. Because the collision must involve a saddle and a non-
saddle, it must occur on the τ -axis of Figure 5.2.8 of [149], and therefore it must
be a saddle-node collision, if we disregard the “degenerate” possibility of a collision
that occurs at the origin of [149, Fig. 5.2.8].
Suppose now that I > Ic . We assume, as was the case in the examples of
Figs. 11.1 and 11.2, that the fixed points exist for I < Ic , not for I > Ic . In the
reverse case, the obvious sign changes need to be made in the following discussion.
If I ≈ Ic , the velocity is still small near the point where the saddle-node collision
occurred, since the right-hand side changes continuously. To move past this point
therefore takes a long time — as it turns out, on the order of C/√(I − Ic), where C is
a positive constant. One says that the saddle-node collision leaves a “ghost” behind:
There are no fixed points any more when I > Ic , but trajectories still move slowly
past the location where the fixed points disappeared. We will make this precise,
and prove it, for the example shown in Fig. 11.1 only. Let a > 0 be an arbitrary
number. For I > 0, the time needed for x to move from −a to a is
∫_{−a}^{a} (dt/dx) dx = ∫_{−a}^{a} dx/(x^2 + I) = (2 arctan(a/√I))/√I ∼ π/√I
in the limit as I → 0. (See Section 1.2 for the meaning of “∼”.) Thus the time it
takes to move from −a to a is approximately π/√I, regardless of the choice of a.
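The π/√I passage time past the ghost is easy to confirm numerically. A Python sketch:

```python
import numpy as np

def passage_time(I, a=2.0, dt=1e-4):
    """Time for the solution of dx/dt = x**2 + I to move from x = -a
    to x = a, computed with Euler's method."""
    x, t = -a, 0.0
    while x < a:
        x += dt * (x * x + I)
        t += dt
    return t

# As I decreases toward 0, the scaled passage time approaches pi.
for I in [0.01, 0.001]:
    print(I, passage_time(I) * np.sqrt(I))
```

The printed products tend to π as I → 0, reflecting the slow passage past the location where the two fixed points disappeared.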
Exercises
11.1. This exercise is not directly related to the content of this chapter, but illus-
trates the usefulness of thinking of an equation of the form dx/dt = f (x) as
describing a point moving on the x-axis.
Logistic population growth is described by
dx/dt = r x (1 − x/K),
where x is the size of the population, r > 0 is the rate of growth when
the population is small, and K is the maximal population size that can be
supported by the environment, also called the carrying capacity. (Notice
that dx/dt becomes 0 as x approaches K from below.) Without computing
an expression for the solutions of the differential equation, show: (a) If 0 <
x(0) < K, then limt→∞ x(t) = K. (b) The graph of x is concave-up if 0 < x <
K/2, and concave-down if K/2 < x < K.
11.2. For the equation dx/dt = x2 + I, plot the fixed points as a function of
I, indicating stable fixed points by a solid curve, and unstable ones by a
dashed curve. This plot is called the bifurcation diagram for the equation
dx/dt = x2 + I.
dx/dt = −I x + y,   (11.5)
dy/dt = x^2/(1 + x^2) − y.   (11.6)
The three panels of the figure show phase plane pictures for (from left to
right) I = 0.45, I = 0.5, and I = 0.55. The critical value of I is Ic = 0.5.
The two fixed points collide at (x, y) = (1, 0.5).
For I = 0.5+10−p , p = 4, 5, 6, compute the solution with x(0) = 2, y(0) = 0.5,
and plot x as a function of t for 0 ≤ t ≤ 10, 000. (Use the midpoint method
for these computations.) You will find that x is always a decreasing function
of t, and converges to 0 as t → ∞. For each of the three values of I, compute
the time T = T(I) that it takes for x to fall from 1.5 to 0.5. Demonstrate
numerically that T(I)·√(I − Ic) is approximately independent of I.
11.4. Consider the equation
dx/dt = |x| + I.
(a) Explain: As I rises above 0, two fixed points collide and annihilate each
other. (b) Show: For I > 0, I ≈ 0, the time needed to move past the ghost of
the fixed points is ∼ 2 ln(1/I). (c) Why is the time needed to move past the
ghost not ∼ 1/√I, as suggested by the last paragraph of the chapter? (See
Section 1.2 for the meaning of “∼”.)
Chapter 12
Model Neurons of
Bifurcation Type 1
For a model neuron, there is typically a critical value Ic with the property that
for I < Ic , there is a stable equilibrium with a low membrane potential, whereas
periodic firing is the only stable behavior for I > Ic , as long as I is not too high.9
In this section, we discuss a class of model neurons in which a saddle-node
bifurcation occurs as I crosses Ic . We begin with the two-dimensional reduction
of the RTM neuron described in exercise 5.3. For this model, we will examine the
transition from I < Ic to I > Ic numerically. The model is of the form
C dv/dt = gNa (m∞(v))^3 (1 − n)(vNa − v) + gK n^4 (vK − v) + gL (vL − v) + I,   (12.1)
dn/dt = (n∞(v) − n)/τn(v),   (12.2)
Electronic supplementary material: The online version of this chapter (doi: 10.1007/978-3-319-51171-9_12) contains supplementary material, which is available to authorized users.
9 For very high I, there is usually a stable equilibrium again, but at a high membrane potential.
This is referred to as depolarization block, an observed phenomenon in real neurons driven very
hard [9], but we don’t discuss it here; see also exercises 10.6, 12.3, and 12.4.
A point (v∗ , n∗ ) in the (v, n)-plane is a fixed point of this system if and only if
f (v∗ , n∗ ) = 0 and g(v∗ , n∗ ) = 0, i.e., n∗ = n∞ (v∗ ) and
gNa (m∞(v∗))^3 (1 − n∞(v∗))(vNa − v∗) + gK (n∞(v∗))^4 (vK − v∗) +
gL (vL − v∗) + I = 0.   (12.4)
Notice that (12.4) is one equation in one unknown, v∗ . We prove next that this
equation has at least one solution for any value of I, so that there is always at least
one fixed point of (12.1), (12.2).
Let F(v) denote the left-hand side of (12.4), viewed as a function of v∗ = v. We will prove: (a) If v < vK and v < vL + I/gL, then F(v) > 0. (b) If v > vNa
and v > vL + I/gL , then F (v) < 0. Together, (a) and (b) imply that any v∗
with F (v∗ ) = 0 must satisfy inequalities (12.5). Furthermore, since F (v) > 0 for
sufficiently small v and F (v) < 0 for sufficiently large v, F must be equal to zero in
between at least once.
To prove (a), recall that vK < vL < vNa. If v < vK, then the terms
gNa (m∞(v))^3 (1 − n∞(v))(vNa − v) and gK (n∞(v))^4 (vK − v) are positive. If fur-
thermore v < vL + I/gL, i.e., gL(vL − v) + I > 0, then F(v) > 0. The proof of (b)
is analogous.
It is easy to find the solutions of (12.4) numerically, using the bisection method
(see Appendix A), and therefore we can determine all fixed points of (12.1), (12.2);
we won’t explain in detail how we do this, but refer the interested reader to the
Matlab code generating Fig. 12.1.
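The strategy is to scan a grid of v values for sign changes of F and then refine each sign change by bisection. A Python sketch (the book's code is MATLAB); since the Chapter 3 gating functions are not reproduced here, the stand-in F below is a hypothetical cubic with roots chosen to resemble the fixed-point voltages of Fig. 12.1:

```python
import numpy as np

def find_zeros(F, v_lo=-100.0, v_hi=50.0, n_grid=1000, tol=1e-10):
    """Scan a grid for sign changes of F, then refine each by bisection."""
    grid = np.linspace(v_lo, v_hi, n_grid + 1)
    zeros = []
    for a, b in zip(grid[:-1], grid[1:]):
        if F(a) == 0.0:
            zeros.append(a)
        elif F(a) * F(b) < 0.0:
            lo, hi = a, b
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if F(lo) * F(mid) <= 0.0:
                    hi = mid
                else:
                    lo = mid
            zeros.append(0.5 * (lo + hi))
    return zeros

# Hypothetical stand-in for F, the left-hand side of (12.4); the real
# computation would use the m_inf and n_inf functions of Chapter 3.
F = lambda v: -(v + 60.0) * (v + 50.0) * (v + 30.0) / 1000.0
print([round(z, 6) for z in find_zeros(F)])
```

The same `find_zeros` routine works unchanged once F is replaced by the actual left-hand side of (12.4).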
We assume that the reader is familiar with the classification of fixed points of
two-dimensional systems of ODEs [149, Chapters 5 and 6]. Nonetheless we will very
briefly sketch the relevant ideas here. A fixed point (v∗ , n∗ ) is attracting if solutions
that start close enough to (v∗ , n∗ ) remain close to (v∗ , n∗ ) for all later times, and
converge to (v∗ , n∗ ) as t → ∞; it is called a stable node if the approach to the
fixed point is non-oscillatory, and a stable spiral if it involves damped oscillations
that continue indefinitely. For examples of stable nodes, see the stable fixed points
in Figs. 11.2 and 11.3. The fixed point (v∗ , n∗ ) is repelling if it would become
One evaluates J at the fixed point (v∗ , n∗ ), then computes its eigenvalues by finding
zeros of the characteristic polynomial of J, using the quadratic formula. There are
typically two eigenvalues, λ+ and λ− , with the subscripts + and − corresponding
to the sign chosen in the quadratic formula. It is possible for λ+ and λ− to coincide;
then J has only one eigenvalue, but it is of multiplicity 2. Table 12.1 shows how
the eigenvalues are used to classify fixed points.
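The classification can be carried out mechanically. A Python sketch implementing the standard eigenvalue criteria (Table 12.1 itself is a sketch of these criteria, so this is an assumption-free restatement of textbook material, not the book's own code):

```python
import numpy as np

def classify_fixed_point(J, tol=1e-12):
    """Classify a fixed point of a planar system from its Jacobi matrix J."""
    lam = np.linalg.eigvals(J)
    re, im = lam.real, lam.imag
    if np.all(np.abs(im) > tol):                 # complex-conjugate pair
        return "stable spiral" if re[0] < 0 else "unstable spiral"
    if re[0] * re[1] < 0:                        # real eigenvalues, opposite signs
        return "saddle"
    return "stable node" if np.all(re < 0) else "unstable node"

print(classify_fixed_point(np.array([[-2.0, 0.0], [0.0, -3.0]])))   # stable node
print(classify_fixed_point(np.array([[0.0, 1.0], [-1.0, -0.5]])))   # stable spiral
print(classify_fixed_point(np.array([[1.0, 0.0], [0.0, -1.0]])))    # saddle
```

Borderline cases (zero or repeated eigenvalues) are deliberately left out of the sketch; they require the more careful discussion in [149].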
Non-real eigenvalues always come in complex-conjugate pairs. This is simply
a consequence of the fact that the Jacobi matrix J is real. In fact, if J is any real
N × N -matrix, with N ≥ 2, and if λ is a complex number that is an eigenvalue of
J, then the complex conjugate λ̄ is an eigenvalue of J as well; see exercise 5.
Using the eigenvalues, we can determine, for any fixed point of (12.1), (12.2),
whether it is stable or unstable. Figure 12.1 shows the result of computing and
classifying all fixed points for 0 ≤ I ≤ 0.2 μA/cm2 . For 0 ≤ I ≤ Ic , with Ic ≈
0.13 μA/cm2 , a stable node and a saddle exist. They collide when I reaches Ic , and
don’t exist for I > Ic . In addition, there is an unstable node for all I between 0
and 0.2 μA/cm2 . For the third, unstable node, v∗ appears to be independent of I,
judging by the figure; this is in fact almost, but not strictly, true.
[Figure 12.1: the fixed points v∗ [mV] as functions of I [μA/cm2], for 0 ≤ I ≤ 0.2.]
How does a saddle-node bifurcation result in the onset of firing in this example?
To understand, we compute solutions of (12.1), (12.2) for I = 0 (below threshold)
and I = 1 (above threshold). The left panel of Fig. 12.2 shows the limit cycle for
I = 1. The middle panel shows a solution for I = 0 which converges to the stable
fixed point as t → ∞ and to the saddle as t → −∞ — hence the gap. The right
panel shows a close-up of the gap, with another solution connecting the unstable
fixed point and the stable fixed point added in red. These pictures show that even
for I = 0, there is an invariant cycle, i.e., a closed loop in the phase plane which
a trajectory cannot leave. The stable node and the saddle lie on this invariant
cycle. When they collide and annihilate each other, the invariant cycle turns into a
limit cycle, and oscillations (that is, periodic voltage spikes) begin. One calls this
a saddle-node bifurcation on an invariant cycle (SNIC). The simplest example of
a SNIC was depicted in Fig. 8.3 already.
[Figure 12.2: left, the limit cycle for I = 1; middle, a solution for I = 0; right, close-up of the gap, with a connecting solution in red.]
examples as well, although this is not straightforward to prove, nor even entirely
easy to demonstrate convincingly using numerical computations.
There is another class of neuronal models, called models of bifurcation type 3 in
this book, in which the stable fixed point is abolished by a saddle-node bifurcation
as well, but not by a SNIC; see Chapter 16.
Exercises
12.1. Think about an ODE of the form
dv/dt = F(v),

where F is a differentiable function with a continuous derivative. Explain:
v∗ is an attracting fixed point if F(v∗) = 0 and F′(v∗) < 0, and a repelling one
if F(v∗) = 0 and F′(v∗) > 0.
12.2. If we replace n by n∞ (v) in (12.1), we obtain
C dv/dt = gNa m∞(v)^3 (1 − n∞(v))(vNa − v) + gK n∞(v)^4 (vK − v) + gL (vL − v) + I.   (12.6)
(a) Explain: v∗ is a fixed point of (12.6) if and only if (v∗ , n∞ (v∗ )) is a fixed
point of (12.1), (12.2).
(b) (∗) (†) Starting with Fig. 12.1, explain: If v∗ is a stable fixed point
of (12.6), the point (v∗ , n∞ (v∗ )) may still be an unstable fixed point
of (12.1), (12.2).
12.3. (∗) Starting with the program that you wrote for problem 5.3, simulate the
two-dimensional reduced Traub-Miles neuron for I = 1000 μA/cm2 , and for
I = 1500 μA/cm2 . In each case, start with v(0) = −70 mV and n(0) =
n∞ (v(0)), simulate 100 ms, and show the time interval 90 ≤ t ≤ 100 in your
plot. You will find that for I = 1000 μA/cm2, there is periodic spiking, but for
I = 1500 μA/cm2, the trajectory comes to rest at a stable fixed point. This
is the phenomenon of depolarization block mentioned earlier; in this model,
however, it occurs only at quite unrealistically large values of I.
12.4. (∗) Make a plot similar to Fig. 12.1, but for I ≤ 2000 μA/cm2. (Use the code
that generates Fig. 12.1 as a starting point.) Discuss how the plot relates
to exercise 3.
12.5. Suppose that J is a real N × N -matrix, N ≥ 2. Suppose that λ = a + ib is
an eigenvalue of J, with a ∈ R and b ∈ R. Show: λ = a − ib is an eigenvalue
of J as well.
Chapter 13
Hopf Bifurcations
In some neuronal models, the transition from I < Ic to I > Ic involves not a SNIC,
as in Chapter 12, but a Hopf bifurcation. In this chapter, we give a very brief
introduction to Hopf bifurcations.
In a Hopf bifurcation, a stable spiral becomes unstable as a bifurcation pa-
rameter crosses a critical value. As before, we denote the bifurcation parameter by
the letter I, and the critical value by Ic ; in the examples of primary interest to us
in this book, I is an input current into a model neuron. We assume that the spiral
is stable for I < Ic , and unstable for I > Ic . (This is just a sign convention: If
it were the other way around, we could use Ĩ = −I as the bifurcation parameter.)
The Jacobi matrix at the fixed point has a complex-conjugate pair of eigenvalues
with negative real part for I < Ic , and positive real part for I > Ic .
Very loosely speaking, the Hopf bifurcation theorem states that the loss of sta-
bility of a spiral typically involves the creation of oscillations. The precise statement
of the theorem is fairly technical, and we will not give it here; see, for instance, [41].
There are two kinds of Hopf bifurcations, called supercritical and subcritical. We
will present the simple examples used in [149] to illustrate these two different kinds
of Hopf bifurcations.
Example 13.1 We define a system of two ODEs for the two dependent variables
x = x(t), y = y(t), in terms of polar coordinates r and θ (that is, x = r cos θ,
y = r sin θ). Our equations are
dr/dt = Ir − r³,   (13.1)
dθ/dt = 1.   (13.2)
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 13) contains supplementary material, which is available to authorized users.
Equation (13.2) means that the point (x, y) rotates around the origin in the positive
(counter-clockwise) orientation at unit angular speed. To understand eq. (13.1), we
show the graph of f (r) = Ir − r3 , r ≥ 0, for three different values of I in Fig. 13.1.
For I ≤ 0, eq. (13.1) has a stable fixed point at r = 0, and no positive fixed points.
For I > 0, eq. (13.1) has an unstable fixed point at r = 0, and a stable one at
r = √I.
It is easy to translate these statements into statements about (x(t), y(t));
compare Fig. 13.2. For I ≤ 0, (x(t), y(t)) → 0 as t → ∞. Thus the origin is a stable
fixed point that attracts all solutions; one says that it is globally attracting. For
I > 0 the origin is still a fixed point, but repelling. There is a trajectory traveling
along the circle with radius √I centered at the origin.10 It is an attracting limit
cycle because nearby trajectories converge to the circle. In fact, all trajectories
converge to this circle, except for the one that rests at the origin at all times.
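These statements are easy to confirm numerically. The sketch below (my own illustration; step size and integration time are arbitrary choices) integrates eq. (13.1) by forward Euler, ignoring the decoupled angular equation (13.2):

```python
def integrate_eq_13_1(I, r0=0.5, dt=1e-3, T=50.0):
    """Integrate dr/dt = I*r - r**3 by forward Euler; return r at time T."""
    r = r0
    for _ in range(int(T / dt)):
        r += dt * (I * r - r**3)
    return r
```

For I = −1 the returned radius is essentially 0, and for I = 1 it is essentially √1 = 1, in agreement with the fixed point analysis of eq. (13.1).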
[Figure 13.2: phase plane portraits for I < 0 and I > 0.]
Fig. 13.3 shows a (somewhat symbolic) bifurcation diagram. The solid horizontal
line signifies the fixed point at the origin, which is stable for I ≤ 0 and unstable
for I > 0. The solid curves for I > 0 (±√I) signify the radius of the attracting
limit cycle.
10 Strictly speaking, there are infinitely many such solutions: If (x(t), y(t)) is a solution, and
if τ ∈ R, then (x(t + τ ), y(t + τ )) is a solution as well. However, the distinction between (x(t +
τ ), y(t + τ )) and (x(t), y(t)) is not visible in a phase plane picture.
As I passes through Ic = 0 from below, a stable limit cycle of radius R(I) = √I
is created. We say that the limit cycle is created with an infinitesimally small radius,
since limI→0+ R(I) = 0. The oscillations in x and y have amplitude R(I). Thus
the oscillations are created with an infinitesimally small amplitude, but with a fixed
frequency — one oscillation in time 2π.
[Figure 13.3: bifurcation diagram; oscillation amplitude as a function of I.]
Example 13.2 We now change the sign of the cubic term in eq. (13.1):
dr/dt = Ir + r³,   (13.3)
dθ/dt = 1.   (13.4)
Again eq. (13.4) means that the point (x, y) moves around the origin in the positive
(counter-clockwise) orientation at unit angular speed.
[Figure 13.4: graphs of f(r) = Ir + r³ for I = −1, 0, and 1.]
[Figure 13.5: phase plane portraits for I < 0 and I > 0.]
Figure 13.6 shows the bifurcation diagram. The horizontal line signifies the
fixed point at the origin, which is stable for I ≤ 0 and unstable for I > 0. The
dashed curves for I < 0 (±√−I) signify the radius of the repelling limit cycle.
As I approaches Ic = 0 from below, the repelling limit cycle of radius √−I
encroaches upon the fixed point at the origin. When the repelling limit cycle reaches
the origin, the fixed point becomes unstable, and the system is left without any
stable structure (fixed point or limit cycle); trajectories diverge to infinity.
Note that in example 13.1, oscillations arise as the stable spiral at the ori-
gin becomes unstable, whereas no such thing happens in example 13.2. However,
the following example shows that the mechanism in 13.2 can in fact give rise to
oscillations.
Example 13.3 We add to eq. (13.3) a term that keeps the trajectories on the
outside of the repelling limit cycle from diverging to infinity:
dr/dt = Ir + r³ − r⁵,   (13.5)
dθ/dt = 1.   (13.6)
[Figure 13.7: graphs of f(r) = Ir + r³ − r⁵ for I = −0.4, −0.2, 0, and 0.2.]
Figure 13.8. Solutions of (13.5), (13.6) for I < −1/4, −1/4 < I < 0, and
I > 0. [HOPF_SUB_PHASE_PLANE_2]
For −1/4 < I < 0, eq. (13.5) has, besides r = 0, two positive fixed points r0 < R0,
the roots of I + r² − r⁴ = 0. Correspondingly, there is a stable fixed point at the
origin, an attracting limit cycle along the circle with radius R0 centered at the
origin, and an unstable closed orbit (a repelling limit cycle) along the circle with
radius r0 centered at the origin. If (x(0), y(0)) lies outside the circle with
radius r0 , then (x(t), y(t)) converges to the attracting limit cycle; if it lies inside,
then (x(t), y(t)) converges to the origin. We say that there is bistability (the co-
existence of two stable structures) for −1/4 < I < 0, and that the unstable closed
orbit separates the basins of attraction of the stable fixed point and the attracting
limit cycle. For I > 0, the origin is an unstable fixed point, and all trajectories
converge to the circle with radius R0 , of course with the exception of the one that
rests at (0,0) for all time.
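The bistability can be demonstrated numerically. In this sketch (my own illustration; step size and integration time are arbitrary), r0 and R0 are the radii of the repelling and attracting limit cycles, i.e., the two positive roots of I + r² − r⁴ = 0:

```python
import math

def integrate_eq_13_5(I, r_init, dt=1e-3, T=200.0):
    """Integrate dr/dt = I*r + r**3 - r**5 by forward Euler; return r at time T."""
    r = r_init
    for _ in range(int(T / dt)):
        r += dt * (I * r + r**3 - r**5)
    return r

I = -0.2                                        # in the bistable range (-1/4, 0)
R0 = math.sqrt((1 + math.sqrt(1 + 4 * I)) / 2)  # attracting limit cycle radius
r0 = math.sqrt((1 - math.sqrt(1 + 4 * I)) / 2)  # repelling limit cycle radius
```

A trajectory started just inside the circle of radius r0 decays to the origin; one started just outside converges to the attracting limit cycle of radius R0.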
Fig. 13.9 shows the bifurcation diagram. The solid horizontal line signifies the
fixed point at the origin, which is stable for I < 0 and unstable for I ≥ 0. The
solid curves (±R0 ) signify the stable limit cycle, and the dashed curves (±r0 ) the
unstable periodic orbit.
Two bifurcations occur in this example. When I crosses −1/4 from below,
two limit cycles arise, an attracting and a repelling one. This type of bifurcation is
called a saddle-node bifurcation of cycles; one might also call it a blue sky bifurcation
of cycles here, since the two cycles appear “out of the blue” as I is raised. Then, as
I crosses 0 from below, the origin becomes unstable just as it did in example 13.2.
At that point, the only stable structure left in the system is the attracting limit
cycle with radius R0 .
In contrast with example 13.1, here the onset of oscillations occurs at a nonzero
amplitude. In fact, the stable limit cycle always has radius greater than 1/√2
(the value of R0 = r0 when I = −1/4).
Note that in this example, there are two critical values of I, corresponding
to the two bifurcations: The blue sky bifurcation of cycles occurs as I rises above
I∗ = −1/4, and the Hopf bifurcation occurs as I rises above Ic = 0.
Example 13.1 exhibits a supercritical or soft Hopf bifurcation, whereas exam-
ples 13.2 and 13.3 exhibit subcritical or hard Hopf bifurcations. The words “soft”
and “hard” are easy to understand: The onset of oscillations is at infinitesimally
small amplitude in example 13.1, whereas it is at a positive amplitude in exam-
ple 13.3, and trajectories simply diverge to infinity when I > Ic in example 13.2.
To understand the motivation for the words “supercritical” and “subcritical,” look
at Figures 13.3 and 13.6. In Fig. 13.3, the “fork” is on the right, above Ic — hence
supercritical. (The convention that “above Ic ” means “on the side of Ic where the
fixed point is unstable” is crucial here.) In Fig. 13.6, the “fork” is on the left, below
Ic — hence subcritical.
In fact, the classical Hodgkin-Huxley neuron exhibits both types of Hopf bi-
furcations as the external drive is varied: The transition from resting at a hyper-
polarized membrane potential to periodic firing involves a hard Hopf bifurcation,
and the transition from periodic firing to depolarization block, i.e., resting at a
depolarized membrane potential, involves a soft Hopf bifurcation; see Fig. 5.7A of
[88]. Evidence for the onset of firing via a hard Hopf bifurcation will be given in
Chapter 14 for the two-dimensional reduction (10.1), (10.2), and in Fig. 17.1 for
the full four-dimensional classical Hodgkin-Huxley model.
Exercises
13.1. Derive the properties of f in example 13.3.
13.2. Write (a) eqs. (13.1), (13.2) and (b) eqs. (13.3), (13.4) in terms of x and y.
13.3. Let dθ/dt = 1 and (a) dr/dt = Ir − r2 , (b) dr/dt = Ir + r2 . Analyze the
behavior of the solutions.
13.4. (a) Write the equations in the preceding exercise in terms of x and y.
(b) (†) Show that the right-hand side is once but not twice differentiable at (0, 0),
whereas the right-hand side in exercise 2 is infinitely often differentiable.
Chapter 14
Model Neurons of
Bifurcation Type 2
Solutions of this equation are easy to determine numerically. For a given I between
0 and 15, there is exactly one solution. (A Matlab program that verifies this can be found in the online supplementary material.)
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 14) contains supplementary material, which is available to authorized users.
Figure 14.1. The v-coordinate, v∗ , of the fixed point of eqs. (10.1), (10.2),
as a function of I. The fixed point is a stable spiral for I < Ic ≈ 7.4 (red, solid),
and an unstable spiral for I > Ic (green, dashes). [HH_REDUCED_FIXED_POINTS]
Figure 14.2. The eigenvalues of the fixed points in Fig. 14.1 in the
complex plane. The imaginary axis (indicated in blue) is crossed when I = Ic .
[HH_REDUCED_FP_EVS]
attracting when the time direction is reversed. Reversing the time direction in a
system of differential equations,
dy
= F (y), (14.1)
dt
simply means putting a minus sign in front of the right-hand side: y = y(t)
solves (14.1) if and only if ỹ(t) = y(−t) solves
dỹ
= −F (ỹ). (14.2)
dt
We note that in general the computation of unstable periodic orbits is not so easy.
In dimensions greater than 2, an unstable periodic orbit may be repelling in one
direction, but attracting in another. Such a periodic orbit is said to be of saddle
type, and does not become attracting when time is reversed; see exercise 1. In
fact, we study the reduced, two-dimensional Hodgkin-Huxley model here precisely
to avoid this complication.11
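In two dimensions, then, a repelling limit cycle becomes attracting when time is reversed and can be found by forward simulation. A sketch of the idea (my own illustration, reusing the radial equation of example 13.3, where everything reduces to one dimension, rather than the Hodgkin-Huxley reduction):

```python
import math

def integrate_reversed(I, r_init, dt=1e-3, T=200.0):
    """Integrate dr/dt = -(I*r + r**3 - r**5): eq. (13.5) with time reversed."""
    r = r_init
    for _ in range(int(T / dt)):
        r -= dt * (I * r + r**3 - r**5)
    return r

I = -0.2
# The repelling cycle of example 13.3 has radius r0; in reversed time it attracts.
r0 = math.sqrt((1 - math.sqrt(1 + 4 * I)) / 2)
```

Any starting radius strictly between 0 and the attracting cycle's radius R0 converges, in reversed time, to the formerly repelling radius r0.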
Figure 14.3. Two limit cycles for the reduced, two-dimensional Hodgkin-
Huxley model given by eqs. (10.1), (10.2), with I = 5.5: An attracting one (solid,
black), and a repelling one (red, dashes). The right panel is a close-up of the
left. The dot in the right panel indicates the fixed point, which is a stable spiral.
[HH_REDUCED_REPELLING_CYCLE]
For instance, for I = 5.5 < Ic ≈ 7.4, we find an attracting limit cycle, and a
repelling one, shown in Fig. 14.3. The diameter of the repelling limit cycle tends
to zero as I increases to Ic ≈ 7.4, and it grows as I decreases. As I falls below a second
critical value, I∗ ≈ 5.2, both limit cycles disappear. The analogues of Ic and I∗
in example 13.3 are 0 and −1/4, respectively. As in example 13.3, the distance
between the two limit cycles tends to zero as I decreases to I∗; see Fig. 14.4.
We now turn to a two-dimensional reduction of the Erisir model. In the
simulation shown in Fig. 5.4, the sum h + n approximately varies between 0.27 and
0.40, with a mean of about 0.36. Figure 14.5 shows the voltage trace of Fig. 5.4, and
a similar voltage trace computed with the simplification h = 0.36 − n. Figure 14.6
shows the fixed points as a function of I. For I between 0 and the critical value
11 There are general methods for computing unstable periodic orbits in higher dimensions (see,
Figure 14.4. Close-up view of the stable and unstable limit cycles, near
the “knee” (the lowest point of the stable limit cycle), as I decreases to I∗.
[HH_REDUCED_CYCLE_DISTANCE]
Figure 14.5. Voltage traces of the Erisir model as described in Section 5.3,
and of the reduced, two-dimensional model in which h = 0.36 − n. [ERISIR_REDUCED]
Ic ≈ 6.3, there are three fixed points. In order of increasing values of v∗, they are
a stable fixed point, a saddle, and an unstable node. For the unstable node, v∗ is
nearly independent of I. The stable fixed point is a stable node for some values
of I (solid black), and a stable spiral for others (solid red). As I reaches Ic from
below, the stable node and the saddle collide and annihilate each other, and the
unstable node is the only fixed point left.
Exercises
14.1. Think about the system
[Figure 14.6: the v-coordinate, v∗, of the fixed points of the reduced Erisir model as a function of I [μA/cm2].]

dr/dt = r − r³,   dθ/dt = 1,   dz/dt = z,
where r and θ are the polar coordinates in the (x, y)-plane: x = r cos θ,
y = r sin θ. (a) Explain why x(t) = cos t, y(t) = sin t, z(t) = 0 defines a
periodic orbit. (b) Explain why this periodic orbit is unstable. (c) Explain
why the periodic orbit would remain unstable if time were reversed.
14.2. (∗) Think about the FitzHugh-Nagumo equations, (10.6) and (10.7), with
a = 1.25 and τn = 25. (These are the parameters of Fig. 10.5.) (a) Recall
from exercise 10.6 that there is exactly one fixed point (v∗ , n∗ ). Plot v∗ as a
function of I ∈ [−2, 0], using color to indicate the nature of the fixed point
(stable/unstable node/spiral, or saddle). There is a critical value, I = Ic ,
in the interval [−2, 0], with the property that the fixed point is stable for
I < Ic , and unstable for I > Ic . Numerically compute Ic . (b) Give numerical
evidence showing that the transition from I < Ic to I > Ic is a subcritical
Hopf bifurcation.12
12 See, however, also Section 15.1, where parameters a and τn are given that yield a supercritical
Hopf bifurcation.
Chapter 15
Canard Explosions
15.1 A Supercritical Canard Explosion

We consider the FitzHugh-Nagumo equations,
dv/dt = v − v³/3 − n + I,   (10.6)
dn/dt = (av − n)/τn,   (10.7)
with a = 5 and τn = 60.13 The system has exactly one fixed point; see exercise
10.6a. We denote the fixed point by (v∗, n∗). It is easy to compute, using the equation
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 15) contains supplementary material, which is available to authorized users.
13 This example is borrowed from [96, Fig. 3]. When the equations in [96] are translated into
the form (10.6), (10.7) by scaling and shifting v, n, t, and I, the parameters used in [96, Fig. 3]
become a = 100/21 and τn = 175/3. We use a = 100/20 = 5 and τn = 180/3 = 60 instead.
v∗ − v∗³/3 − av∗ + I = 0.
This equation can be solved using bisection. After that, n∗ is obtained from n∗ =
av∗ . The fixed point can be classified using the eigenvalues of the Jacobi matrix.
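A sketch of this computation (my own illustration; the bisection bracket [−10, 10] and the tolerances are arbitrary choices):

```python
import numpy as np

a, tau_n = 5.0, 60.0   # the parameter values of this section

def fixed_point_v(I, lo=-10.0, hi=10.0, tol=1e-12):
    """Solve v - v**3/3 - a*v + I = 0 for v* by bisection."""
    g = lambda v: v - v**3 / 3.0 - a * v + I
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

def classify_fixed_point(I):
    """Classify (v*, n*) via the eigenvalues of the Jacobi matrix of (10.6), (10.7)."""
    v = fixed_point_v(I)
    J = np.array([[1.0 - v**2, -1.0],            # partials of (10.6) w.r.t. v, n
                  [a / tau_n, -1.0 / tau_n]])    # partials of (10.7) w.r.t. v, n
    lam = np.linalg.eigvals(J)
    kind = "spiral" if abs(lam[0].imag) > 1e-12 else "node"
    return v, ("stable " if lam.real.max() < 0.0 else "unstable ") + kind
```

For instance, classify_fixed_point(−4.5) reports a stable spiral and classify_fixed_point(−4.0) an unstable one, consistent with Ic ≈ −4.29.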
Figure 15.1 shows v∗ as a function of I, and indicates the classification of the fixed
points. As I rises above Ic ≈ −4.29, the fixed point turns from a stable spiral into
an unstable one, in a Hopf bifurcation. In the range of values of I for which the
fixed point is unstable, there is a stable limit cycle; we indicate the maximum and
minimum of v along the limit cycle as functions of I.
[Figure 15.1: v∗ as a function of I, with the classification of the fixed point indicated, and the maximum and minimum of v along the stable limit cycle.]

[Figure 15.2: blow-up of Fig. 15.1 near Ic.]
The blow-up of Fig. 15.1 shown in Fig. 15.2 demonstrates that the limit cycle
is born at zero amplitude as I rises above Ic . (This is not visible in Fig. 15.1
because the resolution is not high enough.) Thus the Hopf bifurcation is
supercritical.14 However, the figure also shows something else: As I rises above
another critical value, about −4.26 in this example and thus just barely greater
than Ic , the amplitude of the periodic solution “explodes.” This abrupt transition
from small- to large-amplitude oscillations is called a canard explosion.
The canard explosion is not a bifurcation. The same unstable fixed point and
stable limit cycle exist on both sides of the explosion. Standard results about the
continuous dependence of solutions of ordinary differential equations on the right-
hand side imply that the diameter of the limit cycle depends on I continuously.
However, the expansion of the diameter occurs on an interval of values of I with a
width that tends to 0 exponentially fast as τn → ∞; see, for instance, [99]. We will
not formulate this result precisely here, let alone prove it.
[Figure 15.3: attracting limit cycles of (10.6), (10.7) in the (v, n)-plane, and the corresponding voltage traces, for I = −4.258153, I = −4.256908, and I = −4.250038.]
14 This is the only neuronal model discussed in this book in which there is a supercritical (soft)
Hopf bifurcation at Ic .
Figure 15.3 shows attracting limit cycles for several values of I that are ex-
tremely close to each other. For values just barely above Ic , small-amplitude oscilla-
tions are possible. As soon as the amplitude becomes somewhat larger, however, the
“spike-generating” mechanism of the model is ignited, and the oscillation amplitude
explodes. The limit cycle has a small diameter when I ≈ −4.258, and it is of full
size for I ≈ −4.250.
In this example, one can argue that it is the canard explosion which justifies
the idea that action potentials are “all-or-nothing events,” because it dramatically
reduces the parameter range in which there are lower-amplitude oscillations instead
of full action potentials.
[Figure 15.4: the bold trajectory of Fig. 15.3 in the (v, n)-plane, with artistic additions suggesting a duck.]
Some people think that a trajectory like that shown in bold blue in Fig. 15.3
is reminiscent of a duck; see Fig. 15.4 for my best attempt to persuade you. If this
leaves you as unconvinced as it leaves me, you can also take “canard” to refer to
the Merriam-Webster definition of “canard” [117] as “a false report or story” [176],
perhaps because the canard explosion looks very much like a discontinuity, but isn’t
one.
Canards are closely related to mixed-mode oscillations, i.e., alternation be-
tween low-amplitude, subthreshold oscillations, and action potentials; see, for in-
stance, [134]. In fact, it is easy to turn the model discussed in this section into one
that generates mixed-mode oscillations. The simplest way of doing this is to add
to the external drive I a time-dependent “adaptation current” Iadapt , governed by
equations of the form

dIadapt/dt = −Iadapt/τadapt between action potentials, Iadapt → Iadapt − δ at each action potential,

with δ > 0 and τadapt > 0. Thus Iadapt becomes more negative abruptly, by the
amount, δ, with each action potential, and returns to zero exponentially with decay
time constant τadapt otherwise. An example is shown in Fig. 15.5.
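A sketch of this construction (my own illustration: the drive I = −4, δ = 0.3, τadapt = 100, the forward Euler step, and the use of an upward crossing of v = 1 as the spike marker are all illustrative guesses, not the parameter values behind Fig. 15.5, and may need tuning before mixed-mode oscillations appear):

```python
# Eqs. (10.6), (10.7) with an adaptation current added to the drive.
a, tau_n = 5.0, 60.0
I, delta, tau_adapt = -4.0, 0.3, 100.0   # illustrative guesses
theta = 1.0                              # spike marker: v crosses 1 from below
dt, T = 0.01, 2000.0

v, n, I_adapt = 0.0, 0.0, 0.0
spike_times = []
for k in range(int(T / dt)):
    v_new = v + dt * (v - v**3 / 3.0 - n + I + I_adapt)
    n += dt * (a * v - n) / tau_n
    I_adapt -= dt * I_adapt / tau_adapt   # exponential return to zero
    if v < theta <= v_new:                # an action potential occurred
        spike_times.append(k * dt)
        I_adapt -= delta                  # abrupt downward jump by delta
    v = v_new
```

With each spike the effective drive I + Iadapt drops, temporarily pulling the model below the oscillatory range; as Iadapt decays back toward zero, firing resumes.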
[Figure 15.5: an example of mixed-mode oscillations, shown for 0 ≤ t ≤ 1000 ms.]

15.2 A Subcritical Canard Explosion
In the two-dimensional reduced Hodgkin-Huxley model of Chapter 14, it is now the unstable limit cycle which grows as I decreases, then suddenly “explodes,”
colliding with and destroying the stable limit cycle (Fig. 15.6). Like supercritical
canards, subcritical ones can give rise to mixed-mode oscillations; see [132], and
also exercise 6.
[Plot: v as a function of I, with I∗ ≈ 5.25 and Ic ≈ 7.4 marked on the I-axis.]
Figure 15.6. The bifurcation diagram for eqs. (10.1), (10.2). Stable
spiral (red, solid), unstable spiral (green, dashed), stable limit cycle (black, solid),
and unstable limit cycle (black, dashed). The nearly instantaneous rise in amplitude
of the unstable limit cycle happens on such a narrow range of values of I that it is
difficult to resolve accurately; we simply drew it into the figure as
a vertical dashed line. [HH_REDUCED_BIF_DIAG]
Exercises
15.1. (a) How do the nullclines in Fig. 15.3 depend on τn ? (b) Sketch the nullclines
and the solutions of (10.6), (10.7) for τn = ∞. (c) The rising portion of the
cubic nullcline is often called the unstable or repelling portion. Explain this
terminology based on the sketch you made in part (b).
15.2. (†) Figure 15.7 again shows a portion of the “duck” trajectory indicated
in bold blue in Fig. 15.3 and, with some artistic additions, in 15.4. The
trajectory closely tracks the repelling portion (see exercise 1) of the v-nullcline
for quite a while, before veering off to the right. You might first think that
Figure 15.7. A portion of the bold blue trajectory from Figs. 15.3
and 15.4, together with the v-nullcline (green) and the n-nullcline (red straight line).
[CANARD_2]
this is impossible: Why does the repelling portion of the cubic not repel
the trajectory for such a long time? Explain why this can only happen if
the trajectory tracks the v-nullcline at a distance that is exactly O(1/τn )
as τn → ∞. (See Section 1.2 for the meaning of “exactly O(1/τn )”.) Your
explanation will not be rigorous, but it should be convincing.
15.3. (∗) For the “duck” trajectory indicated in bold blue in Figs. 15.3 and 15.4, plot
v as a function of t. You will see v rise linearly with t for a significant portion
of the cycle, between action potentials. How is this reflected in Fig. 15.7?
15.4. (∗) What happens if you increase τadapt in Fig. 15.5? What happens if you
increase δ? Guess before you try it.
15.5. (∗) Can you choose parameters in Fig. 15.5 so that bursts of several action
potentials (not just one) alternate with subthreshold oscillations?
15.6. (∗) Can you generate mixed-mode oscillations by adding an adaptation cur-
rent as in Section 15.1 to the reduced Hodgkin-Huxley model of Section 15.2?
Would you choose a drive I just above I∗ , or just above Ic ?
Chapter 16
Model Neurons
of Bifurcation Type 3
Figure 16.1. Fixed points of the INa,p -IK model as a function of I. For
I < Ic ≈ 4.5, there are a stable node (black, solid), a saddle (magenta, dashes),
and an unstable spiral (green, dashes). When I reaches Ic , the saddle and the
stable node collide and annihilate each other, and only the unstable spiral remains.
[INAPIK_FIXED_POINTS]
C dv/dt = ḡNa m∞(v)(vNa − v) + ḡK n (vK − v) + ḡL (vL − v) + I,   (16.1)
dn/dt = (n∞(v) − n)/τn.   (16.2)
In contrast with all Hodgkin-Huxley-like models discussed earlier, no powers of
gating variables appear in (16.1). The activation gate of the sodium current is
assumed to be infinitely fast, i.e., a function of v, reducing the number of dependent
variables to two (v and n). Izhikevich’s parameter choices are [82]
C = 1 μF/cm2, ḡNa = 20 mS/cm2, ḡK = 10 mS/cm2, ḡL = 8 mS/cm2,
Both are increasing functions of v, i.e., both the m-gate and the n-gate are activation
gates.
To examine bifurcations in this model, we compute the fixed points of
eqs. (16.1), (16.2), and classify them by computing the eigenvalues of the Jacobi
matrix. For a given I, the fixed points are obtained by finding all solutions of the
equation

ḡNa m∞(v∗)(vNa − v∗) + ḡK n∞(v∗)(vK − v∗) + ḡL (vL − v∗) + I = 0;

(v∗, n∗) is a fixed point if and only if v∗ solves this equation and n∗ = n∞(v∗).
Figure 16.1 shows the result of this calculation. At I = Ic ≈ 4.5, there is a saddle-
node collision.
[Four phase plane panels, for I = −5, I = −1.4, I = 2, and I = 4.4.]
Figure 16.2. Phase plane picture for the INa,p -IK model showing co-
existence of a stable limit cycle, a stable node (black solid dot), a saddle (magenta
open circle), and an unstable spiral (green open circle). The saddle and the node
collide when I = Ic ≈ 4.5 (right lower panel), and the saddle destroys the limit
cycle by inserting itself into the cycle when I = I∗ ≈ −1.4 (right upper panel).
[INAPIK_PHASE_PLANE]
The left lower panel of Fig. 16.2 shows the (v, n)-plane for I = 2. We see that
a stable fixed point and a stable limit cycle co-exist here. As I increases to Ic ≈ 4.5, the saddle
and the node collide and annihilate each other (right lower panel of the figure). As
I decreases to I∗ ≈ −1.4, the saddle inserts itself into the limit cycle, and this abolishes the
limit cycle (right upper panel), turning it into a trajectory that originates from the
saddle in the unstable direction and returns to the saddle in the stable direction.
Such a trajectory is called homoclinic. The bifurcation in which a saddle inserts
itself into a limit cycle, turning the limit cycle into a homoclinic trajectory, is also
called a homoclinic bifurcation [149].
In summary, in this example the stable fixed point is abolished in a saddle-
node collision off an invariant cycle as I rises above Ic . The limit cycle is abolished
in a homoclinic bifurcation (a saddle-cycle collision) as I falls below I∗ .
but note that this reset condition is in fact without any impact since the right-hand
side of eq. (16.5) does not change when θ is replaced by θ ± 2π. We call the model
defined by eqs. (16.5)–(16.8) the self-exciting theta neuron. For illustration, we show
in Fig. 16.3 a solution for one particular choice of parameters.
Notice that our model is not a system of ordinary differential equations, be-
cause of the discontinuous reset of z. For readers who like differential equations
models (as I do), we will later make the reset of z smooth, turning the model
into a system of ordinary differential equations while keeping its essential behavior
unchanged; see eqs. (16.9) and (16.10).
Just as we think of the single theta neuron as representing a synchronized
population of neurons, we think of z as representing recurrent excitation, i.e., mu-
tual excitation of the neurons in the population. One could therefore argue that
discussion of the self-exciting theta neuron should be deferred to part III of this
book.

[Figure 16.3: 1 − cos θ (top) and z (bottom) as functions of t [ms], for the self-exciting theta neuron.]
Even for I < Ic , periodic firing is possible for the self-exciting theta neuron,
because of the self-excitation term. To understand how the possibility of periodic
firing is abolished, it is instructive to plot examples of solutions of (16.5), (16.7) in
the (θ, z)-plane for various values of I. Figure 16.4 shows solutions for zmax = 0.2,
τz = 20. (Our choice of τz = 20 will be discussed later in this section; see also
exercise 2.) For these parameter values, periodic firing is possible if and only if
I > I∗ , with I∗ = −0.1069 . . .. In each panel of the figure, we indicate in bold the
trajectory that starts at θ = −π and z = zmax , the point in the (θ, z)-plane to which
the self-exciting theta neuron resets after firing. Periodic firing occurs if and only
if this trajectory reaches θ = π. In panel A of the figure, I = −0.15, significantly
below I∗ , and the trajectory starting at (θ, z) = (−π, zmax ) ends in the stable node
(θ− , 0); there is no periodic firing. In panel B, the value of I is very slightly above
I∗ . The trajectory indicated in bold now reaches θ = π, so there is periodic firing.
Before it reaches θ = π, it passes the saddle point (θ+ , 0) at a small distance. The
range of values I > I∗ for which the periodic cycle comes close to the saddle is quite
narrow, but if I were even closer to (but still greater than) I∗ , the trajectory would
come even closer to the saddle. In panel C, I > I∗ , but I < Ic = 0. Periodic firing
is possible, but the stable node and the saddle are still present. Panel D shows
solutions for I > Ic = 0, when there are no fixed points any more.
Figure 16.4. Solutions of eqs. (16.5) and (16.7) with zmax = 0.2 and
τz = 20. For these parameter values, periodic firing is possible if and only if I >
I∗ = −0.1069 . . .. In panel B, I is very slightly above I∗ . The values of I in the three
other panels are specified above those panels. In each panel, the trajectory starting
at θ = −π and z = zmax is indicated as a bold curve. [SETN_PHASE_PLANE]
Figure 16.4 clarifies what happens as I drops below I∗ : The saddle inserts itself
into the periodic cycle, turning it into a homoclinic trajectory when I is precisely
equal to I∗ , and then, for I < I∗ , into a trajectory that converges to the stable node
(panel A). For short, the self-exciting theta neuron is of bifurcation type 3.
We will now discuss our choice τz = 20 for our illustrative figures in this
section. As discussed earlier, the self-exciting theta neuron can be thought of as a
caricature of rhythmic working memory activity driven by recurrent excitation. The
recurrent excitation underlying working memory is known to be NMDA receptor-
mediated [171]. NMDA receptor-mediated recurrent excitation is relatively slow;
a decay time constant on the order of 100 ms or longer is typical (see Chapter 1).
Since we always think of time as measured in ms, even when working with reduced
models such as the theta model, it would have seemed natural to choose τz = 100
in this section. We chose τz = 20 because that makes it easier to obtain a picture
convincingly demonstrating the saddle-cycle collision, such as panel B of Fig. 16.4.
The phase plane pictures for τz = 100 look qualitatively just like those in Fig. 16.4
(see exercise 2), but the range of values of I > I∗ for which the periodic trajectory
comes close to the saddle is extremely narrow for τz = 100, even narrower than for
τz = 20.
Figure 16.5. Solution of eqs. (16.9) and (16.10) with I = −0.05, zmax =
0.2, τz = 20, θ(0) = π/2, and z(0) = 0. [SELF_EXCITING_THETA_SMOOTH]
{(x, y, z) : x2 + y 2 = 1, z ∈ R}.
Exercises
16.1. Explain why for I < 0, the fixed point (θ+ , 0) of eqs. (16.5), (16.7) is a saddle,
and (θ− , 0) is a stable node.
16.2. (∗) Generate a figure similar to Fig. 16.4 with zmax = 0.05, τz = 100.
16.3. (∗) Generate an analogue of Fig. 16.4 for the smooth self-exciting theta neuron
given by (16.9) and (16.10), demonstrating numerically that it is of bifurca-
tion type 3.
Chapter 17
Frequency-Current Curves
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 17) contains supplementary material, which is available to authorized users.
0.01%) over a substantial time period (we often use 1000 ms). When there is peri-
odic spiking, we compute the frequency by first computing the times of the action
potentials. Strictly speaking, of course, action potentials are events of positive du-
ration, so the time of the action potential is not well-defined. We make an arbitrary
convention, which we use throughout this book: We call the time at which v crosses
−20 mV from above the spike time or the time of the action potential. Exercise 1
describes how we compute spike times numerically. In this chapter, we are inter-
ested in firing periods, not in absolute spike times, and therefore the arbitrariness
of this convention has no impact. Our convention is a bit unusual, but it has some
advantages. In particular, it avoids minor artifacts that arise in the phase response
curves of Chapter 25 if the spike times are defined, more conventionally, as the
times at which v crosses 0 from below. When there is an infinite sequence of action
potentials, we denote the spike times by t1 , t2 , t3 , . . .
Given a value of I and a starting position in phase space, we compute until
either a steady state is reached or four spikes occur. If a steady state is reached,
we set f = 0, otherwise f = 1000/(t4 − t3 ). The numerator of 1000 is needed
here because we measure time in ms but frequency in Hz = s⁻¹; see the discussion
preceding eq. (7.10). We use t3 and t4 (not t1 and t2 ) to reduce initial transient
effects.
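The spike-time convention and the frequency formula above translate directly into code. The following sketch (the book's accompanying code is in Matlab; the function names here are ours) locates downward crossings of −20 mV by linear interpolation between samples, as in exercise 1, and computes f = 1000/(t4 − t3):

```python
def spike_times(v, dt, v_thr=-20.0):
    """Times at which v crosses v_thr from above, located by linear
    interpolation between consecutive samples (cf. exercise 17.1)."""
    times = []
    for k in range(1, len(v)):
        if v[k-1] >= v_thr and v[k] < v_thr:
            # line through ((k-1)*dt, v[k-1]) and (k*dt, v[k]) hits v_thr at:
            t_star = (k - 1)*dt + dt*(v[k-1] - v_thr)/(v[k-1] - v[k])
            times.append(t_star)
    return times

def frequency(v, dt):
    """f = 1000/(t4 - t3) in Hz (time in ms); f = 0 if fewer than 4 spikes."""
    t = spike_times(v, dt)
    return 1000.0/(t[3] - t[2]) if len(t) >= 4 else 0.0
```

Applied to a sampled trace that crosses −20 mV from above once every 10 ms, `frequency` returns 100 Hz, as it should.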
For each model neuron, we calculate the f -I curve for IL ≤ I ≤ IR , where
IL is chosen so low that for I = IL , there is a globally attracting fixed point. For
all models considered here, numerical experiments indicate that there is a globally
attracting fixed point when I is low enough (see exercise 2 for the proof of a weaker
statement). We discretize the interval [IL , IR ], performing simulations for I = Ij =
IL + jΔI with ΔI = (IR − IL )/N , where N is a large integer.
To capture the multi-valued nature of the f -I relation when the transition
from rest to firing involves a subcritical Hopf bifurcation, we sweep through the
values I = Ij twice, first upwards, in the order of increasing j, then downwards,
in the order of decreasing j. On the upward sweep, we start the simulation for
I = Ij in the point in phase space in which the simulation for I = Ij−1 ended,
j = 1, 2, . . . , N . Note that it does not matter where in phase space we start the
simulation for I = I0 = IL — the result will always be convergence to the globally
attracting fixed point, so f = 0. On the downward sweep, we start the simulation
for I = Ij in the point in phase space where the simulation for I = Ij+1 ended,
j = N − 1, N − 2, . . . , 0. (For I = IN = IR , only one simulation is carried out,
which we consider to be part of both sweeps.) In the interval of values of I for
which there is bistability, the upward sweep captures the stable fixed point, and the
downward one captures the stable limit cycle.
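The two-sweep protocol can be summarized in a short skeleton. In the sketch below (Python rather than the book's Matlab; `simulate` stands for the numerical integration described above, and the toy model replacing it here is purely illustrative), each simulation is warm-started in the final state of the previous one:

```python
def f_I_two_sweeps(I_vals, simulate, x_rest):
    """Upward then downward sweep, warm-starting each simulation in the
    final state of the previous one (Section 17.1).
    simulate(I, x0) must return (f, x_final)."""
    N = len(I_vals) - 1
    f_up, f_down = [0.0]*(N + 1), [0.0]*(N + 1)
    x = x_rest
    for j in range(N + 1):              # upward sweep: j = 0, 1, ..., N
        f_up[j], x = simulate(I_vals[j], x)
    f_down[N] = f_up[N]                 # I = I_N is shared by both sweeps
    for j in range(N - 1, -1, -1):      # downward sweep: j = N-1, ..., 0
        f_down[j], x = simulate(I_vals[j], x)
    return f_up, f_down

# Toy caricature of a bistable neuron (purely illustrative numbers): rest is
# stable for I < 10, firing is sustainable for I >= 8, so both are possible
# for 8 <= I < 10.
def toy_neuron(I, state):
    firing = I >= 10 or (state == "firing" and I >= 8)
    return (50.0 if firing else 0.0), ("firing" if firing else "rest")
```

For `I_vals = [7, 8, 9, 10, 11]` the upward sweep yields f = [0, 0, 0, 50, 50] and the downward sweep f = [0, 50, 50, 50, 50]: in the bistable interval the upward sweep captures the stable rest state and the downward sweep the firing state, exactly the bookkeeping described above.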
As a first example, we show in Fig. 17.1 the f -I curve of the classical Hodgkin-
Huxley neuron. The dots indicate the results of the upward sweep, and the circles
the results of the downward sweep. Thus in the range of bistability, dots reflect
stable fixed points, and circles reflect stable limit cycles. There are two branches of
the f -I curve, one indicated with dots and the other with circles; we call them the
lower branch and the upper branch of the f -I curve, respectively. Outside the region
of bistability, the lower and upper branches of the f -I curve coincide. The lower
branch is discontinuous at I = Ic ≈ 9.7, where the Hopf bifurcation occurs.
[Figure 17.1: f-I curve of the classical Hodgkin-Huxley neuron; f (Hz) vs. I, with I∗ and Ic marked, for 5 ≤ I ≤ 11.]

17.2 Examples of Continuous, Single-Valued f-I Curves
The distinction between continuous and discontinuous onset was first described, based on experimental
observations, in 1948 by Alan Hodgkin [75], who called neurons with continuous
onset class 1 neurons, and neurons with discontinuous onset class 2 neurons.
[Figure 17.2: f-I curve; f (Hz) vs. I for 0 ≤ I ≤ 0.2.]
Theta Neuron
For the theta neuron, eq. (8.11) implies

f = 1000 √(4τm I − 1) / (2πτm).

Using the notation

Ic = 1/(4τm)

for the threshold drive, we re-write the formula for f as

f = (1000/(π √τm)) √(I − Ic).   (17.5)
For τm = 1/2 (the value used by Ermentrout and Kopell [49]), the f -I curve is
shown in Fig. 17.3.
[Figure 17.3: f-I curve of the theta neuron with τm = 1/2; f (Hz) vs. I for 0.4 ≤ I ≤ 0.55.]
According to eq. (17.5), the f-I relation of the theta neuron is of the form

f = C √(I − Ic) for I ≥ Ic,   (17.6)

with C > 0. The theta neuron is of bifurcation type 1. In fact, for any neuronal
model of bifurcation type 1,

f ∼ √(I − Ic)   (17.7)

as I ↓ Ic. (See Section 1.2 for the meaning of "∼".) The calculation at the end
of Chapter 11 explains why: The period is dominated by the time it takes to move
past the "ghost" of the two fixed points annihilated in the saddle-node collision,
and this time is ∼ 1/√(I − Ic); therefore the frequency is ∼ √(I − Ic).
From (17.7), it follows that the f -I curve of a model neuron of bifurcation
type 1 has a (right-sided) infinite slope at I = Ic . However, it is still much less
steep than the f -I curve of the LIF neuron. To understand in which sense this is
true, think of I as a function of f ≥ 0. Solving (17.6), we obtain

I − Ic = f²/C²,
so I −Ic vanishes to second order at f = 0 for the theta neuron, and, more generally,
for a model neuron of bifurcation type 1. For the LIF neuron, we showed that I −Ic ,
as a function of f , vanishes to infinite order at f = 0.
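Eq. (17.5) can be checked against the numerical procedure of Section 17.1. The sketch below assumes the theta-neuron form dθ/dt = −cos θ/τm + 2I(1 + cos θ), which is consistent with eq. (17.5); spikes are taken to be the crossings of θ = π, and the numerical choices (RK4, Δt = 0.01 ms) are ours:

```python
import math

def theta_rhs(theta, I, tau_m=0.5):
    # assumed theta-neuron form; its fixed points collide at I = Ic = 1/(4*tau_m)
    return -math.cos(theta)/tau_m + 2.0*I*(1.0 + math.cos(theta))

def theta_frequency(I, theta0=0.0, t_max=500.0, dt=0.01):
    """Integrate with RK4; spikes = upward crossings of theta = pi (mod 2*pi),
    located by linear interpolation; f = 1000/(t4 - t3), or 0 if no firing."""
    theta, t, spikes = theta0, 0.0, []
    for _ in range(int(t_max/dt)):
        k1 = theta_rhs(theta, I)
        k2 = theta_rhs(theta + 0.5*dt*k1, I)
        k3 = theta_rhs(theta + 0.5*dt*k2, I)
        k4 = theta_rhs(theta + dt*k3, I)
        theta_new = theta + dt*(k1 + 2*k2 + 2*k3 + k4)/6.0
        old = math.floor((theta - math.pi)/(2*math.pi))
        new = math.floor((theta_new - math.pi)/(2*math.pi))
        if new > old:  # crossed pi + 2*pi*new from below during this step
            target = math.pi + 2*math.pi*new
            spikes.append(t + dt*(target - theta)/(theta_new - theta))
        theta, t = theta_new, t + dt
        if len(spikes) == 4:
            break
    return 1000.0/(spikes[3] - spikes[2]) if len(spikes) == 4 else 0.0
```

For τm = 1/2 and I = 0.55, eq. (17.5) gives f = (1000/(π√0.5))·√0.05 ≈ 100.7 Hz, and the simulation agrees closely; for I = 0.4 < Ic = 0.5, the solution converges to the stable fixed point and f = 0.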
RTM Neuron
For the RTM model neuron, we calculate the f -I curve numerically, as described in
Section 17.1. The upward and downward sweeps yield the same result; for each I,
there is only one possible (stable) f ; the dots and circles coincide — see Fig. 17.4.
The dependence of f on I is continuous. For I > Ic but I ≈ Ic, the form of the
f-I curve is approximately f = C √(I − Ic), where C > 0 is a constant. Figure 17.5
shows computed frequencies for I near Ic, together with the graph of a function of
the form f = C √(I − Ic), where the values of C and Ic are chosen to make the fit
with the computed data good: Ic ≈ 0.11935, C ≈ 54. (We omit a description of
how these parameters were estimated; the interested reader can take a look at the
code that generates Fig. 17.5.)
[Figure 17.4: f-I curve of the RTM neuron; f (Hz) vs. I for 0 ≤ I ≤ 1.]
WB Neuron
The f-I curve of the WB neuron looks closer to linear than that of the RTM neuron,
but is otherwise similar; see Fig. 17.6. A close-up confirms that f is again of the
form C √(I − Ic) for I ≥ Ic, I ≈ Ic; see Fig. 17.7.
Figure 17.5. f-I curve of the RTM neuron, near the onset. The red curve
is of the form f = C √(I − Ic) for I > Ic, f = 0 for I ≤ Ic. [RTM_F_I_CURVE_AT_ONSET]
[Figure 17.6: f-I curve of the WB neuron; f (Hz) vs. I for 0 ≤ I ≤ 1.]
Figure 17.7. f-I curve of the WB neuron, close-up near onset. The red
curve is of the form f = C √(I − Ic) for I ≥ Ic. [WB_F_I_CURVE_AT_ONSET]
Figure 17.8. Same as Fig. 17.1, but with the reduction to two dependent
variables (m = m∞ (v), h = 0.83 − n). [HH_REDUCED_F_I_CURVE]
17.3 Examples of f-I Curves with Discontinuities and an Interval of Bistability

Erisir Neuron
For the Erisir model, the range of values of I for which there is bistability is much
smaller than for the Hodgkin-Huxley model; see Fig. 17.9.
[Figure 17.9: f-I curve of the Erisir neuron; f (Hz) vs. I for 6 ≤ I ≤ 7.5.]
We add to the RTM model the M-current given by eqs. (9.1)–(9.4), with g M =
0.2 mS/cm2 . Surprisingly, the model is now of bifurcation type 2; see exercise 3.
The f-I curve has qualitative features similar to those of the classical Hodgkin-Huxley
and Erisir neurons; see Fig. 17.10. Notice, however, that the interval of bistability
in Fig. 17.10 is very short, and firing onset and offset are only weakly discontinuous.
[Figure 17.10: f-I curve of the RTM neuron with M-current; f (Hz) vs. I for 0.5 ≤ I ≤ 0.51.]
[Figure 17.11: f-I curve of the INa,p-IK model; f (Hz) vs. I for −2 ≤ I ≤ 6.]
When I > I∗ but I ≈ I∗ , the saddle is close to the limit cycle, and it takes
a long time to traverse the cycle, since the motion in phase space is slow near a
fixed point. Thus the firing period is long, i.e., the firing frequency is small. As
the distance between the saddle and the limit cycle approaches zero, the frequency
approaches zero; that is, the upper branch of the f -I curve is continuous at I∗ in
this example.
To better understand the shape of the upper branch of the f-I curve at I = I∗,
we first investigate how fast the distance between the saddle and the limit cycle tends
to zero in this limit. Figure 17.12 shows the distance, d, between the saddle and
the limit cycle as a function of I. The figure shows that d ∼ I − I∗ in the limit as
I ↓ I∗.
[Figure 17.12: the distance d between the saddle and the limit cycle as a function of I, for −1.4 ≤ I ≤ 0.]
To pass a saddle point at a small distance d > 0 requires time ∼ ln(1/d); see
exercise 9. Therefore the period of the INa,p-IK model is ∼ ln(1/(I − I∗)) in the
limit I ↓ I∗. For the frequency f, we conclude

f ∼ 1 / ln(1/(I − I∗)).   (17.8)

(If you are wondering whether a constant is missing in the argument of ln in this
formula, or if you think I made the terrible mistake of taking the logarithm of a
dimensional quantity in eq. (17.8), see exercise 10.) As I ↓ I∗, the right-hand side
of (17.8) tends to zero. The graph of f as a function of I is extremely steep at I∗,
just like the f-I curve of the LIF neuron near the firing onset; see also exercise 11.
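The ln(1/d) passage-time estimate can be illustrated on the linearized dynamics along the unstable direction of the saddle, dx/dt = x (a caricature of the situation treated in exercise 9): starting at distance d, the time to reach distance 1 is exactly ln(1/d).

```python
import math

def passage_time(d, dt=1e-4):
    """Time for dx/dt = x to go from x = d to x = 1 (forward Euler)."""
    x, t = d, 0.0
    while x < 1.0:
        x += dt*x
        t += dt
    return t

# the passage time grows like ln(1/d) as d -> 0: halving d only adds ln 2,
# which is why the f-I curve in (17.8) is so steep near I*
```

For d = 10⁻², 10⁻⁴, 10⁻⁶ the computed times are close to ln(1/d) ≈ 4.6, 9.2, 13.8: shrinking d by four orders of magnitude merely doubles the passage time.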
Figure 17.13. The f -I curve of the smooth self-exciting theta neuron given
by eqs. (16.9) and (16.10), with τz = 100 and zmax = 0.05. [SETN_F_I]
INa,p-IK model. The lower branch is discontinuous at Ic. The upper branch is
continuous at I∗, but gets steep to infinite order as I ↓ I∗, in the sense that we
explained when discussing the f-I curve of the INa,p-IK model.
Exercises
17.1. Suppose that we simulate a neuron using a numerical method, such as the
midpoint method, computing approximations of the membrane potential
v at times kΔt, k = 0, 1, 2, . . . We denote these approximations by vk ,
k = 0, 1, 2, . . . (Unfortunately we must code v0 as v(1) in Matlab, and
consequently vk as v(k+1), because Matlab requires positive indices.) We
will think here about how to compute spike times, which we (somewhat arbi-
trarily but consistently) define to be times at which the membrane potential
crosses −20mV from above. Suppose that vk−1 ≥ −20mV and vk < −20mV.
We approximate the spike time as the time, t∗ , at which the straight line in
the (t, v)-plane through the points ((k − 1)Δt, vk−1 ) and (kΔt, vk ) crosses
the horizontal line v = −20. Find a formula for t∗ .
17.2. (†) Show that the classical Hodgkin-Huxley neuron has a single fixed point
when I is sufficiently low.
17.3. (∗) Demonstrate numerically that the RTM neuron with M-current, modeled
as in Section 9.1 with g M = 0.2 mS/cm2 and vK = −100 mV, undergoes a
Hopf bifurcation as I crosses Ic . You may find it useful to start with, for
instance, the code that generated Fig. 16.1, but note that the model is very
different here, and there are four dependent variables (v, h, n, and w), so
the Jacobi matrices will be 4 × 4.
17.4. (a) (∗) Demonstrate that the RTM neuron with calcium-dependent AHP
current, modeled as in Section 9.2 with g AHP = 0.2 mS/cm2 and vK =
−100 mV, is of bifurcation type 1. You may find it useful to start with, for
instance, the code that generated Fig. 16.1, but note that the model is very
different here, and there are four dependent variables (v, h, n, and Ca2+ in ),
so the Jacobi matrices will be 4 × 4. (b) Explain why (a) is not surprising.
17.5. (a) Prove (17.3). (b) Show that (17.3) implies (17.2).
17.6. Show that all right-sided derivatives of the function I = I(f ), f ≥ 0, defined
by (17.4) are zero.
17.7. The functions in (16.3) are both of the form

x∞(v) = 1/(1 + exp((v1/2 − v)/k)).   (17.9)

Show: (a) x∞ is an increasing function of v. (b) limv→−∞ x∞(v) = 0 and
limv→∞ x∞(v) = 1. (c) x∞(v1/2) = 1/2. (d) k measures how steep x∞ is at
v = v1/2, in the sense that

dx∞/dv (v1/2) = 1/(4k).
17.8. (∗) Izhikevich, in [82], refers to the potassium current in the INa,p -IK model,
in the form described in Section 16.1, as a high-threshold potassium current.
In general, a high-threshold potassium current IK is one that is only ac-
tivated when v is high. Currents of this sort are found in many fast-firing
neurons, and are believed to help neurons fire rapidly by shortening the spike
afterhyperpolarization [45].
Chapter 18
Bistability Resulting from Rebound Firing
We have seen that some model neurons have continuous, single-valued f-I curves.
With the exception of the LIF neuron, all of these model neurons transition from
rest to firing via a SNIC. Other model neurons have discontinuous f -I curves with
an interval of bistability, in which both rest and periodic firing are possible. The
examples we have seen transition from rest to firing either via a subcritical Hopf
bifurcation, or via a saddle-node bifurcation off an invariant cycle. The distinction
between continuous and discontinuous f -I curves closely resembles the distinction
between “class 1” and “class 2” neurons made by Hodgkin in 1948 [75].
The discussion in the preceding chapters is not quite satisfactory, however,
because it does not address the physical difference between the two classes. In this
chapter, we will ask what it is about the classical Hodgkin-Huxley neuron, the Erisir
neuron, and the RTM neuron with an M-current, that causes bistability in a range
of values of I.
We have no rigorous answer to this question. However, in all three cases,
we will give numerical results indicating that bistability is attributable to rebound
firing. By this we mean that the hyperpolarization following an action potential
invokes a depolarizing mechanism that results in another action potential, and thus
in a periodic sequence of action potentials. The hyperpolarization-induced depolar-
izing mechanism responsible for rebound firing is associated with one of the gating
variables for each of the three models, but the mechanism is different for different
models. For the classical Hodgkin-Huxley neuron, it is de-activation of the delayed
rectifier potassium current (reduced n). For the Erisir neuron, it is de-inactivation
of the sodium current (raised h). For the RTM neuron with M-current, it is
de-activation of the M-current (reduced w).
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 18) contains supplementary material, which is available to authorized users.
[Figure 18.1: two possible voltage traces of the classical Hodgkin-Huxley neuron (two panels, v in mV vs. t in ms).]
Figure 18.2. The gating variables in the simulation in the lower panel
of Fig. 18.1. Black solid curves are the computed gating variables. Blue dashed
curves indicate m∞ (v), h∞ (v), and n∞ (v), the “moving targets” tracked by the
gating variables. Red horizontal lines indicate the equilibrium values of the gating
variables, m∗ , h∗ , and n∗ . The magenta bars on the t-axis indicate time intervals
during which h exceeds h∗ , or n drops below n∗ . [HH_BISTABLE_GATES]
very much smaller than 1 even in equilibrium, so inactivation of the sodium current
plays a significant role in maintaining the equilibrium. Once an action potential is
triggered, the hyperpolarization following it causes h to rise above h∗ for a prolonged
time. This is why a rebound spike is generated. In fact, if we replace, in each time
step, h by min(h, h∗ ), the rebound spike is abolished; see Fig. 18.6.
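Numerically, experiments of this kind amount to inserting a projection step into the time-stepping loop. The sketch below uses a stand-in scalar relaxation equation rather than the Hodgkin-Huxley system, and all parameter values are illustrative; it only demonstrates the clamping pattern:

```python
def relax(h, h_inf, tau, dt):
    # one forward Euler step of dh/dt = (h_inf - h)/tau
    return h + dt*(h_inf - h)/tau

def run(h0, h_star, clamp, h_inf=0.9, tau=5.0, dt=0.01, t_max=50.0):
    """Integrate the gating variable; if clamp is True, replace h by
    min(h, h_star) after every step, as in the rebound-firing experiments."""
    h, traj = h0, []
    for _ in range(int(t_max/dt)):
        h = relax(h, h_inf, tau, dt)
        if clamp:
            h = min(h, h_star)
        traj.append(h)
    return traj
```

Without the clamp, h rises past h∗ on its way to h∞; with the clamp it saturates at h∗ — mirroring how suppressing the rise of h above h∗ suppresses the rebound spike in Fig. 18.6.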
Figure 18.3. Same as lower panel in Fig. 18.1, but with n replaced by
max(n, n∗ ) in each time step of the simulation. [HH_BISTABLE_LIMITED_N]
Figure 18.4. Two possible voltage traces of the Erisir neuron with I =
6.9 μA/cm2 . [ERISIR_BISTABLE]
Figure 18.5. Analogous to Fig. 18.2, for the simulation of the lower panel
of Fig. 18.4. In the Erisir model, m = m∞ . This is why there is no solid black
curve in the top panel; it would be identical with the dashed blue curve.
[ERISIR_BISTABLE_GATES]
Figure 18.6. Same as lower panel in Fig. 18.4, but with h replaced by
min(h, h∗ ) in each time step of the simulation. [ERISIR_BISTABLE_LIMITED_H]
18.4 RTM Neuron with an h-Current

Ih = ḡh r (vh − v)   (18.1)

with vh = −32.9 mV and

dr/dt = (r∞(v) − r)/τr(v),   (18.2)

r∞(v) = 1/(1 + exp((v + 84)/10.2)),   τr(v) = 1/(exp(−14.59 − 0.086v) + exp(−1.87 + 0.0701v)).   (18.3)
Fig. 18.10 shows the graphs of these functions. For low v, r∞ (v) is close to 1 and
τr (v) is large; thus the current builds up gradually when the neuron is hyperpolar-
ized. For high v, r∞ is close to 0, and τr (v) is small; thus the current shuts off when
the neuron fires.
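The formulas in (18.3) are straightforward to implement; the sketch below does so directly (in Python rather than the book's Matlab) and can be used to confirm the properties just described:

```python
import math

def r_inf(v):
    # steady state of the h-current gating variable, eq. (18.3)
    return 1.0/(1.0 + math.exp((v + 84.0)/10.2))

def tau_r(v):
    # time constant of r in ms, eq. (18.3)
    return 1.0/(math.exp(-14.59 - 0.086*v) + math.exp(-1.87 + 0.0701*v))
```

r∞(−84) = 1/2; r∞ is close to 1 for strongly hyperpolarized v and close to 0 for depolarized v; τr is largest near v = −80 mV, roughly 1000 ms (compare Fig. 18.16), and only a few ms for depolarized v — hence the slow build-up during hyperpolarization and the rapid shut-off during firing.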
The h-current is also called the sag current. The reason is illustrated by
Fig. 18.11. When the neuron receives a hyperpolarizing input, the membrane po-
tential first drops, but then the rise in the h-current causes v to rise again, giving
Figure 18.7. Two possible voltage traces of the RTM model with an
M-current with gM = 0.2 mS/cm2 , for I = 0.506 μA/cm2 . [RTM_WITH_I_M_BISTABLE]
Figure 18.8. Analogous to Fig. 18.2, for the simulation of the lower panel
of Fig. 18.7. The gating variables shown here are h, n, and w.
[RTM_WITH_I_M_BISTABLE_GATES]
the graph of v a “sagging” appearance. Fig. 18.11 also demonstrates that a neuron
with an h-current can fire a rebound spike following a time of hyperpolarization.
The h-current seems to do precisely what we have identified as the source
of bistability in the classical Hodgkin-Huxley and Erisir neurons, and in the RTM
neuron with M-current: It responds to hyperpolarization with a depolarizing cur-
rent that is not rapidly turned down as v rises again. We might therefore expect
that for the RTM neuron with an h-current, there should be an interval of bista-
bility. We investigate now whether this is indeed the case, by means of numerical
experiments with g h = 1 mS/cm2 .
Figure 18.9. Same as lower panel in Fig. 18.7, but with w replaced by
max(w, w∗ ) in each time step of the simulation. [RTM_WITH_I_M_LIMITED_W]
Figure 18.10. The steady state and time constant of the gating variable
in the model h-current of [157]. [H_CURRENT]
[Figure 18.11: voltage trace of a neuron with an h-current responding to hyperpolarizing input, showing the "sag" and a rebound spike (v in mV vs. t in ms, 0 ≤ t ≤ 300).]
Figure 18.12 shows the f -I curve of the RTM neuron with the model h-current
added, with g h = 1 mS/cm2 . There is indeed bistability for I between two critical
values, I∗ and Ic , but the two critical values are extremely close to each other. The
smallest positive firing frequency is quite low, just slightly greater than 1 Hz.
To understand this example better, we consider specifically the value I =
−3.19 μA/cm2 , for which rest and slow firing are both possible and stable; see
Fig. 18.13. We denote the stable fixed point by (v∗ , h∗ , n∗ , r∗ ). In the upper panel
of the figure, we deliberately started at a small distance from this fixed point, to
show the oscillatory convergence to the fixed point. For the simulation in the lower
[Figure 18.12: f-I curve of the RTM neuron with an h-current, ḡh = 1 mS/cm²; f (Hz) vs. I for −3.2 ≤ I ≤ −3.185.]
[Figure 18.13: two simulations of the RTM neuron with an h-current for I = −3.19 μA/cm²: oscillatory convergence to the stable fixed point (upper panel) and slow periodic firing (lower panel).]
panel of Fig. 18.13, we show in Fig. 18.14 the gating variables h, n, and r. During
the inter-spike intervals, h and n follow h∞ (v) and n∞ (v) tightly, while r follows
r∞ (v) sluggishly. Within each inter-spike interval, there is a time interval during
which r rises very slightly above its equilibrium value r∗ ; the effect is so slight that
it isn’t even clearly visible in the bottom panel of Fig. 18.14, but the time intervals
during which r > r∗ are indicated in magenta. In spite of being so small, this effect
causes the rebound firing: If we replace, in each time step, r by min(r, r∗ ), the
rebound spike is prevented; see Fig. 18.15.
Figure 18.14. Analogous to Fig. 18.2, for the simulation of the lower panel
of Fig. 18.13. [RTM_WITH_I_H_BISTABLE_GATES]
[Figure 18.15: same as the lower panel of Fig. 18.13, but with r replaced by min(r, r∗) in each time step; the rebound firing is prevented.]
Exercises
18.1. Explain: For all the model neurons discussed in this chapter, if between
spikes, each gating variable x satisfied x ≡ x∞(v), there would be no bistability.
18.2. (∗) Verify that in the simulation of the lower panel of Fig. 18.1, replacing h
by min(h, h∗ ) in each time step does not abolish the rebound firing.
18.3. (∗) Verify that in the simulation of the lower panel of Fig. 18.4, replacing n
by max(n, n∗ ) in each time step does not abolish the rebound firing.
18.4. (∗) The reasoning in this chapter suggests that the interval of bistability for
the RTM neuron with h-current should become larger if τr (v) were reduced
for very low v, while remaining unchanged for all other v: The gating variable
r would then respond to deep hyperpolarization with a quicker and more pronounced
rise, while it would still only sluggishly approach r∗ as v approaches
v∗. To test this, we replace τr by qτr, with

q(v) = (v/20 + 5)²  if −100 ≤ v ≤ −80,   q(v) = 1  otherwise.   (18.4)
Note that q(−100) = 0 and q(−80) = 1. The graphs of τr and qτr are
plotted in Fig. 18.16. The motivation for this modification is not biological;
the goal here is merely to put our understanding of the mechanism underlying
bistability to a test.
Figure 18.16. Black, solid: time constant τr in the model of the h-current,
and the modification. Blue, dashes: qτr , with q defined in (18.4). The maximum
of τr occurs approximately at v = −80, and for v > −80, τr and qτr are the same.
[PLOT_MODIFIED_TAU_R]
Using the code generating Fig. 18.12 as a starting point, plot the f -I curve of
the model with τr replaced by qτr , for −4 ≤ I ≤ −3. See whether introducing
the factor of q changes the f -I curve in the expected way.
This exercise won’t take you much time, but the computation will take the
computer a fairly long time, so you’ll need to be a little patient.
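For reference, (18.4) in code form (a sketch; note that q is continuous, with q(−100) = 0 and q(−80) = 1, so τr and qτr join smoothly at v = −80):

```python
def q(v):
    # eq. (18.4): shrinks tau_r on [-100, -80] and leaves it unchanged elsewhere
    if -100.0 <= v <= -80.0:
        return (v/20.0 + 5.0)**2
    return 1.0
```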
Chapter 19
Bursting
Bursting is a very common behavior of neurons in the brain. A bursting neuron fires
groups of action potentials in quick succession, separated by pauses which we will
call the inter-burst intervals. Figure 19.1 shows an experimentally recorded voltage
trace illustrating bursting. Figure 19.2 shows a simulation discussed in detail later
in this chapter.
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 19) contains supplementary material, which is available to authorized users.
[Figures 19.1 and 19.2: experimentally recorded and simulated bursting voltage traces (v in mV vs. t in ms).]
[Figure: diagram involving I∗, Ic, and Ieff.]
only on present inputs, but also on past inputs that affected an unobserved variable,
and thereby altered the state of the system.15
The M- and calcium-dependent AHP currents discussed in Chapter 9 are
examples of depolarization- or firing-induced hyperpolarizing currents that decay
slowly in the absence of firing. The h-current, introduced in Chapter 18, is an
example of a depolarizing current that inactivates as a result of firing, and slowly
de-inactivates in the absence of firing. Another example is the T-current, an inward
(depolarizing) calcium current inactivated by firing [77]. Each of these currents has
been found to be involved in controlling bursts in experiments; see, for instance,
[27, 77, 115], and [116].
In patients with schizophrenia, bursting in the prefrontal cortex has been found
to be reduced [83]; a resulting loss of efficacy in signal transmission is hypothesized
in [83] to be a cause for cognitive deficits associated with schizophrenia.
with

dnslow/dt = (nslow,∞(v) − nslow)/τn,slow,   nslow,∞(v) = 1/(1 + exp((−20 − v)/5)),   (19.2)

and

gK,slow = 5 mS/cm²,   τn,slow = 20 ms.   (19.3)
This current is added to the right-hand side of the equation governing v, eq. (16.1).
Figure 19.2 shows v(t) for a solution of the resulting system with I = 7 μA/cm2 .
The solution converges to an attracting limit cycle, shown in the three-dimensional
phase space of the model in Fig. 19.4. The turns are rapid, and correspond to firing.
The descent to lower values of nslow is slow, and corresponds to the inter-burst
interval. Notice that in Fig. 19.2, the inter-spike interval within a burst increases.
This reflects the fact that in a homoclinic bifurcation, the frequency decreases as
the saddle approaches the cycle; this was discussed in Section 17.3.
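The activation function nslow,∞ in (19.2) is a high-threshold sigmoid: it reaches 1/2 only at v = −20 mV and is nearly zero at subthreshold voltages, so nslow builds up only while the neuron is spiking. A quick check (a sketch, in Python rather than the book's Matlab):

```python
import math

def n_slow_inf(v):
    # steady-state activation of the slow potassium current, eq. (19.2)
    return 1.0/(1.0 + math.exp((-20.0 - v)/5.0))
```

n_slow_inf(−20) = 1/2, while at a typical subthreshold voltage such as v = −60 mV the activation is essentially zero, and near the peak of a spike it is essentially one.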
Whether or not the model described above produces periodic bursting depends
on the value of g K,slow . If gK,slow is slightly reduced, the slow potassium current
is no longer strong enough to terminate bursts, and very high-frequency periodic
15 A thermostat is a familiar example. Whether or not your heat is on depends not only on the
current temperature in your room, but also on its history. The thermostat might turn the heat
on when the temperature drops below 66 °F, and turn it off when the temperature rises above
70 °F. If it is currently 68 °F, the heat may be on or off, depending on which part of the cycle the
thermostat is in. Without the hysteresis effect, the heat would turn on and off very frequently.
firing results; see Fig. 19.5. On the other hand, a four-fold increase in g K,slow results
in lower-frequency periodic firing; see Fig. 19.6.
The kind of bursting studied in this section, involving a saddle-node bifurca-
tion at Ic and a homoclinic bifurcation at I∗ , is called square-wave bursting. The
name can be understood by removing the spikes in Fig. 19.2, and plotting what
remains; see the red curve in Fig. 19.7. (To be precise, we numerically estimated
dv/dt, left out all values of v where the absolute value of the estimate of dv/dt
was greater than 1, and filled the gaps by linear interpolation.) The bursts ride
on plateaus of elevated v. The reason for this behavior can be seen from Fig. 16.2:
[Figure 19.4: the attracting limit cycle of the bursting model in the three-dimensional (v, n, nslow) phase space.]
Figure 19.5. Same as Fig. 19.2, with g K,slow reduced from 5 to 4 mS/cm2 .
[INAPIK_PLUS_WEAK_SLOW_I_K]
[Figure 19.6: same as Fig. 19.2, but with g K,slow increased four-fold; the result is lower-frequency periodic firing.]
Figure 19.7. Voltage trace of Fig. 19.2, and (in red) the same voltage
trace but without the spikes. This picture explains, to a viewer who isn’t fussy, what
square-wave bursting has to do with square waves. [SQUARE_WAVES]
The entire limit cycle lies to the right of the saddle and the node in the (v, n)-plane.
Thus throughout the bursts, v is higher than during the inter-burst intervals. For
much more about square-wave bursting, see [51, Section 5.2].
Ieff = time-varying effective drive (sum of external drive and a current that is
reduced rapidly by firing and builds up slowly between spikes)
Ic = drive value above which rest becomes unstable
I∗ = drive value below which periodic firing becomes impossible (I∗ < Ic)
δ = time between consecutive action potentials of a burst
Ik = value of Ieff just prior to the k-th action potential of a burst,
k = 0, 1, 2, 3, . . . (I0 > Ic but I0 ≈ Ic)
I∞ = lim_{k→∞} Ik (if the burst continued indefinitely)

The drive values just prior to consecutive action potentials are related by

Ik = αIk−1 + β,   (19.6)

with

α = exp(−δ/τslow),   β = −exp(−δ/τslow) + I0 (1 − exp(−δ/τslow)).

The limit I∞ satisfies

I∞ = αI∞ + β.   (19.7)

I∞ − Ik = α^k (I∞ − I0).   (19.9)

Ik = I∞ + α^k (I0 − I∞).   (19.12)
We assume that the burst is terminated when Ik reaches or falls below I∗. For this
to happen eventually, we need I∞ < I∗. Using (19.11), and with the approximation
I0 ≈ Ic, this condition becomes

τslow/δ > Ic − I∗.   (19.13)

When δ ≪ τslow, the recovery of Ieff between spikes is negligible, and therefore
the right-hand side of inequality (19.13) is approximately the number of action
potentials in a burst. The duration of the burst is therefore approximately

δ (Ic − I∗),

and inequality (19.13) expresses that the time constant τslow associated with the
slow current should be greater than the burst duration.
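The recursion (19.6) is easy to iterate. The sketch below uses Ic ≈ 4.5 and I∗ ≈ −1.4 (the INa,p-IK values quoted in exercise 6), together with illustrative choices I0 = 4.6, δ = 2 ms, and τslow = 50 ms, so that condition (19.13) holds (τslow/δ = 25 > Ic − I∗ ≈ 5.9):

```python
import math

def burst_spike_count(I0, I_star, delta, tau_slow, k_max=10000):
    """Iterate I_k = alpha*I_{k-1} + beta (eq. (19.6)) until I_k <= I_star;
    return (number of iterations until termination, I_infinity)."""
    alpha = math.exp(-delta/tau_slow)
    beta = -alpha + I0*(1.0 - alpha)
    I_inf = beta/(1.0 - alpha)          # fixed point of the recursion, eq. (19.7)
    I, k = I0, 0
    while I > I_star and k < k_max:
        I = alpha*I + beta
        k += 1
    return k, I_inf
```

With these numbers the burst terminates after 8 spikes, somewhat more than the crude estimate Ic − I∗ ≈ 5.9 because Ieff partially recovers between spikes; and I∞ ≈ −19.9 lies far below I∗, consistent with (19.13).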
19.4 Comparison of the Idealized Analysis with Biophysical Models
[Figure 19.8: a burst segment, with v (mV, upper panel) and Ieff (μA/cm², lower panel) as functions of t (ms), 40 ≤ t ≤ 90.]
Figure 19.9. The Erisir neuron, turned into a burster by adding a slow
potassium current that is strengthened by firing. [ERISIR_PLUS_SLOW_I_K]
Figure 19.10. Upper panel: A segment of Fig. 19.9. Lower panel: Ieff =
I + IK,slow (here I = 7.5 μA/cm2 ). The red line approximately indicates Ic , and the
blue line I∗ . [ERISIR_SHOW_SLOW_I_K]
Figure 19.11. Upper panel: Voltage trace of Fig. 19.9, and (in red) the
envelope of the spikes. This picture explains, at least to a lenient reader, what
elliptic bursting has to do with ellipses. [ELLIPSES]
demonstrating that in fact the addition of the slow potassium current leads to burst
firing. Figure 19.10 shows a segment of Fig. 19.9 again, together with the effective
current Ieff . Again the details are different from those in our idealized analysis in
Section 19.3, but qualitatively, the analysis captures what happens. In particular,
the burst duration is about 80 ms, smaller than τn,slow = 100 ms.
In Fig. 19.10, as in Fig. 19.8, we see that the instant when Ieff falls below I∗
is not the instant when the burst ends (in fact, Ieff briefly falls below I∗ with the
very first action potential of the burst), and there is a considerable delay between
the instant when Ieff rises above Ic and the burst onset. The reason for this delay
was discussed earlier already.
The kind of bursting seen in Fig. 19.9, involving a subcritical Hopf bifurcation
at Ic and a saddle-node collision of cycles at I∗ , is called elliptic bursting. The
name can be understood from Fig. 19.11, which shows the voltage trace of Fig. 19.9,
together with the envelope of the spikes. (To be precise, we marked the local maxi-
mum and local minimum points of v in red, and connected them piecewise linearly.)
It is typical of elliptic bursters that the spike amplitude waxes as the trajectory
spirals out from the fixed point to the stable limit cycle, after the fixed point loses
its stability in a subcritical Hopf bifurcation, and wanes as the slow hyperpolarizing
current strengthens. This effect, a trace of which is visible in Fig. 19.11, gives the
envelope during a burst an “elliptical” shape. For more about elliptic bursting, see
[51, Section 5.3].
Exercises
19.1. Derive eq. (19.5).
19.2. Derive eq. (19.8).
19.3. Derive (19.11).
19.4. (∗) Suppose that in the model of Section 19.2, you lowered g K,slow from 5 to
4.6 mS/cm². Would the bursts get longer or shorter? Would the inter-burst
intervals get longer or shorter? What would happen if you increased gK,slow
to 10 mS/cm2 ?
Try to guess the answer using the analysis in Section 19.3, and only then check
numerically whether you guessed correctly, using the program that generates
Fig. 19.2 as a starting point.
19.5. (∗) Repeat problem 4, but this time modifying τn,slow , not gK,slow . Think
about lowering τn,slow from 20 to 15 ms, or raising it to 25 ms. Again, try to
guess the answers first, then find them by numerical simulation.
19.6. (∗) This exercise is motivated by the discussion surrounding Fig. 19.8.
For the INa,p -IK model, I∗ ≈ −1.4 and Ic ≈ 4.5. (You can get these values
from the code generating Fig. 17.11. In that code, the I-axis was discretized
crudely, since that makes the resulting plot easier to read. To obtain I∗ and
Ic with good accuracy, you would want to refine the discretization of I.)
Starting with the code that generates Fig. 17.11, plot v(t) as a function of t,
0 ≤ t ≤ 500, starting with I = −4 at time t = 0, and letting I rise linearly
to 8 at time t = 500. Does firing start as soon as I rises above Ic ? You will
have to plot a narrower time window, containing the time when I rises above
Ic and the time when firing starts, to clearly see the answer to this question.
Part III
Modeling Neuronal
Communication
Chapter 20
Chemical Synapses
We now think about neuronal communication via (chemical) synapses; see Sec-
tion 1.1 for a brief general explanation of what this means. In this chapter, we
describe how to model chemical synapses in the context of differential equations
models of neuronal networks.
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 20) contains supplementary material, which is available to authorized users.
To model a synaptic input, one adds a term

Isyn = gsyn (t)(vrev − vpost )   (20.1)

on the right-hand side of the equation governing vpost . Here gsyn (t) is a conductance
density; it rises in response to pre-synaptic action potentials, and decays to zero
otherwise. The potential vrev is called the reversal potential of the synapse. (The
name is explained by the fact that vrev − vpost changes its sign when vpost crosses
vrev .) The term Isyn will tend to drive vpost towards vrev .
For instance, if the synapse is GABAA receptor-mediated, vrev should be vCl ,
the Nernst potential of chloride, since GABAA receptors are chloride channels. If
the synapse is GABAB receptor-mediated, vrev should be vK , since activation of
GABAB receptors opens, via a second messenger cascade, potassium channels. It is
less clear what vrev should be for AMPA and NMDA receptor-mediated synapses,
since AMPA and NMDA receptors are non-specific cation channels, and different
cations (positively charged ions) have different Nernst potentials. The overall effect
of AMPA and NMDA receptor activation turns out to be excitatory, and we will
use the value vrev = 0 mV for AMPA or NMDA receptor-mediated synapses. Notice
that 0 mV is “high,” namely higher than the membrane potential at rest, which
is negative. Note that both excitatory and inhibitory synaptic currents have the
form (20.1). For inhibitory synaptic currents, no minus sign appears in front of
gsyn (t); instead, vrev is low.
We write

gsyn (t) = ḡsyn s(t),

where the conductance density ḡsyn is an upper bound on the possible values of
gsyn (t), and s(t) ∈ [0, 1] rises in response to pre-synaptic action potentials, and
decays to zero otherwise. We call s the synaptic gating variable. One can think of
it as the fraction of post-synaptic receptors that are activated.
It is possible for two neurons to make contact in several locations. However,
since we always make the idealizing assumption that neurons have no spatial extent,
we will disregard this possibility.
Since s should rise when the pre-synaptic neuron spikes, but decay to zero
otherwise, we let s be governed by an equation of the form
ds/dt = [(1 + tanh(v/10))/2] · (1 − s)/τr − s/τd ,   (20.2)

where τr > 0 and τd > 0 are time constants discussed below. Note that the factor
(1 + tanh(v/10))/2 on the right-hand side of eq. (20.2) varies between 0 and 1, and
is close to 0 when v ≪ 0, and close to 1 when v ≫ 0. For instance, when v < −20,
this factor is less than 0.02, and when v > 20, it is greater than 0.98. When
(1 + tanh(v/10))/2 ≈ 0, (20.2) is approximately equivalent to

ds/dt = −s/τd ,
which describes exponential decay to zero, with time constant τd . This explains the
subscript d in “τd ”: τd is the decay time constant of s when the pre-synaptic neuron
is silent. We call τd the synaptic decay time constant. When (1+tanh(v/10))/2 ≈ 1,
the term
[(1 + tanh(v/10))/2] · (1 − s)/τr ≈ (1 − s)/τr
on the right-hand side of eq. (20.2) drives s towards 1 with time constant approxi-
mately equal to τr . This explains the subscript r in “τr ”. We call τr the synaptic
rise time constant.
The function (1 + tanh(v/10))/2 appearing on the right-hand side of eq. (20.2)
could be replaced, without any significant effect, by many other increasing functions
of v that are close to 0 when v ≪ 0 and close to 1 when v ≫ 0. For instance,
(1 + tanh(v/4))/2 has been used by some authors; see, for instance, [17, 50]. Many
other choices would work equally well; see exercise 1.
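To make eq. (20.2) concrete, here is a small numerical sketch (not code from the book; the 1-ms square voltage pulse standing in for an action potential is my own simplification):

```python
import numpy as np

# Forward-Euler integration of eq. (20.2) for a prescribed voltage trace.
def simulate_s(tau_r=0.2, tau_d=2.0, dt=0.01, t_end=20.0):
    n = int(t_end / dt)
    t = np.linspace(0.0, t_end, n + 1)
    v = np.where((t >= 1.0) & (t <= 2.0), 50.0, -70.0)  # crude 1-ms "spike"
    s = np.zeros(n + 1)
    for k in range(n):
        gate = (1.0 + np.tanh(v[k] / 10.0)) / 2.0
        s[k + 1] = s[k] + dt * (gate * (1.0 - s[k]) / tau_r - s[k] / tau_d)
    return t, s

t, s = simulate_s()
print(s.max() > 0.8, s[-1] < 0.01)  # s rises nearly to 1, then decays away
```

During the pulse, s approaches (1/τr)/(1/τr + 1/τd) ≈ 0.91 almost instantly; afterwards it decays with time constant τd, exactly as described above.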
The decay time constants of AMPA receptor-mediated synapses are short;
τd = 2ms is not unreasonable [91]. For GABAA receptor-mediated synapses, various
different decay time constants have been reported in the literature, often between 4
and 10 ms or so [69, 137]. For GABAA receptor-mediated inhibition among basket
cells in the hippocampus, much shorter decay time constants, on the order of 2
to 3 ms or even less, have also been measured [6, 97]. On the other hand, much
longer decay time constants of GABAA receptor-mediated inhibition targeting the
dendrites of hippocampal pyramidal cells, on the order of tens of ms, have also been
reported and used in modeling studies [58, 133, 179].
Decay time constants for GABAB receptor-mediated synapses are much
longer. For instance, in [125], the time of decay to half amplitude of the post-synaptic
potential (the change in post-synaptic membrane potential) due to GABAB receptor
activation is reported to be slightly above 200 ms, corresponding to τd ≈ 300 ms
(exercise 2). Decay time constants for NMDA receptor-mediated synapses are of a
similar order of magnitude [187]. The exact values depend on what kind of cell the
post-synaptic neuron is, and in fact it is reported in [187] that the decay is best
described by the sum of two exponentials, with two different decay rates; the longer
of the two decay time constants is on the order of 200–300 ms, approximately.
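The translation between half-decay times and decay time constants is simple arithmetic: if the amplitude decays in proportion to e−t/τd and reaches half its initial value at time t½ , then τd = t½ / ln 2 (compare exercise 2). For instance, with a half-decay time "slightly above 200 ms" (the 210 ms below is an illustrative value, not a number from the book):

```python
import math

t_half = 210.0              # ms, "slightly above 200 ms" (illustrative value)
tau_d = t_half / math.log(2.0)
print(round(tau_d))         # 303, i.e., roughly 300 ms
```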
When modeling a network of N synaptically connected neurons, we denote the
membrane potentials by vi , i = 1, 2, . . . , N . Let the i-th neuron be the pre-synaptic
one, and the j-th the post-synaptic. The gating variable s and the parameters ḡsyn ,
vrev , τr , and τd may now all depend on i and j, and we indicate this dependence
with the double index “ij”. The synaptic input from neuron i into neuron j is then

Isyn,ij = ḡsyn,ij sij (t) (vrev,ij − vj ),   (20.3)

and

dsij /dt = [(1 + tanh(vi /10))/2] · (1 − sij )/τr,ij − sij /τd,ij .   (20.4)
In an all-to-all connected network, i.e., one with gsyn,ij > 0 for all i and j, each
neuron receives N synaptic inputs. (We do allow i = j. Synapses of a neuron
onto itself are called autapses, and they exist in the brain [154].) The number of
dependent variables in such a simulation is exactly O(N 2 ), because there are N 2
synaptic gating variables sij . (See Section 1.2 for the meaning of “exactly O(N 2 )”.)
If one wants to simulate large networks, this is a potential reason for concern.
However, with certain simplifying assumptions, one can reduce the number
of dependent variables. We will always assume that τr,ij , τd,ij , and sij (0) do not
depend on j:

τr,ij = τr,i ,   τd,ij = τd,i ,   sij (0) = si (0).   (20.5)
Equation (20.4) then implies that sij (t) is independent of j for all t, and this reduces
the number of dependent variables from O(N 2 ) to O(N ): There is now only one
synaptic gating variable, si = si (t), per neuron, not one per pair of neurons.
Unfortunately, this simplification is less important than it might first seem:
Although there are no longer O(N 2 ) dependent variables, there are still O(N 2 )
terms on the right-hand side of the equations governing the vi , and as a result the
number of arithmetic operations per time step is still O(N 2 ). To reduce this num-
ber to O(N ) in an all-to-all connected network requires far more restrictive and less
realistic assumptions. For instance, if the vrev,ij and ḡsyn,ij depend neither on i nor
on j, then the complexity per time step can be reduced from O(N 2 ) to O(N ); see
exercise 3. Notice that a network in which all synaptic reversal potentials are the
same consists of excitatory cells only or of inhibitory cells only. The reduction to
O(N ) arithmetic operations per time step is in fact also possible for networks in-
cluding both excitatory and inhibitory neurons. However, in any case an unrealistic
degree of uniformity of synaptic strengths has to be assumed for this reduction.
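The following sketch (my own illustration, with made-up numbers) shows where the savings come from: if vrev,ij ≡ vrev and the strength ḡi depends only on the pre-synaptic index i, the total synaptic input into neuron j is (vrev − vj) Σi ḡi si, and the sum over i, being independent of j, is computed once per time step:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1000
g = rng.uniform(0.0, 0.1, N)      # per-presynaptic-cell strengths (illustrative)
s = rng.uniform(0.0, 1.0, N)      # one gating variable per neuron
v = rng.uniform(-80.0, -50.0, N)  # membrane potentials
v_rev = 0.0

# O(N^2): one term per (i, j) pair
I_slow = np.array([np.sum(g * s * (v_rev - v[j])) for j in range(N)])

# O(N): the sum over i does not depend on j, so compute it once
total = np.sum(g * s)
I_fast = total * (v_rev - v)

print(np.allclose(I_slow, I_fast))
```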
We do assume (20.5) now, so synaptic gating variables can be thought of as
being associated with the pre-synaptic neuron. For the remainder of this section,
we drop the subscripts i and j again, as we did at the beginning of the section, and
denote the pre- and post-synaptic membrane potentials by v and vpost . Associated
with the pre-synaptic neuron is the synaptic gating variable s, satisfying eq. (20.2).
For the reduced Traub-Miles neuron with I = 1μA/cm2 , we show in Fig. 20.2 graphs
of v (upper panel), and s with τd = 2 ms, τr = 0.2 ms (middle panel), and τr = 1 ms
(lower panel). The graphs show that one does not actually change the kinetics of
s in a meaningful way by changing τr . Larger values of τr simply translate into
smaller peak values of s, so increasing τr is very nearly equivalent to lowering ḡsyn ,
i.e., to weakening the synapses. Regardless of the value of τr , the rise of s is limited
by the duration of the action potential, and therefore very brief. An increase in τr
simply makes s less responsive to the action potential.
[Figure 20.2: voltage trace v of the neuron (top), and s with τr = 0.2 ms (middle) and τr = 1 ms (bottom), 0 ≤ t ≤ 100 ms.]
[Figure 20.3: voltage trace v (top) and the gating variable q (bottom), 0 ≤ t ≤ 100 ms.]
The current density Isyn is added to the right-hand side of the equation governing
vpost . The intuitive meaning of τd and τr is easy to understand: τd measures how
fast s decays, and τr measures how fast it rises initially; note that for s = 0 and
q = 1, ds/dt = 1/τr . The decay time constant τd,q is less intuitive; roughly, it
measures how long q stays elevated following a pre-synaptic action potential, and
it therefore governs how long s rises.
Figure 20.4 shows s for an RTM neuron, using rise time constants τr remi-
niscent of NMDA receptor-mediated synapses (τr = 10 ms) and GABAB receptor-
mediated synapses (τr = 100 ms).
Figure 20.4. Top: Voltage trace of RTM neuron with I = 0.12 μA/cm2 .
Middle: Trace of synaptic gating variable s, computed using eqs. (20.9) and (20.10),
with τd = 300 ms, τr = τd,q = 10 ms. Bottom: Same with τd = 300 ms and τr =τd,q =
100 ms. [RTM_PLOT_S_TWO_VARIABLES]
The time course of s depends, to a small extent, on the shape of the pre-
synaptic action potential. However, a pre-synaptic action potential approximately
makes q jump to 1. If there is an action potential at time t = 0, then approximately
s satisfies
ds/dt = e−t/τd,q (1 − s)/τr − s/τd   (20.11)
for t > 0. We think about the solution of this equation with
s(0) = 0. (20.12)
The term e−t/τd,q (1 − s)/τr initially drives s up. However, as e−t/τd,q decays, the
term −s/τd takes over. As a result, s first rises, then decays. The time-to-peak τpeak ,
i.e., the time at which the solution of (20.11), (20.12) reaches its maximum, depends
continuously on τd,q . (This is so because solutions of differential equations in general
depend continuously on parameters in the differential equations.) The greater τd,q ,
the slower is the decay of e−t/τd,q , and as a result the larger is τpeak . Furthermore,
τpeak is very small when τd,q is very small, since then the term e−t/τd,q (1 − s)/τr
in (20.11) quickly becomes negligible. On the other hand, τpeak is very large when
τd,q is very large, since then (20.11) is, for a long time, approximately

ds/dt = 1/τr − (1/τr + 1/τd ) s,   (20.13)
and the solution of (20.13) with s(0) = 0 is monotonically increasing for all times.
In summary, we have derived, not entirely rigorously but hopefully entirely con-
vincingly, the following proposition.
Proposition 20.1. Denote by τpeak the time at which the solution of (20.11), (20.12)
reaches its maximum. For fixed τd > 0 and τr > 0, τpeak is a continuous and strictly
increasing function of τd,q with
lim_{τd,q → 0} τpeak = 0   and   lim_{τd,q → ∞} τpeak = ∞.
Given τd > 0 and τr > 0, Proposition 20.1 implies that for any prescribed
value τ̂ > 0, there is exactly one τd,q > 0 for which τpeak = τ̂ . Furthermore, this
value τd,q is easy to compute using the bisection method (see Appendix A). We
therefore will from here on think of τpeak , not τd,q , as the given model parameter.
The value of τd,q is computed from τpeak , using the simplified equation (20.11), but
then used in the full model given by eqs. (20.8)–(20.10); see Fig. 20.5.
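A sketch of that computation (my own; forward Euler for (20.11), bisection on τd,q, with step sizes and bracket endpoints chosen arbitrarily):

```python
import math

def t_peak(tau_dq, tau_r=10.0, tau_d=300.0, dt=0.05, t_end=500.0):
    # Time at which the solution of (20.11) with s(0) = 0 (eq. (20.12)) peaks.
    s, best_s, best_t = 0.0, 0.0, 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        s += dt * (math.exp(-t / tau_dq) * (1.0 - s) / tau_r - s / tau_d)
        if s > best_s:
            best_s, best_t = s, t + dt
    return best_t

def tau_dq_for(tau_peak_target, lo=0.01, hi=1000.0, iters=30):
    # Bisection, relying on the monotonicity asserted in Proposition 20.1.
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if t_peak(mid) < tau_peak_target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

tau_dq = tau_dq_for(20.0)        # the tau_dq that yields tau_peak = 20 ms
print(abs(t_peak(tau_dq) - 20.0) < 0.5)
```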
[Figure 20.5 panels: τd = 300 ms, τr = 10 ms, τpeak = 20 ms (middle); τd = 300 ms, τr = 100 ms, τpeak = 150 ms (bottom).]
Figure 20.5. Similar to Fig. 20.4, but with τpeak prescribed and τd,q cal-
culated from it. [RTM_PLOT_S_PRESCRIBE_TAU_PEAK]
Both the model in Section 20.1, and the modified model in the present section,
are not, of course, derived from first principles. For a more fundamental discussion
of the modeling of synapses, see [36].
The f -I curve again has two critical values of I, which we call, as before, I∗ and Ic . This is not surprising:
For I strong enough, but not so strong that stable rest is no longer possible, one
would expect that the excitatory autapse could perpetuate firing once it begins.
A more detailed investigation shows that there are a saddle-node bifurcation at Ic
(exercise 4), and a homoclinic bifurcation at I∗ , so we have here another example
of a model of bifurcation type 3.
[Figure 20.6: f [Hz] as a function of I [μA/cm2 ], with the critical values I∗ and Ic marked on the I-axis.]
The parameters in Fig. 20.6 may not be so realistic (too slow for AMPA
receptor-mediated synapses, too fast for NMDA receptor-mediated ones), but see
exercise 5.
one uses

Isyn = ḡsyn s(t) B(vpost )(vrev − vpost ),   (20.14)

with

B(vpost ) = 1 / ( 1 + ([Mg]ex /3.57) exp(−0.062 vpost ) ),
where [Mg]ex denotes the magnesium concentration in the extracellular fluid, mea-
sured in mM (millimolar). By definition, mM stands for 10−3 M, and M (molar)
stands for moles per liter. A mole of a substance consists of NA molecules, where
NA denotes Avogadro’s number, NA ≈ 6 × 1023 . A typical value of [Mg]ex is 1 mM.
This is the value that we will use in this book, so we will write
B(vpost ) = 1 / ( 1 + exp(−0.062 vpost )/3.57 ).   (20.15)
Figure 20.7 shows B as a function of vpost .
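In code, the magnesium block factor is a one-liner; the spot checks below simply confirm that B is small near rest and close to 1 at depolarized potentials:

```python
import math

def B(v_post, mg_ex=1.0):
    # Eq. (20.14): magnesium block factor; mg_ex is the extracellular
    # magnesium concentration in mM (mg_ex = 1 gives eq. (20.15)).
    return 1.0 / (1.0 + (mg_ex / 3.57) * math.exp(-0.062 * v_post))

print(round(B(-70.0), 3), round(B(50.0), 3))  # 0.044 0.988
```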
Figure 20.8. Buildup of the synaptic gating variable s over several action
potentials. The upper panel shows the voltage trace of an RTM neuron firing with
period T ≈ 74.5. The lower panel shows the time evolution of s, with τr = 10,
τd,q = 5, τd = 300. [S_BUILDUP]
Figure 20.9. Analogous to Fig. 20.8, but with τr = 100, τd,q = 1, τd = 500.
[S_SLOW_BUILDUP]
Exercises
20.1. (a) Suppose that g = g(v) is an increasing function with the properties
limv→−∞ g(v) = 0, limv→∞ g(v) = 1, and g(0) = 1/2. Show that for any
Δ > 0, the function g(v/Δ) has these properties as well. Does the function
get steeper, or less steep, near v = 0 as Δ increases? (b) Show that the func-
tions g(v) = (1 + tanh(v))/2, arctan(v)/π + 1/2, and (v/√(1 + v²) + 1)/2 have
the properties listed in part (a). (c) (∗) On the right-hand side of eq. (20.2),
you could replace (1 + tanh(v/10))/2 by many other functions of the form
g(v/Δ), where g has the properties listed in part (a), and Δ > 0. Re-compute
the middle panel of Fig. 20.2 using the three choices of g in part (b), and using
Δ = 1, 5, and 10. You will get 9 different plots. One of them, the one with
g(v) = (1 + tanh(v))/2 and Δ = 10, will be the middle panel of Fig. 20.2.
How do the others differ from it? Can you explain the differences you see?
20.2. In [125], the time it takes for GABAB receptor-mediated inhibition to decay to
half amplitude is reported to be about 200 ms. Assuming that the amplitude
decays proportionally to e−t/τd , what is τd ?
20.3. Consider a network of N model neurons, synaptically connected as described
in Section 20.1. Make the assumptions given by eq. (20.5), and in addition
assume
vrev,ij = vrev,j and ḡsyn,ij = ḡsyn,i . (20.17)
Explain why now the number of operations needed in each time step can be
reduced from O(N 2 ) to O(N ).
The first of the two assumptions in (20.17) would be satisfied, for instance,
if all synaptic reversal potentials were the same. All synapses would then be
inhibitory, or all would be excitatory. There are contexts in which this is not
unreasonable: For instance, the lateral septum is a brain structure mostly
comprised of inhibitory cells [142]. Furthermore, the assumption vrev,ij =
vrev,j can be relaxed a bit, allowing both excitatory and inhibitory synapses.
The most problematic assumption in (20.17) is the second one. It is a ho-
mogeneity assumption: All synapses in which neuron i is the pre-synaptic
cell have to have the same strength. This is not usually the case in real net-
works, and heterogeneity (non-uniformity) of synaptic strengths in networks
has significant consequences.
20.4. (∗) (†) Demonstrate numerically that the self-exciting RTM neuron with
ḡsyn = 0.1 mS/cm2 , vrev = 0 mV, τd = 5 ms, τr = 0.2 ms, τpeak = 0.6 ms
(see Fig. 20.6) has a saddle-node bifurcation at Ic .
20.5. (∗) Compute the f -I curve of the self-exciting RTM neuron with ḡsyn =
0.02 mS/cm2 , vrev = 0 mV, and time constants reminiscent of NMDA
receptor-mediated synapses: τd = 300 ms, τr = 10 ms, τpeak = 20 ms. Include
the Jahr-Stevens model of the magnesium block.
Chapter 21
Gap Junctions
When neurons i and j are coupled by a gap junction, a current Ii→j = ggap,ij (vi − vj ) flows
into neuron j. More precisely, Ii→j should be thought of as a current density, and
it appears as an extra summand on the right-hand side of the equation governing
vj . The parameter ggap,ij is a conductance density. At the same time, an extra
current density
Ij→i = ggap,ji (vj − vi )
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 21) contains supplementary material, which is available to authorized users.
appears on the right-hand side of the equation for vi . The symmetry of the gap
junction is reflected by the relation ggap,ij = ggap,ji . The total gap-junctional current
density into the j-th neuron is

Igap,j = Σ_{i=1}^{N} ggap,ij (vi − vj ).   (21.2)

Writing G = [ggap,ij ]1≤i,j≤N and v = (v1 , . . . , vN )ᵀ , this takes the form

Igap,j = (Gv)j − cj vj ,   (21.3)

where

cj = Σ_{i=1}^{N} ggap,ij   (21.4)
is the sum of the entries of the j-th column of G, briefly called the j-th column
sum of G. Because G is symmetric, cj is also the j-th row sum of G, i.e., the sum
of the entries in the j-th row of G. To write (21.2) in the form (21.3) is useful
because it suggests arranging the computation in a cost-saving way, as follows. Let
us (unrealistically) assume that the N neurons are gap-junctionally coupled with
all-to-all connectivity, i.e., that the ggap,ij are all positive, or (more realistically) that
the fact that many of them are zero is ignored in the computation for convenience.
Then the computation of the cj takes O(N 2 ) operations, but can be done once
and for all before the computation starts, and need not be repeated in each time
step. The matrix-vector multiplication Gv is then the only piece of the computation
of (21.2) that requires O(N 2 ) operations per time step.
If (21.2) were the entire right-hand side of the j-th equation, the system of
differential equations would be
C dvj /dt = Σ_{i=1}^{N} ggap,ij (vi − vj ),   j = 1, 2, . . . , N.   (21.5)
One says that this system describes discrete diffusion: Voltage flows from the i-th
neuron to the j-th at a rate proportional to vi − vj . The name “discrete diffusion”
is especially justified if we assume that ggap,ij > 0 only when neurons i and j are
neighbors — as, of course, is the case in the brain.
If the connectivity is “dense enough” (see below), discrete diffusion tends to
equilibrate the membrane potentials of all neurons in the network. Intuitively, this
is not surprising: If vj < vi , there is a current from neuron i into neuron j, and vice
versa. To explain what “dense enough” means, we symbolize the N neurons by N
dots in the plane, and connect the i-th dot with the j-th if and only if ggap,ij > 0.
This is called the connectivity graph; see Fig. 21.1. The dots are called the nodes of
the graph. The connectivity is dense enough for the assertion made above to hold
if the connectivity graph is connected, meaning that it is possible to walk from any
node in the graph to any other node following edges.
Proposition 21.1. Suppose that the connectivity graph is connected, and vj (t),
t ≥ 0, 1 ≤ j ≤ N , satisfy (21.5). Then
lim_{t→∞} vj (t) = m0 for all j, with m0 = (1/N) Σ_{j=1}^{N} vj (0).
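Proposition 21.1 is easy to check numerically. The sketch below (my own; the random connected graph and C = 1 are arbitrary choices) integrates (21.5) and compares the long-time voltages with the mean of the initial ones:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 20

# Symmetric conductance matrix G; a ring backbone guarantees a connected graph.
G = np.zeros((N, N))
for i in range(N):
    G[i, (i + 1) % N] = G[(i + 1) % N, i] = rng.uniform(0.05, 0.1)
extra = rng.uniform(0.0, 0.1, (N, N)) * (rng.random((N, N)) < 0.2)
G += extra + extra.T                      # keep G symmetric

v = rng.uniform(-80.0, -50.0, N)          # initial membrane potentials
m0 = v.mean()

c = G.sum(axis=0)                         # column sums, eq. (21.4)
dt = 0.01
for _ in range(200_000):                  # forward Euler for eq. (21.5), C = 1
    v += dt * (G @ v - c * v)

print(np.allclose(v, m0, atol=1e-6))      # all voltages converge to the mean
```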
[Figure 21.2: membrane potential v1 of the first neuron (top) and v2 of the second (bottom), 0 ≤ t ≤ 100 ms; v2 stays between about −63.5 and −62.5 mV.]
shown in Fig. 21.2. The membrane potential v2 oscillates in response to the periodic
firing of the first neuron. Each time the first neuron fires, there is a sudden steep
surge in v2 by about 0.5 mV. This is comparable to the amplitudes of gap-junctional
potentials reported in [152]. It is clear why there are sudden nearly instantaneous
rises in v2 in response to action potentials of the first neuron: The gap-junctional
current (to be precise, current density) into the second neuron is ggap (v1 − v2 ),
which is very large while the first neuron undergoes an action potential.
[Figure 21.3: v1 (top) and v2 (bottom) as in Fig. 21.2; black: gap junctions always on, red: gap junctions turned off whenever v1 > −50 mV.]
You might wonder whether the sudden surge in v2 triggered when the first
neuron fires is really important during inter-spike intervals. Figure 21.3 suggests
that it is. The red curves in Fig. 21.3 show how Fig. 21.2 changes if we turn off
the gap junctions whenever v1 exceeds −50 mV, that is, during action potentials:
v1 remains nearly unchanged, but v2 drops by 0.5 mV even during the inter-spike
intervals of the first neuron.
This sounds like bad news for integrate-and-fire models, which do not at-
tempt to reproduce the shape of the voltage spikes. However, this problem has the
following simple solution, proposed in [103]. Suppose that v1 and v2 are the mem-
brane potentials of two gap-junctionally coupled integrate-and-fire neurons. When
v1 reaches the firing threshold and is therefore reset, v2 is raised by a small amount
ε > 0 to account for the effect that the spike in v1 would have had on v2 , had it
been simulated. If adding ε raises v2 above the firing threshold, then we reset it as
well.
Two gap-junctionally coupled LIF neurons, with the normalizations in eqs. (7.6)
and (7.7), are therefore governed by the following equations:

dv1 /dt = −v1 /τm + I1 + ggap (v2 − v1 )   if v1 < 1 and v2 < 1,   (21.6)
dv2 /dt = −v2 /τm + I2 + ggap (v1 − v2 )   if v1 < 1 and v2 < 1,   (21.7)

if v1 (t − 0) = 1, then v1 (t + 0) = 0 and
v2 (t + 0) = v2 (t − 0) + ε if v2 (t − 0) + ε < 1, and v2 (t + 0) = 0 if v2 (t − 0) + ε ≥ 1;   (21.8)

if v2 (t − 0) = 1, then v2 (t + 0) = 0 and
v1 (t + 0) = v1 (t − 0) + ε if v1 (t − 0) + ε < 1, and v1 (t + 0) = 0 if v1 (t − 0) + ε ≥ 1.   (21.9)
(For simplicity, we have taken the membrane time constant, τm , to be the same for
both neurons; of course one could think about the case of two different membrane
time constants as well.) The two parameters ggap > 0 and ε > 0 are not unrelated.
As pointed out in [103], ε should be taken to be proportional to ggap :

ε = βggap ,
where β > 0. We will do a rough calculation to see what a sensible choice of β
might be. In Fig. 5.3, the membrane potential of a periodically firing WB neuron
approximately resets to −67 mV in each spike, then rises to approximately −52 mV,
and then an action potential occurs; this is demonstrated by Fig. 21.4, where we
have indicated the horizontal lines v ≡ −67 mV and v ≡ −52 mV in blue. Since
in a normalized LIF neuron, v rises by 1 unit between reset and threshold, we
should approximately identify a rise of the membrane potential of the WB neuron
by 0.5 mV with a rise of the normalized v of the LIF neuron by ε = 0.5/15 = 1/30.
Since in Fig. 21.2, ggap = 0.01 and there is a rise in v2 by approximately 0.5 mV
each time the first neuron fires, it is not unreasonable to set β so that 1/30 ≈ 0.01β.
We will therefore use
β = 3.
(With ggap = 0.01, this yields ε = 0.03, comparable to the value of 0.04 in [103,
Fig. 2]. Note, however, that ggap = 0.2 in [103, Fig. 2].) Using β = 3, we reproduce
Fig. 21.3 using two LIF neurons; this is shown in Fig. 21.5. The figures are qualita-
tively quite similar. Again, omitting the impact of spikes in v1 on v2 would lead to
a significant decrease in v2 even between spikes.
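A minimal forward-Euler sketch of eqs. (21.6)–(21.9), using the parameter values of Fig. 21.5 (my own implementation, not the book's code):

```python
def lif_pair(eps, tau_m=10.0, I1=0.125, I2=0.09, g_gap=0.01,
             dt=0.01, t_end=100.0):
    # Two gap-junctionally coupled LIF neurons, eqs. (21.6)-(21.9).
    v1 = v2 = 0.0
    spike_times = []                       # firing times of neuron 1
    for k in range(int(t_end / dt)):
        dv1 = -v1 / tau_m + I1 + g_gap * (v2 - v1)
        dv2 = -v2 / tau_m + I2 + g_gap * (v1 - v2)
        v1 += dt * dv1
        v2 += dt * dv2
        if v1 >= 1.0:                      # neuron 1 fires: reset, spikelet in v2
            v1 = 0.0
            v2 = 0.0 if v2 + eps >= 1.0 else v2 + eps
            spike_times.append(k * dt)
        if v2 >= 1.0:                      # neuron 2 fires (not with I2 = 0.09)
            v2 = 0.0
            v1 = 0.0 if v1 + eps >= 1.0 else v1 + eps
    return spike_times, v2

spikes_with, v2_with = lif_pair(eps=3 * 0.01)   # epsilon = beta*g_gap, beta = 3
spikes_wo, v2_wo = lif_pair(eps=0.0)
print(len(spikes_with) > 0, v2_with > v2_wo)    # spikelets keep v2 elevated
```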
Figure 21.4. Like Fig. 5.3, with the lines v ≡ −67 mV and v ≡ −52 mV
indicated as dashed blue lines, to demonstrate that the WB neuron resets at
approximately −67 mV, and its firing threshold is at approximately −52 mV.
[RESET_THRESHOLD]
Figure 21.5. Two LIF neurons, with τm = 10 and I1 = 0.125 (above the
spiking threshold Ic = 1/τm = 0.1), I2 = 0.09 (below the spiking threshold), gap-
junctionally connected with ggap = 0.01, ε = βggap , β = 3 (black) and ε = 0 (red).
We have indicated voltage spikes with vertical lines in the graph of v1 . This figure
should be compared with Fig. 21.3. [LIF_NETWORK_WITH_GJ]
Exercises
21.1. (†) The aim of this exercise is to prove Proposition 21.1. Let v0 ∈ RN , and
supplement (21.5) with the initial condition
v(0) = v0 . (21.10)
Denote by m0 the average of the entries in v0 . You will prove that the
solution of (21.5), (21.10) converges to m0 e, where
e = (1, 1, . . . , 1)ᵀ ∈ RN .
(G − D)bk = λk bk , k = 1, 2, . . . , N.
(a) Your first task is to prove that the eigenvalues must be ≤ 0. You can do
this using Gershgorin’s Theorem, which states that for any complex N × N -
matrix
A = [aij ]1≤i,j≤N ,
any eigenvalue, λ, lies in one of the Gershgorin disks. The Gershgorin disks
are the disks
{z ∈ C : |z − ajj | ≤ Σ_{i≠j} |aij |},   1 ≤ j ≤ N,
t1,1 < t1,2 < . . . < t1,n1 and t2,1 < t2,2 < . . . < t2,n2 .
Chapter 22

A Wilson-Cowan Model
of an Oscillatory E-I
Network
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 22) contains supplementary material, which is available to authorized users.
[Schematic: the E-population receives external drive IE , and the I-population receives external drive II .]
which dE/dt and dI/dt depend on present values of E and I only. Thus the model
is a system of ODEs. It is of the general form

dE/dt = ( f (wEE E − wIE I + IE ) − E ) / τE ,   dI/dt = ( g(wEI E − wII I + II ) − I ) / τI .   (22.1)
(In writing the system in this form, I also assumed that refractoriness is unim-
portant; see [185] for the small corrections to the equations needed to account for
refractoriness.) We will next discuss the physical dimensions and meaning of the
parameters and the functions f and g appearing in eqs. (22.1).
The weights wEE ≥ 0, wIE ≥ 0, wEI ≥ 0, and wII ≥ 0 are dimensionless.
They represent coupling strengths: wEE measures how strongly E-cells excite each
other, wEI measures how strongly E-cells excite I-cells, and so on. The parameters
IE and II , which should be thought of as normalized input drives to the E- and
I-cells, respectively, are frequencies, and so are the values of the functions f and g.
We assume that f and g are increasing, and will say more about them shortly. If f
and g were constant, say f ≡ f0 and g ≡ g0 , then eqs. (22.1) would drive E and I
towards f0 and g0 , respectively, exponentially fast with time constants τE and τI .
We will assume that neuronal frequencies vary between 0 and 100 Hz.
(Although there are examples of faster firing in the brain, it isn’t common.)
It is therefore natural to assume that f and g have values between 0 and 100 Hz.
With this assumption, if E(0) and I(0) lie between 0 and 100 Hz, then E(t) and I(t)
lie between 0 and 100 Hz for all t ≥ 0 (exercise 1). We also typically assume that f
and g are differentiable, increasing, and have exactly one inflection point. We call
a function of a single real variable that is bounded, differentiable, increasing, and
has exactly one inflection point sigmoidal.
We now consider an example of specific choices for the parameters in (22.1),
and for the functions f and g. Our example is almost precisely taken from [184].
First, we set
wEE = 1.5, wIE = 1, wEI = 1, wII = 0. (22.2)
Thus we omit self-inhibition of the inhibitory population, but not recurrent excita-
tion, i.e., self-excitation of the excitatory population. Next, we set
τE = 5, τI = 10. (22.3)
Neuronal activity does not follow excitation instantly, but with a delay, since
synapses don’t act instantaneously and there are delays in the conduction of neu-
ronal signals. We think of the interactions in our network as mediated by AMPA
and GABAA receptors. Since AMPA receptor-mediated excitation is typically some-
what faster than GABAA receptor-mediated inhibition, it is reasonable to make τI
greater than τE .
We set

f (x) = 100x²/(30² + x²) if x ≥ 0, and f (x) = 0 otherwise,   (22.4)

and

g(x) = 100x²/(20² + x²) if x ≥ 0, and g(x) = 0 otherwise.   (22.5)

Notice the significance of the terms 30² and 20² appearing in these definitions:
f (30) = 50 and g(20) = 50. For all x, f (x) ≤ g(x); that is, the I-cells are assumed
to be activated more easily than the E-cells.
Finally, we set
IE = 20, II = 0. (22.6)
Thus we assume that the E-cells are driven harder than the I-cells.
A solution is shown in Fig. 22.2. Figure 22.3 depicts the same solution in the
phase plane. There is an attracting limit cycle, an oscillatory solution, with surges
in E always promptly followed by surges in I.
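A compact simulation of eqs. (22.1) with the parameter choices (22.2)–(22.6) (my own forward-Euler sketch; the initial values of E and I are arbitrary):

```python
import numpy as np

def f(x):
    return 100.0 * x * x / (30.0**2 + x * x) if x >= 0 else 0.0

def g(x):
    return 100.0 * x * x / (20.0**2 + x * x) if x >= 0 else 0.0

wEE, wIE, wEI, wII = 1.5, 1.0, 1.0, 0.0   # eq. (22.2)
tauE, tauI = 5.0, 10.0                    # eq. (22.3), in ms
IE, II = 20.0, 0.0                        # eq. (22.6)

dt, t_end = 0.01, 300.0
E, I = 50.0, 10.0                         # arbitrary initial values
Es = []
for _ in range(int(t_end / dt)):
    dE = (f(wEE * E - wIE * I + IE) - E) / tauE
    dI = (g(wEI * E - wII * I + II) - I) / tauI
    E, I = E + dt * dE, I + dt * dI
    Es.append(E)

Es = np.array(Es)
late = Es[-10000:]                        # last 100 ms: on the limit cycle
print(late.max() - late.min() > 10.0)     # E oscillates with large amplitude
```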
Remembering that E and I denote mean firing frequencies of large populations
of neurons, we can visualize the solution of eqs. (22.1)–(22.6) depicted in Figs. 22.2
and 22.3 in yet another way; see Fig. 22.4. This figure indicates firing times of
NE = 80 E-cells and NI = 20 I-cells. (We will explain shortly how those firing
times are derived from the computed mean frequencies E and I.) The horizontal
axis in Fig. 22.4 indicates time in ms. The vertical axis indicates neuronal index;
Figure 22.3. Depiction of the solution of Fig. 22.2 in the (E, I)-plane
(black). The red and blue curves are the E- and I-nullclines, respectively; compare
this with Fig. 10.3. [WILSON_COWAN_PHASE_PLANE]
the i-th E-cell has index i+NI , and the j-th I-cell has index j. A dot in the location
(t, m) indicates that the m-th neuron fires at time t; spikes of E-cells are indicated
by red dots, and spikes of I-cells by blue ones. A plot of the sort shown in Fig. 22.4
is called a spike rastergram. We chose the ratio NE : NI to be 4 : 1 here because this
is often said to be the approximate ratio of glutamatergic to GABAergic neurons
in the brain [136].
We will now explain how the spike times in Fig. 22.4 were derived from the
functions E and I. We assume that in a time interval [kΔt, (k + 1)Δt], neuron m
fires with probability

pm,k = (Δt/1000) · (Ek + Ek+1 )/2 if neuron m is an E-cell, and
pm,k = (Δt/1000) · (Ik + Ik+1 )/2 if neuron m is an I-cell,   (22.7)
where Ek , Ek+1 , Ik , Ik+1 are the computed approximations for E(kΔt), E((k+1)Δt),
I(kΔt), I((k + 1)Δt). Here Δt denotes the time step size used for solving the
differential equations (22.1). (To generate our figures in this chapter, we used
Δt = 0.01 ms.) The denominators of 1000 in (22.7) are needed because we measure
time in ms, but frequency in Hz. In the k-th time step, we have the computer draw
uniformly distributed (pseudo-)random numbers Um,k ∈ (0, 1), m = 1, 2, .., NE +NI ,
and say that neuron m fires if and only if Um,k ≤ pm,k . The Um,k are independent
of each other.
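One step of this spike-generation procedure might look as follows (a sketch; the rate values passed in at the bottom are placeholders, not values computed from (22.1)):

```python
import numpy as np

rng = np.random.default_rng(2)
NE, NI = 80, 20
dt = 0.01                                  # ms

def spikes_in_step(E_k, E_k1, I_k, I_k1):
    # Bernoulli firing in [k*dt, (k+1)*dt] per eq. (22.7); rates in Hz,
    # time in ms, hence the factor 1/1000. I-cells carry indices 1..NI.
    p_E = (E_k + E_k1) / 2.0 * dt / 1000.0
    p_I = (I_k + I_k1) / 2.0 * dt / 1000.0
    p = np.concatenate([np.full(NI, p_I), np.full(NE, p_E)])
    return rng.random(NE + NI) < p         # boolean: which neurons fire

fired = spikes_in_step(50.0, 52.0, 30.0, 31.0)   # placeholder rate values
print(fired.shape)                               # (100,)
```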
Readers unfamiliar with the basic notions in probability theory used in the
preceding discussion can learn about those notions in Appendix C, or, more thor-
oughly, in any elementary book on probability theory.
Figure 22.4 shows what Fig. 22.2 really means: Bursts of activity of the E-cells
bring on bursts of activity of the I-cells, which in turn bring on a pause. Then the
cycle repeats.
It is interesting to modify model parameters and see what happens. For a
particularly interesting example, suppose that we reduce wEE . Figure 22.5 shows
that the limit cycle shrinks. In fact, when wEE drops below approximately 0.85,
the limit cycle disappears. There is a bifurcation here in which a limit cycle shrinks
and eventually collapses into a stable fixed point, or — if we let wEE increase rather
than decrease — a bifurcation in which a limit cycle is born at zero amplitude out
of a stable fixed point, then grows. You might expect it to be a supercritical Hopf
bifurcation, and indeed it is; see exercise 2.
[Figure 22.5: the limit cycle in the (E, I) plane for wEE = 1.5, 1.25, and 1; I plotted against E, both axes from 0 to 100.]
In fact, one can easily prove in much greater generality that time coarse-
grained Wilson-Cowan models without recurrent excitation do not allow periodic
solutions; see exercise 4. Unfortunately, however, this conclusion is misleading.
It suggests that recurrent excitation is needed to get oscillatory behavior in E-I
networks. This is most definitely not the case; see Chapter 30. A new interesting
question therefore arises: What exactly is it about the Wilson-Cowan model that
leads to this incorrect conclusion? We will leave this question unanswered here.
Exercises
22.1. Assume that f = f (x) and g = g(x) are continuously differentiable functions,
defined for x ∈ R, with f (x) ∈ [0, 100] and g(x) ∈ [0, 100] for all x ∈ R.
Assume that E and I satisfy eqs. (22.1), and 0 ≤ E(0), I(0) ≤ 100. Prove
that 0 ≤ E(t), I(t) ≤ 100 for all t ≥ 0.
22.2. (∗) Verify that the system defined by eqs. (22.1) with wIE = wII = 1, IE =
20, II = 0, τE = 5, τI = 10, and f and g defined as in eqs. (22.4) and (22.5),
undergoes a Hopf bifurcation at a critical value wEE,c ≈ 0.85. Hint: Compute
the fixed points and the eigenvalues of the Jacobi matrix at those points.
Show: As wEE drops below wEE,c , an unstable spiral becomes stable.
22.3. Assume that f = f (x) and g = g(x) are continuously differentiable, strictly
increasing functions, defined for x ∈ [0, 100], with values in [0, 100]. Consider
the Wilson-Cowan system
dE/dt = ( f(−wIE I + IE) − E )/τE,   dI/dt = ( g(wEI E + II) − I )/τI,   (22.8)
with wIE > 0 and wEI > 0. This models an E-I network without recurrent
excitation and without inhibition of I-cells by each other. Prove: This system
has exactly one fixed point, and the fixed point is stable.
22.4. (†) Assume that f = f (x) and g = g(x) are continuously differentiable func-
tions defined for x ∈ [0, 100], with f (x) ∈ [0, 100] and g(x) ∈ [0, 100] for all
x ∈ [0, 100]. Assume that g is increasing. (Although this assumption would
be natural for f as well, we don’t need it.) Consider the Wilson-Cowan
system
Entrainment, Synchronization, and Oscillations

Chapter 23

Entrainment by Excitatory Input Pulses
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 23) contains supplementary material, which is available to authorized users.
Figure 23.1. Figures 1A and 1B of [65]. The oscillatory traces are EEG
traces measured from three different locations on the scalp of an anesthetized rat.
Reproduced with publisher’s permission.
Figure 23.2. From Fig. 1 of [23]. LFP traces recorded from human amyg-
dala and hippocampus. Reproduced with publisher’s permission.
Isyn = −gsyn s(t) v,   ds/dt = q (1 − s)/τr − s/τd,
where Isyn is the synaptic input current density, v denotes the membrane potential
of the WB neuron, and q rises to 1 abruptly at times kT , k = 1, 2, 3, . . ., and decays
exponentially with a time constant τd,q otherwise. We use τr = 0.5 ms, τd = 2 ms,
23.1. Simulations for a WB Neuron 185
and set τd,q so that τpeak , the time it takes s to reach its maximum, starting from
q = 1 and s = 0, becomes equal to τr ; see Section 20.2.
For T = 50 ms and two different values of gsyn , the responses of the WB
neuron to the input pulse sequence are shown in Fig. 23.3. The figure shows v, and
indicates the times kT , k = 1, 2, 3, . . . (red dotted lines). For gsyn = 0.195, it also
shows (left lower panel) the time δ between each voltage spike and the most recent
previous input pulse. Phase-locking means convergence of δ. For g syn = 0.195, there
is 1:1 entrainment: The target neuron settles into regular firing with period T . For
gsyn = 0.14, the target responds to the input with subthreshold oscillations, but
does not fire. In fact, 1:1 entrainment is obtained for gsyn between approximately
0.19 and 0.9. (More complicated response patterns, with one input pulse triggering
more than one spike, can arise when gsyn is greater than 0.9.) No firing is triggered
for gsyn < 0.14.
[Figure 23.3: membrane potential v [mV] of the WB neuron vs. t [ms] for gsyn = 0.195 and gsyn = 0.14, with the input times kT marked; lower left panel: δ [ms] vs. spike #.]
We find numerically that, more generally, also for other choices of T, τr, τpeak,
τd, and I < Ic, there are values A > 0 and B > A such that there is 1:1 entrainment
when gsyn is greater than B, but not too large, and no firing at all when gsyn
is less than A. In the example discussed here, A ≈ 0.14 and B ≈ 0.19. For
gsyn between A and B, the target neuron responds to the inputs by firing, but
at a frequency slower than the input frequency. We will refer to this as sparse
entrainment. Figure 23.4 shows examples of n:1 entrainment with n > 1; that is,
the target neuron fires periodically with period nT . Figure 23.5 shows examples of
sparse entrainment where there is not n:1 entrainment for any integer n; we will
use the phrase irregular entrainment for these cases (but see exercise 1). We use
the ratio B/A as a measure of how robustly sparse entrainment is possible. For the
example that we discussed here, B/A ≈ 0.19/0.14 ≈ 1.36. This means that within
the window of sparse entrainment, input strengths differ by about 36%.
[Figure 23.4: examples of n:1 entrainment with n > 1; panels gsyn = 0.17 and gsyn = 0.15; v [mV] vs. t [ms], and δ [ms] vs. spike #.]
[Figure 23.5 panels: gsyn = 0.18 and gsyn = 0.145; v [mV] vs. t [ms], and δ [ms] vs. spike #.]
Figure 23.5. Similar to Figs. 23.3 and 23.4, but with choices of gsyn that
do not lead to n:1 entrainment for any n. [WB_NEURON_IRREGULAR]
Figure 23.6. Structure of the interval of sparse entrainment for our model.
Here n is defined as the average firing period between t = 5, 000 and t = 10, 000,
divided by T . Integer values of n are indicated in red. [WB_ENTRAINMENT_INTERVALS]
Looking closely at the graph in Fig. 23.6, one sees much more structure than
we have discussed. There are plateaus other than the ones indicated in red. For
example, there is a fairly pronounced plateau to the right of gsyn = 0.18, where
n ≡ 1.5, corresponding to 3:2 entrainment. Another plateau that you may be
able to make out, just barely, in the figure is near gsyn = 0.16, where n ≡ 5/2,
corresponding to 5:2 entrainment.16
16 The Cantor function is a standard example of a function that is differentiable with derivative zero
everywhere except on a set of measure zero, but not constant. We won't pursue this connection
here, though.
So if we write
F(α) = (α + ε) e^{−T/τ} if α + ε < 1,   F(α) = 0 if α + ε ≥ 1,   (23.1)
then
αk+1 = F(αk).   (23.2)
The behavior of the αk as k → ∞ can be understood geometrically, by thinking
about the graph of F ; see Appendix B. The graph of the function F defined by
eq. (23.1) is shown, for one choice of parameters, in Fig. 23.7. For this particular
case, one can easily see from the graph that αk → α∗ (the fixed point of F ) as
k → ∞ (exercise 4). Depending on the initial value of v, the first input pulse can
trigger a spike, but after that, there will be no spikes in response to the inputs any
more (exercise 5).
The discontinuity in Fig. 23.7 occurs at α = 1 − ε. The figure looks qualitatively
similar, and the conclusions are the same, as long as
and therefore we conclude, combining (23.3) and (23.4), that the target does not
respond to the input pulses by firing as long as e^{−T/τ} < 1 − ε, or equivalently,
ε < 1 − e^{−T/τ}.
It is not hard to convince yourself that even when ε = 1 − e^{−T/τ} precisely, there can
be at most one spike in response to the input sequence (exercise 6). So the precise
condition under which there is no firing response to the input sequence (with the
possible exception of a single initial spike, depending on the initial value of v) is
ε ≤ 1 − e^{−T/τ}.   (23.5)
Suppose now that 1 − e^{−T/τ} < ε < 1. Then the inputs are strong enough to make
the target respond by firing, but not strong enough to entrain the target 1:1.
Figure 23.8 shows the example T = 25, τ = 50, ε = 0.6. The figure also indicates
the iteration αk+1 = F(αk), starting with α1 = 0. As the figure shows, the sequence
{αk} will be periodic with period three: α1, α2, α3, α1, α2, α3, α1, … The LIF neuron
will therefore be entrained to the input pulses 3:1 — it will fire on every third input pulse.
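The iteration of the map F is short enough to sketch directly. The following Python fragment (a minimal sketch, using the parameter values of the example above) confirms the period-3 orbit:

```python
import math

# Minimal sketch of the map (23.1)-(23.2) for the LIF neuron driven by
# periodic pulses of strength eps (the ε of the text).
def F(alpha, eps, T, tau):
    """One inter-pulse step: decay if no spike is triggered, reset otherwise."""
    if alpha + eps < 1:
        return (alpha + eps) * math.exp(-T / tau)
    return 0.0

T, tau, eps = 25.0, 50.0, 0.6   # the example of Fig. 23.8
alpha = 0.0
orbit = [alpha]                  # orbit[0] = alpha_1 = 0
for _ in range(9):
    alpha = F(alpha, eps, T, tau)
    orbit.append(alpha)

print([round(a, 4) for a in orbit])
# alpha_4 = alpha_1 = 0: the orbit has period 3, i.e., 3:1 entrainment.
```

Every third step the reset branch of F is taken, which is exactly the statement that the neuron fires on every third input pulse.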
In general, suppose that α1 = 0; that is, the target is at rest until the first
input arrives at time T. Let us compute formulas for α2, α3, α4, …, assuming that
they are all smaller than 1 − ε:
α2 = ε e^{−T/τ},
α3 = (α2 + ε) e^{−T/τ} = ε e^{−T/τ} (e^{−T/τ} + 1),
α4 = (α3 + ε) e^{−T/τ} = ε e^{−T/τ} (e^{−2T/τ} + e^{−T/τ} + 1),
[Figure 23.7: graph of the function F defined by eq. (23.1), for one choice of parameters; F vs. α, with the fixed point α∗.]
[Figure 23.8: graph of F for T = 25, τ = 50, ε = 0.6, with the iterates α1, α2, α3 indicated on the horizontal axis.]
αk = ε e^{−T/τ} Σ_{j=0}^{k−2} e^{−jT/τ} = ε e^{−T/τ} · (1 − e^{−(k−1)T/τ})/(1 − e^{−T/τ}).   (23.6)
(We used the formula for a geometric sum in the last step.) There will be n:1
entrainment if and only if
αn−1 < 1 − ε and αn ≥ 1 − ε,   (23.7)
for in that case αn+1 = 0, so the sequence {αk} is periodic with period n; compare
also Fig. 23.8. Using (23.6), (23.7) becomes
ε e^{−T/τ} (1 − e^{−(n−2)T/τ})/(1 − e^{−T/τ}) < 1 − ε   and   ε e^{−T/τ} (1 − e^{−(n−1)T/τ})/(1 − e^{−T/τ}) ≥ 1 − ε.
Solving these inequalities for ε, we find
ε ∈ [ (1 − e^{−T/τ})/(1 − e^{−nT/τ}), (1 − e^{−T/τ})/(1 − e^{−(n−1)T/τ}) ).   (23.8)
We write r = e^{−T/τ}. Then the length of the interval on the right side of (23.8) is
(1 − r)/(1 − r^{n−1}) − (1 − r)/(1 − r^n) = ((1 − r^n)(1 − r) − (1 − r^{n−1})(1 − r))/((1 − r^{n−1})(1 − r^n)) ∼ r^{n−1}   (23.9)
as r → 0 (see exercise 7).
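The asymptotic statement in (23.9) is easy to check numerically. A quick sketch in Python (our own check, in the spirit of exercise 7):

```python
# Numerical check of (23.9): the interval length behaves like r^(n-1)
# as r -> 0.
def interval_length(r, n):
    return (1 - r) / (1 - r ** (n - 1)) - (1 - r) / (1 - r ** n)

n = 3
for r in (1e-1, 1e-2, 1e-3):
    print(r, interval_length(r, n) / r ** (n - 1))
# The printed ratios approach 1 as r decreases.
```

The exact ratio is (1 − r)²/((1 − r^{n−1})(1 − r^n)), which indeed tends to 1 as r → 0.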
We summarize our conclusion in the following proposition:
Proposition 23.1. For the model studied in this section, the behavior of the target
neuron depends on the pulse strength as follows:
ε ≥ 1: 1:1 entrainment.
(1 − e^{−T/τ})/(1 − e^{−nT/τ}) ≤ ε < (1 − e^{−T/τ})/(1 − e^{−(n−1)T/τ}), n ≥ 2: n:1 entrainment.
ε ≤ 1 − e^{−T/τ}: No firing response. (At most one action potential in response to
the inputs.)
The interval of sparse entrainment is (1 − e^{−T/τ}, 1), so it is of length e^{−T/τ}. The
interval of n:1 entrainment is of length ∼ e^{−(n−1)T/τ} in the limit as T/τ → ∞.
This result suggests that n:1 entrainment with n > 1 is an important phe-
nomenon only if the input pulses arrive at high frequency (T small) or the target
neuron is not very leaky (τ not very small). This is what one should expect: When
T /τ is large, the neuron has nearly “forgotten” the previous input pulse when the
next one arrives, and therefore 1:1 entrainment or no entrainment at all should
essentially be the only possibilities.
We conclude with an example of 10:1 entrainment. Suppose that T = 20 ms,
τ = 30 ms, and n = 10. The interval of 10:1 entrainment is then
[ (1 − e^{−T/τ})/(1 − e^{−nT/τ}), (1 − e^{−T/τ})/(1 − e^{−(n−1)T/τ}) ) = [0.48720…, 0.48779…).
Figure 23.9 shows a simulation with ε = 0.48750.
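The endpoints of this window follow directly from (23.8); a short Python sketch reproduces the quoted digits:

```python
import math

# The 10:1 entrainment window of the closing example, computed from (23.8)
# with T = 20 ms, tau = 30 ms, n = 10.
T, tau, n = 20.0, 30.0, 10
r = math.exp(-T / tau)
lo = (1 - r) / (1 - r ** n)
hi = (1 - r) / (1 - r ** (n - 1))
print(f"[{lo:.5f}, {hi:.5f})")
assert lo < 0.48750 < hi   # the eps used in Fig. 23.9 lies inside the window
```

Note how narrow the window is: a change of ε by less than 0.1% destroys 10:1 entrainment, consistent with the length estimate ∼ r^{n−1} above.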
[Figure 23.9: the membrane variable v of the LIF neuron vs. t [ms], 0 ≤ t ≤ 1000, for the 10:1 entrainment example.]
Exercises
23.1. (∗) Run the two examples of Fig. 23.5 for much longer times, to show that
there is in fact a considerable degree of regularity in the behavior.
23.2. (∗) Starting with the program generating Fig. 23.3, generate a code that simu-
lates 100 independent WB neurons, driven by input pulses as in Section 23.1,
with the i-th neuron driven at a strength g syn = g syn,i chosen at random with
uniform distribution in the interval [0.14, 0.19] (approximately the interval of
sparse entrainment).17 Choose the g syn,i independently of each other, but for
a given i, fix the value for all times. Note that many of the values g syn,i will
result in irregular entrainment, not n:1 entrainment with an integer n.
After seeing the result, you may be tempted to conclude that clean entrain-
ment of a population by a sequence of input pulses at 20 Hz (T = 50 ms) is
unlikely unless many neurons in the target population receive pulses strong
enough for 1:1 entrainment.18
23.3. (∗) In Section 23.1, it is interesting to think about broader input pulses.
They might model a large network of neurons in approximate (but not tight)
synchrony delivering input to a target neuron. We will study whether sparse
and irregular entrainment are more or less common with broad input pulses
than with narrow ones.
(a) Start with the code that generates Fig. 23.3. Change the kinetics of
the input pulses as follows: τr = τpeak = τd = 3 ms. By trial and error,
approximate A and B for this case. Is B/A larger or smaller than for the
example discussed in Section 23.1?
(b) Start with the code generating Fig. 23.6, and generate the analogous figure
for τr = τpeak = τd = 3 ms. (Note: You must do (a) first, so that you know
which interval to vary g syn in.) Is irregular entrainment more or less common
when the input pulses are broader?
17 Matlab command for generating a column vector of 100 independent, uniformly distributed random numbers in [0.14, 0.19]: 0.14 + 0.05*rand(100,1).
23.4. Explain why for the function shown in Fig. 23.7, αk → α∗ as k → ∞, where
α∗ denotes the fixed point of F .
23.5. Explain why in the case of Fig. 23.7, there can be no voltage spike in response
to the inputs at times 2T , 3T , 4T , etc. (There could be one in response to
the input at time T , depending on initialization.)
23.6. Assume ε = 1 − e^{−T/τ} in the model of Section 23.2. Show that the target neuron
can fire at most once in response to the input sequence.
23.7. Explain the asymptotic statement in (23.9).
Chapter 24

Synchronization by Fast Recurrent Excitation
In this chapter, we begin our study of synchronization via mutual synaptic inter-
actions. The simplest example is a population of N identical neurons, driven so
strongly that by themselves they would fire periodically, coupled to each other by
excitatory synapses. For simplicity, we assume for now that all neurons receive the
same drive, all synaptic strengths are equal, and the coupling is all to all, i.e., any
two neurons are synaptically coupled with each other. (We do not include autapses
here.) We ask whether the neurons will synchronize their firing, i.e., whether they
will all fire at (nearly) the same times after a while.
We think about this question intuitively first. Suppose that the neurons are
already in approximate, but imperfect synchrony. The neurons that are ahead fire
earlier, and accelerate the ones that are behind. This promotes synchronization.
On the other hand, the neurons that are behind, when they fire, accelerate the ones
that are ahead, and that opposes synchronization. Whether or not there will be
synchronization is therefore not at all intuitively obvious.
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 24) contains supplementary material, which is available to authorized users.
potential is ϕT , with 0 ≤ ϕ < 1, we call ϕ the current phase of the neuron. Thus if
there is an action potential at time t0 , then
ϕ = (t − t0)/T mod 1.
(In general, “x mod 1” means the number in the interval [0, 1) that differs from x
by an integer.)
Figure 24.1 is a spike rastergram illustrating one possible notion of asynchrony.
Here N = 30 RTM neurons, not connected to each other, all with I = 0.3 and
therefore all firing at the same period T , have been initialized so that the ith neuron
fires at times
((i − 1/2)/N) T + kT,   k = 0, 1, 2, …,   i = 1, 2, …, N.
One says that the neurons fire in a splay state. At any time, the phases of the
N neurons occupy a grid of points in the interval [0, 1) with spacing 1/N between
the grid points. The splay state is the extreme opposite of the synchronous state,
obtained in Fig. 24.2 by initializing all 30 neurons in the same way.
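The splay-state phase grid is easy to verify in code. A minimal Python sketch (our illustration; T = 50 ms is our stand-in, roughly matching the ~20 Hz firing of Fig. 24.1, and N = 30 as in the figures):

```python
import numpy as np

# Splay-state initialization: neuron i fires at ((i - 1/2)/N) T + k T.
# At any time, the N phases form a grid with spacing 1/N.
N, T = 30, 50.0
i = np.arange(1, N + 1)
first_spikes = (i - 0.5) / N * T           # first spike time of each neuron

t = 123.4                                  # any observation time
phases = np.sort(((t - first_spikes) / T) % 1.0)
gaps = np.diff(phases)
print(gaps.min(), gaps.max())              # both approximately 1/N
```

Whatever observation time t is chosen, the sorted phases are equally spaced, which is the defining property of the splay state.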
[Figure 24.1: spike rastergram of the 30 uncoupled RTM neurons initialized in splay state; neuron # vs. t [ms].]
[Figure 24.2: spike rastergram obtained by initializing all 30 neurons identically; neuron # vs. t [ms].]
In most of our simulations, we don’t initialize in splay state. The reason is that
the notion of splay state has clear meaning only when all neurons fire at the same
period, not when they don’t. Suppose, for instance, that the external drives are
heterogeneous. The word heterogeneous simply means that different neurons receive
different external drives. We denote the intrinsic period of the i-th neuron (the
period with which the neuron fires if it receives no inputs from other neurons) by
Ti . By asynchronous initialization, we will mean that we initialize the i-th neuron
at a point on its limit cycle with a random, uniformly distributed phase in [0, 1).
Our notion of asynchronous initialization disregards synaptic connections; we think
of them as being turned on after the neurons have been initialized.
Only intrinsically firing model neurons can be asynchronously initialized in
the sense that we have described. Neurons with subthreshold drives will typically
be initialized at or near their stable equilibrium in our simulations.
24.2 Simulations
We begin by initializing 30 RTM neurons in splay state, then couple them by excita-
tory synapses from time t = 0 on. The model synapses are defined as in Section 20.2,
with vrev = 0, τr = τpeak = 0.5 ms, τd = 2 ms; thus the model synapses are reminis-
cent of AMPA receptor-mediated synapses. For each synapse, we use gsyn = 0.0075;
we will discuss this choice below. Figure 24.3 shows the result of a simulation of
this network.
[Figure 24.3: spike rastergram of the 30 RTM neurons coupled by excitatory synapses from t = 0 on; neuron # vs. t [ms].]
The frequency with which each neuron fires is about 20 Hz when there is no
synaptic coupling (Fig. 24.1), and nearly 30 Hz with the coupling (Fig. 24.3). Some-
what greater values of g syn yield much greater and perhaps biologically unrealistic
frequencies; gsyn = 0.015, for instance, yields 103 Hz already.
Note that the neurons are in fact much closer to synchrony towards the end
of the simulation of Fig. 24.3 than at the beginning; see also exercise 1. One might
suspect that perfect synchrony will eventually be reached. However, Fig. 24.4 shows
results of the last 200 ms of a 10,000 ms simulation; even after such a large amount
of simulated time, tight synchrony has not been reached.
We repeat similar experiments with just N = 2 neurons, to see whether in
this simple case at least, tight synchrony is reached when the neurons are close to it
already, i.e., whether synchrony is stable. In the preceding simulations, each neuron
received 29 inputs with gsyn = 0.0075. Now, since N = 2, each neuron receives only
one input; we therefore define gsyn = 29 × 0.0075 = 0.2175. The drive to each of
the two neurons is I = 0.3, as before. We first let the two neurons fire five times in
synchrony. At the time of the fifth spike, we introduce a very slight perturbation
(see the caption of the figure). In Fig. 24.5, we show the difference between the
spike times of the second neuron and the spike times of the first, as a function of
spike number. Once synchrony is perturbed, the two neurons do not return to it,
Figure 24.4. Simulation of Fig. 24.3, continued for a much longer amount
of time. Only the last 200 ms of simulated time are shown. [RTM_E_TO_E_NETWORK_2]
Figure 24.5. Difference between spike times of two RTM neurons, cou-
pled with AMPA-like synapses. The first five spikes are simultaneous because the
two neurons are identical, and have been initialized identically. At the time of
the fifth spike, the membrane potential of the first neuron is lowered by 10−5 mV.
This causes the two neurons to lock at a small but non-zero phase difference.
[RTM_TWO_CELL_NETWORK]
Figure 24.6. Similar to Fig. 24.3, but with heterogeneity in drives and
synaptic strengths, and random asynchronous initialization.
[RTM_E_TO_E_HETEROGENEOUS]
The networks that we have considered so far ought to be ideal for synchroniza-
tion: All neurons receive the same drive, and all synapses are of the same strength.
For more biological realism, and to test the robustness of the approximate synchro-
nization that we see in Figs. 24.3 and 24.4, we now consider heterogeneous external
drives. We denote the drive to the i-th neuron by Ii , and let it be random, uni-
formly distributed between 0.25 and 0.35. Further, we assume that the strength
of the synapse from neuron i to neuron j is a random number gsyn,ij , uniformly
distributed between 0.00625 and 0.00875, so the synaptic strengths are heteroge-
neous, too. All of these random quantities, the Ii and the g syn,ij , are taken to be
independent of each other. They are chosen once and for all, before the simulation
begins; they don’t vary with time. The “noise” that we have introduced by making
the Ii and g syn,ij random is therefore sometimes called frozen noise.
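Frozen noise is a one-time draw, not a process that evolves during the simulation. A minimal Python sketch of the setup described above (names are ours; the distributions and ranges are those of the text):

```python
import numpy as np

# "Frozen noise": heterogeneous drives and synaptic strengths drawn once,
# before the simulation begins, and never redrawn.
rng = np.random.default_rng(1)
N = 30

I_ext = rng.uniform(0.25, 0.35, size=N)             # drive I_i to neuron i
g_syn = rng.uniform(0.00625, 0.00875, size=(N, N))  # strength of synapse i -> j
np.fill_diagonal(g_syn, 0.0)                        # no autapses

# These arrays are now fixed; a simulation would read them at every time
# step without modifying them.
print(I_ext.mean(), g_syn[g_syn > 0].mean())
```

Fixing the random number generator's seed makes the frozen noise reproducible across runs, which is convenient when comparing parameter changes.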
Figure 24.6 shows the transition from asynchronous initialization, in the sense
explained in Section 24.1, to approximate synchrony resulting from the synaptic
coupling. (Roughly the same degree of synchrony seen at time t = 200 in Fig. 24.6
is still seen at time t = 10, 000.)
In general, synaptic inhibition is thought to create neuronal synchrony more
effectively than excitation [167]. In later chapters, we will discuss results supporting
this idea. However, the results presented here, and those in Chapters 26 and 27,
confirm that excitatory synaptic connections can reasonably robustly create ap-
proximate synchrony as well. In fact, oscillations in rat neocortex that arise when
blocking inhibition and rely on AMPA receptor-mediated synaptic excitation were
reported in [26].
Exercises
24.1. (∗) Sometimes the average, v̄, of all computed membrane potentials is taken
to be a (probably poor) computational analogue of the LFP. Plot v̄ as a
function of time, for a network of 30 RTM neurons that are uncoupled and
in splay state for the first 100 ms of simulated time, then connected as in
Fig. 24.3 for the remaining 200 ms of time. (Use the code generating Fig. 24.3
as a starting point.)
The plot that you will obtain demonstrates resoundingly that the synaptic
interaction creates a network oscillation.
24.2. (∗) Reproduce Fig. 24.3 with the i-th neuron initialized at phase ϕi ∈ [0, 1),
where the ϕi are random, uniformly distributed, and independent of each
other.
24.3. (∗) Reproduce Fig. 24.3 with g syn = 0.015 (twice the value used in Fig. 24.3).
You will see that there is no synchronization any more. Run the simula-
tion to t = 10, 000 (as in Fig. 24.4, but with gsyn = 0.015) and see whether
synchronization arises after a long time.
24.4. (∗) The AMPA receptor-dependent oscillations in [26] were at 10 Hz. In the
code generating Fig. 24.3, can you reduce I to a point where the oscillations
that emerge are at 10 Hz?
24.5. (∗) If you use τr = τpeak = 10ms, τd = 200ms (values approximately appropri-
ate for NMDA receptor-mediated synapses) in the code generating Fig. 24.3,
do you still get approximate synchronization? You will need to reduce g syn
by a lot, to avoid very high-frequency, unrealistic-looking activity.
Chapter 25

Phase Response Curves (PRCs)
Phase response curves (PRCs) describe how neurons respond to brief, transient
input pulses. They are of fundamental importance in the theory of neuronal syn-
chronization, and can also be measured experimentally.
t + (1 − ϕ)T − t∗ .
The next action potential is delayed if this quantity is negative. Since phase is time
divided by T , the phase is advanced by
g(ϕ) = 1 − ϕ − (t∗ − t)/T.
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 25) contains supplementary material, which is available to authorized users.
200 Chapter 25. Phase Response Curves (PRCs)
We call g the phase response function. From the definition, it follows that
g(ϕ) ≤ 1 − ϕ (25.1)
for all ϕ ∈ [0, 1), with strict inequality unless the input causes a spike instanta-
neously. The graph of g is called the phase response curve (PRC).
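As a concrete illustration of this definition, consider a model where everything is available in closed form. The following Python sketch computes the PRC of a normalized leaky integrate-and-fire neuron receiving an instantaneous charge injection (this toy model is our own choice for illustration, not the RTM neuron used in the figures of this chapter):

```python
import math

# PRC sketch for a normalized LIF neuron dv/dt = (I - v)/tau, threshold 1,
# reset 0, receiving an instantaneous charge injection dv at phase phi.
tau, I, dv = 10.0, 1.3, 0.1
T = tau * math.log(I / (I - 1.0))     # unperturbed period

def g(phi):
    v = I * (1 - math.exp(-phi * T / tau))        # v just before the input
    v += dv                                        # instantaneous injection
    if v >= 1.0:                                   # input triggers a spike now
        return 1.0 - phi
    t_rest = tau * math.log((I - v) / (I - 1.0))   # remaining time to threshold
    return 1.0 - phi - t_rest / T                  # eq. defining g above

for phi in (0.1, 0.5, 0.9):
    print(round(g(phi), 4))
```

For this model g is always positive (the depolarizing input advances the next spike), and g(ϕ) = 1 − ϕ exactly when the input triggers a spike instantaneously, in agreement with (25.1).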
Figure 25.1 illustrates the phase shifts caused by input pulses. The input
arrival time is indicated by a vertical thin dashed line in each of the two panels. In
the upper panel, the input comes early in the cycle, and advances the subsequent
spikes only slightly. In the lower panel, the input comes in the middle of the cycle,
and advances the subsequent spikes much more.
[Figure 25.1: v [mV] vs. t [ms] in two panels; an input pulse arriving early in the cycle (top) and in the middle of the cycle (bottom), with the arrival time marked by a thin dashed vertical line.]
By definition, the phase response function tells us nothing about the timing
of later action potentials, following the one at time t∗ . In applications of the PRC,
however, it is typically assumed that those action potentials occur at times t∗ + kT ,
k = 1, 2, 3, . . .. This is often close to the truth, provided that the input pulse is
brief, and the phase ϕ of the neuron at the input arrival time is not too close to 1;
see exercises 1 and 2, and also Fig. 25.1.
A first example of a PRC is shown in Fig. 25.2; see the figure caption for
details. The figure tells us that the phase of an RTM neuron is always advanced
by the input pulse, but more so if the input pulse comes in the middle of the cycle
(ϕ ≈ 0.5) than when it comes at the beginning or end of the cycle (ϕ ≈ 0 or ϕ ≈ 1).
A feature of the curve in Fig. 25.2 that may seem surprising at first sight is
that g(0) ≠ g(1). “Phase 0” means the same as “phase 1” — both refer to the time
at which the neuron fires. So why is g(0) not the same as g(1)? An input pulse
that arrives at phase 0 or at phase 1 arrives at the time when the neuron spikes,
i.e., crosses −20 mV from above. However, when computing g(0), we consider how
the next spike is affected, whereas when computing g(1), we consider how the spike
that occurs at the time of input arrival is affected. Clearly, that spike cannot be
affected at all, so we must always have g(1) = 0.
25.1. Input Pulses of Positive Strength and Duration 201
[Figure 25.2: the PRC g of the RTM neuron, plotted as a function of ϕ ∈ [0, 1].]
f(ϕ) = ϕ + g(ϕ).   (25.2)
So f(ϕ) is the phase that the neuron is reset to if the input finds it at phase ϕ. I will
call f the interaction function, since it describes the outcome of synaptic interactions.
The graph of the interaction function corresponding to the PRC in Fig. 25.2 is shown
in Fig. 25.4.19 Equation (25.1) is equivalent to
f (ϕ) ≤ 1. (25.3)
[Figure 25.4: graph of the interaction function f corresponding to the PRC in Fig. 25.2.]
Figure 25.5. Three PRCs similar to that in Fig. 25.2, but with much
weaker inputs. Notice that the scaling of the vertical axis is not the same in
the three plots, and in none of these three plots is it the same as in Fig. 25.2.
[RTM_PRC_THREE_WEAK_ONES]
Figure 25.5 shows three PRCs similar to the one in Fig. 25.2, but with smaller
values of g syn . The three panels of Fig. 25.5 all look very similar to Fig. 25.2, except
for the scaling of the vertical axis. The figure suggests that the phase response
19 The graph of f is often called the phase transition curve. I don’t use this terminology to
[Figure: g/gsyn as a function of ϕ.]
function is, for small gsyn, proportional to gsyn. Writing g = g(ϕ, gsyn) to remind
ourselves of the fact that g depends on gsyn, it appears that
ĝ(ϕ) = lim_{gsyn→0} g(ϕ, gsyn)/gsyn   (25.4)
exists.
The graph of ĝ is commonly called the infinitesimal PRC, although perhaps it should
really be called the doubly infinitesimal PRC — both the input strength and the
input duration are infinitesimal now. Figure 25.8 shows an example.
Figure 25.8. The infinitesimal PRC of the RTM neuron with I = 0.3,
approximated using Δv = 0.01 (solid black curve) and Δv = 0.001 (red dots).
[RTM_PRC_SHORT_AND_WEAK]
Panel B of Fig. 25.3 is a reproduction of [52, Fig. 1E], which shows an ap-
proximate infinitesimal PRC derived from [52, Fig. 1C] (reproduced in panel A of
Fig. 25.3).
The PRCs of the RTM neuron shown in the preceding section are of type 1,
and so are the PRC of the WB neuron in the left panel of Fig. 25.9, and that of
the Erisir neuron in the right panel. As stated earlier, the PRC of the classical
Hodgkin-Huxley neuron shown in the middle panel of Fig. 25.9 is of type 2. It is
often the case that neurons of bifurcation type 1 have PRCs of type 1, and neurons
of bifurcation type 2 have PRCs of type 2.
The PRC of the Erisir neuron in the right panel of Fig. 25.9 is an exception:
The neuron is of bifurcation type 2, but the PRC in the right panel of Fig. 25.9 is
positive. However, for different input pulses, the Erisir neuron does have type 2
PRCs; see [17]. For an example of a model of bifurcation type 1 with a type 2 PRC,
see [48].
[Figure 25.9: PRCs of the WB neuron (left), the classical Hodgkin-Huxley neuron (middle), and the Erisir neuron (right), each as a function of ϕ.]
Figure 25.10. PRC of the WB neuron for a brief inhibitory input pulse:
I = 1, τr = τpeak = 0.5, τd = 2, gsyn = 0.5, vrev = −70 mV.
[WB_PRC_INHIBITORY_PULSE]
This is equivalent to
dv/dt = −v(1 − v)/τm + I,   (8.1)
with
v = 1/2 + (1/2) tan(θ/2).   (8.7)
For later use, we solve (8.7) for θ: θ = 2 arctan(2v − 1).
The solution of (8.1) with an initial condition v(0) = v0 ∈ R is given by eq. (8.4),
which we re-write, using (8.11), as follows:
v(t) = 1/2 + (πτm/T) tan( (π/T) t + arctan( (v0 − 1/2) T/(πτm) ) ).   (25.7)
The special case v0 = −∞ of this formula is
v = 1/2 + (πτm/T) tan( πϕ − π/2 ),   (25.8)
25.4. The PRC of a Theta Neuron with an Infinitesimally Brief Input 207
where ϕ = t/T is the phase. For later use, we solve this equation for ϕ:
ϕ = (1/π) arctan( (T/(πτm)) (v − 1/2) ) + 1/2.   (25.9)
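It is worth checking that (25.9) really inverts (25.8). A short Python sketch (τm and T are arbitrary positive test values of ours):

```python
import math

# Check that (25.9) inverts (25.8): mapping a phase phi to v and back
# recovers phi for phi in (0, 1).
tau_m, T = 2.0, 14.0

def v_of_phi(phi):              # eq. (25.8)
    return 0.5 + (math.pi * tau_m / T) * math.tan(math.pi * phi - math.pi / 2)

def phi_of_v(v):                # eq. (25.9)
    return math.atan(T / (math.pi * tau_m) * (v - 0.5)) / math.pi + 0.5

for phi in (0.1, 0.25, 0.5, 0.75, 0.9):
    print(phi, round(phi_of_v(v_of_phi(phi)), 12))
```

The roundtrip is exact because πϕ − π/2 lies in (−π/2, π/2) for ϕ ∈ (0, 1), where arctan is the inverse of tan.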
θ = θ0 = 2 arctan(2v0 − 1)
(The second of these two equations follows from eq. (8.7).) Thus
θ(t∗ + 0) = 2 arctan( tan( θ(t∗ − 0)/2 ) + 2Δv ).   (25.10)
This is what “instantaneous charge injection” means for the theta neuron.
Next we work out what the instantaneous charge injection means in terms of
ϕ. First, v = v0 means, by eq. (25.9),
ϕ = ϕ0 = (1/π) arctan( (T/(πτm)) (v0 − 1/2) ) + 1/2.   (25.11)
This is the phase response function for the theta neuron when the input is an
instantaneous charge injection. It does not depend on the parameters Δv, τm , and
I individually, but only on
ε = Δv/√(τm I − 1/4).   (25.14)
Figure 25.11 shows an example for which ε = 1.
[Figure 25.11: the PRC g of the theta neuron, for instantaneous charge injection with ε = 1, as a function of ϕ.]
[Figure 25.12: the infinitesimal PRC ĝ of the theta neuron as a function of ϕ.]
From eq. (25.13), we obtain a formula for the infinitesimal PRC, which is the
partial derivative of g with respect to Δv, at Δv = 0:
ĝ(ϕ) = 1/(π √(τm I − 1/4)) · 1/(1 + tan²(π(ϕ − 1/2))).   (25.15)
The infinitesimal PRC is symmetric with respect to ϕ = 1/2, whereas the non-
infinitesimal PRC reaches its maximum to the left of ϕ = 1/2; see Fig. 25.12.
The PRC given by eq. (25.13), graphed in Fig. 25.11, has a hidden symme-
try property, which becomes apparent when plotting the graph of the interaction
function f ; see Fig. 25.13. The figure suggests that the graph of f becomes the
graph of an even function if it is rotated by 45° in the clockwise direction, then
shifted horizontally to make the origin coincide with the center point of the dashed
line in Fig. 25.13. I will call this property diagonal symmetry. In Section 26.4,
we will prove that this property implies that for two theta neurons, coupled via
instantaneous charge injections, any phase difference is neutrally stable.
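The diagonal symmetry can be checked numerically before reading the proof. In the sketch below we use the form f(ϕ) = (1/π) arctan( tan(π(ϕ − 1/2)) + ε ) + 1/2, which is our reading of eq. (25.13) (it follows from composing (25.8), the charge injection v → v + Δv, and (25.9)); the symmetry then says that the perpendicular distance u from a diagonal point (ϕ, ϕ) to the graph of f satisfies u(ϕ) = u(1 − ϕ):

```python
import math

eps = 1.0   # the value of Fig. 25.11

def f(phi):
    # Interaction function of the theta neuron with instantaneous charge
    # injection (our reading of eq. (25.13)).
    return math.atan(math.tan(math.pi * (phi - 0.5)) + eps) / math.pi + 0.5

def u_of(phi, iters=200):
    # Solve phi + u = f(phi - u) for u by bisection; the left side is
    # increasing and the right side decreasing in u, so the root is unique.
    lo, hi = 0.0, min(phi, 1.0 - phi)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(phi - mid) - (phi + mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(u_of(0.3), u_of(0.7))   # these agree, reflecting the evenness of f-tilde
```

Agreement to machine precision for every pair (ϕ, 1 − ϕ) is exactly the statement that f̃ is even, proved below.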
[Figure 25.13: graph of the interaction function f of the theta neuron, with the diagonal of the unit square indicated by a dashed line.]
Proof. Let s label locations along the dashed diagonal in Fig. 25.13, with s = 0
corresponding to the center of the diagonal (that is, the center of the square), and
s = ±1/√2 corresponding to the vertices (0, 0) and (1, 1) of the square. Thus
points (ϕ, ϕ) on the diagonal can be parametrized either by ϕ ∈ [0, 1], or by s ∈
[−1/√2, 1/√2]. The relation between the two coordinates is
ϕ = s/√2 + 1/2.   (25.16)
To convince yourself, just check the cases s = −1/√2 and s = 1/√2, and note that
the relation between ϕ and s is linear. For later use, we solve eq. (25.16) for s:
s = √2 (ϕ − 1/2).   (25.17)
For s ∈ [−1/√2, 1/√2], we define f̃(s) as in Fig. 25.14. See exercise 7 for the proof
that the picture looks qualitatively similar to that in Fig. 25.14 for all values of
ε > 0, and therefore f̃ is well-defined. Our claim is that f̃ is an even function of s.
Let ϕ ∈ [0, 1], and consider the point (ϕ, ϕ) on the diagonal. To compute f˜(s)
(with s given by eq. (25.17)), we must compute how far we have to move in the
direction perpendicular to the diagonal to get to the graph of f , starting at (ϕ, ϕ).
In other words, we should find u ≥ 0 so that the point (ϕ − u, ϕ + u) lies on the
graph of f , i.e., so that
ϕ + u = f (ϕ − u). (25.18)
f̃(s) is the length of the red line segment in Fig. 25.14, which is √2 u:

f̃(s) = √2 u.   (25.19)

Before continuing, we summarize what we have proved, combining (25.16), (25.18), and (25.19):

f̃(s) = √2 u  ⟺  s/√2 + 1/2 + u = f(s/√2 + 1/2 − u).   (25.20)
[Figure 25.14: the definition of f̃(s) from the graph of f and the diagonal.]
Written out, using eq. (25.13), eq. (25.18) states

ϕ + u = (1/π) arctan( tan(π(ϕ − u) − π/2) + ε ) + 1/2.

Now we use ϕ = s/√2 + 1/2, and simplify a bit:

s/√2 + u = (1/π) arctan( tan(π(s/√2 − u)) + ε ).

We multiply by π, and take the tangent on both sides:

tan(π(s/√2 + u)) = tan(π(s/√2 − u)) + ε.   (25.21)

If we replace s by −s in this equation, we obtain

tan(π(−s/√2 + u)) = tan(π(−s/√2 − u)) + ε.   (25.22)

Since the tangent is odd, eq. (25.22) is equivalent to eq. (25.21). The same value of u therefore solves the defining equation for s and for −s, and consequently f̃(−s) = √2 u = f̃(s): f̃ is even.
Surprisingly, the infinitesimal PRC, given by eq. (25.15), does not have the same hidden symmetry property; see Proposition 26.6.
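The diagonal symmetry just proved can also be confirmed numerically: solve eq. (25.18) for u by bisection and compare f̃(s) with f̃(−s). The following is a sketch in Python, not from the text; the value ε = 0.8 and the function names are my own choices:

```python
import math

eps = 0.8  # assumed value of the parameter in eq. (25.13); any eps > 0 works

def f(phi):
    """Interaction function of the theta neuron, f(phi) = phi + g(phi)."""
    if phi <= 0.0:
        return 0.0
    if phi >= 1.0:
        return 1.0
    return math.atan(math.tan(math.pi * (phi - 0.5)) + eps) / math.pi + 0.5

def ftilde(s):
    """Perpendicular distance from the point (phi, phi) on the diagonal
    to the graph of f; see eqs. (25.16)-(25.19)."""
    phi = s / math.sqrt(2) + 0.5
    lo, hi = 0.0, min(phi, 1.0 - phi)    # u lies in this range
    for _ in range(80):                   # bisection on phi + u = f(phi - u)
        u = 0.5 * (lo + hi)
        if f(phi - u) - (phi + u) > 0.0:
            lo = u
        else:
            hi = u
    return math.sqrt(2) * u

for s in (0.1, 0.3, 0.5):
    assert abs(ftilde(s) - ftilde(-s)) < 1e-9   # f~ is even: diagonal symmetry
```

The bisection works because f(ϕ − u) − (ϕ + u) is strictly decreasing in u, positive at u = 0, and negative at u = min(ϕ, 1 − ϕ).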
Exercises
25.1. (∗) As mentioned in Section 25.1, we will assume in future chapters that following the arrival of the input pulse, there is a first spike at a time t∗ significantly affected by the input pulse, but perfectly periodic firing with inter-spike interval T resumes immediately after that spike, so that the subsequent spikes occur at t∗ + T, t∗ + 2T, . . . In other words, we will assume that there is no memory of the input lasting past time t∗.
To test the validity of this assumption, we initialize an RTM neuron with I = 1 at a phase ϕ0 ∈ [0, 1), and give it an excitatory synaptic input pulse at time 0 (that is, we set q(0) = 1). The pulse is characterized by four parameters: τr, τpeak, τd, and g_syn, which we will vary. (We always take vrev to be zero here.) Denoting the spike times by t∗ = t1, t2, t3, . . . , we set Tk = tk+1 − tk. If our assumption is valid, the Tk should all be very close to T. In any case, T10 is very close to T. (You can check this numerically if you want.) We therefore compute the relative differences

(T10 − T1)/T10 and (T10 − T2)/T10,

hoping that they will be close to zero. Multiplying these numbers by 100%, we get percentage errors:

E1 = (T10 − T1)/T10 × 100%,  E2 = (T10 − T2)/T10 × 100%.

    τr     τpeak   τd    g_syn   ϕ0     E1    E2
    0.5    0.5     2     0.05    0.1
    0.5    0.5     2     0.05    0.5
    0.5    0.5     2     0.05    0.9
    0.5    0.5     2     0.1     0.1
    0.5    0.5     2     0.1     0.5
    0.5    0.5     2     0.1     0.9
    0.5    0.5     2     0.2     0.1
    0.5    0.5     2     0.2     0.5
    0.5    0.5     2     0.2     0.9
    1.0    1.0     5     0.05    0.1
    1.0    1.0     5     0.05    0.9
    1.0    1.0     5     0.1     0.1
    1.0    1.0     5     0.1     0.9
25.4. Recall Fig. 14.3. For I > Ic , but I not too large, the stable limit cycle would
still be present, and it would still encircle a fixed point, but that fixed point
would be unstable. (You may take this for granted, or you could show it
numerically. It is not shown in Fig. 14.3.)
Explain: For the classical Hodgkin-Huxley neuron, reduced to two dimen-
sions, when I > Ic but I is not too large, there is a value of Δv > 0 for which
the minimum of the phase response function g, for an input that causes v to
rise by Δv instantaneously, equals −∞.
25.5. Show that for the theta neuron, g(ϕ), given by eq. (25.13), converges to 1 − ϕ
as Δv → ∞.
25.6. (∗) (†) Plot a picture like that in the middle panel of Fig. 25.9, but with g syn =
0.30. What you will see will probably surprise you. Can you understand the
picture? Plotting (v(t), n(t)) for selected values of ϕ is suggestive.
25.7. Let f be the interaction function of the theta neuron subject to an instanta-
neous charge injection. (a) Show that f is strictly increasing. (b) Show that
f (ϕ) ≥ ϕ for all ϕ ∈ [0, 1], with strict inequality for ϕ ∈ (0, 1). (c) Explain
why Fig. 25.14 defines f˜ uniquely.
Chapter 26
Synchronization of Two
Pulse-Coupled Oscillators
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 26) contains supplementary material, which is available to authorized users.
This eliminates possible ambiguity about what happens when both neurons fire at
the same time — they don’t influence each other at all in that case. We also assume
g′(ϕ) ≥ −1 for ϕ ∈ [0, 1], and g′(ϕ) > −1 for ϕ ∈ (0, 1).   (26.2)

Assumptions (26.1) and (26.2) imply

−ϕ ≤ g(ϕ) ≤ 1 − ϕ for ϕ ∈ [0, 1], and −ϕ < g(ϕ) < 1 − ϕ for ϕ ∈ (0, 1).   (26.3)

For the interaction function f = f(ϕ) = ϕ + g(ϕ), (26.1) and (26.2) imply (see exercise 1)

f(0) = 0, f(1) = 1, f′(ϕ) ≥ 0 for ϕ ∈ [0, 1], f′(ϕ) > 0 for ϕ ∈ (0, 1).   (26.4)
the PRC looks somewhat similar to that of the classical Hodgkin-Huxley neuron in Fig. 25.9. A type 2 PRC for an excitatory input pulse usually looks qualitatively similar to that in Fig. 26.3. Figure 26.4 shows one example of a family of phase response functions, parametrized by ε ∈ (0, 1], and given by the rather bizarre formula

g(ϕ) = 2(1 − ϕ) − (1 + ε − √((1 + ε)² − 4ε(1 − ϕ)))/ε.   (26.5)
[Figure panels: graphs of g, F, and G as functions of ϕ.]
What is natural about this formula will be explained in exercise 3. The PRC
has qualitative resemblance with the PRC of the WB neuron in Fig. 25.9. In
Fig. 26.4, we also show the interaction function f . We observe that it is diago-
nally symmetric. This term was defined in Section 25.4; its significance will be
explained in Section 26.4. In Fig. 26.5, g(ϕ) ≤ 0 for all ϕ ∈ [0, 1]. This is a model of coupling by inhibitory pulses, which delay firing.

[Figure panels: graphs of g, F, and G as functions of ϕ (Fig. 26.5).]
26.3 Analysis
[Figure 26.6: schematic of the phases ϕ0, F(ϕ0), and G(ϕ0) as functions of time.]
As you read this section, it may help to look at the diagram in Fig. 26.6.
Suppose that at time 0, neuron A has just fired, and is therefore now at phase
ϕA = 0, and neuron B has just received the signal from neuron A, and is now at
some phase ϕB = ϕ0 ∈ [0, 1). The next event of interest is a spike of B at time
(1 − ϕ0)T. This spike will reset ϕB to 0, while raising ϕA to f(1 − ϕ0). We define

F(ϕ) = f(1 − ϕ),   (26.6)

and G(ϕ) = F(F(ϕ)): G(ϕ0) is the phase of B when A next fires and has had its effect on B.

We now analyze the fixed points of F and G, and their connection with phase-locking of A and B.
Proof. (a) This is a direct consequence of the definitions of F and G. (b) Suppose
G(ϕ∗ ) = ϕ∗ . This means F (F (ϕ∗ )) = ϕ∗ . We apply F to both sides of this
equation: F (F (F (ϕ∗ ))) = F (ϕ∗ ). But this can be re-written as G(F (ϕ∗ )) = F (ϕ∗ ).
(c) Suppose A has just fired, and has had its effect on B, and now B is at phase
ϕ∗ . After time (1 − ϕ∗ )T , B fires, and sends a signal to A. This signal finds A at
phase 1 − ϕ∗ , and moves it to phase f (1 − ϕ∗ ) = F (ϕ∗ ). After time (1 − F (ϕ∗ ))T ,
A fires again, and the resulting signal moves B to phase ϕ∗ again. One full cycle is
now complete, and the time that it took is
(1 − ϕ∗ )T + (1 − F (ϕ∗ ))T = (2 − ϕ∗ − F (ϕ∗ )) T.
(d) G(0) = F(F(0)) = F(1) = 0 and G(1) = F(F(1)) = F(0) = 1. The fixed point 0 corresponds to a phase-locked pattern where B is at phase 0 when A has just fired and has had its effect on B — so when A has just fired, B has just fired. The fixed point 1 = F(0) corresponds to the same phase-locked firing pattern. (e) That there
is such a fixed point follows from F (0) = 1 and F (1) = 0, using the intermediate
value theorem. That there is only one such fixed point follows from the fact that
F is strictly decreasing. By part (c), the period of the corresponding phase-locked
firing pattern is 2(1 − ϕ∗ )T , and the time between a spike of A and the next spike
of B is (1 − ϕ∗ )T , exactly half the period of the phase-locked firing pattern.
(ϕ∗ ∈ (0, 1)). In Fig. 26.4, all ϕ ∈ [0, 1] are fixed points of G. This implies that the initial phase difference between A and B is simply conserved for all times.
Using the ideas explained in Appendix B, we see immediately that synchrony
is stable, and anti-synchrony is unstable, in the examples of Figs. 26.1 and 26.3.
In the examples of Figs. 26.2 and 26.5, synchrony is unstable and anti-synchrony is
stable. (When we say “synchrony is stable,” we mean that 0 and 1 are stable fixed
points of G, and when we say “anti-synchrony is stable,” we mean that the fixed
point ϕ∗ ∈ (0, 1) of G that corresponds to anti-synchrony is stable.) Comparing
Figs. 26.1 and 26.2, we see that synchrony can be stable or unstable when the pulse-
coupling is excitatory and the PRC is of type 1 (g(ϕ) ≥ 0 for all ϕ). The details
of the PRC matter. Figures 26.7 and 26.8 illustrate convergence to synchrony or
anti-synchrony, respectively.
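The convergence seen in Figs. 26.7 and 26.8 can be reproduced by iterating the map G = F ∘ F directly, using the same phase response functions as in those figures. The following is a sketch in Python; the starting phase 0.4 and the function names are my own choices:

```python
def make_G(g):
    """Build F(phi) = f(1 - phi), with f(phi) = phi + g(phi), and G = F o F."""
    f = lambda phi: phi + g(phi)
    F = lambda phi: f(1.0 - phi)
    return F, lambda phi: F(F(phi))

# g(phi) = phi^2 (1 - phi): synchrony attracts (Fig. 26.7).
F1, G1 = make_G(lambda phi: phi**2 * (1.0 - phi))
phi = 0.4
for _ in range(200):
    phi = G1(phi)
assert min(phi, 1.0 - phi) < 1e-6          # phase difference tends to 0 (or 1)

# g(phi) = 2 phi (1 - phi)^3: anti-synchrony attracts (Fig. 26.8).
F2, G2 = make_G(lambda phi: 2.0 * phi * (1.0 - phi)**3)
phi = 0.4
for _ in range(500):
    phi = G2(phi)
assert abs(F2(phi) - phi) < 1e-6           # converged to a fixed point of F
assert 0.0 < phi < 1.0                     # ... in the interior: anti-synchrony
```

In the second example the iteration settles at the unique interior fixed point of F, which by part (e) of Proposition 26.1 corresponds to the anti-synchronous phase-locked state.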
It is easy to describe in general when synchrony is stable, using part (e) of
Proposition 26.1:
Figure 26.7. Two oscillators, pulse-coupled with the phase response func-
tion g(ϕ) = ϕ2 (1 − ϕ), synchronize. [TWO_PULSE_COUPLED_OSC]
Figure 26.8. Two oscillators, pulse-coupled with the phase response func-
tion g(ϕ) = 2ϕ(1 − ϕ)3 , anti-synchronize. [TWO_PULSE_COUPLED_OSC_2]
Proposition 26.2. Synchrony is locally attracting if

(1 + g′(0))(1 + g′(1)) < 1,   (26.10)

and locally repelling if

(1 + g′(0))(1 + g′(1)) > 1.   (26.11)
Proof. The left-hand side of inequalities (26.10) and (26.11) equals G′(0):

G′(0) = F′(1) F′(0) = f′(0) f′(1) = (1 + g′(0))(1 + g′(1)).
Corollary 26.3. If g′(1) ≤ 0, as is the case for the PRCs shown in Figs. 26.1–26.4, synchrony is attracting if g′(0) is small enough, namely

g′(0) < |g′(1)|/(1 + g′(1)),   (26.12)

and repelling if

g′(0) > |g′(1)|/(1 + g′(1)).
g′(0) < 1/(1 + g′(1)) − 1 = −g′(1)/(1 + g′(1)) = |g′(1)|/(1 + g′(1)).
If g has the usual “type 2” shape, i.e., g(ϕ) < 0 for small ϕ, and g(ϕ) > 0 for large ϕ, then (26.12) holds, so synchrony is stable. If g′(0) ≥ 0 and g′(1) ≤ 0, as would be the case for a type 1 PRC in response to an excitatory pulse, then synchrony is stable only if g′(0) is small enough, i.e., refractoriness is pronounced enough.
Proposition 26.4. The following two statements are equivalent: (i) f is diagonally symmetric; (ii) G(ϕ) = ϕ for all ϕ ∈ [0, 1].
From the definitions, (ii) means f (1−f (1−ϕ)) = ϕ for all ϕ ∈ [0, 1], or equivalently:
(ii) f (1 − f (ϕ)) = 1 − ϕ for all ϕ ∈ [0, 1].
We will prove that (i) implies (ii), and vice versa.
[Figure 26.9: the graph of f, the diagonal (dashed), the point (ϕ, f(ϕ)), the coordinate s along the diagonal, and segments of lengths u and f̃(s).]
(i) ⇒ (ii): Figure 26.9 will help in following the argument that we are about
to give. Let ϕ ∈ [0, 1]. Consider the point (ϕ, f (ϕ)) on the graph of f . Draw
a line segment that starts in this point, ends on the diagonal of the unit square
indicated as a dashed line in Fig. 26.9, and is perpendicular to that diagonal. This
line segment is indicated in red in Fig. 26.9. The point on the diagonal in which the line segment ends has a coordinate s ∈ [−1/√2, 1/√2]. The red line segment
has length f˜(s), in the notation used in the proof of Proposition 25.1. If the graph
of f were on the other side of the diagonal, then f˜(s) would be negative, and the
length of the red segment would be |f˜(s)|; this possibility is allowed here. The
two segments indicated in blue in Fig. 26.9, together with the red segment, form an isosceles right triangle. The blue segments are of length u = f̃(s)/√2. If f̃(s) were negative, we would still define u to be f̃(s)/√2, but the blue segments would then be of length |u|.
We now note that

ϕ = s/√2 + 1/2 − u and f(ϕ) = s/√2 + 1/2 + u.   (26.13)
We used eq. (26.13) for the first of these equations, eq. (26.14) for the second,
and (26.13) again for the third. The proof of (i) ⇒ (ii) is thereby complete.
(ii) ⇒ (i): Let s ∈ [−1/√2, 1/√2], and let u be the solution of

s/√2 + 1/2 + u = f(s/√2 + 1/2 − u).   (26.15)

Define

ϕ = s/√2 + 1/2 − u.   (26.16)

Then (26.15) becomes

f(ϕ) = s/√2 + 1/2 + u.   (26.17)
But we also assume (ii) now, so
f (1 − f (ϕ)) = 1 − ϕ,
Proposition 26.5. For two identical theta neurons coupled via instantaneous
charge injection as described in Section 25.4, the initial phase difference is pre-
served for all time.
We will show that one would come to a different conclusion if one analyzed
the situation using the infinitesimal PRC, no matter how weak the interactions are.
In fact, the following proposition holds:
Proposition 26.6. For two identical theta neurons coupled via instantaneous
charge injection as described in Section 25.4, if the phase response function is re-
placed by Δv ĝ(ϕ), where ĝ is the infinitesimal phase response function, then syn-
chrony is locally attracting for all sufficiently small Δv > 0.
We assume that Δv, and therefore ε (see eq. (25.14)), is fixed, and is so small that the assumptions in (26.2) hold. For simplicity, we denote the interaction function associated with (26.18) by f̂, not indicating in the notation the dependence on Δv:

f̂(ϕ) = ϕ + (ε/π) · 1/(1 + tan²(π(ϕ − 1/2))).

The function “F” derived from this interaction function (see eq. (26.6)) will be denoted by F̂:

F̂(ϕ) = f̂(1 − ϕ) = 1 − ϕ + (ε/π) · 1/(1 + tan²(π(1/2 − ϕ))) = 1 − ϕ + (ε/π) sin²(πϕ).
We want to prove now that 0 is a locally attracting fixed point of Ĝ(ϕ) = F̂ (F̂ (ϕ)).
Using our formulas for F̂ and F̂′, one easily finds (see exercise 6)
By Taylor’s theorem, then Ĝ(ϕ) < ϕ if ϕ > 0 is close enough to 0, and therefore
our assertion follows using the ideas of Appendix B.
One might accuse the infinitesimal PRC of being misleading in this example:
It predicts that synchrony is attracting, when in fact it is not, regardless of how
small Δv > 0 may be. However, the accusation isn’t entirely fair: Notice that the
analysis based on the infinitesimal PRC predicts that synchrony is “just barely” attracting, in the sense that Ĝ′(0) is in fact equal to 1. In cases that are not
borderline, the predictions derived using the infinitesimal PRC are right for small
Δv > 0; see exercise 7 for a way of making this precise.20
20 This is reminiscent of the Hartman-Grobman Theorem for ordinary differential equations
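The borderline nature of this example is easy to see numerically: with the interaction function f̂ above, Ĝ′(0) = 1, yet Ĝ(ϕ) < ϕ for small ϕ > 0. The following is a sketch in Python, not from the text; the value ε = 0.5 is my own choice:

```python
import math

eps = 0.5   # assumed value of eps; any sufficiently small eps > 0 works

def fhat(phi):
    """Interaction function built from the infinitesimal PRC:
    fhat(phi) = phi + (eps/pi) sin^2(pi phi)."""
    return phi + (eps / math.pi) * math.sin(math.pi * phi) ** 2

Fhat = lambda phi: fhat(1.0 - phi)
Ghat = lambda phi: Fhat(Fhat(phi))

assert Ghat(0.0) == 0.0
# Ghat'(0) = 1: the difference quotient at 0 tends to 1 ...
assert abs(Ghat(1e-4) / 1e-4 - 1.0) < 1e-6
# ... yet Ghat(phi) < phi for small phi > 0, so 0 is locally attracting:
for phi in (0.01, 0.05, 0.1):
    assert Ghat(phi) < phi
```

The deficit ϕ − Ĝ(ϕ) is of cubic order in ϕ, which is why the attraction is "just barely" there.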
Figure 26.10. The function G derived from the PRC in Fig. 25.2. [RTM_PLOT_G]
for all ϕ ∈ [0, 1], we can still compute the function G = G(ϕ) = F (F (ϕ)), with
F (ϕ) = f (1 − ϕ), and plot its graph. It seems plausible that this can still give valid
indications about synchronization behavior. Doing this for the PRC of Fig. 25.2,
we obtain Fig. 26.10. This plot suggests that for two RTM neurons, coupled with
the kind of synaptic pulses used in the computation of Fig. 25.2, there is a stable
phase-locked state that is nearly, but not precisely synchronous.
One must be somewhat skeptical of this reasoning. The analysis assumes that
an input pulse only affects the duration of the inter-spike period during which it
arrives, not the durations of the subsequent inter-spike periods, and this is espe-
cially inaccurate when the pulse arrives shortly before the end of the period (see
exercise 25.1), so conclusions about near-synchronous phase-locked states deserve
special skepticism. However, Fig. 24.5 shows that our conclusion is, in fact, correct.
It is also in line with the result from Chapter 24 that fast excitatory synapses in a
network of RTM neurons produce sloppy, but not precise synchronization.
Exercises
26.1. Prove that (26.1) and (26.2) imply (26.3) and (26.4).
26.2. Verify the properties of F and G listed in (26.8) and (26.9).
26.3. (†) We will use the notation introduced in Section 25.4 here. As explained
there, the diagonal connecting the points (0, 0) and (1, 1) in the unit square
will be parametrized either by ϕ ∈ [0, 1], or by s ∈ [−1/√2, 1/√2]. From any strictly increasing function f = f(ϕ) with f(0) = 0 and f(1) = 1, we can derive a function f̃ = f̃(s), as indicated in Fig. 26.9. Recall that we call f diagonally symmetric if f̃ is an even function of s. The end point conditions f(0) = 0, f(1) = 1 translate into f̃(−1/√2) = f̃(1/√2) = 0.
The simplest even functions f̃ = f̃(s) with f̃(−1/√2) = f̃(1/√2) = 0 are the quadratics

f̃(s) = c (−s² + 1/2).

We assume the constant c to be positive.
(a) Explain why f̃ corresponds to a strictly increasing function f if c ∈ (0, 1/√2], but not if c > 1/√2. (Refer to Fig. 26.9.)
(b) Write c = ε/√2, ε ∈ (0, 1], so

f̃(s) = (ε/√2)(−s² + 1/2).   (26.21)
26.7. Fix Δv > 0, and let G(ϕ) denote the function G derived, as described in Section 26.2, from the phase response function g(ϕ, Δv), and Ĝ(ϕ) the function
G derived from the phase response function Δvĝ(ϕ). (Of course, G and Ĝ
depend on Δv, but we don’t indicate that in the notation here.)
Show:
(a) Ĝ′(0) = 1 + CΔv + DΔv² for constants C and D independent of Δv.
(b) If C < 0, then G′(0) < 1 for sufficiently small Δv.
(c) In Section 26.5, C = D = 0.
26.8. For the PRC shown in Fig. 25.10, the function G is not well-defined. Why
not?
Chapter 27
Oscillators Coupled
by Delayed Pulses
Action potentials in the brain travel at a finite speed. The conduction delays
between different brain areas can be on the order of 10 ms or more [151, Table 1].
Nonetheless, precise synchronization between oscillations in different parts of the
brain has been reported numerous times [166]. This may sound surprising at first.
However, there is a large body of theoretical work proposing mechanisms that could lead to synchronization in the presence of conduction delays. In this chapter, we
discuss what is arguably the simplest result of this sort. We consider two identical
oscillators, called A and B, pulse-coupled as in Chapter 26, but with a conduction
delay.
We write the conduction delay as δT, with δ > 0, and assume

δ < 1.   (27.1)

Biologically, this assumption, which will soon prove convenient, is reasonable: One might think, for instance, of a delay of 10 ms and an intrinsic frequency below 100 Hz, corresponding to an intrinsic period greater than 10 ms. When the signal reaches the other oscillator, it shifts the phase of that oscillator from ϕ to f(ϕ) = ϕ + g(ϕ), as in Chapter 26.
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 27) contains supplementary material, which is available to authorized users.
We will analyze the local stability of synchrony: If A and B already start out
near synchrony, will they become synchronous? We assume that at time t = 0, A
reaches phase 1, and its phase is reset to ϕA = 0. We assume that B is slightly
behind, at phase ϕB = 1 − α, with α > 0 small enough; it will become clear during
the course of our discussion what “small enough” should mean here. We also assume
that no signal is traveling in either direction just prior to time 0.
We will track events. We say that an “event occurs” if one of the oscillators
reaches phase 1 and is therefore reset to phase 0 and sets in motion a signal traveling
towards the other oscillator, or when a traveling signal reaches its target. For each
event, we record the following data:
Since we assume δ < 1 (see eq. (27.1)), the next event is that the signal traveling
from A to B reaches B.
• t = δT
• ϕA = δ, ϕB = f (δ − α)
• upcoming events:
(Recall that the phases recorded here are phases immediately after the event. Im-
mediately prior to the arrival of the signal, B is at phase δ − α. The arrival of the
signal changes this phase to f (δ − α).) We assume now that
α < 1 − δ. (27.3)
This inequality implies that the signal from B to A reaches A before A reaches
phase 1. We will show that it also reaches A before B reaches phase 1. That is, we
will show (for small enough α) that
α < 1 − f (δ − α).
We assume that the phase response function g is defined and differentiable on the
closed interval [0, 1], and satisfies (26.1) and (26.2). This implies that f is strictly
increasing; see Section 26.1. Therefore a sufficient condition for α < 1 − f(δ − α) is

α < 1 − f(δ).   (27.4)

We assume now that this holds. (This is simply one of the conditions defining a “small enough” α.) Note that the right-hand side of (27.4) is positive, since f(δ) < 1. The next event of interest is then the arrival of the signal at A.
• t = (α + δ)T
• ϕA = f (α + δ), ϕB = α + f (δ − α)
• upcoming events:
α + f(δ − α) + 1 − f(δ + α) = 1 − α + g(δ − α) − g(δ + α)
= 1 − α − 2α · (g(δ + α) − g(δ − α))/(2α) = 1 − α(1 + 2g′(δ)) + o(α).
(See exercise 2 for the justification of the last equation.) Recall that we have assumed g′(δ) > −1/2, so 1 + 2g′(δ) > 0. For small α, the two oscillators are closer to synchrony after Event 5 than after Event 1 if 1 + 2g′(δ) < 1, i.e., g′(δ) < 0. In this
case, synchrony is locally attracting. They are further away from synchrony, again assuming small enough α, if 1 + 2g′(δ) > 1, i.e., g′(δ) > 0. In this case, synchrony is locally repelling. We summarize our conclusion in the following proposition:

Proposition 27.1. Consider the model of Chapter 26, with g differentiable on [0, 1], g(0) = g(1) = 0, and g′(ϕ) > −1/2 for all ϕ ∈ (0, 1), modified by the assumption that the effect of one oscillator that reaches phase 1 on the other oscillator occurs with a delay of magnitude δT, where δ ∈ (0, 1). Synchrony of the two oscillators is locally attracting if g′(δ) < 0, and locally repelling if g′(δ) > 0.
[Panels of Figure 27.1: δ = 0.1 (top) and δ = 0.7 (bottom), t in units of T.]
Figure 27.1. Two oscillators, pulse-coupled with delays, with the phase
response function g(ϕ) = ϕ(1 − ϕ)/3, starting out near synchrony (ϕA = 0, ϕB =
0.9). The red and green dots indicate the times when A and B reach phase 1,
respectively, and are reset to phase 0. Only the second half of the simulation is
shown. Synchrony is repelling when the conduction delay is shorter (upper panel),
but attracting when it is longer (lower panel). [TWO_DELAYED_PULSE_COUPLED_OSC]
As an example, consider the phase response function

g(ϕ) = ϕ(1 − ϕ)/3.

Note that g′(ϕ) ≥ −1/3 > −1/2 for all ϕ ∈ [0, 1]. Here Proposition 27.1 implies that synchrony is stable if δ > 1/2, unstable if δ < 1/2; Fig. 27.1 illustrates this.
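For this example the one-cycle calculation above can be carried out exactly: because g is quadratic, the o(α) term vanishes, and one full cycle multiplies the lag α by exactly 1 + 2g′(δ). (The lag after one cycle is 1 minus the displayed expression, which works out to α + g(δ + α) − g(δ − α).) The following is a sketch in Python, not from the text:

```python
def g(phi):
    return phi * (1.0 - phi) / 3.0

def gprime(phi):
    return (1.0 - 2.0 * phi) / 3.0

def one_cycle(alpha, delta):
    """Lag of oscillator B behind A after one full cycle:
    alpha + g(delta + alpha) - g(delta - alpha)."""
    return alpha + g(delta + alpha) - g(delta - alpha)

alpha = 0.01
for delta in (0.1, 0.7):
    factor = one_cycle(alpha, delta) / alpha
    # For quadratic g the cycle map is exactly linear in alpha:
    assert abs(factor - (1.0 + 2.0 * gprime(delta))) < 1e-12

# delta = 0.1: g'(0.1) > 0, the lag grows (synchrony repelling);
# delta = 0.7: g'(0.7) < 0, the lag shrinks (synchrony attracting).
assert one_cycle(alpha, 0.1) > alpha
assert one_cycle(alpha, 0.7) < alpha
```

This reproduces the dichotomy of Fig. 27.1 without simulating the oscillators themselves.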
Figure 27.2. Yellow: Points (ε, δ), with ε = Δv/√(τm I − 0.25) and δ = conduction delay/T, for which synchrony of two pulse-coupled theta neurons is stable, according to inequality (27.8). The dots show results of numerical simulations of two pulse-coupled theta neurons; red and blue dots indicate stable and unstable synchrony, respectively. The figure demonstrates that the conduction delay must be large enough for synchrony to be stable. [TWO_THETA_NEURONS]
with

ε = Δv/√(τm I − 1/4).

Synchrony is locally attracting, by Proposition 27.1, if

g′(δ) < 0  ⟺  1/[(1 + (tan(π(δ − 1/2)) + ε)²) cos²(π(δ − 1/2))] − 1 < 0.

Let ψ = π(δ − 1/2). Since we assume δ ∈ (0, 1), we have ψ ∈ (−π/2, π/2). The condition g′(δ) < 0 is then equivalent to

(1 + (tan ψ + ε)²) cos² ψ > 1.
[Figure 27.3: panels for δ = 0.45 (top) and δ = 0.55 (bottom).]
We expand this:

cos² ψ + cos² ψ tan² ψ + 2ε tan ψ cos² ψ + ε² cos² ψ > 1.

Using tan ψ = sin ψ/cos ψ and cos² ψ + sin² ψ = 1:

2ε sin ψ cos ψ + ε² cos² ψ > 0.

Since ε > 0 and ψ ∈ (−π/2, π/2), cos² ψ is not zero, and we can divide by ε cos² ψ:

2 tan ψ + ε > 0,

or

ψ > −arctan(ε/2).
Using the definition of ψ, we conclude that synchrony is locally attracting if

δ > 1/2 − (1/π) arctan(ε/2)   (27.8)

(recall that we always assume δ < 1 here), and locally repelling if

δ < 1/2 − (1/π) arctan(ε/2).
Longer delays stabilize synchrony, shorter ones do not. Regardless of the value of ε > 0, synchrony is locally attracting if δ ∈ [1/2, 1). Figure 27.2 shows the region in
the (, δ)-plane where synchrony is locally attracting according to (27.8), and also
shows numerical results confirming our analysis.
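Inequality (27.8) can be checked against a direct numerical evaluation of g′(δ) for the theta neuron PRC. The following is a sketch in Python, not from the text; the value ε = 1 and the finite-difference step are my own choices:

```python
import math

eps = 1.0   # assumed value of eps = dv / sqrt(tau_m I - 1/4)

def g(phi):
    """Theta neuron PRC, eq. (25.13)."""
    return math.atan(math.tan(math.pi * (phi - 0.5)) + eps) / math.pi + 0.5 - phi

def gprime(delta, h=1e-6):
    """Central finite-difference approximation to g'(delta)."""
    return (g(delta + h) - g(delta - h)) / (2.0 * h)

threshold = 0.5 - math.atan(eps / 2.0) / math.pi   # right-hand side of (27.8)

# g'(delta) < 0 (synchrony attracting) exactly for delta above the threshold:
assert gprime(threshold + 0.05) < 0.0
assert gprime(threshold - 0.05) > 0.0
assert abs(gprime(threshold)) < 1e-4               # g' vanishes at the threshold
```

For ε = 1 the threshold is about 0.352, so a conduction delay of a bit more than a third of the period already stabilizes synchrony in this example.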
Exercises
27.1. Can you explain in words why longer delays may make synchrony locally
attracting when shorter delays don’t?
27.2. Assume that g is differentiable. Explain why for any δ ∈ (0, 1),

lim_{α→0} (g(δ + α) − g(δ − α))/(2α) = g′(δ).

This shows that the left-hand side of eq. (27.5) approximates g′(δ) for small α.
27.3. (∗) Modify the code that generates the lower panel of Fig. 27.3 so that the
conduction delays between oscillators 1 and 2 are 0.55T (in both directions,
from 1 to 2 and from 2 to 1), those between oscillators 1 and 3 are 0.6T , and
those between oscillators 2 and 3 are 0.65T . Thus the conduction delays are
now heterogeneous. What does the plot look like now?
Chapter 28
Weakly Coupled
Oscillators
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 28) contains supplementary material, which is available to authorized users.
21 Averaging methods are common in much greater generality in the study of dynamical systems
(The equation holds because ϕB is an integer when the increase in ϕA occurs, and g0 is periodic with period 1.) This increase in ϕA happens approximately once in time T. Thus the time rate at which ϕA is advanced by B is

ε g0(ϕA − ϕB)/T.

We therefore modify the equation for ϕA as follows:

dϕA/dt = 1/T + (ε/T) g0(ϕA − ϕB).   (28.1)

(If you are skeptical, please keep reading until you have read the next paragraph, where your objections may be addressed.) Similarly, of course, we modify the equation for ϕB:

dϕB/dt = 1/T + (ε/T) g0(ϕB − ϕA).   (28.2)
You may see inconsistencies in the reasoning of the preceding paragraph. First, if (28.2) is the equation for ϕB, then the frequency with which B sends pulses to A is no longer 1/T, yet in writing down (28.1), we appear to assume that 1/T is the frequency with which A receives inputs from B. This is indeed an inaccuracy, but its effect is of higher order in ε and therefore negligible. In fact, the actual frequency differs from 1/T only by a term on the order of ε, and is multiplied by another factor of ε in (28.1), so altogether this effect is quadratic in ε. Second, after ϕB has passed an integer value once, and therefore ϕA has been increased by ε g0(ϕA − ϕB), the phase difference ϕA − ϕB is no longer what it was before, and yet the argument leading to (28.1) seems to assume that the phase difference remains constant over multiple input pulses. Again the error caused by this inaccuracy is quadratic in ε, since the shift in ϕA caused by the signal from B is only of order ε.
We define now ψ = ϕB − ϕA. Subtracting eq. (28.1) from (28.2), and using the fact that g0(−ψ) = g0(1 − ψ) because g0 is periodic with period 1, we find

dψ/dt = H(ψ),   (28.3)

with

H(ψ) = (ε/T)(g0(ψ) − g0(1 − ψ)).   (28.4)
We will test numerically whether this equation in fact describes the time evolution
of the phase difference between the two oscillators accurately for weak interactions.
Example 1: g0(ϕ) = ϕ²(1 − ϕ). Using Proposition 26.2, one can easily show that synchrony is attracting for any ε ∈ (0, 1] here. For two different values of ε, Fig. 28.1 shows the actual phase difference between the two oscillators as a function of time (black), and the time evolution of ψ obtained from eq. (28.3) (red). For small ε, there is close agreement. In fact, ε need not be extremely small: Note that in the upper panel of Fig. 28.1, where eq. (28.3) is already a fairly good approximation, the coupling is still so strong that the transition from near-anti-synchrony to near-synchrony occurs in no more than about 10 periods.
[Figure 28.1: ψ as a function of t (units of T), for ε = 0.5 (top) and ε = 0.1 (bottom).]
The assumptions from Section 26.1, which we adopt here, included g(0) = g(1) = 0. Using the notation g = ε g0 that we use here and recalling that ε > 0, this means the same as g0(0) = g0(1) = 0, and it implies H(0) = 0. Thus ψ∗ = 0 is a fixed point of eq. (28.3). We will ask when this fixed point is attracting or repelling, i.e., when eq. (28.3) predicts that synchrony of the two oscillators is attracting or repelling. This is a matter of analyzing the sign of H′(0). First we must ask whether H is even differentiable.
Figure 28.2. Like Fig. 28.1, but with phase response function ϕ(1 − ϕ)3 .
Synchrony is unstable here. [WEAKLY_COUPLED_2]
Because g0(0) = g0(1) = 0, the extension of g0 to the whole real axis with period 1 is continuous. In Section 26.1, we also assumed that g was continuously differentiable on the interval [0, 1], and here that means the same as to say that g0 is continuously differentiable on [0, 1]. Therefore the extension of g0 to the entire real axis is continuously differentiable everywhere, except possibly at integer values of ϕ. The “g′(0)” of Section 26.1 is the right-sided derivative of the extension at 0; we will still denote it by g′(0), though. Similarly, g0′(0) denotes the right-sided derivative of g0 at 0, and g0′(1) the left-sided derivative of g0 at 1, or, equivalently, the left-sided derivative of the extension of g0 at 0. We did not assume in Section 26.1 that g′(0) = g′(1), and don't assume g0′(0) = g0′(1) here. In fact, in Examples 1 and 2 above, g0′(0) ≠ g0′(1). Therefore the periodic extension of g0 is not in general differentiable at integer values of ϕ.
Because the periodic extension of g0 is continuous with period 1, so is H. We
will now show that even though (the extension of) g0 need not be differentiable at
integer arguments ϕ, it is always true that H = H(ψ) is differentiable at integer
arguments ψ. We note first that the right-sided derivative of H at ψ = 0 exists,
because the one-sided derivatives of g0 at 0 and at 1 exist. Further, H is an odd
function (exercise 1). For an odd function, if the right-sided derivative at 0 exists,
so does the left-sided derivative, and it is equal to the right-sided derivative; see exercise 2. We conclude that H′(0) exists, and from (28.4),

H′(0) = (ε/T)(g0′(0) + g0′(1)).

Therefore ψ∗ = 0 is a stable fixed point of eq. (28.3) if

g0′(0) + g0′(1) < 0,   (28.5)

and an unstable fixed point if g0′(0) + g0′(1) > 0. The value of ε plays no role.
In Section 26.1, we found a different condition for synchrony to be attracting, namely inequality (26.10). Replacing g by ε g0 in that condition, it becomes

g0′(0) + g0′(1) + ε g0′(0) g0′(1) < 0.   (28.6)

As ε → 0, (28.6) becomes (28.5); thus eq. (28.3) correctly answers the question whether synchrony is attracting or repelling in the limit as ε → 0.
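A minimal check of eq. (28.3) and criterion (28.5), using the phase response functions of Examples 1 and 2, is straightforward. The following is a sketch in Python, not from the text; ε = 0.2, T = 1, the starting value ψ = 0.1, and the Euler step are my own choices:

```python
def H(psi, g0, eps=0.2, T=1.0):
    """Right-hand side of eq. (28.3): (eps/T) (g0(psi) - g0(1 - psi))."""
    return (eps / T) * (g0(psi) - g0(1.0 - psi))

def settle(g0, psi0=0.1, dt=0.05, steps=50_000):
    """Forward-Euler integration of d(psi)/dt = H(psi)."""
    psi = psi0
    for _ in range(steps):
        psi += dt * H(psi, g0)
    return psi

g1 = lambda phi: phi ** 2 * (1.0 - phi)    # Example 1: g0'(0) + g0'(1) = -1 < 0
g2 = lambda phi: phi * (1.0 - phi) ** 3    # Example 2: g0'(0) + g0'(1) = +1 > 0

sync = settle(g1)
anti = settle(g2)
assert sync < 1e-3                # criterion (28.5) holds: synchrony attracts
assert abs(anti - 0.5) < 1e-3     # criterion violated: psi drifts to 1/2
```

As predicted, the value of ε affects only the time scale of the drift, not the outcome.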
in eqs. (28.8) and (28.9)? The answer is similar to the answers to similar objections in Section 28.1: In a fixed amount of time, the drift in the phase difference is of order ε, and will therefore only cause a correction proportional to ε² in eqs. (28.8) and (28.9). We neglect quantities that are quadratic in ε.
Subtracting (28.8) from (28.9), and using the fact that g0 is periodic with period 1, we get the following equation for the phase difference ψ = ϕB − ϕA:

dψ/dt = 1/TB − 1/TA + (ε/TA) g0(ψ) − (ε/TB) g0(1 − ψ).

We use eq. (28.7) now, and drop terms that are of higher order in ε (exercise 3):

dψ/dt = H(ψ),   (28.10)

with

H(ψ) = (ε/TA)(−c + g0(ψ) − g0(1 − ψ)).   (28.11)
Figure 28.3. Graph of the function D defined by eq. (28.14), together with
the two fixed points of eq. (28.10) for c = 0.08. The stable fixed point is indicated
by a solid circle, and the unstable one by an open circle. [PLOT_D_TWO_FIXED_POINTS]
Fixed points of eq. (28.10) are phase differences at which the two oscillators
can lock. However, eq. (28.10) need not have any fixed point at all. The equation
has a fixed point if and only if there is a solution of H(ψ) = 0, that is, if and only
if there is a solution of
D(ψ) = c, (28.12)
where
D(ψ) = g0 (ψ) − g0 (1 − ψ). (28.13)
For g0(ϕ) = ϕ²(1 − ϕ), the example of Fig. 26.1,

D(ψ) = 2ψ(1 − ψ)(ψ − 1/2).   (28.14)

The black curve in Fig. 28.3 shows the graph of D. The maximal value of D is √3/18 ≈ 0.096 (exercise 5). So for 0 ≤ c ≤ √3/18, eq. (28.10) has a fixed point,
[Panels of Figure 28.4: ε = 0.1, c = 0.08 (top) and ε = 0.1, c = 0.12 (bottom); ψ against t in units of TA.]
Figure 28.4. Two oscillators, one with intrinsic period TA = 1 and the other with intrinsic period TB = (1 + cε)TA, pulse-coupled with phase response function ε g0(ϕ), g0(ϕ) = ϕ²(1 − ϕ). Top panel: c = 0.08 < √3/18 yields phase-locking for small ε: The black curve shows the real value of ψ = ϕB − ϕA as a function of time, computed by simulating the two oscillators. Equation (28.10) predicts this outcome; the solution ψ of (28.10) was plotted in red, but is entirely covered by the black curve in the upper panel. Bottom panel: c = 0.12 > √3/18 yields phase walkthrough for small ε (black). Again eq. (28.10) predicts this outcome (red), although the prediction becomes quantitatively inaccurate after some time. [WEAKLY_COUPLED_HETEROGENEOUS_1]
whereas for c > √3/18 it does not. Furthermore, for 0 ≤ c < √3/18, there are
two fixed points of eq. (28.10), a stable one and an unstable one, as illustrated in
Fig. 28.3.
We simulate the two oscillators for c = 0.08 < √3/18 and for c = 0.12 >
√3/18, and compare with the predictions derived from the differential equation (28.10).
The results are shown in Fig. 28.4, and confirm that (28.10) correctly
predicts the behavior for small ε.
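The locking prediction can also be checked with a direct event-based simulation of the two pulse-coupled oscillators. The sketch below assumes the convention that when one oscillator reaches phase 1 it resets to 0 and the other oscillator's phase jumps by εg0 of its current value; the final phase difference is compared with the stable root of D(ψ) = c:

```python
import math

eps, c = 0.1, 0.08
TA = 1.0
TB = (1.0 + eps * c) * TA

def g0(phi):
    return phi**2 * (1.0 - phi)

def D(psi):
    return g0(psi) - g0(1.0 - psi)

# Stable root of D(psi) = c, found by bisection on the interval
# (1/2 + 1/(2*sqrt(3)), 1), where D is decreasing.
lo, hi = 0.5 + 1.0 / (2.0 * math.sqrt(3.0)), 1.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if D(mid) > c:
        lo = mid
    else:
        hi = mid
psi_star = 0.5 * (lo + hi)

# Event-based simulation: phases advance at rates 1/TA, 1/TB; each firing
# advances the other oscillator's phase by eps * g0(its current phase).
dt, t_end = 1e-3, 500.0
phiA, phiB = 0.0, 0.95
t = 0.0
while t < t_end:
    phiA += dt / TA
    phiB += dt / TB
    if phiA >= 1.0:
        phiA -= 1.0
        phiB += eps * g0(phiB)
    if phiB >= 1.0:
        phiB -= 1.0
        phiA += eps * g0(phiA)
    t += dt

psi_end = (phiB - phiA) % 1.0
print(psi_star, psi_end)
```

For ε = 0.1 the simulated phase difference settles very close to the fixed point of the averaged equation, as in the top panel of Fig. 28.4.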
Exercises
28.1. Explain why the function H defined by eq. (28.4) is odd.
28.2. Show: For any odd function H = H(ψ), if the right-sided derivative of H at
0 exists, then the left-sided derivative of H at 0 exists as well, and is equal
to the right-sided derivative.
28.3. Explain how (28.10), (28.11) come about.
28.4. Derive eq. (28.14).
242 Chapter 28. Weakly Coupled Oscillators
28.5. Show that the maximal value of the function D defined by eq. (28.14) is
√3/18.
28.6. Think about two identical oscillators A and B, with intrinsic period T. Assume
that the phase response function is εϕ(1 − ϕ) with 0 < ε < 1. (a)
What is the right-hand side H(ψ) of eq. (28.3) in this case? Is the fixed point
ψ∗ = 0 attracting? (b) Using the analysis of Chapter 26, determine whether
or not synchrony is attracting here.
28.7. Think about two oscillators, A and B, with intrinsic periods TA > 0 and TB =
(1 + εc)TA, where c > 0, and ε > 0 is small. Assume that the two oscillators are
pulse-coupled with phase response function εg0(ϕ), where g0(ϕ) = ϕ(1 − ϕ)³.
(a) Use eq. (28.10) to analyze for which values of c there will be stable phase-locking.
(b) (∗) Check your answer numerically, using a modification of the
code that generates Fig. 28.4.
28.8. Think about two oscillators A and B with intrinsic periods TA and TB =
(1 + εc)TA, with c > 0 and ε > 0. Assume that the phase response function is
εϕ(1 − ϕ) with 0 < ε < 1. (a) Explain why eq. (28.10) now predicts that there
is no phase-locking for any c > 0. (b) (∗) Check the conclusion from part
(a) by generating a plot like those in Fig. 28.4 with c = 0.005 and ε = 0.02,
computing up to time 50,000. This will take some computing time, but it
isn't prohibitive.
28.9. (a) Figure 28.3 shows that D is an odd function with respect to ψ = 1/2.
That is,

D(1/2 − s) = −D(1/2 + s)   (28.15)

for all s. Show that this is the case for all g0, not just for the specific g0 used
in Fig. 28.3. (b) Assume that g0(ϕ) ≢ g0(1 − ϕ). Show that there is a value
M > 0 so that eq. (28.10) (with H defined by eq. (28.11)) has a fixed point
if c ∈ [0, M], but not if c > M. (c) Again assume that g0(ϕ) ≢ g0(1 − ϕ).
Assume, as we did throughout this section, that g0 = g0(ϕ) is defined and
continuous for all ϕ ∈ R, with g0(ϕ + 1) ≡ g0(ϕ). Let c ∈ (0, M), with M
defined as in part (b). Show that eq. (28.10) has a stable fixed point.
Chapter 29

Approximate Synchronization by a Single Inhibitory Pulse
dvi/dt = −vi/τm + I,   (29.1)

vi(t + 0) = 0 if vi(t − 0) = 1,

with τm > 0 and I > 1/τm, where vi denotes the membrane potential of the i-th
LIF neuron, for 1 ≤ i ≤ N. In Chapter 7, we showed (see eq. (7.9)) that each of the
LIF neurons will then, as long as I is the only input, fire periodically with period

T = τm ln(τm I/(τm I − 1)).   (29.2)
We choose vi(0) in such a way that the i-th LIF neuron would, if I were the only
input, fire at time iT/N, for 1 ≤ i ≤ N:

vi(0) = (1 − e−(N−i)T/(N τm)) τm I.   (29.3)
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 29) contains supplementary material, which is available to authorized users.
244 Chapter 29. Approximate Synchronization by a Single Inhibitory Pulse
(See exercise 1.) Thus we initialize the neurons in a splay state. At time 0, we give
the neurons an inhibitory pulse, modeled by adding the term

−gsyn e−t/τI vi   (29.4)

to the right-hand side of eq. (29.1), where gsyn > 0 is the maximal "conductance" of
the inhibitory term (because we non-dimensionalize most variables, but not time,
gsyn is actually a reciprocal time in this model), and τI > 0 is the decay time
constant. The added term (29.4) is inhibitory because it drives vi towards 0. Thus
the differential equation for vi is now

dvi/dt = −vi/τm + I − gsyn e−t/τI vi.
Figure 29.1 shows results of simulations with N = 10, and gsyn = 0 (no inhibitory
pulse, top panel), a “weak and long” inhibitory pulse (middle panel), and “strong
and brief” inhibitory pulse (bottom panel). The definitions of “weak and long” and
“strong and brief” are of course a bit subjective here; see discussion below. Dotted
vertical lines in Fig. 29.1 indicate times at which a membrane potential reaches the
threshold value 1, resulting in a reset to zero. The simulation of the i-th neuron is
halted as soon as vi is reset.
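A minimal sketch of this simulation (forward Euler; the parameter values τm = 10, J = 0.02, gsyn = 0.15, τI = 9 are those of the "weak and long" middle-panel example discussed below, and N = 10 as in Fig. 29.1):

```python
import math

def first_spike_times(N=10, tau_m=10.0, I=0.12, gsyn=0.15, tau_I=9.0,
                      dt=0.001, t_max=150.0):
    """First spike time of each of N LIF neurons initialized in the splay
    state (29.3) and subjected to the inhibitory pulse (29.4) at time 0.
    Each neuron is halted at its first reset, as in Fig. 29.1."""
    T = tau_m * math.log(tau_m * I / (tau_m * I - 1.0))   # eq. (29.2)
    v = [(1.0 - math.exp(-(N - i) * T / (N * tau_m))) * tau_m * I
         for i in range(1, N + 1)]                         # eq. (29.3)
    times = [None] * N
    t = 0.0
    while t < t_max and any(s is None for s in times):
        g = gsyn * math.exp(-t / tau_I)
        for i in range(N):
            if times[i] is None:
                v[i] += dt * (-v[i] / tau_m + I - g * v[i])
        t += dt
        for i in range(N):
            if times[i] is None and v[i] >= 1.0:
                times[i] = t
    return T, times

# Without inhibition, neuron i fires at i*T/N (the splay state).
T, free = first_spike_times(gsyn=0.0)
# With the inhibitory pulse, firing is delayed and the spikes bunch together.
_, inhibited = first_spike_times()
spread_free = max(free) - min(free)
spread_inh = max(inhibited) - min(inhibited)
print(spread_free, spread_inh)
```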
[Figure 29.1: membrane potentials vi as functions of t, 0 ≤ t ≤ 30, for the three cases described above.]
In this section, we will measure the strength of the external drive not by I
but by J = I − 1/τm > 0, the drive above firing threshold.
Either I or J could be used, of course. The choice makes a difference when studying
the sensitivity of the dependence of T, P, and S on external drive; this point will
be explained in the last paragraph of this section.
To fully understand the synchronization of a population of normalized LIF
neurons by an inhibitory pulse would mean to fully understand how P and S depend
on the four parameters τm, J, gsyn, and τI. I have no clear-cut and complete
description of this dependence, but will give a few relevant numerical results. In
Fig. 29.3, we perturb the parameters around τm = 10, J = 0.02, gsyn = 0.15, and
τI = 9. These are the parameters of the middle panel of Fig. 29.1, synchronization
by a long, weak inhibitory pulse. We vary τI (left two panels of Fig. 29.3), gsyn
(middle two panels), or J (right two panels), and find that P depends significantly
on all three of these parameters.
[Figure 29.3: P as a function of τI (left), gsyn (middle), and J (right), varied around the reference values τI = 9, gsyn = 0.15, J = 0.02.]
the resulting percentage increase in P. The results are displayed in the upper row of
Table 29.1. We see that P is a bit more sensitive to changes in τI than to changes in
gsyn and J, but it has some (moderate) dependence on all three parameters. In the
lower row of Table 29.1, we show similar results, now perturbing from the reference
values J = 0.02, gsyn = 2, and τI = 1, corresponding to the bottom row of Fig. 29.1,
a strong, brief inhibitory pulse. Here P is much less dependent on τI and gsyn,
but still as strongly dependent on J as before. The reduced dependence on τI and
gsyn is not surprising: When τI is much shorter than T, then during most of the
time that it takes the membrane potential to reach the threshold value 1, inhibition
is small, regardless of what the precise values of gsyn and τI are, and P is in any
case not much greater than T.
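This kind of sensitivity computation can be sketched as follows. As a simplification, P is taken here to be the first spike time of a single LIF neuron started at v = 0, not the population-based quantity used in the table, so the numbers will differ; the directions of the dependencies, however, are unambiguous:

```python
import math

def P_first_spike(tau_m=10.0, J=0.02, gsyn=0.15, tau_I=9.0,
                  dt=0.01, t_max=300.0):
    """Time for a single LIF neuron, started at v = 0 at the onset of the
    inhibitory pulse, to reach the threshold v = 1 (forward Euler)."""
    I = J + 1.0 / tau_m
    v, t = 0.0, 0.0
    while t < t_max:
        v += dt * (-v / tau_m + I - gsyn * math.exp(-t / tau_I) * v)
        t += dt
        if v >= 1.0:
            return t
    return float("inf")

P0 = P_first_spike()
# Percentage change in this proxy for P under a 1% perturbation of each
# parameter, in the spirit of Table 29.1.
sens = {
    "tau_I": 100.0 * (P_first_spike(tau_I=9.0 * 1.01) - P0) / P0,
    "gsyn": 100.0 * (P_first_spike(gsyn=0.15 * 1.01) - P0) / P0,
    "J": 100.0 * (P_first_spike(J=0.02 * 0.99) - P0) / P0,
}
print(P0, sens)
```

All three perturbations lengthen P: more inhibition (larger gsyn or τI) delays the threshold crossing, and less drive (smaller J) does too.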
The entries in the first column of Table 29.1 would become larger if we replaced
I by 0.99I, rather than J by 0.99J. This follows from the fact that

J = I − 1/τm < I.

A one percent reduction in I is a larger percentage reduction in J. In fact, with
our standard value τm = 10, the firing frequency becomes unrealistically high unless
J ≪ I. To see this, suppose that J were even as large as I/2. Then eq. (29.2) would imply

T = τm ln(τm I/(τm I − 1)) = τm ln(I/J) = τm ln 2.
With τm = 10, this would translate into the very high frequency of about 144 Hz already.
Larger values of J translate into even greater frequencies. As a consequence,
the entries in the first column of Table 29.1 would in fact be much larger if they
were based on reducing I, not J, by 1%.
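A quick check of this arithmetic, with time in ms:

```python
import math

tau_m = 10.0   # ms
# If J = I/2, then tau_m*I - 1 = tau_m*J = tau_m*I/2, so T = tau_m * ln 2.
T = tau_m * math.log(2.0)   # ms
freq = 1000.0 / T           # Hz
print(T, freq)  # about 6.93 ms, about 144 Hz
```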
[Figure 29.4: as Fig. 29.1, but for ten RTM neurons; v [mV] as a function of t [ms], three panels.]
As before, we denote by P the time between the arrival of the inhibitory pulse
and the resumption of firing in approximate synchrony. Precisely, P is taken to
be the average first spike time of the ten RTM neurons, where for each neuron its
“spike time” is, as earlier, taken to be the time at which the membrane potential
crosses −20 mV from above.
The intrinsic period of the RTM neurons in Fig. 29.4, which have external
drive I = 1.2, is T ≈ 20.4 ms. In the middle panel of Fig. 29.4, P is slightly
greater than T, but in the bottom panel, it is significantly smaller than T. This
seems surprising: Why should an inhibitory pulse make some cells fire sooner than
they would have fired otherwise? The answer is that the reversal potential of the
inhibitory pulse, which is taken to be −75 mV in Fig. 29.4, lies significantly above
29.3. The River Picture for Theta Neurons 249
the membrane potential to which an RTM neuron resets after firing. Those cells
that are initialized near the reset potential are therefore initially accelerated by the
inhibitory pulse.
The sensitivity of P to small parameter changes is explored in Table 29.2,
which is fairly similar to the analogous table for LIF neurons, Table 29.1. Note that
P depends on none of the three parameters very sensitively (a 1% perturbation in
one of the parameters results in a perturbation smaller than 1% in P). It is most
sensitive to perturbations in I, and least sensitive to perturbations in gsyn.
Table 29.2. Analogous to Table 29.1, but for ten RTM neurons (see text
for the precise definition of P in the present context), with synaptic reversal potential
−75 mV, and baseline parameter values I = 1.2, gsyn = 0.3, τI = 9 (upper row)
and I = 1.2, gsyn = 2.25, τI = 1 (lower row). [RTM_CONDITION_NUMBERS]
To model synaptic input to a theta neuron, we add the synaptic term to the
voltage equation,

dv/dt = −v(1 − v)/τm + I + gsyn(t)(vrev − v),   (29.9)

and write this equation in terms of θ again:

(1/4) sec²(θ/2) dθ/dt = −(1/(4τm)) (1 − tan²(θ/2)) + I + gsyn(t) (vrev − 1/2 − (1/2) tan(θ/2)).   (29.10)
Equation (29.10) describes how synaptic input to a theta neuron should be modeled.
We now consider a theta neuron subject to an inhibitory pulse with

gsyn(t) = ḡsyn e−t/τI,

or equivalently

dgsyn/dt = −gsyn/τI   (29.11)

and gsyn(0) = ḡsyn, where ḡsyn > 0 is the initial (maximal) conductance. We think
of (29.10) and (29.11) as a system of two differential equations in the two unknowns
θ and gsyn. With vrev = 0, this system reads as follows:
dθ/dt = −cos θ/τm + (2I − gsyn)(1 + cos θ) − gsyn sin θ,   (29.12)

dgsyn/dt = −gsyn/τI.   (29.13)
Notice that gsyn(0) = ḡsyn.
Figure 29.5 shows solutions of eqs. (29.12), (29.13) in the (θ, gsyn )-plane. The
picture should be thought of as 2π-periodic: When a solution exits through the
right edge, i.e., reaches θ = π, it re-enters through the left edge, with the same
value of gsyn with which it exited. Recall that reaching θ = π means firing for
a theta neuron; this is why the exit of a solution through the right edge of the
rectangle shown in Fig. 29.5 is of special interest.
[Figure 29.5: solutions of eqs. (29.12), (29.13) in the (θ, gsyn)-plane; horizontal axis θ ∈ [−π, π], vertical axis gsyn; the stable river exits through the right edge at (π, g∗).]
There is a special solution, indicated in blue in the figure, with the property
that most solutions come very close to it before exiting through the right edge.
This solution is exponentially attracting (the convergence of other solutions to it
is exponentially fast). Such solutions are common in systems of ODEs. They are
called rivers, and have been the subject of extensive study [37, 38]. We also indicate
another special curve in Fig. 29.5, in red. Solutions approach it in backward time,
that is, when the arrows indicating the direction of motion are reversed. We won’t
give it a precise definition, but nonetheless we will give it a name: We call it the
unstable river, and for clarity we also call the river the stable river.22
We denote the point at which the stable river reaches θ = π and therefore
exits the rectangle by (π, g∗), with g∗ > 0. Consider now a solution (θ(t), gsyn(t))
that starts, at time 0, at (θ0, ḡsyn). If it comes very close to the stable river before
reaching θ = π, then it exits the rectangle, to very close approximation, at (π, g∗).
This means that the time, P, at which it exits the rectangle satisfies e−P/τI ḡsyn = g∗, or

P = τI ln(ḡsyn/g∗),   (29.14)

and is, in particular, independent of θ0. The neurons of a population, initialized
with different values of θ0, will all reach π at nearly the same time, namely the time
P given by (29.14). Thus the population will synchronize.
Trajectories that start out to the right of the unstable river exit through the
right edge, re-enter through the left edge, then approach the stable river, and exit
near (π, g∗ ) again, that is, reach θ = π at approximately the time P given by
eq. (29.14). This means that a theta neuron that is close to firing at time 0, the
22 The stable river is a well-defined trajectory of the system (29.12),(29.13): It is the only
solution of (29.12), (29.13) with θ(t) ∈ (−π, π) for all t, θ(t) → −π/2 as t → −∞. By contrast,
the “unstable river” is not a well-defined trajectory here.
onset time of the inhibitory pulse, will fire once soon after onset of the inhibitory
pulse, but then again at approximately time P .
There is an interesting conclusion from formula (29.14). Note that g∗ is independent
of ḡsyn. In fact, the whole picture in Fig. 29.5 does not depend on one
particular ḡsyn; instead, ḡsyn should be thought of as the value of gsyn at time zero.
Therefore (29.14) implies

P = τI ln ḡsyn + C,   (29.15)

with C independent of ḡsyn (but dependent, of course, on τI, τm, and I). Thus
the dependence of P on ḡsyn is logarithmic. Since we don't non-dimensionalize
time, not even for the theta neuron, we must strictly speaking think of gsyn as a
reciprocal time. Consequently, there is a dimensional problem with (29.15): The
logarithm of a quantity that is physically a reciprocal time is taken. However, this
could easily be fixed by multiplying the argument of ln in (29.15) by a reference
time not dependent on ḡsyn, for instance τm.
One might be tempted to conclude from eq. (29.14) that P is proportional
to τI . However, this is not the case, since g∗ depends on τI ; see exercise 7.
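The value of g∗ can be estimated numerically along the lines of exercise 7: integrate eqs. (29.12), (29.13) from θ(0) = 0 with a large initial conductance, and record gsyn at the moment θ reaches π. The parameter values below (τm = 10, I = 0.13, τI = 9) are illustrative choices, not necessarily those behind Fig. 29.5:

```python
import math

def g_at_exit(g_init, tau_m=10.0, I=0.13, tau_I=9.0, dt=0.001, t_max=400.0):
    """Integrate eqs. (29.12), (29.13) from (theta, gsyn) = (0, g_init) by
    forward Euler; return (exit time, gsyn at the moment theta reaches pi)."""
    theta, g, t = 0.0, g_init, 0.0
    while t < t_max:
        dtheta = (-math.cos(theta) / tau_m
                  + (2.0 * I - g) * (1.0 + math.cos(theta))
                  - g * math.sin(theta))
        theta += dt * dtheta
        g -= dt * g / tau_I
        t += dt
        if theta >= math.pi:
            return t, g
    raise RuntimeError("no exit before t_max")

t1, g1 = g_at_exit(1.0)
t2, g2 = g_at_exit(2.0)
# g at exit is nearly independent of the initial conductance (the river),
# and eq. (29.14), with g* taken from the first run, predicts the second
# exit time.
t2_pred = 9.0 * math.log(2.0 / g1)
print(g1, g2, t2, t2_pred)
```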
Exercises
29.1. Explain why eq. (29.3) describes initialization in splay state.
29.2. Explain why the functions v0 and v1 in Section 29.1 cannot be written explicitly
as functions of the parameters. Hints: (i) If it cannot be done using
the method of integrating factors, it cannot be done. (ii) The integral
∫ (e−u/u) du cannot be evaluated explicitly. Both of these statements are provably
correct, but you may take both of them for granted.
29.3. Using the notation of eq. (29.5), show: S ≤ 1/2, and S < 1/2 if and only if

gsyn > I − 1/τm.
29.4. Let us think about very strong and very brief inhibitory pulses synchronizing
a population of LIF neurons. We consider the initial-value problem

dv/dt = −v/τm + I − gsyn e−t/τI v,   v(0) = v∗,

with v∗ > 0. We now take a limit in which gsyn → ∞, τI → 0, and gsyn τI
remains constant; call that constant value γ.
(a) (†) Explain: In the limit, v converges to the solution of

dv/dt = −v/τm + I,   v(0) = v∗ e−γ.

You are allowed to be a bit non-rigorous. In particular, because we are
interested in a limit in which gsyn → ∞, you may assume that the only term
29.5. (∗) Re-compute Table 29.2 perturbing the three parameters I, gsyn , and τI
by 10% instead of 1%, and discuss how the resulting table does or does not
differ from Table 29.2.
29.6. (∗) If you look at eq. (29.10), you will see that synaptic input into a theta
neuron has two effects. First, the drive I is changed: I is replaced by

I + gsyn(t) (vrev − 1/2).

But second, there is the extra term −gsyn(t) sin θ, which by itself would drive
θ towards 0. For a theta neuron, synaptic inputs differ from current inputs
by the presence of the term −gsyn(t) sin θ.
Note that for a Hodgkin-Huxley-like model neuron, a synaptic input is, similarly,
of the form gsyn(t)(vrev − v).
paper [12].
eqs. (29.12), (29.13) with initial condition θ(0) = 0, say, and some large
gsyn (0). The approximation for g∗ is the value of gsyn at the time when
θ reaches π. The initial value gsyn (0) ought to be thought of here as a numer-
ical parameter — the larger it is, the greater is the accuracy with which g∗
is calculated. Choose it so large that there is no change in your plot visible
to the eye when you halve it.
29.8. (∗) The river picture, Fig. 29.5, shows that for theta neurons, firing resumes,
following a pulse of inhibition, as soon as the time-dependent inhibitory con-
ductance, gsyn e−t/τI , has fallen to some value that is approximately indepen-
dent of gsyn and θ(0), although it depends on other parameters. Is this true
for RTM neurons? To see, do the following numerical experiment.
Consider a single RTM neuron, subject to an inhibitory pulse, with parame-
ters as in Fig. 29.4, middle panel, except for gsyn , which will be varied here.
Initialize the neuron at phase ϕ0 = 0.3, say. Record the time P at which it
fires. The inhibitory conductance at the time of spiking is then gsyn e−P/τI .
(You ought to realize that this quantity depends on gsyn in a more complicated
way than meets the eye, since P depends on gsyn.) Plot gsyn e−P/τI as
a function of gsyn ∈ [0.25, 1]. Then repeat for ϕ0 = 0.7. Does gsyn e−P/τI
appear to be approximately independent of ϕ0 and of gsyn?
Chapter 30
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 30) contains supplementary material, which is available to authorized users.
24 This requires that input from the I-cells in fact delays and synchronizes E-cell firing. For
example, an h-current in the E-cells can undermine the mechanism, since hyperpolarization turns
on the h-current, which is depolarizing. However, in this chapter, we will take the E-cells to be
RTM neurons, for which no such complications arise.
256 Chapter 30. The PING Model of Gamma Rhythms
frequency range. However, the strength of the inhibitory synapses and especially
the external drive contribute significantly to setting the frequency; see Table 29.2,
and also Table 30.1.
Experimental evidence supports the idea that the PING mechanism often
underlies gamma rhythms. One of many examples is reproduced in Fig. 30.2, which
shows recordings from the CA1 area of rat hippocampus. (Areas CA1, CA2, and
CA3 are subdivisions of the hippocampus. “CA” stands for “cornu Ammonis,” the
horn of the ancient Egyptian god Amun. Cornu ammonis is an 18th-century term
for a part of the hippocampal formation.) The figure demonstrates that during
gamma rhythms triggered by tetanic stimulation (stimulation by a high-frequency
train of electrical pulses) in CA1, both pyramidal cells and inhibitory interneurons
fire at approximately gamma frequency.
[Figure 30.3: membrane potentials of the two cells as functions of t.]
We denote the period at which each of the two cells in Fig. 30.3 fires by P,
and explore the parameter dependence of P. In analogy with Tables 29.1 and 29.2,
we compute the percentage change in P resulting from a 1% reduction in IE, a
1% increase in gIE, and a 1% increase in τd,I. By this measure, the period of the
rhythm depends far more sensitively on external drive than on the strength or decay
time constant of inhibition.
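The two-cell loop can be caricatured with a normalized LIF E-cell whose spikes trigger, through an assumed instantaneous I-cell relay, an exponentially decaying inhibitory conductance with reversal potential 0. This is only a sketch with illustrative parameter values, not the RTM/WB pair behind Fig. 30.3:

```python
import math

tau_m, I = 10.0, 0.12        # E-cell membrane time constant and drive
g_jump, tau_d = 0.15, 9.0    # inhibition added per I-cell spike; decay time
dt, t_end = 0.001, 300.0

v, g, t = 0.0, 0.0, 0.0
spikes = []
while t < t_end:
    v += dt * (-v / tau_m + I - g * v)   # inhibition reverses at v = 0
    g -= dt * g / tau_d
    t += dt
    if v >= 1.0:
        spikes.append(t)
        v = 0.0
        g += g_jump   # the I-cell is assumed to fire right after the E-cell

isis = [b - a for a, b in zip(spikes, spikes[1:])]
T_intrinsic = tau_m * math.log(tau_m * I / (tau_m * I - 1.0))
print(T_intrinsic, isis[-3:])
```

The inter-spike intervals quickly settle to a period P greater than the intrinsic period, set jointly by the drive and the decaying inhibition, as in Chapter 29.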
We take NE four times bigger than NI, since this is often said to be the approximate ratio of glutamatergic
to GABAergic neurons in the brain [136]. However, the ratio NE/NI is not of great
importance for the properties of PING rhythms if the synaptic strengths are scaled
as described below.
For each neuron in the network, we define a constant drive I. Different neurons
are allowed to have different drives. For any pair of neurons, A and B, in the
network, we define parameters associated with a synapse from A to B (compare
Section 20.2):
gsyn , vrev , τr , τpeak , τd .
The maximal conductance gsyn is allowed to be zero, so not all possible connections
are necessarily present. For simplicity, we do not allow the possibility of two different
synapses from A to B, for instance, a faster and a slower one, here.
In the examples of this section, the parameters are chosen as follows. The i-th
E-cell receives input drive

IE,i = ĪE (1 + σE Xi),   (30.1)

where ĪE and σE ≥ 0 are fixed numbers, and the Xi are independent standard
Gaussian random variables (see Appendix C). Similarly, the j-th I-cell receives
input drive

II,j = ĪI (1 + σI Yj),   (30.2)

where the Yj are independent standard Gaussians. To set the strengths (maximal
conductances) of the synaptic connections from E-cells to I-cells, the E-to-I
connections, we choose two parameters, ĝEI ≥ 0 and pEI ∈ (0, 1]. The maximal
conductance associated with the i-th E-cell and the j-th I-cell is then

gEI,ij = ĝEI ZEI,ij/(pEI NE),   (30.3)

where the ZEI,ij are independent random numbers with ZEI,ij = 1 with probability
pEI, and ZEI,ij = 0 otherwise.
The total number of excitatory synaptic inputs to the j-th I-cell is

Σ_{i=1}^{NE} ZEI,ij.   (30.4)
The expected value of this number is pEI NE (exercise 2), the denominator in (30.3).
Consequently ĝEI is the expected value of the sum of all maximal conductances
associated with excitatory synaptic inputs into a given I-cell (exercise 3). Similarly,
the strength of the synaptic connection from the j-th I-cell to the i-th E-cell is

gsyn,IE,ji = ĝIE ZIE,ji/(pIE NI),   (30.5)

with ZIE,ji = 1 with probability pIE, and ZIE,ji = 0 otherwise.
The strengths of the E-to-E and I-to-I synapses are set similarly.
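The scaling (30.3) is easy to verify numerically: build the random E-to-I conductance matrix and check that the summed excitatory conductance into an I-cell is close to ĝEI on average (a sketch, with NE, NI, ĝEI, pEI as in Fig. 30.4):

```python
import random

random.seed(0)
NE, NI = 200, 50
g_hat_EI, p_EI = 0.25, 0.5

# g[i][j]: maximal conductance of the synapse from E-cell i to I-cell j,
# eq. (30.3): g_hat_EI * Z / (p_EI * NE) with Z ~ Bernoulli(p_EI).
scale = g_hat_EI / (p_EI * NE)
g = [[scale if random.random() < p_EI else 0.0 for j in range(NI)]
     for i in range(NE)]

# Total excitatory conductance into each I-cell; its expected value is
# g_hat_EI (exercise 3).
totals = [sum(g[i][j] for i in range(NE)) for j in range(NI)]
mean_total = sum(totals) / NI
print(mean_total)
```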
We use the same values of τr , τpeak , τd , and vrev for all excitatory synapses.
We denote these values by τr,E , τpeak,E , τd,E , and vrev,E . Similarly, all inhibitory
synapses are characterized by parameters τr,I , τpeak,I , τd,I , and vrev,I .
Figure 30.4 shows the result of a typical network simulation. Starting with
the E-cells initialized asynchronously, as described in Section 24.1, oscillations at approximately
45 Hz develop rapidly, within about 50 ms. Human reaction times are about
200 to 250 ms, so if gamma rhythms are important for stimulus processing [18], then
it must be possible to generate these oscillations in a time much shorter than 200 ms,
as indeed seen in Fig. 30.4. In fact, we gave an argument in [16] suggesting that in
Figure 30.4. Spike rastergram of a PING network (top), and mean membrane
potential of the E-cells (bottom). Spike times of E-cells are indicated in red,
and spike times of I-cells in blue. The parameters are NE = 200, NI = 50, ĪE =
1.4, σE = 0.05, ĪI = 0, ĝEE = 0, ĝEI = 0.25, ĝIE = 0.25, ĝII = 0.25, pEI =
0.5, pIE = 0.5, pII = 0.5, τr,E = 0.5, τpeak,E = 0.5, τd,E = 3, vrev,E = 0, τr,I =
0.5, τpeak,I = 0.5, τd,I = 9, vrev,I = −75. [PING_1]
networks with drive heterogeneity (different neurons receive different drives), PING
oscillations must be created rapidly, within a small number of gamma cycles, if they
are to be created at all.
Properties of activity in E-I-networks have been studied extensively; for ex-
ample, see [11, 12, 61, 155, 156, 168, 183]. In the following sections, we will consider
only a few of many interesting aspects of PING rhythms.
Figure 30.5. As Fig. 30.4, but with all network heterogeneity removed:
σE = 0, pEI = pIE = pII = 1. [PING_2]
Let us ask just how small pEI , pIE , and pII can be before the oscillation is
lost. (Note that pEE plays no role yet because we are setting ĝEE = 0 for now.) For
instance, if we set pEI = pIE = pII = 0.05 in the simulation of Fig. 30.4, we obtain
Fig. 30.6 — there is only a very faint indication of an oscillation left. However, if
we keep the values of pEI , pIE , and pII as in Fig. 30.6, but multiply NE and NI
by 4 (recall that the strengths of individual synapses are then reduced by 4, see
eq. (30.3)), rhythmicity returns; see Fig. 30.7.
Figure 30.6. As Fig. 30.4, but with much greater sparseness of the con-
nectivity: pEI = pIE = pII = 0.05. [PING_3]
For the ability of the network to synchronize and form a rhythm, pEE , pEI ,
pIE , and pII are not as important as pEE NE , pEI NE , pIE NI , and pII NI , the
Figure 30.7. As Fig. 30.6, but for a four times larger network. [PING_4]
Writing gIE = ĝIE/(pIE NI) for the strength of an individual I-to-E synapse, the
total inhibitory conductance gIi affecting the i-th E-cell satisfies

E(gIi) = gIE Σ_{j=1}^{NI} E(ZIE,ji) = gIE pIE NI.   (30.7)

Since the ZIE,ji are independent of each other, their variances sum (see Appendix C):

var(gIi) = (gIE)² Σ_{j=1}^{NI} var(ZIE,ji) = (gIE)² Σ_{j=1}^{NI} [E((ZIE,ji)²) − (E(ZIE,ji))²].

Since the only possible values of ZIE,ji are 0 and 1, (ZIE,ji)² = ZIE,ji, and therefore

(gIE)² Σ_{j=1}^{NI} [E((ZIE,ji)²) − (E(ZIE,ji))²] = (gIE)² Σ_{j=1}^{NI} [E(ZIE,ji) − (E(ZIE,ji))²] = (gIE)² NI (pIE − p²IE).
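A Monte Carlo check of these mean and variance formulas (a sketch; gIE = ĝIE/(pIE NI) denotes the strength of a single I-to-E synapse, with ĝIE = 0.25, pIE = 0.5, NI = 50 as in Fig. 30.4):

```python
import random

random.seed(1)
NI, p_IE = 50, 0.5
g_IE = 0.25 / (p_IE * NI)   # strength of one I-to-E synapse

n_trials = 20000
samples = []
for _ in range(n_trials):
    # g_Ii = g_IE times a sum of NI independent Bernoulli(p_IE) variables
    k = sum(1 for _ in range(NI) if random.random() < p_IE)
    samples.append(g_IE * k)

mean = sum(samples) / n_trials
var = sum((s - mean) ** 2 for s in samples) / n_trials
var_theory = g_IE**2 * NI * (p_IE - p_IE**2)
print(mean, var, var_theory)
```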
Fig. 30.10 looks quite similar to Fig. 30.4; the fact that the I-cells have more drive
than in Fig. 30.4 is of little consequence. However, as ĪI rises from 0.7 (Fig. 30.10)
to 0.9 (Fig. 30.11), rhythmicity is largely lost.
[Figure 30.8: spike rastergram.]
Figure 30.9. Same as Fig. 30.8, but with simulations continued up to time
2000. Only the last 200 ms of simulated time are shown. [PING_6]
If we add weak E-to-E connections with the same time constants as for the E-to-I connections (τr,E = τpeak,E = 0.5 ms,
τd,E = 3 ms), the PING rhythm is barely affected. Stronger E-to-E connections
destroy the rhythm; see Fig. 30.12. This is in contrast with the Wilson-Cowan
model of Chapter 22, which requires recurrent excitation for oscillations.
Exercises
30.1. Vary the baseline parameters perturbed in Table 30.1, and see how the results
change.
30.2. Explain why the expectation of (30.4) is pEI NE .
30.3. Explain why ĝEI is the expected value of the sum of all maximal conductances
associated with excitatory synaptic inputs into a given I-cell.
30.4. (∗) Verify that the rhythm in Fig. 30.4 is largely unchanged when ĝII is set
to zero.
30.5. (∗) Verify that the rhythm in Fig. 30.11 is restored when ĝII is tripled.
30.6. Explain why one would expect that short recurrent excitatory synapses would
not affect PING rhythms much.
30.7. (∗) (†) PING rhythms in our model networks have very regular population
frequencies; that is, the times between population spike volleys are nearly
constant. Experimentally recorded gamma rhythms are much less regular;
see, for instance, the top trace of Fig. 30.2.
Figure 30.10. Same as Fig. 30.4, but with ĪI = 0.7, σI = 0.05. [PING_7]
Figure 30.11. Same as Fig. 30.4, but with ĪI = 0.9, σI = 0.05. [PING_8]
Figure 30.12. Same as Fig. 30.4, but with ĝEE = 0.25, pEE = 0.5. [PING_9]
One can try to introduce more variability by adding to the drives IE,i a sin-
gle discrete Ornstein-Uhlenbeck process S(t) (independent of i), as defined
by eqs. (C.20)–(C.22) in Appendix C.6. This would model global fluctuations
in the excitability of E-cells. In a living brain, such fluctuations could result
from neuromodulation. (In general, the word neuromodulation denotes the
regulation of a whole population of neurons by a diffusely released neuro-
transmitter.)
Explore computationally whether you can set the parameters of the discrete
Ornstein-Uhlenbeck process so that the PING rhythm is not destroyed, but
its frequency becomes significantly more variable.
30.8. (∗) What happens if you make the inhibitory synapses in the simulation of
Fig. 30.4 much stronger, but also much faster, say τr,I = τpeak,I = 0.5 ms,
τd,I = 3 ms? Can you still get a gamma rhythm?
Chapter 31
ING Rhythms
Gamma rhythms can also be generated by the interaction of I-cells alone, without
any involvement of E-cells. For example, in brain slices, gamma rhythms can be
evoked even in the presence of drugs blocking AMPA and NMDA receptors. See
Fig. 31.1 for an example, recorded from rat CA1.
Assuming strong external drive to the I-cells, the mechanism seems, at first
sight, similar to PING: Activity of the I-cells creates inhibition, which silences
the entire population temporarily, and when firing resumes, it is in greater syn-
chrony, as described in Chapter 29.25 Gamma rhythms created in this way were
called Interneuronal Network Gamma (ING) rhythms in [183]; earlier studies of
such rhythms include [174] and [182].
To construct model networks of inhibitory cells, we simply omit the E-cells
from the networks of Chapter 30 and strengthen the external drive to the I-cells.
The resulting networks are capable of generating gamma rhythms, but only un-
der idealized circumstances, namely with little heterogeneity in external drives (σI
small) and very little randomness in synaptic connectivity (pII very close to 1); see
Sections 31.1 and 31.2 for examples.
The fast-firing interneurons involved in generating gamma rhythms are well
known to be connected by gap junctions [57]. When gap-junctional coupling is
added to the model networks, ING rhythms become much more robust; this is
demonstrated with examples in Section 31.3. It is in line with many experimen-
tal and modeling studies that have found gap junctions to drive neurons towards
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 31) contains supplementary material, which is available to authorized users.
25 This requires that inhibitory input in fact delays and synchronizes I-cell firing. For example,
an h-current in the I-cells may undermine the mechanism, just as an h-current in the E-cells can
undermine PING. However, in this chapter, we will take the I-cells to be WB neurons, for which
there are no such complications.
270 Chapter 31. ING Rhythms
synchrony (see, for instance, [93, 153, 160]), and is expected because of the equili-
brating effect of discrete diffusion (see Chapter 21).26
Even in the absence of any heterogeneity, ING networks without gap junctions
can generate clustering, with individual cells firing only on a fraction (most typically
one half) of the population spike volleys. This was shown in [174] (and also, for
purely inhibition-based rhythms at lower frequencies, earlier in [62]). We show
examples in Section 31.4.
In summary, synchrony in networks of inhibitory cells without gap-junctional
coupling seems fairly fragile. In Section 31.5, we illustrate this point in yet another
way, using the example of a pair of abstract oscillators coupled by inhibitory pulses.
26 It is not, however, obvious that the equilibrating effect of discrete diffusion carries over to the
case of spiking neurons, and in fact it is not always true; see [28, Fig. 8].
31.2. Basic Network Simulations 271
[Figure 31.2: membrane potential of a single cell with an inhibitory autapse, as a function of t [ms].]
We denote the period at which the cell in Fig. 31.2 fires by P, and explore the
parameter dependence of P. In analogy with Table 30.1, we compute the percentage
change in P resulting from a 1% reduction in I, a 1% increase in gsyn, and a 1%
increase in τd. Again we find that the period depends more sensitively on external
drive than on the strength or decay time constant of the autapse.
[Figure 31.3: spike rastergram of a network of 100 inhibitory cells, 0 ≤ t ≤ 500 ms.]
Figure 31.4. Same as Fig. 31.3, but with heterogeneous external drive
(different neurons receive different, temporally constant drives): σI = 0.03. [ING_2]
Figure 31.5. Same as Fig. 31.3, but with 15% of synaptic connections
omitted at random (pII = 0.85), and the remaining ones strengthened by the factor
100/85. [ING_3]
Figure 31.6. Same as Fig. 31.5, but now each cell receives inputs from
exactly 85 cells, instead of receiving input from a random number of cells with
mean 85. [ING_4]
Figure 31.7. As in Fig. 31.3, but with σI = 0.05 and pII = 0.5 [ING_5]
31.4 Clustering
Even in purely inhibitory networks without any heterogeneity, synchronization is
somewhat fragile in the absence of gap junctions. There are parameter choices for
Figure 31.8. As in Fig. 31.7, but with ĝgap = 0.1 and pgap = 0.05. [ING_6]
which one sees a breakup of the cells into n > 1 clusters (usually n = 2, but see
exercise 7), with each cluster firing on every n-th population cycle. However, the
clustering behavior is fragile as well: A moderate amount of drive heterogeneity
destroys it, and gap-junctional coupling does not restore it, but instead results in
complete synchronization, even for parameters for which there is clustering in the
absence of heterogeneity. For these reasons, it seems unclear whether clustering in
ING networks could have biological relevance. Nonetheless we will briefly discuss it
here, as another illustration of the fragility of ING rhythms in the absence of gap
junctions.
Wang and Buzsáki [174] demonstrated numerically that in networks without
heterogeneity, clustering is seen when the hyperpolarization following firing is pronounced [174, Fig. 3], especially when the synaptic reversal potential is relatively
high. The hyperpolarization following firing can be made pronounced by slowing
down the variables h and n, which play central roles in ending the action potential;
see [174], and also Exercise 5.6. If we multiply the functions αh , βh , αn , and βn by
1/2 (this amounts to doubling τh and τn , or to reducing the scaling factor φ of [174]
from 5 to 2.5), Fig. 31.3 turns into Fig. 31.9. The red rectangle in Fig. 31.9 is shown
once more, enlarged, in Fig. 31.10. There are two clusters firing alternatingly.27
[Figure 31.9: spike rastergram, neurons 1–100, t ∈ [0, 500] ms.]
Heuristically, one can see why strong spike afterhyperpolarization and a relatively high synaptic reversal potential might counteract synchronization: Inhibition
soon after firing is then depolarizing, since the membrane potential after firing will
27 If you conclude, after looking at Fig. 31.10, that something must be wrong with my code, see exercise 5.
[Figures 31.10 and 31.11: close-up rastergrams, neurons 1–20, t ∈ [400, 500] ms.]
Figure 31.12. Like Fig. 31.11, but with weak, sparse gap junctions: ĝgap =
0.04, pgap = 0.05. [ING_10]
be below the synaptic reversal potential, whereas inhibition arriving soon before
firing will of course be hyperpolarizing. Consequently, the cells that fire first in an
approximately (but not perfectly) synchronous spike volley accelerate each other
by their inhibitory interactions, while slowing down the cells that are behind and
receive the inhibition before firing.
When we introduce heterogeneity in drive (σI = 0.05) in the simulation of
Fig. 31.9, the firing becomes asynchronous; see Fig. 31.11. When we add gap junctions, even quite weak and sparse ones, clustering does not return, but instead the
entire population synchronizes; see Fig. 31.12.
You might wonder whether in a PING rhythm, clustering of the E-cells
couldn’t happen for precisely the same reason for which it happens in ING. In fact
it can; see Chapter 32. For PING rhythms, clustering occurs easily when the E-cells
express28 adaptation currents. It is possible even without adaptation currents, but
requires very rapid, and probably unrealistically rapid, inhibitory feedback; see
Section 32.3.
28 The word “express,” used in this way, is useful neuroscience jargon. A cell in which a certain ionic current is present is said to express that current.
with

H(s) = (1 + tanh(s))/2.    (31.2)
See exercise 8 for the motivation for this definition. Figure 31.13 shows the graph
of g, and the graph of the function G derived from it as described in Chapter 26.
Note that there is qualitative similarity between the graph of g and Fig. 25.10, but
there is also a significant difference: In Fig. 25.10, g(0) < 0 (even an inhibitory pulse
that arrives right at the moment at which the membrane potential crosses −20 mV
from above will retard the next spike), whereas in the framework of Chapter 26,
g(0) = g(1) = 0. The graph of G shows that synchrony is weakly attracting (in
fact, G′(0) ≈ 0.9467, see exercise 9), but anti-synchrony is attracting as well. This
is reminiscent of the network of Figs. 31.9 and 31.10, where clustering is a stable
state, but so is synchrony (see exercise 6b). For further discussion of this example,
see exercise 10.
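The stability criterion used here — a fixed point φ∗ of G is attracting when |G′(φ∗)| < 1 (see Appendix B) — is easy to explore numerically. The map G below is a toy example, an assumption for illustration rather than the G of Fig. 31.13, chosen to have the same qualitative structure: stable fixed points at synchrony and anti-synchrony, unstable ones in between.

```python
import math

def G(phi):
    """Toy map with the qualitative structure of Fig. 31.13:
    stable fixed points at 0 and 0.5, unstable ones at 0.25 and 0.75."""
    return phi - 0.05 * math.sin(4 * math.pi * phi)

def fixed_points(G, n=2000, tol=1e-10):
    """Locate fixed points of G on [0, 1) by bisection on G(phi) - phi,
    and classify each as stable iff |G'(phi*)| < 1 (centered difference)."""
    f = lambda p: G(p) - p
    roots = []
    grid = [i / n for i in range(n + 1)]
    for a, b in zip(grid, grid[1:]):
        if f(a) == 0.0:
            roots.append(a)
        elif f(a) * f(b) < 0:
            lo, hi = a, b
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if f(lo) * f(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    h = 1e-6
    return [(p, abs((G(p + h) - G(p - h)) / (2 * h)) < 1.0) for p in roots]

for p, stable in fixed_points(G):
    print(f"phi* = {p:.4f}: {'stable' if stable else 'unstable'}")
```

Bisection on G(φ) − φ locates the fixed points; a centered difference approximates G′ there.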
Figure 31.13. The function g defined by eqs. (31.1) and (31.2), and the
function G derived from it as described in Chapter 26. Fixed points of G correspond
to possible phase-locked states of the two oscillators. Synchrony (solid green points)
and anti-synchrony (solid blue point) are stable. There is another possible, but
unstable phase-locked state (red circles). [ABSTRACT_PULSE_COUPLING_INH]
[Figure 31.14: spike rastergrams for I E = 1.9, 2, and 2.1, t ∈ [0, 500] ms.]
Figure 31.14 shows a case in which the answer to the above question is “yes.”
Note that the firing looks very similar to that in a PING network, in spite of the
fact that there is no E-to-I coupling. There is a broad range of drives to the E-cells
for which similar patterns are obtained. As the drive to the E-cells gets stronger,
the phase relation between E- and I-cells changes, with the E-cells firing earlier in
the cycle of the I-cells; see upper panel of Fig. 31.15.
Then, as a threshold value is crossed, there is an abrupt transition from phase-
locking of the E-cells with the I-cell to a “phase walkthrough” pattern, shown in the
middle panel of Fig. 31.15. The E-cells fire earlier and earlier in the I-cell cycle, until
they fire twice in one cycle, then return to firing later in the cycle, and the gradual
shift to earlier phases resumes. For yet stronger drive, there is greater irregularity;
see bottom panel of Fig. 31.15. To separate the effect of a strong drive to the E-
cells from the effects of heterogeneity in drives and randomness in connectivity,
drive heterogeneity and randomness of connectivity were omitted in Fig. 31.15.
Exercises
31.1. (∗) Compute tables similar to Table 31.1 for other parameter values. Also
investigate how sensitively P depends on other parameters, for instance, on
gL.
31.2. (∗) Add gap junctions with ĝgap = 0.1 and pgap = 0.05 to the codes gen-
erating Figs. 31.4 and 31.5, and see what you get. (Hint: This is already programmed in the codes generating Figs. 31.4 and 31.5; all you need to do is make ĝgap non-zero!)
31.3. (∗) In the code that generates Fig. 31.7, change the initialization so that each
neuron is initialized at a random phase uniformly distributed not in [0, 1],
but in [0, 0.02]. Thus the population is nearly synchronous at the beginning.
How does the rastergram change? Does (approximate) synchrony appear to
be stable?
31.4. (∗) To what extent does the rhythm in Fig. 31.8 depend on the chemical
synapses? Would the neurons similarly synchronize, because of the gap
junctions, even if ĝII were zero?
31.5. (∗) When you look at Figs. 31.9 and 31.10, do you think something must
be wrong? I did, when I first saw these figures. There are strangely long
sequences of neurons with consecutive indices that belong to the same cluster. It looks as though neurons with nearby indices were correlated. But
membership in clusters should really be random here, shouldn’t it? There
should be no correlation between the cluster that neuron i belongs to, and
the cluster that neuron i + 1 belongs to.
To reassure yourself that nothing is alarming about Fig. 31.9, define

f(i) = 1 with probability 1/2, and f(i) = 2 with probability 1/2, for i = 1, 2, . . . , 20,
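The point of the exercise — that runs of four or more equal consecutive labels are entirely expected in twenty fair coin flips — can be confirmed with a short simulation (sketched in Python rather than the book's Matlab):

```python
import random

random.seed(0)

def longest_run(seq):
    """Length of the longest block of equal consecutive entries."""
    best = cur = 1
    for a, b in zip(seq, seq[1:]):
        cur = cur + 1 if a == b else 1
        best = max(best, cur)
    return best

trials = 10_000
runs = [longest_run([random.choice((1, 2)) for _ in range(20)])
        for _ in range(trials)]
frac_ge_4 = sum(r >= 4 for r in runs) / trials
print(f"mean longest run: {sum(runs) / trials:.2f}, "
      f"P(longest run >= 4) ~ {frac_ge_4:.2f}")
```

In a typical trial, the longest run of identical labels has length around four, so long stretches of neurons in the same cluster are no cause for alarm.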
[Figure 31.16: graph of g(ϕ) for ϕ ∈ [0, 1]; g(ϕ) ≤ 0.]
31.10. (†) Figure 31.16 shows a phase response function qualitatively similar to that
in Fig. 31.13, but with simpler behavior near ϕ = 0 and 1. The behavior of g
near 0 and 1 determines the stability of synchrony; see (26.10). The stability
of non-synchronous phase-locked states, on the other hand, is determined
by the behavior away from the edges. So by analyzing non-synchronous
phase-locked states for the PRC shown in 31.16, we can understand non-
synchronous phase-locked states for the PRC in Fig. 31.13.
Assume that g(ϕ) = εg0(ϕ), where ε ∈ (0, 1], and g0 = g0(ϕ), ϕ ∈ [0, 1], is a continuously differentiable function with g0(0) = g0(1) = 0, g0(ϕ) < 0 for ϕ ∈ (0, 1), and |g0′(0)| < g0′(1).
(a) Show that synchrony is unstable for sufficiently small ε ∈ (0, 1].
(b) Assume that the function G, derived from g as in Chapter 26, has finitely
many fixed points only. Show that there is a stable non-synchronous phase-
locked state.
31.11. (∗) Test what happens when one makes I E much smaller in the simulations
of Fig. 31.15. Find various possible entrainment patterns by varying I E .
31.12. (∗) Investigate ING rhythms with briefer, stronger, shunting inhibition, and
compare their properties with those of the ING rhythms studied in this
chapter, using variations on the codes generating the figures in this chapter.
Chapter 32
Weak PING Rhythms
In the PING model of Chapter 30, each E-cell and each I-cell fires once on each
cycle of the oscillation. This is not what is usually seen experimentally in gamma
rhythms. Much more typically, each participating pyramidal cell fires on some,
but not all population cycles. The same is true for the participating inhibitory
interneurons, although they usually participate on a larger proportion of population
cycles. Figure 32.1 reproduces a small part of [34, Fig. 1] to illustrate this point.
In the experiment underlying [34, Fig. 1], the gamma oscillations were induced
by application of kainate, which activates glutamate receptors called kainate receptors. Oscillations of this sort can persist for very long times in the in vitro preparation (on the order of hours), and are therefore called persistent gamma oscillations [21, 33].
We call PING-like models in which the E-cells participate “sparsely,” i.e., on
a fraction of the population cycles only, weak PING models. By contrast, we will
refer to the models of Chapter 30 as strong PING models. One way of obtaining
weak PING oscillations is to make the drive to the E-cells stochastic [10]. In such a
model, each individual E-cell participates only on those population cycles on which
it happens to have sufficient drive. We refer to this as stochastic weak PING. A
far more detailed stochastic model of gamma rhythms with sparse participation
of the E-cells is due to Roger Traub; see, for instance, [158]. In Traub’s model,
individual model neurons have multiple compartments. The gamma rhythm is
driven by stochastic activity originating in the pyramidal cell axons and amplified
by axo-axonal gap junctions. One can think of the stochastic weak PING model
studied here as a very much simplified caricature of Traub’s model.
An alternative way of obtaining sparse participation of E-cells is to add adaptation currents to the E-cell model, which can prevent individual E-cells from firing at or even near gamma frequency [89, 98, 110]. We call this adaptation-based
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 32) contains supplementary material, which is available to authorized users.
282 Chapter 32. Weak PING Rhythms
weak PING. We illustrate the stochastic and adaptation-based weak PING mechanisms with numerical examples in Sections 32.1 and 32.2, and discuss differences in
their properties.
LeMasson and Kopell [94] proposed an h-current-based weak PING mechanism.
Figure 32.1. Two voltage traces from Fig. 1 of [34]. These are recordings
from a slice of rat auditory cortex. Gamma oscillations are induced by applying
kainate. Fast-firing interneurons fire on almost every gamma cycle (bottom trace),
while pyramidal cells fire sparsely (top trace). Scale bars: 200 ms and 25 mV.
Copyright (2004) National Academy of Sciences, USA. Reproduced with permission.
In their model, each E-cell has an h-current that builds up as the cell is hyperpolarized by the activity of the I-cells, until it reaches a level that forces a spike of the
E-cell. Abstractly, adaptation-based and h-current-based weak PING are similar.
In the former, a hyperpolarizing current is activated by firing, then gradually decays
in the absence of firing; in the latter, a depolarizing current is inactivated by firing,
then gradually recovers in the absence of firing.
where τd,q,E is set so that τpeak,E has the desired value (we always use τpeak,E = 0.5 ms, as in Chapter 30). We discretize all differential equations using the midpoint method, even eq. (32.1), which we could of course also solve analytically. At the end of the time step, qstoch jumps to 1 with probability (see eq. (C.30))

(fstoch/1000) Δt.
The jumps in the variables qstoch associated with different E-cells are independent
of each other. A second variable associated with each E-cell is the synaptic gating
variable sstoch . It satisfies (see eq. (20.7))
dsstoch/dt = qstoch (1 − sstoch)/τr,E − sstoch/τd,E.
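The update rule just described can be sketched as follows. This is a Python rendering (the book's codes are in Matlab), and the time constants below are illustrative assumptions, with τd,q chosen small so that the input pulses are brief:

```python
import random

random.seed(1)

dt = 0.01            # ms
f_stoch = 60.0       # Hz: rate of the Poisson train of input pulses
tau_d_q = 0.1        # ms: decay of q_stoch (assumed value)
tau_r, tau_d = 0.1, 3.0   # ms: rise and decay of s_stoch (assumed)

def rhs(q, s):
    """Right-hand sides of the ODEs for q_stoch and s_stoch."""
    return -q / tau_d_q, q * (1.0 - s) / tau_r - s / tau_d

q = s = 0.0
n_pulses = 0
for _ in range(int(500 / dt)):            # 500 ms of simulated time
    # one midpoint-method (RK2) step
    dq1, ds1 = rhs(q, s)
    dq2, ds2 = rhs(q + 0.5 * dt * dq1, s + 0.5 * dt * ds1)
    q += dt * dq2
    s += dt * ds2
    # at the end of the step, q_stoch jumps to 1 with
    # probability (f_stoch / 1000) * dt, as in eq. (C.30)
    if random.random() < f_stoch / 1000.0 * dt:
        q = 1.0
        n_pulses += 1

print(n_pulses)    # on average f_stoch * 0.5 = 30 pulses in 500 ms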
Figure 32.2. Upper panel: Like upper panel of Fig. 30.4, but I E has been
reduced from 1.4 to 0.5, and instead the E-cells are driven by Poisson sequences of
excitatory synaptic input pulses. The parameters characterizing the stochastic input
are fstoch = 60, g stoch = 0.03. Lower panel: Time-dependent mean firing frequency
of E-cells, as defined in eq. (32.2). The overall mean firing frequencies of the E-
and I-cells (see eq. (32.3)) are fˆE ≈ 27.4 Hz and fˆI ≈ 27.0 Hz. [POISSON_PING_1]
At any given time, most E-cells will be near rest. Therefore plotting the average
membrane potential of the E-cells, as we did in most of the figures of Chapter 30,
is not the best way of displaying the rhythmicity that emerges in the network here.
We plot instead the time-varying mean firing frequency, fE = fE (t), of the E-cells,
which we define as follows:
fE (t) = 1000 × (number of spikes of E-cells in the interval [t − 5, t + 5]) / (10 NE ).    (32.2)
For t within less than 5 ms of the start or the end of the simulation, we leave fE (t)
undefined. The factor of 1000 in eq. (32.2) is needed because we measure time in
ms, but frequency in Hz. Finally, we also define the overall mean firing frequencies,
fˆE and fˆI , of the E- and I-cells. The definition of fˆE is
fˆE = 1000 × (number of spikes of E-cells overall) / (time simulated (in ms) × NE ),    (32.3)
and the definition of fˆI is analogous. We want to set parameters so that the E-cells
have mean firing frequencies far below those of the I-cells, so fˆE ≪ fˆI .
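Definitions (32.2) and (32.3) translate directly into code. The sketch below (Python, applied to a toy spike list rather than the book's simulation output) computes both quantities:

```python
def f_E(t, spike_times, N_E, half_width=5.0):
    """Time-dependent mean firing frequency, eq. (32.2): count the
    spikes in [t - 5, t + 5] (ms) and convert to Hz per cell."""
    n = sum(1 for ts in spike_times
            if t - half_width <= ts <= t + half_width)
    return 1000.0 * n / (2.0 * half_width * N_E)

def f_hat(spike_times, N_E, T):
    """Overall mean firing frequency, eq. (32.3); T = time simulated in ms."""
    return 1000.0 * len(spike_times) / (T * N_E)

# toy data: 2 E-cells, each firing every 25 ms for 500 ms (40 Hz)
spikes = [t for _cell in range(2) for t in range(25, 501, 25)]
print(f_hat(spikes, N_E=2, T=500.0))    # 40.0
print(f_E(250.0, spikes, N_E=2))        # 100.0: one volley in the window
print(f_E(237.0, spikes, N_E=2))        # 0.0: no volley in the window
```

Note how fE (t) jumps between 0 and 100 Hz as the 10 ms window slides past the spike volleys; this oscillation of fE (t) at the population frequency is what the lower panels of Figs. 32.2 and 32.3 display.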
Figure 32.2 shows results of a first network simulation. There is a very clean
oscillation visible in Fig. 32.2, but it is a bit slow for a gamma oscillation (below
30 Hz), and the E- and I-cells fire at approximately equal mean frequencies, once per
population cycle. To reduce the E-cell participation rate, we raise the excitability of
the I-cells, by increasing I I from 0 to 0.8. This should cause the I-cell spike volleys
to be triggered more quickly, aborting some of the E-cell spiking activity. Indeed
that is precisely what happens; see Fig. 32.3, where the E-cells participate on fewer
than every second population cycle, on the average: fˆE ≈ 16.3 Hz, fˆI ≈ 39.6 Hz.
(fˆI is approximately the frequency of the population rhythm.)
Figure 32.3. Like Fig. 32.2, but with I I = 0.8, σI = 0.05. Now the overall
mean firing frequencies are fˆE ≈ 16.3 Hz, fˆI ≈ 39.6 Hz. [POISSON_PING_2]
How changes in the network parameters affect stochastic weak PING is not
satisfactorily understood at the present time. However, not surprisingly, sparseness
32.1. Stochastic Weak PING 285
of E-cell firing is promoted by factors that make inhibitory feedback faster and more
effective. Such factors include large I I (but not too large, so that the I-cells still
fire only in response to E-cell activity), large ĝEI and ĝIE , and small ĝII (yet still
large enough to keep the I-cells from firing without being prompted by the E-cells).
Figure 32.4 shows an example in which I deliberately chose parameters to make E-
cell firing sparse. I left out all network heterogeneity in this example, to make sure
that fˆE is not affected by suppression of E-cells with low external drive, but only
by cycle skipping resulting from random lulls in the Poisson sequence of excitatory
input pulses. Individual E-cells participate on approximately every fifth population
cycle on the average in Fig. 32.4.
[Figure 32.4: spike rastergram (top) and f E (t) (bottom), t ∈ [0, 500] ms.]
It is interesting to plot a single E-cell voltage trace; see Fig. 32.5. (I intentionally picked one that fires four spikes in the simulated time interval; most fire
fewer than four.) When comparing with the upper trace of Fig. 32.1, you will see
significant differences. In particular, the firing of the pyramidal cell in Fig. 32.1 is
fairly regular, while in Fig. 32.5, the inter-spike intervals vary greatly. Also, the
pyramidal cell in Fig. 32.1 has a rising membrane potential between spikes (with
fluctuations superimposed), whereas the voltage trace in Fig. 32.5 shows oscillations around a roughly constant mean between spikes. Both of these discrepancies
could be explained by an adaptation current, brought up by firing and gradually
decaying between spikes. We will next consider networks in which there is such an
adaptation current in the E-cells.
[Figure 32.5: voltage trace of a single E-cell, t ∈ [0, 500] ms.]
[Figure 32.6: spike rastergram (top) and a single E-cell voltage trace (bottom), t ∈ [0, 1000] ms.]
Figure 32.7. Close-up of the rastergram in Fig. 32.6, showing only the
spike times of the first 20 E-cells. The vertical lines indicate “local maxima of the
overall E-cell firing rate.” (For the precise definition of that, consult the Matlab
code that generates the figure.) [M_CURRENT_PING_1_CLOSEUP]
our parameter choices as described so far, some I-cells fire without being prompted
by an E-cell spike volley. Raising ĝII prevents this. Figure 32.6 shows a simulation
with these parameter choices. In two regards, there appears to be a better match
with Fig. 32.1 in Fig. 32.6 than in Fig. 32.5: There is now an upward trend between
spikes (although a slight one), and the firing of the E-cell is regular, just much
slower than the population rhythm.
A closer look at the rastergram reveals — not surprisingly, considering the
regularity of the voltage trace in Fig. 32.6 — approximate clustering of the E-cell
action potentials. Figure 32.7 is a close-up of Fig. 32.6, and demonstrates the clustering. The spiking pattern is shown for 20 E-cells in Fig. 32.7, cells 51 through 70
in the network. (Cells 1 through 50 are I-cells.) Some E-cells fire on one out of
three population cycles, others on one out of four, and some cells toggle between
these two patterns.
Shortening the decay time constant of inhibition in Fig. 32.6, while leaving all
other parameters unchanged, results in a higher population frequency, but in similar
E-cell firing rates, and therefore sparser E-cell participation, i.e., more clusters;
see Fig. 32.8. Shortening the decay time constant of adaptation (here, of the M-
current), while leaving all other parameters unchanged, results in a slight increase
in the population frequency, but in a substantial increase in the E-cell firing rate,
and therefore less sparse E-cell participation, i.e., fewer clusters; see Fig. 32.9. The
population fires another volley as soon as inhibition falls to a sufficiently low level,
while an individual cell fires another action potential as soon as its adaptation (M-
current) falls to a sufficiently low level. For detailed analyses of the parameter
dependence of the number of clusters and the population frequency in adaptation-
based weak PING, see [89] and [98].
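This last observation suggests a back-of-the-envelope estimate: if the population refires when the decaying inhibition reaches a threshold, and an individual cell refires when its decaying adaptation reaches a threshold, then the number of clusters is roughly the ratio of the two recovery times. The numbers below are illustrative assumptions, not values fitted to Figs. 32.7–32.9:

```python
import math

def recovery_time(tau, peak, threshold):
    """Time for a quantity decaying as peak * exp(-t/tau) to reach
    the given threshold: tau * ln(peak / threshold)."""
    return tau * math.log(peak / threshold)

# illustrative numbers (assumptions): both gates decay from 1 to 0.1
tau_inh, tau_adapt = 9.0, 40.0
T_pop  = recovery_time(tau_inh,   1.0, 0.1)  # inhibition gates the population
T_cell = recovery_time(tau_adapt, 1.0, 0.1)  # adaptation gates each cell
print(round(T_cell / T_pop))                 # ~4 clusters

# halving the adaptation decay time: cells recover sooner, fewer clusters
print(round(recovery_time(20.0, 1.0, 0.1) / T_pop))    # ~2
# shortening the inhibitory decay time: faster rhythm, more clusters
print(round(T_cell / recovery_time(4.5, 1.0, 0.1)))    # ~9
```

With equal peak-to-threshold ratios the logarithms cancel, and the cluster count reduces to τadapt/τinh; the two perturbations mirror, qualitatively, the effects seen in Figs. 32.9 and 32.8 respectively.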
For the parameters of Fig. 32.6, the clustered solution shown in the figure is
not the only possible clustered solution. In particular, if all cells are initialized
approximately equally, there will be one dominating cluster, much larger than the
others. To demonstrate this, assume that for −200 ≤ t ≤ 0, the drives I E and I I
are zero, with all other parameters as in Fig. 32.6. By time t = 0, all cells then
come to rest to very good approximation, regardless of how they are initialized at
Figure 32.8. Same as Fig. 32.7, but with τd,I = 4.5. The overall
mean frequency of E-cells is fˆE ≈ 10 Hz, and that of the I-cells is fˆI ≈ 38 Hz.
[M_CURRENT_PING_2_CLOSEUP]
Figure 32.9. Same as Fig. 32.7, but with the decay time constant of the
M-current halved. The overall mean frequency of E-cells is fˆE ≈ 15 Hz, and that of
the I-cells is fˆI ≈ 33 Hz. [M_CURRENT_PING_3_CLOSEUP]
time t = −200. Assume that at time t = 0, I E is raised to 3.0, and I I to 0.7, the
values of Fig. 32.6. The result is shown in Fig. 32.10. There are three clusters —
two smaller ones, and a dominating one of approximately twice the size of the two
smaller ones.
Figure 32.10. Simulation as in Fig. 32.6, but with all cells starting at time
0 from the equilibrium positions corresponding to I E = I I = 0.
[M_CURRENT_PING_1_FROM_REST]
32.3. Deterministic Weak PING Without Any Special Currents 289
affecting both neurons equally. In a network of two LIF neurons with inhibitory
coupling, there would be two different gating variables, s1 and s2 .
We refer to a solution of (32.4)–(32.9) as a cluster solution if the firing of the
two E-cells alternates. We call a cluster solution anti-synchronous if the firing of
the first E-cell always occurs exactly in the middle of the inter-spike interval of the
second E-cell, and vice versa.
To study phase-locking in our system, we assume that the first E-cell fires at
time 0, so v1 (0+0) = 0 and s(0+0) = 1. We use the letter x to denote v2 (0+0), and
assume 0 < x < 1. We denote by T1 the smallest positive time with v2 (T1 − 0) = 1,
and define ψ(x) = v1 (T1 ). We record some simple properties of the function ψ in
the following proposition; compare Fig. 32.12.
Proposition 32.1. (a) The function ψ = ψ(x), 0 < x < 1, is strictly decreasing and continuous, with 0 < ψ(x) < 1 and lim_{x↘0} ψ(x) = 1. (b) The limit lim_{x↗1} ψ(x) equals 0 if 0 ≤ gI ≤ I − 1/τm, and lies strictly between 0 and 1 if gI > I − 1/τm.
Proof. Exercise 4.
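The maps ψ and φ are easy to compute numerically. Since eqs. (32.4)–(32.9) fall on a page not reproduced here, the sketch below (Python) guesses a minimal version of the model: two LIF neurons with threshold 1 and reset 0, obeying dv/dt = −v/τm + I − gI s, where the shared inhibitory gating variable s decays with time constant τI and is reset to 1 whenever either cell fires; the parameter values are those quoted for Figs. 32.12 and 32.13.

```python
def psi_phi(x, tau_m=10.0, I=0.12, g_I=0.05, tau_I=5.0, dt=2e-3):
    """Return (psi(x), phi(x)).  Cell 1 has just fired: v1 = 0, s = 1,
    and cell 2 sits at v2 = x.  psi(x) is v1 at the moment cell 2
    reaches threshold; phi(x) is v2 at the moment cell 1 next does."""
    v1, v2, s = 0.0, x, 1.0
    while v2 < 1.0:                     # integrate until cell 2 fires
        v1 += dt * (-v1 / tau_m + I - g_I * s)
        v2 += dt * (-v2 / tau_m + I - g_I * s)
        s  += dt * (-s / tau_I)
    psi = v1
    v2, s = 0.0, 1.0                    # cell 2 fires: reset v2 and s
    while v1 < 1.0:                     # ... until cell 1 fires again
        v1 += dt * (-v1 / tau_m + I - g_I * s)
        v2 += dt * (-v2 / tau_m + I - g_I * s)
        s  += dt * (-s / tau_I)
    return psi, v2

# psi is strictly decreasing with values in (0, 1), as in part (a):
print(psi_phi(0.2)[0], psi_phi(0.8)[0])

# iterating phi from any starting point converges (phi is increasing)
# to a fixed point x*, a phase-locked state:
x = 0.3
for _ in range(100):
    x = psi_phi(x)[1]
print(x)
```

Under this guessed model the iteration settles on an interior fixed point of φ, in line with the stable fixed point x∗ of φ discussed in this section.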
[Figure 32.12: graph of ψ(x), x ∈ (0, 1).]
(see exercise 5). For the parameters of Fig. 32.12, the graph of φ is shown in
Fig. 32.13. In words, if the first E-cell fires, and the second E-cell is at membrane potential x ∈ (0, 1), then the next time when the first E-cell fires, the second
E-cell will be at membrane potential φ(x) ∈ (0, 1).
The limits of ψ and φ as x ↘ 0 or x ↗ 1 exist by Proposition 32.1. We
denote them by ψ(0), ψ(1), φ(0), and φ(1). There is a possible point of confusion
here, which we will address next. Note that in Fig. 32.13, φ(0) is not zero. In
fact, Proposition 32.1 implies that φ(0) = 0 if and only if gI ≤ I − 1/τm , and
[Figure 32.13: graph of φ(x), x ∈ (0, 1).]
in Fig. 32.13, gI = 0.05 > I − 1/τm = 0.02. You might think that this should
imply that perfect synchrony of the two E-cells is not a possibility. This is not
correct. Perfect synchrony is always a possibility in this model. If the two E-
cells fire at exactly the same time once, the feedback inhibition will affect them in
exactly the same way, and they will therefore always fire in synchrony in the future.
However, if they fire only in approximate synchrony, the one that fires earlier can
very substantially delay the one that fires later. So φ(0) > 0 does not imply that
synchrony is impossible, but it does imply that synchrony is unstable.
The solution of (32.4)–(32.9) is a cluster solution if and only if x = v2 (0 + 0)
is a fixed point of φ, and an anti-synchronous solution if and only if x is a fixed
point of ψ. The fixed point x∗ in Fig. 32.13 is stable because |φ′(x∗)| < 1; see
Appendix B. It is the same as the fixed point in Fig. 32.12. It corresponds to the
anti-synchronous solution, which is stable here.
In Figs. 32.12 and 32.13, one would have to choose gI ≤ 0.02 for ψ(1) to
become 0, and therefore φ(0) to become 0 and φ(1) to become 1. In Fig. 32.14,
we show ψ and φ for the same parameters as in Figs. 32.12 and 32.13, but with gI
lowered to 0.015. Now φ(0) = 0, but φ′(0) > 1, and this implies that synchrony of
the two E-cells is still unstable. The anti-synchronous solution is still stable.
Figure 32.14. The functions ψ and φ defined in this section, for τm = 10,
I = 0.12, gI = 0.015, τI = 5. Here gI is so small that ψ(1) = 0, and therefore
φ(0) = 0 and φ(1) = 1. Synchrony of the two E-cells is still unstable. [PLOT_PSI_PHI]
This discussion suggests that in an E-I network with a very rapid inhibitory
response, the E-cells will cluster, not synchronize. PING rhythms are possible only
because the inhibitory response is not instantaneous. For instance, if a small delay
between the firing of an E-cell and the inhibitory response (the reset of s to 1) is
added to our model, synchrony becomes stable; see exercise 6.
Adaptation currents amplify the tendency towards clustering by holding back
cells that are behind, since those cells have a stronger active adaptation current.
Exercises
32.1. (∗) In the simulation of Fig. 32.4, what happens if you double (a) fstoch , or
(b) gstoch ? How are fˆE and fˆI affected?
32.2. (∗) (†) Produce a figure similar to Fig. 32.4, but now using independent discrete Ornstein-Uhlenbeck processes (see eqs. (C.20)–(C.22) in Appendix C.6)
in place of the independent Poisson sequences of excitatory input pulses used
in Section 32.1. Note that this is different from what you did in exercise 30.7.
There the same Ornstein-Uhlenbeck process was used for all neurons. Here
the Ornstein-Uhlenbeck processes used for different neurons are independent
of each other. Unlike the global stochastic drive in exercise 30.7, this sort
of cell-specific stochastic drive cannot produce strongly varying population
frequencies, but it can produce sparse participation of the E-cells.
32.3. (∗) How does Fig. 32.7 change (a) when ĝIE is doubled, (b) when g M is
doubled?
32.4. (†) Prove Proposition 32.1.
32.5. Explain eq. (32.10).
32.6. (∗) In the code that generates Fig. 32.14, add a delay of 2 ms between the firing
of an E-cell and the inhibitory response. Show that this renders synchrony of
the two E-cells stable, although anti-synchronous clustering remains stable.
32.7. (∗) (†) Generalize the model given by (32.4)–(32.9) to N > 2 LIF neurons,
again with common feedback inhibition triggered immediately when just one
of the LIF neurons fires. Initializing at random, does one typically obtain
clustering, splay state solutions, or what else?
Chapter 33
Beta Rhythms
longer-lasting, or (2) make inhibition stronger, or (3) lower the external drive to
the E-cells. These three possibilities are illustrated by Figs. 33.1–33.3.
One must make the decay time constant of the I-to-E synapses very much
longer (from 9 ms to 90 ms) to turn the gamma rhythm of Fig. 30.4 into the beta
Figure 33.1. Spike rastergram of an E-I network (top), and mean membrane potential of the E-cells (bottom). All parameters as in Fig. 30.4, except the
decay time constant of I-to-E synapses is 90 ms here, while it was 9 ms in Fig. 30.4.
(The decay time constant of I-to-I synapses is still 9 ms, as in Fig. 30.4.) Note that
the simulated time interval is twice as long as in Fig. 30.4. [PINB_1]
Figure 33.2. Spike rastergram of an E-I network (top), and mean membrane potential of the E-cells (bottom). All parameters as in Fig. 30.4, except
ĝIE = 10 here, while ĝIE = 0.25 in Fig. 30.4. [PINB_2]
rhythm of Fig. 33.1. Also notice that the synchrony of the E-cells is fairly poor in
Fig. 33.1. This is an effect of heterogeneity in external drives to the E-cells, and
33.2. A Period-Skipping Beta Rhythm, and Cell Assemblies 295
heterogeneity in the number of inhibitory synaptic inputs per E-cell. The same
level of heterogeneity has little effect in Fig. 33.2 (compare exercise 1). However,
to turn the gamma rhythm of Fig. 30.4 into the beta rhythm of Fig. 33.2, one must
strengthen I-to-E inhibition by a very large factor.
Figure 33.3. Spike rastergram of an E-I network (top), and mean membrane potential of the E-cells (bottom). All parameters as in Fig. 30.4, except
I E = 0.4 here, while I E = 1.4 in Fig. 30.4. [PINB_3]
Figure 33.4. Clustered PING rhythm similar to that in Fig. 32.6. The
strength of the M-current is chosen deliberately here to create two clusters: g M = 0.5.
To make the details of the clustering easier to see, the number of neurons in the
network is reduced: NE = 40, NI = 10. To obtain a network comparable to the
larger one, the reduction in NE and NI must be accompanied by an increase in
the connection probabilities pEI , pIE , pII (see Section 30.3). Since sparseness and
randomness of connectivity is not the main focus here, we simply use all-to-all
connectivity. All other parameters are as in Fig. 32.6. The population frequency is
slightly below 40 Hz. [M_CURRENT_PING_4]
The second central ingredient in the model of [181] and [122] is plasticity.
The gamma oscillation was shown in [181] to lead to a strengthening of excitatory
synaptic connections among the pyramidal cells participating in the oscillation.
The effect is that pyramidal cells that participate in the gamma oscillation don’t
cluster; they (approximately) synchronize during the later phase of the experiment;
see Fig. 33.5. Note that the transition from Fig. 33.4 to Fig. 33.5 illustrates that
recurrent excitation can lower the network frequency.
The E-to-E connectivity introduced here is symmetric: The connection from
B to A is strengthened just as much as the connection from A to B. As pointed out
earlier, this is not quite in line with the original Hebbian hypothesis, and certainly
not with STDP as described in [68], but it is in line with yet another variation on
the Hebbian hypothesis, formulated, for instance, in [32]: (a) Temporal correlation
of pre- and post-synaptic activity will lead to synaptic strengthening, and (b) lack
of correlation will lead to synaptic weakening.
In both the experiments and the model of [181] and [122], some E-cells participated in the gamma oscillation, while others did not. Following [122], we call the two groups the EP -cells and the ES -cells (with P standing for participating, and S for suppressed).
[Figures 33.5 and 33.6: spike rastergrams.]
caused by the gamma oscillation. In [122], it was assumed that the ES -to-I synapses
are weakened during the gamma oscillation, when the ES -cells are silent and the I-
cells are active. Indeed, if we halve the strength of the ES -to-I synapses in Fig. 33.6,
fairly clean temporal separation of EP -cells and ES -cells results; see Fig. 33.7.
It is easy to see why weakening the ES -to-I synapses will tend to promote
temporal separation of EP - and ES -cells. Let us call the gamma cycles on which
the EP -cells fire the on-beats, and the gamma cycles on which they don’t fire the
off-beats. Weakening the ES -to-I synapses causes the I-cell spike volleys on the
off-beats to occur slightly later, and thereby allows more ES -cells to fire on the
off-beats. This falls short of explaining why the separation should become as clean
as in Fig. 33.7, but in fact it is not always as clean; see exercise 2.
Figure 33.7. Like Fig. 33.6, with the strengths of the synapses from E-cells
11 through 30 (the ES -cells) cut in half. [M_CURRENT_PING_7]
Figure 33.8. Like Fig. 33.6, with the external drive to the EP -cells (but
not the ES -cells) raised by 20%. [M_CURRENT_PING_8]
We think of the EP -cells as forming a Hebbian cell assembly. From this point
of view, what is interesting about Figs. 33.7 and 33.8 is that plastic changes help the
cell assembly, which started out firing at gamma frequency, survive the gamma-beta
transition caused by the rising M-current.
Figure 33.9. [Spike rastergram, 0 ≤ t ≤ 500 ms; caption lost in extraction.]
Exercises
33.1. (∗) A PING rhythm can be slowed down to beta frequency by increasing τI ,
the decay time constant of inhibition (Fig. 33.1), or by raising ĝIE , the pa-
rameter determining the strength of I-to-E synapses (Fig. 33.2). Comparison
of the two figures shows that the effects of drive heterogeneity become large
when τI is raised, not when ĝIE is raised. We won’t give a rigorous explana-
tion of this observation. However, plot 0.25e−t/90 and 10e−t/9 as functions
of t ∈ [0, 50] in a single figure, and explain why what you see yields at least
a heuristic explanation.
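The two expressions in exercise 33.1 can be compared with a few lines of code (a minimal sketch; the expressions are taken directly from the exercise, with t in ms):

```python
import numpy as np

t = np.linspace(0.0, 50.0, 501)
slow = 0.25 * np.exp(-t / 90.0)   # weak inhibition decaying with tau = 90
fast = 10.0 * np.exp(-t / 9.0)    # strong inhibition decaying with tau = 9

# The curves cross where 0.25 exp(-t/90) = 10 exp(-t/9), i.e. at t = 10 ln 40,
# about 36.9 ms: the strong-fast curve dominates early, the weak-slow one late.
t_cross = 10.0 * np.log(40.0)
```

Plotting `slow` and `fast` against `t` shows the point: late in the cycle the slowly decaying inhibition is nearly flat, so small differences in drive translate into large differences in firing time, whereas the rapidly decaying inhibition is steep there and compresses those differences.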
33.2. (∗) The nearly strict temporal separation of EP - and ES -cells seen in Fig. 33.7
is not a completely robust effect. To see this, halve the strength of the ES -to-I
synapses in Fig. 33.7 once more. You will see that the temporal separation
of EP - and ES -cells becomes a bit less clean.
33.3. (∗) What happens if in Fig. 33.6, you halve the strengths of the ES -to-I
synapses and raise the drive to the EP -cells by 20%?
33.4. (∗) Demonstrate numerically that the cells in Fig. 33.9 fire in (loosely defined)
clusters.
Chapter 34
Nested Gamma-Theta
Rhythms
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 34) contains supplementary material, which is available to authorized users.
brief, transient memory (working memory) is on the order of seven (plus or minus
two) [118]. Most of us can briefly hold a seven-digit phone number in memory, while
we are looking for the phone, but few of us could do the same with a fourteen-digit
phone number.
Computationally, one can obtain a gamma oscillation riding on the crests of
a theta oscillation simply by driving the E-cells in a PING network at theta fre-
quency; see Section 34.1. Qualitatively, this might model a theta rhythm that is
“externally imposed,” projected into the network that we are studying from another
brain structure, somewhat reminiscent of a suggestion in [25] that inhibitory “su-
pernetworks” rhythmically entrain large populations of pyramidal cells throughout
the brain.
A model network that intrinsically generates both the theta rhythm and the
gamma rhythm nested in it can be obtained by adding to a PING network a second
class of inhibitory cells intrinsically firing at theta frequency. We will call this new
class of inhibitory cells the O-cells, a terminology that will be motivated shortly.
If the O-cells received no input from the PING network, and if they synchronized,
then this would not be very different from the PING networks driven at theta
frequency discussed in Section 34.1. However, a mechanism by which the O-cells
can synchronize is needed. One natural such mechanism would be common input
from the PING network. For nested gamma-theta oscillations to result, the O-cells
must allow several gamma cycles between any two of their population spike volleys.
This can be accomplished in a robust way if the O-cells express a slowly building
depolarizing current that is rapidly reduced by firing, such as an h-current, or a
slowly decaying hyperpolarizing current that is rapidly raised by firing, such as a
firing-induced slow potassium current. Several model networks generating nested
gamma-theta rhythms following these ideas have been proposed; examples can be
found in [157] and [173].
In Sections 34.2 and 34.3, we describe (in essence) the model from [157], in
which the second class of inhibitory cells represent the so-called oriens lacunosum-
moleculare (O-LM) interneurons [90] of the hippocampus; this is the reason for the
name O-cells.
the LFP in our model networks, and neither the mean membrane potential of the E-cells nor the mean gating variable of the E-cells is likely to be a good analogue.
Figure 34.2. A PING network with external drive to the E-cells oscil-
lating at theta frequency. Spike rastergram (top panel), mean of membrane poten-
tials of E-cells (middle panel), mean of gating variables of E-cells (bottom panel).
The parameters of the PING network are NE = 40, NI = 10, I E = 1.4, σE =
0.05, I I = 0, ĝEE = 0, ĝEI = 0.25, ĝIE = 0.25, ĝII = 0.25, pEI = 1, pIE =
1, pII = 1, τr,E = 0.5, τpeak,E = 0.5, τd,E = 3, vrev,E = 0, τr,I = 0.5, τpeak,I =
0.5, τd,I = 9, vrev,I = −75. The actual drive to the i-th E-cell, however, is not
IE,i = I E (1 + σE Xi ), but (1 + 0.8 sin(2πt/125)) IE,i , i.e., it oscillates with period
125 ms, or frequency 8 Hz. [PING_WITH_THETA_DRIVE]
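The theta-modulated drive described in the caption can be sketched as follows (a minimal illustration; the parameter values are those given above, and the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)            # seed arbitrary

NE, IE_mean, sigma_E = 40, 1.4, 0.05
X = rng.standard_normal(NE)               # fixed heterogeneity across E-cells
IE = IE_mean * (1.0 + sigma_E * X)        # static drives I_{E,i}

def drive(t):
    """Drive to the E-cells at time t (ms): period 125 ms, i.e., 8 Hz theta."""
    return (1.0 + 0.8 * np.sin(2.0 * np.pi * t / 125.0)) * IE

# At the theta peak (t = 125/4 ms) the drive is 1.8 times its static value,
# and at the trough it is only 0.2 times its static value.
```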
Figure 34.3. Same as Fig. 34.2, but the external current inputs are constant in time (IE,i = I E (1 + σE Xi)), and instead the pulsatile synaptic input 0.2 exp(−10 sin²(πt/125)) (vrev,I − v) is added to each E-cell. [PING_WITH_THETA_INHIBITION]
In Section 34.3, the O-LM cells will play the role of pacing the theta oscillation.
As discussed in the introduction to this chapter, for them to be able to do that
robustly, they should express a slowly building depolarizing current that is rapidly
reduced by firing, or a slowly decaying hyperpolarizing current that is rapidly raised
by firing. In the model of [157], they are assumed to express an h-current, which
is rapidly reduced by firing and slowly builds up between action potentials; this
is in agreement with experimental evidence [109]. The model O-LM cells of [157]
also express a transient (inactivating) hyperpolarizing potassium current called an
A-current, again in agreement with experimental evidence [189]. This current will
be discussed in detail later.
In [157], the h- and A-currents were added to a single-compartment model
with the standard Hodgkin-Huxley currents, that is, the spike-generating sodium,
delayed rectifier potassium, and leak currents. That model was of the same form
as the RTM and WB models from Sections 5.1 and 5.2, except for the assumption
m = m∞ (v), which was made in Sections 5.1 and 5.2, but not in [157]. We modify
the model of [157] in that regard, i.e., we do set m = m∞ (v). Thus the form of our
O-LM cell model, without the h- and A-currents, is precisely the same as that of
the RTM and WB models. The constants are
αh(v) = 0.07 exp(−(v + 63)/20),    βh(v) = 1/(1 + exp(−(v + 33)/10)).
Figure 34.4. The functions x∞ and τx in the O-LM cell model of [157],
but without the h- and A-currents. We left out τm because we set m = m∞ (v).
[PRE_OLM_X_INF_TAU_X]
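The curves of Fig. 34.4 follow from αh and βh by the usual Hodgkin-Huxley conversion h∞ = αh/(αh + βh), τh = 1/(αh + βh); a quick numerical check of the h-gate:

```python
import numpy as np

def alpha_h(v):
    return 0.07 * np.exp(-(v + 63.0) / 20.0)

def beta_h(v):
    return 1.0 / (1.0 + np.exp(-(v + 33.0) / 10.0))

def h_inf(v):
    # steady state of the inactivation gate
    return alpha_h(v) / (alpha_h(v) + beta_h(v))

def tau_h(v):
    # time constant of the inactivation gate (ms)
    return 1.0 / (alpha_h(v) + beta_h(v))

v = np.linspace(-100.0, 50.0, 301)
# h_inf falls from near 1 at hyperpolarized v to near 0 at depolarized v,
# as in the corresponding panel of Fig. 34.4.
```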
Figure 34.5. Voltage trace of the O-LM cell model of [157], but without
the h- and A-currents, and with m = m∞ (v). In this simulation, I = 1.5 μA/cm2 .
[PRE_OLM_VOLTAGE_TRACE]
We now add the h-current defined by eqs. (18.1)–(18.3) to this model. Fol-
lowing [157], we use g h = 12 mS/cm2 . The resulting voltage trace is shown in
Fig. 34.6, upper panel. The graph of r (lower panel of Fig. 34.6) shows that indeed
the h-current plummets to near zero in response to an action potential, then rises
more gradually. Note that addition of the h-current significantly accelerates the
neuron: Compare Fig. 34.5 with Fig. 34.6. The reason is that vh = −32.9 mV is far
above rest.
Figure 34.6. Upper panel: Same as Fig. 34.5, but with h-current defined
by eqs. (18.1)–(18.3), with gh = 12 mS/cm2 , and with I = 0 μA/cm2 . Lower panel:
Gating variable, r, of the h-current (see eqs. (18.1)–(18.3)). [OLM_WITH_H_CURRENT]
Figure 34.7. The steady states and time constants of the gating variables
in the model A-current of [157]. [A_CURRENT]
34.2. A Model O-LM Cell 307
Figure 34.8. Upper panel: Same as Fig. 34.6, but with A-current de-
fined by eqs. (34.1)–(34.4), with g A = 22 mS/cm2 . Lower panel: The prod-
uct, ab, of the two gating variables of the A-current (see eqs. (34.1)–(34.4)).
[OLM_WITH_H_AND_A_CURRENTS]
Figure 34.9. Symbolic depiction of the network of Fig. 34.10. The large
circles labeled “E,” “I,” and “O” represent populations of cells. Lines ending in
arrows indicate excitation, and lines ending in solid circles indicate inhibition.
with
dx/dt = (x∞(v) − x)/τx(v) for x = a, b,
a∞(v) = 1/(1 + exp(−(v + 14)/16.6)),    τa(v) = 5, (34.2)
b∞(v) = 1/(1 + exp((v + 71)/7.3)), (34.3)
τb(v) = 1/( 0.000009/exp((v − 26)/28.5) + 0.014/(0.2 + exp(−(v + 70)/11)) ). (34.4)
Figure 34.10. [Spike rastergram, mean(vE), and mean(sE), 0 ≤ t ≤ 1000 ms; caption lost in extraction.]
Note that a∞(v) and b∞(v) are increasing and decreasing functions of v, respectively. This is why a is called an activation variable, and b an inactivation variable (see Section 3.2). From Fig. 34.7, you see that inactivation is fast, but
de-inactivation is slow: τb is small for large v, but large for v below threshold. As
a result, the total conductance density g A ab behaves very differently from the total
conductance density of an adaptation current: When the neuron fires, g A ab very
briefly rises. However, when the membrane potential falls, gA ab rapidly follows it,
because a follows v with a short time constant. (In fact, in [135] the activation gate
of the A-current was taken to be an instantaneous function of v.) The inactivation
gate, which also drops during the action potential, takes some time to recover. This
is why there is a prolonged dip in ab following an action potential; see Fig. 34.8.
Throughout the remainder of this chapter, we use gA = 22 mS/cm2 .
The values of ab between two action potentials only vary by about a factor
of 2 in Fig. 34.8. So to reasonably good approximation, the A-current adds tonic
inhibition (namely, inhibition with a constant conductance) to the cell. The question
whether the time dependence of ab actually matters to the model of nested gamma-
theta oscillations in Section 34.3 will be the subject of exercise 3.
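A quick numerical check of the gating functions (34.3) and (34.4) makes the fast-inactivation, slow-de-inactivation asymmetry concrete (signs as reconstructed in eq. (34.4) above):

```python
import numpy as np

def b_inf(v):
    # eq. (34.3): steady state of the inactivation gate, decreasing in v
    return 1.0 / (1.0 + np.exp((v + 71.0) / 7.3))

def tau_b(v):
    # eq. (34.4): small for large v (fast inactivation),
    # large for v below threshold (slow de-inactivation)
    return 1.0 / (0.000009 / np.exp((v - 26.0) / 28.5)
                  + 0.014 / (0.2 + np.exp(-(v + 70.0) / 11.0)))

print(float(tau_b(0.0)), float(tau_b(-100.0)))  # fast vs. slow, in ms
```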
IO,k = I O (1 + σO Zk ) , (34.5)
where I O and σO ≥ 0 are fixed numbers, and the Zk are independent standard
Gaussians. The resulting network can indeed generate nested gamma-theta oscil-
lations; see Fig. 34.10 for an example. As in previous simulations, each cell of the
network was initialized at a random phase with uniform distribution on its limit
cycle.
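The heterogeneous drives of eq. (34.5) can be drawn as follows (a sketch; the number of O-cells and the values of I O and σO below are illustrative, not taken from the figure):

```python
import numpy as np

rng = np.random.default_rng(42)  # seed arbitrary

def o_cell_drives(n_O, I_O_mean, sigma_O):
    """Eq. (34.5): I_{O,k} = I_O (1 + sigma_O Z_k), Z_k standard Gaussian."""
    Z = rng.standard_normal(n_O)
    return I_O_mean * (1.0 + sigma_O * Z)

drives = o_cell_drives(10_000, 1.0, 0.05)  # large n_O just to check statistics
```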
Note that in Fig. 34.10, there are no E-to-O synapses. In [92, Fig. 3], nested
gamma-theta oscillations were shown for an E-I-O network with positive, albeit
very weak, E-to-O conductance. For more on this issue, see exercise 1.
Exercises
34.1. (∗) In Fig. 34.10, there is no feedback from E-cells to O-cells, as indicated in
Fig. 34.9. In [92, Fig. 3], there is such feedback, but it is quite weak. It is
known, however, that there are projections from CA1 pyramidal cells to CA1
O-LM cells [150].
(a) What happens if we add E-to-O synapses, say with ĝEO = 0.1, in the
simulation of Fig. 34.10? Try it out. You will see that the O-cells don’t reach
near-synchrony now. Some fire on each cycle of the gamma oscillation. As
a result, there is no nested gamma-theta rhythm, just an ongoing gamma
rhythm slowed down by the O-cells.
(b) Suppose there is some initial mechanism that roughly synchronizes the O-
cells, maybe some excitatory signal that makes many of them fire. To model
this, suppose that the initial phases of the O-cells are chosen at random not
between 0 and 1, but between 0 and 0.1. Is there then a nested gamma-theta
rhythm even when ĝEO = 0.1?
34.2. (∗) When the I-to-O connections are cut in Fig. 34.10, there is nothing to
synchronize the O-cells any more, and the nested gamma-theta rhythm is
lost. (Try it!) But now suppose that as in exercise 1b, we approximately
synchronize the O-cells at the start of the simulation, by choosing their initial
phases randomly between 0 and 0.1, not between 0 and 1. Does this restore
the nested gamma-theta oscillations for a significant amount of time?29
29 It couldn’t restore them forever: There is nothing to enforce synchrony of the O-cells now,
other than the initialization, and because of the heterogeneity of the external drives to the O-cells,
their synchrony must disintegrate eventually, and with it the nested gamma-theta oscillation.
34.3. (∗) In the simulation of Fig. 34.10, replace ab by 0.013, which is approximately
its average subthreshold value in Fig. 34.8 (lower panel). This means replacing
the A-current by tonic inhibition (inhibition with constant conductance).
How does the figure change?
Part V
Functional Significance of
Synchrony and Oscillations
Chapter 35
Rhythmic vs. Tonic Inhibition
Figure 35.1. Figure 3A of [143]. The figure shows an LFP in mouse barrel
cortex during activation of fast-firing inhibitory interneurons with 40 Hz light pulses.
Thin lines refer to individual mice, and the solid line is the average over three mice.
The blue bars indicate the light pulses. Reproduced with publisher’s permission.
This result points towards one possible general function of rhythmicity: Mak-
ing inhibition rhythmic might enhance input sensitivity. One can immediately see
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 35) contains supplementary material, which is available to authorized users.
a heuristic reason why that might be true: When inhibition waxes and wanes, the
waning phase offers a window of opportunity to weak inputs.
It is, however, not entirely clear that this reasoning is correct. We will compare the effects of tonic inhibition, with a constant conductance ḡ, and rhythmic inhibition, with an oscillating conductance g(t) with temporal average ḡ, and ask:
Is the rhythmic inhibition, in some precise sense, less powerful than the tonic inhi-
bition? It is true that the input will be more effective while inhibition is down, but
it will also be less effective while inhibition is up, and therefore what happens on
balance is not so obvious.
Even if it is correct that rhythmic inhibition is less powerful than tonic inhi-
bition, the question arises how significant an effect that is. For example, in [143],
the rhythmic stimulation of the fast-firing inhibitory interneurons might make their
firing rhythmic, but at the same time recruit a larger number of them. How easily
would that erase the effect of making inhibition rhythmic?
In this chapter, we think about precise versions of these questions. For a target
LIF neuron, we study the comparison between oscillatory and tonic inhibition by
means of a combination of analysis and computing. We then also present simulations
for a target RTM neuron.
where T > 0 is the period, A ≥ 0 is the time average, and α > 0 governs the
“peakedness” or “coherence” of the pulses; the greater α, the narrower are the
pulses. We note that the denominator in the definition of g(t) is independent of T
(substitute u = s/T to see this), so
g(t) = A (exp(α cos²(πt/T)) − 1) / ∫₀¹ (exp(α cos²(πs)) − 1) ds. (35.1)
The integral cannot be evaluated analytically, but it is easy to evaluate it numerically with great accuracy.30 In the limit as α → 0, g(t) becomes (using the local linear approximation e^x − 1 ∼ x, and using the double-angle formula for the cosine)
A cos²(πt/T) / ∫₀¹ cos²(πs) ds = A (1 + cos(2πt/T)).
"
As α increases, g becomes increasingly “peaked.” As α → ∞, g(t) → A k∈Z δ(t −
kT ), where δ denotes the Dirac delta function. (This is not important for our
30 Because the integrand is smooth and periodic, and the integral extends over one period, the
trapezoid method for evaluating integrals is of infinite order of accuracy here [78].
35.2. LIF Neuron Subject to Synaptic Inhibition 315
discussion here, so if you don’t know what it means, that won’t be a problem.)
Figure 35.2 shows examples with A = 1, T = 1, and various different values of α.
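Eq. (35.1) and its normalization can be sketched in a few lines (the denominator is evaluated with the trapezoid rule, per footnote 30; the default parameter values are illustrative):

```python
import numpy as np

def pulsatile_g(t, A=1.0, T=1.0, alpha=5.0, n=2000):
    """Eq. (35.1): pulse train with period T and temporal average A."""
    s = np.linspace(0.0, 1.0, n + 1)
    f = np.exp(alpha * np.cos(np.pi * s) ** 2) - 1.0
    denom = float(np.sum((f[1:] + f[:-1]) * np.diff(s)) / 2.0)  # trapezoid rule
    return A * (np.exp(alpha * np.cos(np.pi * t / T) ** 2) - 1.0) / denom

# Sanity check: the average of g over one period is A (here A = 1).
t = np.linspace(0.0, 1.0, 20001)
g = pulsatile_g(t)
avg = float(np.sum((g[1:] + g[:-1]) * np.diff(t)) / 2.0)
```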
We think of the synaptic reversal potential vrev as “low” here, for instance, vrev = 0,
but in our argument, we don’t make any assumption on the value of vrev . We
compare this LIF neuron with one subject to tonic inhibition with a conductance
equal to the temporal average of g,
ḡ = (1/T) ∫₀ᵀ g(t) dt.
We denote the membrane potential of the LIF neuron with tonic inhibition by v̄:
dv̄/dt = −v̄/τm + I + ḡ (vrev − v̄), t ≥ 0, (35.5)
v̄(0) = 0, (35.6)
v̄(t + 0) = 0 if v̄(t − 0) = 1. (35.7)
The following proposition is Theorem 2 in [12, Appendix A].
Proposition 35.1. Under the assumptions stated above, sup_{t≥0} v̄(t) ≤ sup_{t≥0} v(t).31
Proof. We set
v̂ = sup_{t≥0} v(t).
So we want to prove
v̄(t) ≤ v̂ for all t ≥ 0.
Note that v̂ ≥ 0. If v̂ = ∞, there is nothing to be proved. Assume therefore that v̂ < ∞ now. Equation (35.2) implies
dv/dt ≥ −v̂/τm + I + g(t) (vrev − v̂). (35.8)
The right-hand side of (35.8) is periodic with period T. For any integer k ≥ 0,
v((k + 1)T) − v(kT) = ∫_{kT}^{(k+1)T} (dv/dt) dt ≥ T (−v̂/τm + I + ḡ (vrev − v̂)). (35.9)
If the right-most expression in (35.9) were positive, the sequence {v(kT)}_{k=0,1,2,...} would grow beyond all bounds, contradicting our assumption v̂ < ∞. Therefore we conclude that this expression is ≤ 0, i.e.,
−(1/τm + ḡ) v̂ + I + ḡ vrev ≤ 0.
At any time at which v̄(t) > v̂, we would have
dv̄/dt = −v̄/τm + I + ḡ (vrev − v̄) = −(1/τm + ḡ) v̄ + I + ḡ vrev < −(1/τm + ḡ) v̂ + I + ḡ vrev ≤ 0.
This implies that v̄ cannot become greater than v̂,32 so our assertion is proved.
There is a range of I for which the LIF neuron with oscillating inhibition fires, but the LIF neuron
with tonic inhibition does not fire. Thus rhythmicity of the inhibition makes the
target neuron more input sensitive for a broad range of drives. However, for very
strong drives, the effect can be the reverse: Firing can be faster with tonic than
with rhythmic inhibition; see also exercise 3.
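The comparison can be reproduced with a few lines of forward-Euler code (a sketch; the parameter values A = 0.2, T = 25 ms, α = 10, I = 0.2 are illustrative choices, not the ones used in the figures):

```python
import numpy as np

A, T, alpha, tau_m, v_rev = 0.2, 25.0, 10.0, 10.0, 0.0

s = np.linspace(0.0, 1.0, 2001)
f = np.exp(alpha * np.cos(np.pi * s) ** 2) - 1.0
denom = float(np.sum((f[1:] + f[:-1]) * np.diff(s)) / 2.0)  # trapezoid rule

def g_rhythmic(t):
    """Pulsed inhibitory conductance of eq. (35.1), temporal average A."""
    return A * (np.exp(alpha * np.cos(np.pi * t / T) ** 2) - 1.0) / denom

def g_tonic(t):
    return A  # tonic inhibition with the same temporal average

def spike_count(g_of_t, I=0.2, t_end=500.0, dt=0.01):
    """Forward Euler for the normalized LIF neuron: threshold 1, reset 0."""
    v, spikes = 0.0, 0
    for k in range(int(t_end / dt)):
        v += dt * (-v / tau_m + I + g_of_t(k * dt) * (v_rev - v))
        if v >= 1.0:
            v, spikes = 0.0, spikes + 1
    return spikes

n_rhythmic = spike_count(g_rhythmic)  # fires during the troughs of g
n_tonic = spike_count(g_tonic)        # never reaches threshold (v tends to 2/3)
```

With these values the tonically inhibited neuron settles at I/(1/τm + ḡ) = 2/3 < 1 and stays silent, while the rhythmically inhibited one crosses threshold during the low-conductance window of each cycle.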
Figure 35.4. [f-I curves of the LIF neuron with rhythmic and with tonic inhibition; caption and intermediate figure panels lost in extraction.]
Figure 35.7. Like Fig. 35.4, but with a lower-amplitude oscillation in the
inhibitory conductance (α = 1). [PERIODIC_INHIBITION_F_I_CURVE_2]
Figure 35.8. [f-I curves of the RTM neuron with rhythmic and with tonic inhibition; caption lost in extraction.]
Figure 35.9. Same as Fig. 35.8, but with the rhythmic inhibition (not the
tonic inhibition) amplified by a factor of 2. [RTM_F_I_CURVE_WITH_INHIBITION_2]
Exercises
35.1. Derive eq. (35.10).
35.2. Assume that I > 1/τm and vrev < 1. Let f denote the firing frequency of the LIF neuron defined by eqs. (35.5)–(35.7). We write f = f(ḡ). Of course f also depends on the parameters I, τm, and vrev, but those are taken to be fixed here. (a) Show that there is a critical value ḡc such that f(ḡ) = 0 for ḡ ≥ ḡc and f(ḡ) > 0 for ḡ < ḡc. (b) (∗) For ḡ < ḡc, f″(ḡ) < 0. This will be used in exercise 3b. I am not asking you to prove it analytically here; that is an unpleasant calculation that yields no insight. However, plot f as a function of ḡ < ḡc for some sample parameter choices, and convince yourself that indeed the graph is concave-down.
and g(t + 25) = g(t) for all t. (b) Explain why for large I, the oscillatory inhibitory conductance (35.11) will yield a lower mean firing frequency than the constant inhibitory conductance ḡ = 0.2. (Hint: Use exercise 2b.)
Chapter 36
Rhythmic vs. Tonic Excitation
In Chapter 35, we showed that inhibition can become less powerful when it is made
rhythmic. More precisely, a weak signal can elicit a response more easily when
the inhibition in the receiving network oscillates. The opposite effect is seen for a
strong signal. Here we will argue that excitation can become more powerful when it
is made rhythmic. More precisely, weak signals benefit from being made rhythmic,
while strong ones become less effective when made rhythmic.
We will model the excitatory signals as currents here, not as synaptic inputs.
You may wonder why I was careful to use synaptic inputs, including the reversal
term vrev − v, in Chapter 35, but not here. Using input currents instead of synaptic
inputs amounts to approximating the reversal term vrev − v by a constant. This is
much more reasonable when vrev ≫ v most of the time, as is the case for excitatory synaptic inputs, than when vrev ≈ v most of the time, as is the case for inhibitory
input. So it is in fact more justifiable to model excitatory synaptic inputs as currents
than it is to do the same for inhibitory synaptic inputs.
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 36) contains supplementary material, which is available to authorized users.
Ip = Ip(t) = IH if t − kT ∈ [0, Δ] for some integer k, and IL otherwise, (36.1)
with IL < IH, 0 < Δ < T; see Fig. 36.2. We will compare the effect of this sequence of input pulses with the effect of a constant input equal to the time average of the pulsed input,
Ī = (Δ/T) IH + (1 − Δ/T) IL.
We write
f̄ = f(Ī).
We make one more idealizing assumption: We assume that the mean frequency at which the target neuron fires when it receives the pulsed input equals
fp = (Δ/T) f(IH) + (1 − Δ/T) f(IL).
Figure 36.2. [The pulsed input Ip(t): IH on intervals of length Δ, IL otherwise; caption lost in extraction.]
Proposition 36.1. Under the assumptions stated above, (a) sufficiently strong pulsing enables weak inputs to have an effect:
IH > Ic ⇒ fp > 0,
(all other parameters fixed), and (c) pulsing makes strong inputs less effective also in the following sense:
fp < f̄ if IL ≥ Ic.
Proof. Exercise 1.
Weak signals benefit from being made rhythmic because of the leakiness of the
target: When a constant input current is delivered, the charge entering the cell has
time to leak back out before triggering an action potential, while the same charge
injected as a brief pulse is capable of making the target fire. However, the fact that
strong signals elicit a lower mean firing frequency in the target when they are made
rhythmic is a consequence of leakiness as well, reflected in the condition f″(I) < 0 for I > Ic; see exercise 2.
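Both halves of the claim can be checked with the f-I relation of the normalized LIF neuron, f(I) = 1000/(τm ln(τm I/(τm I − 1))) Hz for τm I > 1 and f(I) = 0 otherwise (the parameter choices below are illustrative):

```python
import math

tau_m = 10.0           # ms, so I_c = 1/tau_m = 0.1

def f(I):
    """f-I curve of the normalized LIF neuron (frequency in Hz)."""
    if tau_m * I <= 1.0:
        return 0.0
    return 1000.0 / (tau_m * math.log(tau_m * I / (tau_m * I - 1.0)))

T, delta = 25.0, 5.0   # pulse period and pulse duration (ms)

def compare(I_H, I_L):
    """Mean frequency under pulsing vs. under the averaged constant input."""
    I_bar = (delta / T) * I_H + (1.0 - delta / T) * I_L
    f_p = (delta / T) * f(I_H) + (1.0 - delta / T) * f(I_L)
    return f_p, f(I_bar)

f_p_weak, f_bar_weak = compare(0.3, 0.0)      # weak input: pulsing enables firing
f_p_strong, f_bar_strong = compare(0.5, 0.2)  # strong input, I_L > I_c: pulsing slows firing
```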
Figure 36.3. f-I curve of RTM neuron with constant input current (red), and with input current pulsed at 40 Hz (blue). For a given I, the pulsed input current has time average I, but is proportional to exp(α cos²(πt/25)) − 1, with α = 1. [RTM_F_I_CURVE_PULSED_EXCITATION]
The figure shows that pulsing the input enhances the downstream impact,
measured by mean firing frequency of the target RTM neuron, only in a narrow
range of values of I. This is, however, a matter of leakiness; doubling the leak
conductance makes the efficacy-enhancing effect of pulsing more pronounced; see
Fig. 36.4.
Figure 36.4. Like Fig. 36.3, but with g L doubled: g L = 0.2 mS/cm2 .
[RTM_F_I_CURVE_PULSED_EXCITATION_2]
Exercises
36.1. Prove Proposition 36.1. (Compare also with exercise 35.3.)
36.2. (a) For the non-leaky LIF neuron (τm = ∞), explain why f″(I) = 0 for I > Ic. (b) For the non-leaky LIF neuron, under the assumptions made in
Section 36.1 and also assuming IL ≥ 0, show that pulsing an excitatory input
current never has any effect.
36.3. (∗) (†) Draw a curve similar to that in Fig. 36.3 for the Erisir neuron. The
point, of course, is to see whether for a type 2 neuron, the curve looks similar.
Hints: Start with the code that generates Fig. 36.3, and replace the RTM
neuron by the Erisir neuron. Explore I ∈ [2, 8]. The simulation will be a
bit costly. To reduce the cost, you might divide the interval [2, 8] into 100
subintervals, not 200 as in the code that generates Fig. 36.3.
Chapter 37
Gamma Rhythms and Cell Assemblies
In Section 33.2, we mentioned Hebb’s idea of cell assemblies. The hypothesis is that
information is carried by membership in neuronal ensembles that (temporarily) fire
together. One attractive aspect of this idea is that it would give a brain with 1011
neurons an unfathomably large storage capacity, since the number of subsets of a
large set is huge.
For Hebbian cell assemblies to play a central role in brain function, there
would have to be a mechanism defining membership in an assembly, keeping non-
members from firing while the assembly is active. Olufsen et al. [122] pointed out
that the PING mechanism could serve this purpose: On a given gamma cycle, the
participating, most strongly excited E-cells fire, thereby rapidly creating feedback
inhibition that keeps nearby, less strongly excited E-cells from firing.
37.1 An Example
Figure 37.1 shows an example of thresholding, similar to examples in [122]. E-
cells 1 through 73 (neurons 51 through 123 in the spike rastergram, as neurons 1
through 50 are I-cells) are fully suppressed. E-cells 77 through 200 participate on
every gamma cycle. Only three E-cells, namely E-cells 74, 75, and 76, participate
sparsely, i.e., on some but not all cycles of the gamma oscillation. E-cell 76 skips
every third gamma cycle, and E-cells 74 and 75 skip every second; see Fig. 37.2.
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 37) contains supplementary material, which is available to authorized users.
Figure 37.1. [Spike rastergram of the network (50 I-cells, then 200 E-cells); caption lost in extraction.]
Figure 37.2. Simulation of Fig. 37.1, carried on for a longer time, now
showing only spike times of E-cells 72 through 78 (cells 122 through 128 in Fig. 37.1).
We have omitted the first 50 ms of simulated time to eliminate initialization effects.
[PING_THR_1_ZOOM]
Note that ĝIE is six times stronger in Fig. 37.1 than it was in Fig. 30.4. To
obtain a picture like that in Fig. 37.1, one needs strong inhibition: E-cells 70 and
80, say, have very similar external drives (1.48 and 1.52), and nonetheless one is
completely suppressed, the other fires at gamma frequency. On each gamma cycle,
E-cell 70 comes close to firing; the inhibition must be strong enough to pull it back.
For illustration, Fig. 37.3 shows a solution of eq. (37.1) with v(0) = 1. In this
example, v(t) < 1 for t > 0, so the model neuron is permanently suppressed by the
inhibitory pulses.
Figure 37.3. The solution of eq. (37.1), with τm = 10, I = 0.12, g = 1/7, T = 35, Δ = 10, v(0) = 1. [NO_RESET]
IH − IL = O(e^(−Tg)) as g → ∞.
This proposition confirms again, for the very simple model problem considered
here, that thresholding by inhibitory pulses is exponentially sharp as the strength
of inhibition grows.
Proof. (a) There is at least one reset in each period [kT, (k + 1)T) if the solution of
dv/dt = −v/τm + I, v(0) = 0, (37.3)
reaches 1 before time T − Δ, that is, if
τm ln(τm I/(τm I − 1)) < T − Δ. (37.4)
for the solution with v(0) = 0. If v(0) = 0, then v(Δ) = vΔ,0 with
vΔ,0 = [I/(1/τm + (T/Δ)g)] (1 − e^(−Δ(1/τm + (T/Δ)g))), (37.5)
and
v(T) = vΔ,0 e^(−(T−Δ)/τm) + τm I (1 − e^(−(T−Δ)/τm)). (37.6)
We conclude that IH satisfies the equation
vΔ,0 e^(−(T−Δ)/τm) + τm IH (1 − e^(−(T−Δ)/τm)) = 1. (37.7)
We will similarly derive an equation for IL. For the solutions of eqs. (37.1), (37.2) there is no reset at all, regardless of the value of v(0) ∈ [0, 1), if the solution with v(0) = 1 does not reach 1 in the time interval (0, T]. The calculation is precisely as before, except that vΔ,0 is replaced by
vΔ,1 = e^(−Δ(1/τm + (T/Δ)g)) + vΔ,0.
In particular,
vΔ,1 e^(−(T−Δ)/τm) + τm IL (1 − e^(−(T−Δ)/τm)) = 1. (37.8)
Note that vΔ,1 differs from vΔ,0 only by a term that is O(e^(−Tg)). (The limit that this asymptotic statement refers to is the limit as g → ∞.) From eqs. (37.7) and (37.8), we then conclude that IH and IL only differ by a term that is O(e^(−Tg)).
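The exponential sharpening can be checked directly from eqs. (37.5)–(37.8): since vΔ,0 is linear in I, both IH and IL can be solved for in closed form (the parameter values below are illustrative):

```python
import math

tau_m, T, delta = 10.0, 25.0, 5.0   # ms; illustrative values

def thresholds(g):
    """I_H and I_L from eqs. (37.5)-(37.8)."""
    r = 1.0 / tau_m + (T / delta) * g        # decay rate during a pulse
    c = (1.0 - math.exp(-delta * r)) / r     # v_{Delta,0} = I * c
    E = math.exp(-(T - delta) / tau_m)
    denom = c * E + tau_m * (1.0 - E)
    I_H = 1.0 / denom                               # from eq. (37.7)
    I_L = (1.0 - E * math.exp(-delta * r)) / denom  # from eq. (37.8)
    return I_H, I_L

# The gap I_H - I_L shrinks like exp(-T g) as the inhibition strength g grows.
gap_1 = thresholds(0.1)[0] - thresholds(0.1)[1]
gap_2 = thresholds(0.2)[0] - thresholds(0.2)[1]
```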
Exercises
37.1. (∗) Repeat the simulation of Fig. 37.1 with 500 E-cells instead of 200 E-cells
(still 50 I-cells), and carry it out up to time 500. For each E-cell, estimate
its firing frequency by determining how often it fires between t = 50 and
t = 500, then scaling appropriately to get an estimate of the frequency in
Hz. Determine how many E-cells with sparse participation (neither firing
on each cycle, nor altogether suppressed) there are. In Fig. 37.1, we used
ĝIE = 1.5. Repeat the experiment with ĝIE = 1.3, 1.7, and 1.9. Explain why
what you see is evidence that the number of E-cells that are partially, but
not completely suppressed drops exponentially as ĝIE increases.
37.2. (∗) (†) A flaw of the analysis in this chapter is that the excitatory drive to
the target is assumed to be constant in time. Investigate the case when the
target neuron is subject to strong deterministic pulses of synaptic inhibition,
and weak excitatory current pulses arriving on a Poisson schedule. (See the
discussion at the start of Chapter 36 for a justification of using current pulses
for the excitation, while using synaptic pulses for the inhibition.) Plot the
mean firing frequency of the target as a function of the mean frequency of
excitatory inputs, or as a function of the strengths of the excitatory input
pulses. Do you find thresholding?
Chapter 38
Gamma Rhythms and Communication
Figure 38.1. Figure 32.4, with an oscillation with period exactly equal
to 31 ms superimposed in green. The period in Fig. 32.4 is so close to 31 ms that
a perfectly periodic signal with period 31 ms is very close to phase-locked with the
network throughout the simulated 500 ms interval. [POISSON_PING_3_PLUS_GREEN]
to phase-locked with the network. Figure 38.1 shows that indeed a periodic signal
with period 31 ms is very close to phase-locked with the network throughout the
simulated 500 ms interval.
We now subject the first 5 E-cells in Fig. 38.1 to an additional pulsatile exci-
tatory input with period 31 ms. We expect that the cells receiving the added input
will be affected, but the overall network oscillation will remain largely unchanged,
since the input is received by such a small number of E-cells only. We will examine
how the effect of the input pulses depends on their phase relative to the network
oscillation. We add to the first 5 E-cells in Fig. 32.4 the pulsatile excitatory drive
Ip(t) = 0.1 (exp(α cos²(π(t/T − ϕ))) − 1) / ∫₀¹ (exp(α cos²(πs)) − 1) ds (38.1)
with T = 31, α = 4, ϕ ∈ [0, 1]; compare eq. (35.1). Ip oscillates with period T and
has temporal average 0.1. The parameter ϕ is a phase shift. Up to shifting and
scaling, the green curve in Fig. 38.1 is Ip with ϕ = 0.
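With the phase convention of eq. (38.1) as reconstructed here (a shift of ϕ corresponds to ϕT ms), the drive can be sketched as:

```python
import numpy as np

T, alpha = 31.0, 4.0   # ms; parameters of eq. (38.1)

s = np.linspace(0.0, 1.0, 2001)
f = np.exp(alpha * np.cos(np.pi * s) ** 2) - 1.0
denom = float(np.sum((f[1:] + f[:-1]) * np.diff(s)) / 2.0)  # trapezoid rule

def I_p(t, phi):
    """Pulsatile drive of eq. (38.1), temporal average 0.1, phase shift phi."""
    return 0.1 * (np.exp(alpha * np.cos(np.pi * (t / T - phi)) ** 2) - 1.0) / denom

t = np.linspace(0.0, 500.0, 5001)
# phi = 1 shifts the pulse train by one full period, so it coincides with phi = 0,
# and phi = 0.5 shifts it by half a period (T/2 ms).
```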
Figure 38.2. [Number of spikes in E-cells 1–5 in 500 ms, as a function of ϕ; caption lost in extraction.]
Figure 38.2 shows the total number of action potentials fired by E-cells 1
through 5 in 500 ms, as a function of ϕ. The results are noisy, and that is not
surprising, since the rhythm is noise-driven. However, the dependence on ϕ is clear
and pronounced.
Not surprisingly, this effect is dependent on a precise match between the fre-
quency of the sender and that of the receiver. When the period of the sender
is reduced by 2 ms, the effect gets much weaker; see Fig. 38.3. “Communication
through coherence,” in the sense in which we have interpreted it here, requires the
frequencies of sender and receiver to be closely matched. For real gamma rhythms,
which will likely not be as regular as the model rhythms considered here, this match
may be difficult to achieve.
38.1. Gamma Phase-Dependent Communication 335
Figure 38.3. Like Fig. 38.2, but with slightly mismatched periods: The
input has period 29 ms here. [POISSON_PING_3_MISMATCHED_PULSES]
Figure 38.4. Top: A sharply peaked input (solid), and a nearly sinusoidal
one (dashes) with the same temporal mean. Using the notation of eq. (35.1), and
writing T = 1000/f , the parameters are A = 0.5 for both inputs, and f = 40 Hz,
α = 20 for the more sharply peaked input, f = 25 Hz, α = 0.1 for the less sharply
peaked one. Bottom: Response of an E-I pair consisting of one RTM neuron (E-
cell) and one WB neuron (I-cell) to the sum of the two inputs. The pulsatile inputs
drive the E-cell. Spike times of the E-cell are indicated in red, and spike times of
the I-cell in blue. The times at which the more sharply peaked input reaches its
maximum are indicated by vertical dashed lines in the bottom panel. The constant
external drive to both cells is zero. Other parameters are ĝEI = 0.35, ĝIE = 0.5,
ĝEE = 0, ĝII = 0.15, τr,E = 0.5, τpeak,E = 0.5, τd,E = 3, vrev,E = 0, τr,I =
0.5, τpeak,I = 0.5, τd,I = 9, vrev,I = −75. [GAMMA_COHERENCE_1]
336 Chapter 38. Gamma Rhythms and Communication
Figure 38.5. Like Fig. 38.4, but with the inhibitory conductance affecting
the E-cell replaced by the time average of the inhibitory conductance affecting the
E-cell in Fig. 38.4. [GAMMA_COHERENCE_2]
primary input will always be timed in a way that is advantageous to it, with the
pulses arriving when inhibition is at its weakest, while the pulses of the distractor
come at different phases, usually when inhibition is stronger. There are in fact
examples in which this argument is correct, and timing does decide which of the two
competing inputs entrains the network. This is the case in particular if the distractor
is just as coherent as the primary input. In many other cases, and in particular in
Fig. 38.4, however, coherence matters much more than timing. To illustrate this,
Fig. 38.5 shows the simulation of Fig. 38.4, but with the inhibitory conductance
affecting the E-cell constant in time, equal to the temporal average of what it was
in Fig. 38.4.
While the inhibitory conductance itself is important for coherence-dependent
communication, as it raises leakiness in the target, its time dependence is much less
important; see [13] for further discussion of this point.
Exercises
38.1. (∗) Replace the factor 0.1 in eq. (38.1) by 0.3, then re-compute Fig. 38.2 to see
whether the phase dependence is more or less pronounced when the oscillatory
input is stronger.
38.2. (∗) Compute Fig. 38.2 with E-cells 1 through 50 (instead of 5) receiving the
periodic drive Ip . Plot the total number of action potentials in E-cells 1
through 50 as a function of ϕ.
38.3. (∗) The computational experiments in Section 38.1 started with the stochastic
weak PING rhythm of Fig. 32.4. What happens if you start with the deter-
ministic weak PING rhythm of Fig. 32.6 instead? Can you obtain similar
results?
38.4. (∗) Show that Fig. 38.4 does not qualitatively change when the distractor
frequency is raised from 25 Hz to 55 Hz.
Part VI
Synaptic Plasticity
Chapter 39
Short-Term Depression
and Facilitation
At many synapses, repeated firing of the pre-synaptic neuron can lead to a transient
decrease in the strengths of post-synaptic currents, for instance, because of neuro-
transmitter depletion [53]. This is called short-term depression. A phenomenolog-
ical model of short-term depression was proposed by Tsodyks and Markram [165].
We describe it in Section 39.1.
The first few action potentials after a longer pause in firing can lead to the
opposite of short-term depression, short-term facilitation [87]. To sketch the hy-
pothesized physical mechanisms of short-term facilitation, we first briefly sketch
the mechanism of neurotransmitter release at chemical synapses in general. Neuro-
transmitter is held in the pre-synaptic cell in small packets called synaptic vesicles.
A pre-synaptic action potential leads to the opening of voltage-dependent calcium
channels in the axon terminal. This results in the influx of calcium into the axon
terminal. The calcium interacts with the vesicles in complex ways [4], causing
synaptic vesicles to fuse with the cell membrane and release their neurotransmitter
into the synaptic cleft. Short-term facilitation may be the result of residual calcium
left over in the axon terminal from preceding action potentials [53, 87]. Repeated
firing of the pre-synaptic cell may also enhance the calcium influx into the axon ter-
minal [79]. Tsodyks and Markram extended their model to include this possibility
in [164]. Their model of facilitation is described in Section 39.2.
In the models of [164] and [165], dependent variables such as the amount of
neurotransmitter present in the synaptic cleft are assumed to jump discontinuously
in response to an action potential. In Section 39.3, we replace discontinuous jumps by
rapid transitions, returning to a model that is a system of ODEs. This is convenient
because one can then use standard methods for ODEs to solve the equations of
Electronic supplementary material: The online version of this chapter (doi: 10.1007/
978-3-319-51171-9 39) contains supplementary material, which is available to authorized users.
the model. It may also be more realistic — discontinuous jumps are, after all,
idealizations of rapid transitions.
where t0 denotes the time of an action potential; see eq. (2.2) of [164].
39.3. Replacing Jumps by Transitions Governed by ODEs 343
By adding the right-hand side of (39.6) to the right-hand side of (39.1), we obtain
an equation that makes p fall rapidly by the correct amount as t rises above t0 , and
preserves eq. (39.1) otherwise:
dp/dt = (1 − p − q)/τrec − p γ ln(1/(1 − U)). (39.7)
The term that we just subtracted from the equation for dp/dt reflects the release
of available neurotransmitter as a result of the action potential. The same term
should be added to the equation for dq/dt, since p + q should not change as a result
of the action potential:
dq/dt = −q/τd,q + p γ ln(1/(1 − U)). (39.8)
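As a numerical sanity check on eqs. (39.7) and (39.8), one can integrate them with forward Euler, idealizing γ as a rectangular pulse of unit area standing in for a single action potential (a sketch; the parameter values U = 0.5, τrec = 500, τd,q = 9 and the pulse timing are assumed for illustration only):

```python
import math

def integrate_pq(U=0.5, tau_rec=500.0, tau_dq=9.0,
                 t0=10.0, width=0.1, dt=1e-3, t_end=20.0):
    """Forward Euler for eqs. (39.7)-(39.8), with gamma a rectangular
    unit-area pulse on [t0, t0 + width] standing in for one action potential."""
    L = math.log(1.0 / (1.0 - U))
    p, q, t = 1.0, 0.0, 0.0
    while t < t_end:
        gamma = 1.0 / width if t0 <= t < t0 + width else 0.0
        dp = (1.0 - p - q) / tau_rec - p * gamma * L   # eq. (39.7)
        dq = -q / tau_dq + p * gamma * L               # eq. (39.8)
        p, q = p + dt * dp, q + dt * dq
        t += dt
    return p, q

p, q = integrate_pq()
```

Across the pulse, p falls by roughly the factor 1 − U, and what p loses, q gains, as intended.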
We now have to specify the pulse γ used in eq. (39.7). I have no entirely
satisfactory choice of γ, but will make γ proportional to 1 + tanh(v/10), since
1 + tanh(v/10) ≈ 2 during voltage spikes, and 1 + tanh(v/10) ≈ 0 otherwise.34
Figure 39.1 demonstrates this point.
[Figure 39.1: top, voltage trace v (mV) of the RTM neuron for 18 ≤ t ≤ 22 ms; bottom, 1 + tanh(v/10) over the same time interval.]
To make the area under the pulse in the lower panel of Fig. 39.1 equal to 1, one
must multiply the function 1 + tanh(v/10) by a scaling factor C. The choice of C is
what is not entirely satisfactory about our definition of γ. Its value should depend
on the shape of the voltage trace during an action potential, and in particular on the
duration of the action potential. However, since the action potential shape is largely
independent of external or synaptic input to the neuron, it is not unreasonable to
choose a single value of C for the RTM neuron; see also exercise 1. The area under
the pulse in the lower panel of Fig. 39.1 is about 0.69, and 1/0.69 ≈ 1.45, so C = 1.45
will be our choice.
34 The function 1 + tanh(v/10) is a fairly arbitrary choice here; compare exercise 20.1.
[Figure 39.2: v (mV), p, q, and s as functions of t (ms), 0 ≤ t ≤ 200.]
It is easy to show that the model of Section 20.2, with τr,q = 0.1, is the limiting
case of infinitely fast recovery (τrec → 0) of the model in the present section, with
almost perfect efficiency: U = 1 − e−1/0.29 ≈ 0.968 (exercise 2).
For the WB neuron, the choice of C that makes the area under the pulses in
C(1 + tanh(v/10)) equal to 1 is approximately 1.25. Figure 39.3 shows the analogue
of Fig. 39.2 with the RTM neuron replaced by the WB neuron, and with C = 1.25
instead of C = 1.45.
We now turn to the model of Section 39.2, and again turn discontinuous
jumps into rapid transitions governed by ordinary differential equations. We re-
write eq. (39.2) as
1 − U (t0 + 0) = (1 − μ)(1 − U (t0 − 0)). (39.12)
Figure 39.3. Like Fig. 39.2, but with the RTM neuron replaced by the WB
neuron, and with C = 1.25 instead of C = 1.45. [WB_WITH_DEPRESSING_S]
dW/dt = −((1 − U0)e^W − 1)/τfacil. (39.18)
Combining eqs. (39.16) and (39.18), we find
dW/dt = −((1 − U0)e^W − 1)/τfacil + γ ln(1/(1 − μ)). (39.19)
[Figure: v (mV), p, q, s, and U as functions of t (ms), 0 ≤ t ≤ 400.]
In Section 39.2, the question arose whether one should use the pre-jump or
post-jump value of U to compute the jumps in p and q. Turning the model into a
system of differential equations has made the question disappear: p, q, and U (or,
equivalently, W ) change simultaneously and smoothly.
Exercises
39.1. (∗) In this chapter, we used the assumption that the differential equation
dw/dt = C (1 + tanh(v/10)), (39.25)
with v = membrane potential of an RTM neuron and C = 1.45, raises w by 1
each time the RTM neuron fires. Of course this can’t be precisely correct, for
example, because the spike shape depends at least slightly on parameters such
as the external drive. To get a sense for how close to correct this assumption
is, do the following experiment. Start an RTM neuron with a drive I above
threshold at v = −70 mV, m = m∞ (v), h = h∞ (v), n = n∞ (v). Simulate
until 3 ms past the third action potential. At the same time, solve eq. (39.25),
with C = 1.45, starting at w = 0. Plot the final value of w, divided by 3, as
a function of I. The final value of w, divided by 3, ought to be close to 1,
since each of the three action potentials should raise w by about 1.
Start with the code that generates Fig. 5.2. You might want to consider
I ≥ 0.15 only, avoiding values of I for which the firing period is so long that
the simulation will take a long time.
39.2. Show that the model of Section 20.2, with τr,q = 0.1, is the special case
of (39.9)–(39.11) when τrec → 0 and U = 1 − e−1/0.29 .
39.3. Derive eq. (39.18).
39.4. Explain how eq. (39.19) becomes eq. (39.20).
Chapter 40
Spike Timing-Dependent
Plasticity (STDP)
[Figure 40.1: change in synaptic strength as a function of Time of Synaptic Input (ms), −100 to 100 ms.]
However, if neuron i fires very soon after neuron j, the reverse happens. We remove
this discontinuity by modifying eqs. (40.1) and (40.2) like this:
gij → gij + K+ e^(−(tj − ti,0)/τ+) (1 − e^(−5(tj − ti,0)/τ+)), (40.3)
gji → gji − K− e^(−(tj − ti,0)/τ−) (1 − e^(−5(tj − ti,0)/τ−)). (40.4)
To put it without using the subscripts i and j, if the pre-synaptic neuron
fires at time tpre , the post-synaptic neuron fires at time tpost , and z = tpost − tpre ,
then the change in the synaptic strength from the pre-synaptic to the post-synaptic
neuron is F0 (z) according to (40.1) and (40.2), and F (z) according to the smoothed
model (40.3), (40.4), with
F0(z) = K+ e^(−z/τ+) if z > 0, and F0(z) = −K− e^(z/τ−) if z < 0, (40.5)
40.1. The Song-Abbott Model 351
and
F(z) = K+ e^(−z/τ+) (1 − e^(−5z/τ+)) if z > 0, and F(z) = −K− e^(z/τ−) (1 − e^(5z/τ−)) if z < 0. (40.6)
Figure 40.2 shows the graphs of F0 (black) and F (red) for one particular choice of
parameters.
Figure 40.2. Graphs of functions of the forms (40.5) and (40.6). The
analogue of the horizontal coordinate in Fig. 40.1 is −z, not z. [ABBOTT_SONG]
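The rules (40.5) and (40.6) translate directly into code; in this sketch (Python; the defaults K+ = 0.05/2, K− = 0.05/3, τ+ = τ− = 10 ms are the values used later in the chapter), F is continuous at z = 0 while F0 is not:

```python
import math

def F0(z, K_plus=0.05/2, K_minus=0.05/3, tau_plus=10.0, tau_minus=10.0):
    """Discontinuous STDP rule, eq. (40.5); z = t_post - t_pre in ms."""
    if z > 0:
        return K_plus * math.exp(-z / tau_plus)
    if z < 0:
        return -K_minus * math.exp(z / tau_minus)
    return 0.0

def F(z, K_plus=0.05/2, K_minus=0.05/3, tau_plus=10.0, tau_minus=10.0):
    """Smoothed STDP rule, eq. (40.6); continuous at z = 0."""
    if z > 0:
        return K_plus * math.exp(-z / tau_plus) * (1.0 - math.exp(-5.0 * z / tau_plus))
    if z < 0:
        return -K_minus * math.exp(z / tau_minus) * (1.0 - math.exp(5.0 * z / tau_minus))
    return 0.0
```

For |z| of a few tens of ms, F and F0 are virtually indistinguishable; they differ appreciably only near z = 0.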
where B > 0 is an upper bound imposed on gij . However, this would have the
effect of making the right-hand side of our differential equations in Section 40.2
non-smooth, since min(B, x) and max(0, x) are not differentiable functions of x. We
therefore use the smooth approximations minδ and maxδ for min and max defined
in eqs. (D.4) and (D.6), using a smoothing parameter δ > 0 yet to be chosen. (We
typically take it to be a fraction of the initial value of g syn .) Therefore (40.7)
and (40.8) now turn into
gij → minδ(B, gij + K+ e^(−(tj − ti,0)/τ+) (1 − e^(−5(tj − ti,0)/τ+))), (40.9)
gji → maxδ(0, gji − K− e^(−(tj − ti,0)/τ−) (1 − e^(−5(tj − ti,0)/τ−))). (40.10)
We re-write (40.10) with the roles of i and j reversed: When neuron i fires soon
after neuron j, then gij is weakened:
gij → maxδ(0, gij − K− e^(−(ti − tj,0)/τ−) (1 − e^(−5(ti − tj,0)/τ−))). (40.11)
(40.11) describes how gij is weakened when neuron i fires soon after neuron j,
while (40.9) describes how g ij is strengthened when neuron j fires soon after neu-
ron i.
Figure 40.3. Voltage trace of the RTM neuron with I = 1.5, and the
function a satisfying (40.14) with a(0) = 0. [RTM_VOLTAGE_TRACE_WITH_A]
Thus if neuron i fires just before neuron j, the connection from i to j strengthens by about gij^(0)/2, but with the constraint that the strength cannot rise above B = Bij. If neuron i fires just after neuron j, the connection from i to j weakens by about gij^(0)/3, but with the constraint that the strength cannot fall below 0. The equation K− = 2K+/3 is in rough agreement with Fig. 5 of [190] (see Fig. 40.1). We always use τ+ = τ− = 10 ms.
This is compatible with Fig. 5 of [190]. Thus two action potentials separated by
several tens of ms will have very little effect on synaptic strengths. The value of gij
is bounded by five times its initial value:
Bij = 5 gij^(0).
Figure 40.4. Spike rastergram for a network of two E-cells and one I-cell.
The E-cells receive external drives 0.4 and 0.8, respectively. All possible synapses are
present, except for E-to-E connections. The maximal conductances associated with
the synapses are g EI = 0.125, gIE = 0.25, and gII = 0.25. The time constants and
reversal potentials associated with the synapses are τr,E = 0.5, τpeak,E = 0.5, τd,E =
3, vrev,E = 0, τr,I = 0.5, τpeak,I = 0.5, τd,I = 9, vrev,I = −75. [THREE_CELL_PING_1]
frequency of about 31 Hz, considerably faster than the I-cell frequency in Fig. 40.4.
For very strong recurrent excitation, there can be runaway firing; see Fig. 40.5,
lower panel.
Figure 40.5. Same as Fig. 40.4, but the E-cells now excite each other,
with maximum conductance g EE = 0.05 (upper panel) or gEE = 0.4 (lower panel).
They do not self-excite. [THREE_CELL_PING_2]
The two E-cells in the upper panel of Fig. 40.5 are tightly phase-locked, but
their phase difference is substantial, with the first E-cell lagging behind the second.
The phase difference can be reduced by raising the strength of recurrent excitation.
We use
Δ = average delay between spike of E-cell 2 and next spike of E-cell 1 (40.15)
Figure 40.6. Network as in Fig. 40.5, but with varying values of gEE .
Left: Synchrony measure Δ defined in eq. (40.15), as a function of g EE . Right:
Frequency of the second E-cell, as a function of g EE . [THREE_CELL_PING_3]
Since the second E-cell is driven much more strongly than the first, all we
really need to bring their spike times together is a strong excitatory connection
from the second E-cell to the first. Figure 40.7 shows the same results as Fig. 40.6,
but with an excitatory connection only from the second E-cell to the first, not the
other way around. As the connection is strengthened, the frequency now stays
constant, while synchrony improves.
Figure 40.7. Like Fig. 40.6, but with the synaptic connection from E-cell
1 to E-cell 2 removed. [THREE_CELL_PING_4]
The asymmetric connectivity used in Fig. 40.7 is precisely what STDP pro-
duces automatically. This is demonstrated by Fig. 40.8.
[Figure 40.8 panels: E-to-E synaptic strengths (including gsyn,EE,21) and the lag between the E-cells, as functions of t (ms), 0 ≤ t ≤ 500.]
Figure 40.8. Network as in Figs. 40.4 and 40.5, but now with plastic gEE ,
with STDP modeled as in Chapter 40. The values g EE start out at 0.05 for the
excitatory connections from E-cell 2 to E-cell 1, and vice versa. The parameters
associated with the STDP model are K+ = 0.05/2, K− = 0.05/3, τ+ = τ− = 10,
B = 0.4, and δ = 0.05/2. [THREE_CELL_PING_5]
distribution of E-to-E strengths — some strong ones, and many weak ones. This is
why STDP creates recurrent excitation that sharpens synchrony without creating
runaway activity: The strongly driven E-cells accelerate the weakly driven ones,
but there is no indiscriminate all-to-all excitation among the E-cells.
Exercises
40.1. Show that the function F (z) in eq. (40.6) is continuous, but not in general
differentiable at z = 0.
40.2. How large does z > 0 have to be for F (z) to differ from F0 (z) by no more
than 2%?
40.3. Suppose that there is an autapse from neuron i to itself. What happens to gii when neuron i fires, according to our smoothed version of the Song-Abbott model?
40.4. Let δ > 0, and Mδ (x) = maxδ (0, x) for all x ∈ R. (See eq. (D.6) for the
definition of maxδ .) (a) Prove that Mδ is an infinitely often differentiable
with σ = 10⁻⁴. Note that the function e^(−(gEE,ij − gEE)²/(2σ²))/√(2πσ²) is a Gaussian density in the variable gEE with standard deviation σ centered at gEE,ij; see eq. (C.12). Its integral with respect to gEE is 1. Therefore the integral of the function ρ is 1 as well.
[Figure 40.9: histogram of E-to-E synaptic strengths gEE (mS/cm²), horizontal axis 0 to 2.5 × 10⁻³.]
function. (b) Prove that Mδ is strictly increasing. (c) Plot Mδ (x) for δ = 0.1,
−1 ≤ x ≤ 1.
40.5. (∗) Why does Δ in the left panel of Fig. 40.6 suddenly rise again as g EE
exceeds a threshold value of about 0.23?
40.6. Without STDP, there are O(NE + NI) dependent variables in a PING network model, and each equation has O(NE + NI) terms on the right-hand side. With recurrent excitation with STDP, there are O(NE² + NI) dependent variables, since now the synaptic strengths become dependent variables. Each equation governing one of the E-to-E synaptic strengths has O(NE^k) terms on the right-hand side. What is k?
40.7. (∗) For Fig. 40.9, plot the standard deviation of the spike times in the n-th
spike volley of the E-cells as a function of n.
40.8. (∗) Can you generate a simulation similar to that of Fig. 40.9, but with
stochastic drive to the E-cells?
Appendix A
In several places in this book, we use the bisection method. It is the simplest
method for solving nonlinear equations. If f is a continuous function of a single real
variable, say f = f (x) with x ∈ R, and if a and b are real numbers with a < b and
f (a)f (b) ≤ 0 (that is, f (a) and f (b) have opposite signs), then there is a solution
of the equation
f (x) = 0
in the interval [a, b], by the intermediate value theorem. The bisection method for
finding a solution up to an error of, say, 10−12 , is defined in pseudo code as follows:
while b − a > 2 × 10⁻¹²,
    c = (a + b)/2;
    if f(a)f(c) ≤ 0,
        b = c;
    else
        a = c;
    end if;
end while;
x = (a + b)/2;
When the while loop stops, the interval [a, b] is of length ≤ 2 × 10−12, and f (a)f (b)
is still ≤ 0, so the interval [a, b] still contains a solution. The midpoint, x, of the
interval [a, b] is therefore no further than 10−12 from a solution.
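In Python, for instance, the pseudocode might be implemented as follows (the function name `bisect` and the example equation cos x = x are illustrative choices):

```python
import math

def bisect(f, a, b, tol=1e-12):
    """Bisection: return a point within tol of a root of f in [a, b].
    Requires f(a) and f(b) to have opposite signs."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    while b - a > 2 * tol:
        c = (a + b) / 2
        if f(a) * f(c) <= 0:
            b = c        # a root lies in [a, c]
        else:
            a = c        # a root lies in [c, b]
    return (a + b) / 2

# Example: solve cos(x) = x, i.e. f(x) = cos(x) - x = 0, on [0, 1].
root = bisect(lambda x: math.cos(x) - x, 0.0, 1.0)
```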
Advantages of the bisection method are that it is simple and guaranteed to
work provided that an initial interval [a, b] with f (a)f (b) ≤ 0 is given, and that a
bound on the error can be guaranteed. The main disadvantage is that, at least in
the simple form given above, it is only applicable to one equation in one unknown.
Appendix B
Fixed Point Iteration
Figure B.2. Typical examples of fixed point iteration. The left panels show
examples of attracting fixed points, and the right panels show examples of repelling
fixed points.
Appendix B. Fixed Point Iteration 365
The following proposition is one way of making precise that small |F′| results in convergence of the fixed point iteration:
Proposition B.2 is a special case of a more general result called the Banach fixed point
theorem. It is a result about global convergence — that is, convergence irrespective
of the starting point x0 . There is a similar result about local convergence, which we
state next.
(b) If |F′(x∗)| > 1, then x∗ is repelling, that is, there exists a number ε > 0 so that
x0 ∈ [x∗ − ε, x∗ + ε] and x0 ≠ x∗ ⇒ |x∗ − x1| > |x∗ − x0|.
Proof. (a) Because F′ is continuous and |F′(x∗)| < 1, we can choose ε > 0 so that the assumptions of Proposition B.2 are satisfied with [a, b] = [x∗ − ε, x∗ + ε]. This implies the assertion. (b) Because F′ is continuous and |F′(x∗)| > 1, there is an ε > 0 so that |F′(x)| > 1 for x ∈ [x∗ − ε, x∗ + ε]. If x0 ∈ [x∗ − ε, x∗ + ε], x0 ≠ x∗, then |x∗ − x1| = |F(x∗) − F(x0)| = |F′(c)(x∗ − x0)| for some c between x0 and x∗, and |F′(c)(x∗ − x0)| > |x∗ − x0|.
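As an illustration (not from the text), take F(x) = cos x: at the fixed point x∗ ≈ 0.739 we have |F′(x∗)| = |sin x∗| ≈ 0.67 < 1, so the fixed point is attracting and the iteration converges from any nearby starting point:

```python
import math

def fixed_point_iteration(F, x0, n):
    """Return the iterates x0, F(x0), F(F(x0)), ..., n steps in total."""
    xs = [x0]
    for _ in range(n):
        xs.append(F(xs[-1]))
    return xs

xs = fixed_point_iteration(math.cos, 1.0, 100)
# The last iterate is very close to the fixed point of cos.
```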
Appendix C
Elementary Probability
Theory
random numbers by capital letters, for instance, X. We will not rigorously define
what we mean by a random real number.37 You should simply think of a random
number as an oracle: If you ask the oracle for the value of X once, it might answer
0.734957, but if you ask it a second time, it might answer something different,
maybe 0.174023.
For a random real number X, there is often a probability density function
(p.d.f.), that is, a function ρ with the property that for any a and b with −∞ ≤
a < b ≤ ∞,
P(X ∈ [a, b]) = ∫_a^b ρ(x) dx.
If X is a random real number, its mean value or expected value or expectation
is the value that the oracle gives on the average — ask it very many times, and
average. It is denoted by E(X). It is easy to calculate E(X) if the only possible
values of X are the members of a (finite or infinite) sequence {xi }. Suppose that
P(X = xi) = pi, with pi ∈ [0, 1]. Since the xi are assumed to be the only possible values of X, Σ_i pi = 1. Then
E(X) = Σ_i pi xi. (C.2)
The smaller h, the better is the approximation. Note that the possible values of X̃
are the members of the (two-sided infinite) sequence {ih}. Then
P(X̃ = ih) = pi = P(X ∈ [(i − 1)h, ih)) = ∫_{(i−1)h}^{ih} ρ(x) dx.
As h → 0, this becomes
Σ_{i=−∞}^{∞} ∫_{(i−1)h}^{ih} x ρ(x) dx = ∫_{−∞}^{∞} x ρ(x) dx.
Expectation is linear. That is, if X and Y are random real numbers, and c and d are (non-random) real numbers, then
E(cX + dY) = cE(X) + dE(Y).
This way of writing it also avoids having to state that P(Y ∈ [c, d]) ≠ 0 (which we should have assumed when writing down (C.5)) or P(X ∈ [a, b]) ≠ 0 (which we should have assumed when writing down (C.6)). Similarly, n random real numbers,
X1 , X2 , . . . , Xn , are independent if and only if for any real intervals Ij , 1 ≤ j ≤ n,
By definition, infinitely many random variables are independent if and only if any
finite subsets of them are independent.
The variance of a random number X is
var(X) = E((X − E(X))²). (C.7)
So the variance measures how big (X − E(X))² is on the average.38 The standard deviation is
std(X) = √var(X).
38 You might first think that the average of X − E(X) would also be an interesting quantity, but
it isn’t: It is always zero. The average of |X − E(X)| is an interesting quantity, but the average
of (X − E(X))2 is easier to deal with, in essence because x2 is differentiable everywhere, while |x|
is not differentiable at x = 0.
370 Appendix C. Elementary Probability Theory
The coefficient of variation is
cv(X) = std(X)/E(X). (C.8)
Using the linearity of the expectation, we can re-write eq. (C.7) as follows:
var(X) = E((X − E(X))²) = E(X² − 2XE(X) + (E(X))²) = E(X²) − 2E(X)E(X) + (E(X))² = E(X²) − (E(X))².
We summarize this result:
var(X) = E(X²) − (E(X))². (C.9)
For a (non-random) real number c,
var(cX) = c² var(X) and std(cX) = |c| std(X).
Variance is not additive, that is, var(X + Y ) is not var(X) + var(Y ) in general.
However, we can calculate exactly what var(X + Y) really is, using (C.9):
var(X + Y) = E((X + Y)²) − (E(X + Y))² = E(X² + 2XY + Y²) − (E(X) + E(Y))² = E(X²) − (E(X))² + E(Y²) − (E(Y))² + 2E(XY) − 2E(X)E(Y) = var(X) + var(Y) + 2(E(XY) − E(X)E(Y)).
So we see that the variance is additive if and only if the expectation is multiplicative:
E(XY ) = E(X)E(Y ).
This is not generally the case. Expectation is additive (E(X + Y ) = E(X) + E(Y ))
and in fact, more generally, it is linear (E(cX + dY ) = cE(X) + dE(Y )), but
it is not in general multiplicative. We say that X and Y are uncorrelated if the
expectation does happen to be multiplicative: E(XY ) = E(X)E(Y ). So X and
C.3. Uniform Distributions in Matlab 371
The expression
cov(X, Y) = E((X − E(X))(Y − E(Y)))
that appears on the right-hand side of eq. (C.10) is called the covariance of X and
Y . So X and Y are uncorrelated if and only if their covariance is zero.
One can show (but we will omit the argument) that X and Y are uncorrelated
if they are independent. However, the reverse implication does not hold: X and
Y can be dependent, yet still uncorrelated. As an example, suppose that X is a
random real number with mean 0, and
Y = X with probability 1/2, Y = −X with probability 1/2. Then
XY = X² with probability 1/2, XY = −X² with probability 1/2.
It is clear that E(XY ) = 0, and E(X)E(Y ) = 0 as well, so X and Y are uncorre-
lated. They are not, of course, independent: Knowing X helps us guess Y , in fact
once we know X, we know Y up to sign! So uncorrelatedness is a weaker property
than independence.
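This example is easy to check by simulation (a sketch; the sample size and seed are arbitrary choices):

```python
import random

rng = random.Random(42)

def sample_xy():
    """One draw of (X, Y): X standard Gaussian (mean 0),
    Y = +X or -X with probability 1/2 each, independently of X."""
    x = rng.gauss(0.0, 1.0)
    y = x if rng.random() < 0.5 else -x
    return x, y

pairs = [sample_xy() for _ in range(200_000)]
exy = sum(x * y for x, y in pairs) / len(pairs)  # estimates E(XY); near 0
```

The sample estimate of E(XY) is near zero, yet |Y| = |X| in every draw, so the pair is uncorrelated but far from independent.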
X= rand(1,1)
[Figure: the standard Gaussian density ρ(x), plotted for −3 ≤ x ≤ 3.]
Y = μ + σX
has mean μ and standard deviation σ, and its p.d.f. can easily be shown to be
ρY(y) = (1/√(2πσ²)) e^(−(y − μ)²/(2σ²)). (C.12)
S(0) = 0, (C.13)
dS/dt = −S/τnoise for (k − 1)Δt < t < kΔt, k = 1, 2, 3, . . ., (C.14)
S(kΔt + 0) = S(kΔt − 0) + γGk, k = 1, 2, 3, . . . , (C.15)
where τnoise > 0, γ > 0, and the Gk are independent standard Gaussians. (These
equations will be modified later.)
To understand eqs. (C.13)–(C.15) better, we define
Sk = S(kΔt + 0), k = 0, 1, 2, . . . .
Then S0 = 0, and
Sk = e^(−Δt/τnoise) Sk−1 + γGk, k = 1, 2, 3, . . . . (C.16)
Taking expectations on both sides of this equation, we find E(Sk) = E(Sk−1), and therefore, since E(S0) = 0, E(Sk) = 0 for all k. Taking variances on both sides of (C.16), we obtain vk = e^(−2Δt/τnoise) vk−1 + γ², where vk = var(Sk). As k → ∞, vk converges to the fixed point v∞ of this recursion, which satisfies
v∞ = e^(−2Δt/τnoise) v∞ + γ²,
so
v∞ = γ²/(1 − e^(−2Δt/τnoise)). (C.18)
We are primarily interested in small Δt, since Δt is the time step used in solving
the differential equations of the model. However, for small Δt, the denominator
in (C.18) is small. Therefore γ 2 ought to be small as well, to keep v∞ from tending
to ∞ as Δt → 0. We define
γ = σnoise √(1 − e^(−2Δt/τnoise)), (C.19)
where σnoise > 0 is independent of Δt. Using (C.19) in (C.18), we find
v∞ = σnoise².
With this definition of γ, the variance of the Sk converges to σnoise². However, if we had taken S(0) = S0 to be not 0, but a Gaussian with mean 0 and variance σnoise², then it is clear from the calculations presented above that all Sk would have been Gaussian with mean 0 and variance σnoise². We therefore modify eqs. (C.13)–(C.15) as follows:
S(0) = σnoise G0, (C.20)
dS/dt = −S/τnoise for (k − 1)Δt < t < kΔt, k = 1, 2, 3, . . ., (C.21)
S(kΔt + 0) = S(kΔt − 0) + σnoise √(1 − e^(−2Δt/τnoise)) Gk, k = 1, 2, 3, . . . , (C.22)
where G0 , G1 , G2 , . . . are independent standard Gaussians.
Equations (C.20)–(C.22) describe how we generate noisy drive in our codes.
Figure C.2 shows a typical example.
[Figure C.2: a sample path of the noise process S(t).]
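Equations (C.20)–(C.22) translate directly into code; here is a sketch in Python (the function name and the use of the standard library's random module are illustrative choices):

```python
import math
import random

def noisy_drive(sigma_noise, tau_noise, dt, n_steps, seed=None):
    """Generate S_0, ..., S_n following eqs. (C.20)-(C.22): exponential decay
    with time constant tau_noise between steps, plus Gaussian kicks scaled so
    that every S_k has mean 0 and variance sigma_noise**2."""
    rng = random.Random(seed)
    decay = math.exp(-dt / tau_noise)                    # e^(-Δt/τnoise)
    kick = sigma_noise * math.sqrt(1.0 - decay * decay)  # eq. (C.19)
    S = [sigma_noise * rng.gauss(0.0, 1.0)]              # eq. (C.20)
    for _ in range(n_steps):
        S.append(decay * S[-1] + kick * rng.gauss(0.0, 1.0))
    return S

S = noisy_drive(sigma_noise=0.5, tau_noise=3.0, dt=0.01, n_steps=100_000, seed=1)
```

The sample variance of a long trajectory is close to σnoise², as the analysis above predicts.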
P(T > t | T > s) = P(T > t − s) (C.26)
for all s > 0 and t > s. I will first explain why (C.26) expresses lack of memory,
and then explain why it follows from (C.25).
If you think of T as the life span of a person measured in years, for instance,
the equation says that the probability that you become t years old, given that you
are already s years old, is the same as the probability that you reach t − s years
of age from the time of your birth. Clearly, the life span of a person does not have
this property — the likelihood of living for another 40 years is much greater for
a newborn than for a 70-year-old. When estimating how much time you have left
to live, your age matters. On the other hand, the time between two incoming cell
phone calls in the afternoon may approximately lack memory — the fact that you
got two calls in the past ten minutes probably does not make it much more or much
less likely that you will get another in the next ten minutes.39 One could say that
T is “completely random” when it lacks memory — the past does not help you
predict it.
39 You might object “If you got two phone calls in the past ten minutes, you seem to be getting
phone calls at a high rate, so that makes it more likely that you’ll get another one in the next
ten minutes.” However, we assume here that we already know the mean rate at which phone calls
come in, and don’t need to try to estimate it by observing the rate at which phone calls have come
in recently.
I will now explain the link between exponential distribution and lack of mem-
ory. Equation (C.26) means, by the definition of conditional probability,
P(T > t and T > s)/P(T > s) = P(T > t − s),
and since t ≥ s, this means
P(T > t)/P(T > s) = P(T > t − s),
or
P (T > t) = P (T > s)P (T > t − s). (C.27)
By eq. (C.25), this simply means
e^(−t/T) = e^(−s/T) e^(−(t−s)/T),
so it follows from the law by which exponentials are multiplied. This shows that
exponentially distributed random numbers lack memory. The implication in fact
works the other way around as well: If T > 0 is a random number and P (T > t)
depends on t > 0 continuously and T lacks memory, then T is exponentially dis-
tributed. This is a consequence of the fact that a continuous function f = f (t) of
t > 0 with f (a + b) = f (a)f (b) for all a > 0 and b > 0 is an exponential.
The parameter T in eq. (C.25) is the expected value of T . You can verify this
easily using eq. (C.3).
p = Δt/T. (C.29)
(Recall that e^x ≈ 1 + x for x ≈ 0. This is the linear approximation of the exponential function at 0.) We call
f = 1000/T
the mean frequency of the Poisson schedule. As usual, the numerator of 1000 is
needed because we think of time as measured in ms, but frequency as measured
in Hz. With this notation, (C.29) becomes
p = (f/1000) Δt. (C.30)
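The scheme described by (C.30) might be coded as follows (a sketch; the function name and parameter values are illustrative):

```python
import random

def poisson_spike_times(f_hz, t_end_ms, dt_ms=0.01, seed=None):
    """Poisson schedule: in each time step of length dt_ms an event occurs
    with probability p = (f_hz / 1000) * dt_ms, independently of all
    other steps, as in eq. (C.30)."""
    rng = random.Random(seed)
    p = f_hz / 1000.0 * dt_ms
    n_steps = int(round(t_end_ms / dt_ms))
    return [k * dt_ms for k in range(1, n_steps + 1) if rng.random() < p]

spikes = poisson_spike_times(f_hz=20.0, t_end_ms=5_000.0, seed=0)
```

Over 5 seconds at mean frequency 20 Hz, one expects about 100 events.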
Appendix D
Smooth Approximations
of Non-smooth Functions
Figure D.1. The graph of signδ (0, x), as defined in eq. (D.1), for δ = 0.1.
[SIGN_DELTA]
fδ(x) = (a + b)/2 + ((b − a)/2) tanh((x − x0)/δ).
Figure D.2. The graph of |x|δ , as defined in eq. (D.3), for δ = 0.1. [ABS_DELTA]
In summary,
|x|δ = |x| + δ ln(1 + e^(−2|x|/δ)). (D.3)
Although (D.2) and (D.3) are equivalent, (D.3) is preferable because it avoids eval-
uation of exponentials of very large numbers and numerical issues potentially asso-
ciated with that. Figure D.2 shows the graph of |x|δ for δ = 0.1.
Figure D.3. The graph of minδ (0, x), as defined in eq. (D.4), for δ = 0.1.
[MIN_DELTA]
For a fixed real number a, we approximate the function min(a, x) like this:
min(a, x) = (a + x)/2 − |a − x|/2 ≈ minδ(a, x) = (a + x)/2 − |a − x|δ/2.
Using our definition (D.3) this means
minδ(a, x) = min(a, x) − (δ/2) ln(1 + e^(−2|x−a|/δ)). (D.4)
To estimate the difference between min(a, x) and its smooth approximation minδ(a, x), we use the inequality ln z ≤ z − 1, which holds for all z > 0 because z − 1 is the tangent approximation to ln z at z = 1, and the graph of ln z is concave. It follows that
0 < min(a, x) − minδ(a, x) ≤ (δ/2) e^(−2|x−a|/δ). (D.5)
The right-most expression in (D.5) is extremely small for |x − a| ≫ δ, and in any
case less than or equal to δ/2. Figure D.3 shows the graph of the function minδ (0, x)
for δ = 0.1.
In analogy with eq. (D.4), we define a smooth approximation to the function
max(a, x) by
maxδ(a, x) = max(a, x) + (δ/2) ln(1 + e^{−2|x−a|/δ}).    (D.6)
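The formulas (D.3), (D.4), and (D.6) translate directly into code. A minimal sketch (Python; the function names are mine, not from the text), using the overflow-safe form (D.3):

```python
import math

def abs_delta(x, delta):
    """|x|_delta = |x| + delta*ln(1 + exp(-2|x|/delta)), eq. (D.3).
    Only non-positive numbers are exponentiated, so nothing overflows."""
    return abs(x) + delta * math.log1p(math.exp(-2.0 * abs(x) / delta))

def min_delta(a, x, delta):
    # eq. (D.4), in the form min_delta(a, x) = (a+x)/2 - |a-x|_delta/2
    return (a + x) / 2.0 - abs_delta(a - x, delta) / 2.0

def max_delta(a, x, delta):
    # eq. (D.6), in the form max_delta(a, x) = (a+x)/2 + |a-x|_delta/2
    return (a + x) / 2.0 + abs_delta(a - x, delta) / 2.0

# the error bound (D.5): 0 < min(a, x) - min_delta(a, x) <= delta/2
delta = 0.1
err = min(2.0, 3.0) - min_delta(2.0, 3.0, delta)
```

Note that `math.log1p` evaluates ln(1 + z) accurately even when z is tiny, which matters when |x − a| ≫ δ.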
Appendix E
Solutions to Selected
Homework Problems
2.5. When the gas fills a spherical container of radius R, it occupies the volume
V = (4/3)πR³,
and the pressure P that it exerts on the container walls is therefore
P = kNT / (4πR³/3),
where N denotes the number of gas molecules, and T the temperature. We compress
the spherical container by gradually reducing its radius to R̃ < R, thereby reducing
the volume to

Ṽ = (4/3)πR̃³

and increasing the number density from

[X] = N / (4πR³/3)

to

[X̃] = N / (4πR̃³/3),
while keeping T constant. In general, reduction of the radius of the spherical con-
tainer from some value r > 0 to r − Δr, where Δr > 0 is small, requires work
pressure × surface area of sphere × Δr = (kNT / (4πr³/3)) × 4πr² × Δr = (3kNT/r) Δr.
Summing over all the small reduction steps, i.e., integrating 3kNT/r from R̃ to R,
we find that the total work required is

kNT ln(R³/R̃³) = kNT ln(V/Ṽ) = kNT ln([X̃]/[X]).

The work W per gas molecule is obtained by dividing by N, which yields (2.8).
4.5. We will show: If x = m, h, or n ever were to reach 0, then dx/dt would have
to be positive; this implies that x cannot, in fact, reach 0. To see this, note that if
x = 0, then

dx/dt = (x∞ − x)/τx = x∞/τx > 0.
Similarly, if x were to reach 1, then dx/dt would have to be negative, and therefore
x cannot reach 1. If v were to reach A, then

C dv/dt = gNa m³h (vNa − A) + gK n⁴ (vK − A) + gL (vL − A) + I
        > gL (vL − A) + I ≥ gL (vL − (vL + I/gL)) + I = 0,
so v cannot, in fact, reach A, and similarly, if v were to reach B, then dv/dt would
have to be negative, and therefore v cannot reach B.
5.2. The plot is shown in Fig. E.1.
Figure E.1. A voltage trace of the RTM neuron (upper panel), and the
sum h + n (lower panel). [RTM_H_PLUS_N]
Figure E.2. A voltage trace of the RTM neuron (upper panel), and the
voltage trace obtained if h is simply set to 1 − n (lower panel). [RTM_2D]
5.4. A voltage spike has a rising phase, and a falling phase. Thus there are pairs
of times, t1 and t2 , such that
v(t1) = v(t2), but dv/dt(t1) > 0, dv/dt(t2) < 0.
dt dt
However, if
dv/dt = F(v),    (E.1)
regardless of what F is,
v(t1) = v(t2) ⇒ dv/dt(t1) = dv/dt(t2).
This argument proves that any solution of an ODE of the form (E.1) is either
monotonically increasing, or monotonically decreasing.
5.5. The plot is shown in Fig. E.3.
5.7. The plots are shown in Figs. E.4–E.6.
6.4. Suppose that Δz > 0 is small, and let us focus on the small piece of axon
between z − Δz/2 and z + Δz/2. The surface area of this piece of axon is
approximately

2πa(z) √(1 + a′(z)²) Δz,

where a′ denotes the derivative of a with respect to z. The current entering this
piece through the cell membrane is therefore approximately

Im = 2πa(z) √(1 + a′(z)²) Δz [ gNa m(z, t)³ h(z, t) (vNa − v(z, t)) +
gK n(z, t)⁴ (vK − v(z, t)) + gL (vL − v(z, t)) + I(z, t) ].    (E.2)
Figure E.3. A voltage trace of the WB neuron (upper panel), and the
voltage trace obtained with strengthened gNa and gK (lower panel). [WB_MODIFIED]
Figure E.4. A voltage trace of the RTM neuron (upper panel), and the
sodium, potassium, and leak currents (lower panel). [RTM_CURRENTS]
As before, gNa, gK, and gL denote conductances per unit membrane area, and
I denotes external input current per unit membrane area. The voltage difference
between locations z and z −Δz gives rise to a current into the piece of axon between
z − Δz/2 and z + Δz/2 as well. The current entering from the left is approximately
Il = [π (a(z − Δz/2))² / (Ri Δz)] (v(z − Δz, t) − v(z, t)).    (E.3)
Here we used the relation (6.7) between resistance of the cell interior per unit length
and resistivity. Similarly, the current entering from the right is approximately
Ir = [π (a(z + Δz/2))² / (Ri Δz)] (v(z + Δz, t) − v(z, t)).    (E.4)
Figure E.5. A voltage trace of the WB neuron (upper panel), and the
sodium, potassium, and leak currents (lower panel). [WB_CURRENTS]
Figure E.6. A voltage trace of the Erisir neuron (upper panel), and the
sodium, potassium, and leak currents (lower panel). [ERISIR_CURRENTS]
gNa m(z, t)³ h(z, t) (vNa − v(z, t)) + gK n(z, t)⁴ (vK − v(z, t)) +
gL (vL − v(z, t)) + I(z, t).    (E.6)
Passing to the limit as Δz → 0, we find:
C vt = (a² vz)z / (2 Ri a √(1 + (a′)²)) + gNa m³h (vNa − v) + gK n⁴ (vK − v) + gL (vL − v) + I.    (E.7)
This result can be found, for instance, in Appendix A of [100]. Note that in [100],
the “external input current density” is taken to be a current per unit length in
the z-direction (called the x-direction in [100]), not per unit membrane area. This
accounts for the very slight difference between the equation in [100] and (E.7).
7.4. The period T is the solution of τm I (1 − e^{−T/τm}) = 1, so

1/(τm I) = 1 − e^{−T/τm}.
Similarly, T̃ is the solution of τm I (1 − e^{−T̃/τm}) = 0.95, and solving for T̃ we find

T̃ = τm ln( τm I / (τm I − 0.95) ).
This implies

T̃/τm = ln( τm I / (τm I − 0.95) ) = ln( 1 / (1 − 0.95 (1 − e^{−T/τm})) ).
Therefore

(T − T̃)/T = (T/τm − T̃/τm) / (T/τm) = 1 − (1/(T/τm)) ln( 1 / (1 − 0.95 (1 − e^{−T/τm})) ).
Figure E.7 shows the graph of (T − T̃ )/T as a function of T /τm . The greater T /τm ,
the larger is the proportion of the inter-spike interval spent close to threshold,
namely between v = 0.95 and v = 1. As discussed in the chapter, this results in
high noise-sensitivity when T /τm is large.
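The closed-form expression above is easy to evaluate as a function of T/τm. A small sketch (Python, not part of the text; the function name is mine):

```python
import math

def fraction_near_threshold(r):
    """(T - Ttilde)/T as a function of r = T/tau_m, from the formula above:
    1 - (1/r) * ln(1/(1 - 0.95*(1 - exp(-r))))."""
    return 1.0 - math.log(1.0 / (1.0 - 0.95 * (1.0 - math.exp(-r)))) / r

f_small = fraction_near_threshold(0.01)  # near 0.05 for small T/tau_m
f_large = fraction_near_threshold(10.0)  # approaches 1 as T/tau_m grows
```

The fraction increases with T/τm, from about 0.05 in the limit T/τm → 0 toward 1 as T/τm → ∞, in line with the discussion of noise-sensitivity above.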
[Figure E.7: (T − T̃)/T as a function of T/τm.]
[Figure E.8: τm [ms] as a function of t [ms], three panels.]
8.4. Figure E.9 shows the voltage trace of a normalized QIF neuron with τm = 1/2
and I = 0.5018. During a large fraction of the period, v is now near 1/2, not near
1. If during this time a small random input arrives, firing in response to that input
will not be immediate. For small τm , the QIF neuron is not as noise sensitive as
the LIF neuron.
Figure E.9. The voltage trace of a QIF neuron with τm = 1/2, firing at
20 Hz. [QIF_VOLTAGE_TRACE_SMALL_TAU_M]
Therefore

κ = |dT/dI| (I/T) = 4πτm² (4τm I − 1)^{−3/2} (I/T) = 4πτm² (T/(2πτm))³ (I/T) =

(T²/(2π²τm²)) τm I = (T²/(2π²τm²)) ( 1/4 + (πτm/T)² ) = 1/2 + (T/τm)²/(8π²).
dv/dt = −v/τm + I − wk e^{−(t−tk)/τw} v.    (9.12)

dv/dt + g(t) v = I,  with  g(t) = 1/τm + wk e^{−(t−tk)/τw}.

Let

G(t) = t/τm − wk τw e^{−(t−tk)/τw}.    (E.8)
(Note that G is an anti-derivative of g.) Multiply both sides of the differential
equation by e^{G(t)}:

e^{G(t)} dv/dt + e^{G(t)} g(t) v = I e^{G(t)}.
This is equivalent to
(d/dt) [ e^{G(t)} v(t) ] = I e^{G(t)},
so (using that v(tk + 0) = 0)

e^{G(t)} v(t) = I ∫_{tk}^{t} e^{G(s)} ds  ⇒  v(t) = I ∫_{tk}^{t} e^{G(s)−G(t)} ds,

as claimed.
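The integrating-factor formula can be checked numerically against a direct integration of (9.12). A sketch (Python; the parameter values are arbitrary illustrations, not taken from the text):

```python
import math

# illustrative parameter values (not from the text)
tau_m, tau_w, I, w_k, t_k = 10.0, 40.0, 0.12, 0.3, 0.0

def G(t):
    # anti-derivative of g(t) = 1/tau_m + w_k*exp(-(t-t_k)/tau_w), eq. (E.8)
    return t / tau_m - w_k * tau_w * math.exp(-(t - t_k) / tau_w)

def v_formula(t, n=20000):
    # v(t) = I * integral_{t_k}^{t} exp(G(s) - G(t)) ds, by the trapezoid rule
    ds = (t - t_k) / n
    s_vals = [t_k + i * ds for i in range(n + 1)]
    f_vals = [math.exp(G(s) - G(t)) for s in s_vals]
    return I * ds * (0.5 * f_vals[0] + sum(f_vals[1:-1]) + 0.5 * f_vals[-1])

def v_euler(t, n=200000):
    # direct forward-Euler integration of (9.12) with v(t_k) = 0
    dt = (t - t_k) / n
    v, s = 0.0, t_k
    for _ in range(n):
        v += dt * (-v / tau_m + I - w_k * math.exp(-(s - t_k) / tau_w) * v)
        s += dt
    return v
```

The two computations should agree to within the discretization errors of the quadrature and of the Euler scheme.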
9.6. (a) Figure E.10 shows the two plots.
(b) In the proof of Lemma 9.4, we showed that

φ(z) ≤ I − 1/τm + ε.
Figure E.10. The map φ, with τm = 10, I = 0.12, ε = 0.01, and for two
different large values of τw: τw = 1000 (left) and τw = 2000 (right). [SLOW_ADAPT]
For large τw and large z, φ is nearly constant. It seems natural to guess that the
approximate value of φ for large τw and large z is just this upper bound,
I − 1/τm + ε.
If this is so, then
zc = I − 1/τm.
Figure E.11 confirms this guess. The figure shows, for τw = 2000 and for three
different choices of the parameters τm, I, and ε, the graph of φ (black solid), and
the graph of the function
    z + ε            for 0 ≤ z ≤ I − 1/τm,
    I − 1/τm + ε     for z > I − 1/τm        (E.9)

(red dots).
[Figure E.11: the map φ (black solid) and the function (E.9) (red dots), as functions of z, three panels.]
dv/dt = −v/τm + I − zv,  v(0) = 0,
limτw→∞ T∞(z)/τw = 0

limτw→∞ T(z)/τw = 0
if z < I − 1/τm .
Now assume that z > I − 1/τm . Recall that T (z) is the time at which the
solution of
dv/dt = −v/τm + I − z e^{−t/τw} v,  v(0) = 0,
reaches 1. For v to reach 1, we must first wait until
[ −v/τm + I − z e^{−t/τw} v ]_{v=1} = −1/τm + I − z e^{−t/τw},    (E.10)
which starts out negative, becomes zero. The time that this takes is
τw ln( z / (I − 1/τm) ),    (E.11)
which is large when τw is large. After (E.10) becomes positive, the time it takes
for v to reach 1 is negligible in comparison with (E.11) for large τw . (This is not
difficult to make rigorous, but we won’t.) Therefore
limτw→∞ T(z)/τw = ln( z / (I − 1/τm) ).    (E.12)
(d) The question is how many iterations it takes for fixed point iteration for the
function given by (E.9) and depicted in Fig. E.11 to reach the fixed point, starting
at z = 0. Each iterate lies to the right of the previous iterate by ε, so the answer is
approximately (zc + ε)/ε = (I − 1/τm)/ε + 1.
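The count can be checked by iterating the approximate map (E.9) directly. A sketch (Python, not from the text; the parameter values are illustrative):

```python
def phi_approx(z, I, tau_m, eps):
    # the approximate map (E.9): z + eps below z_c = I - 1/tau_m, constant above
    zc = I - 1.0 / tau_m
    return z + eps if z <= zc else zc + eps

def spikes_until_saturation(I, tau_m, eps):
    # number of iterates, starting at z = 0, until z exceeds z_c
    zc = I - 1.0 / tau_m
    z, k = 0.0, 0
    while z <= zc:
        z = phi_approx(z, I, tau_m, eps)
        k += 1
    return k

# with I = 0.146, tau_m = 10, eps = 0.01: z_c = 0.046, so about
# (I - 1/tau_m)/eps + 1 = 5.6 iterates are predicted
count = spikes_until_saturation(0.146, 10.0, 0.01)
```

The iteration count agrees with the estimate (zc + ε)/ε up to rounding to an integer.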
(e) Fig. E.12 confirms that w saturates after approximately (I − 1/τm)/ε + 1 spikes,
not only when τw is extremely large but even when τw = 100.
10.2. We denote the left-hand side of eq. (10.5) by g(n). If vK ≤ v ≤ vNa , then g is
a strictly decreasing function of n ≥ 0. This implies our assertion.
[Figure E.12: w as a function of t, four panels (A–D), for n = 3, 6, 7, and 28.]
g(1) = 1 − a − 1/3 + I = 2/3 − a + I > 0
is either purely imaginary, or real and smaller than |v∗2 − 1 + 1/τn |. In either case,
the eigenvalues λ+ and λ− both have negative real part, so the fixed point is stable.
11.2. There are fixed points only when I ≤ 0, and in that case, the fixed points
are x∗,± = ±√(−I). Figure E.13 shows x∗,± as functions of I.
Figure E.13. The bifurcation diagram for dx/dt = x2 + I. The solid curve
indicates stable fixed points, and the dashed curve indicates unstable ones. [FORK]
11.3. The plots of x(t) are in Fig. E.14. The long plateau during which x barely
changes at all is the time during which the trajectory passes the “ghost” of the two
fixed points annihilated in the saddle-node collision as I rises above 0.5. Table E.1
gives numerical results, confirming that T√(I − Ic) is approximately independent of
I when I ≈ Ic, I > Ic.
Figure E.14. (x(t), y(t)) solves (11.5), (11.6) with x(0) = 1, y(0) = 0.5,
and I = 0.5 + 10−4 (black), 0.5 + 10−5 (blue), and 0.5 + 10−6 (red). [GHOST]
I             T      T√(I − Ic)
0.5 + 10−4    921    9.21
0.5 + 10−5    2959   9.36
0.5 + 10−6    9403   9.40
Table E.1. Results of the numerical experiments for problem 11.3. [GHOST]
A detail in the code that generates Fig. E.14 and Table E.1 deserves elabora-
tion. The code generates approximations xk for x(kΔt), k = 1, 2, . . . To approx-
imate the time, t∗ , at which x crosses 1.5, we first determine the largest k with
xk > 1.5. Note that then xk+1 ≤ 1.5. We compute the straight line through the
points (kΔt, xk ) and ((k + 1)Δt, xk+1 ), and define t∗ to be the time at which this
straight line intersects the horizontal line x = 1.5. Note that t∗ ∈ (kΔt, (k + 1)Δt].
The result of this calculation is easily seen to be
t∗ = [(xk − 1.5)/(xk − xk+1)] (k + 1)Δt + [(1.5 − xk+1)/(xk − xk+1)] kΔt.    (E.15)
The time at which x crosses 0.5 is approximated analogously.
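The interpolation (E.15) is easy to implement for any sampled trajectory. A sketch (Python, not the Matlab code that accompanies the book; here the samples are indexed from 0, so x[k] ≈ x(kΔt)):

```python
def crossing_time(x, dt, threshold):
    """Approximate the time at which a sampled trajectory x[k] ~ x(k*dt)
    crosses the threshold from above, by the linear interpolation (E.15).
    Returns None if no downward crossing is found."""
    for k in range(len(x) - 1):
        if x[k] > threshold >= x[k + 1]:
            lam = (x[k] - threshold) / (x[k] - x[k + 1])
            return lam * (k + 1) * dt + (1.0 - lam) * k * dt
    return None
```

For the pair of samples straddling the threshold, this returns exactly the intersection of the straight line through (kΔt, xk) and ((k + 1)Δt, xk+1) with the horizontal threshold line.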
11.4. (a) The equation
|x| + I = 0
has two solutions, namely x± = ±I, when I < 0. The two solutions meet at 0 when
I = 0, and there is no solution of the equation for I > 0. (b) Let a > 0. The time
needed to move from x = −a to x = a is
∫_{−a}^{a} (dt/dx) dx = ∫_{−a}^{a} dx/(|x| + I) = 2 ∫_{0}^{a} dx/(x + I) = 2 ln(x + I) |_{x=0}^{x=a} =

2 (ln(a + I) − ln I) = 2 ( ln(a + I) + ln(1/I) ).
This is ∼ 2 ln(1/I) in the limit as I ↓ 0.
(c) This bifurcation does not behave like a typical saddle-node bifurcation because
|x| + I is not a differentiable function of x at x = 0, the site of the collision of the
two fixed points.
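The logarithmic divergence of the passage time can be confirmed numerically by evaluating the integral with a quadrature rule. A sketch (Python, not from the text):

```python
import math

def passage_time(a, I, n=100000):
    # time to move from x = -a to x = a under dx/dt = |x| + I, computed as
    # the integral of dx/(|x| + I) by the midpoint rule
    dx = 2.0 * a / n
    return sum(dx / (abs(-a + (i + 0.5) * dx) + I) for i in range(n))

a, I = 1.0, 0.01
t_numeric = passage_time(a, I)
t_formula = 2.0 * (math.log(a + I) - math.log(I))
```

For small I the passage time is dominated by the slow drift near x = 0, where the right-hand side |x| + I is smallest.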
12.2. (b) Figure 12.1 indicates three fixed points on, for instance, the vertical
line I = 0.05 μA/cm2 . We will denote them by (v∗,L , n∞ (v∗,L )), (v∗,M , n∞ (v∗,M )),
and (v∗,U , n∞ (v∗,U )), with v∗,L < v∗,M < v∗,U . (Of course, L, M , and U stand
for “lower,” “middle,” and “upper.”) Figure 12.1 shows that (v∗,L , n∞ (v∗,L )) is a
stable fixed point of (12.1), (12.2), whereas (v∗,M , n∞ (v∗,M )) and (v∗,U , n∞ (v∗,U ))
are both unstable fixed points of (12.1), (12.2).
However, v∗,U is a stable fixed point of (12.6). To see this, we denote the
right-hand side of (12.6) by F (v):
F(v) = gNa m∞(v)³ (1 − n∞(v)) (vNa − v) + gK n∞(v)⁴ (vK − v) + gL (vL − v) + I.
For I = 0.05 μA/cm2 , the graph of F crosses the v-axis three times, in the points
v∗,L , v∗,M , and v∗,U . Fixed points where F changes from positive to negative, as
v increases, are stable, and fixed points where F undergoes the reverse change of
signs are unstable. Thus drawing the graph of F will reveal whether v∗,U is stable
or unstable.
The graph of F is somewhat difficult to interpret because the values of F vary
widely. We therefore plot not F , but tanh(5F ), as a function of v. This function is
zero if and only if F = 0. Since tanh is strictly increasing, the graph of tanh(5F )
reveals the stability of the fixed points just as the graph of F does. However,
tanh(5F ) only varies between −1 and 1, and its graph crosses the v-axis 5 times
more steeply than that of F , and as a result the crossings are more easily visible.
Figure E.15 shows this plot, confirming that v∗,U is a stable fixed point of (12.6).
[Figure E.15: tanh(5F) as a function of v [mV].]
Figure E.16. Results for exercise 12.3, showing depolarization block for
very large (and quite unrealistic) values of I (I = 1000 and 1500 μA/cm²).
[DEPOLARIZATION_BLOCK]
Figure E.17. Same as Fig. 12.1, but for a much larger range of drives,
showing that the unstable node (blue, dashes) becomes stable (black, solid) for very
large (and quite unrealistic) values of I. [RTM_2D_FP_LARGE_I]
13.4. (a)

dx/dt = (I ∓ √(x² + y²)) x − y,
dy/dt = (I ∓ √(x² + y²)) y + x.
(b) √(x² + y²) x is obviously infinitely often differentiable as a function of (x, y) for
(x, y) ≠ (0, 0). It is differentiable even at the origin. This is to say that there is
a linear approximation of √(x² + y²) x near (x, y) = (0, 0) which is accurate up to
an error of size o(√(x² + y²)) as (x, y) → (0, 0). (See Section 1.2 for the meaning of
o(· · ·).) In fact, that linear approximation is zero:

√(x² + y²) x = 0 + o(√(x² + y²))

as (x, y) → (0, 0). Similarly, √(x² + y²) y is differentiable at (0, 0) with zero
derivative. However, √(x² + y²) x and √(x² + y²) y are not twice differentiable. If one of
them were twice differentiable, the other would be as well by symmetry, and in that
case

√(x² + y²) x · x + √(x² + y²) y · y = (x² + y²)^{3/2} = r³

would have to be twice differentiable; but it is not. On the other hand, the right-hand
sides in (E.16) and (E.17) are clearly infinitely often differentiable.
[Two figure panels: v∗ as a function of I, and n vs. v.]
14.2. (a) Figure E.18 shows this curve. The code that generates this figure also
determines where the transition from a stable spiral (indicated as a solid red line in
the figure) to an unstable spiral (indicated as a dashed green line) occurs: The crit-
ical value of I is Ic ≈ −0.55848. (b) The fact that there is a transition from a stable
dn/dv = (1/τn) (av − n) / (v − v³/3 − n + I).
Figure E.21. Analogous to Fig. 15.5, but with a much smaller δ (δ = 0.02
here), and a somewhat larger τadapt (τadapt = 200). We continued the simulation
to t = 2000, but show only times between 1000 and 2000, because there is an initial
transition before a stable pattern emerges. [MMOS_2]
15.6. If I is slightly above I∗, a single spike may reduce the effective drive to a point
where periodic firing is no longer possible. The trajectory will then rapidly approach
the attracting fixed point. As the effective drive recovers and rises above I∗ again,
the trajectory will stay in the vicinity of the attracting fixed point, and therefore
there will be no second spike. If, on the other hand, I is slightly above Ic , a burst of
spikes can bring the effective drive below I∗ , thereby making firing cease. As soon
as the recovering effective drive rises above Ic again, firing resumes. A resulting
voltage trace is shown in Fig. E.22. One might call what is seen here “mixed-
mode oscillations,” in the sense that there are subthreshold oscillations between the
bursts, but they are of very small amplitude.
Figure E.22. Analogous to Fig. 15.5, but based on the model of Sec-
tion 15.2, with I = 8, δ = 1.5, and τadapt = 50. [HH_MMOS]
Figure E.23. Analogous to Fig. 16.4, but with zmax = 0.05 and τz = 100.
For these parameter values, I∗ = −0.0364 . . .. [SETN_PHASE_PLANE_SLOW]
and

limv→−∞ F(v) = −gL.    (E.19)
Equation (E.18) implies, by the intermediate value theorem, that F (v) = 0 has at
least one solution v = v∗ . Further, if v∗ is a solution, then
( gNa (m∞(v∗))³ h∞(v∗) + gK (n∞(v∗))⁴ + gL ) v∗ =
17.3. We calculate and classify the fixed points for 0.5 ≤ I ≤ 0.51. (This is the
range in which the subcritical Hopf bifurcation appears to occur, see Fig. 17.10.)
We find that a stable spiral turns into an unstable spiral as I rises above Ic ≈ 0.508;
see Fig. E.24. (There are other unstable fixed points in the same range of values of
I, not shown in the figure.) We also find that the Jacobi matrix at this fixed point
has a pair of complex-conjugate eigenvalues that passes through the imaginary axis
from left to right as I rises above Ic ; see Fig. E.25.
Figure E.24. For the RTM neuron with M-current, g M = 0.2 mS/cm2 ,
vK = −100 mV, a stable spiral (red, solid) turns into an unstable spiral (green,
dashes) as I rises above Ic ≈ 0.508. [RTM_WITH_M_CURRENT_FP (figure 1)]
[Figure E.25: the pair of complex-conjugate eigenvalues in the complex plane, Re(λ) (×10⁻³) vs. Im(λ).]
17.4. (a) Figure E.26 shows the bifurcation diagram, and demonstrates that there is
a saddle-node collision as I rises above Ic . (b) The calcium-activated AHP current
is dormant as long as there is no firing. Therefore the mechanism by which the
transition from rest to firing occurs cannot be affected by it.
[Figure E.26: bifurcation diagram, v∗ as a function of I.]
17.8. The results are shown in Table E.2. Among all the potassium currents
considered here, in fact Izhikevich’s has the lowest half-activation voltage.
model                      v1/2
classical Hodgkin-Huxley   −24 mV
RTM                        −20 mV
WB                         2 mV
Erisir                     6 mV
INa,p-IK model             −25 mV
Figure E.27. Same as lower panel in Fig. 18.1, but with h replaced by
min(h, h∗ ) in each time step of the simulation. [HH_BISTABLE_LIMITED_H]
Figure E.28. Same as lower panel in Fig. 18.4, but with n replaced by
max(n, n∗ ) in each time step of the simulation. [ERISIR_BISTABLE_LIMITED_N]
[Figure E.29: f as a function of I.]
value just prior to the action potential, to Ik−1 − ε, its value just after the action
potential. Between times (k − 1)δ and kδ, Ieff then evolves according to eq. (19.4);
this implies eq. (19.5).
19.4. Raising gK,slow should be akin to raising ε in the idealized analysis. Therefore
the bursts should get shorter. Lowering gK,slow should similarly make the bursts
longer. There is no obvious reason why the value of gK,slow should affect the inter-
burst interval. Figure E.30 confirms these guesses quite strikingly.
19.5. Raising τn,slow should be akin to lowering ε and raising τslow in the idealized
analysis. (The effect should be similar to lowering ε because with a larger τn,slow,
the response of the slow potassium current to firing should be more sluggish, and
therefore weaker.) Therefore the bursts should get longer, and the inter-burst
intervals should get longer as well, as τn,slow is raised. Figure E.31 confirms these
guesses.
[Figure E.30: effects of varying gK,slow (gK,slow = 4.6, . . .) in the model in Section 19.2; v [mV] vs. t [ms].]
Figure E.31. Effects of varying τn,slow in the model in Section 19.2. Al-
though a different time interval is shown in each case, the length of the time in-
terval, and hence the scaling of the horizontal axis, is the same in all three cases.
[INAPIK_SLOW_I_K_VARY_TAU]
19.6. Figure E.32 shows the result. Note that there is a delay of a little over 10 ms
between the instant when I rises above Ic and the onset of firing. This is consistent
with Fig. 19.8.
20.1. (c) Figure E.33 shows the results. For s to be able to return to values very
close to zero between spikes, we must choose g and Δ so that g(v/Δ) is close to
zero for values of v near the resting membrane potential of a nerve cell. This means
either to choose Δ small enough (compare, for instance, the left and middle panels
Figure E.32. Response of the INa,p -IK model to external drive that ramps
up gradually. The red horizontal line in the lower panel indicates Ic . There is a
delay of more than 10 ms between the time when I rises above Ic and the onset of
firing. [INAPIK_RAMP]
Figure E.33. Analogous to the middle panel of Fig. 20.2, but with the
term (1 + tanh(v/10))/2 on the right-hand side of eq. (20.2) replaced by g(v/Δ),
with g(v) = (1 + tanh(v))/2 (top row), g(v) = arctan(v)/π + 1/2 (middle row),
and g(v) = (v/√(1 + v²) + 1)/2 (bottom row), Δ = 1 (left column), Δ = 5 (middle
column), and Δ = 10 (right column). The panel in the right upper corner reproduces
the middle panel of Fig. 20.2. [DIFFERENT_SIGMOIDS]
of the bottom row of Fig. E.33 with the right panel), or to choose g so that it
converges to 0 as v → −∞ rapidly enough. Of the three choices of g tested here,
(1 + tanh(v))/2 converges to zero exponentially fast as v → −∞, while the others
converge to zero much more slowly:
arctan(v)/π + 1/2 ∼ 1/(π|v|)  as v → −∞,    (E.21)

and

(1/2) ( v/√(1 + v²) + 1 ) ∼ 1/(4v²)  as v → −∞.    (E.22)
The sum in the parentheses only needs to be evaluated once; it is the same for all
N neurons.
20.5. The result is shown in Fig. E.34.
[Figure E.34: f [Hz] as a function of I [μA/cm²].]
This disk lies entirely in the left half of the complex plane, i.e., in the half defined
by Re(z) ≤ 0.
(b) The j-th entry in the vector Ge is the j-th row sum of G, which is cj . But the
j-th entry in the vector De is also cj .
(c) Suppose that (G − D)v = 0. Because G is symmetric, this equation can also be
written as (GT − D)v = 0. Take the j-th entry of (GT − D)v:

Σ_{i=1}^{N} ggap,ij vi − cj vj = 0.    (E.23)
Σ_{i=1}^{N} wi = 1,
so the right-hand side of (E.24) is a weighted average over those vi with ggap,ij > 0.
(d) We say that i is a nearest neighbor of j if ggap,ij > 0. In part (c), we concluded
that vj is a weighted average over vi , taken over the nearest neighbors of j. Since
vj ≥ vi for all i, this implies that vi = vj if i is a nearest neighbor of j. We can
then apply the same reasoning to the nearest neighbors of the nearest neighbors,
and so on. Because the connectivity graph is connected, this eventually leads to the
conclusion that vk = vj for all k.
(e) We have concluded now that there is a one-dimensional eigenspace associated
with the eigenvalue 0, spanned by e. All other eigenvalues are strictly negative:
0 = λ1 > λ2 ≥ . . . ≥ λN ,
and we may take b1 to be e. We expand v0 in the form

v0 = c1 e + Σ_{k=2}^{N} ck bk.    (E.25)

Taking the dot product of both sides of (E.25) with e/N, we conclude that c1 is the
mean of v0, so c1 = m0. Further,

v(t) = c1 e + Σ_{k=2}^{N} ck bk e^{λk t} → c1 e = m0 e

as t → ∞.
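This convergence is easy to check numerically for a small gap-junction network. A sketch (Python; the conductance matrix is an arbitrary illustration, not from the text), integrating dv/dt = (G − D)v with forward Euler:

```python
# a small symmetric gap-junction conductance matrix (zero diagonal);
# the numbers are an arbitrary illustration, not from the text
G = [[0.0, 0.2, 0.0, 0.1],
     [0.2, 0.0, 0.3, 0.0],
     [0.0, 0.3, 0.0, 0.4],
     [0.1, 0.0, 0.4, 0.0]]
N = len(G)
c = [sum(row) for row in G]  # the row sums c_j

def step(v, dt):
    # forward Euler for dv/dt = (G - D)v, i.e. dv_j/dt = sum_i g_ij v_i - c_j v_j
    return [v[j] + dt * (sum(G[i][j] * v[i] for i in range(N)) - c[j] * v[j])
            for j in range(N)]

v = [1.0, -2.0, 0.5, 3.0]
m0 = sum(v) / N  # the mean of v(0); it is conserved because G is symmetric
for _ in range(20000):
    v = step(v, 0.02)  # integrate to t = 400
```

Since the connectivity graph here is connected, all components of v approach the common value m0.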
21.3. One finds that the two neurons phase-lock if I1 and I2 are close enough to
each other, and not if not. Figure E.35 shows the examples I2 = 0.97 μA/cm2 (the
two neurons phase-lock), and I2 = 0.96 μA/cm2 (they do not).
[Figure E.35: period as a function of interspike interval #, two panels.]
22.1. Suppose that E(t) < 0 for some t. Let t0 ≥ 0 be the latest time prior to
time t at which E(t0 ) = 0. There is then, by the mean value theorem, some time
tc ∈ (t0 , t) so that
dE/dt(tc) = (E(t) − E(t0)) / (t − t0) < 0.
On the other hand,
dE/dt(tc) = [ f(wEE E(tc) − wIE I(tc) + IE) − E(tc) ] / τE ≥ −E(tc)/τE > 0.
This contradiction proves that E(t) cannot become negative. Similar arguments
show that E(t) cannot be greater than 100, and I(t) cannot become negative or
greater than 100.
22.2. Numerically, we find that there is only one fixed point (E∗ , I∗ ) for 0 ≤ wEE ≤
1.5. Figure E.36 shows E∗ as a function of wEE . The figure confirms that a stable
spiral turns into an unstable spiral as wEE rises above the critical value, i.e., a
conjugate pair of complex eigenvalues crosses the imaginary axis.
I will not spell out all details of the calculation producing Fig. E.36, but refer
the reader to the code generating the figure instead. However, I will explain how
one finds all fixed points, for a given wEE . A fixed point is a point (E∗ , I∗ ) with
E∗ = f(wEE E∗ − I∗ + 20), I∗ = g(E∗). Inserting the second of these equations into
the first, we find a single equation for E∗:

E∗ = f(wEE E∗ − g(E∗) + 20).
To find all solutions of this equation, we note that any solution must lie between
0 and 100, since values of f lie between 0 and 100. We discretize the interval
[0, 100] into very short subintervals of length ΔE, evaluate the function F (E) =
f (wEE E − g(E) + 20) − E at the boundary points of those subintervals, and look
for subintervals on which F changes sign. In any such subinterval, we use the
bisection method to find a solution up to a very small error.
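The sign-change scan followed by bisection, described above, can be sketched generically (Python, not the code that accompanies the book; the test function below is a stand-in, not the actual F):

```python
def find_roots(F, lo, hi, n=2000, tol=1e-10):
    """Scan [lo, hi] in n subintervals, then bisect on each sign change --
    the procedure described above, for a generic scalar function F."""
    roots = []
    dx = (hi - lo) / n
    a, fa = lo, F(lo)
    for i in range(1, n + 1):
        b = lo + i * dx
        fb = F(b)
        if fa == 0.0:
            roots.append(a)
        elif fa * fb < 0.0:
            # bisect: maintain a sign change between x0 and x1
            x0, x1, f0 = a, b, fa
            while x1 - x0 > tol:
                xm = 0.5 * (x0 + x1)
                fm = F(xm)
                if f0 * fm <= 0.0:
                    x1 = xm
                else:
                    x0, f0 = xm, fm
            roots.append(0.5 * (x0 + x1))
        a, fa = b, fb
    return roots

# stand-in for F(E) = f(wEE*E - g(E) + 20) - E, with known roots
F_test = lambda E: (E - 10.33) * (E - 40.71) * (E - 69.17)
roots = find_roots(F_test, 0.0, 100.0)
```

The scan can miss roots at which F touches zero without changing sign, which is why the subintervals must be chosen very short.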
[Figure E.36: E∗ as a function of wEE.]
E∗ = f (−wIE I∗ + IE ), (E.27)
I∗ = g(wEI E∗ + II ). (E.28)
We define
h(I∗ ) = f (−wIE I∗ + IE ),
and note that h is strictly decreasing. We denote the inverse function of h by
p = p(E∗ ). It is strictly decreasing as well. The right-hand side of eq. (E.28) is a
strictly increasing function of E∗ . We denote it by q(E∗ ). So (E.27), (E.28) can be
re-written as
I∗ = p(E∗ ) = q(E∗ ). (E.29)
Define
r(E∗ ) = q(E∗ ) − p(E∗ ). (E.30)
Note that r is a strictly increasing function. To find a solution of (E.29), we must
find a solution of r(E∗ ) = 0, then set I∗ = p(E∗ ) (or, equivalently, I∗ = q(E∗ )).
Further, note that h(0) = f (IE ), and h(100) = f (IE − 100wIE ). So p(f (IE )) = 0 ≤
q(f (IE )) and p(f (IE − 100wIE )) = 100 ≥ q(f (IE − 100wIE )), i.e., r(f (IE )) ≥ 0 and
r(f (IE − 100wIE )) ≤ 0. By the intermediate value theorem, there is a solution of
eq. (E.30) with E∗ ∈ [f (IE − 100wIE ), f (IE )]. Furthermore, the solution is unique
because r is strictly increasing.
This proves that there is exactly one fixed point. To prove that it is stable,
we must consider the Jacobi matrix at the fixed point. It is given by the following
23.3. (a) I find A ≈ 0.037 and B ≈ 0.053. It is not surprising that A and B are
smaller here than in Section 23.1: The input pulses are broader, so more current is
injected altogether. Note, however, that B/A ≈ 1.43 — only slightly different from
the value in Section 23.1. See WB_NEURON_BROAD_PULSES for the Matlab program that
I used to find these values. (b) Figure E.38 shows the result. The fraction of the
interval of sparse entrainment that corresponds to irregular entrainment has grown
somewhat.
24.1. The result is shown in Fig. E.39.
25.1. The results are in Table E.3. We have rounded all percentages to the nearest
one-hundredth. Not surprisingly, the effect of the input on T1 becomes stronger as
the input becomes stronger or longer-lasting, or comes later in the cycle.
25.6. Figure E.40 shows the PRC, and Fig. E.41 shows a close-up. What is in-
teresting here is the “staircase-like” appearance of the PRC. Three points on three
different steps of the staircase are indicated in red in Fig. E.41. Their horizontal
coordinates are ϕ = 0.3995, 0.4245, and 0.4395, respectively. For the values of ϕ
corresponding to these three points, Fig. E.42 shows the curves (v(t), n(t)), with
0 ≤ t ≤ T , with T = 20, 30, and 40, respectively. (For each of the three values
of ϕ0 , I chose T in such a way that the first passage of the trajectory through the
window shown in the figure is visible, but the second passage is not.)
These results suggest that for ϕ approximately between 0.41 and 0.53, the
input pulse sends the solution into the vicinity of a fixed point that is a weakly
Figure E.38. Left: Fig. 23.6. Right: Same, but with broader input pulses.
[WB_INTERVALS_BROAD_PULSES]
[Figure E.39: LFP as a function of t [ms].]
Figure E.40. Like the middle panel of Fig. 25.9, but with gsyn = 0.30. [HH_PRC]
repelling spiral. The different steps of the staircase correspond to different numbers
of turns of the spiral that the trajectory must follow before leaving the vicinity of
the fixed point.
τr    τpeak    τd    gsyn    ϕ0    E1    E2
0.5 0.5 2 0.05 0.1 0.00% 0.00%
0.5 0.5 2 0.05 0.5 0.03% 0.00%
0.5 0.5 2 0.05 0.9 1.24% 0.00%
0.5 0.5 2 0.1 0.1 0.00% 0.00%
0.5 0.5 2 0.1 0.5 0.15% 0.00%
0.5 0.5 2 0.1 0.9 2.91% 0.00%
0.5 0.5 2 0.2 0.1 0.00% 0.00%
0.5 0.5 2 0.2 0.5 1.32% 0.00%
0.5 0.5 2 0.2 0.9 7.07% 0.00%
1.0 1.0 5 0.05 0.1 0.31% 0.00%
1.0 1.0 5 0.05 0.9 7.47% 0.10%
1.0 1.0 5 0.1 0.1 1.17% 0.01%
1.0 1.0 5 0.1 0.9 16.38% 0.33%
Table E.3. Results for exercise 25.1. Percentages are rounded to the
nearest one-hundredth. [NO_MEMORY]
[Figure E.41: close-up of the PRC in Fig. E.40.]
Figure E.42. (v(t), n(t)) corresponding to the three red dots in Fig. E.41.
[HH_PRC_ZOOM_IN, figure 2]
The assertion follows from the fact that tan and arctan are strictly increasing func-
tions. (b) This follows from eq. (E.31), since arctan ψ ∈ (−π/2, π/2) for all ψ ∈ R.
(c) From part (b), the graph of f lies in the left upper half of the unit square.
This implies (by the intermediate value theorem) that any line perpendicular to the
diagonal must intersect the graph in some point within the left upper half of the
unit square, and it can intersect the graph only once because f is increasing.
26.3. (a) From Fig. 26.9, we see that c has to be so small that the slope of f̃ at
s = −1/√2 is ≤ 1, so c√2 ≤ 1 or c ≤ 1/√2. (b) Let s ∈ [−1/√2, 1/√2]. The
corresponding point on the diagonal line in the (ϕ, f)-plane is

(s/√2 + 1/2, s/√2 + 1/2).

Define u = f̃(s)/√2. Then the point

(s/√2 + 1/2 − u, s/√2 + 1/2 + u)
Therefore

g(ϕ) = f(ϕ) − ϕ = ( √((1 − ε)² + 4εϕ) − (1 − ε) − 2εϕ ) / (2ε) = (1 − ϕ) − ( 1 + ε − √((1 + ε)² − 4ε(1 − ϕ)) ) / (2ε),

which is precisely (26.5).
26.7. From the proof of Proposition 26.2,

G′(0) = [1 + ∂g/∂ϕ(0, Δv)] [1 + ∂g/∂ϕ(1, Δv)]   (E.35)

and

Ĝ′(0) = [1 + Δv ∂²g/(∂ϕ∂Δv)(0, 0)] [1 + Δv ∂²g/(∂ϕ∂Δv)(1, 0)].   (E.36)
G′(0) = 1 − 2ε,
With a small amount of algebra, this turns out (by sheer coincidence) to be the negative of the function D shown in Fig. 28.3. Therefore, just as in the case of Fig. 28.3, there is a stable fixed point for 0 ≤ c < √3/18. (b) Figure E.43 shows the results.
Figure E.43. Same as Fig. 28.4, but with g0(ϕ) = ϕ(1 − ϕ)³. The two panels show ψ as a function of t (in units of TA) for ε = 0.1, with c = 0.08 (upper panel) and c = 0.12 (lower panel). [WEAKLY_COUPLED_HETEROGENEOUS_2]
28.8. (a) Now D(ψ) = 0 for all ψ, and there is therefore no c > 0 for which
eq. (28.10) has a fixed point. If ψ obeys eq. (28.10), it will simply decrease indefi-
nitely. (b) Figure E.44 shows the result.
Figure E.44. Same as Fig. 28.4, but with g0(ϕ) = ϕ(1 − ϕ), ε = 0.02, and c = 0.005. [WEAKLY_COUPLED_HETEROGENEOUS_3]
and

D(1/2 + s) = g0(1/2 + s) − g0(1/2 − s).

This implies eq. (28.15). (b) The assumption g0(ϕ) ≢ g0(1 − ϕ) implies that D(ψ) ≢ 0. By part (a), D then has both positive and negative values. Furthermore, D(1/2) is clearly 0. Let M = max over ψ ∈ [0, 1] of D(ψ). Then D(ψ) = c has a solution ψ if c ∈ [0, M] (by the intermediate value theorem), but not if c > M. (c) A stable fixed point of eq. (28.10) is a ψ where H falls from above to below 0, that is, where D falls from above to below c. Choose ψmax ∈ R with D(ψmax) = M, and ψ0 > ψmax with D(ψ0) = 0. (It is possible to choose ψ0 greater than ψmax because D is periodic with period 1.) Between ψmax and ψ0, there must be a value ψ where D falls from above to below c, since 0 < c < M.
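Parts (b) and (c) can be illustrated numerically. The sketch below is not the book's code; it uses the part (a) relation in the form D(ψ) = g0(ψ) − g0(1 − ψ), takes g0(ϕ) = ϕ(1 − ϕ)³ (the choice in Fig. E.43) and an illustrative value of c ∈ (0, M), and locates a ψ where D falls from above to below c:

```python
# Sketch (not the book's code): compute M = max D and find a stable fixed
# point where D falls from above to below c, for g0(phi) = phi*(1 - phi)^3.

def g0(phi):
    return phi * (1.0 - phi) ** 3

def D(psi):
    # part (a): D(1/2 + s) = g0(1/2 + s) - g0(1/2 - s)
    return g0(psi) - g0(1.0 - psi)

# maximum of D on [0, 1] by a fine grid search
grid = [i / 10000.0 for i in range(10001)]
psi_max = max(grid, key=D)
M = D(psi_max)

c = 0.05                      # illustrative; any c with 0 < c < M works
assert 0.0 < c < M

# D(psi_max) = M > c and D(1/2) = 0 < c, so bisect on [psi_max, 1/2]
lo, hi = psi_max, 0.5
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if D(mid) > c:
        lo = mid
    else:
        hi = mid
psi_star = 0.5 * (lo + hi)    # D falls from above to below c here
```

For g0(ϕ) = ϕ(1 − ϕ), the same D is identically zero, consistent with exercise 28.8(a).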
29.2. We apply the method of integrating factors. The equation

dv/dt = −v/τm + I − g_syn e^{−t/τI} v

is equivalent to

d/dt [ v(t) exp(t/τm − g_syn τI e^{−t/τI}) ] = I exp(t/τm − g_syn τI e^{−t/τI}).

To continue, we would have to integrate the right-hand side. Let's think about the special case τm = g_syn = τI = 1. Can we explicitly integrate the function e^{t − e^{−t}}? Let's try:

∫ e^{t − e^{−t}} dt = ∫ e^t e^{−e^{−t}} dt.
42 You probably don’t know how to prove such things, or what the statement even means
precisely. However, a practical way of convincing yourself that the integral cannot be done is to
try doing it with an online symbolic integrator.
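Even without an elementary antiderivative, the integrating-factor form is useful numerically: it reduces the initial value problem to a quadrature. The sketch below (illustrative parameter values, not the book's code) evaluates v(t) = e^{−μ(t)} ( v(0) e^{μ(0)} + I ∫₀ᵗ e^{μ(s)} ds ), with μ(t) = t/τm − g_syn τI e^{−t/τI}, by the trapezoidal rule, and checks the result against a direct Euler solution of the ODE:

```python
import math

def mu(t, tau_m, g_syn, tau_i):
    # exponent of the integrating factor: mu'(t) = 1/tau_m + g_syn*exp(-t/tau_i)
    return t / tau_m - g_syn * tau_i * math.exp(-t / tau_i)

def v_quadrature(t, v0, i_ext, tau_m, g_syn, tau_i, n=20000):
    # v(t) = e^{-mu(t)} ( v0 e^{mu(0)} + I * integral_0^t e^{mu(s)} ds )
    h = t / n
    w = [math.exp(mu(k * h, tau_m, g_syn, tau_i)) for k in range(n + 1)]
    integral = h * (0.5 * w[0] + sum(w[1:-1]) + 0.5 * w[-1])
    return math.exp(-mu(t, tau_m, g_syn, tau_i)) * (
        v0 * math.exp(mu(0.0, tau_m, g_syn, tau_i)) + i_ext * integral)

def v_euler(t, v0, i_ext, tau_m, g_syn, tau_i, n=200000):
    # direct forward-Euler solution of dv/dt = -v/tau_m + I - g_syn e^{-t/tau_i} v
    h = t / n
    v = v0
    for k in range(n):
        s = k * h
        v += h * (-v / tau_m + i_ext - g_syn * math.exp(-s / tau_i) * v)
    return v
```

With τm = g_syn = τI = 1, the special case considered above, the two computations agree to several digits.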
On the other hand, now insert t = βtc with β > 1 into (E.38). The result is

γ / τI^{1−β} = γ τI^{β−1},

which tends to 0 as τI → 0, since β > 1. So for t = βtc, the term (E.38) becomes irrelevant as τI → 0, and the differential equation becomes

dv/dt = −v/τm + I.
This does not prove our assertion, but hopefully it makes it quite plausible. If you
are not convinced yet, part (b) will probably do the trick.
(b) Figure E.45 shows a simulation that confirms the result of (a).
Figure E.45. Black: LIF neuron with a decaying inhibitory pulse, with
v(0) = v∗ = 0.6, τm = 10, I = 0.2, τI = 1 and gsyn = 2 (upper panel), τI = 0.1 and
gsyn = 20 (lower panel). Red: LIF neuron without inhibitory pulse, with v(0) =
v∗ e−γ , where γ = g syn τI = 2. [LIF_SHORT_STRONG_INHIBITION]
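The limit claimed in part (a) can also be tested directly: with γ = g_syn τI held fixed, a brief decaying inhibitory pulse should act, as τI → 0, like an instantaneous replacement of v(0) by v(0)e^{−γ}. A sketch (RK4 integration; parameters as in Fig. E.45, but with a still smaller τI):

```python
import math

def rhs(t, v, i_ext, tau_m, g_syn, tau_i):
    # LIF membrane equation with a decaying inhibitory pulse
    return -v / tau_m + i_ext - g_syn * math.exp(-t / tau_i) * v

def rk4(t_end, v0, i_ext, tau_m, g_syn, tau_i, n=20000):
    h = t_end / n
    t, v = 0.0, v0
    for _ in range(n):
        k1 = rhs(t, v, i_ext, tau_m, g_syn, tau_i)
        k2 = rhs(t + h / 2, v + h * k1 / 2, i_ext, tau_m, g_syn, tau_i)
        k3 = rhs(t + h / 2, v + h * k2 / 2, i_ext, tau_m, g_syn, tau_i)
        k4 = rhs(t + h, v + h * k3, i_ext, tau_m, g_syn, tau_i)
        v += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return v

gamma, tau_i = 2.0, 0.01
v_pulse = rk4(5.0, 0.6, 0.2, 10.0, gamma / tau_i, tau_i)
# exact solution of dv/dt = -v/tau_m + I with the reset value v(0) = 0.6 e^{-gamma}
v_reset = 0.2 * 10.0 + (0.6 * math.exp(-gamma) - 0.2 * 10.0) * math.exp(-5.0 / 10.0)
```

Repeating this with τI = 1 and g_syn = 2, as in the upper panel of Fig. E.45, gives a visibly larger discrepancy at early times, as in the figure.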
29.5. The results are in Table E.4. The ten times greater perturbations lead to
approximately ten times greater changes in P .
Table E.4. Analogous to Table 29.2, but with 10% perturbations of the
parameters, instead of 1% perturbations. [RTM_CONDITION_NUMBERS_STRONG_PERT]
29.6. Figure E.46 shows what becomes of Fig. 29.5 when the sine term is dropped.
There is still a stable river, so synchronization by an inhibitory pulse ought to work
without the sine term. (In fact it does.)
Figure E.46. Same as Fig. 29.5, but with the sine term omitted from the
model; that is, the inhibitory pulse is now a current input, not a synaptic input.
[RIVER_SIMPLIFIED]
[Figure E.47: g∗ as a function of τI.]
29.8. The plot is in Fig. E.48. It confirms that g_syn e^{−P/τI}, the inhibitory conductance at the time of firing, is largely independent of g_syn and ϕ0, especially when g_syn is large.
30.7. To make it easier to determine the inter-spike intervals, I simulated a network
with only one I-cell. (This amounts to the assumption that the I-cells are perfectly
synchronous.) Figures E.49 and E.50 show that a stochastic component of the drive
to the E-cells, common to all E-cells of the network, can in fact cause substantial
fluctuations in the PING frequency.
30.8. Yes, but it really requires much stronger inhibitory conductances; see
Fig. E.51.
31.5. Figure E.52 shows a realization. You may suspect that I searched for a long
time for a seed for the random number generator that would give me a very broad
plateau — but no, the second seed I tried gave me Fig. E.52. (You might want to
Figure E.48. Parameters as in the middle panel of Fig. 29.4. We now consider only a single RTM neuron, vary g_syn, and record the time, P, that it takes the neuron to fire following the inhibitory input pulse. The figure shows g_syn e^{−P/τI}, the inhibitory conductance at the time of firing, as a function of g_syn, for two different initial phases ϕ0. [RTM_WITH_INHIBITION_GSTAR]
Figure E.49. Spike rastergram of a PING network with a single I-cell (top
trace), intervals between spikes of I-cell (middle trace), and stochastic component of
drive to E-cells (bottom trace). Spike times of E-cells are indicated in red, and spike
times of I-cell in blue. The parameters are NE = 200, NI = 1, I E = 1.4, σE =
0.05, I I = 0, ĝEE = 0, ĝEI = 0.25, ĝIE = 0.25, ĝII = 0.25, pEI = 0.5, pIE =
1, pII = 1, τr,E = 0.5, τpeak,E = 0.5, τd,E = 3, vrev,E = 0, τr,I = 0.5, τpeak,I =
0.5, τd,I = 9, vrev,I = −75. The stochastic component of the drive to the E-cells is
an Ornstein-Uhlenbeck process with mean 0, as defined by eqs. (C.20)–(C.22), with
τnoise = 50 and σnoise = 0.5. Only the last 300 ms of a 500 ms simulation are shown,
to avoid initialization effects. [PING_10]
Figure E.50. For comparison, same as Fig. E.49 but with σnoise = 0. [PING_11]
Figure E.51. Like Fig. 30.4, but with τd,I = 3 ms, and with ĝIE and ĝII increased ten-fold (to 2.5). [PING_12]
try a few more seeds to convince yourself that “plateaus” are much more common
than you might think.)
31.8. (a) Figure E.53 shows the graph. (b) To understand eq. (31.1), first omit the term H(−4), which is small, about 3.4 × 10−4:

g(ϕ) ≈ −H((ϕ − 0.1)/0.1) H((0.8 − ϕ)/0.05) (ϕ/2).   (E.41)
[Figure E.52: f(i) as a function of i.]

[Figure E.53: the graph referred to in exercise 31.8(a), as a function of s.]
Therefore the theory of Chapter 26 applies, and G′(ϕ) > 0 for ϕ ∈ [0, 1]. This will be used below.
(a) Synchrony is unstable if G′(0), which is always the same as G′(1), is greater than 1. We have (see the proof of Proposition 26.2)
Figure E.54. Just like Fig. 31.15, but with smaller values of I E . A
close-up look at the upper panel (I E = 0.92) reveals that each E-cell fires on
every second inhibitory pulse. There are two clusters of E-cells firing in anti-
synchrony. In the lower panel (I E = 0.9), each E-cell fires on every third in-
hibitory pulse. There are three clusters, a large one and two very much smaller
ones. [ING_ENTRAINING_E_CELLS_3]
32.6. Figure E.56 shows the analogue of Fig. 32.14 with a 2 ms delay between the firing of an E-cell and the reset of s to 1. We now see that 0 < φ′(0) < 1, and therefore synchrony is stable. However, the fixed point x∗ of ψ, which is also a fixed point of φ, also satisfies 0 < φ′(x∗) < 1, so anti-synchrony is stable as well.
32.7. Figure E.57 shows results of simulations with 3 neurons (upper panel), and
with 20 neurons (lower panel). The 3 neurons in the upper panel fire in splay state.
With 20 neurons, clusters of various sizes are seen. The clusters fire in splay state,
but neurons belonging to the same cluster synchronize.
33.2. The result of this numerical experiment is in Fig. E.58. The temporal sepa-
ration between EP - and ES -cells becomes slightly worse than in Fig. 33.7 when the
strength of the ES -to-I synapses is halved again.
33.3. The result of this numerical experiment is in Fig. E.59.
33.4. Fig. E.60 shows the voltage trace of a single cell in Fig. 33.9, scaled and shifted so that the values range from 0 to 200, and superimposed on the spike rastergram of Fig. 33.9. The cell skips either two or three population cycles between action potentials. This is typical; other cells behave similarly. Thus the cells fire in clusters, but membership in the clusters is far from fixed.

[Figure E.55: f_E [Hz] as a function of t [ms].]

Figure E.56. Analogue of Fig. 32.14 with a 2 ms delay between the firing of an E-cell and the reset of s to 1. [PLOT_PSI_PHI_WITH_DELAY]
34.1. Figures E.61 and E.62 show the results for parts a and b, respectively. The
E-to-O synapses cause the O-cells to fire continuously, and therefore there are no
pauses in the gamma oscillation occurring at theta frequency. Initial approximate
synchronization of the O-cells restores the nested gamma-theta rhythm.
Figure E.57. Spike rastergrams for the generalization of the model given
by (32.4)–(32.9) to 3 neurons (upper panel), and 20 neurons (lower panel). The
neurons have been re-numbered, after the simulation, such that the last spike of neu-
ron i occurs prior to the last spike of neuron j if and only if i < j. This numbering
convention makes it easier to see the clusters in the lower panel. [N_LIF_NEURONS]
Figure E.58. Like Fig. 33.6, with the strengths of the synapses from E-cells 11 through 30 (the ES-cells) to I-cells cut by a factor of 4. Compare with Fig. 33.7, where the ES-to-I synapses were twice as strong as here, and half as strong as in Fig. 33.6. [M_CURRENT_PING_9]
Figure E.59. Like Fig. 33.6, with the strengths of the synapses from E-
cells 11 through 30 (the ES -cells) to I-cells cut by a factor of 2, and the drive to the
EP -cells raised by 20%. [M_CURRENT_PING_10]
Figure E.60. The voltage trace of a single cell in Fig. 33.9, scaled and
superimposed onto the spike rastergram. (Note that the simulation was continued
for twice as much time as in Fig. 33.9.) [M_CURRENT_BETA_WITH_GJ_V]
Figure E.61. Like Fig. 34.10, but with ĝEO = 0.1. [EIO_2]
34.2. Figure E.63 shows what happens when the I-to-O connection is removed
in Fig. 34.10: The theta frequency pauses in the gamma oscillation have nearly
disappeared.44 Figure E.64 shows what happens when in Fig. E.63, each O-cell is
initialized at a random phase uniformly distributed between 0 and 0.1, not between
0 and 1: Very clean nested gamma-theta oscillations are restored, and persist for a
second of simulated time at least.
34.3. Figure E.65 shows the result. The time dependence of the A-current indeed
does not matter for the nested gamma-theta rhythm. (In fact, the A-current is
needed in Fig. 34.10, but only as a way of counteracting the excitatory effect of
the h-current. If you omit the A-current altogether, the O-cells fire rapidly and
suppress the other cells of the network.)
44 That they have not entirely disappeared is an accident: There is nothing to synchronize the O-cells. However, individually they fire at theta frequency, and therefore the frequency of O-cell firing, which fluctuates randomly, peaks periodically at theta frequency.

Figure E.62. Like Fig. E.61, but with nearly synchronous initialization of the O-cells, as described in exercise 34.1b. [EIO_3]

Figure E.63. Like Fig. 34.10, but with the I-to-O connection removed. [EIO_4]

35.1. We note that v is the membrane potential of a LIF neuron with I replaced by Ĩ = I + g vrev, and τm replaced by τ̃m = 1/(1/τm + g). The period is

T = τ̃m ln( τ̃m Ĩ / (τ̃m Ĩ − 1) )   if τ̃m Ĩ > 1,   and T = ∞ otherwise.
The condition τ̃m Ĩ > 1 is clearly equivalent to I > 1/τm + g(1 − vrev). The frequency is

f = 1000/T = 1000 (1/τm + g) / ln( τ̃m Ĩ / (τ̃m Ĩ − 1) ) = (1000 (1 + τm g)/τm) / ln( τ̃m Ĩ / (τ̃m Ĩ − 1) ) = (1000 (1 + τm g)/τm) / ln( τm (I + g vrev) / (τm (I + g vrev) − 1 − τm g) ).

Figure E.64. Like Fig. E.63, but with the O-cells initialized in near synchrony: Each O-cell is initialized at a random phase uniformly distributed between 0 and 0.1. [EIO_5]

Figure E.65. Like Fig. 34.10, but with ab replaced by 0.013, that is, with the time-dependent A-current replaced by tonic inhibition. [EIO_6]
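The chain of equalities in the solution of exercise 35.1 can be sanity-checked numerically. The sketch below (arbitrary illustrative parameter values satisfying τ̃m Ĩ > 1) compares the first expression for f with the final one:

```python
import math

def f_first(i_ext, tau_m, g, v_rev):
    # f = 1000 / T with T = tau_m~ ln( tau_m~ I~ / (tau_m~ I~ - 1) )
    tau_t = 1.0 / (1.0 / tau_m + g)      # tilde tau_m
    i_t = i_ext + g * v_rev              # tilde I
    x = tau_t * i_t
    assert x > 1.0                       # otherwise T is infinite
    return 1000.0 / (tau_t * math.log(x / (x - 1.0)))

def f_final(i_ext, tau_m, g, v_rev):
    # the last expression in the chain of equalities
    num = tau_m * (i_ext + g * v_rev)
    return 1000.0 * (1.0 + tau_m * g) / (
        tau_m * math.log(num / (num - 1.0 - tau_m * g)))

for params in [(0.15, 10.0, 0.02, -0.7), (0.3, 5.0, 0.05, 0.5)]:
    assert abs(f_first(*params) - f_final(*params)) < 1e-9 * f_first(*params)
```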
35.2. (b) Figure E.66 shows an example. When you run the code that generates Fig. E.66, you will see 200 graphs flash by; the one in Fig. E.66 is the last. The code generates 200 random triples (I, τm, vrev) with τm ∈ (1, 16), I ∈ (1/τm, 1.5/τm), and vrev ∈ (−1, 1). For each of the randomly chosen triples (I, τm, vrev), the code computes the frequency fk = f(gk) for

gk = (k/100000) gc,   k = 0, 1, 2, …, 99999,

and verifies that the graph is concave-down by verifying that fk > (fk−1 + fk+1)/2 for 1 ≤ k ≤ 99998. When you run the code, you will see that f, as a function of g ∈ [0, gc), is not always decreasing. It has a local maximum when vrev is close enough to 1.
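A condensed version of that experiment (in Python rather than the book's MATLAB, with a single fixed, illustrative triple and a coarser grid of 2000 points rather than 100,000):

```python
import math

def lif_frequency(g, i_ext, tau_m, v_rev):
    # firing frequency (Hz) of the LIF neuron of eqs. (35.5)-(35.7)
    tau_t = 1.0 / (1.0 / tau_m + g)
    i_t = i_ext + g * v_rev
    x = tau_t * i_t
    if x <= 1.0:
        return 0.0               # T = infinity below rheobase
    return 1000.0 / (tau_t * math.log(x / (x - 1.0)))

tau_m, i_ext, v_rev = 10.0, 0.12, -0.5        # one admissible triple
g_c = (i_ext - 1.0 / tau_m) / (1.0 - v_rev)   # f -> 0 as g -> g_c from below
n = 2000
f = [lif_frequency(k / n * g_c, i_ext, tau_m, v_rev) for k in range(n)]
concave_down = all(f[k] >= 0.5 * (f[k - 1] + f[k + 1]) - 1e-9
                   for k in range(1, n - 1))
```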
Figure E.66. The firing frequency f of the LIF neuron given by eqs. (35.5)–(35.7), as a function of g, for a random choice of τm, I, and vrev. [CONCAVE_DOWN]
35.3. (a) Figure E.67 shows the result. (b) For large I, in essence the neuron fires
at the rate f (0.3) half the time, and at the firing rate f (0.1) half the time, where
f (g) denotes the firing rate obtained with tonic inhibition with conductance g. (Of
course the frequency also depends on I, τm , and vrev , but that dependence is not
made explicit in the notation “f (g)”.) Since the second derivative of f with respect
to g is negative (see exercise 35.2), the average of f (0.1) and f (0.3) is smaller than
f (0.2).
36.3. Figure E.68 shows the result.
37.1. Table E.5 shows the results. Increasing ĝIE by 0.2 has the effect of reducing
the number of sparsely firing E-cells by a factor of 2. This suggests that the number
of sparsely firing E-cells, over some range of values of ĝIE , behaves like e−cĝIE , where
c > 0 is determined by e−0.2c = 1/2, so c = 5 ln 2 ≈ 3.5.
38.4. This is shown in Fig. E.69.
39.1. Figure E.70 shows the result.
39.2. In the limit as τrec → 0, p + q must be 1, since otherwise the term (1 − p − q)/τrec on the right-hand side of eq. (39.9) would become infinite. Therefore eq. (39.10) becomes

dq/dt = −q/τd,q + 1.45 ln(1/(1 − U)) (1 + tanh(v/10)) (1 − q).

This is eq. (20.9) if

1.45 ln(1/(1 − U)) = 1/0.2,

i.e.,

1/(1 − U) = e^{5/1.45} = e^{1/0.29},

or U = 1 − e^{−1/0.29}.

[Figure E.67: f [Hz] as a function of I.]

Figure E.68. Analogous to Fig. 36.3, but with the RTM neuron replaced by an Erisir neuron. [ERISIR_F_I_CURVE_PULSED_EXCITATION]

Table E.5. The number of sparsely firing E-cells in a simulation like that of Fig. 37.1, carried out to time t = 500 ms, with 500 E-cells, and four different values of ĝIE. [PING_THR_1_TABLE]
39.3. First we solve eq. (39.14) for U :
U = 1 − e−W . (E.43)
Figure E.69. Like Fig. 38.4, but with the distractor oscillating at 55 Hz instead of 25 Hz. [GAMMA_COHERENCE_3]
[Figure E.70: the final w, divided by 3, as a function of I.]
40.4.

Mδ(x) = maxδ(0, x) = max(0, x) + (δ/2) ln(1 + e^{−2|x|/δ})

by eq. (D.6). For x > 0, this is

x + (δ/2) ln(1 + e^{−2x/δ}) = (δ/2) ln e^{2x/δ} + (δ/2) ln(1 + e^{−2x/δ}) = (δ/2) ln(e^{2x/δ} + 1).

For x ≤ 0, it is

0 + (δ/2) ln(1 + e^{2x/δ}) = (δ/2) ln(e^{2x/δ} + 1).

So for all x ∈ R,

Mδ(x) = (δ/2) ln(e^{2x/δ} + 1).

This is clearly an infinitely often differentiable, strictly increasing function of x. Figure E.71 shows the graph of Mδ for δ = 0.1.
Figure E.71. The function Mδ(x) = maxδ(0, x) for δ = 0.1. [PLOT_M_DELTA]
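For numerical work, the two equivalent expressions for Mδ behave differently: e^{2x/δ} overflows already for moderately large x/δ, while the form of eq. (D.6) is stable for all x. A sketch (δ = 0.1, as in Fig. E.71):

```python
import math

def m_delta(x, delta):
    # M_delta(x) = (delta/2) ln(e^{2x/delta} + 1), evaluated in the
    # overflow-safe form max(0, x) + (delta/2) ln(1 + e^{-2|x|/delta})
    return max(0.0, x) + 0.5 * delta * math.log1p(math.exp(-2.0 * abs(x) / delta))

delta = 0.1
# the maximal deviation from max(0, x) occurs at x = 0, where it is (delta/2) ln 2
dev = max(abs(m_delta(k / 100.0, delta) - max(0.0, k / 100.0))
          for k in range(-200, 201))
```

Note that m_delta(1000.0, 0.1) evaluates without overflow, whereas the naive form (δ/2) ln(e^{2x/δ} + 1) would not.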
40.5. Figure E.72 shows the transition from gEE < 0.23 to gEE > 0.23. For gEE > 0.23, the more strongly driven (second) E-cell is driven too hard to be entrained by the I-cell.

Figure E.72. Just like Fig. 40.5, but with gEE = 0.22 (upper panel) and gEE = 0.24 (lower panel). [THREE_CELL_PING_6]
40.6. k = 0: If gEE,ij denotes the strength of the synapse from E-cell i to E-cell j,
only terms related to E-cells i and j appear on the right-hand side of the equation
for gEE,ij .
Symbols
α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid (AMPA), 5

A
A-current, 304
acetylcholine (ACh), 5, 58
action potential, 3
activation gate, 17
activation variable, 17, 20
adaptation-based weak PING, 282, 286
afterhyperpolarization (AHP) current, 60
all-to-all coupling, 193
AMPA decay time constant, 155
AMPA receptor, 4, 154
ampere (A, unit of current), 7
anti-synchrony, 214, 218
artificial cerebrospinal fluid (ACSF), 297
asymptotics (notation), 6
asynchronous initialization, 193, 195
attention, 58
attracting fixed point, 80, 86, 89
attracting limit cycle, 26, 92
autapse, 156, 160
averaging, 235
Avogadro's number, 161
axon, 1
axon hillock, 4
axon terminal, 4

B
Banach fixed point theorem, 365
barrel cortex, 313
basal ganglia, 201, 293
basin of attraction, 96
basket cell, 33
beta oscillation, 293
beta rhythm, 293
bifurcation, 79
bifurcation diagram, 83, 92–94, 96, 394
bifurcation type 1, 85, 88
bifurcation type 2, 99
bifurcation type 3, 111
big-O notation, 7
bisection method, 75, 86, 160, 361
bistability, 96, 399
blow-up in finite time, 28
blue sky bifurcation of cycles, 97
blue sky bifurcation of fixed points, 80
Boltzmann's constant, 12
Brouwer fixed point theorem, 364
bursting, 141, 143, 144, 146, 147, 149

C
CA1, 257
CA3, 257
canard explosion, 105
Cantor function, 187
capacitor, 15
cation, 132
cell assembly, 295, 327
G
GABAergic, 5
gamma oscillation, 256
gamma rhythm, 256
gamma-aminobutyric acid (GABA), 5
gamma-beta transition, 295
gap junction, 6, 165, 273
gating variable, 17
Gaussian, 372
Gershgorin's theorem, 172
ghost of a pair of fixed points, 83, 84, 394, 395
glial cell, 1
global convergence, 365
globally attracting fixed point, 92
glutamate, 4
glutamatergic, 4

H
h-current, 132, 135
h-current-based weak PING, 282
hard Hopf bifurcation, 97
Heaviside function, 423
Hebb, Donald, 295
Hebbian plasticity, 295
hertz (Hz, unit of frequency), 7, 8, 48
heterogeneous conduction delays, 234
heterogeneous drives, 194
heterogeneous synaptic strengths, 197
high-threshold potassium current, 129
hippocampus, 1
Hodgkin, Alan, 15
Hodgkin-Huxley ODEs, 15, 18, 99, 121, 132
Hodgkin-Huxley ODEs, reduced to two dimensions, 74, 75, 99–101, 125
Hodgkin-Huxley PDEs, 39
homoclinic bifurcation, 114
homoclinic trajectory, 114
Hopf bifurcation, 91, 97
Hopf bifurcation theorem, 91
Huxley, Andrew, 15
hyperpolarization, 3
hysteresis loop, 142

I
I-cell, 175, 255
ideal gas law, 13
inactivating current, 111, 112
inactivation, 3, 20
inactivation gate, 18, 306
inactivation variable, 18, 20
independent random variables, 179, 369
infinitesimal PRC, 202, 203
ING, 269
inhibitory, 3
integrating factor, 64, 252, 417
inter-burst interval, 141
inter-spike interval, 3
interaction function, 201, 214
interneuron, 5
intrinsic period, 194, 213
intrinsically bursting (IB) cells, 299
invariant cycle, 88
ionotropic receptor, 5
irregular entrainment, 185
Izhikevich neuron, 49

J
Jacobi matrix, 77, 87
Jahr-Stevens model, 161, 162
joule (J, unit of work and energy), 7, 12

K
kainate, 281
kainate receptors, 281
kelvin (K, unit of temperature), 12

L
lack of memory, 375
Lapicque, Louis Édouard, 45
lateral septum, 164
layer II/III, 6
leak current, 16
leakiness, 46, 325
U
unstable spiral, 87
upper branch of the f-I curve, 121

V
variance, 369
vesicle, 341
volt (V, unit of electrical potential), 7
voltage spike, 3, 20

W
Wang-Buzsáki (WB) model, 33, 123
weak coupling, 235
weak PING, 281, 282
Wiener process, 375
Wilson-Cowan model, 175
winning streak, 279
work of compression, 13
working memory, 115, 302