
CNS, JC talks

Exact mean-field models for spiking neural networks with adaptation
Liang Chen, Sue Ann Campbell
Journal of Computational Neuroscience (2022) 50:445–469
https://doi.org/10.1007/s10827-022-00825-9

Presented by Gustavo Patow, ViRVIG–UdG & CNS


Why?
Personal reasons…
The Izhikevich model
E.M. Izhikevich, 2003
https://www.izhikevich.org/publications/spikes.htm

u = 0.04 × u + 5 × u + 140 - w + I
2

w = a × (b × u - w)
if u > 30mV threshold
u=c
w=w+d
Variables: a, b, c, d
6
u = 0.04 × u 2 + 5 × u + 140 - w + I
w = a × (b × u - w)
if u > 30mV u
u=c
w
w=w+d

7
With code!
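A minimal forward-Euler sketch in Python: the regular-spiking values a = 0.02, b = 0.2, c = −65, d = 8 are the standard ones from Izhikevich (2003), while the input current, time step and duration are arbitrary choices for this example.

```python
import numpy as np

# Standard regular-spiking parameters (Izhikevich, 2003)
a, b, c, d = 0.02, 0.2, -65.0, 8.0
I = 10.0              # constant input current (arbitrary example value)
dt, T = 0.25, 1000.0  # time step and duration in ms (arbitrary)

u, w = c, b * c       # initial membrane potential and recovery variable
spike_times = []

for step in range(int(T / dt)):
    # Forward-Euler update of both model equations
    u += dt * (0.04 * u**2 + 5.0 * u + 140.0 - w + I)
    w += dt * a * (b * u - w)
    if u >= 30.0:     # threshold crossing: record spike and apply reset rule
        spike_times.append(step * dt)
        u = c
        w += d

print(f"{len(spike_times)} spikes in {T:.0f} ms")
```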
Equivalence


QIF model + adaptation  ⟷  Theta model + adaptation

vk(t) = tan(θk(t)/2)

[Figure: vk(t) and wk(t) traces; the spike vk → +∞ with reset at −∞ corresponds to θk passing through π]

[Izhikevich 2003, IEEE Trans. Neural Networks, 14(6); figure adapted from Wikipedia]
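The equivalence is a one-line change of variables; a sketch for the bare QIF equation without adaptation (with adaptation, −wk simply joins the input current term):

```latex
\dot{v}_k = v_k^2 + I , \qquad v_k = \tan\!\left(\tfrac{\theta_k}{2}\right)
\quad\Longrightarrow\quad
\dot{\theta}_k = (1 - \cos\theta_k) + (1 + \cos\theta_k)\, I .
```

The blow-up vk → +∞ followed by the reset at −∞ becomes θk passing smoothly through π.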
Mean-Field formulation
The network of Izhikevich neurons

All-to-all coupled
[Figure: network schematic with per-neuron variables vk(t), wk(t)]
CA3 Izhikevich neurons…
They use a variant of the standard Izhikevich neuron:

vk′ = vk(vk − α) − wk + ηk + Iext + Isyn,k
wk′ = a(b vk − wk)
If vk ≥ vpeak, then vk ← vreset and wk ← wk + wjump

Note: dimensionless version!


[Dur-e-Ahmad et al., 2012]
CA3 Izhikevich neurons…
Neurons are connected by the standard synaptic current model

Isyn,k = gsyn sk (er − vk)

• where (assumed to be the same for all neurons):
• er is the reversal potential
• gsyn is the maximum synaptic conductance
• The synaptic gating variable sk ∈ [0, 1] represents the proportion of ion channels open in the postsynaptic neuron as a result of firing in the presynaptic neurons
• For a network with all-to-all connectivity, sk is homogeneous across the network, as every postsynaptic neuron receives the same summed input from all presynaptic neurons
• Thus sk = s
Gating Variable(s)
• The mechanism of synaptic transmission can be formally
described by a linear system of ODEs with a sum of delta
pulses corresponding to the times a neuron fires a spike
[Ermentrout & Terman, 2010]
• The single exponential synapse is modelled by

s′ = −s/τs + (sjump/N) ∑k=1..N ∑tjk<t δ(t − tjk)
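A minimal numerical sketch of this synapse: s decays exponentially between spikes, and each network spike kicks it up by sjump/N. The spike train below is a random placeholder and all parameter values are illustrative.

```python
import numpy as np

tau_s, s_jump, N = 5.0, 1.0, 100   # illustrative values
dt, T = 0.1, 200.0
rng = np.random.default_rng(0)
spike_times = np.sort(rng.uniform(0.0, T, size=500))  # placeholder spike train

s, trace, i = 0.0, [], 0
for step in range(int(T / dt)):
    t = step * dt
    s *= np.exp(-dt / tau_s)       # exact decay of s' = -s/tau_s over one step
    while i < len(spike_times) and spike_times[i] < t + dt:
        s += s_jump / N            # one delta pulse per presynaptic spike
        i += 1
    trace.append(s)
```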
Aside: Lorentzian approximation (next)
Population density approach
• Excellent readings:
• Felix Apfaltrer, Cheng Ly & Daniel Tranchina (2006) Population density
methods for stochastic neurons with realistic synaptic kinetics: Firing
rate dynamics and fast computational methods, Network:
Computation in Neural Systems, 17:4, 373-418,
DOI:10.1080/09548980601069787
• Nykamp, D.Q., and Tranchina, D. (2000) A population density
approach that facilitates large-scale modelling of neural networks:
analysis and an application to orientation tuning. J. Comput.
Neurosci. 8, 19–50. doi:10.1023/A:1008912914816
Mean-field modelling ideas

dr/dt = ?   d⟨v⟩/dt = ?
• Lorentzian distribution ansatz
• [Montbrió et al. 2015, Phys. Rev. X 5]
• [Ott and Antonsen, 2008, Chaos, 18(3)]

Population density approach

[Diagram: population density ρ(v, w, η, t); Lorentzian ansatz in v with half-width x(η, t) and center y(η, t); heterogeneity distribution L(η) with center η̄ and half-width Δη]
Mean-field modelling ideas

d⟨w⟩/dt = ?

Moment closure assumption: ⟨w | v, η⟩ ≈ ⟨w | η⟩
• [Nicola & Campbell, 2013b]: the validity of this assumption at high firing rates is supported by numerical simulations of the full network
Mean-field modelling ideas

d⟨w⟩/dt = ?

• Perturbation theory: the mean adaptation with the parameter η is sufficiently greater than the after-spike jump size, ⟨w | η⟩ ≫ wjump [Nicola & Campbell, 2013, JCN, 35(1)]
The mean-field model
ODEs:

r′ = Δη/π + 2r⟨v⟩ − (α + gsyn s)r
⟨v⟩′ = ⟨v⟩² − α⟨v⟩ − ⟨w⟩ + η̄ + Iext + gsyn s(er − ⟨v⟩) − π²r²
⟨w⟩′ = a(b⟨v⟩ − ⟨w⟩) + wjump r
s′ = −s/τs + sjump r
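A sketch of integrating these four ODEs with SciPy; all parameter values below are illustrative placeholders, not the paper's fitted CA3 values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative placeholder parameters (not the paper's fitted values)
alpha, a, b = 0.6, 0.03, -0.01
eta_bar, delta_eta = 0.12, 0.02
g_syn, e_r, tau_s = 1.2, 1.0, 2.6
w_jump, s_jump, I_ext = 0.02, 1.2, 0.0

def mean_field(t, y):
    r, v, w, s = y   # firing rate, <v>, <w>, synaptic gating
    dr = delta_eta / np.pi + (2.0 * v - alpha - g_syn * s) * r
    dv = (v * v - alpha * v - w + eta_bar + I_ext
          + g_syn * s * (e_r - v) - (np.pi * r) ** 2)
    dw = a * (b * v - w) + w_jump * r
    ds = -s / tau_s + s_jump * r
    return [dr, dv, dw, ds]

sol = solve_ivp(mean_field, (0.0, 500.0), [0.01, -0.5, 0.0, 0.0], max_step=0.1)
print("final (r, <v>, <w>, s):", sol.y[:, -1])
```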
Network vs Mean-field Model

[Figure: asynchronous tonic firing (EPs) and synchronous bursting (POs); data: pyramidal neurons in CA3 of hippocampus]
[Dur-e-Ahmad et al. 2012, JCN, 33(1)]
[Hemond et al. 2008, Hippocampus, 18(4)]
Network vs Mean-field Model (2)

[Bifurcation diagram: regions labelled EP−, EP+, PO− and PO+; asynchronous tonic firing (EPs) vs. synchronous bursting (POs)]
Network of two-population Izhikevich neurons
Strongly adapting / Weakly adapting
Data: pyramidal neurons in CA3 of hippocampus
[Figure: per-neuron traces vk(t) and wk(t)]
[Dur-e-Ahmad et al. 2012, JCN, 33(1)]
[Hemond et al. 2008, Hippocampus, 18(4)]
[Izhikevich 2003, IEEE Trans. Neural Networks, 14(6)]


Network of two-population Izhikevich neurons
Strongly adapting (p) / Weakly adapting (q)
Network of two-population Izhikevich neurons
• Populations: strongly adapting (p) / weakly adapting (q)
• With per-population currents and gating variables
• k = Np/(Np + Nq) is the proportion of strongly adapting neurons
Two-pop. Network vs Mean-Field Model
[Figure: panels for strongly adapting (SA) and weakly adapting (WA) populations]
What can we learn from the model?
[Figure: SA and WA population dynamics]
• Explains phenomena observed in experimental studies
• [Dur-e-Ahmad et al. 2012, JCN, 33(1)]
• [Hemond et al. 2008, Hippocampus, 18(4)]
• New discovery!
Assumptions…
To assess the validity of the mean-field approximation, we examine all the
assumptions that are imposed during the derivation
Assumptions (1)
• All-to-all connectivity within the population and
between different populations
• Reasonable for the application to the CA3 region of the hippocampus
• There are formalisms for sparse networks [Ferguson et al., 2015; Di Volo & Torcini, 2018; Bi et al., 2021; Lin et al., 2020]
Assumptions (2)
• N → ∞, the thermodynamic limit
• As the number of neurons increases, the spread of the network variables around the mean narrows and gets closer to the dynamics of the mean-field model
Assumptions (3)
• ⟨w|η⟩ ≫ wjump, the mean adaptation variable with the parameter η is sufficiently greater than the homogeneous after-spike jump value
• This assumption is required for the differential equation of ⟨w⟩
• However, the mean-field description still captures the essential shape and frequency of the firing activity of the network
• The accuracy could be improved by including higher-order terms in the Taylor expansion ⇒ extra term in the ⟨w⟩′ equation
Assumptions (4)
• ⟨w|v,η⟩ = ⟨w|η⟩, first-order moment closure assumption, also called the adiabatic approximation
• This assumption requires the membrane potential dynamics to be fast relative to the adaptation
• We could employ a higher-order moment closure approximation
• Although we would need to weigh the cost of the added effort against the improvement in accuracy of the resulting mean-field model [Ly & Tranchina, 2007]
Assumptions (5)
• The Lorentzian ansatz on the conditional density function
• A crucial step in reducing the population density description to a low-dimensional mean-field model
• [Nicola and Campbell, 2013b] show how changing the expansion of the population density function can drastically change the resulting mean-field model
Assumptions (6)
• vpeak = −vreset → ∞, limit of the resetting rule when neurons fire
• The parameter values used are based on actual neuronal data, except the resetting values
• Essential for the validity of the Lorentzian ansatz, and for linking the Izhikevich model to the Theta model
• When dealing with a biological network based on experimental data, changing vpeak and vreset can affect firing rates and the estimation of ⟨v⟩
• In numerical experiments: vpeak = −vreset = 200
• Could be addressed by adding a refractory period to the network model [Montbrió et al., 2015]
• This makes the firing rate of the asynchronous tonic firing (EPs) and the ⟨v⟩ of the network match those of the Theta model, and hence the mean-field model
• But that model lacks adaptation and thus cannot show synchronous bursting (POs)
Assumptions (7)
• Lorentzian distribution of the heterogeneous current
• Many parameters can be sources of heterogeneity in a network (e.g., η but also gsyn)
• Sharply reduces the complexity of the final mean-field model
• The Lorentzian distribution is physically implausible, since both its expected value and its variance are undefined
• Could use a Gaussian instead [Klinshov et al., 2021], but this would result in a much more complex expression…
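For concreteness, a sketch of drawing the heterogeneous currents ηk from a Lorentzian; the deterministic quantile formula is the one used by Montbrió et al. (2015), and the values of η̄ and Δη are placeholders.

```python
import numpy as np

N, eta_bar, delta_eta = 1000, 0.12, 0.02   # placeholder values

# Deterministic sampling: equally spaced quantiles of the Lorentzian CDF
k = np.arange(1, N + 1)
eta = eta_bar + delta_eta * np.tan(0.5 * np.pi * (2 * k - N - 1) / (N + 1))

# Random alternative: a scaled and shifted standard Cauchy draw
eta_rand = eta_bar + delta_eta * np.random.default_rng(0).standard_cauchy(N)
```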
Assumptions (8)
• η ∈ (−∞, ∞), range of the heterogeneous current
• This assumption is adopted to evaluate the integrals via the residue theorem in the derivation
• For the neural network to be realistic in spite of this requirement, the distribution range of the heterogeneous parameter should be much wider than its half-width at half maximum
Code available!
• Matlab
• All figures
• Well organized
• Could be streamlined, though…
• In general, OK!
Summary
• Derivation of exact mean-field models for neural networks with spike adaptation (Izhikevich neurons)
• One population
• Two-coupled populations
• Great agreement between the network and mean-field
models
• Results from bifurcation analysis of the mean-field
model
• Impact of proportion of strongly adapting neurons
• Impact of heterogeneity of firing rates of individual neurons
• Interesting mechanism to generate bursting
Reproducing the model
Problems and more problems…
• [Reproducing Polychronization: A
Guide to Maximizing the
Reproducibility of Spiking Network
Models, Pauli et al., 2018]
• Reproduced the original results
with nest::…
• Long story short: not easy, even
with code!
nest::
• NEST is a simulator for spiking neural network models that
focuses on the dynamics, size and structure of neural systems
rather than on the exact morphology of individual neurons
• The development of NEST is coordinated by the NEST Initiative
• NEST is ideal for networks of spiking neurons of any size, for
example:
• Models of information processing e.g. in the visual or auditory cortex
of mammals
• Models of network activity dynamics, e.g. laminar cortical networks or
balanced random networks
• Models of learning and plasticity
Step 0
• Try nest:: desktop online
• Needs EBRAINS account
• Fiddle with settings and parameters…
• Only able to use NESTML models
• Default
• From repository
• Conclusion:
• Cool
• Limited (NESTML
required)…
First step
• Install nest::
• From Conda…
• Manually
• From a YAML file
• Docker…
• Local Collab…
• Jupyter Notebook!
• Learn a little bit of (pretty easy):
• nest::
• nestml
Next step: Izhikevich nest:: tutorial
• Try to reproduce the
original paper behaviours
• Had to deal with some
nest:: idiosyncrasies
• Sk, not S…
• No reloading NESTML
• Kill & reload kernel!
• No resetting rule
• nest.ResetKernel()
• Rebuild network for every
experiment!
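A minimal sketch of the rebuild-per-experiment pattern this forces, assuming NEST 3's built-in izhikevich model and spike_recorder (the a, b, c, d values are the standard regular-spiking ones, used only as an illustration):

```python
import nest

def run_once(amplitude):
    nest.ResetKernel()                   # wipes the whole network, so...
    neuron = nest.Create("izhikevich",   # ...rebuild everything on each run
                         params={"a": 0.02, "b": 0.2, "c": -65.0, "d": 8.0})
    dc = nest.Create("dc_generator", params={"amplitude": amplitude})
    rec = nest.Create("spike_recorder")
    nest.Connect(dc, neuron)
    nest.Connect(neuron, rec)
    nest.Simulate(1000.0)
    return rec.get("n_events")

for amp in (4.0, 8.0, 12.0):
    print(f"amplitude {amp} pA -> {run_once(amp)} spikes")
```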
Next: Reproduce single neuron
• No stopping until spike-perfect…

[Figure: v and w traces]
Then… 2 ~ 10 neurons!
• From code…

• Spikes/Deltas/Convolutions…?
Ask the forums!
• Not much action… :-(
• ~ 1 week
• ~ 1 answer
• Not always useful
Personal conclusions…
• A full month and still going on…
• Very elegant platform
• New gold standard?
• Especially if planning to run thousands of neurons
• GPU/cluster capabilities
• However…
• Documentation not perfect (spikes!)
• Forums not very active, but they answer in the end, sort of…
• With my own code, only a couple of weeks, probably
• Not worth the effort… :-(
Final piece of advice
• Stay safe and far from the water!
CNS, JC talks
Exact mean-field models for spiking neural networks with adaptation
Liang Chen, Sue Ann Campbell
Journal of Computational Neuroscience (2022) 50:445–469
https://doi.org/10.1007/s10827-022-00825-9

Thanks!
