
Neural Fields, Masses and Bayesian Modelling

D.A. Pinotsis and K.J. Friston

The Wellcome Trust Centre for Neuroimaging, University College London, Queen Square,
WC1N 3BG

Abstract

This chapter considers the relationship between neural field and mass models
and their application to modelling empirical data. Specifically, we consider neural
masses as a special case of neural fields – when conduction times tend to zero – and
focus on two exemplar models of cortical microcircuitry: namely, the Jansen-Rit
and the Canonical Microcircuit models. Both models incorporate parameters per-
taining to important neurobiological attributes, such as synaptic rate constants and
the extent of lateral connections. We describe these models and show how Bayes-
ian inference can be used to assess the validity of their field and mass variants,
given empirical data. Interestingly, we find greater evidence for neural field vari-
ants in analyses of LFP data but fail to find more evidence for such variants, rela-
tive to their neural mass counterparts, in MEG (virtual electrode) data. The key
distinction between these data is that LFP data are sensitive to a wide range of
spatial frequencies and the temporal fluctuations that these frequencies contain. In
contrast, the lead fields, inherent in non-invasive electromagnetic recordings, are
necessarily broader and suppress temporal dynamics that are expressed in high
spatial frequencies. We present this as an example of how neuronal field and mass
models (hypotheses) can be compared formally.

Acknowledgments This work was supported by the Wellcome Trust.



Introduction

This chapter reviews recent developments in the modelling of brain imaging
data that exploit neural field theory. We focus on the Bayesian optimization of
model parameters, known as Dynamic Causal Modelling (DCM), and its application in
the context of neural fields (Friston et al., 2003; Pinotsis et al., 2012). This frame-
work is part of the academic freeware Statistical Parametric Mapping (SPM)1,
which is a popular platform for analyzing neuroimaging data, used by several neu-
roscience communities worldwide. DCM allows for a formal (Bayesian) statistical
analysis of cortical network connectivity, based upon realistic biophysical mod-
els of brain responses. It is this particular feature of DCM – the unique combina-
tion of generative models with optimization techniques based upon (variational)
Bayesian principles – that furnishes a novel way to characterize brain organization.
In particular, it provides answers to questions about how the brain is wired and
how it responds in different situations. In this chapter, we first present the general
framework with an emphasis on the role of neural fields. We then consider par-
ticular applications, in the context of LFP (Moran et al., 2011) and MEG data
(Schwarzkopf et al., 2012) and show how DCM allows one to adjudicate between
alternative models of brain imaging data, such as neural masses and fields. This
comparison is based upon the Free Energy approximation to Bayesian model evi-
dence, which comprises both accuracy and complexity (see e.g. Friston et al., 2007).
This allows one to adjudicate among competing models, in terms of explanations
for data that are both accurate and parsimonious.
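As a schematic illustration of this comparison: because the free energy approximates log model evidence, the difference in free energy between two fitted models is a log Bayes factor. The free-energy values in the following sketch are invented purely for illustration; the chapter reports no such numbers here:

```python
# Toy illustration of Bayesian model comparison under the free energy
# approximation, where F ~ log model evidence = accuracy - complexity.
# Both F values are invented for illustration (not results from this chapter).
F_field = -1520.3   # free energy of a hypothetical neural field model
F_mass = -1523.9    # free energy of a hypothetical neural mass model

# The free-energy difference approximates the log Bayes factor, i.e. the
# log ratio of model evidences in favour of the field model.
log_bayes_factor = F_field - F_mass

# By common convention, a log Bayes factor of about 3 or more is taken as
# strong evidence for the winning model.
print(round(log_bayes_factor, 1))  # → 3.6
```

In DCM, the same subtraction is applied to the free energies returned by model inversion, so that competing models are scored by explanations that are both accurate and parsimonious.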

Neural masses and fields

Neural mass models are a particular case of neural fields, where the (hidden
neuronal) states of populations of neurons are functions of time only. Such models
can generate temporal responses from one or several interconnected populations
and have been used successfully to explain empirical electrophysiological data
from local field potentials (LFP) and EEG/MEG (see e.g. Lopes da Silva et al.,
1974; van Rotterdam et al., 1982; Steriade and Deschenes, 1984; Lumer et al.,
1997; Valdes et al., 1999; Liley et al., 1999; Riera et al., 2007; Moran et al.,
2007; Kiebel et al., 2009). To date, neural mass models have been largely based
upon point sources and formulated using ordinary differential equations (ODEs).
A key challenge in this area has been to model observed signals as being gener-
ated by continuous and spatially distributed neuronal activity, of the sort observed
directly using high density multi-electrode arrays and optical imaging. Here, we
consider how one can address this challenge using neural field models.
1 http://www.fil.ion.ucl.ac.uk/spm/

Neural field models describe current fluxes as continuous processes on a cortical
manifold, using partial differential equations (PDEs) (see Deco et al., 2008 for a
review and also Nunez, 1995, 1996; Nunez et al., 2006; Lopes da Silva and Storm
van Leeuwen, 1978; Atay and Hutt, 2006; Breakspear et al., 2006; Bressloff,
1996, 2001; Coombes et al., 2003, 2005, 2007; Freeman, 2003, 2005; Liley et al.,
2002; Robinson et al., 2001, 2002, 2003; O’Connor et al., 2005; Jirsa and Haken,
1996; Qubbaj and Jirsa, 2007; Jirsa, 2009; Ghosh et al., 2008). The key advance
that neural field models offer, over conventional neural mass models, is that they
embody spatial parameters (like the density and extent of lateral connections).
This means, in principle, one can estimate the spatial parameters of cortical infra-
structures generating electrophysiological signals (and infer changes in those pa-
rameters over different levels of an experimental factor) from empirical data. This
rests on modelling responses not just in time but also over space. Clearly, to ex-
ploit this sort of model, one needs to measure the temporal dynamics of observed
cortical responses over different spatial scales; for example, with high-density re-
cordings, at the epidural or intracortical level. However, the impact of spatially ex-
tended dynamics is not restricted to expression over space but can also have pro-
found effects on temporal (e.g., spectral) responses at one point (or averaged
locally over the cortical surface). This means that neural field models can also
play a key role in the modelling of non-invasive electrophysiological data that
does not resolve spatial activity directly.

Neural fields as models for empirical data

The modelling of electrophysiological signals depends upon models of how
they are generated in source space and how the resulting (hidden) neuronal states
are detected by sensors. At the source level we consider two neural field models
that are inspired by biophysical considerations: an extension of a widely used
mass model introduced by Jansen and Rit comprising three neural populations
(Jansen and Rit, 1995); and a canonical microcircuit field model, where the py-
ramidal cell population of the previous model is split into two subpopulations.
This separates the sources of forward and backward connections in cortical hierar-
chies – a separation that has proven useful in explaining several aspects of distributed
cortical computation in theoretical neurobiology.
In terms of the mapping from source to sensor space, we use a conventional
lead field formulation (that is expanded in terms of spatial basis functions) and fo-
cus on the ensuing observations of power spectra of the sort considered in conven-
tional time-series analysis. There is a long history of modelling steady-state (or
ongoing) activity spectra, associated with neural fields, usually in models of the
whole cortex; e.g., (Jirsa, 2009). Robinson and colleagues (Robinson, 2006) have
developed a neurophysiologically grounded model of corticothalamic activity,
which reproduces many properties of empirical EEG signals; such as the spectral
peaks seen in various sleep states and seizure activity. Technically, the spectra
summarizing the response of cortical sources can be defined in terms of transfer
functions, mapping endogenous neuronal fluctuations to observed responses
(Freeman, 1972; Nunez, 1995; Robinson, 2003; Robinson et al., 2001). We will
derive transfer functions and expressions for the spectral responses for sources
that comprise multiple layers. This allows us to model the spectral activity of cor-
tical fields as measured on the cortical surface and to also compute the corre-
sponding spectral responses, as measured by invasive (LFP) and non-invasive
(MEG) electrophysiological sensors in an efficient manner.
The resulting scheme can be regarded as inverting population models of the
Amari type – using real data – with a particular focus on Bayesian model inver-
sion. Previous work in a similar vein includes the use of Kalman filters to develop
estimation schemes for both neural mass (Valdes et al., 1999; Riera et al., 2007)
and neural field models of a single population (Schiff and Sauer, 2008; Galka et
al., 2008). In a related approach, (Daunizeau et al., 2009) replaced the standard di-
pole source – used in neural mass models – with the principal Fourier mode of a
neural field, for the particular case of exponentially decaying synaptic density over
the cortical surface. Finally, (Markounikau et al., 2010) used a combination of lin-
ear and nonlinear optimisation methods to invert a two-layered neural field model
of voltage-sensitive dye data, describing inhibitory and excitatory populations
(without conduction delays).

Generative models for cross spectral densities

Dynamic Causal Modelling (DCM) allows for the comparison and estimation
of biophysically plausible models of fMRI, EEG, MEG and LFP data (Friston et
al., 2003; David et al., 2006; Moran et al., 2007; 2009; 2011). DCM calls on an
underlying generative model to predict observed data. As with other state-space
models, Dynamic Causal Models are based on a combination of evolution equa-
tions for the hidden neuronal states with static observer equations:

V (t )  f (V ,U ,  )
(1)
Y (t )  L(V ,  )

where V (t )   and U (t )   are vectors of hidden state variables and inputs


respectively, Y (t ) are the predicted time series and we use  to denote the pa-
rameters of the model. Many generative models can be cast in the form of Equa-
tion (1): these include neural mass models, such as the Jansen and Rit model
(Jansen and Rit, 1995) considered below. Most DCMs for electrophysiological
data have been based on neural mass models, which use point sources (e.g.,
equivalent current dipoles) and preclude spatially extended dynamics. The DCM
introduced in (Pinotsis et al., 2012) allows for an explicit modelling of the spatio-
temporal aspects of cortical activity. This means one can make inferences about
parameters pertaining to the topographic distribution of cortical sources from LFP
and M/EEG data; like the extent of connectivity and the conduction velocity of
axonal propagation. Furthermore, including spatial parameters allows one to ex-
plain some effects that other models such as neural masses attribute to time pa-
rameters. In particular, neural field models characterize the propagation of electrical
activity on the cortex to provide a more complete parameterization of the
mechanisms generating cortical responses. In the models considered here, the
postsynaptic convolution of presynaptic inputs is described by a second-order
ODE or two first-order ODEs pertaining to voltage and current. This means that
the left hand side of Equation (1) is augmented with the second derivative of hid-
den (depolarization) states. For cortical sources comprising $n$ layers, our neural
field formulation is based on the following extension of Equation (1):

V  2 BV  B 2V  D  F  V  G  U


Y (t )  L ( x,  )  V ( x, t )dx

D  Q   D  x  x, t  t    Q( x, t )dx dt  (2)

 v1 (t ) 
 
V (t )    
vn (t ) 

where the $n \times 1$ vector $V(t)$ of hidden neuronal states in each layer in Equation
(1) is replaced by $V(x,t)$; both this vector and the input $U(x,t)$ are explicit
functions of both space and time. The dynamics of cortical sources now conform
to integrodifferential equations, as implied by the spatiotemporal convolution
denoted by $\otimes$ on the right hand side of Equation (2).
In the above equation, $D(x,t)$ is an $n \times n$ smooth (analytic) matrix-valued
connectivity function or kernel; $F: \mathbb{R}^n \to \mathbb{R}^n$ is a nonlinear mapping from postsynaptic
depolarization to presynaptic firing rates at each point on the cortical manifold
and $B$ is an $n \times n$ matrix encoding average synaptic decay rates. In short, Equation
(2) says that the rate of change of voltage in each layer comprises three terms: the
first is a simple decay, the second is due to presynaptic inputs from other parts of
the cortical manifold and the final part is due to external inputs, where
$G: \mathbb{R}^n \to \mathbb{R}^n$ maps the inputs to the motion of hidden states. It is the second
component, involving the convolution with the connectivity kernel $D(x,t)$, that embodies
lateral interactions over the cortical manifold. In terms of the observer function,
the linear mapping from hidden states to observed signal now becomes an $m \times n$
matrix function of source space, $L(x,\theta)$, encoding the contribution of the $n$ hidden
states to each of $m$ sensors.

The Jansen and Rit Model

In this section, we provide a brief review of the well-known Jansen and Rit
(JR) neural mass model (Jansen and Rit, 1995). In the JR model, each cortical
source is modelled with three subpopulations: excitatory spiny stellate input cells,
inhibitory interneurons and deep excitatory output pyramidal cells (for classical
approaches to modelling such populations with neural fields, see e.g. Freeman
1972; Wilson and Cowan 1973; Amari 1977; Nunez 1995). For simplicity, we
consider a single source, noting that extensions to multiple sources involve adding
extrinsic (between-source) connections or kernels. The JR model is a particular in-
stance of Equation (1) (where the convolution of presynaptic input is second-
order):

V  2 BV   B 2V  ABF  V  GU


(3)
Y  L V

Here, $A$ and $B$ are the $3 \times 3$ matrices of synaptic parameters controlling the
maximum postsynaptic responses and the rate constants of postsynaptic filtering,
and $G$ is a column vector:

$$A = \mathrm{diag}\left(m_e,\, m_i,\, m_e\right)$$
$$B = \mathrm{diag}\left(\kappa_e,\, \kappa_i,\, \kappa_e\right) \qquad (4)$$
$$G = \begin{pmatrix} \kappa_e m_e \\ 0 \\ 0 \end{pmatrix}$$

As above, $F: \mathbb{R}^3 \to \mathbb{R}^3$ is a nonlinear mapping from depolarization to firing and
$U(t)$ is the external input to each population. Note that there is only one input
and this input enters the first (spiny stellate) population. Based on the cortical micro-
circuitry of intrinsic connections, the JR model prescribes the mapping
$F$ in terms of nonlinear firing rate functions of the depolarization in the
three populations. Writing out Equation (3) in full, we have

v1  2 e v1   e2 v1   e me (d13   (v3 )  U )


v2  2 i v2   i2 v2   i mi d 23   (v3 ) (5)
v3  2 e v3   e2 v3   e me  d31   (v1 )  d32   (v2 ) 

where $v_a(t),\ a = 1, 2, 3$, denotes the expected depolarization in the a-th population
(excitatory stellate, inhibitory and excitatory pyramidal populations, respectively)
and the sigmoid firing rate function is

$$\sigma(v_a) = \frac{1}{1 + \exp\left(r\left(\eta - v_a\right)\right)} \qquad (6)$$

Here, $r$ and $\eta$ are parameters that determine the shape of this sigmoid activation
function. In particular, $r$ is the synaptic gain and $\eta$ is the postsynaptic potential
at which half of the maximum firing rate is elicited. In Equation (5),
$d_{ab}\,\sigma(v_b)$ is the (endogenous) presynaptic input to the a-th population from the b-th
and corresponds to the mapping $F \circ V$. See Fig. 1 for a schematic of this model.
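To make Equations (5) and (6) concrete, the sketch below integrates the JR mass model with a simple Euler scheme. All numerical values (rate constants, amplitudes, connection strengths and sigmoid parameters) and the function names are illustrative assumptions of the order used in the Jansen-Rit literature, not values given in this chapter:

```python
import numpy as np

# Illustrative constants (not from this chapter): rate constants (1/s),
# maximum postsynaptic responses (V), connection strengths, sigmoid shape.
kappa_e, kappa_i = 100.0, 50.0
m_e, m_i = 3.25e-3, 22.0e-3
d13, d23, d31, d32 = 108.0, 33.75, 135.0, 33.75
r, eta = 560.0, 6.0e-3

def sigma(v):
    """Sigmoid voltage-to-firing-rate mapping of Equation (6)."""
    return 1.0 / (1.0 + np.exp(r * (eta - v)))

def jr_step(v, vdot, U, dt=1e-4):
    """One Euler step of the second-order JR equations (Equation (5))."""
    vddot = np.array([
        -2*kappa_e*vdot[0] - kappa_e**2*v[0] + kappa_e*m_e*(d13*sigma(v[2]) + U),
        -2*kappa_i*vdot[1] - kappa_i**2*v[1] + kappa_i*m_i*d23*sigma(v[2]),
        -2*kappa_e*vdot[2] - kappa_e**2*v[2] + kappa_e*m_e*(d31*sigma(v[0]) + d32*sigma(v[1])),
    ])
    return v + dt*vdot, vdot + dt*vddot

v, vdot = np.zeros(3), np.zeros(3)
for _ in range(5000):                      # 0.5 s of simulated activity
    v, vdot = jr_step(v, vdot, U=120.0)    # constant exogenous drive
```

Here `v[2]` plays the role of the pyramidal depolarization that dominates the measured LFP/EEG signal; with stochastic input `U`, this state would furnish the model's predicted electrophysiological response.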

Figure 1: This schematic summarizes the equations of motion or state equations that specify a
(Jansen and Rit 1995) neural mass model of a single source. This model contains three populations,
each loosely associated with a specific cortical subpopulation or layer. The second-order differential
equations describe changes in hidden states (e.g., voltage) that subtend observed local field potentials
or EEG signals. These differential equations effectively implement a linear convolution of presynaptic
activity to produce postsynaptic depolarization. Average firing rates within each subpopulation are
then transformed through a nonlinear (sigmoid) voltage-firing rate function $\sigma(\cdot)$ to provide inputs to
other populations. These inputs are weighted by connection strengths.

The canonical microcircuit

Recent work suggests that superficial layers of visual cortex oscillate preferen-
tially at gamma frequencies, while deep layers primarily oscillate at lower al-
pha/beta frequencies (Buffalo et al., 2011). Since forward connections originate
largely from superficial layers and backward connections primarily originate from
deep layers, these spectral asymmetries suggest that forward connections use
faster (gamma) temporal frequencies, while backward connections may employ
lower (beta) frequencies – a suggestion that has experimental support (Roopun et
al., 2010). These asymmetries mandate something quite remarkable: namely, a
synthesis and segregation of forward and backward output from afferent input.
This segregation can only arise from local neuronal computations that are formally
structured and precisely interconnected. The canonical microcircuit is a detailed
proposal for such a laminar-specific intracortical architecture that describes how
information flows through the cortical column. This model is based on findings in
the primary visual cortex (Douglas and Martin, 1991) but recent work (Lefort et
al., 2009; Weiler et al., 2008) indicates that similar microcircuits exist in other re-
gions – such as somatosensory and motor cortex.
Douglas and Martin recorded intracellular potentials from cells in area 17 of
the cat, while they stimulated cortical afferents, and noticed a strong compartmen-
talization of the superficial and deep cell properties – reflected in slow superficial
responses and fast input layer responses. The authors created a conductance-based
model that reproduced the evolution of excitation and inhibition through the corti-
cal circuit with great precision. This model contained three groups of cells: super-
ficial and deep pyramidal cells, and a common pool of inhibitory cells. All three
pools of neurons received thalamic drive – although the thalamic drive to deep
layer cells was weaker than the other inputs. All neuronal populations had self-
connections and were interconnected with the other populations. Finally, inhibition
was stronger onto the deep pyramidal population. This model was able to repro-
duce the features observed in their electrophysiological recordings – including the
latency difference between superficial and input layer neurons – and has served to
establish several basic properties that are now believed to be replicated in other
cortical areas: first, although superficial and deep compartments are strongly in-
terconnected, their response properties are also segregated. Second, cortex is not
under tonic inhibition, rather, both excitation and inhibition are generated by af-
ferent thalamic input and both shape ongoing cortical responses. Third, the ca-
nonical microcircuit can amplify thalamic inputs to generate self-sustaining activ-
ity, while also maintaining a delicate balance between excitation and inhibition –
so as to prevent runaway excitation.
Previous computational modelling studies indicate that this circuitry allows the
cortex to optimally organize and integrate bottom-up, lateral, and top-down infor-
mation (Raizada and Grossberg, 2003). Douglas and Martin suggest that the rich
anatomical connectivity of superficial layer 2/3 pyramidal cells allows them to
collect information from all relevant top-down, lateral, and bottom-up inputs, and
– through processing in the dendritic tree – select the most likely interpretation of
their subcortical inputs. For a discussion and more details about the canonical
microcircuit and its potential role in predictive coding we refer the reader to (Bastos
et al., under review).
Haeusler and Maass used Hodgkin-Huxley neurons to build a realistic micro-
circuit model and showed that a cortical column – whose connectivity conforms
to the canonical microcircuit – can perform various computations more efficiently
than a column with random connectivity (Haeusler and Maass,
2007). By collapsing two pairs of cell types in the Haeusler and Maass model,
while preserving the topology of the connectivity, one obtains the canonical mi-
crocircuit (CMC) depicted in Fig. 2: this circuit comprises four populations: exci-
tatory spiny stellate input cells (1), inhibitory interneurons (2), deep excitatory py-
ramidal cells (3) and superficial excitatory pyramidal cells (4). The corresponding
evolution equations for the neuronal states (the analogues of Equation 5) are (see
also Fig. 2):

v1  2 e v1   e2 v1  1me (d14   (v4 )  d11   (v1 )  d12   (v2 )  U )


v2  2 i v2   i2 v2   2 mi (d 21   (v1 )  d 22   (v2 )  d 23   (v3 ))
(7)
v3  2 e v3   e2 v3   3 me  d32   (v2 )  d33   (v3 ) 
v4  2 e v4   e2 v4   4 me  d 41   (v1 )  d 44   (v4 ) 

Figure 2: This figure shows the evolution equations that specify a Canonical Microcircuit neural
mass model of a single source. This model contains four populations occupying different cortical lay-
ers: the pyramidal cell population of the JR model is here split into two subpopulations allowing a
separation of the sources of forward and backward connections in cortical hierarchies. As with the JR
model, second-order differential equations mediate a linear convolution of presynaptic activity to pro-
duce postsynaptic depolarization. This depolarization gives rise to firing rates within each sub-
population that provide inputs to other populations.

Neural Field extensions of the Canonical Microcircuit and Jansen and Rit
models

In this section, we transcribe the neural mass models described above into neu-
ral fields. In the case of fields, we consider spatially extended sources occupying
bounded manifolds (patches) in different layers that lie beneath the cortical sur-
face. In this setting, each subpopulation of a neural mass model now becomes a
layer in the cortical sheet. The dynamics of cortical sources conform to integrodif-
ferential equations, such as the Wilson-Cowan or Amari equations, where cou-
pling is parameterised by matrix-valued coupling kernels – namely, smooth (ana-
lytic) connectivity matrices that also depend on time and space. Assuming that the
connectivity kernels $d_{ij}(x,t)$ appearing in Equation (2) factorize as
$d_{ij}(x,t) = k_{ij}(|x|)\,\delta(t - \tau_{ij}|x|)$, Equation (2) becomes

(V  2BV  B2V )(x, t)  AB K ( x  x ')F V ( x ', t  | x  x ' | )dx ' GU


 (8)

where $\tau$ is the inverse speed at which spikes propagate along connections, and
interactions among populations – within and across macrocolumns – are described
by the connectivity kernel $K = K^{(i)} + K^{(e)}$. This form provides an explicit parame-
terization of conduction delays that will be exploited later, when using the field
model as an observation model. One can see that in the infinite speed limit $\tau \to 0$
the spatial convolution in Equation (8) disappears (to within a scaling constant)
and we recover the neural mass Equations (3).
For both the canonical microcircuit and the Jansen and Rit models, the intrinsic
connectivity $K^{(i)}$ is an exponentially decaying kernel, commonly used in the lit-
erature to account for excitatory and inhibitory interactions (see e.g. Pinotsis et al.,
2012). The extrinsic connectivity kernel $K^{(e)}$ was introduced in (Pinotsis and Fris-
ton, 2011; Grindrod and Pinotsis, 2011) to model patchy lateral (horizontal) con-
nections and is characterized by non-central peaks, allowing for differences in (and
estimation of) the range and dispersion of lateral connections, summarized in
terms of the parameters $h_a$ and $c_{aa}$ respectively. The canonical microcircuit
(CMC) field model we consider here is described by Equation (8), where the con-
nectivity kernel is given by
$$K = K^{(i)} + K^{(e)}$$

$$K^{(i)} = \begin{pmatrix} k_{11}^{(i)} & k_{12}^{(i)} & 0 & k_{14}^{(i)} \\ k_{21}^{(i)} & k_{22}^{(i)} & k_{23}^{(i)} & 0 \\ 0 & k_{32}^{(i)} & k_{33}^{(i)} & 0 \\ k_{41}^{(i)} & 0 & 0 & k_{44}^{(i)} \end{pmatrix}, \qquad K^{(e)} = \begin{pmatrix} k_{11}^{(e)} & 0 & 0 & 0 \\ 0 & k_{22}^{(e)} & 0 & 0 \\ 0 & 0 & k_{33}^{(e)} & 0 \\ 0 & 0 & 0 & k_{44}^{(e)} \end{pmatrix}$$

$$k_{ab}^{(i)}(x) = \tfrac{1}{2}\, a_{ab}\, e^{-c_{ab}|x|}$$
$$k_{aa}^{(e)}(x) = \tfrac{1}{2}\, c_{aa}\left(e^{-c_{aa}|x - h_a|} + e^{-c_{aa}|x + h_a|}\right) \qquad (9)$$
Here, the parameters $a_{ab}$ and $c_{ab}$ encode the strength (analogous to the number
of synaptic connections) and extent of intrinsic connections between cortical
layers. The intrinsic connections can be regarded as inter-laminar connections
within a macrocolumn, while the extrinsic (between-macrocolumn) connections
correspond to horizontal connections and connect layers of the same type at a dis-
tance $h_a$; see Fig. 3.
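The two kernel families in Equation (9) are easy to evaluate directly. In this sketch the values of $a_{ab}$, $c_{ab}$ and $h_a$ (and the function names) are arbitrary illustrative choices, not estimates from the chapter:

```python
import numpy as np

x = np.linspace(-0.02, 0.02, 401)   # lateral distance on the patch (m)

def k_intrinsic(x, a_ab=1.0, c_ab=300.0):
    """Exponentially decaying intrinsic kernel k_ab^(i) of Equation (9)."""
    return 0.5 * a_ab * np.exp(-c_ab * np.abs(x))

def k_extrinsic(x, c_aa=300.0, h_a=0.01):
    """Patchy extrinsic kernel k_aa^(e): non-central peaks at +/- h_a."""
    return 0.5 * c_aa * (np.exp(-c_aa * np.abs(x - h_a))
                         + np.exp(-c_aa * np.abs(x + h_a)))

# The intrinsic kernel peaks at x = 0 (local, inter-laminar coupling),
# whereas the extrinsic kernel peaks at x = +/- h_a, modelling patchy
# horizontal connections between macrocolumns.
```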

Figure 3: Connectivity kernel describing a combination of patchy but isotropic distributions by us-
ing connectivity kernels with non-central peaks. This kernel models sparse intrinsic connections in cor-
tical circuits that mediate both local and non-local interactions. In other words, neurons both talk to
their immediate neighbours and receive input from remote populations that share the same functional
selectivity; see Equation (9). The insert is modified from www.ini.uzh.ch/node/23776

This kernel models sparse intrinsic connections in cortical circuits that mediate
both local and non-local interactions and allows one to estimate properties of lat-
eral interactions that are particularly relevant in the context of data obtained from
retinotopically mapped visual cortex. We will see an example of this using an at-
tention task and MEG data below. On the other hand, to illustrate the Jansen and
Rit field model in non-mapped cortex, we will use data from the auditory cortex
under anaesthesia and neglect extrinsic connectivity; that is, $K^{(e)} = 0$. In this ex-
ample, we will use the same exponentially decaying form as for the CMC field
model above, namely $K(x) = K^{(i)}(x)$, where

$$K^{(i)} = \begin{pmatrix} 0 & 0 & k_{13}^{(i)} \\ 0 & 0 & k_{23}^{(i)} \\ k_{31}^{(i)} & k_{32}^{(i)} & 0 \end{pmatrix} \qquad (10)$$

and the $k_{ab}^{(i)}$ are given by Equation (9).

Power spectra of neural fields

In what follows, we describe the generative or forward mapping from external
inputs (exogenous neuronal fluctuations) to observed spectral responses for a sin-
gle cortical source. This allows one to compare the predictions of our model with
real data and requires a mapping of neuronal states (the depolarisation fields
above) to sensors – called the lead field. The lead field allows one to infer hidden
parameters characterizing the deployment of sources on the cortical surface, even
when there is no explicit spatial information in the data; see also (Robinson,
2006). The lead field samples particular spatiotemporal frequencies, depending on
the sensitivity profile of the sensors used. For example, if the lead field has a nar-
row spatial support (e.g., when using LFP electrodes), its spatial Fourier transform
will be broad and it will be sensitive to a wide range of spatial frequencies. Con-
versely, when the lead field sees a large part of the cortical surface (e.g., non-
invasive EEG sensors), the spatial Fourier transform will be narrow and only fluc-
tuations in low spatial frequencies will contribute to the observed cross-spectra.
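This distinction can be sketched with the squared spatial-frequency gain of a Gaussian lead field (the Gaussian parameterization is introduced below). The two dispersion values here are illustrative apertures, not estimates from the data analysed in this chapter:

```python
import numpy as np

def lead_field_gain(k, sigma):
    """Squared spatial-frequency gain |L(k)|^2 = exp(-sigma^2 k^2) of a
    Gaussian lead field L(x) = exp(-x^2 / (2 sigma^2))."""
    return np.exp(-sigma**2 * k**2)

k = np.linspace(0.0, 2000.0, 201)        # spatial frequency (rad/m)
lfp = lead_field_gain(k, sigma=0.5e-3)   # narrow aperture (~0.5 mm): LFP-like
meg = lead_field_gain(k, sigma=5.0e-3)   # broad aperture (~5 mm): MEG-like

# The broad (MEG-like) lead field suppresses high spatial frequencies --
# and with them the temporal dynamics those frequencies carry -- far more
# strongly than the narrow (LFP-like) lead field.
```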
The lead field is parameterised by $\theta$ as a continuous gain function $L(x,\theta)$
over the cortical patch that is applied to a mixture of neuronal (depolarisation)
states at each point on the patch. In the case of the CMC field model, this mixture
is determined by four coefficients $Q = [q_1\ q_2\ q_3\ q_4]$, while the gain function is pa-
rameterised in terms of the coefficients $L(k,\theta)$ of a spatial Fourier basis set:

$$L(x,\theta) = \sum_k L(k,\theta)\, e^{ikx} \qquad (11)$$

The predicted response at an LFP or virtual electrode – for a given set of neural
and lead field parameters $\theta$ – is obtained by integrating over the cortical patch

$$y(t,\theta) = \int L(x,\theta)\left(Q \cdot V(x,t)\right) dx \qquad (12)$$

which leads to a spectral response of the form

Y ( ,  )   L(k ,  )(Q  T (k ,  )U (k ,  )) (13)


k

The predicted spectral response measured by the sensor is therefore

g ( , )  Y ( ,  )Y * ( ,  )
  | L (k ,  ) |2 QT (k ,  ) g u (k ,  )T (k ,  )* Q T
(14)
k

where $g_u(k,\omega) = |U(k,\omega)|^2$ is the auto-power spectrum of the external input. The
above expression depends upon the transfer function $T(k,\omega)$ associated with the
evolution equations of the model, which we will derive below.
For the MEG virtual electrode and LFP data we consider here, the gain
function has a simple Gaussian form, which we parameterize in terms of its dis-
persion $\sigma$ such that $L(x,\theta) = e^{-x^2/2\sigma^2}$ – noting that the amplitude is fixed to avoid
redundancy with the parameters $Q$. This leads to Fourier coefficients of the form
$L(k,\theta) = e^{-\sigma^2 k^2/2}$ and Equation (14) becomes

$$g(\omega,\theta) = \sum_k e^{-\sigma^2 k^2}\, Q\, T(k,\omega)\, g_u(k,\omega)\, T(k,\omega)^*\, Q^T \qquad (15)$$

The function $T(k,\omega)$ can be derived by assuming that the neural field defined by
Equation (2) is perturbed around a spatially homogeneous steady state $V_0$, at-
tained in the absence of external or exogenous perturbations (see also Pinotsis and
Friston, 2011; Pinotsis et al., 2012):

$$V_0 = B^{-1} A\, F(V_0) \int K(x)\, dx \qquad (16)$$

Using linear systems analysis, we define the transfer function of a field model
through the relation

$$T(k,\omega) = \frac{P(k,\omega)}{U(k,\omega)} \qquad (17)$$

where $U(k,\omega)$ is the two-dimensional Fourier transform of the external input:

$$U(k,\omega) = \mathcal{FT}\left(U(x,t)\right) = \iint U(x,t)\, e^{-ikx - i\omega t}\, dt\, dx \qquad (18)$$

and $P(k,\omega)$ is the Fourier transform of the perturbations around the steady-state
solution. Given the transfer function, we can characterise the spectral response of
the system to any external input, in terms of the underlying connectivity kernel,
propagation velocities and post-synaptic response function. Substituting
$V(x,t) = V_0 + P(x,t)$ into Equation (2) and expanding $F \circ V$ around $V_0$, we obtain
a second-order expression for the perturbations $P(x,t)$

$$\ddot P + 2B\dot P + B^2 P = AB\, D \otimes \gamma P + GU$$
$$\gamma_{ab} = \left.\frac{\partial F_a}{\partial v_b}\right|_{v_a = 0} = \frac{r\, e^{r\eta}}{\left(1 + e^{r\eta}\right)^2}\, \delta_{ab} \qquad (19)$$

where $\gamma$ is the gain of the nonlinear mapping between depolarisation and firing
rate. Equations (17) and (19) provide the transfer function of our canonical micro-
circuit neural field model. Taking the Fourier transform of Equation (19) and sub-
stituting into Equation (17) gives:

$$T(k,\omega) = \left(-\omega^2 I_4 + 2i\omega B + B^2 - J(k,\omega)\right)^{-1} G, \qquad J(k,\omega) = AB\, D(k,\omega)\, \gamma \qquad (20)$$

where $J(k,\omega)$ is a $4 \times 4$ matrix incorporating the synaptic parameters, connec-
tivity parameters and gain matrix, and $D(k,\omega)$ is the Fourier transform of the spa-
tiotemporal connectivity: see Equation (22) below.
In summary, Equation (20) provides a transfer function mapping from exoge-
nous inputs or fluctuations acting upon each neuronal layer to the resulting spa-
tiotemporal response in source space. This transfer function is specified com-
pletely by the synaptic and connectivity parameters implicit in the neural field model.
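Numerically, evaluating Equation (20) amounts to one small complex matrix inversion per $(k,\omega)$ pair. In this sketch, the values of $B$, $J$ and $G$ are arbitrary placeholders standing in for the quantities assembled from Equations (19) and (22):

```python
import numpy as np

def transfer_function(omega, B, J, G):
    """Evaluate T = (-w^2 I + 2 i w B + B^2 - J)^(-1) G of Equation (20)
    for one temporal frequency w (J is already evaluated at the desired k, w)."""
    M = -omega**2 * np.eye(4) + 2j * omega * B + B @ B - J
    return np.linalg.solve(M, G)

# Placeholder parameters (illustrative only): per-population rate constants,
# a weak coupling matrix standing in for A B D(k, w) gamma, and an input
# vector G that drives the first (spiny stellate) population.
B = np.diag([100.0, 50.0, 100.0, 100.0])
J = 50.0 * np.ones((4, 4))
G = np.array([325.0, 0.0, 0.0, 0.0])

freqs = 2 * np.pi * np.arange(1.0, 101.0)   # 1-100 Hz
T = np.array([transfer_function(w, B, J, G) for w in freqs])
power = np.abs(T)**2   # per-population contribution to the spectral response
```

Repeating this over spatial frequencies $k$ and weighting by the lead-field gain then yields the predicted cross-spectra of Equation (15).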

Substituting Equation (20) into Equation (15), we obtain an expression for the
predicted spectra as a mixture of contributions from each population, weighted by
$q_a$ ($a = 1, \ldots, 4$):

$$g(\omega,\theta) = \sum_{a,k} q_a\, W_a(k,\omega) \qquad (21)$$
$$W_a(k,\omega) = e^{-\sigma^2 k^2}\, \kappa_1 m_e\, S_a(k,\omega)\, R^{-1}(k,\omega)\, g_u(k,\omega)$$

The term $S_a(k,\omega) R^{-1}(k,\omega)$ in Equation (21) expresses the relative contribution of each
population to the predictions at the source level and depends upon the particular
form of the connections among these populations. It can be seen from Equation
(9) that this ratio depends upon the (Fourier transforms of the) intrinsic and extrinsic
connectivity (see also Pinotsis and Friston, 2011; Pinotsis et al., 2012):

\[
\begin{aligned}
D^{(i)}_{ab}(k,\omega)&=\frac{\alpha_{ab}\,(c_{ab}+i\omega\tau)}{c_{ab}^{2}-\omega^{2}\tau^{2}+2i\omega\tau c_{ab}+k^{2}}\\
D^{(e)}_{aa}(k,\omega)&=\frac{\alpha_{aa}}{2}\left[\frac{e^{-h_{a}c_{+}}\left(c_{-}e^{i\omega\tau h_{a}}\chi-2h_{a}c_{aa}(c_{+}-\xi)e^{i\omega\tau h_{a}}\right)}{4k^{2}+c_{+}c_{-}}+\frac{e^{-h_{a}c_{-}}\left(c_{+}e^{-i\omega\tau h_{a}}\chi-h_{a}c_{aa}\,\xi\right)}{4k^{2}+c_{+}c_{-}}\right]\\
\chi&=\cos(2h_{a}k),\qquad\xi=2k\tau\sin(2h_{a}k)\\
c_{+}&=c_{aa}+i\omega\tau,\qquad c_{-}=c_{aa}-i\omega\tau
\end{aligned}\tag{22}
\]

where τ is the inverse conduction speed.
In particular, R(k, ω) and S_a(k, ω) are given by

\[
\begin{aligned}
R(k,\omega)&=V_{14}(k,\omega)\left(V_{23}(k,\omega)-Q_{2}(k,\omega)Q_{3}(k,\omega)\right)\\
&\quad-Q_{4}(k,\omega)\left[V_{23}(k,\omega)Q_{1}(k,\omega)-Q_{3}(k,\omega)\left(V_{12}(k,\omega)-Q_{1}(k,\omega)Q_{2}(k,\omega)\right)\right]\\
S_{1}(k,\omega)&=Q_{4}(k,\omega)\left(V_{23}(k,\omega)-Q_{2}(k,\omega)Q_{3}(k,\omega)\right)\\
S_{2}(k,\omega)&=D^{(i)}_{21}(k,\omega)\,\gamma\alpha_{2}m_{i}\,Q_{3}(k,\omega)Q_{4}(k,\omega)\\
S_{3}(k,\omega)&=-D^{(i)}_{21}(k,\omega)D^{(i)}_{32}(k,\omega)\,\gamma^{2}\alpha_{2}\alpha_{3}m_{e}m_{i}\,Q_{4}(k,\omega)\\
S_{4}(k,\omega)&=D^{(i)}_{41}(k,\omega)\,\gamma\alpha_{4}m_{e}\left(V_{23}(k,\omega)-Q_{2}(k,\omega)Q_{3}(k,\omega)\right)
\end{aligned}\tag{23}
\]

where the functions Q_a(k, ω) and V_ab(k, ω) depend on the Fourier transforms
D^(i)_ab(k, ω) and D^(e)_aa(k, ω) as follows:

\[
\begin{aligned}
Q_{a}(k,\omega)&=\kappa_{a}^{2}-\gamma\left(D^{(i)}_{aa}(k,\omega)+D^{(e)}_{aa}(k,\omega)\right)\alpha_{a}m_{a}+2i\omega\kappa_{a}-\omega^{2}\\
V_{ab}(k,\omega)&=D^{(i)}_{ab}(k,\omega)D^{(i)}_{ba}(k,\omega)\,\gamma^{2}\alpha_{a}\alpha_{b}m_{a}m_{b}
\end{aligned}\tag{24}
\]

In summary, the predicted spectral response at the sensor for the CMC field
model is given by:

g ( , )   e 2 1me  a qa Sa (k ,  ) R 1 (k ,  ) g u (k ,  )
2
 k
2 2 2
(25)
k

where the functions S_a(k, ω) and R(k, ω) are defined in Equation (23). Equation (25)
reflects the fact that the predicted spectral responses of the system are coupled to its
spatial as well as its temporal properties; these properties are encoded in the transfer
functions S_a(k, ω) and R(k, ω) through the underlying connectivity functions
D_ab(k, ω). In turn, these are specified by the synaptic parameters
θ = {m_i, m_e, κ_i, κ_e, r, η} associated with the canonical microcircuit and the
spatial parameters φ = {α_ab, c_ab, h_a, τ_ab} that encode intrinsic and extrinsic
connections among different layers and neighbouring columns or points on the cortical surface.
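As a numerical illustration of a predicted sensor spectrum of this form, the sketch below sums lead-field-weighted spectral contributions over spatial frequencies; the transfer ratios S_a/R, the input spectrum g_u and the weights q_a are arbitrary placeholders rather than the model's actual functions:

```python
import numpy as np

# Sketch of a predicted sensor spectrum of the form of Equation (25):
# g(w) = sum_k exp(-2 eps^2 k^2) |sum_a q_a S_a(k,w) / R(k,w)|^2 g_u(k,w).
# The transfer ratios and input spectrum below are illustrative stand-ins.

def predicted_spectrum(freqs, ks, eps, S_over_R, g_u, q):
    g = np.zeros(len(freqs))
    for i, w in enumerate(freqs):
        for k in ks:
            lead = np.exp(-2 * eps**2 * k**2)  # Gaussian lead field damps high k
            mix = np.dot(q, S_over_R(k, w))    # population mixture, weights q_a
            g[i] += lead * np.abs(mix)**2 * g_u(k, w)
    return g

# Placeholder transfer ratios (low-pass in both k and w) and a
# white-plus-coloured (1/f) input spectrum
S_over_R = lambda k, w: np.array([1.0, 0.5, 0.25, 0.1]) / (1 + k**2 + (w / 30.0)**2)
g_u = lambda k, w: 1.0 + 10.0 / w
q = np.array([1.0, 0.0, 0.0, 0.0])   # observe the pyramidal population only

freqs = np.arange(1.0, 60.0)
ks = np.linspace(0.0, 3.0, 16)
g = predicted_spectrum(freqs, ks, eps=1.0, S_over_R=S_over_R, g_u=g_u, q=q)
print(g[0] > g[-1])                   # prints True: low frequencies dominate
```

Note how the lead-field dispersion eps controls the weight given to high spatial frequencies, which is the quantity at stake when comparing field and mass variants below.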
The predicted spectral responses for the JR field model obeying Equation (8),
with connectivity determined by Equation (10), are also given by an equation of
the form of Equation (25) above (with a = 1, 2, 3), where the functions S_a(k, ω)
and R(k, ω) are now given by:

\[
\begin{aligned}
S_{1}(k,\omega)&=\frac{D^{(i)}_{32}(k,\omega)}{D^{(i)}_{31}(k,\omega)}\,S_{2}(k,\omega)+(\kappa_{e}+i\omega)^{2}(\kappa_{i}+i\omega)^{2}\\
S_{2}(k,\omega)&=D^{(i)}_{23}(k,\omega)D^{(i)}_{31}(k,\omega)\,\gamma^{2}\kappa_{e}\kappa_{i}m_{e}m_{i}\\
S_{3}(k,\omega)&=D^{(i)}_{31}(k,\omega)\,\gamma\kappa_{e}m_{e}(\kappa_{i}+i\omega)^{2}\\
R(k,\omega)&=-D^{(i)}_{23}(k,\omega)D^{(i)}_{32}(k,\omega)\,\gamma^{2}\kappa_{e}\kappa_{i}m_{e}m_{i}(\kappa_{e}+i\omega)^{2}\\
&\quad+(\kappa_{i}+i\omega)^{2}\left[(\kappa_{e}+i\omega)^{4}-D^{(i)}_{13}(k,\omega)D^{(i)}_{31}(k,\omega)\,\gamma^{2}\kappa_{e}^{2}m_{e}^{2}\right]
\end{aligned}\tag{26}
\]

Neural Fields as Dynamic Causal Models

Probabilistic models of empirical data and their inversion

To complete our specification of a generative model, we assume that the observed
cross-spectra g_y are a mixture of predicted spectra, channel noise and Gaussian
observation noise:

\[
\begin{aligned}
g_{y}(\omega)&=g(\omega,\theta)+g_{n}(\omega,\theta)+\varepsilon_{y}\\
g_{u}(\omega,\theta)&=\alpha_{u}+\frac{\beta_{u}}{\omega},\qquad g_{n}(\omega,\theta)=\alpha_{n}+\frac{\beta_{n}}{\omega}\\
\operatorname{Re}(\varepsilon_{y})&\sim\mathcal{N}\!\left(0,\Sigma(\omega,\theta)\right),\qquad\operatorname{Im}(\varepsilon_{y})\sim\mathcal{N}\!\left(0,\Sigma(\omega,\theta)\right)
\end{aligned}\tag{27}
\]

Here, g ( ,  )  g n ( ,  ) are the predictions of the data features g y ( ) and  y


are the corresponding prediction errors with covariance ( ,  ) . The spectra of
the neuronal fluctuations or input gu ( ,  ) , are assumed to be spatially white;
namely, they are independent of spatial frequency. However both input and noise
spectra are modelled as a mixture of white and coloured fluctuations over time. In
particular, we have introduced extra free parameters   { n ,  u ,  n , u } control-
ling the spectra of the inputs and channel noise.
Equation (27) provides the basis for our generative model and entails free pa-
rameters controlling the spectra of the inputs and channel noise as well as the am-
plitude of observation error. Gaussian assumptions about the observation error
mean that we have a probabilistic mapping from all of the unknown parameters to
observed (spectral) data features. Inversion of this model means estimating, prob-
abilistically, the free parameters from the data.
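The generative model above can be simulated directly; the sketch below (with assumed parameter values throughout) draws a synthetic observed spectrum as a stand-in prediction plus white-plus-coloured channel noise plus Gaussian observation error:

```python
import numpy as np

# Sketch of the generative model for spectral data features: observed spectra
# are predictions plus channel noise plus Gaussian error, where noise and
# input spectra mix white and coloured (1/f) components. All values assumed.

rng = np.random.default_rng(0)
freqs = np.arange(1.0, 60.0)          # frequencies (Hz)

alpha_u, beta_u = 1.0, 8.0            # input spectrum parameters (assumed)
alpha_n, beta_n = 0.1, 0.5            # channel noise parameters (assumed)

g_u = alpha_u + beta_u / freqs        # neuronal fluctuation spectrum
g_n = alpha_n + beta_n / freqs        # channel noise spectrum

# Stand-in for the model prediction g(w, theta): the input spectrum filtered
# through an assumed transfer gain with an alpha-band resonance
g_pred = g_u * 50.0 / (1.0 + ((freqs - 10.0) / 5.0)**2)

sigma = 0.05                          # observation error s.d. (assumed)
g_y = g_pred + g_n + sigma * rng.standard_normal(len(freqs))
print(g_y.shape)                      # (59,)
```

Inverting the model amounts to recovering the parameters behind g_pred, g_n and sigma from samples like g_y.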
Having prescribed the generative model of our DCM, we can now turn to its
inversion using Bayesian techniques. Almost universally, the fitting or inversion of
Dynamic Causal Models optimizes a variational free energy, which serves as a
bound on the log-evidence ln p(g_y | M) for a model M. This optimization is
carried out with respect to a variational density q(θ) on the unknown model
parameters. By construction, when the variational density maximizes free energy,
it approximates the true posterior density over parameters, q(θ) ≈ p(θ | g_y, M).
At the same time, the free energy itself approximates the log-evidence
(log-marginal likelihood of the data).
The (approximate) conditional density and (approximate) log-evidence are used
for inference on parameters and models respectively. In other words, one first
compares different models (e.g. neural fields and masses) using their log-evidence
and then turns to inferences on parameters, under the model selected. One usually
assumes the conditional density has a Gaussian form q(θ) = 𝒩(μ, C); this is
known as the Laplace assumption. The conditional density is then summarised by
the most likely value of the parameters μ and their conditional covariance C,
which encodes uncertainty about the estimates and their conditional dependencies.
Under these assumptions about the variational density and Gaussian observation
noise, the free energy has a very simple form:

   (  )  12 ln |    |
  H  12 Re( (  ))T  1 Re( (  ))  12 Im( (  ))T  1 Im( (  ))
(28)
H   12  (  )T  1  (  )  12 ln |  |  12 ln |  |
   

Here,  (  )   are prediction errors on the parameters, in relation to their


prior density p( | m)   ( , ) . Model complexity in Equation (28) corre-
sponds to the  12  T  1  term: This reports the deviation of the estimated pa-
rameters from their prior expectations and effectively penalizes the free energy
objective function in proportion to the degrees of freedom used to explain the data.
A full description of the resulting Variational Laplace scheme can be found in
(Friston et al., 2007).
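The accuracy-minus-complexity decomposition can be written down directly. The following is a minimal sketch of a Laplace free energy for real-valued prediction errors, in which the complexity term is computed as the Gaussian Kullback-Leibler divergence between posterior and prior; all numbers are purely illustrative:

```python
import numpy as np

# Minimal sketch of a Laplace free energy: accuracy (log likelihood of the
# residuals) minus complexity (Gaussian KL divergence between posterior and
# prior). Real-valued errors only; every value below is illustrative.

def free_energy(e_y, Sigma_y, e_theta, Sigma_theta, C):
    """e_y: data prediction errors; Sigma_y: observation error covariance;
    e_theta: posterior-minus-prior parameter means; Sigma_theta: prior
    covariance; C: posterior (conditional) covariance."""
    iSy = np.linalg.inv(Sigma_y)
    iSt = np.linalg.inv(Sigma_theta)
    accuracy = -0.5 * e_y @ iSy @ e_y \
        - 0.5 * np.linalg.slogdet(2 * np.pi * Sigma_y)[1]
    complexity = 0.5 * (e_theta @ iSt @ e_theta      # prior prediction error
                        + np.trace(iSt @ C)          # posterior vs prior spread
                        - len(e_theta)
                        + np.linalg.slogdet(Sigma_theta)[1]
                        - np.linalg.slogdet(C)[1])
    return float(accuracy - complexity)

e_y = np.array([0.1, -0.2, 0.05])    # residual spectral prediction errors
Sigma_y = 0.1 * np.eye(3)
e_theta = np.array([0.0, 0.3])       # two parameters moved from their priors
Sigma_theta = np.eye(2)
C = 0.5 * np.eye(2)
F = free_energy(e_y, Sigma_y, e_theta, Sigma_theta, C)
print(np.isfinite(F))                # prints True
```

Moving the parameter estimates further from their prior means increases the complexity term and lowers F, which is exactly the penalty described above.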
The underlying generative model generally admits a unique solution during
model inversion; this follows from the use of biophysically plausible priors over
the biophysical parameters. Tables 1 and 2 describe the priors over synaptic
parameters for the CMC and JR field models, as well as parameters pertaining to
the spatial structure of cortical sources. Some of these priors are based on the
modelling literature, while others come from the experimental literature
(Kandel et al., 2000; Steriade and Deschenes, 1984). In general, priors are chosen
to restrict parameter estimates to a physiologically meaningful range. However,
the precise values of the priors are not critical: the inversion scheme has the
latitude to accommodate deviations from these values to optimise model evidence.
In what follows, we apply the theory of preceding sections to real electro-
physiological data to illustrate the sorts of questions and quantitative characterisa-
tions that are enabled by combining neural field models with dynamic causal
modelling.

Table 1. Prior expectations of parameters of the CMC field model

Parameter            Physiological interpretation                          Prior mean
m_e, m_i             Maximum postsynaptic depolarisation                   8, 32 (mV)
κ₁, κ₂, κ₃, κ₄       Postsynaptic rate constants                           1/2, 1/2, 1/16, 1/28 (ms⁻¹)
α₂₂, α₃₃, α₄₁        Amplitude of intrinsic connectivity kernels           3200
α₁₂, α₄₄, α₂₃, α₃₂                                                         800, 800, 1600, 1600
α₁₁, α₁₄, α₂₁                                                              9600, 4000, 4800
c_ab                 Spatial decay of connectivity kernels                 0.6 (a = b), 2 (a ≠ b) (mm⁻¹)
h_a                  Separation between columns                            4.5 (mm)
r, η                 Parameters of the postsynaptic firing rate function   0.54, 0
τ                    Inverse conduction speed                              1.5 (s/m)
ε                    Dispersion of the lead field                          2/20

Table 2. Priors of parameters of the JR field model

Parameter              Physiological interpretation                  Prior mean
α₁₃, α₂₃, α₃₁, α₃₂     Amplitude of intrinsic connectivity kernels   2000, 8000, 2000, 1000

LFP auditory cortex data

We will first formulate neural mass models as a special case of neural field
models by simply setting the conduction times to zero. This provides a useful per-
spective on the relationship between these two models, in terms of the implicit as-
sumptions we make when modelling observed data. A pragmatic advantage of
emulating neural mass models, with a transit time of zero, is that we can apply
precise shrinkage priors to conduction times to facilitate model comparison. In
other words, it provides a simple means of comparing models with and without
spatial dynamics (with and without prior constraints on conduction or transit
times). In particular, we first consider the mass and field variants of the JR model
and assume that output comes primarily from pyramidal cells. The corresponding
predictions for neural fields and masses are shown in Figure 4. These predictions
are based on model fits or inversions using local field potentials recorded from
primary (A1) auditory cortex in the Lister hooded rat, following the application of
the anaesthetic agent Isoflurane (see Moran et al., 2011 for details) under acoustic
white noise stimuli at a level of 83 dB. In brief, ten minutes of recordings were ex-
tracted from the continuous time domain data and down-sampled to a sampling
rate of 125 Hz. Frequency domain data-features were obtained from this epoch us-
ing a vector autoregression model of order eight. The model predictions of Fig. 4
illustrate nicely the difference between the field and mass models: one can see that
the neural field model has approximated the preponderance of low frequencies
more accurately than the neural mass model. This is because it has extra degrees
of freedom; namely conduction velocity and the extent of lateral connections.
These extend the repertoire of predictions to include those afforded by spatial
dynamics. Crucially, the log-evidence for the neural field model was 1271 nats
above that of the neural mass model, constituting very strong evidence for spatial
dynamics over the cortical manifold in these auditory cortex data (Penny et al.,
2010). Recall that the model fit is based on optimising a free energy bound on
model evidence: the free energy is just the difference between a term quantifying
accuracy (goodness of fit) and a term quantifying complexity. This means the
inversion provides explanations for empirical data that are both accurate and
parsimonious.
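Under equal priors over models, a log-evidence difference of this size translates into an essentially categorical posterior over the two models. A sketch, using a softmax over log evidences (following the logic of Penny et al., 2010):

```python
import numpy as np

# Posterior model probabilities from log evidences (free energy
# approximations), assuming equal prior probability for each model.

def model_posterior(log_evidences):
    le = np.asarray(log_evidences, dtype=float)
    le -= le.max()                 # subtract max for numerical stability
    p = np.exp(le)
    return p / p.sum()

# A log-evidence difference of 1271 nats (field vs mass), as reported above;
# a difference of about 3 nats already counts as strong evidence.
p_field, p_mass = model_posterior([1271.0, 0.0])
print(p_field > 0.999)             # prints True: the evidence is overwhelming
```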

Figure 4: Real data (dashed line) and model predictions (full line), showing a 1/f spectral profile that
is typically seen under anaesthesia. We observe a better fit of the field model, relative to the mass
model, in particular at low frequencies.

Clearly, the choice of an appropriate model depends upon the question of inter-
est; in particular, neural fields are appropriate for addressing questions about the
deployment of sources on the cortical surface and induced spatial dynamics. How-
ever, the above example highlights that neural field models can be more appropri-
ate than mass models, from a Bayesian perspective, even if the spatial parameters
of a neuronal model are not the focus of study. In the context of our Bayesian
scheme, each model is scored using a free energy bound on model evidence,
where better models have a higher free energy. This provides a principled way to
compare (score) different models or hypotheses about how neuronal time-series
are generated.

MEG data from the visual cortex

We now turn to MEG data that summarise the spectral expression of endoge-
nous activity in the visual cortex of (twelve) human subjects described in detail in
Schwarzkopf et al., (2012). We used an adaptive spatial filter or beam-former
(Van Veen et al., 1997) to obtain estimates of ongoing neuronal activity in pri-
mary visual cortex. This provides estimates of electrical cortical activity based on
a weighted combination of sensors – sometimes referred to as a virtual electrode.
We then used the CMC model describing inter-laminar and lateral intra-laminar
interactions and inverted its mass and field variants. In these illustrative inver-
sions, the synaptic and spatial parameters were optimized and the intermodal dis-
tance ha was fixed to a physiologically plausible value. We computed the log evi-
dence ratio (using the free energy approximation) comparing field and mass
variants at the group level. As above, the neural mass model was formulated as a
special case of the neural field model by shrinking the conduction times to zero.
Contrary to our earlier result, we found no evidence in favour of the neural field
model (the relative log evidence between mass and field models was on average
2.62): see Fig. 5 for spectral fits of an exemplar subject.

Figure 5: Real data (dashed line) and model predictions (full line) for spectra in the gamma band ob-
tained from the human visual cortex during an attention task (Schwarzkopf et al., 2012). We observe
that the fits of both the field and mass models are equally good with no manifest differences.
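In practice, subject-wise (free energy) log evidence differences can be pooled with a fixed-effects group comparison, in which, under independence, log Bayes factors simply add over subjects (Penny et al., 2010). A minimal sketch with fabricated per-subject values (twelve subjects, purely illustrative):

```python
import numpy as np

# Fixed-effects group-level Bayesian model comparison: per-subject log Bayes
# factors (free energy differences between two models) add under independence.
# These twelve values are fabricated for illustration only.
subject_lbf = np.array([3.1, -1.2, 4.0, 2.5, 0.8, 3.3,
                        1.9, -0.5, 4.2, 2.7, 5.0, 5.6])

group_lbf = subject_lbf.sum()    # group (fixed-effects) log Bayes factor
mean_lbf = subject_lbf.mean()    # average evidence per subject
print(group_lbf > 3)             # prints True: exceeds the usual 'strong' bound
```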

This highlights the fact that the best model depends upon the data modelled. It
also underlines the importance of combining a neuronal model with a spatial for-
ward model: although both auditory and visual cortices are thought to conform to
the local homogeneity constraints implicit in neural field models, the loss of spa-
tial frequency resolution – with non-invasive data – might render neural field
models unnecessary, in relation to neural mass models. Our failure to establish
significantly greater evidence for neural field models, in the present model com-
parison, is intuitively sensible because non-invasive MEG data have much lower
spatial resolution than the LFP data we used in the previous model comparison.
This observation speaks to the potential importance of using spatially resolved
data to take full advantage of neural field models. Data with high signal to noise
ratio and wide brain coverage – such as those afforded by ECoG sensors or multi-
array grids – can, in principle, disclose a full spectrum of spatiotemporal dynamics
at different scales, which may be important for an informed (efficient) estimate of
spatial parameters in neural field models.

Conclusions

By exploiting a combination of neural field modelling and Bayesian inference,


we have shown that dynamic causal modelling can help appraise biophysical
models for explaining electrophysiological data. We have focused on two main
classes of biophysical models of brain activity, the so-called neural field and mass
models and have considered their application to modelling empirical LFP and
MEG data. Bayesian model comparison – using a variational free energy ap-
proximation to log model evidence – suggests neural field models provide a better
explanation of empirical data if, and only if, there is sufficient spatial frequency
information in the data. In other words, we found greater evidence for neural field
models in analyses of LFP data but failed to find more evidence for neural field
models, relative to neural mass models, in MEG (virtual electrode) data. The key
distinction between these different modalities is that LFP data are sensitive to a
wide range of spatial frequencies and the temporal fluctuations that these frequen-
cies contain. In contrast, the lead fields inherent in non-invasive electromagnetic
recordings are necessarily broader and suppress temporal dynamics that are ex-
pressed in high spatial frequencies. This is a nice example of modelling with neu-
ral mass and field models that highlights the key role of both data and biophysi-
cally informed models in hypothesis testing and model comparison.

References

Amari, S. (1977). Dynamics of pattern formation in lateral-inhibition type neural fields. Biol Cy-
bern 27, 77-87.
Atay, F. M., and Hutt, A. (2006). Neural fields with distributed transmission speeds and long-
range feedback delays. SIAM Journal on Applied Dynamical Systems 5, 670-698.

Bastos, A. M., Usrey, W. M., Adams, R. A., Mangun, G. R., Fries, P., and Friston, K. J. Canonical
microcircuits for predictive coding (preprint).
Breakspear, M., Roberts, J. A., Terry, J. R., Rodrigues, S., Mahant, N., and Robinson, P. A.
(2006). A unifying explanation of primary generalized seizures through nonlinear brain mod-
eling and bifurcation analysis. Cerebral Cortex 16, 1296-1313.
Bressloff, P. C. (2001). Traveling fronts and wave propagation failure in an inhomogeneous neu-
ral network. Physica D: Nonlinear Phenomena 155, 83-100.
Bressloff, P. C. (1996). New mechanism for neural pattern formation. Phys Rev Lett 76, 4644-
4647.
Buffalo, E.A., Fries, P., Landman, R., Buschman, T.J., and Desimone, R. (2011). Laminar differ-
ences in gamma and alpha coherence in the ventral stream. Proceedings of the National Academy
of Sciences 108, 11262.
Coombes, S. (2005). Waves, bumps, and patterns in neural field theories. Biological Cybernetics
93, 91-108.
Coombes, S., Lord, G. J., and Owen, M. R. (2003). Waves and bumps in neuronal networks with
axo-dendritic synaptic interactions. Physica D-Nonlinear Phenomena 178, 219-241.
Coombes, S., Venkov, N. A., Shiau, L., Bojak, I., Liley, D. T. J., and Laing, C. R. (2007). Mod-
eling electrocortical activity through improved local approximations of integral neural field
equations. Physical Review E 76, 051901.
Daunizeau, J., Kiebel, S. J., and Friston, K. J. (2009). Dynamic causal modelling of distributed
electromagnetic responses. Neuroimage 47, 590-601.
David, O., Kiebel, S. J., Harrison, L. M., Mattout, J., Kilner, J. M., and Friston, K. J. (2006). Dy-
namic causal modeling of evoked responses in EEG and MEG. Neuroimage 30, 1255-72.
Deco, G., Jirsa, V. K., Robinson, P. A., Breakspear, M., and Friston, K. (2008). The Dynamic
Brain: From Spiking Neurons to Neural Masses and Cortical Fields. Plos Computational Bi-
ology 4, p. e1000092.
Douglas, R.J., and Martin, K. (1991). A functional microcircuit for cat visual cortex. The
Journal of Physiology 440, 735.
Freeman, W. J. (2005). A field-theoretic approach to understanding scale-free neocortical dy-
namics. Biological Cybernetics 92, 350-359.
Freeman, W. J. (2003). A neurobiological theory of meaning in perception. Proceedings of the
International Joint Conference on Neural Networks 2003, Vols 1-4, 1373-1378.
Freeman, W. J. (1972). Linear Analysis of Dynamics of Neural Masses. Annual Review of Bio-
physics and Bioengineering 1, 225-256.
Friston, K. J., Harrison, L., and Penny, W. (2003). Dynamic causal modelling. Neuroimage 19,
1273-302.
Friston, K., Mattout, J., Trujillo-Barreto, N., Ashburner, J., and Penny, W. (2007). Variational
free energy and the Laplace approximation. Neuroimage 34, 220-234.
Galka, A., Ozaki, T., Muhle, H., Stephani, U., and Siniatchkin, M. (2008). A data-driven model
of the generation of human EEG based on a spatially distributed stochastic wave equation.
Cognitive Neurodynamics 2, 101-113.
Ghosh, A., Rho, Y., McIntosh, A. R., Kotter, R., and Jirsa, V. K. (2008a). Cortical network dy-
namics with time delays reveals functional connectivity in the resting brain. Cognitive Neu-
rodynamics 2, 115-120.
Grindrod, P., and Pinotsis, D. A. (2011). On the spectra of certain integro-differential-delay
problems with applications in neurodynamics. Physica D: Nonlinear Phenomena 240, 13-20.
Haeusler, S., and Maass, W. (2007). A statistical analysis of information-processing proper-
ties of lamina-specific cortical microcircuit models. Cerebral Cortex 17, 149.
Jansen, B. H., and Rit, V. G. (1995). Electroencephalogram and visual evoked potential genera-
tion in a mathematical model of coupled cortical columns. Biol Cybern 73, 357-66.
Jirsa, V. K. (2009). Neural field dynamics with local and global connectivity and time delay. Phi-
losophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sci-
ences 367, 1131.
Jirsa, V. K., and Haken, H. (1996). Field theory of electromagnetic brain activity. Physical Re-
view Letters 77, 960-963.
Kandel ER, Schwartz JH, Jessell TM (2000). Principles of Neural Science, 4th ed. McGraw-Hill,
New York.
Kiebel, S. J., Garrido, M. I., Moran, R., Chen, C. C., and Friston, K. J. (2009). Dynamic causal
modeling for EEG and MEG. Human Brain Mapping 30, 1866-76.
Lefort, S., Tomm, C., Floyd Sarria, J.-C., and Petersen, C.C.H. (2009). The Excitatory Neuronal
Network of the C2 Barrel Column in Mouse Primary Somatosensory Cortex. Neuron 61, 301–
316.
Liley, D. T. J., Alexander, D. M., Wright, J. J., and Aldous, M. D. (1999). Alpha rhythm emerges
from large-scale networks of realistically coupled multicompartmental model cortical neu-
rons. Network: Computation in Neural Systems 10, 79-92.
Liley, D. T. J., Cadusch, P. J., and Dafilis, M. P. (2002). A spatially continuous mean field the-
ory of electrocortical activity. Network: Computation in Neural Systems 13, 67-113.
Lopes da Silva, F. H., Hoeks, A., Smits, H., and Zetterberg, L. H. (1974). Model of brain rhyth-
mic activity. Biological Cybernetics 15, 27-37.
Lopes da Silva, F. H., and Storm van Leeuwen, W. (1978). The cortical alpha rhythm in dog: the
depth and surface profile of phase (Raven Press New York).
Lumer, E. D., Edelman, G. M., and Tononi, G. (1997). Neural dynamics in a model of the tha-
lamocortical system. I. Layers, loops and the emergence of fast synchronous rhythms. Cere-
bral Cortex 7, 207.
Markounikau V, Igel C, Grinvald A, Jancke D (2010) A Dynamic Neural Field Model of
Mesoscopic Cortical Activity Captured with Voltage-Sensitive Dye Imaging. PLoS Comput Biol
6(9): e1000919. doi:10.1371/journal.pcbi.1000919
Moran, R. J., Kiebel, S. J., Stephan, K. E., Reilly, R. B., Daunizeau, J., and Friston, K. J. (2007).
A neural mass model of spectral responses in electrophysiology. Neuroimage 37, 706-20.
Moran, R. J., Stephan, K. E., Seidenbecher, T., Pape, H. C., Dolan, R. J., and Friston, K. J.
(2009). Dynamic causal models of steady-state responses. Neuroimage 44, 796-811.
Moran, R.J., Jung, F., Kumagai, T, Endepols, H., Graf, R., Dolan, R.J., Friston, K.J., Stephan,
K.E., Tittgemeyer, M., (2011) Dynamic causal models and physiological inference: a validation
study using isoflurane anaesthesia in rodents, PLoS ONE 6(8): e22790.
doi:10.1371/journal.pone.0022790
Nunez, P. L. (1995). Neocortical dynamics and human EEG rhythms (Oxford University Press,
USA).
Nunez, P. L. (1996). Multiscale neocortical dynamics, experimental EEG measures, and global
facilitation of local cell assemblies. Behavioral and Brain Sciences 19, 305.
Nunez, P. L., and Srinivasan, R. (2006). Electric Fields of the Brain 1, i-612.
O’Connor, S. C., and Robinson, P. A. (2005). Analysis of the electroencephalographic activity
associated with thalamic tumors. Journal of Theoretical Biology 233, 271-286.
Penny, W. D., Stephan, K. E., Daunizeau, J., Rosa, M. J., Friston, K. J., Schofield, T. M., and
Leff, A. P. (2010). Comparing families of dynamic causal models. Plos Computational Biol-
ogy 6, e1000709.
Pinotsis, D. A., and Friston, K. J. (2011). Neural fields, spectral responses and lateral connec-
tions. Neuroimage 55, 39-48.
Pinotsis, D. A., Moran, R. J., and Friston, K. J. (2012). Dynamic causal modeling with neural
fields. NeuroImage, 59, 1261–1274.
Qubbaj, M. R., and Jirsa, V. K. (2009). Neural field dynamics under variation of local and global
connectivity and finite transmission speed. Physica D-Nonlinear Phenomena 238, 2331-2346.
Raizada, R.D.S., and Grossberg, S. (2003). Towards a theory of the laminar architecture of cere-
bral cortex: Computational clues from the visual system. Cerebral Cortex 13, 100–113.
Riera, J. J., Jimenez, J. C., Wan, X., Kawashima, R., and Ozaki, T. (2007). Nonlinear local elec-
trovascular coupling. II: From data to neuronal masses. Human brain mapping 28, 335-354.
Robinson, P. A. (2006). Patchy propagators, brain dynamics, and the generation of spatially
structured gamma oscillations. Physical Review E 73, 041904.
Robinson, P. A., Loxley, P. N., O’Connor, S. C., and Rennie, C. J. (2001). Modal analysis of
corticothalamic dynamics, electroencephalographic spectra, and evoked potentials. Physical
Review E 6304, 041909
Robinson, P. A., Rennie, C. J., and Rowe, D. L. (2002). Dynamics of large-scale brain activity in
normal arousal states and epileptic seizures. Physical Review E 65, 041924
Robinson, P. A., Rennie, C. J., Rowe, D., O’Connor, S. C., Wright, J. J., Gordon, E., and White-
house, R. W. (2003). Neurophysical modeling of brain dynamics. Neuropsychopharmacology
28, S74-S79.
Roopun AK, Kramer MA, Carracedo LM, Kaiser M, Davies CH, Traub RD, Kopell NJ, Whit-
tington MA. (2008). Period concatenation underlies interactions between gamma and beta
rhythms in neocortex. Front Cell Neurosci. 2:1.
Schiff, S., and Sauer, T. (2008). Kalman filter control of a model of spatiotemporal cortical dy-
namics. BMC Neuroscience 9, O1.
Schwarzkopf, D. S., Robertson, D. J., Song, C., Barnes, G. R., and Rees, G. (2012). The Fre-
quency of Visually Induced Gamma-Band Oscillations Depends on the Size of Early Human
Visual Cortex. The Journal of Neuroscience 32, 1507-1512.
Steriade, M., and Deschenes, M. (1984). The thalamus as a neuronal oscillator. Brain Research
Reviews.
Valdes, P. A., Jimenez, J. C., Riera, J., Biscay, R., and Ozaki, T. (1999). Nonlinear EEG analysis
based on a neural mass model. Biological cybernetics 81, 415-424.
Van Rotterdam, A., Lopes da Silva, F. H., Van den Ende, J., Viergever, M. A., and Hermans, A.
J. (1982). A model of the spatial-temporal characteristics of the alpha rhythm. Bulletin of
Mathematical Biology 44, 283-305.
Van Veen, B. D., Van Drongelen, W., Yuchtman, M., and Suzuki, A. (1997). Localization of
brain electrical activity via linearly constrained minimum variance spatial filtering. Biomedical
Engineering, IEEE Transactions on 44, 867-880.
Weiler, N., Wood, L., Yu, J., Solla, S.A., and Shepherd, G.M.G. (2008). Top-down laminar
organization of the excitatory network in motor cortex. Nat Neurosci 11, 360–366.
Wilson, H. R., and Cowan, J. D. (1973). Mathematical Theory of Functional Dynamics of Corti-
cal and Thalamic Nervous-Tissue. Kybernetik 13, 55-80.
