Statistics Monograph

São Carlos
October 2018
CONTENTS

ABSTRACT
INTRODUCTION
1. NEURONAL AVALANCHE
2. KINOUCHI-COPELLI MODEL
ABSTRACT

In the present monograph a general study is made of the work of Osame Kinouchi and Mauro Copelli, entitled "Optimal dynamic range of excitable networks at criticality". To this end we analyze the behavior of the activity during a neuronal avalanche in the KC model, whose objective is to explain how sensory systems process information over wide ranges of stimulus intensity. We also deal with the dynamics of the KC model, namely the statistical properties, parameters and rules that the model follows. This monograph also cites the work of Camilo Akimushkin Valencia, among others.
INTRODUCTION
The capacity of a sensory system to efficiently detect stimuli is usually characterized by the dynamic range, a simple measure of the range of stimulus intensity over which the network is sufficiently sensitive. Biological systems often exhibit large dynamic ranges, covering many orders of magnitude. There is no easy explanation for this, since isolated individual neurons present a very short dynamic range. Arguments based on sequential recruitment are doomed to failure, since the corresponding arrangement of the firing thresholds of the units over many orders of magnitude is unrealistic.
1. NEURONAL AVALANCHE

Figure 1: Schematic of data representation. Local field potentials (LFPs) that exceed three standard deviations are represented by black squares.
Multi-channel data can be broken down into frames where there is no activity and frames where there is at least one active electrode, which may pick up the activity of several neurons. A sequence of consecutively active frames, bracketed by inactive frames, can be called an avalanche. The example avalanche shown has a size of 9, because this is the total number of electrodes that were driven over threshold. Avalanche sizes are distributed in a manner that is approximately fit by a power law. Due to the limited number of electrodes in the array, the power law begins to bend downward in a cutoff well before the array size of 60; for larger electrode arrays, the power law is seen to extend much further. [7]
Figure 2: Example of an avalanche. Seven frames are shown, where each frame
represents activity on the electrode array during one 4 ms time step. An avalanche is a
series of consecutively active frames that is preceded by and terminated by blank frames.
Avalanche size is given by the total number of active electrodes. The avalanche shown
here has a size of 9.
$P(S) = k\,S^{-\alpha}$,
where P(S) is the probability of observing an avalanche of size S, α is the exponent that gives the slope of the power law in a log-log graph, and k is a proportionality constant. For experiments with slice cultures, the size distribution of local field potential avalanches has an exponent α ≈ 1.5, but in recordings of spikes from a different array the exponent is α ≈ 2.1. The reasons behind this difference in exponents are still being explored. It is important to note that a power-law distribution is not what would be expected if activity at each electrode were driven independently. Neuronal avalanches emerge from collective processes in a distributed network.
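As an illustration of how such an exponent can be extracted from data, the sketch below estimates α by maximum likelihood from a list of avalanche sizes; the synthetic data, the cutoff s_min and the function name are assumptions made for this example, following the standard estimator of Clauset et al. (2009) rather than the analysis of [7].

```python
import numpy as np

def fit_power_law_exponent(sizes, s_min=1):
    """MLE for alpha in P(S) ~ S^-alpha (discrete-data approximation of
    Clauset et al., 2009): alpha = 1 + n / sum(ln(S_i / (s_min - 0.5)))."""
    s = np.asarray([x for x in sizes if x >= s_min], dtype=float)
    n = len(s)
    alpha = 1.0 + n / np.sum(np.log(s / (s_min - 0.5)))
    err = (alpha - 1.0) / np.sqrt(n)  # standard error of the estimate
    return alpha, err

# Synthetic avalanche sizes drawn from P(S) ~ S^-1.5 by inverse-CDF sampling
rng = np.random.default_rng(0)
u = rng.random(10_000)
sizes = np.floor((1.0 - u) ** (-1.0 / 0.5)).astype(int)  # alpha = 1.5
alpha_hat, err = fit_power_law_exponent(sizes)
print(f"estimated alpha = {alpha_hat:.2f} +/- {err:.2f}")  # expect ~1.5
```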
Figure 3: Avalanche size distributions. A, Distribution of sizes from acute slice LFPs recorded with a 60-electrode array, plotted in log-log space. Actual data are shown in black, while the output of a Poisson model is shown in red. In the Poisson model, each electrode fires at the same rate as that seen in the actual data, but independently of all the other electrodes. Note the large difference between the two curves. The actual data follow a nearly straight line for sizes from 1 to 35; after this point there is a cutoff induced by the electrode array size. The straight line is indicative of a power law, suggesting that the network is operating near the critical point (unpublished data recorded by W. Chen, C. Haldeman, S. Wang, A. Tang, J.M. Beggs). B, Avalanche size distribution for spikes can be approximated by a straight line over three orders of magnitude in probability, without the sharp cutoff seen in panel A. Data were collected with a 512-electrode array from an acute cortical slice bathed in high potassium and zero magnesium (unpublished work of A. Litke, S. Sher, M. Grivich, D. Petrusca, S. Kachiguine, J.M. Beggs). Spikes were thresholded at -3 standard deviations and were not sorted. Data were binned at 1.2 ms to match the short interelectrode distance of 60 μm. Results similar to A and B are also obtained from cortical slice cultures recorded in culture medium. [7]
While avalanches in critical sandpile models are stochastic in the patterns they form, avalanches of local field potentials occur in spatio-temporal patterns that repeat more often than expected by chance (Beggs and Plenz, 2004). The figure shows several such patterns from an acute cortical slice. These patterns are reproducible over periods of as long as 10 hours and have a temporal precision of 4 ms (Beggs and Plenz, 2004). The stability and precision of these patterns suggest that neuronal avalanches could be used by neuronal networks as a substrate for storing information. In this sense, avalanches appear to be similar to the sequences of action potentials observed in vivo while animals perform cognitive tasks. It is unclear at present whether the repeating activity patterns observed in vivo are also avalanches. [7]
Figure 4: Families of repeating avalanches from an acute slice. Each family (1-4) shows
a group of three similar avalanches. Repeating avalanches are stable for 10 hrs and have a
temporal precision of 4 ms, suggesting that they could serve as a substrate for storing
information in neural networks.
Models that explicitly predicted avalanches of neural activity include the work of Herz and Hopfield (1995), which connects the reverberations in a neural network to the power-law distribution of earthquake sizes. Also notable is the work of Eurich, Herrmann and Ernst (2002), which predicted that the avalanche size distribution of a network of globally coupled nonlinear threshold elements should have an exponent of α ≈ 1.5. Remarkably, this exponent turned out to match that reported experimentally (Beggs and Plenz, 2003). In this model, a processing unit which is active at one time step will produce, on average, activity in σ processing units in the next time step. The number σ is called the branching parameter and can be thought of as the expected value of the ratio
$\sigma = \dfrac{\text{descendants}}{\text{ancestors}},$
where ancestors is the number of processing units active at time step t and descendants is the number of processing units active at time step t+1. There are three general regimes for σ, as shown in the figure.
Figure 5: The three regimes of a branching process. Top, when the branching parameter, σ, is less than unity, the system is subcritical and activity dies out over time. Middle, when the branching parameter is equal to unity, the system is critical and activity is approximately sustained. In actuality, activity will die out very slowly with a power-law tail. Bottom, when the branching parameter is greater than unity, the system is supercritical and activity increases over time. [7]
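These three regimes are easy to reproduce with a minimal Galton-Watson sketch, assuming (as an illustration, not as part of the model of Eurich et al.) that each active unit independently produces a Poisson-distributed number of descendants with mean σ:

```python
import numpy as np

def branching_process(sigma, n_steps=50, n0=10, seed=0):
    """Number of active units per step when each active unit produces
    Poisson(sigma) descendants in the next time step."""
    rng = np.random.default_rng(seed)
    active, history = n0, [n0]
    for _ in range(n_steps):
        active = int(rng.poisson(sigma, size=active).sum()) if active else 0
        history.append(active)
    return history

for sigma in (0.8, 1.0, 1.2):  # subcritical, critical, supercritical
    h = branching_process(sigma)
    print(f"sigma = {sigma}: activity after {len(h) - 1} steps = {h[-1]}")
```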
At the level of a single processing unit in the network, the branching parameter σ is set by the following relationship:
$\sigma_i = \sum_{j=1}^{N} p_{ij},$
where σi is the expected number of descendant processing units activated by unit i, N is the number of units that unit i connects to, and pij is the probability that activity in unit i will transmit to unit j. Because some transmission probabilities are greater than others, preferred paths of transmission may occur, leading to reproducible avalanche patterns. Both the power-law distribution of avalanche sizes and the repeating avalanches are qualitatively captured by this model when σ is tuned to the critical point σ = 1. When the model is tuned moderately above (σ > 1) or below (σ < 1) the critical point, it fails to produce a power-law distribution of avalanche sizes. This phenomenological model does not explicitly state the cellular or synaptic mechanisms that may underlie the branching process, and many of this model's predictions remain to be tested. [7]
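To make this relationship concrete, the short sketch below builds a hypothetical matrix of transmission probabilities pij, computes each unit's σi as a row sum, and rescales the matrix so that the average branching parameter is exactly the critical value σ = 1; the matrix size and the uniform construction are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100
# Hypothetical transmission probabilities: unit i excites unit j with p[i, j]
p = rng.uniform(0.0, 0.02, size=(N, N))
np.fill_diagonal(p, 0.0)        # no self-excitation

sigma_i = p.sum(axis=1)         # expected number of descendants of unit i
print(f"before tuning: sigma = {sigma_i.mean():.3f}")

p /= sigma_i.mean()             # rescale all probabilities so that sigma = 1
print(f"after tuning:  sigma = {p.sum(axis=1).mean():.3f}")
```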
2. Kinouchi-Copelli Model
The Brazilian researchers Osame Kinouchi and Mauro Copelli presented in 2006 the first model (hereafter KC) that fully explains the wide dynamic ranges observed [1]. As additional independent evidence for the model, its response to weak stimuli agrees with Stevens' psychophysical law.
The models used consist of a dynamics on a network: the dynamics is the set of rules that defines the temporal evolution of the states of the elements from their states at the previous instant, while the network represents the connections between the elements, which, of course, also shape the evolution of the system.

There are models that consist of coupled differential equations and therefore represent continuous variables evolving in continuous time; there are also models of discrete variables in discrete time. The latter are the most convenient to treat computationally, due to their simplicity and speed.
Connections are incorporated into the models simply by adding a coupling term to the dynamics of each pair of connected neurons. However, the choice of the pairs of neurons to be connected is defined by the network topology.
These discrete-variable models in discrete time are known as cellular automata. Cellular automata are usually defined on a regular two-dimensional lattice, where the state of each element depends on time and follows simple rules based on the element's own state and on the states of its neighboring elements at the previous time step. We can update all the elements of the system in a synchronized or random way, for example by updating the state of a single randomly selected element at a time.
The dynamics of an element in the Greenberg-Hastings cellular automaton (ACGH) can represent the action potential of a neuron, as shown in figure 7. If a quiescent neuron receives a strong enough stimulus, it is depolarized and enters the excited state; later, while the slow conductances are activated, the neuron travels successively through the refractory states until it returns to the quiescent state. While the internal dynamics of each element can be modeled by the ACGH, the interactions between neighbors must be modified if we look for a realistic model: a neural network, of course, is not square, and the number of neighbors is not the same for all neurons.
On the other hand, deterministic connections between neurons prove to be excessively strong. One way to adjust the excitation power of the synapses in the model is to introduce a probability that an excited neuron excites each of its quiescent neighbors.
Consider an undirected weighted Erdős-Rényi random graph with N nodes. Each node represents a neuron, i.e. an excitable unit whose possible states will be described below. Given a desired average connectivity K (mean degree of a node) for the graph, each of the NK/2 edges is assigned to a randomly chosen pair of nodes. Let Vj be the neighborhood of node j, i.e. the set of nodes connected to j by an edge. The strength of a synapse in this neuronal network is represented by the weight of the corresponding edge in the graph and is drawn from a uniform probability density in the interval [0, pmax], where 0 ≤ pmax ≤ 1. Representing the absence of a synapse as a null weight, we can define a (symmetric) adjacency matrix A whose element Ajk is the weight of the edge between nodes j and k [1].
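A minimal sketch of this construction (the values of N, K and pmax are illustrative choices, not those of [1]):

```python
import numpy as np

def build_kc_network(N=1000, K=10, p_max=0.2, seed=0):
    """Undirected weighted Erdos-Renyi graph: NK/2 edges between randomly
    chosen pairs of nodes, with weights uniform in [0, p_max]."""
    rng = np.random.default_rng(seed)
    A = np.zeros((N, N))
    edges = 0
    while edges < N * K // 2:
        j, k = rng.integers(0, N, size=2)
        if j != k and A[j, k] == 0.0:  # no self-loops, no repeated edges
            A[j, k] = A[k, j] = rng.uniform(0.0, p_max)
            edges += 1
    return A

A = build_kc_network()
print("mean degree:", (A > 0).sum(axis=1).mean())  # close to K
```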
Let $X_j(t)$ be a random variable representing the state of the j-th neuron at the instant t. For all j and t, $X_j(t)$ takes values in the set {0, 1, 2, ..., m-1}. The state 0 is called either the quiescent state or the rest state, the state 1 is the excited state, and all other states are called refractory states. The full dynamics of the system consists in the temporal evolution of the family $\{X_j(t)\}$ in discrete time, with synchronous updating, according to the following rules:
1. If $X_j(t) = 0$, then $X_j(t+1) = 1$ with probability $1 - (1-\eta)\prod_{k \in V_j}(1 - A_{jk}\,\delta_{X_k(t),1})$, i.e. with independent contributions from the external stimulus (probability η) and from each excited neighbor k ∈ Vj (probability Ajk); otherwise $X_j(t+1) = 0$.
2. If $1 \leq X_j(t) < m-1$, then $X_j(t+1) = X_j(t) + 1$.
3. If $X_j(t) = m-1$, then $X_j(t+1) = 0$.
Explicitly, "independent contributions" means that each of the numbers η and {Ajk} is meaningful as an excitation probability only in isolation (in the absence of all other contributions). Also notice that the refractory period of a neuron equals m-1 time steps, starting right after the neuron gets excited; its evolution during that period is deterministic. The only probabilistic state transition is the one from the quiescent state to the excited one.
For KC, $\eta = 1 - e^{-r\Delta t}$, where $\Delta t$ would be an arbitrary continuous time interval (usually ≈ 1 ms) and r would be the probability rate of a Poisson process. In the olfactory intraglomerular neuronal network (a biological system where the KC model may be applicable), r would be directly related to the concentration of an odorant capable of exciting the neurons.
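Putting the rules together, the sketch below runs one realization of the model on a weighted Erdős-Rényi network and records the instantaneous density of excited neurons; all parameter values (N, K, pmax, m, r, T) are illustrative choices for the example, with K·pmax/2 = 1 so that the network sits at the critical point discussed below.

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, p_max, m = 500, 10, 0.2, 5      # network and number of states (illustrative)
r, dt, T = 0.01, 1.0, 1000            # stimulus rate, time step (ms), window
eta = 1.0 - np.exp(-r * dt)           # per-step external excitation probability

# Weighted Erdos-Renyi adjacency matrix, as in the construction above
A = np.zeros((N, N))
edges = 0
while edges < N * K // 2:
    j, k = rng.integers(0, N, size=2)
    if j != k and A[j, k] == 0.0:
        A[j, k] = A[k, j] = rng.uniform(0.0, p_max)
        edges += 1

x = np.zeros(N, dtype=int)            # all neurons start quiescent
rho = np.empty(T)
for t in range(T):
    excited = (x == 1)
    # probability of NOT being excited: independent contributions of the
    # stimulus (eta) and of each excited neighbor k (weight A[j, k])
    p_not = (1.0 - eta) * np.prod(1.0 - A[:, excited], axis=1)
    x = np.where(x == 0,
                 (rng.random(N) > p_not).astype(int),  # rule 1: 0 -> 1
                 (x + 1) % m)                          # rules 2-3: cycle back to 0
    rho[t] = (x == 1).mean()          # instantaneous density of excited neurons

print(f"temporal mean of rho(t): {rho.mean():.4f}")  # the activity F, defined next
```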
The main observables of the KC model are the density $\rho(t)$ of excited neurons at the t-th time step (the fraction of the population of neurons composed of excited units) and its temporal average, the activity of the network,

(1) $F = \frac{1}{T} \sum_{t=1}^{T} \rho(t)$.
For large enough values of the observation window T (≈ 10^3 ms), such that a dynamical equilibrium is reached, the precise value of T does not have relevant effects on the behaviour of F. The critical behavior is revealed when the range of values of r over which F exhibits "significant" variation (the dynamic range) is seen as a function of the average branching ratio σ, defined as the mean value (averaged over all the neurons) of the local branching ratio σj of the j-th node:

(2) $\sigma_j = \sum_{k \in V_j} A_{jk}$.
Indeed, the dynamic range turns out to be optimal when σ = 1. The role of control parameter is thus performed by σ, which is a measure of how much activity can be directly generated by an excited unit of the network stimulating a resting neighborhood.
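With the adjacency matrix in hand, equation (2) reduces to row sums; here is a minimal self-contained check (same illustrative parameters as before), which also anticipates the average value σ = K·pmax/2 derived below:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K, p_max = 500, 10, 0.2
A = np.zeros((N, N))
edges = 0
while edges < N * K // 2:
    j, k = rng.integers(0, N, size=2)
    if j != k and A[j, k] == 0.0:
        A[j, k] = A[k, j] = rng.uniform(0.0, p_max)
        edges += 1

sigma_j = A.sum(axis=1)   # local branching ratio of node j, eq. (2)
sigma = sigma_j.mean()    # average branching ratio of the network
print(f"sigma = {sigma:.3f}  (expected K*p_max/2 = {K * p_max / 2:.3f})")
```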
Experimentally, several behaviors of the activity in biological neural networks have been recorded that can be compared with the simulations. The discovery of neuronal avalanches in vitro is particularly illuminating for the dynamic range problem and, in general, for the study of neural network complexity [3]. The activity observed in cortical tissue using an electrode array has shown that neurons fire together according to spatio-temporal patterns distributed in the form of power laws over a large region, which suggests that the system is near a critical point. To characterize the network, the branching ratio was defined as the ratio between the number of active neurons (or electrodes) at two consecutive instants of time following a period without registered activity [3]:
(3) $\sigma = \dfrac{\rho(t+1)}{\rho(t)}$, such that $\rho(t-1) = 0$.
The definition (3) is not the most convenient choice to characterize our networks, since it depends on individual realizations in time and therefore several experimental measurements are required before converging to an average value. However, our probabilistic description in terms of connection weights makes it possible to characterize the network unambiguously, before doing any particular simulation. Thus, for probabilistic networks, we will use the definition of [4], equivalent to expression (3): the local branching ratio of a neuron i is defined as the sum over its Ki neighbors of the excitation probabilities from that element,
(4) $\sigma_i = \sum_{j=1}^{K_i} p_{ij}$,

whose average over the network, for weights uniformly distributed in [0, pmax], is

(5) $\sigma = \dfrac{K\, p_{\max}}{2}$.
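Equation (5) follows from a one-line expectation computation, added here for completeness: since the weights $p_{ij}$ are drawn from a uniform density on $[0, p_{\max}]$, their mean is $p_{\max}/2$, and the mean number of neighbors is $\langle K_i \rangle = K$, so

$\sigma = \langle \sigma_i \rangle = \Big\langle \sum_{j=1}^{K_i} p_{ij} \Big\rangle = \langle K_i \rangle \, \langle p_{ij} \rangle = K \cdot \frac{p_{\max}}{2}.$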
The probabilistic excitation of the neighbors is essential to obtain the critical point and generates a non-trivial behavior, as shown in figure 8, where the temporal evolution of the instantaneous activity is presented in the presence of a stimulus that is constant in time and vanishingly weak (but essential to maintain the activity), for different values of the branching ratio σ. For figure 8 the random network was used, in which, in principle, any neuron can be connected with any other. Starting from the same initial condition with all the states equally occupied, regardless of the value of σ, there is an initial decrease of the activity to imperceptible values. For the slightly connected networks, σ < 1, the network tends to remain quiescent for all time: any excitation due to the external stimulus dies quickly, spreading at most to a few neighboring neurons (blue line in figure 8). When the value reaches σ = 1, a qualitative change occurs: the total number of activated neurons and the time during which the network remains active after a single external excitation show an enormous variability, which can generate both a brief activity and an activation limited only by the size of the entire network (red line in figure 8).
Figure 8: Instantaneous activity in the random network in the presence of a weak stimulus. Simulations done with networks of neurons, with branching ratios from subcritical (blue line) to critical (red line) in steps of 0.1, and supercritical (lines in a gradient from red to green).
For larger branching ratios, σ > 1, the initial excitation can certainly activate a large portion of the network's neurons (lines in a gradient from red to green in figure 8). The activity grows until it fluctuates around a constant value that depends on the branching ratio σ. For slightly supercritical networks, the activity stays around a low value and has an apparently random profile. For networks with larger branching ratios, the activity begins to take the form of oscillations around the fixed value, modulated by a random envelope.
To obtain the graph below, we need to introduce some equations obtained for the KC model. Let $h_i(t)$ be the conditional probability of exciting the i-th neuron given that it is in the quiescent state; it depends on the external stimulus and also on the interaction with the possible neighbors. Writing the mean activity of the model, f, as the temporal mean of the instantaneous activity, the stationary state satisfies

(6) $f = [1 - (m-1)f]\, h_i(t)$,

where m is the number of states per neuron, as before, and the excitation probability, with "independent contributions" as above, is in general

(7) $h_i(t) = 1 - (1-\eta) \prod_{j \in V_i} \left[1 - p_{ij}\, \delta_{X_j(t),1}\right]$.

This relation is later used in the limit of decoupled neurons and in the mean-field approximation. Note that $h_i(t)$ may depend, in general, on the activity of the neighboring neurons and therefore on f.
In the random network we can easily obtain the mean-field approximation expressions from the "stationary" activity (6) and the general expression for the excitation probability (7), as in [1]. The mean-field excitation probability is obtained assuming that the fraction of active neighbors equals the average activity f and that each excitation probability equals its mean value,

(8) $p_{ij} \to \bar{p} = \dfrac{p_{\max}}{2} = \dfrac{\sigma}{K}$.

We must also consider the same number of neighbors for each neuron, Ki = K. Substituting (8) into (7), and the result into (6), we obtain the following transcendental equation for the activity in the steady state:
(9) $f = [1 - (m-1)f]\left[1 - (1 - \sigma f / K)^{K} (1 - \eta)\right]$.
We verified that the roots of equation (9) fit well the values obtained from the simulation, as shown by the stimulus-response curves of figure 9.
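A minimal sketch of how the stimulus-response curve F(r), and from it the dynamic range, can be obtained from equation (9) by numerical root finding; the parameter values (m = 5, K = 10) and the grid of stimulus rates are illustrative assumptions, and the 10%-90% convention for the dynamic range follows [1].

```python
import numpy as np
from scipy.optimize import brentq

def stationary_activity(r, sigma, m=5, K=10, dt=1.0):
    """Root of eq. (9): f = [1-(m-1)f][1 - (1 - sigma*f/K)**K * (1 - eta)]."""
    eta = 1.0 - np.exp(-r * dt)
    g = lambda f: ((1.0 - (m - 1) * f)
                   * (1.0 - (1.0 - sigma * f / K) ** K * (1.0 - eta)) - f)
    return brentq(g, 1e-12, 1.0 / (m - 1))   # f cannot exceed 1/(m-1)

# Stimulus-response curve at the critical point sigma = 1
rs = np.logspace(-6, 2, 400)
F = np.array([stationary_activity(r, sigma=1.0) for r in rs])

# Dynamic range: Delta = 10*log10(r_0.9 / r_0.1), where r_x is the stimulus
# at which F covers a fraction x of its full range (10%-90% convention)
F10 = F.min() + 0.1 * (F.max() - F.min())
F90 = F.min() + 0.9 * (F.max() - F.min())
r10 = rs[np.argmin(np.abs(F - F10))]
r90 = rs[np.argmin(np.abs(F - F90))]
print(f"dynamic range Delta = {10.0 * np.log10(r90 / r10):.1f} dB")
```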
Figure 9: Mean activity as a function of the external stimulus for different values of σ in the Erdős-Rényi random network, in linear vertical scale.
REFERENCES
[4] HALDEMAN, C.; BEGGS, J. M. Critical branching captures activity in living neural networks and maximizes the number of metastable states. Physical Review Letters.
[6] LEWIS, T. J.; RINZEL, J. Topological target patterns and population oscillations in a network with random gap junctional coupling. Neurocomputing.
[7] BEGGS, J. M. (2007). Neuronal avalanche. Scholarpedia.