
DISCRETE AND CONTINUOUS DYNAMICAL SYSTEMS
Supplement 2011, pp. 1279–1288
Website: www.aimSciences.org

RANDOM GRAPH AND STOCHASTIC PROCESS CONTRIBUTIONS TO NETWORK DYNAMICS

Deena Schmidt
Mathematical Biosciences Institute
Ohio State University
Columbus, OH 43210, USA

Janet Best
Department of Mathematics
Ohio State University
Columbus, OH 43210, USA

Mark S. Blumberg
Department of Psychology
University of Iowa
Iowa City, IA 52242, USA

Abstract. A fundamental question regarding neural systems and other applications involving networks is the extent to which the network architecture
may contribute to the observed dynamics. We explore this question by inves-
tigating two different processes on three different graph structures, asking to
what extent the graph structure (as represented by the degree distribution) is
reflected in the dynamics of the process. Our findings suggest that memoryless
processes are more likely to reflect the degree distribution, whereas processes
with memory can robustly give power law or other heavy-tailed distributions
regardless of degree distribution.

1. Introduction. In recent years, a number of studies have found interesting structure in data arising from biological systems, and data with power law structure have
particularly garnered much interest [20, 7, 4, 27]. Explanations for how such struc-
ture arises have sometimes focused on the processes generating the data [7, 1, 16]
and other times on the structure of the network on which the process occurs [22, 24].
The question naturally arises: if one is trying to understand data produced by a
process on a network, should one study the process or the network? How can one
understand the contributions of the network architecture and of the process itself
in shaping the data? Surely the relative contributions vary; are there clues when
one or the other dominates the effect?
The brain contains a great many neurons with synaptic connections forming an
intricate neuronal network. The propagation of action potentials along an axon and
its synaptic transmission from one cell to the next has been extensively studied and

2000 Mathematics Subject Classification. Primary: 60G99, 05C80; Secondary: 92B20.


Key words and phrases. stochastic process, power law, random graph, network topology.
DS and JB are supported by NSF agreement 0112050 through the Mathematical Biosciences
Institute. JB is also supported by NSF CAREER Award DMS-0956057 and is an Alfred P. Sloan
Research Fellow. MSB is supported by an Independent Scientist Award from the National Institute
of Mental Health MH66424.


mathematically modeled. The architecture of connections between neurons within a nucleus and between different brain nuclei remains an active area of investigation.
The collective activity of a group of neurons often has interesting structure that
invites theoretical exploration. Recent innovations such as electrocorticography
and in vitro cortical networks provide rich data of neuronal network activity with
good spatial and temporal resolution. These technologies have led to reports of
power law distributions of activity, in terms of the number of spikes per activity
bout and the duration of activity bouts. Models for the generation of such power
laws include branching processes [28] and processes occurring on scale-free graphs
[23] (for which the network itself has a power law degree distribution).
While Beggs and colleagues [7, 14] argue that criticality may be desirable for
dynamical reasons, critical processes require fine tuning of a parameter to a critical
value, a feature that has led to criticism on the grounds that a biological system
would likely require greater robustness. Abbott and Rohrkemper [1] partially deflect
this criticism by providing a simple but plausible model demonstrating that self-
tuning of a network to a critical value may not be difficult to achieve. Magnasco
[18] presents a novel model in which a network uses an anti-Hebbian rule to self-
tune, resulting in some range of scale-free dynamics. Teramae et al. [24] present a
variation of a branching process model (a “critical wiring process”) that is robust
to perturbation. Their model results in a network that combines feed forward
chains and recurrent circuits and may have implications for network topology in
the cortex. Joshi and colleagues developed a simple model in which a small number
of interacting neuronal populations produce power law distributed activity bouts
via a doubly stochastic process [16, 17, 8].
Shkarayev and colleagues [22], on the other hand, focused more directly on the
network architecture, showing that the topology of the network can control the
dynamical properties of the network. They demonstrate, in a network of integrate-
and-fire neurons, that a scale-free network can produce scale-free (power law) dy-
namics. These authors address possible relations between functional connectivity
(as observed for example in fMRI imaging) and architectural connectivity.
In this paper we investigate the relative contributions to network dynamics made
by the structure of the graph itself and by the nature of the process occurring on
that graph. In particular, we are interested in what clues one may look for in evalu-
ating the relative importance of these two factors. We consider randomly generated
graphs with varied degree distributions, and we further vary the clustering within
the graph by considering some small-world networks. We consider two different
processes on the graphs, (i) a type of percolation and (ii) a neural spiking process.
In Section 2 we describe the graphs used in this investigation together with the
processes that we put on the graphs. We describe in Section 3 the results produced,
and we discuss the investigation and its results in the final section.

2. Methods. In order to investigate the relation between graph structure and dynamics, we selected three graph structures, Erdös-Rényi, small-world, and scale-
free, and two stochastic processes to put on the graphs.
We chose representative graphs that differ in their degree distribution; i.e., the
probability distribution of the numbers of edges at each node in the graph. Ver-
tex degree distribution is a distinguishing characteristic that captures important
differences in graph structure (see for example [19]).

The graph structure introduced by Paul Erdös and Alfred Rényi in 1959 [10, 11]
and now referred to as an Erdös-Rényi random graph assumes that, for a given set
of nodes, any possible edge occurs independently of all other possible edges.
Watts and Strogatz [26] introduced small-world networks after recognizing that
many naturally occurring networks, such as social networks, exhibit greater clus-
tering than does an Erdös-Rényi graph: an edge is more likely to occur if it would
complete a triangle. In social networks, this lack of edge independence expresses
the observation that two individuals are more likely to be acquainted if they share
another acquaintance. Small-world networks such as the Watts and Strogatz model
are characterized by a high degree of clustering as well as a short average path length
between any two nodes. They are not, however, characterized by degree distribu-
tion. Nonetheless, this class of networks has proved widely useful in applications,
and we include here some judiciously chosen examples.
The third class of networks that we study here, scale-free graphs, is actually a
distinguished subset of small-world networks [3]. These networks have a power law
degree distribution; in particular, they include nodes with high degree known as
hubs. In algorithms for creating scale-free graphs, an edge is more likely to occur if
it would connect to a hub, a node that is already highly connected.
We consider here two processes occurring on each type of random graph. First, we
look at a process based upon percolation of activity through a graph in which nodes
can be in either an active or an inactive state. Beginning with some proportion of
active nodes, one asks whether all nodes eventually become active. For cases in
which activity percolates, one computes the distribution of percolation times. The
second process is a simple neural-like process involving spiking activity. Nodes fire
spikes at some Poisson rate, and each spike increases the firing rate of neighbor
nodes, tending to prolong activity. We ask how long each bout of such activity lasts
and calculate the distribution of active bout durations.
In the subsections below we describe in more detail the selection and construction
of the graphs as well as the algorithm for each process on the graphs. In each case,
the work was performed using the R programming language (version 2.9.0) [21]
along with the igraph package [9].

2.1. Graph Families. Each random graph is generated by starting with a set
of N nodes and adding edges between them according to a random process. For
simplicity, we consider only undirected, connected graphs.
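
All graphs below were generated in R with the igraph package (see the end of Section 2). As a minimal illustrative sketch, not the code used for the simulations reported here, a helper such as the following (the function name and retry scheme are our own) can redraw a graph until a connected one is obtained:

# Hypothetical helper: redraw a random graph until it is connected, since only
# undirected, connected graphs are considered here.
library(igraph)

sample_connected <- function(generator, ..., max_tries = 100) {
  for (i in seq_len(max_tries)) {
    g <- generator(...)
    if (is_connected(g)) return(g)   # keep the first connected draw
  }
  stop("no connected graph obtained in max_tries draws")
}

# Example (assumed parameter values): g <- sample_connected(sample_gnp, n = 1000, p = 0.005)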
In the Erdös-Rényi model [10], we start with N nodes and connect each pair with
probability p, independently of all other pairs of nodes. Since a node is equally likely
to be connected to each of the N − 1 other nodes, the probability that a node has
degree k is given by the binomial distribution
 
P(k) = \binom{N-1}{k} p^k (1-p)^{N-1-k}.

Here the expected degree of a node is λ = (N − 1)p, which allows us to rewrite this expression as

P(k) = \binom{N-1}{k} \left( \frac{λ}{N-1-λ} \right)^{k} \left( 1 - \frac{λ}{N-1} \right)^{N-1} \simeq \frac{λ^{k}}{k!} e^{-λ}.
Thus the degree distribution of an Erdös-Rényi graph becomes Poisson in the limit
of large N .
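
For illustration, the following minimal R sketch (not the simulation code used here; the function names assume a current release of igraph, whose 2009-era equivalent of sample_gnp is erdos.renyi.game) generates such a graph and compares the empirical degree frequencies with the Poisson approximation, using the parameter values of Section 3.1:

# Empirical degree distribution of G(N, p) versus its Poisson limit.
library(igraph)

N <- 1000; p <- 0.005
lambda <- (N - 1) * p                  # expected degree

g   <- sample_gnp(N, p)                # Erdos-Renyi random graph
emp <- table(factor(degree(g), levels = 0:15)) / N
poi <- dpois(0:15, lambda)

round(cbind(degree = 0:15, empirical = as.numeric(emp), poisson = poi), 3)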

The small-world graphs we consider are generated by the model of Watts and
Strogatz [26]. We start with N nodes in a regular ring lattice where each node
is connected to k of its nearest neighbors. Then with probability q, each edge is
rewired. In other words, an edge is disconnected from one of its two nodes and then
reconnected to another node chosen at random. This rewiring process introduces
long-range connections, leading to small average path length characteristic of small-
world graphs. In addition, this graph is more highly clustered than an Erdös-Rényi
random graph.
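
A corresponding sketch for the Watts-Strogatz construction is given below (again illustrative rather than the code used here). The values k = 5 and q = 0.4 are those used in Section 3.1; mapping the paper's k onto igraph's nei argument is an assumption, since igraph connects each node to nei neighbors on each side of the ring.

# Watts-Strogatz small-world graph: clustering stays relatively high while the
# rewired long-range edges keep the average path length short.
library(igraph)

N <- 1000; nei <- 5; q <- 0.4
g <- sample_smallworld(dim = 1, size = N, nei = nei, p = q)

transitivity(g, type = "global")   # clustering coefficient
mean_distance(g)                   # average shortest path length
hist(degree(g), xlab = "Degree", main = "Watts-Strogatz degree distribution")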
To generate scale-free graphs, we follow the model by Barabási and Albert [6]
in which nodes are connected by a process with preferential attachment. We begin
with a small network of m0 ≥ 2 nodes and add new nodes to the network one at a
time until we have a total of N nodes. The probability that the new node will be
connected to an existing node i, p_i = k_i / Σ_j k_j, is proportional to the degree k_i of node i.
Highly connected hubs tend to quickly accumulate more links, while nodes of small
degree are unlikely to gain new connections. Thus, new nodes have a “preference”
to attach themselves to hubs. The degree distribution of a Barabási-Albert random
graph is scale-free, and in particular, it follows a power law distribution
P(k) = A k^{−γ}
where A is a constant that ensures that P (k) sums to 1, and the degree exponent γ
typically lies in the range 2 < γ < 3 [2]. Since scale-free graphs belong to the larger
class of small-world graphs, they share the property of small average path length,
but the extent of clustering is closer to that of an Erdös-Rényi random graph [6].
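
Scale-free graphs can be generated with igraph's preferential attachment routine, as in the sketch below (illustrative only; the choice of m = 2 edges per new node is an assumption, as the construction above does not fix it). The least-squares slope of the log-log degree counts gives only a rough estimate of γ; maximum-likelihood estimators are preferable for careful exponent estimates.

# Barabasi-Albert graph via linear preferential attachment, with a crude
# estimate of the degree exponent gamma from the log-log degree counts.
library(igraph)

N <- 1000
g <- sample_pa(N, power = 1, m = 2, directed = FALSE)   # m = 2 is an assumed choice

tab <- table(degree(g))
k   <- as.numeric(names(tab))
fit <- lm(log(as.numeric(tab)) ~ log(k))   # slope is approximately -gamma
coef(fit)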

2.2. Percolation. The first of the two stochastic processes we consider on random
graphs is based upon percolation of activity through the network. Assume that
each node can be in one of two possible states: active or inactive. We start the
percolation-type process at time t = 0 with a proportion ρ of initially active nodes
in the graph. We then define an updating rule for the spread of activity: in the
next time step, a node with k active neighbors becomes active with probability
pa , which depends on k. If a node was active in the previous time step, it stays
active. In contrast to classical bootstrap percolation, we are not considering a
fixed threshold above which nodes become activated with probability 1. We use a
saturating function to define the activation probability. In particular, for each node
i ∈ {1, . . . , N } we compute the proportion of active neighbors yi and then take
pa = c yi / (yi + d)
where c and d are constants such that pa ∈ [0, 1]. For cases in which activity
percolates throughout the network, we look at the distribution of percolation times,
i.e. the time it takes for all nodes in the graph to become active. This is a variation
on a classical question in random graph theory that we describe below.
For a given graph with N nodes in which each node is initially active with probability p, let E be the event that all nodes eventually become active. The classical problem is to identify thresholds p− and p+ such that for p < p− (respectively, p > p+) the probability of event E is essentially 0 (respectively, 1); in particular, to show the existence of a phase transition window [p− (N ), p+ (N )] and to quantify how quickly it shrinks to 0 as N → ∞.
Balogh and Pittel [5] give results for this question in the case of d-regular random
graphs, but for other types of random graphs and more complicated percolation
processes, little is known.
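
To make the dynamics concrete, here is a minimal sketch of the percolation-type process described above (an illustration with our own helper name and a dense adjacency matrix, not the implementation used for the results below):

# Percolation-type process: a node whose fraction y of neighbors is active
# becomes active with probability pa = c*y/(y + d); active nodes stay active.
# Arguments cc and d play the role of the constants c and d above.
library(igraph)

percolation_time <- function(g, rho = 0.01, cc = 1, d = 1, max_steps = 10000) {
  N   <- vcount(g)
  A   <- as_adjacency_matrix(g, sparse = FALSE)
  deg <- pmax(rowSums(A), 1)                        # guard against isolated nodes
  active <- rep(FALSE, N)
  active[sample(N, max(1, round(rho * N)))] <- TRUE # proportion rho initially active
  for (t in 1:max_steps) {
    if (all(active)) return(t - 1)                  # percolation time in integer steps
    y  <- as.numeric(A %*% active) / deg            # fraction of active neighbors
    pa <- cc * y / (y + d)                          # saturating activation probability
    active <- active | (runif(N) < pa)
  }
  NA                                                # activity did not percolate in time
}

g <- sample_gnp(1000, 0.005)
times <- replicate(200, percolation_time(g))        # distribution of percolation times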

2.3. Spiking Model. The second process we consider is a simple neural-like pro-
cess involving spiking activity. Nodes fire action potential spikes at a Poisson rate,
and each spike increases the firing rate of neighboring nodes, tending to prolong
activity on the graph. Here we are interested in the length of each bout of activity
in order to compute the distribution of active bout durations.
The spiking model we consider is similar to a model by Abbott and Rohrkemper
[1], but with different network structure and without critically tuning any parame-
ters. These authors consider a neural network that grows and shrinks in response to
intracellular calcium concentration C, a measure of activity in the network. Their
model involves tuning parameter C to a critical value in order to produce power
law distributions of activity.
We generate and fix a random graph with N nodes from one of the three families
described above. We then put a Poisson spiking process on the graph according to
the firing rate equations from [1]: their equation (1) and a modified version of (2).
These are equations (1) and (2) below.
τr dri/dt = r0 − ri    (1)
Here ri is the firing rate for node i ∈ {1, . . . , N }, r0 is the background firing rate
which is constant for all nodes in the graph, and τr is a time constant. When node
i fires an action potential, the firing rate of each of its neighboring nodes j gets
incremented by a constant g:
rj → rj + gAij . (2)
A is the adjacency matrix of the graph, i.e., entry Aij = 1 if nodes i and j are
connected, otherwise Aij = 0. Hence, node i’s neighbors are the non-zero entries in
row i of A.
We simulate the above processes on random graphs via Gillespie’s Stochastic
Simulation Algorithm [12] implemented in R. All simulations are done with a fixed
graph for 100,000 replications of the process on that graph. We record the length
of each bout of activity separated by a period of inactivity of at least length ∆, and
then compute the distribution of active bout durations for a range of parameter
values for each of the three graph families.
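
The sketch below illustrates one simple way to simulate this process and to extract bout durations (again an illustration: it draws each waiting time from the rates at the previous event and applies the relaxation of equation (1) analytically between events, an approximation rather than the exact scheme used here; parameter defaults follow Section 3.2):

# Simplified event-driven simulation of the spiking process and bout extraction.
library(igraph)

simulate_spikes <- function(g, T_max = 1e5, r0 = 0.002, gain = 1e-5, tau = 1000) {
  A <- as_adjacency_matrix(g, sparse = FALSE)
  N <- vcount(g)
  r <- rep(r0, N)                          # firing rates start at background rate
  t <- 0; spikes <- numeric(0)
  while (t < T_max) {
    dt <- rexp(1, rate = sum(r))           # waiting time from current total rate
    r  <- r0 + (r - r0) * exp(-dt / tau)   # relaxation toward r0, equation (1)
    t  <- t + dt
    i  <- sample(N, 1, prob = r)           # node that fires this spike
    r  <- r + gain * A[i, ]                # neighbors' rates incremented, equation (2)
    spikes <- c(spikes, t)
  }
  spikes
}

# Bouts: consecutive spikes separated by gaps of at most Delta; a bout's duration
# is measured from its first to its last spike (an assumption of this sketch).
bout_durations <- function(spikes, Delta = 1) {
  if (length(spikes) < 2) return(numeric(0))
  gaps   <- diff(spikes)
  starts <- c(1, which(gaps > Delta) + 1)
  ends   <- c(which(gaps > Delta), length(spikes))
  spikes[ends] - spikes[starts]
}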

3. Results. We present the results of our investigation of the relation between graph structure and dynamics, focusing on three graph structures and two stochastic
processes occurring on the graphs. The percolation process is considered in the first
subsection and the spiking model in the second.

3.1. Percolation Results. First we consider the percolation process described in Section 2.2 on the three different graph structures. All graphs in this section are
generated with N = 1000 nodes. The time to percolation is measured in terms of the
number of integer time steps it takes for all nodes in the network to become active.
In general, we see that the percolation process reflects the degree distribution for a
wide range of ρ values and certain activation threshold functions.
For an Erdös-Rényi random graph generated with edge probability p = 0.005,
Figure 1 shows the degree distribution and distribution of time to percolation. Here
the percolation process starts with 1% of the nodes initially active, i.e. ρ = 0.01,
and the saturating function constants are c = d = 1. We see that the distribution
of time to percolation has the same general shape as the degree distribution. As ρ
increases, the mean time to percolation decreases while the shape of the distribution
remains roughly the same.

[Figure 1 plot: Density vs. Degree (left panel) and Density vs. Time (right panel).]

Figure 1. Percolation process on an Erdös-Rényi random graph with N = 1000 nodes and edge probability p = 0.005. On the left is the degree distribution, and on the right is the distribution of time to percolation for the case with 1% of nodes initially active.

Next we use a Watts-Strogatz random graph generated with k = 5 and rewiring probability q = 0.4. Figure 2 shows the corresponding degree distribution and
distribution of time to percolation when the process starts with 1% of the nodes
initially active and c = d = 1. Again, the shape of the distribution of time to per-
colation resembles the degree distribution. As ρ increases, the mean time decreases
as in the Erdös-Rényi case, and the variance also shrinks slightly.
[Figure 2 plot: Density vs. Degree (left panel) and Density vs. Time (right panel).]

Figure 2. Percolation process on a Watts-Strogatz random graph with N = 1000 nodes, number of neighbors k = 5 in initial ring lattice, and rewiring probability q = 0.4. On the left is the degree distribution, and on the right is the distribution of time to percolation for the case with 1% of nodes initially active.

Lastly, Figure 3 shows the distributions of node degree and time to percolation
for a Barabási-Albert random graph generated as described in Section 2.1. The
percolation process starts with 1% of nodes initially active (ρ = 0.01), but in this
case we averaged over a range of saturating functions by choosing c and d uniformly at random from [0.2, 1] and [0.5, 2], respectively. These distributions are plotted on
a log-log scale; thus a straight line is consistent with a power law distribution.

[Figure 3 plot: log(Frequency) vs. log(Degree) (left panel) and log(Frequency) vs. log(Time) (right panel).]

Figure 3. Percolation process on a Barabási-Albert random graph with N = 1000 nodes and linear preferential attachment. On the left is the degree distribution, and on the right is the distribution of time to percolation for the case with 1% of nodes initially active.

3.2. Spiking Model Results. For the spiking process, we define bouts of activity
as periods of time (measured in milliseconds) during which at least one node in
the network fires an action potential spike and there has not been a gap in activity
of more than ∆ milliseconds. For the numerical results we present below, we use
∆ = 1 ms. Other parameter values used are r0 = 0.002 Hz, g = 0.00001 Hz (except
as noted), and τr ∈ [0.1, 1000] ms; the vector of firing rates r = {r1 , . . . , rN } was
always initialized to the background firing rate r0 for each node i.
As illustrated in Figure 4 (left and middle panels), the spiking model results in
approximately power law distributed bout durations over a wide range of param-
eter values for the Erdös-Rényi and Barabási-Albert graphs; power laws were also
observed when the underlying graph had Watts-Strogatz structure (not shown).
There are, of course, parameter choices that do not result in a power law. Note
that if τr is too small, then firing rates raised by excitatory inputs are almost
instantly reset to r0 , so that the network remains approximately a union of inde-
pendent Poisson processes and therefore approximately a Poisson process, by the
Superposition Theorem [13]. In such cases, the bout distribution is approximately
exponential.
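
A quick numerical check of this argument (an illustration with arbitrary but representative values): pooling N independent Poisson processes of rate r0 gives a single Poisson process of rate N r0, so the pooled inter-spike gaps, and hence bout durations in this regime, are exponentially distributed.

# Superposition check: merged spike times from N independent Poisson processes
# have exponential gaps with mean 1/(N * r0).
N <- 100; r0 <- 0.002; T_max <- 1e6
per_node <- lapply(1:N, function(i) runif(rpois(1, r0 * T_max), 0, T_max))
gaps <- diff(sort(unlist(per_node)))
c(observed_mean_gap = mean(gaps), predicted = 1 / (N * r0))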
When τr is large enough (in relation to g), the increment in firing rate due to
an incoming spike does not have time to dissipate before the next spike arrives; the
newly increased firing rate in turn impacts the neighbors’ rates. As nodes engage in
such positive feedback across an activity bout, firing rates of many nodes increase.
The increased firing rates decrease the probability of a gap in firing sufficient to end
the active episode, thereby leading to longer active bouts and a heavier tail in the
distribution of bout durations. In this case, each node fires as an inhomogeneous
Poisson process with rate λi (t); the resulting bout distribution can approximate a
power law for appropriate λi (t). Note that if λi (t) grows too quickly, the resulting
active bout distribution will be more heavy-tailed than power law. The right panel
of Figure 4 provides an illustration on an Erdös-Rényi graph.

[Figure 4 plot: log(Frequency) vs. log(Time) in each of the three panels.]

Figure 4. Spiking model. All graphs are generated with N = 100 nodes. The left panel shows the distribution of active bout durations on an Erdös-Rényi graph with g = 0.00001 Hz and τr = 1000 ms. The middle panel shows the distribution of active bout durations for a Barabási-Albert graph with g = 0.00001 Hz and τr = 10 ms. The right panel shows the distribution of active bout durations on an Erdös-Rényi graph with g = 0.00002 Hz and τr = 1000 ms.

4. Discussion. We have reported here our numerical investigation of the relative contributions of network structure and of node dynamics in determining the collec-
tive dynamics of a network. In ongoing work, we have been developing analytical
calculations that shed light on the reasons for the results of these computer simula-
tions. We have focused on the presence or absence of power law dynamics, a feature
of interest in many recent studies [1, 7, 16, 22, 24].
We looked at a percolation-like process and a spiking neural process on randomly-
generated graphs with various degree distributions. We found that the percolation
process reflected the degree distribution, while the spiking process did not. For the
spiking process, the distribution of active bout durations was either exponential
(when the parameter τr was too small, relative to g, to allow firing rate increments
to accumulate) or heavy-tailed (typically power law or heavier tail).
Together with the results of Shkarayev et al. [22], this work suggests that mem-
oryless processes such as percolation-like processes and spiking of integrate-and-fire
neurons, in which the neurons integrate inputs and fire spikes without altering their
intrinsic propensity to fire (modeled here as firing rate), may reflect the degree dis-
tribution of the graph. We refer to these processes as memoryless because the nodes
show neither potentiation nor depression. That is, in response to being stimulated
to fire frequently, they neither increase their response (as in long term potentiation,
[25]) nor do they adapt to the stimulus and show a dampened response (long term
depression, [15]). We find that processes with memory, for instance processes in
which the nodes change their firing rate in response to inputs, can robustly produce
heavy-tailed distributions of activity, including power laws. Thus the underlying
graph structure may be obscured when the process has a memory mechanism such
that the dynamics of the process can change in time.

We observe that the behavior of firing rates of individual nodes in the network
is one of the distinguishing features of different mechanisms for generating power
law distributions via network dynamics. In the memoryless model of Shkarayev
and colleagues [22], nodes fire in response to neighbors and in response to Poisson
inputs. In their network, the nodes manifest a power law distribution of firing rates
[23]. The spiking model with memory considered here behaves quite differently,
with the firing rates increasing throughout a bout of activity.
Neuronal connectivity patterns vary substantially in different regions of the brain,
likely reflecting different functions and needs in addition to evolutionary and devel-
opmental history. Though details of connectivity are often unknown, even partial
information concerning circuit architecture can help distinguish between some of
the possible mechanisms underlying the collective dynamics of the network. Know-
ing properties of nodes (adapting or potentiating synapses, for example) or of the
network (e.g., variance of firing rate among nodes) can be useful in ruling out some
mechanisms.
One of the questions we address in this study is whether scale-free graph structure
makes power law dynamics more robust. It was known that one could achieve
power law dynamics in a non-scale free network with a critically tuned parameter
[28, 1, 24], and we previously constructed an example of a simple network with power
law dynamics without tuning [16, 17]. It was also known that a power law structure
in the graph could give rise to power law dynamics [22]. A natural question is, can
a scale-free network structure contribute to power law dynamics, thereby making
power law behavior more robust or more likely to be observed than on a graph with
some other structure? Perhaps surprisingly, we have not found evidence for such a
conclusion. In the examples studied here, either the graph structure was entirely
responsible for the power law, or the dynamics were robustly capable of producing
power law dynamics on a wide range of graph structures. This conclusion may
depend on other properties of the graph or process not studied here. We have
not yet considered a directed graph, nor have we considered inhibitory connections.
Nonetheless, this work contributes to the understanding of power law dynamics and
begins to identify criteria according to which one may probe a network to determine
what factors contribute to power law dynamics.

Acknowledgments. The authors would like to thank Boris Pittel, Peter Kramer
and Gregor Kovačič for helpful discussions.

REFERENCES

[1] L. F. Abbott and R. Rohrkemper, A simple growth model constructs critical avalanche net-
works, Prog. in Brain Res., 165 (2007), 13–19.
[2] R. Albert and A.-L. Barabási, Statistical mechanics of complex networks, Rev. Modern Phys.,
74 (2002), 47–97.
[3] L. A. N. Amaral, A. Scala, M. Barthelemy and H. E. Stanley, Classes of small-world networks,
Proc. Natl. Acad. Sci., 97 (2000), 11149–11152.
[4] G. D. Bader and C. W. V. Hogue, Analyzing yeast protein-protein interaction data obtained
from different sources, Nat. Biotechnol., 20 (2002), 991–997.
[5] J. Balogh and B. G. Pittel, Bootstrap percolation on the random regular graph, Random
Struct. Algor., 30 (2007), 257–286.
[6] A.-L. Barabási and R. Albert, Emergence of Scaling in Random Networks, Science, 286
(1999), 509–512.
[7] J. M. Beggs and D. Plenz, Neuronal avalanches in neocortical circuits, J. of Neurosci., 23
(2003), 11167–11177.
[8] J. Best, Doubly Stochastic Processes: an Approach for Understanding Central Nervous
System Activity, Selected Topics on Applied Mathematics, Circuits, Systems, and Signals;
WSEAS Press, (2009), 155–158.
[9] G. Csardi and T. Nepusz, The igraph software package for complex network research, Inter-
Journal, Complex Systems 1695 (2006), http://igraph.sf.net.
[10] P. Erdös and A. Rényi, On random graphs, Publicationes Mathematicae, 6 (1959), 290–297.
[11] P. Erdös and A. Rényi, On the evolution of random graphs, Publ. Math. Inst. Hung. Acad.
Sci., 5 (1960), 17–61.
[12] D. T. Gillespie, A General Method for Numerically Simulating the Stochastic Time Evolution
of Coupled Chemical Reactions, J. Comput. Phys., 22 (1976), 403–434.
[13] G. Grimmett and D. Stirzaker, “Probability and Random Processes,” 3rd edition, Oxford
University Press, 2001.
[14] C. Haldeman and J. M. Beggs, Critical branching captures activity in living neural networks
and maximizes the number of metastable states, Phys. Rev. Lett., 94 (2005), 058101.
[15] M. Ito, Long-term depression, Ann. Rev. Neurosci., 12 (1989), 85–102.
[16] B. Joshi, “A doubly stochastic Poisson process model for wake-sleep cycling”, Ph.D. thesis, The Ohio State University, Columbus, OH, 2009.
[17] B. Joshi, J. Best and M. S. Blumberg, Developmental dynamics of sleep-wake cycles: a
mathematical model, preprint.
[18] M. O. Magnasco, O. Piro and G. A. Cecchi, Self-tuned critical anti-Hebbian networks, Phys. Rev. Lett., 102 (2009), 258102.
[19] M. E. J. Newman, S. H. Strogatz and D. J. Watts, Random graphs with arbitrary degree
distributions and their applications, Phys. Rev. E, 64 (2001), 026118.
[20] N. Pržulj, Biological network comparison using graphlet degree distribution, Bioinformat-
ics, 23 (2006), 177–183.
[21] R Development Core Team, R: A language and environment for statistical computing,
R Foundation for Statistical Computing, Vienna, Austria. (2009), ISBN 3-900051-07-0,
http://www.R-project.org.
[22] M. S. Shkarayev, G. Kovačič, A. V. Rangan and D. Cai, Architectural and functional connec-
tivity in scale-free integrate-and-fire networks, EPL-Europhys. Lett., 88 (2009), 50001.
[23] M. S. Shkarayev and G. Kovačič, Unpublished.
[24] J. Teramae and T. Fukai, Local cortical circuit model inferred from power-law distributed
neuronal avalanches, J. Comput. Neurosci., 22 (2007), 301–312.
[25] T. J. Teyler and P. DiScenna, Long-term potentiation, Ann. Rev. Neurosci., 10 (1987), 131–
161.
[26] D. J. Watts and S. H. Strogatz, Collective dynamics of small-world networks, Nature, 393
(1998), 440–442.
[27] G. B. West, J. H. Brown and B. J. Enquist, A general model for the origin of allometric
scaling laws in biology, Science, 276 (1997), 122–126.
[28] S. Zapperi, K. B. Lauritsen and H. E. Stanley, Self-organized branching process: mean-field
theory for avalanches, Phys. Rev. Lett., 75 (1995), 4071–4074.

Received September 2010; revised July 2011.


E-mail address: [email protected]
E-mail address: [email protected]
E-mail address: [email protected]
