Random Graph and Stochastic Process Contributions to Network Dynamics
DYNAMICAL SYSTEMS
Supplement 2011 pp. 1279–1288
Deena Schmidt
Mathematical Biosciences Institute
Ohio State University
Columbus, OH 43210, USA
Janet Best
Department of Mathematics
Ohio State University
Columbus, OH 43210, USA
Mark S. Blumberg
Department of Psychology
University of Iowa
Iowa City, IA 52242, USA
The graph structure introduced by Paul Erdös and Alfred Rényi in 1959 [10, 11]
and now referred to as an Erdös-Rényi random graph assumes that, for a given set
of nodes, any possible edge occurs independently of all other possible edges.
Watts and Strogatz [26] introduced small-world networks after recognizing that
many naturally occurring networks, such as social networks, exhibit greater clus-
tering than does an Erdös-Rényi graph: an edge is more likely to occur if it would
complete a triangle. In social networks, this lack of edge independence expresses
the observation that two individuals are more likely to be acquainted if they share
another acquaintance. Small-world networks such as the Watts and Strogatz model
are characterized by a high degree of clustering as well as a short average path length
between any two nodes. They are not, however, characterized by any particular degree distribution. Nonetheless, this class of networks has proved widely useful in applications,
and we include here some judiciously chosen examples.
The third class of networks that we study here, scale-free graphs, is actually a
distinguished subset of small-world networks [3]. These networks have a power law
degree distribution; in particular, they include nodes with high degree known as
hubs. In algorithms for creating scale-free graphs, an edge is more likely to occur if
it would connect to a hub, that is, a node that is already highly connected.
We consider here two processes occurring on each type of random graph. First, we
look at a process based upon percolation of activity through a graph in which nodes
can be in either an active or an inactive state. Beginning with some proportion of
active nodes, one asks whether all nodes eventually become active. For cases in
which activity percolates, one computes the distribution of percolation times. The
second process is a simple neural-like process involving spiking activity. Nodes fire
spikes at some Poisson rate, and each spike increases the firing rate of neighbor
nodes, tending to prolong activity. We ask how long each bout of such activity lasts
and calculate the distribution of active bout durations.
In the subsections below we describe in more detail the selection and construction
of the graphs as well as the algorithm for each process on the graphs. In each case,
the work was performed using the R programming language (version 2.9.0) [21]
along with the igraph package [9].
2.1. Graph Families. Each random graph is generated by starting with a set
of N nodes and adding edges between them according to a random process. For
simplicity, we consider only undirected, connected graphs.
In the Erdös-Rényi model [10], we start with N nodes and connect each pair with
probability p, independently of all other pairs of nodes. Since a node is equally likely
to be connected to each of the N − 1 other nodes, the probability that a node has
degree k is given by the binomial distribution
P(k) = \binom{N-1}{k} p^k (1-p)^{N-1-k}.
Here the expected degree of a node is λ = (N − 1)p which allows us to rewrite this
expression as
P(k) = \binom{N-1}{k} \left(\frac{\lambda}{N-1}\right)^{k} \left(1 - \frac{\lambda}{N-1}\right)^{N-1-k} \simeq \frac{\lambda^k}{k!} e^{-\lambda}.
Thus the degree distribution of an Erdös-Rényi graph becomes Poisson in the limit
of large N .
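As an illustration (not the authors' code), the following R sketch uses the igraph functions erdos.renyi.game and degree to sample a G(N, p) graph and compare its empirical degree distribution with the Poisson limit; the values N = 1000 and λ = 10 are illustrative choices rather than parameters taken from the paper.

```r
library(igraph)

N      <- 1000                     # illustrative size
lambda <- 10                       # illustrative expected degree
p      <- lambda / (N - 1)         # edge probability giving E[degree] = lambda

g <- erdos.renyi.game(N, p, type = "gnp")                  # sample G(N, p)

deg       <- degree(g)
empirical <- tabulate(deg + 1, nbins = max(deg) + 1) / N   # empirical P(k), k = 0..max(deg)
poisson   <- dpois(0:max(deg), lambda)                     # Poisson(lambda) approximation
round(cbind(k = 0:max(deg), empirical, poisson), 4)
```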
The small-world graphs we consider are generated by the model of Watts and
Strogatz [26]. We start with N nodes in a regular ring lattice where each node
is connected to k of its nearest neighbors. Then with probability q, each edge is
rewired. In other words, an edge is disconnected from one of its two nodes and then
reconnected to another node chosen at random. This rewiring process introduces
long-range connections, leading to small average path length characteristic of small-
world graphs. In addition, this graph is more highly clustered than an Erdös-Rényi
random graph.
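For concreteness, here is a small R sketch using igraph's watts.strogatz.game (with illustrative parameter values, not those of the paper) to check the small-world signature of high clustering together with short average path length:

```r
library(igraph)

N <- 1000     # illustrative number of nodes
k <- 4        # each node starts connected to its k nearest neighbors (k/2 on each side)
q <- 0.05     # rewiring probability

sw <- watts.strogatz.game(dim = 1, size = N, nei = k / 2, p = q)

transitivity(sw, type = "global")   # clustering stays well above the Erdos-Renyi level
average.path.length(sw)             # mean shortest path drops sharply once q > 0
```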
To generate scale-free graphs, we follow the model by Barabási and Albert [6]
in which nodes are connected by a process with preferential attachment. We begin
with a small network of m0 ≥ 2 nodes and add new nodes to the network one at a
time until we have a total of N nodes. The probability that the new node will be connected to an existing node i is proportional to the degree k_i of node i: p_i = k_i / \sum_j k_j.
Highly connected hubs tend to quickly accumulate more links, while nodes of small
degree are unlikely to gain new connections. Thus, new nodes have a “preference”
to attach themselves to hubs. The degree distribution of a Barabási-Albert random
graph is scale-free, and in particular, it follows a power law distribution
P(k) = A k^{-\gamma}
where A is a constant that ensures that P (k) sums to 1, and the degree exponent γ
typically lies in the range 2 < γ < 3 [2]. Since scale-free graphs belong to the larger
class of small-world graphs, they share the property of small average path length,
but the extent of clustering is closer to that of an Erdös-Rényi random graph [6].
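A corresponding sketch with igraph's barabasi.game (again with illustrative N and m, and directed = FALSE to match the undirected graphs considered here) displays the heavy-tailed degree distribution as an approximately straight line on log-log axes:

```r
library(igraph)

N <- 1000   # illustrative number of nodes
m <- 2      # edges attached by each incoming node (illustrative)

ba <- barabasi.game(N, m = m, directed = FALSE)   # preferential attachment

pk   <- degree.distribution(ba)        # relative frequency of degrees 0, 1, 2, ...
k    <- seq_along(pk) - 1              # the corresponding degrees
keep <- k > 0 & pk > 0                 # drop zeros before taking logs
plot(k[keep], pk[keep], log = "xy",    # roughly linear on log-log axes,
     xlab = "Degree k", ylab = "P(k)") # consistent with P(k) ~ k^(-gamma)
```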
2.2. Percolation. The first of the two stochastic processes we consider on random
graphs is based upon percolation of activity through the network. Assume that
each node can be in one of two possible states: active or inactive. We start the
percolation-type process at time t = 0 with a proportion ρ of initially active nodes
in the graph. We then define an updating rule for the spread of activity: in the
next time step, a node with k active neighbors becomes active with probability
pa , which depends on k. If a node was active in the previous time step, it stays
active. In contrast to classical bootstrap percolation, we are not considering a
fixed threshold above which nodes become activated with probability 1. Instead, we use a saturating function to define the activation probability. In particular, for each node
i ∈ {1, . . . , N } we compute the proportion of active neighbors yi and then take
p_a = \frac{c\, y_i}{y_i + d}
where c and d are constants such that pa ∈ [0, 1]. For cases in which activity
percolates throughout the network, we look at the distribution of percolation times,
i.e. the time it takes for all nodes in the graph to become active. This is a variation
on a classical question in random graph theory that we describe below.
For a given graph with N nodes where each node is initially active with probability p, let E be the event that all nodes are eventually active. The question is to identify p− and p+ such that for p < p− (respectively, p > p+), the probability of event E is essentially 0 (respectively, 1); in particular, to show the existence of a phase transition window [p−(N), p+(N)] and to quantify how quickly it shrinks to 0 as N → ∞.
Balogh and Pittel [5] give results for this question in the case of d-regular random
graphs, but for other types of random graphs and more complicated percolation
processes, little is known.
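As a concrete illustration of the update rule in this subsection, the following R sketch (ours, not the authors' implementation) runs the saturating-activation process until either all nodes are active or a step limit is reached, and records the number of update steps; the graph and the values of ρ, c and d below are illustrative choices, with c = 1 so that pa ∈ [0, 1].

```r
library(igraph)

percolation.time <- function(g, rho = 0.01, c = 1, d = 0.1, max.steps = 1000) {
  A      <- get.adjacency(g, sparse = FALSE)
  deg    <- degree(g)
  active <- runif(vcount(g)) < rho          # each node initially active with prob. rho
  for (t in 1:max.steps) {
    if (all(active)) return(t - 1)          # percolation time: every node is active
    y  <- as.numeric(A %*% active) / pmax(deg, 1)     # proportion y_i of active neighbors
    pa <- c * y / (y + d)                   # saturating activation probability
    active <- active | (runif(length(active)) < pa)   # previously active nodes stay active
  }
  NA                                        # activity did not percolate within max.steps
}

g     <- erdos.renyi.game(500, 10 / 499)    # illustrative graph
times <- replicate(1000, percolation.time(g))
hist(times[!is.na(times)], xlab = "Time to percolation", main = "")
```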
2.3. Spiking Model. The second process we consider is a simple neural-like pro-
cess involving spiking activity. Nodes fire action potential spikes at a Poisson rate,
and each spike increases the firing rate of neighboring nodes, tending to prolong
activity on the graph. Here we are interested in the length of each bout of activity
in order to compute the distribution of active bout durations.
The spiking model we consider is similar to a model by Abbott and Rohrkemper
[1], but with different network structure and without critically tuning any parame-
ters. These authors consider a neural network that grows and shrinks in response to
intracellular calcium concentration C, a measure of activity in the network. Their
model involves tuning parameter C to a critical value in order to produce power
law distributions of activity.
We generate and fix a random graph with N nodes from one of the three families
described above. We then put a Poisson spiking process on the graph according to
the firing rate equations from [1]: their equation (1) and a modified version of (2).
These are equations (1) and (2) below.
\tau_r \frac{dr_i}{dt} = r_0 - r_i \qquad (1)
Here ri is the firing rate for node i ∈ {1, . . . , N }, r0 is the background firing rate
which is constant for all nodes in the graph, and τr is a time constant. When node
i fires an action potential, the firing rate of each of its neighboring nodes j gets
incremented by a constant g:
r_j \to r_j + g A_{ij}. \qquad (2)
A is the adjacency matrix of the graph, i.e., entry Aij = 1 if nodes i and j are
connected, otherwise Aij = 0. Hence, node i’s neighbors are the non-zero entries in
row i of A.
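For later reference, between spikes equation (1) is a linear relaxation, so each rate simply decays exponentially back toward baseline: if the rates were last updated at time t_0, then
r_i(t) = r_0 + (r_i(t_0) - r_0)\, e^{-(t - t_0)/\tau_r}, \qquad t \ge t_0,
and equation (2) adds the jump g A_{ij} to each neighbor j at the moment node i spikes.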
We simulate the above processes on random graphs via Gillespie’s Stochastic
Simulation Algorithm [12] implemented in R. All simulations are done with a fixed
graph for 100,000 replications of the process on that graph. We record the length of each bout of activity, where successive bouts are separated by periods of inactivity of at least length ∆, and
then compute the distribution of active bout durations for a range of parameter
values for each of the three graph families.
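One possible implementation of such a simulation (a sketch under our own choices, not a reproduction of the authors' Gillespie code) uses thinning: because rates only decrease between spikes, as the closed-form solution of (1) above shows, the total rate just after the last event bounds the process intensity until the next one, so candidate spike times drawn at that bound can be accepted or rejected exactly. The graph and all parameter values below are illustrative; bout durations can then be read off by splitting the returned spike train at gaps longer than ∆.

```r
library(igraph)

simulate.spikes <- function(g, t.end = 1e4, r0 = 0.002, g.inc = 1e-5, tau.r = 100) {
  A      <- get.adjacency(g, sparse = FALSE)
  r      <- rep(r0, vcount(g))            # firing rates, initialized to r0
  t      <- 0
  spikes <- numeric(0)
  while (t < t.end) {
    R.max  <- sum(r)                      # bound: rates only decay until the next spike
    t.cand <- t + rexp(1, rate = R.max)   # candidate spike time
    r      <- r0 + (r - r0) * exp(-(t.cand - t) / tau.r)   # eq. (1): decayed rates
    t      <- t.cand
    if (runif(1) < sum(r) / R.max) {      # accept candidate with probability R(t)/R.max
      i <- sample(length(r), 1, prob = r) # spiking node, chosen proportional to its rate
      r <- r + g.inc * A[i, ]             # eq. (2): each neighbor's rate jumps by g
      spikes <- c(spikes, t)
    }
  }
  spikes
}

spk <- simulate.spikes(erdos.renyi.game(100, 0.05))   # illustrative run
```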
increases, the mean time to percolation decreases while the shape of the distribution
remains roughly the same.
[Figure: histograms of node degree (Density vs. Degree, left column) and time to percolation (Density vs. Time, right column).]
Lastly, Figure 3 shows the distributions of node degree and time to percolation
for a Barabási-Albert random graph generated as described in Section 2.1. The
percolation process starts with 1% of nodes initially active (ρ = 0.01), but in this
[Figure 3: distributions of node degree and time to percolation for the Barabási-Albert graph, plotted as log(Frequency).]
3.2. Spiking Model Results. For the spiking process, we define bouts of activity
as periods of time (measured in milliseconds) during which at least one node in
the network fires an action potential spike and there has not been a gap in activity
of more than ∆ milliseconds. For the numerical results we present below, we use
∆ = 1 ms. Other parameter values used are r0 = 0.002 Hz, g = 0.00001 Hz (except as noted), and τr ∈ [0.1, 1000] ms; the vector of firing rates r = {r1, . . . , rN} was always initialized to the background firing rate r0 for each node.
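The bout bookkeeping just described can be made explicit with a small helper (ours, for illustration): given a sorted vector of spike times in milliseconds, split the train wherever a gap exceeds ∆ and return the length of each resulting bout (a lone spike yields a bout of length 0).

```r
bout.durations <- function(spike.times, delta = 1) {
  gaps   <- diff(spike.times)
  breaks <- which(gaps > delta)                           # gaps longer than delta end a bout
  starts <- spike.times[c(1, breaks + 1)]                 # first spike of each bout
  ends   <- spike.times[c(breaks, length(spike.times))]   # last spike of each bout
  ends - starts
}

bout.durations(c(0.2, 0.5, 0.9, 3.0, 3.4, 8.1), delta = 1)   # 0.7, 0.4, 0.0 (ms)
```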
As illustrated in Figure 4 (left and middle panels), the spiking model results in
approximately power law distributed bout durations over a wide range of param-
eter values for the Erdös-Rényi and Barabási-Albert graphs; power laws were also
observed when the underlying graph had Watts-Strogatz structure (not shown).
There are, of course, parameter choices that do not result in a power law. Note
that if τr is too small, then firing rates raised by excitatory inputs are almost
instantly reset to r0 , so that the network remains approximately a union of inde-
pendent Poisson processes and therefore approximately a Poisson process, by the
Superposition Theorem [13]. In such cases, the bout distribution is approximately
exponential.
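A quick numerical check of the superposition property invoked here (a toy example, not taken from the paper): merging independent Poisson spike trains on a common time window yields a single Poisson process whose inter-spike gaps are exponential with the summed rate.

```r
set.seed(1)
rates <- runif(50, 0.5, 2)    # illustrative rates for 50 independent Poisson nodes
t.max <- 1000                 # common observation window

# A homogeneous Poisson process on [0, t.max]: Pois(rate * t.max) uniform points.
one.train <- function(rate) sort(runif(rpois(1, rate * t.max), 0, t.max))

merged <- sort(unlist(lapply(rates, one.train)))
gaps   <- diff(merged)
c(mean.gap = mean(gaps), expected = 1 / sum(rates))   # both close to 1/sum(rates)
```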
When τr is large enough (in relation to g), the increment in firing rate due to
an incoming spike does not have time to dissipate before the next spike arrives; the
newly increased firing rate in turn impacts the neighbors’ rates. As nodes engage in
such positive feedback across an activity bout, firing rates of many nodes increase.
The increased firing rates decrease the probability of a gap in firing sufficient to end
the active episode, thereby leading to longer active bouts and a heavier tail in the
distribution of bout durations. In this case, each node fires as an inhomogeneous
Poisson process with rate λi (t); the resulting bout distribution can approximate a
power law for appropriate λi (t). Note that if λi (t) grows too quickly, the resulting
active bout distribution will be more heavy-tailed than power law. The right panel
of Figure 4 provides an illustration on an Erdös-Rényi graph.
[Figure 4: log(Frequency) plots of active bout durations (three panels).]
We observe that the behavior of firing rates of individual nodes in the network
is one of the distinguishing features of different mechanisms for generating power
law distributions via network dynamics. In the memoryless model of Shkarayev
and colleagues [22], nodes fire in response to neighbors and in response to Poisson
inputs. In their network, the nodes manifest a power law distribution of firing rates
[23]. The spiking model with memory considered here behaves quite differently,
with the firing rates increasing throughout a bout of activity.
Neuronal connectivity patterns vary substantially in different regions of the brain,
likely reflecting different functions and needs in addition to evolutionary and devel-
opmental history. Though details of connectivity are often unknown, even partial
information concerning circuit architecture can help distinguish between some of
the possible mechanisms underlying the collective dynamics of the network. Know-
ing properties of nodes (adapting or potentiating synapses, for example) or of the
network (e.g., variance of firing rate among nodes) can be useful in ruling out some
mechanisms.
One of the questions we address in this study is whether scale-free graph structure
makes power law dynamics more robust. It was known that one could achieve
power law dynamics in a non-scale-free network with a critically tuned parameter
[28, 1, 24], and we previously constructed an example of a simple network with power
law dynamics without tuning [16, 17]. It was also known that a power law structure
in the graph could give rise to power law dynamics [22]. A natural question is, can
a scale-free network structure contribute to power law dynamics, thereby making
power law behavior more robust or more likely to be observed than on a graph with
some other structure? Perhaps surprisingly, we have not found evidence for such a
conclusion. In the examples studied here, either the graph structure was entirely
responsible for the power law, or the dynamics were robustly capable of producing
power law dynamics on a wide range of graph structures. This conclusion may
depend on other properties of the graph or process not studied here. We have
not yet considered a directed graph, nor have we considered inhibitory connections.
Nonetheless, this work contributes to the understanding of power law dynamics and
begins to identify criteria according to which one may probe a network to determine
what factors contribute to power law dynamics.
Acknowledgments. The authors would like to thank Boris Pittel, Peter Kramer
and Gregor Kovačič for helpful discussions.
REFERENCES
[1] L. F. Abbott and R. Rohrkemper, A simple growth model constructs critical avalanche net-
works, Prog. in Brain Res., 165 (2007), 13–19.
[2] R. Albert and A.-L. Barabási, Statistical mechanics of complex networks, Rev. Modern Phys.,
74 (2002), 47–97.
[3] L. A. N. Amaral, A. Scala, M. Barthelemy and H. E. Stanley, Classes of small-world networks,
Proc. Natl. Acad. Sci., 97 (2000), 11149–11152.
[4] G. D. Bader and C. W. V. Hogue, Analyzing yeast protein-protein interaction data obtained from different sources, Nat. Biotechnol., 20 (2002), 991–997.
[5] J. Balogh and B. G. Pittel, Bootstrap percolation on the random regular graph, Random
Struct. Algor., 30 (2007), 257–286.
[6] A.-L. Barabási and R. Albert, Emergence of Scaling in Random Networks, Science, 286
(1999), 509–512.
[7] J. M. Beggs and D. Plenz, Neuronal avalanches in neocortical circuits, J. of Neurosci., 23
(2003), 11167–11177.
[8] J. Best, Doubly Stochastic Processes: an Approach for Understanding Central Nervous
System Activity, Selected Topics on Applied Mathematics, Circuits, Systems, and Signals;
WSEAS Press, (2009), 155–158.
[9] G. Csardi and T. Nepusz, The igraph software package for complex network research, Inter-
Journal, Complex Systems 1695 (2006), http://igraph.sf.net.
[10] P. Erdös and A. Rényi, On random graphs, Publicationes Mathematicae, 6 (1959), 290–297.
[11] P. Erdös and A. Rényi, On the evolution of random graphs, Publ. Math. Inst. Hung. Acad.
Sci., 5 (1960), 17–61.
[12] D. T. Gillespie, A General Method for Numerically Simulating the Stochastic Time Evolution
of Coupled Chemical Reactions, J. Comput. Phys., 22 (1976), 403–434.
[13] G. Grimmett and D. Stirzaker, “Probability and Random Processes,” 3rd edition, Oxford
University Press, 2001.
[14] C. Haldeman and J. M. Beggs, Critical branching captures activity in living neural networks and maximizes the number of metastable states, Phys. Rev. Lett., 94 (2005), 058101.
[15] M. Ito, Long-term depression, Ann. Rev. Neurosci., 12 (1989), 85–102.
[16] B. Joshi, “A doubly stochastic Poisson process model for wake-sleep cycling”, Ph.D. thesis, The Ohio State University, Columbus, OH, 2009.
[17] B. Joshi, J. Best and M. S. Blumberg, Developmental dynamics of sleep-wake cycles: a
mathematical model, preprint.
[18] M. O. Magnasco, O. Piro and G. A. Cecchi, Self-tuned critical anti-Hebbian networks, Phys. Rev. Lett., 102 (2009), 258102.
[19] M. E. J. Newman, S. H. Strogatz and D. J. Watts, Random graphs with arbitrary degree
distributions and their applications, Phys. Rev. E, 64 (2001), 026118.
[20] N. Pržulj, Biological network comparison using graphlet degree distribution, Bioinformatics, 23 (2006), 177–183.
[21] R Development Core Team, R: A language and environment for statistical computing,
R Foundation for Statistical Computing, Vienna, Austria. (2009), ISBN 3-900051-07-0,
http://www.R-project.org.
[22] M. S. Shkarayev, G. Kovačič, A. V. Rangan and D. Cai, Architectural and functional connec-
tivity in scale-free integrate-and-fire networks, EPL-Europhys. Lett., 88 (2009), 50001.
[23] M. S. Shkarayev and G. Kovačič, Unpublished.
[24] J. Teramae and T. Fukai, Local cortical circuit model inferred from power-law distributed
neuronal avalanches, J. Comput. Neurosci., 22 (2007), 301–312.
[25] T. J. Teyler and P. DiScenna, Long-term potentiation, Ann. Rev. Neurosci., 10 (1987), 131–
161.
[26] D. J. Watts and S. H. Strogatz, Collective dynamics of small-world networks, Nature, 393
(1998), 440–442.
[27] G. B. West, J. H. Brown and B. J. Enquist, A general model for the origin of allometric
scaling laws in biology, Science, 276 (1997), 122–126.
[28] S. Zapperi, K. B. Lauritsen and H. E. Stanley, Self-organized branching process: mean-field theory for avalanches, Phys. Rev. Lett., 75 (1995), 4071–4074.