Moretti 2013
Received 15 Apr 2013 | Accepted 28 Aug 2013 | Published 3 Oct 2013 DOI: 10.1038/ncomms3521
Hallmarks of criticality, such as power-laws and scale invariance, have been empirically found
in cortical-network dynamics and it has been conjectured that operating at criticality entails
functional advantages, such as optimal computational capabilities, memory and large dynamical ranges. As critical behaviour requires a high degree of fine tuning to emerge, some type
of self-tuning mechanism needs to be invoked. Here we show that, taking into account the
complex hierarchical-modular architecture of cortical networks, the singular critical point is
replaced by an extended critical-like region that corresponds—in the jargon of statistical
mechanics—to a Griffiths phase. Using computational and analytical approaches, we find
Griffiths phases in synthetic hierarchical networks and also in empirical brain networks such
as the human connectome and that of Caenorhabditis elegans. Stretched critical regions,
stemming from structural disorder, yield enhanced functionality in a generic way, facilitating
the task of self-organizing, adaptive and evolutionary mechanisms selecting for criticality.
1 Departamento de Electromagnetismo y Física de la Materia and Instituto Carlos I de Física Teórica y Computacional, Facultad de Ciencias, Universidad de Granada, 18071 Granada, Spain. Correspondence and requests for materials should be addressed to M.A.M. (email: [email protected]).
Empirical evidence that living systems can operate near critical points is flowering in contexts ranging from gene expression patterns1, to optimal cell growth2, bacterial clustering3 or flocks of birds4. In the context of neuroscience, synchronization patterns have been shown to exhibit broadband criticality5, critical avalanches of spontaneous neural activity have been consistently found both in vitro6–8 and in vivo9, and results from large-scale brain models based on the human connectome show that only at criticality is the brain structure able to support the dynamics observed in functional magnetic resonance imaging (fMRI) recordings10. All this evidence suggests that criticality—with its concomitant power-laws and scale invariance—might have a relevant role in intact-brain dynamics8,11. At variance with inanimate matter—for which the emergence of generic or self-organized criticality in sandpile models, type-II superconductors or solar flares is relatively well understood12–14—criticality in living systems can be conjectured to be the result of evolutionary or adaptive processes, which for reasons to be understood select for it.

The criticality hypothesis8,11,15 states that biological systems can perform the complex computations that they require to survive only by operating at criticality (the edge-of-chaos), that is, at the borderline between an active or chaotic phase in which noise propagates unboundedly—thereby corrupting all information processing or storage—and a quiescent or ordered phase in which perturbations readily fade away, hindering the ability to react and adapt16,17. Critical dynamics provides a delicate trade-off between these two impractical tendencies, and it has been argued to imply optimal transmission and storage of information7,8,18, optimal computational capabilities19, large network stability17, maximal variety of memory repertoires20 and maximal sensitivity to stimuli21.

Such a delicate balance occurs just at a singular or critical point, requiring a precise fine tuning. However, a very recent fMRI analysis of the human brain at its resting state reveals that the brain spends most of the time wandering around a broad region near a critical point, rather than just sitting at it22. This suggests that the region where cortical networks operate is not just a critical point, but a whole extended region around it.

Here, inspired by this empirical observation as well as by some recent findings in network theory and neuroscience23–25, we scrutinize the dynamics of simple models of neural-activity propagation when the structural architecture of brain networks is explicitly taken into account. Using a combination of analytical and computational tools, we show that the intrinsically disordered (hierarchical and modular) organization of brain networks dramatically influences the dynamics by inducing the emergence—in the jargon of statistical mechanics—of a Griffiths phase (GP)23,26–28. This phase, which stems from the presence of disorder (structural heterogeneity here), is characterized by generic power-laws extending over broad regions in parameter space. Furthermore, functional advantages usually ascribed to criticality, such as a huge sensitivity to stimuli, are reported to emerge generically all along the GP. Remarkably, not only do we find GPs in stylized models of brain architecture, but also in real neural networks such as those of Caenorhabditis elegans (C. elegans) and the human connectome.

Our conclusion is that, as a consequence of the intrinsically disordered architecture of brain networks, critical-like regions are extended from a singular point to a broad or stretched region, much as evidenced in recent fMRI experiments. The existence of GPs facilitates the task of self-organizing, adaptive or evolutionary mechanisms seeking critical-like attributes, with all their alleged functional advantages. We claim that the intrinsic structural heterogeneity of cortical networks calls for a change of paradigm from the critical/edge-of-chaos picture to a new one relying on extended critical-like Griffiths regions. Our work also raises a series of questions worthy of future pursuit. For instance, is there any connection between GPs and the empirically reported generic power-law decay of short-time memories? Do our results extend to other hierarchical architectures such as those encountered in metabolic or technological networks?

Results
Hierarchical network architectures. The cortex network has been the focus of attention in neuroanatomy for a long time, but only recently has the development of high-throughput methods allowed the unveiling of its intricate architecture or connectome29,30. Brain networks have been found to be structured in moduli—each modulus being characterized by a much denser connectivity within it than with elements in other moduli—organized in a hierarchical fashion across many scales31–33. Moduli exist at each hierarchical level: cortical columns arise at the lowest level, cortical areas at intermediate ones and brain regions emerge at the systems level, forming a sort of fractal-like nested structure31–34.

To be able to perform systematic analyses, we have designed synthetic hierarchical and modular networks (HMNs) with s hierarchical levels, N nodes/neurons and L links/synapses, whose structure can be tuned to mimic that of real networks. We employ two different HMN models based on a bottom-up approach: in the first (HMN-1), local fully connected moduli are constructed and then recursively grouped by establishing new inter-moduli links between them in a stochastic way with a level-dependent probability p, as sketched in Fig. 1; in the second (HMN-2), links are placed in a deterministic way with a level-dependent number of connections. For further details see the Methods section. Similarly, top-down models can also be designed35.

A way to encode key network structural information is the topological dimension, D, which measures how the number of neighbours of any given node grows when moving 1, 2, 3, ..., r steps away from it: N_r ~ r^D for large values of r. Networks with the small-world property36 have local neighbourhoods quickly covering the whole network, that is, N_r grows exponentially with r, formally corresponding to D → ∞. Instead, large worlds have a finite topological dimension, while D = 0 describes fragmented networks (see Fig. 1). Our synthetic HMN models span the whole spectrum of D-values, as illustrated in Fig. 1.

Strictly speaking, the HMN networks that we consider in the following are finite dimensional only for p = 1/4, in which case the number of inter-moduli connections is stable across hierarchical levels (see Methods). For p > 1/4 (resp. p < 1/4), networks become more and more densely (resp. sparsely) connected as the hierarchy depth (that is, the network size) is increased. Deviations from p = 1/4 create fractal-like networks up to a certain scale, which are good approximations of finite-dimensional networks at finite sizes. In some works (for example, Gallos et al.37), the Hausdorff (fractal) dimension D_f is computed for complex networks. We have verified numerically that D_f ≈ D in all cases for HMNs.

Architecture-induced GPs. Disorder is well known to radically affect the behaviour of phase transitions (see studies by Vojta28 and references therein). In disordered systems, there exist local regions characterized by parameter values that differ significantly from their corresponding system averages. Such rare regions can, for instance, induce the system to be locally ordered, even if globally it is in the disordered phase. In this way, in propagation-dynamics models, activity can transitorily linger for long times within rare active regions, even if the system is in its quiescent phase. In the particular case in which broadly different
Figure 1 | Hierarchical-modular networks. (a) Sketch of the bottom-up approach (HMN-1 model): initially, nodes are grouped into fully connected modules of size M0 (blue squares); then nodes in different modules are clustered recursively into sets of b higher-level blocks (for example, in pairs, b = 2), linking their respective nodes with hierarchical level-dependent wiring probabilities (HMN-1): p_l = αp^l, with 0 < p < 1 and α a constant. At level l, each of the existing N/2^l pairs is connected on average by n_l = 4^l (M0/2)^2 αp^l links. The resulting networks are always connected, with total number of nodes N = M0 b^s and average connectivity ⟨k⟩ = (M0 − 1) + α(M0/2) Σ_{i=1}^{s} (2p)^i. (b) Graph representation of a HMN-1 with N = 2^11 nodes, organized across s = 10 hierarchical levels (M0 = 2, p = 1/4 and α = 4). (c) Adjacency matrix of the connection density (as in b) averaged over several network realizations (greener for larger densities). (d,e) Topological dimension, D, as defined by N_r ~ r^D (see main text), as a function of parameters. (d) As p is increased, D (the slope of the straight lines in the double-logarithmic plot) grows and eventually becomes infinite; for smaller values of p (not shown) N_r becomes flat, and D → 0. (e) Keeping p = 1/4, the topological dimension D is finite and continuously varying as a function of α (values summarized in the inset). (f) Summary of structural properties: networks are disconnected (vanishing topological dimension, D) and small-world (D = ∞) for small and large values of p, respectively, while around p = 1/4 networks have a finite D (as well as a finite connectivity and a finite density of connections in the large-N limit, that is, networks are scalable).
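The HMN-1 recipe in the caption above can be sketched in a few lines of pure Python. This is only an illustrative reconstruction under stated assumptions (b = 2, acceptance probability p_l = αp^l, resampling until each pair of blocks carries at least one link, as the Methods prescribe); the parameter values and the breadth-first estimate of N_r are illustrative choices, not the ones used in the paper.

```python
import random
from collections import deque

def hmn1(s=8, M0=2, alpha=1.0, p=0.25, rng=None):
    """Sketch of the HMN-1 construction of Fig. 1 (b = 2): fully connected
    modules of size M0 are recursively paired; at level l, candidate links
    across the two blocks are accepted with probability p_l = alpha * p**l,
    resampling until each block pair carries at least one link."""
    rng = rng or random.Random(1)
    N = M0 * 2 ** s
    adj = {i: set() for i in range(N)}
    for m in range(0, N, M0):                      # level 0: full modules
        for i in range(m, m + M0):
            for j in range(i + 1, m + M0):
                adj[i].add(j)
                adj[j].add(i)
    for l in range(1, s + 1):
        half = M0 * 2 ** (l - 1)                   # block size at level l
        pl = alpha * p ** l                        # level-dependent probability
        for left in range(0, N, 2 * half):
            links = []
            while not links:                       # enforce connectedness
                links = [(i, j)
                         for i in range(left, left + half)
                         for j in range(left + half, left + 2 * half)
                         if rng.random() < pl]
            for i, j in links:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def shells(adj, source, rmax):
    """Cumulative neighbourhood sizes N_r for r = 1..rmax (N_r ~ r^D)."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        if dist[u] < rmax:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
    return [sum(1 for d in dist.values() if d <= r) for r in range(1, rmax + 1)]

net = hmn1()
print(len(net), shells(net, 0, 8))
```

Fitting the slope of log N_r versus log r over many sources and realizations would then give the topological dimension D discussed in the main text.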
rare-regions exist—with broadly distinct sizes and time-scales—the overall system behaviour, determined by the convolution of their corresponding highly heterogeneous contributions, becomes anomalously slow (see below). In contrast with standard critical points, systems with rare-region effects have an intermediate broad phase separating order from disorder: a GP with generic power-law behaviour and other anomalous properties (see studies by Vojta28 and below).

Remarkably, it has been very recently shown that structural heterogeneity can have, in networked systems, a role analogous to that of standard quenched disorder in physical systems23. In particular, simple dynamical models of activity propagation exhibit GPs when running upon networks with a finite topological dimension D. On the other hand, in small-world networks (with D = ∞) local neighbourhoods are too large—quickly covering the whole network—to be compatible with the very concept of rare (isolated) regions23. Therefore, it has been conjectured that a finite topological dimension D is an excellent indicator of eventual rare-region effects and GPs.

Anomalous propagation dynamics in HMNs. To model the propagation of neuronal activity, we consider highly simplified dynamical models running upon HMNs. More realistic models of neural dynamics with additional relevant layers of information could be considered, but we do not expect them to significantly affect our conclusions. Our approach here consists in modelling activity propagation in a minimal way; in some of the cases that we study, the network nodes are not neurons but coarse neural regions, for which effective models of activity propagation are expected to provide a sound description of large-scale properties. Every node (or neuron) is endowed with a binary state variable s, representing either activity (s = 1) or quiescence (s = 0). Each active neuron is spontaneously deactivated at some rate μ (μ = 1 here), while it propagates its activity to other directly connected neurons at rate λ. We have considered two different dynamics: in the first one (Model A), a synapse between an active and a quiescent node is randomly selected at each time step and probed for activation, while in the other variant (Model B) a neuron is selected and all its neighbours are probed for activation. Details of the computational implementation of the two models, known in statistical physics as the contact process and the SIS model, respectively, can be found in Methods.

In general, depending on the value of λ, these models can be either in an active phase—for which the density ρ of active nodes reaches a steady-state value ρ_s > 0 in the large-system-size and large-time limit—or in the inactive phase, in which ρ falls ineluctably into the quiescent configuration (ρ_s = 0). Separating these two regimes, at some value λ_c, there is a standard critical point where the system exhibits power-law behaviour for quantities of interest, such as the time decay of a homogeneous initial activity density, ρ(t) ~ t^−θ, or the size distribution of avalanches triggered by an initially localized perturbation, P(S) ~ S^−τ. Here θ and τ are critical indices or exponents.
Figure 2 | Conventional versus non-conventional phase diagram. Left: (a,b) Conventional critical-point scenario for our Model A running upon a random Erdös–Rényi network (10^6 nodes, average connectivity 20, infinite topological dimension D): a singular power-law separates a quiescent phase (with exponential decay) from an active one (with a non-trivial steady state). Right: (c–f) Emergence of broad regions of power-law scaling in HMNs. (c) Schematic phase diagram for a system exhibiting a broad region of power-law scaling. The stationary density of activity, ρ_s, is depicted as a function of the spreading rate λ. (d,e) Steady-state density of active sites for Model A and Model B dynamics, respectively, on HMN-1 networks (with N = 2^14 nodes, and parameters s = 13, p = 1/4, α = 1). Data for increasing values of the spreading rate λ, from bottom to top. (f) Avalanche-size distributions for Model A on a HMN-2 network (N = 2^14, s = 13, p = 1/4, α = 1; the GP for such networks is observed for 2.60 ≤ λ ≤ 2.79); avalanche sizes are power-law distributed over a wide range of λ values, reflecting the existence of a GP. These conclusions have been confirmed in finite-size scaling analyses, and can be generalized to other combinations of network architectures and dynamical models.
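The qualitative distinction between the quiescent and active phases seen in Fig. 2 can be reproduced with a crude sketch of Model B (SIS-like) dynamics. The exact implementation is defined in the paper's Methods; here the discrete-time approximation, the ring-lattice substrate and the rate values are simplifying assumptions for illustration only.

```python
import random

def sis_density(adj, lam, mu=1.0, steps=200, seed=0):
    """Crude discrete-time sketch of Model B (SIS-like) dynamics: in each
    small time step dt, every active node deactivates with probability
    mu*dt and tries to activate each neighbour with probability lam*dt.
    Returns the density of active nodes rho(t), starting from all-active."""
    rng = random.Random(seed)
    N = len(adj)
    dt = 0.1 / (mu + lam)            # small step so probabilities stay << 1
    active = set(range(N))           # homogeneous initial condition, rho(0) = 1
    rho = []
    for _ in range(steps):
        nxt = set(active)
        for u in active:
            if rng.random() < mu * dt:
                nxt.discard(u)
            for v in adj[u]:
                if rng.random() < lam * dt:
                    nxt.add(v)
        active = nxt
        rho.append(len(active) / N)
        if not active:               # absorbing quiescent configuration
            rho.extend([0.0] * (steps - len(rho)))
            break
    return rho

# quiescent versus active regimes on a ring lattice (illustration only)
n = 500
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
low = sis_density(ring, lam=0.1)     # subcritical: activity dies out
high = sis_density(ring, lam=5.0)    # supercritical: non-trivial steady state
print(low[-1], high[-1])
```

On a finite-dimensional HMN (in place of the ring), the same kind of run in the intermediate λ window is where the slow, power-law-like decay of ρ(t) characteristic of the GP appears.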
This standard critical-point scenario (see Fig. 2a,b) holds for regular lattices, Erdös–Rényi networks and many other types of networks. On the other hand, computer simulations of the different dynamical models running upon our complex HMN topologies with finite D reveal a radically different behaviour (see Fig. 2c–f and Methods). The power-law decay of the average density ρ(t)—specific to the critical point in pure systems—extends to a broad range of λ values. The existence of a broad interval with power-law decaying activity is supported by finite-size scaling analyses reported in Methods. Likewise, as shown in Fig. 2f, avalanches of activity generated from a localized seed have power-law distributed sizes, with continuously varying exponents, in the same broad region. These features are fingerprints of a GP and have been confirmed to be robust against increasing system size (up to N = 2^20), using different types of HMN (HMN-1 with different values of α and p, HMN-2 models, all with finite D) and dynamical models (see Methods).

How do Griffiths phases work? For illustration purposes, let us consider a simplified example. Consider Model A (the contact process) on a generic network, with a node-dependent quenched spreading rate λ(x), characterized—without loss of generality—by a bimodal distribution of λ with average value λ̄. Suppose the two possible values of λ are one above and one below the critical point of the pure model, λ_c. In this way, at each location the system has an intrinsic preference to be either in the active or in the quiescent phase. Under these circumstances, typically λ_c < λ̄_c (the critical point of the disordered system), so that for values of λ̄ between λ_c and λ̄_c the disordered system is in its quiescent phase. However, there are always spatial locations characterized by significantly over-average values of λ (actually, local values of λ(x) > λ_c). In these regions, initial activity can linger for very long periods, especially if they happen to be large. Still, as such rare regions have a finite size, they ineluctably end up falling into the inactive state. Considering a rare active region of size z, it decays to the quiescent state after a typical time τ(z), which grows exponentially with cluster size, that is, τ(z) ≈ τ0 exp[A(λ)z] (Arrhenius law), where τ0 and A(λ) do not depend on z. On the other hand, the distribution of z-values is also exponential (very large regions are exponentially rare). Therefore, the overall activity density, ρ(t), decays as the following convolution integral

ρ(t) ∼ ∫ dz P(z) z exp[−t/(τ0 e^{A(λ)z})],   (1)

which, evaluated in a saddle-point approximation, leads to ρ(t) ~ t^−θ, with θ(λ̄) varying continuously with the disorder average value λ̄. Such generic power-laws signal the emergence of GPs. This is just an explanatory example of a general phenomenon, thoroughly studied in classical, quantum and non-equilibrium disordered systems28. In HMNs, the quenched disorder is encoded in the intrinsic disorder of the hierarchical contact pattern.

Diverging response in HMNs. One of the main alleged advantages of operating at criticality is the strong enhancement of the system's ability to distinctly react to highly diverse stimuli. In the statistical-mechanics jargon, this stems from the divergence of the susceptibility at criticality38. How do systems with broad GPs respond to stimuli? To answer this question we measure the following two different quantities (see Methods).

First, the dynamic susceptibility gauges the overall response to a continuous localized stimulus and is defined as S(λ) = N[ρ_f(λ) − ρ_s(λ)], where ρ_s(λ) is the stationary density in the absence of stimuli and ρ_f(λ) is the steady-state density reached when one single node is constrained to remain active. As shown in Fig. 3, S becomes extremely large in the GP and, more importantly, it grows as a power-law of system size, S(λ ≈ λ_c, N) ~ N^η, implying that there is an extended region (the whole GP) where the system exhibits a divergent response (with λ-dependent continuously varying exponents).

Second, the dynamic range, Δ, introduced in this context in the studies by Kinouchi and Copelli21, measures the range of perturbation intensities/frequencies for which the system reacts in distinct ways, being thus able to discriminate among them. We have computed Δ(λ) in the HMN-2 model (see Fig. 3), which clearly illustrates the presence of a broad region with huge
Figure 3 | Network response diverges all along the GP. Right axis: dynamic susceptibility, S, measured for the dynamical Model A in HMN-2 networks of size N = 2^14 (blue circles) and N = 2^17 (purple circles); the critical point λ_c ≈ 2.79 is marked as a vertical line (see Methods: dynamical protocol iii). In the GP (resp. active phase), the overall response increases (resp. decreases) for larger systems, as marked by the red (resp. blue) arrows. In the GP, S grows as a λ-dependent power-law of N. Left axis: dynamic range Δ (green squares) for the same case as above, N = 2^14 (dynamical protocol iv; see Methods). Instead of the usual symmetric cusp-singularity at the critical point, there is a whole region of extremely large Δ, with a strong asymmetry around the transition point, as corresponds to the existence of a GP.

Figure 4 | Critical avalanches all along the GP in real brain networks. Avalanche-size statistics for Model B dynamics in real neural networks. Main plot: human connectome network (consisting of 998 brain areas and the structural connections among them). Avalanche-size distributions for Model B dynamics (with 0.016 ≤ λ ≤ 0.020, equispaced values). Truncated power-laws P(S) ~ S^−τ e^−S/ξ with continuously varying exponents τ constitute the most reliable fits according to the Kolmogorov–Smirnov criterion (see Methods). Lower inset: the τ exponent (estimated through non-linear least-square fits) as a function of λ; it converges to ≈1.7 at the critical point, λ_c ≈ 0.020. Upper inset: as the main figure, but for the C. elegans neural network (0.08 ≤ λ ≤ 0.12).
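Avalanche-size statistics of the kind shown in Figs 2f and 4 can be sampled with a minimal contact-process-like sketch (Model A in spirit): activity is seeded on a single node and the number of activation events S is recorded when activity dies out. The ring substrate, the rates and the sample sizes below are illustrative assumptions, not the paper's setup.

```python
import random

def avalanche_size(adj, lam, mu=1.0, start=0, rng=None, smax=2000):
    """One avalanche of contact-process-like dynamics: starting from a
    single active seed, repeatedly pick a random active node; with
    probability mu/(mu+lam) it deactivates, otherwise it tries to activate
    a random neighbour. Returns S, the number of activation events."""
    rng = rng or random.Random(0)
    active, is_active = [start], {start}
    S = 1
    while active and S < smax:
        k = rng.randrange(len(active))
        u = active[k]
        if rng.random() < mu / (mu + lam):
            active[k] = active[-1]       # swap-pop deactivation, O(1)
            active.pop()
            is_active.discard(u)
        else:
            v = adj[u][rng.randrange(len(adj[u]))]
            if v not in is_active:
                is_active.add(v)
                active.append(v)
                S += 1
    return S

# avalanche statistics on a ring, well below versus near the 1-d contact
# process threshold (lambda_c ~ 3.3 in this parametrization)
n = 2000
ring = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
rng = random.Random(42)
sub = [avalanche_size(ring, lam=0.5, rng=rng) for _ in range(200)]
near = [avalanche_size(ring, lam=3.3, rng=rng) for _ in range(200)]
print(max(sub), max(near))
```

Histogramming such samples in logarithmic bins gives the distributions P(S); in a GP the heavy-tailed regime persists over a whole window of λ rather than at a single value.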
dynamic ranges (rather than the standard situation, for which large responses are sharply peaked at criticality). Therefore, if critical-like dynamics is important to access a broad dynamic range and to enhance the system's sensitivity, then it becomes much more convenient to operate with hierarchical-modular systems, where criticality—with extremely large responses and huge sensitivity—becomes a region rather than a singular point.

GPs in real networks. Analyses of different nature have revealed that organisms from the primitive C. elegans32,33,39,40 (for which a full detailed map of its about 300 neurons has been constructed) to cats, macaque monkeys or humans (for which large-scale connectivity maps are known29) have a hierarchically organized neural network. Such structure is also shared by functional brain networks (for example, from fMRI data)29,30,37,41,42. Do simple dynamical models of activity propagation (such as those in previous sections) running upon real neural networks exhibit GPs?

We have considered the human connectome network, obtained by Sporns and collaborators29,30 using diffusion imaging techniques. It consists of a highly coarse-grained mapping (as opposed, for instance, to the detailed map of C. elegans) of anatomical connections in the human brain, comprising N = 998 brain areas and the fibre-tract densities between them, with a hierarchical organization32,33 (see Methods).

Given that this network comprises only N ≈ 1,000 nodes, the maximum size of possible rare-regions and the associated power-laws are necessarily cut off at small sizes and short times. Nevertheless, as illustrated in Fig. 4, simulations of the dynamical models above (Model B in this case) show a significant deviation from the typical standard critical-point scenario. Instead, avalanches are clearly distributed as power-laws, with moderate finite-size effects, in a broad range of λ-values (see Fig. 4). Actually, truncated power-laws of the form P(S) ~ S^−τ e^−S/ξ—with λ-dependent values of τ—provide highly reliable fits of the size distributions, P(S), according to the Kolmogorov–Smirnov criterion (see Methods), supporting the picture of a broad critical-like region. This strongly suggests that if it were feasible to run dynamical models upon the actual human brain network (with about 10^12 neurons and 10^15 synapses), a GP would appear in a robust way and would extend over a much larger range of size and time scales. Similar results, even if affected by more severe size effects, are obtained for the C. elegans detailed neural network consisting of N ≈ 300 neurons (see Fig. 4, upper inset).

Spectral fingerprints of GPs in HMNs. To further confirm the existence of GPs (beyond direct computational simulations), here we present some analytical results. An important tool in the analysis of network dynamics is provided by spectral graph theory, in which the network structure is encoded in some matrix and linear-algebra techniques are exploited43. For instance, the dynamics of simple models (for example, Model B) is often governed by the largest (or principal) eigenvalue, Λ_max, of the adjacency matrix, A_ij (with 1's as entries for existing connections and 0's elsewhere), which straightforwardly appears in a standard linear stability analysis (as detailed in the Methods section). It is easy to show that (with very mild assumptions) the critical point—signalling the limit of linear stability of perturbations on the quiescent state—is given by λ_c Λ_max = 1.

Remarkably, it has been recently shown that this general result may not hold for certain networks, for which the largest eigenvalue has an associated localized eigenvector (for example,
Figure 5 | Spectral analyses of hierarchical networks. (a) Average spectrum of the adjacency matrix of HMN-2 networks (N = 2^14, s = 13, M0 = 2, α = 1). Data are averaged over 150 network realizations. The vertical axis reports the average number of eigenvalues (not the density). (b) Zoom of the higher spectral edge of (a). The values of the five largest eigenvalues for five randomly chosen networks from (a) are represented. No proper spectral gap is observed. (c) Localization of the five eigenvectors corresponding to the largest eigenvalues. The principal eigenvector f(Λ_max) is plotted in red. As our HMNs are connected, in agreement with the Perron–Frobenius theorem60, the components of f(Λ_max) (even the vanishing ones) are all strictly positive, although this cannot be appreciated on a linear scale. The next eigenvectors are plotted in magenta, orange, green and blue. (d) Dependence of the IPR on the system size and the number of block-block connections in HMN-2 networks. (e) Lower spectral edge of the cumulative distribution of Laplacian eigenvalues of HMN-2 networks, as in (a). Numerical data (points) are compared with an exponential Lifshitz tail with exponent a ≈ 1.00. The Laplacian matrix is defined as L_ij = δ_ij Σ_k A_ik − A_ij.
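The two spectral diagnostics used here, the principal eigenvalue Λ_max (and with it the linear-stability threshold λ_c = 1/Λ_max) and eigenvector localization as quantified by the IPR, can be sketched in pure Python. The clique-plus-chain toy graph below is an assumption of this sketch, chosen so that localization is visible at small size; it is not one of the paper's networks.

```python
def power_iteration(adj, iters=300):
    """Estimate the principal eigenvalue Lambda_max of the adjacency matrix
    and its eigenvector by repeated multiplication; the linear-stability
    threshold of the quiescent state is then lambda_c = 1 / Lambda_max."""
    N = len(adj)
    v = [1.0] * N
    est = 0.0
    for _ in range(iters):
        w = [sum(v[j] for j in adj[i]) for i in range(N)]
        norm = sum(x * x for x in w) ** 0.5
        est = norm / sum(x * x for x in v) ** 0.5
        v = [x / norm for x in w]
    return est, v

def ipr(v):
    """Inverse participation ratio, sum_i (v_i^2)^2 for a normalized vector:
    ~1/N for a delocalized eigenvector, O(1) when the weight is
    concentrated on a few nodes (localization)."""
    n2 = sum(x * x for x in v)
    return sum((x * x / n2) ** 2 for x in v)

# toy graph: a 5-clique with a 16-node chain attached; the principal
# eigenvector concentrates on the clique (a "rare region" of high degree)
adj = {i: [j for j in range(5) if j != i] for i in range(5)}
for k in range(5, 21):
    adj[k] = [k - 1]
    adj[k - 1].append(k)
lmax, vec = power_iteration(adj)
print(lmax, 1.0 / lmax, ipr(vec))
```

The eigenvector weight decays geometrically along the chain, so the IPR stays of order 1 instead of 1/N: a miniature version of the localization that Fig. 5c,d documents for HMNs.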
with only a few non-vanishing components; this phenomenon closely resembles Anderson localization in physics44). In such a case, linear instabilities for λ slightly larger than 1/Λ_max lead to localized activity around only a few nodes (where the corresponding eigenvector is localized), not pervading the network and not leading to a true active state. This implies that the critical point is shifted to a larger value of λ. Instead, these regions of localized activity resemble very much rare-regions in GPs: activity lingers around them until eventually a large fluctuation kills them.

Inspired by this novel idea, we performed a spectral analysis of our HMNs (for example, HMN-2 nets with α = 1, see Fig. 5), with the result that—for finite-D networks—not only the largest eigenvalue Λ_max corresponds to a localized eigenvector, but a whole range of eigenvalues below Λ_max (even hundreds of them) share this feature, as can be quantitatively confirmed (see Fig. 5c,d and Methods). In particular, the principal eigenvector is heavily peaked around a cluster of neighbouring nodes. We have conjectured and verified numerically that the clusters where the largest eigenvalues are localized correspond to the rare-regions, with above-average connectivity and where localized activity lingers for a long time. Also, we numerically found λ_c ≈ 0.41 > 1/Λ_max ≈ 0.33, confirming the prediction above. The interval between these two values defines the GP (see Methods).

In addition, we also considered large network ensembles and computed the probability distribution of eigenvalues. We found that the distribution of the eigenvalues corresponding to localized eigenvectors results in an exponential tail of the continuum spectrum, where an infinite-dimensional graph would exhibit a spectral gap instead (see Methods). This translates into an exponential tail of the cumulative distribution f(Λ) of Laplacian eigenvalues (or integrated density of states) at the lower spectral edge, a so-called Lifshitz tail—which in equilibrium systems is related to the Griffiths singularity45. We have found Lifshitz tails with their characteristic form

f(Λ) ∼ e^{−1/(Λ_max − Λ)^a}   (2)

(where a is a real parameter, see Fig. 5e). Interestingly, Lifshitz tails are also rigorously predicted on Erdös–Rényi networks below the percolation threshold, where rare-region effects and GPs are an obvious consequence of the network disconnectedness46. Therefore, the presence of both (i) localized eigenvectors and (ii) Lifshitz tails confirms the existence of GPs in networks with complex heterogeneous architectures.

The fingerprints of extended criticality in the human connectome are a result of its hierarchical network structure and the localization properties that characterize it. Figure 6 supports this view, highlighting the localization of the principal eigenvector. In particular, we show that the principal eigenvector of the full adjacency matrix and that of the unweighted (that is,
Given that disorder is an intrinsic and unavoidable feature of neural systems and that neural-network architectures are hierarchical, GPs are expected to have a relevant role in many dynamical aspects and, hence, they should become a relevant concept in Neuroscience, as well as in other fields such as systems biology, where HMNs have a key role54. We hope that our work contributes to this purpose, fostering further research.

Methods
Synthetic hierarchical networks. HMN-1: At each hierarchical level i = 1, 2, ..., s, pairs of blocks are selected, each block of size 2^{i−1} M0. All possible 4^{i−1} M0^2 undirected eventual connections between the two blocks are evaluated, and established with probability αp^i, avoiding repetitions. With our choice of parameters, we stay away from regions of the parameter space for which the average number of connections between blocks n_i is less than one, as this would lead inevitably to disconnected networks (as rare-region effects would be a trivial consequence of disconnectedness, we work exclusively on connected networks, that is, networks with no isolated components). As links are established stochastically, there is always a chance that, after spanning all possible connections between two blocks, no link is actually assigned. In such a case, the process is repeated until, at the end of it, at least one link is established. This procedure enforces the connectedness of the network and its hierarchical structure, introducing a cutoff for the minimum number of block-block connections at 1. Observe also that for M0 = 2 and p = 1/4, α is the target average number of block-block connections and 1 + α the target average degree. However, by applying the above procedure to

Measures of response. The standard method to estimate responses, consisting in measuring the variance of activity in the steady state, would not provide a measure of susceptibility in the Griffiths region, where the steady state is trivial (quiescent). We define the dynamic susceptibility as S(λ) = N[ρ_f(λ) − ρ_s(λ)], where ρ_f(λ) is the steady state reached when a single node is constrained to be active throughout the simulation and ρ_s(λ) the steady state for protocol (i), that is, in the absence of constraints. In the inactive state, ρ_s(λ) = 0 while ρ_f(λ) is finite but small (of the order of 1/N), as the active node continuously fosters activity in its surroundings. In the active state, ρ_s(λ) and ρ_f(λ) are both large and again differ by a small amount (given by the fixed node and its induced activity) that vanishes for larger system sizes. Only in the parameter region where the response of the system is high does the little perturbation introduced by the constrained node produce a diverging response. This is found to occur throughout the Griffiths region (see Fig. 3, main text), confirming the claim of an anomalous response over an extended range of the parameter λ.

An alternative measure of response is provided by the dynamic range Δ, introduced by Kinouchi and Copelli21. We determine Δ(λ) for various values of λ in the Griffiths and active phases, as follows: (i) a seed node is chosen and initially activated, but not constrained to be active; (ii) the dynamical model (A, B, ...) is run; (iii) if the dynamics selects the seed node and it is found inactive, it is reactivated with probability p_stimulus; (iv) the steady-state density ρ is recorded (due to the intermittent reactivation, a steady state depending on p_stimulus is always reached, unless p_stimulus is infinitesimal); (v) upon varying p_stimulus, the steady-state density ρ varies continuously within a finite window. We identify the values ρ_0.1 and ρ_0.9, corresponding to the 10 and 90% values within such window, and call p_0.1 and p_0.9 the values of p_stimulus leading to those values, respectively; and (vi) the dynamic range is calculated as Δ = 10 log10(p_0.9/p_0.1).
enforce connectedness, both the number of connections and the degree are Notice that in the active phase r0.1 reaches a finite steady-state at exponentially
eventually slightly larger than these expected, unconstrained, values. large times in the limit l-lþc. This makes the study of large systems very lengthy
For the HMN-2, the number of connections between blocks at every level is in that parameter region.
a priori set to a constant value a. Undirected connections are assigned choosing Extended regions of enhanced response are found also by running our simple
random pairs of nodes from the two blocks under examination, avoiding dynamic protocols on the connectome network. A way to visualize the broadening
repetitions. Choosing aZ1 ensures that the network is hierarchical and connected. of the critical region is presented in Supplementary Fig. S3, where the density of
This method is also stochastic in assigning connections, although the number of active sites given a fixed active seed rf is plotted as a function of l. The critical
them (as well as the degree of the network) is fixed deterministically. In both cases, region broadens if compared with the case of a regular (disorder-free) lattice of the
the resulting networks exhibit a degree distribution characterized by a fast same size.
exponential tail, as shown in Supplementary Fig. S1.
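As an illustration of the HMN-2 construction described above, the following sketch builds such a network level by level. It is our own minimal rendering, not the authors' code: the function name is hypothetical, and we additionally assume that the lowest-level modules of M0 nodes are fully connected, a detail not specified in the text.

```python
import random

def build_hmn2(s, M0=2, alpha=1, seed=None):
    """Sketch of an HMN-2-style network: N = 2**s * M0 nodes; at each level
    l = 1..s, sibling blocks of size 2**(l-1) * M0 are joined by exactly
    `alpha` randomly placed, non-repeated undirected links."""
    rng = random.Random(seed)
    N = 2**s * M0
    edges = set()
    # Base modules of M0 nodes, fully connected (an assumption of this sketch).
    for start in range(0, N, M0):
        for i in range(start, start + M0):
            for j in range(i + 1, start + M0):
                edges.add((i, j))
    # Hierarchical levels: exactly `alpha` links per pair of sibling blocks.
    for l in range(1, s + 1):
        size = 2**(l - 1) * M0
        for start in range(0, N, 2 * size):
            left = range(start, start + size)
            right = range(start + size, start + 2 * size)
            placed = 0
            while placed < alpha:
                e = (rng.choice(left), rng.choice(right))
                if e not in edges:  # avoid repetitions
                    edges.add(e)
                    placed += 1
    return N, edges
```

With M0 = 2 and α = 1, the resulting network is connected and hierarchical by construction, with exactly α cross-links between each pair of sibling blocks at every level.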
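Model B dynamics (detailed under Dynamical models below) admits a compact sequential-update sketch. This is our illustrative reading of the rule, with hypothetical names: the selected active node either simply deactivates (probability μ/(λ+μ)), or first attempts to activate each inactive neighbour with probability λ and then deactivates.

```python
import random

def model_b_step(active, neighbors, lam, mu, rng=random):
    """One sequential update of Model B (sketch): `active` is the set of
    active nodes, `neighbors` maps node -> list of neighbours."""
    node = rng.choice(sorted(active))      # pick an active node
    # With probability lam/(lam+mu), try to spread before deactivating ...
    if rng.random() >= mu / (lam + mu):
        for nb in neighbors[node]:
            if nb not in active and rng.random() < lam:
                active.add(nb)             # activate an inactive neighbour
    # ... and in either branch the chosen node ends up inactive.
    active.discard(node)
    return active
```

Iterating this step while the active set is non-empty generates the avalanches whose sizes are analysed in the main text.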
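Step (vi) of the dynamic-range protocol reduces to locating p_0.1 and p_0.9 on a measured response curve ρ(p_stimulus) and evaluating D = 10 log10(p_0.9/p_0.1). A minimal sketch (our own, assuming a monotonically non-decreasing response curve and using linear interpolation between measured points):

```python
import math

def dynamic_range(p, rho):
    """Dynamic range D = 10*log10(p09/p01) from a response curve rho(p):
    p01 (p09) is the stimulus giving 10% (90%) of the response window
    [min(rho), max(rho)], found by linear interpolation."""
    def stimulus_at(target):
        for k in range(len(p) - 1):
            if rho[k] <= target <= rho[k + 1]:  # assumes rho non-decreasing
                frac = (target - rho[k]) / (rho[k + 1] - rho[k])
                return p[k] + frac * (p[k + 1] - p[k])
        raise ValueError("target outside measured response window")
    lo, hi = min(rho), max(rho)
    p01 = stimulus_at(lo + 0.1 * (hi - lo))
    p09 = stimulus_at(lo + 0.9 * (hi - lo))
    return 10.0 * math.log10(p09 / p01)
```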
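The Kolmogorov–Smirnov estimator used below for the avalanche fits, D_KS = max |G(S) − F(S)|, can be evaluated directly from a sample of avalanche sizes. The helper below is a hypothetical sketch of this comparison, probing the empirical CDF just before and just after each of its jumps:

```python
def ks_distance(samples, model_cdf):
    """Kolmogorov-Smirnov distance between the empirical CDF of a sample
    of avalanche sizes and a candidate model CDF G(S) (sketch)."""
    xs = sorted(samples)
    n = len(xs)
    d = 0.0
    for i, s in enumerate(xs):
        G = model_cdf(s)
        # Empirical CDF just after ((i+1)/n) and just before (i/n) the jump at s.
        d = max(d, abs(G - (i + 1) / n), abs(G - i / n))
    return d
```

Comparing this distance across candidate fitting functions (truncated power law, pure power law, exponential) selects the hypothesis with the smallest D_KS.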
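The inverse participation ratio used in the spectral analysis below, IPR(Λ) = Σ_i f_i^4(Λ), is straightforward to compute for the principal eigenvector; a sketch with numpy (function name ours):

```python
import numpy as np

def ipr(A):
    """Inverse participation ratio IPR = sum_i phi_i**4 of the principal
    eigenvector phi of a symmetric adjacency matrix A. Values of order 1
    signal a localized eigenvector; values of order 1/N, a delocalized one."""
    vals, vecs = np.linalg.eigh(A)        # eigh: for real symmetric matrices
    phi = vecs[:, np.argmax(vals)]        # principal (Perron) eigenvector
    phi = phi / np.linalg.norm(phi)       # ensure normalization
    return float(np.sum(phi**4))
```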
Empirical brain networks. Data for the adjacency matrix of C. elegans are publicly available (see, for example, Kaiser32). Different analyses have confirmed that this network has a hierarchical-modular structure (see, for example, refs 33,39,40,55,56). In particular, in ref. 56 a new measure is defined to quantify the degree of hierarchy in complex networks; the C. elegans neural network is found to be six times more hierarchical than the average of similar randomized networks. Connectivity data for the human connectome network have recently been obtained experimentally29,30. In this case too, the network is hierarchical22,33,42. Supplementary Fig. S2 shows a graphical representation of its adjacency matrix, highlighting its hierarchical organization.

Dynamical models. In both cases (Model A and Model B), neurons are identified with nodes of the network and are endowed with a binary state variable s = 0, 1. The state of the system is sequentially updated as follows, keeping a list of active sites. Model A: at each step, an active node is selected; it becomes inactive (s = 0) with probability μ/(λ + μ), while with complementary probability λ/(λ + μ) it activates one randomly chosen nearest neighbour, provided that neighbour was inactive. Model B: at each step, an active node is selected; it becomes inactive (s = 0) with probability μ/(λ + μ), while with complementary probability λ/(λ + μ) it checks all of its nearest neighbours, activating each inactive one with probability 0 < λ < 1, and then deactivates. Models A and B have well-known counterparts in computational epidemiology, where they correspond to the contact process and the susceptible-infected-susceptible model, respectively (see, for instance, ref. 57). The value of similarly minimalistic dynamic rules in Neuroscience has been demonstrated before, for example, in ref. 58. Results for real networks are obtained by running Model B dynamics on the unweighted version of the network. We have verified that the introduction of weights does not alter the qualitative picture obtained.

Dynamical protocols. We employ four different dynamical protocols. (i) Decay of a homogeneous state: all nodes are initially active and the system is left to evolve, monitoring the density of active sites ρ(t) as a function of time. (ii) Spreading from a localized initial seed: an individual node is activated in an otherwise quiescent network. It produces an avalanche of activity, lasting until the system eventually falls back to the quiescent state; the survival probability P_s(t) is measured. The avalanche size S is defined as the number of activation events occurring over the duration of the avalanche itself. The process is iterated and the avalanche-size distribution P(S) is monitored. (iii) Identical to (ii), except that the seed is kept active throughout the simulation (continuous stimulus). (iv) Identical to (ii), except that the seed node is subsequently reactivated with probability p_stimulus = 1 − exp(−rΔt) for t > 0 (Poissonian stimulus of rate r).

Finite-size scaling. In the standard critical-point scenario (assuming the system sits exactly at the critical point but runs on a finite system of linear size L), the average density (order parameter) starting from an initially active configuration decays as t^(−θ) exp(−t/τ(L)), where the cutoff time scales with system size as τ(L) ∝ L^z (z being the dynamic critical exponent), allowing us to perform collapse plots for different system sizes. In GPs, instead, the cutoff time does not have an algebraic dependence on L: the size of the largest rare cluster is cut off by L^d, and the corresponding escape/decay time from it grows as τ(L) ∝ exp(cL^d). Therefore, even for relatively small system sizes, such a cutoff is not observable in (reasonable) computer simulations: order-parameter-decay plots should exhibit power-law asymptotes without any apparent cutoff. On the other hand, the power-law exponent, which can be estimated from a saddle-point approximation dominated by the largest rare region, is severely affected by finite-size effects. Summing up, whereas at standard critical points finite-size effects preserve the critical exponent but visibly affect the exponential cutoffs, in GPs apparent critical exponents are affected by finite-size corrections while exponential cutoffs are not observable. Unless otherwise specified, simulations on HMNs are for systems of size N = 2^14 = 16,384. Supplementary Fig. S4 shows that, upon increasing the system size (and the number of hierarchical levels accordingly), the picture of generic power-law decay of activity remains valid in the whole GP. One can observe that for each value of λ the effective exponent tends to an asymptotic value, which is expected to hold in the large-network-size limit.

Avalanches in the human connectome. Unlike avalanches on synthetic HMNs, which are typically run on many (10^7–10^8) network realizations, avalanche statistics on the human connectome network are the result of a large number of avalanches (>10^9) on the unique network available. This limitation explains the emergence of strong cutoffs in the avalanche-size distributions (see Fig. 4). In the main text, we propose to fit avalanche-size distributions with truncated power laws, reflecting the superposition of generic power-law behaviour and finite-size effects. In order to assess the validity of this hypothesis, we resort to the Kolmogorov–Smirnov method, by which the best fit is provided by the fitting function g(S) that minimizes the estimator

D_KS = max_S |G(S) − F(S)|,     (3)

where G(S) is the cumulative distribution associated with g(S) and F(S) is the cumulative distribution of the empirical data (simulation results, in our case).

When the amount of empirical data is limited, the use of diverse fitting techniques (least squares, maximum likelihood and so on) is advised, in order to avoid biases. However, given the abundance of data in our case, a non-linear least-squares fit provides a reliable estimate of the parameters (note that a truncated power law cannot be fitted linearly by standard methods). We recall that the least-squares method is essentially a minimization problem: given a set of empirical data points
(x_i, y_i), i = 1, …, n, and a fitting function g(x; β) depending on a set of parameters β, the fit is provided by the set of parameters that minimizes the function Σ_{i=1}^{n} [y_i − g(x_i; β)]^2. In the case of a linear regression, such minimization can be performed exactly. In the case of a non-linear fit, instead, the minimization has to be performed numerically.

For every value of λ (every curve in Fig. 4), we proceed as follows: (i) we perform a least-squares fit of the avalanche-size distribution based upon the truncated power-law hypothesis and calculate the corresponding Kolmogorov–Smirnov estimator D_KS; (ii) we repeat the above procedure for alternative candidate distributions (non-truncated power law and exponential); (iii) we compare the resulting D_KS indicators and choose the hypothesis with the smallest D_KS as the best fit. In Supplementary Fig. S5 we provide an example of this procedure, as obtained from our data for λ = 0.017. In this case, as in every other case examined, the truncated power law provides the best fit among the ones tested, both by the KS criterion and upon visual inspection. Notice that the pure power-law hypothesis also appears plausible to some extent, whereas the exponential hypothesis deviates significantly from the data; the outcome, however, depends on the choice of the minimum avalanche size S_min used in the fit. Special attention has been devoted to the choice of this lower bound, as advised in ref. 59. Such a choice is usually made by visual inspection for large systems, where it is easy to estimate visually the point at which power-law behaviour takes over. In small systems, instead, a more quantitative procedure is required. For every fit described above (points (i) and (ii)), we have chosen the best estimate of S_min by preliminarily applying a KS procedure to different candidate values of S_min (following ref. 59). We found that in each case the KS estimator displayed a minimum at values of S ≈ 6 for both the truncated power-law and the pure power-law hypotheses.

Spectral analysis. Let us call q_i(t) the probability that node i is active at time t. The density of active sites can be written as ρ(t) = ⟨q_i(t)⟩, averaged over the whole network. In the case of Model B dynamics, the probabilities q_i(t) obey the evolution equation

q̇_i(t) = −q_i(t) + λ[1 − q_i(t)] Σ_{j=1}^{N} A_ij q_j(t),     (4)

where A denotes the adjacency matrix and λ the spreading rate. Calling Λ a generic eigenvalue of A, its corresponding eigenvector f(Λ) obeys Af(Λ) = Λf(Λ). Working on undirected networks, all eigenvalues Λ are real and any state of the system can be decomposed as a linear combination of eigenvectors, as in

q_i = Σ_Λ c(Λ) f_i(Λ).     (5)

More importantly, if the network is connected (all our HMNs are), the maximum eigenvalue of A, Λ_max, is positive and unique (Perron–Frobenius theorem; see, for example, Gantmacher60). As a consequence, it is commonly assumed that the critical dynamics of equation (4) at λ = λ_c is dominated by the leading eigenvalue Λ_max and that, at the threshold λ_c,

q_i ≈ c(Λ_max) f_i(Λ_max).     (6)

One can then impose the steady-state condition q̇_i(t) = 0 and, under the fundamental assumption of equation (6), derive the well-known result44

λ_c = 1/Λ_max.     (7)

This result relies on the implicit assumption that the principal eigenvalue is significantly larger than the next one. The existence of such a 'spectral gap', separating Λ_max from the continuum spectrum of A, is a quite common feature of complex networks, being a measure of their small-world property. However, the picture of cortical networks as hierarchical structures distributed across several levels suggests that such systems may exhibit very different properties. We prove this in the following and show how the above picture changes in HMNs.

Figure 5a shows the average eigenvalue spectrum of the adjacency matrix A for HMNs. A detailed analysis of the peak structure is beyond the scope of this work. Notice the absence of isolated eigenvalues at the upper spectral edge (see also Fig. 5b): the principal eigenvalue Λ_max is not clearly separated from the others. The spectral gap characterizing small-world networks is here replaced by an exponential tail of eigenvalues. All such eigenvalues share a common feature: their corresponding eigenvectors are localized, as shown in Fig. 5c. All components are close to zero, except for a few of them in each network, corresponding to a rare region of adjacent nodes. We claim that such rare regions are responsible for the emergence of the GP over a finite range of the spreading rate λ.

Localization in networks can be measured through the inverse participation ratio, defined as

IPR(Λ) = Σ_{i=1}^{N} f_i^4(Λ).     (8)

If eigenvectors are correctly normalized, IPR(Λ) is a finite constant of order O(1) if Λ is localized, while IPR(Λ) ∼ 1/N otherwise. This localization estimator is usually calculated for the principal eigenvalue Λ_max of a network. Indeed, Fig. 5d shows that the IPR is finite and insensitive to changes in system size. On the other hand, upon increasing the density of inter-module connections, the IPR rapidly decreases, suggesting that small-world effects enhance delocalization.

Having introduced a criterion to identify localized eigenvectors, we found that a whole range of eigenvalues below Λ_max corresponds to localized eigenvectors. The structure of equation (4) and its solutions suggests that, although a spreading rate of the order of λ = 1/Λ_max allows the system to access the localized behaviour dictated by the eigenvector f(Λ_max), larger values of λ grant access to the next eigenvalues and eigenvectors, which are less and less localized. This ultimately establishes a strong connection between eigenvalue localization and rare-region effects.

References
1. Nykter, M. et al. Gene expression dynamics in the macrophage exhibit criticality. Proc. Natl Acad. Sci. USA 105, 1897–1900 (2008).
2. Furusawa, C. & Kaneko, K. Adaptation to optimal cell growth through self-organized criticality. Phys. Rev. Lett. 108, 208103 (2012).
3. Chen, X., Dong, X., Be'er, A., Swinney, H. & Zhang, H. Scale-invariant correlations in dynamic bacterial clusters. Phys. Rev. Lett. 108, 148101 (2012).
4. Bialek, W. et al. Statistical mechanics for natural flocks of birds. Proc. Natl Acad. Sci. USA 109, 4786–4791 (2012).
5. Kitzbichler, M. G., Smith, M. L., Christensen, S. R. & Bullmore, E. Broadband criticality of human brain network synchronization. PLoS Comput. Biol. 5, e1000314 (2009).
6. Beggs, J. M. & Plenz, D. Neuronal avalanches in neocortical circuits. J. Neurosci. 23, 11167–11177 (2003).
7. Plenz, D. & Thiagarajan, T. C. The organizing principles of neuronal avalanches: cell assemblies in the cortex? Trends Neurosci. 30, 101–110 (2007).
8. Beggs, J. M. The criticality hypothesis: how local cortical networks might optimize information processing. Phil. Trans. R. Soc. A 366, 329–343 (2008).
9. Petermann, T. et al. Spontaneous cortical activity in awake monkeys composed of neuronal avalanches. Proc. Natl Acad. Sci. USA 106, 15921–15926 (2009).
10. Haimovici, A., Tagliazucchi, E., Balenzuela, P. & Chialvo, D. R. Brain organization into resting state networks emerges at criticality on a model of the human connectome. Phys. Rev. Lett. 110, 178101 (2013).
11. Chialvo, D. R. Emergent complex neural dynamics. Nat. Phys. 6, 744–750 (2010).
12. Bak, P. How Nature Works: The Science of Self-Organized Criticality 1st edn (Copernicus Springer, 1996).
13. Jensen, H. J. Self-Organized Criticality (Cambridge University Press, 1998).
14. Dickman, R., Muñoz, M. A., Vespignani, A. & Zapperi, S. Paths to self-organized criticality. Braz. J. Phys. 30–45 (2000).
15. Mora, T. & Bialek, W. Are biological systems poised at criticality? J. Stat. Phys. 144, 268–302 (2011).
16. Langton, C. G. Computation at the edge of chaos: phase transitions and emergent computation. Phys. D 42, 12–37 (1990).
17. Bertschinger, N. & Natschlager, T. Real-time computation at the edge of chaos in recurrent neural networks. Neural Comput. 16, 1413–1436 (2004).
18. Haldeman, C. & Beggs, J. M. Critical branching captures activity in living neural networks and maximizes the number of metastable states. Phys. Rev. Lett. 94, 058101 (2005).
19. Legenstein, R. & Maass, W. Edge of chaos and prediction of computational performance for neural circuit models. Neural Networks 20, 323–334 (2007).
20. Beggs, J. M. & Plenz, D. Neuronal avalanches are diverse and precise activity patterns stable for many hours in cortical slice cultures. J. Neurosci. 24, 5216–5229 (2004).
21. Kinouchi, O. & Copelli, M. Optimal dynamical range of excitable networks at criticality. Nat. Phys. 2, 348–351 (2006).
22. Tagliazucchi, E., Balenzuela, P., Fraiman, D. & Chialvo, D. R. Criticality in large-scale brain fMRI dynamics unveiled by a novel point process analysis. Front. Phys. 3, 15 (2012).
23. Muñoz, M. A., Juhász, R., Castellano, C. & Ódor, G. Griffiths phases on complex networks. Phys. Rev. Lett. 105, 128701 (2010).
24. Rubinov, M., Sporns, O., Thivierge, J. P. & Breakspear, M. Neurobiologically realistic determinants of self-organized criticality in networks of spiking neurons. PLoS Comput. Biol. 7, e1002038 (2011).
25. Wang, S. J. & Zhou, C. Hierarchical modular structure enhances the robustness of self-organized criticality in neural networks. New J. Phys. 14, 023005 (2012).
26. Griffiths, R. B. Nonanalytic behavior above the critical point in a random Ising ferromagnet. Phys. Rev. Lett. 23, 17–19 (1969).
27. Noest, A. J. New universality for spatially disordered cellular automata and directed percolation. Phys. Rev. Lett. 57, 90–93 (1986).
28. Vojta, T. Rare region effects at classical, quantum and nonequilibrium phase transitions. J. Phys. A 39, R143–R205 (2006).
29. Hagmann, P. et al. Mapping the structural core of human cerebral cortex. PLoS Biol. 6, e159 (2008).
30. Honey, C. J. et al. Predicting human resting-state functional connectivity from structural connectivity. Proc. Natl Acad. Sci. USA 106, 2035–2040 (2009).
31. Sporns, O. Networks of the Brain (MIT Press, 2010).
32. Kaiser, M. A tutorial in connectome analysis: topological and spatial features of brain networks. NeuroImage 57, 892–907 (2011).
33. Meunier, D., Lambiotte, R. & Bullmore, E. T. Modular and hierarchically modular organization of brain networks. Front. Neurosci. 4, 200 (2010).
34. Sporns, O., Chialvo, D. R., Kaiser, M. & Hilgetag, C. C. Organization, development and function of complex brain networks. Trends Cogn. Sci. 8, 418–425 (2004).
35. Wang, S. J., Hilgetag, C. C. & Zhou, C. Sustained activity in hierarchical modular neural networks: self-organized criticality and oscillations. Front. Comput. Neurosci. 5, 30 (2011).
36. Albert, R. & Barabási, A. L. Statistical mechanics of complex networks. Rev. Mod. Phys. 74, 47–97 (2002).
37. Gallos, L. K., Makse, H. A. & Sigman, M. A small world of weak ties provides optimal global integration of self-similar modules in functional brain networks. Proc. Natl Acad. Sci. USA 109, 2825–2830 (2012).
38. Binney, J., Dowrick, N., Fisher, A. & Newman, M. The Theory of Critical Phenomena (Oxford University Press, 1993).
39. Chatterjee, N. & Sinha, S. Understanding the mind of a worm: hierarchical network structure underlying nervous system function in C. elegans. Prog. Brain Res. 168, 145–153 (2007).
40. Varshney, L. R., Chen, B. L., Paniagua, E., Hall, D. H. & Chklovskii, D. B. Structural properties of the C. elegans neuronal network. PLoS Comput. Biol. 7, e1001066 (2011).
41. Zhou, C., Zemanova, L., Zamora, G., Hilgetag, C. C. & Kurths, J. Hierarchical organization unveiled by functional connectivity in complex brain networks. Phys. Rev. Lett. 97, 238103 (2006).
42. Bassett, D. S. et al. Efficient physical embedding of topologically complex information processing networks in brains and computer circuits. PLoS Comput. Biol. 6, e1000748 (2010).
43. Chung, F. R. K. Spectral Graph Theory (CBMS Regional Conference Series in Mathematics, No. 92) (American Mathematical Society, 1996).
44. Goltsev, A. V., Dorogovtsev, S. N., Oliveira, J. G. & Mendes, J. F. F. Localization and spreading of diseases in complex networks. Phys. Rev. Lett. 109, 128702 (2012).
45. Nieuwenhuizen, T. M. Griffiths singularities in two-dimensional random-bond Ising models: relation with Lifshitz band tails. Phys. Rev. Lett. 63, 1760–1763 (1989).
46. Khorunzhiy, O., Kirsch, W. & Müller, P. Lifshits tails for spectra of Erdős–Rényi random graphs. Ann. Appl. Probab. 16, 295–309 (2006).
47. Kaiser, M., Görner, M. & Hilgetag, C. Criticality of spreading dynamics in hierarchical cluster networks without inhibition. New J. Phys. 9, 110 (2007).
48. Müller-Linow, M., Hilgetag, C. C. & Hütt, M. T. Organization of excitable dynamics in hierarchical biological networks. PLoS Comput. Biol. 4, 15 (2008).
49. Levina, A., Herrmann, J. M. & Geisel, T. Dynamical synapses causing self-organized criticality in neural networks. Nat. Phys. 3, 857–860 (2007).
50. Millman, D., Mihalas, S., Kirkwood, A. & Niebur, E. Self-organized criticality occurs in non-conservative neuronal networks during 'up' states. Nat. Phys. 6, 801–805 (2010).
51. Bonachela, J. A., de Franciscis, S., Torres, J. J. & Muñoz, M. A. Self-organization without conservation: are neuronal avalanches generically critical? J. Stat. Mech. P02015 (2010).
52. Johnson, S., Marro, J. & Torres, J. J. Robust short-term memory without synaptic learning. PLoS One 8, e50276 (2013).
53. Wixted, J. T. & Ebbesen, E. B. Genuine power curves in forgetting: a quantitative analysis of individual subject forgetting functions. Mem. Cogn. 25, 731–739 (1997).
54. Treviño III, S., Sun, Y., Cooper, T. F. & Bassler, K. Robust detection of hierarchical communities from Escherichia coli gene expression data. PLoS Comput. Biol. 8, e1002391 (2012).
55. Reese, T. M., Brzoska, A., Yott, D. T. & Kelleher, D. J. Analyzing self-similar and fractal properties of the C. elegans neural network. PLoS One 7, e40483 (2012).
56. Mones, E., Vicsek, L. & Vicsek, T. Hierarchy measure for complex networks. PLoS One 7, e33799 (2012).
57. Pastor-Satorras, R. & Vespignani, A. Epidemic spreading in scale-free networks. Phys. Rev. Lett. 86, 3200–3203 (2001).
58. Grinstein, G. & Linsker, R. Synchronous neural activity in scale-free network models versus random network models. Proc. Natl Acad. Sci. USA 102, 9948–9953 (2005).
59. Clauset, A., Shalizi, C. R. & Newman, M. E. J. Power-law distributions in empirical data. SIAM Rev. 51, 661–703 (2009).
60. Gantmacher, F. The Theory of Matrices Vol. 2 (AMS Chelsea, 2000).

Acknowledgements
We acknowledge financial support from Junta de Andalucía, Grant P09-FQM-4682. We thank Olaf Sporns for kindly giving us access to the human connectome data.

Author contributions
P.M. and M.A.M. designed the analyses, discussed the results and wrote the manuscript. P.M. wrote the codes and performed the simulations.

Additional information
Supplementary Information accompanies this paper at http://www.nature.com/naturecommunications

Competing financial interests: The authors declare no competing financial interests.

Reprints and permission information is available online at http://npg.nature.com/reprintsandpermissions/

How to cite this article: Moretti, P. & Muñoz, M. A. Griffiths phases and the stretching of criticality in brain networks. Nat. Commun. 4:2521 doi: 10.1038/ncomms3521 (2013).