Null Models in Network Neuroscience: František Váša and Bratislav Mišić
Abstract | Recent advances in imaging and tracing technology provide increasingly detailed
reconstructions of brain connectomes. Concomitant analytic advances enable rigorous
identification and quantification of functionally important features of brain network architecture.
Null models are a flexible tool to statistically benchmark the presence or magnitude of features
of interest, by selectively preserving specific architectural properties of brain networks while
systematically randomizing others. Here we describe the logic, implementation and interpretation
of null models of connectomes. We introduce randomization and generative approaches to
constructing null networks, and outline a taxonomy of network methods for statistical inference.
We highlight the spectrum of null models — from liberal models that control few network
properties, to conservative models that recapitulate multiple properties of empirical networks —
that allow us to operationalize and test detailed hypotheses about the structure and function
of brain networks. We review emerging scenarios for the application of null models in network
neuroscience, including for spatially embedded networks, annotated networks and correlation-
derived networks. Finally, we consider the limits of null models, as well as outstanding questions
for the field.
1Institute of Psychiatry, Psychology and Neuroscience, King's College London, London, UK.
2Montréal Neurological Institute, McGill University, Montréal, Québec, Canada.
✉e-mail: [email protected]
https://doi.org/10.1038/s41583-022-00601-9

Connectomics: The study of wiring patterns in neural systems.
Graph: A mathematical description of a network, capturing pairwise relationships (edges) among elements (nodes).
Degree distribution: The probability distribution of degrees (the number of connections of a node with other nodes) of all nodes in a network.
Hub: A node with many connections.

The connectomics revolution has shifted focus to how the brain functions as a networked and integrated system1,2. Understanding the wiring principles of the brain is now the primary goal of multiple institutes3, journals4 and funding initiatives5,6. Central to this pursuit is the graph model of brain structure and function, in which neural elements (such as neurons, neuronal populations or grey matter areas) are represented as nodes, and connections or interactions among them are represented as edges. Encoding neural systems as graphs enables us to quantify and articulate architectural features that are important for brain function7.

Advances in imaging technology8, analytics7,9 and data sharing10 provide the opportunity to describe the organization of the nervous system with unprecedented detail and depth. Over the past 15 years, convergent findings from multiple species, reconstruction techniques and spatial scales point to a core set of reproducible network features11. These include a specificity of connection profiles12,13, a heavy-tailed degree distribution with a small set of disproportionately well-connected hub nodes14,15, and densely interconnected network modules16,17. Collectively, these network features are thought to promote a balance between segregation and integration of function18.

How do we demonstrate that a brain network feature is more prominent than would be expected by chance? Is this network feature fundamental, or does it arise as a by-product of other features? Can the magnitude of this feature be attributed to a particular generative mechanism? Modern scientific discovery for complex systems increasingly relies on sophisticated methods of statistical inference19. Theories about important and unimportant features are operationalized as null models — alternative realizations of brain networks that possess some features but not others. Systematic comparisons between real and null networks enable researchers to disentangle dependencies between features of interest, revealing to what extent the presence or magnitude of a feature of interest arises as a consequence of other features. New discoveries are then incorporated into theories, prompting modification of existing null models. The process of refining theories by constructing progressively more stringent null models allows us to organically develop a nuanced theoretical understanding of features that are statistically unexpected and, potentially, functionally important in brain networks.

In this Review, we lay out the logic behind null models in network neuroscience. We begin by describing the process of implementing null hypotheses about network features. We then develop a taxonomy of network methods for statistical inference and frame the discussion from a user's perspective. We emphasize null modelling as a process of sampling a wider space of possible networks and, ultimately, a tool for benchmarking the statistical unexpectedness of specific features of interest. Finally, we consider how this flexible framework can be applied to non-standard data types and to emerging questions in the field, such as correlation-based networks and annotated networks.
Modules: Groups of nodes densely connected with each other, but sparsely connected with the rest of the network.
Null models: Synthetic realizations of brain networks, used to benchmark whether observed network features are statistically unexpected.
Null hypotheses: The premises that observed relationships between network features are due to chance alone.
Density: The proportion of possible edges that exist in a network.
Degree sequence: List of degrees of all nodes in a graph.
Topology: Logical arrangement of nodes and edges in a network (used in this Review specifically to refer to network topology).
Network inference: Testing of hypotheses about properties or mechanisms that give rise to observed network features.

Null models for networks
Suppose that you have access to a new network, representing the brain of a previously unstudied species, or the brain of a well-studied species now reconstructed with unprecedented detail. Using methods from network science, you compute statistics about the network, such as its characteristic path length (the mean of shortest paths among all pairs of nodes) or its clustering coefficient (the proportion of a node's neighbours that are also connected with each other). You find that the network has a path length of 2.5 and a clustering coefficient of 0.3. Are these network features special and specific to your network? Are they unexpectedly large or small in magnitude? Could they have occurred just by chance in a randomly configured network of a similar size?

The challenge for network neuroscientists is to benchmark how statistically unexpected a network feature is. Many network features will depend on simpler features, such as the number of nodes (network size), the proportion of possible edges that exist (network density) and the number of edges incident on each node (degree sequence). Topological differences between organisms, populations or experimental conditions may be masked or overemphasized by trivial differences in these basic features. The main inferential process by which we determine that a feature of interest (such as path length or clustering) is due to the topology of our network, rather than to other features (such as density), is comparison with null models20,21.

Figure 1 outlines the process of graph null-hypothesis testing. A network feature (x; for example, path length or clustering coefficient) is first computed on the observed network (red). To assess the statistical significance of this feature, we generate a population of null networks (blue) that preserve some properties of the observed network (in this example, density) but systematically disrupt others (in this example, topology). We then compute the same network feature for each null network, generating a distribution of the feature (xp) under the null hypothesis that the feature is due to density, as opposed to topology. If the feature of interest x has significantly greater or smaller magnitude in the observed network than in the null networks, this constitutes evidence that feature x can be explained by the properties that were not preserved in the null networks. For example, if we find that the observed network consistently displays greater clustering than the null networks with randomized topology but preserved density, we would conclude that the clustering in the observed network is due to topology, rather than density. A p value is naturally estimated as the proportion of the distribution xp that is greater than or equal to x.

Ubiquity of null models
Null models are so fundamental to network inference that they are effectively subsumed into the very definitions of multiple network measures. For example, the small-world coefficient (σ), which measures the ratio of clustering (C) to path length (L)22, is computed by first normalizing each of these measures according to their average magnitude in a set of randomized networks (Cr, Lr)23:

σ = (C / Cr) / (L / Lr)
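The null-hypothesis testing workflow described above (compute a feature on the observed network, recompute it on a population of density-matched null networks, and take the proportion of null values at least as extreme) can be sketched in a few lines of Python. The sketch below uses networkx with a synthetic stand-in for the observed graph, not the data set from the figures:

```python
import networkx as nx
import numpy as np

# Synthetic stand-in for an observed brain network (high clustering by design).
observed = nx.watts_strogatz_graph(n=100, k=6, p=0.1, seed=0)
x_obs = nx.average_clustering(observed)

n, m = observed.number_of_nodes(), observed.number_of_edges()

# Null population: Erdos-Renyi graphs with the same size and density,
# randomizing topology while preserving the number of nodes and edges.
n_null = 200
x_null = np.array([
    nx.average_clustering(nx.gnm_random_graph(n, m, seed=i))
    for i in range(n_null)
])

# p value: proportion of the null distribution at least as large as x_obs.
p = np.mean(x_null >= x_obs)
print(f"observed clustering = {x_obs:.3f}, null mean = {x_null.mean():.3f}, p = {p:.3f}")
```

Tests for unexpectedly small values, or two-sided tests, follow the same pattern with the direction of the comparison changed.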
Fig. 1 | Generating null distributions for network features. Graph-theoretic analysis of an observed network (red) can be used to derive a desired feature x, such as path length, clustering or modularity. Red elements in the matrix represent the presence of a connection and white elements represent the absence of a connection. To determine whether the feature is statistically unexpected, we generate a population of null networks (blue) that preserve some desired properties of the observed network, representing null hypothesis H0 (such as density), but randomize others that represent hypothesis H1 (such as topology). Recalculating the same feature x for each null network allows us to construct a distribution of features xp under the null hypothesis that the magnitude of feature x is due to properties that were preserved, and not due to properties that were randomized. A p value is then estimated by computing the probability Pr that xp is more extreme than x. The figure was generated using an open diffusion MRI and functional MRI data set featuring 70 healthy participants122.
Wiring cost: The total physical length of all edges in the network.
Homophilic attachment: The tendency for pairs of nodes to be connected if they have similar characteristics, such as degree, connection profile or microscale attributes.
Transitive property: The dependence among the three edges in triangles of nodes when the edges are estimated by statistical association, such as correlations among time series.
Spatial autocorrelation: The tendency for spatially proximal brain regions to possess similar attributes.

pseudorandomly according to a small set of wiring rules that embody the null hypothesis. This construction procedure ends when the null network has the same size and density as the observed network. The specific wiring rules can take many forms. Edges can be placed entirely at random (Erdős–Rényi random graphs36,37), to minimize the total edge length (wiring cost) of the network38,39, or to maximize homophilic attachment among nodes with similar connectivity profiles40,41 or similar microscale attributes42. More extensive generative growth models may also embody some mechanism by which nodes are added to the network, such as preferential attachment, wherein new nodes are more likely to connect to existing nodes with greater degree43. Although generative models primarily aim at uncovering latent principles of organization and growth of empirical networks, they can also be used for hypothesis testing in ways similar to randomization-based null models. In other fields, such as time-series analysis, this distinction is sometimes discussed from the perspective of typical realizations that fit a model to the data (analogous to generative models), versus constrained realizations that are produced by matching one or more properties of the data (analogous to randomization models)44.

Generative models can also go beyond significance testing and be used for model identification more generally, by evaluating multiple competing hypotheses about the mechanisms that drive an observed network feature. For example, generative models have been used to test competing accounts of brain network formation: model A (edges are placed to minimize wiring cost only39), versus model B (edges are placed among brain regions with overlapping connection profiles40), versus model C (edges are placed among regions with similar gene-expression patterns42). Models are then compared by assessing how well each of the alternative models (embodying distinct generative mechanisms) recapitulates features of the observed network. This is conceptually similar to model identification in formulations of brain networks as dynamic systems, such as dynamic causal modelling, in which competing accounts of dynamic neural circuit interactions are tested to identify the best-fitting or most parsimonious model45,46, or when competing hypotheses are grouped into distinct families of mechanisms or families of models47.

Despite differences in implementation, randomization and generative models form a coherent set of analytical tools to formulate and refine hypotheses about network structure. With randomization models, we systematically randomize factors that we hypothesize are important for the network feature we are studying, and retain factors that are hypothesized to not be important. With generative models, we systematically add the minimum set of factors that we hypothesize are important. In this sense, the difference between randomization and generative models is analogous to the difference between non-parametric and parametric tests in statistics. With generative models, as in parametric tests, we explicitly define the data-generating process and assume that the null distribution can be captured by a small set of parameters. With randomization models, as in non-parametric tests, we do not make assumptions about the null distribution, and instead assemble a null distribution from the observed data using some randomization procedure. Over time, as randomization models gradually become more conservative, they can help to narrow down hypotheses and inform generative models.

So far in this Review, we have focused on the randomization of network topology through rewiring or generative modelling. However, null models can also be used to disentangle the interdependence of network properties at other steps in the network construction and analysis pipeline, a topic that we consider in greater detail in the sections on null models for annotated and correlation networks below. For example, use of the correlation coefficient to quantify relationships between pairs of neurophysiological time series during functional-network construction leads to an abundance of fully connected triplets of nodes (owing to the transitive property); in this scenario, a null model involving randomization of time series may be more realistic than randomizing network topology48. Similarly, when evaluating relationships between pairs of nodal brain maps, one of the maps can be directly randomized at the level of nodal features while preserving certain properties (such as spatial autocorrelation), without the need to rewire the underlying networks at the level of edges49.

A sampling space of networks
Given the numerous choices of null models available, how do we select the appropriate model as a frame of reference for our observed network? Figure 3 shows an observed network, and three different null models that preserve increasingly more of its features. The first null is a random graph, in which only density is preserved. The second null is a Maslov–Sneppen rewired null with preserved density and degree sequence. Notice that, compared with the random null, the rewired null has prominent horizontal and vertical streaks that correspond to similar streaks in the observed network, because nodes that were hubs in the observed network remain hubs in the rewired network. The third null, which preserves density, degree and the total wiring cost of the network, displays a similar organization to the observed network. In other words, as we increase the constraints imposed on the null model, we begin to generate more veridical representations of the original graph, yielding a more conservative null model. This leads to stricter tests associated with the identification of more fine-grained effects, which explain progressively smaller amounts of variance in the original data. This example illustrates two important properties of null models. First, null models exist on a spectrum, ranging from liberal to more conservative nulls, depending on the constraints imposed. Second, null models are a method of systematically sampling a larger space of potential networks, and situating the observed network in this space. We expand on these themes below.

Fig. 3 | A spectrum of null models. Null models can be adapted to test a range of hypotheses. Panels from left to right: preserve density; preserve density plus degree sequence; preserve density plus degree sequence plus wiring cost; empirical network. The network on the right is an observed empirical network derived from diffusion MRI. Increasing constraints imposed on the model from left to right yield graphs that retain more features from the empirical network, resulting in increasingly conservative null models. In the first null model, an Erdős–Rényi random graph, edges are placed at random such that the network has the same density as the empirical network36. In the second null model, using Maslov–Sneppen rewiring, random edge swapping results in a network with the same density and degree sequence as the empirical network31. In the third null model, edges are randomly swapped only if total wiring cost is preserved, resulting in a network with the same density, degree and cost as the empirical network32. Visually, the more conservative null models increasingly resemble the observed network because they embody more of its underlying properties. Figure generated using an open diffusion MRI and functional MRI data set featuring 70 healthy participants122.

Up to this point in this Review, we have implicitly discussed hard constraints; that is, features that are preserved exactly, such as the degree sequence of a network. Yet there exist myriad null models that operate under 'soft' constraints that are met only approximately. A straightforward example is how Maslov–Sneppen-like
Network morphospace: A space of network configurations that can be realized under a set of constraints.
Geometry: The embedding of nodes and edges in physical space.

rewiring is applied to weighted networks to approximately preserve the sum of the weights of edges incident on each node (strength sequence)26. This procedure involves a first rewiring step that preserves the degree sequence, followed by a second weight-swapping step, in which exact convergence is complicated by the fact that edge weights consist of continuous values that are not perfectly interchangeable. Although the desired constraints are not exactly met in each individual network, they are satisfied, on average, across the ensemble of null networks50.

More generally, the process of constructing null models can be thought of as sampling from a broader space of possible networks with related characteristics (Fig. 4). This multidimensional network morphospace is spanned by axes that represent specific network traits or features51. Each point or location in this space represents distinct network morphologies, some of which can be realized under desired constraints whereas others cannot. Proximity in this space reflects similarity between network morphologies. The number and stringency of null-model constraints determine the portion of this space that will be populated by null-model instantiations; for example, stringent models will be situated close to the empirical network, whereas increasingly lenient models will tend to be situated further away. Null models — whether realized by rewiring or generative mechanisms — are therefore methods to systematically sample from different parts of this network morphospace and generate null distributions for desired network features. Exploring this space enables us to quantify the contribution of different constraints and generative mechanisms to specific features of our observed network.

A corollary is that the size of the sampling space will influence the variability of null-model realizations. In general, models with additional constraints should theoretically yield realizations that are more similar to each other and hence cover a small portion of the null-model sampling space. Figure 4 demonstrates this concept: null models a, b and c sample increasingly larger spaces of networks, resulting in increasingly variable null distributions. An important methodological question is whether null models uniformly sample the target space. The mere fact that a model retains one feature and randomizes another does not mean that it samples the space of all possible realizations exhaustively. This is directly tied to how null models are implemented in practice, which we consider in the section below about the limits of null models.

Ultimately, there is no right or wrong null model. The null model should be an implementation of an explicit and falsifiable null hypothesis that is specific to one's research question. A variable can be the main independent variable in one study, or a covariate that needs to be controlled for in another. A salient example, which we discuss in detail in the next section, is the contribution of geometric embedding and wiring cost to brain network architecture. The wider space of nulls is a powerful tool to probe the observed network from multiple angles and to parse the contributions of different network properties to the feature of interest (Fig. 4). In this sense, using multiple nulls simultaneously to triangulate towards an answer may be the most comprehensive and informative way to analyse networks21.

Null models for spatial networks
Perhaps the most important feature to consider when analysing brain network topology is geometry52. The brain is a spatially embedded system with finite metabolic and material resources, resulting in a prevalence of short-range connections that presumably confer lower cost than do long-range connections53. Indeed, multiple imaging modalities and tracing techniques show that neural elements are more likely to be connected, and to display stronger connectivity weights, if they are physically closer together than if they are further apart41,54–58.
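This distance dependence is exactly why a naive topological null makes a poor spatial benchmark: degree-preserving swaps relocate edges to, on average, longer distances, inflating total wiring cost. The sketch below demonstrates the effect on synthetic coordinates; the decay constant and graph sizes are arbitrary choices for illustration:

```python
import itertools
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# Random node coordinates in a unit cube (a stand-in for brain regions).
n = 80
xyz = rng.random((n, 3))
dist = np.linalg.norm(xyz[:, None] - xyz[None, :], axis=-1)

# Distance-dependent network: connection probability decays with distance.
g = nx.Graph()
g.add_nodes_from(range(n))
for i, j in itertools.combinations(range(n), 2):
    if rng.random() < np.exp(-5 * dist[i, j]):
        g.add_edge(i, j)

def wiring_cost(graph):
    """Total physical length of all edges."""
    return sum(dist[i, j] for i, j in graph.edges())

# Naive degree-preserving (Maslov-Sneppen) rewiring, ignoring geometry.
null = g.copy()
nx.double_edge_swap(null, nswap=10 * null.number_of_edges(),
                    max_tries=1000 * null.number_of_edges(), seed=42)

print(f"observed cost: {wiring_cost(g):.1f}")
print(f"rewired cost:  {wiring_cost(null):.1f}")
```

The degree sequence is exactly preserved, yet the rewired cost is substantially larger, which is the mismatch that geometry-aware nulls (discussed next) are designed to correct.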
Fig. 4 | A sampling space of null models. For any empirical network, we can use multiple null models to systematically sample alternative network configurations. This example shows two network features (x and y) estimated for an empirical network (black star). Three distinct null models (a, b and c) are then used to construct null distributions for each feature (red, purple and blue probability densities, respectively). Situating the empirical network in this space enables systematic and detailed phenotyping of its features. This illustration is shown for synthetic data.

Therefore, connectivity and geometry are fundamentally intertwined, necessitating null models that can selectively tease apart their contributions.

The principal difficulty with using standard rewiring models is that naive swapping will, on average, place edges between nodes that are further apart, yielding null networks with unrealistically higher wiring costs than the observed network. One approach is to use iterative rewiring with additional constraints. Early studies proposed to 'latticize' the observed network; here, the spatial positions of network nodes are taken into account while swapping edges. Candidate edge swaps are implemented only if they place edges between spatially proximal neighbours, creating lattice-like networks22,33. Although latticization reduces the wiring length of the rewired network, more sophisticated versions of this algorithm create rewired networks that precisely match the edge length distribution59 and the weight–length relationship32,54 of the observed network, in addition to density and the degree sequence. Further alternative models have used so-called spatially repositioned nulls, in which the spatial locations of the nodes are permuted but the topology is preserved, enabling researchers to assess whether the observed network feature is driven by spatial relationships among nodes, rather than topology60,61.

Collectively, geometric nulls enable us to quantify the proportion of topology that comes passively from spatial embedding. For example, purely geometric models that minimize wiring cost can reproduce multiple hallmarks of brain networks, including the presence of hubs62, network cores39 and modules54,62. As a result, these nulls help to quantify the contribution of wiring cost to overall network architecture. More recently, generative models have begun to consider the joint influence of geometry, topology and local biological annotations of network nodes, such as gene expression or laminar differentiation42,64, a topic that we consider further in the next section.

Null models for annotated networks
The graph representation of the brain deliberately abstracts away microscale differences between regions, resulting in homogeneous nodes. However, network neuroscience is increasingly focused on the relationship between network architecture and regional annotations65, such as neuron morphology66, gene expression67,68, receptor profiles69, laminar differentiation70,71, myelination72 and intrinsic dynamics73. A typical comparison may involve correlating, across brain regions, a region's network attribute (such as degree) and its microscale annotation (such as the average number of dendritic spines). However, an important problem arises when estimating a p value for the correlation coefficient. Namely, the standard parametric method assumes that the two vectors come from an uncorrelated bivariate normal distribution, whereas the non-parametric (naive permutation-based) method assumes that the elements of the vectors are exchangeable. This independence assumption is violated by multiple forms of dependence between data points74–76. First, spatial autocorrelation of imaging data gives rise to similar values of anatomical and physiological measurements between neighbouring locations. Second, homotopic symmetry leads to similar measurements between corresponding locations within the left and right hemispheres of the brain. Last, the spatial resolution of analyses is arbitrary, leading to dependence of both the p value and the effect size on the number of regions, vertices or voxels considered. These limitations can be addressed using null models that control for spatial autocorrelation, including both spatial permutation tests and parameterized data models49,77.

Spatial permutation tests, also known as 'spin tests', randomize the relationship between network structure and annotations by randomly rotating spherical projections of brain annotation maps (Fig. 5). These models project brain regions or vertices to a sphere using spherical coordinates generated during cortical-surface extraction. Following random rotation of the sphere, a permutation is obtained by assigning each coordinate to its nearest rotated counterpart. By applying the same (mirrored) random rotation to both brain hemispheres, spatial permutation models also (approximately) preserve hemispheric symmetry. The end result is an annotated graph in which the network structure is preserved
Surrogate: Realization of a null model. Mainly used to refer to null time series, but frequently applied to networks as well.
Matrix: A table in which rows and columns correspond to network nodes, and every element encodes a relationship between two nodes, such as connectivity or physical distance.

and the spatial autocorrelation of the annotations is preserved, but the correspondence between network nodes and annotations is randomized (Fig. 5). This test was initially developed at the level of surface vertices74,75 and subsequently extended to the region level — that is, for parcellated data78–81. Notably, different implementations of spatial permutation models at the regional level diverge in specific methodological decisions, such as how to deal with the medial wall or whether they enable annotation values to be assigned more than once49.

By contrast, parameterized data models generate surrogate brain maps with predefined spatial properties, such as the same spatial autocorrelation as the empirical data set. Parameterized models do not rely on spherical permutation of data; instead, these models use a matrix of distances between brain-map locations to impose spatial autocorrelation — as described using a parsimonious set of parameters — on randomly permuted data. Several parameterized methods have been proposed, including spatial autoregressive models82, smoothing of randomized values to match the empirical variogram76 and spatial eigendecomposition using Moran spectral randomization83. For a systematic comparison of spatial permutation and parameterized data models, see ref.49.

Similar to geometric models, spatial autocorrelation-preserving null models enable us to ask to what extent network features occur above and beyond the background influence of spatial embedding. As a result, these models are quickly becoming ubiquitous and can be applied to a wide range of analytical questions. One such application is to assess whether a particular node attribute x (for example, degree) is enriched in a particular class of nodes (such as in an intrinsic network or cytoarchitectonic class)68,84,85. Here the annotations (classes) are categorical variables, and the null model can be used to quantify how unexpectedly large the attribute x is while controlling for the size and spatial extent of the class. An alternative application is to assess whether nodes with similar annotations display greater than expected connectivity69,73,86. In this type of analysis, the mean connectivity among nodes with a particular annotation is compared against a null distribution of connectivity constructed by rotating annotations. More broadly, these methods have been extended to address the effect of spatial autocorrelation in diverse biological questions, such as within-participant correspondence of neuroimaging modalities87 or gene-set enrichment76,88,89.

Null models for correlation networks
Connectivity is often estimated using measures of statistical covariation between regional attributes, such as correlations among measurements of neural activity, gene expression or cortical thickness. Examples of correlation-derived networks include functional networks90, structural covariance networks91,92, morphometric similarity networks93, gene co-expression networks94, neuroreceptor similarity networks69 and temporal profile similarity networks73. These networks are weighted (and generally signed) by construction26, suggesting that null models that operate at the level of binary topology, such as rewiring models, might be inappropriate. In particular, networks constructed by correlation obey the transitive property: if we know the value of edges A–B and A–C, we can place limits on the value of edge B–C. Rewiring may swap edges in such a way that does not preserve this transitive property — for example, with strong positive correlations between A and B and between A and C, but weak or negative correlation between B and C. Thus, topological rewiring of edges in such networks may result in null networks that could not have arisen naturally as a correlation network48. In other words, correlation-derived networks necessitate alternative null models that take into account the transitive property and the sign of edge weights.

Various methods can be used to create more realistic null models for correlation-derived brain networks. One option is to create null correlation matrices, using methods such as the Hirschberger–Qi–Steuer algorithm, which matches the mean and variance of the empirical matrix48,95,96, or a configuration model that preserves empirical node strength97. A more stringent approach is to randomize the signal itself. For example, in the case of functional connectivity, surrogate time series can be generated by transforming empirical time series to the
Fig. 5 | Spatial permutation of network annotations. When evaluating correspondence between network architecture and local node annotations (such as molecular or cellular attributes), a spatial null model can be used to permute node annotations while preserving their autocorrelation structure49. Spatial permutation can be implemented using a 'spin test', whereby the annotation map for each hemisphere is projected onto a sphere, randomly rotated and projected back onto cortex. Such permutations approximately preserve spatial autocorrelation and hemispheric symmetry of empirical values or annotations, but systematically randomize correspondence between network structure and annotations. This illustration is generated using glycolytic indices123 mapped to network nodes defined in ref.124 as annotation data (shades of red and blue). Edges correspond to an anatomical connectome from ref.125 (connectivity data are available at ref.126). Node sizes are proportional to weighted node connectivity strength, calculated from the connectivity data. Data are shown for left hemisphere only.
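The enrichment logic described in the text can be sketched as a generic permutation test on synthetic data: the empirical mean of a node attribute within a class is compared against a null distribution obtained by permuting class labels. This is a minimal illustration with a hypothetical function name and toy data of our own; plain label permutation is only a stand-in, and real cortical maps require permutations that preserve spatial autocorrelation, such as the spin test.

```python
import numpy as np

def permutation_enrichment(attr, labels, target, n_perm=10_000, seed=0):
    """One-sided permutation p-value for the mean of `attr` in class `target`.

    Plain label permutation stands in for a spatial ('spin') permutation;
    empirical brain maps need nulls that preserve spatial autocorrelation.
    """
    rng = np.random.default_rng(seed)
    empirical = attr[labels == target].mean()
    null = np.array([attr[rng.permutation(labels) == target].mean()
                     for _ in range(n_perm)])
    # +1 correction: the empirical value counts as one realization
    p = (1 + np.sum(null >= empirical)) / (n_perm + 1)
    return empirical, null, p

# toy data: a node attribute clearly elevated in class 1
rng = np.random.default_rng(1)
attr = np.concatenate([rng.normal(1.0, 0.5, 20), rng.normal(0.0, 0.5, 80)])
labels = np.array([1] * 20 + [0] * 80)
emp, null, p = permutation_enrichment(attr, labels, target=1)
```

The same skeleton applies to the connectivity variant of the analysis: replace the class mean of the attribute with the mean connectivity among same-class nodes, and recompute that statistic under each permutation.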
Phase coefficients
The offsets or temporal dependencies among sinusoidal components of a time series following a Fourier transform to the frequency domain.

frequency domain using the Fourier transform, shuffling the phase coefficients and taking the inverse transform to the time domain. The resulting surrogate time series have preserved power spectra but randomized temporal dependencies48,98. This can also be accomplished using the wavelet transform, a procedure known as 'wave-strapping'99. Finally, emerging methods from graph signal processing can generalize classical signal operations, such as the Fourier and wavelet transforms, to networks100. These methods generate surrogate signals that preserve the smoothness of the observed network101.

In addition to randomization, surrogate time series can also be created using generative approaches. For instance, autoregressive models can generate surrogate time series that preserve the temporal (auto)correlation, power spectral density, cross-power spectral density and amplitude distribution of empirical data102–104. Although these generative models exclusively preserve temporal features, more recent hybrid models also preserve spatial features. These models generate surrogate time series that preserve spatial autocorrelation105, or time series that preserve both spatial and temporal autocorrelation106. Beyond human imaging data, sophisticated null models for multi-neuron firing-rate recordings can simultaneously preserve the covariance across time, neurons and experimental conditions107,108.

In the case of networks of covarying anatomical attributes, the anatomical measures can simply be randomized across regions (within participants) because — unlike time series — data points from different participants are independent96. In this sense, resampling with replacement (bootstrapping) can also be applied to correlation-based networks to assess the reliability of network features rather than to test a null hypothesis per se (Box 1). Note that networks estimated using alternative measures of covariation, such as partial correlation, may be less susceptible to interdependencies among edge values induced by the transitive property48,109–112. However, such networks should also be evaluated using null models that take into account the fact that their edges represent statistical associations between node attributes.

Collectively, these methods showcase an important point: that some types of networks can be randomized at different levels of the construction and analysis pipeline (Fig. 6). Whereas null models for structural networks and annotated networks tend to operate on the edges or nodes of the networks themselves, many null models for correlation networks can additionally operate at earlier steps in network construction, such as the correlation of physiological time series or anatomical feature vectors. Application of a null model at an early stage of network construction can still be used to generate randomized instances of derived statistics, by applying the analysis workflow to the randomized input data. For example, randomized surrogate time series can be used to
construct a correlation matrix, a binary network and a map of regional node degree, each randomized relative to their empirical counterparts (Fig. 6). Alternatively, the empirical correlation matrix (or derived binary topology or node attribute) could be randomized directly (Fig. 6). Crucially, these different null models can be evaluated simultaneously, to benchmark empirical features of interest more comprehensively.

Limits of null models
Although null models are the backbone of the scientific method in network neuroscience, they are subject to important theoretical and practical caveats. Relationships between network properties can make it difficult to disentangle their unique and shared contributions to overall network architecture. Namely, the presence of some features may induce the presence of other features, making it impossible to selectively randomize one while keeping others fixed. For example, the concurrence of segregated modules and disproportionately highly connected hub nodes may induce structural by-products, such as hierarchies and rich clubs50. Therefore, null-model testing cannot always unambiguously recover the causal structure among network features.

More generally, the freedom to construct spectra of null models necessitates careful interpretation of results relative to underlying assumptions. Overly lenient null models that control too few network attributes give rise to trivial 'straw man' arguments that are easy to reject but cannot precisely identify the origin of effects of interest. Conversely, overly stringent null models that simultaneously control too many network attributes may, ultimately, constrain the sampling space of network architectures, resulting in too close a match to the empirical network and limiting insight. This does not mean that the lenient or stringent limits of the spectrum of null models are uninteresting or should not be explored. Rather, these limits should be considered alongside other null models whenever possible to comprehensively characterize the contribution of multiple network attributes.

A related consideration is how well null models actually sample the space that they seek to explore. Namely, null models may systematically undersample or oversample certain locations of the target space. A relevant example is the spin test for annotated networks, which only explores a limited region of the space of networks with preserved spatial autocorrelation. By exactly permuting regional annotations, the spin test has fewer
Fig. 6 panel labels: Randomize data; Randomize correlation matrix; Randomize topology; Randomize node attribute.

Fig. 6 | Null models can be implemented at different stages of network construction and analysis. Brain network construction and analysis involves numerous steps; an example pipeline involves construction of functional connectivity from regional time series, proportional thresholding (25% density) and subsequent topological analysis. A null model could be implemented at every stage of this process, including randomization of time-series data (preserving power spectra), correlation matrix (preserving mean and variance), network topology (preserving node degree) and node attributes (preserving spatial autocorrelation). Because each null model is applied at a different stage of the workflow, there are multiple ways to generate randomized entities, in which some are randomized directly, and others are considered randomized because an entity at a previous stage of the pipeline was randomized. For example, both time-series randomization and spin nulls may, ultimately, yield randomized node-attribute maps. Illustration generated using functional MRI data (blood oxygen level-dependent (BOLD) responses) from ref.127.
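As a concrete sketch of randomizing at the earliest stage of the pipeline in Fig. 6, the snippet below generates phase-randomized Fourier surrogates of regional time series (preserving each power spectrum, as described in the text) and propagates them through the correlation step to obtain a null connectivity matrix. The function name and toy data are our own; a real analysis would use empirical BOLD time series and carry the surrogates through thresholding and graph analysis as well.

```python
import numpy as np

def phase_randomize(ts, rng):
    """Fourier surrogate of a 1D time series: keep the amplitude spectrum
    (hence the power spectrum), randomize the phases, invert."""
    n = ts.size
    spec = np.fft.rfft(ts)
    phases = rng.uniform(0.0, 2.0 * np.pi, spec.size)
    surr_spec = np.abs(spec) * np.exp(1j * phases)
    surr_spec[0] = spec[0]           # keep the DC component (signal mean)
    if n % 2 == 0:
        surr_spec[-1] = spec[-1]     # keep the Nyquist component real
    return np.fft.irfft(surr_spec, n=n)

# randomization at the first pipeline stage (time series) propagates
# into a randomized correlation matrix at the next stage
rng = np.random.default_rng(0)
ts = rng.standard_normal((4, 256))            # 4 regions, 256 time points
surr = np.array([phase_randomize(x, rng) for x in ts])
null_fc = np.corrcoef(surr)                   # one null connectivity matrix
```

Repeating the last two lines many times yields a null distribution for any statistic derived from the matrix, which is exactly the "randomize data" route in Fig. 6.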
possible realizations compared with a parameterized generative model76, or compared with a wavelet-based resampling approach in which data are first whitened and then permuted99. This example also highlights how randomizations cover a tighter portion of the sampling space of null models, whereas comparable generative model realizations show greater variability, while preserving the quantity of interest in the limit of infinite sampling.

The diversity of null models also brings practical considerations for users. Algorithmic implementations of a null model often imply specific assumptions, which can translate into non-trivial differences in outcomes. For example, different versions of the spin test differ in how they identify the centroid of a region of interest, and whether they permit duplicate assignments during the mapping of spatially randomized region coordinates to the original ones49. This variability between different implementations of the same null model can result in differences in inference49.

Details of algorithmic implementation can matter even within a specific null model version. It is important to sample null-model instances as uniformly as possible, to ensure that the resulting statistics are not biased. For example, early implementations of the spin test naively used random rotations around the (x,y,z) axes75,78; however, this led to oversampling of certain rotations, unlike the unbiased approach based on QR decomposition of standard normal rotation matrices (factorizing the rotation matrix into an orthogonal matrix Q and an upper triangular matrix R)113,114. Both aforementioned examples on algorithmic implementation details highlight an important point: deciding what network feature or features to randomize is not sufficient; deciding how this randomization is implemented is important, too. This underlines the importance of sharing code underlying the selected null model implementation and clearly reporting the relevant details10.

We finally note that some questions in network neuroscience do not necessarily require benchmarking with null models. If the goal of a study is to generate a feature of brain networks that differentiates groups of individuals (such as patients and controls) or predicts individual differences in some exogenous feature (such as symptom severity), it is more important to show that the feature predicts individual differences in unseen data, rather than to confirm that the feature is statistically unexpected. In other words, comparing a feature in empirical and surrogate data is not informative about its clinical or, more generally, predictive utility104,115.

Outlook and conclusion
We close this Review by considering outstanding questions for next-generation inferential methods in network neuroscience. As the field moves beyond descriptive statistics of brain networks towards understanding generative mechanisms, null models will be key for distilling the smallest set of rules or constraints that can parsimoniously explain the hallmark features of brain networks and their phylogeny and ontogeny. We envisage that the emerging integration of geometric and microarchitectural constraints with existing topological models will lead to more complete and powerful nulls that can more accurately pinpoint the origin of observed phenomena. These hybrid models will help evaluate the relative importance of canonical brain network features (such as modules, hubs and rich clubs) and, ultimately, illuminate a 'feature hierarchy' that separates unique features of the brain from those that are by-products50. Key to this endeavour will be paradigms in which ensembles of null models combinatorially constrain one or more features, to systematically test which features arise as by-products of others.

An important but underexplored direction is to subtly perturb observed networks and explore alternative realizations of brain networks that retain many empirical features. In network neuroscience, null networks are typically created through complete randomization, and, conversely, generative models are typically fully completed. However, both processes can be paused incrementally to explore the immediate vicinity of an empirical network in null-model space, thereby giving insight into the stochastic neighbourhood of brain networks. Such 'connectome mutagenesis' can offer insight into pathological perturbations involved in psychiatric and neurological disorders116. Parametrically tuning the extent of randomization in this way can be used to systematically map the space of possible network realizations and to illuminate how trade-offs among biological constraints manifest as network features and architectures51,117.

More broadly, null models are the ideal vehicle for forging links and establishing a common language in the neuroscience community and with adjacent fields. Indeed, many of these methods originate from other domains, such as astrophysics118, time-series analysis119, ecology120 and bioinformatics31. The dominant framework in network neuroscience revolves around the use of null models for null-hypothesis testing. As prediction and cross-validation become dominant frameworks elsewhere in the natural sciences and engineering, a major new frontier is to formulate models that predict the presence and prominence of specific network features in unseen data. In this sense, generative models that are validated out-of-sample present a promising step in the continued evolution of network-neuroscience null models42.

From a more pragmatic perspective, standardized reporting of results with respect to multiple null models will promote more comprehensive scientific communication121. We encourage readers to explore these methods in their own work; Supplementary Table 1 shows existing cutting-edge implementations of various randomization and generative null models, in Python, MATLAB and R programming languages. Going forward, null models present a unique opportunity to transparently share methods and harmonize analytical frameworks. In turn, this will stimulate widespread adoption of null-model methods and their continued development.

This is an exciting time for network neuroscience. Rapid methodological development enables us to explicitly define falsifiable null hypotheses in the form of null models. Null models, in turn, enable us to ask specific
and increasingly diverse questions about brain network organization. The continued development of null models drives cycles of discovery. At every step or iteration, data are tested against increasingly more sophisticated null models that spark new insights, prompt theoretical and methodological innovation and, ultimately, lead to more informative network features and null models. As the repertoire of inferential methods grows, so will our understanding of the principles governing brain network organization.

Published online 31 May 2022
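The implementation point raised in the 'Limits of null models' section, that naive random (x,y,z) rotations oversample parts of rotation space whereas QR decomposition of standard normal matrices does not, can be made concrete. A minimal sketch assuming NumPy, with a function name of our own:

```python
import numpy as np

def random_rotation(dim=3, rng=None):
    """Haar-uniform rotation via QR decomposition of a standard normal matrix."""
    rng = np.random.default_rng() if rng is None else rng
    q, r = np.linalg.qr(rng.standard_normal((dim, dim)))
    q *= np.sign(np.diag(r))      # sign correction: uniform over O(dim)
    if np.linalg.det(q) < 0:      # restrict to proper rotations, SO(dim)
        q[:, 0] *= -1
    return q

rot = random_rotation(rng=np.random.default_rng(0))
```

Applying such a rotation to the spherical coordinates of cortical regions and reassigning each region to its nearest rotated location yields one spin-test permutation sampled without the rotational bias of naive Euler-angle schemes.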
References
1. Sporns, O., Tononi, G. & Kötter, R. The human connectome: a structural description of the human brain. PLoS Comput. Biol. 1, e42 (2005).
2. Bullmore, E. & Sporns, O. Complex brain networks: graph theoretical analysis of structural and functional systems. Nat. Rev. Neurosci. 10, 186–198 (2009).
3. DeWeerdt, S. How to map the brain. Nature 571, S6 (2019).
4. Sporns, O. The future of network neuroscience. Netw. Neurosci. 1, 1–2 (2017).
5. Insel, T. R., Landis, S. C. & Collins, F. S. The NIH Brain Initiative. Science 340, 687–688 (2013).
6. Amunts, K. et al. The Human Brain Project: creating a European research infrastructure to decode the human brain. Neuron 92, 574–581 (2016).
7. Bassett, D. S. & Sporns, O. Network neuroscience. Nat. Neurosci. 20, 353–364 (2017).
8. Sejnowski, T. J., Churchland, P. S. & Movshon, J. A. Putting big data to good use in neuroscience. Nat. Neurosci. 17, 1440–1441 (2014).
9. Rubinov, M. & Sporns, O. Complex network measures of brain connectivity: uses and interpretations. NeuroImage 52, 1059–1069 (2010).
10. Poldrack, R. A. et al. Scanning the horizon: towards transparent and reproducible neuroimaging research. Nat. Rev. Neurosci. 18, 115–126 (2017).
11. Van den Heuvel, M. P., Bullmore, E. T. & Sporns, O. Comparative connectomics. Trends Cogn. Sci. 20, 345–361 (2016).
12. Passingham, R. E., Stephan, K. E. & Kötter, R. The anatomical basis of functional localization in the cortex. Nat. Rev. Neurosci. 3, 606–616 (2002).
13. Mars, R. B., Passingham, R. E. & Jbabdi, S. Connectivity fingerprints: from areal descriptions to abstract spaces. Trends Cogn. Sci. 22, 1026–1037 (2018).
14. Sporns, O., Honey, C. J. & Kötter, R. Identification and classification of hubs in brain networks. PLoS ONE 2, e1049 (2007).
15. Van Den Heuvel, M. P., Kahn, R. S., Goñi, J. & Sporns, O. High-cost, high-capacity backbone for global brain communication. Proc. Natl Acad. Sci. USA 109, 11372–11377 (2012).
16. Hilgetag, C.-C., Burns, G. A., O'Neill, M. A., Scannell, J. W. & Young, M. P. Anatomical connectivity defines the organization of clusters of cortical areas in the macaque and the cat. Philos. Trans. Roy. Soc. Lond. B 355, 91–110 (2000).
17. Sporns, O. & Betzel, R. F. Modular brain networks. Annu. Rev. Psychol. 67, 613–640 (2016).
18. Sporns, O. Network attributes for segregation and integration in the human brain. Curr. Opin. Neurobiol. 23, 162–171 (2013).
19. Chung, J. et al. Statistical connectomics. Annu. Rev. Stat. 8, 463–492 (2021).
20. Fornito, A., Zalesky, A. & Bullmore, E. Fundamentals of Brain Network Analysis Ch. 10 (Academic, 2016).
21. Klimm, F., Bassett, D. S., Carlson, J. M. & Mucha, P. J. Resolving structural variability in network models and the brain. PLoS Comput. Biol. 10, e1003491 (2014).
This study proposes to comprehensively benchmark observed networks with respect to a spectrum of null models, thereby providing a more complete feature profile.
22. Watts, D. J. & Strogatz, S. H. Collective dynamics of 'small-world' networks. Nature 393, 440–442 (1998).
23. Humphries, M. D. & Gurney, K. Network 'small-world-ness': a quantitative method for determining canonical network equivalence. PLoS ONE 3, e0002051 (2008).
24. Newman, M. E. & Girvan, M. Finding and evaluating community structure in networks. Phys. Rev. E 69, 026113 (2004).
25. Esfahlani, F. Z. et al. Modularity maximization as a flexible and generic framework for brain network exploratory analysis. NeuroImage 244, 118607 (2021).
26. Rubinov, M. & Sporns, O. Weight-conserving characterization of complex functional brain networks. NeuroImage 56, 2068–2079 (2011).
27. MacMahon, M. & Garlaschelli, D. Community detection for correlation matrices. Phys. Rev. X 5, 21006 (2015).
28. Colizza, V., Flammini, A., Serrano, M. A. & Vespignani, A. Detecting rich-club ordering in complex networks. Nat. Phys. 2, 110–115 (2006).
29. Alstott, J., Panzarasa, P., Rubinov, M., Bullmore, E. & Vértes, P. A unifying framework for measuring weighted rich clubs by integrating randomized controls. Sci. Rep. 4, 7525 (2014).
30. Im, K., Paldino, M. J., Poduri, A., Sporns, O. & Grant, P. E. Altered white matter connectivity and network organization in polymicrogyria revealed by individual gyral topology-based analysis. NeuroImage 86, 182–193 (2014).
31. Maslov, S. & Sneppen, K. Specificity and stability in topology of protein networks. Science 296, 910–913 (2002).
32. Betzel, R. F. & Bassett, D. S. Specificity and robustness of long-distance connections in weighted, interareal connectomes. Proc. Natl Acad. Sci. USA 115, E4880–E4889 (2018).
This study introduces a constrained rewiring model that preserves density and degree sequence, and approximately preserves the connection length distribution and length–weight relationship.
33. Sporns, O. & Kötter, R. Motifs in brain networks. PLoS Biol. 2, e369 (2004).
34. Kale, P., Zalesky, A. & Gollo, L. L. Estimating the impact of structural directionality: how reliable are undirected connectomes? Netw. Neurosci. 2, 259–284 (2018).
35. Suárez, L. E., Richards, B. A., Lajoie, G. & Misic, B. Learning function from structure in neuromorphic networks. Nat. Mach. Intell. 3, 771–786 (2021).
36. Erdős, P. & Rényi, A. On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci. 5, 17–60 (1960).
37. Gilbert, E. N. Random graphs. Ann. Math. Stat. 30, 1141–1144 (1959).
38. Kaiser, M. & Hilgetag, C. C. Nonoptimal component placement, but short processing paths, due to long-distance projections in neural systems. PLoS Comput. Biol. 2, e95 (2006).
39. Ercsey-Ravasz, M. et al. A predictive network model of cerebral cortical connectivity based on a distance rule. Neuron 80, 184–197 (2013).
40. Betzel, R. F. et al. Generative models of the human connectome. NeuroImage 124, 1054–1064 (2016).
41. Goulas, A., Betzel, R. F. & Hilgetag, C. C. Spatiotemporal ontogeny of brain wiring. Sci. Adv. 5, eaav9694 (2019).
42. Oldham, S. et al. Modeling spatial, developmental, physiological, and topological constraints on human brain connectivity. Preprint at bioRxiv https://doi.org/10.1101/2021.09.29.462379 (2021).
43. Barabási, A.-L. & Albert, R. Emergence of scaling in random networks. Science 286, 509–512 (1999).
44. Lancaster, G., Iatsenko, D., Pidde, A., Ticcinelli, V. & Stefanovska, A. Surrogate data for hypothesis testing of physical systems. Phys. Rep. 748, 1–60 (2018).
45. Daunizeau, J., David, O. & Stephan, K. Dynamic causal modelling: a critical review of the biophysical and statistical foundations. NeuroImage 58, 312–322 (2011).
46. Roebroeck, A., Formisano, E. & Goebel, R. The identification of interacting networks in the brain using fMRI: model selection, causality and deconvolution. NeuroImage 58, 296–302 (2011).
47. Penny, W. D. et al. Comparing families of dynamic causal models. PLoS Comput. Biol. 6, 1–14 (2010).
48. Zalesky, A., Fornito, A. & Bullmore, E. On the use of correlation as a measure of network connectivity. NeuroImage 60, 2096–2106 (2012).
This statistical study investigates how the transitive property induces topological structure in correlation-based networks.
49. Markello, R. D. & Mišić, B. Comparing spatial null models for brain maps. NeuroImage 236, 118052 (2021).
This benchmarking study compares the performance of ten spatial null models in both simulations and empirical data analysis.
50. Rubinov, M. Constraints and spandrels of interareal connectomes. Nat. Commun. 7, 13812 (2016).
This modelling study introduces an integrative approach to infer causal relationships among network features.
51. Avena-Koenigsberger, A., Goñi, J., Solé, R. & Sporns, O. Network morphospace. J. R. Soc. Interface 12, 20140881 (2015).
This article reviews how to chart and explore the space of possible network realizations (network morphospace).
52. Stiso, J. & Bassett, D. S. Spatial embedding imposes constraints on neuronal network architectures. Trends Cogn. Sci. 22, 1127–1142 (2018).
53. Bullmore, E. & Sporns, O. The economy of brain network organization. Nat. Rev. Neurosci. 13, 336–349 (2012).
54. Roberts, J. A. et al. The contribution of geometry to the human connectome. NeuroImage 124, 379–393 (2016).
55. Markov, N. T. et al. Cortical high-density counterstream architectures. Science 342, 1238406 (2013).
56. Liu, Z.-Q., Zheng, Y.-Q. & Misic, B. Network topology of the marmoset connectome. Netw. Neurosci. 4, 1181–1196 (2020).
57. Liu, Z.-Q., Betzel, R. & Misic, B. Benchmarking functional connectivity by the structure and geometry of the human brain. Netw. Neurosci. https://doi.org/10.1162/netn_a_00236 (2021).
58. Mišić, B. et al. The functional connectivity landscape of the human brain. PLoS ONE 9, e111007 (2014).
59. Samu, D., Seth, A. K. & Nowotny, T. Influence of wiring cost on the large-scale architecture of human cortical connectivity. PLoS Comput. Biol. 10, e1003557 (2014).
60. Seguin, C., Van Den Heuvel, M. P. & Zalesky, A. Navigation of brain networks. Proc. Natl Acad. Sci. USA 115, 6297–6302 (2018).
61. Zheng, Y.-Q. et al. Local vulnerability and global connectivity jointly shape neurodegenerative disease propagation. PLoS Biol. 17, e3000495 (2019).
62. Henderson, J. A. & Robinson, P. A. Relations between the geometry of cortical gyrification and white-matter network architecture. Brain Conn. 4, 112–130 (2014).
63. Vértes, P. E. et al. Simple models of human brain functional networks. Proc. Natl Acad. Sci. USA 109, 5868–5873 (2012).
This study uses a generative model to investigate the contribution of geometric and topological wiring constraints to hallmark network features of the brain.
64. Akarca, D., Vértes, P. E., Bullmore, E. T. & Astle, D. E. A generative network model of neurodevelopmental diversity in structural brain organization. Nat. Commun. 12, 1–18 (2021).
65. Vázquez-Rodríguez, B., Liu, Z.-Q., Hagmann, P. & Misic, B. Signal propagation via cortical hierarchies. Netw. Neurosci. 4, 1072–1090 (2020).
66. Scholtens, L. H., Schmidt, R., de Reus, M. A. & van den Heuvel, M. P. Linking macroscale graph analytical organization to microscale neuroarchitectonics in the macaque connectome. J. Neurosci. 34, 12192–12205 (2014).
67. Fulcher, B. D. & Fornito, A. A transcriptional signature of hub connectivity in the mouse connectome. Proc. Natl Acad. Sci. USA 113, 1435–1440 (2016).
68. Hansen, J. Y. et al. Mapping gene transcription and neurocognition across human neocortex. Nat. Hum. Behav. 5, 1240–1250 (2021).
69. Hansen, J. Y. et al. Mapping neurotransmitter systems to the structural and functional organization of the human neocortex. Preprint at bioRxiv https://doi.org/10.1101/2021.10.28.466336 (2021).
70. Goulas, A., Majka, P., Rosa, M. G. & Hilgetag, C. C. A blueprint of mammalian cortical connectomes. PLoS Biol. 17, e2005346 (2019).
71. Shamir, I. & Assaf, Y. An MRI-based, data-driven model of cortical laminar connectivity. Neuroinformatics 19, 205–218 (2021).
72. Whitaker, K. J. et al. Adolescence is associated with transcriptionally patterned consolidation of the hubs of the human brain connectome. Proc. Natl Acad. Sci. USA 113, 9105–9110 (2016).
73. Shafiei, G. et al. Topographic gradients of intrinsic dynamics across neocortex. eLife 9, e62116 (2020).
74. Alexander-Bloch, A., Raznahan, A., Bullmore, E. & Giedd, J. The convergence of maturational change and structural covariance in human cortical networks. J. Neurosci. 33, 2889–2899 (2013).
75. Alexander-Bloch, A. F. et al. On testing for spatial correspondence between maps of human brain structure and function. NeuroImage 178, 540–551 (2018).
This methodological paper introduces a spatial permutation null model to test for correspondence between brain maps.
76. Burt, J. B., Helmer, M., Shinn, M., Anticevic, A. & Murray, J. D. Generative modeling of brain maps with spatial autocorrelation. NeuroImage 220, 117038 (2020).
This methodological study develops a parameterized model that generates null brain maps with preserved spatial autocorrelation.
77. Markello, R. D. et al. Neuromaps: structural and functional interpretation of brain maps. Preprint at bioRxiv https://doi.org/10.1101/2022.01.06.475081 (2022).
78. Váša, F. et al. Adolescent tuning of association cortex in human structural brain networks. Cereb. Cortex 28, 281–294 (2018).
79. Vázquez-Rodríguez, B. et al. Gradients of structure–function tethering across neocortex. Proc. Natl Acad. Sci. USA 116, 21219–21227 (2019).
80. Baum, G. L. et al. Development of structure–function coupling in human brain networks during youth. Proc. Natl Acad. Sci. USA 117, 771–778 (2020).
81. Cornblath, E. J. et al. Temporal sequences of brain activity at rest are constrained by white matter structure and modulated by cognitive demands. Commun. Biol. 3, 1–12 (2020).
82. Burt, J. B. et al. Hierarchy of transcriptomic specialization across human cortex captured by structural neuroimaging topography. Nat. Neurosci. 21, 1251–1259 (2018).
83. Wael, R. V. D. et al. BrainSpace: a toolbox for the analysis of macroscale gradients in neuroimaging and connectomics datasets. Commun. Biol. 3, 103 (2020).
84. Bazinet, V., de Wael, R. V., Hagmann, P., Bernhardt, B. C. & Misic, B. Multiscale communication in cortico-cortical networks. NeuroImage 243, 118546 (2021).
95. Hirschberger, M., Qi, Y. & Steuer, R. E. … specified distributional characteristics. Eur. J. Oper. Res. 177, 1610–1625 (2007).
96. Hosseini, S. M. H. & Kesler, S. R. Influence of choice of null network on small-world parameters of structural correlation networks. PLoS ONE https://doi.org/10.1371/journal.pone.0067354 (2013).
97. Masuda, N., Kojaku, S. & Sano, Y. Configuration model for correlation matrices preserving the node strength. Phys. Rev. E 98, 12312 (2018).
98. Prichard, D. & Theiler, J. Generating surrogate data for time series with several simultaneously measured variables. Phys. Rev. Lett. 73, 951–954 (1994).
99. Breakspear, M., Brammer, M. J., Bullmore, E. T., Das, P. & Williams, L. M. Spatiotemporal wavelet resampling for functional neuroimaging data. Hum. Brain Mapp. 23, 1–25 (2004).
100. Huang, W. et al. A graph signal processing perspective on functional brain imaging. Proc. IEEE 106, 868–885 (2018).
101. Pirondini, E., Vybornova, A., Coscia, M. & Van De Ville, D. A spectral method for generating surrogate graph signals. IEEE Sig. Proc. Lett. 23, 1275–1278 (2016).
102. Chang, C. & Glover, G. H. Time–frequency dynamics of resting-state brain connectivity measured with fMRI. NeuroImage 50, 81–98 (2010).
103. Zalesky, A., Fornito, A., Cocchi, L., Gollo, L. L. & Breakspear, M. Time-resolved resting-state brain networks. Proc. Natl Acad. Sci. USA 111, 10341–10346 (2014).
104. Liégeois, R., Yeo, B. T. T. & Van De Ville, D. Interpreting null models of resting-state functional MRI dynamics: not throwing the model out with the hypothesis. NeuroImage 243, 118518 (2021).
This review explores how null models can be applied at different points in the analysis pipeline to identify unexpected features of time-resolved functional brain dynamics.
105. Esfahlani, F. Z., Bertolero, M. A., Bassett, D. S. & Betzel, R. F. Space-independent community and hub structure of functional brain networks. NeuroImage 211, 116612 (2020).
106. Shinn, M. et al. Spatial and temporal autocorrelation weave human brain networks. Preprint at bioRxiv https://doi.org/10.1101/2021.06.01.446561 (2021).
107. Elsayed, G. F. & Cunningham, J. P. Structure in neural population recordings: an expected byproduct of simpler phenomena? Nat. Neurosci. 20, 1310–1318 (2017).
This article introduces a framework to test whether population structure in multi-neuron recordings is a by-product of correlations across time, neurons
119. Theiler, J., Eubank, S., Longtin, A., Galdrikian, B. & Doyne Farmer, J. Testing for nonlinearity in time series: the method of surrogate data. Phys. D Nonlinear Phenom. 58, 77–94 (1992).
120. Gotelli, N. J. & Graves, G. R. Null Models in Ecology (Smithsonian Institution Press, 1996).
121. DuPre, E. et al. Beyond advertising: new infrastructures for publishing integrated research objects. PLoS Comput. Biol. 18, 1–7 (2022).
122. Griffa, A., Alemán-Gómez, Y. & Hagmann, P. Structural and functional connectome from 70 young healthy adults. Zenodo https://doi.org/10.5281/zenodo.2872623 (2019).
123. Vaishnavi, S. N. et al. Regional aerobic glycolysis in the human brain. Proc. Natl Acad. Sci. USA 107, 17757–17762 (2010).
124. Váša, F. et al. Conservative and disruptive modes of adolescent change in human brain functional connectivity. Proc. Natl Acad. Sci. USA 117, 3248–3253 (2020).
125. Rosen, B. Q. & Halgren, E. A whole-cortex probabilistic diffusion tractography connectome. eNeuro https://doi.org/10.1523/ENEURO.0416-20.2020 (2021).
126. Rosen, B. Q. & Halgren, E. A whole-cortex probabilistic diffusion tractography connectome. Zenodo https://doi.org/10.5281/zenodo.4060485 (2020).
127. Senden, M. et al. Task-related effective connectivity reveals that the cortical rich club gates cortex-wide communication. Hum. Brain Mapp. 39, 1246–1262 (2018).
128. Efron, B. & Tibshirani, R. Bootstrap methods for standard errors, confidence intervals, and other measures of statistical accuracy. Stat. Sci. 1, 54–75 (1986).
129. Filosi, M., Visintainer, R., Riccadonna, S., Jurman, G. & Furlanello, C. Stability indicators in network reconstruction. PLoS ONE 9, e89815 (2014).
130. Cheng, H. et al. Pseudo-bootstrap network analysis — an application in functional connectivity fingerprinting. Front. Hum. Neurosci. 11, 351 (2017).
131. Ohara, K., Saito, K., Kimura, M. & Motoda, H. Int. Conf. Discovery Sci. (eds Džeroski, S., Panov, P., Kocev, D. & Todorovski, L.) 228–239 (Springer International, 2014).
132. Bhattacharyya, S. & Bickel, P. J. Subsampling bootstrap of count features of networks. Ann. Stat. 43, 2384–2411 (2015).
133. Gel, Y. R., Lyubchich, V. & Ramirez Ramirez, L. L. Bootstrap quantification of estimation uncertainties in network degree distributions. Sci. Rep. 7, 5807 (2017).
85. Shafiei, G. et al. Spatial patterning of tissue volume and experimental conditions. Acknowledgements
loss in schizophrenia reflects brain network 108. Pillow, J. W. & Aoi, M. C. Is population activity The authors thank A. Goulas for stimulating discussions dur-
architecture. Biol. Psychiat 87, 727–735 (2020). more than the sum of its parts? Nat. Neurosci. 20, ing the conceptualizing of this work, and E. Suárez, A. Luppi,
86. Shafiei, G. et al. Network structure and transcriptomic 1196–1198 (2017). V. Bazinet, G. Shafiei, J. Hansen, Z.-Q. Liu, O. Sherwood and
vulnerability shape atrophy in frontotemporal dementia. 109. Marrelec, G. et al. Partial correlation for functional R. Moran for constructive comments on the manuscript. F.V.
Brain https://doi.org/10.1093/brain/awac069 (2022). brain interactivity investigation in functional MRI. acknowledges support from the Data to Early Diagnosis and
87. Weinstein, S. M. et al. A simple permutation-based NeuroImage 32, 228–237 (2006). Precision Medicine Industrial Strategy Challenge Fund, UK
test of intermodal correspondence. Hum. Brain Mapp. 110. Koller, D. & Friedman, N. Probabilistic Graphical Research and Innovation (UKRI) and the Bill & Melinda Gates
42, 5175–5187 (2021). Models: Principles and Techniques (Academic, 2009). Foundation. B.M. acknowledges support from the Natural
88. Fulcher, B. D., Arnatkeviciute, A. & Fornito, A. 111. Dadi, K. et al. Benchmarking functional connectome- Sciences and Engineering Research Council of Canada
Overcoming false-positive gene-category enrichment based predictive models for resting-state fMRI. (NSERC), the Canadian Institutes of Health Research (CIHR),
in the analysis of spatially resolved transcriptomic NeuroImage 192, 115–134 (2019). the Brain Canada Foundation Future Leaders Fund, the
brain atlas data. Nat. Commun. 12, 2669 (2021). 112. Liégeois, R., Santos, A., Matta, V., Van De Ville, D. & Canada Research Chairs Program and the Healthy Brains for
89. Wei, Y. et al. Statistical testing in transcriptomic- Sayed, A. H. Revisiting correlation-based functional Healthy Lives initiative.
neuroimaging studies: a how-to and evaluation of connectivity and its relationship with structural
methods assessing spatial and gene specificity. connectivity. Netw. Neurosci. 4, 1235–1251 (2020). Author contributions
Hum. Brain Mapp. 43, 885–901 (2021). 113. Blaser, R. & Fryzlewicz, P. Random rotation The authors contributed equally to all aspects of the article.
90. Hlinka, J., Paluš, M., Vejmelka, M., Mantini, D. & ensembles. J. Mach. Learn. Res. 17, 1–26 (2016).
Corbetta, M. Functional connectivity in resting-state 114. Lefèvre, J. et al. Spanol (spectral analysis of lobes): Competing interests
fMRI: is linear correlation sufficient? NeuroImage 54, a spectral clustering framework for individual and The authors declare no competing interests.
2218–2225 (2011). group parcellation of cortical surfaces in lobes.
91. Alexander-Bloch, A., Giedd, J. N. & Bullmore, E. Front. Neurosci. 12, 00354 (2018). Peer review information
Imaging structural co-variance between human brain 115. Zhang, M. The use and limitations of null-model-based Nature Reviews Neuroscience thanks M. Breakspear and the
regions. Nat. Rev. Neurosci. 14, 322–336 (2013). hypothesis testing. Biol. Philos. 35, 1–22 (2020). other, anonymous referee(s) for their contribution to the peer
92. Evans, A. C. Networks of anatomical covariance. 116. Gollo, L. L. et al. Fragility and volatility of structural review of this work.
NeuroImage 80, 489–504 (2013). hubs in the human connectome. Nat. Neurosci. 21,
93. Seidlitz, J. et al. Morphometric similarity networks 1107–1116 (2018). Publisher’s note
detect microscale cortical organization and predict This study implements connectome mutations by Springer Nature remains neutral with regard to jurisdictional
inter-individual cognitive variation. Neuron 97, parametrically tuning the extent of randomization. claims in published maps and institutional affiliations.
231–247 (2018). 117. Goñi, J. et al. Exploring the morphospace of
94. Fornito, A., Arnatkevičiūtė, A. & Fulcher, B. D. communication efficiency in complex networks. PLoS Supplementary information
Bridging the gap between connectome and ONE 8, e58070 (2013). The online version contains supplementary material available
transcriptome. Trends Cogn. Sci. 23, 34–50 (2019). 118. Barrow, J. D., Bhavsar, S. G. & Sonoda, D. H. at https://doi.org/10.1038/s41583-022-00601-9.
95. Hirschberger, M., Qi, Y. & Steuer, R. E. Randomly A bootstrap resampling analysis of galaxy clustering.
generating portfolio-selection covariance matrices with Monthly Not. R. Astron. Soc. 210, 19P–23P (1984). © Springer Nature Limited 2022
0123456789();: