Percolation Theory in Python
Anders Malthe-Sørenssen
This textbook was developed for the course Fys4460 - Disordered media
and percolation theory. The course was developed in 2004 and taught for
twenty years at the University of Oslo. The original idea of the course
was to provide an introduction to basic aspects of scaling theory to a
cross-disciplinary student group. Both geoscience and physics students
have successfully taken the course.
This book follows the underlying philosophy that learning a subject
is a hands-on activity that requires active student participation. The course that
used the book was project driven. The students solved a set of extensive
computational and theoretical exercises and were supported by lectures
that provided the theoretical background and group sessions with a
learning assistant. The exercises used in the course have been woven into
the text, but are also given as a long project description in an appendix.
This textbook provides much of the same information as provided in the
lectures.
The underlying idea is that in order to learn a subject such as scaling,
the student needs to gain hands-on experience with real data. The student
should learn to generate, analyze and understand data and models. The
focus is not to generate perfect data. Instead, we aim to teach the student
how to make sense of imperfect data. The data presented in the book
and the data that students may generate using the supplied programs
are therefore not from very long simulations, but instead from simulations
that take a few minutes on an ordinary computer. Experience from this
course has been that students learn most effectively by being guided
through the process of building models and generating data.
Contents

Preface
1 Introduction to percolation
  1.1 Basic concepts in percolation
  1.2 Percolation probability
  1.3 Spanning cluster
  1.4 Percolation in small systems
  1.5 Further reading
  1.6 Exercises
2 One-dimensional percolation
  2.1 Percolation probability
  2.2 Cluster number density
    2.2.1 Definition of cluster number density
    2.2.2 Measuring the cluster number density
    2.2.3 Shape of the cluster number density
    2.2.4 Numerical measurement of the cluster number density
    2.2.5 Average cluster size
  2.3 Spanning cluster
  2.4 Correlation length
3 Infinite-dimensional percolation
  3.1 Percolation threshold
  3.2 Spanning cluster
  3.3 Average cluster size
  3.4 Cluster number density
  3.5 (Advanced) Embedding dimension
  3.6 Exercises
4 Finite-dimensional percolation
  4.1 Cluster number density
    4.1.1 Numerical estimation of n(s, p)
    4.1.2 Measuring probability densities of rare events
    4.1.3 Measurements of n(s, p) when p → pc
    4.1.4 Scaling theory for n(s, p)
    4.1.5 Scaling ansatz for 1d percolation
    4.1.6 Scaling ansatz for Bethe lattice
  4.2 Consequences of the scaling ansatz
    4.2.1 Average cluster size
    4.2.2 Density of spanning cluster
  4.3 Percolation thresholds
  4.4 Exercises
5 Geometry of clusters
  5.1 Characteristic cluster size
    5.1.1 Analytical results in one dimension
    5.1.2 Numerical results in two dimensions
    5.1.3 Scaling behavior in two dimensions
  5.2 Geometry of finite clusters
    5.2.1 Correlation length
  5.3 Geometry of the spanning cluster
7 Renormalization
  7.1 The renormalization mapping
    7.1.1 (Advanced) Renormalization of correlation length
    7.1.2 Iterating the renormalization mapping
    7.1.3 Application of renormalization to ξ
  7.2 Examples
    7.2.1 Example: One-dimensional percolation
    7.2.2 Example: Renormalization on 2d site lattice
    7.2.3 Example: Renormalization on 2d triangular lattice
    7.2.4 Example: Renormalization on 2d bond lattice
  7.3 (Advanced) Universality
  7.4 (Advanced) Fragmentation
  7.5 Exercises
References
Index
1 Introduction to percolation
Fig. 1.1 Illustration of a porous material from a nanoporous silicate (SiO2). The colors inside the pores illustrate the distance to the nearest part of the solid.
The resulting matrices are shown in Fig. 1.2 for various values of p. The left figure illustrates the matrix m with its random values. A site i is set when p reaches the value mi in the matrix. (This is similar to changing the water level and observing which parts of a landscape are above water.)
Fig. 1.2 Illustration of an array of 4 × 4 random numbers, and the various sites set for
different values of p.
neighbor connectivity we would get a path from the bottom to the top already at p = 0.4. We call the value pc at which we first get a path from one side to another (from the top to the bottom, from the left to the right, or both) the percolation threshold. For a given realization of the matrix, there is a well-defined value for pc, but for another realization there would be another pc. We therefore need either to use statistical averages to characterize the properties of the percolation system, or to refer to a theoretical – thermodynamic – limit, such as the value of pc in an infinitely large system. When we use pc here, we will refer to the thermodynamic value.
In this book, we will develop theories describing various physical prop-
erties of the percolation system as a function of p. We will characterize
the sizes of connected regions, the size of the region connecting one side
to another, the size of the region that contributes to transport (fluid,
thermal or electrical transport), and other geometrical properties of the
system. Most of the features we study will be universal, independent of
many of the details of the system. From Fig. 1.2 we see that pc depends
on the details: It depends on the rule for connectivity. It would also depend on the type of lattice used: square, triangular, hexagonal, etc. The
value of pc is specific. However, many other properties are general. For
example, how the conductivity of the porous medium depends on p near
pc does not depend on the type of lattice or the choice of connectivity
rule. It is universal. This means that we can choose a system which is
simple to study in order to gain intuition about the general features,
and then apply that intuition to the special cases afterwards. While the
connectivity or type of lattice does not matter, some things do matter.
For example, the dimensionality matters: The behavior of a percolation
system is different in one, two and three dimensions. However, the most
important differences occur between one and two dimensions, where the
difference is dramatic, whereas the difference between two and three
dimensions is more a difference of degree that we can easily handle. Actually, the
percolation problem becomes simpler again in higher dimensions. In two
dimensions, it is possible to go around a hole, and still have connectivity.
But it is not possible to have connectivity of both the pores and the
solid in the same direction at the same time. This is possible in three
dimensions: A two-dimensional creature would have problems with hav-
ing a digestive tract, as it would divide the creature in two, but in three
dimensions this is fully possible. Here, we will therefore focus on two-
and three-dimensional systems.
1.1 Basic concepts in percolation
Definitions
• Two sites are connected if they are nearest neighbors (4 neighbors on a square lattice).
• A cluster is a set of connected sites.
• A cluster is spanning if it spans from one side to the opposite side.
This function returns the matrix lw, which for each site in the original
array tells what cluster it belongs to. Clusters are numbered sequentially,
and each cluster is given an index. All the sites with the same index
belong to the same cluster. The resulting array is shown in Fig. 1.3,
where the index for each site is shown and a color is used to indicate the
various clusters. Notice that there is a distribution of cluster sizes, but
no cluster is large enough to reach from one side to another, and as a
result the system does not percolate.
Fig. 1.3 Illustration of the index array for a 10 × 10 system for p = 0.45.
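The labeling itself can be reproduced with a short script. This is a minimal sketch assuming the scipy.ndimage measurements module used throughout this book (in newer versions of scipy, label is available directly from scipy.ndimage):

from pylab import *
from scipy.ndimage import measurements

L = 10
p = 0.45
z = rand(L, L)
m = z < p                          # occupied sites
lw, num = measurements.label(m)    # lw: cluster index per site
imshow(lw, origin='lower')
colorbar()
show()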
Unfortunately, this colors the clusters gradually from the bottom up.
This is a property of the underlying algorithm: Clusters are indexed
starting from the bottom-left of the matrix. Hence, clusters that are close
to each other will get similar colors and therefore be difficult to discern
unless we shuffle the colormap. We can fix this by shuffling the labeling:
b = arange(lw.max() + 1)
shuffle(b)
shuffledLw = b[lw]
imshow(shuffledLw, origin='lower')
The resulting image is shown to the right in Fig. 1.3. (Notice that in these figures we have reversed the ordering of the y-axis. Usually, the first row is in the top-left corner of your plots – and this will also be the case in most of the following plots).
It may also be useful to color the clusters based on the size of the
clusters, where size refers to the number of sites in a cluster. We can do
this using
area = measurements.sum(m, lw, index=arange(lw.max() + 1))
areaImg = area[lw]
imshow(areaImg, origin='lower')
colorbar()
Fig. 1.4 shows the clusters for a 100 × 100 system for p ranging from
0.2 to 0.7 in steps of 0.1. We see that the clusters increase in size as p
increases, but at p = 0.6, there is just one large cluster spanning the
entire region. We have a percolating cluster, and we call this cluster that
spans the system the spanning cluster. However, the transition is very
rapid from p = 0.5 to p = 0.6. We therefore look at this region in more
detail in Fig. 1.5. We see that the size of the largest cluster increases
rapidly as p reaches a value around 0.6, which corresponds to pc for this
system. At this point, the largest cluster spans the entire system. For
the two-dimensional system illustrated here we know that in an infinite
lattice the percolation threshold is pc ≈ 0.5927.
Fig. 1.4 Plot of the clusters in a 100 × 100 system for various values of p.
Fig. 1.5 Plot of the clusters in a 100 × 100 system for values of p between 0.5 and 0.6.
1.2 Percolation probability
The aim of this book is to develop a theory to describe how this random
porous medium behaves close to pc . We will characterize properties such
as the density of the spanning cluster, the geometry of the spanning
cluster, and the conductivity and elastic properties of the spanning cluster.
We will address the distribution of cluster sizes and how various parts of
the clusters are important for particular physical processes. We start by
characterizing the behavior of the spanning cluster near pc .
When does the system percolate? When there exists a path connecting
one side to another. This occurs at some value p = pc . However, in a finite
system, like the system we simulated above, the value of pc for a given
realization will vary with each realization. It may be slightly above or
slightly below the pc we find in an infinite sample. Later, we will develop
a theory to understand how the effective pc in a finite system varies from
the thermodynamic pc in an infinitely large system. But already now,
we realize that as we perform different experiments, we will measure
various values of pc . We can characterize this behavior by introducing a
probability Π(p, L):
Percolation probability
The percolation probability Π(p, L) is the probability for there to
be a connected path from one side to another side as a function of
p in a system of size L.
We will then loop over all samples, and for each sample we generate a new random matrix. Then, for each value of p_i we perform the cluster analysis as we did above. We use the function measurements.label to label the clusters. Then we check if the set of labels on one side and the set of labels on the opposite side have any intersection. If the length of the set of intersections is larger than zero, there is at least one percolating cluster, and we count this sample:

lw, num = measurements.label(m)
perc_x = intersect1d(lw[0,:], lw[-1,:])
perc = perc_x[where(perc_x > 0)]
if len(perc) > 0:
    Ni[ip] = Ni[ip] + 1

Finally, we estimate Π as the fraction of percolating samples and plot the result:

Pi = Ni/N
plot(p, Pi)
xlabel('$p$')
ylabel(r'$\Pi$')
The resulting plot of Π(p, L) is seen in Fig. 1.7. The figure shows the
resulting plots as a function of system size L. We see that as the system
size increases, Π(p, L) approaches a step function at p = pc .
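A complete, runnable version of this measurement might look like the following sketch, where the system size, the number of samples, and the range of p values are our own choices:

from pylab import *
from scipy.ndimage import measurements

L = 100
N = 200
p = linspace(0.4, 1.0, 61)
Ni = zeros(len(p))
for sample in range(N):
    z = rand(L, L)
    for ip in range(len(p)):
        m = z < p[ip]
        lw, num = measurements.label(m)
        perc_x = intersect1d(lw[0, :], lw[-1, :])
        if len(perc_x[perc_x > 0]) > 0:
            Ni[ip] = Ni[ip] + 1
Pi = Ni/N
plot(p, Pi)
xlabel('$p$')
ylabel(r'$\Pi(p, L)$')
show()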
Fig. 1.7 Plot of Π(p, L), the probability for there to be a connected path from one side to another, as a function of p for system sizes L = 50, 100, and 200.
The percolation probability for the finite system is found by summing over all configurations c:

Π(p, L) = ∑_c P(c) Π(p, L|c) ,

where Π(p, L|c) is the value of Π for the particular configuration c, and P(c) is the probability of this configuration.
The configurations for L = 2 have been numbered from c = 1 to
c = 16 in Fig. 1.8. However, configurations that are either mirror images
or rotations of each other will have the same probability and the same
physical properties since percolation can take place both in the x and the
y directions. It is therefore only necessary to group the configurations into
6 different classes as illustrated in the bottom of Fig. 1.8, but we then
need to remember the multiplicity, gc , for each class when we calculate
probabilities. Let us make a table of the configurations, the number of such
configurations, the probability of one such configuration, and the value
of Π(p, L|c) for this configuration.
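Since L = 2 is small enough that all 16 configurations can be enumerated by computer, we can also calculate Π(p, L = 2) exactly with a short script. This is a sketch of one possible approach; the function name Pi_exact and the enumeration strategy are our own choices:

from itertools import product
from numpy import array, intersect1d
from scipy.ndimage import measurements

def Pi_exact(p, L=2):
    # Sum p^n (1-p)^(L*L-n) over all spanning configurations with n occupied sites
    total = 0.0
    for bits in product([0, 1], repeat=L*L):
        m = array(bits).reshape(L, L)
        lw, num = measurements.label(m)
        span_x = (intersect1d(lw[:, 0], lw[:, -1]) > 0).any()
        span_y = (intersect1d(lw[0, :], lw[-1, :]) > 0).any()
        if span_x or span_y:
            n = m.sum()
            total = total + p**n * (1 - p)**(L*L - n)
    return total

print(Pi_exact(0.5))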
Fig. 1.8 The possible configurations for an L = 2 site percolation lattice in two dimensions. The configurations are indexed using the cluster configuration number c.
[Figure: Π(p, L) as a function of p for L = 1, 2, 3, and 4.]
with L. It is therefore not realistic to use this technique for calculating the
percolation probabilities. We will need to have more powerful techniques,
or simpler problems, in order to perform exact calculations.
However, we can still learn much from a discussion of finite L. For
example, we notice that

Π(p, L) ≃ Lp^L + c_1 p^{L+1} + · · · + c_n p^{L^2} ,  (1.6)
1.6 Exercises
2 One-dimensional percolation

The percolation problem can be solved exactly in two limits: in the one-
dimensional and the infinite dimensional cases. Here, we will first address
the one-dimensional system. While the one-dimensional system does not
allow us to study the full complexity of the percolation problem, many
of the concepts and measures introduced to study the one-dimensional
problem can be generalized to higher dimensions. In one dimension, the system percolates only when all L sites are occupied, so the percolation probability is
Π(p, L) = p^L .  (2.1)
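We can quickly check (2.1) numerically, since the system percolates only if all L sites are occupied. This is a minimal sketch with our own choice of parameters:

from pylab import *

L, p, N = 10, 0.9, 100000
hits = sum([all(rand(L) < p) for sample in range(N)])
print(hits/N, p**L)   # the two numbers should be close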
It is common to use the notation sn(s, p) for this probability for a given
site to belong to a cluster of size s. Why is it divided into two parts,
s and n(s, p)? Because we must divide the question into two parts: (1)
What is the probability for a given site to be a specific site in a cluster
of size s, and (2) how many such specific sites are there? What do we
mean by a specific site? For cluster number 3 in Fig. 2.1 there are 4 sites.
We could therefore ask the question, what is the probability for a site to
be the left-most site in a cluster of size s. This is what we mean by a
specific site. We could ask the same question about the second left-most,
the third left-most and so on. We call the probability for a site to be
a specific site in a cluster of size s (such as the left-most site in the
cluster) the cluster number density, and we use the notation n(s, p)
for this. To find the probability sn(s, p) for a site to belong to any of the
s sites in a cluster of size s we must sum the probabilities for each of the
specific sites. This is illustrated for the case of a cluster of size 4:
s n(s, p) = n(s, p) + n(s, p) + n(s, p) + n(s, p) for s = 4,

because each of these probabilities is the same. What is the probability for a site to be the left-most site in a cluster of size s in one dimension? In order for it to be in a cluster of size s, the site must be present, which has probability p, and then s − 1 sites must also be present to the right of it, which has probability p^{s−1}. In addition, the site to the left must be empty (illustrated by an X in Fig. 2.1, bottom part), which has probability (1 − p), and the site to the right of the last site must also be empty (illustrated by an X in Fig. 2.1, bottom part), which also has probability (1 − p). Since the occupation probabilities for each site are independent, the probability for the site to be the left-most site in a cluster of size s is:

n(s, p) = (1 − p) p^s (1 − p) = (1 − p)^2 p^s .
Fig. 2.1 Realization of an L = 16 percolation system in one dimension. Occupied sites are marked with black squares; the numbers above the sites give the cluster index of each occupied site.
The probabilities for a site to belong to a cluster of size s, summed over all s, must add up to the probability p for the site to be occupied: ∑_{s=1}^{∞} s n(s, p) = p. Let us check that this is indeed the case for the one-dimensional result we have found by calculating the sum:

∑_{s=1}^{∞} s n(s, p) = ∑_{s=1}^{∞} s p^s (1 − p)^2 = (1 − p)^2 p ∑_{s=1}^{∞} s p^{s−1} ,  (2.6)

where we use that

∑_{s=1}^{∞} s p^{s−1} = d/dp ∑_{s=0}^{∞} p^s = d/dp ( 1/(1 − p) ) = (1 − p)^{−2} ,  (2.7)

which gives

∑_{s=1}^{∞} s n(s, p) = (1 − p)^2 p ∑_{s=1}^{∞} s p^{s−1} = (1 − p)^2 p (1 − p)^{−2} = p .  (2.8)
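We can also verify this sum numerically by truncating it at a large value of s, where the terms are negligible (a sketch with our own choice of p):

from pylab import *

p = 0.9
s = arange(1, 2000)
print(sum(s * p**s * (1 - p)**2))   # prints approximately 0.9 = p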
How can we now estimate sn(s, p), the probability for a given site to be
part of a cluster of size s, from Ns ? The probability for a site to belong
to cluster of size s can be estimated by the number of sites belonging
to a cluster of size s divided by the total number of sites. The number
of sites belonging to a cluster of size s is sNs , and the total number of
sites is Ld , where L is the system size and d is the dimensionality. (Here,
d = 1). This means that we can estimate the probability sn(s, p) from
s n̄(s, p) = sNs / L^d ,  (2.9)
where we use a bar to show that this is an estimated quantity and not
the actual probability. We divide by s on both sides, and find
n̄(s, p) = Ns / L^d .  (2.10)
This argument and the result are valid in any dimension, not only for
d = 1. We have therefore found a method to estimate the cluster number
density:
Fig. 2.2 (Top) A plot of n(s, p)/(1 − p)^2 as a function of s for p = 0.900, 0.990, and 0.999 for a one-dimensional percolation system shows that the cut-off increases as p approaches 1. (Bottom) When the s axis is rescaled by s/sξ, all the curves fall onto a common scaling function, that is, n(s, p) = (1 − p)^2 F(s/sξ).
sξ ∝ |p − pc|^{−1/σ} ,  (2.25)

when p → pc. In one dimension, σ = 1.
where F(u) = u^2 e^{−u}. We will see later that this form is general – it is valid for percolation in any dimension, although with other values for the exponent −2. In percolation theory, we call this exponent τ:
n̄(s, p) = Ns(M) / (M L^d) ,  (2.29)
where Ns (M ) is the number of clusters of size s measured in M re-
alizations of the percolation system. We generate a one-dimensional
percolation system and index the clusters using
from pylab import *
from scipy.ndimage import measurements
L = 20
p = 0.90
z = rand(L)
m = z<p
lw, num = measurements.label(m)
Now, lw contains the indices for all the clusters. We can extract the size
of the clusters by summing the number of elements for each label:
labelList = arange(lw.max() + 1)
area = measurements.sum(m, lw, labelList)
We need to collect all the areas of all the clusters for many realizations,
and then calculate the number of clusters of each size s based on this
long list of areas. This is all brought together by continuously appending
the area-array to the end of an array allarea that contains the areas
of all the clusters.
from pylab import *
from scipy.ndimage import measurements

nsamp = 1000
L = 1000
p = 0.90
allarea = array([])
for i in range(nsamp):
    z = rand(L)
    m = z < p
    lw, num = measurements.label(m)
    labelList = arange(lw.max() + 1)
    area = measurements.sum(m, lw, labelList)
    allarea = append(allarea, area)
n, sbins = histogram(allarea, bins=int(max(allarea)))
s = 0.5*(sbins[1:] + sbins[:-1])
nsp = n/(L*nsamp)
sxi = -1.0/log(p)
nsptheory = (1 - p)**2*exp(-s/sxi)
plot(s, nsp, 'o', s, nsptheory, '-')
xlabel('$s$')
ylabel('$n(s,p)$')
We find the theoretically predicted form for n(s, p), which is n(s, p) = (1 − p)^2 exp(−s/sξ), where sξ = −1/ln p. This is calculated for the same
values of s as found from the histogram using:
sxi = -1.0/log(p)
nsptheory = (1-p)**2*exp(-s/sxi)
When we use the histogram function with many bins, we risk that many
of the bins contain zero elements. To remove these elements from the
plot, we can use the nonzero function to find the indices of the elements
of n that are non-zero:
i = nonzero(n)
And then we only plot the values of n(s, p) at these indices. The values
for the theoretical n(s, p) are calculated for all values of s:
plot(s[i], nsp[i], 'o', s, nsptheory, '-')
The resulting plot is shown in Fig. 2.3. We see that the measured results
and the theoretical values fit nicely, even though the theory is for infinite
system sizes, and the simulations were performed at L = 1000. We also
see that for larger values of s there are fewer observed values. It may
therefore be a good idea to make the bins used for the histogram larger
for larger values of s. We will return to this when we measure the cluster
number density in two-dimensional systems in chapter 4.
Fig. 2.3 Plot of the estimated n(s, p), based on M = 1000 samples of an L = 1000 system with p = 0.9, and the theoretical n(s, p) curve on a linear scale (top) and a semilogarithmic scale (bottom). The semilogarithmic plot clearly shows that n(s, p) follows an exponential curve.
The average cluster size S is

S = (1/p) ∑_s s^2 n(s, p)  (2.33)
  = ((1 − p)^2/p) ∑_s s^2 p^s  (2.34)
  = ((1 − p)^2/p) ∑_s (p ∂/∂p)(p ∂/∂p) p^s  (2.35)
  = ((1 − p)^2/p) (p ∂/∂p)(p ∂/∂p) ∑_s p^s  (2.36)
  = ((1 − p)^2/p) p ∂/∂p ( p/(1 − p)^2 )  (2.37)
  = (1 − p)^2 ∂/∂p ( p/(1 − p)^2 )  (2.38)
  = (1 − p)^2 ( 1/(1 − p)^2 + 2p/(1 − p)^3 )  (2.39)
  = (1 + p)/(1 − p) ,  (2.40)

where we have used the trick introduced in (2.7) to move the differentiation out through the sum. In addition, in (2.37) we have used our previous result for ∑_s s n(s, p) directly.
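The same truncated-sum approach verifies this result numerically (a sketch with our own choice of p):

from pylab import *

p = 0.9
s = arange(1, 5000)
S = sum(s**2 * p**s * (1 - p)**2)/p
print(S, (1 + p)/(1 - p))   # both are approximately 19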
This is on the general form S = Γ(p)(pc − p)^{−γ}, with γ = 1 and Γ(p) = 1 + p. That is, the average cluster size also
diverges as a power-law when p approaches pc . The exponent γ = 1 of
the power-law is again universal. That is, it depends on features such as
dimensionality, but not on details such as the lattice structure.
Later, we will observe that we have a similar behavior for percolation
in any dimension, although with other values of γ.
We will leave it as an exercise for our reader to find the behavior for
higher moments, Sk , using a similar argument.
The density of the spanning cluster, P(p; L), is similarly simple to find and discuss. The spanning cluster only exists for p ≥ pc. The discussion of P(p; L) is therefore not that interesting for the one-dimensional case. However, we can still introduce some of the general notions.

The behavior of P(p; ∞) in one dimension is given as

P(p; ∞) = 0 for p < 1, and P(p; ∞) = 1 for p = 1 .  (2.42)
We could introduce a similar finite size scaling discussion also for P (p; L).
However, we will here concentrate on the relation between P (p; L) and
the distribution of cluster sizes. The distribution of the size of a finite
cluster is described by sn(s, p), which is the probability that a given
site belongs to a cluster of size s. If we look at a given site, that site is
occupied with probability p. If a site is occupied it is either part of a
finite cluster of size s or it is part of the spanning cluster. Since these
two events cannot occur at the same time, the probability for a site to
be set must be the sum of the probability to belong to a finite cluster
and to belong to the infinite cluster. The probability to belong to a finite
cluster is the sum of the probabilities to belong to a cluster of size s, over all s. We therefore have the equality:

p = P(p; L) + ∑_s s n(s, p; L) ,  (2.43)

which is not only valid in the one-dimensional case, but also for percolation problems in general.
We can use this relation to find the density of the spanning cluster
from the cluster number density n(s, p) through
P(p) = p − ∑_s s n(s, p) .  (2.44)

In one dimension, (2.6)–(2.8) give ∑_s s n(s, p) = p for p < 1, so P(p) = 0 for all p < 1, consistent with (2.42).
2.4 Correlation length

From the simulations in Fig. 1.4 we see that the size of the clusters in-
creases as p → pc . We expect a similar behavior for the one-dimensional
system. We have already seen that the mass (or area) of the clusters di-
verges as p → pc . However, the characteristic cluster size sξ characterizes
the mass (or area) of a cluster. How can we characterize the extent of a
cluster?
To characterize the linear extent of a cluster, we find the probability for
two sites at a distance r to be part of the same cluster. This probability
is called the correlation function, g(r):
Fig. 2.4 An illustration of the distance r between two sites a and b. The two sites a and
b are connected if and only if all the sites between a and b are occupied.
not notice the effect of a finite system, because no cluster is large enough
to notice the finite system size. However, when ξ ≫ L, the behavior is
dominated by the system size L, and we are no longer able to determine
how close we are to percolation.
We have so far not discussed the effects of a finite lattice size L. We have
implicitly assumed that the lattice size L is so large that the corrections
will be small and can be ignored. However, we have now observed that
the average cluster size S, the characteristic cluster size sξ, and the
correlation length ξ diverge when p approaches pc. We will therefore
eventually start observing effects of the finite system size as p approaches
pc .
We have essentially ignored two effects:
• (a) the upper limit for cluster sizes is L and not ∞
• (b) there are corrections to n(s, p; L) due to the finite lattice size
The effect of (b) becomes clear as p approaches pc : As sξ increases it will
eventually be larger than L, which in one dimension also provides an
upper limit for s. This is indeed observed in the scaling collapse plot for
n(s, p), where we for finite lattice sizes will find a cross-over cluster size
sL , which depends on the lattice size L.
What will be the effect of including a finite upper limit L for all the
sums? This will imply that the result of the sum ∑_s p^s will be

∑_{s=1}^{L} p^s = p (1 − p^L)/(1 − p) ,  (2.48)
where the right-hand term is the probability that the system became
spanning when p increased from p to p + dp. That is, it is the probability
that the spanning cluster appeared for the first time for p between p and
p + dp. We can therefore interpret Π′ as the probability density for p0, which is the value of p at which a spanning cluster first appears.

What can we learn from the form of Π′? If we perform numerical experiments to find pc, we see that for finite system sizes L, we might observe a pc which is lower than 1. We can use Π′ to find the average p0 – this will be done more generally further on. Here, we will only study the width of the distribution Π′, which will give us an idea about the possible deviation when we measure pc through a measurement of p0. We define the width as the value p_x for which Π′ has reached 1/2 (or some other value you like):
Π′(p_x, L) = L p_x^{L−1} = 1/2 .  (2.51)

This gives

ln p_x = − ln 2 / (L − 1) .  (2.52)
We will now use a standard approximation for ln x, when x is close to 1, by writing

ln p_x = ln(1 − (1 − p_x)) ≃ −(1 − p_x) ,  (2.53)

where we have used that ln(1 − x) ≃ −x when x ≪ 1. This gives us that

(1 − p_x) ≃ ln 2 / (L − 1) ,  (2.54)

and consequently,

p_x = pc − ln 2 / (L − 1) ,  (2.55)

where we have used that pc = 1 in one dimension.
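A small sketch comparing the exact expression from (2.52) with the approximation in (2.55) shows how the width shrinks with L (the system sizes are our own choices):

from pylab import *

for L in [16, 64, 256, 1024]:
    px_exact = exp(-log(2)/(L - 1))    # from (2.52)
    px_approx = 1 - log(2)/(L - 1)     # from (2.55), with pc = 1
    print(L, px_exact, px_approx)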
2.6 Exercises
3 Infinite-dimensional percolation

We have now seen how the percolation problem can be solved exactly for
a one-dimensional system. However, in this case the percolation threshold
is pc = 1, and we were not able to address the behavior of the system
for p > pc . There is, however, another system in which many features of
the percolation problem can be solved exactly, and this is percolation
on a regular tree structure on which there are no loops. The condition
of no loops is essential. This is also why we call this system a system of
infinite dimensions, because we need an infinite number of dimensions in
Euclidean space in order to embed a tree without loops. In this section, we will provide an explicit solution of the percolation problem on a particular tree structure called the Bethe lattice.
The Bethe lattice, which is also called the Cayley tree, is a tree
structure in which each node has Z neighbors. This structure has no loops.
If we start from the central point and draw the lattice, the perimeter
grows as fast as the bulk. Generally, we will call Z the coordination
number. The Bethe lattice is illustrated in Fig. 3.1.
Fig. 3.1 Illustration of four generations of the Bethe lattice with number of neighbors Z = 3: (a) the lattice grown from a central point; (b) the central point and its three branches, Branch 1, Branch 2, and Branch 3.
Starting from an occupied site, each of its neighbors opens a new branch, and each site on a branch is occupied with probability p and opens Z − 1 further branches. The cluster can therefore continue to grow indefinitely when

p(Z − 1) ≥ 1 ,  (3.1)

which gives the percolation threshold pc = 1/(Z − 1).
site, and then address the probability that a given branch is connected
to infinity.
We can use a strictly technical approach to find P by noting that P
can be found from
p = P + ∑_s s n(s, p) ,  (3.3)
s
where the sum is the probability that the site is part of a finite cluster,
that is, it is the probability that the site is not connected to infinity. Let
us use Q to denote the probability that a branch does not lead to infinity.
The concept of a central point and a branch is illustrated in Fig. 3.1.
We can arrive at this result by noticing that the probability that a site is not connected to infinity in a particular direction is Q. The probability that the site is not connected to infinity in any direction is therefore Q^Z. The probability that the site is connected to infinity is therefore 1 − Q^Z. In addition, we need to include the probability p that the site is occupied. The probability that a given site is connected to infinity, that is, that it is part of the spanning cluster, is therefore

P = p(1 − Q^Z) .  (3.4)
A branch does not lead to infinity if the first site on the branch is empty, with probability (1 − p), or if it is occupied but none of its Z − 1 sub-branches lead to infinity, with probability pQ^{Z−1}. This gives the self-consistency equation

Q = (1 − p) + pQ^{Z−1} .  (3.5)

For Z = 3 this becomes

Q = 1 − p + pQ^2 ,  (3.6)

which we rewrite as

pQ^2 − Q + 1 − p = 0 .  (3.7)
The solution of this second order equation is

Q = (1 ± √((2p − 1)^2)) / (2p) = { 1 for p < pc ; (1 − p)/p for p > pc } .  (3.8)
For Z = 3, the density of the spanning cluster is therefore

P = p(1 − Q^3)  (3.9)
  = p(1 − ((1 − p)/p)^3)  (3.10)
  = p(1 − (1 − p)/p)(1 + (1 − p)/p + ((1 − p)/p)^2) .  (3.11)
This result is illustrated in Fig. 3.2.
From this we observe the expected result that when p → 1, P (p) ∝ p.
We can rewrite the equation as

P = 2(p − 1/2)(1 + (1 − p)/p + ((1 − p)/p)^2) .  (3.12)
From this we can immediately find the leading order behavior when p → pc = 1/2: each factor in the second parenthesis approaches 1, so that P ≃ 6(p − pc). The density of the spanning cluster therefore vanishes linearly in (p − pc), corresponding to the exponent β = 1.
Fig. 3.2 (Top) A plot of P (p) as a function of p for the Bethe lattice with Z = 3. The
tangent at p = pc is illustrated by a straight line. (Bottom) A plot of the average cluster
size, S(p), as a function of p for the Bethe lattice with Z = 3. The average cluster size
diverges when p → pc = 1/2 both from below and above.
S = 1 + ZT , (3.15)
where the 1 represents the central point, and T is the average number
of sites on each branch. We will again find a self-consistent equation for
T , starting from a center site. The average cluster size T is found from
summing the probability that the next site k is empty, 1 − p, multiplied
with the contribution to the average in this case (0), plus the probability
that the next site is occupied, p, multiplied with the contribution in this
case, which is the contribution from the site (1) and the contribution of
the remaining Z − 1 subbranches. In total:

T = (1 − p) · 0 + p (1 + (Z − 1)T) ,

so that T = p/(1 − (Z − 1)p) and S = 1 + ZT = (1 + p)/(1 − (Z − 1)p),
which is illustrated in Fig. 3.2. The expression for S(p) can therefore be
written on the general form
S = Γ / (pc − p)^γ .  (3.19)
3.4 Cluster number density

In order to find the cluster number density for the Bethe lattice, we need to address how we can find the cluster number density in general. For a given s, we need to find all possible configurations of clusters of size s, and sum up their probabilities:
n(s, p) = ∑_{c(s)} p^s (1 − p)^{t(c)} ,
Here we have included the term p^s because all s sites of the cluster must be present, and we have included the term (1 − p)^{t(c)} because all the neighboring sites must be unoccupied, and there are t(c) neighbors for configuration c. Based on this, we realize that we could instead make a sum over all t, but then we need to include the effect that several different clusters can have the same t. We therefore introduce the degeneracy factor g_{s,t}, which gives the number of different clusters that have size s and a number of neighbors equal to t. The cluster number density can then be written as

n(s, p) = ∑_t g_{s,t} p^s (1 − p)^t .
There are two clusters with t = 8, and four clusters with t = 7. There
are no other clusters of size s = 3. We can therefore conclude that for
the two-dimensional lattice, we have g_{3,8} = 2, g_{3,7} = 4, and g_{3,t} = 0
for all other values of t.
For the Bethe lattice, there is a particularly simple relation between
the number of sites, and the number of neighbors. We can see this by
looking at the first few generations of a Bethe lattice grown from a central
seed. For s = 1, the number of neighbors is t_1 = Z. When we add one more site, we remove one neighbor from what we had previously in order to add the new site, and then we add Z − 1 new neighbors: for s = 2 we get t_2 = t_1 + (Z − 2). Consequently,
t_k = t_{k−1} + (Z − 2) ,  (3.22)
44 3 Infinite-dimensional percolation
and therefore:
t_s = s(Z − 2) + 2 .  (3.23)
The cluster number density, given by the sum over all t, is therefore reduced to only a single term for the Bethe lattice:

n(s, p) = g_s p^s (1 − p)^{s(Z−2)+2} = g_s [p(1 − p)^{Z−2}]^s (1 − p)^2 ,

where we write g_s for g_{s,t_s}.
Fig. 3.4 A plot of f(p) = p(1 − p)^{Z−2}, which is the term in the cluster number density n(s, p) = g_s [p(1 − p)^{Z−2}]^s (1 − p)^2 for the Bethe lattice. We notice that f(p) has a maximum at p = pc, so that the first derivative, f′(p), is zero at this point. A Taylor expansion of f(p) around p = pc will therefore have a second order term in (p − pc) as the lowest-order term – to lowest order it is a parabola at p = pc. It is this second order term which determines the exponent σ, which consequently is independent of Z.
f(p) ≃ f(pc) − (1/2) f″(pc)(p − pc)^2 = A(1 − B(p − pc)^2) .  (3.32)
The cluster number density is then

n(s, p) = g_s [f(p)]^s (1 − p)^2 ≃ g_s A^s (1 − B(p − pc)^2)^s (1 − p)^2 .

We use the first order of the Taylor expansion ln(1 − x) ≃ −x to get

n(s, p) ≃ g_s A^s e^{−sB(p−pc)^2} (1 − p)^2 .  (3.35)
sξ = B^{−1}(p − pc)^{−2} ,  (3.39)
S = ∑_s s^2 n(s, pc) → ∞ ,  (3.42)

∑_s s n(s, p) = p − P ,  (3.45)

∑_s s^2 n(s, pc) → ∞ ,  (3.46)
which should not converge. This provides a set of limits on the possible
values of τ , because
∑_s s n(s, pc) ≃ ∑_s s^{1−τ} < ∞ ⇒ τ − 1 > 1 ,  (3.47)

and

∑_s s^2 n(s, pc) ≃ ∑_s s^{2−τ} → ∞ ⇒ τ − 2 ≤ 1 .  (3.48)
We will now use this expression to calculate S, for which we know the exact scaling behavior, and then again use this to find the value for τ:

S = C ∑_s s^{2−τ} e^{−s/sξ} ≃ C ∫_1^∞ s^{2−τ} e^{−s/sξ} ds .  (3.51)

Evaluating the integral gives

S ∼ sξ^{3−τ} ∼ (p − pc)^{−2(3−τ)} ∼ (p − pc)^{−1} ,  (3.55)
where we have used that

sξ ∼ (p − pc)^{−2} ,  (3.56)

and that

S ∼ (p − pc)^{−1} .  (3.57)

The direct solution therefore shows that 2(3 − τ) = 1, and consequently

τ = 5/2 .  (3.58)

This relation also satisfies the exponent relations we found above, since 2 < 5/2 ≤ 3. A plot of the scaling form is shown in Fig. 3.5.
Fig. 3.5 A plot of n(s, p) = s^{−τ} exp(−s(p − pc)^2) as a function of s for p − pc = 0.100, 0.010, and 0.001 illustrates how the characteristic cluster size sξ appears as a cut-off in the cluster number density that scales with p − pc.
where

n(s, pc) = C s^{−τ} ,  (3.60)

and

sξ = s_0 |p − pc|^{−1/σ} .  (3.61)

In addition, we have the following scaling relations:

P(p) ∼ (p − pc)^β ,  (3.62)

ξ ∼ |p − pc|^{−ν} ,  (3.63)

and

S ∼ |p − pc|^{−γ} ,  (3.64)
with a possible non-trivial behavior for higher moments of the cluster
density.
S ∝ L^{d−1} ,  (3.66)
However, for the Bethe lattice, the surface is proportional to the volume,
S ∝ V , which would imply that d → ∞.
3.6 Exercises
4 Finite-dimensional percolation

In two and three dimensions, clusters may contain loops: paths that split at one point may be connected again further out. For the Bethe lattice, we
could also estimate the multiplicity g(s, t) of the clusters, the number of
possible clusters of size s and surface t, since t was a function of s. In a two- or three-dimensional system this is not possible in the same way, because the multiplicity g(s, t) is not simple even in two dimensions, as illustrated in
Fig. 4.1.
This means that the solution methods used for the one- and the
infinite-dimensional systems cannot be extended to address two- or three-
dimensional systems. However, several of the techniques and observations
we have made for the one-dimensional and the Bethe lattice systems
can be used as the basis for a generalized theory that can be applied in any
dimension. Here, we will therefore pursue the more general features of
the percolation system, starting with the cluster number density, n(s, p).
Fig. 4.1 Illustration of the possible configurations for two-dimensional clusters of size s = 1, 2, 3, 4, labeled by their size s and their number of neighbors t.
4.1 Cluster number density

We have found that the cluster number density plays a fundamental role
in our understanding of the percolation problem, and we will use it here
as our basis for the scaling theory for percolation.
When we discussed the Bethe lattice, we found that we could write the cluster number density as a sum over all possible configurations of clusters of size s:

n(s, p) = ∑_j p^s (1 − p)^{t_j} ,  (4.1)

where j runs over all different configurations, and t_j denotes the number of neighbors for this particular configuration. We can simplify this by rewriting the sum to be over all possible numbers of neighbors, t, and including the degeneracy g_{s,t}, the number of configurations with t neighbors:

n(s, p) = ∑_t g_{s,t} p^s (1 − p)^t .
bin. If we number the bins by i, then the edges of the bins are s_i = a^i, and the width of bin i is ∆s_i = s_{i+1} − s_i. We then count how many events, N_i, occur in the range from s_i to s_i + ∆s_i, and we use this to find the cluster number density n(s, p; L). However, since we now look at ranges of s values, we need to be precise: We want to measure the probability for a site to be a specific site of a cluster with size in the range from s to s + ∆s, that is, we want to measure n(s, p; L)∆s, which we estimate from

n(s_i, p; L)∆s_i = N_i / (M L^d) ,  (4.4)

and we find n(s, p; L) from

n(s_i, p; L) = N_i / (M L^d ∆s_i) .  (4.5)
It is important to remember to divide by ∆s_i when the bin sizes are not all the same! We implement this by generating an array of all the bin edges. First, we find an upper limit to the bins, that is, we find an i_m so that

a^{i_m} > max(s) ⇒ log_a a^{i_m} > log_a max(s) ,  (4.6)

that is,

i_m > log_a max(s) .  (4.7)

We can for example round the right hand side up to the nearest integer and generate the bin edges:

a = 1.2
logamax = ceil(log(max(allarea))/log(a))
logbins = a**arange(0, logamax)
We can then generate the histogram with this set of bin edges:

nl, nlbins = histogram(allarea, bins=logbins)

We must then find the bin sizes and the bin centers:

ds = diff(logbins)
sl = 0.5*(logbins[1:] + logbins[:-1])
Finally we plot the results. The complete code for this analysis is found
in the following script
a = 1.2
logamax = ceil(log(max(allarea))/log(a))
logbins = a**arange(0, logamax)
nl, nlbins = histogram(allarea, bins=logbins)
ds = diff(logbins)
sl = 0.5*(logbins[1:] + logbins[:-1])
nsl = nl/(M*L**2*ds)
loglog(sl, nsl, '.b')
The resulting plot for a = 1.2 is shown in Fig. 4.2c. Notice that the
resulting plot now is much easier to interpret than the linearly binned plot.
(You should, however, always reflect on whether your binning method may influence the resulting plot in some way, since there may be cases where your choice of binning method affects the results you get, although this is not expected to play any role in your measurements in this book.) We will therefore in the following adopt logarithmic binning strategies whenever we measure a dataset which is sparse.
Fig. 4.2 Plot of n(s, p; L) estimated from M = 1000 samples for p = 0.58 and L = 200.
(a) Direct plot. (b) Log-log plot. (c) Plot of the logarithmically binned distribution.
sξ ∝ |p − pc|^{−1/σ} ,  (4.8)

where σ was 1 in one dimension. We will now use this to develop a theory for both n(s, p; L) and sξ based on our experience from one- and infinite-dimensional percolation.
Fig. 4.3 (a) Plot of n(s, p; L) as a function of s for various values of p for a 512 × 512
lattice. (b) Plot of sξ (p) measured from the plot of n(s, p) corresponding to the points
shown in circles in (a).
n(s, p) = n(s, pc) F(s/sξ) ,
d      β      τ        σ      γ      ν     D      µ     Dmin   Dmax   DB
1      –      2        1      1      1     –      –     –      –      –
2      5/36   187/91   36/91  43/18  4/3   91/48  1.30  1.13   1.4    1.6
3      0.41   2.18     0.45   1.80   0.88  2.53   2.0   1.34   1.6    1.7
4      0.64   2.31     0.48   1.44   0.68  3.06   2.4   1.5    1.7    1.9
Bethe  1      5/2      1/2    1      1/2   4      3     2      2      2
Fig. 4.4 A plot of n(s, p)s^τ as a function of |p − pc|^{1/σ} s shows that the cluster number density satisfies the scaling ansatz of (4.12).
where we have introduced the function F̂ (u) = F (uσ ). These forms are
equivalent, but in some cases this form produces simpler calculations.
This scaling form should in particular be valid for both the 1d and
the Bethe lattice cases - let us check this in detail.
where F(u) = u^2 e^{−u}. This is indeed in the general scaling form with τ = 2.
4.2 Consequences of the scaling ansatz
We can now insert the scaling ansatz n(s, p) = s^{−τ}F(s/sξ), getting:

S(p) = ∫_1^∞ s^{2−τ} F(s/sξ) ds .  (4.20)

We know that the function F(s/sξ) goes very rapidly to zero when s is larger than sξ, and that it is approximately a constant when s is smaller than sξ. We therefore approximate F(u) by a step function which is constant up to u = 1 and zero afterwards, and only integrate up to sξ, where F(s/sξ) is constant:

S(p) = ∫_1^∞ s^{2−τ} F(s/sξ) ds ≃ ∫_1^{sξ} C s^{2−τ} ds ,  (4.21)

which gives

S(p) = C′ sξ^{3−τ} .  (4.22)
P(p) = p − ∑_s s n(s, p) .  (4.27)
Splitting the sum and using the scaling ansatz, we get

P(p) = p − ∑_s s n(s, p) ≃ p − c_1 − c_2 sξ^{2−τ} .  (4.31)

For P(p) to remain finite as p → pc, the exponent 2 − τ must be
smaller than or equal to zero, otherwise P (p) will diverge. This gives us
a new boundary for τ: 2 − τ ≤ 0, that is, τ ≥ 2.
The two boundaries we have for τ are then 2 ≤ τ < 3. We have therefore
bounded τ between 2 and 3! This is a nice result from the scaling ansatz.
Relating β and τ. We can rewrite the expression in (4.31) for P(p) and insert sξ = s_0|p − pc|^{−1/σ}, getting:

P(p) ≃ p − c_1 − c_2 sξ^{2−τ} ≃ (p − pc) + c_2 |p − pc|^{(τ−2)/σ} ,  (4.33)
4.4 Exercises
We have then produced the array lw that contains labels for each of the
connected clusters.
a) Familiarize yourself with labeling by looking at lw, and by studying
the second example in the python help system on the image analysis
toolbox.
We can examine the array directly by mapping the labels onto a
color-map, using imshow.
imshow(lw)
You can also extract information about the clusters using the
skimage.measure module. This provides a powerful set of tools that can
be used to characterize the clusters in the system. For example, you can
determine if a system is percolating by looking at the extent of a cluster.
If the extent in any direction is equal to L, then the cluster is spanning
the system. We can use this to find the area of the spanning cluster or
to mark if there is a spanning cluster:
import skimage

props = skimage.measure.regionprops(lw)
spanning = False
for prop in props:
    if prop.bbox[2] - prop.bbox[0] == L or prop.bbox[3] - prop.bbox[1] == L:
        # This cluster is percolating
        area = prop.area
        spanning = True
        break
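Putting these pieces together, one possible way (our sketch, not code from the text) to estimate the density P(p, L) of the spanning cluster is:

from pylab import *
from scipy.ndimage import measurements
import skimage

def spanning_density(L, p):
    lw, num = measurements.label(rand(L, L) < p)
    for prop in skimage.measure.regionprops(lw):
        if prop.bbox[2] - prop.bbox[0] == L or prop.bbox[3] - prop.bbox[1] == L:
            return prop.area/L**2    # density of the spanning cluster
    return 0.0                       # no spanning cluster in this sample

print(mean([spanning_density(100, 0.62) for sample in range(20)]))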
P(p, L) ∼ (p − pc)^β .  (4.35)
Use the data from above to find an expression for β. For this you may
need that pc = 0.59275.
Your task is to determine the distribution function fZ (z) for this distri-
bution. Hint: the distribution is of the form f(u) ∝ u^α .
a) Find the cumulative distribution, that is, P (Z > z). You can then
find the actual distribution from
f_Z(z) = − dP(Z > z)/dz .  (4.36)
b) Generate a method to do logarithmic binning in python. That is, you
estimate the density by doing a histogram with bin-sizes that increase
exponentially in size.
Hint. Remember to divide by the correct bin-size.
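A minimal sketch of one possible solution, assuming the pylab-style imports used in this book (the function name logbin is our own choice):

from pylab import *

def logbin(data, a=1.5):
    # Histogram with bin edges that grow geometrically by the factor a
    imax = ceil(log(max(data))/log(a))
    bins = a**arange(0, imax + 1)
    counts, edges = histogram(data, bins=bins)
    ds = diff(edges)                         # bin widths
    centers = 0.5*(edges[1:] + edges[:-1])
    return centers, counts/ds                # divide by the bin width!

# usage sketch: centers, density = logbin(allarea)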
5 Geometry of clusters

We have so far studied the clusters in our model porous material, the
percolation system, through the distribution of cluster sizes, n(s, p),
and derivatives of this, such as the average cluster size, S, and the
characteristic cluster size, sξ . However, clusters with the same mass,
s, can have very different shapes. Fig. 5.1 illustrates three clusters all
with s = 20 sites. (The linear and the compact clusters are unlikely, but
possible realizations). How can we characterize the diameter or radius of
these clusters?
There are many ways to define the extent of a cluster. We could, for
example, define the maximum distance between any two points in the
cluster (Rmax) to be the extent of the cluster, or we could use the average distance between pairs of sites in the cluster. A common choice is the radius of gyration, R_i, of cluster i:

R_i^2 = (1/(2 s_i^2)) ∑_{n,m} (r_n − r_m)^2 ,

where the sum is over all sites n and m in cluster i, and we have divided
by 2s_i^2 because each site is counted twice and the number of components in the sum is s_i^2. The radius of gyration of the clusters in Fig. 5.1 is
illustrated by the circles in the figures1 .
This provides a measure of the radius of a cluster i. As we see from
Fig. 5.1, clusters of the same size s can have different radii. How can we
then find a characteristic size for a given cluster size s? We find that by
averaging over all clusters of the same size s.
r_{cm,i} = (1/s_i) ∑_{j=1}^{s_i} r_{i,j} ,  (5.8)
We assume that the clusters are numbered and marked in the lattice
with their index, as done by the lw, num = measurements.label(m)
command. We can find the center of mass by a built-in function, such as
cm = measurements.center_of_mass(m, lw, labelList) or we can
calculate the center-of-mass explicitly. This is done by running through
all the sites ix,iy in the lattice. For each site, we find what cluster ilw the site belongs to: ilw = lw[ix,iy]. If the site belongs to a cluster, that is, if ilw>0, we add the coordinates for this part of the cluster to the sum
for the center of mass of the cluster
rcm[ilw] = rcm[ilw] + array([ix,iy])
Finally, we find the center of mass for each of cluster by dividing rcm by
the corresponding area for each of the clusters:
rcm[:,0] = rcm[:,0]/area
rcm[:,1] = rcm[:,1]/area
After running through all the sites, we divide by the area, si , to find the
radius of gyration according to the formula
R_i^2 = (1/s_i) ∑_{j=1}^{s_i} (r_{i,j} − r_{cm,i})^2 ,  (5.9)
import numba

@numba.njit(cache=True)
def gyrationloop(lw, cm, rad2):
    # Sum the squared distance from each occupied site to the center of
    # mass of its cluster (this pure loop is what numba compiles)
    nx = lw.shape[0]
    ny = lw.shape[1]
    for ix in range(nx):
        for iy in range(ny):
            ilw = lw[ix, iy]
            if ilw > 0:
                dx = ix - cm[ilw, 0]
                dy = iy - cm[ilw, 1]
                rad2[ilw] = rad2[ilw] + dx*dx + dy*dy
    return rad2

def radiusofgyration(m, lw):
    labelList = arange(lw.max() + 1)
    area = measurements.sum(m, lw, labelList)
    cm = array(measurements.center_of_mass(m, lw, labelList))
    rad2 = gyrationloop(lw, cm, zeros(int(lw.max() + 1)))
    rad2 = rad2/maximum(area, 1)   # squared radius of gyration per cluster
    return area, cm, rad2
We use this function to calculate the average radius of gyration for each
cluster size s and plot the results using the following script:
M = 20     # Nr of samples
L = 400    # System size
p = 0.58   # p-value
allr2 = array([])
allarea = array([])
for i in range(M):
    z = rand(L, L)
    m = z < p
    lw, num = measurements.label(m)
    area, rcm, rad2 = radiusofgyration(m, lw)
    allr2 = append(allr2, rad2)
    allarea = append(allarea, area)
The resulting plots for several different values of p are shown in Fig. 5.2. We see that there is an approximately linear relation between Rs^2 and s in this double-logarithmic plot, which indicates that there is a power-law relationship between the two:
Rs^2 ∝ s^x .  (5.10)
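The exponent x, and from it D = 2/x, can be estimated from the measured data above by a rough least-squares fit in the log-log plot over all individual clusters (a sketch reusing the arrays allarea and allr2 from the script above):

from pylab import *

i = nonzero(allarea*allr2)
c = polyfit(log(allarea[i]), log(allr2[i]), 1)
print('x =', c[0], ' D =', 2/c[0])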
How can we interpret this relation? Equation (5.10) relates the radius Rs
and the area (or mass) of the cluster. We are more used to the inverse
relation:
s ∝ Rs^D ,  (5.11)
where D = 2/x is the exponent relating the radius to the mass of a
cluster. This corresponds to our intuition from geometry. We know that
for a cube of size L, the mass (or volume) of the cube is M = L^3. For a square of length L, the mass (or area) is M = L^2, and similarly for a circle M = πR^2, where R is the radius of the circle. For a line of length L, the mass is M = L^1. We see a general trend, M ∝ R^d,
where R is a characteristic length for the object, and d describes the
dimensionality of the object. If we extend this intuition to the relation
in (5.11), which is an observation based on Fig. 5.2, we see that we may
74 5 Geometry of clusters
Fig. 5.2 Plot of Rs^2 as a function of s for simulations of a two-dimensional percolation system with L = 400, for p = 0.40, 0.45, 0.50, 0.55, and 0.59. The largest cluster for each value of p is illustrated by a circle. The dotted line shows the curve Rs^2 ∝ s^{2/D} for D = 1.89.
sξ ∝ R_{sξ}^D .  (5.12)
when p < pc. The behavior is similar when p > pc, but the prefactor s_0 may have a different value. How does R_{sξ} behave when p approaches pc? We can find this by combining the scaling relations for sξ and R_{sξ}. We remember that R_{sξ} ∝ sξ^{1/D}. Therefore

R_{sξ} ∝ sξ^{1/D} ∝ ((p − pc)^{−1/σ})^{1/D} ∝ (p − pc)^{−1/(σD)} ,  (5.14)
Fig. 5.3 A plot of ξ as a function of p for L = 64, 128, 256, and 512. We would expect ξ to diverge when p → pc; in these finite systems, however, the correlation length does not diverge, but crosses over as a result of the finite system size.
where we have inserted Rs^2 ∝ s^{2/D}. This expression is valid when s < sξ.
We insert it here since F (s/sξ ) goes rapidly to zero when s > sξ , and
therefore only the s < sξ values will contribute significantly to the
integral. We change variables to u = s/sξ , getting:
R^2 ∝ ( sξ^{2/D+3−τ} ∫_{1/sξ}^{∞} u^{2/D+2−τ} F(u) du ) / ( sξ^{3−τ} ∫_{1/sξ}^{∞} u^{2−τ} F(u) du )  (5.22)

∝ sξ^{2/D} ( ∫_0^{∞} u^{2/D+2−τ} F(u) du ) / ( ∫_0^{∞} u^{2−τ} F(u) du ) ∝ sξ^{2/D} ,  (5.23)
where the two integrals over F (u) simply are numbers, and therefore
have been included in the constant of proportionality.
This shows that R^2 ∝ sξ^{2/D}. We found above that R_{sξ} ∝ sξ^{1/D}, that is, R_{sξ}^2 ∝ sξ^{2/D}. Therefore, R ∝ R_{sξ}! These two characteristic lengths therefore have the same behavior. They are only different by a constant of proportionality, R = c R_{sξ}. We can therefore use either length to characterize the system — they are effectively the same.
Fig. 5.4 illustrates the radius of gyration of the largest cluster with a
circle and the average radius of gyration, R, indicated by the length of
the side of the square. As p increases, both the maximum cluster size
and the average cluster size increase — according to the theory they
are indeed proportional to each other and therefore increase in concert.
Fig. 5.4 Illustration of the largest cluster in 512 × 512 systems for p = 0.55, p = 0.57,
and p = 0.59. The circles illustrate the radius of gyration of the largest cluster, and the
boxes show the size of the average radius of gyration, R = ⟨Rs⟩. We observe that both lengths increase approximately proportionally as p approaches pc.
where the sum is over all sites j and the average is also over all sites
i. The denominator is a normalization sum, which corresponds to the
average cluster size, S. You can think of this sum in the following way:
For a site i, we sum over all other sites j in the system. The probability
that site j belongs to the same cluster as site i is g(rij ; p), and the mass
of the site at j is 1. The average number of sites connected to site at i is
therefore:
S(p) = ⟨ ∑_j g(r_ij; p) ⟩_i ,  (5.25)
Fig. 5.5 shows the resulting plots of the correlation function g(r; p) for
various values of p for an L = 200 system. First, we notice that the
scaling is rather poor. We will understand this as we develop a theory
for g(r; p) below. The plot shows that there is indeed a cross-over length
ξ, beyond which the correlation function falls rapidly to zero. And there
appears to be a scaling regime for r < ξ where the correlation function is approximately a power-law, although it is unclear how wide that scaling regime is in this plot. The plot suggests the functional form

g(r; p) ∝ r^{−(d−2+η)} f(r/ξ) ,

where the cross-over function f(u) falls rapidly to zero when u > 1 and is approximately constant when u < 1. When p approaches pc, the
Fig. 5.5 A plot of g(r; p) as a function of r for p = 0.48, 0.50, 0.52, 0.54, and 0.56. The function approaches a power-law behavior g(r) ∝ r^x when p approaches pc.
We can use this scaling form to determine the exponent η. We know that
the average cluster size S is given as an integral over g(r; p), that is
S = ∑_j g(r_ij; p) = ∫ g(r; p) dr .  (5.33)

Let us use the scaling form for g(r; p) to calculate this integral when p approaches pc, but is not equal to pc:

S = ∫ g(r; p) dr = ∫_1^∞ r^{−(d−2+η)} f(r/ξ) dr^d  (5.34)

  = ∫∫_1^∞ r^{−(d−2+η)} r^{d−1} exp(−r/ξ) dr dΩ ∝ ∫_1^∞ r^{1−η} exp(−r/ξ) dr  (5.35)

  = ξ^{2−η} ∫ (r/ξ)^{1−η} exp(−r/ξ) (dr/ξ) = ξ^{2−η} ∫ u^{1−η} exp(−u) du ∝ ξ^{2−η} .  (5.36)
ξ ∝ |p − pc|^{−γ/(2−η)} ,  (5.38)
S ∝ sξ^{3−τ} ∝ R^{D(3−τ)} ,  (5.39)
We will typically only use the symbol ξ for this characteristic length
of the system, and the exponent ν characterizes how ξ diverges as p
approaches pc :
Correlation length

The correlation length ξ scales as

ξ ∝ |p − pc|^{−ν} .
As we can observe in Fig. 5.6 the system becomes more and more
homogeneous when p goes away from pc . We will now address this feature
in more detail when p > pc .
Fig. 5.6 Illustration of the largest cluster in 512 × 512 systems with p > pc , for p = 0.593,
p = 0.596, and p = 0.610. The circles illustrate the radius of gyration of the largest cluster.
We observe that the radius of gyration increases as p approaches pc .
M ∝ L^D ,   (5.48)

M = hL² ,   (5.50)

ρ = hL²/L³ = hL^{−1} .   (5.51)
It is only in the case when we use a two-dimensional volume L × L with
a third dimension of constant thickness H larger than h, that we recover
a constant density ρ independent of system size.
Let us now return to the discussion of the mass M (p, L) of the spanning
cluster for p > pc in a finite system of size L. The behavior of the
percolation system for p > pc is illustrated in Fig. 5.6. We notice that
the correlation length ξ diverges when p approaches pc . At lengths larger
than ξ, the system is effectively homogeneous because there are no holes
significantly larger than ξ. There are two types of behavior, depending
on whether L is larger than or smaller than the correlation length ξ.
When L ≪ ξ, we are again in the situation where we cannot discern p from pc because the size of the holes (empty regions described by ξ when p > pc) in the percolation cluster is much larger than the system size. In this case, the mass of the percolation cluster will follow the scaling relation s ∝ R_s^D, and the finite section of size L of the cluster will follow the same scaling if we assume that the radius of gyration of the cluster inside a region of size L is proportional to L:

M(p, L) ∝ L^D .

In the other case, when L ≫ ξ and p > pc, the typical size of a hole in the percolation cluster is ξ, as illustrated in Fig. 5.6. This means that on lengths much larger than ξ, the percolation cluster is effectively homogeneous. We can therefore divide the L × L system into (L/ξ)^d regions of size ξ, so that for each such region the mass is m ∝ ξ^D. The total mass of the spanning cluster is therefore the mass of one such region multiplied by the number of regions:

M(p, L) ∝ ξ^D (L/ξ)^d = ξ^{D−d} L^d .
Fig. 5.7 Illustration of the spanning cluster in a 512 × 512 system at p = 0.595 > pc . In
this case, the correlation length is ξ = 102. The system is divided into regions of size ξ.
Each such region has a mass M(p, ξ) ∝ ξ^D, and there are (L/ξ)^d ≃ 25 such regions in
the system.
We can now introduce the complete behavior of the mass, M (p, L), of
the spanning cluster for p > pc :
M(p, L) ∝ { L^D ,  L ≪ ξ ;  ξ^{D−d} L^d ,  L ≫ ξ } .   (5.54)
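The first limit can be checked numerically by measuring the mass of the spanning cluster at p = pc for a few system sizes; the slope of log M against log L then estimates D. The following is a minimal sketch, with illustrative choices for the sizes and the number of samples.

from pylab import *
from scipy.ndimage import measurements
pc = 0.5927
nsamp = 20
for L in [64, 128, 256, 512]:
    M = 0.0
    for i in range(nsamp):
        z = rand(L, L) < pc
        lw, num = measurements.label(z)
        perc_x = intersect1d(lw[0,:], lw[-1,:])
        perc = perc_x[perc_x > 0]
        if len(perc) > 0:
            M = M + (lw == perc[0]).sum()   # Mass of the spanning cluster
    print(L, M/nsamp)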
Fig. 5.8 Illustrations of the spanning cluster (shown in red), and the other clusters
(shown in gray) at p = pc in an L = 900 site percolation system. a The 900 × 900 system. b The central 300 × 300 part. c The central 100 × 100 part. Each step represents a rescaling
by a factor 3. However, at p = pc , the correlation length is infinite, so a rescaling of the
length-scales should not influence the geometry of the cluster, which is evident from the
pictures: The percolation clusters are indeed similar in a statistical manner.
Fig. 5.9 Illustration of three generations of the Sierpinski gasket starting from an
equilateral triangle.
triangles from the first iteration, we need to rescale the x and the y axes
by a factor 2. We can write this as a rescaling of the system size, L, by a
factor 2
L′ = 2L .   (5.59)

Through this rescaling we get three triangles, each with the same mass as the original triangle. The mass is therefore rescaled by a factor 3:

M′ = 3M .   (5.60)

Combining these with the scaling relation M ∝ L^D, we see that

2^{−D} = 3^{−1} ,   (5.64)

giving

D = ln 3 / ln 2 .   (5.65)
We will use this rescaling relation as our definition of fractal dimension.
The relation corresponds to the relation M = LD for the mass. Let us also
show that this relation is indeed consistent with our notion of Euclidean
dimension. For a cube of size L, the mass is L3 . If we look at a piece
of size (L/2)³, we see that we need to rescale it by a factor of 2 in all directions to get back to the original cube, but the mass must be rescaled by a factor 8. We will therefore find the dimension from

D = ln 8 / ln 2 = 3 ,   (5.66)
which is, as expected, the Euclidean dimension of the cube.
Typically, the mass dimension is measured by box counting. The
sample is divided into regular boxes where the size of each side of the box
is δ. The number of boxes, N (δ), that contain the cluster are counted as
a function of δ. For a uniform mass we expect
N(δ) = (L/δ)^d ,   (5.67)

and for a fractal structure we expect

N(δ) = (L/δ)^D .   (5.68)
We leave it as an exercise for the reader to address what happens when
δ → 1, and when δ → L.
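A minimal box-counting sketch along these lines is shown below. It assumes that cluster is a boolean L × L array containing the structure and that L is divisible by each box size δ; the function name and arguments are illustrative.

from pylab import *

def boxcount(cluster, deltas):
    # Count the number of boxes of side delta that contain the cluster
    L = cluster.shape[0]
    N = zeros(len(deltas))
    for i, delta in enumerate(deltas):
        nb = L // delta
        # Reduce each delta x delta box to whether it contains any site
        boxes = cluster[:nb*delta, :nb*delta].reshape(nb, delta, nb, delta)
        N[i] = boxes.any(axis=(1, 3)).sum()
    return N

The dimension is then estimated from the slope of log N(δ) against log(L/δ).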
5.6 Exercises
Fig. 6.1 (a) Plot of P(p, L) as a function of p for L = 25, 50, 100, and 200. (b) Plot of P(pc; L) as a function of L.
P(p, L) = M(p, L)/L^d ∝ L^D/L^d = L^{D−d} ∝ L^{−β/ν} .   (6.4)

Finite size scaling ansatz. The fundamental idea of finite size scaling is then to assume a particular form of a function that encompasses the behavior both when ξ ≪ L and ξ ≫ L, by rewriting the expression for P(p, L) as

P(p, L) = L^{−β/ν} f(L/ξ) .

Indeed, we could have used this in order to find the exponent in the relation P ∝ ξ^{−β/ν}: it would simply have been enough to assume that P(p, L) = ξ^x for some exponent x in the limit of ξ ≪ L.

We have therefore found that in order to satisfy these conditions, the scaling form of P(p, L) must be

P(p, L) = L^{−β/ν} f(L/ξ) ,

where

f(u) = { const. ,  u ≪ 1 ;  u^{β/ν} ,  u ≫ 1 } .   (6.10)
Testing the scaling ansatz. We can now test the scaling ansatz by plotting P(p, L) according to the ansatz, following a strategy similar to what we developed for n(s, p). We rewrite the scaling function P(p, L) = L^{−β/ν} f(L/ξ) by inserting ξ = ξ_0 |p − pc|^{−ν}:

P(p, L) = L^{−β/ν} f( (L^{1/ν}(p − pc))^ν / ξ_0 ) = L^{−β/ν} f̃( L^{1/ν}(p − pc) ) .

Therefore, if we plot L^{1/ν}(p − pc) along the x-axis and L^{β/ν} P(p, L) along the y-axis, we expect all the data to fall onto a common curve, the curve f̃(u). This is done in Fig. 6.2, which shows that the measured data is consistent with the scaling ansatz. We call such a plot a scaling data collapse plot.
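A sketch of how such a collapse plot can be produced is shown below. It assumes that P[:, iL] holds measured values of P(p, L) for the p-values in the array p and the system sizes in LL, and it uses the two-dimensional values pc ≃ 0.5927, ν = 4/3, and β = 5/36.

from pylab import *
pc, nu, beta = 0.5927, 4.0/3.0, 5.0/36.0
for iL in range(len(LL)):
    L = LL[iL]
    plot(L**(1.0/nu)*(p - pc), L**(beta/nu)*P[:, iL], label="L="+str(L))
xlabel(r"$L^{1/\nu}(p-p_c)$")
ylabel(r"$L^{\beta/\nu}P(p,L)$")
legend()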
Fig. 6.2 A data-collapse plot of the rescaled P(p, L) for L = 25, 50, 100, and 200.
Now, let us see if we can apply the finite-size scaling approach to develop
a scaling theory for S(p, L). First, we will measure S(p, L), and then
develop and test a scaling theory.
S(p, L) = Σ_i s_i² / L² .   (6.20)
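A minimal sketch for measuring S(p, L) in a single sample is shown below. Whether the spanning cluster is included in the sum is a choice; here it is removed, as is common when studying the average cluster size.

from pylab import *
from scipy.ndimage import measurements
L = 100
p = 0.58
z = rand(L, L) < p
lw, num = measurements.label(z)
area = measurements.sum(z, lw, index=arange(1, num + 1))
# Remove the spanning cluster from the sum
perc_x = intersect1d(lw[0,:], lw[-1,:])
for lab in perc_x[perc_x > 0]:
    area[lab - 1] = 0
S = sum(area**2)/L**2
print("S(p, L) =", S)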
Fig. 6.3 (a) Plot of S(p, L) as a function of p for L = 25, 50, 100, and 200. (b) Plot of S(pc; L) as a function of L.
The resulting plot is shown in Fig. 6.4, which indeed demonstrates that
the measured data is consistent with the scaling theory. Success!
Fig. 6.4 A data-collapse plot of the rescaled average cluster size L^{−γ/ν} S(p, L) as a function of L^{1/ν}(p − pc) for various L.
from pylab import *
from scipy.ndimage import measurements
LL = [25, 50, 100, 200]    # System sizes
nL = len(LL)
nx = 100                   # Number of p-values (illustrative choices)
p = linspace(0.4, 0.75, nx)
Ni = zeros((nx, nL))       # Number of spanning samples
Pi = zeros((nx, nL))       # Estimated spanning probability
for iL in range(nL):
    L = LL[iL]
    N = int(2000*25/L)     # Number of samples, fewer for larger L
    for i in range(N):
        z = rand(L, L)
        for ip in range(nx):
            m = z < p[ip]
            lw, num = measurements.label(m)
            perc_x = intersect1d(lw[0,:], lw[-1,:])
            perc = perc_x[where(perc_x > 0)]
            if len(perc) > 0:
                Ni[ip, iL] = Ni[ip, iL] + 1
    Pi[:, iL] = Ni[:, iL]/N
for iL in range(nL):
    L = LL[iL]
    lab = "$L=" + str(L) + "$"
    plot(p, Pi[:, iL], label=lab)
ylabel(r'$\Pi(p,L)$')
xlabel('$p$')
legend()
Fig. 6.5 Plot of Π(p, L) as a function of p for L = 25, 50, 100, and 200.

The resulting plot is shown in Fig. 6.6. Here, we have also plotted p_{1/2} as a function of L, where p_{1/2} is the value of p for which Π(p_{1/2}, L) = 1/2. These values for p_{1/2} are calculated by a simple interpolation, as illustrated in the following program. (Notice that, as usual in this book, we do not aim for high precision in this program. The simulations are for small system sizes and few samples, but are meant to illustrate the principle and be reproducible for you.)
Fig. 6.6 (a) Plot of Π(p, L). (b) Plot of p1/2 as a function of L.
for iL in range(nL):
    L = LL[iL]
    ipc = argmax(Pi[:,iL] > 0.5)  # Find first value where Pi > 0.5
    # Interpolate from ipc-1 to ipc to find where Pi crosses 0.5
    ppc = p[ipc-1] + (0.5 - Pi[ipc-1,iL])*\
        (p[ipc] - p[ipc-1])/(Pi[ipc,iL] - Pi[ipc-1,iL])
    plot(L, ppc, 'o')
xlabel('$L$')
ylabel('$p_{1/2}$')
From Fig. 6.6 we see that as L increases, the value of p_{1/2} gradually approaches pc. Well, we cannot really see that it is approaching pc, but we guess that it will. However, in order to extrapolate the curve to infinite L we need to develop a theory for how p_{1/2} behaves. We need to develop a finite size scaling theory for Π(p, L).

p_x − pc = C_x L^{−1/ν} .   (6.33)
If we know ν, we see that this gives a method to estimate the value of
pc . Fig. 6.7 shows a plot of p1/2 − pc as a function of L−1/ν for ν = 4/3.
We can use this plot to extrapolate to find pc in the limit when L → ∞
as indicated in the plot. The resulting value for pc extrapolated from
L = 50, 100, 200 is pc = 0.5914, which is surprisingly good given the
small system sizes and small sample sizes used for this estimate. This
demonstrates the power of finite size scaling.
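The extrapolation itself is a simple linear fit, as in the following sketch. It assumes that LL holds the system sizes and that p12 is an array collecting the interpolated ppc values from the program above; p12 is a hypothetical name introduced here.

from pylab import *
nu = 4.0/3.0
x = array(LL, float)**(-1.0/nu)
# Fit p_{1/2} = pc + Cx*L^(-1/nu); the intercept estimates pc
c = polyfit(x[1:], array(p12)[1:], 1)   # Skip the smallest system size
print("Extrapolated pc =", c[1])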
Fig. 6.7 Plot of p1/2 as a function of L−1/ν . The dashed line indicates a linear fit to the
data for L = 50, 100, 200. The extrapolated value for pc at L → ∞ is pc = 0.5914.
and therefore

p_max = pc + x_0 L^{−1/ν} .   (6.41)

In each numerical experiment we are really measuring an effective pc, but as L → ∞ we see that p_eff → pc. The way it goes to pc tells us something about ν.
Average of the distribution. Because Π′ is a probability density, we can also calculate the average p of this distribution, that is, the average p at which we first get a percolation cluster in a system of size L. Let us call this quantity ⟨p⟩.
⟨p⟩ = ∫_0^1 p Π′(p) dp   (6.42)

    = ∫_0^1 p L^{1/ν} Φ′[(p − pc)L^{1/ν}] dp   (6.43)

    = ∫_0^1 (p − pc) L^{1/ν} Φ′[(p − pc)L^{1/ν}] dp + pc ∫_0^1 Π′ dp ,   (6.44)

where the last integral is simply a constant, so that we can write the average critical percolation threshold in a finite system of size L as

⟨p⟩ = pc + C L^{−1/ν} .
6.5 Exercises
7 Renormalization

L ≫ ξ ≫ a .   (7.1)
We will now address whether it is possible to average over some of the sites in such a way that the macroscopic behavior does not change significantly. That is, we want to replace cells of b^d sites with new, "renormalized" single sites. This averaging procedure is illustrated in Fig. 7.1.

Fig. 7.1 Illustration of the renormalization of a lattice of size L into a lattice of cells of size b, producing a renormalized lattice of size L/b.

We must then determine the occupation probability for the new, averaged sites. We will therefore call the new occupation probability p′, the probability to occupy a renormalized site. We write the mapping between the original and the new occupation probabilities as

p′ = R(p) ,   (7.2)

where the function R(p), which provides the mapping, depends on the details of the rule used for renormalization. It is important to realize that the system size L and the correlation length ξ do not change in real terms; it is only in units of the lattice constant that they change.
There are many choices for the mapping between the original and the
renormalized lattice. We have illustrated a particular rule for a mapping
with a rescaling b = 2 in Fig. 7.2. For a site percolation problem with
b = 2 there are 2^{b^d} = 16 possible configurations of the cell. The different configurations are classified into the 6 categories c, where the number of configurations
in each category is listed below. In Fig. 7.2 we have also illustrated a
particular averaging rule. However, we could also have chosen different
rules. Usually, we should ensure that the global information is preserved
by the mapping. For example, we would want the mapping to conserve
connectivity. That is, we would like to ensure that
Π(p, L) = Π(p′, L/b) .   (7.3)
However, even though we may ensure this on the level of the mapping,
this does not ensure that the mapping actually conserves connectivity
when applied to a large cluster - it may, for example, connect clusters
that were unconnected in the original lattice, or disconnect clusters that
were connected, as illustrated in Fig. 7.3.
For now, we will not consider the details of the renormalization mapping p′ = R(p); we will only assume that such a map exists and
study its qualitative features. Then we will address the renormalization
mapping through two worked examples. For any choice of mapping, the
rescaling must result in a change in the correlation length ξ:
ξ′ = ξ(p′) = ξ(p)/b .   (7.4)
We will use this relation to address the fixpoints of the mapping. A
fixpoint is a point p∗ that does not change when the mapping is applied.
That is
p∗ = R(p∗ ) . (7.5)
Fig. 7.2 Illustration of a renormalization rule for a site percolation problem with a
rescaling b = 2. The top row indicates various clusters categorized into 6 classes c. The
number of different configurations n in each class is also listed. The mapping ensures that
connectivity is preserved. However, this renormalization mapping is not unique: we could
have chosen many different averaging schemes.
Fig. 7.3 Illustration of a single step of renormalization on an 8 × 8 lattice of sites. We
see that the renormalization procedure introduces new connections: the blue cluster is
now much larger than in the original. However, the procedure also removes previously
existing connections: the original yellow cluster is split into two separate clusters.
Fig. 7.4 Illustration of the renormalization mapping p′ = R(p) as a function of p. The non-trivial fixpoint p∗ = R(p∗) is illustrated. Two iteration sequences are illustrated by the lines with arrows. Let us look at the path starting from p > p∗. Through the first application of the mapping, we read off the resulting value of p′. This value will then be the input value for the next application of the renormalization mapping. A fast way to find the corresponding position along the p axis is to reflect the p′ value from the line p′ = p shown as a dotted line. This gives the new p value, and the mapping is applied again, producing yet another p′ which is even further from p∗. With the drawn shape of R(p) there is only one non-trivial fixpoint, which is unstable.
and

ξ(p′) = ξ_0 (p′ − pc)^{−ν} .   (7.8)

We can then use the renormalization condition for the correlation length from (7.4) to obtain:

ξ_0 (p′ − pc)^{−ν} = (1/b) ξ_0 (p − pc)^{−ν} .   (7.9)
When p → pc, we see that both ξ(p) and ξ(p′) approach infinity, which implies that if p = pc, then we must also have that p′ = pc. That is, we have found that pc is a fixpoint of the mapping.

We see that the value of Λ characterizes the fixpoint. For Λ > 1 the new point p′ will be further away from p∗ than the initial point p. Consequently, the fixpoint is unstable. By a similar argument, we see that for Λ < 1 the fixpoint is stable. For Λ = 1 we call the fixpoint a marginal fixpoint.

Let us now assume that the fixpoint is indeed the percolation threshold. In this case, when p is close to pc, we know that the correlation length is ξ(p) = ξ_0 (p − pc)^{−ν}, and similarly ξ(p′) = ξ_0 (p′ − pc)^{−ν} for the renormalized point. We will now use (7.12) for p∗ = pc, giving

p′ − pc = Λ(p − pc) .   (7.15)

Inserting this into (7.9), we find Λ^{−ν} = b^{−1}, that is,

b = Λ^ν .   (7.19)
where ξ(p) was the functional form of the correlation length in the original system. At least we should be able to make this assumption in some small neighborhood around pc. That is, we assume that ξ(p) = ξ′(p) for |p − pc| ≪ 1. In this case, we can write the correlation length as a function of the deviation from pc: ε = p − pc. Similarly, we define ε′ = p′ − pc. The relation between the correlation lengths can then be written as

ξ(ε′) = ξ(ε)/b ,   (7.22)

where ξ(u) is a particular function of u. The Taylor expansion of the renormalization mapping R(p) in (7.12) can also be rewritten in terms of ε, giving

ε′ = Λε .   (7.23)

We can therefore rewrite (7.22) as

ξ(ε′) = ξ(Λε) = ξ(ε)/b ,   (7.24)

or, equivalently,

ξ(ε) = b ξ(Λε) .   (7.25)

This implies that ξ(ε) is a homogeneous function. Let us see how this function responds to iterations. We notice that after an iteration, the new value of ε is Λε, and we can write
7.2 Examples
p∗ = (p∗)^b .   (7.35)

We can also apply the theory directly to find the exponent ν. The renormalization relation is p′ = R(p) = p^b. We can therefore find Λ from:
Fig. 7.6 Illustration of a renormalization rule for a one-dimensional site percolation
system with b = 3.
Λ = ∂R/∂p |_{p∗} = b (p∗)^{b−1} = b ,   (7.36)
Fig. 7.7 Possible configurations for a 2 × 2 site percolation system. The top row indicates
various clusters categorized into 6 classes c. The number of different configurations n in
each class is also listed.
only one direction, there are only two of the configurations c = 3 that contribute to the spanning probability, and the renormalization relation becomes

p′ = R(p) = p^4 + 4p^3(1 − p) + 2p^2(1 − p)^2 .   (7.39)
This is the probability for configurations c = 1, c = 2, or c = 3 to occur.
The renormalization relation is illustrated in Fig. 7.8.
Fig. 7.8 Plot of the renormalization mapping R(p) together with the line R(p) = p.
c    P(c)          n(c)  Π|c
1    p^5(1 − p)^0   1     1
2    p^4(1 − p)^1   1     1
3    p^4(1 − p)^1   4     1
4    p^3(1 − p)^2   2     1
5    p^3(1 − p)^2   2     1
6    p^3(1 − p)^2   2     0
7    p^3(1 − p)^2   4     1
8    p^2(1 − p)^3   2     1
9    p^2(1 − p)^3   4     0
10   p^2(1 − p)^3   2     0
11   p^2(1 − p)^3   2     0
12   p^1(1 − p)^4   5     0
13   p^0(1 − p)^5   1     0

Table 7.1 A list of the possible configurations for renormalization of a bond lattice. The probability for percolation given that the configuration is c is denoted Π|c. The spanning probability for the whole cell is then Π(p) = p′ = Σ_c n(c) P(c) Π|c.
p′ = R(p) = Π = Σ_{c=1}^{13} n(c) P(c) Π|c ,   (7.48)

p′ = R(p)   (7.49)
   = p^5 + p^4(1 − p) + 4p^4(1 − p) + 2p^3(1 − p)^2   (7.50)
   + 2p^3(1 − p)^2 + 4p^3(1 − p)^2 + 2p^2(1 − p)^3   (7.51)
   = 2p^5 − 5p^4 + 2p^3 + 2p^2 .   (7.52)
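A short numerical check of this mapping is sketched below: we locate the fixpoints of R(p) from (7.52), evaluate Λ = dR/dp at the non-trivial fixpoint, and estimate ν = ln b / ln Λ with b = 2 for this cell.

from numpy import roots, log
# Fixpoints solve R(p) - p = 2p^5 - 5p^4 + 2p^3 + 2p^2 - p = 0
print(roots([2, -5, 2, 2, -1, 0]))   # Includes the non-trivial p* = 1/2
pstar = 0.5
Lam = 10*pstar**4 - 20*pstar**3 + 6*pstar**2 + 4*pstar   # R'(p*) = 13/8
print("Lambda =", Lam, " nu =", log(2)/log(Lam))

The non-trivial fixpoint is p∗ = 1/2, which is the exact bond percolation threshold in two dimensions, and the estimate ν = ln 2 / ln(13/8) ≈ 1.43 is close to the exact value ν = 4/3.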
connectivity, even if the original problem was a pure site problem. This
will produce a mixed site-bond percolation problem. The probability
q to connect two nearest-neighbors in the original site lattice must be
found by counting all possible combinations of spanning between nearest
neighbor sites in the original lattice. We may also have to introduce
next-nearest neighbor bonds and so on.
Let us describe the renormalized problem by the two renormalized
probabilities p0 for sites, and x0 for bonds. The renormalization procedure
will be described by a set of two renormalization relations:
p′ = R_1(p, x) ,   (7.55)
x′ = R_2(p, x) .   (7.56)
Now, the flow in the renormalization procedure will not simply be along the p axis, but will occur in the two-dimensional (p, x)-space, as illustrated in Fig. 7.12. We will no longer have a single critical point, pc, but a set of points (pc, xc) corresponding to a curve in (p, x)-space, as shown in the figure. We also notice that when x = 1 we have a pure site percolation problem, since all bonds are present and connectivity depends on the presence of sites alone, and similarly for p = 1 we have a pure bond percolation problem.
There are still two trivial fixpoints, for (p, x) = (0, 0) and for (p, x) = (1, 1), and we expect these points to be attractors. We will therefore need a line that separates the two trivial fixpoints. If we start on this line, we will remain on this line. We will therefore expect there to be a fixpoint on this line, the non-trivial fixpoint (p∗, x∗). We remark that the fixpoint no longer corresponds to the critical threshold: there will be a whole family of critical values corresponding to the curved, black line in Fig. 7.12.
We can find the non-trivial fixpoint from the equations
p∗ = R1 (p∗ , x∗ ) (7.57)
x∗ = R2 (p∗ , x∗ ) (7.58)
Let us linearize the system near the fixpoint. We will do a Taylor expansion for the two functions R_1(p, x) and R_2(p, x) around the point (p∗, x∗). To linear order the mapping becomes

[ p′ − p∗ ; x′ − x∗ ] = Λ [ p − p∗ ; x − x∗ ] ,

where Λ is the matrix of derivatives:
Fig. 7.12 Illustration of the flow due to renormalization in a combined site-bond perco-
lation system. The black line shows the critical line, on which the correlation length is
infinite, ξ = ∞. Below the critical line, renormalization will lead to the trivial fixpoint at
p, x = 0 as illustrated by the green lines. Above the line, renormalization will lead to the
fixpoint at p, x = (1, 1).
Λ_11 = ∂R_1/∂p |_{(p∗,x∗)} ,   Λ_12 = ∂R_1/∂x |_{(p∗,x∗)} ,   (7.61)
Λ_21 = ∂R_2/∂p |_{(p∗,x∗)} ,   Λ_22 = ∂R_2/∂x |_{(p∗,x∗)} .   (7.62)
We want to find the behavior after many iterations. This can be done by finding the eigenvectors and the eigenvalues of the matrix. That is, we find the vectors x_i = (p_i, x_i) such that
Λxi = λi xi . (7.64)
We know that we can find two such vectors, and that the vectors are
linearly independent, so that any vector x can be written as a linear
combination of the two eigenvectors:
[ p − p∗ ; x − x∗ ] = x = a_1 x_1 + a_2 x_2 .   (7.65)

Λx = λ_1 a_1 x_1 + λ_2 a_2 x_2 ,   (7.66)

Λ^N x = λ_1^N a_1 x_1 + λ_2^N a_2 x_2 .   (7.67)
We see that if both λ_1 < 1 and λ_2 < 1, then any deviation from the fixpoint will approach zero after many iterations, because the values λ_1^N → 0 and λ_2^N → 0. We call eigenvalues in the range 0 < λ < 1 irrelevant, and the fixpoint is stable. Eigenvalues with λ > 1 are termed relevant, because the iterated point will move away from the fixpoint along the direction specified by the corresponding eigenvector. Eigenvalues λ = 1 are termed marginal: there is no movement along this direction.
Let us look at the case when λ1 > 1 > λ2 , which corresponds to what
we will call a simple critical point. (For a simple critical point, there is
only one relevant eigenvalue, and all other eigenvalues are irrelevant.)
This corresponds to a stable behavior in the direction x2 , and an unstable
behavior in the x1 direction. That is, the behavior is like a saddle point,
as illustrated in Fig. 7.13. This is consistent with the picture of a critical
line. The flow along the line corresponds to the stable direction, and the
flow normal to the line corresponds to the unstable direction, which is
the natural generalization of the behavior we found in one dimension.
Therefore any point which is originally close to the line will first flow towards the fixpoint (p∗, x∗), before it flows out in the direction of the relevant eigenvector.
Let us now study the behavior close to the critical line in detail for
a system with λ1 > 1 > λ2 . We notice that the correlation length ξ
is infinite along the whole critical line, because it does not change by
iterations along the critical line. That is, we have just a single fixpoint, but
infinitely many critical points corresponding to a critical line. Let us start
at a point (p0 , 1) close to the critical line, and perform renormalization
in order to find the functional shape of ξ and the exponent ν. After k
iterations, the point has moved close to the fixpoint, just before it is
expelled out from the fixpoint. We can therefore write
Fig. 7.13 Illustration of the flow around the unstable saddle point corresponding to
the fixpoint p∗ . The black line shows the critical line, on which the correlation length is
infinite, ξ = ∞. Below the critical line, renormalization will lead to the trivial fixpoint at
p, x = 0 as illustrated by the green lines. Above the line, renormalization will lead to the
fixpoint at p, x = (1, 1).
" #
p(k) − p∗
= a1 x 1 + a2 x 2 . (7.68)
x(k) − x∗
Since the iteration point is close to the fixpoint, we will assume that we
can use the linear expansion around the fixpoint to address the behavior
of the system. After a further l iterations we assume that we are still in
the linear range, and the renormalized position in phase-space is
" #
p(k+l) − p∗
= λl1 a1 x1 + λl2 a2 x2 . (7.69)
x(k+l) − x∗
We stop the renormalization procedure at l = l∗ when λ_1^l a_1 ≃ 0.1 (or some other small value that we can choose). That is,

λ_1^{l∗} a_1 ≃ 0.1 .   (7.70)

ξ(p_0, 1) ∝ a_1^{−ν} ∝ (p_0 − pc)^{−ν} ,   (7.79)
with the same power-law exponent ν. We can also use similar arguments
to argue that the critical exponent ν is the same below and above the
percolation threshold.
We will use the concepts and tools we have developed so far to address
several problems of interest. First, let us address fragmentation: a large body that is successively broken into smaller parts due to fracturing. There can be many processes that may induce and direct the fracturing of the grain. For example, the fracturing may depend on an external load placed on the grain, on a rapid change in temperature in the grain, on a high-amplitude sound wave propagating through the grain, or on stress-corrosion or chemical decomposition processes. Typical examples
of fragment patterns are shown in Fig. 7.14.
Fig. 7.14 Illustration of a deterministic fragmentation model. The shaded areas indicate
the regions that will not fragment any further. That is, this drawing illustrates the case
of f = 3/4.
Why did I choose D to denote this exponent? Let us look at the scaling
properties of the structure generated by these iterations. Let us first
assume that we describe the system purely geometrically, and that we are
interested in the geometry of the regions that have fragmented. We will
therefore assume that areas that are no longer fracturing are removed,
and we are studying the mass that is left by this process. Let us start at
a length-scale `n , where the mass of our system is mn , and let us find
what the mass will be when the length is doubled. For f = 3/4 we can
then generate the new cluster by placing three of the original clusters
into three of the four places in the two-by-two square as illustrated in
Fig. 7.15. The rescaling of mass and length is therefore: `n+1 = 2`n , and
mn+1 = 3mn . Similarly, for arbitrary f , the relations are `n+1 = 2`n ,
and mn+1 = 4f mn . As we found in section 5.5, this is consistent with a
power-law scaling between the mass and length of the set
m(L) = m_0 (L/L_0)^D ,   (7.81)
where D = ln 3/ ln 2 is the fractal dimension of the structure. The value
for the case of general f is similarly D = ln(4f )/ ln 2.
Fig. 7.15 Illustration of construction by length and mass rescaling. Three instances of the fully developed structure with mass m_n and length ℓ_n are used to generate the same structure at a length ℓ_{n+1} and with mass m_{n+1} = 3m_n. The mass corresponds to the mass of the regions that are not left unfragmented.
Fig. 7.16 Illustration of the fragmentation model. In each iteration, each cubical grain is
divided into 8 identical smaller cubes. The fundamental rule for the model is that if there
are two neighboring grains of the same size, one of them will fracture. In the figure we
have shaded the regions that are not fragmented in this process for the first few steps of
the iterations.
7.5 Exercises
Fig. 7.17 Illustration of the 3d H-cell.
from pylab import *

def coarse(z, f):
    # The original array is z.
    # The transfer function f is given as a vector with 16 entries;
    # f applied to a two-by-two matrix returns the renormalized value.
    #
    # The values of the index correspond to the following
    # configurations of the two-by-two region that is renormalized,
    # where X marks a present site and 0 marks an empty site:
    #
    #  0 00   4 00   8 00  12 00
    #    00     X0     0X     XX
    #
    #  1 X0   5 X0   9 X0  13 X0
    #    00     X0     0X     XX
    #
    #  2 0X   6 0X  10 0X  14 0X
    #    00     X0     0X     XX
    #
    #  3 XX   7 XX  11 XX  15 XX
    #    00     X0     0X     XX
    #
    nx = shape(z)[0]
    ny = shape(z)[1]
    # (The body below is a minimal completion consistent with the
    # encoding documented above.)
    zz = zeros((nx//2, ny//2))
    for ix in range(0, nx, 2):
        for iy in range(0, ny, 2):
            index = int(z[ix, iy] + 2*z[ix, iy+1]
                        + 4*z[ix+1, iy] + 8*z[ix+1, iy+1])
            zz[ix//2, iy//2] = f[index]
    return zz
Fig. 8.1 Illustration of the spanning cluster, the singly connected bonds (red), the
backbone (blue), and the dangling ends (yellow) for a 256 × 256 bond percolation system
at p = pc . (Figure from Martin Søreng).
For example, we propose that the mass of the singly connected sites, M_SC, has the scaling form

M_SC ∝ L^{D_SC} ,

where we call the dimension D_SC the fractal dimension of the singly connected sites.
Scaling argument. Because the set of singly connected sites is a subset
of the spanning cluster, we know that MSC ≤ M . It therefore follows
that
DSC ≤ D . (8.2)
Generalization to other subsets. Based on this simple example, we
will generalize the approach to other subsets of the spanning cluster.
However, first we will introduce a new concept, a self-avoiding path on
the spanning cluster.
Scaling hypothesis for the minimal path. We assume that the mass of the minimal path also scales with the system size according to the scaling form:

M_min ∝ L^{D_min} ,   (8.3)

where we have introduced the scaling exponent of the minimal path, D_min.
8.1.3 Backbone
Intersection of all self-avoiding paths. The notion of SAPs can also
be used to address the physical properties of the cluster, such as we
saw for the singly connected bonds. The set of singly connected bonds
is the intersection of all SAPs connecting the two sides. That is, the singly connected bonds are the set of points that any path must go through in order to connect the two sides. From this definition, we notice that the dimension D_SC < D_min, and as we will see further on, D_SC = 1/ν, which is smaller than 1 for two-dimensional systems.
Union of all self-avoiding paths. Another useful set is the union of all
SAPs that connect the two edges of the cluster. This set is called the
backbone with dimension DB .
Backbone
The backbone is the union of all self-avoiding paths on the spanning
cluster that connect two opposite edges.
Fig. 8.2 Illustration of the spanning cluster consisting of the backbone (red) and the
dangling ends (blue) for a 512 × 512 site percolation system for (a) p = 0.58, (b) p = 0.59,
and (c) p = 0.61.
where the first inequality L^1 ≤ M_min follows because even the minimal path must be at least of length L to go from one side to the opposite side.

Now, if this is to be true for all values of L, it can be argued that because all the masses lie between two scaling relations, L^1 and L^d, the scaling of the intermediate masses, M_x, must also be power-laws, M_x ∝ L^{D_x}, with the hierarchy of exponents given in (8.4).
Fig. 8.3 Illustration of the hierarchical blob-model for the percolation cluster showing
the backbone (bold), singly connected bonds (red) and blobs (blue).
M′ ∝ b^{D_x} M ,   (8.8)

where D_x denotes the exponent for the particular subset we are looking at. We can use this to determine the fractal exponent D_x from

D_x = ln(M′/M) / ln b .   (8.9)
We will do this by calculating the average value of the mass of the H-cell: we take the mass of the subset we are interested in for each configuration, M_x(c), multiply it by the probability of that configuration, and sum over all configurations:

⟨M_x⟩ = Σ_c P(c) M_x(c) .

We have now calculated the average mass in the original 2 by 2 lattice, and this should correspond to the average renormalized mass, ⟨M′⟩ = p′M′, which is the mass of the renormalized bond, M′, multiplied by the probability for that bond to be present, p′. That is, we will find M′ from M′ = ⟨M_x⟩/p′.
c      P(c)           M_SC   M_min   M_avg    M_max   M_B    M
1      p^5(1 − p)^0   0      2       2.5      3       5      5
2      p^4(1 − p)^1   0      2       2        2       4      4
3      4p^4(1 − p)^1  1      2       2.5      3       4      4
4      2p^3(1 − p)^2  2      2       2        2       2      3
5      2p^3(1 − p)^2  3      3       3        3       3      3
6      4p^3(1 − p)^2  2      2       2        2       2      3
7      2p^2(1 − p)^3  2      2       2        2       2      2
⟨M_x⟩                 26/2^5 34/2^5  36.5/2^5 39/2^5  47/2^5 53/2^5
D_x                   ln(13/8)/ln 2  ln(17/8)/ln 2  ln(36.5/16)/ln 2  ln(39/16)/ln 2  ln(47/16)/ln 2  ln(53/16)/ln 2
D_x                   0.7004 1.0875  1.1898   1.2854  1.5546 1.7279
D_x,n                 3/4    1.13    –        1.4     1.6    91/48

Table 8.1 Estimates for the exponents D_x describing various subsets of the spanning cluster, defined using the set of self-avoiding walks going from one side to the opposite side of the cluster: the singly connected bonds, the minimal path, the average path, the maximal path, the backbone, and the whole cluster. The last line shows the exponents found from numerical simulations in a two-dimensional system.
where A is given as

A = ∂π′/∂π |_{π=1} .   (8.13)

We realize that the new p in the system after the introduction of π is given by p = π pc when the ordinary percolation system is at pc. Similarly, the renormalized occupation probability is p′ = π′ pc, and we have therefore found that

A = ∂π′/∂π |_{π=1} = ∂p′/∂p |_{p=pc} = b^{1/ν} .   (8.14)
Fig. 8.4 Illustration of first three generations of the Mandelbrot-Given curve. The length
is rescaled by a factor b = 3 for each iteration, and the mass of the whole structure is
increased by a factor of 8. The fractal dimension is therefore D = ln 8/ ln 3 ' 1.89.
Let us first address the singly connected bonds. In the zeroth generation, the system is simply a single bond, and the length of the singly connected bonds, L_SC, is 1. In the first generation, there are two bonds that are singly connecting, and in the second generation there are four bonds that are singly connecting. The general relation is that

L_SC = 2^l ,   (8.16)
8.5 Lacunarity
as we assumed above.
We therefore see that the distribution of masses is characterized by the distribution P(m, ℓ), which in turn is described by the fractal dimension, D, and the scaling function f(u), which gives the shape of the distribution. The distribution of masses can be broad, which would correspond to "large holes", or narrow, which would correspond to a more uniform distribution of mass. The width of the distribution can be characterized by the mean-square deviation of the mass from the average mass:

⟨Δm²⟩ = ⟨m²⟩ − ⟨m⟩² .
8.6 Exercises
import numba
import numpy as np
from numpy import zeros, where

@numba.njit(cache=True)
def walk(z):
    # Left-turning walker
    # Returns left: nr of times the walker passes a site
    # First, ensure that the array only has one contact point at
    # left and right: topmost points chosen
    nx = z.shape[0]
    ny = z.shape[1]
    i = where(z[0, :] > 0)
    ix0 = 0          # starting row for walker is always 0
    iy0 = i[0][0]    # starting column: first occupied site in row 0
    print("Starting walk in x=", ix0, " y=", iy0)
    # First do left-turning walker
    directions = zeros((4, 2), np.int64)
    directions[0, 0] = -1   # west
    directions[0, 1] = 0
    directions[1, 0] = 0    # south
    directions[1, 1] = -1
    directions[2, 0] = 1    # east
    directions[2, 1] = 0
    directions[3, 0] = 0    # north
    directions[3, 1] = 1
    nwalk = 1
    ix = ix0
    iy = iy0
    direction = 0   # 0 = west, 1 = south, 2 = east, 3 = north
    left = zeros((nx, ny), np.int64)
    right = zeros((nx, ny), np.int64)
    while nwalk > 0:
        left[ix, iy] = left[ix, iy] + 1
        # Turn left until you find an occupied site
        nfound = 0
        nturns = 0
        while nfound == 0 and nturns < 4:
            direction = direction - 1
            if direction < 0:
                direction = direction + 4
            nturns = nturns + 1
            # (The rest of the inner loop was lost at a page break; the
            # following is a minimal completion: step to the candidate
            # site if it is inside the lattice and occupied.)
            iix = ix + directions[direction, 0]
            iiy = iy + directions[direction, 1]
            if iix >= 0 and iix < nx and iiy >= 0 and iiy < ny \
                    and z[iix, iiy] > 0:
                nfound = 1
                ix = iix
                iy = iiy
                # Restart the scan one step back to the right, so that
                # the walker keeps hugging the left wall
                direction = direction + 2
                if direction > 3:
                    direction = direction - 4
        if nfound == 0 or ix == nx - 1:
            nwalk = 0   # Isolated site, or walker reached the far side
    # (The analogous right-turning pass of the original routine is
    # omitted here; right is returned unchanged.)
    return left, right
from the small scale to the large scale. This process, often referred to as up-scaling, requires that we know the scaling properties of our system. We will address up-scaling in detail in this chapter.
We may argue that the point close to the percolation threshold is
anomalous and that any realistic system, such as a geological system,
would typically be far away from the percolation threshold. In this case,
the system will only display an anomalous, size-dependent behavior up
to the correlation length, and over larger lengths the behavior will be
that of a homogeneous material. We should, however, be aware that
many physical properties are described by broad distributions of material
properties, and this will lead to a behavior similar to the behavior close
to the percolation threshold, as we will discuss in detail in this part.
In addition, several physical processes ensure that the system is driven
into or is exactly at the percolation threshold. One such example is
the invasion-percolation process, which gives a reasonable description
of oil-water emplacement processes such as secondary oil migration. For
such systems, the behavior is best described by the scaling theory we
have developed.
In this part, we will first provide an introduction to the scaling of
material properties such as conductivity and elasticity. Then we will
demonstrate how processes occurring in systems with frozen disorder,
such as a porous material, often lead to the formation of fractal structures.
10 Flow in disordered media
Φ = (k L^{d−1}/(η L)) Δp = (k/η) L^{d−2} Δp .   (10.3)

From this, we see that the electric conductivity problem in this limit is the same as the Darcy-flow permeability problem. We will therefore not distinguish between the two problems in the following. We will simply call
them flow problems and describe them using the current, I, the conduc-
tivity, g, the conductance G, and the potential V . We will study these
problems on an L^d percolation lattice, using the theoretical, conceptual
and computational tools we have developed so far.
Fig. 10.1 (a) Illustration of flow through a bond percolation system. The bonds shown
in red are the singly connected bonds: all the flux has to go through these bonds. The
bonds shown in blue are the rest of the backbone: The flow only takes place on the singly
connected bonds and the backbone, the remaining bonds are the dangling ends, which do
not participate in fluid flow. (b) Illustration of the potentials Vi and Vj in two adjacent
sites and the current Ii,j from site i into site j.
Σ_k I_{i,k} = 0 ,   (10.6)
This provides us with a set of equations for all the potentials Vi , which
we must solve to find the potentials and hence the currents between all
the sites in a percolation system.
Finding currents and potentials. We can use this to find all the poten-
tials for a percolation system. Let us address a two-dimensional system
from numpy import zeros, where
from scipy.sparse import dia_matrix, spdiags

def MK_EQSYSTEM(A, X, Y):
    # Total no of internal lattice sites
    sites = X*(Y - 2)
    # Allocate space for the nonzero upper diagonals
    main_diag = zeros(sites)
    upper_diag1 = zeros(sites - 1)
    upper_diag2 = zeros(sites - X)
    # Calculate the nonzero upper diagonals
    main_diag = A[X:X*(Y-1), 0] + A[X:X*(Y-1), 1] + \
        A[0:X*(Y-2), 1] + A[X-1:X*(Y-1)-1, 0]
    upper_diag1 = A[X:X*(Y-1)-1, 0]
    upper_diag2 = A[X:X*(Y-2), 1]
    main_diag[where(main_diag == 0)] = 1
    # Construct B, which is symmetric: lower diagonals = upper diagonals
    B = dia_matrix((sites, sites))   # B*u = t
    B = -spdiags(upper_diag1, -1, sites, sites)
    B = B + -spdiags(upper_diag2, -X, sites, sites)
    B = B + B.T + spdiags(main_diag, 0, sites, sites)
    # Construct C
    C = zeros(sites)
    C[0:X] = A[0:X, 1]
    C[-1-X+1:-1] = 0*A[-1-2*X+1:-1-X, 1]
    return B, C
def sitetobond(z):
    # Function to convert the site network z(L,L) into a (L*L,2) bond
    # network:
    #   g[i,0] gives the bond perpendicular to the direction of flow
    #   g[i,1] gives the bond parallel to the direction of flow
    #   z[nx, ny] -> g[nx*ny, 2]
    nx = size(z, 0)
    ny = size(z, 1)
    N = nx*ny
    gg_r = zeros((nx, ny))   # First, find these
    gg_d = zeros((nx, ny))   # First, find these
    gg_r[:, 0:ny-1] = z[:, 0:ny-1]*z[:, 1:ny]
    gg_r[:, ny-1] = z[:, ny-1]
    gg_d[0:nx-1, :] = z[0:nx-1, :]*z[1:nx, :]
    gg_d[nx-1, :] = 0
    # Then, concatenate gg onto g
    g = zeros((nx*ny, 2))
    g[:, 0] = gg_d.reshape(-1, order='F').T
    g[:, 1] = gg_r.reshape(-1, order='F').T
    return g
from pylab import *
from scipy.ndimage import measurements
lx = 400
ly = 400
p = 0.5927
ncount = 0
perc = []
while len(perc) == 0:
    ncount = ncount + 1
    if ncount > 100:
        break
    z = rand(lx, ly) < p
    lw, num = measurements.label(z)
    perc_x = intersect1d(lw[0,:], lw[-1,:])
    perc = perc_x[where(perc_x > 0)]
    print("Percolation attempt", ncount)
zz = asarray((lw == perc[0]))
# zz now contains the spanning cluster
zzz = zz.T                         # Transpose
g = sitetobond(zzz)                # Generate bond lattice
V, c_eff = FIND_COND(g, lx, ly)    # Find conductivity
x = coltomat(V, lx, ly)            # Transform to an lx x ly lattice
V = x * zzz
g1 = g[:, 0]
g2 = g[:, 1]
z1 = coltomat(g1, lx, ly)
z2 = coltomat(g2, lx, ly)
# The magnitude fn of the current into each site is found from the bond
# conductances and the potential differences. (The lines computing fn
# were lost in this copy; the following is a minimal reconstruction.)
fpar = z2*(V - roll(V, -1, axis=1))    # Bonds parallel to the flow
fperp = z1*(V - roll(V, -1, axis=0))   # Bonds perpendicular to the flow
fn = abs(fpar) + abs(fperp) \
    + abs(roll(fpar, 1, axis=1)) + abs(roll(fperp, 1, axis=0))
# Plot results
figure(figsize=(16, 16))
ax = subplot(221)
imshow(zzz, interpolation='nearest')
title("Spanning cluster")
subplot(222, sharex=ax, sharey=ax)
imshow(V, interpolation='nearest')
title("Potential")
subplot(223, sharex=ax, sharey=ax)
imshow(fn, interpolation='nearest')
title("Current")
# Singly connected bonds carry the full current
zsc = fn > (fn.max() - 1e-6)
# Backbone: all sites with non-zero current
zbb = fn > 1e-6
# Combine visualizations
ztt = (zzz*1.0 + zsc*2.0 + zbb*3.0)
zbb = zbb / zbb.max()
subplot(224, sharex=ax, sharey=ax)
imshow(ztt, interpolation='nearest')
title("SC, BB and DE")
Fig. 10.2 Plots of the spanning cluster, the potential, V (x, y), the absolute value of the
current flowing into each site, and the singly connected bonds, the backbone and the
dangling ends.
We use the following program to find the conductance, G(p, L), for an L × L system for L = 400, as well as the density of the spanning cluster, P(p, L). The resulting behavior for L = 400 and M = 600 samples is shown in Fig. 10.3. We observe two things from this plot: First, we see that the behavior of G(p, L) and P(p, L) is qualitatively different around p = pc: P(p, L) increases very rapidly as (p − pc)^β, where β is less than 1. However, it appears that G(p, L) increases more slowly. Indeed, from the plot it looks as if G(p, L) increases as (p − pc)^x with an exponent x that is larger than 1. How can this be? Why does the density of the spanning cluster increase very rapidly, while the conductance increases much more slowly? This may be surprising, but we will develop an explanation for this below.
from pylab import *
from scipy.ndimage import measurements
Lvals = [400]
pVals = logspace(log10(0.58), log10(0.85), 20)
C = zeros((len(pVals), len(Lvals)), float)
P = zeros((len(pVals), len(Lvals)), float)
nSamples = 600
G = zeros(len(Lvals))
for iL in range(len(Lvals)):
    L = Lvals[iL]
    lx = L
    ly = L
    for pIndex in range(len(pVals)):
        p = pVals[pIndex]
        for j in range(nSamples):
            ncount = 0
            perc = []
            while len(perc) == 0:
                ncount = ncount + 1
                if ncount > 1000:
                    break
                # (The rest of the program was lost at a page break; the
                # following is a minimal reconstruction following the
                # program in Sect. 10.4.)
                z = rand(lx, ly) < p
                lw, num = measurements.label(z)
                perc_x = intersect1d(lw[0,:], lw[-1,:])
                perc = perc_x[where(perc_x > 0)]
            if len(perc) > 0:
                zz = asarray((lw == perc[0]))
                P[pIndex, iL] = P[pIndex, iL] + zz.sum()/(lx*ly)
                g = sitetobond(zz.T)
                Pvec, c_eff = FIND_COND(g, lx, ly)
                C[pIndex, iL] = C[pIndex, iL] + c_eff
        C[pIndex, iL] = C[pIndex, iL]/nSamples
        P[pIndex, iL] = P[pIndex, iL]/nSamples
Fig. 10.3 Plots of the conductance G(p, L) and the density of the spanning cluster
P (p, L) for L = 400.
We will now use the same scaling techniques we introduced to find the behavior of P(p, L) to develop a theory for the conductance G(p, L). First, we realize that instead of p we may describe the conductance as a function of ξ and L: G(p, L) = G(ξ, L). Second, we realize that we want to describe the system in the range where p > pc. We will address two limits of the behavior: when L ≫ ξ, and then at pc, that is, when ξ ≫ L.
g(ξ, L) = L^{−(d−2)} G(ξ, L) = G(ξ, ξ)/ξ^{d−2} .   (10.16)
What is G(ξ, ξ)? A system with correlation length equal to the system
size is indistinguishable from a system at p = pc . The conductance G(ξ, ξ)
is therefore the conductance of the spanning cluster at p = pc in a system
of size L = ξ. Let us therefore find the conductance of a finite system —
a system of size L — at the percolation threshold.
We assume that G(∞, L) scales with the system size with an exponent ζ̃_R:

G(∞, L) ∝ L^{−ζ̃_R} .   (10.17)
Finding bounds for the scaling behavior. In many cases, we cannot
find the scaling exponents directly, but we may be able to find bounds
for the scaling exponents. We will pursue this approach here. We will see
if we can find bounds for the scaling of G(∞, L), and thereby determine bounds for the exponent ζ̃_R.
Lower bound for the scaling exponent. First, we know that the spanning cluster consists of blobs in series with the singly connected bonds. This implies that the resistance R = 1/G of the spanning cluster is given as the resistance of the singly connected bonds, R_SC, plus the resistance of the blobs, R_blob, since resistances in series add:

R = R_SC + R_blob .
This implies that R > RSC . The resistance of the singly connected
bonds can easily be found, since the definition of singly connected bonds
is that they are connected in series, one after another. We know the
effective resistance of a series of resistors from basic electromagnetism:
The resistance of a series of resistors is the sum of the resistances. The
resistance of the singly connected bonds is therefore the resistance of a
single bond multiplied with the number of singly connected bonds, MSC .
We have therefore found that

R ≥ R_SC ∝ M_SC .

Because M_SC ∝ L^{D_SC}, and we have assumed that R ∝ L^{ζ̃_R}, we find that ζ̃_R ≥ D_SC.

Upper bound for the scaling exponent. Similarly, the resistance of the spanning cluster is smaller than the resistance of the minimal path, which is proportional to the mass of the minimal path, M_min, which we know scales with system size as M_min ∝ L^{D_min}. We have therefore found an upper bound for the exponent,

R ∝ L^{ζ̃_R} ≤ L^{D_min} ,

and therefore

ζ̃_R ≤ D_min .   (10.22)

Upper and lower bounds demonstrate the scaling relation. We have therefore demonstrated (or proved) the scaling relation

D_SC ≤ ζ̃_R ≤ D_min .
The renormalization scheme and the values used are shown in Table 10.1. The resulting value for the renormalized resistance, evaluated at the fixpoint p = p′ = p∗ = 1/2, is

R′ = (1/p′) Σ_c g(c) P(c) R(c)   (10.31)

   = (1/p′) (1/2)^5 [ 1 + 1 + 4·(5/3) + 2·2 + 2·3 + 4·2 + 2·2 ]   (10.32)

   ≃ 1.917 .   (10.33)
c    P(c)          g(c)  R(c)
1    p^5(1 − p)^0   1     1
2    p^4(1 − p)^1   1     1
3    p^4(1 − p)^1   4     5/3
4    p^3(1 − p)^2   2     2
5    p^3(1 − p)^2   2     3
6    p^3(1 − p)^2   4     2
7    p^2(1 − p)^3   2     2

Table 10.1 Renormalization scheme for the scaling of the resistance R in a random resistor network. The value R(c) gives the resistance of configuration c, and g(c) is the degeneracy, that is, the number of such configurations.
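As a small worked check, the table can be evaluated numerically. Under the assumption that the renormalized resistance scales as R′ = b^{ζ̃_R} R with b = 2 and a single-bond resistance R = 1, an estimate of the exponent follows directly:

from numpy import log
pstar = 0.5
g = [1, 1, 4, 2, 2, 4, 2]                    # Degeneracies g(c) from Table 10.1
R = [1.0, 1.0, 5.0/3.0, 2.0, 3.0, 2.0, 2.0]  # Resistances R(c)
Rp = sum(gi*Ri for gi, Ri in zip(g, R))*pstar**5/pstar
print("R' =", Rp)                            # Approximately 1.917
print("zeta_R estimate =", log(Rp)/log(2))   # Approximately 0.94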
g ∝ (p − pc)^µ ∝ ξ^{−µ/ν} ,   (10.36)

with the exponent µ given as µ = ν(d − 2 + ζ̃_R).
Finite size scaling ansatz for the conductivity. How can we use this
scaling behavior as a basis for a finite-size scaling ansatz? We apply the
usual approach, where we assume that we can extend the behavior of the
infinite system to the finite size system by the introduction of a finite
size scaling function f (L/ξ):
L
g(ξ, L) = ξ −µ/ν f ( ) . (10.37)
ξ
from pylab import *
from scipy.ndimage import measurements
Lvals = [25, 50, 100, 200, 400]
pVals = logspace(log10(0.58), log10(0.85), 20)
C = zeros((len(pVals), len(Lvals)), float)
P = zeros((len(pVals), len(Lvals)), float)
nSamples = 600
mu = zeros(len(Lvals))
for iL in range(len(Lvals)):
    L = Lvals[iL]
    lx = L   # Lattice dimensions used by FIND_COND
    ly = L
    for pIndex in range(len(pVals)):
        p = pVals[pIndex]
        for j in range(nSamples):
            ncount = 0
            perc = []
            while len(perc) == 0:
                ncount = ncount + 1
                if ncount > 1000:
                    print("Couldn't make percolation cluster...")
                    break
                z = rand(L, L) < p
                lw, num = measurements.label(z)
                perc_x = intersect1d(lw[0,:], lw[-1,:])
                perc = perc_x[where(perc_x > 0)]
            if len(perc) > 0:
                zz = asarray((lw == perc[0]))
                # zz now contains the spanning cluster
                zzz = zz.T
                # Generate bond lattice from this
                g = sitetobond(zzz)
                # Generate conductivity matrix
                Pvec, c_eff = FIND_COND(g, lx, ly)
                C[pIndex, iL] = C[pIndex, iL] + c_eff
        C[pIndex, iL] = C[pIndex, iL]/nSamples
for iL in range(len(Lvals)):
    L = Lvals[iL]
    plot(pVals, C[:, iL], label="L=" + str(L))
xlabel(r"$p$")
ylabel(r"$C$")
legend()
The results for L = 25, 50, 100, 200, 400 are shown in Fig. 10.4. Here, we plot both the raw data, g(p, L), and the behavior of g(pc, L) as a function of L on a log-log scale, showing that g(pc, L) indeed scales as a power-law with L.
Scaling data collapse. We can also test the scaling ansatz by plotting
a finite-size scaling data collapse. We expect that the conductivity will
behave as
g(p, L) = L^{−µ/ν} f̃(L/ξ) ,   (10.47)

which we can rewrite by introducing ξ = ξ_0 (p − pc)^{−ν} to get:

g(p, L) = L^{−µ/ν} f̃( (L^{1/ν}(p − pc))^ν ) .   (10.48)
Fig. 10.4 (a) Plot of g(p, L) as a function of p for L = 25, 50, 100, 200, and 400. (b) Plot of g(pc, L) as a function of L on a log-log scale.
Fig. 10.5 Finite-size data scaling collapse for g(p, L) showing the validity of the scaling
ansatz.
Fig. 10.6 (a) Plot of g(p, L) for increasing values of L. (b) Plot of the exponent µ
calculated by a linear fit for increasing system sizes L.
When we solve the flow problem, for electricity or fluids, on the percolation
cluster, we find a set of currents Ib = Ii,j for each bond b = (i, j) on the
backbone. For all other bonds, the flux will be identically zero. How can
we describe the distribution of fluxes on the backbone?
For electrical flow, the conservation of energy is formulated in the expression:

R I² = Σ_b r_b I_b² ,   (10.49)
The moments of the distribution of fractional currents i_b then take the form:

Σ_b i_b^{2q} ∝ L^{y(q)} .   (10.53)
What can we say about the scaling exponents y(q) for moment q?
Scaling exponent for q = 0. For q = 0, the sum simply counts the number of bonds that carry a non-zero current, that is, the mass of the backbone, so that y(0) = D_B.
Fig. 10.7 Illustration of the exponents y(q) characterizing the scaling of the moments of
the distribution of fractional currents, as a function of q, the order of the moment.
So far, we have only addressed the case when rb = 1 for all the bonds on
the backbone. However, in reality there will be some variations in the
local resistances, so that we can write
rb = 1 + δrb , (10.58)
where hδrb i = 0.
where

i_b = i_b^{(0)} + δi_b .   (10.61)

Δ = ⟨δr_b²⟩ .   (10.66)

M_q ∝ L^{y(q)} ,   (10.68)

M_q ∝ |p − pc|^{x_q} ,   (10.70)
Fig. 10.8 Illustration of the distribution n(i) of fractional currents i in a random
resistor network. Part (a) shows the direct distribution, and part (b) shows n(i)i2q . The
distribution has a maximum at iq .
the cluster. For example, we saw that the zeroth moment of the current distribution picks out the backbone and the infinite moment picks out the singly connected bonds.
System-size dependence. Let us now address the system-size dependence of n(i) and i_q². Let us assume that i_q² and n(i_q) scale with L as

i_q² ∝ L^{−α(q)} ,   (10.73)

and

n(i_q) ∝ L^{f(α(q))} .   (10.74)

We will assume that the q-th moment is dominated by the distribution at i_q:

m_q ≃ n(i_q) i_q^{2q} ∝ L^{f(α(q))−qα(q)} ∝ L^{y(q)−D_B} .   (10.75)
The value i_q is found from the maximum of n(i) i^{2q}. The condition for this maximum is

∂/∂i [ n(i) i^{2q} ]_{i_q} = 0 ,   (10.76)

or

∂/∂i [ ln n(i) + 2q ln i ]_{i_q} = 0 ,   (10.77)

which gives

( ∂ ln n(i) / ∂ ln i )_{i_q} = −2q .   (10.78)
Now we can substitute the L-dependent expressions for n(i_q) and i_q², getting

ln n(i_q) = f ln L ,   (10.79)

and

ln i_q² = −α ln L ,   (10.80)

and therefore we find that

∂f/∂α = q .   (10.81)

We therefore have two equations relating y(q) to f(α) and α(q): y(q) = D_B + f(α(q)) − q α(q) from (10.75), and ∂f/∂α = q.
G = G_2 (p − pc)^µ f_±( (G_1/G_2) / (p − pc)^y ) ,   (10.92)

where the exponent y is yet to be determined.
where the exponent y is yet to be determined.
The random resistor network we studied above corresponds to G_1 → 0 and G_2 = const. In this case, we retrieve the scaling behavior for p close to pc by assuming that f_+(0) is a constant.

For the random superconductor network, the conductances are G_2 → ∞ and G_1 = const. We will therefore need to construct f_−(u) in such a way that the infinite conductance is canceled from the prefactor. That is,
10.8 Exercises
the sites that contribute to the flow conductivity of the spanning cluster. The remaining sites are the dangling ends.
We call the mass of the backbone MB , and the density of the backbone
P_B = M_B/L^d, where L is the system size, and d the dimensionality of the
percolation system. Here, we will study two-dimensional site percolation.
a) Argue that the functional form of P_B(p) when p → pc⁺ is

P_B(p) = P_0 (p − pc)^x ,   (10.99)

and find an expression for the exponent x. You can assume that the fractal dimension of the backbone, D_B, is known.

b) Assume that the functional form of P_B(p) when p → pc⁺ and ξ ≪ L is

P_B(p) = P_0 (p − pc)^x .   (10.100)

Determine the exponent x by numerical experiment. If needed, you may use that ν = 4/3.
11 Elastic properties of disordered media

where U is the total energy, and the sums are over all particle pairs ij and all particle triplets ijk. The force constant is k_ij = k for bonds in contact and zero otherwise, and κ_ijk = κ for triplets with a common vertex, and zero otherwise. The vector u_i gives the displacement of node i from its equilibrium position. The various quantities are illustrated in Fig. 11.1.
Fig. 11.1 Illustration of the initial bond lattice (dashed, gray), and the deformed bond
lattice. Three nodes i, j, k are illustrated. The angle φijk is shown. The displacements ui
and uj are shown respectively with cyan vectors.
z-direction:

σ_zz = F_z/A = E ΔL_z/L .   (11.2)

We can therefore write the relation between the force F_z and the elongation ΔL_z as

F_z = (EA/L) ΔL = (E L²/L) ΔL = L^{d−2} E ΔL .   (11.3)
We recognize this as a result similar to the relation between the conduc-
tance and the conductivity of the sample, and we will call K = L^{d−2} E
the compliance of the system. We recognize this as being similar to the
spring constant of a spring.
Elastic properties when p < pc . What happens to the compliance of
the system as a function of p? When p < pc there are no connecting
paths from one side to another, and the compliance will therefore be
zero. It requires zero force Fz to generate an elongation ∆Lz in the
system. Notice that we are only interested in the infinitesimal effect of
deformation. If we compress the sample, we will of course eventually
generate a contacting path, but we are only interested in the initial
response of the system.
Elastic properties when p > pc . When p ≥ pc there will be at least one
path connecting the two edges. For a system with a bending stiffness, there
will be a load-bearing path through the system, and the deformation ∆Lz
of the system requires a finite force, Fz . The compliance K will therefore
be larger than zero. We have therefore established that for a system with
bending stiffness, the percolation threshold for rigidity coincides with
the percolation threshold for connectivity. However, for a central force
lattice, we know that the spanning cluster at pc will contain many singly
connected bonds. These bonds will be free to rotate, and as a result a
central force network will have a rigidity percolation threshold which is
higher than the connectivity threshold. Indeed, rigidity percolation for
central force lattices will have very high percolation thresholds in three
dimensions and higher. Here, we will only focus on lattices with bond
bending terms.
Behavior of E close to pc . Based on our experience with percolation
systems, we may hypothesize that Young’s modulus will follow a power-
law in (p − pc ) when p approaches pc :
E ∝ { 0 ,  p < pc ;  (p − pc)^τ ,  p > pc } .   (11.4)
Fig. 11.2 Illustration of subdivision of a system with p = 0.60 into regions with a size
corresponding to the correlation length, ξ. The behavior inside each box is as for a system
at p = pc , whereas the behavior of the overall system is that of a homogeneous system of
boxes of linear size ξ.
where

R_SC² = (1/M_SC) Σ_{ij} r_{ij}² ,   (11.12)

where the sum is taken over all the singly connected bonds.
The elastic energy of the singly connected bonds is therefore:

U_SC = ( 1/(2k) + R_SC²/(2κ) ) M_SC F² ,   (11.13)

K_SC = F²/(2U) = 1 / [ (1/k + R_SC²/κ) M_SC ] .   (11.14)
with the radius of gyration of the bonds on the minimal path, R_min², where we have found that when L ≫ 1, K_min ∝ L^{−(D_min+2)} and K_SC ∝ L^{−(D_SC+2)}. That is:

E(ξ, L) = K(ξ, ξ)/ξ^{d−2} ∝ ξ^{−ζ̃_K}/ξ^{d−2} = ξ^{−ζ̃_K−(d−2)} .   (11.18)

We have therefore found a relation for the scaling exponent τ: τ = ν(ζ̃_K + d − 2).
Similarity between the flow and the elastic problems. We see that the bounds are similar to the bounds we found for the exponent ζ̃_R. This similarity led Sahimi [29] and Roux [27] to conjecture that the elastic coefficients E and G, and the conductivity σ, are related through

E/σ ∝ ξ^{−2} ,   (11.23)

and therefore that

τ = µ + 2ν = (d + ζ̃_R)ν ,   (11.24)

which is well supported by numerical studies.
In the limit of high dimensions, d ≥ 6, the relation τ = µ + 2ν = 4
becomes exact. However, we can use as a rule of thumb that the exponent
τ ' 4 in all dimensions d ≥ 2.
12 Diffusion in disordered media
where ui is step i. We will usually assume that the steps ui are indepen-
dent and isotropically distributed.
Generating a random walk. We can generate an example of a random walk by selecting u_i = (x_i, y_i), where x_i and y_i are selected from, e.g., a uniform random distribution from −1 to 1. The following program calculates and visualizes the resulting path starting from the origin; the resulting path is shown in Fig. 12.1.
from pylab import *
n=1000
u = 2*random(size=(n,2))-1
r = cumsum(u,axis=0)
plot(r[:,0],r[:,1])
Fig. 12.1 Plots of 10 random walks of size n = 100 (left) and n = 1000 (right).
⟨r_n⟩ = ⟨ Σ_{i=1}^{n} u_i ⟩ = Σ_{i=1}^{n} ⟨u_i⟩ = 0 ,

where we have used that since the u_i are isotropic, ⟨u_i⟩ = 0. This is not surprising: the random walker has the same probability to walk in all directions and therefore does not get anywhere on average.
However, from Fig. 12.1 we see that the extent of the path increases
with the number of steps n. We can characterize this using measures
similar to those we used to describe the geometry of the percolation
clusters, by measuring $r_n^2$ instead. We find the average value of $r_n^2$ using
the same approach:
$$\langle r_n^2 \rangle = \left\langle \left( \sum_i \mathbf{u}_i \right) \cdot \left( \sum_j \mathbf{u}_j \right) \right\rangle \quad (12.3)$$
$$= \left\langle \sum_i \sum_j \mathbf{u}_i \cdot \mathbf{u}_j \right\rangle \quad (12.4)$$
$$= \left\langle \sum_{i=j} \mathbf{u}_i \cdot \mathbf{u}_i \right\rangle + \left\langle \sum_{i \neq j} \mathbf{u}_i \cdot \mathbf{u}_j \right\rangle \quad (12.5)$$
$$= \sum_i \langle \mathbf{u}_i \cdot \mathbf{u}_i \rangle + \underbrace{\sum_{i \neq j} \langle \mathbf{u}_i \cdot \mathbf{u}_j \rangle}_{=0} \quad (12.6)$$
$$= n \delta^2 , \quad (12.7)$$

where $\delta^2 = \langle \mathbf{u}_i \cdot \mathbf{u}_i \rangle$ is the variance of a single step.
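We can check this linear growth of $\langle r_n^2 \rangle$ directly. The following sketch averages $r_n^2$ over many independent walks; for steps drawn uniformly from [−1, 1] in each component, the single-step variance is δ² = ⟨u · u⟩ = 2/3, so the measured slope should be close to 2/3:

from pylab import *
nwalks = 1000
nsteps = 100
# Independent steps, uniform in [-1,1] in both x and y
u = 2*random(size=(nwalks, nsteps, 2)) - 1
r = cumsum(u, axis=1)                  # positions after each step
r2 = mean(sum(r**2, axis=2), axis=0)   # <r_n^2> averaged over the walks
plot(arange(1, nsteps+1), r2)          # linear growth with slope ~ 2/3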
$$P_i(t + \delta t) = P_i(t) + \delta t \sum_j \left[ \sigma_{j,i} P_j(t) - \sigma_{i,j} P_i(t) \right] ,$$

where the sum is over all neighbors j of the site i. The term $\sigma_{i,j}$ is the
transition probability. The first term in the sum represents the probability
that the walker during the time period δt walks into site i from site j,
and the second term represents the probability that the walker during
the time period δt walks from site i to one of the neighboring sites j.
When δt → 0 this equation approaches a differential equation

$$\frac{\partial P_i}{\partial t} = \sum_j \left[ \sigma_{j,i} P_j(t) - \sigma_{i,j} P_i(t) \right] . \quad (12.10)$$
If we assume that the transition probability is equal for all the neighbors,
so that $\sigma_{i,j} = 1/Z$, where Z is the number of neighbors, the differential
equation simplifies to

$$\frac{\partial P}{\partial t} = D \nabla^2 P , \quad (12.11)$$

which we recognize as the diffusion equation, where the diffusion constant
D is related to the transition probabilities $\sigma_{i,j}$ and Z.
The general solution to this equation is
$$P(r, t) = \frac{1}{(2\pi D t)^{d/2}} e^{-r^2/2Dt} = \frac{1}{(2\pi)^{d/2} |R|^{d}} e^{-\frac{1}{2}\left(\frac{r}{|R|}\right)^2} , \quad (12.12)$$

where we have introduced $|R| = \sqrt{Dt}$.
It can be shown that all moments of this distribution scale with powers of $|R| = \sqrt{Dt}$.

We drop the walker at a random site in an L × L lattice, where L is the system size. If this site is empty, the walk stops immediately
and its length is zero:
if not cluster[ix, iy]:
    return
Storing the trajectory of the walker. We store the trace of the walker
in two arrays (we need both to handle periodic boundary conditions
later): walker_map which consists of the positions ix,iy of the walker
for each step, and displacement, which consists of the positions relative
to the initial position of the walker.
Random selection of next step. How do we select where the walker
can move? The walker is restricted to move to nearest neighbor sites that
are present. There are several approaches:
• We may select a direction at random and try to move in this direction.
If the walker cannot move in this direction it stays put for this step,
and then tries again in the next step. In this case, the walker may
have many steps without any motion.
• We may find all the directions the walker can possibly move in, and
then select one of these directions at random. In this case the walker
will move onto a new site in each step.
Both these methods effectively produce the same behavior. We will select
the second method. We therefore need to create a list of the possible
directions to move in. To build this list, we first set up an array called
directions holding the four possible movement directions:
directions = np.zeros((2, 4), dtype=np.int64)
# X-dir: east and west, Y-dir: north and south.
directions[0, 0] = 1
directions[1, 0] = 0
directions[0, 1] = -1
directions[1, 1] = 0
directions[0, 2] = 0
12.2 Random walks on clusters 209
directions[1, 2] = 1
directions[0, 3] = 0
directions[1, 3] = -1
For each step, we need to collect all the possible steps into a list called
neighbor_arr. This is done by the following loop:
neighbor = 0
for idir in range(directions.shape[1]):
    dr = directions[:, idir]
    iix = ix + dr[0]
    iiy = iy + dr[1]
    if 0 <= iix < L and 0 <= iiy < L and cluster[iix, iiy]:
        neighbor_arr[neighbor] = idir
        neighbor += 1
If this list is empty, that is, if neighbor is zero, there are no possible
places to move. This means that the walker has landed on a cluster of
size s = 1. In this case, we stop and return with n = 1.
Finally, we select one of the neighbor directions at random, move the
walker into this site, update walker_map and displacement and repeat
the process.
# Select random direction from 0 to neighbor-1
randdir = np.random.randint(neighbor)
dir = neighbor_arr[randdir]
ix += directions[0, dir]
iy += directions[1, dir]
step += 1
walker_map[0, step] = ix
walker_map[1, step] = iy
displacement[:, step] = displacement[:, step-1] + directions[:, dir]
The complete function then reads:

import numba
import numpy as np

@numba.njit(cache=True)
def percwalk(cluster, max_steps):
    """Function performing a random walk on the spanning cluster.

    Parameters
    ----------
    cluster : np.ndarray
        Boolean array with 1's signifying a site in the spanning cluster.
    max_steps : int
        Maximum number of walker steps to perform.

    Returns
    -------
    walker_map : np.ndarray
        A coordinate map of the walk performed, x in [0] and y in [1]
    displacement : np.ndarray
        A coordinate map of relative positions, x in [0] and y in [1]
    num_steps : int
        Number of steps performed.
    """
    walker_map = np.zeros((2, max_steps))
    displacement = np.zeros_like(walker_map)
    directions = np.zeros((2, 4), dtype=np.int64)
    neighbor_arr = np.zeros(4, dtype=np.int64)
    # X-dir: east and west, Y-dir: north and south.
    directions[0, 0] = 1
    directions[1, 0] = 0
    directions[0, 1] = -1
    directions[1, 1] = 0
    directions[0, 2] = 0
    directions[1, 2] = 1
    directions[0, 3] = 0
    directions[1, 3] = -1
    # Initial random position
    Lx, Ly = cluster.shape
    ix = np.random.randint(Lx)
    iy = np.random.randint(Ly)
    walker_map[0, 0] = ix
    walker_map[1, 0] = iy
    step = 0
    if not cluster[ix, iy]:  # Landed outside the cluster
        return walker_map, displacement, step
    while step < max_steps - 1:
        # Make list of possible moves
        neighbor = 0
        for idir in range(directions.shape[1]):
            dr = directions[:, idir]
            iix = ix + dr[0]
            iiy = iy + dr[1]
            if 0 <= iix < Lx and 0 <= iiy < Ly and cluster[iix, iiy]:
                neighbor_arr[neighbor] = idir
                neighbor += 1
        if neighbor == 0:  # No way out, return
            return walker_map, displacement, step
        # Select random direction from 0 to neighbor-1
        randdir = np.random.randint(neighbor)
        dir = neighbor_arr[randdir]
        ix += directions[0, dir]
        iy += directions[1, dir]
        step += 1
        walker_map[0, step] = ix
        walker_map[1, step] = iy
        displacement[:, step] = displacement[:, step-1] + directions[:, dir]
    return walker_map, displacement, step
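To call percwalk we first need a spanning cluster. The following is a minimal sketch of how the function can be used; it assumes scipy.ndimage.label for cluster labeling, looks for a cluster spanning from left to right, and the values of L, p, and max_steps are illustrative choices:

import numpy as np
from scipy.ndimage import label

L, p, max_steps = 50, 0.62, 1000
z = np.random.rand(L, L) < p               # occupied sites
lw, num = label(z)                         # label the connected clusters
# A spanning cluster has the same label on the left and right edges
span = np.intersect1d(lw[:, 0], lw[:, -1])
span = span[span > 0]
if len(span) > 0:
    cluster = lw == span[0]
    walker_map, displacement, n = percwalk(cluster, max_steps)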
Walks from 10 such simulations are shown in Fig. 12.2. This looks
reasonable and nice, but we do notice that quite a few of these walks
reach the boundaries of the system. We may wonder how this finite
system size affects the behavior and statistics of the system.
Fig. 12.2 Trajectories of 10 random walks for a (homogeneous) system with L = 50 and
p = 1.
# r2: squared displacement of the walk; n is the number of steps from percwalk
r2 = sum(displacement[:, :n+1]**2, axis=0)
t = arange(len(r2))
plot(t, r2)
The resulting plot is shown in Fig. 12.3. We do not really learn much from
this plot — we need to collect more statistics. We need to generate many
different walks and then average over all the walks to find a statistically
better measure for r2 (t).
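A sketch of such an averaging loop is given below, reusing percwalk on a given cluster array (for the homogeneous case, cluster can simply be an all-True array). Since a walk may stop early, we only accumulate statistics up to the number of steps it actually performed; the number of samples is an illustrative choice:

import numpy as np

nsamples = 100
max_steps = 1000
r2_sum = np.zeros(max_steps)
counts = np.zeros(max_steps)
for i in range(nsamples):
    walker_map, displacement, n = percwalk(cluster, max_steps)
    r2 = np.sum(displacement[:, :n+1]**2, axis=0)
    r2_sum[:n+1] += r2
    counts[:n+1] += 1
ok = counts > 0
r2_avg = r2_sum[ok] / counts[ok]   # <r^2(t)> averaged over the walks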
Fig. 12.3 (a) Trajectory of a random walk for a (homogeneous) system with L = 50 and
p = 1. (b) Plot of the corresponding r2 (t).
The resulting plot in Fig. 12.4(a) shows that the system indeed behaves
as we expect — for small values of t. However, as t increases, we see that
the effect of the finite system size L starts to affect the results. This is
because the random walker is limited by the walls and will eventually be
confined to the L × L system. This problem will also arise when we study
the percolation system. How can we reduce this problem?
Fig. 12.4 Plot of r²(t) for an L = 100 system with non-periodic and periodic boundary
conditions.
The resulting plot of r²(t) in Fig. 12.4 shows that periodic boundary
conditions solve the problem with the boundaries. This aspect will be
even more important when we study percolation systems in non-uniform media.
# With periodic boundary conditions - essential for good statistics
import numba
import numpy as np
@numba.njit(cache=True)
def percwalk(cluster, max_steps):
    """Function performing a random walk on the spanning cluster.

    Parameters
    ----------
    cluster : np.ndarray
        Boolean array with 1's signifying a site in the spanning cluster.
    max_steps : int
        Maximum number of walker steps to perform.

    Returns
    -------
    walker_map : np.ndarray
        A coordinate map of the walk performed, x in [0] and y in [1]
    displacement : np.ndarray
        A coordinate map of relative positions, x in [0] and y in [1]
    num_steps : int
        Number of steps performed.
    """
    walker_map = np.zeros((2, max_steps))
    displacement = np.zeros_like(walker_map)
    directions = np.zeros((2, 4), dtype=np.int64)
    neighbor_arr = np.zeros(4, dtype=np.int64)
    # X-dir: east and west, Y-dir: north and south.
    directions[0, 0] = 1
    directions[1, 0] = 0
    directions[0, 1] = -1
    directions[1, 1] = 0
    directions[0, 2] = 0
    directions[1, 2] = 1
    directions[0, 3] = 0
    directions[1, 3] = -1
    # Initial random position
    Lx, Ly = cluster.shape
    ix = np.random.randint(Lx)
    iy = np.random.randint(Ly)
    walker_map[0, 0] = ix
    walker_map[1, 0] = iy
    step = 0
    # Check if we landed outside the spanning cluster
    if not cluster[ix, iy]:
        # Return the map with starting position and number of steps
        return walker_map, displacement, step
    while step < max_steps - 1:
        # Make list of possible moves
        neighbor = 0
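        # (The original listing is truncated at this point; the remainder
        #  below is a sketch completing it along the lines of the
        #  non-periodic version, with the assumed difference that the
        #  coordinates wrap around periodically.)
        for idir in range(directions.shape[1]):
            dr = directions[:, idir]
            iix = (ix + dr[0]) % Lx   # periodic wrap in x
            iiy = (iy + dr[1]) % Ly   # periodic wrap in y
            if cluster[iix, iiy]:
                neighbor_arr[neighbor] = idir
                neighbor += 1
        if neighbor == 0:  # No way out, return
            return walker_map, displacement, step
        # Select random direction from 0 to neighbor-1
        randdir = np.random.randint(neighbor)
        dir = neighbor_arr[randdir]
        ix = (ix + directions[0, dir]) % Lx
        iy = (iy + directions[1, dir]) % Ly
        step += 1
        walker_map[0, step] = ix
        walker_map[1, step] = iy
        # The displacement is unwrapped, so r^2 is unaffected by the wrapping
        displacement[:, step] = displacement[:, step-1] + directions[:, dir]
    return walker_map, displacement, step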
After some time, r²(t) crosses over to a constant instead. How can we
understand this behavior?
(Figure: log n plotted against log r².)
Consider the average of r²(t) over all these different walks. If we drop
the walker at a random position, the probability for that walker to land
on a cluster of size s is sn(s, p), and the contribution from this cluster
to r²(t) after a long time is $R_s^2$. Therefore, the average $\langle r^2(t) \rangle$ for the
walker is:
$$\langle r^2 \rangle \propto \left[ R_s^2 \right] = \sum_s R_s^2 \, s \, n(s, p) . \quad (12.15)$$
Inserting the scaling form $n(s, p) = s^{-\tau} F(s/s_\xi)$, we realize that the
function $F(s/s_\xi)$ falls to zero very rapidly when $s > s_\xi$ and is effectively
constant below that; we therefore replace the integral with an integral up
to $s_\xi$:

$$\left[ R_s^2 \right] = \int_1^{s_\xi} R_s^2 \, s \, s^{-\tau} \, ds . \quad (12.17)$$
We now insert that $R_s^2 \propto s^{2/D}$ and perform the integral, getting:

$$\left[ R_s^2 \right] \propto s_\xi^{2/D + 2 - \tau} \propto s_\xi^{2/D} \, s_\xi^{2 - \tau} . \quad (12.18)$$
We notice that in this case the average is of Rs2 over sn(s, p), but when
we calculated the correlation length in (5.21) the average was of Rs2 over
s2 n(s, p), and this is the reason for the appearance of the exponent β −2ν
and not simply −2ν as we got for the correlation length.
Short term behavior. There is a transition in r²(t) to R² after some
crossover time t0. For times shorter than t0 we see from Fig. 12.6 that
the behavior appears to be r²(t) ∝ t^{2k} for some exponent 2k. We
notice that as p approaches pc, the crossover time t0 increases. All the
curves for various p-values appear to have similar, or possibly the same,
behavior for t < t0.
In Fig. 12.6 we notice that the exponent 2k is not 1, as we found for
the homogeneous case. It is clearly lower than 1. If we measure it, we find
that 2k ≈ 0.66 and k ≈ 0.33. We call this behavior anomalous diffusion
because the mean squared distance r2 (t) does not grow linearly with
time, but with an exponent different than 1. What can we say about the
crossover time t0 ? We will return to this after examining the case when
p > pc .
12.2.3 Diffusion at p = pc
From Fig. 12.6 we also see that for p = pc the random walk follows
r²(t) ∝ t^{2k}. This behavior is as expected. For times shorter than t0,
the walker behaves as if it were at pc, whereas after a long time, t > t0,
we start noticing that the walker is restricted when it diffuses on the
finite clusters. Another way to think of this is that the crossover time t0
exponent for diffusion on percolation systems. It does not depend on the
lattice structure or the rules for connectivity, but it does depend on the
embedding dimension d.
(Figure: log r²(t) as a function of log t for p = 0.5927, p = 0.65, and p = 0.75, with reference lines of slope k = 0.35 and k = 0.5.)
We therefore expect that when p > pc and the time is larger than a
crossover time t0(p), the behavior scales with exponent µ, identical to
that of the conductivity, while for times shorter than the crossover time,
the behavior is identical to the behavior at p = pc. We can understand
this in the same way as above: when t < t0 the walker does not yet
experience that the characteristic clusters are limited by a finite
characteristic length ξ.
We could also have started from any of the end-points, such as from the
assumption that
$$\langle r^2 \rangle = (p_c - p)^{\beta - 2\nu} \, G_1\!\left( \frac{t}{t_0} \right) , \quad (12.23)$$

or

$$\langle r^2 \rangle = (p - p_c)^{\mu} \, G_2\!\left( \frac{t}{t_0} \right) . \quad (12.24)$$
Let us now address the various limits in order to determine the scaling
exponents k and x in terms of known exponents.
Scaling behavior in the limit p > pc. First, we know that when p > pc,
that is, when u ≫ 1, we have that

$$\langle r^2 \rangle \propto (p - p_c)^{\mu} \, t . \quad (12.26)$$

For the scaling ansatz $\langle r^2 \rangle = t^{2k} f((p - p_c)t^x)$ to reproduce this, the
scaling function must behave as $f(u) \propto u^{\mu}$ for $u \gg 1$, and matching
the powers of t gives $2k + \mu x = 1$, that is,

$$2k = 1 - \mu x , \quad (12.28)$$

or

$$k = \frac{1 - \mu x}{2} . \quad (12.29)$$
Scaling behavior in the limit p < pc. Similarly, we know that the
behavior in the limit u ≪ −1 should be proportional to $(p_c - p)^{\beta - 2\nu}$.
Consequently, the scaling ansatz gives

$$(p_c - p)^{\beta - 2\nu} \propto t^{2k} f((p - p_c)t^x) \propto t^{2k} \left[ (p_c - p) t^x \right]^{\beta - 2\nu} , \quad (12.30)$$

which requires that the time dependence cancels, $2k + (\beta - 2\nu)x = 0$.
Solving to find the exponents. We solve the two equations for x and
k, finding
$$k = \frac{1}{2} \left[ 1 - \frac{\mu}{2\nu + \mu - \beta} \right] , \quad (12.32)$$

and

$$x = \frac{1}{2\nu + \mu - \beta} . \quad (12.33)$$
Our argument therefore shows that the scaling ansatz is indeed consistent
with the limiting behaviors we have already determined, and it allows us
to make a prediction for k and x.
Testing the scaling ansatz. We can test the scaling function by a
direct plot of the simulated results. The scaling relation states that
$r^2(t) = t^{2k} f[(p - p_c)t^x]$, which means that $r^2(t)\, t^{-2k} = f[(p - p_c)t^x]$. If
we therefore plot $r^2(t)\, t^{-2k}$ on one axis and $(p - p_c)t^x$ on the other axis,
all the data for the various values of p should fall onto a common curve
corresponding to the function f (u). This is illustrated in Fig. 12.8, which
shows that the scaling ansatz is in good correspondence with the data.
Indeed, the plot also shows that the assumptions about the shape of the
scaling function f (u) are correct.
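Such a data-collapse plot can be generated along the following lines. This is a sketch only: plist and r2list are assumed to hold the simulated p-values and the corresponding measured r²(t) curves, and the exponents are computed from (12.32) and (12.33) using approximate two-dimensional values ν = 4/3, β = 5/36, and µ ≈ 1.3:

import numpy as np
import matplotlib.pyplot as plt

pc = 0.5927
nu, beta, mu = 4.0/3.0, 5.0/36.0, 1.3   # approximate 2D exponents
x = 1.0 / (2*nu + mu - beta)            # from (12.33)
k = 0.5 * (1 - mu * x)                  # from (12.32)
# plist, r2list: assumed simulation results for several values of p
for p, r2 in zip(plist, r2list):
    t = np.arange(1, len(r2) + 1)
    plt.plot((p - pc) * t**x, r2 / t**(2*k), '.')
plt.xlabel('(p - pc) t^x')
plt.ylabel('r^2(t) t^{-2k}')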
(Fig. 12.8: data collapse of r²(t) t^{−2k} plotted against (p − pc)t^x.)
The crossover occurs when the argument of the scaling function is of
order unity, $|p - p_c| \, t_0^x \simeq 1$, which gives

$$t_0 \propto |p - p_c|^{-1/x} \propto |p - p_c|^{-(2\nu + \mu - \beta)} . \quad (12.38)$$
Interpreting the crossover time. How can we interpret this relation?
We could decompose the relation as:

$$t_0 \propto \frac{|p - p_c|^{\beta - 2\nu}}{|p - p_c|^{\mu}} , \quad (12.39)$$
where we know that the average radius of gyration of the clusters is
$[R_s^2] \propto |p - p_c|^{\beta - 2\nu}$, so that

$$t_0(p) \propto \frac{\left[ R_s^2 \right]}{D} , \quad (12.41)$$
where D is the diffusion constant. Why is this time not proportional to
ξ 2 /D, the time it takes to diffuse a distance proportional to the correlation
length? The difference comes from the particular way we devised the
experiment: the walker was dropped onto a randomly selected occupied
site.
Interpreting the behavior for p > pc. Let us now address what happens
when p > pc. In this case, the contribution to the variance of the position
has two main terms: one term from the spanning cluster and one term
from the finite clusters:

$$\left[ \langle r^2 \rangle \right] = D t = \frac{P}{p} D' t + R_s^2 , \quad (12.42)$$
where the first term, $(P/p) D' t$, is the contribution from the random walker
on the infinite cluster. This term consists of the diffusion constant $D'$
for a walker on the spanning cluster, and the prefactor P/p which comes
from the probability for the walker to land on the spanning cluster: For
a random walker placed randomly on an occupied site in the system, the
probability for the walker to land on the spanning cluster is P/p, and the
probability to land on any of the finite clusters is 1 − P/p. The second
term is due to the finite clusters. This term reaches a constant value for
large times t. The only time dependence is therefore in the first term,
and we can write:
$$D t = \frac{P}{p} D' t , \quad (12.43)$$
for long times, t. That is:
$$D' = \frac{D p}{P} \propto (p - p_c)^{\mu - \beta} \propto \xi^{-\frac{\mu - \beta}{\nu}} \propto \xi^{-\theta} , \quad (12.44)$$

where we have introduced the exponent

$$\theta = \frac{\mu - \beta}{\nu} . \quad (12.45)$$
Interpreting the crossover time for p > pc . We have therefore found
an interpretation of the cross-over time t0, and, in particular, for the
appearance of the β in the exponent. We see that the cross-over time is

$$t_0 \propto \frac{|p - p_c|^{\beta - 2\nu}}{|p - p_c|^{\mu}} \propto \frac{\xi^2}{D'} . \quad (12.46)$$
The interpretation of t0 is therefore that t0 is the time the walker needs
to travel a distance ξ when it is diffusing with diffusion constant $D'$ on
the spanning cluster.
$$\langle r^2 \rangle = D t , \quad (12.56)$$

or, equivalently, we can find the diffusion constant for Fick's law from:

$$D = \frac{\partial}{\partial t} \langle r^2 \rangle . \quad (12.57)$$
Now, we have established that diffusion on the spanning cluster at
p = pc is anomalous. That is, the relation between the squared distance
and time is not linear, but a more complicated power-law relationship
$$\langle r^2 \rangle \propto t^{2k'} . \quad (12.58)$$
As a result, we find that the diffusion constant $D'$ for diffusion on the
spanning cluster defined through Fick's law is

$$D' \propto \frac{\partial}{\partial t} t^{2k'} \propto t^{2k'-1} . \quad (12.59)$$
We can therefore interpret the process as a diffusion process where D
decays with time.
In the anomalous regime, we find that

$$r \propto t^{k'} . \quad (12.60)$$
We could therefore also say that the diffusion constant is decreasing with
distance. The reverse is also generally true: Whenever D depends on the
distance, we will end up with anomalous diffusion.
We can also relate these results back to the diffusion equation. The
diffusion equation for the random walk was:
$$\frac{\partial P}{\partial t} = D' \nabla^2 P = \nabla \cdot \left( D' \nabla P \right) , \quad (12.63)$$
where the last term is the correct term if the diffusion constant depends
on the spatial coordinate.
We can rewrite the dimension, dw, of the walk to make the relation
between the random walker and the dimensionality of the space on which
it walks explicit:

$$d_w = \tilde{\zeta}_R + D . \quad (12.67)$$
12.3 Exercises
13 Dynamic processes in disordered media

13.1 Diffusion fronts
Fig. 13.1 Illustration of the diffusion front. Particles are diffusing from a source at the
left side. We address the front separating the particles connected to the source from the
particles not connected to the source. The average distance is given by xc shown in the
figure. The width of the front, ξ, is also illustrated in the figure. The different clusters
are colored to distinguish them from each other. The close-up in figure (b) illustrates the
finer details of the diffusion fronts, and the local cluster geometries.
Exact solution for concentration. For this problem we know the exact
solution for the concentration of particles, c(x, t), corresponding to the
occupation probability P(x, t). The solution to the diffusion equation
with a constant concentration at the inlet, P(x = 0, t) = 1, is the error
function given as the integral over a Gaussian function:

$$P(x, t) = 1 - \mathrm{erf}\!\left( \frac{x}{\sqrt{Dt}} \right) , \quad (13.1)$$
where the error function is defined as the integral:

$$\mathrm{erf}(u) = \frac{2}{\sqrt{2\pi}} \int_0^u e^{-v^2/2} \, dv . \quad (13.2)$$
This solution produces the expected deviation $\langle x^2 \rangle = Dt$, where D is the
diffusion constant for the particles. There is no y (or z) dependence in
the solution.
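The front profile in (13.1) is straightforward to evaluate numerically. Note that scipy.special.erf uses the standard normalization, erf(u) = (2/√π)∫₀ᵘ e^{−v²} dv, so the argument must be rescaled by 1/√2 to match the convention in (13.2); the values of D and t in this sketch are illustrative:

import numpy as np
from scipy.special import erf
import matplotlib.pyplot as plt

D, t = 1.0, 100.0
x = np.linspace(0, 50, 500)
# Rescale by 1/sqrt(2) to convert scipy's erf to the convention in (13.2)
P = 1 - erf(x / np.sqrt(2 * D * t))
plt.plot(x, P)
plt.xlabel('x')
plt.ylabel('P(x,t)')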
$$w = \xi_0 \left| P(x + w, t) - p_c \right|^{-\nu} , \quad (13.5)$$

which gives

$$w \propto x_c^{\nu/(1+\nu)} . \quad (13.9)$$
The width of the front therefore scales with the average position of the
front, and the scaling exponent is related to the scaling exponent of the
correlation length for the percolation problem.
Time development. What happens in this system with time? Since xc
is increasing with time, we see that the relative width decreases:
$$\frac{w}{x_c} \propto \frac{x_c^{\nu/(1+\nu)}}{x_c} \propto x_c^{-\frac{1}{1+\nu}} . \quad (13.10)$$
This effect will also become apparent under renormalization. Applying
a renormalization scheme with length b will result in a change in the
front width by a factor $b^{\nu/(1+\nu)}$, but along the y-direction the rescaling
will simply be by a factor b. Successive applications will therefore make
the front narrower and narrower. This difference in scaling along the
x and the y axis is referred to as self-affine scaling, in contrast to the
self-similar scaling where the rescaling is the same in all directions.
We will now study the slow injection of a non-wetting fluid into a porous
medium saturated with a wetting fluid. In the limit of infinitely slow
injection, this process is termed invasion percolation for reasons that will
soon become obvious [39, 12].
Physical system — fluid saturated porous medium. When a non-
wetting fluid is injected slowly into a saturated porous medium, the
pressure in the non-wetting fluid must exceed the capillary pressure in
a pore-throat for the fluid to propagate from one pore to the next, as
13.2 Invasion percolation 233
Fig. 13.2 Illustration of the invasion percolation process in which a non-wetting fluid
is slowly displacing a wetting fluid. The left figure shows the interface in a pore throat:
the pressure in the invading fluid must exceed the pressure in the displaced fluid by an
amount corresponding to the capillary pressure Pc = Γ/ℓ, where Γ is the interfacial
surface tension and ℓ is a characteristic length for the pore throat. The right figure
illustrates the invasion front after injection has started. The fluid may invade any of
the sites along the front indicated by small circles. The site with the smallest capillary
pressure threshold will be invaded first, changing the front and exposing new boundary
sites.
Then, we find the labels of all the clusters on the left side of the lattice.
All the clusters with these labels are connected to the left side and are
therefore considered invaded by the fluid.
Then we make a matrix that stores at what time t (pressure value p(t))
a particular site was invaded. This is done by simply adding a 1 for all
set sites at t to a matrix pcluster. The first clusters invaded will then
have the highest value in the pcluster matrix. We use the pcluster
matrix to visualize the dynamics.
pcluster = pcluster + 1.0*cluster
Finally, we check if the fluid has reached the right-hand side by comparing
the labels on the left-hand side with those on the right-hand side. If
any labels are the same, there is a cluster connecting the two sides (a
spanning cluster), and the fluid invasion process stops:
# Check if it has reached the right hand side
span = intersect1d(lw[:, 0], lw[:, -1])   # labels present on both edges
if len(span[where(span > 0)]) > 0:
    break

imshow(log(pcluster), origin="lower")
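Assembling the fragments above, the full invasion loop can be sketched as follows. This is a minimal version under stated assumptions: scipy.ndimage.label is used for cluster labeling, injection is from the left edge, and the system size and the resolution of the pressure steps are illustrative:

import numpy as np
from scipy.ndimage import label

L = 100
r = np.random.rand(L, L)                 # capillary pressure thresholds
pcluster = np.zeros((L, L))
for p in np.linspace(0, 1, 200):         # slowly increasing pressure
    z = r < p                            # sites that can be invaded at p
    lw, num = label(z)
    # Keep only the clusters connected to the left edge (the fluid source)
    left = np.unique(lw[:, 0])
    left = left[left > 0]
    cluster = np.isin(lw, left)
    pcluster = pcluster + 1.0 * cluster  # records when each site was invaded
    # Stop when the fluid reaches the right-hand side
    span = np.intersect1d(lw[:, 0], lw[:, -1])
    if len(span[span > 0]) > 0:
        break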
Fig. 13.3 Illustration of the invasion percolation cluster. The color-scale indicates nor-
malized pressure at which the site was invaded.
as p approaches pc both the front position and the front width diverge,
that is, both the front position h̄ and the width w are proportional to
the system size L:

$$\bar{h} \propto w \propto L . \quad (13.13)$$
However, when the system size increases, we would expect other sta-
bilizing effects to become important. For a very small, but finite fluid
injection velocity, the viscous pressure drop will eventually become im-
portant and comparable to the capillary pressure. Also, for any deviation
from a completely flat system, or for a system with a slight difference in
densities, the hydrostatic pressure term will also eventually
become important. We will now demonstrate how we may address the
effect of such a stabilizing (or destabilizing) effect [5, 25].
Invasion percolation in a gravity field. Let us assume that the invasion
percolation occurs in the gravity field. This implies that the pressure
needed to invade a pore depends both on the capillary pressure, and on
a hydrostatic term. The pressure $P_i^c$ needed to invade site i at vertical
position $x_i$ in the gravity field is:

$$P_i^c = \frac{\Gamma}{\ell} + \Delta\rho g x_i , \quad (13.14)$$
We can again normalize the pressures, resulting in

$$p_i^c = p_i^0 + \frac{\Delta\rho g \ell^2}{\Gamma} x_i' , \quad (13.15)$$

where the coordinates are measured in units of the pore size ℓ, which
is the unit of length in our system. The coefficient of the last term is
called the Bond number:

$$Bo = \frac{\Delta\rho g \ell^2}{\Gamma} , \quad (13.16)$$
Here, we will include the effect of the Bond number in a single number g,
so that the critical pressure at site i is:

$$p_i^c = p_i^0 + g x_i . \quad (13.18)$$
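In a simulation, this only changes the thresholds that enter the invasion loop above. A minimal sketch, where the values of L and g are illustrative and x is taken as the column index increasing in the direction of invasion:

import numpy as np

L, g = 100, 0.001
r0 = np.random.rand(L, L)            # capillary thresholds p_i^0
x = np.tile(np.arange(L), (L, 1))    # vertical position of each site
r = r0 + g * x                       # p_i^c = p_i^0 + g x_i, cf. (13.18)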
Fig. 13.4 Illustration of the gravity stabilized invasion percolation cluster for g = 0,
g = 10−4 , g = 10−3 , and g = 10−2 . The color-scale indicates normalized pressure at
which the site was invaded.
The front will also extend beyond the average front position. The occupation
probability at a distance a from the front is $p' = p_c - ag$, since
fewer sites will be set beyond the front due to the stabilizing term g. A
site at a distance a is connected to the front if this distance a is shorter
than or equal to the correlation length for the occupation probability $p'$ at
this distance. The maximum distance a for which a site is connected to
the front therefore occurs when
$$a \simeq \xi(p') \propto (ag)^{-\nu} .$$

This gives

$$a \propto g^{-\nu/(1+\nu)} , \quad (13.21)$$

as the front width. We leave it as an exercise to find the form of
the position h(p, g) and the width w(p, g) as functions of p and g.
We observe that the width has a reasonable dependence on g. When g
approaches 0, the width diverges. This is exactly what we expect since
the limit g = 0 corresponds to the limit of ordinary invasion percolation.
This discussion demonstrates a general principle that we can use to
study several stabilizing effects, such as the effect of viscosity or other
material or process parameters that affect the pressure needed to advance
the front. The introduction of a finite width or characteristic length ξ
that can systematically be varied in order to address the behavior of the
system when the characteristic length diverges is also a powerful method
of both experimental and theoretical use.
Fig. 13.5 Illustration of the gravity de-stabilized invasion percolation cluster for g = 0,
g = −10−4 , g = −10−3 , and g = −10−2 . The color-scale indicates normalized pressure at
which the site was invaded.
References
[1] Joan Adler, Yigal Meir, Amnon Aharony, A. B. Harris, and Lior
Klein. Low-concentration series in general dimension. Journal of
Statistical Physics, 58(3):511–538, 1990.
[2] A. Aharony, Y. Gefen, and A. Kapitulnik. Scaling at the percolation
threshold above six dimensions. Journal of Physics A: Mathematical
and General, 17(4):L197–L202, 1984.
[3] Per Bak. How Nature Works: the Science of Self-Organized Critical-
ity. Copernicus, 1996.
[4] David J. Bergman and Yacov Kantor. Critical Properties of an
Elastic Fractal. Physical Review Letters, 53(6):511–514, 1984.
[5] A. Birovljev, L. Furuberg, J. Feder, T. Jøssang, K. J. Måløy, and
A. Aharony. Gravity invasion percolation in two dimensions: Ex-
periment and simulation. Physical Review Letters, 67(5):584–587,
1991.
[6] John L. Cardy. Introduction to Theory of Finite-Size Scaling. In
Current Physics–Sources and Comments, volume 2 of Finite-Size
Scaling, pages 1–7. Elsevier, 1988.
[7] Kim Christensen and Nicholas R. Moloney. Complexity and Criti-
cality. Imperial College Press, 2005.
[8] A. Coniglio and R. Figari. Droplet structure in Ising and Potts
models. Journal of Physics A: Mathematical and General, 16(14):L535–L540, 1983.
[9] P. G. de Gennes. La percolation: Un concept unificateur. La
Recherche, 7:919, 1976.
[10] M. M. Dias and D. Wilkinson. Percolation with trapping. Journal
of Physics A: Mathematical and General, 19(15):3131–3146, 1986.
Index

c, 1
D, 74, 85, 98
DSC, 140
f(α), 189
G, 161, 163
g, 161
G(p, L), 163
g(c), 14
g(r), 33
gs,t, 42
kij, 196
L, 34
n(s, p), 20, 21, 23, 32, 44, 54, 58
P, 19, 31, 40
P(p, L), 12, 62, 97
p, 3, 5, 37, 40
pc, 3, 4, 64, 105
Q(p), 38
Rs, 69
S, 29, 42, 61
S(p, L), 99
s, 23
sξ, 45
Z, 3, 37
β, 40
γ, 31, 42, 62, 99
κ, 196
Λ, 116
ν, 33, 83
ξ, 32, 75, 78, 94
Π, 9, 14, 19
Π(p, L), 102
σ, 45
τ, 62, 198
φ, 1

anomalous diffusion, 217
average cluster size, 27, 29, 41, 42, 61
    first moment, 30
average path, 142
backbone, 142
bending stiffness, 195
Bethe lattice, 37
binary mixture, 189
blob model, 144