
Percolation theory using Python

Anders Malthe-Sørenssen

Department of Physics, University of Oslo

Apr 20, 2020


Preface

This textbook was developed for the course Fys4460 - Disordered media
and percolation theory. The course was developed in 2004 and taught for
twenty years at the University of Oslo. The original idea of the course
was to provide an introduction to basic aspects of scaling theory to a
cross-disciplinary student group. Both geoscience and physics students
have successfully taken the course.
This book follows the underlying philosophy that learning a subject is a hands-on activity that requires active student participation. The course that used the book was project driven: the students solved a set of extensive
computational and theoretical exercises and were supported by lectures
that provided the theoretical background and group sessions with a
learning assistant. The exercises used in the course have been woven into
the text, but are also given as a long project description in an appendix.
This textbook provides much of the same information as provided in the
lectures.
The underlying idea is that in order to learn a subject such as scaling,
the student needs to gain hands-on experience with real data. The student
should learn to generate, analyze and understand data and models. The
focus is not to generate perfect data. Instead, we aim to teach the student
how to make sense of imperfect data. The data presented in the book
and the data that students may generate using the supplied programs
are therefore not for very long simulations, but instead from simulations
that take a few minutes on an ordinary computer. Experience from this
course has been that students learn most effectively by being guided
through the process of building models and generating data. Some details
of the computer programs have therefore been provided in the text,
and we strive to use a similar notation in the computer code and in
the mathematics in order to make the transfer from mathematics to
computational modeling as simple as possible.
Another aspect of the book is that it tries to be complete in its exposition and worked examples. Not only are the theoretical arguments carried out in detail; the computer code needed to generate the data is provided in such a form that it can be run to reproduce the data in the examples. This provides students with a complete set of worked
examples that contain theory, modeling (the transfer from theory to
model), implementation, analysis and the resulting connection between
theory and analysis.
In the full course, this textbook was only one half of the curriculum. For
the first ten years the first part of the course focused on random walks and
the last part focused on random growth processes. For the second ten years
of the course, the course switched to be a course on cross-scale modeling of
porous media. The first half of the course focused on molecular dynamics
modeling of homogeneous systems in order to build an understanding
of concepts from statistical physics from computational examples. The
second part of the course used molecular dynamics simulations to model
nanoscale porous media with focus on fluid transport (diffusion) and
fluid flow in a nanoporous system and elastic properties of the porous
matrix. Percolation theory was then introduced as a method to upscale
the nanoscale systems, and we measured correlation functions, flow and
diffusion problems across scales.
The course on percolation theory developed at the University of Oslo
and that formed the basis for this textbook is inspired by a course given
by Amnon Aharony on random systems several times in the 1990s. That course was a great inspiration for students and faculty alike, and it served as an inspiration for this course and for this text. Thank
you for your great inspiration Amnon.
This book is written as a practical textbook introduction to the field
of percolation theory with particular emphasis on containing all the
computational methods needed to study percolation theory because
our experience is that students learn best by performing these studies
themselves — students learn through their activities. Thus, we have focused on including computer code where it is needed. The textbook does not aim to provide a complete set of references to percolation theory. Instead, only a few key references are included for students who want to explore more. There are many other good texts and reviews that
provide a detailed set of references and a more historical description of
the development of the field.
This textbook is the result of the contributions from many students
in the course. Originally, the textbook was written with examples in
matlab (and the whole book is available with all programs in matlab).
However, as Python has gradually developed into the tool of choice for scientific computing, the code in this course was also updated. This was first done by Svenn-Arne Dragly, who carried out some of the original translations from matlab to Python. Later contributions from
e.g. Øyvind Schøyen to the translation of matlab to Python code for
diffusion are also acknowledged.
Thank you to all the students who have contributed to this course. It
has been great fun to teach it because of your input and inspiration. I
am greatly indebted to you!
Thank you also to my mentors Jens Feder, Torstein Jøssang and Bjørn
Jamtveit who built up a cross-disciplinary research environment between
physics, computer science and geoscience — the Center for the Physics of
Geological Processes. You have always supported my work and inspired
me to be a better researcher and a better teacher. Also thank you to my
mentor in teaching, textbooks and computing, Hans Petter Langtangen.
Without you, this book would never have been realized. Your vision,
voice and spirit live on in us who worked with you. And thank you
to my colleague Morten Hjorth-Jensen who has built up the group in
computational physics at the University of Oslo, who generously included
me in this group, and who has by example inspired me to be a better
teacher.
This textbook was written using doconce — a document translation
and formatting tool that allows simple integration of text, mathematics,
and computer code developed by Hans Petter Langtangen.

April 2020 Anders Malthe-Sørenssen


Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v

1 Introduction to percolation . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Basic concepts in percolation . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Percolation probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3 Spanning cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.4 Percolation in small systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.5 Further reading . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
1.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2 One-dimensional percolation . . . . . . . . . . . . . . . . . . . . . . . . 19
2.1 Percolation probability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2 Cluster number density . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.2.1 Definition of cluster number density . . . . . . . . . . . . . . . 20
2.2.2 Measuring the cluster number density . . . . . . . . . . . . . . 23
2.2.3 Shape of the cluster number density . . . . . . . . . . . . . . . 25
2.2.4 Numerical measurement of the cluster number density 27
2.2.5 Average cluster size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
2.3 Spanning cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
2.4 Correlation length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32


2.5 (Advanced) Finite size effects . . . . . . . . . . . . . . . . . . . . . . . . . . . 34


2.5.1 Finite size effects in Π(p, L) and pc . . . . . . . . . . . . . . . . 35
2.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

3 Infinite-dimensional percolation . . . . . . . . . . . . . . . . . . . . . 37
3.1 Percolation threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.2 Spanning cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
3.3 Average cluster size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
3.4 Cluster number density . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
3.5 (Advanced) Embedding dimension . . . . . . . . . . . . . . . . . . . . . . . 49
3.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

4 Finite-dimensional percolation . . . . . . . . . . . . . . . . . . . . . . . 51
4.1 Cluster number density . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.1.1 Numerical estimation of n(s, p) . . . . . . . . . . . . . . . . . . . . 54
4.1.2 Measuring probabilty densities of rare events . . . . . . . . 55
4.1.3 Measurements of n(s, p) when p → pc . . . . . . . . . . . . . . 58
4.1.4 Scaling theory for n(s, p) . . . . . . . . . . . . . . . . . . . . . . . . . 58
4.1.5 Scaling ansatz for 1d percolation . . . . . . . . . . . . . . . . . . 60
4.1.6 Scaling ansatz for Bethe lattice . . . . . . . . . . . . . . . . . . . . 61
4.2 Consequences of the scaling ansatz . . . . . . . . . . . . . . . . . . . . . . 61
4.2.1 Average cluster size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
4.2.2 Density of spanning cluster . . . . . . . . . . . . . . . . . . . . . . . 62
4.3 Percolation thresholds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
4.4 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

5 Geometry of clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.1 Characteristic cluster size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
5.1.1 Analytical results in one dimension . . . . . . . . . . . . . . . . 71
5.1.2 Numerical results in two dimensions . . . . . . . . . . . . . . . 71
5.1.3 Scaling behavior in two dimensions . . . . . . . . . . . . . . . . 75
5.2 Geometry of finite clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
5.2.1 Correlation length . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
5.3 Geometry of the spanning cluster . . . . . . . . . . . . . . . . . . . . . . . 85

5.4 Spanning cluster above pc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86


5.5 Fractal cluster geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
5.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

6 Finite size scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93


6.1 General aspects of finite size scaling . . . . . . . . . . . . . . . . . . . . . 94
6.2 Finite size scaling of P (p, L) . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
6.3 Average cluster size . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
6.3.1 Measuring moments of the cluster number density . . . 99
6.3.2 Scaling theory for S(p, L) . . . . . . . . . . . . . . . . . . . . . . . . . 101
6.4 Percolation threshold . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
6.4.1 Measuring the percolation probability Π(p, L) . . . . . . . 102
6.4.2 Measuring the percolation threshold pc . . . . . . . . . . . . . 103
6.4.3 Finite-size scaling theory for Π(p, L) . . . . . . . . . . . . . . . 104
6.4.4 Estimating pc using the scaling ansatz . . . . . . . . . . . . . . 105
6.4.5 Estimating pc and ν using the scaling ansatz . . . . . . . . 106
6.4.6 (Advanced) Finite-size scaling for dΠ(p, L)/dp . . . . . . 106
6.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108

7 Renormalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
7.1 The renormalization mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
7.1.1 (Advanced) Renormalization of correlation length . . . . 116
7.1.2 Iterating the renormalization mapping . . . . . . . . . . . . . 116
7.1.3 Application of renormalization to ξ . . . . . . . . . . . . . . . . 118
7.2 Examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
7.2.1 Example: One-dimensional percolation . . . . . . . . . . . . . 120
7.2.2 Example: Renormalization on 2d site lattice . . . . . . . . . 121
7.2.3 Example: Renormalization on 2d triangular lattice . . . 123
7.2.4 Example: Renormalization on 2d bond lattice . . . . . . . 125
7.3 (Advanced) Universality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
7.4 (Advanced) Fragmentation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
7.5 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

8 Subset geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139


8.0.1 Singly connected bonds . . . . . . . . . . . . . . . . . . . . . . . . . . 139

8.1 Self-avoiding paths on the cluster . . . . . . . . . . . . . . . . . . . . . . . 141


8.1.1 Minimal path . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
8.1.2 Maximum and average path . . . . . . . . . . . . . . . . . . . . . . 142
8.1.3 Backbone . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
8.1.4 Scaling of the dangling ends . . . . . . . . . . . . . . . . . . . . . . 143
8.1.5 Argument for the scaling of subsets . . . . . . . . . . . . . . . . 144
8.1.6 Blob model for the spanning cluster . . . . . . . . . . . . . . . . 144
8.1.7 Mass-scaling exponents for subsets of the spanning
clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
8.2 Renormalization calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
8.3 (Advanced) Singly connected bonds and ν . . . . . . . . . . . . . . . . 147
8.4 Deterministic fractal models . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
8.5 Lacunarity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
8.6 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153

9 Introduction to disorder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

10 Flow in disordered media . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161


10.0.1 Electrical conductivity and resistor networks . . . . . . . . 161
10.0.2 Flow conductivity of a porous system . . . . . . . . . . . . . . 162
10.1 Conductance of a percolation lattice . . . . . . . . . . . . . . . . . . . . . 163
10.1.1 Finding the conductance of the system . . . . . . . . . . . . . 163
10.1.2 Computational methods . . . . . . . . . . . . . . . . . . . . . . . . . . 165
10.1.3 Measuring the conductance . . . . . . . . . . . . . . . . . . . . . . . 170
10.1.4 Conductance and the density of the spanning cluster . 171
10.2 Scaling arguments for conductance and conductivity . . . . . . . 172
10.2.1 Scaling argument for p > pc and L ≫ ξ . . . . . . . . . . . . . 173
10.2.2 Conductance of the spanning cluster . . . . . . . . . . . . . . . 173
10.2.3 Conductivity for p > pc . . . . . . . . . . . . . . . . . . . . . . . . . . 175
10.3 Renormalization calculation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176
10.4 Finite size scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
10.4.1 (Advanced) Developing the scaling form from
renormalization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
10.4.2 The finite-size scaling ansatz . . . . . . . . . . . . . . . . . . . . . . 179
10.4.3 Finite-size scaling observations . . . . . . . . . . . . . . . . . . . . 179
10.5 (Advanced) Internal flux distribution . . . . . . . . . . . . . . . . . . . . 183

10.5.1 Scaling of current fluctuations . . . . . . . . . . . . . . . . . . . . . 185


10.5.2 Potential fluctuations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
10.6 (Advanced) Multi-fractals . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
10.7 Real conductivity . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
10.8 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191

11 Elastic properties of disordered media . . . . . . . . . . . . . . . 195


11.1 Rigidity percolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 195
11.1.1 Developing a theory for E(p, L) . . . . . . . . . . . . . . . . . . . 198
11.1.2 Compliance of the spanning cluster at p = pc . . . . . . . . 199
11.1.3 Finding Young’s modulus E(p, L) . . . . . . . . . . . . . . . . . . 201

12 Diffusion in disordered media . . . . . . . . . . . . . . . . . . . . . . . 203


12.1 Diffusion and random walks in homogeneous media . . . . . . . . 203
12.1.1 Theory for the time development of a random walk . . 204
12.1.2 Continuum description of a random walker . . . . . . . . . . 206
12.2 Random walks on clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
12.2.1 Developing a program to study random walks on
clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
12.2.2 Diffusion on a finite cluster for p < pc . . . . . . . . . . . . . . 215
12.2.3 Diffusion at p = pc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
12.2.4 Diffusion for p > pc . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
12.2.5 Scaling theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
12.2.6 Diffusion on the spanning cluster . . . . . . . . . . . . . . . . . . 223
12.2.7 (Advanced) The diffusion constant D . . . . . . . . . . . . . . 224
12.3 Exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226

13 Dynamic processes in disordered media . . . . . . . . . . . . . . 229


13.1 Diffusion fronts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
13.2 Invasion percolation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
13.2.1 Gravity stabilization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
13.2.2 Gravity destabilization . . . . . . . . . . . . . . . . . . . . . . . . . . . 241

References . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 247
1 Introduction to percolation

Percolation is the study of connectivity of random media and of other properties of connected subsets of random media [37, 30, 7]. Fig. 1.1
illustrates a porous material — a material with holes, pores, of various
sizes. These are examples of random materials with built-in disorder. In
this book, we will address the physical properties of such media, develop
the underlying mathematical theory and the computational and statistical
methods needed to discuss the physical properties of random media. In
order to do that, we will develop a simplified model system, a model
porous medium, for which we can develop a well-founded mathematical
theory, and then afterwards we can apply this model to realistic random
systems.
The porous medium illustrated in the figure serves as a useful, fundamental model for random media in general. What characterizes the
porous material in Fig. 1.1? The porous medium consists of regions
with material and without material. It is therefore an extreme, binary
version of a random medium. An actual physical porous material will be
generated by some physical process, which will affect the properties of
the porous medium in some way. For example, if the material is generated
by sedimentary deposition, details of the deposition process may affect
the shape and connectivity of the pores, or later fractures may generate
straight fractures in addition to more round pores. These features are
always present in the complex geometries found in nature, and they will
generate correlations in the randomness of the material. While these
correlations can be addressed in detailed, specific studies of random materials, we will here instead start with a simpler class of materials —
uncorrelated random, porous materials.

Fig. 1.1 Illustration of a porous material from a nanoporous silicate (SiO2 ). The colors inside the pores illustrate the distance to the nearest part of the solid.

We will here introduce a simplified model for a random porous material.
We divide the material into cubes (sites) of size d. Each site can be either
filled or empty. We can use this method to characterize an actual porous
medium, as illustrated in Fig. 1.1, or we can use it as a model for a random
porous medium if we fill each voxel with a probability p. On average, the
volume of the solid part of the material will be Vs = pV , where V is the
volume of the system, and the volume of the pores will be Vp = (1 − p)V .
We usually call the relative volume of the pores the porosity, φ = Vp /V ,
of the material. The solid is called the matrix and the relative volume
of the matrix, Vs /V is called the solid fraction, c = Vs /V . In this case,
we see that p corresponds to the solid fraction. Initially, we will assume
that on the scale of lattice cells, the fill probabilities are statistically
independent – we will study an uncorrelated random medium.
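As a simple illustration, we can generate such a model material and measure these quantities directly; a minimal sketch (the values of p and L are arbitrary choices):
from pylab import *
p = 0.3
L = 100
m = rand(L, L) < p                 # occupied (solid) sites
c = mean(m)                        # solid fraction c = Vs/V, close to p
phi = 1 - c                        # porosity phi = Vp/V, close to 1 - p
print('c =', c, 'phi =', phi)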
Fig. 1.2 illustrates a two-dimensional system of 4 × 4 cells filled with
a probability p. We will call the filled cells occupied or set, and they are
colored black. This system is a 4 × 4 matrix, where each cell is filled with
probability p. We can generate such a matrix, m, in python using
from pylab import *
p = 0.25
z = rand(4,4)
m = z<p
imshow(m, origin='lower')
show()

The resulting matrices are shown in Fig. 1.2 for various values of p. The left figure illustrates the random matrix z with its various values. A site i is set once p reaches the value zi in the matrix. (This is similar to changing the water level and observing which parts of a landscape are above water).

Fig. 1.2 Illustration of an array of 4 × 4 random numbers, and the various sites set for
different values of p.

Percolation is the study of connectivity. The simplest question we can
ask is when does a path form from one side of the sample to the other?
By when, we mean at what value of p. For the particular realization of
the matrix m shown in Fig. 1.2 we see that the answer depends on how
we define connectivity. If we want to make a path along the set (black)
sites from one side to another, we must decide on when two sites are
connected. Here, we will typically use nearest neighbor connectivity: Two
sites in a square (cubic) lattice are connected if they are nearest neighbors.
In the square lattice in Fig. 1.2 each site has Z = 4 nearest neighbors
and Z = 8 next-nearest neighbors, where the number Z is called the
connectivity. We see that with nearest-neighbor connectivity, we get a
path from the bottom to the top when p = 0.7, but with next-nearest
neighbor connectivity we would get a path from the bottom to the top
already at p = 0.4. We call the value pc , when we first get a path from
one side to another (from the top to the bottom, from the left to the
right, or both) the percolation threshold. For a given realization of the
matrix, there is a well-defined value for pc , but for another realization,
there would be another pc . We therefore need to either use statistical
averages to characterize the properties of the percolation system, or we
need to refer to a theoretical – thermodynamic – limit, such as the value
for pc in an infinitely large system. When we use pc here, we will refer to
the thermodynamic value.
In this book, we will develop theories describing various physical prop-
erties of the percolation system as a function of p. We will characterize
the sizes of connected regions, the size of the region connecting one side
to another, the size of the region that contributes to transport (fluid,
thermal or electrical transport), and other geometrical properties of the
system. Most of the features we study will be universal, independent of
many of the details of the system. From Fig. 1.2 we see that pc depends
on the details: It depends on the rule for connectivity. It would also depend on the type of lattice used: square, triangular, hexagonal, etc. The
value of pc is specific. However, many other properties are general. For
example, how the conductivity of the porous medium depends on p near
pc does not depend on the type of lattice or the choice of connectivity
rule. It is universal. This means that we can choose a system which is
simple to study in order to gain intuition about the general features,
and then apply that intuition to the special cases afterwards. While the
connectivity or type of lattice does not matter, some things do matter.
For example, the dimensionality matters: The behavior of a percolation
system is different in one, two and three dimensions. However, the most
important differences occur between one and two dimensions, where the
difference is dramatic, whereas the difference between two and three
dimensions is more a matter of degree that we can easily handle. Actually, the
percolation problem becomes simpler again in higher dimensions. In two
dimensions, it is possible to go around a hole, and still have connectivity.
But it is not possible to have connectivity of both the pores and the
solid in the same direction at the same time. This is possible in three
dimensions: A two-dimensional creature would have problems with hav-
ing a digestive tract, as it would divide the creature in two, but in three
dimensions this is fully possible. Here, we will therefore focus on two and
three-dimensional systems.

In this book, we will first address percolation in one and infinite
dimensions, since we can solve the problems exactly in these cases. We
will then address percolation in two dimensions, where there are no exact
solutions. However, we will see that if we assume that the cluster density
function has a particular scaling form, we can still address the problem in
two dimensions and make powerful predictions. We will also see that close
to the percolation threshold the porous medium has a self-affine scaling
structure — it is a fractal. This property has important consequences
for the physical properties of random systems. We will also see how this
is reflected in a systematic change of scales, a renormalization procedure,
which is a general tool that can be applied to rescaling in many areas.

1.1 Basic concepts in percolation

Let us initially study a specific example of a random medium. We will
generate an L × L lattice of points that are occupied with probability
p. This corresponds to a coarse-grained porous medium with a porosity
φ = p, if we consider the occupied sites to be holes in
the porous material.
We can generate a realization of a square L × L system in python
using
from pylab import *
L = 20
p = 0.5
z = rand(L,L)
m = z<p
imshow(m, origin='lower')
show()

The resulting matrix is illustrated in Fig. 1.3. However, this visualization
does not provide us with any insight into the connectivity of the sites in
this system. Let us instead analyze the connected regions in the system.

Definitions
• two sites are connected if they are nearest neighbors (4 neigh-
bors on square lattice)
• a cluster is a set of connected sites
• a cluster is spanning if it spans from one side to the opposite
side
• a cluster that is spanning is called the spanning cluster
• a system is percolating if there is a spanning cluster in the
system

Fortunately, there are built-in functions in python that find connected regions in an image. The function measurements.label finds
clusters based on a given connectivity. For example, with a connectivity
corresponding to 4 we find
from scipy.ndimage import measurements
lw, num = measurements.label(m)

This function returns the matrix lw, which for each site in the original
array tells what cluster it belongs to. Clusters are numbered sequentially,
and each cluster is given an index. All the sites with the same index
belong to the same cluster. The resulting array is shown in Fig. 1.3,
where the index for each site is shown and a color is used to indicate the
various clusters. Notice that there is a distribution of cluster sizes, but
no cluster is large enough to reach from one side to another, and as a
result the system does not percolate.

Fig. 1.3 Illustration of the index array for a 10 × 10 system for p = 0.45.

In order to visualize the clusters effectively, we give the various clusters
different colors.
imshow(lw, origin='lower')

Unfortunately, this colors the clusters gradually from the bottom up.
This is a property of the underlying algorithm: Clusters are indexed
starting from the bottom-left of the matrix. Hence, clusters that are close
to each other will get similar colors and therefore be difficult to discern
unless we shuffle the colormap. We can fix this by shuffling the labeling:
b = arange(lw.max() + 1)
shuffle(b)
shuffledLw = b[lw]
imshow(shuffledLw, origin=’lower’)

The resulting image is shown to the right in Fig. 1.3. (Notice that in
these figures we have reversed the ordering of the y-axis. Usually, the
first row is in the top-left corner in your plots – and this will also be
the case in most of the following plots).
It may also be useful to color the clusters based on the size of the
clusters, where size refers to the number of sites in a cluster. We can do
this using
area = measurements.sum(m, lw, index=arange(lw.max() + 1))
areaImg = area[lw]
imshow(areaImg, origin='lower')
colorbar()

Let us now study the effect of p on the set of connected clusters. We vary the value of p for the same underlying random matrix, and plot the
resulting images:
from pylab import *
from scipy.ndimage import measurements
L = 100
pv = [0.2,0.3,0.4,0.5,0.6,0.7]
z = rand(L,L)
for i in range(len(pv)):
    p = pv[i]
    m = z<p
    lw, num = measurements.label(m)
    area = measurements.sum(m, lw, index=arange(lw.max() + 1))
    areaImg = area[lw]
    subplot(2,3,i+1)
    tit = 'p='+str(p)
    imshow(areaImg, origin='lower')
    title(tit)
    axis()

Fig. 1.4 shows the clusters for a 100 × 100 system for p ranging from
0.2 to 0.7 in steps of 0.1. We see that the clusters increase in size as p
increases, but at p = 0.6, there is just one large cluster spanning the
entire region. We have a percolating cluster, and we call this cluster that
spans the system the spanning cluster. However, the transition is very
rapid from p = 0.5 to p = 0.6. We therefore look at this region in more
detail in Fig. 1.5. We see that the size of the largest cluster increases
rapidly as p reaches a value around 0.6, which corresponds to pc for this
system. At this point, the largest cluster spans the entire system. For
the two-dimensional system illustrated here we know that in an infinite
lattice the percolation threshold is pc ≈ 0.5927.

Fig. 1.4 Plot of the clusters in a 100 × 100 system for various values of p.

Fig. 1.5 Plot of the clusters in a 100 × 100 system for values of p between 0.5 and 0.6.

The aim of this book is to develop a theory to describe how this random
porous medium behaves close to pc . We will characterize properties such
as the density of the spanning cluster, the geometry of the spanning
cluster, and the conductivity and elastic properties of the spanning cluster.
We will address the distribution of cluster sizes and how various parts of
the clusters are important for particular physical processes. We start by
characterizing the behavior of the spanning cluster near pc .

1.2 Percolation probability

When does the system percolate? When there exists a path connecting
one side to another. This occurs at some value p = pc . However, in a finite
system, like the system we simulated above, the value of pc for a given
realization will vary with each realization. It may be slightly above or
slightly below the pc we find in an infinite sample. Later, we will develop
a theory to understand how the effective pc in a finite system varies from
the thermodynamic pc in an infinitely large system. But already now,
we realize that as we perform different experiments, we will measure
various values of pc . We can characterize this behavior by introducing a
probability Π(p, L):

Percolation probability
The percolation probability Π(p, L) is the probability for there to
be a connected path from one side to another side as a function of
p in a system of size L.

We can measure Π(p, L) in a finite sample of size L × L, by generating


many random matrices. For each matrix, we perform a cluster analysis for
a sequence of pi values. For each pi we find all the clusters. We then check
if any of the clusters are present both on the left- and on the right-hand
side of the lattice. In that case, they are spanning. (We could also have
included a test for clusters spanning from the top to the bottom, but
this does not change the statistics significantly). In this case, there is a
spanning cluster — the system percolates — and we count up how many
times a system percolates for a given pi , Ni , and then divide by the total
number of experiments, N , to estimate the probability for percolation for
a given pi , Π(pi , L) ≈ Ni /N . We implement this as follows. First, we
generate a sequence of 100 pi values from 0.35 to 1.0:
p = linspace(0.35,1.0,100)

Then we prepare an array for Ni with the same number of elements as
pi :
nx = len(p)
Pi = zeros(nx)

We will generate N = 1000 samples:

N = 1000

We will then loop over all samples, and for each sample we generate a new
random matrix. Then for each value of pi we perform the cluster analysis
as we did above. We use the function measurements.label to label the
clusters. Then we check if the set of labels on the left side and the set of
labels on the right side have any intersection. If the length of the set of
intersections is larger than zero, there is at least one percolating cluster:
lw, num = measurements.label(m)
perc_x = intersect1d(lw[0,:],lw[-1,:])
perc = perc_x[where(perc_x>0)]

Now, we are ready to implement this into a complete program. For a given value of p, we count in how many simulations Np (p) there is a path spanning from one side to another and estimate Π̄(p) ≈ Np (p)/N , where
N is the total number of simulations/samples. This is implemented in
the following program:
from pylab import *
from scipy.ndimage import measurements
p = linspace(0.4,1.0,100)
nx = len(p)
Ni = zeros(nx)
N = 1000
L = 100
for i in range(N):
    z = rand(L,L)
    for ip in range(nx):
        m = z<p[ip]
        lw, num = measurements.label(m)
        perc_x = intersect1d(lw[0,:],lw[-1,:])
        perc = perc_x[where(perc_x>0)]
        if (len(perc)>0):
            Ni[ip] = Ni[ip] + 1
Pi = Ni/N
plot(p,Pi)
xlabel('$p$')
ylabel(r'$\Pi$')

The resulting plot of Π(p, L) is seen in Fig. 1.7, which shows Π(p, L) for several system sizes L. We see that as the system size increases, Π(p, L) approaches a step function at p = pc .

Fig. 1.6 Illustration of the BoundingBox for the clusters in a 6 × 6 simulation.



Fig. 1.7 Plot of Π(p, L), the probability for there to be a connected path from one side to another, as a function of p for various system sizes L.

1.3 Spanning cluster

The probability Π(p, L) describes the probability for there to be a spanning cluster, but what about the spanning cluster itself, how can we characterize it? We see from Fig. 1.4 that the spanning cluster grows quickly around p = pc . Let us therefore characterize the cluster by its size, MS , or by its density, P (p, L) = MS /L^2 , which corresponds to the probability for a site to belong to the spanning cluster.

Density of the spanning cluster


The probability P (p, L) for a site to belong to a spanning cluster is
called the density of the spanning cluster, or the order parameter
for the percolation problem.

We can measure P (p, L) by counting the mass MS of the spanning cluster for various values of pi . We can find the mass of the spanning cluster by finding a cluster that spans the system (there may be more than one) as we did above, and then measuring the number of sites in the cluster using area = measurements.sum(m, lw, perc[0]).
We do this in the same program as we developed above. For each pi ,
we see if a cluster is spanning from one side to another, and if it is, we
add the mass of this cluster to MS (pi ). We implement these features in
the following program, which measures both Π(p, L) and P (p, L) for a
given value of L:
from pylab import *
from scipy.ndimage import measurements
p = linspace(0.4,1.0,100)
nx = len(p)
Ni = zeros(nx)
P = zeros(nx)
N = 1000
L = 100
for i in range(N):
    z = rand(L,L)
    for ip in range(nx):
        m = z<p[ip]
        lw, num = measurements.label(m)
        perc_x = intersect1d(lw[0,:],lw[-1,:])
        perc = perc_x[where(perc_x>0)]
        if (len(perc)>0):
            Ni[ip] = Ni[ip] + 1
            area = measurements.sum(m, lw, perc[0])
            P[ip] = P[ip] + area
Pi = Ni/N
P = P/(N*L*L)
subplot(2,1,1)
plot(p,Pi)
subplot(2,1,2)
plot(p,P)

The resulting plot of P (p, L) is shown in the bottom of Fig. 1.7. We
see that P (p, L) changes rapidly around p = pc and that it grows slowly
– approximately linearly – as p → 1. We can understand this linear
behavior: When p is near 1 all the set sites are connected and part of
the spanning cluster. The density of the spanning cluster is therefore
proportional to p in this limit. We will now develop a theory for the
observations of Π(p, L), P (p, L) and other features of the percolation
system. First, we see what insights we can gain from small, finite systems.

1.4 Percolation in small systems

First, we will address the two-dimensional system directly. We will study an L × L system, and the various physical properties of it. We will start
with L = 1, L = 2, and L = 3, and then try to generalize.
First, let us address L = 1. In this case, the system percolates if the site
is present, which has a probability p, hence the percolation probability is
Π(p, 1) = p. The probability for a site to belong to the spanning cluster
is p, therefore P (p, 1) = p.
Then, let us examine L = 2. This is still simple, but we now have to
develop a more advanced strategy than for L = 1. Our strategy will be
to list all possible outcomes, find the probability for each outcome, and
then use this to find the probability for the various physical properties
we are interested in. The possible configurations are listed in Fig. 1.8.
The strategy is to use a basic result from probability theory: If we want
to calculate the probability of an event A, we can do this by summing
the probability of A given B multiplied by the probability for B over all
possible outcomes B.

P (A) = Σ_B P (A|B)P (B) , (1.1)

where we have used the notation P (A|B) to denote the probability of A given that B occurs. We can use this to calculate properties such as Π and P (p, L) by summing over all possible configurations c:

Π(p, L) = Σ_c Π(p, L|c)P (c) , (1.2)

where Π(p, L|c) is the value of Π for the particular configuration c, and
P (c) is the probability of this configuration.
The configurations for L = 2 have been numbered from c = 1 to
c = 16 in Fig. 1.8. However, configurations that are either mirror images
or rotations of each other will have the same probability and the same
physical properties since percolation can take place both in the x and the
y directions. It is therefore only necessary to group the configurations into
6 different classes as illustrated in the bottom of Fig. 1.8, but we then
need to remember the multiplicity, gc , for each class when we calculate
probabilities. Let us make a table of the configurations, the number of such
configurations, the probability of one such configuration, and the value
of Π(p, L|c) for this configuration.

Fig. 1.8 The possible configurations for a L = 2 site percolation lattice in two-dimensions.
The configurations are indexed using the cluster configuration number c.

c   gc   P (c)            Π(p, L|c)
1   1    p^0 (1 − p)^4    0
2   4    p^1 (1 − p)^3    0
3   4    p^2 (1 − p)^2    1
4   2    p^2 (1 − p)^2    0
5   4    p^3 (1 − p)^1    1
6   1    p^4 (1 − p)^0    1

We should check that we have actually listed all possible configurations. The total number of configurations is 2^4 = 16 = 1 + 4 + 4 + 2 + 4 + 1, which is ok.
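This bookkeeping can also be automated (this is essentially Exercise 1.2 below); a minimal sketch of one possible implementation, using scipy.ndimage.label (the same labeling function as measurements.label above) to identify the clusters:
from itertools import product
import numpy as np
from scipy.ndimage import label

def spans(lw):
    # a cluster spans if the same nonzero label appears on two opposite edges
    lr = np.intersect1d(lw[:, 0], lw[:, -1])
    tb = np.intersect1d(lw[0, :], lw[-1, :])
    return (lr > 0).any() or (tb > 0).any()

def Pi_exact(p, L=2):
    # sum P(c) over all 2**(L*L) configurations c that percolate
    total = 0.0
    for bits in product([0, 1], repeat=L*L):
        m = np.array(bits).reshape(L, L)
        lw, num = label(m)         # nearest-neighbor (4-)connectivity
        if spans(lw):
            n = m.sum()
            total += p**n*(1 - p)**(L*L - n)
    return total

p = 0.5
print(Pi_exact(p))                 # 0.5625, matching eq. (1.5) below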
We can then find the probability for Π directly:

Π = 0 · 1 · p^0 (1 − p)^4 + 0 · 4 · p^1 (1 − p)^3 + 1 · 4 · p^2 (1 − p)^2   (1.3)
  + 0 · 2 · p^2 (1 − p)^2 + 1 · 4 · p^3 (1 − p)^1 + 1 · 1 · p^4 (1 − p)^0 . (1.4)

We therefore find the exact value for Π(p, L = 2):

Π(p, L = 2) = 4p^2 (1 − p)^2 + 4p^3 (1 − p)^1 + p^4 (1 − p)^0 , (1.5)

which we can simplify further if we want. The shape of Π(p, L) for L = 1 and L = 2 is shown in Fig. 1.9.


Fig. 1.9 Plot of Π(p, L) for L = 1 and L = 2 as a function of p.
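The curves for L = 1 and L = 2 follow directly from the exact expressions above; a minimal plotting sketch (curves for larger L can be added using the enumeration program above):
from pylab import *
p = linspace(0, 1, 100)
plot(p, p, label='L=1')                                          # Pi(p,1) = p
plot(p, 4*p**2*(1 - p)**2 + 4*p**3*(1 - p) + p**4, label='L=2')  # eq. (1.5)
xlabel('$p$')
ylabel(r'$\Pi(p,L)$')
legend()
show()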

We could characterize p = pc as the number for which Π = 1/2, which would give pc (L = 2) ≈ 0.46, compared with pc (L = 1) = 1/2 for L = 1. Maybe we can just continue doing this type of calculation for higher and higher L and we will get a better and better approximation for pc ?
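The value pc (L = 2) ≈ 0.46 quoted above can be obtained numerically from eq. (1.5); a minimal sketch using scipy.optimize.brentq:
from scipy.optimize import brentq
# Pi(p, L=2) from eq. (1.5); it simplifies to (p*(2 - p))**2
Pi2 = lambda p: 4*p**2*(1 - p)**2 + 4*p**3*(1 - p) + p**4
pc2 = brentq(lambda p: Pi2(p) - 0.5, 0.0, 1.0)
print(pc2)                         # approximately 0.4588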
We notice that for finite L, Π(p, L) will be a polynomial of order L^2 - it is in principle a function we can calculate. However, the number of possible configurations is 2^{L^2}, which increases very rapidly
with L. It is therefore not realistic to use this technique for calculating the
percolation probabilities. We will need to have more powerful techniques,
or simpler problems, in order to perform exact calculations.
However, we can still learn much from a discussion of finite L. For
example, we notice that
Π(p, L) ≈ Lp^L + c_1 p^{L+1} + . . . + c_n p^{L^2} , (1.6)

in the limit of p ≪ 1. The leading order term when p → 0 is therefore Lp^L .
Similarly, we find that for p → 1, the leading order term is approximately

Π(p, L) ≈ 1 − (1 − p)^L . (1.7)

These two results give us an indication about how the percolation
probability Π(p, L) is approaching the step function when L → ∞.
Similarly, we can calculate P (p, L) for L = 2. However, we leave the calculation of L = 3 and of P (p, L) to the exercises.

1.5 Further reading

There are good general introduction texts to percolation theory such as the popular books by Stauffer and Aharony [37], by Sahimi [30], by Christensen and Moloney [7], and the classical book by Grimmett [14]. Mathematical aspects are addressed by Kesten [21] and phase transitions in general are introduced by e.g. Stanley [35]. Applications of percolation theory are found in many fields such as in geoscience [22], porous media [18] or social networks [33] and many more. We encourage you to explore these further.

1.6 Exercises

Exercise 1.1: Percolation for L = 3


a) Find P (p, L) for L = 1 and L = 2.
b) Categorize all possible configurations for L = 3.
c) Find Π(p, L) and P (p, L) for L = 3.

Exercise 1.2: Counting configurations in small systems


a) Write a program to find all the configurations for L = 2.
b) Use this program to find Π(p, L = 2) and P (p, L = 2). Compare with
the exact results from the previous exercise.
c) Use your program to find Π(p, L) and P (p, L) for L = 3, 4 and 5.

Exercise 1.3: Percolation in small systems in 3d


In this exercise we will study the three-dimensional site percolation
system for small system sizes.
a) How many configurations are there for L = 2?
b) Categorize all possible configurations for L = 2.
c) Find Π(p, L) and P (p, L) for L = 2.
d) Compare your results with your result for the two-dimensional system.
Comment on similarities and differences.
2 One-dimensional percolation

The percolation problem can be solved exactly in two limits: in the one-
dimensional and the infinite dimensional cases. Here, we will first address
the one-dimensional system. While the one-dimensional system does not
allow us to study the full complexity of the percolation problem, many
of the concepts and measures introduced to study the one-dimensional
problem can be generalized to higher dimensions.

2.1 Percolation probability

Let us first address a one-dimensional lattice of L sites. In this case, there
is a spanning cluster if and only if all the sites are occupied. If only a
single site is empty, there will not be any connecting path from one side
to the other. The percolation probability is therefore

Π(p, L) = p^L . (2.1)

This has a trivial behavior when L → ∞:

Π(p, ∞) = 0 for p < 1, and Π(p, ∞) = 1 for p = 1 . (2.2)

This shows that the percolation threshold is pc = 1 in one dimension. However, the one-dimensional system is anomalous: in higher dimensions we will always have pc < 1, so that we can study the system both above and below pc . Unfortunately, for the one-dimensional system we can only study the system below pc .
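We can verify eq. (2.1) numerically; a minimal sketch (the number of samples and the range of p are arbitrary choices for illustration). A one-dimensional system percolates only if all L sites are occupied:
from pylab import *
L = 16
N = 10000                          # samples per value of p
p = linspace(0.8, 1.0, 21)
Pi = zeros(len(p))
for ip in range(len(p)):
    z = rand(N, L)                 # N independent one-dimensional systems
    Pi[ip] = mean((z < p[ip]).all(axis=1))   # fraction with all sites occupied
plot(p, Pi, 'o', label='measured')
plot(p, p**L, label='$p^L$')
xlabel('$p$')
ylabel(r'$\Pi(p,L)$')
legend()
show()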

2.2 Cluster number density

2.2.1 Definition of cluster number density


In the simulations in Fig. 1.4 we saw that the percolation system was
characterized by a wide distribution of clusters – regions of connected
sites. The clusters have varying shape and size. As we increased p towards pc we saw that the clusters increased in size until they reached
the system size. We can use the one-dimensional system to learn more
about the behavior of clusters as p approaches pc .
Fig. 2.1 illustrates a realization of an L = 16 percolation system in
one dimension below pc = 1. In this case there are 5 clusters of sizes
1,1,4,2,1 measured in the number of sites in each cluster. The clusters are
numbered - indexed - from 1 to 5 as we did for the numerical simulations
in two dimensions. How can we characterize the clusters in a system? In
percolation theory we characterize cluster sizes by asking a particular
question: If you point at a (random) site in the lattice, what is the
probability for this site to belong to a cluster of size s?

P (site is part of cluster of size s) = sn(s, p) . (2.3)

It is common to use the notation sn(s, p) for this probability for a given
site to belong to a cluster of size s. Why is it divided into two parts,
s and n(s, p)? Because we must divide the question into two parts: (1)
What is the probability for a given site to be a specific site in a cluster
of size s, and (2) how many such specific sites are there? What do we
mean by a specific site? For cluster number 3 in Fig. 2.1 there are 4 sites.
We could therefore ask the question, what is the probability for a site to
be the left-most site in a cluster of size s. This is what we mean by a
specific site. We could ask the same question about the second left-most,
the third left-most and so on. We call the probability for a site to belong
to a specific site in a cluster of size s (such as the left-most site in the
cluster) the cluster number density, and we use the notation n(s, p)
for this. To find the probability sn(s, p) for a site to belong to any of the
s sites in a cluster of size s we must sum the probabilities for each of the
specific sites. This is illustrated for the case of a cluster of size 4:

P (site to be in cluster of size 4)
= P (site to be left-most site in cluster of size 4)
+ P (site to be second left-most site in cluster of size 4)
+ P (site to be third left-most site in cluster of size 4)
+ P (site to be fourth left-most site in cluster of size 4)
= 4P (site to be left-most site in cluster of size 4) ,

because each of these probabilities is the same. What is the probability
for a site to be the left-most site in a cluster of size s in one dimension?
In order for it to be in a cluster of size s, the site must be present, which
has probability p, and then s − 1 sites must also be present to the right
of it, which has probability p^{s−1} . In addition, the site to the left must be empty (illustrated by an X in Fig. 2.1 bottom part), which has probability (1 − p), and so must the site to the right of the last site (illustrated by an X in Fig. 2.1 bottom part), which also has probability (1 − p). Since the
occupation probabilities for each site are independent, the probability
for the site to be the left-most site in a cluster of size s is:

n(s, p) = (1 − p)^2 p^s . (2.4)

This is the cluster number density in one dimension.

Fig. 2.1 Realization of an L = 16 percolation system in one dimension. Occupied sites are marked with black squares. The cluster indices, the left-most site in the cluster of size 4, and the empty neighbor sites (marked X) are indicated.

Cluster number density


The cluster number density n(s, p) is the probability for a site to
be a particular site in a cluster of size s. For example, in 1d, n(s, p)
is the probability for a site to be the left-most site in a cluster of
size s.

We should check that sn(s, p) really is a normalized probability. How
should it be normalized? We know that if we point at a random site in
the system, the probability for that site to be occupied is p. An occupied
site is then either a part of a finite cluster of some size s or it is part
of the infinite cluster. The probability for a site to be a part of the
infinite cluster is P . This means that we have the following normalization
condition:

Normalization of the cluster number density


A site is occupied with probability p. An occupied site is either part
of a finite cluster of size s with probability sn(s, p) or it is part of
the infinite (spanning) cluster with probability P :

p = Σ_{s=1}^{∞} s n(s, p) + P . (2.5)

Let us check that this is indeed the case for the one-dimensional result
we have found by calculating the sum:
Σ_{s=1}^{∞} s n(s, p) = Σ_{s=1}^{∞} s p^s (1 − p)^2 = (1 − p)^2 p Σ_{s=1}^{∞} s p^{s−1} , (2.6)

where we will now employ a common trick:

Σ_{s=1}^{∞} s p^{s−1} = (d/dp) Σ_{s=0}^{∞} p^s = (d/dp) [1/(1 − p)] = (1 − p)^{−2} , (2.7)

which gives

Σ_{s=1}^{∞} s n(s, p) = (1 − p)^2 p Σ_{s=1}^{∞} s p^{s−1} = (1 − p)^2 p (1 − p)^{−2} = p . (2.8)

Since P = 0 when p < 1, we see that the probability is normalized. We can use similar tricks to calculate moments of any order.
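A quick numerical sanity check of this normalization, truncating the infinite sum at a point where the terms are negligible:
from pylab import *
# check eq. (2.8): the sum of s*n(s,p) with n(s,p) = (1-p)**2 * p**s equals p;
# truncating at s = 10000 is harmless since the terms decay exponentially
for p in [0.5, 0.9, 0.99]:
    s = arange(1, 10001)
    print(p, sum(s*(1 - p)**2*p**s))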

2.2.2 Measuring the cluster number density


In order to gain further insight into the distribution of cluster sizes, let
us study Fig. 2.1 in more detail. There are 3 clusters of size s = 1,
one cluster of size s = 2, and one cluster of size s = 4. We could therefore
introduce a histogram of cluster sizes, which is what we would do if
we studied the cluster distribution numerically. Let us write Ns as the
number of clusters of size s.
s Ns n(s, p)
1 3 3/16
2 1 1/16
3 0 0/16
4 1 1/16

How can we now estimate sn(s, p), the probability for a given site to be
part of a cluster of size s, from Ns ? The probability for a site to belong
to a cluster of size s can be estimated by the number of sites belonging
to a cluster of size s divided by the total number of sites. The number
of sites belonging to a cluster of size s is sNs , and the total number of
sites is Ld , where L is the system size and d is the dimensionality. (Here,
d = 1). This means that we can estimate the probability sn(s, p) from
s n̄(s, p) = sNs /L^d , (2.9)

where we use a bar to show that this is an estimated quantity and not the actual probability. We divide by s on both sides, and find

n̄(s, p) = Ns /L^d . (2.10)
This argument and the result are valid in any dimension, not only for
d = 1. We have therefore found a method to estimate the cluster number
density:

Measuring the cluster number density



We can measure n(s, p) in a simulation by measuring Ns , the number


of clusters of size s, and then calculate n(s, p) from
n̄(s, p) = Ns /L^d . (2.11)

For the clusters in Fig. 2.1 we find that

n̄(1, p) = N1 /L^1 = 3/16 , (2.12)
n̄(2, p) = N2 /L^1 = 1/16 , (2.13)
n̄(3, p) = N3 /L^1 = 0/16 , (2.14)
n̄(4, p) = N4 /L^1 = 1/16 , (2.15)
which is our estimate of n(s, p) based on this single realization.
We check the consistency of the result by ensuring that the estimated
probabilities also are normalized:
Σ_s s n̄(s, p) = 1 · 3/16 + 2 · 1/16 + 3 · 0/16 + 4 · 1/16 = 9/16 = p̄ , (2.16)

where p̄ is estimated from the number of present sites divided by the total number of sites.
In order to produce good statistical estimates for n(s, p) we must
sample from many random realizations of the system. If we sample from
M realizations, and then measure the total number of clusters of size s,
Ns (M ), summed over all the realizations, we estimate the cluster number
density from
n̄(s, p) = Ns (M )/(M L^d ) . (2.17)
Notice that all simulations are for finite L, and we would therefore expect
deviations due to L as well as randomness due to the finite number of
samples. However, we expect the estimated n(s, p; L) to approach the
underlying n(s, p) as M and L approach infinity.

2.2.3 Shape of the cluster number density


We found that the cluster number density in one dimension is

n(s, p) = (1 − p)^2 p^s . (2.18)

In Fig. 2.2 we have plotted n(s, p) for various values of p. In order to see the s-dependence directly for various p-values we plot

G(s) = (1 − p)^{−2} n(s, p) = p^s , (2.19)
as a function of s. We notice that (1 − p)^{−2} n(s, p) is approximately constant
for a wide range of s, and then falls off rapidly for some characteristic
value sξ which increases as p approaches pc = 1. We can understand this
behavior better by rewriting n(s, p) as

n(s, p) = (1 − p)^2 e^{s ln p} = (1 − p)^2 e^{−s/sξ} , (2.20)

where we have introduced the cut-off cluster size


sξ = −1/ ln p . (2.21)
What we are seeing in Fig. 2.2 is therefore the exponential cut-off curve,
where the cut-off sξ (p) increases as p → 1. We call it a cut-off because
the value of n(s, p) decays very rapidly (exponentially) when s is larger
than sξ .
How does sξ depend on p? We see from (2.21) that as p approaches
pc = 1, the characteristic cluster size sξ will diverge. The form of the
divergence can be determined in more detail through a Taylor expansion:
s_\xi = -\frac{1}{\ln p} \qquad (2.22)
when p is close to 1, 1 − p ≪ 1, and we can write
\ln p = \ln(1 - (1 - p)) \simeq -(1 - p) , \qquad (2.23)
where we have used that ln(1 − x) = −x + O(x^2), which is simply the
Taylor expansion of the logarithm. As a result
s_\xi \simeq \frac{1}{1-p} = \frac{1}{p_c - p} = |p - p_c|^{-1/\sigma} . \qquad (2.24)

Fig. 2.2 (Top) A plot of n(s, p)(1 − p)^{-2} as a function of s for p = 0.900, 0.990 and
0.999 for a one-dimensional percolation system shows that the cut-off increases as p
approaches pc = 1. (Bottom) When the s axis is rescaled by s/sξ all the curves fall onto
a common scaling function, that is, n(s, p) = (1 − p)^2 F (s/sξ).

This shows that the divergence of sξ as p approaches pc is a power-law
with exponent −1. This feature is general in percolation theory.

Scaling behavior of the characteristic cluster size

The characteristic cluster size s_\xi diverges as

s_\xi \propto |p - p_c|^{-1/\sigma} , \qquad (2.25)

when p → pc. In one dimension, σ = 1.
The value of the exponent σ depends on the lattice dimensionality,
but it does not depend on the details of the lattice. It would, for example,
be the same also for next-nearest neighbor connectivity — a problem we
leave for the reader to solve as an exercise.

The functional form we have found is also an example of a data
collapse. We see that if we plot (1 − p)^{-2} n(s, p) as a function of s/sξ,
all data-points for various values of p should fall onto a single curve, as
illustrated in Fig. 2.2:
n(s, p) = (1 - p)^2 e^{-s/s_\xi} . \qquad (2.26)
This is what we call a data collapse. We have one behavior for small s
and then a rapid cut-off when s reaches sξ. We can rewrite n(s, p) so
that all the sξ dependence is in the cut-off function by realizing that
since sξ ≃ (1 − p)^{-1} we have that (1 − p)^2 = s_\xi^{-2}. This gives
n(s, p) = s_\xi^{-2} e^{-s/s_\xi} = s^{-2} \left(\frac{s}{s_\xi}\right)^{2} e^{-s/s_\xi} = s^{-2} F\!\left(\frac{s}{s_\xi}\right) , \qquad (2.27)
where F(u) = u^2 e^{-u}. We will see later that this form is general: it is
valid for percolation in any dimension, although with other values for
the exponent −2. In percolation theory, we call this exponent τ:

n(s, p) = s^{-\tau} F(s/s_\xi) , \qquad (2.28)

where τ = 2 in one dimension. The exponent τ is another example
of a universal exponent that does not depend on details such as the
connectivity rule, but it does depend on the dimensionality of the system.

2.2.4 Numerical measurement of the cluster number density
Let us now test the measurement method and the theory through a
numerical study of the cluster number density. According to the theory
developed above we can estimate the cluster number density n(s, p) from

\bar{n}(s, p) = \frac{N_s(M)}{M \cdot L^d} , \qquad (2.29)
where Ns (M ) is the number of clusters of size s measured in M re-
alizations of the percolation system. We generate a one-dimensional
percolation system and index the clusters using
from pylab import *
from scipy.ndimage import measurements
L = 20
p = 0.90
z = rand(L)
m = z < p
lw, num = measurements.label(m)

Now, lw contains the indices for all the clusters. We can extract the size
of the clusters by summing the number of elements for each label:
labelList = arange(lw.max() + 1)
area = measurements.sum(m, lw, labelList)

The resulting list of areas for one sample is
>> lw
array([1, 1, 1, 0, 2, 2, 2, 2, 2, 2, 2, 2, 2, 0, 3, 0, 4, 4, 4, 4],
      dtype=int32)
>> area
array([0., 3., 9., 1., 4.])

We need to collect all the areas of all the clusters for many realizations,
and then calculate the number of clusters of each size s based on this
long list of areas. This is all brought together by continuously appending
the area-array to the end of an array allarea that contains the areas
of all the clusters.
from pylab import *
from scipy.ndimage import measurements
nsamp = 1000
L = 1000
p = 0.90
allarea = array([])
for i in range(nsamp):
    z = rand(L)
    m = z < p
    lw, num = measurements.label(m)
    labelList = arange(lw.max() + 1)
    area = measurements.sum(m, lw, labelList)
    allarea = append(allarea, area)
n, sbins = histogram(allarea, bins=int(max(allarea)))
s = 0.5*(sbins[1:] + sbins[:-1])
nsp = n/(L*nsamp)
sxi = -1.0/log(p)
nsptheory = (1 - p)**2*exp(-s/sxi)
plot(s, nsp, 'o', s, nsptheory, '-')
xlabel('$s$')
ylabel('$n(s,p)$')

This script also calculates Ns using the histogram function, with one
bin for each possible value of s:
n,sbins = histogram(allarea,bins=int(max(allarea)))
s = 0.5*(sbins[1:]+sbins[:-1])

where we find s as the midpoints of the bins returned by the histogram
function.
We estimate n(s, p) from
nsp = n/(L*nsamp)

We then find the theoretically predicted form for n(s, p), which is n(s, p) =
(1 − p)^2 exp(−s/sξ), where sξ = −1/ln p. This is calculated for the same
values of s as found from the histogram using:
sxi = -1.0/log(p)
nsptheory = (1-p)**2*exp(-s/sxi)

When we use the histogram function with many bins, we risk that many
of the bins contain zero elements. To remove these elements from the
plot, we can use the nonzero function to find the indices of the elements
of n that are non-zero:
i = nonzero(n)

And then we only plot the values of n(s, p) at these indices. The values
for the theoretical n(s, p) are calculated for all values of s:
plot(s[i],nsp[i],'o',s,nsptheory,'-')

The resulting plot is shown in Fig. 2.3. We see that the measured results
and the theoretical values fit nicely, even though the theory is for infinite
system sizes and the simulations were performed at L = 1000. We also
see that for larger values of s there are fewer observed values. It may
therefore be a good idea to make the bins used for the histogram larger
for larger values of s. We will return to this when we measure the cluster
number density in two-dimensional systems in chapter 4.
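We can also check the data collapse from (2.27) directly. The following is a sketch of our own (not part of the original scripts), which repeats the measurement above for several values of p and plots s^2 n(s, p) as a function of s/sξ, so that all curves should fall onto the common function F(u) = u^2 e^{-u}:

from pylab import *
from scipy.ndimage import measurements
nsamp = 1000
L = 1000
for p in [0.90, 0.95, 0.99]:
    allarea = array([])
    for j in range(nsamp):
        m = rand(L) < p
        lw, num = measurements.label(m)
        area = measurements.sum(m, lw, arange(lw.max() + 1))
        allarea = append(allarea, area)
    n, sbins = histogram(allarea, bins=int(max(allarea)))
    s = 0.5*(sbins[1:] + sbins[:-1])
    nsp = n/(L*nsamp)
    sxi = -1.0/log(p)               # cut-off cluster size from (2.21)
    ind = nonzero(n)
    plot(s[ind]/sxi, s[ind]**2*nsp[ind], 'o', label='p=%g' % p)
legend()
xlabel('$s/s_\\xi$')
ylabel('$s^2 n(s,p)$')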

2.2.5 Average cluster size
Since we have a precise form of the cluster number density, n(s, p), we
can use it to calculate the average cluster size. However, what do we
mean by the average cluster size in this case? In percolation theory it is
common to define the average cluster size as the average size of a cluster
connected to a given (random) site in our system. That is, we will use the
cluster number density, n(s, p), as the basic distribution for calculating
the moments.

Fig. 2.3 Plot of the estimated n(s, p), based on M = 1000 samples of a L = 1000 system
with p = 0.9, and the theoretical n(s, p) curve on a linear scale (top) and a semilogarithmic
scale (bottom). The semilogarithmic plot clearly shows that n(s, p) follows an exponential
curve.

Average cluster size

The average cluster size S(p) is defined as

S(p) = \langle s \rangle = \sum_s s \left( \frac{s\,n(s, p)}{\sum_s s\,n(s, p)} \right) . \qquad (2.30)

The normalization sum in the denominator is equal to p when p < pc.
We can therefore write this as

S(p) = \sum_s s \left( \frac{s\,n(s, p)}{p} \right) . \qquad (2.31)

Similarly, we can define the k-th moment to be

S_k = \langle s^k \rangle = \sum_s s^k \left( \frac{s\,n(s, p)}{p} \right) . \qquad (2.32)
Let us calculate the first moment, corresponding to k = 1, the average
cluster size.
S = \frac{1}{p} \sum_s s^2 n(s, p) \qquad (2.33)

  = \frac{(1-p)^2}{p} \sum_s s^2 p^s \qquad (2.34)

  = \frac{(1-p)^2}{p} \sum_s p \frac{\partial}{\partial p}\, p \frac{\partial}{\partial p}\, p^s \qquad (2.35)

  = \frac{(1-p)^2}{p}\, p \frac{\partial}{\partial p}\, p \frac{\partial}{\partial p} \sum_s p^s \qquad (2.36)

  = \frac{(1-p)^2}{p}\, p \frac{\partial}{\partial p} \frac{p}{(1-p)^2} \qquad \text{(from } \sum_s s\,n(s, p) = p \text{)} \qquad (2.37)

  = (1-p)^2 \frac{\partial}{\partial p} \frac{p}{(1-p)^2} \qquad (2.38)

  = (1-p)^2 \left( \frac{1}{(1-p)^2} + \frac{2p}{(1-p)^3} \right) \qquad (2.39)

  = \frac{1+p}{1-p} , \qquad (2.40)
where we have used the trick introduced in (2.7) to move the differentiation
outside the sum. In addition, we have used our previous result for
\sum_s s\,n(s, p) directly.

This shows that we can write

S = \frac{1+p}{1-p} = \frac{\Gamma}{|p - p_c|^\gamma} , \qquad (2.41)

with γ = 1 and Γ (p) = 1 + p. That is, the average cluster size also
diverges as a power-law when p approaches pc . The exponent γ = 1 of
the power-law is again universal. That is, it depends on features such as
dimensionality, but not on details such as the lattice structure.
Later, we will observe that we have a similar behavior for percolation
in any dimension, although with other values of γ.
We will leave it as an exercise for our reader to find the behavior for
higher moments, Sk , using a similar argument.
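As a quick numerical illustration (a sketch of our own, not part of the derivation), we can estimate S directly as the ratio of the measured moments, S = Σ_s s^2 N_s / Σ_s s N_s, and compare with the exact result (2.41):

from pylab import *
from scipy.ndimage import measurements
nsamp = 100
L = 10000
for p in [0.5, 0.7, 0.9]:
    S1 = S2 = 0.0
    for j in range(nsamp):
        m = rand(L) < p
        lw, num = measurements.label(m)
        area = measurements.sum(m, lw, arange(lw.max() + 1))
        S1 += sum(area)         # contributes sum_s s N_s
        S2 += sum(area**2)      # contributes sum_s s^2 N_s
    print(p, S2/S1, (1 + p)/(1 - p))   # measured versus exact S(p)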

2.3 Spanning cluster

The density of the spanning cluster, P (p; L), is similarly simple to find
and discuss. The spanning cluster only exists for p ≥ pc . The discussion

for P (p; L) is therefore not that interesting for the one-dimensional case.
However, we can still introduce some of the general notions.
The behavior of P(p; ∞) in one dimension is given as

P(p; \infty) = \begin{cases} 0 & p < 1 \\ 1 & p = 1 \end{cases} . \qquad (2.42)

We could introduce a similar finite size scaling discussion also for P (p; L).
However, we will here concentrate on the relation between P (p; L) and
the distribution of cluster sizes. The distribution of the size of a finite
cluster is described by sn(s, p), which is the probability that a given
site belongs to a cluster of size s. If we look at a given site, that site is
occupied with probability p. If a site is occupied it is either part of a
finite cluster of size s or it is part of the spanning cluster. Since these
two events cannot occur at the same time, the probability for a site to
be occupied must be the sum of the probability to belong to a finite cluster
and the probability to belong to the infinite cluster. The probability to belong to a finite
cluster is the sum of the probability to belong to a cluster of size s over all s.
We therefore have the equality:
p = P(p; L) + \sum_s s\,n(s, p; L) , \qquad (2.43)

which is not only valid in the one-dimensional case, but also for percolation
problems in general.
We can use this relation to find the density of the spanning cluster
from the cluster number density n(s, p) through
P(p) = p - \sum_s s\,n(s, p) . \qquad (2.44)

This illustrates that the cluster number density n(s, p) is a fundamental
property, which can be used to deduce many of the other properties of
the percolation system.

2.4 Correlation length

From the simulations in Fig. 1.4 we see that the size of the clusters in-
creases as p → pc . We expect a similar behavior for the one-dimensional
system. We have already seen that the mass (or area) of the clusters di-
verges as p → pc . However, the characteristic cluster size sξ characterizes

the mass (or area) of a cluster. How can we characterize the extent of a
cluster?
To characterize the linear extent of a cluster, we find the probability for
two sites at a distance r to be part of the same cluster. This probability
is called the correlation function, g(r):

The correlation function g(r) describes the conditional probability
that two sites a and b, which both are occupied and are separated
by a distance r, belong to the same cluster.

For one-dimensional percolation, two sites a and b can only be part of
the same cluster if all the points in between a and b are occupied. If r
denotes the number of points between a and b (not counting the start
and end positions) as illustrated in Fig. 2.4, we find that the correlation
function is

g(r) = p^r = e^{-r/\xi} , \qquad (2.45)

where ξ = −1/ln p is called the correlation length. The correlation length
diverges as p → pc = 1. We can again find the way in which it diverges
by using that when p → 1

\ln p = \ln(1 - (1 - p)) \simeq -(1 - p) . \qquad (2.46)

We find that the correlation length is

\xi = \xi_0 (p_c - p)^{-\nu} , \qquad (2.47)

with ν = 1. The correlation length therefore diverges as a power-law when
p → pc = 1. This behavior is general for percolation theory, although the
particular value of the exponent ν depends on the dimensionality.

Fig. 2.4 An illustration of the distance r between two sites a and b. The two sites a and
b are connected if and only if all the sites between a and b are occupied.
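We can also measure g(r) directly in a simulation. The following is a hedged sketch (not from the text): two occupied sites belong to the same cluster exactly when they carry the same cluster label, and we compare the measured g(r) with p^r from (2.45). Here r is, as above, the number of sites between the two sites.

from pylab import *
from scipy.ndimage import measurements
L = 10000
p = 0.9
nsamp = 100
rmax = 40
same = zeros(rmax)
both = zeros(rmax)
for j in range(nsamp):
    m = rand(L) < p
    lw, num = measurements.label(m)
    for r in range(rmax):
        a = m[:L - r - 1]          # site a
        b = m[r + 1:]              # site b, with r sites in between
        occ = a & b                # both sites occupied
        both[r] += occ.sum()
        same[r] += (occ & (lw[:L - r - 1] == lw[r + 1:])).sum()
g = same/both                      # conditional probability of same cluster
r = arange(rmax)
semilogy(r, g, 'o', r, p**r, '-')
xlabel('$r$')
ylabel('$g(r)$')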

We can use the correlation function to strengthen our interpretation
of when a finite system size becomes relevant. As long as ξ ≪ L, we will
not notice the effect of a finite system, because no cluster is large enough
to notice the finite system size. However, when ξ ≫ L, the behavior is
dominated by the system size L, and we are no longer able to determine
how close we are to percolation.

2.5 (Advanced) Finite size effects

We have so far not discussed the effects of a finite lattice size L. We have
implicitly assumed that the lattice size L is so large that the corrections
will be small and can be ignored. However, we have now observed that
the average cluster size S, the characteristic cluster size sξ, and the
correlation length ξ all diverge when p approaches pc. We will therefore
eventually start observing effects of the finite system size as p approaches
pc .
We have essentially ignored two effects:
• (a) the upper limit for cluster sizes is L and not ∞
• (b) there are corrections to n(s, p; L) due to the finite lattice size
The effect of (b) becomes clear as p approaches pc : As sξ increases it will
eventually be larger than L, which in one dimension also provides an
upper limit for s. This is indeed observed in the scaling collapse plot for
n(s, p), where we for finite lattice sizes will find a cross-over cluster size
sL , which depends on the lattice size L.
What will be the effect of including a finite upper limit L for all the
sums? This will imply that the result of the sum \sum_s p^s will be

\sum_{s=1}^{L} p^s = \frac{1 - p^L}{1 - p} , \qquad (2.48)

instead of 1/(1 − p) when L is infinite. Indeed, this sum approaches
1/(1 − p) as L → ∞. This implies that S will approach L when p → pc,
as can be seen by applying l'Hôpital's rule to find the limit as p → pc.
However, as long as ξ ≪ L, we will still observe that S ∝ 1/(1 − p). We
will make these types of arguments more precise when we discuss finite
size scaling further on.

2.5.1 Finite size effects in Π(p, L) and pc
So far we have only addressed the behavior of an infinite system. We
have found that Π(p, L) = p^L. From this, we find that

\Pi' = \frac{d\Pi}{dp} = L p^{L-1} . \qquad (2.49)
What is the interpretation of Π′? We can write

\Pi'(p, L)\,dp = \Pi(p + dp, L) - \Pi(p, L) , \qquad (2.50)

where the right-hand term is the probability that the system became
spanning when p increased from p to p + dp. That is, it is the probability
that the spanning cluster appeared for the first time for p between p and
p + dp. We can therefore interpret Π′ as the probability density for p₀,
the value of p at which a spanning cluster first appears.
What can we learn from the form of Π′? If we perform numerical
experiments to find pc, we see that for finite system sizes L, we might
observe a pc which is lower than 1. We can use Π′ to find the average p₀;
this will be done generally further on. Here, we will only study
the width of the distribution Π′, which will give us an idea about the
possible deviation when we measure pc by a measurement of p₀. We define
the width as the value px for which Π′ has reached 1/2 (or some other
value you like):

\Pi'(p_x, L) = L p_x^{L-1} = 1/2 . \qquad (2.51)
This gives

\ln p_x = -\frac{\ln 2}{L-1} . \qquad (2.52)
We will now use a standard approximation for ln x, when x is close to 1,
by writing
\ln p_x = \ln(1 - (1 - p_x)) \simeq -(1 - p_x) , \qquad (2.53)

where we have used that ln(1 − x) ≃ −x when x ≪ 1. This gives us that
(1 - p_x) \simeq \frac{\ln 2}{L-1} , \qquad (2.54)
and consequently,
p_x = p_c - \frac{\ln 2}{L-1} . \qquad (2.55)
We will therefore have an L dependence in the effective pc which is
measured for a finite system. We will address this topic in much more
depth later on under finite size scaling in chap. 6.
We can also find a similar scaling for Π(p, L), because

\Pi(p, L) = p^L = e^{L \ln p} = e^{-L/\xi} , \qquad (2.56)

where we have defined ξ = −1/ln p. We notice that ξ → ∞ when
p → pc = 1. We can therefore classify the behavior of Π according to the
relative sizes of the length ξ and L:

\Pi(p, L) = \begin{cases} 1 & L \ll \xi \\ 0 & L \gg \xi \end{cases} . \qquad (2.57)

We have therefore found an important length scale ξ in our problem that
appears whenever the length L appears.
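A small sketch (our addition) makes this explicit by plotting the exact Π(p, L) = p^L for increasing L; the curve sharpens towards the step at pc = 1 as L grows:

from pylab import *
p = linspace(0, 1, 1000)
for L in [4, 16, 64, 256]:
    plot(p, p**L, label='L=%d' % L)   # exact 1d spanning probability
legend()
xlabel('$p$')
ylabel('$\\Pi(p,L)$')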

2.6 Exercises

Exercise 2.1: Next-nearest neighbor connectivity in 1d
Assume that connectivity is to the next-nearest neighbors for an infinite
one-dimensional percolation system.
a) Find Π(p, L) for a system of length L.
b) What is pc for this system?
c) Find n(s, p) for an infinite system.

Exercise 2.2: Higher moments of s
The k-th moment of s is defined as

\langle s^k \rangle = \sum_s s^k \left( \frac{s\,n(s, p)}{p} \right) . \qquad (2.58)

a) Find the second moment of s as a function of p.
b) Calculate the first moment of s numerically from M = 1000 samples
for p = 0.90, 0.95, 0.975 and 0.99. Compare with the theoretical result.
c) Calculate the second moment of s numerically from M = 1000 samples
for p = 0.90, 0.95, 0.975 and 0.99. Compare with the theoretical result.
3 Infinite-dimensional percolation

We have now seen how the percolation problem can be solved exactly for
a one-dimensional system. However, in this case the percolation threshold
is pc = 1, and we were not able to address the behavior of the system
for p > pc . There is, however, another system in which many features of
the percolation problem can be solved exactly, and this is percolation
on a regular tree structure on which there are no loops. The condition
of no loops is essential. This is also why we call this system a system of
infinite dimensions, because we need an infinite number of dimensions in
Euclidean space in order to embed a tree without loops. In this section,
we will provide explicit solution to the percolation lattice on a particular
tree structure called the Bethe lattice.
The Bethe lattice, which is also called the Cayley tree, is a tree
structure in which each node has Z neighbors. This structure has no loops.
If we start from the central point and draw the lattice, the perimeter
grows as fast as the bulk. Generally, we will call Z the coordination
number. The Bethe lattice is illustrated in Fig. 3.1.


Fig. 3.1 Illustration of four generations of the Bethe lattice with number of neighbors
Z = 3, showing the central point and the branches (Branch 1, Branch 2, and Branch 3)
leading out from it.

3.1 Percolation threshold
If we start from the center and move along a branch, we will generate
(Z − 1) new neighbors from each of the branches. To get a spanning
cluster, we need to ensure that on average at least one of the Z − 1 sites
is occupied. That is, the occupation probability, p, must satisfy:

p(Z − 1) ≥ 1 , (3.1)

in order for this process to continue indefinitely. We associate pc with
the value of p where the cluster is on the verge of dying out, that is,

p_c = \frac{1}{Z - 1} . \qquad (3.2)
For Z = 2 we regain the one-dimensional system, with percolation
threshold pc = 1. However, when Z > 2, we obtain a finite percolation
threshold, that is, pc < 1, which means that we can observe the behavior
both above and below pc .
In the following, we will use a set of standard techniques to find the
density of the spanning cluster, P (p), the average cluster size S, before
we address the full scaling behavior of the cluster density n(s, p).
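The argument above can be illustrated by a small simulation (a sketch of our own, not from the text): walking outwards along a branch is a branching process in which each occupied site generates a Binomial(Z − 1, p) number of occupied sites in the next generation, and the process survives indefinitely only when p(Z − 1) > 1.

from numpy.random import binomial

def survives(p, Z=3, generations=100, cap=10**6):
    n = 1                                    # occupied sites in current generation
    for g in range(generations):
        if n == 0:
            return False                     # the branch died out
        n = min(binomial(n*(Z - 1), p), cap) # offspring of all current sites
    return True

for p in [0.4, 0.5, 0.6]:
    frac = sum(survives(p) for _ in range(1000))/1000
    print(p, frac)                           # survival fraction; nonzero for p > 1/2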

3.2 Spanning cluster
We will use a standard approach to find the density P (p) of the spanning
cluster when p > pc . The technique is based on starting from a “central”

site, and then address the probability that a given branch is connected
to infinity.
We can use a strictly technical approach to find P by noting that P
can be found from

p = P + \sum_s s\,n(s, p) , \qquad (3.3)

where the sum is the probability that the site is part of a finite cluster,
that is, it is the probability that the site is not connected to infinity. Let
us use Q to denote the probability that a branch does not lead to infinity.
The concept of a central point and a branch is illustrated in Fig. 3.1.
We can arrive at this result by noticing that the probability that a site
is not connected to infinity in a particular direction is Q. The probability
that the site is not connected to infinity in any direction is therefore Q^Z.
The probability that the site is connected to infinity is therefore 1 − Q^Z.
In addition, we need to include the probability p that the site is occupied.
The probability that a given site is connected to infinity, that is, that it
is part of the spanning cluster, is therefore

P = p(1 - Q^Z) . \qquad (3.4)

It now remains to find an expression for Q(p). We will determine Q
through a consistency equation. Let us assume that we are moving
along a branch, and that we have come to a point k. Then, Q gives the
probability that this branch does not lead to infinity. This can occur by
either the site k not being occupied, with probability (1 − p), or by site k
being occupied with probability p, and all of the Z − 1 branches leading
out of k not being connected to infinity, with probability Q^{Z-1}. The
probability Q for the branch not to be connected to infinity is therefore

Q = (1 - p) + pQ^{Z-1} . \qquad (3.5)

We can check this equation by looking at the case when Z = 2, which
should correspond to the one-dimensional system. In this case we have
Q = 1 − p + pQ, which gives (1 − p)Q = (1 − p), and we see that when
p ≠ 1, Q = 1. That is, when p < 1 no branch is connected to
infinity, implying that there is no spanning cluster. We regain the results
from one-dimensional percolation theory.
We could solve this equation for general Z. However, for simplicity
we will restrict ourselves to Z = 3, which is the smallest Z that gives a
behavior different from the one-dimensional system. In this case

Q = 1 - p + pQ^2 , \qquad (3.6)

pQ^2 - Q + 1 - p = 0 . \qquad (3.7)
The solution of this second order equation is

Q = \frac{1 \pm \sqrt{(2p-1)^2}}{2p} = \begin{cases} 1 & p < p_c \\ \frac{1-p}{p} & p > p_c \end{cases} . \qquad (3.8)

There are two possible solutions. We recognize that the solution
(1 − p)/p is 1 for p = pc = 1/2 and larger than 1 for smaller values of
p; we must therefore use the other solution, Q = 1, for p < pc = 1/2.
These results confirm the value pc = 1/2 as the percolation threshold.
When p ≤ pc , we find that Q = 1, that is, no branch propagates to
infinity. Whereas, when p > pc , Q becomes smaller than 1, and there is
a finite probability for a branch to continue to infinity.
We insert this back into the equation for P(p) and find that for p > pc:

P = p(1 - Q^3) \qquad (3.9)

  = p \left( 1 - \left( \frac{1-p}{p} \right)^3 \right) \qquad (3.10)

  = p \left( 1 - \frac{1-p}{p} \right) \left( 1 + \frac{1-p}{p} + \left( \frac{1-p}{p} \right)^2 \right) . \qquad (3.11)
This result is illustrated in Fig. 3.2.
From this we observe the expected result that when p → 1, P(p) ∝ p.
We can rewrite the equation as

P = 2 \left( p - \frac{1}{2} \right) \left( 1 + \frac{1-p}{p} + \left( \frac{1-p}{p} \right)^2 \right) . \qquad (3.12)
From this we can immediately find the leading order behavior when
p → pc = 1/2. In this case we have

P \simeq 6(p - p_c) + o((p - p_c)^2) . \qquad (3.13)

We have therefore found that for p > pc

P(p) \simeq B(p - p_c)^\beta , \qquad (3.14)

where B = 6, and the exponent β = 1. The density of the spanning
cluster is therefore a power-law in (p−pc ) with exponent β. The exponent
depends on the dimensionality of the lattice, but should not depend on

Fig. 3.2 (Top) A plot of P (p) as a function of p for the Bethe lattice with Z = 3. The
tangent at p = pc is illustrated by a straight line. (Bottom) A plot of the average cluster
size, S(p), as a function of p for the Bethe lattice with Z = 3. The average cluster size
diverges when p → pc = 1/2 both from below and above.

lattice details, such as the number of neighbors Z. We will leave it to
the reader as an exercise to show that β is the same for Z = 4.
We notice in passing that our approach is an example of a mean field
solution, or a self-consistency solution: We assume that we know Q, and
then solve to find Q. We will use similar methods further on in this
course.
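To make the self-consistency argument concrete, here is a hedged numerical sketch (our addition, not from the text): we solve Q = 1 − p + pQ^{Z−1} by fixed-point iteration, starting below 1 so that the iteration converges to the physically relevant root, and then plot P = p(1 − Q^Z) for Z = 3.

from pylab import *
Z = 3
ps = linspace(0, 1, 201)
P = zeros(len(ps))
for i, p in enumerate(ps):
    Q = 0.0                           # start below 1 to reach the stable root
    for it in range(2000):
        Q = 1 - p + p*Q**(Z - 1)      # the consistency equation (3.5)
    P[i] = p*(1 - Q**Z)               # equation (3.4)
plot(ps, P)
xlabel('$p$')
ylabel('$P(p)$')

For p < 1/2 the iteration settles at Q = 1, giving P = 0, and for p > 1/2 it settles at Q = (1 − p)/p, reproducing the curve in Fig. 3.2.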

3.3 Average cluster size
We will use a similar method to find the average cluster size, S(p). Let
us introduce T (p) as the average number of sites connected to a given
site on a specific branch, such as in branch 1 in Fig. 3.1. The average
cluster size S is then given as

S = 1 + ZT , (3.15)

where the 1 represents the central point, and T is the average number
of sites on each branch. We will again find a self-consistent equation for
T , starting from a center site. The average cluster size T is found from
summing the probability that the next site k is empty, 1 − p, multiplied
with the contribution to the average in this case (0), plus the probability
that the next site is occupied, p, multiplied with the contribution in this
case, which is the contribution from the site (1) and the contribution of
the remaining Z − 1 subbranches. In total:

T = (1 − p)0 + p(1 + (Z − 1)T ) , (3.16)

We can solve this directly for T, finding

T = \frac{p}{1 - p(Z-1)} , \qquad (3.17)

where we recognize that the value pc = 1/(Z − 1) plays a special role
because the average size of the branch diverges when p → pc. We find
the average cluster size S to be:

S = 1 + ZT = \frac{1+p}{1 - (Z-1)p} = \frac{p_c(1+p)}{p_c - p} , \qquad (3.18)

which is illustrated in Fig. 3.2. The expression for S(p) can therefore be
written on the general form
S = \frac{\Gamma}{(p_c - p)^\gamma} , \qquad (3.19)

where our argument determines pc = 1/(Z − 1), and the exponent γ = 1.
The average cluster size S therefore diverges as a power-law when p
approaches pc . The exponent γ characterizes the behavior, and the value
of γ depends on the dimensionality, but not on the details of the lattice.
Here, we notice in particular that γ does not depend on Z.

3.4 Cluster number density

In order to find the cluster number density for the Bethe lattice, we need
to address how we in general can find the cluster number density. In
general, in order to find the cluster number density for a given s, we
need to find all possible configurations of clusters of size s, and sum up
their probability:
n(s, p) = \sum_{c(s)} p^s (1 - p)^{t(c)} . \qquad (3.20)

Here we have included the term p^s, because we know that we must have
all the s sites of the cluster present, and we have included the term
(1 − p)^{t(c)}, because all the neighboring sites must be unoccupied, and there
are t(c) neighbors for configuration c. Based on this, we realize that we
could instead make a sum over all t, but then we need to include the
effect that there are several clusters that can have the same t. We will
then have to introduce the degeneracy factor g_{s,t} which gives the number
of different clusters that have size s and a number of neighbors equal to
t. The cluster number density can then be written as
n(s, p) = p^s \sum_t g_{s,t} (1 - p)^t . \qquad (3.21)
This can be illustrated for two-dimensional percolation. Let us study
the case when s = 3. In this case there are 6 possible clusters for size
s = 3, as illustrated in Fig. 3.3.

Fig. 3.3 Illustration of the 6 possible configurations for a two-dimensional cluster of size
s = 3: two configurations with t = 8 and four with t = 7.

There are two clusters with t = 8, and four clusters with t = 7. There
are no other clusters of size s = 3. We can therefore conclude that for
the two-dimensional lattice, we have g3,8 = 2, and g3,7 = 4, and g3,t = 0
for all other values of t.
For the Bethe lattice, there is a particularly simple relation between
the number of sites, and the number of neighbors. We can see this by
looking at the first few generations of a Bethe lattice grown from a central
seed. For s = 1, the number of neighbors is t1 = Z. When we add one
more site, we remove one neighbor from what we had previously, in order
to add a new site, and then we add Z − 1 new neighbors: s = 2, and
t2 = t1 + (Z − 2). Consequently,

tk = tk−1 + (Z − 2) , (3.22)

and therefore:
ts = s(Z − 2) + 2 . (3.23)
The cluster number density, given by the sum over all t, is therefore
reduced to only a single term for the Bethe lattice

n(s, p) = g_{s,t_s}\, p^s (1 - p)^{t_s} . \qquad (3.24)

For simplicity, we will write gs = gs,ts . In general, we do not know gs ,
but we will show that we still can learn quite a lot about the behavior of
n(s, p).
The cluster density can therefore be written as

n(s, p) = g_s\, p^s (1 - p)^{2 + (Z-2)s} . \qquad (3.25)

We rewrite this as a common factor to the power s:

n(s, p) = g_s \left[ p(1 - p)^{Z-2} \right]^s (1 - p)^2 , \qquad (3.26)

which, for Z = 3 becomes

n(s, p) = g_s \left[ p(1 - p) \right]^s (1 - p)^2 . \qquad (3.27)

However, we can use a general Z for our argument. We will study
n(s, p) for p close to pc . In this range, we will do a Taylor expansion
of the term f (p) = p(1 − p)Z−2 , which is raised to the power s in the
equation for n(s, p). The shape of f (p) as a function of p is shown in
Fig. 3.4. The maximum of f (p) occurs for p = pc = 1/(Z − 1). This is
also easily seen from the first derivative of f (p).
f'(p) = (1 - p)^{Z-2} - p(Z-2)(1 - p)^{Z-3} \qquad (3.28)

      = (1 - p)^{Z-3} (1 - p - p(Z-2)) \qquad (3.29)

      = (1 - p)^{Z-3} (1 - (Z-1)p) , \qquad (3.30)

which shows that f′(pc) = 0. We leave it to the reader to show that
f″(pc) < 0.
The Taylor expansion can be written as
f(p) = f(p_c) + f'(p_c)(p - p_c) + \frac{1}{2} f''(p_c)(p - p_c)^2 + o((p - p_c)^3) , \qquad (3.31)
where we have already found the first-order term, f′(pc) = 0. We can
therefore write

Fig. 3.4 A plot of f(p) = p(1 − p)^{Z−2} for Z = 3, 4 and 5, which is a term in the cluster
number density n(s, p) = g_s [p(1 − p)^{Z−2}]^s (1 − p)^2 for the Bethe lattice. We notice
that f(p) has a maximum at p = pc, where the first derivative f′(p) is zero. A Taylor
expansion of f(p) around p = pc will therefore have a second order term in (p − pc) as
the lowest-order correction: to lowest order it is a parabola around p = pc. It is this second order
term which determines the exponent σ, which consequently is independent of Z.

f(p) \simeq f(p_c) + \frac{1}{2} f''(p_c)(p - p_c)^2 = A(1 - B(p - p_c)^2) , \qquad (3.32)

with A = f(pc) and B = −f″(pc)/(2f(pc)) > 0.
The cluster number density is

n(s, p) = g_s [f(p)]^s (1 - p)^2 = g_s e^{s \ln f(p)} (1 - p)^2 , \qquad (3.33)

where we now insert f(p) ≃ A(1 − B(p − pc)^2) to get

n(s, p) \simeq g_s A^s e^{s \ln(1 - B(p - p_c)^2)} (1 - p)^2 . \qquad (3.34)

We use the first order of the Taylor expansion, ln(1 − x) ≃ −x, to get

n(s, p) \simeq g_s A^s e^{-sB(p - p_c)^2} (1 - p)^2 . \qquad (3.35)

Consequently, for p = pc we get

n(s, p_c) = g_s A^s (1 - p_c)^2 . \qquad (3.36)

As a result, we can rewrite the cluster density in terms of n(s, pc),
giving

n(s, p) = n(s, p_c) e^{-sB(p - p_c)^2} , \qquad (3.37)
when p is close to pc . The exponential term we could again rewrite as

n(s, p) = n(s, p_c) e^{-s/s_\xi} , \qquad (3.38)

where the characteristic cluster size sξ is

s_\xi = B^{-1} (p - p_c)^{-2} , \qquad (3.39)

which implies that the characteristic cluster size diverges as a power-law
with exponent 1/σ = 2. The general scaling form for the characteristic
cluster size sξ is

s_\xi \propto |p - p_c|^{-1/\sigma} , \qquad (3.40)

where the exponent σ is universal, meaning that it does not depend
on lattice details such as Z, as we have demonstrated here, but it does
depend on lattice dimensionality. It will therefore take a different value for
two-dimensional percolation.
Behavior at p = pc . The next step is to address the behavior at p = pc ,
when the characteristic cluster size is diverging.
We have already found some limits on the behavior of the cluster
density n(s, p), because we have found S and P (p), which can be related
to the cluster number density. We will use these relations to find limits
on the behavior of n(s, pc ).
The average cluster size at p = pc is
S = \frac{\Gamma}{p_c - p} , \qquad (3.41)

which should diverge, that is,

S = \sum_s s^2 n(s, p_c) \to \infty , \qquad (3.42)
and if we go to the limit of a continuous n(s, pc), the integral

S = \int_0^\infty s^2 n(s, p_c)\,ds \to \infty \qquad (3.43)

should diverge. We can therefore conclude that n(s, pc) is not an exponential,
since that would lead to convergence. We can make a scaling
ansatz

n(s, p_c) \simeq C s^{-\tau} , \qquad (3.44)
for s ≫ 1. We can include this into the restrictions that

\sum_s s\,n(s, p) = p - P , \qquad (3.45)

which should converge, and

\sum_s s^2 n(s, p_c) \to \infty , \qquad (3.46)
which should not converge. This provides a set of limits on the possible
values of τ, because

\sum_s s\,n(s, p_c) \simeq \sum_s s^{1-\tau} < \infty \;\Rightarrow\; \tau - 1 > 1 , \qquad (3.47)

and

\sum_s s^2 n(s, p_c) \simeq \sum_s s^{2-\tau} \to \infty \;\Rightarrow\; \tau - 2 \le 1 , \qquad (3.48)

which therefore implies that

2 < \tau \le 3 . \qquad (3.49)

Summary of arguments. We can therefore sum up our arguments so
far in the relation

n(s, p) = n(s, p_c) e^{-B(p - p_c)^2 s} = C s^{-\tau} e^{-B(p - p_c)^2 s} = C s^{-\tau} e^{-s/s_\xi} . \qquad (3.50)

We will now use this expression to calculate S, for which we know the
exact scaling behavior, and then again use this to find the value of τ:

S = C \sum_s s^{2-\tau} e^{-s/s_\xi} \to C \int_1^\infty s^{2-\tau} e^{-s/s_\xi}\,ds . \qquad (3.51)

We could now make a very rough estimate. This is useful, since it is
in the spirit of this course, and it also provides the correct behavior. We
could assume that

S = C \int_1^\infty s^{2-\tau} e^{-s/s_\xi}\,ds \sim C \int_1^{s_\xi} s^{2-\tau}\,ds \sim s_\xi^{3-\tau} , \qquad (3.52)

which actually provides the correct result. We can do it slightly more
elaborately:

S \simeq C \int_1^\infty s^{2-\tau} e^{-s/s_\xi}\,ds . \qquad (3.53)

We change variables by introducing u = s/sξ, which gives

S \simeq s_\xi^{3-\tau} \int_{1/s_\xi}^\infty u^{2-\tau} e^{-u}\,du . \qquad (3.54)

Here the integral is now a number, since 1/sξ → 0 when p → pc. The
asymptotic scaling behavior in the limit p → pc is therefore

S \sim s_\xi^{3-\tau} \sim (p - p_c)^{-2(3-\tau)} \sim (p - p_c)^{-1} , \qquad (3.55)
where we have used that

s_\xi \sim (p - p_c)^{-2} , \qquad (3.56)

and that

S \sim (p - p_c)^{-1} . \qquad (3.57)

Direct solution therefore shows that

\tau = \frac{5}{2} . \qquad (3.58)
This relation also satisfies the exponent relations we found above, since
2 < 5/2 ≤ 3. A plot of the scaling form is shown in Fig. 3.5.


Fig. 3.5 A plot of n(s, p) = s^{−τ} exp(−s(p − pc)^2) as a function of s for (p − pc) = 0.100,
0.010 and 0.001 illustrates how the characteristic cluster size sξ appears as a cut-off in
the cluster number density that scales with p − pc.

Preliminary scaling theory for cluster number density. This provides
us with a preliminary scaling theory for the cluster number density. We
will spend time now trying to verify this scaling relation for percolation
in other dimensionalities. We have found that in the vicinity of pc , we
do not expect deviations until we reach large s, that is, before we reach
a characteristic cluster size sξ that increases as p → pc . We therefore
expect a general form of the cluster density
n(s, p) = n(s, p_c) F\!\left(\frac{s}{s_\xi}\right) , \qquad (3.59)

where

n(s, p_c) = C s^{-\tau} , \qquad (3.60)

and

s_\xi = s_0 |p - p_c|^{-1/\sigma} . \qquad (3.61)
In addition, we have the following scaling relations:

P (p) ∼ (p − pc )β , (3.62)

ξ ∼ |p − pc |−ν , (3.63)
and
S ∼ |p − pc |−γ , (3.64)
with a possible non-trivial behavior for higher moments of the cluster
density.

3.5 (Advanced) Embedding dimension

Why is it difficult to embed such a structure in a (d + 1)-dimensional space?
Because for a Euclidean structure of dimension d, the volume V grows
as

V \propto L^d , \qquad (3.65)

and the surface, S, grows as

S \propto L^{d-1} , \qquad (3.66)

where L is the linear dimension of the system. This means that

S \propto V^{1 - \frac{1}{d}} . \qquad (3.67)

However, for the Bethe lattice, the surface is proportional to the volume,
S ∝ V , which would imply that d → ∞.

3.6 Exercises

Exercise 3.1: P (p) for Z = 4
Find P (p) for Z = 4 and determine β for this value of Z.
4 Finite-dimensional percolation

For the one-dimensional and the infinite-dimensional systems we have
been able to find exact results for the percolation probability, Π(p), for
P (p), the probability for a site to belong to an infinite cluster, and we
have characterized the behavior using the distribution of cluster sizes,
n(s, p) and its cut-off, sξ . In both one and infinite dimensions we have
been able to calculate these functions exactly. However, in two and
three dimensions – which are the most relevant for our world – we are
unfortunately not able to find exact solutions. We saw above that the
number of configurations in an L^d system in d dimensions increases very
rapidly with L – so rapidly that a complete enumeration is impossible.
But can we still use what we learned from the one and infinite-dimensional
systems?
In the one-dimensional case it was simple to find Π(p, L) because
there is only one possible path from one side to another. We cannot
generalize this to two dimensions, since in two-dimensions there are many
paths from one side to another – and we need to include all to estimate
the probability for percolation. Similarly, it was simple to find n(s, p),
because all clusters only have two neighboring sites – the surface is always
of size 2. This is also not generalizable to higher dimensions.
In the infinite-dimensional system, we were able to find P (p) because
we could separate the cluster into different paths that can never intersect
except in a single point, because there are no loops in the Bethe lattice.
This is not the case in two and three dimensions, where there will always
be the possibility for loops. When there are loops present, we cannot use
the arguments we used for the Bethe lattice, because a branch cut off at


one point may be connected again further out. For the Bethe lattice, we
could also estimate the multiplicity g(s, t) of the clusters, the number of
possible clusters of size s and surface t, since t was a function of s. In a
two- or three-dimensional system this is not similarly simple, because the
multiplicity g(s, t) is not simple even in two dimensions, as illustrated in
Fig. 4.1.
This means that the solution methods used for the one and the
infinite dimensional systems cannot be extended to address two- or three-
dimensional systems. However, several of the techniques and observations
we have made for the one-dimensional and the Bethe lattice systems, can
be used as the basis for a generalized theory that can be applied in any
dimension. Here, we will therefore pursue the more general features of
the percolation system, starting with the cluster number density, n(s, p).
Fig. 4.1 Illustration of the possible configurations for two-dimensional clusters of size
s = 1, 2, 3, 4: one configuration for s = 1 (with t = 4 neighbors), two for s = 2 (t = 6),
six for s = 3 (t = 7 or 8), and nineteen for s = 4 (t = 8, 9, or 10).

4.1 Cluster number density

We have found that the cluster number density plays a fundamental role
in our understanding of the percolation problem, and we will use it here
as our basis for the scaling theory for percolation.
When we discussed the Bethe lattice, we found that we could write
the cluster number density as a sum over all possible configurations of
cluster size, s:

n(s, p) = \sum_j p^s (1 - p)^{t_j} , \qquad (4.1)
where j runs over all different configurations, and t_j denotes the number
of neighbors for this particular configuration. We can simplify this by
rewriting the sum to be over all possible numbers of neighbors, t, and including
the degeneracy g_{s,t}, the number of configurations with t neighbors:

n(s, p) = \sum_t g_{s,t}\, p^s (1 - p)^t . \qquad (4.2)

The values of g_{s,t} have been tabulated up to s = 40. However, while
this may give us interesting information about the smaller clusters, and
therefore for smaller values of p, it does not help us to develop a theory
for the behavior for p close to pc.
In order to address the cluster number density, we will need to study the
characteristics of n(s, p), for example by generating numerical estimates
for its scaling behavior, and then propose a general scaling form which
will be tested in various settings.

4.1.1 Numerical estimation of n(s, p)
We discussed how to measure n(s, p) from a set of numerical simulations
in chap. 2. We can use the same method in two and higher dimensions.
We estimate n̄(s, p; L) using

\bar{n}(s, p; L) = \frac{N_s}{M \cdot L^d} , \qquad (4.3)
where N_s is the total number of clusters of size s measured for M
simulations in a system of size L^d and for a given value of p. We perform
these simulations just as we did in one dimension, using the following
program:
from pylab import *
from scipy.ndimage import measurements
nsamp = 2000
L = 200
p = 0.58
allarea = array([])
for i in range(nsamp):
    z = rand(L, L)                   # two-dimensional system
    m = z < p
    lw, num = measurements.label(m)
    labelList = arange(lw.max() + 1)
    area = measurements.sum(m, lw, labelList)
    allarea = append(allarea, area)
n, sbins = histogram(allarea, bins=int(max(allarea)))
s = 0.5*(sbins[1:] + sbins[:-1])
nsp = n/(L**2*nsamp)                 # divide by L^d with d = 2
i = nonzero(n)
subplot(2, 1, 1)
plot(s[i], nsp[i], 'o')
xlabel('$s$')
ylabel('$n(s,p)$')
subplot(2, 1, 2)
loglog(s[i], nsp[i], 'o')
xlabel('$s$')
ylabel('$n(s,p)$')

The resulting plot of n(s, p; L) for L = 200 is shown in Fig. 4.2a,b.
Unfortunately, this plot is not very useful. The problem is that there are
many values of s for which we have little or no data at all! For small
values of s we have many clusters for each value of s and the statistics
is good. But for large values of s, such as for clusters of size s = 10^4
and above, we have less than one data point for each value of s. Our
measured distribution n̄(s, p; L) is therefore a poor representation of the
real n(s, p; L) in this range.

4.1.2 Measuring probability densities of rare events
The problem with the measured results in Fig. 4.2 occurs because we have
chosen a very small bin size for the histogram. However, we see that for
small values of s we want to have a small bin size, since the statistics
here is good, but for large values of s we want to have larger bin sizes.
This is often solved by using logarithmic binning: We make the bin edges
a^i, where a is the basis for the bins and i is the bin number. If we choose
a = 2 as the basis for the bins, the bin edges will be 2^0, 2^1, 2^2, 2^3, . . .,
that is 1, 2, 4, 8, . . .. (Maybe we should instead have called the method
exponential binning). We then count how many events occur in each such
bin. If we number the bins by i, then the edges of the bins are s_i = a^i,
and the width of bin i is ∆s_i = s_{i+1} − s_i. We then count how many
events, N_i, occur in the range from s_i to s_i + ∆s_i, and we use this to
find the cluster number density n(s, p; L). However, since we now look
at ranges of s values, we need to be precise: We want to measure the
probability for a given site to belong to a cluster with size in the
range from s to s + ∆s, that is, we want to measure n(s, p; L)∆s, which
we estimate from
\bar{n}(s_i, p; L)\,\Delta s_i = \frac{N_i}{M L^d} , \qquad (4.4)

and we find n̄(s, p; L) from

\bar{n}(s_i, p; L) = \frac{N_i}{M L^d\, \Delta s_i} . \qquad (4.5)
It is important to remember to divide by ∆s_i when the bin sizes are not
all the same! We implement this by generating an array of all the bin
edges. First, we find an upper limit to the bins, that is, we find an i_m so
that

a^{i_m} > \max(s) \;\Rightarrow\; \log_a a^{i_m} > \log_a \max(s) , \qquad (4.6)

i_m > \log_a \max(s) . \qquad (4.7)
We can for example round the right hand side up to the nearest integer
a = 1.2
logamax = ceil(log(max(allarea))/log(a))

where allarea corresponds to all the s-values. We can then generate an
array of bin edges from a^0 up to a^{i_m}

logbins = a**arange(0, logamax + 1)

And we can further generate the histogram with this set of bin edges

nl, nlbins = histogram(allarea, bins=logbins)

We must then find the bin sizes and the bin centers

ds = diff(logbins)
sl = 0.5*(logbins[1:] + logbins[:-1])

And we calculate the estimated value for n(s, p; L):

nsl = nl/(nsamp*L**2*ds)

Finally we plot the results. The complete code for this analysis is found
in the following script

a = 1.2
logamax = ceil(log(max(allarea))/log(a))
logbins = a**arange(0, logamax + 1)
nl, nlbins = histogram(allarea, bins=logbins)
ds = diff(logbins)
sl = 0.5*(logbins[1:] + logbins[:-1])
nsl = nl/(nsamp*L**2*ds)
loglog(sl, nsl, '.b')

The resulting plot for a = 1.2 is shown in Fig. 4.2c. Notice that the
resulting plot now is much easier to interpret than the linearly binned plot.
(You should, however, always reflect on whether your binning method
may influence the resulting plot in some way, since there may be cases
where your choice of binning method affects the results you get,
although this is not expected to play any role in your measurements in
this book.) We will therefore in the following adopt logarithmic binning
strategies whenever we measure a dataset which is sparse.


Fig. 4.2 Plot of n(s, p; L) estimated from M = 1000 samples for p = 0.58 and L = 200.
(a) Direct plot. (b) Log-log plot. (c) Plot of the logarithmically binned distribution.
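Since we will reuse this procedure, it may be convenient to wrap it in a small helper function. The following is a sketch of our own; the function name and interface are not from the text:

from pylab import *

def logbin(data, a=1.2):
    """Logarithmically binned histogram: returns bin centers,
    bin widths and counts for bin edges a**0, a**1, a**2, ..."""
    imax = ceil(log(max(data))/log(a))
    edges = a**arange(0, imax + 1)
    counts, edges = histogram(data, bins=edges)
    ds = diff(edges)
    centers = 0.5*(edges[1:] + edges[:-1])
    return centers, ds, counts

With this, the estimate above reduces to sl, ds, nl = logbin(allarea) followed by nsl = nl/(nsamp*L**2*ds).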

4.1.3 Measurements of n(s, p) when p → pc
What happens to n(s, p; L) when we change p so that it approaches pc?
We perform a sequence of simulations for various values of p and plot
the resulting values for n̄(s, p; L). The resulting plot is shown in Fig. 4.3.
Since the plot is double-logarithmic, a straight line corresponds to a
power-law type behavior, n(s, p) ∝ s−τ . We see that as p approaches pc
the cluster number density n(s, p) more and more approaches a power-law
behavior. For a value of p which is away from pc , the n(s, p) curve follows
the power-law behavior for some time, but then deviates by dropping
rapidly. This is an effect of the characteristic cluster size, which also
can be visually observed in Fig. 1.4 and Fig. 1.5, where we see that
the characteristic cluster size increases as p approaches pc . How can we
characterize the characteristic cluster size based on this measurement of
n(s, p)? When s reaches sξ, n(s, p) falls off from the power-law type behavior
observed as p → pc. We can therefore measure sξ directly from the plot, by
drawing a straight line parallel to the behavior of n(s, pc), but below
the n(s, pc) line, as illustrated in Fig. 4.3. Where the measured n(s, p)
intersects this drawn line, n(s, p) has fallen by a constant factor below
n(s, pc); we define this point as sξ, and we measure it by reading the value
off the s-axis. The resulting set of sξ values are plotted as a function
of p in Fig. 4.3. We see that sξ increases and possibly diverges as p
approaches pc . This is an effect we also found in the one-dimensional and
the infinite-dimensional case, where we found that

sξ ∝ |p − pc |−1/σ (4.8)

where σ was 1 in one dimension. We will now use this to develop a theory
for both n(s, p; L) and sξ based on our experience from one and infinite
dimensional percolation.

4.1.4 Scaling theory for n(s, p)
When we develop a theory, we realize that we are only interested in the
limit p → pc, that is |p − pc| ≪ 1 and s ≫ 1. In this limit, we expect
that sξ marks the cross-over between two different behaviors. There is a
common behavior for small s, up to a cut-off, sξ , as we also observe in
Fig. 4.3: The curves for all p are approximately equal for small s.
Based on what we observed in one-dimension and infinite-dimensions,
we expect and propose the following form for n(s, p):

Fig. 4.3 (a) Plot of n(s, p; L) as a function of s for various values of p for a 512 × 512
lattice. (b) Plot of sξ (p) measured from the plot of n(s, p) corresponding to the points
shown in circles in (a).

n(s, p) = n(s, p_c) F\!\left(\frac{s}{s_\xi}\right) , \qquad (4.9)

n(s, p_c) = C s^{-\tau} , \qquad (4.10)

s_\xi = s_0 |p - p_c|^{-1/\sigma} . \qquad (4.11)
The best estimates for the exponents for various systems are listed in
the following table [37, 7].

d      β     τ       σ      γ      ν     D      µ     Dmin   Dmax   DB
1      -     2       1      1      1     -      -     -      -      -
2      5/36  187/91  36/91  43/18  4/3   91/48  1.30  1.13   1.4    1.6
3      0.41  2.18    0.45   1.80   0.88  2.53   2.0   1.34   1.6    1.7
4      0.64  2.31    0.48   1.44   0.68  3.06   2.4   1.5    1.7    1.9
Bethe  1     5/2     1/2    1      1/2   4      3     2      2      2

We will often simplify the scaling form by writing it on the form:

n(s, p) = s^{-\tau} F(s/s_\xi) = s^{-\tau} F\!\left(|p - p_c|^{1/\sigma} s\right) . \qquad (4.12)
What can we expect from the scaling function F(x)? This is essentially
the prediction of a data collapse. If we plot s^τ n(s, p) as a function of
s|p − pc|^{1/σ} we would expect to get the scaling function F(x), which
should be a universal curve, as illustrated in Fig. 4.4.


Fig. 4.4 A plot of n(s, p)sτ as a function of |p − pc |1/σ s shows that the cluster number
density satisfies the scaling ansatz of (4.12).

An alternative scaling form is

n(s, p) = s^{-\tau} \hat{F}\!\left((p - p_c) s^\sigma\right) , \qquad (4.13)

where we have introduced the function F̂(u) = F(u^{1/σ}). These forms are
equivalent, but in some cases this form produces simpler calculations.
This scaling form should in particular be valid for both the 1d and
the Bethe lattice cases; let us check this in detail.

4.1.5 Scaling ansatz for 1d percolation
In the case of one-dimensional percolation, we know that we can write
the cluster density exactly as

n(s, p) = (1 − p)2 e−s/sξ . (4.14)

We showed that we could rewrite this as

n(s, p) = s^{-2} F\!\left(\frac{s}{s_\xi}\right) , \qquad (4.15)
where F (u) = u2 e−u . This is indeed in the general scaling form with
τ = 2.

4.1.6 Scaling ansatz for Bethe lattice
For the Bethe lattice we found that the cluster density was approximately
on the form
n(s, p) ∝ s−τ e−s/sξ , (4.16)
which is already on the wanted form, so that

n(s, p) = s−τ F (s/sξ ) . (4.17)

4.2 Consequences of the scaling ansatz

The scaling ansatz is simple, but it has powerful consequences. Here, we
address the theoretical consequences of the scaling ansatz, and demon-
strate how we can use the scaling ansatz in theoretical arguments. The
methods we introduce here are important methods in scaling theories,
and we will use them in theoretical arguments throughout this text.

4.2.1 Average cluster size
Let us demonstrate how we can use the scaling ansatz to calculate the
scaling of the average cluster size, and how this can be used to provide
limits for the exponent τ .
Definition of average cluster size S. The average cluster size is defined
as follows: We point to a random point in the percolation system. What
is the average size of the cluster connected to that point? The probability
that a random point is part of the cluster of size s is sn(s, p) and the
size of that cluster is s. We find the average cluster by summing over all
(finite) clusters, that is, from s = 1 to infinity:

S(p) = \sum_{s=1}^{\infty} s \cdot s\,n(s, p) = \sum_{s=1}^{\infty} s^2 n(s, p) . \qquad (4.18)

We assume that we are studying systems where p is close to pc, so that the
cluster number density n(s, p) is wide and its drop-off (crossover)
sξ is large. The sum over s will then be a sum with many non-negligible
terms and we can approximate this sum by an integral over s instead:

S(p) = \sum_{s=1}^{\infty} s^2 n(s, p) \simeq \int_1^\infty s^2 n(s, p)\,ds . \qquad (4.19)
We can now insert the scaling ansatz n(s, p) = s^{-τ} F(s/sξ), getting:

S(p) = \int_1^\infty s^{2-\tau} F(s/s_\xi)\,ds . \qquad (4.20)

We know that the function F(s/sξ) goes very rapidly to zero when s is
larger than sξ, and that it is approximately a constant when s is smaller
than sξ. We will therefore approximate F(u) by a step function which
is constant up to 1 and zero afterwards, and we only integrate
up to sξ, where F(s/sξ) is constant:

S(p) = \int_1^\infty s^{2-\tau} F(s/s_\xi)\,ds \simeq \int_1^{s_\xi} C s^{2-\tau}\,ds . \qquad (4.21)

We solve this integral, finding that

S(p) = C' s_\xi^{3-\tau} . \qquad (4.22)

We can now insert the behavior of sξ, sξ = |p − pc|^{-1/σ}, giving:

S(p) \propto \left( |p - p_c|^{-1/\sigma} \right)^{3-\tau} = |p - p_c|^{-\frac{3-\tau}{\sigma}} . \qquad (4.23)

We recall that we called the scaling exponent of S(p) γ: S(p) ∝ |p −
pc|^{−γ}. We have therefore found what we call a relation between scaling
exponents:

\gamma = \frac{3 - \tau}{\sigma} . \qquad (4.24)
Consequences for τ. We know that the average cluster size diverges as
p → pc. This implies that the exponent γ must be positive, which again
implies that

\gamma > 0 \;\Rightarrow\; \frac{3 - \tau}{\sigma} > 0 \;\Rightarrow\; 3 > \tau . \qquad (4.25)
We have therefore found a first bound for τ : τ < 3. As an exercise, you
can check that this relation holds for the one-dimensional system and
the Bethe lattice.

4.2.2 Density of spanning cluster
We can use a similar argument to find the behavior of P(p) from the
cluster number density, which will give us further scaling relations
(relations between scaling exponents) and another bound on the exponent τ.

Relation between P(p) and n(s, p). We recall the general relation

\sum_s s\,n(s, p) + P(p) = p . \qquad (4.26)
We interpret this equation as a way of formulating that a site picked at
random is occupied with probability p (right hand side), and this site
must either be in a finite cluster, with a probability corresponding to the
sum \sum_s s\,n(s, p), or in the infinite cluster with probability P(p).

We can therefore find P(p) from

P(p) = p - \sum_s s\,n(s, p) , \qquad (4.27)

that is, by calculating the sum.
Finding \sum_s s\,n(s, p). We can find the sum over s\,n(s, p) when p is close
to pc in the same way as we did above. We replace
n(s, p) with its scaling form, n(s, p) = s^{-τ} F(s/sξ):

\sum_{s=1}^{\infty} s\,n(s, p) = \sum_{s=1}^{\infty} s \cdot s^{-\tau} F(s/s_\xi) . \qquad (4.28)

We realize that the sum can be approximated by an integral over s
since there is a wide range of s-values when p is close to pc:

\sum_{s=1}^{\infty} s^{1-\tau} F(s/s_\xi) \simeq \int_1^\infty s^{1-\tau} F(s/s_\xi)\,ds . \qquad (4.29)

We recall that F(s/sξ) is approximately a constant when s < sξ and
goes very rapidly to zero when s > sξ, so we integrate up to sξ, assuming
that F(s/sξ) is a constant C up to sξ, giving

\int_1^\infty s^{1-\tau} F(s/s_\xi)\,ds \simeq \int_1^{s_\xi} s^{1-\tau}\,ds = c_1 + c_2 s_\xi^{2-\tau} . \qquad (4.30)

We insert this back into the expression for P(p), getting:

P(p) = p - \sum_s s\,n(s, p) \simeq p - c_1 - c_2 s_\xi^{2-\tau} . \qquad (4.31)

Consequences for τ. First, we realize that P(p) cannot diverge when
p → pc. Since sξ diverges, this means that the exponent 2 − τ must be
smaller than or equal to zero, otherwise P(p) would diverge. This gives us
a new boundary for τ:

2 - \tau \le 0 \;\Rightarrow\; 2 \le \tau . \qquad (4.32)

The two boundaries we have for τ are then 2 ≤ τ < 3. We have therefore
bounded τ between 2 and 3! This is a nice result from the scaling ansatz.
Relating β and τ. We can rewrite the expression in (4.31) for P(p) and
insert sξ = s₀|p − pc|^{−1/σ}, getting:

P(p) \simeq p - c_1 - c_2 s_\xi^{2-\tau} \simeq (p - p_c) + c_2 |p - p_c|^{\frac{\tau-2}{\sigma}} . \qquad (4.33)

We realize that when p → pc the linear term (p − pc) will be smaller than
the term |p − pc|^{(τ−2)/σ}. And we remember that P(p) ∝ (p − pc)^β. This
gives us a new scaling relation for β:

\beta = \frac{\tau - 2}{\sigma} . \qquad (4.34)
We have therefore again demonstrated the power of the scaling ansatz by
both calculating bounds for τ and finding relations between the scaling
exponents.
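As a quick sanity check (our addition, using Python's standard fractions module), we can verify that the two-dimensional exponents in the table above satisfy the relations γ = (3 − τ)/σ and β = (τ − 2)/σ:

from fractions import Fraction as F
tau, sigma = F(187, 91), F(36, 91)
print((3 - tau)/sigma)   # 43/18, the tabulated value of gamma
print((tau - 2)/sigma)   # 5/36, the tabulated value of beta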

4.3 Percolation thresholds

While the exponents are universal (independent of the details of the
lattice but dependent on the dimensionality), the percolation threshold,
pc, depends on all the details of the system. The percolation threshold
pc , depends on all the details of the system. The percolation threshold
depends on the lattice type and the type of percolation. We typically
discern between site percolation, where percolation is on the sites of
a lattice, and bond percolation, where the bonds between the sites
determines the connectivity. The following table provides established
values for the percolation thresholds for various dimensions and lattice
types [37, 7]. (For d = 1, the percolation threshold is pc = 1 for all lattice
types.)

Lattice type   Site       Bond
d = 2
  Square       0.592746   0.50000
  Triangular   0.500000   0.34729
d = 3
  Cubic        0.3116     0.2488
  FCC          0.198      0.119
  BCC          0.246      0.1803
d = 4
  Cubic        0.197      0.1601
d = 5
  Cubic        0.141      0.1182
d = 6
  Cubic        0.107      0.0942
d = 7
  Cubic        0.089      0.0787

4.4 Exercises

Exercise 4.1: Alternative way to analyze percolation clusters
In this exercise we will use python to generate and visualize percolation
clusters. We generate a L × L matrix of random numbers, and will
examine clusters for an occupation probability p.
We generate the percolation matrix consisting of occupied (1) and
unoccupied (0) sites, using
from pylab import *
from scipy.ndimage import measurements
L = 100
r = rand(L, L)
p = 0.6
z = r < p   # This generates the binary array
lw, num = measurements.label(z)

We have then produced the array lw that contains labels for each of the
connected clusters.
a) Familiarize yourself with labeling by looking at lw, and by studying
the second example in the python help system on the image analysis
toolbox.
We can examine the array directly by mapping the labels onto a
color-map, using imshow.
imshow(lw)
66 4 Finite-dimensional percolation

We can extract information about the labeled image using
measurements. For example, we can extract an array of the areas
of the clusters using
labelList = arange(lw.max() + 1)
area = measurements.sum(m, lw, labelList)

You can also extract information about the clusters using the
skimage.measure module. This provides a powerful set of tools that can
be used to characterize the clusters in the system. For example, you can
determine if a system is percolating by looking at the extent of a cluster.
If the extent in any direction is equal to L, then the cluster is spanning
the system. We can use this to find the area of the spanning cluster or
to mark if there is a spanning cluster:
import skimage
props = skimage.measure.regionprops(lw)
spanning = False
for prop in props:
    if (prop.bbox[2]-prop.bbox[0]==L or prop.bbox[3]-prop.bbox[1]==L):
        # This cluster is percolating (its extent spans the system)
        area = prop.area
        spanning = True
        break

b) Using these features, write a program to calculate P(p, L) for various
p for the two-dimensional system.
c) How robust is your algorithm to changes in boundary conditions?
Could you do a rectangular grid where Lx ≫ Ly? Could you do a more
complicated set of boundaries? Can you think of a simple method to
ensure that you can calculate P for any boundary geometry?

Exercise 4.2: Finding Π(p, L) and P (p, L)


a) Write a program to find P(p, L) and Π(p, L) for L =
2, 4, 8, 16, 32, 64, 128. Comment on the number of samples you need to
make to get a good estimate for P and Π.
b) Test the program for small L by comparing with the exact results
from above. Comment on the results.

Exercise 4.3: Determining β


We know that when p > pc, the probability P(p, L) for a given site to
belong to the percolation cluster has the form

P (p, L) ∼ (p − pc )β . (4.35)

Use the data from above to find an expression for β. For this you may
need that pc = 0.59275.

Exercise 4.4: Determining the exponent of power-law distributions

In this exercise you will build tools to analyze power-law type probability
densities.
Generate the following set of data-points in python:
from pylab import *
z = rand(int(1e6))**(-3+1)  # rand requires an integer argument

Your task is to determine the distribution function fZ(z) for this
distribution. Hint: the distribution is of the form f(u) ∝ u^α.
a) Find the cumulative distribution, that is, P(Z > z). You can then
find the actual distribution from

$$f_Z(z) = -\frac{dP(Z > z)}{dz} . \quad (4.36)$$
b) Generate a method to do logarithmic binning in Python. That is, you
estimate the density by doing a histogram with bin sizes that increase
exponentially in size. (A minimal sketch of one possible approach is
given below.)
Hint. Remember to divide by the correct bin-size.
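One possible implementation is sketched here as our own illustration; the
function name logbin, the number of bins, and the use of geometric bin
midpoints are choices made here, and the data is assumed to be strictly
positive:

from pylab import *

def logbin(data, nbins=20):
    # Logarithmically spaced bin edges spanning the data
    edges = logspace(log10(data.min()), log10(data.max()), nbins + 1)
    counts, _ = histogram(data, bins=edges)
    widths = diff(edges)                  # bin sizes grow exponentially
    centers = sqrt(edges[:-1]*edges[1:])  # geometric midpoint of each bin
    # Divide by the bin size and the number of samples to get a density
    density = counts/(widths*len(data))
    return centers, density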

Exercise 4.5: Cluster number density n(s, p)


We will generate the cluster number density n(s, p) from the two-
dimensional data-set.
Hint 1. The cluster sizes are extracted using area = measurements.sum(m,
lw, labelList) as described in a previous exercise.
Hint 2. Remember to remove the percolating cluster.

Hint 3. Use logarithmic binning.


a) Estimate n(s, p) for a sequence of p values approaching pc = 0.59275
from above and below.
b) Estimate n(s, pc; L) for L = 2^k for k = 4, . . . , 9. Use this plot to
estimate τ .
c) Can you estimate the scaling of sξ ∼ |p − pc |−1/σ using this data-set?
Hint 1: Use n(s, p)/n(s, pc ) = F (s/sξ ) = 0.5 as the definition of sξ .

Exercise 4.6: Average cluster size


a) Find the average (finite) cluster size S(p) for p close to pc , for p above
and below pc .
b) Determine the scaling exponent S(p) ∼ |p − pc |−γ .
c) In what ways can you generate S (k) (p)? What do you think is the
best way?
5 Geometry of clusters

We have seen how we can characterize clusters by their mass, s.
As p approaches pc, the typical cluster size s increases, and with it
the characteristic diameter of the clusters. In this chapter we will
discuss the geometry of clusters, and by geometry we will mean how the
number of sites in a cluster is related to the linear size of the cluster.
We will introduce a measure to characterize the spatial extent, the
characteristic diameter, of clusters; discuss how the characteristic
length behaves as p approaches pc; and show how the characteristic length
is related to the characteristic mass, s, of a cluster.

5.1 Characteristic cluster size

We have so far studied the clusters in our model porous material, the
percolation system, through the distribution of cluster sizes, n(s, p),
and derivatives of this, such as the average cluster size, S and the
characteristic cluster size, sξ . However, clusters with the same mass,
s, can have very different shapes. Fig. 5.1 illustrates three clusters, all
with s = 24 sites. (The linear and the compact clusters are unlikely, but
possible, realizations.) How can we characterize the diameter or radius of
these clusters?
There are many ways to define the extent of a cluster. We could, for
example, define the maximum distance between any two points in the
cluster (Rmax ) to be the extent of the cluster, or we could use the average


Fig. 5.1 Illustrations of three clusters all with s = 24.

distance between two points in the cluster. However, we usually introduce
a measure which is similar to the standard deviation used to characterize
the spread in a random variable: We use the standard deviation in the
position, which is also known as the radius of gyration of a cluster:
The radius of gyration, Ri, for a particular cluster i of size si, with
sites rj for j = 1, . . . , si, is defined as

$$R_i^2 = \frac{1}{s_i} \sum_{j=1}^{s_i} \left(\mathbf{r}_j - \mathbf{r}_{cm,i}\right)^2 , \quad (5.1)$$

where rcm,i is the center of mass of cluster i. An equivalent definition is

$$R_i^2 = \frac{1}{2 s_i^2} \sum_{n,m} \left(\mathbf{r}_n - \mathbf{r}_m\right)^2 , \quad (5.2)$$

where the sum is over all sites n and m in cluster i, and we have divided
by 2s_i^2 because each site is counted twice and the number of terms
in the sum is s_i^2. The radius of gyration of the clusters in Fig. 5.1 is
illustrated by the circles in the figures.¹
This provides a measure of the radius of a cluster i. As we see from
Fig. 5.1, clusters of the same size s can have different radii. How can we
then find a characteristic size for a given cluster size s? We find that by
averaging over all clusters of the same size s.

$$R_s^2 = \langle R_i^2 \rangle_i , \quad (5.3)$$

where the average is over all clusters of the same size.


¹ Notice that we could have used another moment q to define the radius. Higher moments
will put more emphasis on the sites that are far from the center of mass. As the order q
approaches infinity, the radius will approach the maximum size of the cluster, Rmax.

5.1.1 Analytical results in one dimension


We can use the one-dimensional percolation system to gain insight into
how we expect Rs to depend on s. For the one-dimensional system, there
is just one cluster for a given size s corresponding to a line of length s.
If the cluster runs from 1 to s, the center of mass is at s/2, and the sum
over all sites runs from 1 to s:
s
1X
Rs2 = (i − s/2)2 , (5.4)
s i=1

where we assume that s is so large that we only need to address the
leading term in s, and we do not have to treat even and odd s separately.
This can be expanded to

$$R_s^2 = \frac{1}{s} \sum_{i=1}^{s} \left[ i^2 - i s + \frac{s^2}{4} \right] \quad (5.5)$$

$$= \frac{1}{s} \left[ \frac{s^3}{3} - s\,\frac{s(s+1)}{2} + s\,\frac{s^2}{4} \right] \quad (5.6)$$

$$\propto s^2 . \quad (5.7)$$

We have therefore found the result that s ∝ Rs in one dimension,
which is what we expected.
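We can quickly verify the leading-order behavior numerically; this small
check is our own addition and not part of the original derivation:

from pylab import *
s = 1000
i = arange(1, s + 1)
Rs2 = mean((i - s/2)**2)  # Radius of gyration squared for a line of s sites
print(Rs2/s**2)           # Approaches 1/12, confirming Rs^2 ∝ s^2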

5.1.2 Numerical results in two dimensions


For the one-dimensional system we found that s ∝ Rs. How does this
generalize to higher dimensions? We start by measuring the behavior for
a finite system of size L with occupation probability p. Our strategy
is to generate clusters on an L × L lattice and analyze them: for each
cluster i of size si we find the center of mass and the radius of
gyration, R_i^2. For each value of s we find the average radius squared,
R_s^2, by a linear average. However, for larger values of s we collect
the data in bins, following the same approach we used to determine
n(s, p), that is, using logarithmic binning.
First, we introduce a function to calculate the radius of gyration of
all the clusters in a lattice. This is done in two steps: first we find the
center of mass of all clusters, and then we find the radius of gyration.
The center of mass for a cluster i with positions ri,j for j = 1, . . . , si, is

$$\mathbf{r}_{cm,i} = \frac{1}{s_i} \sum_{j=1}^{s_i} \mathbf{r}_{i,j} . \quad (5.8)$$

We assume that the clusters are numbered and marked in the lattice
with their index, as done by the lw, num = measurements.label(m)
command. We can find the center of mass by a built-in function, such as
cm = measurements.center_of_mass(m, lw, labelList) or we can
calculate the center-of-mass explicitly. This is done by running through
all the sites ix,iy in the lattice. For each site, we find what cluster ilw
the site belongs to: ilw = lw[ix,iy]. If the site belongs to a cluster, that
is, if ilw>0, we add the coordinates of this part of the cluster to the sum
for the center of mass of the cluster:
rcm[ilw] = rcm[ilw] + array([ix,iy])

Finally, we find the center of mass for each cluster by dividing rcm by
the corresponding area for each of the clusters:
rcm[:,0] = rcm[:,0]/area
rcm[:,1] = rcm[:,1]/area

Second, we follow a similar approach to find the radius of gyration. We
run through all the sites in the lattice, and for each site, we find the
cluster number ilw it belongs to, and add the square of the distance
from the site to the center of mass:
dr = array([ix,iy])-cm[ilw]
rad2[ilw] = rad2[ilw] + dot(dr,dr)

After running through all the sites, we divide by the area, si, to find the
radius of gyration according to the formula

$$R_i^2 = \frac{1}{s_i} \sum_{j=1}^{s_i} \left(\mathbf{r}_{i,j} - \mathbf{r}_{cm,i}\right)^2 . \quad (5.9)$$

This is implemented in the following function. (Notice that the scipy
calls cannot be compiled by numba, so only the inner loop is jit-compiled.)

from pylab import *
from scipy.ndimage import measurements
import numba

@numba.njit(cache=True)
def gyrationloop(lw, cm, rad2):
    # Accumulate the squared distance from each site to the center
    # of mass of the cluster that the site belongs to
    nx, ny = lw.shape
    for ix in range(nx):
        for iy in range(ny):
            ilw = lw[ix,iy]
            if (ilw>0):
                dx = ix - cm[ilw,0]
                dy = iy - cm[ilw,1]
                rad2[ilw] = rad2[ilw] + dx*dx + dy*dy
    return rad2

def radiusofgyration(m, lw):
    labelList = arange(lw.max() + 1)
    area = measurements.sum(m, lw, labelList)
    cm = array(measurements.center_of_mass(m, lw, labelList))
    rad2 = zeros(int(lw.max() + 1))
    rad2 = gyrationloop(lw, cm, rad2)
    rad2 = rad2/maximum(area,1)  # R_i^2; maximum() avoids 0/0 for label 0
    return area, cm, rad2

We use this function to calculate the radius of gyration for each
cluster and plot the results against the cluster size s:

M = 20   # Nr of samples
L = 400  # System size
p = 0.58 # p-value
allr2 = array([])
allarea = array([])
for i in range(M):
    z = rand(L,L)
    m = z<p
    lw, num = measurements.label(m)
    area,rcm,rad2 = radiusofgyration(m,lw)
    allr2 = append(allr2,rad2)
    allarea = append(allarea,area)
loglog(allarea,allr2,'.')
xlabel('$s$')
ylabel('$R_s^2$')

The resulting plots for several different values of p are shown in Fig. 5.2.
We see that there is an approximately linear relation between Rs^2
and s in this double-logarithmic plot, which indicates that there is a
power-law relationship between the two:

$$R_s^2 \propto s^x . \quad (5.10)$$

How can we interpret this relation? Equation (5.10) relates the radius Rs
and the area (or mass) of the cluster. We are more used to the inverse
relation:

$$s \propto R_s^D , \quad (5.11)$$

where D = 2/x is the exponent relating the radius to the mass of a
cluster. This corresponds to our intuition from geometry. We know that
for a cube of size L, the mass (or volume) of the cube is M = L^3.
For a square of length L, the mass (or area) is M = L^2, and similarly
for a circle M = πR^2, where R is the radius of the circle. For a line
of length L, the mass is M = L^1. We see a general trend, M ∝ R^d,
where R is a characteristic length for the object, and d describes the
dimensionality of the object. If we extend this intuition to the relation
in (5.11), which is an observation based on Fig. 5.2, we see that we may

interpret D as the dimension of the cluster. However, the value of D is
not an integer. We have indicated the value of D = 1.89 with a dotted
line in Fig. 5.2. (The value of D is well known for the two-dimensional
percolation problem.) This non-integer value of D may seem strange, but
it is fully possible, mathematically, to have non-integer dimensions. This
is a feature frequently found in fractal structures, and the percolation
cluster as p approaches pc is indeed a good example of a self-similar
fractal. We will return to this aspect of the geometry of the percolation
system in Sect. 5.5.

Fig. 5.2 Plot of R_s^2 as a function of s for simulations of a two-dimensional percolation
system with L = 400, for p = 0.40, 0.45, 0.50, 0.55, and 0.59. The largest cluster for each
value of p is indicated by a circle. The dotted line shows the curve R_s^2 ∝ s^{2/D} for
D = 1.89.

The largest cluster and its corresponding radius of gyration is indicated
by a circle for each p value in Fig. 5.2. We see that as p approaches pc,
both the area and the radius of the largest cluster increase. Indeed, this
corresponds to the observation we have previously made for the
characteristic cluster size, sξ. We may define a corresponding characteristic
cluster radius, Rsξ. This gives:

$$s_\xi \propto R_{s_\xi}^{D} . \quad (5.12)$$

This length is a characteristic length for the system at a given value of p,
corresponding to the largest cluster size or the typical cluster size in the
system. In Sect. 5.2 we see how we can relate this length directly to the
cluster size distribution.

5.1.3 Scaling behavior in two dimensions


We have already found that the characteristic cluster size sξ diverges as
a power law as p approaches pc:

$$s_\xi \simeq s_0 (p - p_c)^{-1/\sigma} , \quad (5.13)$$

when p < pc. The behavior is similar when p > pc, but the prefactor s0
may have a different value. How does Rsξ behave when p approaches pc?
We can find this by combining the scaling relations for sξ and Rsξ. We
remember that R_{sξ} ∝ s_ξ^{1/D}. Therefore

$$R_{s_\xi} \propto s_\xi^{1/D} \propto \left( (p - p_c)^{-1/\sigma} \right)^{1/D} = (p - p_c)^{-1/\sigma D} , \quad (5.14)$$

where we introduce the symbol ν = 1/(σD). For two-dimensional percolation
the exponent ν is a universal number, just like σ and D. This
means that it does not depend on details such as the lattice type or the
connectivity of the lattice, although it does depend on the dimensionality
of the system. We know the value of ν reasonably well in two dimensions,
ν = 4/3.
The arguments we have provided here are again an example of a scaling
argument. In such arguments we are only interested in the exponent in
the scaling relation, the functional form, and not in the values of the
constant prefactors.
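As a consistency check, which we add here using the commonly quoted
two-dimensional values σ = 36/91 and D = 91/48 (not derived in this
section), the relation ν = 1/(σD) gives

$$\nu = \frac{1}{\sigma D} = \frac{1}{\frac{36}{91} \cdot \frac{91}{48}} = \frac{48}{36} = \frac{4}{3} ,$$

in agreement with the value quoted above.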

5.2 Geometry of finite clusters

We have defined the characteristic length Rsξ through the definition
of the characteristic cluster size, sξ, and the scaling relation s ∝ Rs^D.
However, it may be more natural to define the characteristic length of
the system as the average radius and not the cut-off radius. We have
introduced several averages for the radius of gyration. For each cluster i
we can calculate the radius of gyration, Ri. We can then find the average
radius of gyration for a cluster of size s by averaging over all clusters i
of size s:

$$R_s^2 = \langle R_i^2 \rangle_i , \quad (5.15)$$

where the average is over all clusters i of the same size s. This gives us
the radius of gyration Rs, which we found to scale with cluster mass s
as s ∝ Rs^D.

For the cluster sizes, we introduced an average cluster size S, which is

$$S = \frac{1}{Z_S} \sum_s s\, s\, n(s, p) , \qquad Z_S = \sum_s s\, n(s, p) . \quad (5.16)$$

We can similarly introduce an average radius of gyration, R, by
averaging Rs over all cluster sizes:

$$R^2 = \frac{1}{Z_R} \sum_s R_s^2\, s^k\, s\, n(s, p) , \qquad Z_R = \sum_s s^k\, s\, n(s, p) . \quad (5.17)$$

Here, we have purposely introduced an unknown exponent k. We are to
some extent free to choose this exponent, although the average needs
to be finite, and the exponent determines how small and large
clusters are weighted in the sum. A natural choice may be k = 1,
so that we get terms Rs^2 s^2 n(s, p) in the sum. The results we
present here will not change in any significant way, except for different
prefactors in the scaling relations, if you choose a larger value of k. Our
definition of the average radius of gyration is therefore:

$$R^2 = \frac{1}{Z_R} \sum_s R_s^2\, s^2\, n(s, p) , \qquad Z_R = \sum_s s^2\, n(s, p) , \quad (5.18)$$

where we notice that the normalization sum ZR = S is the average cluster
size.
Fig. 5.3 shows a plot of the average R as a function of p for various
system sizes L. We see that R diverges as p approaches pc. How can we
develop a theory for this behavior?
We know that the cluster number density n(s, p) has the approximate
scaling form

$$n(s, p) = s^{-\tau} F(s/s_\xi) , \qquad s_\xi \propto |p - p_c|^{-1/\sigma} . \quad (5.19)$$

We can use this to calculate the average radius of gyration, R, when p is
close to pc.
The average radius of gyration is

$$R^2 = \frac{\sum_s R_s^2\, s^2\, n(s, p)}{\sum_s s^2\, n(s, p)} = \frac{\int_1^{\infty} R_s^2\, s^{2-\tau} F(s/s_\xi)\, ds}{\int_1^{\infty} s^{2-\tau} F(s/s_\xi)\, ds} \quad (5.20)$$

$$\propto \frac{\int_1^{\infty} s^{2/D}\, s^{2-\tau} F(s/s_\xi)\, ds}{\int_1^{\infty} s^{2-\tau} F(s/s_\xi)\, ds} , \quad (5.21)$$

Fig. 5.3 A plot of ξ as a function of p for L = 64, 128, 256, and 512 systems. We observe
that ξ grows rapidly when p → pc. In a finite system the correlation length does not
actually diverge, but crosses over as a result of the finite system size.

where we have inserted Rs^2 ∝ s^{2/D}. This expression is valid when s < sξ.
We can insert it here since F(s/sξ) goes rapidly to zero when s > sξ, and
therefore only the s < sξ values will contribute significantly to the
integral. We change variables to u = s/sξ, getting:

$$R^2 \propto \frac{s_\xi^{2/D+3-\tau} \int_{1/s_\xi}^{\infty} u^{2/D+2-\tau} F(u)\, du}{s_\xi^{3-\tau} \int_{1/s_\xi}^{\infty} u^{2-\tau} F(u)\, du} \quad (5.22)$$

$$\propto s_\xi^{2/D}\, \frac{\int_0^{\infty} u^{2/D+2-\tau} F(u)\, du}{\int_0^{\infty} u^{2-\tau} F(u)\, du} \propto s_\xi^{2/D} , \quad (5.23)$$

where the two integrals over F(u) simply are numbers, and therefore
have been included in the constant of proportionality.
This shows that R^2 ∝ s_ξ^{2/D}. We found above that R_{sξ}^2 ∝ s_ξ^{2/D}.
Therefore, R ∝ R_{sξ}! These two characteristic lengths therefore have the
same behavior. They are only different by a constant of proportionality,
R = c R_{sξ}. We can therefore use either length to characterize the
system: they are effectively the same.
Fig. 5.4 illustrates the radius of gyration of the largest cluster with a
circle and the average radius of gyration, R, indicated by the length of
the side of the square. As p increases, both the maximum cluster size
and the average cluster size increase; according to the theory they
are indeed proportional to each other and therefore increase in concert.

Fig. 5.4 Illustration of the largest cluster in 512 × 512 systems for p = 0.55, p = 0.57,
and p = 0.59. The circles illustrate the radius of gyration of the largest cluster, and the
boxes show the size of the average radius of gyration, R = ⟨Rs⟩. We observe that both
lengths increase approximately proportionally as p approaches pc.

5.2.1 Correlation length


We can also measure the typical size of a cluster from the correlation
function. The correlation function g(r, p) is the probability that two
sites a distance r apart are connected, that is, part of the same
cluster, for a system with occupation probability p. We can use
this to define the average squared distance between two sites i and j
belonging to the same cluster as

$$\xi^2 = \left\langle \frac{\sum_j r_{ij}^2\, g(r_{ij}; p)}{\sum_j g(r_{ij}; p)} \right\rangle_i , \quad (5.24)$$

where the sum is over all sites j and the average is over all sites
i. The denominator is a normalization sum, which corresponds to the
average cluster size, S. You can think of this sum in the following way:
For a site i, we sum over all other sites j in the system. The probability
that site j belongs to the same cluster as site i is g(rij; p), and the mass
of the site at j is 1. The average number of sites connected to the site at
i is therefore:

$$S(p) = \left\langle \sum_j g(r_{ij}; p) \right\rangle_i , \quad (5.25)$$

where we average over all the sites i in the system.


This means that we can connect g(r; p) and ξ to the average cluster
size S. Let us now see if we can calculate the behavior of g(r; p) in a
one-dimensional system, how to measure it in a two-dimensional system,
and how to develop a theory for it for any dimension.
One-dimensional system. In chapter 2 we found that for the one-
dimensional system the correlation function g(r) is

$$g(r) = p^r = e^{-r/\xi} , \quad (5.26)$$

where ξ = −1/ln p ≈ 1/(1 − p) is called the correlation length. The
correlation length diverges as p → pc = 1: ξ ≃ (pc − p)^{−ν}, where ν = 1.
We can generalize this behavior by writing the correlation function in
a more general scaling form for the one-dimensional system

$$g(r) = r^0 f(r/\xi) , \quad (5.27)$$

where f(u) decays rapidly when u is larger than 1. We will assume
that this behavior is general. Also for other dimensions, we expect the
correlation function to decay rapidly beyond a length which corresponds
to the typical extent of clusters in the system.
Measuring the correlation function. For the two- or three-dimensional
system, we cannot find an exact solution for the correlation function.
However, we can still measure it from our simulations, although such
measurements typically are computationally intensive. How can we
measure it? We can loop through all pairs of sites i and j and find their
distance rij. We estimate the probability for two sites a distance rij apart
to be connected by counting how many of the pairs of sites a distance rij
apart are connected, compared to how many pairs in total are a distance
rij apart. This is done in the following implementation, which returns
the correlation function g(r) estimated for a lattice lw which contains
the cluster indexes for each site, similar to what is returned by the
lw, num = measurements.label(m) command. We write a subroutine
perccorrfunc to find the correlation function for a given lattice lw, and
then we use this function to find the correlation function for several
values of p:
from pylab import *
from scipy.ndimage import measurements
from numba import njit

@njit(cache=True)
def perccorrfunc(m,lw):
    nx,ny = lw.shape
    L = max(nx,ny)
    r = arange(2*L)   # Positions
    pr = zeros(2*L)   # Nr of connected pairs at distance r
    npr = zeros(2*L)  # Total nr of pairs at distance r
    for ix1 in range(nx):
        for iy1 in range(ny):
            lw1 = lw[ix1,iy1]
            if (lw1>0):
                for ix2 in range(nx):
                    for iy2 in range(ny):
                        lw2 = lw[ix2,iy2]
                        if (lw2>0):
                            dx = (ix2-ix1)
                            dy = (iy2-iy1)
                            rr = sqrt(dx*dx+dy*dy)
                            nr = int(ceil(rr)+1) # Corresponding box
                            if (lw1==lw2):
                                pr[nr] = pr[nr] + 1
                            npr[nr] = npr[nr] + 1
    pr = pr/npr  # Fraction of pairs at distance r in the same cluster
    return r,pr

# Calculate correlation function
M = 10   # Nr of samples
L = 200  # System size
pp = [0.5,0.52,0.54,0.55,0.56] # p-values
lenpp = len(pp)
pr = zeros((2*L,lenpp),float)
rr = zeros((2*L,lenpp),float)
for i in range(M):
    print("i = ",i)
    z = rand(L,L)
    for ip in range(lenpp):
        p = pp[ip]
        m = z<p
        lw, num = measurements.label(m)
        r,g = perccorrfunc(m,lw)
        pr[:,ip] = pr[:,ip] + g
        rr[:,ip] = rr[:,ip] + r
pr = pr/M
rr = rr/M
# Plot data - linearly binned
for ip in range(lenpp):
    loglog(rr[:,ip],pr[:,ip],'.',label="p="+str(pp[ip]))
legend()

Fig. 5.5 shows the resulting plots of the correlation function g(r; p) for
various values of p for an L = 200 system. First, we notice that the
scaling is rather poor. We will understand this as we develop a theory
for g(r; p) below. The plot shows that there is indeed a cross-over length
ξ, beyond which the correlation function falls rapidly to zero. And there
appears to be a scaling regime for r < ξ where the correlation function is
approximately a power-law, although it is unclear how wide that scaling
regime is in this plot. The plot suggests the following functional form

$$g(r; p) = r^x f(r/\xi) , \quad (5.28)$$

where the cross-over function f(u) falls rapidly to zero when u > 1
and is approximately constant when u < 1. When p approaches pc, the

correlation length ξ grows to infinity, and the correlation function g(r; pc)
approaches a power law r^x for all values of r.


Fig. 5.5 A plot of g(r; p) as a function of r for various values of p. The function approaches
a power-law behavior g(r) ∝ r^x when p approaches pc.

Theory for the correlation function. Based on these observations, we
are motivated to develop a theory for the behavior of the correlation
function. First, we know that when p = pc, the average cluster size, S,
diverges. We can express S using the correlation function as

$$S = \sum_j g(r_j; p_c) = \int g(r)\, d^d r = \iint g(r)\, r^{d-1}\, dr\, d\Omega , \quad (5.29)$$

where the integral is written in spherical coordinates in d-dimensional
space, and the integration over Ω indicates an integration over all angles.
For this integral to diverge, the function g(r) cannot have an exponential
cut-off, and it must decay slower than a power-law with exponent
−d. That is, in order for S to diverge at p = pc, we know that at p = pc:

$$g(r; p_c) \propto r^{-(d-2+\eta)} , \quad (5.30)$$



where η is a positive number, ranging from η = 0 for the Bethe lattice
(infinite dimensions) to η = 1 for one-dimensional percolation, as we
found above.
This corresponds both to the results we found for the one-dimensional
system, and to the results we found from numerical measurements for
the two-dimensional system. In addition, we know that for p ≠ pc, the
correlation function should have a cut-off proportional to ξ, because the
probability for two sites to be connected goes exponentially to zero with
distance when the distance is significantly larger than ξ. These features
indicate that g(r, p) has a scaling form, and we propose the following
scaling ansatz for g(r, p):

$$g(r, p) = r^{-(d-2+\eta)}\, f\!\left(\frac{r}{\xi}\right) . \quad (5.31)$$

The scaling function f(r/ξ) should be a constant when r ≪ ξ, and in
this range we cannot discern the behavior from the behavior of a system
at pc. For r ≫ ξ, we expect the function to have an exponential form.
The scaling function will therefore have the following behavior:

$$f(u) = \begin{cases} \text{constant} & u \ll 1 \\ \exp(-u) & u \gg 1 \end{cases} . \quad (5.32)$$

We can use this scaling form to determine the exponent η. We know that
the average cluster size S is given as an integral over g(r; p), that is

$$S = \sum_j g(r_j; p) = \int g(r; p)\, d^d r . \quad (5.33)$$

Let us use the scaling form for g(r; p) to calculate this integral when p
approaches pc, but is not equal to pc:

$$S = \int g(r; p)\, d^d r = \int_1^{\infty} r^{-(d-2+\eta)} f(r/\xi)\, d^d r \quad (5.34)$$

$$= \iint_1^{\infty} r^{-(d-2+\eta)}\, r^{d-1} \exp(-r/\xi)\, dr\, d\Omega \propto \int_1^{\infty} r^{1-\eta} \exp(-r/\xi)\, dr \quad (5.35)$$

$$= \xi^{2-\eta} \int \left(\frac{r}{\xi}\right)^{1-\eta} \exp(-r/\xi)\, \frac{dr}{\xi} = \xi^{2-\eta} \int u^{1-\eta} \exp(-u)\, du \propto \xi^{2-\eta} . \quad (5.36)$$

We already know the scaling behavior of S when p → pc:

$$S \propto |p - p_c|^{-\gamma} \propto \xi^{2-\eta} . \quad (5.37)$$

Consequently, we now know the behavior of ξ:

$$\xi \propto |p - p_c|^{-\gamma/(2-\eta)} , \quad (5.38)$$

where η is a number between 0 (for the infinite-dimensional system) and
1 (for the one-dimensional system). Indeed, we remember that for the
one-dimensional system we found that ξ ∝ |p − pc|^{−1} and that γ = 1,
which is indeed consistent with η = 1.
What does this teach us about the two- and three-dimensional system?
For these systems, we already have related the average cluster size to the
average radius of gyration, R:

S ∝ s3−τ
ξ ∝ R(3−τ )/D , (5.39)

and we know that the average radius of gyration behaves as


1/D
R ∝ Rsξ ∝ sξ ∝ |p − pc |−1/σD . (5.40)

We interpret both ξ and R (and Rsξ) as characteristic lengths. Let us now
make a daring assumption! Let us assume that ξ and R are proportional,
that is, that there is only one characteristic length in the system. This
allows us to write:

$$R \propto |p - p_c|^{-1/\sigma D} \propto |p - p_c|^{-\gamma/(2-\eta)} . \quad (5.41)$$
We can use this relation to find η, given that the assumption of ξ ∝ R is
correct, or to demonstrate that ξ ∝ R by measuring η and checking for
consistency with this equation.
We have already done this for the one-dimensional system, where
σ = D = 1 and γ = 1, and therefore η = 1, which is indeed what we
found above. Similarly, we can check this result for the Bethe-lattice,
where we also find that the assumption holds. Simulations and theoretical
arguments indeed support the assumption. We will therefore in the
following only use one symbol for all the characteristic lengths since they
are proportional to each other and therefore only differ (scaling-wise) by
a constant of proportionality:

$$\xi \propto R \propto R_{s_\xi} \propto |p - p_c|^{-\nu} . \quad (5.42)$$

We will typically only use the symbol ξ for this characteristic length
of the system, and the exponent ν characterizes how ξ diverges as p
approaches pc :

Correlation length
The correlation length ξ scales as

$$\xi \propto |p - p_c|^{-\nu} \quad \text{when } p \to p_c . \quad (5.43)$$

The exponent ν = 1/(σD) = γ/(2 − η). For d = 2, ν = 4/3.

The characteristic length ξ and system size L. The introduction of a
single characteristic length ξ, corresponding to the characteristic cluster
size sξ through sξ ∝ ξ^D, allows us to discuss what happens to a system
that is close to, but not exactly at, pc. Fig. 5.3 shows a plot of ξ(p) for
two-dimensional systems of various sizes L. Notice that since
ξ diverges as p approaches pc, and we are in a finite system of size L, we
will not observe clusters that are larger than L. This means that if we
measure ξ(p) and try to estimate pc, we only know that it is somewhere
in the region where ξ(p) > L, but we do not really know where. This
also means that if we are studying a system where p is different from,
but close to, pc, we need to study clusters that are at least of the size of
ξ in order to notice that we are not at p = pc.
If we study a system of size L ≪ ξ, we will typically observe a cluster
that spans the system, since the typical cluster size, ξ, is larger than
the system size. We are therefore not able to determine if we observe a
spanning cluster because we are at pc or only because we are sufficiently
close to pc. We will start to observe a spanning cluster when ξ ≃ L,
which corresponds to

$$\xi = \xi_- (p_c - p)^{-\nu} \simeq L , \quad (5.44)$$

and therefore that

$$(p_c - p) \simeq (L/\xi_-)^{-1/\nu} , \quad (5.45)$$

when p < pc, and a similar expression for p > pc. This means that when
we observe spanning we can only be sure that p is within a certain range
of pc:

$$|p - p_c| = c L^{-1/\nu} . \quad (5.46)$$
The correlation length ξ is therefore the natural length characterizing the
geometry of the cluster. At distances smaller than ξ, the system behaves
as if it is at p = pc . However, at distances much larger than ξ, the system
is essentially homogeneous.
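To get a feeling for the numbers involved (an illustration we add here,
using the value ν = 4/3 for d = 2): for a system of size L = 100, observing
a spanning cluster only pins p down to within |p − pc| ∼ L^{−1/ν} =
100^{−3/4} ≈ 0.03 of the percolation threshold.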

As we can observe in Fig. 5.6, the system becomes more and more
homogeneous as p moves away from pc. We will now address this feature
in more detail for p > pc.

Fig. 5.6 Illustration of the largest cluster in 512 × 512 systems with p > pc , for p = 0.593,
p = 0.596, and p = 0.610. The circles illustrate the radius of gyration of the largest cluster.
We observe that the radius of gyration increases as p approaches pc .

5.3 Geometry of the spanning cluster

How can we develop a scaling theory for the spanning cluster? As p is
increased from below towards pc, the correlation length ξ diverges, and
the mass of a characteristic cluster of extent ξ is expected to follow
the scaling relation sξ ∝ ξ^D. For a given value of p we can therefore
choose the system size L to be equal to ξ, L = ξ(p). In this case, a cluster
of size ξ would correspond to a cluster of size L, which is a spanning
cluster in this system. For this system of size L = ξ, we therefore expect
the mass of the spanning cluster to be M (p, L) ∝ ξ D ∝ LD . This suggests
(but does not really prove) that the mass of the spanning cluster in a
system close to or at pc scales as M (p, L) ∝ LD .
The density of the spanning cluster at p = pc therefore has the following
behavior:

$$P(p, L) = \frac{M(p, L)}{L^d} \propto \frac{L^D}{L^d} = L^{D-d} . \quad (5.47)$$
Because we know that P (p, L) → 0 when L → ∞, we deduce that D < d.
The value of D in two-dimensional percolation is D = 91/48 ≈ 1.90.
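With D = 91/48 and d = 2 this gives P(pc, L) ∝ L^{−5/48}. Reassuringly
(a cross-check we add here), this matches the finite-size scaling exponent
β/ν = (5/36)/(4/3) = 5/48 that appears in chapter 6.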
Fractal geometry of the spanning cluster. What does this result tell
us about the geometry of the percolation cluster? First, we observe that
the density of the cluster depends on the system size, L, on which we

are observing it. This is a general feature of a fractal with a dimension
different from the Euclidean dimension in which it is embedded. For any
object that obeys the scaling relation

$$M \propto L^D , \quad (5.48)$$

where D < d, and d is the Euclidean dimension of the embedding space,
we have that the density ρ is

$$\rho \propto \frac{M}{L^d} \propto L^{D-d} , \quad (5.49)$$
which depends on system size L. We also notice that the density decreases
as the system size increases.
Notice that these features do not represent something new, but are
simply extensions of features we are very well familiar with. For example,
consider a thin, flat sheet of thickness h and dimensions L × L, placed
in a three-dimensional space. If we cut out a volume of size L × L × L,
so that h ≪ L, the mass of the sheet inside that volume is

$$M = h L^2 , \quad (5.50)$$

which implies that the density of the sheet is

$$\rho = \frac{h L^2}{L^3} = h L^{-1} . \quad (5.51)$$

It is only when we use a volume L × L × H with a third dimension of
constant thickness H larger than h that we recover a constant density ρ
independent of system size.

5.4 Spanning cluster above pc

Let us now return to the discussion of the mass M (p, L) of the spanning
cluster for p > pc in a finite system of size L. The behavior of the
percolation system for p > pc is illustrated in Fig. 5.6. We notice that
the correlation length ξ diverges when p approaches pc . At lengths larger
than ξ, the system is effectively homogeneous because there are no holes
significantly larger than ξ. There are two types of behavior, depending
on whether L is larger than or smaller than the correlation length ξ.
When L ≪ ξ, we are again in the situation where we cannot discern p
from pc because the size of the holes (empty regions described by ξ when

p > pc ) in the percolation cluster is much larger than the system size.
In this case, the mass of the percolation cluster will follow the scaling
relation s ∝ RsD , and the finite section of size L of the cluster will follow
the same scaling if we assume that the radius of gyration of the cluster
inside a region of size L is proportional to L:

$$M(p, L) \propto L^D \quad \text{when } L \ll \xi . \quad (5.52)$$

In the other case, when L ≫ ξ and p > pc, the typical size of a hole
in the percolation cluster is ξ, as illustrated in Fig. 5.6. This means
that on lengths much larger than ξ, the percolation cluster is effectively
homogeneous. We can therefore divide the L × L system into (L/ξ)^d
regions of size ξ, so that for each such region, the mass is m ∝ ξ^D. The
total mass of the spanning cluster is therefore the mass of one such region
multiplied by the number of regions:

$$M(p, L) \propto \xi^D (L/\xi)^d = \xi^{D-d} L^d . \quad (5.53)$$

Fig. 5.7 Illustration of the spanning cluster in a 512 × 512 system at p = 0.595 > pc. In
this case, the correlation length is ξ = 102. The system is divided into regions of size ξ.
Each such region has a mass M(p, ξ) ∝ ξ^D, and there are (L/ξ)^d ≈ 25 such regions in
the system.

We can now introduce the complete behavior of the mass, M(p, L), of
the spanning cluster for p > pc:

$$M(p, L) \propto \begin{cases} L^D & L \ll \xi \\ \xi^{D-d} L^d & L \gg \xi \end{cases} . \quad (5.54)$$

This form can be rewritten in the standard scaling form as:

$$M(p, L) = L^D\, Y\!\left(\frac{L}{\xi}\right) , \quad (5.55)$$

where

$$Y(u) = \begin{cases} \text{constant} & u \ll 1 \\ u^{d-D} & u \gg 1 \end{cases} . \quad (5.56)$$

5.5 Fractal cluster geometry

What happens to the scaling behavior of the system if we change the
effective length-scale by a factor b? That is, what happens if we introduce
a new set of variables ξ' = ξ/b and L' = L/b?
We can use our scaling form M(p, L) = L^D Y(L/ξ) to find that

$$M(p', L') = (L')^D\, Y(L'/\xi') = (L/b)^D\, Y(L/\xi) = b^{-D} M(p, L) , \quad (5.57)$$

where we have written p' to indicate that a rescaling of the correlation
length corresponds to a change in p: reducing the correlation length
corresponds to moving p further away from pc.
This shows that the mass displays a simple rescaling when the system
size is rescaled. Functions that display this simple form of rescaling are
called homogeneous functions.
The change of length-scale results in a change of correlation length,
except for the cases when the correlation length is either zero or infinite.
The correlation length is zero for p = 0 and for p = 1. These two values
of p therefore correspond to trivial fix-points for the rescaling: the scaling
behavior does not change under the rescaling. The correlation length
is infinite for p = pc, which implies that the correlation length does not
change when the system size is rescaled by a factor b. This is illustrated
in Fig. 5.8, which shows that the structure of the percolation cluster at
p = pc does not change significantly.
Self-similar fractals. The spanning cluster shows a particularly simple
scaling behavior at p = pc. That is when the correlation length increases
to infinity; there is therefore no other length-scale in our system except
the system size L and the lattice unit a. When p = pc we found that the
mass of the spanning cluster displayed the scaling relation:

$$M(L) = b^{-D} M(bL) , \quad (5.58)$$



Fig. 5.8 Illustrations of the spanning cluster (shown in red), and the other clusters
(shown in gray) at p = pc in an L = 900 site percolation system. a The 900 × 900 system. b
The central 300 × 300 part. c The central 100 × 100 part. Each step represents a rescaling
by a factor 3. However, at p = pc, the correlation length is infinite, so a rescaling of the
length-scales should not influence the geometry of the cluster, which is evident from the
pictures: the percolation clusters are indeed similar in a statistical manner.

corresponding to a rescaling by a factor b. This is an example of self-similar
scaling.
Let us address self-similar scaling in more detail by considering an example
of a deterministic fractal, the Sierpinski gasket [32]. The Sierpinski
gasket can be defined iteratively. We start with a unit equilateral triangle
as illustrated in Fig. 5.9. We divide the triangle into 4 identical triangles
and remove the center triangle. For each of the remaining triangles,
we continue this process. The resulting set of points after infinitely many
iterations is called the Sierpinski gasket. This set contains a hierarchy
of holes. We also notice that the structure is identical under (a specific)
dilatational rescaling. If we take one of the three triangles generated in
the first step and rescale it to fit on top of the initial triangle, we see
that it reproduces the original identically. This structure is therefore a
fractal.

Fig. 5.9 Illustration of three generations of the Sierpinski gasket starting from an
equilateral triangle.

The dimensionality of the structure is related to the relation between
the rescaling of the mass and the length. If we take one of the three
triangles from the first iteration, we need to rescale the x and the y axes
by a factor 2. We can write this as a rescaling of the system size, L, by a
factor 2:

$$L' = 2L . \quad (5.59)$$

Through this rescaling we get three triangles, each with the same mass
as the original triangle. The mass is therefore rescaled by a factor 3:

$$M' = 3M . \quad (5.60)$$

If we write the mass as a function of length, M(L), we can formulate the
scaling as

$$M(2L) = 3 M(L) , \quad (5.61)$$

or, equivalently,

$$M(L) = 3^{-1} M(2L) . \quad (5.62)$$

If we compare this with the general relation,

$$M(L) = b^{-D} M(bL) , \quad (5.63)$$

we see that

$$2^{-D} = 3^{-1} , \quad (5.64)$$

giving

$$D = \frac{\ln 3}{\ln 2} . \quad (5.65)$$
We will use this rescaling relation as our definition of fractal dimension.
The relation corresponds to the relation M = L^D for the mass. Let us also
show that this relation is indeed consistent with our notion of Euclidean
dimension. For a cube of size L, the mass is L^3. If we look at a piece
of size (L/2)^3, we see that we need to rescale it by a factor of 2 in all
directions to get back to the original cube, but the mass must be rescaled
by a factor 8. We therefore find the dimension from

$$D = \frac{\ln 8}{\ln 2} = 3 , \quad (5.66)$$
which is, as expected, the Euclidean dimension of the cube.
Typically, the mass dimension is measured by box counting. The
sample is divided into regular boxes where the size of each side of the box
is δ. The number of boxes, N(δ), that contain a part of the cluster is
counted as a function of δ. For a uniform mass we expect

$$N(\delta) = \left(\frac{L}{\delta}\right)^d , \quad (5.67)$$

and for a fractal structure we expect

$$N(\delta) = \left(\frac{L}{\delta}\right)^D . \quad (5.68)$$
We leave it as an exercise for the reader to address what happens when
δ → 1, and when δ → L.
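A minimal box-counting sketch for the spanning cluster at p = pc is given
here as our own illustration (it is not part of the original text); the
reshape-based coarse-graining and the value pc = 0.59275 are choices we
make for this sketch:

from pylab import *
from scipy.ndimage import measurements

L = 512
pc = 0.59275
perc = array([])
while len(perc) == 0:  # Repeat until a sample has a spanning cluster
    lw, num = measurements.label(rand(L, L) < pc)
    perc_x = intersect1d(lw[0,:], lw[-1,:])
    perc = perc_x[where(perc_x > 0)]
cluster = (lw == perc[0])
deltas = 2**arange(0, 9)  # Box sizes 1, 2, 4, ..., 256 (all divide L)
N = zeros(len(deltas))
for i in range(len(deltas)):
    d = deltas[i]
    # Coarse-grain into (L/d) x (L/d) boxes; a box is counted if it
    # contains at least one site of the spanning cluster
    boxes = cluster.reshape(L//d, d, L//d, d).any(axis=(1, 3))
    N[i] = boxes.sum()
D = -polyfit(log(deltas), log(N), 1)[0]  # N ∝ δ^{-D}, so the slope is -D
print(D)  # Should be close to D = 91/48 ≈ 1.90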

5.6 Exercises

Exercise 5.1: Mass scaling of percolating cluster


a) Find the mass M (L) of the percolating cluster at p = pc as a function
of L, for L = 2^k, k = 4, . . . , 11.
b) Plot log(M ) as a function of log(L).
c) Determine the exponent D.

Exercise 5.2: Correlation function


a) Write a program to find the correlation function, g(r, p, L) for L = 256.
b) Plot g(r, p, L) for p = 0.55 to p = 0.65 for L = 256.
c) Find the correlation length ξ(p, L) for L = 256 for the p-values used
above.
d) Plot ξ as a function of p − pc, and determine ν.
6 Finite size scaling

In this chapter we will learn how to turn a disadvantage, such as a finite
system size, into an advantage. Usually, we have found a finite system
size to be a hassle in simulations. We would like to find the general
behavior, but we are limited by the largest finite system size we can
afford to simulate. It may be tempting to put all our resources into one
attempt, to make one simulation of a really large system. However,
this is usually not a good strategy, because we will then know that our
results are limited by the system size without knowing to what degree
this affects our result.
Instead, we will follow a different strategy: the strategy of finite size
scaling. We will systematically increase the system size, measure the
quantities we are interested in, and then try to extrapolate to an infinite
system size. This has several advantages: It allows us to understand and
estimate the errors in our predictions, and it allows us to use simulations
of smaller systems. Indeed, it turns out that it is more important to do
simulations in smaller systems than only to try to simulate the largest
system possible. However, for this to be effective, we need to have a
theoretical understanding of finite size scaling [6].
The methods we develop here are powerful and can be generalized
to many other experimental and computational situations. In many
experiments it is also tempting to try to perform the perfect experiment
by reducing noise or measurement errors. For example, we may perform
an experiment where we need to make the experimental system as
horizontal as possible, because deviations from a horizontal system would
introduce errors. Instead of trying to make the system as horizontal

as possible, we may instead systematically vary the orientation, and
then extrapolate to the case when the system is perfectly horizontal.
This also allows us to control the uncertainty. Of course, we cannot vary
all possible uncertainties in an experiment or a simulation, but this
alternative mindset provides us with a new tool in our toolbox, and a
new way to deal with uncertainties.
In practical situations, we will always be limited by finite system sizes.
If you measure the size of earthquakes in the Earth’s crust, your results
are limited by the thickness of the crust or by the extent of a homogeneous
region. If you simulate a molecular system, you are definitely limited by
the number of atoms you can include in your simulation. Thus, better
insight into how we can systematically vary the system size and use this
to gain insight are general tools of great utility.
Here, you will learn how to systematically vary system size L in order
to find much better estimates for exponents and percolation thresholds.
Indeed, my hope is that you will see that finite size scaling is a pow-
erful tool that can be used both theoretically and computationally. To
introduce this tool, we need to address specific examples that can help
build our intuition and shape our mindset. We will therefore start from
a few examples, such as the finite size scaling for the density of the
spanning cluster, P (p, L), and then apply the method to a new case, the
percolation probability Π(p, L).

6.1 General aspects of finite size scaling

We have found that a percolation system is described by three
length-scales: the size of a site, the system size L, and the correlation length
ξ. Finite size scaling addresses the change in behavior of a system as
we change the system size L. Typically, we divide the behavior into two
categories:
• When the system size L is much smaller than the correlation length
ξ, L ≪ ξ, the system appears to be at the percolation threshold
• When L is much larger than ξ, L ≫ ξ, the geometry is essentially
homogeneous at lengths longer than ξ

We will then address the behavior close to pc. In the case of percolation,
we usually assume that the behavior is a power-law in p − pc. For example,
for the mass M(p; L) of the spanning cluster:

$$M(p) \propto (p - p_c)^{-x} , \quad (6.1)$$

where the exponent x determines the behavior close to pc.
The general approach in finite size scaling is to make a scaling ansatz,
an assumption about how the system behaves:

$$M(p, L) = L^{x/\nu}\, f\!\left(\frac{L}{\xi}\right) , \quad (6.2)$$

where f(u) is an unknown function. (Sometimes we instead make the
assumption M(p, L) = ξ^{x/ν} f̃(L/ξ), and we leave it to the reader to
demonstrate that these assumptions are equivalent.)
We will then apply our insight into the particulars of the system to
infer the behavior in the limits when ξ ≪ L and ξ ≫ L to determine
the form of the scaling function f (u), and use this functional form as a
tool to study the behavior of the system. We will explain this reasoning
through three examples: The case of P (p, L), the case of S(p, L) and the
case of Π(p, L).

6.2 Finite size scaling of P (p, L)

Measuring P(p, L) for finite L. Let us now apply this methodology to
study the behavior of the density of the spanning cluster, P(p, L), for
finite system sizes. First, we generate a plot of P (p, L) for various values
of L using the following program:
from pylab import *
from scipy.ndimage import measurements

LL = [25,50,100,200]
p = linspace(0.4,0.75,50)
nL = len(LL)
nx = len(p)
P = zeros((nx,nL),float)
for iL in range(nL):
    L = LL[iL]
    N = int(2000*25/L)  # Fewer samples for the larger systems
    for i in range(N):
        z = rand(L,L)
        for ip in range(nx):
            m = z<p[ip]
            lw, num = measurements.label(m)
            # A cluster present on both the first and last row spans the system
            perc_x = intersect1d(lw[0,:],lw[-1,:])
            perc = perc_x[where(perc_x>0)]
            if (len(perc)>0):
                area = measurements.sum(m, lw, perc[0])
                P[ip,iL] = P[ip,iL] + area
    P[:,iL] = P[:,iL]/(L*L*N)
for iL in range(nL):
    L = LL[iL]
    plot(p,P[:,iL],label="$L="+str(L)+"$")
ylabel('$P(p,L)$')
xlabel('$p$')
legend()

The resulting plot of P(p, L) is shown in Fig. 6.1. We see that as
L increases, P(p, L) approaches the shape expected in the limit when
L → ∞. We can see how it approaches this limit by finding the value
of P(pc, L) as a function of L. We expect this value to go to zero as L
increases. Fig. 6.1b shows how P(pc, L) approaches zero. Let us see if we
can develop a theoretical prediction for this behavior and check if our
measured results confirm the prediction.


Fig. 6.1 (a) Plot of P (p, L). (b) Plot of P (pc ; L) as a function of L.

Finite size effects in P(p, L). We expect

$$P(p) \propto (p - p_c)^{\beta} \propto \xi^{-\beta/\nu} \quad (6.3)$$

in the limit when L → ∞. The limit of large L corresponds to L being
large compared to the characteristic length scale in the system ξ: L ≫ ξ.
In the other limit, when L is small compared to ξ, L ≪ ξ, ξ is large,
which again corresponds to p → pc. In this case, we see from Fig. 6.1
that P(pc, L) depends on L, and we found previously that

$$P(p, L) = \frac{M(p, L)}{L^d} \propto \frac{L^D}{L^d} = L^{D-d} = L^{-\beta/\nu} . \quad (6.4)$$
Finite size scaling ansatz. The fundamental idea of finite size scaling
is then to assume a particular form of a function that encompasses the
behavior both when ξ ≪ L and ξ ≫ L, by rewriting the expression for
P(p, L) as

$$P(p, L) = L^{-\beta/\nu} f(L/\xi) , \quad (6.5)$$


where we have assumed that the only length scales are L and ξ and
that the function therefore can only depend on the ratio between these
two length scales. How does the function f(u) need to behave for this
general form to reduce to eq. (6.3) and eq. (6.4)?
First, we see that when ξ ≫ L, the function f(L/ξ) should be a
constant, that is, f(u) is a constant when u ≪ 1.
Second, we see that when ξ ≪ L, we need the function f(L/ξ) to
cancel all the L-dependency in order to recover the relation in eq. (6.3):

$$P(p, L) = L^{-\beta/\nu} f(L/\xi) = \xi^{-\beta/\nu} . \quad (6.6)$$

We assume that f(u) is a power-law, f(u) = u^a. In order to cancel L we
see that:

$$P(p, L) = L^{-\beta/\nu} (L/\xi)^a = L^{-\beta/\nu + a}\, \xi^{-a} = \xi^{-\beta/\nu} \quad (6.7)$$

$$\Rightarrow \quad -\beta/\nu + a = 0 \quad \Rightarrow \quad a = \beta/\nu . \quad (6.8)$$

Indeed, we could have used this in order to find the exponent in the
relation ξ^{−β/ν}: it would simply have been enough to assume that
P(p, L) = ξ^x for some exponent x in the limit ξ ≪ L.
We have therefore found that in order to satisfy these conditions, the
scaling form of P(p, L) must be

$$P(p, L) = L^{-\beta/\nu} f(L/\xi) , \quad (6.9)$$

where

$$f(u) = \begin{cases} \text{const.} & u \ll 1 \\ u^{\beta/\nu} & u \gg 1 \end{cases} . \quad (6.10)$$
Testing the scaling ansatz. We can now test the scaling ansatz by
plotting P(p, L) according to the ansatz, following a strategy similar to
what we developed for n(s, p). We rewrite the scaling function P(p, L) =
L^{−β/ν} f(L/ξ) by inserting ξ = ξ0 |p − pc|^{−ν}:

$$P(p, L) = L^{-\beta/\nu} f(L/\xi) \quad (6.11)$$

$$= L^{-\beta/\nu} f\!\left((L/\xi_0)\, |p - p_c|^{\nu}\right) \quad (6.12)$$

$$= L^{-\beta/\nu} f\!\left(\left(\xi_0^{-1/\nu} L^{1/\nu} (p - p_c)\right)^{\nu}\right) \quad (6.13)$$

$$= L^{-\beta/\nu} \tilde{f}\!\left(L^{1/\nu} (p - p_c)\right) . \quad (6.14)$$

We can again rewrite this as

$$L^{\beta/\nu} P(p, L) = \tilde{f}\!\left(L^{1/\nu} (p - p_c)\right) . \quad (6.15)$$

Therefore, if we plot L^{1/ν}(p − pc) along the x-axis and L^{β/ν} P(p, L) along
the y-axis, we expect all the data to fall onto a common curve, the curve
f̃(u). This is done in Fig. 6.2, which shows that the measured data is
consistent with the scaling ansatz. We call such a plot a scaling data
collapse plot.
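A sketch of how such a collapse plot can be produced, reusing the arrays
p, LL, nL and P from the program above. The exponent values ν = 4/3,
β = 5/36, and pc = 0.59275 are the known two-dimensional values, which
we insert here by hand:

nu = 4.0/3.0
beta = 5.0/36.0
pc = 0.59275
for iL in range(nL):
    L = LL[iL]
    plot((p-pc)*L**(1.0/nu), P[:,iL]*L**(beta/nu), label="$L="+str(L)+"$")
xlabel('$(p-p_c)L^{1/\\nu}$')
ylabel('$L^{\\beta/\\nu}P(p,L)$')
legend()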


Fig. 6.2 Scaling data collapse plot of P (p, L).

Comparing theory at p = pc. Finally, we can now use this theory to
understand the behavior of P(pc, L). In this case we find that P(pc, L) =
c L^{−β/ν}. We can therefore measure −β/ν from the plot of P(pc, L) in
Fig. 6.1. However, while the data in this figure is somewhat too poor to
produce a reliable result, the figure demonstrates the principle.
Varying L to gain insight. The take-home message is that instead of
running one single simulation with as large an L as possible, we
should vary L systematically and then use this variation in order
to estimate the relevant exponents ν and β. The methods demonstrated
here usually provide much better results, in terms of precision of the
exponents, than a direct measurement for a large system size.
Alternative approaches. We could instead have started with a scaling
ansatz of P(p, L) = (p − pc)^β g(L/ξ) = ξ^{−β/ν} g(L/ξ). The whole derivation
and, of course, the end result would have been the same. We leave this
as an exercise for the eager reader.

6.3 Average cluster size

We can characterize the distribution of cluster sizes using moments of
the cluster number distribution. The k-th moment Mk(p, L) is defined as:

$$M_k(p, L) = \sum_{s=1}^{\infty} s^k\, n(s, p; L) . \quad (6.16)$$

We have already introduced the second moment, M2(p, L), which we
called the average cluster size, S(p, L):

$$S(p, L) = M_2(p, L) = \sum_{s=1}^{\infty} s^2\, n(s, p; L) . \quad (6.17)$$

Now, let us see if we can apply the finite-size scaling approach to develop
a scaling theory for S(p, L). First, we will measure S(p, L), and then
develop and test a scaling theory.

6.3.1 Measuring moments of the cluster number density


How would we measure S(p, L)? We recall that we measure the cluster
number density from
Ns
n(s, p; L) = d , (6.18)
L
where Ns is the number of clusters of size s. Thus we can estimate S(p, L)
from: ∞
Ns
S(p, L) = (6.19)
X
s2 d .
s=1
L
We realize that we can perform this sum by summing over all possible
s and then including how many clusters we have for a given s, or we
can alternatively sum over all the observed clusters si. (Try to convince
yourself that this is the same by looking at a sequence of clusters of sizes
1, 2, 1, 5, 1, 2: summing over clusters gives 1 + 4 + 1 + 25 + 1 + 4 = 36,
and summing over sizes gives 1²·3 + 2²·2 + 5²·1 = 36.) Thus, we can
estimate the second moment from the sum:

$$S(p, L) = \sum_i s_i^2 / L^2 . \quad (6.20)$$

And similarly by summing over s_i^k for the k-th moment.
We implement this in the following program:

from pylab import *
from scipy.ndimage import measurements

LL = [25,50,100,200]
p = linspace(0.4,0.75,50)
nL = len(LL)
nx = len(p)
S = zeros((nx,nL),float)
for iL in range(nL):
    L = LL[iL]
    M = int(2000*25/L)
    for i in range(M):
        z = rand(L,L)
        for ip in range(nx):
            m = z<p[ip]
            lw, num = measurements.label(m)
            labelList = arange(lw.max() + 1)
            area = measurements.sum(m, lw, labelList)
            # Remove spanning cluster by setting its area to zero
            perc_x = intersect1d(lw[0,:],lw[-1,:])
            perc = perc_x[where(perc_x>0)]
            if (len(perc)>0):
                area[perc[0]] = 0
            S[ip,iL] = S[ip,iL] + sum(area*area)
    S[:,iL] = S[:,iL]/(L**2*M)
# Plotting the results
for iL in range(nL):
    L = LL[iL]
    lab = "$L="+str(L)+"$"
    plot(p,S[:,iL],label=lab)
ylabel('$S(p,L)$')
xlabel('$p$')
legend()

The resulting plot of S(p, L) as a function of p for various values of L
is shown in Fig. 6.3.


Fig. 6.3 (a) Plot of S(p, L). (b) Plot of S(pc ; L) as a function of L.

6.3.2 Scaling theory for S(p, L)

How can we understand these plots and how can we develop a theory for
S(p, L)? We found previously that S diverges as p approaches pc:

$$S(p) = S_0 |p - p_c|^{-\gamma} , \quad (6.21)$$

where the exponent γ = 43/18 for d = 2. Following the approach for
finite-size scaling introduced above, we introduce the finite size L through
a scaling function f(L/ξ), giving us a finite-size scaling ansatz (our
hypothesis):

$$S(p, L) = S_0 |p - p_c|^{-\gamma}\, f\!\left(\frac{L}{\xi}\right) . \quad (6.22)$$

We rewrite the first factor by introducing ξ = ξ0 |p − pc|^{−ν}, so that
S0 |p − pc|^{−γ} ∝ ξ^{γ/ν}, giving:

$$S(p, L) = \xi^{\gamma/\nu}\, f\!\left(\frac{L}{\xi}\right) . \quad (6.23)$$
Now, we see from Fig. 6.3 that when p = pc, S(pc, L) does not diverge,
but depends on L, as we would expect for a finite system. Thus we
know that in the limit when p → pc, S(p, L) can only depend on L.
This implies that the function f(L/ξ) in this limit must be such that the
ξ in f(L/ξ) cancels the ξ^{γ/ν} in front of it. This can only happen if
f(L/ξ) ∝ (L/ξ)^{γ/ν}:

$$S(p, L) \propto \xi^{\gamma/\nu} \left(\frac{L}{\xi}\right)^{\gamma/\nu} = L^{\gamma/\nu} . \quad (6.24)$$

Thus, we have found that S(pc, L) ∝ L^{γ/ν}.
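With the two-dimensional values γ = 43/18 and ν = 4/3 quoted above,
this predicts S(pc, L) ∝ L^{43/24} ≈ L^{1.79}, which (a check we add here)
is the slope one should expect in the double-logarithmic plot in Fig. 6.3b.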


This allows us to write the scaling form of S(p, L) in a different way:

$$S(p, L) = L^{\gamma/\nu}\, g\!\left(\frac{L}{\xi}\right) . \quad (6.25)$$

We can test this prediction by plotting S(p, L) L^{−γ/ν} as a function of
L/ξ:

$$S(p, L)\, L^{-\gamma/\nu} = g\!\left(\frac{L}{\xi}\right) = g\!\left(L (p - p_c)^{\nu}\right) \quad (6.26)$$

$$= g\!\left(\left(L^{1/\nu} (p - p_c)\right)^{\nu}\right) = \tilde{g}\!\left(L^{1/\nu} (p - p_c)\right) . \quad (6.27)$$

The resulting plot is shown in Fig. 6.4, which indeed demonstrates that
the measured data is consistent with the scaling theory. Success!


Fig. 6.4 A data-collapse plot of the rescaled average cluster size L^{−γ/ν} S(p, L) as a
function of L^{1/ν}(p − pc) for various L.

6.4 Percolation threshold

Finally, we will demonstrate one of the most elegant applications of
finite-size scaling theory to the percolation probability Π(p, L), and
see how a finite system size will affect the effective percolation threshold.

6.4.1 Measuring the percolation probability Π(p, L)


We can measure the percolation probability for a set of finite system sizes
using the methods we developed previously. Here, we have implemented
the measurement in the following program, which is very similar to the
program developed to measure P(p, L):
from pylab import *
from scipy.ndimage import measurements

LL = [25,50,100,200]
p = linspace(0.4,0.75,50)
nL = len(LL)
nx = len(p)
Ni = zeros((nx,nL),float)
Pi = zeros((nx,nL),float)
for iL in range(nL):
    L = LL[iL]
    N = int(2000*25/L)
    for i in range(N):
        z = rand(L,L)
        for ip in range(nx):
            m = z<p[ip]
            lw, num = measurements.label(m)
            perc_x = intersect1d(lw[0,:],lw[-1,:])
            perc = perc_x[where(perc_x>0)]
            if (len(perc)>0):
                Ni[ip,iL] = Ni[ip,iL] + 1
    Pi[:,iL] = Ni[:,iL]/N
for iL in range(nL):
    L = LL[iL]
    lab = "$L="+str(L)+"$"
    plot(p,Pi[:,iL],label=lab)
ylabel('$\Pi(p,L)$')
xlabel('$p$')
legend()

The resulting plot of Π(p, L) for various values of L is shown in Fig. 6.5.


Fig. 6.5 Plot of Π(p, L).

6.4.2 Measuring the percolation threshold pc


Let us now assume that we do not a priori know pc or any of the scaling exponents. How can we use this data-set to estimate the value for pc? The simplest approach may be to estimate pc as the value for p that makes Π(p, L) = 1/2. This corresponds to the intersection between the horizontal line Π = 1/2 and the curves in Fig. 6.5, as illustrated in Fig. 6.6. There, we have also plotted p1/2 as a function of L, where p1/2 is the value for p so that Π(p1/2, L) = 1/2. These values for p1/2 are calculated by a simple interpolation, as illustrated in the following program. (Notice that, as usual in this book, we do not aim for high precision in this program. The simulations are for small system sizes and few samples, but are meant to illustrate the principle and be reproducible for you.)


Fig. 6.6 (a) Plot of Π(p, L). (b) Plot of p1/2 as a function of L.

for iL in range(nL):
    L = LL[iL]
    ipc = argmax(Pi[:,iL]>0.5) # Find first value where Pi>0.5
    # Interpolate from ipc-1 to ipc to find the intersection with 1/2
    ppc = p[ipc-1] + (0.5-Pi[ipc-1,iL])*\
        (p[ipc]-p[ipc-1])/(Pi[ipc,iL]-Pi[ipc-1,iL])
    plot(L,ppc,'o')
xlabel('$L$')
ylabel('$p_{1/2}$')

From Fig. 6.6 we see that as L increases, the value for p1/2 gradually approaches pc. Well, we cannot really see that it is approaching pc, but we guess that it will. However, in order to extrapolate the curve to infinite L we need a theory for how p1/2 behaves: we need to develop a finite-size scaling theory for Π(p, L).

6.4.3 Finite-size scaling theory for Π(p, L)


We apply the same method as before to develop a theory for Π(p, L). First, we notice that at p = pc, Π(pc, L) neither diverges nor goes to zero. This means that Π(p, L) cannot be a function of ξ alone, but instead must have the scaling form:

Π(p, L) = ξ^{0} f(L/ξ) .   (6.28)

We rewrite this in terms of (p − pc) by inserting ξ = ξ0 |p − pc|^{−ν}:

Π(p, L) = f((L/ξ0)|p − pc|^{ν}) = f(ξ0^{−1}(L^{1/ν}(p − pc))^{ν}) .   (6.29)

We introduce a new function Φ(u) = f(ξ0^{−1} u^{ν}):

Π(p, L) = Φ(L^{1/ν}(p − pc)) .   (6.30)

This is our finite-size scaling ansatz (theory).

6.4.4 Estimating pc using the scaling ansatz


How can we now use this to estimate pc ? We follow a technique similar
to what we used above: We find the value px that makes Π(px , L) = x.
Above we did this for x = 1/2, but we can do this more generally.
Actually, as L → ∞, we expect any such px to converge to pc . We notice
from above that px is a function of L: px = px (L).
We insert this into the scaling ansatz:
 
x = Φ((px(L) − pc) L^{1/ν}) ,   (6.31)

which can be solved as

(px − pc) L^{1/ν} = Φ^{−1}(x) = Cx ,   (6.32)

where it is important to realize that the right-hand side, Cx, is a number which depends only on x and not on L. We can therefore rewrite this as

px − pc = Cx L−1/ν . (6.33)
If we know ν, we see that this gives a method to estimate the value of pc. Fig. 6.7 shows a plot of p1/2 as a function of L^{−1/ν} for ν = 4/3. We can use this plot to extrapolate to find pc in the limit when L → ∞,
as indicated in the plot. The resulting value for pc extrapolated from
L = 50, 100, 200 is pc = 0.5914, which is surprisingly good given the
small system sizes and small sample sizes used for this estimate. This
demonstrates the power of finite size scaling.
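A minimal sketch of this extrapolation, reusing the arrays Pi, p, LL, and nL from the program above, is shown below. The interpolation is the same as in the previous program; the name p12 and the use of polyfit are our implementation choices, not prescribed by the text.

nu = 4.0/3.0
p12 = zeros(nL)
for iL in range(nL):
    ipc = argmax(Pi[:,iL]>0.5)
    p12[iL] = p[ipc-1] + (0.5-Pi[ipc-1,iL])*\
        (p[ipc]-p[ipc-1])/(Pi[ipc,iL]-Pi[ipc-1,iL])
x = array(LL)**(-1.0/nu)
c = polyfit(x[1:],p12[1:],1)  # Linear fit to L = 50, 100, 200
print("Extrapolated pc = " + str(c[1]))  # Intercept at L -> infinity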


Fig. 6.7 Plot of p1/2 as a function of L^{−1/ν}. The dashed line indicates a linear fit to the data for L = 50, 100, 200. The extrapolated value for pc at L → ∞ is pc = 0.5914.

6.4.5 Estimating pc and ν using the scaling ansatz


However, this result depends on knowing the value for ν. What if we knew neither ν nor pc? How can we estimate both from the scaling ansatz? One alternative is to generate plots of px as a function of L^{−1/ν} for several values of x. Then we adjust the value of ν until we get a straight line, in which case we can read off the intercept with the px axis as the value for pc.
However, we can do even better by noticing a trick: For two values x1 and x2, we get

dp = pΠ=x1(L) − pΠ=x2(L) = (Cx1 − Cx2) L^{−1/ν} ,   (6.34)

and we can therefore plot log(dp) as a function of log(L) to estimate ν from the slope −1/ν, and then use this to estimate pc.
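As a sketch of this trick, still assuming the arrays Pi, p, LL, and nL from the program above, we can interpolate px for two values of x and read off ν from the slope of log(dp) against log(L). The helper function find_px is ours, introduced only for this illustration.

def find_px(PiL,p,x):
    i = argmax(PiL>x)  # First index where Pi exceeds x
    return p[i-1] + (x-PiL[i-1])*(p[i]-p[i-1])/(PiL[i]-PiL[i-1])
dp = zeros(nL)
for iL in range(nL):
    dp[iL] = find_px(Pi[:,iL],p,0.8) - find_px(Pi[:,iL],p,0.3)
c = polyfit(log(LL),log(dp),1)  # Slope is -1/nu
print("Estimated nu = " + str(-1.0/c[0]))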
As an exercise, the reader is encouraged to demonstrate that this
scaling ansatz is valid for d = 1, and in this case find Cx explicitly.

6.4.6 (Advanced) Finite-size scaling for dΠ(p, L)/dp


We can make our theory even more elegant by looking at the derivative
of Π instead of Π.
What is the interpretation of Π′ = dΠ/dp? Since we know that Π(0) = 0 and Π(1) = 1, we find that Π′ is normalized:

∫_0^1 Π′ dp = Π(1) − Π(0) = 1 .   (6.36)

Our interpretation of Π′ is that Π′(p)dp is the probability that the system percolates for the first time in the interval from p to p + dp. In an infinite system we know that Π is a step function which goes abruptly from 0 to 1 at pc. Its derivative is then a delta function, which is zero everywhere except at pc.
We use our scaling ansatz to find the derivative:

Π′ = L^{1/ν} Φ′[(p − pc)L^{1/ν}] .   (6.37)

In particular we find that

Π′(pc) = L^{1/ν} Φ′[0] .   (6.38)

We also find that the position of the maximum of Π′ is given by the second derivative,

Π′′ = L^{2/ν} Φ′′[(p − pc)L^{1/ν}] ,   (6.39)

and we will be looking for where Π′′ = 0. Let us suppose that the value x0 makes the second derivative zero, that is, suppose that Φ′(x) has a maximum at x = x0.
At the maximum of Φ′ we have that

(pmax − pc)L^{1/ν} = x0 ,   (6.40)

and therefore

pmax = pc + x0 L^{−1/ν} .   (6.41)

In each numerical experiment we are really measuring an effective pc, but as L → ∞ we see that peff → pc. The way it goes to pc tells us something about ν.
Average of the distribution. Because Π′ is a probability density, we can also calculate the average p of this distribution, that is, the average p at which we first get a percolation cluster in a system of size L. Let us call this quantity ⟨p⟩.
⟨p⟩ = ∫_0^1 p Π′(p) dp   (6.42)
= ∫_0^1 (p − pc) Π′(p) dp + pc ∫_0^1 Π′(p) dp ,   (6.43)

where the last integral is the normalization integral, and is 1. In the first integral we insert Π′ = L^{1/ν} Φ′[(p − pc)L^{1/ν}] and substitute x = (p − pc)L^{1/ν}, so that dx = L^{1/ν} dp. We therefore get the result that

⟨p⟩ = pc + L^{−1/ν} ∫ x Φ′[x] dx ,   (6.45)

where the remaining integral is simply a constant, so that we can write the average percolation threshold in a finite system of size L as

⟨p⟩ = pc + C L^{−1/ν} ,   (6.46)

which is not located exactly at pc, but the shift decreases with L.
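We can check this numerically with a small sketch based on the measured Π(p, L) from above: we estimate Π′ by a finite difference and compute ⟨p⟩ by numerical integration. The use of gradient and trapz is our implementation choice.

for iL in range(nL):
    dPi = gradient(Pi[:,iL],p)  # Numerical estimate of Pi'(p)
    avgp = trapz(p*dPi,p)       # <p> = int p Pi'(p) dp
    print("L = " + str(LL[iL]) + ": <p> = " + str(avgp))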

6.5 Exercises

Exercise 6.1: Finite-size scaling in one dimension


a) Show that the scaling ansatz for Π(p, L) is valid for d = 1.
b) Find an explicit expression for Cx for d = 1.

Exercise 6.2: Finite-size scaling in two dimensions


In this exercise we will use the scaling ansatz to provide estimates of ν,
pc and the average percolation probability hpi in a system of size L.
We define px so that Π(px , L) = x. Notice that px is a function of
system size L used for the simulation.
a) Find px for x = 0.3 and x = 0.8 for L = 25, 50, 100, 200, 400, 800. Plot
px as a function of L.
According to the scaling theory we have

px1 − px2 = (Cx1 − Cx2) L^{−1/ν} .   (6.47)



b) Plot log (p0.8 − p0.3 ) as a function of log(L) to estimate the exponent


ν. How does it compare to the exact result?
In the following, please use the exact value ν = 4/3. The scaling theory
also predicted that
px = pc + Cx L^{−1/ν} .   (6.48)
c) Plot px as a function of L^{−1/ν} to estimate pc. Generate a data-collapse plot for Π(p, L) to find the function Φ(u) described above.
d) Plot Π′(p, L) as a function of p for the various L values used above. Generate a data-collapse plot of Π′(p, L). Find ⟨p⟩ and plot ⟨p⟩ as a function of L^{−1/ν} to find pc.

Exercise 6.3: Finite size scaling of n(s, pc , L)


a) Develop a finite size scaling ansatz/theory for n(s, pc , L). You should
provide arguments for the behavior in the various limits.
b) Plot n(s, pc , L) as a function of s for L = 100, 200, 400, 800.
c) Demonstrate the validity of the scaling theory by producing a data-
collapse plot for n(s, pc , L).
7 Renormalization

We have now learned that when p approaches pc, the correlation length grows to infinity, and the spanning cluster becomes a self-similar fractal structure. This implies that the spanning cluster has statistical self-similarity: if we cut out a piece of the spanning cluster and rescale the lengths in the system, the rescaled system will have the same geometrical properties as the original system. In particular, the rescaled system will have the same mass scaling relation: it will also be a self-similar fractal with the same scaling properties.
What happens when p ≠ pc? In this case, there will be a finite correlation length, ξ, and a rescaling of the lengths in the system implies that the correlation length is also rescaled. A rescaling by a factor b corresponds to making an average over b^d sites in order to form the new lattice. Now, we will simply assume that this also implies that the correlation length is reduced by a factor b: ξ′ = ξ/b. After a few iterations of this rescaling procedure, the correlation length will correspond to the lattice spacing, and the lattice is uniform.
We could have made this argument even simpler by initially stating that we divide the system into parts that are larger than the correlation length. Again, this would lead to a system that is homogeneous from the smallest lattice spacing and upwards. We can conclude that when p < pc, the system behaves as a uniform, unconnected system, and when p > pc, the system is uniform and connected.
The argument we have sketched above is the essence of the renormalization group argument. It is only exactly at p = pc that an iterative rescaling is a non-trivial fixpoint: the system iterates onto itself because it is a self-similar fractal. When p is away from pc, rescaling iterations will make the system progressively more homogeneous, and effectively bring the rescaled p towards either 0 or 1.
In this section we will provide an introduction to the theoretical
framework for renormalization. This is a powerful set of techniques,
introduced for equilibrium critical phenomena by Kadanoff [19] in 1966
and by Wilson [40] in 1971. Wilson later received the Nobel prize for his
work on critical phenomena.

7.1 The renormalization mapping


Let us return to our theoretical model for our study of disorder: the
model porous medium with occupation probability p. We will study a
system of size L with a correlation length ξ, which is a function of p. We
will call the length of a side of a single site a, and we ensure that

L ≫ ξ ≫ a .   (7.1)

We will now address whether it is possible to average over some of the sites in such a way that the macroscopic behavior does not change significantly. That is, we want to replace cells of b^d sites with new, "renormalized" single sites. This averaging procedure is illustrated in Fig. 7.1.


Fig. 7.1 Illustration of averaging using a rescaling b = 2, so that a cell of size b × b = 2 × 2 is reduced to a single site, producing a "renormalized" system of size L/2. The original pattern was generated with p = 0.625 for an L = 16 lattice.

In the original lattice the occupation probability is p. However, through our averaging procedure, we may change the occupation probability for the new, averaged sites. We will therefore call the new occupation probability p′, the probability to occupy a renormalized site. We write the mapping between the original and the new occupation probabilities as

p′ = R(p) ,   (7.2)

where the function R(p), which provides the mapping, depends on the details of the rule used for renormalization. It is important to realize that the system size L and the correlation length ξ do not change in real terms; it is only in units of lattice constants that they change.
There are many choices for the mapping between the original and the renormalized lattice. We have illustrated a particular rule for a mapping with a rescaling b = 2 in Fig. 7.2. For a site percolation problem with b = 2 there are 2^{b^d} = 16 possible configurations of the b × b cell. The different configurations are classified into the 6 categories c shown in the figure, where the number of configurations in each category is also listed. In Fig. 7.2 we have also illustrated a particular averaging rule. However, we could also have chosen different rules. Usually, we should ensure that the global information is preserved by the mapping. For example, we would want the mapping to conserve connectivity. That is, we would like to ensure that

Π(p, L) = Π(p′, L/b) .   (7.3)
However, even though we may ensure this on the level of the mapping,
this does not ensure that the mapping actually conserves connectivity
when applied to a large cluster - it may, for example, connect clusters
that were unconnected in the original lattice, or disconnect clusters that
were connected, as illustrated in Fig. 7.3.
Currently, we will not consider the details of the renormalization mapping p′ = R(p); we will only assume that such a map exists and study its qualitative features. Then we will address the renormalization mapping through several worked examples. For any choice of mapping, the rescaling must result in a change in the correlation length ξ:

ξ′ = ξ(p′) = ξ(p)/b .   (7.4)
We will use this relation to address the fixpoints of the mapping. A
fixpoint is a point p∗ that does not change when the mapping is applied.
That is
p∗ = R(p∗ ) . (7.5)

c = 1 (n = 1)   c = 2 (n = 4)   c = 3 (n = 4)   c = 4 (n = 2)   c = 5 (n = 4)   c = 6 (n = 1)

Fig. 7.2 Illustration of a renormalization rule for a site percolation problem with a
rescaling b = 2. The top row indicates various clusters categorized into 6 classes c. The
number of different configurations n in each class is also listed. The mapping ensures that
connectivity is preserved. However, this renormalization mapping is not unique: we could
have chosen many different averaging schemes.

Fig. 7.3 Illustration of a single step of renormalization on an 8 × 8 lattice of sites. We
see that the renormalization procedure introduces new connections: the blue cluster is
now much larger than in the original. However, the procedure also removes previously
existing connections: the original yellow cluster is split into two separate clusters.

There are two trivial fixpoints: p∗ = 0 and p∗ = 1. At a fixpoint, the iteration relation for the correlation length becomes:

ξ(p∗) = ξ(p∗)/b .   (7.6)

This relation is satisfied at the two trivial fixpoints, because the correlation length is zero there, ξ(0) = ξ(1) = 0. The only possible solutions of ξ(p∗) = ξ(p∗)/b are ξ = 0 and ξ = ∞.
Let us assume that there exists a non-trivial fixpoint p∗, and let us address the behavior for p close to p∗. We notice that for any finite ξ, iterations by the renormalization relation will reduce ξ. That is, both for p < p∗ and for p > p∗, iterations will make ξ smaller. This implies that iterations will take the system further away from the non-trivial fixpoint, where the correlation length is infinite. The non-trivial fixpoint is therefore an unstable fixpoint. Similarly, for p close to a trivial fixpoint, where ξ = 0, iterations will move p closer to the fixpoint. The trivial fixpoints are therefore stable.
Iterations by the renormalization relation p′ = R(p) may be studied on the graph of R(p), as illustrated in Fig. 7.4. Consecutive iterations take the system along the arrows illustrated in the figure, as the reader should convince himself of by following the mapping. Notice that the line p′ = p is drawn as a dotted reference line. In the figure, the two end points, p = 0 and p = 1, are the only stable fixpoints, and the point p∗ is the only unstable fixpoint. The actual shape of the function R(p) depends on the renormalization rule, and the shape may be more complex than what is illustrated in Fig. 7.4.

Fig. 7.4 Illustration of the renormalization mapping p′ = R(p) as a function of p. The non-trivial fixpoint p∗ = R(p∗) is illustrated. Two iteration sequences are illustrated by the lines with arrows. Let us look at the path starting from p > p∗. Through the first application of the mapping, we read off the resulting value of p′. This value will then be the input value for the next application of the renormalization mapping. A fast way to find the corresponding position along the p axis is to reflect the p′ value from the line p′ = p, shown as a dotted line. This gives the new p value, and the mapping is applied again, producing yet another p′ which is even further from p∗. With the drawn shape of R(p) there is only one non-trivial fixpoint, which is unstable.

7.1.1 (Advanced) Renormalization of correlation length


Let us make a small diversion and see what consequences the renormalization of the correlation length has if we know that there exists a percolation threshold pc. In that case, we know that

ξ(p) = ξ0 (p − pc)^{−ν} ,   (7.7)

and

ξ(p′) = ξ0 (p′ − pc)^{−ν} .   (7.8)

We can then use the renormalization condition for the correlation length from (7.4) to obtain:

ξ0 (p′ − pc)^{−ν} = (1/b) ξ0 (p − pc)^{−ν} .   (7.9)

When p → pc, we see that both ξ(p) and ξ(p′) approach infinity, which implies that if p = pc, then we must also have that p′ = pc. That is, we have found that pc is a fixpoint of the mapping.

7.1.2 Iterating the renormalization mapping


We are now ready for a more quantitative argument for the effect of iterations through the renormalization mapping R(p). We have argued that the non-trivial fixpoint corresponds to the percolation threshold, since the correlation length diverges for this value of p, and we will indeed assume that we can identify pc with the fixpoint.

We will now assume that R(p) is an analytic function. This is not a strong assumption, since for any simple R(p) based on polynomials of p and 1 − p this is trivially fulfilled. We will now Taylor expand the mapping p′ = R(p) around p = p∗. First, we notice that

p′ − p∗ = R(p) − R(p∗) .   (7.10)

The Taylor expansion of R(p) for p close to p∗ is:

R(p) = R(p∗) + R′(p∗)(p − p∗) + O((p − p∗)^2) .   (7.11)

If we define Λ = R′(p∗), we can write to first order in p − p∗:

p′ − p∗ ≃ Λ(p − p∗) .   (7.12)



We see that the value of Λ characterizes the fixpoint. For Λ > 1 the new point p′ will be further away from p∗ than the initial point p. Consequently, the fixpoint is unstable. By a similar argument, we see that for Λ < 1 the fixpoint is stable. For Λ = 1 we call the fixpoint a marginal fixpoint.

Let us now assume that the fixpoint is indeed the percolation threshold. In this case, when p is close to pc, we know that the correlation length is

ξ(p) = ξ0 (p − pc)^{−ν} ,   (7.13)

for the initial point, and

ξ(p′) = ξ0 (p′ − pc)^{−ν}   (7.14)

for the renormalized point. We will now use (7.12) for p∗ = pc, giving

p′ − pc = Λ(p − pc) .   (7.15)

Inserting this into (7.14) gives

ξ(p′) = ξ0 (p′ − pc)^{−ν} = ξ0 (Λ(p − pc))^{−ν} = ξ0 Λ^{−ν} (p − pc)^{−ν} .   (7.16)

We can rewrite this using ξ(p):

ξ(p′) = Λ^{−ν} ξ(p) .   (7.17)

However, we also know that

ξ(p′) = ξ(p)/b .   (7.18)

Consequently, we have found that

b = Λ^{ν} .   (7.19)

This implies that the exponent ν is a property of the fixpoint of the mapping R(p). We can find ν from

ν = ln b/ln Λ ,   (7.20)

where we remember that Λ = R′(pc).

7.1.3 Application of renormalization to ξ


We will now show that we can achieve all of these results just from a simple assumption about the effect of renormalization on the correlation length. Trivially, a renormalization procedure will lead to a change in correlation length. Starting at p with a correlation length ξ(p), a renormalization step will produce a new occupation probability p′ and a new correlation length ξ′(p′). The fundamental assumption in the theory of the renormalization group is that the functional form of ξ and ξ′ is the same. That is, that we can write

ξ′(p′) = ξ(p′) ,   (7.21)

where ξ(p) was the functional form of the correlation length in the original system. At least we should be able to make this assumption in some small neighborhood around pc. That is, we assume that ξ(p) = ξ′(p) for |p − pc| ≪ 1. In this case, we can write the correlation length as a function of the deviation from pc: ε = p − pc. Similarly, we define ε′ = p′ − pc. The relation between the correlation lengths can then be written as

ξ(ε′) = ξ(ε)/b ,   (7.22)

where ξ(u) is a particular function of u. The Taylor expansion of the renormalization mapping R(p) in (7.12) can also be rewritten in terms of ε, giving

ε′ = Λε .   (7.23)

We can therefore rewrite (7.22) as

ξ(ε′) = ξ(Λε) = ξ(ε)/b ,   (7.24)

or, equivalently,

ξ(ε) = b ξ(Λε) .   (7.25)

This implies that ξ(ε) is a homogeneous function. Let us see how this function responds to iterations. We notice that after an iteration, the new value of ε is Λε, and we can write

ξ(Λε) = b ξ(Λ · Λε) = b ξ(Λ^2 ε) .   (7.26)

We can insert this value into (7.25) to get

ξ(ε) = b ξ(Λε) = b^2 ξ(Λ^2 ε) .   (7.27)



We can continue this process up to any power n, giving

ξ(ε) = b^n ξ(Λ^n ε) ,   (7.28)

for any n ≥ 1, where we have implicitly assumed that b > 1.

Let us now prove that (7.28) implies that ξ(ε) is to leading order a power-law, and let us also find the exponent. We choose a value of n so that

Λ^n ε = c ,   (7.29)

which implies that

n = ln(c/ε)/ln Λ .   (7.30)

We can always ensure that this produces a value n > 1 by selecting c sufficiently small. If we insert this value of n into (7.28) we get

ξ(ε) = b^{ln(c/ε)/ln Λ} ξ(c) = e^{ln b · ln(c/ε)/ln Λ} ξ(c)   (7.31)
= (c/ε)^{ln b/ln Λ} ξ(c) ∝ ε^{−ν} ,   (7.32)

where ν is given as

ν = ln b/ln Λ .   (7.33)
We have now proved that the solution to the equation ξ(p′) = ξ(p)/b is a power-law function ξ ∝ |p − pc|^{−ν} with the exponent ν given by (7.33). This argument shows that the most important assumption of the renormalization theory is that the functional form ξ(p) does not change under the renormalization procedure. It is important to realize that this is an assumption, and we will therefore have to check whether it produces reasonable results.

7.2 Examples

In the following we provide several examples of the application of the renormalization theory. Our renormalization procedure can be summarized in the following points.
• Coarse-grain the system into cells of size b^d.
• Find a rule to determine the new occupation probability, p′, from the old occupation probability, p: p′ = R(p).
• Determine the non-trivial fixpoints, p∗, of the renormalization mapping, p∗ = R(p∗), and use these points as approximations for pc: pc = p∗.
• Determine the rescaling factor Λ from the renormalization relation at the fixpoint: Λ = R′(p∗).
• Find ν from the relation ν = ln b/ln Λ.
It is important to realize that the renormalization mapping R(p) is not
unique. However, in order to obtain useful results we should ensure that
the mapping preserves connectivity on average.

7.2.1 Example: One-dimensional percolation


Let us first address the one-dimensional percolation problem using the renormalization procedure. We have illustrated the one-dimensional percolation problem in Fig. 7.5. We generate the renormalization mapping by ensuring that it conserves connectivity. The probability for two sites to be connected over a distance b is p^b when the occupation probability for a single site is p. A renormalization mapping that conserves connectivity is therefore:

p′ = Π(p, b) = p^b .   (7.34)

The fixpoints for this mapping are given by

p∗ = (p∗)^b ,   (7.35)

with only two possible solutions, p∗ = 0 and p∗ = 1. An example of a renormalization iteration is shown in Fig. 7.6. The curve illustrates that p∗ = 0 is the only attractive or stable fixpoint, and that p∗ = 1 is an unstable fixpoint.

Fig. 7.5 Illustration of a renormalization rule for a one-dimensional site percolation


system with b = 3.

We can also apply the theory directly to find the exponent ν. The renormalization relation is p′ = R(p) = p^b. We can therefore find Λ from:
Fig. 7.6 Iterations of the renormalization mapping p′ = p^b for a one-dimensional site percolation system with b = 3.


Λ = ∂R/∂p |_{p∗} = b(p∗)^{b−1} = b ,   (7.36)

where we are now studying the unstable fixpoint p∗ = 1. We can therefore determine ν from (7.20):

ν = ln b/ln Λ = ln b/ln b = 1 .   (7.37)
We notice that b was eliminated in this procedure, which is essential, since we do not want the exponent to depend on details such as the size of the renormalization cell. The result for the scaling of the correlation length is therefore

ξ ∝ 1/(1 − p) ,   (7.38)

when 1 − p ≪ 1.
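A few lines of code illustrate the instability of p∗ = 1: even starting very close to it, repeated application of p′ = p^b drives the system quickly towards the stable fixpoint p∗ = 0. This sketch uses b = 3, as in Fig. 7.5.

b = 3
for p0 in [0.9,0.99,0.999]:
    pp = p0
    for n in range(5):
        pp = pp**b  # One renormalization step: p' = p^b
    print("p = " + str(p0) + " iterates to " + str(pp))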

7.2.2 Example: Renormalization on 2d site lattice


Let us now use this method to address a renormalization scheme for
two-dimensional site percolation. We will use a scheme with b = 2. The
possible configurations for a 2 × 2 lattice are shown in Fig. 7.7.
In order to preserve connectivity, we need to ensure that configurations c = 1 and c = 2 are occupied also in the renormalized lattice. However, for configuration c = 3, we may choose only to consider spanning in one direction, or spanning in both directions. If we include spanning in only one direction, there are only two of the configurations c = 3 that contribute to the spanning probability, and the renormalization relation becomes

p′ = R(p) = p^4 + 4p^3(1 − p) + 2p^2(1 − p)^2 .   (7.39)

This is the probability for configurations c = 1, c = 2, or c = 3 to occur. The renormalization relation is illustrated in Fig. 7.8.

c = 1 (n = 1)   c = 2 (n = 4)   c = 3 (n = 4)   c = 4 (n = 2)   c = 5 (n = 4)   c = 6 (n = 1)

Fig. 7.7 Possible configurations for a 2 × 2 site percolation system. The top row indicates the various clusters categorized into 6 classes c. The number of different configurations n in each class is also listed.

Fig. 7.8 Plot of the renormalization relation p′ = R(p) = p^4 + 4p^3(1 − p) + 2p^2(1 − p)^2 for a two-dimensional site percolation problem.

We will now follow steps 3 and 4. First, in step 3, we determine the fixpoints of the renormalization relation. That is, we find the solutions to the equation

p∗ = R(p∗) = (p∗)^4 + 4(p∗)^3(1 − p∗) + 2(p∗)^2(1 − p∗)^2 .   (7.40)



The trivial solution p∗ = 0 is not of interest. Therefore we divide by p∗ to produce

(p∗)^3 + 4(p∗)^2(1 − p∗) + 2(p∗)(1 − p∗)^2 = 1 .   (7.41)

The other trivial fixpoint is p∗ = 1. We divide the equation by 1 − p∗ to get

(p∗)^2 + p∗ − 1 = 0 .   (7.42)

The relevant (positive) solution to this second-order equation is

p∗ = (−1 + √5)/2 = (√5 − 1)/2 ≃ 0.62 .   (7.43)

We have therefore found an estimate of pc by setting pc = p∗. This does not produce the correct value for pc in a two-dimensional site percolation system, but the result is still reasonably correct. We can similarly find the exponent ν by calculating R′(p∗).
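The full recipe can also be carried out numerically. The sketch below finds the non-trivial fixpoint of (7.39) with fsolve, estimates Λ = R′(p∗) by a centered difference, and computes ν = ln b/ln Λ; fsolve and the finite-difference derivative are our implementation choices.

from pylab import *
from scipy.optimize import fsolve
b = 2.0
def R(p):
    return p**4 + 4*p**3*(1-p) + 2*p**2*(1-p)**2
pstar = fsolve(lambda p: R(p)-p, 0.6)[0]  # Non-trivial fixpoint
dp = 1e-6
Lam = (R(pstar+dp)-R(pstar-dp))/(2*dp)    # Lambda = R'(p*)
nu = log(b)/log(Lam)
print("p* = " + str(pstar) + ", nu = " + str(nu))  # p* ~ 0.618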

7.2.3 Example: Renormalization on 2d triangular lattice


We will now use the same method to address site percolation on a triangular lattice. A triangular lattice is a lattice where each site has six neighbors. In solid state physics, the lattice is known as the hexagonal lattice because of its hexagonal rotational symmetry. Site percolation on the triangular lattice is particularly well suited for renormalization treatment, because a coarse-grained version of the lattice is also a triangular lattice, as illustrated in Fig. 7.9, with a lattice spacing b = √3 times the original lattice spacing.
We will use the majority rule for the renormalization mapping. That is, we will map a set of three sites onto an occupied site if a majority of the sites are occupied, meaning that two or more sites are occupied. Otherwise, the renormalized site is empty. This mapping is illustrated in Fig. 7.9. This mapping does, as the reader may easily convince himself, on average conserve connectivity. The renormalization mapping is

p′ = R(p) = p^3 + 3p^2(1 − p) = 3p^2 − 2p^3 .   (7.44)

The fixpoints of this mapping are the solutions of the equation

p∗ = 3(p∗)^2 − 2(p∗)^3 .   (7.45)


Fig. 7.9 Illustration of a renormalization scheme for site percolation on a triangular lattice. The rescaling factor is b = √3, and we use the majority rule for the mapping, that is, configurations c = 1 and c = 2 are occupied, and configurations c = 3 and c = 4 are mapped onto empty sites.

We observe that the trivial fixpoints p∗ = 0 and p∗ = 1 indeed satisfy (7.45). The non-trivial fixpoint is p∗ = 1/2. We are pleased to observe that this is actually the exact solution for pc for site percolation on the triangular lattice.

We can use this relation to determine the scaling exponent ν. First, we calculate Λ:

Λ = R′(p∗) = 6p(1 − p)|_{p=1/2} = 3/2 .   (7.46)

As a result we find the exponent ν from

ν = ln b/ln Λ = ln √3/ln(3/2) ≃ 1.355 ,   (7.47)

which is very close to the exact result ν = 4/3 for two-dimensional percolation.
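The algebra above is easily verified symbolically, for instance with sympy (our choice of tool; it is not used elsewhere in this text):

from sympy import symbols, solve, diff, log, sqrt, Rational
p = symbols('p')
R = 3*p**2 - 2*p**3
print(solve(R - p, p))                    # Fixpoints: 0, 1/2, 1
Lam = diff(R, p).subs(p, Rational(1,2))   # Lambda = R'(1/2) = 3/2
nu = log(sqrt(3))/log(Lam)
print(Lam, float(nu))                     # 3/2 and nu ~ 1.355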

7.2.4 Example: Renormalization on 2d bond lattice


As our last example of renormalization in two-dimensional percolation problems, we will study the bond percolation problem on a square lattice. The renormalization procedure is shown in Fig. 7.10. In the renormalization procedure, we replace 8 bonds by 2 new bonds. We consider connectivity only in the horizontal direction, and may therefore simplify the lattice by only considering the mapping of the H-cell: a mapping of five bonds onto one bond in the horizontal direction. The various configurations are shown in the figure. In Table 7.1 we show the number of such configurations and the probability of each configuration, which are needed in order to calculate the renormalized connection probability p′.


Fig. 7.10 (a) Illustration of a renormalization scheme for bond percolation on a square lattice in two dimensions. The rescaling factor is b = 2. (b) In general, the renormalization involves a mapping from 8 bonds to two bonds. However, we will consider percolation only in the horizontal direction. This simplifies the mapping to the figure shown in (c). For this mapping, the configurations are shown and enumerated in (d).

The resulting renormalization equation is given below, based on the configurations listed in Table 7.1.

c    P(c)             n(c)   Π|c
1    p^5 (1 − p)^0    1      1
2    p^4 (1 − p)^1    1      1
3    p^4 (1 − p)^1    4      1
4    p^3 (1 − p)^2    2      1
5    p^3 (1 − p)^2    2      1
6    p^3 (1 − p)^2    2      0
7    p^3 (1 − p)^2    4      1
8    p^2 (1 − p)^3    2      1
9    p^2 (1 − p)^3    4      0
10   p^2 (1 − p)^3    2      0
11   p^2 (1 − p)^3    2      0
12   p^1 (1 − p)^4    5      0
13   p^0 (1 − p)^5    1      0

Table 7.1 A list of the possible configurations for renormalization of a bond lattice. The probability for percolation given that the configuration is c is denoted Π|c. The spanning probability for the whole cell is then Π(p) = p′ = Σ_c n(c) P(c) Π|c.

p′ = R(p) = Π = Σ_{c=1}^{13} n(c) P(c) Π|c ,   (7.48)

where we have used c to denote the various configurations, P(c) is the probability for one instance of configuration c, n(c) is the number of different configurations due to symmetry considerations, and Π|c is the spanning probability given that the configuration is c. The resulting relation is

p′ = R(p)   (7.49)
= p^5 + p^4(1 − p) + 4p^4(1 − p) + 2p^3(1 − p)^2   (7.50)
+ 2p^3(1 − p)^2 + 4p^3(1 − p)^2 + 2p^2(1 − p)^3   (7.51)
= 2p^5 − 5p^4 + 2p^3 + 2p^2 .   (7.52)

The fixpoints for this mapping are p∗ = 0, p∗ = 1, and p∗ = 1/2. The fixpoint p∗ = 1/2 reproduces the exact solution for the percolation threshold on the bond lattice in two dimensions. We find Λ by differentiation:

Λ = R′(p∗) = 13/8 .   (7.53)

The corresponding estimate for the exponent ν is

ν = ln b/ln Λ ≃ 1.428 ,   (7.54)

which should be compared with the exact result ν = 4/3 for two-dimensional percolation.
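As a quick numerical check of this example (a sketch using numpy's polynomial helpers), we can represent R(p) from (7.52) as a polynomial and verify the fixpoint and the exponent:

from pylab import *
R = poly1d([2,-5,2,2,0,0])     # 2p^5 - 5p^4 + 2p^3 + 2p^2
print(R(0.5))                  # 0.5, confirming p* = 1/2
Lam = polyder(R)(0.5)          # R'(1/2) = 13/8
print(Lam, log(2)/log(Lam))    # 1.625 and nu ~ 1.428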

7.3 (Advanced) Universality

Even though we can choose renormalization rules that preserve connectivity statistically, the rules will not preserve connectivity exactly. The renormalization procedure is not exact. This is illustrated by site renormalization of site percolation in two dimensions in Fig. 7.11. We may speculate that various errors of this form, some of them adding together non-connected bonds and others removing connections, would cancel out on average. However, this is not the case. For the majority rule for two-dimensional site percolation, connectivity is not preserved, even on average. The result is that we end up with an error in our estimates of both pc and ν.

Fig. 7.11 Illustration of renormalization of connectivity for site percolation in two dimensions. The blue sites show the renormalized sites, and the lines show which clusters are connected. In this case, we see that the renormalized lattice is spanning, even though there are no spanning clusters in the original lattice.

How can we improve this situation? We need to introduce additional bonds between the sites during the renormalization procedure to preserve connectivity, even if the original problem was a pure site problem. This will produce a mixed site-bond percolation problem. The probability q to connect two nearest neighbors in the original site lattice must be found by counting all possible combinations of spanning between nearest-neighbor sites in the original lattice. We may also have to introduce next-nearest-neighbor bonds and so on.
Let us describe the renormalized problem by the two renormalized probabilities p′ for sites and x′ for bonds. The renormalization procedure will be described by a set of two renormalization relations:

p′ = R1(p, x)   (7.55)
x′ = R2(p, x)   (7.56)

Now, the flow in the renormalization procedure will not simply be along the p axis, but will occur in the two-dimensional (p, x)-space, as illustrated in Fig. 7.12. We will no longer have a single critical point, pc, but a set of points (pc, xc) corresponding to a curve in (p, x)-space, as shown in the figure. We also notice that when x = 1 we have a pure site percolation problem (all bonds are present and connectivity depends on the presence of sites alone), and similarly for p = 1 we have a pure bond percolation problem.

There are still two trivial fixpoints, (p, x) = (0, 0) and (p, x) = (1, 1), and we expect these points to be attractors. We will therefore need a line that separates the two trivial fixpoints. If we start on this line, we will remain on it. We therefore expect there to be a fixpoint on this line, the non-trivial fixpoint (p∗, x∗). We remark that the fixpoint no longer corresponds to the critical threshold: there will be a whole family of critical values corresponding to the curved black line in Fig. 7.12.
We can find the non-trivial fixpoint from the equations

p∗ = R1 (p∗ , x∗ ) (7.57)
x∗ = R2 (p∗ , x∗ ) (7.58)

Let us linearize the system near the fixpoint. We will do a Taylor expansion of the two functions R1(p, x) and R2(p, x) around the point (p∗, x∗):

p′ − p∗ = Λ11 (p − p∗) + Λ12 (x − x∗)   (7.59)
x′ − x∗ = Λ21 (p − p∗) + Λ22 (x − x∗)   (7.60)

where we have defined

Λ11 = ∂R1/∂p |_(p∗,x∗) ,  Λ12 = ∂R1/∂x |_(p∗,x∗) ,   (7.61)
Λ21 = ∂R2/∂p |_(p∗,x∗) ,  Λ22 = ∂R2/∂x |_(p∗,x∗) .   (7.62)

Fig. 7.12 Illustration of the flow due to renormalization in a combined site-bond percolation system. The black line shows the critical line, on which the correlation length is infinite, ξ = ∞. Below the critical line, renormalization will lead to the trivial fixpoint at (p, x) = (0, 0), as illustrated by the green lines. Above the line, renormalization will lead to the fixpoint at (p, x) = (1, 1).

We can therefore rewrite the recursion relation in matrix form, as

( p′ − p∗ )   ( Λ11  Λ12 ) ( p − p∗ )
( x′ − x∗ ) = ( Λ21  Λ22 ) ( x − x∗ ) .   (7.63)

We want to find the behavior after many iterations. This can be done by finding the eigenvectors and the eigenvalues of the matrix. That is, we find the vectors xi = (pi, xi) such that

Λ xi = λi xi .   (7.64)

We know that we can find two such vectors, and that the vectors are linearly independent, so that any vector x can be written as a linear combination of the two eigenvectors:
" #
p − p∗
= x = a1 x 1 + a2 x 2 . (7.65)
x − x∗

Applying the renormalization mapping will therefore produce

Λ x = λ1 a1 x1 + λ2 a2 x2 ,   (7.66)

and after N iterations we get

Λ^N x = λ1^N a1 x1 + λ2^N a2 x2 .   (7.67)

We see that if both λ1 < 1 and λ2 < 1, then any deviation from the fixpoint will approach zero after many iterations, because λ1^N → 0 and λ2^N → 0. We call eigenvalues in the range 0 < λ < 1 irrelevant, and the fixpoint is stable. Eigenvalues with λ > 1 are termed relevant, because the iterates move away from the fixpoint along the direction specified by the corresponding eigenvector. Eigenvalues λ = 1 are termed marginal — there is no movement along this direction.
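A small sketch illustrates this classification: for an assumed linearization matrix (the numbers below are purely illustrative, not derived from any particular mapping), the eigenvalues immediately tell us which directions are relevant and which are irrelevant.

from pylab import *
from numpy.linalg import eig
Lam = array([[1.6,0.3],
             [0.2,0.7]])  # Hypothetical linearized mapping
lam, vec = eig(Lam)
print(lam)  # One eigenvalue > 1 (relevant), one < 1 (irrelevant)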
Let us look at the case when λ1 > 1 > λ2 , which corresponds to what
we will call a simple critical point. (For a simple critical point, there is
only one relevant eigenvalue, and all other eigenvalues are irrelevant.)
This corresponds to a stable behavior in the direction x2 , and an unstable
behavior in the x1 direction. That is, the behavior is like a saddle point,
as illustrated in Fig. 7.13. This is consistent with the picture of a critical
line. The flow along the line corresponds to the stable direction, and the
flow normal to the line corresponds to the unstable direction, which is
the natural generalization of the behavior we found in one dimension.
Therefore any point which is originally close to the line will first flow towards the fixpoint (p∗, x∗), before it flows out in the direction of the relevant eigenvector.
Let us now study the behavior close to the critical line in detail for a system with λ1 > 1 > λ2. We notice that the correlation length ξ is infinite along the whole critical line, because it does not change by iterations along the critical line. That is, we have just a single fixpoint, but infinitely many critical points corresponding to a critical line. Let us start at a point (p0, 1) close to the critical line, and perform renormalization in order to find the functional shape of ξ and the exponent ν. After k iterations, the point has moved close to the fixpoint, just before it is expelled from the fixpoint. We can therefore write
Fig. 7.13 Illustration of the flow around the unstable saddle point corresponding to the fixpoint (p∗, x∗). The black line shows the critical line, on which the correlation length is infinite, ξ = ∞. Below the critical line, renormalization will lead to the trivial fixpoint at (p, x) = (0, 0), as illustrated by the green lines. Above the line, renormalization will lead to the fixpoint at (p, x) = (1, 1).

" #
p(k) − p∗
= a1 x 1 + a2 x 2 . (7.68)
x(k) − x∗

Since the iteration point is close to the fixpoint, we will assume that we
can use the linear expansion around the fixpoint to address the behavior
of the system. After a further l iterations we assume that we are still in
the linear range, and the renormalized position in phase-space is
" #
p(k+l) − p∗
= λl1 a1 x1 + λl2 a2 x2 . (7.69)
x(k+l) − x∗

We stop the renormalization procedure at l = l∗ when λ1^l a1 ≃ 0.1 (or some other small value that we can choose). That is

λ1^{l∗} a1 ≃ 0.1 .   (7.70)

The correlation length for this number of iterations is

ξ^{(k+l∗)} = ξ(p0, 1)/b^{(k+l∗)} .   (7.71)

We have therefore found an expression for the correlation length in the point (p0, 1):

ξ(p0, 1) = ξ(a1 = 0.1) b^{(k+l∗)} ,   (7.72)

where the value ξ(a1 = 0.1) is a constant due to the way we have chosen l∗. The value for l∗ is

l∗ = ln(0.1/a1)/ln λ1 .   (7.73)

We have therefore found that the correlation length in the original point (p0, 1) is

ξ(p0, 1) = b^k b^{ln(0.1/a1)/ln λ1} = b^k (0.1/a1)^{ln b/ln λ1} .   (7.74)

We can express this further as:

ξ(p0, 1) ∝ (1/a1)^{ln b/ln λ1} = (1/a1)^{ν} = a1^{−ν} ,   (7.75)

where ν = ln b/ln λ1.
a1 a1
Now, what is a1? This is the value of a1 at the original point, (p0, 1), which we can relate to the critical threshold pc for pure site percolation:

a1 = a1(p0, 1) = a1(pc + (p0 − pc))   (7.76)
≃ a1(pc) + a1′(pc)(p0 − pc)   (7.77)
= A(p0 − pc) ,   (7.78)

where we have done a Taylor expansion around pc. We have used that a1(pc, 1) = 0, since this is a point on the critical line, which flows into the fixpoint, and A = a1′(pc). If we put this relation back into (7.74), we get

ξ(p0, 1) ∝ a1^{−ν} ∝ (p0 − pc)^{−ν} .   (7.79)

We have therefore shown by renormalization arguments that ξ has a power-law behavior with exponent ν. However, we can make a similar argument starting at a point (1, q0) close to the critical point qc. That is, we could start from a pure bond percolation problem, and we would end up with a similar relation for the correlation length:

ξ ∝ |q0 − qc|^{−ν} ,   (7.80)

where the exponent ν depends only on λ1.

We have therefore shown that the exponent ν is the same in these two cases. This is an example of universality. Both pure site and pure bond percolation lead to a power-law behavior for the correlation length ξ with the same power-law exponent ν. We can also use similar arguments to argue that the critical exponent ν is the same below and above the percolation threshold.

7.4 (Advanced) Fragmentation

We will use the concepts and tools we have developed so far to address
several problems of interest. First, let us address fragmentation: a large
body that is successively broken into smaller part due to fracturing.
There can be many processes that may induce and direct the fracturing
of the grain. For example, the fracturing may depend on an external load placed on the grain, on a rapid change in temperature in the grain, on a high-amplitude sound wave propagating through the grain, or on stress-corrosion or chemical decomposition processes. Typical examples of fragment patterns are shown in Fig. 7.14.

Fig. 7.14 Illustration of a deterministic fragmentation model. The shaded areas indicate
the regions that will not fragment any further. That is, this drawing illustrates the case
of f = 3/4.

Why did I choose D to denote this exponent? Let us look at the scaling properties of the structure generated by these iterations. Let us first assume that we describe the system purely geometrically, and that we are interested in the geometry of the regions that have fragmented. We will therefore assume that areas that are no longer fracturing are removed, and we are studying the mass that is left by this process. Let us start at a length scale ℓn, where the mass of our system is mn, and let us find what the mass will be when the length is doubled. For f = 3/4 we can then generate the new cluster by placing three of the original clusters into three of the four cells in the two-by-two square, as illustrated in Fig. 7.15. The rescaling of mass and length is therefore: ℓ_{n+1} = 2ℓn and m_{n+1} = 3mn. Similarly, for arbitrary f, the relations are ℓ_{n+1} = 2ℓn and m_{n+1} = 4f mn. As we found in section 5.5, this is consistent with a power-law scaling between the mass and length of the set,

m(L) = m0 (L/L0)^D ,   (7.81)

where D = ln 3/ln 2 is the fractal dimension of the structure. The value for the case of general f is similarly D = ln(4f)/ln 2.
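More generally, one iteration rescales ℓ → 2ℓ and m → 2^d f m, so that D = ln(2^d f)/ln 2. A two-line sketch evaluates this for the two-dimensional example above and for the three-dimensional rule discussed below:

from pylab import *
def fragdim(f,d):
    return log(2**d*f)/log(2)  # D = ln(2^d f)/ln 2
print(fragdim(3.0/4.0,2))  # ln 3/ln 2 ~ 1.585
print(fragdim(6.0/8.0,3))  # ln 6/ln 2 ~ 2.585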

Fig. 7.15 Illustration of construction by length and mass rescaling. Three instances of the fully developed structure with mass mn and length ℓn are used to generate the same structure at length ℓ_{n+1} = 2ℓn and with mass m_{n+1} = 3mn. The mass corresponds to the mass of the regions that have not stopped fragmenting.

Remember that we have now calculated the mass dimension of the part of the system that is present, that is, the part of the system that is still fragmenting into smaller pieces. The mass dimension of the part of the system that is no longer fragmenting should be D′ = d, which is the fractal dimension of the "dust" left by the fragmentation processes.

The methodology that we have introduced to describe fragmentation here is consistent with the argument of Steacy and Sammis [38] for the grain size distribution in fault gouges. They argue that during the fragmentation process, two grains of the same size cannot be nearest neighbors without fragmenting. There may be various physics arguments for this assumption, but we will not discuss them in detail here. If this

argument is applied in a simple cubic three-dimensional lattice, the remaining fragments will look like Fig. 7.16. However, we realize that this is identical to the fragmentation model introduced here, because features such as the size distribution (and the mass dimension of the unfractured grains) do not depend on where the remaining grains are placed in space, only on the relative density of unfractured grains in each generation. The model of Sammis et al. therefore corresponds to the fragmentation model with f = 6/8 and spatial dimensionality d = 3. We have therefore found the prediction D = ln 6/ln 2 ≃ 2.58 for the grain size distribution in fault gouges.

Fig. 7.16 Illustration of the fragmentation model. In each iteration, each cubical grain is
divided into 8 identical smaller cubes. The fundamental rule for the model is that if there
are two neighboring grains of the same size, one of them will fracture. In the figure we
have shaded the regions that are not fragmented in this process for the first few steps of
the iterations.

It is important to realize that the argument of Sammis et al. depends on the dimensionality of the system and on the lattice used. For example, in two dimensions the argument leads to a fractal dimension D = 1 for a square system (corresponding to a line), whereas a triangular lattice produces a dimension between 1 and 2. We leave it as an exercise for the reader to find the dimension in this case.

7.5 Exercises

Exercise 7.1: Renormalization of nnn-model


a) Develop a renormalization scheme for a two-dimensional site perco-
lation system with next-nearest neighbor connectivity. That is, list the
16 possible configurations, and determine what configuration they map
onto in the renormalized lattice.
b) Find the renormalized occupation probability p0 = R(p).
c) Plot R(p) and f (p) = p.

d) Find the fixpoints p∗ so that R(p∗ ) = p∗ .


e) Find the rescaling factor Λ = R′(p∗).
f) Determine the exponent ν = ln b/ln Λ.
g) How can we improve the estimates of pc and ν?

Exercise 7.2: Renormalization of three-dimensional site


percolation model
a) Find all 2^8 = 256 possible configurations for the 2 × 2 × 2 renormalization cell for three-dimensional site percolation.
b) Determine a renormalization scheme - what configurations map onto
an occupied site?
c) Find the renormalized occupation probability p0 = R(p).
d) Plot R(p) and f (p) = p.
e) Find the fixpoints p∗ so that R(p∗ ) = p∗ .
f) Find the rescaling factor Λ = R′(p∗).
g) Determine the exponent ν = ln b/ln Λ.

Exercise 7.3: Renormalization of three-dimensional bond


percolation model
In this exercise we will develop an H-cell renormalization scheme for
bond percolation in three dimensions. The three-dimensional H-cell is
illustrated in Fig. 7.17.
a) Find all 2^12 possible configurations for this H-cell.
b) Determine a renormalization scheme - what configurations map onto
an occupied site?
c) Find the renormalized occupation probability p0 = R(p).
d) Plot R(p) and f (p) = p.
e) Find the fixpoints p∗ so that R(p∗ ) = p∗ .
f) Find the rescaling factor Λ = R′(p∗).
g) Determine the exponent ν = ln b/ln Λ.
Fig. 7.17 Illustration of the 3d H-cell.

Exercise 7.4: Numerical study of renormalization


Use the following program to study the renormalization of a given sample
of a percolation system.
# Coarsening procedure
from pylab import *
from scipy.ndimage import measurements

def coarse(z,f):
    # The original array is z.
    # The transfer function f is given as a vector with 16 entries:
    # f applied to a two-by-two region returns the renormalized value.
    #
    # The various indices of f correspond to the following
    # configurations of the two-by-two region that is renormalized,
    # where X marks a present site and 0 marks an empty site:
    #
    # 0 00    4 00    8 00   12 00
    #   00      X0      0X      XX
    #
    # 1 X0    5 X0    9 X0   13 X0
    #   00      X0      0X      XX
    #
    # 2 0X    6 0X   10 0X   14 0X
    #   00      X0      0X      XX
    #
    # 3 XX    7 XX   11 XX   15 XX
    #   00      X0      0X      XX
    #
    nx = shape(z)[0]
    ny = shape(z)[1]
    if (nx%2==1): # Must be an even number
        raise ValueError('nx must be even')
    if (ny%2==1): # Must be an even number
        raise ValueError('ny must be even')
    nx2 = int(nx/2)
    ny2 = int(ny/2)
    zz = zeros((nx2,ny2),float) # Generate return matrix
    for iy in range(0,ny,2):
        for ix in range(0,nx,2):
            x = z[ix,iy]*1 + z[ix,iy+1]*2 + \
                z[ix+1,iy]*4 + z[ix+1,iy+1]*8
            xx = f[int(x)]
            zz[int((ix+1)/2),int((iy+1)/2)] = xx
    return zz

# Example of use of the coarsening procedure


z = (rand(8,8)<0.58)*1.0
# Set up array for transformation f
f = [0,0,0,1,0,0,0,1,0,0,0,1,1,1,1,1]
lz,nz = measurements.label(z)
zz = coarse(z,f)
lzz,nzz = measurements.label(zz)
zzz = coarse(zz,f)
lzzz,nzzz = measurements.label(zzz)
zzzz = coarse(zzz,f)
lzzzz,nzzzz = measurements.label(zzzz)

# Plot the results


subplot(2,2,1)
imshow(lz)
axis(’equal’)
subplot(2,2,2)
imshow(lzz)
axis(’equal’)
subplot(2,2,3)
imshow(lzzz)
axis(’equal’)
subplot(2,2,4)
imshow(lzzzz)
axis(’equal’)

Perform successive iterations for p = 0.3, p = 0.4, p = 0.5, p = pc ,


p = 0.65, p = 0.70, and p = 0.75, in order to understand the instability
of the fixpoint at p = pc .
8 Subset geometry

So far, we have studied the geometry of the percolation system. Now, we


will gradually address the physics of processes that occur in a percolation
system. We have addressed one physics-like property of the system, the
density of the spanning cluster, and we found that we could build a theory
for the density P as a function of the porosity (occupation probability)
p of the system. In order to address other physical properties, we need
to have a clear description of the geometry of the percolation system
close to the percolation threshold. In this chapter, we will develop a
simplified geometric description that will be useful, indeed essential,
when we discuss physical processes in disordered media. We will introduce
various subsets of the spanning cluster — sets that play roles in specific
physical processes. We will start by introducing singly connected bonds,
the backbone and dangling ends and provide a simplified image of the
spanning cluster in terms of the blob model for the percolation system
[9, 2, 16, 34].

8.0.1 Singly connected bonds


We will start with an example of a subset of the spanning cluster, the
set of singly connected sites (or bonds). This will demonstrate what we
mean by a subset and how the subset is connected to a physical problem.


Singly connected site


A singly connected site is a site with the property that if it is
removed, the spanning cluster will no longer be spanning.

We can relate this to a physical property: If we study fluid flow in the


spanning cluster, all the fluid has to go through the singly connected
sites. These sites are also often referred to as red sites, because if we were
studying a set of random resistors, the highest current would have to go
through the singly connected bonds, and they would therefore heat up
and become “red”. Several examples of subsets of the spanning cluster,
including the singly connected bonds, are shown in Fig. 8.1.

Fig. 8.1 Illustration of the spanning cluster, the singly connected bonds (red), the
backbone (blue), and the dangling ends (yellow) for a 256 × 256 bond percolation system
at p = pc . (Figure from Martin Søreng).

Scaling hypothesis. We have learned that the spanning cluster may be described by the mass scaling relation M ∝ L^D, where D is termed the fractal dimension of the spanning cluster. Here, we will make a daring hypothesis, which we will also substantiate: We propose that subsets of the spanning cluster obey similar scaling relations.

For example, we propose that the mass of the singly connected sites, MSC, has the scaling form

MSC ∝ L^{DSC} ,   (8.1)

where we call the dimension DSC the fractal dimension of the singly connected sites.

Scaling argument. Because the set of singly connected sites is a subset of the spanning cluster, we know that MSC ≤ M. It therefore follows that

DSC ≤ D .   (8.2)

Generalization to other subsets. Based on this simple example, we will generalize the approach to other subsets of the spanning cluster. However, first we will introduce a new concept: a self-avoiding path on the spanning cluster.

8.1 Self-avoiding paths on the cluster

The study of percolation is the study of connectivity, and many of the physical properties that we are interested in depend on various forms of the connecting paths between the two opposite edges of the spanning cluster. We can address the structure of connected paths between the edges by studying self-avoiding paths (SAPs). A Self-Avoiding Path (SAP) is a set of connected sites corresponding to the path of a walk on the spanning cluster that does not intersect itself going from one side to the opposite side.

8.1.1 Minimal path


The shortest path between the two edges is called the shortest SAP between the two edges. (Notice that there may be more than one path that satisfies this criterion. We choose one of these paths randomly.) We call this the minimal path, with a length Lmin. The length here refers to the number of sites in the path, that is, we can also call this the mass of the path, Mmin = Lmin. We will use the mass instead of the length in the following to describe the paths.

Scaling hypothesis for the minimal path. We assume that the mass of the minimal path also scales with the system size according to the scaling form:

Mmin ∝ L^{Dmin} ,   (8.3)

where we have introduced the scaling exponent Dmin of the minimal path.

8.1.2 Maximum and average path


Similarly, we call the longest SAP between the two edges the longest path, with a mass Mmax. Again, we assume that the mass has a scaling form, Mmax ∝ L^{Dmax}. We notice that Mmin ≤ Mmax. Consequently, a similar relation holds for the exponents: Dmin ≤ Dmax.

We also introduce the term the average path, meaning the average mass (length) of all possible SAPs going between opposite sides of the system, ⟨MSAP⟩ ∝ L^{DSAW}. The dimension DSAW will lie between the dimensions of the minimal and the maximal path.

8.1.3 Backbone
Intersection of all self-avoiding paths. The notion of SAPs can also be used to address the physical properties of the cluster, as we saw for the singly connected bonds. The set of singly connected bonds is the intersection of all SAPs connecting the two sides. That is, the singly connected bonds are the set of points that any path must go through in order to connect the two sides. From this definition, we notice that DSC < Dmin, and as we will see further on, DSC = 1/ν, which is smaller than 1 for two-dimensional systems.
Union of all self-avoiding paths. Another useful set is the union of all
SAPs that connect the two edges of the cluster. This set is called the
backbone with dimension DB .

Backbone
The backbone is the union of all self-avoiding paths on the spanning
cluster that connect two opposite edges.

This set has a simple physical interpretation for a random porous material, since it corresponds to the sites that are accessible to fluid flow if a pressure difference is applied across the material. The remaining sites are called dangling ends. The backbone consists of all the sites that have at least two different paths leading into them, one path from each side of the cluster. The remaining sites only have one (self-avoiding) path leading into them, and we call this set of sites the dangling ends. The spanning cluster consists of the backbone plus the dangling ends, as illustrated in Fig. 8.2. The dangling ends are therefore pieces of the cluster that can be cut away by the removal of a single bond.


Fig. 8.2 Illustration of the spanning cluster consisting of the backbone (red) and the
dangling ends (blue) for a 512 × 512 site percolation system for (a) p = 0.58, (b) p = 0.59,
and (c) p = 0.61.

We have arrived at the following hierarchy of exponents describing
various subsets of paths through the cluster:

D_SC ≤ D_min ≤ D_SAW ≤ D_max ≤ D_B ≤ D ≤ d .    (8.4)

8.1.4 Scaling of the dangling ends


Generally, we will find that the dimension of the backbone, DB , is
smaller than the dimension of the spanning cluster. For example, in
two dimensions, we find that D_B ≈ 1.6, whereas D ≈ 1.89. This has
implications for the relative size of the backbone and the dangling ends.
The spanning cluster consists of the backbone and the dangling ends.
Therefore, the mass of the spanning cluster, M, must equal the sum of
the masses of the backbone and the dangling ends, M = M_B + M_DE.
Since we know that M ∝ L^D and M_B ∝ L^{D_B}, we find that

M_DE = M − M_B = M_0 L^D − M_{0,B} L^{D_B} .    (8.5)



To see what happens when L → ∞, we divide by M:

M_DE/M = 1 − (M_{0,B}/M_0) L^{D_B − D} = 1 − c L^{D_B − D} .    (8.6)

Since D_B < D, the term c L^{D_B − D} vanishes, and this fraction approaches
one as L approaches infinity. Consequently, we have found that M_DE ∝
M ∝ L^D. This also implies that as the system size goes to infinity, most
of the mass is in the dangling ends and the backbone occupies a smaller
and smaller portion of the total mass of the system.

8.1.5 Argument for the scaling of subsets


We can provide a better argument for why the various subsets should
scale with the system size L with various exponents. We notice that the
following relation between the masses must be true:

L^1 ≤ M_min ≤ M_SAW ≤ M_max ≤ M_BB ≤ M ≤ L^d ,    (8.7)

where the first inequality, L^1 ≤ M_min, follows because even the minimal
path must be at least of length L to go from one side to the opposite
side.
Now, if this is to be true for all values of L, it can be argued that
because all the masses are bounded by the two power laws L^1 and L^d,
the scaling of the intermediate masses, M_x, must also be power laws,
M_x ∝ L^{D_x}, with the hierarchy of exponents given in (8.4).

8.1.6 Blob model for the spanning cluster


Let us now try to formulate our geometric description of the spanning
cluster into a model of the spanning cluster [36]. We have found that the
spanning cluster can be subdivided first into two parts: the backbone
and the dangling ends. The backbone may again be divided into two
parts: a set of blobs where there are several parallel paths, and a set of
sites, the singly connected sites, that connect the blobs to each other and
the blobs to the dangling ends. Thus, we have ended up with a model with
three components:
• the dangling ends,
• a set of blobs with several parallel paths,
• the singly connected points, connecting the blobs to each other and
the blobs to the dangling ends.
Each of the blobs and the dangling ends will again have a similar substruc-
ture of dangling ends, blobs with parallel paths, and singly connected
bonds, as illustrated in Fig. 8.3. This cartoon image of the clusters will
provide useful intuition about the geometrical structure of
percolation clusters when we address the physics of disordered systems
in the next chapters.

Fig. 8.3 Illustration of the hierarchical blob-model for the percolation cluster showing
the backbone (bold), singly connected bonds (red) and blobs (blue).

8.1.7 Mass-scaling exponents for subsets of the spanning cluster
The exponents can be calculated either by numerical simulations, where
the masses of the various subsets are measured as a function of system
size at p = pc , or by the renormalization group method. Numerical results
based on computer simulations are listed in the following table.

d   D_SC   D_min   D_max   D_B   D      D_DE
2   3/4    1.1     1.5     1.6   1.89   1.89 (= D)

8.2 Renormalization calculation

We will now use the renormalization group approach to address the


scaling exponent for various subsets of the spanning cluster at p = pc . For
this, we will here use the renormalization procedure for bond percolation
on a square lattice in two dimensions following Hong and Stanley [17],
where we have found that the renormalization procedure produces the
exact result for the percolation threshold, pc = p∗ = 1/2. This is a
fixpoint of the mapping.
Our strategy will be to assume that all the bonds have a mass M = 1
in the original lattice, and then find the mass M′ in the renormalized
lattice, when the length has been rescaled by b. For a property that
displays a self-similar scaling, we will expect that

M′ ∝ b^{D_x} M ,    (8.8)

where D_x denotes the exponent for the particular subset we are looking
at. We can use this to determine the fractal exponent D_x from

D_x = ln(M′/M) / ln b .    (8.9)
We will do this by calculating the average mass of the H-cell: we take
the mass of the subset we are interested in for each configuration,
M_x(c), multiply it by the probability of that configuration, and sum
over all configurations:

⟨M⟩ = Σ_c M_x(c) P(c) .    (8.10)

We have now calculated the average mass in the original 2 × 2 lattice, and
this should correspond to the average renormalized mass, ⟨M′⟩ = p′M′,
which is the mass of the renormalized bond, M′, multiplied by the
probability p′ for that bond to be present. That is, we will find M′ from

p′M′ = Σ_c M_x(c) P(c) .    (8.11)

We will study our system at the nontrivial fixpoint p = p∗ = 1/2 = pc.
The spanning configurations c for bond renormalization in two dimensions
are shown together with their probabilities and the masses of various
subsets in the following table:

c   P(c)            M_SC   L_min   L_avg   L_max   M_BB   M
1   p^5 (1−p)^0     0      2       2.5     3       5      5
2   p^4 (1−p)^1     0      2       2       2       4      4
3   4 p^4 (1−p)^1   1      2       2.5     3       4      4
4   2 p^3 (1−p)^2   2      2       2       2       2      3
5   2 p^3 (1−p)^2   3      3       3       3       3      3
6   4 p^3 (1−p)^2   2      2       2       2       2      3
7   2 p^2 (1−p)^3   2      2       2       2       2      2

The resulting averages at p = pc = 1/2 and the exponents (with b = 2 and p′ = 1/2) are:

Subset   ⟨M_x⟩       D_x = ln(⟨M_x⟩/p′)/ln b   D_x      D_x (numerical)
M_SC     26/2^5      ln(13/8)/ln 2             0.7004   3/4
L_min    34/2^5      ln(17/8)/ln 2             1.0875   1.13
L_avg    36.5/2^5    ln(36.5/16)/ln 2          1.1898   —
L_max    39/2^5      ln(39/16)/ln 2            1.2854   1.4
M_BB     47/2^5      ln(47/16)/ln 2            1.5546   1.6
M        53/2^5      ln(53/16)/ln 2            1.7279   91/48

Table 8.1 Renormalization estimates of the exponents describing various subsets of the
spanning cluster, defined using the set of self-avoiding paths going from one side to the
opposite side of the cluster. The last column shows the exponents found from numerical
simulations in a two-dimensional system.
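The entries in this table are straightforward to verify. The following minimal sketch recomputes the averages and the renormalization estimates of the exponents from the configuration values and degeneracies in Table 8.1 (at p = pc = 1/2 every five-bond configuration has probability (1/2)^5):

from math import log

# Rows of Table 8.1: (degeneracy, [M_SC, L_min, L_avg, L_max, M_BB, M])
configs = [
    (1, [0, 2, 2.5, 3, 5, 5]),
    (1, [0, 2, 2.0, 2, 4, 4]),
    (4, [1, 2, 2.5, 3, 4, 4]),
    (2, [2, 2, 2.0, 2, 2, 3]),
    (2, [3, 3, 3.0, 3, 3, 3]),
    (4, [2, 2, 2.0, 2, 2, 3]),
    (2, [2, 2, 2.0, 2, 2, 2]),
]
b = 2                 # length rescaling factor of the H-cell
p_prime = 0.5         # renormalized occupation probability at the fixpoint
P = 0.5**5            # probability of each configuration at p = 1/2
names = ["D_SC", "D_min", "D_SAW", "D_max", "D_B", "D"]
for k, name in enumerate(names):
    avg = sum(g*P*m[k] for g, m in configs)   # <M_x> from eq. (8.10)
    Dx = log(avg/p_prime)/log(b)              # eq. (8.9) with M = 1, M' = <M_x>/p'
    print(name, avg, Dx)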

This use of the renormalization group method to estimate the expo-


nents demonstrates the power of the renormalization arguments. Similar
arguments will be used to address other properties of the percolation
system.

8.3 (Advanced) Singly connected bonds and ν

Let us also use this technique to develop an interpretation of the exponent


ν and how it is related to the singly connected sites (See Coniglio [8]
for a detailed argument). Because the exponent ν can be found from
the renormalization equation at the fixpoint, which corresponds to the
percolation threshold, it is reasonable to assume that the exponent ν can
be derived from some property of the fractal structure at p = pc .
Let us address bond percolation in two dimensions, described by the
occupation probability p for bonds, and let us introduce an additional
variable 1 − π: the probability to remove an occupied bond from the
system. We will consider the percolation problem to be described by
these two values. When we are at p = pc , we would expect that for any
1 − π > 0, the spanning cluster will break into a set of unconnected

clusters. The only fixpoint value when p = pc is therefore for 1 − π = 0,


that is, π = 1.
Can we derive a recursion relation for 1 − π? For our renormalized
cell of size b, the probability to break connectivity between the end
nodes should be 1 − π′. This corresponds to the probability that the
renormalized bond is broken, because after renormalization there is only
one bond in the box of size b. We write the recursion relation as a Taylor
expansion around the fixpoint 1 − π = 0, or π = 1:

1 − π′ = A(1 − π) + O((1 − π)^2) ,    (8.12)

where A is given as

A = (∂π′/∂π)|_{π=1} .    (8.13)
We realize that the new p in the system after the introduction of π is given
by p = πpc when the ordinary percolation system is at pc. Similarly, the
renormalized occupation probability is p′ = π′pc, and we have therefore
found that

A = (∂π′/∂π)|_{π=1} = (∂p′/∂p)|_{p=pc} = b^{1/ν} .    (8.14)

8.4 Deterministic fractal models

We have found that we can calculate the behavior of infinite-dimensional


and one-dimensional systems exactly. However, for finite dimensions such
as for d = 2 or d = 3 we must rely on numerical simulations and renor-
malization group arguments to determine the exponents and the behavior
of the system. However, in order to learn about physical properties in
systems with scaling behavior, we may be able to construct simpler
models that contain many of the important features of the percolation
cluster. For example, we may be able to introduce iterative
fractal structures that reproduce many of the important properties of
the percolation cluster at p = pc, but that are deterministic rather than
random. The idea is that we can use such a system to study other
properties of the physics on fractal structures.
An example of an iterative fractal structure that has many of the
important features of the percolation clusters at p = pc is the Mandelbrot-
Given curve. The curve is generated by the iterative procedure described
in Fig. 8.4. Through each generation, the length is rescaled by a factor

b = 3, and the mass is rescaled by a factor 8. That is, for generation l,
the mass is m(l) = 8^l, and the linear size of the cluster is L(l) = 3^l. If
we assume a scaling of the form m = L^D, we find that

D = ln 8 / ln 3 ≈ 1.89 .    (8.15)
This is surprisingly similar to the fractal dimension of the percolation clus-
ter. We can also look at other dimensions, such as for the singly connected
bonds, the minimum path, the maximum path and the backbone.

Fig. 8.4 Illustration of the first three generations of the Mandelbrot-Given curve. The length
is rescaled by a factor b = 3 for each iteration, and the mass of the whole structure is
increased by a factor of 8. The fractal dimension is therefore D = ln 8/ln 3 ≈ 1.89.

Let us first address the singly connected bonds. In the zeroth gener-
ation, the system is simply a single bond, and the length of the singly
connected bonds, L_SC, is 1. In the first generation, there are two bonds
that are singly connecting, and in the second generation there are four
bonds that are singly connecting. The general relation is

L_SC = 2^l ,    (8.16)

where l is the generation of the structure. The dimension, D_SC, of the
singly connected bonds is therefore D_SC = ln 2/ln 3 ≈ 0.63, which
should be compared with the exact value D_SC = 3/4 for two-dimensional
percolation.
The minimum path will for all generations be the path going straight
through the structure, and the length of the minimal path will therefore
be equal to the length of the structure. The scaling dimension Dmin is
therefore Dmin = 1.
The maximum path increases by a factor 5 for each iteration. The
dimension of the maximum path is therefore D_max = ln 5/ln 3 ≈ 1.465.

We can similarly find that the mass of the backbone increases by a


factor 6 for each iteration, and the dimension of the backbone is therefore
D_B = ln 6/ln 3 ≈ 1.631.
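These dimensions all follow directly from the mass rescaling factors of the subsets; a minimal sketch collecting them:

from math import log

b = 3  # length rescaling per generation of the Mandelbrot-Given curve
# mass rescaling factor per generation for the whole curve and its subsets
factors = {"D": 8, "D_SC": 2, "D_min": 3, "D_max": 5, "D_B": 6}
for name, factor in factors.items():
    print(name, log(factor)/log(b))
# D = 1.893, D_SC = 0.631, D_min = 1.0, D_max = 1.465, D_B = 1.631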
The Mandelbrot-Given curve can be optimized by selecting the lengths
Li illustrated in Fig. 8.5 in the way that provides the best estimate for
the exponents of interest.


Fig. 8.5 The Mandelbrot-Given construction can be optimized by choosing particular


values for the lengths l1, L2, L3, l4, and L4. Here, L3 gives the length around the whole
curved path. The choice of l1 and l4 does not affect the scaling properties we have been
addressing, and they are therefore not relevant parameters. The ordinary Mandelbrot-Given
curve corresponds to b = 3, L1 = 2, L2 = 1, L3 = 3, L4 = 2, l1 = 1, and l4 = 1.

This deterministic iterative fractal can be used to perform quick


calculations of various properties on a fractal system, and may also serve
as a useful hierarchical lattice on which to perform simulations when we
are studying processes occurring on a fractal structure.

8.5 Lacunarity

The fractal dimension describes the scaling properties of structures such


as the percolation cluster at p = pc. However, structures that have the
same fractal dimension may have very different appearances. As an
example, let us study several variations of the Sierpinski gasket introduced
in section 5.5. As illustrated in Fig. 8.6, we can construct several rules for
the iterative generation of the fractal that all result in the same fractal
dimension, but have different visual appearances. The fractal dimension
is D = ln 3/ln 2 for both of the examples in Fig. 8.6, but by increasing
the number of triangles that are used in each generation, the structures
become more homogeneous. How can we quantify this difference?

(Generations of version 1 in the figure: b = 1, 2, 4, 8, 16 with M = 1, 3, 9, 27, 81.)
Fig. 8.6 Two versions of the Sierpinski gasket. In version 1, the next generation is made
from 3 of the structures from the last generation, and the spatial rescaling is by a factor
b = 2. In version 2, the next generation is made from 9 of the structures from the last
generation, and the spatial rescaling is by a factor b = 4. The resulting fractal dimension
is D2 = ln 9/ln 4 = ln 3^2/ln 2^2 = ln 3/ln 2 = D1. The two structures therefore have the
same fractal dimension. However, version 1 has larger fluctuations than version 2.

In order to quantify this difference, Mandelbrot invented the concept
of lacunarity. We measure lacunarity from the distribution of box
masses. We can characterize and measure the fractal dimension of a fractal
structure using box counting, as explained in section 5.5. The structure,
such as the percolation cluster, is divided into boxes of size ℓ. In each
box, i, there will be a mass m_i(ℓ). The fractal dimension was found by
calculating the average mass per box of size ℓ:

⟨m_i(ℓ)⟩_i = A ℓ^D .    (8.17)

However, there will be a full distribution of masses m(ℓ) in the boxes,
characterized by a distribution P(m, ℓ), which gives the probability for
mass m in a box of size ℓ. We can characterize this distribution by its
moments:

⟨m^k(ℓ)⟩ = A_k ℓ^{kD} ,    (8.18)

where this particular scaling form implies that the structure is unifractal:
the scaling exponents for all the moments are linearly related.
For a unifractal structure, we expect the distribution of masses to
have the scaling form

P(m, ℓ) = ℓ^x f(m/ℓ^D) ,    (8.19)
where the scaling exponent x is yet undetermined. In this case, the
moments can be found by integration over the probability density:

⟨m^k⟩ = ∫ P(m, ℓ) m^k dm    (8.20)
      = ∫ m^k ℓ^x f(m/ℓ^D) dm    (8.21)
      = ℓ^{kD + x + D} ∫ (m/ℓ^D)^k f(m/ℓ^D) d(m/ℓ^D)    (8.22)
      = ℓ^{D(k+1) + x} ∫ u^k f(u) du .    (8.23)

We can determine the unknown scaling exponent x from the scaling of
the zeroth moment, that is, from the normalization of the probability
density: ⟨m^0⟩ = 1 implies that D(0 + 1) + x = 0, and therefore that
x = −D. The scaling ansatz for the distribution of masses is therefore

P(m, ℓ) = ℓ^{−D} f(m/ℓ^D) .    (8.24)
The moments can then be written as

⟨m^k⟩ = ℓ^{D(k+1)−D} ∫ u^k f(u) du = A_k ℓ^{kD} ,    (8.25)

as we assumed above.
We therefore see that the distribution of masses is characterized by the
distribution P (m, `), which in turn is described by the fractal dimension,
D, and the scaling function f (u), which gives the shape of the distribution.
The distribution of masses can be broad, which would correspond
to “large holes”, or narrow, which would correspond to a more uniform
distribution of mass. The width of the distribution can be characterized
by the mean-square deviation of the mass from the average mass:

Δ = (⟨m^2⟩ − ⟨m⟩^2)/⟨m⟩^2 = (A_2 − A_1^2)/A_1^2 .    (8.26)

This number characterizes an aspect of the mass distribution beyond the
scaling relation itself, and it can be used to characterize a fractal set.
For the percolation problem, this number is assumed to be universal,
independent of lattice type, but dependent on the dimensionality.
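As a minimal sketch of how such a measurement could look in practice — the system size, occupation probability, and box sizes below are chosen only for illustration — we can box-count a spanning cluster and estimate Δ for a few box sizes:

from pylab import *
from scipy.ndimage import measurements

L, p = 256, 0.5927
z = rand(L, L) < p
lw, num = measurements.label(z)
perc_x = intersect1d(lw[0, :], lw[-1, :])
perc = perc_x[perc_x > 0]
if len(perc) > 0:
    cluster = (lw == perc[0])
    for ell in [4, 8, 16, 32]:
        n = L // ell
        # Sum the cluster mass in each ell x ell box
        m = cluster[:n*ell, :n*ell].reshape(n, ell, n, ell).sum(axis=(1, 3))
        m = m[m > 0]  # keep only boxes containing cluster sites
        Delta = (mean(m**2) - mean(m)**2)/mean(m)**2  # eq. (8.26)
        print(ell, mean(m), Delta)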

8.6 Exercises

Exercise 8.1: Singly connected bonds


Use the example programs from the text to find the singly connected
bonds.
a) Run the programs to visualize the singly connected bonds. Can you
understand how this algorithm finds the singly connected bonds? Why
are some of the bonds a different color?
b) Find the mass, M_SC, of the singly connected bonds as a function
of system size L for p = pc and use this to estimate the exponent D_SC:
M_SC ∝ L^{D_SC}.
c) Can you find the behavior of P_SC = M_SC/L^d as a function of p − pc?

Exercise 8.2: Left/right-turning walker


We have provided a subroutine and an example program that implements
the left/right-turning walker algorithm. The algorithm works on a given
cluster. From one end of the cluster, two walkers are started. The walkers
can only walk according to the connectivity rules on the lattice. That
is, for a nearest-neighbor lattice, they can only walk to their nearest
neighbors. The left-turning walker always tries to turn left from its
previous direction. If this site is empty, it tries the next-best site, which
is to continue straight ahead. If that is empty, it tries to move right,
and if that is empty, it moves back along the direction it came. The
right-turning walker follows a similar rule, but prefers to turn right in
each step. The first walker to reach the other end of the cluster stops,
and the other walker stops when it reaches this site.
The path of the two walkers is illustrated in the figure below. The
sites that are visited by both walkers constitute the singly connected
bonds. The union of the two walks constitutes what is called the external
perimeter (Hull) of the cluster.
a) Use the following programs to generate and illustrate the singly connected
connected bonds for a 100 × 100 system. Check that the illustrated bonds
correspond to the singly connected bonds.
from pylab import *
from scipy.ndimage import measurements
import numba

Fig. 8.7 Illustration of the left- and right-turning walkers.

@numba.njit(cache=True)
def walk(z):
# Left turning walker
# Returns left: nr of times walker passes a site
# First, ensure that array only has one contact point at
# left and right : topmost points chosen
nx = z.shape[0]
ny = z.shape[1]
i = where(z[0,:] > 0)
ix0 = 0 # Starting row for the walker is always 0
iy0 = i[0][0] # Starting column: first occupied site in row 0
print("Starting walk in x=" + str(ix0) + " y=" + str(iy0))
# First do left-turning walker
directions = zeros((4,2), int)
directions [0,0] = -1 # west
directions [0,1] = 0
directions [1,0] = 0 # south
directions [1,1] = -1
directions [2,0] = 1 # east
directions [2,1] = 0
directions [3,0] = 0 # north
directions [3,1] = 1
nwalk = 1
ix = ix0
iy = iy0
direction = 0 # 0 = west, 1 = south, 2 = east, 3 = north
left = zeros((nx,ny),int)
right = zeros((nx,ny),int)
while (nwalk >0):
left[ix,iy] = left[ix,iy] + 1
# Turn left until you find an occupied site
nfound = 0
while (nfound==0):
direction = direction - 1

if (direction < 0):


direction = direction + 4
# Check this direction
iix = ix + directions[direction,0]
iiy = iy + directions[direction,1]
if (iix >= nx):
nwalk = 0 # Walker escaped
nfound = 1
iix = nx
ix1 = ix
iy1 = iy
# Is there a site here?
elif(iix >= 0):
if(iiy >= 0):
if (iiy < ny):
if (z[iix,iiy]>0): # site present, move here
ix = iix
iy = iiy
nfound = 1
direction = direction + 2
if (direction > 3):
direction = direction - 4
# Then do the right-turning walker
nwalk = 1
ix = ix0
iy = iy0
direction = 1 # 0 = west, 1 = south, 2 = east, 3 = north
while(nwalk >0):
right[ix,iy] = right[ix,iy] + 1
# ix,iy
# Turn right until you find an occupied site
nfound = 0
while (nfound==0):
direction = direction + 1
if (direction > 3):
direction = direction - 4
# Check this direction
iix = ix + directions[direction,0]
iiy = iy + directions[direction,1]
if (iix >= nx):
if (iy >= iy1):
nwalk = 0 # Walker escaped
nfound = 1
iix = nx
# Is there a site here?
elif(iix >= 0):
if(iiy >= 0):
if (iiy < ny):
if (iix < nx):
if (z[iix,iiy]>0): # site present, move here
ix = iix
iy = iiy
nfound = 1
direction = direction - 2

if (direction <0):
direction = direction + 4
return left, right

from pylab import *


from scipy.ndimage import measurements
# Generate spanning cluster (l-r spanning)
lx = 100
ly = 100
p = 0.6
ncount = 0
perc = []
while (len(perc)==0):
ncount = ncount + 1
if (ncount >1000):
print("Couldn’t make percolation cluster...")
break
z=rand(lx,ly)<p
lw,num = measurements.label(z)
perc_x = intersect1d(lw[0,:],lw[-1,:]) # Testing using intersect
perc = perc_x[where(perc_x > 0)]
print("ncount = ",ncount)
if len(perc) > 0:
zz = (lw == perc[0])
# zz now contains the spanning cluster
figure() # Display spanning cluster
imshow(zz, interpolation='nearest', origin='upper')
l,r = walk(zz)
figure()
imshow(l, interpolation='nearest', origin='upper')
figure()
imshow(r, interpolation='nearest', origin='upper')
zzz = l*r # Find points where both l and r are non-zero
figure()
imshow(zzz, interpolation='nearest', origin='upper')
colorbar()
zadd = zz + zzz
figure()
imshow(zadd, interpolation='nearest', origin='upper')
colorbar()

b) Measure the dimension DSC .


c) Modify the programs to find the external perimeter (Hull) of a
spanning cluster in a 100 × 100 system.
d) Measure the dimension DP of the perimeter.
e) (Advanced) Develop a theory for the behavior of PH (p, L), the prob-
ability for a site to belong to the Hull as a function of p and L for
p > pc .

f) (Advanced) Measure the behavior of P_H(p, L) as a function of p for
a 512 × 512 system.
9 Introduction to disorder

We have now developed the tools to address the statistical properties of


the geometry of a disordered system such as a model porous medium:
the percolation system. In this part, we will apply this knowledge to
address physical properties of disordered systems and to study physical
processes in disordered materials.
We have learned that the geometry of a disordered system displays
fractal scaling close to the percolation threshold. Material properties
such as the density of singly connected sites, or the backbone of the
percolation cluster, display self-similar scaling. The backbone is the
part of the spanning cluster that participates in fluid flow. The mass,
MB , of the backbone scales with the system size, L, according to the
scaling relation MB = LDB , where DB is smaller than the Euclidean
dimension. The density of the backbone therefore decreases with system
size. This implies that material properties which we ordinarily would
treat as material constants, depend on the size of the sample. In this
part we will develop an understanding of the origin of this behavior, and
show how we can use the tools from percolation theory to address the
behavior in such systems.
The behavior of a disordered system can in principle always be ad-
dressed by direct numerical simulation. For example, for incompressible,
single-phase fluid flow through a porous material, the effective perme-
ability of a sample can be found to very good accuracy from a detailed
numerical model of fluid flow through the system. However, it is not
practical to model fluid flow down to the smallest scaling in practical
problems such as in oil migration. We would therefore need to extrapolate


from the small scale to the large scale. This process, often referred to
as up-scaling, requires that we know the scaling properties of our system.
We will address up-scaling in detail in this chapter.
We may argue that the point close to the percolation threshold is
anomalous and that any realistic system, such as a geological system,
would typically be far away from the percolation threshold. In this case,
the system will only display an anomalous, size-dependent behavior up
to the correlation length, and over larger lengths the behavior will be
that of a homogeneous material. We should, however, be aware that
many physical properties are described by broad distributions of material
properties, and this will lead to a behavior similar to the behavior close
to the percolation threshold, as we will discuss in detail in this part.
In addition, several physical processes ensure that the system is driven
into or is exactly at the percolation threshold. One such example is
the invasion-percolation process, which gives a reasonable description
of oil-water emplacement processes such as secondary oil migration. For
such systems, the behavior is best described by the scaling theory we
have developed.
In this part, we will first provide an introduction to the scaling of
material properties such as conductivity and elasticity. Then we will
demonstrate how processes occurring in systems with frozen disorder,
such as a porous material, often lead to the formation of fractal structures.
10 Flow in disordered media

We will start our studies of physics in disordered media by addressing


flow, either in the form of incompressible fluid flow in a random, porous
system or in the form of electric current in a random, porous material.
We will first introduce the basic properties of flow of current or fluids, and
then address flow in a percolating system close to pc . We will study the
behavior numerically, develop a scaling theory, and find properties using
the renormalization group approach. Our initial studies will be on the
binary porous medium of the percolation system. However, we can also
extend our results to more general random media, and we demonstrate
how this can be done towards the end of the chapter.

10.0.1 Electrical conductivity and resistor networks


Traditionally, the conductive properties of a disordered material have
been addressed by studying the behavior of random networks of resistors
called random resistor networks [24, 23, 1]. In this case, a voltage V
is applied across the disordered material, such as a bond-percolation
network, and the total current, I, through the sample is measured, giving
the conductance G of the sample as the constant of proportionality
I = GV . (We recall that the current I is the amount of charge flowing
through a given cross-sectional area per unit time).
We remember from electromagnetism that we discern between con-
ductance and conductivity:


• conductance, G, is a property of a specific sample — a given medium


— with specific dimensions
• conductivity, g, is a material property
For an L^d sample in a d-dimensional system, the conductance of a
homogeneous material with conductivity g is

G = L^{d−1} g / L = L^{d−2} g .    (10.1)

It is common in electromagnetism to use σ for conductivity. Here, we will


instead use g to avoid confusion with the exponent σ we introduced pre-
viously for the behavior of s_ξ. The conductance is inversely proportional
to the length of the sample in the direction of flow, and proportional to
the cross-sectional (d−1)-dimensional area. We can understand this by
considering that there are L^{d−1} parallel parts that contribute to the flow.
Conductances add in parallel. In addition, each part has a length
L, and we recall from electromagnetism that conductance decreases with
length.
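For example, in d = 2 the conductance of an L × L sample equals the conductivity, G = L^0 g = g, while in d = 3 the conductance of an L × L × L cube is G = Lg: doubling the edge length of a cube doubles its conductance.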

10.0.2 Flow conductivity of a porous system


We can also use fluid flow in a porous medium as our basic physical system.
In the limit of slow, incompressible flow, the two problems are practically
identical. For fluid flow in a porous medium of length L
and cross-sectional area A, the system is described by Darcy's law, which
provides a relation between the volume of fluid flowing through
a given cross-sectional area, A, per unit time, Φ, and the pressure drop
Δp across the sample:

Φ = (kA/η) (Δp/L) ,    (10.2)
where k is called the permeability of the material and is a property of the
material geometry, and η is the viscosity of the fluid. Again, we would
like a description where k is a material property, and all the information
about the geometry of the sample enters through the length L and the
cross-sectional area A. Generalized to a d-dimensional system, the
relation is

Φ = (k L^{d−1}/(ηL)) Δp = L^{d−2} (k/η) Δp .    (10.3)

From this, we see that the electric conductivity problem in this limit is
the same as the Darcy-flow permeability problem. We will therefore not
discern between the two problems in the following. We will simply call
them flow problems and describe them using the current, I, the conduc-
tivity, g, the conductance G, and the potential V . We will study these
problems on a Ld percolation lattice, using the theoretical, conceptual
and computational tools we have developed so far.

10.1 Conductance of a percolation lattice

Let us first address the conductance of an L^d sample of a percolation


system. The system may be either a site or a bond percolation system,
but many of the concepts we introduce are simpler to explain if we just
consider a bond percolation system.
We will start with a simplified system: The system will be a network of
bonds that are present with probability p. We assume that all bonds have
the same conductance, which we can set to 1 without loss of generality.
The bonds are removed with probability 1 − p, and we model this by
setting the conductance of a removed bond to be zero.

10.1.1 Finding the conductance of the system


The conductance of the L^d sample is found by solving the flow problem
illustrated in Fig. 10.1. A potential difference V is applied across the
whole sample, and we find (measure) the resulting current I. We find
the conductance from Ohm's law (or similarly from Darcy's law for fluid
flow):

I = GV  ⇒  G = I/V .    (10.4)
In general, the conductance G will be a function of p and L: G = G(p, L).
Local potentials and currents. Let us look at the network in more
detail. Fig. 10.1b illustrates a small part of the whole system. The two
adjacent sites i and j are connected by a bond of conductance G_{i,j}. If
the bond is present (with probability p in the percolation system), the
conductance G_{i,j} is 1; otherwise it is zero.
The current from site i to site j is related to the difference in potential
between the two sites:

Fig. 10.1 (a) Illustration of flow through a bond percolation system. The bonds shown
in red are the singly connected bonds: all the flux has to go through these bonds. The
bonds shown in blue are the rest of the backbone: The flow only takes place on the singly
connected bonds and the backbone, the remaining bonds are the dangling ends, which do
not participate in fluid flow. (b) Illustration of the potentials Vi and Vj in two adjacent
sites and the current Ii,j from site i into site j.

I_{i,j} = G_{i,j} (V_i − V_j) ,    (10.5)

where we notice that the current is positive if the potential is higher in


site i than in site j.
Conservation of current. In addition, the continuity condition provides
a conservation equation for the currents: charge (or fluid mass for Darcy
flow) is conserved, and therefore the net current into any point inside
the lattice must be zero. This corresponds to the condition that the sum
of the currents from site i into all its neighboring sites k must be zero:

Σ_k I_{i,k} = 0 .    (10.6)

In electromagnetism this is called Kirchhoff's current rule. We can
rewrite this in terms of the local potentials V_i instead by inserting (10.5)
in (10.6):

Σ_k G_{i,k} (V_i − V_k) = 0 .    (10.7)

This provides us with a set of equations for all the potentials Vi , which
we must solve to find the potentials and hence the currents between all
the sites in a percolation system.
Finding currents and potentials. We can use this to find all the poten-
tials for a percolation system. Let us address a two-dimensional system

of size L × L. The potential in a position (x, y) on the lattice is V (x, y),


where x and y are integers, x = 0, 1, 2, . . . , L − 1 and y = 0, 1, 2, . . . , L − 1.
We denote G_{i,j} as G(x_i, y_i; x_j, y_j). We can then rewrite (10.7) as

G(x, y; x+1, y) (V(x, y) − V(x+1, y)) +    (10.8)
G(x, y; x−1, y) (V(x, y) − V(x−1, y)) +    (10.9)
G(x, y; x, y+1) (V(x, y) − V(x, y+1)) +    (10.10)
G(x, y; x, y−1) (V(x, y) − V(x, y−1)) = 0 .    (10.11)

In order to solve this two-dimensional problem, it is usual to rewrite it
as a one-dimensional system of equations with a single index. The index
i = x + yL uniquely describes a point, so that V(x, y) = V_i. We see that
(x, y) → i, (x+1, y) → i+1, (x−1, y) → i−1, (x, y+1) → i+L, and
(x, y−1) → i−L. We can rewrite (10.11) using this indexing system:

G_{i,i+1}(V_i − V_{i+1}) + G_{i,i−1}(V_i − V_{i−1}) +    (10.12)
G_{i,i+L}(V_i − V_{i+L}) + G_{i,i−L}(V_i − V_{i−L}) = 0 .    (10.13)

This is effectively a set of L^d equations for V_i. In addition, we have
the boundary conditions that V(0, j) = V and V(L−1, j) = 0 for
j = 0, 1, . . . , L−1. This defines the system as a sparse, banded set of
linear equations that can be solved efficiently numerically.

10.1.2 Computational methods


We have now reformulated the conductivity problem on a percolation
lattice to a computational problem that we can solve. We do this by
generating random lattices of size L × L, solve to find the potential
V (x, y), and then study the effective conductivity, G = I/V of the
system by summing up all the currents exiting the system (or entering —
these should be the same).
We can do this by generating a bond-lattice, where the values Gi,j are
either 0 or 1. However, so far all our visualization methods have been
constructed for site lattices. We will therefore study a site lattice, but
instead generate Gi,j between two sites based on whether the sites are
present. We set G_{i,j} for two nearest neighbors to 1 if both
sites i and j are present. Otherwise, that is, if at least one of the sites
is empty, we set G_{i,j} to zero. We assume all the
sites on the left and right boundaries to be present. This is where current
flows in and where the potentials are set. In addition, we assume all the
sites on the top and bottom boundaries to be empty. There is therefore no
flow in from the top or bottom, and we only study percolation from
the left to the right.
We have written subroutines to help you with these studies. The
function sitetobond transforms your percolation matrix z to a bond
matrix. The function FIND_COND solves the system of equations to find the
potentials, V_i, and the function coltomat transforms the resulting array
of potentials back into a matrix form, V (x, y). The following programs
are used to calculate the potentials and currents and visualize the results.
The resulting plots are shown in Fig. 10.2.
from pylab import *
from scipy.sparse import spdiags, dia_matrix, coo_matrix
from scipy.sparse.linalg import spsolve
from scipy.ndimage import measurements

# Written by Marin Soreng 2004


# Calculates the effective flow conductance Ceff of the
# lattice A as well as the potential V in every site .
def FIND_COND (A , X , Y ):
V_in = 1.
V_out = 0.
# Calls MK_EQSYSTEM .
B,C = MK_EQSYSTEM (A , X , Y )
# Kirchhoff's equations: solve for V
V = spsolve(B, C)
# The pressure at the external sites is added
# ( Boundary conditions )
V = concatenate((V_in * ones (X), V, V_out * ones (X)))
# Calculate Ceff: sum the currents through the bonds connecting
# the second-to-last column to the outlet, and divide by the
# applied potential difference
Ceff = dot((V[-1-2*X:-1-X] - V_out).T, A[-1-2*X:-1-X, 1]) \
/ ( V_in - V_out )
return V , Ceff

# Sets up Kirchhoff's equations for the 2D lattice A.


# A has X * Y rows and 2 columns . The rows indicate the site ,
# the first column the bond perpendicular to the flow direction
# and the second column the bond parallel to the flow direction .
#
# The return values are [B , C ] where B * x = C . This is solved
# for the site pressure by x = B \ C .

def MK_EQSYSTEM (A , X , Y ):
# Total no of internal lattice sites
sites = X *( Y - 2)
# Allocate space for the nonzero upper diagonals
main_diag = zeros(sites)

upper_diag1 = zeros(sites - 1)
upper_diag2 = zeros(sites - X)
# Calculates the nonzero upper diagonals
main_diag = A[X:X*(Y-1), 0] + A[X:X*(Y-1), 1] + \
A[0:X*(Y-2), 1] + A[X-1:X*(Y-1)-1, 0]
upper_diag1 = A [X:X*(Y-1)-1, 0]
upper_diag2 = A [X:X*(Y-2), 1]
main_diag[where(main_diag == 0)] = 1
# Constructing B which is symmetric , lower = upper diagonals .
B = dia_matrix ((sites , sites)) # B *u = t
B = - spdiags ( upper_diag1 , -1 , sites , sites )
B = B + - spdiags ( upper_diag2 ,-X , sites , sites )
B = B + B.T + spdiags ( main_diag , 0 , sites , sites )
# Constructing C
C = zeros(sites)
# C = dia_matrix ( (sites , 1) )
C[0:X] = A[0:X, 1]
C[-1-X+1:-1] = 0*A [-1 -2*X + 1:-1-X, 1]
return B , C

def sitetobond ( z ):
# Function to convert the site network z(L,L) into a (L*L,2) bond
# network
# g [i,0] gives bond perpendicular to direction of flow
# g [i,1] gives bond parallel to direction of flow
# z [ nx , ny ] -> g [ nx * ny , 2]
nx = size(z, 0)
ny = size(z, 1)
N = nx * ny
gg_r = zeros ((nx , ny)) # First , find these
gg_d = zeros ((nx , ny )) # First , find these
gg_r [:, 0:ny - 1] = z [:, 0:ny - 1] * z [:, 1:ny]
gg_r [: , ny - 1] = z [: , ny - 1]
gg_d [0:nx - 1, :] = z [0:nx - 1, :] * z [1:nx, :]
gg_d [nx - 1, :] = 0
# Then , concatenate gg onto g
g = zeros ((nx *ny ,2))
g[:, 0] = gg_d.reshape(-1, order='F').T
g[:, 1] = gg_r.reshape(-1, order='F').T
return g

def coltomat (z, x, y):


# Convert z(x*y) into a matrix of z(x,y)
# Transform this onto a nx x ny lattice
g = zeros ((x , y))
for iy in range(1, y + 1):
    i = (iy - 1) * x + 1
    ii = i + x - 1
    g[:, iy - 1] = z[i - 1:ii]
return g

# Generate spanning cluster (l - r spanning )


lx = 400

ly = 400
p = 0.5927
ncount = 0
perc = []

while (len(perc)==0):
ncount = ncount + 1
if (ncount >100):
break
z=rand(lx,ly)<p
lw,num = measurements.label(z)
perc_x = intersect1d(lw[0,:],lw[-1,:])
perc = perc_x[where(perc_x > 0)]
print("Percolation attempt", ncount)
zz = asarray((lw == perc[0]))
# zz now contains the spanning cluster
zzz = zz.T # Transpose
g = sitetobond ( zzz ) # Generate bond lattice
V, c_eff = FIND_COND (g, lx, ly) # Find conductivity
x = coltomat ( V , lx , ly ) # Transform to nx x ny lattice
V = x * zzz
g1 = g[:,0]
g2 = g[: ,1]
z1 = coltomat( g1 , lx , ly )
z2 = coltomat( g2 , lx , ly )

# Plot results
figure(figsize=(16,16))
ax = subplot(221)
imshow(zzz, interpolation='nearest')
title("Spanning cluster")
subplot(222, sharex=ax, sharey=ax)
imshow(V, interpolation='nearest')
title("Potential")

# Calculate current from top to down from the potential


f2 = zeros ( (lx , ly ))
for iy in range(ly -1):
f2[: , iy ] = ( V [: , iy ] - V [: , iy +1]) * z2 [: , iy ]
# Calculate current from left to right from the potential
f1 = zeros ( (lx , ly ))
for ix in range(lx-1):
f1[ ix ,:] = ( V [ ix ,:] - V [ ix +1 ,:]) * z1 [ ix ,:]
# Find the sum of (absolute) currents in and out of each site
fn = zeros (( lx , ly ))
fn = fn + abs ( f1 )
fn = fn + abs ( f2 )
# Add for each column (except leftmost) the up-down current, but offset
fn [: ,1: ly ] = fn [: ,1: ly ] + abs ( f2 [: ,0: ly -1])
# For the left-most one, add the inverse potential
# multiplied with the spanning cluster bool information
fn [: ,0] = fn [: ,0] + abs (( V [: ,0] - 1.0)*( zzz [: ,0]))
# For each row (except topmost) add the left-right current, but offset
fn [1: lx ,:] = fn [1: lx ,:] + abs ( f1 [0: lx -1 ,:])

# Plot results
subplot(223, sharex=ax, sharey=ax)
imshow(fn, interpolation='nearest')
title("Current")
# Singly connected
zsc = fn > (fn.max() - 1e-6)
# Backbone
zbb = fn > 1e-6
# Combine visualizations
ztt = ( zzz*1.0 + zsc*2.0 + zbb*3.0 )
zbb = zbb / zbb.max()
subplot(224, sharex=ax, sharey=ax)
imshow(ztt, interpolation='nearest')
title("SC, BB and DE")


Fig. 10.2 Plots of the spanning cluster, the potential, V (x, y), the absolute value of the
current flowing into each site, and the singly connected bonds, the backbone and the
dangling ends.

10.1.3 Measuring the conductance


Using this program we can measure the conductance of the system and
how it varies with both p and L. First, we perform simulations for M
samples and calculate the conductance from G(p, L) = I(p, L)/V , where
V is given, and we calculate I. Here, I is the sum of all the currents
escaping (or entering) the system. We have set all the potentials on the
left side to V = 1, that is, V_{iL} = 1 for i = 0, 1, . . . , L − 1. For these sites,
G_{iL,iL+1} is always 1. The current I is therefore

I = Σ_{i=0}^{L−1} I_{iL,iL+1} = Σ_{i=0}^{L−1} G_{iL,iL+1} (V_{iL} − V_{iL+1}) = Σ_{i=0}^{L−1} (V_{iL} − V_{iL+1}) .    (10.14)
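In the program below this sum is evaluated inside FIND_COND; as a small sketch, assuming V is the (lx, ly) potential matrix and zzz the spanning-cluster array from the program in the previous section (with the left boundary column held at potential 1), the current can also be read off directly:

I = sum((1.0 - V[:, 0]) * zzz[:, 0])  # current entering through the left boundary column
G = I / 1.0                           # conductance; the applied potential difference is 1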

We use the following program to find the conductance, G(p, L), for an
L × L system for L = 400, as well as the density of the spanning cluster
P (p, L). The resulting behavior for L = 400 and M = 600 is shown in
Fig. 10.3. We observe two things from this plot: First we see that the
behavior of G(p, L) and P (p, L) is qualitatively different around p = pc :
P (p, L) increases very rapidly as (p−pc )β where β is less than 1. However,
it appears that G(p, L) increases more slowly. Indeed, from the plot it
looks as if G(p, L) increases as (p − pc )x with an exponent x that is larger
than 1. How can this be? Why does the density of the spanning cluster
increase very rapidly, while the conductance increases much more slowly? This
may be surprising, but we will develop an explanation for this below.
from pylab import *
from scipy.ndimage import measurements
Lvals = [400]
pVals = logspace(log10(0.58), log10(0.85), 20)
C = zeros((len(pVals),len(Lvals)),float)
P = zeros((len(pVals),len(Lvals)),float)
nSamples = 600
G = zeros(len(Lvals))
for iL in range(len(Lvals)):
L = Lvals[iL]
lx = L
ly = L
for pIndex in range(len(pVals)):
p = pVals[pIndex]
ncount = 0
for j in range(nSamples):
ncount = 0
perc = []
while (len(perc)==0):
ncount = ncount + 1
if (ncount > 1000):

print("Couldn’t make percolation cluster...")


break
z=rand(lx,ly)<p
lw,num = measurements.label(z)
perc_x = intersect1d(lw[0,:],lw[-1,:])
perc = perc_x[where(perc_x > 0)]
if len(perc) > 0: # Found spanning cluster
area = measurements.sum(z, lw, perc[0])
P[pIndex,iL] = P[pIndex,iL] + area # Find P(p,L)
zz = asarray((lw == perc[0])) # zz=spanning cluster
zzz = zz.T
g = sitetobond (zzz) # Generate bond lattice
Pvec, c_eff = FIND_COND(g, lx, ly) # Find conductance
C[pIndex,iL] = C[pIndex,iL] + c_eff
C[pIndex,iL] = C[pIndex,iL]/nSamples
P[pIndex,iL] = P[pIndex,iL]/(nSamples*L*L)
plot(pVals,C[:,-1],'-ob',label='$C$')
plot(pVals,P[:,-1],'-or',label='$P$')
legend()
xlabel(r"$p$")
ylabel(r"$C,P$")

Fig. 10.3 Plots of the conductance G(p, L) and the density of the spanning cluster
P (p, L) for L = 400.

10.1.4 Conductance and the density of the spanning cluster


For an infinite system, that is, when L → ∞, we cannot define a con-
ductance G. Instead, we must describe the system by its conductivity
g = L^{−(d−2)} G (see (10.1)). The two-dimensional system is a special case

where the conductance and the conductivity are identical. However, in


general, we need to use this transformation to relate G and g.
For an infinite system, we know that for p < pc there will be no
spanning cluster. The effective conductivity is therefore zero. When p
is close to 1, the density of the spanning cluster will be proportional to
p, and we also expect the conductance to be proportional to p in this
range. This may lead us to assume that the density of the spanning
cluster and the conductance of the sample is proportional also when p is
close to pc . However, our direct measurements above (originally done by
Last and Thouless [24]) show that P and G are not proportional when p
approaches pc .
We have the tools to understand this behavior. The spanning cluster
consists of the backbone and dangling ends. However, it is only the
backbone that contributes to the conductance of the sample. We could
remove all the dangling ends and still get the same conductance. This
suggests that it is the scaling behavior of the backbone
that is important. However, we have found that the mass-scaling exponent
of the backbone, DB , is smaller than D, the mass scaling exponent for the
spanning cluster. This indicates that most of the mass of the spanning
cluster is found in the dangling ends. This is the reason for the difference
between the behavior of P (p), and G(p) for p close to pc . In the following
we will develop a detailed scaling argument for the behavior of the
conductance G and the conductivity g of the percolation system.

10.2 Scaling arguments for conductance and conductivity

We will now use the same scaling techniques we introduced to find the
behavior of P(p, L) to develop a theory for the conductance G(p, L).
First, we realize that instead of p we may describe the conductance as a
function of ξ and L: G(p, L) = G(ξ, L). Second, we realize that we want
to describe the system in the range where p > pc. We will address two
limits of the behavior: when L ≫ ξ, and then at pc, that is, when ξ ≫ L.

10.2.1 Scaling argument for p > pc and L ≫ ξ

When L ≫ ξ, we know that over length scales larger than ξ, the system
is effectively homogeneous. We can see this by subdividing the system
into cells of size ξ, giving us a total of (L/ξ)^d such cells.
For a homogeneous system consisting of ℓ^d boxes, with ℓ boxes along each
side, we know that the conductance G of the whole system is G = ℓ^{d−2} G_ℓ,
where G_ℓ is the conductance of a single box. We apply the same principle
to this system: the conductance G(ξ, L) is given as

G(ξ, L) = (L/ξ)^{d−2} G(ξ, ξ) ,    (10.15)
where we have written G(ξ, ξ) for the conductance within a box — the
conductance of a system where the correlation length is ξ and L = ξ.
From (10.1), the conductivity g(ξ, L) is given as

g(ξ, L) = L^{−(d−2)} G(ξ, L) = G(ξ, ξ)/ξ^{d−2} .    (10.16)
What is G(ξ, ξ)? A system with correlation length equal to the system
size is indistinguishable from a system at p = pc . The conductance G(ξ, ξ)
is therefore the conductance of the spanning cluster at p = pc in a system
of size L = ξ. Let us therefore find the conductance of a finite system —
a system of size L — at the percolation threshold.

10.2.2 Conductance of the spanning cluster


This leads us to address the conductance, G(∞, L), of the spanning cluster
at p = pc . We know that the spanning cluster consists of the backbone
and the dangling ends, and that only the backbone will contribute to
the conductivity. The backbone can be described by the blob model (see
section 8.1 for a discussion of the blob model): The backbone consists of
blobs of bonds in parallel, and links of singly connected bonds between
them.
We will start from a scaling hypothesis — a scaling ansatz for the
conductance — derive the consequences of this assumption, and then
finally test these consequences to see if the data is consistent with our
assumption. This will then validate, or more precisely not invalidate,
the hypothesis. Our scaling hypothesis will be that the
conductance of a system of size L at pc can be described by the scaling
exponent ζ̃_R:

G(∞, L) ∝ L^{−ζ̃_R} .    (10.17)
Finding bounds for the scaling behavior. In many cases, we cannot
find the scaling exponents directly, but we may be able to find bounds
for the scaling exponents. We will pursue this approach here: we will see
if we can find bounds for the scaling of G(∞, L), and thereby determine
bounds for the exponent ζ̃_R.
Lower bound for the scaling exponent. First, we know that the span-
ning cluster consists of blobs in series with the singly connected bonds.
This implies that the resistance R = 1/G of the spanning cluster is given
as the resistance of the singly connected bonds, R_SC, plus the resistance
of the blobs, R_blob, since resistances add in series:

1/G = R = R_SC + R_blob .    (10.18)

This implies that R > R_SC. The resistance of the singly connected
bonds can easily be found, since the singly connected bonds by definition
are connected in series, one after another. We know the effective
resistance of a series of resistors from basic electromagnetism:
the resistance of a series of resistors is the sum of the resistances. The
resistance of the singly connected bonds is therefore the resistance of a
single bond multiplied by the number of singly connected bonds, M_SC.
We have therefore found that

M_SC < R .    (10.19)

Because M_SC ∝ L^{D_SC} and we have assumed that R ∝ L^{ζ̃_R}, we find that

D_SC ≤ ζ̃_R .    (10.20)

We have found a lower bound for the exponent!


Upper bound for the scaling exponent. We can find an upper bound
by examining the minimal path. The resistance of the spanning cluster
will be smaller than or equal to the resistance of the minimal path, since
the spanning cluster will have some regions, the blobs, where there are
bonds in parallel. Adding parallel bonds will always lower the resistance.
Hence, the resistance is smaller than or equal to the resistance of the
minimal path. Since the minimal path is a series of resistances in series,
the total resistance of the minimal path is the mass of the minimal path
multiplied by the resistance of a single bond. Consequently, the resistance
of the spanning cluster is smaller than the mass of the minimal path,
M_min, which we know scales with system size, M_min ∝ L^{D_min}. We have
therefore found an upper bound for the exponent:

L^{ζ̃_R} ∝ R ≤ M_min ∝ L^{D_min} ,    (10.21)

and therefore

ζ̃_R ≤ D_min .    (10.22)
Upper and lower bounds demonstrate the scaling relation. We have
therefore demonstrated the scaling relation

D_SC ≤ ζ̃_R ≤ D_min .    (10.23)

Because this scaling relation shows that the scaling of R is bounded
by two power laws in L, we have also shown that the resistance R is
a power law with an exponent within the given bounds. We notice
that when the dimensionality of the system is high, the probability of
loops will be low, and blobs will be unlikely. In this case

D_SC = ζ̃_R = D_min = D_max .    (10.24)

10.2.3 Conductivity for p > pc


By scaling arguments, we have established that the conductance G(∞, L)
of the spanning cluster in a system of size L is described by the exponent
ζ̃_R:

G(∞, L) ∝ L^{−ζ̃_R} when L ≤ ξ .    (10.25)

We use this to find an expression for G(ξ, ξ), which is the conductance
of the spanning cluster at p = pc in a system of size L = ξ. Therefore

G(ξ, ξ) ∝ ξ^{−ζ̃_R} .    (10.26)

We insert this in (10.16) in order to establish the behavior of the conduc-
tivity, g, for p > pc, finding that

g = G(ξ, ξ)/ξ^{d−2} ∝ ξ^{−(d−2+ζ̃_R)}    (10.27)
  ∝ (p − pc)^{ν(d−2+ζ̃_R)} ∝ (p − pc)^μ .    (10.28)

We have introduced the exponent μ:

μ = ν(d − 2 + ζ̃_R) .    (10.29)

We notice that for two-dimensional percolation, any value of ζ̃_R larger
than 1/ν will lead to a value μ > 1, which is what was observed in
Fig. 10.3. The exponent μ is therefore larger than 1, significantly
different from the exponent β, which is less than 1 and which describes
the mass of the spanning cluster.
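For example, in two dimensions, where ν = 4/3, the renormalization estimate ζ̃_R ≈ 0.94 derived in the next section gives μ ≈ (4/3)(0 + 0.94) ≈ 1.25, indeed larger than 1.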
Can the results be generalized? We have therefore explained the
difference between how P(p, L) and G(p, L) (or g(p, L)) scale with
(p − pc) close to pc. This is a useful insight and a useful result that
provides important information about how we expect a random porous
material to behave just as flow starts to occur through it. Notice
that when we study percolation systems here, we have generally assumed
that the porosity is uncorrelated. For real systems, the porosity may
have correlations due to the physical processes that generated the
porosity or the underlying materials. However, when we know how to
describe uncorrelated systems like the percolation system, we may be
able to use similar theoretical, scaling and computational approaches to
study the behavior of real systems.

10.3 Renormalization calculation

Another theoretical approach to address and understand the behavior of


the system is through the renormalization group calculation. Here, we
will use the renormalization group approach for a square bond lattice in
order to estimate the exponent ζ̃R .
In order to apply the renormalization group approach, we calculate
the average resistance ⟨R′⟩ of a 2 × 2 cell. We use the H-cell approach
and only look at percolation in the horizontal direction. The various
configurations c and their degeneracies g(c) are listed in Table 10.1.
(The degeneracy is the number of configurations of the same type.) We
assume that the resistance of a single bond is R. The renormalized
resistance R′ is then given by p′R′ = ⟨R′⟩. Using the scaling relation for
the resistance, R ∝ L^{ζ̃_R}, we can determine the exponent from

ζ̃_R = ln(⟨R′⟩/p′) / ln b .    (10.30)

The renormalization scheme and the values used are shown in Ta-
ble 10.1. The resulting value for the renormalized resistance is

R′ = (1/p′) Σ_c g(c) P(c) R(c)    (10.31)
   = (1/p′) (1/2^5) (1 + 1 + 4·(5/3) + 2·2 + 2·3 + 4·2 + 2·2)    (10.32)
   ≈ 1.917 .    (10.33)

Consequently, the exponent ζ̃_R is given by

ζ̃_R ≈ ln 1.917 / ln 2 ≈ 0.939 .    (10.34)

This value is consistent with the bounds set by the scaling relation
in (10.23).

c   P(c)           g(c)   R(c)
1   p^5 (1−p)^0    1      1
2   p^4 (1−p)^1    1      1
3   p^4 (1−p)^1    4      5/3
4   p^3 (1−p)^2    2      2
5   p^3 (1−p)^2    2      3
6   p^3 (1−p)^2    4      2
7   p^2 (1−p)^3    2      2

Table 10.1 Renormalization scheme for the scaling of the resistance R in a random
resistor network. The value R(c) gives the resistance of configuration c, and g(c) is the
degeneracy, that is, the number of such configurations.
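The numbers in (10.31)-(10.34) are easily verified; a minimal sketch using the entries of Table 10.1:

from math import log

# (degeneracy g(c), resistance R(c)) for the spanning configurations in Table 10.1
configs = [(1, 1.0), (1, 1.0), (4, 5/3), (2, 2.0), (2, 3.0), (4, 2.0), (2, 2.0)]
p_prime = 0.5   # renormalized occupation probability at the fixpoint
P = 0.5**5      # probability of each configuration at p = 1/2
R_prime = sum(g*P*R for g, R in configs)/p_prime
zeta_R = log(R_prime)/log(2)  # b = 2
print(R_prime, zeta_R)        # approximately 1.917 and 0.939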

10.4 Finite size scaling

General scaling of the conductivity. In general, the conductance and
the conductivity are related by

G(p, L) = L^{d−2} g(p, L) .    (10.35)

We found that the scaling of the conductivity is

g ∝ (p − pc)^μ ∝ ξ^{−μ/ν} ,    (10.36)

with the exponent μ given as μ = ν(d − 2 + ζ̃_R).
Finite size scaling ansatz for the conductivity. How can we use this
scaling behavior as a basis for a finite-size scaling ansatz? We apply the
usual approach, where we assume that we can extend the behavior of the
infinite system to the finite system by introducing a finite-size
scaling function f(L/ξ):

g(ξ, L) = ξ^{−μ/ν} f(L/ξ) .    (10.37)

10.4.1 (Advanced) Developing the scaling form from renormalization
This scaling relation can be developed systematically using the renor-
malization group approach. Through each renormalization iteration, the
conductivity g is mapped onto itself as all lengths are rescaled by a factor
b, multiplying the whole expression by b to some exponent x, yet to
be determined:

g(ξ, L) = b^x g(ξ/b, L/b) .    (10.38)

The result after l iterations is

g(ξ, L) = (b^l)^x g(ξ/b^l, L/b^l) .    (10.39)

For the case L < ξ we choose to continue iterating until b^l = L, which
gives

g(ξ, L) = L^x g(ξ/L, 1) .    (10.40)

Similarly, when L > ξ, we continue iterating until b^l = ξ, which gives

g(ξ, L) = ξ^x g(1, L/ξ) .    (10.41)

Now, we have already established the scaling behavior when L ≫ ξ, where
we found that g ∝ ξ^{−μ/ν}. We recognize that the exponent x = −μ/ν,
and that the function g(u, v) is a constant both when v = 1 and u ≫ 1,
and when u = 1 and v ≫ 1. We have therefore found the limiting scaling
behavior:

g(ξ, L) = ξ^{−μ/ν} for L ≫ ξ ,  and  g(ξ, L) = L^{−μ/ν} for L ≪ ξ .    (10.42)

10.4.2 The finite-size scaling ansatz

The result for L ≪ ξ we can also find by a direct scaling argument. When L ≪ ξ, the system behaves as if it is at p = pc. In this case we have that G ∝ L^(−ζ̃R). We insert this into the expression for g and find g = GL^(−(d−2)), which gives

$$g = G L^{-(d-2)} \propto L^{-\tilde{\zeta}_R} L^{-(d-2)} \propto L^{-(d-2+\tilde{\zeta}_R)} \propto L^{-\mu/\nu} . \qquad (10.43)$$

This means that we have two possible finite-size scaling assumptions. Either

$$g(\xi, L) = \xi^{-\mu/\nu} f\!\left( \frac{L}{\xi} \right) , \qquad (10.44)$$

where the scaling function f(u) has the form

$$f(u) = \begin{cases} \text{const.} & u \gg 1, \; L \to \infty \\ u^{-\mu/\nu} & u \ll 1, \; \xi \to \infty \end{cases} \qquad (10.45)$$

or

$$g(\xi, L) = L^{-\mu/\nu} \tilde{f}\!\left( \frac{L}{\xi} \right) , \qquad (10.46)$$

where we leave it to the reader to develop the form of the scaling function f̃(u).

10.4.3 Finite-size scaling observations

How does the scaling ansatz correspond to the observations? We can use the program we have developed to measure the conductivity as a function of both p and system size L. The following program has been modified for this type of measurement:
from pylab import *
from scipy.ndimage import measurements
from matplotlib.colors import ListedColormap
Lvals = [25,50,100,200,400]
pVals = logspace(log10(0.58), log10(0.85), 20)
C = zeros((len(pVals),len(Lvals)),float)
P = zeros((len(pVals),len(Lvals)),float)
nSamples = 600
mu = zeros(len(Lvals))
for iL in range(len(Lvals)):
    L = Lvals[iL]
    lx, ly = L, L  # lattice dimensions passed to FIND_COND
    for pIndex in range(len(pVals)):
        p = pVals[pIndex]
        for j in range(nSamples):
            ncount = 0
            perc = []
            while (len(perc)==0):
                ncount = ncount + 1
                if (ncount > 1000):
                    print("Couldn't make percolation cluster...")
                    break
                z = rand(L,L) < p
                lw,num = measurements.label(z)
                perc_x = intersect1d(lw[0,:],lw[-1,:])
                perc = perc_x[where(perc_x > 0)]
            if len(perc) > 0:
                zz = asarray((lw == perc[0]))
                # zz now contains the spanning cluster
                zzz = zz.T
                # Generate bond lattice from this
                g = sitetobond(zzz)
                # Generate conductivity matrix
                Pvec, c_eff = FIND_COND(g, lx, ly)
                C[pIndex,iL] = C[pIndex,iL] + c_eff
        C[pIndex,iL] = C[pIndex,iL]/nSamples
for iL in range(len(Lvals)):
    L = Lvals[iL]
    plot(pVals,C[:,iL],label="L="+str(L))
xlabel(r"$p$")
ylabel(r"$C$")
legend()

The results for L = 25, 50, 100, 200, 400 are shown in Fig. 10.4. Here, we plot both the raw data, g(p, L), and the behavior of g(pc, L) as a function of L on a log-log scale, showing that g(pc, L) indeed scales as a power-law with L.
Scaling data collapse. We can also test the scaling ansatz by plotting a finite-size scaling data collapse. We expect that the conductivity will behave as

$$g(p, L) = L^{-\mu/\nu} \tilde{f}\left( L/\xi \right) , \qquad (10.47)$$

which we can rewrite by introducing ξ = ξ₀(p − pc)^(−ν) to get:

$$g(p, L) = L^{-\mu/\nu} \tilde{f}\left( \left[ L^{1/\nu} (p - p_c) \right]^{\nu} \right) . \qquad (10.48)$$


Fig. 10.4 (a) Illustration of the conductivity g(p, L) as a function of p for L = 25, 50, 100, 200, 400. (b) We see that at pc the conductivity g(pc, L) is scaling according to g ∝ L^(−µ/ν).

We test this by plotting L^(µ/ν) g(p, L) as a function of L^(1/ν)(p − pc). This is shown in Fig. 10.5.
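A minimal sketch of how such a collapse plot can be produced is shown below. It assumes the arrays C, pVals and Lvals from the program above, and it takes the values ν = 4/3, µ ≃ 1.3 and pc ≃ 0.5927 as given input; these exponent values are assumptions for the illustration, not results produced by the script itself.

from pylab import *
nu = 4.0/3.0       # assumed literature value
mu = 1.3           # assumed literature value
pc = 0.5927        # assumed value for 2D site percolation
for iL in range(len(Lvals)):
    L = Lvals[iL]
    x = L**(1.0/nu)*(pVals - pc)   # rescaled distance from pc
    y = L**(mu/nu)*C[:,iL]         # rescaled conductivity
    plot(x,y,label="L="+str(L))
xlabel(r"$L^{1/\nu}(p-p_c)$")
ylabel(r"$L^{\mu/\nu} g(p,L)$")
legend()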
Estimating the exponent µ from the data. We can also use the results
from the simulations to measure µ directly by plotting g(p, L) as a
function of (p − pc ) and fitting a linear function on a log-log plot. We do
this for increasing values of L in Fig. 10.6. (Notice that the curves for
small values of L clearly are not linear, and we should, ideally, have fitted
the linear curve to only the part of the curve that is approximately linear.
We will not address methods to do this here, but you should develop
such methods in your own research.)
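The sketch below shows one way to perform such a direct estimate, again assuming the arrays C, pVals and Lvals from the program above. It fits a straight line to the whole log-log curve with polyfit and makes no attempt to restrict the fit to the linear region.

from pylab import *
pc = 0.5927
for iL in range(len(Lvals)):
    # may need to mask out p-values where C is zero before taking logs
    x = log(pVals - pc)
    y = log(C[:,iL])
    fit = polyfit(x,y,1)   # the slope estimates mu
    print("L =", Lvals[iL], ": mu =", fit[0])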
Implications of the scaling ansatz. Our conclusion is that the conduc-
tivity is a function of p, but also of system size, which implies that the
conductivity of a disordered system close to pc is not a simple material
property as we are used to — we need to address the scaling behav-
ior of the system in detail in order to understand the behavior of the
conductivity and the conductance of the system.

Fig. 10.5 Finite-size data scaling collapse for g(p, L) showing the validity of the scaling
ansatz.


Fig. 10.6 (a) Plot of g(p, L) for increasing values of L. (b) Plot of the exponent µ
calculated by a linear fit for increasing system sizes L.

10.5 (Advanced) Internal flux distribution

When we solve the flow problem, for electricity or fluids, on the percolation
cluster, we find a set of currents Ib = Ii,j for each bond b = (i, j) on the
backbone. For all other bonds, the flux will be identically zero. How can
we describe the distribution of fluxes on the backbone?
For electrical flow, the conservation of energy is formulated in the expression:

$$R I^2 = \sum_b r_b I_b^2 , \qquad (10.49)$$

where R is the total resistance of the system, I is the total current, r_b is the resistivity of bond b and I_b is the current in bond b. We can therefore rewrite the total resistance R as

$$R = \sum_b r_b \left( \frac{I_b}{I} \right)^2 = \sum_b r_b \, i_b^2 , \qquad (10.50)$$

where we have introduced the fractional current i_b = I_b/I. We have written the total resistance as a sum of the square of the fractional currents in each of the bonds.
The fractional current i_b is assigned to each bond of the backbone. We can describe the fractional currents by the probability distribution for various values of i_b. We can count the number of bonds n(i) having the fractional current i. The total number of bonds is the mass of the backbone:

$$\sum_b 1 = M_B \propto L^{D_B} . \qquad (10.51)$$
The distribution of fractional currents is therefore given by P(i) = n(i)/M_B.
We characterize the distribution P(i) through the moments of the distribution:

$$\langle i^{2q} \rangle = \frac{1}{M_B} \sum_b i_b^{2q} = \frac{1}{M_B} \int i^{2q} \, n(i) \, di . \qquad (10.52)$$

However, there is no general way to simplify this relation, since we do not know whether the function n(i) has a simple scaling form.
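As a small illustration of (10.52), the moments can be measured directly once the bond currents are known. The sketch below assumes that currents is an array of bond currents I_b, for instance extracted from the FIND_COND solution in the programs earlier in this chapter, and that I is the total current; both names are assumptions made for this illustration.

from pylab import *
def current_moments(currents, I, qvals):
    ib = abs(currents)/I    # fractional currents i_b
    ib = ib[ib > 0]         # only bonds on the backbone carry current
    MB = len(ib)            # mass of the backbone
    return array([sum(ib**(2*q))/MB for q in qvals])

For q = 0 this returns 1 by construction, while increasing q weights the bonds carrying the largest currents more and more strongly.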
However, we can address specific moments of the distribution. We know that the mass of the backbone has a fractal scaling with exponent D_B. This corresponds to the zeroth moment of the distribution. We expect (or hypothesize) that at p = pc, the other moments have a scaling

form:

$$\sum_b i_b^{2q} \propto L^{y(q)} . \qquad (10.53)$$

What can we say about the scaling exponents y(q) for moment q?
Scaling exponent for q = 0. For q = 0, the sum is

$$\sum_b (i_b^2)^0 \propto L^{y(0)} \propto L^{D_B} , \qquad (10.54)$$

that is, y(0) = D_B.


Scaling exponent for q → ∞. What happens in the limit of q → ∞? In this case, the only terms that will be important in the sum are the terms where i_b = 1, because all other terms will be zero. The bonds with i_b = 1 are the singly connected bonds: all the current passes through these bonds. Therefore, we have

$$\sum_b (i_b^2)^{\infty} \propto L^{y(\infty)} \propto M_{SC} \propto L^{D_{SC}} , \qquad (10.55)$$

and we find that y(∞) = D_SC.


Scaling exponent for q = 1. When q = 1, we find from (10.50) that the sum is given as the total resistance of the cluster

$$\sum_b (i_b^2)^1 = R \propto L^{\tilde{\zeta}_R} , \qquad (10.56)$$

which implies that y(1) = ζ̃R.


The shape of y(q). We can in general argue that because each term in the sum Σ_b (i_b)^(2q) is monotonically decreasing in q, the sum is also monotonically decreasing. We can therefore illustrate the curve y(q) in Fig. 10.7.
Fluctuations in real resistors. In real resistor-networks, the case is
even more complex, because the resistivity is due to impurities, and
the impurities diffuse. Therefore, the fluctuations in the resistivity will
also have a time-dependent part. This is the origin of thermal noise in
the circuit. If we keep the total current I constant, fluctuations in the
resistivity will lead to fluctuations in the voltage.

Fig. 10.7 Illustration of the exponents y(q) characterizing the scaling of the moments of the distribution of fractional currents, as a function of q, the order of the moment.

10.5.1 Scaling of current fluctuations

What can we learn about the second moment, q = 2? We know that the total resistance, R, is

$$R = \sum_b r_b \, i_b^2 . \qquad (10.57)$$

So far, we have only addressed the case when r_b = 1 for all the bonds on the backbone. However, in reality there will be some variations in the local resistances, so that we can write

$$r_b = 1 + \delta r_b , \qquad (10.58)$$

where ⟨δr_b⟩ = 0.

10.5.2 Potential fluctuations

Let us estimate the fluctuations in the potential (voltage):

$$\delta V = V - \langle V \rangle = \sum_b \delta r_b \, (i_b)^2 . \qquad (10.59)$$

However, the fractional currents i_b are now also different, since i_b = I_b/I depends on the overall current I. Therefore we introduce

$$R_0 = \sum_b (i_b^{(0)})^2 , \qquad (10.60)$$

where

$$i_b = i_b^{(0)} + \delta i_b . \qquad (10.61)$$

There is a general theorem giving that

$$\sum_b 1 \cdot \delta(i_b^2) \simeq 0 , \qquad (10.62)$$

to leading order. We can therefore conclude that

$$\delta V = V - \langle V \rangle = \sum_b \delta r_b \, i_b^2 + \sum_b 1 \cdot \delta(i_b^2) \simeq \sum_b \delta r_b \, i_b^2 . \qquad (10.63)$$

However, what about the fluctuations in the deviations?

$$\langle (\delta V)^2 \rangle = \sum_{b,b'} \langle \delta r_b \, \delta r_{b'} \rangle \, i_b^2 \, i_{b'}^2 . \qquad (10.64)$$

If we assume that the fluctuations are independent:

$$\langle \delta r_b \, \delta r_{b'} \rangle = \delta_{bb'} \Delta , \qquad (10.65)$$

where we have introduced

$$\Delta = \langle \delta r_b^2 \rangle . \qquad (10.66)$$

We therefore find that

$$\langle \delta V^2 \rangle = \Delta \sum_b (i_b)^4 \propto \Delta L^{y(2)} . \qquad (10.67)$$

Consequently, we find that the noise is related to the second moment.


We know that the exponent y(2) is bounded: ζ̃R ≥ y(2) ≥ D_SC, which places the value at y(2) ≃ 0.9 for a two-dimensional system.

10.6 (Advanced) Multi-fractals

The distribution of fractional currents in the random resistor network is an example of a multi-fractal distribution. The higher moments of this distribution have the non-trivial scaling relation

$$M_q \propto L^{y(q)} . \qquad (10.68)$$

Unifractal distributions. Previously, we have studied unifractal distributions, such as the distribution of percolation cluster sizes when p is close to pc. We found that the moments of the distribution of cluster sizes could be written as:

$$M_q = \langle s^{q-1} \rangle \propto |p - p_c|^{\beta + (1-q)(\beta+\gamma)} . \qquad (10.69)$$

This is an example of a unifractal distribution. All the moments are described by a single exponent and have a simple scaling form:

$$M_q \propto |p - p_c|^{x_q} , \qquad (10.70)$$

where the exponent x_q depends linearly on q. The whole distribution is described by this single linear dependence, and this is why we use the word unifractal to describe the distribution.
Multi-fractal distributions. We have now studied one example of a multi-fractal: the distribution of fractional currents in the random resistor network. In this case, we found that the moments of the distribution were described by

$$M_q = \langle i^{2q} \rangle = \frac{\sum_b i_b^{2q}}{M_B} \propto L^{y(q)-D_B} . \qquad (10.71)$$

This is different from the unifractal case, where the exponent y(q) is linear in q: y(q) = Dq. For the multifractal there is a whole spectrum of different exponents: each moment is described by a different exponent, and there is no simple relation that relates the exponents. In practice, multifractals are typically encountered when measures, such as the fractional current through a bond, are imposed on a fractal structure.
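A sketch of how the spectrum y(q) could be estimated numerically is given below. It assumes the function current_moments from above and a hypothetical helper find_currents(L) that solves the flow problem on a spanning cluster at p = pc and returns the bond currents together with the total current. Since M_q ∝ L^(y(q)−D_B), the fitted slope estimates y(q) − D_B.

from pylab import *
qvals = array([0.0, 0.5, 1.0, 2.0, 4.0])
Ls = array([32, 64, 128, 256])
logM = zeros((len(Ls),len(qvals)))
for iL in range(len(Ls)):
    currents, I = find_currents(Ls[iL])   # hypothetical helper
    logM[iL] = log(current_moments(currents, I, qvals))
for iq in range(len(qvals)):
    slope = polyfit(log(Ls), logM[:,iq], 1)[0]
    print("q =", qvals[iq], ": y(q) - DB =", slope)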
Moments pick out various aspects of the distribution. What does the moment of order q measure in the system? Let us assume that the distribution n(i) of fractional currents in the system has the functional form illustrated in Fig. 10.8(a). In Fig. 10.8(b) we illustrate the function n(i)i^(2q). The maximum of this function is found at i_q (that is, i_q gives the maximum of the function n(i)i^(2q)). We will assume that the distribution is sharp, so that we can calculate the moment by only using the values in a small neighborhood of i_q. The q-th moment is therefore approximately

$$M_q = \int n(i) \, i^{2q} \, di \simeq n(i_q) \, i_q^{2q} . \qquad (10.72)$$

This approximation becomes better as q → ∞ since the distribution n(i)i^(2q) is then approaching a delta function around i_q.
This implies that the various moments will pick out various values of i. That is, the moment q will address the structure of sites that have currents that are close to i_q, since it is the currents close to i_q that contribute to the moment. Looking at the different moments of the distribution therefore corresponds to looking at different substructures of the cluster. For example, we saw that the zeroth moment of the current distribution picks out the backbone and the infinite moment picks out the singly connected bonds.

Fig. 10.8 Illustration of the distribution n(i) of fractional currents i in a random resistor network. Part (a) shows the direct distribution, and part (b) shows n(i)i^(2q). The latter distribution has a maximum at i_q.
System-size dependence. Let us now address the system-size L-dependence of n(i_q) and i_q^(2q). Let us assume that i_q and n(i_q) scale with system size according to

$$i_q^2 \propto L^{-\alpha(q)} , \qquad (10.73)$$

and

$$n(i_q) \propto L^{f(\alpha(q))} . \qquad (10.74)$$

And we will assume that the q-th moment depends on the distribution at i_q:

$$m_q \simeq n(i_q) \, i_q^{2q} \propto L^{f(\alpha(q)) - q\alpha(q)} \propto L^{y(q)-D_B} . \qquad (10.75)$$
The value i_q is found from the maximum of n(i)i^(2q). The condition for this maximum is

$$\frac{\partial}{\partial i} \left[ n(i) \, i^{2q} \right]_{i_q} = 0 , \qquad (10.76)$$

or

$$\frac{\partial}{\partial i} \left[ \ln n(i) + 2q \ln i \right]_{i_q} = 0 , \qquad (10.77)$$

which gives

$$\left( \frac{\partial \ln n(i)}{\partial \ln i} \right)_{i_q} = -2q . \qquad (10.78)$$

Now we can substitute the L-dependent expressions for n(i_q) and i_q², getting

$$\ln n(i_q) = f \ln L , \qquad (10.79)$$

and

$$\ln i_q^2 = -\alpha \ln L , \qquad (10.80)$$

and therefore we find that

$$\frac{\partial f}{\partial \alpha} = q . \qquad (10.81)$$
We therefore have two equations relating y(q) to f(α) and α(q):

$$f(\alpha(q)) = \left[ y(q) - D_B \right] + q \alpha(q) , \qquad (10.82)$$
$$\frac{\partial f}{\partial \alpha} = q . \qquad (10.83)$$

We can also find the reverse equations, using a Legendre transformation:

$$\frac{\partial}{\partial q} f(\alpha(q)) = \frac{\partial f}{\partial \alpha} \frac{\partial \alpha}{\partial q} \qquad (10.84)$$
$$= \frac{\partial y}{\partial q} + \alpha(q) + q \frac{\partial \alpha}{\partial q} , \qquad (10.85)$$

which gives

$$\frac{\partial y}{\partial q} = -\alpha . \qquad (10.86)$$

Interpretation. What is the interpretation of f(α)? Because n(i_q)L^(D_B) is the total number of bonds with current i_q, we can interpret f(α(q)) + D_B as the fractal dimension of the set of bonds with a current i_q.
Numerically, we estimate f(α) by selecting i_q, measuring the fractal dimension of the subset of bonds with i = i_q, and plotting the relation between i_q and the fractal dimension as f(α).
Notice that it is generally assumed that f(α) (and y(q)) are universal values that do not depend on the details of the lattice, but do depend on the dimensionality of the system.

10.7 Real conductivity

So far we have addressed conductivity of a percolation cluster. That is a system where the local conductances (or permeabilities) are either zero or a given constant conductance. That is, we have studied a system with local conductances G_{i,j} so that

$$G_b = G_{i,j} = \begin{cases} 1 & \text{with probability } p \\ 0 & \text{with probability } 1-p \end{cases} . \qquad (10.87)$$

Binary mixture of conductors. However, in practice, we want to address systems with some distribution of conductances, such as a binary mixture of good and bad conductors, with conductances:

$$G_b = G_{i,j} = \begin{cases} G_2 & \text{with probability } p \\ G_1 & \text{with probability } 1-p \end{cases} . \qquad (10.88)$$

Superconductor networks. However, in order to address this problem, let us first look at the conjugate problem to the random resistor network, the random superconductor network. We will assume that the conductances are

$$G_b = G_{i,j} = \begin{cases} \infty & \text{with probability } p \\ 1 & \text{with probability } 1-p \end{cases} . \qquad (10.89)$$
In this case, we expect the conductance to diverge when p approaches pc from below, and that the conductance is infinite when p > pc. It can be shown that the behavior for the random superconductor network is similar to that of the random resistor network, but that the exponent describing the divergence of the conductance (and consequently the conductivity) when p approaches pc is s:

$$G \propto (p_c - p)^{-s} . \qquad (10.90)$$
Combining the two approaches. How can we address both these problems? For any system with a finite smallest conductance, G_<, we can always use the smaller conductance as the unit for conductance, and write the functional form for the conductance of the whole system as

$$\frac{G(G_1, G_2, p)}{G_1} = G\!\left( \frac{G_1}{G_1}, \frac{G_2}{G_1}, p \right) = G\!\left( \frac{G_2}{G_1}, p \right) . \qquad (10.91)$$
Scaling ansatz for binary mixture of conductances. We will make a scaling ansatz for the general behavior of G:

$$G = G_2 (p - p_c)^{\mu} f_{\pm}\!\left( \frac{G_1/G_2}{(p - p_c)^{y}} \right) , \qquad (10.92)$$

where the exponent y is yet to be determined.
where the exponent y is yet to be determined.
The random resistor network we studied above corresponds to G1 → 0,
and G2 = c. In this case, we retrieve the scaling behavior for p close to
pc , by assuming that f+ (0) is a constant.
For the random superconductor network, the conductances are G2 →
∞, and G1 = const.. We will therefore need to construct f− (u) in such a
way that the infinite conductance is canceled from the prefactor. That is,

we need f₋(u) ∝ u. We insert this into (10.92), getting

$$G \propto G_2 (p - p_c)^{\mu} \, \frac{G_1/G_2}{(p - p_c)^{y}} \propto G_1 |p - p_c|^{\mu - y} . \qquad (10.93)$$

Because we know that the scaling exponent should be µ − y = −s in this limit, we have determined y: y = µ + s, where µ and s are determined from the random resistor and random superconductor networks respectively.
Finite G₂ and G₁. When p → pc the conductance G should approach a constant when both G₂ and G₁ are finite. However, p → pc corresponds to the argument x → +∞ in the function f±(x), and the only way to ensure that the total conductance is finite is to require that the two dependencies on (p − pc) cancel exactly. We achieve this by selecting

$$f_{\pm}(x) \propto x^{\mu/(\mu+s)} . \qquad (10.94)$$
We can insert this relation into (10.92), getting

$$G = G_2 |p - p_c|^{\mu} \left( \frac{G_1/G_2}{|p - p_c|^{\mu+s}} \right)^{\mu/(\mu+s)} , \qquad (10.95)$$

which results in

$$G = G_2 \left( \frac{G_1}{G_2} \right)^{\mu/(\mu+s)} . \qquad (10.96)$$

This expression can again be simplified to

$$G(p = p_c) = G_2^{s/(\mu+s)} \, G_1^{\mu/(\mu+s)} . \qquad (10.97)$$

Numerical values in two dimensions. In two dimensions, µ = s ≃ 1.3, and the relation becomes:

$$G \propto (G_1 G_2)^{1/2} . \qquad (10.98)$$
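A minimal sketch of how such a binary mixture can be set up numerically is shown below. It only constructs the lattice of local conductances, and assumes that the sitetobond/FIND_COND machinery from the programs earlier in this chapter is adapted to accept these values in place of the boolean site array.

from pylab import *
L = 100
p = 0.5927             # assumed value of pc for 2D site percolation
G1, G2 = 0.01, 1.0     # bad and good local conductances
sigma = where(rand(L,L) < p, G2, G1)   # local conductance on each site
# Feed sigma into the bond generation instead of the boolean array; at
# p = pc the effective conductance should approach
# G2**(s/(mu+s))*G1**(mu/(mu+s)) according to (10.97)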

10.8 Exercises

Exercise 10.1: Density of the backbone

The backbone of a spanning cluster is the union of all self-avoiding walks from one side of the cluster to the opposite side. The backbone corresponds to the sites that contribute to the flow conductivity of the spanning cluster. The remaining sites are the dangling ends.
We call the mass of the backbone M_B, and the density of the backbone P_B = M_B/L^d, where L is the system size, and d the dimensionality of the percolation system. Here, we will study two-dimensional site percolation.
a) Argue that the functional form of P_B(p) when p → pc⁺ is

$$P_B(p) = P_0 (p - p_c)^x , \qquad (10.99)$$

and find an expression for the exponent x. You can assume that the fractal dimension of the backbone, D_B, is known.
b) Assume that the functional form of P_B(p) when p → pc⁺ and ξ ≪ L is

$$P_B(p) = P_0 (p - p_c)^x . \qquad (10.100)$$

Determine the exponent x by numerical experiment. If needed, you may use that ν = 4/3.

Exercise 10.2: Flow on fractals


Use the example programs from the text to study fluid flow in a percola-
tion system.
a) Run the example programs provided in the text to visualize the
currents on the spanning cluster.
b) Modify the program to find the backbone and the dangling ends of
the spanning cluster.
c) Use the program to find the singly connected bonds in the spanning
cluster.

Exercise 10.3: Conductivity


a) Find the conductivity as a function of p − pc . Determine the exponent
ζ̃R by direct measurement.
b) Find the conductivity at p = pc as a function of system size L.

Exercise 10.4: Current distribution

Use the example programs from the text to find the currents I_b in each bond b on a spanning cluster at p = pc, p = 0.585, and p = 0.60.

a) Find the total current I going through the system.


In the following we will study the normalized currents, ib = Ib /I.
b) Find the distribution P (i) of the normalized currents.
c) Measure moments of the distribution.

Exercise 10.5: Bivariate porous media

Rewrite the programs in the text to study a bivariate distribution of conductances. That is, for each site, the conductance is 1 with probability p and g₀ < 1 with probability 1 − p.
a) Visualize the distribution of currents for g₀ = 0.1.
b) Find the conductivity g(p) for g₀ = 0.1, 0.01, and 0.001.
c) Plot g(pc) as a function of g₀.
d) (Advanced) Can you find a way to rescale the conductivities to produce a data-collapse?
11 Elastic properties of disordered media

There are various physical properties that we may be interested in for a disordered material. In the previous chapter, we studied flow problems in disordered materials using the percolation system as a model disordered material. In this chapter we will address mechanical properties of the disordered material, such as the coefficients of elasticity [11, 4, 20, 28, 41]. We will address the behavior of the disordered material in the limit of fractal scaling. In this limit we expect material properties such as Young's modulus to display a non-trivial dependence on system size. That is, we will expect material properties such as Young's modulus to have an explicit system size dependence. We will use the terminology and techniques already developed to study percolation to address the mechanical behavior of disordered systems.

11.1 Rigidity percolation

What are the elastic properties of a percolation system? First, we need to decide on how to convert a percolation system into an elastic system. We will start by modeling an elastic material as a bond lattice, where each bond represents a local elastic element. The element will in general have resistance to stretching and bending. Systems with only stretching stiffness are termed central force lattices. Here, we will address systems with both stretching and bending stiffness.

Models for stretching and bending stiffness. We can formulate the effect of bending and stretching through the elastic energy of the system. The energy will have terms that depend on the elongation of bonds; these will be the terms that are related to stretching resistance. In addition, there will be terms related to the bending of bonds. Here we will introduce the bending terms through the angles between bonds. For any two bonds connected to the same site, there will be an energy associated with changes in the angle of the bond. This can be expressed as

$$U = \sum_{ij} \frac{1}{2} k_{ij} (\mathbf{u}_i - \mathbf{u}_j)^2 + \sum_{ijk} \frac{1}{2} \kappa_{ijk} \phi_{ijk}^2 , \qquad (11.1)$$

where U is the total energy, and the sums are over all particle pairs ij or all particle triplets ijk. The force constant is k_{ij} = k for bonds in contact and zero otherwise, and κ_{ijk} = κ for triplets with a common vertex, and zero otherwise. The vector u_i gives the displacement of node i from its equilibrium position. The various quantities are illustrated in Fig. 11.1.
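To make the energy expression concrete, the sketch below evaluates (11.1) for a given deformation. The names bonds (pairs (i, j) in contact), triplets (triplets (i, j, k) with a common vertex) and the helper bond_angle are assumptions introduced for this illustration, not code from the text.

import numpy as np

def elastic_energy(u, bonds, triplets, bond_angle, k=1.0, kappa=1.0):
    # Stretching part: sum of (1/2) k (u_i - u_j)^2 over bonded pairs
    Us = sum(0.5*k*np.sum((u[i] - u[j])**2) for i, j in bonds)
    # Bending part: sum of (1/2) kappa phi_ijk^2 over bonded triplets,
    # where bond_angle(i, j, m) returns the change in bond angle
    Ub = sum(0.5*kappa*bond_angle(i, j, m)**2 for i, j, m in triplets)
    return Us + Ub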


Fig. 11.1 Illustration of the initial bond lattice (dashed, gray) and the deformed bond lattice. Three nodes i, j, k are illustrated. The angle φijk is shown. The displacements ui and uj are shown as cyan vectors.

Elastic modulus. Let us address the effective elastic behavior of the percolation system. We would like to describe the material using a material property such as Young's modulus, E, or the shear modulus, G. Let us consider a three-dimensional sample with cross-sectional area A = L² and length L. Young's modulus, E, relates the tensile stress, σ_zz, applied normal to the surface with area A to the elongation ΔL_z in the z-direction:

$$\sigma_{zz} = \frac{F_z}{A} = E \frac{\Delta L_z}{L} . \qquad (11.2)$$
We can therefore write the relation between the force F_z and the elongation ΔL_z as

$$F_z = \frac{EA}{L} \Delta L = \frac{E L^2}{L} \Delta L = L^{d-2} E \, \Delta L . \qquad (11.3)$$

We recognize this as a result similar to the relation between the conductance and the conductivity of the sample, and we will call K = L^(d−2)E the compliance of the system. We recognize this as being similar to the spring constant of a spring.
Elastic properties when p < pc . What happens to the compliance of
the system as a function of p? When p < pc there are no connecting
paths from one side to another, and the compliance will therefore be
zero. It requires zero force Fz to generate an elongation ∆Lz in the
system. Notice that we are only interested in the infinitesimal effect of
deformation. If we compress the sample, we will of course eventually
generate a contacting path, but we are only interested in the initial
response of the system.
Elastic properties when p > pc . When p ≥ pc there will be at least one
path connecting the two edges. For a system with a bending stiffness, there
will be a load-bearing path through the system, and the deformation ∆Lz
of the system requires a finite force, Fz . The compliance K will therefore
be larger than zero. We have therefore established that for a system with
bending stiffness, the percolation threshold for rigidity coincides with
the percolation threshold for connectivity. However, for a central force
lattice, we know that the spanning cluster at pc will contain may singly
connected bonds. These bonds will be free to rotate, and as a result a
central force network will have a rigidity percolation threshold which is
higher than the connectivity threshold. Indeed, rigidity percolation for
central force lattices will have very high percolation thresholds in three
dimensions and higher. Here, we will only focus on lattices with bond
bending terms.
Behavior of E close to pc. Based on our experience with percolation systems, we may hypothesize that Young's modulus will follow a power-law in (p − pc) when p approaches pc:

$$E \propto \begin{cases} 0 & p < p_c \\ (p - p_c)^{\tau} & p > p_c , \end{cases} \qquad (11.4)$$

where τ is an exponent describing the elastic system. We will now use our knowledge of the percolation systems to show that this behavior is indeed expected, and to determine the value of the exponent τ.

11.1.1 Developing a theory for E(p, L)

Let us address the Young's modulus E(p, L) of a percolation system with occupation probability p and a system size L. We could also write E as a function of the correlation length ξ = ξ(p), so that E = E(ξ, L). Young's modulus is in general related to the compliance through E(ξ, L) = K(ξ, L)L^(−(d−2)). We can therefore address the compliance of the system and then calculate Young's modulus.
Dividing the system into boxes of size ξ. We will follow an approach similar to what we used to derive the behavior of P(p, L). First, we address the case when the correlation length ξ ≪ L. In this case, we can subdivide the L^d system into boxes of linear size ξ as illustrated in Fig. 11.2. There will be (L/ξ)^d such boxes. On this scale the system is homogeneous. Each box will have a compliance K(ξ, ξ), and the total compliance will be K(ξ, L).
Compliance of the combined system. We know that the total compliance of n elements in series is 1/n times the compliance of a single element. You can easily convince yourself of this addition rule for spring constants by addressing two springs in series. Similarly, we know that adding n elements in parallel will make the total system n times stiffer, that is, the compliance will be n times the compliance of an individual element. The total compliance K(ξ, L) of this system of (L/ξ)^d boxes is therefore:

$$K(\xi, L) = K(\xi, \xi) \left( \frac{L}{\xi} \right)^{d-2} . \qquad (11.5)$$
Young's modulus can then be found as

$$E(\xi, L) = L^{-(d-2)} K(\xi, L) = \frac{K(\xi, \xi)}{\xi^{d-2}} . \qquad (11.6)$$

In order to progress further we need to find the compliance K(ξ, ξ). However, we recognize that this is the compliance of the percolation system at p = pc when the system size is equal to the correlation length ξ. We are therefore left with the problem of finding the compliance of the spanning cluster at p = pc as a function of system size L.


Fig. 11.2 Illustration of subdivision of a system with p = 0.60 into regions with a size
corresponding to the correlation length, ξ. The behavior inside each box is as for a system
at p = pc , whereas the behavior of the overall system is that of a homogeneous system of
boxes of linear size ξ.

11.1.2 Compliance of the spanning cluster at p = pc

Again, we expect from our experience with the behavior of scaling structures that the compliance will scale with the system size with a scaling exponent ζ̃K:

$$K \propto L^{\tilde{\zeta}_K} . \qquad (11.7)$$

We will follow our standard approach: we assume a scaling behavior, establish a set of bounds for K, which will also serve as a proof of the scaling behavior of K, and then use this result to develop a general theory for K(p, L).
Energy, force and elongation of the system. We will use arguments based on the total energy of the system. The total energy of a system subjected to a force F = F_z resulting in an elongation ΔL is:

$$U = \frac{1}{2} K (\Delta L)^2 , \qquad (11.8)$$

where the elongation ΔL is related to the force F through ΔL = F/K. Consequently,

$$U = \frac{1}{2} K \left( \frac{F}{K} \right)^2 = \frac{1}{2} \frac{F^2}{K} . \qquad (11.9)$$

We can therefore relate the elastic energy of a system subjected to the force F directly to the compliance of that system.

Upper bound for the compliance. Our arguments will be based on the geometrical picture we have of the spanning cluster when p = pc. The cluster consists of singly connected bonds, blobs, and dangling ends. The dangling ends do not influence the elastic behavior, and can be ignored in our discussion. It is only the backbone that contributes to the elastic properties of the spanning cluster. We can find an upper bound for the compliance by considering the singly connected bonds. The system consists of the blobs and the singly connected bonds in series. The compliance must include the effect of all the singly connected bonds in series. However, adding the blobs in series as well will only contribute to lowering the compliance. We will therefore get an upper bound on the compliance by assuming all the blobs to be infinitely stiff, and therefore only include the effects of the singly connected bonds.
Let us therefore study the elastic energy in the singly connected bonds when the cluster is subjected to a force F. The energy, U, can be decomposed in a stretching part, U_s, and a bending part, U_b: U = U_s + U_b. For a singly connected bond from site i to site j, the change in length, δℓ_ij, due to the applied force F is δℓ_ij = F/k, where k is the force constant for a single bond. The energy due to stretching, U_s, is therefore

$$U_s = \sum_{ij} \frac{1}{2} k \, \delta\ell_{ij}^2 = \sum_{ij} \frac{1}{2} k \left( \frac{F}{k} \right)^2 = \frac{1}{2} \frac{M_{SC}}{k} F^2 , \qquad (11.10)$$

where M_SC is the mass of the singly connected bonds.


We can find a similar expression for the bending terms. For a bond between sites i and j, the change in angular orientation, δφ_ij, is due to the torque T = r_i F, where r_i is the distance to bond i in the direction normal to the direction of the applied force F: δφ_ij = T/κ. The contribution from bending to the elastic energy is therefore

$$U_b = \sum_{ij} \frac{1}{2} \kappa (\delta\phi_{ij})^2 = \frac{1}{2} \sum_{ij} \kappa \left( \frac{r_i F}{\kappa} \right)^2 = \frac{1}{2\kappa} M_{SC} R_{SC}^2 F^2 , \qquad (11.11)$$

where

$$R_{SC}^2 = \frac{1}{M_{SC}} \sum_{ij} r_i^2 , \qquad (11.12)$$

and the sum is taken over all the singly connected bonds.
The elastic energy of the singly connected bonds is therefore:

$$U_{SC} = \left( \frac{1}{2k} + \frac{R_{SC}^2}{2\kappa} \right) M_{SC} F^2 , \qquad (11.13)$$

and the compliance of the singly connected bonds is

$$K_{SC} = \frac{F^2}{2U} = \frac{1}{(1/k + R_{SC}^2/\kappa) M_{SC}} , \qquad (11.14)$$

which is an upper bound for the compliance K of the system.

Lower bound for the compliance. We can make a similar argument for a lower bound for the compliance K of the system. The minimal path on the spanning cluster provides the minimal compliance. The addition of any bonds in parallel will only make the system stiffer, and therefore increase the compliance. We can determine the compliance of the minimal path by calculating the elastic energy of the minimal path. We can make an identical argument as we did above, but we need to replace M_SC with the mass, M_min, of the minimal path, and the radius of gyration R_SC² with the radius of gyration of the bonds on the minimal path, R_min².
Kantor [20] has provided numerical evidence that both R_min² and R_SC² are proportional to ξ². When we are studying the spanning cluster at p = pc, this corresponds to R_min² and R_SC² being proportional to L². This shows that the dominating term for the energy is the bending and not the stretching energy when p is approaching pc.
Bounded expression for the compliance K. We have therefore determined the scaling relation

$$K_{min} \leq K \leq K_{SC} , \qquad (11.15)$$

where we have found that when L ≫ 1, K_min ∝ L^(−(D_min+2)) and K_SC ∝ L^(−(D_SC+2)). That is:

$$L^{-(D_{min}+2)} \leq K(L) \leq L^{-(D_{SC}+2)} . \qquad (11.16)$$

Because K(L) is bounded by two power-laws in L (for all values of L), we have also demonstrated that K(L) also is a power-law in L with an exponent ζ̃K satisfying the relation

$$-(D_{min}+2) \leq \tilde{\zeta}_K \leq -(D_{SC}+2) . \qquad (11.17)$$

11.1.3 Finding Young's modulus E(p, L)

This scaling relation gives us K(pc, L). We use this expression to find K(ξ, ξ), the compliance of a system of size ξ, from (11.6):

$$E(\xi, L) = \frac{K(\xi, \xi)}{\xi^{d-2}} \propto \frac{\xi^{\tilde{\zeta}_K}}{\xi^{d-2}} \propto \xi^{\tilde{\zeta}_K - (d-2)} . \qquad (11.18)$$
We have therefore found a relation for the scaling exponent τ:

$$E(p, L) = \xi^{-(d-2-\tilde{\zeta}_K)} \propto (p - p_c)^{(d-2-\tilde{\zeta}_K)\nu} \propto (p - p_c)^{\tau} . \qquad (11.19)$$

The exponent τ is therefore in the range:

$$(d - 2 + D_{SC} + 2)\nu \leq \tau \leq (d - 2 + D_{min} + 2)\nu . \qquad (11.20)$$

Bounds on the exponent τ. The resulting bounds on the scaling exponent are:

$$(D_{SC} + 2)\nu \leq \tau \leq (D_{min} + 2)\nu . \qquad (11.21)$$

For two-dimensional percolation the bounds are approximately

$$3.41 \leq \tau \leq 3.77 . \qquad (11.22)$$

Similarity between the flow and the elastic problems. We see that the bounds are similar to the bounds we found for the exponent ζ̃R. This similarity led Sahimi [29] and Roux [27] to conjecture that the elastic coefficients E and G and the conductivity σ are related through

$$\frac{E}{\sigma} \propto \xi^{-2} , \qquad (11.23)$$

and therefore that

$$\tau = \mu + 2\nu = (d + \tilde{\zeta}_R)\nu , \qquad (11.24)$$

which is well supported by numerical studies.
In the limit of high dimensions, d ≥ 6, the relation τ = µ + 2ν = 4 becomes exact. However, we can use as a rule of thumb that the exponent τ ≃ 4 in all dimensions d ≥ 2.
12 Diffusion in disordered media

In this chapter we will study diffusional transport in disordered media. We can model diffusional transport either by solving the diffusion equation or by studying the time development of random walks; both approaches produce the same results. We will use the statistical approach and study how random walkers spread with time in free space as well as on percolation clusters. We will introduce a scaling theory for the behavior of this process in both space and time, extending our previous scaling approaches and providing us with new tools and insights. We will do this in several steps, starting with a brief introduction to random walks and diffusion in uniform media, then introduce a computational model for random walks on the percolation cluster, and finally apply our full set of tools to develop scaling theories for the observed behavior [15, 13, 26].

12.1 Diffusion and random walks in homogeneous media

A typical example of a random walk is the random motion of a small dust particle due to random collisions with air molecules, a process called Brownian motion. Random walks are general processes that we often use as physical, theoretical or conceptual models.
A two-dimensional random walk. If a random walker starts at r = 0, its position r_n after n steps can be written as

$$\mathbf{r}_n = \mathbf{r}_0 + \sum_{i=1}^{n} \mathbf{u}_i , \qquad (12.1)$$


where u_i is step i. We will usually assume that the steps u_i are independent and isotropically distributed.
Generating a random walk. We can generate an example of a random walk by selecting u_i = (x_i, y_i), where x_i and y_i are selected from e.g. a uniform random distribution from −1 to 1. The following program calculates and visualizes the resulting path starting from the origin, and the resulting path is shown in Fig. 12.1.
from pylab import *
n=1000
u = 2*random(size=(n,2))-1
r = cumsum(u,axis=0)
plot(r[:,0],r[:,1])

Fig. 12.1 Plots of 10 random walks of size n = 100 (left) and n = 1000 (right).

We notice that the random walker spreads out gradually, leaving behind a trace with a complex geometry. Let us now see if we can develop a theory for the growth of random walks and for their geometry.

12.1.1 Theory for the time development of a random walk

We can develop a theory for the position r_n as a function of the number of steps n. For simplicity, we start the walker at the origin, so that r_0 = 0. First, we find the average position after n steps:

$$\langle \mathbf{r}_n \rangle = \left\langle \sum_{i=1}^{n} \mathbf{u}_i \right\rangle = \sum_{i=1}^{n} \langle \mathbf{u}_i \rangle = 0 , \qquad (12.2)$$

where we have used that since the u_i are isotropic, ⟨u_i⟩ = 0. This is not surprising; the random walker has the same probability to walk in all directions and therefore does not get anywhere on average.

However, from Fig. 12.1 we see that the extent of the path increases with the number of steps n. We can characterize this using measures similar to what we used to describe the geometry of the percolation clusters, by measuring r_n² instead. We find the average value of r_n² using the same approach:

$$\langle \mathbf{r}_n^2 \rangle = \left\langle \left( \sum_i \mathbf{u}_i \right) \cdot \left( \sum_j \mathbf{u}_j \right) \right\rangle \qquad (12.3)$$
$$= \left\langle \sum_i \sum_j \mathbf{u}_i \cdot \mathbf{u}_j \right\rangle \qquad (12.4)$$
$$= \left\langle \sum_{i=j} \mathbf{u}_i \cdot \mathbf{u}_i \right\rangle + \left\langle \sum_{i \neq j} \mathbf{u}_i \cdot \mathbf{u}_j \right\rangle \qquad (12.5)$$
$$= \sum_i \langle \mathbf{u}_i \cdot \mathbf{u}_i \rangle + \underbrace{\sum_{i \neq j} \langle \mathbf{u}_i \cdot \mathbf{u}_j \rangle}_{=0} \qquad (12.6)$$
$$= n \delta^2 , \qquad (12.7)$$

where ⟨u_i · u_i⟩ = δ² is a property of the distribution of u_i corresponding to the variance of the distribution, and where we have used that because u_i and u_j are independent, the average of their product is equal to the product of their averages:

$$\langle \mathbf{u}_i \cdot \mathbf{u}_j \rangle = \langle \mathbf{u}_i \rangle \cdot \langle \mathbf{u}_j \rangle = 0 \cdot 0 = 0 . \qquad (12.8)$$

Consequently, we have shown that ⟨r_n²⟩ = nδ². This is a very general result. We have found that the size of the diffusion path increases slowly with the number of steps: r_n = δn^(1/2). This result is valid in any dimension as long as the two basic assumptions are satisfied: the individual steps are independent, and each individual step has an isotropic distribution so that the average displacement from a single step is zero.
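We can quickly check this result numerically for the walk generated above, where each step is uniform on [−1, 1] in both directions so that δ² = ⟨u · u⟩ = 2/3:

from pylab import *
M, n = 1000, 1000
u = 2*random(size=(M,n,2))-1
r = cumsum(u,axis=1)
r2 = mean(sum(r**2,axis=2),axis=0)   # average over M walks
plot(arange(1,n+1)*2.0/3.0, r2)      # should fall on the diagonal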
The dimension of the random walk. Here we have demonstrated that the size of the random walk, measured as r², is proportional to the number of elements in the random walk. This is similar to the way we measured the size of a cluster using the radius of gyration of the cluster. Indeed, it can be shown that these two definitions give the same relation r_n² = b²n, where b is a constant of unit length that describes the distribution of a single step. We realize that n is the number of elements in the random walk, corresponding to s, the number of sites in a cluster. We have therefore found that r_n = bn^(1/2), or similarly, that n = (r_n/b)² ∝ r^(D_w). This implies that the dimension, D_w, of the random walk always is D_w = 2, independent of the embedding dimension d. This means that for d = 1 the random walk will overfill space. Indeed, we expect it to step on top of itself repeatedly. For d = 2 the random walk will just fill space since D_w = d, whereas for d = 3 and higher dimensions the random walk will fill a diminishing portion of space. (Just like the spanning cluster had a smaller scaling exponent than the spatial dimension, and hence the density of the spanning cluster decreased for larger systems.)

12.1.2 Continuum description of a random walker

We can also describe the motion of the random walker through the probability density P(r, t), where P(r, t)drdt is the probability for the random walker to be in the volume dr around r in the time period t to t + dt.
For a random walker on a grid, the probability to be at a grid position i is given as P_i(t). The probability for the walker to be at position i at the time t + δt is then

$$P_i(t + \delta t) = P_i(t) + \sum_j \left[ \sigma_{j,i} P_j(t) - \sigma_{i,j} P_i(t) \right] \delta t , \qquad (12.9)$$

where the sum is over all neighbors j of the site i. The term σ_{i,j} is the transition probability. The first term in the sum represents the probability that the walker during the time period δt walks into site i from site j, and the second term represents the probability that the walker during the time period δt walks from site i to one of the neighboring sites j. When δt → 0 this equation approaches a differential equation

$$\frac{\partial P_i}{\partial t} = \sum_j \left[ \sigma_{j,i} P_j(t) - \sigma_{i,j} P_i(t) \right] . \qquad (12.10)$$

If we assume that the transition probability is equal for all the neighbors, so that σ_{i,j} = 1/Z, where Z is the number of neighbors, the differential equation simplifies to

$$\frac{\partial P}{\partial t} = D \nabla^2 P , \qquad (12.11)$$

which we recognize as the diffusion equation, where the diffusion constant D is related to the transition probabilities σ_{i,j} and Z.
The general solution to this equation is

$$P(\mathbf{r}, t) = \frac{1}{(2\pi D t)^{d/2}} e^{-r^2/2Dt} = \frac{1}{(2\pi)^{d/2} |R|^{d}} e^{-\frac{1}{2}\left(\frac{r}{|R|}\right)^2} , \qquad (12.12)$$

where we have introduced |R|² = Dt.
It can be shown that the moments of this distribution are

$$\langle r^k \rangle = A_k R(t)^k \propto t^{k/2} , \qquad (12.13)$$

and specifically, that

$$\langle r^2 \rangle = \int P(\mathbf{r}, t) \, r^2 \, d\mathbf{r} = R^2(t) = Dt . \qquad (12.14)$$
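As a quick consistency check of the continuum description, we can iterate the master equation (12.9) directly. The sketch below does this on a one-dimensional ring with equal transition probabilities 1/Z = 1/2 and verifies that the variance grows linearly in time, as in (12.14):

from pylab import *
N, steps = 200, 500
P = zeros(N)
P[N//2] = 1.0                          # walker starts at the center
x = arange(N) - N//2
r2 = zeros(steps)
for t in range(steps):
    P = 0.5*(roll(P,1) + roll(P,-1))   # flow in from both neighbors
    r2[t] = sum(P*x**2)
plot(arange(1,steps+1), r2)            # linear growth corresponds to <r^2> = Dt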

We are now ready to address the behavior of random walkers on percolation clusters.

12.2 Random walks on clusters

We now have the basic tools to understand diffusion in homogeneous media: by studying the position r(t) of a random walker as a function of the number of steps n or the time t = nδt, where δt is the time for a single step.
How can we use this method to study diffusion on a percolation cluster? We want to address how a particle diffuses on the cluster. That is, we want to study how a random walker moves on the occupied sites in the percolation system. We will assume that the walker only can move onto connected neighbor sites in each step.
There are many different ways we can construct such measurements, and as always, we need to be very precise when we define both the experiment and our set of measures. Our plan is to drop a random walker onto a random site in the percolation system and measure the position r(t) of the walker as a function of time.

12.2.1 Developing a program to study random walks on clusters

In order to study the behavior we need to develop a program to generate a random walk on top of a percolation lattice, generate many such paths, and collect, analyze and visualize the resulting behavior.

The rules for such a walker would be that we select a position at


random and then parachute the walker into this position. We start with
a percolation system given by the L × L matrix cluster, where cluster
is True in the points where the sites are present. The initial positions,
ix, iy, in the x- and y-direction for the walker are therefore random
numbers between 0 and L − 1 respectively:
ix = randint(L)
iy = randint(L)

where L is the system size. If this site is empty, the walk stops immediately
and its length is zero:
if not cluster[ix,iy]:
    return

Storing the trajectory of the walker. We store the trace of the walker
in two arrays (we need both to handle periodic boundary conditions
later): walker_map which consists of the positions ix,iy of the walker
for each step, and displacement, which consists of the positions relative
to the initial position of the walker.
Random selection of next step. How do we select where the walker
can move? The walker is restricted to move to nearest neighbor sites that
are present. There are several approaches:
• We may select a direction at random and try to move in this direction.
If the walker cannot move in this direction it stays put for this step,
and then tries again in the next step. In this case, the walker may
have many steps without any motion.
• We may find all the directions the walker can possibly move in, and
then select one of these directions at random. In this case the walker
will move onto a new site in each step.
Both these methods effectively produce the same behavior. We will select
the second method. We therefore need to create a list of the possible
directions to move in. In order to make this list, we have a list called
directions of possible movement directions:
directions = np.zeros((2, 4), dtype=np.int64)
# X-dir: east and west, Y-dir: north and south.
directions[0, 0] = 1
directions[1, 0] = 0
directions[0, 1] = -1
directions[1, 1] = 0
directions[0, 2] = 0
directions[1, 2] = 1
directions[0, 3] = 0
directions[1, 3] = -1

For each step, we need to collect all the possible steps into a list called neighbor_arr. This is done by the following loop:
neighbor = 0
for idir in range(directions.shape[1]):
    dr = directions[:,idir]
    iix = ix + dr[0]
    iiy = iy + dr[1]
    if 0 <= iix < L and 0 <= iiy < L and cluster[iix, iiy]:
        neighbor_arr[neighbor] = idir
        neighbor += 1

If this list is empty, that is, if neighbor is zero, there are no possible
places to move. This means that the walker has landed on a cluster of
size s = 1. In this case, we stop and return with n = 1.
Finally, we select one of the neighbor directions at random, move the
walker into this site, update walker_map and displacement and repeat
the process.
# Select random direction from 0 to neighbor-1
randdir = randint(neighbor)
dir = neighbor_arr[randdir]
ix += directions[0, dir]
iy += directions[1, dir]
step += 1
walker_map[0, step] = ix
walker_map[1, step] = iy
displacement[:,step]=displacement[:,step-1]+directions[:,dir]

Here, step corresponds to n, the current step number.


Preparing the function. We put this into a function and use the numba
library to speed up simulation times.
import numba
import numpy as np

@numba.njit(cache=True)
def percwalk(cluster, max_steps):
    """Function performing a random walk on the spanning cluster.

    Parameters
    ----------
    cluster : np.ndarray
        Boolean array with 1's signifying a site in the spanning cluster.
    max_steps : int
        Maximum number of walker steps to perform.

    Returns
    -------
    walker_map : np.ndarray
        A coordinate map of the walk performed, x in [0] and y in [1]
    displacement : np.ndarray
        A coordinate map of relative positions, x in [0] and y in [1]
    num_steps : int
        Number of steps performed.
    """
    walker_map = np.zeros((2, max_steps))
    displacement = np.zeros_like(walker_map)
    directions = np.zeros((2, 4), dtype=np.int64)
    neighbor_arr = np.zeros(4, dtype=np.int64)
    # X-dir: east and west, Y-dir: north and south.
    directions[0, 0] = 1
    directions[1, 0] = 0
    directions[0, 1] = -1
    directions[1, 1] = 0
    directions[0, 2] = 0
    directions[1, 2] = 1
    directions[0, 3] = 0
    directions[1, 3] = -1
    # Initial random position
    Lx, Ly = cluster.shape
    ix = np.random.randint(Lx)
    iy = np.random.randint(Ly)
    walker_map[0, 0] = ix
    walker_map[1, 0] = iy
    step = 0
    if not cluster[ix, iy]:  # Landed outside the cluster
        return walker_map, displacement, step
    while step < max_steps-1:
        # Make list of possible moves
        neighbor = 0
        for idir in range(directions.shape[1]):
            dr = directions[:,idir]
            iix = ix + dr[0]
            iiy = iy + dr[1]
            if 0 <= iix < Lx and 0 <= iiy < Ly and cluster[iix, iiy]:
                neighbor_arr[neighbor] = idir
                neighbor += 1
        if neighbor == 0:  # No way out, return
            return walker_map, displacement, step
        # Select random direction from 0 to neighbor-1
        randdir = np.random.randint(neighbor)
        dir = neighbor_arr[randdir]
        ix += directions[0, dir]
        iy += directions[1, dir]
        step += 1
        walker_map[0, step] = ix
        walker_map[1, step] = iy
        displacement[:,step] = displacement[:,step-1] + directions[:,dir]
    return walker_map, displacement, step

Testing the function. Let us test the newly generated function on a few simplified cases. First, we try it on a system with p = 1, that is, on a homogeneous system.
from pylab import *
L = 50
p = 1
z = rand(L,L) < p
imshow(z,origin="lower")
walker_map, displacement, steps = percwalk(z,200)
# walker_map is oriented as row-column (ix, iy)
plot(walker_map[1,:steps],walker_map[0,:steps])

Walks from 10 such simulations are shown in Fig. 12.2. This looks
reasonable and nice, but we do notice that quite a few of these walks
reach the boundaries of the system. We may wonder how this finite
system size affects the behavior and statistics of the system.


Fig. 12.2 Trajectories of 10 random walks for a (homogeneous) system with L = 50 and
p = 1.

Measuring r²(t) for a random walker. The function percwalk returns the displacements, r_n, for a walk starting from r_0 = 0. We find r_n² and visualize the result for a single walk:
from pylab import *
L = 50
p = 1
z = rand(L,L) < p
walker_map, displacement, steps = percwalk(z,200)
r2 = sum(displacement**2,axis=0)
t = arange(len(r2))
plot(t,r2)

The resulting plot is shown in Fig. 12.3. We do not really learn much from
this plot — we need to collect more statistics. We need to generate many
different walks and then average over all the walks to find a statistically
better measure for r2 (t).


Fig. 12.3 (a) Trajectory of a random walk for a (homogeneous) system with L = 50 and
p = 1. (b) Plot of the corresponding r2 (t).

Collecting statistics for r²(t). We therefore write a small function to generate a given number of clusters with the given p. For each such cluster we will generate a given number of walks. Notice that we must also specify the maximum number of steps that we model for each walk. The following function implements these features:
@numba.njit(cache=True)
def find_displacements(p,L,num_systems,num_walkers,max_steps):
    displacements = zeros(max_steps)
    for system in range(num_systems):
        z = rand(L,L) < p
        for j in range(num_walkers):
            num_steps = 0
            while num_steps <= 1:
                walker_map,displacement,num_steps = percwalk(z,max_steps)
            displacements += sum(displacement**2, axis=0)
    displacements = displacements/(num_walkers*num_systems)
    return displacements

Notice a few details: If the number of steps is 1 or smaller, it means that the walker landed either on an empty site (n = 0) or on a single-site cluster (n = 1). We do not want to include these in our statistics since they provide little information about the behavior of the random walker. We use this program to collect statistics from 500 random walks of length n = 10000 steps on each of 100 systems of size L = 100:
p = 1
L = 100
max_steps = 10000
num_walkers = 500
num_systems = 100
displacements = find_displacements(p,L,num_systems,num_walkers,max_steps)
dr = displacements[1:]
t = arange(1,len(displacements))
loglog(t,dr)

The resulting plot in Fig. 12.4(a) shows that the system indeed behaves as we expect for small values of t. However, as t increases, we see that the effect of the finite system size L starts to affect the results. This is because the random walker is limited by the walls, and eventually the walk will be confined by the L × L system. This problem will also arise when we study the percolation system. How can we reduce this problem?


Fig. 12.4 Plot of r2 (t) for a L = 100 system with non-periodic and periodic boundary
conditions.

Introducing periodic boundary conditions. One way of reducing this problem is by introducing periodic boundary conditions. The idea is that if the random walker steps outside the lattice on the left side, it appears on the right-hand side instead. That is, if ix becomes −1, it is instead set to L − 1. We implement this in the percwalk function in the following.

The resulting plot of r2 (t) in Fig. 12.4 shows that this solves the problem
with the boundaries. This aspect will be even more important when we
study percolation systems in non-uniform media.
# With periodic boundary conditions - essential for good statistics
import numba
import numpy as np

@numba.njit(cache=True)
def percwalk(cluster, max_steps):
    """Function performing a random walk on the spanning cluster.

    Parameters
    ----------
    cluster : np.ndarray
        Boolean array with 1's signifying a site in the spanning cluster.
    max_steps : int
        Maximum number of walker steps to perform.

    Returns
    -------
    walker_map : np.ndarray
        A coordinate map of the walk performed, x in [0] and y in [1]
    displacement : np.ndarray
        A coordinate map of relative positions, x in [0] and y in [1]
    num_steps : int
        Number of steps performed.
    """
    walker_map = np.zeros((2, max_steps))
    displacement = np.zeros_like(walker_map)
    directions = np.zeros((2, 4), dtype=np.int64)
    neighbor_arr = np.zeros(4, dtype=np.int64)
    # X-dir: east and west, Y-dir: north and south.
    directions[0, 0] = 1
    directions[1, 0] = 0
    directions[0, 1] = -1
    directions[1, 1] = 0
    directions[0, 2] = 0
    directions[1, 2] = 1
    directions[0, 3] = 0
    directions[1, 3] = -1
    # Initial random position
    Lx, Ly = cluster.shape
    ix = np.random.randint(Lx)
    iy = np.random.randint(Ly)
    walker_map[0, 0] = ix
    walker_map[1, 0] = iy
    step = 0
    # Check if we landed outside the spanning cluster
    if not cluster[ix, iy]:
        # Return the map with starting position and number of steps
        return walker_map, displacement, step
    while step < max_steps-1:
        # Make list of possible moves
        neighbor = 0
        for idir in range(directions.shape[1]):
            dr = directions[:,idir]
            iix = ix + dr[0]
            iiy = iy + dr[1]
            # Periodic BC
            if iix >= Lx:
                iix = iix - Lx
            if iix < 0:
                iix = iix + Lx
            if iiy >= Ly:
                iiy = iiy - Ly
            if iiy < 0:
                iiy = iiy + Ly
            if cluster[iix, iiy]:
                neighbor_arr[neighbor] = idir
                neighbor += 1
        if neighbor == 0:  # No way out, return
            return walker_map, displacement, step
        # Select random direction from 0 to neighbor-1
        randdir = np.random.randint(neighbor)
        dir = neighbor_arr[randdir]
        ix += directions[0, dir]
        iy += directions[1, dir]
        # Wrap the stored position so it stays inside the lattice;
        # displacement keeps the unwrapped trajectory
        if ix >= Lx:
            ix = ix - Lx
        if ix < 0:
            ix = ix + Lx
        if iy >= Ly:
            iy = iy - Ly
        if iy < 0:
            iy = iy + Ly
        step += 1
        walker_map[0, step] = ix
        walker_map[1, step] = iy
        displacement[:,step] = displacement[:,step-1] + directions[:,dir]
    return walker_map, displacement, step

12.2.2 Diffusion on a finite cluster for p < pc

We now have all the tools to start studying the behavior of a random walker on top of a percolation system. We select p = pc and drop the random walker on a random position on the lattice. The resulting set of walks from such a simulation can be seen in Fig. 12.5.
from pylab import *
L = 100
p = 0.5927
z = rand(L,L) < p
imshow(z,origin="lower")
for i in range(30):
    walker_map, displacement, steps = percwalk(z,10000)
    plot(walker_map[1,:steps],walker_map[0,:steps],'o',markersize=4)

Fig. 12.5 Plot of 30 walks in a L = 100 system at p = pc.

Understanding behavior for p < pc. We then simulate a larger set of walks for p = 0.45, 0.50, 0.55, pc. The resulting plots of r²(t) are shown in Fig. 12.6. We see that when p < pc, r²(t) ∝ t^x for some time, but then after some time, r²(t) crosses over to a constant instead. How can we understand this behavior?


Fig. 12.6 Plots of r²(t; p, L) for p = 0.45, 0.50, 0.55, pc.

Long-term behavior when p < pc. For a single walker that lands on a cluster of size s, we expect that the walker will be limited to walk on this cluster and therefore cannot reach positions that are much further away than R_s. Thus, after a long time, we expect r²(t) ∝ R_s². If we repeat this experiment many times, each time dropping the walker onto a random occupied point in the system, we need to take the average over all clusters of size s and over all starting positions to find the average of r²(t) for all these different walks. If we drop the walker at a random position, the probability for that walker to land on a cluster of size s is sn(s, p), and the contribution from this cluster to r²(t) after a long time is R_s². Therefore, the average ⟨r²(t)⟩ for the walker is:

$$\langle r^2 \rangle \propto \left[ R_s^2 \right] = \sum_s R_s^2 \, s \, n(s, p) . \qquad (12.15)$$

We approximate this sum by an integral and replace n(s, p) by the scaling ansatz n(s, p) = s^(−τ)F(s/s_ξ), getting

$$\left[ R_s^2 \right] = \int_1^{\infty} R_s^2 \, s \, s^{-\tau} F(s/s_{\xi}) \, ds . \qquad (12.16)$$

We realize that the function F(s/s_ξ) falls to zero very rapidly when s > s_ξ and is effectively constant below that. We therefore replace the integral with an integral up to s_ξ:

$$\left[ R_s^2 \right] = \int_1^{s_{\xi}} R_s^2 \, s \, s^{-\tau} \, ds . \qquad (12.17)$$

We now insert that R_s² ∝ s^(2/D) and perform the integral, getting:

$$\left[ R_s^2 \right] \propto s_{\xi}^{2/D + 2 - \tau} \propto s_{\xi}^{2/D} \, s_{\xi}^{2-\tau} , \qquad (12.18)$$

where we recognize the first factor as ξ² ∝ (p − pc)^(−2ν) and the second factor from (4.34) as (p − pc)^β, so that

$$\left[ R_s^2 \right] \propto (p - p_c)^{\beta - 2\nu} . \qquad (12.19)$$

We notice that in this case the average is of R_s² over sn(s, p), but when we calculated the correlation length in (5.21) the average was of R_s² over s²n(s, p), and this is the reason for the appearance of the exponent β − 2ν and not simply −2ν as we got for the correlation length.
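A sketch of a numerical check of this prediction is shown below. It assumes the function find_displacements from above, estimates the long-time plateau as the mean over the tail of r²(t), and fits the p-dependence of the plateau; the choice of tail window is a crude assumption, and the estimate is only meaningful if the plateau is actually reached within max_steps.

from pylab import *
pc = 0.5927
pvals = array([0.45, 0.50, 0.55])
plateau = zeros(len(pvals))
for ip in range(len(pvals)):
    r2 = find_displacements(pvals[ip], 100, 10, 100, 10000)
    plateau[ip] = mean(r2[-1000:])      # estimate of the long-time limit
fit = polyfit(log(pc - pvals), log(plateau), 1)
print("estimate of beta - 2*nu:", fit[0])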
Short term behavior. There is a transition in $\langle r^2(t)\rangle$ to $\left[R_s^2\right]$ after some
crossover time t0. For times shorter than t0, we see from Fig. 12.6 that
the behavior appears to be $\langle r^2(t)\rangle \propto t^{2k}$ for some exponent 2k. We
notice that as p approaches pc, the crossover time t0 increases. All the
curves for the various p-values appear to have similar, or possibly the same,
behavior for t < t0.
In Fig. 12.6 we notice that the exponent 2k is not 1, as we found for
the homogeneous case. It is clearly lower than 1. If we measure it, we find
that $2k \simeq 0.66$ and $k \simeq 0.33$. We call this behavior anomalous diffusion,
because the mean squared distance $\langle r^2(t)\rangle$ does not grow linearly with
time, but with an exponent different from 1. What can we say about the
crossover time t0? We will return to this after examining the case when
p > pc.

12.2.3 Diffusion at p = pc
From Fig. 12.6 we also see that for p = pc the random walk follows
$\langle r^2(t)\rangle \propto t^{2k}$. This behavior is as expected: for times shorter than t0,
the walker behaves as if it were at pc, whereas after a long time, t > t0,
we start noticing that the walker is restricted when it diffuses on the
finite clusters. Another way to think of this is that the crossover time t0
increases as p → pc, and diverges at p = pc. The exponent k is a universal
exponent for diffusion on percolation systems. It does not depend on the
lattice structure or the rules for connectivity, but it does depend on the
embedding dimension d.

12.2.4 Diffusion for p > pc


We can use the same computational approach to study the behav-
ior of the random walker when p > pc. The resulting plots for p =
0.8, 0.75, 0.70, 0.65 and pc are shown in Fig. 12.7. The plots show that
when p > pc, for short times the $\langle r^2(t)\rangle$ curve follows the behavior for
p = pc with $\langle r^2(t)\rangle \propto t^{2k}$. But after a crossover time t0, the behavior
changes and crosses over to $\langle r^2(t)\rangle \propto t$, that is, it crosses over
to the behavior of a homogeneous system. How can we understand this?
Developing a model for p > pc. We know that when p = 1, the system
is homogeneous, and $\langle r^2 \rangle = D(1)\,t$. We will therefore write the general
relation for p > pc:

$$ \langle r^2 \rangle = D(p)\, t \; , \quad r \gg \xi \; . \qquad (12.20) $$
What behavior do we expect from D(p)? We expect D(p) to increase in
a way similar to the density of the backbone or the conductivity g. In
fact, the Einstein relation for diffusion relates the diffusion constant to
the conductance through:

$$ D(p) \propto g(p) \propto (p - p_c)^{\mu} \; . \qquad (12.21) $$


Fig. 12.7 Plots of $\langle r^2(t; p, L)\rangle$ for p = 0.8, 0.75, 0.70, 0.65, pc, shown as $\log_{10}\langle r^2\rangle$ versus $\log_{10} t$. The legend also indicates reference slopes k = 0.35 and k = 0.5.

We therefore expect that when p > pc and the time is larger than a
crossover time t0(p), the behavior scales with exponent µ, identical to
that of the conductivity, while for times shorter than the crossover
time, the behavior is identical to the behavior at p = pc. We can un-
derstand this in the same way as above: when t < t0 the walker does
not yet experience that the characteristic clusters are limited by a finite
characteristic length ξ.

12.2.5 Scaling theory


Let us develop a scaling theory for the behavior of hr2 i. We will assume
that when the time is smaller than a cross-over time, the behavior is
according to a power-law with exponent 2k, and that when the time is
larger than the cross-over time, the behavior is either that of diffusion
with diffusion constant D(p) for p > pc , or it reaches a constant plateau
for the case when p < pc .
Let us introduce a scaling ansatz with these properties:

$$ \langle r^2 \rangle = t^{2k} f\!\left[(p - p_c)\,t^{x}\right] \; . \qquad (12.22) $$

We could also have started from any of the end-points, such as from the
assumption that

$$ \langle r^2 \rangle = (p_c - p)^{\beta - 2\nu}\, G_1\!\left(\frac{t}{t_0}\right) \; , \qquad (12.23) $$

or

$$ \langle r^2 \rangle = (p - p_c)^{\mu}\, G_2\!\left(\frac{t}{t_0}\right) \; . \qquad (12.24) $$

We have two unknown exponents, k and x, that must be determined from
independent knowledge. We will assume that the function f(u) has the
behavior

$$ f(u) = \begin{cases} \text{const.} & |u| \ll 1 \\ u^{\mu} & u \gg 1 \\ (-u)^{\beta - 2\nu} & u \ll -1 \end{cases} \qquad (12.25) $$

Let us now address the various limits in order to determine the scaling
exponents k and x in terms of known exponents.
Scaling behavior in the limit p > pc. First, we know that when p > pc,
that is, when $u \gg 1$, we have that

$$ \langle r^2 \rangle \propto (p - p_c)^{\mu}\, t \; , \qquad (12.26) $$

which should correspond to the functional form from the ansatz:

$$ (p - p_c)^{\mu}\, t \propto t^{2k} f\!\left((p-p_c)t^x\right) \propto t^{2k} \left[(p-p_c)t^x\right]^{\mu} \; . \qquad (12.27) $$

This results in the exponent relation

$$ 2k = 1 - \mu x \; , \qquad (12.28) $$

or

$$ k = \frac{1 - \mu x}{2} \; . \qquad (12.29) $$
Scaling behavior in the limit p < pc. Similarly, we know that the
behavior in the limit $u \ll -1$ should be proportional to $(p_c - p)^{\beta - 2\nu}$.
Consequently, the scaling ansatz gives

$$ (p_c - p)^{\beta - 2\nu} \propto t^{2k} f\!\left((p-p_c)t^x\right) \propto t^{2k} \left[(p_c - p)t^x\right]^{\beta - 2\nu} \; , \qquad (12.30) $$

which results in the exponent relation:

$$ 2k + x(\beta - 2\nu) = 0 \; . \qquad (12.31) $$

Solving to find the exponents. We solve the two equations for x and
k: substituting $2k = 1 - \mu x$ from (12.28) into (12.31) gives
$1 - \mu x + x(\beta - 2\nu) = 0$, that is, $x(\mu + 2\nu - \beta) = 1$. We therefore find

$$ k = \frac{1}{2}\left[1 - \frac{\mu}{2\nu + \mu - \beta}\right] \; , \qquad (12.32) $$

and

$$ x = \frac{1}{2\nu + \mu - \beta} \; . \qquad (12.33) $$

Our argument therefore shows that the scaling ansatz is indeed consistent
with the limiting behaviors we have already determined, and it allows us
to make a prediction for k and x.
Testing the scaling ansatz. We can test the scaling function by a
direct plot of the simulated results. The scaling relation states that
$\langle r^2(t)\rangle = t^{2k} f[(p - p_c)t^x]$, which means that $\langle r^2(t)\rangle\, t^{-2k} = f[(p - p_c)t^x]$. If
we therefore plot $\langle r^2(t)\rangle\, t^{-2k}$ on one axis and $(p - p_c)t^x$ on the other axis,
all the data for the various values of p should fall onto a common curve
corresponding to the function f(u). This is illustrated in Fig. 12.8, which
shows that the scaling ansatz is in good correspondence with the data.
Indeed, the plot also shows that the assumptions about the shape of the
scaling function f (u) are correct.
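A sketch of such a collapse plot, assuming the r2 dictionary and nsteps from the earlier simulation sketch; the exponent values below are the commonly quoted two-dimensional ones, with µ ≈ 1.3 an approximation:

# Sketch: data collapse of <r^2(t)> for p < pc according to (12.22)
from pylab import *
nu, beta, mu = 4.0/3.0, 5.0/36.0, 1.3  # 2D exponents; mu is approximate
x = 1.0/(2*nu + mu - beta)
k = 0.5*(1.0 - mu*x)
pc = 0.5927
t = arange(1, nsteps)
for p in [0.45, 0.50, 0.55]:
    u = (pc - p)*t**x  # the scaling variable |p - pc| t^x
    plot(log10(u), log10(r2[p][1:]*t**(-2.0*k)), label="p = " + str(p))
legend()
xlabel("log10((pc - p) t^x)")
ylabel("log10(t^{-2k} <r^2>)")
show()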

Fig. 12.8 Plots of $\langle r^2(t; p, L)\rangle$ for p = 0.45, 0.50, 0.55 rescaled according to the scaling theory: $\log_{10}(t^{-2k}\langle r^2\rangle)$ plotted against $\log_{10}(|p - p_c|\, t^x)$.

Interpreting the dimension of the walk at p = pc. When p = pc, we
find that

$$ \langle r^2 \rangle \propto t^{2k} = t^{\frac{2\nu - \beta}{2\nu + \mu - \beta}} \; . \qquad (12.34) $$

We can write this relation in the same way as we wrote the behavior of
an ordinary random walk,

$$ t \propto r^{d_w} \; , \qquad (12.35) $$

where $d_w$ is the dimension of the random walk. We have therefore found
that

$$ d_w = \frac{1}{k} = 2 + \frac{\mu}{\nu - \beta/2} \; , \qquad (12.36) $$
which is a number larger than 2. This means that for a given time, the
walk remains more compact, which is consistent with our intuition.
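As a rough numerical check (these numbers are not from the text), inserting the two-dimensional values $\nu = 4/3$, $\beta = 5/36$ and the approximate conductivity exponent $\mu \approx 1.3$ into (12.32), (12.33) and (12.36) gives

$$ x = \frac{1}{2\nu + \mu - \beta} \approx 0.26 \; , \qquad k = \frac{1 - \mu x}{2} \approx 0.33 \; , \qquad d_w = \frac{1}{k} \approx 3.0 \; , $$

so that $2k \approx 0.66$, consistent with the exponent measured from Fig. 12.6.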

Defining the cross-over time. We have introduced a cross-over time,
t0, which is defined so that

$$ (p - p_c)\, t_0^{x} \simeq 1 \; , \qquad (12.37) $$

which gives

$$ t_0 \propto |p - p_c|^{-1/x} \propto |p - p_c|^{-(2\nu + \mu - \beta)} \; . \qquad (12.38) $$
Interpreting the crossover time. How can we interpret this relation?
We can decompose the relation as:

$$ t_0 \propto \frac{|p - p_c|^{\beta - 2\nu}}{|p - p_c|^{\mu}} \; , \qquad (12.39) $$

where we know that the average radius of gyration for the clusters is

$$ \left[ R_s^2 \right] \propto |p - p_c|^{\beta - 2\nu} \; . \qquad (12.40) $$

This gives us an interpretation of the cross-over time for diffusion:

$$ t_0(p) \propto \frac{\left[ R_s^2 \right]}{D} \; , \qquad (12.41) $$

where D is the diffusion constant. Why is this time not proportional to
$\xi^2/D$, the time it takes to diffuse a distance proportional to the correlation
length? The difference comes from the particular way we devised the
experiment: the walker was dropped onto a randomly selected occupied
site.
Interpreting the behavior for p > pc. Let us now address what happens
when p > pc. In this case, the variance of the position has two main
contributions: one term from the spanning cluster and one term from
the finite clusters:

$$ \left[ \langle r^2 \rangle \right] = D\,t = \frac{P}{p} D' t + \left[ R_s^2 \right] \; , \qquad (12.42) $$
where the first term, $(P/p) D' t$, is the contribution from the random walker
on the infinite cluster. This term consists of the diffusion constant $D'$
for a walker on the spanning cluster, and the prefactor P/p, which comes
from the probability for the walker to land on the spanning cluster: for
a random walker placed randomly on an occupied site in the system, the
probability for the walker to land on the spanning cluster is P/p, and the
probability to land on any of the finite clusters is 1 − P/p. The second
term is due to the finite clusters. This term reaches a constant value for
large times t. The only time dependence is therefore in the first term,
and we can write:
$$ D\,t = \frac{P}{p} D' t \; , \qquad (12.43) $$

for long times, t. That is:

$$ D' = \frac{D\,p}{P} \propto (p - p_c)^{\mu - \beta} \propto \xi^{-\frac{\mu - \beta}{\nu}} \propto \xi^{-\theta} \; , \qquad (12.44) $$

where we have introduced the exponent

$$ \theta = \frac{\mu - \beta}{\nu} \; . \qquad (12.45) $$
Interpreting the crossover time for p > pc. We have therefore found
an interpretation of the cross-over time t0, and, in particular, of the
appearance of β in the exponent. We see that the cross-over time is

$$ t_0 \propto \frac{|p - p_c|^{\beta - 2\nu}}{|p - p_c|^{\mu}} \propto \frac{\xi^2}{D'} \; . \qquad (12.46) $$

The interpretation of t0 is therefore that t0 is the time the walker needs
to travel a distance ξ when it is diffusing with diffusion constant $D'$ on
the spanning cluster.

12.2.6 Diffusion on the spanning cluster


How does the random walker behave on the spanning cluster? We have
found that for p > pc and for t > t0 the mean square displacement
increases according to

$$ \langle r^2 \rangle = D' t \propto (p - p_c)^{\mu - \beta}\, t \; . \qquad (12.47) $$

For t < t0, we expect the behavior to be

$$ \langle r^2 \rangle \propto t^{2k'} \; , \qquad (12.48) $$

as illustrated in Fig. 12.6.


Interpretation of t0 for walks on the spanning cluster. We expect the
relations to be valid up to the point $(t_0, \xi^2)$, where both descriptions
should provide the same result. Therefore we expect

$$ \xi^2 \propto t_0^{2k'} \propto D' t_0 \; , \qquad (12.49) $$

and therefore that

$$ t_0 \propto \frac{\xi^2}{D'} \propto \frac{(p - p_c)^{-2\nu}}{(p - p_c)^{\mu - \beta}} \propto (p - p_c)^{-(2\nu + \mu - \beta)} \; . \qquad (12.50) $$

Consequently, the value of t0 is the same for diffusion on the spanning
cluster alone as for diffusion on any cluster, including the spanning
cluster. In general, we can interpret t0 as the time it takes for the walker
to diffuse to the end of its cluster when p < pc, and the time it takes to
diffuse a distance ξ on the spanning cluster when p > pc.
Interpretation of k′ for walks on the spanning cluster. Let us check
the other exponent, k′. We find that

$$ \xi^2 \propto (p - p_c)^{-2(2\nu + \mu - \beta)\,k'} \; , \qquad (12.51) $$

and therefore that

$$ k' = \frac{\nu}{2\nu + \mu - \beta} \; , \qquad (12.52) $$

which is not the same as we found in (12.32) for all clusters. We find
that k′ is slightly larger than k.
Interpretation of k′ and k. What is the interpretation of k′? If we
consider random walks on the spanning cluster only, the behavior at
p = pc is described by

$$ \langle r^2 \rangle \propto t^{2k'} \; , \qquad (12.53) $$

which gives

$$ r^{1/k'} \propto t \propto r^{d_w} \; , \qquad (12.54) $$

where $d_w$ can be interpreted as the dimension of the random walk. For
the case of random walkers on the spanning cluster at p = pc we have
therefore found that

$$ d_w = 2 + \frac{\mu - \beta}{\nu} \; . \qquad (12.55) $$
This walk dimension is larger than 2. This corresponds to the walker
getting stuck on the percolation cluster, and the structure of the walk is
therefore more dense or compact.

12.2.7 (Advanced) The diffusion constant D


We can use the theory we have developed so far to address the behavior of
the diffusion constant with time. Fick's law can generally be formulated
as

$$ \langle r^2 \rangle = D\,t \; , \qquad (12.56) $$

or, equivalently, we can find the diffusion constant for Fick's law from:

$$ D = \frac{\partial}{\partial t} \langle r^2 \rangle \; . \qquad (12.57) $$
Now, we have established that for diffusion on the spanning cluster for
p = pc , the diffusion is anomalous. That is, the relation between the
square distance and time is not linear, but a more complicated power-law
relationship
$$ \langle r^2 \rangle \propto t^{2k'} \; . \qquad (12.58) $$

As a result, we find that the diffusion constant $D'$ for diffusion on the
spanning cluster defined through Fick's law is

$$ D' \propto \frac{\partial}{\partial t}\, t^{2k'} \propto t^{2k' - 1} \; . \qquad (12.59) $$
We can therefore interpret the process as a diffusion process where $D'$
decays with time.
In the anomalous regime, we find that

$$ r \propto t^{k'} \; , \qquad (12.60) $$

and therefore that

$$ r^{1/k'} \propto t \; . \qquad (12.61) $$

We can therefore also write the diffusion constant $D'$ as

$$ D' \propto t^{2k' - 1} \propto r^{2 - 1/k'} \propto r^{-\theta} \; . \qquad (12.62) $$

We could therefore also say that the diffusion constant is decreasing with
distance. The reverse is also generally true: Whenever D depends on the
distance, we will end up with anomalous diffusion.
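One way to see this decay directly in simulation data is to compute the running exponent, that is, the local logarithmic slope of $\langle r^2(t)\rangle$. A minimal sketch, assuming the r2 dictionary and nsteps from the simulation sketch earlier in this section:

# Sketch: the running exponent 2k(t) = d log<r^2> / d log t, computed
# numerically from the p = pc curve of the r2 dictionary above
from pylab import *
t = arange(1, nsteps)
logr2 = log(r2[0.5927][1:])      # skip t = 0, where r^2 = 0
slope = gradient(logr2, log(t))  # local logarithmic slope
semilogx(t, slope)
xlabel("t")
ylabel("local exponent 2k(t)")
show()

In the anomalous regime the slope should settle near 2k′ < 1, and the corresponding Fick's-law diffusion constant decays as $t^{2k'-1}$.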
We can also relate these results back to the diffusion equation. The
diffusion equation for the random walk was:

$$ \frac{\partial P}{\partial t} = D' \nabla^2 P = \nabla\cdot\left(D' \nabla P\right) \; , \qquad (12.63) $$

where the last form is the correct one when the diffusion constant depends
on the spatial coordinate.
We can rewrite the dimension, $d_w$, of the walk to make the relation
between the random walker and the dimensionality of the space on which
it is moving more obvious:

$$ d_w = 2 - d + \frac{\mu}{\nu} + d - \frac{\beta}{\nu} \; , \qquad (12.64) $$

where we recognize the first term as

$$ \tilde{\zeta}_R = 2 - d + \frac{\mu}{\nu} \; , \qquad (12.65) $$

and the second term as the fractal dimension, D, of the spanning cluster:

$$ D = d - \frac{\beta}{\nu} \; . \qquad (12.66) $$

We have therefore established the relation

$$ d_w = \tilde{\zeta}_R + D \; . \qquad (12.67) $$

This relation is actually generalizable: for a random walker restricted to
walk only on the backbone, the dimension of the walk is

$$ d_{w,B} = \tilde{\zeta}_R + D_B \; . \qquad (12.68) $$
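As a consistency check (again with numbers not from the text), the two-dimensional values $\nu = 4/3$, $\beta = 5/36$ and $\mu \approx 1.3$ give

$$ \tilde{\zeta}_R = \frac{\mu}{\nu} \approx 0.98 \; , \qquad D = 2 - \frac{\beta}{\nu} = \frac{91}{48} \approx 1.90 \; , \qquad d_w = \tilde{\zeta}_R + D \approx 2.87 \; , $$

in agreement with $d_w = 2 + (\mu - \beta)/\nu$ from (12.55).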

12.3 Exercises

Exercise 12.1: Random walks on the spanning cluster


In this exercise we will use and modify the program percwalk from
the text to study random walks in percolation systems, and on the
spanning cluster in particular. We want to find the dimension dw of a
two-dimensional random walk on the spanning cluster.
a) Find the distance $\langle r^2 \rangle$ as a function of the number of steps N for
random walks on the spanning cluster for p = pc.
b) Find the dimension, $d_w$, of the walk, from the relation $\langle r^2 \rangle \propto N^{2/d_w}$.
c) Find the distribution P (r, N ) for the position r as a function of the
number of steps N for a random walker on the percolation cluster.
d) (Advanced) Can you produce a data-collapse for the distribution
P(r, N)?
e) (Advanced) Can you determine the functional form of the distribution
P(r, N)? Is it a Gaussian?

Exercise 12.2: Random walks on percolation clusters


In this exercise we will use and modify the program percwalk to study
random walks on the spanning cluster of a percolation system.
a) Find the distance $\langle r^2 \rangle$ as a function of the number of steps N for
random walks on the spanning cluster for p < pc and for p > pc.
b) Plot $\log \langle R^2 \rangle$ as a function of N for various values of p.
c) Can you find the behavior of the correlation length ξ from this plot?
d) Discuss the behavior of the characteristic cross-over time t0 based on
the plot.

Exercise 12.3: Self-avoiding walks on fractals


(Advanced) In this exercise we will use the program percwalk to study
a self-avoiding random walker on the spanning cluster. In this exercise
you will need to collect extensive statistics to be able to determine the
scaling behavior.
a) Find the distance $\langle R^2 \rangle$ as a function of the number of steps N for
random walks on the spanning cluster for p = pc.
b) Find the dimension, $d_w$, of the walk, from the relation $\langle R^2 \rangle \propto N^{2/d_w}$.
13 Dynamic processes in disordered media

So far, we have studied the behavior and properties of systems with


disorder, such as the model porous material we call the percolation
system. That is, we have studied properties that depend on the existing
disorder of the material. In this chapter, we will start to address dynamical
processes that generate percolation-like disordered structures, but where
the structures evolve, develop, and change in time.
The first dynamic problem we will address is the formation of diffusion
fronts, where we will demonstrate that the front of a system of diffusing
particles can be described as a percolation system.
The second dynamic problem we will address is the slow displacement
of one fluid by another in a porous medium. We will in particular
demonstrate that the invasion percolation process generates a fractal
structure similar to the percolation cluster by itself - it is a process
that drives itself to a critical state, similar to the recently introduced
notion of Self-Organized Criticality [3]. We will then address how we
can study similar processes in the gravity field, and, in particular, the
influence of stabilizing and destabilizing mechanisms. Invasion percolation
in a destabilizing gravity field provides a good model to describe and
understand the process of primary migration.

13.1 Diffusion fronts

The first dynamical problem we will address is the structure of a diffusion


front [31]. Let us address a diffusion process on a square lattice. One


example of such a process is the two-dimensional diffusion of particles


from a source at x = 0 into the x > 0 plane, when particles are not
allowed to overlap. The system of diffusing particles is illustrated in
Fig. 13.1.


Fig. 13.1 Illustration of the diffusion front. Particles are diffusing from a source at the
left side. We address the front separating the particles connected to the source from the
particles not connected to the source. The average distance is given by xc shown in the
figure. The width of the front, ξ, is also illustrated in the figure. The different clusters
are colored to distinguish them from each other. The close-up in figure (b) illustrates the
finer details of the diffusion fronts, and the local cluster geometries.

Exact solution for concentration. For this problem we know the exact
solution for the concentration, c(x, t), of particles, corresponding to the
occupation probability P(x, t). The solution to the diffusion equation
with a constant concentration at the source, P(x = 0, t) = 1, is the error
function given as the integral over a Gaussian function:

$$ P(x, t) = 1 - \operatorname{erf}\!\left(\frac{x}{\sqrt{Dt}}\right) \; , \qquad (13.1) $$

where the error function is defined as the integral:

$$ \operatorname{erf}(u) = \frac{2}{\sqrt{2\pi}} \int_0^{u} e^{-v^2/2}\, dv \; . \qquad (13.2) $$

This solution produces the expected deviation $\langle x^2 \rangle = Dt$, where D is the
diffusion constant for the particles. There is no y (or z) dependence in
the solution.

Structure of clusters connected to the diffusion front. We will address
the structure of connected clusters of diffusing particles. Two particles are
connected if they are neighbors, so that they inhibit each other's diffusion
in a particular direction. If we fix t, we notice that the system will be
compact close to x = 0, and that there will only be a few thinly spread
particles when $x \gg \sqrt{Dt}$. In this system, the occupation probability
varies with both time t and spatial position x. However, we expect the
system of diffusing particles to be connected to the source out to a
distance xc corresponding to the point where the occupation probability
is equal to the percolation threshold pc for the lattice type studied. That
is,

$$ P(x_c, t) = p_c \qquad (13.3) $$

defines the center of the diffusion front: the front separating the particles
that are connected to the source from the particles that are not connected
to the source. We notice that $x_c(t) \propto \sqrt{Dt}$.
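As a small numerical illustration (a sketch, not from the text), the front position $x_c(t)$ can be found from (13.1) and (13.3) by root finding. Note that scipy's erf follows the standard convention $\operatorname{erf}(u) = (2/\sqrt{\pi})\int_0^u e^{-v^2} dv$, which equals the convention in (13.2) evaluated at $u\sqrt{2}$, hence the rescaling below:

# Sketch: locate x_c(t) where P(x_c, t) = p_c for the erf profile (13.1)
from pylab import *
from scipy.special import erf
from scipy.optimize import brentq
D, pc = 1.0, 0.5927
def P(x, t):
    # the text's erf convention in (13.2) equals erf_standard(u/sqrt(2))
    return 1.0 - erf(x/sqrt(2.0*D*t))
for t in [10.0, 100.0, 1000.0]:
    xc = brentq(lambda x: P(x, t) - pc, 0.0, 10.0*sqrt(D*t))
    print(t, xc, xc/sqrt(D*t))  # the ratio x_c/sqrt(Dt) is constant

The constant ratio in the last column confirms that $x_c(t) \propto \sqrt{Dt}$.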
Width of the diffusion front. What is the width of the diffusion front?
For a given time t, the occupation probability decreases with δx = x − xc.
Similarly, the correlation length will therefore also depend on the distance
δx to the average position of the front. We expect that a cluster may be
connected to the front if it is within a distance ξ of xc. Particles that are
further away than the local correlation length, ξ, will not be connected
over such distances, and will therefore not be connected to the front.
Particles that are closer to xc than ξ will typically be connected through
some connecting path. We will therefore introduce ξ as the width of the
front, corresponding to the distance at which the local correlation length,
due to the occupation probability P(x, t), is equal to the distance from
xc. The local correlation length ξ(x) is given as

$$ \xi(x) = \xi_0 \left| P(x, t) - p_c \right|^{-\nu} \; . \qquad (13.4) $$

The distance w at which ξ(xc + w) = w gives the width of the front. We
can write this self-consistency equation for w as

$$ w = \xi_0 \left| P(x_c + w, t) - p_c \right|^{-\nu} \; . \qquad (13.5) $$

Let us introduce a Taylor expansion of P(x, t) around x = xc:

$$ P(x, t) \simeq P(x_c, t) + \left.\frac{dP}{dx}\right|_{x_c} (x - x_c) \; , \qquad (13.6) $$

where we recognize that $x_c \propto \sqrt{Dt}$ gives

$$ \left.\frac{dP}{dx}\right|_{x_c} \propto \frac{1}{\sqrt{Dt}} \propto \frac{1}{x_c} \; . \qquad (13.7) $$

We insert this into the self-consistency equation (13.5), getting

$$ w = \xi_0 \left| w \left.\frac{dP}{dx}\right|_{x_c} \right|^{-\nu} \propto (w/x_c)^{-\nu} \; , \qquad (13.8) $$

that is, $w^{1+\nu} \propto x_c^{\nu}$, which gives

$$ w \propto x_c^{\nu/(1+\nu)} \; . \qquad (13.9) $$

The width of the front therefore scales with the average position of the
front, and the scaling exponent is related to the scaling exponent of the
correlation length for the percolation problem.
Time development. What happens in this system with time? Since xc
is increasing with time, we see that the relative width decreases:

$$ \frac{w}{x_c} \propto \frac{x_c^{\nu/(1+\nu)}}{x_c} \propto x_c^{-1/(1+\nu)} \; . \qquad (13.10) $$

This effect will also become apparent under renormalization. Applying
a renormalization scheme with length b will result in a change of the
front width by a factor $b^{\nu/(1+\nu)}$, but along the y-direction the rescaling
will simply be by a factor b. Successive applications will therefore make
the front narrower and narrower. This difference in scaling along the
x and the y axes is referred to as self-affine scaling, in contrast to
self-similar scaling, where the rescaling is the same in all directions.

13.2 Invasion percolation

We will now study the slow injection of a non-wetting fluid into a porous
medium saturated with a wetting fluid. In the limit of infinitely slow
injection, this process is termed invasion percolation for reasons that will
soon become obvious [39, 12].
Physical system — fluid saturated porous medium. When a non-
wetting fluid is injected slowly into a saturated porous medium, the
pressure in the non-wetting fluid must exceed the capillary pressure in
a pore-throat for the fluid to propagate from one pore to the next, as
illustrated in Fig. 13.2. The pressure difference, δP, needed corresponds
to the capillary pressure Pc, given as

$$ P_c = \frac{\Gamma}{\ell} \; , \qquad (13.11) $$
where Γ is the interfacial surface tension, and $\ell$ is the characteristic size
of the pore-throats in the porous medium. However, there will be some
disorder present in the porous medium, corresponding to local variations in
the characteristic pore sizes $\ell$. This will lead to a distribution of capillary
pressure thresholds Pc needed to invade a particular pore. We will assume
that the medium can be described as a set of pores connected by pore
throats with a uniform distribution of capillary pressure thresholds, and
we will assume that the capillary pressure thresholds are not correlated
but statistically independent. We can therefore rescale the pressure scale,
by subtracting the minimum pressure threshold and dividing by the
range of pressure thresholds, and describe the system as a matrix of
critical pressures Pi required to invade a particular site.

Fig. 13.2 Illustration of the invasion percolation process in which a non-wetting fluid
is slowly displacing a wetting fluid. The left figure shows the interface in a pore throat:
the pressure in the invading fluid must exceed the pressure in the displaced fluid by an
amount corresponding to the capillary pressure $P_c = \Gamma/\ell$, where Γ is the interfacial
surface tension, and $\ell$ is a characteristic length for the pore throat. The right figure
illustrates the invasion front after injection has started. The fluid may invade any of
the sites along the front indicated by small circles. The site with the smallest capillary
pressure threshold will be invaded first, changing the front and exposing new boundary
sites.

Modeling the fluid displacement process. The fluid displacement pro-


cess can then be modeled by assuming that all the sites on the left side
of the matrix are in contact with the invading fluid. The pressure in the
invading fluid is increased slowly, until the fluid invades the connected
site with the lowest pressure threshold. This generates a new set of
invaded sites in contact with the inlet, and a new set of neighboring
sites. The invasion process continues until the invading fluid reaches the
opposite side. Further injection will then not produce any further fluid
displacement, the fluid will flow through the system through the open
path generated.
Computational implementation. How can we transfer this model de-
scription to a computational model? We introduce a lattice of pores to
represent the pore throat sizes. For each lattice site, there is a critical
pore size into that pore, with a critical pressure, pi , needed to push the
fluid into this pore. We map the pressure onto a scale from 0.0 to 1.0,
where 1.0 represents the pressure needed to invade all pores in the lattice.
We then start to gradually increase the pressure in the fluid and allow
the fluid to invade from the left side of the lattice. Let us say we have
increased the pressure to the value p (0 ≤ p ≤ 1). This would mean that
all sites that have pi ≤ p and that are connected to the left side would
be invaded.
This corresponds to a percolation problem. If we make a percolation
system with occupation probability p, then the fluid will have invaded all
the clusters that are connected to the left side. Thus, we have mapped
the invasion percolation problem onto a percolation problem. Let us
implement this approach.
First, we generate a random lattice of critical pressures and an array
of pressures p that we will loop through:
from pylab import *
from scipy.ndimage import measurements
L = 400
z = rand(L,L) # Random distribution of thresholds
p = arange(0.0,0.7,0.01)

We step gradually through this set of p-values, finding the clusters of
connected sites that have thresholds smaller than p[nstep]:

for nstep in range(len(p)-1):
    zz = z < p[nstep]
    lw, num = measurements.label(zz)

Then, we find the labels of all the clusters on the left side of the lattice.
All the clusters with these labels are connected to the left side and are
part of the invasion percolation cluster, called cluster. We do this in
two steps. First, we find a list of unique labels that are on the left side.
Then we find all the clusters with labels that are in this list using the
numpy function isin:
leftside = lw[:,0]
leftnonzero = leftside[where(leftside>0)]
uniqueleftside = unique(leftnonzero)
cluster = isin(lw,uniqueleftside)

Then we make a matrix that stores at what time t (pressure value p(t))
a particular site was invaded. This is done by simply adding a 1 for all
set sites at t to a matrix pcluster. The first clusters invaded will then
have the highest value in the pcluster matrix. We use the pcluster
matrix to visualize the dynamics.
pcluster = pcluster + 1.0*cluster

Finally, we check if the fluid has reached the right-hand side by comparing
the labels on the left-hand side with those on the right-hand side. If
any labels are the same, there is a cluster connecting the two sides (a
spanning cluster), and the fluid invasion process stops:
# Check if it has reached the right hand side
span = intersect1d(lw[:, 0], lw[:, -1])  # labels on both the inlet and outlet columns
if len(span[where(span > 0)]) > 0:
    break

The whole program for the simulation, including initialization of
pcluster, is then:

# Example program for studying invasion percolation problems
# NOTE: This is not an optimal but an educational algorithm
from pylab import *
from scipy.ndimage import measurements
L = 400
p = arange(0.0, 0.7, 0.01)
z = rand(L, L)  # Random distribution of thresholds
pcluster = zeros((L, L), float)
for nstep in range(len(p)-1):
    zz = z < p[nstep]
    lw, num = measurements.label(zz)
    leftside = lw[:, 0]
    leftnonzero = leftside[where(leftside > 0)]
    uniqueleftside = unique(leftnonzero)
    cluster = isin(lw, uniqueleftside)
    pcluster = pcluster + 1.0*cluster
    # Check if the invasion has reached the right hand side
    span = intersect1d(lw[:, 0], lw[:, -1])
    if len(span[where(span > 0)]) > 0:
        break
imshow(log(pcluster), origin="lower")

Results for fluid displacement process. The resulting pattern of in-


jected nodes is illustrated in Fig. 13.3, where the colors indicate the
pressure at which the injection took place. It can be seen from the figure
that the injection occurs in bursts. When a site is injected, many new
connected neighbors are available as possible sites to invade. As the
pressure approaches the pressure needed to percolate to the other side,
these newly appearing sites of the front will typically also be invaded,
and invasion will occur in gradually larger regions. These bursts have
been characterized by Furuberg et al. [12], and it can be argued that
the distribution of burst sizes as well as the time between bursts are
power-law distributed.
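As a crude, hand-rolled proxy for this burst activity (this is my own sketch, not the analysis of [12]), we can read off how many sites were invaded at each pressure step from the pcluster matrix produced by the program above:

# Sketch: growth per pressure step from pcluster. A site first invaded
# at step n is incremented at every later step, so its final value is
# (last step) - n + 1; counting equal values recovers the growth curve.
from pylab import *
vals = pcluster[pcluster > 0].astype(int)
nmax = vals.max()  # value carried by the earliest-invaded sites
growth = array([(vals == nmax - n).sum() for n in range(nmax)])
plot(growth)
xlabel("pressure step")
ylabel("number of sites invaded")
show()

The growth curve should show small increments punctuated by large jumps as p approaches the percolation threshold.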
Mapping invasion percolation onto percolation. Based on this algo-
rithmic model for the fluid displacement process, it is also easy to connect
the invasion percolation problem with ordinary percolation. For an in-
jection pressure of p, all sites with critical pressure below or equal to p
are in principle available for the injection process. However, it is only
the clusters of such sites connected to the left side that will actually be
invaded, since the invasion process requires a connected path from the
inlet to the site for a site to be filled. We will therefore expect that the
width of the invasion percolation front corresponds to the correlation
length $\xi = \xi_0 (p_c - p)^{-\nu}$ as p approaches the percolation threshold pc,
because this is the length over which clusters are connected. That is,
clusters that are within a distance ξ of the left side will typically be
connected to the left side, and therefore invaded, whereas clusters that
are further away than ξ will typically not be connected and therefore not
invaded.
This shows that the critical pressure will correspond to pc . This also
shows that when the fluid reaches the opposite side, the system is ex-
actly at pc , and we expect the invasion percolation cluster to have the
same scaling properties as the spanning cluster at p = pc . There will be
small differences, because the invasion percolation cluster also contains
smaller clusters connected to the left side, but we do not expect these to
change the scaling behavior of the cluster. That is, we expect the fractal
dimension of the invasion percolation cluster to be D. This implies that
the density of the displaced fluid decreases with system size.
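A sketch of how the fractal dimension could be checked numerically, assuming the program above is wrapped into a hypothetical function invasioncluster(L) that returns the final boolean cluster matrix; the cluster mass should then scale as $M(L) \propto L^D$:

# Sketch: estimate the fractal dimension D from the mass scaling
# M(L) ~ L^D; invasioncluster(L) is a hypothetical wrapper around
# the invasion percolation program above
from pylab import *
Lvalues = [50, 100, 200, 400]
M = [invasioncluster(L).sum() for L in Lvalues]
Dest, intercept = polyfit(log(Lvalues), log(M), 1)
print("Estimated fractal dimension D =", Dest)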
Fig. 13.3 Illustration of the invasion percolation cluster. The color-scale indicates the
normalized pressure at which the site was invaded.

Invasion percolation with and without trapping. The process outlined
above does, however, not contain all the essential physics of the fluid
displacement process. For displacement of an incompressible fluid, a
region that is fully bounded by the invading fluid cannot be invaded,
since the displaced fluid does not have anywhere to go. Instead, we
should study the process called invasion percolation with trapping. It has
been found that when trapping is included, the fractal dimension of the
invasion percolation cluster is slightly smaller [10]. In two dimensions,
the dimension is $D \simeq 1.82$.
This difference between the process with and without trapping disap-
pears in three-dimensional geometries, because trapping becomes unlikely
in dimensions higher than 2. Indeed, direct numerical modeling shows
that in three dimensions the fractal dimension is $D \simeq 2.5$ both for the
ordinary percolation system and for invasion percolation with and without
trapping.

13.2.1 Gravity stabilization


The invasion percolation cluster displays self-similar scaling similar to
that of ordinary percolation. This implies that the position h(x, p) of
the fluid front as a function of the non-dimensional applied pressure p is
given by the correlation length, since this is how far clusters connected
to the left side typically extend. That is, when p approaches pc,
the average position of the front is $\bar{h}(x, p) = \xi(p) = \xi_0 (p_c - p)^{-\nu}$. The
width, w(p), of the front is also given by the correlation length:

$$ w(p) = \xi_0 (p_c - p)^{-\nu} \; . \qquad (13.12) $$

As p approaches pc, both the front position and the front width diverge;
that is, both the front position $\bar{h}$ and the width, w, become proportional
to the system size L:

$$ \bar{h} \propto w \propto L \; . \qquad (13.13) $$
However, when the system size increases, we would expect other sta-
bilizing effects to become important. For a very small but finite fluid
injection velocity, the viscous pressure drop will eventually become im-
portant and comparable to the capillary pressure. Also, for any deviation
from a completely flat system, or for a system with a slight difference in
densities, the effect of the hydrostatic pressure term will eventually
become important. We will now demonstrate how we may address the
effect of such a stabilizing (or destabilizing) effect [5, 25].
Invasion percolation in a gravity field. Let us assume that the invasion
percolation occurs in the gravity field. This implies that the pressure
needed to invade a pore depends both on the capillary pressure and on
a hydrostatic term. The pressure $P_i^c$ needed to invade site i at vertical
position $x_i$ in the gravity field is:

$$ P_i^c = \frac{\Gamma}{\ell} + \Delta\rho\, g\, x_i \; . \qquad (13.14) $$

We can again normalize the pressures, resulting in

$$ p_i^c = p_i^0 + \frac{\Delta\rho\, g\, \ell^2}{\Gamma}\, x_i' \; , \qquad (13.15) $$

where the coordinates $x_i'$ are measured in units of the pore size, $\ell$, which
is the unit of length in our system. The prefactor of the last term is called
the Bond number:

$$ \mathrm{Bo} = \frac{\Delta\rho\, g\, \ell^2}{\Gamma} \; . \qquad (13.16) $$

Here, we will include the effect of the Bond number in a single number g,
so that the critical pressure at site i is:

$$ p_i^c = p_i^0 + g\, x_i' \; , \qquad (13.17) $$

where $p_i^0$ is a random number between 0 and 1.


Computational implementation. We implement this by changing the
values of the pressure thresholds pi in the computational code:
g = 0.001
grad = g*meshgrid(range(L),range(L))[0]
z = z + grad

The whole program then becomes:

# Now we add the effect of gravity - modifying the values of z
from pylab import *
from scipy.ndimage import measurements
L = 400
p = arange(0.0, 0.7, 0.01)
z = rand(L, L)  # Random distribution of thresholds
g = 0.001
grad = g*meshgrid(range(L), range(L))[0]
z = z + grad
pcluster = zeros((L, L), float)
for nstep in range(len(p)-1):
    zz = z < p[nstep]
    lw, num = measurements.label(zz)
    leftside = lw[:, 0]
    leftnonzero = leftside[where(leftside > 0)]
    uniqueleftside = unique(leftnonzero)
    cluster = isin(lw, uniqueleftside)
    pcluster = pcluster + 1.0*cluster
    # Check if the invasion has reached the right hand side
    span = intersect1d(lw[:, 0], lw[:, -1])
    if len(span[where(span > 0)]) > 0:
        break
imshow(log(pcluster), origin="lower")

Visualization of results. The resulting invasion percolation front for
various values of g is illustrated in Fig. 13.4. How can we understand
the gradual flattening of the front as g increases from zero?
Front width analysis. This problem is similar to the diffusion front
problem. For an applied pressure p the front will typically be connected
up to an average distance xc given by

$$ p = p_c + x_c\, g \; . \qquad (13.18) $$
Fig. 13.4 Illustration of the gravity-stabilized invasion percolation cluster for g = 0,
$g = 10^{-4}$, $g = 10^{-3}$, and $g = 10^{-2}$. The color-scale indicates the normalized pressure at
which the site was invaded.

The front will also extend beyond the average front position. The occu-
pation probability at a distance a beyond the front is $p' = p_c - ag$, since
fewer sites will be set beyond the front due to the stabilizing term g. A
site at a distance a is connected to the front if this distance a is shorter
than or equal to the correlation length for the occupation probability $p'$ at
this distance. The maximum distance a for which a site is connected to
the front therefore occurs when

$$ a = \xi(p') = \xi_0 (p_c - p')^{-\nu} \; . \qquad (13.19) $$

This gives

$$ a = \xi_0 \left(p_c - (p_c - ag)\right)^{-\nu} = \xi_0 (ag)^{-\nu} \; , \qquad (13.20) $$

that is, $a^{1+\nu} \propto g^{-\nu}$, which gives

$$ a \propto g^{-\nu/(1+\nu)} \qquad (13.21) $$

as the front width. We leave it as an exercise to find the forms of
the position h(p, g) and the width w(p, g) as functions of p and g.
We observe that the width has a reasonable dependence on g: when g
approaches 0, the width diverges. This is exactly what we expect, since
the limit g = 0 corresponds to the limit of ordinary invasion percolation.
This discussion demonstrates a general principle that we can use to
study several stabilizing effects, such as the effect of viscosity or of other
material or process parameters that affect the pressure needed to advance
the front. The introduction of a finite width or characteristic length ξ
that can be varied systematically in order to address the behavior of the
system when the characteristic length diverges is also a powerful method
in both experimental and theoretical work.
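A sketch of such a measurement, assuming a hypothetical wrapper invperc(L, g) around the gravity-stabilized program above that returns the final boolean cluster matrix; with ν = 4/3 in two dimensions the prediction is $w \propto g^{-\nu/(1+\nu)} = g^{-4/7}$:

# Sketch: saturated front width versus gradient g; invperc(L, g) is a
# hypothetical wrapper around the gravity-stabilized program above
from pylab import *
L = 400
gvalues = [1e-4, 1e-3, 1e-2]
w = []
for g in gvalues:
    cluster = invperc(L, g)
    # front position in each row: the rightmost invaded column
    front = array([where(row)[0].max() if row.any() else 0
                   for row in cluster])
    w.append(front.std())  # width = spread of the front position
loglog(gvalues, w, 'o-')
xlabel("g")
ylabel("front width w")
show()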

13.2.2 Gravity destabilization


The gravity destabilized invasion percolation process corresponds to the
case when a less dense fluid is injected at the bottom of a denser fluid.
This is similar to the process known as secondary migration, where the
produced oil is migrating up through the sediments filled with denser
water. Examples of the destabilized front are shown in Fig. 13.5.
We can make a similar argument for the case g < 0, but in this
case the front is destabilized, and the correlation length $\xi \propto (-g)^{-\nu/(1+\nu)}$
corresponds to the width of the fingers extending from the front. The
extending finger can be modeled as a sequence of blobs of size ξ extending
from the flat surface. This implies that the region responsible for the
transport of oil in secondary migration consists of essentially one-dimensional
structures: lines with a finite width w. The amount of hydrocarbons
left in the sediments during this process is therefore negligible. However,
there will be other effects, such as the finite viscosity and the rate of
production compared to the rate of flow, which will induce more than
one finger. The full process has, however, only to a small degree been
addressed. Gravity-destabilized invasion percolation is used as a modeling
tool in studies of petroleum plays, and commercial software packages are
available for such simulations.

Fig. 13.5 Illustration of the gravity-destabilized invasion percolation cluster for g = 0,
$g = -10^{-4}$, $g = -10^{-3}$, and $g = -10^{-2}$. The color-scale indicates the normalized pressure at
which the site was invaded.
References

[1] Joan Adler, Yigal Meir, Amnon Aharony, A. B. Harris, and Lior
Klein. Low-concentration series in general dimension. Journal of
Statistical Physics, 58(3):511–538, 1990.
[2] A. Aharony, Y. Gefen, and A. Kapitulnik. Scaling at the percolation
threshold above six dimensions. Journal of Physics A: Mathematical
and General, 17(4):L197–L202, 1984.
[3] Per Bak. How Nature Works: the Science of Self-Organized Critical-
ity. Copernicus, 1996.
[4] David J. Bergman and Yacov Kantor. Critical Properties of an
Elastic Fractal. Physical Review Letters, 53(6):511–514, 1984.
[5] A. Birovljev, L. Furuberg, J. Feder, T. Jøssang, K. J. Måløy, and
A. Aharony. Gravity invasion percolation in two dimensions: Ex-
periment and simulation. Physical Review Letters, 67(5):584–587,
1991.
[6] John L. Cardy. Introduction to Theory of Finite-Size Scaling. In
Current Physics–Sources and Comments, volume 2 of Finite-Size
Scaling, pages 1–7. Elsevier, 1988.
[7] Kim Christensen and Nicholas R. Moloney. Complexity and Criti-
cality. Imperial College Press, 2005.
[8] A. Coniglio and R. Figari. Droplet structure in Ising and Potts
models. Journal of Physics A: Mathematical and General, 16(14):L535–L540, 1983.
[9] P. G. de Gennes. La percolation: Un concept unificateur. La
Recherche, 7:919, 1976.
[10] M. M. Dias and D. Wilkinson. Percolation with trapping. Journal
of Physics A: Mathematical and General, 19(15):3131–3146, 1986.


[11] Shechao Feng and Pabitra N. Sen. Percolation on Elastic Networks:


New Exponent and Threshold. Physical Review Letters, 52(3):216–
219, 1984.
[12] Liv Furuberg, Jens Feder, Amnon Aharony, and Torstein Jøs-
sang. Dynamics of Invasion Percolation. Physical Review Letters,
61(18):2117–2120, 1988.
[13] Yuval Gefen, Amnon Aharony, and Shlomo Alexander. Anomalous
Diffusion on Percolating Clusters. Physical Review Letters, 50(1):77–
80, 1983.
[14] Geoffrey R. Grimmett. Percolation. Grundlehren der mathematis-
chen Wissenschaften. Springer, Berlin Heidelberg, 2 edition, 1999.
[15] Shlomo Havlin and Daniel Ben-Avraham. Diffusion in disordered
media. Advances in Physics, 36(6):695–798, 1987.
[16] H. J. Herrmann and H. E. Stanley. The fractal dimension of the
minimum path in two- and three-dimensional percolation. Journal
of Physics A: Mathematical and General, 21(17):L829–L833, 1988.
[17] D. C. Hong and H. E. Stanley. Cumulant renormalisation group
and its application to the incipient infinite cluster in percolation.
Journal of Physics A: Mathematical and General, 16(14):L525–L529, 1983.
[18] Allen Hunt, Robert Ewing, and Behzad Ghanbarian. Percolation
Theory for Flow in Porous Media. Lecture Notes in Physics. Springer,
3 edition, 2014.
[19] Leo P. Kadanoff. Scaling laws for Ising models near $T_c$.
Physics Physique Fizika, 2(6):263–272, 1966.
[20] Yacov Kantor and Itzhak Webman. Elastic Properties of Random
Percolating Systems. Physical Review Letters, 52(21):1891–1894,
1984.
[21] Harry Kesten. Percolation Theory for Mathematicians. Progress in
Probability. Birkhäuser Basel, 1982.
[22] Peter R. King and Mohsen Masihi. Percolation Theory in Reservoir
Engineering. World Scientific (Europe), 2018.
[23] Scott Kirkpatrick. Percolation and Conduction. Reviews of Modern
Physics, 45(4):574–588, 1973.
[24] B. J. Last and D. J. Thouless. Percolation Theory and Electrical
Conductivity. Physical Review Letters, 27(25):1719–1721, 1971.
[25] Paul Meakin, Aleksandar Birovljev, Vidar Frette, Jens Feder, and
Torstein Jøssang. Gradient stabilized and destabilized invasion
percolation. Physica A: Statistical Mechanics and its Applications,
191(1):227–239, 1992.
[26] Yigal Meir, Raphael Blumenfeld, Amnon Aharony, and A. Brooks

Harris. Series analysis of randomly diluted nonlinear resistor net-


works. Physical Review B, 34(5):3424–3428, 1986.
[27] S. Roux. Relation between elastic and scalar transport exponent in
percolation. Journal of Physics A: Mathematical and General, 19(6):L351–L356,
1986.
[28] M. Sahimi. Relation between the critical exponent of elastic per-
colation networks and the conductivity and geometrical exponents.
Journal of Physics C: Solid State Physics, 19(4):L79–L83, 1986.
[29] Muhammad Sahimi and Joe D. Goddard. Elastic percolation models
for cohesive mechanical failure in heterogeneous systems. Physical
Review B: Condensed Matter and Materials Physics, 33(11):7848–
7851, 1986.
[30] M. Sahini and M. Sahimi. Applications Of Percolation Theory. Wiley
Press, 2003.
[31] B. Sapoval, M. Rosso, and J. F. Gouyet. The fractal nature of a
diffusion front and the relation to percolation. Journal de Physique
Lettres, 46(4):149–156, 1985.
[32] Waclaw Sierpinski. Sur une courbe dont tout point est un point de
ramification. Comp. Rend. Acad. Sci. Paris, 160:302–305, 1915.
[33] Sorin Solomon, Gerard Weisbuch, Lucilla de Arcangelis, Naeem
Jan, and Dietrich Stauffer. Social percolation models. Physica A:
Statistical Mechanics and its Applications, 277(1):239–247, 2000.
[34] H. E. Stanley. Cluster shapes at the percolation threshold: and
effective cluster dimensionality and its connection with critical-
point exponents. Journal of Physics A: Mathematical and General,
10(11):L211–L220, 1977.
[35] H. E. Stanley. Introduction to Phase Transitions and Critical Phe-
nomena. Oxford University Press USA, New York, reprint edition,
1987.
[36] H. E. Stanley and A. Coniglio. Fractal structure of the incipient
infinite cluster in percolation. In G. Deutsher, R. Zallen, and J. Adler,
editors, Percolation Structures and Processes. Adam Hilger, Bristol,
1983.
[37] Dietrich Stauffer and Amnon Aharony. Introduction To Percolation
Theory: Second Edition. Wiley Press, 1992.
[38] Sandra J. Steacy and Charles G. Sammis. An automaton for fractal
patterns of fragmentation. Nature, 353(6341):250–252, 1991.
[39] D. Wilkinson and J. F. Willemsen. Invasion percolation: a new
form of percolation theory. Journal of Physics A: Mathematical and General,
16(14):3365–3376, 1983.

[40] Kenneth G. Wilson. Renormalization Group and Critical Phenomena.


I. Renormalization Group and the Kadanoff Scaling Picture. Physical
Review B: Condensed Matter and Materials Physics, 4(9):3174–3183,
1971.
[41] J. G. Zabolitzky, D. J. Bergman, and D. Stauffer. Precision calcu-
lation of elasticity for percolation. Journal of Statistical Physics,
44(1):211–223, 1986.