Recurrence Vs Transience: An Introduction To Random Walks: Preface
Pablo Lessa∗
May 24, 2015
Preface
These notes are aimed at advanced undergraduate students of Mathematics.
Their purpose is to provide a motivation for the study of random walks in a
wide variety of contexts. I have chosen to focus on the problem of determining
recurrence or transience of the simple random walk on infinite graphs, especially
trees and Cayley graphs (associated to a symmetric finite generator of a discrete
group).
None of the results here are new, and even fewer of them are due to me.
Except for the proofs the style is informal. I’ve used contractions and often
avoided the editorial we. The bibliographical references are
historically incomplete. I give some subjective opinions, unreliable anecdotes,
and spotty historical treatment. This is not a survey nor a text for experts.
This work is dedicated to François Ledrappier.
Contents
1 Introduction
1.1 Pólya’s Theorem
1.2 The theory of random walks
1.3 Recurrence vs Transience: What these notes are about
1.4 Notes for further reading
2.6 Simple random walks on graphs
2.7 A counting proof of Pólya’s theorem
2.8 The classification of recurrent radially symmetric trees
2.9 Notes for further reading
1 Introduction
1.1 Pólya’s Theorem
A simple random walk, or drunkard’s walk, on a graph, is a random path ob-
tained by starting at a vertex of the graph and choosing a random neighbor
at each step. The theory of random walks on finite graphs is rich and inter-
esting, having to do with diversions such as card games and magic tricks, and
also being involved in the construction and analysis of algorithms such as the
PageRank algorithm, which at one point in the late 90s was an important part
of the Google search engine.
In these notes we will be interested in walks on infinite graphs. I believe
it’s only a mild exaggeration to claim that the theory was born in the 1920s with a
single theorem due to George Pólya.
Theorem 1 (Pólya’s Theorem). The simple random walk on the two dimen-
sional grid is recurrent but on the three dimensional grid it is transient.
Here the two dimensional grid Z2 is the graph whose vertices are pairs of
integers and where undirected edges are added horizontally and vertically so
that each vertex has 4 neighbors. The three dimensional grid Z3 is defined
similarly; each vertex has 6 neighbors.
A simple random walk is called recurrent if almost surely (i.e. with proba-
bility 1) it visits every vertex of the graph infinitely many times. The reader
can probably easily convince him or herself that such paths exist. The content
of the first part of Pólya’s theorem is that a path chosen at random on Z2 will
almost surely have this property (in other words the drunkard always makes it
home eventually).
Transience is the opposite of recurrence. It is equivalent to the property
that given enough time the walk will eventually escape any finite set of vertices
never to return. Hence it is somewhat counterintuitive that the simple random
walk on Z3 is transient but its shadow or projection onto Z2 is recurrent.
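The simple random walk is easy to play with on a computer. Here is a minimal simulation sketch of my own (the function name and the fixed seed are my choices, not from the notes); Pólya's theorem guarantees that, run long enough, the count of returns printed below keeps growing:

```python
import random

def simple_random_walk_z2(steps, seed=0):
    """A simple random walk on the 2d grid: at each step move to one of
    the 4 neighbors, chosen uniformly at random."""
    rng = random.Random(seed)  # fixed seed so the run is reproducible
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    x, y = 0, 0
    path = [(x, y)]
    for _ in range(steps):
        dx, dy = rng.choice(moves)
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

path = simple_random_walk_z2(1000)
print(sum(1 for p in path[1:] if p == (0, 0)))  # visits to the origin so far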
Figure 1: Bêbado
I generated this by flipping a Brazilian “1 real” coin twice for each step. The
result was almost too good an illustration of Pólya’s theorem, returning to the
origin 4 times before leaving the box from (−5, −5) to (5, 5) after exactly 30
steps.
Figure 2: If you drink don’t space walk.
More coin flipping. I kept the previous 2d walk and added a 3rd coordinate,
flipping coins so that this third coordinate would change one third of the time
on average (if you have to know: if both coins landed heads I would flip one
again to see whether to increment or decrement the new coordinate; if both
landed tails I would ignore that toss; in any other case I would keep the
same third coordinate and advance to the next step in the original 2d walk).
The result is a 3d simple random walk. Clearly it returns to the origin a lot less
than its 2d shadow.
property does not hold in R4 ). However, the two paths will pass arbitrarily close
to one another so the perfume property does hold in a slightly weaker sense.
Brownian motion makes sense on Riemannian manifolds (basically because a
list of instructions of the type “forward 1 meter, turn right 90 degrees, forward 1
meter, etc” can be followed on any Riemannian surface, this idea is formalized
by the so-called Eells-Elworthy-Malliavin construction of Brownian motion)
so a natural object to study is Brownian motion on the homogeneous surface
geometries (spherical, Euclidean, and hyperbolic). A beautiful result (due to
Jean-Jacques Prat) in this context is the following:
Theorem 3 (Hyperbolic Brownian motion is transient). Brownian motion on
the hyperbolic plane escapes to infinity at unit speed and has an asymptotic
direction.
The hyperbolic plane admits polar coordinates (r, θ) with respect to any
chosen base point. Hence Brownian motion can be described as a random curve
(r_t, θ_t) indexed on t ≥ 0. Prat’s result is that r_t/t → 1 and the limit θ_∞ = lim θ_t
exists almost surely (and in fact e^{iθ_∞} is necessarily uniform on the unit circle by
symmetry). This result is very interesting because it shows that the behaviour of
Brownian motion can change drastically even if the dimension of the underlying
space stays the same (i.e. curvature affects the behavior of Brownian motion).
Another type of random walk is obtained by taking two 2 × 2 invertible
matrices A and B and letting g1 , . . . , gn , . . . be independent and equal to either A
or B with probability 1/2 in each case. It was shown by Furstenberg and Kesten
that the exponential growth rate of the norm of the product An = g1 · · · gn exists,
i.e. χ = lim (1/n) log |An| (this number is called the first Lyapunov exponent of
the sequence, incredibly this result is a trivial corollary of a general theorem of
Kingman proved only a few years later using an almost completely disjoint set
of ideas).
The sequence An can be seen as a random walk on the group of invertible
matrices. There are three easy examples where χ = 0. First, one can take both
A and B to be rotation matrices (in this case one will even have recurrence in
the following weak sense: An will pass arbitrarily close to the identity matrix).
Second, one can take A = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} and
B = A^{-1} = \begin{pmatrix} 1 & -1 \\ 0 & 1 \end{pmatrix}. And third, one
can take A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} and
B = \begin{pmatrix} 2 & 0 \\ 0 & 1/2 \end{pmatrix}. It is also clear that conjugation by a
matrix C (i.e. changing A and B to C^{-1}AC and C^{-1}BC respectively) doesn’t
change the behavior of the walk. A beautiful result of Furstenberg (which has
many generalizations) is the following.
Theorem 4 (Furstenberg’s exponent theorem). If a sequence of independent
random matrix products An as above has χ = 0 then either both matrices A and
B are conjugate to rotations with respect to the same conjugacy matrix, or they
both fix a one-dimensional subspace of R2 , or they both leave invariant a union
of two one-dimensional subspaces.
1.3 Recurrence vs Transience: What these notes are about
From the previous subsection the reader might imagine that the theory of ran-
dom walks is already too vast to be covered in three lectures. Hence, we con-
centrate on a single question: Recurrence vs Transience. That is we strive to
answer the following:
Question 1. On which infinite graphs is the simple random walk recurrent?
Even this is too general (though we will obtain a sharp criterion, the Flow
Theorem, which is useful in many concrete instances). So we restrict even further
to just two classes of graphs: Trees and Cayley graphs (these two families,
plus the family of planar graphs which we will not discuss, have received special
attention in the literature because special types of arguments are available for
them; see the excellent recent book by Russell Lyons and Yuval Peres, “Probability
on Trees and Networks”, for more information).
A tree is simply a connected graph which has no non-trivial closed paths
(a trivial closed path is the concatenation of a path and its reversal). Here are
some examples.
Example 1 (A regular tree). Consider the regular tree of degree three T3 (i.e.
the only connected tree for which every vertex has exactly 3 neighbors). The
random walk on T3 is clearly transient. Can you give a proof?
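One possible argument (a standard gambler's-ruin computation, sketched here by me; not necessarily the proof the author has in mind): started at a neighbor of a fixed vertex, the distance to that vertex increases with probability 2/3 and decreases with probability 1/3, so the walk returns with probability (1/3)/(2/3) = 1/2 < 1. The truncated distance chain can be checked numerically:

```python
def hit_zero_prob(N):
    """Probability that the distance chain (up with prob 2/3, down with 1/3)
    started at 1 reaches 0 before N, solving the linear system
    h(n) = (1/3) h(n-1) + (2/3) h(n+1), with h(0) = 1 and h(N) = 0."""
    # Write h(n) = A[n] + B[n] * t with t = h(1) unknown, then impose h(N) = 0.
    A, B = [1.0, 0.0], [0.0, 1.0]
    for n in range(1, N):
        # the recursion rearranges to h(n+1) = (3 h(n) - h(n-1)) / 2
        A.append((3 * A[n] - A[n - 1]) / 2)
        B.append((3 * B[n] - B[n - 1]) / 2)
    return -A[N] / B[N]

for N in [5, 10, 20, 40]:
    print(N, hit_zero_prob(N))  # increases towards 1/2, not towards 1
```

As N grows the hitting probability converges to 1/2, so on T3 the walk has a positive chance of never returning.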
Example 2 (The Canopy tree). The Canopy tree is an infinite tree seen from
the canopy (It’s branches all the way down!). It can be constructed as follows:
1. There is a “leaf vertex” for each natural number n = 1, 2, 3, . . ..
2. The leaf vertices are split into consecutive pairs (1, 2), (3, 4), . . . and each
pair is joined to a new “level two” vertex v1,2 , v3,4 , . . ..
3. The level two vertices are split into consecutive pairs and joined to level
three vertices and so on and so forth.
Is the random walk on the Canopy tree recurrent?
Example 3 (Radially symmetric trees). Any sequence of natural numbers a1 , a2 , . . .
defines a radially symmetric tree, which is simply a tree with a root vertex having
a1 children, each of which have a2 children, each of which have a3 children, etc.
Two simple examples are obtained by taking a_n constant equal to 1 (in which
case we get half of Z on which the simple walk is recurrent) or 2 (in which case
we get an infinite binary tree where the simple random walk is transient). More
interesting examples are obtained using sequences where all terms are either 1
or 2 but where both numbers appear infinitely many times. It turns out that such
sequences can define both recurrent and transient trees (see Corollary 1).
Example 4 (Self-similar trees). Take a finite tree with a distinguished root
vertex. At each leaf attach another copy of the tree (the root vertex replacing
the leaf ). Repeat ad infinitum. That’s a self-similar tree. A trivial example
is obtained when the finite tree used has only one branch (the resulting tree is
half of Z and therefore is recurrent). Are any other self-similar trees recurrent?
A generalization of this construction (introduced by Woess and Nagnibeda) is
to have n rooted finite trees whose leaves are labeled with the numbers 1 to n.
Starting with one of them one attaches a copy of the k-th tree to each leaf labeled
k, and so on ad infinitum.
Given any tree one can always obtain another by subdividing a few edges.
That is, replacing an edge by a chain of a finite number of vertices (this concept
of subdivision appears for example in the characterization of planar graphs: a
graph is planar if and only if it doesn’t contain a subdivision of the complete
graph on 5 vertices or of the graph of 3 houses connected to electricity, water,
and telephone). For example a radially symmetric tree defined by a sequence of ones
and twos (both numbers appearing infinitely many times) is a subdivision of
the infinite binary tree.
Question 2. Can one make a transient tree recurrent by subdividing its edges?
Besides trees we will be considering the class of Cayley graphs which is
obtained by replacing addition on Zd with a non-commutative operation. In
general, given a finite symmetric generator F of a group G (i.e. g ∈ F implies
g −1 ∈ F , an example is F = {(±1, 0), (0, ±1)} and G = Z2 ) the Cayley graph
associated to (G, F ) has vertex set G and an undirected edge is added between
x and y if and only if x = yg for some g ∈ F (notice that this relationship is
symmetric).
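The construction is completely mechanical, which a short sketch makes concrete (the function names are mine; the example group and generating set are the ones from the text):

```python
from collections import deque

def cayley_ball(identity, generators, multiply, radius):
    """Breadth-first ball of a Cayley graph: returns vertex -> neighbor list."""
    dist = {identity: 0}
    adj = {}
    queue = deque([identity])
    while queue:
        x = queue.popleft()
        adj[x] = [multiply(x, g) for g in generators]  # one edge per generator
        if dist[x] < radius:
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    queue.append(y)
    return adj

# The example from the text: G = Z^2 with F = {(±1, 0), (0, ±1)}.
F = [(1, 0), (-1, 0), (0, 1), (0, -1)]
adj = cayley_ball((0, 0), F, lambda x, g: (x[0] + g[0], x[1] + g[1]), radius=3)
print(len(adj))  # 25 vertices within distance 3 of the identity
```

Swapping in a different `multiply` (say, matrix multiplication for the modular group) gives the corresponding Cayley graph with no other changes.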
Let’s see some examples.
Example 5 (The free group in two generators). The free group F2 in two
generators is the set of finite words in the letters N, S, E, W (north, south,
east, and west) considered up to equivalence with respect to the relations NS =
SN = EW = WE = e (where e is the empty word). It’s important to note
that these are the only relations (for example NE ≠ EN). The Cayley graph of
F2 (we will always consider the generating set {N, S, E, W}) is a regular tree
where each vertex has 4 neighbors.
Example 6 (The modular group). The modular group Γ is the group of frac-
tional linear transformations of the form
g(z) = (az + b)/(cz + d)
where a, b, c, d ∈ Z and ad − bc = 1. We will always consider the generating
set F = {z 7→ z + 1, z 7→ z − 1, z 7→ −1/z}. Is the simple random walk on the
corresponding Cayley graph recurrent or transient?
Figure 3: Random walk on the modular group
Starting at the triangle to the right of the center and choosing to go through
either the red, green or blue side of the triangle one is currently at, one obtains
a random walk on the modular group. Here the sequence red-blue-red-blue-
green-red-blue-blue-red-red is illustrated.
Figure 4: Wallpaper
I made this wallpaper with the program Morenaments by Martin von Gagern.
It allows you to draw while applying a group of Euclidean transformations to
anything you do. For this picture I chose the group ∗632.
plane and let F be the set of 12 axial symmetries with respect to the 6 sides, 3
diagonals (lines joining opposite vertices) and 3 lines joining the midpoints of
opposite sides. The group ∗632 is generated by F and each element of it pre-
serves a tiling of the plane by regular hexagons. The strange name is a case of
Conway notation and refers to the fact that 6 axes of symmetry pass through the
center of each hexagon in this tiling, 3 pass through each vertex, and 2 through
the midpoint of each side (the lack of asterisk would indicate rotational instead
of axial symmetry). Is the simple random walk on the Cayley graph of ∗632
recurrent?
Figure 5: Heisenberg
A portion of the Cayley graph of the Heisenberg group.
where the triangle denotes symmetric difference. The “elementary moves” correspond
to multiplication by elements of the generating set F = {(±1, ∅), (0, {0})}.
Is the random walk on Lamplighter(Z) recurrent?
Question 3. Can it be the case that the Cayley graph associated to one gen-
erator for a group G is recurrent while for some other generator it’s transient?
i.e. Can we speak of recurrent vs transient groups or must one always include
the generator?
Figure 6: Drunken Lamplighter
I’ve simulated a random walk on the lamplighter group. After a thousand
steps the lamplighter’s position is shown as a red sphere and the white spheres
represent lamps that are on. One knows that the lamplighter will visit every
lamp infinitely many times, but will he ever turn them all off again?
For the very rich theory of the simple random walk on Zd a good starting
point is the classical book by Spitzer [Spi76] and the recent one by Lawler
[Law13].
For properties of classical Brownian motion on Rd it’s relatively painless and
highly motivating to read the article by Einstein [Ein56] (this article actually
motivated the experimental confirmation of the existence of atoms via observa-
tion of Brownian motion, for which Jean Perrin received the Nobel prize later
on!). Mathematical treatment of the subject can be found in several places such
as Mörters and Peres’ [MP10].
Anyone interested in random matrix products should start by looking at the
original article by Furstenberg [Fur63], as well as the excellent book by Bougerol
and Lacroix [BL85].
Assuming basic knowledge of stochastic calculus and Riemannian geometry
I can recommend Hsu’s book [Hsu02] for Brownian motion on Riemannian man-
ifolds, or Stroock’s [Str00] for those preferring to go through more complicated
calculations in order to avoid the use of stochastic calculus. Purely analytical
treatment of the heat kernel (including upper and lower bounds) is well given in
Grigoryan’s [Gri09]. The necessary background material in Riemannian geometry
is not very advanced and is well treated in several places (e.g. Petersen’s
book [Pet06]); similarly, for the stochastic calculus background there are several
good references such as Le Gall’s [LG13].
2 The entry fee: A crash course in probability
theory
Let me quote Kingman who said it better than I can.
A random element is a function x from a probability space Ω to some com-
plete separable metric space X (i.e. a Polish space) with the property that the
preimage of any open set belongs to the σ-algebra F (i.e. x is measurable).
Usually if the function takes values in R it’s called a random variable, and as
Kingman put it, a random elephant is a measurable function into a suitable
space of elephants.
The point is that given a suitable (which means Borel, i.e. belonging to
the smallest σ-algebra generated by the open sets) subset A of the Polish space
X defined by some property P one can give meaning to “the probability that the
random element x satisfies property P” simply by assigning it the number
P [x^{−1}(A)], which sometimes will be denoted by P [x satisfies property P ] or
P [x ∈ A].
Some people don’t like the fact that P is assumed to be countably additive
(instead of just finitely additive). But this is crucial for the theory and is
necessary in order to make sense out of things such as P [lim xn exists ] where
xn is a sequence of random variables (results about the asymptotic behaviour of
sequences of random elements abound in probability theory, just think of or look
up the Law of Large Numbers, Birkhoff’s ergodic theorem, Doob’s martingale
convergence theorem, and of course Pólya’s theorem).
2.2 Distributions
The distribution of a random element x is the Borel (meaning defined on the
σ-algebra of Borel sets) probability measure µ defined by µ(A) = P [x ∈ A].
Similarly the joint distribution of a pair of random elements x and y of spaces
X and Y is a probability on the product space X × Y , and the joint distribution
of a sequence of random variables is a probability on a sequence space (just
group all the elements together as a single random element and consider its
distribution).
Two events A and B of a probability space Ω are said to be independent if
P [A ∩ B] = P [A] P [B]. Similarly, two random elements x and y are said to be
independent if P [x ∈ A, y ∈ B] = P [x ∈ A] P [y ∈ B] for all Borel sets A and B
in the corresponding ranges of x and y. In other words they are independent
if their joint distribution is the product of their individual distributions. This
definition generalizes to sequences: the elements of a (finite or countable)
sequence are said to be independent if their joint distribution is the product of
their individual distributions.
A somewhat counterintuitive fact is that independence of the pairs (x, y), (x, z)
and (y, z) does not imply independence of the sequence x, y, z. An example is
obtained by letting x and y be independent with P [x = ±1] = P [y = ±1] = 1/2
and z = xy (the product of x and y). A random element is independent from
itself if and only if it is almost surely constant (this strange legalistic loophole
is actually used in the proof of some results, such as Kolmogorov’s zero-one law
or the ergodicity of the Gauss continued fraction map; one shows that an event
has probability either zero or one by showing that it is independent from itself).
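The x, y, z example above is small enough to check by brute-force enumeration (a quick sketch; the helper name `prob` is mine):

```python
from itertools import product
from fractions import Fraction

# The four equally likely values of (x, y, z) with z = x * y.
outcomes = [(x, y, x * y) for x, y in product([1, -1], repeat=2)]
p = Fraction(1, 4)

def prob(pred):
    """Probability of the event described by the predicate pred."""
    return sum(p for o in outcomes if pred(o))

# Each pair of coordinates is independent...
for i, j in [(0, 1), (0, 2), (1, 2)]:
    for a, b in product([1, -1], repeat=2):
        assert prob(lambda o: o[i] == a and o[j] == b) == \
            prob(lambda o: o[i] == a) * prob(lambda o: o[j] == b)

# ...but the triple is not: P[x = 1, y = 1, z = 1] is 1/4, not 1/8.
assert prob(lambda o: o == (1, 1, 1)) == Fraction(1, 4)
print("pairwise independent but not independent")
```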
The philosophy in probability theory is that the hypotheses and statements
of the theorems should depend only on (joint) distributions of the variables
involved (which are usually assumed to satisfy some weakened form of indepen-
dence) and not on the underlying probability space (of course there are some
exceptions, notably Skorohod’s representation theorem on weak convergence,
where the result is that a probability space with a certain sequence of variables
defined on it exists). The point of using a general space Ω instead of some fixed
Polish space X with a Borel probability µ is that, in Kolmogorov’s framework,
one may consider simultaneously random objects on several different spaces and
combine them to form new ones. In fact one could base the whole theory (pretty
much) on Ω = [0, 1] endowed with Lebesgue measure on the Borel sets and a few
things would be easier (notably conditional probabilities), but not many peo-
ple like this approach nowadays (the few who do study “standard probability
spaces”).
where P [A] is assumed to be positive (making sense out of conditional prob-
abilities when the events with respect to which one wishes to condition have
probability zero was one of the problems which pushed towards the develop-
ment of the Kolmogorov framework... but we won’t need that).
We can now formalize the “memoryless” property of Markov chains (called
the Markov property) in its simplest form.
Theorem 6 (Weak Markov property). Let x0 , x1 , . . . be a Markov chain on a
countable set X with initial distribution p and transition matrix q. Then for
each fixed n the sequence y0 = xn , y1 = xn+1 , . . . is also a Markov chain with
transition matrix q (the initial distribution is simply the distribution of xn ).
Furthermore the sequence y = (y0 , y1 , . . .) is conditionally independent from
x0 , . . . , xn−1 given xn by which we mean
P [y ∈ A|x0 = a0 , . . . , xn = an ] = P [y ∈ A|xn = an ]
for all a0 , . . . , an ∈ X.
2.4 Expectation
Given a real random variable x defined on some probability space its expectation
is defined in the following three cases:
1. If x assumes only finitely many values x1, . . . , xn then
E [x] = Σ_{k=1}^{n} P [x = xk] xk.
2. If x assumes only non-negative values then E [x] = sup{E [y]} where the
supremum is taken over all random variables with 0 ≤ y ≤ x which assume
only finitely many values. In this case one may have E [x] = +∞.
random walk starting at 2, etc (so en is the expected hitting time of 0 for a
simple random walk starting at n). Using the weak Markov property one can
get
e_n = 1 + (e_{n−1} + e_{n+1})/2
for all n ≥ 1 (where e0 = 0) which gives us en+1 − en = (en − en−1 ) − 2 so
any solution to this recursion is eventually negative. The expected values of the
hitting times we’re considering, if finite, would all be positive, so they’re not
finite.
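One can watch this contradiction develop numerically (a small sketch of mine; the guessed value e1 = 1000 is arbitrary, the conclusion is the same for any guess):

```python
def iterate_hitting_times(e1, steps):
    """Iterate e_{n+1} = 2 e_n - e_{n-1} - 2 (a rearrangement of the
    recursion above) starting from e_0 = 0 and a guessed value of e_1."""
    e = [0.0, float(e1)]
    for _ in range(steps):
        e.append(2 * e[-1] - e[-2] - 2)
    return e

e = iterate_hitting_times(e1=1000.0, steps=1100)
print(min(e))  # negative: no choice of e_1 keeps all the terms positive
```

The increments shrink by exactly 2 at each step, so eventually the sequence turns negative no matter how large e_1 was chosen.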
Theorem 9 (Strong Markov property). Let x0 , x1 , . . . be a Markov chain with
transition matrix q and τ be a finite stopping time. Then with respect to the
probability Pτ the sequence y = (y0 , y1 , . . .) where y_n = x_{τ+n} is a Markov chain
with transition matrix q. Furthermore y is independent from events prior to τ
conditionally on x_τ, by which we mean that
Pτ [y ∈ B|A, x_τ = x] = Pτ [y ∈ B|x_τ = x]
for all A occurring prior to τ, all x ∈ X, and all Borel B ⊂ X^N.
The reader should try to work out the proof in enough detail to understand
where the fact that τ is a stopping time is used.
To see why the generality of the strong Markov property is useful let’s con-
sider the simple random walk on Z starting at 0. We have shown that the
probability of returning to 0 is 1. The strong Markov property allows us to con-
clude the intuitive fact that almost surely the random walk will hit 0 infinitely
many times. That is one can go from
P [xn = 0 for some n > 0] = 1
to
P [xn = 0 for infinitely many n] = 1.
To see this just consider τ = min{n > 0 : xn = 0}, the first return time to
0. We have already seen that xn returns to 0 almost surely. The strong
Markov property allows one to calculate the probability that xn will visit 0
at least twice as P [τ < +∞] Pτ [x_{τ+n} = 0 for some n > 0], and tells us that the
second factor is exactly the probability that xn returns to zero at least once
(which is 1). Hence the probability of returning twice is also equal to 1, and so
on and so forth. Finally, after we’ve shown that each event {xn = 0 at least k times}
has full probability, their intersection (being a countable intersection of
full-probability events) also has full probability.
This application may not seem very impressive. It gets more interesting
when the probability of return is less than 1. We will use this in the next
section.
Definition 1 (Recurrence and Transience). A simple random walk {xn : n ≥ 0}
on a graph X is said to be recurrent if almost surely it visits every vertex in X
infinitely many times i.e.
for all y ≠ x (functions satisfying this equation are said to be harmonic) and
p = Σ_{z∈X} q(x, z)f (z).
the n-th time without hitting y is q^n; since this goes to 0 one obtains that any
simple random walk will almost surely hit y eventually, and therefore infinitely
many times. In short all simple random walks on X are recurrent.
We will now establish the formula for the expected number of visits.
Notice that the number of visits to x of a simple random walk xn is simply
Σ_{n=0}^{+∞} 1_{xn = x}
where 1A is the indicator of an event A (i.e. the function taking the value 1
exactly on A and 0 everywhere else on the probability space Ω). Hence
the first equality in the formula for the expected number of visits follows simply
by the monotone convergence theorem.
The equality of this to the third term in the case p = 1 is trivial (all terms
being infinite). Hence we may assume from now on that p < 1.
In order to establish the second equality we use the sequence of stopping
times τ, τ1, τ2, . . . defined above. The probability that the number of visits to x is
exactly n is shown by the strong Markov property to be P [τ < +∞] p^{n−1}(1 − p).
Using the fact that Σ_n n p^{n−1} = 1/(1 − p)^2 one obtains the result.
so that all one really needs to estimate is the number of closed paths of length
n starting and ending at 0 (in particular notice that pn = 0 if n is odd, which
is quite reasonable).
For the case d = 2 the number of closed paths of length 2n can be seen, via a
bijection, to be \binom{2n}{n}^2 as follows: Consider the 2n increments of a closed
path. Associate to each increment one of the four letters aAbB according to the
following table
(1, 0) a
(−1, 0) B
(0, 1) A
(0, −1) b
Notice that the total number of ‘a’s (whether upper or lowercased) is n and
that the total number of uppercase letters is also n. Reciprocally if one is given
two subsets A, U of {1, . . . , 2n} with n elements each one can obtain a closed
path by interpreting the set A as the set of increments labeled with ‘a’ or ‘A’
and the set U as those labelled with an upper case letter.
Hence, one obtains for d = 2 that
p_{2n} = 4^{−2n} \binom{2n}{n}^2.
Stirling’s formula n! ∼ √(2πn) (n/e)^n implies that
p_{2n} ∼ 1/(πn),
which clearly implies that the series Σ pn diverges. This shows that the simple
random walk on Z2 is recurrent.
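Both the exact formula and the Stirling asymptotic are easy to check numerically (a quick sketch using exact integer arithmetic for the binomial coefficient):

```python
import math

def p2n(n):
    """Exact probability that the SRW on Z^2 is back at the origin after 2n steps."""
    return math.comb(2 * n, n) ** 2 / 4 ** (2 * n)

for n in [1, 10, 100, 1000]:
    print(n, p2n(n), 1 / (math.pi * n))  # the two columns agree more and more
```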
For d = 3 the formula gets a bit tougher to analyze but we can grind it out.
First notice that if Σ p_{6n} converges then so does Σ pn (because the number
of closed paths of length 2n is an increasing function of n). Hence we will bound
only the number of closed paths of length 6n.
The number of closed paths of length 6n is (splitting the 6n increments into
6 groups according to their value and noticing that there must be the same
number of (1, 0, 0) as (−1, 0, 0), etc.)

Σ_{a+b+c=3n} (6n)!/((a!)^2 (b!)^2 (c!)^2) ≤ (6n)!/(n!)^3 Σ_{a+b+c=3n} 1/(a!b!c!)
= (6n)!/((3n)!(n!)^3) Σ_{a+b+c=3n} (3n)!/(a!b!c!) = (6n)!/((3n)!(n!)^3) · 3^{3n} ∼ 6^{6n}/√(4π^3 n^3),

where the inequality holds because a!b!c! ≥ (n!)^3 when a + b + c = 3n, and the
last equality uses that 3^{3n} is the sum of (3n)!/(a!b!c!) over a + b + c = 3n.
Hence we have shown that for all ε > 0 one has p_{6n} ≤ (1 + ε)/√(4π^3 n^3) for
n large enough. This implies Σ pn < +∞ and hence the random walk on Z3 is
transient.
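The analogous exact computation for d = 3 is also easy to carry out (my own sketch; it uses the count of closed paths described above, before any bounding, with 2n steps instead of 6n):

```python
import math
from fractions import Fraction

def p2n_z3(n):
    """Exact probability that the SRW on Z^3 is back at the origin after 2n
    steps: 6^{-2n} times the sum over a+b+c=n of (2n)!/(a! b! c!)^2."""
    total = 0
    for a in range(n + 1):
        for b in range(n + 1 - a):
            c = n - a - b
            total += math.factorial(2 * n) // (
                math.factorial(a) ** 2 * math.factorial(b) ** 2 * math.factorial(c) ** 2
            )
    return Fraction(total, 36 ** n)

partial = 0.0
for n in range(1, 20):
    partial += float(p2n_z3(n))
print(partial)  # the partial sums stay bounded, consistent with transience
```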
This proof is unsatisfactory (to me at least) for several reasons. First, you
don’t learn a lot about why the result is true from this proof. Second, it’s very
inflexible: remove or add a single edge from the Zd graph and the calculations
become a lot more complicated, not to mention if you remove or add edges at
random or just consider other types of graphs. The ideas we will discuss in the
following section will allow for a more satisfactory proof, and in particular will
allow one to answer questions such as the following:
Question 4. Can a graph with a transient random walk become recurrent after
one adds edges to it?
At one point while learning some of these results I imagined the following:
Take Z3 and add an edge connecting each vertex (a, b, c) directly to 0. The
simple random walk on the resulting graph is clearly recurrent since at each step
it has probability 1/7 of going directly to 0. The problem with this example
is that it’s not a graph in our sense and hence does not have a simple random
walk (how many neighbors does 0 have? how would one define the transition
probabilities for 0?). In fact, we will show that the answer to the above question
is negative.
the form α + βg for some constants α and β. Does this mean that there’s a
positive probability of never hitting 0? The answer is yes and that’s the content
of Blackwell’s theorem (this is a special case of a general result, the original
paper is very readable and highly recommended).
Theorem 10 (Blackwell’s theorem). Consider the transition matrix on Z+
defined by q(0, 0) = 1, q(n, n + 1) = pn and qn = 1 − pn = q(n, n − 1) for all
n ≥ 1. Then any Markov chain with this transition matrix will eventually hit 0
almost surely if and only if the equation
f (n) = qn f (n − 1) + pn f (n + 1)
has no non-constant bounded solutions.
Proof. The distance to the root of a random walk on T is simply a Markov chain
on the non-negative integers with transition probability q(0, 1) = 1, q(n, n−1) =
1/(1 + an ) = qn , and q(n, n + 1) = an /(1 + an ) = pn . Clearly such a chain
will eventually almost surely hit 0 if and only if the corresponding chain with
modified transition matrix q(0, 0) = 1 does.
Hence by Blackwell’s result the probability of hitting 0 is 1 for all simple
random walks on T if and only if there are no non-constant bounded solutions
to the equation
f (n) = qn f (n − 1) + pn f (n + 1)
on the non-negative integers.
It’s a linear recurrence of order two so one can show the space of solutions
is two dimensional. Hence all solutions are of the form α + βg where g is the
solution with g(0) = 0 and g(1) = 1. Manipulating the equation one gets the
equation g(n + 1) − g(n) = (g(n) − g(n − 1))/an for the increments of g. From
this it follows that
g(n) = 1 + 1/a1 + 1/(a1 a2) + · · · + 1/(a1 a2 · · · a_{n−1})
for all n. Hence the random walk is recurrent if and only if g(n) → +∞
when n → +∞, as claimed.
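A quick numerical illustration of this criterion (my own sketch) for the two simple branching sequences mentioned earlier, a_n ≡ 1 and a_n ≡ 2:

```python
from fractions import Fraction

def g(seq):
    """Partial sum 1 + 1/a1 + 1/(a1 a2) + ... for a finite prefix of (a_n)."""
    total = Fraction(1)
    prod = 1
    for a in seq:
        prod *= a
        total += Fraction(1, prod)
    return total

print(float(g([1] * 100)))  # grows linearly: half of Z is recurrent
print(float(g([2] * 100)))  # approaches 2: the binary tree is transient
```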
Theorem 11 (The flow theorem). The simple random walk on a graph is tran-
sient if and only if the graph admits a finite energy source flow.
3.1 The finite flow theorem
The flow theorem can be reduced to a statement about finite graphs but first
we need some notation.
So far, whenever X was a graph, we’ve been abusing notation by using X
to denote the set of vertices as well (i.e. x ∈ X means x is a vertex of X).
Let’s complement this notation by setting E(X) to be the set of edges of X.
We consider all edges to be directed so that each edge e ∈ E(X) has a starting
vertex e− and an end e+ . The fact that our graphs are undirected means there
is a bijective involution e 7→ e−1 sending each edge to another one with the start
and end vertex exchanged (loops can be their own inverses).
A field on X is just a function ϕ from E(X) to R satisfying ϕ(e^{−1}) = −ϕ(e)
for all edges e ∈ E(X). Any function f : X → R has an associated gradient field
∇f defined by ∇f (e) = f (e+) − f (e−). The energy of a field ϕ : E(X) → R
is defined by E(ϕ) = (1/2) Σ_{e∈E(X)} ϕ(e)^2. The divergence of a field
ϕ : E(X) → R is the function div(ϕ) : X → R defined by div(ϕ)(x) = Σ_{e−=x} ϕ(e).
Obviously all the definitions were chosen by analogy with vector calculus in
Rn . Here’s the analogous result to integration by parts.
Lemma 2 (Summation by parts). Let f be a function and ϕ a field on a finite
graph X. Then one has
Σ_{e∈E(X)} ∇f (e)ϕ(e) = −2 Σ_{x∈X} f (x)div(ϕ)(x).
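The lemma is easy to sanity-check on a tiny example (a triangle graph; the particular numbers below are arbitrary choices of mine):

```python
# X is a triangle with vertices 0, 1, 2; E(X) holds both orientations of each edge.
edges = [(0, 1), (1, 0), (1, 2), (2, 1), (0, 2), (2, 0)]

f = {0: 3.0, 1: -1.0, 2: 4.0}                    # a function on the vertices
phi = {(0, 1): 2.0, (1, 2): -1.0, (0, 2): 0.5}   # a field, extended below so
for (a, b) in list(phi):                         # that phi(e^{-1}) = -phi(e)
    phi[(b, a)] = -phi[(a, b)]

grad_f = {e: f[e[1]] - f[e[0]] for e in edges}                    # gradient field
div = {x: sum(phi[e] for e in edges if e[0] == x) for x in f}     # divergence

lhs = sum(grad_f[e] * phi[e] for e in edges)
rhs = -2 * sum(f[x] * div[x] for x in f)
print(lhs, rhs)  # equal, as the lemma asserts
```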
fact that the divergence of the gradient is zero at the rest of the vertices follows
from the weak Markov property (f is harmonic except at a and b). Hence f is
a flow from a to b as claimed. The formula for divergence at a follows directly
from the definition, and for the energy one uses summation by parts.
The main point of the theorem is that ∇f is the unique energy minimizing
flow (for fixed divergence at a). To see this consider any flow ϕ from a to b with
the same divergence as ∇f at a.
We will first show that unless ϕ is the gradient of a function it cannot
minimize energy. For this purpose assume that e_1, . . . , e_n is a closed path in the
graph (i.e. e_{1+} = e_{2−}, . . . , e_{n+} = e_{1−}) such that Σ_k ϕ(e_k) ≠ 0. Let ψ be the
flow such that ψ(e_k) = 1 for all e_k and ψ(e) = 0 unless e = e_k or e = e_k^{−1} for
some k (so ψ is the unit flow around the closed path). One may calculate to
obtain

∂_t E(ϕ + tψ)|_{t=0} ≠ 0
so one obtains for some small t (either positive or negative) a flow with less
energy than ϕ and the same divergence at a. This shows that any energy
minimizing flow with the same divergence as ∇f at a must be the gradient of a
function.
Hence it suffices to show that if g : X → R is a function which is harmonic ex-
cept at a and b and such that div(∇g)(a) = div(∇f )(a) then ∇g = ∇f . For this
purpose notice that because ∇g is a flow one has div(∇g)(b) = −div(∇g)(a) =
−div(∇f )(a) = div(∇f )(b). Hence f − g is harmonic on the entire graph and
therefore constant.
Exercise 1. Show that all harmonic functions on a finite graph are constant.
For each n consider the finite graph Xn obtained from X by replacing all
vertices with d(x, a) ≥ n by a single vertex bn (edges joining a vertex at distance
n − 1 to one at distance n now end in this new vertex; edges joining two vertices
both at distance at least n from a in X are erased).
The finite flow theorem implies that any flow from a to bn in Xn with
divergence 1 at a has energy greater than or equal to 1/pn. Notice that if X
is recurrent then pn → 0, and this shows there is no finite energy source flow (with
source a) on X.
On the other hand, if X is transient then p > 0, so by the finite flow theorem
there are fields ϕn with divergence 1 at a, divergence 0 at every other vertex x with d(x, a) ≠
n, and with energy less than 1/p. For each edge e ∈ E(X) the sequence ϕn (e) is
bounded, hence, using a diagonal argument, there is a subsequence ϕnk which
converges to a finite energy flow with divergence 1 at a and 0 at all other vertices
(i.e. a finite energy source flow). This completes the proof (in fact one could
show that the sequence ϕn converges directly if one looks at how it is defined
in the finite flow theorem).
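The energy-minimization argument used in the proof above can be illustrated numerically. The sketch below (my own construction on a small grid graph; none of the names come from the text) computes a function harmonic except at two vertices by relaxation, and then checks that adding a circulation to its gradient, which preserves the divergence, can only increase the energy:

```python
# On a small grid, the gradient of a function harmonic except at a and b
# is the energy-minimizing flow; adding a divergence-free circulation
# around a cycle strictly increases the energy.

def neighbors(v, n):
    x, y = v
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        u = (x + dx, y + dy)
        if 0 <= u[0] < n and 0 <= u[1] < n:
            yield u

n = 4
a, b = (0, 0), (n - 1, n - 1)
f = {(x, y): 0.0 for x in range(n) for y in range(n)}
f[a], f[b] = 1.0, 0.0
# relaxation: make f (approximately) harmonic except at a and b
for _ in range(5000):
    for v in f:
        if v not in (a, b):
            nb = list(neighbors(v, n))
            f[v] = sum(f[u] for u in nb) / len(nb)

# each undirected edge stored once, ordered by tuple comparison
edges = [(v, u) for v in f for u in neighbors(v, n) if v < u]
grad = {(v, u): f[u] - f[v] for (v, u) in edges}

def energy(phi):
    # sum over undirected edges = (1/2) * sum over directed edges
    return sum(val * val for val in phi.values())

# a circulation of strength 1/2 around the bottom-left square face
cycle = [((0, 0), (0, 1)), ((0, 1), (1, 1)), ((1, 0), (1, 1)), ((0, 0), (1, 0))]
signs = [1, 1, -1, -1]  # orient the cycle consistently
pert = dict(grad)
for e, s in zip(cycle, signs):
    pert[e] += 0.5 * s

print(energy(grad) < energy(pert))  # True
```

The cross term between the gradient and the circulation telescopes to zero around the closed cycle, so the perturbed flow's energy exceeds the gradient's by exactly the circulation's own energy.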
This is a special case of a very important result which we will not prove but
which can also be proved via the flow theorem. Two metric spaces (X, d) and
(X′, d′) are said to be quasi-isometric if they are large-scale bi-Lipschitz, i.e. there
exists a function f : X → X′ (not necessarily continuous) and a constant C
such that

d(x, y)/C − C ≤ d(f(x), f(y)) ≤ C d(x, y) + C,
and f (X) is C-dense in X 0 .
The definition is important because it allows one to relate discrete and con-
tinuous objects. For example Z2 and R2 are quasi-isometric (projection to the
nearest integer point gives a quasi-isometry in one direction, and simple inclu-
sion gives one in the other).
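One can check the quasi-isometry inequality for the rounding map empirically (a small sketch of mine; C = 2 works because rounding moves each point by at most √2/2, so distances change by at most √2 < 2):

```python
import math
import random

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

C = 2.0
ok = True
for _ in range(10000):
    x = (random.uniform(-100, 100), random.uniform(-100, 100))
    y = (random.uniform(-100, 100), random.uniform(-100, 100))
    # f: R^2 -> Z^2, projection to the nearest integer point
    fx = (round(x[0]), round(x[1]))
    fy = (round(y[0]), round(y[1]))
    d, fd = dist(x, y), dist(fx, fy)
    ok = ok and (d / C - C <= fd <= C * d + C)
print(ok)  # True
```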
Exercise 2. Prove that Cayley graphs of the same group with respect to different
generating sets are quasi-isometric.
An important fact is that recurrence is invariant under quasi-isometry.
Corollary 4 (Kanai’s lemma). If two graphs of bounded degree X and X 0 are
quasi-isometric then either they are both recurrent or they are both transient.
The bounded degree hypothesis can probably be replaced by something
sharper (I really don’t know how much is known in that direction). The problem
is that adding multiple edges between the same two vertices doesn’t change the
distance on a graph but it may change the behavior of the random walk.
Exercise 3. Prove that the random walk on the graph with vertex set {0, 1, 2, . . .}
where n is joined to n + 1 by 2n edges is transient.
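The exercise can be explored (though of course not proved) by simulation. At vertex n ≥ 1 the walk sees 2^{n−1} edges leading down and 2^n leading up, so it steps up with probability 2/3; gambler's ruin then predicts a return probability of 1/2. A sketch:

```python
import random

random.seed(0)

def returns_to_zero(max_steps=2000):
    """Simulate the walk on the multigraph where n and n+1 are joined
    by 2^n edges: from n >= 1 the walk steps up with probability 2/3."""
    pos = 1  # the first step from 0 is forced to 1
    for _ in range(max_steps):
        if pos == 0:
            return True
        pos += 1 if random.random() < 2 / 3 else -1
    return False  # treat long excursions as escapes (the walk drifts up)

trials = 20000
freq = sum(returns_to_zero() for _ in range(trials)) / trials
print(round(freq, 2))  # close to 1/2, the gambler's-ruin value
```

A return frequency bounded away from 1 is exactly what transience looks like in simulation.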
3.6 Logarithmic capacity and recurrence
Short digression: there's a beautiful criterion due to Kakutani which answers
the question of whether or not a Brownian motion in R^2 will hit a compact set
K with positive probability. Somewhat surprisingly, it is not necessary
for K to have positive measure for this to be the case. The sharp criterion is
given by whether or not K can hold a distribution of electric charge in such a
way that the potential energy (created by the repulsion between charges of the
same sign) is finite. In this formulation charges are considered to be infinitely
divisible, so that if one has a unit of charge at a single point then the potential
energy is infinite (because one can consider that one has two half charges at the
same spot, and they will repel each other infinitely). Sets with no capacity
to hold charges (such as a single point, or a countable set) will not be hit by
Brownian motion, but sets with positive capacity will. End digression.
Definition 2. Let (X, d) be a compact metric space. Then (X, d) is said to
be polar (or have zero capacity) if and only if for every probability measure µ
on X one has

∫∫ − log(d(x, y)) dµ(x)dµ(y) = ∞.

Otherwise the space is said to have positive capacity. In general the capacity is
defined as (I hope I got all the signs right; another solution would be to let the
potential energy above be negative and use a supremum in the formula below)

C(X, d) = exp( − inf_µ ∫∫ − log(d(x, y)) dµ(x)dµ(y) ).
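For instance, one can estimate by Monte Carlo the logarithmic energy of the uniform measure on [0, 1] (the exact value of the double integral is 3/2); since this is finite, the interval has positive capacity. A sketch of mine, using the uniform measure rather than the true equilibrium measure:

```python
import math
import random

# Monte Carlo estimate of the logarithmic energy of the uniform
# probability measure on [0, 1]:
#   I = \int\int -log|x - y| dmu(x) dmu(y) = 3/2 (exact value).
# Finiteness of the energy of SOME measure shows [0, 1] has
# positive capacity.
random.seed(1)
N = 400000
total = 0.0
for _ in range(N):
    x, y = random.random(), random.random()
    total += -math.log(abs(x - y))
print(round(total / N, 2))  # approximately 1.5
```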
Proof. For each e ∈ E(T) let [e] denote the subset of ∂T consisting of paths
containing the edge e. Notice that ∂T \ [e] = ⋃_{i=1}^{n} [e_i] where the e_i are the
remaining edges at the same level (i.e. joining vertices at the same distance
from a) as e. Also the sets [e] are compact, so that if we write [e] as a countable
union of disjoint sets [e_i] then in fact the union is finite. Since this implies
that [e] ↦ ϕ(e) is countably additive on the algebra of sets generated by the [e],
one obtains by Carathéodory's extension theorem that it extends to a unique
probability measure.
The inverse direction is elementary. Given a measure µ on ∂T one defines
ϕ(e) = µ([e]) and verifies that it is a unit flow simply by additivity of µ.
The important part of the statement is the relationship between the energy
of the flow ϕ and the type of integral used to define the capacity of ∂T .
To prove this we use a well-known little trick in probability. It's simply
the observation that if X is a random variable taking values in the non-negative
integers then

E[X] = Σ_{n≥1} P[X ≥ n].
which is simply the sum of probabilities of all the different ways two infinite
paths can coincide up to distance n from a.
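The trick can be checked exactly on a toy example, say X uniform on {1, …, 6} (my own illustration, computed with exact rational arithmetic):

```python
from fractions import Fraction

# Exact check of E[X] = sum_{n>=1} P[X >= n] for a fair die roll.
p = {k: Fraction(1, 6) for k in range(1, 7)}
expectation = sum(k * p[k] for k in p)
tail_sum = sum(sum(p[k] for k in p if k >= n) for n in range(1, 7))
print(expectation == tail_sum, expectation)  # True 7/2
```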
The following corollary is immediate (notice that team Lyons has the ball
and Terry has passed it over to Russell).
Corollary 6 (Russell Lyons (1992)). The simple random walk on a tree T is
transient if and only if ∂T has positive capacity. It is recurrent if and only if
∂T is polar.
Corollary 7 (Benjamini and Peres (1992)). The simple random walk on a tree
T is transient if and only if there is a finite constant C > 0 such that for any
finite n there are n vertices b_1, . . . , b_n ∈ T whose average meeting height is less
than C.
Proof. Suppose there is a finite energy unit flow ϕ and let x_1, . . . , x_n, . . . be random
independent paths in ∂T with distribution µ_ϕ. Then for each n one has

E[ (n choose 2)^{−1} Σ_{1≤i<j≤n} m(x_i, x_j) ] = E(ϕ).
planet should know about this notion are: First, that it's the notion that appears in the
statement of the Central Limit Theorem. And, second, that a good reference is Billingsley's
book.
This shows that the dimension of [0, 1] is less than or equal to 1 (the easy
part). But how does one show that it is actually 1? This is harder because one
needs to control all possible covers not just construct a particular sequence of
them.
The trick is to use Lebesgue measure µ on [0, 1]. The existence of Lebesgue
measure (and the fact that it is countably additive) implies that if one covers
[0, 1] by intervals of length r_i then Σ r_i ≥ 1. Hence the 1-dimensional Hausdorff
content of [0, 1] is positive (greater than or equal to 1) and one obtains that the
dimension must be 1.
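The two halves of the argument show up in a trivial computation: covering [0, 1] by n intervals of length 1/n gives covering sums n^{1−d}, which vanish as n → ∞ for d > 1 but stay equal to 1 for d = 1 (matching the lower bound from Lebesgue measure):

```python
# d-dimensional covering sum for the cover of [0,1] by n intervals
# of length 1/n: sum of r_i^d = n * (1/n)^d = n^(1-d).
def covering_sum(n, d):
    return n * (1.0 / n) ** d

print(covering_sum(10**6, 1.0))  # 1.0 (never drops below 1)
print(covering_sum(10**6, 1.5))  # 0.001 (tends to 0 as n grows)
```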
For compact subsets of R^n the above trick was carried out to its maximal
potential in a Ph.D. thesis from the 1930s. The result works for general compact
metric spaces and is now known by the name of the author of said thesis.
Lemma 3 (Frostman's Lemma). The d-dimensional Hausdorff content of (X, d)
is positive if and only if there exists a probability measure µ on X satisfying
µ(B_r) ≤ C r^d for some constant C and every ball B_r of radius r.
Since this is valid for all x one may integrate over x to obtain that ∂T has
positive capacity.
Exercise 6. Prove that if T is a tree and a, b ∈ T then the boundaries one
obtains by considering a and b as the root vertex are Lipschitz homeomorphic
and the Lipschitz constant is no larger than ed(a,b) .
Exercise 7. Prove that if one subdivides the edges of a tree into no more than
N smaller edges then the boundary of the resulting tree is homeomorphic to the
original via a Hölder homeomorphism.
Exercise 8. Show that if T and T 0 are quasi-isometric trees then their bound-
aries are homeomorphic via a Hölder homeomorphism.
cretization procedure. It turns out Furstenberg once wanted to prove that SL(2, Z) was not
a lattice in SL(3, R), and there was a very simple proof available using Kazhdan's property
T, which was well known at the time. But Furstenberg didn't know about property T, so he
concocted the fantastic idea of trying to translate the clearly distinct behavior of Brownian
motion on SL(2, R) and SL(3, R) to the discrete random walks on the corresponding lattices.
This was done mid-proof, but has all the essential elements later refined in Sullivan and Lyons'
paper (see [Fur71]).
4 The classification of recurrent groups
4.1 Sub-quadratic growth implies recurrence
We’ve seen as a consequence of the flow theorem that the simple random walk
behaves the same way (with respect to recurrence and transience) on all Cayley
graphs of a given group G. Hence one can speak of recurrent or transient groups
(as opposed to pairs (G, F )).
In this section G will always denote a group and F a finite symmetric gen-
erator of G. The Cayley graph will be denoted by G abusing notation slightly
and we keep with the notation E(G) for the set of (directed) edges of the graph.
We introduce a new notation which will be very important in this section.
Given a set A ⊂ G (or in any graph) we denote by ∂A the edge boundary of A,
i.e. the set of outgoing edges (edges starting in A and ending outside of A).
Let Bn denote the ball of radius n centered at the identity element of G.
That is, it is the set of all elements of the group which can be written as a
product of n or less elements of F . A first consequence of the flow theorem
(which in particular implies the recurrence of Z2 and of all wallpaper groups
such as *632) is the following:
Corollary 10. If there exists c > 0 such that |Bn | ≤ cn2 for all n then the
group G is recurrent.
Proof. Suppose ϕ is a unit source flow with source the identity element of G.
Notice that for all n one has

Σ_{e∈∂B_n} ϕ(e) = 1.
Using Jensen's inequality (in the version "average of squares is greater than
square of the average") one gets

(1/|∂B_n|) Σ_{e∈∂B_n} ϕ(e)^2 ≥ ( (1/|∂B_n|) Σ_{e∈∂B_n} ϕ(e) )^2,

so that

Σ_{e∈∂B_n} ϕ(e)^2 ≥ 1/|∂B_n|
for all n. Hence it suffices to show that Σ_n 1/|∂B_n| = +∞ to conclude that the
flow has infinite energy (notice that for the standard Cayley graph associated
to Z^2 one can calculate |∂B_n| = 4 + 8n). Here the growth hypothesis must
be used and we leave it to the reader (see the exercise below and notice that
|F|(|B_{n+1}| − |B_n|) ≥ |∂B_n|).
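The count |∂B_n| = 4 + 8n for Z^2 is easy to verify by brute force (a sketch of mine; the ball here is the L1 ball, matching the standard generators):

```python
# Count outgoing edges of the radius-n ball in the standard Cayley
# graph of Z^2 (generators (+-1,0) and (0,+-1)); the ball for this
# word metric is the L1 ball |x| + |y| <= n.
def boundary_edges(n):
    count = 0
    for x in range(-n, n + 1):
        for y in range(-n, n + 1):
            if abs(x) + abs(y) <= n:
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    if abs(x + dx) + abs(y + dy) > n:
                        count += 1
    return count

print([boundary_edges(n) for n in range(1, 5)])  # [12, 20, 28, 36]
```

These match 4 + 8n, and Σ 1/(4 + 8n) diverges like a harmonic series, which is exactly what the proof needs.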
Exercise 9. Let a_k, k = 1, 2, . . . be a sequence of positive numbers such that for
some positive constant c the inequality

Σ_{k=1}^{n} a_k ≤ c n^2

holds for all n. Prove that

Σ_{k=1}^{+∞} 1/a_k = +∞.
We have seen that a group with polynomial growth of degree d less than or
equal to 2 is recurrent. Our objective in this section is to prove that a group
with |B_n| ≥ cn^3 for some c > 0 and all n must be transient. These two cases
cover all possibilities by the theorem of Gromov classifying groups of polynomial
growth. Recall that a subgroup H of a group G is said to have finite index k if
there exist g_1, . . . , g_k ∈ G such that every element of G can be written as hg_i
for some h ∈ H and some i = 1, . . . , k. The final result is the following:
Theorem 14 (Varopoulos+Gromov). A group G is recurrent if and only if it
has polynomial growth of degree less than or equal to 2. This can only happen if
the group is either finite, has a subgroup isomorphic to Z with finite index, or
has a subgroup isomorphic to Z2 with finite index.
This is an example of a result whose statement isn’t very interesting (it
basically says that one shouldn’t study recurrence of Cayley graphs since it’s
too strong of a property) but for which the ideas involved in the proof are very
interesting (the flow theorem, isoperimetric inequalities, and Gromov’s theorem
on groups of polynomial growth).
We will not prove the theorem of Gromov which leads to the final classification,
only the fact that growth larger than cn^3 implies transience.
4.3 Examples
Recall our list of examples from the first section: the d-dimensional grid Z^d,
the free group on two generators F_2, the Modular group of fractional linear
transformations Γ, the wallpaper group ∗632, the Heisenberg group Nil, and
the lamplighter group Lamplighter(Z).
It is relatively simple to establish that Z^d has polynomial growth of degree d,
that the free group on two generators has exponential growth, that the wallpaper
group has polynomial growth of degree d = 2, and that the lamplighter group
has exponential growth (with 2n moves the lamplighter can light the first n
lamps in any of the 2^n possible on-off combinations).
It turns out that the Modular group has exponential growth. To see this it
suffices to establish that the subgroup generated by z ↦ z + 2 and z ↦ −2/(2z + 1)
is free (this subgroup is an example of a "congruence subgroup", which are
important in number theory, or so I've been told... by Wikipedia). We leave
it to the interested reader to figure out a proof. Several are possible: by
trying to find an explicit fundamental domain for the action on the upper half-plane
of C, by combinatorial analysis of the coefficients of compositions, or by
a standard argument for establishing freeness of a group called the Ping-Pong
argument, which the reader can Google and learn about quite easily (there's even
a relevant post on Terry Tao's blog).
In view of Gromov’s theorem (which again, we will neither state fully nor
prove), working out the growth of the Heisenberg group and variants of it (which
was first done by Bass and Guivarc’h) turns out to be a key point in the proof
of Theorem 14. Hence any time the reader spends thinking about this issue is
well spent.
Exercise 10. Show that there exists c > 0 such that cn^4 ≤ |B_n| for all n on
the Heisenberg group Nil.
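A BFS over the group (representing elements as integer triples (a, b, c), i.e. the matrices [[1, a, c], [0, 1, b], [0, 0, 1]], with generators x, y and their inverses) shows ball sizes growing much faster than quadratically, consistent with the claimed n^4 growth. This is only numerical evidence, not a solution to the exercise:

```python
from collections import deque

def mul(g, h):
    # (a, b, c) encodes the matrix [[1, a, c], [0, 1, b], [0, 0, 1]]
    a, b, c = g
    d, e, f = h
    return (a + d, b + e, c + f + a * e)

# the standard symmetric generating set {x, x^-1, y, y^-1}
gens = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]

def ball_sizes(n_max):
    """Return [|B_0|, ..., |B_n_max|] for the word metric above."""
    dist = {(0, 0, 0): 0}
    queue = deque([(0, 0, 0)])
    while queue:
        g = queue.popleft()
        if dist[g] == n_max:
            continue
        for s in gens:
            h = mul(g, s)
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    sizes = [0] * (n_max + 1)
    for d in dist.values():
        sizes[d] += 1
    for n in range(1, n_max + 1):  # spheres -> balls
        sizes[n] += sizes[n - 1]
    return sizes

b = ball_sizes(8)
print(b[1])             # 5
print(b[8] > 4 * b[4])  # True (degree-2 growth would give a ratio near 4)
```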
of showing this; in fact the strong isoperimetric inequality is equivalent to a
property called non-amenability, which has a near-infinite list of equivalent
definitions that are not trivially equivalent; some of these definitions are simple
to verify on the Modular group).
A Cayley graph is said to satisfy the d-dimensional isoperimetric inequality
if there is a positive constant c such that

c|A|^{d−1} ≤ |∂A|^{d}
so the point is to bound the L1 norm of the gradient of f (i.e. the above sum)
from below in terms of |A|.
On the other hand

|f|_1 = Σ_{x∈G} |f(x)| = |A|.
The point of the choice of n is that if x ∈ A then f(x) = 1 but f̃(x) ≤ 1/2.
Hence the L1 norm of f̃ − f is bounded from below as follows:

(1/2)|A| ≤ |f̃ − f|_1.
If we can bound |f˜ − f |1 from above using |∇f |1 then we’re done.
To accomplish this notice that if g ∈ F then

Σ_{x∈G} |f(xg) − f(x)| ≤ |∇f|_1.
A slight generalization (left to the reader; Hint: triangle inequality) is that
if g = g1 · · · gk for some sequence of gi ∈ F then
X
|f (xg) − f (x)| ≤ k|∇f |1 .
x∈G
Using this one can bound the L1 norm of f̃ − f from above in terms of |∂A|
as follows:

Σ_{x∈G} |f̃(x) − f(x)| ≤ (1/|B_n|) Σ_{g∈B_n} Σ_{x∈G} |f(xg) − f(x)| ≤ n|∇f|_1 = 2n|∂A|.
Notice that since fn takes values in [0, 1] one has Dn ≤ deg(a) = |F | (recall F
is a finite symmetric generating set which was used to define the Cayley graph).
If one could bound Dn from below by a positive constant then this would imply
that (taking a limit of a subsequence of ∇fn ) there is a finite energy source flow
on G and hence G is transient as claimed.
To accomplish this let n be fixed and define a finite sequence of subsets
beginning with A_1 = {a} using the rule that if A_k ⊂ B_{n−1} then

A_{k+1} = A_k ∪ { e_+ : e ∈ ∂A_k, ∇f_n(e) ≤ 2D_n/|∂A_k| }.

The sequence stops the first time A_k contains a point at distance n from a; let
N be the number of sets in the sequence thus obtained.
The fact that the sequence stops follows because if A_k ⊂ B_{n−1} one has

Σ_{e∈∂A_k} ∇f_n(e) = D_n,
from above (in a way which doesn’t depend on n) to obtain a uniform lower
bound for Dn and hence prove that G is transient.
Here we use the 3-dimensional isoperimetric inequality, the fact we had noted
before that if k < N then at least half the edges of ∂Ak lead to Ak+1 , and the
fact that at most |F | edges can lead to any given vertex. Combining these facts
we obtain
|A_{k+1}| − |A_k| ≥ (1/(2|F|)) |∂A_k| ≥ c|A_k|^{2/3}
where c > 0 is the constant in the isoperimetric inequality divided by 2|F |.
This implies (see the exercise below) that |A_k| ≥ c^3 k^3 /343. Hence

Σ_{k=1}^{N} 1/|∂A_k| ≤ Σ_{k=1}^{+∞} 49/(c^3 k^2) = 49π^2/(6c^3)
where the exact sum in the final equation was included on a whim since only
convergence is needed to obtain the desired result.
Exercise 11. Let 1 = x_1 < x_2 < · · · < x_N be a sequence satisfying x_{k+1} − x_k ≥
c x_k^{2/3} for all k = 1, . . . , N − 1. Prove that x_k ≥ c^3 k^3 /343 for all k = 1, . . . , N, or
at least prove that there exists a constant λ > 0 depending only on c such that
x_k ≥ λ^3 k^3.
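For the extremal sequence x_{k+1} = x_k + c x_k^{2/3} the claimed bound can be checked numerically (with c = 1; this illustrates, but of course does not prove, the exercise):

```python
# The slowest-growing sequence allowed by the hypothesis has
# x_{k+1} = x_k + c * x_k^(2/3).  For c = 1 the claimed bound
# x_k >= c^3 k^3 / 343 holds along the whole sequence.
c = 1.0
x = 1.0
ok = True
for k in range(1, 1001):
    ok = ok and (x >= c**3 * k**3 / 343)
    x += c * x ** (2 / 3)
print(ok)  # True
```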
improved by several people, the reader might want to check out Gerl’s article
[Ger88] and the more recent work by Virág [Vir00].
Kesten was also the first to introduce the idea that growth might determine
recurrence or transience (see [Kes67]) and the idea that polynomial growth of
order 2 was equivalent to recurrence is sometimes known as Kesten’s conjecture.
I find it interesting that Kesten’s conjecture was first established for contin-
uous groups (see [Bal81] which is the culmination of a series of works by several
people including Baldi, Guivarc’h, Keane, Roynette, Peyrière, and Lohoué).
The idea of using d-dimensional isoperimetric inequalities for estimating re-
turn probabilities was introduced quite successfully into the area of random
walks on discrete groups by Varopoulos in the mid-80s. The main result is that
a d-dimensional isoperimetric inequality implies a decay of return probabilities
of the order of n−d/2 (in particular if d ≥ 3 the series converges and the walk is
transient) which was proved in [Var85].
Instead of proving Varopoulos’ decay estimates we borrowed the proof given
in Mann’s excellent book [Man12] that growth implies isoperimetric inequalities
on finitely generated groups, and then proved that an isoperimetric inequality
of degree 3 or more implies transience using an argument from a paper by
Benjamini and Kozma [BK05].
There are several good references for Gromov’s theorem including Gromov’s
original paper (where the stunning idea of looking at a discrete group from far
away to obtain a continuous one is introduced), Mann’s book [Man12], and even
Tao’s blog.
References
[Ash00] Robert B. Ash. Probability and measure theory. Harcourt/Academic
Press, Burlington, MA, second edition, 2000. With contributions by
Catherine Doléans-Dade.
[Bal81] P. Baldi. Caractérisation des groupes de Lie connexes récurrents. Ann.
Inst. H. Poincaré Sect. B (N.S.), 17(3):281–308, 1981.
[Ben13] Itai Benjamini. Coarse geometry and randomness, volume 2100 of
Lecture Notes in Mathematics. Springer, Cham, 2013. Lecture notes
from the 41st Probability Summer School held in Saint-Flour, 2011,
Chapter 5 is due to Nicolas Curien, Chapter 12 was written by Ariel
Yadin, and Chapter 13 is joint work with Gady Kozma, École d’Été de
Probabilités de Saint-Flour. [Saint-Flour Probability Summer School].
[BK05] Itai Benjamini and Gady Kozma. A resistance bound via an isoperi-
metric inequality. Combinatorica, 25(6):645–650, 2005.
[BL85] Philippe Bougerol and Jean Lacroix. Products of random matrices
with applications to Schrödinger operators, volume 8 of Progress in
Probability and Statistics. Birkhäuser Boston, Inc., Boston, MA, 1985.
[Bla55] David Blackwell. On transient Markov processes with a countable
number of states and stationary transition probabilities. Ann. Math.
Statist., 26:654–658, 1955.
[BP92] Itai Benjamini and Yuval Peres. Random walks on a tree and capacity
in the interval. Ann. Inst. H. Poincaré Probab. Statist., 28(4):557–592,
1992.
[Doo89] J. L. Doob. Kolmogorov’s early work on convergence theory and foun-
dations. Ann. Probab., 17(3):815–821, 1989.
[Doo96] Joseph L. Doob. The development of rigor in mathematical probability
(1900–1950) [in development of mathematics 1900–1950 (luxembourg,
1992), 157–170, Birkhäuser, Basel, 1994; MR1298633 (95i:60001)].
Amer. Math. Monthly, 103(7):586–595, 1996.
[Ein56] Albert Einstein. Investigations on the theory of the Brownian move-
ment. Dover Publications, Inc., New York, 1956. Edited with notes
by R. Fürth, Translated by A. D. Cowper.
[Fur63] Harry Furstenberg. Noncommuting random products. Trans. Amer.
Math. Soc., 108:377–428, 1963.
[Fur71] Harry Furstenberg. Random walks and discrete subgroups of Lie
groups. In Advances in Probability and Related Topics, Vol. 1, pages
1–63. Dekker, New York, 1971.
[Ger88] Peter Gerl. Random walks on graphs with a strong isoperimetric
property. J. Theoret. Probab., 1(2):171–187, 1988.
[Gri09] Alexander Grigor’yan. Heat kernel and analysis on manifolds, vol-
ume 47 of AMS/IP Studies in Advanced Mathematics. American
Mathematical Society, Providence, RI; International Press, Boston,
MA, 2009.
[Hsu02] Elton P. Hsu. Stochastic analysis on manifolds, volume 38 of Graduate
Studies in Mathematics. American Mathematical Society, Providence,
RI, 2002.
[Kes59b] Harry Kesten. Symmetric random walks on groups. Trans. Amer.
Math. Soc., 92:336–354, 1959.
[Kes67] H. Kesten. The Martin boundary of recurrent random walks on count-
able groups. In Proc. Fifth Berkeley Sympos. Math. Statist. and Prob-
ability (Berkeley, Calif., 1965/66), Vol. II: Contributions to Probability
Theory, Part 2, pages 51–74. Univ. California Press, Berkeley, Calif.,
1967.
[Kin93] J. F. C. Kingman. Poisson processes, volume 3 of Oxford Studies in
Probability. The Clarendon Press, Oxford University Press, New York,
1993. Oxford Science Publications.
[LP15] Russell Lyons and Yuval Peres. Probability on trees and networks.
2015.
[LS84] Terry Lyons and Dennis Sullivan. Function theory, random paths and
covering spaces. J. Differential Geom., 19(2):299–323, 1984.
[Mat95] Pertti Mattila. Geometry of sets and measures in Euclidean spaces,
volume 44 of Cambridge Studies in Advanced Mathematics. Cambridge
University Press, Cambridge, 1995. Fractals and rectifiability.
[MP10] Peter Mörters and Yuval Peres. Brownian motion. Cambridge Series
in Statistical and Probabilistic Mathematics. Cambridge University
Press, Cambridge, 2010. With an appendix by Oded Schramm and
Wendelin Werner.
[NW59] C. St. J. A. Nash-Williams. Random walk and electric currents in
networks. Proc. Cambridge Philos. Soc., 55:181–194, 1959.
[Vir00] B. Virág. Anchored expansion and random walk. Geom. Funct. Anal.,
10(6):1588–1605, 2000.
[Woe00] Wolfgang Woess. Random walks on infinite graphs and groups, volume
138 of Cambridge Tracts in Mathematics. Cambridge University Press,
Cambridge, 2000.