ENSAIOS MATEMÁTICOS
200X, Volume XX, XXX
From random walk trajectories
to random interlacements
Jiří Černý
Augusto Teixeira
Abstract. We review and comment on recent research on the random interlacements model introduced by A.-S. Sznitman in [43]. A particular emphasis is put on motivating the definition of the model via natural questions concerning geometrical/percolative properties of random walk trajectories on finite graphs, as well as on presenting some important techniques used in the random interlacements literature in the most accessible way. This text is an expanded version of the lecture notes for the mini-course given at the XV Brazilian School of Probability in 2011.
2000 Mathematics Subject Classification: 60G50, 60K35, 82C41, 05C80.
Acknowledgments
This survey article is based on the lecture notes for the mini-course on random interlacements offered at the XV Brazilian School of Probability, from 31st July to 6th August 2011. We would like to thank the organizers and sponsors of the conference for providing such an opportunity, especially Claudio Landim for the invitation. We are grateful to David Windisch for simplifying several of the arguments in these notes and to A. Drewitz, R. Misturini, B. Gois and G. Montes de Oca for reviewing earlier versions of the text.
Contents
1 Introduction
2 Random walk on the torus
2.1 Notation
2.2 Local entrance point
2.3 Local measure
2.4 Local picture as Poisson point process
2.5 Notes
2.5.1 Disconnection of a discrete cylinder
3 Definition of random interlacements
3.1 Notes
4 Properties of random interlacements
4.1 Basic properties
4.2 Translation invariance and ergodicity
4.3 Comparison with Bernoulli percolation
4.4 Notes
5 Renormalization
5.1 Notes
6 Interlacement set
6.1 Notes
7 Locally tree-like graphs
7.1 Random interlacements on trees
7.2 Random walk on tree-like graphs
7.2.1 Very short introduction to random graphs
7.2.2 Distribution of the vacant set
7.2.3 Random graphs with a given degree sequence
7.2.4 The degree sequence of the vacant graph
7.3 Notes
Index
Bibliography
Chapter 1
Introduction
These notes are based on the mini-course offered at the XV Brazilian School of Probability in Mambucaba in August 2011. The lectures tried to introduce the audience to random interlacements, a dependent-percolation model recently introduced by A.-S. Sznitman in [43]. The emphasis was put on motivating the definition of this model via some natural questions about random walks on finite graphs, explaining the difficulties that appear when studying the model, and presenting some of the techniques used to analyze random interlacements. We tried to present these techniques in the simplest possible way, sometimes at the expense of generality.
Let us start setting the stage for this review by introducing one of the problems which motivated the definition of random interlacements. To this end, fix a finite graph $G = (V, \mathcal{E})$ with a vertex set $V$ and an edge set $\mathcal{E}$, and denote by $(X_n)_{n\ge 0}$ a simple random walk on this graph, that is, the Markovian movement of a particle on $G$ prescribed as follows: it starts at a given (possibly random) vertex $x \in V$, $X_0 = x$, and given the position at time $k \ge 0$, say $X_k = y$, its position $X_{k+1}$ at time $k+1$ is uniformly chosen among all neighbors of $y$ in $G$.
Random walks on finite and infinite graphs, in particular on $\mathbb{Z}^d$, have been a subject of intense research for a long time. Currently, there is a great deal of study material on this subject; see for instance the monographs [26, 27, 28, 38, 54]. Nevertheless, there are still many interesting questions and vast areas of research which remain to be further explored.
The question that will be of principal interest to us was originally asked by M.J. Hilhorst, who proposed the random walk as a toy model for the corrosion of materials. For the sake of concreteness, take the graph $G$ to be the $d$-dimensional discrete torus $\mathbb{T}^d_N = (\mathbb{Z}/N\mathbb{Z})^d$, which for $d = 3$ can be regarded as a piece of crystalline solid. The torus is made into a graph by adding edges between two points at Euclidean distance one from each other. Consider now a simple random walk $(X_n)_{n\ge0}$, and imagine that this random walk represents a corrosive particle wandering erratically through
the crystal, while it marks all visited vertices as corroded. (The particle can revisit corroded vertices, so its dynamics is Markovian, i.e. it is not influenced by its past.)
If the time that the particle runs is short, then one intuitively expects that only a small part of the torus will be corroded, and the crystal will remain intact. On the other hand, when the running time is large, many sites will be corroded and the connected components of non-corroded sites will be small; the crystal will be destroyed by the corrosion, see Figure 1.1 for simulations. The question is how long the particle should run in order to destroy the crystal, and how this destruction proceeds.
Figure 1.1: A computer simulation by David Windisch of the largest component (light gray) and second largest component (dark gray) of the vacant set left by a random walk on $(\mathbb{Z}/N\mathbb{Z})^3$ after $[uN^3]$ steps, for $N = 200$. The pictures correspond consecutively to $u$ being 2.0, 2.5, 3.0, and 3.5. According to recent simulations, the threshold of the phase transition satisfies $u_c(\mathbb{T}^3) = 2.95 \pm 0.1$.
Remark that throughout these notes we will not be interested in the instant when all sites become corroded, that is, in the cover time of the graph by the simple random walk. Note, however, that random interlacements can also be useful when studying this problem; see the recent papers [3, 4] of D. Belius.
In more mathematical language, let us define the vacant set left by the random walk on the torus up to time $n$:
$$\mathcal{V}_N(n) = \mathbb{T}^d_N \setminus \{X_0, X_1, \dots, X_n\}. \qquad (1.1)$$
$\mathcal{V}_N(n)$ is the set of non-visited sites at time $n$ (or simply the set of non-corroded sites at this time). We are interested in connectivity properties of the vacant set, in particular in the size of its largest connected component. We will see later that the right way to scale $n$ with $N$ is $n = uN^d$ for $u \ge 0$ fixed. In this scaling the density of the vacant set is asymptotically constant and non-trivial, that is, for every $x \in \mathbb{T}^d_N$,
$$\lim_{N\to\infty} \operatorname{Prob}\bigl[x \in \mathcal{V}_N(uN^d)\bigr] = c(u,d) \in (0,1). \qquad (1.2)$$
This statement suggests viewing our problem from a slightly different perspective: as a specific site percolation model on the torus with density (roughly) $c(u,d)$, but with spatial correlations. These correlations decay rather slowly with the distance, which makes the understanding of the model delicate.
At this point it is useful to recall some properties of the usual Bernoulli site percolation on the torus $\mathbb{T}^d_N$, $d \ge 2$, that is, of the model where the sites are declared open (non-corroded) and closed (corroded) independently with respective probabilities $p$ and $1-p$. This model exhibits a phase transition at a critical value $p_c \in (0,1)$. More precisely, when $p < p_c$, the largest connected open cluster $\mathcal{C}_{\max}(p)$ is small with high probability,
$$p < p_c \implies \lim_{N\to\infty} \operatorname{Prob}\bigl[|\mathcal{C}_{\max}(p)| = O(\log N)\bigr] = 1, \qquad (1.3)$$
and when $p > p_c$, the largest connected open cluster is comparable with the whole graph (it is then called giant),
$$p > p_c \implies \lim_{N\to\infty} \operatorname{Prob}\bigl[|\mathcal{C}_{\max}(p)| \ge cN^d\bigr] = 1. \qquad (1.4)$$
Much more is known about this phase transition, at least when $d$ is large [9, 10]. A similar phase transition occurs on other (sequences of) finite graphs, in particular on the large complete graph, where it was discovered (for edge percolation) in the celebrated paper of Erdős and Rényi [18].
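The dichotomy (1.3)-(1.4) is cheap to observe numerically. Below is a small illustration (ours, not from the notes; $d = 2$ for speed, where site percolation on the square lattice has $p_c \approx 0.593$): the largest open cluster jumps from microscopic to macroscopic as $p$ crosses the threshold.

```python
import random
from collections import deque

def largest_open_cluster(N, p, d=2, seed=0):
    """Declare each site of the torus (Z/NZ)^d open independently with
    probability p and return the size of the largest open cluster."""
    rng = random.Random(seed)
    sites = [()]
    for _ in range(d):                       # enumerate all N^d sites
        sites = [s + (i,) for s in sites for i in range(N)]
    open_sites = {s for s in sites if rng.random() < p}
    largest, seen = 0, set()
    for site in open_sites:                  # BFS on the open sites
        if site in seen:
            continue
        seen.add(site)
        comp, queue = 0, deque([site])
        while queue:
            s = queue.popleft()
            comp += 1
            for axis in range(d):
                for step in (1, -1):
                    nb = s[:axis] + ((s[axis] + step) % N,) + s[axis + 1:]
                    if nb in open_sites and nb not in seen:
                        seen.add(nb)
                        queue.append(nb)
        largest = max(largest, comp)
    return largest
```

On a $30 \times 30$ torus, $p = 0.3$ typically yields clusters of a few dozen sites at most, while $p = 0.8$ produces a giant cluster containing most open sites.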
Coming back to our random walk problem, we may now refine our questions: Does a similar phase transition occur there? Is there a critical value $u_c = u_c(\mathbb{T}^d_N)$ […]

[…] the lazy simple random walk on $\mathbb{T}^d_N$, with transition probabilities
$$C(x, y) = \begin{cases} 1/2, & \text{if } x = y,\\ 1/(4d), & \text{if } x \text{ and } y \text{ are neighbors in } \mathbb{T}^d_N,\\ 0, & \text{otherwise}. \end{cases} \qquad (2.1)$$
Let $\pi$ be the uniform distribution on the torus $\mathbb{T}^d_N$. It is easy to see that $\pi$ and $C(x,y)$ satisfy the detailed balance condition, that is, $\pi$ is reversible for the random walk.
We write $P$ for the law on $(\mathbb{T}^d_N)^{\mathbb{N}}$ of the lazy simple random walk on $\mathbb{T}^d_N$ started with the uniform distribution $\pi$. We let $X_n$, $n \ge 0$, stand for the canonical process on $(\mathbb{T}^d_N)^{\mathbb{N}}$. The law of the canonical (lazy) random walk started at a specified point $x \in \mathbb{T}^d_N$ is denoted by $P_x$.
Note that we omit the dependence on $N$ in the notation $\pi$, $P$, $P_x$ and $X_n$. This will be done in other situations throughout the text, hoping that the context will clarify the omission.
For $k \ge 0$, we introduce the canonical shift operator $\theta_k$ on the space of trajectories $(\mathbb{T}^d_N)^{\mathbb{N}}$, which is characterized by $X_n \circ \theta_k = X_{n+k}$ for every $n \ge 0$. Analogously, we can define $\theta_T$ for a random time $T$.
The main reasons for considering the lazy random walk are the following facts:
$$C(\cdot,\cdot) \text{ has only non-negative eigenvalues } 1 = \lambda_1 > \lambda_2 \ge \dots \ge \lambda_{N^d} \ge 0. \qquad (2.2)$$
The spectral gap $\lambda_N := 1 - \lambda_2$ satisfies
$$\lambda_N \ge cN^{-2}, \qquad (2.3)$$
see for instance [28], Theorems 5.5 and 12.4. A simple calculation using the spectral decomposition then leads to
$$\sup_{x,y\in\mathbb{T}^d_N} \bigl| P_x[X_n = y] - \pi(y) \bigr| \le c\, e^{-\lambda_N n}, \quad \text{for all } n \ge 0, \qquad (2.4)$$
see [28], (4.22), (4.35) and Theorem 5.5.
It will be useful to define the regeneration time $r_N$ associated to the simple random walk on $\mathbb{T}^d_N$ by
$$r_N = \lambda_N^{-1} \log^2 N \le cN^2 \log^2 N. \qquad (2.5)$$
To justify the name regeneration time, observe that, for every $x \in \mathbb{T}^d_N$, by (2.2) and (2.4), the total variation distance between $P_x[X_{r_N} \in \cdot\,]$ and $\pi$ satisfies
$$\bigl\| P_x[X_{r_N} \in \cdot\,] - \pi(\cdot) \bigr\|_{TV} := \frac12 \sum_{y\in\mathbb{T}^d_N} \bigl| P_x[X_{r_N} = y] - \pi(y) \bigr| \le c' N^d e^{-c\log^2 N} \le c' e^{-c\log^2 N}, \qquad (2.6)$$
which decays with $N$ faster than any polynomial. This means that, independently of its starting distribution, the distribution of the random walk position at time $r_N$ is very close to being uniform.
We also consider the simple (lazy) random walk on the infinite lattice $\mathbb{Z}^d$, where edges again connect points at Euclidean distance one. The canonical law of this random walk starting at some point $x \in \mathbb{Z}^d$ is denoted by $P^{\mathbb{Z}^d}_x$. If no confusion may arise, we write simply $P_x$.
We introduce the entrance and hitting times $H_A$ and $\widetilde H_A$ of a set $A$ of vertices in $\mathbb{T}^d_N$ (or in $\mathbb{Z}^d$) by
$$H_A = \inf\{t \ge 0 : X_t \in A\}, \qquad \widetilde H_A = \inf\{t \ge 1 : X_t \in A\}. \qquad (2.7)$$
Throughout these notes, we will suppose that the dimension $d$ is greater than or equal to three (cf. Remark 1.1), implying that
$$\text{the random walk on } \mathbb{Z}^d \text{ is transient.} \qquad (2.8)$$
Fix now a finite set $A \subset \mathbb{Z}^d$ (usually we will denote subsets of $\mathbb{Z}^d$ by $A, B, \dots$). Due to the transience of the random walk, we can define the equilibrium measure, $e_A$, and the capacity, $\operatorname{cap}(A)$, of $A$ by
$$e_A(x) := 1_{x\in A}\, P^{\mathbb{Z}^d}_x\bigl[\widetilde H_A = \infty\bigr], \quad x \in \mathbb{Z}^d, \qquad \operatorname{cap}(A) := e_A(A) = e_A(\mathbb{Z}^d). \qquad (2.9)$$
Note that $\operatorname{cap}(A)$ normalizes the measure $e_A$ into a probability distribution.
Throughout these notes we use $B(x,r)$ to denote the closed ball centered at $x$ with radius $r$ (in the graph distance), considered as a subset of $\mathbb{Z}^d$ or $\mathbb{T}^d_N$, depending on the context.
2.2 Local entrance point
We now start the study of the local picture left by the random walk on $\mathbb{T}^d_N$. To this end, consider a (finite) box $A \subset \mathbb{Z}^d$ centered at the origin. For each $N$ larger than the diameter of $A$, one can find a copy $A_N$ of this box inside $\mathbb{T}^d_N$. We are interested in the distribution of the intersection of the random walk trajectory (run up to time $n$) with the set $A_N$, that is, $\{X_0, X_1, \dots, X_n\} \cap A_N$. As $N$ increases, the boxes $A_N$ get much smaller compared to the whole torus $\mathbb{T}^d_N$, explaining the use of the terminology local picture. In particular, it is easy to see that
$$\pi(A_N) \to 0 \quad \text{as } N \to \infty. \qquad (2.10)$$
As soon as $N$ is strictly larger than the diameter of the box $A$, we can find an isomorphism $\varphi_N : A_N \to A$ between the box $A_N$ and its copy $A$ in the infinite lattice. As usual, to avoid clumsy notation, we will drop the indices $N$ of $\varphi_N$ and $A_N$.
The first question we attempt to answer concerns the distribution of the point where the random walk typically enters the box $A$. Our goal is to show that this distribution almost does not depend on the starting point of the walk, provided it starts far enough from the box $A$.
To specify what we mean by far enough from $A$, we consider a sequence of boxes $A'$ centered at the origin in $\mathbb{Z}^d$ and having diameter $\lfloor N^{1/2} \rfloor$ (the specific value $1/2$ is not particularly important; any value strictly between zero and one would work for our purposes here). Note that for $N$ large enough $A'$ contains $A$ and $N^{1/2} \ll N$. Therefore, we can extend the isomorphism defined above to $\varphi : A'_N \to A'$, where $A'_N$ is the copy of $A'$ inside $\mathbb{T}^d_N$; as before, we will usually drop the index $N$. Also $\pi(A'_N) \to 0$ as $N \to \infty$; therefore, under $P$, the random walk typically starts outside of $A'_N$.
The first step in the determination of the entrance distribution of $A$ is the following lemma, which roughly states that the random walk likely regenerates before hitting $A$.
Lemma 2.1. ($d \ge 3$) For $A$ and $A'$ as above, there exists $\delta > 0$ such that
$$\sup_{x \in \mathbb{T}^d_N \setminus A'} P_x[H_A \le r_N] \le N^{-\delta}. \qquad (2.11)$$
[…]
$$P_x[H_A \le r_N] \le P^{\mathbb{Z}^d}_{\varphi(x)}\bigl[T_{B(\varphi(x),\, N\log^2 N)} \le r_N\bigr] + P^{\mathbb{Z}^d}_{\varphi(x)}\bigl[H_{\psi^{-1}(A)\cap B(\varphi(x),\, N\log^2 N)} < \infty\bigr], \qquad (2.14)$$
where $\psi : \mathbb{Z}^d \to \mathbb{T}^d_N$ denotes the canonical projection.
Using (3.30) on p. 227 of [29], we obtain
$$P^{\mathbb{Z}^d}_{\varphi(x)}\bigl[T_{B(\varphi(x),\, N\log^2 N)} \le r_N\bigr] \le P^{\mathbb{Z}^d}_{\varphi(x)}\Bigl[\max_{1\le t\le r_N} \bigl|X_t - \varphi(x)\bigr| \ge cN\log^2 N\Bigr] \le 2d\, \exp\bigl\{-2(cN\log^2 N)^2/4r_N\bigr\} \le c\, e^{-c\log^2 N}. \qquad (2.15)$$
The set $\psi^{-1}(A) \cap B(\varphi(x), N\log^2 N)$ is contained in a union of no more than $c\log^c N$ translated copies of the set $A$. By the choice of $x$, $\varphi(x)$ is at distance at least $cN^{1/2}$ from each of these copies. Hence, using the union bound and (2.13) again, we obtain that
$$P^{\mathbb{Z}^d}_{\varphi(x)}\bigl[H_{\psi^{-1}(A)\cap B(\varphi(x),\, N\log^2 N)} < \infty\bigr] \le c(\log N)^c N^{-c}. \qquad (2.16)$$
Inserting the last two estimates into (2.14), we have shown (2.11).
As a consequence of (2.11), we can now show that, up to a typically small error, the probability $P_y[X_{H_A} = x]$ does not depend much on the starting point $y \in \mathbb{T}^d_N \setminus A'$:
Lemma 2.2.
$$\sup_{x\in A,\ y,y'\in\mathbb{T}^d_N\setminus A'} \bigl| P_y[X_{H_A} = x] - P_{y'}[X_{H_A} = x] \bigr| \le cN^{-\delta}. \qquad (2.17)$$
Proof. We apply the following intuitive argument: by the previous lemma, it is unlikely that the random walk started at $y \in \mathbb{T}^d_N \setminus A'$ visits the set $A$ before time $r_N$, and at time $r_N$ the distribution of the random walk is already close to uniform, i.e. it is independent of $y$. Therefore the hitting distribution cannot depend on $y$ too much.
To turn this intuition into a proof, we first observe that (2.17) is implied by
$$\sup_{y\in\mathbb{T}^d_N\setminus A'} \bigl| P_y[X_{H_A} = x] - P[X_{H_A} = x] \bigr| \le cN^{-\delta}. \qquad (2.18)$$
To show (2.18), we first deduce from inequality (2.4) that
$$\sup_{y\in\mathbb{T}^d_N\setminus A'} \Bigl| E_y\bigl[ P_{X_{r_N}}[X_{H_A} = x] \bigr] - P[X_{H_A} = x] \Bigr| \le \sum_{y'\in\mathbb{T}^d_N}\ \sup_{y\in\mathbb{T}^d_N\setminus A'} \bigl| P_y[X_{r_N} = y'] - \pi(y') \bigr|\, P_{y'}[X_{H_A} = x] \le cN^d e^{-c\log^2 N} \le e^{-c\log^2 N}. \qquad (2.19)$$
For any $y \in \mathbb{T}^d_N \setminus A'$, by the simple Markov property applied at time $r_N$ and the estimate (2.19),
$$P_y[X_{H_A} = x] \le P_y[X_{H_A} = x,\ H_A > r_N] + P_y[H_A \le r_N] \le E_y\bigl[ P_{X_{r_N}}[X_{H_A} = x] \bigr] + P_y[H_A \le r_N] \le P[X_{H_A} = x] + e^{-c\log^2 N} + P_y[H_A \le r_N]. \qquad (2.20)$$
With (2.11), we have therefore shown that for any $y \in \mathbb{T}^d_N \setminus A'$,
$$P_y[X_{H_A} = x] - P[X_{H_A} = x] \le cN^{-\delta}. \qquad (2.21)$$
On the other hand, for any $y \in \mathbb{T}^d_N \setminus A'$, by the simple Markov property at time $r_N$ again,
$$P_y[X_{H_A} = x] \ge P_y[X_{H_A} = x,\ H_A > r_N] \ge E_y\bigl[ P_{X_{r_N}}[X_{H_A} = x] \bigr] - P_y[H_A \le r_N] \ge P[X_{H_A} = x] - cN^{-\delta}, \qquad (2.22)$$
using (2.19) and (2.11) in the last inequality. Combining (2.21) and (2.22), (2.18) follows.
Given that the distribution of the entrance point of the random walk in $A$ is roughly independent of the starting point (provided that the starting point is not in $A'$), we are naturally tempted to determine this distribution. This is the content of the next lemma, which will play an important role in motivating the definition of random interlacements later.
Lemma 2.3. For $A$ and $A'$ as above there is $\delta > 0$ such that
$$\sup_{x\in A,\ y\in\mathbb{T}^d_N\setminus A'} \Bigl| P_y[X_{H_A} = x] - \frac{e_A(\varphi(x))}{\operatorname{cap}(A)} \Bigr| \le cN^{-\delta}. \qquad (2.23)$$
Note that the entrance law is approximated by the (normalized) exit distribution, cf. definition (2.9) of the equilibrium measure. This is intimately related to the reversibility of the random walk.
Proof. Let us fix vertices $x \in A$, $y \in \mathbb{T}^d_N \setminus A'$. We first define the equilibrium measure of $A$ with respect to the random walk killed when exiting $A'$,
$$e^{A'}_A(z) = 1_A(z)\, P_z\bigl[ H_{\mathbb{T}^d_N\setminus A'} < \widetilde H_A \bigr], \quad \text{for any } z \in A. \qquad (2.24)$$
Note that by (2.12) and the strong Markov property applied at $H_{\mathbb{T}^d_N\setminus A'}$,
$$e_A(\varphi(z)) \le e^{A'}_A(z) \le e_A(\varphi(z)) + N^{-\delta}. \qquad (2.25)$$
[…]
$$\sum_{z\in A\setminus\{x\}} \pi_x\, P_x\bigl[ H_{\mathbb{T}^d_N\setminus A'} < \widetilde H_A,\ X_{\widetilde H_A} = z \bigr] = \pi_x\, P_x\bigl[ H_{\mathbb{T}^d_N\setminus A'} < \widetilde H_A,\ X_{\widetilde H_A} \ne x \bigr] = \sum_{z\in A\setminus\{x\}} \pi_z\, P_z\bigl[ H_{\mathbb{T}^d_N\setminus A'} < \widetilde H_A,\ X_{\widetilde H_A} = x \bigr]. \qquad (2.26)$$
By the strong Markov property applied at time $H_{\mathbb{T}^d_N\setminus A'}$, we have for any $z \in A$,
$$\pi_z\, P_z\bigl[ H_{\mathbb{T}^d_N\setminus A'} < \widetilde H_A,\ X_{\widetilde H_A} = x \bigr] = \pi_z\, E_z\Bigl[ 1_{\{H_{\mathbb{T}^d_N\setminus A'} < \widetilde H_A\}}\ P_{X_{H_{\mathbb{T}^d_N\setminus A'}}}\bigl[X_{H_A} = x\bigr] \Bigr]. \qquad (2.27)$$
With (2.25) and (2.17), this yields
$$\Bigl| \pi_z\, P_z\bigl[ H_{\mathbb{T}^d_N\setminus A'} < \widetilde H_A,\ X_{\widetilde H_A} = x \bigr] - \pi_x\, e_A(\varphi(z))\, P_y[X_{H_A} = x] \Bigr| \le c\, \pi_x N^{-\delta}, \qquad (2.28)$$
for any $z \in A$. With this estimate applied to both sides of (2.26), we obtain
$$\pi_x\, e_A(\varphi(x)) \bigl( 1 - P_y[X_{H_A} = x] \bigr) = P_y[X_{H_A} = x] \bigl( \pi_x \operatorname{cap}(A) - \pi_x\, e_A(\varphi(x)) \bigr) + O\bigl( \pi_x |A| N^{-\delta} \bigr), \qquad (2.29)$$
implying (2.23).
We observe that the entrance distribution $P_y[X_{H_A} \in \cdot\,]$ was approximated in Lemma 2.3 by a quantity that is independent of $N$ and solely relates to the infinite lattice random walk. This is a very important ingredient of the local picture construction.
2.3 Local measure
We continue to study the trace that the random walk on the torus leaves inside a small box $A \subset \mathbb{T}^d_N$. We already know from the previous section that the random walk typically enters $A$ at a point $x$ chosen with distribution $e_A(\varphi(x))/\operatorname{cap}(A)$. After entering the box $A$, the random walk behaves in the same way as on the infinite lattice $\mathbb{Z}^d$ until it gets far away from $A$ again. We will therefore split the random walk trajectory into so-called excursions. For this, recall the definition of $A'$ and the shift operators $\theta_k$ from Section 2.2 and let
$$\begin{aligned} R_0 &= H_A, & D_0 &= H_{\mathbb{T}^d_N\setminus A'}\circ\theta_{R_0} + R_0,\\ R_l &= H_A\circ\theta_{D_{l-1}} + D_{l-1}, & D_l &= H_{\mathbb{T}^d_N\setminus A'}\circ\theta_{R_l} + R_l, \quad \text{for } l \ge 1. \end{aligned} \qquad (2.30)$$
These will be called respectively the return and departure times of the random walk between $A$ and $A'$, see Figure 2.1.
Figure 2.1: A trajectory of the random walk inside the torus split into excursions (thick lines) and the remaining parts (thin lines) with respect to the boxes $A_N$, $A'_N$.
Observe that every time $n$ at which the random walk is inside $A$ has to satisfy $R_k \le n < D_k$ for some $k \ge 0$. This implies that
$$\{X_0, X_1, \dots, X_{D_k}\} \cap A = \bigcup_{j=0}^{k} \{X_{R_j}, X_{R_j+1}, \dots, X_{D_j}\} \cap A. \qquad (2.31)$$
In other words, the trace left by the random walk trajectory in $A$ up to time $D_k$ is given by the union of the traces of the $k+1$ separate excursions.
Since $X_{D_k} \notin A'_N$, using Lemma 2.2 and the strong Markov property applied at the time $D_k$, we can conclude that the set of points in $A$ visited by the random walk between times $R_{k+1}$ and $D_{k+1}$ is roughly independent of what happened before the time $D_k$. Therefore, the excursions $(X_{R_j}, X_{R_j+1}, \dots, X_{D_j})$, $j = 0, 1, 2, \dots$, should be roughly independent. A fixed number of such excursions should actually become i.i.d. in the limit $N \to \infty$.
Lemma 2.3 yields that the entrance points $X_{R_j}$ of these trajectories in $A$ are asymptotically distributed as $e_A(\varphi(\cdot))/\operatorname{cap}(A)$. Moreover, as $N$ grows, the difference $D_k - R_k$ tends to infinity. Therefore, as $N$ grows, the excursion $(X_{R_j}, X_{R_j+1}, \dots, X_{D_j})$ looks more and more like a simple random walk trajectory on $\mathbb{Z}^d$ (note that this heuristic claim is only true because the random walk on $\mathbb{Z}^d$, for $d \ge 3$, is transient).
From the previous arguments it follows that the asymptotic distribution of every excursion should be given by
$$\bar Q_A\bigl[ X_0 = x,\ (X_n)_{n\ge0} \in \cdot\, \bigr] = \frac{e_A(x)}{\operatorname{cap}(A)}\, P^{\mathbb{Z}^d}_x[\,\cdot\,], \quad \text{for } x \in \mathbb{Z}^d. \qquad (2.32)$$
To fully understand the trace left by the random walk in $A$, we now have to understand how many excursions between $A$ and $A'$ are typically performed by the random walk until some fixed time $n$.
Using a reversibility argument again, we first compute the expected number of excursions before time $n$. To this end, fix $k \ge 0$ and let us estimate the probability that $k$ is a return time $R_j$ for some $j \ge 0$. This probability can be written as
$$P[k = R_j \text{ for some } j \ge 0] = \sum_{x\in A}\ \sum_{y\in(A')^c}\ \sum_{m\le k} P\bigl[ X_m = y,\ X_k = x,\ X_i \in A'\setminus A \text{ for } m < i < k \bigr]. \qquad (2.33)$$
Let $\Sigma_j(y,x)$ be the set of possible random walk trajectories of length $j$ joining $y$ to $x$ and staying in $A'\setminus A$ between times $1$ and $j-1$. By reversibility, for every $\sigma \in \Sigma_j(y,x)$ and $l \ge 0$,
$$P[(X_l, \dots, X_{l+j}) = \sigma] = \pi(y)\, P_y[(X_0, \dots, X_j) = \sigma] = \pi(x)\, P_x[(X_0, \dots, X_j) = \bar\sigma], \qquad (2.34)$$
where $\bar\sigma \in \Sigma_j(x,y)$ is the time-reversed path of $\sigma$. Observing that the time reversal is a bijection from $\Sigma_j(y,x)$ to $\Sigma_j(x,y)$, the right-hand side of (2.33) can be written as
$$\begin{aligned} &\sum_{m=0}^{k}\ \sum_{x\in A}\ \sum_{y\in(A')^c}\ \sum_{\sigma\in\Sigma_{k-m}(y,x)} P[(X_m, \dots, X_k) = \sigma]\\ &\overset{(2.34)}{=} \sum_{m=0}^{k}\ \sum_{x\in A}\ \sum_{y\in(A')^c}\ \sum_{\bar\sigma\in\Sigma_{k-m}(x,y)} \pi(x)\, P_x[(X_0, \dots, X_{k-m}) = \bar\sigma]\\ &= \sum_{x\in A}\ \sum_{m=0}^{k} N^{-d}\, P_x\bigl[ k-m = H_{\mathbb{T}^d_N\setminus A'} < \widetilde H_A \bigr]\\ &= N^{-d} \sum_{x\in A} P_x\bigl[ H_{\mathbb{T}^d_N\setminus A'} < \min\{k,\ \widetilde H_A\} \bigr]. \end{aligned} \qquad (2.35)$$
We now use (2.25) to obtain that
$$\lim_{N\to\infty}\ \lim_{k\to\infty}\ \Bigl| N^d\, P\bigl[ k = R_j \text{ for some } j \ge 0 \bigr] - \operatorname{cap} A \Bigr| = 0. \qquad (2.36)$$
As the random variables $(R_{i+1} - R_i)_{i\ge0}$ are i.i.d., renewal theory yields
$$\lim_{N\to\infty} N^{-d}\, E[R_{i+1} - R_i] = (\operatorname{cap} A)^{-1}, \qquad (2.37)$$
and thus, for all $u > 0$ fixed,
$$\lim_{N\to\infty} E\bigl[ \#\{\text{excursions between times } 0 \text{ and } uN^d\} \bigr] = u \operatorname{cap} A. \qquad (2.38)$$
Remark that we have finally obtained a justification for the scaling mentioned in (1.2)!
The expectation of the difference $R_{i+1} - R_i \asymp N^d$ is much larger than the regeneration time. Hence, typically the random walk regenerates many times before returning to $A$. It is thus plausible that, asymptotically as $N \to \infty$, $N^{-d}(R_{i+1} - R_i)\operatorname{cap} A$ has exponential distribution with parameter $1$, and that the number of excursions before $uN^d$ has Poisson distribution with parameter $u \operatorname{cap} A$. This heuristic can be made rigorous, see [1, 2].
Combining the discussion of the previous paragraph with the asymptotic i.i.d. property of the excursions, and with (2.32), we deduce the following somewhat informal description of how the random walk visits $A$:
- the random walk trajectory is split into roughly independent excursions,
- for each $x \in A$, the number of excursions starting at $x$ is approximately an independent Poisson random variable with mean $u\, e_A(x)$,
- the trace left by the random walk on $A$ is given by the union of all these excursions intersected with $A$.
To make the last point slightly more precise, observe that with high probability $X_0, X_{uN^d} \notin A'$, that is, the times $0$ and $uN^d$ are not contained in any excursion. So with high probability there are no incomplete excursions in $A$ at time $uN^d$.
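The prediction (2.38) (and the Poisson heuristic above) can be tested by brute force. The following sketch is our own illustration: it counts excursions of the walk on $(\mathbb{Z}/N\mathbb{Z})^3$ between $A = \{0\}$ and the complement of a small ball $A'$, up to time $uN^3$; the empirical mean should be of the order $u \operatorname{cap}(\{0\}) \approx 0.66\, u$ (with sizeable finite-$N$ corrections at these small sizes).

```python
import random

def excursion_counts(N=12, u=2.0, runs=200, r_outer=2, seed=0):
    """Count excursions of the torus walk between A = {0} and the
    complement of the ball A' = B(0, r_outer), up to time u*N^3
    (plain, non-lazy walk for brevity)."""
    rng = random.Random(seed)
    steps = int(u * N**3)
    counts = []
    for _ in range(runs):
        x = [rng.randrange(N) for _ in range(3)]
        # approximation: treat the (typically far away) start as outside A'
        outside_since_last = True
        k = 0
        for _ in range(steps):
            axis = rng.randrange(3)
            x[axis] = (x[axis] + rng.choice((1, -1))) % N
            # sup-norm distance to the origin on the torus
            dist = max(min(c, N - c) for c in x)
            if dist == 0 and outside_since_last:
                k += 1                      # a new excursion reaches A
                outside_since_last = False
            elif dist > r_outer:
                outside_since_last = True   # the walk left A'
        counts.append(k)
    return counts
```

The resulting histogram of counts is already visibly Poisson-like: a substantial fraction of runs produce no excursion at all, matching the mass $e^{-u\operatorname{cap} A}$ at zero.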
2.4 Local picture as Poisson point process
We are now going to make the above informal construction precise, using the formalism of Poisson point processes. For this, let us first introduce some notation. Let $W_+$ be the space of infinite random walk trajectories that spend only a finite time in finite sets of $\mathbb{Z}^d$:
$$W_+ = \bigl\{ w : \mathbb{N} \to \mathbb{Z}^d :\ \|w(n) - w(n+1)\|_1 \le 1 \text{ for each } n \ge 0, \text{ and } \{n : w(n) = y\} \text{ is finite for all } y \in \mathbb{Z}^d \bigr\}. \qquad (2.39)$$
(Recall that we consider the lazy random walk.) As usual, $X_n$, $n \ge 0$, denote the canonical coordinates on $W_+$ defined by $X_n(w) = w(n)$. We endow the space $W_+$ with the $\sigma$-algebra $\mathcal{W}_+$ generated by the coordinate maps $X_n$, $n \ge 0$.
Recall the definition of $\bar Q_A$ in (2.32) and define the measure $Q^+_A := \operatorname{cap}(A)\, \bar Q_A$ on $(W_+, \mathcal{W}_+)$, that is
$$Q^+_A\bigl[ X_0 = x,\ (X_n)_{n\ge0} \in F \bigr] = e_A(x)\, P^{\mathbb{Z}^d}_x[F], \quad x \in \mathbb{Z}^d,\ F \in \mathcal{W}_+. \qquad (2.40)$$
From the transience of the simple random walk on $\mathbb{Z}^d$ it follows that $W_+$ has full measure under $Q^+_A$, that is, $Q^+_A(W_+) = e_A(A) = \operatorname{cap} A$.
To define the Poisson point process we consider the space of finite point measures on $W_+$,
$$\Omega_+ = \Bigl\{ \omega_+ = \sum_{i=1}^{n} \delta_{w_i} :\ n \in \mathbb{N},\ w_1, \dots, w_n \in W_+ \Bigr\}, \qquad (2.41)$$
where $\delta_w$ stands for the Dirac measure at $w$. We endow this space with the $\sigma$-algebra generated by the evaluation maps $\omega_+ \mapsto \omega_+(D)$, where $D \in \mathcal{W}_+$.
Now let $P^u_A$ be the law of a Poisson point process on $\Omega_+$ with intensity measure $uQ^+_A$ (see e.g. [32], Proposition 3.6, for the definition and the construction). The informal description given at the end of the last section can then be translated into the following theorem. Its full proof, completing the sketchy arguments of the last section, can be found in [52].
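For $A = \{0\}$ the process $P^u_A$ is particularly simple to sample, since $e_A$ is then concentrated at the origin. The following sketch is ours (the value $\operatorname{cap}(\{0\}) \approx 0.66$ is a numerical stand-in for the true capacity): it draws a Poisson number of trajectories with intensity $u \operatorname{cap}(A)$ and lets each one run as an unconditioned forward walk, as prescribed by (2.40).

```python
import math
import random

def _poisson(lam, rng):
    """Sample Poisson(lam) by Knuth's product method (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def interlacement_trace(u, cap_A=0.66, horizon=2000, radius=5, seed=0):
    """Sketch of sampling the Poisson process P^u_A for A = {0} in Z^3:
    draw K ~ Poisson(u * cap(A)) trajectories; by (2.40) each starts at 0
    (e_A is supported on A) and then runs as an unconditioned forward
    simple random walk. Returns the union of their ranges inside
    B(0, radius). cap_A ~ 0.66 is a Monte Carlo stand-in for cap({0})."""
    rng = random.Random(seed)
    trace = set()
    for _ in range(_poisson(u * cap_A, rng)):
        x = (0, 0, 0)
        trace.add(x)
        for _ in range(horizon):
            axis = rng.randrange(3)
            step = rng.choice((1, -1))
            x = tuple(c + (step if i == axis else 0) for i, c in enumerate(x))
            if max(abs(c) for c in x) <= radius:
                trace.add(x)
    return trace
```

By Theorem 2.4 below, the trace produced this way is (for large $N$) a good stand-in for $\{X_0, \dots, X_{uN^d}\} \cap A$ seen by the torus walk.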
Theorem 2.4. Let $u > 0$ and let $J$ be the index of the last excursion started before $uN^d$, $J := \max\{i : R_i \le uN^d\}$. For every $i \le J$ define $w^N_i \in W_+$ to be an (arbitrary) extension of the $i$-th excursion, that is
$$w^N_i(k) = \varphi(X_{R_i+k}) \quad \text{for all } k \in \{0, \dots, D_i - R_i\}. \qquad (2.42)$$
Then the distribution of the point process $\sum_{i\le J} \delta_{w^N_i}$ converges weakly to $P^u_A$ on $\Omega_+$.
As a consequence, the distribution of $\{X_0, \dots, X_{uN^d}\} \cap A$ on $\{0,1\}^A$ converges to the distribution of $\bigcup_{i\le n} (\operatorname{Range} w_i) \cap A$, where the law of $\omega_+ = \sum_{i=1}^{n} \delta_{w_i}$ is given by $P^u_A$.
2.5 Notes
The properties of the vacant set left by the random walk on the torus were first studied by Benjamini and Sznitman in [6]. In that paper it is shown that (for $d$ large enough) when the number of steps scales as $n = uN^d$ with $u$ sufficiently small, the vacant set has a giant component, which is unique (in a weak sense). The size of the second largest component was studied in [51].
The local picture in the torus and its connection with random interlacements was established in [52] for many distant microscopic boxes simultaneously. This result was largely improved in [49], by extending the connection to mesoscopic boxes of size $N^{1-\varepsilon}$. More precisely, for $A$ being a box of size $N^{1-\varepsilon}$ in the torus, Theorem 1.1 of [49] constructs a coupling between the random walk on the torus and random interlacements at levels $u(1-\varepsilon)$ and $u(1+\varepsilon)$ such that for arbitrary $\delta > 0$,
$$\operatorname{Prob}\bigl[ \mathcal{I}^{(1-\varepsilon)u} \cap A \subset \{X_0, \dots, X_{uN^d}\} \cap A \subset \mathcal{I}^{(1+\varepsilon)u} \cap A \bigr] \ge 1 - N^{-\delta}, \qquad (2.43)$$
for $N$ large enough, depending on $d$, $\varepsilon$, $u$ and $\delta$. Here, $\mathcal{I}^u$ is the interlacement set at level $u$, which we will construct in the next chapter.
This coupling allows one to prove the best known results on the properties of the largest connected component $\mathcal{C}_{\max}(u,N)$ of the vacant set on the torus, going in the direction of the phase transition mentioned in (1.5). In the following theorem, which is taken from Theorems 1.2-1.4 of [49], $u_\star$ and $u_{\star\star}$ denote critical levels of random interlacements […]. We recommend [37] for further material on this subject.
Theorem 2.5 ($d \ge 3$). (i) Subcritical phase: When $u > u_\star$, then for every $\varepsilon > 0$,
$$P\bigl[ |\mathcal{C}_{\max}(u,N)| \ge \varepsilon N^d \bigr] \xrightarrow[N\to\infty]{} 0. \qquad (2.44)$$
In addition, when $u > u_{\star\star}$,
$$P\bigl[ |\mathcal{C}_{\max}(u,N)| \ge \log^c N \bigr] \xrightarrow[N\to\infty]{} 0. \qquad (2.45)$$
(ii) Supercritical phase: When $u$ is small enough, then for some $\varepsilon > 0$,
$$P\bigl[ |\mathcal{C}_{\max}(u,N)| \ge \varepsilon N^d \bigr] \xrightarrow[N\to\infty]{} 1. \qquad (2.46)$$
Moreover, for $d \ge 5$, the second largest component of the vacant set has size at most $\log^c N$ […] of random interlacements on $\mathbb{Z}^d$. In addition, $u_\star = u_{\star\star}$ […]
$$N^{-d}\, |\mathcal{C}_{\max}(u,N)| \xrightarrow[N\to\infty]{} \rho(u) \in (0,1). \qquad (2.47)$$
2.5.1 Disconnection of a discrete cylinder
These notes would be incomplete without mentioning another problem which motivated the introduction of random interlacements: the disconnection of a discrete cylinder, or, in a more picturesque language, the problem of a termite in a wooden beam.
In this problem one considers a discrete cylinder $G \times \mathbb{Z}$, where $G$ is an arbitrary finite graph, the most prominent example being the torus $\mathbb{T}^d_N$, $d \ge 2$. On the cylinder one considers a simple random walk started from a point in its base, $G \times \{0\}$. The object of interest is the disconnection time, $T_G$, of the discrete cylinder, which is the first time such that the range of the random walk disconnects the cylinder. More precisely, $T_G$ is the smallest time such that, for a large enough $M$, $G \times (-\infty, -M]$ and $G \times [M, \infty)$ are contained in two distinct connected components of the complement of the range, $(G \times \mathbb{Z}) \setminus \{X_0, \dots, X_{T_G}\}$.
The study of this problem was initiated by Dembo and Sznitman [16]. It is shown there that $T_N := T_{\mathbb{T}^d_N}$ is of order $N^{2d}$, on the logarithmic scale:
$$\lim_{N\to\infty} \frac{\log T_N}{\log N} = 2d, \quad d \ge 2. \qquad (2.48)$$
This result was successively improved in [17] (a lower bound on $T_N$: the collection of random variables $N^{2d}/T_N$, $N \ge 1$, is tight when $d \ge 17$), [40] (the lower bound holds for any $d \ge 2$), and [42] (an upper bound: $T_N/N^{2d}$ is tight). The disconnection time of cylinders with a general base $G$ is studied in [39]: for the class of bounded-degree bases $G$, it is shown that $T_G$ is roughly of order $|G|^2$.
Some of the works cited in the last paragraph explore considerably the connection of the problem with random interlacements. This connection was established in [41], and states that the local picture left by the random walk on the discrete cylinder converges locally to random interlacements. The connection is slightly more complicated than on the torus (that is why we chose the torus for our motivation). The complication comes from the fact that the parameter $u$ of the limiting random interlacements is not deterministic but random, and depends on the local time of the vertical projection of the random walk. We state the connection as a theorem which is a simplified version of [41, Theorem 0.1].
Theorem 2.7. Let $x_N \in \mathbb{T}^d_N \times \mathbb{Z}$ be such that its $\mathbb{Z}$-component $z_N$ satisfies $\lim_{N\to\infty} z_N/N^d = v$. Let $L^z_t = \sum_{i=0}^{t} 1\{X_i \in \mathbb{T}^d_N \times \{z\}\}$ be the local time of the vertical projection of the random walk. Assume that $t_N$ satisfies $\lim t_N/N^{2d} = \alpha$. Then, for any $n > 0$ fixed, in distribution,
$$\bigl( \{X_0, \dots, X_{t_N}\} \cap B(x_N, n),\ L^{z_N}_{t_N}/N^d \bigr) \xrightarrow[N\to\infty]{} \bigl( \mathcal{I}^{L} \cap B(n),\ L \bigr), \qquad (2.49)$$
where $L/(d+1)$ has the distribution of the local time of Brownian motion at time $\alpha/(d+1)$ and spatial position $v$, and $\mathcal{I}^L$ is the interlacement set of random interlacements at level $L$.
A version of Theorem 2.7 for cylinders with general base G is given
in [53].
The dependence of the intensity of the random interlacements on the local time of the vertical projection should be intuitively obvious: while in the horizontal direction the walk mixes rather quickly (in time $N^2 \log N$), there is no averaging going on in the vertical direction. Therefore the intensity of the local picture around $x_N$ must depend on the time that the random walk spends in the layer $z_N$, which is given by $L^{z_N}_{t_N}$.
It should not be surprising that Conjecture 2.6 can be transferred to the disconnection problem:
Conjecture 2.8 (Remark 4.7 of [42]). The random variable $T_N/N^{2d}$ converges in distribution to a random variable $U$ which is defined by
$$U = \inf\Bigl\{ t \ge 0 :\ \sup_{x\in\mathbb{R}} L(t/(d+1), x) \ge u_\star (d+1) \Bigr\}, \qquad (2.50)$$
where $L(t,x)$ is the local time of a one-dimensional Brownian motion and $u_\star$ […]
Chapter 3
Definition of random interlacements
[…] the measures $Q^{N,u}_A$ automatically satisfy the restriction property
$$Q^{N,u}_{A'} = \pi_{A,A'} \circ Q^{N,u}_A,^{1} \qquad (3.1)$$
where $\pi_{A,A'} : \{0,1\}^A \to \{0,1\}^{A'}$, $A' \subset A$, is the usual restriction map. Moreover, by Theorem 2.4 (or Theorem 1.1 of [52]),
$$Q^{N,u}_A \text{ converges weakly as } N \to \infty \text{ to a measure } Q^u_A, \qquad (3.2)$$
where $Q^u_A$ is the distribution of the trace left on $A$ by the Poisson process $P^u_A$.
Using (3.2), we can see that the restriction property (3.1) passes to the limit, that is
$$Q^u_{A'} = \pi_{A,A'} \circ Q^u_A. \qquad (3.3)$$
$^1$ For a measurable map $f : S_1 \to S_2$ and a measure $\mu$ on $S_1$, we use $f \circ \mu$ to denote the push-forward of $\mu$ by $f$, $(f \circ \mu)(\cdot) := \mu(f^{-1}(\cdot))$.
Kolmogorov's extension theorem then yields the existence of the wanted infinite volume measure $Q^u$ on $\{0,1\}^{\mathbb{Z}^d}$ (endowed with the usual cylinder $\sigma$-field).
The construction of the previous paragraph has a considerable disadvantage. First, it relies on (3.2), whose proof is only sketched in these notes. Secondly, it does not give enough information about the measure $Q^u$. In particular, we have completely lost the nice feature that $Q^u_A$ is the trace of a Poisson point process of random walk trajectories.
This is the motivation for another, more constructive, definition of the infinite volume model. The reader might consider this definition rather technical. However, the effort put into it will be more than paid back when working with the model. The following construction follows the original paper [43] with minor modifications.
We wish to construct the infinite volume analog of the Poisson point process $\mathcal P^u_A$. The first step is to introduce the measure space where the new Poisson point process will be defined. To this end we need a few definitions.

Similarly to (2.39), let $W$ be the space of doubly-infinite random walk trajectories that spend only a finite time in finite subsets of $\mathbb Z^d$, i.e.
$$W = \big\{w:\mathbb Z\to\mathbb Z^d : |w(n)-w(n+1)|_1 = 1 \text{ for each } n\in\mathbb Z, \text{ and } \{n : w(n)=y\} \text{ is finite for all } y\in\mathbb Z^d\big\}. \qquad(3.4)$$
We again denote by $X_n$, $n\in\mathbb Z$, the canonical coordinates on $W$, and write $\theta_k$, $k\in\mathbb Z$, for the canonical shifts,
$$\theta_k(w)(\cdot) = w(\cdot+k), \quad\text{for } k\in\mathbb Z \text{ (resp. } k\ge0 \text{ when } w\in W_+\text{)}. \qquad(3.5)$$
We endow $W$ with the $\sigma$-algebra $\mathcal W$ generated by the canonical coordinates.
Given $A\subset\mathbb Z^d$ and $w\in W$ (resp. $w\in W_+$), we define the entrance time in $A$ and the exit time from $A$ for the trajectory $w$:
$$H_A(w) = \inf\{n\in\mathbb Z \text{ (resp. } \mathbb N\text{)} : X_n(w)\in A\},\qquad T_A(w) = \inf\{n\in\mathbb Z \text{ (resp. } \mathbb N\text{)} : X_n(w)\notin A\}. \qquad(3.6)$$
When $A\subset\subset\mathbb Z^d$ (i.e. $A$ is a finite subset of $\mathbb Z^d$), we consider the subset of $W$ of trajectories entering $A$:
$$W_A = \{w\in W : X_n(w)\in A \text{ for some } n\in\mathbb Z\}. \qquad(3.7)$$
We can write $W_A$ as a countable partition into measurable sets,
$$W_A = \bigcup_{n\in\mathbb Z}W^n_A, \quad\text{where } W^n_A = \{w\in W : H_A(w)=n\}. \qquad(3.8)$$
Heuristically, the reason why we need to work with the space $W$ of doubly-infinite trajectories is that, when taking the limit $A\nearrow\mathbb Z^d$, the excursions start at infinity.
Cerny, Teixeira
The first step of the construction of the random interlacements is to extend the measure $Q^+_A$ to the space $W$. This is done, naturally, by requiring that $(X_{-n})_{n\ge0}$ is a simple random walk started at $X_0$ conditioned not to return to $A$. More precisely, we define on $(W,\mathcal W)$ the measure $Q_A$ by
$$Q_A\big[(X_{-n})_{n\ge0}\in F,\ X_0=x,\ (X_n)_{n\ge0}\in G\big] = P_x\big[F\mid\widetilde H_A=\infty\big]\,e_A(x)\,P_x[G], \qquad(3.9)$$
for $F,G\in\mathcal W_+$ and $x\in\mathbb Z^d$.
Observe that $Q_A$ gives full measure to $W^0_A$. This however means that the set $A$ is still somehow registered in the trajectories; more precisely, the origin of time is at the first visit to $A$. To solve this issue, it is convenient to consider the space $W^* = W/\!\sim$, where
$$w\sim w' \iff w(\cdot) = w'(\cdot+k) \text{ for some } k\in\mathbb Z, \qquad(3.10)$$
which allows us to ignore the rather arbitrary (and $A$-dependent) time parametrization of the random walks. We denote by $\pi^*$ the canonical projection from $W$ to $W^*$. The map $\pi^*$ induces a $\sigma$-algebra in $W^*$ given by
$$\mathcal W^* = \{U\subset W^* : (\pi^*)^{-1}(U)\in\mathcal W\}, \qquad(3.11)$$
which is the largest $\sigma$-algebra on $W^*$ for which $\pi^*:(W,\mathcal W)\to(W^*,\mathcal W^*)$ is measurable. We use $W^*_A$ to denote the set of trajectories modulo time shift entering $A\subset\subset\mathbb Z^d$,
$$W^*_A = \pi^*(W_A). \qquad(3.12)$$
It is easy to see that $W^*_A\in\mathcal W^*$.
The random interlacements process that we are defining will be governed by a Poisson point process on the space $(W^*\times\mathbb R_+,\ \mathcal W^*\otimes\mathcal B(\mathbb R_+))$. To this end we define, in analogy to (2.41),
$$\Omega = \Big\{\omega=\sum_{i\ge1}\delta_{(w^*_i,u_i)} : w^*_i\in W^*,\ u_i\in\mathbb R_+, \text{ such that } \omega(W^*_A\times[0,u])<\infty \text{ for every } A\subset\subset\mathbb Z^d \text{ and } u\ge0\Big\}. \qquad(3.13)$$
This space is endowed with the $\sigma$-algebra $\mathcal A$ generated by the evaluation maps $\omega\mapsto\omega(D)$ for $D\in\mathcal W^*\otimes\mathcal B(\mathbb R_+)$.
At this point, the reader may ask why we do not take $\Omega$ to be simply the space of point measures on $W^*$. The reason is that the second coordinate of a pair $(w^*_i,u_i)$ can be viewed as a label attached to the trajectory $w^*_i$. This trajectory will influence the random interlacements model at level $u$ only if its label satisfies $u_i\le u$.
The intensity measure of the Poisson point process governing the random interlacements will be given by $\nu\otimes du$. Here, $du$ is the Lebesgue measure on $\mathbb R_+$, and $\nu$ is the unique $\sigma$-finite measure on $(W^*,\mathcal W^*)$ satisfying, for every $A\subset\subset\mathbb Z^d$,
$$1_{W^*_A}\,\nu = \pi^*\circ Q_A,^2 \qquad(3.14)$$
where the finite measure $Q_A$ on $W_A$ is given by (3.9). The existence and uniqueness of $\nu$ are proved next.
Proof. The uniqueness of $\nu$ satisfying (3.14) is clear since, given a sequence of sets $A_k\nearrow\mathbb Z^d$, we have $W^* = \bigcup_k W^*_{A_k}$.

For the existence, what we need to prove is that, for fixed $A\subset A'\subset\subset\mathbb Z^d$,
$$\pi^*\circ(1_{W_A}\,Q_{A'}) = \pi^*\circ Q_A. \qquad(3.15)$$
Indeed, we can then set, for an arbitrary sequence $A_k\nearrow\mathbb Z^d$,
$$\nu = \sum_{k\ge1} 1_{W^*_{A_k}\setminus W^*_{A_{k-1}}}\ \pi^*\circ Q_{A_k}. \qquad(3.16)$$
The equality (3.15) then ensures that $\nu$ does not depend on the sequence $A_k$.
We introduce the space
$$W_{A,A'} = \{w\in W_A : H_{A'}(w)=0\} \qquad(3.17)$$
and the bijection $s_{A,A'} : W_{A,A'}\to W^0_A$ given by
$$[s_{A,A'}(w)](\cdot) = w(H_A(w)+\cdot), \qquad(3.18)$$
moving the origin of time from the entrance to $A'$ to the entrance to $A$.
To prove (3.15), it is enough to show that
$$s_{A,A'}\circ(1_{W_{A,A'}}\,Q_{A'}) = Q_A. \qquad(3.19)$$
Indeed, from (3.9) it follows that $1_{W_{A,A'}}\,Q_{A'} = 1_{W_A}\,Q_{A'}$, and thus (3.15) follows just by applying $\pi^*$ to both sides of (3.19).

Let $\Sigma$ be the set of finite paths $\sigma:\{0,\dots,N_\sigma\}\to\mathbb Z^d$ such that $\sigma(0)\in A'$, $\sigma(n)\notin A$ for $n<N_\sigma$, and $\sigma(N_\sigma)\in A$. We split the left-hand side of (3.19) by partitioning $W_{A,A'}$ into the sets
$$W^\sigma_{A,A'} = \{w\in W_{A,A'} : w \text{ restricted to } \{0,\dots,N_\sigma\} \text{ equals } \sigma\}, \quad\text{for } \sigma\in\Sigma. \qquad(3.20)$$

$^2$For any set $G$ and measure $\mu$, we define $(1_G\,\mu)(\cdot) := \mu(G\cap\cdot)$.
For $w\in W^\sigma_{A,A'}$, we have $H_A(w)=N_\sigma$, and therefore
$$s_{A,A'}\circ(1_{W^\sigma_{A,A'}}\,Q_{A'}) = \theta_{N_\sigma}\circ(1_{W^\sigma_{A,A'}}\,Q_{A'}). \qquad(3.21)$$
To prove (3.19), consider an arbitrary collection of sets $A_i\subset\mathbb Z^d$, $i\in\mathbb Z$, such that $A_i\ne\mathbb Z^d$ for at most finitely many $i\in\mathbb Z$. Then,
$$\begin{aligned}
s_{A,A'}\circ(1_{W_{A,A'}}\,Q_{A'})[X_i\in A_i,\ i\in\mathbb Z] &= \sum_{\sigma\in\Sigma} Q_{A'}\big[X_{i+N_\sigma}(w)\in A_i,\ i\in\mathbb Z,\ w\in W^\sigma_{A,A'}\big]\\
&= \sum_{\sigma\in\Sigma} Q_{A'}\big[X_i(w)\in A_{i-N_\sigma},\ i\in\mathbb Z,\ w\in W^\sigma_{A,A'}\big].
\end{aligned} \qquad(3.22)$$
Using the formula (3.9), the identity
$$e_{A'}(x)\,P_x\big[\,\cdot\mid\widetilde H_{A'}=\infty\big] = P_x\big[\,\cdot\,,\ \widetilde H_{A'}=\infty\big], \quad x\in\operatorname{supp}e_{A'}, \qquad(3.23)$$
and the Markov property, the above expression equals
$$\begin{aligned}
&\sum_{x\in\operatorname{supp}e_{A'}}\sum_{\sigma\in\Sigma} P_x\big[X_j\in A_{-j-N_\sigma},\ j\ge0,\ \widetilde H_{A'}=\infty\big]\; P_x\big[X_n=\sigma(n)\in A_{n-N_\sigma},\ 0\le n\le N_\sigma\big]\; P_{\sigma(N_\sigma)}\big[X_n\in A_n,\ n\ge0\big]\\
&\quad= \sum_{x\in\operatorname{supp}e_{A'}}\,\sum_{y\in A}\,\sum_{\sigma\in\Sigma:\,\sigma(N_\sigma)=y} P_x\big[X_j\in A_{-j-N_\sigma},\ j\ge0,\ \widetilde H_{A'}=\infty\big]\; P_x\big[X_n=\sigma(n)\in A_{n-N_\sigma},\ 0\le n\le N_\sigma\big]\; P_y\big[X_n\in A_n,\ n\ge0\big].
\end{aligned} \qquad(3.24)$$
For fixed $x\in\operatorname{supp}e_{A'}$ and $y\in A$, we have, using the reversibility in the first step and the Markov property in the second,
$$\begin{aligned}
&\sum_{\sigma:\,\sigma(N_\sigma)=y} P_x\big[X_j\in A_{-j-N_\sigma},\ j\ge0,\ \widetilde H_{A'}=\infty\big]\; P_x\big[X_n=\sigma(n)\in A_{n-N_\sigma},\ 0\le n\le N_\sigma\big]\\
&\quad= \sum_{\substack{\sigma:\,\sigma(N_\sigma)=y\\ \sigma(0)=x}} P_x\big[X_j\in A_{-j-N_\sigma},\ j\ge0,\ \widetilde H_{A'}=\infty\big]\; P_y\big[X_m=\sigma(N_\sigma-m)\in A_{-m},\ 0\le m\le N_\sigma\big]\\
&\quad= \sum_{\substack{\sigma:\,\sigma(N_\sigma)=y\\ \sigma(0)=x}} P_y\big[X_m=\sigma(N_\sigma-m)\in A_{-m},\ 0\le m\le N_\sigma,\ X_m\in A_{-m},\ m\ge N_\sigma,\ \widetilde H_{A'}\circ\theta_{N_\sigma}=\infty\big]\\
&\quad= P_y\big[\widetilde H_A=\infty,\ \text{the last visit to } A' \text{ occurs at } x,\ X_m\in A_{-m},\ m\ge0\big].
\end{aligned} \qquad(3.25)$$
Using (3.25) in (3.24) and summing over $x\in\operatorname{supp}e_{A'}$, we obtain
$$\begin{aligned}
s_{A,A'}\circ(1_{W_{A,A'}}\,Q_{A'})[X_i\in A_i,\ i\in\mathbb Z] &= \sum_{y\in A} P_y\big[\widetilde H_A=\infty,\ X_m\in A_{-m},\ m\ge0\big]\;P_y\big[X_m\in A_m,\ m\ge0\big]\\
&\overset{(3.9)}{=} Q_A[X_m\in A_m,\ m\in\mathbb Z].
\end{aligned} \qquad(3.26)$$
This shows (3.19) and concludes the proof of the existence of the measure $\nu$ satisfying (3.14). Moreover, the measure $\nu$ is clearly $\sigma$-finite; it is sufficient to observe that $(\nu\otimes du)(W^*_A\times[0,u])<\infty$ for any $A\subset\subset\mathbb Z^d$ and $u\ge0$.
We can now complete the construction of the random interlacements model, that is, describe the infinite volume limit of the local pictures discussed in the previous chapter. On the space $(\Omega,\mathcal A)$ we consider the law $\mathbb P$ of a Poisson point process with intensity $\nu(dw^*)\,du$. Given $\omega=\sum_{i\ge0}\delta_{(w^*_i,u_i)}\in\Omega$ and $u\ge0$, we define two subsets of $\mathbb Z^d$: the interlacement set at level $u$, that is, the set of sites visited by the trajectories with label smaller than $u$,
$$\mathcal I^u(\omega) = \bigcup_{i:\,u_i\le u}\operatorname{Range}(w^*_i), \qquad(3.27)$$
and its complement, the vacant set at level $u$,
$$\mathcal V^u(\omega) = \mathbb Z^d\setminus\mathcal I^u(\omega). \qquad(3.28)$$
Let $\Pi^u$ be the mapping from $\Omega$ to $\{0,1\}^{\mathbb Z^d}$ given by
$$\Pi^u(\omega) = \big(1\{x\in\mathcal V^u(\omega)\} : x\in\mathbb Z^d\big). \qquad(3.29)$$
We endow the space $\{0,1\}^{\mathbb Z^d}$ with the $\sigma$-field $\mathcal Y$ generated by the canonical coordinates $(Y_x : x\in\mathbb Z^d)$. As, for $A\subset\subset\mathbb Z^d$, we have
$$A\subset\mathcal V^u \iff \omega(W^*_A\times[0,u])=0, \qquad(3.30)$$
the mapping $\Pi^u:(\Omega,\mathcal A)\to(\{0,1\}^{\mathbb Z^d},\mathcal Y)$ is measurable. We can thus define on $(\{0,1\}^{\mathbb Z^d},\mathcal Y)$ the law $Q^u$ of the vacant set at level $u$ by
$$Q^u = \Pi^u\circ\mathbb P. \qquad(3.31)$$
The law $Q^u$ defined here of course coincides with the law constructed abstractly using Kolmogorov's theorem below (3.3). In addition, we have now gained a rather rich structure behind it, which will be useful later.
26
Cerny, Teixeira
Some additional notation. We close this chapter by introducing some additional notation that we will use frequently throughout the remaining chapters. Let $s_A:W^*_A\to W$ be defined by
$$s_A(w^*) = w^0, \quad\text{where } w^0 \text{ is the unique element of } W^0_A \text{ with } \pi^*(w^0)=w^*, \qquad(3.32)$$
i.e. $s_A$ gives to $w^*\in W^*_A$ its $A$-dependent time parametrization. We also define a measurable map $\mu_A$ from $\Omega$ to the space of point measures on $(W_+\times\mathbb R_+,\ \mathcal W_+\otimes\mathcal B(\mathbb R_+))$ via
$$\mu_A(\omega)(f) = \int_{W^*_A\times\mathbb R_+} f\big(s_A(w^*)_+,\,u\big)\,\omega(dw^*\,du), \qquad(3.33)$$
where $s_A(w^*)_+$ denotes the forward part $(X_n(s_A(w^*)))_{n\ge0}$ of the trajectory, and, for $u\ge0$,
$$\mu_{A,u}(\omega)(dw) = \mu_A(\omega)\big(dw\times[0,u]\big), \qquad(3.34)$$
which selects from $\mu_A(\omega)$ only those trajectories whose labels are smaller than $u$. Observe that
$$\mathcal I^u(\omega)\cap A = \bigcup_{w\in\operatorname{supp}\mu_{A,u}(\omega)}\operatorname{Range}(w)\cap A. \qquad(3.35)$$
It also follows from the construction of the measure $\mathbb P$ and from the defining property (3.14) of $\nu$ that
$$\mu_{A,u}\circ\mathbb P = \mathbb P^u_A. \qquad(3.36)$$
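The identity (3.36) suggests a concrete way to sample the local picture of the interlacement on a finite set $A$: the number of trajectories meeting $A$ is Poisson with mean $u\operatorname{cap}(A)$, with entrance points distributed according to $e_A/\operatorname{cap}(A)$. The sketch below estimates $\operatorname{cap}(A)$ itself by Monte Carlo for a two-point set in $\mathbb Z^3$, using escape beyond a finite radius as a proxy for $\widetilde H_A=\infty$; the radius, the sample size and the set $A$ are illustrative choices, not part of the text.

```python
import random

def srw_step(x, rng):
    """One step of simple random walk on Z^3."""
    d, s = rng.randrange(3), rng.choice((-1, 1))
    y = list(x)
    y[d] += s
    return tuple(y)

def escapes(start, A, R, rng):
    """Does the walk started at `start` leave B(0,R) before returning to A?
    Escape beyond radius R is a finite-volume proxy for {tilde H_A = infinity}."""
    x = srw_step(start, rng)
    while x[0] ** 2 + x[1] ** 2 + x[2] ** 2 <= R * R:
        if x in A:
            return False
        x = srw_step(x, rng)
    return True

rng = random.Random(1)
A = {(0, 0, 0), (1, 0, 0)}
trials, R = 800, 25
# cap(A) = sum over y in A of e_A(y), the non-return probabilities
e_A = {y: sum(escapes(y, A, R, rng) for _ in range(trials)) / trials for y in A}
cap_est = sum(e_A.values())
print(round(cap_est, 2))  # for Z^3, cap({0,e}) = 2/(2 g(0) - 1), roughly 0.98
# At level u, the number of interlacement trajectories meeting A is then
# Poisson(u * cap(A)), and A is vacant exactly when this number is zero.
```

The proxy slightly overestimates the escape probabilities (a walk beyond radius $R$ may still return), so the estimate is biased up by an amount of order $1/R$.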
3.1 Notes

As we mentioned, random interlacements on $\mathbb Z^d$ were first introduced in [43]. Later, [46] extended the construction of the model to arbitrary transient weighted graphs. Since then, a large effort has been spent on the study of its percolative and geometrical properties, which relate naturally to the above mentioned questions on the fragmentation of the torus by a random walk. In the next chapter we start to study some of the most basic properties of this model on $\mathbb Z^d$.
Chapter 4

Properties of random interlacements

4.1 Basic properties

We now study the random interlacements model introduced in the last chapter. Our first goal is to understand the correlations present in the model.

To state the first result we define the Green function of the random walk,
$$g(x,y) = \sum_{n\ge0}P_x[X_n=y], \quad\text{for } x,y\in\mathbb Z^d. \qquad(4.1)$$
We write $g(x)$ for $g(x,0)$, and refer to [26], Theorem 1.5.4, p. 31, for the following estimate:
$$\frac{c'}{1+|x-y|^{d-2}} \le g(x,y) \le \frac{c}{|x-y|^{d-2}}, \quad\text{for } x,y\in\mathbb Z^d. \qquad(4.2)$$
Lemma 4.1. For every $u\ge0$, $x,y\in\mathbb Z^d$ and $A\subset\subset\mathbb Z^d$,
$$\mathbb P[A\subset\mathcal V^u] = \exp\{-u\operatorname{cap}(A)\}, \qquad(4.3)$$
$$\mathbb P[x\in\mathcal V^u] = \exp\{-u/g(0)\}, \qquad(4.4)$$
$$\mathbb P[x,y\in\mathcal V^u] = \exp\Big\{-\frac{2u}{g(0)+g(y-x)}\Big\}. \qquad(4.5)$$

Remark 4.2. The equality (4.3) in fact characterizes the distribution of the vacant set $\mathcal V^u$, and can be used to define the measure $Q^u$. This follows from the theory of point processes, see e.g. [24], Theorem 12.8(i).
Proof. Using the notation introduced at the end of the last chapter, we observe that $A\subset\mathcal V^u(\omega)$ if and only if $\mu_{A,u}(\omega)=0$. Claim (4.3) then follows from
$$\mathbb P[\mu_{A,u}(\omega)=0] \overset{(3.36)}{=} \exp\{-u\,Q_A(W_+)\} \overset{(2.40)}{=} \exp\{-u\,e_A(\mathbb Z^d)\} = \exp\{-u\operatorname{cap}(A)\}. \qquad(4.6)$$
The remaining claims of the lemma follow from (4.3), once we compute $\operatorname{cap}(\{x\})$ and $\operatorname{cap}(\{x,y\})$. For the first case, observe that under $P_x$ the number of visits to $x$ has geometric distribution with parameter $P_x[\widetilde H_x=\infty] = \operatorname{cap}(\{x\})$, by the strong Markov property. This yields immediately that
$$\operatorname{cap}(\{x\}) = g(0)^{-1}. \qquad(4.7)$$
For the second case, we recall the useful formula that we prove later:
$$P_x[H_A<\infty] = \sum_{y\in A}g(x,y)\,e_A(y), \quad x\in\mathbb Z^d,\ A\subset\subset\mathbb Z^d. \qquad(4.8)$$
Assuming without loss of generality that $x\ne y$, we write $e_{\{x,y\}} = \alpha_x\delta_x + \alpha_y\delta_y$ and $\operatorname{cap}(\{x,y\}) = \alpha_x+\alpha_y$ for some $\alpha_x,\alpha_y\ge0$. From formula (4.8), it follows that
$$1 = \alpha_x\,g(z,x) + \alpha_y\,g(z,y), \quad\text{for } z\in\{x,y\}. \qquad(4.9)$$
Solving this system for $\alpha_x,\alpha_y$ yields
$$\operatorname{cap}(\{x,y\}) = \frac{2}{g(0)+g(x-y)}. \qquad(4.10)$$
Claims (4.4) and (4.5) then follow directly from (4.3), (4.7) and (4.10).
To show (4.8), let $L = \sup\{k\ge0 : X_k\in A\}$ be the time of the last visit to $A$, with the convention $L=-\infty$ if $A$ is not visited. Then,
$$\begin{aligned}
P_x[H_A<\infty] = P_x[L\ge0] &= \sum_{y\in A}\sum_{n\ge0}P_x[L=n,\ X_L=y]\\
&= \sum_{y\in A}\sum_{n\ge0}P_x[X_n=y,\ X_k\notin A \text{ for } k>n]\\
&= \sum_{y\in A}\sum_{n\ge0}P_x[X_n=y]\,e_A(y) = \sum_{y\in A}g(x,y)\,e_A(y),
\end{aligned} \qquad(4.11)$$
where we used the strong Markov property in the fourth, and the definition of the Green function in the fifth equality.
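The linear system (4.9) behind the two-point formula (4.10) can be checked mechanically. In the sketch below, the value of $g(0)$ is the classical one for $\mathbb Z^3$ (Watson's integral), while the value used for $g(x-y)$ is an arbitrary illustrative number.

```python
import math

def cap_pair_via_system(g0, g1):
    """Solve (4.9): alpha_x g(z,x) + alpha_y g(z,y) = 1 for z in {x, y},
    i.e. the system [[g0, g1], [g1, g0]] [a_x, a_y]^T = [1, 1]^T,
    and return cap({x,y}) = a_x + a_y."""
    det = g0 * g0 - g1 * g1
    a_x = (g0 - g1) / det
    a_y = (g0 - g1) / det
    return a_x + a_y

def cap_pair_formula(g0, g1):
    """Closed form (4.10): cap({x,y}) = 2 / (g(0) + g(x-y))."""
    return 2.0 / (g0 + g1)

g0 = 1.5163860592  # g(0) for simple random walk on Z^3
g1 = 0.3           # illustrative value of g(x - y), with 0 < g1 < g0
print(cap_pair_via_system(g0, g1), cap_pair_formula(g0, g1))  # agree
print(math.exp(-2 * 1.0 / (g0 + g1)))  # two-point vacant probability (4.5), u = 1
```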
The last lemma and (4.2) imply that
$$\operatorname{Cov}_{\mathbb P}\big(1\{x\in\mathcal V^u\},\,1\{y\in\mathcal V^u\}\big) \sim \frac{2u}{g(0)^2}\,e^{-2u/g(0)}\,g(x-y) \asymp \frac{c_u}{|x-y|^{d-2}}, \quad\text{as } |x-y|\to\infty. \qquad(4.12)$$
Long-range correlations are thus present in the random set $\mathcal V^u$.
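The asymptotic (4.12) is a direct expansion of the exact formulas (4.4)–(4.5) and can be checked numerically; the values of $u$ and of $g(0)$ below are illustrative.

```python
import math

def cov_vacant(u, g0, g):
    """Exact covariance of 1{x in V^u} and 1{y in V^u}, from (4.4)-(4.5)."""
    return math.exp(-2 * u / (g0 + g)) - math.exp(-2 * u / g0)

def cov_asymptotic(u, g0, g):
    """Leading term in (4.12): (2u/g(0)^2) e^{-2u/g(0)} g(x-y)."""
    return (2 * u / g0 ** 2) * math.exp(-2 * u / g0) * g

u, g0 = 1.0, 1.5163860592  # level u and g(0) for Z^3; illustrative choices
for g in (1e-1, 1e-3, 1e-5):  # g(x - y) -> 0 as |x - y| -> infinity
    print(g, cov_vacant(u, g0, g) / cov_asymptotic(u, g0, g))  # ratio tends to 1
```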
As another consequence of (4.3) and the sub-additivity of the capacity,
$$\operatorname{cap}(A\cup A') \le \operatorname{cap}(A) + \operatorname{cap}(A'), \qquad(4.13)$$
see [26, Proposition 2.2.1(b)], we obtain that
$$\mathbb P[A\cup A'\subset\mathcal V^u] \ge \mathbb P[A\subset\mathcal V^u]\;\mathbb P[A'\subset\mathcal V^u], \quad\text{for } A,A'\subset\subset\mathbb Z^d,\ u\ge0; \qquad(4.14)$$
that is, the events $\{A\subset\mathcal V^u\}$ and $\{A'\subset\mathcal V^u\}$ are positively correlated.

The inequality (4.14) is a special case of the FKG inequality for the measure $Q^u$ (see (3.31)), which was proved in [46]. We present it here, for the sake of completeness, without proof.
Theorem 4.3 (FKG inequality for random interlacements). Let $A,B\in\mathcal Y$ be two increasing events. Then
$$Q^u[A\cap B] \ge Q^u[A]\,Q^u[B]. \qquad(4.15)$$

The measure $Q^u$ thus satisfies one of the principal inequalities that hold for Bernoulli percolation. Many of the difficulties appearing when studying random interlacements come from the fact that another important inequality of Bernoulli percolation (the so-called van den Berg–Kesten inequality) does not hold for $Q^u$, as one can easily verify.
4.2 Translation invariance and ergodicity

We now explore how random interlacements interact with the translations of $\mathbb Z^d$. For $x\in\mathbb Z^d$ and $w\in W$ we define $w+x\in W$ by $(w+x)(n) = w(n)+x$, $n\in\mathbb Z$. For $w^*\in W^*$, we then set $w^*+x = \pi^*(w+x)$ for any $w$ with $\pi^*(w)=w^*$. Finally, for $\omega=\sum_{i\ge0}\delta_{(w^*_i,u_i)}\in\Omega$ we define
$$\tau_x\omega = \sum_{i\ge0}\delta_{(w^*_i-x,\,u_i)}. \qquad(4.16)$$
We let $t_x$, $x\in\mathbb Z^d$, stand for the canonical shifts of $\{0,1\}^{\mathbb Z^d}$.
Proposition 4.4.
(i) $\nu$ is invariant under the translations $w^*\mapsto w^*+x$ of $W^*$, for any $x\in\mathbb Z^d$.
(ii) $\mathbb P$ is invariant under the translation $\tau_x$ of $\Omega$, for any $x\in\mathbb Z^d$.
(iii) For any $u\ge0$, the translation maps $(t_x)_{x\in\mathbb Z^d}$ define a measure preserving ergodic flow on $(\{0,1\}^{\mathbb Z^d},\mathcal Y,Q^u)$.
Proof. The proofs of parts (i), (ii) and of the fact that $(t_x)_{x\in\mathbb Z^d}$ is a measure preserving flow are left as an exercise. They can be found in [43, (1.28) and Theorem 2.1]. We will only show the ergodicity, as its proof is instructive.

As we know that $(t_x)$ is a measure preserving flow, to prove the ergodicity we only need to show that it is mixing, that is, for any $A\subset\subset\mathbb Z^d$ and for any $[0,1]$-valued, $\sigma(Y_x : x\in A)$-measurable function $f$ on $\{0,1\}^{\mathbb Z^d}$, one has
$$\lim_{|x|\to\infty} E_{Q^u}[f\cdot f\circ t_x] = E_{Q^u}[f]^2. \qquad(4.17)$$
In view of (3.35), (4.17) will follow once we show that, for any $A\subset\subset\mathbb Z^d$ and any $[0,1]$-valued measurable function $F$ on the set of finite point measures on $W_+$ endowed with the canonical $\sigma$-field,
$$\lim_{|x|\to\infty}\mathbb E\big[F(\mu_{A,u})\,F(\mu_{A,u})\circ\tau_x\big] = \mathbb E[F(\mu_{A,u})]^2. \qquad(4.18)$$
Since, due to the definitions of $\tau_x$ and $\mu_{A,u}$, there exists a function $G$ with similar properties as $F$ such that $F(\mu_{A,u})\circ\tau_x = G(\mu_{A+x,u})$, (4.18) follows from the next lemma.
Lemma 4.5. Let $u\ge0$ and let $A_1$ and $A_2$ be finite disjoint subsets of $\mathbb Z^d$. Let $F_1$ and $F_2$ be $[0,1]$-valued measurable functions on the set of finite point measures on $W_+$ endowed with its canonical $\sigma$-field. Then
$$\big|\mathbb E[F_1(\mu_{A_1,u})\,F_2(\mu_{A_2,u})] - \mathbb E[F_1(\mu_{A_1,u})]\,\mathbb E[F_2(\mu_{A_2,u})]\big| \le 4u\operatorname{cap}(A_1)\operatorname{cap}(A_2)\sup_{x\in A_1,\,y\in A_2}g(x-y). \qquad(4.19)$$
Proof. We write $A = A_1\cup A_2$ and decompose the Poisson point process $\mu_{A,u}$ into four point processes on $(W_+,\mathcal W_+)$ as follows:
$$\mu_{A,u} = \mu_{1,1}+\mu_{1,2}+\mu_{2,1}+\mu_{2,2}, \qquad(4.20)$$
where
$$\begin{aligned}
\mu_{1,1}(dw) &= 1\{X_0\in A_1,\ H_{A_2}=\infty\}\,\mu_{A,u}(dw), &\quad \mu_{1,2}(dw) &= 1\{X_0\in A_1,\ H_{A_2}<\infty\}\,\mu_{A,u}(dw),\\
\mu_{2,1}(dw) &= 1\{X_0\in A_2,\ H_{A_1}<\infty\}\,\mu_{A,u}(dw), &\quad \mu_{2,2}(dw) &= 1\{X_0\in A_2,\ H_{A_1}=\infty\}\,\mu_{A,u}(dw).
\end{aligned} \qquad(4.21)$$
In words, the support of $\mu_{1,1}$ consists of the trajectories in the support of $\mu_{A,u}$ which enter $A_1$ but not $A_2$, the support of $\mu_{1,2}$ of the trajectories that enter first $A_1$ and then $A_2$, and similarly for $\mu_{2,1}$, $\mu_{2,2}$.
The $\mu_{i,j}$'s are independent Poisson point processes, since they are supported on disjoint sets (recall that $A_1$ and $A_2$ are disjoint). Their corresponding intensity measures are given by
$$\begin{aligned}
&u\,1\{X_0\in A_1,\ H_{A_2}=\infty\}\,P_{e_A}, &\quad &u\,1\{X_0\in A_1,\ H_{A_2}<\infty\}\,P_{e_A},\\
&u\,1\{X_0\in A_2,\ H_{A_1}<\infty\}\,P_{e_A}, &\quad &u\,1\{X_0\in A_2,\ H_{A_1}=\infty\}\,P_{e_A}.
\end{aligned} \qquad(4.22)$$
We observe that $\mu_{A_1,u}-\mu_{1,1}-\mu_{1,2}$ is determined by $\mu_{2,1}$ and therefore independent of $\mu_{1,1}$, $\mu_{2,2}$ and $\mu_{1,2}$. In the same way, $\mu_{A_2,u}-\mu_{2,2}-\mu_{2,1}$ is independent of $\mu_{2,2}$, $\mu_{2,1}$ and $\mu_{1,1}$. We can therefore introduce auxiliary Poisson processes $\mu'_{2,1}$ and $\mu'_{1,2}$ having the same law as $\mu_{A_1,u}-\mu_{1,1}-\mu_{1,2}$ and $\mu_{A_2,u}-\mu_{2,2}-\mu_{2,1}$ respectively, and satisfying that $\mu'_{2,1}$, $\mu'_{1,2}$ and $\mu_{i,j}$, $1\le i,j\le2$, are independent. Then
$$\mathbb E[F_1(\mu_{A_1,u})] = \mathbb E\big[F_1\big((\mu_{A_1,u}-\mu_{1,1}-\mu_{1,2})+\mu_{1,1}+\mu_{1,2}\big)\big] = \mathbb E\big[F_1(\mu'_{2,1}+\mu_{1,1}+\mu_{1,2})\big], \qquad(4.23)$$
and in the same way,
$$\mathbb E[F_2(\mu_{A_2,u})] = \mathbb E\big[F_2(\mu'_{1,2}+\mu_{2,2}+\mu_{2,1})\big]. \qquad(4.24)$$
Using (4.23), (4.24) and the independence of the Poisson processes $\mu'_{2,1}+\mu_{1,1}+\mu_{1,2}$ and $\mu'_{1,2}+\mu_{2,2}+\mu_{2,1}$, we get
$$\mathbb E[F_1(\mu_{A_1,u})]\,\mathbb E[F_2(\mu_{A_2,u})] = \mathbb E\big[F_1(\mu'_{2,1}+\mu_{1,1}+\mu_{1,2})\,F_2(\mu'_{1,2}+\mu_{2,2}+\mu_{2,1})\big]. \qquad(4.25)$$
From (4.25) we see that
$$\begin{aligned}
\big|\mathbb E[F_1(\mu_{A_1,u})&\,F_2(\mu_{A_2,u})] - \mathbb E[F_1(\mu_{A_1,u})]\,\mathbb E[F_2(\mu_{A_2,u})]\big|\\
&\le \mathbb P\big[\mu'_{2,1}\ne0 \text{ or } \mu'_{1,2}\ne0 \text{ or } \mu_{2,1}\ne0 \text{ or } \mu_{1,2}\ne0\big]\\
&\le 2\big(\mathbb P[\mu_{2,1}\ne0]+\mathbb P[\mu_{1,2}\ne0]\big)\\
&\le 2u\big(P_{e_A}[X_0\in A_1,\ H_{A_2}<\infty] + P_{e_A}[X_0\in A_2,\ H_{A_1}<\infty]\big).
\end{aligned} \qquad(4.26)$$
We now bound the last two terms in the above equation:
$$\begin{aligned}
P_{e_{A_1\cup A_2}}[X_0\in A_1,\ H_{A_2}<\infty] &\le \sum_{x\in A_1}e_{A_1}(x)\,P_x[H_{A_2}<\infty]\\
&= \sum_{x\in A_1,\,y\in A_2}e_{A_1}(x)\,g(x,y)\,e_{A_2}(y)\\
&\le \operatorname{cap}(A_1)\operatorname{cap}(A_2)\sup_{x\in A_1,\,y\in A_2}g(x,y).
\end{aligned} \qquad(4.27)$$
A similar estimate holds for $P_{e_{A_1\cup A_2}}[X_0\in A_2,\ H_{A_1}<\infty]$, and the lemma follows.
As (4.18) follows easily from Lemma 4.5, the proof of Proposition 4.4 is complete.

Proposition 4.4(iii) has the following standard corollary.

Corollary 4.6 (zero-one law). Let $A\in\mathcal Y$ be invariant under the flow $(t_x : x\in\mathbb Z^d)$. Then, for any $u\ge0$,
$$Q^u[A] = 0 \text{ or } 1. \qquad(4.28)$$
In particular, the event
$$\operatorname{Perc}(u) := \{\omega : \mathcal V^u(\omega) \text{ contains an infinite connected component}\} \qquad(4.29)$$
satisfies, for any $u\ge0$,
$$\mathbb P[\operatorname{Perc}(u)] = 0 \text{ or } 1. \qquad(4.30)$$

Proof. The first statement follows from the ergodicity by standard techniques. The second statement follows from
$$\mathbb P[\operatorname{Perc}(u)] = Q^u\big[\big\{y\in\{0,1\}^{\mathbb Z^d} : y \text{ contains an infinite connected component of 1's}\big\}\big] \qquad(4.31)$$
and the fact that the event on the right-hand side is in $\mathcal Y$ and is $t_x$-invariant.
We now let
$$\eta(u) = \mathbb P[0 \text{ belongs to an infinite connected component of } \mathcal V^u]; \qquad(4.32)$$
it follows by standard arguments that
$$\eta(u)>0 \iff \mathbb P[\operatorname{Perc}(u)]=1. \qquad(4.33)$$
In particular, defining
$$u_* = \inf\{u\ge0 : \eta(u)=0\}, \qquad(4.34)$$
the main question is whether $u_*$ is non-trivial, that is whether $0<u_*<\infty$, which we will (partially) answer in the next chapter.
4.3 Comparison with Bernoulli percolation

We find it useful to draw a parallel between random interlacements and the usual Bernoulli percolation on $\mathbb Z^d$.

We recall the definition of Bernoulli (site) percolation. Given $p\in[0,1]$, consider on the space $\{0,1\}^{\mathbb Z^d}$ the probability measure $R_p$ under which the canonical coordinates $(Y_x)_{x\in\mathbb Z^d}$ are a collection of i.i.d. Bernoulli($p$) random variables. We say that a given site $x$ is open if $Y_x=1$; otherwise we say that it is closed. Bernoulli percolation on $\mathbb Z^d$ is rather well understood; see e.g. the monographs [19, 8].
In analogy to (4.32) and (4.34), one defines for Bernoulli percolation the following quantities:
$$\theta(p) = R_p\big[\text{the origin is connected to infinity by an open path}\big], \qquad p_c = \inf\{p\in[0,1] : \theta(p)>0\}. \qquad(4.35)$$
A well-known fact about Bernoulli percolation is that for $d\ge2$, $p_c\in(0,1)$ [19, Theorem (1.10)]. In other words, the model undergoes a non-trivial phase transition. As we said, we would like to prove an analogous result for random interlacements percolation, that is, to show that $u_*\in(0,\infty)$.
Before doing this, let us understand how the random configurations in $\{0,1\}^{\mathbb Z^d}$ obtained under the measures $R_p$ and $Q^u$ (defined in (3.31)) compare to one another.

The first important observation is that under the measure $R_p$ every configuration inside a finite set $A$ has positive probability, while this is not the case with $Q^u$. This follows from the following easy claim, which is a consequence of the definitions (3.13), (3.31) of $\Omega$ and $Q^u$:
$$\text{for every } u\ge0, \text{ almost surely under the measure } Q^u, \text{ the set } \{x\in\mathbb Z^d : Y_x=0\} \text{ has no finite connected components.} \qquad(4.36)$$
(4.36)
One particular consequence of this fact is that the random interlacements
measure Q
u
will not satisfy the so-called nite energy property. We say
that a measure R on 0, 1
Z
d
satises the nite energy property if
0 < R(Y
y
= 1[Y
z
, z ,= y) < 1, R-a.s., for all y Z
d
, (4.37)
for more details, see [20] (Section 12). Intuitively speaking, this says that
not all congurations on a nite set have positive probability under the
measure Q
u
. As a consequence, some percolation techniques, such as Bur-
ton and Keanes uniqueness argument, will not be directly applicable to Q
u
.
Another important technique in Bernoulli independent percolation is the so-called Peierls-type argument. This argument makes use of the so-called $*$-paths, defined as follows. We say that a sequence $x_0,x_1,\dots,x_n$ is a $*$-path if the supremum norm $|x_i-x_{i+1}|_\infty$ equals $1$ for each $0\le i<n$.
$$\mathbb P\big[\mu_+(W_+) = \lfloor L^{d-2}\log^2L\rfloor\big]\; P^{\otimes\lfloor L^{d-2}\log^2L\rfloor}_{e_A/\operatorname{cap}(A)}\Big[A\subset\bigcup_{i=1}^{\lfloor L^{d-2}\log^2L\rfloor}\operatorname{Range}(X^i)\Big], \qquad(4.46)$$
where the last probability is computed under the independent product of $\lfloor L^{d-2}\log^2L\rfloor$ simple random walks $X^i$, each starting with distribution $e_A/\operatorname{cap}(A)$.
Let us first evaluate the first term in (4.46), corresponding to the Poisson distribution of $\mu_+(W_+)$. For this, we write $\lambda = u\operatorname{cap}(A)$ and $\kappa = \lfloor L^{d-2}\log^2L\rfloor$. Then, using de Moivre–Stirling's approximation, we obtain that the left term in the above equation is
$$\frac{e^{-\lambda}\lambda^\kappa}{\kappa!} \ge c\,e^{-\lambda+\kappa}\Big(\frac{\lambda}{\kappa}\Big)^\kappa\kappa^{-1/2}, \qquad(4.47)$$
and, using again (4.43), for $L\ge c(u)$ sufficiently large,
$$\ge \exp\Big\{-c_uL^{d-2} + \big\lfloor L^{d-2}\log^2L\big\rfloor\Big(1+\log\frac{c_u}{\log^2L}\Big)\Big\} \ge \exp\big\{-c_u\log(\log^2L)\,(\log^2L)\,L^{d-2}\big\} \ge \exp\big\{-c_u(\log^3L)\,L^{d-2}\big\}. \qquad(4.48)$$
To bound the second term in (4.46), fix first some $z\in A$ and estimate
$$\begin{aligned}
P^{\otimes\kappa}_{e_A/\operatorname{cap}(A)}\Big[z\in\bigcup_{i=1}^{\kappa}\operatorname{Range}(X^i)\Big] &= 1-\Big(P_{e_A/\operatorname{cap}(A)}\big[z\notin\operatorname{Range}(X^1)\big]\Big)^{\kappa}\\
&\overset{(4.2)}{\ge} 1-\big(1-cL^{2-d}\big)^{cL^{d-2}\log^2L} \ge 1-e^{-c\log^2L}.
\end{aligned} \qquad(4.49)$$
Therefore, by a simple union bound, we obtain that the second term on the right-hand side of (4.46) is bounded from below by $1/2$ as soon as $L$ is large enough, depending on $u$. Putting this fact together with (4.46) and (4.48), we obtain that
$$\int g\,dQ^u \le 1 - c\exp\big\{-c_u(\log^3L)L^{d-2}\big\}, \qquad(4.50)$$
which is smaller than the right-hand side of (4.44) for $L$ large enough, depending on $p$ and $u$. This proves that $Q^u$ does not dominate $R_p$ for any values of $p\in(0,1)$ or $u>0$, finishing the proof of the lemma.
4.4 Notes

Various interesting properties of random interlacements have been established in recent works. Let us mention here a few of them.

In Theorem 2.4 of [43] and in Appendix A of [12], the authors provide a comparison between Bernoulli percolation and random interlacements on subspaces (slabs) of $\mathbb Z^d$. If the co-dimension of the subspace in question is at least three, then the domination by Bernoulli percolation holds. This domination was useful in simplifying arguments for the non-triviality of the phase transition for the vacant set and the behavior of the chemical distance in $\mathcal I^u$ in high dimensions; see Remark 2.5(3) of [43] and Appendix A of [12] for more details.

Another important result on the domination of random interlacements says that the component of the interlacement set $\mathcal I^u$ containing a given point is dominated by the range of a certain branching random walk, see Propositions 4.1 and 5.1 in [48]. This result does not add much for the study of random interlacements on $\mathbb Z^d$ (as the mentioned branching random walk covers the whole lattice almost surely), but it has been valuable for establishing results for random interlacements on non-amenable graphs, see [48].
It may be interesting to observe that the measure $Q^u$ does not induce a Markov field. More precisely, this means that the law of $\mathcal V^u\cap A$ conditioned on $\mathcal V^u\cap A^c$ is not the same as that of $\mathcal V^u\cap A$ conditioned on $\mathcal V^u\cap\{x\in A^c : x \text{ is a neighbor of } A\}$. See [46], Remark 3.3 3) for more details.

After defining random interlacements on different classes of graphs (see [46] for such a construction), one could get interested in understanding for which classes of graphs there exists a non-trivial phase transition at some critical $0<u_*<\infty$.
Proof. The proof we present here follows the arguments of Proposition 4.1 in [43] with some minor modifications.
We will use this bound in the renormalization argument we mentioned above. This renormalization will take place on $\mathbb Z^2\subset\mathbb Z^d$, identified with its isometric copy via $(x_1,x_2)\mapsto(x_1,x_2,0,\dots,0)$. Throughout the text we make no distinction between $\mathbb Z^2$ and its isometric copy inside $\mathbb Z^d$.

We say that $\sigma:\{0,\dots,n\}\to\mathbb Z^2$ is a $*$-path if $|\sigma(k+1)-\sigma(k)|_\infty = 1$ for all $0\le k<n$. For $m=(n,q)$, we also set
$$\widetilde D_m = \bigcup_{i,j\in\{-1,0,1\}}D_{(n,q+(i,j))}. \qquad(5.6)$$
As we mentioned, our strategy is to prove that the probability of finding a $*$-path in the set $\mathcal I^u\cap\mathbb Z^2$ that separates the origin from infinity in $\mathbb Z^2$ is smaller than one. We do this by bounding the probabilities of the following crossing events:
$$B^u_m = \big\{\omega : \text{there exists a } *\text{-path in } \mathcal I^u(\omega)\cap\mathbb Z^2 \text{ connecting } D_m \text{ to the complement of } \widetilde D_m\big\}, \qquad(5.7)$$
where $m\in J_n$. For $u>0$, we write
$$q^u_n = \mathbb P[B^u_{(n,0)}] \overset{\text{Prop. 4.4}}{=} \sup_{m\in J_n}\mathbb P[B^u_m]. \qquad(5.8)$$
In order to show that for $u$ small enough $q^u_n$ decays with $n$, we are going to obtain an induction relation between $q^u_n$ and $q^u_{n+1}$ (which were defined in terms of two different scales). For this we consider, for a fixed $m\in J_{n+1}$, the indices of boxes at the scale $n$ that lie on the boundary of $D_m$; more precisely,
$$\mathcal K^m_1 = \{m_1\in J_n : D_{m_1}\subset D_m \text{ and } D_{m_1} \text{ is a neighbor of } \mathbb Z^2\setminus D_m\}, \qquad(5.9)$$
and the indices of boxes at the scale $n$ having a point at distance $L_{n+1}/2$ from $D_m$, i.e.
$$\mathcal K^m_2 = \big\{m_2\in J_n : D_{m_2}\cap\{x\in\mathbb Z^2 : d_{\mathbb Z^2}(x,D_m)=L_{n+1}/2\}\ne\emptyset\big\}. \qquad(5.10)$$
Figure 5.1: The figure shows all the boxes with indices in $\mathcal K^m_1$ and $\mathcal K^m_2$. Note that the event $B^u_m$ implies $B^u_{m_1}$ and $B^u_{m_2}$ for some $m_1\in\mathcal K^m_1$ and $m_2\in\mathcal K^m_2$.
The boxes associated with the two sets of indices above are shown in Figure 5.1. In this figure we also illustrate that the event $B^u_m$ implies the occurrence of both $B^u_{m_1}$ and $B^u_{m_2}$ for some choice of $m_1\in\mathcal K^m_1$ and $m_2\in\mathcal K^m_2$. This, with a rough counting argument, allows us to conclude that
$$\mathbb P[B^u_m] \le c\,l_n^2\sup_{\substack{m_1\in\mathcal K^m_1\\ m_2\in\mathcal K^m_2}}\mathbb P[B^u_{m_1}\cap B^u_{m_2}], \quad\text{for all } u\ge0. \qquad(5.11)$$
We now want to control the dependence of the process in the two boxes $\widetilde D_{m_1}$ and $\widetilde D_{m_2}$. For this we will use Lemma 4.5, which provides
$$\begin{aligned}
\mathbb P[B^u_{m_1}\cap B^u_{m_2}] &\le \mathbb P[B^u_{m_1}]\,\mathbb P[B^u_{m_2}] + 4u\operatorname{cap}(\widetilde D_{m_1})\operatorname{cap}(\widetilde D_{m_2})\sup_{x\in\widetilde D_{m_1},\,y\in\widetilde D_{m_2}}g(x-y)\\
&\overset{(5.1)}{\le} (q^u_n)^2 + c\,L_n^2\,L_n^2\,L_{n+1}^{-5},
\end{aligned} \qquad(5.12)$$
where we assumed in the last step that $u\le1$. Using (5.11) and taking the supremum over $m\in J_{n+1}$, we conclude that
$$q^u_{n+1} \le c\,l_n^2\big((q^u_n)^2 + L_n^4\,L_{n+1}^{-5}\big). \qquad(5.13)$$
With the help of this recurrence relation, we prove the next lemma, which shows that for some choice of $L_0$ and for $u$ taken small enough, $q^u_n$ goes to zero sufficiently fast with $n$.

Lemma 5.2. There exist $L_0$ and $\bar u = \bar u(L_0)>0$ such that
$$q^u_n \le \frac{1}{c_0\,l_n^2\,L_n^{1/2}} \qquad(5.14)$$
for every $u\le\bar u$.
Proof of Lemma 5.2. We define the sequence
$$b_n = c_0\,l_n^2\,q^u_n, \quad\text{for } n\ge0. \qquad(5.15)$$
The equation (5.13) can now be rewritten as
$$b_{n+1} \le c\Big(\Big(\frac{l_{n+1}}{l_n}\Big)^2 b_n^2 + (l_{n+1}l_n)^2L_n^4L_{n+1}^{-5}\Big), \quad\text{for } n\ge0. \qquad(5.16)$$
With (5.3) one concludes that $(l_{n+1}l_n)^2 \le cL_n^{2a}L_{n+1}^{2a} \le cL_n^{4a+2a^2}$. Inserting this in (5.16) and using again (5.3), we obtain
$$b_{n+1} \le c_1\big(L_n^{2a^2}b_n^2 + L_n^{2a^2-a-1}\big) \le c_1L_n^{2a^2}\big(b_n^2+L_n^{-1}\big). \qquad(5.17)$$
We use this to show that, if for some $L_0\ge(2c_1)^4$ and $u\le1$ we have $b_n\le L_n^{-1/2}$, then the same inequality also holds for $n+1$. Indeed, supposing $b_n\le L_n^{-1/2}$, we have
$$b_{n+1} \le 2c_1L_n^{2a^2-1} \overset{(5.3)}{\le} 2c_1L_{n+1}^{-1/2}\,L_n^{\frac12(1+a)+2a^2-1} \overset{(5.3)}{\le} 2c_1L_{n+1}^{-1/2}L_0^{-1/4} \le L_{n+1}^{-1/2}. \qquad(5.18)$$
This completes the induction step. So it only remains to prove that $b_0\le L_0^{-1/2}$ for $L_0\ge(2c_1)^4$ and $u$ small enough. Indeed,
$$b_0 \overset{(5.15)}{=} c_0\,l_0^2\,q^u_0 \le c_0\,l_0^2\sup_{m\in J_0}\mathbb P[\mathcal I^u\cap\widetilde D_m\ne\emptyset] \le c_1L_0^{2a+2}\sup_{x\in\mathbb Z^d}\mathbb P[x\in\mathcal I^u] \overset{(4.3)}{=} c_1L_0^{2a+2}\big(1-e^{-\operatorname{cap}(\{x\})u}\big). \qquad(5.19)$$
For some $L_0\ge(2c_1)^4$, we take $\bar u(L_0)$ small enough such that $b_0\le L_0^{-1/2}$ for any $u\le\bar u(L_0)$. This concludes the proof of Lemma 5.2.
We now use this lemma to show that, with positive probability, one can find an infinite connection from $(0,0)$ to infinity in the set $\mathcal V^u\cap\mathbb Z^2$. For this we choose $L_0$ and $u<\bar u(L_0)$ as in the lemma. Writing $B_M$ for the set $[-M,M]\times[-M,M]\subset\mathbb Z^2$, with $M=L_{n_0}$, we have
$$\begin{aligned}
1-\eta(u,(0,0)) &\le \mathbb P\big[(0,0) \text{ is not in an infinite component of } \mathcal V^u\cap\mathbb Z^2\big]\\
&\le \mathbb P[\mathcal I^u\cap B_M\ne\emptyset] + \mathbb P\big[\text{there is a } *\text{-path in } \mathcal I^u\cap\mathbb Z^2\setminus B_M \text{ surrounding the point } (0,0) \text{ in } \mathbb Z^2\big]\\
&\le \big(1-\exp\{-u\operatorname{cap}(B_M)\}\big) + \sum_{n\ge n_0}\mathbb P\big[\mathcal I^u\cap\mathbb Z^2\setminus B_M \text{ contains a } *\text{-path surrounding } (0,0) \text{ and passing through some point in } [L_n,L_{n+1}-1]\times\{0\}\subset\mathbb Z^2\big].
\end{aligned} \qquad(5.20)$$
The last sum can be bounded by $\sum_{n\ge n_0}\sum_m\mathbb P[B^u_m]$, where the index $m$ runs over all labels of boxes $D_m$ at level $n$ that intersect $[L_n,L_{n+1}-1]\times\{0\}\subset\mathbb Z^2$. Since the number of such $m$'s is at most $l_n\le cL_n^a$,
$$1-\eta(u,(0,0)) \le cL_{n_0}^2u + \sum_{n\ge n_0}cL_n^aL_n^{-1/2} \overset{(5.3)}{\le} c\Big(L_{n_0}^2u + \sum_{n\ge n_0}L_n^{-1/4}\Big). \qquad(5.21)$$
Choosing $n_0$ large and $u\le\bar u(L_0,n_0)$, we obtain that the percolation probability is positive, so that $u_*>0$.
More precisely, it was shown that $u_*$ is bounded from below in terms of constants $c,c'\in(0,\infty)$.

In what follows, 's.e.' stands for 'stretched-exponentially small'. We write $f(n)=1-\text{s.e.}(n)$ when $1-f(n)=\text{s.e.}(n)$. Observe that $n^c\,\text{s.e.}(n)=\text{s.e.}(n)$ for any fixed $c>0$. It is thus quite convenient to use this notation, e.g. in the following situation: assume that we have at most $n^c$ events, each of probability bounded from above by $\text{s.e.}(n)$. Then the probability of their union is $\text{s.e.}(n)$ as well.
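The stability of the s.e. notation under polynomial factors, used repeatedly below, is elementary to see numerically (the exponents $c$ and $\gamma$ here are illustrative):

```python
import math
# A union of polynomially many events, each of stretched-exponentially small
# probability, stays stretched-exponentially small: n^c * exp(-n^gamma) -> 0.
c, gamma = 5.0, 0.5  # illustrative exponents
vals = [n ** c * math.exp(-n ** gamma) for n in (10, 100, 1000, 10000)]
print(vals)  # the stretched exponential eventually beats the polynomial factor
```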
Recall from Chapter 3 that $\mathbb P$ denotes the law of the Poisson point process on $W^*\times\mathbb R_+$ with intensity $\nu\otimes du$. The key estimate behind Lemma 6.3 is
$$\sum_{i=0}^{\infty}P_x[X_i=y] \le c\,g(x,y). \qquad(6.9)$$
We therefore omit the details and leave the proof of (6.9) and of the lemma as an exercise.

To apply Lemma 6.3 we need to estimate capacities of various collections of random walk trajectories. The estimates we will use are given in Lemmas 6.5 and 6.6 below. We start with a technical estimate.
Lemma 6.4. Let $d\ge5$, let $(x_k)_{k\ge1}$ be a sequence in $\mathbb Z^d$, and let $(X^k)_{k\ge1}$ be a sequence of independent simple random walks on $\mathbb Z^d$ with $X^k_0=x_k$. Then for all positive integers $N$ and $n$, and for all $(x_k)_{k\ge1}$,
$$\mathbb E\Big[\sum_{k,l=1}^{N}\sum_{i,j=n+1}^{2n}g(X^k_i,X^l_j)\Big] \le C\big(Nn + N^2n^{3-d/2}\big). \qquad(6.10)$$
Proof. Let $X$ be a simple random walk with $X_0=0$; then for all $y\in\mathbb Z^d$ and for all positive integers $k$, by the Markov property,
$$\mathbb E\,g(X_k,y) = \sum_{i=k}^{\infty}P[X_i=y] \le C\sum_{i=k}^{\infty}i^{-d/2} \le Ck^{1-d/2}. \qquad(6.11)$$
Here we used the inequality $\sup_{y\in\mathbb Z^d}P[X_k=y]\le Ck^{-d/2}$, see [38, Proposition 7.6]. In order to prove (6.10), we consider separately the cases $k=l$ and $k\ne l$. In the first case, the Markov property and the fact that $g(x,y)=g(x-y)$ imply
$$\mathbb E\Big[\sum_{k=1}^{N}\sum_{i,j=n+1}^{2n}g(X^k_i,X^k_j)\Big] = N\,\mathbb E\Big[\sum_{i,j=n+1}^{2n}g(X_{|i-j|})\Big] \overset{(6.11)}{\le} CNn\Big(1+\sum_{i=1}^{n}i^{1-d/2}\Big) \overset{(d\ge5)}{\le} CNn. \qquad(6.12)$$
In the case $k\ne l$, an application of (6.11) gives
$$\mathbb E\Big[\sum_{i,j=n+1}^{2n}g(X^k_i,X^l_j)\Big] \le n^2\,Cn^{1-d/2}. \qquad(6.13)$$
This completes the proof.
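The local CLT bound $\sup_yP[X_k=y]\le Ck^{-d/2}$ quoted above is explicit in dimension one, where the maximum equals $\binom{k}{\lfloor k/2\rfloor}2^{-k}\sim\sqrt{2/(\pi k)}$; the following quick check illustrates that decay.

```python
import math
# Sanity check of the k^{-1/2} decay of sup_y P[X_k = y] for d = 1:
# the exact maximum binom(k, k//2)/2^k, rescaled by sqrt(k), tends to sqrt(2/pi).
ratios = []
for k in (10, 100, 1000):
    m = math.comb(k, k // 2) / 2 ** k  # sup_y P[X_k = y] for the 1-d walk
    ratios.append(m * math.sqrt(k))
print(ratios)  # approaches sqrt(2/pi) ~ 0.7979
```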
Let $(X^k)_{k\ge1}$ be the collection of independent random walks as in the previous lemma. For $A\subset\mathbb Z^d$, let $T^k_A = \inf\{i\ge0 : X^k_i\notin A\}$ be the exit time of $X^k$ from $A$. For positive integers $N$ and $n$, define the subset $\mathcal C(N,n)$ of $\mathbb Z^d$,
$$\mathcal C(N,n) = \bigcup_{k=1}^{N}\big\{X^k_i : 1\le i\le n\wedge T^k_{B(x_k,2n^{1/2})}\big\}. \qquad(6.14)$$

Lemma 6.5. For any sequence $(x_i)_{i\ge1}\subset\mathbb Z^d$ and for all positive integers $N$ and $n$,
$$\operatorname{cap}\mathcal C(N,n) \le CNn, \qquad(6.15)$$
and
$$\mathbb E\operatorname{cap}\mathcal C(N,n) \ge c\min\big(Nn,\ n^{(d-2)/2}\big). \qquad(6.16)$$
Proof. The upper bound follows from the sub-additivity of the capacity (4.13) and the fact that $\operatorname{cap}(\{x\}) = g(0,0)^{-1}<\infty$.

We proceed with the lower bound on $\mathbb E\operatorname{cap}\mathcal C(N,n)$. By Kolmogorov's maximal inequality applied coordinatewise, for each $\lambda>0$ and $n\ge1$,
$$P\Big[\max_{1\le i\le n}|X_i|\ge\lambda\Big] \le \frac{n}{\lambda^2}. \qquad(6.17)$$
Let $J$ be the random set $J = \{1\le k\le N : \sup_{1\le i\le n}|X^k_i-x_k|\le 2n^{1/2}\}$, and let $D$ be the event $D = \{|J|\ge N/4\}$. From (6.17), it follows that $\mathbb E|J| \ge N\big(1-\frac{n}{4n}\big) \ge \frac N2$. Since $|J|\le N$, we get $P[D]\ge\frac13$.

To obtain a lower bound on the capacity, the following variational formula (see Proposition 2.3 of [52]) is useful:
$$\operatorname{cap}A = \big(\inf_\mu\mathcal E(\mu)\big)^{-1}, \qquad(6.18)$$
where the energy $\mathcal E(\mu)$ of a measure $\mu$ is given by
$$\mathcal E(\mu) = \sum_{x,y\in\mathbb Z^d}\mu(x)\mu(y)\,g(x,y), \qquad(6.19)$$
and the infimum in (6.18) is taken over all probability measures $\mu$ supported on $A$.

Taking $\mu$ to be $\mu(x) = \frac{2}{|J|n}\sum_{k\in J}\sum_{i=n/2}^{n}1\{X^k_i=x\}$, which is obviously supported on $\mathcal C(N,n)$, we obtain
$$\mathbb E\operatorname{cap}\mathcal C(N,n) \ge \mathbb E\big[\mathcal E(\mu)^{-1}\big] \ge \mathbb E\big[\mathcal E(\mu)^{-1};\,D\big]. \qquad(6.20)$$
Therefore, in order to prove (6.16), it suffices to show that
$$\mathbb E\bigg[\Big(\frac{4}{|J|^2n^2}\sum_{k,l\in J}\sum_{i,j=n/2}^{n}g(X^k_i,X^l_j)\Big)^{-1};\,D\bigg] \ge c\min\big(Nn,\ n^{(d-2)/2}\big). \qquad(6.21)$$
Using the Cauchy–Schwarz inequality and the definition of the event $D$, the left-hand side of the last display can be bounded from below by
$$cN^2n^2\,P(D)^2\bigg(\mathbb E\Big[\sum_{k,l\in J}\sum_{i,j=n/2}^{n}g(X^k_i,X^l_j);\,D\Big]\bigg)^{-1}. \qquad(6.22)$$
Since $J$ is a subset of $\{1,\dots,N\}$, this is larger than
$$(N/4)^2n^2\,P(D)^2\bigg(\mathbb E\Big[\sum_{k,l=1}^{N}\sum_{i,j=n/2}^{n}g(X^k_i,X^l_j)\Big]\bigg)^{-1} \overset{(6.10)}{\ge} \frac{cN^2n^2}{Nn+N^2n^{3-d/2}} \ge c\min\big(Nn,\ n^{(d-2)/2}\big). \qquad(6.23)$$
This completes the proof.
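The variational formula (6.18)–(6.19) is convenient for experiments: any probability measure $\mu$ on $A$ gives the lower bound $\operatorname{cap}A\ge\mathcal E(\mu)^{-1}$. The toy computation below uses the uniform measure on a segment of $n$ sites with the hypothetical kernel $g(i,j)=1/(1+|i-j|)$ (the $d=3$ decay with constants suppressed).

```python
def energy_uniform_segment(n):
    """Energy (6.19) of the uniform measure on n sites of a line, under the
    stand-in kernel g(i, j) = 1/(1 + |i - j|). Uses the identity
    sum_{i,j} g(i,j) = n + 2 * sum_k (n - k)/(1 + k)."""
    total = n + 2 * sum((n - k) / (1 + k) for k in range(1, n))
    return total / n ** 2

# (6.18) gives cap A >= 1/E(mu); for this kernel the bound grows roughly
# like n / (2 log n), so longer segments have larger capacity.
bounds = [(n, 1.0 / energy_uniform_segment(n)) for n in (10, 100, 1000)]
print(bounds)
```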
Lemma 6.6. In the setting of the previous lemma, assume that for some $\delta>0$ we have $n^\delta\le N\le n^{\frac{d-4}{2}}$. Then, for every $\varepsilon>0$ sufficiently small,
$$\mathbb P\big[\operatorname{cap}\mathcal C(N,n)\ge(Nn)^{1-\varepsilon}\big] = 1-\text{s.e.}(n). \qquad(6.24)$$

Proof. To see that this lemma holds, it is sufficient to split the trajectories into $n^\varepsilon$ pieces of length $n^{1-\varepsilon}$ and consider them separately. More precisely, observe that $\mathcal C(N,n)\supset\mathcal C^\ell$ for every $0\le\ell\le n^\varepsilon$, where
$$\mathcal C^\ell = \bigcup_{k\le N}\big\{X^k_{\ell n^{1-\varepsilon}},\dots,X^k_{(\ell+1)n^{1-\varepsilon}}\big\}\cap B\big(X^k_{\ell n^{1-\varepsilon}},\,2n^{(1-\varepsilon)/2}\big). \qquad(6.25)$$
Using the standard random walk scaling, $P\big[X^k_{(\ell+1)n^{1-\varepsilon}}\in B(X^k_{\ell n^{1-\varepsilon}},n^{(1-\varepsilon)/2})\big]>c$ for all $\ell\le n^\varepsilon$, and, by Lemma 6.5, $\mathbb P\big[\operatorname{cap}\mathcal C^\ell\ge(Nn)^{1-\varepsilon}\mid\mathcal T_{\ell n^{1-\varepsilon}}\big]\ge c$, $\mathbb P$-a.s., where $\mathcal T_i$ is the $\sigma$-algebra generated by $X^k_j$, $k\le N$, $j\le i$. As $\operatorname{cap}\mathcal C(N,n)\ge\max_{\ell\le n^\varepsilon}\operatorname{cap}\mathcal C^\ell$, the lemma follows easily from these facts.

Remark 6.7. Lemma 6.6 holds also for $N=1$. An easy adaptation of the previous proof is left as an exercise.
We may now prove (6.3) of Proposition 6.2. Let $w_1, \dots, w_N$ be an arbitrary enumeration of $\operatorname{supp} \mu_{B(n),u}$, that is, the $w_k$'s are the random walk trajectories hitting $B(n)$ with labels smaller than $u$. We will show that for every $k, l$,
$$P[\text{Range } w_k \text{ and Range } w_l \text{ are connected within } \mathcal{I}^u] \ge 1 - s.e.(n). \qquad (6.26)$$
By the definition of random interlacements, $N$ has Poisson distribution with parameter $u \operatorname{cap} B(n) \asymp n^{d-2}$. Therefore, $P[N \ge n^d] = s.e.(n)$, and thus, by the remark after the definition of $s.e.(\cdot)$, (6.26) implies (6.3). Without loss of generality, we take $k = 1$, $l = 2$ in (6.26).
Let $s = s(d) = \lfloor d/2 \rfloor - 1$. We split $w_3, \dots, w_N$ into $s$ independent Poisson processes,
$$\mu^\ell = \mu_{B(n),\, u\ell/s} - \mu_{B(n),\, u(\ell-1)/s}, \qquad \ell = 1, \dots, s, \qquad (6.27)$$
and set $\mu^0 = \delta_{w_1} + \delta_{w_2}$. As before, $P[|\operatorname{supp} \mu^\ell| \ge c n^{d-2}] = 1 - s.e.(n)$.
For $w \in W$, let $T_n(w) = \inf\{k \ge 0 : w(k) \notin B(3n)\}$ and $R(w) = \{w(0), \dots, w(n^2 \wedge T_n(w))\}$. Set $V_0 = \{w_1\}$, $A_0 = R(w_1)$, and for $\ell = 1, \dots, s$,
$$V_\ell = \{w \in \operatorname{supp} \mu^\ell : R(w) \cap A_{\ell-1} \ne \emptyset\}, \qquad (6.28)$$
$$A_\ell = \bigcup_{w \in V_\ell} R(w). \qquad (6.29)$$
We claim that for all $\ell = 1, \dots, s$ and some $\lambda > 0$ small,
$$P\big[|V_\ell| \ge n^{(2-\lambda)\ell}\big] = 1 - s.e.(n). \qquad (6.30)$$
Indeed, this is trivially true for $|V_0|$. If (6.30) holds for $\ell - 1$, then by Lemma 6.6 and Remark 6.7 (applied at scale $n^2$),
$$P\big[\operatorname{cap} A_{\ell-1} \ge \big(n^{(2-\lambda)(\ell-1)}\, n^2\big)^{1-\lambda} \ge n^{(2-\lambda)\ell - c(d)\lambda}\big] = 1 - s.e.(n). \qquad (6.31)$$
When $\operatorname{cap} A_{\ell-1} \ge n^{(2-\lambda)\ell - c(d)\lambda}$, Lemma 6.3 shows that every trajectory in $\operatorname{supp}\mu^\ell$ hits $A_{\ell-1}$ with probability at least $c \operatorname{cap}(A_{\ell-1})\, n^{2-d} \ge c\, n^{(2-\lambda)\ell + 2 - d - c(d)\lambda}$. As $|\operatorname{supp}\mu^\ell| \ge c n^{d-2}$ with probability $1 - s.e.(n)$, for $\lambda$ small enough at least $n^{(2-\lambda)\ell}$ of them hit $A_{\ell-1}$ before the time $n^2$, and exit $B(3n)$, again with $1 - s.e.(n)$ probability. This completes the proof of (6.30).
From (6.30) it follows that $|V_s| \ge n^{(2-\lambda)s} \ge n^{d-3-s\lambda}$, and, by Lemma 6.6 and Remark 6.7, $\operatorname{cap} R(w_2) \ge c n^{2-\lambda}$, with $1 - s.e.(n)$ probability. Consider now the random walks in $V_s$. After hitting $A_{s-1}$, the rest of their trajectories is independent of the past, so Lemma 6.3 once more implies that
$$P\big[\{X^k_{H_{A_{s-1}}}, \dots, X^k_{H_{A_{s-1}} + n^2}\} \cap R(w_2) \ne \emptyset\big] \ge c\, n^{4-d-\lambda}. \qquad (6.32)$$
Hence, the event that at least one walk in $V_s$ hits $R(w_2)$ has probability at least $1 - s.e.(n)$. This implies (6.26) and completes the proof of (6.3).
The proof of (6.4) is analogous; it is sufficient to take $w_2 \in \operatorname{supp} \mu_{B(n),u'} \setminus \operatorname{supp} \mu_{B(n),u}$.
The proof of Proposition 6.2 that we have just finished has some interesting consequences. We actually proved that the set
$$\mathcal{I}^u_n := \bigcup_{w \in \operatorname{supp} \mu_{B(n),u}} R(w) \qquad (6.33)$$
is with high probability connected. Since $R(w) \subset B(3n)$, we see that $\mathcal{I}^u_n \subset \mathcal{I}^u \cap B(3n)$: the set $\mathcal{I}^u_n$ is connected locally, that is, one does not need to go far from $B(n)$ to make a connection. Of course, in general, $\text{Range } w \cap B(n) \ne R(w) \cap B(n)$, so we did not show that $\mathcal{I}^u \cap B(n)$ is locally connected. However, it is not difficult to extend the previous techniques to show
$$P\big[\text{every } x, y \in B(n) \cap \mathcal{I}^u \text{ are connected in } \mathcal{I}^u \cap B(3n)\big] = 1 - s.e.(n). \qquad (6.34)$$
Another consequence of the previous proof is the following claim. With probability $1 - s.e.(n)$, for every pair $w, w' \in \operatorname{supp} \mu_{B(n),u}$ there are $w = w_0, w_1, \dots, w_{s+1} = w'$ in $\operatorname{supp} \mu_{B(n),u}$ such that $w_i$ intersects $w_{i-1}$, $i = 1, \dots, s+1$. The Borel–Cantelli lemma then implies:

Corollary 6.8 ($d \ge 5$). Consider the (random) graph $\mathcal{G}$ whose vertices are all trajectories in $\operatorname{supp} \mu_{\mathbb{Z}^d,u}$, and whose two vertices are connected by an edge iff the corresponding trajectories intersect. Then $P$-a.s., the diameter of $\mathcal{G}$ satisfies $\operatorname{diam} \mathcal{G} \le \lfloor d/2 \rfloor$.

Surprisingly, this bound is optimal when $d$ is odd. The correct upper bound (and in fact also the lower bound) is given in the following theorem.

Theorem 6.9 ([31, 34]). For every $u > 0$, the diameter of $\mathcal{G}$ equals $\lceil d/2 \rceil - 1$, $P$-a.s.
For even $d$'s, our upper bound exceeds the correct one by one. The main reason for this is that we decided to prove $s.e.(n)$ bounds on probabilities, which in turn forced us to lose factors $n^\lambda$ at several steps. These $n^\lambda$'s are what is missing in the last paragraph of the proof of Proposition 6.2 to get the optimal bound.

Almost the same techniques, however, allow one to show that (cf. (6.30), (6.32)), for all $n$ large enough,
$$P\big[|V_\ell| \ge c n^{2\ell}\big] > c' > 0 \quad \text{and} \quad P[w_2 \text{ intersects } A_s] > c'. \qquad (6.35)$$
In [34], estimates similar to (6.35), together with ideas inspired by the Wiener test (see [26, Theorem 2.2.5]), are applied to obtain the optimal upper bound.
Our proof of Proposition 6.2 has yet another consequence, for the so-called chemical distance on the interlacement set, defined for $x, y \in \mathcal{I}^u$ by
$$\rho_u(x,y) = \min\big\{n : \exists\, x = z_0, z_1, \dots, z_n = y \text{ such that } z_i \in \mathcal{I}^u \text{ and } |z_i - z_{i-1}| = 1,\ i = 1, \dots, n\big\}. \qquad (6.36)$$
Claim (6.34) easily yields
$$P\big[\rho_u(x,y) \le 9 n^d \text{ for all } x, y \in B(n) \cap \mathcal{I}^u\big] = 1 - s.e.(n). \qquad (6.37)$$
The bound $9 n^d$ is of course far from being optimal. (6.37) is, however, one of the ingredients of the optimal upper bound $Cn$, which is proved in [12]:

Theorem 6.10 ([12]). For every $u > 0$ and $d \ge 3$ there exist constants $C, C' < \infty$ and $\alpha \in (0,1)$ such that
$$P\big[\text{there exists } x \in \mathcal{I}^u \cap [-n,n]^d \text{ such that } \rho_u(0,x) > Cn \,\big|\, 0 \in \mathcal{I}^u\big] \le C' e^{-n^\alpha}.$$
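The chemical distance (6.36) is just the graph distance inside a vertex subset of $\mathbb{Z}^d$, so on any finite sample it can be computed by breadth-first search. A minimal sketch (our own illustration; the set `S` below is a small stand-in for a sample of $\mathcal{I}^u \cap B(n)$, and the function name is ours):

```python
from collections import deque

def chemical_distance(S, x, y):
    """Graph distance between x and y inside the vertex set S of Z^d,
    where consecutive points z_{i-1}, z_i must satisfy |z_i - z_{i-1}| = 1
    (nearest neighbours); returns None if x, y lie in different clusters."""
    S = set(S)
    if x not in S or y not in S:
        return None
    d = len(x)
    queue, dist = deque([x]), {x: 0}
    while queue:
        z = queue.popleft()
        if z == y:
            return dist[z]
        for i in range(d):          # explore the 2d nearest neighbours
            for s in (-1, 1):
                w = z[:i] + (z[i] + s,) + z[i + 1:]
                if w in S and w not in dist:
                    dist[w] = dist[z] + 1
                    queue.append(w)
    return None

# An L-shaped set in Z^2: the chemical distance between the endpoints
# is 4, although their Euclidean distance is only 2*sqrt(2).
S = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
assert chemical_distance(S, (0, 0), (2, 2)) == 4
```

The crude bound (6.37) corresponds to the trivial observation that a BFS path inside a connected set never exceeds the cardinality of the set.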
6.1 Notes
The properties of the interlacement set $\mathcal{I}^u$ on $\mathbb{Z}^d$ were investigated already in the paper [43]. A weaker version of Theorem 6.1 is proved there (see (2.21) in [43]): namely, for every $u > 0$,
$$P[\mathcal{I}^u \text{ is connected}] = 1, \qquad (6.38)$$
which is sufficient to deduce the absence of a phase transition. The proof of [43] is based on the Burton–Keane argument [11].

Theorem 6.1 states that for every $u > 0$, $\mathcal{I}^u$ is supercritical. It is thus natural to ask, as in Bernoulli-percolation theory, to what extent the geometry of $\mathcal{I}^u$ (at large scales) resembles the geometry of the complete lattice $\mathbb{Z}^d$.

Recently, many results going in this direction have appeared. In [35] it was proved that $\mathcal{I}^u$ percolates in two-dimensional slabs, namely that for every $u > 0$ and $d \ge 3$ there exists $R > 0$ such that $\mathcal{I}^u \cap (\mathbb{Z}^2 \times [0,R]^{d-2})$ contains an infinite component $P$-a.s. In [33] it was further shown that the random walk on $\mathcal{I}^u$ is $P$-a.s. transient. Theorem 6.10 is another result of this type.

Theorem 6.9, giving the diameter of the graph $\mathcal{G}$, was independently shown by Procaccia and Tykesson [31], using ideas of the stochastic dimension theory developed in [5], and by Ráth and Sapozhnikov [34], whose methods were commented on above.

The techniques used in our proof of Proposition 6.2 are a mixture of methods from [34] (from which we borrowed Lemmas 6.4 and 6.5) and [12] (which contains many $s.e.(n)$ estimates).

The results of this chapter can be used to deduce some properties of the trace $\{X_0, \dots, X_{uN^d}\}$ of the simple random walk on the torus. E.g., Theorem 6.10 was combined in [12] with the coupling of random interlacements and random walk from [49] to control the chemical distance on the random walk trace.
Chapter 7
Locally tree-like graphs
In the previous chapters we have studied the random walk on the torus and the corresponding interlacement set on $\mathbb{Z}^d$. We have seen that in that case many interesting questions are still open, including the existence of a sharp phase transition in the connectivity of the vacant set of the random walk, and its correspondence to the phase transition of random interlacements. Answering these questions requires a better control of the random interlacements in both the sub-critical and the supercritical phase, which is not available at present.
In this chapter we are going to explore random interlacements on graphs
where such control is available, namely on trees. We will then explain how
such control can be used to show the phase transition for the vacant set of
random walk on finite locally tree-like graphs, and to give the equivalence of critical points in the two models. In other words, we prove Conjecture 2.6 for locally tree-like graphs.
7.1 Random interlacements on trees
We start by considering random interlacements on trees. We will show that the vacant clusters of this model behave like Galton–Watson trees, which allows us to perform many exact computations. As in these lecture notes we only deal with random walks and random interlacements on regular graphs, we restrict our attention to regular trees.

Let $T_d$ be the infinite $d$-regular tree, $d \ge 3$, on which the simple random walk is transient. We may therefore define random interlacements on $T_d$ similarly as we did for $\mathbb{Z}^d$.
We write $P_x$ for the law of the canonical simple random walk $(X_n)$ on $T_d$ started at $x \in T_d$, and denote by $e_K$, $K \subset T_d$ finite, the equilibrium measure,
$$e_K(x) = P_x[\tilde{H}_K = \infty]\, \mathbf{1}\{x \in K\}. \qquad (7.1)$$
Observe that if $K$ is connected, $e_K$ can be easily computed. Indeed, denoting by $d(\cdot,\cdot)$ the graph distance on the tree, we observe that under $P_x$ the process $d(X_n, x)$ has the same law as a drifted random walk on $\mathbb{N}$ started at $0$. If not at $0$, this walk jumps to the right with probability $(d-1)/d$ and to the left with probability $1/d$; at $0$ it always goes to the right. Using a standard computation for the random walk with drift (see e.g. [54], Lemma 1.24), it is then easy to show that
$$P_x[\tilde{H}_x = \infty] = P_y[H_x = \infty] = \frac{d-2}{d-1}, \qquad (7.2)$$
for every neighbor $y$ of $x$. For $K$ connected, we then get
$$e_K(x) = \frac{1}{d}\, \#\{y : y \sim x,\ y \notin K\} \cdot \frac{d-2}{d-1}, \qquad (7.3)$$
where the first two terms give the probability that the first step of the random walk exits $K$.
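The identity (7.2) is easy to test numerically: the distance process $d(X_n, x)$ steps right with probability $(d-1)/d$ and left with probability $1/d$, and by gambler's ruin a walk started at $1$ hits $0$ with probability $1/(d-1)$. A Monte Carlo sketch (the truncation level `M` and all names below are our own choices):

```python
import random

def escape_probability(d, runs=10000, M=40, seed=0):
    """Estimate P_x[tilde H_x = infinity] = (d-2)/(d-1) on the d-regular
    tree via the drifted walk d(X_n, x) on N: after the forced first step
    to distance 1, the walk escapes iff it reaches level M before 0
    (M large makes the truncation error negligible)."""
    rng = random.Random(seed)
    p_right = (d - 1) / d
    escaped = 0
    for _ in range(runs):
        z = 1  # distance after the forced first step away from x
        while 0 < z < M:
            z += 1 if rng.random() < p_right else -1
        escaped += (z == M)
    return escaped / runs

for d in (3, 4, 5):
    est, exact = escape_probability(d), (d - 2) / (d - 1)
    assert abs(est - exact) < 0.025
```

With $10^4$ runs the standard error is about $0.005$, so the tolerance above is comfortably wide.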
We consider the spaces $W_+$, $W$, $W^*$, and the measures $Q_K$, defined similarly as in Chapter 3, replacing $\mathbb{Z}^d$ by $T_d$ in these definitions where appropriate. As in Theorem 3.1, it can be proved that there exists a unique $\sigma$-finite measure $\nu$ on $(W^*, \mathcal{W}^*)$ with the analogous properties, and random interlacements on $T_d$ are then given by a Poisson point process $\omega$ on $W^* \times \mathbb{R}_+$ with intensity measure $\nu(dw^*) \otimes du$. For a fixed $x \in T_d$, we can decompose
$$W^* = \bigcup_{z \in T_d} W^*_{x,z}, \qquad (7.6)$$
where
$$W^*_{x,z} = \big\{w^* : z \in \operatorname{Ran}(w^*),\ d\big(x, \operatorname{Ran}(w^*)\big) = d(x,z)\big\}, \qquad (7.7)$$
and we write $f_x(z) = \nu(W^*_{x,z})$. (The fact that the sets $W^*_{x,z}$ are disjoint follows easily from the fact that $T_d$ is a tree.)
As a consequence of this disjointness, we obtain that the random variables $\omega(W^*_{x,z} \times [0,u])$, $z \in T_d$, are independent. We may thus define an independent site Bernoulli percolation on $T_d$ by setting
$$Y^u_z(\omega) = \mathbf{1}\big\{\omega(W^*_{x,z} \times [0,u]) \ge 1\big\}, \qquad \text{for } z \in T_d. \qquad (7.8)$$
By (3.9), (3.14) and (7.7), we see that
$$P[Y^u_z = 0] = \exp\{-u f_x(z)\}. \qquad (7.9)$$
To finish the proof of the theorem, it remains to observe that the null cluster of $(Y^u_z)_{z \in T_d}$ containing $x$ coincides with the cluster of $x$ in the vacant set $\mathcal{V}^u$. Recall that the critical point of random interlacements is defined by
$$u_\star(T_d) = \inf\big\{u \ge 0 : P[\text{the cluster of } x \text{ in } \mathcal{V}^u \text{ is infinite}] = 0\big\}. \qquad (7.10)$$
Corollary 7.3. The critical point of random interlacements on $T_d$ is given by
$$u_\star(T_d) = \frac{d(d-1)\log(d-1)}{(d-2)^2}. \qquad (7.11)$$
Proof. For $z \ne x$, by considering the drifted random walk as above (7.2), it is easy to see that
$$f_x(z) = \frac{d-2}{d-1} \cdot \frac{d-1}{d} \cdot \frac{d-2}{d-1} = \frac{(d-2)^2}{d(d-1)}. \qquad (7.12)$$
Hence, the offspring distribution of the Galton–Watson process mentioned in Remark 7.2 is (except in the first generation) binomial with parameters $\big(d-1,\, \exp\{-u \frac{(d-2)^2}{d(d-1)}\}\big)$. This Galton–Watson process is critical if the mean of its offspring distribution equals one, implying that $u_\star(T_d)$ is the solution of
$$(d-1) \exp\Big\{-u_\star \frac{(d-2)^2}{d(d-1)}\Big\} = 1, \qquad (7.13)$$
yielding (7.11).
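The criticality equation (7.13) is straightforward to verify numerically: plugging $u_\star$ from (7.11) into the mean offspring $(d-1)e^{-u(d-2)^2/(d(d-1))}$ must give exactly one. A quick sanity check (function names are ours):

```python
import math

def u_star(d):
    # critical point of random interlacements on T_d, formula (7.11)
    return d * (d - 1) * math.log(d - 1) / (d - 2) ** 2

def mean_offspring(d, u):
    # mean of the Binomial(d-1, p_u) offspring law from the proof of Cor. 7.3
    return (d - 1) * math.exp(-u * (d - 2) ** 2 / (d * (d - 1)))

for d in range(3, 10):
    assert abs(mean_offspring(d, u_star(d)) - 1.0) < 1e-12
    assert mean_offspring(d, 0.5 * u_star(d)) > 1.0  # vacant set supercritical
    assert mean_offspring(d, 2.0 * u_star(d)) < 1.0  # vacant set subcritical
```

For instance, $u_\star(T_3) = 6\log 2 \approx 4.159$.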
Remark 7.4. For the previous result, the offspring distribution in the first generation is irrelevant. Using (7.1) and Theorem 7.1, it is however easy to see that (for $k = 0, \dots, d$)
$$P[x \in \mathcal{V}^u] = e^{-u \operatorname{cap}(\{x\})} = e^{-u f_x(x)} = e^{-u(d-2)/(d-1)}, \qquad (7.14)$$
$$P\big[|\mathcal{V}^u \cap \{y : y \sim x\}| = k \,\big|\, x \in \mathcal{V}^u\big] = \binom{d}{k}\, e^{-uk \frac{(d-2)^2}{d(d-1)}} \Big(1 - e^{-u \frac{(d-2)^2}{d(d-1)}}\Big)^{d-k}. \qquad (7.15)$$
We will need these formulas later.
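Formulas (7.14)–(7.15) describe the law of the vacant cluster of $x$: away from the root, the cluster grows as a Galton–Watson tree with Binomial$(d-1, p_u)$ offspring, $p_u = e^{-u(d-2)^2/(d(d-1))}$. A short sketch (our own illustration) checking that (7.15) is a probability distribution, and computing the extinction probability of this Galton–Watson tree as the smallest fixed point of its generating function:

```python
import math

def p_u(d, u):
    return math.exp(-u * (d - 2) ** 2 / (d * (d - 1)))

def neighbour_law(d, u):
    # formula (7.15): number of vacant neighbours of x, given x vacant
    p = p_u(d, u)
    return [math.comb(d, k) * p**k * (1 - p)**(d - k) for k in range(d + 1)]

def extinction_prob(d, u, iters=2000):
    # iterate q -> ((1-p) + p*q)^(d-1), the generating function of the
    # Binomial(d-1, p_u) offspring law; starting from 0 this converges
    # to the smallest fixed point, i.e. the extinction probability
    p, q = p_u(d, u), 0.0
    for _ in range(iters):
        q = ((1 - p) + p * q) ** (d - 1)
    return q

d = 3
assert abs(sum(neighbour_law(d, 1.0)) - 1.0) < 1e-12
u_star = d * (d - 1) * math.log(d - 1) / (d - 2) ** 2
assert extinction_prob(d, 1.1 * u_star) > 1 - 1e-6  # dies out above u_star
assert extinction_prob(d, 0.5 * u_star) < 1 - 1e-3  # survives below u_star
```

This makes the phase transition of Corollary 7.3 directly visible: the extinction probability equals one exactly when $u \ge u_\star(T_d)$.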
7.2 Random walk on tree-like graphs
We now return to the problem of the vacant set of the random walk on finite graphs. However, instead of considering the torus as in Chapter 2, we are going to study graphs that locally look like a tree, in the hope of using the results of the previous section.

Actually, most of this section will deal with so-called random regular graphs. A random $d$-regular graph with $n$ vertices is a graph chosen uniformly from the set $\mathcal{G}_{n,d}$ of all simple (i.e., without loops and multiple edges) graphs with vertex set $V_n = [n] := \{1, \dots, n\}$ and all vertices of degree $d$; we tacitly assume that $nd$ is even. We let $P_{n,d}$ denote the distribution of such a graph, that is, the uniform distribution on $\mathcal{G}_{n,d}$.

It is well known that, with probability tending to $1$ as $n$ increases, the majority of the vertices in a random regular graph have a neighborhood of radius $c \log n$ which is graph-isomorphic to a ball in $T_d$.

For a fixed graph $G = (V, \mathcal{E})$, let $P_G$ be the law of the random walk on $G$ started from the uniform distribution, and $(X_t)_{t \ge 0}$ the canonical process. As before, we will be interested in the vacant set
$$\mathcal{V}^u_G = V \setminus \{X_t : 0 \le t \le u|V|\}, \qquad (7.16)$$
and denote by $\mathcal{C}^u_{\max}$ its maximal connected component.

We will study the properties of the vacant set under the annealed measure $\mathbb{P}_{n,d}$ given by
$$\mathbb{P}_{n,d}(\cdot) = \int P_G(\cdot)\, P_{n,d}(dG) \quad \Big( = \sum_G P_G(\cdot)\, P_{n,d}[G] \Big). \qquad (7.17)$$
The following theorem states that a phase transition occurs in the behavior of the vacant set on the random regular graph.

Theorem 7.5 ($d \ge 3$, $u_\star := u_\star(T_d)$).
(a) For every $u < u_\star$ and every $\varepsilon > 0$ there is $c(u) > 0$ such that for all $n$ large,
$$\mathbb{P}_{n,d}\big[|\mathcal{C}^u_{\max}| \ge c(u)\, n\big] \ge 1 - \varepsilon. \qquad (7.18)$$
(b) For every $u > u_\star$ and every $\varepsilon > 0$ there is $K(u,\varepsilon) < \infty$ such that for all $n$ large,
$$\mathbb{P}_{n,d}\big[|\mathcal{C}^u_{\max}| \ge K(u,\varepsilon) \log n\big] \le \varepsilon. \qquad (7.19)$$
Observe that Theorem 7.5 not only establishes the phase transition, but also confirms that the critical point coincides with the critical point of random interlacements on $T_d$. The theorem was proved (in a weaker form, but for a larger class of graphs) in [14]. In these notes, we are going to use a simpler proof given by Cooper and Frieze [15], which uses the randomness of the graph in a clever way. Besides being simple, this proof has the additional advantage that it can be used also in the vicinity of the critical point: by a technique very similar to the ones presented here, it was proved in [13] that the vacant set of the random walk exhibits a double-jump behavior analogous to that of the maximal connected cluster in Bernoulli percolation:
Theorem 7.6.
(a) Critical window. Let $(u_n)_{n \ge 1}$ be a sequence satisfying $\limsup_n n^{1/3} |u_n - u_\star| < \infty$. Then the laws of $|\mathcal{C}^{u_n}_{\max}|/n^{2/3}$ under $\mathbb{P}_{n,d}$ are tight, and so are the laws of $n^{2/3}/|\mathcal{C}^{u_n}_{\max}|$.
(b) Above the window. When $(u_n)_{n \ge 1}$ satisfies
$$u_\star - u_n \xrightarrow[n\to\infty]{} 0, \quad \text{and} \quad n^{1/3}(u_\star - u_n) \xrightarrow[n\to\infty]{} \infty, \qquad (7.22)$$
then
$$|\mathcal{C}^{u_n}_{\max}|/n^{2/3} \xrightarrow[n\to\infty]{} \infty, \quad \text{in } \mathbb{P}_{n,d}\text{-probability}. \qquad (7.23)$$
(c) Below the window. When $(u_n)_{n \ge 1}$ satisfies
$$u_n - u_\star \xrightarrow[n\to\infty]{} 0, \quad \text{and} \quad n^{1/3}(u_n - u_\star) \xrightarrow[n\to\infty]{} \infty, \qquad (7.24)$$
then
$$|\mathcal{C}^{u_n}_{\max}|/n^{2/3} \xrightarrow[n\to\infty]{} 0, \quad \text{in } \mathbb{P}_{n,d}\text{-probability}. \qquad (7.25)$$
We will now sketch the main steps of the proof of Theorem 7.5. Detailed
proofs can be found in [15, 13].
7.2.1 Very short introduction to random graphs
We start by reviewing some properties of random regular graphs; for more about these graphs see, e.g., [7, 55].

It turns out that it is easier to work with multi-graphs than with simple graphs. We therefore introduce $\mathcal{M}_{n,d}$ for the set of all $d$-regular multi-graphs with vertex set $[n]$.

For reasons that will be explained later, we also define random graphs with a given degree sequence $\mathbf{d} : [n] \to \mathbb{N}$. We will use $\mathcal{G}_{\mathbf{d}}$ to denote the set of graphs for which every vertex $x \in [n]$ has degree $\mathbf{d}_x = \mathbf{d}(x)$. Similarly, $\mathcal{M}_{\mathbf{d}}$ stands for the set of such multi-graphs; here loops are counted twice when considering the degree. $P_{n,d}$ and $P_{\mathbf{d}}$ denote the uniform distributions on $\mathcal{G}_{n,d}$ and $\mathcal{G}_{\mathbf{d}}$, respectively. We say that a given event holds asymptotically almost surely (denoted by $P_{n,d}$-a.a.s. or $P_{\mathbf{d}}$-a.a.s., depending on the case) if it holds with probability converging to one with respect to $P_{n,d}$ or $P_{\mathbf{d}}$.
We first introduce the pairing construction, which allows one to generate graphs distributed according to $P_{n,d}$, starting from a random pairing of a set with $dn$ elements. The same construction can be used to generate a random graph chosen uniformly at random from $\mathcal{G}_{\mathbf{d}}$.

We consider a sequence $\mathbf{d} : V_n \to \mathbb{N}$ such that $\sum_{x \in V_n} \mathbf{d}_x$ is even. Given such a sequence, we associate to every vertex $x \in V_n$, $\mathbf{d}_x$ half-edges. The set of half-edges is denoted by $H_{\mathbf{d}} = \{(x,i) : x \in V_n,\ i \in [\mathbf{d}_x]\}$. We write $H_{n,d}$ for the case $\mathbf{d}_x = d$ for all $x \in V_n$. Every perfect matching $M$ of $H_{\mathbf{d}}$ (i.e., a partition of $H_{\mathbf{d}}$ into $|H_{\mathbf{d}}|/2$ disjoint pairs) corresponds to a multi-graph $G_M = (V_n, \mathcal{E}_M) \in \mathcal{M}_{\mathbf{d}}$ with
$$\mathcal{E}_M = \big\{\{x,y\} : \{(x,i),(y,j)\} \in M \text{ for some } i \in [\mathbf{d}_x],\ j \in [\mathbf{d}_y]\big\}. \qquad (7.26)$$
We say that the matching $M$ is simple if the corresponding multi-graph $G_M$ is simple, that is, if $G_M$ is a graph. With a slight abuse of notation, we write $\bar{P}_{\mathbf{d}}$ for the uniform distribution on the set of all perfect matchings of $H_{\mathbf{d}}$, and also for the induced distribution on the set of multi-graphs $\mathcal{M}_{\mathbf{d}}$. It is well known (see e.g. [7] or [29]) that a $\bar{P}_{\mathbf{d}}$-distributed multi-graph $G$ conditioned on being simple has distribution $P_{\mathbf{d}}$, that is,
$$\bar{P}_{\mathbf{d}}[G \in \cdot \mid G \in \mathcal{G}_{\mathbf{d}}] = P_{\mathbf{d}}[G \in \cdot], \qquad (7.27)$$
and that, for $d$ constant, there is $c > 0$ such that for all $n$ large enough,
$$c < \bar{P}_{n,d}[G \in \mathcal{G}_{n,d}] < 1 - c. \qquad (7.28)$$
These two claims allow one to deduce $P_{n,d}$-a.a.s. statements directly from $\bar{P}_{n,d}$-a.a.s. statements.
The main advantage of dealing with matchings is that they can be constructed sequentially: to construct a uniformly distributed perfect matching of $H_{\mathbf{d}}$, one samples without replacement a sequence $h_1, \dots, h_{|H_{\mathbf{d}}|}$ of
elements of $H_{\mathbf{d}}$ in the following way. For $i$ odd, $h_i$ can be chosen by an arbitrary rule (which might also depend on the previous $(h_j)_{j<i}$), while if $i$ is even, $h_i$ must be chosen uniformly among the remaining half-edges. Then, for every $1 \le i \le |H_{\mathbf{d}}|/2$, one matches $h_{2i}$ with $h_{2i-1}$.
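The pairing construction is easy to implement: one valid choice of the arbitrary rule is to shuffle all half-edges uniformly and pair consecutive elements, which produces a uniform perfect matching in one pass. A minimal sketch (function name is ours):

```python
import random

def configuration_model(deg, seed=0):
    """Uniform multi-graph with degree sequence deg (sum must be even):
    shuffle the half-edges (x, i), i < deg[x], and match consecutive
    pairs; loops and multiple edges are kept, a loop counting twice."""
    half_edges = [(x, i) for x, dx in enumerate(deg) for i in range(dx)]
    assert len(half_edges) % 2 == 0
    rng = random.Random(seed)
    rng.shuffle(half_edges)
    edges = []
    for k in range(0, len(half_edges), 2):
        (x, _), (y, _) = half_edges[k], half_edges[k + 1]
        edges.append((x, y))
    return edges

n, d = 10, 3
edges = configuration_model([d] * n)
# every vertex has degree d in the resulting multi-graph
degree = [0] * n
for x, y in edges:
    degree[x] += 1
    degree[y] += 1
assert degree == [d] * n
```

By (7.27)–(7.28), conditioning the output on being simple (no loops, no repeated edges) yields a uniform $d$-regular graph.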
It is clear from the above construction that, conditionally on $M' \subset M$ for a (partial) matching $M'$ of $H_{\mathbf{d}}$, $M \setminus M'$ is distributed as a uniform perfect matching of $H_{\mathbf{d}} \setminus \{(x,i) : (x,i) \text{ is matched in } M'\}$. Since the law of the graph $G_M$ does not depend on the labels $i$ of the half-edges, we obtain for all partial matchings $M'$ of $H_{\mathbf{d}}$ the following restriction property,
$$\bar{P}_{\mathbf{d}}\big[G_{M \setminus M'} \in \cdot \mid M' \subset M\big] = \bar{P}_{\mathbf{d}'}\big[G_M \in \cdot\big], \qquad (7.29)$$
where $\mathbf{d}'_x$ is the number of half-edges incident to $x$ in $H_{\mathbf{d}}$ that are not yet matched in $M'$, that is,
$$\mathbf{d}'_x = \mathbf{d}_x - \#\big\{i \in [\mathbf{d}_x] : (x,i) \text{ is matched in } M'\big\}, \qquad (7.30)$$
and $G_{M \setminus M'}$ is the graph corresponding to the non-perfect matching $M \setminus M'$, defined in the obvious way.
7.2.2 Distribution of the vacant set
We now study the properties of the vacant set of the random walk. Instead of the vacant set, it will be more convenient to consider the following object, which we call the vacant graph $\mathbb{V}^u$. It is defined by $\mathbb{V}^u = (V, \mathcal{E}^u)$ with
$$\mathcal{E}^u = \big\{\{x,y\} \in \mathcal{E} : x, y \in \mathcal{V}^u_G\big\}. \qquad (7.31)$$
It is important to notice that the vertex set of $\mathbb{V}^u$ is the deterministic set $V$ and not the random set $\mathcal{V}^u_G$; in particular, $\mathbb{V}^u$ is not the graph induced by $\mathcal{V}^u_G$ in $G$. Observe, however, that the maximal connected component $\mathcal{C}^u_{\max}$ of the vacant set (defined before in terms of the graph induced by $\mathcal{V}^u_G$ in $G$) coincides with the maximal connected component of the vacant graph $\mathbb{V}^u$ (except when $\mathcal{V}^u_G$ is empty, but this difference can be ignored in our investigations).
We use $\mathcal{D}^u : V \to \mathbb{N}$ to denote the (random) degree sequence of $\mathbb{V}^u$, and write $Q^u_{n,d}$ for the distribution of this sequence under the annealed measure $\bar{\mathbb{P}}_{n,d}$, defined by
$$\bar{\mathbb{P}}_{n,d}(\cdot) := \int P_G(\cdot)\, \bar{P}_{n,d}(dG) \quad \Big( = \sum_G P_G(\cdot)\, \bar{P}_{n,d}[G] \Big).$$
The following important but simple observation from [15] allows one to reduce questions about the properties of the vacant set $\mathcal{V}^u$ of the random walk on random regular graphs to questions about random graphs with given degree sequences.
Proposition 7.7 (Lemma 6 of [15]). For every $u \ge 0$, the distribution of the vacant graph $\mathbb{V}^u$ under $\bar{\mathbb{P}}_{n,d}$ is given by $\bar{P}_{\mathbf{d}}$ with $\mathbf{d}$ sampled according to $Q^u_{n,d}$, that is,
$$\bar{\mathbb{P}}_{n,d}\big[\mathbb{V}^u \in \cdot\big] = \int \bar{P}_{\mathbf{d}}\big[G \in \cdot\big]\, Q^u_{n,d}(d\mathbf{d}) \quad \Big( = \sum_{\mathbf{d}} \bar{P}_{\mathbf{d}}\big[G \in \cdot\big]\, Q^u_{n,d}(\mathbf{d}) \Big). \qquad (7.32)$$
Proof. The full proof is given in [15] and [13]; here we give a less rigorous but more transparent explanation. The main observation behind this proof is the following joint construction of a $\bar{P}_{n,d}$-distributed multi-graph and a (discrete-time) random walk on it.

1. Pick $X_0$ in $V$ uniformly.
2. Pair all half-edges incident to $X_0$ according to the pairing construction given above.
3. Pick uniformly a number $Z_0$ in $[d]$ and set $X_1$ to be the vertex paired with $(X_0, Z_0)$.
4. Pair all not-yet-paired half-edges incident to $X_1$ according to the pairing construction.
5. Pick uniformly a number $Z_1$ in $[d]$ and set $X_2$ to be the vertex paired with $(X_1, Z_1)$.
6. ...
7. Stop when $X_{|V|u}$ and its neighbors are known.

At this moment we have constructed the first $|V|u$ steps of the random walk trajectory and determined all edges in the graph that are incident to vertices visited by this trajectory. To finish the construction of the graph we should:

(8) Pair all remaining half-edges according to the pairing construction.

It is not hard to observe that the edges created in step (8) are exactly the edges of the vacant graph $\mathbb{V}^u$, and that the degree of every $x$ in $\mathbb{V}^u$ is known already at step (7). Using the restriction property of partial matchings (7.29), it is then not difficult to prove the proposition.
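The joint construction above can be coded almost verbatim: half-edges are paired lazily, only when the walk arrives at a vertex whose incident edges are not yet determined. A sketch of steps (1)–(7) (the function name and parameters are ours; step (8) would pair the half-edges left unmatched at the end, which form the vacant graph):

```python
import random

def walk_on_pairing(n, d, steps, seed=0):
    """Run a random walk while building a uniform pairing of the
    half-edges {(x, i) : x in [n], i in [d]} on the fly."""
    rng = random.Random(seed)
    partner = {}  # involution: partner[h] is the half-edge matched with h
    free = [(x, i) for x in range(n) for i in range(d)]
    def pair_all(x):
        # steps (2)/(4): pair every not-yet-paired half-edge at vertex x
        for i in range(d):
            h = (x, i)
            if h in partner:
                continue
            free.remove(h)
            h2 = free.pop(rng.randrange(len(free)))  # uniform free partner
            partner[h], partner[h2] = h2, h
    X = [rng.randrange(n)]  # step (1)
    for _ in range(steps):
        x = X[-1]
        pair_all(x)
        z = rng.randrange(d)            # steps (3)/(5): uniform label
        X.append(partner[(x, z)][0])    # move to the paired vertex
    return X, partner

X, partner = walk_on_pairing(n=50, d=3, steps=30)
assert len(X) == 31
assert all(partner[partner[h]] == h for h in partner)
```

Because the even-indexed choices are uniform among the remaining half-edges, completing the matching in step (8) is, by (7.29), again a uniform pairing of the leftover half-edges — which is exactly the content of Proposition 7.7.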
Due to the last proposition, in order to show Theorem 7.5, we need information about two objects: the maximal connected component of a $\bar{P}_{\mathbf{d}}$-distributed random graph, and the distribution $Q^u_{n,d}$. We deal with them in the next two subsections.
7.2.3 Random graphs with a given degree sequence.
Random graphs with a given (deterministic) degree sequence are well studied. A rather surprising fact, due to Molloy and Reed [30], is that the phase transition in their behavior is characterized by a single real parameter computed from the degree sequence. We give a very weak version of the result of [30]:
Theorem 7.8. For a degree sequence $\mathbf{d} : [n] \to \mathbb{N}$, let
$$Q(\mathbf{d}) = \frac{\sum_{x=1}^n \mathbf{d}_x^2}{\sum_{x=1}^n \mathbf{d}_x} - 2. \qquad (7.33)$$
Consider now a sequence of degree sequences $(\mathbf{d}^n)_{n \ge 1}$, $\mathbf{d}^n : [n] \to \mathbb{N}$, and assume that the degrees $\mathbf{d}^n_x$ are uniformly bounded by some $\Delta < \infty$ and that $|\{x \in [n] : \mathbf{d}^n_x = 1\}| \ge \beta n$ for some $\beta > 0$. Then:

If $\liminf_n Q(\mathbf{d}^n) > 0$, then there is $c > 0$ such that, with $\bar{P}_{\mathbf{d}^n}$-probability tending to one, the maximal connected component of the graph is larger than $cn$.

When $\limsup_n Q(\mathbf{d}^n) < 0$, then the size of the maximal connected component of a $\bar{P}_{\mathbf{d}^n}$-distributed graph is with high probability $o(n)$.
Later works, see e.g. [23, 21], give a more detailed description of random graphs with given degree sequences, including a description of the critical window, which allows one to deduce Theorem 7.6.
7.2.4 The degree sequence of the vacant graph
We will show that the distribution of the degree sequence of the vacant graph is essentially the same as the distribution of the number of vacant neighbors of a given vertex $x$ in random interlacements on $T_d$. More precisely, it follows from Remark 7.4 that the probability that $x \in \mathcal{V}^u_{T_d}$ and its degree in $\mathcal{V}^u_{T_d}$ is $i$, $i = 0, \dots, d$, is given by
$$d^u_i := e^{-u \frac{d-2}{d-1}} \binom{d}{i}\, p_u^i\, (1 - p_u)^{d-i}, \qquad (7.34)$$
with $p_u = \exp\{-u \frac{(d-2)^2}{d(d-1)}\}$.
Recall that $\mathcal{D}^u$ denotes the degree sequence of the vacant graph $\mathbb{V}^u$. For any degree sequence $\mathbf{d}$, $n_i(\mathbf{d})$ denotes the number of vertices with degree $i$ in $\mathbf{d}$. The following theorem states that the quenched expectation of $n_i(\mathcal{D}^u)$ concentrates around $n d^u_i$.

Theorem 7.9. For every $u > 0$ and every $i \in \{0, \dots, d\}$,
$$\big|E_G[n_i(\mathcal{D}^u)] - n d^u_i\big| \le c\, (\log^5 n)\, n^{1/2}, \qquad P_{n,d}\text{-a.a.s.} \qquad (7.35)$$
We decided not to present the proof of this theorem in these notes, as it uses arguments very similar to the proofs in Chapter 2. The full proof can be found in [49].

In order to control $Q^u_{n,d}$, we need to show that $n_i(\mathcal{D}^u)$ concentrates around its mean. This is the content of the following theorem, which holds for deterministic graphs.
Theorem 7.10. Let $G$ be a $d$-regular (multi)graph on $n$ vertices whose spectral gap $\lambda_G$ is larger than some $\alpha > 0$. Then, for every $\sigma \in (0, \frac14)$ and every $i \in \{0, \dots, d\}$,
$$P_G\big[\big|n_i(\mathcal{D}^u) - E_G[n_i(\mathcal{D}^u)]\big| \ge n^{1/2+\sigma}\big] \le c_{\alpha,\sigma}\, e^{-c_{\alpha,\sigma} n^{2\sigma}}. \qquad (7.36)$$
The proof of this theorem uses concentration inequalities for Lipschitz functions of sequences of dependent random variables, and can be found in [13].
From Theorems 7.9 and 7.10 it is easy to compute the typical value of $Q(\mathcal{D}^u)$. It turns out that it is positive when $u < u_\star$ and negative when $u > u_\star$. This proves, via Theorem 7.8 and Proposition 7.7, the existence of a phase transition for the vacant set.
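This computation can be reproduced in a few lines: replacing $n_i(\mathcal{D}^u)$ by its typical value $n d^u_i$ from (7.34), the parameter of Theorem 7.8 becomes $Q(u) = \sum_i i^2 d^u_i / \sum_i i\, d^u_i - 2$, and one checks that it changes sign exactly at $u_\star(T_d)$ from (7.11). A sketch of this consistency check (function names are ours):

```python
import math

def Q_typical(d, u):
    # Q from (7.33) for the typical vacant-graph degree distribution (7.34);
    # the common factor exp(-u(d-2)/(d-1)) cancels in the ratio
    p = math.exp(-u * (d - 2) ** 2 / (d * (d - 1)))
    m1 = sum(i * math.comb(d, i) * p**i * (1 - p)**(d - i) for i in range(d + 1))
    m2 = sum(i * i * math.comb(d, i) * p**i * (1 - p)**(d - i) for i in range(d + 1))
    return m2 / m1 - 2

for d in (3, 4, 7):
    u_star = d * (d - 1) * math.log(d - 1) / (d - 2) ** 2
    assert Q_typical(d, 0.9 * u_star) > 0     # giant vacant component
    assert Q_typical(d, 1.1 * u_star) < 0     # only small components
    assert abs(Q_typical(d, u_star)) < 1e-10  # sign change exactly at u_star
```

Indeed, using the binomial moments one finds $Q(u) = (d-1)p_u - 1$, which vanishes precisely when $p_u = 1/(d-1)$, i.e., at $u = u_\star(T_d)$.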
In fact, the above results allow one to compute $Q(\mathcal{D}^u)$ up to an additive error which is $o(n^{-1/2+\sigma})$. This precision is more than enough to apply the stronger results on the behavior of random graphs with given degree sequences [21] and to show Theorem 7.6.
7.3 Notes
Theorem 7.1, that is, the comparison of the cluster of the vacant set containing a given point with a cluster of Bernoulli percolation, can be generalized to arbitrary locally finite (weighted) trees. However, as the invariant measure of the random walk is then in general not uniform, some care must be taken in defining the random interlacements; see [46].

Apart from $T_d$, there is to our knowledge only one other case where the critical value of random interlacements can be computed explicitly (and is non-trivial), namely when the base graph is a Galton–Watson tree; in this case it was shown in [45] that $u_\star$ is almost surely constant, that is, it does not depend on the realization of the tree. Random interlacements are also relatively well understood on products of a $d$-regular tree with a $d'$-dimensional lattice, with $d \ge 3$ and $d' \ge 1$. In this case, we know that there is a transition between a disconnected and a connected phase for $\mathcal{I}^u$ as $u$ crosses a critical threshold $u_c$. It is worth mentioning that this critical value has been proven in [48] to be unique, despite the absence of monotonicity in the connectivity properties of $\mathcal{I}^u$.
Index
annealed measure, 55
Bernoulli (site) percolation, 32
canonical coordinates, 16
canonical shift, 21
capacity, 9
critical window, 56
Dirac measure, 16
disconnection of cylinder, 18
disconnection time, 18
domination, 34
entrance time, 8
equilibrium distribution, 5
equilibrium measure, 9
excursion, 4
excursions, 13
finite energy, 33
hitting time, 8
interlacement set, 25
Lebesgue measure, 23
local picture, 4, 5, 7, 9
pairing construction, 57
random interlacements, 22, 25
random regular graph, 55
random walk
lazy, 7
simple, 7
transient, 8
regeneration time, 8
regular graph, 52
regular tree, 52
restriction property, 58
reversible, 7
shift operator, 8
spectral gap, 8
time
departure, 13
entrance, 21
exit, 21
return, 13
trajectories
modulo time-shift, 22
trajectory
doubly-innite, 21
vacant set, 25
vacant set in the torus, 3
$\operatorname{cap}(A)$, 9
$e_A$, 9
$H_A$, 21
$\mathcal{I}^u$, 25
$P$, 7
$P_A$, 16
$\ast$-path, 33
$P_x$, 7
$Q_A$, 22
$Q^u_A$, 16
$Q^u$, 25
$R_p$, 32
$s_A$, 26
$T_A$, 21
$T_d$, 52
$T^d_N$, 7
$\theta_j$, 8
$\theta_k$, 8, 21
$W$, 21
$w_0$, 26
$W_A$, 21
$\mathcal{W}$, 21
$W_+$, 16
$\mathcal{W}_+$, 16
$W^*$, 22
$W^*_A$, 22
$\mathcal{W}^*$, 22
$X_n$, 7, 16
$\mathcal{Y}$, 25
$Y_x$, 25
$\mathbb{Z}^d$, 8
Bibliography
[1] David J. Aldous and Mark Brown. Inequalities for rare events in time-reversible Markov chains. I. In Stochastic inequalities (Seattle, WA, 1991), volume 22 of IMS Lecture Notes Monogr. Ser., pages 1–16. Inst. Math. Statist., Hayward, CA, 1992.
[2] David J. Aldous and Mark Brown. Inequalities for rare events in time-reversible Markov chains. II. Stochastic Process. Appl., 44(1):15–25, 1993.
[3] David Belius. Cover levels and random interlacements. arXiv:1103.2072, 2011.
[4] David Belius. Gumbel fluctuations for cover times in the discrete torus. arXiv:1202.0190, 2012.
[5] Itai Benjamini, Harry Kesten, Yuval Peres, and Oded Schramm. Geometry of the uniform spanning forest: transitions in dimensions 4, 8, 12, .... Ann. of Math. (2), 160(2):465–491, 2004.
[6] Itai Benjamini and Alain-Sol Sznitman. Giant component and vacant set for random walk on a discrete torus. J. Eur. Math. Soc. (JEMS), 10(1):133–172, 2008.
[7] Béla Bollobás. Random graphs, volume 73 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, second edition, 2001.
[8] Béla Bollobás and Oliver Riordan. Percolation. Cambridge University Press, New York, 2006.
[9] Christian Borgs, Jennifer T. Chayes, Remco van der Hofstad, Gordon Slade, and Joel Spencer. Random subgraphs of finite graphs. I. The scaling window under the triangle condition. Random Structures Algorithms, 27(2):137–184, 2005.
[10] Christian Borgs, Jennifer T. Chayes, Remco van der Hofstad, Gordon Slade, and Joel Spencer. Random subgraphs of finite graphs. II. The lace expansion and the triangle condition. Ann. Probab., 33(5):1886–1944, 2005.
[11] R. M. Burton and M. Keane. Density and uniqueness in percolation. Comm. Math. Phys., 121(3):501–505, 1989.
[12] Jiří Černý and Serguei Popov. On the internal distance in the interlacement set. Electron. J. Probab., 17(29):1–25, 2012.
[13] Jiří Černý and Augusto Teixeira. Critical window for the vacant set left by random walk on random regular graphs. To appear in Random Structures & Algorithms, arXiv:1101.1978, 2011.
[14] Jiří Černý, Augusto Teixeira, and David Windisch. Giant vacant component left by a random walk in a random d-regular graph. Ann. Inst. H. Poincaré Probab. Statist., 47(4):929–968, 2011.
[15] Colin Cooper and Alan Frieze. Component structure induced by a random walk on a random graph. To appear in Random Structures & Algorithms, arXiv:1005.1564, 2010.
[16] Amir Dembo and Alain-Sol Sznitman. On the disconnection of a discrete cylinder by a random walk. Probab. Theory Related Fields, 136(2):321–340, 2006.
[17] Amir Dembo and Alain-Sol Sznitman. A lower bound on the disconnection time of a discrete cylinder. In In and out of equilibrium. 2, volume 60 of Progr. Probab., pages 211–227. Birkhäuser, Basel, 2008.
[18] P. Erdős and A. Rényi. On the evolution of random graphs. Magyar Tud. Akad. Mat. Kutató Int. Közl., 5:17–61, 1960.
[19] Geoffrey Grimmett. Percolation, volume 321 of Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, Berlin, second edition, 1999.
[20] Olle Häggström and Johan Jonasson. Uniqueness and non-uniqueness in percolation theory. Probab. Surv., 3:289–344 (electronic), 2006.
[21] Hamed Hatami and Michael Molloy. The scaling window for a random graph with a given degree sequence. In Proceedings of SODA 2010, 2010. A more exhaustive version of the paper appears at arXiv:0907.4211.
[22] Marcelo Hilário, Vladas Sidoravicius, and Augusto Teixeira. Cylinders percolation in three dimensions. Preprint, 2012.
[23] Svante Janson and Malwina J. Luczak. A new approach to the giant component problem. Random Structures Algorithms, 34(2):197–216, 2009.
[24] Olav Kallenberg. Foundations of modern probability. Probability and its Applications (New York). Springer-Verlag, New York, second edition, 2002.
[25] Gregory F. Lawler. A self-avoiding random walk. Duke Math. J., 47(3):655–693, 1980.
[26] Gregory F. Lawler. Intersections of random walks. Probability and its Applications. Birkhäuser Boston Inc., Boston, MA, 1991.
[27] Gregory F. Lawler and Vlada Limic. Random walk: a modern introduction, volume 123 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2010.
[28] David A. Levin, Yuval Peres, and Elizabeth L. Wilmer. Markov chains and mixing times. American Mathematical Society, Providence, RI, 2009. With a chapter by James G. Propp and David B. Wilson.
[29] Colin McDiarmid. Concentration. In Probabilistic methods for algorithmic discrete mathematics, volume 16 of Algorithms Combin., pages 195–248. Springer, Berlin, 1998.
[30] Michael Molloy and Bruce Reed. A critical point for random graphs with a given degree sequence. In Proceedings of the Sixth International Seminar on Random Graphs and Probabilistic Methods in Combinatorics and Computer Science, Random Graphs '93 (Poznań, 1993), volume 6, pages 161–179, 1995.
[31] Eviatar B. Procaccia and Johan Tykesson. Geometry of the random interlacement. Electron. Commun. Probab., 16:528–544, 2011.
[32] Sidney I. Resnick. Extreme values, regular variation and point processes. Springer Series in Operations Research and Financial Engineering. Springer, New York, 2008. Reprint of the 1987 original.
[33] B. Ráth and A. Sapozhnikov. On the transience of random interlacements. Electron. Commun. Probab., 16:379–391, 2011.
[34] B. Ráth and A. Sapozhnikov. Connectivity properties of random interlacement and intersection of random walks. ALEA Lat. Am. J. Probab. Math. Stat., 9:67–83, 2012.
[35] Balázs Ráth and Artem Sapozhnikov. The effect of small quenched noise on connectivity properties of random interlacements. arXiv:1109.5086, 2011.
[36] Vladas Sidoravicius and Alain-Sol Sznitman. Percolation for the vacant set of random interlacements. Comm. Pure Appl. Math., 62(6):831–858, 2009.
[37] Vladas Sidoravicius and Alain-Sol Sznitman. Connectivity bounds for
the vacant set of random interlacements. Ann. Inst. Henri Poincare
Probab. Stat., 46(4):976990, 2010.
[38] Frank Spitzer. Principles of random walks. Springer-Verlag, New York,
second edition, 1976. Graduate Texts in Mathematics, Vol. 34.
[39] Alain-Sol Sznitman. How universal are asymptotics of disconnection
times in discrete cylinders? Ann. Probab., 36(1):1–53, 2008.
[40] Alain-Sol Sznitman. On the domination of random walk on a discrete
cylinder by random interlacements. Electron. J. Probab., 14:no. 56,
1670–1704, 2009.
[41] Alain-Sol Sznitman. Random walks on discrete cylinders and ran-
dom interlacements. Probab. Theory Related Fields, 145(1-2):143–174,
2009.
[42] Alain-Sol Sznitman. Upper bound on the disconnection time of dis-
crete cylinders and random interlacements. Ann. Probab., 37(5):1715–
1746, 2009.
[43] Alain-Sol Sznitman. Vacant set of random interlacements and perco-
lation. Ann. of Math. (2), 171(3):2039–2087, 2010.
[44] Alain-Sol Sznitman. Decoupling inequalities and interlacement per-
colation on G × Z. Invent. Math., 187:645–706, 2012. DOI:
10.1007/s00222-011-0340-9.
[45] Martin Tassy. Random interlacements on Galton-Watson trees. Elec-
tron. Commun. Probab., 15:562–571, 2010.
[46] Augusto Teixeira. Interlacement percolation on transient weighted
graphs. Electron. J. Probab., 14:no. 54, 1604–1628, 2009.
[47] Augusto Teixeira. On the size of a finite vacant cluster of random
interlacements with small intensity. Probab. Theory Related Fields,
150(3-4):529–574, 2011.
[48] Augusto Teixeira and Johan Tykesson. Random interlacements and
amenability. To appear in Ann. Appl. Probab., 2011.
[49] Augusto Teixeira and David Windisch. On the fragmentation of a
torus by random walk. Comm. Pure Appl. Math., 64(12):1599–1646,
2011.
[50] Johan Tykesson and David Windisch. Percolation in the vacant set of
Poisson cylinders. To appear in Probab. Theory Related Fields, 2010.
[51] David Windisch. Logarithmic components of the vacant set for random
walk on a discrete torus. Electron. J. Probab., 13:no. 28, 880–897,
2008.
[52] David Windisch. Random walk on a discrete torus and random inter-
lacements. Electron. Commun. Probab., 13:140–150, 2008.
[53] David Windisch. Random walks on discrete cylinders with large bases
and random interlacements. Ann. Probab., 38(2):841–895, 2010.
[54] Wolfgang Woess. Random walks on infinite graphs and groups, volume
138 of Cambridge Tracts in Mathematics. Cambridge University Press,
Cambridge, 2000.
[55] N. C. Wormald. Models of random regular graphs. In Surveys in
combinatorics, 1999 (Canterbury), volume 267 of London Math. Soc.
Lecture Note Ser., pages 239–298. Cambridge Univ. Press, Cambridge,
1999.
Jiří Černý
Faculty of Mathematics, University of Vienna
Nordbergstrasse 15
1090 Vienna, Austria
[email protected]
http://www.mat.univie.ac.at/cerny/
Augusto Quadros Teixeira
Instituto Nacional de Matemática Pura e Aplicada (IMPA)
Estrada Dona Castorina, 110
22460-320 - Rio de Janeiro, Brazil
[email protected]
http://w3.impa.br/augusto/