Discrete Time
Joseph C. Watkins
May 5, 2007
Contents
1 Basic Concepts for Stochastic Processes
1.1 Conditional Expectation
1.2 Stochastic Processes, Filtrations and Stopping Times
1.3 Coupling
2 Random Walk
2.1 Recurrence and Transience
2.2 The Role of Dimension
2.3 Simple Random Walks
3 Renewal Sequences
3.1 Waiting Time Distributions and Potential Sequences
3.2 Renewal Theorem
3.3 Applications to Random Walks
4 Martingales
4.1 Examples
4.2 Doob Decomposition
4.3 Optional Sampling Theorem
4.4 Inequalities and Convergence
4.5 Backward Martingales
5 Markov Chains
5.1 Definition and Basic Properties
5.2 Examples
5.3 Extensions of the Markov Property
5.4 Classification of States
5.5 Recurrent Chains
5.5.1 Stationary Measures
5.5.2 Asymptotic Behavior
5.5.3 Rates of Convergence to the Stationary Distribution
5.6 Transient Chains
5.6.1 Asymptotic Behavior
6 Stationary Processes
6.1 Definitions and Examples
6.2 Birkhoff's Ergodic Theorem
6.3 Ergodicity and Mixing
6.4 Entropy
1 Basic Concepts for Stochastic Processes

1.1 Conditional Expectation
We now give the definition. Check that E[Y|G] as defined in (1.1) satisfies the two conditions below.
Definition 1.1. Let Y be an integrable random variable on (Ω, F, P ) and let G be a sub-σ-algebra of F.
The conditional expectation of Y given G, denoted E[Y |G] is the a.s. unique random variable satisfying the
following two conditions.
1. E[Y |G] is G-measurable.
2. E[E[Y |G]; A] = E[Y ; A] for any A ∈ G.
To see that E[Y|G] as defined in (1.1) follows from these two conditions, note that by condition 1, E[Y|G] must be constant on each of the Ci. By condition 2, for ω ∈ Ci,
E[Y|G](ω)P(Ci) = E[E[Y|G]; Ci] = E[Y; Ci], or E[Y|G](ω) = E[Y; Ci]/P(Ci) = E[Y|Ci].
For the general case, the definition states that E[Y |G] is essentially the only random variable that uses
the information provided by G and gives the same averages as Y on events that are in G. The existence and
uniqueness is provided by the Radon-Nikodym theorem. For Y positive, define the measure
ν(A) = E[Y ; A] for A ∈ G.
Then ν is a measure defined on G that is absolutely continuous with respect to the underlying probability
P restricted to G. Set E[Y |G] equal to the Radon-Nikodym derivative dν/dP |G .
The general case follows by taking positive and negative parts of Y .
For B ∈ F, the conditional probability of B given G is defined to be P (B|G) = E[IB |G].
If G = σ(X), then we usually write E[Y |G] = E[Y |X]. For these circumstances, we have the following
theorem which can be proved using the standard machine.
Theorem 1.5. Let X be a random variable. Then Z is a measurable function on (Ω, σ(X)) if and only if
there exists a measurable function h on the range of X so that Z = h(X).
This shows that the definition of conditional expectation with respect to a σ-algebra extends the definition of conditional expectation with respect to a random variable.
We now summarize the properties of conditional expectation.
Theorem 1.6. Let Y, Y1 , Y2 , · · · have finite absolute mean on (Ω, F, P ) and let a1 , a2 ∈ R. In addition, let
G and H be σ-algebras contained in F. Then
1. If Z is any version of E[Y |G], then EZ = EY . (E[E[Y |G]] = EY ).
2. If Y is G measurable, then E[Y |G] = Y , a.s.
3. (linearity) E[a1 Y1 + a2 Y2 |G] = a1 E[Y1 |G] + a2 E[Y2 |G], a.s.
4. (positivity) If Y ≥ 0, then E[Y |G] ≥ 0, a.s.
5. (conditional monotone convergence theorem) If Yn ↑ Y , then E[Yn |G] ↑ E[Y |G], a.s.
6. (conditional Fatous’s lemma) If Yn ≥ 0, then E[lim inf n→∞ Yn |G] ≤ lim inf n→∞ E[Yn |G].
7. (conditional dominated convergence theorem) If limn→∞ Yn (ω) = Y (ω), a.s., if |Yn (ω)| ≤ V (ω)
for all n, and if EV < ∞, then
lim_{n→∞} E[Yn|G] = E[Y|G], almost surely.
11. (conditional constants) If Z is G-measurable and E|ZY| < ∞, then
E[ZY|G] = Z E[Y|G], a.s.

Proof. 2. If Y is G-measurable, then Y itself satisfies the two conditions in the definition, yielding property 2.
4. Because Y ≥ 0 and {E[Y|G] ≤ −1/n} is G-measurable, we have that
0 ≤ E[Y; {E[Y|G] ≤ −1/n}] = E[E[Y|G]; {E[Y|G] ≤ −1/n}] ≤ −(1/n) P{E[Y|G] ≤ −1/n}
and
P{E[Y|G] ≤ −1/n} = 0.
Consequently,
P{E[Y|G] < 0} = P(∪_{n=1}^∞ {E[Y|G] ≤ −1/n}) = 0.
5. For Yn ↑ Y, set Zn = E[Yn|G]. By positivity, the Zn are almost surely nondecreasing, so
Z = lim_{n→∞} Zn
exists almost surely. Note that Z is G-measurable. Choose A ∈ G. By the monotone convergence theorem,
E[Z; A] = lim_{n→∞} E[Zn; A] = lim_{n→∞} E[Yn; A] = E[Y; A].
Thus Z = E[Y|G] a.s.
Because −|Y| ≤ Y ≤ |Y|, we have −E[|Y||G] ≤ E[Y|G] ≤ E[|Y||G] and |E[Y|G]| ≤ E[|Y||G]. Consequently,
E[·|G] : L¹(Ω, F, P) → L¹(Ω, G, P)
is a continuous linear mapping.
6. Repeat the proof of Fatou’s lemma replacing expectation with E[·|G] and use both positivity and the
conditional monotone convergence theorem.
7. Repeat the proof of the dominated convergence theorem from Fatou’s lemma again replacing expecta-
tion with conditional expectation.
8. Follow the proof of Jensen’s inequality.
9. Use the conditional Jensen's inequality on the convex function φ(y) = y^p, y ≥ 0:
||E[Y|G]||_p^p = E[|E[Y|G]|^p] ≤ E[E[|Y| |G]^p] ≤ E[E[|Y|^p|G]] = E[|Y|^p] = ||Y||_p^p.
11. Use the standard machine. The case Z an indicator function follows from the definition of conditional
expectation.
12. We need only consider the case Y ≥ 0, EY > 0. Let Z = E[Y|G] and consider the two probability measures
µ(A) = E[Y; A]/EY,   ν(A) = E[Z; A]/EY.
Note that EY = EZ and define
C = {A : µ(A) = ν(A)}.
If A = B ∩ C, B ∈ G, C ∈ H, then Y IB and IC are independent and
E[Z; B ∩ C] = E[ZIB; C] = E[ZIB]P(C) = E[Z; B]P(C) = E[Y; B]P(C).
Consequently,
D = {B ∩ C; B ∈ G, C ∈ H} ⊂ C.
Now, D is closed under pairwise intersection. Thus, by the Sierpinski Class Theorem, µ and ν agree on σ(D) = σ(G, H).
13. Take G to be the trivial σ-algebra {∅, Ω}. However, this property can be verified directly.
Theorem. For Y integrable, the collection {E[Y|G]; G a sub-σ-algebra of F} is uniformly integrable.
Remark 1.8. If E[Y²] < ∞, then by Jensen's inequality, E[Y|G]² ≤ E[Y²|G], so E[Y|G] ∈ L²(Ω, G, P).
In this case, we can realize the conditional expectation E[Y|G] as a Hilbert space projection. Note that L²(Ω, G, P) ⊂ L²(Ω, F, P). Let ΠG be the orthogonal projection operator onto the closed subspace L²(Ω, G, P). Thus ΠG Y is G-measurable and Y − ΠG Y is orthogonal to any element X ∈ L²(Ω, G, P). Consequently, E[(Y − ΠG Y) IA] = 0 for every A ∈ G, and ΠG Y satisfies the two conditions that define E[Y|G].
Example 1.10. 1. Let Sj be a binomial random variable, the number of heads in j flips of a biased coin
with the probability of heads equal to p. Thus, there exist independent 0-1 valued random variables
{Xj : j ≥ 1} so that Sn = X1 + · · · + Xn . Let k < n, then Sk = X1 + · · · + Xk and Sn − Sk =
Xk+1 + · · · + Xn are independent. Therefore,
E[Sn |Sk ] = E[(Sn − Sk ) + Sk |Sk ] = E[(Sn − Sk )|Sk ] + E[Sk |Sk ] = E[Sn − Sk ] + Sk = (n − k)p + Sk .
Now to find E[Sk |Sn ], note that
x = E[Sn |Sn = x] = E[X1 |Sn = x] + · · · + E[Xn |Sn = x].
Each of these summands has the same value, namely x/n. Therefore
E[Sk|Sn = x] = E[X1|Sn = x] + · · · + E[Xk|Sn = x] = (k/n) x
and
E[Sk|Sn] = (k/n) Sn.
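A minimal Monte Carlo sketch (not part of the notes; p, k, n and the sample size are illustrative choices) of the identity E[Sk|Sn] = (k/n)Sn for the binomial random walk above, comparing the empirical mean of Sk on {Sn = x} with (k/n)x:

import numpy as np

rng = np.random.default_rng(0)
p, k, n, trials = 0.3, 4, 10, 200_000
steps = rng.random((trials, n)) < p            # X_1, ..., X_n Bernoulli(p)
S_k = steps[:, :k].sum(axis=1)
S_n = steps.sum(axis=1)
for x in range(n + 1):
    mask = S_n == x
    if mask.sum() > 1000:                      # skip rare values of S_n
        print(x, S_k[mask].mean(), k * x / n)  # empirical vs. (k/n) x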
2. Let {Xn : n ≥ 1} be independent random variables having the same distribution. Let µ be their common mean and σ² their common variance. Let N be a non-negative integer-valued random variable independent of the Xn and define S = Σ_{n=1}^N Xn. Then
ES = E[E[S|N]] = E[Nµ] = µEN,
Var(S) = E[Var(S|N)] + Var(E[S|N]) = E[Nσ²] + Var(Nµ) = σ²EN + µ²Var(N).
Alternatively, for an N-valued random variable X, define the generating function
GX(z) = Ez^X = Σ_{x=0}^∞ z^x P{X = x}.
Then
GX(1) = 1,  G′X(1) = EX,  G″X(1) = E[X(X − 1)],  Var(X) = G″X(1) + G′X(1) − G′X(1)².
Now,
GS(z) = Ez^S = E[E[z^S|N]] = Σ_{n=0}^∞ E[z^S|N = n] P{N = n} = Σ_{n=0}^∞ E[z^{X1+···+Xn}|N = n] P{N = n}
= Σ_{n=0}^∞ E[z^{X1+···+Xn}] P{N = n} = Σ_{n=0}^∞ E[z^{X1}] × · · · × E[z^{Xn}] P{N = n}
= Σ_{n=0}^∞ GX(z)^n P{N = n} = GN(GX(z)).
Thus,
G′S(z) = G′N(GX(z)) G′X(z),  ES = G′S(1) = G′N(1) G′X(1) = µEN,
and
G″S(z) = G″N(GX(z)) G′X(z)² + G′N(GX(z)) G″X(z),
G″S(1) = G″N(1) G′X(1)² + G′N(1) G″X(1) = G″N(1) µ² + EN(σ² − µ + µ²).
Now, substitute to obtain the formula above.
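The two formulas can be checked numerically. A sketch, assuming (for illustration only) a Poisson(3) distribution for N and normal steps with µ = 2, σ² = 4:

import numpy as np

rng = np.random.default_rng(1)
trials = 100_000
N = rng.poisson(3.0, size=trials)              # EN = Var(N) = 3
mu, sigma2 = 2.0, 4.0
S = np.array([rng.normal(mu, np.sqrt(sigma2), size=m).sum() for m in N])
print(S.mean(), mu * N.mean())                         # ES = mu EN
print(S.var(), sigma2 * N.mean() + mu**2 * N.var())    # Var(S) formula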
3. The density with respect to Lebesgue measure for a bivariate standard normal is
f(X,Y)(x, y) = (1/(2π√(1 − ρ²))) exp(−(x² − 2ρxy + y²)/(2(1 − ρ²))).
To check that E[Y |X] = ρX, we must check that for any Borel set B
E[Y IB (X)] = E[ρXIB (X)] or E[(Y − ρX)IB (X)] = 0.
Complete the square to see that
E[(Y − ρX) IB(X)] = (1/(2π√(1 − ρ²))) ∫B (∫ (y − ρx) exp(−(y − ρx)²/(2(1 − ρ²))) dy) e^{−x²/2} dx
= (1/(2π√(1 − ρ²))) ∫B (∫ z exp(−z²/(2(1 − ρ²))) dz) e^{−x²/2} dx = 0,  z = y − ρx,
because the integrand for the z integration is an odd function. Now
Cov(X, Y ) = E[XY ] = E[E[XY |X]] = E[XE[Y |X]] = E[ρX 2 ] = ρ.
Definition 1.11. On a probability space (Ω, F, P ), let G1 , G2 , H be sub-σ-algebras of F. Then the σ-algebras
G1 and G2 are conditionally independent given H if
P (A1 ∩ A2 |H) = P (A1 |H)P (A2 |H),
for Ai ∈ Gi , i = 1, 2.
Exercise 1.12 (conditional Borel-Cantelli lemma). For G a sub-σ-algebra of F, let {An : n ≥ 1} be a
sequence of conditionally independent events given G. Show that for almost all ω ∈ Ω,
P{An i.o.|G}(ω) = 1 or 0
according as
Σ_{n=1}^∞ P(An|G)(ω) = ∞ or Σ_{n=1}^∞ P(An|G)(ω) < ∞.
Consequently,
P{An i.o.} = P{ω : Σ_{n=1}^∞ P(An|G)(ω) = ∞}.
1.2 Stochastic Processes, Filtrations and Stopping Times

A stochastic process X with index set Λ and state space S, defined on a probability space (Ω, F, P), is a function
X : Λ × Ω → S
such that X(λ, ·) is a random variable for every λ ∈ Λ.
Typically, for the processes we study, Λ will be the natural numbers or [0, ∞). Occasionally, Λ will be the integers or the real numbers. In the case that Λ is a subset of a multi-dimensional vector space, we often call X a random field.
The distribution of a process X is generally stated via its finite dimensional distributions.
If this collection of probability measures satisfies the consistency condition of the Daniell-Kolmogorov extension theorem, then they determine the distribution of the process X on the space S^Λ.
We will consider the case Λ = N.
Exercise 1.17. Let X be Fn-adapted and let τ be an Fn-stopping time. Define the stopped process
Xnτ = Xmin{τ,n}.
For B ∈ B, show that {Xnτ ∈ B} ∈ Fn; that is, X^τ is Fn-adapted.
The following exercise explains the intuitive idea that Fτ gives the information known to an observer up to time τ.
2. Let A ∈ Fτ. Then, because {σ ≤ n} ⊂ {τ ≤ n},
A ∩ {σ ≤ n} = (A ∩ {τ ≤ n}) ∩ {σ ≤ n} ∈ Fn.
Hence, A ∈ Fσ.
3. Let B ∈ B; then
{Xτ ∈ B} ∩ {τ ≤ n} = ∪_{k=1}^n ({Xk ∈ B} ∩ {τ = k}) ∈ Fn.
Exercise 1.22. Let X be an Fn adapted process. For n = 0, 1, · · · , define Fnτ = Fmin{τ,n} . Then
1. {Fnτ : n ≥ 0} is a filtration.
1.3 Coupling
Definition 1.23. Let ν and ν̃ be probability distributions (of a random variable or of a stochastic process). Then a coupling of ν and ν̃ is a pair of random variables X and X̃ defined on a common probability space (Ω, F, P) such that the marginal distribution of X is ν and the marginal distribution of X̃ is ν̃.
As we shall soon see, couplings are useful in comparing two distributions: construct a coupling and compare the corresponding random variables.
Definition 1.24. The total variation distance between two probability measures ν and ν̃ on a measurable
space (S, B) is
||ν − ν̃||T V = sup{|ν(B) − ν̃(B)|; B ∈ B}. (1.2)
Example 1.25. Let ν be a Ber(p) and let ν̃ be a Ber(p̃) distribution, p̃ ≥ p. If X and X̃ are a coupling of independent random variables, then
P{X ≠ X̃} = p(1 − p̃) + p̃(1 − p).
Now couple ν and ν̃ as follows. Let U be a U(0, 1) random variable. Set X = 1 if and only if U ≤ p and X̃ = 1 if and only if U ≤ p̃. Then
P{X ≠ X̃} = P{p < U ≤ p̃} = p̃ − p.

Proposition 1.26. For probability measures ν and ν̃ on a countable state space S,
||ν − ν̃||TV = (1/2) Σ_{x∈S} |ν{x} − ν̃{x}|. (1.3)
Figure 1: Distribution of ν and ν̃. The event A = {x; ν{x} > ν̃{x}}. Regions I and II have the same area,
namely ||ν − ν̃||T V = ν(A) − ν̃(A) = ν̃(Ac ) − ν(Ac ). Thus, the total area of regions I and III and of regions
II and III is 1.
Proof. (This is one half the area of region I plus region II in Figure 1.) Let A = {x; ν{x} > ν̃{x}} and let
B ∈ B. Then
ν(B) − ν̃(B) = Σ_{x∈B} (ν{x} − ν̃{x}) ≤ Σ_{x∈A∩B} (ν{x} − ν̃{x}) = ν(A ∩ B) − ν̃(A ∩ B)
because the terms eliminated to create the second sum are negative. Similarly,
ν(A ∩ B) − ν̃(A ∩ B) = Σ_{x∈A∩B} (ν{x} − ν̃{x}) ≤ Σ_{x∈A} (ν{x} − ν̃{x}) = ν(A) − ν̃(A)
because the additional terms in the second sum are positive. Repeat this reasoning to obtain
ν̃(B) − ν(B) ≤ ν̃(Ac ) − ν(Ac ).
However, because ν and ν̃ are probability measures,
ν̃(Ac ) − ν(Ac ) = (1 − ν̃(A)) − (1 − ν(A)) = ν(A) − ν̃(A).
Consequently, the supremum in (1.2) is obtained in choosing either the event A or the event Ac and thus is
equal to half the sum of the two as given in (1.3).
Exercise 1.27. Let ||f||∞ = sup_{x∈S} |f(x)|. Show that
||ν − ν̃||TV = (1/2) sup{Σ_{x∈S} f(x)(ν{x} − ν̃{x}); ||f||∞ = 1}.
Theorem 1.28. ||ν − ν̃||TV = inf{P{X ≠ X̃}; (X, X̃) a coupling of ν and ν̃}.

Proof. Let (X, X̃) be a coupling and choose B ∈ B. Then
P{X ∈ B} = P{X ∈ B, X̃ ∈ B} + P{X ∈ B, X̃ ∈ B^c} ≤ P{X̃ ∈ B} + P{X ∈ B, X̃ ∈ B^c}.
Thus,
P{X ≠ X̃} ≥ P{X ∈ B, X̃ ∈ B^c} ≥ P{X ∈ B} − P{X̃ ∈ B} = ν(B) − ν̃(B).
Thus, P{X ≠ X̃} is at least as big as the total variation distance.
Next, we construct a coupling that achieves the infimum. First, set
p = Σ_{x∈S} min{ν{x}, ν̃{x}} = Σ_{x∈S} (1/2)(ν{x} + ν̃{x} − |ν{x} − ν̃{x}|) = 1 − ||ν − ν̃||TV.
Flip a coin that lands heads with probability p. If the coin lands heads, choose X = X̃ according to the distribution min{ν{x}, ν̃{x}}/p. If the coin lands tails, choose X according to µ{x} = (ν{x} − ν̃{x}) I{ν{x}>ν̃{x}}/(1 − p) and, independently, X̃ according to µ̃{x} = (ν̃{x} − ν{x}) I{ν̃{x}>ν{x}}/(1 − p). Then
P{X = x} = P{X = x|coin lands heads} P{coin lands heads} + P{X = x|coin lands tails} P{coin lands tails}
= (min{ν{x}, ν̃{x}}/p) p + ((ν{x} − ν̃{x})/(1 − p)) I{ν{x}>ν̃{x}} (1 − p) = ν{x},
and similarly P{X̃ = x} = ν̃{x}.
P{X = X̃} = P{X = X̃|coin lands heads} P{coin lands heads} + P{X = X̃|coin lands tails} P{coin lands tails}
= p + Σ_{x∈S} P{X = x, X̃ = x|coin lands tails}(1 − p)
= p + Σ_{x∈S} P{X = x|coin lands tails} P{X̃ = x|coin lands tails}(1 − p)
= p + Σ_{x∈S} µ{x} µ̃{x}(1 − p) = p.
Note that each term in the sum is zero: either the first factor, the second factor, or both are zero according to the sign of ν{x} − ν̃{x}. Thus,
P {X 6= X̃} = 1 − p = ||ν − ν̃||T V .
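The construction above translates directly into a sampler. A sketch for finite distributions given as dictionaries (the state space and weights below are assumptions for illustration):

import numpy as np

def maximal_coupling(nu, nut, size, rng):
    states = sorted(set(nu) | set(nut))
    a = np.array([nu.get(x, 0.0) for x in states])
    b = np.array([nut.get(x, 0.0) for x in states])
    m = np.minimum(a, b)
    p = m.sum()                                # p = 1 - ||nu - nut||_TV
    heads = rng.random(size) < p               # the coin flip
    common = rng.choice(len(states), size=size, p=m / p)
    ia = rng.choice(len(states), size=size, p=np.maximum(a - b, 0) / (1 - p))
    ib = rng.choice(len(states), size=size, p=np.maximum(b - a, 0) / (1 - p))
    X = np.where(heads, common, ia)
    Xt = np.where(heads, common, ib)
    return np.asarray(states)[X], np.asarray(states)[Xt], p

rng = np.random.default_rng(2)
X, Xt, p = maximal_coupling({0: .5, 1: .3, 2: .2}, {0: .2, 1: .3, 2: .5}, 200_000, rng)
print((X != Xt).mean(), 1 - p)                 # both approximate the TV distance (0.3)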
We will typically work to find successful couplings of stochastic processes. They are defined as follows:
Definition 1.29. A coupling (P, X, X̃) is called a successful coupling of two adapted stochastic processes, if
the stopping time
τ = inf{t ≥ 0; Xt = X̃t }
is almost surely finite and if Xt = X̃t for all t ≥ τ . Then, we say that τ is the coupling time and that X
and X̃ are coupled at time τ .
If νt and ν̃t are the distributions of the processes at time t, then (P, Xt, X̃t) is a coupling of νt and ν̃t and, for a successful coupling,
||νt − ν̃t||TV ≤ P{Xt ≠ X̃t} ≤ P{τ > t}.
Thus, the earlier the coupling time, the better the estimate of the total variation distance of the time t distributions.
2 Random Walk
Definition 2.1. Let {Xk : k ≥ 1} be a sequence of independent and identically distributed R^d-valued random variables and define
S0 = 0,  Sn = Σ_{k=1}^n Xk.
Then S is a random walk with steps X1, X2, . . . .
Theorem 2.2. Let X1, X2, . . . be independent with distribution ν and let τ be a stopping time. Then, conditioned on {τ < ∞}, {Xτ+n, n ≥ 1} is independent of FτX and has the same distribution as the original sequence.
Proof. Choose A ∈ FτX and Bj ∈ B(R^d). Because the finite dimensional distributions determine the process, it suffices to show that
P(A ∩ {τ < ∞, Xτ+j ∈ Bj, 1 ≤ j ≤ k}) = P(A ∩ {τ < ∞}) Π_{1≤j≤k} ν(Bj).
To do this, note that A ∩ {τ = n} ∈ FnX and thus is independent of Xn+1, Xn+2, . . .. Therefore,
P(A ∩ {τ = n, Xn+j ∈ Bj, 1 ≤ j ≤ k}) = P(A ∩ {τ = n}) Π_{1≤j≤k} ν(Bj).
Now, sum over n, noting that the events are disjoint for differing values of n.
Exercise 2.3. 1. Assume that the steps of a real-valued random walk S have finite mean µ and let τ be a stopping time with finite mean. Show that
ESτ = µEτ.
2. In part 1, assume that µ = 0. For c > 0 and τ = min{n ≥ 0; Sn ≥ c}, show that Eτ is infinite.
Definition 2.4. Let S be a random walk. A point x ∈ Rd is called a recurrent state if for every neighborhood
U of x,
P {Sn ∈ U i. o.} = 1.
Definition 2.5. If, for some real-valued random variable X and some ℓ > 0,
Σ_{n=−∞}^∞ P{X = nℓ} = 1,
then X is said to be distributed on the lattice Lℓ = {nℓ : n ∈ Z}, provided that the equation holds for no larger ℓ. If this holds for no ℓ > 0, then X is called non-lattice and is said to be distributed on L0 = R.
Exercise 2.6. Let the steps of a random walk be distributed on a lattice L` , ` > 0. Let τn be the n-th return
of the walk to zero. Then
P {τn < ∞} = P {τ1 < ∞}n .
Theorem 2.7. If the step distribution is on L` , then either every state in L` is recurrent or no states are
recurrent.
Proof. Let G be the set of recurrent points.
Claim I. The set G is closed.
Choose a sequence {xk : k ≥ 0} ⊂ G with limit x and choose U a neighborhood of x. Then, there exists
an integer K so that xk ∈ U whenever k ≥ K and hence U is a neighborhood of xk . Because xk ∈ G,
P {Sn ∈ U i. o.} = 1 and x ∈ G.
Claim II. If x ∈ G and y ∈ C, then x − y ∈ G.
Call y a possible state (y ∈ C) if for every neighborhood U of y, there exists k so that P{Sk ∈ U} > 0.
Fix ε > 0. If Sk is within ε of y and Sk+n is within ε of x, then Sk+n − Sk is within 2ε of x − y. Therefore, for each k,
{|Sk+n − x| < ε, |Sk − y| < ε} ⊂ {|(Sk+n − Sk) − (x − y)| < 2ε, |Sk − y| < ε}
and
{|Sk+n − x| < ε f. o., |Sk − y| < ε} ⊃ {|(Sk+n − Sk) − (x − y)| < 2ε f. o., |Sk − y| < ε}.
Use the fact that Sk and Sk+n − Sk are independent and that the distributions of Sn and Sk+n − Sk are equal. Because x ∈ G, 0 = P{|Sn − x| < ε f. o.}. Recalling that P{|Sk − y| < ε} > 0, this forces
P{|Sn − (x − y)| < 2ε f. o.} = 0
and x − y ∈ G.
Claim III. G is a group.
If x ∈ G and y ∈ G, then y ∈ C and x − y ∈ G.
Claim IV. If G ≠ ∅ and ℓ > 0, then G = Lℓ.
Because the walk takes values in Lℓ and G is closed, clearly G ⊂ Lℓ.
Because G ≠ ∅ and G is a group, 0 ∈ G. Set SX = {x : P{X1 = x} > 0}. If x ∈ SX, then x ∈ C, so −x = 0 − x ∈ G and likewise x ∈ G. By the definition of ℓ, all integer multiples of ℓ can be written as sums from x, −x, x ∈ SX, and hence Lℓ ⊂ G.
Claim V. If ℓ = 0 and G ≠ ∅, then G = L0.
Here 0 = inf{y > 0 : y ∈ G}; otherwise, we are in the situation of a lattice valued random walk.
Let x ∈ R and let U be a neighborhood of x. Now, choose ε > 0 so that (x − ε, x + ε) ⊂ U. By the definition of infimum, there exists ℓ̃ < ε so that ℓ̃ ∈ G. Thus, Lℓ̃ ⊂ G and Lℓ̃ ∩ U ≠ ∅. Consequently,
P{Sn ∈ U i. o.} = 1,
and x ∈ G.
Corollary 2.8. Let S be a random walk in Rd . Then, either the set of recurrent states forms a closed subgroup
of Rd or no states are recurrent. This subgroup is the closed subgroup generated by the step distribution.
Proof. For claims I, II, and III, let | · | stand for some norm in Rd and replace open intervals by open balls
in that norm.
Theorem 2.9. Let S be a random walk in R^d.
1. If there exists an open set U containing 0 so that
Σ_{n=0}^∞ P{Sn ∈ U} < ∞,
then no states are recurrent.
2. If there exists a bounded open set U containing 0 so that
Σ_{n=0}^∞ P{Sn ∈ U} = ∞,
then 0 is recurrent.
Proof. If Σ_{n=0}^∞ P{Sn ∈ U} < ∞, then by the Borel-Cantelli lemma, P{Sn ∈ U i. o.} = 0. Thus 0 is not recurrent. Because the set of recurrent states is a group, no states are recurrent.
For the second half, let ε > 0 and choose a finite set F so that
U ⊂ ∪_{x∈F} B(x, ε).
Then
{Sn ∈ U} ⊂ ∪_{x∈F} {Sn ∈ B(x, ε)},  P{Sn ∈ U} ≤ Σ_{x∈F} P{Sn ∈ B(x, ε)}.
Consequently, for some x,
Σ_{n=0}^∞ P{Sn ∈ B(x, ε)} = ∞.
Define the pairwise disjoint sets Ak, on which the last visit to B(x, ε) occurs at time k:
Ak = {Sn ∉ B(x, ε), n = 1, 2, . . .} for k = 0,
Ak = {Sk ∈ B(x, ε), Sn+k ∉ B(x, ε), n = 1, 2, . . .} for k > 0.
Then
{Sn ∈ B(x, ε) f. o.} = ∪_{k=0}^∞ Ak, and P{Sn ∈ B(x, ε) f. o.} = Σ_{k=0}^∞ P(Ak).
For k > 0, if Sk ∈ B(x, ε) and |Sn+k − Sk| > 2ε, then Sn+k ∉ B(x, ε). Consequently,
P(Ak) ≥ P{Sk ∈ B(x, ε), |Sn+k − Sk| > 2ε, n = 1, 2, . . .}
= P{Sk ∈ B(x, ε)} P{|Sn+k − Sk| > 2ε, n = 1, 2, . . .}
= P{Sk ∈ B(x, ε)} P{|Sn| > 2ε, n = 1, 2, . . .}.
Summing over k, we find that
P{Sn ∈ B(x, ε) f. o.} ≥ (Σ_{k=0}^∞ P{Sk ∈ B(x, ε)}) P{|Sn| > 2ε, n = 1, 2, . . .}.
The number on the left is at most one, the sum is infinite, and consequently, for any ε > 0,
P{|Sn| > 2ε, n = 1, 2, . . .} = 0.
Now, define the sets Ak as before with x = 0. By the argument above, P(A0) = 0. For k ≥ 1, define a strictly increasing sequence {εi : i ≥ 1} with limit ε. By the continuity property of a probability, we have
P(Ak) = lim_{i→∞} P{Sk ∈ B(0, εi), Sn+k ∉ B(0, ε), n = 1, 2, . . .}
≤ lim_{i→∞} P{Sk ∈ B(0, εi)} P{|Sn| > ε − εi, n = 1, 2, . . .} = 0.
Thus, P{Sn ∈ B(0, ε) f. o.} = 0; that is, P{Sn ∈ B(0, ε) i. o.} = 1. Because every neighborhood of 0 contains a set of the form B(0, ε) for some ε > 0, we have that 0 is recurrent.
Lemma 2.10. Let S be a random walk in R^d, let ε > 0 and let m be a positive integer. Then
Σ_{n=0}^∞ P{|Sn| < mε} ≤ (2m)^d Σ_{n=0}^∞ P{|Sn| < ε}.

Proof. Cover {s; |s| < mε} by (2m)^d cubes α + [0, ε)^d and set τα = min{n ≥ 0; Sn ∈ α + [0, ε)^d}. Consequently,
Σ_{n=0}^∞ P{Sn ∈ α + [0, ε)^d} = Σ_{k=0}^∞ Σ_{n=0}^∞ P{Sn ∈ α + [0, ε)^d, τα = k}
≤ Σ_{k=0}^∞ Σ_{n=k}^∞ P{|Sn − Sk| < ε, τα = k}
= Σ_{k=0}^∞ Σ_{n=k}^∞ P{|Sn − Sk| < ε} P{τα = k}
≤ (Σ_{n=0}^∞ P{|Sn| < ε}) Σ_{k=0}^∞ P{τα = k} ≤ Σ_{n=0}^∞ P{|Sn| < ε}
because the sum on k is at most 1. The proof ends upon noting that the sum on α consists of (2m)^d terms.
Theorem 2.11 (Chung-Fuchs). Let d = 1. If the weak law of large numbers
(1/n) Sn →P 0
holds, then Sn is recurrent.
Proof. Let A > 0. Using the lemma above with d = 1 and ε = 1, we have that
Σ_{n=0}^∞ P{|Sn| < 1} ≥ (1/2m) Σ_{n=0}^∞ P{|Sn| < m} ≥ (1/2m) Σ_{n=0}^{Am} P{|Sn| < n/A}.
Note that, for any A, we have by the weak law of large numbers,
lim_{n→∞} P{|Sn| < n/A} = 1.
If a sequence has a limit, then its Cesàro averages have the same limit and therefore,
lim_{m→∞} (1/2m) Σ_{n=0}^{Am} P{|Sn| < n/A} = A/2.
Because A is arbitrary, the first sum above is infinite and the walk is recurrent.
Theorem 2.12. Let S be a random walk in R² and assume that the central limit theorem holds in that
lim_{n→∞} P{(1/√n) Sn ∈ B} = ∫_B n(x) dx
for a bivariate normal density n(x). Then S is recurrent.
2.3 Simple Random Walks

Let τ0 = min{n > 0; Sn = 0} denote the first return time to 0, and set pn = P{Sn = 0} and fn = P{τ0 = n}, with generating functions Gp and Gf. Note that τ0 may be infinite. In this circumstance, P{τ0 < ∞} = Gf(1) < 1.
Decomposing {Sn = 0} according to the first return time gives pn = Σ_{k=1}^n fk pn−k for n ≥ 1. Now multiply both sides by z^n, sum on n, and use the property of multiplication of power series to obtain Gp(z) = 1 + Gf(z) Gp(z).
Proposition 2.14. For a simple random walk, with the steps Xk taking the value +1 with probability p and −1 with probability q = 1 − p:
1. Gp(z) = (1 − 4pqz²)^{−1/2}.
2. Gf(z) = 1 − (1 − 4pqz²)^{1/2}.
Proof. 1. Note that pn = 0 if n is odd. For n even, we must have n/2 steps to the left and n/2 steps to the right. Thus,
pn = C(n, n/2) (pq)^{n/2}.
Now, set x = pqz² and use the binomial power series
Σ_{m=0}^∞ C(2m, m) x^m = (1 − 4x)^{−1/2}.
2. Apply the formula in 1 with the identity Gp = 1 + Gf Gp above.
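Setting z = 1 in part 2 gives P{τ0 < ∞} = Gf(1) = 1 − √(1 − 4pq) = 1 − |2p − 1|. A quick simulation sketch (parameters and the truncation horizon are illustrative; the walk is only followed finitely far):

import numpy as np

rng = np.random.default_rng(3)
p, horizon, trials = 0.7, 2000, 5000
steps = np.where(rng.random((trials, horizon)) < p, 1, -1)
S = steps.cumsum(axis=1)
returned = (S == 0).any(axis=1)               # revisited 0 by time `horizon`
print(returned.mean(), 1 - abs(2 * p - 1))    # simulated vs. exact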
Exercise 2.18. Let S be a random walk in Rd having the property that its components form d independent
simple symmetric random walks on Z. Show that S is recurrent if d = 1 or 2, but is not recurrent if d ≥ 3.
Exercise 2.19. Let S be the simple symmetric random walk in R³. Show that
P{S2n = 0} = (1/6^{2n}) C(2n, n) Σ_{j=0}^n Σ_{k=0}^{n−j} C(n; j, k, n−j−k)²,  n = 0, 1, . . . ,
where C(n; j, k, n−j−k) denotes the multinomial coefficient.
Exercise 2.20. The simple symmetric random walk in R³ is not recurrent. Hint: For a nonnegative sequence a1, . . . , am,
Σ_{ℓ=1}^m aℓ² ≤ (max_{1≤ℓ≤m} aℓ) Σ_{ℓ=1}^m aℓ.
For S, a simple symmetric random walk in R^d, d > 3, let S̃ be the projection onto the first three coordinates. Define the stopping times
τ0 = 0, τk = min{n > τk−1; S̃n ≠ S̃τk−1}.
Then {S̃τk : k ≥ 0} is a simple symmetric random walk in R³ and consequently returns to zero infinitely often with probability 0. Consequently, S is not recurrent.
3 Renewal Sequences
Definition 3.1. 1. A random {0, 1}-valued sequence X with X0 = 1 is a renewal sequence if
P{Xn = εn, n = 1, . . . , r + s} = P{Xn = εn, n = 1, . . . , r} P{Xn−r = εn, n = r + 1, . . . , r + s}
for all positive integers r and s and sequences (ε1, . . . , εr+s) ∈ {0, 1}^{r+s} such that εr = 1.
2. The random set ΣX = {n; Xn = 1} is called the regenerative set for the renewal sequence X.
3. The stopping times
T0 = 0, Tm = inf{n > Tm−1; Xn = 1}
are called the renewal times, and their differences
Wj = Tj − Tj−1, j = 1, 2, . . . ,
are called the sojourn (or waiting) times.
A renewal sequence is a random sequence whose distribution beginning at a renewal time is the same as
the distribution of the original renewal sequence. We can characterize the renewal sequence in any one of
four equivalent ways, through
1. the renewal sequence X,
2. the regenerative set ΣX ,
3. the sequence of renewal times T , or
4. the sequence of sojourn times W .
Check that you can move easily from one description to another.
Definition 3.2. 1. For a sequence {an : n ≥ 0}, define the generating function
Ga(z) = Σ_{k=0}^∞ ak z^k.
2. For a pair of sequences {an : n ≥ 0} and {bn : n ≥ 0}, define the convolution sequence
(a ∗ b)n = Σ_{k=0}^n ak bn−k.
By the uniqueness theorem for power series, a sequence is uniquely determined by its generating function.
Convolution is associative and commutative. We shall write a∗2 = a ∗ a and a∗m for the m-fold convolution.
Exercise 3.4. The sequence {Wj : j ≥ 1} is a sequence of independent and identically distributed random variables.
Thus, T is a random walk on Z+ having strictly positive integer steps. If we let R be the waiting time distribution for T, then
R∗m(K) = P{Tm ∈ K}.
Exercise 3.5. Let S be a random walk on R, then the following are regenerative sets.
1. {n; Sn = 0}
2. {n; Sn > Sk for 0 ≤ k < n} (strict ascending ladder times)
3. {n; Sn < Sk for 0 ≤ k < n} (strict descending ladder times)
4. {n; Sn ≥ Sk for 0 ≤ k < n} (weak ascending ladder times)
5. {n; Sn ≤ Sk for 0 ≤ k < n} (weak descending ladder times)
If S is a random walk in Z, then the following are regenerative sets.
6. {Sn ; Sn > Sk for 0 ≤ k < n}
The coordinatewise product of two independent renewal sequences X and Y is again a renewal sequence. To see this, fix r, s and (ε1, . . . , εr+s) with εr = 1, and let A denote the set of pairs of {0, 1}-sequences (ε^X, ε^Y) with ε^X_n ε^Y_n = εn for n = 1, . . . , r + s. Then
P{XnYn = εn, n = 1, . . . , r + s}
= Σ_A P{Xn = ε^X_n, Yn = ε^Y_n, n = 1, . . . , r + s}
= Σ_A P{Xn = ε^X_n, n = 1, . . . , r + s} P{Yn = ε^Y_n, n = 1, . . . , r + s}
= Σ_A P{Xn = ε^X_n, n = 1, . . . , r} P{Xn−r = ε^X_n, n = r + 1, . . . , r + s}
  × P{Yn = ε^Y_n, n = 1, . . . , r} P{Yn−r = ε^Y_n, n = r + 1, . . . , r + s}
= Σ_A P{Xn = ε^X_n, Yn = ε^Y_n, n = 1, . . . , r} P{Xn−r = ε^X_n, Yn−r = ε^Y_n, n = r + 1, . . . , r + s}
= P{XnYn = εn, n = 1, . . . , r} P{Xn−rYn−r = εn, n = r + 1, . . . , r + s}.
Definition. For a renewal sequence X with renewal times {Tn; n ≥ 0}:
1. N(B) = Σ_{n=0}^∞ IB(Tn). The σ-finite random measure N gives the number of renewals during a time set B.
2. U(B) = EN(B) = Σ_{n=0}^∞ P{Tn ∈ B} = Σ_{n=0}^∞ R∗n(B) is the potential measure. Use the monotone convergence theorem to show that it is a measure.
3. The sequence {uk : k ≥ 0} defined by
uk = P{Xk = 1} = U{k}
is called the potential sequence. Note that
uk = Σ_{n=0}^∞ P{Tn = k}.
Exercise 3.8. 1. Show that
GU(z) = 1/(1 − GR(z)),  0 ≤ z < 1. (3.1)
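Identity (3.1) is equivalent to the convolution recursion u0 = 1, un = Σ_{k=1}^n R{k} un−k. A sketch computing the potential sequence from an assumed waiting time distribution:

def potential_sequence(R, n_max):
    # R: dict mapping a waiting time k >= 1 to its probability R{k}
    u = [1.0]
    for n in range(1, n_max + 1):
        u.append(sum(R.get(k, 0.0) * u[n - k] for k in range(1, n + 1)))
    return u

R = {1: 0.2, 2: 0.5, 3: 0.3}                  # mean waiting time mu = 2.1
u = potential_sequence(R, 30)
print(u[-1], 1 / 2.1)   # un approaches 1/mu, in line with the renewal theorem below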
Theorem 3.9. For a renewal process, let R denote the waiting time distribution, {un; n ≥ 0} the potential sequence, and N the renewal measure. For non-negative integers k and ℓ,
P{N[k + 1, k + ℓ] > 0} = Σ_{n=0}^k R[k + 1 − n, k + ℓ − n] un
= 1 − Σ_{n=0}^k R[k + ℓ + 1 − n, ∞] un = Σ_{n=k+1}^{k+ℓ} R[k + ℓ + 1 − n, ∞] un.
Proof. If we decompose the event according to the first renewal in [k + 1, k + ℓ], we obtain
P{N[k + 1, k + ℓ] > 0} = Σ_{j=0}^∞ P{Tj ≤ k, k + 1 ≤ Tj+1 ≤ k + ℓ}
= Σ_{j=0}^∞ Σ_{n=0}^k P{Tj = n, k + 1 ≤ Tj+1 ≤ k + ℓ}
= Σ_{j=0}^∞ Σ_{n=0}^k P{Tj = n, k + 1 − n ≤ Wj+1 ≤ k + ℓ − n}
= Σ_{j=0}^∞ Σ_{n=0}^k P{Tj = n} P{k + 1 − n ≤ Wj+1 ≤ k + ℓ − n}
= Σ_{n=0}^k R[k + 1 − n, k + ℓ − n] un.
Finally, if we decompose the event according to the last renewal in [k + 1, k + ℓ], we obtain
P{N[k + 1, k + ℓ] > 0} = Σ_{j=0}^∞ P{k + 1 ≤ Tj ≤ k + ℓ, Tj+1 > k + ℓ}
= Σ_{j=0}^∞ Σ_{n=k+1}^{k+ℓ} P{Tj = n, Tj+1 > k + ℓ} = Σ_{j=0}^∞ Σ_{n=k+1}^{k+ℓ} P{Tj = n, Wj+1 > k + ℓ − n}
= Σ_{j=0}^∞ Σ_{n=k+1}^{k+ℓ} P{Tj = n} P{Wj+1 > k + ℓ − n} = Σ_{n=k+1}^{k+ℓ} R[k + ℓ + 1 − n, ∞] un.
Exercise 3.11. Let X be a renewal sequence with potential measure U and waiting time distribution R. If U(Z+) < ∞, then N(Z+) is a geometric random variable with mean U(Z+). If U(Z+) = ∞, then N(Z+) = ∞ a.s. In either case,
U(Z+) = 1/R{∞}.
Definition 3.12. A renewal sequence and the corresponding regenerative set are transient if the correspond-
ing renewal measure is finite almost surely and they are recurrent otherwise.
Theorem 3.13 (Strong Law for Renewal Sequences). Given a renewal sequence X, let N denote the renewal measure, U the potential measure, and µ ∈ [1, ∞] the mean waiting time. Then,
lim_{n→∞} (1/n) U[0, n] = 1/µ,  lim_{n→∞} (1/n) N[0, n] = 1/µ a.s. (3.3)

Proof. Because N[0, n] ≤ n + 1, the first equality follows from the second equality and the bounded convergence theorem.
To establish the second equality, note that by the strong law of large numbers,
lim_{m→∞} Tm/m = lim_{m→∞} (1/m) Σ_{j=1}^m Wj = µ a.s.
Definition 3.14. Call a recurrent renewal sequence positive recurrent if the mean waiting time is finite and
null recurrent if the mean waiting time is infinite.
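A simulation sketch of the strong law (3.3), again with an assumed waiting time distribution (mean µ = 2.1):

import numpy as np

rng = np.random.default_rng(4)
waits = rng.choice([1, 2, 3], p=[0.2, 0.5, 0.3], size=100_000)
T = waits.cumsum()                            # renewal times T_1, T_2, ...
n = 100_000
print((T <= n).sum() / n, 1 / 2.1)            # N[0, n]/n vs. 1/mu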
Definition 3.15. Let X be a random sequence with Xn ∈ {0, 1} for all n and let τ = min{n ≥ 0; Xn = 1}. Then X is a delayed renewal sequence if either P{τ = ∞} = 1 or, conditioned on the event {τ < ∞},
{Xτ+n; n ≥ 0}
is a renewal sequence. The distribution of τ is called the delay distribution.
Exercise 3.17. Let X be a renewal sequence. Then a {0, 1}-valued sequence X̃ is a delayed renewal sequence with the same waiting time distribution as X if and only if
P{X̃n = εn, n = 0, . . . , r + s} = P{X̃n = εn, n = 0, . . . , r} P{Xn−r = εn, n = r + 1, . . . , r + s} (3.4)
whenever εr = 1.

Now let X be a renewal sequence and let Ỹ be an independent delayed renewal sequence having the same waiting time distribution. Define σ = min{k ≥ 0; Xk = Ỹk = 1} and set X̃k = Xk if k ≤ σ and X̃k = Ỹk if k > σ. Then X and X̃ have the same distribution. In other words, X̃ and Ỹ are a coupling of the renewal processes having marginal distributions X and Y with coupling time σ.
Proof. By the Daniell-Kolmogorov extension theorem, it is enough to show that they have the same finite
dimensional distributions.
With this in mind, fix n and (ε0, . . . , εn) ∈ {0, 1}^{n+1}. Define
σ̃ = min{min{k ≥ 0; εk = Ỹk = 1}, n}.
Note that on the set {X̃0 = ε0, . . . , X̃n = εn}, σ̃ = min{n, σ}. Noting that X and (Ỹ, σ̃) are independent, we have
P{X̃0 = ε0, . . . , X̃n = εn} = Σ_{k=0}^n P{σ̃ = k, X̃0 = ε0, . . . , X̃n = εn}
= Σ_{k=0}^n P{σ̃ = k, X0 = ε0, . . . , Xk = εk, Ỹk+1 = εk+1, . . . , Ỹn = εn}
= Σ_{k=0}^n P{X0 = ε0, . . . , Xk = εk} P{σ̃ = k, Ỹk+1 = εk+1, . . . , Ỹn = εn}.
Claim. Let Y be a non-delayed renewal sequence with waiting time distribution R. Then for k ≤ n,
P{σ̃ = k, Ỹk+1 = εk+1, . . . , Ỹn = εn} = P{σ̃ = k} P{Yj = εk+j, j = 1, . . . , n − k}.
If k = n, then because k + 1 > n, the conditions on Ỹ are vacuous and the equation is immediate. For k < n and εk = 0, {σ̃ = k} = ∅ and the equation is trivial. For k < n and εk = 1, {σ̃ = k} is a finite union of events of the form {Ỹ0 = ε̃0, . . . , Ỹk = 1} and the claim follows from the identity (3.4) above on delayed renewal sequences.
Use this identity once more to obtain
P{X̃0 = ε0, . . . , X̃n = εn} = Σ_{k=0}^n P{σ̃ = k} P{X0 = ε0, . . . , Xk = εk} P{Yj = εk+j, j = 1, . . . , n − k}
= Σ_{k=0}^n P{σ̃ = k} P{X0 = ε0, . . . , Xn = εn} = P{X0 = ε0, . . . , Xn = εn},
using, in the second equality, the renewal property of X and the fact that εk = 1 on {σ̃ = k} for k < n.
Exercise. Let W be a Z+-valued random variable and let {an; n ≥ 1} be a sequence. If f(0) = 0 and f(n) = Σ_{k=1}^n ak, then
Ef(W) = Σ_{n=1}^∞ an P{W ≥ n}.
Theorem 3.20. Let R be a waiting time distribution with finite mean µ and let Ỹ be a delayed renewal sequence having waiting time distribution R and delay distribution
D{n} = (1/µ) R[n + 1, ∞), n ∈ Z+.
Then P{Ỹn = 1} = 1/µ for every n ∈ Z+.
Theorem 3.22 (Renewal). Let µ ∈ [1, ∞] and let X be an aperiodic renewal sequence with mean waiting time µ. Then
lim_{k→∞} P{Xk = 1} = 1/µ.

Proof. (transient case) In this case R{∞} > 0 and thus µ = ∞. Recall that uk = P{Xk = 1}. Because
Σ_{k=0}^∞ uk < ∞,
lim_{k→∞} uk = 0 = 1/µ.
(positive recurrent case) Let Ỹ be a delayed renewal sequence, independent of X, having the same waiting time distribution as X and satisfying
P{Ỹk = 1} = 1/µ.
Define the stopping time
σ = min{k ≥ 0; Xk = Ỹk = 1}
and the process
X̃k = Xk if k ≤ σ,  X̃k = Ỹk if k > σ.
Then X and X̃ have the same distribution. If P{σ < ∞} = 1, then
|P{Xk = 1} − 1/µ| = |P{X̃k = 1} − P{Ỹk = 1}|
= |(P{X̃k = 1, σ ≤ k} − P{Ỹk = 1, σ ≤ k}) + (P{X̃k = 1, σ > k} − P{Ỹk = 1, σ > k})|
= |P{X̃k = 1, σ > k} − P{Ỹk = 1, σ > k}| ≤ max{P{X̃k = 1, σ > k}, P{Ỹk = 1, σ > k}} ≤ P{σ > k}.
Thus, the renewal theorem holds in the positive recurrent case if σ is finite almost surely. With this in mind, define the FtỸ-stopping time
τ = min{k; Ỹk = 1 and uk > 0}.
Because uk > 0 for all sufficiently large k, and Ỹ has finite waiting times and a finite delay, τ is finite with probability 1. Note that X, τ and {Ỹτ+k; k = 0, 1, . . .} are independent. Consequently, {XkỸτ+k; k = 0, 1, . . .} is a renewal sequence independent of τ with potential sequence
P{XkỸτ+k = 1} = P{Xk = 1} P{Ỹτ+k = 1} = uk².
Note that Σ_{k=1}^∞ uk² = ∞; otherwise, lim_{k→∞} uk = 0, contradicting, from (3.3), the strong law for renewal sequences, that lim_{n→∞} (1/n) Σ_{k=1}^n uk = 1/µ. Thus, the renewal process {XkỸτ+k; k ≥ 0} is recurrent; i.e., letting {σj; j ≥ 1} denote its renewal times, the random variables
{Xσ_{2km}+m; k ≥ 1}
are independent and take the value one with probability um. If um > 0, then
Σ_{k=1}^∞ P{Xσ_{2km}+m = 1} = Σ_{k=1}^∞ um = ∞,
and, by the Borel-Cantelli lemma,
P{Xσ_{2km}+m = 1 i.o.} = 1.
(null recurrent case) Introduce independent delayed renewal processes Ỹ0, . . . , Ỹq with waiting time distribution R and fixed delay r for Ỹr. Thus X and Ỹ0 have the same distribution.
Guided by the proof of the positive recurrent case, define the stopping time
σ = min{k ≥ 0; Xk = Ỹ0_k = · · · = Ỹq_k = 1}
and set
X̃r_k = Ỹr_k if k ≤ σ,  X̃r_k = Xk if k > σ.
We have shown that X̃r and Ỹr have the same distribution.
Suppose, contrary to the theorem, that there exists ε > 0 with uk ≥ ε infinitely often. Because µ = ∞, we may choose q so that Σ_{n=1}^{q+1} R[n, ∞) > 2/ε. Because uk ≥ ε infinitely often, Σ_{k=1}^∞ uk^{q+1} = ∞. Modify the argument above to see that σ < ∞ with probability one. Consequently, there exists an integer k > q so that, for r = 0, 1, . . . , q,
uk−r = P{X̃r_k = 1} ≥ P{X̃r_k = 1, σ < k} ≥ ε/2.
Consequently,
Σ_{n=1}^{q+1} uk+1−n R[n, ∞) > 1.
However, identity (3.2) states that this sum is at most one. This contradiction proves the null recurrent case.
Putting this all together, we see that a renewal sequence is
recurrent if and only if Σ_{n=0}^∞ un = ∞, and
null recurrent if and only if it is recurrent and lim_{n→∞} un = 0.
Exercise 3.23. Fill in the details for the proof of the renewal theorem in the null recurrent case.
Corollary 3.24. Let µ ∈ [1, ∞] and let X be a renewal sequence with period γ and mean waiting time µ. Then
lim_{k→∞} P{Xγk = 1} = γ/µ.

Proof. The process Y defined by Yk = Xγk is an aperiodic renewal process with waiting time distribution RY(B) = RX(γB) and mean µ/γ.
Lemma 3.25. Let n ≥ 0. For a real-valued random walk S, the probability that n is a strict ascending ladder time equals the probability that there is no weak descending ladder time less than or equal to n.

Proof. Let {Xk : k ≥ 1} be the steps in S. Then
{n is a strict ascending ladder time for S} = {Σ_{k=m}^n Xk > 0 for m = 1, 2, . . . , n}
and
{no m in {1, 2, . . . , n} is a weak descending ladder time for S} = {Σ_{k=1}^m Xk > 0 for m = 1, 2, . . . , n}.
By replacing Xk with Xn+1−k, we see that these two events have the same probability.
Theorem 3.26. Let G++ and G− be the generating functions for the waiting time distributions of the strict ascending ladder times and the weak descending ladder times of a real-valued random walk. Then
(1 − G++(z))(1 − G−(z)) = 1 − z.

Proof. By the generating function identity (3.1), (1 − G++(z))^{−1} is the generating function of the sequence {u++_n : n ≥ 0}, where u++_n is the probability that n is a strict ascending ladder time. By the lemma above, u++_n = 1 − r−_n, where r−_n = Σ_{k=0}^n R−{k} is the probability of a weak descending ladder time in {1, . . . , n} and R− is the waiting time distribution of the weak descending ladder times. Thus, for z ∈ [0, 1),
1/(1 − G++(z)) = Σ_{n=0}^∞ (1 − r−_n) z^n = 1/(1 − z) − Σ_{k=0}^∞ Σ_{n=k}^∞ R−{k} z^n
= 1/(1 − z) − Σ_{k=0}^∞ R−{k} z^k/(1 − z) = (1 − G−(z))/(1 − z),
and a Taylor series expansion gives the waiting time distribution of the ladder times.
4 Martingales
Definition 4.1. A real-valued random sequence X with E|Xn| < ∞ for all n ≥ 0 and adapted to a filtration {Fn; n ≥ 0} is an Fn-martingale if
E[Xn+1|Fn] = Xn for n = 0, 1, . . . ,
an Fn-submartingale if
E[Xn+1|Fn] ≥ Xn for n = 0, 1, . . . ,
and an Fn-supermartingale if the inequality above is reversed.
Exercise 4.2. 1. Show that X is an Fn-martingale if and only if for every j < k,
E[Xk|Fj] = Xj.

Let φ be convex with E|φ(Xn)| < ∞ for all n. If (1) X is an Fn-martingale, or (2) X is an Fn-submartingale and φ is nondecreasing, then φ(X) is an Fn-submartingale: by the conditional Jensen inequality,
E[φ(Xn+1)|Fn] ≥ φ(E[Xn+1|Fn]) ≥ φ(Xn).
For part 1, the last inequality is an equality. For part 2, the last inequality follows from the assumption that φ is nondecreasing.
Exercise 4.4. Let X and Y be Fn-submartingales. Then {max{Xn, Yn}; n ≥ 0} is an Fn-submartingale.
Exercise 4.5. Let X and X̃ be Fn-supermartingales, and let τ be a finite Fn-stopping time. Assume that Xτ ≥ X̃τ almost surely. Show that
Yn = Xn if n ≤ τ,  Yn = X̃n if n > τ
is an Fn-supermartingale.
4.1 Examples
1. Let S be a random walk whose step sizes have mean µ. Then S is an FnX-submartingale, martingale, or supermartingale according as µ > 0, µ = 0, or µ < 0.
To see this, note that
E|Sn| ≤ Σ_{k=1}^n E|Xk| = nE|X1|
and
E[Sn+1|FnX] = E[Sn + Xn+1|FnX] = E[Sn|FnX] + E[Xn+1|FnX] = Sn + µ.
2. In addition, Yn = Sn − nµ is an FnX-martingale.
4. (Wald’s martingale) Let φ(t) = E[exp(itX1 )] be the characteristic function for the step distribution
for the random walk S. Fix t ∈ R and define Yn = exp(itSn )/φ(t)n . Then
5. (Likelihood ratios) Let X0, X1, . . . be independent and let f0 and f1 be probability density functions with respect to a σ-finite measure µ on some state space S. A stochastic process of fundamental importance in the theory of statistical inference is the sequence of likelihood ratios
Ln = Π_{k=0}^n f1(Xk)/f0(Xk), n ≥ 0.
To assure that Ln is defined, assume that f0(x) > 0 for all x ∈ S. Then
E[Ln+1|FnX] = E[Ln f1(Xn+1)/f0(Xn+1) |FnX] = Ln E[f1(Xn+1)/f0(Xn+1)].
If each Xn has density f0 with respect to µ, then
E[f1(Xn+1)/f0(Xn+1)] = ∫S (f1(x)/f0(x)) f0(x) µ(dx) = ∫S f1(x) µ(dx) = 1
and L is a martingale.
6. Let {Xn; n ≥ 0} be independent mean 1 random variables. Then Zn = Π_{k=1}^n Xk is an FnX-martingale.
7. (Doob’s martingale) Let X be a random variable, E|X| < ∞ and define Xn = E[X|Fn ], then by the
tower property,
E[Xn+1 |Fn ] = E[E[X|Fn+1 ]|Fn ] = E[X|Fn ] = Xn .
8. (Polya urn) An urn initially holds a number of balls, colored green and blue, with at least one of each color. Draw a ball at random from the urn and return it along with c others of the same color. Repeat this procedure, letting Xn be the number of blue and Yn the number of green balls after n iterations. Let Rn be the fraction of blue balls,
Rn = Xn/(Xn + Yn). (4.1)
Then
E[Rn+1|Fn^{(X,Y)}] = ((Xn + c)/(Xn + Yn + c)) (Xn/(Xn + Yn)) + (Xn/(Xn + Yn + c)) (Yn/(Xn + Yn))
= ((Xn + c)Xn + XnYn)/((Xn + Yn + c)(Xn + Yn)) = Rn
and R is a martingale.
9. (martingale transform) For a filtration {Fn : n ≥ 0}, call a random sequence H predictable if Hn is Fn−1-measurable. Define (H · X)0 = 0 and
(H · X)n = Σ_{k=1}^n Hk(Xk − Xk−1).
If X is an Fn-martingale and H is bounded and predictable, then H · X is an Fn-martingale.
4.2 Doob Decomposition

Theorem (Doob decomposition). Let X be an Fn-submartingale. Then there exists a unique pair of processes M and V so that
1. Xn = Mn + Vn,
2. M is an Fn-martingale,
3. 0 = V0 ≤ V1 ≤ . . .,
4. V is Fn-predictable.
Proof. Define V0 = 0 and
Vn = Σ_{k=1}^n E[Xk − Xk−1|Fk−1];
then 3 follows from the submartingale property of X and 4 follows from the basic properties of conditional expectation. Define M = X − V so that 1 is satisfied; then
E[Mn − Mn−1|Fn−1] = E[Xn − Xn−1|Fn−1] − (Vn − Vn−1) = 0.
Thus, there exist processes M and V that satisfy 1-4. To show uniqueness, choose a second pair M̃ and Ṽ. Then
Mn − M̃n = Ṽn − Vn
is Fn−1-measurable. Therefore,
Mn − M̃n = E[Mn − M̃n|Fn−1] = Mn−1 − M̃n−1
by the martingale property for M and M̃. Thus, the difference between Mn and M̃n is constant, independent of n. This difference
M0 − M̃0 = Ṽ0 − V0 = 0
by property 3.
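As a concrete illustration (not taken from the notes): for the simple symmetric random walk S, the process Xn = Sn² is a submartingale with E[Xn − Xn−1|Fn−1] = 1, so the decomposition is Vn = n and Mn = Sn² − n. A Monte Carlo sketch checking that the martingale part keeps constant mean EM0 = 0:

import numpy as np

rng = np.random.default_rng(5)
trials, n = 100_000, 50
steps = np.where(rng.random((trials, n)) < 0.5, 1, -1)
S = steps.cumsum(axis=1)
M = S**2 - np.arange(1, n + 1)                # martingale part; V_n = n
print(M[:, 9].mean(), M[:, -1].mean())        # both should be near 0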
Exercise 4.8. Let Bm ∈ Fm and define the process
Xn = Σ_{m=0}^n IBm.
Find the Doob decomposition of X.
Let X be a uniformly integrable Fn-submartingale with Doob decomposition
X = M + V,
and set
V∞ = lim_{n→∞} Vn.
Then
EV∞ = lim_{n→∞} EVn = lim_{n→∞} E[Xn − Mn] = lim_{n→∞} E[Xn − M0] ≤ supn E|Xn| + E|M0| < ∞ (4.2)
by the uniform integrability of X. Because the sequence Vn is dominated by an integrable random variable V∞, it is uniformly integrable. M = X − V is uniformly integrable because the difference of uniformly integrable sequences is uniformly integrable.
4.3 Optional Sampling Theorem

Definition. A stopping time τ satisfies the sampling integrability conditions for X if
1. E[|Xτ|; {τ < ∞}] < ∞, and
2. lim inf_{n→∞} E[|Xn|; {τ > n}] = 0.

Theorem (Optional Sampling). Let {τn; n ≥ 0} be an increasing sequence of Fn-stopping times and let X be an Fn-submartingale. If each τn is almost surely finite and satisfies the sampling integrability conditions for X, then {Xτn; n ≥ 0} is an Fτn-submartingale.

Consequently, by the submartingale property for X, each of the terms in the sum is non-negative.
To check that the last term can be made arbitrarily close to zero, note that {τn+1 > p} → ∅ a. s. as p → ∞ and
|Xτn+1 IBm∩{τn+1>p}| ≤ |Xτn+1|,
which is integrable by the first property of the sampling integrability conditions. Thus, by the dominated convergence theorem,
lim_{p→∞} E[|Xτn+1|; Bm ∩ {τn+1 > p}] = 0.
For the remaining term, by property 2 of the sampling integrability conditions, we are assured of a subsequence {p(k); k ≥ 0} so that
lim_{k→∞} E[|Xp(k)|; {τn+1 > p(k)}] = 0.
A review of the proof shows that we can reverse the inequality in the case of supermartingales and replace it with an equality in the case of martingales.
The most frequent use of this theorem is:
Corollary 4.12. Let τ be an Fn-stopping time and let X be an Fn-(sub)martingale. Assume that τ is almost surely finite and satisfies the sampling integrability conditions for X. Then
E[Xτ] = (≥) E[X0].
Example 4.13. Let X0 = 1 and, given Xn, let Xn+1 = 2Xn or Xn+1 = 0, each with probability 1/2, independently at each stage; X is a martingale. Thus, after each bet the fortune either doubles or vanishes. Let τ be the time that the fortune vanishes, i.e.,
τ = min{n > 0 : Xn = 0}.
Note that P{τ > k} = 2^{−k}. However,
EXτ = 0 < 1 = EX0,
and τ cannot satisfy the sampling integrability conditions.
Because the sampling integrability conditions are a technical relationship between X and τ, we look for easy-to-verify sufficient conditions. The first of three is the case in which the stopping time τ is best behaved: it is bounded.
Proposition 4.14. If τ is bounded, then τ satisfies the sampling integrability conditions for any sequence X of integrable random variables.
Proof. Let b be a bound for τ. Then
1. |Xτ| = |Σ_{k=0}^b Xk I{τ=k}| ≤ Σ_{k=0}^b |Xk|, an integrable random variable, and
2. for n ≥ b, E[|Xn|; {τ > n}] = 0 because {τ > n} = ∅.
In the second case, we look for a trade-off between the growth of the absolute value of conditional increments of the submartingale versus moments for the stopping time.
Exercise 4.15. Show that if 0 ≤ X0 ≤ X1 ≤ · · ·, then condition 1 implies condition 2 in the sampling integrability conditions.
Proposition 4.16. Let X be an Fn-adapted sequence of real-valued random variables and let τ be an almost surely finite Fn-stopping time. Suppose for each n > 0, there exists mn so that
E[|Xn − Xn−1| |Fn−1] ≤ mn almost surely on {τ ≥ n},
and set f(n) = E|X0| + Σ_{k=1}^n mk. If Ef(τ) < ∞, then τ satisfies the sampling integrability conditions for X.

Proof. Define
Yn = |X0| + Σ_{k=1}^n |Xk − Xk−1|.
Then Yn ≥ |Xn| and so it suffices to show that τ satisfies the sampling integrability conditions for Y. Because Y is an increasing sequence, we can use the exercise above to note that the theorem follows if we can prove that EYτ < ∞.
By the hypotheses and the tower property, noting that {τ ≥ k} = {τ ≤ k − 1}^c ∈ Fk−1, we have
E[|Xk − Xk−1|; {τ ≥ k}] ≤ mk P{τ ≥ k}.
Therefore,
EYτ = E|X0| + Σ_{k=1}^∞ E[|Xk − Xk−1|; {τ ≥ k}] ≤ E|X0| + Σ_{k=1}^∞ mk P{τ ≥ k} = Ef(τ).
Corollary 4.17. 1. If Eτ < ∞, and |Xk −Xk−1 | ≤ c, then τ satisfies the sampling integrability conditions
for X.
2. If Eτ < ∞, and S is a random walk whose steps have finite mean, then τ satisfies the sampling
integrability conditions for S.
The last of the three cases shows that if the submartingale is well behaved, then we only require that the stopping time be almost surely finite.
Theorem 4.18. Let X be a uniformly integrable Fn -submartingale and τ an almost surely finite Fn -stopping
time. Then τ satisfies the sampling integrability conditions for X.
Proof. Because {τ > n} → ∅, condition 2 follows from the fact that X is uniformly integrable.
By the optional sampling theorem, the stopped sequence Xnτ = Xmin{n,τ } is an Fnτ -submartingale. If we
show that X τ is uniformly integrable, then, because Xnτ → Xτ a.s., we will have condition 1 of the sampling
integrability condition. Write X = M + V , the Doob decomposition for X, with M a martingale and V a
predictable increasing process. Then both M and V are uniformly integrable sequences.
As we learned in (4.2), V∞ = limn→∞ Vn exists and is integrable. Because
Vnτ ≤ V∞ ,
V τ is a uniformly integrable sequence. To check that M τ and consequently X τ are uniformly integrable
sequences, note that {|Mn |; n ≥ 0} is a submartingale and that {Mnτ ≥ c} = {Mmin{τ,n} ≥ c} ∈ Fmin{τ,n} .
Therefore, by the optional sampling theorem applied to {|Mn |; n ≥ 0},
E[|Mnτ |; {|Mnτ | ≥ c}] ≤ E[|Mn |; {|Mmin{τ,n} | ≥ c}]
and using the fact that M is uniformly integrable
E|Mmin,{τ,n} | E|Mn | K
P {|Mmin,{τ,n} | ≥ c} ≤ ≤ ≤
c c c
for some constant K. Therefore, by the uniform integrability of the sequence M ,
lim sup E[|Mnτ |; {|Mnτ ≥ c}] ≤ lim sup E[|Mn |; {|Mmin,{τ,n} ≥ c}] = 0,
c→∞ n c→∞ n
τ
and therefore M is a uniformly integrability sequence.
Theorem 4.19 (First Wald identity). Let S be a real-valued random walk whose steps have finite mean µ. If τ is a stopping time that satisfies the sampling integrability conditions for S, then
ESτ = µEτ.
Proof. The random process Yn = Sn − nµ is a martingale. Thus, by the optional sampling theorem, applied
using the bounded stopping time min{τ, n},
0 = EYmin{τ,n} = ESmin{τ,n} − µE[min{τ, n}].
Note that
lim inf_{n→∞} E[|Smin{τ,n} − Sτ|] ≤ lim inf_{n→∞} E[|Sn − Sτ|; {τ > n}] ≤ lim inf_{n→∞} E[|Sn|; {τ > n}] + lim_{n→∞} E[|Sτ|; {τ > n}].
The first term has liminf zero by the second sampling integrability condition. The second has limit zero. To see this, use the dominated convergence theorem, noting that the first sampling integrability condition states that E|Sτ| < ∞. Consequently, Smin{τ,n} →L¹ Sτ along a subsequence as n → ∞.
Let {n(k) : k ≥ 1} be a subsequence in which this limit is obtained. Then, by the monotone convergence
theorem,
µEτ = µ lim E[min{τ, n(k)}] = lim ESmin{τ,n(k)} = ESτ .
k→∞ k→∞
Theorem 4.20. (Second Wald Identity) Let S be a real-valued random walk whose step sizes have mean 0
and variance σ 2 . If τ is a stopping time that satisfies the sampling integrability conditions for {Sn2 ; n ≥ 0}
then
Var(Sτ ) = σ 2 Eτ.
Proof. Note that τ satisfies the sampling integrability conditions for S. Thus, by the first Wald identity, ESτ = 0 and hence Var(Sτ) = ESτ². Consider Zn = Sn² − nσ², an FnS-martingale, and follow the steps in the proof of the first Wald identity.
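Both identities can be checked by simulation. A sketch with steps uniform on {−2, −1, 1, 2} (so µ = 0 and σ² = 2.5; both are illustrative assumptions) and τ the first exit time from (−10, 10); the increments are bounded and Eτ < ∞, so Corollary 4.17 applies:

import numpy as np

rng = np.random.default_rng(6)
taus, S_taus = [], []
for _ in range(20_000):
    s, m = 0, 0
    while abs(s) < 10:
        s += rng.choice([-2, -1, 1, 2])
        m += 1
    taus.append(m); S_taus.append(s)
taus, S_taus = np.array(taus), np.array(S_taus)
print(S_taus.mean())                          # first Wald: ES_tau = mu E tau = 0
print(S_taus.var(), 2.5 * taus.mean())        # second Wald: Var(S_tau) = sigma^2 E tau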
Example 4.21. Let S be the asymmetric simple random walk on Z. Let p ≠ 1/2 be the probability of a forward step. For any a ∈ Z, set
τa = min{n ≥ 0 : Sn = a}.
For a < 0 < b, set τ = min{τa, τb}. The probability of b − a + 1 consecutive forward steps is p^{b−a+1}, so P{τ > k(b − a + 1)} ≤ (1 − p^{b−a+1})^k and τ has finite mean. Let
L(α) = E[exp(−αX1)] = p exp(−α) + (1 − p) exp(α);
then exp(−αSn) L(α)^{−n} is a martingale. A simple martingale occurs when L(α) = 1. Solving, we have the choices exp(−α) = 1 or exp(−α) = (1 − p)/p. The first choice gives the constant martingale Yn = 1; the second choice yields
Yn = ((1 − p)/p)^{Sn} = ψ(Sn).
Use the bound Ymin{τ,n} ≤ max{ψ(a), ψ(b)} on the event {τ > n} to see that τ satisfies the sampling integrability conditions for Y. Therefore, 1 = ψ(0) = EYτ, or
ψ(0) = ψ(b)(1 − P{τa < τb}) + ψ(a) P{τa < τb},
P{τa < τb} = (ψ(b) − ψ(0))/(ψ(b) − ψ(a)).
Suppose p > 1/2 and let b → ∞ to obtain
P{minn Sn ≤ a} = P{τa < ∞} = ((1 − p)/p)^{−a}. (4.3)
Note that
|Smin{τb,n}| ≤ max{b, |minm Sm|},
which, by (4.3), has finite mean. Thus, τb satisfies the sampling integrability conditions for S, and by the first Wald identity, b = ESτb = (2p − 1)Eτb. Therefore,
Eτb = b/(2p − 1).
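A simulation sketch of the hitting probability just derived, with illustrative values p = 0.6, a = −5, b = 5 (Eτb can be checked the same way):

import numpy as np

rng = np.random.default_rng(7)
p, a, b, trials = 0.6, -5, 5, 50_000
psi = lambda x: ((1 - p) / p) ** x
hits_a = 0
for _ in range(trials):
    s = 0
    while a < s < b:
        s += 1 if rng.random() < p else -1
    hits_a += (s == a)
print(hits_a / trials, (psi(b) - psi(0)) / (psi(b) - psi(a)))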
Exercise 4.22. 1. Let S denote a real-valued random walk whose steps have mean zero and, for c > 0, let τ = min{n ≥ 0; Sn ≥ c}. Show that Eτ = ∞.
2. For the simple symmetric random walk on Z, show that
P{τa < τb} = b/(b − a). (4.4)
3. For the simple symmetric random walk on Z, use the fact that Sn² − n is a martingale to show that E[min{τa, τb}] = −ab.
4.4 Inequalities and Convergence

Theorem (Doob's maximal inequality). Let X be an Fn-submartingale and let x > 0. Then
x P{max_{0≤k≤n} Xk ≥ x} ≤ E[Xn+]. (4.5)

Proof. Set τ = min{k ≥ 0; Xk ≥ x} and A = {max_{0≤k≤n} Xk ≥ x}; note that Xmin{τ,n} ≥ x on A and A ∈ Fmin{τ,n}. Because min{τ, n} ≤ n, the optional sampling theorem gives X+min{τ,n} ≤ E[Xn+|Fmin{τ,n}] and, therefore,
x P(A) ≤ E[Xmin{τ,n}; A] ≤ E[X+min{τ,n}; A] ≤ E[Xn+; A] ≤ E[Xn+].
Theorem 4.25 (Lp maximal inequality). Let X be a nonnegative submartingale. Then for p > 1,
E[(max_{1≤k≤n} Xk)^p] ≤ (p/(p − 1))^p E[Xn^p]. (4.6)
Proof. For M > 0, write Z = max{Xk; 1 ≤ k ≤ n} and ZM = min{Z, M}. Then
E[ZM^p] = ∫0^∞ ζ^p dFZM(ζ) = ∫0^M pζ^{p−1} P{Z > ζ} dζ
= ∫0^M pζ^{p−1} P{max_{1≤k≤n} Xk > ζ} dζ ≤ ∫0^M pζ^{p−1} (1/ζ) E[Xn; {Z > ζ}] dζ
= E[Xn ∫0^M pζ^{p−2} I{Z>ζ} dζ] = E[Xn β(ZM)]
where
β(z) = ∫0^z pζ^{p−2} dζ = (p/(p − 1)) z^{p−1}.
Note that (4.5) was used in giving the inequality. Now, use Hölder's inequality,
E[ZM^p] ≤ (p/(p − 1)) E[Xn^p]^{1/p} E[ZM^p]^{(p−1)/p},
and hence
E[ZM^p]^{1/p} ≤ (p/(p − 1)) E[Xn^p]^{1/p}.
Take p-th powers, let M → ∞ and use the monotone convergence theorem.
Example 4.26. To show that no similar inequality holds for p = 1, let S be the simple symmetric random walk starting at 1 and let τ = min{n > 0 : Sn = 0}. Then Yn = Smin{τ,n} is a nonnegative martingale.
Let M be an integer greater than 1; then by equation (4.4),
P{maxn Yn < M} = P{τ0 < τM} = (M − 1)/M,
so P{maxn Yn ≥ M} = 1/M and
E[maxn Yn] = Σ_{M=1}^∞ P{maxn Yn ≥ M} = Σ_{M=1}^∞ 1/M = ∞.
Exercise 4.27. For p = 1, we have the following maximal inequality. Let X be a nonnegative submartingale. Then
E[max_{1≤k≤n} Xk] ≤ (e/(e − 1)) E[Xn (log Xn)+].
Definition 4.28. Let X be a real-valued process. For a < b, set
τ1 = min{m ≥ 0 : Xm ≤ a}.
For k = 1, 2, . . ., define
σk = min{m > τk : Xm ≥ b}
and
τk+1 = min{m > σk : Xm ≤ a}.
Define the number of upcrossings of the interval [a, b] up to time n by
Un(a, b) = max{k; σk ≤ n}.
Lemma 4.29 (Doob’s Upcrossing Inequality). Let X be a submartingale. Fix a < b. Then for all n > 0,
E(Xn − a)+
EUn (a, b) ≤ .
b−a
Proof. Define
Cm = Σ_{k=1}^∞ I{σk < m ≤ τk+1}.
In other words, Cm = 1 between potential upcrossings and 0 otherwise. Note that {σk < m ≤ τk+1} = {σk ≤ m − 1} ∩ {τk+1 ≤ m − 1}^c ∈ Fm−1 and therefore C is a predictable process. Thus, C · X is a submartingale and
0 ≤ E(C · X)n = E[Σ_{k=1}^{Un(a,b)} (Xmin{τk+1,n} − Xmin{σk,n})]
= E[−Σ_{k=2}^{Un(a,b)} (Xmin{σk,n} − Xmin{τk,n}) + Xmin{τ_{Un(a,b)+1},n} − (Xmin{σ1,n} − a) − a]
≤ E[−(b − a)Un(a, b) + Xmin{τ_{Un(a,b)+1},n} − a]
≤ E[−(b − a)Un(a, b) + (Xn − a)+].
Rearrange to obtain the upcrossing inequality.
Remark 4.31. lim_{n→∞} xn fails to exist in [−∞, ∞] if and only if for some a, b ∈ Q,
lim inf_{n→∞} xn ≤ a < b ≤ lim sup_{n→∞} xn.
Theorem 4.32 (Submartingale convergence). Let X be an Fn-submartingale with supn EXn+ < ∞. Then
lim_{n→∞} Xn = X∞
exists almost surely with E|X∞| < ∞.
Proof. Choose a < b and set U(a, b) = lim_{n→∞} Un(a, b). Then
{U(a, b) = ∞} ⊃ {lim inf_{n→∞} Xn < a < b < lim sup_{n→∞} Xn}.
Because U(a, b) has finite mean by the upcrossing inequality, this event has probability 0 and thus
P(∪_{a,b∈Q} {lim inf_{n→∞} Xn ≤ a < b ≤ lim sup_{n→∞} Xn}) = 0.
Hence,
X∞ = lim_{n→∞} Xn exists a.s.
This limit might be infinite. However, by Fatou's lemma,
EX∞+ ≤ lim inf_{n→∞} EXn+ < ∞,
and, because EXn− = EXn+ − EXn ≤ EXn+ − EX0, a second application of Fatou's lemma gives EX∞− < ∞. Thus, E|X∞| < ∞.
Theorem 4.34. Let X be an Fn-submartingale. Then X is uniformly integrable if and only if there exists a random variable X∞ such that
Xn →L¹ X∞.
Furthermore, when this holds,
Xn → X∞ a.s.
Proof. If X is uniformly integrable, then supn E|Xn| < ∞, and by the submartingale convergence theorem, there exists a random variable X∞ with finite mean so that Xn → X∞ a.s. Again, because X is uniformly integrable, Xn →L¹ X∞.
Conversely, if Xn →L¹ X∞, then X is uniformly integrable. Moreover, supn E|Xn| < ∞, and by the submartingale convergence theorem,
lim_{n→∞} Xn
exists almost surely and must be X∞.
Theorem 4.35. A sequence X is a uniformly integrable Fn-martingale if and only if there exists a random variable X∞ such that
Xn = E[X∞|Fn].

Proof. Suppose X is a uniformly integrable martingale. Then Xn → X∞ a.s. and in L¹ by the theorem above. Let A ∈ Fn. Then for m > n, E[Xm; A] = E[Xn; A]; let m → ∞ to obtain E[X∞; A] = E[Xn; A] and hence Xn = E[X∞|Fn].
Conversely, if such an X∞ ∈ L¹ exists, then X is a martingale and the collection {E[X∞|Fn] : n ≥ 0} is uniformly integrable.
Exercise 4.36. Let {Yn; n ≥ 1} be a sequence of independent mean zero random variables. Show that
Σ_{n=1}^∞ EYn² < ∞ implies that Σ_{n=1}^N Yn converges almost surely
as N → ∞.
Exercise 4.37 (Lebesgue points). Let m be Lebesgue measure on [0, 1)^d. For n ≥ 0 and x ∈ [0, 1)^d, let I(n, x) be the set of the form [(k1 − 1)2^{−n}, k1 2^{−n}) × · · · × [(kd − 1)2^{−n}, kd 2^{−n}) that contains x. Prove that for any integrable function f : [0, 1)^d → R,
lim_{n→∞} (1/m(I(n, x))) ∫_{I(n,x)} f dm = f(x)
for m-almost every x.
Example 4.38 (Polya’s urn). Because positive martingales have a limit, we have for the martingale R
defined by equation (4.1)
R∞ = lim Rn a.s.
n→∞
Because the sequence is bounded between 0 and 1, it is uniformly integrable and so the convergence is is also
in L1 .
Start with b blue and g green; the probability that the first m draws are blue and the next n − m draws are green is
(b/(b + g)) ((b + c)/(b + g + c)) · · · ((b + (m − 1)c)/(b + g + (m − 1)c))
× (g/(b + g + mc)) ((g + c)/(b + g + (m + 1)c)) · · · ((g + (n − m − 1)c)/(b + g + (n − 1)c)).
This is the same probability for any other outcome that chooses m blue balls in the first n draws. Consequently, writing a^{k̄} = a(a + 1) · · · (a + k − 1) for a to the k rising,
P{Xn = b + mc} = C(n, m) (b/c)^{m̄} (g/c)^{(n−m)‾} / ((b + g)/c)^{n̄}.
For the special case c = b = g = 1,
P{Xn = 1 + m} = 1/(n + 1),  m = 0, 1, . . . , n,
and R∞ has a uniform distribution on [0, 1].
In general, R∞ has a Beta(b/c, g/c) distribution, with density
(Γ((b + g)/c)/(Γ(b/c)Γ(g/c))) r^{b/c−1} (1 − r)^{g/c−1},  0 < r < 1.
Exercise 4.39. Show the beta distribution limit for Polya’s urn.
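A simulation sketch for the case b = g = c = 1 (run lengths are illustrative), binning the late-time fraction of blue balls; each of the five bins should receive about 1/5 of the mass if R∞ is uniform:

import numpy as np

rng = np.random.default_rng(8)
trials, n = 10_000, 200
R_end = np.empty(trials)
for t in range(trials):
    blue, green = 1, 1
    for _ in range(n):
        if rng.random() < blue / (blue + green):
            blue += 1                          # drew blue: return it plus one more
        else:
            green += 1
    R_end[t] = blue / (blue + green)
print(np.histogram(R_end, bins=5, range=(0, 1))[0] / trials)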
Example 4.40 (Radon-Nikodym theorem). Let ν be a finite measure and let µ be a probability measure on (Ω, F) with ν << µ. Choose a filtration {Fn : n ≥ 0} so that F = σ{Fn : n ≥ 0}. Write Fn = σ{An,1, . . . , An,k(n)}, where {An,1, . . . , An,k(n)} is a partition of Ω, and define for x ∈ An,m,
fn(x) = ν(An,m)/µ(An,m) if µ(An,m) > 0,  fn(x) = 0 if µ(An,m) = 0.
To check that fn is an Fn-martingale on (Ω, F, µ), write
An,j = ∪_{k=1}^ℓ Ãk,  Ã1, . . . , Ãℓ ∈ Fn+1.
Then,
∫_{An,j} fn+1 dµ = Σ_{k; µ(Ãk)>0} (ν(Ãk)/µ(Ãk)) µ(Ãk) = Σ_{k; µ(Ãk)>0} ν(Ãk) = ν(An,j).
Note that
µ{fn > c} ≤ (∫ fn dµ)/c = ν(Ω)/c < δ
for sufficiently large c, and, therefore, given ε > 0, the absolute continuity ν << µ provides a δ > 0 so that
∫_{fn>c} fn dµ = ν{fn > c} < ε.
Consequently, {fn; n ≥ 0} is a uniformly integrable martingale, so there exists f with fn → f almost surely and in L¹(µ), and
ν(A) = ∫_A f dµ
for every A ∈ S = ∪_{n=1}^∞ Fn, an algebra of sets with σ(S) = F. By the Sierpinski class theorem, the identity holds for all A ∈ F. Uniqueness of f is an exercise.
Exercise 4.41. Show that if f̃ satisfies
ν(A) = ∫_A f̃ dµ for all A ∈ F,
then f̃ = f, µ-almost surely.
Exercise 4.42. If we drop the condition ν << µ, then show that fn is an Fn-supermartingale.
Example 4.43 (Lévy’s 0-1 law). Let F∞ be the smallest σ-algebra that contains a filtration {Fn ; n ≥ 0},
then the martingale convergence theorem gives us Lévy’s 0-1 law:
For any A ∈ F∞ ,
P (A|Fn ) → IA a.s. and in L1 as n → ∞.
Example 4.46 (Branching Processes). Let {Xn,k; n ≥ 1, k ≥ 1} be a doubly indexed set of independent and identically distributed N-valued random variables. From this we can define a branching or Galton-Watson-Bienaymé process by Z0 = 1 and
Zn+1 = Xn+1,1 + · · · + Xn+1,Zn if Zn > 0,  Zn+1 = 0 if Zn = 0.
In words, each individual independently of the others, gives rise to offspring with a common offspring
distribution,
ν{j} = P {X1,1 = j}.
Zn gives the population size of the n-th generation.
The mean µ of the offspring distribution is called the Malthusian parameter. Define the filtration Fn =
σ{Xm,k ; n ≥ m ≥ 1, k ≥ 1}. Then
E[Zn+1|Fn] = Σ_{k=1}^∞ E[Zn+1 I{Zn=k}|Fn] = Σ_{k=1}^∞ E[(Xn+1,1 + · · · + Xn+1,k) I{Zn=k}|Fn]
= Σ_{k=1}^∞ I{Zn=k} E[Xn+1,1 + · · · + Xn+1,k|Fn] = Σ_{k=1}^∞ I{Zn=k} E[Xn+1,1 + · · · + Xn+1,k]
= Σ_{k=1}^∞ I{Zn=k} kµ = µZn. (4.7)
Consequently, Mn = Zn/µ^n is an Fn-martingale.
If µ < 1, the process is called subcritical. Then P{Zn > 0} ≤ EZn = µ^n → 0 as n → ∞. Thus, Zn = 0 for sufficiently large n. For such n, Mn = 0 and consequently M∞ = 0. Thus, the martingale does not converge in L¹ and therefore cannot be uniformly integrable.
If µ = 1, the process is called critical. In this case, Zn itself is a martingale. Because Zn is N-valued and has a limit, it must be constant from some point on. If the process is not trivial (ν{1} < 1), then the process continues to fluctuate unless the population is 0. Thus, the limit must be zero. EZ0 = 1 ≠ 0 = EZ∞ and again the martingale is not uniformly integrable.
The case µ > 1 is called the supercritical case. We shall use the generating function G for the offspring distribution to analyze this case.
Then G is the generating function for Z1 , G2 = G◦G is the generating function for Z2 and Gn = G◦Gn−1
is the generating function for Zn .
This is helpful in studying extinctions because
P{Zn = 0} = Gn(0).
Note that this sequence is monotonically increasing; set the probability of eventual extinction
ρ = lim_{n→∞} P{Zn = 0} = lim_{n→∞} Gn(0) = G(lim_{n→∞} Gn−1(0)) = G(ρ).
Note that G is strictly convex and increasing. Consider the function H(x) = G(x) − x. Then,
1. H is strictly convex.
2. H(0) = G(0) − 0 ≥ 0 and is equal to zero if and only if ν{0} = 0.
3. H(1) = G(1) − 1 = 0.
4. H 0 (1) = G0 (1) − 1 = µ − 1 > 0.
Consequently, H is negative in a neighborhood of 1. H is non-negative at 0. If ν{0} 6= 0, by the
intermediate value theorem, H has a zero in (0, 1). By the strict convexity of H, this zero is unique and
must be ρ > 0.
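Since Gn(0) increases to ρ, the extinction probability can be computed by iterating G. A sketch with an assumed offspring distribution ν{0} = 1/4, ν{1} = 1/4, ν{2} = 1/2 (so µ = 5/4 > 1; the exact answer here is ρ = 1/2):

def G(z, nu):
    return sum(p * z**j for j, p in enumerate(nu))

nu = [0.25, 0.25, 0.5]
rho = 0.0
for _ in range(200):
    rho = G(rho, nu)                           # G_n(0) increases to the fixed point
print(rho)                                     # smallest root of G(x) = x in [0, 1]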
In the supercritical case, now add the assumption that the offspring distribution has variance σ². We will now develop an iterative formula for an = E[Mn²]. To begin, use identity (4.7) above to write
In particular, supn EMn² < ∞. Consequently, M is a uniformly integrable martingale and the convergence is also in L¹. Thus, there exists a mean one random variable M∞ so that
Zn ≈ µ^n M∞
and M∞ = 0 with probability ρ.
Exercise 4.47. Let the offspring distribution of a mature individual have generating function Gm and let p be the probability that an immature individual survives to maturity.
1. Given k immature individuals, find the probability generating function for the number of immature individuals in the next generation of a branching process.
2. Given k mature individuals, find the probability generating function for the number of mature individuals in the next generation of a branching process.
Exercise 4.48. For a supercritical branching process, Zn with extinction probability ρ, ρZn is a martingale.
Exercise 4.49. Let Z be a branching process whose offspring distribution has mean µ. Show that for n > m,
E[Zn Zm] = µ^{n−m} EZm².
Use this to compute the correlation of Zm and Zn.
Exercise 4.50. Consider a branching process whose offspring distribution satisfies ν{j} = 0 for j ≥ 3. Find the probability of eventual extinction.
Exercise 4.51. Let the offspring distribution satisfy ν{j} = (1−p)pj−1 . Find the generating function Ez Zn .
Hint: Consider the case p = 1/2 and p 6= 1/2 separately. Use this to compute P {Zn = 0} and verify the
limit satisfies G(ρ) = ρ.
Corollary 4.56 (Strong Law of Large Numbers). Let {Yj ; j ≥ 1} be a sequence of independent and
identically distributed random variables. Denote by µ their common mean. Then,
lim_{n→∞} (1/n) Σ_{j=1}^n Yj = µ a.s.
is a backward martingale. Thus, there exists a random variable X∞ so that Xn → X∞ almost surely
as n → ∞. Because X∞ is a tail random variable for an independent sequence of random variables, it is
almost surely a constant. Because Xn → X∞ in L¹ and EXn = µ, this constant must be µ.
Example 4.57 (The Ballot Problem). Let {Yk ; k ≥ 1} be independent identically distributed N-valued
random variables. Set
Sn = Y1 + · · · + Yn .
Then,
P{Sj < j for 1 ≤ j ≤ n | Sn} = (1 − Sn/n)^+.   (4.8)
To explain the name, consider the case in which the Yk ’s take on the values 0 or 2 each with probability
1/2. In an election with two candidates A and B, interpret a 0 as a vote for A and a 2 as a vote for B.
Then
{A leads B throughout the counting} = {Sj < j for 1 ≤ j ≤ n} = G.
and the result above says that
P{A leads B throughout the counting | B gets r votes} = (1 − 2r/n)^+.
(Since each vote for B contributes 2 to Sn, Sn = 2r when B gets r votes, and (1 − Sn/n)^+ = (1 − 2r/n)^+.)
Sτ ≥ τ, and Sτ +1 < τ + 1
and
0 ≤ Yτ +1 = Sτ +1 − Sτ < (τ + 1) − τ = 1.
Thus, Yτ +1 = 0, and
Xτ = Sτ/τ = 1.
5 Markov Chains
5.1 Definition and Basic Properties
Definition 5.1. A process X is called a Markov chain with values in a state space S if, for every m ≥ 0
and A ∈ B(S), P{Xn+m ∈ A | F^X_n} depends only on the value of Xn. In this case, there is a mapping
S × N × B(S) → [0, 1]
that gives the probability of making a transition from the state Xn at time n into a collection of states A at
time n + m. If this mapping does not depend on the time n, call the Markov chain time homogeneous. We
shall consider primarily time homogeneous chains.
Consider the case with m = 1 and time homogeneous transition probabilities. We now ask that this
mapping have reasonable measurability properties.
Definition 5.2. 1. A function
µ : S × B(S) → [0, 1]
is a transition function if
(a) For every x ∈ S, µ(x, ·) is a probability measure, and
(b) For every A ∈ B(S), µ(·, A) is a measurable function.
2. A transition function µ is called a transition function for a time homogeneous Markov chain X if for
every n ≥ 0 and A ∈ B(S),
P {Xn+1 ∈ A|FnX } = µ(Xn , A).
Naturally associated to a measure is an operator that has this measure as its kernel:
T f(x) = ∫_S f(y) µ(x, dy).
The three expressions above give us three equivalent ways to think about Markov chain transitions.
With dynamical systems, the state of the next time step is determined by the position at the present
time. No knowledge of previous positions of the system is needed. For Markov chains, we have a similar
statement with state replaced by distribution. As with dynamical systems, along with the dynamics we need
a statement about the initial position.
and
P{Xm ∈ Am, . . . , X1 ∈ A1, X0 ∈ A0} = ∫_{A0} P{Xm ∈ Am, . . . , X1 ∈ A1 | X0 = x0} α(dx0).
Therefore, the initial distribution and the transition function determine the finite dimensional distributions
and consequently, by the Daniell-Kolmogorov extension theorem, the distribution of the process on the
sequence space S^N.
Typically we shall indicate the initial distribution for the Markov chain by writing Pα or Eα. If α = δx0,
then we shall write Px0 or Ex0. Thus, for example,
Eα[f(Xn)] = ∫_S T^n f(x) α(dx)
and
T f(x) = Σ_{y∈S} f(y) µ(x, {y}).
Consequently, the transition operator T can be viewed as a transition matrix with T(x, y) = µ(x, {y}). The
only requirements on such a matrix are that its entries are non-negative and that, for each x ∈ S, the row sum
Σ_{y∈S} T(x, y) = 1.
In this case, the operator identity T^{m+n} = T^m T^n gives the Chapman-Kolmogorov equations:
T^{m+n}(x, z) = Σ_{y∈S} T^m(x, y) T^n(y, z).
Show that
Pα{Xn = 0} = t10/(t01 + t10) + (1 − t01 − t10)^n ( α{0} − t10/(t01 + t10) ).
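Both the Chapman-Kolmogorov identity and the formula in this exercise are easy to check numerically by
comparing matrix powers of T with the closed form. A sketch, assuming (from context) that t01 and t10
denote the transition probabilities 0 → 1 and 1 → 0:

    import numpy as np

    t01, t10 = 0.3, 0.2
    T = np.array([[1 - t01, t01],
                  [t10, 1 - t10]])
    alpha = np.array([0.9, 0.1])  # initial distribution; alpha{0} = 0.9

    for n in range(6):
        exact = (alpha @ np.linalg.matrix_power(T, n))[0]  # P_alpha{X_n = 0}
        closed = t10/(t01 + t10) + (1 - t01 - t10)**n * (alpha[0] - t10/(t01 + t10))
        print(n, exact, closed)  # the two columns agree for every n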
5.2 Examples
1. (Independent Identically Distributed Sequences) Let ν be a probability measure and set µ(x, A) = ν(A).
Then,
P {Xn+1 ∈ A|FnX } = ν(A).
2. (Random Walk on a Group) Let ν be a probability measure on a group G and define the transition
function
µ(x, A) = ν(x−1 A).
In this case, {Yk ; k ≥ 1} are independent G-valued random variables with distribution ν. Set Xn = X0 Y1 · · · Yn.
Then
P {Yn+1 ∈ B|FnX } = ν(B),
and
P{Xn+1 ∈ A|F^X_n} = P{X_n^{−1}Xn+1 ∈ X_n^{−1}A|F^X_n} = P{Yn+1 ∈ X_n^{−1}A|F^X_n} = ν(X_n^{−1}A).
In the case of G = Rn under addition, X is a random walk.
3. (Shuffling) If G is SN, the symmetric group on N letters, then ν is a shuffle.
4. (Simple Random Walk) S = Z, T (x, x + 1) = p, T (x, x − 1) = q, p + q = 1.
5. (Simple Random Walk with Absorption) S = {x ∈ Z : −M1 ≤ x ≤ M2}. For −M1 < x < M2, the
transition probabilities are the same as in the simple random walk. At the endpoints, T(−M1, −M1) =
T(M2, M2) = 1.
6. (Simple Random Walk with Reflection) This is the same as the simple random walk with absorption
except for the endpoints. In this case, T(−M1, −M1 + 1) = T(M2, M2 − 1) = 1.
7. (Renewal sequences) S = N ∪ {0, ∞}. Let R be a distribution on S\{0}. Consider a Markov chain Y
with transition function
µ(y, ·) = R if y = 1,  δ_{y−1} if 1 < y < ∞,  δ_∞ if y = ∞.
Sometimes we write
λx = T (x, x + 1) for the probability of a birth,
and
µx = T (x, x − 1) for the probability of a death.
10. (Branching processes) S = N. Let ν be a distribution on N and let {Zi : i ≥ 0} be independent random
variables with distribution ν. Set the transition matrix T(x, y) = P{Z1 + · · · + Zx = y}.
Exercise 5.7. Define the following queue. In one time unit, the individual is served with probability p.
During that time unit, individuals arrive to the queue according to an arrival distribution γ. Give the
transition probabilities for this Markov chain.
Exercise 5.8. The discrete time two allele Moran model is similar to the Wright-Fisher model. To describe
the transitions, choose an individual at random to be removed from the population. Choose a second
individual to replace that individual. Mutation and selection proceed as with the Wright-Fisher model.
Give the transition probabilities for this Markov chain.
Theorem 5.13 (Strong Markov Property). Let Z be a bounded random sequence and let τ be a stopping
time. Then
Eα [Zτ ◦ θτ |Fτ ] = φτ (Xτ ) on {τ < ∞}
where
φn (x) = Ex Zn .
Proof. Let A ∈ FτX . Then
Eα[Zτ ◦ θτ ; A ∩ {τ < ∞}] = Σ_{n=0}^∞ Eα[Zτ ◦ θτ ; A ∩ {τ = n}]
= Σ_{n=0}^∞ Eα[Zn ◦ θn ; A ∩ {τ = n}]
= Σ_{n=0}^∞ Eα[φn(Xn); A ∩ {τ = n}]
= Eα[φτ(Xτ); A ∩ {τ < ∞}].
In particular, for f : S → R, bounded and measurable, then on the set {τ < ∞},
is an Fn-martingale.
2. Call a measurable function h : S → R (super)harmonic (on A) if T h(x) = h(x) (respectively, T h(x) ≤ h(x))
for all x ∈ A.
If Z = I{τB < ∞}, then
Z^+ = Z ◦ θ.
h_B^+(x) = Ex Z^+ = Ex[Ex[Z^+ | F1]] = Ex[Ex[Z ◦ θ | F1]] = Ex[hB(X1)] = T hB(x).
T hB(x) = h_B^+(x) ≤ hB(x)
τB = min{n ≥ 0 : Xn ∈ B},
then
hB(X_{min{n,τB}}) = h_B^+(X_{min{n,τB}})
Theorem 5.15. Let X be a Markov sequence and let B ∈ B(S). Then hB is the minimal bounded function
among those functions h satisfying
1. h = 1 on B.
2. h is harmonic on B c .
3. h ≥ 0.
Proof. hB satisfies 1-3. Let h be a second such function. Then Yn = h(Xmin{n,τB } ) is a nonnegative bounded
martingale. Thus, for some random variable Y∞ , Yn → Y∞ a.s. and in L1 . If n > τB , then Yn = 1. Therefore,
Y∞ = 1 whenever τB < ∞ and
Exercise 5.16. 1. Let B = Z\{−M1 + 1, . . . , M2 − 1}. Find the harmonic function that satisfies 1-3 for
the symmetric and asymmetric random walk.
2. Let B = {0} and p > 1/2 and find the harmonic function that satisfies 1-3 for the asymmetric random
walk.
Example 5.17. Let Gν denote the generating function for the offspring distribution of a branching process
X and consider h0 (x), the probability that the branching process goes extinct given that the initial population
is x. We can view the branching process as x independent branching processes each with initial population
1. Thus, we are looking for the probability that all of these x processes die out, i.e.,
h0 (x) = h0 (1)x .
Because h0 is harmonic at 1,
h0(1) = T h0(1) = Σ_y T(1, y) h0(y) = Σ_{y=0}^∞ ν{y} h0(1)^y = Gν(h0(1)).
Definition 5.19. A state y is accessible from a state x (written x → y) if for some n ≥ 0, T n (x, y) > 0. If
x → y and y → x, then we write x ↔ y and say that x and y communicate.
Exercise 5.20. 1. ↔ is an equivalence relation. Thus the equivalence of communication partitions the
state space by its equivalence classes.
2. Consider the 3 and 4 state Markov chains with transition matrices
1/2 1/2 0          1/2 1/2 0   0
1/2 1/4 1/4   ,    1/2 1/2 0   0
0   1/3 2/3        1/4 1/4 1/4 1/4
                   0   0   0   1
Give the equivalence classes under communication for the Markov chains associated to these two transition
matrices.
Definition 5.21. 1. Call a set of states C closed if for all x ∈ C and all n ≥ 0, Px{Xn ∈ C} = 1.
2. If T (x, x) = 1, then x is called an absorbing state. In particular, {x} is a closed set of states.
3. A Markov chain is called irreducible if all states communicate.
Proposition 5.22. Let X be a time homogeneous Markov chain and let y ∈ S. Then, under the measure Pα,
the process Yn = I{y}(Xn) is a delayed renewal sequence. Under Py, Y is a renewal sequence.
Proof. Let r and s be positive integers and choose a sequence (ε1, . . . , εr+s) ∈ {0, 1}^{r+s} such that εr = 1.
Then, for x ∈ S
The period of a state y is ℓ(y) if τy is distributed on the lattice Lℓ(y) given X0 = y. If T^n(y, y) = 0 for
all n, define ℓ(y) = 0.
The state y is
5. periodic if ℓ(y) ≥ 2,
6. aperiodic if ℓ(y) = 1,
7. ergodic if it is recurrent and aperiodic.
For the symmetric simple random walk, all states are null recurrent with period 2. For the asymmetric
simple random walk, all states are transient with period 2.
If k, and hence m + k + n, is not a multiple of ℓ(y), then T^{m+k+n}(y, y) = 0. Because T^m(y, x)T^n(x, y) > 0,
T^k(x, x) = 0. Conversely, if T^k(x, x) ≠ 0, then k must be a multiple of ℓ(y). Thus, ℓ(x) is a multiple of ℓ(y).
Reverse the roles of x and y to conclude that ℓ(y) is a multiple of ℓ(x). This can happen when and only
when ℓ(x) = ℓ(y).
Denote
ρxy = Px{τy < ∞};
then, by appealing to the results in renewal theory,
y is recurrent if and only if ρyy = 1 if and only if Σ_{n=0}^∞ T^n(y, y) = ∞.
Let Ny be the number of visits to state y. Then the sum above is just Ey Ny. If y is transient, then
ρyy < 1 and Σ_{n=1}^∞ T^n(y, y) = ρyy/(1 − ρyy) < +∞.
Proposition 5.27. From a recurrent state, only recurrent states can be reached.
Proof. Assume x is recurrent and select y so that x → y. By the previous proposition, y → x. As we argued
to show that x and y have the same period, pick m and n so that T^n(x, y) > 0 and T^m(y, x) > 0, and so
T^m(y, x) T^k(x, x) T^n(x, y) ≤ T^{m+k+n}(y, y).
Therefore,
Σ_{k=0}^∞ T^k(y, y) ≥ Σ_{k=0}^∞ T^{m+k+n}(y, y) ≥ Σ_{k=0}^∞ T^m(y, x) T^k(x, x) T^n(x, y) = T^n(x, y) T^m(y, x) Σ_{k=0}^∞ T^k(x, x).
Because x is recurrent, the last sum is infinite. Because T^n(x, y) T^m(y, x) > 0, the first sum is infinite and y
is recurrent.
Corollary 5.28. If for some y, x → y but y ↛ x, then x cannot be recurrent.
Consequently, the state space S has a partition {T, Ri, i ∈ I} into T, the transient states, and the Ri,
closed irreducible sets of recurrent states.
Remark 5.29. 1. All states in the interior are transient for a simple random walk with absorption. The
endpoints are absorbing. The same statement holds for the Wright-Fisher model.
2. In a branching process, if ν{0} > 0, then x → 0 but 0 ↛ x, and so all states except 0 are transient.
The state 0 is absorbing.
Theorem 5.30. If C is a finite closed set, then C contains a recurrent state. If C is irreducible, then all
states in C are recurrent.
Proof. By the theorem above, the first statement implies the second. Choose x ∈ C. If all states are transient,
then Ex Ny < ∞. Therefore,
∞ > Σ_{y∈C} Ex Ny = Σ_{y∈C} Σ_{n=1}^∞ T^n(x, y) = Σ_{n=1}^∞ Σ_{y∈C} T^n(x, y) = Σ_{n=1}^∞ 1 = ∞,
a contradiction.
Example 5.32 (Birth and death chains). The harmonic functions h satisfy
λx(h(x + 1) − h(x)) = µx(h(x) − h(x − 1)).
Now, if h is harmonic, then any linear function of h is harmonic. Thus, for simplicity, take h(0) = 0 and
h(1) = 1 to obtain
h(x) = Σ_{z=0}^{x−1} Π_{y=1}^{z} (µy/λy).
Note that the terms in the sum for h are positive. Thus, the limit as x → ∞ exists. If
h(∞) = Σ_{z=0}^∞ Π_{y=1}^{z} (µy/λy) < ∞,
then h is a bounded harmonic function and X is transient. This will always occur, for example, if
lim inf_{x→∞} λx/µx > 1,
that is, if births are sufficiently more likely than deaths.
Let a < x < b and define τa = min{n > 0; Xn = a}, τb = min{n > 0; Xn = b} and τ = min{τa , τb }. Then
{h(Xmin{n,τ } ); n ≥ 1} is a bounded martingale. This martingale has a limit, which must be h(a) or h(b). In
either case, τ < ∞ a.s. and
h(x) = Ex h(Xτ) = h(a)Px{Xτ = a} + h(b)Px{Xτ = b} = h(a)Px{τa < τb} + h(b)Px{τb < τa},
or
Px{τa < τb} = (h(b) − h(x)) / (h(b) − h(a)).
If the chain is transient, let b → ∞ to obtain
Px{τa < ∞} = (h(∞) − h(x)) / (h(∞) − h(a)).
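These hitting probabilities can be evaluated directly from the harmonic function h. A sketch, using
hypothetical constant birth and death probabilities λy = p and µy = 1 − p:

    from itertools import accumulate

    def h_values(lam, mu, b):
        # h(0) = 0 and h(x) = sum_{z=0}^{x-1} prod_{y=1}^{z} mu[y]/lam[y].
        terms = [1.0]
        for y in range(1, b):
            terms.append(terms[-1] * mu[y] / lam[y])
        return [0.0] + list(accumulate(terms))  # h(0), h(1), ..., h(b)

    p, b = 0.6, 10  # assumed parameters
    lam = {y: p for y in range(1, b)}
    mu = {y: 1 - p for y in range(1, b)}
    h = h_values(lam, mu, b)
    a, x = 0, 3
    print((h[b] - h[x]) / (h[b] - h[a]))  # P_x{tau_a < tau_b}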
Exercise 5.33. Apply the results above to the asymmetric random walk with p < 1/2.
To obtain a sufficient condition for recurrence we have the following.
Theorem 5.34. Suppose that X is irreducible and that, off a finite set F, h is a positive superharmonic
function for the transition function T associated with X. If, for every M > 0, h^{−1}([0, M]) is a finite set,
then X is recurrent.
Proof. It suffices to show that one state is recurrent. Define the stopping times τF = min{n ≥ 0; Xn ∈ F}
and τM = min{n ≥ 0; h(Xn) ≥ M}.
Because the chain is irreducible and h^{−1}([0, M]) is a finite set, τM is finite almost surely. Note that the
optional sampling theorem implies that {h(X_{min{n,τF}}); n ≥ 0} is a supermartingale. Consequently,
h(x) ≥ Ex[h(X_{min{τM,τF}})] ≥ M Px{τM < τF},
because h is positive and h(X_{min{τM,τF}}) ≥ M on the set {τM < τF}. Now, let M → ∞ to see that
Px{τF = ∞} = 0 for all x ∈ S. Because Px{τF < ∞} = 1, whenever the Markov chain leaves F, it returns
almost surely, and thus,
Px{Xn ∈ F i.o.} = 1 for all x ∈ S.
Because F is finite, for some x̃ ∈ F,
Px̃{Xn = x̃ i.o.} = 1,
i.e., x̃ is recurrent.
Remark 5.35. By adding a constant to h, the condition that h is bounded below is sufficient to prove the
theorem above.
Remark 5.36. Returning to irreducible birth and death processes, we see that h(∞) = ∞ is a necessary and
sufficient condition for the process to be recurrent.
To determine the mean exit times, consider the solutions to
T f(x) = f(x) − 1.
Then
Mn(f) = f(Xn) − Σ_{k=0}^{n−1} (T − I)f(Xk) = f(Xn) + n
is a martingale, and by the optional sampling theorem,
f(x) = Ex M(f)0 = Ex M(f)τ = Ex f(Xτ) + Ex τ.
For the stopping time above, if we solve T f(x) = f(x) − 1 with f(a) = f(b) = 0, then
f(x) = Ex τ.
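Numerically, Ex τ is the solution of the linear system (I − T)f = 1 on the interior states with f = 0 on
the boundary. A minimal sketch for a simple random walk absorbed at both endpoints (an assumed
instance; for p = 1/2 the answer is x(b − x)):

    import numpy as np

    def mean_exit_times(T, interior):
        # Solve (I - T) f = 1 on the interior; f = 0 on the boundary,
        # so boundary columns simply drop out of the system.
        A = np.eye(len(interior)) - T[np.ix_(interior, interior)]
        return np.linalg.solve(A, np.ones(len(interior)))

    n, p = 6, 0.5
    T = np.zeros((n, n))
    T[0, 0] = T[n - 1, n - 1] = 1.0
    for x in range(1, n - 1):
        T[x, x + 1], T[x, x - 1] = p, 1 - p
    print(mean_exit_times(T, list(range(1, n - 1))))  # matches x(5 - x)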
Exercise 5.37. 1. For a simple random walk on Z, let a < x < b and set
τ = min{n ≥ 0; Xn = a or b}.
Find Ex τ .
or, in matrix form,
π = πT.
To explain the term stationary measure, note that if µ is the transition function for a Markov chain X with
initial distribution α, then
∫_S µ(x, A) α(dx) = ∫_S Px{X1 ∈ A} α(dx) = Pα{X1 ∈ A}.
Thus, if π is a probability measure and is the initial distribution, then the identity above becomes
Pπ{X1 ∈ A} = π(A). Call a Markov chain stationary if Xn has the same distribution for all n.
Example 5.39. 1. Call a transition matrix doubly stochastic if
Σ_{x∈S} T(x, y) = 1.
For this case, π{x} constant is a stationary distribution. The converse is easily seen to hold.
T(x, x + 1) = (2N − x)/(2N) and T(x, x − 1) = x/(2N).
If a Markov chain is in its stationary distribution π, then Bayes’ formula allows us to look at the process
in reverse time.
T̃(x, y) = Pπ{Xn = y | Xn+1 = x} = Pπ{Xn+1 = x | Xn = y} Pπ{Xn = y} / Pπ{Xn+1 = x} = (π{y}/π{x}) T(y, x).
Definition 5.40. The Markov chain X̃ in reverse time is called the dual Markov process and T̃ is called the
dual transition matrix. If T = T̃ , then the Markov chain is called reversible and the stationary distribution
satisfies detailed balance
π{y}T (y, x) = π{x}T (x, y). (5.2)
Sum this equation over y to see that it is a stronger condition for π than stationarity.
Exercise 5.41. If π satisfies detailed balance for a transition matrix T , then
π{y}T n (y, x) = π{x}T n (x, y).
Example 5.42 (Birth and death chains). Here we look for a measure that satisfies detailed balance:
π{x}µx = π{x − 1}λx−1,
π{x} = π{x − 1} (λx−1/µx) = π{0} Π_{y=1}^x (λy−1/µy).
This can be normalized to give a stationary probability distribution provided that
Σ_{x=0}^∞ Π_{y=1}^x (λy−1/µy) < ∞.
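The detailed balance recursion translates directly into a computation of π. A sketch with assumed
constant birth and death probabilities, chosen only for illustration:

    import numpy as np

    def bd_stationary(lam, mu, n):
        # pi{x} proportional to prod_{y=1}^{x} lam(y-1)/mu(y), truncated at n.
        w = np.ones(n + 1)
        for x in range(1, n + 1):
            w[x] = w[x - 1] * lam(x - 1) / mu(x)
        return w / w.sum()

    pi = bd_stationary(lambda x: 0.3, lambda x: 0.5, n=30)
    # Detailed balance check: pi{x} mu_x = pi{x-1} lambda_{x-1}.
    print(max(abs(pi[x] * 0.5 - pi[x - 1] * 0.3) for x in range(1, 31)))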
To decide which Markov chains are reversible, we have the following cycle condition due to Kolmogorov.
Theorem 5.43. Let T be the transition matrix for an irreducible Markov chain X. A necessary and sufficient
condition for the existence of a reversible measure is
1. T(x, y) > 0 if and only if T(y, x) > 0, and
2. for any loop {x0, x1, . . . , xn = x0} with Π_{i=1}^n T(xi, xi−1) > 0,
Π_{i=1}^n T(xi−1, xi)/T(xi, xi−1) = 1.
Proof. Assume that π is a reversible measure. Because x ↔ y, there exists n so that T n (x, y) > 0 and
T n (y, x) > 0. Use
π{y}T n (y, x) = π{x}T n (x, y).
to see that any reversible measure must give positive probability to each state. Now return to the defining
equation for reversible measures to see that this proves 1.
For 2, note that for a reversible measure,
Π_{i=1}^n T(xi−1, xi)/T(xi, xi−1) = Π_{i=1}^n π{xi}/π{xi−1}.
Because π{xn} = π{x0}, each factor π{xi} appears in both the numerator and the denominator, so the
product is 1.
To prove sufficiency, choose x0 ∈ S and set π{x0} = π0. Because X is irreducible, for each y ∈ S, there
exists a sequence x0, x1, . . . , xn = y so that
Π_{i=1}^n T(xi−1, xi) > 0.
Define
π{y} = π0 Π_{i=1}^n T(xi−1, xi)/T(xi, xi−1).
The cycle condition guarantees that this definition is independent of the choice of sequence. Now add a
point x to the path to obtain
π{x} = π{y} T(y, x)/T(x, y),
and thus π is reversible.
Exercise 5.44. Show that an irreducible birth-death process satisfies the Kolmogorov cycle condition.
If we begin the Markov chain at a recurrent state x, we can divide the evolution of the chain into the
excursions
(X1, . . . , X_{τ_x^1}), (X_{τ_x^1 +1}, . . . , X_{τ_x^2}), · · · .
On average, these excursions ought to visit each state y in proportion to the value of the stationary
distribution. This is the content of the following theorem.
because Px{τx = 0} = 0. If z ≠ x, this sum becomes
Σ_{n=0}^∞ Px{Xn+1 = z, τx > n} = µx{z}.
Remark 5.46. If µx{y} = 0, then the individual terms in the sum (5.3), Px{Xn = y, τx > n}, are all 0. Thus,
y is not accessible from x on the first x-excursion. By the strong Markov property, y cannot be accessible
from x at all. Stated in the contrapositive, x → y implies µx{y} > 0.
Theorem 5.47. Let T be the transition matrix for an irreducible and recurrent Markov chain. Then the
stationary measure is unique up to a constant multiple.
Note that the sum above is from 1 to τx0 rather than from 0 to τx0 − 1. To obtain equality in this equation,
note that
ν{x0} = Σ_{z∈S} ν{z} T^n(z, x0) ≥ ν{x0} Σ_{z∈S} µx0{z} T^n(z, x0) = ν{x0} µx0{x0} = ν{x0}.
Use the fact that πT^n = π and Fubini's theorem to conclude that
Σ_{x∈S} π{x} Σ_{n=1}^∞ T^n(x, y) = Σ_{n=1}^∞ π{y} = ∞.
Therefore,
∞ = Σ_{x∈S} π{x} ρxy/(1 − ρyy) ≤ 1/(1 − ρyy).
Thus, ρyy = 1.
Theorem 5.49. If T is the transition matrix for an irreducible Markov chain and if π is a stationary
probability distribution, then
π{x} = 1/(Ex τx).
Proof. Because the chain is irreducible, π{x} > 0. Because the stationary measure is unique,
π{x} = µx{x}/(Ex τx) = 1/(Ex τx).
So, the two halves will pass through equilibrium about exp(2 × 10^12) times in about the same time that it
takes the molecules to become 0.01% removed from equilibrium once.
lim_{n→∞} T^{nℓ(x)}(x, x) = ℓ(x)/(Ex τx).
Now, by the strong Markov property,
T^n(x, y) = Σ_{k=1}^n Px{τy = k} T^{n−k}(y, y).
Thus, by the dominated convergence theorem, we have, for the aperiodic case,
lim_{n→∞} T^n(x, y) = lim_{n→∞} Σ_{k=1}^n Px{τy = k} T^{n−k}(y, y) = Σ_{k=1}^∞ Px{τy = k} (1/(Ey τy)) = ρxy/(Ey τy).
Theorem 5.53 (Ergodic theorem for Markov chains). Assume X is an ergodic Markov chain and that
∫_S |f(y)| µx(dy) < ∞. Then, for any initial distribution α,
(1/n) Σ_{k=1}^n f(Xk) →a.s. ∫_S f(y) π(dy).
In the first sum, the random variable is fixed and so, as n → ∞, the term has zero limit with probability
one.
For k ≥ 1, we have the independent and identically distributed sequence of random variables
V(f)_k = f(X_{τ_x^k}) + · · · + f(X_{τ_x^{k+1} − 1}).
Claim I.
Ex V(f)1 = ∫_S f(y) µx(dy) = ( ∫_S f(y) π(dy) ) Ex τx.
n/m = (n − τ_x^m)/m + τ_x^m/m.
Note that {τ_x^{m+1} − τ_x^m : m ≥ 1} is an independent and identically distributed sequence with mean
Ex τ_x^1. Thus,
τ_x^m/m → Ex τ_x^1 a.s.
by the strong law of large numbers, and
In summary,
(1/n) Σ_{k=1}^n f(Xk) − (1/(m Ex τx)) Σ_{k=1}^{m−1} V(f)_k → 0 a.s.
The ergodic theorem corresponds to a strong law. Here is the corresponding central limit theorem. We
begin with a lemma.
Lemma 5.54. Let U be a positive random variable and let p > 0 with EU^p < ∞. Then
lim_{n→∞} n^p P{U > n} = 0.
Proof.
n^p P{U > n} = ∫_n^∞ n^p dFU(u) ≤ ∫_n^∞ u^p dFU(u).
Because
EU^p = ∫_0^∞ u^p dFU(u) < ∞,
the last term has limit 0 as n → ∞.
Theorem 5.55. In addition to the conditions of the ergodic theorem, assume that
Σ_{y∈S} f(y)π{y} = 0 and Ex[V(f)1²] < ∞.
Then
(1/√n) Σ_{k=1}^n f(Xk)
converges in distribution to σZ/√(Ex τx), where Z is a standard normal random variable and σ² is the
variance of V(f)1.
The same argument in the proof of the ergodic theorem that shows that the first term tends almost surely
to zero applies in this case.
and that
P{ (1/√n) max_{1≤j≤n} V(|f|)_j > ε } ≤ Σ_{j=1}^n P{V(|f|)_j² > ε²n} ≤ n P{V(|f|)_1² > ε²n}.
Now, use the lemma with U = V(|f|)1²/ε² to see that
(1/√n) max_{1≤j≤n} V(|f|)_j → 0
as n → ∞ in probability and hence in distribution. Because this bounds the third term, both the first and
third terms converge in distribution to 0.
Recall that if Zn →D Z and Yn →D c, a constant, then Yn + Zn →D c + Z and Yn Zn →D cZ. The first of
these two conclusions shows that it suffices to establish convergence in distribution of the middle term.
With this in mind, note that, from the central limit theorem for independent identically distributed
random variables, we have that
Zn = (1/√m) Σ_{j=1}^m V(f)_j →D σZ.
Here, Z is a standard normal random variable and σ² is the variance of V(f)1. From the proof of the ergodic
theorem, with m defined as before,
n/m → Ex τx
almost surely and hence in distribution.
By the continuous mapping theorem,
√(m/n) →D 1/√(Ex τx),
and thus,
(1/√n) Σ_{k=τ_x^1}^{τ_x^m −1} f(Xk) = (1/√n) Σ_{k=1}^{m−1} V(f)_k = √(m/n) Zn →D σZ/√(Ex τx).
Example 5.56 (Markov chain Monte Carlo). If the goal is to compute an integral
∫ g(x) π(dx),
then, in circumstances in which the probability measure π is easy to simulate, simple Monte Carlo suggests
creating independent samples X0(ω), X1(ω), . . . having distribution π. Then, by the strong law of large
numbers,
lim_{n→∞} (1/n) Σ_{j=0}^{n−1} g(Xj(ω)) = ∫ g(x) π(dx) with probability one.
where Z is a standard normal random variable, and σ² = ∫ g(x)² π(dx) − ( ∫ g(x) π(dx) )².
Markov chain Monte Carlo performs the same calculation using an irreducible Markov chain X̃0(ω), X̃1(ω), . . .
having stationary distribution π. The most commonly used strategy to define this sequence is the method
developed by Metropolis and extended by Hastings.
To introduce the Metropolis-Hastings algorithm, let π be a probability measure on a countable state space S,
π ≠ δx0, and let T be a Markov transition matrix on S. Further, assume that the chain immediately enters
only states that have positive π probability; in other words, if T(x, y) > 0, then π{y} > 0. Define
α(x, y) = min{ π{y}T(y, x) / (π{x}T(x, y)), 1 }  if π{x}T(x, y) > 0,
α(x, y) = 1                                      if π{x}T(x, y) = 0.
If X̃n = x, generate a candidate value y with probability T(x, y). With probability α(x, y), this candidate
is accepted and X̃n+1 = y. Otherwise, the candidate is rejected and X̃n+1 = x. Consequently, the transition
matrix for this Markov chain is
T̃(x, y) = α(x, y)T(x, y) for y ≠ x, with T̃(x, x) = 1 − Σ_{y≠x} T̃(x, y).
Note that this algorithm only requires that we know the ratios π{y}/π{x}, and thus we are not required
to normalize π. Also, if π{x}T(x, y) > 0 and if π{y} = 0, then α(x, y) = 0, and thus the chain cannot visit
states with π{y} = 0.
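The accept/reject dynamics described above translates directly into code. A sketch of one Metropolis-
Hastings update; the target π (given only up to normalization) and the nearest-neighbor proposal are
assumed examples:

    import random

    def mh_step(x, pi, propose, T, rng):
        # Propose y ~ T(x, .); accept with probability
        # alpha(x, y) = min(pi(y) T(y, x) / (pi(x) T(x, y)), 1).
        y = propose(x, rng)
        num, den = pi(y) * T(y, x), pi(x) * T(x, y)
        alpha = 1.0 if den == 0 else min(num / den, 1.0)
        return y if rng.random() < alpha else x

    # Unnormalized target on {0,...,20}; only ratios pi{y}/pi{x} are used.
    pi = lambda x: (x + 1) * 0.7**x if 0 <= x <= 20 else 0.0
    T = lambda x, y: 0.5 if abs(x - y) == 1 else 0.0  # symmetric proposal
    propose = lambda x, rng: x + rng.choice([-1, 1])

    rng, x, counts = random.Random(1), 10, [0] * 21
    for _ in range(200_000):
        x = mh_step(x, pi, propose, T, rng)
        counts[x] += 1
    # The empirical frequencies approximate pi after normalization.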
Claim. T̃ is the transition matrix for a reversible Markov chain with stationary distribution π.
We must show that π satisfies the detailed balance equation (5.2). Because it holds trivially when x = y,
we can limit ourselves to the case x ≠ y.
Case 1. π{x}T(x, y) = 0.
In this case α(x, y) = 1 and π{x}T̃(x, y) = π{x}T(x, y) = 0. If π{y}T(y, x) = 0, then, by the same reasoning,
π{y}T̃(y, x) = 0. If π{y}T(y, x) > 0, then
π{y}T̃(y, x) = π{y} ( π{x}T(x, y) / (π{y}T(y, x)) ) T(y, x) = π{x}T(x, y) = 0.
In either case, detailed balance holds.
Case 2. π{x}T(x, y) > 0 and π{y}T(y, x) ≤ π{x}T(x, y).
In this case α(x, y) = π{y}T(y, x)/(π{x}T(x, y)), so
π{x}T̃(x, y) = π{x} ( π{y}T(y, x) / (π{x}T(x, y)) ) T(x, y) = π{y}T(y, x).
In addition, α(y, x) = 1 and
π{y}T̃(y, x) = π{y}T(y, x).
Thus, the claim holds.
Example 5.57. The original Metropolis algorithm had T(x, y) = T(y, x) and thus
α(x, y) = min{ π{y}/π{x}, 1 }.
Example 5.58 (Independent Chains). Let {Xn ; n ≥ 0} be independent discrete random variables with
common mass function f(x) = P{X0 = x}, so that the proposal T(x, y) = f(y) does not depend on x. Then
α(x, y) = min{ w(y)/w(x), 1 },
where w(x) = π{x}/f(x) is the importance weight function that would be used in importance sampling if
the observations were generated from f.
Exercise 5.59. Take T to be the transition matrix for a random walk on a graph and π to be uniform
measure. Describe T̃ .
τ = inf{t ≥ 0; Xt = X̃t},
and the coupling is called successful if τ is almost surely finite. In particular, X and X̃ have the same
transition function T. If νt and ν̃t are the distributions of the processes at time t, then
||νt − ν̃t||TV ≤ P{τ > t}.
Theorem 5.60. Let X and X̃ be Markov chains with common transition matrix T. Let νt and ν̃t denote,
respectively, the distribution of Xt and X̃t. Suppose that, for some m ≥ 1 and some ε > 0,
T^m(x, y) ≥ ε for all x, y ∈ S.
Then
||νt − ν̃t||TV ≤ (1 − ε)^{[t/m]}.
Proof. Let X and X̃ move independently until the coupling time. Then
P{τ ≤ m | X0 = x, X̃0 = x̃} ≥ Σ_{y∈S} T^m(x, y)T^m(x̃, y) ≥ ε Σ_{y∈S} T^m(x, y) = ε,
and P{τ > m | X0 = x, X̃0 = x̃} ≤ 1 − ε. By the Markov property, for k = [t/m],
P{τ ≥ t} ≤ P{τ ≥ km} = Σ_{x,x̃∈S} P{τ ≥ km | X0 = x, X̃0 = x̃} ν0{x} ν̃0{x̃} ≤ (1 − ε)^k.
The rate of convergence to the stationary distribution is obtained by taking X̃ to start in the stationary
distribution. In this case, for each t, ν̃t = π, the stationary distribution.
Thus, the "art" of coupling is to arrange for the coupling time to be as small as possible.
Example 5.61 (Simple Random Walk with Reflection). Let {Yn ; n ≥ 1} be an independent sequence of
random variables with P{Yn = 1} = p = 1 − P{Yn = −1}. Couple X and X̃ as follows:
Assume X0 and X̃0 differ by an even integer.
At time n, if Yn = 1, then, if it is possible, set Xn = Xn−1 + 1 and X̃n = X̃n−1 + 1. This cannot occur
if one of the two chains is at state M; in this circumstance, that chain must decrease by 1. Similarly, if
Yn = −1, then, when possible, set Xn = Xn−1 − 1 and X̃n = X̃n−1 − 1. This cannot occur when a chain
is at state −M, in which case that chain must increase by 1.
If, for example, X0 ≤ X̃0, then Xn ≤ X̃n for all n ≥ 1. Thus, the coupling time satisfies τ ≤ min{n ≥ 0;
Xn = M}.
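The coupling is straightforward to simulate, which gives a Monte Carlo estimate of the coupling time of
Exercise 5.62. A sketch with assumed parameters:

    import random

    def coupled_move(y, M):
        # Both chains attempt the same +/-1 step y, reflected at -M and M.
        def move(z):
            if y == 1:
                return z + 1 if z < M else z - 1  # forced down at M
            return z - 1 if z > -M else z + 1     # forced up at -M
        return move

    def coupling_time(x0, xt0, M, p, rng):
        x, xt, n = x0, xt0, 0
        while x != xt:
            y = 1 if rng.random() < p else -1
            step = coupled_move(y, M)
            x, xt, n = step(x), step(xt), n + 1
        return n

    rng, M, p = random.Random(2), 5, 0.5
    times = [coupling_time(-4, 4, M, p, rng) for _ in range(2000)]
    print(sum(times) / len(times))  # estimated mean coupling time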
Exercise 5.62. Find E(x,x̃) τ in the example above and show that the coupling above is successful.
A second approach, closely associated with coupling, for establishing rates of convergence uses strong stationary times.
Definition 5.63. Let X be a time homogeneous Ft -Markov chain on a countable state space S having unique
stationary distribution π. An Ft -stopping time τ is called a strong stationary time provided that
1. Xτ has distribution π
2. Xτ and τ are independent.
Remark 5.64. The properties of a strong stationary time imply that, for s ≤ t,
P{Xt = y, τ = s} = π{y}P{τ = s}.
Now, sum on s ≤ t to obtain
P{Xt = y, τ ≤ t} = Σ_{s=0}^t P{Xt = y, τ = s} = Σ_{s=0}^t P{Xt = y | τ = s}P{τ = s} = π{y}P{τ ≤ t},
because
P{Xt = y | τ = s} = Σ_{x∈S} P{Xt = y, Xs = x | τ = s} = Σ_{x∈S} P{Xt = y | Xs = x, τ = s}P{Xs = x | τ = s}
= Σ_{x∈S} P{Xt = y | Xs = x}P{Xτ = x | τ = s} = Σ_{x∈S} T^{t−s}(x, y)π{x} = π{y}.
Example 5.65 (Top to random shuffle). Let X be a Markov chain whose state space is the set of permutations
on N letters, i.e., the order of the cards in a deck having a total of N cards. We write Xn(k) to be the card in
the k-th position after n shuffles. The transitions are to take the top card and place it at random uniformly in
the deck.
the deck. In the exercise to follow, you are asked to check that the uniform distribution on the N ! permutations
is stationary. Define
τ = inf{n > 0; Xn (1) = X0 (N )} + 1,
i.e., the first shuffle after the original bottom card has moved to the top.
Claim. τ is a strong stationary time.
We show that, at any time at which there are k cards under the original bottom card (that is,
Xn(N − k + 1) = X0(N)), all k! orderings of these k cards are equally likely.
To establish a proof by induction on k, note that the statement above is obvious in the case k = 1.
For the case k = j, we assume that all j! arrangements of the j cards under X0(N) are equally likely. When
one additional card is placed under X0(N), each of the j + 1 available positions is equally likely, and thus
all (j + 1)! arrangements of the j + 1 cards under X0(N) are equally likely.
Consequently, the first time that the original bottom card is placed into the deck, all N! orderings are
equally likely, independent of the value of τ.
Exercise 5.66. Check that the uniform distribution is the stationary distribution for the top to random
shuffle.
We will now establish an inequality similar to (5.4) for strong stationary times. For this segment of the
notes, X is an ergodic Markov chain on a countable state space S having transition matrix T and unique
stationary distribution π. Set
st = sup{ 1 − T^t(x, y)/π{y} ; x, y ∈ S }.
Then
||νt − π||TV ≤ st.
Proof.
||νt − π||TV = Σ_{y∈S} (π{y} − νt{y}) I{π{y}>νt{y}} = Σ_{y∈S} π{y} (1 − νt{y}/π{y}) I{π{y}>νt{y}}
= Σ_{y∈S} π{y} Σ_{x∈S} ν0{x} (1 − T^t(x, y)/π{y}) I{π{y}>νt{y}}
≤ Σ_{y∈S} π{y} Σ_{x∈S} ν0{x} st = st.
Example 5.71 (Top to random shuffle). Note that τ = 1 + Σ_{k=1}^{N−1} σk, where σk is the number of shuffles
necessary to increase the number of cards under the original bottom card from k − 1 to k. These random
variables are independent, and σk − 1 has a Geo(k/N) distribution.
We have seen these distributions before, albeit in the reverse order, in the coupon collector's problem.
In that case, we choose from a finite set, the "coupons", uniformly with replacement and set τk to be the
minimum time m so that the range of the first m choices is k. Then τk − τk−1 − 1 has a Geo(1 − (k − 1)/N)
distribution.
Thus, the time τ is equidistributed with the number of purchases until all the coupons are collected. Define
Aj,t to be the event that the j-th coupon has not been chosen by time t. Then
P{τ > t} = P( ∪_{j=1}^N Aj,t ) ≤ Σ_{j=1}^N P(Aj,t) = Σ_{j=1}^N (1 − 1/N)^t = N(1 − 1/N)^t ≤ N e^{−t/N}.
This shows that the shuffle of the deck has a threshold time of N log N; after that, the tail of the distribution
of the stopping time decreases exponentially.
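Simulating τ from the geometric description above (with the Geo(k/N) parameters as reconstructed here)
allows the empirical tail P{τ > t} to be compared with the bound N e^{−t/N}:

    import math
    import random

    def top_to_random_time(N, rng):
        # tau = 1 + sum of sigma_k, where sigma_k - 1 ~ Geo(k/N).
        t = 1
        for k in range(1, N):
            p, sigma = k / N, 1
            while rng.random() >= p:
                sigma += 1
            t += sigma
        return t

    rng, N, trials = random.Random(3), 52, 5000
    samples = [top_to_random_time(N, rng) for _ in range(trials)]
    t0 = int(N * math.log(N))
    for t in (t0, t0 + 2 * N):
        tail = sum(s > t for s in samples) / trials
        print(t, tail, N * math.exp(-t / N))  # empirical tail vs. bound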
lim_{n→∞} T^n(x, y) = 0, x ∈ S, y ∈ C.
[Figure 2: Reading from left to right, the plot of the bound given by the strong stationary time of the total
variation distance to the uniform distribution after t shuffles for a deck of size N = 26, 52, 104, and 208,
based on 5,000 simulations of the stopping time τ. Horizontal axis: t = shuffles/N; vertical axis: P{τN > t}.]
If we limit our scope at time n to those paths that remain in C, then we are looking at the limiting
behavior of the conditional probabilities
T̃^n(x, y) = Px{Xn = y | Xn ∈ C} = T^n(x, y) / Σ_{z∈C} T^n(x, z), x, y ∈ C.
What we shall learn is that there exists a number ρC > 1 such that
Lemma 5.72. Assume that a real-valued sequence {an ; n ≥ 0} satisfies the subadditive condition
am+n ≤ am + an.
Then lim_{n→∞} an/n exists and equals a = inf_m am/m.
Proof. Fix m and write n = km + ℓ with 0 ≤ ℓ < m. Then
an ≤ k am + aℓ.
As n → ∞, n/k → m. Therefore,
lim sup_{n→∞} an/n ≤ am/m.
Because this holds for all m,
lim sup_{n→∞} an/n ≤ a.
On the other hand, by the definition of a as an infimum,
an ≥ na.
Because
T^{m+n}(x, x) ≥ T^m(x, x) T^n(x, x),
the sequence an = − log T^n(x, x) satisfies the subadditivity property in the lemma above. Thus,
lim_{n→∞} −(1/n) log T^n(x, x)
exists. Call the limit ℓx.
Corollary 5.75. Assume that C is a communicating class of states having common period d. Then there
exists λC ∈ [0, 1] such that, for some n depending on x and y,
λC = lim_{k→∞} (T^{n+kd}(x, y))^{1/(n+kd)}.
Definition 5.78. Call a state x ∈ C, ρC -recurrent if Gxx (ρC ) = +∞ and ρC -transient if Gxx (ρC ) < +∞.
Remark 5.79. 1. ρC -recurrence is equivalent to Gτ,xx (ρC ) = 1.
2. ρC -transience is equivalent to Gτ,xx (ρC ) < 1.
3. 1-recurrent and 1-transient are simply recurrent and transient.
Exercise 5.80. The properties of ρC-recurrence and ρC-transience are properties of the equivalence class
under communication. Hint: T^m(x, y) T^n(y, y) T^ℓ(y, x) ρC^{m+n+ℓ} ≤ T^{m+n+ℓ}(x, x) ρC^{m+n+ℓ}.
Example 5.81. For a simple asymmetric random walk having p as the probability of a step right, T^m(x, x) = 0
if m is odd, and
T^{2n}(x, x) = (2n choose n) p^n (1 − p)^n.
By Stirling's approximation, n! ≈ √(2π) n^{n+1/2} e^{−n}, we have
(2n choose n) ≈ 2^{2n+1/2}/√(2πn)
and
lim_{n→∞} T^{2n}(x, x)^{1/2n} = lim_{n→∞} 2√(p(1 − p)) (1/√(πn))^{1/(2n)} = 2√(p(1 − p)).
Thus,
ρC = 1/(2√(p(1 − p))).
In addition,
Gxx(z) = Σ_{n=0}^∞ (2n choose n) (p(1 − p)z²)^n = 1/√(1 − 4p(1 − p)z²).
Consequently, Gxx(ρC) = +∞ and the chain is ρC-recurrent.
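The limit 2√(p(1 − p)) = 1/ρC can be verified numerically by evaluating T^{2n}(x, x)^{1/2n} with
log-gamma arithmetic. A sketch:

    import math

    def log_T2n(n, p):
        # log T^{2n}(x, x) = log C(2n, n) + n log(p(1 - p)).
        return (math.lgamma(2 * n + 1) - 2 * math.lgamma(n + 1)
                + n * math.log(p * (1 - p)))

    p = 0.3  # an assumed step-right probability
    for n in (10, 100, 1000, 10000):
        print(n, math.exp(log_T2n(n, p) / (2 * n)))  # T^{2n}(x,x)^{1/2n}
    print(2 * math.sqrt(p * (1 - p)))  # the limit, 1/rho_C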
Thus, for n ≥ 1,
T_A^n(x, y) = Px{Xn = y, Xk ∉ A for k = 1, 2, . . . , n − 1}.
Denote the generating functions associated with the taboo probabilities by
G_{A,xy}(z) = Σ_{n=0}^∞ T_A^n(x, y) z^n.
Lemma 5.83. Gx,xx(ρC) ≤ 1, and Gx,yx(ρC) < +∞ for all x, y ∈ C.
Proof.
Gx,xx(ρC) ≤ Gτ,xx(ρC) ≤ 1.
For x ≠ y, because y → x, we can choose m > 0 so that T_x^m(y, x) > 0. Thus, for any n ≥ 0,
Gx,yx(ρC) ≤ (1/(T_x^m(y, x)ρC^m)) Σ_{n=1}^∞ Px{τx = m + n} ρC^{m+n} ≤ Gτ,xx(ρC)/(T_x^m(y, x)ρC^m) < +∞.
Then
1. Σ_{x∈C} m{x} T^n(x, y) r^n ≤ m{y} for all y ∈ C and n ≥ 0.
By the Chapman-Kolmogorov equation, the first term is T^n(x, y). Because C is an equivalence class
under communication, we must either have x ↛ z or z ↛ y. In either case, T(x, z)T^{n−1}(z, y) = 0 and
the second sum vanishes. Thus,
Σ_{x∈C} m{x} T^n(x, y) r^n = Σ_{x∈C} m{x} ( Σ_{z∈C} T(x, z) T^{n−1}(z, y) ) r^n
= Σ_{z∈C} ( Σ_{x∈C} m{x} T(x, z) r ) T^{n−1}(z, y) r^{n−1} ≤ Σ_{z∈C} m{z} T^{n−1}(z, y) r^{n−1}.
3. From 1 and 2,
T^{nd}(x, x) r^{nd} ≤ 1.
Thus,
ρC^{−1} = lim_{n→∞} T^{nd}(x, x)^{1/nd} ≤ r^{−1},
or r ≤ ρC.
Proof. 1. We have, by the lemma, that m̃{y} < ∞. Also, m̃{x} = Gx,xx(ρC) = Gτ,xx(ρC) ≤ 1. Using
the definition of taboo probabilities, we see that
Σ_{y∈C} m̃{y}T(y, z)ρC = Σ_{n=1}^∞ ρC^{n+1} Σ_{y∈C} T_x^n(x, y)T(y, z)
= Σ_{n=1}^∞ ρC^{n+1} ( Σ_{y≠x} T_x^n(x, y)T(y, z) + T_x^n(x, x)T(x, z) )
= Σ_{n=1}^∞ ρC^{n+1} ( T_x^{n+1}(x, z) + T_x^n(x, x)T(x, z) )
= Σ_{n=1}^∞ T_x^{n+1}(x, z)ρC^{n+1} + ( Σ_{n=1}^∞ T_x^n(x, x)ρC^n ) ρC T(x, z)
= m̃{z} − Tx(x, z)ρC + m̃{x}ρC T(x, z) = m̃{z} − (1 − m̃{x})ρC T(x, z)
≤ m̃{z}.
2. First, assume that the solution m̃ is unique up to constant multiples and define
n{y} = m̃{y} if y ≠ x,  n{y} = 1 if y = x.
Then,
Σ_{y∈C} n{y}T(y, z)ρC = Σ_{y∈C} m̃{y}T(y, z)ρC + (1 − m̃{x})T(x, z)ρC ≤ m̃{z} ≤ n{z}.
Consequently, {n{y}; y ∈ C} is also a subinvariant measure, one that differs from m̃ only at x. By
uniqueness, we have that
1 = n{x} = m̃{x} = Gx,xx(ρC) = Gτ,xx(ρC).
Thus, x, and hence every state in C, is ρC-recurrent.
m{z} ≥ m{x} m̃{z}.
The case N = 1 is simply the definition of ρC-subinvariance. Thus, suppose that the inequality holds
for N. Then,
m{x} Σ_{n=1}^{N+1} T_x^n(x, z) ρC^n = m{x} Σ_{n=0}^{N} T_x^{n+1}(x, z) ρC^{n+1}
= m{x}T(x, z)ρC + m{x} Σ_{n=1}^{N} Σ_{y≠x} T_x^n(x, y)T(y, z) ρC^{n+1}
= m{x}T(x, z)ρC + Σ_{y≠x} ( m{x} Σ_{n=1}^{N} T_x^n(x, y)ρC^n ) T(y, z)ρC
≤ m{x}T(x, z)ρC + Σ_{y≠x} m{y}T(y, z)ρC = Σ_{y∈C} m{y}T(y, z)ρC ≤ m{z}.
Finally, because C is ρC-recurrent, m̃{x} = 1 and, from the computation above, we have the equality
Σ_{y∈C} m̃{y}T(y, z)ρC = m̃{z},
and thus
Σ_{y∈C} m̃{y}T^n(y, z)ρC^n = m̃{z}.
We have that m{z} ≥ m{x}m̃{z}. If we had strict inequality for some state, then, for some n,
m{x} ≥ Σ_{y∈C} m{y}T^n(y, x)ρC^n > Σ_{y∈C} m{x}m̃{y}T^n(y, x)ρC^n = m{x}m̃{x} = m{x},
a contradiction.
Note that T̃ is a transition function with irreducible state space C ∪ {∆}, and that
T̃²(x, y) = Σ_{z∈C} T̃(x, z)T̃(z, y) = Σ_{z∈C} (m{z}/m{x}) ρC T(z, x) · (m{y}/m{z}) ρC T(y, z) = (m{y}/m{x}) ρC² T²(y, x).
T̃^n(x, y) = (m{y}/m{x}) ρC^n T^n(y, x).
Consequently,
lim_{n→∞} (1/n) log T̃^n(x, y) = log ρC − log ρC = 0.
or
Σ_{x∈C} (m̃{x}/m{x}) T(y, x) ρC ≤ m̃{y}/m{y}.
Thus,
h(y) = m̃{y}/m{y}
is a strictly positive ρC-superharmonic function.
Remark 5.89. Because limn→∞ T n (x, y) exists and m{x} > 0 for all x,
Exercise 5.91. ρC-positive and ρC-null recurrence are properties of the equivalence class.
Theorem 5.92. Suppose that C is ρC-recurrent. Then m, the ρC-subinvariant measure, and h, the ρC-
superharmonic function, are unique up to constant multiples. In fact, m is ρC-invariant and h is ρC-harmonic.
The class is ρC-positive recurrent if and only if
Σ_{x∈C} m{x}h(x) < +∞.
In this case,
lim_{n→∞} ρC^n T^n(x, y) = m{y}h(x) / Σ_{z∈C} m{z}h(z).
Proof. We have shown that m is ρC-invariant and unique up to constant multiple. In addition, using T̃ as
defined in (5.5), we find that
Σ_{n=0}^∞ T̃^n(x, y) = (m{y}/m{x}) Σ_{n=0}^∞ T^n(y, x) ρC^n = +∞
because C is ρC-recurrent. Consequently, the Markov chain X̃ associated with T̃ is recurrent, and the
measure m̃, and hence the function h above, are unique up to constant multiples. Because m̃ is ρC-invariant,
h is ρC-harmonic. In addition, T̃(x, ∆) = 0.
To prove the second assertion, first assume that C is ρC-positive recurrent. Then X̃ is positive recurrent
and so
lim_{n→∞} T̃^n(y, x) = π{x}
and
Σ_{x∈C} (π{x}/m{x}) T(y, x) ρC = π{y}/m{y}.
Thus, π{x}/m{x} is ρC-harmonic and, for some constant c,
π{x} = c m{x}h(x).
Consequently, π{x} > 0 for each x ∈ C and C is positive recurrent for X̃. Thus, C is ρC-positive recurrent.
When this holds, the limit lim_{n→∞} T̃^n(y, x) = π{x} becomes
lim_{n→∞} ρC^n T^n(x, y) = π{x} (m{y}/m{x}) = m{y}h(x) / Σ_{z∈C} m{z}h(z).
Remark 5.93. 1. If the chain is recurrent, then ρC = 1, the only bounded harmonic functions are
constant, and m{x} = cπ{x}.
2. If the chain is finite, then the condition Σ_{x∈C} m{x}h(x) < +∞ is automatically satisfied and the class
C is always ρC-positive recurrent for some value of ρC.
6 Stationary Processes
6.1 Definitions and Examples
Definition 6.1. A random sequence {Xn ; n ≥ 0} is called a stationary process if, for every k ∈ N, the
sequence
Xk, Xk+1, . . .
has the same distribution. In other words, for each n ∈ N and each (n + 1)-dimensional measurable set B,
P{(X0, . . . , Xn) ∈ B} = P{(Xk, . . . , Xk+n) ∈ B}.
Note that the Daniell-Kolmogorov extension theorem states that the n + 1 dimensional distributions
determine the distribution of the process on the probability space S N .
Exercise 6.2. Use the Daniell-Kolmogorov extension theorem to show that any stationary sequence {Xn ; n ∈
N} can be embedded in a stationary sequence {X̃n ; n ∈ Z}.
Proposition 6.3. Let {Xn ; n ≥ 0} be a stationary process and let
φ : S^N → S̃
be measurable. Then Yn = φ(θ^n X) is a stationary process.
Example 6.4. 1. Any sequence of independent and identically distributed random variables is a station-
ary process.
2. Let φ : RN → R be defined by
φ(x) = λ0 x0 + · · · + λn xn .
Then Yn = φ(θn X) is called a moving average process.
3. Let X be a time homogeneous ergodic Markov chain. If X0 has the unique stationary distribution, then
X is a stationary process.
4. Let X be a Gaussian process, i.e., the finite dimensional distributions of X are multivariate normal
random vectors. If the means EXk are constant and the covariances
Cov(Xj, Xk)
are a function c(k − j) of the difference in their indices, then X is a stationary process.
5. Call a measurable transformation
T :Ω→Ω
measure preserving if
P (T −1 A) = P (A) for all A ∈ F.
In addition, let X0 be measurable, then
Xn (ω) = X0 (T n ω)
is a stationary process.
6. For the example above, take Ω = [0, 1), P to be Lebesgue measure, and F to be the Borel σ-algebra.
Define
T ω = ω + β mod 1.
Exercise 6.5. Check that the examples above are indeed stationary processes.
Definition 6.6. 1. A subset A of S N is called shift invariant if
A = θA.
Sk,n = Xk + · · · + Xk+n−1 ,
and
Mk,n = max{0, Sk,1, . . . , Sk,n}.
Then,
E[X0 ; {M0,n > 0}] ≥ 0.
Proof. (Garsia) Note that the distribution of Sk,n and Mk,n does not depend on k. If j ≤ n,
M1,n ≥ S1,j.
Because S0,j = X0 + S1,j−1, this gives
X0 ≥ S0,j − M1,n for j = 2, . . . , n.
Note that for j = 1,
X0 ≥ X0 − M1,n = S0,1 − M1,n .
Therefore, if M0,n > 0,
X0 ≥ max_{1≤j≤n} S0,j − M1,n = M0,n − M1,n.
Consequently,
E[X0 ; {M0,n > 0}] ≥ E[M0,n − M1,n ; {M0,n > 0}] ≥ E[M0,n − M1,n ] = 0.
The last inequality follows from the fact that on the set {M0,n > 0}c , M0,n = 0 and therefore we have
that M0,n − M1,n ≤ 0 − M1,n ≤ 0. The last equality uses the stationarity of X.
Theorem 6.11 (Birkhoff's ergodic theorem). Let X be a stationary process of integrable random variables.
Then
lim_{n→∞} (1/n) Σ_{k=0}^{n−1} Xk = E[X0 | I].
Define
X* = lim sup_{n→∞} (1/n) Sn, where Sn = X0 + · · · + Xn−1.
Let ε > 0 and define
Dε = {X* > ε},
and note that Dε ∈ I.
Claim I. P(Dε) = 0.
Define
Xn^ε = (Xn − ε)I_{Dε},  Sn^ε = X0^ε + · · · + X_{n−1}^ε,
Mn^ε = max{0, S1^ε, . . . , Sn^ε},  Fε,n = {Mn^ε > 0},  Fε = ∪_{n=1}^∞ Fε,n = {sup_{n≥1} (1/n) Sn^ε > 0}.
Note that Fε,1 ⊂ Fε,2 ⊂ · · · and that
Fε = {sup_{n≥1} (1/n) Sn > ε} ∩ Dε = Dε.
By the maximal ergodic lemma above, E[X0^ε; Fε,n] ≥ 0.
Note that E|X0^ε| ≤ E|X0| + ε. Thus, by the dominated convergence theorem and the fact that Fε,n → Fε,
we have that
E[X0^ε; Dε] = E[X0^ε; Fε] = lim_{n→∞} E[X0^ε; Fε,n] ≥ 0
and that
E[X0; Fε] = E[X0; Dε] = E[E[X0|I]; Dε] = 0.
Consequently,
0 ≤ E[X0^ε; Dε] = E[X0; Dε] − εP(Dε) = −εP(Dε)
and the claim follows.
Because this holds for any ε > 0, we have that
lim sup_{n→∞} (1/n) Sn ≤ 0.
Exercise 6.14. Show that a stationary process X is ergodic if and only if, for every (k + 1)-dimensional
measurable set A,
(1/n) Σ_{j=0}^{n−1} IA(Xj, . . . , Xj+k) →a.s. P{(X0, . . . , Xk) ∈ A}.
4. If A is any Borel set with positive Lebesgue measure, then for any δ > 0, there exists an interval I so
that P(A ∩ I) > (1 − δ)P(I).
5. If A is invariant, then P (A) = 1.
Example 6.17 (Weyl's equidistribution theorem). For the transformation above, with β irrational,
lim_{n→∞} (1/n) Σ_{k=0}^{n−1} IA(T^k ω) = P(A).
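A direct computation illustrates the conclusion: for irrational β, the orbit of T ω = ω + β mod 1 spends
an asymptotic fraction of time in an interval equal to the interval's length. A sketch with the assumed
choice β = √2 − 1:

    import math

    beta = math.sqrt(2) - 1       # irrational rotation number
    a, b = 0.25, 0.40             # the interval A = [a, b)
    w, hits, n = 0.0, 0, 100_000
    for _ in range(n):
        w = (w + beta) % 1.0
        hits += (a <= w < b)
    print(hits / n, b - a)        # the two numbers should be close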
Remark 6.20. By the Daniell-Kolmogorov extension theorem, it is sufficient to prove that, for any k and
any measurable subsets A1, A2 of S^{k+1}, the condition above holds. For an invariant set A,
θ^n X ∈ A if and only if X ∈ A,
and so, by mixing,
P{X ∈ A} = lim_{n→∞} P{X ∈ A, θ^n X ∈ A} = P{X ∈ A}².
2. For X an ergodic Markov chain with stationary distribution π, then, for n > k,
P{τ < ∞} = 1.
Now define
Zn = X̃n if τ > n, and Zn = Xn if τ ≤ n.
Then, by the strong Markov property, Z is a Markov process with transition operator T and initial
distribution α. In other words, Z and X̃ have the same distribution.
We can obtain the result on the ergodic theorem for Markov chains by noting that
lim_{n→∞} (1/n) Σ_{k=0}^{n−1} f(Zk) = lim_{n→∞} (1/n) Σ_{k=0}^{n−1} f(Xk).
Exercise 6.23. Let X be a stationary Gaussian process with covariance function c. Show that
lim_{k→∞} c(k) = 0
6.4 Entropy
For this section, we shall assume that {Xn ; n ∈ Z} is a stationary and ergodic process with finite state
space S.
For a pair of random variables Y and Ỹ with respective finite state spaces F and F̃ , define the joint mass
function
pY,Ỹ (y, ỹ) = P {Y = y, Ỹ = ỹ}
and the conditional mass function
pY |Ỹ (y|ỹ) = P {Y = y|Ỹ = ỹ}.
With this we define the entropy
H(Y, Ỹ) = −E[log p_{Y,Ỹ}(Y, Ỹ)] = −Σ_{(y,ỹ)∈F×F̃} p_{Y,Ỹ}(y, ỹ) log p_{Y,Ỹ}(y, ỹ),
or
H(Y, Ỹ) = H(Y|Ỹ) + H(Ỹ).
Because the function φ(t) = t log t is convex, we can use Jensen's inequality to obtain
H(Y|Ỹ) = −Σ_{y∈F} Σ_{ỹ∈F̃} φ(p_{Y|Ỹ}(y|ỹ)) p_{Ỹ}(ỹ) ≤ −Σ_{y∈F} φ( Σ_{ỹ∈F̃} p_{Ỹ}(ỹ) p_{Y|Ỹ}(y|ỹ) )
= −Σ_{y∈F} φ(p_Y(y)) = −Σ_{y∈F} p_Y(y) log p_Y(y) = H(Y).
Therefore,
H(Y, Ỹ ) ≤ H(Y ) + H(Ỹ )
with equality if and only if Y and Ỹ are independent.
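These identities are easy to verify numerically for a small joint distribution. A sketch with an arbitrary
joint mass function on {0, 1} × {0, 1}:

    import math
    from collections import Counter

    def H(pmf):
        # Entropy -sum p log p of a mass function {outcome: probability}.
        return -sum(p * math.log(p) for p in pmf.values() if p > 0)

    joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}
    pY, pYt = Counter(), Counter()
    for (y, yt), p in joint.items():
        pY[y] += p
        pYt[yt] += p
    print(H(joint), H(pY) + H(pYt))  # H(Y, Ytilde) <= H(Y) + H(Ytilde)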
Let's continue this line of analysis with m + 1 random variables X0, . . . , Xm:
H(X0, . . . , Xm) = H(Xm|Xm−1, . . . , X0) + H(X0, . . . , Xm−1)
= H(Xm|Xm−1, . . . , X0) + H(Xm−1|Xm−2, . . . , X0) + H(X0, . . . , Xm−2)
= H(Xm|Xm−1, . . . , X0) + H(Xm−1|Xm−2, . . . , X0) + · · · + H(X2|X1, X0) + H(X1|X0) + H(X0).
This computation shows that H(X0, . . . , Xm)/(m + 1) is the average of {H(Xk|Xk−1, . . . , X0); 0 ≤ k ≤ m}.
Also, by adapting the computation above, we can use Jensen's inequality to show that
H(Xm|Xm−1, . . . , X0) ≤ H(Xm|Xm−1, . . . , X1).
Theorem 6.26 (Shannon-McMillan-Breiman). Let X be a stationary and ergodic process having a finite
state space. Then
lim_{n→∞} −(1/n) log p_{X0,...,Xn−1}(X0, . . . , Xn−1) = H(X) a.s.
Remark 6.27. In words, this says that, for a stationary process, the likelihood function
p_{X0,...,Xn}(X0, . . . , Xn)
is, for large n, near to
exp(−nH(X)).
Consequently,
(1/n) log p_{X0,...,Xn−1}(x0, . . . , xn−1) = (1/n) ( log p_{X0}(x0) + Σ_{k=1}^{n−1} log p_{Xk|Xk−1,...,X0}(xk|xk−1, . . . , x0) ).
Recall here that the stationary process is defined for all n ∈ Z. Using the filtration F_n^k = σ{Xk−j ; 1 ≤
j ≤ n}, we have, by the martingale convergence theorem, that the uniformly integrable martingale converges:
lim_{n→∞} p_{Xk|Xk−1,...,Xk−n}(Xk|Xk−1, . . . , Xk−n) = p_{Xk|Xk−1,...}(Xk|Xk−1, . . .).
Now,
H(X) = lim_{m→∞} H(Xm|Xm−1, . . . , X0) = lim_{m→∞} −E[log p_{Xm|Xm−1,...,X0}(Xm|Xm−1, . . . , X0)]
= lim_{m→∞} −E[log p_{X0|X−1,...,X−m}(X0|X−1, . . . , X−m)] = −E[log p_{X0|X−1,...}(X0|X−1, . . .)].
The theorem now follows from the exercise following the proof of the ergodic theorem. Take
gk(X) = − log p_{X0|X−1,...,X−k}(X0|X−1, . . . , X−k)
and
g(X) = − log p_{X0|X−1,...}(X0|X−1, . . .).
Then,
gk(θ^k X) = − log p_{Xk|Xk−1,...,X0}(Xk|Xk−1, . . . , X0),
and
lim_{n→∞} −(1/n) log p_{X0,...,Xn−1}(X0, . . . , Xn−1)
= lim_{n→∞} −(1/n) ( log p_{X0}(X0) + Σ_{k=1}^{n−1} log p_{Xk|Xk−1,...,X0}(Xk|Xk−1, . . . , X0) ) = H(X)
almost surely.
Example 6.28. For an ergodic Markov chain having transition operator T in its stationary distribution π,
H(Xm|Xm−1, . . . , X0) = H(Xm|Xm−1)
= −Σ_{x,x̃∈S} P{Xm = x̃, Xm−1 = x} log P{Xm = x̃|Xm−1 = x}
= −Σ_{x,x̃∈S} π{x}T(x, x̃) log T(x, x̃).
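The entropy of the chain is therefore computable directly from T and π. A sketch for an assumed
two-state chain, whose stationary distribution solves π = πT:

    import math

    def markov_entropy_rate(T, pi):
        # H(X) = -sum_{x, xt} pi{x} T(x, xt) log T(x, xt).
        return -sum(pi[x] * T[x][xt] * math.log(T[x][xt])
                    for x in range(len(T)) for xt in range(len(T))
                    if T[x][xt] > 0)

    a, b = 0.3, 0.2
    T = [[1 - a, a], [b, 1 - b]]
    pi = [b / (a + b), a / (a + b)]
    print(markov_entropy_rate(T, pi))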