
Stat 150 Stochastic Processes

Spring 2009

Lecture 19: Stationary Markov Chains


Lecturer: Jim Pitman

Symmetry ideas. The general idea of symmetry: make a transformation, and something stays the same. In probability theory, the transformation may be conditioning, or some rearrangement of variables. What stays the same is the distribution of something. For a sequence of random variables X0, X1, X2, . . . there are various notions of symmetry (write =^d for equality in distribution):

(1) Independence. X0 and X1 are independent: the distribution of X1 given X0 does not involve X0; that is, (X1 | X0 ∈ A) =^d X1.

(2) Identical distribution. The Xn are identically distributed: X0 =^d Xn for every n. Of course IID implies the LLN / CLT.

Stationary: (X1, X2, . . .) =^d (X0, X1, . . .) (shifting time by 1). By measure theory, this is the same as

(X1, X2, . . . , Xn) =^d (X0, X1, . . . , Xn−1) for all n.

Obviously IID implies stationary, but the converse is not true: e.g. take Xn = X0 for all n, for any non-constant random variable X0. Another example is a stationary MC, i.e. a Markov chain with transition matrix P started with an initial distribution λ. Then easily, the following are equivalent:

λP = λ  ⟺  X1 =^d X0  ⟺  (X1, X2, . . .) =^d (X0, X1, . . .).

Otherwise put, (Xn) is stationary means that (X1, X2, . . .) is a Markov chain with exactly the same distribution as (X0, X1, . . .).

Other symmetries:

Cyclic symmetry: (X1, X2, . . . , Xn) =^d (X2, X3, . . . , Xn, X1)

Reversible: (Xn, Xn−1, . . . , X1) =^d (X1, X2, . . . , Xn)


Exchangeable: (Xσ(1), Xσ(2), . . . , Xσ(n)) =^d (X1, X2, . . . , Xn) for every permutation σ of (1, . . . , n).

Application of the stationarity idea to MCs. Think about a sequence of RVs X0, X1, . . . which is stationary. Take a set A in the state space, and let TA be the first time n ≥ 1 (if any) at which Xn ∈ A:

TA := least n ≥ 1 such that 1A(Xn) = 1, with TA = ∞ if there is no such n.

Consider the indicator process 1A(Xn), e.g. 0 0 0 0 1 0 0 1 1 0 0 0 . . . In an informal notation where each pattern shows X0, X1, . . . , Xn, with 1 meaning "in A", 0 meaning "not in A", and ? meaning unrestricted:

P(TA = n) = P(? 0 0 . . . 0 0 1)   (n−1 zeros)
P(TA ≥ n) = P(? 0 0 . . . 0 0 ?)   (n−1 zeros)
P(X0 ∈ A, TA ≥ n) = P(1 0 0 . . . 0 0 ?)   (n−1 zeros)
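To make this setup concrete, here is a minimal simulation sketch in Python. The 3-state transition matrix, the set A = {2}, and all names are illustrative choices, not from the lecture. It computes a stationary initial distribution (the probability vector λ with λP = λ, stored as pi in the code), starts the chain from it so that the run is stationary, and records the indicator process 1A(Xn) and the hitting time TA.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative 3-state transition matrix (rows sum to 1); not from the lecture.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.3, 0.3, 0.4]])

    # Stationary distribution: left eigenvector of P for eigenvalue 1, normalized.
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
    pi = pi / pi.sum()

    A = {2}          # illustrative target set
    n_steps = 20

    # Start the chain from pi, so (X0, X1, X2, ...) is stationary.
    x = rng.choice(3, p=pi)
    path = [x]
    for _ in range(n_steps):
        x = rng.choice(3, p=P[x])
        path.append(x)

    indicators = [int(s in A) for s in path]                 # the 1_A(Xn) process
    hits = [n for n in range(1, len(path)) if path[n] in A]
    T_A = hits[0] if hits else float("inf")                  # least n >= 1 with Xn in A

    print("pi      :", np.round(pi, 3))
    print("1_A(Xn) :", indicators)
    print("T_A     :", T_A)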

Lemma. For every stationary sequence X0, X1, . . . and every n = 1, 2, 3, . . . ,

P(TA = n) = P(X0 ∈ A, TA ≥ n).

In the informal notation, the claim is:

P(? 0 0 . . . 0 0 1) = P(1 0 0 . . . 0 0 ?)   (n−1 zeros in each pattern).

Notice: this looks like reversibility, but reversibility of the sequence X0, X1, . . . , Xn is not being assumed; only stationarity is required!

Proof. Split P(? 0 0 . . . 0 0 ?) (n−1 zeros), the probability that X1, . . . , Xn−1 all lie outside A, in two ways: according to whether Xn ∈ A, and according to whether X0 ∈ A:

P(? 0 0 . . . 0 0 1) + P(? 0 0 . . . 0 0 0) = P(? 0 0 . . . 0 0 ?)
P(1 0 0 . . . 0 0 ?) + P(0 0 0 . . . 0 0 ?) = P(? 0 0 . . . 0 0 ?)

By stationarity, P(? 0 0 . . . 0 0 0) = P(0 0 0 . . . 0 0 ?): each is the probability that n consecutive variables lie outside A, the two events differing only by a shift of one time step. Since the right-hand sides are the same, subtracting the equal terms gives

P(? 0 0 . . . 0 0 1) = P(1 0 0 . . . 0 0 ?),

which is the claim.
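The lemma is easy to check by Monte Carlo. The sketch below uses the same illustrative chain and set A as above (parameters made up, not from the lecture): it estimates P(TA = n) and P(X0 ∈ A, TA ≥ n) from many independent stationary runs; the two columns should agree up to simulation noise.

    import numpy as np

    rng = np.random.default_rng(1)

    # Same illustrative 3-state chain and set A as in the earlier sketch.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.3, 0.3, 0.4]])
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
    pi = pi / pi.sum()
    A = {2}

    n_max, n_runs = 5, 100_000
    lhs = np.zeros(n_max)   # counts for the event {T_A = n}
    rhs = np.zeros(n_max)   # counts for the event {X0 in A, T_A >= n}

    for _ in range(n_runs):
        # One stationary run X0, X1, ..., X_{n_max}.
        x = rng.choice(3, p=pi)
        path = [x]
        for _ in range(n_max):
            x = rng.choice(3, p=P[x])
            path.append(x)
        hits = [n for n in range(1, n_max + 1) if path[n] in A]
        T_A = hits[0] if hits else n_max + 1   # acts as "infinity" for all tested n <= n_max
        for n in range(1, n_max + 1):
            lhs[n - 1] += (T_A == n)
            rhs[n - 1] += (path[0] in A and T_A >= n)

    for n in range(1, n_max + 1):
        print(f"n={n}:  P(T_A = n) ~ {lhs[n-1] / n_runs:.4f}   "
              f"P(X0 in A, T_A >= n) ~ {rhs[n-1] / n_runs:.4f}")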


Take the identity above and sum over n = 1, 2, 3, . . . :

P(TA < ∞) = Σ_{n≥1} P(TA = n) = Σ_{n≥1} P(X0 ∈ A, TA ≥ n) = E[ Σ_{n≥1} 1(X0 ∈ A, TA ≥ n) ] = E(TA 1(X0 ∈ A)),

since Σ_{n≥1} 1(TA ≥ n) = TA. This is Marc Kac's identity: for every stationary sequence (Xn), and every measurable subset A of the state space of (Xn),

P(TA < ∞) = E(TA 1(X0 ∈ A)).

Application to recurrence of Markov chains. Suppose that an irreducible transition matrix P on a countable space S has a stationary probability measure π, that is πP = π, with πi > 0 for some state i. Then π is the unique stationary distribution for P; πj > 0 for all j; Ej Tj < ∞ for all j; and Ej Tj = 1/πj for all j (Kac).

Proof: Apply Kac's identity to A = {i}, with P = Pπ governing (Xn) as a MC with transition matrix P started with X0 =^d π:

1 ≥ Pπ(Ti < ∞) = Eπ[Ti 1(X0 = i)] = πi Ei Ti.

Since πi > 0, this implies first that Ei Ti < ∞, and hence Pi(Ti < ∞) = 1. Next, we need to argue that Pj(Ti < ∞) = 1 for all states j. This uses irreducibility, which gives an n such that P^n(i, j) > 0, hence easily an m ≤ n such that it is possible to get from i to j in m steps without revisiting i on the way, i.e.

Pi(Ti > m, Xm = j) > 0.

But using the Markov property this makes

Pj(Ti < ∞) = Pi(Ti < ∞ | Ti > m, Xm = j) = 1,

the last equality because Pi(Ti < ∞) = 1 and the conditioning event has positive probability.


Finally,

Pπ(Ti < ∞) = Σ_j πj Pj(Ti < ∞) = Σ_j πj = 1,

and the Kac formula becomes 1 = πi Ei Ti, as claimed. Lastly, it is easy that πj > 0 for every j, and hence 1 = πj Ej Tj by repeating the argument with j in place of i, because π = πP implies π = πP^n, and so

πj = Σ_k πk P^n(k, j) ≥ πi P^n(i, j) > 0

for some n by irreducibility of P. In particular πj = 1/(Ej Tj) is determined by P alone, which gives the claimed uniqueness of π.
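The conclusion Ei Ti = 1/πi is easy to check numerically. The sketch below (same illustrative 3-state chain as in the earlier sketches, not part of the lecture) estimates each mean return time by simulation and compares it with 1/πi.

    import numpy as np

    rng = np.random.default_rng(2)

    # Same illustrative 3-state chain as in the earlier sketches.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.6, 0.2],
                  [0.3, 0.3, 0.4]])
    eigvals, eigvecs = np.linalg.eig(P.T)
    pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
    pi = pi / pi.sum()

    def mean_return_time(i, n_returns=20_000):
        # Average number of steps for the chain started at state i to return to i.
        total = 0
        for _ in range(n_returns):
            x, steps = i, 0
            while True:
                x = rng.choice(3, p=P[x])
                steps += 1
                if x == i:
                    break
            total += steps
        return total / n_returns

    for i in range(3):
        print(f"state {i}:  E_i T_i ~ {mean_return_time(i):.3f}   1/pi_i = {1 / pi[i]:.3f}")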
