Chapter 8: Markov Chains: Yunghsiang S. Han
Sn = X1 + X2 + · · · + Xn = Sn−1 + Xn,
where the Xi's are an iid sequence. Sn is a Markov process since
P[Sn+1 = sn+1 | Sn = sn, . . . , S1 = s1] = P[Xn+1 = sn+1 − sn]
= P[Sn+1 = sn+1 | Sn = sn].
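The Markov property of the sum process can be checked empirically. A minimal sketch, assuming Bernoulli(1/2) steps Xi (any iid step distribution works the same way): both conditional probabilities of S3 = 2 should come out near P[X3 = 1] = 1/2.

```python
import random

random.seed(1)

# Illustration (assumed setup): S_n = X_1 + ... + X_n with X_i iid Bernoulli(1/2).
# Compare P[S_3 = 2 | S_2 = 1, S_1 = 1] with P[S_3 = 2 | S_2 = 1]:
# by the Markov property both should equal P[X_3 = 1] = 1/2.
N = 200_000
hits_full, trials_full = 0, 0      # condition on (S_1, S_2) = (1, 1)
hits_markov, trials_markov = 0, 0  # condition on S_2 = 1 only

for _ in range(N):
    x1, x2, x3 = (random.randint(0, 1) for _ in range(3))
    s1, s2, s3 = x1, x1 + x2, x1 + x2 + x3
    if s2 == 1:
        trials_markov += 1
        hits_markov += (s3 == 2)
        if s1 == 1:
            trials_full += 1
            hits_full += (s3 == 2)

print(hits_full / trials_full)      # both estimates ~ 0.5
print(hits_markov / trials_markov)
```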
and
P[Yn = 1] = P[Xn = 1, Xn−1 = 1] = 1/4.
Now consider
P[Yn = 1 | Yn−1 = 1/2] = P[Yn = 1, Yn−1 = 1/2] / P[Yn−1 = 1/2]
= P[Xn = 1, Xn−1 = 1, Xn−2 = 0] / (1/2)
= (1/2)^3 / (1/2) = 1/4.
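The value 1/4 can be checked by simulation. A short sketch, assuming Yn = (Xn + Xn−1)/2 with Xn iid equiprobable binary (the setup consistent with the probabilities above):

```python
import random

random.seed(2)

# Assumed setup: X_i iid Bernoulli(1/2), Y_n = (X_n + X_{n-1}) / 2.
# Estimate P[Y_n = 1 | Y_{n-1} = 1/2]; the computation above gives 1/4.
N = 200_000
hits, trials = 0, 0
for _ in range(N):
    x0, x1, x2 = (random.randint(0, 1) for _ in range(3))
    y1 = (x1 + x0) / 2
    y2 = (x2 + x1) / 2
    if y1 == 0.5:
        trials += 1
        hits += (y2 == 1.0)
print(hits / trials)  # ~ 0.25
```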
Suppose that we have additional knowledge about the past, say Yn−2 = 1 (which forces Xn−2 = 1). Then Yn−1 = 1/2 implies Xn−1 = 0, so Yn ≤ 1/2 and
P[Yn = 1 | Yn−1 = 1/2, Yn−2 = 1] = 0 ≠ 1/4 = P[Yn = 1 | Yn−1 = 1/2].
Thus Yn is not a Markov process.
• In general,
P[X(tk+1) = xk+1, X(tk) = xk, . . . , X(t1) = x1]
= P[X(tk+1) = xk+1 | X(tk) = xk] P[X(tk) = xk | X(tk−1) = xk−1] · · ·
× P[X(t2) = x2 | X(t1) = x1] P[X(t1) = x1].
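This factorization can be sanity-checked on a small chain. A sketch with an assumed two-state transition matrix P and initial pmf p0 (both hypothetical, chosen for illustration): the chain-rule product for one path should match its empirical frequency.

```python
import random

random.seed(3)

# Hypothetical two-state chain (states 0/1): P[i][j] = P[next = j | now = i].
P = [[0.9, 0.1], [0.2, 0.8]]
p0 = [0.5, 0.5]  # assumed initial pmf

# Chain-rule factorization for the path (X1, X2, X3) = (0, 0, 1):
predicted = p0[0] * P[0][0] * P[0][1]  # P[X1=0] P[X2=0|X1=0] P[X3=1|X2=0]

N = 400_000
count = 0
for _ in range(N):
    x1 = 0 if random.random() < p0[0] else 1
    x2 = 0 if random.random() < P[x1][0] else 1
    x3 = 0 if random.random() < P[x2][0] else 1
    count += ((x1, x2, x3) == (0, 0, 1))

print(predicted)  # 0.045
print(count / N)  # empirical frequency, ~ 0.045
```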
• Therefore,
P(2) = P(1)P(1) = P^2.
• In general, we have
P(n) = P^n.
State Probabilities
and
P^4 = [ 0.83 0.17 ; 0.34 0.66 ]^2 = [ 0.7467 0.2533 ; 0.5066 0.4934 ].
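The squaring step can be reproduced without any libraries; a quick check that squaring the matrix shown above yields the stated entries of P^4:

```python
# Plain 2x2 matrix product, used to square the matrix from the example above.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P2 = [[0.83, 0.17], [0.34, 0.66]]  # the matrix being squared in the text
P4 = matmul(P2, P2)
print(P4)  # entries ~ [[0.7467, 0.2533], [0.5066, 0.4934]]
```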
• As n → ∞,
pj(n) = Σi pij(n) pi(0) → Σi πj pi(0) = πj.
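The convergence can be observed numerically. A sketch that iterates the state pmf under the two-state matrix from the example above (here playing the role of the one-step matrix); the result no longer depends on the assumed initial pmf:

```python
# Iterate p(n+1) = p(n) P for the 2x2 matrix from the example above.
P = [[0.83, 0.17], [0.34, 0.66]]
p = [1.0, 0.0]  # an arbitrary initial pmf p(0)
for _ in range(50):
    p = [p[0] * P[0][0] + p[1] * P[1][0],
         p[0] * P[0][1] + p[1] * P[1][1]]
print(p)  # ~ [2/3, 1/3], independent of p(0)
```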
Sol: we have
π0 = (1 − α)π0 + βπ1
π1 = απ0 + (1 − β)π1.
Either equation reduces to απ0 = βπ1. Since π0 + π1 = 1,
π0 = β/(α + β),  π1 = α/(α + β).
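The closed form can be verified numerically for any sample parameters; a sketch with assumed values α = 0.17, β = 0.34 (matching the matrix from the earlier example):

```python
# Check that pi0 = beta/(alpha+beta), pi1 = alpha/(alpha+beta) is stationary
# for the two-state chain; alpha and beta are sample values for illustration.
alpha, beta = 0.17, 0.34
pi0 = beta / (alpha + beta)
pi1 = alpha / (alpha + beta)

# Stationarity equations from the solution above:
assert abs(pi0 - ((1 - alpha) * pi0 + beta * pi1)) < 1e-12
assert abs(pi1 - (alpha * pi0 + (1 - beta) * pi1)) < 1e-12
print(pi0, pi1)  # ~ 2/3 and 1/3 for these parameters
```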