4 - Markov Process
History
Transition Probability
X0 → X1 → · · · → Xn−1 → Xn → Xn+1 → . . .
Example
Question
Compare
P (S3 = 8|S0 = 4, S1 = 8, S2 = 4)
P (S3 = 8|S0 = 4, S1 = 2, S2 = 4)
and
P (S3 = 8|S2 = 4)
Markov property
Markov Property - Memoryless property
Markov Chain
Transition matrix
Transition probability

                 To
              1     2    ...
    From  1   P11   P12  ...
          ⋮    ⋮     ⋮
          i   Pi1   Pi2  ...
          ⋮    ⋮     ⋮
(Transition diagram: states 1, 2, 3, 4 in a row; states 1 and 4 loop to themselves with probability 1; states 2 and 3 stay with probability 0.4 and move one step left or right with probability 0.3 each.)

Sample episodes starting from 2:
▶ 2 →(0.3) 1 →(1) 1 →(1) 1
▶ 2 →(0.3) 3 →(0.3) 4 →(1) 4
▶ 2 →(0.3) 3 →(0.4) 3 →(0.3) 2 →(0.4) 2 →(0.3) 3 →(0.3) 4
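Episodes like the ones above can be sampled programmatically. A minimal sketch in Python: the state space and probabilities are taken from the diagram, and `sample_episode` is an illustrative name, not part of the course material.

```python
import random

# Transition probabilities of the walk on states 1..4 (from the diagram):
# states 1 and 4 are absorbing; from 2 and 3 the fly stays with
# probability 0.4 and moves one step left or right with probability 0.3.
P = {
    1: {1: 1.0},
    2: {1: 0.3, 2: 0.4, 3: 0.3},
    3: {2: 0.3, 3: 0.4, 4: 0.3},
    4: {4: 1.0},
}

def sample_episode(start, n_steps, rng=random):
    """Sample a path of n_steps transitions starting from `start`."""
    path = [start]
    for _ in range(n_steps):
        probs = P[path[-1]]
        path.append(rng.choices(list(probs), weights=list(probs.values()))[0])
    return path

random.seed(0)
print(sample_episode(2, 6))
```

Because each step depends only on the current state, one dictionary lookup per transition is all the simulator needs.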
Solution
▶ All possible states: 1, 2, 3, 4
▶ Transition probabilities:
  ▶ p11 = 1, p44 = 1
  ▶ pij = 0.3 if j = i + 1
          0.4 if j = i
          0.3 if j = i − 1
    for i = 2, 3, ..., m − 1
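The piecewise formula above translates directly into code. A sketch assuming NumPy (`fly_transition_matrix` is an illustrative name), building the transition matrix for a general number of states m:

```python
import numpy as np

def fly_transition_matrix(m):
    """Build the m-state transition matrix: endpoint states absorb
    (p11 = pmm = 1); interior states stay with probability 0.4 and
    move one step left or right with probability 0.3 each."""
    P = np.zeros((m, m))
    P[0, 0] = 1.0
    P[m - 1, m - 1] = 1.0
    for i in range(1, m - 1):      # states 2, ..., m-1 (0-indexed)
        P[i, i - 1] = 0.3
        P[i, i] = 0.4
        P[i, i + 1] = 0.3
    return P

print(fly_transition_matrix(4))
```

Every row sums to 1, as a transition matrix must.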
Example: Weather forecast
Solution

(Transition diagram: rain → rain 0.7, rain → no rain 0.3, no rain → rain 0.4, no rain → no rain 0.6.)

            Rain  No rain
Rain         .7     .3
No rain      .4     .6
Example
Consider a binomial asset pricing model with p(H) = 2/3, p(T) = 1/3, S0 = 4, and u = 1/d = 2. Construct the Markov chain model.
Solution
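In this model the next price is 2s with probability 2/3 (head) and s/2 with probability 1/3 (tail), so the next state depends only on the current price and the chain is Markov. A sketch using exact fractions (`step_distribution` is an illustrative name):

```python
from fractions import Fraction

def step_distribution(s):
    """One-step transition probabilities of the price chain from state s:
    up-factor u = 2 with probability 2/3, down-factor d = 1/2 with 1/3."""
    return {2 * s: Fraction(2, 3), Fraction(s, 2): Fraction(1, 3)}

print(step_distribution(4))   # from S0 = 4: price 8 w.p. 2/3, price 2 w.p. 1/3
```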
Markov chain for gambler’s ruin
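The slide's diagrams are not reproduced here, but the standard gambler's ruin chain can be sketched in code, assuming the usual setup: the fortune goes up by 1 with probability p, down by 1 with probability 1 − p, and states 0 (ruin) and N (target) are absorbing. Function names are illustrative.

```python
import numpy as np

def gamblers_ruin_matrix(N, p):
    """Gambler's ruin on states 0..N: from 0 < i < N the fortune goes
    up by 1 with probability p and down by 1 with probability 1 - p;
    0 (ruin) and N (target) are absorbing."""
    P = np.zeros((N + 1, N + 1))
    P[0, 0] = P[N, N] = 1.0
    for i in range(1, N):
        P[i, i + 1] = p
        P[i, i - 1] = 1 - p
    return P

P = gamblers_ruin_matrix(4, 0.5)
# Row i of P^n converges to the absorption probabilities; for a fair
# game the chance of reaching N starting from i is i/N.
Pn = np.linalg.matrix_power(P, 200)
print(Pn[2, 4])   # ≈ 0.5 for i = 2, N = 4
```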
Example - Credit rating
n-step transition probability
Given that the Markov chain (Xn) starts at initial state i, we want to know the probability rij(n) = P(Xn = j | X0 = i) that it will be in state j after n steps.
Remark
Chapman-Kolmogorov Equation for n-step transition probability
Key recursion:

rij(n) = Σk rik(n − 1) pkj

starting with rij(1) = pij.

(Figure: from state i, the chain reaches some intermediate state k at time n − 1 with probability rik(n − 1), then moves from k to j with probability pkj; summing over k = 1, ..., m gives rij(n).)
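The recursion can be checked numerically. A sketch assuming NumPy that iterates R(n) = R(n − 1)P starting from R(1) = P, using the four-state walk from the earlier example:

```python
import numpy as np

def n_step(P, n):
    """Chapman-Kolmogorov recursion r_ij(n) = sum_k r_ik(n-1) p_kj,
    i.e. R(n) = R(n-1) @ P, starting from R(1) = P."""
    R = P.copy()
    for _ in range(n - 1):
        R = R @ P
    return R

# Four-state walk: endpoints absorbing, interior states stay w.p. 0.4
# and move left/right w.p. 0.3 each.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.3, 0.4, 0.3, 0.0],
              [0.0, 0.3, 0.4, 0.3],
              [0.0, 0.0, 0.0, 1.0]])

print(n_step(P, 5))
```

The loop is exactly the recursion written in matrix form; it agrees with the closed-form matrix power derived on the following slides.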
Proof

(Figure: paths from state i at time 0 through intermediate states 1, ..., k, ..., m at time n − 1, ending at state j at time n.)

Case 1: starting from state i, the chain visits state 1 at time n − 1 and, in the last transition, moves from state 1 to state j at time n.
Case 2:
state i → (n − 1 steps) → state 2 → (last step) → state j
...
Case k:
state i → (n − 1 steps) → state k → (last step) → state j
...
By the total probability rule and the multiplication law,

rij(n) = Σ_{k=1}^{m} rik(n − 1) pkj
Matrix representation
Let

P(n) = [ r11(n)  ...  r1m(n)
           ⋮            ⋮
         rm1(n)  ...  rmm(n) ]

with P(1) = P. Then
n-step transition matrix

P(n) = Pⁿ = P · P ⋯ P  (n times)
Example - Weather forecast

(Transition diagram: rain → rain 0.7, rain → no rain 0.3, no rain → rain 0.4, no rain → no rain 0.6.)

If it rains today, calculate the probability that it will rain 4 days from now.
Solution
▶ Transition matrix (state 0 = rain, state 1 = no rain)

P = [ .7  .3
      .4  .6 ]

▶ Want to find r00(4)
▶ Need to calculate P⁴:

                    After 4 days
                   Rain    No rain
Current  Rain     .5749    .4251     = P⁴
         No rain  .5668    .4332

▶ So r00(4) = 0.5749
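The P⁴ entries above can be reproduced with a matrix power. A sketch assuming NumPy, with state 0 = rain and state 1 = no rain:

```python
import numpy as np

P = np.array([[0.7, 0.3],    # rain    -> rain, no rain
              [0.4, 0.6]])   # no rain -> rain, no rain

P4 = np.linalg.matrix_power(P, 4)
print(P4)          # rows ≈ [.5749 .4251] and [.5668 .4332]
print(P4[0, 0])    # r_00(4) ≈ 0.5749
```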
Unconditional distribution of Xn
▶ Distribution of Xn
Unconditional distribution of Xn

π(n) = π(0) Pⁿ

where

π(0) = (π1(0), π2(0), ..., πm(0)) and π(n) = (π1(n), π2(n), ..., πm(n))

(Figure: starting from X0 = k with probability πk(0), the chain is at Xn = j after n steps with probability Pⁿ_kj; summing over the initial states k = 1, ..., m gives πj(n).)
Proof
Example - Weather forecast

(Same weather chain as before: rain → rain 0.7, rain → no rain 0.3, no rain → rain 0.4, no rain → no rain 0.6.)
Solution
▶ Initial distribution π(0) = (.4, .6)
▶ Transition matrix

P = [ .7  .3
      .4  .6 ]
▶ 4-step transition matrix
Practice - Simulate Markov Chain by Monte Carlo
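A minimal Monte Carlo sketch, pure standard library, using the weather chain with π(0) = (.4, .6) and state 0 = rain: draw the initial state from π(0), step the chain n times, and average over many runs to estimate π(n).

```python
import random

# Weather chain: state 0 = rain, state 1 = no rain.
P = [[0.7, 0.3],
     [0.4, 0.6]]
pi0 = [0.4, 0.6]

def simulate(n_steps, rng):
    """Draw X0 from pi0, then make n_steps transitions; return X_n."""
    state = 0 if rng.random() < pi0[0] else 1
    for _ in range(n_steps):
        state = 0 if rng.random() < P[state][0] else 1
    return state

rng = random.Random(42)
trials = 100_000
rain = sum(simulate(4, rng) == 0 for _ in range(trials))
print(rain / trials)   # Monte Carlo estimate of pi(4)[rain], close to pi(0) P^4
```

The empirical frequency converges to the exact value π(4) = π(0)P⁴ as the number of trials grows.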
Long term behavior of Markov chain
(Weather chain again: rain → rain 0.7, rain → no rain 0.3, no rain → rain 0.4, no rain → no rain 0.6.)

rij(1) = P = [ .7  .3          rij(∞) = [ .57  .43
               .4  .6 ],                  .57  .43 ]

The rows of Pⁿ converge to the same limit: the long-run probability of rain does not depend on today's weather.
After many transitions, the fly is at position 4 with probability
▶ 1/3 if it starts at position 2
▶ 2/3 if it starts at position 3
▶ 1 if it starts at position 4, and 0 if it starts at position 1
The probability that the fly is at position j after a long time depends on the initial state.
(Transition diagram on states 1, 2, 3: 1 → 2 with probability 1; 2 → 1 and 2 → 3 with probability 0.5 each; 3 → 2 with probability 1.)
Question
Answer for question 2
▶ Let n → ∞:

πj = Σk πk pkj  for all j

▶ Additional equation: Σj πj = 1
Interpretation
Find stationary distribution
Solve

{ πP = π
{ Σi πi = 1
Example
(Transition diagram: Channel 1 → Channel 1 with probability 0.8, Channel 1 → Channel 2 with probability 0.2; Channel 2 → Channel 2 with probability 0.9, Channel 2 → Channel 1 with probability 0.1.)
Solution
▶ Transition matrix

P = [ .8  .2
      .1  .9 ]

▶ Stationary distribution π = (π1, π2) satisfies

{ πP = π              { .8π1 + .1π2 = π1
{ π1 + π2 = 1    or   { .2π1 + .9π2 = π2
                      { π1 + π2 = 1
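The same system can be solved numerically: rewrite πP = π as (Pᵀ − I)πᵀ = 0 and, since the balance equations are linearly dependent, replace one of them by the normalization π1 + π2 = 1. A sketch assuming NumPy:

```python
import numpy as np

P = np.array([[0.8, 0.2],
              [0.1, 0.9]])

A = P.T - np.eye(2)      # balance equations: (P^T - I) pi = 0
A[-1, :] = 1.0           # replace the last one with pi_1 + pi_2 = 1
b = np.array([0.0, 1.0])
pi = np.linalg.solve(A, b)
print(pi)                # ≈ [0.3333, 0.6667]
```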
After a long time, the market is stable. Each year, about 1/3 of viewers watch Channel 1 and about 2/3 watch Channel 2.
Practice
Answer for question 1
Classification of states
Types of state
Example
Recurrent and Transient State
▶ If a recurrent state is visited once, it will be visited an infinite number of times.
▶ A transient state will only be visited a finite number of times.
Return time
▶ Return time: τii = min{n ≥ 1 : Xn = i}, with τii = ∞ if Xn ≠ i for all n ≥ 1
▶ Probability of returning to state i given that the chain starts at i:

fi = P(τii < ∞)

▶ If i is recurrent then fi = 1
▶ If i is transient then fi < 1
Total number of visits to a state
▶ Let N be the total number of visits to state i, starting from i. Then

P(N = n) = fi^(n−1) (1 − fi)

▶ N is geometrically distributed with parameter 1 − fi
▶ E(N) = ∞ if fi = 1, and E(N) = 1/(1 − fi) if fi < 1
By linearity of expectation,

E(N) = Σ_{n=0}^{∞} E(I{Xn = i} | X0 = i)
     = Σ_{n=0}^{∞} P(Xn = i | X0 = i)
     = Σ_{n=0}^{∞} P_ii^n

Proposition
State i is recurrent if and only if

Σ_{n=0}^{∞} P_ii^n = ∞
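The criterion can be illustrated numerically on the four-state walk from earlier: partial sums of P_ii^n keep growing for the absorbing (recurrent) endpoint, but converge, to E(N) = 1/(1 − f_i), for a transient interior state. A sketch assuming NumPy:

```python
import numpy as np

# Four-state walk: states 0 and 3 absorbing, interior states stay
# w.p. 0.4 and move left/right w.p. 0.3 each (0-indexed).
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.3, 0.4, 0.3, 0.0],
              [0.0, 0.3, 0.4, 0.3],
              [0.0, 0.0, 0.0, 1.0]])

Pn = np.eye(4)               # P^0
partial = np.zeros(4)
for _ in range(200):         # accumulate sum_{n=0}^{199} P^n[i, i]
    partial += np.diag(Pn)
    Pn = Pn @ P

print(partial[0])   # 200 terms of 1 each: the series diverges (recurrent)
print(partial[1])   # the series converges (transient)
```

For the interior state, a first-step calculation gives f_1 = 0.4 + 0.3 · 0.5 = 0.55, so the partial sum settles near 1/0.45 ≈ 2.22.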
Recurrent Class
Example
Example
Example
Practice
Markov chain decomposition
▶ Transient states
▶ Recurrent classes
▶ Once the chain enters (or starts in) a class of recurrent states, it stays within that class; since all states in the class are accessible from each other, all states in the class will be visited an infinite number of times.
▶ If the initial state is transient, then the state trajectory contains an initial portion consisting of transient states and a final portion consisting of recurrent states from the same class.
Analyze long-term behavior
Periodicity
Structure of a periodic recurrent class
▶ Given a periodic recurrent class, a positive time n, and a state j in the class, there must exist some state i such that rij(n) = 0, because the subset to which j belongs can be reached at time n from the states in only one of the subsets.
▶ Thus a way to verify aperiodicity of a given recurrent class R is to check whether there is a special time n ≥ 1 and a special state s ∈ R that can be reached at time n from all initial states in R, i.e., ris(n) > 0 for all i ∈ R.
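This check is easy to run: compute Pⁿ and test whether some column s is strictly positive. A sketch assuming NumPy (`reaches_all` is an illustrative name):

```python
import numpy as np

def reaches_all(P, s, n):
    """Check r_is(n) > 0 for every state i: state s is reachable
    at time n from all initial states, certifying aperiodicity."""
    Rn = np.linalg.matrix_power(P, n)
    return bool(np.all(Rn[:, s] > 0))

# Aperiodic example: the weather chain.
P_weather = np.array([[0.7, 0.3],
                      [0.4, 0.6]])
print(reaches_all(P_weather, 0, 1))   # True

# Periodic example: a deterministic 2-cycle; no single n works.
P_cycle = np.array([[0.0, 1.0],
                    [1.0, 0.0]])
print(reaches_all(P_cycle, 0, 1), reaches_all(P_cycle, 0, 2))   # False False
```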
Theorem
Let {Xn} be a Markov chain with a single recurrent class which is aperiodic. The steady-state probability πj associated with state j satisfies the following properties:
1. lim_{n→∞} rij(n) = πj