The document discusses stochastic processes and Markov chains. It defines a stochastic process as a collection of random variables indexed by time; a stochastic process that satisfies the Markov property is a Markov chain. It then uses coins and dice with memory, where the next state depends only on the current state, as running examples, depicting them with transition probability matrices and state transition diagrams, and goes on to cover finite dimensional distributions, the Chapman-Kolmogorov equations, classification of states, and limiting and stationary distributions.

Recap

▶ A stochastic process {X(t), t ∈ T} is a collection of random variables such that for every t ∈ T we have a random variable X(t) taking values in a state space S.

▶ A stochastic process that satisfies the Markov property is a Markov chain.

▶ Markov property: P(X_n = j | X_{n_1} = x_1, . . . , X_{n_k} = i) = P(X_n = j | X_{n_k} = i) for all times n_1 < · · · < n_k < n.

▶ A Markov chain is thus a stochastic process where the next state of the process depends on the present state but not on previous states.

Running example: Coin with memory!

▶ In a Markovian coin with memory, the outcome of the next toss depends on the current toss.

▶ Xn = 1 for heads and Xn = −1 otherwise, so S = {+1, −1}.

▶ Sticky coin: P(Xn+1 = 1 | Xn = 1) = 0.9 and P(Xn+1 = −1 | Xn = −1) = 0.8 for all n.

▶ Flippy coin: P(Xn+1 = 1 | Xn = 1) = 0.1 while P(Xn+1 = −1 | Xn = −1) = 0.3 for all n.

▶ This can be represented by a state transition diagram (see board).

▶ The transition probability matrices P for the two cases are

  Ps = [ 0.9  0.1 ]      Pf = [ 0.1  0.9 ]
       [ 0.2  0.8 ]           [ 0.7  0.3 ]

▶ The row corresponds to the present state and the column corresponds to the next state (states ordered +1, −1). A simulation sketch follows below.
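
A minimal Python sketch (not part of the slides) of how the sticky and flippy coins could be simulated from their transition matrices; the state ordering (+1 first, −1 second), the seeds, and the function name are illustrative assumptions.

```python
import numpy as np

# Transition matrices from the slide; row = present toss, column = next toss,
# rows/columns ordered (+1, -1).
P_STICKY = np.array([[0.9, 0.1],
                     [0.2, 0.8]])
P_FLIPPY = np.array([[0.1, 0.9],
                     [0.7, 0.3]])
STATES = np.array([1, -1])  # index 0 <-> heads (+1), index 1 <-> tails (-1)

def simulate_coin(P, n_tosses, start_index=0, seed=None):
    """Simulate n_tosses of a Markov coin and return the +1/-1 outcomes."""
    rng = np.random.default_rng(seed)
    path = np.empty(n_tosses, dtype=int)
    state = start_index
    for t in range(n_tosses):
        path[t] = STATES[state]
        state = rng.choice(2, p=P[state])  # next state drawn from the current row
    return path

print(simulate_coin(P_STICKY, 20, seed=0))  # long runs of the same face
print(simulate_coin(P_FLIPPY, 20, seed=0))  # frequent alternation
```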

Running example: Dice with memory!

▶ In a Markovian dice with memory, the outcome of the next roll depends on the current roll.

▶ Xn = i for i ∈ S, where S = {1, . . . , 6}.

▶ Example transition probability matrix:

      [ 0.9  0.1  0    0    0    0   ]
      [ 0    0.9  0.1  0    0    0   ]
  P = [ 0    0    0.9  0.1  0    0   ]
      [ 0    0    0    0.9  0.1  0   ]
      [ 0    0    0    0    0.9  0.1 ]
      [ 0.1  0    0    0    0    0.9 ]

▶ State transition diagram on board.

▶ Consider Sn = Σ_{i=1}^{n} Xi and µ̂n = Sn / n. What is lim_{n→∞} µ̂n?

▶ We cannot invoke the SLLN as the {Xi} are not i.i.d.

▶ We will see later an SLLN for Markov chains! (A simulation sketch follows below.)
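
A small simulation sketch (not from the slides) that estimates µ̂n for this die; the wrap-around structure of P is read off the matrix above, and the remark about the uniform stationary distribution anticipates the later slides on limiting behaviour.

```python
import numpy as np

# Markov die from the slide: stays on the same face w.p. 0.9, steps to the
# next face w.p. 0.1 (face 6 wraps around to face 1). Row i <-> face i + 1.
P = np.zeros((6, 6))
for i in range(6):
    P[i, i] = 0.9
    P[i, (i + 1) % 6] = 0.1

def sample_mean(P, n_rolls, start=0, seed=0):
    """Simulate the Markov die and return the sample mean of the rolls."""
    rng = np.random.default_rng(seed)
    state, total = start, 0
    for _ in range(n_rolls):
        total += state + 1               # the roll value is index + 1
        state = rng.choice(6, p=P[state])
    return total / n_rolls

for n in (100, 10_000, 100_000):
    print(n, sample_mean(P, n))
# The running mean settles near 3.5; this is the behaviour the Markov-chain
# SLLN mentioned on the slide makes precise (this P is doubly stochastic,
# so its stationary distribution is uniform over the six faces).
```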

Finite dimensional distributions

▶ Consider a Markov dice with transition probability matrix P.

▶ What is P(X0 = 4, X1 = 5, X2 = 6)?

▶ = P(X2 = 6 | X1 = 5, X0 = 4) P(X1 = 5 | X0 = 4) P(X0 = 4)

▶ = p5,6 p4,5 P(X0 = 4), using the Markov property (row = present state, column = next state).

▶ What is P(X0 = 4)?

▶ The probability of starting in a particular state is given by the initial distribution of the Markov chain.

Finite dimensional distributions

▶ Consider a DTMC {Xn, n ≥ 0} with transition matrix P.

▶ We assume M states, and X0 denotes the initial state.

▶ The chain may start in a fixed state, or the starting state may be picked randomly.

▶ Let µ̄ = (µ1, . . . , µM) denote the initial distribution, i.e., P(X0 = x0) = µx0.

▶ How does one obtain the finite dimensional distribution P(X0 = x0, X1 = x1, X2 = x2)?

▶ P(X0 = x0, X1 = x1, X2 = x2) = px1,x2 px0,x1 µx0.

▶ In general, P(X0 = x0, X1 = x1, . . . , Xk = xk) = pxk−1,xk × · · · × px0,x1 µx0. (A short computational sketch follows below.)
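
A minimal sketch (not from the slides) of this product formula in code; the uniform initial distribution and the helper name `path_probability` are assumptions made for illustration.

```python
import numpy as np

def path_probability(mu, P, path):
    """P(X0 = path[0], X1 = path[1], ..., Xk = path[k]) for a DTMC (mu, P)."""
    prob = mu[path[0]]
    for i, j in zip(path, path[1:]):
        prob *= P[i, j]                  # p_{x_t, x_{t+1}}
    return prob

# Markov die from the earlier slide, with an (assumed) uniform initial distribution.
P = np.zeros((6, 6))
for i in range(6):
    P[i, i] = 0.9
    P[i, (i + 1) % 6] = 0.1
mu = np.full(6, 1 / 6)

# P(X0 = 4, X1 = 5, X2 = 6); faces 1..6 are stored at indices 0..5.
print(path_probability(mu, P, [3, 4, 5]))   # = (1/6) * p_{4,5} * p_{5,6} = (1/6) * 0.1 * 0.1
```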

Chapman-Kolmogorov Equations for DTMC

▶ Consider a Markov coin and its transition probability matrix

      [ p1,1   p1,−1  ]
  P = [ p−1,1  p−1,−1 ]

▶ Given X0 = 1, what is P(X2 = 1)?

  P(X2 = 1 | X0 = 1) = P(X2 = 1 | X1 = 1, X0 = 1) P(X1 = 1 | X0 = 1)
                       + P(X2 = 1 | X1 = −1, X0 = 1) P(X1 = −1 | X0 = 1)
                     = p1,1^2 + p−1,1 p1,−1

▶ Here the first equality follows from the fact that P(C|A) = P(C|BA) P(B|A) + P(C|B^c A) P(B^c|A). HW: Verify.

▶ Similarly, P(X2 = −1 | X0 = 1), P(X2 = 1 | X0 = −1), and P(X2 = −1 | X0 = −1) can be obtained; these are the elements of a two-step transition matrix P^(2).

Chapman-Kolmogorov Equations for DTMC

▶ The two-step transition probability matrix P^(2) is given by

  P^(2) = [ p1,1^2 + p1,−1 p−1,1          p1,1 p1,−1 + p1,−1 p−1,−1 ]
          [ p−1,1 p1,1 + p−1,−1 p−1,1     p−1,1 p1,−1 + p−1,−1^2    ]

▶ This implies that P^(2) = P × P = P^2.

▶ In general, P^(n) = P^n.

▶ The Chapman-Kolmogorov equations are a further generalization of this:

  P^(n+l) = P^(n) P^(l)

▶ We won't see the proof of this. (A numerical check follows below.)
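
A small numerical check (not from the slides) of these identities for the sticky coin; the particular values of n and l are arbitrary illustrative choices.

```python
import numpy as np

# Sticky coin from the earlier slide, rows/columns ordered (+1, -1).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Two-step probabilities by conditioning on the intermediate state (as on the
# slide) versus the matrix product P @ P.
P2_by_hand = np.array([
    [P[0, 0]**2 + P[0, 1]*P[1, 0],      P[0, 0]*P[0, 1] + P[0, 1]*P[1, 1]],
    [P[1, 0]*P[0, 0] + P[1, 1]*P[1, 0], P[1, 0]*P[0, 1] + P[1, 1]**2],
])
assert np.allclose(P2_by_hand, P @ P)

# Chapman-Kolmogorov: P^(n+l) = P^(n) P^(l).
n, l = 3, 4
assert np.allclose(np.linalg.matrix_power(P, n + l),
                   np.linalg.matrix_power(P, n) @ np.linalg.matrix_power(P, l))
print(P2_by_hand)
```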

Classification of states

▶ Consider a Markov process with state space S.

▶ We say that j is accessible from i if p_ij^(n) > 0 for some n.

▶ This is denoted by i → j.

▶ If i → j and j → i, then we say that i and j communicate. This is denoted by i ↔ j.

▶ A chain is said to be irreducible if i ↔ j for all i, j ∈ S.

▶ Are the examples of the Markovian coin and dice we have considered till now irreducible? Check! (A reachability check is sketched below.)
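
A sketch (not from the slides) of one way to answer the check programmatically: for an M-state chain, j is accessible from i exactly when some power P^n with 1 ≤ n ≤ M has a positive (i, j) entry, so summing those powers tests irreducibility.

```python
import numpy as np

def is_irreducible(P):
    """Return True if every state communicates with every other state."""
    M = P.shape[0]
    reach = np.zeros_like(P)
    Pn = np.eye(M)
    for _ in range(M):
        Pn = Pn @ P          # P^1, P^2, ..., P^M in turn
        reach += Pn
    return bool(np.all(reach > 0))

P_coin = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
P_die = np.zeros((6, 6))
for i in range(6):
    P_die[i, i] = 0.9
    P_die[i, (i + 1) % 6] = 0.1

print(is_irreducible(P_coin), is_irreducible(P_die))  # True True
```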

Recurrent and Transient states

▶ We say that a state i is recurrent if Fii = P(ever returning to i having started in i) = 1.

▶ Fii is not easy to calculate. (Not part of this course.)

▶ If a state is not recurrent, it is transient.

▶ For a transient state i, Fii < 1.

▶ If i ↔ j and i is recurrent, then j is recurrent. (A rough Monte Carlo illustration follows below.)
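
Although Fii is not computed in this course, a rough Monte Carlo sketch (not from the slides, and only a finite-horizon approximation) can illustrate what the definition measures; the horizon, trial count, and function name are assumptions.

```python
import numpy as np

def return_probability(P, i, horizon=1_000, trials=2_000, seed=0):
    """Monte Carlo estimate of P(return to state i within `horizon` steps | start in i).
    This is only a finite-horizon lower bound on F_ii, shown for illustration."""
    rng = np.random.default_rng(seed)
    M = P.shape[0]
    returned = 0
    for _ in range(trials):
        state = i
        for _ in range(horizon):
            state = rng.choice(M, p=P[state])
            if state == i:
                returned += 1
                break
    return returned / trials

P_coin = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
print(return_probability(P_coin, 0))  # close to 1, consistent with state +1 being recurrent
```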

Limiting probabilities

▶ Example:

      [ 0    0    1   ]          [ 0.06  0.30  0.64 ]           [ 0.23  0.385  0.385 ]
  P = [ 0    0.6  0.4 ]    P^5 = [ 0.18  0.38  0.44 ]    P^30 = [ 0.23  0.385  0.385 ]
      [ 0.6  0.4  0   ]          [ 0.38  0.44  0.18 ]           [ 0.23  0.385  0.385 ]

▶ For a general two-state chain,

      [ 1−a   a  ]                         [ b/(a+b)  a/(a+b) ]
  P = [  b   1−b ]     lim_{n→∞} P^n  =    [ b/(a+b)  a/(a+b) ]

▶ What is the interpretation of lim_{n→∞} p_ij^(n) = [lim_{n→∞} P^n]_ij?

▶ πj = lim_{n→∞} p_ij^(n) denotes the long-run probability of being in state j when starting in state i; in the examples above it does not depend on the starting state i.

▶ For an M-state DTMC, π = (π1, . . . , πM) denotes the limiting or stationary distribution.

▶ How do we obtain the stationary distribution π? (A numerical illustration follows below.)
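
A short sketch (not from the slides) reproducing the matrix powers in the first example with NumPy; the chosen exponents simply mirror the slide.

```python
import numpy as np

P = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.6, 0.4],
              [0.6, 0.4, 0.0]])

for n in (1, 5, 30):
    print(f"P^{n} =\n{np.linalg.matrix_power(P, n).round(3)}\n")
# By n = 30 every row is approximately (0.231, 0.385, 0.385):
# the n-step probabilities no longer depend on the starting state.
```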

Stationary distribution

▶ The stationary distribution can be obtained as a solution to the equation π = πP (together with the normalization Σ_i πi = 1).

▶ Obtain the stationary distribution for the Markov chain with transition probability matrix

      [ 0    0    1   ]
  P = [ 0    0.6  0.4 ]
      [ 0.6  0.4  0   ]

▶ The limiting distribution need not exist for some Markov chains, but the stationary distribution exists. For example, for

  P = [ 0  1 ]
      [ 1  0 ]

▶ How to tackle such cases? We will see it (among other things) in CS3.307. (A solver sketch follows below.)
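
A minimal solver sketch (not from the slides) for π = πP with the normalization constraint, applied to both matrices above; solving via least squares is one of several equally valid approaches and is an assumption made here for brevity.

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi = pi P together with sum(pi) = 1 as a least-squares linear system."""
    M = P.shape[0]
    A = np.vstack([P.T - np.eye(M),     # (P^T - I) pi^T = 0
                   np.ones((1, M))])    # normalization: entries of pi sum to 1
    b = np.concatenate([np.zeros(M), [1.0]])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

P = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.6, 0.4],
              [0.6, 0.4, 0.0]])
print(stationary_distribution(P))        # approx. [0.231, 0.385, 0.385]

# The two-state flip chain has no limiting distribution (P^n oscillates),
# yet pi = (0.5, 0.5) still satisfies pi = pi P.
P_flip = np.array([[0.0, 1.0],
                   [1.0, 0.0]])
print(stationary_distribution(P_flip))   # [0.5, 0.5]
```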
