
Chapter 4: Stochastic Processes and Discrete Time Markov Chains

 Stochastic Processes
 Markov Chains
 Chapman-Kolmogorov Equations
 Classification of States
 Limiting Probabilities
 Application
1. Stochastic Processes
 Why study stochastic processes? The attributes of a system often change randomly over time.
 A stochastic process model aims at predicting the behavior of the system rather than at making optimal decisions.
 Stochastic process models are stochastic (descriptive) models: they have a descriptive component rather than a decision-making component.
 A stochastic process model consists of:
 A set of states that describe the attributes of the system
 The mechanism that governs transitions between states
 Time
Stochastic Processes
Series of random variables {Xt}
Series indexed over time interval T
Examples: X1, X2, … , Xt, … , XT represent
 monthly inventory levels
 daily closing price for a stock or index
 availability of a new technology
 market demand for a product
Stochastic Processes
 A stochastic process is a collection of random variables {X(t), t ∈ T}.
 Typically, T is continuous (time) and we have {X(t), t ≥ 0}.
 Or, T is discrete and we observe {Xn, n = 0, 1, 2, …} at discrete time points n that may or may not be evenly spaced.
 Refer to X(t) as the state of the process at time t.
 The state space of the stochastic process is the set of all possible values of X(t); this set may be discrete or continuous as well.
2. Markov Chains
 Markovian property:
 A stochastic process {Xt} is said to have the Markovian property if
 P{Xt+1 = j | Xt = i, Xt-1 = kt-1, …, X1 = k1, X0 = k0} = P{Xt+1 = j | Xt = i}
 for t = 0, 1, … and every sequence i, j, k0, k1, …, kt-1.
 A stochastic process {Xt, t = 0, 1, …} is a Markov chain if it has the Markovian property.
 The process moves between states with known transition probabilities P{Xt+1 = j | Xt = i}.
 Transition probabilities are stationary (do not change over time) if, for each i and j, P{Xt+1 = j | Xt = i} does not depend on t. Then:
 One-step transition probabilities: P{Xt+1 = j | Xt = i} = P{X1 = j | X0 = i} = pij
 n-step transition probabilities: P{Xt+n = j | Xt = i} = P{Xn = j | X0 = i} = p(n)ij
 With a finite number of states M, the n-step transition probabilities form an n-step transition matrix.
Markov Chains
• A Markov chain is a stochastic process {Xn, n = 0, 1, 2, …}, where each Xn takes values in the same subset of {0, 1, 2, …}, and
  P{Xn+1 = j | Xn = i, Xn-1 = in-1, …, X1 = i1, X0 = i0} = P{Xn+1 = j | Xn = i}
  for all states i0, i1, …, in-1, i, j and all n ≥ 0.
  Xn+1 depends only on the present state Xn.
• Denote Pij = P{Xn+1 = j | Xn = i} as the transition probability. Then
  Pij ≥ 0 for all i, j
  Σj Pij = 1 for any i
• Let P = [Pij] be the matrix of one-step transition probabilities.
Markov Chains - Examples
Example 1: Forecasting the weather
Xn : weather of day n
S = {0 : rain , 1 : no rain}
P00 = α , P10 = β

P = | α   1−α |
    | β   1−β |
Markov Chains - Examples
Example 2: Forecasting the weather
• If it has rained for the past two days, then it will rain tomorrow with probability 0.7
• If it rains today but not yesterday, then it will rain tomorrow with probability 0.5
• If it rained yesterday but not today, then it will rain tomorrow with probability 0.4
• If it has not rained in the past two days, then it will rain tomorrow with probability 0.2

States (yesterday, today): 0: RR, 1: NR, 2: RN, 3: NN

P = | .7   0   .3   0  |
    | .5   0   .5   0  |
    | 0   .4    0  .6  |
    | 0   .2    0  .8  |
Markov Chains - Examples
Example 3: Random Walks

• State space (S): 0, ±1, ±2, ±3, ±4, ….
• Pi,i+1 = p ; Pi,i−1 = 1 − p for i = 0, ±1, ±2, …
• At each point in time, the process either takes one step to the right with probability p, or one step to the left with probability 1 − p.

[Diagram: states …, −2, −1, 0, 1, 2, … on the integer line]
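
A minimal sketch (added here, not from the slides) that generates one sample path of this random walk; p = 0.5 and the path length are example values.

```python
import numpy as np

# One sample path of the simple random walk; p and the path length are example values.
rng = np.random.default_rng(1)
p, n_steps = 0.5, 1_000
steps = rng.choice([1, -1], size=n_steps, p=[p, 1 - p])  # +1 right, -1 left
path = np.cumsum(steps)                                  # X_1, ..., X_n with X_0 = 0
print(path[-1], path.max(), path.min())
```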
Markov Chains - Examples
Example 4: A Gambling Model
At each play, the gambler wins $1 with probability p and loses $1 with probability 1 − p.
The gambler quits if he goes broke or if he obtains a fortune N.
Pi,i+1 = p ; Pi,i−1 = 1 − p for i = 1, 2, 3, …, N − 1
P00 = PNN = 1 : 0 and N are absorbing states

[Diagram: states 0, 1, 2, …, i−1, i, i+1, …, N−1, N on a line, with right transitions p, left transitions q = 1 − p, and absorbing states 0 and N]
Markov Chains - Examples

A small community has two service stations: Petroco and Gasco. The marketing department of Petroco has found that customers switch between stations according to the following transition matrix:

This Month    Next Month
              Petroco   Gasco
Petroco       0.60      0.40    (sum = 1.0)
Gasco         0.20      0.80    (sum = 1.0)

Note: rows sum to 1.0!
Future State Probabilities

Probability that a customer buying from Petroco this month will buy from Petroco next month:

p(1)11 = 0.6

In two months:

p(2)11 = (0.6 × 0.6) + (0.4 × 0.2) = 0.44

From Gasco in two months:

p(2)12 = (0.4 × 0.8) + (0.6 × 0.4) = 0.56
Graphical Interpretation

Probability tree over two periods, starting from Petroco:

First Period           Second Period
Petroco (0.6)  ──►  Petroco (0.6):  0.6 × 0.6 = 0.36
               └──►  Gasco  (0.4):  0.6 × 0.4 = 0.24
Gasco   (0.4)  ──►  Petroco (0.2):  0.4 × 0.2 = 0.08
               └──►  Gasco  (0.8):  0.4 × 0.8 = 0.32
                                        Total = 1.00
3. Chapman-Kolmogorov Equations
• n-step transition probabilities:

  P(n)ij = P{Xn+m = j | Xm = i},  n, m ≥ 0,  i, j ≥ 0

• Chapman-Kolmogorov equations:

  P(n+m)ij = Σk≥0 P(n)ik P(m)kj,  n, m ≥ 0,  i, j ≥ 0

• Let P(n) be the matrix of n-step transition probabilities. Then

  P(n+m) = P(n) · P(m)

• So, by induction,

  P(n) = P^n
Chapman-Kolmogorov Equations

Let P be the transition matrix for a Markov process. Then the n-step transition probability matrices can be found from:

P(2) = P·P
P(3) = P·P·P
…
P(n) = P^n
CK Equations for Example

P(1) = | 0.60  0.40 |
       | 0.20  0.80 |

P(2) = | 0.60  0.40 | | 0.60  0.40 |   | 0.44  0.56 |
       | 0.20  0.80 | | 0.20  0.80 | = | 0.28  0.72 |
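
For completeness, a small numpy sketch (not in the original slides) that reproduces the two-step matrix above by multiplying P with itself.

```python
import numpy as np

P = np.array([[0.60, 0.40],
              [0.20, 0.80]])
P2 = P @ P            # Chapman-Kolmogorov: P(2) = P·P
print(P2)             # [[0.44 0.56]
                      #  [0.28 0.72]]
```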
Starting States

In the current month, 70% of customers shop at Petroco and 30% at Gasco. What will the mix be in 2 months?

P = | 0.60  0.40 |        sn = s0 P(n)
    | 0.20  0.80 |

s0 = [0.70  0.30]

s2 = [0.70  0.30] | 0.60  0.40 |^2
                  | 0.20  0.80 |

   = [0.70  0.30] | 0.44  0.56 |
                  | 0.28  0.72 |

   = [0.39  0.61]
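
The same computation in code (an addition, assuming numpy): propagate the starting mix s0 = [0.70, 0.30] forward two months.

```python
import numpy as np

P = np.array([[0.60, 0.40],
              [0.20, 0.80]])
s0 = np.array([0.70, 0.30])               # starting mix: 70% Petroco, 30% Gasco
s2 = s0 @ np.linalg.matrix_power(P, 2)    # s_n = s_0 P^n with n = 2
print(s2)                                 # [0.392 0.608], i.e. about [0.39 0.61]
```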
CK Equations in Steady State

P(1) = | 0.60  0.40 |
       | 0.20  0.80 |

P(2) = | 0.60  0.40 | | 0.60  0.40 |   | 0.44  0.56 |
       | 0.20  0.80 | | 0.20  0.80 | = | 0.28  0.72 |

P(9) = | 0.60  0.40 |^9     | 0.33  0.67 |
       | 0.20  0.80 |     ≈ | 0.33  0.67 |
Convergence to Steady-State

If a customer buys at Petroco this month, what is the long-run probability that the customer will buy at Petroco during any month in the future?

[Plot: probability of buying at Petroco versus period; it starts at 1.0 and converges to 0.33 within roughly 5–10 periods]
Calculation of Steady State
 We want the outgoing state probabilities to equal the incoming state probabilities
 Let s = [s1, s2, …, sM] be the vector of steady-state probabilities
 Then we want
   s = sP
   That is, the state probabilities do not change from transition to transition (steady state!)
 Thus, solving the following system gives the vector s:
   s = sP
   Σj sj = 1, j = 1, 2, …, M
Example 1

s = sP, with P = | 0.60  0.40 |  and  s = [p  g]
                 | 0.20  0.80 |

[p  g] = [p  g] | 0.60  0.40 |
                | 0.20  0.80 |

p = 0.6p + 0.2g
g = 0.4p + 0.8g
p + g = 1

⇒ p = 0.333 , g = 0.667
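
The steady-state equations can also be solved numerically. The sketch below (an addition, not from the slides) stacks the balance equations with the normalization condition and solves by least squares.

```python
import numpy as np

P = np.array([[0.60, 0.40],
              [0.20, 0.80]])
n = P.shape[0]
# Stack the balance equations (P^T - I) s = 0 with the normalization sum(s) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
s, *_ = np.linalg.lstsq(A, b, rcond=None)
print(s)                                  # approximately [0.333 0.667]
```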
Example

 Transition probability matrix: P = | .7  .3 |
                                    | .4  .6 |
   with i = 1: it rains; i = 2: it does not rain

P^4 = | .5749  .4251 |
      | .5668  .4332 |

• If the probability it rains today is α1 = 0.4 and the probability it does not rain today is α2 = 0.6, then

  [.4  .6] P^4 = [.4  .6] | .5749  .4251 |  ≈  [.57  .43]
                          | .5668  .4332 |
4. Classification of States
• State j is accessible from state i if P(n)ij > 0 for some n ≥ 0.
• If j is accessible from i and i is accessible from j, we say that states i and j communicate (i ↔ j).
• Communication is a class property:
  (i) State i communicates with itself, for all i ≥ 0 (reflexive)
  (ii) If i ↔ j then j ↔ i (symmetric)
  (iii) If i ↔ j and j ↔ k, then i ↔ k (transitive)
• Therefore, communication divides the state space into mutually exclusive classes.
• If all the states communicate, the Markov chain is irreducible.
Classification of States

[Diagram: two example chains on states 0–4: an irreducible Markov chain, in which all states communicate, and a reducible Markov chain, in which the states split into more than one class]
Recurrence vs. Transience
• Let fi be the probability that, starting in state i, the process will ever reenter state i. If fi = 1, the state is recurrent; otherwise it is transient.
  – If state i is recurrent then, starting from state i, the process will reenter state i infinitely often (with probability 1).
  – If state i is transient then, starting in state i, the number of periods in which the process is in state i has a geometric distribution with parameter 1 − fi.
• State i is recurrent if Σn≥1 P(n)ii = ∞ and transient if Σn≥1 P(n)ii < ∞.
• Recurrence (transience) is a class property: if i is recurrent (transient) and i ↔ j, then j is recurrent (transient).
• A special case of a recurrent state: if Pii = 1, then i is absorbing.
Recurrence vs. Transience (2)
• Not all states in a finite Markov chain can be transient.
• All states of a finite irreducible Markov chain are recurrent.
• If state i is recurrent and state i does not communicate with state j, then Pij = 0.
  – Once a process enters a recurrent class of states, it can never leave that class.
  – A recurrent class is therefore often called a closed class.
Examples

P = | 0   0  .5  .5 |
    | 1   0   0   0 |     All states are recurrent (the chain is irreducible).
    | 0   1   0   0 |
    | 0   1   0   0 |

P = | .5   .5   0    0    0  |
    | .5   .5   0    0    0  |
    | 0    0   .5   .5    0  |     Classes {0, 1} and {2, 3} are recurrent;
    | 0    0   .5   .5    0  |     class {4} is transient.
    | .25  .25  0    0   .5  |

P = | 0    0   0   1 |
    | 0    0   0   1 |     Irreducible ⇒ all states are recurrent.
    | .5  .5   0   0 |
    | 0    0   1   0 |
5. Limiting Probabilities
• If P(n)ii = 0 whenever n is not divisible by d, and d is the largest integer with this property, then state i is periodic with period d.
• If a state has period d = 1, then it is aperiodic.
• If state i is recurrent and, starting in state i, the expected time until the process returns to state i is finite, then state i is positive recurrent (otherwise it is null recurrent).
• A positive recurrent, aperiodic state is called ergodic.
Limiting Probabilities (2)
Theorem:
• For an irreducible ergodic Markov chain, pj = lim(n→∞) P(n)ij exists for all j and is independent of i.
• Furthermore, pj is the unique nonnegative solution of

  pj = Σi≥0 pi Pij ,  j ≥ 0
  Σj≥0 pj = 1

• The probability pj also equals the long-run proportion of time that the process is in state j.
Limiting Probabilities – Examples

P = | α   1−α |
    | β   1−β |

Limiting probabilities:
p0 = α p0 + β p1
p1 = (1−α) p0 + (1−β) p1
p0 + p1 = 1

⇒ p0 = β / (1 + β − α) ,  p1 = (1 − α) / (1 + β − α)
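
A quick numeric check of this closed form (added here): with α = 0.7 and β = 0.4, the values of the earlier two-state weather example, the formulas give p0 ≈ 0.571 and p1 ≈ 0.429, matching the rows of P^4 shown on that slide.

```python
import numpy as np

alpha, beta = 0.7, 0.4
p0 = beta / (1 + beta - alpha)            # limiting probability of state 0 (rain)
p1 = (1 - alpha) / (1 + beta - alpha)     # limiting probability of state 1 (no rain)
print(p0, p1)                             # approximately 0.571 and 0.429
P = np.array([[alpha, 1 - alpha],
              [beta, 1 - beta]])
print(np.linalg.matrix_power(P, 50)[0])   # rows of P^n converge to the same values
```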
Limiting Probabilities (3)
• The long-run proportions pj are also called stationary probabilities, because if P{X0 = j} = pj then

  P{Xn = j} = pj for all n, j ≥ 0

• Let mjj be the expected number of transitions until the Markov chain, starting in state j, returns to state j (finite if state j is positive recurrent). Then

  mjj = 1 / pj
6. Application: Gambler’s Ruin Problem
 At each play of the game the gambler wins one unit with probability p and loses one unit with probability q = 1 − p. Successive plays are independent.
 What is the probability that, starting with i units, the gambler’s fortune will reach N before going broke?
 Let Xn = the player’s fortune at time n:
 {Xn ; n = 0, 1, 2, …} is a Markov chain with transition probabilities:
   P00 = PNN = 1
   Pi,i+1 = p = 1 − Pi,i−1 ,  i = 1, 2, …, N − 1
• This Markov chain has three classes:
  – {0} and {N} : recurrent
  – {1, 2, …, N − 1} : transient
Application: Gambler’s Ruin Problem
 Let Pi , i = 0, 1, 2, …, N, be the probability that, starting with i, the gambler’s fortune reaches N.
 Conditioning on the outcome of the next game:

   Pi = p Pi+1 + q Pi−1 ,  i = 1, 2, …, N − 1
   or  Pi+1 − Pi = (q/p)(Pi − Pi−1)

• Note that P0 = 0, so

   Pi = [1 − (q/p)^i] / [1 − (q/p)] · P1   if q/p ≠ 1
   Pi = i · P1                             if q/p = 1
   (i = 2, …, N)
Application: Gambler’s Ruin Problem
• Moreover, PN = 1, which gives

   P1 = [1 − (q/p)] / [1 − (q/p)^N]   if p ≠ 1/2
   P1 = 1/N                           if p = 1/2

• Therefore

   Pi = [1 − (q/p)^i] / [1 − (q/p)^N]   if p ≠ 1/2
   Pi = i/N                             if p = 1/2
   (i = 0, 1, 2, …, N)
Application: Gambler’s Ruin Problem
• For N → ∞:

   Pi = 1 − (q/p)^i   if p > 1/2
   Pi = 0             if p ≤ 1/2

• For p > 1/2: there is a positive probability that the gambler’s fortune will increase indefinitely.
• For p ≤ 1/2: the gambler will “almost certainly” go broke against an infinitely rich adversary.
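
The sketch below (an addition, assuming numpy) evaluates the closed-form Pi from the previous slide and compares it with a crude Monte Carlo estimate; p = 0.6, i = 3, N = 10 and the trial count are example values.

```python
import numpy as np

def prob_reach_N(i, N, p):
    """Closed-form probability that the fortune reaches N before 0, starting from i."""
    if p == 0.5:
        return i / N
    r = (1 - p) / p                       # q/p
    return (1 - r**i) / (1 - r**N)

p, i, N = 0.6, 3, 10                      # example values
print(prob_reach_N(i, N, p))              # about 0.716

rng = np.random.default_rng(2)
trials, wins = 20_000, 0
for _ in range(trials):
    x = i
    while 0 < x < N:                      # play until broke (0) or fortune N
        x += 1 if rng.random() < p else -1
    wins += (x == N)
print(wins / trials)                      # should be close to the formula above
```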
Mean First Passage Time of Recurrent States
• For an ergodic Markov chain, it is possible to go from one state to another state in a finite number of transitions. Hence:

   Σ(k=1..∞) f(k)ij = 1

   where f(k)ij is the probability of going from i to j for the first time in exactly k transitions.

• Mean first passage time:  μij = Σ(k=1..∞) k · f(k)ij

• The mean first passage times can be found by solving:

   μij = 1 + Σ(k≠j) Pik μkj
Example

P = | 0    .5   .5   0   |
    | 0    0    1    0   |
    | 0   .25   .5  .25  |
    | 1    0    0    0   |

Find the mean first passage time to state 3 from states 0, 1, 2:

μ03 = 1 + 0·μ03 + 0.5·μ13  + 0.5·μ23
μ13 = 1 + 0·μ03 + 0·μ13    + 1·μ23
μ23 = 1 + 0·μ03 + 0.25·μ13 + 0.5·μ23

⇒ μ03 = 6.5 ;  μ13 = 6 ;  μ23 = 5
