ST3236 Note5

This lecture provides an overview of Markov Chains, focusing on their definitions, properties, and applications. Key concepts include the Chapman-Kolmogorov equation, transition probabilities, and examples such as weather modeling and a chess player's performance. The lecture also discusses random walks and branching processes as types of Markov Chains.


LECTURE 5: Markov Chains

Somabha Mukherjee

National University of Singapore

1 / 18
Outline

1 Introduction

2 The Chapman-Kolmogorov Equation

3 Examples

2 / 18
What is a Markov Chain?
A (discrete-time) Markov chain is a stochastic process X0 , X1 , X2 , . . .
satisfying:

P(Xn+1 = tn+1 |Xn = tn , Xn−1 = tn−1 , . . . , X1 = t1 ) = P(Xn+1 = tn+1 |Xn = tn )

whenever P(Xn = tn , Xn−1 = tn−1 , . . . , X1 = t1 ) > 0.

Given the present state (time n), the future state (time n + 1) is independent
of the past state (times n − 1, n − 2, . . . , 1).

State Space: The set S of all possible values the Markov chain can take is
called the state space. If S is discrete, the Markov chain is called
discrete-state. We will work with discrete-state Markov chains only.

Time-homogeneous Markov Chains: The Markov chain is called
time-homogeneous if the conditional probability P(Xn+1 = j|Xn = i) does not
depend on n, or in other words, if

P(Xn+1 = j|Xn = i) = P(X1 = j|X0 = i) for all n ≥ 0 and i, j ∈ S .

In this case, the quantity P(X1 = j|X0 = i) is denoted in short by pij .


3 / 18
Transition Probabilities

We are only going to study time-homogeneous Markov chains this semester.

In this case, pij := P(X1 = j|X0 = i) are called 1-step transition probabilities,
and the matrix P := ((pij ))i,j∈S is called the transition matrix of the Markov
chain.
The transition matrix is stochastic, i.e. ∑_{j∈S} pij = 1 for all i ∈ S:

∑_{j∈S} pij = ∑_{j∈S} P(X1 = j|X0 = i) = P(X1 ∈ S|X0 = i) = 1 .

k-Step Transition Probability: It is the conditional probability of the state at
time k, given the initial state:

pij^(k) := P(Xk = j|X0 = i) .

How to compute the k-step transition probabilities?

4 / 18
State Diagrams

State diagrams are visual tools to summarize a Markov Chain.

Circles denote states and numbers on the arrows denote transition


probabilities.

[State diagram: two states; state 1 has a self-loop with probability 2/3 and an
arrow to state 2 with probability 1/3; state 2 has an arrow back to state 1 with
probability 1.]

The transition matrix of the above state diagram is given by:

P = [ 2/3  1/3 ]
    [  1    0  ]

5 / 18
Outline

1 Introduction

2 The Chapman-Kolmogorov Equation

3 Examples

6 / 18
The Chapman-Kolmogorov Equation
An easy calculation

P(Xn+k = j, Xn+k−1 = i_{k−1}, . . . , Xn+1 = i_1 | Xn = i)
  = P(Xn+k = j | Xn+k−1 = i_{k−1}, . . . , Xn+1 = i_1, Xn = i)
    × P(Xn+k−1 = i_{k−1} | Xn+k−2 = i_{k−2}, . . . , Xn+1 = i_1, Xn = i)
    × · · ·
    × P(Xn+1 = i_1 | Xn = i)
  = p_{i i_1} p_{i_1 i_2} · · · p_{i_{k−1} j} .

So we have the following formula for pij^(k). In fact, for all n ≥ 0 and k ≥ 1,

pij^(k) = P(Xn+k = j | Xn = i)
        = ∑_{i_1,...,i_{k−1}∈S} P(Xn+k = j, Xn+k−1 = i_{k−1}, . . . , Xn+1 = i_1 | Xn = i)
        = ∑_{i_1,...,i_{k−1}∈S} p_{i i_1} p_{i_1 i_2} · · · p_{i_{k−1} j} .

7 / 18
What does the Chapman-Kolmogorov equation say?

Let us denote by P^(k) := ((pij^(k)))_{i,j∈S} the k-step transition matrix.

Chapman-Kolmogorov Equation: pij^(k) = ∑_{i_1,...,i_{k−1}∈S} p_{i i_1} p_{i_1 i_2} · · · p_{i_{k−1} j} .

This means P^(k) = P^k .

Remember this!
The k-step transition matrix is just the k th power of the (1-step) transition matrix.

That’s how you can easily compute the k-step transition probabilities using
any programming platform that can handle matrix computations (for example
MATLAB). Well, of course S needs to be finite for this!
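The identity P^(k) = P^k is also easy to check by hand. As an illustration (not part of the lecture), here is a minimal pure-Python sketch that computes a matrix power by repeated multiplication, using the two-state transition matrix from the state-diagram example:

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_pow(P, k):
    """k-th power of P by repeated multiplication (k >= 1)."""
    result = P
    for _ in range(k - 1):
        result = mat_mul(result, P)
    return result

# Transition matrix of the two-state chain from the state-diagram example
P = [[2/3, 1/3],
     [1.0, 0.0]]

P2 = mat_pow(P, 2)   # 2-step transition probabilities P^(2) = P^2
```

Each row of P2 again sums to 1: every power of a stochastic matrix is stochastic.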

8 / 18
Outline

1 Introduction

2 The Chapman-Kolmogorov Equation

3 Examples

9 / 18
The Weather Model

Remember the simple weather model we discussed during our first lecture?

If it rains today, then there is a 60% chance that tomorrow is rainy, and 40%
chance that tomorrow is sunny. If it is sunny today, then there is a 70%
chance that tomorrow is sunny and 30% chance that tomorrow is rainy.

This Markov chain has state space S := {1, 2}, where 1 denotes rainy and 2
denotes sunny.

The transition matrix is given by:

P = [ 0.6  0.4 ]
    [ 0.3  0.7 ]

Question: Given that today is rainy, what is the probability that the same day
next week will be sunny?

10 / 18
k-Step Transition Probabilities for the Weather Model
If X0 denotes today’s weather condition and X7 denotes the weather
condition same day next week, then we are interested in:

P(X7 = 2|X0 = 1).

By the Chapman-Kolmogorov Equation: P(X7 = 2|X0 = 1) = ((P^7))_{1,2} .

Calculate P^7 in R:

> P <- matrix(c(0.6, 0.3, 0.4, 0.7), 2, 2)
> install.packages("expm")
> library(expm)
> P %^% 7

P^7 = [ 0.4286964  0.5713036 ]
      [ 0.4284777  0.5715223 ]
So, the required probability is 0.5713036.
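If R is not at hand, the same answer can be reproduced by iterating the distribution row vector in plain Python. This sketch is an illustration, not part of the lecture; it recovers the entry ((P^7))_{1,2}:

```python
# Weather chain: state 1 = rainy, state 2 = sunny
P = [[0.6, 0.4],
     [0.3, 0.7]]

dist = [1.0, 0.0]   # X0 = 1 (rainy) with certainty
for _ in range(7):  # one vector-matrix multiplication per day
    dist = [sum(dist[i] * P[i][j] for i in range(2)) for j in range(2)]

# dist[1] is P(X7 = 2 | X0 = 1), approximately 0.5713036
```

Iterating the row vector costs only k vector-matrix products, which is cheaper than forming P^k when only one starting state matters.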
11 / 18
A Chess Player’s Psychology
Grandmaster and reigning world champion Emmanuel’s mood in a particular
chess game is affected only by his performance in the immediately previous
game. He has trained his mind to forget the results of all earlier games, given
his immediately previous result.
1 If he wins a game, he will win the next one with probability 0.4 and draw the
next one with probability 0.3.
2 If he draws a game, he will win the next one with probability 0.3 and draw the
next one with probability 0.4.
3 If he loses a game, he will win the next one with probability 0.2 and draw the
next one with probability 0.3.

GM Emmanuel’s performance thus follows a Markov chain with state space


S := {1, 2, 3}, with 1 denoting victory, 2 denoting draw and 3 denoting loss.
The transition matrix of his performance is given by:

P = [ 0.4  0.3  0.3 ]
    [ 0.3  0.4  0.3 ]
    [ 0.2  0.3  0.5 ]
12 / 18
He Plays the World Championship against GM José

GM Emmanuel now starts playing the world championship match against


challenger GM José. This match consists of 14 games.

Their first game ends in a draw. What is the probability that Emmanuel
wins the last (14th) game?

If Xn denotes Emmanuel’s performance in the nth game, then we are
interested in P(X14 = 1|X1 = 2). This is equal to ((P^13))_{2,1} .

The matrix P^13 is given by (again, using R):

P^13 = [ 0.2916667  0.3333333  0.375 ]
       [ 0.2916667  0.3333333  0.375 ]
       [ 0.2916667  0.3333333  0.375 ]

It seems that no matter what the first game’s result is, Emmanuel’s
probability of winning the last game is 0.2916667.
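The identical rows of P^13 reflect convergence of the chain to its stationary distribution, here π = (7/24, 1/3, 3/8) ≈ (0.2916667, 0.3333333, 0.375). A small pure-Python check (an illustration, not from the lecture) confirms the figure:

```python
def mat_mul(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# GM Emmanuel's chain: 1 = win, 2 = draw, 3 = loss
P = [[0.4, 0.3, 0.3],
     [0.3, 0.4, 0.3],
     [0.2, 0.3, 0.5]]

P13 = P
for _ in range(12):       # multiply by P twelve more times to get P^13
    P13 = mat_mul(P13, P)

# P13[1][0] is ((P^13))_{2,1}, approximately 0.2916667 = 7/24
```

By game 14 the rows agree to many decimal places, so the result of the first game has essentially no influence on the last one.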

13 / 18
Random Walks
Let Y1 , Y2 , . . . be independent random variables, and define
Xn := Y1 + . . . + Yn .

Xn is a Markov Chain

P(Xn+1 = j|Xn = i, Xn−1 = in−1 , . . . , X1 = i1 )


= P(Yn+1 = j − i|Yn = i − in−1 , Yn−1 = in−1 − in−2 , . . . , Y1 = i1 )
= P(Yn+1 = j − i)
= P(Yn+1 = j − i|Xn = i)
= P(Xn+1 = j|Xn = i).

Additionally, if Y1 , Y2 , . . . are identically distributed, then the Markov chain is


time-homogeneous, with pij = P(Y1 = j − i).
Simple Random Walk: Here, Y1 , Y2 , . . . are i.i.d. taking values 1 and −1 with
probabilities p and 1 − p, respectively.
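A simple random walk is straightforward to simulate. The sketch below is an illustration only (the step count, p, and seed are arbitrary choices, not from the lecture):

```python
import random

def simple_random_walk(n_steps, p, seed=0):
    """One path of a simple random walk started at 0:
    each step is +1 with probability p, -1 with probability 1 - p."""
    rng = random.Random(seed)
    x, path = 0, [0]
    for _ in range(n_steps):
        x += 1 if rng.random() < p else -1
        path.append(x)
    return path

path = simple_random_walk(10, p=0.5)   # 11 positions: X0 = 0, X1, ..., X10
```

Note that every step changes the state by exactly ±1, so in k steps the walk can move at most k away from where it started.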
14 / 18
Simple Random Walk: 2-Step Transition Probabilities
A Direct Computation
If X1 = i, then X3 is i + 2 if Y2 = Y3 = 1, i − 2 if Y2 = Y3 = −1, and i if
Y2 = −Y3 .
Hence, we have:

           { p^2          if j = i + 2
pij^(2) =  { (1 − p)^2    if j = i − 2
           { 2p(1 − p)    if j = i .

Using the Chapman-Kolmogorov Equations

pij^(2) = ∑_{k∈Z} pik pkj = pi,i−1 pi−1,j + pi,i+1 pi+1,j .

For the above sum to be non-zero, we must have j = i − 2, i or i + 2.

If j = i − 2, then pij^(2) = pi,i−1 pi−1,i−2 = (1 − p)^2 .
If j = i, then pij^(2) = pi,i−1 pi−1,i + pi,i+1 pi+1,i = 2p(1 − p).
If j = i + 2, then pij^(2) = pi,i+1 pi+1,i+2 = p^2 .
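Both derivations can be double-checked by brute-force enumeration of (Y2, Y3). This small Python sketch (an illustration, not from the lecture) builds the exact 2-step distribution:

```python
from itertools import product

def two_step_probs(p, i):
    """Exact distribution of X3 given X1 = i, by enumerating (Y2, Y3)."""
    dist = {}
    for y2, y3 in product([1, -1], repeat=2):
        # Y2 and Y3 are independent, so probabilities multiply
        prob = (p if y2 == 1 else 1 - p) * (p if y3 == 1 else 1 - p)
        j = i + y2 + y3
        dist[j] = dist.get(j, 0.0) + prob
    return dist

d = two_step_probs(p=0.3, i=0)
# d[2] = p^2 = 0.09, d[-2] = (1-p)^2 = 0.49, d[0] = 2p(1-p) = 0.42
```

The three probabilities match the piecewise formula above and sum to 1.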

15 / 18
Branching Process

{Yi,j }i≥1,j≥1 are i.i.d. non-negative integer valued random variables.

Define X0 := 1 and for n ≥ 1,

Xn = ∑_{i=1}^{X_{n−1}} Yi,n .

Xn models the population size at generation n.

At generation 0, there is only 1 individual. It gives birth to Y1,1 individuals, who form generation 1.

In the (n − 1)th generation, the i th individual gives birth to Yi,n individuals.

Xn is a time-homogeneous Markov Chain. We’ll see why in the next slide.
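The recursion Xn = ∑_{i=1}^{X_{n−1}} Yi,n translates directly into a simulation. The sketch below is an illustration only; the uniform offspring law on {0, 1, 2} is an arbitrary choice, not from the lecture:

```python
import random

def simulate_branching(n_gens, offspring, seed=0):
    """Population sizes X0 = 1, X1, ..., X_{n_gens} of a branching process;
    offspring(rng) draws one individual's number of children."""
    rng = random.Random(seed)
    sizes = [1]
    for _ in range(n_gens):
        # each of the X_{n-1} current individuals reproduces independently
        sizes.append(sum(offspring(rng) for _ in range(sizes[-1])))
    return sizes

# Offspring law: 0, 1 or 2 children, each with probability 1/3
sizes = simulate_branching(5, lambda rng: rng.randint(0, 2))
```

Note that once the population hits 0 it stays at 0 forever: state 0 is absorbing.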

16 / 18
Why is the Branching Process a Time-Homogeneous
Markov Chain?
Since (X1 , . . . , Xn ) is a function of {Yi,j }i≥1,j≤n , it is independent of
{Yi,n+1 }i≥1 . Hence, we have:

P(Xn+1 = in+1 | Xn = in , . . . , X1 = i1 )
  = P( ∑_{i=1}^{Xn} Yi,n+1 = in+1 | Xn = in , . . . , X1 = i1 )
  = P( ∑_{i=1}^{in} Yi,n+1 = in+1 )
  = P( ∑_{i=1}^{Xn} Yi,n+1 = in+1 | Xn = in )
  = P(Xn+1 = in+1 | Xn = in ) .

The state space is clearly the set of all non-negative integers. Also,

pij = P( ∑_{k=1}^{i} Yk,1 = j ) .
17 / 18
Transition Probabilities of a Branching Process

Each individual in a branching process gives birth to Poisson(λ) many


individuals.

What are the transition probabilities for this branching process?

Note that

∑_{k=1}^{i} Yk,1 ∼ Poisson(iλ) .

Hence, we have:

pij = P( ∑_{k=1}^{i} Yk,1 = j ) = e^{−iλ} (iλ)^j / j! .

18 / 18
