
wi4525TU
Monte Carlo Simulation and Stochastic Processes

L.E. Meester

3 December 2024, Lecture 11


View back and ahead

8 (Further) Conditioning techniques


9 Two models and some stochastic processes we find in there
10 Describing, modeling, evolution in time; the Markov property
11 Discrete time Markov chains, long-term behaviour
12 Continuous time Markov chains
13 Brownian motion and stochastic differential equations
14 Processes and properties. Repair shop finale.


Model 8 from last week: summary


At repair-just-finished instants, the number of working machines completely determines the future evolution.
τn: time the nth repair is completed; τ0 = 0;
Nn: number of working machines at τn; N0 = m.
How long until the next repair is completed?
If Nn = i < m: just the next repair time; τn+1 = τn + Rn+1.
If Nn = m: first wait for a breakdown; τn+1 = τn + T1 + Rn+1, with T1 ∼ Exp(mλ), independent.
Each machine working at the start of a repair has probability p = P(U > R) of still working at the end.
If i machines work at the start and Y still work at the end, then Y ∼ Bin(i, p) (approximately).
If Nn = i < m then Nn+1 = Y + 1.
If Nn = m then Nn+1 = Y + 1, but with Y ∼ Bin(m − 1, p) (approximately).
We have a complete model description for the process {Nn, n ≥ 0}
if we determine pij = P(Nn+1 = j | Nn = i) for all i, j ∈ S.
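
The transition probabilities pij follow directly from this binomial description. Below is a minimal sketch (the function name and state indexing are ours; we assume the survival probability p = P(U > R) has already been computed, and take the state space as {1, . . . , m}, consistent with Nn+1 = Y + 1 ≥ 1):

```python
import numpy as np
from math import comb

def repair_shop_P(m, p):
    """Transition matrix of {N_n} on S = {1, ..., m}.

    From N_n = i < m: N_{n+1} = Y + 1 with Y ~ Bin(i, p) (approximately);
    from N_n = m:     N_{n+1} = Y + 1 with Y ~ Bin(m - 1, p).
    """
    def binom_pmf(k, n, q):
        return comb(n, k) * q**k * (1 - q)**(n - k) if 0 <= k <= n else 0.0

    P = np.zeros((m, m))
    for i in range(1, m + 1):              # current state N_n = i
        trials = i if i < m else m - 1     # machines at risk during the repair
        for j in range(1, m + 1):          # next state N_{n+1} = j, i.e. Y = j - 1
            P[i - 1, j - 1] = binom_pmf(j - 1, trials, p)
    return P
```

Each row sums to 1 automatically, because Y + 1 always lands in {1, . . . , m}.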

Today’s program

Theme: Discrete time Markov chains (DTMCs)


1 Markov chains in discrete time: definition

2 Transition probabilities
Example: extracting a Markov chain from a description
Two-step transition probabilities
The Chapman-Kolmogorov equations
Initial distribution and the distribution of Xn
3 Limiting behavior of Markov chains
Communicating classes
Transience and recurrence
Periodicity
The limit theorem
Implications (for the repair shop analysis)


Discrete time Markov chain (DTMC): definition

X = {Xn : n = 0, 1, 2, . . .}:
a stochastic process with finite or countable state space S.
Default assumption: states labeled so that S = {0, 1, 2, . . .}.
Markov chain
If, for any state sequence i0, . . . , in−1, any states i, j, and any n ≥ 0,

P(Xn+1 = j | Xn = i, Xn−1 = in−1, . . . , X0 = i0) = P(Xn+1 = j | Xn = i),    (1)

the process X is called a (discrete time) Markov chain.

Relation (1) is called the Markov property; intuitively, it says:


Given the present (Xn), the future (Xn+1) is independent of the past (Xn−1, . . . , X0).
This statement is still true if Xn+1 is replaced by Xn+1, . . . , Xn+m.
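
Because the next state depends only on the current one, simulating a Markov chain is straightforward. A minimal sketch (the function name and setup are illustrative, not from the handout):

```python
import numpy as np

def simulate_dtmc(P, x0, n_steps, rng=None):
    """Simulate a path X_0, ..., X_n of a DTMC with transition matrix P (NumPy array)."""
    rng = np.random.default_rng() if rng is None else rng
    path = [x0]
    for _ in range(n_steps):
        # The Markov property in action: the next state is drawn using
        # only the current state path[-1], not the earlier history.
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path
```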

Transition probabilities

Default assumption we make: P(Xn+1 = j | Xn = i) does not depend on n. The chain is then called time-homogeneous and we may define:
Transition probabilities
For a time-homogeneous Markov chain,

pij = P(Xn+1 = j | Xn = i)

is called the transition probability from state i to state j.


The transition matrix is defined by P = (pij )i,j∈S .

A discrete time Markov chain is completely specified by: the state space S, the initial state X0, and the transition probability matrix P.


Example: extracting the Markov chain from a description

A banker walks from her home to the bank in the morning; and in
the afternoon she walks back. She owns 3 umbrellas. Upon leaving
(her home or the bank) she takes an umbrella with her whenever it
rains, if one is available.
Suppose that at every departure instant, independently of what happens at other departure instants, it rains with probability p = 1/3.
Let Xn be the number of umbrellas at home in the early morning of day n.
Exercise:
1 Suppose Xn = 1. What are the possible values of Xn+1 and
their probabilities? Hint: draw a tree.
2 Repeat for Xn = i and the other values of i.
3 Is {Xn : n ≥ 0} a Markov chain?


Markov chain description

The full solution:


Since there are three umbrellas, S = {0, 1, 2, 3}.
What happens on any particular day, given the number of
umbrellas at home in the morning, is determined by:
whether or not it rains in the morning; plus
whether or not it rains in the evening;
these events happen independently of everything else,
hence Xn+1 given Xn is independent of X0 , . . . , Xn−1 .
So the Markov property holds.
If Xn = 0, then Xn+1 = 1 if it rains in the afternoon,
otherwise Xn+1 = 0. So:
P(Xn+1 = 1 | Xn = 0) = p = 1/3,
P(Xn+1 = 0 | Xn = 0) = q = 1 − p = 2/3,
P(Xn+1 = j | Xn = 0) = 0 for j = 2, 3.

If Xn = 1, then Xn+1 = 0 if it is a “rain–dry” day: p1,0 = pq = 2/9;
p1,1 = P(rain–rain or dry–dry) = p² + q² = 5/9.

The complete transition matrix:

P = (pij) =
[ 6/9  3/9   0    0  ]
[ 2/9  5/9  2/9   0  ]
[  0   2/9  5/9  2/9 ]
[  0    0   2/9  7/9 ]

Exercise. Verify this by computing the remaining pij .
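
As a quick sanity check, the matrix can be entered in NumPy and its rows verified to be probability distributions (a sketch; the variable name P is ours):

```python
import numpy as np

# Transition matrix of the umbrella chain (p = 1/3, q = 2/3).
P = np.array([[6, 3, 0, 0],
              [2, 5, 2, 0],
              [0, 2, 5, 2],
              [0, 0, 2, 7]]) / 9

assert np.allclose(P.sum(axis=1), 1.0)  # every row sums to one
```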


2-step transition probabilities

It is natural to ask for P(Xn+ℓ = j | Xn = i) for ℓ ≥ 2: given the state today, what are the probabilities ℓ steps into the future?
From now on, assume that S = {0, 1, . . . , m}. Consider ℓ = 2.
The Law of Total Probability says:

P(Xn+2 = j | Xn = i) = Σ_{k∈S} P(Xn+2 = j, Xn+1 = k | Xn = i),

since the events Bk = {Xn+1 = k}, k ∈ S, partition the outcome space. (Below we abbreviate A = {Xn+2 = j}, Bk = {Xn+1 = k}, C = {Xn = i}.)

Remark: This conditional version of the LTP can easily be checked by writing out the conditional probability on the left, applying the LTP to the numerator, and reassembling into conditional probabilities.


P(Xn+2 = j | Xn = i) = Σ_{k∈S} P(Xn+2 = j, Xn+1 = k | Xn = i),

with A = {Xn+2 = j}, Bk = {Xn+1 = k}, C = {Xn = i} as before.

Two facts, for you to verify:
1 For any events A, B, C: P(A ∩ B | C) = P(B | C) P(A | B ∩ C).
2 By the Markov property: P(A | Bk ∩ C) = P(A | Bk).

Combining them: P(A ∩ Bk | C) = P(Bk | C) P(A | Bk), and

P(Xn+2 = j | Xn = i) = Σ_{k∈S} P(Xn+2 = j, Xn+1 = k | Xn = i)    [LTP]
                     = Σ_{k∈S} P(Xn+1 = k | Xn = i) · P(Xn+2 = j | Xn+1 = k)
                     = Σ_{k∈S} pik pkj.


The final result says:

P(Xn+2 = j | Xn = i) = Σ_{k=0}^{m} pik pkj.    (2)

The probabilities on the LHS apparently—see the RHS—do not depend on n, so we may use pij^(2) as shorthand; write P^(2) = (pij^(2)).

For the matrix of two-step transition probabilities P^(2), result (2) can be summarized as

P^(2) = P · P = P².

Exercise. Name the objects in red in proper math terminology.


Applying this, we easily find the 2-step transition probabilities for the umbrella problem:

P^(2) = (pij^(2)) =
[ 6/9  3/9   0    0  ]   [ 6/9  3/9   0    0  ]
[ 2/9  5/9  2/9   0  ] · [ 2/9  5/9  2/9   0  ]
[  0   2/9  5/9  2/9 ]   [  0   2/9  5/9  2/9 ]
[  0    0   2/9  7/9 ]   [  0    0   2/9  7/9 ]

=
[ 0.5185  0.4074  0.0741  0      ]
[ 0.2716  0.4321  0.2469  0.0494 ]
[ 0.0494  0.2469  0.4074  0.2963 ]
[ 0       0.0494  0.2963  0.6543 ]
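
In NumPy this is a single matrix product (continuing the earlier sketch, where P was defined):

```python
P2 = P @ P            # two-step transition matrix P^(2)
print(P2.round(4))    # matches the matrix above, up to rounding
```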



The Chapman-Kolmogorov equations


The above steps can be repeated for ℓ = 3 and higher, and combined to show that the ℓ-step transition probability

P(Xn+ℓ = j | Xn = i) = pij^(ℓ)

equals the (i, j)th element of P^ℓ, the ℓth power of the matrix P.

The matrix-multiplication fact P^(n+ℓ) = P^n · P^ℓ tells us they satisfy

The Chapman-Kolmogorov equations
For every n ≥ 1, ℓ ≥ 1:   pij^(n+ℓ) = Σ_{k=0}^{m} pik^(n) pkj^(ℓ).

N.B. As for ℓ = 2, this can also be shown from the LTP by conditioning on Xℓ and then proceeding as with ℓ = 2:

P(Xn+ℓ = j | X0 = i) = Σ_{k=0}^{m} P(Xn+ℓ = j, Xℓ = k | X0 = i).
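
A quick numerical check of the Chapman-Kolmogorov equations (a sketch; P is the umbrella matrix from before, but any stochastic matrix works):

```python
import numpy as np

n, l = 3, 5
lhs = np.linalg.matrix_power(P, n + l)
rhs = np.linalg.matrix_power(P, n) @ np.linalg.matrix_power(P, l)
assert np.allclose(lhs, rhs)   # P^(n+l) = P^n · P^l, entrywise the C-K equations
```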


Initial distribution and the distribution of Xn

Probabilities derived so far are conditional ones: P(Xn = j | X0 = i).
For unconditional probabilities we can use the LTP again (and the product rule for probabilities):

P(Xn = j) = Σ_{k=0}^{m} P(Xn = j | X0 = k) P(X0 = k),    (3)

so we need an initial distribution: a collection α = (αk)k∈S with

αk = P(X0 = k), and Σ_{k=0}^{m} αk = 1.

Equation (3) can be written as P(Xn = j) = Σ_{k=0}^{m} αk pkj^(n): the vector-matrix product α · P^n represents the distribution of Xn.
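
In code the distribution of Xn is exactly this vector-matrix product (a sketch, reusing the umbrella matrix P):

```python
import numpy as np

alpha = np.array([0, 0, 0, 1.0])                # initial distribution of X_0
dist_X2 = alpha @ np.linalg.matrix_power(P, 2)  # distribution of X_2
print(dist_X2.round(4))                         # -> [0. 0.0494 0.2963 0.6543]
```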


Umbrella example (continued)


Suppose X0 ≡ 3: at time 0, all the umbrellas are at home.
The initial distribution: α = (α0 , . . . , α3 ) = (0, 0, 0, 1).
Then the distribution of X1 is:
6 3 
9 9 0 0
 2 5 2 0
α · P = (0, 0, 0, 1)  9 29 95 2  = (0, 0, 29 , 79 ).
 
0 9 9 9 
2 7
0 0 9 9

That of X2 :
 
0.5185 0.4074 0.0741 0
2
0.2716 0.4321 0.2469 0.0494
α · P = (0, 0, 0, 1) 
0.0494 0.2469 0.4074

0.2963
0 0.0494 0.2963 0.6543
= (0, 0.0494, 0.2963, 0.6543).

The distribution of X3 can be read from the last row of

P³ =
[ 0.4362  0.4156  0.1317  0.0165 ]
[ 0.2771  0.3855  0.2442  0.0933 ]
[ 0.0878  0.2442  0.3471  0.3210 ]
[ 0.0110  0.0933  0.3210  0.5748 ]

That of X37 from

P³⁷ =
[ 0.1826  0.2734  0.2724  0.2717 ]
[ 0.1822  0.2731  0.2725  0.2722 ]
[ 0.1816  0.2725  0.2728  0.2730 ]
[ 0.1811  0.2722  0.2730  0.2737 ]

Wow: all the rows are remarkably close to the probability vector
(2/11, 3/11, 3/11, 3/11) = (0.1818, 0.2727, 0.2727, 0.2727).

This is no coincidence: for “nicely behaved” Markov chains, the distribution of Xn converges to a limit as n → ∞.
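
This convergence is easy to observe numerically (a sketch, continuing with the umbrella matrix P):

```python
import numpy as np

P37 = np.linalg.matrix_power(P, 37)
pi = np.array([2, 3, 3, 3]) / 11
print(np.abs(P37 - pi).max())   # tiny: every row of P^37 is close to pi
```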


Limiting behavior—intuitive overview

“Nice” Markov chains converge in distribution to a limit.


A Markov chain is “nice” if it satisfies these properties:
Irreducibility: it should not be the case that the state space S
can be partitioned into several parts such that, if the chain
starts in one part, it can never visit any of the other parts;
Aperiodicity: it should not be the case that certain transitions
between states can only happen after an even number of time
steps, or some other fixed multiple.
All Markov chains have so-called stationary distributions,
solutions of α = α · P;
“Nicely behaved” Markov chains have only one, and this is
(also) the limiting distribution.
Next: more detail on these concepts and then the limit theorem.



Example 1:

S = {0, 1, 2, 3} and

P1 =
[ 0.9  0    0.1  0   ]
[ 0.6  0    0.4  0   ]
[ 0    0.5  0    0.5 ]
[ 0    0.3  0    0.7 ]

Let’s draw a transition diagram, with arrows indicating transitions that may actually happen. (The slide shows the diagram, with nodes 0, 1, 2, 3.)

State j is accessible from state i if we can find a path of arrows that brings us from i to j. If the accessibility holds in both directions, we say that i and j communicate.

Exercise: a) Find all communicating state pairs. b) Find groups of states such that all group members communicate. Here the state space is one communicating class: the chain is irreducible.

Communicating classes of states—formal definitions

Definitions
1 State j is called accessible from state i if pij^(n) > 0 for some n. Notation: i ⇝ j.
2 States i and j are said to communicate if i ⇝ j and j ⇝ i.
3 A subset of S in which all members communicate with each other is called a communicating class.
4 A Markov chain is called irreducible if it has a single communicating class: all states communicate with each other.

Notes:
i ⇝ j and j ⇝ k imply i ⇝ k: if pij^(m) > 0 and pjk^(n) > 0, then pik^(m+n) > 0.
The state space partitions into communicating classes.
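
Communicating classes are exactly the strongly connected components of the directed graph with an edge i → j whenever pij > 0, so a library routine can find them. A sketch using SciPy (the helper name is ours):

```python
import numpy as np
from scipy.sparse.csgraph import connected_components

def communicating_classes(P):
    """Group the states of a DTMC into communicating classes."""
    n_classes, labels = connected_components(P > 0, directed=True,
                                             connection='strong')
    return [np.flatnonzero(labels == c).tolist() for c in range(n_classes)]

P1 = np.array([[0.9, 0, 0.1, 0],
               [0.6, 0, 0.4, 0],
               [0, 0.5, 0, 0.5],
               [0, 0.3, 0, 0.7]])
print(communicating_classes(P1))   # -> [[0, 1, 2, 3]]: the chain is irreducible
```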


Example 2

S = {0, 1, 2, 3} and the transition matrix

P2 =
[ 1    0    0    0   ]
[ 0    1/2  1/2  0   ]
[ 0    1/2  1/2  0   ]
[ 1/2  0    0    1/2 ]

Exercise:
Draw the transition diagram.
Determine the communicating classes.
For each i find or guess the return probability

fii = P(Xn = i for some n ≥ 1 | X0 = i).

Transient and recurrent states


 
P2 =
[ 1    0    0    0   ]
[ 0    1/2  1/2  0   ]
[ 0    1/2  1/2  0   ]
[ 1/2  0    0    1/2 ]

Three classes: S = {0} ∪ {1, 2} ∪ {3}. (Transition diagram on the slide.)

For the return probabilities we find:
f00 = 1, f11 = f22 = 1, and f33 = 1/2.
States 0, 1, and 2 are called recurrent: starting in one of these states, the chain is certain to return to it.
State 3 is transient: from some point on, the chain never returns to state 3.
Note that the chain is reducible to two chains, one on {0, 3} and the other on {1, 2}. And that irreducibility is a natural assumption.
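
Return probabilities can also be estimated by simulation. A rough Monte Carlo sketch (the cutoff max_steps is our assumption, needed to truncate paths that never return):

```python
import numpy as np

def estimate_fii(P, i, n_paths=10_000, max_steps=200, rng=None):
    """Monte Carlo estimate of f_ii = P(return to i | X_0 = i)."""
    rng = np.random.default_rng() if rng is None else rng
    returns = 0
    for _ in range(n_paths):
        x = i
        for _ in range(max_steps):
            x = rng.choice(len(P), p=P[x])
            if x == i:              # first return to state i
                returns += 1
                break
    return returns / n_paths

P2 = np.array([[1, 0, 0, 0],
               [0, .5, .5, 0],
               [0, .5, .5, 0],
               [.5, 0, 0, .5]])
print(estimate_fii(P2, 3))          # should be close to f_33 = 1/2
```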


Example 3

 1 1 
0 2
0 2
1 1
0 0
P3 =  0 2
1
2
0 1
2 2
1 1
2
0 2
0

Exercise:
Draw a transition diagram. Check that the chain is irreducible.
In how many steps it is feasible to return to 2 when X0 = 2?
The same for the other states.
For every i ∈ S it is only possible to return to i after an even
number of steps. The chain is periodic with period 2.
Replace first row by ( 13 , 13 , 0, 13 ) and repeat.
Then returns to 2 are possible after 2, 4, 5, 6, . . . steps; period is 1.


Periodicity—formal definition

The period of a state

The period di of state i is given by

di = gcd{n ≥ 1 : pii^(n) > 0},

where gcd stands for greatest common divisor. If di = 1 the state is called aperiodic; if di > 1, periodic. The period is a class property: all states in a communicating class have the same period.

Period d means that returns to state i are only possible after a multiple of d time steps:

pii^(d) ≥ 0, pii^(2d) ≥ 0, pii^(3d) ≥ 0, . . . , and pii^(n) = 0 if n is not a multiple of d.
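
The period can be computed brute-force from this definition (a sketch; truncating the gcd at n_max matrix powers is our simplification—in theory the gcd runs over all n):

```python
import math
import numpy as np

def period(P, i, n_max=50):
    """gcd of { n >= 1 : (P^n)[i, i] > 0 }, scanning n <= n_max."""
    d = 0
    Pn = np.eye(len(P))
    for n in range(1, n_max + 1):
        Pn = Pn @ P                 # Pn now equals P^n
        if Pn[i, i] > 1e-12:        # a return to i in n steps is possible
            d = math.gcd(d, n)
    return d

# With P3 as above, period(P3, 2) returns 2; after replacing the first
# row by (1/3, 1/3, 0, 1/3) it returns 1.
```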



Limit theorem

Theorem (convergence to limiting distribution)

For an irreducible and aperiodic Markov chain, with state space S = {0, 1, . . . , m}, for any states i and j:

lim_{n→∞} pij^(n) = πj,

where the vector π = (π0, . . . , πm) is the nonnegative solution to

π = πP,   π0 + · · · + πm = 1.

π is the normalized left eigenvector of P for eigenvalue 1.
π is a stationary distribution: if X0 ∼ π then Xn ∼ π for all n.
For countable state spaces and for periodic chains, modified results hold.
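
The stationary distribution can be computed by solving the linear system π = πP together with the normalization constraint (a sketch; the helper name is ours):

```python
import numpy as np

def stationary_distribution(P):
    """Solve pi = pi P together with sum(pi) = 1 for a finite-state chain."""
    m = len(P)
    # Stack (P^T - I) with a row of ones; the last equation enforces sum = 1.
    A = np.vstack([P.T - np.eye(m), np.ones(m)])
    b = np.zeros(m + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

print(stationary_distribution(P))   # umbrella chain: approx (2/11, 3/11, 3/11, 3/11)
```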

Related results

The convergence theorem implies several so-called ergodic results: relations between long-term time averages and expectations under the stationary distribution.

Long term average cost I.

Suppose, for an irreducible aperiodic Markov chain, at time k a random cost Ck is incurred, where Ck depends only on the state Xk. Define c(i) = E[Ck | Xk = i], the expected cost when Xk = i. Then

lim_{n→∞} (1/n) Σ_{k=1}^{n} Ck = Σ_{i=0}^{m} πi c(i),

i.e., the long-term average cost per unit time equals the expected cost under the stationary distribution.


Long term average cost II.

Suppose an irreducible aperiodic Markov chain incurs a cost C(i, j) upon each transition from state i to state j. Then

lim_{n→∞} (1/n) Σ_{k=1}^{n} C(Xk−1, Xk) = Σ_{i=0}^{m} πi c(i),

where

c(i) = E[C(X0, X1) | X0 = i] = Σ_{j=0}^{m} pij C(i, j),

i.e., the long-term average cost per unit time equals the expected cost under the stationary distribution.

We will use this result to compute the long-term average production loss for the repair shop model (last lecture).
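
A simulation check of this ergodic result (a sketch; the cost function is an arbitrary illustrative choice, and P and stationary_distribution are reused from the earlier sketches):

```python
import numpy as np

rng = np.random.default_rng(0)
C = lambda i, j: float(i != j)      # illustrative cost: 1 whenever the state changes

# Long-run time average along one simulated path.
n, x, total = 200_000, 0, 0.0
for _ in range(n):
    y = rng.choice(len(P), p=P[x])
    total += C(x, y)
    x = y
print(total / n)                    # time average of the transition costs

# Compare with the stationary expectation: sum_i pi_i sum_j p_ij C(i, j).
pi = stationary_distribution(P)
c = np.array([sum(P[i, j] * C(i, j) for j in range(len(P))) for i in range(len(P))])
print(pi @ c)                       # should be close to the time average
```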


Material covered; exercises

The material in this lecture can be found in Chapter 4 of Ross.


Specifically: §4.1; §4.2 up to the 6th line on page 200 (about
entering a set A); §4.3 (skip the part after Example 4.15 up to
Corollary 4.2), taking note of Corollary 4.2, the remarks following
it and Examples 4.16–17, but skipping the remainder; §4.4 (skip
Examples 4.23–26 and the proof of Proposition 4.3).

We shall practice this material in Assignment 5.

Recommended exercises from Chapter 4 (Ross): 1–3, 5, 7, 9, 14, 29; 41 (harder but insightful).
