Unit 5 Part 2 Probability
Markov Chains
4.1. Introduction
Consider a stochastic process $\{X_n, n = 0, 1, 2, \ldots\}$ that takes on a finite or countable number of possible values, and suppose that
$$P\{X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_1 = i_1, X_0 = i_0\} = P_{ij} \qquad (4.1)$$
for all states $i_0, i_1, \ldots, i_{n-1}, i, j$ and all $n \ge 0$. Such a stochastic process
is known as a Markov chain. Equation (4.1) may be interpreted as
stating that, for a Markov chain, the conditional distribution of any
future state $X_{n+1}$, given the past states $X_0, X_1, \ldots, X_{n-1}$ and the present
state $X_n$, is independent of the past states and depends only on the
present state.
The value $P_{ij}$ represents the probability that the process will, when in
state $i$, next make a transition into state $j$. Since probabilities are non-
negative and since the process must make a transition into some state, we
have that
$$P_{ij} \ge 0, \quad i, j \ge 0; \qquad \sum_{j=0}^{\infty} P_{ij} = 1, \quad i = 0, 1, \ldots$$
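These two conditions say that every row of the transition matrix is a probability distribution over the next state. As a quick illustration (not from the text; the helper name `is_stochastic` and the use of Python/NumPy are our own), they can be checked numerically:

```python
import numpy as np

def is_stochastic(P, tol=1e-12):
    """Check the two conditions above: P_ij >= 0 and each row sums to 1."""
    P = np.asarray(P, dtype=float)
    return bool((P >= 0).all()) and np.allclose(P.sum(axis=1), 1.0, atol=tol)

print(is_stochastic([[0.7, 0.3], [0.4, 0.6]]))  # True: a valid transition matrix
print(is_stochastic([[0.7, 0.4], [0.4, 0.6]]))  # False: first row sums to 1.1
```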
Example 4.3 On any given day Gary is either cheerful (C), so-so (S), or
glum (G). If he is cheerful today, then he will be C, S, or G tomorrow with
respective probabilities 0.5, 0.4, 0.1. If he is feeling so-so today, then he will
be C, S, or G tomorrow with probabilities 0.3, 0.4, 0.3. If he is glum today,
then he will be C, S, or G tomorrow with probabilities 0.2, 0.3, 0.5.
Letting $X_n$ denote Gary's mood on the $n$th day, then $\{X_n, n \ge 0\}$ is a
three-state Markov chain (state 0 = C, state 1 = S, state 2 = G) with
transition probability matrix
$$P = \begin{bmatrix} 0.5 & 0.4 & 0.1 \\ 0.3 & 0.4 & 0.3 \\ 0.2 & 0.3 & 0.5 \end{bmatrix}$$
The reader should carefully check the matrix $P$, and make sure he or she
understands how it was obtained. ◆
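One way to internalize the matrix is to simulate the chain from it. The following is a minimal sketch (Python/NumPy and the function name `simulate_moods` are our own choices; the matrix is exactly the one above):

```python
import numpy as np

# Transition matrix of Example 4.3; rows index today's mood,
# columns tomorrow's. States: 0 = cheerful, 1 = so-so, 2 = glum.
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

rng = np.random.default_rng(0)

def simulate_moods(n_days, start=0):
    """Sample a path X_0, X_1, ..., X_n of Gary's moods."""
    path = [start]
    for _ in range(n_days):
        # The next state depends only on the current one: draw it
        # from the row of P corresponding to the current state.
        path.append(int(rng.choice(3, p=P[path[-1]])))
    return path

print(simulate_moods(10))  # e.g. [0, 1, 1, 2, ...]
```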
Example 4.5 (A Random Walk Model): A Markov chain whose state
space is given by the integers $i = 0, \pm 1, \pm 2, \ldots$ is said to be a random walk
if, for some number $0 < p < 1$,
$$P_{i,i+1} = p = 1 - P_{i,i-1}, \qquad i = 0, \pm 1, \ldots$$
The preceding Markov chain is called a random walk for we may think of
it as being a model for an individual walking on a straight line who at each
point of time either takes one step to the right with probability $p$ or one step
to the left with probability $1 - p$. ◆
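A direct simulation of this walk is straightforward; the sketch below (the function name and parameters are illustrative, not from the text) applies the transition rule step by step:

```python
import random

def random_walk(p, n_steps, start=0):
    """Take n_steps of the walk: +1 with probability p, -1 otherwise."""
    position = start
    for _ in range(n_steps):
        position += 1 if random.random() < p else -1
    return position

# With p = 0.5 the walk is symmetric, so the final position averages near 0.
print(random_walk(p=0.5, n_steps=1000))
```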
States 0 and $N$ are called absorbing states since once entered they are
never left. Note that the above is a finite-state random walk with absorbing
barriers (states 0 and $N$). ◆
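Assuming the interior states move as in Example 4.5 (one step right with probability $p$, one step left with probability $1 - p$), a sketch of the absorbing walk might look like this (the names here are our own):

```python
import random

def walk_until_absorbed(p, N, start):
    """Run the walk on {0, 1, ..., N} until it hits an absorbing
    barrier (state 0 or state N); return the absorbing state."""
    state = start
    while 0 < state < N:
        state += 1 if random.random() < p else -1
    return state

# Fraction of runs absorbed at N, starting from the middle of {0, ..., 10}.
runs = [walk_until_absorbed(p=0.5, N=10, start=5) for _ in range(10_000)]
print(sum(s == 10 for s in runs) / len(runs))  # near 0.5, by symmetry
```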
4.2. Chapman-Kolmogorov Equations
The Chapman-Kolmogorov equations provide a method for computing the
$n$-step transition probabilities $P_{ij}^{n}$, defined as the probability that a
process in state $i$ will be in state $j$ after $n$ additional transitions. These
equations are
$$P_{ij}^{n+m} = \sum_{k=0}^{\infty} P_{ik}^{n} P_{kj}^{m} \qquad \text{for all } n, m \ge 0, \text{ all } i, j, \qquad (4.2)$$
and are most easily understood by noting that $P_{ik}^{n} P_{kj}^{m}$ represents the prob-
ability that, starting in $i$, the process will go to state $j$ in $n + m$ transitions
through a path which takes it into state $k$ at the $n$th transition. Hence,
summing over all intermediate states $k$ yields the probability that the
process will be in state $j$ after $n + m$ transitions. Formally, we have
$$P_{ij}^{n+m} = P\{X_{n+m} = j \mid X_0 = i\}
= \sum_{k=0}^{\infty} P\{X_{n+m} = j \mid X_n = k, X_0 = i\}\, P\{X_n = k \mid X_0 = i\}
= \sum_{k=0}^{\infty} P_{kj}^{m} P_{ik}^{n},$$
where we have conditioned on the state at time $n$ and used the Markov property.
If we let $P^{(n)}$ denote the matrix of $n$-step transition probabilities $P_{ij}^{n}$, then
Equation (4.2) asserts that
$$P^{(n+m)} = P^{(n)} \cdot P^{(m)},$$
where the dot represents matrix multiplication. Hence, in particular,
$$P^{(2)} = P^{(1+1)} = P \cdot P = P^2,$$
and by induction,
$$P^{(n)} = P^{(n-1+1)} = P^{n-1} \cdot P = P^n.$$
That is, the $n$-step transition matrix may be obtained by multiplying the
matrix $P$ by itself $n$ times.
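In code this is a single matrix power. A minimal sketch using NumPy, with the mood matrix of Example 4.3 as the example (our choice):

```python
import numpy as np

# One-step transition matrix from Example 4.3 (Gary's moods).
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# The 4-step transition matrix P^(4) = P^4.
P4 = np.linalg.matrix_power(P, 4)

# Entry (i, j) is the probability of being in state j four days after
# starting in state i; each row still sums (numerically) to 1.
print(P4)
print(P4.sum(axis=1))
```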
So far we have considered conditional probabilities only. To obtain the
unconditional distribution of the state at time $n$, we must specify the
initial probability distribution $\alpha_i \equiv P\{X_0 = i\}$, $i \ge 0$, where
$\sum_i \alpha_i = 1$. Hence,
$$P\{X_n = j\} = \sum_{i=0}^{\infty} P\{X_n = j \mid X_0 = i\}\, P\{X_0 = i\} = \sum_{i=0}^{\infty} \alpha_i P_{ij}^{n}.$$
For instance, if $\alpha_0 = 0.4$ and $\alpha_1 = 0.6$ in Example 4.7, then the (uncon-
ditional) probability that it will rain four days after we begin keeping
weather records is
$$P\{X_4 = 0\} = 0.4\, P_{00}^{4} + 0.6\, P_{10}^{4}.$$
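Since the transition matrix of Example 4.7 is not reproduced above, the sketch below reuses the mood chain of Example 4.3 as a stand-in to show the computation $\sum_i \alpha_i P_{ij}^n$, which in matrix form is $\alpha P^n$ (our own illustration):

```python
import numpy as np

# Stand-in transition matrix (Example 4.3); the method is identical
# for the two-state weather chain of Example 4.7.
P = np.array([[0.5, 0.4, 0.1],
              [0.3, 0.4, 0.3],
              [0.2, 0.3, 0.5]])

# Initial distribution alpha_i = P{X_0 = i}; the third state gets
# probability 0 to mirror the two-point distribution in the text.
alpha = np.array([0.4, 0.6, 0.0])

# Unconditional distribution of X_4: alpha times the 4-step matrix.
dist4 = alpha @ np.linalg.matrix_power(P, 4)
print(dist4, dist4.sum())  # the entries sum (numerically) to 1
```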