Unit 5 Part 2 Probability

Chapter 4

Markov Chains

4.1. Introduction

In this chapter, we consider a stochastic process $\{X_n, n = 0, 1, 2, \ldots\}$ that takes on a finite or countable number of possible values. Unless otherwise mentioned, this set of possible values of the process will be denoted by the set of nonnegative integers $\{0, 1, 2, \ldots\}$. If $X_n = i$, then the process is said to be in state $i$ at time $n$. We suppose that whenever the process is in state $i$, there is a fixed probability $P_{ij}$ that it will next be in state $j$. That is, we suppose that

$$P\{X_{n+1} = j \mid X_n = i, X_{n-1} = i_{n-1}, \ldots, X_1 = i_1, X_0 = i_0\} = P_{ij} \tag{4.1}$$

for all states $i_0, i_1, \ldots, i_{n-1}, i, j$ and all $n \geq 0$. Such a stochastic process is known as a Markov chain. Equation (4.1) may be interpreted as stating that, for a Markov chain, the conditional distribution of any future state $X_{n+1}$, given the past states $X_0, X_1, \ldots, X_{n-1}$ and the present state $X_n$, is independent of the past states and depends only on the present state.
The value $P_{ij}$ represents the probability that the process will, when in state $i$, next make a transition into state $j$. Since probabilities are nonnegative and since the process must make a transition into some state, we have that

$$P_{ij} \geq 0, \quad i, j \geq 0; \qquad \sum_{j=0}^{\infty} P_{ij} = 1, \quad i = 0, 1, \ldots$$

Let $\mathbf{P}$ denote the matrix of one-step transition probabilities $P_{ij}$, so that

$$\mathbf{P} = \begin{pmatrix} P_{00} & P_{01} & P_{02} & \cdots \\ P_{10} & P_{11} & P_{12} & \cdots \\ \vdots & \vdots & \vdots & \\ P_{i0} & P_{i1} & P_{i2} & \cdots \\ \vdots & \vdots & \vdots & \end{pmatrix}$$

Example 4.1 (Forecasting the Weather): Suppose that the chance of rain tomorrow depends on previous weather conditions only through whether or not it is raining today, and not on past weather conditions. Suppose also that if it rains today, then it will rain tomorrow with probability $\alpha$; and if it does not rain today, then it will rain tomorrow with probability $\beta$.

If we say that the process is in state 0 when it rains and state 1 when it does not rain, then the above is a two-state Markov chain whose transition probabilities are given by

$$\mathbf{P} = \begin{pmatrix} \alpha & 1 - \alpha \\ \beta & 1 - \beta \end{pmatrix}$$
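To make this concrete, here is a minimal sketch in Python (using NumPy, with the illustrative values $\alpha = 0.7$ and $\beta = 0.4$ that reappear in Example 4.7) that builds the transition matrix and checks that each row sums to 1:

```python
import numpy as np

alpha, beta = 0.7, 0.4  # illustrative values; any probabilities in [0, 1] work

# State 0 = rain, state 1 = no rain; entry (i, j) is P_ij.
P = np.array([[alpha, 1 - alpha],
              [beta,  1 - beta]])

# Each row of a transition probability matrix must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)
```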

Example 4.2 (A Communications System): Consider a communications system which transmits the digits 0 and 1. Each digit transmitted must pass through several stages, at each of which there is a probability $p$ that the digit entered will be unchanged when it leaves. Letting $X_n$ denote the digit entering the $n$th stage, then $\{X_n, n = 0, 1, \ldots\}$ is a two-state Markov chain having a transition probability matrix

$$\mathbf{P} = \begin{pmatrix} p & 1 - p \\ 1 - p & p \end{pmatrix}$$

Example 4.3 On any given day Gary is either cheerful (C), so-so (S), or glum (G). If he is cheerful today, then he will be C, S, or G tomorrow with respective probabilities 0.5, 0.4, 0.1. If he is feeling so-so today, then he will be C, S, or G tomorrow with probabilities 0.3, 0.4, 0.3. If he is glum today, then he will be C, S, or G tomorrow with probabilities 0.2, 0.3, 0.5.

Letting $X_n$ denote Gary's mood on the $n$th day, then $\{X_n, n \geq 0\}$ is a three-state Markov chain (state 0 = C, state 1 = S, state 2 = G) with transition probability matrix

$$\mathbf{P} = \begin{pmatrix} 0.5 & 0.4 & 0.1 \\ 0.3 & 0.4 & 0.3 \\ 0.2 & 0.3 & 0.5 \end{pmatrix}$$
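As an illustration of how such a chain evolves, the sketch below (hypothetical helper code, not from the text) simulates a path of Gary's moods by repeatedly sampling tomorrow's state from the row of $\mathbf{P}$ corresponding to today's state:

```python
import numpy as np

rng = np.random.default_rng()

P = np.array([[0.5, 0.4, 0.1],   # row 0: cheerful today
              [0.3, 0.4, 0.3],   # row 1: so-so today
              [0.2, 0.3, 0.5]])  # row 2: glum today

def simulate(P, start, n_steps):
    """Sample a path X_0, ..., X_n of a Markov chain with transition matrix P."""
    path = [start]
    for _ in range(n_steps):
        # Tomorrow's state is drawn from the row indexed by today's state.
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

print(simulate(P, start=0, n_steps=10))  # a sample path of moods over 10 days
```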

Example 4.4 (Transforming a Process into a Markov Chain): Suppose that whether or not it rains today depends on previous weather conditions through the last two days. Specifically, suppose that if it has rained for the past two days, then it will rain tomorrow with probability 0.7; if it rained today but not yesterday, then it will rain tomorrow with probability 0.5; if it rained yesterday but not today, then it will rain tomorrow with probability 0.4; if it has not rained in the past two days, then it will rain tomorrow with probability 0.2.

If we let the state at time $n$ depend only on whether or not it is raining at time $n$, then the above model is not a Markov chain (why not?). However, we can transform the above model into a Markov chain by saying that the state at any time is determined by the weather conditions during both that day and the previous day. In other words, we can say that the process is in

state 0 if it rained both today and yesterday,
state 1 if it rained today but not yesterday,
state 2 if it rained yesterday but not today,
state 3 if it did not rain either yesterday or today.

The preceding would then represent a four-state Markov chain having a transition probability matrix

$$\mathbf{P} = \begin{pmatrix} 0.7 & 0 & 0.3 & 0 \\ 0.5 & 0 & 0.5 & 0 \\ 0 & 0.4 & 0 & 0.6 \\ 0 & 0.2 & 0 & 0.8 \end{pmatrix}$$

The reader should carefully check the matrix $\mathbf{P}$ and make sure he or she understands how it was obtained. ◆
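One way to check the matrix is to generate it mechanically. The sketch below (assumed helper code, not from the text) encodes each state as a pair (rain today, rain yesterday) and uses the fact that tomorrow's state is (rain tomorrow, rain today):

```python
import numpy as np

# State encoding: (rained today, rained yesterday) for states 0, 1, 2, 3.
states = [(1, 1), (1, 0), (0, 1), (0, 0)]

# P(rain tomorrow | rained today, rained yesterday), from the example.
rain_prob = {(1, 1): 0.7, (1, 0): 0.5, (0, 1): 0.4, (0, 0): 0.2}

P = np.zeros((4, 4))
for i, (today, yesterday) in enumerate(states):
    p = rain_prob[(today, yesterday)]
    # Tomorrow's state is (rain tomorrow, rain today).
    P[i, states.index((1, today))] = p      # it rains tomorrow
    P[i, states.index((0, today))] = 1 - p  # it does not rain tomorrow

print(P)  # reproduces the four-state matrix above
```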
Example 4.5 (A Random Walk Model): A Markov chain whose state space is given by the integers $i = 0, \pm 1, \pm 2, \ldots$ is said to be a random walk if, for some number $0 < p < 1$,

$$P_{i,i+1} = p = 1 - P_{i,i-1}, \qquad i = 0, \pm 1, \ldots$$

The preceding Markov chain is called a random walk for we may think of it as being a model for an individual walking on a straight line who at each point of time either takes one step to the right with probability $p$ or one step to the left with probability $1 - p$. ◆
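A short simulation sketch (illustrative only) makes the dynamics concrete:

```python
import numpy as np

rng = np.random.default_rng()

def random_walk(p, n_steps, start=0):
    """Simulate the walk: each step is +1 with probability p, else -1."""
    steps = rng.choice([1, -1], size=n_steps, p=[p, 1 - p])
    return start + np.cumsum(steps)

print(random_walk(p=0.5, n_steps=10))  # positions after each of 10 steps
```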

Example 4.6 (A Gambling Model): Consider a gambler who, at each play of the game, either wins \$1 with probability $p$ or loses \$1 with probability $1 - p$. If we suppose that our gambler quits playing either when he goes broke or he attains a fortune of \$N, then the gambler's fortune is a Markov chain having transition probabilities

$$P_{i,i+1} = p = 1 - P_{i,i-1}, \qquad i = 1, 2, \ldots, N - 1$$
$$P_{00} = P_{NN} = 1$$

States 0 and $N$ are called absorbing states since once entered they are never left. Note that the above is a finite state random walk with absorbing barriers (states 0 and $N$). ◆
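For instance, the following sketch (illustrative values $p = 0.5$, $N = 10$, starting fortune 5) estimates by simulation the probability that the gambler reaches \$N before going broke:

```python
import numpy as np

rng = np.random.default_rng()

def reaches_goal(p, start, N):
    """Play until the fortune hits 0 or N; return True if N was reached."""
    fortune = start
    while 0 < fortune < N:
        fortune += 1 if rng.random() < p else -1
    return fortune == N

trials = 10_000
wins = sum(reaches_goal(p=0.5, start=5, N=10) for _ in range(trials))
print(wins / trials)  # for a fair game started at N/2 this is near 0.5
```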
4.2. Chapman-Kolmogorov Equations

We have already defined the one-step transition probabilities $P_{ij}$. We now define the $n$-step transition probabilities $P_{ij}^n$ to be the probability that a process in state $i$ will be in state $j$ after $n$ additional transitions. That is,

$$P_{ij}^n = P\{X_{n+k} = j \mid X_k = i\}, \qquad n \geq 0, \ i, j \geq 0$$

Of course, $P_{ij}^1 = P_{ij}$. The Chapman-Kolmogorov equations provide a method for computing these $n$-step transition probabilities. These equations are

$$P_{ij}^{n+m} = \sum_{k=0}^{\infty} P_{ik}^n P_{kj}^m \qquad \text{for all } n, m \geq 0, \text{ all } i, j \tag{4.2}$$

and are most easily understood by noting that $P_{ik}^n P_{kj}^m$ represents the probability that, starting in $i$, the process will go to state $j$ in $n + m$ transitions through a path which takes it into state $k$ at the $n$th transition. Hence, summing over all intermediate states $k$ yields the probability that the process will be in state $j$ after $n + m$ transitions. Formally, we have

$$P_{ij}^{n+m} = P\{X_{n+m} = j \mid X_0 = i\} = \sum_{k=0}^{\infty} P\{X_{n+m} = j, X_n = k \mid X_0 = i\}$$
$$= \sum_{k=0}^{\infty} P\{X_{n+m} = j \mid X_n = k, X_0 = i\} P\{X_n = k \mid X_0 = i\} = \sum_{k=0}^{\infty} P_{kj}^m P_{ik}^n$$

If we let $\mathbf{P}^{(n)}$ denote the matrix of $n$-step transition probabilities $P_{ij}^n$, then Equation (4.2) asserts that

$$\mathbf{P}^{(n+m)} = \mathbf{P}^{(n)} \cdot \mathbf{P}^{(m)}$$

where the dot represents matrix multiplication.* Hence, in particular,

$$\mathbf{P}^{(2)} = \mathbf{P}^{(1+1)} = \mathbf{P} \cdot \mathbf{P} = \mathbf{P}^2$$

and by induction,

$$\mathbf{P}^{(n)} = \mathbf{P}^{(n-1+1)} = \mathbf{P}^{n-1} \cdot \mathbf{P} = \mathbf{P}^n$$

That is, the $n$-step transition matrix may be obtained by multiplying the matrix $\mathbf{P}$ by itself $n$ times.
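As a quick numerical check of this identity, the following sketch verifies $\mathbf{P}^{(n+m)} = \mathbf{P}^{(n)} \cdot \mathbf{P}^{(m)}$ for the two-state weather chain of Example 4.1 (with the values $\alpha = 0.7$, $\beta = 0.4$ used below):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

# Chapman-Kolmogorov in matrix form: the (n+m)-step matrix equals
# the product of the n-step and m-step matrices.
n, m = 2, 3
lhs = np.linalg.matrix_power(P, n + m)
rhs = np.linalg.matrix_power(P, n) @ np.linalg.matrix_power(P, m)
assert np.allclose(lhs, rhs)
```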

Example 4.7 Consider Example 4.1 in which the weather is considered as a two-state Markov chain. If $\alpha = 0.7$ and $\beta = 0.4$, then calculate the probability that it will rain four days from today given that it is raining today.

Solution: The one-step transition probability matrix is given by

$$\mathbf{P} = \begin{pmatrix} 0.7 & 0.3 \\ 0.4 & 0.6 \end{pmatrix}$$

Hence,

$$\mathbf{P}^{(2)} = \mathbf{P}^2 = \begin{pmatrix} 0.61 & 0.39 \\ 0.52 & 0.48 \end{pmatrix}, \qquad \mathbf{P}^{(4)} = \left(\mathbf{P}^{(2)}\right)^2 = \begin{pmatrix} 0.5749 & 0.4251 \\ 0.5668 & 0.4332 \end{pmatrix}$$

and the desired probability $P_{00}^4$ equals 0.5749. ◆
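The same computation in code (a sketch using NumPy's matrix power):

```python
import numpy as np

P = np.array([[0.7, 0.3],
              [0.4, 0.6]])

P4 = np.linalg.matrix_power(P, 4)  # four-step transition matrix
print(P4[0, 0])  # P(rain four days from now | raining today) = 0.5749
```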


Example 4.8 Consider Example 4.4. Given that it rained on Monday and Tuesday, what is the probability that it will rain on Thursday?

* If $\mathbf{A}$ is an $N \times M$ matrix whose element in the $i$th row and $j$th column is $a_{ij}$ and $\mathbf{B}$ is an $M \times K$ matrix whose element in the $i$th row and $j$th column is $b_{ij}$, then $\mathbf{A} \cdot \mathbf{B}$ is defined to be the $N \times K$ matrix whose element in the $i$th row and $j$th column is $\sum_{k=1}^{M} a_{ik} b_{kj}$.

Solution: The two-step transition matrix is given by

$$\mathbf{P}^{(2)} = \mathbf{P}^2 = \begin{pmatrix} 0.49 & 0.12 & 0.21 & 0.18 \\ 0.35 & 0.20 & 0.15 & 0.30 \\ 0.20 & 0.12 & 0.20 & 0.48 \\ 0.10 & 0.16 & 0.10 & 0.64 \end{pmatrix}$$

Since rain on Thursday is equivalent to the process being in either state 0 or state 1 on Thursday, the desired probability is given by $P_{00}^2 + P_{01}^2 = 0.49 + 0.12 = 0.61$. ◆
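In code (a sketch reusing the four-state matrix from Example 4.4):

```python
import numpy as np

P = np.array([[0.7, 0.0, 0.3, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.4, 0.0, 0.6],
              [0.0, 0.2, 0.0, 0.8]])

P2 = np.linalg.matrix_power(P, 2)
# Rain on Thursday = state 0 or 1 on Thursday, starting from state 0
# (rain on both Monday and Tuesday) two days earlier.
print(P2[0, 0] + P2[0, 1])  # 0.61
```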
So far, all of the probabilities we have considered are conditional probabilities. For instance, $P_{ij}^n$ is the probability that the state at time $n$ is $j$ given that the initial state at time 0 is $i$. If the unconditional distribution of the state at time $n$ is desired, it is necessary to specify the probability distribution of the initial state. Let us denote this by

$$\alpha_i \equiv P\{X_0 = i\}, \qquad i \geq 0 \quad \left(\sum_{i=0}^{\infty} \alpha_i = 1\right)$$

All unconditional probabilities may be computed by conditioning on the initial state. That is,

$$P\{X_n = j\} = \sum_{i=0}^{\infty} P\{X_n = j \mid X_0 = i\} P\{X_0 = i\} = \sum_{i=0}^{\infty} P_{ij}^n \alpha_i$$

For instance, if $\alpha_0 = 0.4$, $\alpha_1 = 0.6$ in Example 4.7, then the (unconditional) probability that it will rain four days after we begin keeping weather records is

$$P\{X_4 = 0\} = 0.4 P_{00}^4 + 0.6 P_{10}^4 = (0.4)(0.5749) + (0.6)(0.5668) = 0.5700$$