
Markov Chain

1. Suppose that in New Zealand, home of the Gala apple, years for these wonderful apples can be
described as great, average, or poor. Suppose that following a great year the probabilities of a
great, average, or poor year are 0.5, 0.3 and 0.2, respectively. Suppose, also, that following an
average year the probabilities of great, average, or poor years are 0.2, 0.5 and 0.3, respectively.
Finally, suppose that following a poor year the probabilities of great, average and poor years are
0.2, 0.2 and 0.6, respectively. Assume we can describe the situation from year to year by a Markov
chain with states 0, 1, and 2 corresponding to great, average and poor years, respectively.

(a) Set up the transition probability matrix (TPM) P of the Markov chain.


(b) Suppose the initial probability for a great year is 0.2, for an average year is 0.5 and for a
poor year is 0.3. Calculate the probability distribution after one year and after 5 years.
 
Answer: (a) $P = \begin{pmatrix} 0.5 & 0.3 & 0.2 \\ 0.2 & 0.5 & 0.3 \\ 0.2 & 0.2 & 0.6 \end{pmatrix}$; (b) after one year: $(0.26, 0.37, 0.37)$; after 5 years: $(0.2855, 0.3266, 0.3879)$.
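A quick numerical check of part (b), as a sketch assuming NumPy is available:

    import numpy as np

    # rows/columns 0, 1, 2 = great, average, poor
    P = np.array([[0.5, 0.3, 0.2],
                  [0.2, 0.5, 0.3],
                  [0.2, 0.2, 0.6]])
    pi0 = np.array([0.2, 0.5, 0.3])   # initial distribution

    print(pi0 @ P)                              # after 1 year: [0.26 0.37 0.37]
    print(pi0 @ np.linalg.matrix_power(P, 5))   # after 5 years: ~[0.2855 0.3266 0.3879]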

2. Three white and three black balls are distributed in two urns in such a way that each contains
three balls. We say that the system is in state i, i = 0, 1, 2, 3, if the first urn contains i white
balls. At each step, we draw one ball from each urn and place the ball drawn from the first urn
into the second, and conversely with the ball from the second urn. Let Xn denote the state of
the system after the nth step. Explain why Xn , n = 0, 1, 2, ... is a Markov chain and calculate
its transition probability matrix.
Answer: $X_n$ is a Markov chain because the urn compositions after step $n+1$ are determined by the composition after step $n$ together with the random draw, independently of the earlier history. If urn 1 holds $i$ white balls, then $P_{i,i-1} = (i/3)^2$, $P_{i,i} = 2i(3-i)/9$ and $P_{i,i+1} = ((3-i)/3)^2$, so the nonzero entries are $P_{01} = 1$, $P_{10} = 1/9$, $P_{11} = 4/9$, $P_{12} = 4/9$, $P_{21} = 4/9$, $P_{22} = 4/9$, $P_{23} = 1/9$, $P_{32} = 1$.
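As a check, the matrix can be generated directly from these formulas; a minimal sketch using exact fractions:

    from fractions import Fraction

    # state i = number of white balls in urn 1; nonzero transitions:
    # P[i][i-1] = (i/3)^2, P[i][i] = 2i(3-i)/9, P[i][i+1] = ((3-i)/3)^2
    P = [[Fraction(0)] * 4 for _ in range(4)]
    for i in range(4):
        if i > 0:
            P[i][i - 1] = Fraction(i, 3) ** 2
        P[i][i] = Fraction(2 * i * (3 - i), 9)
        if i < 3:
            P[i][i + 1] = Fraction(3 - i, 3) ** 2

    for row in P:
        print([str(x) for x in row])   # each row sums to 1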
3. Suppose that whether or not it rains today depends on previous weather conditions through the
last three days. Show how this system may be analyzed by using a Markov chain. How many
states are needed?
Answer: Eight states are needed: {(RRR), (RRD), (RDR), (RDD), (DRR), (DRD), (DDR), (DDD)},
where D = dry, R = rain, and each triple records the weather two days ago, yesterday, and today.
For instance, (DDR) means that it is raining today, was dry yesterday, and was dry the day
before yesterday.
4. In the above exercise, suppose that if it has rained for the past three days, then it will rain today
with probability 0.8; if it did not rain on any of the past three days, then it will rain today with
probability 0.2; and in any other case the weather today will, with probability 0.6, be the same
as the weather yesterday. Determine the TPM for this Markov chain.
Answer: With the states ordered as in Exercise 3,
$$P = \begin{pmatrix}
0.8 & 0.2 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0.4 & 0.6 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0.6 & 0.4 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0.4 & 0.6 \\
0.6 & 0.4 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0.4 & 0.6 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0.6 & 0.4 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0.2 & 0.8
\end{pmatrix}$$
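The matrix can also be built mechanically from the verbal rule, which makes the 8 x 8 bookkeeping less error-prone; a sketch assuming the state ordering of Exercise 3 (each triple read as two days ago, yesterday, today):

    from itertools import product

    states = list(product("RD", repeat=3))   # (R,R,R), (R,R,D), ..., (D,D,D)
    idx = {s: k for k, s in enumerate(states)}

    P = [[0.0] * 8 for _ in range(8)]
    for a, b, c in states:
        if (a, b, c) == ("R", "R", "R"):
            p_rain = 0.8                        # rained all of the past three days
        elif (a, b, c) == ("D", "D", "D"):
            p_rain = 0.2                        # dry all of the past three days
        else:
            p_rain = 0.6 if c == "R" else 0.4   # same as today with probability 0.6
        i = idx[(a, b, c)]
        P[i][idx[(b, c, "R")]] = p_rain         # tomorrow's state shifts the window
        P[i][idx[(b, c, "D")]] = 1 - p_rain

    for row in P:
        print(row)   # reproduces the matrix above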
5. Consider the Markov chain with states 0, 1, 2, 3 and TPM
$$P = \begin{pmatrix} 0 & 0 & 1/2 & 1/2 \\ 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}$$
Determine which states are transient and which are recurrent.


Answer: All states are recurrent: every state communicates with every other (e.g. $0 \to 2 \to 1 \to 0$ and $0 \to 3 \to 1 \to 0$), so the chain has a single class, which must be recurrent since the chain is finite.
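For a finite chain, a state is recurrent exactly when its communicating class is closed, which is easy to check numerically; a rough sketch (the helper classify is ours, not from the text):

    import numpy as np

    def classify(P):
        """Communicating classes of a finite chain; a class is recurrent iff it is closed."""
        P = np.asarray(P, dtype=float)
        n = len(P)
        # (I + P)^n has a positive (i, j) entry iff state j is reachable from state i
        reach = np.linalg.matrix_power(np.eye(n) + P, n) > 0
        classes = {frozenset(np.flatnonzero(reach[i] & reach[:, i])) for i in range(n)}
        return [(sorted(int(i) for i in c),
                 "recurrent" if all(set(np.flatnonzero(P[i] > 0)) <= c for i in c)
                 else "transient")
                for c in classes]

    P5 = [[0, 0, 0.5, 0.5],
          [1, 0, 0, 0],
          [0, 1, 0, 0],
          [0, 1, 0, 0]]
    print(classify(P5))   # single class [0, 1, 2, 3], recurrent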
6. Consider the Markov chain with states 0, 1, 2, 3, 4 and TPM
$$P = \begin{pmatrix} 3/4 & 1/4 & 0 & 0 & 0 \\ 3/4 & 1/4 & 0 & 0 & 0 \\ 0 & 0 & 3/4 & 1/4 & 0 \\ 0 & 0 & 3/4 & 1/4 & 0 \\ 1/4 & 1/4 & 0 & 0 & 1/2 \end{pmatrix}$$

Determine the classes of this chain and whether each is transient or recurrent.
Answer: The classes are {0, 1}, {2, 3} and {4}. The first two are closed, hence recurrent; {4} is transient, since from state 4 the chain eventually jumps to {0, 1} and never returns.
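Reusing the classify helper sketched under Exercise 5:

    P6 = [[3/4, 1/4, 0, 0, 0],
          [3/4, 1/4, 0, 0, 0],
          [0, 0, 3/4, 1/4, 0],
          [0, 0, 3/4, 1/4, 0],
          [1/4, 1/4, 0, 0, 1/2]]
    print(classify(P6))   # [0, 1] recurrent, [2, 3] recurrent, [4] transient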
7. Suppose that coin 1 has probability 0.7 of coming up heads, and coin 2 has probability 0.6 of
coming up heads. If the coin flipped today comes up heads, then we select coin 1 to flip tomorrow,
and if it comes up tails, then we select coin 2 to flip tomorrow. If the coin initially flipped is
equally likely to be coin 1 or coin 2, then what is the probability that the coin flipped on the third
day after the initial flip is coin 1? Suppose that the coin flipped on Monday comes up heads.
What is the probability that the coin flipped on Friday of the same week also comes up heads?
Answer: Let the state be the coin flipped today (state 0 = coin 1); equivalently, for the second question, whether today's flip is heads. Either way $P = \begin{pmatrix} 0.7 & 0.3 \\ 0.6 & 0.4 \end{pmatrix}$. The probability that coin 1 is flipped on the third day is the first entry of $(0.5, 0.5)P^3 \approx 0.6665$, and $P(\text{heads on Friday} \mid \text{heads on Monday}) = P^4_{00} = 0.6667$.
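A quick numerical check of both answers (a sketch assuming NumPy; state 0 means coin 1, or heads, as above):

    import numpy as np

    P = np.array([[0.7, 0.3],
                  [0.6, 0.4]])

    # coin flipped on the third day after an equally likely initial coin
    print((np.array([0.5, 0.5]) @ np.linalg.matrix_power(P, 3))[0])   # ~0.6665

    # heads on Monday -> four transitions to Friday
    print(np.linalg.matrix_power(P, 4)[0, 0])                         # ~0.6667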

8. An organization has N employees where N is a large number. Each employee has one of three
possible job classifications and changes classifications (independently) according to a Markov
chain with transition probabilities
 
$$P = \begin{pmatrix} 0.7 & 0.2 & 0.1 \\ 0.2 & 0.6 & 0.2 \\ 0.1 & 0.4 & 0.5 \end{pmatrix}$$
What percentage of employees are in each classification?
Answer: The stationary distribution is $(\pi_1, \pi_2, \pi_3) = (6/17, 7/17, 4/17)$. Hence, if N is large, it
follows from the law of large numbers that approximately 6, 7, and 4 of every 17 employees are in
classifications 1, 2, and 3, respectively.
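The stationary vector can be verified by solving $\pi P = \pi$ together with $\sum_i \pi_i = 1$; a sketch assuming NumPy:

    import numpy as np

    P = np.array([[0.7, 0.2, 0.1],
                  [0.2, 0.6, 0.2],
                  [0.1, 0.4, 0.5]])

    # stack the balance equations (P^T - I) pi = 0 with the normalization row
    A = np.vstack([P.T - np.eye(3), np.ones(3)])
    b = np.array([0.0, 0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(pi)   # ~[0.3529 0.4118 0.2353] = (6/17, 7/17, 4/17)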
9. Every time that the team wins a game, it wins its next game with probability 0.8; every time it
loses a game, it wins its next game with probability 0.3. If the team wins a game, then it has
dinner together with probability 0.7, whereas if the team loses then it has dinner together with
probability 0.2. What proportion of games result in a team dinner?
Answer: The long-run proportion of wins solves $\pi_W = 0.8\,\pi_W + 0.3\,(1 - \pi_W)$, giving $\pi_W = 0.6$. Hence the proportion of games followed by a team dinner is $0.6 \times 0.7 + 0.4 \times 0.2 = 0.5$: fifty percent of the time the team has dinner.
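A short check (sketch, NumPy assumed):

    import numpy as np

    P = np.array([[0.8, 0.2],    # state 0 = win, state 1 = loss
                  [0.3, 0.7]])
    pi = np.array([0.6, 0.4])    # stationary: solves pi P = pi
    assert np.allclose(pi @ P, pi)

    print(pi @ np.array([0.7, 0.2]))   # long-run dinner proportion: 0.5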
10. A Markov chain $X_n$, $n \ge 0$, with states 0, 1, 2, has the transition probability matrix
$$P = \begin{pmatrix} 1/2 & 1/3 & 1/6 \\ 0 & 1/3 & 2/3 \\ 1/2 & 0 & 1/2 \end{pmatrix}$$

If $P(X_0 = 0) = P(X_0 = 1) = 1/4$, find $E[X_3]$.


Answer (hint): Cubing the transition probability matrix, we obtain $P^3$. Since $P(X_0 = 2) = 1/2$,
$$E[X_3] = P(X_3 = 1) + 2\,P(X_3 = 2) = \tfrac{1}{4}P^3_{01} + \tfrac{1}{4}P^3_{11} + \tfrac{1}{2}P^3_{21} + 2\left(\tfrac{1}{4}P^3_{02} + \tfrac{1}{4}P^3_{12} + \tfrac{1}{2}P^3_{22}\right).$$
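Numerically (a sketch assuming NumPy):

    import numpy as np

    P = np.array([[1/2, 1/3, 1/6],
                  [0,   1/3, 2/3],
                  [1/2, 0,   1/2]])
    pi0 = np.array([1/4, 1/4, 1/2])            # P(X0 = 2) = 1 - 1/4 - 1/4
    pi3 = pi0 @ np.linalg.matrix_power(P, 3)   # distribution of X3

    print(pi3 @ np.array([0, 1, 2]))           # E[X3] = 53/54 ~ 0.9815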
